My Angular course: https://angularstart.com/ ChatGPT has unlocked opportunities for me that I otherwise might not have had the …
I have a newsletter, you can join it if you like: https://mobirony.ck.page/4a331b9076
It can help you get unstuck when you're missing knowledge
Almost like a…
… Search…
… Engine…
Which is all I've seen so far: a very clever-ish search engine
Exactly, it's a tool to assist, not something to rely upon completely. I like to use it to comment my code, which I find tedious. Drop code in, ask politely for comments, then tweak whatever needs changing.
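For illustration, here's a minimal sketch of that workflow: the function is invented for the example, and the comments are roughly the kind of pass ChatGPT produces before I tweak them.

    // Computes the total price of a cart, applying a percentage discount.
    // Returns 0 for an empty cart rather than throwing.
    function cartTotal(items: { price: number; qty: number }[], discountPct = 0): number {
      // Sum each line item (unit price times quantity).
      const subtotal = items.reduce((sum, item) => sum + item.price * item.qty, 0);
      // Apply the discount as a fraction of the subtotal.
      return subtotal * (1 - discountPct / 100);
    }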
I don't use AI much for coding. I don't really see the point.
Like, I understand that copilot can help with boilerplatey code, but either it's a short thing I can quickly do myself, a repetitive thing I can quickly do with a macro, or something more complicated that I don't want to blindly accept from AI.
So… I just don't bother with it.
I do send the occasional message to ChatGPT, but it's like once every other month or so. Mostly because I found my own research to be faster than trying to get it to understand what I want from it. Of course, once that research takes too long, or I don't know where to start, I might pop it a question.
It’s easy to fall into the trap with ChatGPT (I like to call him Gary). Especially when frustration or tiredness sets in, relying on ChatGPT to just make it work and move on is enticing. However, after a few copy-pastes and new errors that you didn’t have before, you can get really lost. At these times I have to take a break or sleep before coming back to the problem and steering the ship.
However, steering the ship requires quality prompts. Giving ChatGPT context will enable it to give better answers.
The dream is when you can systematise the context, time constraints, and scope of the task.
If you tell it that “we are fixing a bug in a component/function that is already depended upon and we need something specific”, then it focuses in on the task.
Whereas if you are lazy with the prompt it will quite rightly give you more of a generic answer.
What's your solution when the chatbot starts hallucinating again?
Knowledge base is an apt description of what I use ChatGPT for. As a game/digi twin developer, using Unreal Engine requires learning new parts of the engine every day, all of which is not possible to squeeze into my little, squishy noggin.
Questions would range from project-specific, real-world domain knowledge, to how to implement something small in C++, to how to use or extend UE's API.
Most of the work is still done by me, but getting some concise info on a topic and validating it is a lot faster than gathering that concise info myself.
If it's a more complex and/or niche 3D tech topic, I usually browse through a couple of articles, tutorials, and talks (SIGGRAPH, GDC, and such) on the topic, then ask ChatGPT a specific question about how X would work in UE, or how to make Y with Z in mind in UE.
I also only use it to fill gaps in my knowledge, or to rewrite something in a different language, say if I need to use a specific class from C# in TS.
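As a rough illustration of that kind of port (both snippets are invented for the example):

    // Hypothetical C# original:
    //   public class Point {
    //       public double X { get; set; }
    //       public double Y { get; set; }
    //       public double DistanceTo(Point other) =>
    //           Math.Sqrt(Math.Pow(other.X - X, 2) + Math.Pow(other.Y - Y, 2));
    //   }
    // A straightforward TypeScript equivalent:
    class Point {
      constructor(public x: number, public y: number) {}

      // Math.hypot does the sqrt-of-sum-of-squares in one call.
      distanceTo(other: Point): number {
        return Math.hypot(other.x - this.x, other.y - this.y);
      }
    }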
I find the AI solutions are hit and miss: sometimes it works, sometimes you spend way more time trying to explain the problem in different ways to get a workable solution.
I have been trying Codeium lately, and it is OK. Sometimes it suggests valid work, pulled from parts of the current project; other times it just makes up its own stuff that is completely wrong.
Especially in highly customised bespoke work for clients. If you are going to use AI, ALWAYS triple check what it is suggesting.
I think that AI that's "too" integrated can lead you to quickly and easily implement solutions when, without it, you would have had the time to question whether the design was good to begin with. Struggling with a problem can help produce better solutions.
"Until we get to the point where AI can do all of the coding"
I'm not looking forward to the point where AI can rewrite itself.
It's OK at algebra and at simplifying calculus down to algebra, but it avoids calculus like the plague
I've been using ChatGPT to help build an RPG engine. It was very helpful setting everything up in the beginning, but as I go I am coding more and more of it myself. I still use it to help write some scripts with smaller knowledge gaps. But generally, if I let it get too far, it starts giving me some weird nonsense. Got me back into coding my own game, so I am thankful for that.
I had multiple arguments with it about VBA's capability to create custom Excel functions with the appropriate documentation.
I was right, but it took a lot of coaching to get Chat to assist.
This is exactly the problem with TypeScript!! You always get stuck on something meaningless!!!
Solution: call it a language model instead of AI.
LLMs are the new "coding duck". The best way to use the tool is to break a question down into the smallest fragments you can think of, which in turn helps you solve the problem at a fundamental level
Excellent video, exactly points out what I experience the whole time while developing^^
What helps me personally is to refresh conversations often, use the classic GPT4 model via the API, and provide the model with as much context about the coding issue as possible
I will disagree about GitHub Copilot taking the wheel. I treat it more as a fancy IntelliSense than an intelligent programmer. It might provide bad solutions to problems, but it's something to aid you, not to trust blindly
You can go straight to the solution without getting a full grasp of the context with or without ChatGPT; it's not the tool's fault. One of the cool things about ChatGPT is that it can be an amazing learning tool, even more effective than as a programming tool. It will give you faulty code all the time, but it explains what's under the hood for everything, even bad code.
I read somewhere that ChatGPT is made to produce seemingly real messages, meaning it actually has no idea wtf is going on. But in practice, trying to make believable messages usually ties to actual, correct information. But what if ChatGPT hasn't been fed data related to your needs? Well, it will do what it's made to do: make believable messages, which often completely gaslight you, because it does what it does best… make believable messages.
A few weeks ago I used GPT to build a comprehensive automation and reporting suite for a marketing campaign, using wildly complex equations, Apps Script, and API calls inside of Google Sheets
I am a marketing manager. I've never coded a thing in my life
In this case it was less like filling knowledge gaps and more like giving me something I never would have had since this execution is outside of my core.
I will say it was interesting that, after days of seeing the scripts, I could spot if something wasn't going to work without even implementing it
I also found it helpful to have two GPTs open at once; after getting code from one, I'd ask the other GPT to analyze it and explain what that code would do and how it worked
That enabled me to catch wrong code or scripts without even deploying
Totally agree. I remember seeing a similar opinion about copy-pasting from Stack Overflow. The problem is not AI, but the confidence we give to its answers, as well as the intellectual laziness we have as humans, especially when the pressure of a deadline is present.
So, AI for high-pressure domains? That is where I started listening to the doomers.
well thought out and presented. subscribed.
I think your suspicion that ChatGPT is good for getting unstuck and overcoming small, fairly specific knowledge gaps applies to most uses of ChatGPT. The mistake is to think AI can do it for you.
ChatGPT is useful because a lot of the problems you’re having as a programmer have most likely been solved before. If it’s been solved before, GPT can most likely find an acceptable solution or two.
Of course, GPT can also surface wrong answers or answers that aren’t quite there, but that’s for your brain to figure out.
As someone with a CS degree who's been in IT since the 80s, I can tell you this is spot on. I've seen this same scenario play out with my own uses, where I've watched them "answer" (hallucinate, but with great confidence) what should be relatively simple and generic questions, in ways that turn out to be complete BS – completely inaccurate, made-up sources, inefficient, irrelevant, etc. Often, they then apologize and offer something even worse. I would suggest that your observations are also applicable to other areas (outside coding) where GPTs are being used. They can be very useful and time-saving TOOLS, but they still need oversight by humans with enough knowledge of their own to know if they are producing accurate results.
I find ChatGPT is most helpful at the beginning of projects when I am searching for useful libraries. I often use R, and the CRAN repository now has 20,000+ packages. So searching them to find ones that contribute to my workflow can be very time consuming. The lists of reasons I elicit from Chat for its choices are often helpful. The code examples are sometimes useful and sometimes not. Chat simply does not know where to take the project next. Still, I consider it to be a good addition to my software workbench.
Your point on Chat being very helpful for narrow knowledge gaps and more dangerous to employ to bridge large knowledge gaps is very well taken. Thanks for the post!
Copilot autocomplete from JSDoc and TS is like regular autocorrect, Copilot is like Grammarly for code, and ChatGPT is more like a last resort when the docs, Stack Overflow, and forums don't have an answer. Usually when I get to that point, I realize I should probably break the problem down. The bad part is that when I run into that sort of scenario in a personal project, I'm much more inclined to change the scope than to improve my implementation.
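To make the autocorrect comparison concrete, this is the kind of annotated code where those inline suggestions are reliable, because the types already pin down what can come next (the names are invented for the example):

    interface User {
      id: number;
      name: string;
      email: string;
    }

    /** Returns a display label for a user, e.g. "Ada <ada@example.com>". */
    function label(user: User): string {
      // With the interface above, completions after `user.` are constrained
      // to id / name / email, so suggestions are rarely wrong here.
      return `${user.name} <${user.email}>`;
    }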
Any dev who feels threatened by current ChatGPT is either VERY new or needs to find a new career yesterday. I think I work with some of them.
Damn, TypeScript can be an ugly monster. It seems like you're spending more time battling TypeScript than actually making the product. I love types, but I haven't seen this kind of gymnastics in statically typed languages.
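For anyone who hasn't run into it, the gymnastics look something like this: a contrived example of the mapped and conditional types that most statically typed languages have no equivalent for.

    // A mapped type that makes every property optional and nullable...
    type Loose<T> = { [K in keyof T]?: T[K] | null };

    // ...and a conditional type that recursively unwraps promises.
    type Unwrap<T> = T extends Promise<infer U> ? Unwrap<U> : T;

    type A = Loose<{ id: number; name: string }>; // { id?: number | null; name?: string | null }
    type B = Unwrap<Promise<Promise<string>>>;    // string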
Very cool video. This is something that became apparent to me quite quickly and, in its defense, I was testing GPT4 on its ability to be creative in a complex topic in which I am particularly at ease. I was a bit suspicious at first, but I 100% agree about its usefulness for the little gaps; for the larger gaps, however, humility from the human wielding it is needed. Because in the end, it's just a machine printing the most probable words as a response to yours. It helped me as much as it didn't, I should say. Since then, I see these AIs as promising "minds", but ones which still have much to learn. And this applies to us all. Since AI is (among other things) learning from our content and behavior, shouldn't we feel a bit responsible to set a good example?