I haven’t been in development for nearly 20 years now, but I assumed it worked like this:
You generate unit tests for a very specific function of rather limited scope, then you let the AI generate the function. How could it work otherwise?
Bonus points if you let the AI divide your overall problem into smaller problems of manageable scope. That wouldn’t involve code generation as such…
Am I wrong with this approach?
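For what it’s worth, that workflow might look something like this in Rust. This is an invented illustration (the `slugify` function and its tests are made up): the tests come first and act as the spec, and the function body is the part you’d hand off to the AI to fill in.

```rust
/// Turn a title into a URL slug: lowercase, non-alphanumerics
/// become hyphens, runs of hyphens collapse, edges are trimmed.
/// In the test-first workflow, this body is what the AI generates
/// against the tests below.
fn slugify(title: &str) -> String {
    title
        .to_lowercase()
        .chars()
        .map(|c| if c.is_ascii_alphanumeric() { c } else { '-' })
        .collect::<String>()
        .split('-')
        .filter(|s| !s.is_empty())
        .collect::<Vec<_>>()
        .join("-")
}

#[cfg(test)]
mod tests {
    use super::*;

    // The spec, written by the human before any implementation exists.
    #[test]
    fn lowercases_and_hyphenates() {
        assert_eq!(slugify("Hello World"), "hello-world");
    }

    #[test]
    fn collapses_punctuation_and_edges() {
        assert_eq!(slugify("Rust: 2024 Edition!"), "rust-2024-edition");
        assert_eq!(slugify("  --multiple   spaces-- "), "multiple-spaces");
    }
}
```

Whether the tests are comprehensive enough to pin down the behaviour you actually want is, of course, the hard part.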
The complexity here lies in having to craft a comprehensive enough spec. Correctness is one aspect, but another is performance. If the AI craps out code that passes your tests but does it in a really inefficient way, then it’s still a problem.
Also worth noting that you don’t actually need AI to do such things. For example, Barliman is a tool that can do program synthesis. Given a set of tests to pass, it attempts to complete the program for you. Synthesis is performed using logic programming. Not only is it capable of generating code, it can also reuse code it’s already come up with as a basis for solving bigger problems.
https://github.com/webyrd/Barliman
Here’s a talk about how it works: https://www.youtube.com/watch?v=er_lLvkklsk
At that point you should be able to just write the code yourself.
The A"I" will either make mistakes even under defined bounds, or it will never make any mistakes, in which case it’s not an autocomplete, it’s a compiler, and we’ve just gone full circle.
Don’t copilot anything longer than a function of about 15 lines. That way you can quickly see if it made mistakes. Ensure it works, move on to the next.
And only do that for boring, repetitive work. The tough challenges and critical parts you’re (for now) better off solving yourself.
Absolutely, I think the people who say it’s completely useless for code are in denial.
It’s definitely not replacing anyone, but my god, it has sped up development by generating code I already know how to write 90% of.
No more having to look up “what was the for loop syntax in this language again?”
“Copilot is really good at things which I already know,” and that is perfectly fine.
Exactly.
It’s to speed up boilerplate and save you having to look up function names or language specific syntax for that one feature you want to use, not to entirely do your job for you
I tried to learn coding using ChatGPT. I wanted to make my own game engine for a phone game. Ended up looking up tutorials instead.
AI is great for finding small flaws or reciting documentation in a more succinct way. But writing new code and functions? That’s a fool’s errand, hoping it works out.
I use it for writing functions and snippets all the time, at least in Python and Rust. As long as you describe what you want it to do properly, it works great.
An example I used recently: “Please generate me a Rust function that will take a u32 user id and return a unique RGB colour.”
It generated the function, I plugged it in, and it worked perfectly the first time.
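The actual generated code wasn’t shared, so here’s my own sketch of what such a function might look like. It spreads hues around the colour wheel using the golden-ratio trick (so the colours are well separated rather than strictly unique); the function names are my own invention.

```rust
/// Map a u32 user id to an (r, g, b) triple.
/// Hypothetical reconstruction, not the commenter's actual code.
fn user_id_to_rgb(user_id: u32) -> (u8, u8, u8) {
    // Multiply by the golden ratio conjugate and keep the fractional
    // part: consecutive ids land far apart on the hue circle.
    let hue = (user_id as f64 * 0.618_033_988_749_895).fract();
    hsv_to_rgb(hue, 0.65, 0.95)
}

/// Standard HSV -> RGB conversion (h in [0, 1), s and v in [0, 1]).
fn hsv_to_rgb(h: f64, s: f64, v: f64) -> (u8, u8, u8) {
    let i = (h * 6.0).floor();
    let f = h * 6.0 - i;
    let p = v * (1.0 - s);
    let q = v * (1.0 - f * s);
    let t = v * (1.0 - (1.0 - f) * s);
    let (r, g, b) = match (i as i32) % 6 {
        0 => (v, t, p),
        1 => (q, v, p),
        2 => (p, v, t),
        3 => (p, q, v),
        4 => (t, p, q),
        _ => (v, p, q),
    };
    ((r * 255.0) as u8, (g * 255.0) as u8, (b * 255.0) as u8)
}
```

This is exactly the kind of small, well-specified, easily verified function these tools tend to get right in one shot.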
To be honest, yes. That is the sort of thing that sounds great. I have a little project I’m about to start, so I’ll take a look.
Sloppy joes is the new spaghetti code
Eh, if it ain’t right, I just bounce it back to ChatGPT to fix. With enough guidance and oversight it will get there.
Hi ChatGPT, write code with no memory or logic errors to perform <thing you want to do>.
I’m not sure how to talk to ChatGPT, I’m assuming like Siri.
LLMs are statistical word association machines (or token association, more accurately). So if you tell it not to make mistakes, it’ll likely weight the output towards having validation, checks, etc. It might still produce silly output claiming no mistakes were made despite having bugs or logic errors. But LLMs are just a tool! So use them for what they’re good at and can actually do, not what they themselves claim they can do lol.
I’ve found it behaves like a stubborn toddler.
If you tell it not to do something, it will do it more; you need to give it positive instructions, not negative ones.
Just explain what you are trying to do like you would with a person, and it will give you the code. It probably won’t quite be what you want, so refine your request or post a screenshot of any errors you get and try again.
If you are working with a language that you aren’t familiar with, it is very helpful. Just remember that it has been trained on a huge amount of code from the internet, so it knows a lot of patterns.
It is a great tool, with limitations, of course. As the developer, you have to know how to apply it to get the most from it. You can see from the downvotes that there’s a lot of negativity towards LLMs, but I’ve had positive experiences using ChatGPT.