Does this mean “AI was used as a fancy autocomplete”? Because that’s my number one use case for AI like Copilot, and if that’s the case, over 25% of my code is written by AI. But let me tell you, it still gets it wrong, repeatedly making the same syntax errors no matter how many times I correct it. It starts to get it right, then later reverts to the same mistakes, even making up variable names that violate widely known public APIs.
Agreed. It’s really shit for new code, but if I’m writing glue code or repetitive stuff it saves a lot of typing time.
If they’re counting all the auto-completed code that’s inserted after pressing Tab on an AI suggestion (such as from Copilot), then I easily believe it.
Tons of places in code have only one possible thing that can go on a particular line, given the context, and there’s no point in typing it all out manually.
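For instance (a throwaway sketch, Python just for illustration, nothing to do with Google’s code):

```python
# After typing this constructor signature, the body is all but determined by
# context; an autocomplete will almost always suggest exactly these lines.
class Point:
    def __init__(self, x: float, y: float) -> None:
        self.x = x  # the only sensible completion here
        self.y = y  # likewise
```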
Ah, Elon must love all this extra code being written. Of course it’s super inefficient, but look at all those lines, sooo much code.
Are the lines salient though?
I really don’t believe the headline. Google has thousands of teams of engineers that are writing code for dozens or hundreds of different products… There’s no way all of them are generating anywhere near 25% of their new code via AI.
Unless they’re doing something like generating massive test fixtures or training data sets using AI and classifying them as “code” 🤔
> I really don’t believe the headline.
The “The company had a strong quarter thanks in large part to AI” part is what makes it sound strange to me; it sounds like shareholder ego-stroking.
That said, all they need to do is mandate use of AI during development, like my company’s done, and they can boast this kind of bullshit easily.
> That said, all they need to do is mandate use of AI during development
Wtf does that mean? Like what if you know exactly what you want to do? Do you have to ask GPT to review your code?
Where I work they had us use AI with the IDEs.
I’d say about 20% of the time what it suggests is actually usable.
That’s autocomplete on steroids for you.
I wonder if “code” means pull requests, and they have a load of automated ones to update versions of external and internal libraries.
Given the size of lockfiles this would not surprise me, but who the hell counts lockfiles as code? They’re barely configs :/
How often does a solution need “new” code, and not “basically the same code as a previous issue but with two small details changed”? This is a genuine question; I have only ever coded as a hobby. But 25% of your work being essentially copy-pasted sounds plausible, and that’s sorta all LLMs are doing, right?
Reusable code is usually pulled out into a library and reused that way, rather than copied and pasted into a new project. You might copy and paste some boilerplate to new projects but it wouldn’t be anywhere near 25% of the code.
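To make that concrete (a toy sketch; `utils/retry.py` and the `retry` helper are made-up names, just to show “pulled into a library” versus copy-paste):

```python
# utils/retry.py -- a hypothetical shared helper; the point is that it lives in
# one library module instead of being copy-pasted into every project needing it.
import time

def retry(fn, attempts: int = 3, delay: float = 0.5):
    """Call fn, retrying on any exception up to `attempts` times."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)

# Each project then imports it rather than re-typing it:
#   from utils.retry import retry
#   data = retry(lambda: fetch_report())
```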
I’m not sure why someone downvoted you (it wasn’t me!) because your comment did seem like a genuine question.
I read an article saying exactly the opposite a few days ago:
https://www.cio.com/article/3540579/devs-gaining-little-if-anything-from-ai-coding-assistants.html
Now they can fill new holes in the Google graveyard at twice the speed!
NICE TRY, AI!
Not disappointed by The Verge: the first paragraph paraphrases the title with no source, and the rest is just off topic.
The source for the first paragraph: https://blog.google/inside-google/message-ceo/alphabet-earnings-q3-2024/
Ah, indeed:
> Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster.
Sounds like BS to me; it comes across as marketing talk to promote their AI offerings.
Makes sense considering how shitty Google products have become.