why everyone's code is suddenly written by ai
The dirty open secret behind why so many companies are trusting so much of their code to chatbots.
According to tech news and social media posts, anywhere between a third and half of all code powering your favorite apps is being written by AI. Google claims that 75% of its new features are being generated by a large language model. Microsoft says that 30% of its entire codebase is now AI. Meta is also targeting a 75% LLM-generated code rate by the end of this year. Clearly, the early hiccups are over and the tools are so powerful that all those coders now have to eat crow and supervise their soon-to-be-omnipotent robot overlords like good little minions, right?
You'd think so, but at the same time, executives keep reporting that they're actually not seeing revenue or productivity gains from AI, and that they're having trouble getting employees to use it often enough to make the numbers in their financial models add up. Meanwhile, they refuse to accept that frontline workers, who aren't mostly looking at data summaries and dashboards, aren't finding as much practical use for it.
They also dismiss the employees who warn that of all the things they'd love to automate with AI, half the time they simply can't, and that they're ending up with even more work because the assumption that chatbots make everyone instantly effective and productive means they're handed twice the workload. The idea that the chatbots are not as helpful as expected is quickly rejected because there's no way, right? AI is the future and everyone else is seeing amazing results! Aren't they?
So, if everyone is having similar problems, how are adoption numbers through the roof? Well, the answer is right in the Meta release about its AI code generation targets. Instead of simply watching adoption tick upward until the tools' output takes off on its own, Meta is mandating that 65% of its engineers generate 75% of new code with AI, or end up on a PIP if not outright fired. Same deal with Microsoft and Google. You have to use the LLMs at least a certain percentage of the time if you want to keep your job.
Tech giants selling subscriptions to LLMs want the silent implication to be that the tools are finally so amazing, they're just churning out terrific code. But the reality of tech work today is that code assistants are hit or miss because that's just how the underlying math works, yet every feature you add and every bug you fix must involve the AI or you'll get an earful. Again, this is not some insider secret: companies are on the record warning employees to use these tools, or else, because they need to show results for all those sunk costs.
from rubber duck to abused goose
It's very telling that they have to stuff these assistants down our throats as if we're abused geese on a foie gras farm, because programmers are lazy people. If there were a tool that could really automate a significant percentage of our busywork, and do it reliably and well, you'd be risking serious bodily harm asking us to put it down. In fact, I wrote dozens of tools to handle code generation for myself long before a beta of Copilot was anything more than a dream at Microsoft.
You want me to write thousands of lines of SQL to turn a regulatory form into an automatic online application? Ha! Nope. Not happening. I'm creating an XML version of the paper questionnaire and writing a tool to seamlessly generate the insert-or-update script so any modification takes a few hours instead of the four weeks it used to. (Yes, that was an actual project, done on a bet, which I won by building this tool in two days of my allotted week.)
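For the curious, here's roughly the shape of that kind of generator. This is a minimal Python sketch under assumed details: the XML layout, the table name, and the MERGE dialect are all hypothetical stand-ins, since the original tool's format obviously isn't public.

```python
# A minimal sketch of the generator described above, assuming a
# hypothetical XML layout (<question id label type>) and table schema.
import xml.etree.ElementTree as ET

FORM_XML = """
<questionnaire table="reg_form_fields">
  <question id="q1" label="Legal business name" type="text"/>
  <question id="q2" label="Annual revenue" type="number"/>
</questionnaire>
"""

def upsert_script(xml_source: str) -> str:
    """Turn the XML description of a form into an insert-or-update script."""
    root = ET.fromstring(xml_source)
    table = root.get("table")
    statements = []
    for q in root.iter("question"):
        qid, label, qtype = q.get("id"), q.get("label"), q.get("type")
        statements.append(
            f"MERGE INTO {table} t USING (SELECT '{qid}' AS id) s ON (t.id = s.id)\n"
            f"  WHEN MATCHED THEN UPDATE SET t.label = '{label}', t.type = '{qtype}'\n"
            f"  WHEN NOT MATCHED THEN INSERT (id, label, type)\n"
            f"    VALUES ('{qid}', '{label}', '{qtype}');"
        )
    return "\n".join(statements)

if __name__ == "__main__":
    print(upsert_script(FORM_XML))
```

The point isn't the specifics; it's that once the form lives in a machine-readable format, regenerating the entire script after a regulatory change is one command, not a month of retyping SQL.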
People like me, who specialize in frameworks, metaprogramming, and complex, event-driven, distributed workflows, would love to have AI assistants, but sadly, we find that it's in the finer details of our work that their results are inconsistent. Which makes sense, because they're probabilistic machines, and telling them "just do what you did the same way you did it last time" only goes so far because... again, math.
In one attempt, they'll generate some fantastic code that does a nasty bit of eldritch mathematics in ten lines. In another, they barf out a thousand lines doomed to fail because the initial condition will never actually be met, yet all the automated tests pass because the code returns a valid value.
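To make that second failure mode concrete, here's a contrived Python illustration of the pattern, not actual LLM output: the guard condition can never be true, so the feature silently does nothing, yet the function always returns a plausible value and the shallow tests stay green.

```python
def apply_bulk_discount(prices: list[float]) -> float:
    """Sum an order; orders of five or more items were meant to get 10% off."""
    total = sum(prices)
    # The guard is wrong: len(prices) can never be negative, so the
    # discount branch is unreachable and every order is charged full price.
    if len(prices) < 0:
        total *= 0.9
    return total

# Shallow tests like these stay green: they only check that the function
# returns a sensible-looking total, never that the discount path can fire.
assert apply_bulk_discount([10.0, 20.0]) == 30.0
assert apply_bulk_discount([1.0] * 6) == 6.0  # should be 5.4 with the discount
print("all tests passed")
```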
This is what the codebases behind your favorite tools are being filled with, which is why Amazon's data centers are glitching out and 6.3 million orders got lost, new startups find their entire production environments nuked on a whim, and AI bug reports are completely overwhelming coders, who now have to separate real issues from hallucinated ones. The fixing, by the way, is also supposed to be done with AI, which may solve the problem or create ten new ones. Though, to be fair, you see the same problem with humans as well.
where do we go from here?
Okay, fine, some people say: if the code is generated by an AI, will be maintained by an AI, and humans will just test the finished product, does it matter whether the code itself is bad as far as a human is concerned? Well, yes, it kind of does. Piling layers of abstraction onto code significantly hurts performance and makes finding and fixing bugs far more difficult.
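As a toy illustration of the pile-up in question, and emphatically not code from any real AI-written project, here are three layers of indirection wrapped around a one-line operation, each one a place for a bug to hide and a stack frame to pay for:

```python
# Three layers of ceremony to add two numbers.
class Operand:
    def __init__(self, value: float):
        self.value = value

class OperationStrategy:
    def execute(self, a: Operand, b: Operand) -> float:
        raise NotImplementedError

class AdditionStrategy(OperationStrategy):
    def execute(self, a: Operand, b: Operand) -> float:
        return a.value + b.value

class CalculatorFacade:
    def __init__(self, strategy: OperationStrategy):
        self.strategy = strategy

    def run(self, a: float, b: float) -> float:
        return self.strategy.execute(Operand(a), Operand(b))

# All of the above, versus the one line a human would have written:
print(CalculatorFacade(AdditionStrategy()).run(2, 3))  # 5.0, same as 2 + 3
```

Now imagine debugging that pattern repeated across a few million generated lines.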
LLMs are trained on human code, so if the code becomes so convoluted or arcane that it no longer resembles their training sets, they're going to have an awful lot of trouble fixing it, randomly generating millions of lines from scratch over and over again in the hope of solving the problems. Eventually, a human will have to rewrite the whole thing as it becomes a revenue-killing liability.
Fine, what if we just keep scaling up LLMs so they have bigger context windows and can handle more code? Also not exactly a winning solution. You will now be committed to infinitely expanding parameter counts into the quadrillions and beyond just to keep up with the codebase of Y Combinator's latest headline project, Shittr, the toilet sharing app for those on the oh-my-god-gotta-go-now. Oh, and all of that will require trillions of dollars and hundreds of new data centers.
So, where do we go from here? Right now, the biggest voices for using AI for every little thing, including coding, are the equivalent of Gold Rush businessmen yelling that there's gold in them thar hills, and if you just exit through their shop, they can rent you – not buy, rent – denim overalls, pickaxes, wheelbarrows, and dynamite so you can go try your luck, and their bank will be there to take care of all that gold you'll surely find.
We need to have a conversation in which we accept that many bosses and executives are being outright indoctrinated into thinking that handy tools requiring thought and care in their deployment are the answer to everything, and that if they just keep doubling down on them, they will eventually reach the AGI revenue nirvana promised by the Cult of the Singularity. And that they should really be running a company, not a chapter of a cyberpunk cult.