why a.i. startups keep tilting at windmills
A new Stanford study says AI startups' products work exactly backwards according to their customers. How did that happen?
Some of the most common advice given to startups in Silicon Valley is to fail fast and pivot often. On its own, not terrible advice. Try as many ideas as possible to see what works and can be scaled into a lucrative business, and if something goes wrong, just move on to the next idea. In practice, however, it often leads wannabe founders with dreams of billion-dollar valuations in their eyes to just go for the easiest way to get VC attention, especially when it comes to AI and automation.
A new study from Stanford says that more than two in five AI startups are automating a process that either doesn’t need to be automated, or doesn’t solve anything. Which pretty much perfectly explains why we get so many emails and notifications outright begging us to use chatbots and generative AI features when they’re often just in the way and we keep saying we don’t want to use them.
In a now classic tweet, sci-fi and fantasy author Joanna Maciejewska reacted to LLMs being pitched by AI bros as the ultimate tool for writing fiction with “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.” It’s an opinion that found a lot of traction with creative workers, and is now resonating throughout corporate offices.
What’s happening, according to the study, is that in their quest to create a super-AI, or to at least hype a large and expensive product into orbit, these startups are dead set on taking over strategy and design, advising executives to fire armies of experts with no backup plan, while also demanding customers’ most sensitive data to do Cthulhu knows what with behind the scenes, potentially leaking it to other users. Meanwhile, frontline workers are wasting countless hours on busywork that should be handed off to computers, freeing those workers for higher-value tasks.
even a.i. doesn’t want to do your busywork
Basically, we’re doing this new wave of automation exactly backwards. Generative AI is great at churning out slop, spam, and scams, committing ad fraud, or barfing out generic SaaS app designs for prototyping and proofs of concept. It’s terrible at the work that actually wastes people’s bandwidth, like collating datasets, keeping business documents up to date, or diagnosing nasty IT issues. And with its hallucinations, it might even be slowing down work that requires both precision and creativity, like a toddler who’s trying to help with all his might while you just know an adult will have to redo it.
Which prompts the question: why the disconnect? How is it not obvious to so many tech founders that people’s goal at work is to outsource mindless or tedious busywork so they can do more complex and creative things? My guess is that it’s a critical difference in values, goals, and culture between workers, creatives, and the AI bros currently preaching the Gospel According to ChatGPT.
At the risk of offending AI hypebeasts, I would be doing readers a disservice if I didn’t point out that they’re basically the one-pump chumps of creativity. They’re interested in just skipping to the end result. The generated book. The finished video. The app to submit to the major platforms. They’re not really concerned with what’s happening behind the scenes, nor are they interested in the process, refinement, and challenges we need to tackle to create an engrossing book, a stylish video, or a great app.
a study in hype over substance
Sure, AI can help the passionate creative or detail-oriented expert get to the end result faster, or help them get unstuck if they’re struggling, but the bros? Nah, they just want to skip to what they think of as the fun part, roll over, and start snoring away. But for a creative or an expert, the actual process of making a new thing, the experimentation, research, and refinement, is the fun part. They care about the actual result, not simply delivering something quickly and in large quantities. And so do their customers.
People have absolutely noticed that companies bragging about becoming AI-first have much worse quality and customer service, so much so that many have had to roll back their big AI initiatives in the most face-saving way they can muster. AI adoption has also been a lot slower than Silicon Valley anticipated, despite boosterism that verges on delivering religious sermons to the masses. (Incidentally, for years, tech companies’ most visible PR people were officially called evangelists.)
The study from Stanford is another loud klaxon clearly showing that Silicon Valley’s if-you-build-it-they-will-come-or-else AI strategy isn’t working. Hardcore users who are getting lost in virtual fantasy worlds aren’t paying for all those new data centers. Most of the corporate users spending tens of thousands per month are balking at sharing a lot of sensitive data and grumbling about not getting the most out of the AI tools, while the rank and file are frustrated that their day-to-day hasn’t improved much.
But improving the daily routine of white collar peons isn’t an easy sell to the C-suite, to whom those low-level processes are, well, boring, so there go all those huge per-seat margins that attract VC funding. It’s also extremely difficult to make tools designed to predict probabilities also be precise and accurate to four decimal points, so many AI startups don’t even try. As a result, 90% of them are failing, while nearly half of all companies already sold on generative AI packages are backtracking.
This is what happens when reality doesn’t live up to the demos and social media buzz. If your CEO keeps talking about the Singularity and how they’re on the verge of an AI to end all AIs, but your customers aren’t renewing subscriptions and are complaining that your product doesn’t solve their problems, your trajectory simply isn’t sustainable.