Useful To Who?
I've been thinking about the usefulness of AI since I wrote about it last week. The latest salvo in this battle is a recently released study showing that AI is neither replacing jobs nor drastically affecting wages. Plenty of grains of salt to be taken here: the data is from Denmark, self-reported, and from 2023-2024, so it predates some of the recent leaps and bounds made in the technology.
Still, I think there's a growing realization that, while this tech might have the potential to be world-changing, it's not actually there yet. Many of the places where it's getting the heaviest use are the ones that suck! Creating more layers of servile chatbots that don't actually help you, pumping websites full of text that all sounds the same, or helping kids cheat on their homework. This isn't exactly the AI-powered utopia some people are selling.
As somebody who follows a lot of researchers in the field out of a perverse desire to understand what's going on, I think it's easy for me to get caught up in the enthusiasm of people at the cutting edge. I'm still highly skeptical of the tech, but read enough people convinced this is the Next Big Thing and you start to believe there must be something there. Yet every time I actually try to use these tools I'm wildly underwhelmed. Other than my brief foray into Deep Research, which does strike me as something with promise, nothing actually feels that useful! I spent a few hours playing around with Gemini 2.5 Flash and came away unimpressed: it's a sycophantic little interface with very little useful problem-solving ability, at least for the real-life problems I tossed its way. Again, there are domain-specific applications I see a future for here, especially in repetitive tasks like programming, but this is a Fancy Calculator miscast as actual intelligence. Frankly, I need to work on not getting taken in by fairly smart people who are excited by this tech: I get it, I think it's interesting too, but every time I step back and actually use the thing I'm reminded that it's mostly hype.
The depressing conclusion I'm coming to about these tools is that for anyone who actually likes to think, or read, or who has significant domain knowledge, they are at best useless and at worst actively harmful. Unfortunately, there are also a lot of people for whom the most mediocre output imaginable is perfectly fine. If you're too lazy to actually learn something, then output that is kind of correct some of the time is probably good enough. This is the thing that makes me think the tech might not be going away: there will always be deeply lazy, incurious, or just plain scammy people happy to outsource their work to the mediocrity machine.
The good news, I guess, is that I still don't see many domains where this tech can actually outdo a thoughtful person. John Henry still seems to have the advantage over this particular steam engine, and it strikes me as a fundamental limitation of the tech that it will never really replace a thoughtful human being where it matters. And given the pullback from AI investment in a variety of sectors, the markets seem to be starting to agree with me. There is something there with large language models, but frankly I don't think anyone has quite figured out what it is yet. I don't know if or when the bubble will burst, but people certainly seem to be getting wise to the fact that the tech just isn't there yet.