I guess it was inevitable that I would have to write something about AI here, although I’ve been putting it off for a while now. But seeing as the hype still does not seem like it is about to die down, I feel like I can put it off no longer. I am not sure I have anything particularly original to add to this topic, but I nevertheless feel like I should put a stake in the ground.

First of all, the field of AI has obviously been around for more than half a century, so it consists of a lot of different technologies, from expert systems to Bayesian classification systems (like those commonly used in spam filters) to LLMs. The latter are the source of the incredible hype cycle and bubble economics we’re currently in the middle of, so that is what I’ll focus on.

The fact that it’s a bubble is, in my opinion, blatantly obvious. Ed Zitron’s incredibly detailed run-down of OpenAI’s finances makes it clear that there is no sustainable business anywhere in there. Even Microsoft seems to think so, pulling back on its data centre construction plans. And as the hype meets reality, failure rates for AI projects are rising, with some high-profile examples of early adopters having to walk back their optimistic initial commitments.

Even so, the hype cycle continues. Even if an LLM-based tool can’t actually do what it promises today, the thinking goes, surely that is only a few iterations away. Clearly the hype is working: I’ve lost count of how many times I’ve seen someone show off an AI-based tool with a comment that is some variant of “well, as you can see it doesn’t actually work that well, but I’m sure it will improve with just a bit more training”, without any kind of justification for that claim.

I’ve even heard people stand up and proclaim that the system they are working with is too complex to build an interface for that someone without expert knowledge could use. And then, with a straight face, go on to say that they expect an LLM to be able to make sense of that same system and provide a chat-based interface to it, if we just feed the system’s documentation into the LLM as training material. The only way I can explain that kind of wishful thinking is that the sheer power of the hype is able to override common sense.

Experiences with gen-AI tools

Hype aside, the experiences I’ve personally had with gen-AI based tools have ranged from “disastrous” to “kinda okay”. I’ll name a few: One involved a colleague asking Gemini to produce a “recommendations document” based on his notes from a customer meeting. What it produced was a couple of pages of what can only be described as the most basic of recommendations, instead of the specialised domain-specific advice we were trying to produce. Thankfully, I managed to convince the co-worker in question to scrap the AI output and start from scratch.

Another experience that produced somewhat better results was a trial run of a tool for applying kernel patch backports to older RHEL kernels, using an AI model trained on existing resolutions in the RHEL tree to resolve conflicts. This tool is quite useful, although in my mind mostly because the tool itself has a lot of knowledge about the backporting process built in outside of the AI-based conflict resolution logic, which automates a lot of the tedium of the process. The actual conflict resolution suggestions provided by the AI model ranged from OK for the trivial stuff to “partly correct” for more complex things. That is still useful, but in a normal “iterative improvement” sense, not the hyped-up revolutionary sense.

Drawbacks and costs

When considering the effectiveness of any tool, attention has to be paid not only to the benefits, but also to the costs. And for generative AI, the costs are pretty significant. Not just in the form of the astronomical monetary and environmental costs, which on their own ought to be deal breakers as far as I’m concerned, but also in the form of real harms that the technology visits on both a personal and a societal level.

And no, I’m not talking about the “generative AI will become sentient and destroy us all” nonsense. I’m talking about real harms being done today; things like chatbots destroying relationships, or images and video becoming unreliable sources of information. Of harms to workers when they are “replaced” by AI, only to be hired back as precarious or temp workers when it turned out they were not, in fact, dispensable after all (which is what happened in that example I linked above). Of the hidden human cost of training the models. Of the security nightmares that come with attaching AI to everything, and the bogus “security” reports that open source projects are being subjected to. And of the harm of even further concentration of technological power in the hands of a few giant corporations who own the models everyone is adding dependencies on.

On the individual level, there’s even some evidence to suggest that outsourcing our thinking to AI models leads to a loss of competence over time, as we forget how to think for ourselves. An effect that persists when going back to manual work after having used an AI for a related task. Others have noticed similar things, some even going so far as to compare generative AI to digital cocaine.

Finding a balance

Given those pretty monumental harms, and the kinda “meh” benefits to be had from this technology, I don’t think it’s even a difficult choice. I got into this field, and free software in particular, to create technology that is under the control of its users, not technology that is used to control them. Even though there’s been an effort to create an open source AI definition (which is in itself controversial), the current crop of LLMs all seem to be based on proprietary models and systems that lock people in, while disproportionately harming people who are already on the receiving end of many of the world’s ills.

Sure, if LLMs were completely free to operate and create, didn’t concentrate power in the hands of a few corporations, didn’t lead to torrents of misinformation, and didn’t reduce our cognitive ability, they would be… kinda neat toys? But while we’re wishing for things that don’t exist, can I have a unicorn too, please?

I guess that once the dust settles, we’ll see which uses of the technology turn out to be useful enough to stick around once they are no longer subsidised by vast buckets of venture capital. And I look forward to the time when we can once again get back to asking “what is the best tool for this job”, rather than the backwards “how can we solve this problem with AI” that seems to be the current modus operandi of the tech world.

In the meantime, I just hope that we won’t have boiled the planet, or drowned our society in misinformation and deepfakes before the bubble pops.