How Do We Know if AI Is Smoke and Mirrors?

Musings on whether the “AI Revolution” is more like the printing press or crypto. (Spoiler: it’s neither.)


I am not nearly the first person to sit down and really think about what the advent of AI means for our world, but it’s a question that keeps being asked and talked about. Most of these conversations, however, seem to miss some key factors.

Before I begin, let me give you three anecdotes that have shaped my thinking lately, each illustrating a different aspect of this issue.

  1. I had a conversation with my financial advisor recently. He remarked that the executives at his institution were disseminating the advice that AI is a substantive change in the economic landscape, and that investment strategies should regard it as revolutionary, not just a hype cycle or a flash in the pan. He wanted to know what I thought, as a practitioner in the machine learning industry. I told him, as I’ve said before to friends and readers, that there’s a lot of overblown hype, and we’re still waiting to see what’s real underneath it all. The hype cycle is still happening.
  2. Also this week, I listened to the episode of Tech Won’t Save Us about tech journalism and Kara Swisher. Guest Edward Ongweso Jr. remarked that Swisher has a pattern of being credulous about new technologies in the moment and changing her tune after those technologies prove not to be as impressive or revolutionary as promised (see: self-driving cars and cryptocurrency). He thought this phenomenon was happening with her again, this time with AI.
  3. My partner and I both work in tech, and we regularly discuss tech news. He once described a phenomenon where you think a particular pundit or tech thinker has very wise insights as long as they’re discussing a topic you don’t know much about, but when they start talking about something in your area of expertise, you suddenly realize they’re very off base. You go back in your mind and wonder, “I know they’re wrong about this. Were they also wrong about those other things?” I’ve been experiencing this from time to time recently on the subject of machine learning.

It’s really hard to know how new technologies will settle and what their long-term impact on our society will be. Historians will tell you that it’s easy to look back and assume “this is the only way that events could have panned out,” but in reality, no one in the moment knew what was going to happen next, and there were myriad possible turns of events that could have changed the whole outcome, each as likely as or more likely than what finally happened.

TL;DR

AI is not a total scam. Machine learning really does give us opportunities to automate complex tasks and scale effectively. AI is also not going to change everything about our world and our economy. It’s a tool, but in the vast majority of cases it’s not going to replace human labor in our economy. And AGI is not a realistic prospect.

AI is not a total scam. … AI is also not going to change everything about our world and our economy.

Why do I say this? Let me explain.

First, I want to say that machine learning is pretty great. I think that teaching computers to parse the nuances of patterns too complex for people to really grok themselves is fascinating, and that it creates loads of opportunities for computers to solve problems. Machine learning is already influencing our lives in all kinds of ways, and has been doing so for years. When I build a model that can complete a task that would be tedious or nearly impossible for a person, and deploy it so that it solves a problem for my colleagues, that’s very satisfying. That’s a very small-scale version of some of the cutting-edge work being done in the generative AI space, but it falls under the same broad umbrella.

Expectations

Speaking to laypeople and speaking to machine learning practitioners will get you very different pictures of what AI is expected to mean. I’ve written about this before, but it bears some repeating. What do we expect AI to do for us? What do we mean when we use the term “artificial intelligence”?

To me, AI is basically “automating tasks using machine learning models”. That’s it. If the ML model is very complex, it might enable us to automate some complicated tasks, but even little models that do relatively narrow tasks are still part of the mix. I’ve written at length about what a machine learning model really does, but for shorthand: it mathematically parses and replicates patterns from data. So that means we’re automating tasks using mathematical representations of patterns. AI is us choosing what to do next based on the patterns of events from recorded history, whether that’s the history of texts people have written, the history of house prices, or anything else.

AI is us choosing what to do next based on the patterns of events from recorded history, whether that’s the history of texts people have written, the history of house prices, or anything else.
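
To make that concrete, here is a minimal sketch of what “automating a task using a mathematical representation of patterns” can look like. It is purely illustrative: the house-price numbers below are invented, and scikit-learn is just one convenient tool for the fitting.

```python
# A minimal sketch, not from any production system: automate a task using a
# mathematical representation of patterns. The house-price data is made up,
# and scikit-learn is just one convenient way to do the fitting.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical "recorded history": square footage and the observed sale price.
square_feet = np.array([[850], [1200], [1500], [1800], [2400]])
sale_price = np.array([170_000, 235_000, 290_000, 350_000, 460_000])

# The model mathematically parses and replicates the pattern in that history.
model = LinearRegression().fit(square_feet, sale_price)

# The "automation": an estimate for a new listing, no human appraiser involved.
estimate = model.predict([[1650]])[0]
print(f"Estimated price for 1,650 sq ft: ${estimate:,.0f}")
```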

However, to many folks, AI means something far more complex, verging on science fiction. In some cases, they blur the line between AI and AGI, a term that is itself poorly defined in our discourse. Often I don’t think people themselves know what they mean by these terms, but I get the sense that they expect something far more sophisticated and universal than what reality has to offer.

For example, LLMs understand the syntax and grammar of human language, but they have no inherent concept of the tangible meanings behind it. Everything an LLM knows is internally referential: “king” to an LLM is defined only by its relationships to other words, like “queen” or “man”. So if we need a model to help us with linguistic or semantic problems, that’s perfectly fine. Ask it for synonyms, or even to generate paragraphs full of words related to a particular theme that sound very realistically human, and it’ll do great.
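
To show what “internally referential” means in practice, here is a toy sketch with made-up four-dimensional vectors (real models learn far higher-dimensional representations, and none of these numbers come from an actual LLM). The model’s “king” is nothing but its pattern of numeric relationships to other words.

```python
# A toy illustration with invented vectors; the principle is that "king" is
# nothing to the model except its numeric relationships to other words.
import numpy as np

embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.7]),
    "queen": np.array([0.9, 0.2, 0.1, 0.7]),
    "man":   np.array([0.5, 0.9, 0.1, 0.2]),
    "woman": np.array([0.5, 0.3, 0.1, 0.2]),
    "bread": np.array([0.1, 0.1, 0.9, 0.1]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two word vectors; this web of similarities is all the model has."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "king" relates strongly to "queen" and "man" and barely to "bread"; there is
# no throne, crown, or person behind the vector, only these relationships.
for word in ["queen", "man", "bread"]:
    print(f"similarity(king, {word}) = {cosine(embeddings['king'], embeddings[word]):.2f}")
```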

But there is a stark difference between this and “knowledge”. Throw a rock and you’ll find a social media thread of people ridiculing how ChatGPT gets facts wrong and hallucinates all the time. ChatGPT is not and will never be a “fact-producing robot”; it’s a large language model. It does language. Knowledge is even one step beyond facts, where the entity in question has understanding of what the facts mean and more. We are not at any risk of machine learning models getting to this point, what some people would call “AGI”, using the current methodologies and techniques available to us.

Knowledge is even one step beyond facts, where the entity in question has understanding of what the facts mean and more. We are not at any risk of machine learning models getting to this point using the current methodologies and techniques available to us.

If people are looking at ChatGPT and wanting AGI, some form of machine learning model with an understanding of information or reality on par with or superior to people’s, that’s a completely unrealistic expectation. (Note: some in this industry will grandly tout the impending arrival of AGI in PR, but when prodded, they will back off their definitions of AGI to something far less sophisticated, in order to avoid being held to account for their own hype.)

As an aside, I am not convinced that what machine learning does, and what our models can do, belongs on the same spectrum as what human minds do. Arguing that today’s machine learning can lead to AGI assumes that human intelligence is defined by an ever-increasing ability to detect and utilize patterns, and while that is certainly one of the things human intelligence can do, I don’t believe it is what defines us.

In the face of my skepticism about AI being revolutionary, my financial advisor mentioned the example of fast food restaurants switching to speech recognition AI at the drive-thru to reduce problems with human operators being unable to understand what customers are saying from their cars. This might be interesting, but it’s hardly an epiphany. It’s a machine learning model used as a tool to help people do their jobs a bit better. It allows us to automate small things and reduce human work a bit, as I’ve mentioned. This is not unique to the generative AI world, however! We’ve been automating tasks and reducing human labor with machine learning for over a decade, and adding LLMs to the mix is a difference of degree, not a seismic shift.

We’ve been automating tasks and reducing human labor with machine learning for over a decade, and adding LLMs to the mix is a difference of degree, not a seismic shift.

I mean to say that machine learning can and does provide us with real, incremental improvements in the speed and efficiency with which we can do lots of things, but our expectations should be shaped by a genuine comprehension of what these models are and what they are not.

Practical Limits

You may be thinking that my first argument is based on the current technological capabilities for training models, and the methods being used today, and that’s a fair point. What if we keep pushing training and technologies to produce more and more complex generative AI products? Will we reach some point where something totally new is created, perhaps the much vaunted “AGI”? Isn’t the sky the limit?

The potential for machine learning to support solutions to problems is very different from our ability to realize that potential. With infinite resources (money, electricity, rare earth metals for chips, human-generated content for training, etc.), there’s one level of pattern representation that we could get from machine learning. But in the real world in which we live, all of these resources are quite finite, and we’re already coming up against some of their limits.

The potential for machine learning to support solutions to problems is very different from our ability to realize that potential.

We’ve known for years already that quality data to train LLMs on is running low, and attempts to reuse generated data as training data have proven very problematic. (h/t to Jathan Sadowski for coining the term “Habsburg AI,” or “a system that is so heavily trained on the outputs of other generative AIs that it becomes an inbred mutant, likely with exaggerated, grotesque features.”) I think it’s also worth mentioning that we have poor capability to distinguish generated and organic data in many cases, so we may not even know we’re creating a Habsburg AI as it happens; the degradation may just creep up on us.
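
As a crude illustration of why that degradation creeps in (a toy simulation of my own, not the mechanics of any actual LLM), consider repeatedly fitting a simple distribution to data, sampling new data from the fit, and fitting again. With a small sample at each step, the fitted distribution drifts, and over many generations its spread tends to collapse; any single run is noisy, but the inbreeding dynamic is visible.

```python
# A toy simulation of training on generated data, a crude analogue rather than
# the actual training process of any LLM: each "generation" fits a normal
# distribution to samples drawn from the previous generation's fit.
import numpy as np

rng = np.random.default_rng(42)

data = rng.normal(loc=0.0, scale=1.0, size=50)  # generation 0: "organic" data

for generation in range(1, 1001):
    mu, sigma = data.mean(), data.std()    # "train" on the current corpus
    data = rng.normal(mu, sigma, size=50)  # "generate" the next corpus from the fit
    if generation % 200 == 0:
        print(f"generation {generation}: mean = {mu:+.3f}, std = {sigma:.3f}")
```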

I’m going to skip discussing the money/energy/metals limitations today, because I have another piece planned about the natural resource and energy implications of AI, but hop over to The Verge for a good discussion of the electricity costs alone. I think we all know that energy is not an infinite resource, even renewables, and we are already committing the electrical consumption equivalent of small countries to training models that do not come close to the touted promises of AI hucksters.

I also think that the regulatory and legal challenges to AI companies may well have legs, as I’ve written before, and these must create limitations on what those companies can do. No institution should be above the law or without limitations, and wasting all of our earth’s natural resources in service of trying to produce AGI would be abhorrent.

My point is that what we can do theoretically, with infinite bank accounts, mineral mines, and data sources, is not the same as what we can actually do. I don’t believe it’s likely machine learning could achieve AGI even without these constraints, in part due to the way we perform training, but I know we can’t achieve anything like that under real world conditions.

[W]hat we can do theoretically, with infinite bank accounts, mineral mines, and data sources, is not the same as what we can actually do.

Even if we don’t worry about AGI, and just focus our energies on the kinds of models we actually have, resource allocation is still a real concern. As I mentioned, what the popular culture calls AI is really just “automating tasks using machine learning models”, which doesn’t sound nearly as glamorous. Importantly, it also reveals that this work is not a monolith. AI isn’t one thing; it’s a million little models all over the place being slotted into the workflows and pipelines we use to complete tasks, all of which require resources to build, integrate, and maintain. We’re adding LLMs as potential choices to slot into those workflows, but that doesn’t make the process fundamentally different.
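
To picture what that slotting-in looks like, here is a hypothetical sketch (every name and rule in it is invented): the model call is a single step inside an otherwise ordinary workflow, and swapping a small classifier for an LLM changes that one step rather than the shape of the pipeline.

```python
# A hypothetical sketch of "slotting a model into a workflow"; every name and
# rule here is invented for illustration.
from dataclasses import dataclass

@dataclass
class Ticket:
    text: str
    category: str | None = None

def classify_with_model(text: str) -> str:
    # Stand-in for any model call: a small in-house classifier or an LLM behind an API.
    return "billing" if "invoice" in text.lower() else "general"

def handle_ticket(ticket: Ticket) -> Ticket:
    if not ticket.text.strip():          # ordinary validation, no ML involved
        ticket.category = "empty"
        return ticket
    ticket.category = classify_with_model(ticket.text)  # the "AI" step in the pipeline
    # ...followed by routing, logging, human review, and so on.
    return ticket

print(handle_ticket(Ticket("Where is my invoice?")))
```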

As someone with experience doing the work to get business buy-in, resources, and time to build such models, I can say it is not as simple as “can we do it?” The real question is “is this the right thing to do in the face of competing priorities and limited resources?” Often, building a model and implementing it to automate a task is not the most valuable way to spend company time and money, and such projects get sidelined.

Conclusion

Machine learning and its results are awesome, and they offer great potential to solve problems and improve human lives if used well. This is not new, however, and there’s no free lunch. The implementation of machine learning across sectors of our society will probably keep expanding, just as it has for the past decade or more. Adding generative AI to the toolbox is just a difference of degree.

AGI is a completely different, and at this point entirely imaginary, entity. I haven’t even scratched the surface of whether we would even want AGI to exist if it could; I think that’s an interesting philosophical topic, not an emergent threat. (A topic for another day.) But when someone tells me they think AI is going to completely change our world, especially in the immediate future, this is why I’m skeptical. Machine learning can help us a great deal, and it has been doing so for many years. New techniques, such as those used for developing generative AI, are interesting and useful in some cases, but not nearly as profound a change as we’re being led to believe.