We Need to Stop Talking About "AI"

Abstract: The category of AI is very broad. When someone talks about AI, they could be talking about anything from a very simple computer program to a large neural network. In this article I argue that talking about AI implies talking about expectations, not about actual, purposeful implementations. In order to be capable of talking about and regulating the dangers of AI we have to limit our use of the term "AI" to precisely what it denotes: expectations and possibilities that may or may not be realized some time in the future.


I know I’m very late to the party, but yesterday I finally finished Mass Effect 3, a game that’s almost ten years old by now.

Mass Effect is a trilogy of games whose first title was released way back in 2007. The last game, Mass Effect 3, was released in 2012 and, relatively recently, in 2021, BioWare released a “Legendary Edition” with better graphics and overall improvements.

Mass Effect’s story is set in a (relatively) near future in which humans have developed space travel and have found a so-called “relay” that links the solar system to other star systems inhabited by several alien races. As the story progresses, the player controls the main character, Commander Shepard, in their fight against a galactic menace called the “Reapers”. The game is story-centric with many RPG elements, and the story itself is very well written.

The name of the game is a reference to the one break with the laws of physics that was required to enable faster-than-light space travel and, therefore, the whole story: Basically all technology in the game is based on what is called “element zero”, which, when subjected to an electric current, can reduce the mass of objects to almost zero (and, as we all know, anything with zero mass can travel at the speed of light).

Originally, I bought the first game in 2014, and a month ago I got the “Legendary Edition” with all three games in a bundle. Since then, I have spent my evenings re-playing the first game and then continuing with the other two. There were two reasons why I was adamant about finishing the complete story: first, because I really like the story and wanted to know how it ends; second, because I needed to see everything the game has to say about a certain discourse before writing this article: the discourse on artificial intelligence.

As you may know, I’m more of a critic of artificial intelligence. Not because I’m a machine-smashing Luddite, but because I know how AI works. As I wrote a few months ago in my article about the LaMDA incident, the current AI discourse lacks a grounding in reality. When people talk about AI, there is always this latent reference to sentient computers. And when real engineers suddenly start fantasizing about their AI becoming sentient, we have a problem.

My problem for the better part of the last few years, ever since I started working with machine learning and artificial intelligence, has been that I knew something was off with the AI discourse, but I couldn’t really put my finger on it. Re-playing Mass Effect over the summer has now given me a clue as to what it is that bothers me.

People talking about AI can, for the most part, be grouped into two extremes: naïve techno-evangelism on one end and anarcho-primitivist smartphone denialism on the other. Of course, the discourse is a spectrum and many people have a more measured view of AI, but those you tend to see debating AI online can be coarsely assigned to one of the two extremes.

However, as is especially visible in the discourse on LAWS (“lethal autonomous weapon systems”), neither the arguments for nor those against AI are helpful in bringing more clarity as to what AI should be doing and where it ought to be heading.

We are still discussing the same ethical limitations that were already visible years ago, and we are still asking the same hypothetical questions that we were asking decades ago.

I was always curious as to what the reason for this deadlock in the AI discourse was. I already suspected that it was probably not a scientific issue but rather a political one; still, it was hard to put my finger on it.

Enter Mass Effect. The revelation came when I learned about a differentiation that Mass Effect makes in its treatment of AI. There are two categories of intelligent computer programs: On the one hand, there is “Virtual Intelligence” (VI), best described as “Siri with a face”. VIs in Mass Effect are essentially voice assistants with a holographic avatar. They are limited in what they can do, but they are a very convenient audio interface for controlling computers without a keyboard. On the other hand, there is “Artificial Intelligence” (AI). Unlike a VI, an AI is self-conscious and can make its own decisions.

In the universe of Mass Effect, developing AI is strictly prohibited because, 300 years prior to the start of the main story, one race developed an AI as a servant – a collection of networked robots – that gained sentience and then tried to slaughter its creators.

This distinction between VI and AI is at the same time very simple to understand and a very powerful tool that has the potential to improve our own AI discourse and free us from the current deadlock.

Once I knew about the distinction the game made, I immediately tried to think about what it would mean for our world. One can create a very precise operationalization of which computer programs fall into the VI category and which into the AI category:

  • If it has a concrete use-case (for example, controlling a hardware platform, or information retrieval), it’s a VI.
  • If it has no use-case (or rather, no purpose), if it just “exists”, it is an AI.

At this point, it becomes clear what is happening to our discourse on AI: We are never talking about concrete use-cases or purposes.

When we talk about AI, we talk about expectations.

When we talk about AI, we mostly state what some new type of model could be used for, but rarely what the precise purpose of the model is. Transformers, for example, never had a specific use-case. They are rather a framework upon which to build other models. The most prominent model that builds on transformers is BERT, but that in itself is also more of a framework with no real purpose. Only the final, fine-tuned BERT models serve an actual purpose (for example, at our institute, we use BERT to classify text into semantic categories that we can then use for our very own statistical magic).
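To make the contrast concrete, here is a minimal sketch of what such a purposeful, fine-tuned model looks like in practice. It uses the Hugging Face transformers library; the checkpoint name and the example sentences are placeholders rather than our actual setup.

```python
# A minimal sketch (not our exact pipeline): classifying sentences into
# semantic categories with a fine-tuned BERT checkpoint via Hugging Face's
# `transformers` library. The model name below is a placeholder.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="some-org/bert-finetuned-semantic-categories",  # hypothetical checkpoint
)

sentences = [
    "The committee voted to increase the research budget.",
    "Wages in the manufacturing sector stagnated last year.",
]

# Each prediction is a dict with a predicted label and a confidence score.
for sentence, prediction in zip(sentences, classifier(sentences)):
    print(f"{prediction['label']:>12} ({prediction['score']:.2f})  {sentence}")
```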

However, as soon as an AI model actually has a purpose, a precisely demarcated use-case, we don’t hear about it anymore. Those purposeful models are no longer part of the AI discourse. They become part of the discourse on the corresponding research field, if they get mentioned at all.

This preserves the AI discourse as an always-hazy, vague category onto which we can project expectations. Specific use-cases are left aside; they do not belong to the discourse, because use-cases are not expectations anymore.

Viewing the AI discourse as a projection screen for expectations explains many of the sometimes weird phenomena we can observe in public discourse. Since it’s just about expectations, politicians can promise to promote the implementation of AI or “smart solutions” such as the “smart city” without ever stating what their purpose is.

At the same time, military officials can use “AI” as a signifier that promises “technological solutions to political problems”, namely the problem that it has become politically unsustainable to have soldiers die on the battlefield. If you can tell voters that you have a “solution” for that problem (remote-controlled drones like the infamous General Atomics MQ-1 “Predator” and MQ-9 “Reaper”), the political damage of entering yet another theater of liminal warfare becomes manageable.

Lastly, this explains why the LAWS debate has been stalling since 2013. As a refresher: in 2013, the “Campaign to Stop Killer Robots” was founded with the aim of preventing the development of fully autonomous weapon systems that would replace humans as the final decision makers in warfare. However, its success has been limited. On the one hand, the terminology is vague (when does something count as “autonomous”?). On the other hand, and more importantly, the debate focuses only on expectations; it is always conducted in the subjunctive mood. The reason is that there are not yet any lethal autonomous weapon systems. We have an idea of what the potential problems could be, but we have no data on them, since even the most infamous “killer robots” are still fully under the control of human pilots, albeit remote ones.

The main problem, I would argue, is that it’s impossible to ban expectations. It is only possible to regulate what is already there, and hence, without actual autonomous weapon systems, it is difficult to write into law what exactly should be forbidden about these systems. Is it that they can fly completely on their own? Is it just that they should not be able to pull the trigger without human intervention? Or is it already problematic if they pre-sort the data before the human is presented with a sanitized list of potential targets, which may increase civilian deaths?

All of these questions touch on fundamental philosophical issues and therefore escape legal terminology.

I think the distinction between VI and AI that BioWare made for its game has a lot of potential for our debates on AI. Since a VI has very specific purposes for which it is employed in the game, it is easy for all participants to know what they are talking about when they talk about VI. And since a VI is limited in what it should be capable of doing, it is easy to regulate. When someone talks about AI, however, things are much fuzzier.

We need to abandon the term “AI” when we talk about concrete risks and problems, and start talking about actual capabilities. The current AI discourse is heavily influenced by the expectations of Silicon Valley, mainly because it is easier to sell people expectations than purposeful but limited programs.

We need to stop thinking in expectations and possibilities, and start to talk about the actual implementations of AI that are out there and that have clearly delimited problems and benefits for people.

We need to stop talking about “AI”.

Suggested Citation

Erz, Hendrik (2022). “We Need to Stop Talking About ‘AI’”. hendrik-erz.de, 19 Aug 2022, https://www.hendrik-erz.de/post/we-need-to-stop-talking-about-ai.

