An Era of ‘Artificial Fake Truth’? On the Effects of Large AI Models

Abstract: Yes, I'm still talking about large AI models. But today I want to highlight an aspect that has many people worried: what could be the effects of these models going forward? Luckily there is already a debate going on that focuses on these issues.


With the AI hype being all the rage currently, there is a discussion simmering in a corner of public discourse that deserves just as much attention as the open letters and rebuttals being thrown around in the media. This discussion focuses less on the abilities of Large Language Models (LLMs) or the other big AI models that hit the market every week. Instead, it is all about the effects these models can have – on themselves as well as on society at large.

This discussion revolves around a simple question: What happens as more and more content online is produced not by human beings, but by large language/image/video models? Currently, the debate centers on two areas. What happens to the models as more of the data they are trained on stems from earlier iterations of those very models? And what happens to society as the share of computer-generated text and images increases?

LLMs are here to stay, and we had better begin thinking about their effects sooner rather than later. Many of the answers (or, rather, thoughts) on this are still very speculative, so this article will be just as speculative. The only thing I’m relatively certain of is that the actual effects of AI generators may differ from what public discourse (which is mainly driven by the big players in the market) would have us believe.

Diminishing Variance

Let us first talk about the effect these models may have on themselves as we use them in our daily lives. Millions of people already share ChatGPT prompts and outputs liberally, and more and more AI-generated art is floating around. This development has a rather disturbing effect on machine learning models themselves.

To understand this, we first need to know why these models are so good. As Margaret Mitchell, Emily Bender, and Timnit Gebru described years ago: these models are good because they are capable of reproducing human-level content with slight variations, just enough to seem original without seeming weird (looking at you, Google DeepDream). And how do they do that? By ingesting metric tons of raw material. And where does this material come from? The internet.

So what happens if you combine the need for a very large dataset with an increasing share of AI-generated content on the open web? Well, going forward, more and more of the content in those training datasets will come from previous iterations of the models themselves.

If you follow this to its logical conclusion, you arrive at a time when a sufficiently large fraction of the content on the internet – and consequently of the training datasets for these models – is machine-generated. In simple terms: we will soon begin training large AI models on content that was in part generated by previous iterations of the same models.

My colleague David Adler has written a wonderful article on the implications of this, where he approaches this issue from an information/entropy perspective.

To summarize his article in layman’s terms: even though AI models produce novel-looking content, they do not produce new information. Since they run on computers, where there is no true randomness, they cannot “think” of anything new. They are genuine pastiche producers.

In other words, they cannot add variance to what already exists. If you train a model on a dataset that is 100% machine-generated, it can only reproduce a subset of that dataset, and if you continue this for long enough, any model – regardless of whether it is for language or images – will end up producing white noise.
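To make the variance argument a bit more tangible, here is a minimal, purely illustrative sketch (my own toy example, not David Adler’s derivation): imagine a “model” that does nothing but fit a normal distribution to its training data and then sample new “content” from that fit. If every generation is trained exclusively on the previous generation’s output, the estimated spread drifts towards zero and the model collapses onto an ever narrower slice of the original distribution. Real language and image models are vastly more complex, but the feedback-loop intuition is the same.

```python
# Toy sketch of recursive training on machine-generated data.
# Assumption: the "model" is just a Gaussian fitted to its training set;
# nothing like an actual LLM, it only illustrates the feedback loop.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human-made" data, drawn from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=20)

for generation in range(1, 201):
    # "Train" the model: estimate mean and spread from the current dataset.
    mu, sigma = data.mean(), data.std()
    # "Publish" synthetic content and reuse it as the next training set,
    # with no fresh human-made data mixed in.
    data = rng.normal(loc=mu, scale=sigma, size=20)
    if generation % 40 == 0:
        print(f"generation {generation:3d}: estimated std = {sigma:.4f}")
```

In this toy setup the estimated standard deviation drifts towards zero over the generations; mixing fresh human-made data back in at every step slows or halts the collapse, which is exactly why the question of who produces that original data (and who gets paid for it) matters.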

Now you may say, “Well, that’s true, but because machine-generated content has less variance, it is easy to detect and filter out of datasets.” And you are absolutely right. But this proves a point in a seemingly unrelated debate: digital ownership.

For basically as long as the internet has existed, people have copied data around: images, videos, text, software – you name it. Everything that can be represented digitally can be copied, and copying is practically cost-free. Proponents of the free “share-ability” of digital content have argued for ages that, once content has been created, one should be allowed to share it freely.

Content distributors – most notably Hollywood and the music and book industries – have naturally disagreed, because they make money by taking existing content and redistributing it to end users. Once content exists in digital form, there is no added cost in duplicating it.

While AI models are not really content distributors, they finally take this debate to its absurd conclusion: machine-generated content does not have any intrinsic value, since it is mere pastiche; what does have value are the original works that go into the training dataset.

This highlights a point that Karl Marx made a long time ago: what adds value to any product is not the machine, but the workers who operate it. The same holds true for intellectual products: what adds value to a piece of music, for example, is not a streaming service that offers limited access to it, but the musicians remixing it to match a certain style and their own preferences. This goes to show that we definitely need to pay artists and content creators, but it also raises the question of why we pay middlemen.

We will never arrive at a point when AI models degrade in performance because there is no more original data to train them on. But we will have to continue the discussion about adequately paying the people who ensure that these models keep performing well: artists, writers, and actors.

“Nothing is real anymore”

The other arm of the debate focuses on the effects that the increasing prevalence of machine-generated content can have on society at large. At first glance, it looks dire: while we are already more or less accustomed to the fact that we cannot simply trust anything written on the internet, now we cannot even trust the images we see and the videos we watch. Anything could be computer-generated!

Take, for example, the generated image of the Pope wearing luxurious apparel: most of the internet (including me) took it for a real photograph. Later, it turned out to be generated. What effect could this have? While some are already imagining the worst, I personally think it won’t be too bad.

Think about what really changed once large language models were released to the public. Did that change your perception of how much fake or false news is out there? I hope not, because it didn’t. When you use a computer model to generate text in order to deceive someone, you are merely swapping the tool. Machines don’t spread fake news, humans do.

The same goes for images: of course there will be actors who generate images in order to deceive people. But manipulating photographs is as old as photography itself.

My point stands: while AI models certainly add a few complications to the issue, the root problem then and now has always been bad actors, not necessarily the tools they use.1 Gullible people will continue to fall for deceptions, and careful people will spot even the most carefully set traps – just as they did before.

Continuing this train of thought: in a certain way, AI will make the job of journalists more important again, as they are (hopefully) well trained to separate the wheat from the chaff. They do the necessary research to check images and claims for their truthfulness before compiling the results into a news article that we can then read.

We Need to Stay Vigilant, As We Always Should Have

AI models accentuate some attributes of our society and modify others. But they are not the fundamental, groundbreaking change that some would have us believe. With increasingly computer-generated content on the internet, the basic rules of the game do not change. We should still question the motives of whoever releases a piece of information onto the internet. We should still pay creators adequately. We should still treat the internet with caution. But other than that, there is no reason to panic.

What language models and image models do to the internet is not new. In fact, they perpetuate a feature that has been part of our society since at least the end of World War II. The images produced by AI models are not the issue here; it is what people use them for. Or, as Guy Debord put it:

The spectacle is not a collection of images, but a social relation among people, mediated by images. (La Société du spectacle, § 4)


1 The clever reader may have noticed that the statement “Machines don’t spread fake news, humans do” has the exact same structure as “Guns don’t kill people, people do”. This opens up my entire chain of thought to the rebuttal that, even if what I say is true, it does not preclude the need for AI regulation. Rest assured: I am all for regulation, but this article focuses only on the effects of AI models; regulation is a different (but nevertheless important) topic.

Suggested Citation

Erz, Hendrik (2023). “An Era of ‘Artificial Fake Truth’? On the Effects of Large AI Models”. hendrik-erz.de, 7 Apr 2023, https://www.hendrik-erz.de/post/an-era-of-artificial-fake-truth-on-the-effects-of-large-ai-models.
