This article was originally published in French in l’Uniscope, the magazine of the University of Lausanne, in September 2025.
Large-scale doubt-mongering, political manipulation, damage to reputations: AI-generated content is not just entertaining videos to watch while waiting for the bus. Experts from the University of Lausanne and EPFL decipher the phenomenon and share tools for protection.
The word “deepfake,” a contraction of “deep learning” and “fake,” is appearing more and more in public conversations. Whether images, videos, or audio recordings, deepfakes all have in common that they are created or modified by artificial intelligence, with such realism that it becomes difficult to distinguish the real from the fake. But creating counterfeits is nothing new, explains Olivier Glassey, a sociologist at the Laboratory for the Study of Science and Technology (STS Lab) at the University of Lausanne: “Image and text manipulation have been around for a long time. What sets deepfakes apart is both the technological leap that has been made and the democratization of these tools, which are now accessible to a wider audience.”
Mass production of doubt
With the help of a few software programs available online, it is now easy to create deepfakes with AI. Some people use deepfakes to gain visibility on the Internet, which can generate significant revenue once the audience is there. Others use them to discredit public figures, particularly politicians. Still others are simply looking to have fun. But the majority of deepfakes distributed online are pornographic in nature, generating money either through viewing volume or blackmail. As a result, without even realizing it, we are increasingly exposed to content created by artificial intelligence.
“If you go on social media, on TikTok for example, there are so many deepfakes that the question of whether or not something is a deepfake constantly arises,” explains Olivier Glassey. “As soon as something seems a little out of the ordinary, we ask ourselves: is it real or not?” The main effect, according to him, is the instillation of systematic doubt. “When this doubt preempts all content, it transforms the way it is read, which can fuel generalized feelings of uncertainty or weariness.” The relationship with the truth becomes blurred.

The individual and societal impact of deepfakes
In addition to creating doubt, deepfakes can damage identity formation, particularly through pornographic content, continues the specialist in the sociology of science and technology: “Criticism of the stereotypical representation of bodies on social media is continuing and intensifying today with AI-generated images. Far from escaping bias, the proliferation of artificial entities tends to accentuate stereotypes that are often sexist.”
Our societies can also find themselves transformed by deepfakes. “If doubt about political institutions is amplified enough, it can be a way of destabilizing them,” continues the sociologist. This is what happened in 2018 when Gabonese President Ali Bongo, weakened by a stroke, appeared in a New Year’s video. The abnormal movements of his eyes, actually caused by the stroke, sparked rumors that it was a deepfake and that he was actually dead. An attempted coup followed. Sometimes it is not only the dissemination of false content that is unsettling, but the very idea that it exists.
AI versus AI
In response to these abuses, the Multimedia Signal Processing Laboratory at EPFL is launching a collaboration with the School of Criminal Sciences at the University of Lausanne. Touradj Ebrahimi, a professor at EPFL, is the head of the group. “Eight years ago, we became interested in artificial intelligence to find out how it could help us secure content. Then we discovered that AI could itself create modified versions, so we started using it to detect these modifications.” The difficulty? The emergence of a technology called a GAN, or Generative Adversarial Network, which creates ultra-realistic content by training two competing neural networks in parallel. The first network generates synthetic content, while its opponent tries to detect whether it is real. Each improves the other, and together they generate content that is increasingly difficult to detect.
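To make the adversarial idea concrete, here is a minimal sketch of a GAN training loop in PyTorch. It is illustrative only: the toy one-dimensional “real” data and the tiny networks are assumptions made for the example, not the EPFL lab’s actual models.

```python
# Minimal GAN sketch: two networks trained against each other.
# Illustrative toy example -- not the EPFL lab's actual models.
import torch
import torch.nn as nn

# Generator: turns random noise into synthetic samples.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores whether a sample looks real (1) or fake (0).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, 1) * 0.5 + 2.0   # toy "real" data: N(2, 0.5)
    fake = G(torch.randn(32, 8))             # the generator's forgeries

    # 1) Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = loss(D(real), torch.ones(32, 1)) + \
             loss(D(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss(D(fake), torch.ones(32, 1))  # wants fakes scored "real"
    g_loss.backward()
    opt_g.step()
```

As the loop runs, the discriminator’s improving judgment forces the generator to produce ever more convincing fakes, which is exactly why GAN output becomes increasingly difficult to detect.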
In the lab, engineers use three approaches to determine whether content is authentic. First, they try to detect the style of the AI used to produce it. Depending on the software used, the model employed can be recognized… by AI. “We fight fire with fire,” he explains. Second, fact-checking relies on a network of trusted verifiers (a set of actors, methods, and systems) to determine whether content comes from another context: “For example, if a photo relates to an event currently taking place in Ukraine, but a trusted verifier says they saw it three years ago, it’s likely to be fake content.” The final technique is provenance technologies, which attempt to detect a signature left deliberately in the content at its creation and through its subsequent modifications. “It’s as if we were accumulating the trajectory of a piece of content and could then trace it back.” As part of a project funded by [seal] to promote the Canton of Vaud, Touradj Ebrahimi’s laboratory is working to improve these technologies in collaboration with the University of Lausanne, which generates data, and the start-up Virtuosis, which markets the technology.
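The provenance idea can be sketched in a few lines: sign a hash of the content when it is created, then verify the signature later to confirm the content has not been altered. The sketch below uses only Python’s standard library and a shared-secret HMAC for brevity; real provenance standards (such as C2PA) use public-key signatures and richer manifests, and every name and value here is illustrative.

```python
# Provenance sketch: attach a verifiable signature to content at creation,
# then check it later. Shared-secret HMAC used for simplicity; production
# provenance systems use public-key signatures. Names are illustrative.
import hashlib
import hmac
import json

SECRET = b"creator-signing-key"  # illustrative; would be a managed key

def sign_content(data: bytes, origin: str) -> dict:
    """Build a small provenance manifest: content hash plus signed metadata."""
    manifest = {"sha256": hashlib.sha256(data).hexdigest(), "origin": origin}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_content(data: bytes, manifest: dict) -> bool:
    """True only if the content is unmodified and the signature is genuine."""
    if hashlib.sha256(data).hexdigest() != manifest["sha256"]:
        return False  # content was altered after signing
    payload = json.dumps(
        {k: manifest[k] for k in ("sha256", "origin")}, sort_keys=True
    ).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

photo = b"...image bytes..."
m = sign_content(photo, origin="camera-xyz/2025-09-01")
print(verify_content(photo, m))         # True: provenance intact
print(verify_content(photo + b"!", m))  # False: content modified
```

Chaining a new signed manifest onto each edit is what lets such systems “accumulate the trajectory” of a piece of content, as Ebrahimi describes.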

A broad interpretation of the law is needed
So there are technological methods for detecting deepfakes. But is technology enough? “The problem is global, so we need global solutions,” says Touradj Ebrahimi. “We also need to think about international rules to regulate and legislate where necessary.” With regard to Swiss law, Quentin Jacquemin, a doctoral student in law at the University of Lausanne, explains that there is no legal definition of deepfakes, nor is there any law dedicated to the phenomenon. To crack down on them, the usual rules of civil and criminal law must therefore be applied, while taking “a broader view than that offered by case law.”
In criminal law, for example, pornographic deepfakes are prohibited insofar as they constitute offenses such as identity theft, sexual assault, or defamation. But the law is unclear in some cases, the doctoral student continues: “It is not clear that the pornographic nature of an AI-generated deepfake depicting a naked child can be established.” Indeed, the Federal Court currently holds that it is not pornographic if the child is not in a provocative position or if the author did not exert any influence on the child during the shooting. “The interpretation needs to be broadened to include new technologies,” he continues.
When it comes to deepfakes that aim to undermine political processes, there is a “real gap,” says Quentin Jacquemin: only one article comes into play, and it requires a very broad interpretation to punish deepfakes that destabilize a political opponent. The problem, he says, is that Switzerland tends to work reactively. “Given the enormous impact that political destabilization can have, I think it would be worthwhile to adopt a preventive measure, even if it means removing or modifying it later if it is not used.”
Between threat and appropriation
Whether technological or regulatory, tools exist to combat the risks posed by AI-generated content and prevent it from destabilizing our societal, political, and individual functioning. Other questions, sociological or even philosophical, remain open, such as those posed by sociologist Olivier Glassey: “How do deepfakes impact us and how can we manage our beliefs and values in this environment? How can we appropriate them?”
Article written by journalist Marion de Vevey. For more articles from l’Uniscope: https://wp.unil.ch/uniscope/
