Apr 18, 2024
< 1 min read

Ask Sumsubers: Does the EU AI Act regulate deepfakes? 

Sumsub keeps getting questions from our followers about the specifics of regulatory compliance, verification, automated solutions, and everything in between. We’ve therefore decided to launch a bi-weekly Q&A series, where our legal, tech, and other experts answer your most frequently asked questions. Check out The Sumsuber and our social media every other Thursday for new answers, and don’t forget to ask about the things that interest you.

This week, our AI Policy and Compliance Specialist, Natalia Fritzen, will talk about the EU AI Act in relation to deepfake regulations.

Follow this bi-weekly series and submit your own questions to our Instagram and LinkedIn.

Does the EU AI Act regulate deepfakes?

The EU AI Act was approved by the European Parliament on March 13, 2024, and is set to enter into force 20 days after its publication in the Official Journal of the EU. The Act acknowledges the potentially negative and disruptive effect of “synthetic content” (including deepfakes) on modern societies. Yet the Act does not prohibit deepfakes in any way; rather, it sets transparency requirements for the providers of technologies capable of creating synthetic content, as well as for the deployers of such technologies.

When it comes to providers, Article 52 of the Act establishes that:

“1a. Providers of AI systems, including GPAI systems, generating synthetic audio, image, video or text content, shall ensure the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated. (…)”

This requirement mainly concerns watermarking. Watermarks are a transparency technology: they attach a “unique signature” to the output of an AI model that signals the output is AI-generated. The process of watermarking requires (i) teaching the AI model to embed watermarks in the outputs it produces; and (ii) the availability of algorithms that can detect and “read” the watermark.
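To make these two steps concrete, below is a minimal, purely illustrative Python sketch (it is not part of the Act, nor any standard or Sumsub product). It marks a generated PNG with a machine-readable provenance tag and later detects it; the key and value names are hypothetical, and real watermarking schemes embed the signal in the content itself rather than in metadata.

```python
# Illustrative only: the two watermarking steps, assuming Python with Pillow.
# Real schemes embed signals statistically in pixels or tokens; here PNG text
# metadata stands in for a "machine-readable marker".

from PIL import Image
from PIL.PngImagePlugin import PngInfo

MARKER_KEY = "ai_provenance"                      # hypothetical key name
MARKER_VALUE = "artificially-generated:model-v1"  # hypothetical signature

def embed_marker(image: Image.Image, dst_path: str) -> None:
    """Step (i): attach a machine-readable 'AI generated' marker."""
    meta = PngInfo()
    meta.add_text(MARKER_KEY, MARKER_VALUE)
    image.save(dst_path, pnginfo=meta)

def detect_marker(path: str) -> bool:
    """Step (ii): check whether the marker is present and readable."""
    img = Image.open(path)
    text_chunks = getattr(img, "text", {})        # only PNGs carry text chunks
    return text_chunks.get(MARKER_KEY) == MARKER_VALUE

if __name__ == "__main__":
    # Stand-in for an AI model's output: a blank 64x64 image.
    generated = Image.new("RGB", (64, 64), "white")
    embed_marker(generated, "generated_marked.png")
    print(detect_marker("generated_marked.png"))  # True: marker detected
```

Note that a metadata tag like this can be stripped by simply re-saving the file, which illustrates why the robustness concerns discussed next matter so much in practice.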

While watermarks have been the most commonly recommended remedy against deepfakes, several concerns have been raised about their effectiveness, including their technical implementation, accuracy, and robustness. For this provision to be effective, it is therefore desirable that the European Commission set certain standardization requirements for watermarks, in order to facilitate their use and increase their effectiveness.

Another question raised by Article 52(1a) relates to its enforceability. After all, a fraudster cannot be expected to insert watermarks in the fraudulent content they create. At the end of the day, the measures the Act offers against deepfakes are reactive rather than proactive, meaning the Act does not necessarily prevent the creation of malicious deepfakes.

Turning now to the requirements imposed on the deployers (users) of deepfake generators, Article 52(3) provides that:

“3. Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall disclose that the content has been artificially generated or manipulated… Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public on matters of public interest shall disclose that the text has been artificially generated or manipulated.”

As for these deployer requirements, the main open questions concern the circumstances in which the disclosure obligation is loosened, and how broadly those circumstances are defined. They cover cases where the content is an “evidently artistic, creative, satirical, fictional or analogous” work, but the Act does not specify the parameters of these cases, creating uncertainty that may only be resolved by future case law.

The Act also does not offer concrete measures against cases of non-compliance with its deepfake provisions. All in all, the EU AI Act is a first-of-its-kind regulation, and it is quite comprehensive and consequential. Nonetheless, its provisions around deepfakes cast doubt on whether it offers enough protection against them. As the Act evolves, much attention will be paid to its implementing acts and ancillary regulations, as well as to the courts’ legal interpretations of the Act.

Natalia Fritzen

AI Policy and Compliance Specialist

AI, Deepfakes, EU