From Albania to Uruguay, with Norway among them: in a historically broad joint statement, 61 data protection authorities worldwide are directing strong criticism at the makers of generative AI tools, singling out deepfakes as the foremost threat to privacy and information integrity.
The Norwegian Data Protection Authority (Datatilsynet) is among the signatories, according to Digi.no, and the signal to the market is clear: technology companies that develop and offer tools for creating artificially manipulated content will face coordinated, international resistance.
What exactly is the problem with deepfakes?
Deepfakes – digitally manipulated video, audio, or images that portray real people in situations they were never in – have existed for some years, but the quality and availability of the tools have exploded in the past year, making the problem far more pressing than before.
Data protection authorities point to a number of serious consequences: the spread of disinformation, identity theft, fraud, and not least non-consensual intimate images – so-called “deepfake pornography” – which severely affects individuals.
– This underscores the seriousness of the development, according to sources covering the global initiative.

Existing legislation already applies – but is rarely enforced
A central point of the initiative is that deepfakes involving recognizable individuals are already regulated under existing data protection legislation. In Europe, this means GDPR.
When a deepfake tool uses someone's face, voice, or body in a recognizable way, it is by definition the processing of personal data – and this requires a legal basis, in most cases explicit consent from the person concerned.
The Dutch Autoriteit Persoonsgegevens (AP) and the European Data Protection Board (EDPB) have previously emphasized this point. Biometric data, such as facial features, is a particularly sensitive category under GDPR Article 9, and the processing of such data is generally prohibited without explicit consent.

The EU AI Act sets new standards
The EU AI Act, which phases in from 2025 onwards, introduces binding requirements that AI systems generating deepfakes must clearly label the content as artificially produced. Violations of the most serious provisions can trigger fines of up to seven percent of a company's global annual turnover.
China, for its part, has introduced its own “deep synthesis” rules that require watermarking or visible labeling of all AI-generated content. The UK's Online Safety Act of 2023 obliges platforms to remove non-consensual intimate images, including deepfakes.
Detection lags behind technology
A pervasive problem highlighted in the analysis accompanying the initiative is that the technology for detecting deepfakes still lags far behind the technology for creating them. This underscores the need for regulation and for digital literacy among the population, not just for technological solutions.
The collective message from the data protection authorities is that the industry cannot wait for detection tools to catch up. Responsibility must be placed on those who create and distribute the tools – not solely on the consumer or the affected individual.
What does this mean for Norwegian businesses?
For Norwegian companies and developers working with generative AI, the message is clear: the Norwegian Data Protection Authority's participation in this international initiative signals increased attention and likely stricter supervision. Businesses that process biometric data in connection with AI-generated content should review their legal bases and document their risk assessments.
GDPR already applies, the EU AI Act is on its way, and now 61 supervisory authorities have indicated that they are moving in the same direction — and that they are cooperating to enforce it.
