Generative artificial intelligence is revolutionizing digital content creation at an unprecedented speed. In just a few years, we have moved from basic filters and automated retouching to systems capable of producing hyper-realistic images, recreating non-existent human faces, or simulating scenes that never happened.
This technological leap has driven industries such as advertising, entertainment, graphic design, and education. However, it has also opened a concerning door: the possibility of creating images of real people without their knowledge or consent.
On February 23, 2026, the United Kingdom’s Information Commissioner’s Office (ICO), together with more than 60 international privacy authorities, issued a joint statement warning about the risks associated with AI-generated images representing identifiable individuals without authorization.
This coordinated action was not merely symbolic; it sent a clear signal that regulators consider it urgent to address the impact of generative AI on privacy, dignity, and fundamental rights. The warning comes amid growing global concern over deepfakes, digital manipulation, and the erosion of trust in visual content.
The ease with which credible fake images can now be generated has reduced the technical barriers that previously limited these abuses. What once required advanced editing skills can now be achieved with a simple text prompt on a publicly accessible interface.
Why Do Authorities Warn About AI-Generated Images?
Data protection authorities have made it clear that the fact that an image is created with generative AI does not eliminate its potential to violate fundamental rights. When a system produces an image representing an identifiable person—whether through their face, distinctive features, or a recognizable context—that image may constitute the processing of personal data.
Consequently, any use of generative AI that involves the direct or indirect identification of a person must comply with applicable data protection laws. The rapid expansion of generative AI reminds us that technological innovation is not above the existing legal framework.

The core concern is consent. Many platforms that integrate generative AI tools allow the creation of realistic images of individuals without their authorization; advanced synthesis capabilities can even recreate fake intimate scenes or compromising situations that never occurred.
The harm is not merely hypothetical: the dissemination of such content can seriously affect a person’s reputation, emotional stability, and safety. The debate is therefore no longer only technological; it is deeply legal and ethical.
There is also the issue of scalability. Unlike traditional digital editing, generative AI can produce thousands of images in minutes, multiplying the potential harm and making rapid response by victims and authorities more difficult.
Regulators warn that without adequate controls, generative AI could become a tool for widespread abuse, especially when combined with viral distribution platforms. This systemic risk grows with the technology’s ease of use and global accessibility.
Another relevant aspect is the reuse of publicly available data. Photos shared on social media can be used to train generative models or to create new synthetic representations of the people they depict.
Even if an original image was voluntarily published, its reuse to produce fake content does not automatically imply valid consent. This secondary use of data raises questions about the limits of consent in the digital age and about the responsibility of those who develop and exploit these technologies.
Deepfakes: The Greatest Risk of Generative AI
The term “deepfake” has become synonymous with advanced digital manipulation powered by generative AI. It refers to audiovisual content, created or altered by AI, that appears authentic but is in fact synthetic. While the underlying technology has legitimate applications in film, advertising, and historical recreation, its misuse presents significant risks.
One of the most harmful uses of deepfakes is the production of fake intimate images without consent. This phenomenon, amplified by the accessibility of generative tools, disproportionately affects women and public figures, although anyone can become a victim. The psychological impact of discovering a fake intimate image can be devastating, even if it is later proven to be artificially generated.
Beyond the intimate sphere, deepfakes can be used for political manipulation and disinformation. The ability to fabricate images of leaders or public figures in false situations threatens trust in democratic debate. When it becomes difficult to distinguish real from artificial content, one of the fundamental pillars of an informed society is weakened.
The risks are not limited to reputation. Deepfakes can also facilitate financial fraud and identity theft. Combined with other techniques, synthetic content can be used to create convincing fake profiles, deceive users, or even attempt to bypass biometric verification systems.
Legal Framework: What Does Current Law Say About AI and Privacy?
Despite the technological novelty of generative AI, authorities have emphasized that the existing legal framework already provides tools to address its risks. In Europe, the General Data Protection Regulation (GDPR) establishes that any processing of personal data must rest on a valid legal basis, such as explicit consent. If an AI-generated image allows a person to be identified, its generation and dissemination may constitute processing subject to the regulation.
The GDPR also imposes the principle of accountability, which fully applies to generative AI. Organizations that develop or use such systems must demonstrate that they have adopted adequate protective measures from the design stage, including conducting data protection impact assessments when a use may pose a high risk to the rights and freedoms of individuals.

The Digital Services Act, for its part, establishes specific obligations for digital platforms that integrate generative AI tools, including the rapid removal of illegal content and the mitigation of systemic risks. AI-generated images that violate rights may fall into this category, obliging platforms offering such services to act diligently.
Additionally, the AI Act introduces a risk-based approach to regulating artificial intelligence systems, including generative models. Although generative AI is not prohibited in itself, the regulation imposes transparency requirements and additional obligations where it may significantly affect individuals, reinforcing legal oversight over its development and deployment.
Special Protection for Minors and Vulnerable Groups
One of the most sensitive aspects highlighted by regulators is the protection of minors. The creation of synthetic images depicting children in intimate or harmful contexts can constitute a serious crime.
Even when no original real photograph exists, the ability of generative AI to produce realistic representations can have profound legal and psychological consequences. The ease with which faces and scenes can be recreated heightens authorities’ concern about abuse to the detriment of minors.
Minors do not always understand the scope of digital exposure. An innocently shared photograph can become the basis for later manipulation. Authorities insist that developers incorporate specific technical safeguards within generative AI systems themselves to prevent such abuse and limit the generation of harmful content.
Other vulnerable groups, such as victims of domestic violence or people in at-risk situations, may also be disproportionately affected. The dissemination of fake images can serve as a tool for intimidation, harassment, or blackmail, amplifying pre-existing vulnerabilities through the massive reach these tools allow.
What Do Regulators Require from Artificial Intelligence Companies?
Authorities have emphasized the need to integrate privacy by design into all generative AI development: systems must be built with potential risks in mind from the initial phase. It is not enough to react to scandals; possible abuses must be anticipated.
Transparency is also required. Users must know when they are interacting with AI-generated content. Clear labeling can help reduce misinformation and strengthen trust in environments where synthetic content is increasingly common.
Similarly, companies that develop or integrate generative AI must implement agile mechanisms for removing harmful content. Rapid response to abuse is essential to minimize the impact on victims; a slow or ineffective system can significantly aggravate the harm.
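To make the labeling idea concrete, here is a minimal, hypothetical sketch of how a platform might attach a tamper-evident "AI-generated" label to an image. Real deployments rely on open provenance standards such as C2PA Content Credentials and proper key management; the HMAC key, function names, and manifest fields below are illustrative assumptions, not any platform's actual API.

```python
import hashlib
import hmac
import json

# Hypothetical platform signing key; a real system would use managed keys
# and a standard such as C2PA rather than a raw HMAC.
SIGNING_KEY = b"platform-secret-key"

def label_generated_image(image_bytes: bytes, model: str) -> dict:
    """Build a tamper-evident manifest declaring that an image is AI-generated."""
    manifest = {
        "ai_generated": True,
        "model": model,
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return manifest

def verify_label(image_bytes: bytes, manifest: dict) -> bool:
    """Check that the label matches the image and was signed by the platform."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    if claimed.get("sha256") != hashlib.sha256(image_bytes).hexdigest():
        return False  # image was modified after labeling
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

image = b"\x89PNG...synthetic image bytes..."
label = label_generated_image(image, model="hypothetical-model-v1")
assert verify_label(image, label)             # untouched image passes
assert not verify_label(image + b"x", label)  # edited image fails
```

The design choice worth noting is that the label binds to the image hash: stripping or editing the image invalidates the signature, which is what gives downstream viewers a basis for trusting the "AI-generated" disclosure.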
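One common building block for such removal mechanisms is a hash registry: once content is taken down, its fingerprint is recorded so identical re-uploads are blocked automatically rather than re-reported by the victim. The sketch below is an illustrative assumption, not any platform's real system, and uses exact SHA-256 digests for simplicity; production systems use perceptual hashes that survive re-encoding and resizing.

```python
import hashlib

class TakedownRegistry:
    """Illustrative sketch: record fingerprints of removed images so that
    identical re-uploads can be blocked without a new report. Real systems
    use perceptual hashing (robust to re-encoding), not exact digests."""

    def __init__(self) -> None:
        self._blocked: set[str] = set()

    def take_down(self, image_bytes: bytes) -> str:
        """Register a removed image and return its fingerprint."""
        digest = hashlib.sha256(image_bytes).hexdigest()
        self._blocked.add(digest)
        return digest

    def is_blocked(self, image_bytes: bytes) -> bool:
        """Check an upload against the registry before publishing it."""
        return hashlib.sha256(image_bytes).hexdigest() in self._blocked

registry = TakedownRegistry()
harmful = b"...bytes of a reported synthetic image..."
registry.take_down(harmful)
print(registry.is_blocked(harmful))         # True: re-upload is caught
print(registry.is_blocked(b"other image"))  # False: unrelated content passes
```

The point of the sketch is the workflow, not the hash function: the speed regulators ask for comes from checking uploads against the registry at publish time, so a victim's single report prevents thousands of repeat postings.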
The Ethical Challenge: Digital Identity in the AI Era
Digital identity has become an extension of personal identity. The possibility that third parties use generative AI to create false representations poses a profound ethical dilemma: the technology challenges the traditional notion of control over one’s own image, which can now be reconstructed, altered, or reinvented without consent.
The erosion of trust is another central problem. If images can be easily falsified, society may become skeptical even of authentic visual evidence. This phenomenon, known as the “liar’s dividend,” allows people accused of real actions to claim that the incriminating material is synthetic and therefore false.
Technological ethics demand a balance between innovation and responsibility. The goal is not to halt the progress of generative AI but to guide it toward respect for human rights and a use that does not compromise dignity or public trust.

The debate about privacy in the era of generative AI is only just beginning, and as generative models grow more sophisticated, the challenges will continue to evolve. Detecting synthetic content will be a constant race between those who create these images and those who seek to verify them.
Despite the rapid pace of technological advancement, the direction is clear: personal data protection and human dignity must occupy a central place in any generative AI development, and international cooperation will be essential to ensure coherent and responsible standards.
Ultimately, the question is not whether generative AI will continue to advance, but under what conditions it will do so. If responsibility and respect for fundamental rights are prioritized, generative AI can become a transformative and positive tool. Conversely, the lack of clear limits in the use of generative AI increases the risk to privacy and public trust.
The joint warning from authorities marks a milestone, representing a call to action for governments, companies, and citizens. In the era of deepfakes and AI-generated images, protecting privacy is not optional: it is an urgent necessity to preserve the integrity of our digital identity and trust in the global technological environment.
To ensure a safe and responsible approach to generative AI, ITD Consulting offers specialized solutions in technology risk management and privacy compliance. Contact us at [email protected] to discover how to protect your organization from the challenges of generative AI.