We live in an era in which technology is no longer a complementary tool: it has become the very fabric of our social, educational, and professional life. For children and adolescents, their world—friendships, education, identity, entertainment—takes place increasingly in digital spaces. However, this open and valuable universe is not without risks: exposure to inappropriate content, addiction, misinformation, manipulation, harassment, and mental health issues.
On November 26, 2025, a significant milestone occurred: the European Parliament approved a non-binding resolution proposing the establishment of a common minimum age, across the entire European Union, for minors to access social media, video platforms, and AI chatbots. Although it is not yet law, it has sparked a profound debate about the future of young people’s digital lives in Europe.
This article by ITD Consulting analyzes in detail what this resolution proposes, why it arises now, what its potential benefits are, its challenges, and its future implications regarding AI.
What exactly does the European Parliament resolution propose?
The resolution proposes setting a minimum age of 16 years for a person to independently access social media, video platforms, AI-powered conversational chatbots, generative AI systems, virtual assistants, and any technology that uses AI to interact with users, produce content, or personalize experiences.
Minors aged 13 to 16 could only access these platforms with explicit consent from parents or guardians, especially when platforms use AI for content recommendation, automated moderation, or direct user interaction.
Children under 13 would be prohibited from accessing social media, AI-based tools and chatbots, video platforms driven by AI recommendation algorithms, and any digital service that uses machine learning to personalize or generate content.
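The three proposed tiers amount to a simple access rule. The following is a purely illustrative sketch: the age thresholds come from the resolution, but the function name, its interface, and the returned labels are hypothetical, not part of any actual platform implementation.

```python
def access_level(age: int, parental_consent: bool = False) -> str:
    """Illustrative mapping of the resolution's proposed age tiers.

    - 16 and over : independent access
    - 13 to 15    : access only with explicit parental/guardian consent
    - under 13    : no access to social media or AI-based services
    """
    if age >= 16:
        return "independent"
    if age >= 13:
        return "with_consent" if parental_consent else "denied"
    return "denied"
```

For example, `access_level(14, parental_consent=True)` would yield `"with_consent"`, while `access_level(12, parental_consent=True)` would still yield `"denied"`, reflecting that parental consent cannot override the under-13 prohibition.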

Additionally, it suggests further measures that would profoundly modify the operation of digital platforms dependent on AI:
- Prohibition of addictive designs, such as infinite scroll, continuous video autoplay, and mechanisms intended to increase user retention through AI algorithms that predict behavior.
- Elimination of targeted or manipulative advertising aimed at minors, especially advertising generated or segmented by AI, as well as restrictions on AI-amplified influencer marketing and “loot box” mechanics powered by predictive analysis.
- Blocking of sites and services that violate the rules, including platforms that fail to verify users’ ages properly or that use AI in opaque or risky ways for minors.
- Regulation of generative AI tools, preventing the production and dissemination of content that is false, risky, or inappropriate for minors, especially deepfakes, manipulated images, and AI-generated sexualized content.
The resolution is not legally binding; it is a declaration of political intent. To become mandatory law, the European Commission would need to present a legislative proposal, which would then need to be negotiated and adopted by member states. This implies a lengthy process, but the resolution establishes a firm stance on the importance of the topic and on the role that AI systems will play in the digital protection of minors.
Why now? — Context and reasons for the measure
The initiative does not arise in a vacuum: it responds to accelerated technological change, public health concerns, and a growing consensus that minors are exposed to digital risks amplified by AI-based algorithms, personalization systems, and AI tools that operate without sufficient supervision. This exceeds the current capacity for human oversight, regulation, and analysis as AI expands across all digital domains.
1. Massive use of devices and networks by minors
Numerous European studies have shown a notable increase in the daily time minors spend on social media, video platforms, and messaging services, many of which use AI to recommend content, rank posts, and measure user behavior. Many families report that children aged 11 to 15 use mobile devices for several hours a day, even at night, affecting their sleep, concentration, and school performance.
At the same time, psychologists and health organizations have warned about the rise of symptoms linked to excessive use of social networks, especially those enhanced by AI that amplifies emotionally intense content: social anxiety, self-esteem problems, isolation, decreased physical activity, impulsiveness, and difficulty regulating emotions. Continuous exposure to AI algorithms that predict and exploit behavior patterns can further exacerbate these problems.
2. Emerging technologies: AI, chatbots, and an even more complex environment
The emergence of advanced chatbots and AI conversational agents has added a level of risk that did not exist just a few years ago. These tools can maintain long conversations, adapt to individual users, mimic empathy, and even build emotional relationships with minors thanks to their predictive capabilities.
The European Parliament considers that this type of AI-driven interaction can create psychological bonds that adolescents are not fully prepared to handle. There is also concern that AI can provide incorrect, confusing, biased, or inappropriate responses without the minor having the critical capacity to distinguish safe from risky content. Additionally, generative AI can create images, texts, or videos that young people may not always recognize as AI-generated, increasing their vulnerability.
3. Platform design as a risk factor
One of the new aspects of the resolution is that it does not focus solely on content but also on design. It notes that many platforms are deliberately built to maximize time spent, using AI-powered algorithms to detect patterns, predict behaviors, and keep minors engaged longer. This includes techniques such as:
- Constant rewards generated or managed by AI, similar to gambling dynamics.
- Algorithms that prioritize emotionally intense or extreme content to drive higher interaction.
- Constant automated notifications that cause interruptions and foster dependence.
The EU considers that these mechanisms particularly affect minors, who have less self-control and are more vulnerable to the social comparison and peer pressure that algorithmic feeds amplify.

4. Need for regulatory harmonization
Currently, each European country has different rules on minimum age, identity verification, and parental supervision; some rely on AI-based verification systems, while others do not yet regulate the use of AI for digital identification. This creates a fragmented ecosystem in which minors in one country can access services and content that those in another cannot.
The resolution proposes establishing a common standard to ensure equitable protection for all European minors, especially in an environment where AI and generative systems will continue to expand and increasingly define young people’s digital experiences.
Implications of this type of regulation
If it becomes law, the measure would have significant consequences for families, schools, platforms, and governments, especially as AI systems become increasingly central to minors’ digital lives. Among the expected benefits is greater protection: the regulation would reduce young people’s exposure to inappropriate content generated or amplified by AI, including violent, sexualized, or dangerous videos that recommendation systems could surface without adequate filters.
It would also reduce the risk of digital addiction, since many platforms using AI to retain attention are designed to maximize usage time. Furthermore, it is expected that minors could enter the AI-dominated digital ecosystem with greater cognitive maturity, allowing them to make more conscious decisions, and that cyberbullying—often amplified by AI and the ease with which AI can hide identities or automate harmful behavior—would decrease.
Additionally, the measure would imply much stricter responsibilities for technology platforms, which rely heavily on AI in their daily operations. Companies would need to implement robust age verification systems, create age-appropriate experience modes, or block access for the youngest users in environments where AI represents a risk.
They would also be required to redesign their services to remove addictive design elements, adjust their recommendation algorithms, and transform their business models to avoid relying on targeted advertising aimed at minors. All of this would require technological investment, algorithm audits and redesign, and adjustments to privacy, data handling, and regulatory compliance, raising the bar for how AI is developed and deployed on global platforms.
However, the technical, legal, and ethical challenges would also be significant. Age verification raises privacy dilemmas: how can a platform confirm a user’s age without intruding on their personal information? Added to this is the possibility that determined adolescents could bypass the controls using adult data, VPNs, or even AI-generated fake identities.
There are also debates about the impact of these measures on digital freedom, as some argue that limiting interaction on these platforms may restrict young people’s autonomy and their right to communicate, learn, and create in spaces where AI is the backbone. Finally, even if the law is passed, its real effectiveness will depend on states’ ability to supervise and audit AI systems, detect noncompliance, and apply consistent sanctions in an increasingly complex digital ecosystem.
The role of artificial intelligence: an unexpected challenge
The inclusion of AI chatbots in the regulation marks a profound cultural shift: it is no longer only about controlling static content, but about supervising dynamic, personalized conversations and the emotional bonds that AI can reinforce through simulated empathetic responses. Conversational agents can adapt to the user, remember patterns, mimic emotional closeness, and sustain prolonged dialogues, making AI an active participant in minors’ social development. This shift from traditional platforms to highly interactive AI experiences poses entirely new challenges, where AI not only informs but accompanies, influences, and in some cases conditions decisions.
Among the most relevant risks are emotional dependence on agents that simulate empathy, as well as the possibility that minors receive incorrect, unsafe, or inappropriate advice generated without human filters. There is also the risk of exposure to automatically generated content that lacks quality control, which can include images, texts, or videos depicting violence or sexual content, and the malicious use of AI tools to manipulate, harass, or deceive minors. The regulation recognizes that the contemporary digital ecosystem is intensely relational thanks to AI, and that these relationships can profoundly influence young people’s psychological, cognitive, and emotional development.
This regulatory step opens the way to broader AI-oriented policies, not only for minors but also for adults, institutions, companies, and governments that interact with AI systems daily. As AI becomes a structural component of digital life, issues such as emotional integrity, privacy, digital rights, risks of manipulation, transparency, and developer responsibility will become increasingly central to public debate. This regulation is not an endpoint but the beginning of a broader conversation about how we live, work, learn, and develop in an environment where AI will be present in all dimensions of social life.

The European resolution marks a true paradigm shift by recognizing that minors’ digital freedom must be balanced with their emotional security and psychological well-being, especially in an environment dominated by AI-driven platforms. It is no longer enough to assume that young people can navigate without risk: the current digital ecosystem, designed with algorithms that maximize interaction, was not built with their protection in mind.
Delaying the age of access may reduce risks, but it is insufficient if not accompanied by digital education in schools, technological resources for parents, comprehensive mental health policies, and more ethical, transparent, and responsible AI platforms. The goal is not to limit for the sake of limiting, but to ensure that AI-based tools work in favor of development, safety, and autonomy for those still forming judgment.
In this context, the conversation is just beginning. The European resolution is not a conclusion but the starting point of a global debate on how to build a digital environment where minors can grow, learn, and interact safely while engaging with AI systems that are increasingly present. For organizations, schools, companies, and governments, this challenge requires strategic decisions, technology audits, AI risk assessments, and reliable digital solutions.
If your institution or company needs professional advice to understand, implement, or adapt technology policies related to AI, digital security, or infrastructure modernization, ITD Consulting can help. We invite you to contact us and explore how we can strengthen your digital environment: write to us at [email protected] and let’s discuss how to take your technology strategy to the next level.