On February 4, 2025, Google announced a significant change in its ethical principles regarding the development and use of artificial intelligence (AI). Google, which had maintained a firm commitment to ethical responsibility in its AI products for years, removed explicit prohibitions from its guidelines concerning the use of technology for military and surveillance applications.
This modification, which abandons earlier promises not to use AI in weapons design or in systems that could cause general harm, marks an important shift in the company's ethical trajectory.
The change was first reported by The Washington Post and quickly spread through the tech community. According to the new principles, Google's AI products "will align with human rights," but the document does not specify how this alignment will be ensured.
The ambiguity of the new wording sparked a series of reactions both inside and outside the organization, casting doubt on Google's ability to adhere to its own values of social responsibility and protection of fundamental rights. Below, ITD Consulting tells you everything about this change at Google.
Google and its ethical principles in AI: a groundbreaking change
When it acquired DeepMind in 2014, Google adopted a clear stance on the ethics of artificial intelligence, publicly committing not to develop technologies that could be used in military or mass surveillance applications. In 2018, Google formalized these commitments in a set of ethical principles governing the development of its AI technologies.
At that time, Google stated it would not design or deploy AI for uses that could cause "general harm" or that contravened internationally accepted human rights principles. The change in principles that Google implemented in February 2025 represents a radical departure from its original stance.
Google's new guidelines no longer prohibit the use of AI in the development of military technologies or surveillance systems. In fact, they remove any explicit commitment against using AI to create weapons or tools that could cause harm. In place of these restrictions, the new principles statement asserts that Google's AI products "will align with human rights," without providing clear details on how that alignment will be achieved.

The removal of these restrictions has sparked an intense debate about Google's ethical responsibility in developing technologies that can have a profound and lasting impact on society. While the global competition for AI supremacy is a key factor in the evolution of these policies, it is also essential that tech companies maintain high ethical standards to protect users and prevent the abuse of these technologies in sensitive areas such as defense, privacy, and human rights.
Internal reactions within Google
The change in Google's AI guidelines did not go unnoticed within the company. Employees, many of whom had supported the previous ethical AI policies, began expressing their dissatisfaction in internal forums. The most notable reaction occurred on Memegen, a message board used by Google employees, where several employees shared memes and critical comments about the company's new stance.
In one of the messages, an employee posted an image of CEO Sundar Pichai searching on Google “how to become a weapons contractor,” suggesting the company was willing to abandon its ethical commitment to secure lucrative contracts from the military sector. In another post, an employee used a comedic sketch referencing Nazi soldiers, ironically asking if Google was now becoming "the bad guy" in history.
Although these posts came from a small number of employees, they reflect growing concern among Google's workers about the company's ethical direction. The internal reaction also echoes the 2018 employee revolt, when staff opposed Project Maven, a Pentagon program that used AI to analyze drone imagery. At that time, internal pressure was enough to make Google walk away from the project. Now, with the removal of its ethical principles on military uses of AI, employees fear the company is abandoning its social responsibility.
This shift in Google's direction also highlights a broader tension within the tech industry, where companies must balance innovation and global competition with the ethical values they publicly support. The ethical commitments Google had made seven years ago, largely driven by internal protest, no longer seem to carry the same weight as the company faces market and government pressures demanding rapid advancements in artificial intelligence, especially in areas related to defense and national security.
The global competition for AI leadership
Google’s shift in its artificial intelligence policy cannot be understood without considering the current geopolitical context. In a world where artificial intelligence is emerging as one of the most powerful and transformative technologies of the 21st century, tech companies are in a frantic race to dominate the market and lead innovation.
The United States, China, and other nations are fighting to establish themselves as AI powerhouses, and companies like Google, Microsoft, Amazon, and Meta are at the heart of this competition.
In a post on Google's official blog, Demis Hassabis, CEO of Google DeepMind, and James Manyika, senior vice president of technology and society, argued that the update to the company's AI principles was necessary because of increasing global competition. According to them, the "increasingly complex geopolitical landscape" requires closer cooperation between companies and governments in developing AI, especially in areas related to national security and defense.

Google’s argument is based on the premise that democracies should lead the development of artificial intelligence, but also that tech companies must collaborate with governments to ensure that AI applications are used responsibly. While collaboration between the private and public sectors is key to technological progress, the use of AI in military and surveillance applications raises serious ethical concerns, especially in a world where the risks of abuse of power are high.
The development of AI for national security is not an isolated phenomenon. Other major tech companies, like Amazon and Microsoft, have already signed significant contracts with governments and military agencies to apply their AI technologies in data analysis and security.
These agreements have sparked controversy due to the ethical implications of using AI in the context of war, surveillance, and social control. Google’s stance in 2025 seems to be aligning with that of its competitors, suggesting that geopolitical pressures are playing a crucial role in the redefinition of its ethical principles.
The use of AI in the military and surveillance fields: risks and consequences
One of the most concerning aspects of Google’s policy change is the possibility that artificial intelligence could be used in the creation of autonomous weapons and mass surveillance systems. In the military realm, AI could be used to develop autonomous systems capable of making decisions without human intervention, posing significant risks to civilian safety and the ethics of warfare.
The use of AI in autonomous weapons has been debated for years, as it raises the possibility that decisions about life and death could be made by algorithms rather than people. Autonomous systems could be designed to identify and eliminate targets without human operators’ supervision, raising concerns about the accuracy and ethics of these decisions.
Furthermore, AI algorithms may be subject to biases or errors that could result in harm to innocent people, further amplifying the risks associated with their use. In the surveillance domain, AI could be employed to create mass monitoring systems capable of tracking individuals in real-time, infringing on their right to privacy.
Governments may use these technologies to control and monitor their citizens, potentially leading to abuses of power and the creation of authoritarian states. If used irresponsibly, artificial intelligence could undermine individual freedoms and fundamental human rights.
Google’s shift in its ethical principles opens the door to the possibility that the company could participate in the development and deployment of AI technologies for military and surveillance purposes, risking the protection of human rights and civil liberties. While collaboration between tech companies and governments can be beneficial in terms of security, it is essential to implement clear and strict regulations to ensure that AI is not used in ways that harm society.
The importance of AI regulation
Google’s change in stance highlights the urgent need to establish clear and effective regulations regarding the development and use of artificial intelligence. While tech companies may adopt ethical principles to guide the development of their products, these voluntary guidelines are insufficient to ensure that AI is used responsibly and that human rights are respected.
International laws and regulations are crucial in establishing a normative framework that ensures AI is used for the benefit of humanity and not to perpetuate abuses of power or violations of fundamental rights. AI regulation must address the specific risks posed by this technology in military, surveillance, and other sensitive contexts.
Governments must work together to establish standards that guide the ethical development of AI, ensuring that it is not used for purposes that could endanger security, privacy, or human dignity. Tech companies, including Google, must be subject to these regulations to ensure that their AI products are used responsibly.

Google’s ethical shift regarding its artificial intelligence principles marks a significant turning point in the tech industry. The removal of Google’s commitments to not use AI in weapons and surveillance highlights the commercial and geopolitical pressures that tech companies face today.
However, this change at Google also raises important questions about the ethical responsibility of companies in the development of technologies that have the power to change the course of history. As the competition for AI leadership intensifies, it is crucial that companies like Google do not lose sight of the core values that should guide the development of these technologies.
Artificial intelligence has the potential to significantly improve our lives, but only if it is developed and used ethically, responsibly, and with respect for human rights. AI regulation and oversight will be essential to ensure that its impact on society is positive and to prevent its use for purposes that could harm individuals and nations.
Ultimately, the responsibility of tech companies like Google should not be limited to the pursuit of commercial profit but should extend to the protection of human rights and fundamental freedoms. Artificial intelligence must be a tool for the common good, not for abuse or exploitation.
The future of AI will depend on the decisions we make today, and it is essential that these decisions are based on solid principles and a commitment to global well-being. If you want to learn more about the latest innovations in AI and the role that Google is playing with this ethical shift, write to us at [email protected]. We offer the technological advisory your company needs to stay at the forefront.