Google adapts to Donald Trump's populist demands

The influence of US President Donald Trump on big tech companies is an increasingly controversial issue, as his populist positions on regulation, censorship and competition have generated both support and rejection within the industry.

While some Silicon Valley companies have tried to distance themselves from his influence, others have sought to align themselves with his decisions in order to qualify for regulatory or tax benefits. These decisions run directly counter to those established by Joe Biden during the previous administration.
After taking office, Trump signed an executive order on Artificial Intelligence revoking previous government policies that, in the order's words, ‘impede American innovation in Artificial Intelligence’.

In this context, Google, a subsidiary of parent company Alphabet, has published a report entitled ‘Responsible AI: Our 2024 Report’, in which it takes stock of 2024 and explains the changes it plans to introduce so that neither Gemini nor DeepMind falls behind in the race to develop Artificial Intelligence.
The emergence of DeepSeek and China's growing influence in the sector, together with Trump's measures, have prompted Google to revise the AI principles it established in 2018. The report is the first update in six years, a period in which artificial intelligence technology has developed exponentially, a point underlined by Google CEO Sundar Pichai, who notes that billions of people now use Artificial Intelligence technology.

‘As AI development advances, new capabilities can present new risks. That's why we're introducing the first version of our Frontier Safety Framework: a set of protocols that help us anticipate the potential risks of powerful frontier AI models,’ the report states.
The safety processes included in this new Google framework cover identifying risks, implementing mitigation procedures and preventing a heightened risk of deceptive alignment.
There is currently global competition for leadership in Artificial Intelligence in an increasingly complex geopolitical landscape. In this context, Google states that ‘democracies should lead the development of Artificial Intelligence, guided by fundamental values such as freedom, equality and respect for human rights’.

For this reason, Google has updated its AI principles to focus on three new core tenets: bold innovation, responsible development and deployment, and collaborative progress.
The renewal of Google's Artificial Intelligence principles, which relaxes previous restrictions on the development of weapons and surveillance systems, reflects a shift in its ethical stance that, according to observers consulted by Al-Arab, appears to be influenced by the political environment and Trump's deregulation measures, which allegedly aim to shape the evolution of the Artificial Intelligence industry.
However, Google argues that the technology is advancing so quickly that any kind of limitation would put the company at a disadvantage against its competitors, both American and Chinese. Nevertheless, the decision raises concerns about its impact on international security, the balance between innovation and ethical responsibility, and human rights.