Senior Google executives defended a shift in the company's AI ethics policy that opens the door for its technology to be used in military applications, framing the change as necessary to protect the principles of democracy.

Various news outlets noted the updated Google AI Principles omit a pledge not to employ the technology for harmful purposes, including weapons development or surveillance, a commitment which had been fundamental to the company's approach since 2018.

Bloomberg, BBC News and others noted Google's latest principles drop a commitment to ensure AI is not used in ways "likely to cause" harm, pointing to weapons development as a key example.

In a blog post, the tech giant's SVP of research, labs, technology and society, James Manyika, and Demis Hassabis, CEO and co-founder of Google DeepMind, detailed the company's AI credentials and commitment to transparency, but added the technology's ongoing development required the company to adapt.

The executives noted emerging AI capabilities "may present new risks", as the technology had evolved from a "niche research topic in the laboratory to a technology that is becoming as pervasive as mobile phones and the internet itself", one used by "billions of people".

They argued governments, companies and other organisations should work together to develop principles which promote national security, among other goals.

Experts told Bloomberg the reworked governance principles would likely lead to AI being used in more nefarious ways. The news outlet noted, however, that Google's move comes amid a broader hardening of attitudes among large technology players, including the rollback of fact-checking and diversity programmes.