In a major policy reversal, Alphabet Inc., the parent company of Google, has revised its artificial intelligence (AI) principles, removing earlier restrictions on developing AI for weapons and surveillance. The shift reflects the changing landscape of AI technology and its growing role in national security.

In 2018, Google set forth AI principles that explicitly ruled out applications likely to cause harm, including weapons development and surveillance uses that run afoul of internationally accepted norms such as United Nations standards. Those principles were adopted after protests within the company over its involvement in a US military drone project, which ultimately resulted in the non-renewal of that contract.

The updated principles, outlined in a Google DeepMind blog post by Demis Hassabis, Chief Executive Officer, and James Manyika, Senior Vice President, highlight the importance of democracies leading the development of AI technology. They advocate for collaboration among companies, governments, and organizations that share values such as freedom, equality, and respect for human rights to create AI that protects people, promotes global growth, and supports national security.

The updated guidelines no longer list specific banned applications. Rather, they emphasize the need for appropriate human oversight, due diligence, and alignment with international law and human rights. This approach seeks to balance the advancement of AI technology with ethical considerations and societal impact.

This policy change is in line with other tech companies that have incorporated AI into defense applications. For example, OpenAI has partnered with defense-technology company Anduril to build drone defense technologies for the US and its allies.

However, the decision has sparked internal and external concerns. Some Google employees have criticized the lack of staff involvement in the decision-making process and fear potential ethical compromises. Critics also warn about possible misuse of AI, which could lead to unforeseen harmful outcomes.

This evolution highlights the ongoing debate over AI's role in society, especially in sensitive fields such as national security and surveillance. As AI becomes increasingly pervasive, companies and governments face the challenge of leveraging its benefits while mitigating risks and adhering to ethical standards.

Google's policy adjustment is part of a broader trend in the technology sector, with companies revisiting their ethical frameworks to keep pace with the rapid growth and adoption of AI technologies. Balancing innovation, ethical implications, and social consequences remains a critical challenge as AI advances.
