Google rules out using its AI technology for Pentagon’s weapon project

After facing backlash over its involvement in an Artificial Intelligence (AI)-powered Pentagon project “Maven”, Google CEO Sundar Pichai has emphasized that the company will not work on technologies that cause or are likely to cause overall harm.

About 4,000 Google employees had signed a petition demanding “a clear policy stating that neither Google nor its contractors will ever build warfare technology”.

Following the backlash, Google decided not to renew the "Maven" AI contract with the US Defence Department after it expires in 2019.

“We will not design or deploy AI in weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people,” Pichai said in a blog post late Thursday.

"We will not pursue AI in technologies that gather or use information for surveillance violating internationally accepted norms," the Indian-born CEO added.

“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas like cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue,” Pichai noted.

Google will incorporate its privacy principles in the development and use of its AI technologies, providing appropriate transparency and control over the use of data, Pichai emphasised.

In a blog post describing seven “AI principles”, he said these are not theoretical concepts but “concrete standards that will actively govern our research and product development and will impact our business decisions”.

“How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right,” Pichai posted.
