San Francisco, Jun 2: Google workers have received word that the internet titan will retreat from a deal to help the US military use artificial intelligence to analyze drone video following an outcry from staff, according to reports. The collaboration with the US Department of Defense was said to have sparked rebellion inside the California-based company.
An internal petition calling for Google to stay out of the business of war garnered thousands of signatures, and some workers reportedly quit to protest a collaboration with the military.
The New York Times and tech news website Gizmodo cited unnamed sources as saying that an executive on Google’s cloud team told employees Friday that the company would not seek to renew the controversial contract after it expires next year.
The contract, reported to be worth less than $10 million to Google, was thought to have potential to lead to more lucrative technology collaborations with the military.
Google did not respond to a request for comment.
Google has remained mum about ‘Project Maven’, which reportedly uses machine learning and engineering talent to distinguish people and objects in drone videos for the Defense Department.
“We believe that Google should not be in the business of war,” the employee petition reads, according to copies posted online.
“Therefore, we ask that ‘Project Maven’ be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”
The Electronic Frontier Foundation, an internet rights group, and the International Committee for Robot Arms Control (ICRAC) were among those who have weighed in with support.
“As military commanders come to see the object recognition algorithms as reliable, it will be tempting to attenuate or even remove human review and oversight for these systems,” ICRAC said in an open letter.
“We are then just a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control.”
Google has gone on the record saying that its work to improve machines’ ability to recognize objects is not for offensive uses.
The EFF and others stressed the need for moral and ethical frameworks regarding the use of artificial intelligence in weaponry.
“The use of AI in weapons systems is a crucially important topic and one that deserves an international public discussion and likely some international agreements to ensure global safety,” the EFF said in a blog post on the topic.