Title: Google Says It Won’t Make AI to Help Murder People
Blog Entry: In the wake of backlash over its involvement in a U.S. military drone program, Google CEO Sundar Pichai released a set of artificial intelligence principles on Thursday pledging that the technology company would never “design or deploy” AI to aid weaponized systems and surveillance. Instead, it said its AI applications would be socially beneficial, avoid bias, be tested for safety, have strong privacy protections, and be accountable.

Yet despite seemingly listening to the many critics of “Project Maven,” the Department of Defense-led operation to use AI to analyze bulk drone surveillance footage, Google said it will still work on contracts with the U.S. government and military.

“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” Pichai wrote. “These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue,” he continued. “These collaborations are important, and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.”

The CEO said Google would not make technologies “that cause or are likely to cause overall harm.” He wrote: “Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.” It is unclear what such benefits would be. Google did not respond to a request for comment.

Last month, hundreds of academics urged the Mountain View, California, company to abandon all work on the Maven project, which they argued could eventually aid targeted killing. In April, more than 3,000 Google staffers petitioned against the firm’s stance on offensive warfare. Gizmodo, which first reported the news, revealed that some employees had resigned over the matter. Google initially maintained the work was for “non-offensive purposes,” even though an internal memo from inside the project stated in black and white that it would help to “enhance military decision-making.”

Diane Greene, CEO of Google Cloud, confirmed in a blog post on Thursday that the company would not pursue follow-on contracts for Project Maven once the current contract expires in 2019. She rejected calls for Google to cancel its DoD work immediately, saying it needs to fulfill its obligations.

“There has been public focus on a limited contract we entered into in September 2017 that fell under the U.S. Department of Defense’s Maven initiative,” Greene wrote. “This contract involved drone video footage and low-res object identification using AI, saving lives was the overarching intent.”

She added: “There have been calls for Google to cancel the September 2017 contract with the [DoD]. I would like to be unequivocal that Google Cloud honors its contracts. We will not be pursuing follow on contracts […] and because of that, we are now working with our customer to responsibly fulfill our obligations in a way that works long-term for them and is also consistent with our AI principles.”