The battle over the future of artificial intelligence continues as a number of Google employees ask their own company to stop supporting a Pentagon artificial intelligence program called Project Maven. The project analyzes imagery and could eventually be used to improve fully automated drone strikes on battlefields, prompting more than 4,000 Google employees to sign a petition urging the company to end its involvement. The extent of Google’s involvement is unclear and will likely remain so. However, by putting this petition in motion, Google employees may inspire workers at other companies to do the same.
Project Maven, more formally known as the Algorithmic Warfare Cross-Function Team, began in April 2017 and has since doubled its publicly documented funding to $131 million, making the controversy at Silicon Valley’s top tech company an expensive one. The Pentagon is also reportedly planning a new Joint Artificial Intelligence Center that will serve the US military and intelligence agencies as an extension of Project Maven. Amazon and Microsoft have also been noted as partners in the project.
Project Maven’s first goal, set for December 2017, was to create a system that helps analysts process drone video, using the internet company’s machine learning techniques to distinguish and track military targets. Project Maven will use leading-edge AI to give military and intelligence analysts access to surveillance algorithms via sophisticated cameras, and it should dramatically speed up the processing of information. All of that appears to be acceptable to executives in the tech community. The friction seems to center solely on “turning over” that analysis completely to machine learning systems, obviating the need for a human analyst in the process.
As machine learning and AI grow smarter at an ever-accelerating pace, the danger of arming machines with drone strike capabilities based on their own sense of targeting priority appears to be a real-world problem that would have been the stuff of sci-fi film plots only a decade ago. On the one hand, it raises ethical questions for philosophers to sort out; on the other, it clearly demonstrates the enormous power computing can deliver when used correctly. At NationalNet, we see many productive uses for AI and machine learning in our own mission to bring customers the fastest throughput with the greatest uptime imaginable.
Let’s all hope humanity finally figures out how to harness the constructive power of new technology to all of our advantage rather than the destructive side of the equation to all of our demise.