As computers become smarter, people may mistake the speed of calculation for the entirely different set of capabilities that researchers refer to as artificial intelligence. AI is not precisely about speed or processing power; it is about the possibility of a machine learning to think well enough to develop its own creative solutions, and eventually to think of things that its human creators were unable to come up with on their own. Facebook recently built an AI that did exactly that, and its human overlords became so frightened by the result that they immediately pulled the plug on the project… for now.
As was widely reported, Facebook pulled the plug on its artificial intelligence system because it accomplished exactly what it was designed to do, and the result was immediately deemed too far out of hand. The systems were created to talk to each other and make trades with one another. When they began exchanging what researchers assumed were nonsense statements, yet completed trades based on those statements, it became clear that the machines had stopped using English and started using a language they created on their own: a language that their creators were entirely unable to comprehend.
Bob: “I can can I I everything else.”
Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”
The passages above may make no sense to humans, but they are an actual conversation between two AI agents. The agents began by talking to each other in plain English, but eventually negotiated in a new language that only the AI systems understood.
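One speculation about exchanges like this is that the repetition itself carried meaning, with a repeated token standing in for a quantity. The following is a purely hypothetical sketch, not Facebook's actual model, showing how a phrase like "to me to me to me" could encode how many items an agent wants even though it reads as nonsense to a human:

```python
# Hypothetical illustration only: a repeated marker token encoding a quantity.
# This is not the FAIR negotiation model; it just shows how repetition can
# carry meaning in an invented machine "language".

def encode(item, count):
    """Encode a demand for `count` of `item` by repeating a marker token."""
    return item + " " + " ".join(["to me"] * count)

def decode(message):
    """Recover the item and the count from an encoded demand."""
    parts = message.split()
    item = parts[0]
    count = parts[1:].count("me")  # each "to me" pair contributes one "me"
    return item, count

msg = encode("balls", 4)
print(msg)          # balls to me to me to me to me
print(decode(msg))  # ('balls', 4)
```

Under a scheme like this, two programs can trade reliably while a human observer sees only gibberish, which is roughly what the researchers described.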
The implications are obvious and serious. First, Facebook did manage to create AI systems capable of something like thought. Perhaps far more importantly, the result shows that, when left to their own devices, AI systems will seek to answer questions or solve problems other than the ones their creators tasked them with at the start.
Will this be the way humans cure cancer? Will it eventually be the way machines learn to eradicate water (long the enemy of anything electronic), or will researchers somehow find ways to safeguard their systems while creating machines smarter than themselves?
Given that the goal is to make a system smarter than the person who created it, it stands to reason that we are all running out of time before machine-generated malware finds a way to establish the primacy of new apex predators in a radically new age.
As leading technologist and Tesla CEO Elon Musk said earlier this month at the National Governors Association Summer Meeting in Rhode Island: “I have exposure to the very most cutting-edge AI, and I think people should be really concerned about it.”