Skynet is not coming. Yet.

Nikhil Thiyyar

As Facebook announces a feature that brings complex artificial intelligence software to our smartphones, it's important to recognise the deep pitfalls that come with the advent of AI


At F8, Facebook's annual developer conference, Mark Zuckerberg announced a new feature that was presented almost as a footnote but had Artificial Intelligence (AI) enthusiasts raving. The feature lets users attach digital artifacts to real-life objects through sophisticated image recognition software. Not surprisingly, AI nerds clogged machine learning Reddit threads and chatrooms with debates about the implications of the move.

Last year, when the Defense Advanced Research Projects Agency (DARPA) announced an AI hacking challenge, Elon Musk tweeted that the challenge was the first in a series of steps that would lead to the birth of a Skynet-like hive mind. For the uninitiated, Skynet is the fictional neural net-based conscious group mind and artificial general intelligence that serves as the main antagonist of the Terminator movie franchise, and it is often invoked as shorthand for the dangers of AI. Musk may have tweeted about the DARPA challenge in jest, but he has repeatedly warned that AI may be humanity's greatest existential threat. That fear may take some time to materialise, but one thing is already clear: we do not understand how AI works.

Behind AI is the basic idea that an artificial neural network can mimic the human brain's neocortex. The concept is decades old, and for a long time progress in the field was stymied by a lack of capacity, which is to say a lack of brute processing power. Until the dawn of the new millennium, AI was mostly confined to the realm of science fiction. As Moore's law kicked in and the processing power available to AI researchers increased exponentially, the field began to leap forward. In the past couple of years, AI has been used for image captioning, speech recognition and language translation.
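
For readers who want a concrete picture of what an "artificial neural network" actually is, the toy Python sketch below shows the core idea in its simplest form: layers of weighted sums passed through non-linear activation functions. The layer sizes, random weights and input are all made up for illustration; this is a sketch of the concept, not any real system.

# A minimal sketch of the neural-network idea: layers of weighted sums
# passed through non-linearities, loosely echoing networks of neurons.
# All sizes and weights here are arbitrary, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    # Each "neuron" computes a weighted sum of its inputs and fires
    # through a non-linear activation (tanh here).
    return np.tanh(x @ weights + bias)

# Toy network: 4 inputs -> 8 hidden units -> 1 output, random weights.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

x = rng.normal(size=(1, 4))                  # a single made-up input
print(layer(layer(x, w1, b1), w2, b2))       # training would tune w1, b1, w2, b2

Training amounts to nudging those weights, millions or billions of them in modern systems, until the outputs match known examples; the brute processing power mentioned above is what makes that nudging feasible at scale.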

This does not take away from the fact that we need to find ways to make deep learning techniques more understandable to their creators and more accountable to their users. Relying on a black-box method won't help. This is what happened when medical researchers at Mount Sinai hospital decided to apply deep learning to patient records. Almost 7,00,000 records were fed to a program called Deep Patient. As new patients were admitted to the hospital, the program quickly diagnosed the ailments they were suffering from. The only problem: Deep Patient did not offer any rationale for its predictions. In other words, the program worked with eerie precision, but no one could figure out how or why it did so. It is discomfiting that, as AI technology evolves, its creators may no longer have control over it. The field of AI has techniques, such as neural networks and evolutionary programming, which have grown in power with the slow tweaking of decades. But neural networks are opaque: the user has no idea how the network is making its decisions, and it cannot easily be rendered transparent.
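
Deep Patient's actual model and data are not public, so the small Python sketch below is only a hypothetical stand-in for the black-box pattern described above: a neural network that hands back confident predictions while its inner workings are just thousands of learned numbers. The fabricated dataset, the layer sizes and the use of scikit-learn are all assumptions made purely for illustration.

# Hypothetical illustration of the black-box problem; this toy classifier
# merely stands in for systems like Deep Patient, whose code is not public.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Fabricated stand-in data: 1,000 "patients", 20 numeric features each.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X, y)

# The network returns a confident answer for a new "patient"...
print(model.predict_proba(X[:1]))

# ...but the only "explanation" on offer is a pile of learned weights,
# with no human-readable rationale attached to the prediction.
print(sum(w.size for w in model.coefs_), "learned weights")

The point is not that such a model is useless, but that nothing in those weights tells a doctor, or the model's own creators, why a particular prediction came out the way it did.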

Equally unsettling is the possibility that AI can go horribly wrong, as Microsoft's Twitter bot Tay did. Tay was an experiment in real-time machine learning. Among other things, Tay claimed the Holocaust never happened, made blatantly racist remarks and used offensive terms to describe a prominent female game developer. Google similarly had to face the public embarrassment of its image recognition software tagging black people as gorillas. When AI goes wrong, it can also have deadly consequences. In 2015, a robot designed to grab auto parts at a Volkswagen plant grabbed and killed a man instead. This may be an extreme case, but on an everyday basis the shortcomings of AI are there for all to see and experience. Spam filters block important emails, GPS gives faulty directions and tells you to take a left on a bridge, machine translations corrupt the meaning of phrases, and autocorrect replaces the word you wanted with a wrong and sometimes embarrassing or offensive one.

As AI permeates every aspect of our lives, it is important to ensure that the development of any superintelligent machine learning framework is in sync with human needs and takes our flaws into account. To paraphrase E.O. Wilson, the real problem of humanity is that we have Paleolithic emotions, medieval institutions, and god-like technology.