As technology improves, AI is taking on more and more applications intended to improve our lives. However, the rise of AI also brings its own set of threats to peace in the world. Above: objects NASNet identified during a test trial. (Image via NASA)

 

From the movie “Ex Machina” to the more recent “Upgrade,” the world has watched the evolution of Artificial Intelligence (AI) on screen. The concept of a machine intelligent enough to perform tasks considered human behaviors has evolved to the point that the machine can not only be proactive but also duplicate itself and adapt to new environments. Science fiction has a reputation for pushing the boundary of the possible, but this year Google proved to the world that AI technology is evolving faster than imagined. Exciting as this sounds, the pace of AI development raises a few questions about its possible applications that nobody really wants to talk about.

 

An AI that creates its own AI? How did that come about? During his presentation at Google I/O this year, Google’s CEO Sundar Pichai confessed that he was inspired by the movie "Inception." Although that movie is not directly about AI, Pichai felt that its idea of going ever deeper into a realm applied perfectly to innovation. He explained that to create AIs that are powerful and unique, Google’s team of engineers, mostly machine-learning PhDs according to him, had to hand part of the development work over to an initial AI. That “mother” AI is a controller neural network: through AutoML, it proposes and trains “child” neural networks, each built to perform a specific task. Performance data from each child is fed back to improve the controller, which in turn can perfect its next “child.”
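The propose-evaluate-feedback loop described above can be sketched in miniature. The code below is an illustrative toy, not Google's actual AutoML: the "controller" here is a plain random sampler over a tiny search space, and `evaluate_child` is a made-up scoring stub standing in for actually training a child network and measuring its accuracy.

```python
import random

# Hypothetical search space of child-network architectures.
SEARCH_SPACE = {"layers": [2, 4, 8], "width": [16, 32, 64]}

def evaluate_child(arch):
    """Stand-in for training a child network and measuring its
    validation accuracy. The formula is invented for illustration:
    it simply rewards deeper, wider architectures."""
    return arch["layers"] * 0.05 + arch["width"] * 0.002

def search(steps=20, seed=0):
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(steps):
        # Controller step: propose a candidate child architecture.
        arch = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        # Child step: "train" and score the proposed network.
        score = evaluate_child(arch)
        # Feedback step: performance data updates the controller
        # (here, just remembering the best proposal so far).
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score

best, score = search()
print(best, round(score, 3))
```

Real systems replace the random sampler with a learned controller (for example, a recurrent network trained with reinforcement learning on the children's scores), but the shape of the loop is the same.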

 

The advantage of AutoML is purely time-related. Programming an AI takes a long time, so programming multiple AIs to perform different tasks, or even one AI that multitasks, would be painstakingly time-consuming. So far, AutoML’s child NASNet is an AI for visual recognition, reportedly achieving 82 percent accuracy while being 4 percent more efficient than comparable systems. With such performance, NASNet is considered more powerful than purpose-built computer-vision systems. With AutoML, better tools can be created, such as visual support for self-driving cars or visual recognition for robots working in health care. Now, can espionage, propaganda, and cybercrime also benefit from Artificial Intelligence?

 

The answer is yes. In fact, every advance in the AI industry feeds a new level of crime. Just as the internet opened the door to online scams, advances in AI could make already existing crimes even more powerful and dangerous. The website Axios.com shared details of a new report by the Center for a New American Security that outlines the threats and possible ways to fight them.

 

Between governments and cybercriminals, it is hard to tell in whose hands AI is more dangerous. The fear is that automation, combined with AI, can be used to take down organizations or even societies, and it all stems from twisting the truth and people’s perceptions. Using AI, it is already possible to produce a video of events that never actually happened; according to one of the report’s authors, these are called “deepfakes.” With deepfakes, governments or political parties can run campaigns to mislead the public or spread fake news. Some might argue that fake news is not a new phenomenon, but that only proves the point: AI technology takes existing crime to the next level. Likewise, AI-assisted surveillance by corporations or governments can predict people’s likes and dislikes as well as their habits. Furthermore, AI will supercharge the art of social engineering, manipulating people into doing things they wouldn’t normally choose to do.

 

Luckily, the report suggests solutions. However, most of them rely on the goodwill of governments and corporations to police their own use of AI. While governments pass laws to regulate the applications and uses of AI, private AI engineering companies should evaluate their inventions and focus on their positive uses. But who is holding governments accountable?

 

While Sundar Pichai dreams of going deeper into AI development, the world is sinking into an era that grows darker and scarier. The fact that governments around the world are no longer the main funders of AI development projects opens the gate to a flood of applications, including malicious ones. It is true that some countries are taking measures to regulate the industry, but until those laws take effect, the world remains exposed to scams, hackers, and fakers. Will future generations know how to identify the truth? More than truth, it is freedom that is in danger.

 

See more news at:

http://twitter.com/Cabe_Atwell