Last week a small group of about two dozen protestors, supposedly representing an organization called “Stop the Robots,” marched in front of the entrance to the annual South by Southwest tech festival at the Austin Convention Center. The group came complete with picket signs proclaiming “Humans are the future,” chants of “I say robot, you say no-bot,” and blue t-shirts bearing the slogans “Stop the robots” and “Humans make mistakes.”

Who were these people? Just a bunch of malcontents, technology Luddites, or religious zealots?

None of the above. It turned out to be a hoax—a viral marketing stunt for the dating app Quiver, which is part of another relationship/matchmaking app called Couple. Couple claims that instead of using matchmaking algorithms it uses your friends—humans—to find potential matches. And that, if anything, is at the core of their beef with AI systems.

The protest got media attention, with the little demonstration picked up by USA Today, Fox News and other outlets, not because of its catchy slogans but because some notable luminaries have recently spoken out against what they see as the dangers of artificial intelligence. Among them are theoretical physicist Stephen Hawking (yes, that one, the subject of the Academy Award-nominated film The Theory of Everything) and Elon Musk, CEO of both rocket-maker SpaceX and electric car manufacturer Tesla.

Musk, for example, speaking last October at the AeroAstro Centennial Symposium, told MIT students: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful.” He added, “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”


For his part, Professor Hawking told the BBC in an interview, “The development of full artificial intelligence could spell the end of the human race.”

Both Hawking and Musk, along with other scientists and entrepreneurs, signed an open letter promising to ensure that AI research benefits humanity. The letter, drafted by the Future of Life Institute and signed by dozens of academics and technologists, said we should seek to head off risks that could wipe out mankind. It called on the artificial intelligence research community not only to invest in research into making good decisions and plans for the future, but also to thoroughly check how those advances might affect society.

Of course, neither Musk nor Hawking can be described as anti-technology. Indeed, Musk, along with Facebook’s Mark Zuckerberg, has invested in Vicarious, a company aiming to build a computer that can think like a person by mimicking the part of the brain that controls vision, body movement and language. Musk has also put some of his cash into DeepMind Technologies, an AI company that has since been acquired by Google.

What do you think? AI clearly is, and is likely to remain, useful in areas such as speech recognition, image analysis, driverless cars, and robotic automation. But do intelligent machines need safeguards to keep humanity from a dismal future? And should we fear AI, or control it, going forward?