
Two recent developments in AI may pose a threat to human poker players and workers. How much can we trust AI? (Photo from Future of Life Institute)

 

Is AI out to take over the world? Maybe not just yet, but recent developments show it may be a bigger threat to humans than we thought. First they wanted to take our entry-level jobs, at least according to George Zarakadis; now they beat us at poker. For the first time, Libratus, an AI developed by Carnegie Mellon University, was crowned the victor of a heads-up, no-limit Texas Hold'em competition in Pittsburgh titled Brains vs. Artificial Intelligence: Upping the Ante. Competing against Jason Les, Dong Kyu Kim, Daniel McAulay, and Jimmy Chou, the AI faced tough competition, and the match lasted 20 days. But in the end, Libratus was declared the winner, up $206,061 in chips on the final day for a total pile of $1,766,250. That's a lot of dough for an AI with nothing to spend it on.

 

So, how did Libratus beat out four human players? According to Carnegie Mellon University professor Tuomas Sandholm, it had nothing to do with luck and everything to do with science. The AI uses a series of algorithms that interpret the rules and restrictions of a given situation, then works out the best way to approach it when it doesn't know what the other players know. Sandholm revealed that Libratus analyzed the holes the other players found in its strategy, then used a companion supercomputer each night to "algorithmically patch the top three." Since it could update its strategy between hands and matches, Libratus held the advantage. But the humans didn't do too badly: they split a pot of $200,000 based on how well each performed against the AI. And since it wasn't an easy match, none of the players were upset about being beaten by a computer.
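The nightly "patch the top three holes" routine Sandholm describes can be pictured as a simple loop: rank the parts of the strategy that lost the most chips that day, then recompute only those. This is a minimal illustrative sketch, not Libratus's actual code; every name here (`nightly_patch`, `resolve_subgame`, the hand-history fields) is hypothetical.

```python
# Illustrative sketch only -- not Libratus's real implementation.
# Idea: each night, rank the "holes" (strategy branches the humans
# exploited most, measured in chips lost) and re-solve the top three.

def resolve_subgame(branch):
    # Placeholder for the expensive overnight equilibrium recomputation
    # that the real system ran on a companion supercomputer.
    return f"refined-{branch}"

def nightly_patch(hand_histories, strategy, top_n=3):
    # Tally how many chips each strategy branch lost during the day.
    losses = {}
    for hand in hand_histories:
        branch = hand["branch"]  # which part of the game tree was played
        losses[branch] = losses.get(branch, 0) + hand["chips_lost"]

    # The top-N most-exploited branches are the holes to patch.
    holes = sorted(losses, key=losses.get, reverse=True)[:top_n]

    # "Patch" each hole by recomputing that branch's strategy.
    for branch in holes:
        strategy[branch] = resolve_subgame(branch)
    return holes
```

The point of the design is that the AI never tries to fix everything at once; it spends its overnight compute budget only on the few weaknesses its opponents actually found that day.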

 

Maybe an AI winning a poker match isn't that alarming, but using one to monitor you at work is. That's what StatusToday, a firm in London, is doing. It recently joined a security accelerator run by the UK's GCHQ intelligence agency, and its system tracks employees' work habits. The AI collects metadata to learn how companies, departments, and employees usually work, then flags any unusual behavior. In theory, this isn't such a bad idea: the goal is to catch employees doing things they aren't supposed to, like stealing company data or poking around departments where they have no business. If someone starts copying a bunch of files they normally don't touch, the system will flag it and alert supervisors. It also helps ensure ex-employees can't take anything with them when they leave the company. It may be ideal for cyber-security threats, but it does raise the problem of privacy.
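The "learn a baseline, flag the unusual" approach described above is a classic anomaly-detection pattern. StatusToday has not published its algorithm, so the following is only a toy sketch of the idea: compare today's activity (say, file accesses) against an employee's own historical baseline and flag large deviations. The function name and threshold are assumptions.

```python
# Toy anomaly-detection sketch -- NOT StatusToday's actual system.
# Compare today's metadata count against an employee's own baseline.

from statistics import mean, stdev

def flag_unusual(history, today, threshold=3.0):
    """history: past daily file-access counts for one employee.
    today: today's count.
    Returns True when today's activity sits more than `threshold`
    standard deviations above the employee's usual behavior."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        # No variation in history: anything different is unusual.
        return today != mu
    return (today - mu) / sigma > threshold
```

For example, an employee who normally touches about 10 files a day would be flagged after copying 80 (`flag_unusual([10, 12, 9, 11, 10], 80)` returns `True`), while an ordinary busy day of 13 would pass. Note how even this toy version illustrates the privacy concern: the baseline is per-person, so the system necessarily builds a behavioral profile of each employee.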

 

AI monitoring can easily be abused to track employees on the job, especially their productivity. Take a short break to check Facebook? The system may flag it. Strike up a chat with someone from a different department? The system may flag it. The problem is that these innocuous activities will look abnormal to the system, making employers question their workers. Then there's the problem of companies spying on employees without telling them. If this kind of system is going to become commonplace, companies need to be upfront about it and explain what they're doing with it. Otherwise, it could lead to a lot of problems: feeling like someone is looking over your shoulder 24/7 doesn't make for a healthy work environment. While the system could be useful, it raises a lot of questions companies need to consider before deciding to implement it. Would you want to be monitored at work? Do you feel threatened by these latest AI developments?

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell