Artificial intelligence (AI) is more accessible than ever these days. Take a look at the “AI at the edge” series from element14 here.

 

Some researchers believe we’re approaching the limits of AI, while others have made a breakthrough that could keep it progressing. So, who’s right? Are AI and deep learning truly limitless? (Image credit: Shutterstock)

 

Artificial intelligence and deep learning have been pivotal in medical, scientific, and technological research. As time goes on, it seems AI has only gotten better, but how much farther can we go? For a while, AI and deep learning seemed limitless; now, that no longer appears to be the case. MIT researchers believe we’re reaching the computational limits of deep learning.

 

In a recent study, researchers from the MIT-IBM Watson AI Lab, Underwood International College, and the University of Brasilia found that progress in deep learning relies heavily on increases in computational power, and that continued progress will require either changes to existing techniques or an entirely new method. They analyzed 1,058 papers from the preprint server arXiv.org, as well as other sources, to understand the connection between deep learning performance and computation, paying particular attention to domains like image classification, object detection, question answering, named entity recognition, and machine translation.

 

They saw “highly statistically significant” slopes and “strong explanatory power” for all benchmarks except machine translation from English to German, where there was little variation in the computing power used. Object detection, named-entity recognition, and machine translation showed large increases in hardware usage with relatively small improvements in outcomes.
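
To picture the kind of analysis involved, here is a minimal sketch (with made-up numbers, not the study’s data) that regresses a benchmark’s error rate against training compute on a log-log scale and reports the slope and explanatory power the researchers refer to:

```python
# Illustrative sketch only: the figures below are hypothetical, meant to show
# the idea of regressing benchmark performance against computation.
import numpy as np
from scipy import stats

# Hypothetical (compute, error) pairs for an image-classification benchmark.
# Compute is in arbitrary units (e.g., total training operations).
compute = np.array([1e15, 1e16, 1e17, 1e18, 1e19])
error = np.array([0.20, 0.12, 0.075, 0.05, 0.034])

# Fit log(error) against log(compute): the slope says how quickly performance
# improves with extra computation, and r**2 is the "explanatory power."
slope, intercept, r_value, p_value, std_err = stats.linregress(
    np.log10(compute), np.log10(error)
)

print(f"slope = {slope:.2f}, r^2 = {r_value**2:.2f}, p = {p_value:.3g}")
# A shallow slope means each further gain demands a large multiple of compute,
# the pattern the study argues is becoming unsustainable.
```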

 

Though there have been major hardware improvements for deep learning, like Google’s tensor processing units, the researchers still think there isn’t much further to go along the current path because of the computational power it would demand. It would take a big breakthrough to keep AI progressing, and a team of scientists may have discovered one.

Researchers from George Washington University recently developed a new approach to AI that uses light instead of electricity to perform computations. The method improves the speed and efficiency of machine learning neural networks, and it could also help AI learn complex tasks without supervision. Their study showed that using photonic units within neural network processing units could let machine learning perform complex operations without increasing power demands.

 

“We found that integrated photonic platforms that integrate efficient optical memory can obtain the same operations as a tensor processing unit, but they consume a fraction of the power and have higher throughput,” said Mario Miscuglio, one of the paper’s authors.
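
For context, the operations in question are essentially the multiply-accumulate steps of a neural network layer. The sketch below (plain NumPy, not anything from the study) shows the matrix-vector product that accelerators like TPUs, and proposed photonic tensor cores, are built to speed up:

```python
# Illustrative only: the workhorse operation behind deep learning accelerators
# is the multiply-accumulate at the heart of a neural-network layer.
import numpy as np

def dense_layer(weights: np.ndarray, inputs: np.ndarray) -> np.ndarray:
    """One fully connected layer: a matrix-vector product plus a nonlinearity.
    Accelerators differ mainly in how they carry out the multiplies and sums:
    digital circuits in a TPU, interfering light in a photonic tensor core."""
    return np.maximum(weights @ inputs, 0.0)  # ReLU activation

# Tiny example: 4 inputs feeding 3 output neurons, with random weights.
rng = np.random.default_rng(0)
print(dense_layer(rng.standard_normal((3, 4)), rng.standard_normal(4)))
```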

 

This new development may help AI continue to progress. Still, as the MIT researchers pointed out, that kind of progress is only possible because it relies on a new method rather than more of the same computation. So, while we shouldn’t give up on AI’s progress, it’s important to keep in mind that it will take a lot of hard work to keep going.

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell