Stanford University is opening a new institute to study and develop human-centered artificial intelligence technologies and applications. The Institute for Human-Centered Artificial Intelligence (HAI) will gather input from fields across the university, including business, engineering, the humanities, and finance. Its goal is to ensure artificial intelligence leads to a better future for humanity and to address the challenges and disruptions it may bring to society.
Stanford will also work with governments and non-governmental organizations that share this goal, as well as with companies in the technology, financial services, health care, and manufacturing sectors. Faculty from all seven of Stanford's schools will participate in the institute's research to gain a better understanding of AI's impact on the future.
Stanford HAI launched at a symposium on March 18th with guest speakers including Bill Gates, Gavin Newsom, Kate Crawford, Demis Hassabis, Eric Horvitz, Reid Hoffman, and Alison Gopnik. The institute plans to hire 20 new faculty members from fields including the humanities, engineering, medicine, the arts, and the sciences. HAI will also have a new 200,000-square-foot Data Science Institute building intended for collaborative studies. Among the issues the institute will examine is AI's impact on the workforce.
Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have designed a system that makes robots capable of picking up and handling nearly any object. Most factory robots must be pre-programmed with the specific objects they will handle, so robotics engineers are developing technologies that help robots reason about object manipulation on their own. CSAIL's system builds visual roadmaps of objects by viewing them as collections of 3D keypoints.
Robots will be able to manipulate an object just by observing it and figuring out what to do with it. (Image Credit: MIT CSAIL)
Keypoint Affordance Manipulation (kPAM) offers a higher level of accuracy than other systems. After locating each keypoint's coordinates on an object, the system determines what it can do with that object. For example, if it detects a mug with a handle, the robot can figure out how to hang the mug on a hook by its handle. It can also place a pair of shoes on a rack after looking at them. Knowing the locations of a few keypoints is enough for the robot to carry out its tasks, and the keypoint representation pairs well with modern machine learning and planning algorithms.
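To get a feel for the keypoint idea, here is a minimal toy sketch in Python. This is not MIT's code: the real kPAM pipeline detects keypoints from camera images and solves an optimization for a full rigid transform, while this illustration assumes the 3D keypoints are already given and plans a simple translation that moves one keypoint (the mug's handle) onto a goal location (a hook). All names and coordinates are hypothetical.

```python
# Toy illustration of keypoint-based manipulation planning.
# An object is represented only by a handful of named 3D keypoints;
# a task is specified as "move this keypoint to that target location".

def translate(point, offset):
    """Apply a 3D translation to a single keypoint."""
    return tuple(p + o for p, o in zip(point, offset))

def plan_translation(keypoints, anchor, target):
    """Compute the offset that moves the anchor keypoint onto the target,
    then apply that same offset to every keypoint on the object."""
    offset = tuple(t - k for t, k in zip(target, keypoints[anchor]))
    return {name: translate(p, offset) for name, p in keypoints.items()}

# A mug described as three 3D keypoints (metres, made-up values).
mug = {
    "handle": (0.10, 0.00, 0.05),
    "top_center": (0.00, 0.00, 0.10),
    "bottom_center": (0.00, 0.00, 0.00),
}

# Goal: hang the mug so its handle sits on a hook at this location.
hook = (0.50, 0.20, 0.30)
placed = plan_translation(mug, anchor="handle", target=hook)
```

The point of the sketch is that the task is defined over keypoints, not a full 3D model: any mug-like object with a detectable handle keypoint could be placed the same way, which is the category-level generality the CSAIL researchers describe.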
The researchers hope to continue developing the system until machines can perform larger tasks, such as unloading dishes from a dishwasher or cleaning a kitchen. The system could also eventually be used in factories as part of larger machinery.
Have a story tip? Message me at: cabe(at)element14(dot)com