Stanford University’s Volkswagen Passat Junior is outfitted with LIDAR and other sensors for self-navigation. (via Stanford)

Believe it or not, autonomous vehicles were being experimented with in the 1920s, underwent promising advancements in the 1950s and actually became a reality in the 1980s with Carnegie Mellon University’s Navlab (1984) and Mercedes-Benz’s EUREKA Prometheus Project (1987). Fast-forward a couple of decades and it’s clear that autonomous vehicles have come a long way in their self-driving capabilities. Listed below are some of the better-known vehicles, along with a few you may not have heard about.

First on the list (in no particular order) comes from Stanford University’s Racing Team, in conjunction with Volkswagen, with their Passat Junior. The vehicle was initially designed to compete in DARPA’s Urban Challenge a decade ago (finishing second) and has since undergone a few upgrades. The original featured no fewer than five LIDAR sensors for navigation, which have since been reduced to three. The Junior 3 (third revision) also sports three Bosch LRR2 (Long Range Radar) units mounted on the front of the vehicle, which provide positional data for the vehicle to maneuver. Other guidance data comes from onboard GPS, accelerometers and gyroscopes, which together form an internal guidance system that allows the vehicle to do everything from obstacle avoidance to self-parking.



The Mercedes-Benz F 015 concept vehicle was designed to combine autonomous driving with luxury. (via Mercedes-Benz)


When it comes to Mercedes-Benz, we tend to think of luxury, and that holds even for autonomy. The prominent German automaker has entered the AV (autonomous vehicle) fray with its F 015 concept car, which has a sleek, silvery, futuristic design without all the sensor racks that most others feature. Sure, it’s a concept car, but there are existing prototypes that even at this point look incredible.

The vehicle is outfitted with hidden cameras and sensors that it uses for navigation, and it features six high-resolution touchscreens that passengers can use for entertainment or other applications. The front of the vehicle features two different colored LEDs that denote whether the vehicle is in autonomous mode (blue) or being driven manually (white). Another cool feature is that it can be controlled by an app on mobile devices, in much the same fashion as KITT from Knight Rider but without the condescending AI.




BMW gets on the AV bandwagon with their Concept X5 eDrive, a new take on the hybrid vehicle.


BMW’s autonomous offering is actually a hybrid vehicle of sorts, which combines the company’s xDrive intelligent all-wheel-drive system with their plug-in hybrid technology (part electric, part gas, depending on your mood). This vehicle isn’t autonomous in the sense that it drives itself without human input; rather, it engages a smart-drive system when adverse road conditions are encountered, such as icy, slick or uneven roads. When that happens, the vehicle sends increased power to the wheels that have the surest footing on the road, and it does so in a matter of milliseconds. While it may not be fully autonomous, it is an intelligent system that takes over for the driver when needed.


The Audi A7 ‘Jack’ recently completed a 550-mile journey to CES 2015 mostly by itself.


One of the more advanced vehicles on this list comes from Audi with their A7 concept car, which recently completed a 550-mile journey from Silicon Valley to Las Vegas almost entirely by itself. The A7 is outfitted with a series of hidden sensors, including laser scanners, six RADAR packages, three cameras, two light detectors and LIDAR units, which keep the vehicle on the road and able to navigate itself through traffic. Those sensors allow the vehicle to choose the optimal path when going from point A to B, and it can perform smooth lane changes, maintain safe driving distances and even transition between slow and fast lanes based on surrounding traffic with relative ease and safety.



DARPA’s Ground X-Vehicle Technology program looks to bring a level of autonomous navigation to combat vehicles. (via DARPA)


Autonomous drive capability isn’t centered on commercial vehicles alone: DARPA has been experimenting with the technology through its Ground X-Vehicle Technology program, which aims to create a more mobile, less heavily armored combat platform. Like BMW’s Concept X5, the combat vehicles would be semi-autonomous, meaning they can navigate to a certain extent (performing routine driving tasks), allowing the occupants to focus on other combat-related tasks. One of the more notable concept vehicles does away with windows in favor of high-resolution screens, which provide a detailed view of the battlefield with augmented-reality overlays fed by the vehicle’s sensor suite.


The Mercedes-Benz Future Truck 2025 looks to bring autonomous technology to the trucking industry. (via Mercedes-Benz)


Cars, wagons, tanks and now semi-tractor trailers are getting in on self-driving capabilities as well, effectively making those long-haul runs a lot easier on drivers thanks to Mercedes-Benz. The F 015 isn’t the only autonomous vehicle the company has been developing; they are also getting in on the trucking industry with their Future Truck 2025. Like some of the others featured in this list, the Future Truck 2025 is semi-autonomous, in that it takes over once it reaches a certain cruising speed, allowing the driver to focus on other tasks. While not much is known about the 2025, some details have been made public: it’s outfitted with stereo cameras and RADAR sensors for navigation as well as for identifying pedestrians, other vehicles and even road conditions, and it acts on that information accordingly.


It may not be pretty, but GUSS is the Marine Corps’ entry into the autonomous vehicle world and is designed to follow troops downrange. (via Virginia Tech)


The military already possesses drones capable of autonomous flight, and now it’s turning its attention to ground-based vehicles that can do the same. Case in point: the US Marine Corps’ Ground Unmanned Support Surrogate (GUSS). The vehicle was developed by TORC Robotics, Virginia Tech and the Naval Surface Warfare Center Dahlgren Division to act as a ‘pack mule’ of sorts and carry gear and wounded soldiers. Like the other vehicles, this fully autonomous platform is outfitted with LIDAR, cameras and advanced mapping computers that allow it to function on its own. Troops can also take control of the vehicle using a Tactical Robotic Controller if the need arises, making it a versatile vehicle.



It may look cartoonish but Google’s self-driving vehicle is all about safety. (via Google)


Google’s foray into the autonomous vehicle world looks like something out of a children’s book, but looks can be deceiving. The company is actually looking to take humans out of the equation altogether: passengers would simply tell the vehicle where they want to go, much like the cabs in the original Total Recall movie. The car is outfitted with a host of sensors that allow it to ‘see’ its surroundings for navigation and obstacle avoidance. While the current prototypes offer manual controls, the final version will have none at all. Google takes safety seriously when it comes to its AVs, going beyond obstacle avoidance: the tech company was recently granted a patent for external airbags in case pedestrians get a little too close.



RDM’s Lutz Pathfinder is one of the few autonomous vehicles that will soon be available to the public. (via Lutz)


While most other autonomous vehicles are still in development, UK tech company RDM is looking to release its Lutz Pathfinder ‘pod’ vehicle to the public very soon, once the pods have been trialed, that is. The difference between the Pathfinder and other self-driving vehicles is that the pods were designed to operate on pedestrian pavements, or sidewalks. They will ferry passengers around city centers and other populated areas much like a tram or bus. The University of Oxford’s Mobile Robotics Group developed the navigation sensor suite (22 sensors in all), which includes light and LIDAR sensors for mapping the surrounding area and objects in its vicinity.



Delphi’s self-driving vehicle makes use of an Audi SQ5 SUV and is currently undertaking a coast-to-coast test-drive from San Francisco to New York. (via Delphi)


The final vehicle on the list comes from Delphi Automotive PLC, a UK-based company specializing in vehicle technologies. The company outfitted an Audi SQ5 SUV with a windshield-mounted camera that reads traffic lights, road signs and lane markers directly in front of it, while four midrange sensors, six long-range sensors, three camera sensors and six LIDAR units gather positioning data for navigation. What makes this unique is that Delphi is test-driving the vehicle coast to coast across the US completely hands-free. The journey started on March 22 from Treasure Island in San Francisco and will end in New York City at some point in the coming weeks, covering a distance of over 3,500 miles!

With so much work on autonomous vehicles, it wouldn't be surprising to see adoption in the next decade.







Eve (via University of Cambridge)


When many think about artificial intelligence, images of military drones and faceless soldiers often come to mind. The University of Manchester, however, is seeking to change that with Eve, an AI robot that recently discovered a compound that can be used to fight malaria.


Eve isn’t the first of her kind. In fact, she came after Adam (need we explain?). Adam was an AI robot created by the Universities of Cambridge and Aberystwyth to automate the scientific process, including the development of hypotheses. Adam needed a mate, however, and Eve was built to aid researchers in the discovery of compounds that could fight against Neglected Tropical Diseases.


Neglected Tropical Diseases (NTDs) include dengue fever, Chagas disease and leprosy (Hansen’s disease). These parasitic and bacterial diseases kill at least half a million people each year, according to estimates from the Centers for Disease Control and Prevention. Manchester scientists also decided to use Eve to search for a possible cure for malaria, which affected at least 219 million people and killed over 660,000 in 2010, according to the CDC. Malaria is difficult to fight because of its ability to develop resistance to drug treatments. NTDs, meanwhile, are ‘neglected’ because developing treatments for them is not cost-effective for pharmaceutical companies, as the people who need them are largely impoverished. Manchester scientists had this in mind when developing Eve.


Eve can screen 10,000 compounds per day and assess whether any of them could be good candidates to fight a particular target disease. Using genetically engineered yeast, Eve can determine whether a particular compound is toxic or harmful to humans and screen it out. After determining which compounds are most likely to successfully fight off the target disease, she retests them to rule out false positives. And so the process goes until Eve finds a match, and she did.


In tests, Eve discovered that a compound currently being tested as an anti-cancer treatment is also a good malaria candidate, as it blocks DHFR, an enzyme the malaria parasite needs to survive. While DHFR-blocking compounds are already used in antimalarial drugs, the result is a huge success for Manchester scientists, as it validates Eve’s accuracy and shows how helpful the technology could prove moving forward.


Eve was developed to speed up the early stages of drug development. As no one human could possibly screen 10,000 compounds daily, Eve works with scientists to make their jobs easier. The Robot Scientist can make pharmaceutical development more economical, bringing much-needed drugs to millions who need them worldwide.


The team that developed Eve is hoping to further advance the technology to include even more features, such as the ability to synthesize candidate compounds it finds. For now, Eve’s recent discovery proved just how useful artificial intelligence can be to the development of new and improved candidate drugs.


With malaria parasites and bacteria developing resistance to current treatments such as anti-malarial drugs and antibiotics, there’s no time to waste in the development of drugs that can put an end to these deadly diseases once and for all. Manchester scientists are hopeful that Eve will provide a path to faster, more efficient drug development, and we are too.


The European Commission and the Biotechnology & Biological Sciences Research Council supported the research and development of Eve. 






Amazon.com moved one step closer to realizing its dream of using drones to deliver orders to its customers across the United States. The Federal Aviation Administration issued Amazon an experimental airworthiness certificate last week, which allows it to experiment with new drone designs for research and development and crew training.


As EETimes' Junko Yoshida explains, this may be more of a symbolic victory for Amazon than a step which takes it appreciably further down the road of commercial drone delivery:


Amazon’s certificate allows experiments with new drone designs for R&D and crew training, but not for commercial purposes. An “airworthiness certificate” is fundamentally different from the “exemptions” some drone operators have gotten from the FAA, under what’s called Section 333. Those with exemptions under Section 333 can perform commercial operations in low-risk, controlled environments...


Obviously, it wasn’t the company’s first choice, either. Critics describe an experimental airworthiness certificate as “the same document required for a private, non-commercial plane owner to fly a Cessna.”


The main benefit of receiving the FAA nod is that Amazon may now test drones outside, rather than just in an enclosed space.


Whether the future of online shopping will include automated drone delivery remains to be seen. But if Amazon can convince the FAA that its UAVs are safe enough for commercial use, the future could look like this:



What do you think? Let us know by voting in our new poll.


Last week a small group of about two dozen protestors, supposedly representing an organization called "Stop the Robots," marched in front of the entrance to the annual South by Southwest tech festival taking place at the Austin Convention Center. The group came complete with picket signs proclaiming "humans are the future," chanted "I say robot, you say no-bot," and wore blue t-shirts with the inscriptions "Stop the robots" and "Humans make mistakes."


Who were these people? Just a bunch of malcontents, technology Luddites or religious zealots?

None of the above. It turned out to be a hoax—a viral marketing stunt for the dating app Quiver, which is part of another relationship/matchmaking app called Couple. Couple claims that instead of using matchmaking algorithms it uses your friends—humans—to find potential matches. And that, if anything, is at the core of their beef with AI systems.

The fact that the protest got media attention (the little demonstration was picked up by USA Today, Fox News and other outlets) was not because of the catchy slogans, but because some notable luminaries have recently spoken out against what they see as the dangers of artificial intelligence, including theoretical physicist Stephen Hawking (yes, that one, the subject of the Academy Award-nominated film The Theory of Everything) and Elon Musk, CEO of both rocket-maker SpaceX and electric car manufacturer Tesla.

Musk, for example, speaking last October at the AeroAstro Centennial Symposium, told MIT students, “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. So we need to be very careful,” adding, “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.”


Elon Musk

For his part, Prof. Hawking told the BBC in an interview, "The development of full artificial intelligence could spell the end of the human race."

Both Hawking and Musk along with other scientists and entrepreneurs signed an open letter promising to ensure that AI research benefits humanity. The letter, drafted by the Future of Life Institute and signed by dozens of academics and technologists, said we should seek to head off risks that could wipe out mankind and it called on the artificial intelligence science community to not only invest in research into making good decisions and plans for the future, but to also thoroughly check how those advances might affect society.

Of course neither Musk nor Hawking can be described as anti-technology. Indeed, Musk, along with Facebook’s Mark Zuckerberg, has invested in Vicarious, a company aiming to build a computer that can think like a person, and mimic the part of the brain that controls vision, body movement and language. Musk has also put some of his cash into DeepMind Technologies, an AI company that has been acquired by Google.

What do you think? Clearly AI is, and is likely to continue to be, useful in areas such as speech recognition, image analysis, driverless cars, and robotic automation. But are safeguards on intelligent machines needed so that mankind does not have a dismal future? And should we fear AI or control it going forward?


This TurtleBot creation brings coffee right to you (via Turtlebot)

Ever have dreams of competing in Robot Wars? Longing for a Roomba-style robot to deliver your coffee? TurtleBot has a program that may be right up your alley. TurtleBot is an open-source development kit for rolling robot applications, and thanks to a new initiative anyone can learn robotics programming free of charge. The program features thirty online tutorial sessions that aim to teach almost anyone how to use the Robot Operating System (ROS), an open-source, flexible robotics framework designed to simplify the creation of complex robot behavior, to run a TurtleBot.


The TurtleBot itself is a personal robot kit that can drive around people's homes, see in 3D using Microsoft's Kinect add-on, let users create maps of their homes and build different applications. It can also take pictures of a user's home and piece them together into 360-degree panoramas. A video posted on the project's website shows the bot tracking human movement and even delivering a snack to its patient owner. It costs around $1,000 in parts for those driven enough to build the robot on their own. Designed at a high-school level, the tutorials are expected to take three or four days to complete. A fully assembled TurtleBot is also available to buy for those who want to skip the building process. It includes a Kobuki base, a Microsoft Xbox Kinect, a ROS-compatible netbook and a factory-calibrated gyro. Whether you know nothing about robotics or are somewhat of an expert, this program is sure to have something for everyone.






The Internet of Things (IoT) we’ve all been hearing about usually involves people wearing health monitoring bracelets or intelligent additions to the human ecosystem, such as smart lighting controls for the home and smart car systems in which sensors alert the driver when a vehicle wanders outside of its traffic lane.

A less well-known but very big new opportunity for the Internet of Things focuses on industrial infrastructure. Smart machines (or, as General Electric has described them in its advertising, “brilliant machines”) have the capacity to change the world economy more than anything since the industrial revolution. This next wave of the Internet of Things will create smarter, more competitive factories by connecting machines and devices into functioning, intelligent systems. These interconnected devices, collectively known as the Industrial Internet of Things (IIoT), will enhance the productivity, efficiency and operation of our manufacturing facilities.


In Europe IIoT is being called Industry 4.0 because it refers to the fourth industrial revolution. The first industrial revolution introduced the mechanization of production using water and steam power. It was followed by the second industrial revolution which introduced mass production pioneered by Henry Ford with the help of electric power. Industry 3.0 was the digital revolution, bringing electronics, information technology and control systems to the factory floor to further automate production using computers, robots and programmable logic controllers (PLCs). Now, Industry 4.0 entails using networked communications and the cloud to combine smart machines into truly intelligent distributed systems.

The Industrial IoT introduces new requirements for the speed and volume of information exchange. Connections between machines must be real-time, they must be secure and they must work over wireless links, because without wireless communications acting as the conduit that delivers data between machines, M2M and therefore IIoT cannot exist. In an IIoT setup, real-time data will be shared between mobile devices via the cloud and can be accessed through a web browser. The IIoT system must efficiently scale up to handle streaming updates, alarms, configuration settings and command instructions, all as needed, and it must be enterprise-friendly.
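To make those traffic types a little more concrete, here is a deliberately simplified, hypothetical sketch of a factory-floor node pushing one streaming update to a gateway using nothing more than plain UDP and JSON. The machine name, gateway address and field names are made up for illustration, and a real IIoT deployment would layer security, delivery guarantees and discovery on top, which is precisely what industrial middleware provides.

```c
/* Hypothetical IIoT telemetry sketch: one node sends a single "update"
 * message to a placeholder gateway over UDP. The other traffic types
 * mentioned above (alarms, configuration, commands) could reuse the same
 * envelope with a different "kind" field. Not any vendor's real protocol. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    struct sockaddr_in gateway = {0};
    gateway.sin_family = AF_INET;
    gateway.sin_port = htons(9000);                       /* placeholder port */
    inet_pton(AF_INET, "192.0.2.10", &gateway.sin_addr);  /* placeholder address */

    char msg[256];
    snprintf(msg, sizeof msg,
             "{\"kind\":\"update\",\"machine\":\"press-07\","
             "\"vibration_mm_s\":%.2f,\"temp_c\":%.1f}",
             4.31, 71.5);                                 /* made-up sensor values */

    if (sendto(sock, msg, strlen(msg), 0,
               (struct sockaddr *)&gateway, sizeof gateway) < 0)
        perror("sendto");

    close(sock);
    return 0;
}
```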


(Source: General Electric)

Though manufacturing companies have been implementing robotics and computerized automation for decades, the sensors, programmable logic controllers (PLCs) and PC-based management systems on most factory floors are largely not connected to broader in-plant IT networks. What is more, unlike other IoT applications, in most cases they are not connected to the Internet. Consequently, networking industrial devices to achieve higher levels of automated interaction will involve upgrading current automation and robotics systems, developing a connected approach to maintenance, and retrofitting older equipment so it can receive information and make decisions without human intervention.

Logistics made possible by the IIoT will allow plants to react to unexpected changes in production, such as materials shortages and bottlenecks. The IIoT also assumes that cloud-based “big data” analytics will be used in decision making and for event prediction, based on the streams of incoming data from a myriad of sensors.

The benefits of IIoT include improving machine uptime, conserving or consolidating factory floor space, reducing labor costs and boosting throughput. And because industrial devices often operate in unforgiving, harsh physical environments, IIoT solutions must meet the challenging requirements of industrial-strength reliability, security, connectivity and backwards compatibility with legacy installations in order to deliver those benefits.

Let’s now look at some industrial IoT examples.

Getting mining equipment operators out of harm’s way

Robots work alongside humans on the factory floor. But what if the industrial automation application is outside of a plant? Joy Mining is the world's largest underground mining equipment manufacturer. The company’s equipment mines coal by digging channels in coal seams. The cutting end of the digging machine has a rapidly spinning cylinder with 6-inch diamond-studded cutting teeth. It chews through tens of tons of rock per minute. The huge machine simultaneously creates a rectangular mine tunnel, using hydraulic lifters to support the ceiling as the machine moves forward. Then, automated drills drive 3-ft-long screws into the ceiling to stabilize it. A set of gathering "fingers" scoops up the rock and coal and deposits it onto a conveyor belt. The conveyor passes under the machine and out the back. A train of conveyor belt cars, up to a mile long, follows the cutter into the mine. The rock shoots along this train at over 400 feet per minute until it empties into rail cars at the end.

The current system places an operator cage next to the cutter. It is not a great place to work: there is choking and potentially explosive dust, the risk of collapse and the proximity to rock flying all over the place, all of which make the operator’s cage a hazardous location. Joy Mining’s new 14CM Continuous Miner system uses Connext DDS communications software from Real-Time Innovations (RTI, Sunnyvale, CA), which allows the operator to be moved back to a safe distance from the action. DDS middleware is a communications technology that provides controlled access to data and is specifically designed to handle Industrial Internet applications. Intelligent control algorithms built into the software optimize cutter pressure and rate, check and maintain floor and ceiling levels and enforce machine limits to reduce failures. Connext DDS also delivers data up and away from the mine to allow surface monitoring. In the future, it will integrate machine control all the way to cloud-based analysis and production monitoring systems.


The Joy Mining 14CM Continuous Miner. Data connectivity, both within the machine and to surface operations, is critical to safe, efficient operation. (Source: RTI)

No driver needed

Mines often are found in very remote and hostile areas, where it may be difficult to find or attract enough qualified truck operators. Komatsu’s Frontrunner Autonomous Haulage System (AHS) uses GPS navigation to allow large electric mining trucks to operate without a driver. The AHS trucks use pre-defined courses and navigate autonomously from loading units to dump locations.

Human drivers aren’t needed since the sensors track conditions and control speed and locations. Remote operators monitor the truck’s performance.  Position sensors guide the trucks on the shortest route and save on fuel consumption.

The Komatsu system is at work at Rio Tinto mines in the Pilbara region of Western Australia. More than 50 autonomous trucks are in operation at the mines. In addition to GPS the dump trucks are equipped with vehicle controllers, an obstacle detection system and a wireless network system. The trucks are operated and controlled via a supervisory computer, enabling them to be unmanned. Information on target course and speed is sent wirelessly from the supervisory computer to the driverless dump trucks, while the GPS is used to ascertain their position. When loading, the dump trucks are automatically guided to the loading spot after computing the position of the bucket of the GPS-fitted hydraulic excavator or wheel loader. The supervisory computer also sends information on a specific course to the dumping spot.


Unmanned dump trucks at work at the Rio Tinto mine

The company says that implementing autonomous haulage means more material can be moved efficiently and safely, creating a direct increase in productivity.

Sensors also provide data for preventative maintenance; 32 sensors are embedded in the engine block, as many as 120 sensors in the drivetrains and 40 in the wheels.

From a safety perspective, the fleet control system prevents collisions with other dump trucks, service vehicles or other equipment at the mining site. If the obstacle detection system detects another vehicle or a person inside the hauling course during AHS operation, the vehicles reduce speed or stop immediately, keeping the system safe and reliable.

Building Jeep Wrangler bodies

Based in Augsburg, Germany, KUKA is a leading manufacturer of industrial robots for a number of industries. One of its United States subsidiaries is KUKA Toledo Production Operations. KTPO builds the bodies of all Jeep Wranglers sold in the world. When KUKA built its Jeep production facility in Toledo, Ohio, the company took advantage of the Internet of Things to create a highly automated plant that connects as many as 60,000 devices and factory-floor robots to a central data management system.


A Jeep Wrangler body being produced in Toledo, Ohio

KUKA implemented an intelligent system based on Windows Embedded and Microsoft’s SQL Server that connects 259 assembly-line robots, a controller, more than 60,000 device points, and backend systems. All the control tasks, including creating and running programs and diagnostic processes, can be performed directly on the robots from the control panel’s Windows-based interface — a familiar tool to many employees.

A new Wrangler is due in 2017, and to continue building Jeeps in Toledo the plant may need to update its robotics, add new tooling and possibly new paint facilities. It will have to do all of this without shutting down the line.

Integrating people and processes

There is little good in having billions of industrial sensors and devices connected to the Internet if they can’t all talk to each other. To ensure that they do, groups such as the Industrial Internet Consortium (IIC), an open-membership international nonprofit consortium, are attempting to set the architectural framework and direction for the Industrial Internet. Founded by AT&T, Cisco, GE, IBM and Intel in March of 2014, the IIC’s mission is to coordinate the integration of objects with people, processes and data using common architectures, interoperability and open standards.


The IIC is managed by the Object Management Group (OMG), the world’s largest systems software standards organization. The OMG also manages the Data Distribution Service (DDS) middleware protocol standard.

As you read this, the IIC is preparing to release a Reference Architecture to its members. The first part of the Reference Architecture defines the different components needed to make IIoT work: connectivity, sensors and actuators, data processing and security.



UK’s Lutz Pod (via Lutz)

When we hear about 150-car pile-ups, it raises the question, “When will self-driving cars be here?” While the UK’s Lutz pod is making history as the first driverless pod to be used in a public area, critics of the driverless car believe it will be a long while before we see these vehicles on the road.


The UK’s Lutz pod is a beacon of hope in driverless automobile technology. It’s making history not only as the first driverless pod in the UK but also as the first expected to be used in a public area in the region. The Lutz is by no means fit for the road, but it’s probably one of the driverless automobiles closest to entering the market.


The Lutz Pod is a driverless vehicle largely intended for commercial use. Created by Transport Systems Catapult and the RDM Group, the futuristic pod seats two people and can transport them at a whopping 15mph. It’s equipped with six cameras and LIDAR sensors to give riders and the smart technology a 360-degree view. The car can run for six hours per charge and is intended to transport elderly people or lazy shoppers in Milton Keynes, Buckinghamshire, England. While the Lutz pod is gearing up for live, public trials this summer, driverless cars intended for the consumer market aren’t having as much luck.



Transport Research Laboratory’s Driverless Simulation (via TRL)


While driverless car manufacturers, including Audi, are perfecting their cars’ smart controls, many are neglecting where human passengers fit in. In an attempt to fix this problem, the Transport Research Laboratory was commissioned by the UK government to determine how humans respond in smart cars and how smoothly (or abruptly) the shift from automatic to manual controls goes. The results were less than impressive.


The facility houses a driverless Honda Civic and recruits people to take simulated drives while it analyzes how humans behave in the futuristic vehicle. Drivers have discovered that when they want to take control of the vehicle, the handover often happens abruptly, perhaps making an accident all the more likely. Companies are working hard on devising a car that really can drive itself safely while incorporating a manual system that makes human passengers feel more comfortable. Driverless cars sound great, but in dangerous traffic situations, can we really entrust smart cars with our lives?


While companies work out the kinks in driverless cars, the Transport Research Laboratory has also been looking at the possibility of bringing driverless technology to the trucking industry. While the TRL is mainly hoping to improve fuel efficiency with this research, it would be interesting to see smart trucks combat the high prevalence of trucking accidents among cross-country drivers here in the U.S. There is no word yet on any U.S.-based company tackling that market, but the consumer market here is looking increasingly promising.


Audi RS 7 concept car specs (via Audi)


The Audi RS 7 concept car made history on the Ascari racetrack in Spain by lapping the track at a record-breaking 149 mph. Audi is also trying to provide a first-class driving experience for autonomous car passengers. As such, the car houses a television, and passengers can look out the window and relax, knowing ‘Audi is in control.’ The company hasn’t addressed the car’s ability to switch back into manual mode, but perhaps as it moves from concept to consumer market that will become more feasible. For now, it looks like we’re going to have to drive our own cars. But get ready, boys; we’re heading towards the future.





Links to Previous Posts




I started to build a quadcopter from scratch. In the previous posts, I covered making the frame from scrap wood and buying the motors and ESCs. I also tried to power the thing up with a basic output, but that did not work.


In this post, I explain the theory behind the quadcopter brain.


The Motion of the Quadcopter


We have all seen videos of these machines flying through the air and doing all kinds of stuff, and we need to understand how the quad does all that. In the case of a car, we can make the vehicle move by rotating the wheels and make it stop by cutting the voltage off; friction does the job for us at the wheels. But in the case of an aerial vehicle, how do you stop the quad? Let's understand the basic motion of the quad.


The quadcopter has four motors and propellers that generate thrust, which can make it go up or down. Varying the power to each of them makes every other movement possible.



The above diagram shows the lingo associated with aerial vehicles. Roll, pitch and yaw are the three rotational motions of any quad, and we will need sensors to accurately tell us these three parameters while the quad is in the air. If we can keep the yaw, pitch and roll fixed, does that mean the quad is stable in the air? Well, yes, but that's just half the story. Imagine a quadcopter which is perfectly oriented in mid-air and a gust of wind blows it away in the same orientation; we also need to know about the translational motion of the quadcopter through the air. A GPS cannot track that accurately, but we can use a little math and some sensor data to estimate the movement.


We understand that if all four rotors generate thrust, the quad goes up, and if thrust decreases, gravity does its job and brings it down.


If two rotors on the same side increase (or decrease) thrust, the quad can roll, and if the rotors at the back or front do that, it can pitch forward or backward as shown in the image above. This is simple physics. In addition to this rolling and pitching movement, the quad will also drift: for example, when rolling right, the quad will drift or side-strafe right, because when the quad is tilted a component of the thrust points sideways as well (see vector math for more details). Similarly, when pitching forward or backward, the quad will move forward or backward. Neat! So we get some basic control. Unfortunately, in the air there is no friction to slow us down and we don't have a brake. So when we want the quad to stop drifting, we need to tilt it in the OPPOSITE direction for a SMALL duration till it comes to a halt. So how long is "small"? Well, it depends on the weight of the quad (inertia) and its speed (momentum), and this can vary. Instead of doing this manually, the first thing we want to get done on the quad is to make it stop! Sounds counterproductive, but the truth is that if you can make a quad hover in a fixed location, that means it's a good machine. We need a brain to do all that for us. You can probably buy one, but since this series is about building a quad from scratch, we will make a brain of our own. Later on we will add more and more functionality as we see fit. Awesomeness, here we come!
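To give a taste of what that brain will eventually have to do, here is a minimal motor-mixing sketch in C. The names, signs and scaling are placeholders chosen for illustration, not the final flight code; the actual signs depend on the frame layout and propeller rotation directions we settle on later.

```c
/* Placeholder motor-mixing sketch for a quad in X configuration.
 * throttle, roll_cmd, pitch_cmd and yaw_cmd stand in for the outputs of the
 * stabilisation loop we will build later. Positive pitch_cmd speeds up the
 * front pair (nose up) under this made-up sign convention. */
#include <stdio.h>

static float clamp01(float v)
{
    return v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v);
}

/* out[] holds motor power (0..1) for FL, FR, RL, RR. */
static void mix_motors(float throttle, float roll_cmd, float pitch_cmd,
                       float yaw_cmd, float out[4])
{
    out[0] = clamp01(throttle + roll_cmd + pitch_cmd - yaw_cmd); /* front left  */
    out[1] = clamp01(throttle - roll_cmd + pitch_cmd + yaw_cmd); /* front right */
    out[2] = clamp01(throttle + roll_cmd - pitch_cmd + yaw_cmd); /* rear left   */
    out[3] = clamp01(throttle - roll_cmd - pitch_cmd - yaw_cmd); /* rear right  */
}

int main(void)
{
    float m[4];
    mix_motors(0.5f, 0.0f, 0.1f, 0.0f, m); /* half throttle, small nose-up pitch */
    printf("FL=%.2f FR=%.2f RL=%.2f RR=%.2f\n", m[0], m[1], m[2], m[3]);
    return 0;
}
```

The interesting part, of course, is computing roll_cmd, pitch_cmd and yaw_cmd in the first place, which is exactly what the rest of this post builds toward.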


Reverse Engineering the Brain!


Before we start buying sensors and controllers, we need to understand the requirements of our quadcopter. We know that the propellers are responsible for the movement and that we can control them using the ESCs we bought, but the brain is what makes everything tick, and it will fire commands to the propellers. Since we can only program the controller using a programming language, we need a way of representing our entire system mathematically, in the form of equations which can later be converted into statements of a programming language. The language will be C, and our initial work will be done independent of a particular controller.

Let's dive into a bit of rigid-body math...


Reference Systems reverse engineered


Our quadcopter is an (almost) fixed body which is expected to float in mid-air. This is termed a rigid body in free space, and its yaw, pitch and roll need to be quantified. Thanks to mathematicians, we already have a number of ways to describe our quad in the air, and they are discussed briefly in the following sections. There are two reference systems that need to be discussed.

One is the fixed body frame, where the vehicle's body is taken as the reference and everything is described relative to its orientation in space; the other is the North-East-Down (NED) frame, where the earth's geography is taken as the reference. Let's try to understand this a little more.


For a quadcopter flying in the air, or any other object, the force of gravity is the only external force that is constant, hence we can use it to tell which way is up and which way is down. For someone standing on the ground, the quadcopter is moving and the ground is fixed. Alternatively, from the quadcopter's point of view, the ground is moving and it is fixed; that is, the motion of a body can only be described relative to something else. Complicated? To simplify this, we use an inertial frame of reference which takes the acceleration due to gravity as a fixed reference. Simply put, gravity tells us which way is down. Now which way is left or right? This is not absolute, since gravity does not point to any side, and we normally do not need the quad to face a particular direction such as north. If we do, then the simple answer is: use a compass! Ships use a compass to navigate, so why not our quad? Hence we use an electronic compass to tell us north, south, east and west. We also need to find out if the quad is drifting, and we can estimate that later. With that, we have our three-dimensional space all figured out. But we need to find a way to represent this in code...


A bit of math


1. Euler Angles:


From Wikipedia,"The Euler angles are three angles introduced by Leonhard Euler to describe the orientation of a rigid body.[1] To describe such an orientation in 3-dimensional Euclidean space three parameters are required. They can be given in several ways, Euler angles being one of them; see charts on SO(3) for others. Euler angles are also used to describe the orientation of a frame of reference (typically, a coordinate system or basis) relative to another. They are typically denoted as α, β, γ, or φ, θ, ψ."



Simply put, if the quad is at rest at the center of our assumed origin, then any change in roll, pitch or yaw can be expressed using one of the three angles. As seen in the image above, if there is any roll, then phi will change; if there is any pitching, then theta will change; and for yaw, there is a change in psi. The sensors can be used to fill in these values: they measure these angles, and then we have a good idea of the situation of the quad in the air. Then, if we want to move forward, we speed up the back rotors so that theta changes to the appropriate value, and so on and so forth.
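To see where the sensor data plugs into those angles, here is a minimal sketch (in C, with placeholder names) that estimates roll and pitch from a single three-axis accelerometer sample. It only holds when the quad is roughly static, because it assumes the measured acceleration is mostly gravity; yaw needs the compass or the gyro, which we get to later.

```c
/* Estimate roll (phi) and pitch (theta) from one accelerometer sample.
 * ax, ay, az are accelerations in any consistent unit (g or m/s^2); the
 * result is only meaningful when the quad is not accelerating much, since
 * we treat the measured vector as pure gravity. */
#include <math.h>
#include <stdio.h>

static void roll_pitch_from_accel(float ax, float ay, float az,
                                  float *roll_deg, float *pitch_deg)
{
    const float rad2deg = 57.29578f;
    *roll_deg  = atan2f(ay, az) * rad2deg;
    *pitch_deg = atan2f(-ax, sqrtf(ay * ay + az * az)) * rad2deg;
}

int main(void)
{
    float roll, pitch;
    roll_pitch_from_accel(0.0f, 0.17f, 0.98f, &roll, &pitch); /* ~10 deg of roll */
    printf("roll = %.1f deg, pitch = %.1f deg\n", roll, pitch);
    return 0;
}
```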


In the same way, we can describe Euler angles for the quadcopter in free space, taking any point on the ground as the reference.



2. Orientation vector

From Wikipedia,"Euler also realized that the composition of two rotations is equivalent to a single rotation about a different fixed axis (Euler's rotation theorem). Therefore the composition of the former three angles has to be equal to only one rotation, whose axis was complicated to calculate until matrices were developed.


Based on this fact he introduced a vectorial way to describe any rotation, with a vector on the rotation axis and module equal to the value of the angle. Therefore any orientation can be represented by a rotation vector (also called Euler vector) that leads to it from the reference frame. When used to represent an orientation, the rotation vector is commonly called orientation vector, or attitude vector."


3. Orientation Matrix


In vector math it's easy to decompose a single vector into two orthogonal vectors and vice versa, and if you are reading this you probably know that. When we deal with rotating bodies, we need a way to decompose the motion of a body in free space in terms of angles (in our case roll, pitch and yaw). Just like vectors, an arbitrary direction in 3D space can be described by two angles, theta and phi, and a position in 3D space can be described by those two angles plus a magnitude. The simplest video explaining this concept and rotation matrices that I could find is given below.
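As a complement to that, here is a minimal C sketch of how an orientation matrix is built from the three Euler angles, using the common aerospace Z-Y-X convention. Treat it as an illustration of the shape of the math; whichever convention we finally pick has to match the sensor fusion code later.

```c
/* Build the 3x3 rotation matrix R from roll (phi), pitch (theta) and
 * yaw (psi) using the Z-Y-X (yaw-pitch-roll) convention. R maps body-frame
 * vectors into the reference (earth) frame. Angles are in radians. */
#include <math.h>
#include <stdio.h>

static void rotation_from_euler(float phi, float theta, float psi, float R[3][3])
{
    float cphi = cosf(phi),   sphi = sinf(phi);
    float cth  = cosf(theta), sth  = sinf(theta);
    float cpsi = cosf(psi),   spsi = sinf(psi);

    R[0][0] = cth * cpsi;
    R[0][1] = sphi * sth * cpsi - cphi * spsi;
    R[0][2] = cphi * sth * cpsi + sphi * spsi;
    R[1][0] = cth * spsi;
    R[1][1] = sphi * sth * spsi + cphi * cpsi;
    R[1][2] = cphi * sth * spsi - sphi * cpsi;
    R[2][0] = -sth;
    R[2][1] = sphi * cth;
    R[2][2] = cphi * cth;
}

int main(void)
{
    float R[3][3];
    rotation_from_euler(0.0f, 0.0f, 0.0f, R);   /* zero angles -> identity matrix */
    printf("%.1f %.1f %.1f\n", R[0][0], R[1][1], R[2][2]);
    return 0;
}
```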


4. Quaternions or Versors

From wikipedia,"In mathematics, the quaternions are a number system that extends the complex numbers. They were first described by Irish mathematician William Rowan Hamilton in 1843[1][2] and applied to mechanics in three-dimensional space. A feature of quaternions is that multiplication of two quaternions is noncommutative. Hamilton defined a quaternion as the quotient of two directed lines in a three-dimensional space[3] or equivalently as the quotient of two vectors.[4]"


I could not isolate a single video that explains quaternions to the masses, so I just might do one in the future. The basic principle is to represent the orientation and rotation of any object in 3D space. According to Euler's rotation theorem, any rotation (around a fixed point) can be represented by an angle theta and an axis of rotation given by a unit vector. The image below shows the same.


Quaternions give us a way to 'encode' this information in four numbers. Say I have a point in 3D space given by (ax, by, cz). A vector from the origin to this point can be written (in Euclidean space) as p = (ax)i + (by)j + (cz)k, where i, j, k are unit vectors orthogonal to each other, representing the Cartesian axes. Say we rotate this through an angle theta around a unit vector u' = (ux)i + (uy)j + (uz)k. Then this rotation can be represented by the quaternion


q = cos(theta/2) + (u')sin(theta/2), or expanded,

q = cos(theta/2) + sin(theta/2)[(ux)i + (uy)j + (uz)k]


This means we can plug the angle of rotation and the unit vector into the above equation to get a quaternion for the rotation. The rotation itself works as follows: if p is the original vector (as described above) and q is the quaternion describing the rotation, then the vector after rotation is


p' = q p (q^-1), using the Hamilton product
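To show that this is less scary in code than it looks on paper, here is a minimal C sketch that builds a quaternion from an axis-angle pair and applies exactly that p' = q p q^-1 rotation. It is an illustration of the math above, not the final flight code.

```c
/* Minimal quaternion sketch: build q from an axis-angle pair and rotate a
 * vector with p' = q * p * q^-1 (Hamilton product). */
#include <math.h>
#include <stdio.h>

typedef struct { float w, x, y, z; } quat;

static quat quat_mul(quat a, quat b)            /* Hamilton product */
{
    quat r;
    r.w = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z;
    r.x = a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y;
    r.y = a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x;
    r.z = a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w;
    return r;
}

static quat quat_from_axis_angle(float ux, float uy, float uz, float theta)
{
    float s = sinf(theta * 0.5f);               /* axis (ux, uy, uz) must be unit length */
    quat q = { cosf(theta * 0.5f), ux * s, uy * s, uz * s };
    return q;
}

static void quat_rotate(quat q, const float v[3], float out[3])
{
    quat p    = { 0.0f, v[0], v[1], v[2] };
    quat qinv = { q.w, -q.x, -q.y, -q.z };      /* conjugate equals inverse for unit q */
    quat r = quat_mul(quat_mul(q, p), qinv);
    out[0] = r.x; out[1] = r.y; out[2] = r.z;
}

int main(void)
{
    /* Rotate the x axis 90 degrees about z: expect roughly (0, 1, 0). */
    quat q = quat_from_axis_angle(0.0f, 0.0f, 1.0f, 1.5707963f);
    float v[3] = { 1.0f, 0.0f, 0.0f }, r[3];
    quat_rotate(q, v, r);
    printf("%.2f %.2f %.2f\n", r[0], r[1], r[2]);
    return 0;
}
```

Rotating the x axis 90 degrees about z lands on the y axis, which is a quick sanity check that the Hamilton product is wired up correctly.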


OK. This is getting thick so I will leave it here for now.


The point I am trying to make with the above math is that we need a way to represent the orientation of our quad in space, both with respect to itself and with respect to the earth.

We will come to this again once we have a better idea of what exactly we can use from all this.


The AHRS what?

We need our electronic brain to manage the quadcopter, and we just went through the mathematical part. In the real world, we can now start talking about the AHRS, which stands for attitude and heading reference system.

Attitude? Don't be scared; attitude control is controlling the orientation of an object with respect to an inertial frame of reference. It's the orientation with respect to, say, the earth. Heading is the direction in which the quadcopter is pointed, and the AHRS uses sensors to calculate the required information.
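A very common first cut at an AHRS is a complementary filter that fuses the gyroscope (fast, but it drifts) with the accelerometer (noisy, but drift-free). Here is a minimal single-axis sketch in C with placeholder names and a placeholder gain; the real AHRS in the next post will be more involved.

```c
/* Single-axis complementary filter sketch (pitch).
 * gyro_rate_dps:   gyroscope rate around the pitch axis, degrees/second.
 * accel_pitch_deg: pitch computed from the accelerometer (see the earlier
 *                  roll/pitch sketch).
 * dt:              loop period in seconds.
 * ALPHA close to 1 trusts the gyro short-term while letting the
 * accelerometer slowly pull the drift back out. */
#define ALPHA 0.98f

float complementary_pitch(float prev_pitch_deg, float gyro_rate_dps,
                          float accel_pitch_deg, float dt)
{
    float gyro_pitch = prev_pitch_deg + gyro_rate_dps * dt;
    return ALPHA * gyro_pitch + (1.0f - ALPHA) * accel_pitch_deg;
}
```

Called once per control loop with the latest readings, it keeps the estimate responsive without letting the gyro drift run away.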


It's time!

So the first task is to select a microcontroller platform and the sensors. There are lots of projects out there that have used the ATmega328P and Arduino as the core, and it seems sufficient. However, since my experiments are from scratch and I want to learn to make this thing from scratch, I will write my code to be almost platform independent. Easier said than done, but you will see me do that in the upcoming segments.

Since I am going to be experimenting, I will start with both an Arduino Uno and the Freescale Freedom K64F board from the IBM IoT Kit. The reason I have chosen these two is their differences. The Arduino is 8-bit and less powerful but has more community support. On the other hand, the K64F is more powerful and has an FPU, which means I can do floating-point math very easily while doing a lot of other work in parallel. Both are "Arduino compatible" and I will be doing some shield PCBs in the future, so their interchangeability will help.


The Freedom to do

The Freedom K64F is an (almost) Arduino-compatible board with a lot more power and a LOT more I/Os and capabilities. It comes with Ethernet, USB host and an SD card slot, along with the FXOS8700CQ, which combines a 3-axis accelerometer and a 3-axis magnetometer. I will be doing some demos on this sensor but will probably move to a different one for the final system.



The image above shows the pinout, and there is no need to download any software since we will be using the online compiler! Yes, there is an online compiler which allows you to write the code online, compile it online and download the binary to your PC. You just copy the file onto your mbed device, which shows up as a pen drive, and it does the rest! Brilliant. There is an offline option as well, but I won't go into that in the near future.


The kit I received has two boards:

1. The K64F board, and

2. A board with an LCD, speaker, RGB LED, pots, a joystick and an XBee socket.


I plan to use the shield for my ground station, where the joystick will control the motion and the pots the altitude. Fingers crossed.






In this post I have tried to dive into the mathematical part and get an overview, so that when I start coding I know what's out there.


In the next post, I will finally start on the AHRS with an IMU and demonstrate how we can obtain the orientation of our quad in the air. Till then!





Leonard Nimoy at the Phoenix Comicon back in 2011. He revealed that he had COPD in 2014 and succumbed to the illness on February 27, 2015.

At this point, we have all heard the tragic news that famed actor, poet, photographer and singer/songwriter Leonard Nimoy passed away on February 27, last month. Most of us knew him as Spock, Kirk’s first officer (and science officer) from the Star Trek TV and movie series, which became incredibly popular only after the original show went into syndication. Needless to say, he will be missed, but his legacy will live on, not only through his fans but also through the inspiration Spock gives to scientists, astronauts and engineers old and new.



Astronaut Terry W. Virts tweeted this image from the ISS after having learned of Nimoy’s death.


Star Trek gave rise to the notion that anyone, of any color or creed, could travel in space. Dr. Mae Jemison (former astronaut), Colonel Terry W. Virts (astronaut), Harold White (NASA engineer) and even NASA Administrator Charles Bolden are just a few examples of people who were inspired by the show (and Spock) and went on to pursue their dreams. The people at NASA were so enamored with the original show that they christened one of their space shuttles ‘Enterprise,’ with the cast present for the ceremony.


The show’s various tech devices have inspired engineers to bring them into reality, and for the most part, they have succeeded. Engineer Martin Cooper headed up Motorola’s communications division back in the early 1970s and brought about the first mobile phone after being inspired by the communicators on Star Trek. Engineers from NASA and Britain’s National Health Service worked together to develop a Star Trek-style medical bay that can diagnose diseases without invasive procedures. Star Trek fan and entrepreneur Walter De Brouwer designed a working tricorder called the Scanadu that is able to measure vitals such as heart rate, temperature and blood pressure when placed on the patient’s forehead.



Batteries Not Included character models. (ComicCon 2012)


It’s easy to see how sci-fi shows and movies of all kinds can become a jumping platform for inspired fans to go on and become inventors or engineers. For me, it was the 1987 movie Batteries not Included. Seeing the robot characters move around, I knew it would be possible. I also knew they were all puppets, but I thought I could build one too. Somewhere lost in the boxes and junk of my youth, my first soup can robot sits.


- Chime in below and talk about Leonard Nimoy -and- what inspired you to be an engineer, maker or creator. -





The expected instant photo printing gadget for iPhone and Android is taking Kickstarter by storm (via kickstarter & Prynt)

A few short months ago we featured a French start-up company that had nothing more than a prototype and a dream. Now, their dream is becoming an epic reality. They finally launched their campaign on Kickstarter with a goal of $50,000. Within only a few hours, they had raised over $125,000. Now, within days, they have raised almost $600,000! It seems they have chosen to capture the right market at the right time.


Prynt is a current Kickstarter success story, which should get your entrepreneurial juices flowing, considering that only 44% of projects get funded successfully and the vast majority never raise over $10,000.

For those of you who missed the previous post on Prynt, I’ll give a run-down of this interesting gadget. Prynt is essentially the modern-day Polaroid for iPhone and Android. It acts as a ‘case’ that you plug your phone into (negating the need for syncing via Bluetooth). Once connected, you can use the Prynt app to print any photo on your phone in a matter of seconds. The photo comes out much like a Polaroid, and you can personalize it beforehand. The new twist on the Polaroid is that you can record a 5- to 10-second video clip which will play on your phone when your phone’s camera catches a glimpse of the photo. Hence, it has an augmented reality feature that works in Harry Potter-esque fashion, allowing you to capture more than just one moment.


The augmented reality footage can supposedly be shared with any of your friends and family who download the app. Hence, it could make for a clever party or wedding invitation.


The photo printer doesn’t use inkjet technology; the ink is embedded in the paper itself and activated by heat pulses, aka Polaroid ZINK photo paper. Considering the amount of money pledged already, there are no more early-bird specials left, which gave up to 50% off the list price.


However, you can still score one on Kickstarter for $99 or more. You can also pre-order it on the Prynt Case website for $99. The basic package includes the Prynt Case and 10 pieces of photo paper. You can also order additional pieces of photo paper (once they’re all set up) for $5 per pack of 10. You are supposed to be able to order photo paper directly from the app itself and it will be shipped to your house ASAP. Personally, I am thinking that you can probably buy some ZINK photo paper (or equivalent) online, in bulk, and cut it down to the proper size (just saying...).


One of the nicest features of this gadget is that it has its own power source and requires no pairing, meaning you don’t have to waste your precious phone’s battery life to print as many Prynt Case photos as you like. It will cost you photo paper, however, which is the major drawback of this device. Each sheet costs more than a print at your local Walgreens, but it scores major points on convenience. In a world of digital media, it seems printed photos are making a comeback.







If Artificial Intelligence (AI) makes you nervous, stop reading now. A new research project, conducted by researchers at the University of Maryland, aims to create self-learning robots that can learn through visual input, also known as YouTube. In a recent study, robots were actually able to acquire new skills by mimicking what was “seen” on YouTube videos, without human intervention. There’s no doubt about it, we’ve traveled to the future.


DARPA’s Mathematics of Sensing, Exploitation and Execution initiative funded the University of Maryland researchers, who hope to eventually create a technology that leads to robots that can develop new skills autonomously – and they’re not far off. In their recent study, robots were fed YouTube cooking videos directly from the World Wide Web and were programmed to mimic the tasks seen. The results are impressive, as the robots successfully recognized, selected and utilized the correct kitchen utensil and executed the appropriate tasks seamlessly. The robots exhibited incredible accuracy without any human interaction whatsoever. In short, robots can eventually cook you a lovely dinner, if you can find a cooking show on YouTube.


While the University of Maryland researchers deserve due credit, they are part of DARPA’s larger vision to enhance robotics in ways that seem like science fiction. DARPA hopes its MSEE program eventually results in robotic sensory processing and intercommunication. In layman’s terms, it hopes to create robots that can “see” something, “think” about the appropriate action, take that action and “teach” one another how to do the same thing, all without human interference. It’s an incredible program that’s already making a lot of headway. Imagine an army of “seeing,” “thinking” and “doing” robots protecting our national interests... now that’s what we call national defense.


Outside of creating an army of self-learning superbots, there are economic benefits to further development of this technology. If robots can “think,” or at least acquire new skills on their own, resources that were previously used for robotic programming can be allocated elsewhere. The idea is that the U.S. will build a revolutionary generation of robots that can learn tasks much faster at a much lower cost and be used in areas ranging from military machine maintenance to domestic servitude (we can only hope).


The University of Maryland researchers presented their work at the 29th meeting of the Association for the Advancement of Artificial Intelligence. They intend to continue their research, with their eyes set on developing fully self-learning robots. This poses questions for the future of the American workforce, as machines have already replaced a number of jobs once performed by people. If robots can “think” too, there’s no telling which jobs machines will continue to fill. Thankfully, that nightmare is still some time away from being realized, but it’s something to consider. Encourage your kids to become roboticists. Maybe you’ll be first in line for one of those autonomous butler robots when they hit the commercial market. We can only hope.






Boreal Bikes’ smrtGRiPs provide haptic feedback for hands-on navigation (all images via Boreal Bikes)


We do it in cars and while walking or hiking, so why not while riding bicycles? I’m talking about using navigation apps on our smart devices to find our way around. The problem is that it can become a deadly distraction while driving, or even while bicycling for that matter, and you wind up not making it to your destination. With the IoT connecting more and more things to the internet on a seemingly daily basis, it was only a matter of time before bicycle grips became ‘connected’ as well.



Installation of the smrtGRiPs is pretty straightforward, and connection to a user’s smartphone is handled through the grips’ Bluetooth module.


Boreal Bikes (located in Montreal, Canada) designed the smrtGRiPs to provide a hands-free, eyes-free way to navigate, complete with haptic feedback notifications and the ability to track lost or stolen bicycles. Instead of the traditional audio directions of most GPS-based navigation apps, the grips vibrate when users need to turn left or right, and the vibration intensifies as riders approach the indicated street. They can even alert riders to road hazards or traffic issues, which can be accompanied by audio alerts as well.
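
Boreal Bikes hasn’t published its firmware, but the behaviour described above is easy to picture in code. Below is a hypothetical Python sketch that buzzes the grip on the side of the upcoming turn (an assumption about which grip vibrates) and ramps the intensity as the rider closes in on the street; the thresholds and the buzz() interface are illustrative, not the company’s implementation.

# Hypothetical sketch of the turn-by-turn haptics described above.

def vibration_intensity(distance_to_turn_m, ramp_start_m=150.0):
    """Return an intensity between 0.0 and 1.0 that rises as the turn gets closer."""
    if distance_to_turn_m >= ramp_start_m:
        return 0.2                     # gentle heads-up pulse while the turn is still far off
    return 0.2 + 0.8 * (1.0 - distance_to_turn_m / ramp_start_m)

def haptic_update(next_turn_direction, distance_to_turn_m, left_grip, right_grip):
    """Buzz only the grip on the side of the upcoming turn (assumed behaviour)."""
    grip = left_grip if next_turn_direction == "left" else right_grip
    grip.buzz(vibration_intensity(distance_to_turn_m))

class FakeGrip:                        # stand-in for the Bluetooth grip hardware
    def __init__(self, side):
        self.side = side
    def buzz(self, level):
        print(f"{self.side} grip buzzing at intensity {level:.2f}")

# Example: a left turn coming up in 40 metres.
haptic_update("left", 40.0, FakeGrip("left"), FakeGrip("right"))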



The accompanying app provides eyes-free navigation along with bike-tracking and locator features.


If a rider’s bike goes missing (in a sea of bike racks?), the smrtGRiPs app will guide them to its location and can even trigger an audible tone to signal where it is. If the bike is stolen, the app sends out a signal to other smrtGRiPs users in the area; if any of them come within 330 feet of the stolen bike, the owner receives an instant notification of where it is located. While that might not seem like an ideal solution, it’s better than nothing. On the other hand, perhaps users could modify the grips with an internal real-time GPS device to pinpoint the bike’s exact location.
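
The crowd-search feature boils down to a simple proximity relay: when another user’s phone hears the stolen bike’s Bluetooth beacon within range, it forwards its own GPS fix to the owner. Here is a hypothetical sketch of that logic, with made-up beacon IDs and a placeholder notify() callback rather than the real smrtGRiPs app code.

# Hypothetical sketch of the crowd-locate relay described above.

BLE_RANGE_FT = 330                 # approximate detection range quoted by Boreal Bikes

stolen_registry = {"grip-1234"}    # beacon IDs their owners have flagged as stolen

def owner_of(beacon_id):
    return "owner@example.com"     # placeholder lookup, not a real service

def on_beacon_sighting(beacon_id, distance_ft, finder_gps, notify):
    """Called by a passing user's app whenever it hears a smrtGRiPs beacon nearby."""
    if beacon_id in stolen_registry and distance_ft <= BLE_RANGE_FT:
        # Relay the finder's own GPS fix to the owner as the bike's rough location.
        notify(owner_of(beacon_id), approximate_location=finder_gps)

# Example: a passer-by's phone hears the stolen grip 120 feet away in Montreal.
on_beacon_sighting("grip-1234", 120, (45.5017, -73.5673),
                   notify=lambda owner, approximate_location: print(owner, approximate_location))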


The waterproof grips are made of a durable aluminum casing that houses the Bluetooth module, vibration module and battery, which is recharged through the grip’s micro USB connector. Boreal Bikes is currently crowd-funding the smrtGRiPs on Indiegogo, and those looking to get a pair can do so by pledging $59 or more, which nets them a pair of grips and a charger.



See more news at:




Authorities attempting to crack down on illegal fishing

A joint partnership between the Pew Charitable Trusts and the UK’s Satellite Applications Catapult is attempting to crack down on illegal fishing in a big way. Their mission, called ‘Project Eyes’, uses advanced technology to track ships and alert authorities to potential illegal activity.


Most illegal fishing is perpetrated by modern-day pirates because it offers the potential for vast economic rewards. Experts estimate that the trade in illegally caught fish is worth over $20 billion each year. Why should consumers care? Firstly, because fishing regulations exist to prevent overfishing and protect wildlife. Secondly, because you may want to know exactly what you’re eating when you buy food from your local supermarket. No one wants another horse meat scandal, right?


For governments, better enforcement of fishing regulations means less lost revenue, and cracking down on $20 billion of illegal, tax-free profits amounts to something sizable.


Project Eyes is starting out in a Catapult watchroom in Harwell, Oxfordshire, UK; if the project is successful, it can be expected to expand throughout Europe and beyond. To track and predict illegal activity, Project Eyes uses a ‘smart’ monitoring system of satellites and an advanced algorithm.


Rather than simply tracking sea vessels, the algorithm analyzes their movements, sea conditions and probable fishing locations to predict what each ship is doing. A large body of historical data has also been fed into the algorithm to help predict vessel movement and likely fishing grounds. The system should be able to detect when a vessel is fishing and will raise an alert if that happens inside a no-take zone, so Project Eyes can tip off the authorities and catch the pirates red-handed.
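
Pew and Catapult haven’t published the algorithm, but the behaviour described, inferring ‘fishing’ from a vessel’s track and flagging it inside a protected zone, can be approximated with a simple rule of thumb. The speed and turn thresholds and the rectangular zone in this Python sketch are illustrative assumptions, not Project Eyes’ actual model.

# Hypothetical sketch of the kind of rule a watchroom algorithm might apply.

def looks_like_fishing(track):
    """Trawling tends to show up as low, steady speed with frequent heading changes."""
    speeds = [point["speed_knots"] for point in track]
    turns = sum(abs(track[i]["heading"] - track[i - 1]["heading"]) > 30
                for i in range(1, len(track)))
    avg_speed = sum(speeds) / len(speeds)
    return 1.0 <= avg_speed <= 5.0 and turns >= 2

def in_no_take_zone(position, zone):
    """Crude bounding-box test; real protected zones are polygons."""
    lat, lon = position
    return zone["lat_min"] <= lat <= zone["lat_max"] and zone["lon_min"] <= lon <= zone["lon_max"]

def check_vessel(track, zone, alert):
    last_position = (track[-1]["lat"], track[-1]["lon"])
    if looks_like_fishing(track) and in_no_take_zone(last_position, zone):
        alert(f"Possible illegal fishing at {last_position}")

# Example with a made-up track inside a made-up protected box.
zone = {"lat_min": 6.0, "lat_max": 8.0, "lon_min": 133.0, "lon_max": 135.0}
track = [{"lat": 7.1, "lon": 134.0 + i * 0.01, "speed_knots": 3.0, "heading": (40 * i) % 360}
         for i in range(6)]
check_vessel(track, zone, alert=print)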


While there are many satellite systems keeping a watchful eye on the seas, Project Eyes processes a wealth of live data that goes well beyond what has previously been done. Vessels are supposed to be fitted with transponders, but these can be tampered with to broadcast false data. Project Eyes’ satellite radar data should give an accurate account of vessel activity and allow authorities to catch the bigger boats that carry out transhipments for the pirates.
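
One practical way to spot a tampered transponder is to compare where each vessel says it is with where satellite radar actually detects ships. The sketch below shows that kind of cross-check; the 5 km mismatch threshold and the data shapes are assumptions made for illustration, not the project’s specification.

# Hypothetical cross-check of transponder-style position reports against
# satellite radar detections.

from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def flag_spoofed_reports(reported_positions, radar_detections, max_mismatch_km=5.0):
    """Flag vessels whose reported position is nowhere near any radar detection."""
    flagged = []
    for vessel_id, reported in reported_positions.items():
        nearest = min(haversine_km(reported, detection) for detection in radar_detections)
        if nearest > max_mismatch_km:
            flagged.append(vessel_id)
    return flagged

# Example: vessel-B reports a position far from every radar-detected ship.
reports = {"vessel-A": (7.30, 134.50), "vessel-B": (10.00, 140.00)}
radar = [(7.31, 134.52), (7.80, 134.90)]
print(flag_spoofed_reports(reports, radar))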


Chile and Palau, a Pacific island nation, will be the first to use Project Eyes to crack down on illegal activity and preserve their vast marine reserves, which draw many pirate vessels attempting to trawl rare species. Their enforcement resources are limited, so satellite technology helps them use those resources more effectively to prevent the destruction of these protected waters.


In the future, Project Eyes may even partner with supermarkets, enabling them to trace the sourcing of their produce down to the region in which the fish were caught, and by which fisherman. That would allow companies to lead new Corporate Social Responsibility campaigns that ensure ethical sourcing standards are followed, and it would assure us, as consumers, that we are not supporting illegal fishing activities that can upset the balance of the ecosystem and endanger species.



See more news at:



Micromotors before and after reacting with stomach acid (via BBC and UCSD)

Nanobot technology has been on the rise. From autonomous construction to underwater ballet, nanobots are becoming the iconic technology of the future, and of the now. Some researchers, however, are going even further than self-repairing nanobot prototypes. The Department of Nanoengineering at the University of California, San Diego, has just created a self-destructing nanobot and successfully tested it in live mice for the first time.


Intensive research has explored the possibility of using nanobots as tiny mobile machines, including self-constructing bots and bots that deliver medicine on demand. UCSD researchers, however, wondered whether micromotor technology could be used inside the body; prior to January, this had never been done. The researchers figured that if they could design a nanobot able to locate damaged tissue in vivo, or inside the body, their micro creations could replace therapeutic drugs or even invasive surgery. Talk about the future!


The UCSD researchers constructed a micromotor based on existing zinc-powered, in vitro nanobots (those that operate outside the body). The team figured that a micromotor fueled by stomach acid could eventually work its way into the lining of the stomach, and they were right. In the trial on live mice, the novel micromotor zoomed through the stomach, eventually nestled into its lining and self-destructed. But don’t worry; it’s not what you think. While we could definitely see this nanotechnology being further developed for military use, UCSD’s version only delivers medicine. In this case, when the bot self-destructs it releases a therapeutic payload, which could mean the difference between treating a peptic ulcer with a micromotor pill and undergoing invasive surgery. You decide.


The micromotors are only 20 micrometers long (a fraction of the width of a human hair), but they can withstand the harsh conditions of the stomach for extended periods. As they slowly dissolve in the gastric acid, they leave behind a pure dose of medicine, and once fully dissolved, nothing toxic remains (so Mickey was okay after all). The UCSD team can’t take all of the credit, however. The idea behind the technology was first publicized by physicist Richard Feynman in 1959.


Feynman gave a talk in 1959 titled “There’s Plenty of Room at the Bottom,” in which he argued that future technologies should focus not only on improving orally administered medicine but also on developing tiny machines that could eventually replace invasive surgery altogether. As an example, he suggested that a tiny robot could be sent to find a blocked artery and not only discover it, but treat it, too. Feynman was ahead of his time, but his ideas helped set a vision for the future, if only at UCSD.


Researchers from the Department of Nanoengineering believe medicine administered directly into the lining of the stomach is wildly more effective than orally administered therapeutic drugs. The team says much work remains before micromotors are widely used in human subjects, but it hopes this experiment provides a bridge between nanotechnology and medicine for a multidisciplinary approach to treating disease.



See more news at:



Low-res capture of the Zipperbot in action (via Government Fishbowl)


The Massachusetts Institute of Technology, or MIT, is a force to be reckoned with in the world of technology. From developing tech-savvy urban dwellings to experimenting with engineered insulin, the institution is pushing the boundaries of the relationship between man and machine. In typical MIT fashion, the trendy technology hub is at it again: its Personal Robotics Group recently announced its newest development, a robot that can zip up your pants for you.


The tiny robot is called Zipperbot (very creative, we know) and, as its name suggests, it zippers. Not only does it zipper, but in a recent video released by the Personal Robotics Group, it also unzips (ooh… ahh…). That’s right, ladies and gents: with the Zipperbot you never have to zip, or unzip, your clothing ever again. In the video, a brave volunteer wears a yellow fabric sleeve equipped with one long, winding zipper. The Zipperbot completed what the group calls basic pattern self-assembly, zipping the cylindrical pattern closed and stopping on cue. The group also demonstrated what it calls coordinated movement [of] multiple Zipperbots, or in layman’s terms, two Zipperbots working at once to zip, and then unzip, the same zipper. What’s the point in that, you ask? We have similar questions…


While the announcement seems small, it’s part of MIT’s larger Sartorial Robotics project, which aims to unite man and machine in more social ways. In essence, we need not be afraid of robots, because they can help us… zipper our pants and such. The overall aim seems to be a world where robotic wearables are more widely accepted, and while you’d have to be the laziest person in the world to refuse to zip your own pants, the invention does have practical uses for the disabled. Amputees, paraplegics and workers who wear bulky suits, such as quarantine or astronaut suits, could genuinely benefit from a self-zipping zipper. The rest of us cool cats can just bask in our technological glory.


While the Personal Robotics Group is rather proud of its accomplishment, it isn’t the only team working on robotic clothing. Last fall, for example, researchers from Purdue University developed a technology that could potentially turn any fabric into a robot. The wearable technology enables fabric and other soft materials to bend in different directions, allowing it to slither ever so slowly. If you’re not a good dancer, perhaps this wearable is for you. While this and the Zipperbot are only concepts at this stage, some wearables are already on the market, including Rest Devices’ Mimo Baby, a wearable onesie for infants that monitors respiration and sends vital updates to your smartphone. If you’re sheepish about wearable technology, it may be time to get over it.


If one thing is certain, it’s that wearable robotic technology will only continue to advance. Today it’s autonomous zippers; tomorrow, a version of Google Glass that actually works (or so we hope).



See more news at:

