Willow Garage is a company known mostly for its robotic hardware and open-source software. Its most famous project has been the PR2 robot. Although highly advanced, the PR2's price and size were more than most companies or manufacturers could justify. Now a group of former Willow Garage employees has come together to create a company of their own, Unbounded Robotics, and they have just announced their new robot, the UBR-1.

 

This is not the first time former Willow Garage employees have banded together to start their own company. Not long ago, another group founded Suitable Technologies, which specializes in a telepresence robot dubbed Beam. Unlike those earlier efforts, Unbounded's robot competes almost directly with the PR2. The UBR-1 costs only $35,000, compared with the PR2's $400,000 asking price, a drop that makes it far more accessible to companies and, most of all, research institutions.

 

The Unbounded Robotics team consists of CEO Melonee Wise, CTO Michael Ferguson, lead systems engineer Derek King, and lead mechanical engineer Eric Diehr. Together they are hoping the drastically lower price will build a larger community around the UBR-1. In addition, the new bot will ship with ROS and the MoveIt! software, so purchasers will not have to develop their own algorithms for simple pick-and-place movements. Researchers working on robotic manipulation or business applications can get to work quickly instead of spending valuable time developing that software themselves.

 

During an interview with IEEE Spectrum, Derek King commented, “One of the things that having a little bit more of a business focus gives us is that we're committed to robust software, where it's about more than just a good demo. And that helps researchers when they're trying to build higher level applications on top of these more robust lower layers.”

 

As mentioned, the robot comes equipped with ROS. The UBR-1 features one arm with 7 degrees of freedom that can support a payload of 3.3 pounds. The bot stands anywhere from 32 to 52 inches tall, weighs 160 pounds, and moves at a speed of 1 m/s. Inside is a 4th-generation Intel i5 processor with 8 GB of RAM and a 120 GB hard drive. Running continuously, the robot can work for approximately 3.5 hours, and it takes about the same amount of time to recharge its batteries to 90% capacity.

 


Unbounded Robotics' UBR-1 (via UR)

 

The UBR-1 has also been built to support extra sensors and modifications which may be required for specific tasks. King stated, “The other thing we have on our robot are modularity points, so if you did want some higher-level sensor, you could just mount it and have the USB 3.0 connection to attach it to the computer. So for the people who want that extra sensor that they can't do without, it's easy to add to this platform. One of the other modularity points is the gripper: it can do 80 percent of the things you might want to do, but there's definitely a lot of people that want either compliant grippers, or more fingers, or suction cups, or electrostatic. So to handle that, we made the gripper as modular as possible for either of us or outside vendors or even customers to replace our gripper with their own designs.”

 

The robot also has three USB ports, one DisplayPort, and an Ethernet port. It is not yet available and is not expected to begin shipping until next summer. However, any institution looking to invest in a robot may well wait for its release, thanks to the UBR-1's robust build and low price. Since the robot is more accessible, it could also help advance the field and create a common platform for robotics researchers around the world. And for anyone worried about the robot rebelling, it has an emergency stop button easily accessible on its back.

 

C

See more news at:

http://twitter.com/Cabe_Atwell


M-Block robot. Angular momentum moves it around... genius! See the blocks jump into place and you will agree. (via MIT)

 

It began in 2011 when MIT senior John Romanishin presented his idea for self-assembling robots. “It can't be done” is the response he received countless times from colleagues and professors. Now, two years later, he has working prototypes and is preparing to present his small cube-shaped robots at the IEEE International Conference on Intelligent Robots and Systems.

 

The bots are called M-Blocks, and they can move and connect to one another without any external components. To move, each block contains a flywheel that can spin as fast as 20,000 revolutions per minute. When the flywheel is suddenly braked, its angular momentum transfers to the block's body, propelling the cube in a specific direction. In addition, the outside of each block is lined with corner and cylindrical magnets placed in specific positions, which allow the blocks to attach to other cubes and rearrange themselves efficiently.
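The flywheel-braking trick lends itself to some back-of-the-envelope physics. The masses, radius, and braking time in this sketch are illustrative assumptions, not published M-Block specs; only the 20,000 rpm figure comes from the article:

```python
import math

# Assumed (illustrative) parameters -- not official M-Block specs.
flywheel_mass = 0.02      # kg
flywheel_radius = 0.02    # m
brake_time = 0.01         # s, assumed time for the brake to stop the wheel
rpm = 20000               # max flywheel speed cited in the article

# Angular momentum of a solid disc: L = I * omega, with I = (1/2) m r^2
omega = rpm * 2 * math.pi / 60                     # rad/s
inertia = 0.5 * flywheel_mass * flywheel_radius**2  # kg*m^2
L = inertia * omega                                # kg*m^2/s

# Braking dumps that angular momentum into the cube body as a torque
# impulse, tau = L / dt. A short dt means a large torque -- enough to
# pivot the cube over an edge or even launch it into the air.
torque = L / brake_time
print(f"angular momentum: {L:.4f} kg*m^2/s, braking torque: {torque:.2f} N*m")
```

The key intuition the numbers show: even a tiny flywheel stores meaningful angular momentum at 20,000 rpm, and the shorter the braking time, the more violent the resulting hop.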

 

Kyle Gilpin, a postdoc at MIT collaborating with Romanishin, adds, “There's a point in time when the cube is essentially flying through the air, and you are depending on the magnets to bring it into alignment when it lands. That's something that's totally unique to this system.”

 

What makes the cubes so efficient at moving and connecting is the engineering behind the magnets' arrangement. On each edge of a cube sits a magnet that acts as a pivot when a cube rotates around that edge. When cubes are connected face-to-face, there is a small gap between their edges; but when one cube is rotating over the face of another, the edge magnets touch, creating a stronger magnetic connection that anchors the rotation. Furthermore, each edge also carries two cylindrical magnets, positioned slightly back from the edge, which help neighboring cubes stay connected. The poles of these cylindrical magnets naturally align with the poles of the magnets on another cube, allowing any cube to attach to any side of any other cube.

 

Hod Lipson, a robotics researcher at Cornell and an early critic of the M-Blocks, notes, “What they did that was very interesting is they showed several modes of locomotion. Not just one cube flipping around, but multiple cubes working together, multiple cubes moving other cubes – a lot of other modes of motion that really open the door to many, many applications, much beyond what people usually consider when they talk about self-assembly. They rarely think about parts dragging other parts – this kind of cooperative group behavior.”

 

The researchers' next step is to develop algorithms to guide a swarm of 100 M-Blocks. They would like to see the cubes autonomously assemble into a variety of structures and objects. They are also hoping to create special cubes with unique functions, for example a cube with an integrated camera or extra batteries. These cubes would not contain motors of their own but would instead be moved and guided by the general-purpose cubes. It will be interesting to see what the first application may be.

 


 

C

See more news at:

http://twitter.com/Cabe_Atwell


 

You would think a large insect-shaped robot would be a little creepy for children and parents. However, the insect-inspired robot Dash is actually the complete opposite: small, fun, fast, cute, and cheap. It was built by a team of PhD engineers from UC Berkeley whose work, previously funded by the National Science Foundation, centered on robots with biologically inspired mechanics. For example, they studied how insects run, glide, and use their tails to steer, and they investigated how a gecko-inspired adhesive could give their robots the ability to climb walls. While the current version only scurries around on the ground, they hope their crowdfunding campaign will help bring these features to future versions.

 

What they are currently offering is a beta model of the robot. It costs only $65 and is available to the first 1,000 backers through the crowdfunding site Dragon Innovation. They realized their little insect bot could be a great educational tool for children or even hobbyists. “Seeing Dash scramble across terrain brings a smile to your face. It’s as if he has a personality – to me, always in a hurry and very persistent. Dash is great for those who enjoy building and are interested in experimenting with robots that can evolve and interact with the environment. This is an exciting first step toward a new world of robotics,” commented Dave Vadasz, former VP of Corporate & Business Development at Palm.

 

The surprisingly cheap robot was made possible by a new manufacturing process. It starts with a single sheet of smart composite microstructures (SCM) consisting of cardboard, plastic, and adhesive. A laser then cuts designs into the sheet, slicing through only the cardboard layer of the SCM. This creates specific shapes that can be bent and folded together into a three-dimensional structure, much like origami. The result is a body that is extremely durable, easy to assemble, and can be put together in less than an hour.

 

The kit also includes a motor, a transmission, and plug-and-play electronics. The electronics include a Bluetooth 4.0 module used to communicate with smartphones; through an app on a phone or tablet, users can drive the robot around. In addition, the assembly uses an Arduino-compatible microcontroller, which lets more advanced users modify the bot's behavior or add sensors and other features.

 

Currently, the prototype only supports iOS devices with Bluetooth 4.0, though the team says it is working hard on support for Android devices with Bluetooth 4.0. They have already reached their funding goal of $64,000 with 20 days still to go. Dash could be a perfect gift for children interested in robotics: the electronics are plug-and-play, so no programming is required. Nevertheless, the engineers have made the system hackable, so when the time is right, a budding engineer can begin to program the robot on their own. And since the whole system runs on Arduino, probably the most accessible way for young people to learn about programming, its low cost could make it a very big hit for moms, dads, and children.

 

C

See more news at:

http://twitter.com/Cabe_Atwell


IIT’s robotic plant system. Tech mimics life... making better machines based on the tried and true (via IIT & Plantoid Project)

 

Robots come in many forms, based on the functions they perform. There are robotic arms used in manufacturing plants, small disks that vacuum floors, and even robots modeled on human infants, built to better understand human behavior. It’s not uncommon for robots to mimic various life forms: just take a look at DARPA’s pack mule LS3, which performs the same function as its mammalian counterpart, or Virginia Tech’s Cyro robotic jellyfish that’s part of the US Navy’s autonomous vehicle project. While those are based on animal forms, another academic institution has developed a robot based on plant life.

 

The Italian Institute of Technology has designed robotic plants that monitor soil using roots which function almost identically to those of their organic cousins. Known as the Plantoid project (headed by Barbara Mazzolai), the robotic plants are equipped with ‘smart’ roots that house tiny actuators to unwind material, allowing the root to dig through the soil. The roots carry bespoke soft sensors that guide them around obstacles, such as rocks or other impenetrable material, so each root can unwind safely and reach its target depth. Once the depth is reached, the root unfurls and extends into the surrounding area, much like a real root system, and begins to monitor the soil and the environment around the robotic plant. The artificial root is outfitted with an array of sensors capable of monitoring water, temperature, pH, nitrates/phosphates and even gravity!

 

The root system gains its power from the ‘flower-head’ atop its stalk, which is actually four mini solar cells powering the rotor that unwinds the root system. The Italian researchers wanted a better understanding of how organic root systems function: a real root bends as it grows to avoid and traverse obstacles, doing so by growing new cells on the side opposite the direction the root is heading while prioritizing several chemical and physical stimuli; it is not completely understood how it manages this. The researchers also want to understand how organic root systems interact with one another, which may point toward a new type of swarm intelligence (Day of the Triffids, anyone?).

Developing the robotic root system could lead to more energy-efficient robots capable of adapting to their environments, and to monitoring disaster zones or toxic sites so that first responders can find injured persons quickly and efficiently. Since the root can anchor itself in soil, it could be used in space to explore comets or other planets. Even the medical field could benefit from the technology behind the robo-plant, which could be adapted into a bendable, growing endoscope for exploring the human body. The Plantoid project was showcased last month at the Living Machines conference in London.

 

C

See more news at:

http://twitter.com/Cabe_Atwell


Trakdot device render. (via Trakdot)

 

Bob was nervous and excited at the same time, and he had good reason to be. He’s in line to be promoted to head of operations at his tech company’s headquarters in Singapore and is set to deliver a speech on the firm’s future. He’s never made the trip from Silicon Valley to the East Asian country before, but the journey will give him time to polish his speech before delivering it to the board members. Then it happens. He’s been staring at the luggage carousel for the better part of an hour after landing, and there is no sign of his bags. The problem is, Bob requires medication to keep his anxiety and anger under control; otherwise he becomes a manic, neurotic mess of a human being with a hair-trigger temper when under stress. His medications, only available in the US, are in his luggage, which is on its way to Sydney, Australia. Suffice it to say, Bob never got that promotion. If only his luggage had flown with him, that disaster could have been averted.

 

The story is fictional, but the prospect of airlines losing luggage is all too real for some, and it’s never a pleasant experience. To help passengers keep tabs on where their luggage is headed, GlobaTrac has designed a new device that lets users track their bags all over the globe. The company’s Trakdot box is outfitted with patented ‘micro-electronics’ that include cellular technology which ‘pings’ nearby cell towers to triangulate its position rather than relying on GPS. The device is equipped with a SIM card (using quad-band GSM, allowing it to function in most countries) that lets it connect seamlessly to the towers in much the same fashion as other mobile devices.

The device is FCC and FAA compliant, meaning it can safely be used on an airplane without interfering with the aircraft’s avionics. It functions normally on the ground and transitions to ‘airplane mode’ (a sleep state) in flight, using a built-in accelerometer that senses the plane’s acceleration during the take-off roll and as the plane leaves the ground. It does the same when landing: the sensor detects the reduction in speed, switches the device back on, and connects to cell towers in the immediate area. Once on the ground, the device sends an SMS or email (depending on preference) to the user’s mobile device reporting which airport their luggage is at (down to within 30 feet of its actual location).

GlobaTrac is selling the Trakdot for $49.95 US, and it requires an $8 activation fee as well as a recurring annual fee of $12.99 (akin to a service plan). While that may seem too high a price for some, the peace of mind may well be worth the fee. Be aware, however, that GlobaTrac is currently working through backorders, so those looking to acquire a Trakdot will have to wait a few weeks to receive theirs.
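The accelerometer-driven mode switching described above amounts to a simple state machine: watch for a sustained takeoff roll to shut the radio off, and a sustained landing rollout to wake it back up. Here is a minimal sketch of that logic; the thresholds, units, and sample counts are invented for illustration, since GlobaTrac has not published them:

```python
# Minimal sketch of accelerometer-driven flight-mode switching.
# Thresholds and sample counts are illustrative assumptions, not Trakdot specs.

TAKEOFF_G = 0.4       # sustained forward acceleration (in g) suggesting takeoff
SUSTAIN_SAMPLES = 5   # readings in a row required before switching modes

class LuggageTracker:
    def __init__(self):
        self.mode = "ground"   # "ground" = cellular active, "air" = radio off
        self._streak = 0

    def update(self, forward_accel_g):
        """Feed one accelerometer reading; switch modes on a sustained trend."""
        if self.mode == "ground":
            self._streak = self._streak + 1 if forward_accel_g > TAKEOFF_G else 0
            if self._streak >= SUSTAIN_SAMPLES:
                self.mode, self._streak = "air", 0     # takeoff roll detected
        else:
            # Strong deceleration after touchdown wakes the radio back up.
            self._streak = self._streak + 1 if forward_accel_g < -TAKEOFF_G else 0
            if self._streak >= SUSTAIN_SAMPLES:
                self.mode, self._streak = "ground", 0  # landing rollout detected

tracker = LuggageTracker()
for g in [0.5] * 5:        # five consecutive high-acceleration readings
    tracker.update(g)
print(tracker.mode)        # -> air
```

Requiring several consecutive readings is what keeps a baggage handler's shove from being mistaken for a takeoff roll.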

 

Travelling internationally may render the system less useful, as cell and data coverage in baggage areas is often practically non-existent. More control of your world, or a new avenue for illicit activities?

 

C

See more news at:

http://twitter.com/Cabe_Atwell


Engineered Arts’ Robothespian robot. (via Robothespian)

 

Comedians know all too well how hard it can be to win over an audience. Being funny is tough, and many people simply can’t make the cut even when they think their jokes are funny. Robots have it even tougher, which is perhaps why there are hardly any robotic comedians outside the fictional realm. Sure, there was Johnny 5 from Short Circuit (really not that funny) and Bender B. Rodriguez from Futurama, but those bots are funny only on screen and won’t be doing live stand-up anytime soon.

 

Compared to the thousands of human comedians around the world, there are only two robotic jesters, both of which have recently cut their teeth (as it were) playing to small audiences. The first, created by Engineered Arts, is known as Robothespian; humanoid in appearance, it was designed for social human interaction in public settings. The robot isn’t very autonomous: its movements are pneumatically actuated and directed from a tablet PC. The same goes for its speech, although it runs a rudimentary AI. It is programmed with a set of responses to any given question, can be taught new words and phrases, and can even scour the internet for answers to the questions it is asked. If that weren’t enough, Robothespian can act as a telepresence robot, letting users see and hear what people are doing around it while providing vocal feedback through a microphone.

Robothespian’s comic debut came at London’s Barbican Centre in early August of this year as the follow-up act to comedians Andrew O’Neill and Tiernan Douieb. As expected, it garnered only a few laughs from its pre-programmed jokes, which included references to the lackluster Windows 8 and its shortcomings, as well as R2-D2’s use of swear words that have to be ‘bleeped out’ for the kids.

 

While the jokes themselves were not that notable, researchers Pat Healey and Kleomenis Katevas (Queen Mary University of London) are using the audience’s reactions to help further develop the robot's social skills. The pair used cameras to track the audience's facial expressions, gaze, and head movements, comparing the reactions drawn by the robot with those drawn by its human counterparts. Of course, like any comedy show, it takes two or more comedians to get an audience rolling; in this case the first performer was Heather Knight's Nao robot, aptly named Data. Like Robothespian, this robot was pre-programmed with jokes, though it delivered them with better timing than its big brother. Knight used the robot in an experiment similar to Healey and Katevas's, aiming to bring social interaction with robots to a more personal level. In her show, audience members responded to jokes by raising colored paper to signify whether they liked or disliked each one. Data reads those cards and, based on the majority color shown and the amount of applause or booing, adjusts its decision-making process to deliver better jokes. While both robots have a long way to go in terms of comedy, there’s no doubt that robots will keep learning from these engagements and become more socially interactive with humans.
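The card-counting feedback loop Knight describes can be sketched as a simple score update: each joke's running rating is nudged toward the audience's latest response, and higher-rated jokes get told more often. The scoring weights below are guesses at the idea, not Knight's actual algorithm:

```python
# Toy sketch of audience-feedback joke selection, loosely modelled on
# Heather Knight's card-and-applause setup. Weights are illustrative guesses.

def audience_score(green_cards, red_cards, applause_level):
    """Combine the card majority and applause (0..1) into one rating."""
    total = green_cards + red_cards
    card_vote = (green_cards - red_cards) / total if total else 0.0
    return 0.7 * card_vote + 0.3 * applause_level

class ComedyBot:
    def __init__(self, jokes):
        # Every joke starts with a neutral running score.
        self.scores = {joke: 0.0 for joke in jokes}

    def next_joke(self):
        # Prefer the joke with the best track record so far.
        return max(self.scores, key=self.scores.get)

    def record_reaction(self, joke, green, red, applause):
        # Move the running score halfway toward the latest reaction.
        latest = audience_score(green, red, applause)
        self.scores[joke] += 0.5 * (latest - self.scores[joke])

bot = ComedyBot(["robot walks into a bar", "binary pun", "Windows 8 jab"])
bot.record_reaction("binary pun", green=18, red=2, applause=0.9)
print(bot.next_joke())   # -> binary pun
```

A real system would also need exploration (occasionally trying low-rated jokes) so one bad crowd doesn't retire a joke forever.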

 

(Thanks, New Scientist)

 

C

See more news at:

http://twitter.com/Cabe_Atwell


Georgia Institute of Technology’s pressure sensor. (via Georgia Institute of Technology)

 

 

We interact with pressure sensors almost daily, especially when shopping, since paying with a credit or debit card often means signing a pressure-sensitive pad (unless the store still uses paper). The problem with checkout pads is that they degrade with repeated use, becoming scratched and requiring more pressure to produce an increasingly obscured signature. A new development in sensor technology may hold the solution to that problem, and to a host of others, including sensitive skin for robots. Researchers at the Georgia Institute of Technology have designed a new pressure-sensitive sensor that mechanically converts pressure directly into light signals which can be optically processed.

 

The process of turning pressure into light relies on the piezo-phototronic effect, in which mechanical stress generates a charge polarization. The Georgia Tech researchers incorporated that effect into their newly developed pressure sensor using thousands of individual zinc oxide nanowires grown chemically on top of a gallium nitride film. When subjected to pressure, the wires produce visible light on the opposite side of the film, and the amount of pressure determines the amount of light emitted. Under strain, each wire develops a piezoelectric charge at both ends, which alters the wire's band structure and lets electrons linger at the ends of the wire longer. Grouped together, the nanowires form a kind of pixelated electroluminescent signal that can be read with on-chip photo-optics and transmitted to a computer for processing. Remarkably, by growing the nanowires in a pixelated pattern the sensor achieves an astoundingly high resolution of 6,300 dpi! The researchers tested the sensor by pressing letters into the top of the film, which displayed the same letters on the opposite side, and found it had a response time of only 90 nanoseconds.
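That 6,300 dpi figure translates directly into a physical spacing between nanowire "pixels":

```python
# Convert the reported 6,300 dpi sensor resolution into a pixel pitch.
MM_PER_INCH = 25.4
dpi = 6300

# Micrometres between adjacent nanowire emitters.
pitch_um = MM_PER_INCH / dpi * 1000
print(f"pixel pitch: {pitch_um:.2f} um")   # roughly 4 um between emitters
```

At about 4 micrometres per pixel, the sensor resolves features far finer than the ridges of a fingerprint (typically a few hundred micrometres apart), which is why biometrics is a natural application.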

 

All of the nanowire emitters could be seen simultaneously from both sides, and the sensor was run through 25 on/off cycles to check for light degradation; its intensity fluctuated by only 5% of the overall base signal. While the sensor is an amazing accomplishment in itself, the team is already looking for ways to refine the design and push the resolution even higher, which they say could be done by reducing the thickness of the nanowires, allowing more wires to be grown on the film using a higher-temperature fabrication process. Besides serving as a pad for biometrics or signatures at checkout lines, the sensor could be adapted into a type of e-skin for robots, providing an artificial sense of touch as well as a new level of human-machine interface. Georgia Tech surmises that the technology could be commercially available in the next 5 to 7 years, which isn’t very long considering it is still in its infancy.

 

C

See more news at:

http://twitter.com/Cabe_Atwell


QBotix Solbot R-225 robotic farm hand. Turns the solar panels as needed... (via QBotix)

 

Solar parks, or PV farms, are on the rise as more countries adopt renewable alternatives to fossil fuels. To generate a significant amount of energy, these installations tend to be massive in scale, so maintenance is tedious, and the arrays must be precisely calibrated to track the position of the sun. That tracking is usually automated but sometimes requires a human hand, and for engineers the recalibration process is both costly and time-consuming, offsetting some of the savings of using solar power.

To address these issues, QBotix has devised an automated system that employs a pair of full-time robots to perform most of the functions those engineers do in the field, at reduced cost and working around the clock. Conventional solar arrays are typically mounted on metal bracing over a concrete base, with an automated pulley system to orient the panels toward the sun. Since the sun moves across the sky at roughly ten degrees every 40 minutes, the panels must be adjusted continually, and large conventional automated systems have a tendency to falter on accuracy. This is where the QBotix Solbots shine: the pair of robotic maintenance workers carry sensors that pinpoint the sun's position to within 1 degree, and they traverse the paneled area, adjusting each panel every 45 minutes.
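The "ten degrees every 40 minutes" figure follows directly from the sun's apparent motion, and it also bounds how far a panel can drift between the Solbots' 45-minute visits:

```python
import math

# The sun's apparent motion: 360 degrees in 24 hours.
DEG_PER_MINUTE = 360 / (24 * 60)      # 0.25 deg/min

drift_40 = DEG_PER_MINUTE * 40        # matches the ~10 deg / 40 min in the text
drift_45 = DEG_PER_MINUTE * 45        # worst-case misalignment between visits

# A panel pointing drift_45 degrees off the sun loses roughly
# 1 - cos(drift_45) of its direct-beam output.
cosine_loss = 1 - math.cos(math.radians(drift_45))

print(f"{drift_40:.1f} deg per 40 min; up to {drift_45:.2f} deg and "
      f"~{cosine_loss:.1%} direct-beam loss between 45-minute visits")
```

The cosine term shows why a 45-minute service interval is tolerable: even at the worst moment the misalignment costs only a couple of percent of direct output.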

 

The company’s RTS (Robotic Tracking System) removes the individual motors and actuators normally needed for every panel and replaces them with a monorail-style track routed past each panel. A pair of Solbot R-225 robots traverse the track and adjust each panel for optimum tracking of the sun as it arcs across the sky, which QBotix says makes the system about 22 times more efficient than other automated systems. The robots run monitoring software that provides detailed data on the overall system, making it easy for engineers to get a clear picture of its performance. They are also incredibly robust, being both water- and dust-resistant, and can operate in harsh environments. Solbots are battery powered and capable of autonomously recharging themselves at charging stations located on the track, which reduces the routine maintenance needed to keep them up and running. Each robot can maintain 340 kW of capacity on one system, which translates to about 1,200 individual solar panels, making it far more efficient than human workers.

QBotix states that its RTS gives companies dealing in solar power a cost-effectiveness ratio that justifies choosing renewable energy over fossil fuels. The one problem with solar as a resource is that it is weather dependent: it is only efficient when the sky is clear and the sun is shining. Otherwise, no matter how many panels a massive PV farm has, if there is no sun there is no DC output, the system can’t justify its cost, and those robots just might find themselves in the unemployment line.

 

C

See more news at:

http://twitter.com/Cabe_Atwell


BrewBit Model-T (via Kickstarter)

 

No one actually knows just how many different kinds of beer there are around the globe, though some of the more modest estimates put the number at well over 20,000. (I made my own beer once, “Dos Cabes,” so add another one to the count.) That’s a lot of beer, and most of it probably falls into the micro-brew category, which has grown ever more popular over the last couple of decades. Even with all the varieties on the market, some prefer to brew their own at home. The art of home brewing takes a lot of skill and finesse to craft beer of exceptional quality, which cannot be accomplished without the right tools.

 

To successfully brew the ‘brew’, crafters need a glass or food-grade airtight container along with a stopper in the form of a fermentation lock, which lets carbon dioxide vent during fermentation. The key to great beer is optimum temperature, which must be consistently maintained or the final product can be ruined. To help home brewers hold a steady temperature, Inebriated Innovations (sounds like an Irish technology firm) has designed and developed a device that maintains an optimal temperature even when crafters are not at home. Known as the BrewBit Model-T, it functions as a wireless temperature controller for the fermentation process.

 

The 3D-printed device is equipped with two temperature probes and provides independent power control for both heating and cooling, with two power outlets for independently managing two separate fermentation containers. Built-in Wi-Fi lets users control the brewing process over an internet connection, handy for those who can’t spend much time at home. BrewBit’s hardware and software are open source, so users can modify and customize almost everything to suit their needs, including temperature profiles that can be loaded on the fly for any configuration. Don’t leave the house that often? No problem: the device has a built-in screen for selecting profiles and displaying accurate, independent temperature readings when different brewing containers are in use. The unit even sends alerts by SMS or email if something goes wrong during brewing or if the device itself malfunctions.

Inebriated Innovations is currently crowd-funding the BrewBit on Kickstarter in order to fully develop the device and get it to manufacturers before a vast amount of home-brewed beer is obliterated. The company has passed its pledge goal of $80,000 US, with backers providing over $96,000 during the run. Those looking to acquire a BrewBit Model-T when it is ready for market can pledge $160 and up; be forewarned, however, that while the device will help in crafting great beer, it does not come with beer goggles.
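Under the hood, a fermentation controller like this is essentially a thermostat with hysteresis: it switches the heating or cooling outlet based on how far the probe reading strays from a setpoint, with a deadband to avoid rapid toggling. Here is a minimal sketch of that control loop; the setpoint and deadband values are arbitrary examples, not BrewBit defaults:

```python
# Minimal hysteresis (bang-bang) temperature control loop, the kind of
# logic a fermentation controller runs. Values are illustrative only.

SETPOINT_C = 19.0   # e.g. a typical ale fermentation temperature
DEADBAND_C = 0.5    # tolerated drift before an outlet switches on

def control_step(probe_temp_c, heating_on, cooling_on):
    """Return updated (heating_on, cooling_on) for one probe reading."""
    if probe_temp_c < SETPOINT_C - DEADBAND_C:
        return True, False              # too cold: energize the heating outlet
    if probe_temp_c > SETPOINT_C + DEADBAND_C:
        return False, True              # too warm: energize the cooling outlet
    # Inside the deadband: keep the current state to avoid rapid switching.
    return heating_on, cooling_on

heat, cool = control_step(17.8, False, False)
print(heat, cool)   # -> True False (heater outlet energized)
```

Keeping the current state inside the deadband is the hysteresis: without it, sensor noise near the setpoint would chatter the outlets on and off, which is hard on compressors and heaters alike.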

 


 

C

See more news at:

http://twitter.com/Cabe_Atwell


NeverWet makes for a water-resistant surface. It isn't a miracle. (via Rust-Oleum)

 

Man has endeavored to waterproof things since the first rainfall. A few hundred years ago, pitch and tar were applied to protect boat hulls and the roofs of houses against the elements; tar is still used to waterproof foundations. Over the years, many different methods and products have been developed to keep moisture out of everything from clothing to electronics: paper, cosmetics, tents, wood, and canvas have all been modified in one way or another to withstand water and the elements. Within the last few decades, electronics have also received waterproofing through several methods, including gaskets and O-rings, protective casings and containers, and polyurethane-based bags. Since the introduction of mobile and smart devices, there has been a push in the waterproofing industry to find ways of using those devices in adverse weather, and even underwater, without destroying the electronics outright.

 

One of the new methods emerging on the market is the spray-on application, which purportedly seals electronic devices so they can be submerged in water and still function. That would seem great for anyone who’s ever dropped a device in a puddle or, even worse, the toilet. In 2011, a start-up out of Lancaster, PA made headlines with a silicone-based spray that seemingly keeps everything dry even when exposed to torrents of liquid. Called NeverWet, from Ross Nanotech, the spray is actually intended to seal wood, metals and plastic, along with vinyl, PVC and asphalt, against liquids; the company does not recommend using it on electronics. NeverWet is a superhydrophobic material whose molecules repel those that make up water and liquids such as oil: droplets simply roll off the surface, beading up at contact angles of 110 degrees and over and sliding away at tilt angles of less than 10 degrees.

Ross collaborated with Rust-Oleum to bring NeverWet to market as a two-part application, a base coat and a top coat that dries to a ‘frosted clear’ finish. In a video presented by LancasterOnline (a news outlet), the makers of NeverWet showed what the spray is capable of, coating cardboard boxes, clothing, and even an iPhone 4 with seemingly spectacular liquid-repelling results. But as with all advertisements and infomercials, demoing a product is one thing; using it in the real world is something else altogether.

 

Several internet sites (primarily Slate and Gizmodo) ran their own reviews of the product, and their results were slightly different from those in the company’s demo video. Yes, NeverWet works, and works well, to a point. The spray needs to cure for around 12 hours before being subjected to water, meaning users can’t just ‘spray and play’ without disastrous consequences. When cured, the application dries with a white film, so using it on dark colors will show the coat more prominently. While some found that NeverWet could work on clothing and keep it clean, others found that it didn’t work so well on fabrics. The same mixed results applied to electronics: thumb drives sprayed with the coating and submerged in water fared well afterwards with no ill effects (granted, they were only submerged for several seconds), but testing on an iPhone resulted in a non-functioning phone after a few seconds of submersion, nothing like the promotional video. Reviews also showed that the spray only lasted a few seconds to a few hours before wearing off, leaving behind a white, semi-sticky residue that’s a nightmare to clean off. Overall, those looking to work with their laptops while scuba diving should steer clear of NeverWet, while those looking to waterproof their white cinder blocks could probably benefit from the spray-on application.

 

C

See more news at:

http://twitter.com/Cabe_Atwell


(via Kibo Robot Project)

 

Sending robots into space is nothing new; NASA sent its Robonaut 2 to the ISS back in February of 2011. That robot’s primary function is to conduct repairs, either autonomously or by remote control via personnel on the ISS or at ground control stations. Even though it resembles the human form, the robot cannot interact with people on a human level, such as having conversations with its counterparts. On the other hand, Japan’s first robot astronaut, Kirobo, can have extensive conversations, but it’s not capable of performing repair operations in or out of the station. The tiny robot can respond to natural language, which is a feat in itself, and it was designed to be a companion of sorts to astronaut Koichi Wakata, who is set to command the ISS. Kirobo is the end result of the Kibo Robot Project, a collaboration of several robotics companies and institutions that includes dentsu (project manager), the University of Tokyo, JAXA and Robo Garage (robot design). Toyota was also part of the collaboration and developed the software that allows Kirobo to understand natural speech and perform voice recognition.


The 13-inch tall, 2.2-pound robot is capable of a full range of motion, allowing it to walk, move its arms and turn its head much in the same fashion as humans do, but it’s the ability to interact through speech that sets it apart from other robots in its class. In fact, that is its primary function: to act as an assistant for various experiments conducted by astronauts on the ISS. How can it assist being so tiny? It will serve as a voice and video recorder (log) during the experiments to be conducted in low Earth orbit. It will also relay messages from crewmembers on the station as well as communication from those back on Earth. The robot is also part of a feasibility study on how well Kirobo can provide emotional support for astronauts conducting long missions in space, much like the AI GERTY in the movie Moon.

Kirobo is capable of not only conducting natural speech but is also outfitted with facial recognition software as well as voice recognition abilities, allowing the robot to recognize who it is speaking with and to build on its speech using past conversations. Like most astronaut crews, agencies maintain a backup crew on standby in case any problems arise, and Kirobo is no exception to the rule. Back on the ground at Japan’s JAXA headquarters is Kirobo’s twin brother, Mirata, who functions on standby much like its human counterparts. The robot is identical to its twin and can perform all of the functions of its astronaut brother, including moving in a low-gravity environment. According to their primary creator, Associate Professor Tomotaka Takahashi, it took over nine months of development and dozens of tests (including parabolic flight tests) to verify that the robots could function reliably in a zero-G environment. If all goes well with Kirobo on the ISS, it could lead to the regular inclusion of robots on future missions into space as skilled companions for astronauts.

 

C

See more news at:

http://twitter.com/Cabe_Atwell

 

While 3D printing dominates the news lately, not much is heard about the venerable CNC machine anymore. In fact, CNC machines haven't changed much: workers design what they need on a computer and then bring the material to the machine to be cut. In a recent move to invigorate the CNC world, Shopbot decided users should be able to bring the CNC machine to the materials. They created the Handibot, which functions as a portable CNC machine.

 

The Handibot is built and designed to cut shapes designed through apps. Using a computer or a smart mobile device, the user can specify the shape, dimensions, or figures that will be cut. Once the specifics have been decided, the user only needs to load the information into the Handibot and press a start button. One feature that could either benefit or work against Shopbot is control through multiple apps: rather than using a single app for all design purposes, multiple apps will be available for purchase, allowing users to simply load specific shapes or designs right into the machine. This is an innovation in its own right.

 

Shopbot says virtually anything can be cut using the Handibot, with a step resolution of 0.00025 inches. It is also useful for surfaces that may be bigger than the Handibot's cutting area. The default maximum cutting volume is 6 x 8 inches, with a depth of 4 inches. Using jigs and additional motors, a larger automated system can be put together to cut larger shapes out of larger pieces; this is possible because the Handibot can control extra external motors.
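A quick back-of-the-envelope check of the figures quoted above: at a 0.00025-inch step resolution, the default 6 x 8 inch cutting area spans tens of thousands of discrete positions on each axis. A minimal sketch:

```python
# Illustrative arithmetic only, using the specs quoted in the article:
# 0.00025 in per step over a 6 x 8 in work area.
STEP_IN = 0.00025              # inches per step
AREA_X_IN, AREA_Y_IN = 6.0, 8.0

steps_x = round(AREA_X_IN / STEP_IN)
steps_y = round(AREA_Y_IN / STEP_IN)
print(steps_x, steps_y)        # 24000 32000
```

In other words, 24,000 addressable positions across the short axis and 32,000 across the long one, which is why Shopbot can claim such fine detail from a toolbox-sized machine.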

 

In addition, Shopbot decided to use crowdfunding to get the Handibot into production. “We are seeking crowd funding to support development of the Handibot tool and its ecosystem of job-related and task-based software applications. As a small manufacturing company, we believe the best way for us to quickly and efficiently develop this tool and grow its library of apps is to reach out to the greatest possible number of people for help,” Shopbot wrote on its Kickstarter page.

 

Over the period of a month, the Handibot had no difficulty meeting its goal: against a target of $125,000, it collected $349,498. The cheapest available price for one of the devices was $1,995, offered to the first 10 backers who pledged; after that, it cost additional backers $2,400 to purchase a Handibot. While the uses of the Handibot truly are limitless, I don't see many hobbyists and homeowners purchasing one for themselves. As much as they might want one, the expected price of $2,500 is a bit expensive for a new shop tool of that size. On the other hand, this can be a great investment for a business. Using one of these machines, any business can greatly expand its capabilities and produce amazing work. Portable, on-the-spot manufacturing is surely going to open doors for small business.

 

 

C

See more news at:

http://twitter.com/Cabe_Atwell


FragWrap robot from the UbiLab at Keio University

 

Robots are fast becoming a normal part of our daily lives, with most programmed to perform a routine function of one kind or another. We now have robots that can assemble automobiles or work on an assembly line, while others autonomously rove around the home vacuuming floors, cleaning gutters and even washing windows (iRobot’s Roomba and Looj, and Ilshim Global’s Windoro, respectively). While those do indeed serve basic functions, other robots set to hit the market perform tasks more out of the ordinary. Presented here are some of the more unusual.

 

1: Ubilab’s FragWrap

 

Engineers from Keio University’s HT Media and System Lab have designed a robot whose sole function is to make the air in our homes smell pleasant. Far from being a simple air freshener, their FragWrap (Fragrance Wrapper) robot is engineered to ‘blow’ large, basketball-sized bubbles into the air, which, when popped, release a pleasant fragrance. The robot is outfitted with an Arduino unit that handles everything from movement to voice-command instructions. Simply tell FragWrap which scent you prefer and it begins the bubble-building process, using a syringe-type device to extract the requested scent. The scent is injected into a fan chamber, where it is mixed with fog (from a fog machine). At the same time, soap is applied to a ring positioned below the fan chamber, through which the fragrance mixture is pushed, making for one huge fragrant bubble that releases the scent once popped. To add some dramatic effect, the engineers incorporated LED lights in a ‘stacked ring’ formation to light the massive bubble once it’s dropped from the robot. Sure, the robot itself is rather large, but that’s to be expected from something that produces a bubble of that magnitude. Glade Plugins have nothing on this robot. The engineers plan to demonstrate their FragWrap robot at this year’s Ubicomp conference in Zurich, Switzerland.
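The bubble-building sequence described above can be sketched as a simple control flow. The scent table, step names and ordering here are illustrative only; the actual Arduino firmware is not public:

```python
# Toy sketch of the FragWrap bubble cycle as described in the article.
# Scent names and reservoir indices are invented for illustration.
SCENTS = {"lavender": 0, "citrus": 1, "mint": 2}

def bubble_cycle(scent):
    """Walk through one bubble-building cycle for the requested scent."""
    if scent not in SCENTS:
        raise ValueError(f"unknown scent: {scent}")
    steps = [
        f"draw '{scent}' from reservoir {SCENTS[scent]} with the syringe",
        "inject the scent into the fan chamber and mix it with fog",
        "wet the soap ring below the fan chamber",
        "push the fragrant fog through the ring to inflate the bubble",
        "light the bubble with the stacked LED rings and release it",
    ]
    for step in steps:
        print(step)

bubble_cycle("citrus")
```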

 


IBIS keyhole surgical robot

 

2: Tokyo Institute of Technology’s IBIS pneumatic keyhole surgery robot.

 

This robot will not blow bubbles; however, it is adept at surgical procedures, specifically laparoscopic or ‘keyhole’ surgery performed in the abdominal or pelvic areas. Roboticists from the Tokyo Institute of Technology are developing their IBIS pneumatic keyhole surgical robot to give surgeons a new level of precision over other surgical robots currently on the market, such as Intuitive Surgical’s da Vinci Surgical System. The IBIS surgical robot is actually two robots that work in tandem: a ‘master’ unit that the surgeon controls and a ‘slave’ that performs the surgery. The surgeon views the patient through a stereoscopic 3D display at the master unit and controls the slave robot using hand-operated manipulators, which drive the slave’s pneumatically powered arms through precise surgical movements. The arms are what separate this robot from the others: because they are actuated pneumatically, they provide a force-feedback sensation to the surgeon when they touch an object. Essentially, the air pressure at the end of the robot’s appendages is used to estimate the amount of force being applied, which is critical when interacting with soft tissues. The engineers state that their IBIS robot is also cheaper to manufacture than the da Vinci (around 1/3 to 1/10th the cost), allowing more facilities to acquire their own affordable units.
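The pressure-based force estimate mentioned above boils down to F = P × A for an idealized piston. A minimal sketch, ignoring friction and dynamics (the real IBIS controller is not public, and the cylinder size here is an assumed value):

```python
import math

def estimate_tip_force(pressure_pa, cylinder_diameter_m):
    """Estimate actuator-tip force as pressure times piston area
    (F = P * A). A simplified model for illustration only."""
    area = math.pi * (cylinder_diameter_m / 2) ** 2
    return pressure_pa * area

# e.g. 100 kPa of gauge pressure acting on a 10 mm cylinder:
force_n = estimate_tip_force(100e3, 0.010)
print(f"{force_n:.2f} N")  # 7.85 N
```

Because pressure maps to force this directly, measuring the air pressure at the actuator gives the controller a force reading to feed back to the surgeon's hands without needing a force sensor at the tool tip.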

 


University of Pennsylvania’s XRL RHex robot.

 

3: RHex robot

 

While the IBIS robot may be adept at performing surgery, it isn’t very mobile and won’t be traversing rugged terrain anytime soon. Sure, there are other bots developed to perform their functions on hostile terrain, but those are usually outfitted with either legs or tank treads of some kind, making them somewhat slow and encumbered. This isn’t the case for the University of Pennsylvania’s (Kod*Lab) RHex hexapedal robot, which can maneuver around and over objects quickly and efficiently. The robot is able to do so because of its unusually designed legs and software that allow it to analyze the terrain before it and then move accordingly. If the XRL RHex looks somewhat familiar, that’s because the RHex platform was first introduced roughly 10 years ago and was implemented by Boston Dynamics for one of DARPA’s many US Army projects a few years back.

Researchers from Penn’s Kod*Lab have modified the original design with a whole new body and frame composed of carbon fiber along with aluminum plating, making it lighter than previous versions. The XRL version still uses a total of six independent motors to actuate the robot’s legs and can be adapted with a host of sensors depending on the task at hand. Previous versions were outfitted with mil-spec railing to accommodate a host of weaponry, along with IR sensors for working in hostile environments. The XRL, however, isn’t a fighting machine but rather is intended to be a rescue bot, and therefore has its legs outfitted with force and power sensors that allow the robot to determine the kind of surface it’s on (gravel, sand or even glass) and adjust its stride accordingly. The robot is also outfitted with a scanning laser to further determine the terrain layout, enabling the XRL to move at a brisk and steady pace.
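The sensor-driven stride adjustment described above can be illustrated with a toy decision rule. The thresholds, units and gait names here are all invented for the sketch; the actual Kod*Lab controller is far more sophisticated:

```python
def adjust_stride(ground_stiffness, slip):
    """Pick a stride from leg-sensor estimates of ground stiffness (N/m)
    and foot slip ratio. Purely illustrative thresholds."""
    if slip > 0.3:
        return "short, high-frequency strides"   # loose gravel or sand
    if ground_stiffness > 50_000:
        return "long, low-frequency strides"     # hard floor or glass
    return "default walking gait"

print(adjust_stride(60_000, 0.05))  # long, low-frequency strides
print(adjust_stride(10_000, 0.5))   # short, high-frequency strides
```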

 


AeroSee project with UAVs from E-Migs.

 

4: AeroSee

 

Robots are not restricted to legs alone for moving to and fro; they can also take to the air to perform a host of functions, including S&R (search and rescue) missions like those being fielded by researchers from the University of Central Lancashire. The researchers have successfully conducted S&R tests through their AeroSee project using a series of drones supplied by E-Migs. Hiking through wooded and mountainous areas offers up some beautiful scenery, but it can also be hazardous, with scores of people becoming lost or hurt every year. UCLan developed a way to ‘crowd-source’ search and rescue operations through their AeroSee initiative: a series of camera-equipped UAVs is piloted over a large area where help is needed. The video taken by the drones is transmitted to ground stations, where online users can peruse the images and tag any video frame in which they spot the victim or telltale signs of their location. In a recent test of the AeroSee project, 350 online users from all over the globe took part in trying to identify mock persons in distress. The AeroSee website was inundated with 211,000 tags from the participants, who successfully identified the lost persons just under 5 minutes after the drones were launched! Not bad, considering rescue personnel with search dogs can take days, weeks or even months to find (or not find) those who’ve become lost or hurt in the field.
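The crowd-sourced tagging idea boils down to consensus voting: a frame flagged independently by enough users is worth a rescuer's attention. A minimal sketch (AeroSee's actual pipeline is not public, so the data shapes and threshold here are assumptions):

```python
from collections import Counter

def consensus_frames(tags, min_votes=3):
    """Given (user, frame_id) tag submissions from many online viewers,
    return the frame ids that enough independent users flagged."""
    votes = Counter(frame for _user, frame in tags)
    return sorted(f for f, n in votes.items() if n >= min_votes)

tags = [("u1", 42), ("u2", 42), ("u3", 42),
        ("u1", 7), ("u4", 99), ("u5", 99)]
print(consensus_frames(tags))               # [42]
print(consensus_frames(tags, min_votes=2))  # [42, 99]
```

With 211,000 tags from 350 users, even a crude threshold like this would quickly concentrate attention on the handful of frames that actually show a person.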

 

Robots in general have gotten a bad rap thanks to Hollywood movies and sci-fi books over the decades, but those found in the real world are very different from the stereotypes. The vast majority of robots currently in the workforce function to help humankind, even those employed by the military: EOD robots disarm IEDs and other explosives, SOF robots provide intel on enemy weapon emplacements, and drones provide enemy locations, all of which help keep soldiers out of harm’s way. Robots in the civilian workforce help with production efficiency, and still others help with domestic chores. The robots currently being developed will only further help in making our lives safer, more productive and more enjoyable in the years to come.

 

C

See more news at:

http://twitter.com/Cabe_Atwell


WaterColorBot. Looks like an amazing tool for people who have low hand/arm dexterity. They can still create! (via WaterColorBot's kickstarter)

 

Back in April of this year (2013), the White House hosted a national science fair in conjunction with the STEM (Science, Technology, Engineering and Math) program that brought 100 students to Washington from all over the nation. Showcased was a mix of different projects, ranging from oil-producing algae to UUVs, as well as a myriad of game and app coders, rocket designers and even city planners. One student focused on creating watercolor artwork and was able to incorporate it seamlessly into today’s technology, which was demonstrated for President Obama in the State Dining Room. The student, Sylvia Todd (AKA Super Awesome Sylvia on her YouTube channel), designed and developed her WaterColorBot with the help of Evil Mad Scientist Laboratories, and it functions much like it sounds. Her initial goal was to design an art robot to enter into the 2013 RoboGames in the Artbot Painting category, where she took the silver medal behind Poland-based KoNaRobotic’s Calliope sketch robot. Realizing that the bot had potential beyond a single project, the team (both Sylvia and EMSL) decided to develop it into a stand-alone kit, which was also showcased at this year’s Maker Faire.

 

WaterColorBot is in essence a CNC (computer numerical control) machine, and can function as one to boot, that is able to paint in watercolors, taking its input from paint-based software on desktop PCs, laptops or mobile devices. The bot functions much like a pen plotter (or Etch-a-Sketch) and uses two motors to move the paintbrush carriage along the X and Y axes. The carriage is also outfitted with a tiny servo that raises or lowers the brush depending on the task. Motion is controlled through an onboard EiBotBoard 2.0 USB motor controller that takes its instructions from files in the SVG format, though a number of other formats (PDF, Illustrator, etc.) may be used after being converted with Inkscape. The WaterColorBot is currently being funded through Kickstarter in order to get the kit manufactured en masse, so that other artists can get their respective creations showcased on refrigerator doors all over the globe; the initial funding goal of $50,000 US has been surpassed, with a total of over $75,000 raised (not bad for some starving artists). Those interested in getting their hands on a first-production-run bot can pledge $295 or more (the $275 tier has sold out at this time) and will receive one WaterColorBot kit (some assembly required). Backers at that price point should receive theirs by mid-December, just in time for the holidays (you may have enough time to use it and give the gift of art!). Lazy automation or artistic tool?
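Plotter-style motion of this kind reduces to converting each point along a path into step counts for the two motors. A minimal sketch; the steps-per-inch figure is an assumed value for illustration, not the WaterColorBot's actual calibration:

```python
def point_to_steps(x_in, y_in, steps_per_inch=2000):
    """Convert a point in inches to (x, y) motor step counts for a
    simple X/Y plotter. steps_per_inch is a made-up calibration."""
    return round(x_in * steps_per_inch), round(y_in * steps_per_inch)

# Absolute step targets for tracing a 1-inch square:
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
for x, y in square:
    print(point_to_steps(x, y))
```

In practice, software like Inkscape flattens SVG curves into short line segments, and a controller such as the EiBotBoard walks the motors between step targets like these while the servo drops the brush for painted segments.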

 


 

C

See more news at:

http://twitter.com/Cabe_e14


Aireal demonstrating a haptic feedback event (via Disney)

 

Gamers have been using haptic-feedback gamepads since Nintendo released the Rumble Pak for its N64 controller back in 1997. After its initial launch, almost every other mainstream console manufacturer (Sony, Sega and Microsoft, to name a few) featured controllers with haptic feedback built in, which provided a level of immersion in the games themselves. Fast forward to 2010 and haptic feedback in gaming devices is still present, considered more a staple than a feature. Even mobile devices are outfitted with haptic feedback, such as vibration alerts for incoming calls, text messages and screen interactions; it is a pervasive concept. A new interaction standard arrived in 2010 when Microsoft introduced its Kinect camera system, which provided consumers a completely new level of immersion; rumble pads seemed like toys afterward. Combining the two would be absolutely incredible and would provide a unique experience when it comes to game interaction. But joining them together seemed impossible, at least until Disney decided to give it a try. The company’s research department is developing a device, known as Aireal, that allows users to actually feel virtual objects and textures, a virtual form of haptic feedback, using the air itself.

 

The device isn’t some massive air compressor that blasts air at the user, but rather a relatively small device that unleashes small ‘air donuts’, traveling low-pressure vortex rings that simulate the feeling of substance when touched. Researchers built the Aireal in a 3D-printed enclosure outfitted with micro-subwoofer speakers on five sides of the device. The speakers emit a burst of low-frequency pressure that is forced through a small flexible nozzle at the front of the device, forming small vortices that create what Disney calls ‘dynamic free-air sensations.’ A small IR camera attached to the front of the device tracks the user’s body, and the device aligns itself to the user’s position using pan and tilt motors to correctly aim its nozzle. Disney researchers have designed two prototypes, one for an individual and a larger version for groups, with gaming demos for each. The first demo uses a table display with a projector housed underneath, which projects a small butterfly that flies around the screen. Players attempt to capture the butterfly, which is projected onto their hands when they get in range. While you cannot actually feel the butterfly, you can feel the simulated air movement of its wings in flight. The other demo uses an air gun to fire simulated, slow-moving cannonballs between two players, who have a chance to dodge the incoming projectiles. The device has its drawbacks at this point in its development, however: the micro-subwoofers are not completely silent, and the vortices produced are not consistent from one to the next. Interaction is also an issue, as there is a delay of about 150 milliseconds (for the larger version) between body detection and the vortices produced, but the research team is looking to solve these problems through further development.
Still, it’s an amazing feat to feel tangible sensations through gesturing, which should bring increased immersion to the gaming world. Soon, there will be some who never want to leave their virtual world… (WoW fans aside.)
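The responsiveness problem mentioned above can be framed as a simple latency budget: tracking delay plus the vortex ring's travel time to the user. A rough sketch, where only the 150 ms detection delay comes from the article and the vortex speed is an assumed figure:

```python
def perceived_delay_ms(distance_m, vortex_speed_mps=7.0, detection_ms=150):
    """Total delay between a user's gesture and feeling the vortex:
    camera/tracking latency plus air-vortex travel time.
    vortex_speed_mps is an illustrative assumption."""
    travel_ms = distance_m / vortex_speed_mps * 1000
    return detection_ms + travel_ms

print(f"{perceived_delay_ms(1.0):.0f} ms")  # 293 ms at 1 m
```

Even if the tracking delay were eliminated entirely, the ring still has to physically cross the room, which is one reason free-air haptics gets harder as the device moves farther from the user.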

 

C

See more news at:

http://twitter.com/Cabe_e14
