
Like a futuristic fossil, MIT's Cheetah bot seeks to replicate the real thing (via MIT)


Speed and power are not everything. A team at MIT's Biomimetic Robotics Lab is showing that huge improvements in the performance of a quadruped robot can be made by following the biomechanics of real animals to approach their efficiency. The robot, called the Cheetah Bot, employs a biomimetic architecture to run at 13.7 mph at an efficiency much higher than that of bipedal robots and approaching that of real animals.



The project is led by MIT Professor Sangbae Kim. He and his team have designed everything from the robot's lightweight, high-strength bones to its electric three-phase permanent-magnet synchronous motors. They did this to achieve better efficiency than similar robots such as the Big Dog being prototyped by Boston Dynamics, which uses hydraulic actuators. The MIT Cheetah is only half as fast as the Big Dog, but its innovation lies in its performance and efficient use of electricity.



Two coaxial motors with single-stage planetary gears are located at the cheetah's shoulder and hip joints. These use a regenerative drive, similar to regenerative braking, so that energy is recovered with every stride. A differentially actuated spine flexes cyclically with the motion of the legs, further improving the efficiency of the bot's movement on impact and during hind-leg propulsion. The custom motors were built from off-the-shelf Emoteq HT-5001 motors, and the team converged on a 5.8:1 ratio for the planetary gears. These efforts doubled the torque density of the commercial motors they started with.
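As a rough worked example of the speed-for-torque trade a reduction stage makes: ideal output torque scales with the gear ratio. The 5.8:1 ratio comes from the article; the 0.9 mechanical efficiency below is an illustrative assumption.

```python
# Ideal gear-stage torque multiplication: tau_out = ratio * tau_in * eta.
# The 5.8:1 ratio is from the article; the 0.9 mechanical efficiency is an
# illustrative assumption, not a published figure.
def output_torque(motor_torque_nm, ratio=5.8, efficiency=0.9):
    return motor_torque_nm * ratio * efficiency
```

So a motor producing 10 N·m would deliver roughly 52 N·m after the stage, at 1/5.8 of the shaft speed.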



(Left) Leg concept and production part. (Right) Tensegrity analysis. (via MIT)


The Cheetah Bot's composite frame is made of polyurethane filled with a foam "bone marrow." The structure is then covered with a resin that makes the "bones" low-mass and high-strength. Kevlar tendons standing in for the Achilles tendon and the gastrocnemius muscle carry tension and replicate the tensegrity of the bones in a real cheetah's foot and leg. This structure reduces the stress in the robot's joints by 60%, further improving the mechanism's overall efficiency.



These design solutions give MIT's cheetah a cost of transport (COT) of 0.52. The COT is determined by dividing a mechanism's power consumption by the product of its weight and velocity. A COT of 0.52 is very impressive compared to Honda's ASIMO, which has a COT of 2, and Boston Dynamics' Big Dog, which has a very inefficient COT of 15.
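That definition is easy to sanity-check in a few lines; note that the ~33 kg robot mass below is an assumed figure from press coverage of the project, not a number stated in this article.

```python
# Dimensionless cost of transport: COT = P / (m * g * v).
G = 9.81        # gravitational acceleration, m/s^2
MPH = 0.44704   # meters per second per mph

def cost_of_transport(power_w, mass_kg, speed_mps):
    return power_w / (mass_kg * G * speed_mps)

def power_for_cot(cot, mass_kg, speed_mps):
    # Inverse relation: the power draw a given COT implies.
    return cot * mass_kg * G * speed_mps

# With COT 0.52 at the 13.7 mph top speed, an assumed ~33 kg robot
# draws on the order of a kilowatt.
power = power_for_cot(0.52, 33.0, 13.7 * MPH)
```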



This high efficiency means the cheetah can be tested while carrying 6.6 lbs of dummy weights representing the four 22.2 V lithium-polymer batteries it will one day carry. Thanks to its impressive COT, the robot draws roughly a kilowatt when running at its maximum 13.7 mph. Once the batteries are installed, the bot will be able to run at 5.2 mph for 1.23 hours, a total distance of 6.2 mi. With further work, the team wants to achieve a COT of 0.33, which is "between the efficiencies of runners and flyers in nature."



MIT's bot, like the Big Dog, still needs supports to keep it running on a treadmill. The team has published a paper titled "Design Principles for Highly Efficient Quadrupeds and Implementation on the MIT Cheetah Robot," and their findings were announced at this year's IEEE International Conference on Robotics and Automation. The work is funded by DARPA's Maximum Mobility and Manipulation (M3) program.




See more news at:


Linkbots in action (via Borobo & kickstarter)


Designing, creating, and programming robots is no easy task; it can take years to accomplish, and that's just for companies that can sink a few million dollars into R&D. For those of us without that kind of money, it's nearly impossible to get our coffee-can-for-a-head design to even move. Barobo Inc. (designers of the Mobot) is looking to take a lot of the frustration out of hobbyist robotics with its educational line of Linkbots. The idea behind the design is to get children (or adults, for that matter) interested in innovation and mathematics and turn them into the next generation of robotics enthusiasts.

The variety of robots you can construct is nearly unlimited, as the creators engineered the Linkbots to be upgradable, with new options becoming available as your knowledge and skills grow (much like playing an RPG). The robots are pre-assembled modular sections that can be combined to form new and interesting designs. Each build starts from a base platform that features three sides for mounting additional modules to get your bot off and running. These include (but are not limited to) wheels for mobility, camera mounts, and gripper modules, which connect to one another using Barobo's SnapConnectors, which look akin to wall outlets. Not satisfied with simply snapping the modules together for your DIY project? Not a problem: the modules also accept #6-32 bolts, letting you affix virtually anything to the bot using the bolt pattern found on all three surfaces.


The possibilities that can be achieved using Linkbots are incredible. More advanced users can even download the designs for all three of Barobo's modules (at a cost) and modify them to fit their projects, then print the parts on a 3D printer (if available). Users also have the option to use a breakout board to connect sensors, range finders, and LEDs (among a host of other things) to projects that call for them. The Linkbot comes standard with a host of built-in features, including a pair of rotating hubs with absolute encoding, an accelerometer, LED lighting, and buzzer feedback, and it is Arduino-compatible. Barobo provides the firmware for flashing the Linkbot as well as software for programming the Arduino board on Windows, Linux, and Mac. If that wasn't enough, the modules also come equipped with ZigBee wireless with around a 100-meter line-of-sight range.

While several prototypes of the Linkbot have been shipped to various schools, Barobo is looking for funding to bring the bots to the masses through Kickstarter. The company has raised over $7,000 US so far but has not yet reached its $40,000 goal to get the project scaled for manufacturing. Those interested can pledge a minimum of $129 to get a Linkbot plus two wheels, eight mounting screws, and the BaroboLink software to help get your robotic projects off the ground.




Cabe Atwell

Love and Robots

Posted by Cabe Atwell May 20, 2013


Lovotics robot in an interaction. The bot is a dome-shaped unit with a handful of sensors, a battery, and wheels for movement. Very lovable. (via Lovotics)


The level of attachment between a human and a machine can only go so far. Although some people can become addicted to computers, it is the human at the other end of the connection that really keeps them engaged. A new field of robotics is studying and developing machines that will offer what we all want from other people: an affective and emotional connection. Although this connection would be artificial, its potential to engage humans over the long term is real but largely unknown. While most people are willing to entertain the thought of a robot loving them, a reciprocal, bidirectional love between human and machine is a bit harder to believe.



Hooman Samani, a researcher at the Social Robotics Lab at the National University of Singapore, works on making devices that can transfer not only data but also emotion and affection. After years of research, he has developed three modules that give robots the ability to interact on a level similar to human emotion: Probabilistic Love Assembly (PLA), based on the psychology of love; Affective State Transition (AST), based on emotional states and the changes between them; and Artificial Endocrine System (AES), based on the human physiological signals that accompany feelings and emotions in response to stimuli. These provide the foundation for a new field of research called "Lovotics."



Overview of the Bayesian network for "Probabilistic Love Assembly (PLA) module" (via Lovotics)


The PLA and AST methods rely on statistical models to determine the robot's mood: the first is built on a Bayesian probabilistic network, while the second uses a dynamic Bayesian network. AES uses an algorithm that handles transitions from one affective state to another. The three systems can be combined in one robot to give it multiple layers and modes of interaction and perception.
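As a toy illustration of the kind of inference such a network performs, here is a single Bayesian update of the robot's belief that the human "likes" it. The real PLA network is far richer, and every probability below is invented for illustration.

```python
# Toy Bayesian update in the spirit of the PLA module: the robot's belief
# that the human "likes" it, revised after each observation.
# All probabilities here are invented for illustration.
def bayes_update(prior, p_obs_given_liked, p_obs_given_not_liked):
    evidence = p_obs_given_liked * prior + p_obs_given_not_liked * (1 - prior)
    return p_obs_given_liked * prior / evidence

belief = 0.5                             # neutral prior
belief = bayes_update(belief, 0.8, 0.3)  # observed: a gentle touch
belief = bayes_update(belief, 0.7, 0.4)  # observed: a smile
```

Each observation that is more likely under "liked" than "not liked" pushes the belief upward; a dynamic Bayesian network (as in AST) chains such updates over time.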


The inputs to these functions and algorithms are provided by the human through touch, sound, sight, and even location. Parameters that measure the robot's affective reaction include proximity, propinquity, repeated exposure, similarity, desirability, attachment, reciprocal liking, satisfaction, privacy, chronemics (the study of time in nonverbal communication), attraction, form, and mirroring.



(via Lovotics)


Samani has developed artificial hormones that mimic the effects of oxytocin, endorphins, serotonin, and dopamine on the robot's behavior. Biological hormones that regulate body temperature, appetite, and the like are also being modeled in code that the robot can use to determine its emotional state. So far, he has designed a simple hemispherical robot that wheels around looking for a smiling person who will touch it and speak to it.


The goal of these robots is to increase the probability that a human will stay engaged with a robot over a long period of time. This could be very useful in supporting caregivers and providing therapy to disabled or elderly people. People with disorders could use them to develop communication skills. They could help kids and adults stay focused and engaged while learning, and even provide personal assistance by tracking moods, offering companionship, and influencing mood changes. Of course, there is huge potential for entertainment Lovotic robots as well.


This field is particularly exciting because of all the disciplines it combines. Central concepts of engineering, robotics, computer science, and artificial intelligence are being combined with psychology, neuroscience, sociology, anthropology, and philosophy. In other words, this mix of hard and soft sciences has unbounded potential in the new age of human-machine interaction.


Lovotics may be on to something. See the film "Robot & Frank," a look at how bots and humans can interact.


In the meantime, check out a Lovotics "episode" featuring their lovable bot:






Yale's GRAB Lab focuses on the design of robotic grasping and manipulation systems for a wide range of practical applications. Its most recent design, developed alongside Harvard and iRobot, was accepted into DARPA's ARM program. (via DARPA)


DARPA's Autonomous Robotic Manipulation (ARM) program sets out to promote the creation of human-like robotic hands with a high degree of manipulation control for various applications. As DARPA is a military-focused organization, such robotic hands are intended to reduce casualties by taking people out of harsh environments where human operators are usually required. ARM-H, the program's hardware track, recently selected an impressive design submitted in a joint effort by iRobot, Harvard, and Yale's GRAB Lab.


The dexterous manipulator system is capable of grasping items as large as a basketball, as small and thin as a transit card, and as heavy as a 50 lb weight. All of these actions are controlled by a human operator who orients the robotic hand to perform an assigned task - for instance, grabbing a key that lies flat on a table and using it to unlock a door.


The major design challenge for robotic hands arises when systems are meant to mimic the human hand, driving production costs upwards of $50,000 USD. Under the DARPA program, however, mimicry takes a back seat to functionality. This design is in fact underactuated - fewer joint sensors and actuators mean it is cheaper to make and easier to ruggedize. Yet it still achieves incredible dexterity and complex manipulation moves for practical use, at a low cost of $3,000 USD per unit (in production batches of 1,000 or more).


Beyond DARPA's usual military applications, there is no word yet on how this technology may reach public use - though industrial settings where humans operate in unsavory conditions, such as nuclear facilities, may be a key market. Such tech also invites speculation about fully dexterous robot builds - something the DIY autonomous bot community may find interesting. Check out the video of DARPA's new hand in action below!





From start to finish, X2Jiggy shows how his controller is built. Worth a watch for the great 8-bit tune. (via x2Jiggy)


Since the emergence of smartphones and development boards such as the Raspberry Pi, retro games have become increasingly popular on these platforms. Emulators for phones can be downloaded to quickly begin gaming, and the same is true for the Raspberry Pi. For Pi owners, retro games can also be a good motivational first project to kick-start a hacking hobby. For one enthusiast, who goes by the alias x2Jiggy, creating a custom controller to play those retro games was the project to bring back the glory days of the Atari.


Because of the simple circuitry of the original Atari controllers, x2Jiggy decided to create his own design housed in a wooden craft box. The controller includes a joystick, paddle, and keypad. The keypad was created using perfboard and 12 tact switches, aligned and soldered together the way most hobby keypads are made. In addition, one male and two female DE-9 connectors are used to mount the keypad to the top of the box. The joystick was salvaged from a PlayStation arcade controller; however, he states any arcade joystick will work fine. To finish off the design, a momentary-on pushbutton is mounted on the case along with a 1-megaohm linear potentiometer.


X2Jiggy was nice enough to post his design and a list of parts needed to complete the project on the web, along with a PDF of instructions. As noted by the creator, this controller will plug right into the original console and eliminate any need for switching between the joystick, paddle, or keypad. For anyone interested in taking this project on, it will also be very easy to add your own features or change it in any way you like. On another note, it would also be cool to see this pad wired to a microcontroller to drive an RC car or control a small robot.


The only aspect that bugs me is the harvesting of one controller for another. All the controller parts can be purchased individually, just so everyone knows. Otherwise, it’s a little like “Homer Simpson’s chili spoon, carved from a bigger spoon.”





Victor Mateevitsi using the SpiderSense tech at UIC (via VMateevitsi & Lance Long)


There is no doubt that emerging technologies are beginning to introduce sensory-enhancing devices that drastically change the way humans perceive the world around them: brain scanners that can detect stress-inducing information overload, smartphone apps that detect emotion through voice-recognition software, and the Army uniform of the future that can alert soldiers to the severity of a teammate's injury. What's next, a full-body suit that artificially gifts its wearer with spider-sense? If you guessed yes, you would be correct. A new suit, called SpiderSense, is next in line in the field of sensory-augmenting technology.


Victor Mateevitsi, a PhD student at the University of Illinois at Chicago, developed the suit alongside fellow classmates to enhance the wearer's perception of the immediate environment. Sensor modules placed strategically around the body give the wearer directional awareness of nearby objects.


The suit is packaged with seven sensor modules and one controller box. Each module contains an HC-SR04 ultrasonic sensor, a T-Pro Mini SG-90 9g servo motor, and a small robotic pressure arm. The controller box, a series of switches driven by an Arduino Mega microcontroller, is hooked up to each sensor via a 10-pin connector. Each module sends out an ultrasonic pulse to scan the environment; the reflected waves picked up by the sensor yield object distances up to 200 inches away. This data is sent to the controller box, which converts it into a rotation angle and transmits it back to the appropriate module. The module's servo motor then turns the pressure arm, applying pressure to the wearer's body.
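A minimal sketch of that signal path - echo time to distance, distance to servo angle - assuming a linear mapping and a 0-180 degree servo range, neither of which is specified in the article:

```python
# Sketch of the SpiderSense signal path: HC-SR04 echo time -> distance ->
# servo rotation angle (closer objects press harder on the skin).
# The linear mapping and 0-180 degree servo range are illustrative assumptions.
SPEED_OF_SOUND = 343.0  # m/s at room temperature
MAX_RANGE_M = 5.08      # ~200 inches, the article's stated sensing range

def echo_to_distance_m(echo_time_s):
    # The pulse travels out and back, so halve the round-trip time.
    return SPEED_OF_SOUND * echo_time_s / 2.0

def distance_to_angle_deg(distance_m):
    # Closer object -> larger servo angle -> firmer press by the pressure arm.
    clamped = min(max(distance_m, 0.0), MAX_RANGE_M)
    return 180.0 * (1.0 - clamped / MAX_RANGE_M)
```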


Victor and colleagues hope the technology will be useful both for compensating for impaired vision or hearing and for supplementing existing senses. For example, a person with impaired eyesight could move around more quickly and confidently. They also suggest the technology could let bicyclists feel the traffic around them as a safety measure.


The paper, dubbed "Sensing the Environment through SpiderSense," was accepted to the 4th Augmented Human International Conference in Stuttgart, Germany, where the researchers presented it on March 7th, 2013.





Concept diagram. The idea is to reconstruct what is inside the building based on heat signatures. (via Dr. Pietro Ferraro)


Imaging technology has grown ever more capable of capturing stunning, lifelike scenes by recording light, or other electromagnetic radiation, with the appropriate sensor. Of course, great images typically require ideal lighting conditions. Over the years, firefighters have begun to use infrared imaging to see through the far-from-ideal conditions they normally face at work. Unfortunately, the technology currently used in these situations is easily blinded by the intense radiation given off by flames. A team of Italian researchers believes it has developed a novel holographic imaging technique that effectively solves the problem.


Firefighters searching a burning building for survivors are stressed enough by the conditions they face - not being able to see well does not help. The IR cameras firefighters use do indeed help them see through smoke, but they become oversaturated at the sight of fire, blinding them to what lies behind the flames. This technology relies on a bolometer-based design that absorbs electromagnetic radiation in a heating element; the element's temperature changes are then used to reconstruct an image. The way around this drawback is to do away with the lens in front of the sensor entirely:


"IR cameras cannot 'see' objects or humans behind flames because of the need for a zoom lens that concentrates the rays on the sensor to form the image," says Pietro Ferraro of the Consiglio Nazionale delle Ricerche (CNR) Istituto Nazionale di Ottica in Italy.



Example of the reconstruction and how it is done. (via Optics InfoBase)


The Italian scientists took a completely different approach by applying digital holography. In the new imaging system, laser light, which easily penetrates smoke and fire, is spread throughout the room. The light reflects off objects in the room and is captured by a holographic imager. In addition to the illuminating beam dispersed throughout the room, the holographic technique requires a second beam, called the reference beam, which interferes directly with the captured illumination beam at the holographic imaging plate. The interference pattern creates the 3D effect found in holographic images and, in this case, yields a 3D movie-like view of the room.
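The physics at a single point on the plate is simple to sketch: the recorded intensity is the squared magnitude of the summed reference and object beams, which is what encodes the object beam's phase. The amplitudes and phases below are arbitrary illustrative values.

```python
# Two-beam interference at one point on the holographic plate: the recorded
# intensity is |reference + object|^2, which encodes the object beam's phase.
# Amplitudes and phases are arbitrary illustrative values.
import cmath

def recorded_intensity(ref_amp, ref_phase, obj_amp, obj_phase):
    ref = ref_amp * cmath.exp(1j * ref_phase)
    obj = obj_amp * cmath.exp(1j * obj_phase)
    return abs(ref + obj) ** 2

# In phase: bright (constructive) fringe; pi out of phase: dark fringe.
bright = recorded_intensity(1.0, 0.0, 1.0, 0.0)
dark = recorded_intensity(1.0, 0.0, 1.0, cmath.pi)
```

Reading the fringe pattern back out (numerically, in the digital version) is what lets the system reconstruct a 3D view without any imaging lens.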


In addition to applying this technology to a firefighting environment, the researchers hope it also finds a home in the biomedical industry, says Pietro Ferraro:


"Besides life-saving applications in fire and rescue, the potential to record dynamic scenes of a human body could have a variety of other biomedical uses including studying or monitoring breathing, cardiac beat detection and analysis, or measurement of body deformation due to various stresses during exercise."


Employment of the digital holography technique now awaits the researchers' efforts to realize it on a portable or mobile platform. Once again, the development of a novel technology shows its deep roots in saving and improving lives.





System concept block-diagram (via MIT)


If robots are ever to serve humans - or take over the world - they are going to have to build their own tools and furniture... like a table. A duo of bots coordinated by software developed by Ross Knepper, Todd Layton, John Romanishin, and Daniela Rus at MIT's Distributed Robotics Lab is already getting started, mastering the assembly of the simplest of IKEA furniture.


Knepper and his team have devised a system that turns two non-specialized KUKA youBots into a well-coordinated team that collaborates to build a LACK coffee table. The table is easy to assemble, as it only needs its legs screwed on; even so, one robot could not do it alone. While one bot sets a leg into an empty screw hole, the other positions itself around the leg to begin rotating it. Flipping the table over also requires precise work in concert. One of the robots has a specially designed arm attachment that allows it to screw in the legs, while the regular factory-installed gripper arm can be retrieved from a dock when it is time to use it.


The building instructions are synthesized from CAD drawings of the table’s parts. The CAD drawing is currently done by hand, as the robots are not visually enabled. In the future, the drawings could be obtained using a robot’s RGB+D data.


Geometric reasoning software is used to determine where the holes are and where pieces attach. This stage also looks at how parts must align, takes into account symmetrical, interchangeable parts, and makes sure there are no collisions between the bots or the parts. This creates a blueprint but does not provide directions or a plan to follow.



Soft-gripper in use (via MIT)


To construct the building plan, the system uses a custom object-oriented symbolic planning language generated from the CAD-derived blueprint. That text then compiles to ABPL (A Better Planning Language), similar to PDDL (Planning Domain Definition Language), which lets the user provide problem data in a conventional object-oriented format and lets the bots reuse existing solutions to planning problems from the field of artificial intelligence. An ABPL problem specification is one-fourth the size of its PDDL equivalent, which allows for faster problem creation and thus execution.
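For a feel of what such a symbolic specification looks like, here is a minimal STRIPS-style rendering of one assembly step in plain Python. The predicate and action names are invented for illustration and are not taken from ABPL or the MIT system.

```python
# Minimal STRIPS-style action schema for one assembly step. Predicate and
# action names are invented for illustration, not taken from ABPL/PDDL files.
action = {
    "name": "screw-in-leg",
    "params": ["?robot", "?leg", "?hole"],
    "pre": [("holding", "?robot", "?leg"), ("empty", "?hole")],
    "add": [("attached", "?leg", "?hole")],
    "del": [("holding", "?robot", "?leg"), ("empty", "?hole")],
}

def applicable(state, grounded_preconditions):
    # An action can fire only when every grounded precondition holds in the state.
    return all(p in state for p in grounded_preconditions)

state = {("holding", "robot1", "leg1"), ("empty", "hole1")}
```

A planner searches for a sequence of such actions whose add/delete effects transform the initial state into one where every leg is attached.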


So far, the building team can only handle building LACK coffee tables. The team wants to generalize the software so it can be adapted to different table designs. They will also keep working to develop more planning programs that can assemble other types of furniture.


This is the beginning of programs that could allow mobile manipulation robots to work together on factory floors and perhaps even in homes. The MIT duo still relies on humans to make sure all the pieces are oriented in a predetermined way. A paper on the research is due to be presented at the International Conference on Robotics and Automation in Karlsruhe, Germany in May.






Meet Shuntaro-kun: the robotic foot-odor sniffing dog that cuddles up to nice smelling feet and passes out in the presence of BO. (via Asahi)



When a natural disaster strikes close to home, it's common for engineers and technologists to start thinking of ways technology could help prevent future events, and perhaps even lift the spirits of people in the disaster-stricken area. That is precisely what happened after the Great East Japan Earthquake struck the Tohoku region in March of 2011. Tsutsumi, president of robot-maker CrazyLabo, visited the area often enough to be inspired to create tech that would return joy to Tohoku. Though these inventions are far from preventing any similar catastrophe, the BO-detecting bots developed by CrazyLabo in partnership with the Kitakyushu National College of Technology are sure to spark a chuckle or two from Tohoku residents.


During a meeting with the college's mechanical engineering professor, Takashi Takimoto, Tsutsumi shared that his family enjoyed pointing out his bad breath and smelly feet - inspiration struck! From there, a team of 10 male students was assembled to analyze dirty-sock and bad-breath odors, the smell characteristics of foods (garlic, fermented soybeans), and other pleasing and displeasing scents. Computer programs were then created to store this bank of odor information. Using commercially available odor sensors, two bots were developed: a breath-sensing female bot and a foot-smelling canine.


The robotic breath critic, named Kaori-chan, has brown hair and blue eyes and sits atop a box that holds the odor-analyzing components. After an evaluation, Kaori-chan responds with vocal remarks such as "It smells like citrus!" (good breath) or "Emergency! There's an emergency taking place! That's beyond the limit of patience!" (not-so-good breath).


The dog robot, named Shuntaro-kun, analyzes foot odors to the tune of Beethoven's Symphony No. 5. Instead of responding with speech, Shuntaro-kun will either cuddle up to feet that please his senses or bark, growl, and pass out when socks need changing - to the tune of Chopin's funeral march, no less.


“I want to continue to produce things that make people laugh and create a good atmosphere,” said Tsutsumi after the project’s completion. He is now planning on visiting the Tohoku region with the bots and leasing them out to the public to enjoy. The tech inspiration hasn’t stopped there, either - CrazyLabo is now working on a Pinocchio bot that can detect lies by monitoring brain waves. And yes, his nose grows when a lie is detected.


Tsutsumi's tale goes to show that if technological inspiration strikes, you should go with it - you never know where it will lead or what kind of beneficial impact it can have on the world.



Watch a video of both bots in action after this link to Asahi's website.




There have been plenty of attempts to build robots capable of climbing challenging vertical environments. Of these, the most impressive so far used tiny hairs called setae on their footpads to adhere to surfaces. These hairs mimic a gecko's method of climbing walls and, like the gecko's, are limited in payload capacity - the bots can only carry their own weight. A team of researchers from the Swiss Federal Institute of Technology in Zurich recently took a creative approach, applying the fluid-flow properties of thermoplastic adhesives to a bot's footpads - allowing it to melt and re-solidify its sticky feet as it climbs vertical terrain.


Thermoplastic adhesives (TPAs) are polymers that melt and solidify at specific temperatures that can generally be controlled with embedded electronics. Since the phase transitions of TPAs involve the reformation of polymer chains held together by intermolecular forces, the material can achieve a high payload capacity when solid and a crack-filling, tacky state when softened.


The Swiss climbing bot carries TPAs on its footpads: thermal resistors heat a pad above 70 °C when a foot needs to move or to begin adhering to a surface by melting onto it. Embedded thermoelectric elements then cool the pad back to a solid state. This cycle repeats as the robot sticks and unsticks its way up a vertical surface.
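A bang-bang sketch of that stick/unstick cycle, using the ~70 °C softening threshold from the article; the heating/cooling step sizes and the 40 °C "safely solid" target are illustrative assumptions.

```python
# Bang-bang sketch of the TPA footpad's thermal cycle. The ~70 C softening
# point is from the article; step sizes and the 40 C solid target are
# illustrative assumptions.
SOFTEN_C = 70.0

def step(temp_c, want_soft, heat_rate=5.0, cool_rate=3.0):
    # Resistor heating while the pad must be tacky; thermoelectric cooling otherwise.
    return temp_c + heat_rate if want_soft else temp_c - cool_rate

def is_soft(temp_c):
    return temp_c >= SOFTEN_C

temp = 25.0
while not is_soft(temp):              # melt phase: pad pressed onto the wall
    temp = step(temp, True)
while is_soft(temp) or temp > 40.0:   # solidify phase: pad can carry load
    temp = step(temp, False)
```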


To test the design, the Swiss researchers prepped a series of "complex vertical terrains" to challenge the bot's tacky feet. In one test, a two-footed climbing bot was able to carry a 7 kg weight up walls made of wood, plastic, stone, and aluminum. Unlike previous climbing robots, the Swiss bot - weighing in at just under 1 kg - carried seven times its own weight with an 80-100% success rate.


The next step is for the researchers to begin tests in natural environments such as cliffs, mountainsides, and trees - terrain no other payload-carrying robot climber has yet handled well. Make sure to check out the video below to see the Swiss bot climber in action.


Only known video, courtesy of NewScientist




The amazing Robo Raven created by UMD Robotics. So realistic in flight that a passing hawk mistook it for prey!

Read the full press release on the Robo Raven here.




The vague concept from DARPA


Most of today's robots need some type of programming in order to carry out their intended tasks or functions. This applies to autonomous robots as well; even those with learning capabilities, such as the RobotCub Consortium's iCub (able to learn language like a child), need specialized base programming to function. Researchers from DARPA have recently announced that they have been developing a robotic brain that 'looks and functions' like its human counterpart. The brain is part of DARPA's Physical Intelligence program, which has received millions of government dollars over the past few years to develop a robotic brain able to 'spontaneously evolve nontrivial intelligent behavior under thermodynamic pressure from its environment' (or rather, learn on its own).

While the details of DARPA's new robotic brain are for the most part non-existent, some have emerged from Professor James Gimzewski (University of California), one of the researchers connected to the project. According to Gimzewski, the brain works much like a human's in that it generates synthetic synaptic responses using nano-scale interconnected wires (instead of organic tissue) that form billions of connections, powered by thermodynamic pressure. Human synaptic responses (electrical and chemical) allow neurons to pass information to other cells, which enables us to learn and react to our environment; until now, this has never been successfully replicated artificially.

According to DARPA, its new robotic brain will help in developing analytical tools that lead to human-engineered physically intelligent systems (smart drones and robots for the military?) as well as giving researchers a better understanding of physical intelligence in the natural world. As with most of DARPA's projects, don't expect to see this brain tech in the civilian sector anytime soon; it is still early in development and hasn't yet been adapted for use in future weapon systems or robotic platforms.





PanaCast camera module (via Aurangzeb Khan)


The days in which you can make faces at your boss during a conference call may be nearing an end. A new device, called the PanaCast made by Altia Systems, is offering HD video with a 200-degree field of view, which may make it difficult to hide during a video conference call. However, the expanded field of view also means that it is great for capturing an entire room of people and creates a much more immersive experience than the traditional webcam that is commonly used.


After a very successful Kickstarter campaign, the PanaCast is now on sale for $599, which is much cheaper than comparable video-conference hubs and telepresence systems. The hub is shaped like a disk and has six fixed-focus cameras located around its perimeter. Custom multi-imager video-processing software combines the footage from all the cameras in real time to deliver one panoramic 2700 x 540 pixel view.
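As a rough illustration of what that combining step produces, here is a minimal sketch of compositing six pre-aligned tiles side by side into one 2700 x 540 frame. This is not Altia’s actual pipeline: the 450-pixel tile width and the absence of any overlap blending are assumptions made purely so the arithmetic works out (450 x 6 = 2700).

```python
# Model each imager tile as rows of pixel values: 540 rows x 450 pixels.
TILE_W, TILE_H, N_CAMS = 450, 540, 6
tiles = [[[0] * TILE_W for _ in range(TILE_H)] for _ in range(N_CAMS)]

def stitch(frames):
    # Naive side-by-side composite: join each row across all six tiles.
    # The real device blends overlapping fields of view in its firmware.
    return [sum((f[r] for f in frames), []) for r in range(len(frames[0]))]

panorama = stitch(tiles)
print(len(panorama), len(panorama[0]))  # 540 2700
```

The real software has the much harder job of warping and blending overlapping views in real time; this sketch only shows where the 2700 x 540 output geometry comes from.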


The PanaCast requires only 350 kbps of bandwidth, and through that it can deliver 60 fps on a fast Internet connection. The device will also function over mobile 3G, but at a lower frame rate (4G support is in the works). Whatever the connection, it offers enterprise encryption and a stream that can be accessed via mobile phone and, soon, via PC.
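Taking the quoted figures at face value (and assuming the “350 kb” means a 350 kbit/s stream, which is an assumption), a quick back-of-envelope calculation shows why an efficient codec is essential at that bandwidth:

```python
# Per-frame bit budget at the quoted bitrate and frame rate.
bitrate_bps = 350_000          # assumed: 350 kbit/s
fps = 60
bits_per_frame = bitrate_bps / fps

# Spread over the full 2700 x 540 panorama:
pixels = 2700 * 540
bits_per_pixel = bits_per_frame / pixels

print(round(bits_per_frame))     # 5833 bits (~0.7 KB) per frame
print(round(bits_per_pixel, 4))  # 0.004 bit per pixel
```

Around 0.004 bit per pixel is far below what raw or lightly compressed video needs, which is why heavy inter-frame H.264 compression (covered below) is doing most of the work here.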


To link your smartphone to a specific PanaCast, all you have to do is download the free PanaCast app and take a picture of the QR code atop the unit or, if your phone is NFC enabled, a simple tap will link the two devices. Each remote viewer can swipe to change their own POV and pinch to zoom in and switch between feeds on the app.


The company plans to offer integrated audio through the client app so the remote viewer can talk and listen through their phone, but as of now the PanaCast delivers only video; audio must be shared in parallel through a “Plain Old Telephone System.”


The camera is compatible with Skype, Google Hangouts and other enterprise apps. Up to two remote viewers can join a conference call for free; for more viewers, or to use the PanaCast cloud service, a $19/month fee applies.


There is currently no way to record the video stream, though this option is being reviewed. An open-source SDK will be available in the future, but in the meantime the video stream can be viewed in the open-source VLC player using the device’s RTSP URL.
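Viewing an RTSP stream in VLC amounts to passing the URL as a media argument on the command line. The sketch below builds and launches that command from Python; the device address shown is made up for illustration, since the real RTSP URL comes from the unit itself:

```python
import subprocess

# Hypothetical address -- the actual RTSP URL is provided by the device.
RTSP_URL = "rtsp://192.168.1.50:554/stream"

def vlc_command(url):
    # VLC accepts an RTSP URL directly as a media item on the command line.
    return ["vlc", url]

print(" ".join(vlc_command(RTSP_URL)))

# Uncomment to actually launch VLC (requires VLC to be installed):
# subprocess.run(vlc_command(RTSP_URL))
```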



PanaCast on stand. We all see you now. (via Aurangzeb Khan)


Right now, the app is only available for iOS, but since it was developed on Ubuntu, the team promises apps for Android, OS X, and Windows XP and higher in the near future. At the heart of the device is an SoC with dual ARM11 cores. The PanaCast also comes with an Ethernet port and a USB 2.0 port, and it achieves seamless streams using a high-speed, ultra-low-latency H.264 codec by Cavium Networks. The PanaCast fits in the palm of your hand, so it is very portable (though it does not run on batteries!). The 5.5’’ diameter device comes with a 15’’ stand to ensure the entire field of view is utilized. There is no escaping the camera now…






The Leap Motion Controller hardware & software allows users to interact with their personal computers with a flick of a wrist. (via Leap Motion)

The Leap Motion device offers consumers a revolutionary way to instantly add motion-control capability to a home computer or laptop with a tiny, palm-sized piece of hardware that costs a mere $79.99 - less than half the price of the Xbox Kinect sensor. The device, which will start shipping all pre-orders on May 13th, has already been a big hit in the app-development world after 12,000 units were shipped to developers last year. Now, with only a few weeks before its launch, the folks at Leap Motion have announced a partnership with HP to bring their 3D motion controllers to market alongside the well-established PC company’s products.

The partnership will allow HP to bundle the Leap Motion controller with its lineup of new desktops and laptops - and, most interestingly, it signifies that future HP computers will have Leap Motion technology directly integrated into them. As announced, these devices will come pre-loaded with the Leap Motion app store, fittingly named Airspace, where users will have access to a wide range of apps enabling 3D motion control.

We can only marvel at the user experience this partnership will bring to future computer products - however, as mentioned before, Leap Motion technology has already created a swirl of activity in the DIY developer realm. Innovation by hacking has begun, to say the least.

For instance, Mingming Fan, a student at the University of California, Irvine, has already managed to hack the Leap controller to work with a smartphone. Fan says his goal is to make the space around the phone usable, allowing users to “reach into” the smartphone to take control of on-screen objects. Another “small-time” hack comes from the group over at LabVIEW Hacker, who were able to use Leap Motion’s technology to operate a Parrot AR.Drone by hand gesture. Check out the video below for a run-through from the LabVIEW hackers themselves:


Gamers may be quick to point out the exciting enhancements such technology could bring to the overall PC gaming experience. Vedran Skarica of divIT says it may be a while before motion control completely phases out 2D controls (the mouse) as the PC gamer’s controller of choice, but he reassures excited gamers that Leap’s technology is the first thing to come along in a while that can outdo the classic mouse.

And finally... even Autodesk has joined in on the Leap hacking fun: Autodesk researcher Brian Pene assembled prototype hardware that allows hand-gesture-controlled manipulation of digital models in AutoCAD! Remind you of Iron Man much? Pene stated, "Using a mouse you'd have to pick up everything in 2D space while constantly manipulating the view. With Leap you can reach in and grab much like you do in the physical world.”



Considering lighting is responsible for almost a quarter of the world’s energy usage (the US alone uses around 200 TWh of electricity for lighting annually), any further rise in the efficiency and efficacy of light bulbs is very impressive to me. Philips has just announced a prototype that will bring the efficacy of LEDs to a new high while producing the most useful, and certainly the most common, kind of light: the warm white light given off by traditional bulbs.



The TLED concept (via Philips)


The new TLED light represents many innovations in the lighting game. Arguably the biggest achievement is the emission of warm white light from an LED at 200 lm/W. Although 200 lm/W makes these TLEDs twice as efficient as LED lights readily available on the market, similar efficacies have been claimed before, but they usually required less-than-ideal operating conditions or simply did not give off comfortable light.


Preliminary reports say the light produced has a color temperature of about 2700 K-3000 K, gives off 1,500 lumens at 7.5 watts, has an R9 level of 20 (saturated red), and has a Color Rendering Index of 80. All of this means that Philips’ TLED bulbs will most likely displace the fluorescent tubes commonly used in office buildings and industrial settings, which make up more than half of all the world’s lighting.
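Those figures are internally consistent: the headline 200 lm/W efficacy follows directly from the quoted lumen and wattage numbers.

```python
# Luminous efficacy from the quoted prototype figures.
lumens = 1500
watts = 7.5
efficacy = lumens / watts  # lumens emitted per watt of electrical input
print(efficacy)  # 200.0 lm/W, matching the claimed 200 lm/W
```

By comparison, the fluorescent tubes these would displace manage on the order of 100 lm/W (a typical figure, not one from the article), which is where the rough “twice as efficient” framing comes from.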


Philips creates this pleasing light so efficiently by altering the common RGB method of mixing red, green and blue LEDs to make white light. A green LED usually manages only about 100 lm/W, while a blue LED has an efficacy of 380 lm/W. So the folks at Philips got very clever: they make the green light using a blue indium gallium nitride (InGaN) LED covered with a phosphor that absorbs just the right part of the blue spectrum to produce the green light required for warm white light. The team also used a 630 nm red LED to produce the enhanced levels of saturated red.


These Philips LEDs also produce little to no notable heat, so they do not need a heat sink. This eliminates constraints on design and materials, which lowers costs and increases their value.


Philips said the TLEDs will be available for industrial and business settings in 2015 before they are sold for domestic use. No pricing has been announced, but they will most likely be more expensive at first. Their invasion and defeat of fluorescent bulbs could be a long one, considering fluorescents’ current widespread use and decent efficiency. If TLEDs replaced fluorescents in the US today, we would save 100 TWh annually (that is, 50 medium-sized power plants that could be used for other things), $12 billion (not actually that much money compared to the military budget) and 60 million metric tons of CO2.
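The savings claim also checks out arithmetically, reading the figures as annual energy (TWh), which is an assumption based on context:

```python
HOURS_PER_YEAR = 365 * 24  # 8760

us_lighting_twh = 200           # annual US lighting consumption, per the article
saved_twh = us_lighting_twh / 2  # TLEDs roughly double fluorescent efficacy
plants = 50                      # the article's count of displaced plants
per_plant_twh = saved_twh / plants

# Average continuous output implied per "medium-sized" plant, in MW:
avg_mw = per_plant_twh * 1e6 / HOURS_PER_YEAR
print(saved_twh, per_plant_twh, round(avg_mw))  # 100.0 2.0 228
```

So each “medium-sized power plant” here corresponds to a plant averaging roughly 230 MW of continuous output, which is a plausible reading of the article’s numbers.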

Let’s hope the TLED’s cost will be a catalyst for the savings… (unlike their last innovation)



