
In a competitive industry, how do you give university graduates a real chance to stand tall above the rest?


Formula Student is one of the largest motorsport events in the world, with over 100 teams from almost 40 countries entering the UK event at Silverstone alone. The students tackle challenges that go beyond what they would study at university: working in a very large team on a lengthy, multidisciplinary project; pitching a business in a 'Dragons' Den'-style environment; gaining immense amounts of practical skill and technical knowledge; learning financial awareness and how to balance cost with performance; developing commercial awareness while working closely with sponsors and trying to attract new partners; and a great amount of personal development.




The event itself is so broad that not only engineering students are encouraged to get involved; at Portsmouth we invite students from all across the University to take part and bring their unique skills to the table. For students it is an extracurricular activity aimed at giving them the best chance at the career they want once they graduate; the skills and lessons learned are transferable outside of motorsport and engineering.


In future posts on this blog you will read about the design, procurement, fabrication, assembly and event-preparation challenges we face at UPRacing each year. We aim to take you from start to finish: from ideas in mind to an open-wheel, formula-style car competing against rival teams from around the world.


Always fun to watch.


BRETT working through a given task (via UC Berkeley)

These days artificial intelligence can do some amazing things, from drawing pictures to removing stalled cars, but it knows how to do these things through explicit instructions. Is it possible for a robot to learn a task on its own? A research team from the University of California, Berkeley seems to think so. BRETT (Berkeley Robot for the Elimination of Tedious Tasks), a Willow Garage Personal Robot 2 (PR2), has the ability to complete tasks on its own through trial and error, much like humans. It can assemble a toy airplane or Lego bricks by using neural network-based deep learning algorithms to master certain tasks.


This technique is loosely inspired by the neural circuitry of the human brain as it interacts with the world. In turn, it helps the robot recognize the patterns and categories it's receiving. Unlike other forms of artificial intelligence, you rarely have to tell the robot what to do via new code – just give it a task and enough time to figure things out. There's even a reward system that scores BRETT on how well it learns a new task: the movements and strategies that allow it to finish the task are scored higher, and that information is then relayed across thousands of parameters in the neural net.
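
As an illustration of the trial-and-error idea only (this is a toy sketch, not BRETT's actual deep-learning system; the two-parameter "task" and reward function are invented), the loop amounts to: try a variation of a movement, score it, and keep it only if it scores higher:

```python
import random

def reward(params, target=(0.7, -0.3)):
    # Score a trial: higher (closer to 0) when the motion parameters
    # land nearer the target the task requires.
    return -((params[0] - target[0]) ** 2 + (params[1] - target[1]) ** 2)

def trial_and_error(steps=2000, noise=0.1, seed=0):
    # Trial and error: randomly perturb the current best parameters and
    # keep the candidate only when its reward score improves.
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    best_score = reward(best)
    for _ in range(steps):
        candidate = [p + rng.gauss(0, noise) for p in best]
        score = reward(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score
```

Real systems score whole movement trajectories and propagate those scores through thousands of neural-network parameters, but the keep-what-scores-higher principle is the same.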


While it would be amazing to have a robot assemble your Ikea furniture, it's not exactly ready for the real world. It takes quite a bit of time for BRETT to complete a task: ten minutes when told exactly where to start and stop, and three hours if it's learning things by itself. With further development the team hopes the technology will improve to handle more data over the next several years.


“With more data, you can start learning more complex things,” said Professor Pieter Abbeel of UC Berkeley's Department of Electrical Engineering and Computer Sciences. “We still have a long way to go before our robots can learn to clean a house or sort laundry, but our initial results indicate that these kinds of deep learning techniques can have a transformative effect in terms of enabling robots to learn complex tasks entirely from scratch. In the next five to 10 years, we may see significant advances in robot learning capabilities through this line of work.”


BRETT is part of a new People and Robots Initiative at UC's Center for Information Technology and Research in the Interest of Society (CITRIS). The goal of this new campus, multidisciplinary research initiative is to keep advances in artificial intelligence, robotics, and automation aligned to human needs. The team's research is supported by The Defense Advanced Research Projects Agency, Office of Naval Research, U.S. Army Research Laboratory, and National Science Foundation.


See more news at:


Team KAIST’s DARPA Robotics Challenge winner DRC-HUBO completes the vehicle egress task

The DARPA Robotics Challenge Finals competition was held Friday and Saturday at the Fairplex in Pomona, California. After years of research and development, several intense days of preparation at the competition site, a day of rehearsal and two full days of head-to-head competition, the winner, taking the $2 million in prize money that goes with it, was DRC-HUBO, the latest of the "HUmanoid roBOt" (HUBO) robots developed by the Korea Advanced Institute of Science and Technology (KAIST). DRC-HUBO is a bipedal robot with a twist: it has wheels on its knees and can transform so that the wheels are used to move the robot around.

The US Department of Defense's project to develop robots that can help responders in disaster areas saw KAIST beat 23 other teams from around the world in front of a crowd of 10,000 people. Team KAIST’s robot navigated the DARPA obstacle course in less than 45 minutes. Coming in second and taking home $1 million was Team IHMC Robotics of Pensacola, Fla., and its robot Running Man. The third place finisher, earning the $500,000 prize, was Tartan Rescue of Pittsburgh, and its robot CHIMP.

The DRC Finals competition challenged participating robotics teams and their robots to complete a difficult course of eight tasks relevant to disaster response, among them walking through rubble, tripping circuit breakers, opening a door, turning valves and climbing a flight of stairs. To prevent the teams from pre-programming the robots to run the course, a surprise task was included, which on the final day of the two-day competition required the robots to remove an electrical plug from a socket and set it in a different socket.

Launched in response to a humanitarian need that became clear during the nuclear disaster at Fukushima, Japan in 2011, the DARPA Robotics Challenge consisted of three increasingly demanding competitions over two years. The goal was to accelerate progress in robotics and hasten the day when robots have sufficient dexterity and robustness to enter areas too dangerous for humans and mitigate the impact of natural or man-made disasters.

It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.

Or it falls down.





Researchers at Georgia Tech’s GRITS Lab recently developed a technology that programs robots to follow a beam of light, controllable with your fingertip. The development could allow for a swarm of thousands of robots to report to a specific location, avoiding all obstacles along the way (including one another), without any necessary programming.  (via Georgia Tech)


Since the advent of drones, many thought the tiny robots would immediately take the place of military intelligence on the ground. Years have passed, and drones still have not replaced humans. Part of the reason may be that it's difficult to command a swarm of drones to fly to specific coordinates without intercommunication between the tiny robots, and programming each one to allow for that type of communication is costly in time. Georgia Tech’s GRITS Lab, however, may have solved that problem.


What if a swarm of robots could follow the movement of a laser beam? That’s the question GRITS Lab scientists asked themselves. The swarm robotics researchers decided instead of programming thousands of robots to follow intricate coordinates, they would program them to follow a beam of light while avoiding collision with one another and their surroundings. The destination of that beam is entirely controllable from a connected tablet.


Operating the synced tablet, a user can control where the beam of light appears with the touch of a finger. If the person wants to send two swarms of robots to different locations, they can simply touch those locations with two fingers on the tablet interface. The swarm will split into two equal pods and follow the beams of light, avoiding any obstacles, including one another.
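
A rough feel for this behaviour can be had from a generic flocking-style simulation. To be clear, this is my own sketch, not the GRITS Lab's algorithm, and every parameter below is invented: each robot steers toward the nearest of two beacons (the "beams of light") while pushing away from close neighbours:

```python
import math
import random

def step(positions, beacons, attract=0.1, repel_dist=0.5, repel=0.05):
    # One time step: each robot moves toward its nearest beacon and is
    # repelled by any neighbour closer than repel_dist (collision avoidance).
    new_positions = []
    for i, (x, y) in enumerate(positions):
        bx, by = min(beacons, key=lambda b: math.hypot(b[0] - x, b[1] - y))
        d = math.hypot(bx - x, by - y) or 1e-9
        dx, dy = attract * (bx - x) / d, attract * (by - y) / d
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            dist = math.hypot(x - ox, y - oy)
            if 0 < dist < repel_dist:
                dx += repel * (x - ox) / dist
                dy += repel * (y - oy) / dist
        new_positions.append((x + dx, y + dy))
    return new_positions

def simulate(n=20, steps=300, seed=1):
    # Scatter n robots near the origin, then let them converge on two beacons,
    # as if a user had touched two spots on the tablet.
    rng = random.Random(seed)
    positions = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(n)]
    beacons = [(-5.0, 0.0), (5.0, 0.0)]
    for _ in range(steps):
        positions = step(positions, beacons)
    return positions, beacons
```

After a few hundred steps the swarm settles into two loose clusters, one around each beacon, with no robot sitting on top of another.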


The implications of the technology are huge, as it bypasses the need to individually program drones for surveillance. During an emergency, an operator can command a team of drones to scope out the distressed area, keeping responders safe from a potentially dangerous situation. The military can also use the technology to gain intelligence without putting human lives at risk. The potential is limitless.


Researchers at GRITS Lab hope to develop robots that can work together independently to execute complex missions and tasks. The swarming robot division takes this a step further, working to create robots that can move efficiently as a unit, without laborious programming. There is no word on if or when the technology will be incorporated into market-ready products, but the capability is there. Now it’s only a matter of time.





The Tynker app is available for Android and iPad. Tynker teaches kids programming by letting them create their own games and animations. (via Tynker)

With technology present in most homes across America, toy companies have dedicated time to creating kid-friendly apps and tablets with an educational focus. Now, there's a startup that can teach children programming fundamentals. Founded by Krishna Vedati, Tynker focuses on teaching children of all ages programming by having them create their own games. Rather than trying to teach young ones the complicated language of programming, Tynker offers fun tasks for kids to complete, such as creating their own animated characters or making their own game levels. It offers various tutorials and tools to interact with in order to build the games through a convenient drag-and-drop interface.


The company is now expanding its platform by allowing kids to use connected smart devices, such as robots and drones. This will allow kids to control real-world objects, like home lighting, by building apps that can control toys, command robots, and fly drones. Tynker will also work with popular connected devices like Sphero robots and the Philips Hue/Lux lighting system. Additional support will also be released for the Tynker iPad and Android apps.


In order to work with these smart devices, Tynker is introducing new code blocks that allow kids to create apps that control certain objects through a visual interface. Pre-coded templates like “Flappy Drone” (based on the popular mobile game “Flappy Bird”), “Robo Race,” and “Stunt Pilot” will be included to make it easier for kids to start programming.


Vedati hopes Tynker will expand to support more drones and remote control toys and will be compatible with Apple HomeKit, Parrot Flower Power, along with providing simple interfaces to Lego, Arduino, and Raspberry Pi, in the future.


Tynker also offers an at-home service, which brings the tutorials, lessons, videos, and missions to personal computers. In addition, the company has teamed up with Dave McFarland, the author of various O'Reilly programming books, to create Tynker's “Introduction to Programming” materials. The software costs $50 per student.


Over the span of three years, 23 million kids have started coding with Tynker, and it's now being used in more than 20,000 schools across the United States, Canada, the U.K., Australia, and New Zealand. The company is growing at a steady rate, with 500,000 new sign-ups per month.


New code blocks and training puzzles are currently available in the Tynker apps on Google Play and iTunes.





MIT researchers who previously built a robotic cheetah have now trained it to automatically detect and leap over multiple objects while it runs. The scientists claim it's the first four-legged robot to be able to do so. To get a running jump, the robot plans out its path, much like a human runner: As it detects an approaching obstacle, it estimates that object’s height and distance. The robot then gauges the best position from which to jump, and adjusts its stride to land just short of the obstacle, before exerting enough force to push up and over.

Last September, Sangbae Kim, an assistant professor of mechanical engineering at MIT and his colleagues demonstrated that their robotic cheetah was able to run untethered— a feat that Kim notes the robot performed “blind,” without the use of cameras or other vision systems.


Now, the robot can “see,” with the use of onboard LIDAR — a visual system that uses reflections from a laser to map terrain. The team developed a three-part algorithm to plan out the robot’s path, based on LIDAR data. Both the vision and path-planning system are onboard the robot, giving it complete autonomous control.


The algorithm’s first component enables the robot to detect an obstacle and estimate its size and distance. The researchers devised a formula to simplify a visual scene, representing the ground as a straight line, and any obstacles as deviations from that line. With this formula, the robot can estimate an obstacle’s height and distance.
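
The idea can be sketched in a few lines. This is my own simplified illustration, not the researchers' actual formula: represent the terrain ahead as height samples, treat the ground as the zero line, and report the first contiguous run of samples that deviates above a small threshold:

```python
def detect_obstacle(profile, threshold=0.05):
    # profile: list of (distance_ahead_m, terrain_height_m) samples, e.g. from LIDAR.
    # The ground is modelled as the straight line y = 0; any contiguous run of
    # samples deviating above `threshold` counts as an obstacle.
    start, height = None, 0.0
    for x, y in profile:
        if y > threshold:
            if start is None:
                start = x         # leading edge: obstacle distance
            height = max(height, y)
        elif start is not None:
            return start, height  # run ended: report the first obstacle
    return (start, height) if start is not None else None
```

For a board 0.45 m tall whose leading edge is 1.5 m ahead, `detect_obstacle` returns `(1.5, 0.45)`.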


Once the robot has detected an obstacle, the second component of the algorithm kicks in, allowing the robot to adjust its approach while nearing the obstacle. Based on the obstacle’s distance, the algorithm predicts the best position from which to jump in order to safely clear it, then backtracks from there to space out the robot’s remaining strides, speeding up or slowing down in order to reach the optimal jumping-off point.
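
That backtracking step reduces to simple arithmetic, sketched below with invented numbers (a fixed jump-off offset and a nominal stride length; the real controller optimises far more than this):

```python
def plan_strides(obstacle_dist, jump_offset=0.3, nominal_stride=0.5):
    # Backtrack from the jump-off point: the run-up must be covered by a whole
    # number of strides, so round to the nearest count and stretch or shrink
    # each stride so the final footfall lands exactly `jump_offset` metres
    # before the obstacle.
    run_up = obstacle_dist - jump_offset
    n = max(1, round(run_up / nominal_stride))
    return n, run_up / n  # (number of strides, adjusted stride length)

# An obstacle 2.1 m ahead leaves 1.8 m of run-up, taken as 4 slightly
# shortened strides of 0.45 m each.
```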


In experiments on a treadmill and an indoor track, the cheetah robot successfully cleared obstacles up to 18 inches high — more than half of the robot’s own height — while maintaining an average running speed of 5 miles per hour.


The MIT scientists next plan to demonstrate their cheetah’s running jump at the DARPA Robotics Challenge in June, and will present a paper detailing the autonomous system in July at the conference “Robotics: Science and Systems”.


[Building a Quadcopter][Part 4]

Posted by ipv1 Jun 1, 2015



I started a series of posts on how to build a quadcopter from scratch, and in the previous posts I explained the basic hardware build: I built up the wooden frame, mounted the motors, speed controllers and props, and did a basic test. The brainless beast was able to take off but couldn't even stay level. We now need some sensors and a brain to control the quad, so in the next few posts we will talk about designing the flight controller from scratch. Sounds complicated? In a related post (Self Balancing Robot - Temporary Diversion from the Quadcopter Project - Demo) I demonstrated a self-balancing robot based on the same principle and discussed the math behind it. In this post, I will tell you about the flight controller and remote. A picture of the frame with the ESCs is given below.



The road more travelled

When I was researching existing quadcopter projects, I came across a lot of information on the subject that I want to share. For people already experienced with the RC hobby and quads, these may not be very exciting bits, but they are useful for the newbie.

There are a number of open source flight controller projects out there and the more popular names are as follows.


1. ArduPilot

I have followed this project for a number of years. It started as a shield for an Arduino, which was called an ‘oil pan’, was later upgraded to bigger hardware, and is currently one of the most expensive flight-controller options for RC airframes. The GUI is great and the performance is great as well. It has automated flight modes and more, but to a beginner it's just too much.


2. DJI Naza

I have read good things about this one. The full-featured version is called the DJI NAZA-M V2, which is costlier than the ArduPilot and closed source, but is the absolute best at what it does. A bucket load of features for a bucket load of money. There is also a DJI Naza Lite, which is much cheaper but again closed source.

3. OpenPilot CC3D

The best open source hardware I have read about is the CC3D, which is based on an STM32 chip and has the MPU6000 and 6 channels. It's open source, and you can install your own firmware on it, like Baseflight and Cleanflight (more on this later). It was originally a Kickstarter project but is now available from a number of sources. I recently bought one of these, and I have to say it's the EASIEST to set up, as the software has a wizard to guide you through all the steps the first time around. You can mess with the advanced controls later.

4. NAZE32

The NAZE32 is the next best thing to the CC3D and is a bit more flexible, BUT it's a bit more difficult to set up than the CC3D. It's used by advanced fliers who want fine-grained control and want their quads to do more tricks.

5. KK2.1

This is one of the first boards you will find online when you search for quadcopter controllers. It has an LCD, which allows you to set it up without a PC, and it is based on AVR controllers. It uses the MPU6050 as a sensor, and you may write your own firmware for it, but you will need an AVR ISP programmer since it does not have one on board. It's cheap but requires manual tuning, so it is better suited to the more advanced flier.

6. KKMulticontroller

Yes! It's different… well, almost. It's based on the Atmel AVR (168p) as well, but I think support for this one has been discontinued: their website (kkmulticopter.com) is gone, and I think the makers have moved on to making 32-bit flight controllers. It's a bit outdated and used only Murata gyros for measuring orientation: no sensor fusion, analog gyros, and trims to set the offsets. Pretty neat but highly outdated.

7. MultiWii

This is not actually hardware for sale but rather hardware you build: a project where you use an Arduino and sensors from Nintendo Wii remotes to make a quadcopter. Pretty neat. There is a lot of detail on how to get and use other sensors, and this is the project I will start from.


About the RC Transmitter and Receiver

In order to control a vehicle remotely, we need a wireless way of sending commands. The most common method is an RC remote, which has a dedicated channel for each control input (throttle, yaw, pitch, roll, etc.) and a receiver with a corresponding pin carrying each channel's signal. These signals can be PWM, which means the width of the pulse varies with the position of the remote-control stick. Alternatively they can be PPM, which means the time between pulses varies with stick position. There are a bunch of other possibilities which are beyond the scope of this article. I am using a 6-channel remote with PWM outputs at the receiver.

In a different article we will talk about making one ourselves as well.
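
To make the PWM/PPM distinction concrete, here is a minimal Python sketch. It is just an illustration of the signal conventions, not flight-controller code; the 1000–2000 µs range is the common hobby convention, and your receiver may differ:

```python
def pulse_to_stick(width_us, min_us=1000, max_us=2000):
    # PWM channel: one pulse per channel, whose width encodes stick position.
    # Map 1000-2000 us to -1.0..+1.0, clamping out-of-range pulses.
    width_us = max(min_us, min(max_us, width_us))
    return (2.0 * (width_us - min_us)) / (max_us - min_us) - 1.0

def ppm_frame_to_channels(edge_times_us):
    # PPM: all channels share one wire; the time between consecutive pulse
    # edges encodes each channel's value in turn.
    return [b - a for a, b in zip(edge_times_us, edge_times_us[1:])]
```

A centred stick gives `pulse_to_stick(1500) == 0.0` and full deflection gives `pulse_to_stick(2000) == 1.0`, which is exactly what the flight controller's receiver-calibration step establishes.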


Using the CC3D

I have been working on this project for a long time and have had mixed results with programming experiments, so instead of diving into the coding part, I decided to get a flight controller that works out of the box, plus an RC transmitter and receiver. The reason I chose the CC3D is that it's the easiest to configure and set up and is not very expensive. I got mine off eBay, and it came in a case with the necessary cables.


I used zip ties to fix the flight controller on top and used some sponge/foam to pad the controller and shield it from vibrations. The result is shown below.


You can see that I have used duct tape and some wires to ‘hang’ the battery below the frame. This can cause issues, but I have no choice right now.


Calibration of the CC3D and testing

The CC3D needs calibration before we can take off. For this I followed this video...


I did everything as instructed and then tried to control the quad without the props attached. Everything worked out! What next?


Calibrating the Remote

This step depends highly on the type of remote. I am using a remote with a PWM output, which means that the width of the output pulse varies with the position of the stick on the RC transmitter. I mentioned before that I will be doing an entire post on how to make a remote yourself and will go into more detail there. For now, the calibration is more for the CC3D and less on the remote end.


Taking Flight

The final step is to take flight, and a word of warning: this thing is dangerous! I have received cuts and bruises from this quad while testing, so I suggest you be careful. I put the quad in an open area, and the video below is my second test flight. I am scared of this thing!


When you first apply throttle, the quad will try to take off, and in my case it was leaning in a particular direction. To rectify this, I used the trims on the remote to make corrections until the quad was almost stable. My objective was to make it hover without using the sonar; I will probably add the sonar module to the project later. This first test made it clear that the quad's response is very quick despite the fact that it is big and heavy.
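
What the trims do is simple arithmetic: the transmitter adds a small fixed offset to each channel before sending it. A hypothetical sketch, with the channel order and offset values invented purely for illustration:

```python
def apply_trim(channels_us, trims_us):
    # Each trim tab adds a constant offset (in microseconds) to its channel,
    # nudging a quad that leans in one direction back toward level.
    return [c + t for c, t in zip(channels_us, trims_us)]

# e.g. roll/pitch/throttle near centre, with a -20 us pitch trim:
# apply_trim([1500, 1500, 1600], [0, -20, 5]) -> [1500, 1480, 1605]
```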


Here is the video of a test flight.




In this post, I presented the quick and dirty way of making a quadcopter, and we saw that it can indeed take off. We used bought-out parts, BUT we now have a platform that we can modify. In the next post, I will demonstrate a flight controller on an Arduino-based board, the KKMulticopter 5.5, and hopefully we will implement the flight controller piece by piece over the next few videos.

Be safe!




The macro-sieve created by Daniel Moran. Daniel Moran and his team are developing an electrode implant to re-create sense of touch for amputees. (via Daniel Moran)

There have been several innovations in modern prosthetics, from ones controlled with the power of your mind to ones created by 3D printers, but they still lack a sense of touch for amputees. Daniel Moran, professor of biomedical engineering, and his team at Washington University in St. Louis are hoping to change that.


Moran's method would use an electrode capable of connecting a prosthetic arm's robotic sense of touch to the human nervous system it is attached to. The electrode, called a macro-sieve peripheral nerve interface, is made of a thin material, similar to a contact lens, that's 20 percent the diameter of a dime and resembles a wagon wheel, with open spaces that allow the nerve to grow through. It's supposed to allow users to feel heat, cold, and pressure by stimulating the ulnar and median nerves of the upper arm. Moran and his team received a $1.9 million grant from the Defense Advanced Research Projects Agency (DARPA) for testing.


Figure A shows nerve regeneration through the high-transparency regenerative macro-sieve electrode. Figure B shows a regenerated nerve through a silicone conduit. Figures C and D are epoxy nerve sections demonstrating numerous myelinated axons. (Credit: Dan Moran, PhD, Washington University in St. Louis)


While it sounds good on paper, the now three-year-old project has a while to go. Before the device can be implanted into people, Moran's team needs to determine how much sensory information is actually encoded in natural systems. For testing, prototypes will be implanted into the forearms of “nonhuman primates,” which will then be monitored during stimulation of the peripheral nerves using the current-steering technique. Current steering uses multiple stimulation sources to direct current flow through specific regions of brain tissue. The test subjects will then be taught by Moran and his team to play a video game using a joystick; the team will give them cues on how to move the joystick by stimulating the ulnar and median nerves.


In a statement about further development, Moran said, “We want to see what they can perceive. If we stimulate this sector of the nerve, that tells them to reach to one side in a standard reaching task. We want to figure out how small we can make the stimulation so they can still sense it.” Once the team has the appropriate information, they'll be able to create more accurate sensor suites in future prosthetics, similar to the Luke Hand that DARPA is currently funding. The Luke Hand is a high-tech bionic limb created by DEKA Research, designed to help servicemen, servicewomen, and veterans with upper-arm amputations.


This is a big step forward in modern prosthetics. If all goes well, Moran's device will allow amputees to feel certain objects, such as hot mugs, which gives them more control over the prosthesis. They will no longer have to rely solely on their vision to determine how to use objects more efficiently.





Some scientists are giving a new meaning to ‘the web’ with nanotechnology. Researchers at University of Trento have discovered that covering spiders with carbon nanoparticles can allow them to produce silk up to 3.5 times stronger than usual. This weird science has even weirder potential applications.  (via University of Trento)

In the spirit of weird science, a team of Italian researchers from the University of Trento has run a series of strange experiments and gotten even stranger results. This particular research team has delved into testing the strength of natural materials before, and it is currently trying to work out how to enhance the strength of natural materials with the strongest inorganic materials.


Currently, a sheet of graphene one atom thick is the strongest artificial material that scientists have available to integrate into new experiments. In this case, researchers at the University of Trento thought: why not combine the strongest artificial material with one of the strongest organic materials? Yep, it sounds like an idea you would cook up as a toddler, but luckily these scientists had a lot more resources at their disposal than just Lego blocks and sippy cups.


They had graphene nanoparticles and a population of orb spiders, from the Pholcidae family, on hand. Orb spiders are known to produce one of the strongest naturally occurring materials on Earth: spun silk webs. The team conducted a series of experiments with different test groups of these spiders.


Their first test group consisted of 15 orb spiders that they sprayed with a mixture of water and graphene nanoparticles 200 to 300 nanometers wide. They found that some of these spiders produced silk strands that were stronger than usual.


However, 4 of these spiders died immediately upon being sprayed with the graphene and water mixture.

Their second test group consisted of 10 orb spiders doused with a graphene and water solution, and the third of 5 spiders doused with a mixture of carbon nanotubes and water. The results showed that some spiders produced weaker silk as a result of the treatment; however, some created silk that was 3.5 times stronger than their usual silk. In fact, the spiders covered in the carbon nanotube mixture created the strongest silk.


What is weirder than these experiments and results is the fact that the researchers don't know exactly how this happened. At first, they suspected that a carbon coating on the silk caused the increase in strength. However, this hypothesis is not supported mathematically: simply coating the silk strands in carbon would not increase the strength enough to explain their test results. Instead, team leader Nicola Pugno thinks that the spiders may harvest the nanoparticles covering them and integrate them into the silk strands themselves.


The team intends to continue investigating these phenomena in the hope of one day producing hybrid silks from natural and synthetic materials. They next intend to conduct a similar study on silkworms. One of the potential uses for such a super-silk is acting as a net to catch falling airplanes (seems lofty), according to Pugno. With ideas like these, these experiments could lead anywhere. I can certainly see DARPA coming up with some insane uses for this... in a way that only DARPA can.





A leg brace that generates power to keep an artificial heart ticking. Rice University students and researchers have been working on a way to create a wearable energy generator to power artificial hearts for a few years now. They are currently closer to creating a more realistic prototype to serve their mission. (via Rice University)

Can you imagine how annoying it would be to change the battery on a pacemaker? Well, a team at Rice University has been trying to create a sustainable source of energy for artificial hearts. The team, called “Farmers,” has had at least three different groups of students try their hand at creating a wearable generator that can power an artificial heart, and it has come up with a lot of different prototypes over the years.


Their latest working prototype is a wearable leg brace that generates power every time the user bends their knee. It is supposed to produce 4 watts of power while the wearer walks, which is then fed into a lithium-ion battery. At the moment, they are not testing this on actual implanted hearts, but their current prototypes may work in theory. There is, however, the issue of how to get the energy from the external battery to the implanted artificial heart, as you can imagine. The team is hoping to transmit the energy from the battery to the heart wirelessly.
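
For a rough sense of scale, the harvest is just power × time × charging efficiency. All numbers below are my own illustrative assumptions, not figures from the Rice team:

```python
def harvested_wh(power_w=4.0, hours_walked=2.0, efficiency=0.8):
    # Energy banked in the lithium-ion battery after a walk, allowing for
    # charging losses (the efficiency figure is an assumption).
    return power_w * hours_walked * efficiency

# Two hours of walking at 4 W with 80% charge efficiency banks 6.4 Wh.
```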


Wireless energy transmission took a big step forward just last year, when a company called WiTricity announced successful prototypes powering everything from light bulbs, laptops, and cellphones to car batteries. The technology utilizes what WiTricity calls a “source resonator”: an electric coil that generates a magnetic field when power runs through it and can transmit energy to batteries in other devices. If a device with a partner coil is within range, it will be powered by the source resonator, over a field similar in kind to the one a WiFi signal generates. WiTricity has already teamed up with a medical company to wirelessly power artificial hearts, so Rice University hopes to be using this technology for its project within the next year.


In one of their earlier prototypes, they used a pedal inside the user's shoe to help generate power; however, only the current iteration can actually store the generated energy in a lithium-ion battery. The hardest problem to solve, they report, was making the brace comfortable enough to wear for long periods of time.


Users of the current version reported that the brace was comfortable enough, so they have certainly made progress. They also had to scale down the generator so that users wouldn't have a huge contraption on their leg causing trouble getting about. Now, I don't know how far a user would have to walk every day to keep the battery charged enough to power their heart; hopefully they won't have to wear the brace for the majority of their life, which would be extremely irritating.


While this project is aimed at powering artificial hearts, the same concept could be used to power just about anything else, so I can see it being a starting concept for a general wearable generator. If you combine a streamlined wearable generator with wireless energy transmission, I think we'll be living in a very different world in the coming years.





So much money... and no way to carry it home... until now! The Sumitomo Mitsui Banking Corp is the first to test Cyberdyne’s exosuits designed specifically to help employees over 65 to carry wads of cash. If successful, this trend could continue within the Japanese banking sector. (via Sumitomo Mitsui Banking Corp & WSJ)

One Japanese bank is piloting an exosuit created by Cyberdyne Inc. to let senior employees carry heavy loads of cash more easily. About 16% of Sumitomo Mitsui Banking Corp’s employees are over the age of 65.


They hope the exosuits will spare these employees much of the heavy lifting involved in moving large parcels of money around the bank. I’ve never held that much cash before, but a part of me really wouldn’t mind the burden.

However, large stacks of cash can get pretty heavy, especially bags of coins. The exosuit reduces the weight of a load by 40%, so 20 pounds would feel like 12. The bank hopes the initial trial will be successful so that the exosuits can be rolled out to its other branches.
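That 40% figure is just a linear scaling of the load; a one-liner makes the arithmetic explicit (the 20 lb example comes from the article, everything else here is my own illustration):

```python
def perceived_weight(actual_lb: float, reduction: float = 0.40) -> float:
    """Weight the wearer feels after the exosuit offsets a fixed fraction of the load."""
    return actual_lb * (1.0 - reduction)

print(perceived_weight(20.0))  # a 20 lb parcel of cash feels like 12 lb
```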


Cyberdyne Inc.’s exosuit design first trialed at ten Japanese hospitals (via Takashi Nakajima)


This isn’t the first time Cyberdyne has used exosuits to lessen the burdens of the elderly. About two years ago, they trialed specialized exosuits in ten Japanese hospitals as part of a rehabilitation program to teach patients with debilitating nerve and muscle conditions to walk. That suit used bio-electrical sensors to trigger motors on the exosuit that helped the patients walk.


It seems like Cyberdyne is doing good work that is helping a lot of people with limited mobility. Their recent moves show they are also pursuing market verticals beyond the healthcare sector.




New from Stanford: a tiny robot called the MicroTug that can pull 100 times its own weight. In fact, a scaled-up version hauled more than 2,000 times its own weight! The technology is really exciting and could have interesting implications for search-and-rescue missions, along with construction. Who knows? Perhaps the next skyscrapers will be built by little robots pulling huge beams up buildings.


The robot was created by Stanford University engineers, who plan to demo and present it at the upcoming ICRA 2015 conference in Seattle, WA, from May 26th through May 30th. The ICRA conference is hosted by the IEEE Robotics and Automation Society each year.

The robot takes its inspiration from nature, where tiny creatures like ants carry many times their own weight; this particular design was modeled on geckos. The underbelly of the robot is covered in tiny rubber spikes that act like adhesive feet (much like those of insects that climb walls and ceilings). Pressing the spikes into a surface increases the rubber’s contact area and therefore its adhesion; when the spikes are straightened out, they release easily, letting the robot switch the grip of each foot on and off.



Microtugs on the inside (via Stanford)


The rubber adhesive feet are only half of the secret, though; the robot also uses a distinctive gait to pull over 100 times its own weight. Rather than hauling the load up the wall by brute force, the robot inches its way up like an inchworm, almost literally. The front feet act as a sticky anchor that holds the robot in place and carries the load, while the other feet release, move forward, and re-anchor so the front feet can inch ahead in turn. If a robot were to climb mountains, I suppose this is how it would be done.
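The anchor-and-advance cycle is easy to picture as a toy simulation; this sketch is purely my own illustration of the gait, not Stanford's control code:

```python
# Toy model of the inchworm gait: at any moment one set of feet is anchored
# (sticky spikes engaged, carrying the load) while the other set advances.
STEP = 1.0  # arbitrary distance gained per cycle

def inch_forward(cycles: int) -> float:
    """Return the leading feet's position after the given number of gait cycles."""
    front, back = STEP, 0.0  # starting positions of the two sets of feet
    for _ in range(cycles):
        back = front         # rear feet release and slide up to the anchored front feet
        front = back + STEP  # front feet re-anchor one step ahead
    return front

print(inch_forward(5))  # steady progress, one step per cycle
```

The point of the alternation is that the load is never unsupported: something sticky is always anchored while the free feet reposition.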


The most exciting part of this super-strong robot is how much weight it can pull. The team started with a 20-milligram robot that could carry 500 milligrams up a wall, about 25 times its own weight, which is impressive for such a tiny machine. The working demo they have on video is a 9-gram robot hauling an impressive 1 kg weight up a wall! Even more impressive is a bigger 12-gram robot, called µTug, that pulled a load more than 2,000 times heavier than itself. In rough math, that is the equivalent of a human dragging a blue whale.
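A quick back-of-the-envelope check of those ratios (the robot and payload weights are from the article; the 70 kg human is my own assumption):

```python
def payload_ratio(payload_g: float, robot_g: float) -> float:
    """How many times its own weight the robot is hauling."""
    return payload_g / robot_g

print(payload_ratio(0.5, 0.020))   # 20 mg robot, 500 mg load: ~25x
print(payload_ratio(1000.0, 9.0))  # 9 g robot, 1 kg load: ~111x
print(70.0 * 2000 / 1000.0)        # a 70 kg human at 2000x would drag ~140 tonnes,
                                   # which is roughly blue-whale territory
```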


The team hopes the robot can aid in search-and-rescue missions where large debris must be lifted off of victims. It seems a larger version of this robot could have a lot of applications, and given the positive results so far, it doesn’t seem far-fetched that this type of technology will find practical uses in the near future.




::vtol:: modifies a typewriter to print out selfie ASCII art (via ::vtol::)

This would have been huge in the early 90s! Those fun BBS days.

I thought I had seen the last of selfie technology with the toaster that toasts a selfie into your bread... but alas, I was wrong! The Russian project ::vtol:: has been having a blast with a modified typewriter that prints out selfies as ASCII art. ::vtol:: is a project headed by Moscow-based artist Dmitry Morozov, a media artist and sound engineer/musician who seems to produce a few art projects every month for display at exhibitions around Moscow.


You can see his typewriter in action.


It seems like a very easy project to recreate if you are bored at home and want a quick spring project to keep yourself entertained. His nifty setup uses a Brother SX-4000 typewriter, an Arduino microcontroller, a camera, and a lamp (for optimal selfie lighting), and he appears to use a tablet to help the process along.
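The image-to-ASCII step at the heart of the project is simple to reproduce. Here is a minimal sketch in Python rather than the graphical patch the artist used; the character ramp and the tiny test "image" are my own choices, not Morozov's:

```python
# Map grayscale values (0 = black, 255 = white) to characters of decreasing density.
RAMP = "@#*+=-:. "  # darkest to lightest

def ascii_art(pixels):
    """pixels: rows of grayscale values in 0..255; returns one text line per row."""
    return [
        "".join(RAMP[min(p * len(RAMP) // 256, len(RAMP) - 1)] for p in row)
        for row in pixels
    ]

# A tiny 2x4 gradient 'image':
for line in ascii_art([[0, 64, 192, 255], [255, 192, 64, 0]]):
    print(line)
```

Each output line could then be sent to the typewriter one character at a time, which is essentially what the Arduino in Morozov's build is there to do.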


The video also gives you a glimpse of his code, written in Pure Data, a graphical programming environment that many artists use for installations. He also hooked Pure Data and the Arduino together with Max/MSP to process the camera data and produce the final ASCII selfie ‘art.’ You can see a ton of finished products, featuring some very serious-looking Russian people, on the ::vtol:: website:

It’s another example of repurposing outdated technology for today’s passing fancies. The artist has many other projects involving sound and video.


Another recent project of his, called kalculator, uses an artist’s internet ‘fame’ as the input for an electronic composition. It was featured at the "Worker and Kolkhoz Woman" exhibition at the Museum and Exhibition Center in Moscow this year. A user presses a button to randomly select one of the 18 most famous artists in Russia; a Google search then counts how many times that artist’s name is mentioned online, and that number is used to generate an electronic composition. The greater the number, the more complex and lengthy the composition becomes.
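The article doesn't say how Morozov maps the hit count onto the music, but the idea is easy to sketch. The logarithmic scaling and note counts below are entirely my own guess, not the actual kalculator code:

```python
import math

def composition_length(mentions: int, base_notes: int = 16) -> int:
    """Hypothetical mapping: more search hits means a longer, denser composition."""
    if mentions <= 0:
        return base_notes
    # Grow the piece with the order of magnitude of the artist's 'fame'.
    return base_notes + int(math.log10(mentions) * base_notes)

print(composition_length(1_000))      # a modestly famous artist
print(composition_length(1_000_000))  # a household name gets a longer piece
```

A logarithm feels natural here because search-hit counts span many orders of magnitude; a linear mapping would make the most famous artists' pieces absurdly long.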


I suppose it also highlights how the internet is constantly growing and changing, which should raise or lower each artist’s ‘fame’ and thereby change the composition played. The hardware is an Android tablet, an Arduino microcontroller, and a simple two-channel speaker system, with Max/MSP handling the software side.


This group has a slew of other projects like these. Perhaps they can inspire projects of your own to tinker with in the coming months. Or perhaps you’ll be spending more time outdoors, away from the computers and hardware.


I suppose a more public art display would be better suited to a spring and summer tinkering session, and there will be plenty of those around as the weather gets better. For now, enjoy the sunshine and get inspired; more wacky displays and tech are coming soon.



See more news at:
