1.jpg

WaterColorBot. Looks like an amazing tool for people who have low hand/arm dexterity. They can still create! (via WaterColorBot's kickstarter)

 

Back in April of this year (2013), the White House hosted a national science fair in conjunction with the STEM (Science, Technology, Engineering and Math) program that brought 100 students to Washington from all over the nation. Showcased was a mix of different projects ranging from oil-producing algae to UUVs, as well as a myriad of game and app coders, rocket designers and even city planners. One student focused on creating watercolor artwork and was able to incorporate it seamlessly into today's technology, which was demonstrated for President Obama in the State Dining Room. The student, Sylvia Todd (AKA Super Awesome Sylvia on her YouTube channel), designed and developed her WaterColorBot with the help of Evil Mad Scientist Laboratories, and it functions much like it sounds. Her initial goal was to design an art robot to enter into the 2013 RoboGames in the Artbot Painting category, where she took the silver medal behind Poland-based KoNaRobotic's Calliope sketch robot. Realizing that the bot was not constrained to a single project, the team (both Sylvia and EMSL) decided to develop it into a stand-alone kit, which was also showcased at this year's Maker Faire.

 

The WaterColorBot is, in essence, a computer numerically controlled (CNC) machine (and can function as one to boot) that paints in watercolors, taking its input from painting software on desktop PCs, laptops or mobile devices. The bot functions much like a pen plotter (or an Etch A Sketch) and uses two motors to move the paintbrush mechanism along the X and Y axes. The brush carriage, also outfitted with a tiny servo, allows the brush to be lowered or raised depending on the task. Movement and brush height are controlled through an onboard EiBotBoard 2.0 USB motor controller that gets its instructions from files in the SVG format; a number of other formats (PDF, Illustrator, etc.) may also be used after being converted with Inkscape. The WaterColorBot is currently being funded through Kickstarter in order to get the kit manufactured en masse so that other artists can get their respective creations showcased on refrigerator doors all over the globe; the initial funding goal of $50,000 US has been surpassed with a total of over $75,000 (not bad for some starving artists). Those interested in getting their hands on a first-production-run bot can pledge $295 or more (the $275 version has sold out at this time) and will receive one WaterColorBot kit (with some assembly required). Backers at that price point should receive theirs by mid-December, just in time for the holidays (you may have enough time to use it and give the gift of art!). Lazy automation or artistic tool?
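
For a sense of what a plotter-style machine like this actually consumes, here is a minimal, purely illustrative Python sketch that turns one brush stroke (a polyline pulled from an SVG path) into pen-up/pen-down moves for a two-axis machine. The command names and the steps-per-mm figure are made up for clarity; this is not the EiBotBoard's real serial protocol.

```python
# Illustrative sketch only: turns a polyline (in mm) into pen-up/pen-down
# moves for a two-axis plotter. The "commands" here are invented for clarity
# and are NOT the real EiBotBoard protocol.

def polyline_to_moves(points, steps_per_mm=80):
    """Return a list of (command, args) tuples for one brush stroke."""
    if not points:
        return []
    moves = []
    x0, y0 = points[0]
    moves.append(("PEN_UP", ()))
    moves.append(("MOVE_TO", (round(x0 * steps_per_mm), round(y0 * steps_per_mm))))
    moves.append(("PEN_DOWN", ()))          # servo lowers the brush
    for x, y in points[1:]:
        moves.append(("MOVE_TO", (round(x * steps_per_mm), round(y * steps_per_mm))))
    moves.append(("PEN_UP", ()))            # lift the brush at the end of the stroke
    return moves

if __name__ == "__main__":
    stroke = [(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)]  # a 10 mm square
    for cmd, args in polyline_to_moves(stroke):
        print(cmd, *args)
```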

 


 

C

See more news at:

http://twitter.com/Cabe_e14

AIREALVortexRingFig.jpg

Aireal demonstrating a haptic feedback event (via Disney)

 

Gamers have been using haptic feedback gamepads since Nintendo released the Rumble Pak for its N64 controller back in 1997. After its initial launch, almost every other mainstream console manufacturer thereafter (Sony, Sega and Microsoft to name a few) featured controllers with haptic feedback built into them, which provided a level of immersion into the games themselves. Fast forward to 2010 and haptic feedback in gaming devices is still present and considered a staple rather than a feature. Even mobile devices are outfitted with haptic feedback, such as vibration alerts for incoming calls, text messaging and screen interactions - a pervasive concept. A new interaction standard arrived in 2010 as Microsoft introduced its Kinect camera system, which provided consumers a completely new level of immersion; rumble pads seemed like toys afterward. Combining the two would be absolutely incredible and would provide a unique experience when it comes to game interaction. But joining the two together seemed impossible, at least until Disney decided to give it a try. The company's research department is developing a device, known as Aireal, which allows users to actually feel virtual objects and textures - a virtual form of haptic feedback using the air itself.

 

The device isn’t some massive air compressor that bursts air pressure at the user but rather a relatively small devices that unleashes small ‘air-donuts’ that are actually small traveling low-pressure bubbles that simulate the feeling of substance when touched. Researchers designed the Aireal using a 3D printed enclosure that’s outfitted with micro-subwoofer speakers that encompass five sides of the device. The speakers emit a burst of low-frequency pressure that is forced through a small flexible nozzle at the front of the device, which forms small vortices that create what Disney calls ‘dynamic free-air sensations’. A small IR camera is attached to the front of the device that tracks the user’s body and aligns itself to the user’s position using pan and tilt motors to correctly aim its nozzle. Disney researchers have designed two prototypes with one for an individual and a larger version for groups with gaming demos for each. The first demo involves using a table display with a projector housed underneath, which projects a small butterfly that flies around the screen. Players attempt to capture the butterfly, which is projected on their hands when they get in range. While you cannot actually feel the butterfly, you can feel the simulated air movement of its wings while in flight. The other demo involves using an air gun to fire simulated, slow-moving cannon balls between two players that have a chance to dodge the incoming projectiles. The device has its drawbacks at this point in its development however, as the micro-subwoofers are not completely silent and the vortices produced are not consistent from one to another. Interaction is also an issue as there is a delay of about 150 milliseconds (for the larger version) between body detection and vortices produced but the research team is looking to solve these problems through increased development cycles. Still, it’s an amazing feat to feel tangible sensations through gesturing, which should bring increased immersion in the gaming world. Soon, there will be some who never want to leave their virtual world… (WoW fans aside.)

 

C

See more news at:

http://twitter.com/Cabe_e14

roboants.png

(a) Two Alice robots facing each other, with (left) and without (right) the additional module for light detection. (b) Three Alice robots pursuing a light trail. (c) Typical time course of an experiment with three loops (access to a fourth loop that is visible had been blocked) and symmetrical bifurcations. The letter S indicates the starting area of the network where the robots are placed at the beginning of the experiment.  The letter T indicates the target area. The top three pictures represent 3 snapshots of an experiment where a group of 10 robots selects the shortest path. (via Simon Garnier)

 

 

The highway traffic system can be a bit of a pain at times - the dreaded rush hour results in a seemingly endless line of vehicles slowly tip-toeing their way home. The solution cannot possibly be to keep designing similar wide-lane transportation systems that lead to traffic-inducing bottlenecks. A way of achieving optimal travel at any time of day must exist, and it appears that ants have it down pat.

 

On daily foraging missions, ant colonies efficiently travel great distances with the help of several navigational aids. These typically include visual cues, pheromone-based route tracking, celestial navigation, and even step/body-rotation counting - not unlike many fitness enthusiasts of today. But when faced with a fork in the road, the possibility of ants utilizing the geometric properties of their environment to navigate comes into question. This hypothesis has been tested in a recent study contributed to by Simon Garnier, Maude Combe, Christian Jost, and Guy Theraulaz at the New Jersey Institute of Technology's Swarm Lab.

 

The sugar-cube sized ant robots were equipped with two light sensing antennae for the study. In addition to testing the geometric variable present in their route navigation system, the ant method of using pheromones for navigation was simulated using light; each individual robot would leave a light-trail on its path down the maze. A previous study explained that the angle created by a fork in the road determined which path an ant would take: a small incident angle led to the food source, while a larger incident angle generally led back to the nest. Thus, the study was performed utilizing two distinct mazes: one with symmetrical bifurcations - perfectly symmetrical forks in the road - and one with asymmetric bifurcations.

 

Programmed to behave like ants, which generally follow a relatively straight path with little desire for exploration, the 10 robots were allowed to navigate the maze galleries while close attention was paid to the relationship between the movement of the robots, the light-trail path, and the maze bifurcations.

 

The maze galleries were carved out of PVC and white cardboard and presented the robotic ants with a network of 9cm wide, 2.5cm high corridors. The robo-ants, named Alices by the research group, were built at the EPFL in Switzerland. These 22mm x 21mm x 20mm bots were equipped with two watch motors with a max travelling speed of 40mm/s, four infrared sensors for target and obstacle detection, and a photodiode equipped module to detect changing light gradients.

 

The results of the experiment demonstrated remarkably similar patterns of movement between the ants and the Alices. The findings showed that the physical angle of the bifurcations was not as important in the individual case: by monitoring the bot behavior with respect to the "pheromone" light trails, it was found that the ant-bots preferred to take the path that had previously been travelled by another bot. In this way, once the shortest route was found, more and more ant-bots would take that same route due to the increased light "pheromone" presence. At the collective level, the group of Alices was more likely to choose the shortest path to the target destination when traversing an asymmetric network.
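
The positive feedback at work here is easy to see in a toy simulation. The sketch below is not the study's actual model - the branch lengths, deposit and evaporation rates are arbitrary assumptions - but it shows how agents that merely prefer the stronger trail end up collectively favoring the shorter branch.

```python
import random

# Toy sketch (not the study's model): agents repeatedly pick one of two
# branches, preferring the one with the stronger "pheromone" trail. A longer
# branch gets a weaker trail per trip because the same deposit is spread over
# more steps, so the short branch is reinforced faster and attracts ever more
# traffic: a positive feedback loop.

LENGTHS = {"short": 10, "long": 14}     # arbitrary branch lengths (steps)
trail = {"short": 1.0, "long": 1.0}     # start with equal, weak trails

def choose_branch():
    total = trail["short"] + trail["long"]
    return "short" if random.random() < trail["short"] / total else "long"

def run(n_trips=500, evaporation=0.02, deposit=1.0):
    for _ in range(n_trips):
        branch = choose_branch()
        trail[branch] += deposit / LENGTHS[branch]   # weaker per-step deposit on long branch
        for b in trail:                              # trails slowly evaporate
            trail[b] *= (1.0 - evaporation)
    return trail

if __name__ == "__main__":
    random.seed(1)
    print(run())   # the "short" trail ends up noticeably stronger
```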

 

Simon Garnier, the research team leader, wasn't too surprised that the experiment explained ant navigation behavior as dependent on both physical layout and the presence of pheromones. Ants have demonstrated a preference for navigating through angled bifurcations that require the least bodily movement - hence the small incident angle - and that also carry a stronger pheromone scent; this results in the development of a positive-feedback traffic system. The research will further help scientists study how physical layouts and pheromone trails in the environment can affect the travel of other insects. Subsequent work hopes to extend into the study of how similar navigation systems can be used to alter human-made environments for optimal travel.

 

Also, can we have Ant-Bots for our holiday 2013?

 

C

See more news at:

http://twitter.com/Cabe_e14

See_Me_a.jpg

Concept art for the "On-Demand Satellite" (via DARPA)

 

DARPA is currently talking with contractors to produce disposable satellites to assist in military operations planning. The program is called SeeMe (Space Enabled Effects for Military Engagement) and looks to allow satellites to orbit around Earth for 60 to 90 days and provide reconnaissance within 90 minutes of deployment.

 

Unmanned aerial vehicles (UAVs) such as military drones presently provide much of this location intelligence. However, they cannot provide extended coverage over large territories without needing to refuel. The disposable satellites would allow on-demand coverage of areas efficiently and without the need to refuel. Additionally, this technology will be accessible from currently used handheld devices. The satellites are planned to burn up in the atmosphere after their 2-3 month operational span, making them disposable. The satellites will cost around $500,000 each and will only be launched in groups of 24. As a result, DARPA is seeking help from the mobile phone industry for rapid, low-cost manufacturing technologies. They are also seeking manufacturers from the propulsion, solid-state component, valve technology, and advanced optics industries. DARPA will hold an event called Proposer's Day to further discuss the technologies.

 

A recent Kickstarter is allowing the private sector to explore space with micro-satellites. See the "Pocket Spacecraft."

 

C

See more news at:

http://twitter.com/Cabe_e14

scanstik_scrolling_sm.gif

Planon ScanStick SK600, for all your spy needs. (via Planon)

 

Companies and businesses are consistently trying to reduce costs when it comes to just about everything, but more so when it comes to office supplies. Factoring paper alone, the US runs through roughly 70 million tons per year (between books, magazines, newspapers and Post-it notes); however, this number is falling due to mobile devices and eBooks. To put that into perspective, the Department of Homeland Security spends roughly $100 million annually on software, office equipment and supplies, which is incredibly costly (for taxpayers). In an effort to reduce those expenditures, companies continually look to technology as a supplement to paper products, including laptops, tablets and smartphones. However, these are lacking when it comes to scanning important documents in the field that need to be e-mailed to interested parties. There are many products on the market today that are capable of scanning material in the field, including Epson's WorkForce scanner, Fujitsu's ScanSnap and Xerox's Mobile Scanner. While those respective scanners function well, they are bulky and may not be suitable for those who prefer a smaller package for use with tablets or smartphones.

 

This is where Planon excels, as the company has recently released its ScanStick ultra-portable color scanner with a relatively small, pen-sized footprint. Unlike other handheld scanners that typically scan either one line of text at a time or 2 to 3 inches of a page per pass, the ScanStick is capable of scanning a whole page in one swipe at resolutions up to 600 DPI. It features a MicroSD slot for storing images as well as voice messages (using the included VoiceNotes software) in case the document needs some explaining. On the face of the scanner is an LCD screen that shows a host of information, including resolution settings, memory space, battery level and whether it is in color or black-and-white mode. The scanner has a built-in USB connector that enables users to upload their images and voice recordings to Windows PCs and Mac OS X machines as well as Windows, Android and BlackBerry mobile devices. Planon also includes PaperPort and ABBYY OCR software that lets you edit scanned documents in a myriad of formats, including PDF, with picture editing as well. It also allows users to translate the text into most languages in the event the scanned documents need to be sent globally. Planon has targeted its ScanStick SK600 toward business professionals who are constantly on the move as well as those working from home. The scanner is available now for $159 US direct from Planon, which includes the scanner, USB cable, and a 2 GB MicroSD card for storage. Not too bad for those looking to minimize their need for office supplies.
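
As a quick sanity check on that 2 GB card, here is some back-of-the-envelope Python arithmetic for a full letter-size page at the top 600 DPI setting. The compression ratio is an assumption for illustration, not a Planon spec.

```python
# Back-of-the-envelope sketch: how big is one full-page scan at 600 DPI, and
# roughly how many fit on the bundled 2 GB card? The JPEG ratio is assumed.

PAGE_IN = (8.5, 11.0)        # US letter, inches
DPI = 600
BYTES_PER_PIXEL = 3          # 24-bit color
ASSUMED_JPEG_RATIO = 20      # assumed ~20:1 compression

px_w = int(PAGE_IN[0] * DPI)                     # 5100 pixels wide
px_h = int(PAGE_IN[1] * DPI)                     # 6600 pixels tall
raw_mb = px_w * px_h * BYTES_PER_PIXEL / 1e6     # ~101 MB uncompressed
jpeg_mb = raw_mb / ASSUMED_JPEG_RATIO            # ~5 MB as a JPEG
pages_on_card = int(2000 / jpeg_mb)              # 2 GB card

print(f"{px_w} x {px_h} px, ~{raw_mb:.0f} MB raw, ~{jpeg_mb:.1f} MB compressed")
print(f"Roughly {pages_on_card} full-color pages on the included 2 GB card")
```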

 

C

See more news at:

http://twitter.com/Cabe_e14

collin_tie_full.jpg

The Ampli-tie in action. (via adafruit)

 

Adafruit Industries, founded in 2005, has been supplying the Do-It-Yourself (DIY) community with the necessary components and parts for projects of all skill levels since Limor "Ladyada" Fried began her venture. They also provide kits for unique projects that can be helpful for learning about electronics or just fun to build. For example, on Adafruit you can find a section for young engineers which features kits such as Snap Circuits for absolute beginners or, for more advanced tinkerers, starter packs for Arduino or Parallax microcontrollers.

 

On the website you can also find all types of help and support for creating projects. Almost every week, new tutorials are posted to assist in building projects. One of the most recent tutorials involves creating a tie with LEDs that is sensitive to sound. Dubbed the LED Ampli-tie, this project allows an enthusiast to create a necktie with LEDs that act as a volume unit (VU) meter. All the products and necessary components, along with instructions to create the project, are available through Adafruit, with the exception of the tie.

 

At the heart of the project is the FLORA, a wearable, Arduino-compatible electronic platform. This small platform (1.75" diameter) is designed to be integrated into clothing or fabric and can handle large numbers of LEDs. It is also beginner friendly, with onboard diodes and regulators to prevent damage from connecting batteries backwards or from over-powered supplies. Furthermore, the project uses a few more of Adafruit's custom products: Flora NeoPixels are the LEDs attached to the tie, and a microphone amplifier breakout board is used to pick up incoming sound. After a little sewing - with conductive thread, of course - you too can be ready to light up the room with a very unique tie.
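
The logic behind a sound-reactive VU meter like this is straightforward. Here is a rough, host-side Python sketch of the idea - it is not Adafruit's actual Arduino code, and the pixel count, noise floor and smoothing constants are assumptions.

```python
# Rough sketch of the VU-meter logic behind a project like the Ampli-tie,
# written as plain Python rather than the actual Adafruit Arduino sketch.
# All names and constants here are illustrative assumptions.

N_PIXELS = 16            # LEDs sewn along the tie
NOISE_FLOOR = 0.02       # ignore levels below this (0..1 scale)
MAX_LEVEL = 0.6          # level that lights the whole tie
SMOOTHING = 0.3          # exponential smoothing factor

_smoothed = 0.0

def lit_pixels(samples):
    """Map one window of mic samples (floats in -1..1) to a pixel count."""
    global _smoothed
    level = max(samples) - min(samples)              # peak-to-peak amplitude
    _smoothed = SMOOTHING * level + (1 - SMOOTHING) * _smoothed
    span = max(_smoothed - NOISE_FLOOR, 0.0) / (MAX_LEVEL - NOISE_FLOOR)
    return min(N_PIXELS, int(span * N_PIXELS))

if __name__ == "__main__":
    import math
    quiet = [0.01 * math.sin(i / 3) for i in range(64)]
    loud = [0.5 * math.sin(i / 3) for i in range(64)]
    print(lit_pixels(quiet), lit_pixels(loud))       # few vs. many lit pixels
```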

 

Adafruit is the one-stop shop for any wearable electronic projects you may have. With that said, it is also definitely worth checking out for all sorts of other parts. Browsing through the site, you may find something that is perfect for a project you thought of but never started, or you may come across something that sparks a new idea in your head. Since creating her business in 2005, Limor's Adafruit has grown to over 45 employees and has been the recipient of various awards. She was recently named Entrepreneur magazine's Entrepreneur of the Year and was the first female engineer to be featured on the cover of Wired magazine.

 

 

 

C

See more news at:

http://twitter.com/Cabe_e14

rfid cookbook.JPG

Patent concept block diagram. The rightmost image shows how the RFID tag will be embedded into the pages. (via USPTO #8403232)

 

RFID tags certainly don’t seem to have anything to do with delicious food. But soon, the technology may come in handy when trying to create a recipe. A patent filed by LG uses RF tags to digitally expand the pages of a conventional cookbook. The project is aimed at assisting regular cooks in discovering or experimenting with a wider variety of foods.

 

The idea is a simple one. RF tags are embedded in the pages of a cookbook and identify the ingredient or food being used in a recipe. These tags can be read by a terminal station, which could hypothetically be an oven or refrigerator with an RF reader. This station ideally connects wirelessly to a server, which contains a library of information on the food being used. Information like recipes or valuable cooking tips can be accessed without having to fill the physical pages of the book. Other kitchen appliances can also be tagged, and their specs can be read and factored into the cooking process to facilitate the success of the recipe.
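
In code, the flow the patent describes boils down to a lookup keyed by tag ID. The Python sketch below is purely illustrative - the tag IDs, record fields and in-memory "server" are made up, since the patent doesn't spell out a data model.

```python
# Purely illustrative sketch of the flow the patent describes: a reader in a
# kitchen appliance scans a tag ID from a cookbook page, asks a server for the
# record behind it, and shows the extra info. All IDs and fields are invented.

FAKE_SERVER_DB = {
    "E2003412DEADBEEF": {
        "item": "arborio rice",
        "tips": "Toast briefly before adding stock; stir frequently.",
        "compatible_appliances": ["induction hob", "rice cooker"],
    },
}

def on_tag_read(tag_id, server=FAKE_SERVER_DB):
    """Return the extra info to show when a page's tag is scanned."""
    record = server.get(tag_id)
    if record is None:
        return "No extra info for this page."
    return f"{record['item'].title()}: {record['tips']}"

if __name__ == "__main__":
    print(on_tag_read("E2003412DEADBEEF"))
    print(on_tag_read("0000000000000000"))
```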

 

Eventually, communication between appliances according to recipes could unleash the full potential of this technology. The Internet of Things, along with RFID, could make sure that you never overcook or undercook your roast by informing the oven of the proper cooking temperature and duration, though this is not part of the patent.

 

Details about where exactly the information is stored, how much of it lives on the RF tag itself, and how or from where additional info is retrieved are still being worked out to maximize the efficiency of the process. No such book has yet been printed, but the patent suggests many systems will be tested. The physical limitations of cookbooks may soon be a thing of the past and, hopefully, poorly executed recipes will go along with them.

 

C

See more news at:

http://twitter.com/Cabe_e14

yi-treadmill.jpg

PhD candidate Bum Chul Kwon shows off his work on the ReadingMate: an enabling technology that allows multi-taskers to exercise their minds and bodies simultaneously by reading as they run. (via Purdue)

 

As more information about health becomes available, consistently exercising both the body and the mind becomes ever more important. Multi-taskers, busy students in particular, might find themselves brushing up on their class notes prior to an exam while also getting some work in on the gym's stationary bike. Running on a treadmill, however, may cause a bit of a problem when trying to read small text from a screen, no matter whether that text is a reading for class or a simple tweet. Bum Chul Kwon from Purdue University has developed software, called ReadingMate, that enables multi-taskers to read as they run by accounting for the bobbing movements of the head and the eyes as one focuses on a text-filled screen.

 

Kwon’s study tested the hypothesis that most reading difficulty while running arises from the vertical running motion. As the head moves up and down, the eyes also auto-adjust to compensate for the motion. Thus, both the head bobbing and eye-reflex mechanism had to be accounted for in the ReadingMate program.

 

Fifteen student participants were chosen to test the software. Each participant wore a pair of goggles with infrared LEDs that an infrared camera atop a computer screen used to detect the runner’s head movements. These students were then asked to count how many times the letter ‘F’ appeared in two lines of text that were placed in between a total of 10 text lines.

 

The results showed that those using ReadingMate located the letter 'F' with higher accuracy than those who did not. Kwon also mentions that participants without ReadingMate would often give up on counting because the text was too small and/or difficult to read.

 

ReadingMate's self-adjusting text is likely to find the most use in industries where employees are required to read while under the pressures of turbulence and vibration - pilots and heavy machinery operators, for example. For now, students and insistent multi-taskers will have to continue keeping their reading and running exercises separate - which might actually be best if you're in need of polishing up the ol' time management skills. Still, Kwon's ReadingMate technology is a great example of innovation created to optimize overall human performance.

 

C

See more news at:

http://twitter.com/Cabe_e14

img_1401.jpg

Wallet sensor in its "final form." (via hack-a-day http://hackaday.com/2013/03/08/quick-wallet-hack-adds-pickpocket-alarm/)

 

Caleb Kraft was searching the web when he noticed references to a small group of people who can pickpocket just about anybody. Being the clever hacker he is, he decided to integrate an alarm into a wallet that would sound off if someone ever tried to swipe it. There are already solutions to this problem on the market; however, as Caleb points out, most of the wallet alarms available will not go off until light hits the device. If a pickpocket swipes your wallet and does not open it right away, the alarm would be useless. This is where Caleb used his ingenuity to create a better solution.

 

Finding a small magnetic window alarm, Caleb quickly went to work on how to incorporate it into a wallet. The original alarm uses a buzzer and a magnetic reed switch; the reed switch opens or closes in response to an applied magnetic field. Caleb decided that all he would need to change on the alarm was the thickness of the design. Leaving the casing around the buzzer, which acts as an amplifier for it, he de-soldered the reed switch and moved it to the side of the unit. This allowed the design to slim down just enough to be unnoticeable while the wallet sits in your back pocket. The switch was placed in the wallet towards the spine and is easily accessible for the user to reach. His hack included using hot glue to secure the alarm to the wallet and a small magnet that gets secured in the back pocket.

 

Caleb's design worked as planned and fit comfortably in his pocket without any unusual bulkiness. When the wallet is removed the alarm continuously goes off and would probably scare any thief into dropping the alarming wallet. Caleb states that the only improvement he would suggest for anyone would be a stronger magnet so the wallet would not need to be aligned perfectly in the pocket. If any thief decides to try to swipe Caleb's wallet, they could be sure they are in for an unusual surprise.

 

 

C

See more news at:

http://twitter.com/Cabe_e14

 

We have seen the Microsoft Kinect used for many things besides gaming. However, the latest use of the Kinect, which comes from a Japanese clothing store, United Arrows, may create one of the creepiest advertisement schemes yet. Using the Kinect along with a specialized motor and 16 strings, the store's mannequin mimics potential customers' movements from outside the store. The idea is to get people looking at the store's products, which will be outfitted on the mannequin.
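
For a feel of what "mimicking" means in practice, here is a purely hypothetical Python sketch that maps tracked joint heights from a skeleton tracker to string lengths for a marionette rig. The joint names, rig geometry and scaling are assumptions; United Arrows hasn't published how its installation does this.

```python
# Illustrative only: one way a Kinect-driven marionette rig could map tracked
# joint heights to string lengths. Joint names, geometry and scaling are
# assumptions, not the actual United Arrows implementation.

RIG_HEIGHT_M = 2.5           # distance from the overhead spools to the floor
STRINGS = ["head", "left_hand", "right_hand", "left_foot", "right_foot"]

def string_lengths(joint_heights_m, person_height_m=1.7, doll_height_m=1.0):
    """joint_heights_m: {joint: height above floor} from the skeleton tracker."""
    scale = doll_height_m / person_height_m      # shrink human pose to doll size
    lengths = {}
    for joint in STRINGS:
        doll_joint_height = joint_heights_m.get(joint, 0.0) * scale
        # Each string must reach from its spool down to the doll's joint.
        lengths[joint] = max(RIG_HEIGHT_M - doll_joint_height, 0.0)
    return lengths

if __name__ == "__main__":
    pose = {"head": 1.6, "left_hand": 1.5, "right_hand": 0.8,
            "left_foot": 0.3, "right_foot": 0.0}   # left hand raised, waving
    for joint, length in string_lengths(pose).items():
        print(f"{joint}: {length:.2f} m of string let out")
```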

 

The Marionettebot, as it is called, is capable of a wide variety of movements. For example, the mannequin can jump, walk, and move its hands and feet. Located in the heart of Tokyo, this display has not failed to draw the attention of people strolling by. Many people stop, dance, wave, and make all sorts of different movements to see how well the mannequin follows along.

 

The Japanese have a unique culture; what seems unusual to the rest of the world may be an everyday affair in Japan. Not too long ago, researchers from Keio University created plants that interact with their environment. The plants were fitted with sensors and programmed to display emotions. Sensing movements made by humans, the plants displayed their "emotions" through strings attached to their branches and servos. It seems the Japanese have a knack for making things move. I wouldn't be surprised to see their tech used in future marionette stage shows. With Kinect 2.0 on the horizon, and the Leap Motion controller, even finger tracking is an option - opening up even more control in puppetry.

 

C

See more news at:

http://twitter.com/Cabe_e14

DRC_FRONT.png

Atlas, a Boston Dynamics creation (via DARPA)

 

As robotic technology advances, more and more civil emergency agencies, first responders and military units are incorporating robots into their respective arsenals. Doctors will soon be able to use remote imaging robots in situations where they cannot get to the patient. Firefighters will soon be able to quickly image an internal structure and find those trapped in burning buildings using a robotic ROV, and factories will be able to use robots that can help in industrial or hazardous situations when the need arises. Of course, when we come to military applications involving robots, the immediate stigma arises of 'Terminator-style' robots bent only on destruction. Nothing could be further from the truth. Sure, there are drones and remotely operated robots that are equipped with weapons platforms; however, these allow soldiers to stay out of harm's way when the situation warrants. A vast majority of military contractors are actually looking to develop robotic platforms whose primary function is to save lives. For the better part of 20 years or more, one agency, DARPA, has been developing a variety of robots capable of providing multiple life-saving services on the battlefield, and it has made significant advances over those years.

 

The defense agency recently unveiled its Atlas humanoid robot at Boston Dynamics' headquarters in Massachusetts. The robot is the result of DARPA's Virtual Robotics Challenge/DARPA Robotics Challenge, designed to develop advanced robots to assist humans through natural and man-made disasters. Actually, like all robots, the frame is basically only a shell housing the advanced algorithmic brain that controls the hulking mass. That 'brain' was written and developed by several teams through the VRC, while others at Boston Dynamics were responsible for the construction of the Atlas. Besides Boston Dynamics, several other agencies and institutes were involved in the DRC, including NASA, Carnegie Mellon University, SCHAFT Inc. and Virginia Tech - all had a hand in developing the robot in one form or another. The Atlas itself is an agile anthropomorphic robot outfitted with an onboard computer, 28 hydraulic actuators for movement (arms, legs, torso and head), a hydraulic pump complete with cooling system, and two fully functional hands built by Sandia National Labs and iRobot. The robot's head is packed with stereo sensors for accurate data acquisition and LIDAR (which gauges distance from reflected laser light). Atlas is powered by an onboard battery system to move the massive 330 lb, 6-foot-2-inch body that is controlled by a remote human operator. When it comes to movement, the Atlas is no slouch, as it is capable of climbing over obstacles using its hands and feet, dynamic walking and even calisthenics! The teams responsible for the robot's construction and development have until December of this year (2013) to make any refinements before they head off to compete in the DARPA Robotics Challenge trials at the Homestead-Miami Speedway in Florida. Atlas is set to go up against NASA's Valkyrie and RoboSimian robots along with several other entries from various institutions, including one from the University of California (Santa Barbara). The challenge will determine a winner that is capable of working in real-world scenarios involving simulated disasters.

 

Atlas is giving Boston Dynamics' other project, PETMAN, a run for its money.

 


 

C

See more news at:

http://twitter.com/Cabe_e14

 

Tired of cruising from place to place on your typical 4-wheeled transportation machine? Have you already seen all that the sky has to offer high above on your private jet? Or maybe your private yacht isn’t quite meeting your adventurous expectations anymore. Don’t know what else to spend loads of your money on? Well then I believe Spymaster’s new underwater mode of transportation is for you.

 

At the recent Harrods Technology 2.0 showcase, Spymaster made a dazzling announcement of a brand-new private submarine it is now taking orders on, one that is sure to spark the interest of wealthy adventurers around the world. Don't be fooled by the model image of Spymaster's Orcasub; it has already been announced that the underwater vehicle will be available at an entry price of $2 million USD.

 

The Orcasub’s full scale two-seater design is built to a 22-foot long size, 4 ton weight, capable of dropping to a depth of 2000 feet below sea level. Its construction is much like that of a plane; two floor pedals and a joy stick allow drivers to control thrust, lift, drag, and can even perform aerial-like maneuvers such as banks and curves. The Orcasub also provides 80-hours of life support and 60,000 Lumens of LED headlight power to illuminate the dark, deep-sea waters - all powered via on-board battery.

 

And, of course, if you have some "extra" money lying around, Spymaster will also be making a $9.32 million version that can dive to 6,000 feet below sea level. That may not be deep enough to go pay the Titanic a visit, but it sure shows the depth one is willing to explore into their wallet to experience what the Orcasub has to offer. With every purchase, Spymaster will cover five days' worth of Orcasub training to help buyers acclimate themselves to their new toy.

 

Well it seems we can build just about anything nowadays given the right amount of money. Might be time to start pushing those resources toward even more life-enhancing technologies, as the Orcasub shows us just a smidge of what we are capable of. For now, we know what’s going to be on the holiday wish-list of many rich folks worldwide.

 

C

See more news at:

http://twitter.com/Cabe_e14

volvo.JPG

Volvo’s new Pedestrian and Cyclist Detection System monitors the distance between pedestrians and vehicles while also tracking their movement path. (via Volvo)

 

Motor vehicle-related accidents make up a large share of the preventable injuries and fatalities that occur in heavily populated, urban environments around the world. With the recent advances in remote sensing technologies, the ability to prevent such incidents is at an all-time high. For example, Volvo, which first introduced a pedestrian detection system in its 2010 lineup of vehicles, has just announced an upgrade that includes cyclist detection with full auto brake.

 

The 2013 Geneva Motor Show attracts a huge audience of car enthusiasts to witness the unveiling of vehicles equipped with state-of-the-art technologies. Volvo’s new accident prevention system, aside from other enhanced bells and whistles, will be part of a new vehicle lineup focused on quality and attention to detail to meet customer needs.

 

The upgraded pedestrian and cyclist detection system utilizes two forms of in-vehicle environmental awareness: a radar unit integrated into the vehicle’s grille scans the area ahead while a rear-view mirror mounted camera detects the type of objects in front of the car. The two technologies work in synergy with a central control module that directs the vehicle to take immediate preventive measures if need be.

 

By continuously monitoring the traffic situation around the vehicle, the system can reduce the severity of a pedestrian or cyclist injury, or avoid it altogether, via a full auto-brake response. For example, if a cyclist riding ahead of an oncoming vehicle were to quickly swerve in front of the Volvo, the advanced sensor system would immediately detect the distance between the car and cyclist, track the cyclist's moving path, and apply a full-brake command. The system also applies to vehicles driving ahead in the same lane.
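
A very simplified way to think about the decision logic is in terms of time-to-collision computed from the radar's range and closing speed, gated by the camera's object classification. The Python sketch below is illustrative only - the thresholds and class names are assumptions, not Volvo's actual calibration.

```python
# Simplified sketch of the kind of decision logic a radar+camera system like
# this might use; thresholds are illustrative assumptions, not Volvo's.

FULL_BRAKE_TTC_S = 0.8        # brake if impact is predicted within this time
WARNING_TTC_S = 1.8
PROTECTED_CLASSES = {"pedestrian", "cyclist", "vehicle"}

def decide(range_m, closing_speed_mps, camera_class):
    """range_m and closing speed come from radar; object class from the camera."""
    if camera_class not in PROTECTED_CLASSES or closing_speed_mps <= 0:
        return "no action"                      # object not relevant or moving away
    ttc = range_m / closing_speed_mps           # time to collision, seconds
    if ttc < FULL_BRAKE_TTC_S:
        return "full auto-brake"
    if ttc < WARNING_TTC_S:
        return "warn driver"
    return "monitor"

if __name__ == "__main__":
    print(decide(10.0, 15.0, "cyclist"))   # cyclist swerves in close: full auto-brake
    print(decide(40.0, 15.0, "vehicle"))   # car ahead, still time: monitor
```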

 

Though likely not the flashiest in-vehicle enhancement found at the Geneva Motor Show, Volvo's innovative new system is well constructed to help diminish very preventable on-road accidents worldwide. Volvo's cyclist and pedestrian detection system will come standard in its V40, S60, V60, XC60, V70, XC70 and S80 models starting in mid-May of this year (2013).

 

C

See more news at:

http://twitter.com/Cabe_e14

ballcamera.jpg

Squito, toss this camera for eyes on a dangerous location (via serveball)

 

Photographers know how difficult it can be to get those perfect shots - more so if they're trying to capture images in extreme conditions such as fires, foreign insurgent uprisings or ejecting out of a fighter plane traveling at over 400 knots (OK, maybe not so much that last one, but it could happen). Some photographers go to such great lengths to capture images that they've been known to construct intricate rail systems or house their expensive gear in protective casings, not only to capture those shots but to limit the damage their equipment can incur doing so. While some photographers and videographers have used some rather crazy ideas to garner a one-of-a-kind shot, few (if any) have thought to actually throw the camera in the air to grab an image or video clip. That used to sound plain insane - until now, as Serveball has developed a camera system that is actually designed to be thrown. Sure, it may not be the first (the military has small robotic camera systems that can get into some pretty tight spaces), but it's loaded with features that would make any camera enthusiast take a second look.

 

Their camera, known as the Squito Throwable 360° Panoramic Camera, is outfitted with a series of miniature cameras (3 total) encircling the casing, allowing it to take panoramic images and video while 'in flight'. The sphere is also equipped with embedded positioning and orientation sensors (IMUs) along with a microcontroller and image processor that take the various images from each camera and stitch them together to form the panoramic view. The onboard sensors also stabilize the images and video while maintaining a positive lock on the subject when the camera is spiraling or spinning through the air. Both the images and video are sent wirelessly to the user's mobile device or desktop for immediate viewing. The Squito, about the size of a tennis ball, is designed for use in recreational sports, aerial point-of-view shots and fly-through video applications (unfortunately, it can't be used as a baseball). The company has another version of the throwable camera, known as the Darkball, for more extreme operations such as search and rescue (SAR), tactical military operations and first-responder applications. The camera system for this model also includes near-infrared and thermal imaging systems that allow it to be used in total darkness or in a foggy or smoke-laden atmosphere such as a burning building or a bog on a creepy, cool autumn evening. The company states that both the Darkball and Squito can survive repeated high-g impacts without damaging, or outright destroying, the camera's internals. There's no word yet on when the cameras will be released or how much each will cost, but you can expect them to be available in the near future once their development phase is over.
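
Here is a conceptual Python sketch of the kind of orientation-driven capture that makes stitching from a spinning ball possible: as the IMU reports the ball's heading, each lens is triggered when its optical axis sweeps near the direction to keep stable. Lens spacing, trigger window and spin rate are assumptions, not Serveball's actual pipeline.

```python
# Conceptual sketch (not Serveball's pipeline): trigger each of three lenses
# when it points close to a target heading, so the saved frames can later be
# stitched into a steady panorama despite the ball's spin.

LENS_OFFSETS_DEG = [0.0, 120.0, 240.0]   # three lenses spaced around the ball
TRIGGER_WINDOW_DEG = 10.0                # fire when within this of the target

def angle_diff(a, b):
    """Smallest absolute difference between two headings, in degrees."""
    return abs((a - b + 180.0) % 360.0 - 180.0)

def frames_to_capture(ball_heading_deg, target_heading_deg=0.0):
    """Return indices of lenses currently pointing near the target heading."""
    hits = []
    for i, offset in enumerate(LENS_OFFSETS_DEG):
        lens_heading = (ball_heading_deg + offset) % 360.0
        if angle_diff(lens_heading, target_heading_deg) < TRIGGER_WINDOW_DEG:
            hits.append(i)
    return hits

if __name__ == "__main__":
    # Simulate a throw spinning at 720 deg/s, sampled at 200 Hz.
    for step in range(0, 200, 20):
        heading = (720.0 * step / 200.0) % 360.0
        print(f"t={step / 200.0:.2f}s heading={heading:6.1f} capture={frames_to_capture(heading)}")
```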

 


 

C

See more news at:

http://twitter.com/Cabe_e14

small 1.jpg

"oh, small helicopter... why can't we fly away?" Micro-helicopter, Nano-Falcon, wows audiences at the Tokyo Toy Show (via CCP)

 

Everyone has an inner child, no matter how old one becomes. Some things we grow up playing with as children are just too fun to ever really stop playing with. Other things we never had the chance to experience while growing up are readily available today thanks to technology - for instance, toy helicopters and drones. Just a few years ago, toy helicopters could cost upwards of hundreds of dollars to purchase; only true hobbyists and enthusiasts were able to buy or build them for recreational use. However, thanks to the steady advance of technology, the prices of these machines have dropped dramatically, and their availability has increased substantially. Some now cost tens of dollars.

 

A new type of remote-controlled helicopter was revealed just recently at the Tokyo Toy Show. The toy show occurs annually, and everything from action figures to LEGO sets to video games is on display for the world to see. One thing that caught many people's eyes was a tiny toy helicopter. The helicopter, which its creators claim to be the smallest one ever made, was created mostly using parts from old smartphones. The chopper comes in at 6.5 cm long and weighs only 11 grams. It has a range of 15 feet and goes by the name of the Nano-Falcon.

 

small 2.jpg

Nano-Falcon, sure to be a stocking stuffer this holiday season. (via CCP)

 

The creators say it was made specifically to target Japan's adult population. Head of sales at maker CCP, Naoki Nakagawa, said, "Japan's aging population made us think of developing a toy targeting adults. Ten or twenty years ago, helicopter toys could cost a lot of money. Those who couldn't afford it at the time can now make their childhood dream come true at a reasonable price." The Nano-Falcon is selling for 4,700 yen in Japan, which is equivalent to about $47.00 in the U.S.

 

Another drone, a step up from the Nano-Falcon, is Parrot's AR.Drone. The French company has built its drones for serious augmented reality fun. The quadcopters are designed to be controlled through mobile devices such as tablets and smartphones. The first drone was revealed at CES in 2010. In addition, the apps that support the drones include different operating modes: users can joyride in AR.FreeFlight, race other drones in AR.Race, or interact with other drone owners in combat simulations.

 

parrot-ardrone-2.jpg

AR. Drone 2.0 "black box" capable of recording two hours of HD video footage. Also an interface to the drone itself through smartphones or tablets. (via AR Drone)

 

The drones themselves run off an on-board Linux computer, which communicates with the user's device over Wi-Fi. Four 15-watt brushless motors keep the drone airborne while an ultrasonic altimeter provides vertical stability. The whole system is powered by an 11.1-volt lithium polymer battery, which can handle up to 12 minutes of flight while also powering the dual cameras the drones feature. Each drone has a vertical camera and a front-facing camera: the front camera is a VGA sensor with a 93-degree lens, while the vertical camera features a 64-degree lens and is capable of recording video at up to 60 frames per second.

 

Although the drones made their first appearance at CES in 2010, they did not stop there. Each year since, Parrot has been back at the annual event featuring not new drones, but rather hardware and software upgrades for its customers. Each small upgrade enhances the user's ability to control the drone and, overall, creates a better product. For example, the camera quality was increased and the onboard sensors were either upgraded or given optimized software. The latest upgrade for the drones is what is called the "Black Box." It is essentially a flight recorder featuring 4 GB of storage dedicated to tracking the drone. Furthermore, it adds desktop support for the drones: users can set a series of pre-planned points using GPS and the drone will navigate through the path. Along with a 50 percent increase in battery life, these toys aren't just for kids anymore.
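
Waypoint following of that sort reduces to a small amount of geometry: head toward the current waypoint, and move on to the next one once you are within an arrival radius. The Python sketch below is a simplification with made-up coordinates and flat-earth math, not Parrot's flight code.

```python
import math

# Illustrative sketch of simple GPS waypoint following; coordinates, arrival
# radius and the flat-earth approximation are simplifications for short hops,
# and this is not Parrot's actual flight code.

ARRIVAL_RADIUS_M = 3.0

def flat_offset_m(from_pt, to_pt):
    """Approximate east/north offsets in metres between two (lat, lon) points."""
    lat1, lon1 = from_pt
    lat2, lon2 = to_pt
    east = (lon2 - lon1) * 111_320.0 * math.cos(math.radians(lat1))
    north = (lat2 - lat1) * 110_540.0
    return east, north

def next_command(position, waypoints):
    """Return (bearing_deg, distance_m) to the current target, dropping reached ones."""
    while waypoints:
        east, north = flat_offset_m(position, waypoints[0])
        dist = math.hypot(east, north)
        if dist > ARRIVAL_RADIUS_M:
            bearing = math.degrees(math.atan2(east, north)) % 360.0
            return bearing, dist
        waypoints.pop(0)          # close enough: move on to the next waypoint
    return None                   # route complete

if __name__ == "__main__":
    route = [(48.85672, 2.35105), (48.85690, 2.35160)]
    print(next_command((48.85660, 2.35100), route))
```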

 

The drones are opening up all types of new ideas and ways for people to see the world. The AR.Drone has been used in multiple research experiments and even has its own open-source application programming interface (API) for game development. In general, drones are becoming more commonplace among the public. Many university students are now starting societies devoted to aerospace and aeronautic studies; some are even branches of IEEE student societies. During a large protest in 2011 - the Occupy Wall Street protest - a modified AR.Drone was used to monitor the police, with Tim Pool providing the live video of police activities. Not only was this a demonstration of the technology's progression, but also of how accessible and readily available drones are becoming. Drone technology is only in its infancy; we can expect to see more of it in the years to come.

 

C

See more news at:

http://twitter.com/Cabe_e14
