The Bistro smart cat feeder (via Bistro)


The Bistro is the latest high-tech gadget for pet lovers who want to keep a close eye on their cat’s eating habits. It is advertised as a premier way to catalogue the food intake and weight of a feline friend whose eating habits suffer from sickness or pickiness.


It is also billed as an excellent way to track a cat’s health and ensure it is neither too obese nor too scrawny. Alternatively, it is a good option for owners who want to know what their cats are up to every minute of the day.


The Bistro cat feeder is pretty high-tech: not only does it film your cat at the dinner table, it also weighs them and posts the results to your smartphone. The camera has facial recognition software, which ensures it is tracking the correct cat’s results. This feature can be especially useful for pets under supervision at a rescue home or at the veterinarian.


The Bistro was inspired by the true story of Momo the cat. This poor kitty underwent a lot of medical treatment and operations due to dehydration and disease. The owner felt they could have recognized Momo’s condition earlier if they had been able to track Momo’s eating habits. Cats typically eat less or refuse food when they feel unwell, so tracking a cat’s eating habits can enable early detection of disease and other issues.


The Bistro smart cat feeder sought funding on Indiegogo and surpassed its goal of $100,000. Campaign backers have the opportunity to purchase a Bistro cat feeder for $179; after that, the retail price will be $249 on the company’s website. Considering the funding, it seems the Bistro is being welcomed into the feline world with open paws.



See more news at:


As I huddle by a plain white electric heater, I write about this: the Hot Art space heater, disguised as a piece of art atop the mantelpiece (via BRZ Brands and Kickstarter)

Hot Art is a 24” by 40” space heater disguised as a painting that can be placed virtually anywhere in your house or office – well, anywhere near a power outlet. The heater uses far-infrared radiation to heat a room efficiently without getting hot to the touch, which would ruin its painting disguise and possibly burn your house down. Consumers are supposed to be able to choose from hundreds of different paintings, or customize the image and design for an extra cost.


BRZ Brands, the creators of Hot Art, claim that their far-infrared heater will cut your hourly heating cost by 60% compared with a traditional space heater. This is supposedly because Hot Art heaters draw 600 watts but should put out the same heat as a 1,500-watt space heater. The unit is also supposed to last for 30 years, but I don't see a money-back guarantee on that claim.
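The arithmetic behind that 60% figure is easy to check. Here's a minimal sketch; the electricity rate is a hypothetical US-average figure of my own, not anything BRZ publishes:

```python
RATE_PER_KWH = 0.13  # assumed electricity rate in $/kWh; varies by region

def hourly_cost(watts, rate=RATE_PER_KWH):
    """Cost in dollars to run a heater for one hour."""
    return watts / 1000 * rate

hot_art = hourly_cost(600)       # the Hot Art's claimed draw
traditional = hourly_cost(1500)  # a typical space heater
savings = 1 - hot_art / traditional
print(f"Savings per hour: {savings:.0%}")  # 60%
```

The percentage saved is just 1 − 600/1500, so it holds at any electricity rate – provided the 600 W unit really does deliver the same heat.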


When comparing the price of a regular space heater with a Hot Art heater, it had better save you money on your heating bill, because one unit costs at least $279! A regular space heater costs about $30. That is quite a difference. It is even more striking when you factor in that a space heater only works for one room, so you'd have to buy more than one to heat your entire house.


Their Kickstarter campaign was recently canceled due to poor backing: they had only one backer! However, BRZ’s website is still promising the moon, so perhaps this project isn't entirely dead in the water.


If you're obsessed with making your appliances serve as art, check out the official company website for more information:



See more news at:


(All pictures via UralVagonZavod)

Russia may have lost the Cold War, but the country won’t be denied a spot among the most innovative world powers. It recently unveiled its newest tram design – and it looks like it came right out of the movie Tron.


The new tram, dubbed the Russia One, was designed by UralVagonZavod and aims to be one of the most comfortable and technologically advanced trams on the tracks. Not only does it provide music and Wi-Fi for passengers, it also features GPS/GLONASS navigation, antibacterial handrails, air conditioning and seven HD CCTV cameras for safety; it even includes a USB 3.0 port so the driver can charge mobile devices. The tram isn’t modeled after Japan’s bullet train, but instead seeks to provide an elegant, superior ride for its travelers (and it’s sure to boost tourism, too).


The Russia One console in day and night modes. The video display, at the bottom of the middle screen, looks like something a Bond villain would have. I love it.


The Russia One is not only superior in its user experience, it’s also a safer and more cost-effective machine than its Western counterparts. The tram’s floor was built on Russian-made, tested low-floor bogies, significantly reducing building cost and allowing the machine to travel safely at up to 24 km/h. In addition, the windshield of the driver’s cabin is slanted to allow 30 percent more visibility, minimizing the risk of hitting pedestrians.


The tram is still a prototype at this point, but developers expect the machine to hit mass production as early as 2015. It can hold 190–270 passengers, and Alexey Maslov, who is heading the project, said it was designed to run in any environment, from forests to urban settings. In Russia, the tram is expected to run in Moscow, St. Petersburg, Nizhny Novgorod, Yekaterinburg and Volgograd, but Maslov said it could also easily hit the tracks in Melbourne and other cities abroad.


It’ll be a while before we see any of these futuristic trams in the States, but eliminating the scent of urine on our urban trains would surely be a great start.




See more news at:


The latest Google self-driving car (via Google)

Google never ceases to amaze (or scare) us with its bright ideas. The innovation headquarters is at it again, this time bringing us a self-driving car that has no steering wheel, brake pedal or accelerator. Why not? Because, Google says, its new cars “don’t need them.” Need a lift? Just push “Go.”


The cars look like something you might find in a Dr. Seuss book – a cross between a pastel golf cart, a Smart car and Pixar’s WALL-E robot. The cute chauffeur even looks like it has a face, but don’t let it fool you; this driver means business, and safety is its biggest concern. Google said in its official blog that it believes self-driving cars can be a cure for the dangers of driving, especially for seniors. Seniors, youngsters and families alike could use self-driving cars to eliminate the dangers of drunk, distracted, blind or senile driving.


The passenger need only tell the car the destination, and it will autonomously take them there at the push of a button. The prototype is currently capped at 25 mph for safety during testing, but features sensors at every angle of the car that can “see” hazards within a radius of more than two football fields. The prototype also offers manual controls, but the final version will not: the car drives itself, and Google says it intends to keep it that way.

The current prototype is a two-seater with decent trunk space, but its M.O. is safety, not comfort. The safety wagon does not feature a radio, a personal assistant or any of the nifty bells and whistles most new cars do, but it will have basic dashboard controls (hopefully air conditioning is one of them) and seatbelts, of course.


Google is planning to release approximately 100 test cars in an early round of testing. If all goes according to plan, the company will run a pilot program in California to see how the cars fare on the open road. Google said it hopes bigger automobile manufacturers will eventually jump on board to make the technology widespread, but the vision won’t be without its challenges.


The California Department of Motor Vehicles recently updated its regulations regarding autonomous vehicles, and it will now be more difficult than ever for drivers to board the self-sufficient automobiles. All autonomous “drivers” will need to undergo a yearlong training program to get an official permit for handling this type of vehicle. Each passenger must also be registered as an official autonomous vehicle tester and be on the manufacturer’s payroll to be permitted to drive, and that’s not all.


Each robotic ride on the road will be required to carry five million dollars’ worth of liability insurance, and riders must remain behind the wheel at all times. If there is a glitch and the driver returns to manual mode for any reason, they are now required by law to report the incident.


Drivers of autonomous vehicles are also required to report all accidents. Google isn’t fazed by the new law and is keeping its plans moving forward. The early testing is expected to begin this summer, with the larger trial starting in the fall if all goes well. 2015 may see the first commercial self-driving car on the road.



See more news at:


PocketQube Kit for amateur satellite makers (via PocketQube Kit)


Building a satellite is a hard job if you aren't a NASA space engineer, and buying all the components to build one is extremely costly. For instance, a 'less' expensive kit (from CubeSat) to build a cube satellite will set you back six figures. It's not exactly as accessible as a 3D printer.


However, a new company is making cube satellites easier to build and easier on the pocketbook. While you won’t be able to purchase a PocketQube for as little as a MakerBot, at about $6,000 it is affordable for individuals and small companies with bigger budgets. The price doesn't include solar panels for the power source, however, so you'll have to set aside some extra dough to make it work in the vastness of space. Still, the kit includes everything you'll need to get your cube satellite up and running in one easy setup. You can purchase a one-, two- or three-cube configuration, costing $5,999, $6,149 and $6,299 respectively.


The components included in the PocketQube satellite are an onboard computer to serve as the central brain of the satellite, cube skeletons for the cube configuration, a Labsat board to test electronics, and a radio board for communication.


Alternatively, you could buy the components separately and create the configuration yourself. All you need is an Alba Orbital skeletonized structure (1p, 2p or 3p), a RadioBro MiniSatCom, an Alba Orbital Labsat (test and development board), and an Alba Orbital On-Board Computer (OBC). However, it will only cost you an extra $3 to buy the kit pre-made from PocketQube, so I would just go with their package unless you are very confident in your skills.
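For quick reference, here is how the pricing quoted above works out. The kit prices and the $3 premium come from the article; the derived DIY totals and the per-cube step are just my own arithmetic on those figures:

```python
# Kit prices quoted in the article, keyed by cube count ("1p", "2p", "3p")
KIT_PRICES = {1: 5999, 2: 6149, 3: 6299}
PREMADE_PREMIUM = 3  # the pre-made kit costs $3 more than buying parts separately

for cubes, kit_price in KIT_PRICES.items():
    diy_price = kit_price - PREMADE_PREMIUM  # sourcing the components yourself
    print(f"{cubes}p kit: ${kit_price}, DIY parts: ~${diy_price}")

# Each additional cube adds a flat $150 to the kit price
step = KIT_PRICES[2] - KIT_PRICES[1]
```

In other words, scaling from a 1p to a 3p configuration costs only $300 more, which is small next to the base price.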


While adding a solar cell and panels to the mix could have your cube satellite ready for a space mission for under $10,000, the actual launch into space could set you back considerably more.


According to PocketQube, the smaller size, weight and configuration of the PocketQube let it launch for much less than its counterparts. Of course, money is all a matter of perspective, and the cost of a space mission for your PocketQube will be around the cost of a car – basically out of the range of most people, but still cheaper than other satellites and possibly within the budget of a small company or a university.


For comparison, sending a conventional cube satellite into space can cost about the same as a house: in the six-figure range. So PocketQube is certainly making cube satellites more accessible and less expensive than previous models. Still, it is hard to believe that satellite technology will be the next prosumer technology on the mass market at prices this high.


Personally, I'm really interested to see what PocketQube adopters do with this technology. NASA already seems interested, as PocketQube workshops have been co-sponsored by Kentucky Space and NASA Kentucky. Perhaps NASA will use low-cost, ready-made tech like the PocketQube to run future missions.



See more news at:


Smart skin made of stretchable silicone that can 'feel' touch, temperature and more (via Kim et al/Nature Communications)


Researchers from Seoul National University in Korea have developed a silicone smart skin that can stretch over prosthetic limbs to give the user touch sensations. Previous attempts at enabling senses in prosthetic limbs have focused on sensing pressure so the user doesn't apply too much or too little force (think: me Hulk, me smash).


This skin is the first to sense multiple factors – pressure, humidity and temperature – all in one. The synthetic skin is made of stretchable silicone that can provide touch sensations throughout the entire prosthetic limb, not just in the fingers.


Dae-Hyeong Kim leads the team of scientists, who have embedded the skin with nanoribbons made of single-crystal silicon. These ribbons give the skin an array of sensors, including pressure arrays, strain sensors, temperature arrays, humidity sensors and electro-resistive heaters. The skin also has stretchable multi-electrode arrays to stimulate nerves, making the device compatible with prototype interface methods for prosthetics using this skin.


This allows the skin to sense the wetness or heat of an object, while the strain sensors help keep the prosthetic limb moving within a realistic range of motion, which the team determined by recording natural hand movements.


While the sensors give the user a more realistic sensation of touch from a robotic prosthetic limb, the electro-resistive heaters exist for the sole purpose of bringing the skin up to body temperature, giving the hand a lifelike feel to others.


The sensor arrays are layered on top of each other in a thin film to operate effectively with little interference. Most interestingly, platinum nanowires and ceria nanoparticles are used to form an interface with the user's nerves. This gives the device an opportunity to meld with the human nervous system for a real sensory experience, and possibly to grow alongside further development of mind-controlled prosthetic technologies. The ceria nanoparticles also reduce inflammation that could be caused by the device.


In their tests, documented in a journal article in Nature Communications, the team successfully used the current skin and hand to perform tasks including human-to-human contact, touching dry and wet surfaces, shaking hands and holding a cup.


This skin looks like a promising step towards adding real-life touch sensations to prosthetic devices. Couple this with mind-controlled hands, like the one DARPA is funding, and we could have lifelike robotic replacement limbs for future Earth-dwellers.


This skin builds on a previous experimental array from the Georgia Institute of Technology, which researchers there and in China have been developing for the past couple of years. However, the skin from Kim and his team seems to be the first to yield applicable results in a working prototype.


As nanotechnologies advance over the next few years, this skin could become more sophisticated and feature in future robotic prosthetic limbs. It would be a great addition, giving users a new lease on life as they sense their way through the world once more.



See more news at:


Updated mind-controlled robot arm from DARPA (via Journal of Neural Engineering/IOP Publishing)

Expand NY is a team of scientists and engineers funded by DARPA to work on a mind-controlled robotic arm, among other things. While the team released a prototype two years ago, the robotic arm needed much improvement and testing. The results from the current prototype, along with notes on all the improvements the project has implemented over the past two years, are documented in an article published in the Journal of Neural Engineering.


Overall, advances in mind-controlled technology are very exciting and promising, considering all the successes Expand NY and similar projects have had in the lab. While it will take further development and testing before mind-controlled prosthetics reach the market, it looks like that day won't be too far away.


The test user for this prototype was Jan Scheuermann, a Pittsburgh native who is paralyzed from the neck down. Scheuermann ran a full program of tests at the University of Pittsburgh with Expand NY and its latest technology. To give the robotic arm mind-controlled capabilities, Scheuermann had to undergo surgery to place neural implants in the parts of her brain that control her right arm and right hand. Once the implants were in place, the team had to program the arm to respond to Scheuermann's natural neural impulses.


Expand NY calibrated the arm to Scheuermann's neural impulses by having her watch a series of video animations of people performing various actions with their right arm and hand. They then asked her to imagine performing these actions herself while they monitored her brain activity, and used the recorded activity patterns to program the arm to move accordingly.
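The published system is far more sophisticated, but the core calibration idea – fitting a mapping from recorded neural activity to intended movement – can be sketched with a simple least-squares decoder. Everything below is illustrative synthetic data, not the team's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 200, 32

# Synthetic stand-ins: a hidden linear mapping from firing rates
# to 3D arm velocities, and Poisson-like spike counts per channel.
true_W = rng.normal(size=(n_channels, 3))
rates = rng.poisson(5, size=(n_samples, n_channels)).astype(float)
velocities = rates @ true_W  # "imagined" movements during calibration

# Fit the decoder by least squares: find W so that rates @ W ≈ velocities
W, *_ = np.linalg.lstsq(rates, velocities, rcond=None)

# Decode a new sample of firing rates into a velocity command for the arm
decoded = rates[:1] @ W
```

On this noiseless toy data the fitted decoder recovers the hidden mapping exactly; real neural recordings are noisy and nonstationary, which is part of why calibration sessions like Scheuermann's are needed.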


Of course, the entire arm, fingers and wrist were not moving right out of the starting gate, but by the end of the program Scheuermann could move the arm, fingers and wrist to perform a variety of impressive tasks. Most impressively, she was able to eat a chocolate bar using her prosthetic arm and hand. She also beat her brother in a game of rock-paper-scissors, which is impressive given the time-sensitive nature of the game.


This new mind-controlled robotic arm seems to have much better interfacing between the user's neural implants and the intended arm movements. It also processes data faster, using simpler code to do so. However, the model still has a few kinks to work out, and the team will need new testing participants. The biggest issue is that the arm seems to stall when holding an object. Indeed, according to the paper in the Journal of Neural Engineering, the researchers spent a lot of effort on coding the hand to pick up and hold objects, which is made difficult by the need for correct hand-shaping relative to the object.


Expand NY will also work on creating a wireless bionic arm that can be mounted onto a wheelchair to make it more practical and useful to users outside of the lab.



See more news at:


The Bitsbox kit, co-created by Scott Lininger to teach kids aged 7–11 to code (via Bitsbox)

Learning to code, teaching kids to code, even teaching Obama to code – all have been in the news recently as the campaign to keep coding alive continues, spearheaded by multinational giants like Apple and Microsoft. This has always been a hot topic, as the high-tech world keeps finding new ways to engage the younger generation that will become the coders and engineers of the future.


While there are already programs out there encouraging more women and children to code, a few new projects target children in a younger age group: 7 to 11. The idea is that coding is a language, and kids should be exposed to it alongside French and other languages while they're most receptive to learning them. While that all makes sense from the standpoint of coding fanatics like former Google software engineer and Bitsbox co-creator Scott Lininger, it may not be a viewpoint shared by parents and children.


The main issue is that coding is boring for kids, with a capital B. Programs like App Inventor for Android are successful, graphical ways of teaching kids coding syntax and the structure of the language, but they stop short of having kids write actual code.


Lininger stresses how important writing out actual code is to kids' success in learning to code. This is why he has released Bitsbox, which plans to expand through a Kickstarter campaign. Currently, Bitsbox is an online, game-like interface that has children copy code line by line, which they can then load onto a tablet or mobile device (iOS or Android) and play with. The code creates fun games. The idea is to get kids excited by how easy it is to turn a bit of coding work into a fully formed game they can enjoy. While not every child will feel motivated to keep coding all of the games on offer, the hope is to expose as many children as possible to coding at a young age and give them a head start. So far, 70,000 users have signed up and are using Bitsbox online.


The Kickstarter campaign will fund a monthly Bitsbox delivered to kids with code for more than a dozen apps to build, tradable coding cards and more. Each box costs $30 a month on subscription, or parents can buy a single box for $40. The first boxes are expected to ship in April 2015. Stay tuned for the launch of the Kickstarter.


Earlier this month, Hour of Code, an initiative headed and funded by giants including Apple and Microsoft, took place between December 8th and December 14th. The project funded over 76,000 hour-long coding classes in US schools and hosted over 100 live YouTube video chats with Hour of Code ambassadors, including Bill Gates and Ashton Kutcher.


Obama also participated by hosting an Hour of Code class at the White House where he became the first president to write a computer program: albeit a very simple and useless one.


These projects are the new kids on the block, trying to inspire the next generation of coders, tinkerers, and makers.



See more news at:


The first Christmas tree with electric lights, by Edward H. Johnson, 25 Dec. 1882 (via US National Park Service)


'Tis the season when you can see lit Christmas trees in windows and rows of houses with a glowing Santa and his reindeer on the front lawn. Lights blink and dazzle, occasionally set to dance to Christmas-themed music – but who came up with the original string of Christmas lights that we hoard today for special occasions? While Thomas Edison came up with miniature lamps, his friend and business associate Edward H. Johnson invented the first string of Christmas lights, which he debuted on Christmas Day in 1882. Not only did Johnson's tree light up with six strings of lights in different colors, he also used a motor to rotate the tree and even devised a mechanism to make the lights blink. That is pretty impressive for 1882, when electricity was still a novel concept.


In fact, many people were skeptical of using electricity at all, especially as a form of lighting. Christmas lights remained a luxury for the wealthy until the early 20th century.


The design for the lights was based on Edison's miniature lamps, and Johnson worked closely with Edison, later becoming the president of Thomas Edison's illumination company. However, Johnson gets the credit because he was the first to use the lights to decorate a Christmas tree. Traditionally, wax candles were used to light trees, which is ironic because people were afraid the electric bulbs would set their houses on fire. While such things are possible, I would bet my money on open candle flames being the bigger fire hazard.


Johnson led a particularly interesting life. He first served as assistant to William Jackson Palmer, helping to construct the Kansas Pacific Railroad from Kansas City, MO to Denver, CO. Later, Johnson gave Edison his first start, hiring the 24-year-old to work for the Automatic Telegraph Company. Edison wasn't the only young talent Johnson recruited; he rounded up electrical geniuses like Frank J. Sprague, Charles W. Batchelor and Francis R. Upton. This team worked together as associates in various companies, including the Edison Electric Lamp Company and what later became General Electric.


While Johnson's first rendition of his Christmas lights was made by hand, the lights were later mass-produced by the Edison General Electric Company, which offered strings of nine carbon-filament lamps. It wasn't until Albert Sadacca, the head of a lighting company, pushed a major Christmas-light campaign that the quintessential decoration became a mainstay in the 1930s. The final push for the public's switch to electric lights was the number of tragic fires caused by candle-lit trees – very unsurprising.


I wonder: if Edward Johnson put his Christmas light invention on Kickstarter today, would his campaign be successful? Technology adoption is fascinating, especially when you have a beautiful invention created for the sole purpose of aesthetics and enjoyment. Now, it's time to sit by the fire with some eggnog in hand and stare placidly at the lit Christmas tree as my vision becomes progressively hazy from bourbon and merriment. Happy holidays, everyone!



See more news at:


Unnecessarily eerie digital double of a model’s face and eyes (via Disney)


Do you wish your Disney animated characters were more creepily realistic? The computer graphics scientists at Disney Research Zurich have found a way to more accurately depict the human eye and make on-screen characters seem more life-like. 



From human model to digital model

The researchers at Disney developed a new technology, called PAPILLON, to scan and reconstruct the eyes of human models by capturing each eye with multiple cameras under varied conditions of light movement and brightness. This new technique, they say, better represents the nuanced individual variation of the eye and therefore renders more believable character faces.


Typical approaches for depicting the human eye assume regular geometric shapes for the sclera (the white part), iris (the colored part surrounding the pupil), and cornea (the transparent bulb covering the iris). 



The anatomy of the human eye (via


These traditional models often render the sclera and cornea as two perfectly round, intersecting spheres, and the iris as a flat disc or cone. However, researchers at Disney point out that each of these eye features has its own uniquely irregular 3D shape, and the eye as a whole is asymmetrical and heterogeneous, with many surface details and imperfections. Furthermore, they remark that many traditional approaches do not use real eyes as models for their renderings, so these individualistic features cannot be captured.



Idealized (b) vs. realistic (c) models of the human eye


The unique, 3-D shapes of nine irises


To create more true-to-life replicas, the PAPILLON team captured nine eyes from six different models. They adjusted the lighting to prompt the iris dilator muscle to contract and relax, causing pupil constriction and dilation, respectively. The resulting digital models accurately portray the transparency of the cornea, the imperfections of the sclera, and the constriction and dilation of the iris.


Top: human models; bottom: computer renderings. Even two eyes from the same person are not quite the same. (via Disney)


This technology, which will undoubtedly start showing up in Disney’s animated films very soon, is key to creating more realistic and even more relatable characters. Beyond that, the researchers hope their technology will be useful in other fields, with potential applications in ophthalmology.


See more news at:


Technological advances have not only made our lives easier, but have also made entertainment, like gaming, more immersive. As we wait on the edge of our seats to see whether hoverboards and ray guns will become a reality, more practical, if less glitzy, breakthroughs are being made. For example, a team of researchers in the University of Bristol's Department of Computer Science has invented a new method of haptic feedback that uses ultrasound to produce 3D shapes that can be touched in mid-air.


Complex patterns of ultrasound can be focused onto hands placed above the device, causing air disturbances that are rendered as 3D shapes users can feel on their skin. The shapes are actually invisible, but the researchers made the patterns visible by using a thin layer of oil. The accompanying video shows the technology in action, creating multiple shapes and rotations on the surface of the oil. The 3D shapes created with the ultrasound can be added to 3D displays, and the system can also match a picture of a 3D shape to one it previously created.
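Focusing ultrasound at a point in mid-air works like any phased array: each transducer's signal is delayed so that all wavefronts arrive at the focal point at the same time. Here's a minimal sketch of that delay calculation; the array geometry, frequency and focal point are illustrative assumptions, not details of the Bristol hardware:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air
FREQ = 40_000           # Hz, a common choice for airborne ultrasound

def focus_delays(transducer_positions, focal_point):
    """Per-transducer time delays (s) so all waves arrive at focal_point together."""
    dists = [math.dist(p, focal_point) for p in transducer_positions]
    farthest = max(dists)
    # Fire the farthest transducer first (zero delay), nearer ones later
    return [(farthest - d) / SPEED_OF_SOUND for d in dists]

# A hypothetical 4-element line of transducers, 1 cm apart,
# focusing on a point 20 cm above the array's center
array = [(x * 0.01, 0.0, 0.0) for x in range(-2, 2)]
delays = focus_delays(array, (0.0, 0.0, 0.2))
```

Sweeping the focal point over time is what lets a real system trace out shapes the hand can feel.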


Dr Ben Long, a research assistant in the Bristol Interaction and Graphics (BIG) group in the Department of Computer Science, says possible uses for the system include “touchable holograms, immersive virtual reality” and “touchable controls in free space.” The breakthrough could also let surgeons exploring a CT scan feel a disease, like a tumor, through haptic feedback.


The study can be found online in the ACM Transactions on Graphics journal.



See more news at:


Photographer Ignas Kutavicius and his mega hipster pinhole selfie contraption (via Ignas Kutavicius)

A project by photographer Ignas Kutavicius takes selfie mania to a new level. Actually, it doesn't just take selfies to a new level; it creates a new form of media and takes you into a different time period.


Kutavicius has created a pinhole camera for the single purpose of taking selfies that make you look like you're from the 1800s. Truth be told, Kutavicius simply took two things that already existed and put them together (probably in a drunken stupor) to create something that people in the photography world are actually raving about. I can't tell whether the interest in Kutavicius's camera is due to people having nothing better to do, or to selfie mania, but people actually want to take their selfies with it. So, let's give a round of applause to Kutavicius for finding his niche in the market.


With this device, you no longer have to spend the time and effort previously required to take a selfie with your iPhone and apply a pinhole effect. In maker terms, this camera is actually a fun project and pretty easy to duplicate. The pinhole camera is made from an empty energy drink can, though you can substitute an empty can of your favorite brew. The small pinhole acts as the lens, and black-and-white photo paper captures the inverted image formed by the makeshift lens.
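If you do build one, the classic Rayleigh formula gives a good pinhole size for the sharpest image: d ≈ 1.9·√(f·λ). A quick sketch, assuming a focal length of about 60 mm (roughly a drink can's diameter) and green light at 550 nm – both my assumptions, not Kutavicius's measurements:

```python
import math

def pinhole_diameter(focal_length_m, wavelength_m=550e-9, c=1.9):
    """Near-optimal pinhole diameter (m) via the Rayleigh formula d = c*sqrt(f*lambda)."""
    return c * math.sqrt(focal_length_m * wavelength_m)

d = pinhole_diameter(0.060)  # ~0.35 mm for a can-sized camera
print(f"{d * 1000:.2f} mm")
```

A pinhole much larger than this blurs the image geometrically; one much smaller blurs it through diffraction, so a sewing needle gets you surprisingly close to ideal.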


The small pinhole camera is fixed onto a rig that attaches to the user’s head, so they can take a selfie hands-free. Because the user’s face is closest to the camera, and the camera is attached to their head, it can capture a crisp image of the face regardless of slight movements. The background, however, shifts whenever the user turns their head, so it comes out blurred. The finished product is a pinhole effect coupled with a fisheye effect that keeps the user’s face sharp while giving a surreal image of the surroundings. Basically, the camera takes one hell of a selfie for the hipster nation that must have everything kitsch.


Kutavicius, who's from Sweden, said he made the pinhole camera to combine something new (selfies) with something old (the pinhole camera). He was also surprised when people took an interest in his camera and wanted him to take pictures of them. It seems even Kutavicius underestimated the fickle nature of humans.


Kutavicius has a Pinhole Selfie series that you can view here:


Kutavicius notes that he wanted to give an impression of what selfies would have looked like back in the 1800s – but you don't actually have to wonder, because The Public Domain Review has found one for us! Feast your eyes on this:


Robert Cornelius was the first man to be featured in a documented selfie, c. 1839. And yes, he was working that camera (literally and figuratively). The moral of this story: play with pinhole cameras, because they're fun and easy to use. Maybe a pinhole camera drone is next?



See more news at:


A Punching bag keyboard for you work and exercise needs (via Bless)

Let's face it. Sitting in front of a computer all day is horrible, and it's made continually worse by having to come up with brilliant ideas that are expected to float effortlessly across the page. Unfortunately, sometimes when you've been sitting on your butt all day, the ideas don't come. Sometimes you need to exercise and stop drinking so much coffee. But is it possible to do both at the same time?


Practically speaking, probably not. But theoretically, yes you can, and a German design firm has created a quirky project to enable writers to take their frustration out on their keyboard, or just pump out some words with gusto. It's a keyboard designed as a set of interconnected punching bags, with a bag for every letter in the alphabet. You can also kick the delete and space buttons and body slam into the enter button if you'd like. I'm personally the kind of person who likes a good clacky keyboard, so I can type so loudly that I can hear and feel the visceral damage my fingers are doing to the keys. Hence, I'd be pretty excited to beat the crap out of my enter key, quite literally. I also wouldn't mind developing killer abs in the process of working.


Bless, a group of wacky Germans, decided to showcase their design, which they call the N°41 Workout Computer, at the Istanbul Tasarim Bienali 2014. If you want to see a strange video of how participants interacted with their punching bag keyboard, then watch it here: If you like random slow-motion shots set to Euro techno music for no reason whatsoever, then you'll love it.


As you can see from the layout of the design, it's not exactly conducive to practical use as a combined Workout Computer and workstation. However, it is a really fun concept, and the Bless team notes that they created the design for fun and to make a statement about the way people are living their lives these days: in front of a computer screen. With obesity rates and diabetes soaring in America and worldwide, it isn't hard to see that the root cause is a lifestyle that requires sitting down all day and eating crap.


In fact, when it comes down to it, it is terrifying to realize that our culture not only forces workers to sit in front of a computer all day, but most leisure time is spent sitting in front of a screen as well: TV, iPad, Kindle, iPhone. Somehow, someway, a healthier way of living in tandem with computers must arise, and perhaps creating a computer workout station isn't a bad idea.


Currently, standing desks are all the rage, with some companies replacing their old desks with these new standing workstations that allow workers to get more exercise and prevent back injury and fatigue. I've also seen new treadmill computer workstations out! It seems a bit silly, but it isn't a bad idea to walk 6 miles while cranking out your typical workload. You'll not only be productive, but you'll be super healthy as well.





A demo of the new Prynt case for instant smartphone picture printing (via Prynt SAS)

Coming to the market is a smartphone case that instantly prints photos, directly from your phone. It's a modern-day Polaroid camera, but better and hopefully cheaper.


A French start-up called Prynt SAS recently demoed their working prototype of the Prynt case for TechCrunch and is expecting to launch a Kickstarter campaign in early January. The cost for the Prynt case on Kickstarter will be $99, so if you're interested, stay tuned to get the early bird specials. You can stay updated on the campaign from their website:


Currently, the photo printing smart case only accommodates 4” screens, like an iPhone 5. So, for those of you with phones the size of tablets, the Prynt case will not fit your phone. However, you still may be able to print your photos from the device.


The current prototype connects to the phone via Bluetooth, but the future Prynt device is supposed to plug directly into the phone's Lightning connector for instant printing.


As it stands, the printing process takes up to 50 seconds, but Prynt intends to reduce this time to 30 seconds in the next generation they're launching on Kickstarter. The Prynt case prints by heating up ink-filled paper. The current prototype holds about 10 pieces of ink-filled paper, but the next generation is expected to hold between 10-30 pieces of paper at any time. You can refill the paper whenever you like and order it directly from Prynt for delivery to your home.


Prynt expects to charge just cents per sheet for its photo paper, which may be the best part of all!


In addition to printing, the Prynt case can also make a printed photo play a video when it is held up to your phone's camera, if you have their special app. This means Harry Potter fans can go wild with this augmented reality addition. When you take a photo with the Prynt app, it also captures a video. Later, when you print out the photo and hold it up in your camera's view, the app will show a short video clip recorded during the snapshot of that photo. This feature is also meant to work even if you give the photo to a friend (who would probably have to have the same app, I would think).


This idea is pretty interesting and adds a lot of extra value to make the Prynt app something more unique than other photo printers on the market. It allows you to really capture a moment in time with more than just one shot.


In the future, Prynt is going to try to turn your picture of one thing into a video of something else. For instance, you could send a wedding invitation and invite your guests to hold it up to their smartphone camera to see a personalized video invite from the bride and groom. I can see that idea really catching on.


This gadget seems to be catching people's attention and imagination since it was demoed at HAXLR8R, and it's coming soon to Kickstarter!





Printed LEDs from bionic nanomaterials by McAlpine Research Group (via McAlpine Research Group)


The McAlpine Research Group at Princeton University is the first to successfully 3D print fully functional quantum dot LEDs. Not only did the group successfully print the LEDs, but all of the electrical components, housings, and more were 3D printed as well, making this the first functioning electrical device to be 3D printed from scratch.


The creation of LEDs may not be as exciting as a Star Trek device that 3D prints any type of food you can think of (and yes – I do wish this existed), but it opens up new possibilities. Michael McAlpine and his team of researchers have been leading some cutting-edge scientific discoveries in the realm of nanotechnology, biomedicine, and energy sciences. Previously, the team was working on a method of generating power from bodily functions like breathing and walking.

In order to successfully harness energy from bodily functions, they created rubber films that were able to generate and capture energy from flexing as the body moves naturally. In another first, McAlpine's nanoribbons combined silicone and lead zirconate titanate (PZT) into a material 100 times thinner than a single millimeter.


Jumping off from this breakthrough, McAlpine and his team decided to take the world of 3D printing further and innovate ways of printing an entire electrical device in one go. Their final product is a box containing working quantum dot LEDs, which are 3D printed using 5 different materials.


The 3D printer was developed in six months and cost $20,000, but the machine's capabilities are impressive. In particular, the materials used are a variable mixture of inorganic and organic nanoparticles, metals, and polymers. Considering most 3D printers can only use plastic, powder, and possibly metal, this is very impressive. Another interesting aspect of this 3D printer is that it is able to print the LEDs on a curvilinear surface. This may lend itself to the production of things like 3D printed contact lenses and biomedical implants, according to McAlpine.


The LEDs are made of five layers: the bottom layer is a ring of silver nanoparticles that conducts electricity; above it, two polymer layers carry the electrical current up to the emissive layer, which is made of cadmium selenide nanoparticles encased in a zinc sulfide shell; and a cathode layer of eutectic gallium-indium tops off the stack.


Put all this together and you basically have an LED that creates orange or green light by forcing electrons to crash into the quantum dots. The overall electrical design is also interesting and efficient, pointing towards new future developments in the area of 3D printing electronic devices.
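The orange-or-green output is a hallmark of quantum dots: the emission color is set by the dot's size rather than its chemistry. As a back-of-the-envelope sketch, the Brus effective-mass model with textbook CdSe constants (assumed values, not numbers from the Princeton paper) puts a dot of roughly 2.5 nm radius in the green and one of roughly 3 nm in the orange:

```python
import math

# Physical constants (SI)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M0   = 9.1093837015e-31  # electron rest mass, kg
E_CH = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

# Textbook CdSe parameters (assumed for illustration)
EG_EV   = 1.74   # bulk band gap, eV
ME_EFF  = 0.13   # electron effective mass, units of M0
MH_EFF  = 0.45   # hole effective mass, units of M0
EPS_REL = 10.6   # relative dielectric constant

def brus_emission_nm(radius_m):
    """Brus effective-mass estimate of a CdSe quantum dot's emission wavelength."""
    confinement = (HBAR**2 * math.pi**2) / (2 * radius_m**2) \
                  * (1 / (ME_EFF * M0) + 1 / (MH_EFF * M0))      # J
    coulomb = 1.8 * E_CH**2 / (4 * math.pi * EPS0 * EPS_REL * radius_m)  # J
    energy_ev = EG_EV + (confinement - coulomb) / E_CH
    return 1239.84 / energy_ev  # photon energy in eV -> wavelength in nm

print(f"{brus_emission_nm(2.5e-9):.0f} nm")  # -> 554 nm (green)
print(f"{brus_emission_nm(3.0e-9):.0f} nm")  # -> 598 nm (orange)
```

Shrinking the dot widens the effective band gap through the 1/R² confinement term, which is why smaller dots glow bluer and larger ones redder.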


The McAlpine Research team is expecting that continued efforts in 3D printing electronic devices using nanoparticles will result in biomedical implants being developed. Perhaps DARPA will use this idea to develop a microchip that is small enough to inject into soldiers' brains.



See more news at:
