
Test & Tools


EPFL scientists developed the sensory-enhanced prosthetic hand to give amputees their sense of feeling back. (Image credit: BBC video screenshot)


Back in 1993, Almerina Mascarello lost her left hand in an accident at a steel factory. Twenty-five years later, the 62-year-old mother of two has been outfitted with a bionic replacement that also gives back her sense of touch, thanks to an international team of scientists from Italy, Switzerland, and Germany. In an interview with the BBC, Mrs. Mascarello said, “It’s almost like it’s back again,” referring to her lost limb.


In lab tests, she was able to discern whether an object she picked up was hard or soft while blindfolded, thanks to pressure sensors fitted to the prosthetic hand. The information from those sensors is sent to Mascarello’s brain via tiny electrodes implanted into the nerves of her upper arm. More precisely, pressure data from those sensors are relayed to a computer (worn in a rucksack), which translates that information into signals the brain can understand; these are then routed through the implanted electrodes. Not only does this allow her to ‘feel’, it also lets her manipulate the bionic hand.


EPFL’s first iteration of the bionic hand; note the sensory and computer equipment needed to process the implant signals. (Image credit: EPFL)


The underlying bionic technology isn’t new: EPFL (École Polytechnique Fédérale de Lausanne) first developed the device back in 2014, but the sensory and computer equipment was too large to go mobile, leaving it restricted to the lab. The ability to feel using the hand comes from artificial tendons that control finger movement; the system measures their tension and turns that information into an electrical current.



That information is too garbled for us humans to understand, so the scientists developed algorithms to do the translation for us, turning those sensory signals into electrical impulses our nerves can interpret. Thanks to new technology updates, the sensory and computer equipment has become small enough for the wearer to go mobile. That being said, Mrs. Mascarello was only allowed to wear the bionic prosthetic for six months before having to return it, as the device is still in the prototype stage and will undergo further development to miniaturize the equipment enough for commercialization.


Have a story tip? Message me at: cabe(at)element14(dot)com

Peter, Jon, and I are building a Programmable Electronic Load. In this series of blogs, I'm building a LabVIEW library of reusable components.



In this first article: Initialising the instrument


LabVIEW has a set of blocks to build flows with serial communication devices. Those blocks work well with our instrument.

However, inspired by the example for the Rigol DP8xxx family, I'm going to create more abstract blocks.

You can then use these in a LabVIEW flow.

To show the difference between using the serial blocks directly and the more abstract integration, check these two flows:


The first one uses the Serial blocks. You set all communication parameters and can then shoot SCPI commands and read the replies back.

You'll see that this is simple, but it has no device or error management.

When you look at the same flow using a (built in this blog) abstract block, it looks like this:


This block hides the communication settings that aren't configurable (the serial settings are fixed in the instrument's firmware).

It has additional support for device management (you can check if it's really our electronic load, you can reset it) and it supports LabVIEW device error management.


Initialize component

Here is the internal flow of that Initialize component:

I'll break this down into the lower levels. Note that each block that has a selector (the little rectangle in the middle of the top bar of a block) represents multiple flows.

The image just shows one of the possible ones (e.g. for the first block, it shows the process if the option to validate the instrument's IDN is set to True).

You expose the in and out parameters on the block's icon:


The elements that don't have a spot on the icon are private to the block. You can't set them externally.

For the Initialize block, the input parameters are

  • VISA resource name eload
  • delay before read (ms)
  • ID Query (T/F)
  • Reset (T/F)
  • error in (no error)

The output parameters are

  • VISA resource name out
  • error out

Running this block performs the following:

  • The VISA device that you pass to this block (it's serial so this will always be a COM port) will be opened, using the internal Serial settings of the block.
  • If ID Query is set to True, the block will compare the instrument's ID string with "THEBREADBOARD,ELECTRONICLOAD".
    If the instrument's string doesn't start with this string, the block will throw an error:
    -1074003951: The ID Query failed.  This may mean that you selected the wrong instrument or your instrument did not respond. 
    You may also be using a model that is not officially supported by this driver.
    If you are sure that you have selected the correct instrument and it is responding, try disabling the ID Query.
  • If Reset is true, the block will execute the Reset block (this custom block will be discussed later). This will shoot a *RST SCPI command to the device.
    Else it calls the Default block (also discussed later). That one sends a *CLS command.
  • If an error is detected, the VISA resource is closed and error info is passed to the caller.
    Else the block passes the initiated device back to the calling flow.
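
For readers who think in text rather than in G, the sequence above can be sketched in Python. This is illustrative only: the real Initialize component is a LabVIEW block diagram, and names like `initialize` and `query` are invented here.

```python
# Illustrative sketch of the Initialize flow; not the actual LabVIEW code.

EXPECTED_ID = "THEBREADBOARD,ELECTRONICLOAD"
ID_QUERY_ERROR = -1074003951  # LabVIEW's standard ID Query error code

def initialize(resource, query, id_query=True, reset=True):
    """resource: an opened serial handle (the block uses the fixed
    settings 9600, 8, 1, N); query(cmd) sends a SCPI command and
    returns the instrument's reply (empty string for plain commands)."""
    if id_query:
        idn = query("*IDN?")
        if not idn.startswith(EXPECTED_ID):
            resource.close()  # on error, close the VISA resource
            raise IOError(ID_QUERY_ERROR, "The ID Query failed.")
    # Reset=True shoots a *RST; otherwise the Default block sends *CLS.
    query("*RST" if reset else "*CLS")
    return resource  # hand the initialised device back to the caller
```

The error-handling shape mirrors the block: the resource only reaches the caller if every step succeeded.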


Initialising the VISA resource


The first step in the block is to retrieve the VISA resource from the calling flow. You can also pass it the error stack from previous processes.

It will then initiate communication with the device, using the serial settings defined within this block (9600, 8, 1, N).


Query the Identifier


The image shows the two possible flows.

The first one is executed if you pass True for ID Query. The second one (that does nothing) when you pass False.

An *IDN? command is sent via the lower-level VISA Serial block that comes with LabVIEW. The block waits 500 ms (passed via delay before read (ms)), reads the SCPI reply from the instrument, and checks whether the identifier starts with our check string.

If there's a match, this block doesn't touch the error info. If the identifier is not OK, we push an error message on the stack and set the status to error.

It then hands over control to the next block.





Here, again, there are two possibilities.

If you pass True to the Reset (True) signal, the flow calls the custom Reset block.

If you pass False to the Reset (True) signal, the flow calls the custom Default Instrument Setup block.

These two blocks are part of the library that we're building here and will be discussed later.




Also two possibilities. If the previous actions were successful, the block does nothing.

Else it closes the VISA resource and clears the VISA resource name out parameter.

Then control is handed back to the calling flow.


Appa iMeter 5

Posted by kk99 Jan 6, 2018

Here is a short presentation of a battery replacement in my small portable digital multimeter, the Appa iMeter 5. I really like this multimeter. Its only cons are the lack of True RMS functionality and the small range for capacitance measurements. Please share your favorite digital multimeter in the comments!

A new wireless microphone on the market claims to provide better sound than anything else on the market. But it might not be for everyone. (Image via Mikme)


Simple ideas executed elegantly are a great inspiration. I think this could be made a bit cheaper... but first, let me explain what it is.


One industry that is always targeted by creators of new technology is the entertainment industry. The way artists record and present their arts has changed so much that it has opened the doors to many artists to be independent. However, with independence comes the pressure of providing a product as good as that offered by artists that are backed by various corporations. Luckily, there are companies like Mikme, who are always working to make life easier for independent artists.


Mikme is a European company that created a high-end wireless microphone of the same name, with the purpose of helping artists for whom audio is a pillar of their art deliver the best sound possible. With their global team, Mikme designed and built the microphone in Europe, but made the product available for purchase in the United States as well. Although a new product, Mikme is the fruit of a team with 15 years of combined experience. Why is this microphone considered high-end?


For one thing, the price of Mikme says it is not for the average American: it costs $499 in the U.S., though customers have the option of acquiring the device for $495 from Europe through the company's website. Another reason Mikme could be considered high-end is the quality of the sound that comes out of it. A few videos on Vimeo and on the company's website clearly demonstrate that a vlogger, journalist, or musician had better use Mikme instead of the mic incorporated in an iPhone. It is easy to tell that the sound from Mikme is a lot louder and crisper than that coming from an iPhone, though one might wonder if that is because the iPhone is further from the mouth than the Mikme was.


Regardless of its price or the quality of its sound, Mikme seems to be state of the art when it comes to the actual hardware. Inside, Mikme houses a 1" gold-plated condenser capsule with analog-to-digital conversion at more than 90 kHz. The device also has a built-in audio recorder with 16 GB of memory. The Mikme is powered by a 920 mAh battery, rechargeable via a micro USB cable that is also used for file transfer. The micro USB connection is compatible with iOS 9 and newer, Mac OS 10.8 and higher, and even Windows XP computers. Weighing 0.35 lbs. (162 grams) net, the Mikme measures 70 mm × 73 mm × 35 mm. The company seems to have opted for a simple design when it comes to the interface of the device. The top side carries only one button, which powers Mikme but also acts as an indicator when the app is performing other tasks, such as transferring files or playing back the last track. The bottom of the microphone presents 3/8-inch and ¼-inch threads that let the device stand alone, either on a tripod or on a selfie stick.


Designed to work in sync with an app on the user's smartphone, Mikme uses Bluetooth 2.1, an enhanced version of Bluetooth 2.0 that makes pairing the device with an iPhone a lot more secure. The app serves the artist both during recording and afterwards, during the editing part of the work. However, the most exciting feature of Mikme is probably its lost-and-found protocol: according to the company's website, if the microphone is lost, an audio transmission with patented packet-loss and sync detection, routed through the app, will allow it to be found quickly.


With all those features, Mikme seems to be the perfect solution for people who are always in the pursuit of better sound in their work. Whether Mikme actually delivers or not on its promises, there is no doubt that the tech world will continue to make progress. So, artists should believe that they are getting closer to the perfect sound one device at a time.


I wanted to create a microphone like the Mikme with a Raspberry Pi. I think it is completely possible… more on this soon.


Have a story tip? Message me at: cabe(at)element14(dot)com

The HRU standard resistors (10 ohm and 100 ohm) will be formally used on the watt balance system for the mass standard, which has been developed by NIST (US) and NRC (Canada) as the next generation's mass standard from 2018. In a watt balance system, current (A) and voltage (V) are extremely important, as these enable researchers to determine the mass of an object indirectly via two measuring modes: the strength of the magnetic field, and the current running through a coil of wire. For this application, NIST required an ultra-stable 100 ohm standard resistor under different current levels, and so, via the NCSLI and CPEM conferences, Alpha Electronics provided evaluation units in 2013. Since then, NIST has found that the HRU-101 showed much better power stability compared with their own designs, including standard resistors over 50 years old.


For more information about Alpha Electronics, please visit:

Grab your smartphone and lightsaber and battle foes like Kylo Ren in your living room. (Image credit Lenovo)


I no longer have to ‘play fight’ Star Wars villains using my imagination, a wrapping paper tube, and voice effects, thanks to Lenovo’s Star Wars: Jedi Challenges AR platform. The system uses the company’s Mirage AR headset and a compatible smartphone to overlay virtual 3D images against a reality-based background, much in the same fashion as Samsung’s Gear VR but at a much higher cost ($200 vs. $39.99, respectively). It also comes with a lightsaber handle modeled after the ones featured in the movies, and while it looks great, think of it more as a hand-held controller in the same vein as Samsung’s and Oculus’, as the headset uses it to track your saber motion.




An interesting facet of Jedi Challenges is that it uses an illuminated floor beacon to track your entire body motion in a confined area, meaning it only tracks your location within a certain distance, so you can run away if the fight becomes too complicated. It also uses your phone’s IMU sensor for rotational tracking, which is why some phones are not compatible, though anything from the last few years should work fine. Connections for the interface include Micro USB, Lightning, and USB C, making most recent Android and iPhone smartphones compatible.


Jedi Challenges features three games including lightsaber combat, 3D Holochess, and Strategic Combat. (Image credit Lenovo)


Jedi Challenges comes packed with two additional interactive games beyond Lightsaber Battles (which also features Darth Vader, waves of Stormtroopers, and more across six different planets), including the iconic Holochess: battling against alien monsters, similar to Battle Chess, only better. This game requires you to use your lightsaber to select and move your pieces, a somewhat unusual action, but then again, C-3PO didn’t physically move the holo-pieces by hand on the Millennium Falcon either.


You also get Strategic Combat, which is similar to most RTS games (Command and Conquer, StarCraft, etc.): you create and control entire armies, set up your bases, factories to produce mechanized machines (AT-ATs, AT-STs, etc.), and barracks to train troops, along with a host of other play mechanics, and battle against invading opponents.


There are some drawbacks to playing Jedi Challenges, the first being that Lenovo doesn’t currently have any other games for the platform beyond Challenges and hasn’t announced any upcoming titles at this point. You also can’t use multiple accounts on a single phone, limiting play among several users (sorry family and friends, you’ll have to get your own AR system). Finally, there is no quit button or command in the accompanying app, meaning you have to shut it down manually when you take the headset off. That being said, it is an exciting AR experience few other platforms provide, especially for Star Wars fans, and while it is a little on the expensive side, you can’t put a price on the Force.


Have a story tip? Message me at: cabe(at)element14(dot)com

Buoy is an all-in-one solution that uses algorithms to monitor water usage in real time, alerts you when there are leaks and offers remote shutdown capability (Image credit Buoy Labs)


Who knew you could task AI and machine learning to monitor the water that flows through your home? Apparently, Santa Cruz-based Buoy Labs did, and they’ve used those algorithms to develop a water monitoring platform that could help lessen your water bills, which is good news for those who live in areas with a declining water supply such as California. The company’s aptly-named Buoy is a device that’s meant to help cut down your water usage and waste, something the EPA says affects 10% of the homes in the US.


Buoy was designed to be installed where the city’s water mainline connects to your home, and it uses a series of sensors to monitor, in real time, the rate at which water flows through your home’s pipes. The device then uses a Wi-Fi connection to upload that data to the company’s servers, which use AI and machine learning algorithms to categorize your water usage. It’s able to monitor shower, toilet, and faucet flow rates simultaneously, and even appliances that use water, such as dishwashers and swamp coolers, which it does when the sensors pick up a spike in the water flow.



Through a companion app, Buoy can display the home’s daily water usage and overall usage by category: shower, washing machine, and so on. It will also alert you to water leaks and, in some cases, the cause, based on the collected data analyzed by the company’s AI. Residents can then use all of that data to paint a bigger picture of where their water is going and where they can cut or curb their usage. Moreover, they can also use the app to remotely shut off water to their homes in the event it detects a leak, perfect for those on vacation or who own multiple properties they don’t spend much time around.


The Buoy doesn’t come cheap, though, costing $799 for the unit alone, but that does include professional installation by a plumber and lifetime use of Buoy’s app. The company also states that Buoy is compatible with most single-family homes and newer multi-family buildings, so you won’t have to purchase any additional equipment such as an adapter.



See more news at:

MIT researchers create new sensors that warn you when plants are running out of water using carbon-based ink. These sensors are placed on leaves and let researchers examine the stomata (Photo from Betsy Skrip)


Plants can be a wonderful addition to any home. They make the room look nicer, can give off good smells, and just be a pleasant experience. But, what inevitably happens is you forget to water it, and there goes your plant in the trash. What if your plants could alert you when they’re running out of water? Engineers at MIT may have found a way to do just that.


The team at MIT created sensors, printed directly on plant leaves, that tell you when the plant is running out of water. The sensors rely on the plant’s stomata, small pores on the surface of a leaf that let water evaporate. When this happens, water pressure in the plant drops, allowing it to draw up water from the soil in a process known as transpiration. Stomata open when exposed to light and close in the dark, a cycle scientists are now able to study in real time.


So how does printing on a leaf work? The team used an ink made of carbon nanotubes, tiny hollow tubes of carbon that conduct electricity. The ink is dissolved in an organic compound called sodium dodecyl sulfate, which doesn’t damage the stomata, and can be printed across a pore to form an electronic circuit. When the pore is closed, the circuit is complete, and the current can be measured by connecting the circuit to a multimeter. When the pore opens, the circuit breaks and the current stops flowing, which lets researchers detect when a pore opens or closes.
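
The readout logic is simple enough to sketch in a few lines of Python. This is a toy illustration of the idea only, not the MIT team's code, and the threshold value is an invented placeholder: current above the threshold means the circuit is complete (pore closed); a drop toward zero means the pore has opened.

```python
# Toy illustration of interpreting the leaf-circuit readings.
# CURRENT_THRESHOLD is an arbitrary placeholder, not a value from the study.

CURRENT_THRESHOLD = 0.5  # current above this => circuit complete => pore closed

def pore_events(samples):
    """samples: list of (time, current) readings from the multimeter.
    Returns a list of (time, state) events where the pore changed state."""
    events = []
    state = None
    for t, current in samples:
        new_state = "closed" if current > CURRENT_THRESHOLD else "open"
        if new_state != state:
            events.append((t, new_state))
            state = new_state
    return events
```

Logging only the transitions is what lets the researchers time how long stomata take to open and close.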


The scientists studied the opening and closing of the stomata over a few days and found that they could tell when a plant was running out of water. Results show that stomata take seven minutes to open after exposure to light and 53 minutes to close when it gets dark. During dry conditions, though, these responses change: if the plant doesn’t have enough water, the stomata take roughly 25 minutes to open and 45 minutes to close.


Not only could this save your house plants, but it could alert farmers when their crops are in danger. While there are devices that warn farmers of an upcoming drought, like soil sensors and satellite imaging, they don’t detail which specific plant is drying out.


Right now, MIT researchers are working on a new way to apply the electronic circuits by placing a sticker on the leaf instead of using the carbon-based ink. They believe this research could have big implications for farming and could save more crops and plants in the face of drought or water shortages. And maybe now your houseplants can survive longer than a week.



See more news at:

Ford Performance is looking to improve your mental performance by relying on tactics used by professional drivers, VR, and an EEG. Pictured: a prototype racing helmet with an integrated EEG that Ford is currently working on (Photo via Ford)


When you’re a professional driver, concentration is a must behind the wheel. Letting your focus break could lead to a major accident, which is why drivers use certain mental training techniques. One of these methods involves a new brain-scanning helmet that measures how racecar drivers improve their performance with mental training. This is part of a new study that explores whether these same mental training techniques could help us all deal with the stress of everyday life.


While many athletes have been using mental training techniques to improve their performance for a while, the practice is only just spreading to the mainstream. Since people are constantly looking for new ways to deal with stress, this new study looks at how these techniques used by athletes might improve our own brain performance.


Dubbed The Psychology of Performance Study, it’s being developed by Ford Performance, the motorsport branch of the US carmaker, in collaboration with King’s College London and tech partner UNIT9. The test works by using an EEG (electroencephalogram) headset that monitors your brainwaves. It’s ideal for this experiment since it’s versatile and can be used outside of a lab setting. This way, brainwaves can be studied in real-life situations and, in this case, VR.


Participants, which will include professional drivers and members of the public, will have their performance and brain activity measured throughout the test. Some participants will have prepared using mental techniques while others will have no preparation at all. This will allow researchers to see how these two groups perform with and without the mental training.


The team hopes that by using VR they can study how fatigue affects your driving and whether mental training can help improve focus and alertness for longer periods of time. Dr Elias Mouchlianitis of the Institute of Psychiatry, Psychology & Neuroscience, King’s College London, says the benefit of using VR is that “the subject is completely absorbed in your experiment; there are fewer distractions and you can control everything about the world that surrounds them in very precise ways.”


Aside from the study, Ford is also working on a prototype EEG race helmet for their motorsport team to be used in live simulations. Designers will integrate the EEG headset and sensors into a race helmet to measure drivers’ brainwaves in real-life practice environments. The results of the study will be published later on this month.



See more news at:

Scientists from EPFL are currently testing the use of VR to reduce phantom pain in paraplegics and those who suffer from spinal cord injuries. Phantom pain can’t be treated with medication, but VR may be the solution. (Photo from EPFL)


VR technology is still on the rise, but it hasn’t yet become an everyday thing for the masses. Typical complaints: the gear is too expensive and the games are usually just short demos rather than anything sustainable. But VR may soon find new life in the medical field, where doctors are using the tech in different ways to tackle different issues. A team of scientists from the École Polytechnique Fédérale de Lausanne (EPFL) has found a way to use VR to help paraplegics deal with phantom pain.


People who become paraplegic due to a spinal cord injury often have to deal with phantom pain, which sadly can’t be treated with medicine. With the team’s latest breakthrough, VR could provide some relief. They had people wear VR goggles, which showed a live feed from a camera filming a pair of dummy legs. The camera was set up to mimic a person’s point of view in relation to their own legs, giving the illusion that the dummy legs actually belonged to them.


From there, the scientists tapped the dummy legs and the area above the subject’s spinal lesion. After about a minute, the subjects felt like it was their own legs being tapped, and they reported that the sensation helped reduce the neuropathic pain. Why does this happen? Team leader Olaf Blanke explains that the tapping is “translated onto the legs because the visual stimulus dominates over the tactile one.”


After these successful results, Blanke and his team are now working on a digital therapy that automates visuo-tactile stimulation for those who suffer from chronic pain conditions and for spinal injury patients. Amputees often experience a similar condition with phantom limbs, but EPFL didn’t say whether this new technique could work for them or for people with other conditions.


Incorporating VR into the medical field is becoming a more common practice. The EPFL isn't the first team to use VR for medical purposes. A team of Duke University researchers has developed a VR system that helps paralyzed patients regain some movement. A team from Oxford University is using VR to help paranoia patients face their fears and a team in Europe is using VR as a means to fight depression.


It’s great to see that VR is being treated as more than a video game gimmick. While it may not be popular with all gamers, at least others have found ways to take advantage of the technology and use it for something that could change a lot of lives. It’s better than having those headsets live in the closet.




See more news at:

A demonstration, of sorts. A new system from Origin Wireless uses Time Reversal Machine technology to have WiFi detect the slightest movement and breathing in the room. This home security system uses WiFi instead of cameras. (Video via Origin Wireless)


I could have used this concept in my indoor motion sensing project.


Home security is a must when it comes to protecting yourself and your family, both inside and outside. For indoor systems, people turn to motion detection, which needs cameras and sensors to work properly, but there are some drawbacks here. Notably, hardware installation costs can be high, on top of a monthly fee. And despite how comfortable we are taking selfies, not everyone wants a camera on them all the time. This is where Origin Wireless comes in, using WiFi signals to detect movement.


So how does it work? The system uses “Time Reversal Machine” technology, which comprises smart algorithmic work that doesn’t put a strain on the processor. The setup normally includes two hubs: one router acting as the “Origin” transmitter, with the other routers acting as “Bot” receivers. These devices work on 5 GHz over 802.11a, 802.11n, or 802.11ac signals. The system also relies on channel state information (CSI) to avoid interference.


Normally, WiFi signals bouncing around is a bad thing, but this new device takes advantage of the multipath delays to show the activity in the room. If something moves, the multipaths change along with the delays. The software can scan for changes 50 times per second and can detect motion with an accuracy of 1 to 2 cm with a little bit of machine learning. All this boils down to a machine that can detect the respiration rate of everyone inside the room, with breathing rates displayed on a live chart.
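
To make the principle concrete, here is a toy Python sketch of the general idea, not Origin Wireless’s proprietary Time Reversal Machine: successive channel-state snapshots are compared, and a shift in the multipath profile beyond a threshold is flagged as motion. All names and the threshold here are invented for illustration.

```python
# Toy sketch of motion detection from channel-state snapshots.
# The function names and threshold are illustrative, not from Origin Wireless.

def motion_score(prev_csi, curr_csi):
    """Mean absolute change between two channel-state snapshots
    (lists of per-subcarrier amplitudes)."""
    return sum(abs(a - b) for a, b in zip(prev_csi, curr_csi)) / len(curr_csi)

def detect_motion(snapshots, threshold=0.1):
    """Scan consecutive snapshots (e.g. ~50 per second) and flag
    frames where the channel shifted more than the threshold."""
    flags = []
    for prev, curr in zip(snapshots, snapshots[1:]):
        flags.append(motion_score(prev, curr) > threshold)
    return flags
```

A static room produces near-identical snapshots and scores near zero; a person moving (or even breathing) perturbs the multipath profile enough to push the score above the threshold.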


While it does sound creepy, almost like someone watching you while you sleep, it does show how much we can do with WiFi. And the company thinks they can do more with the system beyond security monitoring. They believe the system could be useful for the elderly who live alone: if they fall, the software can detect the rapid change in the room, followed by a long silence.


Right now the device is still being tested and demoed, so we don’t have many real-world results. But if it works as advertised, it could change the way we think of home security. It may even be a tad safer than a camera setup, since cameras can easily be hacked.


Have a story tip? Message me at: cabe(at)element14(dot)com

After several hiccups, the Thirty Meter Telescope has been approved for construction, but Native Hawaiians still aren’t happy and won’t stop fighting it. Mauna Kea is considered sacred ground and is already home to 13 telescopes. (Photo from Flickr)


The Thirty Meter Telescope is meant to be the world’s largest telescope, but its journey has been a difficult one. Since its initial approval in 2011, its planned construction atop Hawaii’s Mauna Kea has been halted numerous times. Recently, the project received major approval: the state’s land board granted construction approval in a 5-2 vote, but this doesn’t mean those opposed to the telescope will stop fighting.


Mauna Kea is considered sacred ground and holds religious and cultural significance to Native Hawaiians. Many activists have put a lot of effort into halting construction over the years, including blocking construction crews from heading up the mountain in 2015; the project’s website was also hacked that same year. The state’s Supreme Court even stepped in to nullify the project’s permit in December 2015, since it was granted without giving those opposed a chance to air their concerns.


To make matters worse, the Thirty Meter Telescope wouldn’t be the first on the grounds. There are already 13 telescopes built on it. Mauna Kea is a popular spot because it provides a clear view for most of the year with limited light and air pollution. This new one is supposed to give us a deeper view into the universe. With it being three times wider than the current largest visible-light telescope and with a high resolution, it’s supposed to be better than the Hubble Space Telescope.


The project’s new permits come with some stipulations. First, they have to commit to cultural and natural resources training, and they have to follow strict environmental regulations. They also have to hire local residents for jobs generated by the project “to the greatest extent possible.”


Despite these stipulations, natives are still not happy with the move and are already filing motions to put the permit on hold until an appeal can be heard by Hawaii’s Supreme Court. Protest leader Kahookahi Kanuha said “For the Hawaiian people, I have a message: This is our time to rise as a people. This is our time to take back all of the things that we know are ours. All the things that were illegally taken from us."


Not all Native Hawaiians are opposed to the construction. Many believe it will create great opportunities for kids and will greatly benefit the community. Also, the observatories on Mauna Kea are a big part of Hawaii’s economy, bringing in about $60 million in earnings and taxes in 2012; this latest addition would give earnings a further boost. Even if the new telescope goes ahead as planned, it’s clear protestors aren’t going down without a fight.


Have a story tip? Message me at: cabe(at)element14(dot)com

Google Clips is a small camera that can be worn or placed in a room that will autonomously take photos. Innovative or creepy? (Photo from Google)


I am considering making a Raspberry Pi bodycam for a future product. Then I saw this product last week, now I have to step up my design a notch or two.


Sometimes it feels like the phrase “Live in the moment” is uttered by people more than “Hello.” Whenever you pull out your smartphone to snap a photo or record a short video clip, it can be the first thing out of people’s mouths. There’s nothing wrong with wanting to capture a moment, but if you’re focusing too much on taking pictures, it can be a distraction. Google may have a solution with Google Clips, a body cam that automatically snaps pictures.


Clips is a small, square camera that you can place on your body or, as Google recommends, just leave in a room. It uses AI to automatically take “motion pictures,” a new picture format that includes brief movement around the frame, similar to Apple’s Live Photos. It doesn’t capture audio, and it doesn’t use any kind of network connection, so you don’t have to worry about accidentally broadcasting something. To start capturing, just twist the lens to turn it on, then set it and forget it.


The camera keeps an eye out for anything it finds interesting, like certain people, facial expressions such as smiles, and other cues that it should start recording. Over time, it will learn faces and take pictures of those people rather than a lot of strangers. It can also recognize pets. Some of its other features include a 130-degree field of view, Gorilla Glass 3, USB-C, Wi-Fi Direct, and Bluetooth connectivity. It also comes with 16GB of onboard storage and offers up to three hours of smart capturing per charge.
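As a toy illustration of that kind of on-device selection logic (Google’s actual model is unpublished, so the scores, bonus, and threshold below are invented for the sketch), a capture loop that favors familiar faces might look like this:

```python
FAMILIAR_BONUS = 0.3  # assumed boost for faces the camera has learned

def select_clips(frames, familiar_faces, threshold=0.7):
    """Pick which frames to keep.

    frames: list of (frame_id, base_score, face_id or None), where
            base_score is an "interestingness" score from a detector.
    familiar_faces: set of face IDs the camera has seen before.
    """
    keep = []
    for frame_id, score, face in frames:
        if face in familiar_faces:
            score += FAMILIAR_BONUS  # familiar people rank higher than strangers
        if score >= threshold:
            keep.append(frame_id)
    return keep
```

With a learned face "mom", a borderline frame of her (0.5) gets kept while a slightly higher-scoring stranger frame (0.6) does not, which mirrors the behavior Google describes.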


While it’s an interesting idea, you can’t avoid the fact that it sounds creepy. Not too many people are going to be fine with a camera capturing random moments without their input. And Google knows this. Google product manager Juston Payne addresses this by saying the design of the camera is meant to be obvious: there’s no question that the little device on the table is a camera. Also, everything on Clips happens locally. The only things synced to Google Cloud are the photos you save to Google Photos. It won’t take pictures of faces it doesn’t recognize, and clips are stored only on the camera itself. They’re also encrypted in case you lose the device.


Google Clips retails for $249. With a steep price tag, it’ll be hard to convince most consumers this is something they need. Smartphones are equipped with cameras and most of them are pretty good. While there will be some people interested in Clips, it may have a hard time finding a mass audience.


I am wondering if I could beat that price with my Raspberry Pi version.




Nike is using NFC tags in new NBA jerseys that connect with Nike’s app to give fans exclusive content. Nike’s new way of connecting with NBA fans (Photo via Nike)


When the NBA season kicks off, players will be sporting new jerseys by Nike; Adidas had made the league’s jerseys since 2006 but didn’t renew its contract this time around. But aside from bearing the classic Nike logo, the company is taking jerseys to the next level with NikeConnect.


The new jerseys come with authentication tags powered by Near-Field Communication (NFC) that can be hooked up to your smartphone via the NikeConnect app. This gives fans access to exclusive content from their favorite team and players, such as videos, tickets, highlights, and GIFs directly from the NBA. Think of it as an NBA social network for your favorite team right at your fingertips.


To get access, all you have to do is scan the tag, and you’re in. As a bonus, when you buy a specific player’s jersey, you’ll get a “boost” code that makes that player better in the NBA2K18 video game. And, as you would expect, NikeConnect also wants to sell you stuff. The app will give you the chance to purchase limited edition products, like shoes and other gear picked especially for you depending on whose jersey you’re wearing.
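Under the hood, that scan is ordinary NFC: the tag carries an NDEF message the app reads when your phone taps it. Nike hasn’t published what its tags actually contain, but as a rough sketch of the mechanics, here is how a short-record NDEF text record (the simplest record type in the NDEF spec) is parsed from raw bytes:

```python
def parse_ndef_text_record(data: bytes) -> dict:
    """Parse a single short-record NDEF text record (TNF 0x01, type 'T')."""
    flags = data[0]
    if not flags & 0x10:              # SR bit: this sketch handles short records only
        raise ValueError("only short records supported in this sketch")
    type_len = data[1]
    payload_len = data[2]
    offset = 3
    id_len = 0
    if flags & 0x08:                  # IL bit: an ID length byte follows
        id_len = data[offset]
        offset += 1
    rec_type = data[offset:offset + type_len]
    offset += type_len + id_len       # skip past the type and optional ID fields
    payload = data[offset:offset + payload_len]
    if rec_type != b"T":
        raise ValueError("not an NDEF text record")
    status = payload[0]               # bit 7: UTF-16 flag; bits 0-5: language code length
    lang_len = status & 0x3F
    encoding = "utf-16" if status & 0x80 else "utf-8"
    return {
        "language": payload[1:1 + lang_len].decode("ascii"),
        "text": payload[1 + lang_len:].decode(encoding),
    }
```

For example, the bytes `b"\xd1\x01\x0aT\x02enClip #1"` decode to an English text record reading “Clip #1” (a made-up payload; a real jersey tag more likely holds a unique ID or URL the app sends to Nike’s servers).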


NikeConnect will also give Nike more information about consumers: who bought which player’s jersey, where they’re from, and where they scanned it. This information could then be relayed to the player, giving them the option to interact directly with fans who bought their jersey rather than sending out a message on social media that millions will see.


So what about price? You’d think with the new technology, Nike would hike up the costs, but the company says this was not the intent. The new jerseys run between $110 and $200, which is pretty average for a standard jersey. The most expensive ones are the authentic models, which are made from the same material NBA players wear.


NikeConnect could be used in other ways beneficial to the NBA, like fighting counterfeit merchandise. The NFL is currently doing something similar, using NFC tags to keep track of memorabilia. If this new technology proves successful, don’t be surprised if other sports pick up on it. Nike already has a deal with the NFL and with big soccer clubs like Paris Saint-Germain, Chelsea, F.C. Barcelona, and Juventus.


NBA Connected jerseys are available to purchase online and from retailers now. Just don’t expect to see your favorite player sporting these jerseys; they’re strictly meant for fans, at least for the time being.





Google’s new Pixel Buds go beyond your average headphones and will translate into 40 different languages in real time. Pixel Buds are more than meets the eye. (Photo from Google)


Babel Fish a reality?


Many thought Apple reinvented the earbud, or was just crazy, with its AirPods. But once again, Google has blown them out of the water with new earbuds for the upcoming Pixel 2. The company unveiled the updated phone along with the new headphones at the Pixel 2 event this week. While Google showed off a lot of updated tech, it was the new wireless headphones that stole the show.


What makes Pixel Buds different from your average Bluetooth headphones? The ability to translate between 40 different languages in real time. Once the headphones are paired with the smartphone, you tap on the right earpiece and issue a command to Google Assistant. Along with playing music, providing directions, and making calls, you can tell it to “help me speak Japanese” and start speaking in English. The phone’s speakers will play your translated words as you speak them, and the other person’s replies come back to you through the earbuds.
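That flow (capture on the earbud, translate on the phone, play back through the speaker) can be sketched as a simple pipeline. The `translate()` stub below is just a tiny phrase table standing in for Google’s actual translation service, which is not public at this level:

```python
# Placeholder phrase table; a real system would call a translation service.
PHRASES = {("en", "ja"): {"hello": "konnichiwa", "thank you": "arigatou"}}

def translate(text: str, src: str, dst: str) -> str:
    """Stand-in translator: look up the phrase, pass it through if unknown."""
    return PHRASES.get((src, dst), {}).get(text.lower(), text)

def converse(utterance: str, src: str = "en", dst: str = "ja") -> str:
    # 1. The earbud mic captures the utterance (here: already a string).
    # 2. The paired phone translates it.
    translated = translate(utterance, src, dst)
    # 3. The phone's speaker would play the translation aloud; we return it.
    return translated
```

Even this toy version makes the latency question obvious: every utterance makes a round trip through the phone, which is why real-world lag, background noise, and connectivity matter so much.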


Adam Champy, Google’s product manager, described it as having a personal translator everywhere you go. During the demonstration, there wasn’t any lag in translation. Of course, we’ll have to wait and see how this holds up in the real world, with background noise, weak Wi-Fi connections, and crosstalk.


Google uses a similar method in its Translate app. Once you activate the live mic, the app will listen to your sentence in English and translate what you just said into any of 40 languages. Other companies have similar technology, like Skype’s Live Translation feature, which works with four languages in spoken audio and 50 over IM. However, those translations aren’t necessarily real-time, since there’s a lag between when the original message is sent and when the translation arrives.


Google’s Pixel Buds sound truly amazing. Imagine the possibilities when traveling around the world. The headphones will let you hold a natural conversation. You won’t have to rely on translation websites, which are spotty at best, or carry around language dictionaries. No more awkward moments filled with hand gestures and cringe-worthy mispronunciations. They may even replace popular language-learning tools like Rosetta Stone.


Pixel Buds come out in November and will cost $159.


