

87 Posts authored by: Cabe Atwell    


A team of faculty members and students at the University of Washington has developed the first phone that can operate without a battery. The phone is built from commercially available components on a printed circuit board. (Photo via University of Washington; you can read the research paper here)


Communication is an essential part of life, and the telephone has arguably been the greatest innovation in connecting two remote locations. Ever since telephones went mobile, however, reliance on batteries has ranged from a minor inconvenience to a catastrophe. The phone developed by researchers at the University of Washington is a promising development in mobile communication, one that sidesteps the perfect storm of an emergency scenario and a dead cell phone. It harvests ambient power from surrounding radio signals, as well as from light via tiny photodiodes that capture it and convert it into an electrical current.


The user places a call by pressing capacitive touch buttons on the circuit board (laid out like a regular phone keypad). According to the research team’s video, the phone transmits digital packets back to the base station from which it draws power, and those packets combine to form a phone number that is dialed using Skype. Per the team’s research paper, in testing the phone harvested power from radio frequency signals transmitted by a base station 31 feet away and was able to place a Skype call to a base station 50 feet away. The team believes their recent innovation is “a major leap in the capability of battery-free devices and a step towards a fully functional battery-free cellphone.”


At this stage in its development, the battery-free phone prototype has limited functionality, but it consumes only about 3.5 microwatts of power, which ambient radio waves and light can sufficiently supply for the purposes of this research. In Jennifer Langston’s article for UW News, co-author and electrical engineering doctoral student Bryce Kellogg is quoted as saying, “...the amount of power you can actually gather from ambient radio or light is on the order of 1 or 10 microwatts. So real-time phone operations have been really hard to achieve without developing an entirely new approach to transmitting and receiving speech.”
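Those figures suggest a simple energy-budget check: the phone runs only if the harvested power meets or exceeds the draw. A minimal sketch is below; the RF and light harvest numbers are illustrative assumptions within the 1-10 microwatt range Kellogg cites, not measurements from the paper.

```python
# Back-of-the-envelope energy budget for the battery-free phone.
# The 3.5 uW draw is quoted in the article; the per-source harvest
# figures are assumptions chosen inside Kellogg's 1-10 uW range.

PHONE_DRAW_W = 3.5e-6       # prototype's quoted consumption
HARVEST_RF_W = 2.0e-6       # assumed ambient RF harvest
HARVEST_LIGHT_W = 2.0e-6    # assumed photodiode harvest

def power_margin(draw_w, *sources_w):
    """Surplus (positive) or deficit (negative) between harvest and draw."""
    return sum(sources_w) - draw_w

margin = power_margin(PHONE_DRAW_W, HARVEST_RF_W, HARVEST_LIGHT_W)
print(f"margin: {margin * 1e6:.1f} uW")  # positive -> can run continuously
```

Any negative margin means the phone must duty-cycle or fall silent, which is why a 3.5 µW budget against single-digit-microwatt harvesting is such a tight fit.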


According to Langston, the team plans to improve the operating range and encrypt conversations, and to try streaming video on a battery-free cell phone by adding a visual display built on low-power E-ink screens. That will obviously demand more power, and therefore a new approach to supplying it, given Kellogg’s estimates of the power available. As it stands, the University of Washington team has delivered an intriguing proof of concept, along with directions for future exploration and refinement; now the world must wait to see whether their invention sparks an even greater change in the culture of mobile communication.


The team’s research was funded by the National Science Foundation and Google Faculty Research Awards.


Watch the video below to see the team demonstrate the operation of their battery-free phone.




Have a story tip? Message me at: cabe(at)element14(dot)com


Google creates a neural network capable of multitasking, called MultiModal. A diagram of how Google’s new neural network works (Photo via Google)


My immediate thought… Neural Network Raspberry Pi?


Multitasking is something we do every day, whether or not we realize it. While some of us are better at it than others, we all have the capability. Neural networks, however, don’t. Normally they’re trained for one task, whether that’s adding animation to video games or translating languages. Give a network a second task, and it can no longer do its first job very well. Tech giant Google is looking to change this with its latest system, MultiModal.


Modeled after the human brain, the new system can handle eight tasks at once and pull them off fairly well. Among the tasks it can now perform: detecting objects in images, recognizing speech, translating between four pairs of languages, parsing grammar and syntax, and providing captions. The system did all of these tasks at the same time, which is impressive for a neural network.


So, how does it do it? The neural network from Google Brain, the company’s deep-learning team, is made up of subnetworks that specialize in certain tasks relating to audio, images, or text. It also has a shared model equipped with an encoder, an input/output mixer, and a decoder. From this, the system learned how to perform the eight tasks at the same time. During testing the system didn’t break any records and still made some errors, but its performance was consistently high. It achieved an image-recognition accuracy of 86 percent, only 9 percent worse than specialized algorithms, and it matched the abilities of the best algorithms in use five years ago.
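The shared-model idea can be sketched as a toy network: one shared encoder feeding several task-specific output heads. This is an illustration only; the task names and dimensions below are made up, and it is not Google’s MultiModal code.

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedTrunkModel:
    """Toy multi-task model: one shared encoder, per-task output heads."""
    def __init__(self, in_dim, hidden_dim, task_dims):
        # Shared weights are reused by every task; each head is task-specific.
        self.w_shared = rng.normal(size=(in_dim, hidden_dim))
        self.heads = {task: rng.normal(size=(hidden_dim, out_dim))
                      for task, out_dim in task_dims.items()}

    def forward(self, x, task):
        h = np.tanh(x @ self.w_shared)   # shared representation
        return h @ self.heads[task]      # task-specific output

# Hypothetical tasks and sizes, chosen only for illustration.
model = SharedTrunkModel(in_dim=16, hidden_dim=8,
                         task_dims={"caption": 10, "translate": 12, "parse": 4})
x = rng.normal(size=(1, 16))
print({task: model.forward(x, task).shape for task in model.heads})
```

Because every task backpropagates through the same trunk in a design like this, training data for one task can improve the shared representation used by the others, which is the cross-task transfer effect described below.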


While there’s still work to be done to improve the system, MultiModal is already showing its benefits. Normally, a deep-learning system needs large amounts of training data to complete its task. Google’s new system can instead learn from data gathered for a completely different task. For instance, the network’s ability to parse sentences for grammar improved when it was trained on a database of images, which has nothing to do with sentence parsing.


Not wanting to keep the system to itself, Google released the MultiModal code as part of its TensorFlow open-source project. Now other engineers can experiment with the neural network and see what they can get it to do. The company hopes sharing the source code will help facilitate quicker research to improve the neural network.


Have a story tip? Message me at: cabe(at)element14(dot)com


Verizon and Korea Telecom demoed the first ever hologram call using their 5G networks. (Image credit Korea Telecom)


Earlier this month, Verizon and Korea Telecom tested the first international hologram-based video call over their respective 5G networks. The call was demoed during a meeting between Verizon CEO Lowell McAdam and KT CEO Hwang Chang-kyu who discussed expanding their partnership to advance the 5th generation infrastructure.


Both companies have been gobbling up spectrum licenses in the 30 and 40GHz ranges to better meet the 5G standard’s throughput targets, which makes sense when you consider that hologram video calling requires massive bandwidth that 3G, 4G, and LTE cannot provide. Of course, you also need an infrastructure capable of delivering that spectrum, and as a result Verizon just dropped $1 billion in pocket change on fiber-optic cable from Corning. The company plans to unspool that cable in Boston and several other US cities over the next few years (2018-2020) as 5G takes hold.


As far as the numbers game goes, Verizon and KT aren’t the only communications companies spending big on millimeter-wave spectrum: AT&T recently bought out Straight Path Communications for $1.6 billion and grabbed FiberTower for an undisclosed amount; both had extensive licenses in the 28 and 39GHz bands. Another major holder of spectrum licenses is Dish Network, which shelled out $6.2 billion for licenses in the 600MHz band during the FCC’s Broadcast Incentive Auction held last week.


With all that money being dropped on spectrum licenses, we should eventually be able to do much more than make holo-calls, but this was an important first step: it showed that two separate 5G infrastructures could play well together, and the connection took only 10 minutes to set up rather than days. As far as the tech used in the demonstration goes, details are vague at best, but my guess is they employed millimeter-wave devices (perhaps the Snapdragon X50 5G modem?), as KT has been developing live hologram calling over the past several years.


KT also says that the technology can work on today’s mobile devices without issue and doesn’t require any specialized displays to function. So no, we won’t be getting Star Wars-like hologram calling anytime soon, but the demonstration was still impressive, and KT expects to implement trial services of their 5G network in 2018 for the PyeongChang Winter Olympics and then as commercial service in 2019.


What’s interesting about Verizon’s and KT’s endeavors is that there is currently no standard for 5G, just an outline of what the technology should entail from the NGMN (Next Generation Mobile Networks) Alliance; the alliance does state, however, that 5G should roll out to the commercial and business markets by 2020.


Have a story tip? Message me at: cabe(at)element14(dot)com


In it for the G: AT&T buys Straight Path for the increase in wave spectrum it needs to unleash 5G. (Image credit AT&T)


AT&T announced recently that it’s buying out Straight Path Communications to the tune of $1.6 billion in stock to grab the airwaves it needs to advance its 5G endeavor. Chief strategy officer (Technology and Operations) John Donovan made a rather bold statement earlier this year about AT&T’s roadmap to the 5G horizon, saying, “Our 5G Evolution plans will pave the way to the next-generation of higher speeds for customers. We’re not waiting until the final standards are set to lay the foundation for our evolution to 5G, we’re executing now.”


So what exactly does $1.6 billion (tax-free to boot) buy? 735 mmWave licenses in the 39GHz band and 133 in the 28GHz band, both considered the gold zone for 5G implementation. AT&T states that those licenses cover the entire US, making it easy to roll out future 5G technologies. As part of its 5G Evolution plan, the company collaborated with Nokia to demonstrate the feasibility of 5G technology by streaming DirecTV Now over mmWave hardware.


Of course, this isn’t AT&T’s first acquisition in the 5G realm: the company snagged 24 and 39GHz licenses from FiberTower back in February of this year, giving it about the same chunk of pie as Verizon, which has also been gobbling up telecommunications companies like the Cookie Monster with a pallet of Chips Ahoy!. Verizon’s recent acquisition of XO Communications cost it $1.8 billion and netted it a sizable share of the 28 and 39GHz spectrum.


It’s important to note that there is currently no 5G standard, only a footprint laid out by the NGMN (Next Generation Mobile Networks) Alliance, a group of telecom companies, research institutes, vendors, and manufacturers who gave us LTE, SAE, and WiMAX. The footprint they sketched out for the 5G standard is as follows:


    -Data rates of tens of megabits per second for tens of thousands of users.

    -Data rates of 100 megabits per second for metropolitan areas.

    -1Gb per second simultaneously to many workers on the same office floor.

    -Several hundreds of thousands of simultaneous connections for wireless sensors (IoT applications).

    -Spectral efficiency significantly enhanced compared to 4G.
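To see why targets like these push carriers toward wide mmWave bands, a rough capacity check helps. The worker count and spectral efficiency below are assumptions for illustration, not NGMN figures; only the 1 Gb/s per-user target comes from the outline above.

```python
# Rough sanity check: how much spectrum does "1 Gb/s to many workers
# on one office floor" imply for a single cell?

WORKERS = 50                     # assumed office floor occupancy
PER_USER_BPS = 1e9               # 1 Gb/s each, from the NGMN outline
SPECTRAL_EFF_BPS_PER_HZ = 30.0   # optimistic massive-MIMO cell assumption

aggregate_bps = WORKERS * PER_USER_BPS
needed_hz = aggregate_bps / SPECTRAL_EFF_BPS_PER_HZ
print(f"~{needed_hz / 1e9:.2f} GHz of spectrum needed")
```

Even under that generous spectral-efficiency assumption, the cell needs well over a gigahertz of contiguous spectrum, which simply doesn’t exist below 6GHz; hence the land grab in the 28 and 39GHz bands.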


Sounds great for those living in cities with office jobs, but not so much for those in rural areas. The alliance says it would also like to expand coverage to those areas at some point (see: never), perhaps over a satellite network.


Remember AT&T's Bogarting of iPhones when they first launched in 2007? Perhaps they'll share with the other networks. Otherwise, they can charge whatever they want, like with the iPhones back then. Those 300-page bills were just crazy.


Have a story tip? Message me at: cabe(at)element14(dot)com


Researchers from MIT and the University of Chicago are making denser chips with wires that partially build themselves. Faster technology requires better and faster microchips (Image via MIT)


As technology such as computers gets faster and better, it requires microchips that can keep up. The problem is that it’s becoming more difficult to create denser chips. Not only does added density make chips more fragile, but manufacturers also run into hard limitations, like the wavelength of the light used to create wire patterns. A team of researchers from MIT and the University of Chicago may have overcome this challenge with their new, self-assembling chip wiring.


This new method makes finer wires for chips by letting them partly build themselves, instead of relying on deliberate and slow ultraviolet or scanning processes. To make their chip, the team starts by using an electron beam to make patterns on a chip. From there, they apply a mix of two polymers, called a block copolymer, that separates into patterns naturally. The block copolymer contains chain-like molecules, each made of two different polymer materials connected end-to-end.


Once a protective polymer coating is deposited on top of the other polymers by initiated chemical vapor deposition (iCVD), it guides them to assemble vertically, yielding four wires where there would ordinarily be one. Each of the produced wires is a fourth as wide, resulting in finer lines. Since the top polymer layer can be patterned, the method can produce any kind of complex patterning needed for a chip’s interconnections.


These results show promise when compared to standard chipmaking methods. The standard approach not only relies on extreme ultraviolet light, but is also expensive and very slow, which isn’t effective when making chips at mass scale. The new method would cut down on both time and cost.


It might be a while before this method becomes the norm, but the researchers predict the transition should be easy. Current microchip manufacturers still using the lithographic method wouldn’t even have to change their machines: it’s as simple as adding the coating step to their existing process. This would allow them to make denser chips without changing their current technology. With this breakthrough, we don’t have to worry that technology is changing so fast that other parts can’t keep up.


Have a story tip? Message me at: cabe(at)element14(dot)com


SIG says the new spec can deliver robust and reliable IoT connections, making full-home and outdoor use a lot easier to implement.


Almost every new mobile device features it, but most of us never really think about it until there’s a connection issue. I am, of course, talking about Bluetooth, the wireless technology standard for exchanging data over short distances. It had been in its previous form, version 4.2, since December of 2014, and has now been officially superseded by version 5.0, according to the Bluetooth Special Interest Group.


The Group published a press release back in June detailing the new specs, which make v4.2 look like antiquated technology: quadruple the range at which devices can connect, double the data transfer speed, and eight times the broadcast data capacity. One thing that will not increase is power consumption; the new version uses the same low-power IP connectivity as its predecessor even though the core specs have grown.
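A quick way to visualize those multipliers is to apply them to nominal v4.2 LE baselines. The baseline figures below are commonly cited approximations chosen for illustration, not numbers from the SIG release.

```python
# Assumed Bluetooth 4.2 LE baselines (nominal, for illustration only);
# the 2x / 4x / 8x multipliers come from the SIG announcement.

BT42_PHY_RATE_MBPS = 1.0    # nominal LE data rate for v4.2
BT42_RANGE_M = 50.0         # rough line-of-sight range (assumed)
BT42_ADV_PAYLOAD_B = 31     # legacy advertising payload, bytes

bt5 = {
    "phy_rate_mbps": BT42_PHY_RATE_MBPS * 2,  # doubled transfer speed
    "range_m": BT42_RANGE_M * 4,              # quadrupled range
    "adv_payload_b": BT42_ADV_PAYLOAD_B * 8,  # ~8x broadcast capacity
}
print(bt5)
```

Note the trade-off hidden in these numbers: the 2x PHY rate and 4x range modes are alternatives, not simultaneous; the long-range mode trades throughput for coding gain.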


The new spec also greatly benefits low-powered IoT devices, especially where range and broadcasting capabilities are a problem, such as whole-home and outdoor deployments. In these cases, devices that broadcast and receive data, such as remote sensors and data collectors, would benefit immensely, as they typically run on small batteries that must provide power for weeks or months at a time.


Imagine, too, walking through a smart home and interacting with appliances, security systems, and lighting that wirelessly connect to a central beacon rather than multiple deployed nodes. Not only does that reduce the hardware needed, but it also saves on energy.

The spec features 2x the bandwidth, 4x the range, while sticking with the popular Low Energy of v4.2LE.


As it stands today, the SIG expects the new 5.0 standard to be adopted by tech companies within a 2 to 5-month period, which matches up with the latest mobile device revisions, including the iPhone 8 and Samsung Galaxy S8, set to hit the market in roughly the same timeframe. We will no doubt also see 5.0 incorporated into new SoCs, development boards, and add-on wireless modules, considering how much the technology benefits IoT devices. In fact, Nordic Semiconductor has already released a Preview Dev Kit that features the new technology.


Those looking for more information on the new Bluetooth 5.0 standard should check the Special Interest Group’s press release found here.


Have a story tip? Message me at: cabe(at)element14(dot)com


Researchers from IBM and ETH Zurich have developed a liquid battery that takes existing “flow” technology and applies it to small computer chips. Computer chips could be stacked in alternating layers of chips and flow batteries that would both power and cool them at the same time. (via IBM Research Zurich)


Heat is a byproduct of the work done by batteries, computers, and computer chips, and overheating is a problem usually tackled with fans and ventilation systems. Now, scientists from IBM and ETH Zurich are approaching heat regulation by using a liquid electrolyte system to both power and cool chips simultaneously. Flow batteries use two liquid electrolytes to provide energy through an electrochemical reaction that occurs as the liquids are pumped into the battery cell from outside through a closed electrolyte loop. Usually, flow batteries serve larger-scale stationary power systems, like wind and solar installations, because they can store energy in the two electrolyte liquids for a long time with minimal degradation; now the technology is being applied to computing. The team in Zurich has developed “miniaturized redox flow cells” that cool the computer chips using the same liquid electrolytes that power them.


The team in Zurich managed to find two liquids that work both as flow-battery electrolytes and as cooling agents that dissipate heat from the chips in the same circuit; according to an ETH Zurich doctoral student, they are “...the first scientists to build such a small flow battery so as to combine energy supply and cooling.” The battery has a measured output of 1.4 watts per square centimeter, which according to Fabio Bergamin of ETH Zurich News is a record high for its size. Even after accounting for the power required to pump the liquid electrolytes, the net power density is still 1 watt per square centimeter. The battery itself is only about 1.5 millimeters thick, so the plan is to assemble stacks of computer chips in alternating layers of chip and thin battery cell, which would provide electricity and at the same time cool the stack to prevent overheating.
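The pumping overhead implied by those two quoted figures can be worked out directly; this is just arithmetic on the article’s own numbers, not additional data from the paper.

```python
# Gross and net power density of the redox flow cell, as quoted in
# the article; the difference is the power spent pumping electrolytes.

GROSS_W_PER_CM2 = 1.4   # measured output quoted by the team
NET_W_PER_CM2 = 1.0     # net after subtracting pump power

pump_overhead = GROSS_W_PER_CM2 - NET_W_PER_CM2
overhead_fraction = pump_overhead / GROSS_W_PER_CM2
print(f"pumping consumes ~{overhead_fraction:.0%} of gross output")
```

That overhead is the key engineering constraint for the chip-stack idea: shrinking the channels enough to fit between chip layers tends to raise pumping losses, so keeping the net figure positive at 1.5 mm thickness is the notable result.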


At the moment, the electricity generated by the redox flow cells is too low to power a single computer chip; as Bergamin notes, the work must be optimized by industry partners before it can be used in a chip stack. The scientists note that the flow-battery approach has other potential applications, in things like lasers and solar cells, but above all the team has demonstrated that small flow batteries are a concept worth exploring.


The video provided below shows how flow batteries use liquid electrolytes on a large scale.



Have a story tip? Message me at: cabe(at)element14(dot)com


Russian artist Vtol used his own blood as a power source for his latest electric sound exhibit. Vtol draws his blood onstage to help power his creation (photograph via Vtol)


Blood is a life source, essential to our daily functions, but did you ever think it could power things other than our bodies? Russian artist Vtol (Dmitry Morozov) showed just how powerful blood is with one of his latest projects. Titled “Until I Die,” the piece is an electronic sound installation Vtol powers himself with eleven “blood batteries.” It uses his blood as an electrolyte, forming direct-current batteries when the blood reacts with metals like aluminum and copper. The blood powers an electronic synth module, which creates sound compositions played through a speaker.


To bring this creation to life, Vtol extracted and stored just under 1.2 gallons of blood over 18 months. It’s generally not good practice to store blood that long, so various manipulations had to be done to keep the blood’s color, chemical composition, homogeneity, and sterility intact. In the end he gathered about 4.5 liters of blood, which was then diluted to produce the 7 liters the installation needs to run properly. For an even more dramatic effect, the last bit of blood needed was drawn from Vtol’s arm during the performance. And you thought getting blood drawn at the doctor’s office was bad.


So why go through the trouble? Just for the sake of art? Not exactly. Vtol explains that the performance is a “symbolic act.” Since he can power this device with his blood, he sees it as an extension of himself; there is literally a part of him in this creation, and that’s what he wanted. And what better way to show just how powerful and vital blood is? Here is an installation showing exactly how blood works as an energy source. It’s something to think about the next time you hear about a local blood drive.


If you’re hoping to see this wild performance for yourself, you’re out of luck: the initial performance took place at the Kapelica Gallery in Ljubljana in December 2016. Luckily, documentation of the event recently surfaced online, and you can watch the mind-blowing performance here. Chances are you won’t be seeing phones and tablets powered by blood in the future, but the fact that someone powered a device with such a vital fluid changes the way you think about blood.



Have a story tip? Message me at: cabe(at)element14(dot)com


A pair of researchers from Columbia University and the New York Genome Center (NYGC) have found a way to code information using nature’s storage system: DNA. Yaniv Erlich and Dina Zielinski: the duo that worked on the DNA data storage technology. (image via New York Genome Center)


Deoxyribonucleic acid, or DNA, is the material that composes all humans and almost every other living organism. It contains the instructions for how we are to be assembled and maintained, coded using four chemical bases: Adenine (A), Thymine (T), Cytosine (C), and Guanine (G); A pairs with T, and C pairs with G. Each base, together with a phosphate molecule and a sugar molecule, forms what is called a nucleotide. DNA takes the form of a double helix, which looks somewhat like a twisted ladder: the chemical base pairings form the rungs, and the phosphate and sugar molecules form the strands that hold the rungs in place. This natural storage technology has been adapted for other information storage purposes and has so far been used to encode a $50 Amazon gift card, a Pioneer plaque, an 1895 French film, a computer virus, a 1948 study by information theorist Claude Shannon, and a full operating system.


The data from these files was split into strings of binary code (zeros and ones), and using an “erasure-correcting algorithm” of the kind known as a “fountain code,” the strings were randomly packaged into “droplets,” which were then encoded using the four nucleotide bases of DNA. Although DNA’s binary storage is theoretically limited to two binary digits per nucleotide, and practically limited to about 1.8, Erlich and Zielinski packed an average of 1.6 digits per nucleotide, still 60% more than any previously published method. The algorithm excluded letter combinations known to cause errors and attached a barcode to every droplet to help reassemble the files later using DNA sequencing technology.
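At its simplest, encoding binary data in DNA maps two bits to each base, which is where the theoretical two-digits-per-nucleotide limit comes from. A minimal sketch is below; the actual DNA Fountain scheme layers Luby-transform droplets, screening of error-prone sequences, and per-droplet barcodes on top of this, none of which is reproduced here.

```python
# Minimal two-bits-per-base mapping (illustration only; not the
# DNA Fountain algorithm, which adds droplets, screening, barcodes).

BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(bits: str) -> str:
    """Turn a binary string (even length) into a nucleotide strand."""
    assert len(bits) % 2 == 0
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> str:
    """Recover the binary string from a strand."""
    return "".join(BITS_FOR_BASE[base] for base in strand)

strand = encode("0100111000")   # 10 bits -> 5 bases
print(strand)
assert decode(strand) == "0100111000"
```

The gap between this naive 2.0 digits per base and the 1.6 Erlich and Zielinski achieve is the price of the error-avoidance screening and droplet barcodes that make real synthesis and sequencing reliable.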


What’s more, this form of coding, storage, and retrieval is extremely reliable. In total, 72,000 DNA strands, each 200 bases long, were generated and sent as a text file to Twist Bioscience, a San Francisco DNA-synthesis startup that specializes in transforming digital data into biological data. After two weeks, Erlich and Zielinski received a vial with the freshly coded DNA molecules, and ultimately the files were recovered without a single error. This technology is important not only for its compactness but also for its ease of replication and resistance to degradation. Unfortunately, it is an expensive process and therefore might not replace current data storage methods just yet, but it is definitely a promising leap in information storage technology.
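The figures above give a rough upper bound on how much data that synthesis order could carry; since part of each strand is barcode and indexing overhead, the actual payload was somewhat smaller than this estimate.

```python
# Rough upper bound on the payload of the synthesis order described
# in the article, using its own figures.

STRANDS = 72_000
BASES_PER_STRAND = 200
BITS_PER_BASE = 1.6   # average density reported by Erlich and Zielinski

total_bits = STRANDS * BASES_PER_STRAND * BITS_PER_BASE
print(f"~{total_bits / 8 / 1e6:.2f} MB upper bound")
```

A couple of megabytes in a vial sounds modest until you recall that the vial weighs almost nothing and the encoding is copyable by PCR; density, not per-order capacity, is the selling point.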


Have a story tip? Message me at: cabe(at)element14(dot)com


Hasbro introduces a new Disney doll that lets you program her dance routines with a companion app. Parents will be glad to know that this doll can sing, dance, and say over 100 phrases (Photo via Hasbro)


With a live-action remake of the Disney classic Beauty and the Beast on the way, you can expect a new line of toys to come with it. Hasbro revealed a new Belle doll to tie in with the film ahead of Toy Fair 2017. She talks, moves, and dances all on her own, which sets her apart from all the other Belle dolls. But she does something else, too: she teaches your kids how to code. In another attempt to tap the code-learning craze, Hasbro’s newest doll lets kids create their own dance routines for Belle using a basic programming app. While creating the dances, kids also pick up the basics of coding.


The doll is meant to appeal to all ages. A connect-the-dots mode for younger kids lets them create dance patterns by dragging a finger across the screen; pressing the various shapes that appear adds extra pizzazz to the routine. Older kids can use the more advanced block-coding mode, where dance routines are built manually by dragging and dropping moves and commands into a long sequence. Once a routine is done, it can be synced to the battery-powered doll over a Bluetooth connection.


As an added bonus, Belle can say over 100 different phrases and even sings four songs from the original movie, like “Be Our Guest.” The doll officially arrives in the fall, right in time for the holiday season, and will run you $120. This is not a doll you want the kids to ruin or tire of after only two days.


All things considered, the doll sounds pretty cool, but will it actually get kids interested in coding? That remains to be seen. Many people believe the future of the job market relies on programming, so it’s understandable to want to foster these skills at a young age. But it could also discourage kids, especially those with no long-term interest in programming; the trend of apps, toys, websites, and so on that want to teach kids coding may burn them out in the end. How many of you were forced to learn a skill as a kid? Did you enjoy it and continue practicing it? Probably not. What’s wrong with regular toys that let kids be imaginative? On the other hand, it could play a role in encouraging girls to get interested in STEM (Science, Technology, Engineering, and Math) fields, which is always a good thing.


Have a story tip? Message me at: cabe(at)element14(dot)com


Researchers from the University of Maryland and IBM have pitted their quantum computers against each other to determine which is the superior technology. An IBM quantum computer chip. (via MIT Technology Review)


Quantum physics refers to the laws that govern and explain the behavior of quantum particles (the smallest possible discrete objects), and this branch of theory allows a particle to exist in two physical states simultaneously (i.e., particle and wave). Essentially, quantum computers are to traditional computers what quantum physics is to classical physics. Whereas traditional computers use binary systems, coding bits as either zeros or ones, quantum computers use quantum bits, or qubits, which can assume “superpositions” of both 0 and 1 simultaneously. According to Gabriel Popkin of Science, it is also possible to “join the superposition states of many qubits,” which gives “[quantum computers] potential calculating power that grows exponentially with every added bit.” Qubit states are fragile, though: small external disturbances can cause a superposition to collapse into either a 0 or a 1. The quantum computing technologies of IBM and the University of Maryland researchers are both still in their infancy, but each presents a promising, unique approach to a burgeoning field with potentially very wide practical benefits.
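The superposition-and-collapse idea can be sketched with a toy single-qubit state vector; this illustrates the math only, not either lab’s hardware.

```python
import numpy as np

# A qubit is a pair of complex amplitudes for the |0> and |1> states.
# Measurement probabilities are the squared magnitudes, and measuring
# "collapses" the superposition to one basis state.

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

superposition = (ket0 + ket1) / np.sqrt(2)  # equal superposition of 0 and 1
probs = np.abs(superposition) ** 2          # Born rule: |amplitude|^2
print(probs)                                # 50/50 chance of 0 or 1

rng = np.random.default_rng(0)
outcome = rng.choice([0, 1], p=probs)       # simulated measurement collapse
```

Joining the superpositions of n qubits means the state vector has 2^n amplitudes, which is the exponential growth in calculating power Popkin describes and also why small disturbances are so damaging.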


The quantum computer built by researchers at the University of Maryland is built around five ytterbium ions held in an electromagnetic trap and manipulated by lasers; IBM’s quantum computer, on the other hand, works through five small loops of superconducting metal manipulated by microwave signals. IBM’s device is also the only quantum computer that users can program online through a cloud system, rather than exclusively by scientists in a lab.


This technological faceoff marks the first time two different quantum computing technologies have been compared in an “algorithm-crunching” exercise, but the victor remains somewhat unclear. A set of standard algorithms was run on each device, and the outputs were compared to test the computers’ performance. IBM’s quantum computer was faster but less accurate than Maryland’s: one test found Maryland’s computer 77.1 percent accurate versus IBM’s 35.1 percent, but IBM’s was up to 1,000 times faster. Therein lies the ambiguity. There is no need for a champion, though, because, according to Popkin, “both labs are already working on more reliable next-generation devices with more qubits.” When it comes to advancing quantum computing, like many other things in life, there is no time like the present.


Have a story tip? Message me at: cabe(at)element14(dot)com


The Freestyle uses a black LCD screen, a stylus, and knobs that act as stamps in the updated version. The new Etch-a-Sketch, dubbed the Freestyle (via Spin Master)


My immediate reaction to this toy was, "how could I do this with a Raspberry Pi?"


No matter how old you are, you’ve played with an Etch-a-Sketch at some point in your life. The toy has been a staple of childhood since it was first introduced in the late 50s, but for years the design has stayed the same: red plastic frame, white knobs, and aluminum powder. Now the toy is getting an upgrade to compete with today’s smart toys. The updated Etch-a-Sketch by Spin Master replaces the aluminum powder with a black LCD screen, and instead of turning small knobs to draw, you’ll use a stylus. Don’t worry, though; you still erase your mistakes the same way.


Dubbed the Freestyle, the board has a similar look to the Boogie Board LCD writer, which is no coincidence: Spin Master teamed up with Boogie Board to create the new design, and it even uses the same technology as other Boogie Board products like Magic Sketch and Play N’ Trace. Though you no longer use knobs to draw, the iconic white buttons are still there; they are now rubber stamps that can add marks like stars and circles to the screen. And no more drawing in drab black and white: your creations will pop in vibrant rainbow colors.


Though the Freestyle isn’t out yet, some purists aren’t very happy with the new design. The Verge called it “half the fun of the classic…with none of the effort.” Admittedly, it’s strange to see the updated toy, especially when the original design has been around for so long. People who grew up with the original will most likely scoff at the Freestyle. But, Etch-a-Sketch has to keep up. Kids’ time is often spent in front of a screen, whether it’s a phone or a tablet. Today’s kids may find the new design more engaging than the old school style.


Freestyle drops this fall and will only cost you $20. Don’t worry purists, Spin Master isn’t getting rid of the classic design. The company will still sell the toy we all know and love. Anyone else feel like picking up an Etch-a-Sketch now?
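Circling back to my Raspberry Pi musing: the heart of the classic toy is just two rotary inputs moving a cursor that leaves a trail, plus a shake to erase. Here’s a minimal sketch of that logic in plain Python — the knob input is simulated here; on a real Pi you’d wire rotary encoders to GPIO pins and drive a display instead:

```python
# Minimal Etch-a-Sketch-style drawing logic: two "knobs" move a cursor
# horizontally and vertically, leaving a trail of "pixels" behind it.
# Knob clicks are simulated; on a Raspberry Pi you'd read rotary encoders.

WIDTH, HEIGHT = 20, 10

class EtchASketch:
    def __init__(self):
        self.x, self.y = 0, 0
        self.screen = [[" "] * WIDTH for _ in range(HEIGHT)]
        self.screen[self.y][self.x] = "#"

    def turn(self, dx=0, dy=0):
        """Apply one knob 'click': move the cursor (clamped) and draw."""
        self.x = max(0, min(WIDTH - 1, self.x + dx))
        self.y = max(0, min(HEIGHT - 1, self.y + dy))
        self.screen[self.y][self.x] = "#"

    def shake(self):
        """Erase everything, just like flipping the toy upside down."""
        self.__init__()

    def render(self):
        return "\n".join("".join(row) for row in self.screen)

sketch = EtchASketch()
for _ in range(5):
    sketch.turn(dx=1)   # right knob: five clicks to the right
for _ in range(3):
    sketch.turn(dy=1)   # left knob: three clicks down
print(sketch.render())
```

Swap the simulated `turn()` calls for encoder interrupts and you’re most of the way to a DIY Freestyle.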


Have a story tip? Message me at: cabe(at)element14(dot)com


Researchers at Eindhoven University used a DNA computer to create a pill that looks at how sick you are and doles out the proper amount of medicine. An illustration of what the “smart” pill would look like. (via Eindhoven University)


When you’re not feeling well, you often turn to medicine, whether it’s over-the-counter drugs or something prescribed by a doctor. However, you don’t always need meds to feel better, and when you do take them, how do you know when enough’s enough? Researchers at Eindhoven University of Technology (TU/e) have made a breakthrough in medicine. They’ve developed a “smart” pill that can assess your state of health and dole out the proper amount of chemicals.


Medicine is what we turn to first for our different ailments, but it’s not always recommended. It’s not easy to determine when you should or shouldn’t rely on medication for relief. Though meds come with directions about when and how to take them, it’s easy to ignore instructions, especially when the only thing on your mind is feeling better. This could lead to unwanted side effects and ultimately waste the medication you spent a lot of money on. The idea behind this new “smart” pill is to have it release specific amounts of medication based on your needs.


The team, led by Maarten Merkx, developed this new method using a DNA computer to gather data. The computer looks for molecules it can react with, which allows the researchers to program the correct reaction circuits. The system also detects specific antibodies to help determine how ill someone is: measuring the concentration of certain antibodies helps determine whether or not someone has a specific disease.


Once the antibodies are identified, they are translated into a unique piece of DNA, which lets the DNA computer decide whether medicine is necessary depending on the presence of one or more antibodies. It can also help determine how much medicine is needed if you do require treatment. Not only is this a breakthrough for medicine, it also sets a new record: the team is the first to successfully link the presence of antibodies to a DNA computer.
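As I understand it, the decision step amounts to a logic circuit over antibody concentrations. A toy sketch of that idea in Python — the antibody names, thresholds, and doses below are entirely made up for illustration, and the real system computes this with DNA reactions, not software:

```python
# Toy model of the "smart" pill's decision logic: check which antibodies
# exceed a concentration threshold, then pick a dose accordingly.
# All names, thresholds, and doses here are invented for illustration only.

THRESHOLDS = {"antibody_A": 0.5, "antibody_B": 0.8}  # arbitrary units

def detected(sample, name):
    """An antibody counts as 'present' above its threshold."""
    return sample.get(name, 0.0) >= THRESHOLDS[name]

def decide_dose(sample):
    """AND-style circuit: release medicine only if both markers are
    present, scaling the (fictional) dose with the stronger signal."""
    if detected(sample, "antibody_A") and detected(sample, "antibody_B"):
        return round(10 * max(sample["antibody_A"], sample["antibody_B"]), 1)
    return 0.0  # no release

print(decide_dose({"antibody_A": 0.9, "antibody_B": 1.2}))  # both present
print(decide_dose({"antibody_A": 0.9, "antibody_B": 0.1}))  # one missing: 0.0
```

The point is the shape of the logic — presence of markers gating and scaling the release — not the numbers.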


Ideally, the DNA computer would gather this information from a pill you take just like any other. From there, it will determine how much medication, if any, needs to be released. Though the “smart” pill is still in its early phase, it shows great potential for intelligent medicine. Imagine being able to have exactly the right amount of a drug in your system. It reduces the risk of overdosing and makes sure you’re not taking drugs when you don’t need them. With further research and testing, the team hopes this new method will be able to lessen the side effects that usually come with medication and reduce its cost in the future. In a society with a tendency to be overmedicated, this “smart” pill can help us be healthier and safer.


Have a story tip? Message me at: cabe(at)element14(dot)com


Psychometric profiling mines big data from social media platforms to create advertising tailored to the personality traits of select people. A screenshot of Cambridge Analytica’s Data Dashboard tool, which provides demographic data based on the OCEAN personality model to political campaign workers (via


This sort of data analysis always freaks me out.


You’ve probably noticed that the ads which pop up on your browser and Facebook feed are highly relevant to you and often feature products you’ve purchased in the past. You may already know that this is because of your digital footprint: the trace you leave when you visit a web page or use your credit card to buy something. How does the internet know this about you? The answer lies in big data and the world of statistical programming. Statistical programming is a way to mine extremely large amounts of data for predictive modelling. Computer programs use complicated mathematics to analyze volumes of data too big for the human mind.


Predicting climate and weather patterns is one form of statistical modelling. An enormous amount of data on temperature, humidity, and wind, among other variables, is analyzed by computer programs which then generate predictions of future climate patterns. Another form, focused on marketing, has recently been developed using data from Facebook.


Begun in 2008 by then-doctoral student Michal Kosinski while at Cambridge University, the project aimed to measure anyone’s personality according to the five traits psychologists term OCEAN: openness, conscientiousness, extraversion, agreeableness, and neuroticism. In other words: how much you enjoy new things, how organized and diligent you are, how much you like to spend time with others, how much you care about other people’s needs, and what kind of anxious tendencies you have. These traits are remarkably accurate in how they can predict behavior. What Kosinski did was figure out a way to assess someone’s OCEAN profile based entirely on their Facebook activity. He started by sending out questionnaires to friends.


The results were then compared with their Facebook activity: what they liked, posted, and shared. As Facebook grew, so did the pool of questionnaires and profiles. Very strong correlations between respondents’ questionnaires and their Facebook activity emerged. Gay men are more likely to ‘like’ the cosmetic line MAC. Straight men are more likely to ‘like’ Wu-Tang Clan. By 2012, Kosinski’s team was able to predict age, skin color, religious and political affiliation, and many other traits, from 68 likes on Facebook.
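Kosinski’s actual models are regressions trained on millions of questionnaire-and-profile pairs, but the underlying idea can be caricatured in a few lines. In this deliberately crude sketch, every page like nudges a trait score up or down — the page names and weights here are invented purely for illustration:

```python
# Crude sketch of like-based trait scoring: each page like nudges a trait
# score up or down by a learned weight. Weights here are made up; the real
# models were regressions fit to millions of questionnaire/profile pairs.

# Hypothetical learned weights: page -> contribution to an 'openness' score
OPENNESS_WEIGHTS = {
    "modern_art_museum": 0.5,
    "sci_fi_novels": 0.25,
    "monster_trucks": -0.25,
}

def openness_score(likes):
    """Sum the weights of the pages a user liked (unknown pages count 0)."""
    return sum(OPENNESS_WEIGHTS.get(page, 0.0) for page in likes)

print(openness_score(["modern_art_museum", "sci_fi_novels"]))  # 0.75
print(openness_score(["monster_trucks", "unknown_page"]))      # -0.25
```

With enough likes per user and enough users to fit the weights against, even a simple linear score like this becomes a surprisingly sharp predictor — which is exactly the finding that made the research valuable to marketers.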


Well, so what? It turns out that you can do a lot with this information, as Kosinski’s team discovered when they were approached by private firm Cambridge Analytica with an offer to purchase usage rights of the research. Cambridge Analytica designed models for engaging with different OCEAN types and developed marketing to appeal to someone based on those traits.


Guess who hired Cambridge Analytica for targeted marketing? Both the Brexit and Trump campaigns. While Kosinski claims that it’s impossible to know how much his research affected election outcomes, one thing is certain: there’s going to be a lot more targeted marketing in the coming years.


Have a story tip? Message me at: cabe(at)element14(dot)com


Tokyo’s 2020 Olympics committee wants the public to donate old gadgets so it can extract their metals and create medals for the 2020 games. The medals for the upcoming Olympic Games will be made out of old gadgets. (Photo via Tokyo 2020 Olympics)


Tokyo’s 2020 Olympics committee has something special up its sleeve: making medals out of old gadgets. To involve the community and promote recycling, the committee is asking the public to turn in unused or forgotten gadgets, like old smartphones. These items and other household appliances contain small traces of the materials generally needed to make the medals. Rather than relying on mining companies, Tokyo wants to give people’s unwanted gadgets a new purpose. Saying that your old toaster went into making a gold medal is a pretty high honor.


The planning committee teamed up with partner companies NTT DOCOMO and the Japan Environmental Sanitation Center (JESC) for the program. Starting in April, public offices and over 2,400 NTT DOCOMO stores will have collection boxes where people can drop off their unwanted items. The goal is to collect eight tons of metal, which will yield about two tons after processing: the total amount needed to make 5,000 medals for the Olympic and Paralympic games. Once they have the eight tons, the collection will come to an end.
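The committee’s numbers invite some back-of-the-envelope arithmetic: two tons of usable metal spread across 5,000 medals averages out to 400 grams of material per medal, and the 4:1 ratio between collected and usable metal shows how little of each gadget is actually recoverable. A quick check of those figures (assuming metric tons, which the announcement doesn’t specify):

```python
# Back-of-the-envelope check of the Tokyo 2020 recycling figures
# (assuming metric tons; the announcement doesn't specify the unit).

collected_tons = 8   # target collected from donated gadgets
usable_tons = 2      # metal expected to remain after processing
medals = 5000        # Olympic + Paralympic medals combined

yield_ratio = usable_tons / collected_tons           # fraction that survives
grams_per_medal = usable_tons * 1_000_000 / medals   # tons -> grams, per medal

print(f"Usable yield: {yield_ratio:.0%}")            # 25% of collected metal
print(f"Metal per medal: {grams_per_medal:.0f} g")   # 400 g on average
```

That 400 g average is in the right ballpark for a modern Olympic medal, so the targets hang together.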


This effort not only lets the community get involved but also directly responds to Recommendation 4 of the Olympic Agenda 2020, which aims to integrate sustainability into the planning and execution of the games. Many Olympic athletes spoke positively about the collection, saying it makes the medals that much more special. Gymnast Kohei Uchimura believes it is wasteful to “discard devices every time there is a technological advance” and thinks this is a great way to reduce that waste. Decathlete Ashton Eaton believes the medals from the collected items will represent the “weight of a nation.”


Making medals out of discarded objects is a novel way to recycle them. Many people don’t know what to do with their old phones and computers and settle for stuffing them in a junk drawer or leaving them in the dump. Perhaps this new effort will inspire further projects that tackle recycling in a similar way.


The Olympic 2020 planning committee isn’t the first to extract metals from these devices. Last year, tech giant Apple revealed it managed to collect 2,204 pounds of gold from broken iPhones in 2015. Apple promotes various recycling programs, including the popular Apple Renew, which lets you recycle any Apple device at their stores. The company collected over 90 million pounds of e-waste, 61 million of which were reusable materials. The company then uses many of these extracted materials for its own products.


Wish they would release a potential prototype picture.


Have a story tip? Message me at: cabe(at)element14(dot)com
