SLAC’s accelerator on a chip could match the power of conventional accelerators in a tabletop package. (Image credit SLAC)

 

When it comes to particle accelerators, the first image that comes to mind is CERN's Large Hadron Collider, which happens to be the largest, most powerful accelerator on the planet. Alternatively, you might envision the Fermi National Accelerator Laboratory (Fermilab) or its now-defunct Tevatron (both on the same site in Illinois). Whichever you picture, they all have one thing in common: rings and tunnels spanning miles, built to push particles to incredible speeds. But what if scientists could do the same thing with a machine only a hundred feet or so long? That's exactly what scientists at SLAC National Accelerator Laboratory have been developing for a few years now.

 

In what they call an "advance that could dramatically shrink particle accelerators for new breakthroughs in science and medicine," SLAC scientists used a conventional laser and a glass chip no bigger than a grain of rice to accelerate electrons at a rate 10 times higher than traditional accelerators. They claim that at its full potential, the new technology (dubbed an accelerator on a chip) could match the power of their 2-mile-long linac in just 100 feet while delivering a million more electron pulses per second.
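
A quick back-of-envelope check puts that claim in perspective. The sketch below uses only the figures quoted above: matching a 2-mile machine in 100 feet means raising the average accelerating gradient by roughly a factor of 100, so the demonstrated 10x improvement is the first of two orders of magnitude required.

```python
# Back-of-envelope check using only the figures quoted in the article.
MILE_FT = 5280

linac_length_ft = 2 * MILE_FT   # SLAC's conventional 2-mile linac
chip_machine_ft = 100           # claimed footprint of a chip-based equivalent

# Reaching the same beam energy in a shorter machine requires the average
# accelerating gradient to rise by the same factor the length shrinks.
needed = linac_length_ft / chip_machine_ft
print(f"Gradient increase needed: ~{needed:.0f}x")               # ~106x

demonstrated = 10               # the 10x acceleration rate reported in the demo
print(f"Remaining gap to close: ~{needed / demonstrated:.0f}x")  # ~11x
```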

 

 

SLAC’s nanofabricated silica-based chips are the key to accelerating electrons at higher rates. (Image credit SLAC)

 

To function at such small scales, the scientists first push the electrons to near light speed using a conventional accelerator (this is the caveat). Once accelerated, the particles are focused into a small half-micron channel within a half-millimeter-long silica chip patterned with equally spaced nanoscale ridges. Infrared laser light pulsed on those ridges generates an electrical field that interacts with the electrons, boosting their energy.

 

Using a conventional accelerator to get the particles up to speed is the platform's only real drawback. However, scientists are currently looking at ways to overcome this obstacle and introduce true tabletop accelerators.

 

 

 

As for the tiny accelerator's applications, SLAC scientists envision endeavors that go beyond physics research, stating their laser accelerators could be used for medical purposes, including small, portable X-ray devices for people injured in combat or for medical imaging in hospitals. They could also be used for security, with the ability to X-ray everything from humans to luggage at a much faster rate than conventional scanners. New biological and materials science research could benefit from the technology as well, so it will be interesting to see what developments the accelerator on a chip uncovers in the near future.

 

C

See more news at:

http://twitter.com/Cabe_Atwell

Meet the Flying COW, AT&T's new drone that is helping to provide temporary connectivity to people in Puerto Rico. (Photo from AT&T)

 

It's always heartening when major events spur innovation. Or maybe it's how rapidly that innovation gets to where it needs to be. I'm glad so many are helping; Puerto Rico needs all the help it can get. (Here is another such innovation.)

 

Hurricane Maria is one of the most devastating hurricanes Puerto Rico has ever experienced. Months after the disaster, much of the island is still in the dark without electricity or communications. Companies have been doing their part to provide temporary service, with efforts like Project Loon. Now, AT&T is getting on board with its Flying COW (Cell on Wings).

 

The Flying COW is a small helicopter drone that provides wireless connectivity to people within a 40-square-mile area. Flying 200 feet above the ground, it can extend coverage farther than other temporary cell sites. According to AT&T, this is the first time a device like this has been successfully deployed. One drone can provide coverage to up to 8,000 people at the same time, depending on the equipment and the stability of the network.
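
For a sense of scale, a 40-square-mile footprint works out to a coverage radius of roughly 3.6 miles, assuming an idealized circular cell (a simplification; real coverage depends on terrain and antenna pattern):

```python
import math

coverage_sq_mi = 40                              # AT&T's quoted coverage area
radius_mi = math.sqrt(coverage_sq_mi / math.pi)  # area = pi * r^2
print(f"Equivalent coverage radius: ~{radius_mi:.1f} miles")  # ~3.6 miles
```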

 

Right now, the drone is flying around the San Juan area. The company plans to relocate it to other areas that need support in the coming days, including the military hospital at the Manatí Coliseum. In the meantime, the company is also hard at work permanently restoring its network and providing additional assets in other impacted areas. It reports that roughly 70% of the population in Puerto Rico and nearly 95% in the US Virgin Islands are now connected.

 

AT&T is already thinking about how the Flying COW can go beyond this devastating event. The company believes LTE-connected drones could be very useful for FirstNet subscribers and plans to explore how else the technology can be used and how first responders might use the drone in the future. There's only one Flying COW in operation right now, but there are plans to add more drone models to the fleet.

 

This is only one of the temporary solutions AT&T is offering to help restore Puerto Rico. It's also using portable satellite units that sit at the base of cell towers in areas where the fiber lines connected to the towers haven't been repaired, and it collaborated with Alphabet on a balloon-powered network that provides emergency cell service to the island.

 

With so many companies doing their part to restore power and provide temporary solutions for Puerto Rico, hopefully the island will be back on its feet soon.

 

 

C

See more news at:

http://twitter.com/Cabe_Atwell

DeepMind’s AlphaGo Zero can now beat the best Go players without any training or help from human players. This AI program is on top of its game. (Photo from DeepMind)

 

Google's AlphaGo turned heads last year when the DeepMind AI beat Go world champion Ke Jie, making it the world's best Go player. Before that, it had beaten two of the game's biggest champions in under a year. Now the AI is even more advanced with its latest iteration, AlphaGo Zero. So, what makes this version different from its predecessors? It can defeat the best Go players without any help from humans.

 

Previous versions of AlphaGo learned how to play by training on thousands of games played by champions. Once it learned the game, the program played against different versions of itself to learn from its mistakes and figure out what it takes to win. AlphaGo Zero skipped the human games entirely and learned by playing against itself millions of times over, improving through reinforcement learning: if it made a good move, it was rewarded; if it made a bad move, it got closer to losing.

 

After playing roughly five million games against itself, the updated AI program could defeat human players and the original AlphaGo. After 40 days, it even reigned supreme over AlphaGo Master. The program depends on a group of software neurons connected together to form an artificial neural network. On each turn, the network examines the positions of the pieces on the Go board, determines which moves might be played next, and estimates their chances of leading to a win. The network updates itself after every game, making it stronger for the next match.
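
For readers curious what "learning purely from self-play" looks like in code, here is a minimal sketch in Python. It is a toy stand-in, not DeepMind's method (AlphaGo Zero couples a deep network with Monte Carlo tree search); the game, the tabular values, and all parameters here are illustrative assumptions:

```python
import random

# Toy self-play reinforcement learning on a tiny Nim-style game:
# 7 stones, each player takes 1 or 2 per turn, taking the last stone wins.
N, ALPHA, EPS, EPISODES = 7, 0.1, 0.2, 20000
Q = {s: {a: 0.0 for a in (1, 2) if a <= s} for s in range(1, N + 1)}

def pick(s):
    if random.random() < EPS:           # explore occasionally
        return random.choice(list(Q[s]))
    return max(Q[s], key=Q[s].get)      # otherwise play the best-known move

for _ in range(EPISODES):
    s = N
    while s > 0:
        a = pick(s)
        s_next = s - a
        # Reward +1 for taking the last stone; otherwise a position is worth
        # minus whatever the opponent can extract from it.
        target = 1.0 if s_next == 0 else -max(Q[s_next].values())
        Q[s][a] += ALPHA * (target - Q[s][a])   # nudge the estimate toward the target
        s = s_next

# The learned policy leaves the opponent a multiple of 3 (the known winning
# strategy): from 7 stones, take 1.
print({s: max(Q[s], key=Q[s].get) for s in Q})
```

The same loop of "play yourself, score the outcome, update toward better moves" is what AlphaGo Zero runs, just with a 19x19 board, a deep network instead of a lookup table, and tree search guiding each move.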

 

Along with being more advanced, AlphaGo Zero is a simpler program. It learned the game faster even though it trained on less data and runs on a smaller computer. While the program is only capable of kicking some major butt in Go, its creators see it as a milestone for general-purpose AIs. Most AIs can't do much beyond one specific task, like recognizing faces or translating languages. DeepMind believes that with more work, AlphaGo Zero could help solve a number of real-world problems. Currently, the program is working out how proteins fold, which could greatly improve drug discovery.

 

But it'll be a while before we see AlphaGo Zero apply itself to other issues. For now, it'll continue to be the world's best Go player. The human champions will need to step up their game if they want to keep up.

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell

The Kano Computer Kit Complete is a "laptop" kids build themselves while learning how to program popular games. Kano gives you everything you need to start programming. (Photo via Kano)

 

Adding to the saturated market of devices, games, and sites devoted to teaching kids how to program comes the Computer Kit Complete from Kano. The company's latest device is similar to a portable computer, with the screen and keyboard separated from each other. The components are housed inside the display unit, which comes equipped with a Raspberry Pi 3 board running the custom Kano OS software. The device is also packed with kid-friendly activities and apps like YouTube and WhatsApp.

 

Unlike Kano's previous computer, this one comes with just about everything you need to get started: a battery, an 8GB memory card, a build-it-yourself speaker, three USB ports, and a Sound Sensor that connects via USB. Similar to Kano's Motion Sensor, it can be used to trigger and manipulate code.

 

Once the computer is actually built, kids can learn how to make games like Snake and Pong. They can even manipulate code for the popular game Minecraft and program their own music. Kids are taught via a block-based programming language that's meant to be easy to understand, and progress through the tutorials is rewarded.

 

Other teaching tools Kano includes are a Story Mode that teaches how computers work, a drag-and-drop coding app, and step-by-step coding challenges. The challenges can be used to teach kids JavaScript, Python, and Unix commands. If you have something you want to share, your projects and code can be posted to the Kano World community.

 

So does Kano offer something different from the countless other kid-friendly devices? It doesn't seem like it. It's another computer that promises to make coding easy and fun for kids. There's nothing wrong with encouraging this skill and making it accessible, but it feels like similar devices are released every month. It's safe to say there will be even more programming devices to choose from once the holidays roll around.

 

The Computer Kit Complete costs $250 and will be available to purchase November 1, just in time for the holiday season.

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell


A team of faculty members and students at the University of Washington has developed the first phone that can operate without a battery to power its functions. The phone is made with commercially available components on a printed circuit board. (Photo via University of Washington; you can read the research paper here)

 

Communication is an essential part of life, and the telephone has likely been the greatest innovation in enabling communication between two remote locations. But ever since telephones went mobile, reliance on batteries has ranged from a minor inconvenience to a catastrophe. The phone developed by researchers at the University of Washington is a promising development in mobile communication, one that navigates around the potential perfect storm of an emergency and a dead cell phone. It draws ambient power from surrounding radio signals, as well as from light, via tiny photodiodes that capture light and convert it into an electrical current.

 

The user places a call by pressing capacitive touch buttons on the circuit board (laid out like a regular phone keypad). According to the research team's video, the phone transmits digital packets back to the base station from which it draws power, and those packets combine to form a phone number that is dialed using Skype. According to the team's research paper, in testing, the phone picked up power from radio frequency signals transmitted by a base station 31 feet away and was able to place a Skype call through a base station 50 feet away. The team believes their innovation "...is a major leap in the capability of battery-free devices and a step towards a fully functional battery-free cellphone."

 

At this stage in its development, the battery-free phone prototype has limited functionality, but it consumes only about 3.5 microwatts of power, which, for the purposes of this research, is sufficiently supplied by ambient radio waves and light. In Jennifer Langston's article for UW News, co-author and electrical engineering doctoral student Bryce Kellogg is quoted as saying, "...the amount of power you can actually gather from ambient radio or light is on the order of 1 or 10 microwatts. So real-time phone operations have been really hard to achieve without developing an entirely new approach to transmitting and receiving speech."
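
Those numbers make the power budget easy to sanity-check. The sketch below plugs in the quoted figures; the split between RF and light harvesting is an illustrative assumption, not a measurement from the paper:

```python
# Power budget sanity check, in microwatts, using the quoted figures.
consumption_uW = 3.5          # the phone's stated power draw

# Kellogg puts ambient harvesting "on the order of 1 or 10 microwatts";
# treat those as illustrative endpoints for single sources.
rf_only_uW = 1.0
rf_plus_light_uW = 1.0 + 10.0

print(f"RF only:    {rf_only_uW - consumption_uW:+.1f} uW margin")        # deficit
print(f"RF + light: {rf_plus_light_uW - consumption_uW:+.1f} uW margin")  # surplus
```

The worst case comes out negative, which is exactly why the design has to be so frugal and why the photodiodes matter.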

 

According to Langston, the team plans to improve the operating range and encrypt conversations, as well as to try streaming video on a battery-free cell phone by adding a visual display using low-power E-ink screens. That will obviously require more power, and therefore a new approach to supplying it, given Kellogg's estimates of the ambient power available. As it stands, the University of Washington team has provided an intriguing proof of concept, as well as future directions for exploration and refinement; now the world must wait to see whether the invention sparks an even greater change in the culture of mobile communication.

 

The team’s research was funded by the National Science Foundation and Google Faculty Research Awards.

 

Watch the video below to see the team demonstrate the operation of their battery-free phone.

 

 

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell


Google creates a neural network capable of multitasking, called MultiModel. A diagram of how Google's new neural network works. (Photo via Google)

 

My immediate thought… Neural Network Raspberry Pi?

 

Multitasking is something we do every day, whether or not we realize it. While some of us are better at it than others, we all have the capability. Neural networks don't, however. Normally, they're trained to do one task, whether that's adding animation to video games or translating languages. Give one a second task, and it can no longer do its first job very well. Tech giant Google is looking to change this with its latest system, MultiModel.

 

Modeled after the human brain, the new system can handle eight tasks at once and pull them off pretty well. Among its tasks: detecting objects in images, recognizing speech, translating between four pairs of languages, parsing grammar and syntax, and providing image captions. The system did all of these at the same time, which is impressive for a neural network.

 

So, how does it do it? The neural network from Google Brain, the company's deep-learning team, is made up of subnetworks that specialize in tasks relating to audio, images, or text. It also has a shared model equipped with an encoder, an input/output mixer, and a decoder. From this, the system learned how to perform all eight tasks at the same time. During testing, the system didn't break any records and still made some errors, but its performance was consistently high. It achieved an accuracy score of 86 percent, meaning its image recognition was only 9 percent worse than specialized algorithms. Still, it managed to match the abilities of the best algorithms in use five years ago.
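
The structure described above, per-modality subnetworks feeding a shared model with task-specific outputs, can be sketched in a few lines. This is a toy stand-in for illustration, not Google's actual MultiModel code; the layer sizes and task names here are arbitrary assumptions:

```python
import torch
import torch.nn as nn

D = 128  # width of the shared representation (arbitrary choice)

# Modality-specific subnetworks map very different inputs into one space...
encoders = nn.ModuleDict({
    "image": nn.Linear(1024, D),   # e.g. flattened image features
    "text":  nn.Linear(300, D),    # e.g. word embeddings
    "audio": nn.Linear(40, D),     # e.g. spectrogram frames
})

# ...a single shared model does the heavy lifting for every task...
shared = nn.Sequential(nn.Linear(D, D), nn.ReLU(), nn.Linear(D, D))

# ...and task-specific heads decode the shared representation.
decoders = nn.ModuleDict({
    "caption":  nn.Linear(D, 10000),   # vocabulary logits
    "classify": nn.Linear(D, 1000),    # object-class logits
})

def run(modality: str, task: str, x: torch.Tensor) -> torch.Tensor:
    return decoders[task](shared(encoders[modality](x)))

# The same shared weights serve every modality/task route, which is what
# lets training on one task improve performance on another.
print(run("image", "classify", torch.randn(1, 1024)).shape)  # [1, 1000]
```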

 

While there's still work to be done to improve the system, MultiModel is already showing its benefits. Normally, deep-learning systems need large amounts of training data to complete a task. Google's new system can instead learn from data gathered for a completely different task. For instance, the network's ability to parse sentences for grammar improved when it was trained on a database of images, which has nothing to do with sentence parsing.

 

Not wanting to keep the system to itself, Google released the MultiModel code as part of its TensorFlow open-source project. Now other engineers can experiment with the neural network and see what they can get it to do. The company hopes sharing the source code will help facilitate quicker research and improve the neural network.

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell

By Christine Young, Blogger, Maxim Integrated

 

 

From Yahoo and LinkedIn to the Internal Revenue Service, the Democratic National Committee, and everyday objects like dolls and DVRs, it seems that almost nothing is safe from hacking. Indeed, as more of our everyday things get smarter and connected, they can also become open to attack.

 

As an example, look no further than last fall’s distributed denial-of-service (DDoS) attack that brought down popular websites such as Amazon, Netflix, Reddit, and Spotify. This large-scale internet outage was caused by the Mirai botnet, which hacked into CCTV video cameras and DVRs. Indeed, DDoS attacks are on Wired’s list of the biggest security threats for this year, along with ransomware, weaponized consumer drones, and another iPhone encryption clash.

 

In May 2000, the FBI opened its Internet Crime Complaint Center (IC3). The bureau's most recent cybercrime report, its 2015 Internet Crime Report, states that IC3 has amassed more than 3.4 million complaints since the center was formed. In 2015 alone, according to the report, there were more than 288,000 complaints amounting to more than $1 billion in reported losses. As RSA notes in its white paper, "2016: Current State of Cybercrime": "From mobile threats and ransomware to the role of biometrics in reducing fraud, a myriad of threats exist across the cyber landscape and the commoditization of cybercrime is making it easier and cheaper to launch attacks on a global scale."

 

Too many businesses consider security expensive, time-consuming, and complex to implement. Truth be told, there are techniques and technologies you can tap to integrate robust security into your design efficiently and affordably. The fact to keep in mind is that a breach can turn out to be far more costly, in terms of lost revenue as well as damage to brand reputation and customer loyalty.

 

White Paper: Essential Design Security Technology

Maxim has a new white paper, “Why Hardware-Based Design Security is Essential for Every Application,” that corrects the misconceptions around implementing design security. Read the paper to better understand why hardware-based security presents a much more robust option than a software-based approach. Learn about cost-effective embedded security technologies that simplify the process of designing in security. Read the white paper today and protect your next design against threats such as hacking, counterfeiting, and more.


Verizon and Korea Telecom demoed the first ever hologram call using their 5G networks. (Image credit Korea Telecom)

 

Earlier this month, Verizon and Korea Telecom tested the first international hologram-based video call over their respective 5G networks. The call was demoed during a meeting between Verizon CEO Lowell McAdam and KT CEO Hwang Chang-kyu who discussed expanding their partnership to advance the 5th generation infrastructure.

 

Both companies have been gobbling up spectrum licenses in the 30 and 40GHz range to better meet the 5G standard's throughput targets, which makes sense when you consider that hologram video calling requires massive bandwidth that 3G and 4G LTE cannot provide. Of course, you also need an infrastructure capable of delivering that spectrum, and as a result, Verizon just dropped $1 billion in pocket change on fiber-optic cable from Corning. It plans to unspool that cable in Boston and several other US cities over the next few years (2018-2020) as 5G takes hold.

 

As far as the numbers game goes, Verizon and KT aren't the only communications companies spending big on millimeter-wave spectrum: AT&T recently bought out Straight Path Communications for $1.6 billion and grabbed FiberTower for an undisclosed amount, both of which held extensive licenses in the 28 and 39GHz bands. Another major holder of spectrum licenses is Dish Network, which shelled out $6.2 billion for licenses in the 600MHz spectrum during the FCC's Broadcast Incentive Auction held last week.

 

With all that money being dropped on spectrum licenses, we should be able to do much more than make holo-calls, but this was an important first step: it showed that two separate 5G infrastructures could play well together, and the connection took only 10 minutes to complete rather than days. As for the tech used in the demonstration, details are vague at best, but my guess is they employed millimeter-wave devices (perhaps the Snapdragon X50 5G modem?), as KT has been developing live hologram calling for the past several years.

 

KT also says the technology can work on today's mobile devices without issue and doesn't require any specialized displays to function. So no, we won't be getting Star Wars-like hologram calling anytime soon, but the demonstration was still impressive, and KT expects to run trial services on its 5G network in 2018 for the PyeongChang Winter Olympics, followed by commercial service in 2019.

 

What's interesting about Verizon's and KT's endeavors is that there is currently no standard for 5G, just an outline of what the technology should entail from the NGMN (Next Generation Mobile Networks) Alliance, which does state that 5G should roll out for the commercial and business markets by 2020.

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell


In it for the G: AT&T buys Straight Path for the increase in wave spectrum it needs to unleash 5G. (Image credit AT&T)

 

AT&T announced recently that it's buying out Straight Path Communications to the tune of $1.6 billion in stock to grab the airwaves it needs to advance its 5G endeavor. Chief strategy officer (Technology and Operations) John Donovan made a rather bold statement earlier this year about AT&T's roadmap to the 5G horizon, saying, "Our 5G Evolution plans will pave the way to the next-generation of higher speeds for customers. We're not waiting until the final standards are set to lay the foundation for our evolution to 5G, we're executing now."

 

So what exactly does $1.6 billion (tax-free, to boot) buy? 735 mmWave licenses in the 39GHz band and 133 in the 28GHz spectrum, both of which are considered the gold zone for 5G implementation. AT&T states that those licenses cover the entire US, making it easy to roll out future 5G technologies. As part of its 5G Evolution plan, the company collaborated with Nokia to demonstrate the feasibility of 5G technology by streaming DirecTV Now over mmWave hardware.

 

Of course, this isn't AT&T's first acquisition in the 5G realm: the company snagged 24 and 39GHz licenses from FiberTower back in February of this year, giving it about the same chunk of the pie as Verizon, which has also been gobbling up telecommunications companies like the Cookie Monster with a pallet of Chips Ahoy!. Verizon's recent acquisition of XO Communications cost $1.8 billion and netted it a sizable share of the 28 and 39GHz spectrum.

 

It's important to note that there currently is no 5G standard, only a footprint laid out by the NGMN (Next Generation Mobile Networks) Alliance, a group of telecom companies, research institutes, vendors, and manufacturers that gave us LTE, SAE, and WiMAX. The footprint they sketched out for the 5G standard is as follows:

 

    -Data rates of tens of megabits per second for tens of thousands of users.
    -Data rates of 100 megabits per second for metropolitan areas.
    -1Gb per second simultaneously to many workers on the same office floor.
    -Several hundred thousand simultaneous connections for wireless sensors (IoT applications).
    -Spectral efficiency significantly enhanced compared to 4G.

 

It sounds great for those living in cities with office jobs, but not so much for those in rural areas. However, the alliance would also like to expand coverage to those areas at some point (see: never), perhaps over a satellite network.

 

Remember AT&T's Bogarting of iPhones when they first launched in 2007? Perhaps they'll share with the other networks. Otherwise, they can charge whatever they want, like with the iPhones back then. Those 300-page bills were just crazy.

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell


Researchers from MIT and the University of Chicago are making denser chips with wires that partially build themselves. Faster technology requires better and faster microchips. (Image via MIT)

 

As technology such as computers gets faster and better, it requires microchips that can keep up. The problem is that it's becoming more difficult to create denser chips. Not only do denser features make chips more fragile, but manufacturers also run into hard limits, like the wavelength of light used to create the wire patterns. A team of researchers from MIT and the University of Chicago may have overcome this challenge with their new, self-assembling chip.

 

The new method makes finer wires for chips by letting them partly build themselves, instead of relying on slow, deliberate ultraviolet lithography or scanning processes. To make their chip, the team starts by using an electron beam to make patterns on the chip. From there, they apply a mix of two polymers, called a block copolymer, that separates into patterns naturally. The block copolymer contains chain-like molecules, each made of two different polymer materials connected end-to-end.

 

A protective polymer coating is then placed on top of the other polymers using initiated chemical vapor deposition (iCVD); this top coat forces the copolymer underneath to assemble vertically, producing four wires where there would generally be only one. Each of the produced wires is a fourth as wide, resulting in finer lines. Since the top polymer layer can also be patterned, the method can produce any kind of complex patterning needed for the interconnections of a chip.

 

The results show promise compared with the standard way of making chips. Extreme-ultraviolet lithography is not only expensive but also a very slow process, which isn't effective for making chips at mass scale. The new method would cut down on both time and cost.

 

It might be a while before this method becomes the norm, but the researchers predict it should be an easy transition. Microchip manufacturers still using the lithographic method wouldn't even have to change their machines; it's as simple as adding the coating step to their current process. This would allow them to make denser chips without changing their current technology. With this breakthrough, we don't have to worry that our technology is changing at such a fast pace that other parts can't keep up.

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell


SIG says the new spec can deliver robust and reliable IoT connections, making full-home and outdoor use a lot easier to implement.

 

Almost every new mobile device features it, but most of us never really think about it until there's a connection issue. I am, of course, talking about Bluetooth, the wireless technology standard for exchanging data over short distances. It had been in its current form, version 4.2, since December 2014, and has now been officially replaced by version 5.0, according to the Bluetooth Special Interest Group (SIG).

 

The group published a press release back in June detailing the new specs, which make v4.2 look like antiquated technology: quadruple the range at which devices can connect, double the data transfer speed, and eight times the broadcast data capacity. One thing that won't increase is power consumption; version 5.0 uses the same low-power IP connectivity as its predecessor even though its core specs have improved.
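
Applied to Bluetooth 4.2 LE's nominal figures (a 1 Mbps PHY and a 31-byte advertising payload, standard 4.2 numbers used here as an assumed baseline), the multipliers work out as follows:

```python
# Bluetooth 5.0's headline multipliers applied to 4.2 LE's nominal figures;
# the baseline values are assumptions for illustration, not from the press release.
baseline = {
    "data rate (Mbps)":            1.0,  # 4.2 LE PHY
    "relative range":              1.0,
    "advertising payload (bytes)": 31,   # 4.2 advertising packet
}
multiplier = {"data rate (Mbps)": 2, "relative range": 4,
              "advertising payload (bytes)": 8}

for key, base in baseline.items():
    print(f"{key}: {base:g} -> {base * multiplier[key]:g}")
# data rate: 1 -> 2 Mbps; range: 1 -> 4x; payload: 31 -> 248 (~255 in the spec)
```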

 

The new spec also greatly benefits low-powered IoT devices, especially where range and broadcasting capabilities are a problem, such as full-home and outdoor deployments. In these cases, IoT devices used for remote sensing and data collection would benefit immensely, as they typically feature small batteries that must provide power for weeks or months at a time.

 

Imagine, too, walking through a smart home and interacting with appliances, security systems, and lighting that wirelessly connect to a central beacon rather than multiple deployed nodes. Not only does that reduce the hardware needed, but it also saves energy.

The spec features 2x the bandwidth and 4x the range while sticking with the popular Low Energy mode of v4.2.

 

As it stands today, the SIG expects the new 5.0 standard to be adopted by tech companies within a 2-to-5-month period, which matches up nicely with the latest mobile device revisions, including the iPhone 8 and Samsung Galaxy S8, set to hit the market in roughly the same timeframe. We will no doubt also see 5.0 incorporated into new SoCs, development boards, and add-on wireless modules, considering how much the technology benefits IoT devices. In fact, Nordic Semiconductor has already released a Preview Dev Kit that features the new technology.

 

Those looking for more information on the new Bluetooth 5.0 standard should check out the Special Interest Group's press release, found here.

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell


Researchers from IBM and ETH Zurich have developed a liquid battery that takes existing "flow" technology and applies it to small computer chips. Computer chips can be stacked with alternating layers of chips and flow batteries that would both power and cool them at the same time. (via IBM Research Zurich)

 

Heat is a byproduct of the work done by batteries, computers, and computer chips, and overheating is a problem usually tackled with fans and various systems of ventilation. Now, scientists from IBM and ETH Zurich are approaching heat regulation by using liquid electrolyte systems to both power and cool computers simultaneously. Flow batteries use two liquid electrolytes to provide energy through an electrochemical reaction that occurs when they are pumped to the battery cell from outside through a closed electrolyte loop. Usually, flow batteries are used for larger-scale stationary power systems, like wind and solar installations, because they can store energy in the two electrolyte liquids for a long time with minimal degradation. The team in Zurich has now developed "miniaturized redox flow cells" that apply the same technology to computer chips, cooling them with the very electrolytes that power them.

 

The team in Zurich managed to find two liquids that are effective both as flow-battery electrolytes and as cooling agents that dissipate heat from the computer chips in the same circuit. According to an ETH Zurich doctoral student, they are "...the first scientists to build such a small flow battery so as to combine energy supply and cooling." The team's battery has a measured output of 1.4 watts per square centimeter, which, according to Fabio Bergamin of ETH Zurich News, is a record high for its size. Even after accounting for the power required to pump the liquid electrolytes to the battery, the resulting net power density is still 1 watt per square centimeter. The battery itself is only about 1.5 millimeters thick, so the plan would be to assemble stacks of alternating computer chips and thin battery cells, with each cell providing electricity while cooling the stack to prevent overheating.
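
The quoted figures also imply how much of the battery's output goes to running its own pumps; a quick calculation follows (the 2 cm x 2 cm chip size below is a hypothetical, chosen only for illustration):

```python
# Gross vs. net power density from the figures above, in W per cm^2.
gross = 1.4   # measured cell output
net   = 1.0   # remaining after powering the electrolyte pumps

overhead = (gross - net) / gross
print(f"Pumping overhead: ~{overhead:.0%} of gross output")  # ~29%

# For a hypothetical 2 cm x 2 cm chip layer, one interleaved cell nets:
area_cm2 = 2 * 2
print(f"Net power per layer: {net * area_cm2:.1f} W")        # 4.0 W
```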

 

At the moment, the electricity generated by the redox flow cells is too low to power a single computer chip; as Bergamin notes, the work must be optimized by industry partners before it can be used in a computer-chip stack. The scientists note that the flow-battery approach has other potential applications in things like lasers and solar cells, but above all, this team has demonstrated that small flow batteries are a concept worth exploring.

 

The video provided below shows how flow batteries use liquid electrolytes on a large scale.

 

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell


Russian artist Vtol used his own blood as a power source for his latest electronic sound exhibit. Vtol draws his blood onstage to help power his creation. (Photograph via Vtol)

 

Blood is a life source, important for our daily functions, but did you ever think it could power things other than our bodies? Russian artist Vtol (Dmitry Morozov) showed just how powerful blood is with one of his latest projects. Titled "Until I Die," the piece is an electronic sound installation he powers himself with eleven "blood batteries." It uses his blood as an electrolyte, forming direct-current batteries when the blood is combined with metals like aluminum and copper. The blood powers an electronic synth module, which creates sound compositions played through a speaker.

 

To make this creation come to life, Vtol extracted and stored just under 1.2 gallons of his blood over 18 months. Generally, it's not good practice to store blood that long, so various manipulations had to be performed to keep the blood's color, chemical composition, homogeneity, and sterility intact. In the end, he gathered about 4.5 liters of blood, which was then diluted to produce the 7 liters the installation needs to run properly. For an even more dramatic effect, the last bit of blood was drawn from Vtol's arm during the performance. And you thought getting blood drawn at the doctor's office was bad.

 

So why go through the trouble? Just for the sake of art? Not exactly. Vtol explains that the performance is a "symbolic act." Since he can power the device with his blood, he sees it as an extension of himself; there is literally a part of him in this creation, and that's what he wanted. And what better way to show just how powerful and vital blood is? Here is an installation showing exactly how blood works as an energy source. It's something to think about the next time you hear about a local blood drive.

 

If you're hoping to see this wild performance for yourself, you're out of luck. The initial performance took place at the Kapelica Gallery in Ljubljana in December 2016. Luckily, documentation of the event recently surfaced online, and you can watch the mind-blowing performance here. Chances are you won't be seeing phones and tablets powered by blood anytime soon, but the fact that someone powered a device with such a vital fluid changes the way you think about blood.

 

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell


A pair of researchers from Columbia University and the New York Genome Center (NYGC) has found a way to encode information using nature's storage system: DNA. Yaniv Erlich and Dina Zielinski: the duo that worked on the DNA data storage technology. (Image via New York Genome Center)

 

Deoxyribonucleic acid, or DNA, is the material that composes all humans and almost every other living organism. It contains the instructions for how we are to be assembled and maintained, and it is coded using four chemical bases: adenine (A), thymine (T), cytosine (C), and guanine (G); A pairs with T, and C pairs with G. Each chemical base is also connected to a phosphate molecule and a sugar molecule, forming what is called a nucleotide. DNA takes the form of a double helix, which looks somewhat like a ladder: the chemical base pairings form the rungs, and the phosphate and sugar molecules form the strands that hold the rungs in place. This natural information storage technology has now been adapted for other purposes and has so far been used to encode a $50 Amazon gift card, a Pioneer plaque, an 1895 French film, a computer virus, a 1948 study by information theorist Claude Shannon, and a full operating system.

 

The data from these various files were split into strings of binary code (zeros and ones), and using what is called an "erasure-correcting algorithm" (also known as a "fountain code"), the strings were randomly packaged into "droplets," which were then encoded using the four nucleotide bases of DNA. Although the binary storage of DNA is theoretically limited to two binary digits per nucleotide, and practically limited to about 1.8 digits per nucleotide, Erlich and Zielinski packed an average of 1.6 digits per nucleotide, still 60% more than any previously published method. The algorithm excluded letter combinations known to cause errors and supplied a barcode for every droplet to help reassemble the files later using DNA sequencing technology.
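
The final encoding step, mapping bits onto bases, is simple to illustrate. The sketch below shows the naive 2-bits-per-base mapping; the published DNA Fountain pipeline first packages the bits into fountain-code droplets and screens out error-prone sequences (long single-base runs, extreme GC content), which is what pulls the practical density down to the 1.6 digits per nucleotide mentioned above:

```python
# Naive 2-bits-per-nucleotide encoding (illustration only; the published
# method adds fountain coding, sequence screening, and per-droplet barcodes).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(bits: str) -> str:
    assert len(bits) % 2 == 0, "pad the bitstream to an even length"
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> str:
    return "".join(BASE_TO_BITS[base] for base in strand)

payload = "0100100001101001"      # the ASCII bits for "Hi"
strand = encode(payload)
print(strand)                     # CAGACGGC
assert decode(strand) == payload  # round-trips without loss
```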

 

What's more, this form of coding, storage, and retrieval is extremely reliable. In total, 72,000 DNA strands, each 200 bases long, were generated and sent as a text file to Twist Bioscience, a San Francisco DNA-synthesis startup that specializes in transforming digital data into biological data. After two weeks, Erlich and Zielinski received a vial with the freshly coded DNA molecules, and ultimately the files were recovered without a single error. This technology is important not only for its compactness but also for its ease of replication and resistance to degradation. Unfortunately, it is an expensive process and might not replace current data storage methods just yet, but it is definitely a promising leap in information storage technology.

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell


Hasbro introduces a new Disney doll that lets you program her dance routines with a companion app. Parents will be glad to know this doll can sing, dance, and say over 100 phrases. (Photo via Hasbro)

 

With a live-action remake of the Disney classic Beauty and the Beast on the way, you can expect a new line of toys to come with it. Hasbro revealed a new Belle doll to tie in with the film ahead of Toy Fair 2017. It talks, moves, and dances all on its own, making it stand out from all the other Belle dolls. But it also does something else: it teaches your kids how to code. In another attempt to ride the code-learning craze, Hasbro's newest doll lets kids create their own dance routines for Belle using a basic programming app. While they're creating the dances, they're also picking up the basics of coding.

 

The doll is meant to appeal to all ages. There's a connect-the-dots mode for younger kids, where they create dance patterns by dragging a finger across the screen; pressing the various shapes that appear adds some extra pizzazz to the routine. Older kids can take advantage of the more advanced block-coding mode, where dance routines are built manually by dragging and dropping moves and commands into a long sequence. Once a routine is done, it can be synced to the battery-powered doll over a Bluetooth connection.

 

As an added bonus, Belle can say over 100 different phrases and even sings four songs from the original movie, like "Be Our Guest." The doll will officially be available in fall, right in time for the holiday season, and will run you $120. This is one doll you don't want the kids to ruin or tire of after only two days.

 

All things considered, the doll sounds pretty cool, but will it actually get kids interested in coding? That remains to be seen. Many people believe the future of the job market relies on programming, so it's understandable why you'd want to foster these skills at a young age. But it could also discourage kids, especially if they have no interest in programming in the long run. The trend of apps, toys, websites, etc., that want to teach kids coding may burn them out in the end. How many of you were forced to learn a skill as a kid? Did you enjoy it and keep practicing it? Probably not. What's wrong with having regular toys that let kids be imaginative? On the other hand, it could play a role in encouraging girls to get interested in STEM (Science, Technology, Engineering, and Math) fields, which is always a good thing.

 

Have a story tip? Message me at: cabe(at)element14(dot)com

http://twitter.com/Cabe_Atwell
