
Embedded

35 Posts authored by: Cabe Atwell


Researchers at CU-Boulder, MIT and UC Berkeley have successfully built a photonic microchip that uses light to transmit data. The first of its kind, it achieves a bandwidth density of 300 gigabits per second per square millimeter on a tiny 3 x 6 mm die. It may revolutionize data transmission forever. (via University of Colorado Boulder)


While Intel’s new computer processing chips have gained a reputation for packing unprecedented power and speed, researchers at the University of Colorado Boulder are reinventing how we transmit data. In collaboration with researchers from MIT and UC Berkeley, the team has successfully transmitted data using light instead of electricity.

 

Relying on light for data transmission is genius. Light can carry information over longer distances for the same energy an electrical link consumes, which means microchips built this way will require even less energy than they already do. Photonic technology has another significant advantage: multiple streams of data can be transmitted at once on different wavelengths, i.e., colors of light, over the same fibers currently used to transmit data electronically. Basing microchip technology on photons, while recycling existing hardware, could thus revolutionize data transmission, moving data faster and more energy-efficiently than any technology currently available.

 

The technology is based on infrared light with a wavelength shorter than one micron, about one-hundredth the thickness of a human hair, Miloš Popović, an assistant professor in CU-Boulder’s Department of Electrical, Computer, and Energy Engineering and a corresponding author of the study, told reporters at CU-Boulder. On a single microchip, the researchers achieved a bandwidth density of 300 gigabits per second per square millimeter. That is up to 50 times greater than anything currently available on the market.
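
For a rough sense of scale, here is a quick back-of-the-envelope calculation. It is an illustration only: the density figure comes from the article, but treating the whole 3 x 6 mm die as optical I/O is our assumption, not a stated spec.

```python
# Illustrative arithmetic only: the 300 Gbps/mm^2 density and the 3 x 6 mm
# die size come from the article; assuming the full die is optical I/O is ours.
density_gbps_per_mm2 = 300
die_area_mm2 = 3 * 6                       # 18 mm^2

aggregate_gbps = density_gbps_per_mm2 * die_area_mm2
print(f"~{aggregate_gbps} Gbps ({aggregate_gbps / 1000:.1f} Tbps) upper bound")
# -> ~5400 Gbps (5.4 Tbps) upper bound
```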

 

The researchers successfully built a functional photonic microchip that mimics an electricity-only design. The chip measures 3 by 6 mm and uses the same electronic circuitry as existing models. Its light-based transmission technology, however, allows it to carry 850 optical I/O components, and the design can be mass-produced fairly smoothly with existing manufacturing processes. It is the only chip of its kind – the only processor in the world to transmit data using light.

 

The researchers are confident in the technology’s contribution to modern computing. Mark Wade, a CU-Boulder PhD candidate and co-lead author of the study, said the design solves the communication bottleneck of electricity-only systems while remaining streamlined enough to be mass-produced. The research team plans to sell the technology, and a start-up was created to do just that. Ayar Labs (formerly OptiBit) will continue to operate independently, specializing in high-volume data transmission using energy-efficient technology. The start-up also won the MIT Clean Energy Prize just last year.

 

We live in the age of information. With current computing speeds already nearing the physical limits of electricity-based technology, our societal advancements are constrained by our computing speed. According to John E. Howland of Trinity University, meteorologists, for one, are held back by slower computing speeds. Faster processing will have a direct impact on the natural sciences and our ability to understand the world around us. Beyond faster gaming and data retrieval than we ever thought possible, artificial intelligence and science will advance beyond our wildest imaginations when faster processing speeds arrive. And now they have.

 

According to study researchers, manufacturers have begun streamlining processes to mass-produce photonic technology. It won’t be long before we see the direct benefits of what a limitless society can accomplish together. Rajeev Ram, a professor of electrical engineering at MIT, led the research team. The details of the study were published in the journal Nature.


See more news at:

http://twitter.com/Cabe_Atwell


Ordinary roses, or a living, renewable biofuel source? Possibly both? A group of Swedish scientists has made an epic breakthrough by successfully incorporating functioning circuitry into a living organism (in this case, a common rose). They recently published their findings, which included causing ions within the rose’s leaves to light up. The next step is using electronic-organic plants to act as biofuel power plants. (image via Panoramic Images)

 

It seems that technology has triumphed over nature once more – taking something once sublime and beautiful and turning it into a cold, calculated machine. Never before had scientists been able to combine organic plant matter and electronic circuits without killing the plant. Now, a group of researchers from Linköping University in Sweden has released its chilling findings in Science Advances. The project started in 2012, after many unsuccessful attempts. It seems that this time they are on the right track, with a breakthrough that may change our relationship with plants and the whole natural world forever.

 

It starts with a rose: a beautiful and temperamental plant whose only function is to look beautiful. But why simply enjoy a thing of beauty when you can turn it into an instrument? Perhaps the rose can serve as a radio transmitter, or a renewable energy source, instead of just sitting there – or at least that is what many scientists may think. The issue with combining plants and electronics was that scientists kept trying to splice them together somehow, joining the inorganic to the organic by inorganic means.

 


A schematic of how their new technology works from their journal article (via Berggren et al., 2015, Science Advances, Vol. 1, no. 10)

 

The genius of Magnus Berggren and his team from Sweden is that they discovered how to use the natural functions of the plant and its components to create electronic circuitry. They used a synthetic polymer, which they fed to the plant the same way a plant takes up water and nutrients. As the polymer makes its way up the vascular system of the rose stem, it becomes a part of the xylem, the leaves, the veins, and the signaling of the rose. These components of the plant are then used as the main components of the circuitry, allowing electronics and organic bodies to merge and act as one.

 

Their current synthetic polymer mixture creates a wire up to 10 cm long inside the stem (xylem) without impeding the rose’s ability to absorb water and nutrients. Via this method, the scientists were able to light up ions within the leaves of the plant. Berggren was so surprised that the experiments actually worked that he can’t wait to test out new projects, among them a biofuel concept. “Right now we are trying to put electrodes into the leaves with enzymes that we connect to the electrodes,” Berggren told Motherboard. “The sugar that is produced in the leaves is converted by the enzyme; they deliver a charge to the electrode and then hopefully we can collect that charge in a biofuel cell.”

 

This latest proposition could entirely change our relationship with plants, as forests could turn into renewable power plants for nearby cities. Berggren hopes that this new biofuel possibility will allow us to gain resources from our natural world without destroying it. However, how viable is the health of the rose in the long term? No one knows. It is still very early days, but there is no doubt that science is about to get weirder, as electronics and plants truly begin to meld into cyborg technology for years to come.


See more news at:

http://twitter.com/Cabe_Atwell


This chip is a huge step forward in fiber optic communications. University of Colorado researchers combined electrons and photons within a single chip for this landmark development. (all images via University of Colorado & Glenn Asakawa)

 

Here is a claim and a wish I've heard for decades.


Advances in technology never cease to amaze, no matter how big or small, but the University of Colorado takes the cake for best innovation of 2015. The university's researchers have created the first full-fledged processor that transmits data using light instead of electricity, successfully combining electrons and photons within a single microprocessor. So what does this all mean? It's a big development that could lead to ultrafast, low-power data crunching, and it marks a major step for fiber optic communication.

 

To achieve this, researchers put two processor cores with more than 70 million transistors and 850 photonic components on a single chip. They were able to fabricate the processor in a foundry that mass-produces high-performance computer chips, which means the design can be spun up easily and quickly for commercial production. Though the design isn't completely photonic, the processor is still pretty impressive, with an output of 300 Gbps per square millimeter – 10 to 50 times the norm.



(Left) "The light-enabled microprocessor, a 3 x 6 millimeter chip, installed on a circuit board." (Right) "Electrical signals are encoded on light waves in this optical transmitter consisting of a spoked ring modulator, monitoring photodiode (left) and light access port (bottom)"

 

Fiber optic communication is a big goal for many researchers and organizations due to its many advantages. It supports greater bandwidth, carries data at higher speeds over longer distances, and uses less energy in general, which is good news for a society that aims to consume less power. There have been advances in fiber optic technology, but until now it has proven difficult to merge photonics and computer chips. These University of Colorado researchers have cleared that hurdle.

 

But does the chip actually work? Researchers ran several tests and showed that the chip could run various computer programs requiring it to send and receive instructions and data from memory. This is how they measured the chip's bandwidth density of 300 Gbps per square millimeter.

 

“The advantage with optical is that with the same amount of power, you can go a few centimeters, a few meters or a few kilometers," said study co-lead author Chen Sun. "For high-speed electrical links, 1 meter is about the limit before you need repeaters to regenerate the electrical signal, and that quickly increases the amount of power needed. For an electrical signal to travel 1 kilometer, you'd need thousands of picojoules for each bit.”

 

If there are further advances in the technology, not only will it mean posting Facebook statuses at lightning-fast speed, it also means data centers will be greener. According to the Natural Resources Defense Council, data centers used an estimated 91 billion kilowatt-hours of electricity in 2013, around 2 percent of all electricity consumed in the United States. Considering those numbers, this is a great way to promote a greener society.

 

See more news at:

http://twitter.com/Cabe_Atwell


A team of researchers from Columbia Engineering, Seoul National University and Korea Research Institute of Standards and Science recently developed the world’s smallest lightbulb – at just one atom thick – using graphene. The structure may also revolutionize computing and chemical experimentation.  (via Columbia)


Graphene never ceases to amaze. Take a look at everything written about the material here at element14. A team of researchers from Columbia Engineering, Seoul National University and the Korea Research Institute of Standards and Science recently created the world’s thinnest light bulb, at just one atom thick. The micro bulb on a chip may revolutionize light displays, chemistry and computing. Researchers are currently developing the technology further for practical use in the near future.

 

Postdoctoral research scientist Young Duck Kim of James Hone’s team at Columbia Engineering headed the project. He and his team of researchers took the same principles of the incandescent light bulb and applied them to graphene to see if Thomas Edison’s world-changing invention could be updated.

 

The team placed the one-atom-thick pieces of graphene on a small strip with metal electrodes. They suspended the structure above the substrate and heated it by passing a current through the graphene filaments. To their surprise, as the graphene heated up, it became luminous – even to the naked eye. The structure is essentially the thinnest visible light bulb ever made, and its potential for impacting numerous technologies is huge.

 

If the graphene light chip comes to market, it could play a critical role in enhancing the capabilities of photonic circuit technology. Photonic circuits are much like electrical circuits, but use light rather than electrons to carry signals. For a filament-style light source to emit usable light on a chip, the filament must be able to withstand temperatures of thousands of degrees Celsius. A chip that could both handle that level of heat and fit on a circuit board never existed, until now.

 

The micro light bulb on a chip may have other uses, too. Since it can withstand temperatures above 2,500 degrees Celsius, it could heat tiny hot plates for observing high-temperature chemical reactions. The tiny bulbs are also see-through, so they could revolutionize commercial light displays. And if the chips can be switched on and off quickly enough, they may have a future as computer bits as well.

 

Young and his team are continuing to expand upon the technology. The project was a joint effort between researchers from Columbia Engineering, the Korea Research Institute of Standards and Science, Seoul National University, Konkuk University, Sogang University, Sejong University, Stanford University and the University of Illinois at Urbana-Champaign. Read more about this achievement in Nature.

 

C

See more news at:

http://twitter.com/Cabe_Atwell


Made out of Gorilla Glass, the chip obliterates itself. This new chip shatters into thousands of pieces under extreme stress. (via Xerox PARC, pic via IDG.tv)


I was just thinking, there has to be a way to store data that will self-destruct upon access. Seems we are close to it.

 

The latest development from Xerox PARC engineers is a device straight out of a James Bond film. The team has created a chip that can shatter into bits on command, as part of the Defense Advanced Research Projects Agency's (DARPA) Vanishing Programmable Resources project. How does the chip get this shattering effect? It was made using Gorilla Glass, the toughened glass typically used for smartphone screens, instead of plastic and metal. The glass was then tempered in a way that leaves it under extreme internal stress, which causes it to disintegrate easily when triggered.

 

In a demonstration, the chip was brought to its breaking point with heat. A small resistor heated up and the glass shattered into a ton of tiny pieces. Even after it broke, the small fragments continued to break into even smaller pieces for tens of seconds afterward.

 

Is the chip just supposed to look cool? Even though the effect is awesome, the chip can actually be a great security measure. It could be used to store sensitive data like encryption keys, then shatter into so many pieces that reconstructing it becomes impossible. It's a pretty intense way to deal with electronic security, but it's a viable option if hardware happens to fall into the wrong hands.

 

The self-destructible chip was demonstrated in all its glory at DARPA's Wait, What? Event in St. Louis last week.

 

“The applications we are interested in are data security and things like that,” said Gregory Whiting, a senior scientist at PARC in Palo Alto, California. “We really wanted to come up with a system that was very rapid and compatible with commercial electronics.”

 

With so much information being stored electronically, more and more companies are employing similar techniques for security. Snapchat uses a similar idea, letting users send images to friends that can no longer be accessed after a short amount of time. And Gmail recently introduced the “Undo Send” feature that allows people to cancel sent emails, though only within a roughly 30-second window. Now, if only we could make our phones explode when they get stolen.

 

PARC is a Xerox company that provides tech services, expertise, and intellectual property to various clients, including Fortune 500 businesses, startups, and government agencies.


 

C

See more news at:

http://twitter.com/Cabe_Atwell

Printed Circuit Boards (PCBs) are without a doubt central to all electronics. As technology advances, however, PCBs must be made faster and smaller than ever before. Before you get busy, make sure you nip sloppy PCB production in the bud, before it costs you big bucks. Read on to discover the 12 biggest PCB development mistakes and how to avoid them.

 

Layout

 

1. Improper Planning

 

Have you ever heard “proper planning prevents poor performance?” It’s true. There’s a reason we consider poor planning the number one PCB development mistake. There is no substitute for proper PCB planning. It can save you time and energy. If you build it wrong, you will have to spend additional resources to go back and fix it. How do you plan properly? Consider numbers 2-6 on our list before you physically begin building. You’ll be thankful you did.

 

2. Incorrect Design

 

There is an infinite number of layout possibilities with PCBs. Keep function in mind when designing the form. For example, if there’s a good chance you’ll need to add on in the future, you may want to consider something like a ball grid array (BGA), which can conserve space on an existing board and let you build upon that design later. If your design must incorporate large copper areas, you’d do best going with a polygon pour. Whatever your function, choose the right form.

 

3. Improper Board Size

 

It’s much easier to begin with the right size first. Although the portion of the project you’re working on now may only require a small board, if you’re going to have to add on in the future, you’re better off getting the larger board now. Stringing multiple boards together may be difficult due to potential circuitry and connectivity issues. Plan adequately not only for current function but future function so you save yourself time and money.

 

4. Failing to Group Like Items

 

Grouping like items is a critical part of layout. It will not only help you keep your trace lengths short (another important element of design), but it will also help you avoid circuitry issues, ease testing and make error correction much simpler.

 

5. Software, Software, Software

 

We know you can design a PCB from scratch, but why would you want to when you can use software? Software makes your life easier. Electronic Design Automation (EDA) tools can recommend the best layout to choose, and other programs may suggest the best materials to use based on the board’s prospective function. Software won’t do all of the thinking for you, but it sure does help.

 

6. Using the Silk Screen Improperly

 

A huge ally when creating a design for a PCB is the silk screen. When used properly, it’s a great tool that allows you to map out all aspects of your PCB before construction, including circuitry planning. However, be careful and maintain best practices. When used improperly, the silk screen can make it difficult to know where connectors and components are supposed to go. Use full words as descriptors when possible, or keep a key of your symbols nearby.

 

 

Building

 

Once you’re done planning, you can begin building your board. You’re still not out of the woods, however. Building is another area where people make costly mistakes. When done well, however, you can build PCBs faster than ever.

 

7. Poorly-Constructed Via-in-Pad

 

This issue is one of the biggest detriments to proper PCB development. Many boards now require via-in-pad, but when soldered incorrectly, vias can lead to breakouts in your ground plane. This creates a larger circuitry issue, as power travels between planes instead of through the intended connectors and components. Test your ground plane. If you suspect you have a shaky via-in-pad on your board, cap or mask it and test again. It may slow down production now, but it’ll save you time in the long run.

 

8. Using the Wrong Materials

 

Although this mistake may seem like a novice move, it happens. PCBs can be constructed from various materials. Know the purpose for which you are building your board, and which materials best suit that design, before you start building. If you’re building an FR-2 grade, single-sided PCB, you can use phenolic paper materials. Anything more complex, however, should use epoxy and glass cloth. Also, different materials have different temperaments; keep this in mind. If you’re building a simple design that needs to hold up in a humid environment, it may be worth going with epoxy.

 

9. Too Lazy to Test It

 

If there’s one habit you should change, it’s how frequently you test your prototype. Assuming your board is grounded and that circuits will function in perfect accordance with their intended ground paths and voltages is asking for trouble. We know it takes time to test your board, but it takes more time to find and correct an error later. Test it now. Every design has an issue; keep that thought in mind.

 

 

Manufacturing

 

So you properly planned and built your board. Things couldn’t still go wrong, could they? Wrong! They can and they do. These are the three mistakes to avoid.

 

10. Failing to Double Crunch the Numbers

 

We’ve all felt the pressure of an upcoming product deadline. You’re sweating, over-caffeinated and running on a lack of sleep. We know you’re an engineer, but don’t let your ego cost your company huge amounts of money over an error. Always double-check your numbers before sending your model to production. This includes testing your board, ensuring the size is in line with your client’s specifications and double-checking that your design is ideal for the intended function. It’s always better to have one model that needs to be reworked than a thousand. Rewind to #1 on this list: proper planning. Never jump the gun when sending out the design.

 

11. Temperature Control

 

This step is often neglected, but it’s important. Even if you do everything right leading up to the production process, you will ruin your boards if you neglect temperature during development and storage. Every step in the process must factor in temperature. Soldering in cold temperatures, for example, often leads to poor connections. Likewise, storing boards in extreme heat or humidity may damage components and the board itself. At every step in the process, consider temperature and ensure it’s working for you.

 

12. Communicate

 

Building PCBs can be fun, if you create functional boards at the end of the grueling process. So you designed your board well and followed best practices during production – you’re in the clear, right? Not always. Ensure you communicate properly with your clients at all times. It sounds simple, but what’s said isn’t always what’s heard, and your finished product can be rejected. Save yourself a step by making sure you’re creating what your client wants at every stage of the process, so you can move on to more fun things, like making a paper airplane machine gun.


C

See more news at:

http://twitter.com/Cabe_Atwell


Plasmonic circuit. A research team from ETH Zurich recently published an article in Nature Photonics announcing a new technology that enables faster, cheaper data transmission. (via Nature Photonics)


Networks may get an upgrade. A team of researchers from ETH Zurich recently developed a technology that may make the future of data transmission faster, cheaper and smaller than ever before.

 

Professor of Photonics and Communications Juerg Leuthold and his team of researchers recently released a seminal paper in Nature Photonics disclosing a new technology that can transmit data with a modulator roughly one hundred times smaller than current designs. The new method shrinks modulators from a few micrometers to a few nanometers, allowing faster and smaller-scale transmission of data.

 

The research team discovered that surface plasmon polaritons could be used to shrink light signals to a fraction of their normal size. Using this trick, they were able to send light signals as normal, shrink them down to move through much smaller electrical structures, and expand them again later. The technique is similar to keeping a secret message in a small box, flattening that box so it fits through the crack of a doorway, and opening it up again on the other side. The technology miniaturizes the data path without compromising the data, bypassing the limitations of current technology.

 

Leuthold plans to continue his research, although he has not disclosed the next step for his work. The current model uses gold, yet is still more affordable to build than current modulators. Perhaps other conductors will be used in future models, and the team might attempt to build compatible hardware. These are all speculations, but one thing is certain – if it comes to market, it’ll significantly change the way we transmit data every day.

 

C

See more news at:

http://twitter.com/Cabe_Atwell



Memristor Circuit. Researchers at UC Santa Barbara and Stony Brook University successfully built a neural network to house memristors. The prototype was successful in recognizing small images and may be expanded to develop futuristic computers that simulate the human brain. (via UC Santa Barbara)


A team at the University of California, Santa Barbara and Stony Brook University is on the brink of finally cracking how to build memristors into their own neural hardware using classic perceptron technology. Memristor research has been a long time coming, but if the researchers are successful, the devices could assist in managing computer energy consumption and may eventually lead to thinking computers that mimic human neurons and synapses.

 

Memristors, or memory resistors, are thought to be a crucial component to developing computers that can really “think” like human brains. A human brain will build brand new synapses based on an individual’s need for a particular type of information. A mathematician, for example, would have a very different brain, structurally, than a musician, because the part of the brain most used would become more developed over time. Computer scientists think memristors are the key to allowing computers to work in this way, as they can regulate the flow of electrical energy to various circuits, based on which circuits are most frequently used.
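
For readers wondering what a “memory resistor” actually does, below is a minimal sketch of the textbook linear ion-drift model (after Strukov et al., 2008). It illustrates the general device behavior, not the hardware in this study, and the parameters are illustrative: resistance depends on the history of the current pushed through the device, and that state persists when the drive is removed.

```python
import numpy as np

# Textbook linear ion-drift memristor model (illustrative parameters,
# not the devices built in this study).
R_ON, R_OFF = 100.0, 16e3    # resistance when fully doped / undoped (ohms)
D = 10e-9                    # device thickness (m)
MU = 1e-14                   # dopant mobility (m^2 / (V s))
DT = 1e-2                    # time step (s)

w = 0.5 * D                  # state variable: width of the doped region

def step(i, w):
    """One time step of drive current i (A); returns new state and resistance."""
    w = np.clip(w + MU * (R_ON / D) * i * DT, 0.0, D)
    return w, R_ON * (w / D) + R_OFF * (1 - w / D)

# Positive current lowers resistance; negative current raises it again.
# Crucially, w (and hence R) is retained when no power is applied: the "memory".
for i in [1e-3] * 5 + [-1e-3] * 5:
    w, r = step(i, w)
    print(f"i = {i * 1e3:+.0f} mA  ->  R = {r:8,.0f} ohms")
```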

 


Concept Blueprint (via UC Santa Barbara & Nature)

 

Although memristors are a common topic of conversation in future computer-building, scientists have struggled to build neural hardware to house them. The new study published by UC Santa Barbara and Stony Brook University, however, may change that. The team built a 12 x 12 memristive crossbar array that functions as a single-layer perceptron, an early type of neural network often used for pattern recognition and basic information organization. The team programmed the network to decipher things like letters and patterns; together, the micro hardware functions as a collection of basic synapses.

 

The hardware is built using aluminum and titanium, and manufactured at low temperatures to allow for monolithic three-dimensional integration. This lets the memristor “remember” the amount of energy and the direction of the previous current for future use, even after the main device has been powered off. Such recognition is currently possible using other technology, but it is much more involved. Using memristors means easier functionality while consuming no standby power.

 

In the trial, the memristor model was able to sort 3 x 3-pixel black-and-white patterns into three types. The model had three outputs, ten inputs and 30 perceptron synapses. In the future, the team plans to shrink the device down to 30 nm across, in the hope of simulating 100 billion synapses per square centimeter.
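
To make the perceptron setup concrete, here is a minimal Python sketch with the same dimensions the study describes: ten inputs (nine pixels plus a bias), three outputs and 30 weights. The training glyphs and learning rate are hypothetical stand-ins, and this is the classic software algorithm, not the researchers’ memristor hardware.

```python
import numpy as np

# Toy single-layer perceptron mirroring the paper's dimensions:
# 9 pixels + 1 bias = 10 inputs, 3 output classes, 10 x 3 = 30 "synapses".
rng = np.random.default_rng(1)

patterns = np.array([
    [1,1,1, 0,0,0, 1,1,1],   # class 0: top and bottom bars (hypothetical glyph)
    [1,0,1, 1,0,1, 1,0,1],   # class 1: side bars
    [0,1,0, 0,1,0, 0,1,0],   # class 2: center column
], dtype=float)
targets = np.eye(3)                          # one-hot labels

X = np.hstack([patterns, np.ones((3, 1))])   # append bias input -> 10 inputs
W = rng.normal(0.0, 0.1, size=(10, 3))       # the 30 synaptic weights

# Classic perceptron learning rule: nudge weights toward the targets.
for _ in range(100):
    out = (X @ W > 0).astype(float)          # hard-threshold activation
    W += 0.1 * X.T @ (targets - out)

print((X @ W).argmax(axis=1))                # -> [0 1 2], all patterns sorted
```

In the hardware version, each weight is a memristor's conductance at a crossbar junction, and the update step is performed by voltage pulses rather than software arithmetic.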

 

While some argue computers will never have the real processing power of the human brain, others say memristors will still be useful as analog memory devices or components of logic for larger systems. Since they use no energy, but record energy used, memristors may also be useful for energy management.

 

C

See more news at:

http://twitter.com/Cabe_Atwell


NASA’s eel bot may one day delve into the depths of the moon Europa. NASA recently announced the 15 winners of NIAC funding, worth $100,000 per candidate. Among them is a project to develop a robotic eel to explore Europa, Jupiter’s moon. (via NASA)

 

 

Anyone seen that movie Europa Report? It may have inspired NASA...

 

NASA recently announced the winners of its annual NASA Innovative Advanced Concepts (NIAC) program. There are 15 winners in total, all with far-out ideas (pun intended) about making science fiction a reality. NASA is hoping that these highly innovative, and slightly crazy, ideas will lead to advances that extend its ability to delve further into space.

 

One crazy idea that just might work is NIAC 2015 winner Mason Peck’s research to design a robotic eel that can explore the depths of Europa, one of Jupiter’s many moons. The idea is highly innovative and calls for the invention of new technologies – including new power systems.

 

A mock-up of the robot design is seen above. It would be a soft-bodied robot that can swim and explore the aquatic depths of Europa. Peck describes the robot as more of a squid than an eel, as NASA calls it. The science behind it is pretty inspiring. The body of the eel/squid would have ‘tentacle’ structures that let it harvest power from changing electromagnetic fields. That energy would power its rover subsystems, one of which allows it to expand and change shape to propel itself in water and on land. It would do this by electrolysis of water, creating H2 and O2 gas that is stored to expand the body and combusted internally to act as a propulsion system. To learn more about the other 14 winners who scored $100,000 to develop technology like this, see NASA’s extensive report.
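
As for the propulsion chemistry: the water-splitting step is ordinary electrolysis, 2 H2O → 2 H2 + O2, so the robot would effectively generate its own combustible hydrogen-oxygen propellant mix in place.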

 

C

See more news at:

http://twitter.com/Cabe_Atwell


Chalmers University of Technology researchers have found that large area graphene helps prolong the spin of electrons over longer periods of time (via Chalmers)


Chances are you own a smartphone, tablet or PC/laptop that features some form of solid-state technology, typically RAM, a flash drive or an SSD. Those devices are faster than their mechanical counterparts, and new findings by researchers from Sweden’s Chalmers University of Technology are set to make that technology even faster and more energy efficient through the use of graphene.

 

Specifically, they found that large-area graphene is able to prolong the spin of electrons (spintronics) over a longer period of time than ferrous metals can. Spintronics deals with the intrinsic spin of electrons and its associated magnetic moment, or the torque an electron experiences when an external magnetic field is applied. As mentioned above, there are already spintronic devices on the market; however, they use ferrous metals for their base platform. It’s the impurities in those metals that hold spintronics back from becoming a mainstream component in today’s electronic circuitry, limiting the size of the components themselves.

 

This is where graphene comes into play: the material extends the reach of spintronics from nanometers to millimeters, making the spin of those electrons last longer and travel farther than ever before. So why is that good? Data (in the form of 1s and 0s) is encoded onto those electrons as they spin up and spin down, rather than relying on the usual method of switching the electrical state off and on with traditional circuits. The problem with the latter is that as process nodes become smaller, electrical ‘bleed’ across transistors in the off state increases, preventing us from building transistors that consume less power.

 

Using graphene as the substrate for spintronics allows electrons to maintain their spin alignment for up to 1.2 nanoseconds and carry that spin information up to 16 micrometers without degradation. Of course, progress doesn’t come without its problems; in this case it’s the graphene itself, or rather the manufacturing process. Producing large sheets of the one-atom-thick substance is still an issue for manufacturers, and what is produced usually has defects in the form of wrinkles and roughness, which can negatively affect the electrons’ spin lifetime and decay.

 

The researchers, however, have found that the CVD (chemical vapor deposition) method is promising, and the team hopes to capitalize on it to produce a logic component in the short term, with the long-term goal of graphene/spintronic-based components that surpass today’s solid-state devices in both speed and energy efficiency.

 

See more news at:

http://twitter.com/Cabe_Atwell


Microchip CEO Steve Sanghi (via Microchip)


Microchip Technology, Inc. is celebrating this week, as it was just named the number one provider of 8-bit microcontrollers (MCUs) globally. The title was awarded by Gartner’s annual ranking publication, in its 2014 edition.

 

Microchip Technology, Inc., is an innovation giant that specializes in mixed-signal, Flash-IP and analog solutions. It has long been a leader in the microcontroller industry and although the powerhouse is celebrating its reclaim of the top spot for 8-bit MCUs, it is a leading provider of 16-bit and 32-bit MCU production as well.

 

Microchip is committed to growing its MCU technologies in all markets, including its 8-bit, 16-bit and 32-bit product lines, and its dedication and commitment to excellence is paying off. The company was ranked the fastest-growing MCU supplier among the top 10 providers in 2014, with a growth rate charted at double that of its competitors. With this, the company was also named one of the top 10 providers of 32-bit MCUs for the first time ever. While its stats across the MCU industry are impressive, what’s most striking is that Microchip closed a 41% revenue deficit to reclaim the top spot from Renesas.

 

Renesas is the company that resulted from the merger of NEC, Hitachi and Mitsubishi’s semiconductor operations. These were the leading semiconductor companies of Japan, and when they merged, Microchip was knocked out of the top spot for 8-bit MCUs in 2010. At the time, Renesas’ business was 41% larger than Microchip’s, but Microchip worked tirelessly each year and finally won with a 10.5% advantage over the Japanese supplier in 2014.

 

MCUs are used in a number of different products, including watches, mobile phones and many digital household electronics. The need for MCUs is increasing as the consumer market and global technologies shift toward digitization. Internet of Things devices, “smart” household products and other digital devices will all rely on MCUs for their processing power as demand for technologically advanced goods continues to rise – good news for Microchip.

 

Microchip offers a wide range of MCU products in its portfolio, including MCUs for analog peripherals, core independent peripherals, low-power products and more. If you’re interested in Microchip products, you can find a complete list of their solutions on their website.

 

C

See more news at:

http://twitter.com/Cabe_Atwell


Silicene Structure concept art (via UT at Austin)

 

While some researchers are hard at work trying to achieve quantum computing on a chip, scientists from the University of Texas at Austin’s Cockrell School are busy making history. The research team recently created an atom-thick transistor made from a one-atom-thick sheet of silicon, called silicene, which may revolutionize computer chips.

 

There had been talk about the potential of silicene, but a working device had yet to be constructed, until recently. Assistant Professor Deji Akinwande of the Department of Electrical and Computer Engineering and lead researcher Li Tao successfully built the first-ever silicene chip last month. The team looked to current graphene-based chip development for guidance, but discovered a major issue at the outset – silicene is sensitive to air.

 

To circumvent this issue, Akinwande and Tao worked with Alessandro Molle of the Institute for Microelectronics and Microsystems in Agrate Brianza, Italy, to construct the delicate material in an airtight space. The team was able to form a thin silicene sheet by condensing silicon vapor onto a crystalline silver block in a vacuum chamber. Once the sheet was formed, silicene atoms were placed on a thin silver sheet and covered with a layer of alumina that was one nanometer thick. Once formed, the team was able to peel the silicene sheet off of the base and move it to an oxidized-silicon substrate. The result was a functional silicene transistor that joined two metal groups of electrodes.

 

The transistor was only functional for a few minutes before crumbling due to its instability in air. While the transistor’s capabilities were rudimentary, the UT team successfully fabricated silicene devices for the first time ever through low-temperature manufacturing. As silicon is a common base for computer chips, the researchers are confident the technology could be adopted relatively easily, making for faster, low-energy digital chips.

 

The team of scientists plans to continue its research to develop a more stable silicene chip. A super-thin silicene transistor could greatly enhance the speed of computing, but it isn’t without competition. Graphene-based transistors have been under development for quite some time and may also answer the question of how to enhance computing capabilities. Both technologies, however, may fail to surpass the potential power of the newest quantum chip from the Università degli Studi di Pavia in Italy. That chip features entanglement capabilities, potentially allowing an entire network to function as one unit. The new technology may also make cyber threats a thing of the past.

 

At present, emerging chip technologies are all still in need of further development before they are ready to hit the market. No one knows which technology will prevail, but it certainly is exciting.

 

The Cockrell School’s Southwest Academy of Nanoelectronics, the U.S. Army Research Laboratory’s Army Research Office and the European Commission’s Future and Emerging Technologies Programme funded the University of Texas at Austin-based project.

 

C

See more news at:

http://twitter.com/Cabe_Atwell


Photon Entanglement Ring Resonator visualization (via Davide Grassani, Stefano Azzini, Marco Liscidini, Matteo Galli, Michael J. Strain, Marc Sorel, J. E. Sipe, and Daniele Bajoni)


As IBM readies its brain-like computer-on-a-chip for mass production, the Università degli Studi di Pavia in Italy is making history, having just built the very first chip capable of entangling individual light particles. The new technology may inspire a host of novel computing innovations and quite possibly put an end to cyber threats as we know them.

 

Entanglement is an essential quantum effect that links two particles, regardless of distance, so that anything done to one particle is instantly reflected in the measurements of the other, even if it is at the other end of the universe. The entanglement of photons isn’t a new technology, but researchers at the Università degli Studi di Pavia, including paper co-author Daniele Bajoni, made history in successfully scaling the technology down to fit on a chip.
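
For a feel of what entanglement means for measurements, here is a tiny NumPy sketch of a textbook Bell state. This simulates standard quantum mechanics, not the Pavia chip itself: each particle’s outcome is individually random, yet the two always agree.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2), basis {|00>, |01>, |10>, |11>}
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

probs = np.abs(bell) ** 2                 # Born rule: P(00) = P(11) = 0.5
rng = np.random.default_rng(0)

for outcome in rng.choice(4, size=5, p=probs):
    a, b = divmod(outcome, 2)             # bit seen by each party
    print(f"particle A: {a}   particle B: {b}")   # always identical
```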

 

Researchers have been trying to scale down entanglement technology for years. Typically, the effect is harnessed through specialized crystals, but even the smallest setup was still a few millimeters thick. Bajoni and his team took a different approach and built what they call micro-ring resonators onto an ordinary silicon chip. The resonators are coils embedded in a silicon wafer that capture and re-release photons. The design achieves entanglement in a device just 20 microns across, a fraction of the thickness of a strand of human hair.

 

The technology has huge implications for computing, as entanglement can exponentially increase computing power and speed. Computing communication can become instantaneous, as can other communication technologies. Tweeting at the speed of light, anyone? While these potentialities for advancements in computing are impressive, the biggest impact it may make is in inhibiting cyber threats.

In entanglement, particles act as one cohesive unit. Hackers operate by identifying weaknesses in computer and information systems and exploiting them. If computing and information systems, however, operate as one cohesive unit, there would be no way through which a hacker could breach the system, thus eliminating cyber threats. Sorry Dshell analysts.

 

The new quantum chip could be vastly more powerful than even the most cutting-edge supercomputers around today. It has the potential to revolutionize communication, computing and cybersecurity by enabling the adoption of quantum technologies, such as quantum cryptography and quantum information processing. When we can expect to see this technology reign supreme, however, is another subject entirely.

 

Bajoni believes the technology is the connector through which innovation technologies can begin harnessing quantum power on a small scale, but others disagree. Some believe ring resonators must be produced on a nanoscale first to compete with up-and-coming nano-processors. Only time will tell, but our bet is cybersecurity stakeholders, at the least, will begin looking into the chip’s development. Until quantum mobile communication is available, however, you’ll just have to upload your social media photos like everybody else, 3-4GBs at a time.

 

C

See more news at:

http://twitter.com/Cabe_Atwell


PowerBar installed (via Andice Labs)

 

If you've ever thought of designing a BeagleBone-based vigilante robot that fights crime in the rural Mojave Desert using only battery power, now you can with Andice Labs' PowerBar. The PowerBar was designed exclusively for the BeagleBone open hardware computer and enables it to function fully on DC, or battery, power. Portability is inspiring.

 


PowerBar attached to BeagleBone (via Andice Labs)

 

The PowerBar is a "micro cape" power supply that gives the low-power BeagleBone single-board computer (SBC) enough energy to run from anywhere, even outer space (cue the Twilight Zone theme song). The pack supplies 5V to the computer and offers 15V over-voltage protection and reverse-voltage protection to guard against surges. It's a simple power pack that works with both the BeagleBone White and Black.

 


BeagleBone White (via BeagleBoard)

 

BeagleBoard's BeagleBone is a Linux-based single-board computer that runs Android and Ubuntu. The White version comes equipped with an AM335x 720MHz ARM processor, 256MB of DDR2 RAM, a 3D graphics chip, an ARM Cortex-M3 and two PRU 32-bit RISC CPUs. The BeagleBone Black was made with developers in mind and features double the power, with 512MB of DDR3 RAM, 4GB of 8-bit onboard eMMC flash memory and a NEON floating-point accelerator. Both computers offer USB, Ethernet and HDMI connectivity, and both run the Cloud9 IDE and Debian. What makes the BeagleBone unique is its open hardware design.

 


BeagleBone Black (via BeagleBone)

 

Open hardware designs take open-source to a whole new level. Not only are software platforms completely open to developers, but designs are too. That means you can buy a BeagleBone Black, or you can go directly to the BeagleBoard website and find the instructions for building your very own. Open hardware is developed for the love of innovation and raising up the next generation of tinkerers. My only critique of this cape is that I could do the same with an external cell-phone battery backup. Countless battery bricks out there too.

 

The development of the PowerBar now allows us to take our innovations on-the-go. Now remote locations all over the world can still gain access to the unscripted power of BeagleBone. If you take the lead from one tinkerer, you can power your very own brewery using the mini computer. Even the pirates in the Mojave Desert would raise a glass to that.


The cPulse is seen in action being used as a home rave device (via Codlight)


The French company Codlight Inc. is currently seeking funding on Kickstarter to produce one of the first fully customizable LED smartphone cases. While the prospect of becoming a walking, breathing billboard advertisement doesn't particularly appeal to me, I must give Codlight credit for the multitude of features and uses it offers.

 

The company certainly left no stone unturned when programming the cPulse smartphone case for a variety of uses. The cPulse LED case can act as everything from a notification banner, to a homemade rave device, to a form of light therapy. It can even mimic a good old-fashioned analog clock radio.

 

The cPulse uses a panel of 128 high-efficiency LEDs powered by the smartphone's battery and controlled by a custom app that allows the user to specify different commands, modes and notifications, and to create customizable light shows set to music.

These light displays sap battery power at a rate of about 7% per hour (roughly 14 hours from a full charge, at best), so you may want to have quarters on hand if you need to call someone on short notice. Remember payphones?

 

The LED light panel and the smartphone case are 3D printed by Sketchfab and Sculpteo. Kickstarter backers who pledge at least $79 to the Codlight initiative will receive a kit that allows them to 3D print their very own cPulse case. Donors who are a bit more generous, pledging at least $89, will receive a fully functioning cPulse case delivered to their home.

 

At the moment, the case is made specifically for smartphones running Android 4.4; however, if the project gets off the ground, its easy customization could allow anyone to own a cPulse.

 

I must say, I am still pretty impressed by the functionality of this device, even though it is entirely unnecessary and a product of a culture of consumption and excess.

 

For now, Codlight Inc. is asking for no paltry sum, with a pledge goal of $350,000. They are currently nowhere near the goal, but still have about a month left to raise over a quarter of a million dollars.

 

If you are obsessed with bright, shiny objects and want to blind and dazzle those around you, you can get your very own cPulse from Kickstarter.



C

See more news at:

http://twitter.com/Cabe_Atwell
