Researchers at CU-Boulder, MIT and UC Berkeley have successfully built a photonic microchip that uses light to transmit data. The first of its kind, it achieves a bandwidth density of 300 gigabits per second per square millimeter on a chip measuring just 3 x 6 mm, and it may revolutionize data transmission forever. (via University of Colorado Boulder)

While Intel’s new computer processing chips have gained a reputation for packing unprecedented power and speed, researchers at the University of Colorado Boulder are reinventing how data is transmitted. In collaboration with researchers from MIT and UC Berkeley, the team has successfully transmitted data using light instead of electricity.


Relying on light for data transmission has clear advantages. Light can carry information over a greater distance for the same energy an electrical signal requires, which means light-based microchips will consume even less energy than today's parts. Photonic technology has another significant advantage: multiple streams of data can be transmitted at once on different parts of the electromagnetic spectrum, i.e., different colors of light, over the same fibers currently used to transmit data electronically. Basing microchip technology on photons while reusing existing infrastructure could thus revolutionize data transmission, moving data faster and more energy-efficiently than any technology currently available.
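To make the multiple-colors point concrete, here is a toy wavelength-division calculation; the channel count and per-channel rate are hypothetical round numbers, not figures from the study:

```python
# Hypothetical wavelength-division multiplexing arithmetic:
# several data streams share one fiber, each riding on its own color of light.
channels = 8               # distinct wavelengths ("colors") on one fiber
rate_per_channel = 25      # Gbps carried by each wavelength

aggregate_gbps = channels * rate_per_channel
print(aggregate_gbps)      # 8 channels x 25 Gbps = 200 Gbps on a single fiber
```

The same fiber that carried one electrical-signal-derived stream can, in principle, carry as many streams as there are usable wavelengths.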


The technology is based on infrared light, with a wavelength one-hundredth the thickness of a human hair and shorter than one micron, Miloš Popović, an assistant professor in CU-Boulder’s Department of Electrical, Computer, and Energy Engineering and a corresponding author of the study, told reporters at CU-Boulder. On a single microchip, the researchers developed a functional photonic chip with a bandwidth density of 300 gigabits per second per square millimeter. This is up to 50 times greater bandwidth than anything currently available on the market.


The researchers successfully built a functional photonic microchip that mimics an electricity-only design. The chip measures 3 x 6 mm and reuses the electronic circuitry of existing models. Its light-based transmission technology, however, allows it to carry 850 optical I/O components, and the design can be mass-produced with existing manufacturing processes fairly smoothly. It is the only chip of its kind: the only processor in the world to transmit data using light.


The researchers are confident in the technology’s contribution to modern computing. Mark Wade, a CU-Boulder PhD candidate and co-lead author of the study, said the design solves the communication bottleneck of electricity-only systems while remaining streamlined enough to be mass-produced. The research team plans to commercialize the technology, and a start-up was created to do just that: Ayar Labs (formerly OptiBit) will continue to operate independently, specializing in high-volume data transmission using energy-efficient technology. The start-up also won the MIT Clean Energy Prize just last year.


We live in the age of information. With current computing speeds nearing the physical limits of electricity-based technology, our societal advancement is increasingly constrained by our computing speed. According to John E. Howland of Trinity University, meteorologists are limited by slower computing speeds. Faster processing will have a direct impact on the natural sciences and our ability to understand the world around us. Beyond faster gaming and data retrieval than we ever thought possible, artificial intelligence and science will advance beyond our wildest imaginations when faster processing speeds are possible. And now they are.


According to study researchers, manufacturers have begun streamlining processes to mass-produce photonic technology. It won’t be long before we see the direct benefits of what a limitless society can accomplish together. Rajeev Ram, a professor of electrical engineering at MIT, led the research team. The details of the study were published in the journal Nature.



Ordinary roses or a living, renewable biofuel source? Possibly both? A group of Swedish scientists has made an epic breakthrough by successfully incorporating functioning circuitry into a living organism (in this case, a common rose). Their recently released findings describe causing ions within the rose’s leaves to light up. The next step is using electronic-organic plants to act as biofuel power plants. (image via Panoramic Images)


It seems that technology has triumphed over nature once more, taking something once sublime and beautiful and turning it into a cold, calculated machine. Never before had scientists been able to combine organic plant matter and electronic circuits without killing the plant. Now, a Swedish group of researchers from Linköping University has released its chilling findings in Science Advances. The project started in 2012, after many unsuccessful attempts. It seems that this time they are on the right track with a breakthrough that may change our relationship with plants, and the whole natural world, forever.


It starts with a rose: a beautiful, temperamental plant whose only function is to look beautiful. But why simply enjoy a thing of beauty when you can turn it into an instrument? Perhaps the rose could serve as a radio transmitter or a renewable energy source instead of just sitting there; or at least that is what many scientists may think. The problem with combining plants and electronics was that scientists were trying to splice them together somehow, merging the inorganic with the organic by inorganic means.



A schematic of how their new technology works from their journal article (via Berggren et al., 2015, Science Advances, Vol. 1, no. 10)


The genius of Magnus Berggren and his team from Sweden is that they discovered how to use the natural functions of the plant and its components to create electronic circuitry. They used a synthetic polymer that the plant takes up the same way it takes up water and nutrients. As the polymer makes its way up the vascular system of the rose stem, it becomes a part of the xylem, the leaves, the veins, and the signaling of the rose. These components of the plant then serve as the main components of the circuitry, allowing electronics and organic bodies to merge and act as one.


Their current synthetic polymer mixture creates a wire up to 10 cm long inside the stem (xylem) without impeding the rose’s ability to absorb water and nutrients. Via this method, the scientists were able to light up ions within the leaves of the plant. Berggren was so surprised that the experiments actually worked that he can’t wait to test out new projects, among them a biofuel concept. “Right now we are trying to put electrodes into the leaves with enzymes that we connect to the electrodes,” he told Motherboard. “The sugar that is produced in the leaves is converted by the enzyme; they deliver a charge to the electrode and then hopefully we can collect that charge in a biofuel cell.”


This latest proposition could entirely change our relationship with plants, as forests could become renewable power plants for nearby cities. Berggren hopes that this new biofuel possibility will allow us to gain resources from the natural world without destroying it. How viable the rose’s health is in the long term, however, no one knows. It is still very early days, but there is no doubt that science is about to get weirder as electronics and plants begin to meld into cyborg technology for years to come.



This chip is a huge step forward in fiber optic communications. University of Colorado researchers combined electrons and photons within a single chip for this landmark development. (all images via University of Colorado & Glenn Asakawa)


Here is a claim and a wish I've heard for decades.

Advances in technology never cease to amaze no matter how big or small, but the University of Colorado takes the cake for best innovation of 2015. The university's researchers have created the first full-fledged processor that transmits data using light instead of electricity. This was done by successfully combining electrons and photons within a single microprocessor. So what does this all mean? It's a big development that could lead to ultrafast, low power data crunching. It also marks a major step for fiber optic communication.


To achieve this, researchers put two processor cores, with more than 70 million transistors and 850 photonic components, on a single chip. They were then able to fabricate the processor in a foundry that produces high-performance computer chips on a mass scale, which means the design can be easily and quickly scaled up for commercial production. Though the design isn't completely photonic, the processor is still pretty impressive, with an output of 300 Gbps per square millimeter: 10 to 50 times the normal speed.


(Left) "The light-enabled microprocessor, a 3 x 6 millimeter chip, installed on a circuit board." (Right) "Electrical signals are encoded on light waves in this optical transmitter consisting of a spoked ring modulator, monitoring photodiode (left) and light access port (bottom)"


Fiber optic communication is a big goal for many researchers and organizations due to its many advantages. It supports greater bandwidth, carries data at higher speeds over longer distances, and uses less energy in general, which is good news for a society that aims to consume less power. There have been some advances in fiber optic technology, but up until now it has proven difficult to merge photonics and computer chips. These University of Colorado researchers have cleared that hurdle.


But does the chip actually work? Researchers ran several tests and showed that the chip was able to run various computer programs that required it to send and receive instructions and data from memory. This is how they confirmed the chip's bandwidth density of 300 Gbps per square millimeter.


“The advantage with optical is that with the same amount of power, you can go a few centimeters, a few meters or a few kilometers," said study co-lead author Chen Sun. "For high-speed electrical links, 1 meter is about the limit before you need repeaters to regenerate the electrical signal, and that quickly increases the amount of power needed. For an electrical signal to travel 1 kilometer, you'd need thousands of picojoules for each bit.”


If there are further advances in the technology, it won't just mean posting Facebook statuses at lightning-fast speed; it also means data centers will be greener. According to the Natural Resources Defense Council, data centers used an estimated 91 billion kilowatt-hours of electricity in 2013, around 2 percent of the electricity consumed in the United States. Considering those numbers, this is a great way to promote a greener society.



The size of LED circuit boards

LED circuit boards must be made in ever smaller sizes. Most electronic products have become smart and small, and the LED lighting industry has the same requirement; as a result, PCB manufacturers have to produce smaller boards.

The temperature of LED circuit boards

Heat dissipation becomes a problem as LED circuit boards shrink. Small LED lighting requires not only a smaller size but also better heat-dissipation performance. Because of the trend toward miniaturization, thermal output per unit of surface area is increasing, which means ever more heat is emitted onto an ever smaller surface area for dissipation.



A team of researchers from Columbia Engineering, Seoul National University and Korea Research Institute of Standards and Science recently developed the world’s smallest lightbulb – at just one atom thick – using graphene. The structure may also revolutionize computing and chemical experimentation.  (via Columbia)

Graphene never ceases to amaze. Take a look at everything written about the material here at element14. A team of researchers from Columbia Engineering, Seoul National University and the Korea Research Institute of Standards and Science recently created the world’s thinnest light bulb, at just one atom thick. The micro bulb on a chip may revolutionize light displays, chemistry and computing. Researchers are currently further developing the technology for practical use in the near future.


Postdoctoral research scientist Young Duck Kim of James Hone’s team at Columbia Engineering headed the project. He and his team of researchers took the same principles of the incandescent light bulb and applied them to graphene to see if Thomas Edison’s world-changing invention could be updated.


The team placed the one-atom-thick pieces of graphene on a small strip with metal electrodes. They suspended the structure above the substrate and heated it by passing a current through the graphene filaments. To their surprise, as the graphene heated, it became luminous, visible even to the naked eye. The structure is essentially the thinnest visible-light bulb ever made, and its potential for impacting numerous technologies is huge.


If the graphene light chip comes to market, it could play a critical role in enhancing the capabilities of photonic circuit technology. Photonic circuits are much like electrical circuits, but use light rather than electrical current to carry signals. For an on-chip light source to function properly, the light bulb filament must be able to handle heat up to thousands of degrees Celsius. A chip that could both handle that level of heat and remain small enough to fit on a circuit board never existed, until now.


The micro light bulb on a chip may have other uses too. Since it can handle more than 2,500 degrees Celsius, it may be used to heat tiny hot plates to observe high-temperature chemical reactions. The tiny bulbs are also transparent, which could revolutionize commercial light displays. And if the chips can be switched off and on quickly enough, they may have a future as computer bits as well.


Young Duck Kim and his team are continuing to expand upon the technology. The project was a joint effort between researchers from Columbia Engineering, the Korea Research Institute of Standards and Science, Seoul National University, Konkuk University, Sogang University, Sejong University, Stanford University and the University of Illinois at Urbana-Champaign. Read more about this achievement in Nature.




For the second installment in the 32 vs 8 bit MCU series, I will get a little more technical and talk about processing power, interrupt latency, pointer efficiency and pointer ease of use.

The two architectures that I am very familiar with are the 8051 on the 8 bit side and ARM on the 32 bit side.  Because of this, the comparisons I make will be based on those architectures.  There are many others out there, like PIC with its modified Harvard architecture, as well as other RISC cores, but my breadth of knowledge on those is lacking, so I will focus on the 8051 and ARM architectures.



When people think of 32 bit vs 8 bit, most automatically assume that the 32 bit is going to out-process or be much faster than the 8 bit.  While generally that may be true, 8 bit 8051s excel at what they were made to do: handle 8 bit data.  The code and memory size will be smaller for a program that shifts and alters 8 bit data on an 8051 than on an ARM.  A smaller program uses less memory and allows the designer to use an even smaller chip, lending many advantages to the 8 bit.  However, when moving 16 bit data, the efficiency of the 32 bit begins to differentiate itself.  When it comes to 32 bit data and math, the 32 bit outshines the 8 bit because it can do a 32 bit addition or subtraction in 1 instruction while it takes the 8 bit at least 4 instructions.  This makes the 32 bit better suited for a large data-streaming role.  However, the 32 bit core isn’t always better at this data pass-through, especially in the simple cases.  For example, in a simple SPI-UART bridge, a 32 bit MCU sits idle for long periods of time, and on top of that the entire application is small at <8 KB of flash.  This makes the 8 bit the right choice in this simple example, and many USB-SPI/I2C/etc. pass-through devices are repurposed 8 bit MCUs, like the CP210x family.

To illustrate the point of 8, 16 and 32 bit data efficiency, I compiled the function below on a 32 bit ARM core and an 8 bit 8051 MCU, varying the argument and return types among uint8_t, uint16_t and uint32_t.


uint32_t funcB(uint32_t testA, uint32_t testB){
  return (testA * testB)/(testA - testB);
}


| data type | 32-bit (-O3) | 8-bit |
| uint8_t   |      20      |  13   | bytes
| uint16_t  |      20      |  20   | bytes
| uint32_t  |      16      |  52   | bytes


As the data size increases, the 8051 begins to require more and more code to deal with it, eventually surpassing the size of the ARM function. The 16 bit case is dead even in this example, but in the 32 bit case the ARM used fewer instructions and therefore has the edge. Also, it’s important to know that this comparison is only valid when compiling the ARM code with optimization; un-optimized code is several times larger.  Results can also depend on the compiler and the level of coding being done.
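To make the instruction-count gap concrete, here is an illustrative C sketch (not actual 8051 code generation) of the byte-at-a-time work an 8-bit core must perform for a single 32-bit addition, which a 32-bit ARM core finishes in one ADD instruction:

```c
#include <stdint.h>

/* Sketch: roughly what an 8-bit core does to add two 32-bit values.
 * Each pass handles one byte plus the carry from the previous byte. */
uint32_t add32_bytewise(uint32_t a, uint32_t b)
{
    uint32_t result = 0;
    unsigned carry = 0;
    for (int i = 0; i < 4; i++) {
        unsigned s = ((a >> (8 * i)) & 0xFFu)   /* byte i of a */
                   + ((b >> (8 * i)) & 0xFFu)   /* byte i of b */
                   + carry;
        result |= (uint32_t)(s & 0xFFu) << (8 * i);
        carry = s >> 8;                         /* carry ripples to next byte */
    }
    return result;
}
```

Four add-with-carry steps instead of one instruction is exactly why 32-bit math inflates 8051 code size in the table above.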



Interrupt Speed

The latency of interrupts and function calls is vastly different between the two.  The 8 bit is going to be faster at servicing interrupts as well as communicating with 8 bit peripherals. The 8051 core has an advantage in ISR service times and latency, but this only comes into play on entry and exit, so as the ISR gets larger the benefit dies out.  The ARM interrupt controller will automatically save a handful of registers, depending on the type of core.  If the interrupt service routine (ISR) doesn’t need all of those registers, then those cycles are wasted.  So for smaller ISRs, the 8051 will be faster to execute.  Again the theme is: the bigger the system, the less edge the 8 bit has.  You must also consider what the ISR is doing.  If it is performing 32 bit math or something to that degree, it will take the 8 bit longer, and the faster ISR entry/exit will be negated.



When using pointers on an 8051 device, it is more efficient to specify which data segment the pointer points to, for example XDATA.  Generic pointers exist but are slower and use more memory.  If you have an application that uses pointers heavily, especially in data transfer, the 8051 may start to lag behind the ARM core in terms of efficiency.  This is because the ARM has a unified memory map: a pointer on an ARM core can point anywhere and transfer data without the need to copy it to another segment.  Memory-specific pointers on the 8051 can also be tough to understand and taxing on the developer, lending the advantage to ARM if your application heavily utilizes pointers.  It is all a game of tradeoffs, and knowing what your application needs can help you better weigh the options.
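As a sketch of what this looks like in source, here is the distinction under a typical 8051 toolchain (Keil C51 syntax shown; the `xdata` keyword and the pointer-size figures are compiler-specific extensions, not standard C):

```c
/* Keil C51 sketch -- compiler-specific keywords; will not build as standard C */
unsigned char xdata buf[64];      /* buffer explicitly placed in external data */

unsigned char xdata *xp = buf;    /* memory-specific pointer: 2 bytes, and the
                                     compiler emits direct XDATA accesses      */
unsigned char *gp = buf;          /* generic pointer: 3 bytes, because every
                                     access must first decode which memory
                                     space (DATA/IDATA/XDATA/CODE) it targets  */
```

On an ARM core the distinction disappears entirely: one flat address space, one pointer type.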

When beginning a project that you know will contain an MCU, there are so many options that it may be overwhelming.  One of the biggest questions is what size MCU you should use.  I will release a series of articles to shed some light on this issue faced by so many designers and developers.

Uncovering the differences and architectures

The two ends of the scale for bit size of microcontrollers are 8 bit and 32 bit. Bit size in this case means that the MCU processes an 8 or 32 bit data word at a time, and it also dictates the register, address and bus sizes.  Both have advantages and disadvantages, and throughout the series I will talk about both. The most common architectures within these categories are the 8051 architecture for the 8 bit and the ARM architecture for the 32 bit.  There seems to be a popular belief that 8 bit MCUs are on their way out. With the release of new products such as the EFM8 MCUs, I have heard many people question using an 8 bit in their design.  Well, not only will this series distinguish the best cases for each MCU, but it will also shed some light on why the 8 bit with its 8051 architecture still thrives in many applications.


First, let’s discuss some of the more general and obvious differences, namely size, cost, and ease of use.  There are fine lines that can be drawn between which is better for a certain application.  If a system demands >64 KB of RAM, the choice to use a 32 bit MCU is an easy one to make.  If the system is ultra cost-sensitive, then using an 8 bit MCU is the correct decision. However, for applications where no such clear line can be drawn, deeper factors must be considered.


Ease of use and cost

The picture is very exaggerated, but it points out the general truth that the ARM core, which most 32 bit devices use, is easier to use than the 8051 core, which the majority of 8 bit MCUs use. ARM-based 32 bit MCUs utilize familiar compilers, have a long list of available libraries and, perhaps most important of all, have unified memory, making coding on the ARM an easier task than on an 8051. However, you pay for this: ARM-based devices are normally priced higher than 8051s. The most aggressively priced 8 bit 8051s out there hit ridiculous lows and can be bought for cents. But with ease of use comes a quicker time to market.  For some products, time to market is a deciding factor in their success, so for those end products paying a bit more for the 32 bit can be well worth it.



An advantage of the 8 bit devices is that they are generally smaller. This becomes an enormous edge over the 32 bit if the final product is space constrained. If you were designing a wearable like a watch, an 8 bit 8051 could allow the device to be smaller while keeping the same functionality as a wearable with a 32 bit. To give the edge even more to the 8 bit, some manufacturers like Silicon Labs offer chip-scale packages (CSP) for their 8 bit devices, which decreases the size significantly. Their CSP 8051 measures 1.66 x 1.78 mm, one of the smallest on the market. Their smallest 32 bit, the Tiny Gecko, at 4 x 4 mm, occupies more than five times the area!
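As a quick sanity check on those package figures, using the dimensions quoted above:

```python
# Package footprint comparison from the quoted dimensions.
csp_area = 1.66 * 1.78        # Silicon Labs CSP 8051 footprint, mm^2
qfn_area = 4.0 * 4.0          # Tiny Gecko footprint, mm^2
ratio = qfn_area / csp_area   # how many times larger the 32-bit package is

# csp_area is about 2.95 mm^2, so the 32-bit part takes up roughly
# 5.4x the board area of the CSP 8051.
print(round(ratio, 1))
```

On a cramped wearable board, a factor of five in footprint can decide the part selection by itself.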


A theme you will see multiple times in this series is that knowing your application and final design is perhaps the most important thing when choosing the MCU for you!


Made out of Gorilla Glass, the chip obliterates itself, shattering into thousands of pieces under extreme stress. (via Xerox PARC)

I was just thinking, there has to be a way to store data that will self-destruct upon access. Seems we are close to it.


The latest development from Xerox PARC engineers is a device straight out of a James Bond film. The team has created a chip that can explode into bits on command as part of the Defense Advanced Research Projects Agency's (DARPA) Vanishing Programmable Resources project. How does the chip get this shattering effect? It was made using Gorilla Glass, the material used for smartphone screens, instead of plastic and metal. The glass was then modified to become tempered glass under extreme stress, which causes it to disintegrate easily when triggered.


In a demonstration, the chip was brought to its breaking point with heat: a small resistor heated up and the glass shattered into a ton of tiny pieces. Even after it broke, the small fragments continued to break into even smaller pieces for tens of seconds afterward.


Is the chip just supposed to look cool? Even though the effect is awesome, the chip can actually serve as a great security measure. It could be used to store sensitive data like encryption keys and can shatter into so many pieces that reconstruction becomes impossible. It's a pretty intense way to deal with electronic security, but it's a viable option if hardware happens to fall into the wrong hands.


The self-destructing chip was demonstrated in all its glory at DARPA's "Wait, What?" event in St. Louis last week.


“The applications we are interested in are data security and things like that,” said Gregory Whiting, a senior scientist at PARC in Palo Alto, California. “We really wanted to come up with a system that was very rapid and compatible with commercial electronics.”


With so much information being stored electronically, more and more companies are employing similar techniques for security. Similar technology underpins Snapchat, which lets users send images to friends that can be viewed for only a short time before the message can no longer be accessed. And Gmail recently introduced the “Undo Send” feature that allows people to cancel sent emails, though only within a window of up to 30 seconds. Now, if only we could make our phones explode when they get stolen.


PARC is a Xerox company that provides tech services, expertise, and intellectual property to various companies, including Fortune 500 businesses, startups, and government agencies.




Printed Circuit Boards (PCBs) are without a doubt central to all electronics. As technology advances, however, PCBs must be made faster and smaller than ever before. Before you get busy, make sure you nip sloppy PCB production in the bud, before it costs you big bucks. Read on to discover the 12 biggest PCB development mistakes and how to avoid them.




1. Improper Planning


Have you ever heard “proper planning prevents poor performance?” It’s true. There’s a reason we consider poor planning the number one PCB development mistake. There is no substitute for proper PCB planning. It can save you time and energy. If you build it wrong, you will have to spend additional resources to go back and fix it. How do you plan properly? Consider numbers 2-6 on our list before you physically begin building. You’ll be thankful you did.


2. Incorrect Design


There is an infinite number of layout possibilities with PCBs. Keep function in mind when designing the form. For example, if there’s a good chance you’ll need to add on in the future, you may want to consider something like a ball grid array (BGA), which can help conserve space on an existing board to enable you to build upon that design in the future. If your design must incorporate copper, you’d be best going with a polygon-style design. Whatever your function, choose the right form.


3. Improper Board Size


It’s much easier to begin with the right size first. Although the portion of the project you’re working on now may only require a small board, if you’re going to have to add on in the future, you’re better off getting the larger board now. Stringing multiple boards together may be difficult due to potential circuitry and connectivity issues. Plan adequately not only for current function but future function so you save yourself time and money.


4. Failing to Group Like Items


Grounding your PCB is a critical part of production. Grouping like items will not only help you keep your trace lengths short (another important element of design), but it will also help you avoid circuitry issues, ease testing and make error correction much simpler.


5. Software, Software, Software


We know you can design a PCB from scratch, but why would you want to when you can use software? Software makes your life easier. Electronic Design Automation (EDA) tools give you recommendations for the best layout to choose, and other programs may suggest the best materials to use, based on prospective board function. Software won’t do all of the thinking for you, but it sure does help.


6. Using the Silk Screen Improperly


A huge ally when creating a design for a PCB is the silk screen. When used properly, it’s a great tool that allows you to map out all aspects of your PCB before construction, including circuitry planning. However, be careful and maintain best practices. When used improperly, the silk screen can make it difficult to know where connectors and components are supposed to go. Use full words as descriptors when possible, or keep a key of your symbols nearby.





Once you’re done planning, you can begin building your board. You’re still not out of the woods, however. Building is another area where people make costly mistakes. When done well, however, you can build PCBs faster than ever.


7. Poorly-Constructed Via-in-Pad


This issue is one of the biggest detriments to proper PCB development. Many boards now require via-in-pads, but when soldered incorrectly, vias can lead to breakouts in your ground plane. This creates a larger circuitry issue, as power travels between boards instead of connectors and components. Test your ground plane. If you suspect you have a shaky via-in-pad on your board, cap or mask it and test it again. It may slow down production now, but it’ll save you time in the long run.


8. Using the Wrong Materials


Although this mistake may seem like a novice move, it happens. PCBs can be constructed using various materials. Know the purpose for which you are building your board, and which materials are best for that design, before you start building. If you’re building an FR-2 grade, single-sided PCB, you can use phenolic or paper materials. Anything more complex, however, should use epoxy or glass cloth. Also, different materials have different temperaments. Keep this in mind. If you’re building a simple design that needs to hold up in an area with a lot of humidity, it may be worth it to go with epoxy. 


9. Too Lazy to Test It


If there’s one habit you change, it should be how frequently you test your prototype. Assuming your board is grounded and that circuits will function in perfect accordance with their potential ground paths and voltages is asking for trouble. We know it takes time to test your board, but it’ll take even more time to find and correct an error later on. Test it now. Every design has an issue; keep that thought in mind.





So you properly planned and built your board. Things couldn’t still go wrong, could they? Wrong! They can and they do. These are the three mistakes to avoid.


10. Failing to Double Crunch the Numbers


We’ve all felt the pressure of an upcoming product deadline. You’re sweating, over-caffeinated and running on lack of sleep. We know you’re an engineer, but don’t let your ego cost your company huge amounts of money due to an error. Always double-check your numbers before sending your model to production. This includes testing your board, ensuring the size is in line with your client’s specifications and double-checking your design is ideal for the intended function. It’s always better to have one model that needs to be reworked than a thousand. Rewind to #1 in this list… proper planning. Never jump the gun when sending out the design.


11. Temperature Control


This step is often neglected, but it’s important. Even if you do everything right leading up to the production process, you will ruin your boards if you neglect temperature during development and storage. Every step in the process must factor in temperature. Soldering in cold temperatures, for example, often leads to poor connections. Likewise, storing boards in extreme heat or humidity may damage components and the board itself. At every step in the process, consider temperature and ensure it’s working for you.


12. Communicate


Building PCBs can be fun, if you create functional boards at the end of the grueling process. So you designed your board well and followed best practices during production – you’re in the clear, right? Not always. Ensure you properly communicate with your clients at all times. It sounds simple, but what’s said isn’t always what’s heard, and a finished product can still be rejected. Save yourself a step by making sure you’re creating what your client wants at every stage of the process, so you can move on to more fun things, like making a paper airplane machine gun.


See more news at:


Adding USB in 4 easy steps

Posted by txbeech Aug 20, 2015

Add USB interface in 4 easy steps with CP210x family


Sometimes we forget to add USB to our designs, or we need USB to access the design more efficiently from our development platform.

Don’t worry. It’s easy to drop USB connectivity into any design, old or new, with the fixed-function CP210x USB-to-UART bridge family from Silicon Labs.

Step 1 – Connect your CP210x EVK to your Windows PC.

Step 2 – Launch Windows driver installer and walk through the wizard to set the driver name, address and other configurations.



Step 3 – Install the driver on the target device and reboot Windows to recognize it. No additional code writing necessary.


Step 4 – Once the drivers are in place and the device is recognized, open a COM port and, USB-am!, start sending and receiving USB data.


This is the setup: two wires run from the UART pins on my device to the TX and RX pins of the Silicon Labs CP2102.

The USB then goes to the host computer where the terminal is viewed.
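Once the driver enumerates the bridge as a virtual COM port, step 4 is ordinary serial I/O. The sketch below stands in for that loopback with a POSIX pseudo-terminal so it runs without the EVK attached; with real hardware you would instead open the bridge’s port (e.g. COM3 on Windows or /dev/ttyUSB0 on Linux, names that vary per system) with a serial library such as pyserial.

```python
import os
import pty

# No hardware here: a pseudo-terminal pair stands in for the CP210x
# virtual COM port. With the real device you'd open its port with a
# serial library instead of pty.openpty().
host_fd, device_fd = pty.openpty()

message = b"hello CP210x\n"
os.write(host_fd, message)                   # host TX -> device RX
received = os.read(device_fd, len(message))  # what the device side sees

print(received.decode().strip())
```

The same write-then-read pattern applies once the CP2102’s TX/RX pins are wired to your target’s UART.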



To learn more or to get involved please follow the links below!

Also feel free to message me with questions or to get more information.


Learn more at the Silicon Labs CP210x Page

CP210x devices

Download AN721 for more detailed instructions

AN721, adding USB walkthrough

Buy the CP210x EVK to get started

Evaluation kit

Customize the USB driver

Custom driver info


Plasmonic Circuit. A research team from ETH Zurich recently published an article in Nature Photonics announcing a new technology that enables faster, cheaper data transmission.  (via Nature Photonics)

Networks may get an upgrade. A team of researchers from ETH Zurich recently developed a technology that may make the future of data transmission faster, cheaper and smaller than ever before.


Juerg Leuthold, Professor of Photonics and Communications, and his team of researchers recently published a paper in Nature Photonics disclosing a new technology that can transmit data with a modulator roughly one hundred times smaller than today’s devices. The new method shrinks modulators from a few micrometers to a few nanometers, allowing faster, smaller-scale data transmission.


The research team discovered that surface plasmon polaritons can be used to shrink light signals to a fraction of their normal size. Using this trick, they were able to send light signals as normal, shrink them down to fit through much smaller electrical structures, and expand them again afterward. The technique is akin to keeping a secret message in a small box, flattening the box so it fits through the crack of a doorway, and opening it up again on the other side. The technology miniaturizes the data path without compromising the data, bypassing the limitations of current technology.


Leuthold plans to continue his research, although he has not disclosed the next step for his work. The current model uses gold, yet is still cheaper to build than today’s modulators. Perhaps future models will use other conductors, and the team might attempt to build compatible hardware. These are all speculations, but one thing is certain: if it comes to market, it’ll significantly change the way we transmit data every day.




Memristor circuit.jpg

Memristor Circuit. Researchers at UC Santa Barbara and Stony Brook University successfully built a neural network to house memristors. The prototype was successful in recognizing small images and may be expanded to develop futuristic computers that simulate the human brain. (via UC Santa Barbara)

A team at the University of California, Santa Barbara and Stony Brook University is on the brink of working out how to house memristors in their own neural hardware using classic perceptron technology. Memristor research has been a long time coming, but if the researchers succeed, the devices could help manage computer energy consumption and may eventually lead to thinking computers that mimic human neurons and synapses.


Memristors, or memory resistors, are thought to be a crucial component to developing computers that can really “think” like human brains. A human brain will build brand new synapses based on an individual’s need for a particular type of information. A mathematician, for example, would have a very different brain, structurally, than a musician, because the part of the brain most used would become more developed over time. Computer scientists think memristors are the key to allowing computers to work in this way, as they can regulate the flow of electrical energy to various circuits, based on which circuits are most frequently used.
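That dependence on usage history can be sketched with the classic linear ion-drift memristor model (Strukov et al., 2008): resistance interpolates between a fully doped and a fully undoped value as current pushes dopants back and forth, and the state holds when no current flows. A minimal simulation, with illustrative parameter values that are not taken from the study:

```python
# Minimal linear ion-drift memristor sketch (illustrative parameters,
# not from the UCSB/Stony Brook device). State w in [0, 1] tracks the
# doped fraction of the film; resistance interpolates R_on..R_off.
R_ON, R_OFF = 100.0, 16_000.0    # ohms, fully doped / fully undoped
K = 1e4                          # drift constant (1/coulomb), assumed

def step(w, current, dt):
    """Advance the internal state by one time step of applied current."""
    w += K * current * dt        # positive current pushes toward R_on
    return min(max(w, 0.0), 1.0)

def resistance(w):
    return R_ON * w + R_OFF * (1.0 - w)

w = 0.5
for _ in range(100):             # forward current lowers resistance...
    w = step(w, 1e-4, 1e-3)
r_after_forward = resistance(w)

for _ in range(100):             # ...zero current leaves it unchanged
    w = step(w, 0.0, 1e-3)
r_after_idle = resistance(w)

for _ in range(200):             # reverse current raises it again
    w = step(w, -1e-4, 1e-3)
r_after_reverse = resistance(w)
```

The zero-current loop is the point: the device “remembers” its resistance with no power applied, which is what makes memristors attractive as synapse-like elements.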



Concept Blueprint (via UC Santa Barbara & Nature)


Although memristors are a common topic of conversation in future computer-building, scientists have struggled to build neural hardware to house them. The new study by UC Santa Barbara and Stony Brook University, however, may change that. The team built a 12 x 12 memristive crossbar array that functions as a single perceptron, an early type of neural network often used for pattern recognition and basic information organization. The team programmed a network of perceptrons to recognize things like letters and patterns; together, the hardware functions as a collection of basic synapses.


The hardware is built from aluminum and titanium, manufactured at low temperatures to allow for monolithic three-dimensional integration. This lets the memristor “remember” the amount and direction of the previous current for future use, even after the device has been powered off. This kind of recognition is currently possible with other technology, but it is much more involved. Using memristors means simpler functionality while consuming no power to retain state.


In the trial, the memristor model was able to sort 3 x 3-pixel black-and-white patterns into three classes. The model had three outputs, ten inputs and 30 perceptron synapses. In the future, the team plans to shrink the device down to 30 nm across, in the hope of simulating 100 billion synapses per square centimeter.
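Those dimensions map directly onto a textbook single-layer perceptron: 9 pixel inputs plus a bias, times 3 outputs, gives the 30 synapses. A software sketch of the same arrangement, using made-up 3 x 3 patterns (a vertical bar, a horizontal bar and a diagonal) rather than the patterns from the study:

```python
# Single-layer perceptron matching the paper's dimensions:
# 10 inputs (9 pixels + bias) x 3 outputs = 30 weights ("synapses").
# Training patterns are illustrative, not the ones from the study.
PATTERNS = {
    0: [0,1,0, 0,1,0, 0,1,0],   # vertical bar
    1: [0,0,0, 1,1,1, 0,0,0],   # horizontal bar
    2: [1,0,0, 0,1,0, 0,0,1],   # diagonal
}

weights = [[0.0] * 10 for _ in range(3)]  # 3 outputs x 10 inputs

def outputs(pixels):
    x = pixels + [1]            # append the constant bias input
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def train(epochs=20, lr=0.1):
    for _ in range(epochs):
        for label, pixels in PATTERNS.items():
            out = outputs(pixels)
            x = pixels + [1]
            for k in range(3):  # one-vs-rest perceptron learning rule
                target = 1.0 if k == label else -1.0
                if target * out[k] <= 0:        # misclassified: nudge
                    for i in range(10):
                        weights[k][i] += lr * target * x[i]

train()
predictions = {label: max(range(3), key=lambda k: outputs(p)[k])
               for label, p in PATTERNS.items()}
```

In the hardware version, each of the 30 weights is the conductance of one memristor in the crossbar, and the weighted sums fall out of Ohm’s and Kirchhoff’s laws rather than a loop.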


While some argue computers will never have the real processing power of the human brain, others say memristors will still be useful as analog memory devices or components of logic for larger systems. Since they use no energy, but record energy used, memristors may also be useful for energy management.






Posted by zylotech May 25, 2015

Hi all,


I recently bought a development board LPC4357 - EVB .

I don’t know much about ARM programming, and I have no experience with the ULINK2, Keil or the LPC4357-EVB.


I have downloaded the Examples2\GPIO\Gpio_LedBlinky project.

When I program Gpio_LedBlinky in InFlash mode, all goes well and I can see the LED flashing.


But when I unplug the ULINK2 from the USB port, the LED stops flashing.

Shouldn’t the code be programmed into the LPC4357’s flash when selecting "InFlash"?


And why can’t I program in SPIFI mode? (The flash is full-chip erased before I try to flash in SPIFI mode.)

The DIP switch settings are:

1. Down

2. UP

3. UP

4. UP

I get a pop-up message with the error "ERROR: Flash Download failed - "Cortex-M4"".

In the Keil Build Output window:

Load "C:\\LocalData\\LPC4357-EVB\\Examples2\\GPIO\\Gpio_LedBlinky\\Keil\\SPIFI 64MB Debug\\example.axf"

Erase Done.

Programming Failed!

Error: Flash Download failed  -  "Cortex-M4"

Flash Load finished at 14:32:41


Best Regards



Europa squid.jpg

NASA eel bot that may delve into the depths of the moon Europa. NASA recently announced the 15 winners of this year’s NIAC funding, worth $100,000 each. Among them is a project to develop a robotic eel to explore Europa, Jupiter’s moon.  (via NASA)



Anyone seen that movie Europa Report? It may have inspired NASA...


NASA recently announced the winners of its annual NASA Innovative Advanced Concepts (NIAC) program. There are 15 winners in total, each with far-out ideas (pun intended) about making science fiction a reality. NASA hopes these highly innovative, and slightly crazy, ideas will lead to advances that extend its ability to delve further into space.


One crazy idea that just might work is NIAC 2015 winner Mason Peck’s research to design a robotic eel that can explore the depths of Europa, one of Jupiter’s many moons. The idea is highly innovative and calls for the invention of new technologies – including new power systems.


A mock-up of the robot design is seen above. It would be a soft-bodied robot able to swim through and explore the aquatic depths of Europa. Peck describes the robot as more of a squid than an eel, as NASA calls it. The science behind it is pretty inspiring: the body of the eel/squid would have ‘tentacle’ structures that let it harvest power from changing electromagnetic fields. That energy would power its rover subsystems, one of which allows it to expand and change shape to propel itself in water and on land. It would do this by electrolysis of water, creating H2 and O2 gas that is harvested to expand the body and combusted internally to act as a propulsion system. To learn more about the other 14 winners who scored $100,000 to develop technology like this, see their extensive report.





Chalmers University of Technology researchers have found that large-area graphene preserves the spin of electrons over longer periods of time (via Chalmers)

Chances are you own a smartphone, tablet or laptop that features some form of solid-state technology, typically RAM, a flash drive or an SSD. Those devices are faster than their mechanical counterparts, and new findings by researchers from Sweden’s Chalmers University of Technology are set to make the technology even faster and more energy efficient through the use of graphene.


Specifically, they found that large-area graphene can prolong the spin of electrons (spintronics) over a longer period than ferrous metals can. Spintronics deals with the intrinsic spin of electrons and its magnetic moment, the torque an electron experiences when an external magnetic field is applied. As mentioned above, there are already spintronic devices on the market; however, they use ferrous metals for their base platform. It’s the impurities in those metals that hold spintronics back from becoming a mainstream component in today’s electronic circuitry, limiting the size of the components themselves.


This is where graphene comes into play: the material extends the reach of spintronics from nanometers to millimeters, making the spin of those electrons last longer and travel farther than ever before. So why is that good? Data (in the form of 1s and 0s) is encoded onto those electrons as spin up and spin down, rather than by switching an electrical state off and on in traditional circuits. The problem is that as process nodes become smaller, electrical ‘bleed’ across transistors in the off state increases, preventing us from building transistors that consume less power.


Using graphene as the substrate for spintronics allows the electrons to maintain their spin alignment for up to 1.2 nanoseconds and carry the information they contain up to 16 micrometers without degradation. Of course, progress doesn’t come without its problems; in this case the problem is the graphene itself, or rather its manufacturing process. Producing large sheets of the one-atom-thick material is still an issue for manufacturers, and when it is produced it usually has defects in the form of wrinkles and roughness, which can degrade the electrons’ spin rate and hasten its decay.


The researchers have found the CVD (chemical vapor deposition) method promising, however, and the team hopes to capitalize on it to produce a logic component in the short term, with a long-term goal of graphene/spintronic-based components that surpass today’s solid-state devices in both speed and energy efficiency.


See more news at:
