


A team of researchers from Columbia Engineering, Seoul National University and the Korea Research Institute of Standards and Science recently developed the world's thinnest light bulb – just one atom thick – using graphene. The structure may also revolutionize computing and chemical experimentation.  (via Columbia)

Graphene never ceases to amaze – take a look at everything written about the material here at element14. A team of researchers from Columbia Engineering, Seoul National University and the Korea Research Institute of Standards and Science recently created the world's thinnest light bulb, at just one atom thick. The micro bulb on a chip may revolutionize light displays, chemistry and computing, and the researchers are now refining the technology for practical use in the near future.


Postdoctoral research scientist Young Duck Kim of James Hone’s team at Columbia Engineering headed the project. He and his team of researchers took the same principles of the incandescent light bulb and applied them to graphene to see if Thomas Edison’s world-changing invention could be updated.


The team attached small strips of one-atom-thick graphene to metal electrodes, suspended the strips above the substrate, and heated them by passing a current through them. To their surprise, the heated graphene became luminous – bright enough to see with the naked eye. The structure is essentially the thinnest visible light bulb ever made, and its potential for impacting numerous technologies is huge.


If the graphene light chip comes to market, it could play a critical role in enhancing the capabilities of photonic circuit technology. Photonic circuits are much like electrical circuits, but they use light rather than electrons to carry signals, which means they need an on-chip light source. For the filament of such a source to emit useful visible light, it must be able to withstand temperatures of thousands of degrees Celsius. A chip that was both able to handle that level of heat and small enough to fit on a circuit board never existed, until now.


The micro light bulb on a chip may have other uses too. Since it can handle more than 2500 degrees Celsius, it could serve as a tiny hot plate for observing high-temperature chemical reactions. The tiny bulbs are also see-through, so they could transform commercial light displays, and if the chips can be switched on and off quickly enough, they may even have a future as computer bits.


Kim and his team are continuing to expand upon the technology. It was a joint effort between researchers from Columbia Engineering, the Korea Research Institute of Standards and Science, Seoul National University, Konkuk University, Sogang University, Sejong University, Stanford University and the University of Illinois at Urbana-Champaign. Read more about this achievement at Nature.





For the second installment in the 32-bit vs 8-bit MCU series I will get a little more technical and talk about processing power, interrupt latency, and pointer efficiency and ease of use.

The two architectures I am most familiar with are the 8051 on the 8-bit side and ARM on the 32-bit side, so the comparisons I make will be based on those. There are many others out there, like PIC with its modified Harvard architecture as well as other RISC cores, but my breadth of knowledge on those is lacking, so I will focus on the 8051 and ARM architectures.



When people think of 32-bit vs 8-bit, most automatically assume the 32-bit device will out-process the 8-bit one. While that is generally true, 8-bit 8051s excel at what they were made to do: handle 8-bit data. The code and memory footprint of a program that shifts and manipulates 8-bit data will be smaller on an 8051 than on an ARM. A smaller program uses less memory and lets the designer choose an even smaller chip, which plays to the strengths of the 8-bit part. When moving 16-bit data, however, the efficiency of the 32-bit core begins to differentiate itself, and with 32-bit data and math the 32-bit core clearly outshines the 8-bit: it can do a 32-bit addition or subtraction in one instruction where the 8-bit core needs four. This makes the 32-bit core better suited for a large data-streaming role. However, the 32-bit core isn't always better at this data pass-through, especially in simple cases. For example, in a simple SPI-UART bridge a 32-bit MCU sits idle for long periods, and the entire application is small at under 8 kB of flash. That makes the 8-bit part the right choice in this simple example, and many USB-SPI/I2C/etc. devices that simply pass data between peripherals are repurposed 8-bit MCUs, like the CP210x family.

To illustrate the point of 8-, 16- and 32-bit data efficiency, I compiled the function below on a 32-bit ARM core and an 8-bit 8051 MCU, substituting uint8_t, uint16_t and uint32_t for the data type.


uint32_t funcB(uint32_t testA, uint32_t testB){
  return (testA * testB)/(testA - testB);
}

| data type | 32-bit (-O3) | 8-bit    |
| uint8_t   | 20 bytes     | 13 bytes |
| uint16_t  | 20 bytes     | 20 bytes |
| uint32_t  | 16 bytes     | 52 bytes |


As the data size increases, the 8051 needs more and more code to deal with it, eventually surpassing the size of the ARM function. The 16-bit case is dead even in this example, although the 32-bit core used fewer instructions and therefore has the edge. It is also important to note that this comparison is only valid when compiling the ARM code with optimization; un-optimized code is several times larger. The results can also depend on the compiler and the style of coding being done.



Interrupt Speed

The latency involved in interrupts and function calls is vastly different between the two. The 8-bit part is going to be faster at servicing interrupts as well as communicating with 8-bit peripherals; the 8051 core has an advantage in ISR service times and latency. But this advantage only comes into play on entry and exit, so as the ISR gets larger the benefit fades away. The ARM interrupt controller automatically stacks several registers (around four, depending on the type of core), and if the interrupt service routine (ISR) doesn't need all of those registers, those cycles are wasted. So for smaller ISRs the 8051 will be faster to execute, as in the sketch below. Again the theme is: the bigger the system, the less edge the 8-bit has. You must also consider what the ISR is doing – if it is performing 32-bit math or something to that degree, it will take the 8-bit part longer and the faster ISR entry/exit will be negated.
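To make the entry/exit overhead concrete, here is a minimal sketch (Keil C51 syntax and a hypothetical pin assignment assumed, not taken from any particular project) of the kind of tiny ISR where the 8051's lighter interrupt prologue and epilogue are essentially the whole cost:

#include <reg51.h>               /* standard Keil C51 SFR definitions */

sbit STATUS_PIN = P1^0;          /* hypothetical status output on P1.0 */
volatile unsigned char tick;     /* 8-bit counter: a single-instruction increment on the 8051 */

void timer0_isr(void) interrupt 1    /* Timer 0 overflow vector; TF0 is cleared by hardware */
{
    tick++;
    STATUS_PIN = !STATUS_PIN;    /* the work is so small that entry/exit dominates the cost */
}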



When using pointers on an 8051 device, it is more efficient to specify which data segment the pointer points to, for example XDATA. Generic pointers exist but are slower and use more memory (see the sketch below). If your application makes heavy use of pointers, especially in data transfer, the 8051 may start to lag behind the ARM core in efficiency. This is because the ARM has a unified memory map, so a pointer on the ARM core can point anywhere and transfer data without first copying it to another segment. Memory-specific pointers on the 8051 can also be hard to understand and taxing on the developer, which again lends the advantage to ARM if your application heavily utilizes pointers. It is all a game of tradeoffs, and knowing what your application needs will help you weigh the options.
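Here is a minimal sketch of the difference (again assuming Keil C51 and a hypothetical buffer name): the XDATA-qualified pointer is two bytes and compiles to direct MOVX accesses, while the generic pointer carries an extra memory-type byte and is dereferenced through library helper routines, costing both code size and cycles.

unsigned char xdata rx_buffer[64];           /* buffer placed in external RAM (XDATA) */

unsigned char xdata *fast_ptr = rx_buffer;   /* 2-byte pointer, segment known at compile time */
unsigned char *generic_ptr    = rx_buffer;   /* 3-byte generic pointer, resolved at run time */

unsigned char read_fast(void)    { return *fast_ptr++;    }  /* direct MOVX @DPTR access */
unsigned char read_generic(void) { return *generic_ptr++; }  /* calls a generic-pointer library helper */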

When beginning a project that you know will contain an MCU, there are so many options that the choice can be overwhelming. One of the biggest questions is: what size MCU should you use? I will release a series of articles that shed some light on this issue faced by so many designers and developers.

Uncovering the differences and architectures

The two ends of the scale for microcontroller bit size are 8 bit and 32 bit. Bit size in this case means that the MCU processes an 8- or 32-bit data word at a time, and it also dictates the register, address and bus sizes. Both have advantages and disadvantages, and throughout the series I will talk about both. The most common architectures within these categories are the 8051 architecture for the 8-bit and the ARM architecture for the 32-bit. There seems to be a popular belief that 8-bit MCUs are on their way out, and with the release of new products such as the EFM8 MCUs, I have heard many people question using an 8-bit part in their design. Not only will this series distinguish the best cases for each MCU, it will also shed some light on why the 8-bit with its 8051 architecture still thrives in many applications.


First, let's discuss some of the more general and obvious differences, namely size, cost, and ease of use. For some applications a clear line can be drawn: if a system demands more than 64 KB of RAM, then the choice to use a 32-bit MCU is an easy one; if the system is ultra cost sensitive, then using an 8-bit MCU is the correct decision. For the applications where this clear line can't be drawn, there are deeper things that must be considered.


Ease of use and cost

The picture is very exaggerated, but it points out the general truth that the ARM core, which most 32-bit devices use, is easier to work with than the 8051 core, which the majority of 8-bit MCUs use. ARM-core 32-bit MCUs use familiar compilers, have a long list of available libraries and, perhaps most important of all, have a unified memory map, all of which make coding on the ARM an easier task than on an 8051. However, you pay for this, as the price of ARM-based devices is normally higher than that of the 8051; the most aggressively priced 8-bit 8051s hit ridiculous lows and can be bought for cents. But with ease of use comes a quicker time to market. For some products time to market is a deciding factor in their success, so for those end products paying a bit more for the 32-bit can be well worth it.



An advantage of the 8-bit devices is that they are generally smaller, which becomes an enormous edge over the 32-bit if the final product is space constrained. If you were designing a wearable like a watch, an 8-bit 8051 could allow the device to be smaller while offering the same functionality as a wearable built around a 32-bit part. To tip the scales even further, some manufacturers like Silicon Labs offer chip-scale packages (CSP) for their 8-bit devices, which shrinks the footprint significantly. Their CSP 8051 measures 1.66 x 1.78 mm, one of the smallest on the market; compare that to their smallest 32-bit part, the Tiny Gecko, which at 4 x 4 mm has more than four times the area.


A theme you will see multiple times in this series is that knowing your application and final design is perhaps the most important thing when choosing the MCU for you!


Made out of Gorilla Glass, the chip obliterates itself. This new chip shatters into thousands of pieces under extreme stress. (via Xerox PARC, pic via IDG.tv)

I was just thinking, there has to be a way to store data that will self-destruct upon access. Seems we are close to it.


The latest development from Xerox PARC engineers is a device straight out of a James Bond film. The team has created a chip that can explode into bits on command as part of the Defense Advanced Research Projects Agency's (DARPA) Vanishing Programmable Resources project. How does the chip get this shattering effect? It was made using Gorilla Glass, the material typically used for smartphone screens, instead of plastic and metal. The glass was then tempered so that it holds extreme internal stress, which causes it to disintegrate easily when triggered.


In a demonstration, the chip was brought to its breaking point with heat: a small resistor heated up and the glass shattered into a ton of tiny pieces. Even after it broke, the small fragments continued to break into even smaller pieces for tens of seconds afterward.


Is the chip supposed to just look cool? Even though the result is awesome, the chip can actually be a great security measure. It could be used to store sensitive data like encryption keys, shattering into so many pieces that the data becomes impossible to reconstruct. It's a pretty intense way to deal with electronic security, but it's a viable option if a device happens to fall into the wrong hands.


The self-destructible chip was demonstrated in all its glory at DARPA's Wait, What? Event in St. Louis last week.


“The applications we are interested in are data security and things like that,” said Gregory Whiting, a senior scientist at PARC in Palo Alto, California. “We really wanted to come up with a system that was very rapid and compatible with commercial electronics.”


With so much information being stored electronically, more and more companies are employing similar techniques for security. Similar thinking lies behind Snapchat, which lets users send images that friends can view only for a short time before the message can no longer be accessed. And Gmail recently introduced the "Undo Send" feature that allows people to cancel sent emails, though only within about 30 seconds of sending. Now, if only we could make our phones explode when they get stolen.


PARC is a Xerox company that provides tech services, expertise, and intellectual property to various companies including Fortune 500 businesses, startups, and government agencies.





Printed Circuit Boards (PCBs) are without a doubt central to all electronics. As technology advances, however, PCBs must be made faster and smaller than ever before. Before you get busy, make sure you nip sloppy PCB production in the bud, before it costs you big bucks. Read on to discover the 12 biggest PCB development mistakes and how to avoid them.




1. Improper Planning


Have you ever heard "proper planning prevents poor performance"? It's true. There's a reason we consider poor planning the number one PCB development mistake. There is no substitute for proper PCB planning; it can save you time and energy, whereas if you build it wrong, you will have to spend additional resources to go back and fix it. How do you plan properly? Consider numbers 2-6 on our list before you physically begin building. You'll be thankful you did.


2. Incorrect Design


There is an infinite number of layout possibilities with PCBs, so keep function in mind when designing the form. For example, if there's a good chance you'll need to add on in the future, you may want to consider something like a ball grid array (BGA), which can help conserve space on an existing board and let you build upon that design later. If your design must incorporate copper, you'd be best going with a polygon-style design. Whatever your function, choose the right form.


3. Improper Board Size


It’s much easier to begin with the right size first. Although the portion of the project you’re working on now may only require a small board, if you’re going to have to add on in the future, you’re better off getting the larger board now. Stringing multiple boards together may be difficult due to potential circuitry and connectivity issues. Plan adequately not only for current function but future function so you save yourself time and money.


4. Failing to Group Like Items


Grounding your PCB is a critical part of production. Grouping like items will not only help you keep your trace lengths short (another important element of design), but it will also help you avoid circuitry issues, ease testing and make error correction much simpler.


5. Software, Software, Software


We know you can design a PCB from scratch, but why would you want to when you can use software? Software makes your life easier. Electronic Design Automation tools give you recommendations for the best layout to choose, and other programs may suggest the best materials to use based on the board's intended function. Software won't do all of the thinking for you, but it sure does help.


6. Using the Silk Screen Improperly


A huge ally when creating a PCB design is the silk screen. When used properly, it's a great tool that allows you to map out all aspects of your PCB before construction, including circuitry planning. However, be careful and maintain best practices: when used improperly, the silk screen can make it difficult to know where connectors and components are supposed to go. Use full words as descriptors when possible, or keep a key of your symbols nearby.





Once you're done planning, you can begin building your board. You're still not out of the woods, however; building is another area where people make costly mistakes. When done well, though, you can build PCBs faster than ever.


7. Poorly-Constructed Via-in-Pad


This issue is one of the biggest detriments to proper PCB development. Many boards now require via-in-pads, but when soldered incorrectly, vias can lead to breakouts in your ground plane. This creates a larger circuitry issue, as power travels between boards instead of connectors and components. Test your ground plane. If you suspect you have a shaky via-in-pad on your board, cap or mask it and test it again. It may slow down production now, but it’ll save you time in the long run.


8. Using the Wrong Materials


Although this mistake may seem like a novice move, it happens. PCBs can be constructed using various materials. Know the purpose for which you are building your board, and which materials are best for that design, before you start building. If you’re building an FR-2 grade, single-sided PCB, you can use phenolic or paper materials. Anything more complex, however, should use epoxy or glass cloth. Also, different materials have different temperaments. Keep this in mind. If you’re building a simple design that needs to hold up in an area with a lot of humidity, it may be worth it to go with epoxy. 


9. Too Lazy to Test It


If there's one habit you should change, it's how frequently you test your prototype. Assuming your board is grounded and that circuits will function perfectly along their intended ground paths and voltages is asking for trouble. We know it takes time to test your board, but it takes far more time to find and correct an error later on. Test it now. Every design has an issue; keep that thought in mind.





So you properly planned and built your board. Things couldn’t still go wrong, could they? Wrong! They can and they do. These are the three mistakes to avoid.


10. Failing to Double Crunch the Numbers


We’ve all felt the pressure of an upcoming product deadline. You’re sweating, over-caffeinated and running on lack of sleep. We know you’re an engineer, but don’t let your ego cost your company huge amounts of money due to an error. Always double-check your numbers before sending your model to production. This includes testing your board, ensuring the size is in line with your client’s specifications and double-checking your design is ideal for the intended function. It’s always better to have one model that needs to be reworked than a thousand. Rewind to #1 in this list… proper planning. Never jump the gun when sending out the design.


11. Temperature Control


This step is often neglected, but it's important. Even if you do everything right leading up to the production process, you will ruin your boards if you neglect temperature during development and storage. Every step in the process must factor in temperature. Soldering in cold temperatures, for example, often leads to poor connections. Likewise, storing boards in extreme heat or humidity may damage components and the board itself. At every step in the process, consider temperature and ensure it's working for you.


12. Communicate


Building PCBs can be fun, if you create functional boards at the end of the grueling process. So you designed your board well and followed best practices during production – you’re in the clear, right? Not always. Ensure you properly communicate with your clients at all times. It sounds simple, but what’s said isn’t always what’s heard. Your finished product can be rejected. Save yourself a step by making sure you’re creating what your client wants at every step in the process, so you can move onto more fun things, like making a paper airplane machine gun.





Adding USB in 4 easy steps

Posted by txbeech Aug 20, 2015

Add a USB interface in 4 easy steps with the CP210x family


Sometimes we forget to add USB to our designs, or we need USB to access the design more efficiently from our development platform.

Don't worry. It's easy to drop in USB connectivity to any design—old or new—with the fixed-function CP210x family from Silicon Labs.

Step 1 – Connect your CP210x EVK to your Windows PC.

Step 2 – Launch Windows driver installer and walk through the wizard to set the driver name, address and other configurations.



Step 3 – Install the driver on the target device and reboot Windows to recognize it. No additional code writing necessary.


Step 4 – Once the drivers are in place and the device is recognized, open a COM port and – USB-am! – start sending and receiving USB data.


This is the setup: two wires run from the UART pins on my device to the TX and RX pins of the Silicon Labs CP2102.

The USB then goes to the host computer where the terminal is viewed.
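As a rough illustration of Step 4, here is a minimal sketch (assuming Windows, the standard Win32 serial API, and that the CP210x enumerated as COM3 – substitute the port Device Manager actually shows) that opens the virtual COM port, configures it to match the UART on the other side of the bridge, and sends a few bytes:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* "COM3" is a placeholder -- use whatever port Device Manager assigns to the CP210x. */
    HANDLE port = CreateFileA("\\\\.\\COM3", GENERIC_READ | GENERIC_WRITE,
                              0, NULL, OPEN_EXISTING, 0, NULL);
    if (port == INVALID_HANDLE_VALUE) {
        printf("Could not open COM port\n");
        return 1;
    }

    /* Match the UART settings of the device on the other side of the bridge. */
    DCB dcb = {0};
    dcb.DCBlength = sizeof(DCB);
    GetCommState(port, &dcb);
    dcb.BaudRate = CBR_115200;
    dcb.ByteSize = 8;
    dcb.Parity   = NOPARITY;
    dcb.StopBits = ONESTOPBIT;
    SetCommState(port, &dcb);

    /* Return immediately from reads with whatever bytes have already arrived. */
    COMMTIMEOUTS to = {0};
    to.ReadIntervalTimeout = MAXDWORD;
    SetCommTimeouts(port, &to);

    const char msg[] = "hello over USB\r\n";
    DWORD count = 0;
    WriteFile(port, msg, sizeof(msg) - 1, &count, NULL);   /* appears on the CP210x TX pin */

    char rx[64];
    ReadFile(port, rx, sizeof(rx), &count, NULL);          /* bytes seen on the CP210x RX pin */
    printf("received %lu bytes\n", (unsigned long)count);

    CloseHandle(port);
    return 0;
}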



To learn more or to get involved please follow the links below!

Also feel free to message me with questions or to get more information.


Learn more at the Silicon Labs CP210x Page

CP210x devices

Download AN721 for more detailed instructions

AN721, adding USB walkthrough

Buy the CP210x EVK to get started

Evaluation kit

Customize the USB driver

Custom driver info


Plasmonic Circuit. A research team from ETH Zurich recently published an article in Nature Photonics that announced the discovery of a new technology that enables faster, cheaper data transmission.  (via Nature Photonics)

Networks may get an upgrade. A team of researchers from ETH Zurich recently developed a technology that may make the future of data transmission faster, cheaper and smaller than ever before.


Professor of Photonics and Communications Juerg Leuthold and his team of researchers recently published a paper in Nature Photonics disclosing a new technology that can transmit data with a modulator roughly one hundred times smaller than current devices. The new method shrinks modulators from a few micrometers to a few nanometers, allowing faster and smaller transmission of data.


The research team discovered that surface plasmon polaritons could be used to shrink light signals to a fraction of their normal size. Using this trick, they were able to send light signals as normal, shrink them down so they could move through much smaller structures, and expand them again later. The technique is similar to keeping a secret message in a small box, flattening that box so it fits through the crack under a doorway, and opening it up again on the other side. The technology minimizes the data without compromising it, and bypasses the limitations of current technology.


Leuthold plans to continue his research, although he has not disclosed the next step for his work. The current model uses gold, yet is still more affordable to build than current modulators. Perhaps various conductors will be used in future models, and the team might attempt to build compatible hardware. These are all speculations, but one thing is certain – if it comes to market, it'll significantly change the way we transmit data every day.






Memristor Circuit. Researchers at UC Santa Barbara and Stony Brook University successfully built a neural network to house memristors. The prototype was successful in recognizing small images and may be expanded to develop futuristic computers that simulate the human brain. (via UC Santa Barbara)

A team at the University of California, Santa Barbara and Stony Brook University may finally have worked out how to build memristors into their own neural hardware using classic perceptron technology. Memristor research has been a long time coming, but if the researchers are successful, the devices could help manage computer energy consumption and may eventually lead to thinking computers that mimic human neurons and synapses.


Memristors, or memory resistors, are thought to be a crucial component to developing computers that can really “think” like human brains. A human brain will build brand new synapses based on an individual’s need for a particular type of information. A mathematician, for example, would have a very different brain, structurally, than a musician, because the part of the brain most used would become more developed over time. Computer scientists think memristors are the key to allowing computers to work in this way, as they can regulate the flow of electrical energy to various circuits, based on which circuits are most frequently used.



Concept Blueprint (via UC Santa Barbara & Nature)


Although memristors are a common topic of conversation for future computer building, scientists have struggled to build neural hardware to house them. The new study published by UC Santa Barbara and Stony Brook University, however, may change that. The team built a 12 x 12 memristive crossbar array that functions as a single perceptron, an early type of neural network often used for pattern recognition and basic information organization. The team programmed a network of perceptrons to decipher things like letters and patterns, and together the micro hardware functions as a collection of basic synapses.


The hardware is built using aluminum and titanium, but manufactured at low temperatures to allow for monolithic three-dimensional integration. This allows the memristor to "remember" the amount of energy and the direction of the previous current for future use, even after the main device has been powered off. This kind of recognition is currently possible using other technology, but it is much more involved; using memristors means easier functionality while using no power.


In the trial, the memristor model was able to classify 3 x 3-pixel black-and-white patterns into three types. The model they created had three outputs, ten inputs and 30 perceptron synapses. In the future, the team plans to shrink the current device down to 30 nm across, in the hope of simulating 100 billion synapses per square centimeter.
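To see why those numbers add up, here is an illustrative sketch (not the researchers' code): a single-layer perceptron with ten inputs (the nine pixels of a 3 x 3 pattern plus a bias) and three outputs needs exactly 10 x 3 = 30 weights – the 30 synapses implemented by the memristive crossbar. Each output is a weighted sum, and the largest sum wins:

#define N_IN  10   /* 9 pixel values + 1 bias input */
#define N_OUT 3    /* three pattern classes */

/* Returns the index of the winning class for input vector x under weight matrix w. */
int classify(const float x[N_IN], const float w[N_OUT][N_IN])
{
    int best = 0;
    float best_sum = -1.0e30f;
    for (int o = 0; o < N_OUT; o++) {
        float sum = 0.0f;
        for (int i = 0; i < N_IN; i++)
            sum += w[o][i] * x[i];            /* one multiply-accumulate per "synapse" */
        if (sum > best_sum) { best_sum = sum; best = o; }
    }
    return best;
}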


While some argue computers will never have the real processing power of the human brain, others say memristors will still be useful as analog memory devices or components of logic for larger systems. Since they use no energy, but record energy used, memristors may also be useful for energy management.







Posted by zylotech May 25, 2015

Hi all,


I recently bought an LPC4357-EVB development board.

I don't know much about ARM programming and have no experience with the ULINK2, Keil or the LPC4357-EVB.


I have downloaded the Examples2\GPIO\Gpio_LedBlinky project.

When I program Gpio_LedBlinky in InFlash mode, all goes well and I see the LED flash.


But when I remove the ULINK2 from the USB port, the LED stops flashing.

Shouldn't the code be programmed into the LPC4357 chip when selecting "InFlash"?


And why can't I program in SPIFI mode? (The flash is full-chip erased before I try to flash in SPIFI mode.)

The DIP switch settings are:

1. Down

2. UP

3. UP

4. UP

I get a pop-up error message like this: "ERROR: Flash Download failed - "Cortex-M4""


In the Keil Build Output window:

Load "C:\\LocalData\\LPC4357-EVB\\Examples2\\GPIO\\Gpio_LedBlinky\\Keil\\SPIFI 64MB Debug\\example.axf"

Erase Done.

Programming Failed!

Error: Flash Download failed  -  "Cortex-M4"

Flash Load finished at 14:32:41


Best Regards




A NASA eel bot that may delve into the depths of the moon Europa. NASA recently announced the 15 winners of NIAC funding, worth $100,000 for each candidate. Among them is a project to develop a robotic eel to explore Europa, Jupiter's moon.  (via NASA)



Anyone seen that movie Europa Report? It may have inspired NASA...


NASA recently announced the winners of its annual NASA Innovative Advanced Concepts (NIAC) program. There are 15 winners in total, all with far-out ideas (pun intended) about making science fiction a reality. NASA is hoping that these highly innovative, and a bit crazy, ideas will lead to advances that improve its ability to delve further into space.


One crazy idea that just might work is NIAC 2015 winner Mason Peck’s research to design a robotic eel that can explore the depths of Europa, one of Jupiter’s many moons. The idea is highly innovative and calls for the invention of new technologies – including new power systems.


A mock-up for the robot design is seen above. It would be a soft-bodied robot that can swim and explore the aquatic depths of Europa. Peck describes the robot as more of a squid than an eel, as NASA calls it. The science behind it is pretty inspiring. The body of the eel/squid would have ‘tentacle’ structures that allow it to harvest power effectively from changing electromagnetic fields.  The energy will power its rover subsystems, one of which allows it to expand and change shape to propel itself in water and on land. It would do this by electrolysis of water, creating H2 and O2 gas that will be harvested to expand, and combusted internally to act as a propulsion system. To learn more about the other 14 winners who scored $100,000 to develop technology like this, see their extensive report.






Chalmers University of Technology researchers have found that large area graphene helps prolong the spin of electrons over longer periods of time (via Chalmers)

Chances are you own a smartphone, tablet or PC/laptop that features some form of solid-state technology – typically RAM, a flash drive or an SSD. Those devices are faster than their mechanical counterparts, and new findings by researchers from Sweden's Chalmers University of Technology are set to make that technology even faster and more energy efficient through the use of graphene.


Specifically, they found that large-area graphene is able to prolong the spin of electrons (spintronics) for longer than ferromagnetic metals can. Spintronics deals with the intrinsic spin of electrons and its associated magnetic moment – the torque an electron experiences when an external magnetic field is applied. As mentioned above, there are already spintronic devices on the market, but they use ferromagnetic metals as their base platform, and it's the impurities in those metals that hold spintronics back from becoming a mainstream component in today's electronic circuitry by limiting the size of the components themselves.


This is where graphene comes into play: the material extends the reach of spintronics from nanometers to millimeters, making the spin of those electrons last longer and travel farther than ever before. So why is that good? Data (in the form of 1s and 0s) can be encoded in the spin-up and spin-down states of those electrons, rather than by switching an electrical state off and on as traditional circuits do. The problem with traditional circuits is that as process nodes become smaller, electrical 'bleed' across transistors in the off state increases, preventing us from building transistors that consume less power.


Using graphene as the substrate for spintronics allows the electrons to maintain their spin alignment for up to 1.2 nanoseconds and to carry the information contained in that spin over distances of up to 16 micrometers without degradation. Of course, progress doesn't come without its problems – in this case it's the graphene itself, or rather the manufacturing process. Producing large sheets of the one-atom-thick material is still an issue for manufacturers, and what is produced usually has defects in the form of wrinkles and roughness, which can degrade the electrons' spin lifetime.


The researchers, however, have found that the CVD (chemical vapor deposition) growth method is promising, and the team hopes to capitalize on it to produce a logic component in the short term, with a long-term goal of producing graphene spintronic components that surpass today's solid-state devices in both speed and energy efficiency.





Microchip CEO Steve Sanghi (via Microchip)

Microchip Technology, Inc. is celebrating this week, as it was just named the number one provider of 8-bit microcontrollers (MCUs) globally. The title comes from Gartner's annual ranking publication, in its 2014 edition.


Microchip Technology, Inc., is an innovation giant that specializes in mixed-signal, Flash-IP and analog solutions. It has long been a leader in the microcontroller industry and although the powerhouse is celebrating its reclaim of the top spot for 8-bit MCUs, it is a leading provider of 16-bit and 32-bit MCU production as well.


Microchip is committed to growing its MCU technologies in all markets, including its 8-bit, 16-bit and 32-bit product lines, and its dedication to excellence is paying off. The company was ranked the fastest-growing MCU supplier among the top 10 providers in 2014, with a growth rate charted at double that of its competitors, and it was also named one of the top 10 providers of 32-bit MCUs for the first time ever. While its stats across the MCU industry are impressive, what's most striking is that Microchip closed a 41% revenue deficit to reclaim the top spot from Renesas.


Renesas is the company that resulted from the merger of the semiconductor businesses of NEC, Hitachi and Mitsubishi. These were the leading semiconductor companies of Japan, and when they merged, Microchip was knocked out of the top spot for 8-bit MCUs in 2010. At the time, Renesas' business was 41% larger than Microchip's, but Microchip worked at it tirelessly each year and finally reclaimed the lead with a 10.5% advantage over the Japanese supplier in 2014.


MCUs are used in a number of different products, including watches, mobile phones and many digital household electronics. The need for MCUs is increasing as the consumer market and global technologies shift toward digitization. Internet of Things devices, "smart" household products and other digital devices will all rely on MCUs for their processing power as the demand for technologically advanced goods continues to rise – good news for Microchip.


Microchip offers a wide range of MCU products in its portfolio, including MCUs for analog peripherals, core independent peripherals, low-power products and more. If you’re interested in Microchip products, you can find a complete list of their solutions on their website.






TI MSP432 Webinar.

Posted by DAB Apr 30, 2015

Hi All,


I just saw the official TI Webinar on the new MSP432 processor.


The 13 USD Launchpad is very impressive, but the new features of the TI software are awesome.


They evidently spent some time looking at the excellent Cypress Semiconductor software and have upgraded CCS with a lot of very nice user features with simplified control.


Definitely worth looking at.




My PI is Alive!

Posted by DAB Apr 11, 2015

After watching everyone else explore the Raspberry Pi, I finally took the plunge with the RPi 2.


I finally got all the pieces in place, plugged it in and about 15 min later, my RPi 2 was alive and well.


My only complaint was the 6 point type used for the little guide included in the box.


Luckily I bought the camera kit and it came with a real sized guide so I could actually read the text.


Next step is to hook up the camera and wifi.


I have no idea how long these actions will take, but I will give you another post documenting my experience.


Meanwhile may all your Pi's be good.





Flash is the storage technology used inside the thinnest, lightest laptops and nearly every cellphone, tablet and mobile device. With users of these devices constantly demanding more functionality, the amount of NAND flash memory needed has grown accordingly. Traditional planar NAND flash memory, however, is nearing its practical scaling limits, posing significant challenges for the memory industry.

Happily, once again technology is coming to the rescue. Last week, coincidentally on the same day and in separate announcements, Micron/Intel and Toshiba/SanDisk announced the availability of flash cells that are vertically stacked in multiple layers, known as 3D NAND technology. Products using 3D NAND are expected to be able to keep flash storage solutions on track for continued performance gains and cost savings, driving more widespread use of flash storage. This is important because solid state drives (SSDs) employing flash have had a significant impact on computing, but although prices have dropped, the capacities still lag far behind those of traditional magnetic hard drives.

The 3D NAND technology jointly developed by Intel and Micron (who have partnered on NAND flash since the formation of their joint venture in 2006) stacks 32 layers of data storage cells vertically. It uses floating-gate cells – a universally utilized design refined through years of high-volume planar flash manufacturing – and enables what the companies say is the highest-density flash device ever developed: three times higher capacity than other NAND die in production. The immediate result will be seen in gum-stick-sized SSDs with more than 3.5 terabytes (TB) of storage and standard 2.5-inch SSDs with greater than 10 TB of capacity.

Because capacity is achieved by stacking cells vertically, the individual cell dimensions can be considerably larger. This is expected to increase both performance and endurance and make the technology well suited for data center storage. What is more, in the Intel/Micron design a new sleep mode enables low-power use by cutting power to inactive NAND die (even when other die in the same package are active), dropping power consumption significantly in standby mode.

The 256Gb multilevel cell version of 3D NAND is sampling today with select partners, and the 384Gb triple-level cell design will begin sampling later this spring.

Toshiba's 3D NAND structure (which will also appear under the SanDisk label, since the two have a NAND joint venture) is called BiCS, for Bit Cost Scaling. Toshiba's new flash memory stores two bits of data per transistor, meaning it's a multi-level cell (MLC) flash chip, and it can store 128 Gbits (16 GB) per chip. Toshiba said its 48-layer stacking process enhances the reliability of write/erase endurance, boosts write speed, and is suited for use in diverse applications, primarily solid-state drives (SSDs). Sample shipments of products using the new process technology began last Thursday, and Toshiba is preparing for mass production in its new Fab2 at Yokkaichi Operations.


For its part, Samsung last year became the first company to announce it was mass-producing 3D flash chips, which it calls V-NAND. Samsung's chips stack 32 layers of transistors and cram in 3 bits per transistor, in what the industry refers to as triple-level cell (TLC) NAND. Because Samsung uses TLC memory, its chips are said to store as much as Toshiba's 48-layer 3D NAND – 128 Gbits, or 16 GB.

Going forward, these and subsequent 3D NAND announcements could mean SSDs will finally have the density to eclipse hard drives as the primary storage medium in devices that meet most people's needs.
