
Embedded


opening.JPG


Altium's Circuit Studio opens into what looks like a modern CAD system, complete with a design tree. Each project holds a collection of details about it. Click the home tab to navigate the less 'designy' aspects of the software. What I like most is the simple preview of all the design files on the home screen. It borders on tablet/smartphone-level simplicity.

 

A familiar set of PCB layout tools populates the top band of the window. If you have used any other PCB design package, or even general CAD software, the tools will feel familiar. However, familiar-looking doesn't mean intuitive here. I followed the one and only PCB layout example on Circuit Studio's website, since the critical selections I had to make were scattered all over the interface.

 

bowtie schematic.JPG

 

Laying out a simple circuit (2 LEDs, 1 resistor and power contacts) was quite easy. I started in a schematic design window and laid out the parts as simply as possible. I used generic components, which only place a footprint for each part; through-hole packages are the default. Using the Altium component vault, I could have grabbed specific parts instead.

 

Next, I "compiled" the schematic, which lets the schematic be imported as components into the PCB layout window. I did just that and placed the parts on the board. (I also reshaped the board to look like a bow tie. This wasn't intended; I was just seeing how the "board shape" options work, so I went with it.)


bowtie.JPG


After placing the components, I used the manual route option and simply connected the dots. There is an auto-route option too, but for this simple design I didn't think it was necessary. I checked for errors using the "Design Rule Check" button and found none. Done!


I switched to the "3D view" from the view tab. There you go: a design in less than 3 minutes.


bowtie 3d view.JPG

 

From here I can get all the necessary design files for fabrication, but I will leave that step for the next-level review of Altium's Circuit Studio 1.0.

 


Some critiques…

 

Since this is a new program, examples are scarce. On the Circuit Studio documentation page, there is a single example that walks through all the steps of creating a simple PCB. I didn't see much about multi-layer designs, though.

 

I have a tendency to move the cursor to the top bar and select the tool I want to use. However, Circuit Studio keeps the last tool active until you right-click a few times or hit Escape. So, as I move the cursor, the view on the screen pans along with the mouse. I'll just have to get used to it.


My optimism about the simple menu view faded after having to navigate the menu trees to change basic settings, get library files, and so on. I think this is where Circuit Studio should improve in later versions: bring the most common settings to the top of the menus and keep the rest out of sight.

 

While I am at it: everyone who uses this software will do the same things. Make a schematic, lay out a PCB, get the build files. The software should simply walk the user through those steps automatically. Instead, it leaves the user stranded among lots of little options everywhere. As I said above, not all of those little options will be used.

 

Originally, I thought the design was so simple that I could lay it all out in the PCB view and skip the schematic altogether. This proved impossible. I could place component footprints and lay tracks/traces, but it always gave me errors. So, the lesson here: start with the schematic.

 

C

See more news at:

http://twitter.com/Cabe_Atwell

DAB

TI MSP432 Webinar.

Posted by DAB Apr 30, 2015

Hi All,

 

I just saw the official TI Webinar on the new MSP432 processor.

 

The $13 LaunchPad is very impressive, but the new features of the TI software are awesome.

 

They evidently spent some time looking at the excellent Cypress Semiconductor software and have upgraded CCS with a lot of very nice user features with simplified control.

 

Definitely worth looking at.

 

DAB

The world is getting smarter. We are surrounded by talk about smartphones, smartwatches and even smartfridges – all components of the much-heralded internet of things (IoT).

While these devices all incorporate sensors and processors to interpret and display data, there is a pressing need to store the data too.

Smart devices typically store data in NAND flash memory chips, and the price per bit of these chips has fallen dramatically. This can be ascribed in large part to dramatic increases in memory density, which have reduced the amount of silicon needed to store each bit of data.

- You can read the rest of this article on the Toshiba innovation section: http://toshiba.semicon-storage.com/eu/design-support/innovationcentre/tcm0048_eMMC.html

DAB

My PI is Alive!

Posted by DAB Apr 11, 2015

After watching everyone else explore the Raspberry Pi, I finally took the plunge with the RPi 2.

 

I finally got all the pieces in place, plugged it in and about 15 min later, my RPi 2 was alive and well.

 

My only complaint was the 6-point type used for the little guide included in the box.

 

Luckily I bought the camera kit, and it came with a full-sized guide so I could actually read the text.

 

Next step is to hook up the camera and wifi.

 

I have no idea how long these actions will take, but I will give you another post documenting my experience.

 

Meanwhile, may all your Pis be good.

 

DAB

Demand from consumer and mobile markets, automotive and industrial sectors, and emerging Internet of Things (IoT) applications is driving Flash storage technology to aggressively move to smaller and smaller process nodes.

 

 

Unfortunately, many chipsets and NAND host controllers are unable to perform error correction beyond 1-bit or 4-bit ECC, and the cost to update them often makes moving to later process nodes prohibitive.

 

 

Single-Level Cell NAND Flash (SLC NAND), which stores one bit per cell and can endure around 60,000 write/erase cycles, is currently the most widely used Flash technology for these applications.
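The 60,000-cycle endurance figure translates directly into device lifetime once you assume a workload. A rough sketch (the capacity and daily-write figures are illustrative assumptions, not from the article, and perfect wear leveling is assumed):

```python
# Rough lifetime estimate for SLC NAND rated at ~60,000 write/erase cycles.
# Workload figures (capacity, daily writes) are illustrative assumptions;
# ideal wear leveling is assumed, so every cell ages at the same rate.

ENDURANCE_CYCLES = 60_000   # rated P/E cycles for SLC NAND (from the article)
capacity_gb = 1             # assumed device capacity, GB
daily_writes_gb = 2         # assumed data written per day, GB

# With ideal wear leveling, each cell sees (daily writes / capacity) cycles/day.
cycles_per_day = daily_writes_gb / capacity_gb
lifetime_years = ENDURANCE_CYCLES / cycles_per_day / 365

print(f"Estimated lifetime: {lifetime_years:.0f} years")  # ~82 years here
```

Under these assumptions the flash outlives the product by a wide margin, which is exactly why SLC remains the default for boot code and parameter storage in long-lifetime embedded designs.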

 

 

Extending product lifetimes with new NAND flash technologies

Built-in ECC NAND (BENAND™) is a new type of SLC NAND Flash that has an embedded ECC function capable of offloading the burden of ECC from the host processor.

 

Toshiba’s engineers have enabled a number of customers to integrate BENAND into both existing and new designs, delivering the benefits of migrating to the latest device technology while helping avoid the high costs associated with significant system redesign or long-term use of legacy technology.

 

In one case, Toshiba helped a customer achieve a cost-effective lifetime extension for a Bluetooth® hands-free product aimed at the automotive aftermarket. The original design used SLC NAND with 1-bit ECC to store the boot code, OS image, application code, application parameters and user data. However, the availability of that SLC NAND was becoming uncertain.

 

To compound matters, the original custom SoC processor was unable to meet the increased ECC demands of more recent, and more cost-effective, SLC NAND generations.

 

Rather than redesign the device and upgrade the processor, or pay for a longevity support program to guarantee the supply of the original NAND at increased cost, the OEM turned to Toshiba's BENAND.

 

This solution avoided any need to change the design of the device, while enabling the OEM to enjoy the cost benefits of the more efficient 24nm BENAND chips.

 

Pulled from the Innovation centre by Toshiba. You can read the full article below.

 

Extending product lifetimes with new NAND flash technologies | Innovation Centre | TOSHIBA Semiconductor & Storage Produ…

NAND.png

 

Flash is the storage technology used inside the thinnest, lightest laptops and nearly every cellphone, tablet and mobile device. With users of these devices constantly demanding increased functionality, the amount of NAND flash memory needed has grown accordingly. Traditional planar NAND flash memory, however, is nearing its practical scaling limits, posing significant challenges for the memory industry.


Happily, once again technology is coming to the rescue. Last week, coincidentally on the same day and in separate announcements, Micron/Intel and Toshiba/SanDisk announced the availability of flash cells that are vertically stacked in multiple layers, known as 3D NAND technology. Products using 3D NAND are expected to be able to keep flash storage solutions on track for continued performance gains and cost savings, driving more widespread use of flash storage. This is important because solid state drives (SSDs) employing flash have had a significant impact on computing, but although prices have dropped, the capacities still lag far behind those of traditional magnetic hard drives.


The 3D NAND technology jointly developed by Intel and Micron (who have partnered on 3D NAND flash since forming their joint venture in 2006) stacks 32 layers of data storage cells vertically. It uses floating gate cells, a widely used design refined through years of high-volume planar flash manufacturing, and enables what the companies say is the highest-density flash device ever developed: three times the capacity of other NAND dies in production. The immediate result will be seen in gum-stick-sized SSDs with more than 3.5 terabytes (TB) of storage and standard 2.5-inch SSDs with greater than 10TB capacity.

Because capacity is achieved by stacking cells vertically, the individual cell dimensions can be considerably larger. This is expected to increase both performance and endurance and make the technology well suited for data center storage. What is more, in the Intel/Micron design, new sleep modes enable low-power use by cutting power to inactive NAND dies (even when other dies in the same package are active), dropping power consumption significantly in standby mode.


The 256Gb multilevel cell version of 3D NAND is sampling today with select partners, and the 384Gb triple-level cell design will begin sampling later this spring.


Toshiba's 3D NAND structure (which will also appear under the SanDisk label, since the two have a NAND joint venture) is called BiCS, for Bit Cost Scaling. Toshiba's new flash memory stores two bits of data per transistor, meaning it's a multi-level cell (MLC) flash chip, and it can store 128Gbits (16GB) per chip. Toshiba said its 48-layer stacking process enhances write/erase endurance, boosts write speed, and is suited for use in diverse applications, but primarily solid-state drives (SSDs). Sample shipments of products using the new process technology began last Thursday, and Toshiba is preparing for mass production in its new Fab2 at Yokkaichi Operations.


Toshiba.jpg

For its part, last year Samsung became the first company to announce it was mass-producing 3D flash chips, which it calls V-NAND. Samsung's chips stack 32 layers of transistors. V-NAND crams 3 bits into each transistor, in what the industry refers to as triple-level cell (TLC) NAND. Because Samsung uses TLC memory, its chips are said to store as much as Toshiba's 48-layer 3D NAND: 128Gbits, or 16GB.
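The capacity figures quoted for these chips follow from simple arithmetic, which is worth sanity-checking. A small sketch (the cell counts are derived from the stated bits-per-cell, not from the announcements themselves):

```python
# A 128 Gbit die is 16 GB (128 / 8). Dividing die capacity by bits-per-cell
# gives the approximate cell count; these counts are derived figures, not
# numbers from the vendor announcements.

def gbit_to_gbyte(gbit):
    """Convert gigabits to gigabytes (8 bits per byte)."""
    return gbit / 8

# Both Toshiba's 48-layer MLC chip and Samsung's 32-layer TLC chip are 128 Gbit.
assert gbit_to_gbyte(128) == 16.0

mlc_cells = 128e9 / 2   # MLC: 2 bits/cell -> ~64 billion cells
tlc_cells = 128e9 / 3   # TLC: 3 bits/cell -> ~42.7 billion cells
print(f"MLC cells: {mlc_cells:.2e}, TLC cells: {tlc_cells:.2e}")
```

This is why Samsung's 32-layer TLC part matches the capacity of Toshiba's 48-layer MLC part: the extra bit per cell offsets the smaller layer count.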


Going forward, these and subsequent 3D NAND announcements could give SSDs the density to eclipse hard drives as the primary storage medium in devices meeting most people's needs.

silicene_Fig1a2.jpg

Silicene Structure concept art (via UT at Austin)

 

While some researchers are hard at work trying to achieve quantum computing on a chip, scientists from the University of Texas at Austin's Cockrell School are busy making history. The research team recently created an atom-thick transistor made from silicene, a one-atom-thick form of silicon, which may revolutionize computer chips.

 

There had been talk about the development of silicene, but until recently it had never been built into a device. Deji Akinwande, an assistant professor in the Department of Electrical and Computer Engineering, and lead researcher Li Tao successfully built the first-ever silicene transistor last month. The team looked to current graphene-based chip development for guidance, but discovered a major issue at the outset: silicene is sensitive to air.

 

To circumvent this issue, Akinwande and Tao worked with Alessandro Molle of the Institute for Microelectronics and Microsystems in Agrate Brianza, Italy, to construct the delicate material in an airtight space. The team formed a thin silicene sheet by condensing silicon vapor onto a crystalline silver block in a vacuum chamber. The silicene, resting on a thin silver sheet, was then covered with a layer of alumina one nanometer thick. Once protected this way, the team was able to peel the silicene sheet off its base and move it to an oxidized-silicon substrate. The result was a functional silicene transistor joining two sets of metal electrodes.

 

The transistor was only functional for a few minutes before crumbling due to its instability in air. While the transistor's capabilities were rather rudimentary, the UT team successfully fabricated silicene devices for the first time ever using low-temperature manufacturing. As silicon is a common base for computer chips, the researchers are confident that the technology could be adopted relatively easily to make faster, low-energy digital chips.

 

The team of scientists plans to continue its research to develop a more stable silicene chip. Having a super-thin silicene transistor could dramatically enhance the speed of computing, but it isn't without competition. Graphene-based transistors have been under development for quite some time and may also be a solution to the question of how to enhance computing capabilities. Both technologies, however, may fail to surpass the potential power of the newest quantum chip from the Università degli Studi di Pavia in Italy. The chip features entanglement capabilities, potentially allowing an entire network to function as one unit. The new technology may also make cyber threats a thing of the past.

 

At present, emerging chip technologies are all still in need of further development before they are ready to hit the market. No one knows which technology will prevail, but it certainly is exciting.

 

The Cockrell School’s Southwest Academy of Nanoelectronics, the U.S. Army Research Laboratory’s Army Research Office and the European Commission’s Future and Emerging Technologies Programme funded the University of Texas at Austin-based project.

 

C

See more news at:

http://twitter.com/Cabe_Atwell

photon-entanglement-ring-resonator.jpg

Photon Entanglement Ring Resonator visualization (via Davide Grassani, Stefano Azzini, Marco Liscidini, Matteo Galli, Michael J. Strain, Marc Sorel, J. E. Sipe, and Daniele Bajoni)


As IBM readies its brain-like computer-on-a-chip for mass production, the Università degli Studi di Pavia in Italy is making history: it just built the very first chip capable of entangling individual light particles. The new technology may inspire a host of novel computing innovations and quite possibly put an end to cyber threats as we know them.

 

Entanglement is an essential quantum effect that links two particles, regardless of distance. This means that anything done to one particle will be instantaneously reflected in the other, even if it is at the other end of the universe. Entangling photons isn't a new technology, but researchers at the Università degli Studi di Pavia, including paper co-author Daniele Bajoni, made history by successfully scaling the technology down to fit on a chip.

 

Researchers have been trying to scale down entanglement technology for years. Typically, the technology is harnessed through specialized crystals, but even the smallest setup was still a few millimeters thick. Bajoni and his team took a different approach and instead built what they call micro-ring resonators onto an ordinary silicon chip. The resonators are coils embedded in silicon wafers that capture and re-release photons. The design achieves entanglement in a device just 20 microns across, one-tenth the thickness of a strand of human hair.

 

The technology has huge implications for computing, as entanglement can exponentially increase computing power and speed. Computing communication can become instantaneous, as can other communication technologies. Tweeting at the speed of light, anyone? While these potentialities for advancements in computing are impressive, the biggest impact it may make is in inhibiting cyber threats.

In entanglement, particles act as one cohesive unit. Hackers operate by identifying weaknesses in computer and information systems and exploiting them. If computing and information systems, however, operate as one cohesive unit, there would be no way through which a hacker could breach the system, thus eliminating cyber threats. Sorry Dshell analysts.

 

The new quantum chip is potentially far more powerful than even the most cutting-edge supercomputers around today. It could revolutionize communication, computing and cybersecurity by enabling the adoption of quantum technologies, such as quantum cryptography and quantum information processing. When we can expect to see this technology rule supreme, however, is another subject entirely.

 

Bajoni believes the technology is the connector through which innovation technologies can begin harnessing quantum power on a small scale, but others disagree. Some believe ring resonators must be produced on a nanoscale first to compete with up-and-coming nano-processors. Only time will tell, but our bet is cybersecurity stakeholders, at the least, will begin looking into the chip’s development. Until quantum mobile communication is available, however, you’ll just have to upload your social media photos like everybody else, 3-4GBs at a time.

 

C

See more news at:

http://twitter.com/Cabe_Atwell

bb0.png

PowerBar installed (via Andice Labs)

 

If you've ever thought of designing a BeagleBone-based vigilante robot that fights crime in the rural Mojave Desert using only battery power, now you can with Andice Labs' PowerBar. The PowerBar was designed exclusively for the BeagleBone open hardware computer and enables it to run fully on DC, or battery, power. Portability is inspiring.

 

bb1.png

PowerBar attached to BeagleBone (via Andice Labs)

 

The PowerBar is a "micro cape" power supply that lets the low-power BeagleBone single-board computer run from anywhere, even outer space (cue Twilight Zone theme song). The battery pack supplies a regulated 5V to the computer and even offers 15V over-voltage protection and reverse-voltage protection to guard against surges and miswiring. It's a simple power pack that works with both the BeagleBone White and Black.

 

bb2.jpg

BeagleBone White (via BeagleBoard)

 

BeagleBoard's BeagleBone is a Linux-based single board computer that can run Android and Ubuntu. The White version comes equipped with an AM335x 720MHz ARM processor, 256MB of DDR2 RAM, a 3D graphics chip, an ARM Cortex-M3 and two PRU 32-bit RISC CPUs. The BeagleBone Black was made with developers in mind and features double the power, with 512MB of RAM, 4GB of 8-bit built-in eMMC flash memory and a NEON floating-point accelerator. Both computers offer USB, Ethernet and HDMI connectivity, and also run the Cloud9 IDE and Debian. What makes the platform unique is its open hardware design.

 

bb3.jpg

BeagleBone Black (via BeagleBone)

 

Open hardware designs take open source to a whole new level. Not only are the software platforms completely open to developers, the designs are too. That means you can buy a BeagleBone Black, or you can go directly to the BeagleBoard website and find the instructions for building your very own. Open hardware is developed for the love of innovation and to raise up the next generation of tinkerers. My only critique of this cape is that I could do the same with an external cell-phone battery pack; there are countless battery bricks out there too.

 

The development of the PowerBar now allows us to take our innovations on the go. Remote locations all over the world can now gain access to the unscripted power of the BeagleBone. If you take a lead from one tinkerer, you can power your very own brewery using the mini computer. Even the pirates in the Mojave Desert would raise a glass to that.

cpulse.jpg

The cPulse is seen in action being used as a home rave device (via Codlight)


The French company Codlight Inc. is currently seeking funding on Kickstarter to produce one of the first fully customizable LED smartphone cases. While the prospect of becoming a walking, breathing billboard advertisement doesn't particularly appeal to me, I must give Codlight credit for the multitude of features and uses it offers.

 

The company certainly left no stone unturned when programming the cPulse smartphone case for a variety of uses. The cPulse LED case can act as everything from a notification banner, to a homemade rave device, to a form of light therapy. It can even mimic a good old-fashioned analog clock radio.

 

The cPulse uses a panel of 128 high-efficiency LEDs powered by the smartphone battery and controlled by a custom app that lets the user set different commands, modes and notifications, and create customizable light shows set to music.

These light displays sap battery power at a rate of about 7% per hour, so you may want to have quarters on hand if you need to call someone on short notice. (Remember payphones?)
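That 7%-per-hour figure implies a rough upper bound on runtime, which is easy to work out. A quick sketch (the calculation ignores the phone's baseline drain, so real-world runtime would be shorter):

```python
# At the quoted drain of ~7% of battery per hour, a full charge running the
# light show continuously lasts about 100 / 7 hours. Baseline phone drain is
# deliberately ignored here, so treat this as an optimistic ceiling.

drain_pct_per_hour = 7
hours_full_to_empty = 100 / drain_pct_per_hour

print(f"~{hours_full_to_empty:.1f} hours of continuous light show")  # ~14.3 hours
```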

 

The LED light panel and the smartphone case are 3D printed by Sketchfab and Sculpteo. Kickstarter backers who pledge at least $79 to the Codlight campaign will receive a kit that allows them to 3D print their very own cPulse case. Donors who are a bit more generous, pledging at least $89, will receive a fully functioning cPulse case delivered to their home.

 

At the moment, the case is specifically made for smartphones running Android 4.4; however, if the project gets off the ground, its easy customization could allow anyone to own a cPulse.

 

I must say, I am still pretty impressed by the functionality of this device, even though it is entirely unnecessary and a product of a culture of consumption and excess.

 

For now, Codlight is asking for no paltry sum, with a pledge goal of $350,000. They are currently nowhere near the goal, but still have about a month left to raise over a quarter of a million dollars.

 

If you are obsessed with bright, shiny objects and want to blind and dazzle those around you, you can get your very own cPulse from Kickstarter.



C

See more news at:

http://twitter.com/Cabe_Atwell

onbeep.png

A real-life Star Trek communicator for $99 (via OnBeep)


OnBeep is a San Francisco start-up that recently unveiled its Onyx communicator to technocrats in New York, sparking buzz. OnBeep is only one year old, but it raised $6.25 million in early 2014 to develop the Onyx: a device that lets you communicate with groups of people at the touch of a button.

 

The working, finished product was only unveiled early last month, but Business Insider, CNN, Forbes, and Wired already have something to say about it. The design is meant to be worn on any type of clothing, handbag or belt, or even carried in your pocket. The ease of talking at the push of a button was inspired by Star Trek, so your LARPing adventures can be fortified by this device for sure.

 

In practice, the Onyx seems like an expensive, stylish speakerphone in the style of a walkie-talkie. In terms of hardware and design, that is basically exactly what it is. But co-founder Jesse Robbins notes that it does more: it allows a group of people to work together and stay focused on the task at hand. Both Robbins and OnBeep CTO Greg Albrecht have experience in emergency situations as firefighters and EMTs. Hence, the Onyx really makes sense when you need to communicate in real time with a group of colleagues and can't afford to waste time messing around with a phone.

 

The cool thing about the Onyx is that, in theory, it allows you to collaborate with anyone around the world. For now, radio frequency regulations mean that people outside the US can't technically buy the Onyx. Considering the amount of funding OnBeep has raised, it seems like only a matter of time before the Onyx is available everywhere. The device can currently be pre-ordered for expected release in December 2014. At $99, the Onyx seems a bit steep for an extension of your smartphone, but I can see how it could be super helpful depending on your job environment.

 

I can certainly see businesses adopting this technology as a new part of team management, cutting the time and space between employees. Perhaps this is why so many business gurus are interested in the technology: it enables people to work together, in real time, outside of boring meetings.

 

The Onyx works by using Bluetooth to sync with your smartphone. To take advantage of its capabilities, you must download the OnBeep smartphone app, currently available for iPhone and Android. The Onyx then uses your wireless data or WiFi connection to reach your networks and stay connected. The app allows you to manage your groups, see who's available, and see where every member of your team is located (if you are worried that Tom forgot the dip, for instance).

 

You can talk to up to 15 people at once with the Onyx, and you can create as many groups as you like. The platform works regardless of network carrier; however, it is only compatible with iPhone and Android at the moment.

 

C

See more news at:

http://twitter.com/Cabe_Atwell

Hello Everyone!  Just trying to network a little bit, on my breaks...

 

So, my team has developed some cool products that are actually being utilized out there in the market.

 

  • EISS™ Virtual Top Node Server

          http://energy.ipkeys.com/products/autodr/eiss/

 

 

  • EISSBox - OpenADR 2.0b Certified Virtual End Node

         http://energy.ipkeys.com/products/autodr/eissbox-ven-hardware/

 

  • EISSClient Software Platform

         http://energy.ipkeys.com/products/autodr/eissclient-software/

 

  • EISSBox Data Logger (for Data Logging / Telemetry Endpoint)

         http://energy.ipkeys.com/products/autodr/telemetry/

         

 

 

If you're close to the Eatontown, NJ area, we're always looking for Embedded Systems and Raspberry Pi experience on our team - growing pretty fast!  We have some real neat stuff going on.  Definitely let me know if you're interested.

 

 

 

EISS.jpeg

 

ipk-light.png


This article was first published on embedded beat (Freescale blogs)

 

A mixed environment system is one where a multicore system runs a combination of a real-time operating system and a feature-rich operating system. It's not a new concept, and there are many examples of such products in the industry today, particularly in automotive and high-end industrial applications. These devices are feature-rich and highly user-interactive, but must respond quickly and reliably to system-level events that drive critical operation of the device.

 

After presenting on the topic earlier this month at ARM TechCon, I was energized to see the level of interest in heterogeneous processing for mixed environment use cases. What's new is that the underlying hardware architecture for a mixed environment use case, if implemented correctly, can now be used to solve new design challenges, like improving the energy efficiency of devices that need to stay connected and continuously monitor environmental inputs. Such a device does not need to be in a high-level state of operation when it is essentially just maintaining a network connection (Wi-Fi, Bluetooth, others) and processing sensor inputs, with no heavy processing required. But the device must also be able to quickly elevate to a higher state of processing when needed.

 

Split-Shared-Topology

 

What I talked about in my session was the challenge of implementing this type of heterogeneous architecture in a single-chip solution that provides system flexibility without sacrificing system integrity. System flexibility means that both cores can access all peripherals and shared memory, which ultimately allows the system to adapt to new application use cases. However, a shared bus topology like this also means the architecture must provide a way to configure and enforce the safe sharing of system resources.
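The idea of configuring and enforcing resource sharing can be made concrete with a toy model: each core is granted access to a configured set of peripherals, and any access outside that set is rejected. This is a conceptual sketch only, not Freescale's actual hardware mechanism or API; all names here are illustrative.

```python
# Toy model of safe resource sharing between heterogeneous cores: a central
# permission table decides which core may touch which peripheral. Real SoCs
# enforce this in hardware; this sketch just illustrates the policy shape.

class ResourceController:
    def __init__(self):
        self.permissions = {}  # core name -> set of allowed peripheral names

    def grant(self, core, peripheral):
        """Configure: allow `core` to use `peripheral`."""
        self.permissions.setdefault(core, set()).add(peripheral)

    def access(self, core, peripheral):
        """Enforce: reject any access not explicitly granted."""
        if peripheral not in self.permissions.get(core, set()):
            raise PermissionError(f"{core} may not access {peripheral}")
        return f"{core} -> {peripheral}: ok"

ctrl = ResourceController()
ctrl.grant("cortex_a9", "ethernet")  # feature-rich OS core owns networking
ctrl.grant("cortex_m4", "adc")       # real-time core owns sensor input

print(ctrl.access("cortex_m4", "adc"))
```

An access such as `ctrl.access("cortex_m4", "ethernet")` raises `PermissionError`, which is the software analogue of the hardware enforcement described above.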

 

What is the ultimate benefit of this type of heterogeneous architecture? A more energy-efficient, system-aware device that can provide a feature-rich user experience without sacrificing real-time responsiveness.

 

Where does Freescale fit in?


Freescale is no stranger to multicore and heterogeneous processing, and earlier this year we announced that this architecture is coming to the i.MX 6 series with the first applications processor to integrate an ARM® Cortex®-A9 core with a Cortex-M4 core in a single-chip design. Heterogeneous processing will bring new applications and new levels of scalability to the i.MX 6 series, which already has a broad footprint and acceptance in the embedded market.

You can see more on the next generation of the i.MX 6 series in this short informational video. (Full product disclosure coming in Q1 2015.)

 

Amanda McGregor is a product marketer for i.MX applications processors.

This article was first published on embedded beat (Freescale blogs)

 

Some Kinetis MCUs are designed to deliver industry-leading new technologies, others are optimized to solve specific problems, while still others are designed to appeal to the largest number of engineers and please everyone.

 

What type of Kinetis MCU are you most like?

 

Take this short Kinetis MCU Personality Quiz to find out!

 

What type of Kinetis MCU are you?


Kathleen Jachimiak is a Kinetis MCU product marketer.

This article was first published on embedded beat (Freescale blogs)

 

It's not just about performance and integration: the ARM Cortex-M4 based Kinetis K Series brings world-class low-power modes and a comprehensive set of development tools and software, helping you save precious time and resources.

 

If you haven't noticed, the Kinetis portfolio is vast, from general embedded, to ultra-low power, to a wide range of application-specific MCUs. As for the Kinetis K Series, we've homed in on general embedded applications. Need USB? We've got that. Ethernet? Crypto? Yep. Graphic LCD? Ditto. And there's more. Kinetis K devices range in flash size from 32KB to 2MB, with up to 256KB of onboard SRAM and a broad range of peripheral combinations for measurement and control, connectivity and security.

 

Selector Guide

 

Here’s the family lineup:

K0x Entry-level MCUs
K1x Baseline MCUs
K2x USB MCUs
K3x Segment LCD MCUs
K4x USB & Segment LCD MCUs
K5x Measurement MCUs
K6x Ethernet Crypto MCUs
K7x Graphic LCD MCUs

How do you decide which device is best for your design with what seems like endless options?

We’ve helped make the selection process easier with the Kinetis K Selector Guide.

Try it out and let me know what you think.
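The selection process the guide automates is essentially feature filtering against the family lineup above. A toy version of that filtering (the feature table is paraphrased from the lineup in this post, not an official Freescale data source):

```python
# Minimal sketch of selector-guide-style filtering: map each Kinetis K family
# to a feature set (paraphrased from the lineup above), then return families
# covering every required feature. Illustrative only, not an official tool.

FAMILIES = {
    "K0x": {"entry-level"},
    "K1x": {"baseline"},
    "K2x": {"usb"},
    "K3x": {"segment-lcd"},
    "K4x": {"usb", "segment-lcd"},
    "K5x": {"measurement"},
    "K6x": {"ethernet", "crypto"},
    "K7x": {"graphic-lcd"},
}

def candidates(required):
    """Return family names whose feature set covers every required feature."""
    return sorted(name for name, feats in FAMILIES.items() if required <= feats)

print(candidates({"usb"}))  # both the K2x and K4x families offer USB
```

In the real selector guide you would, of course, also filter on flash size, SRAM and pin count; the set-cover check above is just the core idea.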

 

Justin Mortimer is a Kinetis Product Owner
