
Industrial Automation

40 posts

Airbus recently moved the aircraft from the company’s headquarters to a new home in Pendleton, OR. This (apparently) is the future of taxis (Photo from Airbus)


If you’ve ever thought those flying taxis from The Fifth Element were cool, you may be interested in Airbus’ latest project. The company’s Vahana project is its attempt to build a self-flying taxi network. While there’s still a lot of work to do before this becomes a reality, the company recently hit a major milestone: Airbus is now ready to test its flying taxi, having moved the aircraft from the company’s headquarters to a dedicated hangar in Pendleton, OR.


The task of moving an entire aircraft sounds daunting, but the team reveals in a blog post that it was a relatively simple process. They disassembled the entire aircraft and loaded it into a truck. When the truck arrived in Pendleton, the aircraft was reassembled and prepared for its first test flight. The entire process took less than a day; the aircraft is designed to be taken apart and put back together quickly. From there, the team installed the high-voltage power system and motors that will give the Vahana flight.


The aircraft they’re working on looks nothing like a car. Instead, it looks like a small futuristic plane, something you’d see in a sci-fi movie. It lifts off like a helicopter but flies like a regular plane, which is ideal for short trips carrying about two people. And while there will be a pilot of sorts in the aircraft, the Vahana will rely on a computer to stabilize the flight. So the aircraft won’t be fully autonomous, but that’s something the company is working on.


The idea is to have passengers hop on the aircraft from someplace like an airport and be flown to their destination for about as much as a regular taxi costs. In order to keep the costs low, you’ll have to share a ride with other people, so that’s something to keep in mind. Any luggage you may have will be delivered by another service. And to make sure hackers can’t get into the system, the company is making sure to have tight security with a service they call zenCyber.


As if that wasn’t enough, Airbus is also working on a drone delivery service. Similar to Amazon’s drone service, the vehicles fly around with the cargo in “aerial corridors” and drop off the goods and send delivery notifications to the customer.  They hope that this project “increases acceptance for passenger flight testing, thus giving a boost to urban air vehicle projects.”


You gotta admit, flying taxis sound pretty cool, but what are the benefits? Presumably, it’s supposed to be faster than your average taxi. And even though it’s something we’ve all dreamed about since The Jetsons, a good number of people still won’t be so eager to do more flying. Hopefully, Vahana’s tests will be successful, and we’ll start seeing these aircraft zipping through the air. Now, if only someone can start working on those long-promised hover cars.




I have been in industrial maintenance for 35 years, working on electrical controls for over 30 years. I design, build, and repair controls; build automation machinery; perform OSHA updates; retrofit existing machinery; and design and build controls for machines customers build in-house or purchase without controls.


I am now operating Aabeck Controls LLC in Westland, Michigan, performing services all over the Detroit metropolitan area and southeastern Michigan. I have done service in Ontario, Canada in the past, but due to visa requirements I no longer service Canada. I have mostly worked on industrial and automotive machinery, but also have experience in quite a few other areas. I have also been programming old PLCs I have acquired for home automation, holiday lights, and even one that modifies the operation of all the exterior lights on an SUV so they work the way I want (as in, don't turn off the low-beam headlamps when the high beams are turned on) and adds a light-show function for when it is parked at a show or exhibit.


I perform emergency service 24/7 and can do online diagnostics and programming through TeamViewer.


Contact me at (313) 283-7140 or email

Aeroscraft is updating the airship and hopes to bring major changes to freight shipping. The future of airships or a UFO? (Photo from Aeroscraft)


You know what people never get nostalgic for? Blimps. At one point they were seen as the future of air travel, but disasters such as the Hindenburg accident saw their popularity decrease. And as airplanes improved, they soon took over the skies. Now, if you see a blimp it usually has some banner advertisement attached to it. The Van Wagner group, an airship organization, estimates there are only 25 blimps in operation. Igor Pasternak wants to change this and has a plan to make blimps viable again.


Pasternak, who studies airships in his spare time, made a breakthrough in zeppelin technology in the form of the COSH system, short for Control of Static Heaviness. It works by rapidly compressing helium into storage tanks, which makes the airship heavier than air. This would allow airships to land on any flat area without relying on ground teams, increasing their versatility while reducing cost.
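The physics behind COSH is plain buoyancy: helium free in the envelope displaces air and provides lift, while helium compressed into tanks keeps its mass on board but stops displacing air. A back-of-the-envelope sketch in Python (the envelope volume and sea-level densities are illustrative assumptions, not Aeros figures):

```python
AIR_DENSITY = 1.225  # kg/m^3 at sea level
HE_DENSITY = 0.166   # kg/m^3 at sea level, uncompressed

def net_lift_kg(envelope_m3, compressed_fraction):
    """Net static lift (kg) of the helium system.

    Compressing a fraction of the helium into tanks shrinks the volume
    of air being displaced, while the helium's mass stays on board.
    """
    helium_mass = HE_DENSITY * envelope_m3  # total gas mass, unchanged
    displaced_air = AIR_DENSITY * envelope_m3 * (1.0 - compressed_fraction)
    return displaced_air - helium_mass

# Hypothetical 100,000 m^3 envelope: compressing 30% of the helium
# sheds roughly a third of the static lift, letting the ship settle.
print(net_lift_kg(100_000, 0.0))  # ~105,900 kg of lift
print(net_lift_kg(100_000, 0.3))  # ~69,150 kg
```

The real system also juggles ballonets and trim, but the sign of the effect is the same: compress helium and the ship gets heavier; release it and lift returns.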


But even Pasternak knows blimps won’t take over passenger airlines, mainly because they’re so slow. He does think these airships can bring changes to freight shipping, though. Pasternak and his company, Worldwide Aeros Corp, are working on a prototype airship dubbed the Aeroscraft. The new airship can haul up to 66 tons at a speed of 120 knots with a range of over 5,000 miles. The company also hopes to work on a larger version that hauls 250 tons.


Though the airship will be three times as fuel efficient as shipping by airplane, it won’t be as efficient as land or sea shipping. Still, the company thinks it’ll be a breakthrough for cargo shipping. They even think it could have some limited passenger applications. Aeros representative John Kiehle says the airship can be used for tourist trips and may even have a practical use for passengers in rural areas.


Seems like Pasternak isn’t the only one reimagining blimps. A similar project started by Google, dubbed Project Loon, uses a network of balloons to provide internet access to people in remote and rural areas. The balloons travel along the edge of space and relay high-speed internet from telecommunications partners on the ground, balloon to balloon, before sending it back down to users. Recently, Project Loon helped restore internet and cell service to Puerto Rico after the devastating hurricane.


So, while you shouldn’t expect to fly home for the holidays on a blimp, you may start seeing them in the skies more, and this time they won’t be flying cheesy banners behind them.


Hmm… Luxury flights on a dirigible? Like in The Last Crusade? I’m all for it.


Have a story tip? Message me at: cabe(at)element14(dot)com


By finely tuning the space between a single layer of nanoparticles, researchers developed a filter that changes from a mirror to a window. (Image credit Imperial College London)


Researchers at Imperial College London have developed a unique filter that can change from a reflective state to a clear state and back again in real time: essentially a mirror that transforms into a window and back again on demand. It’s similar to the way Innovative Glass’ eGlass works, applying a low current to nanoparticles to alter their orientation and make clear glass instantly fog for privacy, like in spy movies.


Detailed in a recently released paper published in Nature, the researchers developed their Electrotunable Nanoplasmonic Liquid Mirror for applications ranging from new sensors to super lenses and much more. Achieving this goal proved a challenge for materials scientists, since those changes require precise control of the nanoparticles in the filter, in this case gold nanoparticles.



To develop their ‘tunable mirror,' the researchers used a single layer of gold nanoparticles and localized them at the interface between two immiscible electrolyte solutions (liquids that don’t mix). When a small voltage (±0.5 V) is applied, the tunable particle layer becomes dense or sparse, switching between a reflective and a transparent state. More precisely, when the particles are closer together, they create a mirrored surface, and when further apart, the layer becomes clear, allowing light to pass through.



How the Electrotunable Nanoplasmonic Liquid Mirror is designed. (Image credit YouTube screenshot)


“It’s a really fine balance – for a long time, we could only get the nanoparticles to clump together when they assembled, rather than being accurately spaced out. But many models and experiments have brought us to the point where we can create a truly tuneable layer,” stated Professor Joshua Edel, co-author of the paper.


While the researchers were not the first to manipulate nanoparticle arrays, they were the first to develop a system that’s reversible, switching from one state to the other with a precise level of uniformity. As demonstrated in the video above, when a voltage was supplied, the material reflected a 1-pound coin situated above it; when that voltage was taken away, it revealed a ten-pound note below the array. It’s important to note that the filter is still under development, but it could lead to breakthroughs in chemical-based sensors, optical filters that focus on certain light wavelengths, or novelty mirrors that let you spy on people.


Have a story tip? Message me at: cabe(at)element14(dot)com


John Deere will tap into AI for more efficient crop sprayers. (Image credit John Deere)


AI isn’t limited to robots, computer science, or education; it’s being implemented in nearly all facets of industry, including farming. Of course, its application in this area is hardly new either: it’s being tasked with helping grow better tomatoes, identifying crop diseases, and analyzing satellite data to help farmers become prosperous (among a few others). Legendary farming equipment manufacturer John Deere is also utilizing AI to enhance its agricultural spraying equipment.


The farming company recently announced the acquisition of Blue River Technology, an institution that deals in AI machine learning and computer vision for efficient farming. In the press release, Deere CIO John May stated, “We welcome the opportunity to work with a Blue River Technology team that is highly skilled and intensely dedicated to rapidly advancing the implementation of machine learning in agriculture. As a leader in precision agriculture, John Deere recognizes the importance of technology to our customers. Machine learning is an important capability for Deere's future.” He backed up that statement with $305 million in capital to acquire the AI company, a deal expected to be completed later this month.



So, what exactly does Blue River bring to the John Deere table? A company that specializes in integrated computer learning and vision that will help farmers reduce their use of pesticides and herbicides by spraying only the portions of their fields that are affected rather than the entire plot. The system is capable of identifying individual plants and can determine how best to proceed based on what it sees: pull or spray the weeds while leaving the crops alone.


Deere’s purchase of Blue River seems akin to the company’s purchase of NavCom Technology back in 1999, which gave it GPS technology to better map farmers’ fields, ultimately reducing the time needed to plant, tend, and harvest crops. Most likely, Deere is looking to capitalize on smart/automated farming, which we’ll need if we’re going to feed 10 billion people by 2050.


Blue River has already successfully tested its AI platform with the LettuceBot, a robotic platform that rolls through fields photographing 5,000 plants per minute, using the software to identify ‘friend (cabbage) or foe (weeds).' It will be interesting to see what John Deere will be capable of in the next five or even ten years as Blue River continues to advance its machine learning efforts beyond what’s available today.


Have a story tip? Message me at: cabe(at)element14(dot)com


(Image credit Pexels)


It’s not unheard of for Li-ion batteries to catch fire or explode when damaged or poorly designed; just look at Samsung’s recent debacle with the Galaxy Note 7, and you can see why. The problem with most catastrophic failures in these types of batteries is heat, or rather a buildup of heat resulting from damage or a short circuit, which causes a chain reaction that can’t be cooled sufficiently to stop it.


As the battery’s electrodes charge and discharge, ions move from one to the other, and organic electrolytic chemicals help make it easy for those ions to travel back and forth. Unfortunately, those electrolytes have a tendency to become volatile when a crap-ton of heat is introduced, causing them, in some cases, to boil and catch fire.


Sounds scary, doesn’t it? Failures, fires, and explosions often happen when we least expect them, even more so when you consider Li-ion batteries are everywhere: mobile devices, jumbo jets, your favorite Tesla Model S, solar arrays and much more. Suffice it to say, we rely on these batteries to live our daily lives and would prefer them to be as fireproof as possible.


Researchers from the University of Maryland may have a solution to the pyro problem: replacing the organic-based electrolyte with a water-based version. Fireproof water-based electrolytes are nothing new, but they have trouble producing enough voltage to be relevant, meaning they’re not very powerful. UMD’s version, however, is capable of producing 4 volts, as much juice as some organic-based electrolytes.


Water-based electrolytes aren’t without their problems either. Chunsheng Wang (co-author of a recent paper outlining the development of the new battery) and his team worked with the US Army to develop a 3-volt water-based electrolyte battery, and while it was successful, it degraded one of the electrodes, resulting in reduced energy storage capacity. Wang and his team overcame this issue in the 4-volt version by using a solid coating to protect the electrodes from degrading.


In Li-ion batteries that feature organic-based electrolytes, some of the chemical decomposes, creating a protective layer on the surface of the electrode known as a Solid Electrolyte Interphase (SEI), which doesn’t happen with water-based electrolytes. The solution to that problem is literally in the solution: Wang and his team dosed the water with enough salt to kill slugs, thereby creating an SEI to protect the electrodes and thus allowing the battery to hold more energy.


Alas, this new battery has one last issue to overcome, as it only lasts for about 70 cycles of recharging, whereas most commercial manufacturers require batteries that can last for 500 cycles or more. This is the issue Wang and his team are currently working to overcome; considering they solved the combustion issue, I have no doubt Wang will be successful in his endeavor.


Have a story tip? Message me at: cabe(at)element14(dot)com


MIT’s RFly system uses autonomous drones to relay signals emitted by a standard RFID reader to track inventory. (Image credit MIT)


Implementing RFID in supply chain management was supposed to make tracking inventory a whole lot easier; however, in 2013 Walmart reported a $3 billion loss due to product mismanagement. Even the US Army suffered warehouse inventory losses to the tune of $5.8 billion between 2003 and 2011. A 2016 DoD audit also found the Army lost $1 billion worth of weapons and equipment in Iraq and still has no idea where it went due to poor tracking.


MIT may have just solved that costly inventory bleed by using drones to take over the tracking process, with a novel approach to onboard RFID they’ve codenamed RFly. The new system allows small, safe drones to fly around and read RFID tags and their locations from tens of meters away, with an average localization error of around 19 centimeters.


The research team encountered several notable issues during the development of their RFly system, most notably drone size. Most drones that can be safely operated among humans are on the small side so that they won’t inflict any damage; this makes them too small to carry an RFID reader. To overcome this issue, they used the drones themselves to relay signals emitted by a standard RFID reader to track the inventory.


Not only would this fix the safety problem posed by using large drones, but it also means the drones could be deployed alongside existing RFID systems already in place without the need for new tags, readers or even software: two birds with one stone.


This fix, however, created additional obstacles to overcome. Since RFID tags are powered wirelessly by the reader, both transmit on the same frequency simultaneously. Throwing a relay system into the mix compounds the problem: you now have two more frequencies fighting to be king of the hill, making it a foursome in a system battle royale.


Now add the issue of finding, or localizing, the RFID tags, and the problem grows bigger, as the platform uses an antenna array to do so. When antennas are clustered together, a signal broadcast at an angle arrives at each of them at slightly different times, meaning the signals received by the array are slightly out of phase. It’s from those phase differences that the software can locate where the transmission originated, which is key for the drones.


Since the drones are constantly moving and taking readings at different time increments from different locations, it simulates that multi-antenna array, providing the ability to effectively grab signal location. To separate the signals (those emitted from the reader and tags), the researchers outfitted the drones with an analog filter. The low frequency emitted from the tag is then coupled with the base frequency resulting in location identification.
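The phase-difference localization described above follows the standard two-element array relationship Δφ = 2πd·sin(θ)/λ. A minimal Python sketch (the 915 MHz carrier and half-wavelength spacing are generic UHF RFID assumptions, not RFly’s actual parameters):

```python
import math

C = 3.0e8          # speed of light, m/s
FREQ_HZ = 915e6    # US UHF RFID band
WAVELENGTH = C / FREQ_HZ  # ~0.328 m

def phase_difference(angle_rad, spacing_m):
    """Phase lag between two antenna positions for a far-field source."""
    return 2 * math.pi * spacing_m * math.sin(angle_rad) / WAVELENGTH

def angle_of_arrival(delta_phi, spacing_m):
    """Invert the phase lag back to a bearing; spacing <= lambda/2
    keeps the inversion unambiguous."""
    return math.asin(delta_phi * WAVELENGTH / (2 * math.pi * spacing_m))

# Round trip: a tag 20 degrees off boresight, readings lambda/2 apart
# (for RFly, the two "antennas" are one drone at two points on its path).
spacing = WAVELENGTH / 2
recovered = angle_of_arrival(phase_difference(math.radians(20), spacing), spacing)
print(math.degrees(recovered))  # ~20.0
```

A moving drone gives many such measurement pairs along its path, which is what lets the software fuse bearings into an actual position estimate.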


It’s the researchers’ hope that their new RFly system will be deployed in large warehouses for continuous monitoring of product inventory, preventing inventory mismatches and loss while allowing employees to focus more on customer demands.


Have a story tip? Message me at: cabe(at)element14(dot)com


UMD engineers develop first biocompatible ion current battery. (Image credit UMD)


UMD engineers have designed a new type of ion current battery that’s completely biocompatible. The new battery produces the same ion-based energy used by humans and other living organisms, and those ions (in the form of sodium, potassium and other electrolytes) are the electrical signals that do everything from powering the brain to flexing muscles.

To get a better sense of what’s going on here, we need to look at how a traditional (electrochemical) battery functions. Chemical reactions involving an electrolyte cause a buildup of electrons on the anode (negative), creating a potential difference with the cathode (positive). Think of it as an unstable buildup of electrons that want to rearrange themselves to get rid of that difference, but the only place to go is the cathode. The electrolyte keeps those electrons from crossing directly, so they must travel through an external connection between the anode and cathode, completing an electrical circuit. The new biocompatible battery, on the other hand, uses electron movement to produce a flow of ions to generate power.



The new battery using grass as the ionic cable. (Image credit UMD)


UMD’s professor of materials Liangbing Hu, who headed the battery’s development describes how it functions compared to a traditional electrochemical battery, stating, “In our reverse design, a traditional battery is electronically shorted (that means electrons are flowing through the metal wires). Then ions have to flow through the outside ionic cables. In this case, the ions in the ionic cable - here, grass fibers -- can interface with living systems." (Cited here)


Yes, you read that right: the new battery uses grass as the medium to store energy rather than an electrolyte. More accurately, it uses Kentucky bluegrass coated with a lithium-salt solution, as the channels normally used to move plant nutrients up and down proved ideal for holding the ion-producing solution.



Demonstration of the new battery in a biosystem. (Image credit UMD)


The team’s demonstration battery features two glass tubes packed with ion exchange membranes and a blade of solution-soaked grass, connected together using a thin wire. That wire is where the electrons flow, moving from one end to the other while slowly dissipating energy, while a pair of metal tips at the other ends of the glass tubes is where the ion current flows.

To prove that ionic flow, the researchers connected the glass tubes to the ends of a lithium-soaked cotton string with a deposit of blue-dyed copper ions placed in the middle. When the current started flowing, that deposit began moving toward the negatively charged glass pole, thus proving the ionic current.


The team has high hopes for their new battery and envision them being used for a number of applications, including the micro-manipulation of neural activities to prevent or treat people with Alzheimer's disease and depression. They also plan to diversify the types of ion batteries they can produce by using different ionic conductors including cellulose, hydrogels, and polymers. 


Have a story tip? Message me at: cabe(at)element14(dot)com


A team of researchers at Berkeley Lab has discovered that a seaweed derivative can stabilize lithium-sulfur batteries. The seaweed derivative acts as a binding agent for the sulfur (Photo via Berkeley Lab)


Our search for the most reliable battery never seems to end. While lithium-sulfur batteries are well suited to powering gadgets, vehicles, and grid applications, their biggest drawback is lifespan: the sulfur dissolves, making the batteries unreliable. But a team from the Department of Energy’s Lawrence Berkeley National Laboratory believes it may have stumbled on a solution to extend their lifespan, and it involves seaweed.


The team, led by Gao Liu, discovered that carrageenan, a derivative of red seaweed, can stabilize a lithium-sulfur battery and make it more useful for a wider variety of devices. The improved stability means a better lifespan and more cycling. The seaweed derivative acts like a glue or binder, which holds the active materials in a battery cell together. It reacts with the sulfur and keeps it from dissolving.


To help with the discovery, the team used Berkeley Lab’s Advanced Light Source, one of the world’s brightest sources of ultraviolet and soft x-ray beams. They detected and studied the sulfur with the help of this powerful light, monitoring the “electrochemistry simultaneously while the battery is charging.” When they saw the sulfur wasn’t moving, they knew they had found something promising.


The benefits of longer lasting batteries are endless, but the team sees it most useful for transportation. Because lithium-sulfur is lighter than lithium-ion, it’s better for drones and other electric aircraft. They could also prove to be useful in airplanes and electric cars. Since one of Berkeley Lab’s partners is GM, we can only guess they’ll be eager to take advantage of the latest discovery. Not to mention it’s cheaper to produce since sulfur is inexpensive.


But don’t get too excited; chances are we won’t be seeing these batteries for a while. It’s still early in the process, and there’s a lot the team needs to learn. Their next steps include learning more about how the derivative interacts with sulfur and figuring out whether or not the reaction is reversible. The team feels that once they’ve passed this hurdle, they can use the knowledge to further improve lithium-sulfur batteries. They’ve actually been researching these batteries for several years and published a paper on their findings last year in Nano Letters.


Have a story tip? Message me at: cabe(at)element14(dot)com


PLC simulator

Posted by andrewblog Aug 19, 2017

Ladder diagram is the most widely used PLC programming language. The main ladder logic elements work like electromechanical circuits; in other words, they are virtual relays.

A PLC exchanges information with the external environment through input and output terminals.

Switches and sensors are connected to input terminals. Output terminals are meant to control contactors, actuators, lamps, etc.

This video shows how to use a PLC simulation software. The new version is available here.
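The first rung in nearly every ladder-logic course is the start/stop seal-in circuit, and it shows how a "virtual relay" behaves. A minimal Python sketch of one PLC scan of that rung (the signal names are illustrative):

```python
def scan_seal_in_rung(start_pb, stop_pb, motor):
    """One scan of the classic start/stop seal-in rung:

        |--[ start ]--+--[/stop]--( motor )--|
        |--[ motor ]--+

    The motor's own contact, sealed in parallel with the momentary start
    button, keeps the output latched after the button is released.
    """
    return (start_pb or motor) and not stop_pb

motor = False
motor = scan_seal_in_rung(start_pb=True, stop_pb=False, motor=motor)   # press start
motor = scan_seal_in_rung(start_pb=False, stop_pb=False, motor=motor)  # release: still on
print(motor)  # True
motor = scan_seal_in_rung(start_pb=False, stop_pb=True, motor=motor)   # press stop
print(motor)  # False
```

A real PLC evaluates every rung like this on each scan cycle, which is why the latched output survives between scans.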



Hydrogen is a very attractive prospect as an alternative energy source, and prior obstacles to the utilization of hydrogen power are being overcome through the introduction of a new aluminum alloy material and its unique reaction to water. Recent developments give greater credibility to the notion that hydrogen power could replace battery power in appliances and products ranging from laptops to cars and buses. (Photo via Tomohiro Ohsumi/Getty)


The automotive industry is still a powerhouse, even in this day and age. Innovation on wheels is still needed, like in the IoT on Wheel Design Challenge. You'll be surprised what comes from that contest...


But this innovation is out of the blue:


In the last decade or so, the viability of alternative sources of energy has been explored more and more due to rising concerns about the threatening consequences of pollution and climate change. The issue of climate change may be up for debate for some, but pollution is undeniably dangerous, and while fossil fuels are probably the dirtiest and most environmentally hazardous source of energy, even greener options like solar energy and lithium-ion batteries carry hidden costs. The Washington Post recently published a story exposing the cobalt-mining industry in Congo that is ruining the health and livelihoods of tens of thousands of children for the sake of the cobalt needed for lithium-ion batteries, which are in increasingly high demand for smartphones and electric cars, among many other things. So, in seeking environmentally friendly sources of energy, a resource, and more importantly, a nation and its people are being exploited for the sake of a growing industry and its economic demands. The accidental discovery made by Army researchers at the Research Laboratory at Aberdeen Proving Ground in Maryland may be able to provide a path forward for a hydrogen-powered society, which will hopefully also lead to a decline in the exploitative, destructive practices that are byproducts of the industries for other energy sources.


A 2008 study by the U.S. Department of Energy (DOE) describes the issue of obtaining hydrogen using aluminum-water reactions, saying that, “a coherent and adherent layer of aluminum oxide... prevents water from coming into direct contact with the aluminum metal,” and that the key to maintaining this reaction to yield hydrogen necessitates, “...the continual removal and/or disruption of this coherent/adherent aluminum oxide layer.” The issue of the aluminum-oxide layer was approached in a number of different ways, but the 2008 report ultimately concludes that none were commercially viable.
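The stoichiometry the DOE report is working from, 2 Al + 3 H2O → Al2O3 + 3 H2, fixes the maximum hydrogen yield per unit of aluminum regardless of how the oxide-layer problem is solved. A quick check in Python:

```python
AL_MOLAR_MASS = 26.98      # g/mol
H2_MOLAR_MASS = 2.016      # g/mol
MOLAR_VOLUME_STP = 22.414  # L/mol, ideal gas at 0 C and 1 atm

def hydrogen_from_aluminum(al_grams):
    """Grams and liters (at STP) of H2 from fully reacting al_grams of Al,
    per 2 Al + 3 H2O -> Al2O3 + 3 H2 (three moles of H2 per two of Al)."""
    mol_h2 = (al_grams / AL_MOLAR_MASS) * 3 / 2
    return mol_h2 * H2_MOLAR_MASS, mol_h2 * MOLAR_VOLUME_STP

grams, liters = hydrogen_from_aluminum(1000)  # one kilogram of aluminum
print(round(grams), round(liters))  # ~112 g, ~1246 L
```

So a kilogram of fully reacted aluminum yields a bit over 100 grams of hydrogen; the efficiency figures quoted below describe how close a given process gets to that ceiling, and how fast.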


According to David Hambling of New Scientist, the recent discovery was made during the routine testing of a high-strength aluminum alloy: when water was poured onto the material, it immediately started reacting and producing hydrogen gas. It’s a promising sign, given that “previous attempts to drive the reaction [referring to the 2008 DOE study] required high temperatures or catalysts, and were slow: obtaining the hydrogen took hours and was around 50 per cent efficient.” The leader of the Army research team, Scott Grendahl, says that the new approach operates at near one-hundred percent efficiency and completes in less than three minutes. These are staggering improvements, and while the team has only tried using this technique to power a small, radio-controlled tank, Grendahl believes it can be scaled up for powering larger vehicles like hydrogen-powered cars and buses. More research needs to be performed and possible applications explored, but this advancement is encouraging for environmentalists and human rights activists alike.


Have a story tip? Message me at: cabe(at)element14(dot)com


(Image credit Facebook Research)


Professor Stephen Hawking and Elon Musk are staunch critics of unchecked AI advancement, feeling that it could lead to the downfall of the human race, and maybe they’re right. Just read the back-and-forth communication from a pair of Facebook AI agents:

    -Bob: “I can can I I everything else.”

    -Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”


Clearly, this is cause for alarm as Bob looks to handle ‘everything else’ while Alice understandably has an aversion to ‘balls,' which I’m going to go ahead and blame on Bob. All kidding aside, the communication between the two bots is interesting in that they went beyond their programming to create their own language and it was cause enough to shut the programs down.



(Image credit Facebook Research)


So what exactly caused the pair to speak in gibberish?


It began with simple negotiation, or rather the ability to negotiate between a pair of dialog agents developed by researchers at FAIR (Facebook Artificial Intelligence Research). The two were designed to apply the art of negotiation to get the ‘best deal’ to any given situation in much the same fashion as you or I when taking on goals or running into conflicts.


Think of it as a bit of adversarial dog-fighting between two parties using any type of communication they want. In this case, the software wasn’t restricted to speaking English in the normal manner; it started out doing so, but the platform apparently found it inefficient and began diverging the language into nonsensical word combinations, ultimately inventing code words that only the agents could understand. For example, imagine using the word ‘five’ ten times and translating that into meaning “I want ten copies of that item,” almost like a type of shorthand, only in this case for AI. This is essentially what happened with Bob and Alice, only on an unknown level.



(Image credit Facebook Research)


Why was creating their own language deemed an issue and why was it terminated?


Simply put, we humans often have an issue with things we don’t understand, especially languages we don’t understand. The same can be said for AI: while the agents may have no problem interpreting us, we would have no clue what they were communicating to each other, much less to us, and for some, that’s frightening.


Facebook’s AI uses what’s known as multi-issue bargaining to plan its negotiating tactics: each agent is shown a collection of items and tasked with dividing them between the two. A value is then assigned to each item denoting how much each agent cares about it; however, those values are not known to the opposing agent, much like in real life.


Once those parameters are set, the agents are instructed to deal, and they go about trying to get the items that are valuable to them. The only thing is, the researchers never set a reward system giving the agents an incentive to use proper English dialog in their endeavor, so the agents created a much more efficient code (based on the English language) to talk with each other.
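The setup is easy to picture in code. A toy sketch of the multi-issue bargaining arena in Python (the item pool and valuations are made up for illustration; FAIR’s agents learn to negotiate through dialog, rather than enumerate splits like this):

```python
from itertools import product

items = {"book": 3, "hat": 2, "ball": 1}      # the shared pool
values_a = {"book": 1, "hat": 3, "ball": 1}   # private to agent A
values_b = {"book": 2, "hat": 1, "ball": 4}   # private to agent B

def score(values, share):
    """Total value an agent assigns to its share of the items."""
    return sum(values[name] * count for name, count in share.items())

# Enumerate every possible split and find the one maximizing joint value,
# i.e. the deal two perfect negotiators would converge on.
best = None
for counts in product(*(range(n + 1) for n in items.values())):
    share_a = dict(zip(items, counts))
    share_b = {k: items[k] - share_a[k] for k in items}
    total = score(values_a, share_a) + score(values_b, share_b)
    if best is None or total > best[0]:
        best = (total, share_a, share_b)

print(best)  # each item ends up with whoever values it more
```

Because neither agent can see the other's valuation table, the real agents have to discover this split through offers and counteroffers, which is exactly where the dialog (and the shorthand) comes in.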


This created a problem for the researchers, not out of fear of never understanding the agents, but because the base software is used with other projects and could therefore undergo the same unexpected learning process and ruin valuable data. With that in mind, they decided to bring the pair of agents back to using coherent English sentences when communicating and to look at how they could prevent this from affecting other projects that use the same AI platform.


Then why did humans freak out about AI learning a different language?


Two words: click bait. Considering most headlines about AI quote big-name scientists who warn against future AI advancements without safeguards, people see it as humanity’s doom. Add a bold headline like ‘Facebook AI Gets Shut Down after Developing Its Own Language’ and you get the gist.


Have a story tip? Message me at: cabe(at)element14(dot)com

I'm trying to control an Adafruit stepper motor with the high-end timer (NHET) module of a Texas Instruments Hercules microcontroller

I got a freebie from TI almost a year ago. An Adafruit stepper motor, a driver board and a Hercules RM57 LaunchPad. The code to run the motor was expected to arrive too (it was an assignment for an intern) but that never materialised. In this blog series I'm trying to program the NHET module so that it sends the right signals to make the stepper step.



In the 8th blog I finally try to create the dynamic pulses for the Stepper Motor with the Hercules HET module.


When you think something is complex, and it's already been done and published


N2HET Based Pulse Train Output

(first it was called HET, then NHET for New HET, now N2HET for New Advanced 2nd Generation HET. Don't ask me. Call TI)


I have found an application note for the Hercules RM57: "TI Designs: High Availability Industrial High Speed Counter (HSC) / Pulse Train Output (PTO)".

An amazing document that shows how to control a production line belt using counters and pulses to check and drive it.


It's a great document if you're interested in autonomous timer co-processors. One of the best documents on that subject for the Hercules HET.

It covers a number of areas that are hard to understand and even harder to get examples for:

  • dynamic loading of timer code, based on the strategy selected (in this case: switching between different ways to drive a motor at run-time)
  • a real-world example with interaction between controller and timer co-processor
  • info and insights into the HET module that you don't find anywhere else (hardware, software, theory behind the decisions, practical considerations)

I don't like to be a fanboy. But I am. I can't hide it this time.

(and true fact: I only realised today that it's written by my favourite Hercules engineer)


I am interested in the part that explains how to program the HET module to drive a stepper motor.

The appnote supports three ways to drive the motor.

I'm using the document's count/dir functionality to generate period-shifted pulses, because the DRV8711 stepper motor controller expects pulses.


Ramp up and Ramp down Profiles

The HET design supports stable speed, ramp-up and ramp-down. All functions accept the number of pulses to generate.

In the stable speed function, the requested number of pulses are generated with the same time base.

In the two ramp functions, the period of the pulses decreases (ramp-up) or increases (ramp-down) linearly.


The HET is programmed as an autonomous state machine.



The only duty for the ARM core (called HOST DRIVER on the above state diagram) of the Hercules controller is to fill the command buffer with a profile:


    ptoInit(pto1, ptoModeCountDir);
    ptoCmdCreate(&cmdList[0], 1000000, 50, ptoDirRev, ptoAccLinAcc);
    ptoCmdCreate(&cmdList[1], 105132, 10, ptoDirRev, ptoAccZero);
    ptoCmdCreate(&cmdList[2], 105132, 50, ptoDirRev, ptoAccLinDec);
    // ...
    for (i = 0; i < NUMCMD; i++) {
        ptoCmdSubmit(pto1, cmdList[i]);
    }




* ptoRetVal_t ptoCmdCreate(ptoCmd_t *cmd, uint32_t icnt, uint32_t nstp, ptoDir_t dir, ptoAcc_t acc);
* - icnt = initial pulse width (in counts of N2HET High Resolution Clocks)
* - nstp = number of steps to execute in the command
* - dir = direction of steps (forward, reverse, or pure time delay)
* - acc = acceleration type (accelerate, decelerate, or zero acceleration/constant speed)


HET will first pulse out a train of 50 pulses, with a start period of 1 000 000 NHET clock periods:



If I count correctly, 1 / 110 MHz * 1 000 000 = 9.090909.... ms. Let's check on a capture of the sequence above with the logic analyzer:

Spot on!


Then 10 steps at the stable speed: 1 / 110 MHz * 105132 ≈ 956 µs.

Again correct.

It will then ramp down in 50 pulses.


Here's a capture of the SPI initialisation (expanded in the inset below right) and the pulse train from start to stable.



If you have OpenLogic Sniffer software, you can review the measurements. I've attached the .olp project. It contains all sample info.

In the sample resolution that I used, my logic analyser (a Papilio Pro FPGA board!) doesn't have enough memory to catch the full pulse train and all HET information in one shot.

Here's a capture, at slower sampling speed, of the pulse train only (click for higher image resolution):

On an oscilloscope:

RIGOL Print Screen11-6-2017 11_40_38.412.png


I assumed that my motor would start spinning, now that I have replicated the SPI sequence and pulse train - and drive the nSLEEP pin correctly.

But it doesn't. I'm going to rewire the original MSP430 setup and see if I miss something.

Hang on....



edit: the reason why it wasn't working was because of the SPI settings.

I had to change the clock phase. That changes at what time the data is written (and read) relative to the clock signal.

I've attached the code composer studio project, including the HET assembly code.


Related Blog
Part 1: Hardware Overview
Part 2: Stepper Controller and MSP430 Firmware
Part 3: SPI Commands and Pulse Control
Part 4: Analyse MSP430 PWM Step Signal
Part 5: Hercules RM57 Hardware Provisioning
Part 6: Hercules RM57 SPI
Part 7: HET Assembly Language Test
Part 8: HET Based Pulse Train Output

I'm trying to control an unknown stepper motor with the high-end timer (NHET) module of a Texas Instruments Hercules microcontroller

I got a freebie from TI almost a year ago. An unknown stepper motor, a driver board and a Hercules RM57 LaunchPad. The code to run the motor was expected to arrive too (it was an assignment for an intern) but that never materialised. In this blog series I'm trying to program the NHET module so that it sends the right signals to make the stepper step.


In the 7th blog I create an Assembler program for the HET submodule.



Hercules High-End Timer

Hercules microcontrollers have a particular type of timer on board. It's an autonomous sub-controller.

The difference with your typical timer is that you don't set registers to control it.

You write a program in the HET assembly language.

The asm commands are also different from those of a controller or a processor. Everything is related to timers, angular functions, and tight control of the timer loop.

Often they can do several things in parallel during a single tick of the HET clock.



And the module can interact with the ARM controller(s) on the Hercules - memory access and interrupts.

Expert programmers can achieve amazing things.

It's possible to handle the whole stepper motor control (ramp up, constant speed and ramp down) in the HET module.

The ARM would just set the number of steps to take and go to deep sleep. I'd like to achieve that in this blog series.


I have another example published by TI where the whole I²C protocol is implemented in the HET assembly language.


HET Test Program

Today I just want to see if I can get a HET program assembled and linked into my code, and have a HET pin generate a PWM signal.

HET assembler PWM signal

The program is a little more complex than it needs to be, because it allows for a flexible period - something we can ignore here.


; PWM, 1 channel with buffer field to modulate PWM period
L00 CNT   { reg=A, irq=OFF, max=3 }
L01 BR    { next=L03, cond_addr=L02, event=ZERO }
L02 MOV64 { remote=L00, cntl_val=4, data=0, hr_data=0 } ; CPU to write new period value to control field
L03 MCMP  { en_pin_action=ON, pin=4, order=REG_GE_DATA, action=PULSELO, reg=A, data=2, hr_data=64 }
L04 BR    { next=L00, cond_addr=L00, event=NOCOND }


Don't try to understand this unless you want to learn the assembler language. A manual is available from TI's Hercules product page.

The only thing that's interesting for this exercise is knowing that this code will generate a 234 kHz PWM with 50% duty cycle on HET pin 4.


I've chosen PIN 4 deliberately because that matches the stepper pin of the DRV8711 Stepper Driver BoosterPack.

There's a HET IDE that assists you when writing code (and can integrate with a simulator to pre-test your design on a PC).

You can also assemble the code from the IDE. This will generate a .c and a .h file. They will end up in your project in the next step:


Configure Hercules to Run the HET Code

Like the other modules, HET is configured in HALCoGen. First thing to do is enable the driver:


This makes sure that HALCoGen creates the driver code for the timer. Then we set the PINMUX so that NHET1 pin 4 is active.

(not really needed, because it's the default assignment for the pin, but it helps to find conflicts later on)


On the NHET module itself, you define the frequency, and say that you want to use HET assembly.

(This module can also be configured as a straightforward PWM generator without programming)

The .h and .c files that we select here are the ones generated by the HET IDE when you ran the assembler.

When you press the HALCoGen "Generate Code" button later on, these files will be copied into your Hercules project code.

As a last step in this blog, I'll show how all of this can be scripted so that both the assembly and copy step are automatically done as part of a CCS build.


The last step in the HET configuration pages is to make HET1 pin 4 an output pin:


CCS Project Settings and Automating the Build

I prefer to minimise the number of manual steps when building a firmware binary. If at all possible, no manual steps, so that a build is repeatable.

In this case it is possible, because the HET suite for the Hercules comes with a command line assembler.

In CCS, you have the option to add commands before or after a build, so that's a perfect location to put these.


I have created my HET IDE project in the het subfolder of the CCS project.

For some reason, the build steps are executed from a folder one level down in the project (too lazy to find out which one), so we have to prepend all folders with ..\.

The first line assembles the .het source into  .c and .h files in the het folder:

${HET_COMPILER} -n0 -v2  -hc32 ..\het\het.het


The next two lines copy the .h and .c into the HALCoGen project source:

copy /Y  ..\het\het.h ..\HALCoGen\include\het.h
copy /Y  ..\het\het.c ..\HALCoGen\source\het.c


That's all. A little bit of scripting automates the whole thing. The results can be found back in the console after build:

**** Build of configuration Debug for project RM57_Stepper ****

"D:\\ti\\ccsv7\\utils\\bin\\gmake" -k -j 8 all -O 
"D:\Program Files (x86)\Texas Instruments\Hercules\HET IDE\03.05.01\bin\hetp.exe" -n0 -v2  -hc32 ..\het\het.het
NHET Assembler    Release 1.7
Texas Instruments Incorporated. 

 No Errors, No Warnings
copy /Y  ..\het\het.h ..\HALCoGen\include\het.h
        1 file(s) copied.
copy /Y  ..\het\het.c ..\HALCoGen\source\het.c
        1 file(s) copied.
' '
'Building file: ../HALCoGen/source/HL_gio.c'
'Invoking: ARM Compiler'


I've registered the HET command line tool in a project parameter, so that the script is PC independent:


A last point of attention: if the HET project is a subfolder of your CCS project, you'll have to exclude it from compiling. Else you have the het.c file twice in your build and it'll fail.



Running the HET code

The easiest part. It runs immediately when you initialise the HET module.


// ...
#include "HL_het.h"
// ...

int main(void)
{
    // ...
    hetInit(); // initialise the NHET driver; the loaded HET program starts running immediately
    // ...
}

The oscilloscope grab at the start of the blog is the actual result of running the code of this blog.


In a next blog we'll communicate with the HET sub-controller so that it only generates pulses when we want it, and how we want it.

The end-game is to get a signal like this out of the module, based on a ramp profile that we define.


Related Blog
Part 1: Hardware Overview
Part 2: Stepper Controller and MSP430 Firmware
Part 3: SPI Commands and Pulse Control
Part 4: Analyse MSP430 PWM Step Signal
Part 5: Hercules RM57 Hardware Provisioning
Part 6: Hercules RM57 SPI
Part 7: HET Assembly Language Test
Part 8: HET Based Pulse Train Output

I'm trying to control an unknown stepper motor with the high-end timer (NHET) module of a Texas Instruments Hercules microcontroller

I got a freebie from TI almost a year ago. An unknown stepper motor, a driver board and a Hercules RM57 LaunchPad. The code to run the motor was expected to arrive too (it was an assignment for an intern) but that never materialised. In this blog series I'm trying to program the NHET module so that it sends the right signals to make the stepper step.



In the 6th blog I port the MSP430 SPI commands for the DRV8711 to the Hercules RM57 safety microcontroller.


This post shows that even with very different hardware, you can often reuse big chunks of code.

In this case, the DRV8711 register definitions serve me very well.


Big SPI differences between Hercules Controller and MSP430, Little Impact

SPI is SPI. In the example here where we use SPI to set the registers of the DRV8711, all logic to define the register values can be copied.


One of the examples is the DRV8711 Control register:


// CTRL Register
struct CTRL_Register {
  uint16_t Address; // bits 14-12
  uint16_t DTIME;   // bits 11-10
  uint16_t ISGAIN;  // bits 9-8
  uint16_t EXSTALL; // bit 7
  uint16_t MODE;    // bits 6-3
  uint16_t RSTEP;   // bit 2
  uint16_t RDIR;    // bit 1
  uint16_t ENBL;    // bit 0
};


The choice has been made for readability, not code size. We're spending 8 integer locations to store info that needs to be contained in a 16 bit value.

Not efficient for code size, but I don't mind. It makes understanding and debugging the application easy.

You can always check the values of a particular subset of the register without having to think too much.

If you need the space later, this can be replaced by a single integer per register, and setter + getter functions that use masking, ANDing and ORing to set the right bits.


When setting the final value for communication, the different parts of the register get combined. I translated this part to fit everything in a single integer:


    uint32_t data;

    // Write CTRL Register
    data = REGWRITE | (G_CTRL_REG.Address << 12) | (G_CTRL_REG.DTIME << 10) | (G_CTRL_REG.ISGAIN << 8) |(G_CTRL_REG.EXSTALL << 7) | (G_CTRL_REG.MODE << 3) | (G_CTRL_REG.RSTEP << 2) | (G_CTRL_REG.RDIR << 1) | (G_CTRL_REG.ENBL);


original code on the MSP430:


    unsigned char dataHi = 0x00;
    unsigned char dataLo = 0x00;

    // Write CTRL Register
    dataHi = REGWRITE | (G_CTRL_REG.Address << 4) | (G_CTRL_REG.DTIME << 2) | (G_CTRL_REG.ISGAIN);
    dataLo = (G_CTRL_REG.EXSTALL << 7) | (G_CTRL_REG.MODE << 3) | (G_CTRL_REG.RSTEP << 2) | (G_CTRL_REG.RDIR << 1) | (G_CTRL_REG.ENBL);
    SPI_DRV8711_ReadWrite(dataHi, dataLo);



With the Hercules, you get a GUI to configure the modules and registers. A nice solution that helps to master the (very complex!) peripherals of this family.

The GUI is non-intrusive and supports round-trip design. You can override the settings in code. If done properly, you can keep on using the GUI and the code changes interactively.

Here's how you set the SPI bit width and speed:


There's also a screen where you can fine-tune the timings within a SPI communication burst:


When performing the communications, these settings can be used to do unbuffered (upper part of screen capture) and buffered (lower part) SPI.


The upper part exchanges one 16-bit value (in our case, because we've defined Data Format 0 with a bit size of 16).


In the lower part, we can exchange 8 16-bit values in a single shot.

The Hercules off-loads the work to the SPI module.

The ARM controller can do different things in the meantime.


You can see that I haven't selected a SPI Chip Select. That's because the Stepper Motor BoosterPack doesn't have its CS pin on a SPI-capable pin.

Don't ask me why. If only they had moved it to the free pin above, it would be OK - that's a default CS pin for BoosterPacks. Don't call me. Call TI.


Solving Incompatible Chip Select

Easy, in our case. Less easy when dealing with a big chunk of buffered data - in that scenario I would change the hardware.


As told before, the CS pin on the BoosterPack doesn't match a CS of the LaunchPad standard. So we have to provide the CS ourselves.

The first thing to do is tell the SPI module to not set a CS.


From that moment on, the CS decision is up to your own firmware.

The DRV8711 CS matches the RM57 GIOB[2] pin.

So I have to do a few things. First is to set this as a GIO output pin:


Once we've called gioInit(), this pin can be programmed.

The DRV8711, ignorant of what's common on SPI modules, expects the CS to be high when active.

void SPI_DRV8711_Write(uint16_t data) {
    gioSetBit(gioPORTB, 2, 1); // manual CS high
    mibspiSetData(mibspiREG3, 0, &data);
    mibspiTransfer(mibspiREG3, 0);
    while (!(mibspiIsTransferComplete(mibspiREG3, 0))) {
        // wait for the SPI module to finish the transfer
    }
    gioSetBit(gioPORTB, 2, 0); // manual CS low
}


The code above bitbangs the CS high before communication, and low after.

It works. The downside is that the ARM core needs to do the activity.

If the BoosterPack were fully compatible, the controller would be free for any other activity once it had informed the SPI module of the location and size of the data.

The while loop would be unnecessary: we'd just kick off the transfer and go on doing things that matter.

Again, that's not an issue with the Hercules controller or the DRV8711 chip. It's because the way the BoosterPack routes it. Call TI.

In your own design, you'd route this DRV8711 pin to a MibSPI3 CS pin (there are 5 of them, so choice galore).


The excellent news is that everything works. I've replicated the code that initialises the DRV8711 from the MSP430, and the Logic Analyzer capture matches fully:


void WriteAllRegisters() {
    uint32_t data;

    // Write CTRL Register
    data = REGWRITE | (G_CTRL_REG.Address << 12) | (G_CTRL_REG.DTIME << 10) | (G_CTRL_REG.ISGAIN << 8) | (G_CTRL_REG.EXSTALL << 7) | (G_CTRL_REG.MODE << 3) | (G_CTRL_REG.RSTEP << 2) | (G_CTRL_REG.RDIR << 1) | (G_CTRL_REG.ENBL);
    SPI_DRV8711_Write(data);

    // Write TORQUE Register
    // ...

    // Write OFF Register
    data = REGWRITE | (G_OFF_REG.Address << 12) | (G_OFF_REG.PWMMODE << 8) | G_OFF_REG.TOFF;
    SPI_DRV8711_Write(data);

    // Write BLANK Register
    data = REGWRITE | (G_BLANK_REG.Address << 12) | (G_BLANK_REG.ABT << 8) | G_BLANK_REG.TBLANK;
    SPI_DRV8711_Write(data);

    // Write DECAY Register
    data = REGWRITE | (G_DECAY_REG.Address << 12) | (G_DECAY_REG.DECMOD << 8) | G_DECAY_REG.TDECAY;
    SPI_DRV8711_Write(data);

    // Write STALL Register
    data = REGWRITE | (G_STALL_REG.Address << 12) | (G_STALL_REG.VDIV << 10) | (G_STALL_REG.SDCNT << 8) | G_STALL_REG.SDTHR;
    SPI_DRV8711_Write(data);

    // Write DRIVE Register
    // ...

    // Write STATUS Register
    data = REGWRITE | (G_STATUS_REG.Address << 12) | (G_STATUS_REG.STDLAT << 7) | (G_STATUS_REG.STD << 6) | (G_STATUS_REG.UVLO << 5) | (G_STATUS_REG.BPDF << 4) | (G_STATUS_REG.APDF << 3) | (G_STATUS_REG.BOCP << 2) | (G_STATUS_REG.AOCP << 1) | (G_STATUS_REG.OTS);
    SPI_DRV8711_Write(data);
}


Protocol Analyser capture:



That's perfect for this application. There's room for optimisation, but let's only do that if needed. Any changes make comparison with the original MSP430 example more complex.


For the mere mortal, this may seem to be a small step. For the down-to-earth ones: we can now focus on the core problem - creating a perfect PWM signal to drive the stepper.


Related Blog
Part 1: Hardware Overview
Part 2: Stepper Controller and MSP430 Firmware
Part 3: SPI Commands and Pulse Control
Part 4: Analyse MSP430 PWM Step Signal
Part 5: Hercules RM57 Hardware Provisioning
Part 6: Hercules RM57 SPI
Part 7: HET Assembly Language Test
Part 8: HET Based Pulse Train Output