It looks like another BeagleBone board has been launched at Embedded World, with the same form factor as the BeagleBone Black. Hopefully we'll hear more soon.

It's called the BeagleBone AI. I'm guessing it's not a replacement for the BBB and won't be anywhere near as cost-reduced as the BBB and its variants (for example, there is 16 GB of eMMC on board the BB-AI), but the board looks very useful for image recognition and other video processing work.

(Photo from the BeagleBoard website; I'll replace it with a better photo if I find one):

The main chip is TI's AM5729, which doesn't have a datasheet yet, but appears to be the same as, or very similar to, the AM5728. The diagram below highlights some interesting features. There are a lot of processing sub-systems.

The applications processor portion is a dual-core Cortex-A15 (dark red) running at 1.5 GHz. There are 2D and dual 3D graphics processors (shown in green), plus a couple of DSPs (orange). There is also a Video Processing Engine (VPE), shown in brown, for tasks like basic image scaling.

There is still the PRU capability, shown in purple (2 x dual-core PRU), but there are also a couple of dual-core Cortex-M4 processors for real-time operations (shown in blue). This is nice; I'm guessing it allows real-time computation and high-speed low-level protocol handling to be separated, if desired.


What wasn't on the block diagram, but which I've added in yellow, are four EVE (Embedded Vision Engine) cores! These are apparently programmed in normal C; OpenCL is also possible, and perhaps some other APIs.

Unlike the older NEON (which is a basic, general-purpose SIMD processor for performing operations on several bytes of memory in a single instruction - some further description here: BBB, NEON and making Tintin bigger), EVE simplifies things so that you don't need to program with complex SIMD instructions; it handles that for you. It is also specially designed for video operations, for example object identification. You can get it to identify shapes or objects in real-time live video very efficiently, at low power, since the key building blocks that image recognition algorithms use are implemented in hardware. I can't find much information on it yet, but I haven't googled it a lot so far. It could be useful for object identification while driving, if you're designing the next Tesla.


Also, the BB-AI brings out USB 3.0 speeds, with a USB Type-C connector. And Gigabit Ethernet. Plus wireless (2.4 and 5 GHz) including 802.11ac, so hopefully super-high throughput. It also supports Bluetooth Low Energy (BLE), as I understand it.


If anyone is at Embedded World, or has more information to share, it would be great to hear it!


Texas Instruments Deep Learning (TIDL)

There is a framework for making use of the DSP and EVE processors, if desired, for some computer-vision scenarios.

According to a TI Deep Learning overview PDF, it can be used for popular tasks such as classifying objects, and detecting multiple different objects and their locations.

Some more examples of this:

(The images above are from that PDF document).

This can be done on live video streams, as shown in this screenshot from the video at https://e2e.ti.com/support/processors/f/791/t/710226?TDA3-Our-TIDL-car-detection-on-TDA3X-are-there-any-methods-to-incre… :

The main resource on learning the TIDL framework is the Processor SDK TI Deep Learning Documentation.




Edited 28th Feb 2019 - I'd mistyped the TI part code in the text above, it is corrected now.

Edited 3rd March 2019 - Added a bit on TI Deep Learning (TIDL)