
Poll: How many TOPS can the NXP i.MX 8M Plus processor perform for inference acceleration in AI Edge devices?

The NXP i.MX 8M Plus family focuses on machine learning and vision, advanced multimedia, and industrial IoT. It has an integrated machine learning accelerator that can process neural networks about thirty times faster than the Arm® processor cores. This dedicated machine learning hardware is a neural processing unit (NPU) from VeriSilicon (Vivante VIP8000). Developers can offload machine learning inference to the NPU, freeing the high-performance Cortex-A53 and Cortex-M7 cores, DSP, and GPU to execute other system-level or user application tasks. (To learn more, check out this tech spotlight: AI at the Edge: i.MX 8M Plus Applications Processor with a Machine Learning Accelerator)
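As a rough illustration of what the NPU's peak throughput means for inference, here is a back-of-envelope sketch. The per-inference op count below is an illustrative assumption for a small CNN, not an NXP benchmark figure, and the result is a theoretical upper bound at full utilization.

```python
# Back-of-envelope: what a given TOPS figure means for inference rate.
# The model op count is an illustrative assumption, not an NXP benchmark.
npu_tops = 2.3      # NPU peak throughput, tera-operations per second
model_gops = 1.1    # assumed operations per inference for a small CNN (hypothetical)

# Theoretical upper bound on inferences per second at peak utilization
max_inferences_per_sec = (npu_tops * 1e12) / (model_gops * 1e9)
print(f"~{max_inferences_per_sec:.0f} inferences/s (theoretical peak)")
```

Real-world throughput is lower once memory bandwidth, quantization, and layer support on the NPU are taken into account.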


Poll Question: How many tera operations per second (TOPS) can the i.MX 8M Plus processor perform for inference acceleration in AI Edge devices?

Poll Results
  • 1.2 (0%)
    0/3
  • 1.8 (0%)
    0/3
  • 2.3 (100%)
    3/3
  • 2.9 (0%)
    0/3
  • 3.4 (0%)
    0/3
  • I need to read the tech spotlight again (0%)
    0/3
  • Other; please explain in comments (0%)
    0/3

Comments
