
    A hot topic right now is intelligent processing at the edge, so called because data is stored and processed locally, close to the edge of the network. Typically, this intelligent processing implements complex functions such as artificial intelligence, machine learning, data analytics, and decision making. Applications that rely upon edge-based intelligence include autonomous vehicles, vision-guided drones and robotics, and Industry 4.0 (or Industrial IoT).


    For these applications, edge processing is required because of the response and processing times they demand. For example, it isn’t safe for an autonomous vehicle to rely on cloud-based processing for maneuvering decisions because of the latency involved.
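
    To make the latency point concrete, this minimal C sketch estimates how far a vehicle travels while waiting on a remote decision; the speed and latency figures are illustrative assumptions, not measured values.

```c
/* Distance (in metres) a vehicle travels while waiting on a decision,
 * given its speed in m/s and the decision latency in seconds. */
static double distance_during_latency(double speed_mps, double latency_s)
{
    return speed_mps * latency_s;
}
/* Example (illustrative numbers): at 30 m/s (about 108 km/h), a 100 ms
 * cloud round trip means ~3 m travelled before the decision arrives;
 * a 10 ms edge decision cuts that to ~0.3 m. */
```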


    But latency isn’t the only factor developers must consider when deciding whether to implement edge processing. They must also consider network availability. Network connections may not always be available due to limited coverage, outages, weather, or natural and urban features blocking the signal.


    Challenges of Edge Processing

    Of course, processing at the edge brings with it challenges for the developer such as:


    • Performance – High-performance algorithms must be implemented, often with a hard, real-time performance target. This places significant demands on the chosen processing solution.
    • Power efficiency – Edge solutions are often required to achieve high-performance within a constrained power budget.
    • Safety & Security – Edge solutions are often deployed remotely, where access to them cannot be strictly limited to authorized personnel. As such, the system developer must ensure that any data and intellectual property stored within the system remain secure and cannot be modified by unauthorized personnel, given the potential safety implications.


    Achieving High Performance

    To meet the processing demands of the application, we must select a high-performance multicore processing system such as a GPU, CPU, or DSP. Often, to help achieve the performance requirements, whether in throughput or real-time response, the multicore processor is combined with an external FPGA or ASIC connected over a high-speed interface such as Peripheral Component Interconnect Express (PCIe).


    Gaining Power Efficiency

    Creating a power-efficient solution requires an architecture that supports several operating modes, enabling the power constraints to be met. While the power modes depend upon the application, the modes typically observed at the edge are:


    • Active Power Mode – The system is fully operational, for example, an autonomous vehicle navigating its environment.
    • Low Power / Reduced Processing Mode – The application performs a reduced level of processing.
    • Sleep Mode – The lowest power mode, with no processing occurring. The processor must be woken from this mode before processing can continue.
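
    The three modes above can be modelled as a simple state machine. The sketch below is illustrative only; the enum names, work-item thresholds, and wake-up rule are assumptions for the example, not taken from any vendor power framework.

```c
/* Illustrative power states matching the modes described above. */
typedef enum { MODE_ACTIVE, MODE_LOW_POWER, MODE_SLEEP } power_mode_t;

/* Pick the next mode from the current workload.  The thresholds are
 * made up for the example: a real system would derive them from its
 * power budget and latency requirements. */
static power_mode_t next_mode(unsigned pending_work_items)
{
    if (pending_work_items == 0)
        return MODE_SLEEP;        /* nothing to do: lowest power    */
    if (pending_work_items < 10)
        return MODE_LOW_POWER;    /* light load: reduced processing */
    return MODE_ACTIVE;           /* heavy load: full operation     */
}

/* A sleeping processor must be explicitly woken before it can work. */
static power_mode_t wake(power_mode_t current)
{
    return (current == MODE_SLEEP) ? MODE_LOW_POWER : current;
}
```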


    Component selection also plays a crucial role in achieving the power budget. Of course, the processing requirements will drive the selection of the processor; however, power efficiency should be a close second consideration. Several metrics exist to help select the most efficient processing solution. Two of the most common are floating-point operations per second per watt (FLOPS/Watt) and millions of instructions per second per watt (MIPS/Watt).
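
    As a quick illustration of how such a metric ranks candidate processors, the sketch below computes FLOPS/Watt for two hypothetical devices; all of the performance and power figures are invented for the example.

```c
/* Efficiency metric: floating-point operations per second per watt. */
static double flops_per_watt(double gflops, double watts)
{
    return (gflops * 1e9) / watts;
}
/* Example (invented figures): processor A delivering 100 GFLOPS at 50 W
 * gives 2e9 FLOPS/Watt, while processor B delivering 60 GFLOPS at 20 W
 * gives 3e9 FLOPS/Watt -- despite its lower raw throughput, B is the
 * more power-efficient choice. */
```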


    What about Security?

    Due to their remote deployment and the consequent inability to strictly control access, the safety and security of edge-based deployments are critical. In an edge-based deployment, a security breach could have a wide impact, ranging from reputational damage to legal and regulatory repercussions.


    To protect against malicious attackers, the system should be subjected to a threat analysis during its design phase. This threat analysis is performed early in the design cycle, prior to starting the detailed design, to ensure the necessary security features can be implemented to secure the system.


    This threat analysis will consider different elements of the design, its data sensitivity, and the different methods in which the system can be attacked. As such, the threat analysis will consider elements including:


    • Application – Is the application mission or life critical? What is the end effect if the device security is compromised?
    • Data – How critical is the information stored within the system?
    • Deployment – Is the system remotely deployed or used within a semi-controlled environment?
    • Access – Both physical and remote. Does the system allow access remotely for control, maintenance, or updates? If so, how does the application verify the access is authorized?
    • Communication Interfaces – Is information transmitted to or from the system critical? Should the application be concerned about eavesdroppers snooping? Does the equipment need to protect against advanced attacks, for example, replay attacks?
    • Reverse Engineering – Does the embedded system contain Intellectual Property (IP) or other sensitive design techniques which must be protected?


    The results of this threat analysis are used by the engineering design team to implement strategies within the design which address these identified threats. At a high level, addressing the identified threats can be categorized into one of the following approaches:


    • Information Assurance – Ensuring information stored within the system and its communications are secure. This also needs to address identity assurance, which verifies that access to the unit, for example when controlling its operation or updating application software in the field, comes from a trusted source.
    • Anti-Tamper – Ensuring the system can protect itself from external attacks to access the system and its contents.
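
    One small, concrete building block of identity assurance is verifying a received authentication tag without leaking timing information. The sketch below shows a constant-time comparison; it is illustrative only and not a complete authentication scheme.

```c
#include <stddef.h>
#include <stdint.h>

/* Compare two authentication tags in constant time.  A plain memcmp
 * returns early on the first mismatching byte, which can leak how many
 * bytes matched to an attacker measuring response times. */
static int tags_equal(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= (uint8_t)(a[i] ^ b[i]); /* accumulate every difference */
    return diff == 0;                   /* 1 if identical, 0 otherwise */
}
```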


    Advantages of Using a Heterogeneous SoC

    One solution which can address the performance, power, and security requirements is the use of a heterogeneous System on Chip such as Xilinx Zynq SoCs or Zynq UltraScale+ MPSoCs, which combine processors with programmable logic. Often these devices will contain both application and real-time processors along with the programmable logic (See Figure 1).


    This tight coupling of logic and processors allows for the creation of a system which is more responsive, reconfigurable, and power efficient. A traditional CPU/GPU-based approach requires the use of external memory to pass data from one stage of the algorithm to the next, which reduces determinism and increases both power dissipation and latency.


    Using a heterogeneous SoC enables a deterministic response time with reduced latency. The programmable logic also offers a very efficient implementation when considering MIPS/Watt.


    Heterogeneous SoCs also provide complex internal power architectures and frameworks that allow the powering down of processors and peripherals within the SoC.  Many heterogeneous SoCs also use power management software frameworks which are compliant with the IEEE P2415 Standard for Unified Hardware Abstraction and Layer for Energy Proportional Electronic Systems.


    Heterogeneous SoCs also provide many features that enable a secure design, from secure configuration, which includes AES encryption and RSA and SHA signatures to prevent reverse engineering of, or tampering with, the bit stream, to internal mixed-signal converters that monitor device temperature and supply voltages to detect external tamper events.


    When it comes to executing software, TrustZone technology and virtualization can be used to create orthogonal software worlds, ensuring that higher-privilege software (SW) and logic peripherals cannot be accessed by SW applications running with a lower access privilege.


    In short, heterogeneous SoCs are capable of providing the performance, power efficiency, and security required for many edge applications.


    What Alternatives Exist to Edge Processing?

    While edge processing is necessary for many applications, not all applications need intelligence at the edge. Alternatives to edge processing include:


    • Cloud Processing – In cloud processing, the data is transferred back from the edge for processing. Cloud processing applications have a longer response time than edge processing applications (See Figure 2). Example applications include voice-controlled home automation, where delays in response are acceptable.
    • Fog Processing – In fog processing, the processing node is located closer to the edge node gathering the data, typically on a Local Area Network (LAN). One example application would be an Industry 4.0 manufacturing solution that processes manufacturing test results.
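
    The trade-off between edge, fog, and cloud processing can be made concrete with a rough latency budget; every per-hop and processing figure below is an illustrative assumption, not a measurement.

```c
/* A rough round-trip latency budget in milliseconds: network transit in
 * both directions plus processing time at the chosen node. */
static double round_trip_ms(double one_way_network_ms, double processing_ms)
{
    return 2.0 * one_way_network_ms + processing_ms;
}
/* Example (illustrative budgets, 10 ms of processing in each case):
 * edge  -> round_trip_ms( 0.0, 10.0) =  10 ms (no network hop)
 * fog   -> round_trip_ms( 2.0, 10.0) =  14 ms (LAN hop)
 * cloud -> round_trip_ms(50.0, 10.0) = 110 ms (wide-area hop) */
```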


    Intelligent processing at the edge of a network presents several challenges to the system developer, including performance, power efficiency, and security. However, heterogeneous SoCs provide the ability to address all of these challenges.