
    Artificial Intelligence (AI) has achieved breakthroughs in recent years. Big-data analytics, autonomous vehicles, and medical research are just a few of the ground-breaking application areas that have emerged from AI development. This article explains the fundamentals of AI on Single Board Computers (SBCs) like the Raspberry Pi.


    A machine that can perform cognitive functions such as perceiving, solving problems, learning, and reasoning is deemed artificially intelligent. In other words, AI exists only when a machine possesses cognitive abilities. The AI benchmark is the human level of reasoning, vision, and speech.


    AI vs. Machine Learning

    AI is an umbrella concept for embodying human-like intelligence in machines. Machine Learning (ML) is a branch of AI in which experience and data automatically tune the instructions a computer executes.


    ML is a subset of AI in which a system accepts sets of data, learns from that data, and consequently changes, upgrades, or improves its algorithms to boost overall performance. Deep Learning (DL) is a subset of ML that uses deep neural networks to simulate the learning process.


    The key steps in developing an ML model are:


    • Data collection: The data that will train the model must be collated.
    • Data preparation: The data is loaded into a suitable place and prepped for ML training. It is split into two parts: training and evaluation.
    • Model selection: Data scientists and researchers have coded innumerable models over the years for different kinds of data, such as images, audio, numbers, and text. Different algorithms exist for different tasks, and you must select the correct one.
    • Model training: The goal of this step is a model that makes correct predictions or answers a question. The model being developed should outperform a baseline.
    • Model evaluation: Once training is complete, a metric is chosen and the model is evaluated against it. The metric shows how the newly created model may perform on unseen data; it is a stand-in for how the model would perform in the real, physical world.
    • Hyperparameter tuning: This is the process of adjusting parameters to obtain the best-performing ML model.
    • Prediction: Also called inference, this is the question-answering step, where all the previous ML actions come to fruition.
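    The steps above can be sketched end to end in plain Python. The following is an illustrative toy, not code from the article: it invents a tiny dataset (y = 2x plus noise), fits a one-parameter linear model by gradient descent, and then evaluates and predicts.

```python
import random

random.seed(0)

# 1. Data collection: synthesize a toy dataset where y = 2x plus noise.
xs = [i / 100 for i in range(100)]
data = [(x, 2.0 * x + random.uniform(-0.1, 0.1)) for x in xs]

# 2. Data preparation: shuffle, then split into training and evaluation sets.
random.shuffle(data)
train, evaluation = data[:80], data[80:]

# 3./4. Model selection and training: fit y = w * x by gradient descent.
# The learning rate lr is a hyperparameter (step 6); it would normally be tuned.
w = 0.0
lr = 0.05
for _ in range(200):
    for x, y in train:
        error = w * x - y
        w -= lr * error * x  # gradient of the squared error w.r.t. w

# 5. Model evaluation: mean squared error on the held-out data.
mse = sum((w * x - y) ** 2 for x, y in evaluation) / len(evaluation)

# 7. Prediction (inference): apply the trained model to a new input.
prediction = w * 0.5
print(f"w = {w:.3f}, eval MSE = {mse:.5f}")
```

    The learned weight w lands close to the true slope of 2, and the evaluation error stays near the noise floor, which is exactly what the evaluation step is meant to confirm.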


    ML frameworks that support model development offer a complete resource stack (Figure 1). At the apex of a standard stack, inference and training libraries offer services for defining, training, and running models. These libraries in turn build on optimized implementations of kernel functions such as convolutions and matrix multiplication, and of activation functions such as ReLU (Rectified Linear Unit) and Sigmoid, among many others.
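    As a rough sketch of how these layers compose, a dense neural-network layer is essentially the matrix-multiplication kernel followed by an activation function such as ReLU. The pure-Python functions below are illustrative stand-ins for the optimized library implementations, with invented example values:

```python
def matmul(a, b):
    """Naive matrix multiply: the kind of kernel a math library optimizes."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def relu(m):
    """Element-wise ReLU activation: max(0, x)."""
    return [[max(0.0, x) for x in row] for row in m]

def dense_layer(inputs, weights):
    """A higher-level library call, built on the lower-level kernels."""
    return relu(matmul(inputs, weights))

inputs = [[1.0, -2.0]]                 # one sample with two features
weights = [[0.5, -1.0], [-0.25, 0.75]]  # illustrative 2x2 weight matrix
print(dense_layer(inputs, weights))    # [[1.0, 0.0]]: negatives clamp to zero
```

    A real framework replaces these loops with vectorized, hardware-specific kernels, but the layering is the same: library call on top, optimized math underneath.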


    Sigmoid and ReLU are termed activation functions in a neural network. Gradual changes are observed as different optimizers are tested in the Keras section. These changes typify the ReLU and sigmoid functions, which form fundamental building blocks for developing an incrementally adapting learning algorithm. The adaptation progressively reduces the mistakes made by the networks.



    The sigmoid function is expressed as:

    f(x) = 1 / (1 + e^(-x))


    The ReLU is simply defined as:

    f(x) = max(0, x)
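    Both activation functions can be written in a couple of lines of Python. These are the standard mathematical definitions, shown here only for illustration:

```python
import math

def sigmoid(x):
    """Sigmoid: squashes any real input into the open range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    """ReLU: passes positive inputs through, clamps negatives to zero."""
    return max(0.0, x)

print(sigmoid(0.0))  # 0.5, the sigmoid's midpoint
print(relu(-3.0))    # 0.0
print(relu(2.5))     # 2.5
```

    The sigmoid's smooth, bounded output suits probability-like values, while ReLU's simplicity makes it cheap to compute, one reason it dominates in deep networks.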

    The optimized math functions partner with lower-level drivers to offer an abstraction layer that interfaces with a general-purpose CPU. The ML resources stack can also take advantage of specialized hardware like graphics processing units (GPUs), when available.

    Figure 1: In a standard ML stack, higher-level libraries provide neural network implementation functions along with other ML algorithms. These are built on specialized math libraries that implement kernel functions optimized for the GPUs and CPUs in the underlying hardware layer.


    ML has two parts. The train/test part uses massive amounts of data to construct a model. The deployment part uses that model as a component of a project. This is where the Raspberry Pi makes its entrance.
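    The split can be sketched as follows. This is a minimal illustration under loose assumptions: the "model" is just a dictionary of invented parameters serialized with pickle, standing in for a real framework's saved model file:

```python
import pickle

# --- Train/test part (done once, on a powerful machine) ---
model = {"w": 2.0, "b": 0.5}       # parameters "learned" from training data
blob = pickle.dumps(model)         # serialize the model for deployment

# --- Deployment part (done on a device such as a Raspberry Pi) ---
deployed = pickle.loads(blob)      # load the pre-trained model; no retraining

def predict(x):
    """Inference only: apply the loaded parameters to new input."""
    return deployed["w"] * x + deployed["b"]

print(predict(4.0))  # 8.5
```

    The deployment side needs no training data and far less compute, which is why a small board can run models that were built on much larger hardware.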


    ML and the Raspberry Pi

    Because multiple GPUs and fast CPUs are needed to craft AI models (decision-making plans), the models are typically trained elsewhere; the Raspberry Pi then runs them, accessing key AI tools through cloud services. The Raspberry Pi functions as an interface to the physical computing hardware, transforming those AI models into practical real-world projects.


    The Raspberry Pi brings a few instant advantages in its role as an ML application development platform. The quad-core Arm Cortex-A53 processor of the Raspberry Pi offers solid performance, aided by its NEON single instruction, multiple data (SIMD) extensions, which accelerate certain ML workloads. Developers can also attach any number of available, compatible hardware add-ons to extend the base Raspberry Pi hardware platform.


    TensorFlow and the Raspberry Pi

    The availability of ML frameworks has radically reduced the complexity of implementing and training models. Multiple free tools, such as TensorFlow, Theano, Deeplearning4j, Scikit-learn, and Keras, are available for building, training, and deploying ML inference models. The alternative is to employ a pre-trained model; available pre-trained models cover tasks including image classification, segmentation, object detection, smart reply, and pose estimation, among others.


    TensorFlow offers a set of pre-trained models that solve multiple ML problems. These models are converted to work in tandem with TensorFlow, and you can use them in your applications.


    TensorFlow powers AI projects worldwide. This powerful open-source software framework is used to create neural networks and other ML systems for increasingly complex tasks. The framework becomes easier to use when models are trained through the comparatively simple Keras API, and it still delivers strong performance.


    Python’s pip package system can install TensorFlow from pre-built binaries. Installation is effortless if the Raspberry Pi runs Raspbian 9.0 and either Python 2.7 or Python 3.4 and newer. Install it on the Raspberry Pi by typing in the terminal:


    $ sudo apt-get install libatlas-base-dev

    $ sudo apt-get install python3-pip

    $ pip3 install tensorflow


    Resources for ML Applications

    Raspberry Pi can be combined with other resources for ML applications. Here are some examples:


    The Google Artificial Intelligence Yourself (AIY) kit is one of several platforms for using ML on the Raspberry Pi. It allows you to code your own natural language processor (NLP) and link it to Google Assistant or the Cloud Speech-to-Text service. This arrangement lets you ask queries and issue voice commands to your programs. All of it fits in a nifty cardboard cube, powered by the Raspberry Pi. Makers were able to assemble a personalized voice-activated AI assistant akin to Amazon's Alexa, Apple's Siri, and Google's own Google Home Assistant. The AIY project's second iteration was also based on the Raspberry Pi, but the voice kit was succeeded by a vision kit centered on the Pi Bonnet, itself based on the Raspberry Pi pHAT (the partial version of the Raspberry Pi HAT). For more information, see AIY Projects 2: Google's AIY Projects Kits or AIY Voice Kit from Google.


    One possible example of the Raspberry Pi as a principal controller is a cucumber-sorting system that takes images of the fruit. The Raspberry Pi, in combination with TensorFlow, identifies the cucumbers traveling on a conveyor: images are first shot, and then a TensorFlow-powered neural network classifies whether the image in question is a cucumber (or not). The images are then sent to a Linux server for finer-grained classification. An Arduino Micro, which regulates the conveyor and servo motors, sorts each fruit into its proper place as dictated by the given classification. Photos are subsequently dispatched to Google Cloud for additional processing inside the image-sorting application.