
Brainium /SMARTEDGE AGILE - Review

Scoring

Product Performed to Expectations: 8
Specifications were sufficient to design with: 6
Demo Software was of good quality: 8
Product was easy to use: 10
Support materials were available: 6
The price to performance ratio was good: 8
Total Score: 46 / 60
  • RoadTest: Brainium /SMARTEDGE AGILE
  • Evaluation Type: Development Boards & Tools
  • Was everything in the box required?: Yes
  • Comparable Products/Other parts you considered: While there are plenty of IoT management platforms, this product is sold as an end-to-end solution that combines the platform with an edge device offering some unique features out of the box (like AI), so a comparison with any other IoT platform alone would be inappropriate.
  • What were the biggest problems encountered?: I experienced two big problems. The first is the lack of information on very important topics, such as integration with other systems (I can understand there might be IP protection issues here, but this kind of information is highly valuable when choosing to invest in a particular solution). The second is sensor reading reliability: the SmartEdge Agile enclosure seems to affect some of the sensor readings, reducing sensitivity and accuracy.

  • Detailed Review:

     

    Introduction

    After taking part in the beta testing of the Brainium/SmartEdge Agile solution, I offered to participate in the RoadTest of the device: since I already had the device, my participation would not affect the number of devices offered to the community, nor would it deprive any other member of the opportunity to take part in this RoadTest.

     

    My main interest in the device lies in its AI features, so my review might be a little biased towards those. Nevertheless, I will explore the whole Brainium/SmartEdge Agile solution and try to cover as many features as I can. To support this review, I also wrote some blogs, where extra information can be found. Below you can find the links to the blogs:

     

    AI to the Edge - Part 1: Introducing the SmartEdge Agile device

    AI to the Edge - Part 2 : Introducing the Brainium Platform

    AI to the Edge - Part 3 : AI Studio

     

    Unboxing

    I'm afraid I have very little to show for the unboxing this time, as I received the device during the beta testing phase, and as such it came in an element14 cardboard box, already stripped of its fancy plastic packaging. The box contained the SmartEdge Agile device and the USB-C cable.

     

    Product features

    Going through the marketing material and brochures, it is clear the focus of this product is firmly on its capability to perform processing at the edge, with particular emphasis on the introduction of an AI engine. Together with the out-of-the-box AI functionality, the SmartEdge Agile also processes the sensors' data, manages the logic for the alarms and the monitoring, and ensures a secure link to the back-end.

     

    But focusing exclusively on the device's capabilities would be a mistake, as it represents only half of the solution. The other half is the Brainium IoT platform, which provides the software infrastructure to manage the SmartEdge Agile and its functionalities, offering an easy route to the cloud.

     

    So, this RoadTest is really a tale of two stories, where the hardware and the software each tell their own story and together make the product under review.

     

    Let's start by talking about the hardware side of the solution. In my first blog I went through the specifications of the SmartEdge Agile, which can be summarised as follows: the device has a main board hosting the STM32L46QG and the nRF52840, two ARM Cortex-M4F microcontrollers used respectively for the main processing and for the Bluetooth LE connectivity. The choice of the Nordic SoC for the wireless communication, besides making the Bluetooth LE stack available, also allows exploiting the security features offered by the chip (encrypted communication between the device and the gateway).

     

     

    {gallery} SmartEdge Agile

    SmartEdge Agile Teardown

    SmartEdge Agile board close-up

    SmartEdge Agile battery

    SmartEdge Agile ecosystem

     

     

     

    The board accommodates two headers, used to expand the system. One of the headers is taken by the sensor board, which comes fitted as standard, leaving the other available for extra connectivity boards (LoRa, for example). There is also another route to expand the system, although this feature is not ready at the time of writing: the USB-C port is going to be used to connect other sensors to the SmartEdge Agile, so that they can be managed through Brainium. The same USB-C port is also used to power the device and to recharge the 260mAh battery the SmartEdge Agile comes with. Running off the battery, I managed to achieve between 18 and 22 hours of autonomy (depending on how many sensors are active and their sampling rate), and recharging the battery from flat took about 3.5 hours.
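
    Those battery life figures imply a fairly modest average current draw. As a quick sanity check, here is a back-of-the-envelope estimate (a minimal sketch, assuming the full nominal 260mAh capacity is usable, which is optimistic):

    # Rough estimate of the average current draw from the observed battery life.
    # Assumes the full nominal 260 mAh capacity is usable, which is optimistic.
    CAPACITY_MAH = 260.0

    for hours in (18, 22):
        avg_current_ma = CAPACITY_MAH / hours
        print(f"{hours} h of autonomy -> average draw of about {avg_current_ma:.1f} mA")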

    sensors

     

    As mentioned before, the device comes complete with the sensor board, which includes the following sensors:

     

    1. Light Sensor: AMS TSL2540
    2. Temperature/Humidity Sensor: STMicroelectronics HTS221TR
    3. MEMS Microphone: STMicroelectronics MP34DT01-M
    4. Time-of-flight (Proximity) Sensor: STMicroelectronics VL53L1CXV0FY/1
    5. Pressure Sensor: STMicroelectronics LPS22HB
    6. Inertial Measurement Unit (Accelero/Gyro) Sensor: STMicroelectronics LSM6DSLTR
    7. Magnetic Sensor: STMicroelectronics LIS2MDL

     

    Checking the datasheets, it seems the SmartEdge Agile has been equipped with decent sensors (all but one produced by STMicroelectronics), which I believe are good enough, considering the device is not going to be used as a precision instrument.

     

    For example, the pressure sensor has a typical relative accuracy of ±10Pa, which is equivalent to about ±85cm of altitude. The time-of-flight sensor has a range that stretches to 4m, with a standard deviation on the measurement varying from 2.5mm up to 5mm, depending on the timing budget available for the measurement (i.e. the time window used to take multiple readings: the higher the time budget, the lower the standard deviation). However, the range is strongly affected by the lighting conditions (more light = shorter detection range), which can limit the reliability of the readings to below a metre when the sensor is exposed to intense light!
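
    To put the ±10Pa figure into perspective, this is how it maps to altitude near sea level (a minimal sketch, assuming standard air density; the result lands in the same ballpark as the ±85cm quoted above):

    # Convert the pressure sensor's ±10 Pa relative accuracy into an altitude error.
    # Near sea level dP/dh ~ rho * g, i.e. roughly 12 Pa per metre of height change.
    RHO = 1.225  # standard air density at sea level, kg/m^3
    G = 9.81     # gravitational acceleration, m/s^2

    pressure_error_pa = 10.0
    height_error_m = pressure_error_pa / (RHO * G)
    print(f"±{pressure_error_pa:.0f} Pa -> about ±{height_error_m * 100:.0f} cm of altitude")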

     

    Also, the design of the SmartEdge Agile enclosure doesn't help, mostly because of its IP67 requirement, as it gets in the way of many of the sensor measurements, especially if the device is dismantled and then put back together, since the sensors may no longer be properly aligned with the little windows in the enclosure (this only applies to the 4 sensors at the very top of the sensor board, marked from 1 to 4 in the picture above).

     

    On the software side, I investigated the features of the Brainium platform in my second blog, and I will summarise them here. The product leverages Octonion's IoT management framework, upon which the Brainium application is built. The main components of this architecture are:

     

    • device
    • gateway
    • cloud

     

    The Brainium firmware takes care of the device: it manages the sensor settings and processes their data, communicates with the gateway via a secure link (leveraging the Nordic SoC's Bluetooth 5 BLE secure connection features), and provides the OTA firmware update service. As part of the data processing, the firmware offers some “smart monitoring” features at the edge, like condition-based alarm triggering, and includes the Brainium AI model executor for AI-based alarms.

     

    In order to send its data to the server in the cloud, the device needs to use a Brainium gateway. At the moment, the gateway can only be a smartphone (Android/Apple iOS) or a Raspberry Pi (Raspbian), and it needs to support both BLE and either a WiFi, wired or mobile data connection to the internet. Besides basically tunnelling the data, the gateway does very little else. There is also currently a limitation of 2 devices per gateway (I'm not sure whether this is due to licensing restrictions or resource constraints).

     

    The cloud component is the one that does all the heavy lifting: all the core services are hosted in the cloud. The illustration below gives an idea of the Brainium architecture and what services are available.

     

    architecture

     

    One of the selling points highlighted by the marketing literature is security. Indeed, the platform offers security features like OAuth 2.0, X.509 certificate management and SSL/TLS encryption.

     

    The picture above also shows some 3rd-party integration paths: at the moment, for trial users, all the cloud infrastructure is hosted on Microsoft Azure, but I have been assured the Brainium platform cloud components can be deployed on, and/or integrated with, other cloud platforms, like Amazon AWS. Unfortunately, the information about what can be integrated with Brainium and how is not publicly available, and if you think you have a potential use case or project, the only way to gain further information is to engage with an AVNET representative and work with them.

     

    One integration path that is documented is the set of APIs exposed by the platform, but I will talk about them later, when I introduce a test case for their usage.

     

    The only cloud component visible to the end user is the Brainium portal, which is also where the "operational" features are exposed. Through the portal, the user can perform the following operations:

     

    • infrastructure management: add/remove gateways, add/remove devices, turn on/off each sensor on each device
    • basic monitoring: enable/disable live tracking of the sensors data on each device, record tracking sessions, store them and download data as CSV files
    • smart monitoring: create/delete rules for generating alarms on specific conditions, targeting specific devices/sensors
    • AI-based monitoring: create/remove AI-based rules for Motion Recognition and Predictive Maintenance, targeting specific devices

     

    Unlike all the other functionalities, working with AI-based monitoring requires the AI Studio tool, available on the portal. The basic idea behind the product is to offer the user a "zero-code" experience when implementing AI features. As such, AI Studio offers an easy and intuitive way to create the models needed for using AI. At the time of this RoadTest, the only two AI features available in Brainium are Motion Recognition and Predictive Maintenance.

     

    With Motion Recognition, the user records the motions they want to detect using the SmartEdge Agile device, making sure each motion is repeated multiple times, then uses those recordings as training data to generate the AI model. Once the model is generated and deployed to the device, AI rules for monitoring can be created. The overall process is quite simple.

     

    Predictive Maintenance works by analysing vibration patterns, with the aim of determining that some machinery is in need of maintenance work by identifying patterns typical of a failing machine. Unlike Motion Recognition, we don't know the patterns that indicate an incoming failure, but we do know the patterns of a piece of machinery in good working order, so the training has to start from there. Once we have recorded some typical patterns using the SmartEdge Agile, a model can be generated and then deployed to the target device. Once deployed, AI rules can be created for the monitoring. While monitoring, the device will keep "learning" and report any new patterns it identifies. Those patterns can later be used to enrich the original model, increasing its accuracy. Again, the process is pretty simple.

     

    If you are interested in more details, AI Studio is the subject of my third blog.

     

    Product testing

    Before getting into the details of the testing, I want to stress once more the impact of the SmartEdge Agile enclosure on the measurements.

    The photos on the right show how the enclosure is designed. Its effects are particularly felt by the following sensors:

     

    • proximity sensor
    • light sensor

     

    For the proximity and the light sensors, the enclosure can introduce an offset on the measurement, while also reducing the sensors' sensitivity. These effects are particularly noticeable for the light sensor: to test this, I used the light sensor included in my smartphone for comparison. The test measurements were taken at the same time, under the same lighting conditions, on both the smartphone and the SmartEdge Agile. For the latter, the measurements were repeated twice: once with the sensor exposed and once with the sensor in the enclosure. Below you can find my results:

     

    Test conditions                                  | Smartphone sensor | SmartEdge Agile (naked) | SmartEdge Agile (enclosed)
    Sunset, indoor, no direct light on the devices   | 93 lux            | 90 lux                  | 8 lux

     

    As can be appreciated, especially in low light conditions, the difference is rather dramatic.

     

    The situation is not much better when it comes to the proximity sensor: with the enclosure on, it can hardly read distances past the metre mark, and even within a metre the error is a lot higher than when the reading is taken with the naked sensor. I performed some testing at different distances, repeating the readings with and without the enclosure (each measurement is the average of 10 consecutive readings), and you can read the results below:

     

    Distance               | 10cm     | 20cm     | 50cm     | 100cm    | 150cm
    SmartEdge (naked)      | 9.18cm   | 19.84cm  | 49.25cm  | 96.42cm  | 149.24cm
    SmartEdge (enclosed)   | 11.00cm  | 20.73cm  | 50.26cm  | 90.62cm  | 200cm

     

    I limited the testing to a 150cm distance, because the reading from the enclosed sensor was already wrong at that distance (with the "naked" sensor I managed to reach 275cm). From the data, it can be seen how the enclosure degrades the measurement both at close proximity (<20cm) and at "out of reach" distances (>75cm).
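
    To quantify the degradation, here is a minimal sketch that computes the percentage error of both configurations against the reference distances, using the averaged readings from the table above:

    # Percentage error of the averaged readings (from the table above) versus the true distance.
    reference_cm = [10, 20, 50, 100, 150]
    naked_cm     = [9.18, 19.84, 49.25, 96.42, 149.24]
    enclosed_cm  = [11.00, 20.73, 50.26, 90.62, 200.00]

    for ref, naked, enclosed in zip(reference_cm, naked_cm, enclosed_cm):
        err_naked = 100 * abs(naked - ref) / ref
        err_enclosed = 100 * abs(enclosed - ref) / ref
        print(f"{ref:>3} cm: naked {err_naked:4.1f}%, enclosed {err_enclosed:4.1f}%")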

     

    Now, I'm quite sure these kinds of effects are taken into account when designing the board, and surely there must be some calibration process, possibly in the SmartEdge Agile firmware, which allows for correction. Unfortunately, either I have ended up with a dodgy unit, or the calibration does not seem to be working. There is also the possibility that opening the device affects those sensors, but if this is the case, I would have expected at least a warning in the user guide, stating that opening the device may require a recalibration (but then, what are you supposed to do if, for example, you need to replace the 260mAh battery?).

     

    Anyway, enough of this, let's now move on and do some testing of the Brainium portal. The portal is the user dashboard, which allows the user to manage and monitor the product. It offers 2 main views of the system: Projects and Equipment (there is also a third one: the AI Studio). The Project view is the “operational” view, which gives the user access to all the monitoring functionalities, while the Equipment view is the “physical” view, which allows the user to manage the system infrastructure (gateways and devices).

     

    But first, before using the product, the new user needs to sign up on the Brainium website and create an account. This also allows the user to download and install the Gateway app. The instructions provided in the User Guide are clear, and the whole process is easy and quick. Once signed up, downloading and installing the app is straightforward. My Android device used a wireless connection to provide internet connectivity; I have not used a mobile data connection for the testing. If the gateway is left unattended and is meant to be active all the time, it is important that the Android system is configured so that the Brainium GW app is protected from any power saving policy, keeping it running even when the device screen is turned off and the power saving mode kicks in (obviously the same applies to the WiFi and Bluetooth connections the GW relies upon).

     

    The first test scenario involves adding one gateway and one device to the system using the Equipment view. The gateway needs to be added first, which requires the Brainium GW app to be already installed and running on the smartphone (i.e. logged in and connected to the internet), so that it becomes visible in the list of available gateways on the portal. Then the device is added to the gateway. As long as the device is switched on and the gateway is connected, the addition of the device is successful.

     

    Now that the physical system is connected, the operational side can be tested too. The first step is to create a new project, then assign the newly connected device to the project, so it can be monitored. In the Project view, the information from the SmartEdge Agile sensors can be displayed (tracked) using widgets. Each widget is associated with one sensor of the device, and can display its data either as a chart (line or bar) or as plain data (the visualisation mode can be changed at any time). The tracking session can be recorded, and later downloaded as a CSV (comma-separated values) file. There is a time limit on the session length: the maximum time span for a single session recording is 1 hour. You can record as many sessions as you wish, as they are stored on the server and kept for 30 days.
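
    As an idea of how a downloaded recording could be post-processed offline, here is a minimal sketch that loads a session CSV and computes the average of one column. The file name and the column name ("value") are assumptions for illustration only; check the header of your own export and adjust accordingly:

    import csv

    # Illustrative only: "session.csv" and the "value" column are placeholders,
    # the actual layout of the exported file may differ.
    values = []
    with open("session.csv", newline="") as f:
        for row in csv.DictReader(f):
            values.append(float(row["value"]))

    if values:
        print(f"{len(values)} samples, average value: {sum(values) / len(values):.2f}")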

     

    Within the project, the user can define rules to monitor each device and generate alerts if certain events occur. There are 2 types of rules: Smart and AI. Smart rules allow the definition of conditions (equal to, not equal to, greater than, less than) on each of the sensors. I have tested all the sensors, using the different conditions, and the product worked very well, offering a very good monitoring platform that is easy to set up and manage. The alarms are clear and the system feels quite responsive.

     

    Rather than the Smart rules, since the beginning I have been more interested in the AI rules: Motion Recognition and Predictive Maintenance. In my blog about AI Studio, I go through the AI functionalities of Brainium and, in particular, I test the Predictive Maintenance feature (the photo below refers to that blog).

     

     

    So, for this review, I'm going to test the other AI feature: Motion Recognition. Put simply, this feature allows the recognition of a movement without the need for the user to write any code or algorithm: all you need to do is "teach" the AI engine the motions you want it to learn and detect. The ultimate goal of this learning process is to generate a model the AI engine can use; the model is therefore the central concept of the solution. In order for the model to be as accurate as possible, the user needs to record as many samples as possible for each motion. The creation of the training recordings is, per se, pretty straightforward. Once you have enough motions recorded, model generation can be attempted. The time needed to create the model varies with the amount of training data used: more motions means longer processing time. Overall, the processing time is reasonable.

     

    The created model can be inspected, and the model's confusion matrix can be checked, although it doesn't give a good indication of the quality of the model. A further parameter, the "Maturity", has been created by the Brainium team to give the user a quality indicator (it is a proprietary index, which seems to be linked to the amount of training samples provided for the learning: the more repetitions of the motion are provided, the higher the index).
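
    For comparison, this is the kind of information one would normally derive from a confusion matrix if the raw counts were exposed. A minimal sketch with a made-up two-motion matrix, just to illustrate per-class precision and recall (the numbers are purely illustrative, not taken from Brainium):

    # Illustrative only: per-class precision/recall from a confusion matrix.
    # Rows = actual motion, columns = predicted motion; the counts are made up.
    labels = ["infinite", "zig-zag"]
    confusion = [
        [90, 10],  # actual "infinite": 90 correct, 10 mistaken for "zig-zag"
        [15, 85],  # actual "zig-zag": 15 mistaken for "infinite", 85 correct
    ]

    for i, label in enumerate(labels):
        true_pos = confusion[i][i]
        recall = true_pos / sum(confusion[i])
        precision = true_pos / sum(row[i] for row in confusion)
        print(f"{label}: precision {precision:.2f}, recall {recall:.2f}")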

     

     

    I tested the AI motion model using a complex motion (the “infinite” gesture, i.e. an 8 oriented horizontally rather than vertically), hoping that providing many motion recordings would help the gesture be recognised once used in an AI rule. I managed to reach a Maturity of 58% for the model by providing something like a thousand repetitions of the motion (it took some time!).

     

    The SmartEdge Agile accelerometer is a 3-axis sensor, and the data collected is 4-dimensional (the acceleration vector [x, y, z] and the vector magnitude). The figure on the right shows the axis orientation for the device. The “infinite” movement used to create the model is supposed to be a “2-dimensional” movement, entirely contained in the x-z plane (i.e. as far as possible, the movement should have negligible acceleration along the y axis).
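
    For clarity, the fourth dimension is simply the Euclidean norm of the acceleration vector; a minimal sketch:

    import math

    # The fourth "dimension" reported alongside [x, y, z] is the vector magnitude.
    def acceleration_magnitude(x, y, z):
        return math.sqrt(x * x + y * y + z * z)

    # A device at rest should report roughly 1 g (about 9.81 m/s^2) along the gravity axis.
    print(acceleration_magnitude(0.0, 0.0, 9.81))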

     

    Once the AI model was applied to the device and the AI rule created, the movement was indeed recognised by the SmartEdge Agile (alarm triggered) with pretty good accuracy, even when the device orientation was varied, as long as the z axis didn't deviate from the terrestrial gravity vector by more than approximately 45 degrees (I suspect this could depend on the way the movements for the training were executed, because they all had the constant terrestrial gravity “bias” applied along the z axis).

     

    Unfortunately, some similar movements (rapid zig-zag movements, for example) were also wrongly recognised as the infinite, but I suppose this could be fixed by adding more training recordings for the infinite motion and rebuilding the model.

     

    The overall process is simple: the wizard-like style chosen for the user interface guides the user through the creation of the training records and the model. The image gallery below walks through the process, to give a visual feel for it.

     

     

    {gallery} Motion Recognition walkthrough

    Step 1: Start by creating a Motion

    Step 2: Record a training set, containing repetitions of the motion

    Step 3: Check the recording of the training set, to make sure the motion has been correctly identified

    Step 4: Once you have enough recordings of the motion, you can use them to create the model

    Step 5: Apply the model to the device

    Step 6: Create AI Rule to monitor the device. When the rule is triggered, the event can be notified as Alert, email or via IFTTT.

     

     

     

    Leveraging the APIs

    The platform offers 2 sets of APIs to access the data: one based on the REST architectural style and the other based on the MQTT protocol over WebSocket (the full documentation for both APIs can be found on the Brainium website). The REST API is useful to get information about devices, gateways, projects, widgets, alerts, recordings and motions. The data accessed via this API is not real-time, but historical. If you want access to real-time data, you can use the MQTT API, which provides the data through its subscription model.

     

    Both APIs are secured using an access token to grant authorization, which is provided by the portal. One important thing to note is that the APIs only allow "read-only" access to the Brainium resources: you can list the devices, but you cannot add or remove them, and the same applies to all the other objects.
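
    Just to give an idea of what the token-based access looks like, here is a minimal sketch using the Python requests library. The host and path below are placeholders (the real endpoints are listed in the Brainium documentation), and the Bearer scheme is my assumption:

    import requests

    # Illustrative only: host, path and authorization scheme are assumptions,
    # refer to the Brainium REST documentation for the real endpoints.
    ACCESS_TOKEN = "<your_token>"  # the token provided in your Brainium profile details
    url = "https://<brainium_host>/v1/devices"  # hypothetical "list devices" endpoint

    response = requests.get(url, headers={"Authorization": "Bearer " + ACCESS_TOKEN})
    response.raise_for_status()
    print(response.json())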

     

    For the REST API, the Brainium documentation page offers plenty of sample code snippets, so my sample code focuses on the MQTT API instead. The code I wrote is based on the example provided in the documentation: it connects the MQTT client to the Brainium MQTT broker, subscribes to all the available topics, and once it receives a message, processes it according to the type of message. Below you can find the Python source code. To use it, you obviously need a Brainium account, and from your profile details you need to get your user ID and your password (token). In order to secure the connection, you also need to download the cacert.crt certificate used for the example, available here.

     

    This sample code is very basic, and is provided just as a means to test the MQTT API. As such, things like error management are not implemented here.
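
    To try it, install the paho-mqtt package (pip install paho-mqtt), fill in your user ID, token and device ID in test_mqtt_api.py, place message.py and the cacert.crt certificate in the same folder, and run the script with Python 3.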

     

     

    {tabbedtable} Tab LabelTab Content
    test_mqtt_api.py

    This is the file which contains the code to set up the MQTT client (paho-mqtt) connection to Brainium. It is quite simple: once configured (credentials, TLS certificate and callbacks), the client connects to the broker, subscribes to all the topics, and loops forever, waiting for incoming messages from the broker.

     

    The two registered callback functions are invoked respectively upon connection (on_connect) and upon message arrival (on_message).

     

    When a new message is received, a Factory object (defined in the message.py file) takes care of instantiating a message object of the right class, on which the process method can then be invoked.

     

    import uuid
    
    import paho.mqtt.client as mqtt
    from message import topics, Factory
    
    mqtt_user_name = 'oauth2-user'
    mqtt_password = '<your_password>'  # copy and paste here external client id from your account
    user_id = '<your_id>'  # copy and paste here your user id
    device_id = '<your_device>'  # copy and paste here your device id
    
    ca_cert_path = 'cacert.crt'
    
    def on_connect(client, userdata, flags, rc):
        print('Connected with result code {code}'.format(code=rc))
    
    def on_message(client, userdata, msg):
        factory = Factory()
        msgObj = factory.getInstance(msg)
        msgObj.process()
    
    def main():
        client = mqtt.Client(client_id=str(uuid.uuid4()), transport='websockets')
        client.on_connect = on_connect
        client.on_message = on_message
    
        client.tls_set(ca_certs=ca_cert_path)
        client.username_pw_set(mqtt_user_name, mqtt_password)
    
        client.connect('ns01-wss.brainium.com', 443)
    
        for topic_str in topics:
            topic = topics[topic_str].format(userId=user_id, deviceId=device_id)
            client.subscribe(topic)
    
        client.loop_forever()
    
    if __name__ == "__main__":
        main()
    
    message.py

    This file contains the classes used to encapsulate the MQTT messages received from the broker. The idea is to have different objects for different messages, all capable of processing the message (via the process method). In this example, the processing is limited to printing out some information from the received message.

     

    import json
    
    class msg_dummy:
        def __init__(self, topic=None, payload=None):
            self.topic = topic
            self.payload = payload
    
    class Message:
        def __init__(self, msg):
            self.topic = msg.topic
            self.payload = msg.payload.decode("utf-8")
            if not (self.payload in (None, '')):
                self.json = json.loads(self.payload)
                self.parse()
    
        def getType(self):
            return type(self).__name__
    
        def __str__(self):
            return type(self).__name__
    
        def dump(self):
            return str(self.json)
    
        def parse(self):
            pass
    
    class ProbabilityRank(Message):
        def parse(self):
            self.code = self.json["code"]
            self.name = self.json["name"]
            self.rank = self.json["rank"]
    
    class Motion(Message):
        def parse(self):
            self.id = self.json["id"]
            self.deviceId = self.json["deviceId"]
            self.projectId = self.json["projectId"]
            self.modelId = self.json["modelId"]
            self.startedAt = self.json["startedAt"]
            self.finishedAt = self.json["finishedAt"]
            self.receivedAt = self.json["receivedAt"]
            self.code = self.json["code"]
            self.name = self.json["name"]
            self.selfProbability = self.json["selfProbability"]
            self.acceleration = self.json["acceleration"]
            self.speed = self.json["speed"]
            self.probabilityRank = []
            for probabilityRank in self.json["probabilityRank"]:
                # Re-serialise the nested object with json.dumps (and encode to bytes) so the
                # Message constructor can decode and json.loads it again.
                self.probabilityRank.append(ProbabilityRank(msg_dummy(self.topic, json.dumps(probabilityRank).encode("utf-8"))))
    
        def process(self):
            print("Motion: ", self.name, " - Model ID: ", self.modelId)
            print("Start: ", self.startedAt, " - Finished: ", self.finishedAt, " - Received: ", self.receivedAt)
    
    class Alert(Message):
        def parse(self):
            self.id = self.json["id"]
            self.type = self.json["type"]
            self.triggeredTimestamp = self.json["triggeredTimestamp"]
            self.ruleId = self.json["ruleId"]
            self.projectId = self.json["projectId"]
            self.projectName = self.json["projectName"]
            self.deviceId = self.json["deviceId"]
            self.deviceName = self.json["deviceName"]
            if self.type == "SMART_RULE":
                self.datasource = self.json["datasource"]
                self.condition = self.json["condition"]
                self.value = self.json["value"]  # array
                self.datasourceUnits = self.json["datasourceUnits"]
                self.triggeredValue = self.json["triggeredValue"]
            elif self.type == "AI_RULE":
                self.motionTypeCode = self.json["motionTypeCode"]
                self.motionTypeName = self.json["motionTypeName"]
            else:  # AI PREDICTIVE MAINTENANCE
                self.patternId = self.json["patternId"]
                self.patternName = self.json["patternName"]
                self.anomalyType = self.json["anomalyType"]
    
        def process(self):
            if self.type == "SMART_RULE":
                print("Source: ", self.datasource, " - Timestamp: ", self.triggeredTimestamp)
                print("Value: ", self.triggeredValue, " ", self.datasourceUnits, " (", self.condition, " ", self.value, ")")
            elif self.type == "AI_RULE":
                print("Motion: ", self.motionTypeName, " - Timestamp: ", self.triggeredTimestamp)
                print("Code: ", self.motionTypeCode)
            else:
                print("Source: ", self.patternName, " - Timestamp: ", self.triggeredTimestamp)
                print("Id: ", self.patternId, " - Anomaly Type: ", self.anomalyType)
    
    class PDM(Message):
        def parse(self):
            self.id = self.json["id"]
            self.name = self.json["name"]
            self.type = self.json["type"]
            self.triggeredFirstTime = self.json["triggeredFirstTime"]
            self.triggeredLastTime = self.json["triggeredLastTime"]
            self.triggeredCounter = self.json["triggeredCounter"]
    
        def process(self):
            print("Pattern: ", self.name, " - Type: ", self.type, " - Trigger Counter: ", self.triggeredCounter)
            print("Triggered First Time: ", self.triggeredFirstTime, " - Triggered Last Time: ", self.triggeredLastTime)
    
    class Telemetry(Message):
        __vectors__ = ("ACCELERATION_NORM", "WORLD_ACCELERATION_NORM", "ROTATION", "MAGNETIC_FIELD_NORM")
        __scalars__ = ("GYROSCOPE_NORM", "HUMIDITY_TEMPERATURE", "PRESSURE", "HUMIDITY", "PROXIMITY",
                       "VISIBLE_SPECTRUM_LIGHTNESS", "IR_SPECTRUM_LIGHTNESS", "SOUND_LEVEL")
    
        def parse(self):
            # Normalise the payload to a list of readings (a single reading arrives as a plain object).
            if type(self.json) is list:
                self.readings = self.json
            else:
                self.readings = [self.json]
    
        def isVector(self):
            # The datasource name is the last segment of the topic path.
            return self.topic.rsplit('/', 1)[-1] in self.__vectors__
    
        def process(self):
            for reading in self.readings:
                print(self.topic, " - Timestamp: ", reading["timestamp"])
                if self.isVector():
                    print(reading["vector"])
                else:
                    print(reading["scalar"])
    
    topics = {
        "ACCELERATION_NORM": "/v1/users/{userId}/in/devices/{deviceId}/datasources/ACCELERATION_NORM",
        "WORLD_ACCELERATION_NORM": "/v1/users/{userId}/in/devices/{deviceId}/datasources/WORLD_ACCELERATION_NORM",
        "GYROSCOPE_NORM": "/v1/users/{userId}/in/devices/{deviceId}/datasources/GYROSCOPE_NORM",
        "ROTATION": "/v1/users/{userId}/in/devices/{deviceId}/datasources/ROTATION",
        "HUMIDITY_TEMPERATURE": "/v1/users/{userId}/in/devices/{deviceId}/datasources/HUMIDITY_TEMPERATURE",
        "PRESSURE": "/v1/users/{userId}/in/devices/{deviceId}/datasources/PRESSURE",
        "HUMIDITY": "/v1/users/{userId}/in/devices/{deviceId}/datasources/HUMIDITY",
        "PROXIMITY": "/v1/users/{userId}/in/devices/{deviceId}/datasources/PROXIMITY",
        "VISIBLE_SPECTRUM_LIGHTNESS": "/v1/users/{userId}/in/devices/{deviceId}/datasources/VISIBLE_SPECTRUM_LIGHTNESS",
        "IR_SPECTRUM_LIGHTNESS": "/v1/users/{userId}/in/devices/{deviceId}/datasources/IR_SPECTRUM_LIGHTNESS",
        "MAGNETIC_FIELD_NORM": "/v1/users/{userId}/in/devices/{deviceId}/datasources/MAGNETIC_FIELD_NORM",
        "SOUND_LEVEL": "/v1/users/{userId}/in/devices/{deviceId}/datasources/SOUND_LEVEL",
        "PDM_PATTERN": "/v1/users/{userId}/in/devices/{deviceId}/datasources/PDM_PATTERN",
        "PDM_EVENT": "/v1/users/{userId}/in/devices/{deviceId}/datasources/PDM_EVENT",
        "alerts": '/v1/users/{userId}/in/alerts',
        "MOTION": "/v1/users/{userId}/in/devices/{deviceId}/datasources/MOTION"
    }
    
    class Factory:
        __classMapping__ = {
            "alerts": Alert,
            "MOTION": Motion,
            "ACCELERATION_NORM": Telemetry,
            "WORLD_ACCELERATION_NORM": Telemetry,
            "GYROSCOPE_NORM": Telemetry,
            "ROTATION": Telemetry,
            "HUMIDITY_TEMPERATURE": Telemetry,
            "PRESSURE": Telemetry,
            "HUMIDITY": Telemetry,
            "PROXIMITY": Telemetry,
            "VISIBLE_SPECTRUM_LIGHTNESS": Telemetry,
            "IR_SPECTRUM_LIGHTNESS": Telemetry,
            "MAGNETIC_FIELD_NORM": Telemetry,
            "SOUND_LEVEL": Telemetry,
            "PDM_PATTERN": PDM,
            "PDM_EVENT": PDM
        }
    
        def getInstance(self, msg):
            obj = None
            for key in self.__classMapping__:
                if key in msg.topic:
                    cls = self.__classMapping__.get(key)
                    obj = cls(msg)
                    break
            return obj
    

     

     

    Conclusion

    Let's start with the positives: the SmartEdge Agile is definitely a nice device. It is loaded with very useful sensors and, using the Smart rules, it makes a very powerful IoT device, able to cover many monitoring scenarios (if anything, perhaps there are more sensors than really needed!).

     

    The Brainium platform is quite an interesting actor in the IoT and AI space: it is fresh, and it does a good job of simplifying the tasks for the user. It is clear that a lot of thought has been put into designing the user experience. Also, the synergy with the SmartEdge Agile creates an end-to-end solution, which offers something different from the traditional IoT vendors. As for the AI features, Motion Recognition is definitely a good fit for the "zero-code" approach, and it seems to work pretty well.

     

    This product aims to address the needs of manufacturers for a tool that lets them experiment with IoT and AI for their products or in their factory, without having to invest a lot of money and time in acquiring the knowledge in house. Brainium and the SmartEdge Agile could offer a fast route to market for such manufacturers, and it probably makes sense in business terms. But is this need genuine, or is it instilled by the huge hype around both IoT and AI?

     

    And most of all, what is the cost of adopting this solution? This brings up my first big problem: lack of information. Not only information on the technical side of the product (for instance, the integration capabilities of the platform are not disclosed), but also on the financial side (how can someone estimate how much it would cost to run a solution based on Brainium and SmartEdge Agile without any indication of prices?). I mean, I'm not asking for a monthly cost calculator like the one for Amazon Web Services, but even a simple price list would be of great help.

     

    Obviously I can understand some information is confidential, and the companies involved want to protect their IP, but I'm sure something better than "contact your AVNET representative" can be done to make more information publicly available.

     

    I also have some reservations about the way the AI features are implemented. While the effort to simplify the experience with AI is commendable, I believe simplifying the user experience of creating an AI model has come at the expense of the ability to fine-tune the training process and the model itself. It is now widely recognised that a significant improvement in model prediction accuracy can be achieved if the learning is carried out using quality training data sets. Polishing and adjusting the data is almost a mandatory step, and in Brainium that happens "behind closed doors", without any possibility for the user to intervene and make adjustments.

     

    This is even more true when we talk about Predictive Maintenance, a long process that is iterative by nature and needs feedback and fine tuning to bring concrete results, none of which is possible with the Brainium implementation of PDM. Users would definitely benefit from having some access to the process, to be able to get more information, change some parameters, and so on.

     

    I want to be clear: by saying this I don't mean this product is not good, far from it! I have thoroughly enjoyed spending time learning about the SmartEdge Agile and the Brainium platform. I just get the feeling that it is not mature yet, and perhaps has been rushed to market because the window of opportunity for IoT and AI solutions is now. I'm sure that, with time, it has the potential to become a great product, and I will be keeping an eye on its progress.

     

    Finally, I would like to thank rscasny, element14, AVNET and Octonion for giving me the chance to test this product. A special thanks to Joelle Foster from Brainium for her patience in dealing with my many questions, and for all the valuable information she has provided me during all this time.

