
Internet of Things


Stephen Hawking believes that while AI can do some good, it can also do a lot of harm, and we need to be prepared for the risks involved. Stephen Hawking’s outlook on the future of AI isn’t so optimistic. (Photo via


Casual, but maybe a little bit scary. The story of human civilization being overthrown by computers and AI has been turned into numerous books, movies, and TV shows. But it seemed like a threat only if you were really paranoid about technology. Well, maybe it’s time to panic now, since Stephen Hawking says AI has the potential to destroy us, and he knows what he’s talking about.


Hawking dropped this bombshell during a technology conference in Lisbon, Portugal. He says the only way we can prevent it is if we find a way to control computers. According to him, computers have the ability to “emulate human intelligence and exceed it.” And since we’re constantly looking for ways to improve AI it could be the best thing for society or the worst. “We just don't know,” he said. “So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.”


He admitted that AI does have the potential to reverse the damage done to the natural world or even eradicate poverty and disease, but it’s the uncertainty of the future that sets off alarms for him, and we have to be ready for a worst-case scenario. Hawking says we have to learn how to prepare for and avoid risk with AI, as it can bring new dangers and can disrupt the economy.

This isn’t the first time he’s warned us about AI. He expressed similar fears in a recent interview with Wired: he’s afraid AI will outperform humans and become so advanced it can create a new life form. And in case you wanted to hear more bad news, he thinks the Earth is doomed too.


Earlier this year, he said humans have about 100 years to leave Earth to survive as a species. He’s become more vocal about finding a new planet to live on. Why so? Mainly because we’re running out of room on the planet we’re on, and our natural resources are disappearing. And, you know, there is global warming to think about.


Needless to say, he’s not very positive about the future. But he did point out new European legislation that establishes rules regarding AI and robots. And he doesn’t think all AI is bad. He knows some of it can be created for the good of the world; he just wants people to be aware of the dangers and risks related to it. So, maybe we should keep this in mind before Google’s AlphaGo gets any smarter.



See more news at:

Amazon now has an AR View feature on its shopping app, and Microsoft’s HoloLens is now certified as safety goggles. Preview items in your home with Amazon AR View. (Photo from Amazon)


People are always finding new ways to use technology, like using your smartphone as a projector. Companies are doing the same thing, but on a slightly bigger scale. Amazon is always on the cusp of introducing technology you really don’t need. The retail giant is launching a new AR feature inside its shopping app that previews products like kitchen appliances and toys in 3D right in your home, released just in time for holiday shopping.


Amazon wants to improve your shopping experience by letting you see how products look in your home. It’s not a necessary feature, and more likely something you’ll just play around with, but it can help you decide if that vase goes with the rest of your décor. You can rotate items 360 degrees, letting you see objects from every angle.



The feature works on Apple’s ARKit, meaning you need an iPhone 6S or newer to take advantage of it. Will AR View come to Android devices? We don’t know yet. It most likely depends on how well this new gimmick goes over. It probably won’t completely change the way you shop, but it may save you a couple of returns.


On a different note, Microsoft has found a new use for its HoloLens. First released in 2015, it launched as a product for gamers but has been a slow seller due to its $3,000 price tag. Microsoft isn’t too worried about that; the device is popular with businesses since it lets designers visualize digital changes on real-life objects. This helps employees complete complex tasks and even present high-tech demos. The device has been so popular that the company is expanding its sales to 29 new European markets, bringing the total to 39 nations.


In addition, the HoloLens is now certified as basic protective eyewear, having received an IP50 rating for dust protection in construction zones. Microsoft wasted no time with the new certification and announced that a HoloLens hard hat accessory is currently in production and will be released next year. Though it sounds innovative, Intel already tested a mixed reality headset with a built-in hard hat at CES last year.


HoloLens may not be the most popular option for gaming, but it’s at least finding new life with businesses. It’s hard to say whether Amazon’s AR View will have the same success.




Hello element14 friends :-)

There are only a couple of weeks left (the deadline is Nov. 13) to enter the fourth edition of our Open IoT Challenge!
The Open IoT Challenge encourages IoT enthusiasts and developers to build innovative solutions for the Internet of Things using open standards and open source technology. As in past years, we have lots of great prizes ($3,000 for the grand winner!), including hardware vouchers for the 10 short-listed best proposals.

Some ideas I would love to see used in your projects include deep learning and low-power/long-range technologies (Sierra Wireless is giving away some MangOH Red boards, btw), and you can find more thoughts in this blog post.

I really hope to see lots of projects coming from the element14 community, and that you will share your journey as blog posts here! Let me know if you submit a proposal, or if you need advice before doing so.

B –

Have you ever wanted to keep track of how many people pass by a location? Have you ever wanted to see what computer vision can do for you, or maybe wanted to get involved with IoT? Well, this may be the project for you!

This project uses a USB webcam and MATLAB to develop a people-tracking algorithm built on the ThingSpeak IoT platform. Here is a quick link to the video for those who are visual learners -


When a person's face is in front of the webcam, the program does several things:

  • Draws a bounding box around the face
  • Captions the bounding box with the Face ID number
  • Tracks the person's face throughout the video frame with the captioned bounding box
Additionally, for every 5th face detected per session, we take a picture of the user and display it on the screen with a congratulatory message for five seconds.

Does It Work?

Note: You must be connected to the internet in order for the people counter to transmit/receive data from ThingSpeak.
  • How are the faces tracked?
The people counter uses functions from the Computer Vision System Toolbox to detect and track human faces. Faces are tracked based on key facial structure points, which helps the algorithm follow a face even as it rotates in the video. The people counter can track multiple faces by using 'MultiObjectTrackerKLT.m', which is based on the Kanade-Lucas-Tomasi feature tracker.
  • How is the Face ID determined?

The unique face ID is determined by reading the data in the ThingSpeak channel. The latest channel entry contains the ID of the last face that was detected, so essentially ThingSpeak serves as the secretary for the people counter. To make sure we don't use invalid data, the program checks that the entry was logged on the current day. If it is not from the current day, the unique face ID starts over at 1. Whenever a new face is detected, the algorithm writes new data to the ThingSpeak channel, which updates the face ID number.
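The project itself is written in MATLAB, but the day-check-and-increment bookkeeping is simple enough to sketch. Below is an illustrative Python version; the function name `next_face_id` and the `(timestamp, face_id)` entry shape are assumptions for the sketch, not the project's actual code:

```python
from datetime import date, datetime

def next_face_id(last_entry, today=None):
    """Return the face ID to assign next, given the latest channel entry.

    last_entry is a (timestamp, face_id) pair read from the ThingSpeak
    channel feed, or None if the channel is empty.
    """
    today = today or date.today()
    if last_entry is None:
        return 1                    # empty channel: start counting at 1
    timestamp, face_id = last_entry
    if timestamp.date() != today:
        return 1                    # stale entry: restart today's count at 1
    return face_id + 1              # continue today's count

# An entry logged earlier today continues the sequence; one from another day resets it
entry = (datetime(2017, 11, 1, 9, 30), 7)
print(next_face_id(entry, today=date(2017, 11, 1)))  # -> 8
print(next_face_id(entry, today=date(2017, 11, 2)))  # -> 1
```

In the real project this check runs against the ThingSpeak channel feed before the new face ID is written back to the channel.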

Documentation and Demonstration

To download the project files, go to the following link:


PS - This is a repost from a Hackster post by my colleagues Nick and Dan

Mobile chipmaker ARM introduces a new security framework to increase IoT security, while Congress reworks a bill that allows intelligence agents to spy on citizens. Will ARM and Congress get us any closer to improving IoT security and privacy? (Photo via Getty)


The Internet of Things was supposed to make our lives easier, but lately it’s given us nothing but headaches. It feels like every other week there’s a report of another data breach causing you to check your accounts every hour – security is a big issue holding IoT back. It’s clear IoT has some security and privacy problems, so how can they be fixed? UK mobile chip maker ARM, whose chip designs are used by Qualcomm and Samsung, thinks it has the answer with its new security framework, Platform Security Architecture (PSA).


PSA helps designers build security directly into a device’s firmware. The main component of the new framework is an open source reference implementation, “Firmware-M,” that the company plans to release for Armv8-M systems early next year. The PSA also gives designers IoT threat models, security analyses, and hardware and firmware architecture specifications.


So far, several tech giants like Google, Cisco, Sprint, and Softbank have signed on to support PSA. While the company wants to expand the coverage of the framework, for now it will be focusing on IoT. We’ll have to wait until next year to see if the framework actually improves security and privacy. We could also, you know, not have every device be smart, like smart clothing.


Before you rest easy thinking IoT issues are about to be solved, there’s another fight over privacy and security happening in Congress. This one involves the NSA’s internet surveillance program. Recently, new legislation was introduced to reform Section 702 of the Foreign Intelligence Surveillance Act, which allows intelligence agencies to keep an eye on the communications of foreign targets living outside the US. But the agencies can also collect information on US citizens if they’re in contact with the non-citizens being monitored.


The section will expire at the end of the year, and the new legislation would renew it for four years. It would also require the NSA to get a warrant before searching through a US citizen’s communications, make it easier for people to challenge the law in court, and give the Civil Liberties Oversight Board more oversight power.


"Congress must not continue to allow our constitutional standard of 'innocent until proven guilty' to be twisted into 'If you have nothing to hide, you have nothing to fear.' The American people deserve better from their own government than to have their Internet activity swept up in warrantless, unlimited searches that ignore the Fourth Amendment," Senator Rand Paul said in a statement.


While US intelligence officials see Section 702 as a vital tool for fighting national and cybersecurity threats, the legislation has come under fire. Privacy advocates believe the bill doesn’t have enough safeguards. Others think it might expand the US government’s surveillance powers and could ultimately be exploited. If you want to read more about the bill, check out a one-page summary here.


Have a story tip? Message me at: cabe(at)element14(dot)com


Sierra Wireless has just launched its newest offering in the IoT space. The MangOH Red is a smaller, more compact board than its older brother, the MangOH Green. Aimed at being used in an end product rather than in development, the board resembles the footprint of the Raspberry Pi. With onboard Bluetooth and WiFi, the MangOH Red is ready to be used in any IoT application. Still standard are the CF3 modules with their on-chip cellular connectivity and GLONASS and GPS positioning capabilities. The CF3 module cellular options include 3G, 4G LTE, and LTE-M1/NB-IoT modules.


MangOH Hardware

The MangOH Red has a notably different hardware setup from the MangOH Green. Being more compact, the MangOH Red has one CF3 slot and one IoT expansion card slot. Because the MangOH Red has onboard Bluetooth and WiFi, there is an onboard antenna as well as a u.FL connector that allows an external antenna to be attached for these services. The other previously supported antenna connections (cellular, GLONASS, and diversity) are all still provided. No longer provided onboard are Ethernet, RS-232, and the Arduino shield connector; for users who may miss these, they are still available via IoT expansion cards. The debugging interface has been made simple with a micro USB connector.


New to the MangOH Red are pressure, light, and temperature sensors. These new sensors, along with the IMU, give the board spatial awareness right out of the box. Also new to the MangOH Red is a Raspberry Pi HAT connector, which allows the more complex and capable boards designed for the Raspberry Pi to be used with the MangOH Red. Built-in battery charging and monitoring circuitry allow a rechargeable battery to be added. With this setup, Sierra Wireless has made a true IoT board that is ready to be deployed anywhere monitoring is needed.


Unboxing and Setup

The MangOH Red comes in a neatly packed box with everything you need to get started. One big improvement over the MangOH Green is the inclusion of a universally compatible SIM card; with 100MB of data, this is enough to get anyone started with the demos and basic applications. Setting up the MangOH Red is quick and easy. The WP module is slipped into the module holder and the cover snapped closed. After connecting the cellular antenna, all that is left to do is connect the USB cables, which provide power and access to the console. While it is possible to provide power from either USB cable, having access to both the console and the CF3 module via SSH is useful.


The documentation for the MangOH Red has been revised and updated from the MangOH Green, producing a clearer and more concise set of documents. The initial setup time, from out of the box to getting the demos running, has been reduced with the aid of better step-by-step instructions. The “MangOH Red Setup Guide” is especially helpful in getting the system set up and performing its first data logging to the cloud.


After everything has been connected, the hardware is ready to be used. Upon powering up the system, you will need to work through the getting started guide, which will set up the environment on your PC as well as install the latest applications on the MangOH Red. The only issue encountered was a change in RSA key, which the command line explained how to resolve.


Once done, completing the installation is easy and smooth. The rest of the getting started guide follows the same well-explained step-by-step paradigm. As the rest of the setup is self-explanatory, we’ll move on to the structure of the Legato software used by the MangOH.


MangOH Software - Basic Structure

The MangOH boards use the Legato framework as the basis for their software. The Legato framework provides many APIs to take care of simple tasks as well as to simplify the more complex tasks that can be performed with the MangOH boards. The framework, while well thought out and logically ordered, can take some time to get used to for those just starting out. The basic file structure, as well as the chain between variables and peripherals, is explained below.


Basic organization of the Legato file structure


The first folder (the application folder) acts as a container for the application and is usually named after the application. This folder contains the application definition file as well as the component folders. The application definition file (adef) lets the compiler know what components are used in the application and what peripherals are required. The adef also binds external hardware or devices to internally used variables.


Application Definition File (ADEF)

As a very simple example, we will look at the heartbeatRed application, which uses very few resources and has only one component; its adef is shown below. Starting from the top, the executables section defines what code should be run in this application; in this snippet, it is the heartbeatComponent. Since a component can be run with multiple instances, each instance is given a unique name; in the code below there is only one instance, named heartbeat. Now that the instance has been named, we let the system know under processes that we would like this instance to be run, by placing the instance name heartbeat in the run subsection. This will start the application when the system loads (provided we have put “start: auto” on line 3 rather than “start: manual”). If it is set to manual, you will need to run: app start heartbeatRed. Lastly, there is the bindings section, which links external devices (ports, files, etc.) to variables the software can use. In the code below, we would like to be able to control pin 34, the onboard LED. To accomplish this, the variable mangoh_led, which is found in the heartbeatComponent and is part of the heartbeat executable, is connected through the Legato GPIO service (gpioService) to the specified pin.


sandboxed: true
version: 1.0.0
start: auto

executables:
{
    heartbeat = ( heartbeatComponent )
}

processes:
{
    run:
    {
        ( heartbeat )
    }

    faultAction: restart
}

bindings:
{
    heartbeat.heartbeatComponent.mangoh_button -> gpioExpanderServiceRed.mangoh_gpioExpPin14
    heartbeat.heartbeatComponent.mangoh_led -> gpioService.le_gpioPin34
}


heartbeatRed adef file as found in the heartbeatRed application folder

Component Definition File (CDEF)

Now that we have shown the compiler what components are to be included and what devices are needed, and provided a handle for components to access them, let's look at the file that explains how the component is put together. The component definition file (cdef) explains how the various files are integrated as well as what source files the component needs to be correctly compiled.

As mentioned for the adef, we would like to have access to peripherals, and so we have linked variables to them in the adef. In the cdef we now connect these variables to an API that lets us manipulate and interact with the hardware or service components. This is done in the requires section by listing the variable in the api subsection and linking it to the required API. In this code snippet we need access to the GPIO API, so the mangoh_led and mangoh_button variables are linked to le_gpio.api. The other section in this snippet lists all the source files needed by the component to function correctly.






requires:
{
    api:
    {
        mangoh_button = ${LEGATO_ROOT}/interfaces/le_gpio.api
        mangoh_led = ${LEGATO_ROOT}/interfaces/le_gpio.api
    }
}

sources:
{
    heartbeat.c
}

Component.cdef file as found in the heartbeatComponent folder

Source Code

Let's now have a quick look at the source file that makes up this component and controls how the LED behaves. Below is the full source code for the component; it is the file listed in the cdef and is used to turn the onboard LED on and off. The first thing to note is the inclusion of both legato.h and interfaces.h. The first allows us to use any of the Legato header files needed by the component; all Legato programs will use some Legato header. The second, interfaces.h, links in the auto-generated header file from the cdef.


Moving further down the code, we see in the function LedTimer a call to mangoh_led_Deactivate; this function is created through the bindings section of the .adef file. In essence, the variable mangoh_led, declared in the cdef and linked to hardware in the adef, is being used with the API: we are saying that the pin bound to mangoh_led should be turned off with the Deactivate call. The same principle applies to the other variables in the code that use the Legato APIs. The next function, ConfigureGpios, sets the pin with the LED attached as an output. If this fails, the Legato API is used to send a message to the system log using LE_FATAL_IF. This ability is set in the cdef under envVars and allows the system to log messages at the info level and lower.

The last and most important part of the C source file is COMPONENT_INIT. This is similar to main() in C programs, but because there is no main() in Legato applications, we need a different entry point: COMPONENT_INIT is that entry point. It is important to note, though, that unlike a main function, this function must return; if COMPONENT_INIT does not return, the rest of the application will not run. In this specific COMPONENT_INIT function, a timer instance is created to control the interval between turning the LED on and off. After the instance is created, various parameters for the timer (its period, whether to repeat, and its handler) are set. Lastly, the GPIOs are configured using the previously created function, and the timer is then started. After all this is done, COMPONENT_INIT exits and control is handed back to the Legato framework.



/**
 * @file
 *
 * Blinks the user controlled LED at 1Hz. If the push-button is pressed, the LED
 * will remain on until the push-button is released.
 *
 * <HR>
 *
 * Copyright (C) Sierra Wireless, Inc. Use of this work is subject to license.
 */

#include "legato.h"
#include "interfaces.h"

#define LED_TIMER_IN_MS (1000)

static bool LedOn;

static le_timer_Ref_t LedTimerRef;

/**
 * Toggle the LED when the timer expires
 */
static void LedTimer(le_timer_Ref_t ledTimerRef)
{
    if (LedOn)
    {
        mangoh_led_Deactivate();
        LedOn = false;
    }
    else
    {
        mangoh_led_Activate();
        LedOn = true;
    }
}

/**
 * Turn the LED on and disable the timer while the button is pressed. When the button is
 * released, turn off the LED and start the timer.
 */
static void PushButtonHandler(bool state, void *ctx) //< true if the button is pressed, context pointer - not used
{
    if (state)
    {
        LE_DEBUG("turn on LED due to push button");
        le_timer_Stop(LedTimerRef);
        mangoh_led_Activate();
        LedOn = true;
    }
    else
    {
        LE_DEBUG("turn off LED due to push button");
        mangoh_led_Deactivate();
        LedOn = false;
        le_timer_Start(LedTimerRef);
    }
}

/**
 * Sets default configuration: LED D750 on
 */
static void ConfigureGpios(void)
{
    // Set LED GPIO to output and initially turn the LED ON
    LE_FATAL_IF(mangoh_led_SetPushPullOutput(MANGOH_LED_ACTIVE_HIGH, true) != LE_OK,
                "Couldn't configure LED GPIO as a push pull output");
    LedOn = true;

    // Set the push-button GPIO as input
    LE_FATAL_IF(mangoh_button_SetInput(MANGOH_BUTTON_ACTIVE_LOW) != LE_OK,
                "Couldn't configure push button as input");
    mangoh_button_AddChangeEventHandler(MANGOH_BUTTON_EDGE_BOTH, PushButtonHandler, NULL, 0);
}

COMPONENT_INIT
{
    LedTimerRef = le_timer_Create("LED Timer");
    le_timer_SetMsInterval(LedTimerRef, LED_TIMER_IN_MS);
    le_timer_SetRepeat(LedTimerRef, 0);
    le_timer_SetHandler(LedTimerRef, LedTimer);

    ConfigureGpios();

    le_timer_Start(LedTimerRef);
}
heartbeat.c source file as found in the heartbeatComponent folder


While this was a rather simple and easy-to-follow demonstration, it outlines the most important parts of setting up a Legato application. The most complex and important part of this example is how variables are linked to the device or hardware they control: the adef links the device to a variable name, and the cdef then links this to an API which the source code can use to interact with and manipulate the device.


I am planning to release another blog post shortly that will explain the more complex redSensorToCloud application, which has multiple components and uses Linux drivers for some of the peripherals.







Original blog entry here:

MangOH Red Launch and Legato Framework

More pictures here:

The Embedded Shack

Upcoming reviews and info here:


German automotive company Daimler got permission to publicly test self-driving trucks on Oregon and Nevada highways. These autonomous trucks are coming to a highway near you. (Photo from Daimler)


It takes a lot of patience, endurance, and skill to be a long-haul truck driver. But soon all of that may not be needed if autonomous trucks become a reality. Recently, German automotive company Daimler revealed plans to drive digitally connected trucks on public highways in Oregon and Nevada. This practice, known as platooning, groups trucks enabled with smart technology and advanced driving systems together so they can communicate with one another, which maximizes efficiency and is supposed to make things safer behind the wheel.


Daimler Trucks North America gained permission to run public platooning tests after a successful trial in Madras, Oregon. Tests with fleet customers are scheduled to begin next year. So how do they plan to connect the trucks? They intend to combine connectivity tech with automated driving tech. The system uses WiFi-based truck-to-truck communication that interacts with a driver assistance system equipped with adaptive cruise control, lane departure warning, and active brake assistance. The benefits, aside from increased efficiency, include fuel savings, since platooning lowers drag, and improved safety through shorter braking distances.


The company previously experimented with the technology in 2016 during a Europe-spanning challenge to prove autonomous trucks were capable of driving across an entire continent. It may sound like the company is looking to replace human truck drivers, but they assure us this isn’t the case. In a press release, president and CEO of Daimler Trucks North America Roger Nielsen said the technology is meant to help drivers, not replace them. They place a big focus on safety, saying that the automated trucks react to traffic in three-tenths of a second, compared to the more than one second it takes a human.


Daimler isn’t the only automotive company with platooning on its radar. Earlier this year, Toyota and Volkswagen started three-truck convoy tests. And, if you couldn’t guess, Tesla is investigating the technology too. Back in August, it was revealed that Elon Musk is working on a prototype for a self-driving, electric semi-truck that would move in platoons. Since Tesla wants to create a fully autonomous car, it’s no surprise the company wants to bring this technology to semi-trucks.


While the technology sounds interesting, do we really need self-driving trucks? Yes, they’re supposed to be safer, but many are still not convinced that they can perform better than a human behind the wheel. And with the recent car accidents involving self-driving cars from Uber and Tesla, it’ll be a while before people are fully on board with autonomous vehicles.


I fear there will be many accidents before these self-driving trucks will be acceptable. Kind of scary.


Have a story tip? Message me at: cabe(at)element14(dot)com


iZotope’s Spire Studio is a compact device featuring different recording tools to easily create quality demos. Avoid recording demos in a studio with this compact device. (Photo from iZotope)


I don’t always write about complete products, especially refined consumer products. This is one of those. But this one represents something important… replacing expensive industry products and services is a way forward. This one, in particular, gives anyone the ability to make quality music on the cheap.


Being a musician today can be both a blessing and a curse with the endless technology available to use. Just choosing which tool is right for you can be overwhelming. One company aims to solve this problem with a new device called Spire Studio. Created by audio company iZotope, who have been making recording equipment for 16 years, Spire Studio is a compact device that has the features and tools you need to easily capture songs.


Looking like a close cousin of Amazon’s Alexa, this device has a built-in “pro-level” mic, along with inputs for external microphones, and even includes audio effects like reverb and delay for more professional-sounding demos. All you do is connect it to your phone, and you’re ready to go. Once you download the companion app, there are even more options to choose from. Think of the Spire Studio app as a quick-and-dirty version of Pro Tools. It isn’t a full-fledged workstation, but it offers a stripped-down experience with basic elements and filters that can be applied to multiple layers of a song.


With Spire Studio, you can use its eight tracks to overdub and record on your own, or send tracks along to bandmates who own their own Spire Studio. This makes collaborating easy since you don’t even need to be in the same location. And if you decide you need more sophisticated tools to get the sound just right, the tracks can be exported to various audio software, like Pro Tools and Logic.


It seems so simple after reading the specs. I keep asking myself, why didn't I try to make this? This happens all too often these days.


The device is meant for both professionals and beginners looking to record a good demo. Those with years of experience can fiddle around with Spire Studio’s various multi-layering tools, while beginners can get a feel for what they’re doing. Keep in mind, this isn’t meant to give you high-quality albums ready to share with the world. Instead, it’s a simplified means of getting ideas down and recording demos – an improvement over using voice apps for recording. If anything, it can help you get started on a larger project.


Spire Studio launches this fall and costs $349. If you’re serious about music and don’t want to spend money hitting up a studio to lay down a demo, this device could be what you’re looking for.


The real project… getting your music into peoples’ ears.


Have a story tip? Message me at: cabe(at)element14(dot)com

An Artificial Neural Network (ANN) consists of thousands of interconnected artificial neurons sequentially stacked in layers. The layered ANN is one of the keys to Machine Learning (ML): the labeled data used to feed the ANN enables it to learn how to interpret that data like a human (and sometimes better); this is called Supervised Learning.


One of the simplest types of ANN is the Feedforward Neural Network (FNN). An FNN moves data in only one direction, with no loops or cycles: data travels from the input nodes (through hidden nodes, if any) to the output nodes. The Convolutional Neural Network (CNN) is a class of FNN inspired by the organization of the animal visual cortex and the connectivity pattern between its neurons.
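To make the one-direction flow concrete, here is a minimal Python sketch of a forward pass through a tiny FNN. The layer sizes and weight values are made up purely for illustration; a trained network would have learned them from data:

```python
import math

def sigmoid(z):
    """Squash a node's weighted sum into the (0, 1) range."""
    return 1.0 / (1.0 + math.exp(-z))

def feedforward(x, layers):
    """One forward pass: data flows input -> hidden -> output, never backward."""
    for weights, biases in layers:          # each layer transforms the data once
        x = [sigmoid(sum(w * xi for w, xi in zip(ws, x)) + b)
             for ws, b in zip(weights, biases)]
    return x

# 2 inputs -> 2 hidden nodes -> 1 output node, with arbitrary fixed weights
hidden = ([[0.5, -0.4], [0.3, 0.8]], [0.1, -0.2])
output = ([[1.2, -0.7]], [0.05])
print(feedforward([1.0, 0.0], [hidden, output]))
```

Note that the loop only ever feeds each layer's output forward into the next layer; there is no path by which a later layer's result flows back, which is exactly what distinguishes an FNN from recurrent architectures.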


Convolutional Neural Networks are commonly used to analyze visual imagery: recognizing objects or scenes, and performing object detection and segmentation. Compared to other image classification algorithms, a CNN requires little pre-processing and is largely independent of prior knowledge: it learns the filters that in traditional algorithms were hand-engineered, which also saves human effort.


Filters can start with very simple features (like checking brightness or identifying edges) and increase in complexity as the layers progress (like defining unique characteristics of the object). Filters are applied to every training image at different resolutions, and the output of each convolved image is used as the input to the next layer. A CNN can have hundreds or thousands of layers, each of which can learn to detect different features of the image.
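A single convolutional filter is just a small grid of weights slid across the image. The plain-Python sketch below uses a hand-picked vertical-edge kernel (in a CNN these weights would be learned, not chosen) to show how an edge filter responds only where brightness changes:

```python
def convolve2d(image, kernel):
    """Slide a small filter kernel over a 2-D image (valid padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            # Weighted sum of the image patch under the kernel
            out[i][j] = sum(image[i + m][j + n] * kernel[m][n]
                            for m in range(kh) for n in range(kw))
    return out

# A vertical-edge kernel: responds where brightness changes left to right
edge_kernel = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]

# 3x5 image: dark on the left, bright on the right
image = [[0, 0, 0, 9, 9]] * 3

print(convolve2d(image, edge_kernel))  # -> [[0, 27, 27]]
```

The flat dark region produces zero response, while the dark-to-bright boundary produces strong positive values; deeper layers combine many such filter responses into more complex features.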


There are two main ways to train a CNN to perform object recognition, and using MATLAB® and the Neural Network Toolbox eases both:
1. From scratch. This requires defining the number of layers and filters, the learning weights, and other parameters, and it demands massive amounts of data (millions of samples).
2. Using a pre-trained model. This automatically extracts features from a new data set without requiring huge amounts of data, long computation, or training time. Using a pre-trained model this way is called Transfer Learning.




Machine Learning (ML) is the science of how computers improve their perception, cognition, and action with experience; in other words, ML teaches computers to learn from experience, using adaptive algorithms combined with computational methods to learn information directly from data without relying on hand-written code as a model. Machine Learning is about how computers can act without being explicitly programmed. As a field of Artificial Intelligence (AI), ML improves computers' performance through data, knowledge, experience, and interaction.


Machine Learning started with two breakthroughs:
1. Arthur Samuel's pioneering work on computer gaming and AI, which made it possible for computers to learn by themselves instead of being instructed in everything they need to know and how to do tasks.
2. The growth of the Internet over the past decades, which has made huge amounts of digital information available for analysis.


Engineers realized that it was far more efficient to code computers to think and understand the world as humans do, giving them access to all the information available on the internet and letting them learn, while keeping the innate advantages computers hold over humans: speed, accuracy, and lack of bias.

Much of Machine Learning happens through Neural Networks —computer systems designed to recognize and classify information the way a human brain does. A Neural Network essentially works on probabilities, making statements, decisions, or predictions with a degree of certainty based on the data it is fed. By adding a feedback loop, the computer can modify its future approach after being told, or sensing, whether its decision was right or wrong; this is what allows the “learning”.
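As a toy illustration of that feedback loop (a sketch, not a production example), a single artificial neuron can learn the logical AND function in plain Python: it predicts, is told whether it was right or wrong, and nudges its weights accordingly.

```python
# A minimal neuron with a feedback loop: predict, compare with the
# correct answer, and adjust the weights. Illustrative sketch only.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            prediction = 1 if (w[0] * x1 + w[1] * x2 + bias) > 0 else 0
            error = target - prediction      # the "right or wrong" feedback
            w[0] += lr * error * x1          # modify the future approach
            w[1] += lr * error * x2
            bias += lr * error
    return w, bias

# Logical AND: the output is 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
```

After a few passes over the data, the weights settle and the neuron answers all four cases correctly; real networks do the same thing with millions of weights.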


ML helps to generate insights, make better decisions, and develop predictions. As computers outperform humans at counting, calculating, following logical yes/no algorithms, and finding patterns, Machine Learning is recommended when you have a complex task or problem involving a large amount of data and many variables, but no existing formula or equation.


Machine Learning nowadays has become relevant in these areas:

  • Automotive: self-driving cars, motion and object detection, predictive maintenance...
  • Data security:  cloud access patterns, anomaly identification, security breaches prediction...
  • Personal security: ID processing, security screenings, face recognition...
  • Finance: stock market predictions, pricing and load forecasting, credit scoring, fraud detection...
  • Healthcare: tumor detection, drug discovery, DNA sequencing, human genome mapping...
  • Natural Language Processing (NLP): speech recognition, translation...
  • Marketing: profile personalization, recommendations, online search...


Machine Learning uses two types of techniques:

1. Supervised Learning: It makes predictions based on evidence in the presence of uncertainty.
Supervised Learning takes a known set of input data and known responses to that data, and trains a model to generate reasonable predictions for the response to new data. The inner relations of the processed data can be uncertain, but the expected output of the model is known.

Supervised Learning uses these two techniques to develop predictive models:
- Classification: it predicts discrete responses. Its use is recommended when the data can be tagged, categorized, or separated into specific groups or classes.
Common Classification algorithms are support vector machine (SVM), boosted and bagged decision trees, k-nearest neighbor, Naïve Bayes, discriminant analysis, logistic regression, and neural networks.
- Regression: it predicts continuous responses. It can be used when working with a data range or when the response is a real number.
Common Regression algorithms are linear/nonlinear models, regularization, stepwise regression, boosted and bagged decision trees, neural networks, and adaptive neuro-fuzzy learning.
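Of the classification algorithms listed above, k-nearest neighbor is simple enough to sketch in a few lines of plain Python (an illustrative toy, not any particular toolbox implementation): a new point is assigned the majority class among its k closest labeled points.

```python
from collections import Counter

def knn_classify(train, point, k=3):
    """Classify `point` by majority vote among its k nearest neighbors.
    `train` is a list of ((x, y), label) pairs."""
    by_distance = sorted(
        train,
        key=lambda item: (item[0][0] - point[0]) ** 2
                         + (item[0][1] - point[1]) ** 2,
    )
    votes = Counter(label for _, label in by_distance[:k])
    return votes.most_common(1)[0][0]

# Two tagged groups, as in the classification use case described above.
train = [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"),
         ((8, 8), "B"), ((8, 9), "B"), ((9, 8), "B")]

label = knn_classify(train, (2, 2))   # the query sits next to the "A" cluster
```

The "training" here is just storing the labeled examples; all the work happens at prediction time, which is why k-NN is often the first classifier people try.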

2. Unsupervised Learning: It finds intrinsic structures or hidden patterns in data.
Unsupervised Learning is used to draw inferences from datasets consisting of input data without labeled responses. Even the result can be unknown: there may be relations within the processed data, but they are too complex to guess (once the data is normalized, the algorithm itself may suggest ways to categorize it).

Clustering is the most common unsupervised learning technique. It is used for exploratory data analysis to find hidden patterns or data groupings.
Common Clustering algorithms are k-means and k-medoids, hierarchical clustering, Gaussian mixture models, hidden Markov models, self-organizing maps, fuzzy c-means clustering, and subtractive clustering.
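As a rough illustration of clustering (a bare-bones sketch, not a library implementation), k-means alternates between assigning each point to its nearest centroid and moving each centroid to the mean of its assigned points:

```python
import random

def kmeans(points, k=2, iters=10, seed=0):
    """Bare-bones k-means on 2D points: assign each point to its nearest
    centroid, then move each centroid to the mean of its members."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: (p[0] - centroids[i][0]) ** 2
                              + (p[1] - centroids[i][1]) ** 2,
            )
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster goes empty
                centroids[i] = (sum(m[0] for m in members) / len(members),
                                sum(m[1] for m in members) / len(members))
    return centroids, clusters

# Two obvious groupings; no labels are ever provided.
points = [(1, 1), (1, 2), (2, 1), (8, 8), (9, 8), (8, 9)]
centroids, clusters = kmeans(points)
```

Note that the algorithm was never told there were two groups of three; it discovered that structure from the input data alone, which is the essence of unsupervised learning.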



The most common challenges related to Machine Learning are associated with:

  • Data management: as data can be incomplete, multi-formatted, or with several shapes and sizes. Different types of data require different approaches and specialized knowledge and tools.
  • Data model: it has to fit the data. Flexible models can overfit the data by modeling minor variations that may just be noise, while simple models might assume too much.




How to build a smart controller for drip irrigation and automatic plant watering.

There are few areas of the globe that don’t have to be concerned with water shortages, and few agricultural centres, large or small, that don’t need to concern themselves with efficiently delivering regular watering to their crops. There are few farmers, homesteaders, or amateur gardeners who would say no to a drip irrigation system that effectively watered their carefully nurtured plants and did not waste valuable water.

By 2025, an estimated 1.8 billion people will live in areas with water scarcity. Two-thirds of the world’s population will live in water-stressed regions. (Source: National Geographic).


What if you could have not only drip irrigation, but also the ability to monitor and control water usage across your crops, reacting to weather conditions in real time, with just the right amount of water, through a cloud-based, Android-compatible application? What if the circuit could also control the addition of fertilizer, remotely?


Drip irrigation is growing in popularity among small and large agricultural growers. Through drip irrigation, water is delivered directly to the root zone, preventing runoff and avoiding watering bare soil areas.

Drip irrigation is thought to be 90-95 per cent efficient versus 30-60 per cent for sprays and rotors.


Drip irrigation also improves soil aeration to aid plant root growth. Watering can be targeted to plant roots, or above plant, to those crops which respond well to overhead watering. Drip irrigation saves water, money and time, reducing the need for manual labour for watering processes.

Though drip irrigation provides a watering solution and conserves water, there has been research suggesting that plants consume additional water via drip irrigation. Though it’s also safe to say that plants which consume more water overall deliver higher yields, it’s another reason to ensure that drip irrigation is truly water-saving and productive.

What if there is a solution for water saving, cloud controlled, drip irrigation suitable, and accessible for nearly any farmer?

There is – by using this DIY smart controller for cloud controlled drip irrigation, and plant watering!


Onto building the smart controller…


What were the problems we needed to solve?

  • Irrigate, but prevent water wastage due to excessive evaporation.
  • Not all plants need the same amount of water, so we should be able to water accordingly.
  • Compensate for hot days by increasing the quantity of water for the day.
  • Monitor water usage to understand seasonal trends for water usage.


We solved these problems with our smart controller for drip irrigation by:

  • Reducing the amount of water needed for irrigation, by using an automatic drip irrigation system with a cloud interface for scheduling early morning watering cycles.
  • Splitting the farm into multiple zones, and utilising an Android interface for the supervisory controls logic, to water each zone based on a separate schedule.
  • Utilizing an on-board temperature sensor to log data to the cloud, and use this for understanding temperature trends. Using weather data to preview conditions for the next few days.
  • Pioneering the use of drip irrigation at the Michigan Urban Farming Initiative for collecting farm data, for analysis.

We created a plant watering system, controlled remotely by an Android application, for water scheduling, data logging, and system monitoring.
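To give a flavour of the supervisory logic, here is a hypothetical Python sketch (the real logic lives in the Android app, and the zone windows below are invented): each zone's valve opens when the current time falls inside that zone's watering window.

```python
from datetime import time

# Hypothetical schedule: each zone waters in the early morning, on its
# own window, to limit evaporation losses.
SCHEDULE = {
    "zone1": (time(5, 0), time(5, 30)),
    "zone2": (time(5, 30), time(6, 0)),
    "zone3": (time(6, 0), time(6, 30)),
}

def valves_to_open(now, override=None):
    """Return the zones whose valves should be open at `now`.
    `override` mimics the app's manual-override button."""
    if override is not None:
        return [override]
    return [zone for zone, (start, end) in SCHEDULE.items()
            if start <= now < end]

open_valves = valves_to_open(time(5, 15))   # only zone1 is in its window
```

Because the schedule is just data, pausing irrigation before forecast rain amounts to editing this table from the cloud rather than driving to the farm.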

So how will it help, really?

Having a drip irrigation system, controlled by an application via the cloud, gives you remote control of watering from any internet-connected location. If your irrigated fields are any distance from your home or base, or you’re simply away from your farm, a system like this lets you react in real time. If rain is forecast, you can pause irrigation until it’s needed again without needing to be on-site. You can adjust the scheduling of your irrigation remotely to meet weather conditions and growing needs.


It’s not just about control, though: you can monitor water usage to manage your water source and any related costs. If water is scarce, you can make sure every drop counts, scheduling for the best times of day and manually overriding the system if some maintenance work needs to be done. Water is scarce, no matter where you are in the world. Even those of us lucky enough to have water on tap 24/7 should make every effort to conserve EVERY drop.




Automatic Drip Irrigation at the Michigan Urban Farming Initiative


“We used to just switch on the water and let it run for long time. Lots of water was wasted, and we weren’t sure if all the plants were getting enough water.” - Jeffery Pituch, MUFI


As a part of the volunteering effort, we wanted to get a drip irrigation network up and running at the Michigan Urban Farming Initiative. And, not just a regular network, but a smart internet controlled one! The work on the farm began in Fall 2016, and we currently have a 0.5-acre farm under drip irrigation. The goal is to have the full one acre farm under the drip irrigation network before the notorious Michigan winter sets in. Winter is coming, and there isn’t much time!

The Michigan Urban Farming Initiative is an all-volunteer 501(c)(3) non-profit organization that seeks to engage members of the community in sustainable agriculture. It is based in Detroit's North End community. The primary focus of MUFI is the redevelopment of a three-acre area in Detroit's North End, which is being positioned as an epicenter of urban agriculture. You can find more information on their webpage here:


                             Farm photos: May 2016 and Oct 2016.


How was the control system setup?

Our goal was to get a drip irrigation network up and running at the Michigan Urban Farming Initiative. Apart from the drip tapes, and pump setup, we also came up with a controller for scheduling the farm watering. In this section, I will give you a few more details about the controls architecture. The controller is a distributed control system and has four main components:

  • Arduino Uno and circuit board
  • Android Phone/Tablet as a part of the controller
  • ThingSpeak IoT interface
  • Android App on user’s device for remote operation


Here is a high-level diagram to give a little bit of perspective:


Arduino Uno:

  1. The Uno, a versatile open-source prototyping platform, was chosen for the low-level interaction with the solenoid valves and water pump.
  2. The Uno interfaces with the Android phone using Bluetooth. It receives the switch-on and switch-off signals from the phone, and periodically sends sensor data back.
  3. For the hardware connections and the sensors, I also created a small PCB for a cleaner look.
  4. Since cost was a concern, further improvements were made to reduce the cost of the design, including a 50% size reduction of the PCB layout. A few of the “instructables” that were extremely helpful to me are listed below:
    a. Use of an ATmega328 instead of the full Arduino Uno to save costs. Check out this tutorial: Instructable tutorial: How to make Arduino from scratch
    b. Use of a separate PCB for the SSR board: Instructable: Build a Solid State Relay board
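The exact Bluetooth message format isn't documented here, so purely as a hypothetical illustration (in Python; the real firmware is Arduino code and the command names are invented), the phone-to-Uno exchange could be as simple as short text commands like "V1:ON":

```python
# Hypothetical command format for the phone -> Uno Bluetooth link:
# "V1:ON" would open solenoid valve 1, "P:OFF" would stop the pump.
def parse_command(msg):
    """Return (device, is_on) from a command string, or None if invalid."""
    try:
        device, state = msg.strip().upper().split(":")
    except ValueError:
        return None
    if state not in ("ON", "OFF"):
        return None
    return device, state == "ON"
```

Keeping the link protocol this dumb means the Uno only ever switches relays on command, while all the scheduling intelligence stays on the phone.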


Android Phone/ Tablet:

(Any Android phone/tablet will do. We used the Samsung Galaxy Core Prime.)

  1. The phone runs an Android app, and has the scheduling logic for the three farm zones.
  2. The phone is responsible for interfacing with ThingSpeak Cloud, and displaying controller status, and other farm data to the user.
  3. It also has an override button for manually operating the pump, and solenoid valves. This comes in handy if you want to do some maintenance work, or manual mode operation.

ThingSpeak Cloud Interface:

  1. This is the key component for ensuring that the system can be remotely operated.
  2. The ThingSpeak channels serve as the data store for all the scheduling data. And ThingSpeak is a free service, which is great!
  3. The Android App syncs this data periodically, so that the controller gets the updated information, in case a user has modified the schedule.
  4. Data becomes powerful when you can extract meaningful insights from it. Apart from the data storage, ThingSpeak also allows you to add custom MATLAB code that can be run on the cloud. This way, we can run analysis algorithms in the cloud itself, based on the farm data.
  5. Currently, just to test things out, I am logging the farm temperature and calculating the GDD, i.e. Growing Degree Days, for the farm.
  6. Based on this information, or just the logged temperature, we can also modify the schedule, or react to an event using one of the many ThingSpeak apps. I haven’t started using this yet, because we would like to collect data first, and then start doing the controls. We think that “Diagnose before you prescribe” is a good practice to follow.
  7. Another nice thing about the ThingSpeak service is that you can make the channels either private or public. For example, for the first zone of the farm, I have made the data public and you can view the water schedule and the temperature profile here.
  8. I have described only a small subset of the capabilities that I am using. But MathWorks has some great examples for you to get started on the ThingSpeak page.
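Growing Degree Days are conventionally computed from each day's temperature extremes: GDD = max(0, (Tmax + Tmin)/2 − Tbase). A small Python sketch of the calculation (the 10 °C base temperature and the week of readings are assumptions; crops differ):

```python
def growing_degree_days(t_max, t_min, t_base=10.0):
    """Daily GDD: average of the day's high and low minus the crop's
    base temperature, floored at zero (degrees Celsius)."""
    return max(0.0, (t_max + t_min) / 2.0 - t_base)

# Accumulate over a week of logged farm temperatures (invented values).
week = [(24, 12), (27, 15), (19, 9), (30, 18), (22, 11), (16, 6), (25, 13)]
season_total = sum(growing_degree_days(hi, lo) for hi, lo in week)
```

Running this kind of accumulation in the cloud over the logged channel data is exactly the sort of analysis the MATLAB code on ThingSpeak can do.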



Android App for remote operation:

  1. The Android app was created using Android Studio.
  2. We built an Android version because we use the exact same front end for the controller and for the user. More information and a snapshot of the app are in the timeline section below.



A brief timeline of the project in pictures

  • Oct 2015 - First “Lab” test: Back in the day, the project started in an elaborate in-house test facility:



  • Nov 2015 - Field test 1: Followed by a small proof of concept layout in a section of the farm. This is at the Michigan Urban farming facility in Detroit’s North End:



  • April 2016 - Planning: The reason we started small was so that we could test various drip fittings and understand the pros and cons of different types of fittings. Going through this exercise also allowed us to get some hands on experience of working on a real farm and laying down the drip network!


  • May 2016 - Farm setup: Once we locked down the parts, it was time to get half of the farm set up. We had to ensure that the sections were of similar size. “Measure twice, cut once” was put into effect, and the results were well worth the effort.



  • June 2016 - Field Test 2: With the hardware section complete in early summer of 2016, it was time to bring out the star attraction – the controller! We tested out the first version of the controller in the field with the partial drip network (0.25 acre) in place. While there were some issues with the pump sizing, the overall result was a huge success when the control system worked as expected. We had underestimated the total pressure loss in the system, and had to get a slightly larger pump to ensure that the farthest end of the farm also received adequate water. A visit to the local hardware store fixed that.


                    Abhishek Bhat and Jeff Pituch on farm



  • Sept 2016 - Controller electronics: The PCB layout was created using Fritzing. This is by far the simplest software I have used. While this tool isn’t meant for heavy-duty PCB design, it worked for my application. With that out of the way, the only thing left to do was get the PCB printed, and set it up in a fancy box:


          Controller with waterproof enclosure                                              PCB + solid state relays               



  • June 2016: Ongoing - Android App Interface: The controller version of the android app is used with the PCB, while the user version is intended to be installed on a user’s phone/tablet:




  • Dec 2016 - Cloud connectivity with ThingSpeak: ThingSpeak is an open IoT platform with MATLAB analytics.




  • May 2017 - Smaller PCB: New iteration of the controller with 3d printed enclosure and a smaller PCB




  • June 2017 – Installation: The system has been installed on the farm, and will be extended to the remaining 0.5 acres when the groundwork and laying down of the drip tapes is completed.





Currently, we are working on converting the remaining 0.5 acres to drip irrigation. By the end of the setup, we will also share the PCB layout, the 3D printed enclosure design, and the Android App online for others to recreate the setup. The goal is to ensure that similar projects can be undertaken in other communities.


There is a video that will be released soon that shows the full system and project in action. Stay tuned. If you would like more information, reach out in the comments section, and I can share my experiences with you.

Artificial Intelligence (AI) is the science and engineering of imitating cognitive functions to create intelligent machines —using smart computer programs that react and carry out tasks normally done by people. AI simulates the capacity for abstract, creative, and deductive thought, as well as the ability to learn. Basically, AI aims for computers to do "smart" things.


Alan Turing, an English mathematician who became an AI theorist, argued that AI was best researched by programming computers rather than by building machines. In 1950 he wrote Computing Machinery and Intelligence, in which he discussed the conditions for considering a machine intelligent, arguing that if the machine could successfully pretend to be human to a knowledgeable observer, then you certainly should consider it intelligent; this is now known as the “Turing test”.


Artificial Intelligence research has focused mainly on these components of intelligence, by using the binary logic of computers: Learning, Reasoning, Knowledge, Problem-solving, Perception, Planning, Moving & Manipulating Objects, and Language-understanding.


Nowadays, the research of AI has two branches:

- Generalized AI, which develops machine intelligences that can perform any task, just as people do. This AI simulates how the human brain works; research on it is currently slow because it requires a more complete understanding of the brain and more computing power than is currently available. Neuromorphic Processors, a new generation of computer chip technology, are being designed to efficiently run brain-simulator code. In parallel, scientists are developing computer systems (like IBM’s Watson) that use high-level simulations of human neurological processes to perform a broader range of chores without being specifically taught how to do them.

- Specialized AI, which uses principles of simulating human thought to carry out one specific task. Specialized AI is already providing breakthroughs in physics, medicine, finance, marketing, manufacturing, telecommunications, and transportation (self-driving and autonomous cars).





This new sensor by Carnegie Mellon plugs into an electrical outlet, keeps track of home activity, and assigns various signatures to different objects and functions. All it takes is this chip to transform your home into a smart house (Photo via Carnegie Mellon)


Even before The Jetsons graced our television screens, we dreamed about living in a smart house. This goal is now actually within reach. Devices like the Amazon Echo and Google Home make it possible to control certain functions of your house, but they can be expensive. So what’s your next option? Retrofitting all your older appliances with sensor tags, which is just time-consuming.

But before long there may just be a third option that promises to be simple. Researchers at Carnegie Mellon created a hub that plugs into an electrical outlet and tracks ambient data – it’s a sensor that can keep track of the activity in the entire space.


Introduced at the ACM CHI human-computer interaction conference, Synthetic Sensors uses its 10 embedded sensors to collect information from the surrounding space. It collects data on sound, humidity, electromagnetic noise, motion, and light, logs the information, sends it to a cloud back-end over Wi-Fi, and maps it to specific functions and appliances. For example, the device can tell you if you forgot to turn off the oven or how much water is being wasted by that leaky faucet.


It works by assigning each object or action a unique signature based on the data captured by the sensors. This way the device can distinguish between, say, the opening of the fridge door and the sound of running water. The team, led by Gierad Laput, essentially trained machine learning algorithms to distinguish these signatures, yielding a wide library of sensable objects and actions. Synthetic Sensors has a wide range of capabilities, but one notable item is missing: a camera. The researchers intentionally left out a camera for security reasons. This is also why raw environmental data isn’t uploaded to the cloud; only the analyzed results are.
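The paper's actual pipeline isn't described here, but the core idea of matching a live reading against learned signatures can be sketched as a nearest-centroid classifier over feature vectors (a Python toy with invented signature values):

```python
import math

# Hypothetical learned "signatures": an average feature vector
# (sound level, vibration, EM noise) for each known household event.
SIGNATURES = {
    "fridge_door": (0.2, 0.6, 0.1),
    "running_water": (0.7, 0.1, 0.0),
    "microwave": (0.3, 0.2, 0.9),
}

def classify(reading):
    """Label a reading with the closest known signature (Euclidean distance)."""
    return min(
        SIGNATURES,
        key=lambda label: math.dist(SIGNATURES[label], reading),
    )

event = classify((0.65, 0.15, 0.05))   # near the running_water signature
```

The real device fuses many more features from its ten sensors, but the principle is the same: only these derived labels, not the raw audio or motion data, need to leave the home.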


Right now the device is still in the prototype phase, as there are some kinks to work out. The team realizes that pulling off this machine learning “across a bunch of different sensor feeds” is difficult to make stable and reliable. They want to make the device truly universal, but there are various obstacles, like making sure that bringing in new appliances doesn’t disrupt the established system. Also, the sounds, lights, and so on in the home environment constantly change, so the system has to know how to distinguish between the constant and the transient.


Another issue they have to address is the lack of a user interface. Right now the prototype is very bare-bones and not especially friendly to use. The team may create an app to control the system, but their end goal is to integrate the tech into smart home hubs in an effort to capture better data without a camera. For now, you’ll have to rely on Alexa if you want a smart house, and even that has its limitations.



Have a story tip? Message me at: cabe(at)element14(dot)com

Hey guys, sorry for the late upload; this is last year's CHRISTMAS SPECIAL project.




Hello Folks,


First of all, wishing you a Merry Christmas and a Happy New Year. I am finally restarting the blog I started a few years ago, and what's more fun than getting rid of the storage devices occupying my damn USB ports (@all Mac users, you know what I mean ;-) )!!


So, let's get started.


Now, there are many wireless storage devices like these that would have cost me about 3K INR!! But hey, where is the fun of learning in that?
So I went with buying a LinkIt Smart 7688 Duo board that cost me about 1.6K INR, and as I already had a 64 GB memory card lying around, why not build something like this using the LinkIt Smart?




1.  Introduction

     The LinkIt Smart 7688 Duo is a small development platform based on OpenWrt, i.e. the same tiny-footprint Linux-based OS running on your routers, on the Arduino Yun (now changed to Linino OS), and on many other development platforms. Listed below is some technical data for NErDS!!


      So, basically this board is a tiny computer running a tiny version of Linux, which comes in a tiny box :).


Well, next I wanted to get an SD card and mount it on the device; obviously the 32 MB of on-board flash won't hold much. So we take that mighty 64 GB memory card lying around and insert it. That was easy.


Oh boy; but how do we access the card, or the device?


For now, just follow these steps; we will cover them in detail later.
Step 1: Connect the LinkIt Smart (leftmost USB, with the pwr/mcu label) to a power adapter (5 V, at least 500 mA). Ensure the green light turns on. The orange light will glow once, and then glow again for about 10 seconds before blinking.
Step 2: Connect to the wireless network created by the LinkIt, named something like LinkIt_Smart_7688_XXXX.
Step 3: Open a web browser and go to http://mylinkit.local/
Step 4: The web page will ask you to set up a new password for root. Do that and log in to the page.
Step 5: Find the "Current IP Address" tab and note down your IP address.
Step 6: Open your terminal and type
             ssh root@<IP_ADDR> (replace <IP_ADDR> with the IP address)
It will ask you to enter your password. Enter it and you are in.


Now that the card is in, we need to mount it; well, before that I thought: why can't we boot the entire system from the memory card itself? Why boot from 32 MB of raw flash that seems to have limited write cycles? A little googling fetched me this:

Mounting the root FS on an SD card

This is an essential step, as you are now booting the device from the SD card. Don't worry if you are not able to understand it; just copy-paste the commands given in the link above and you are good to go.


Well, now that we have our base ready, all we have to do is share a folder that we can access from anywhere. We will do this using the OpenWrt console.

STEP 1 :  Create a folder to share. Type the following on your terminal

                 cd /root

                 mkdir AirDrive

                 chmod o+rwx /root/AirDrive

                 chown nobody /root/AirDrive/

STEP 2 : Open a web browser, go to http://mylinkit.local/ and click Go to OpenWrt.

STEP 3 : Login using username root and your password.


STEP 4 : Click on the Services tab.


STEP 5 : Click the Network Shares tab and replicate the contents as given in the image.

Click Save and Apply and you are ready to go :).

STEP 6 : Now open File Explorer/Finder, browse to the network, and you will be able to see "AirDrive". On opening it, use the Guest login and enjoy your own wireless pendrive.

Hope you liked the project.

PS : Just for fun, I am using the cute tiny box it came in as its cover.


P.S. : I have since updated this project with recycled 18650 batteries from an old laptop. Will upload the updates soon.
# I am now getting a backup of 5-6 hours with the battery


1.  Seeed Studio Product Page
2.  Seeed Studio Wiki Page
3. Product Page (To buy in INDIA)
4.  H/W Schematic Files
5.  Mediatek Labs Page
6.  Mediatek Getting Started Guide
Please visit my blog for more.

Cisco Live 2017 in Berlin

Posted by shabaz Top Member Feb 24, 2017

The week-long Cisco Live finished today, and due to work I had extremely little time this year to check out the exhibits. I think I spent ten minutes before it closed : (

Anyway, here are the brief snippets that were possible to capture in ten minutes!


Firstly, a quad-Pi train : )



Every year the train demonstrations seem to get better. This year's model looks very slick.


End-on view. The small rectangular white and black thing inside near the front is some Bosch gadget:



A set of arcade controls; this looks like the kind of thing that would be fun to make rather than purchase though:


A cool maze game using Sphero 'bots:


The guy in the stripey T-shirt did quite well automating his 'bot through the maze:


This network operations center live display was interesting because infographics are always cool:



There were lots of opportunities to learn:



And really, the highlight of the show for me was seeing how passionate everyone was, even after a week, to still be coding away. Everyone wanted to learn to code.


The world is going software-defined everything.

