2017

MATRIX Creator Eclipse Weather App

In celebration of Eclipse Day, we made this app to tell you what the weather is like outside, so you know whether your current local conditions will let you see the eclipse. This guide provides step-by-step instructions for approximating your location and displaying your local weather as a series of LED animations on a Raspberry Pi with a MATRIX Creator. It demonstrates how to use IP-API.com to find your location from your IP address, then feed those coordinates to the Dark Sky API to get the local weather information that drives the LED animation on your MATRIX Creator. The main goal of this app is to offer an interesting new way to see your current weather conditions.

 

Required Hardware

Before you get started, let's review what you'll need.

  • Raspberry Pi 3 (Recommended) or Pi 2 Model B (Supported) - Buy on Element14 - Pi 3 or Pi 2.
  • MATRIX Creator - The Raspberry Pi does not have a built-in microphone; the MATRIX Creator has an 8-mic array perfect for Alexa - Buy MATRIX Creator on Element14.
  • Micro-USB power supply for Raspberry Pi - 2.5A 5V power supply recommended
  • Micro SD Card (Minimum 8 GB) - You need an operating system to get started. NOOBS (New Out of the Box Software) is an easy-to-use operating system install manager for Raspberry Pi. The simplest way to get NOOBS is to buy an SD card with NOOBS pre-installed - Raspberry Pi 16GB Preloaded (NOOBS) Micro SD Card. Alternatively, you can download and install it on your SD card.
  • A USB Keyboard & Mouse, and an external HDMI Monitor - we recommend having these handy in case you're unable to remote (SSH) into your Pi.
  • Internet connection (Ethernet or WiFi)
  • (Optional) WiFi Wireless Adapter for Pi 2 (Buy on Element14). Note: Pi 3 has built-in WiFi.

For extra credit, enable remote access (SSH) to your device, eliminating the need for a monitor, keyboard, and mouse - and learn how to tail logs for troubleshooting.

 

Let's get started

We will be using MATRIX OS (MOS) to easily program the Raspberry Pi and MATRIX Creator in JavaScript.

 

Step 1: Setting up MOS

Download and configure MOS and its CLI tool for your computer using the following installation guide in the MATRIX Docs: Installation Guide

 

Step 2: Create a MATRIX-Weather-App

To create your own MATRIX-Weather-App on your local computer, run the command "matrix create MATRIX-Weather-App". You will then be prompted to enter a description and keywords for your app. A new folder will be created for the app, containing five new files. The one you will be editing is the app.js file. You will also create a file called weatherAnimations.js for the weather animations.

From here you can clone the MATRIX-Weather-App GitHub repo with the code or follow the guide below for an overview of the code. Either way, make sure to follow the instructions in step 4.

 

Step 3: Global Variables

In the app.js file you will need to set up the following libraries and global variables for the app:

 

//Load libraries
var weatherAnims = require(__dirname+'/weatherAnimations'); //custom weather animations
var Forecast = require('forecast'); //https://www.npmjs.com/package/forecast
var request = require('request'); //https://www.npmjs.com/package/request


////////////////////////////////
//Global Variables
////////////////////////////////
//Detailed location data
var location = {};


//Configure forecast options
var forecast = new Forecast({
    service: 'darksky', //only api available
    key: 'YOUR_KEY_HERE', //darksky api key (https://darksky.net/dev/account)
    units: 'fahrenheit', //fahrenheit or celsius
    cache: false //cache forecast data
});

 

Step 4: Dark Sky API

Within the forecast variable created in Step 3 change YOUR_KEY_HERE to be the API key you get once you make an account with Dark Sky here.

 

Step 5: Obtaining Location Data

To obtain your location data, we will use IP-API.com to get your latitude and longitude from your IP address. This is done with the following code in the app.js file:

 

////////////////////////////////
//Obtaining location data
////////////////////////////////
function getLocation(callback){
    request.get('http://ip-api.com/json')
    //catch any errors
    .on('error', function(error){
        return console.log(error + '\nCould Not Find Location!');
    })
    //get response status
    .on('response', function(data) {
        console.log('Status Code: '+data.statusCode)
    })
    //get location data
    .on('data', function(data){
        try{
            //save location data
            location = JSON.parse(data);


            //log all location data
            console.log(location);


            callback();
        }
        catch(error){
            console.log(error);
        }
    });
}
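For reference, here is the rough shape of the JSON object that http://ip-api.com/json returns and that gets stored in the location variable. The values below are illustrative only; the real response reflects your own IP address:

```javascript
//Illustrative shape of the ip-api.com JSON response
//(example values only; your actual response will differ)
var sampleLocation = {
  status: 'success',
  country: 'United States',
  city: 'Miami',
  lat: 25.7617,
  lon: -80.1918,
  timezone: 'America/New_York',
  query: '203.0.113.5' //the IP address that was looked up
};

//The app only needs the coordinates:
console.log(sampleLocation.lat + ', ' + sampleLocation.lon);
```

The lat and lon fields are what get passed along to the Dark Sky forecast request in the later steps.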

 

Step 6: Selecting Weather Animations

Within the app.js file, add a function that stops the current LED animation and starts the one corresponding to the weather information provided by Dark Sky. Use the function below:

 

////////////////////////////////
//Selecting Weather Animation
////////////////////////////////
function setWeatherAnim(forecast){
    //clear MATRIX LEDs
    weatherAnims.emit('stop');
    //set MATRIX LED animation
    weatherAnims.emit('start', forecast);
}

 

In the MATRIX-Weather-App folder you will need to create a file called weatherAnimations.js. You can find the code for the weatherAnimations.js file here.

 

Each LED sequence in the weatherAnimations.js file is tied to one of these responses from the Dark Sky API.

  • clear-day
  • clear-night
  • rain
  • snow
  • sleet
  • wind
  • fog
  • cloudy
  • partly-cloudy-day
  • partly-cloudy-night

If there is a hazard such as hail, thunderstorms, or tornadoes, the LEDs will turn red.

If there is no LED sequence created for the current weather, the LEDs will turn yellow.
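The selection logic can be pictured with the hypothetical sketch below. The animation names here are illustrative, not the ones in the repo; the point is the lookup-with-fallback pattern, red for hazards and yellow when no sequence exists:

```javascript
//Hypothetical sketch of the icon-to-animation fallback logic
//(animation names are illustrative; see weatherAnimations.js for the real code)
var animations = {
  'clear-day': 'sunAnim',
  'rain': 'rainAnim',
  'snow': 'snowAnim'
  //...one entry per Dark Sky icon listed above
};

var hazards = ['hail', 'thunderstorm', 'tornado'];

function pickAnimation(icon){
  if (hazards.indexOf(icon) !== -1) return 'solidRed'; //hazard warning
  return animations[icon] || 'solidYellow';            //yellow fallback
}
```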

 

Step 7: Obtaining Forecast Data

Using the forecast NPM module, this function in the app.js file retrieves and stores the relevant weather information from Dark Sky. Use the following code:

 

////////////////////////////////
//Obtaining Forecast data
////////////////////////////////
function determineForecast(lat, lon){
    // Retrieve weather information
    forecast.get([lat, lon], true, function(error, weather) {
        //stop if there's an error
        if(error)
            console.log(error+'\n\x1b[31mThere has been an issue retrieving the weather\nMake sure you set your API KEY \x1b[0m ');
        else{
            //pass weather into callback
            setWeatherAnim(weather.currently.icon);


            //loop every X milliseconds
            setTimeout(function(){
                determineForecast(lat, lon);
            }, 180000);
        }
    });
}

 

The weather is updated every 3 minutes.
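The 180000 passed to setTimeout is in milliseconds. Writing out the arithmetic makes the interval easier to tweak if you want faster or slower updates:

```javascript
//Polling interval: 3 minutes expressed in milliseconds
//(this is the value passed to setTimeout in determineForecast)
var POLL_INTERVAL_MS = 3 * 60 * 1000;
```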

 

Step 8: Action Zone

This last function calls all the previous functions and starts the app with the following code:

 

////////////////////////////////
//Action Zone
////////////////////////////////
//Auto Obtain Location
getLocation(function(){
    //Start Forecast requests
    determineForecast(location.lat, location.lon);//input your coordinates for better accuracy ex. 25.7631,-80.1911
});

 

If the forecast seems inaccurate, feel free to hardcode your coordinates in place of the location.lat and location.lon variables. The inaccuracy comes from the roughly 2-mile error margin of using your IP address for geolocation.

 

All code for the app can be found on GitHub here: https://github.com/matrix-io/MATRIX-Weather-App

Dataplicity released a new feature "Custom Actions" that might be useful for projects including remote control.

 

http://docs.dataplicity.com/docs/intro-to-actions

 


MathWorks recently ran a mobile devices challenge where users were asked to submit a project in which they programmed their Android or iOS devices using MATLAB or Simulink. There were over 15 submissions that competed for the grand prize of 1000 USD.

 

The third-place winning team built a low-cost alternative to expensive GPS systems; click here to read more about this project and learn about the other two winners. The link contains video references to their projects as well.

 

MATRIX Creator Amazon Alexa

This guide provides step-by-step instructions for setting up AVS on a Raspberry Pi with a MATRIX Creator. It demonstrates how to access and test AVS using our Java sample app (running on a Raspberry Pi), a Node.js server, and a third-party wake word engine using the MATRIX mic array. You will use the Node.js server to obtain a Login with Amazon (LWA) authorization code by visiting a website using your Raspberry Pi's web browser.

Required hardware

Before you get started, let's review what you'll need.

  • Raspberry Pi 3 (Recommended) or Pi 2 Model B (Supported) - Buy on Element14 - Pi 3 or Pi 2.
  • MATRIX Creator - The Raspberry Pi does not have a built-in microphone; the MATRIX Creator has an 8-mic array perfect for Alexa - Buy MATRIX Creator on Element14.
  • Micro-USB power supply for Raspberry Pi - 2.5A 5V power supply recommended
  • Micro SD Card (Minimum 8 GB) - You need an operating system to get started. NOOBS (New Out of the Box Software) is an easy-to-use operating system install manager for Raspberry Pi. The simplest way to get NOOBS is to buy an SD card with NOOBS pre-installed - Raspberry Pi 16GB Preloaded (NOOBS) Micro SD Card. Alternatively, you can download and install it on your SD card.
  • External Speaker with 3.5mm audio cable - Buy on Amazon
  • A USB Keyboard & Mouse, and an external HDMI Monitor - we recommend having these handy in case you're unable to remote (SSH) into your Pi.
  • Internet connection (Ethernet or WiFi)
  • (Optional) WiFi Wireless Adapter for Pi 2 (Buy on Element14). Note: Pi 3 has built-in WiFi.

For extra credit, enable remote access (SSH) to your device, eliminating the need for a monitor, keyboard, and mouse - and learn how to tail logs for troubleshooting.

Let's get started

The original Alexa on a Pi project required manual download of libraries/dependencies and updating configuration files, which is prone to human error. To make the process faster and easier, we've included an install script with the project that will take care of all the heavy lifting. Not only does this reduce setup time to less than an hour on a Raspberry Pi 3, it only requires developers to adjust three variables in a single install script.

Step 1: Setting up your Pi

Configure your Raspberry Pi as described in the original Alexa documentation by completing steps 1 through 6: Raspberry Pi Alexa Documentation

Step 2: Override ALSA configuration

MATRIX Creator has 8 physical microphone channels and an additional virtual beamformed channel that combines the physical ones. Utilize a microphone channel by placing the following in /home/pi/.asoundrc:

pcm.!default
{
  type asym
  playback.pcm {
    type hw
    card 0
    device 0
  }
  capture.pcm {
    type file
    file "/tmp/matrix_micarray_channel_0"
    infile "/tmp/matrix_micarray_channel_0"
    format "raw"
    slave {
        pcm sc
    }
  }
}

Step 3: Install MATRIX software and reboot

echo "deb http://packages.matrix.one/matrix-creator/ ./" | sudo tee --append /etc/apt/sources.list;
sudo apt-get update;
sudo apt-get upgrade;
sudo apt-get install libzmq3-dev xc3sprog matrix-creator-openocd wiringpi cmake g++ git;
sudo apt-get install matrix-creator-init matrix-creator-malos
sudo reboot

Step 4: Run your web service, sample app and wake word engine

Return to the Raspberry Pi Alexa Documentation and execute Step 7, but in the last terminal select the Sensory wake word engine with:

cd ~/Desktop/alexa-avs-sample-app/samples
cd wakeWordAgent/src && ./wakeWordAgent -e sensory


Step 5: Talk to Alexa



You can now talk to Alexa by simply using the wake word "Alexa". Try the following:

Say "Alexa", then wait for the beep. Now say "what's the time?"

Say "Alexa", then wait for the beep. Now say "what's the weather in Seattle?"

If you prefer, you can also click on the "Listen" button, instead of using the wake word. Click the "Listen" button and wait for the audio cue before beginning to speak. It may take a second or two before you hear the audio cue.

Music has always been driven forward in part by the technology used to make it. The piano combined the best features of the harpsichord and clavichord to help concert musicians; the electric guitar made performing and recording different forms of blues, jazz, and rock music possible; and electronic drum machines both facilitated songwriting and spawned entire genres of music in themselves. Code has become a part of so many different ways of making music today: digital audio workstation (DAW) software records and sequences it, digital instruments perform it, and digital consoles at live music venues process and enhance it for your enjoyment. But using Sonic Pi you actually perform the music by writing code, and Sebastien Rannou used this technique to cover one of his favorite songs, "Aerodynamic," by electronic music legends Daft Punk.

 

Q: To start off, for someone like me who knows little to nothing about code in general, what exactly is happening in this video!? I’ve watched it several times in full, and I’m still not sure!

 

Sebastien: It's a video where a song by Daft Punk is played from code being edited on the fly. This happens in a software called Sonic Pi, which is a bit like a text editor; you can write code in the middle of the screen and it plays some music according to the recipe you provided. Sometimes you can see the screen blink in pink; this is when the code is evaluated, and Sonic Pi takes up modifications. A bit after that, you'll hear something changing in the music. It's a bit like you were writing a recipe with a pencil and at the same time instantly getting the result in your food.

 

 

Q: Among the most famous features of Daft Punk’s music is the extensive use of sampling, i.e. using existing recordings that are re-purposed to create new compositions. In covering a song that is sample based, as is the case with "Aerodynamic" - which is based on a Sister Sledge track - how did you go about doing a cover?

 

S: This is one of my favorite songs, but the choice of doing this cover was more motivated by the different technical aspects it offers. My initial goal was to write an article about Sonic Pi, so I wanted a song where different features of it could be shown. "Aerodynamic" was good for this purpose, as it's made of distinct parts using different techniques: samples, instruments, audio effects, etc. Recreating the sampled part was especially interesting, as there isn't much more to that section than the sample, so I had one of those 'a-ha' moments when I got the sequence right, and it surprised me.

 

Q: How did you come to use Sonic Pi? Do you feel it has any particular strengths and weaknesses in what it does?

sonic pi logo.png

 

S: I really like the idea of generating sound from code; I think it makes a lot of sense, as there are many patterns in music which can be expressed in a logical way.

 

I started playing around with Extempore and Overtone, which are both environments to play music from code. The initial learning curve was harder than I expected, as they involve learning a new language (Extempore comes with its own Scheme and DSL languages, and Overtone uses Clojure). So the initial time spent there was more about learning a new language and environment, which removes some part of the fun you can have (not the technical fun, but the musical one). On the other hand, Sonic Pi is really easy to start with: one of its main goals is to offer a platform to teach people how to code, and I think Sam Aaron (the creator of Sonic Pi) did a very good job on this. What's surprising is that, even though it's initially made to teach you how to code, you don't feel limited and can go around and do most of the crazy stuff you need to express musically.

 

One thing which is a bit hard to get right at the beginning is that live coding environments aren't live in the same way an instrument is: you don't get instant feedback on your live modifications if you tweak a parameter within Sonic Pi, as those are usually caught up to on the next musical measure. So you have to think of what's going to happen in the next bar or two, and try to imagine how it's going to sound. This takes some practice.

 

sonic pi additive_synthesis.png

 

Q: There’s quite a bit of discussion about how Daft Punk recorded the “guitar solo” in this track; how did you go about covering it?

 

S: I don't know much about the theories of how they did the guitar solo part, which I naïvely thought they did digitally. I did a spectral analysis of the track, and isolated each individual note to get their pitch and an approximation of their envelope characteristics (the attack, decay, sustain, and release, essentially how the sound develops over time). Then it was just a matter of using a Sonic Pi instrument that sounded a bit like a guitar, and telling it to play them. I then wrapped it in a reverb and a bitcrusher effect (which downgrades the audio's bit rate and / or sampling rate) to make it sound a bit more metallic. Because the notes are so fast during this solo, it sounds kind of good as is (unlike the sound of the bells at the beginning, more on this later!).

 

Q: As you were working on your cover, did you run into any notable technical problems, and how did you solve them?

 

S: Yes! I spent a lot of time trying to get the bells sound right, but failed. Usually when an instrument plays a note, it has a timbre: this is a sort of signature which can be more or less explained, for instance a violin has a very complex timbre, whereas a wheel organ is way more simple. This complexity is highlighted when you look at audio frequencies when such an instrument plays a note: there is usually one frequency that outweighs others (the frequency of the pitch or the fundamental), and a myriad of others, which correspond to the timbre.

 

The timbre of the bells at the beginning of "Aerodynamic" is very complex, and it evolves in a non-trivial way. I've tried different approaches to reproducing it, including doing Fourier transforms to extract bands of main frequencies at play at different intervals and converting these to Sonic Pi code (more about this here). Sonic Pi comes with a very simple sine instrument, which plays only one frequency, so the idea was to call this instrument several times using different frequencies all together. I kind of got something that sounded like a bell, but it was far from sounding right. I ended up using the bell instrument that also comes with Sonic Pi, playing it at different octaves at the same time, and wrapping these in a reverb effect. That's kind of a poor solution, but at least I had fun in this adventure!

 

Q: Have you used Sonic Pi to create original music? If so, how did you feel about that process? If not, how do you imagine it would be?

 

S: Yes, I have, using different approaches. For example, I tried using only Sonic Pi, which ended up sounding a bit experimental, and then by composing in a DAW (Digital Audio Workstation, e.g. Pro Tools) and then sampling that so it can be easily imported into Sonic Pi. With this approach I can then use Sonic Pi as a sequencer and wrap the samples in effects. I did another cover using that method, this time of a Yann Tiersen song, and also a few songs with my band, Camembert Au Lait Crew (SoundCloud). The code can all be found here on GitHub.

 

 

 

Q: Do you have any plans for future music projects using Sonic Pi?

 

S: There are recent changes in Sonic Pi version 3 which I'm really excited about, especially the support of MIDI, so you can now control external synths with code from Sonic Pi while keeping the ability to turn knobs on your synth. I haven't tried this yet, but it's definitely what I want to do next. Sam Aaron did a live coding session recently showing this and I find it amazing: https://www.youtube.com/watch?v=tEEYL2UwzF4
