
MusicTech

18 Posts authored by: liamtmlacey

So this is my 18th and final blogpost for my music tech design challenge project, and for it I have created three new videos talking about and demoing the end result of the project:

  1. A specification overview video, outlining all the features of the synth as well as briefly discussing the overall development of it and how it works
  2. A complete walkthrough video, demoing in detail each control and parameter of the synthesiser
  3. A video on how the device can be used as a MIDI controller

 

I've also included the demo sound video that I posted a couple of weeks back, as that video is a great example of the range of sounds that the synth can produce, as well as some new higher-quality photos of the synth.

 

Specification Overview Video

 

This video briefly goes over the specifications and features of the synth from both a user and development point-of-view.

 

 

Complete Walkthrough Video

 

This video is an extension of the last one, talking about and demoing in detail each control and parameter of the vintage toy synth. I apologise that the video quality isn't great and that you can't quite make out all the text on the panel; however, hopefully I explain what I am doing in enough detail as I go along for you to make sense of it. Also, it was produced to demo how each control/parameter affects the sound, rather than how to create a great-sounding patch, so please don't judge the sound capabilities of the synth based on the fairly average patches that I produce in this video.

 

 

Sound/Patch Demo Video

 

I first posted this video a couple of weeks back, however it is a perfect example of the types and range of sounds that you can create with the final state of the vintage toy synth, so I thought it would make sense to include it here as well. This video also demonstrates the use of the VTS Editor - a desktop/laptop application I developed that adds patch saving and loading capabilities to the instrument.

 

 

MIDI Controller Video

 

While the previous videos demonstrate the instrument's primary function as a standalone synthesiser, this video shows how the device can be used as a MIDI controller. The video demos the synth's MIDI capabilities with Logic Pro and Ableton Live, however the device could theoretically be used to control any external MIDI software or hardware.

 

 

Photos

 

Below are some high-quality photos of the final state of the vintage toy synthesiser.

 

vintage toy synth 1

vintage toy synth 2

vintage toy synth 3

vintage toy synth 4

vintage toy synth 5

vintage toy synth 6

vintage toy synth 7

 

 

Final Development Material

 

All the final code, circuit diagrams, and design files for the synthesiser can be found in the project's GitHub repository.

 

Conclusion

 

I've spent the majority of my free time over the past three and a half months working on this project, and I couldn't be happier with the result. My skills in both software and hardware development have dramatically improved thanks to this project, and even though it's been a lot of hard work and very stressful at times, it has overall been a very fun experience. I hope those of you who have been following the project have enjoyed what I've done, and please feel free to leave any questions below.

 

Thanks!

In order to undertake this project I had to completely take apart the toy piano, unfortunately slightly damaging it in the process due to the way it was originally put together. As all the electronics for the project are now finished, I spent this week putting the piano back together, as well as adding some small extra touches - some to make the synth easier to use, and some just to improve the aesthetics of the design.

 

vintage toy synth grand piano style

The finished enclosure of the vintage toy synthesiser, propped open like a grand piano

 

Attaching the Panel

 

Instead of securing the synth panel to the top of the piano enclosure in the original standard way, I decided to connect it so that the top of the synth can be opened like that of a real grand piano (see image above). I chose to do this for a few reasons:

  1. It improves the charming miniature form of the toy piano, a characteristic of the object that I didn't want to lose in the conversion.
  2. It gives it a great modular-synth-esque look, exposing all the colourful wires and flashy LEDs of the microcontrollers.
  3. It allows me to easily get into the synth for development and repairs.

 

To do this I added 8 miniature hinges to the top-left underside of the panel, using screws to attach the hinges to the wooden side, but unfortunately having to use superglue to attach the hinges to the acrylic panel, as the screws proved too brittle for the tougher material (I prefer screws so that things can easily be taken apart again if needed).

 

vintage toy piano hinges

The hinges attaching the panel to the enclosure

 

Back Labels

 

A couple of weeks ago I posted a blogpost about the sockets and controls I've added to the back of the synth, and this week I added some labels so that the user knows what each socket/control is for. I made these labels using gloss white filmic sticker sheets, with text in the same font and colour as that of the front panel on a black background, in the hope that they would look as similar as possible to the panel for continuity, as I couldn't apply the same laser-engraving method to this part of the synth. Unfortunately I don't think they look quite as professional as the text on the panel, and I'm probably going to recut and reposition them before the end of the project so that they look a bit neater. However, they're not highly visible and are mainly there so that I can remember which MIDI socket is MIDI-in and which is MIDI-out!

 

vintage toy synth back panel

The finished (-ish) back panel

 

Keyboard Gap Covering

 

The original toy piano enclosure came with a strip of blue fabric (possibly velvet) draped above the keyboard to hide a fairly large gap into the piano's body. Unfortunately I made the mistake of removing and misplacing this fabric; however, on the plus side it gave me the chance to experiment with different types and colours of material to use for the synth. After trying out various colours of both ribbon and felt, I settled on using a burgundy ribbon as a replacement. I chose burgundy as I felt it matched the red on the front of the keyboard keys without being too garish, and the glossy/shiny aspect of the ribbon goes well with the rest of the glossy enclosure. Below are a couple of photos:

 

vintage toy synth keyboard gap

The gap above the keyboard

 

vintage toy synth ribbon

A strip of ribbon used to cover the gap

 

Painting

 

The main thing that got damaged when taking apart the existing toy piano enclosure was the paintwork, so I needed to touch up the paint where this had happened. I also had to paint some new areas of the existing enclosure now that the front panel could be opened and expose some previously-hidden areas. After a disastrous attempt with gloss black paint, which destroyed the first synth panel I had produced, I found that gloss black nail varnish was the perfect tool for painting the enclosure.

 

vintage toy synth painting

Painting the synth with nail varnish

 

Other

 

A couple of other things I did to fit together and refine the enclosure:

  • The base sections of the toy piano (including the keyboard) were reattached to the rest of the enclosure using self-tapping screws. It was initially secured together using nails, which is what made taking it apart so difficult and destructive; however, I'd like the option to remove the base/keyboard in the future in case I need to make improvements or repairs to the electronics that I can't otherwise get to.
  • I attached a set of rubber feet to the base of the synth, so that the enclosure can sit stably on uneven surfaces.
  • All stripboards and the BeagleBone Black have been secured to the enclosure with self-tapping screws.

 

The Result

 

Here are some images of the final enclosure of the vintage toy synthesiser. I'm probably going to take some better quality images for my final blogpost next week.

 

the original piano enclosure

The original toy piano enclosure

 

the finished vintage toy synth enclosure

The finished vintage toy synthesiser enclosure

 

vintage toy synth grand piano

The synth propped open like a grand piano

 

vintage toy synth grand piano back

Back view of the synth propped open

 

That's it for now. Next week I'll be posting my final blogpost, after doing some final software tweaks, in which I hope to include a set of videos demoing the finished synth and showing everything that it can do.

At the start of this project I wasn't planning on having any kind of sound/patch storage or management within the vintage toy synthesiser; however, as the project progressed I increasingly found the need to quickly save and recall patches, both for testing and for demoing the functionality of the sound synthesis engine. In the end I decided to implement an external desktop application to handle this.

 

Approach

 

Synthesiser patch management allows the user to save the sound parameter settings into a 'patch' so that a particular sound can be quickly recalled at a later time. It is a common feature on commercial synthesisers; however, I originally decided not to include patch management on the vintage toy synth for the following reasons:

  1. Patch management works best on synths that have relative or stateless controls (e.g. rotary encoders, which just increment or decrement a stored value in the backend) combined with an LCD for displaying control/parameter values, as opposed to absolute controls (e.g. potentiometers, which set a specific value determined by their position). This is because, unless you have motorised controls, loading a patch doesn't change the physical state of the controls, meaning that with pots they could end up completely out of sync with the backend. I didn't want to add an LCD to the piano as it would take away from the vintage aesthetic of the object, as well as adding cost and implementation time to the project. Also, I like the fact that with pots a user can glance at the panel and instantly see all the parameter values.
  2. Another reason an LCD is so important for patch storage is so that the user knows which patch number they are saving or loading. A minimal patch storage interface could be implemented using a set of toggle switches that represent patch numbers using a base-2 numeral system, however this would have involved an extra set of controls on the panel that I initially didn't think I could add, in terms of both space on the panel and connections to the Arduino Pro Mini.
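The base-2 toggle-switch idea mentioned in point 2 can be sketched as follows. This is purely hypothetical code (the feature was never built); it just assumes one on/off state per switch, with switch 0 as the least-significant bit:

```c
#include <assert.h>

//Hypothetical sketch: converts a row of toggle-switch states into a patch
//number using a base-2 numeral system, with switch 0 as the least-significant bit.
int PatchNumberFromSwitches (const int switch_states[], int num_switches)
{
    int patch_num = 0;

    for (int i = 0; i < num_switches; i++)
    {
        //if switch i is on, set bit i of the patch number
        if (switch_states[i])
            patch_num |= (1 << i);
    }

    return patch_num;
}
```

With three switches this would give eight patch slots (0-7), e.g. switches on-off-on would select patch 5.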

 

However, as the project progressed I kept finding myself wanting to save the sounds I was able to create with the synth, which would make the device a lot easier to demo once finished. By this point the front panel was already constructed, so adding any extra controls to the synth was out of the question. After giving it a bit of thought, I realised that I could simply implement patch management in a separate external application that runs on a desktop/laptop computer and communicates with the synth via MIDI, which would work with the existing synth hardware. I therefore set about developing a Mac OS X GUI application using the C++ framework JUCE, and you can see the code for this in the project's GitHub repo here.

 

Having an external patch manager application isn't my preferred solution, as it means you'll always need a computer with a MIDI interface to save and load patches; however, from an interaction design perspective it could be considered a better implementation than adding an LCD to the synth. I recently attended MiXD 2016, a music interaction design symposium hosted by Birmingham Conservatoire's Integra Lab research group, where keynote speaker Jason Mesut stated that it could be considered inferior to add costly and complex LCDs and displays to products such as digital musical instruments when most of us already carry smartphones/tablets/laptops with us at all times, devices that can easily be used to control other digital devices.

 

How it Works

 

The patch manager application, which is called 'VTS Editor', is very simple, relying only on the correct MIDI messages being sent between the application and the synth in order for it to work correctly.

 

Saving a patch works as follows:

  1. A specific MIDI CC 127 value is sent from the application to the synth to request all patch data
  2. The synth sends back all the current patch data in the form of the parameters' MIDI CC messages (the same as the ones that come from the synth's panel)
  3. Once the synth has sent all patch data it sends a 'finished' MIDI CC value so that the patch manager application knows it has received a complete patch
  4. The patch data is encoded into lines of text and saved into its own text file
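As a rough illustration of the final step, a minimal sketch of encoding CC/value pairs into lines of text might look like the following. Note that the `PatchEntry` type, the `SavePatchFile` name, and the 'cc:value' line format are all my assumptions for illustration; the actual VTS Editor is a JUCE/C++ application and may store patches differently:

```c
#include <stdio.h>
#include <assert.h>

//Hypothetical patch entry: a MIDI CC number and its value (both 0-127)
typedef struct { unsigned char cc_num; unsigned char cc_val; } PatchEntry;

//Encodes each CC/value pair as one "cc:value" line in a text file.
//Returns 0 on success, or -1 if the file could not be opened.
int SavePatchFile (const char *path, const PatchEntry entries[], int num_entries)
{
    FILE *file = fopen (path, "w");
    if (!file)
        return -1;

    //write one line of text per patch parameter
    for (int i = 0; i < num_entries; i++)
        fprintf (file, "%d:%d\n", entries[i].cc_num, entries[i].cc_val);

    fclose (file);
    return 0;
}
```

A text-based format like this keeps patch files human-readable and easy to tweak by hand.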

 

Loading a patch is even simpler:

  1. A patch text file is decoded into patch parameter values
  2. The patch parameter values are sent to the synth as a stream of MIDI CC messages
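The decoding side of loading can be sketched in the same hypothetical 'cc:value' format (again, `PatchEntry` and `LoadPatchFile` are my invented names, not the editor's actual code); each decoded entry would then be sent to the synth as a MIDI CC message:

```c
#include <stdio.h>
#include <assert.h>

//Hypothetical patch entry: a MIDI CC number and its value (both 0-127)
typedef struct { unsigned char cc_num; unsigned char cc_val; } PatchEntry;

//Decodes "cc:value" lines from a patch text file into entries,
//up to max_entries. Returns the number of entries read.
int LoadPatchFile (const char *path, PatchEntry entries[], int max_entries)
{
    FILE *file = fopen (path, "r");
    if (!file)
        return 0;

    int count = 0, cc = 0, val = 0;

    //read one "cc:value" pair per line until the file ends or entries is full
    while (count < max_entries && fscanf (file, "%d:%d", &cc, &val) == 2)
    {
        entries[count].cc_num = (unsigned char)cc;
        entries[count].cc_val = (unsigned char)val;
        count++;
    }

    fclose (file);
    return count;
}
```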

 

I've called the application 'VTS Editor' rather than 'VTS Patch Manager' as in the future I'd like it to become a full software editor for the synth (essentially an extended virtual version of the synth's panel), though that's beyond the scope of this design challenge. I have, however, already implemented a couple of extra controls/features within the editor that aren't related to patch loading/saving:

  • Reset Synth to Panel Settings - triggers the sound engine to be set to the current panel settings. This also happens when the synth is turned on, so that the panel and the backend are in sync.
  • Disable/Enable Synth Panel - temporarily stops the synth's panel controls (except the volume control) from doing anything. I need this as, unfortunately, I'm still getting some very occasional panel potentiometer jitter, so this allows the user to load a patch without any jitter changing the way it should sound. Eventually I hope to remove this control once I've fixed the pot jitter issues.

 

The Interface

 

Below is a screenshot of the current VTS Editor interface:

VTS editor

 

A couple of notes on the interface and its controls:

  • Loading a patch is done using a window that displays all saved patch files
  • It includes controls for setting which MIDI input and output the vintage toy synthesiser is connected to

 

It's also worth noting that I haven't yet finished the general look-and-feel of the interface - eventually I want it to use the same colour scheme and font as that of the synth's front panel.

 

Sound Demos

 

Here is a quick and rough video previewing a couple of demo sounds I have made with the vintage toy synthesiser. I'm neither a sound designer nor a keyboardist, so don't expect anything mind-blowing, and I've still got a couple of tweaks to do to the sound engine; however, this should give you an idea of the range of sounds that you can create with the synth. There is also a bit of noise (possibly ground loop/hum) in the recording. At the end of the project I'm hoping to do a much better video that covers all the features and controls of the synth, as well as some high-quality patch demos and recordings. Enjoy!

 

 

Just to be clear, all sound is coming directly from the BeagleBone Black within the synth itself, and I'm only using the MacBook to send patch change information to the synth via MIDI.

Just over a month ago I posted about the implementation of the audio synthesis engine for the vintage toy synthesiser; since then I've got the synth's front panel developed and fully working, which has allowed me to rapidly complete the main features of the synth. Here I'm going to follow on from that blogpost and talk about the final few features I've implemented since then. It's worth mentioning that there are still a few refinements I need to make before I can settle on a final implementation of the brain and sound engine software for the synth, which I'll probably cover in a future blogpost.

 

Voice Mode and Voice Allocation

 

The Voice Mode parameter on the synth sets whether the device is in polyphonic mode or monophonic mode. Here I'm going to cover how I've implemented both poly and mono mode in the vintage toy synth, both of which are handled within the vintageBrain application on the synth.

 

Polyphonic Mode

 

Poly mode is implemented using an array that stores an ordered list of 'free' voices - voices that are not currently playing a note. The number at the beginning of the list always represents the next available free voice. I've also implemented 'last note' voice stealing, so that if a note is played when there are no free voices left, the synth will 'steal' the voice that is playing the most recently played note.

 

This is how poly mode works when a note-on message is received:

  1. The next available free voice is pulled out of the first index of the 'free voice' array
  2. If the voice number from point 1 is a valid voice number (1 or above):
    1. The 'free voice' array is rearranged so that all numbers are shuffled forward by 1 (removing the next available free voice), and a '0' (representing 'no free voice') is added to the end of the array. This puts the following free voice for the next note-on message at the beginning of the array.
    2. The note number of the note message is stored in an array of 'voice notes', which signifies what note each voice is currently playing.
    3. The voice number is stored as the last used voice (for the note stealing implementation).
    4. The voice number is used to set the MIDI channel of the MIDI note-on message that is sent to the voice engine, triggering that particular voice to play a note.
  3. If the voice number from point 1 is an invalid voice number (0), meaning there are no free voices left:
    1. The last used voice number is set as the voice to use
    2. A MIDI note-off message is sent to the stolen voice so that when sending the new note-on it enters the attack phase of the note
    3. The note number of the note-on message is stored in an array of 'voice notes', which signifies what note each voice is currently playing.
    4. The voice number is used to set the MIDI channel of the MIDI note-on message that is sent to the voice engine, triggering that particular voice to play a note.

 

When a note-off message is received:

  1. A search for the note number in the 'voice notes' array is done
  2. If the note number is found in the 'voice notes' array, the index of the number is used to signify the voice number that is currently playing the note
  3. The voice number is put back into the 'free voice' array, replacing the first instance of '0' found at the end of the array
  4. The index of the 'voice notes' array that represents this voice is set to -1 to signify that this voice is no longer playing a note
  5. The voice number is used to set the MIDI channel of the MIDI note-off message that is sent to the voice engine, triggering that particular voice to stop playing a note.

 

Monophonic Mode

 

Surprisingly, the mono mode implementation is just as complex as poly mode even though it only ever uses the first voice. This is because we need to store a 'stack' of notes that represent all the keys that are currently being held down, so that if a key is released whilst there are still keys being held down the played note is changed to the previously played key rather than just turning the note off. This is the expected behaviour of a monophonic voice mode within a synthesiser.

 

This is how mono mode works when a note-on message is received:

  1. The note number is added to the 'mono stack' array, at an index that represents the number of keys currently being held down (if this is the first pressed key it will be the 1st index, if there is already one key being held down it will be the 2nd index, and so on).
  2. A 'stack pointer' variable is incremented by 1 to signify that a note has been added to the 'mono stack' array
  3. The note number is sent to voice 0 of the sound engine in the form of a MIDI note-on message

 

When a note-off is received:

  1. A search for the note number in the 'mono stack' array is done
  2. If the note number is found in the 'mono stack' array, the note is removed by shuffling forward all elements of the array above it by 1
  3. The 'stack pointer' variable is decremented by 1 to signify that a note has been removed from the 'mono stack' array
  4. If there is still at least 1 note in the 'mono stack' array, signified by the value of the 'stack pointer' variable, a MIDI note-on message is sent to voice 0 of the sound engine using the note number at the top of the mono stack, changing the playing note.
  5. If there are no notes left in the 'mono stack' array, a MIDI note-off message is sent to voice 0 of the sound engine, stopping the playing note.
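The mono note-on/note-off steps above can be sketched in isolation as follows. This is a simplified standalone sketch rather than the synth's actual code (which uses the VoiceAllocData structure and is shown further down); the SendNoteOn/SendNoteOff helpers are stand-ins I've invented for the MIDI messages sent to the sound engine, and velocity handling is omitted:

```c
#include <assert.h>

//Stand-in helpers for the messages sent to the sound engine;
//here they just record the last action for demonstration.
static int last_note_on = -1, last_note_off = -1;
static void SendNoteOn (int voice, unsigned char note)  { (void)voice; last_note_on = note; }
static void SendNoteOff (int voice, unsigned char note) { (void)voice; last_note_off = note; }

#define MONO_STACK_SIZE 16

static unsigned char note_stack[MONO_STACK_SIZE];
static int stack_ptr = 0;

//push the pressed key onto the stack and play it on voice 0
void MonoNoteOn (unsigned char note_num)
{
    if (stack_ptr < MONO_STACK_SIZE)
        note_stack[stack_ptr++] = note_num;
    SendNoteOn (0, note_num);
}

//remove the released key from the stack, then either retrigger the
//previously held key or stop the note if no keys are held
void MonoNoteOff (unsigned char note_num)
{
    //find and remove the note, shuffling later notes forward
    for (int i = 0; i < stack_ptr; i++)
    {
        if (note_stack[i] == note_num)
        {
            for (int j = i; j < stack_ptr - 1; j++)
                note_stack[j] = note_stack[j + 1];
            stack_ptr--;
            break;
        }
    }

    if (stack_ptr > 0)
        SendNoteOn (0, note_stack[stack_ptr - 1]);  //retrigger previously held key
    else
        SendNoteOff (0, note_num);                  //no keys held: stop the note
}
```

For example, holding C then E and releasing E would retrigger C on voice 0, which is the expected mono-synth behaviour described above.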

 

Below is the current code that handles both poly and mono voice/note allocation; for an up-to-date version of the code see the vintageBrain.c file in the project's GitHub repo.

 

//====================================================================================
//====================================================================================
//====================================================================================
//Gets the next free voice (the oldest played voice) from the voice_alloc_data.free_voices buffer,
//or steals the oldest voice if no voices are currently free.

uint8_t GetNextFreeVoice (VoiceAllocData *voice_alloc_data)
{
    uint8_t free_voice = 0;
    
    //get the next free voice number from first index of the free_voices array
    free_voice = voice_alloc_data->free_voices[0];
    
    //if got a free voice
    if (free_voice != 0)
    {
        //shift all voices forwards, removing the first value, and adding 0 on the end...
        
        for (uint8_t voice = 0; voice < NUM_OF_VOICES - 1; voice++)
        {
            voice_alloc_data->free_voices[voice] = voice_alloc_data->free_voices[voice + 1];
        }
        
        voice_alloc_data->free_voices[NUM_OF_VOICES - 1] = 0;
        
    } //if (free_voice != 0)
    
    else
    {
        //use the oldest voice
        free_voice = voice_alloc_data->last_voice;
        
        //TODO: Send a note-off message to the stolen voice so that when
        //sending the new note-on it enters the attack phase....
        
    } //else ((free_voice != 0))
    
    return free_voice;
}

//====================================================================================
//====================================================================================
//====================================================================================
//Adds a new free voice to the voice_alloc_data.free_voices buffer

uint8_t FreeVoiceOfNote (uint8_t note_num, VoiceAllocData *voice_alloc_data)
{
    //first, find which voice note_num is currently being played on
    
    uint8_t free_voice = 0;
    
    for (uint8_t voice = 0; voice < NUM_OF_VOICES; voice++)
    {
        if (note_num == voice_alloc_data->note_data[voice].note_num)
        {
            free_voice = voice + 1;
            break;
        }
        
    } //for (uint8_t voice = 0; voice < NUM_OF_VOICES; voice++)
    
    
    //if we have a voice to free up
    if (free_voice > 0)
    {
        //find space in voice buffer
        
        for (uint8_t voice = 0; voice < NUM_OF_VOICES; voice++)
        {
            //if we find zero put the voice in that place
            if (voice_alloc_data->free_voices[voice] == 0)
            {
                voice_alloc_data->free_voices[voice] = free_voice;
                break;
            }
            
        } //for (uint8_t voice = 0; voice < NUM_OF_VOICES; voice++)
        
    } //if (free_voice > 0)
    
    return free_voice;
}

//====================================================================================
//====================================================================================
//====================================================================================
//Returns a list of voices that are currently playing note note_num (using the voice_list array)
//as well as returning the number of voices.
//Even though at the moment it will probably only ever be 1 voice here, I'm implementing
//it to be able to return multiple voices in case in the future I allow the same note
//to play multiple voices.

uint8_t GetVoicesOfNote (uint8_t note_num, VoiceAllocData *voice_alloc_data, uint8_t voice_list[])
{
    uint8_t num_of_voices = 0;
    
    for (uint8_t voice = 0; voice < NUM_OF_VOICES; voice++)
    {
        if (note_num == voice_alloc_data->note_data[voice].note_num)
        {
            voice_list[num_of_voices] = voice + 1;
            num_of_voices++;
        }
        
    } //for (uint8_t voice = 0; voice < NUM_OF_VOICES; voice++)
    
    return num_of_voices;
}

//====================================================================================
//====================================================================================
//====================================================================================
//Removes a note from the mono stack by shuffling a set of notes down

void RemoveNoteFromMonoStack (uint8_t start_index, uint8_t end_index, VoiceAllocData *voice_alloc_data)
{
    //shuffle the notes in the stack down to remove the note
    for (uint8_t index = start_index; index < end_index; index++)
    {
        voice_alloc_data->note_data[index].note_num = voice_alloc_data->note_data[index + 1].note_num;
    }
    
    //set top of stack to empty
    voice_alloc_data->note_data[voice_alloc_data->mono_note_stack_pointer].note_num = VOICE_NO_NOTE;
    voice_alloc_data->note_data[voice_alloc_data->mono_note_stack_pointer].note_vel = VOICE_NO_NOTE;
    
    //set internal keyboard note stuff
    voice_alloc_data->note_data[voice_alloc_data->mono_note_stack_pointer].keyboard_key_num = VOICE_NO_NOTE;
    
    //decrement pointer if above 0
    if (voice_alloc_data->mono_note_stack_pointer)
    {
        voice_alloc_data->mono_note_stack_pointer--;
    }
}

//====================================================================================
//====================================================================================
//====================================================================================
//Adds a note to mono mode stack

void AddNoteToMonoStack (uint8_t note_num, uint8_t note_vel, VoiceAllocData *voice_alloc_data, bool from_internal_keyboard, uint8_t keyboard_key_num)
{
    //add note to the top of the stack
    voice_alloc_data->note_data[voice_alloc_data->mono_note_stack_pointer].note_num = note_num;
    voice_alloc_data->note_data[voice_alloc_data->mono_note_stack_pointer].note_vel = note_vel;
    
    //set internal keyboard note stuff
    voice_alloc_data->note_data[voice_alloc_data->mono_note_stack_pointer].from_internal_keyboard = from_internal_keyboard;
    if (from_internal_keyboard)
        voice_alloc_data->note_data[voice_alloc_data->mono_note_stack_pointer].keyboard_key_num = keyboard_key_num;
    
    //increase stack pointer
    voice_alloc_data->mono_note_stack_pointer++;
    
    //if the stack is full
    if (voice_alloc_data->mono_note_stack_pointer >= VOICE_MONO_BUFFER_SIZE)
    {
        //remove the oldest note from the stack
        RemoveNoteFromMonoStack (0, VOICE_MONO_BUFFER_SIZE, voice_alloc_data);
    }
}

//====================================================================================
//====================================================================================
//====================================================================================
//Pulls a note from the mono stack

void PullNoteFromMonoStack (uint8_t note_num, VoiceAllocData *voice_alloc_data)
{
    uint8_t note_index = 0;
    bool note_found = false;
    
    //find the note in the stack buffer
    for (uint8_t i = 0; i < voice_alloc_data->mono_note_stack_pointer; i++)
    {
        //if it matches
        if (voice_alloc_data->note_data[i].note_num == note_num)
        {
            //store index
            note_index = i;
            note_found = true;
            
            //break from loop
            break;
        }
        
    } //for (uint8_t i = 0; i < voice_alloc_data->mono_note_stack_pointer; i++)
    
    //remove the note from the stack, but only if it was actually found
    //(prevents using an uninitialised index when the note isn't in the stack)
    if (note_found)
        RemoveNoteFromMonoStack (note_index, voice_alloc_data->mono_note_stack_pointer, voice_alloc_data);
}

//====================================================================================
//====================================================================================
//====================================================================================
//Processes a note message received from any source, sending it to the needed places

void ProcessNoteMessage (uint8_t message_buffer[],
                         PatchParameterData patch_param_data[],
                         VoiceAllocData *voice_alloc_data,
                         bool send_to_midi_out,
                         int sock,
                         struct sockaddr_un sound_engine_sock_addr,
                         bool from_internal_keyboard,
                         uint8_t keyboard_key_num)
{
    
    //====================================================================================
    //Voice allocation for sound engine
    
    //FIXME: it is kind of confusing how in mono mode the separate functions handle the setting
    //of voice_alloc_data, however in poly mode all of that is done within this function. It
    //may be a good idea to rewrite the voice allocation stuff to make this neater.
    
    //=========================================
    //if a note-on message
    if ((message_buffer[0] & MIDI_STATUS_BITS) == MIDI_NOTEON)
    {
        //====================
        //if in poly mode
        if (patch_param_data[PARAM_VOICE_MODE].user_val > 0)
        {
            //get next voice we can use
            uint8_t free_voice = GetNextFreeVoice (voice_alloc_data);
            
            #ifdef DEBUG
            printf ("[VB] Next free voice: %d\r\n", free_voice);
            #endif
            
            //if we have a voice to use
            if (free_voice > 0)
            {
                //put free_voice into the correct range
                free_voice -= 1;
                
                //store the note info for this voice
                voice_alloc_data->note_data[free_voice].note_num = message_buffer[1];
                voice_alloc_data->note_data[free_voice].note_vel = message_buffer[2];
                
                //set the last played voice (for note stealing)
                voice_alloc_data->last_voice = free_voice + 1;
                
                //set internal keyboard note stuff
                voice_alloc_data->note_data[free_voice].from_internal_keyboard = from_internal_keyboard;
                if (from_internal_keyboard)
                    voice_alloc_data->note_data[free_voice].keyboard_key_num = keyboard_key_num;
                
                //Send to the sound engine...
                
                uint8_t note_buffer[3] = {MIDI_NOTEON + free_voice, message_buffer[1], message_buffer[2]};
                SendToSoundEngine (note_buffer, 3, sock, sound_engine_sock_addr);
                
            } //if (free_voice > 0)
            
        } //if (patch_param_data[PARAM_VOICE_MODE].user_val > 0)
        
        //====================
        //if in mono mode
        else
        {
            AddNoteToMonoStack (message_buffer[1], message_buffer[2], voice_alloc_data, from_internal_keyboard, keyboard_key_num);
            
            //Send to the sound engine for voice 0...
            uint8_t note_buffer[3] = {MIDI_NOTEON, message_buffer[1], message_buffer[2]};
            SendToSoundEngine (note_buffer, 3, sock, sound_engine_sock_addr);
            
        } //else (mono mode)
        
    } //((message_buffer[0] & MIDI_STATUS_BITS) == MIDI_NOTEON)
    
    //=========================================
    //if a note-off message
    else
    {
        //====================
        //if in poly mode
        if (patch_param_data[PARAM_VOICE_MODE].user_val > 0)
        {
            //free used voice of this note
            uint8_t freed_voice = FreeVoiceOfNote (message_buffer[1], voice_alloc_data);
            
            #ifdef DEBUG
            printf ("[VB] freed voice: %d\r\n", freed_voice);
            #endif
            
            //if we successfully freed a voice
            if (freed_voice > 0)
            {
                //put freed_voice into the correct range
                freed_voice -= 1;
                
                //reset the note info for this voice
                voice_alloc_data->note_data[freed_voice].note_num = VOICE_NO_NOTE;
                voice_alloc_data->note_data[freed_voice].note_vel = VOICE_NO_NOTE;
                voice_alloc_data->note_data[freed_voice].keyboard_key_num = VOICE_NO_NOTE;
                
                //Send to the sound engine...
                
                uint8_t note_buffer[3] = {MIDI_NOTEOFF + freed_voice, message_buffer[1], message_buffer[2]};
                SendToSoundEngine (note_buffer, 3, sock, sound_engine_sock_addr);
                
            } //if (freed_voice > 0)
            
        } //if (patch_param_data[PARAM_VOICE_MODE].user_val > 0)
        
        //====================
        //if in mono mode
        else
        {
            PullNoteFromMonoStack (message_buffer[1], voice_alloc_data);
            
            //if there is still at least one note on the stack
            if (voice_alloc_data->mono_note_stack_pointer != 0)
            {
                //Send a note-on message to the sound engine with the previous note on the stack...
                
                uint8_t note_num = voice_alloc_data->note_data[voice_alloc_data->mono_note_stack_pointer - 1].note_num;
                uint8_t note_vel = voice_alloc_data->note_data[voice_alloc_data->mono_note_stack_pointer - 1].note_vel;
                
                uint8_t note_buffer[3] = {MIDI_NOTEON, note_num, note_vel};
                SendToSoundEngine (note_buffer, 3, sock, sound_engine_sock_addr);
                
            } //if (voice_alloc_data->mono_note_stack_pointer != 0)
            
            //if this was the last note in the stack
            else
            {
                //Send to the sound engine as a note off...
                
                uint8_t note_buffer[3] = {MIDI_NOTEOFF, message_buffer[1], message_buffer[2]};
                SendToSoundEngine (note_buffer, 3, sock, sound_engine_sock_addr);
            }
            
        } //else (mono mode)
        
    } //else (note-off message)
    
    //====================================================================================
    //Sending to MIDI-out
    
    //Send to MIDI out if needed
    if (send_to_midi_out)
    {
        WriteToMidiOutFd (message_buffer, 3);
    }
}
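The code above relies on a convention worth spelling out: GetNextFreeVoice returns 0 for 'no free voice' and otherwise returns the voice index offset by 1, which is why the caller does free_voice -= 1. Below is a minimal sketch of that contract; the struct layout, voice count, and VOICE_NO_NOTE value here are assumptions, and the project's real function also handles note stealing (using last_voice).

```c
#include <assert.h>
#include <stdint.h>

#define NUM_OF_VOICES 4    //assumed polyphony, not necessarily the project's value
#define VOICE_NO_NOTE 255  //assumed 'no note' marker value

typedef struct { uint8_t note_num; uint8_t note_vel; } NoteData;
typedef struct
{
    NoteData note_data[NUM_OF_VOICES];
    uint8_t last_voice;
} VoiceAllocData;

//Returns 0 if no voice is free, otherwise the free voice's index + 1 -
//hence the 'free_voice -= 1' adjustment in the calling code.
uint8_t GetNextFreeVoice (VoiceAllocData *voice_alloc_data)
{
    for (uint8_t voice = 0; voice < NUM_OF_VOICES; voice++)
    {
        if (voice_alloc_data->note_data[voice].note_num == VOICE_NO_NOTE)
            return voice + 1;
    }

    //no free voice (the real function would apply note stealing here,
    //using last_voice to decide which playing note to steal)
    return 0;
}
```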

 

Keyboard Parameters

 

There are three keyboard parameters on the vintage toy synth that generate and control the notes the keyboard plays - scale, octave, and transpose - another set of parameters implemented within the vintageBrain application on the synth. The keyboard sends key/note messages to the brain application as MIDI note messages, using note numbers 0-17 to signify which key has been pressed or released, so it is up to these parameters to convert those key numbers into meaningful, audible note numbers.

 

Scale

 

Scale controls which musical scale the keyboard plays, and at the moment I have included a selection of 8 scales - chromatic, major, major pentatonic, minor, minor pentatonic, melodic minor, harmonic minor, and blues. This has been implemented fairly simply by putting each scale into its own array in the form of semitones starting from 0, and using the key number coming from the keyboard to select a note/semitone value from the array:

 

//apply scale value
//Key numbers come from the keyboard in the range 0 to KEYBOARD_NUM_OF_KEYS-1,
//and are used as an index into keyboardScales[patch_param_data[PARAM_KEYS_SCALE].user_val]
note_num = keyboardScales[patch_param_data[PARAM_KEYS_SCALE].user_val][keyboard_key_num];
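To make the lookup concrete, here is a sketch of what two of those scale arrays look like in semitone form. The array and constant names match the snippet above, but only two of the eight rows are shown, and the project's exact tables may differ slightly:

```c
#include <assert.h>

#define KEYBOARD_NUM_OF_KEYS 18

//Each scale is a list of semitone offsets from the bottom key, with one
//entry per key on the 18-key keyboard. Only chromatic and major are shown
//here; the synth's full table has 8 of these rows.
static const int keyboardScales[][KEYBOARD_NUM_OF_KEYS] =
{
    //chromatic - every semitone
    {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17},
    //major - tone, tone, semitone, tone, tone, tone, semitone, repeating
    {0, 2, 4, 5, 7, 9, 11, 12, 14, 16, 17, 19, 21, 23, 24, 26, 28, 29},
};

//the scale lookup from the snippet above, as a stand-alone function
int ScaleNoteForKey (int scale_index, int keyboard_key_num)
{
    return keyboardScales[scale_index][keyboard_key_num];
}
```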

 

Octave

 

Octave controls the musical octave that the keyboard scale is offset by, where an octave value of 0 sets the bottom key on the keyboard to play middle E (MIDI note 64). Each greater octave value adds 12 semitones, and each lower octave value subtracts 12 semitones:

 

//apply octave value
//if octave value is 64 (0) bottom key is note 64 (middle E, as E is the first key)
note_num = (note_num + 64) + ((patch_param_data[PARAM_KEYS_OCTAVE].user_val - 64) * 12);

 

Transpose

 

Transpose controls a semitone offset applied to the note number, allowing the bottom key on the keyboard to be any musical note rather than just E:

 

//apply transpose
//a value of 64 (0) means no transpose
note_num += patch_param_data[PARAM_KEYS_TRANSPOSE].user_val - 64;
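Putting the three parameters together, the whole key-to-note conversion can be sketched as a single function. This is a condensed illustration of the three snippets above rather than the project's actual code, and KeyToMidiNote and scale_offset are illustrative names only:

```c
#include <assert.h>

//All three keyboard parameters applied in order, using the same 64-centred
//parameter encoding as the snippets above.
int KeyToMidiNote (int scale_offset, int octave_val, int transpose_val)
{
    int note_num = scale_offset;                            //scale lookup result
    note_num = (note_num + 64) + ((octave_val - 64) * 12);  //octave (64 = bottom key plays middle E)
    note_num += transpose_val - 64;                         //transpose (64 = no transpose)
    return note_num;
}
```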

 

Global Volume

 

The global volume parameter is a boring yet essential control, and I have implemented it to control the main volume of the Linux soundcard driver I am using on the BBB. As I am using the ALSA soundcard driver for audio output, I can use the amixer command-line application to do this, via its sset command:

 

//set the Linux system volume...
        
//create start of amixer command to set 'Speaker' control value
//See http://linux.die.net/man/1/amixer for more options
char volume_cmd[64] = "amixer -q sset Speaker ";
        
//turn the param value into a percentage string
char volume_string[16];
snprintf (volume_string, sizeof (volume_string), "%d%%", param_val);
        
//append the value string onto the command
strcat (volume_cmd, volume_string);
        
//Send the command to the system
system (volume_cmd);

Vintage Amount

 

The idea of the Vintage Amount parameter is to allow the synth to model old or even broken analogue synthesiser voices; however, as this is an uncommon setting on commercial synthesisers, there is no established behaviour for it. The most obvious behaviour, and the way it currently works, is that it randomly modifies the pitch of each voice when a new note is played, with a greater amount value creating larger pitch offsets:

 

//============================
//Set 'vintage amount' pitch offset
int16_t vintage_pitch_offset = 0;
        
//if there is a vintage value
if (patchParameterData[PARAM_GLOBAL_VINTAGE_AMOUNT].voice_val != 0)
{
    //get a random pitch value using the vintage amount as the max possible value
    vintage_pitch_offset = rand() % (int)patchParameterData[PARAM_GLOBAL_VINTAGE_AMOUNT].voice_val;
    //offset the random pitch value so that the offset could be negative
    vintage_pitch_offset -= patchParameterData[PARAM_GLOBAL_VINTAGE_AMOUNT].voice_val / 2;
            
    //FIXME: the above algorithm will make lower notes sound less out of tune than higher notes - fix this.
            
} //if (patchParameterData[PARAM_GLOBAL_VINTAGE_AMOUNT].voice_val != 0)

 

However, the more I play with the current implementation, the more I realise that adding random pitch offsets to each voice isn't very musically useful, especially at large amount values.

Therefore I'm probably going to experiment with other, potentially more musically useful behaviours for this parameter before settling on a final implementation, such as:

  • Randomly detuning each oscillator on a voice by a small amount rather than the whole voice, which would create phase and 'beating' effects
  • Adding random amounts of noise to each note
  • Adding random rhythmic amplitude modulation (like an LFO set to a random shape modulating amplitude amount)
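As a sketch of the first idea in that list, per-oscillator detune could be generated in cents when a note starts, so the oscillators drift against each other and 'beat'. Working in cents keeps the perceived detune equal across the keyboard, which would also address the FIXME noted in the snippet above. The function name and the amount-to-cents scaling below are my own assumptions, not the synth's current implementation:

```c
#include <assert.h>
#include <stdlib.h>

#define NUM_OF_OSCS 5  //sine, triangle, saw, square, pulse

//When a note starts, give each oscillator on the voice its own small random
//detune in cents, scaled by the vintage amount (0-127).
void SetVoiceVintageDetune (int vintage_amount, float detune_cents[NUM_OF_OSCS])
{
    for (int osc = 0; osc < NUM_OF_OSCS; osc++)
    {
        if (vintage_amount == 0)
        {
            detune_cents[osc] = 0.0f;
        }
        else
        {
            //random offset in roughly +/- (vintage_amount / 4) cents, so a
            //full amount of 127 detunes each oscillator by up to ~32 cents
            int range = vintage_amount / 2;
            detune_cents[osc] = (float)(rand() % (range + 1)) - (range / 2.0f);
        }
    }
}
```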

Last week I posted about the design and construction of the front panel for the vintage toy synthesiser; another thing I had been doing alongside that is putting together the electronics and software that allow the synthesis engine to be controlled by the panel controls. This ended up being a bit of a nightmare to get working well, as I'll talk about below, but I think I've finally got it into a stable state. A lot of the electronics and software for the panel is very similar to that of the key mechanism of the synth, so I will often refer to the blogpost on that within this post rather than repeating myself.

 

Electronics

 

Components used:

  • Potentiometer, 10k, regular (x 35)
  • Potentiometer, 10k, centre-detented (x 7)
  • Toggle Switch
  • Resistor, 10k
  • Ceramic capacitor, 0.1uF (x 4)
  • MC14067 multiplexer (x 3)
  • Arduino Pro Mini (3.3V version)
  • DIP24 0.6" IC socket (x 4)

 

Controls

 

As mentioned in a previous blogpost, the only controls I am using on my panel are potentiometers/dials and a toggle switch, simply because these are the most useful and common controls used in similar projects and products.

 

Potentiometers

 

I decided to only use dial pots instead of slider pots as they take up less room on the panel. I am using pots with a value of 10k, as this is the recommended value when a pot is read directly by a microcontroller. I am also using a few centre-detented pots for the bipolar depth controls so that the user can easily centre these values. I had considered using centre-detented pots for a few of the other parameters (oscillator coarse tune, pulse amount, keyboard octave and transpose), however testing showed that these pots often don't rest on the exact central value, which would not work for these particular parameters as they are quite coarse.

 

I have connected the pots to the circuit in the standard way - the two outer pins go to power and ground and the centre pin goes to an analogue input (which in my case is on a multiplexer).

 

arduino pot

A potentiometer connected directly to an Arduino. Source: https://www.arduino.cc/en/Tutorial/AnalogReadSerial

 

Toggle Switch

 

The switch I am using is a SPST (Single Pole, Single Throw) switch, which is all that is needed when wanting to read a switch/button value using a microcontroller.

 

I have connected the toggle switch to the circuit in a standard way, using a 10k pull-down resistor so that when the switch is off the input is pulled to ground to produce a value of LOW. However, as all my multiplexers are connected to analogue inputs, the switch is connected to an analogue input instead of a digital input, but this just means I'll get a value of 0 or 1023 instead of LOW or HIGH.
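In software that just means thresholding the analogue reading, along these lines. The function name and the 512 threshold are my own choices; with the 10k pull-down the reading sits very close to either 0 or 1023, so any mid-scale threshold works:

```c
#include <assert.h>

//Treat any analogue reading above half-scale as 'switch on'
int SwitchStateFromAnalogueRead (int read_val)
{
    return (read_val > 512) ? 1 : 0;  //1 = on (HIGH), 0 = off (LOW)
}
```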

 

arduino button

A button connected to an Arduino. Source: https://www.arduino.cc/en/Tutorial/Button

Microcontroller and Multiplexers

 

Just like with the synth's key mechanism, I am using a 3.3V Arduino Pro Mini microcontroller for reading the control values, which are then sent to the BBB via serial. See the key mechanism production blogpost for more info on this design decision. However, there are a couple of changes I have made here compared to the key mech:

  • I am using 16-channel multiplexers instead of 8-channel multiplexers, simply because I thought I couldn't get enough analogue inputs for all 43 panel controls using 8-channel muxes with an Arduino Pro Mini. I have since learnt that this isn't the case, which I've talked about below in the 'Alternative Circuit Design' section.
  • All the muxes and the Arduino are attached to the circuit via DIP IC sockets. I did this so that these components can be easily replaced if they break, which is something I learnt the hard way with the key mech circuit (I have actually since gone back and added this to the key mech circuit).
  • All the muxes (as well as the VCC signal to the pots) have had 0.1uF decoupling capacitors added to them - something that digital circuits should have, which I wasn't aware of (another thing that I have since gone back and added to the key mech circuit).

 

The Completed Circuit

 

The completed circuit for the panel was built on stripboard, which will be screwed to the underside of the panel using standoffs, with solid-core wire making all the connections. Below is a breadboard diagram of the circuit, but with only one potentiometer attached:

 

panel circuit

 

Here are some photos of the completed circuit:

 

completed panel circuit

The completed vintage toy synth panel circuit

 

attached panel controls

The potentiometers and toggle switch connected to the panel

 

It's not my neatest or prettiest wiring, though when building a circuit that contains 42 potentiometers on stripboard instead of a PCB, there are going to be lots of wires.

 

Alternative Circuit Design

 

As with the key mech circuit, within the panel circuit each mux uses its own set of digital and analogue pins on the Arduino, meaning that in total I've used 12 digital pins (4 digital outputs as the control/select inputs for each mux) and 3 analogue pins (1 analogue output from each mux). At the time of developing this circuit I thought this was the only way it could be done, however I've since discovered through one of my superiors that it can be done using fewer Arduino pins, meaning that I could have used cheaper 8-channel muxes (such as 4051s) and still had enough analogue inputs. The first method is to share the digital pins between the muxes (connecting the same 4 digital outputs to all of the mux select/control pins), which works because I only need to read from one mux at a time. This can be taken a step further by using only one analogue input on the Arduino and sharing it between all the muxes, using the mux inhibit pins to turn on only one mux at a time. Using these two methods I could change this panel circuit to use only 7 digital pins (4 for the shared mux control/select inputs, and 1 for each mux's inhibit pin) and 1 analogue pin (for the shared analogue output coming from the muxes).

 

The main benefit of this alternative circuit design is that it allows you to add more inputs/outputs to your microcontroller, which is very useful on boards such as the Arduino Pro Mini that only have a limited number of them. For example, using these two methods with an Arduino Pro Mini, which has 12 digital pins (ignoring the serial RX and TX pins) and 8 analogue pins (which can be used as digital pins if needed), it would be possible to have a total of 128 analogue inputs using 16 8-channel 4051 muxes, or 240 analogue inputs using 15 16-channel 4067 muxes! However, the main downside to these methods is that they are more prone to errors such as reading from multiple muxes at the same time, so you need to be extra careful in the software that you are definitely turning off one mux before you start reading from the next one.
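The pin arithmetic above can be sanity-checked with a small helper. MaxMuxInputs is a hypothetical function, just to show where the 128 and 240 figures come from: with shared select lines and one shared analogue input, every remaining pin becomes an inhibit line, and each inhibit line enables one more mux:

```c
#include <assert.h>

//Pins left over after the shared select lines and the single shared
//analogue input all become inhibit lines, one mux per inhibit line.
int MaxMuxInputs (int total_pins, int select_pins, int channels_per_mux)
{
    int inhibit_pins = total_pins - select_pins - 1;  //-1 for the shared analogue input
    return inhibit_pins * channels_per_mux;
}
```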

 

Software

 

As mentioned above, all the reading of controls is handled using an Arduino microcontroller, so the only software required for the front panel is a single Arduino sketch that needs to handle two things - reading value changes from the controls, and sending these changes to the BBB as serial-based MIDI messages.

 

The panel software is a lot less complex than that of the key mechanism. All it needs to do is read the state of every pot and switch, and if it reads a new/changed value for a control it converts it into the range of the sound parameter it is controlling and sends the value to the BBB via serial as a MIDI message. The MIDI messages used by the panel are Control Change (CC) messages, where the first byte is 176 + the MIDI channel (always 0 in this case), the second byte is the controller number, and the third byte is the control value. Each parameter within the synth has its own MIDI CC controller number, which is used within the panel and the BBB software for accessing and setting the parameter's value. It can also be used by external MIDI gear for controlling that parameter externally, or for controlling external MIDI gear using the synth's panel. I haven't yet officially documented the MIDI CC specification of the synth, however you can see a list of the CCs in the globals.h file.
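That CC layout is simple enough to show in a few lines. BuildCcMessage is a hypothetical helper name for illustration, not a function from the panel sketch:

```c
#include <assert.h>
#include <stdint.h>

//The CC message layout described above: 176 (0xB0) + channel, then the
//controller number, then the value.
void BuildCcMessage (uint8_t channel, uint8_t cc_num, uint8_t value, uint8_t out[3])
{
    out[0] = 0xB0 + channel;  //status byte: CC on the given MIDI channel
    out[1] = cc_num;          //controller number (one per synth parameter)
    out[2] = value;           //control value
}
```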

 

I have created a GitHub repository to host all my code and schematics/diagrams for this project. To see the up-to-date panel code click here, or for the code at the time of writing this blogpost see below.

 

/*
   Vintage Toy Synthesiser Project - panel code.


   This is the code for the Arduino Pro Mini attached to the piano's panel.
   This particular code is for using up to 4 16-channel multiplexers.


   All pins are used for the following:
   2 - 5: Mux1 select output pins
   6 - 9: Mux2 select output pins
   10 - 13: Mux3 select output pins
   A4 - A7 (as digital outputs): Mux4 select output pins
   A0: Mux1 input pin
   A1: Mux2 input pin
   A2: Mux3 input pin
   A3: Mux4 input pin


   Note that Mux4 may not be connected, but this code allows for it to be
   used. Mux4 must be connected if NUM_OF_CONTROLS is greater than 16 * 3.


   //REMEMBER THAT ANY SERIAL DEBUGGING HERE MAY SCREW UP THE SERIAL COMMS TO THE BBB!
*/


//==========================================


//The number of pots/switches attached
const byte NUM_OF_CONTROLS = 43;


//for dev
const byte FIRST_CONTROL = 0;
const byte LAST_CONTROL = 42;


//The previous analogue value received from each control
int prevAnalogueValue[NUM_OF_CONTROLS] = {0};
//The previous param/MIDI value sent by each control
byte prevParamValue[NUM_OF_CONTROLS] = {0};


//MIDI channel we want to use
const byte midiChan = 0;


const byte VAL_CHANGE_OFFSET = 8;


//==========================================
//param data for each control


struct ControlParamData
{
  const byte cc_num;
  const byte cc_min_val;
  const byte cc_max_val;
  const bool is_depth_param;
};


ControlParamData controlParamData[NUM_OF_CONTROLS] =
{
  {.cc_num = 74, .cc_min_val = 0, .cc_max_val = 127, false}, //0 - PARAM_FILTER_CUTOFF
  {.cc_num = 19, .cc_min_val = 0, .cc_max_val = 127, false}, //1 - PARAM_FILTER_RESO
  {.cc_num = 26, .cc_min_val = 0, .cc_max_val = 127, false}, //2 - PARAM_FILTER_LP_MIX
  {.cc_num = 28, .cc_min_val = 0, .cc_max_val = 127, false}, //3 - PARAM_FILTER_HP_MIX
  {.cc_num = 27, .cc_min_val = 0, .cc_max_val = 127, false}, //4 - PARAM_FILTER_BP_MIX
  {.cc_num = 29, .cc_min_val = 0, .cc_max_val = 127, false}, //5 - PARAM_FILTER_NOTCH_MIX
  {.cc_num = 50, .cc_min_val = 0, .cc_max_val = 3, false}, //6 - PARAM_LFO_SHAPE
  {.cc_num = 47, .cc_min_val = 0, .cc_max_val = 127, false}, //7 - PARAM_LFO_RATE
  {.cc_num = 48, .cc_min_val = 0, .cc_max_val = 127, true}, //8 - PARAM_LFO_DEPTH
  {.cc_num = 14, .cc_min_val = 0, .cc_max_val = 127, false}, //9 - PARAM_OSC_SINE_LEVEL
  {.cc_num = 15, .cc_min_val = 0, .cc_max_val = 127, false}, //10 - PARAM_OSC_TRI_LEVEL
  {.cc_num = 16, .cc_min_val = 0, .cc_max_val = 127, false}, //11 - PARAM_OSC_SAW_LEVEL
  {.cc_num = 18, .cc_min_val = 0, .cc_max_val = 127, false}, //12 - PARAM_OSC_SQUARE_LEVEL
  {.cc_num = 17, .cc_min_val = 0, .cc_max_val = 127, false}, //13 - PARAM_OSC_PULSE_LEVEL
  {.cc_num = 3, .cc_min_val = 0, .cc_max_val = 127, false}, //14 - PARAM_OSC_PULSE_AMOUNT
  {.cc_num = 7, .cc_min_val = 0, .cc_max_val = 127, false}, //15 - PARAM_AEG_AMOUNT
  {.cc_num = 73, .cc_min_val = 0, .cc_max_val = 127, false}, //16 - PARAM_AEG_ATTACK
  {.cc_num = 75, .cc_min_val = 0, .cc_max_val = 127, false}, //17 - PARAM_AEG_DECAY
  {.cc_num = 79, .cc_min_val = 0, .cc_max_val = 127, false}, //18 - PARAM_AEG_SUSTAIN
  {.cc_num = 72, .cc_min_val = 0, .cc_max_val = 127, false}, //19 - PARAM_AEG_RELEASE
  {.cc_num = 13, .cc_min_val = 0, .cc_max_val = 127, false}, //20 - PARAM_FX_DISTORTION_AMOUNT
  {.cc_num = 33, .cc_min_val = 40, .cc_max_val = 88, false}, //21 - PARAM_OSC_SINE_NOTE
  {.cc_num = 34, .cc_min_val = 40, .cc_max_val = 88, false}, //22 - PARAM_OSC_TRI_NOTE
  {.cc_num = 35, .cc_min_val = 40, .cc_max_val = 88, false}, //23 - PARAM_OSC_SAW_NOTE
  {.cc_num = 37, .cc_min_val = 40, .cc_max_val = 88, false}, //24 - PARAM_OSC_SQUARE_NOTE
  {.cc_num = 36, .cc_min_val = 40, .cc_max_val = 88, false}, //25 - PARAM_OSC_PULSE_NOTE
  {.cc_num = 20, .cc_min_val = 0, .cc_max_val = 127, false}, //26 - PARAM_OSC_PHASE_SPREAD
  {.cc_num = 22, .cc_min_val = 0, .cc_max_val = 127, false}, //27 - PARAM_FEG_ATTACK
  {.cc_num = 23, .cc_min_val = 0, .cc_max_val = 127, false}, //28 - PARAM_FEG_DECAY
  {.cc_num = 24, .cc_min_val = 0, .cc_max_val = 127, false}, //29 - PARAM_FEG_SUSTAIN
  {.cc_num = 25, .cc_min_val = 0, .cc_max_val = 127, false}, //30 - PARAM_FEG_RELEASE
  {.cc_num = 107, .cc_min_val = 0, .cc_max_val = 127, false}, //31 - PARAM_GLOBAL_VINTAGE_AMOUNT
  {.cc_num = 102, .cc_min_val = 0, .cc_max_val = 7, false}, //32 - PARAM_KEYS_SCALE
  {.cc_num = 114, .cc_min_val = 61, .cc_max_val = 67, false}, //33 - PARAM_KEYS_OCTAVE
  {.cc_num = 106, .cc_min_val = 58, .cc_max_val = 70, false}, //34 - PARAM_KEYS_TRANSPOSE
  {.cc_num = 103, .cc_min_val = 0, .cc_max_val = 127, false}, //35 - PARAM_VOICE_MODE
  {.cc_num = 58, .cc_min_val = 0, .cc_max_val = 127, true}, //36 - PARAM_MOD_LFO_AMP
  {.cc_num = 112, .cc_min_val = 0, .cc_max_val = 127, true}, //37 - PARAM_MOD_LFO_CUTOFF
  {.cc_num = 56, .cc_min_val = 0, .cc_max_val = 127, true}, //38 - PARAM_MOD_LFO_RESO
  {.cc_num = 9, .cc_min_val = 0, .cc_max_val = 100, false}, //39 - PARAM_GLOBAL_VOLUME
  {.cc_num = 63, .cc_min_val = 0, .cc_max_val = 127, true}, //40 - PARAM_MOD_VEL_AMP
  {.cc_num = 109, .cc_min_val = 0, .cc_max_val = 127, true}, //41 - PARAM_MOD_VEL_CUTOFF
  {.cc_num = 110, .cc_min_val = 0, .cc_max_val = 127, true}, //42 - PARAM_MOD_VEL_RESO
};


//FOR DEVELOPMENT
//ControlParamData controlParamData[NUM_OF_CONTROLS] =
//{
//
//   {.cc_num = 0, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 1, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 2, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 3, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 4, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 5, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 6, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 7, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 8, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 9, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 10, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 11, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 12, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 13, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 14, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 15, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 16, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 17, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 18, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 19, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 20, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 21, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 22, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 23, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 24, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 25, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 26, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 27, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 28, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 29, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 30, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 31, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 32, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 33, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 34, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 35, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 36, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 37, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 38, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 39, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 40, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 41, .cc_min_val = 0, .cc_max_val = 127},
//   {.cc_num = 42, .cc_min_val = 0, .cc_max_val = 127},
//};


void setup()
{
  //Setup serial comms for sending MIDI messages to BBB.
  //We don't need to use the MIDI baud rate (31250) here, as we're sending the messages to a general
  //serial output rather than a MIDI-specific output.
  Serial.begin(38400);


  //set all needed digital output pins
  for (byte i = 2; i <= 13; i++)
  {
    pinMode (i, OUTPUT);
  }


  pinMode (A4, OUTPUT);
  pinMode (A5, OUTPUT);
  pinMode (A6, OUTPUT);
  pinMode (A7, OUTPUT);


}


void loop()
{
  byte input_to_read;
  byte mux_input_pin;
  byte first_select_pin;


  //for each control
  for (byte control_num = FIRST_CONTROL; control_num <= LAST_CONTROL; control_num++)
  {
    //==========================================
    //==========================================
    //==========================================
    //Read analogue control input...


    //Select the mux/analogue pin we want to read from based on the control number
    //FIXME: there are probably equations I can use here instead.
    if (control_num < 16)
    {
      input_to_read = A0;
      mux_input_pin = control_num;
      first_select_pin = 2;
    }
    else if (control_num < 32)
    {
      input_to_read = A1;
      mux_input_pin = control_num - 16;
      first_select_pin = 6;
    }
    else if (control_num < 48)
    {
      input_to_read = A2;
      mux_input_pin = control_num - 32;
      first_select_pin = 10;
    }
    else
    {
      input_to_read = A3;
      mux_input_pin = control_num - 48;
      first_select_pin = A4;
    }


    //select the input pin on the mux we want to read from, by splitting
    //the mux input pin into bits and sending the bit values to mux select pins.
    int b0 = bitRead (mux_input_pin, 0);
    int b1 = bitRead (mux_input_pin, 1);
    int b2 = bitRead (mux_input_pin, 2);
    int b3 = bitRead (mux_input_pin, 3);
    digitalWrite (first_select_pin, b0);
    digitalWrite (first_select_pin + 1, b1);
    digitalWrite (first_select_pin + 2, b2);
    digitalWrite (first_select_pin + 3, b3);


    //read the input value
    int read_val = analogRead (input_to_read);


    //==========================================
    //==========================================
    //==========================================
    //Process analogue control input...


    //if the read control value differs by more than +/-VAL_CHANGE_OFFSET from the last value
    //this is a quick dirty hack to prevent jitter
    if ((read_val > prevAnalogueValue[control_num] + VAL_CHANGE_OFFSET) ||
        (read_val < prevAnalogueValue[control_num] - VAL_CHANGE_OFFSET) ||
        (read_val == 0 && prevAnalogueValue[control_num] != 0) ||
        (read_val == 1023 && prevAnalogueValue[control_num] != 1023))
    {


      // Serial.print(control_num);
      // Serial.print(" ");
      // Serial.println(read_val);


      //store the value
      prevAnalogueValue[control_num] = read_val;


      //convert the control value into a param/MIDI CC value
      byte param_val = ConvertControlValToParamVal (control_num);


      //if this control is for a bipolar depth parameter
      if (controlParamData[control_num].is_depth_param == true)
      {
        //make sure the control definitely centres on the centre value of the parameter
        //by setting a certain window around the centre value to be set to the centre value


        if (param_val >= 63 && param_val <= 65)
        {
          param_val = 64;


        } //if (param_val >= 63 && param_val <= 65)


      } //if (controlParamData[control_num].is_depth_param == true)


      //if the param val is different from the last param val
      if (prevParamValue[control_num] != param_val)
      {
        //store the value
        prevParamValue[control_num] = param_val;


        //Send the param value as a MIDI CC message
        SendMidiMessage (0xB0 + midiChan, controlParamData[control_num].cc_num, prevParamValue[control_num]);


      } //if (prevParamValue[control_num] != param_val)


    } //if (prevAnalogueValue[control_num] != read_val)


    //slow down control reading to help prevent jitter.
    //it also means when pots are turned fast they only send a small number of values
    delay (2);


  } //for (byte control_num; control_num < NUM_OF_CONTROLS; control_num++)


  //==========================================
  //==========================================
  //==========================================
  //Read serial input...


  //if there is something to read on the serial port
  if (Serial.available())
  {
    //NB: keep debug prints like this commented out in normal use, as they
    //would be sent down the same serial line as the MIDI messages to the BBB
    //Serial.println ("Received messages from serial input");


    byte midi_in_buf[64];


    int num_of_bytes = Serial.readBytes (midi_in_buf, 64);


    //if received a request for all panel control values
    if (num_of_bytes == 3 && midi_in_buf[0] == 0xB0 && midi_in_buf[1] == 127 && midi_in_buf[2] == 1)
    {
      //send back all control values
      for (byte control_num = 0; control_num < NUM_OF_CONTROLS; control_num++)
      {
        SendMidiMessage (0xB0 + midiChan, controlParamData[control_num].cc_num, prevParamValue[control_num]);
      }


    } //if (num_of_bytes == 3 && midi_in_buf[0] == 0xB0 && midi_in_buf[1] == 127 && midi_in_buf[2] == 1)


  } //if (Serial.available())


}


//=====================================================
//=====================================================
//=====================================================
//Converts a control value into a param/MIDI CC value


byte ConvertControlValToParamVal (byte control_num)
{
  byte result;


  result = ((((float)controlParamData[control_num].cc_max_val - (float)controlParamData[control_num].cc_min_val) * (float)prevAnalogueValue[control_num]) / 1023.0) + (float)controlParamData[control_num].cc_min_val;


  return result;
}


//=====================================================
//=====================================================
//=====================================================
//Sends a 3 byte MIDI message to the serial output


void SendMidiMessage (byte cmd_byte, byte data_byte_1, byte data_byte_2)
{
  byte buf[3] = {cmd_byte, data_byte_1, data_byte_2};


  Serial.write (buf, 3);


  //  Serial.print(buf[0]);
  //  Serial.print(" ");
  //  Serial.print(buf[1]);
  //  Serial.print(" ");
  //  Serial.println(buf[2]);
}

 

Issues

 

As mentioned at the start it was a bit of a nightmare getting a stable working panel. These are the main issues I had and how I resolved them:

  1. Non-working or erratic potentiometers. Up to this point I've had about 10-15 pots that either spat out erratic values or didn't work at all. In most cases they would behave fine, but after moving the panel or rearranging the wires they would suddenly start misbehaving, which suggested it was a problem with the pots or wiring rather than the Arduino, muxes, or software. After getting the circuit checked by one of my superiors, it turned out I was soldering the pots wrong - I was soldering the wires very close to the opening of the internal mechanism of the pots instead of on the pins/legs, most probably getting solder/flux inside or damaging the terminal, causing them to misbehave or break. I was soldering them there because my original soldering on the pins kept becoming disconnected, but it turns out that's a common issue. Replacing the broken pots with a very careful soldering job fixed the issue. So lesson learnt - solder on the pot legs only!
  2. Potentiometer jitter. A very common problem with pots, but I didn't realise how much I would get. I added decoupling capacitors to the circuit to help prevent this, but they didn't appear to be enough. Therefore in the software I have done two things to reduce jitter (plus attempted a third that didn't work out):
    1. Any new pot value has to differ from the previously sent pot value by at least 8 before a new parameter value is sent to the BBB. This decreases the resolution of the pots, however the greatest resolution of a parameter value sent to the BBB is 7-bit (0-127), which is exactly what you get by scaling down the 10-bit analogue input value (1024 / 8 = 128).
    2. I've slowed down how often the analogue inputs are read by adding a small delay between reading each control value.
    3. I attempted to implement the common running/moving average method for smoothing analogue input values, however this ended up using most of the Arduino's memory, so I abandoned it.
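The threshold trick in point 1 can be sketched as a simple change-detection check (illustrative code, not the actual panel firmware; the names are my own):

```cpp
#include <cstdlib>

// Only treat a new 10-bit pot reading (0-1023) as a genuine change if it
// differs from the last reported reading by at least the threshold.
// A threshold of 8 still gives 1024 / 8 = 128 distinct steps, matching
// the 7-bit (0-127) parameter values sent to the BBB.
const int JITTER_THRESHOLD = 8;

bool potValueHasChanged (int new_val, int prev_reported_val)
{
    return std::abs (new_val - prev_reported_val) >= JITTER_THRESHOLD;
}
```

Small wobbles below the threshold are silently discarded, which is what stops a noisy pot from spamming the BBB with parameter changes.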

 

Example video

 

I was planning on adding an example video of using the panel here, however unfortunately last night my BBB decided to stop working (from searching online for other cases of the symptoms it looks like the processor has randomly blown). Therefore I'll post an example video at a later date once I get a new BBB.

Even though I ended up constructing a brand new front panel for the toy piano for this project, the rest of the enclosure of the vintage toy synth will be using the existing piano enclosure. Apart from the front panel, the other part of the piano that needs modifying for the project is the back section where I need to add a set of sockets and controls so that the synth can be easily connected to a power source, an audio output, and external MIDI gear. A second part to this task was connecting these sockets to the internal electronics of the vintage toy synth.

 

Construction

 

The sockets I have added are:

  • 2x 5-pin DIN sockets (for MIDI I/O)
  • 1x 6.3mm stereo jack socket (for audio output)
  • 1x 2.1mm/5.5mm DC socket (for power), coupled with 1x standard SPST toggle switch (as a power switch)

 

The first thing I needed to do was to get out my Dremel and cut out some holes for all five components. Below is a photo of the back of the piano enclosure after I had done this:

 

toy piano socket holes

Back of the vintage toy piano with holes cut out for sockets and controls

 

I'm in no way saying that this is my best Dremel work - the holes aren't perfect circles or in line. However as it is a vintage hand-built piano nothing is perfectly straight anyway, so my sloppy drilling actually goes quite well with the existing enclosure!

 

Below are some examples of what the back will look like once the sockets and controls are added:

 

toy piano back

Back of the toy piano with the sockets/controls added

 

midi sockets

5-pin DIN MIDI sockets

 

stereo jack socket

6.3 mm stereo jack socket

 

power socket and switch

DC socket and toggle switch

 

toy piano synth back sockets

An example of the sockets with the rest of the synth

 

The MIDI sockets and the toggle switch were long enough to fit through the wooden panel, however the jack and DC sockets were too short to allow me to fit a washer and nut to them for securing the sockets to the panel. Therefore on the inside of the enclosure I had to cut away an area of wood around the holes for these sockets so that the components would fit correctly, as shown in the photo below:

 

inside of back panel

Inside of the back

 

Socket Choice

 

There were a couple of reasons why I chose these particular sockets/controls to use on the back of the synth:

  • I decided to use the metal-framed MIDI sockets rather than the more-commonly used right-angle MIDI sockets as they are much easier to connect and secure to 6mm-thick wood. Also the right-angle sockets are designed to be secured to a PCB/circuit instead of the enclosure, which would have given me less freedom in regards to where I place the MIDI circuit/stripboard within the synth.
  • I decided to use a 6.3mm audio jack socket instead of a 3.5mm mini jack socket as they are more commonly found on commercial synthesisers and similar products. Even though the current synth engine is just monophonic, I chose a stereo jack instead of a mono jack so that stereo headphones can be used without sound coming out of only one ear.
  • I didn't particularly need to add a power switch, however it is a nice little extra thing to have. I am also considering having a power LED too.

 

Connecting to Internal Electronics

 

Now that I have a bunch of sockets connected to the back of my synth the user can easily apply power, get audio, and connect to MIDI gear without needing to open up the device. However these sockets need to be connected to the rest of the electronics of the synth in some way...

 

MIDI Sockets

 

Connecting these sockets was easy - if you've read my previous blogpost on the development of the MIDI I/O electronics you will have seen the circuit I made that allows MIDI gear to be connected to the BeagleBone Black via MIDI DIN sockets. Therefore here I just needed to connect these sockets to my MIDI I/O circuit via the screw terminals I added.

 

Audio Jack Socket

 

From my previous blogpost on BeagleBone Black audio you will have read that I'm using an EC Technology USB audio adapter for the audio output of the BBB within my synth, which has a standard 3.5mm stereo mini jack as the audio connector. Initially I tried to find an existing cable/adapter that goes from a male 3.5mm jack to a female 6.3mm jack, where the socket side of the cable could be secured to a hole using a washer and nut, however I had no luck finding such a cable. Therefore I ended up making my own cable, attaching a mini-jack plug to the jack socket using the three needed wires - left (tip), right (ring), and ground (sleeve). As the cable is no longer than 8 inches I didn't need to worry about shielding the wires to prevent noise interference.

 

DIY audio cable

My DIY audio cable

IMG_3729.JPG

The jack socket side of the cable, which will be attached to the back of the synth

IMG_3730.JPG

The jack plug side of the cable, that will connect to the USB audio adapter connected to the BBB

 

DIY audio cable BBB

The DIY audio cable connected to the BBB

 

DC Power Socket

 

For the power socket I have essentially done the same kind of thing as for the audio connection - I've built my own cable that goes from the socket to a DC plug that connects to the 5V power socket on the BBB. However here I have also added in the power switch, which breaks the power line when turned off.

 

DC socket cable

My DIY power cable

 

DIY power cable BBB

The DIY power cable connected to the BBB

The front panel of my vintage toy synthesiser is the place where all the dials and buttons for controlling the sound parameters will be attached to the toy piano. While the final design of the panel has turned out very similar to how I had originally planned it to look, the construction of the panel compared to my initial plan has changed dramatically. In this blogpost I'm going to cover the process of both designing and constructing the front panel for the vintage toy synthesiser, which has been an ongoing process for me over the past couple of weeks.

 

Design

 

When approaching the design of the panel there were three main aspects I needed to consider - control layout, control aesthetics, and labelling/text.

 

Control Layout

 

Control layout is the process of placing all the needed controls on the front of the panel. There were a few things to consider here that affected my final design:

  • The total number of sound parameters within the synthesiser - 43
  • Panel size - the overall area I can use here is roughly 614 cm²
  • Control size - the majority of the controls I am using are potentiometers which are 16mm x 25mm
  • Grouping similar controls together - one of the most important rules of good interface design is that similar controls should be grouped together within their own sections
  • Leaving space for other things - I need to make sure I've left enough room for a user to easily operate the controls (e.g. their fingers can fit around the dials), as well as leaving space for control labelling.

 

Control Aesthetics

 

My original plan for this project was to use vintage and old-looking controls; however when considering other things such as budget, time, and panel layout, this proved to be a very hard task. Therefore in the end I abandoned this idea, and set myself a new plan to just make sure the controls matched the black/white/silver colour scheme of the piano. One part of my initial plan that I did keep was to make sure the controls were small/miniature, again keeping in line with the design of the piano.

 

There were only two types of controls I needed for the front panel - dials/knobs/potentiometers, and a toggle switch.

 

Dials

 

I've spent the past couple of months buying a range of different knob caps from eBay, and seeing how they look attached to the toy piano. The knob cap I settled on is an aluminium black and silver cap with a very simple design, simply because I thought it went well with the existing aesthetics of the piano. I tried several sizes of the same knob cap, however settled on a 13mm one.

 

knob caps

Different knob caps I tried, with the one I settled on on the far right.

 

Toggle Switches

 

One parameter of the synth needs to use a switch rather than a dial, and from the get-go of this project I knew exactly what switch I would use to suit the vintage toy piano aesthetic - a simple mini silver metal toggle switch.

 

toggle switch

The type of toggle switch I will be using on the panel

 

Labelling/Text

 

All the controls on the front panel need to be labelled in some way so that the user knows what they do, and the main thing to consider here was what type of font to use. Whereas I had originally planned to use a handwritten or old-style font, I ended up choosing a common sans-serif font due to it looking best with the final panel construction method (see Construction section below). I also had to consider what colour to use here, which would preferably be silver/grey/white.

 

Final Design

 

Here is a technical drawing of the final design of the front panel, showing all the positions of the controls as well as the labelling of the controls:

 

synth panel design

The final panel design, showing control positions and labelling

 

There are a couple of reasons why I placed the controls in this particular layout:

  • All controls are grouped into their relevant individual sections
  • There's space left for adding further controls into relevant positions in the future

 

Construction

 

As mentioned above, the construction of the front panel of the vintage toy synthesiser changed dramatically from my original plan.

 

Initial Plan

 

My initial plan for constructing the front panel was to drill holes into the existing wooden panel of the piano, and to label each control by etching text into the existing paintwork. However both of these ideas ended up being abandoned for the following main reasons:

  • The existing panel was too thick and wouldn't have allowed me to fit the bolts onto the potentiometers to attach them to the panel. I attempted to find pots with longer shafts, but this proved to be very difficult.
  • The existing panel was quite brittle and would probably have split quite easily after drilling 43 holes into it.
  • The paintwork was also very brittle and chipped easily, so attempting to etch text into it wouldn't have looked very good.

 

Laser Cutting - First Attempt

 

After realising I would need to construct a whole new panel for the piano, it was recommended that I get it produced using laser cutting, as this could cut out all the needed holes instead of me having to do it myself. With the help of my wonderful girlfriend I got a CAD drawing produced, found a local laser cutting company, Bristol Design Forge, and got a new panel constructed in 3mm birch plywood with all the needed holes for the controls. The thickness was perfect for attaching potentiometers, and the new panel was a lot stronger than the original.

 

panel technical drawing

A CAD drawing for the first panel design

 

plywood panelplywood panel on piano

The 3mm birch plywood laser cut panel

 

The main downside of this method was that I would now have to completely paint the panel, and this is where disaster struck. First of all I used a gloss black paint that probably wasn't designed to be used on objects that would be handled a lot (it was tacky and smelly, even after it had dried), and secondly the paint caused the panel to warp quite considerably, meaning that it no longer sat nicely on the existing piano enclosure. I learned two things here:

  1. Plywood is susceptible to warping
  2. Try paint on a test bit of material first!!

 

I decided to learn from these mistakes and move on quickly.

 

Final Laser Cut Panel

 

After the first failed attempt at laser cutting, it was recommended that I consider using acrylic instead of wood. While I really wanted to keep all parts of the synth wooden, in keeping with the existing enclosure, there were quite a few benefits to using acrylic instead of wood:

  • It could come in gloss black without me needing to apply any paint
  • I could use laser engraving to produce the control labelling on my panel, which would come out in frosted white - one of my preferred labelling colours. This would mean I wouldn't need to paint or stick labels on the panel myself, which probably wouldn't have looked that good.
  • It's not susceptible to warping

 

Therefore once again with the help of my wonderful girlfriend and Bristol Design Forge I got a second laser cut panel produced, this time in 3mm gloss black acrylic.

 

synth panel CAD file 2

A close up of the CAD drawing for the second version of the panel, showing cut lines in red and engrave lines in white.

The .dxf design file for this can be found in the project's Git repository.

 

vintage toy synth panel 1 vintage toy synth panel 2

vintage toy synth panel on piano vintage toy synth panel on piano with knob caps

Photos of the gloss black acrylic panel

 

While I was initially concerned that using acrylic instead of wood would ruin the aesthetics of the vintage toy piano, it turned out to not look too different from the original panel. Hopefully this is the final panel design and construction, and now all I need to do is attach all the controls and get them talking to the BeagleBone Black!

Over the past week I've been working on various parts of my project - designing the front panel, starting on the panel electronics, as well as optimising the sound engine software. All of these things are only half-finished so I don't want to document them in a blog post yet, however one small yet important thing I have completed this week is the wiring and soldering of the BeagleBone Proto Cape, so I thought I'd do a quick and short (for a change!) blog post on how I've used the proto shield.

 

beaglebone proto cape

The BeagleBone Proto Cape

 

The Proto Cape is important for my project, and probably for most serious BBB projects, as it allows you to solder your connections to the board so that things don't accidentally become disconnected during use. That said, the idea of permanently soldering all of my connections to my BBB didn't appeal to me, so instead I soldered a set of screw terminals to my proto cape (like I did with the MIDI interface circuit for my project). This allows me to disconnect certain connections and circuits from the BBB if needed (which is very useful during development), while still providing a way to securely connect everything.

 

Here are a couple of photos of my proto cape:

 

beaglebone proto cape top

beaglebone proto cape bottom

beaglebone proto cape side

 

As can be seen from the above photos I've attached eight pairs of screw terminals to the cape. These are for the following connections:

  • Three pairs for connecting my keyboard, panel, and MIDI interface circuits to the BBB via the UART serial pins (both TX and RX for each circuit).
  • Two pairs for providing 3.3V power to my three circuits (leaving one terminal currently unused)
  • Two pairs for connecting the GND of my circuits to the BBB (leaving one terminal currently unused)
  • A spare pair, just in case.

 

Here's a photo of the cape in use, with the keyboard, MIDI interface, and panel fully connected:

 

beaglebone proto cape in use

Since my blogpost a couple of weeks back where I highlighted the design for my audio synthesis engine I've been hard at work attempting to implement it using the C++ audio synthesis library Maximilian. I'm now at a stage where I have a working and controllable synthesis engine, so I thought it would be a good time to talk about how I've done it. I've managed to implement most of my original design plus a few extra parameters, however I've still got a few small things to implement as well as some niggling bugs to iron out.

 

Before I go on, just thought I'd mention that the code used at the end of my project may change slightly from the code examples shown here, so for up-to-date and full code see the GitHub repository for this project.

 

The Synthesis Engine Application

 

In my last blogpost on software architecture I briefly introduced the vintageSoundEngine application which is the program running on the BeagleBone Black that generates the sound for my synth. This application has two main tasks - receiving note and control messages and forwarding them onto the correct 'voice', and mixing together the audio output from each voice and sending it to the main audio output. This is all done within the main piece of code for the application, vintageSoundEngine.cpp, however the code that handles the audio processing for each voice is implemented as a C++ class/object, vintageVoice, and multiple instances of this object are created depending on the polyphony value of the synth. While I'm on the subject of polyphony, at the moment I've just got a polyphony value of two due to high CPU usage of each voice, however I'm hoping to increase this before the end of the project.

 

Processing Note and Control Messages

 

As mentioned in my last blogpost it is the vintageBrain application that handles voice allocation, therefore vintageSoundEngine doesn't have to do anything complicated in order to forward MIDI note messages to the correct voice - it just uses the MIDI channel of the note message to determine the voice number. This is also the same for MIDI control/CC messages, however I also use MIDI channel 15 here to specify that a message needs to go to all voices. Once the program knows which voice the message needs to go to, it calls a specific function within the desired voice to forward the message. Here is a snippet of the current code that handles this:

 

//================================
//Process note-on messages
if (input_message_flag == MIDI_NOTEON)
{
    //channel relates to voice number
    uint8_t voice_num = input_message_buffer[0] & MIDI_CHANNEL_BITS;

    vintageVoice[voice_num]->processNoteMessage (1, input_message_buffer[1], input_message_buffer[2]);

} //if (input_message_flag == MIDI_NOTEON)

//================================
//Process note-off messages
else if (input_message_flag == MIDI_NOTEOFF)
{
    //channel relates to voice number
    uint8_t voice_num = input_message_buffer[0] & MIDI_CHANNEL_BITS;

    vintageVoice[voice_num]->processNoteMessage (0, 0, 0);

} //if (input_message_flag == MIDI_NOTEOFF)

//================================
//Process CC/param messages
else if (input_message_flag == MIDI_CC)
{
    //channel relates to voice number. Channel 15 means send to all voices
    uint8_t voice_num = input_message_buffer[0] & MIDI_CHANNEL_BITS;

    for (uint8_t voice = 0; voice < NUM_OF_VOICES; voice++)
    {
        //if we want to send this message to voice number 'voice'
        if (voice_num == 15 || voice_num == voice)
        {
            //TODO: check if this param/CC num is a sound param, and in range.
            //At this point it always should be, but it may be best to check anyway.

            //set the parameter's voice value
            vintageVoice[voice]->setPatchParamVoiceValue (input_message_buffer[1], input_message_buffer[2]);

        } //if (voice_num == 15 || voice_num == voice)

    } //for (uint8_t voice = 0; voice < NUM_OF_VOICES; voice++)

} //if (input_message_flag == MIDI_CC)

 

Mixing Voices

 

Mixing the audio output of the voice objects is done in the audio callback function, which is called for each audio sample by the audio streaming thread of the application, handled by the RtAudio API. This is done in the same way as in the Maximilian examples, except that their code for generating and controlling audio is not split into separate objects. Here is the current code that handles this:

 

void play (double *output)
{
    double voice_out[NUM_OF_VOICES];
    double mix = 0;

    //process each voice
    for (uint8_t voice = 0; voice < NUM_OF_VOICES; voice++)
    {
        vintageVoice[voice]->processAudio (&voice_out[voice]);
    }

    //mix all voices together (for some reason this won't work if done in the above for loop...)
    for (uint8_t voice = 0; voice < NUM_OF_VOICES; voice++)
    {
        mix += voice_out[voice];
    }

    //set output
    for (uint8_t i = 0; i < maxiSettings::channels; i++)
    {
        output[i] = mix;
    }
}

 

The code is fairly simple, and just does three things:

  1. Calls the audio processing function of each voice, passing in the variable that the voice's audio sample will be stored in
  2. Mixes the audio samples of each voice into a single sample
  3. Puts the mixed sample into all channels of the audio output buffer
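One thing worth noting about step 2: summing voices can push the mix outside the [-1.0, 1.0] sample range if several voices peak at once. A common safeguard (my addition, not something the code above does) is to scale the sum by the voice count:

```cpp
// Sum a set of per-voice samples and scale by the voice count so the
// mix stays within [-1.0, 1.0] even when every voice peaks simultaneously.
// Illustrative helper only - not part of the project code.
double mixVoices (const double *voice_out, int num_voices)
{
    double mix = 0;
    for (int voice = 0; voice < num_voices; voice++)
        mix += voice_out[voice];
    return mix / num_voices;
}
```

With only two voices the risk of clipping is small, which may be why the plain sum works fine here; with higher polyphony some form of scaling or limiting becomes more important.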

 

Voice Design Implementation

 

Now I'm going to talk about the more interesting code - the code that generates and controls the synthesised audio within each voice. As stated above this is all within the vintageVoice class, and relies mostly on the Maximilian library for the implementation of the essential components of the synthesis engine. When talking about all the features here, remember that this is for each voice.

 

To implement the synthesis engine I needed the following Maximilian objects:

  • maxiOsc (x6) - objects for creating the five separate oscillators as well as the LFO for each voice
  • maxiEnv (x2) - objects for creating the amplitude and filter envelopes for each voice
  • maxiSVF - object for creating the State-Variable-Filter for each voice
  • maxiDistortion - object for applying distortion to each voice

 

As previously mentioned vintageSoundEngine is a multithreaded application. The main thread handles the receiving and processing of MIDI messages, whereas the second thread handles all the audio streaming and processing.

 

Processing Control Messages

 

As stated above, MIDI CC messages are sent to the voices to control the parameters of the sound. When a CC message gets to a voice it is converted into a value that the voice parameter understands (e.g. from the typical MIDI CC range of 0-127 to the typical filter cutoff range of 20-20000 Hz), and then stored in an array of parameter data that is used throughout the rest of the code, most importantly within the audio processing callback function. For certain CC messages other particular things need to be done, e.g. for an oscillator coarse tune control message the pitch of the oscillator needs to be updated. To make developing the audio processing code easier, macros are used instead of raw parameter numbers, and each parameter's values are stored as part of a struct that also contains variables for other data about the parameter, such as the range of its values. See the globals.h file for more info.
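The range conversion described above is a linear rescale between two ranges. Here's a minimal sketch of such a `scaleValue`-style helper (the actual helper in the repository may differ, e.g. frequency parameters are often mapped exponentially rather than linearly):

```cpp
// Linearly map a value from an input range to an output range,
// e.g. a MIDI CC value of 0-127 to a filter cutoff of 20-20000 Hz.
double scaleValue (double val, double in_min, double in_max,
                   double out_min, double out_max)
{
    return ((val - in_min) / (in_max - in_min)) * (out_max - out_min) + out_min;
}
```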

 

This task is handled in the main thread. Here is the current code that processes MIDI CC messages:

 

//==========================================================
//==========================================================
//==========================================================
//Sets a parameter's voice value based on the parameter's current user value

void VintageVoice::setPatchParamVoiceValue (uint8_t param_num, uint8_t param_user_val)
{
    patchParameterData[param_num].user_val = param_user_val;
    //FIXME: this could probably be done within vintageSoundEngine.cpp instead of within the voice object,
    //as each voice will probably be given the same value most of the time, so it would save CPU
    //to only have to do this once instead of for each voice.
    patchParameterData[param_num].voice_val = scaleValue (patchParameterData[param_num].user_val,
                                                          patchParameterData[param_num].user_min_val,
                                                          patchParameterData[param_num].user_max_val,
                                                          patchParameterData[param_num].voice_min_val,
                                                          patchParameterData[param_num].voice_max_val);
    
    //==========================================================
    //Set certain things based on the received param num
    
    if (param_num == PARAM_AEG_ATTACK)
    {
        envAmp.setAttack (patchParameterData[param_num].voice_val);
    }
    
    else if (param_num == PARAM_AEG_DECAY)
    {
        envAmp.setDecay (patchParameterData[param_num].voice_val);
    }
    
    else if (param_num == PARAM_AEG_SUSTAIN)
    {
        envAmp.setSustain (patchParameterData[param_num].voice_val);
    }
    
    else if (param_num == PARAM_AEG_RELEASE)
    {
        envAmp.setRelease (patchParameterData[param_num].voice_val);
    }
    
    else if (param_num == PARAM_FEG_ATTACK)
    {
        envFilter.setAttack (patchParameterData[param_num].voice_val);
    }
    
    else if (param_num == PARAM_FEG_DECAY)
    {
        envFilter.setDecay (patchParameterData[param_num].voice_val);
    }
    
    else if (param_num == PARAM_FEG_SUSTAIN)
    {
        envFilter.setSustain (patchParameterData[param_num].voice_val);
    }
    
    else if (param_num == PARAM_FEG_RELEASE)
    {
        envFilter.setRelease (patchParameterData[param_num].voice_val);
    }
    
    else if (param_num == PARAM_OSC_SINE_NOTE)
    {
        convert mtof;
        oscSinePitch = mtof.mtof (rootNoteNum + (patchParameterData[param_num].voice_val - 64));
    }
    
    else if (param_num == PARAM_OSC_TRI_NOTE)
    {
        convert mtof;
        oscTriPitch = mtof.mtof (rootNoteNum + (patchParameterData[param_num].voice_val - 64));
    }
    
    else if (param_num == PARAM_OSC_SAW_NOTE)
    {
        convert mtof;
        oscSawPitch = mtof.mtof (rootNoteNum + (patchParameterData[param_num].voice_val - 64));
    }
    
    else if (param_num == PARAM_OSC_PULSE_NOTE)
    {
        convert mtof;
        oscPulsePitch = mtof.mtof (rootNoteNum + (patchParameterData[param_num].voice_val - 64));
    }
    
    else if (param_num == PARAM_OSC_SQUARE_NOTE)
    {
        convert mtof;
        oscSquarePitch = mtof.mtof (rootNoteNum + (patchParameterData[param_num].voice_val - 64));
    }
    
    else if (param_num == PARAM_OSC_PHASE_SPREAD)
    {
        //FIXME: I need to properly understand what the phase value represents in order to implement a definitive algorithm here.
        //But basically what it does is, the higher the param value, the more spread the phases are of each oscillator from one another.
        //Sine will always stay at 0, tri will change of a small range, saw over a slightly bigger range, and so on.
        
        oscSine.phaseReset(0.0);
        oscTri.phaseReset (patchParameterData[param_num].voice_val * 0.002);
        oscSaw.phaseReset (patchParameterData[param_num].voice_val * 0.004);
        oscPulse.phaseReset (patchParameterData[param_num].voice_val * 0.006);
        oscSquare.phaseReset (patchParameterData[param_num].voice_val * 0.008);
    }
    
    else if (param_num == PARAM_MOD_VEL_AMP)
    {
        //vel->amp env modulation
        velAmpModVal = getModulatedParamValue (param_num, PARAM_AEG_AMOUNT, voiceVelocityValue);
    }
    
    else if (param_num == PARAM_MOD_VEL_FREQ)
    {
        //vel->cutoff modulation
        velFreqModVal = getModulatedParamValue (param_num, PARAM_FILTER_FREQ, voiceVelocityValue);
    }
    
    else if (param_num == PARAM_MOD_VEL_RESO)
    {
        //vel->resonance modulation
        velResoModVal = getModulatedParamValue (param_num, PARAM_FILTER_RESO, voiceVelocityValue);
    }
}

 

Processing Note Messages

 

Processing MIDI note messages within the voices is a little more complicated than processing MIDI CC messages.

The following main things happen for each note message:

  1. If a note-on message:
    1. The pitches of the five oscillators are set based on the received MIDI note number as well as the oscillators' coarse tune values
    2. The MIDI note velocity value (0-127) is converted into a voice amplitude value (0-1)
    3. Velocity modulation depth parameter values are used to generate the realtime parameter modulation values that need to be added to the parameter patch values
    4. The LFO oscillator phase is reset to 0
  2. The amplitude envelope trigger value is set. If a note-on message, this opens the envelope and causes sound to start playing in the audio thread, however if a note-off message it triggers the envelope to go to the release phase, eventually silencing the audio.
  3. The filter envelope trigger value is set.
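Step 1.1 above uses Maximilian's `mtof` (MIDI-note-to-frequency) converter; under the hood this is the standard equal-temperament formula, which can be sketched as:

```cpp
#include <cmath>

// Convert a MIDI note number to a frequency in Hz using the standard
// equal-temperament formula, with A4 (MIDI note 69) tuned to 440 Hz.
double midiNoteToFrequency (double note_num)
{
    return 440.0 * std::pow (2.0, (note_num - 69.0) / 12.0);
}
```

Each coarse-tune value is simply added to the note number before this conversion, which is why the offsets in the code are expressed in semitones around a centre value of 64.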

 

Again this task is handled in the main thread. Here is the function that handles this:

 

//==========================================================
//==========================================================
//==========================================================
//Function that does everything that needs to be done when a new
//note-on or note-off message is sent to the voice.

void VintageVoice::processNoteMessage (bool note_status, uint8_t note_num, uint8_t note_vel)
{
    //==========================================================
    //if a note-on
    if (note_status == true)
    {
        //============================
        //store the root note num
        rootNoteNum = note_num;
        
        //============================
        //set the oscillator pitches
        convert mtof;
        oscSinePitch = mtof.mtof (rootNoteNum + (patchParameterData[PARAM_OSC_SINE_NOTE].voice_val - 64));
        oscTriPitch = mtof.mtof (rootNoteNum + (patchParameterData[PARAM_OSC_TRI_NOTE].voice_val - 64));
        oscSawPitch = mtof.mtof (rootNoteNum + (patchParameterData[PARAM_OSC_SAW_NOTE].voice_val - 64));
        oscPulsePitch = mtof.mtof (rootNoteNum + (patchParameterData[PARAM_OSC_PULSE_NOTE].voice_val - 64));
        oscSquarePitch = mtof.mtof (rootNoteNum + (patchParameterData[PARAM_OSC_SQUARE_NOTE].voice_val - 64));
        
        //TODO: vintage amount parameter - randomly detune each oscillator and/or the overall voice tuning
        //on each note press, with the vintage amount value determining the amount of detuning.
        
        //============================
        //set the note velocity
        voiceVelocityValue = scaleValue (note_vel, 0, 127, 0., 1.);
        
        //============================
        //work out velocity modulation values
        
        //vel->amp env modulation
        velAmpModVal = getModulatedParamValue (PARAM_MOD_VEL_AMP, PARAM_AEG_AMOUNT, voiceVelocityValue);
        
        //vel->cutoff modulation
        velFreqModVal = getModulatedParamValue (PARAM_MOD_VEL_FREQ, PARAM_FILTER_FREQ, voiceVelocityValue);
        
        //vel->resonance modulation
        velResoModVal = getModulatedParamValue (PARAM_MOD_VEL_RESO, PARAM_FILTER_RESO, voiceVelocityValue);
        
        //============================
        //reset LFO osc phase
        lfo.phaseReset(0.0);
        
    } //if (note_status == true)
    
    //==========================================================
    //if a note-off
    else if (note_status == false)
    {
        //reset aftertouch value
        aftertouchValue = 0;
    }
    
    //==========================================================
    //set trigger value of envelopes
    envAmp.trigger = note_status;
    envFilter.trigger = note_status;
}

 

Generating and Processing Audio

 

As previously mentioned, all audio processing is handled within an audio callback function which is repeatedly called by the audio processing thread for each sample in the audio stream. Here I'm going to outline each section of the audio callback function within the voice class, which relies heavily on the Maximilian library.

 

LFO

 

The LFO is generated and set in the following way:

  1. An output sample of an oscillator object is generated using the following parameters:
    1. LFO shape controls which maxiOsc shape is used
    2. LFO rate controls the frequency/pitch of the maxiOsc object
  2. The oscillator output (-1 to +1) is converted into the range needed for an LFO (0 - 1).
  3. The LFO output sample is multiplied by the LFO depth parameter value

 

//==========================================================
    //process LFO...
    
    //set shape and rate
        //FIXME: for LFO rate it would be better if we used an LFO rate table (an array of 128 different rates).
    if (patchParameterData[PARAM_LFO_SHAPE].voice_val == 0)
        lfoOut = lfo.sinewave (patchParameterData[PARAM_LFO_RATE].voice_val);
    else if (patchParameterData[PARAM_LFO_SHAPE].voice_val == 1)
        lfoOut = lfo.triangle (patchParameterData[PARAM_LFO_RATE].voice_val);
    else if (patchParameterData[PARAM_LFO_SHAPE].voice_val == 2)
        lfoOut = lfo.saw (patchParameterData[PARAM_LFO_RATE].voice_val);
    else if (patchParameterData[PARAM_LFO_SHAPE].voice_val == 3)
        lfoOut = lfo.square (patchParameterData[PARAM_LFO_RATE].voice_val);
    
    //convert the osc wave into an lfo wave (multiply and offset)
    lfoOut = ((lfoOut * 0.5) + 0.5);
    
    //set depth
    lfoOut = lfoOut * patchParameterData[PARAM_LFO_DEPTH].voice_val;

 

 

Amplitude Envelope

 

The amplitude envelope is generated and set in the following way:

  1. The LFO->amplitude modulation depth parameter value is used to generate the realtime parameter modulation value that needs to be added to the amplitude envelope amount parameter value
  2. The envelope amount value is worked out by adding the realtime amplitude modulation values (generated by both velocity and LFO modulation) to the amplitude envelope amount parameter value
  3. An output sample of the envelope is generated using a maxiEnv object, passing in the envelope amount value to control the depth, and the envelope trigger value that was set by the last received MIDI note message to set the current phase of the envelope.

 

//==========================================================
    //Amp envelope stuff...
    
    //process LFO->amp env modulation
    double amp_lfo_mod_val = getModulatedParamValue (PARAM_MOD_LFO_AMP, PARAM_AEG_AMOUNT, lfoOut);
    
    //Add the amp modulation values to the patch value, making sure the produced value is in range
    double amp_val = patchParameterData[PARAM_AEG_AMOUNT].voice_val + amp_lfo_mod_val + velAmpModVal;
    amp_val = boundValue (amp_val, patchParameterData[PARAM_AEG_AMOUNT].voice_min_val, patchParameterData[PARAM_AEG_AMOUNT].voice_max_val);
    
    //generate the amp envelope output using amp_val as the envelope amount
    envAmpOut = envAmp.adsr (amp_val, envAmp.trigger);

 

Filter Envelope

 

This is generated in essentially the same way as the amplitude envelope, however it uses a different maxiEnv object, and a static value of 1 as the envelope depth.

 

    //==========================================================
    //process filter envelope
    envFilterOut = envFilter.adsr (1.0, envFilter.trigger);

 

Oscillators

 

The oscillators are generated and set in the following way:

  1. An output sample of each of the five oscillator objects is generated using the following parameters:
    1. Each oscillator uses a different shape of the maxiOsc class
    2. The frequency/pitch of each oscillator is set to the pitch value generated from the last received MIDI note-on message
    3. The oscillator mix/level parameters multiply the output sample
    4. For the pulse oscillator, the pulse amount is set using the pulse amount parameter
  2. The five samples are mixed into a single sample, and divided by the number of oscillators to prevent gain clipping.

 

This is the point in the audio processing callback function that sound is initially generated.

 

//==========================================================
    //process oscillators
    oscSineOut = oscSine.sinewave (oscSinePitch) * patchParameterData[PARAM_OSC_SINE_LEVEL].voice_val;
    oscTriOut = (oscTri.triangle (oscTriPitch) * patchParameterData[PARAM_OSC_TRI_LEVEL].voice_val);
    oscSawOut = (oscSaw.saw (oscSawPitch) * patchParameterData[PARAM_OSC_SAW_LEVEL].voice_val);
    oscPulseOut = (oscPulse.pulse (oscPulsePitch, patchParameterData[PARAM_OSC_PULSE_AMOUNT].voice_val) * patchParameterData[PARAM_OSC_PULSE_LEVEL].voice_val);
    oscSquareOut = (oscSquare.square (oscSquarePitch) * patchParameterData[PARAM_OSC_SQUARE_LEVEL].voice_val);
    
    //mix oscillators together
    oscMixOut = (oscSineOut + oscTriOut + oscSawOut + oscPulseOut + oscSquareOut) / 5.;

 

Filter

 

The filter is generated, set, and used in the following way:

  1. The LFO->cutoff modulation depth parameter value is used to generate the realtime parameter modulation value that needs to be added to the cutoff parameter value
  2. The filter cutoff value is worked out by adding the realtime cutoff modulation values (generated by both velocity and LFO modulation) to the filter cutoff parameter value
  3. The maxiSVF object cutoff value is set using the cutoff value multiplied by the current output sample of the filter envelope
  4. The LFO->resonance modulation depth parameter value is used to generate the realtime parameter modulation value that needs to be added to the resonance parameter value
  5. The filter resonance value is worked out by adding the realtime resonance modulation values (generated by both velocity and LFO modulation) to the filter resonance parameter value
  6. The maxiSVF object resonance value is set using the resonance value
  7. An output sample of the filter applied to the mixed oscillator sample is generated by calling play() on the maxiSVF object using the following parameters:
    1. The passed in audio sample is the output of the oscillators
    2. The filter LP, BP, HP, and notch mix parameters are used to set the mix of the filter

 

//==========================================================
    //process filter (pass in oscOut, return filterOut)
    
    //================================
    //process LFO->cutoff modulation
    double cutoff_lfo_mod_val = getModulatedParamValue (PARAM_MOD_LFO_FREQ, PARAM_FILTER_FREQ, lfoOut);
    
    //Add the cutoff modulation values to the patch value, making sure the produced value is in range
    double cutoff_val = patchParameterData[PARAM_FILTER_FREQ].voice_val + cutoff_lfo_mod_val + velFreqModVal;
    cutoff_val = boundValue (cutoff_val, patchParameterData[PARAM_FILTER_FREQ].voice_min_val, patchParameterData[PARAM_FILTER_FREQ].voice_max_val);
    
    //set cutoff value, multiplied by filter envelope
    filterSvf.setCutoff (cutoff_val * envFilterOut);
    
    //================================
    //process LFO->reso modulation
    double reso_lfo_mod_val = getModulatedParamValue (PARAM_MOD_LFO_RESO, PARAM_FILTER_RESO, lfoOut);
    
    //Add the reso modulation values to the patch value, making sure the produced value is in range
    double reso_val = patchParameterData[PARAM_FILTER_RESO].voice_val + reso_lfo_mod_val + velResoModVal;
    reso_val = boundValue (reso_val, patchParameterData[PARAM_FILTER_RESO].voice_min_val, patchParameterData[PARAM_FILTER_RESO].voice_max_val);
    
    //set resonance value
    filterSvf.setResonance (reso_val);
    
    //================================
    //Apply the filter
    
    filterOut = filterSvf.play (oscMixOut,
                                patchParameterData[PARAM_FILTER_LP_MIX].voice_val,
                                patchParameterData[PARAM_FILTER_BP_MIX].voice_val,
                                patchParameterData[PARAM_FILTER_HP_MIX].voice_val,
                                patchParameterData[PARAM_FILTER_NOTCH_MIX].voice_val);

 

Distortion

 

The current implementation of applying distortion to the voices is as follows:

  1. An output sample of distorted audio is generated by passing the filtered audio sample into the maxiDistortion::atanDist function with a static shape value of 200.
  2. The distorted audio sample is mixed with the undistorted filtered audio sample, using the distortion amount parameter value to set the gain/mix of each audio sample

 

    //==========================================================
    //process distortion...
    //FIXME: should PARAM_FX_DISTORTION_AMOUNT also change the shape of the distortion?
    distortionOut = distortion.atanDist (filterOut, 200.0);
    
    //process distortion mix
    //FIXME: is this (mixing dry and wet) the best way to apply distortion? Or should I just always be running the main output through the distortion function?
    //FIXME: probably need to reduce the distortionOut value so bringing in distortion doesn't increase the overall volume too much
    effectsMixOut = (distortionOut * patchParameterData[PARAM_FX_DISTORTION_AMOUNT].voice_val) + (filterOut * (1.0 - patchParameterData[PARAM_FX_DISTORTION_AMOUNT].voice_val));

 

However, as per the comments in the above code, I may change this implementation so that I don't mix a 'dry' audio sample with the distorted sample, and instead just use the distortion amount parameter value to control the shape of the distortion.

 

Output

 

Lastly the generated audio sample needs to be applied to the audio sample that goes to the main audio output. This is done by setting the output sample to be the generated audio sample multiplied by the current output sample of the amplitude envelope.

 

    //==========================================================
    //apply amp envelope, making all channels the same (pass in effectsMixOut, return output)
    for (uint8_t i = 0; i < maxiSettings::channels; i++)
    {
        output[i] = effectsMixOut * envAmpOut;
    }

 

Changes from the Initial Synthesis Engine Design

 

As can be seen from above I've managed to implement the majority of my initial design, however there have been a few changes:

  1. I've added coarse tune parameters for each of the oscillators
  2. As a result of the previous point, I've renamed the sub oscillator to just be called the square oscillator
  3. I've added a  'phase spread' parameter to the oscillators, allowing the phase of the oscillators to be different from each other at varying amounts
  4. I've added velocity->cutoff and velocity->resonance modulation
  5. I've removed all aftertouch modulation (for now), as currently the audio glitches fairly badly when attempting to process aftertouch messages. However I'm hoping to put this back in eventually if I have time to figure out what the issue is.

 

What's Next

 

There are a few parameters within my initial synth engine design that I haven't mentioned here, simply because I haven't yet implemented them. These include:

  • Voice mode. I've implemented voice allocation for polyphonic mode, but not yet for mono mode. This feature is handled within the vintageBrain application.
  • All keyboard parameters, which once again will be handled within the vintageBrain application.
  • Vintage amount, which will detune the oscillators by random amounts on each note press.
  • Global volume, which will set the Linux system volume.

 

Also there are a couple of bugs I need to address, the main one being frequent random audio glitches. I'm not sure whether this is related to CPU usage, audio buffer size, thread priority, or something else, but it's the main thing that's holding me back from putting out some audio examples of my synthesis engine.

Over the past couple of weeks I have been dipping in and out of various parts of my project - developing the MIDI I/O interface (as seen in my last couple of blogposts), as well as starting to implement my audio synthesis engine design into a working entity (which I will probably talk about in my next blogpost). However both of these elements have required me to develop a general structure of software on the BeagleBone Black board that allows the keyboard, MIDI interface, and eventually the panel to communicate with a sound engine. Therefore in this blogpost I thought I'd cover the various pieces of software that make up the vintage toy synthesiser, both on the BBB and off, and how they all connect together.

 

To begin with, here's a diagram of the software architecture of the synth:

 

software_architecture_diagram

 

Arduino Software

 

Keyboard

 

As shown in my third blogpost, the keys/sensors on the digitised keyboard mechanism are scanned/read by a dedicated microcontroller - an Arduino Pro Mini. The Arduino software, or sketch, for this Pro Mini simply reads the state of each sensor over and over, and detects any changes in the press or pressure status of any of the keys. Note and aftertouch messages are then sent from the Arduino to the BBB using MIDI messages over serial.
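As a rough illustration of what the sketch sends over serial, here is a minimal sketch in C of packing a note event into MIDI bytes. The function name is made up for illustration and this is not the actual Keyboard code (which is linked below):

```c
#include <stdint.h>

/* Illustrative sketch: build a note-on or note-off MIDI message into out[3].
   Returns the message length. Not the project's actual code. */
int build_note_message (uint8_t note_on, uint8_t channel,
                        uint8_t note_num, uint8_t velocity, uint8_t out[3])
{
    /* status byte: 0x90 for note-on, 0x80 for note-off, plus the channel */
    out[0] = (uint8_t)((note_on ? 0x90 : 0x80) | (channel & 0x0F));
    out[1] = note_num & 0x7F;   /* data bytes must stay below 128 */
    out[2] = velocity & 0x7F;
    return 3;
}
```

On the Arduino these three bytes would then simply be written to the serial port one after another.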

 

As previously stated, I decided to use a dedicated microcontroller for this task, instead of using the BBB, for two main reasons:

  1. Splitting tasks - The main job for the BBB in this project is to run a sound synthesis engine, which is going to be time critical, so I don't want it doing any extra tasks that could slow it down. Also the scanning of the piano's 18 keys needs to be done as fast as possible so that the keys trigger sound as soon as they are pressed, making a dedicated microcontroller preferable for this task.
  2. More Modular - Connecting a microcontroller to the BBB rather than connecting 18 sensors directly requires far fewer connections and wires to the BBB, which makes it easier to remove the key mech or BBB from the piano if desired.

 

You can see the latest version of the Keyboard code here.

 

Panel

 

The software for the panel is essentially going to be the same as that of the keyboard - a sketch running on a second Arduino Pro Mini that scans the state of a number of potentiometers and switches, sending any control changes to the BBB over serial using MIDI CC messages. Once again a dedicated microcontroller is being used for this task for the exact same reasons.

 

I've only just started writing the panel code, and as I haven't yet completed the circuit this may change, so I'll wait until a later blogpost to show the code for this.

 

BeagleBone Black Software

 

The BBB is both the brain and the soul of the vintage toy piano - by that I mean it runs the central process that communicates between all the different parts of the synth, as well as running the synthesis engine that creates the sound of the synthesiser. I decided to split these two main tasks into separate pieces of software which run side-by-side on the Linux OS - vintageBrain and vintageSoundEngine, which communicate with each other using standard MIDI messages but sent via datagram sockets.
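As a rough illustration of this inter-process link, here is a minimal sketch in C of sending a whole MIDI message as a single datagram between two local processes. It assumes an AF_UNIX datagram socket and a made-up socket path; the actual vintageBrain/vintageSoundEngine code may well differ:

```c
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

/* Illustrative sketch: send one complete MIDI message as a single datagram
   to a local Unix-domain socket. Returns 0 on success, -1 on failure. */
int send_midi_datagram (const char *sock_path, const uint8_t msg[], size_t len)
{
    int fd = socket (AF_UNIX, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_un addr;
    memset (&addr, 0, sizeof (addr));
    addr.sun_family = AF_UNIX;
    strncpy (addr.sun_path, sock_path, sizeof (addr.sun_path) - 1);

    /* each sendto() delivers one whole MIDI message as one datagram,
       so message boundaries are preserved automatically */
    ssize_t sent = sendto (fd, msg, len, 0,
                           (struct sockaddr*)&addr, sizeof (addr));
    close (fd);
    return (sent == (ssize_t)len) ? 0 : -1;
}
```

One nice property of datagram sockets here is that the receiver always gets a whole message per read, rather than a byte stream it has to re-parse.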

 

I've given each of these tasks a dedicated application, for much the same reasons that I'm using Arduinos alongside the BBB:

  1. Multithreading - Splitting the tasks into two separate applications means that each runs as its own process, so the two tasks run concurrently without the complexities of writing a single multi-threaded application.
  2. Using multiple programming languages - vintageBrain is written in C, the language in which I have the most experience developing this kind of application, whereas vintageSoundEngine is written in C++ due to using the C++ audio synthesis library Maximilian. The two languages aren't that different, however, and can easily be combined if needed.
  3. Keeping code separate - developing two completely separate applications keeps the code separate, rather than potentially mixing together lots of code that does different things, which could make it harder to maintain. This can also be addressed in object-oriented languages such as C++, where code can be split into dedicated classes/objects, however the C language doesn't have this feature.
  4. More modular - Say in the future I want to swap my digital sound engine for an analogue one; having the brain application separate from the sound application means that all I'd need to do is reroute my messages from the brain to a different destination, rather than having to rewrite a large chunk of the program.

 

vintageBrain

 

As stated above, the vintageBrain application handles the task of allowing all the separate parts of the synthesiser to communicate. It is a single-threaded application that listens for messages coming from the keyboard, MIDI, and panel serial ports, and sends those messages on to the sound engine and possibly back to the MIDI serial port. It also handles all the voice and keyboard settings of the synthesiser, particularly:

  • Voice mode and voice allocation. In polyphonic mode this involves knowing which digital 'voice' within the vintageSoundEngine application each note and aftertouch message needs to be sent to, and in monophonic mode keeping track of all currently held down notes/keys so that the synth can be played with the expected mono behaviour.
  • Keyboard notes. The raw MIDI note messages coming directly from the keyboard will always be the same, however it's the vintageBrain's job to modify these messages based on the octave, transpose, and scale settings, allowing the user to choose the exact range of notes that the keyboard can play.
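A simplified sketch of the kind of note modification described above. The function name and behaviour here are illustrative, not the project's actual code, and it only covers octave and transpose, not scale settings:

```c
#include <stdint.h>

/* Illustrative sketch: apply octave and transpose settings to an incoming
   keyboard note number. Returns the modified note number, or -1 if the
   result falls outside the valid MIDI note range and should be discarded. */
int apply_key_settings (uint8_t raw_note, int octave, int transpose)
{
    /* each octave shifts the note by 12 semitones */
    int note = (int)raw_note + (octave * 12) + transpose;

    if (note < 0 || note > 127)
        return -1;  /* out of MIDI note range */

    return note;
}
```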

 

This application is developed in C, and I use my cross-compiler mentioned in a previous blogpost to compile the application before using a script to copy the binary onto the BBB. You can see the up-to-date code for vintageBrain here.

 

vintageSoundEngine

 

vintageSoundEngine is the more interesting application of the two, as this is where the sound synthesis engine has been developed. It is a multithreaded application in which the main thread is responsible for processing any MIDI messages coming from vintageBrain via the datagram socket, which are used to trigger and control the sound, while the second thread handles audio streaming and processing. As stated previously I am using the Maximilian audio synthesis library to develop my synthesis engine, and a lot of the structure of this application is based on the example Maximilian applications. Within this application I've created a 'vintageVoice' class which handles all the audio processing for a single 'voice' within my synth; making a dedicated class/object for this allows me to easily increase or decrease the number of voices within my synth.

 

This application is developed in C++, and is compiled on the BBB itself due to not being able to get my cross-compiler to compile any Maximilian-based application, as outlined in a previous blogpost. I will talk about this application and the sound engine in a lot more detail in a future blogpost, as well as give examples of the code.

In my last blogpost I talked about the implementation of the electronics needed for adding a MIDI interface to my vintage toy synthesiser. As a suitable follow-on, within this post I thought I'd talk in-depth about MIDI message processing; specifically about five factors of the MIDI message format that make processing MIDI messages more complicated than it appears, at least with regards to allowing full compatibility with all MIDI gear. As I'm not using any MIDI library (which would typically handle the processing of MIDI messages) to develop the software for this project, I needed to write this code from scratch - C code which I have shared at the bottom of this blogpost.

 

Basics to MIDI Processing

 

MIDI is a form of serial communication, meaning bytes are transmitted one at a time rather than in chunks or packets. Therefore, to process MIDI messages coming from a serial port, each byte needs to be read and processed individually.

 

The first byte of any MIDI message, the 'status' byte, will always have a value of 128 or above, with the following 'data' bytes having a value of less than 128. Therefore the first part of correctly processing MIDI messages is to check each byte against this threshold - if the byte is 128 or above you know you have just received the start of a new MIDI message, whereas if the byte is 127 or below you know that this byte is part of the previous MIDI message.

 

However the second part to correctly processing MIDI messages is to know the length of each MIDI message based on the status byte value, so that you know when you've received a full MIDI message which can now be used. Different MIDI messages have different lengths, and as the status byte represents the message type, this will indicate how many data bytes we should expect to receive after the status byte. Once we've received the correct amount of data bytes after a status byte, we know we've received a full MIDI message which can now be used within the software/system.

 

An example of using the above two rules in a MIDI processing algorithm:

  1. From a serial port you read a byte of value 176. Because the value is greater than 127 you know it is a status byte, and therefore the start of a new message, so you store this byte at the start of a message buffer.
  2. You look up what the status byte represents - it is a CC message on MIDI channel 0. As it is a CC, you expect two data bytes next to complete the message, so you flag that so far you have received one out of three bytes.
  3. You read a second byte of value 1. As you've flagged that you've currently received just one byte of a CC message, this must be the first data byte, or the CC number. You store this value in the second index of the message buffer, and flag that so far you have received two out of three bytes.
  4. You read a third byte of value 23. As you've flagged that you've currently received two bytes of a CC message, this must be the second data byte, or the CC value. You store this value in the third index of the message buffer. As you have now received all three bytes of a CC message, you flag that this message has been fully received and can be used to trigger an event within your application.

 

The above rules mean that invalid messages can be caught and discarded rather than the system attempting to use them. For example, if a new status byte is received before all the data bytes of the last message have been received, you know to discard the previous message in the message buffer and start storing and processing a new message. Or if too many data bytes are received after a status byte, the extra data bytes are simply ignored, as you would have already processed the full MIDI message and would be waiting for a status byte before starting to process a new one.
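The two rules above can be sketched as a pair of small helpers. This is a simplified illustration only, not the full processing function shared at the bottom of this post:

```c
#include <stdint.h>

/* Rule 1: any byte of value 128 or above is a status byte */
int is_status_byte (uint8_t byte)
{
    return byte >= 128;
}

/* Rule 2: the status byte determines the expected total message length
   (status byte + data bytes) for channel voice messages. System messages
   (0xF0 and above) are handled separately, so return 0 for those. */
int midi_message_length (uint8_t status)
{
    switch (status & 0xF0)
    {
        case 0x80:  /* note-off */
        case 0x90:  /* note-on */
        case 0xA0:  /* polyphonic aftertouch */
        case 0xB0:  /* CC */
        case 0xE0:  /* pitch bend */
            return 3;
        case 0xC0:  /* program change */
        case 0xD0:  /* channel aftertouch */
            return 2;
        default:
            return 0;
    }
}
```

Masking the status byte with 0xF0 strips the channel nibble, so the same lookup works for all 16 MIDI channels.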

 

It is worth mentioning that there is one type of MIDI message that can't be processed in this way - System Exclusive (or SysEx) messages. However these are easier to process - they always start with status byte value 240, and end with a byte of value 247, with a variable number of data bytes in-between.
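A rough sketch of accumulating a SysEx message in this way. The buffer size and function name are purely illustrative:

```c
#include <stdint.h>
#include <stddef.h>

#define SYSEX_BUF_SIZE 128  /* illustrative maximum message size */

/* Illustrative sketch: accumulate SysEx bytes from 0xF0 until 0xF7.
   Returns the full message length once 0xF7 arrives, otherwise 0. */
size_t process_sysex_byte (uint8_t byte, uint8_t buf[SYSEX_BUF_SIZE],
                           size_t *count, int *in_sysex)
{
    if (byte == 0xF0)
    {
        /* start of a new SysEx message */
        *in_sysex = 1;
        *count = 0;
    }

    if (!*in_sysex || *count >= SYSEX_BUF_SIZE)
        return 0;  /* not in a SysEx message, or buffer full */

    buf[(*count)++] = byte;

    if (byte == 0xF7)
    {
        /* end of the SysEx message - report its total length */
        *in_sysex = 0;
        return *count;
    }

    return 0;
}
```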

 

Advanced MIDI Processing

 

As mentioned at the start, there are some factors to the MIDI message format that mean processing MIDI messages isn't always as simple as just checking for a status byte and a message length. Here are five factors that could be unknown to some MIDI developers, with examples of how these factors can be processed in my example code at the bottom.

 

1. Note-on Note-off Messages

 

This factor is fairly well known to MIDI developers, but can be easily forgotten from time to time. Note-on and note-off messages have their own sets of status byte values, however 'note-off' messages can also be sent using the note-on message format but with a velocity value (2nd data byte) of zero. Many older MIDI controllers and synthesisers use this feature of MIDI in conjunction with the next factor I'll talk about, running status.

 

Therefore, to correctly process note-on messages: if one has a third byte of value 0, the message must actually be converted to or used as a note-off message (but with the same channel and note number).

 

2. Running Status

 

Running Status is a feature of MIDI that allows messages to be sent without the status byte, if the status byte would be the same as that of the previous message. This feature was used in a lot of older MIDI equipment which had limited processing power available, as it allows fewer bytes to be sent. It also explains why some MIDI keyboards send note-off messages as note-on messages, as this allows any number of keys to be pressed and released without needing to send a new status byte on each event. Only Voice Category messages (e.g. note, CC, aftertouch) can be sent using running status.

 

To correctly process running status messages, after receiving a full voice category MIDI message you set the byte counter to a value of 1 instead of back to 0. This puts your processing algorithm into a state where it thinks it has already received a status byte and is waiting for the message's data bytes. If a status byte is received instead at this point, it simply restarts processing a new message.
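As a minimal sketch of this idea (the struct and function names are illustrative, not my actual code):

```c
#include <stdint.h>

/* Illustrative parser state for running status handling */
typedef struct
{
    uint8_t running_status;  /* status byte of the last voice message */
    uint8_t byte_counter;    /* how many bytes of the current message
                                have been received so far */
} MidiParserState;

/* Called once a complete voice category message has been received.
   Setting the byte counter to 1 (not 0) makes the parser behave as if
   the same status byte has just been received again, so any following
   data bytes are treated as a new message of the same type. */
void on_voice_message_complete (MidiParserState *state, uint8_t status)
{
    state->running_status = status;
    state->byte_counter = 1;
}
```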

 

3. 14-Bit CCs

 

The majority of MIDI messages have 7-bit value bytes, due to data bytes not being able to have a value greater than 127. This isn't always enough resolution for control, therefore it is possible to send 14-bit CC values by sending a pair of CCs instead. CC controllers 0-31 can be used to send 14-bit values by following the CC with a second CC with a controller number of 32-63, where the combined 14-bit value is split between the value byte (3rd byte) of each message. For example, if you want to send a mod wheel message with a 14-bit resolution value, split the value into two 7-bit values (known as the coarse and fine values), send the coarse value using a CC message with the controller number (2nd byte) set to 1, and send the fine value using a CC message with the controller number set to 33.

 

Processing 14-bit CCs needs to be handled one way or another, even if you only ever want to use 7-bit CCs. Therefore to correctly process CC messages you must always store the controller number of the previous CC you received, and if the new CC number is between 32 and 63 and the previous CC number is equal to the new CC number minus 32, flag that you have received a 14-bit CC value. It's then up to you whether you want to process this as a 14-bit CC or not. In my project/code I don't want to process 14-bit CCs so I just ignore the second CC (otherwise my application thinks I've received two separate CCs and could do odd things based on this), however if you want to use 14-bit values it would be at this point that you combine the coarse and fine values into a single 14-bit number.
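The matching check and the coarse/fine combination can be sketched like this (function names are illustrative):

```c
#include <stdint.h>

/* Combine the coarse (CC 0-31) and fine (CC 32-63) value bytes of a
   14-bit CC pair into a single 0-16383 value: the coarse value forms
   the top 7 bits and the fine value the bottom 7 bits. */
uint16_t combine_14bit (uint8_t coarse, uint8_t fine)
{
    return (uint16_t)(((coarse & 0x7F) << 7) | (fine & 0x7F));
}

/* The matching check: is the new CC the fine half of the previous CC? */
int is_fine_cc_of (uint8_t prev_cc_num, uint8_t new_cc_num)
{
    return (new_cc_num >= 32 && new_cc_num <= 63 &&
            prev_cc_num == new_cc_num - 32);
}
```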

 

For more info on 14-bit MIDI CCs, including how they are encoded and decoded, see here.

 

4. RPNs and NRPNs

 

Another way that 14-bit resolution values can be sent over MIDI is using Registered Parameter Numbers (RPNs) and Non-Registered Parameter Numbers (NRPNs), which are sent as a succession of 3 or 4 specific CCs. RPNs/NRPNs not only allow greater resolution of values, but also allow for a greater number of controller/parameter numbers. First an RPN/NRPN parameter number is sent using a pair of CCs (101 and 100 for RPNs, 99 and 98 for NRPNs), and then the parameter value is sent using either one or two further CCs (6 - coarse value, and 38 - fine value) depending on whether it is a 7-bit or 14-bit number. These CC numbers are specified for sending RPNs and NRPNs, and ideally should not be used for any other purpose, in order to retain full compatibility with other MIDI hardware and software.

 

To process RPNs/NRPNs, an RPN/NRPN-specific message buffer needs to be used. If CC 101 or 99 is fully received you need to store the value of the CC and wait for the next parameter CC (CC 100 or 98), and once you have that, combine the two received parameter numbers into a single parameter number, much like how 14-bit CC coarse and fine values are combined. Once you get the following CC 6 message you have the full RPN/NRPN message to use as desired, though if it is followed by CC 38 you then need to adjust the value of the RPN/NRPN to include this fine value.
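A rough sketch of the NRPN sequence described above, tracking state across calls. This is heavily simplified (no channel handling, no RPN branch, no CC 38 fine value) and the names are illustrative:

```c
#include <stdint.h>

/* Illustrative NRPN parser state */
typedef struct
{
    uint8_t param_msb;   /* from CC 99 */
    uint8_t param_lsb;   /* from CC 98 */
    int     have_param;  /* both halves of the parameter number received */
} NrpnState;

/* Feed in each complete CC message; returns 1 when a full NRPN (parameter
   number + coarse value) has been received, writing the combined results
   to *param_num and *value. */
int process_nrpn_cc (NrpnState *s, uint8_t cc_num, uint8_t cc_val,
                     uint16_t *param_num, uint16_t *value)
{
    if (cc_num == 99)
    {
        s->param_msb = cc_val;
        s->have_param = 0;  /* wait for the LSB half */
    }
    else if (cc_num == 98)
    {
        s->param_lsb = cc_val;
        s->have_param = 1;
    }
    else if (cc_num == 6 && s->have_param)
    {
        /* combine the two 7-bit halves, just like a 14-bit CC pair */
        *param_num = (uint16_t)((s->param_msb << 7) | s->param_lsb);
        *value = (uint16_t)(cc_val << 7);  /* coarse; CC 38 would add fine */
        return 1;
    }
    return 0;
}
```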

 

RPN/NRPN processing is something I have yet to add to my MIDI processing code. You can read more about RPNs and NRPNs here.

 

5. Interleaved MIDI Messages

 

Most MIDI equipment won't start sending a new MIDI message until it has finished sending the previous one, however this isn't the case for all MIDI gear. You may find that some MIDI devices send Realtime Category messages (e.g. clock, active sensing) interleaved within Voice Category messages, which is valid and needs to be processed correctly.

 

For example, here is a series of MIDI messages with the bytes received in the order you'd expect:

MIDI_message_stream

 

This example shows four CC messages (status bytes in red, data bytes in orange) with several Timing Clock messages (one-byte MIDI messages, in black) in between.

 

However here is an example of the same set of messages but with the bytes sent in a different order:

midi_message_stream

 

All the same messages are sent, but this time the clock messages arrive in the middle of the CC messages. Unfortunately this is a valid stream of MIDI messages, and without proper processing only the final CC message in this stream would have been processed correctly.

 

When I was working on an algorithm to process interleaved MIDI messages correctly, this webpage proved to be very helpful. Even though it is mainly talking about running status, it offers an answer on how to handle interleaved messages:

 

A recommended approach for a receiving device is to maintain its "running status buffer" as so:

  1. Buffer stores the status when a Voice Category Status (ie, 0x80 to 0xEF) is received.
  2. Nothing is done to the buffer when a RealTime Category message is received.

 

As can be seen in my example code below, only when a voice category message is received do I update my running status variable; if a clock message is received I just store it in my message buffer and flag that I've received a clock message. If a clock message arrives in the middle of a voice message and the rest of that message's bytes then follow, the running status variable still equals the interrupted message's status byte, so I simply carry on processing that message.
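The rule can be sketched as a small helper; this is a simplified illustration of the idea, not the real function shown below:

```c
#include <stdint.h>

/* Illustrative sketch: a RealTime Category byte (0xF8-0xFF) is handled
   immediately and leaves the running status and byte counter untouched,
   so an interrupted voice message carries on afterwards. Returns 1 if the
   byte was consumed as a complete one-byte realtime message. */
int handle_realtime_or_status (uint8_t byte, uint8_t *running_status,
                               uint8_t *byte_counter, int *got_clock)
{
    if (byte >= 0xF8)
    {
        /* realtime message: act on it now, don't disturb the parser state */
        if (byte == 0xF8)
            *got_clock = 1;
        return 1;
    }

    if (byte >= 0x80 && byte <= 0xEF)
    {
        /* voice category status byte: update the running status */
        *running_status = byte;
        *byte_counter = 1;
    }
    /* (data bytes would continue the message held in running_status) */
    return 0;
}
```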

 

Example MIDI Processing Code

 

This is the C code I'm currently using to process MIDI messages coming from the MIDI serial port on my BBB. It is a function that is called each time a byte is received, which is passed the byte, and returns a non-zero number indicating the message type once a full message has been received. I'm not suggesting this is the best or definitive way of processing MIDI messages, and it currently only processes the MIDI messages I care about (note, CC, aftertouch, pitch bend, program change, clock, and SysEx), however it is a good working example of how it can be done.

 

#define MIDI_NOTEOFF 0x80
#define MIDI_NOTEON 0x90
#define MIDI_PAT 0xA0
#define MIDI_CC 0xB0
#define MIDI_PROGRAM_CHANGE 0xC0
#define MIDI_CAT 0xD0
#define MIDI_PITCH_BEND 0xE0

#define MIDI_NOTEOFF_MAX 0x8F
#define MIDI_NOTEON_MAX 0x9F
#define MIDI_PAT_MAX 0xAF
#define MIDI_CC_MAX 0xBF
#define MIDI_PROGRAM_CHANGE_MAX 0xCF
#define MIDI_CAT_MAX 0xDF
#define MIDI_PITCH_BEND_MAX 0xEF

#define MIDI_CLOCK 0xF8
#define MIDI_CLOCK_START 0xFA
#define MIDI_CLOCK_CONTINUE 0xFB
#define MIDI_CLOCK_STOP 0xFC

#define MIDI_SYSEX_START 0xF0
#define MIDI_SYSEX_END 0xF7

uint8_t ProcInputByte (uint8_t input_byte, uint8_t message_buffer[], uint8_t *byte_counter, uint8_t *running_status_value, uint8_t *prev_cc_num)
{
    /*
     A recommended approach for a receiving device is to maintain its "running status buffer" as so:
     Buffer is cleared (ie, set to 0) at power up.
     Buffer stores the status when a Voice Category Status (ie, 0x80 to 0xEF) is received.
     Buffer is cleared when a System Common Category Status (ie, 0xF0 to 0xF7) is received - need to implement this fully!?
     Nothing is done to the buffer when a RealTime Category message (ie, 0xF8 to 0xFF, which includes clock messages) is received.
     Any data bytes are ignored when the buffer is 0.
     
     (http://www.blitter.com/~russtopia/MIDI/~jglatt/tech/midispec/run.htm)
     
     */
    
    //static uint8_t running_status_value = 0;
    //static uint8_t prev_cc_num = 127; //don't init this to 0, incase the first CC we get is 32, causing it to be ignored!
    uint8_t result = 0;
    
    //=====================================================================
    //If we've received the start of a new MIDI message (a status byte)...
    
    if (input_byte >= MIDI_NOTEOFF)
    {
        //If it's a Voice Category message
        if (input_byte >= MIDI_NOTEOFF && input_byte <= MIDI_PITCH_BEND_MAX)
        {
            message_buffer[0] = input_byte;
            *byte_counter = 1;
            result = 0;
            
            *running_status_value = message_buffer[0];
        }
        
        //If it's a clock message
        else if (input_byte >= MIDI_CLOCK && input_byte <= MIDI_CLOCK_STOP)
        {
            //Don't do anything with MidiInCount or *running_status_value here,
            //so that running status works correctly.
            
            message_buffer[0] = input_byte;
            result = input_byte;
        }
        
        //If it's the start of a sysex message
        else if (input_byte == MIDI_SYSEX_START)
        {
            message_buffer[0] = input_byte;
            *byte_counter = 1;
        }
        
        //If it's the end of a sysex
        else if (input_byte == MIDI_SYSEX_END)
        {
            message_buffer[*byte_counter] = input_byte;
            *byte_counter = 0;
            
            result = MIDI_SYSEX_START;
        }
        
        // If any other status byte, don't do anything
        
    } //if (input_byte >= MIDI_NOTEOFF)
    
    //=====================================================================
    //If we're received a data byte of a non-sysex MIDI message...
    //FIXME: do I actually need to check *byte_counter here?
    
    else if (input_byte < MIDI_NOTEOFF && message_buffer[0] != MIDI_SYSEX_START && *byte_counter != 0)
    {
        switch (*byte_counter)
        {
            case 1:
            {
                //Process the second byte...
                
                //Check *running_status_value here instead of message_buffer[0], as it could be possible
                //that we are receiving running status messages entwined with clock messages, where
                //message_buffer[0] will actually be equal to MIDI_CLOCK.
                
                //TODO: process NRPNs, correctly (e.g. process 9 byte NRPN, and then as 12 bytes if the following CC is 0x26)
                
                //if it's a channel aftertouch message
                if (*running_status_value >= MIDI_CAT && *running_status_value <= MIDI_CAT_MAX)
                {
                    message_buffer[1] = input_byte;
                    result = MIDI_CAT;
                    
                    //set the correct status value
                    message_buffer[0] = *running_status_value;
                    
                    //wait for next data byte if running status
                    *byte_counter = 1;
                }
                
                //if it's a program change message
                else if (*running_status_value >= MIDI_PROGRAM_CHANGE && *running_status_value <= MIDI_PROGRAM_CHANGE_MAX)
                {
                    message_buffer[1] = input_byte;
                    result = MIDI_PROGRAM_CHANGE;
                    
                    //set the correct status value
                    message_buffer[0] = *running_status_value;
                    
                    //wait for next data byte if running status
                    *byte_counter = 1;
                }
                
                //else it's a 3+ byte MIDI message
                else
                {
                    message_buffer[1] = input_byte;
                    *byte_counter = 2;
                    
                    //set the correct status value
                    message_buffer[0] = *running_status_value;
                }
                
                break;
            }
                
            case 2:
            {
                //Process the third byte...
                
                result = 0;
                
                //TODO: process NRPNs, correctly
                
                //if it's not zero it's a note on
                if (input_byte && (*running_status_value >= MIDI_NOTEON && *running_status_value <= MIDI_NOTEON_MAX))
                {
                    //3rd byte is velocity
                    message_buffer[2] = input_byte;
                    result = MIDI_NOTEON;
                    
                    //set the correct status value
                    message_buffer[0] = *running_status_value;
                }
                
                //if it's a note off
                else if ((*running_status_value >= MIDI_NOTEOFF && *running_status_value <= MIDI_NOTEOFF_MAX) ||
                         (!input_byte && (*running_status_value >= MIDI_NOTEON && *running_status_value <= MIDI_NOTEON_MAX)))
                {
                    //3rd byte should be zero
                    message_buffer[2] = 0;
                    result = MIDI_NOTEOFF;
                    
                    //set the correct status value
                    message_buffer[0] = *running_status_value;
                }
                
                //if it's a CC
                else if (*running_status_value >= MIDI_CC && *running_status_value <= MIDI_CC_MAX)
                {
                    //if we have got a 32-63 CC (0-31 LSB/fine CC),
                    //and the last CC we received was the MSB/coarse CC pair
                    if (message_buffer[1] >= 32 && message_buffer[1] <= 63 && (*prev_cc_num == (message_buffer[1] - 32)))
                    {
                        //Don't do anything. Right now if this is the case we just want to ignore it.
                        //However in the future we may want to process coarse/fine CC pairs to
                        //control parameters at a higher resolution.
                        printf ("[VB] Received CC num %d directly after CC num %d, so ignoring it\r\n", message_buffer[1], *prev_cc_num);
                    }
                    
                    else
                    {
                        message_buffer[2] = input_byte;
                        result = MIDI_CC;
                        
                        //set the correct status value
                        message_buffer[0] = *running_status_value;
                        
                    } //else
                    
                    //store this CC num as the previously received CC
                    *prev_cc_num = message_buffer[1];
                }
                
                //if it's a poly aftertouch message
                else if (*running_status_value >= MIDI_PAT && *running_status_value <= MIDI_PAT_MAX)
                {
                    message_buffer[2] = input_byte;
                    result = MIDI_PAT;
                    
                    //set the correct status value
                    message_buffer[0] = *running_status_value;
                }
                
                //if it's a pitch bend message
                else if (*running_status_value >= MIDI_PITCH_BEND && *running_status_value <= MIDI_PITCH_BEND_MAX)
                {
                    message_buffer[2] = input_byte;
                    result = MIDI_PITCH_BEND;
                    
                    //set the correct status value
                    message_buffer[0] = *running_status_value;
                }
                
                // wait for next data byte (if running status)
                *byte_counter = 1;
                
                break;
            }
                
            default:
            {
                break;
            }
                
        } //switch (*byte_counter)
        
    } //else if (input_byte < MIDI_NOTEOFF && message_buffer[0] != MIDI_SYSEX_START && *byte_counter != 0)
    
    //if we're currently receiving a sysex message
    else if (message_buffer[0] == MIDI_SYSEX_START)
    {
        //add data to the sysex buffer
        message_buffer[*byte_counter] = input_byte;
        (*byte_counter)++; //parentheses needed - *byte_counter++ would increment the pointer, not the count
    }
    
    return result;
}

MIDI is an essential part of any serious piece of electronic music equipment. In a nutshell, MIDI is "a technical standard that describes a protocol, digital interface and connectors and allows a wide variety of electronic musical instruments, computers and other related devices to connect and communicate with one another". For example, it allows a consumer musical keyboard from one company to trigger notes or control audio within a piece of software developed by a completely different company, pretty much out-of-the-box.

 

You've probably seen from one of my previous blogposts that I use the MIDI messaging format to send note messages from the keyboard mechanism to the BeagleBone Black. However, to make my toy piano synth fully MIDI-compatible I need to add MIDI input and output connections to the synth. In this blogpost I'm going to talk about the hardware and electronics I've used to allow MIDI messages to be sent to and from the BeagleBone Black, integrating a fully functional MIDI interface into my vintage toy synthesiser.

 

MIDI Hardware Transport Options

 

The original and most common hardware transport option for MIDI I/O is a pair of five-pin DIN connectors, and I have planned on using this option from the outset of the project for the following reasons:

  1. They are the most common MIDI connectors found in commercial synthesisers
  2. They can quite simply be connected to one of the UARTs on the BeagleBone Black
  3. I've used MIDI-DIN connectors in past projects

 

[Images: midi-din-connector, midi-din-cables]

A standard MIDI-DIN connector (left) and a pair of standard MIDI-DIN cables (right)

 

Throughout the project I have considered alternative or additional connections for sending and receiving MIDI messages, however for this project there weren't enough good reasons to use them. Other hardware transport options I had considered include the following:

  • USB MIDI - MIDI can be sent over USB, and in regards to computer music it is becoming the most common hardware interface for MIDI. Most modern MIDI controllers include USB-MIDI, sometimes instead of MIDI-DIN connectors, as it allows MIDI hardware to be plugged straight into computers, however it is still not that common on commercial synthesisers. I believe that the mini USB port on the BBB can be used as a USB client port, making it a USB slave device (such as a commercial MIDI controller), however I'm not 100% sure of this as I haven't seen any examples, and I haven't had any experience with programming USB comms in Linux.
  • Ethernet MIDI - MIDI can be transmitted over a network protocol such as Ethernet, and as the BBB includes a network port this would be an option for this project. However MIDI over Ethernet isn't as supported as the other options, plus I've not had as much experience with network comms compared to standard serial comms.
  • Bluetooth MIDI - MIDI over Bluetooth is starting to become quite common on modern MIDI controllers that are designed to be portable and need to be wireless. However, even though it's fairly small, my vintage toy piano is a bit too bulky to be considered a 'portable' instrument, plus I'd need to attach a Bluetooth transmitter/receiver to the BBB for this option, which is a technology I've had no experience with, so it didn't make sense as a MIDI transport option for this project.

 

The Circuit

 

The MIDI interface within my synth is made up of two separate circuits - a MIDI-in circuit and a MIDI-out circuit - each of which connects a MIDI-DIN connector to a TX or RX serial pin on the BBB. There are plenty of examples of these circuits and how they are connected to boards such as the BBB, and below I'm going to highlight the specific guides I used, as well as any of my own additions or changes.

 

MIDI-In

 

The guide I used for building the MIDI-in circuit was the Libre Music Production Arduino and MIDI in guide, which is fully transferable to the BBB.

 

The main components needed for the circuit are:

  • 1 x female MIDI DIN connector
  • 3 x 220 ohm resistors
  • 1 x 1N4148 diode
  • 1 x 10k ohm resistor
  • 1 x 6N138 optocoupler

 

As explained in the guide, an optocoupler is very important here as it electrically isolates the two circuits (the BBB and the connected MIDI gear) from each other, which prevents ground loops and protects equipment from voltage spikes. Here is a breadboard diagram of the circuit from the guide that I used:

 

[Image: MIDI-in-circuit]

 

A couple of corrections and things to mention about this diagram:

  1. The anode of the diode should actually be connected to pin 3 of the optocoupler, not pin 4
  2. The MIDI connector is shown as viewed from the back
  3. Obviously in my project this circuit is being attached to a BBB instead of an Arduino. I mention how this circuit is specifically connected to the BBB below.

 

MIDI-Out

 

The guide I used for building the MIDI-out circuit was the official Arduino MIDI guide, which again is fully transferable to the BBB.

 

The MIDI-out circuit is a lot simpler than the MIDI-in circuit, and it only requires the following components:

  • 1 x female MIDI DIN connector
  • 2 x 220 ohm resistors

 

Here is a diagram of the circuit from the guide that I used:

 

[Image: midi-out-diagram]

 

A couple of things to mention about this diagram:

  1. The MIDI connector is shown as viewed from the front
  2. Again, in my project this circuit is being attached to a BBB instead of an Arduino.

 

The Combined Circuit

 

Here is a photo of the above two circuits combined onto a single piece of strip board for my project:

 

MIDI-in-out-circuit

 

In the above photo, the wires at the top are going to the two MIDI-DIN connectors, and the wires on the left are going to the BBB.

Combining the two circuits into one has allowed me to share the power and ground lines between them, meaning I only need a single pair of wires from the BBB for power and ground.

You'll also notice that I've used a set of screw terminals for connecting the MIDI-DIN connectors to the circuit. I've done this so that, once everything is attached to the toy piano enclosure, I can remove this particular circuit from the piano if needed without having to remove the connectors too, or vice-versa.

 

Connecting to the BeagleBone Black

 

As mentioned above, MIDI is a serial communication protocol, meaning that the above circuit can simply be attached to the BBB via a pair of UART pins. I'll be using UART2 for MIDI, so I've attached the circuit to the BBB using the following pins:

  1. Orange wire to BBB P9_21 pin (UART2 TXD), for sending MIDI messages from the BBB to an external device
  2. Green wire to BBB P9_22 pin (UART2 RXD), for receiving MIDI messages from an external device to the BBB
  3. Black wire to BBB P9_01 pin (a DGND pin), for allowing the MIDI circuit to be powered by the BBB
  4. Red wire to BBB P9_03 pin (a VDD_3V3 pin), for powering the MIDI circuit using the BBB

 

As with connecting the keyboard mechanism, this circuit needs to be powered by a 3.3V pin instead of 5V, as the BBB serial ports run at 3.3V.

 

[Image: BBB-serial-connection]

Connections to UART 2 on the BBB

 

Setting the Required Serial Baud Rate in Linux

 

While this blogpost covers the electronics of the MIDI I/O connection within my toy piano synth, I thought I'd briefly talk about how I got MIDI messages being sent to and from software running on the BBB, as this took me a little while due to the serial baud rate needed for MIDI.

 

MIDI communicates at a serial baud rate of 31250, which is not a standard or common baud rate. The code I've shown in a previous blogpost for setting up serial comms in Linux wouldn't work here, as 31250 is not a recognised rate when using the most common method of setting up serial comms (or at least what I consider to be the common method!). After a lot of Googling I found this thread, in which a very helpful man called Peter Hurley provided some example code on how to use the BOTHER method of setting a custom baud rate. Using this example code I have now replaced my serial setup code with the following in order to get MIDI comms working:

 

int SetupSerialPort (const char path[], int speed, bool should_be_blocking)
{
    int fd;
    struct termios2 tio;
    
    // open device for read/write
    fd = open (path, O_RDWR);
    
    //if can't open file
    if (fd < 0)
    {
        //show error and exit
        perror (path);
        return (-1);
    }
    
    if (ioctl (fd, TCGETS2, &tio) < 0)
        perror("TCGETS2 ioctl");
    
    tio.c_cflag &= ~CBAUD;
    tio.c_cflag |= BOTHER;
    tio.c_ispeed = speed;
    tio.c_ospeed = speed;
    
    if (ioctl( fd, TCSETS2, &tio) < 0)
        perror("TCSETS2 ioctl");
    
    printf("[VB] %s speed set to %d baud\r\n", path, speed);

    return fd;
}

I’ve spent the past 10 days in sunny Anaheim, California at The NAMM Show 2016 exhibiting with Modal Electronics, so I haven’t had much time to work on my project. However it’s given me a chance to think a lot about my synthesis engine design, so I thought I’d use this week's (well, late last week's) update to give a brief overview of audio synthesis types and their essential components, as well as to outline my current design ideas and how they have changed over the project so far.

 

Before I continue, it is probably worth stating that I am by no means either a synthesiser enthusiast or an expert at developing audio synthesis engines, and part of me doing this project is to advance my knowledge and experience in both of these areas. I have a good intermediate understanding of audio synthesis, but if you want to know about synthesis types and components at an advanced level I'd rather leave that to other sources than attempt to explain everything in detail here. Also I am going to talk about audio synthesis from a very general point of view, and not discuss the differences between analogue and digital synthesis; however it is worth mentioning that as I'll be implementing the audio synthesis engine on the BeagleBone Black, it is going to be completely digital.

 

Synthesis Types

 

There are many different types of synthesis, which all create their own distinctive sounds in particular ways.

 

The main types of synthesis are as follows:

  • Additive synthesis - Adding together simple waveforms (usually sine waves), called partials, to create more complex waveforms. Not very common in modern synthesisers due to the complexity of designing a user interface that allows each partial to be controlled in an intuitive way. A good example of an additive synthesis software synthesiser is FL Studio's Harmor.
  • Subtractive synthesis - Filtering harmonically-rich waveforms. Probably the most common type of synthesis within modern commercial synthesisers, and is relatively simple to implement and process. An example of a classic subtractive synthesis synthesiser is the Moog Minimoog.
  • Frequency Modulation (FM) synthesis - Modulating the frequency of waveforms with other waveforms in the audio range. At its simplest level it creates a very distinctive sound, however it is very complex to implement a versatile FM engine. A good example of a classic FM synthesiser is the Yamaha DX7, the first commercially successful digital synthesiser.
  • Wavetable synthesis - Similar to subtractive synthesis, but instead of using equations to generate the sound waves, it stores small samples of a single cycle of a waveform and plays back the stored sample over and over again. It was very popular in early digital synthesisers due to its advantages of taking up less memory and processing power. An example of this is the PPG Wave 2.
  • Granular synthesis - Manipulating very short samples of sound called grains, played back in unconventional ways. Great for creating soundscapes, textures, and effects. A good example of granular synthesis is the Collidoscope.
  • Physical-Modelling synthesis - The process of using equations and algorithms to simulate real instruments or physical sources of sound. This type generally uses a very different set of parameters and controls compared to the other synthesis types. My personal favourite physical-modelling software synth is Apple's Sculpture.

 

Essential Building Blocks of Audio Synthesis

 

Below is a list of the four important building blocks needed for any decent sound synthesis engine of any synthesis type:

 

  • Oscillators - used to create the raw sound source/waves/tones, and can come in many different wave shapes (e.g. sine, saw, square, noise)
  • Filters - used to shape the timbre of the sound created by oscillators, with the most popular filter types being low-pass, high-pass, and band-pass. Important for subtractive synthesis.
  • Envelopes - used to modulate parameters of the sound (most commonly the volume) in the time domain. Most envelopes are ADSR envelopes, which provide control over 4 distinct time-related parts of a sound - attack, decay, sustain, and release.
  • Low Frequency Oscillators (LFOs) - used as a source for creating rhythmic modulation, such as tremolo (by modulating the sound's volume) or vibrato (by modulating the sound's pitch).

 

On top of that, synthesisers often come with other components and controls such as effects (e.g. delay, reverb, chorus, flanger), sequencers, and other modulation sources (e.g. key velocity, aftertouch, wheels/pedals/joysticks), but these are not essential to the core of a synthesis engine.

 

Number of Voices

 

Other than the synthesis type and essential components, another main thing to consider in audio synthesis design is how many notes the synthesis engine can play simultaneously. There are most commonly two possible 'modes' here - polyphonic (multiple notes) or monophonic (a single note).

 

In a polyphonic system you then need to consider the number of voices, also called the polyphony value. A greater polyphony value allows bigger chords and textures to be played, however it increases the processing power needed. There are also different levels of polyphony - in a true polyphonic synthesiser each voice/note has its own oscillators, filter, envelopes, and LFOs; however some polyphonic systems are simplified so that, while multiple notes can be played simultaneously, every voice shares a single element such as a filter and/or LFO, which is more commonly known as a paraphonic synth. Polyphonic synthesisers usually have an option to put them into monophonic mode if desired.

 

[Images: monophonic, polyphonic, and paraphonic synthesis engine examples]

Example flow charts of monophonic, polyphonic, and paraphonic audio synthesis engines.

In these examples each voice has two oscillators.

 

Development of my Synthesis Engine Design

 

So far throughout this project my design ideas for the synthesis engine have changed quite a bit, and I'm expecting them to keep changing right up until the end. There are a number of factors that have caused this, which will ultimately determine my final synthesis engine design:

  1.   The capabilities and power of the target hardware, the BeagleBone Black
  2.   The capabilities of the synthesis library I’m using, most probably Maximilian
  3.   The space on the toy piano enclosure for synth parameter controls
  4.   The number of inputs on an Arduino Pro Mini that I can use for reading these controls
  5.   My knowledge and experience in implementing synthesis engines
  6.   Time!

 

Here are the main design changes that have happened so far, with reasons why:

  • Originally I was planning on having a very complex synthesis engine with many modulation destinations, digital effects, and some quite advanced parameters, totalling about 60 parameters/controls. However after experimenting with Maximilian on the BBB I found that I'm not going to have enough processing power to get it all working without audio glitches. Also, I'm not sure I'm going to have enough time or space to wire and attach 60 controls to the toy piano enclosure.
  • I wanted to have 12 or 16 note polyphony, but testing Maximilian on the BBB seems to show that in order to have such high polyphony I would need to heavily simplify other parts of the synthesis engine, such as making it paraphonic instead of polyphonic.
  • Originally I was just going to have a basic filter which could either be low-pass, band-pass, or high-pass. However after experimenting with Maximilian I found that its SVF (state-variable filter) offers more control over the sound and uses less processing power.

 

My Current Synthesis Engine Design

 

I have decided to base my synthesis engine on subtractive synthesis for a couple of reasons:

  1.   It can create a relatively varied set of sounds without being too complex to implement or use
  2.   It’s the synthesis type I have had most experience with, from the synths I help develop at Modal Electronics

 

Minimum set of Parameters

 

Here is the minimum set of components and parameters that I want to implement in my vintage toy synthesiser:

 

Parameter Category | Parameter | Value Range | Panel Control | Description
Oscillators | Sine wave level | 0-127 | Potentiometer | Sets the level of a sine oscillator
 | Triangle wave level | 0-127 | Potentiometer | Sets the level of a triangle oscillator
 | Sawtooth wave level | 0-127 | Potentiometer | Sets the level of a sawtooth oscillator
 | Pulse wave level | 0-127 | Potentiometer | Sets the level of a pulse oscillator
 | Pulse amount | 0-127 | Potentiometer, centre-detented | Sets the pulse wave shape
 | Sub level | 0-127 | Potentiometer | Sets the level of a square wave sub oscillator 12 semitones down
State-Variable Filter | Frequency cutoff | 0-127 | Potentiometer | Sets the cutoff/centre frequency of the filter
 | Resonance | 0-127 | Potentiometer | Sets the resonance of the filter
 | Low-pass mix | 0-127 | Potentiometer | Sets the level of the low-pass mix
 | High-pass mix | 0-127 | Potentiometer | Sets the level of the high-pass mix
 | Band-pass mix | 0-127 | Potentiometer | Sets the level of the band-pass mix
 | Notch mix | 0-127 | Potentiometer | Sets the level of the notch mix
Amplitude Envelope | Attack | 0-127 | Potentiometer | Sets the time it takes for the amplitude to reach its max value when a note is triggered
 | Decay | 0-127 | Potentiometer | Sets the time it takes for the amplitude to go from the max value to the sustain value
 | Sustain | 0-127 | Potentiometer | Sets the amplitude level that the sound stays at until the note/key is released
 | Release | 0-127 | Potentiometer | Sets the time it takes for the amplitude to go from the sustain level to 0 after a note/key is released
 | Amount | 0-127 | Potentiometer | Sets the amount of envelope modulation on the amplitude. Can also act as a volume/gain control.
Filter Envelope | Attack | 0-127 | Potentiometer | Sets the time it takes for the filter cutoff to reach its max value when a note is triggered
 | Decay | 0-127 | Potentiometer | Sets the time it takes for the filter cutoff to go from the max value to the sustain value
 | Sustain | 0-127 | Potentiometer | Sets the filter cutoff value that the sound stays at until the note/key is released
 | Release | 0-127 | Potentiometer | Sets the time it takes for the filter cutoff to go from the sustain level to 0 after a note/key is released
LFO | Wave shape | Sine, triangle, sawtooth, square, random | Potentiometer, detented | Sets the shape of the LFO
 | Rate | 0-127 | Potentiometer | Sets how slow/fast the LFO goes
 | Depth | -64 to +63 | Potentiometer, centre-detented | Sets the depth of the LFO (possibly not needed if each LFO mod destination has its own depth control)
Keys/Voices | Octave | -2 to +2 | Potentiometer, centre-detented | Sets the octave that the keyboard keys can play
 | Scale | Chromatic, Major, Minor, others | Potentiometer, detented | Sets the musical scale that the keyboard keys can play
 | Voice mode | Poly, mono | Toggle switch | Sets whether the synth is in polyphonic or monophonic mode
 | Transpose | -6 to +6 | Potentiometer, centre-detented | Sets the semitone offset of the keyboard
Modulation Depths | Velocity to amplitude | -64 to +63 | Potentiometer, centre-detented | Sets the amount of applied keyboard velocity modulation for note amplitude
 | LFO to amplitude | -64 to +63 | Potentiometer, centre-detented | Sets the amount of LFO modulation for note amplitude
 | LFO to cutoff | -64 to +63 | Potentiometer, centre-detented | Sets the amount of LFO modulation for filter cutoff
 | LFO to resonance | -64 to +63 | Potentiometer, centre-detented | Sets the amount of LFO modulation for filter resonance
 | Aftertouch to cutoff | -64 to +63 | Potentiometer, centre-detented | Sets the amount of applied keyboard aftertouch modulation for filter cutoff
 | Aftertouch to LFO depth | -64 to +63 | Potentiometer, centre-detented | Sets the amount of applied keyboard aftertouch modulation for LFO depth
Effects | Distortion amount | 0-127 | Potentiometer | Sets the amount of distortion applied to the overall sound
Global | Vintage amount | 0-127 | Potentiometer | Unique to this synthesiser. Sets the amount of random voice/oscillator detuning, random filter cutoff offsets, and random crackles/pops applied to the overall sound, essentially emulating an old/broken analogue synthesiser.
 | Volume | 0-127 | Potentiometer | Sets the system volume of the BBB. Not a 'patch' parameter like the rest.

 

I want it to be a true polyphonic synthesiser, so that I can use the poly aftertouch I have implemented into the keyboard as well as implement filter envelopes, both of which need to be per note/voice, with a polyphony value of at least 4 voices. Therefore there will be one instance of the oscillators, filter, envelopes, and LFO for each voice; the rest of the components/parameters will be global to the overall/mixed sound.

 

[Image: my synth engine flow chart]

A flow chart of my synthesis engine design

 

Extended Parameters

 

If I'm able to, based on the six factors above, there are a number of other parameters I would like to build into my synthesiser:

 

Parameter Category | Parameter | Value Range | Panel Control | Description
Oscillators | Oscillator coarse tune | -24 to +24 | Potentiometer, centre-detented | Sets the coarse tune (in semitones) for each oscillator. There would be one instance of this parameter for each wave type.
Oscillators | Oscillator fine tune | -64 to +63 | Potentiometer, centre-detented | Sets the fine tune (in cents) for each oscillator. There would be one instance of this parameter for each wave type.
Oscillators | Noise level | 0-127 | Potentiometer | Sets the level of the noise sound source.
Oscillators | Double mode | off/on | Toggle switch | If an oscillator has a 'note' value not set to 0, a second instance of that oscillator is created which plays at the root note, allowing two-note chords to be created.
Oscillators | Osc phase offset | 0-127 | Potentiometer | Sets the degree of phase offset for the set of oscillators.
Filter Envelope | Amount/Depth | -64 to +63 | Potentiometer, centre-detented | Sets the depth of the envelope modulation on the filter cutoff.
LFO | Single shot mode | off/on | Toggle switch | Sets whether the LFO cycles only once, acting more like an envelope
LFO | Delay | 0-127 | Potentiometer | Sets how long it takes for the LFO to start once a note is triggered
LFO | LFO2 | - | - | A second LFO with all the same controls as the existing LFO1
Keys/Voices | Glide | 0-127 | Potentiometer | Sets the amount of glissando between notes
Keys/Voices | Voice size | 1-4 | Potentiometer, detented | Sets how many voices are played with each note
Keys/Voices | Voice spread | 0-127 | Potentiometer | If the voice size is above 1, sets how detuned each voice in the stack is
Modulation Depths | Velocity to cutoff, resonance, LFO depth | -64 to +63 | Potentiometer, centre-detented | Sets the amount of keyboard velocity modulation applied to each destination. There would be one parameter for each destination.
Modulation Depths | LFO to filter mix | -64 to +63 | Potentiometer, centre-detented | Sets the amount of LFO modulation applied to each of the filter mix parameters. There would be one parameter for each of the filter mix parameters.
Modulation Depths | Aftertouch to resonance, LFO rate | -64 to +63 | Potentiometer, centre-detented | Sets the amount of keyboard aftertouch modulation applied to each destination. There would be one parameter for each destination.
Modulation Depths | LFO2 mod depths | -64 to +63 | Potentiometer, centre-detented | Various modulation destination depths for LFO2
Modulation Depths | Filter envelope depths | -64 to +63 | Potentiometer, centre-detented | Allows the filter envelope to also modulate other parameters, with individual depths for each destination. If implementing this, the proposed filter envelope depth control could simply become an 'envelope to filter depth' control here, with the envelope renamed 'mod envelope' rather than being specific to the filter.
Effects | Chorus level | 0-127 | Potentiometer | Sets the amount of chorus applied to the overall sound.
Effects | Reverb level | 0-127 | Potentiometer | Sets the amount of reverb applied to the overall sound.
Effects | Delay level | 0-127 | Potentiometer | Sets the amount of delay applied to the overall sound.
Effects | Delay time | 0-127 | Potentiometer | Sets the delay time of the delay effect
Effects | Distortion type | pre-filter, post-filter | Toggle switch | Sets where in the signal chain the distortion is applied.
Global | Vintage pitch amount | 0-127 | Potentiometer | Unique to this synthesiser. Sets the amount of random voice detuning, essentially emulating an old/broken analogue synthesiser. This would replace the single 'vintage amount' parameter.
Global | Vintage cutoff amount | 0-127 | Potentiometer | Unique to this synthesiser. Sets the amount of random filter cutoff offsets, essentially emulating an old/broken analogue synthesiser. This would replace the single 'vintage amount' parameter.
Global | Vintage crackle amount | 0-127 | Potentiometer | Unique to this synthesiser. Sets the amount of random crackles/pops applied to the overall sound, essentially emulating an old/broken analogue synthesiser.
Global | Patch save and recall | - | Push buttons | A set of buttons used to save and load 'patches' - sets of parameter values

This week, for my design challenge project, I've been attempting to get started on the sound synthesis engine for my synth, as well as setting up an audio output on my BeagleBone Black. There have been quite a few frustrating evenings along the way, and while I now finally have synthesised sound coming out of my BBB, it looks like I may need to rethink which library I'm going to use to build my synthesis engine, or redesign and simplify my original idea for it.

 

USB audio on the BeagleBone Black

 

Initially I was planning on building my own DAC for the audio output on my BBB, however after looking into the audio IO options on the board I decided that taking advantage of the USB audio support would be the best option.

 

I tried out a couple of USB audio adapters with my BBB, and finally settled on an EC Technology adapter. At first I bought this budget adapter, however the sound quality was really terrible, whereas the EC Technology one sounds just as good as (if not a little better than) the built-in soundcard in my MacBook.

 

usb audio adapters

A quick visual review of USB audio adapters I tried out

 

To set up USB audio as the default sound device on my BBB I used this guide. ALSA was already installed on my BBB, and contrary to the guide all I had to do was disable HDMI to make USB audio the system default. This was done by adding the following line to my /boot/uEnv.txt file:

optargs=capemgr.disable_partno=BB-BONELT-HDMI,BB-BONELT-HDMIN

 

After doing that and rebooting my BBB, any audio I played was coming out of the USB audio adapter. However, if you're attempting to do the same, I recommend reading the whole guide in case you need the extra steps.
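If disabling HDMI alone doesn't make the USB adapter the default ALSA device, the guide's remaining steps point ALSA at the card explicitly. As a sketch (the card index here is an assumption - check yours with `aplay -l`), an `/etc/asound.conf` along these lines does the job:

```
# /etc/asound.conf - make ALSA use card 1 (the USB adapter here) by default
# (verify the card index first with: aplay -l)
defaults.pcm.card 1
defaults.ctl.card 1
```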

The Synthesis Library

 

Compiling Maximilian

 

As stated in the proposal for my project, I've been planning on using the C++ audio synthesis/DSP library Maximilian to develop the synthesis engine. While I have never used it before, I was drawn to this particular library for a few reasons:

  1. It is supported on Linux, which is obviously very important for this design challenge
  2. It uses C++, which is my preferred language
  3. It looks easy to use and comes with a lot of code examples
  4. It seems to still be supported and regularly updated

 

As stated in my last blogpost, so far I've been using a cross-compiler on OS X to compile my BBB programs, however when it came to cross-compiling the Maximilian example code everything became a lot more difficult. As Maximilian (and, I'm guessing, most Linux-based audio libraries and applications) requires either the OSS or ALSA library in order to compile, one of these libraries needed to be installed and configured in such a way that my cross-compiler could use it. Despite many attempts at doing this, getting fairly close using the advanced sysroot installation guide for my cross-compiler, I wasn't able to successfully compile the Maximilian example project.

 

So I went onto plan B - compiling the code on the BBB itself. This process involves editing the code on my MacBook, followed by scp'ing it onto the BBB, and then using the standard GCC compiler on the board. My first attempt at this didn't work, and came up with the following compilation errors for a number of variables:

maximilian.h:412:18: error: ISO C++ forbids initialization of member 'x' [-fpermissive]

maximilian.h:412:18: error: making 'x' static [-fpermissive]

maximilian.h:412:18: error: ISO C++ forbids in-class initialization of non-const static member 'x'

I'm not completely sure why the compiler thinks these variables are static - everything compiles fine on both OS X and Intel Linux - however these messages are what an older, pre-C++11 GCC gives for in-class initialisation of non-static members, so the BBB's compiler is probably defaulting to an older language standard. I fixed this by simply removing the initialisation of these variables, in the hope that doing so won't break the library in any way. After doing this the Maximilian program compiled, and I was able to run it and get example sound coming out of the USB audio.

 

Using Maximilian

 

The first few Maximilian examples I tried worked fine on the BBB, however it wasn't until I tried the polysynth example that I discovered something I hadn't yet considered - Maximilian may not be optimised for single-board computers such as the BBB, or at least the more complex examples aren't. When running the unedited polysynth example with no GCC optimisation the audio was very glitchy and the program was using 99% of the CPU - a program that runs without issue on my MacBook. Even when adding the -O3 optimisation flag (full optimisation for performance) to the compile command the audio would still glitch, and after finding this guide on getting and setting the BBB CPU speed I found my BBB was already running at its maximum frequency of 1000MHz.

 

This led me to test the limits of what I could do with Maximilian before audio artefacts and excessive CPU usage appear. The polysynth example is a good program to test with, as it includes a fairly large number of synth components (18 oscillators, 6 filters, and 6 envelopes) providing 6-note polyphony (each note containing 2 oscillators and an LFO), and it is a simple example of the kind of synthesis engine I was hoping to develop for this project. This is what my tests found, all with full performance optimisation:

  1. I can get up to 4 note polyphony of the polysynth example before getting glitches
  2. Removing all filters drops the CPU usage down to 30-40%, and allows me to increase polyphony up to at least 16 without getting glitches

 

Going forward, this gives me the following options:

  1. Use Maximilian but with a redesigned, simplified sound engine (e.g. a low polyphony number, a single global filter instead of an individual filter for each note).
  2. Attempt to optimise Maximilian for the BBB, somehow...
  3. Use another library or framework for creating my synthesis engine

 

At this point in time I still want to experiment a bit more with Maximilian to see if I can get it to work without having to simplify my synthesis engine design too much, however there is another similar-looking library, STK, that I may test to see if it offers better performance. Either way, it is good to finally hear synthesised sound coming out of my BeagleBone Black!

After completing the majority of the key mechanism for my vintage toy synthesiser, which I covered in my last blog post, I thought it was about time I cracked open the BeagleBone Black board and attempted to connect the key mech to it. Setting up the BBB for my preferred development language and environment, as well as getting the Arduino-to-BBB comms working, was a bit more complex than I thought it would be; nevertheless, I now have the BBB receiving key interaction data from the keyboard.

 

This blog post covers the following main things:

  1. Setting up the BBB to be tethered to a computer
  2. Installing a BBB-compatible ARM cross-compiler on OS X
  3. A method for developing C/C++ based software for the BBB, from writing code to testing compiled binaries
  4. Enabling all UART/serial ports on the BBB, and writing software that reads from a connected serial device

 

My Preferred Development Languages and Environment

 

Professionally, and as a hobbyist, I mainly develop software using the C and C++ languages, which is what I plan to use when developing the BBB software for the vintage toy synthesiser.

 

When it comes to developing software for Linux-based single-board computers such as the BBB, my preferred way of doing it is using a cross-compiler that allows me to develop and compile the software on my main computer running OS X, and then using something such as Secure Copy (scp) to transfer the binaries onto the target hardware. The majority of this blog post talks about the tools and methods used to get this environment set up.

 

Tethering the BBB to a Computer

 

As per the official BBB Getting Started guide, the most common way to use and develop on the BBB is to connect it to a computer and use the network-over-USB access. This is done using the following simple steps:

  1. Connect the BBB to your computer via USB
  2. Wait for a new mass storage device to appear
  3. On the mass storage device, open START.htm
  4. Follow the provided instructions to install the needed drivers

 

Once that has been done, possibly followed by a computer restart, you can access your BBB through the 192.168.7.2 IP address. My preferred access method is to use Secure Shell (ssh) through a command line interface (CLI), using the command:

 

ssh -l root 192.168.7.2

 

BBB-Compatible ARM Cross-Compiler for OS X

 

It is entirely possible to develop software for the BBB directly on the board by accessing it over a network. However there are a few pitfalls here:

  1. You're stuck using CLI programs, which are not everyone's preferred method of interacting with a computer, especially when it comes to text editing
  2. When compiling your software you're limited to the power of the BBB, which is probably much lower than that of your personal computer, making the process a lot slower
  3. You'll be developing using Linux, which may not be your preferred OS

 

The way around this is to develop the software on your personal computer, where you have a GUI and greater CPU/RAM specs, and then transfer it over to the BBB afterwards. The main obstacle in doing that though is the fact that the processor type and OS of your personal computer (most probably Windows or OS X running on an Intel or AMD processor) is probably different from that of the BBB (Linux running on an ARM processor), so you need to use a compiler/toolchain that will run on one type of system (the host) but build software to be run on another type of system (the target). This is known as a cross-compiler.

 

The toolchain needed for cross-compiling for the BBB is arm-linux-gnueabihf, which, compared to the more commonly used arm-linux-gnueabi, has hardware FPU (Floating Point Unit) support, needed when compiling for the BBB target. The arm-linux-gnueabihf cross-compiler toolchain is officially available for both Linux and Windows as a GCC-based compiler released by Linaro. For OS X there is an unofficial (but working) version of the Linaro toolchain available from here - this is the cross-compiler that I have installed and started using.


My Preferred Development Method

 

Now that I have my cross-compiler installed, I can start developing software for the BBB using my preferred environment and tools.

 

This is my preferred development method, from writing code to testing the compiled binaries:

  1. I write my code on OS X using a programming text editor - my personal favourites are Xcode and Sublime Text
  2. I compile my code in a Terminal window using one of the following commands:

    For C code:

    /usr/local/linaro/arm-linux-gnueabihf/bin/arm-linux-gnueabihf-gcc [source file] -o [compiled binary name]

     

    For C++ code:

    /usr/local/linaro/arm-linux-gnueabihf/bin/arm-linux-gnueabihf-g++ [source file] -o [compiled binary name]

  3. With the same Terminal window I copy the compiled binaries to the BBB using scp:

    scp [binary file] root@192.168.7.2:[destination directory]

  4. Using a second Terminal window with a running ssh session logged into the BBB (see the Tethering section above), I test the binaries using the following command:

    ./[binary file]

 

Eventually I will want my BBB software to start on boot, but I'll talk about that in a later blog post.

 

Connecting the Key Mech Arduino via Serial

 

Apart from the obvious Hello World program, the first application I have developed for the BBB is a simple program that reads serial data coming from the piano's key mech Arduino, displaying the bytes it reads on the console.

 

The BBB has six on-board serial ports - one that is coupled to the board's serial console, and five UART ports that can be found on the board's expansion headers. By default only the serial console port is enabled, so to use any of the other UARTs you must allow them to be enabled at boot. I did this by following the "Section 1" steps in this tutorial. Note that on my BBB the uEnv.txt file was in the /boot/ directory, not /boot/uboot/ as the tutorial suggests.
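For reference, the enable line that ends up in /boot/uEnv.txt looks something like the fragment below. Treat this as a sketch of the approach rather than a copy-paste recipe - cape overlay names can differ between kernel images, so follow the linked tutorial for your setup:

```
# /boot/uEnv.txt - enable the UART1 cape overlay at boot
optargs=capemgr.enable_partno=BB-UART1
```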

 

Once this had been done, I connected the Arduino Pro Mini to the BBB using the following connections:

  1. Arduino TX pin to BBB P9_26 pin (UART1 RX), for sending serial data from Arduino to BBB
  2. Arduino GND pin to BBB P9_01 pin (a DGND pin), for allowing the Arduino to be powered by the BBB
  3. Arduino RAW pin to BBB P9_03 pin (a VDD_3V3 pin), for powering the Arduino using the BBB

 

arduino to beaglebone black serial

The key mechanism's Arduino Pro Mini connected to the BeagleBone Black via the UART1 port

 

Lastly I developed a small piece of code that opens the UART1 device file (/dev/ttyO1) and displays any byte it reads from it. You can see this code on my project's GitHub repo here, as well as below:

 

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <termios.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <stdbool.h>
#include <errno.h>

#define KEYBOARD_SERIAL_PATH "/dev/ttyO1"

int main (void)
{
  printf ("Running test_key_mech_input (v2)...\n");

  int keyboard_fd;
  uint8_t keyboard_input_buf[1] = {0};

  //==========================================================
  //Set up serial connection

  printf ("Setting up key mech serial connection...\n");

  struct termios tty_attributes;

  //open the UART1 device file for read/write
  keyboard_fd = open (KEYBOARD_SERIAL_PATH, O_RDWR);

  //if we can't open the file, show an error and exit
  if (keyboard_fd < 0)
  {
    perror (KEYBOARD_SERIAL_PATH);
    return -1;
  }

  //put the port into raw mode, with reads returning a single byte at a time
  tcgetattr (keyboard_fd, &tty_attributes);
  cfmakeraw (&tty_attributes);
  tty_attributes.c_cc[VMIN] = 1;
  tty_attributes.c_cc[VTIME] = 0;

  //set up bauds (the key mech Arduino uses 38400)
  cfsetispeed (&tty_attributes, B38400);
  cfsetospeed (&tty_attributes, B38400);

  //apply the changes now
  tcsetattr (keyboard_fd, TCSANOW, &tty_attributes);

  //set reads to be blocking
  fcntl (keyboard_fd, F_SETFL, 0);

  //==========================================================
  //Enter the main loop, and just read any data that comes in over the serial port

  printf ("Starting reading data from key mech...\n");

  while (true)
  {
    //attempt to read a byte from the serial device file
    int ret = read (keyboard_fd, keyboard_input_buf, 1);

    //if we read something, display the byte
    if (ret > 0)
    {
      printf ("Byte read from keyboard: %d\n", keyboard_input_buf[0]);

    } //if (ret > 0)

  } //while (true)

  return 0;
}
 

Next Steps

 

Now that I have got the BBB up and running the next step is to start the development of the sound synthesis engine. This will involve developing some software that creates a simple controllable tone, as well as configuring the BBB to output the audio via one of its audio outputs.