Acoustics


 

I'll be the first to admit ... perhaps this music from a power supply thing has gotten a little out-of-hand. First, I played a poor rendition of the Mario theme. It was a proof-of-concept, not really intended to go anywhere except perhaps to bring a smile to people's faces. Then I played a better rendition of the Mario theme, inspired by a comment which praised as much as criticised the effort, this time using more features of the power supply and improving connectivity. I even explored making .csv files for loading into the arb table memory directly.

 

But perhaps you're tired of listening to the Mario theme ... perhaps you'd like some other music to brighten your mood in the lab. A comment from clem57 jolted my memory, reminding me of something I had always been intending to explore since I was in school - MIDI files and how to work with them. By the end of this, I figured, I should be able to play more music without hand-transcribing it from images of sheet music into code. Indeed, this is the case, although with a few caveats.

 

What is MIDI?

The term MIDI stands for Musical Instrument Digital Interface and was standardised back in 1983. It was designed as an asynchronous serial physical-layer interface running at 31.25kbit/s, over which binary MIDI messages would be sent and received. These standardised messages consist of 8-bit words, with most messages being three bytes long, though some are variable-length. Its introduction meant that instruments from multiple vendors could communicate with each other, and that instruments could send performances to a controller or computer to record, compose and replay (even on different hardware). It is still in use today, although it is less visible in the consumer space (even modern versions of Windows don't let you choose your MIDI output device anymore).
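
To make this concrete, here's a small sketch (my own illustration, not from the original post) that builds a raw three-byte note_on message: the status byte packs the message type into the high nibble (0x9 for note_on) and the channel into the low nibble, followed by two 7-bit data bytes for note number and velocity:

```python
def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a raw three-byte MIDI note_on message.

    Status byte = 0x90 | channel (0-15); the data bytes are 7-bit values.
    """
    assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
    return bytes([0x90 | channel, note, velocity])

# Middle C (note 60) at velocity 100 on channel 0:
msg = note_on(0, 60, 100)
print(msg.hex())  # 903c64
```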

 

Files containing MIDI data are known as MIDI files. These usually have an extension of .mid, although files intended for early karaoke systems sometimes use .kar; these contain a lyrics track to go along with the instrument data. MIDI files come in three variations - type 0 contains all data in a single "track", type 1 contains separate tracks on a common time-base which are started synchronously, while type 2 contains separate tracks which are asynchronous in alignment. Most of the files you may find are of the first two types. Type 0 files are quite difficult to handle, so I've decided to ignore them for now. Type 1 files are much easier, as the multi-track nature often leads to composers using one track for one type of instrument or part of the song (e.g. melody). This way, if we are just interested in making a monophonic representation of a song, we can just extract and interpret the data from one track.
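
For the curious, the type designation lives in the 14-byte MThd header chunk at the very start of every MIDI file, and can be read with nothing but the standard library. This is a sketch based on the published Standard MIDI File header layout (my own code, not from the post):

```python
import struct

def parse_mthd(header: bytes):
    """Parse the 14-byte SMF header chunk: the ASCII tag 'MThd', a length
    of 6, then three big-endian 16-bit fields: format type (0/1/2),
    number of tracks, and time division (ticks per beat)."""
    chunk, length, fmt, ntrks, division = struct.unpack(">4sIHHH", header[:14])
    assert chunk == b"MThd" and length == 6
    return fmt, ntrks, division

# A synthetic type-1 header with two tracks and 480 ticks per beat:
hdr = b"MThd" + struct.pack(">IHHH", 6, 1, 2, 480)
print(parse_mthd(hdr))  # (1, 2, 480)
```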

 

It was a common misconception in the past that MIDI sounds bad. In the early days of dial-up internet, bandwidth constraints meant that practically the only sound files on webpages were MIDI files. These were small because they only contained the metadata about how to reproduce the music, but not the music itself (i.e. no digital samples). As a result, many users in the mid-90s were using sound cards containing FM synthesis chips (e.g. Yamaha OPL2/3) which produced rather artificial-sounding instruments, giving MIDI music its characteristic "bad" sound (to a layperson). For me, I do enjoy the results of FM synthesis - it has a particularly sharp character that I like. Later on, this was replaced by more sophisticated techniques such as wavetable synthesis, which used short samples of instruments, processed for pitch/velocity/etc. This sounded more realistic, but had the side effect of changing how a given MIDI file sounded. Early efforts were hardware-based - e.g. the Creative Labs AWE32/64 with the EMU8000 - but as processing power increased, software wavetable implementations became feasible - e.g. Yamaha S-YXG50. Later efforts used very large audio sample sets (e.g. Native Instruments Bandstand), several gigabytes in size, to create realistic-sounding audio. It's important to remember that MIDI is basically the electronic equivalent of sheet music - whether it sounds good or bad depends on the instrument and player.

 

Another issue is that there were several sorts of MIDI, just like there are several sorts of emojis depending on whether you're on iOS or various flavours of Android or Windows (I never thought I'd say that). This includes designators such as GM (General MIDI), GS (General Standard), XG (Extended General) and more. These existed because there was some leeway in how patch numbers mapped to instruments, with some vendors opting to put in more instruments and capabilities than others. This meant that certain files which were authored for XG (for example) would not sound right when played on a GM-capable device.

 

The Code and Its Limitations

In order to make MIDI playback on the power supply a possibility, I needed to write some code. This code would need to do the following things:

  • Read MIDI files and parse them into their separate tracks and events. This part is easy, thanks to the Mido library in Python which has MidiFile routines.
  • Extract notes and timing information. This part is a little more difficult, as MIDI note data is stored as a note number/velocity pair which needs to be translated into frequency. These messages contain note_on and note_off events (among others). Luckily, the frequency can be derived by formula or (in my case) a lookup table. As for the timing information, it gets complicated, as each message carries a delta-time from the previous message in units of ticks which needs to be converted into real time. We can use an accumulator to measure the cumulative time difference between messages of interest to us.
  • Extract tempo and time-base data. This part requires iterating through the whole file, as the tempo data may only be in one track, which may not be the track we are translating. Finding the tempo data tells us the number of microseconds per beat, but this is a different unit to ticks, so a second piece of time-base information, the number of ticks per beat, is necessary to derive the actual timing.
  • Transpose the music into a frequency range suitable for the supply. Many songs have notes well above the 500Hz capability of the supply, and ideally, the granularity of notes is best when playing below 250Hz. As a result, a transpose feature is necessary to drop the song by one, two or three octaves to ensure a better playback experience.
  • Convert all of this data into SCPI commands and send it to the instrument. This part is easy, since I've already figured that out in the previous part! Of course, it's not perfect, as we will still have a dropped note or two because of timing issues and possible firmware interactions with the power supply.
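
As a cross-check on the conversions described above: the note-number-to-frequency step follows the standard equal-temperament relation (note 69 = A4 = 440Hz, twelve notes per octave), and the ticks-to-seconds step combines the set_tempo value (microseconds per beat) with the file's ticks-per-beat. A minimal sketch (the function names are my own):

```python
def midi_note_to_freq(note: int) -> float:
    """Equal temperament: note 69 = A4 = 440 Hz, 12 semitones per octave."""
    return 440.0 * 2 ** ((note - 69) / 12)

def ticks_to_seconds(ticks: int, tempo_us: int, ticks_per_beat: int) -> float:
    """set_tempo gives microseconds per beat; scale by the file's ticks per beat."""
    return ticks * tempo_us * 1e-6 / ticks_per_beat

print(round(midi_note_to_freq(69), 2))     # 440.0
print(round(midi_note_to_freq(60), 2))     # 261.63 (middle C)
print(ticks_to_seconds(480, 500000, 480))  # 0.5 (one beat at 120 BPM)
```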

 

The Python 3 code which relies on the Mido library is as follows:

from mido import MidiFile
import pyvisa as visa  # modern releases of the VISA bindings ship as "pyvisa"; older code used "import visa"
import time

# MIDI Note Frequencies from https://www.inspiredacoustics.com/en/MIDI_note_numbers_and_center_frequencies
midnotfs=[8.18,8.66,9.18,9.72,10.3,10.91,11.56,12.25,12.98,13.75,14.57,15.43,16.35,17.32,18.35,19.45,20.6,21.83,23.12,24.5,25.96,27.5,
29.14,30.87,32.7,34.65,36.71,38.89,41.2,43.65,46.25,49,51.91,55,58.27,61.74,65.41,69.3,73.42,77.78,82.41,87.31,92.5,98,103.83,110,116.54,
123.47,130.81,138.59,146.83,155.56,164.81,174.61,185,196,207.65,220,233.08,246.94,261.63,277.18,293.66,311.13,329.63,349.23,369.99,392,
415.3,440,466.16,493.88,523.25,554.37,587.33,622.25,659.26,698.46,739.99,783.99,830.61,880,932.33,987.77,1046.5,1108.73,1174.66,1244.51,
1318.51,1396.91,1479.98,1567.98,1661.22,1760,1864.66,1975.53,2093,2217.46,2349.32,2489.02,2637.02,2793.83,2959.96,3135.96,3322.44,3520,
3729.31,3951.07,4186.01,4434.92,4698.64,4978.03,5274.04,5587.65,5919.91,6271.93,6644.88,7040,7458.62,7902.13,8372.02,8869.84,9397.27,
9956.06,10548.08,11175.3,11839.82,12543.85,0]
notestate=0
notenumber=128 # 128 used to denote silence
noteduration=0
songnotes=[]
songduras=[]
outvolt = 0.75
outcur = 0.5
outch = 1

# Parse a MIDI File by selecting track and extracting the first note/time pair of any simultaneous notes
mid=MidiFile(input("Input MIDI Filename: "))
for i, track in enumerate(mid.tracks):
  print('Track {}: {}'.format(i, track.name))
for i in mid.tracks :
  print(i)
trk=int(input("Track Number to Process: "))
for msg in mid.tracks[trk]:
  if msg.type == "note_on" and notestate == 0 :
    noteduration=noteduration+msg.time
    songnotes.append(notenumber)
    songduras.append(noteduration)
    notenumber=msg.note
    noteduration=0
    notestate=1
  elif (msg.type == "note_off" or (msg.type == "note_on" and msg.velocity == 0)) and notestate == 1 and msg.note == notenumber :
    # Quirky MIDI files use note_on with velocity equal to zero to signal note off
    noteduration=noteduration+msg.time
    songnotes.append(notenumber)
    songduras.append(noteduration)
    notenumber=128
    noteduration=0
    notestate = 0
  else :
    noteduration=noteduration+msg.time

# Convert MIDI Note Numbers to Frequencies
for i in range(len(songnotes)):
  songnotes[i]=midnotfs[songnotes[i]]

# Convert Ticks to Time by Searching File for Tempo Message, scaling by the time base (ticks per beat)
for i, track in enumerate(mid.tracks):
  for msg in track:
    if msg.type == "set_tempo" :
      tempo = int(msg.tempo)*0.000001/mid.ticks_per_beat
for i in range(len(songduras)):
  songduras[i]=songduras[i]*tempo

# Drop First Note if Silent
if songnotes[0] == 0 :
  songnotes = songnotes[1:]
  songduras = songduras[1:]

print("Maximum F: "+str(max(songnotes)))
transf = input("Transpose by dividing by? ")
for i in range(len(songnotes)):
  songnotes[i]=songnotes[i]/int(transf)

print("Beginning Playback ...")
resource_manager = visa.ResourceManager()
ins_ngm202 = resource_manager.open_resource("USB0::0x0AAD::0x0197::3638.4472k03-100856::INSTR")
ins_ngm202.timeout = 10000
print("Setting Up - NGM202")
ins_ngm202.write("INST:NSEL "+str(outch))
ins_ngm202.write("SENS:VOLT:RANG:AUTO 0")
ins_ngm202.write("SENS:VOLT:RANG 5")
ins_ngm202.write("SENS:CURR:RANG:AUTO 0")
ins_ngm202.write("SENS:CURR:RANG 1")
ins_ngm202.write("OUTP 0")
ins_ngm202.write("OUTP:GEN 0")
ins_ngm202.write("OUTP:MODE SOUR")
ins_ngm202.write("SOUR:VOLT 0.0")
ins_ngm202.write("SOUR:CURR "+str(outcur))
ins_ngm202.query("*OPC?")

for i in range(len(songnotes)):
  if songnotes[i] == 0 :
    time.sleep(songduras[i])
  else :
    dura = "{:4e}".format((2/songnotes[i])-0.001)
    ins_ngm202.write("ARB:DATA "+str(outvolt)+","+str(outcur)+",0.001,0,0.0,"+str(outcur)+","+dura+",0")
    ins_ngm202.write("ARB:REP "+str(int(songduras[i]*songnotes[i]))) # Number of Cycles Needed
    ins_ngm202.write("ARB:TRAN "+str(outch))
    ins_ngm202.write("ARB 1")
    ins_ngm202.write("OUTP 1")
    ins_ngm202.query("*OPC?")
    time.sleep(songduras[i])

print("Song End. Closing instrument!")
ins_ngm202.write("OUTP 0")
ins_ngm202.write("OUTP:GEN 0")
ins_ngm202.write("ARB 0")
ins_ngm202.close()

 

Surprisingly to me, the code isn't as long as I had expected. It is implemented rather naively, so don't expect miracles. There are a number of caveats to this code which users should be aware of:

  • This code doesn't do any error checking whatsoever. Enter the wrong information into a field and it will probably throw an exception and exit.
  • You will have to modify the code's VISA resource string to match your power supply and attach a speaker to the first channel if you want to actually hear something.
  • The code can only interpret multi-track type 1 MIDI files by taking the data in just one track as a "monophonic" interpretation. Type 0 MIDI files with all the data in one track will produce garbage output due to all the interleaved note events (see next point).
  • The code processes note events by looking for the first note_on event and timing until its corresponding note_off (or note_on with velocity zero) event. This means that in the case of chords or multiple notes played simultaneously, only the first note that appears in the data stream is taken - this has the side effect that in chord segments, the result could sound unexpected.
  • The code also does not care about velocity (except where zero in place of a note_off) - where keys are decaying over time, they will be played throughout, resulting in something "like" a hung/stuck key.
  • Where tracks are not labelled, it can be hard to determine which track carries the melody - I've printed the track numbers/names and number of messages (hence the double-printing) to give users a clue, but this isn't always going to work on all files. Some files annoyingly split the melody across multiple tracks, so hearing quite a bit of silence is not unexpected - if you have proper music workstation software to examine the MIDI files first, you will probably be able to pick the good ones for use with the code.
  • The code looks for tempo data by scanning every track of the file. Unfortunately, with files where there are tempo changes, only the final tempo will be carried through to the output.
  • The code is harsh on the relays and regulation circuitry as are the previous versions of the code - use at your own risk. No responsibility is accepted for any damage which may be incurred in any way.
  • While there is a formula to convert MIDI note number into frequency, it involves some funky exponents. To ensure I didn't get this wrong, I just adapted the frequency values from an article by Inspired Acoustics into an array.
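
On the tempo-change limitation above: one possible fix (sketched here as my own untested suggestion, not part of the code) is a tempo map - collect every (absolute tick, tempo) pair from the file, then integrate piecewise when converting a tick position into seconds:

```python
def tick_to_seconds(tick, tempo_map, ticks_per_beat):
    """Convert an absolute tick position to seconds, honouring tempo changes.

    tempo_map: sorted list of (absolute_tick, microseconds_per_beat) pairs,
    with the first entry at tick 0 (MIDI's default tempo is 500000 us/beat).
    """
    seconds = 0.0
    for i, (start, tempo) in enumerate(tempo_map):
        # This tempo applies until the next change, or until the target tick
        end = tempo_map[i + 1][0] if i + 1 < len(tempo_map) else tick
        span = min(tick, end) - start
        if span <= 0:
            break
        seconds += span * tempo * 1e-6 / ticks_per_beat
    return seconds

# 480 ticks/beat: one beat at 120 BPM (500000 us), then one beat at 60 BPM:
tmap = [(0, 500000), (480, 1000000)]
print(tick_to_seconds(960, tmap, 480))  # 1.5
```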

 

I did write another version of this code which used the ARB table by loading it over SCPI, but unfortunately I ran into a few issues. Uploads of large ARB:DATA sets (greater than about 768 points) resulted in the ARB:TRAN command failing or some overflow occurring. Restricting the ARB:DATA sets to smaller sizes, I found that issuing the ARB 1 and OUTP 1 commands in quick succession would sometimes cause the output not to activate unless I inserted delays; once I did that, the playback became erratic. So I suppose the best way may be to compile .csv files, load them onto USB and use SCPI commands to load a full 4096-point ARB table block, instead of trying to do it all over SCPI. But polling the supply to see when the ARB sequence is over is also fairly inefficient, which is why I still deferred to the "note by note" code above. I suppose this is what happens when you try to push the hardware/software right into an edge case - something that you don't expect any reasonably sane person to try.

 

Another note is that this doesn't really turn the power supply into a MIDI instrument (yet). Instead, it just converts a track of MIDI data into sequences of ARB commands for the power supply. The thought occurred to me that it could be turned into a MIDI instrument with mido simply by having the appropriate MIDI controller (e.g. a keyboard), an interface on a computer (e.g. MPU-401 or USB-to-MIDI) and code using mido to interpret the events and fire them off over SCPI in realtime. There won't be any ARB:REP ending to the note, instead relying on OUTP 0 for every key-up, but that would certainly be a possibility for making a very expensive and imprecise (in terms of absolute pitch) pulse-wave MIDI instrument.
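
To sketch what such a realtime mode might look like: each note_on would translate into an ARB segment at the note's frequency plus OUTP 1, and each note_off into OUTP 0. The helper below only builds the command strings - the mido input loop and VISA session are shown as comments, everything here is my own untested illustration (including the assumption that ARB:REP 0 means endless repetition - check the manual), and the waveform is a plain 50% duty square rather than the fixed 1ms high phase used in the code above:

```python
def note_freq(note: int) -> float:
    """Equal temperament: note 69 = A4 = 440 Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)

def note_on_cmds(note: int, volt: float = 0.75, curr: float = 0.5) -> list:
    """SCPI commands for a sustained note: one high/low ARB point pair,
    repeated indefinitely (assumes ARB:REP 0 = endless - unverified)."""
    half = 0.5 / note_freq(note)  # half-period in seconds (50% duty square)
    data = "ARB:DATA {v},{c},{t:.4e},0,0.0,{c},{t:.4e},0".format(v=volt, c=curr, t=half)
    return [data, "ARB:REP 0", "ARB:TRAN 1", "ARB 1", "OUTP 1"]

def note_off_cmds() -> list:
    """Key-up simply drops the output."""
    return ["OUTP 0"]

# Realtime loop sketch (requires hardware and a MIDI input port):
# import mido
# with mido.open_input() as port:
#     for msg in port:
#         if msg.type == "note_on" and msg.velocity > 0:
#             for cmd in note_on_cmds(msg.note):
#                 ins_ngm202.write(cmd)
#         elif msg.type in ("note_off", "note_on"):
#             for cmd in note_off_cmds():
#                 ins_ngm202.write(cmd)

print(note_on_cmds(69)[0])  # ARB:DATA 0.75,0.5,1.1364e-03,0,0.0,0.5,1.1364e-03,0
```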

 

Video

If you're tired of hearing the Mario theme, since Mario Day is over, you'll be glad to know that this video contains a "greatest hits" medley of more recognisable tunes including Vengaboys - Boom Boom Boom Boom, Harold Faltermeyer - Axel F, Christina Aguilera - Come On Over (All I Want Is You), Counting Crows - Big Yellow Taxi, Bill Conti - Gonna Fly Now (Theme from Rocky), Ace of Base - I Saw the Sign, Air Supply - All Out of Love and Vitamin C - Graduation (Friends Forever). Unfortunately, I can't easily give credit to the original MIDI file sources, since they come from my archive of MIDI files from the early 2000s, back when I was on dialup. They were burned to a packet-write CD-RW, of all things, sourced from Yahoo GeoCities sites via MusicRobot that are all long gone. But to be clear - I did not author any of the MIDI files played.

 

 

I hope you enjoyed that compilation. It's a rewarding outcome of an evening's worth of code and video/audio editing. Of course, any other multi-track type 1 MIDI file with a simple melody should be playable with the code above.

 

Conclusion

Embarking on this unexpected journey has actually been quite interesting. I've managed to teach myself more about how to torture instruments and their arbitrary waveform capabilities, and learned more about the structure and parsing of MIDI files. Not being a regular Python programmer, this has helped me refresh and maintain my programming abilities. In return, it has rewarded me with a lot of pulse-wave tones - you can probably imagine how much debugging occurred during the creation of the demos. You're probably as tired of pulse-wave music as I am - but that's fine, because it was just a proof-of-concept demo that got blown up into something a little bigger. But it goes to show that sometimes, a bit of software can really make hardware "sing".

 

Surprisingly, by reducing the amount of time I spent narrating and editing the videos, I managed to do everything in a few days' worth of spare hours after work. This is a good thing, since I'm flying out for a holiday in three days' time. Hope to see you all when I get back, and to all the other contestants in the Acoustics challenge - good luck!