BuildBrighton has been a hive of activity for the past week, with members heading off to prototype their chosen parts of the project. Here's what we said we'd do in the first blog post:
- Manage the project, making sure we meet our deadlines and targets.
- Input: Investigate RFID, and multiplexing RFID antennae
- Input: Physical input devices, e.g. punch cards, switches, etc.
- Chris and Ben
- Input: Look at re-purposing their cube-input project for the challenge
- Output: Android text to speech API
- Output: Android device screen as an output
- Input: Colour sensing
- Input: barcodes
- Input: UV binary dots
- Output: Speakjet voice synthesis chip
- Output: Waveshield or similar for sample-based speech output
- Input: Reverse engineering existing similar toys (he likes taking things apart)
- Output: Implementing voice synthesis on commodity (cheap) microcontrollers
Now, due to an administrative error I'd completely forgotten to add Steve (pinter75) to the list. He was planning to investigate magnets and resistance as an input method and play around with a waveshield as a means of output.
Here's how we all got on:
The circuit I based my design on was from this Cornell Uni project, which in turn was based on a Microchip reference design for a 125kHz reader. The first hurdle was finding the very specific resistor values, which are almost impossible to source; quite a downer, since one of the judging criteria is how easy the parts are to find. For the 47.5k 1% resistor I just pulled out my trusty bag of 47k 5% resistors and tested 'em until I found one that was the right value. I did the same thing for the other odd values. Cheap resistors FTW! Once I'd sourced all the components I started to breadboard the reader, beginning with the 125kHz square-wave generator. Unfortunately this was my stumbling block, as I just couldn't get it to work: all I got was a rippling wobble of around 3V, as you can see on the right.
Given the time constraints, I decided to give up on trying to build my own reader and just buy a pre-made one that was TTL-friendly, so we can integrate it with a microcontroller. I found a cheap reader on eBay for £8, so I bought two of 'em to play with. They've not turned up yet, so in the meantime I'll be starting the interaction design of the system, so we have a better idea of what we're building over the next week. Part of the interaction design will be choosing which phonics and target words we need, which will also define what samples or speech we need in the system and how many slots we need for phonics on our input device.
Ben & Chris
Ben and Chris continued developing their cube-tray input device, which gives each cube a unique ID by embedding a tiny PIC microcontroller inside. Using a particular arrangement of pins/pads on the cube, the tray can determine which face is down and the orientation the cubes are positioned in. They made a lot of progress with their boards, and have a prototype 'exploded' cube working (i.e., the cube software and hardware work, but it's not cube-shaped yet).
Chris has been documenting their efforts on a dedicated blog.
Mike started looking into Android development as a way of providing a voice output and a screen for displaying the picture of the target word. Unfortunately he only got as far as installing the SDK and running some of the examples, as work commitments got in the way (boo, work). There was a general consensus that using a smartphone for most of the brains would be cheating slightly anyway.
Barney tagged a load of photo-sensors onto my RFID order from Farnell; he's got some full-spectrum phototransistors, infra-red phototransistors and ambient light sensors. He's made some progress varying the intensity of an LED according to the ambient light; however, we don't yet have a working prototype that reads several bits of data using optical sensors.
Jason had the Speakjet chip up and running very quickly, and we played around with it trying to get it to say various words. The synthesis on it is pretty robotic, however, which may be a showstopper for this particular use: the speech output needs to be very clear, as we're trying to teach kids how to talk. We don't want them to end up talking like robots. Well, maybe WE do, but most parents wouldn't.
As planned, Matt went to a toy shop to find similar products that are already on the market. He bought himself a Phonics Owl and proceeded to tear it apart to figure out what made it tick, or indeed talk.
He's already written up his findings in another blog post.
Even though I'd missed him off the list of team members, Steve has been hard at work prototyping an input system using magnets and resistors. The magnets are polarised, so they stop you putting the blocks in the wrong way round, and they also act as electrical contacts. Between the magnets are various resistor values that let him tell which side of which block is connected to the device. He's also written up his prototyping in a couple of blog posts, which include some awesome videos of his laser cutter in action!
Over the next week we'll be pulling together our input and output modules, gluing them together with a microcontroller and implementing our control layer according to the interaction design I'm working on right now. I'll post another entry about the interaction design of the phonics toy soon too.
Also, we'll be thinking of a name for it, other than 'phonics toy'.