The last PiCassoTizer blog showed how the touch screen was going to work, using image processing on Raspberry Pis to monitor finger position.
This blog shows the other end, where a Raspberry Pi is used to edit screen graphics using external position information to control the process.
This system uses existing Raspberry Pi resources where possible, so the graphics application can be any existing graphics program. Since all of these programs can be manipulated with a mouse, the external touch-screen information can simply be converted to emulate a mouse, providing full functionality.
It sounds simple but there are a number of issues to deal with:
- The touch digitizer system provides angular data that needs to be translated to Cartesian coordinates
- The Cartesian coordinates are absolute, whereas normal mouse data is relative motion only (unfortunately, Linux doesn't seem to support absolute mouse positioning)
- The only Raspberry Pi that can emulate a mouse is a Pi Zero, which isn't part of this challenge
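The first item in the list above is straightforward trigonometry. Here is a minimal sketch, assuming two cameras at the top corners of the screen, each reporting the angle between the top screen edge and the line of sight to the finger; the `baseline` spacing, the angle names, and the function itself are my own illustrative choices, not details from the project:

```cpp
#include <cmath>

// Convert two camera angles to a Cartesian touch point.
// Left camera at (0, 0), right camera at (baseline, 0), y pointing down
// into the screen. Each angle is measured in radians from the top edge
// toward the finger (0 = along the edge, pi/2 = straight down).
struct Point { double x, y; };

Point anglesToPoint(double baseline, double thetaLeft, double thetaRight) {
    // Left-camera ray:  y = x * tan(thetaLeft)
    // Right-camera ray: y = (baseline - x) * tan(thetaRight)
    double tl = std::tan(thetaLeft);
    double tr = std::tan(thetaRight);
    double x = baseline * tr / (tl + tr);  // where the two rays intersect
    double y = x * tl;
    return {x, y};
}
```

As a sanity check, if both cameras report 45 degrees on a baseline of 2, the rays meet at (1, 1), one unit below the midpoint of the top edge.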
The simplest way I could think of to convert angular position data to USB mouse data was to convert the angular data to analog signals and use an Arduino to make those signals look like a USB HID mouse. Although the Raspberry Pis do a lot of heavy lifting (camera digitization and image processing on one end, graphics editing on the other), there is still room for a little Arduino to complete the system.
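Because a standard HID mouse only reports relative motion in small signed steps, firmware like this has to chase the absolute target position. A hedged sketch of that per-axis logic in plain C++ (the struct name and pixel-based bookkeeping are my own; an actual Arduino sketch would feed the returned delta to the Mouse library each report cycle):

```cpp
#include <algorithm>
#include <cstdint>

// One axis of absolute-to-relative conversion. Each HID report can move
// the pointer at most +/-127 counts, so larger jumps are emitted as a
// series of clamped steps while `position` tracks where the pointer
// should now be.
struct AxisTracker {
    int position = 0;  // pointer position the firmware believes, in pixels

    // Returns the delta to send in the next HID report (-127..127).
    int8_t step(int target) {
        int delta = std::clamp(target - position, -127, 127);
        position += delta;
        return static_cast<int8_t>(delta);
    }
};
```

The catch, as noted above, is that this only stays "close to absolute" as long as nothing else moves the pointer; any drift between the firmware's idea of the position and the real cursor goes uncorrected.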
I ordered some digital-to-analog converters for the touch digitizers, but they have not arrived, so I am working on the mouse emulation system and just simulating the camera system outputs with potentiometers.
The potentiometer wiring was a confused spaghetti mess, so I designed a frame to hold everything in a more understandable relationship.
Here is a video showing the back end apparatus in operation with a paint program:
The positioning system is pretty close to absolute, so it should be usable once the digitized data is available. This demo used X and Y position data directly, though, so firmware is still needed to translate the angular data.
While waiting for the D-to-A modules, I can start designing the camera frame geometry to ensure the whole screen is in the field of view and the angles provide the best accuracy possible.
I can't complete it though, until the cameras arrive and I figure out their field of view.
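In the meantime, the core of that geometry check can be sketched now. A minimal example, assuming a camera mounted just outside the top-left corner of the screen; the `setback` distance and screen dimensions are placeholders of mine, to be replaced once the real camera field of view is known:

```cpp
#include <cmath>

// Minimum horizontal field of view (degrees) for a camera at
// (-setback, -setback), just outside the top-left corner of a screen
// spanning (0, 0) to (w, h). The extreme rays it must cover run to the
// top-right corner (w, 0) and the bottom-left corner (0, h).
double requiredFovDegrees(double w, double h, double setback) {
    const double pi = std::acos(-1.0);
    double a1 = std::atan2(setback, w + setback);      // ray to (w, 0)
    double a2 = std::atan2(h + setback, setback);      // ray to (0, h)
    return (a2 - a1) * 180.0 / pi;
}
```

With zero setback a corner-mounted camera needs a full 90-degree field of view to cover the whole screen; pulling the camera back behind the corner shrinks that requirement, which is one reason the frame geometry is worth working out before the cameras arrive.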