Photonics

 

The Concept

I've long had an idea for a super basic color sensor. I'm not talking about fancy-schmancy dedicated color sensors, but rather something more basic... much, much more basic...

The concept here is to use a humble RGB LED and a photoresistor (aka a "light dependent resistor", or LDR). The thought is that if I can control the color of the light, then I know what spectrum is available at the sensor. This is the opposite of how most imagers work: a standard color image sensor has multiple pixels, each behind a band-pass filter for roughly "red", "green", or "blue", while the light source contains spread-spectrum light (aka "white").

 

But first, a little theory...

I'll explain a little bit about how color is perceived and the three major factors involved. I'm mostly focusing on the visible spectrum from 400-700 nanometers (nm), although the near IR and the ultraviolet can have a significant effect on camera sensors.

There are three major factors in perception of color.

  1. The Light Source
  2. The Object
  3. The Observer

 

Any variance in any of these can change how one sees color.

 

The Light

The first factor in color perception is the light source. There are standard illuminants, like the CIE's D50 and D65, which define the spectrum of available light. Daylight (D50) is a pretty even spread.

 

 

Images from the Open Photographic Society.


In the charts above, we see the D50 illuminant. It is a little low in the 400-450 nm range, but pretty even across the rest of the spectrum up through 700 nm. This means that for any object, we should have enough light available to reflect and be perceived by our eyes. Compare this with the D65 illuminant on the right: there is much more relative blue light (475-525 nm) than in any other part of the spectrum.

 

The result is that simply by changing the color of the available light, objects may appear a little "cooler" in color. Most people have noticed this walking back to their car in a parking lot at night (at least before new MV lamps were banned in 2008). Mercury vapor lamps were common outdoors, but they have a very particular output spectrum. The Wikipedia article for MV lamps has sections describing how high-pressure lamps are devoid of light in the red region and cause apparent color shifts.

 

By using the RGB LED, my plan is to take advantage of the narrow bands of available light from "three" different light sources, one at a time.

 

The Object

{gallery} The Object reflectance

IMAGE TITLE: Spectral Reflectance Curves for Cyan, Magenta, and Yellow.

IMAGE TITLE: Blue light on a yellow object makes the yellow hard to discern from a white sheet of paper. This is because Yellow ink doesn't reflect blue light.

IMAGE TITLE: Blue light on a Magenta swatch yields better results and is easier to distinguish from the white paper.


The second factor is the object itself that we are trying to look at. Aside from texture (which can also play funny games with the perception of color), the surface and the area immediately under the surface act much like a band-pass filter. A red object will reflect light in the red wavelengths (above 600 nm) and absorb the rest.

 

Take the chart above, which overlays the spectral reflectance of Cyan, Magenta, and Yellow. It was taken from Munsell.com, a leader in color for the graphic arts and in color communication.

 

Let's look at the spectral reflectance curve of the Yellow (as defined for CMYK process color in the graphic arts). This shade of yellow reflects very strongly in the 550-700 nm range, but drops off quickly below that, with almost nothing below 475 nm. The theory I am going to apply is that when I use the blue element of the LED on a yellow object, I shouldn't see much response at all coming back into the photoresistor. The assumption is that the shade of blue in the LED will be predominantly below 500 nm. If we were to turn off the lights in the room and shine a blue LED at a yellow swatch, it should stay pretty dark and pale.

 

Now let's say I shine the red LED at this object. The red spectrum runs from 600-700 nm, so I should expect a strong response. The green LED would be somewhere in the middle: not as strong as the red, but much more than the blue.

 

Now if I swap the object for a Magenta swatch, the blue LED will produce a much higher reading in the photoresistor than it did with the yellow object. This is because magenta objects reflect much more strongly in the blue spectrum than yellow ones.

 

The Observer

The final piece is the observer. Most of the time, we are talking about our finicky eyeballs. They are horrendous at color memory, and comparison can be very difficult. They are non-linear and, like many other parts of nature, adapted to what we need them for. We don't have good perception of orange colors, so most pumpkins look like the same color. We DO have good perception of greens and can easily discern many more hues there. Perception of color in the human eye is a chemical process starting on the retina, traveling along the optic nerve, and ending at the back of our brain. Things that change our body chemistry can play with our perception of color, including diet, sleep, and other things like drugs and alcohol.

 

At any rate, we can probably guess at this stage that our eyeballs and the various types of photo-sensitive sensors (CCDs, CMOS, LDRs, etc.) will respond differently to the same amounts of light energy across the spectrum.

Image above from Wikipedia showing short, medium, and long cone cells' relative response curve to the light spectrum.

 

I used to use line scan cameras from Teledyne Dalsa. Here is the spectral response curve of the Linea series line scanner. We'll walk through some of the interesting things about this.

Fortunately, we are able to measure pixel response in a much more controlled manner than the human eyeball's response. This camera basically can't see anything below 400 nm. Then we see the humps for blue, green, and red centered around 450, 550, and 650 nm. Then it starts to get strange: above 700 nm, the green and blue pixels pick back up in response strength. This means the camera would produce a very blue image when shown infrared light at 825 nm, despite that light being invisible to the human eye. Most RGB cameras, like our cell phones, are equipped with an IR-cut filter to block this near-IR light. Most RGB cameras have similar response curves in the IR, which is why you can often see the light coming out of the end of a remote control with your camera when you can't see it with your naked eye.

 

How will my sensor work?

My plan is to sequentially turn on just one color of light at a time, then read the analog value produced by the LDR through a voltage divider circuit. I'll average 1000 readings, then scale them to a normalized 0-255 range to represent 8-bit RGB color. I'm using an Arduino to control it. This isn't meant to be highly accurate, but there could be some applications for it. One that comes to mind (potentially) is a kids' game where they place a colored block into a toy, and the toy can tell them what color it is. This replaces a multi-dollar sensor with dirt-cheap jellybean parts and works well enough to identify 10 colors.

 

 

IMAGE TITLE: Here is the simple schematic.

IMAGE TITLE: This is the physical layout of the sensor module

IMAGE TITLE: Cut-away view. Holes can be seen for the LDR and the LED.

 

For the actual sensor module hardware, I devised a simple holder for the LED and LDR. I hold the LED at +10° from vertical, and the sensor at +30°. This directs the specular reflections of the light away from the sensor so as not to blind it. Granted, a majority of the reflected light will go away, but the light I want is the light which is scattered by the surface. I could push the sensor angle to be closer to the light angle, but I run into a real-estate problem pretty quickly. I designed the tunnels to be 0.5" x 0.5" square (but used metric measurements for everything else).

 

There are also some other significant differences between the LDR and an actual pixel. For instance, I don't have any exposure control, so I don't have a way to compensate for different lighting/reflectance levels to enhance readings. I do have the LED on pins which can do PWM, but it turned out not to matter: even full brightness doesn't saturate the sensor.

 

The rough code follows below:

/*
 * Super simple color sensor
 * 
 * Created 4/14/2020 by James O'Gorman
 * 
 * This project is aimed to see if it is possible
 * to use an LDR (photo-resistor) to determine the color
 * of an object with an RGB LED to create the color bins.
 * 
 * Each of the three LED elements will come on one at a 
 * time, and the analog value of the LDR will be read in
 * and averaged.
 * 
 * LDR is attached on the analog pin using a voltage
 * divider circuit with 4.7k as the top resistor.
 * 
 */
#define LDR_pin A1
#define button_pin 12
#define LED_R 9
#define LED_G 10
#define LED_B 11


//941, 980, 1000


#define loopCounter 1000


void setup() {
  pinMode(LDR_pin, INPUT);
  pinMode(button_pin, INPUT_PULLUP); // active-low button wired to ground
  pinMode(LED_R, OUTPUT);
  pinMode(LED_G, OUTPUT);
  pinMode(LED_B, OUTPUT);
  digitalWrite(LED_R, HIGH); // LEDs are active low, so start with all off
  digitalWrite(LED_G, HIGH);
  digitalWrite(LED_B, HIGH);
  Serial.begin(57600);
}


void loop() {
if(!digitalRead(button_pin)){
  delay(30); //debounce
  readSensor();
}
}


void readSensor(){
  unsigned long value;
  int RGB[3] = {0,0,0};
  int RGB_raw[3] = {0,0,0};
  
  int scaling[3] = {941, 1002, 1010}; //lowest ADC values seen per channel at full brightness
  
  for(int c=0; c<3; c++){
    
    value = 0; //reset the accumulator for each color
    digitalWrite(c+9, LOW); //turn on the correct LED (common anode, active low)
    delay(100);
    
    for(int n=0; n<loopCounter; n++){
      value += analogRead(LDR_pin);
    }
    
    value = value / loopCounter; // average the values
    RGB_raw[c] = value;
    
    RGB[c] = 255 - constrain(map(value, scaling[c], 1023, 0, 255), 0, 255);
    digitalWrite(c+9, HIGH); //turn the LED back off
  }
  for(int n=0; n<3; n++){
    Serial.print(RGB_raw[n]);
    Serial.print(",");
  }
  for(int n=0; n<3; n++){
    Serial.print(RGB[n]);
    if(n<2) Serial.print(","); //no trailing comma after the last value
  }
  Serial.println();
}

 

And here is the Processing code for the display window:

import processing.serial.*;


Serial myPort; 
String myString = null;
int lf = 10;    // Linefeed in ASCII




color myColor = color(0,0,0);
int loopCount = 0;
int moveAmount = 30;


void setup() {
  size(640, 360);
  myPort = new Serial(this, Serial.list()[7], 57600); // index depends on your machine; print Serial.list() to find the right port
  
}


void draw() {


  while (myPort.available() > 0) {
    myString = myPort.readStringUntil(lf);
    if (myString != null) {
      myString = trim(myString); // strip the trailing linefeed before parsing
      int[] values = int(split(myString, ","));
      
      myColor = color(values[3], values[4], values[5]);
           
      print(values[3], ',', values[4], ',', values[5], ',');
      noStroke();
      fill(myColor);
      rect(10+(loopCount * moveAmount),10, 30, 30, 10);
      loopCount +=1;
      print(myString); 
    }
  }
}

 

Does it work?

Why yes! It took a bit of work to get the pieces together, but it is actually able to display some semblance of RGB color from the sensor. Future improvements would focus on better accuracy from the LDR, namely using more LEDs to provide more light, and an amplifier circuit to get more levels of resolution.

 

Thanks!

 

Here is a video overview: