By chance a few weeks after I wrote about sharing the RF spectrum, I came across an article in IEEE Communications Magazine about using TV white space (unused TV channels). The same week IEEE Spectrum ran an article about using terahertz frequencies for communication and imaging. RF spectrum issues are becoming a hot topic. (The image to the right is from the article about white space. It dramatically shows with pink shading the areas in Europe where channel 21 is used for TV. In most locations this spectrum goes to waste.)
When I mentioned spectrum management in passing to a very intelligent software engineering student last week, he said he didn’t understand how spectrum could be scarce. “Can’t we just keep going up in frequency until we run into IR remote controls?” he asked.
Since this is going to be a big issue for some time, I’d like to use this space for a basic review.
How much RF spectrum does it take to get something done like downloading low-resolution streaming video?
If you have a strong enough signal and a good enough transmitter and receiver, there’s no limit to how much data you can send over a small segment of RF spectrum. In typical systems, you only have so much signal and you get about 1 Mbit/s of data for every 1 MHz of spectrum. 1 Mbit/s is roughly the rate of a slow but usable Internet connection. It takes about 10 kHz, that’s 1/100th of 1 MHz, to transmit voice with the quality level of AM broadcast radio.
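The "no limit" claim comes from the Shannon–Hartley theorem, which says capacity is C = B·log2(1 + S/N): capacity grows without bound as signal strength grows, but for typical signal levels you get roughly 1 bit/s per Hz. A quick sketch (the function name is mine):

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley limit: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# 1 MHz of spectrum at 0 dB SNR (signal power equal to noise power)
# gives exactly the 1 Mbit/s rule of thumb:
print(shannon_capacity_bps(1e6, 1.0))          # 1000000.0

# A much stronger signal (30 dB SNR) pushes the same 1 MHz to ~9.97 Mbit/s:
print(shannon_capacity_bps(1e6, 10 ** (30 / 10)))
```

Note that the payoff is logarithmic: every doubling of capacity in a fixed bandwidth demands roughly a squaring of the signal-to-noise ratio, which is why real systems stay near a few bits per second per hertz.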
How many MHz of spectrum exists for communication?
Signals below 30 MHz propagate over long distances, which makes them poorly suited to local communication. For example, during the daytime when sunspots are active, all 40 channels of CB radio, a service intended for local communication at 27 MHz, become a cacophony of hundreds of users from thousands of kilometers away.
The wavelength [in meters] of a radio wave is 300 / frequency [in MHz]. For an antenna to be efficient, its size should be at least half a wavelength, so frequencies below 150 MHz don’t lend themselves to handheld antennas. Hobbyists sometimes call handheld antennas for these frequencies “fishing poles”.
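The wavelength formula above makes the antenna-size problem easy to see in a few lines (the function name is mine):

```python
def wavelength_m(freq_mhz: float) -> float:
    """Wavelength in meters for a frequency in MHz (c = 300 m*MHz)."""
    return 300.0 / freq_mhz

# Half-wavelength antenna sizes: 5 m at 30 MHz shrinks to about
# 6 cm at Wi-Fi's 2450 MHz.
for f in (30, 150, 300, 2450):
    half = wavelength_m(f) / 2
    print(f"{f} MHz: half-wavelength antenna = {half:.2f} m")
```

At 150 MHz the half-wavelength is already a full meter, which is why that's roughly the floor for practical handheld antennas.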
You can get inexpensive chipsets that operate up to around 6 GHz now. So there are several thousand MHz of spectrum theoretically available for inexpensive handheld devices of all types (phones, GPS, police/fire radios, broadcast bands, TV, Wi-Fi, etc.).
We can theoretically keep using higher and higher frequencies as needed. What’s the disadvantage to using higher frequencies?
- Path loss - Even in a vacuum with no obstructions, a signal carries worse at higher frequencies: between antennas of fixed gain, free-space loss grows with the square of the frequency.
- Circuit complexity - In an RF circuit, any physical feature smaller than 1/10th of a wavelength can be treated as a single element. Larger features have a different part of the wave at different locations, so they cannot be considered one element. At 300 MHz you could connect a 1 cm wire to an antenna and ignore the effects of the wire; at 3000 MHz, the wire begins to become part of the antenna. As frequency increases, the effects of small unwanted inductances and capacitances in a circuit grow, coax cable used to carry RF signals becomes lossy, and circuit board materials begin absorbing RF energy and acting as part of the circuit.
- Absorption / Reflection - As frequency increases and wavelength decreases, smaller objects can reflect the wave, and many materials block higher frequencies more than lower ones. It makes intuitive sense that objects that are opaque to light are also opaque to higher-frequency radio waves. Many people confuse absorption with path loss; they are completely separate phenomena that both reduce the range of higher-frequency systems.
Can higher frequencies transmit data faster?
No. A given data rate requires roughly the same bandwidth regardless of carrier frequency; the real advantage of higher frequencies is that wide contiguous blocks are easier to find there. Suppose you need 0.5 GHz of spectrum, say, to transmit 500 Mbit/s of data. If you use a frequency around 2 GHz, you’d need a block from 1.75 GHz to 2.25 GHz. That would not be possible, since those frequencies are allocated to other services. If you can use a frequency around 61 GHz, there is a 61.0–61.5 GHz band allocated to ISM use. Moreover, you’re less likely to experience interference from other users there because the signals don’t carry as far.
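Another way to see why the wide block fits comfortably at 61 GHz is fractional bandwidth, i.e. the block's width as a share of the carrier frequency (the function name here is mine):

```python
def fractional_bandwidth(block_ghz: float, center_ghz: float) -> float:
    """Bandwidth expressed as a fraction of the carrier frequency."""
    return block_ghz / center_ghz

# The same 0.5 GHz block is a quarter of a 2 GHz carrier,
# but well under 1% at 61.25 GHz.
print(f"{fractional_bandwidth(0.5, 2.0):.0%}")    # 25%
print(f"{fractional_bandwidth(0.5, 61.25):.1%}")  # 0.8%
```

A 25% slice of the crowded low bands would displace many existing services; a sub-1% slice at 61 GHz displaces almost nothing.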
Why do non-scientific media sometimes report that unused TV channels could be used for “Wi-Fi on steroids”?
The TV channels all sit below 1 GHz, while Wi-Fi uses the 2.4 GHz band and the 5–6 GHz range. The TV bands therefore come with all the benefits and detriments of a lower frequency: the longer range and better building penetration are the “steroids,” but each TV channel offers only 6–8 MHz of bandwidth.
Policy makers are just beginning to realize the RF spectrum is a resource akin to land or air. In a densely populated area, you may need strict rules about how much you can pollute the air. The market price for land is much higher in urban areas. In rural regions land is much cheaper, and less regulation is needed to keep people from interfering with each other’s lives.
This same situation exists for the RF spectrum. I predict that protocols will be deployed that allow generous use of the RF spectrum if they detect little activity. When they detect higher activity, more restrictions will go into effect. Current examples of this are a) Dynamic Frequency Selection (DFS), which requires that Wi-Fi devices monitor certain channels for radar and vacate them if radar is detected and b) proposed whitespace protocols that use geolocation and listening for TV signals to determine if a TV channel is unused.
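The whitespace approach in (b) can be sketched as a toy decision rule. Everything here — the function name, the sensing threshold, the database flag — is illustrative, not taken from any actual standard:

```python
def channel_available(db_says_vacant: bool, sensed_power_dbm: float,
                      sensing_threshold_dbm: float = -114.0) -> bool:
    """Toy whitespace check: a TV channel is usable only if the
    geolocation database says it is vacant AND local sensing hears
    no signal above the threshold. Threshold is illustrative only."""
    return db_says_vacant and sensed_power_dbm < sensing_threshold_dbm

print(channel_available(True, -120.0))   # True: database clear, nothing heard
print(channel_available(True, -90.0))    # False: a TV signal is on the air
print(channel_available(False, -130.0))  # False: database says occupied
```

Requiring both checks to pass is the conservative choice: the database protects licensed transmitters the device can't hear, and sensing protects ones the database doesn't know about.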
This needs to be taken a step further to allow devices to negotiate a price for the use of part of the spectrum for certain time slices. Mobile phone service providers could handle this and make it transparent to users. Despite the increasing need for spectrum, in most locations there is plenty of spectrum available.