Researchers developed a camera that captures images at a trillion frames per second using a streak camera and a laser that generates light pulses. (Image Credit: M. Scott Brauer/MIT)


In 2011, MIT scientists developed a camera capable of capturing images at a trillion frames per second. That feat allowed the team to record video of light, which travels at roughly 670 million miles per hour, passing through a one-liter bottle filled with water. The event itself lasted only about a nanosecond, but the camera stretched it into roughly twenty seconds of footage. Because a single exposure captures only a narrow one-dimensional slice of the scene, the camera needed millions of repeated scans to reproduce each image.
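
As a rough sanity check on those figures, the short Python sketch below works through the arithmetic: at a trillion frames per second each frame spans one picosecond, light covers about 0.3 mm per frame, and a roughly one-nanosecond event fills about a thousand frames, which stretches to about twenty seconds when played back at an assumed 50 frames per second (the article does not state the playback rate).

```python
# Back-of-the-envelope arithmetic behind the article's figures.
# The 50 fps playback rate is an assumption; everything else follows
# from the stated numbers.
FRAME_RATE = 1e12              # camera frame rate, frames per second
SPEED_OF_LIGHT_M_S = 3.0e8     # approximate speed of light, m/s
PLAYBACK_FPS = 50              # assumed playback rate

frame_interval_s = 1.0 / FRAME_RATE                                  # 1 ps per frame
light_per_frame_mm = SPEED_OF_LIGHT_M_S * frame_interval_s * 1000.0  # ~0.3 mm

event_duration_s = 1e-9                                  # ~1 ns to cross the bottle
frames_captured = event_duration_s * FRAME_RATE          # ~1,000 frames
playback_seconds = frames_captured / PLAYBACK_FPS        # ~20 s of slow motion

print(f"Frame interval: {frame_interval_s:.0e} s")
print(f"Light travels ~{light_per_frame_mm:.1f} mm per frame")
print(f"A {event_duration_s:.0e} s event spans ~{frames_captured:.0f} frames")
print(f"Played back at {PLAYBACK_FPS} fps: ~{playback_seconds:.0f} s of video")
```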


The technology relies on a streak camera deployed in an unusual way. Photons enter the camera through a narrow slit that serves as its aperture and are converted into electrons. These electrons then pass through an electric field that deflects them in a direction perpendicular to the slit. Because the field changes very quickly, electrons corresponding to later-arriving photons are deflected more than those corresponding to earlier arrivals.


This process produces a two-dimensional image, but only the dimension corresponding to the direction of the slit is spatial; the other dimension, corresponding to the degree of deflection, is time. The image therefore represents the arrival times of photons passing through a one-dimensional slice of space. The camera was originally designed for experiments in which light passes through or is scattered by a chemical sample.
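
To make that geometry concrete, here is a small, purely illustrative NumPy sketch of the idea: each detected photon has a position along the slit and an arrival time, the arrival time is converted into a deflection, and the photons are accumulated into a 2D array whose columns are space and whose rows are time. All parameter values are invented for illustration and are not the MIT hardware's.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PHOTONS = 100_000
SLIT_BINS = 256        # spatial axis: position along the slit
TIME_BINS = 512        # deflection axis: photon arrival time
SWEEP_NS = 1.0         # the deflecting field ramps over ~1 ns (assumed)

# Each detected photon: a position along the slit and an arrival time.
slit_position = rng.uniform(0.0, 1.0, N_PHOTONS)                # normalized 0..1
arrival_time_ns = rng.triangular(0.0, 0.4, SWEEP_NS, N_PHOTONS)

# The rapidly changing field deflects the photoelectrons perpendicular to the
# slit, so later arrivals land farther along the deflection axis.
deflection = arrival_time_ns / SWEEP_NS                         # normalized 0..1

# Accumulate the streak image: rows = deflection (time), columns = slit position.
streak_image, _, _ = np.histogram2d(
    deflection, slit_position,
    bins=(TIME_BINS, SLIT_BINS), range=((0.0, 1.0), (0.0, 1.0))
)

print(streak_image.shape)   # (512, 256): one time axis, one spatial axis
```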


To create ultra-slow-motion videos, however, the team needed to repeat the same process over and over. This meant passing a light pulse through the bottle repeatedly while repositioning the streak camera to gradually build up a two-dimensional image. An array of specialized optics and mechanical controls keeps the camera synchronized with the laser that generates the light pulses. Light may take only a nanosecond to travel through the bottle, but collecting the data for the video takes about an hour.
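
The article does not describe the control software, but the acquisition loop it implies might be sketched as follows, with `move_mirror_to` and `capture_streak_image` as hypothetical stand-ins for the actual hardware control:

```python
import numpy as np

rng = np.random.default_rng(1)

def move_mirror_to(slice_index):
    """Placeholder: reposition the optics so the slit sees a new 1D slice."""
    pass

def capture_streak_image(time_bins=512, slit_bins=256):
    """Placeholder: one synchronized laser pulse plus streak-camera exposure."""
    return rng.poisson(1.0, size=(time_bins, slit_bins))

def acquire_dataset(num_slices, exposures_per_slice):
    """Collect one streak image per (slice position, laser pulse) pair."""
    dataset = []
    for slice_index in range(num_slices):
        move_mirror_to(slice_index)
        for _ in range(exposures_per_slice):
            # Laser and camera are triggered together, so every pulse is
            # recorded at the same point in its flight.
            dataset.append((slice_index, capture_streak_image()))
    return dataset

# Tiny run for illustration; the real acquisition takes about an hour and
# yields hundreds of thousands of records.
data = acquire_dataset(num_slices=8, exposures_per_slice=4)
print(len(data))   # 32 (slice_index, streak_image) pairs
```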


By the end of that hour, the team has collected hundreds of thousands of data sets, each mapping the one-dimensional positions of photons against their arrival times. The team then developed algorithms that stitch the raw data together into a sequence of two-dimensional images.
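
The article does not detail those algorithms, but the core reindexing step can be illustrated with a bare-bones sketch under an assumed data layout: average the repeated exposures for each slice position, then reorder the axes so that each frame is a full 2D picture at one instant.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the raw data: (slice_index, streak_image) pairs, where each
# streak image maps slit position (columns) against arrival time (rows).
NUM_SLICES, TIME_BINS, SLIT_BINS = 8, 512, 256
dataset = [(s, rng.poisson(1.0, size=(TIME_BINS, SLIT_BINS)))
           for s in range(NUM_SLICES) for _ in range(4)]

def rebuild_frames(dataset, num_slices, time_bins, slit_bins):
    """Average repeated exposures per slice, then reorder so time leads."""
    sums = np.zeros((num_slices, time_bins, slit_bins))
    counts = np.zeros(num_slices)
    for slice_index, image in dataset:
        sums[slice_index] += image
        counts[slice_index] += 1
    averaged = sums / counts[:, None, None]    # mean over repeated exposures
    # frames[t] is a full 2D picture of the scene at time step t.
    return np.transpose(averaged, (1, 0, 2))

frames = rebuild_frames(dataset, NUM_SLICES, TIME_BINS, SLIT_BINS)
print(frames.shape)   # (512, 8, 256): time, slice (vertical), slit (horizontal)
```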


Since the imaging system needs multiple passes to produce a video, it can't record non-repeatable events. Practical applications are therefore limited to situations in which scattered light provides useful information, such as analyzing the physical structure of manufactured materials.


Have a story tip? Message me at: http://twitter.com/Cabe_Atwell