(Engineering On Friday comic: "CPU, GPU, a Toast to Us" by Cabe Atwell)

If you're reading this, chances are you're using a laptop or PC to do so. Since this article is on an engineering-oriented website, you most likely have some idea of what components are housed inside either one of them. I'm talking primarily about the CPU (Central Processing Unit) and the GPU (Graphics Processing Unit), which make up the computer's brain and muscle, respectively.


The CPU processes complex code and executes instructions based on whatever app or piece of software is running. However, it has a tough time executing code that involves intensive 3D images or graphics. That is where the GPU comes in: it offloads most of the image-related work from the host CPU and frees it up to crunch 1's and 0's for other tasks. Companies like AMD and NVIDIA have even combined the two on a single die (or chip). While they work in tandem with software, though, they do not really communicate with each other. Sad, I know.
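
To make that division of labor a little more concrete, here is a toy sketch in CUDA C++. It is purely illustrative and not tied to any product mentioned here: the brighten_cpu() loop shows the CPU grinding through pixels one at a time, while the brighten_gpu() kernel hands the same job to thousands of GPU threads at once.

// Hypothetical example: the same pixel operation done serially on the CPU
// and offloaded to the GPU, where thousands of threads run it at once.
#include <cuda_runtime.h>

__global__ void brighten_gpu(float *pixels, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) pixels[i] = fminf(pixels[i] * 1.2f, 1.0f);   // one thread per pixel
}

// The serial CPU version, shown only for contrast (not called below).
void brighten_cpu(float *pixels, int n) {
    for (int i = 0; i < n; ++i)
        pixels[i] = pixels[i] * 1.2f > 1.0f ? 1.0f : pixels[i] * 1.2f;
}

int main() {
    const int n = 1920 * 1080;                               // one HD frame of pixels
    float *d_pixels;
    cudaMalloc((void **)&d_pixels, n * sizeof(float));       // frame lives in GPU memory
    cudaMemset(d_pixels, 0, n * sizeof(float));
    brighten_gpu<<<(n + 255) / 256, 256>>>(d_pixels, n);     // CPU hands the job off...
    cudaDeviceSynchronize();                                 // ...and is free until it needs the result
    cudaFree(d_pixels);
    return 0;
}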


All is not lost, though: engineers from North Carolina State University have found a way to overcome that problem and give the hybrid processor a roughly 20% boost in performance. Dr. Huiyang Zhou, an associate professor of electrical and computer engineering, and his team accomplished this by having the GPU portion of the chip handle the heavy computations while the CPU 'fetches' the data the GPU needs from system memory. Both can pull data from system memory at roughly the same speed, but the GPU crunches the numbers faster, while the CPU is quicker at working out which data the GPU will need next, so having the CPU pre-fetch that data makes the whole process more efficient, according to Dr. Zhou. In recent tests, the team found that the 'fused' chips increased their performance by 21.4% on average, which is no small feat, as any overclocker will tell you. Some tasks even rocketed over 114% faster.
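
Here is a rough sketch of that idea, again in CUDA C++. It is not the NC State team's code; the scale() kernel, the warm_cache() helper, and the zero-copy buffers are stand-ins I've made up for the shared memory system of a fused chip. The CPU thread simply runs ahead through the data the GPU kernel is about to read, pulling it out of system memory the way the researchers' prefetching scheme warms the shared cache.

// Illustrative only: a CPU "helper thread" fetching data ahead of the GPU.
// Assumes a CUDA-capable machine; on a fused APU the two sides would share
// a last-level cache, which is what makes this kind of warming worthwhile.
#include <cuda_runtime.h>
#include <thread>
#include <cstdio>

__global__ void scale(const float *in, float *out, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * factor;       // GPU does the number crunching
}

// CPU-side helper: touch one element per 64-byte cache line so the data the
// GPU is about to read gets pulled from system memory toward the caches.
void warm_cache(const float *data, int n) {
    volatile float sink = 0.0f;
    for (int i = 0; i < n; i += 16)
        sink += data[i];
    (void)sink;
}

int main() {
    const int n = 1 << 20;
    float *in_host, *out_host, *in_dev, *out_dev;

    // Zero-copy pinned buffers: CPU and GPU address the same memory, a rough
    // stand-in for the shared memory of a fused CPU/GPU chip.
    cudaSetDeviceFlags(cudaDeviceMapHost);
    cudaHostAlloc((void **)&in_host,  n * sizeof(float), cudaHostAllocMapped);
    cudaHostAlloc((void **)&out_host, n * sizeof(float), cudaHostAllocMapped);
    cudaHostGetDevicePointer((void **)&in_dev,  in_host,  0);
    cudaHostGetDevicePointer((void **)&out_dev, out_host, 0);
    for (int i = 0; i < n; ++i) in_host[i] = 1.0f;

    // GPU computes while a CPU thread runs ahead fetching the data it needs.
    std::thread prefetcher(warm_cache, in_host, n);
    scale<<<(n + 255) / 256, 256>>>(in_dev, out_dev, 2.0f, n);
    prefetcher.join();
    cudaDeviceSynchronize();

    printf("out[0] = %f\n", out_host[0]);     // expect 2.0
    cudaFreeHost(in_host);
    cudaFreeHost(out_host);
    return 0;
}

Again, this is only an analogy for the division of labor the researchers describe; on their simulated APU the prefetching targets the shared L3 cache, which is why the fused chips are the interesting case.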


(Left) AMD Fusion APU; (Right) partial AMD Fusion desktop roadmap (via AMD)


The research was partly funded by AMD, and the experiment was run in simulation on a future Accelerated Processing Unit (APU) design in which the CPU and GPU share an L3 cache. The technique may become publicly available rather soon.


Cabe

http://twitter.com/Cabe_e14


See more Engineering On Friday comics in the Engineering Life group.