Supercomputing speed can worsen due to more processor cores on chips, say experts
January 15th, 2009 - 1:27 pm ICT by ANI
Washington, January 15 (ANI): A new study suggests that scientists trying to increase the speed of supercomputers merely by increasing the number of processor cores on individual chips may unexpectedly be slowing the computing performance for many complex applications.
The suggestion is based on the observations made by researchers associated with a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin company, for the U.S. Department of Energy’s National Nuclear Security Administration.
A Sandia team simulated key algorithms for deriving knowledge from large data sets.
The researchers observed that the simulations showed a significant increase in speed going from two to four cores, but an insignificant increase from four to eight cores.
According to them, exceeding eight cores causes a decrease in speed.
The team adds that sixteen cores perform barely as well as two, and that a steep decline is registered as more cores are added beyond that.
The problem is a lack of memory bandwidth, combined with contention between processors over the memory bus available to each processor (the set of wires used to carry memory addresses and data to and from the system RAM).
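The scaling pattern the researchers describe is roughly what a simple bandwidth-bound model predicts. The sketch below is purely illustrative: the function name, bus capacity, and contention factor are assumed values chosen to reproduce the qualitative trend, not parameters from Sandia’s simulations.

```python
def modeled_speedup(cores, bus_bandwidth=6.0, contention=0.15):
    """Toy model of speedup for cores sharing one memory bus.

    Each core demands one unit of memory traffic per unit time;
    the shared bus delivers at most `bus_bandwidth` units, and
    cores queuing on the saturated bus add arbitration overhead.
    All parameter values here are illustrative assumptions.
    """
    demand = float(cores)
    served = min(demand, bus_bandwidth)          # bus caps total throughput
    excess = max(0.0, demand - bus_bandwidth)    # demand the bus cannot serve
    return served / (1.0 + contention * excess)  # waiting cores add overhead

for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} cores -> modeled speedup {modeled_speedup(n):.2f}")
```

With these assumed parameters the model shows the same shape as the study’s curve: near-linear gains up to four cores, a marginal gain at eight, and a decline at sixteen back toward two-core performance, because the saturated bus serves no more data while contention overhead keeps growing.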
The researchers explained the problem with a supermarket analogy: with two clerks at the same checkout counter processing one’s groceries instead of one, the checkout should go faster.
If the clerks do not both have access to the groceries, however, they do not necessarily help the process, and may get in each other’s way.
Similarly, according to the researchers, one would expect that if one core is fast, two would be faster, four still faster, and so on.
However, say Sandia’s Richard Murphy, Arun Rodrigues and former student Megan Vance, the lack of immediate access to individualized memory caches (the “food” of each processor) slows the process down instead of speeding it up once the number of cores exceeds eight.
“To some extent, it is pointing out the obvious: many of our applications have been memory-bandwidth-limited even on a single core. However, it is not an issue to which industry has a known solution, and the problem is often ignored,” says Rodrigues.
“The difficulty is contention among modules. The cores are all asking for memory through the same pipe. It’s like having one, two, four, or eight people all talking to you at the same time, saying, ‘I want this information.’ Then they have to wait until the answer to their request comes back. This causes delays,” says James Peery, director of Sandia’s Computations, Computers, Information and Mathematics Center.
“The original AMD processors in Red Storm were chosen because they had better memory performance than other processors, including other Opteron processors. One of the main reasons that AMD processors are popular in high-performance computing is that they have an integrated memory controller that, until very recently, Intel processors didn’t have,” says Ron Brightwell.
Multicore technologies are considered a possible savior of Moore’s Law, the prediction that the number of transistors that can be placed inexpensively on an integrated circuit will double approximately every two years.
“Multicore gives chip manufacturers something to do with the extra transistors successfully predicted by Moore’s Law. The bottleneck now is getting the data off the chip to or from memory or the network,” Rodrigues says. (ANI)