David Kirk, Chief Scientist at NVIDIA, is on the offensive. It is a charm offensive, and his message seems a compelling one. The summary is "GPGPU is dead, GPU Computing is the future for affordable supercomputing".
Kirk is currently on a world tour giving seminars wherever he can. His presentation is all about how NVIDIA have moved the goalposts in the game of using GPUs (graphics processing units) as components in computing systems. In particular, how NVIDIA graphics cards can provide supercomputer processing power at workstation prices.
For the last couple of years, graphics card manufacturers have been trying to push GPUs as the "must have" add-in coprocessor for turning workstations into supercomputers. The case for this has always rested on some strong arguments. Modern graphics processing, especially under OpenGL, demands a great deal of parallel computation, so GPUs have evolved into highly parallel processing chips: GPUs have been multicore for many years.
If GPUs are so good for parallel processing, why haven't they become the standard way of handling complex processing of general algorithms? GPUs were designed for graphics processing, and they implement it well. The control flow in the chips revolves around the graphics image rendering process. Quite natural, really. However, this control flow does not fit well with general algorithm processing. As anyone who has tried programming a GPGPU system knows, fitting an algorithm to the needs of the graphics control flow leads to tortuous and unnatural code.
So what have NVIDIA done to change things? Two things: they have generalised the control flow model in their GPUs to remove the focus on rendering (but without affecting rendering performance), and they have added a couple of instructions to the instruction set of the GPU to support direct access to off-chip memory. This means that GPUs can now run programs other than graphics processing ones. NVIDIA have labelled this GPU architecture as CUDA. Kirk claims jokingly in his talk that "cuda" is Polish for "miracle" and by implication that CUDA is a miracle technology. Clearly this is not the case, but CUDA does have all the hallmarks of being able to provide NVIDIA with a huge boost to their bottom line.
Programming applications to use a CUDA-enabled GPU requires using a specialist toolchain. Kirk indicates that NVIDIA will be putting these tools out under an open source licence. This will massively increase take-up. For NVIDIA this will mean selling more chips, which is what they are in the business of doing. Amending their chip architecture so that it has only a beneficial effect on their core market of graphics cards, while at the same time opening up new markets, is clearly a master stroke. Dropping GPGPU and pushing CUDA is a strategy that can only improve NVIDIA's prospects.
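By way of illustration (this example is not from Kirk's talk), the canonical first CUDA program is a "vector add": the kernel is written in C with a few extensions, each GPU thread handles one array element, and the host explicitly copies data to and from the GPU's off-chip memory. Names and launch parameters here are illustrative defaults; building it requires NVIDIA's nvcc compiler and a CUDA-capable GPU.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: ordinary C code, but executed in parallel by many GPU threads.
// Each thread computes one element of the result array.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's index
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float ha[n], hb[n], hc[n];
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    // Allocate device (off-chip GPU) memory and copy the inputs across.
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch 4 blocks of 256 threads: one thread per array element.
    vecAdd<<<4, 256>>>(da, db, dc, n);

    // Copy the result back and inspect one element.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("hc[10] = %f\n", hc[10]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}
```

The point of the sketch is the shape of the model: the algorithm is expressed as a scalar computation per data element, and the hardware supplies the parallelism, rather than the programmer contorting the algorithm into texture operations as GPGPU required.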
In answer to questions from the floor, Kirk made it clear that he thinks CUDA should become a standard, but it was not clear if he meant that NVIDIA should submit CUDA to a standards organisation, or whether he just meant all the other vendors of GPUs (and ATI in particular) should implement CUDA in their products.
NVIDIA have, by introducing CUDA, radically changed the place of GPUs in computing systems. Yes, they will remain graphics processing engines, but they have now become far more flexible. GPGPU is dead and gone. The GPU as a special-purpose processor is now the norm. Other manufacturers will have to follow suit if they are not to cede a monopoly to NVIDIA. If they implement CUDA, then NVIDIA will have set the standard by creating a de facto standard. If they choose another mechanism, then we will have a "format war", the outcome of which would be uncertain.
GPU Computing is the label NVIDIA are giving to the new way of using CUDA-enabled GPUs. The processors have quite specific structures and so will not replace CPUs in general. However, where an algorithm maps well to the CUDA-enabled GPU architecture, using such a chip increases performance by two or three orders of magnitude. Calculations that used to take years now take days; calculations that used to take days now take minutes; calculations that used to take hours are now interactive.
The world has changed with CUDA. Having a supercomputer on your desktop is now easily possible. NVIDIA have created a new market for their chips and Intel, AMD, Sun, and IBM will have to deal with this. There may be a multicore revolution in CPUs, but the GPU as co-processor revolutionises things even more.
Posted: 30th April 2008 | By Louis Savain:
Nvidia is an excellent company with fine products and I admire them greatly. However, the idea that a CUDA-enabled GPU is anything more than a C-friendly SIMD processor is nonsense on the face of it. It's true that Nvidia's GPU is a superfast, fine-grain processor, but its performance is highly dependent on using data-intensive benchmarks that call for SIMD processing. Put it in an environment that requires multiple instructions at once or that has high data dependencies and it will choke. SIMD processors are not the answer to general parallel computing and David Kirk knows it. The multicore industry is undergoing an acute crisis, and none of Nvidia, Intel, AMD, or the other major players seems to have found a permanent and viable solution. Of course, coarse-grain, thread-based MIMD multicore processors are not the answer either. What is needed is a true universal architecture that combines the strengths of both MIMD and SIMD processors while eliminating their weaknesses. Google "Nightmare on Core Street" for a comprehensive exposé on the multicore problem.
Posted: 30th April 2008 | By grxbstrd:
- "GPGPU is dead?" Are you sure that's the right quote?
- Louis S - If you read other interviews with Kirk, he never says the CPU has no place in a GPU-centric compute environment; in fact, it's quite the opposite. He believes the GPU must co-exist with the CPU for the reasons you point out.
The messages above were all contributed by IT-Director.com readers. Whilst we take care to remove any posts deemed inappropriate, we can take no responsibility for these comments. If you would like a comment removed please contact our editorial team.
We automatically stop accepting comments 180 days after a post is published. If you would like to know more about this subject, please contact us and we'll try to help.
Published by: electronicdawn Ltd.