As we are probably all aware by now, multi-core processors are the new black. Increasing the clock speed of the processor in your computer has been replaced by increasing the number of cores. The aim is still better performance, so that bigger and better applications can run. However, the sales pitch could be just vacuous hype.
In the "good old days", you changed the clock speed of the processor and code executed faster. A 2MHz processor ran code twice as fast as a 1MHz processor of the same architecture. It is true that you had to ensure that access to memory was faster and the disks had to get faster as well, in order for that faster processor to work, but that was what people did. Everything got faster. Problematically, it also got hotter. There had to be a point at which it was not going to be possible to get rid of the heat, and we have now got there: people do not want laptops that burn holes in their legs!
So how can we make processors perform better if we cannot make them go faster? The answer, in principle, is "use parallelism". So we get to a world where we have 2, 4, 8 or 64 cores in a single processor chip: in effect, 2, 4, 8 or 64 more or less independent processors. There may be more of them, but they run slower, so they do not generate as much heat. Great for hardware manufacturers, but not very good for users. Well, it is probably good for high performance computing people and large server operators, since they have been playing the parallelism game for years. But it is not good for end users. Why? Because their applications will run slower: each individual core running an application is slower than the single fast processor it replaced.
The software that most people use consists of web browsers, word processors and, possibly, spreadsheets. These products were implemented for execution on single processor systems. They may use multithreading, but underneath there is generally an assumption of a single processor and multitasking. If they can exploit the real parallelism on offer in multicore systems, it is probably by fluke and not by design. The point is that the software that used to run on single processors that got faster and faster is now running on multicore processors whose individual cores are slower.
The situation itself is a problem, but it is not the only one. The way systems are being marketed and sold highlights the shift from single processor to multicore processor while ignoring the actual speed. In the "good old days" marketing was driven by the clock speed you advertised: 1.6GHz was better than 1.2GHz, 2GHz better than 1.6GHz, and so on. The marketing was simplistic, in that it ignored bus speed, memory speed, disk speed and so forth, but there was an alignment between the marketing and reality. Faster processors did indeed tend to run the applications people actually used faster.
Now the systems sales pitch has shifted to the number of cores you have: 2 cores better than 1, 4 cores better than 2, and so on. For the HPC cognoscenti there is clear truth here. However, the average user of the average system will see little or no increase in the performance of the applications they actually run, no matter how many cores their system has. We have a complete mismatch between the marketing hype and any effect the user might see. How long will it be before the general public gets wise to this problem? And what will the effect on system sales be when the buying public does get wise to the sleight of marketing that has been perpetrated on them?
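The argument here is essentially Amdahl's law, though the article does not name it: if only a fraction of a program can run in parallel, extra cores stop helping very quickly. A minimal sketch (the function name and the example fractions are my own illustrative choices):

```python
# Amdahl's law: if a fraction p of a program's work can be parallelised,
# then n cores give a best-case speedup of 1 / ((1 - p) + p / n).
def amdahl_speedup(p, n):
    """Upper bound on speedup for parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# A largely serial desktop application (say 10% parallelisable)
# barely benefits, no matter how many cores are added:
print(round(amdahl_speedup(0.1, 2), 2))    # ~1.05
print(round(amdahl_speedup(0.1, 64), 2))   # ~1.11
# An HPC-style workload that is 95% parallel scales far better:
print(round(amdahl_speedup(0.95, 64), 2))  # ~15.42
```

This is why the same core count can be a genuine win for HPC workloads and a non-event for a typical desktop application.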
There are two ways out, of course. The software could be rewritten so that it really does make proper use of all the cores available; and remember, we might only be talking about 2, 4 or 8 cores now, but we are going to be talking about 80 cores next year and probably 2048 cores in the not too distant future. The alternative is a rethink of how systems are marketed, so that marketing is realigned with real user expectations. That rethink is only likely to come through pressure from the buying public, when they realise that the reality behind the hype is no improvement in performance at all.
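As a sketch of what "rewritten to make proper use of the cores" can look like, here is a minimal example using Python's standard multiprocessing module to spread independent work items across worker processes; the workload, function names and worker count are illustrative assumptions, not anything from the article:

```python
from multiprocessing import Pool

def square(x):
    # A stand-in for any CPU-bound work item that is
    # independent of the other items.
    return x * x

def parallel_squares(values, workers=4):
    # Distribute the items across a pool of worker processes,
    # so each can run on its own core.
    with Pool(workers) as pool:
        return pool.map(square, values)

if __name__ == "__main__":
    print(parallel_squares(range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The hard part in real applications is not the mechanics shown here but finding enough genuinely independent work, which is exactly why retrofitting parallelism onto browsers and word processors is difficult.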
Posted: 4th February 2008 | By ben:
Geez, you'd think they'd do some research first before ranting. End user applications have been multithreaded for over a decade now. A typical user program spends much of its time blocked on thread locks, waiting for all of the threads to get to particular points in processing. Adding multiple cores will speed up almost any user application, not just server applications or high end calculations.
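The pattern ben describes, threads waiting for each other to reach particular points in processing, corresponds to barrier synchronisation. A minimal illustrative sketch using Python's standard threading module (the phases and values are invented for the example):

```python
import threading

N = 4                           # number of worker threads
barrier = threading.Barrier(N)  # all N must arrive before any proceeds
results = []
lock = threading.Lock()

def worker(i):
    # Phase 1: each thread does its own independent work.
    value = i * i
    # Wait here until all N threads have finished phase 1.
    barrier.wait()
    # Phase 2: only runs once every thread has arrived at the barrier.
    with lock:
        results.append(value)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 4, 9]
```

Whether such threads actually run in parallel on separate cores, rather than being time-sliced on one, depends on the runtime and the hardware; that distinction is the crux of the disagreement in this thread.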
Posted: 22nd February 2008 | By Darren Eckes:
Yes, but now you only have 2.4GHz of CPU power working per core, vs the old 3.4GHz HT CPUs.
Posted: 22nd February 2008 | By ben:
which is faster?
A race car transporting 40 people one at a time, or 4 normal cars each moving one person at a time? The race car would have to be more than 4 times faster than a normal car for its total throughput to exceed theirs. In your example, the 3.4GHz core is only about 42% faster than the 2.4GHz core, when it would need to be four times as fast just to maintain the same total throughput.
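The arithmetic behind the comparison in this thread can be checked directly, using the clock speeds quoted above and treating clock speed as a rough proxy for throughput (which it is not in general, so take the numbers as illustrative):

```python
old_single = 3.4    # GHz, the old single hyper-threaded CPU
new_per_core = 2.4  # GHz per core on the multicore part
cores = 4

# Total capacity of the multicore part, if all cores are kept busy.
aggregate = new_per_core * cores       # 9.6 GHz

# How much faster the old single core is per core: 3.4/2.4 ~ 1.42.
ratio = old_single / new_per_core

# The factor the single core would need to match the aggregate: 4x.
needed = aggregate / old_single * ratio

print(aggregate, round(ratio, 2), needed)
```

The catch, of course, is the "if all cores are kept busy" assumption, which is precisely what the original article disputes for typical end user software.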
The messages above were contributed by IT-Director.com readers.
Published by: electronicdawn Ltd.