Computers were not personal until the IBM PC. Earlier, their major role was to crunch numbers, often for hours at a stretch. If a machine could crunch a problem (such as a simulation) in 24 hours instead of 48, scientists would happily submit experiments with higher precision and bigger datasets. This is a never-ending cycle. Supercomputers have been employed for years to assist scientists in such mammoth experiments, in this quest for high performance. And despite all this endeavour, scientists are forced to solve yesterday’s problems using today’s machines to get into the future.
So what exactly is High Performance Computing? Several definitions are in circulation: from parallel computing to cluster computing to supercomputing. Some have even re-expanded (I can’t find a better word) the original acronym ‘HPC’ as High Productivity Computing. Today’s desktops are much more powerful than a cluster of old servers, or even ancient mainframes and supercomputers.
Intel’s co-founder Gordon Moore predicted that the number of transistors on an integrated circuit for minimum component cost doubles every 24 months. This is called ‘Moore’s Law‘. The important point is that the speed (and functionality) of processors is more or less governed by this law. Historically, the trend predicted by the law was sustained with only one processing unit per physical chip. However, the industry has already taken a different route to continue on that path: multi-core processors. For example, Sun’s new chip ‘UltraSPARC T2‘ packs eight cores into one chip, each core handling eight threads, totaling 64 threads. Intel has already demonstrated an 80-core processor. Multi-core processors mark the beginning of a new era.
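Those extra cores do nothing for a program unless it is written with parallelism in mind. As a rough sketch (not tied to any particular chip; the function name and chunk-splitting scheme here are just illustrative), here is how one might spread a numeric workload across all available cores in Python:

```python
import os
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Sum of squares over [lo, hi) -- a stand-in for real numeric work."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n = 100_000
    workers = os.cpu_count() or 1          # one worker per available core
    step = n // workers
    # Split [0, n) into one chunk per worker; the last chunk absorbs the remainder.
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same answer as a serial loop, computed across all cores
```

The catch, of course, is that only workloads that decompose cleanly like this one see a speedup; that is exactly the software question raised below.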
Imagine: 80 cores × 32 processors × 100 computers = 256,000 processing units.
At 4 Gflops per processing unit, that comes to 1,024 Teraflops, which is larger than Blue Gene/L.
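The back-of-the-envelope arithmetic above is easy to check (the per-node counts are the hypothetical configuration from the text, not any real machine):

```python
cores = 80            # cores per processor (Intel's demonstrated chip)
processors = 32       # processors per computer (hypothetical configuration)
computers = 100       # computers in the imagined cluster
gflops_per_unit = 4   # 1 processing unit = 4 Gflops, as assumed above

units = cores * processors * computers
teraflops = units * gflops_per_unit / 1000  # 1 Tflop = 1,000 Gflops

print(units)      # 256000
print(teraflops)  # 1024.0
```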
The future is bright, isn’t it? But let’s not hurry to a conclusion. Are we heading towards a wall?
There are still plenty of issues to consider when one has such a monstrous system to develop for.
- Will software be able to harness such a system?
- And how?
- How about power consumption?
Let’s discuss more on this in the next post.