From a disciple of evolution

Archive for the ‘Technology’ Category

Omnipresence of Evolution – Imagining Evolution of Microprocessors

Yesterday I read about Intel’s upcoming Xeon Phi co-processor with 50+ x86-compatible cores. As per the graphics, the co-processor will provide a teraflop of performance and occupy just one PCIe slot. It’s great to see that Intel and other vendors can deliver such phenomenal computing power by adding cores to a single processor chip. We have always known that GPUs, with their large number of cores, offer phenomenal power in SIMD mode, and GPGPU has unleashed that power for work beyond graphics. The Xeon Phi, however, does not appear to be restricted to strict SIMD. After reading this news, I confess I am tempted to look at the field holistically and blog my muddled thoughts in some order. So what is the theme? The theme is ‘computation is following a biologically evolutionary path’. First, I will try to articulate the past, the present and, vaguely, the future of microprocessors. Later, I will attempt to identify similarities with biological evolution.

Evolution of Intel and AMD microprocessors (as representatives) till today –

  1. Intel 4004 – First single-chip microprocessor.
  2. Intel 8086 – First microprocessor with the x86 instruction set.
  3. Intel 80386 – Intel’s first 32-bit microprocessor, built for multitasking.
  4. Intel 80486 – Microprocessor with an inbuilt math co-processor. This marks the beginning of the heterogeneous micro-architecture era, although the idea did not go much further for a decade.
  5. Intel Pentium – Superscalar implementation of the x86 architecture, with multiprocessing support.
  6. Intel Pentium D – Intel’s first dual-core desktop microprocessor.
  7. Intel Pentium M – Introduction of energy-efficiency features.
  8. Intel Core – Scalable and energy-efficient microarchitecture (to date, it supports up to eight cores).
  9. AMD APU – First microprocessor with GPU cores built into the same die. This rejuvenates the heterogeneous micro-architecture idea, which now has momentum.
  10. Intel Xeon Phi – Intel’s first many-integrated-core implementation built from x86 cores. Moreover, it fits in the system as an add-on card running its own little Linux OS.

In the future, it may continue as –

  1. 1000+ core microprocessors, where each core is a simple one (most likely an ARM variant).
  2. More hybrid processors. For example, a processor with ‘8 cores’ made up of ‘4 x86 cores’, ‘1 ARM core’, ‘1 GPU core’, ‘2 ASIC cores’ and so on (see the sketch after this list).
  3. Reconfigurable microprocessors – Processors with an emulation mode. For example, a processor could be configured from the system BIOS to present all ‘x86 cores’ in the morning and all ‘ARM cores’ in the evening.
  4. Upgradable instruction sets. For example, I could upgrade from a Core i5 to a Core i7, and do so for only a few cores. It appears that an upgradeable instruction set is required for reconfiguration, but not strictly. Reconfigurable microprocessors and upgradeable-instruction-set microprocessors may follow one another in quick succession, and the order of their arrival depends on the level of flexibility achieved for each requirement.
  5. Computing dust. Processors would grow smaller and smaller, to the extent that a ‘not-so-advanced’ processor is the size of a grain or even a dust particle. I cite the Hitachi RFID powder chip, although it is not a microprocessor, as a beginning. What is significant here is the organization of these resources and their interconnects. A liquid network medium is quite possible and may provide a substantial advantage over contemporary ones. (Let me call it a ‘Swimming Tank Interconnect’ 😀 )
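
There is no hybrid-core silicon like this to program yet, but the closest software analogue today is thread affinity: pin a worker thread to a chosen core and give each core type its own kind of work. Below is a minimal sketch under that assumption, written for Linux with glibc (pthread_setaffinity_np is a GNU extension); the big/small core layout in it is purely hypothetical.

```c
/* Sketch: target specific cores of a (hypothetical) hybrid processor by
 * pinning worker threads, the way a scheduler might keep heavy x86 work
 * off the small cores. Assumes Linux/glibc; compile with: gcc -pthread */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg) {
    int core = *(int *)arg;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    /* Pin the calling thread to exactly one core. */
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    printf("worker pinned to core %d\n", core);
    return NULL;
}

int main(void) {
    int cores[] = {0, 1};   /* hypothetical layout: core 0 = big x86 core, core 1 = small core */
    pthread_t threads[2];
    for (int i = 0; i < 2; i++)
        pthread_create(&threads[i], NULL, worker, &cores[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(threads[i], NULL);
    return 0;
}
```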

Quite exciting!

I don’t want to attach any dates to these milestones, but given the trend, we can say with fair accuracy that amorphous computing should become a reality by 2016 and part of everyday life by 2022.

Each of these would demand a significant reorientation of the software development paradigm, especially the last milestone. In a separate post, I will articulate each of these challenges and the possible paradigm adaptations.

…to be concluded!


High Performance Computing – The Battle Royale – Part 1

“Performance is like money: hardly anyone would want less (and those who ‘really’ don’t want it for some reason can replace ‘money’ with ‘youth’, ‘fame’ or whatever they like).”

Computers were not personalized until the IBM PC. Earlier, their major role was to crunch numbers, often for hours at a stretch. If a machine could crunch a problem (such as a simulation) in 24 hours instead of 48, the scientists would happily submit experiments with higher precision and bigger datasets. It is a never-ending cycle. Supercomputers have been employed for years to assist scientists with such mammoth experiments, in this quest for high performance. And despite all this endeavour, scientists are forced to solve yesterday’s problems on today’s machines to get into the future.

Then what exactly is High Performance Computing? Several definitions are floating around, from parallel computing to cluster computing to supercomputing. Some have even reverse-abbreviated it (I can’t find a better word) as High Productivity Computing from the original acronym ‘HPC’. Today’s desktops are much more powerful than a cluster of older servers, or even ancient mainframes and supercomputers.

Intel’s co-founder Gordon Moore predicted that the number of transistors on an integrated circuit, for minimum component cost, doubles roughly every 24 months. This is called ‘Moore’s Law’. The important thing is that the speed (and functionality) of processors is more or less governed by this law. Historically, the trend predicted by the law was satisfied with only one processing unit per physical chip. However, it has already taken a different route to continue on that path: multi-core processors. For example, Sun’s new chip, the ‘UltraSPARC T2’, packs eight cores into one chip and handles eight threads per core, for a total of 64 hardware threads. Intel has already demonstrated an 80-core processor. The beginning of a new era is marked by multi-core processors.
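
To make those thread counts concrete, here is a minimal sketch of how an ordinary program can spread work across all available hardware threads, assuming a compiler with OpenMP support (e.g. gcc -fopenmp); on a chip like the UltraSPARC T2 the runtime could fan this loop out over its 64 threads.

```c
/* Sketch: spread a simple reduction across all available hardware threads
 * using OpenMP. Compile with: gcc -O2 -fopenmp sum.c (filename is illustrative) */
#include <omp.h>
#include <stdio.h>

int main(void) {
    const long n = 100000000L;   /* 100 million terms */
    double sum = 0.0;

    /* Each thread sums its own slice of the range; the reduction clause
     * combines the per-thread partial sums at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 1; i <= n; i++)
        sum += 1.0 / (double)i;

    printf("max threads: %d, harmonic sum: %f\n", omp_get_max_threads(), sum);
    return 0;
}
```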

Imagine 80 cores × 32 processors × 100 computers = 256,000 processing units. At 4 GFLOPS per processing unit, that amounts to 1,024 TFLOPS of peak performance, which is larger than Blue Gene/L.
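
The same back-of-envelope arithmetic, spelled out in a few lines (the 4 GFLOPS-per-unit figure is the assumption above):

```c
/* Back-of-envelope check of the aggregate peak performance quoted above. */
#include <stdio.h>

int main(void) {
    long units = 80L * 32 * 100;      /* cores per chip * chips per node * nodes */
    double gflops = units * 4.0;      /* assumed 4 GFLOPS per processing unit    */
    printf("%ld processing units, %.0f TFLOPS peak\n", units, gflops / 1000.0);
    return 0;
}
```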

The future is bright, isn’t it? But let’s not hurry to a conclusion. Are we heading towards a wall?

There are still plenty of issues to consider when one has such a monstrous system to develop for.

  • Will software be able to harness such a system?
  • And how?
  • How about power consumption?

Let’s discuss more on this in the next post.

Parrot Virtual Machine, the Dream Grandeur

I came across an interesting post that compares benchmarks of two virtual machines: Parrot and NekoVM. The post discussed various issues except the convenience that Parrot offers. There are, of course, various concerns and constraints, and no one should deny them. However, we need to prioritize these concerns. For some, performance might be the main concern, whereas scalability, reliability, productivity in terms of time-to-market, etc. are also valid criteria to consider. What is important is to understand the potential of Parrot and to apply it appropriately.

Application virtualization is one area where Parrot can play a very important role. Seamless integration of application-level resources is possible because Parrot is a register-based virtual machine, unlike the Java Virtual Machine, which is based on stack operations. Hence resources can be managed better with Parrot, raising the overall utilization of resources. Today one needs a bulky OS-virtualization environment, with its ‘minimal overhead’, just to run applications. Instead, applications themselves could run in an application-virtualization environment, so that migration, instantiation, etc. can be done on a light-weight basis. The analogy between a process and a thread probably applies to OS virtualization and application virtualization, respectively.
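
For readers who have not met the distinction before, the following toy sketch (plain C, not real Parrot PIR or JVM bytecode) contrasts the two evaluation styles for the single statement c = a + b.

```c
/* Toy illustration of stack-based vs. register-based evaluation of c = a + b.
 * The "instructions" in the comments are only suggestive of JVM/Parrot style. */
#include <stdio.h>

int main(void) {
    int a = 2, b = 3;

    /* Stack-machine style (JVM-like): operands are pushed and popped. */
    int stack[8], sp = 0;
    stack[sp++] = a;                        /* iload a  */
    stack[sp++] = b;                        /* iload b  */
    int rhs = stack[--sp], lhs = stack[--sp];
    stack[sp++] = lhs + rhs;                /* iadd     */
    int c_stack = stack[--sp];              /* istore c */

    /* Register-machine style (Parrot-like): one instruction names all operands. */
    int r[3];
    r[0] = a;
    r[1] = b;
    r[2] = r[0] + r[1];                     /* add r2, r0, r1 */
    int c_reg = r[2];

    printf("stack-style result: %d, register-style result: %d\n", c_stack, c_reg);
    return 0;
}
```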

I am one of the believers in the Parrot Virtual Machine, even though it has not come very far from where it started and is sometimes ridiculed. Still, the goal of integrating many languages (language run-times, to be precise) is very interesting. I don’t quite understand why Parrot is still called the Perl6 VM, given that it currently supports no language completely (including Perl6) and plans to support many languages alongside Perl6.

In the future, Parrot-like environments would become very useful, especially when domain-specific languages (DSLs) surface. Ruby has already gathered some attention for this reason, and grammars will become first-class citizens in Perl6. Imagine a world in which many useful DSLs need to interact with each other, each having its origin in a different language and run-time.

Conclusion: (There has to be a conclusion to every discussion.) Despite the failures and criticism of the past and the present, Parrot VM is becoming more and more relevant for the future. All the best, Parrot! One VM to rule them all!