From a disciple of evolution

Archive for the ‘Virtualization’ Category

Simcity World – A tip in time

What do the ‘Intel 8086’, ‘Microsoft Windows 95’, ‘Google Search’ and the ‘Apple iPhone’ have in common? These are innovations that changed the World forever. When such a technology is launched, the World sees a ‘Tipping Point’, or ‘tip’. Such innovations are celebrated, but they hardly happen in isolation. Each one requires sufficient progress in related fields. For example, the ‘Apple iPhone’ could take advantage of progress in energy-efficient microprocessors. I believe the gaming industry has just seen such an innovation. Today Electronic Arts released Simcity World. There have been many versions and variants of the classic Simcity, and most of them were popular in the gamer community. However, this time it is different. No, I am not talking about popularity, but about usage. Let me try to explain.

Intrinsically, games are for entertainment. A Simcity player enjoys constructing and administering a virtual city. The gamer plays for hours, building a city from the ground up, then expanding and governing it. Patience and passion are not only virtues but also a sort of ‘requirement’ that is not mentioned on the shiny box the game is packaged in. However, one can only do so much playing alone. For example, all the resources needed by a city must be available within the city’s limits and, most importantly, there is no competition. When players can connect their cities with one another, they cooperate, compete and evolve. It is analogous to connecting one’s PC to the Internet: the more people join, the better it is for everyone. Now how can this change the usage?

When cities connect, far more dynamics are poured into the game, and it can rival the dynamics of the World. This is an awesome thing for understanding global dynamics. Let’s take an example. Say there are 10,000 cities in the World, each with its unique geography, environment, resources, culture, people and position in time. That is enormously complex to imagine and deal with. So how should a city respond to a change, such as a drought or a new high-speed train line? How would resources be utilized and people be given jobs? More importantly, there will be ripple effects as the cities inevitably cooperate and compete. These avenues can be explored to discover which proximate causes lead to which aggregate effects in this non-linear World. All this experimentation is impossible unless the platform can operate at a truly global scale. This is the new usage, and it should differentiate Simcity World from many other games.
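The idea that a local, proximate cause produces a network-wide aggregate effect can be sketched with a toy model (entirely invented here; the cities, the ring topology and the trade rate are assumptions, not anything from the game): cities on a ring trade toward their neighbours’ levels each tick, so a ‘drought’ in one city ripples through all of them.

```java
public class CityRipple {
    public static void main(String[] args) {
        // Five cities in a ring; a drought knocks city 0 down to 20 units.
        double[] r = {20, 100, 100, 100, 100};
        double k = 0.2; // assumed fraction traded toward neighbours per tick
        int n = r.length;
        for (int tick = 0; tick < 200; tick++) {
            double[] next = new double[n];
            for (int i = 0; i < n; i++) {
                double left = r[(i + n - 1) % n], right = r[(i + 1) % n];
                // simple diffusion: each city moves toward its neighbours' average
                next[i] = r[i] + k * (left + right - 2 * r[i]);
            }
            r = next;
        }
        // The shock has rippled outward: every city ends near the
        // network-wide average of 84 units.
        System.out.println(java.util.Arrays.toString(r));
    }
}
```

The interesting part is that the final state depends on the network, not on any single city’s decision; that is the kind of aggregate behaviour a connected Simcity World could let players explore.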

Mapping the entire Earth into such a virtual world is merely a matter of time; it is a relatively static problem. The real challenge is the dynamics, and the computational resources and infrastructure they demand. Thankfully, some of these challenges can be tackled using recent innovations in computing infrastructure such as energy-efficient (ARM/Intel Atom-based) multicore microservers, OpenStack cloud infrastructure, high-speed fiber internet and so on.

It is worth watching how this space develops, especially the competition (such as SecondLife) and the creativity it inspires.


Java Processors – Can it be the Resurrection of the Phoenix?

Over the last 12 years, Java has become almost the de facto standard in application development. In the initial days there were complaints about the performance of Java programs. However, there is no doubt that the enormous effort put into optimizing Java compilers and JVM implementations has given handsome returns. But we know, or rather need to know, that there is an upper limit to this optimization as long as it is implemented in software. Despite Java’s wide acceptance, Java Virtual Machines remain limited to software deployments. There is an emerging need to have the Java Virtual Machine in hardware.

Fortunately the space is not entirely unexplored territory. There have been several efforts to implement Java processors, including PicoJava from Sun Microsystems. It seems a very promising concept, and it should become more and more relevant in the days to come. Imagine a system with many cores, for example the ‘SUN UltraSPARC T2‘, which has 8 cores per CPU. All these cores are identical, and a server in an 8-way configuration would have 64 cores. This kind of system leaves a lot of room for so-called ‘domain-specific processors’, so it makes a lot of sense to have four dedicated Java processors as part of the system. One such example is IBM’s System z Application Assist Processor (zAAP). The primary benefit of such processors would be their specialization: they can be optimized to a larger extent, upgraded frequently, and would be cheaper. Apart from that, they leave the main general-purpose processors free to do their own tasks. Thus a Java processor can be a co-processor to your main processor; remember known examples such as the ‘Intel 387’ or today’s Graphics Processing Units (GPUs). Check out some benchmarks for IBM’s zAAP.
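As a purely software analogy (a minimal sketch; the pool names are invented and nothing here talks to real zAAP or PicoJava hardware), the co-processor idea resembles routing JVM-heavy work to a dedicated executor while the general-purpose pool stays free for everything else:

```java
import java.util.concurrent.*;

public class OffloadDemo {
    // Hypothetical roles: 'generalPool' plays the main CPUs, and
    // 'javaCoprocessor' plays a dedicated Java engine such as a zAAP.
    static final ExecutorService generalPool = Executors.newFixedThreadPool(4);
    static final ExecutorService javaCoprocessor = Executors.newFixedThreadPool(2);

    static Future<Long> offload(Callable<Long> jvmWork) {
        // Route JVM-intensive work to the specialized engine, leaving
        // the general-purpose pool free for other tasks.
        return javaCoprocessor.submit(jvmWork);
    }

    public static void main(String[] args) throws Exception {
        Future<Long> f = offload(() -> {
            long sum = 0;                       // stand-in for heavy Java work
            for (int i = 1; i <= 1000; i++) sum += i;
            return sum;
        });
        System.out.println("offloaded result = " + f.get()); // 500500
        generalPool.shutdown();
        javaCoprocessor.shutdown();
    }
}
```

The hardware case is the same division of labour, except the specialized engine executes bytecode natively instead of being just another thread pool.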

Another very interesting initiative comes from BEA Systems and talks about a JVM hypervisor. This can, meanwhile, provide some breathing space. The idea was, I guess, first presented by Joakim Dahlstedt (CTO of BEA) at JavaOne 2006. A PDF of the presentation can be found here – “Bare Metal”—Speeding Up Java™ Technology in a Virtualized Environment.

Parrot Virtual Machine, the Dream Grandeur

I came across an interesting post comparing benchmarks of two virtual machines: Parrot and NekoVM. The post discussed various issues except the convenience that Parrot offers. There are, of course, various concerns and constraints, and no one should deny them. However, we need to prioritize these concerns. For some, performance might be the main concern, whereas scalability, reliability, productivity in terms of time-to-market, and so on are also valid criteria to consider. What is important is to understand the potential of Parrot and to apply it appropriately.

Application virtualization is one area where Parrot can play a very important role. Seamless integration of application-level resources is possible because Parrot is a register-based virtual machine, unlike the Java Virtual Machine, which is stack-based. Hence resources can be managed better using Parrot, which would raise the overall utilization of resources. Today one needs a bulky OS virtualization environment, with its ‘minimal overhead’, just to run applications. Instead, the applications themselves can run in an application virtualization environment, so that migration, instantiation and the like can be done in a lightweight fashion. The analogy between a process and a thread probably applies to OS virtualization and application virtualization, respectively.
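The stack-versus-register distinction can be sketched with two toy interpreters (the opcodes in the comments are invented for illustration; neither is real JVM or Parrot bytecode), both computing 2 + 3 * 4:

```java
public class TinyVMs {
    // Stack machine (JVM-style): instructions push and pop an operand stack.
    static int runStack() {
        int[] stack = new int[8];
        int sp = 0;
        stack[sp++] = 2;                         // PUSH 2
        stack[sp++] = 3;                         // PUSH 3
        stack[sp++] = 4;                         // PUSH 4
        int b = stack[--sp], a = stack[--sp];
        stack[sp++] = a * b;                     // MUL (pops 3, 4; pushes 12)
        b = stack[--sp]; a = stack[--sp];
        stack[sp++] = a + b;                     // ADD (pops 2, 12; pushes 14)
        return stack[--sp];
    }

    // Register machine (Parrot-style): each instruction names its operands
    // directly, so there is no intermediate push/pop traffic.
    static int runRegister() {
        int[] r = new int[4];
        r[0] = 2;                                // set r0, 2
        r[1] = 3;                                // set r1, 3
        r[2] = 4;                                // set r2, 4
        r[1] = r[1] * r[2];                      // mul r1, r1, r2
        r[0] = r[0] + r[1];                      // add r0, r0, r1
        return r[0];
    }

    public static void main(String[] args) {
        System.out.println(runStack());          // 14
        System.out.println(runRegister());       // 14
    }
}
```

The register form needs fewer instructions for the same expression, which is one reason register-based designs can be attractive; whether that translates into better resource management at the application level is, of course, a separate question.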

I am one of the believers in the Parrot Virtual Machine, even though it has not come very far from where it started and is sometimes humiliated. However, the goal of integrating many languages (language runtimes, to be precise) is very interesting. I don’t quite understand why Parrot is still called the Perl6 VM, given that Parrot currently supports no language fully (including Perl6) and plans to support many languages alongside Perl6.

In the future, Parrot-like environments will become very useful, especially as domain-specific languages (DSLs) surface. Ruby has already gathered some attention for this reason, and grammars will become first-class citizens in Perl6. Imagine a world in which many useful DSLs need to interact with each other, each DSL originating in a different language and runtime.

Conclusion: (There has to be a conclusion to every discussion.) Despite the failures and critiques of the past and the present, the Parrot VM is becoming more and more relevant for the future. All the best, Parrot! One VM to rule them all!