From a disciple of evolution

‘Technology is the religion and advancement is the faith’

Things are changing in the New World (the Internet), and they keep changing the lives of those who use it (errrr! rather, live within it). Be it games, forums, social networking, email, e-commerce, applications, storage or anything else you can imagine. Of course, laymen have a different perspective on the evolution of the Internet (and the revolution of the Globe) than the Techies, the Geeks, the Nerds, the Wizards, the Jedi, the Masters and the Pundits. Why is there so much noise around? One starts understanding the reason only when one 'connects' oneself to this New World.

To make this New World a better place to live in, better development environments are needed. Programming languages and IDEs alone are not enough to make a development environment better. We need something that will not retard the momentum but carry this New World further: safer, smoother and faster.

Ruby on Rails

‘Ruby on Rails’ had just launched; some began to add ‘Ruby’ and ‘Rails’ to their list of ‘favourite’ jargon; some complained, “Why a new language when our favourite language has already solved all the problems in the World?”. To their surprise, Ruby is not new. Before Columbus, the Americas existed. Already popular in Japan, Ruby only needed a killer application; Rails became that killer application, and also a window onto Ruby for the rest of the World.

What is so special about Ruby? There have been several languages around (at least 8512). Then why Ruby? There is an answer, one of the possible answers: the Meta Answer (i.e. meta-programming :)). What would make Ruby better than Python, PHP, Lisp and Smalltalk? The answer is still simple, a simple question: “Who says Ruby is always better?”. But it is better most of the time. Ergonomic object-orientation and accessible meta-programming with expressive syntax are the interesting facets of Ruby. Dynamic languages are interpreted, which has its pros and cons, and makes these languages suitable for certain kinds of applications. Domain-specific languages would make Ruby a preferred choice as the Enterprise Glue. ‘Ruby is slow’, ‘Ruby takes more memory’, etc have been the usual, even demonstrated, arguments. Java too suffered from similar arguments. But what matters now is the same thing that mattered then: which language can deliver.
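
Since meta-programming is the ‘Meta Answer’ here, a minimal sketch of what Ruby makes trivial: generating methods at runtime from plain data (the class and attribute names below are hypothetical).

```ruby
# A minimal sketch of Ruby meta-programming: methods are generated at runtime
# from plain data, the kind of trick behind much of Rails' "convention over
# configuration". The class and attribute names here are hypothetical.

class Track
  # Generate a reader and a question-mark predicate for each attribute.
  [:title, :artist].each do |attr|
    define_method(attr)       { instance_variable_get("@#{attr}") }
    define_method("#{attr}?") { !instance_variable_get("@#{attr}").nil? }
  end

  def initialize(title, artist = nil)
    @title, @artist = title, artist
  end
end

song = Track.new("Kind of Blue")
puts song.title    # prints "Kind of Blue"
puts song.artist?  # prints "false"
```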

Below is a performance comparison of two implementations of Ruby, Ruby 1.8 and Ruby 1.9, against other languages such as Perl, Python and PHP.

Comparing Ruby 1.9 (with YARV) – [benchmark chart]

Comparing Ruby 1.8 – [benchmark chart]

The performance gap is being closed. One can see a future that is clear, rather crystal-clear, and the colour of the crystal is red; rather, Ruby red.

So who needs a Ruby-Lobby now? Fans or Foes?


What is common among the ‘Internet’, ‘stock markets’, ‘societies’, ‘economies’ and the ‘human body’? Apparently nothing, except that they are all very complex. Bingo! And I welcome you to the field of ‘Complex Systems‘.

What is Complexity and where does it emerge from? Well, to my mind, this question itself is fairly complex to understand, comprehend and explain. Getting answers to it is ‘The story of the blind men and an elephant‘. Complexity, as a recursive definition, is ‘The state of being complex; intricacy; entanglement’, according to Wikipedia. I try to simplify it for myself as follows in this post.

I see complexity as synonymous with scale. Any system can be represented using three Fundamental Abstractions: Entities, Interactions and Constraints/Invariants. When there are simple entities with simple interactions and/or simple constraints, things are within control; but wait, only as long as they are small in number. Numerous such systems help strengthen this belief. Take a stock market, which is a complex system. Now imagine a stock market with only two stocks and only tens of brokers, where the market is open for only a short interval and each transaction is regulated by the law of the land. Such a system is comparatively easy to comprehend. However, I am not saying anything about ‘Predictability‘, because that is very difficult for any open system.
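
To make the three abstractions a little more tangible, here is a toy Ruby sketch, with entirely hypothetical names, of that two-stock market.

```ruby
# A toy sketch (all names hypothetical) of the three fundamental abstractions
# applied to the tiny two-stock market described above.

Entity      = Struct.new(:name)                            # a stock or a broker
Interaction = Struct.new(:buyer, :seller, :stock, :price)  # a single transaction
Constraint  = Struct.new(:description, :check)             # 'law of the land'

stocks  = [Entity.new("ACME"), Entity.new("GLOBEX")]
brokers = (1..10).map { |i| Entity.new("broker-#{i}") }

constraints = [
  Constraint.new("price must be positive", ->(tx) { tx.price > 0 })
]

trade = Interaction.new(brokers[0], brokers[1], stocks[0], 42.5)

violated = constraints.reject { |c| c.check.call(trade) }
puts violated.empty? ? "trade is valid" : violated.map(&:description)
```

Even in this tiny form, the system is fully described by its entities, interactions and constraints; complexity arrives when each of these sets grows large.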

Apart from the fundamental abstractions there emerge Derived Abstractions. For a complex system, the cardinality of the set of derived abstractions is sufficiently large to ‘make’ it complex. The beauty is that this formal system is self-sufficient to represent any system. Now I would like to restrict all this discussion to Computation and ‘A New Kind of Science‘, and I have no intention of crossing ‘Gödel’s incompleteness theorems‘.

We, the Humans, understand through perception. Complexity can be understood iff (if and only if) its facets are understood (which are entities, interactions and constraints). Assuming that the data about these facets is already captured, what remains is to navigate through it. A concept like ‘Faceted Browsing‘ is very useful for such navigation, for example through a semantic interface within the browser such as ‘Exhibit‘, developed at MIT as a part of the ‘SIMILE’ project. However, in order to make such interfaces really useful, there has to be a way to capture the facets. Of the various ways to model a system, for example UML, formal methods and ontologies, the last looks quite promising. So, continuing with our assumption, we can use domain ontologies to navigate through an instance of the complex system. Longwell, another piece of software developed as part of the SIMILE project, gets one there, albeit not exactly there. This is just a beginning…

I have developed a small demonstration of faceted browsing – navigation of the GNU Compiler Collection’s invocation options. The demo covers only a subset of all the options and is not yet at its best. However, it is fairly usable. More to come. An important thing – the demo works only with Firefox. Some of the typical use-cases can be (a small Ruby sketch of the filtering idea follows the list) –

  • While compiling a ‘C++’ program on an ‘x86-64’ architecture supporting the ‘SSE4a’ instruction set, what set of optimization options should one supply to the compiler?
  • How to enable loop optimizations such as unrolling, auto-parallelization, etc.?
  • What options are supported for, say, warning levels?
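
Here is a minimal Ruby sketch of the filtering idea behind the demo. It is not the demo’s actual code; the flag names are real GCC options, but the facet labels and the catalogue are my own.

```ruby
# A minimal sketch of the faceted-filtering idea, not the demo's actual code.
# The flag names are real GCC options; the facet labels and catalogue are mine.

OPTIONS = [
  { flag: "-funroll-loops",             facets: { kind: "optimization", target: "loops" } },
  { flag: "-ftree-parallelize-loops=4", facets: { kind: "optimization", target: "loops" } },
  { flag: "-msse4a",                    facets: { kind: "codegen",      target: "x86-64" } },
  { flag: "-Wall",                      facets: { kind: "warning",      target: "general" } },
  { flag: "-Wextra",                    facets: { kind: "warning",      target: "general" } }
]

# Keep only the options whose facets match every selected facet value.
def facet_filter(options, selection)
  options.select { |opt| selection.all? { |key, value| opt[:facets][key] == value } }
end

# Use-case: "How to enable loop optimizations?"
loop_opts = facet_filter(OPTIONS, { kind: "optimization", target: "loops" })
puts loop_opts.map { |o| o[:flag] }   # prints -funroll-loops and -ftree-parallelize-loops=4
```

The actual demo does this interactively via Exhibit, but the principle is the same: every question in the list above is just a different facet selection.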

Just a few seconds back I came across some news. Hence this post.

The news says that Tilera Corporation has unveiled ‘TILE64‘ – a 64-core processor. Each core is a fully featured processor with its own L1/L2 caches.

“You lay out these cores much like you do tiles on a floor. By 2014, you will see a 1,000-core chip coming out.” – Dr. Anant Agarwal

Complete specs can be found here.

As of now I do not know whether it supports x86 and x86-64 applications.

It seems we are nearly there, and that is what I am going to talk about in the next few posts: the future is Commodity Supercomputing.

Meanwhile, read the interview with Dr. Anant Agarwal.

“Performance is like money; hardly anyone would want less (and if someone ‘really’ doesn’t want it for some reason, replace ‘money’ with ‘youth’, ‘fame’ or whatever one likes).”

Computers were not personal until the IBM PC. Earlier, their major role was to crunch numbers, and that too for hours at a stretch. If a machine could crunch a problem (such as a simulation) in 24 hours instead of 48 hours, then the scientists would happily submit experiments with greater precision and bigger datasets. This is a never-ending cycle. Supercomputers have been employed for years to assist scientists in such mammoth experiments, in this quest for high performance. And despite all this endeavour, scientists are forced to solve yesterday’s problems using today’s machines to get into the future.

Then what exactly is High Performance Computing? Several definitions are floating around, from parallel computing to cluster computing to supercomputing. Some have reverse-abbreviated (I can’t find a better word) it as High Productivity Computing from the original acronym ‘HPC’. Today’s desktops are much more powerful than a cluster of old servers, or even ancient mainframes and/or supercomputers.

Intel co-founder Gordon Moore predicted that the number of transistors on an integrated circuit for minimum component cost doubles every 24 months. This is called ‘Moore’s Law‘. The important thing is that the speed (and functionality) of processors is more or less governed by this law. Historically, the trend predicted by the law was satisfied with only one processing unit per physical chip. However, the industry has already taken a different route to continue on that path: multi-core processors. For example, Sun’s new chip ‘UltraSPARC T2‘ packs eight cores into one chip, each handling eight threads, for a total of 64 threads. Intel has already demonstrated an 80-core processor. The beginning of a new era is marked by multi-core processors.

Imagine 80 cores × 32 processors × 100 computers = 256,000 processing units = 1024 Teraflops

(assuming 1 core = 4 Gflops)

which is larger than Blue Gene/L.
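
For what it’s worth, a quick Ruby sanity check of that arithmetic, under the same assumptions:

```ruby
# A quick sanity check of the arithmetic above, using the figures assumed in the post.
cores_per_chip  = 80
chips_per_node  = 32
nodes           = 100
gflops_per_core = 4

cores  = cores_per_chip * chips_per_node * nodes   # 256_000 processing units
tflops = cores * gflops_per_core / 1000.0          # 1024.0 Teraflops
puts "#{cores} cores, #{tflops} Tflops"
```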

The future is bright, isn’t it? But let’s not hurry to a conclusion. Are we heading towards a wall?

There are still plenty of issues to consider when one has such a monstrous system to develop for.

  • Will software be able to harness such a system?
  • And how?
  • How about power consumption?

Let’s discuss more on this in the next post.

These days we virtually feed on jargon. Terms such as Web 2.0, Internet 2.0, etc have been floating around for quite some time. Occasionally we hear about Web 3.0. In these technology-filled days, it has become very difficult to predict which hype is ‘the real hype‘ and which one is the future. It can, quite realistically, be expected that Web 4.0, Web 5.0 and so on will be realized one fine day. But merely giving numbers to the transitions and/or creating excitement about the jargon might not help the technology domain and its users. There has to be serious money behind a transition; and yes, lots of jobs are put at stake. Before it appears to you that my words are vapourware, let me take a disciplined approach, through the evolution of the Web itself –

‘Web 1.0’ is essentially the first implementation of the World Wide Web. It brought concepts such as ‘Hypermedia‘ (e.g. Hypertext, which gave rise to HTTP and HTML). More generality was added by dynamic pages, e-mail, MIME, browsers that support Java and JavaScript, etc. The best part of ‘Web 1.0’ was that it became an overwhelming resource of abundant information. ‘Access anywhere’ made ‘Hotmail’, ‘Google’, ‘Yahoo’, etc quickly and immensely popular. The ‘Dot Com’ bubble was hyped and punctured during this generation of the Web. The 1990s can be thought of as the duration of Web 1.0.

To my understanding, ‘Web 2.0’ has more to do with collaboration and cooperation. Wikipedia, Orkut, Citeulike, etc are classic examples of its popularity and power. I would term this ‘Creation of Information Wealth through Collaboration’. The technologies remained the same, more or less; the effect was manifold in terms of usage. ‘Rich Internet Applications’ etc add more value such that users experience the Web phenomenon differently, like an advanced extension of Web 1.0. Network mash-ups are creating some excitement too. However, none of this changes the paradigm. ‘Web 2.0’ is still happening and has not yet happened to its fullest.

Before we discuss ‘Web 3.0’, there is ‘Web 2.5’ somewhere in between. ‘Web 1.0’ and ‘Web 2.0’ are accessed mainly using desktops and laptops. Laptops did bring excitement, but not completely. What they did not bring was ‘Pervasiveness‘. Laptop users do not keep their laptops ‘on’ forever, whereas they do keep their ‘mobile phones’ on. Bingo! In my opinion, ‘Web 2.5 = Mobile Phone + Web 2.0’. A much anticipated and recently launched mobile phone device, and others like it, have huge potential to drive such a ‘Web 2.5’. Some of the as-yet-unthought-of applications on this platform can change the way we work, forever.

Now comes ‘Web 3.0, The ambitious’. It is called the ‘Semantic Web’ and/or the ‘Geospatial Web’. The word ‘Semantic’ can safely be assumed synonymous with ‘Meaningful’; and it is meaningful to machines. Humans can make effective use of the Web; machines cannot. For example, it is currently very difficult (or there is no way) for machines (i.e. software programs) to find out information about a song being played on my computer by ‘searching’ the Web. Information needs to be put on the Web such that software programs can understand, filter, process and represent it for humans. Another example would be: show all 2BHK apartments within 5 km of my location that are available on rent between Rs. 5000 and Rs. 10000. The ‘Resource Description Framework‘ is targeted to be a framework for describing domain-specific ontologies. Experts across the World are conducting research to figure out various possibilities for such ontologies, which would then become guidelines. This requires mammoth effort. But once it is accomplished, site developers would need to make sure that their sites follow these guidelines. The much-hyped Artificially Intelligent ‘Information Agents’ would then crawl over such ontologies to get some meaningful work done for their ‘masters’. Jimmy Wales, co-founder of Wikipedia, has a dream of creating an open-source search engine, which might act as a prototype of this next-generation Semantic Web. In short, there is a long way to go.
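
To make ‘meaningful to machines’ slightly more concrete, here is a minimal Ruby sketch of the apartment query, with hypothetical data and predicate names, only loosely in the spirit of RDF triples.

```ruby
# A minimal sketch (hypothetical data and predicate names, only loosely in the
# spirit of RDF triples) of the apartment query described above.
# "distance" is taken to mean km from 'my' location.

TRIPLES = [
  ["apt-1", "type", "2BHK"], ["apt-1", "rent", 8000],  ["apt-1", "distance", 3.2],
  ["apt-2", "type", "2BHK"], ["apt-2", "rent", 12000], ["apt-2", "distance", 1.1],
  ["apt-3", "type", "1BHK"], ["apt-3", "rent", 6000],  ["apt-3", "distance", 4.0]
]

def value(subject, predicate)
  TRIPLES.find { |s, p, _| s == subject && p == predicate }&.last
end

subjects = TRIPLES.map(&:first).uniq
matches  = subjects.select do |s|
  value(s, "type") == "2BHK" &&
    value(s, "distance") <= 5 &&
    (5000..10000).cover?(value(s, "rent"))
end
puts matches   # prints apt-1
```

Once data is published in this kind of structured, agreed-upon form, an ‘Information Agent’ can answer such a query without any human reading a single listing.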

Now about ‘Web 3.5’. Imagine your car talking to the Internet. Your home contains digital devices that are online and can be controlled from a mobile phone. Your online itinerary follows you all the time, with self-generated reminders and alerts depending on where you are.

And ‘Web 4.0’? Here I am stretching my imagination into what could be called Science Fiction, to a certain extent. Imagine your car coming to you from the parking slot at the click of a button. It does not matter whether the button is on your mobile phone, on your wrist-watch or installed in your brain. 🙂

To put it simply, in my opinion –

Web 2.0 = Web 1.0 + Collaboration + Rich interfaces + Mash-ups

Web 2.5 = Web 2.0 + Mobile computing

Web 3.0 = Web 2.0 + Semantics

Web 3.5 = Web 3.0 + Pervasiveness

Web 4.0 = Web 3.0 + Science Fiction 🙂

I came across an interesting post which compares benchmarks of two virtual machines: Parrot and NekoVM. The post discussed various issues except the convenience that Parrot offers. There are, of course, various concerns and constraints, and no one should deny them. However, we need to prioritize these concerns. For a few, performance might be the concern, whereas scalability, reliability, productivity in terms of time-to-market, etc are also valid criteria to consider. What is important is to understand the potential of Parrot and to apply it appropriately.

Application virtualization is one area where Parrot can play a very important role. Seamless integration of application-level resources is possible because Parrot is a register-based virtual machine, unlike the Java Virtual Machine, which is based on stack operations. Hence resources can be managed better using Parrot, which would boost overall resource utilization. Today one needs a bulky OS virtualization environment, with ‘minimal overhead’, just to run applications. Instead, applications themselves can be run in an application virtualization environment, such that migration, instantiation, etc can be done on a lightweight basis. The analogy between process and thread probably applies to OS virtualization and application virtualization, respectively.
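
To illustrate the stack-versus-register distinction, here is a toy Ruby sketch. It is emphatically not Parrot or JVM bytecode; it only shows how the two styles address their operands.

```ruby
# A toy illustration of stack-based versus register-based evaluation of the
# same expression, a = b + c. This is not Parrot or JVM bytecode; it only
# shows the difference in how operands are addressed.

# Stack machine: operands are pushed and popped implicitly.
stack = []
env   = { "b" => 2, "c" => 3 }
[[:push, "b"], [:push, "c"], [:add], [:store, "a"]].each do |op, arg|
  case op
  when :push  then stack.push(env[arg])
  when :add   then stack.push(stack.pop + stack.pop)
  when :store then env[arg] = stack.pop
  end
end

# Register machine: each instruction names its operands explicitly.
regs = { "r1" => 2, "r2" => 3 }
[[:add, "r0", "r1", "r2"]].each do |op, dst, lhs, rhs|
  regs[dst] = regs[lhs] + regs[rhs] if op == :add
end

puts env["a"], regs["r0"]   # both print 5
```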

I am one of the believers in the Parrot Virtual Machine, even though it has not come very far from where it started and is sometimes ridiculed. However, the goal of integrating many languages (language run-times, to be precise) is very interesting. I don’t quite understand why Parrot is still called the Perl 6 VM, given that Parrot fully supports no language yet (including Perl 6) and plans to support many languages alongside Perl 6.

In the future, Parrot-like environments will become very useful, especially when domain-specific languages (DSLs) surface. Ruby has already gathered some attention for this reason, while grammars are set to become first-class citizens in Perl 6. Imagine a world where many useful DSLs need to interact with each other, each DSL having its origin in a different language and run-time.

Conclusion: (There has to be a conclusion to every discussion.) Despite the failures and critiques of the past and the present, the Parrot VM is becoming more and more relevant for the future. All the best, Parrot! One VM to rule them all!

Hello World!

Welcome to my blog!

One might be curious to know the idea behind the title of the blog. Well, it is the last of the lines that I had once written (as a small poem).

My life is a quest.

A quest for the knowledge.

The Knowledge and the Wisdom of the Reason.

The Reason behind the Life.

My life is a quest for the Life…

I consider myself a disciple of evolution, which has taught me so many things and keeps bringing clarity to my mind. I hope to write about some of those learnings here. I would be glad to hear your opinions.