From a disciple of evolution

Archive for August, 2007

Facets of Complexity – Navigation

What do the Internet, stock markets, societies, economies and the human body have in common? Apparently nothing, except that all of them are very complex. Bingo! And I welcome you to the field of ‘Complex Systems‘.

What is Complexity and where does it emerge from? Well, in my opinion, this question itself is fairly complex to understand, comprehend and explain. Seeking answers to it is ‘The story of blind men and an elephant‘. Complexity, as a recursive definition, is ‘The state of being complex; intricacy; entanglement’, according to Wikipedia. I try to simplify it for myself as follows in this post.

I see complexity as synonymous with scale. Any system can be represented using three Fundamental Abstractions: Entities, Interactions and Constraints/Invariants. When there are simple entities with simple interactions and/or simple constraints, things are within control; but only when they are small in number. I say this because there are numerous such systems that strengthen this belief. Take the stock market, a complex system, as an example. Now imagine a stock market with only two stocks and only tens of brokers, where the market is open for only a short interval and each transaction is regulated by the law of the land. Such a system is comparatively easy to comprehend. However, I am not saying anything at all about ‘Predictability‘, because that is very difficult for any open system.
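The three abstractions can be sketched in code. Here is a minimal Python sketch of that simplified stock market; every name in it (the stocks, the brokers, the position limit) is hypothetical, chosen only to illustrate entities, interactions and constraints:

```python
from dataclasses import dataclass

# Entities: two stocks and a handful of brokers (all names hypothetical).
stocks = {"AAA": 100.0, "BBB": 50.0}
brokers = [f"broker-{i}" for i in range(10)]

@dataclass
class Trade:
    broker: str
    stock: str
    qty: int

# Constraint/invariant: "the law of the land", here a toy position limit.
MAX_QTY = 500

def execute(trade: Trade, book: list) -> bool:
    """An interaction is legal only if it satisfies every constraint."""
    if trade.stock not in stocks or trade.qty > MAX_QTY:
        return False
    book.append(trade)
    return True

book = []
execute(Trade("broker-0", "AAA", 100), book)   # accepted
execute(Trade("broker-1", "AAA", 9999), book)  # rejected by the constraint
```

With two stocks and ten brokers the whole state fits in one's head; scale each set up by a few orders of magnitude and the same three abstractions produce a system nobody can comprehend at a glance.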

Apart from the fundamental abstractions, there emerge Derived Abstractions. For a complex system, the cardinality of the set of derived abstractions is sufficiently large to ‘make’ it complex. The beauty is that this formal system is self-sufficient to represent any system. For now, I would like to restrict this discussion to Computation and ‘A New Kind of Science‘, and I have no intention of crossing ‘Gödel’s incompleteness theorems‘.

We, the Humans, understand through perception. Complexity can be understood iff (if and only if) its facets are understood (which are entities, interactions and constraints). With the assumption that the data about these facets is already captured, what remains is to navigate through the results. A concept like ‘Faceted Browsing‘ is very useful for such navigation, for example through a semantic interface within a browser such as ‘Exhibit‘, developed at MIT as part of the ‘SIMILE’ project. However, in order to make such interfaces really useful, there has to be a way to capture the facets. Of the various ways to model a system, for example UML, formal methods and ontologies, the last one looks quite promising. So, continuing with our assumption, we can use domain ontologies to navigate through an instance of the complex system. Longwell, another software tool developed as part of the SIMILE project, gets one there, albeit not exactly there. This is just a beginning…

I have developed a small demonstration of faceted browsing – navigation of the GNU Compiler Collection’s invocation options. The demo covers a subset of all options and is not yet at its best. However, it is fairly usable. More to come. An important note: the demo works only with Firefox. Some of the typical use-cases can be –

  • While compiling a ‘C++’ program on an ‘x86-64’ architecture supporting the ‘SSE4a’ instruction set, which optimization options should one supply to the compiler?
  • How does one enable loop optimizations such as unrolling, auto-parallelization, etc.?
  • Which options are supported for, say, warning levels?
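To give a flavour of the filtering behind such a demo, here is a small Python sketch of faceted selection; the option metadata below is a hypothetical, hand-picked subset of GCC flags, and the facet names are my own:

```python
# A hypothetical subset of GCC options, each tagged with facets.
options = [
    {"flag": "-funroll-loops", "category": "loop-optimization"},
    {"flag": "-ftree-parallelize-loops=4", "category": "loop-optimization"},
    {"flag": "-msse4a", "category": "code-generation", "arch": "x86-64"},
    {"flag": "-Wall", "category": "warnings"},
]

def facet_filter(items, **facets):
    """Keep only the items matching every selected facet value."""
    return [o for o in items
            if all(o.get(k) == v for k, v in facets.items())]

# Use-case: "how do I enable loop optimizations?"
for opt in facet_filter(options, category="loop-optimization"):
    print(opt["flag"])
```

Selecting a facet value narrows the result set, and combining facets (category plus architecture, say) narrows it further – exactly the interaction Exhibit and Longwell provide in the browser.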

High Performance Computing – The Battle Royale – Part 1.5

Just a few seconds back I came across a piece of news. Hence this post.

The news says that Tilera Corporation has unveiled ‘TILE64‘ – a 64-core processor. Each core is a fully featured processor with its own L1/L2 caches.

“You lay out these cores much like you do tiles on a floor. By 2014, you will see a 1,000-core chip coming out.” – Dr. Anant Agarwal

Complete specs can be found here.

As of now I do not know whether it supports x86 and x86-64 applications.

It seems we are nearly there; this is what I am going to talk about in the next few posts: the future is Commodity Supercomputing.

Meanwhile, read the interview with Dr. Anant Agarwal.

High Performance Computing – The Battle Royale – Part 1

“Performance is like money: hardly anyone would want less (and for those who ‘really’ don’t want it for some reason, replace ‘money’ with ‘youth’, ‘fame’ or whatever one likes).”

Computers were not personal until the IBM PC. Earlier, their major role was to crunch numbers, and that too for hours. If a machine could crunch a problem (such as a simulation) in 24 hours instead of 48, then the scientists would happily submit experiments with larger precision and bigger datasets. This is a never-ending cycle. Supercomputers have been employed for years to assist scientists in such mammoth experiments, in this quest for high performance. And despite all this endeavour, scientists are forced to solve yesterday’s problems using today’s machines to get into the future.

Then what exactly is High Performance Computing? Several definitions are in circulation, spanning parallel computing, cluster computing and supercomputing. Some have even reverse-abbreviated it (I can’t find a better word) as High Productivity Computing, from the original acronym ‘HPC’. Today’s desktops are much more powerful than a cluster of old-age servers, or even ancient mainframes and/or supercomputers.

Intel’s co-founder Gordon Moore predicted that the number of transistors on an integrated circuit, for minimum component cost, doubles every 24 months. This is called ‘Moore’s Law‘. The important thing is that the speed (and functionality) of processors is more or less governed by this law. Historically, the trend predicted by the law was satisfied with only one processing unit per physical chip. However, the industry has already taken a different route to continue on that path: Multi-core Processors. For example, SUN’s new chip ‘UltraSPARC T2‘ packs eight cores into one chip, handling eight threads per core for a total of 64 threads. Intel has already demonstrated an 80-core processor. The beginning of a new era is marked by multi-core processors.
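The doubling rule is easy to play with. A small Python sketch, assuming the often-quoted baseline of 2,300 transistors on the Intel 4004 in 1971:

```python
def transistors(year, base_year=1971, base_count=2300, months=24):
    """Project a transistor count under Moore's Law:
    the count doubles once every `months` months."""
    doublings = (year - base_year) * 12 / months
    return base_count * 2 ** doublings

# 36 years after the baseline gives 18 doublings:
print(f"{transistors(2007):,.0f}")  # → 602,931,200
```

Hundreds of millions of transistors per chip in 2007 is roughly what the projection says, and multi-core designs are how that transistor budget is now being spent.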

Imagine 80 cores × 32 processors × 100 computers = 256,000 processing units = 1,024 Teraflops,

which is larger than Blue Gene/L

(assuming 1 processing unit = 4 Gflops).
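The arithmetic checks out, as a quick Python calculation confirms:

```python
cores_per_chip = 80
chips_per_machine = 32
machines = 100
gflops_per_unit = 4  # 1 processing unit = 4 Gflops

units = cores_per_chip * chips_per_machine * machines
teraflops = units * gflops_per_unit / 1000  # 1 Tflop = 1,000 Gflops

print(units, teraflops)  # → 256000 1024.0
```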

The future is bright, isn’t it? But let’s not hurry to a conclusion. Are we heading towards a wall?

There are still plenty of issues to be considered when one develops for such a monstrous system.

  • Will software be able to harness such a system?
  • And how?
  • How about power consumption?

Let’s discuss more of this in the next post.

Web 3.0 – The Road to El Dorado

These days we virtually feed on jargon. Terms such as Web 2.0, Internet 2.0, etc have been floating around for quite some time. Occasionally we hear about Web 3.0. In these technology-filled days, it has become very difficult to predict which hype is ‘the real hype‘ and which one is the future. It can, quite realistically, be expected that Web 4.0, Web 5.0 and so on will be realized one fine day. But merely giving numbers to the transitions and/or creating excitement about the jargon might not help the technology domain and its users. There has to be some serious money behind each transition; and yes, lots of jobs are at stake. Before my words start to appear to you as vapourware, let me take a disciplined approach, through the evolution of the Web itself –

‘Web 1.0’ is essentially the first implementation of the World Wide Web. It brought concepts such as ‘Hypermedia‘ (e.g. Hypertext, which gave rise to HTTP and HTML). More generality was added by dynamic pages, e-mail, MIME, and browsers that support Java and JavaScript, etc. The best part of ‘Web 1.0’ was that it became an overwhelming resource of abundant information. ‘Access anywhere’ made ‘Hotmail’, ‘Google’, ‘Yahoo’, etc quickly and immensely popular. The ‘Dot Com’ bubble was hyped and punctured during this generation of the web. The 1990s can be thought of as the era of Web 1.0.

To my understanding, ‘Web 2.0’ has more to do with collaboration and cooperation. Wikipedia, Orkut, Citeulike, etc are classic examples of its popularity and power; I would term this ‘Creation of Information Wealth through Collaboration’. The technologies remained more or less the same; the effect was multi-fold in terms of usage. ‘Rich Internet Applications’ and the like add further value, such that users experience the Web phenomenon differently, as an advanced extension of Web 1.0. Network mash-ups are creating some excitement too. However, none of it is changing the paradigm. ‘Web 2.0’ is still happening and has not yet happened to its fullest.

Before we discuss ‘Web 3.0’, there is ‘Web 2.5’ somewhere in between. ‘Web 1.0’ and ‘Web 2.0’ are accomplished mainly using desktops and laptops. Laptops did bring excitement, but not completely. What they did not bring was ‘Pervasiveness‘. Laptop users do not keep their laptops ‘On’ forever, whereas their ‘mobile phones’ stay on all the time. Bingo! In my opinion, ‘Web 2.5 = Mobile Phone + Web 2.0’. A much-anticipated and recently launched mobile phone device, and others like it, have huge potential to drive such a ‘Web 2.5’. Some of the as-yet-unthought-of applications on this platform can change the way we work, forever.

Now comes ‘Web 3.0, The ambitious’. It is called the ‘Semantic Web’ and/or the ‘Geospatial Web’. The word ‘Semantic’ can safely be assumed synonymous with ‘Meaningful’; and it is meaningful to machines. Humans can make effective use of the Web, which machines cannot. For example, it is currently very difficult (or there is no way) for machines (i.e. software programs) to find out information about a song being played on my computer by ‘searching’ the Web. Information needs to be put on the Web such that software programs can understand, filter, process and represent it for humans. Another example would be – show all 2BHK apartments within 5 km of my vicinity that are available on rent between Rs. 5,000 and Rs. 10,000. The ‘Resource Description Framework‘ is targeted to be a framework for describing domain-specific ontologies. Experts across the world are conducting research to figure out the various possibilities for such ontologies, so that they can become guidelines. This requires mammoth effort. But once it is accomplished, site developers would need to make sure that their sites follow these guidelines. Much-hyped, Artificially Intelligent ‘Information Agents’ would then crawl over such ontologies to get some meaningful work done for their ‘masters’. Jimmy Wales, co-founder of Wikipedia, has a dream to create an open-source search engine, which might act as a prototype of this next-generation Semantic Web. In short, there is a long way to go.
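To make the machine-readable idea concrete, here is a toy Python sketch of the apartment query over subject–predicate–object triples, the shape of data RDF deals in; the data and property names are invented for illustration and merely stand in for a real RDF vocabulary:

```python
# A toy triple store: (subject, predicate, object) facts about apartments.
# All identifiers and property names here are hypothetical.
triples = [
    ("apt:1", "type", "2BHK"), ("apt:1", "rent", 8000),  ("apt:1", "distance_km", 3),
    ("apt:2", "type", "2BHK"), ("apt:2", "rent", 12000), ("apt:2", "distance_km", 2),
    ("apt:3", "type", "1BHK"), ("apt:3", "rent", 6000),  ("apt:3", "distance_km", 1),
]

def prop(subject, predicate):
    """Look up the object of the first matching triple."""
    return next(o for s, p, o in triples if s == subject and p == predicate)

subjects = {s for s, _, _ in triples}

# "Show all 2BHK apartments within 5 km, renting for Rs. 5,000 to 10,000."
matches = [s for s in subjects
           if prop(s, "type") == "2BHK"
           and prop(s, "distance_km") <= 5
           and 5000 <= prop(s, "rent") <= 10000]
print(matches)  # → ['apt:1']
```

Because every fact is an explicit triple, a program can answer the query without parsing any human-oriented prose; that, in miniature, is the promise of the Semantic Web.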

Now about ‘Web 3.5’. Imagine your car talking to the Internet. Your home contains digital devices that are online and can be controlled from a mobile phone. Your online itinerary follows you all the time, with self-generated reminders and alerts depending upon where you are.

And ‘Web 4.0’? Here I am stretching my imagination into what might be called Science Fiction, to a certain extent. Imagine your car coming to you from the parking slot at the click of a button. It does not matter whether the button is installed on your mobile phone, on your wrist-watch or in your brain. 🙂

To put simply, in my opinion –

Web 2.0 = Web 1.0 + Collaboration + Rich interfaces + Mash-ups

Web 2.5 = Web 2.0 + Mobile computing

Web 3.0 = Web 2.0 + Semantics

Web 3.5 = Web 3.0 + Pervasiveness

Web 4.0 = Web 3.5 + Science Fiction 🙂