

Towards the Web-as-Brain Metaphor

In The Structure of Scientific Revolutions, Thomas Kuhn describes the steps through which one paradigm -- that is, the set of theories, practices, applications, and instrumentation that represents a model of reality to scientists -- is replaced by a new one. Scientific progress, according to Kuhn, is not a gradual accumulation of knowledge. Instead, it consists of periods of "normal" science, during which scientists operate from within a dominant paradigm that determines the very nature of the questions asked and problems posed, punctuated by paradigm shifts, which occur when anomalies accumulate that normal science cannot resolve. While the Internet is not as complex a human activity as science, it is sufficiently complicated and removed from ordinary experience that we need to create models for it. Like a normal science that has accumulated too many anomalies, the prevailing model of the Internet -- the Internet-as-space paradigm -- has reached the limits of its usefulness and needs to be replaced by a new model in order for us to make further progress.

Metaphors are particularly useful as models, because we use them to borrow ideas from the familiar in order to understand the unfamiliar. Our models of the Internet affect how we use it and how we interact with it and with others on the Net. The dominant model for the Internet today uses the metaphor of space, as demonstrated by such expressions as the "information superhighway" and "cyberspace". But the metaphor affects more than the way we talk -- it is also reflected in the way we think about the Web and use it. For example, retrieving information on the Web has become a matter of finding its "location" and "visiting" the site.

However, the static metaphor of space has outlived its usefulness. One case where the metaphor fails, or will soon fail, is information retrieval. Search engines, such as Alta Vista and Lycos, use robots, or spiders, to regularly visit all the pages they know about and compile a record of their content -- an index of information. To locate sites of interest, the user simply visits a central location where the results of the robots' travels are kept. However, the Web is growing at a phenomenal rate -- faster than computing power and storage, and certainly much faster than bandwidth, which governs how fast the spiders can build a central map of cyberspace.
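To make the crawl-and-index cycle concrete, here is a minimal sketch in Python (the seed URL, data structures, and page limit are invented for illustration; no real engine works this simply). A spider fetches pages, records which words appear on which pages in an inverted index, and follows the links it discovers -- and every query must then pass through this one central index.

    from collections import defaultdict
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen
    import re

    class LinkParser(HTMLParser):
        """Collects the href targets of anchor tags on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl(seed, max_pages=50):
        """Visit pages breadth-first from the seed, building an inverted index."""
        index = defaultdict(set)        # word -> URLs of pages containing it
        frontier, seen = [seed], {seed}
        while frontier and len(seen) <= max_pages:
            url = frontier.pop(0)
            try:
                html = urlopen(url).read().decode("utf-8", errors="replace")
            except (OSError, ValueError):
                continue                # unreachable or malformed URL; skip it
            text = re.sub(r"<[^>]+>", " ", html)    # crude markup stripping
            for word in re.findall(r"[a-z]+", text.lower()):
                index[word].add(url)
            parser = LinkParser()
            parser.feed(html)
            for link in parser.links:
                target = urljoin(url, link)
                if target not in seen:
                    seen.add(target)
                    frontier.append(target)
        return index

    # A query is then a lookup in the one central index, e.g.:
    #   crawl("http://example.com/")["metaphor"]  ->  set of matching URLs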

No conventional spider can hope to keep up with the growth of the Web while also keeping its record of already-indexed pages up to date. Already, the results that different engines return for the same query overlap in only a small fraction of their entries. The recently deployed Hotbot is a step in the right direction, since it uses the collective power of many computers to carry out the indexing and the search.
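The overlap claim can be stated as a simple measure -- the fraction of all distinct results that two engines share for the same query (the Jaccard coefficient of the two result sets). The sketch and the figures in its comment are illustrative, not measurements:

    def overlap(results_a, results_b):
        """Jaccard overlap: shared results as a fraction of all distinct results."""
        a, b = set(results_a), set(results_b)
        return len(a & b) / len(a | b) if a | b else 0.0

    # e.g. two engines returning ten results each, with only one in common:
    #   overlap(range(10), range(9, 19))  ->  1/19, about 0.05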

One answer to the growth of the Web and the indexing problem is to increase the number of search engines -- if one spider can't keep up with the growth of the entire Web, perhaps it can manage a portion of it. The implication is that we should be able to use the Web as its own search engine. Every web server would index the content it serves, and in addition make available the topical hot lists of the users connected to it (there is a good chance that these users are interested in the topics their machines serve and have already found relevant links). Perhaps, each time its content changes, a web server could report the changes to some DNS-like service, which would then refer search queries to their proper destinations.
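A rough sketch of how such a scheme might fit together (the class names and the reporting protocol are hypothetical, invented here for illustration; the text above only gestures at the idea): each server indexes its own documents and reports its topics to a shared, DNS-like registry, which refers a query only to the servers claiming relevant content.

    from collections import defaultdict

    class WebServer:
        """A server that indexes only the content it itself serves."""
        def __init__(self, host, registry):
            self.host = host
            self.index = defaultdict(set)   # word -> local document paths
            self.registry = registry

        def publish(self, path, text):
            for word in text.lower().split():
                self.index[word].add(path)
                # Report each change of content upward, as the text suggests.
                self.registry.report(word, self.host)

        def search(self, word):
            return {(self.host, p) for p in self.index.get(word, ())}

    class Registry:
        """The DNS-like service: maps topics to the hosts that serve them."""
        def __init__(self):
            self.hosts = defaultdict(set)   # word -> hosts claiming that word
            self.servers = {}

        def register(self, server):
            self.servers[server.host] = server

        def report(self, word, host):
            self.hosts[word].add(host)

        def query(self, word):
            # Refer the query to its proper destinations, merge the answers.
            results = set()
            for host in self.hosts.get(word, ()):
                results |= self.servers[host].search(word)
            return results

    registry = Registry()
    server = WebServer("www.example.org", registry)
    registry.register(server)
    server.publish("/essays/metaphor.html", "the web as organism metaphor")
    print(registry.query("metaphor"))
    # {('www.example.org', '/essays/metaphor.html')}

The design choice worth noting is that the registry stores only topic-to-host mappings, never content, so it can stay small even as the Web itself grows.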

The thought experiment above suggests a new model for the Internet, one based on the dynamic self-organization and adaptive characteristics of a complex system, such as a living organism. There is good support for the claim that the "organism" metaphor captures the more interesting aspects of the Internet; the evidence can be organized around the central characteristics that define an organism.

How will the organism metaphor change how the Web is used, and to what ends? The new metaphor will enable the technology to be better utilized and to grow in entirely new directions. As mentioned above, it will enhance our information-retrieval capabilities. Instead of centralized search engines maintaining indices of Web content, the network will become a collection of smart Web servers that maintain sophisticated indices of their own content, in addition to providing links to other servers offering similar content. The metaphor also offers an intriguing possibility: learning more about how we function collectively. Patterns of network activity, such as usage statistics recorded by Web servers, can be used to identify issues that capture the public's interest, much as the electroencephalogram is used to identify active regions of the brain. In this way the global network of computers can be used to identify crises (as proposed by G. Mayer-Kress and C. Barczys) and emergencies, or simply to monitor the collective conscious and unconscious.
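One concrete reading of the EEG analogy (a hypothetical sketch, not a description of any deployed system): treat each topic's request counts, as recorded in server usage statistics, as a signal, and flag topics whose current activity rises far above their historical average.

    from statistics import mean, stdev

    def hot_topics(history, current, threshold=3.0):
        """Flag topics whose current request count exceeds the mean of
        their history by more than `threshold` standard deviations."""
        flagged = []
        for topic, counts in history.items():
            if len(counts) < 2:
                continue                # not enough history to judge
            mu, sigma = mean(counts), stdev(counts)
            if sigma > 0 and current.get(topic, 0) > mu + threshold * sigma:
                flagged.append(topic)
        return flagged

    # e.g. a topic that usually draws ~100 requests suddenly drawing 500:
    #   hot_topics({"flood-relief": [95, 102, 99, 104]},
    #              {"flood-relief": 500})   ->  ["flood-relief"]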


12/96

revised 12/97

kL