The World Wide Web may have seemed to burst fully formed onto the world stage in the mid-1990s, but it was in fact the result of a gradual development process dating back half a century. Credit for actually inventing the web belongs to British computer scientist Tim Berners-Lee, who created it in 1989 at the European Laboratory for Particle Physics, CERN, in Geneva, Switzerland.
It emerged from its academic ghetto onto the desktops of ordinary Macintosh and PC users four years later when the National Center for Supercomputing Applications, NCSA, at the University of Illinois, released its Mosaic browser, giving virtually all computer users easy access to sites and information on a network. The World Wide Web (since shortened to www, the commonest prefix of Internet addresses) rapidly became a household name, cyber-cafés sprang up in cities around the world, and Internet start-ups began to redefine the rules of commerce. It seemed like a revolution, but the real revolution had started long before.
The web’s roots go back as far as the 1940s, when visionary US engineer Vannevar Bush dreamt of a “future device for individual use, which is a sort of mechanised private file and library”. He named it the memex, and went on to describe “new forms of encyclopaedias, ready made with associative trails running through them, ready to be dropped into the memex”. What Mr Bush had in fact described, though he didn’t call it that, was hypertext, which links sites and pages together. Along with personal computing and the Internet, hypertext is an essential ingredient of the web.
But in the 1940s, computers were in their infancy and networks non-existent. Mr Bush’s dream lay dormant until the 1960s when the first networks were designed, the first hypertext systems built, and the first mouse was demonstrated.
Hypertext acquired its name early in the decade thanks to Ted Nelson, whose conceptual Xanadu system, in which information would be stored in the form of linked text, inspires workers in the field to this day. But it is to Doug Engelbart that we owe the first working hypertext system, famously demonstrated at a US computer conference in 1968, complete with mouse, graphical display, and all the trappings of a modern desktop computer.
For the final ingredient – communication – computer networks owe their origins to two very differently motivated people. Paul Baran in the US saw his country and the Soviet Union pointing huge nuclear arsenals at each other, while both had vulnerable communications networks. He believed this raised the incentive for one side or the other to attack, since whoever made the “first strike” would wipe out his adversary’s communications system, leaving it unable to respond.
If communications systems were made robust enough to survive a first strike, Mr Baran reasoned, the world would be a safer place. Donald Davies in England, on the other hand, simply wanted to find an efficient way for computers to talk to each other. Both Mr Baran and Mr Davies separately came up with the concept of packet-switching. This divides information up into small address-bearing chunks that are sent out onto the network. The chunks can follow different routes to get to their destination, and when they arrive they are reassembled. It’s a bit like sending a letter using several postcards in different mailbags – it may be inefficient for human communication, but it is ideal for machines and is how all computers talk to each other today.
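The postcard analogy can be made concrete with a toy sketch: a message is split into small, sequence-numbered chunks that may arrive in any order and are reassembled at the destination. The function names and chunk size here are purely illustrative, not drawn from any real networking stack.

```python
# Toy illustration of packet-switching: split a message into small,
# address-bearing chunks, deliver them out of order, then reassemble.
import random

def to_packets(message, size=4):
    """Split a message into (sequence number, chunk) packets."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Restore the original message by sorting on sequence numbers."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = to_packets("A letter sent as several postcards")
random.shuffle(packets)     # packets may take different routes to arrive
print(reassemble(packets))  # the original message comes out intact
```

The sequence numbers play the role of the addressing information each real packet carries: however the chunks travel, the receiver can always put them back in order.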
Paul Baran and Donald Davies had laid the technological cornerstone of the Internet, but the political impetus had come earlier as a direct consequence of the 1957 launch of the Soviet Sputnik. America’s first satellite went up just a few months later, but few now remember that. President Eisenhower declared that never again would the United States be caught off guard by the Soviet Union, and he established the Advanced Research Projects Agency (ARPA) to make sure that the United States stayed one step ahead. One of ARPA’s creations was a robust computer network spanning the United States. This was the ARPANET, precursor of the Internet.
By the 1980s, Mr Engelbart’s hypertext ideas had entered the mainstream, being developed first at Xerox’s Palo Alto Research Center and then commercialised by Apple in the form of the Macintosh. The main ingredients of the World Wide Web were all in place. All that remained was to complete the recipe. Brian Carpenter, at the time head of CERN networking, remembers, “We knew there would be a killer-application, but we didn’t know what it would be” until the World Wide Web was born.
CERN was the natural place for that final touch to be made. In the early 1980s the laboratory, already host to some of the world’s largest and most widespread scientific collaborations, was planning a big jump. Work from hundreds of scientists around the world was coming together to build experiments for the laboratory’s next big particle accelerator, the Large Electron-Positron collider, LEP. New ways of keeping in touch and sharing data were desperately needed.
Tim Berners-Lee was to provide the solution. He arrived at CERN in 1984 to work on networking LEP’s computers. One of the things that struck him was how inefficient it was that information on one computer at CERN could not be accessed by another.
At the same time a Belgian colleague, Robert Cailliau, was thinking along similar lines. While Mr Berners-Lee dreamed of putting hypertext on the Internet, Mr Cailliau wanted to build a hypertext system for CERN’s Macintosh networks, based on Apple’s own hypertext system, HyperCard. Their collaboration was to change the face of the Internet forever.
In 1989, Mr Berners-Lee came up with a proposal for a “Distributed Information Management System” for CERN and its collaborating institutes. It was a document high on ideas, but low on practical details. Still, his boss, Mike Sendall, was sufficiently impressed to annotate it “vague, but exciting”. A year later the World Wide Web was born, and even though in the beginning the web only stretched from one office to the next, its global intentions were stated.
By Christmas 1990, Mr Berners-Lee had written programmes for the first web server and browser. This browser, the tool that finds and hauls in information from the web, was state-of-the-art, but it worked only on rare computers called NeXT cubes, so the web’s reach was initially limited.
The following year Nicola Pellow, a British student at CERN, wrote a programme for a simple browser that could be used on any computer, and the world’s particle physics community began to take notice. Mr Berners-Lee embarked on a world tour of particle physics labs, touting the new web software. Soon physicists in Hamburg were consulting online phone books at Stanford in California while scientists at CERN were looking up documentation at the United Kingdom’s Rutherford Appleton Laboratory.
The web finally took on its worldwide dimension in 1993 when CERN issued a statement relinquishing intellectual property rights and placing web software in the public domain, allowing anyone to download web software over the Internet and work on it. It was a controversial move, but it meant that anyone was free to contribute to (and benefit from) the web’s development. Sophisticated browsers began to appear, none more influential than NCSA’s Mosaic, the first easy-to-install browser for UNIX, Macintosh and Microsoft Windows systems. Copies were soon being downloaded at the rate of thousands per day.
In 1994 stewardship of the web passed to the World Wide Web Consortium, W3C, hosted by the French National Institute for Research in Computer Science and Control (INRIA), and the Massachusetts Institute of Technology (MIT), leaving CERN to get on with its core task of fundamental research. Mr Berners-Lee moved to MIT to become W3C’s director, where his role remains today much as it was then – trying to ensure that the web remains free and open, and steering it towards a realisation of his original dream.
For all its utility, the Web of today is not as powerful as the one Tim Berners-Lee had in mind in 1989. He didn’t just foresee a single Web, but multiple interconnected ones that would enable users to input as well as download information, creating new pages linked to the one they already had open in one seamless operation. Putting content up would be simpler, enabling people to create shared information spaces for their families, and companies to create workspaces, without worrying about compatibility of programmes, servers, or browsers.
That dream may be brought closer by a new computer language for structuring content, called XML, and a concept for machine-understandable meaning which Mr Berners-Lee dubbed the “semantic web”. XML, or Extensible Markup Language, will allow companies to customise their browsers, and bring an end to the “best seen with browser X” notices that feature on so many pages today. The semantic web idea will “teach” computers a more sophisticated use of language that will make finding and trusting information simpler for the user. It will work by adding computer-understandable metadata, normally invisible to the user, including information allowing users to judge the reliability of the source of a web page. But it will also include broader language skills closer to the human use of language.
This would allow, for example, a seller advertising “a pink car” in Boston to link up with someone wanting to buy “a rose-coloured automobile” in Cambridge, by telling the search engine that pink and rose are the same colour and that automobiles and cars are the same thing. The result will be to connect seller and buyer more precisely and quickly than current systems allow.
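The pink-car example boils down to matching through machine-readable equivalences rather than exact words. The sketch below is a deliberately simplified illustration of that idea, not how any real semantic web search engine works; the synonym table and function names are invented for the example.

```python
# Toy sketch of semantic matching: a shared table of equivalences lets a
# listing and a query match even though they share no words in common.
SYNONYMS = {
    "rose-coloured": "pink",   # pink and rose are the same colour
    "rose": "pink",
    "automobile": "car",       # automobiles and cars are the same thing
}

def normalise(words):
    """Map each word to its canonical term before comparing."""
    return {SYNONYMS.get(w, w) for w in words}

listing = {"pink", "car"}                # seller advertising in Boston
query = {"rose-coloured", "automobile"}  # buyer searching in Cambridge

print(normalise(listing) == normalise(query))  # the two now match
```

In the real semantic web proposal the equivalences would come from published, machine-readable vocabularies rather than a hand-written dictionary, but the matching principle is the same.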
The Amaya browser is W3C’s testbed and has all these features. And in keeping with the spirit of the World Wide Web, it can be downloaded for free from http://www.w3.org/. Amaya gives a flavour of Mr Berners-Lee’s dream, and a foretaste of the web of tomorrow.
The web is an outstanding example of how basic research can generate progress in unforeseeable and broadly beneficial ways. Perhaps the web was waiting to happen as technology was evolving in several different places, but it was the demands of global particle physics research that made CERN the driving force.
And as for Microsoft? Well, if you dig deeply enough into early versions of the Seattle company’s Explorer browser, you’ll find mention of a licence agreement with NCSA, the makers of Mosaic. One can only wonder how worldwide the web would be now had Microsoft ignored W3C compatibility and developed the technology on its own.
• James Gillies and Robert Cailliau, How the Web was Born, Oxford University Press, September 2000.
©OECD Observer, Summer 2000