Within the next year, astronomer Alexander Szalay hopes to begin collecting and storing digital images of a million galaxies visible from the Northern Hemisphere as part of the Sloan Sky Survey, a collaborative effort involving scientists from across the country and overseas.
When completed, the unprecedented mapping effort will have collected between 20 and 40 trillion bytes (or terabytes) of data, enough information to fill more than 20 million floppy disks. And therein lies a great problem with current technologies: once stored, how do you transmit some or all of the information without hopelessly overloading the system?
"On the current Internet, just half a terabyte would take a year to transmit," said Szalay, a professor of physics and astronomy and a champion of a proposed new super-fast system known informally as Internet 2. "On the new Internet that same data would take about half a day. We are talking about a system that is a thousand times faster than the current one."
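Szalay's figures hang together arithmetically. A short sketch makes the implied rates explicit (the ~127 kbit/s sustained throughput is our inference from his numbers, not a figure quoted in the article):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600           # ~3.15e7 seconds

half_terabyte_bits = 0.5e12 * 8              # half a terabyte, expressed in bits

# If 0.5 TB takes a year, the implied sustained throughput is:
implied_rate = half_terabyte_bits / SECONDS_PER_YEAR
print(f"implied current rate: {implied_rate / 1e3:.0f} kbit/s")   # ~127 kbit/s

# A network 1,000 times faster moves the same data in:
hours = SECONDS_PER_YEAR / 1000 / 3600
print(f"time on the new network: {hours:.1f} hours")   # ~8.8 hours -- "about half a day"
```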
Szalay is not alone in his hopes for the new system. "More and more researchers are finding the fully commercialized Internet unacceptable for good research," said David Binko, director of Homewood Academic Computing. "The low bandwidth and unreliability of the current system is an impediment to their work."
Recognizing the need for a new generation of faster, more reliable connections, the federal government has sponsored three different initiatives to plan, design and build the next Internet. Hopkins is, or will be, involved in all three.
In January of this year, Hopkins enrolled as a charter member in the Internet 2 Initiative, which brings together 109 institutions in an effort to envision, plan and create a separate, higher bandwidth network. "This is the most programmatic approach of the three projects," Binko said. "The idea is to step back and examine what we in the research community really need."
Planning a new Internet is not simply a matter of drawing lines of new fiber optic cable connections across a map of the United States. In fact, just the opposite. Laying new cables is largely being left up to AT&T, MCI, Sprint and other long-distance carriers. In some places there will not be a need for new cables at all.
What is needed, it is widely acknowledged, is a new system of switches, routers and protocols to enable the Internet to take full advantage of a digital fiber optic network that can, for all intents and purposes, transmit information at the speed of light.
"The current Internet has no concept of priority or quality of service," Binko said. "It was designed and built on a system known as 'Best Effort Quality of Service,' which expects everyone who has a part of the Net to do just that--make the best effort. There are no strict management protocols that can be enforced."
Although the system worked relatively well when first introduced--when it was used by thousands, rather than millions, of people--its unforeseen explosive growth has led to problems. "It's a system that didn't scale well," Binko said. "We have a situation where somebody downloading a page from the J.C. Penney catalog gets the same priority as someone collecting data in a vital experiment."
Moreover, the current Internet is built on the concept of shared lines, in which data is broken into chunks and sent out on a space-available basis. This leads to a problem of latency. Unlike a telephone conversation, in which voice data from one phone is instantly transmitted to another phone across the country or halfway around the world, the Internet sends information when space on the lines permits. Thus an e-mail message may arrive instantaneously, or it may take several minutes. Sometimes even several days.
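The chunking the article describes is the essence of packet switching. A toy sketch (not any real protocol) shows a message being split into fixed-size pieces that a network would route independently, with reassembly at the far end:

```python
def packetize(message: bytes, chunk_size: int = 8):
    """Split a message into fixed-size chunks, the way a packet-switched
    network breaks data up before sending each piece as space permits."""
    return [message[i:i + chunk_size] for i in range(0, len(message), chunk_size)]

packets = packetize(b"This message travels in pieces.")
print(len(packets))                   # 4 chunks (the last one shorter than 8 bytes)
print(b"".join(packets) == b"This message travels in pieces.")   # True: reassembly restores it
```

Because each chunk travels independently and waits for available capacity, delivery time varies from packet to packet; that variability is the latency problem the article describes.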
"There are essentially two complaints with the current system, latency being the first one," Binko said. "The other issue is speed, raw speed. If you need to transmit huge amounts of data then obviously speed becomes critical. Scientists often can't wait a week to download all the data they need."
While Internet 2 will occupy members with hashing out the protocols involved in creating a faster, more efficient system, another federally sponsored project, known as "very high speed backbone network services" (or vBNS), concentrates on putting the major conduits of the future Internet in place for a select number of research institutions to begin using.
In the next two years, the National Science Foundation, in conjunction with MCI, will fund the creation and testing of a 14,000-mile figure-8 fiber optic loop laid across the United States from New York to San Francisco. The circuit is planned to connect supercomputer centers in San Diego; Boulder, Colo.; Urbana, Ill.; Pittsburgh; and Ithaca, N.Y. Some universities--including Johns Hopkins--have joined the project and will connect to the new backbone and begin experimenting with data transmission at rates of 622 million bits a second, more than 10 times the present Internet's maximum capacity.
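The "more than 10 times" claim checks out if the baseline is the 45 Mbit/s (T3) trunks that carried the Internet backbone in the mid-1990s; the T3 baseline is our assumption, since the article does not name the comparison figure:

```python
vbns_rate = 622e6   # vBNS experimental rate in bits/second (an OC-12 circuit)
t3_rate = 45e6      # a T3 line, the typical backbone trunk of the era (assumed baseline)

print(f"speedup: {vbns_rate / t3_rate:.1f}x")   # ~13.8x -- "more than 10 times"
```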
It is widely believed the technologies and infrastructure developed in the vBNS program will be used to connect the institutions currently involved in the Internet 2 Initiative.
The third federal project, announced by President Clinton in his State of the Union message in January, is the Next Generation Internet Initiative (NGII), a $100 million effort to develop new network technologies to be distributed through the departments of Energy, Defense and Commerce, as well as NASA and the National Science Foundation. This project is seen as supporting and reinforcing the vBNS and Internet 2 Initiatives.
Next week: What will it take to make Internet 2 a reality at Hopkins?