Ten Years After the Future Began


Ten years ago this month, a highly unusual Request For Comments (RFC) appeared. In contrast to the usual descriptions of protocol headers or rules for resource allocation, RFC 1287 was audaciously titled "Towards the Future Internet Architecture." And its team of 32 authors and contributors practically encompassed the field of experts who developed (and still develop) the Internet.
Most of the leading Internet designers I've talked to about RFC 1287 didn't know it existed; those who participated in its creation have practically forgotten it. Yet it was no casual, after-hours speculation. It represented the outcome of a year-long effort by leading Internet researchers involving many meetings and discussions, a very serious inquiry into the critical changes that the Internet needed to grow and keep its strength. The IETF, the Internet Activities Board, and the Internet Research Steering Group were all involved. The authors tell me that a large number of follow-up workshops were also held during the subsequent decade on various topics RFC 1287 addressed.
When I discovered RFC 1287, I decided it would be instructive to interview as many of its creators as I could recruit and to look at the RFC from the standpoint of ten years later. I contacted a number of Internet experts through the grapevine and asked such questions as:
1. Which predictions in RFC 1287 came true? Which look ridiculous at this time?
2. Which recommendations in RFC 1287 turned out to be relevant? Have they been carried out?
3. If a recognized need is still unmet, how has the Internet managed to work around the problem?
4. What major developments did RFC 1287 fail to predict and prepare for?
My inquiry began as a retrospective on RFC 1287 in the light of subsequent Internet development. It soon became just as much a retrospective on Internet development in the light of RFC 1287. In other words, reading the thought processes recorded in that RFC tells us a lot about why certain major events have taken place in the design of the Internet.
Predictions and Recommendations
Let's start with some explicit assumptions and predictions made by RFC 1287. This was an age, remember, when:
• Internet users generally connected from a research facility in a university or corporation. A few services offered dial-up access from home, providing email and a Unix shell account.
• The vast majority of Internet hosts were located in the United States. Much traffic ran through a single, government-funded backbone provided by the National Science Foundation.
• The World Wide Web was a text service used by a handful of physics researchers and other curious experimenters. (In fact, December 1991 marks the appearance of the first U.S. Web site.) The really hot technology of the day was Gopher.
• The need for security was widely recognized (for instance, the Internet worm was released in 1988) and packet-filtering firewalls had been invented. But both attacks and defenses were fairly primitive by today's standards. Network Address Translation (NAT) was mentioned as a research project in RFC 1287.
Those are just a few facts to set the tone. Major assertions about the future in RFC 1287 included:
• "The TCP/IP and OSI suites will coexist for a long time," and the Internet "will never be comprised of a single network technology." Consequently, the authors predicted that the Internet would have to expand beyond the IP protocol stack to allow a "Multi-Protocol Architecture."
• The IP address space has to be expanded. "The Internet architecture needs to be able to scale to 10**9 networks." (That means 1 billion; and the total number of end-user termination points could enter the trillions.) Routing has to be simplified, because routers were becoming burdened with the need to remember too many routes.
Recommendations for the architectural work included:
• An expanded address space and a more carefully planned system of allocating addresses. This project, of course, evolved into IPv6.
• Quality-of-Service (QoS) offerings, which would enable time-sensitive transmissions such as video-conferencing.
• Better security through firewalling (which they called security at the "network or subnetwork" perimeter), protocols that integrated security such as Privacy Enhanced Mail, and certificate systems to provide distributed access control.
• Coexistence with non-IP networks through "Application interoperability."
• Support for new applications through a variety of formats and delivery mechanisms for data in those formats.
That was quite a grab bag. For the most part, the relevance of their recommendations inspires admiration ten years later. The authors of the document could tell where the stress points were in the Internet and proposed several innovations that became reality. Yet it's also strange how little the Internet community has accomplished toward some of these goals in the ensuing years.
So, let's see how a decade of intensive Internet growth and innovation matches with the predictions and recommendations in the RFC.
Clothing a Straw Man
A sizeable chunk of RFC 1287 is taken up with speculation about how to live with a multiplicity of competing network protocols for an indefinite period in the future.
Judging "the future relevance of TCP/IP with respect to the OSI protocol suite," the authors rejected the suggestion that we "switch to OSI protocols," and proudly waved their successes in the face of the "powerful political and market forces" that were pushing OSI (Open Systems Interconnection). The authors boasted that "the entrenched market position of the TCP/IP protocols means they are very likely to continue in service for the foreseeable future."
But they also bent over backwards to find ways to accommodate OSI. They even proposed "a new definition of the Internet" based on applications rather than on the Internet Protocol. This Internet would include anyone who could reach an Internet system through an email gateway, which would include the users of Prodigy, CompuServe, and other non-Internet services of the time. The IETF would cooperate with developers of other networks to develop protocols on the upper layers that all networks could run, and that would communicate as mail does through "application relays" or other means.
Nathaniel Borenstein, the author of the MIME protocol, says that all this material was included largely for the sake of politeness and that many of the designers of the Internet tacitly expected much of it to be rendered moot by the Internet's success: "Because the whole focus of Internet protocols was on interoperability, we were planning to support gateways for as long as there were multiple standards. But gateways at best are nondestructive and usually fail to meet even that modest goal. While we talked about planning for a multiprotocol world, many of us believed that such a world would be just an interim step on the way to a world in which the Internet protocols were pretty much universal, as they are now."

In any case, Internet designers continued to show this deference again and again. For instance, RFC 1726, which appeared three years later and was titled "Technical Criteria for Choosing IP The Next Generation (IPng)," says "Multi-Protocol operations are required to allow for continued testing, experimentation, and development, and because service providers' customers clearly want to be able to run protocols such as CLNP, DECNET, and Novell over their Internet connections."
The modest assumption that IP-based networks would be just one of many networking systems is the biggest point on which RFC 1287 shows its age. Indeed, a few interesting concepts from OSI remain in circulation today (LDAP, for instance, derives from the OSI standard X.500), but the Internet has effectively swept OSI from the scene. The Prodigys and CompuServes of the world push their Internet access as a key marketing point. Even the standard voice telephone system is threatened with turning into a conduit for the Internet.
A rich multiprotocol environment does exist, but it is layered on top of the Internet. For instance, HTTP has given rise to SOAP and other protocols; embedded applications in HTML pages also function effectively as new protocols. Thus, the Internet has succeeded in fostering innovation to the point where few alternatives to IP are needed.
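To make that layering concrete, here is a minimal sketch in Python of what it means for a protocol like SOAP to ride on top of HTTP rather than sit beside IP; the host, path, and operation name are hypothetical placeholders, not a real service. The "new protocol" is nothing more than a structured payload carried inside an ordinary Web request.

    # A SOAP-style request is just XML carried in an ordinary HTTP POST.
    # The host, path, and operation name here are hypothetical placeholders.
    import http.client

    envelope = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetQuote xmlns="http://example.org/stocks">
          <Symbol>ORCL</Symbol>
        </GetQuote>
      </soap:Body>
    </soap:Envelope>"""

    conn = http.client.HTTPConnection("www.example.org")
    conn.request("POST", "/soap-endpoint", body=envelope,
                 headers={"Content-Type": "text/xml; charset=utf-8",
                          "SOAPAction": '"http://example.org/stocks/GetQuote"'})
    response = conn.getresponse()
    print(response.status, response.reason)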
IPv6 Addressing and Its Alternatives
Internet visionaries had already decided by 1991 that the 4 billion individual addresses allowed by the current 32 bits of the IP address would not be enough. In the future we may face a situation where every DNA molecule in the universe requires its own IP address. In 1991, the threat of address exhaustion was even closer than it is now, due to the inflexible system of allocating Class A, Class B, and Class C addresses. The continual subdivision of the address space also began to weigh down routers that were required to know how to reach large portions of the address space.
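The arithmetic behind the concern is simple enough to state in a few lines. The figures below merely restate the numbers already cited: the 32-bit IPv4 address space, RFC 1287's 10**9-network target, and the 128-bit space that IPv6 later provided.

    # The numbers behind the addressing concern, restated from the text above.
    ipv4_addresses = 2 ** 32      # the 32-bit IPv4 space: about 4.3 billion addresses
    target_networks = 10 ** 9     # RFC 1287's scaling goal of a billion networks
    ipv6_addresses = 2 ** 128     # the space IPv6 eventually provided

    print(f"IPv4 addresses:  {ipv4_addresses:,}")
    print(f"Target networks: {target_networks:,}")
    print(f"IPv6 addresses:  {ipv6_addresses:,}")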
The RFC 1287 authors pinpointed the problems with address assignment and routing tables that have become a central concern of many Internet researchers. The authors recognized that changing the IP address size and format presented major difficulties and would require transitional measures, but they declared that a thorough overhaul was the only way to solve the problems with address assignment and the proliferation of routes.
IPv6 is the main outcome of this research. Yet a lot of observers suggest, with either scorn or despair, that IPv6 will never be put into practice. Despite important steps forward like the implementation of IPv6 in routers and operating systems, one would be hard put to find an IPv6 user outside of private networks or research environments like the Internet2 networks. Other observers acknowledge that changes on this scale take a long time, but they claim that IPv6 is critical and therefore that its spread is inevitable.
RFC 1287 anticipated that "short-term actions" might be found to give the Internet "some breathing room." And indeed, that is what happened. Classless Inter-Domain Routing (CIDR) did away with the rigid class boundaries, allowing prefixes of any length and creating the needed flexibility in address assignment. Network Address Translation allowed organizations to make do with displaying a single IP address (or something on the order of a Class C network) to the world.
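As a small illustration of the flexibility CIDR introduced, the following Python sketch (using the standard ipaddress module, with arbitrary private-range prefixes) shows four contiguous Class C-sized blocks collapsing into a single /22 route, which is how CIDR relieved pressure on both address assignment and routing tables.

    # Four contiguous Class C-sized (/24) blocks aggregate into one /22 route.
    # The 10.1.x.0 prefixes are arbitrary private-range examples.
    import ipaddress

    blocks = [ipaddress.ip_network(f"10.1.{i}.0/24") for i in range(4)]
    aggregated = list(ipaddress.collapse_addresses(blocks))

    print("Separate routes:", [str(n) for n in blocks])
    print("CIDR aggregate: ", [str(n) for n in aggregated])   # ['10.1.0.0/22']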
The path of least resistance has won out. The RFC authors themselves recognized the possibility that an old proverb (sometimes reversed) would apply: the good is the enemy of the best. In any case, RFC 1287 was right on target about these issues, and they were handled elegantly.
Some Comments on Addressing and Routing
If the plans of the Internet designers and device manufacturers come to fruition, the Internet may become one of the largest and most complex systems ever built. The traditional way to manage size and complexity is through the hierarchical delegation of tasks, and the routing infrastructure on the Internet certainly follows that model. Recent proposals for address allocation reinforce the important role played by the organization that routes the packets to each end user.
Addresses for private networks are a fixture of IP. Wherever a user reaches the Internet through a gateway, as NAT shows, tricky addressing schemes can sidestep the need for a large, Internet-wide address space. The results aren't pretty and are blamed for holding back a wide range of new applications (especially peer-to-peer), but they are still dominant in today's networks. Mobile phones always talk to the larger network by connecting to a gateway in the cell, so each phone company could create a special mobile-phone addressing space and translate between that and the addresses used by outsiders to reach the phone.
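A toy sketch of that translation idea, with invented addresses and port numbers, might look like the following: many private hosts share one public address, and the gateway's table is the only place the mapping exists.

    # A toy NAT table: many hosts on a private network share one public address,
    # and the gateway maps (private address, private port) to a public port.
    # All addresses and ports here are illustrative, not from any real deployment.
    import itertools

    PUBLIC_ADDRESS = "203.0.113.7"              # the single address shown to the world
    _next_public_port = itertools.count(40000)
    _translation_table = {}                     # (private_ip, private_port) -> public_port

    def translate_outbound(private_ip, private_port):
        """Return the (public_ip, public_port) the outside world will see."""
        key = (private_ip, private_port)
        if key not in _translation_table:
            _translation_table[key] = next(_next_public_port)
        return PUBLIC_ADDRESS, _translation_table[key]

    print(translate_outbound("192.168.1.20", 5060))   # ('203.0.113.7', 40000)
    print(translate_outbound("192.168.1.21", 5060))   # ('203.0.113.7', 40001)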
Some commentators think that mobile phones and other devices will be the force driving the adoption of IPv6. That would probably be beneficial, but phone companies don't have to take that path. They could take the path of address translation instead. The ideal of a monolithic Internet consisting of equal actors has given way to a recognition that users take their place in a hierarchy of gateways and routers.
Security Slowly Congealing on the Internet
In the 1980s, many computer users seemed to accept that security was a binary quantity: either you had a network or you had security. The famous Orange Book from the Department of Defense refused to consider any system attached to a network worthy of certification at fairly minimal levels of security.
Security problems remain the top headline-getter and the central battleground for the Internet today. Among the casualties is one of Tim Berners-Lee's original goals for the World Wide Web. He wanted a system where people could easily edit other people's Web pages as well as read them, and he has mentioned the lack of Internet security as the reason that this goal remains unfulfilled.
RFC 1287 treats security as a major focus and lays out lots of ambitious goals:
• Confidentiality. We've made great strides in this area. Although few people encrypt their email, there are many VPN users enjoying reasonably secure connections through PPTP, L2TP, and SSH. Web sites offer SSL for forms and other sensitive information. The general solution to confidentiality, IPSEC, is gradually appearing in both commercial and free-software VPNs.
• "Enforcement for integrity (anti-modification, anti-spoof, and anti-replay defenses)." This seems to be offered today by Kerberos and its various commercial implementations and imitations. The VPN solutions mentioned in the previous item also contribute.
• "Authenticatable distinguished names." This promise lies implicit in digital signatures, but these signatures are not widely deployed except when users download software from major Web sites. A major breakthrough may come with Microsoft's My Services, or its competition.
• Prevention of denial of service (DoS). Clearly, we have no solution to this problem. Trivial DoS attacks can be thwarted with firewalls and proxies, but in the face of distributed DoS the best (and still faint) hope we have is to persuade ISPs to adopt outgoing filters on certain kinds of traffic.
In general, progress toward security has been steady, but along multiple and haphazard routes. Symptomatic of this unfinished work is that nobody has written RFC 1287's recommended "Security Reference Model." The partial success in turn reflects the difficulty of retrofitting security onto TCP/IP. The RFC authors themselves repeated the common observation that "it is difficult to add security to a protocol suite unless it is built into the architecture from the beginning."
As with IPv6, the slowly evolving state of Internet security can be ascribed to taking the path of least resistance. It's hard for IPSEC to take hold when application-layer approaches to confidentiality and authentication seem adequate.
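What such an application-layer approach looks like in practice, expressed with today's Python standard library against an example host, is roughly this: the application wraps its own TCP connection in TLS (the successor to the SSL mentioned above) without asking anything of the IP layer.

    # Wrap an ordinary TCP connection in TLS at the application layer.
    # www.example.org is just an example host; any HTTPS server would do.
    import socket
    import ssl

    context = ssl.create_default_context()   # verifies the server's certificate chain

    with socket.create_connection(("www.example.org", 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname="www.example.org") as tls_sock:
            print("Negotiated:", tls_sock.version())
            tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: www.example.org\r\n"
                             b"Connection: close\r\n\r\n")
            print(tls_sock.recv(200))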
On top of the Internet's flaws, RFC 1287 seemed to recognize at an early stage that personal computers with their less-than-satisfactory operating systems would require network protection. I believe this recognition underlies the warning, "There are many open questions about network/subnetwork security protection, not the least of which is a potential mismatch between host level (end/end) security methods and methods at the network/subnetwork level."
In the context of this issue, the following "assumption" seems more like a veiled complaint about weak operating system security: "Applying protection at the process level assumes that the underlying scheduling and operating system mechanisms can be trusted not to prevent the application from applying security when appropriate."
Some Comments on Security
Integrity and confidentiality are supported quite well with today's Internet standards, but authentication and larger trust issues are not. With authentication, we step outside the circle where protocols and code can solve the problem; we need more support from the larger world. This additional verification and legal backing is what certificate authorities offer. Credit card purchases over the Internet also require these real-world structures. And even more support has to be put in place if we plan to achieve end-to-end security on the Internet.
Consider this analogy: You are sitting on a public bench when a scruffy fellow pulls up in a van and asks if you'd like some stereo equipment at a low price. Some buyers would be educated enough to know how to open up the equipment and make sure it matched the quality claimed by the seller. The seller, in turn, could hold your cash up to the light to make sure it's not counterfeit. That's the real-world equivalent of integrity on the Internet.
But you're not likely to conclude the deal simply on the basis that the equipment and the cash are both what they appear to be. First, you will justifiably assume that the transaction is illegal because the goods are stolen. Second, you will probably want to pay with a credit card and get a warranty for the goods. For a number of reasons that deal with authentication and trust, you're well advised to say no to the fellow in the van.
In many ways, Internet transactions take place out of the back of a virtual van. We can change that by erecting a complicated verification and authorization system, or adapt to it by using a Web-of-Trust system and dealing with sites recommended by friends or other authorities. In either case, real-world backing is critical. Furthermore, system designers must always try to protect the right to anonymity outside of transactions and the right to data privacy.
Quality of Service
RFC 1287 championed the use of the Internet for voice and video. It also recognized that major enhancements to routers would be required to render a packet-based network appropriate for these exciting applications.
The Internet still does not support viable QoS. Multiprotocol Label Switching (MPLS) and IPv6 options for QoS are intriguing but unproven. RSVP (Resource ReSerVation Protocol) is supposed to create low-latency channels, but even its supporters admit it is feasible only if the correspondents share a LAN or another carefully controlled network. (RFC 1287 anticipated this problem when it pointed to routing between autonomous domains as an area for research.) Recently, the original type-of-service byte (TOS) in the IP header has been revisited, and an IETF working group has defined an architecture and standards for using it to provide real service differentiation. Products are appearing, but the services remain to be deployed.
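For a sense of what using the revisited TOS byte involves at the endpoint, here is a minimal Python sketch, assuming a platform that exposes IP_TOS and using a placeholder destination address: the application asks the operating system to mark its packets with a DiffServ code point, and everything after that depends on routers along the path honoring the marking, which is precisely the deployment gap described above.

    # Mark outgoing UDP packets with DiffServ code point 46 (Expedited Forwarding),
    # which occupies the top six bits of the old TOS byte: 46 << 2 == 0xB8.
    # IP_TOS is available on Linux and most Unix-like systems; the destination
    # address below is a placeholder from the documentation range.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 46 << 2)
    sock.sendto(b"latency-sensitive payload", ("192.0.2.10", 5004))
    sock.close()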
Steve Crocker writes, "QoS requires considerable inter-provider cooperation and standards. There's been a huge land grab and subsequent crash among the providers. We probably have to wait for the dust to settle before we can get any real cooperation." A lot of organizations that want to eliminate jitter and transmission breaks achieve those goals by over-provisioning the communication lines, or by running a lower-layer service with policy routing, such as ATM or MPLS.
Multimedia
RFC 1287 pleaded for advanced applications, and called for more formats to be recognized as standards. MIME has essentially fulfilled this goal, as mail standards designer Einar Stefferud points out. The easy integration of file types on the Web (through MIME or some simpler form of recognition like file suffixes) gave us the basis for rapid advances in graphics, audio, video, and animation that one can find either extremely impressive or highly annoying. The Internet2 community is continuing the quest for more high-bandwidth applications.
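The suffix-to-MIME-type mapping mentioned above is mundane enough to demonstrate in a few lines; this sketch uses Python's standard mimetypes module and some made-up file names.

    # Map file suffixes to MIME types, the way Web software decides
    # which handler gets a given resource. The file names are made up.
    import mimetypes

    for name in ("diagram.svg", "lecture.mp3", "clip.mpeg", "page.html"):
        mime_type, _encoding = mimetypes.guess_type(name)
        print(f"{name:12s} -> {mime_type}")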
Thus, the Internet community has accomplished most of what RFC 1287 requested in the applications area. One intriguing area left to be tied up is the RFC's "Database Access" goal, which calls for "a standard means for accessing databases" and methods that "will tell you about its [the data's] structure and content." Several standards organizations have been developing solutions, often based on SQL. A system that is widely adopted and easy to use is probably not far off, in the form of the W3C's XML Query.
Some Comments on Quality of Service and Multimedia
I have no quarrel with the claim--spoken by planners ranging from the most technical communities to the chambers of corporate management--that voice and video represent the core communications channels for the masses, and that the Internet will flower in ways we cannot even imagine if it manages to support interactive voice and video robustly. Realizing the importance of that vision is much easier than realizing the vision itself.
I suspect that packet-switching is not the right delivery channel for large, real-time streams. Using a dedicated circuit to deliver a movie is much easier. Current systems for delivering entertainment are not broken, and there's no reason to fix them. In fact, physical media like CD-ROMs have a lot to recommend them; they're cheap, easy to transport, and durable. To see how responsive this distribution system is, consider Afghanistan, where music and films were banned for five years. Two days after the Taliban left Kabul, CDs and videos were being sold all over town.
But the Internet should evolve to support bursts of high-bandwidth material. We need to make it good enough for reasonably high-quality telephone calls, for interactive video games, and for showing simulations or brief film clips as part of such information-rich communications as distance learning or medical consultations. As for films and music--well, in the new Internet environment, I expect them to evolve into playful, malleable, interactive variations that compete with the traditional media, without trying to displace or ape them.
As audiovisual formats proliferate, Internet researchers recognize the need to tie them together and let them interoperate. Solutions seem to be collecting under the XML umbrella. Synchronized Multimedia Integration Language, Scalable Vector Graphics, and perhaps even the Resource Description Framework can be considered steps toward the integration and standardization of new media.
RFC 1287 Didn't Cover Everything
Having covered the major areas addressed in RFC 1287, I will end by stepping back and asking what its authors failed to anticipate, or what modern issues perhaps simply didn't seem relevant to their discussions. I find their visionary predictions and recommendations quite on target. But hindsight always has the last word.
Everyone knew that corporations would eventually discover the Internet, and that they would demand that it become a more reliable medium. What became obvious only later on is that the commercial interests' notion of reliability included not just up-time and bandwidth, but legal and political policies. These policy issues ranged from global ones like the relationship between domain names and trademarks, to narrow problems like the challenge that Internet gambling presented to traditional casinos.
The growth of commercial involvement paralleled the decline of government involvement. One turning point came with some fanfare in 1995 when the original, government-sponsored Internet backbone (NSFnet) was replaced by one run by the private entity Advanced Network Services (ANS). Significantly, ANS was bought by America Online in 1994.
Internet growth strained the organizations tasked with designing it, and led to the creation of some new organizations. The Internet Society, the World Wide Web Consortium, and ICANN are all attempts to solidify both funding and standards for Internet operation.
The Internet still retains an abstract purity that has proven incredibly robust in an age when it is relied upon by millions of people, but it rests upon a nuts-and-bolts, real-world infrastructure that is much more problematic. The cry for higher bandwidth along the last mile has been heard by cable and telecommunications companies, but their offerings fall short along several measures (cost, availability, reliability, upstream bandwidth, and so on).
Worse still, after years of shrill accusations and admonitions, it's clear that high bandwidth is spreading slowly because it's truly hard to do well, and to do at a reasonable cost. The problems can't be totally blamed on manipulative policies by malignant companies. But bold new thinking is required to find a low-cost solution, and that is going to have to come from a totally new quarter. As one example, the successes of spread-spectrum, packet-radio Internet offer a tantalizing promise for the future, and await a recognition by government that new spectrum should be reassigned to this efficient and open medium.
To sum up, the Internet's success outstripped even the predictions of its leaders in 1991. But its success rests to a large extent on the types of planning and vision that these leaders demonstrated in RFC 1287. Obviously, they couldn't anticipate everything. But what's more surprising is that a lot of what they called for has taken more than ten years to put in place. Apparently, the evolution of digital media does not always take place on Internet time.
Acknowledgements
Thanks to Fred Baker, Nathaniel S. Borenstein, Brian E. Carpenter, Lyman Chapin, Steve Crocker, Jon Crowcroft, Russ Hobby, Harry Hochheiser, John Klensin, Clifford Lynch, and Einar Stefferud for their comments and review. Responsibility for errors and opinions lies with the author.
Andy Oram is an editor at O'Reilly & Associates, specializing in books on Linux and programming. Most recently, he edited Peer-to-Peer: Harnessing the Power of Disruptive Technologies.