What is the Internet edge-to-edge (end-to-end or e2e) design principle? What is it good for and what are its main trade-offs?

This is a tutorial on a set of core concepts related to Internet communication for the course COM 251 Information, Technology, Society, which I teach at Purdue University.

In a previous post about the basic architecture of the Internet, I discussed the Internet’s intrinsic decentralization and its layered structure. To make the point as vividly as possible, I compared the architecture of the Internet to that of a car-based transportation system, which combines small and nimble vehicles with a wide and varied road network. In this post, I will explain how the decentralized, roads-and-cars architecture can also be understood as an attempt to move all major design and transportation decisions from a centralized core authority to the edges. In keeping with our metaphor, on the Internet, the drivers and their devices (users, producers, and disseminators of information) are more important and have more leeway in making decisions than in previous systems, where the trains (phone lines) were connected and disconnected at the mercy of the station chiefs and switchmen (operators).

How and why does the Internet push decisions to the “edges” of the network?

To make the point again, cars are like packets, and the Internet is like a system of roads, large and small, modern or antiquated, all connected to each other and all available at any time. Such a system allows maximum flexibility, as large shipments (messages or files) can be sent piece by piece on various vehicles through the most available route, be it a highway (trunk line) or a country road (local ISP connection). It is also a more efficient system. Compared to a train system (the old telephone system), where you need to keep a line occupied until a train passes (i.e., once you pick up the phone, a physical circuit is established between you and the person at the other end of the line), a car-road (Internet) system allows as many cars (packets) on the road (circuit) as its capacity permits. There is no “slack” on a highway. Yet, one might point out, highways can get jammed much more easily than railroads… Why? Because there is no way to stop some cars from getting on the highway, the way you can block some trains from using a line to make sure that there is no overuse of the lines or, worse, a collision. This means that the design principle that makes flexibility and openness possible is both the virtue and the vice of Internet (and highway) communication. The Internet is open. It is a first come, first served environment. And when too many people try to get on it first, things end up like everything else in life: with a traffic jam.

But there is another way to think about the basic technological philosophy of the Internet, besides focusing on openness. Let us examine where the heavy lifting is done on the Internet: on your computer and on the computers you get the information from. These are not at the heart of the Internet. The routers and DNS servers are, as I mentioned in the post about the basic architecture of the Internet. Those are simple and “dumb” traffic cops, whose mission is to rush things along rather than to filter and prioritize. If the routers are the “core” of the Internet, your computer and the servers you get data from or execute operations on are the “edges.” These “ends” of the Internet have all the intelligence needed to deliver and customize content, to run applications, and to operate the myriad things that make the Internet distinctive and useful. Thus, another design principle, closely tied to openness, has emerged in the context of the Internet (and only in this context): the edge-to-edge or end-to-end principle, at times shortened to e2e.

What is the edge-to-edge or end-to-end (e-2-e) Internet principle?

The end-to-end principle can be formulated very briefly in these terms: in a network that is supposed to allow maximum flexibility and efficient use of resources, decision-making and prioritizing should be done at the edges (ends), not at the core of the network, which should be kept as simple and unintelligent as possible. Yet another way to say it is that e-2-e networks need to be edge smart and core dumb. Or, more bluntly, that e-2-e networks are dumb. This might sound paradoxical. The common stereotype is that the Internet is a step forward in the world of communication. At the same time, the Internet is supposed to be decentralized. How come, then, that all of a sudden the Internet got a “core” and some “edges”?

Let us start with the “edge” – “core” distinction. In this context, the two concepts refer to a conceptual edge and core, to the two main mechanisms that make the Internet what it is, not to an actual physical “core.” For any Internet connection to function we need computers, which serve as clients (where you browse the web, for example) or servers (where information is stored), plus the connections between them, which are facilitated by wires (or wireless connections) and, most importantly, by some essential pieces of equipment, the routers. In fact, the post on basic Internet architecture, where we briefly discussed the structure of the Internet, already mentioned the routers and the fact that they are part of the Internet “core.” Let us now expand on that.

A router is a piece of equipment (a very unsophisticated and modest computer, really) that handles requests for connections between a client (desktop, laptop, etc.) and some source of information (Facebook, tmz.com, etc.). It is quite possible you have one in your own home: it is the Internet connection box that you connect to, via WiFi or wires, to get Internet access on your personal computer. It is an automatic switchboard operator. When you request one or more webpages (usually more than one), the routers, with the help of DNS servers, look up the IP address of the domain name you requested and pass the page back to you. Routers outside your home, at your Internet provider, are more powerful but equally simple. They are only responsible for handling the operations involved in packet switching, such as keeping track of which packet is going where, what has and has not arrived at the destination, and so on. This is a very important function. At the same time, routers, and the protocols they use to make decisions (mainly TCP/IP), are designed to be ignorant of the content of what they send back and forth. Furthermore, in the interest of efficiency, they are designed to treat all packets that arrive at their doors on a “first come, first served” basis. To take advantage of the least busy connection, packets are sent whichever way is most convenient. No operation that would impede this efficiency is allowed. In exchange, to compensate for the lack of sophistication of the core routers, the clients and the servers, where content is produced and consumed, are allowed to be as smart, complex, and sophisticated as they can be. For example, when you receive a video file from Netflix, it is the duty of the server and of your computer to buffer it and to make sure that all the packets get to you in time. If this fails, the server and the client renegotiate the connection on the fly, downgrading the quality of the image.
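To make the router’s “dumbness” concrete, here is a minimal sketch in Python (addresses, link names, and payloads are all made up for illustration) of what a core router does and, just as importantly, of what it refuses to do: it reads only the destination address in each packet’s header, consults a forwarding table, and serves everything first come, first served, without ever opening the payload.

    from collections import deque

    # A "dumb" core router in miniature: it looks only at the destination
    # address in the packet header, picks an outgoing link from its forwarding
    # table, and serves packets strictly first come, first served.
    forwarding_table = {"128.210.7.19": "link_A", "172.217.4.78": "link_B"}
    DEFAULT_LINK = "link_to_upstream_provider"

    queue = deque()                      # one shared FIFO queue for all traffic

    def receive(packet):
        queue.append(packet)             # email, video, voice: all queued alike

    def forward_next():
        if queue:
            packet = queue.popleft()
            dst = packet["dst"]                           # the only field examined
            link = forwarding_table.get(dst, DEFAULT_LINK)
            print(f"packet {packet['seq']} -> {dst} via {link}")

    # Three very different kinds of traffic get identical treatment.
    receive({"dst": "128.210.7.19", "seq": 1, "payload": "a sentence from an email"})
    receive({"dst": "172.217.4.78", "seq": 2, "payload": "a frame of a YouTube video"})
    receive({"dst": "128.210.7.19", "seq": 3, "payload": "20 ms of a Skype call"})
    for _ in range(3):
        forward_next()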

What do routers and the “core” of the Internet really do?

Let us now think about the role played by the routers, which are at the Internet “core,” in more specific terms. And let us think about their role from the perspective of the larger job they are engaged in. We are all great multitaskers nowadays. We simultaneously write and send email, keep an eye on instant messaging windows, click “like” buttons on Facebook, have a Skype conversation, and run Spotify or YouTube in the background. This means that your computer simultaneously creates, sends, and receives a large amount of information, of many different kinds. At the other end of your connections, people and servers prepare images, videos, and songs and deliver them to you. However, to the Internet (and here we reduce the Internet to its core technology, the routers, and their protocols) all this is reduced to approximately equal-size packets, stamped with two addresses, that of the sender and that of the receiver, and tagged with a label that indicates what file each is a part of. Routers pass the packets on, like a hot potato, until they reach the destination at the other edge of the Internet. The router closest to you might receive, in a short burst from you, a sentence from an email, a request for a webpage, a fragment of a video you are watching on YouTube, or a few words from a Skype conversation you are having with your friend.
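To picture what this reduction to packets looks like, here is a rough Python sketch (the addresses and the tiny packet size are invented) of a message being cut into roughly equal-size pieces, each stamped with sender, receiver, and a sequence label, and of the receiving edge putting the pieces back in order no matter how they arrive.

    # Cut a message into roughly equal-size packets, each stamped with the
    # sender's and receiver's addresses and a sequence label, so the far edge
    # can reassemble the file.
    def packetize(message, src, dst, size=8):
        packets = []
        for seq, start in enumerate(range(0, len(message), size)):
            packets.append({
                "src": src,                      # who sent it
                "dst": dst,                      # where it is going
                "seq": seq,                      # which piece of the file this is
                "data": message[start:start + size],
            })
        return packets

    def reassemble(packets):
        # The receiving edge sorts the pieces back into order, whatever order they arrived in.
        return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

    pieces = packetize("Meet me at the fountain at noon.", src="10.0.0.5", dst="128.210.7.19")
    print(len(pieces), "packets")
    print(reassemble(reversed(pieces)))   # arrives out of order, still reassembles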

Why can’t we prioritize time-critical content on an e-2-e network?

Now, think about it for a minute. Is it truly rational to throw all these in the same bag and send them out without even looking at how important or urgent they are? Shouldn’t, for example, the router hold onto that email message until you finish your Skype conversation? Or, more directly put, shouldn’t the words of your Skype conversation be prioritized, shouldn’t they be sent all at once, so that the quality of the call is always excellent? Of course it would make more sense, and this becomes more and more pressing as we send more audio and video messages back and forth. Video is particularly critical, as watching a movie demands that all packets arrive in the right order and at the right time. Video delivery would be much more reliable if the routers could make way for videos, which are time critical, while slightly delaying the delivery of various bits of static messages, like emails. Netflix, Hulu, and other video-on-demand services would all benefit from a network architecture that prioritizes content according to its nature. Yet, the Internet was designed not for quality or reliability of connection, or for creating timely connections, but for efficient use of bandwidth and decentralized access. From the perspective of the early creators of the Internet, what is most rational is to send all messages (packets) as soon as they are created, to fill all available channels of communication. Furthermore, it was always assumed that the servers and the clients would be smart enough to compress data and use all kinds of tricks to compensate for the lack of network reliability. That is why, when you watch a video clip online, you need to wait for the video to “buffer.” Your client is smart enough to know that it needs to download much more than what you need to see at any particular time, so that if there is a hiccup in the communication with the server, your connection can live for a little while off the “fat” accumulated in the buffer.
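Here is a toy illustration of that edge-side trick in Python (the numbers are arbitrary): the client downloads ahead of where you are watching, so a brief hiccup in delivery is covered by the slack already sitting in the buffer instead of freezing the picture.

    # The client keeps several seconds of video downloaded but not yet watched,
    # so a short interruption in delivery does not stall playback.
    buffer = []                 # seconds of video downloaded but not yet watched
    TARGET_AHEAD = 10           # keep roughly ten seconds of slack

    def network_delivers(seconds):
        buffer.extend(range(seconds))     # pretend each item is one second of video

    def play_one_second():
        if buffer:
            buffer.pop(0)
            return "playing"
        return "stalled (re-buffering)"

    network_delivers(TARGET_AHEAD)        # the initial wait: "buffering..."
    for tick in range(15):
        if tick not in (5, 6, 7):         # a three-second hiccup in delivery
            network_delivers(1)
        print(tick, play_one_second(), f"{len(buffer)}s in buffer")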

e-2-e or edge-to-edge and its discontents (trade-offs)

To draw a first line under the conversation, we need to conclude that the e-2-e principle is a good idea as long as you accept its trade-off, namely the one between efficiency/flexibility and the reliability/quality of live and streaming connections. For many observers, such as Lawrence Lessig, the author of the book The Future of Ideas, this is a justified trade-off, especially if we add to the plus column another benefit: creativity. Lessig believes that the simple Internet “core” is a little bit like a “commons.” It is a place we can all use in common and imagine new ways to exploit and create around. He makes the point, which is very true, that by making and keeping TCP/IP a simple traffic cop, we allowed all kinds of content to be created at the edges and sent back and forth between users and producers, without much worry about whether the computers or the systems were compatible. Furthermore, if we needed more sophisticated protocols of communication to sit on top of TCP/IP, we could simply invent them. This was the case for HTTP, which was invented relatively late in the game and which allowed us to send web pages back and forth. Or the more recent VoIP (Voice over IP), which makes Internet telephony possible. Yet some people point to the fact that the assets in the plus column are not as many or as rich as they seem. The Internet allows new protocols and technologies, but they tend to be of the rather simple and unsophisticated kind. Internet telephony is often unreliable and the quality of the connection poor. As the old adage goes, you get what you pay for. As soon as more complex technologies were proposed, such as replacing the old HTML language with more complex computer programs or systems (think Flash animations or Java-only applications), the system balked and rejected them.
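The “simply invent them” point is easy to see in practice. The sketch below (Python, using example.com purely as a stand-in host) sends a bare-bones HTTP request over an ordinary TCP connection: the whole protocol is nothing more than an agreement between the two edges about what text to exchange, and the routers in between carry it without knowing or caring that a new protocol exists.

    import socket

    # HTTP is "just" text the two edges agree to exchange over a TCP connection.
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: example.com\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

    with socket.create_connection(("example.com", 80), timeout=5) as conn:
        conn.sendall(request.encode("ascii"))
        reply = b""
        while chunk := conn.recv(4096):
            reply += chunk

    # Print just the status line, e.g. "HTTP/1.1 200 OK"
    print(reply.decode("ascii", errors="replace").split("\r\n")[0])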

The greatest headache created by the strict implementation of the e-2-e principle remains, however, that of prioritizing the transmission of content that is time sensitive. As more and more people switch to the Internet for downloading or watching movies or playing online games, more and more information (orders of magnitude more information) is sent back and forth. At one point, it was estimated that Netflix, the video delivery service, occupied about a third of North American Internet bandwidth. Yet a lot of the movies that we watch online don’t look that great. And watching the Super Bowl live, online, is still a dream. (Why? What would be the connectivity demand for such an enterprise? Multiply 100 million connections by 5 megabits/second.) What is Lessig’s solution to the bandwidth and timeliness problem? Infinite bandwidth? Nationalizing the cable companies? (The questions are not rhetorical. What does he have to say about this issue?)
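For a back-of-the-envelope sense of the scale involved (assuming, purely hypothetically, 100 million simultaneous streams at 5 megabits per second each):

    # A rough estimate of the aggregate demand for a live, nationwide stream.
    viewers = 100_000_000
    per_stream_mbps = 5
    total_tbps = viewers * per_stream_mbps / 1_000_000   # megabits -> terabits
    print(total_tbps, "terabits per second")              # 500.0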

Why should we care about the trade-offs implicit in the e-2-e principle?

I wrote this introduction for communication professionals. They should all care about this topic for reasons that go beyond the geek brownie points they might get for mentioning the e-2-e principle in casual conversations. They need to know about it because the future of the content industries depends on it: it is tightly related to the still unfinished debate about the “net neutrality” issue. In 2015, under the Obama administration, the FCC (Federal Communications Commission) issued a binding ruling titled the “Open Internet Order” that forces all companies involved in Internet services (Internet Service Providers) to apply the e2e principle strictly, by refraining from implementing any technical modifications to their systems that would prioritize one website over another or even one type of content over another. (Wireless networks were not subjected to the same restrictions, though. Why? This is another, longer story, which can be explored in this newspaper coverage of net neutrality.)

The FCC net neutrality rule demands that companies such as Comcast, one of the nation’s leading Internet providers, not favor one type of service over another. In other words, Comcast could not serve movies ordered through its own Xfinity movie-on-demand service (via Internet, not classical cable) faster, with less lag, or at higher quality than the movies its customers try to watch through Netflix. In its own words, the rule states… (How about you find out the specific wording of the rule?) The sources listed below describe the net neutrality rule in great detail, and they talk about the fact that it puts big content providers, such as Comcast or NBC, at odds with the companies that aggregate content and redistribute it to the public (Google and Microsoft). In reading the several brief articles dedicated to the net neutrality debate, remember, again, that this is not a simple technical debate. It has long-lasting consequences. Some say that it will foster more innovation, allowing new technologies and applications. Yet others point to the fact that it is rather paradoxical to believe that denying the cable companies the right to improve the protocols and methods of communication that make delivery of content “smarter” is a path to innovation. They say that calling stagnation “innovation” is an Orwellian exercise of calling war “peace” and slavery “freedom.” What do you think?
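To see what “favoring one type of service over another” would look like in technical terms, contrast the first come, first served router sketched earlier with the following Python sketch of a non-neutral router (the ranking of traffic types is entirely invented) that reorders packets according to the kind of content, or business partner, they belong to; this is the sort of treatment the rule forbids.

    import heapq

    # For contrast only: what a NON-neutral router would do. Instead of one
    # shared first come, first served queue, it ranks packets by what kind of
    # content they carry or whose service they belong to.
    PRIORITY = {"xfinity_video": 0, "voice": 1, "web": 2, "netflix_video": 3}

    arrivals = [
        {"kind": "netflix_video", "seq": 1},
        {"kind": "web", "seq": 2},
        {"kind": "xfinity_video", "seq": 3},
    ]

    non_neutral_queue = []
    for order, packet in enumerate(arrivals):
        heapq.heappush(non_neutral_queue, (PRIORITY[packet["kind"]], order, packet))

    while non_neutral_queue:
        _, _, packet = heapq.heappop(non_neutral_queue)
        print("forwarding", packet["kind"], "packet", packet["seq"])
    # The Netflix packet that arrived first is now served last.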

Resources on net neutrality

Sorin Adam Matei

Sorin Adam Matei – Professor of Communication at Purdue University – studies the relationship between information technology and social groups. He has published papers and articles in Journal of Communication, Communication Research, Information Society, and Foreign Policy. He is the author or co-editor of several books, the most recent being Structural Differentiation in Social Media. He also co-edited Ethical Reasoning in Big Data, Transparency in Social Media, and Roles, Trust, and Reputation in Social Media Knowledge Markets: Theory and Methods (Computational Social Sciences), all three the product of the NSF-funded KredibleNet project. Dr. Matei’s teaching portfolio includes online interaction and online community analytics and development classes. His teaching makes use of a number of software platforms he has co-developed, such as Visible Effort. Dr. Matei is also known for his media work. He is a former BBC World Service journalist whose contributions have been published in Esquire and several leading Romanian newspapers. In Romania, he is known for his books Boierii Mintii (The Mind Boyars), Idolii forului (Idols of the Forum), and Idei de schimb (Spare Ideas).
