The generative dilemma of the Internet

This is an introduction to a set of core concepts related to Internet communication for the course COM 251 Information, Technology, Society, which I teach at Purdue University.

One of Aesop’s fables gives a clever answer to the question: “What is the best and the worst thing about humans?” The answer is “the tongue.” It can be used to say both the most uplifting and the vilest things.

The Internet comes with its own curse, that of “generativity.” The Internet is open to new applications and innovations. Openness makes it, however, unreliable and insecure. To understand how the dilemma of generativity came about, we need to recount some basic facts about the Internet.

The origins of generativity and its downsides

The underlying architecture of the Internet is generative. This means that the Internet can grow and be extended with new layers of functionality and applications. While we keep some core functionality in place, we can always add new things to the Internet. These “new things” include ways of packaging and sending information, which generate new methods of communication. Yesterday, these methods were web pages or voice calls. Tomorrow, they will be augmented reality layers of information, or data coming from small sensors or sent to actuators. This generative architecture rests on three ideas: internetworking, packet switching, and open protocols. Let us translate these grandiose phrases into layman’s terms:

Internetworking: The Internet is a cooperative agreement between various local network owners. They agree to open their networks to outside traffic because they, in turn, benefit from connecting to the net through third-party networks unknown to them. Internetworking is an extension of the famous golden rule: “do unto others as you would want them to do unto you.” Or, better yet, of the idea to “share and share alike.”

Packet switching: To accomplish the goal of sharing and sharing alike, a network exchanges its information with the outside world in the form of dumb packets, which contain the rawest version of the stuff sent back and forth: bits. These bundles of bits are passed on from network to network by routers and by protocols that handle addressing and traffic integrity. Packets are also sent out through whichever path is least jammed at the time. Thus, each packet might take a different route.

Open (transparent) protocols: The packets are sent back and forth according to some simple rules, called “protocols,” implemented by small programs. Their job is to address the packets and send them on their way. Nothing more. These programs reside on your computer, but also on the computers between you and the sender, which are called “routers.” Their mission is to find a path to the destination, not to optimize delivery or, God forbid, to prioritize specific applications or help them create or render content.

Because of packet switching and open protocols, when you send your stuff out, you should not expect a given path of delivery. All you can expect is a best-effort path. Your message can go through any number of subnetworks, some unknown to you.
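To make these ideas concrete, here is a toy sketch in Python. It is not an implementation of IP or TCP, just an illustration of the principle: the message is chopped into numbered packets, the packets may arrive in any order because each can take a different best-effort route, and the receiving edge reassembles them.

    import random

    # A toy illustration of packet switching, not real IP or TCP:
    # the message is chopped into small, numbered packets that may
    # arrive out of order and must be reassembled at the destination.

    MESSAGE = "Hello from one edge of the network to the other!"
    PACKET_SIZE = 8  # characters of payload per packet

    # Chop the message into (sequence_number, payload) packets.
    packets = [
        (seq, MESSAGE[i:i + PACKET_SIZE])
        for seq, i in enumerate(range(0, len(MESSAGE), PACKET_SIZE))
    ]

    # Each packet may take a different, best-effort route, so arrival
    # order is not guaranteed. We simulate that by shuffling.
    random.shuffle(packets)

    # The receiving edge restores the order using the sequence numbers.
    reassembled = "".join(payload for _, payload in sorted(packets))
    assert reassembled == MESSAGE
    print(reassembled)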

The openness of protocols and packet switching also makes the Internet a core-dumb network. This is a network in which the traffic cops (the protocols and machines that direct traffic) are very simple, dumb even. A core-dumb network works well in situations where the content needs to get to the destination but does not have to arrive at a particular point in time or in a specific order. A dumb network is also, according to Zittrain, a system that encourages innovation. Open protocols and first-come, first-served philosophies may be recipes for creating new applications and even new protocols, as the caricature below suggests.
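Here is a caricature, again in Python, of what such a dumb traffic cop does (the addresses and payloads are made up for illustration): it queues whatever arrives, first-come, first-served, and forwards each packet by destination address without ever looking inside. The same indifference that welcomes a brand-new application also welcomes spam.

    from collections import deque

    # A caricature of a "dumb" core router: a first-come, first-served
    # queue that forwards each packet based only on its destination
    # address, never inspecting the payload.

    forwarding_queue = deque()

    def accept(packet: dict) -> None:
        """Take in any packet, no questions asked."""
        forwarding_queue.append(packet)

    def forward_all() -> None:
        """Send packets onward in arrival order, reading only the address."""
        while forwarding_queue:
            packet = forwarding_queue.popleft()
            print(f"forwarding to {packet['destination']} "
                  f"({len(packet['payload'])} bytes, content not inspected)")

    # A new application's data and a spammer's data are treated alike.
    accept({"destination": "198.51.100.7", "payload": b"a brand-new protocol"})
    accept({"destination": "203.0.113.9", "payload": b"yet another spam blast"})
    forward_all()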

At the same time, experts like John Day question the future of e-2-e networks. He points to the fact that edge-smart, core-dumb networks are a hindrance to creating more secure, intelligent, and adaptive networks. First-come, first-served networks encourage abuse. During the early 2000s, eighty percent of email was spam. Now it is “merely” 50-60%. The prevalence of spam is possible because Internet routers and email servers are obligated to take in and transport all email sent. Thus, spammers are encouraged, since they do not have to bear the cost of sending or even the cost of failing to deliver their messages.

The proponents of the e-2-e principle respond that the system should be judged not by some of its drawbacks but against its many benefits. According to Lessig, for example, such benefits include creativity, reach, and flexibility. In other words, because the core is dumb and open, anyone can create a new smart application or service at the edge, which will always be handled well by the core. Reach is forever expanded by allowing new computers to connect quickly. Both these possibilities also illustrate the idea of the Internet’s infinite flexibility. Still, we cannot put a value on a benefit until we weigh it against its costs. Benefits without costs are not only impossible; they are not really benefits at all.

Let us reconsider an example I gave before: a system of open roads. Road networks that allow any carrier encourage the vehicle industry to innovate much more than a system dominated by railroads. In the former, you can build and drive big and small cars, broad and narrow ones, or even vehicles that have only one or two wheels. On a railroad track, you can only put a rail car with wheels of a certain size and an axle of a particular width. Engines, too, are limited to a few solutions. But road networks that do not limit which vehicles may travel on them might be deluged by a mix of cars, bicycles, motorbikes, and even animal-drawn carriages, not to mention pedestrians. Also, when too many cars get on the road, you need traffic control. Even roads need rules!

The Internet as an e-2-e network

Let us return to the Internet. This open network is not without costs: abuse, unreliability, and lack of prioritization. To claim that the end-2-end Internet pays for these drawbacks through its “generativity,” you need to show that this is indeed the case. You need to show that the new network configurations, applications, and ideas are worth the aggravation of reduced reliability and security.

Even if this were so, e-2-e networks would suffer from another drawback, for which we do not have a readily available cure. The illness is that the Internet carries not only annoying text messages from real or fake sellers of Viagra or news about Donald Trump, but also malicious code in the form of password-stealing viruses, ad-serving pop-ups, and hard-drive-encrypting, ransom-demanding software.

Jonathan Zittrain explains in his book The Future of the Internet and How to Stop It, especially in Chapter 3, “Cybersecurity and the Generative Dilemma,” how the dumb Internet core fails us. The Internet’s governing software is a simple traffic light. Its mission is not to ask questions but to move the traffic along. This feature allows harmful programs sent over the net to ravage our lives. We are talking about the nasty things that may ruin your machine or even your life: Trojan horse software that steals your passwords, or ransomware, which encrypts your computer until you pay a “ransom” to get it back.

What are the worst effects of the generative dilemma?

Let us explore the generative dilemma in more detail. For the Internet to work and grow, and for new and exciting applications and uses to be developed, not only content (static text, pictures, or videos) needs to be sent between computers, but also the executable programs that generate the new use possibilities. In other words, the Internet needs to allow code that will be executed on your local computer to travel alongside primary data (content). Executable code is dangerous because it is the type of program that can take over your computer. Once a program is installed, there is no limit to what it can do.

Stated even more forcefully, the generative dilemma means that for the Internet to be a generative system, an innovation commons, in Lessig’s terms, it also needs to give the world control over your computer.

When you access an interactive site, Facebook for example, where things are “automagically” pushed to your desktop (status updates, reports of likes, etc.), your computer downloads actual computer code that runs locally. By doing this, you give Facebook, or any other interactive site, a significant amount of control over what happens on your machine. Next time you log into Facebook, go to the View menu and select the Source item. You will see how much code you download just to see the page.
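For a rougher but quicker measurement, here is a minimal sketch in Python that downloads a page and counts its script tags, each of which marks executable code pushed to your machine. The URL below is only a stand-in, since a logged-in Facebook page cannot be fetched this simply; it would carry far more script than this example.

    import urllib.request

    # A public page used purely as a stand-in; any URL you can reach will do.
    URL = "https://example.com/"

    with urllib.request.urlopen(URL) as response:
        html = response.read().decode("utf-8", errors="replace")

    # Each <script> tag marks executable code the server pushes to your machine.
    script_count = html.lower().count("<script")

    print(f"Downloaded {len(html):,} characters of HTML from {URL}")
    print(f"Found {script_count} <script> tags of executable code")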

Once we realize that the Internet requires us to allow some programs to run on our local machines, we need to restate the idea that the Internet is core dumb by adding that it is also edge smart. The e-2-e phrase means, after all, that most of the real action on the Internet, most of the smart applications, happens at the edges, that is, on your computer or on the servers you access. The edge does all the smart work, including visual and interactive programming. The core, the routers and the DNS (name servers), only shuffles content around.
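To see how little the core itself does, consider name resolution. The DNS simply maps a human-readable name to a numeric address; the routers then move packets toward that address. Everything smarter happens at the edge. A minimal sketch in Python (the hostname is just an example):

    import socket

    # The "core" service: translate a name into an address. Nothing more.
    hostname = "example.com"  # any public hostname will do
    addresses = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)

    for family, _, _, _, sockaddr in addresses:
        print(f"{hostname} -> {sockaddr[0]}")

    # Everything smarter -- fetching, decrypting, rendering, running scripts --
    # is done by programs at the edge: your browser and the remote server.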

How did we get into this generative maze? Are there alternatives?

The decision to make the edge smart and the core dumb was made very early in the history of the Internet. At the time, all computers connected to the Internet were governmental, military, or academic. These were locked in labs and intrinsically secure. What they were not was versatile. Also, the connections between them were costly. A versatile and very efficient connection was the most important thing. Security and privacy were non-issues. The Internet allowed as many computer systems and networks as possible to connect to each other, regardless of their design (operating system), at a low cost.

Interestingly, this type of frugal and unsecured network was in direct competition with a different kind of network: the smart, proprietary network (see Zittrain for a definition and discussion of proprietary networks).

Represented by Compuserve, Prodigy, or AOL (in its first incarnation; see http://www.zdnet.com/article/before-the-web-online-services/), proprietary networks kept all the intelligence and processed all code in the core, which consisted not of simple routers (traffic cops) but of large mainframe computers that did all the heavy lifting. When the core computers used by Compuserve sent information out to the users, it was already processed and ready to be consumed. The client computers were no more than “dumb terminals” or “thin clients.” In other words, the edge machines did not need much processing power to receive the information provided by the mainframe. All they needed was a straightforward browsing program, much simpler than contemporary browsers, and the capacity to display information with minimum computing power.

In addition to being “core smart,” these networks required every single computer to identify its user through personal (login) authentication. When users accessed information on the network, each of them could be identified by name. The reason was simple: proprietary networks charged their users for access. Thus, proprietary networks came with their own trade-offs. Flexibility and low cost were traded away for security and reliability. The fact that we do not hear much about proprietary networks nowadays is mostly due to the fact that the trade-off demanded by the Internet was seen by many as a better deal than the one implicit in proprietary networks.

The return of the proprietary networks as “walled gardens”

The big Internet players (Apple, Google, Facebook), however, are not oblivious to the fact that the Internet remains a laggard where privacy and security are concerned. All these companies worry constantly that their platforms can be used to push executable code onto their clients’ machines. Some of them are trying to change the way the Internet works and to make it a little smarter and more controllable. Companies like Apple or Facebook push for moving Internet browsing to “walled gardens” and to thinner, less complex “information appliances.”

The iPhone is an excellent example of an information appliance that lives in a walled garden. Most content and all programs need to be installed on the iPhone through the Apple App Store (the iTunes market), which vets apps and prevents most viruses from being installed. Even if a malicious app does get installed, Apple can easily delete it from a central location and kill its distribution point. Google has launched a laptop, the Chromebook, that uses the same principle. Its operating system is essentially the browser, Chrome. It has a tiny hard drive, only meant to hold the data needed to run the browser and to back up some of the files you create with it. Most of the content resides online. In other words, the computer is limited to running a browser and the applications that live in the Google walled garden, or in the “cloud,” as people like to call it. There are very few “smarts” on the local machine (which also makes it relatively cheap).

On Facebook, creating new applications goes through a vetting process controlled by the company. Just as on the iPhone, apps that Facebook does not like can be easily killed. Surprisingly or not, Windows 10 comes with an app store that works quite similarly to the Apple App Store. Even Microsoft Word is sometimes delivered as an app.

Yet the walled garden model of communication and programming is still emerging, and it has powerful enemies. Scholars like Zittrain, who see the dangers of the existing system of communication, nevertheless oppose it. Why? They think that the potential risk of becoming slaves to software quasi-monopolies is greater than that of being actual victims of hacking and spamming attacks. (What do you make of this trade-off?)

Still, the danger implicit in this trade-off cannot be ignored. It was highlighted recently by the Conficker incident, featured in a very readable and suspenseful article published in the Atlantic Monthly by Mark Bowden, the author of the famous book (turned into a movie) Black Hawk Down. The article eloquently illustrates how most of our current communication technologies, not only the Internet but also the operating systems of our computers, especially Windows, are inherently insecure.

Sorin Adam Matei

Sorin Adam Matei - Professor of Communication at Purdue University - studies the relationship between information technology and social groups. He has published papers and articles in Journal of Communication, Communication Research, Information Society, and Foreign Policy. He is the author or co-editor of several books. The most recent is Structural Differentiation in Social Media. He also co-edited Ethical Reasoning in Big Data, Transparency in Social Media, and Roles, Trust, and Reputation in Social Media Knowledge Markets: Theory and Methods (Computational Social Sciences), all three the product of the NSF-funded KredibleNet project. Dr. Matei's teaching portfolio includes online interaction and online community analytics and development classes. His teaching makes use of a number of software platforms he has co-developed, such as Visible Effort. Dr. Matei is also known for his media work. He is a former BBC World Service journalist whose contributions have been published in Esquire and several leading Romanian newspapers. In Romania, he is known for his books Boierii Mintii (The Mind Boyars), Idolii forului (Idols of the Forum), and Idei de schimb (Spare Ideas).
