What Title II Means for TCP

How long until the internet's underlying technology gets caught up in regulation? It may already be starting. The FCC has just voted to reclassify broadband internet access under Title II of the Communications Act of 1934, treating internet service providers as public utilities and opening the door to greater government involvement. Though network neutrality tensions are much more about economics than technology, the problems tackled by the internet's architects are similar to those addressed by regulation today. Luckily, we can expect a reinforcement of the status quo in the short term, with support for the best of today's technology and standards. Looking further ahead, however, there are threats to innovation.

At the crux of debates over paid prioritization, peering, and blocking is the question of who gets to use limited network resources. These questions are old ones; some predate the commercial internet, and some form the basis for the technologies that define it: TCP and IP. IP (Internet Protocol) describes the basis for packet-based communication, in which data moves in little chunks (each typically the length of two or three paragraphs), reaching its destination by hopping across a global network. TCP (Transmission Control Protocol) regulates data transfer rates, including how network users share bandwidth when congestion arises.
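To make that division of labor concrete, here is a toy sketch in Python of the packetization idea: a stream of application data carved into chunks small enough to fit a standard 1,500-byte Ethernet frame, leaving roughly 1,460 bytes of payload after the IP and TCP headers. The function name and sizes are illustrative only, not drawn from any real TCP/IP implementation.

```python
# A toy illustration of packetization (not taken from any real TCP/IP stack):
# application data is carved into chunks of at most 1,460 bytes, the payload
# that fits in a typical 1,500-byte Ethernet frame after the IP and TCP
# headers, or roughly the length of a few paragraphs of text.

def packetize(data: bytes, payload_size: int = 1460) -> list[bytes]:
    """Split a byte stream into packet-sized chunks for independent delivery."""
    return [data[i:i + payload_size] for i in range(0, len(data), payload_size)]

if __name__ == "__main__":
    message = b"x" * 10_000  # 10 KB of application data
    packets = packetize(message)
    print(f"{len(packets)} packets, first one {len(packets[0])} bytes")
```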

The Transmission Control Protocol and the Internet Protocol are the unifying abstractions that allow diverse applications to be layered on top of diverse communication media. Together, they form the “narrow waist” in the hourglass model of the internet's architecture, managing the chaos of multi-party communications and providing universal global interoperability.

In the mid-1980s, before TCP achieved its mature form, the proto-internet suffered from a series of failures known as “congestion collapse,” in which useful network throughput dropped nearly 1,000-fold (see the paper by Van Jacobson and Michael Karels). As in a crowded room full of shouting people, an over-saturated link may deliver very little coherent information unless care is taken to coordinate access. TCP specifies a back-off mechanism whereby all networked computers collectively slow down to mitigate congestion.
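That back-off behavior can be illustrated with a toy model of the additive-increase, multiplicative-decrease rule at the heart of TCP congestion control: grow the sending window slowly while things are going well, and cut it sharply when congestion appears. The numbers and the simplistic loss model below are assumptions for illustration, not the actual algorithm in any operating system.

```python
# Toy model of TCP-style congestion control: additive increase,
# multiplicative decrease (AIMD). The fixed capacity threshold stands in
# for real congestion signals such as packet loss.

def simulate_aimd(rounds: int, capacity: float) -> list[float]:
    """Track a congestion window (in packets) across network round trips."""
    cwnd = 1.0
    history = []
    for _ in range(rounds):
        if cwnd > capacity:
            cwnd = max(1.0, cwnd / 2)  # congestion detected: back off by half
        else:
            cwnd += 1.0                # all clear: probe for a bit more bandwidth
        history.append(cwnd)
    return history

if __name__ == "__main__":
    # On a hypothetical path that can carry 50 packets per round trip, the
    # window ramps up, overshoots, halves, and repeats, producing the familiar
    # sawtooth that keeps a shared link busy without drowning it.
    print(simulate_aimd(rounds=200, capacity=50)[-12:])
```

When every sender follows the same rule, competing flows converge toward a roughly equal share of a bottleneck link, which is how TCP turns a question of fairness into an engineering mechanism.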

Perhaps the most amazing thing about the internet is that it works at all. Relying upon the “end-to-end” principle, TCP and related technologies put the sophisticated mechanisms at the endpoints of the network, in the hosts, while keeping the central infrastructure relatively simple.

A priori, this appears a clever approach, yet it is not at all obvious that it should work: that a global network can exist without a central administrator, or that logic at the endpoints can manage congestion in the middle of the network. Throughout the multi-million-fold increase in internet traffic over the past two decades, TCP has maintained its starring role, calming traffic jams wherever and whenever they occur. Network operators deserve great credit for stable platforms and ever-expanding connectivity and capacity, and network equipment makers for packet-pushing innovations, but the internet's glue has always been software run at the edges, and its identity lies in the end-to-end principle.

Looking ahead to a more regulated internet, the FCC has made clear its support for protecting the network as we know it: for access on equal footing for all users and applications, and for preventing operators from erecting toll gates or otherwise abusing market power. Much of this comes back to questions of fairness, just the sort of questions that the internet's original congestion protocols sought to address. It would be natural for the FCC to reach for TCP in pursuit of these aims, today or in the future, perhaps referencing it within the framework of Title II's rule-making powers.

One of the key points of contention between network operators and the FCC has been the need to maintain exemptions for “network management policies” that allow operators to selectively block, or otherwise limit, traffic judged to degrade or damage the experience of other customers. Viruses and spam are examples of abusive traffic that ISPs sometimes need to block, but the most famous example is perhaps Comcast's throttling of BitTorrent and similar peer-to-peer traffic, a practice the FCC ruled against in 2008. In that case, the ISP repurposed a feature of TCP, surreptitiously injecting forged reset packets designed to interrupt selected transfers.

While this sort of interference is unlikely to be repeated (the integrity of TCP flows will be protected), we may see limits on the number of TCP flows permitted, on modifications of TCP that back off more slowly under congested conditions, or on non-TCP communications that use a disproportionate share of limited bandwidth. It is conceivable that, lacking restraint or perhaps under pressure from ISPs, the FCC will eventually promote rules limiting how connected devices can communicate on the internet. That may be a distant prospect, but the possibility is dangerous enough to warrant attention today.

The internet has matured a great deal, but it continues to evolve (see, e.g., Geoff Huston or Vint Cerf), and future demands for innovation are as strong as ever. Today's approach and protocols will support continued scaling, yet some important improvements to latency, reliability, and flexibility will likely require new ideas. For example, one recent development is Multipath TCP (RFC 6824), which allows a single flow to spread across multiple network paths, useful when mobile connectivity is intermittent. Another challenge remains in providing high-quality synchronous voice and video, just the sort of application that opponents of network neutrality have promoted to illustrate the need for paid prioritization. Demand for such services has not disappeared, and as members of the internet's technical standards body, the IETF (Internet Engineering Task Force), consider future solutions, they will do so with regulatory considerations in the background.
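For a sense of what adopting such an extension looks like in practice, here is a minimal sketch of an application opting in to Multipath TCP on a Linux kernel with MPTCP support (5.6 or later). The numeric protocol constant and the fallback behavior are platform assumptions for illustration, not something specified by RFC 6824 itself.

```python
# A sketch of requesting a Multipath TCP socket on Linux (kernel 5.6 or
# later). IPPROTO_MPTCP is the Linux protocol constant (262); the fallback
# below is an assumption about how an application might degrade gracefully,
# not behavior mandated by RFC 6824.
import socket

IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def connect(host: str, port: int) -> socket.socket:
    try:
        # Ask the kernel for a multipath-capable socket; if the peer also
        # speaks MPTCP, the flow can later spread across several paths
        # (say, Wi-Fi and cellular) without the application noticing.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    except OSError:
        # Kernel without MPTCP support: fall back to ordinary TCP.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((host, port))
    return sock
```

If the peer does not support the extension, the handshake simply falls back to ordinary TCP, which is part of what makes this kind of incremental evolution deployable at the edges.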

Regardless of FCC rulemaking, the outcomes of inevitable legal challenges, or the prospect of congressional action on net neutrality, momentum suggests that the coming years will bring more government involvement and oversight in internet matters. We can expect this to reinforce the foundations, which seems fine, yet it remains imperative that we leave wide open the door to experimentation, to technical creativity, and to a broad scope of innovation. If we do, and if regulators stay true to the internet's end-to-end design philosophy, then we can be confident that we will see solutions to today's challenges, and to many that we have yet to imagine.