Monday, August 4, 2008

What's a reasonable approach for managing broadband networks?



As we said a few weeks ago, the Federal Communications Commission's order last Friday in the Comcast-BitTorrent dispute should help ensure that today's broadband networks remain open platforms for the Internet. But more broadly, the recent attention on Comcast -- and on Time Warner's recently launched trial of "consumption-based billing" -- raises the question: what is a reasonable approach for broadband providers to manage their Internet traffic?

Network capacity (bits per second, or data rate) is a limiting factor in all communications networks. Users cannot send traffic faster than the capacity available to them, and when users' aggregate demand exceeds the available capacity of the network, network operators naturally seek to manage the traffic load. Even FiOS, Verizon's speedy fiber-based broadband service, divides up the available wavelengths to carry video and data applications, and wireless broadband networks have capacity constraints rooted in their own technical characteristics. The end result is the potential for traffic congestion, leading to service delays and even outages for consumers.

At least one proposal has surfaced that would charge users by the byte after a certain amount of data has been transmitted during a given period. This is a kind of volume cap, which I do not find to be a very useful practice. Given an arbitrary amount of time, one can transfer arbitrarily large amounts of information. Rather than a volume cap, I suggest the introduction of transmission rate caps, which would allow users to purchase access to the Internet at a given minimum data rate and be free to transfer data at at least up to that rate in any way they wish.
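The arithmetic behind this objection is easy to make concrete. A minimal sketch (the 10 Mbps rate and 30-day billing period are invented for illustration, not figures from the post): at any fixed transmission rate, total volume grows linearly with time, so no volume cap is "enough" in general.

```python
# Illustrative arithmetic: at a sustained rate, volume transferred is
# unbounded in time. The rate and billing period are assumptions.

def volume_gb(rate_mbps: float, seconds: float) -> float:
    """Decimal gigabytes transferred at a sustained rate over a period."""
    bits = rate_mbps * 1e6 * seconds
    return bits / 8 / 1e9

MONTH = 30 * 24 * 3600  # seconds in a 30-day billing period

# Even a modest 10 Mbps link, fully used, moves 3,240 GB in a month --
# far beyond any per-month volume cap an ISP is likely to sell.
print(volume_gb(10, MONTH))
```

A rate cap sidesteps this entirely: it bounds the instantaneous demand a user can place on the network without pretending that some monthly byte total is the "right" amount of Internet use.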

Others have suggested that users should be able to contract for a "floor" capacity, with the possibility of receiving more capacity when it is available. One problem with charging for total bytes transferred (in either direction) is that users have no reasonable way to estimate their monthly costs: clicking on a link can land you on an unexpected streaming site or trigger a major file transfer.
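The unpredictability point can be illustrated with a toy billing model (the per-gigabyte price and traffic volumes below are invented for illustration, not real ISP figures):

```python
# Toy metered-billing model; the $/GB price and per-month volumes are
# assumptions made up for this sketch, not actual ISP pricing.

PRICE_PER_GB = 1.00  # assumed metered price, dollars per gigabyte

def bill(gb_transferred: float) -> float:
    return PRICE_PER_GB * gb_transferred

# A month of light browsing vs. the same month after one click led to
# an unexpected video stream or large file transfer.
light_month = bill(2.0)          # ~2 GB of pages and email
surprise_month = bill(2.0 + 40)  # plus ~40 GB the user never planned on

print(light_month, surprise_month)
```

The same user, with no change in deliberate behavior, sees an order-of-magnitude swing in the bill -- which is exactly why per-byte charges are hard for consumers to budget for.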

So the real question for today's broadband networks is not whether they need to be managed, but rather how.

In my view, Internet traffic should be managed according to the requirements of applications and protocols, not the identity of their providers. For example, a broadband provider should be able to prioritize packets that call for low latency (the period of time it takes for a packet to travel from Point A to Point B), but such prioritization should be applied across the board to all low latency traffic, not just particular application providers. Broadband carriers should not be in the business of picking winners and losers in the market under the rubric of network management.
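One way to read this principle in code -- a sketch with invented class names and priorities, not any carrier's actual scheduler -- is that a packet's place in the queue depends only on the traffic class it declares, never on which provider sent it:

```python
import heapq

# Sketch of class-based prioritization: packets are ranked only by their
# declared traffic class (low-latency voice ahead of bulk transfer),
# never by application provider. Classes and priorities are invented.

CLASS_PRIORITY = {"voice": 0, "interactive": 1, "bulk": 2}  # 0 = first out

class Scheduler:
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker: FIFO within a traffic class

    def enqueue(self, packet: dict) -> None:
        prio = CLASS_PRIORITY[packet["traffic_class"]]
        heapq.heappush(self._heap, (prio, self._seq, packet))
        self._seq += 1

    def dequeue(self) -> dict:
        return heapq.heappop(self._heap)[2]

sched = Scheduler()
sched.enqueue({"provider": "BigVoiceCo", "traffic_class": "voice"})
sched.enqueue({"provider": "TinyStartup", "traffic_class": "voice"})
sched.enqueue({"provider": "BigVoiceCo", "traffic_class": "bulk"})

# Both voice packets leave before the bulk packet, regardless of provider.
first, second, third = sched.dequeue(), sched.dequeue(), sched.dequeue()
```

Note that the scheduler never inspects the `provider` field -- that is the whole point: "picking winners and losers" would mean keying on who sent the packet rather than what kind of traffic it is.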

Network management also should be narrowly tailored, with bandwidth constraints aimed essentially at times of actual congestion. In the middle of the night, available capacity may be entirely sufficient, and moderating users' traffic may be unnecessary. Some have suggested metered pricing -- charging by the megabyte rather than a flat fee -- as a solution to congestion, with prices adjusted downward during non-peak periods. Depending on how they are devised and implemented, these pricing plans could end up creating the wrong incentive: pushing consumers to scale back their use of Internet applications over broadband networks.
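The "narrowly tailored" idea can be sketched as a rule that engages only when the link is actually congested (the 90% utilization threshold below is an invented example, not a real operator policy):

```python
# Sketch: apply traffic management only during actual congestion.
# The utilization threshold is an assumption made up for illustration.

CONGESTION_THRESHOLD = 0.9  # fraction of link capacity in use

def should_manage(current_load_mbps: float, capacity_mbps: float) -> bool:
    """Engage management only when aggregate demand nears link capacity."""
    return current_load_mbps / capacity_mbps >= CONGESTION_THRESHOLD

# Middle of the night: plenty of headroom, leave users' traffic alone.
assert not should_manage(current_load_mbps=120, capacity_mbps=1000)
# Evening peak: the link is nearly full, so management kicks in.
assert should_manage(current_load_mbps=950, capacity_mbps=1000)
```

A blanket, always-on constraint fails this test: it moderates traffic even when there is no congestion to relieve.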

Over the past few months, I have been talking with engineers at Comcast about some of these network management issues. I've been pleased so far with the tone and substance of these conversations, which have helped me to better understand the underlying motivation and rationale for the network management decisions facing Comcast, and the unique characteristics of cable broadband architecture. And as we said a few weeks ago, their commitment to a protocol-agnostic approach to network management is a step in the right direction.

8 comments:

Chris Snyder said...

"I suggest the introduction of transmission rate caps, which would allow users to purchase access to the Internet at a given minimum data rate and be free to transfer data at at least up to that rate in any way they wish."

This sounds suspiciously like every Internet service plan I have ever purchased. Telco broadband has always been sold by the number of bits per second.

It also seems like a perfectly reasonable measure. Why aren't the engineers in charge?

Andrew said...

I think the hope is that we have an open internet as well as a free one. I also believe that having it be open is more important than it being entirely free and not capped. As for a reasonable approach, it is hard to think of anything that will be acceptable considering we have had unlimited use for so long.

I believe that we do need to continue to fight for Net Neutrality. While we did get a big victory on Friday, I think that is only the beginning and we need Telcos to start taking the Net neutrality pledge. Check out this campaign if you're interested: http://www.thepoint.com/campaigns/first-telecom-to-take-the-net-neutrality-pledge-wins-our-business

Adam said...

Vint: I believe you made a mistake in your explanation of "transmission rate caps", defining it as "a given minimum data rate" when I believe you meant to write a maximum data rate. From the user’s perspective, I agree that a rate cap (AKA bandwidth limit) is easier to deal with than being charged by the byte (or megabyte). But, as Chris noted, that’s exactly what most ISPs are doing today.

You don't believe volume caps are the way to go and you evidently don't think rate caps are sufficient. So you suggest "prioritization should be applied across the board to all low latency traffic." Are you referring to the DiffServ packet prioritization functionality in IPv4 and IPv6 or something similar?

If packet prioritization were completely agnostic, applications would specify a packet's priority, not ISPs. But that assumes application developers will prioritize packets appropriately. There's nothing to stop one peer-to-peer software developer from marking all of the packets their software sends as high-priority in hopes that it will then perform faster than competitors' software, giving them a competitive advantage. It's a 'tragedy of the commons' problem: since there's little to no cost to the individual for "cutting in line" by assigning their packets the highest priority, most application developers will assign their packets a high priority.
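This 'tragedy of the commons' can be demonstrated in a few lines. A toy model (not a claim about any real DiffServ deployment; the flow names are invented): once every sender marks its packets highest-priority, the priority field carries no information and service order collapses back to plain arrival order.

```python
# Toy model of self-assigned priority (lower number = served first).
# When every sender claims priority 0, the "priority" field buys
# nobody anything: the queue degenerates to plain FIFO.

def service_order(packets):
    ranked = sorted((p["priority"], i, p["name"])
                    for i, p in enumerate(packets))
    return [name for _, _, name in ranked]

honest = [
    {"name": "voip", "priority": 0},
    {"name": "backup", "priority": 2},
    {"name": "game", "priority": 0},
]
# Every developer "cuts in line" and marks everything priority 0.
greedy = [{"name": p["name"], "priority": 0} for p in honest]

print(service_order(honest))  # voip and game jump ahead of backup
print(service_order(greedy))  # pure arrival order: nobody gains
```

With honest marking, latency-sensitive flows genuinely benefit; with universal inflation, the outcome is identical to having no prioritization at all -- which is why some enforcement or pricing of priority seems unavoidable.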

Do you think ICANN should formulate guidelines for packet prioritization? If not ICANN, what about the Internet Society, IETF, or some other entity? Adam Thierer has asked this question over on The Technology Liberation Front.

bj said...

The problem with the cap is that the ISPs will exempt their own Pay Per View movies and other content from the cap. This, in effect, will block competition as surely as packet sniffing and blocking bittorrent does. The regulatory answer to this conundrum is to open up that last mile bottleneck, and smash the current regulatory monopolies. It has worked in other countries. If the law is written better this time around, in a way that the courts can't overturn, then it will force the providers into building out their (artificially limited) networks, the ones they've already gotten the taxpayer dough to expand and never did.

Jason said...

I agree with the idea of minimum transmission rates that vary depending on time of day. Allowing consumers to decide and pay for the quality of service they would like also seems a reasonable approach. I know that PlusNet in the UK has adopted this approach (www.plus.net). Has anyone had any experience with them?

storypoint said...

Usage-based billing and bandwidth management schemes expressed in terms of gigabytes/month or min/max transmission rates will fail because consumers don't understand them and don't like them (and participants in this blog are probably not representative of typical consumers).

The only solution is to abandon the notion that a single undifferentiated pipe can carry high-definition video, voice conversations, email, and HTTP traffic with equal efficiency and in a manner that satisfies consumer quality expectations. It can’t, and even the many fathers of the internet would agree that it wasn’t designed to.

A better solution is to offer additional pipes to consumers -- video pipes, voice pipes, gaming pipes, etc. If they wanted, consumers could continue to avail themselves of a single, utterly neutral high-speed internet (HSI) access pipe. Alternatively, they could purchase additional pipes with service quality engineered for the traffic type in question. All would be based on IP and all would travel over the broadband access facility, but all would not be identical in terms of cost or quality.

The beauty of this approach is that all parties are satisfied. Neutrality proponents get what they want—all packets within the HSI band treated equally. Consumers can buy what they want. Providers can generate the new revenue streams necessary to further invest in broadband. And, best of all, Government can continue to refrain from getting involved.

Ad Bresser said...

".. For example, a broadband provider should be able to prioritize packets that call for low latency (the period of time it takes for a packet to travel from Point A to Point B), but such prioritization should be applied across the board to all low latency traffic, not just particular application providers. Broadband carriers should not be in the business of picking winners and losers in the market under the rubric of network management."

During normal operations this should indeed be the case.
There should also be an option to give some application providers a higher priority in the event of severe network problems, so that those applications retain better quality of service during network failures.
