Netflix on bandwidth caps

Here's what Netflix said in a letter to their shareholders:

Wired ISPs have large fixed costs of building and maintaining their last mile network of residential cable and fiber. The ISPs’ costs, however, to deliver a marginal gigabyte, which is about an hour of viewing, from one of our regional interchange points over their last mile wired network to the consumer is less than a penny, and falling, so there is no reason that pay-per-gigabyte is economically necessary. Moreover, at $1 per gigabyte over wired networks, it would be grossly overpriced.

It should be noted that Netflix uses a content-distribution network (CDN), which more or less means they pay to cache their data at your ISP (or somewhere very close by) - so they already pay those caching costs, plus the cost of pushing data from their central servers out to the CDN. The quote only examines last-mile costs - from your ISP to you. Counting the rest would raise the total somewhat, but I doubt it would be immensely higher.

(HT: Ars Technica)


Metered internet was almost inevitable. The cost of maintaining the internet backbones so they could keep up with demand (before Netflix launched in Canada) was reportedly $50 billion per year - and that was just to maintain current service, not improve it. The surge in traffic from Netflix merely accelerated metering's arrival - it was not the cause of it.

Furthermore, I don't fully understand why people are so upset about paying for the extra bandwidth they use. I must admit that some companies were pretty shady, silently lowering their caps and then charging for overage, but what other commodity is sold at a fixed price no matter how much you use? The only example I can think of is local calling on a landline phone. Everything else - water, food, electricity, natural gas - is metered. Why shouldn't the internet be? We've gotten so used to "unlimited" packages - or to little or no enforcement of bandwidth "limits" - that we've suddenly become upset over what I would call normal rules being enforced.

Also, look at how latency has changed over the past couple of years. It has continued to increase because the companies cannot afford to upgrade the main internet backbones. Don't believe me? Try using YouTube during prime hours. The internet in North America (and beyond) is reaching saturation.

Let's also look at how much money ISPs take in. Assume there are 100,000,000 internet subscribers in North America, with an average internet bill of $50 per month. That's $5 billion a month, or $60 billion a year. Now, $50 billion of that needs to be re-invested just to MAINTAIN current latency - not improve it. That leaves $10 billion to cover all the ISPs' remaining costs - which most likely would not be enough, considering the size of these companies.
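The back-of-the-envelope arithmetic above can be sketched in a few lines of Python. All figures are the post's own assumptions (subscriber count, average bill, maintenance cost), not measured data:

```python
# Rough sketch of the post's revenue estimate. Every number here is an
# assumption taken from the post, not an actual industry figure.
subscribers = 100_000_000   # assumed North American internet subscribers
avg_bill = 50               # assumed average monthly bill, in dollars
maintenance = 50e9          # assumed annual backbone maintenance cost, $

monthly_revenue = subscribers * avg_bill   # $5 billion per month
annual_revenue = monthly_revenue * 12      # $60 billion per year
remainder = annual_revenue - maintenance   # $10 billion left for everything else

print(f"${monthly_revenue / 1e9:.0f}B/month, ${annual_revenue / 1e9:.0f}B/year, "
      f"${remainder / 1e9:.0f}B after maintenance")
```

If the $50 billion maintenance figure is right, only a sixth of the assumed revenue is left over - which is the post's point.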

We've had it pretty good in terms of internet access for the past few years. Now we're paying for it - literally and figuratively.

How much does that increasing traffic actually cost though?

For example, to quote the Globe and Mail:

While Internet traffic grew at a rate of around 50 per cent per year in the last decade, The University of Minnesota and other researchers have found that processing power, hard disk densities and transmission rates grew at rates closer to 60 per cent per year over the same period. In addition, the servers and routers and other electrical equipment that are the backbone of the Internet are much more energy efficient than they were ten years ago, which has dramatically reduced the cost of operations.

In simple terms, the bandwidth explosion is real, but it’s been more than offset by more powerful and more energy-efficient machines. So, we can reject the notion that increased usage is a significant rationale for huge Internet price increases and usage-based billing.
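The quote's core claim - 50% annual traffic growth against roughly 60% annual capability growth - compounds in the traffic's disfavour. A quick sketch of the implied per-unit cost over a decade (taking the quoted growth rates at face value):

```python
# If traffic grows 50%/yr but the capability of the hardware carrying it
# grows ~60%/yr (the rates quoted above), the relative cost of carrying
# a unit of traffic falls over time rather than rising.
traffic_growth = 1.50      # quoted annual traffic growth factor
capability_growth = 1.60   # quoted annual capability growth factor
years = 10

relative_cost = (traffic_growth / capability_growth) ** years
print(f"Relative per-unit cost after {years} years: {relative_cost:.2f}x")
```

At those rates, the per-unit cost roughly halves over the decade - consistent with the quote's conclusion that growth alone doesn't justify price increases.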

Where does the $50 billion figure come from? Are you accounting for the content providers that also have to pay for their internet connectivity? There's plenty of stuff that I can download at relatively high rates (> 1 megabyte/second) during peak hours.

Some sort of breakdown like local vs. long distance to account for the CDNs and third party ISPs seems like it might be reasonable.

Again, estimates put the cost of bandwidth somewhere in the neighborhood of 1-3 cents per gigabyte. Compare that to the $1-$2/GB that Shaw charges in overage fees.
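Putting those two ranges side by side gives the implied markup. Both ranges are the estimates from the discussion, not audited costs:

```python
# Implied markup on overage fees, using the figures from the discussion:
# an estimated 1-3 cents/GB cost versus Shaw's $1-$2/GB overage charge.
cost_low, cost_high = 0.01, 0.03   # estimated cost per GB, $
fee_low, fee_high = 1.00, 2.00     # quoted overage fee per GB, $

min_markup = fee_low / cost_high   # most charitable combination
max_markup = fee_high / cost_low   # least charitable combination
print(f"Implied markup: roughly {min_markup:.0f}x to {max_markup:.0f}x")
```

Even the charitable end of the range is a markup of more than thirty times the estimated cost.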

The $50 billion actually came from a CNN article about the wireless networks in the US. However, all you have to do is look at the latency times and the jitter increases to see that we are running out of bandwidth. Why else would companies throttle users who use the most traffic? We are running out of bandwidth. Plain and simple. The .COM boom (and later bust) left a bunch of dark fibre cable available. That's probably gone by now. Better routers will help. So there are two possible solutions: traffic shaping, to let "more important" traffic through - which is not allowed; or increasing the capacity of the core infrastructure.

Think of it like a bridge. You can do many things to a bridge to increase the amount of traffic that can go over it when it is nearing capacity. However, there comes a time when you just have to build a new bridge to see any marked improvement - and those cost $$$$$.

Furthermore, if you think $1-$2/GB is a lot, how about the $0.50/MB that cell phone companies charge?
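Converting that cellular rate to a per-gigabyte price makes the comparison direct (using binary megabytes, i.e. 1024 MB per GB):

```python
# The quoted cellular rate of $0.50 per megabyte, expressed per gigabyte.
per_mb = 0.50            # quoted price per megabyte, $
per_gb = per_mb * 1024   # 1 GB = 1024 MB (binary convention)
print(f"${per_gb:.0f} per GB")
```

That works out to $512 per gigabyte - two to three orders of magnitude above the wired overage fees being complained about.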

However, all you have to do is look at the latency times and the jitter increases to see that we are running out of bandwidth. Why else would companies throttle users who use the most traffic?

Bandwidth is an instantaneous measure - if congestion were really the issue, they might offer different rates for transfers at off-peak periods. They'd also be unlikely to offer the sorts of packages they do - e.g. I can get 100 Mbps with a not-much-larger bandwidth cap for a few extra dollars, and Shaw is rolling out gigabit in some neighbourhoods in the Calgary and Vancouver areas. If capacity were the problem, you'd expect those extra dollars to buy a bigger cap rather than a faster connection.

Making money seems to be the main reason for doing this - and given Shaw's secretive cap drops just after Netflix was launched, I'd also suggest anticompetitive behaviour.

The .COM boom (and later bust) left a bunch of dark fibre cable available. That's probably gone by now

Or not... you can scale up the traffic that can fit on a single fiber relatively easily. See, e.g., this:

Bell Labs, in their portion of the presentation, even put forward what they call Butters’ Law of Photonics, a formulation which deliberately parallels Moore’s Law. According to Butters’ Law, the cost of transmitting a bit over an optical network decreases by half every nine months. Further progress seems assured. In a recent experiment, Bell Labs crammed 1000 wavelengths or channels on one fiber. And they see no reason why they couldn’t go to 15,000 wavelengths per fiber. To put these figures in perspective, consider today’s systems, which carry about 100 wavelengths per fiber.
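Taking the quoted Butters' Law at face value - cost per transmitted bit halving every nine months - the implied cost decline compounds quickly. A sketch over five years:

```python
# Butters' Law as quoted: the cost of transmitting a bit over an optical
# network halves every nine months. This sketches the implied decline;
# the five-year horizon is an arbitrary illustration, not from the quote.
halving_months = 9
months = 5 * 12   # five years

relative_cost = 0.5 ** (months / halving_months)
print(f"After {months} months: {relative_cost:.4f}x the original cost per bit")
```

That's roughly a hundredfold cost reduction in five years - which is why the poster argues scaling fiber capacity is relatively easy.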

I actually don't mind suggestions that once you cross a threshold you get throttled back to some degree - i.e. at times of congestion your packets are more likely to be dropped - but cutting people off entirely seems a different matter.

That's actually similar to how Wind approaches things - per their Fair Usage Policy:

Our Fair Usage Policy is that if you exceed 5 gigabytes of data usage within a given billing cycle, we may slow your speed so that all WIND customers can better share the network and enjoy quality access to the Internet. If we elect to do so, we will slow your speeds from a maximum speed of 7.2 megabits per second to a maximum speed of 512 kilobits per second for downloads and 128 kilobits per second for uploads.

They really only seem to apply that during periods of congestion.
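To get a feel for how severe that throttle is, here is the download time for 1 GB at the two quoted speeds (using a decimal gigabyte, 8×10⁹ bits):

```python
# Effect of Wind's quoted throttle on a 1 GB download.
# Speeds are taken from the Fair Usage Policy quoted above;
# 1 GB is treated as 8e9 bits (decimal convention).
full_mbps = 7.2        # quoted maximum speed, megabits/s
throttled_kbps = 512   # quoted throttled download speed, kilobits/s
gb_bits = 8e9          # bits in one decimal gigabyte

full_secs = gb_bits / (full_mbps * 1e6)
throttled_secs = gb_bits / (throttled_kbps * 1e3)
print(f"{full_secs / 60:.0f} min at full speed vs "
      f"{throttled_secs / 3600:.1f} h when throttled")
```

Roughly 19 minutes versus well over 4 hours - slow enough to discourage heavy use, but not a cutoff.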

Or not... you can scale up the traffic that can fit on a single fiber relatively easily

Yes, that is definitely something that can (and probably should) be done. However, how much do you think it would cost to upgrade all the routers on Bell's network to do this? Furthermore, once they've invested their millions (or more) into this venture, they can't actually charge more money, because of competition - and because people complain when they're charged for extra usage.

In the end, it comes down to money. Bell made $2.8 billion last year. That's a lot of money, but if the company doesn't post large profits, its stock goes down, which causes all sorts of other problems.