Net Neutrality: it’s a hot topic and buzzword phrase in the news right now. But how much do you really know about Net Neutrality? It’s a fascinating, important, and complex issue that deserves careful consideration. Luckily for those of us who live in the Bay Area (and could make it to Berkeley last night), we were treated to a talk on Net Neutrality by Richard Esguerra (staff activist) from the Electronic Frontier Foundation, an organization that works to protect people’s digital civil liberties. It was a great talk, sponsored by the San Francisco Bay Region Chapter of SLA, even with some rather crazy technical issues. So I thought I’d try to share the highlights with you.* (If you just want the bare-bones executive summary, skip to the end of the post.)
So without going back to the very beginning of the Internet and making us sit through hours of history lessons, Richard gave us “Internet Architecture Lite.” The most important concept is the “end-to-end principle,” which, in simplified terms, means that most of the control, processing, and changes to packets of information (the requests sent over the Internet for data, webpages, etc.) should occur only at the ends of the process. So if you request a website by typing the URL into your browser, no changes should be made to that request as it passes through the various nodes that route it to the server hosting the website. Control and processing should reside with your computer (one end) and with the server that is fulfilling the request (the other end). Thus the “end-to-end principle.” Net Neutrality could then be seen as the translation of the “end-to-end principle” into a law or policy requirement, as Richard explained later.
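To make the idea a little more concrete, here is a toy sketch in Python (not real networking code — the packet structure and node names are invented purely for illustration) of what “neutral” forwarding looks like: intermediate nodes pass a request along unchanged, and only the endpoint actually reads it.

```python
# Toy model of the end-to-end principle: intermediate nodes forward
# packets without touching the payload; only the endpoints process it.

def forward(packet, node_name):
    # A neutral intermediate node: it records that it handled the
    # packet, but the payload passes through completely unmodified.
    packet["hops"].append(node_name)
    return packet

def send_request(payload, route):
    # The client end builds the packet; each hop merely forwards it.
    packet = {"payload": payload, "hops": []}
    for node_name in route:
        packet = forward(packet, node_name)
    return packet  # the server end is the first place the payload is read

request = send_request("GET /index.html", ["router-a", "router-b", "router-c"])
print(request["payload"])  # the payload arrives exactly as it was sent
```

The point of the sketch is simply that the middle of the network is “dumb”: every hop touches routing bookkeeping, and none touches the content.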
But first we have to talk about a very important Supreme Court case, National Cable & Telecommunications Association v. Brand X Internet Services. (Trust me, this is important.) This ruling basically decided that cable companies were information services, not telecommunications services, and therefore not subject to the same regulations. While telecommunications services, such as AT&T, have to let competitors use their infrastructure at reasonable rates, information services do not. Brand X, an Internet Service Provider (ISP), wanted to rent infrastructure from Comcast in order to run Internet service over Comcast’s cable lines, the same way other ISPs use the phone companies’ telephone lines to provide DSL service. Because of this ruling, Comcast did not and does not have to let competitors use its infrastructure, which is why, if you want cable internet, you pretty much have only one choice of service provider.
After this ruling, the FCC issued a Broadband Policy Statement whose four clauses became part of the foundation of Net Neutrality. In order to preserve an open internet, consumers should have:
- access to the lawful internet content of their choice
- the ability to run applications and use services of their choice
- the ability to connect their choice of legal devices that do not harm the network
- the right to competition among network providers
All of this sounds good, but as Richard pointed out, there is a major problem with the FCC issuing a policy out of basically thin air: who ever gave the FCC the power to make and enforce such a policy? The story gets even more interesting when independent research by the EFF and the Associated Press showed in 2007 that, despite Comcast’s denials, it was actually throttling BitTorrent (it was interfering with BitTorrent transfers on its network by injecting forged “reset” packets that broke off the connections). This brings us full-circle back to the “end-to-end principle,” which Comcast wasn’t following, since it was filtering and blocking traffic from users who wanted to use BitTorrent to share files. Now, obviously ISPs need to have some ability to manage network traffic, so we get into a gray area of what counts as “reasonable” network management. The FCC ruled in 2008 that Comcast needed to stop blocking BitTorrent traffic, and Comcast challenged the ruling.
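By way of contrast, here is an equally toy sketch (again, the names and packet structure are invented for illustration — this is nothing like Comcast’s actual implementation) of what a non-neutral network looks like: a node in the middle of the route inspects traffic and silently drops what it doesn’t like, so control is no longer confined to the ends.

```python
# Toy model of non-neutral network management: a mid-route node inspects
# each packet's protocol label and silently drops disfavored traffic.

def filtering_node(packet):
    # Control exercised in the middle of the network, violating the
    # end-to-end principle: the node reads and acts on packet contents.
    if packet["protocol"] == "bittorrent":
        return None  # dropped in transit; it never reaches the far end
    return packet

def send_through(packet, nodes):
    for node in nodes:
        packet = node(packet)
        if packet is None:
            return None
    return packet

web = send_through({"protocol": "http", "payload": "GET /"}, [filtering_node])
p2p = send_through({"protocol": "bittorrent", "payload": "piece request"},
                   [filtering_node])
# web traffic arrives intact; the bittorrent request vanishes mid-route
```

The gray area Richard described is exactly about where to draw the line: some mid-route management (say, handling congestion) is clearly reasonable, while singling out a disfavored application is the kind of middle-of-the-network control the end-to-end principle is meant to rule out.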
Because of this challenge, the court ruled in April 2010 that the FCC cannot enforce broadband policy. This nullified the FCC’s Broadband Policy Statement, which the agency had just expanded in late 2009. That leaves us in a bit of a muddle, because there is no clear way forward and no one wants to see an internet that is tiered like the graphic shown below:
So why is all this history important? Because, as noted before, we are in a quandary over how to proceed. Currently there are four main options put forth as the way to Net Neutrality:
- Reclassify broadband as a telecommunication service so it falls under more regulation
- Partially classify broadband as a telecommunication service
- Genachowski’s Third Way: the FCC would have regulatory control over certain, select bits of broadband
- Congress should pass a Net Neutrality law (which would probably give regulatory authority to the FCC)
As you can see, there isn’t any clear path, and every path to Net Neutrality has potential problems. Congress moves slowly and can be swayed by special interest groups; giving the FCC more power might lead to “regulatory capture,” where the FCC is eventually steered by the very companies it is supposed to be regulating; and so on.
In short, Net Neutrality is a super-important, pressing issue, and implementing it is so much more complex than I thought it was before the talk. There are so many gray areas and so many issues surrounding free speech, civil liberties, copyright, fair use, creative works, and innovation that I really hadn’t considered. I think, if nothing else, a safe lesson to take away from last night’s awesome talk is that everyone should have a healthy amount of skepticism about any plan for how to implement and regulate Net Neutrality. Stay tuned for further developments, and check out the Net Neutrality section of the EFF’s Deeplinks blog.
Have a fantastic rest of your week. I’ll be blogging from Internet Librarian this coming week, so don’t be surprised to see many posts about conference talks and cool technology to use in the library.
*Any mistakes or inaccuracies in the history or technical aspects are mine and probably due to my hastily scribbled notes from last night and definitely not attributable to Richard of the EFF.