For the past year, I’ve been working with colleagues at the Berkman Centre on a study of distributed denial of service (DDoS) attacks against independent media and human rights sites. The resulting report will be out shortly, but one of its main conclusions is that independent media sites are not capable of defending themselves against large, network-based DDoS attacks.
There are many things an independent site can do to protect itself against smaller DDoS attacks that target specific application vulnerabilities (including simply serving static content), but the problem with a large, network-based attack is that it will flood the link between the targeted site and the rest of the internet, usually causing the hosting ISP to take the targeted site down entirely to protect the rest of its network.
Defending against these large network attacks requires massive amounts of bandwidth, specific and deep technical experience, and often connections to the people running the networks from which the attacks originate. Only a couple of dozen organisations (ISPs, hypergiant websites, and content distribution networks) at the core of the internet have sufficient bandwidth, technical ability and community connections to fight off the biggest of these attacks.
Paying for services from those organisations is very expensive, though, starting at thousands of dollars per month before bandwidth costs, and often going much, much higher. An alternative is to use one of a handful of hosting services, such as Blogger, that offer a high level of DDoS protection at no financial cost. One of the recommendations we make in our report is that independent media sites that think they are likely to be attacked, and want to be able to defend themselves, either find the resources to pay for a DDoS protection service or accept the compromises of hosting on a service like Blogger in return for the free DDoS protection.
We make this recommendation with a great deal of caution, however, because moving independent media sites to these core network actors trades more freedom from DDoS attacks for more control by one of these large companies. It’s great to be able to withstand a 10Gbps DDoS attack on YouTube, but it’s not so great for YouTube to take down your video at its sole discretion for violation of its terms of service.
In general, these core companies have struggled in this genuinely difficult role. How is YouTube supposed to judge what to do when it receives complaints about a violent video in Arabic posted from Egypt? Do videos of police brutality qualify as the “graphic or gratuitous violence” that YouTube disallows in its terms of service?
So, with this context, I’ve been watching the WikiLeaks attack with great interest. WikiLeaks has been suffering a pretty big network attack (WikiLeaks claims about 10Gbps, which is big enough to take down all but a couple of dozen or fewer ISPs in the world; Arbor Networks claims about 2-4Gbps, which is still big enough to cause the vast majority of ISPs in the world major disruption). The attack successfully took the site offline at its main hosting ISP. WikiLeaks’ textbook response was to move to Amazon’s web services, one of those core internet services capable of defending against big network attacks.
The move seemed to work for a couple of days, but then Amazon exercised its control, shutting the site down. Joe Lieberman claimed responsibility for Amazon’s decision to take the site down. But Amazon responded with a message claiming that it made the decision purely on the basis of its terms of service. The core of its argument is that WikiLeaks was hosting content that it did not own and that it was putting human rights workers at risk:
“for example, our terms of service state that ‘you represent and warrant that you own or otherwise control all of the rights to the content… that use of the content you supply does not violate this policy and will not cause injury to any person or entity.’ It’s clear that WikiLeaks doesn’t own or otherwise control all the rights to this classified content. Further, it is not credible that the extraordinary volume of 250,000 classified documents that WikiLeaks is publishing could have been carefully redacted in such a way as to ensure that they weren’t putting innocent people in jeopardy. Human rights organisations have in fact written to WikiLeaks asking them to exercise caution and not release the names or identities of human rights defenders who might be persecuted by their governments.”
If this is really how they made their decision, this is a worse process than merely succumbing to the political pressure of the US government. At least Lieberman is an elected official and therefore, to some degree, beholden to his constituents. Amazon is, instead, arguing dismissively that it made the decision based on its own interpretation of its terms of service. Without getting into the merits of either side, the questions of whether WikiLeaks has the rights to the content, and especially of what level of risk of harm merits censorship, are very, very difficult and should clearly be decided by some sort of deliberative jurisprudence, rather than arbitrarily by a private actor.
This need for careful, structured and public deliberation on these questions is obviously balanced by Amazon’s right to decide what to do with its own property. But as a society, we have reached a place where the only way to protect some sorts of speech on the internet is through one of only a couple of dozen core internet organisations.
Totally ceding decisions about control of politically sensitive speech to that handful of actors, without any legal process or oversight, is a bad idea (worse even than ceding decisions to grandstanding politicians). The problem is that an even worse option is to cede these decisions about what content gets to stay up to the owners of the botnets capable of executing large DDOS attacks.
Hal Roberts is a researcher at the Berkman Centre for Internet and Society at Harvard University.