CHAPTER ONE

 

The P2P phenomenon

In 1998 a college dropout called Shawn Fanning wrote a little application called Napster, and the way content was delivered to consumers changed forever. Those few lines of code launched owners of the copyrights in music, movies, books and games into a death battle to protect traditional revenue streams and preserve their exclusive right to new ones. In previous decades they’d felt threatened by plenty of other technologies, including the phonograph, radio, photocopier and VCR, but they had never before faced such a Hydraean opponent. In the next decade, they expensively and lengthily litigated three major actions against P2P providers in US courts, and then a fourth action in Australia. In all that time not a single P2P provider ultimately emerged victorious. Sooner or later, every single one was held liable for the copyright infringements of its users.

Against any one of those predecessor technologies, that kind of emphatic victory would surely have brought about compromise or obliteration. But P2P file sharing software proved different. Unfazed by the overwhelming legal successes of rights holders, software developers continued creating new programs that facilitated file sharing between individual users. By 2007 there were more individual P2P applications available than there had ever been before. The average number of users sharing files on P2P file sharing networks at any one time was nudging 10 million,[1] and it was estimated P2P traffic had grown to comprise up to 90 percent of all global internet traffic.[2] At that point, rights holders tacitly admitted defeat.

Abandoning their long-held strategy of suing key P2P software providers, they closed the chapter on P2P litigation and diverted enforcement resources to other areas, particularly global efforts to persuade or compel internet service providers to police infringing users. This book tells the story of that decade-long struggle between rights holders and P2P software providers, tracing the development of the fledgling technologies, the attempts to crush them through litigation and legislation, and the remarkable ways in which they evolved as their programmers sought ever-more ingenious means to remain one step ahead of the law. In telling the complete legal and technological story of this fascinating era, the work focuses on answering the question that has so long baffled beleaguered rights holders – why is it that, despite being ultimately successful in holding individual P2P software providers liable for their users’ infringement, their litigation strategy has failed to bring about any meaningful reduction in the amount of P2P development and infringement?

Under the P2P model, all or most of the infrastructure necessary to distribute content – together with the content itself – is supplied by the participating individuals. It is this fact that is at the crux of rights holders’ objections to P2P file sharing technologies. Very often it is their content that is being made available to potentially millions of individual users, without license, and without the payment of any royalty. In the vernacular of P2P providers, this is known as file sharing. The owners of that content would prefer to call it stealing.

At times, the US music industry has taken direct enforcement action against some of these individuals, hoping that the astronomical statutory damages available under US law, ranging between $200 and $150 000 per infringed work, might deter future infringers.[3] From 2003 to 2007, members of the industry “filed, settled, or threatened” lawsuits against more than 20 000 individuals.[4] Illustrating the scale of the campaign, just 2084 civil copyright suits were instituted in total across the whole of the US in the year before the Recording Industry Association of America’s (“RIAA”) campaign commenced.[5] However, by any measure, it was not a success. Missteps by the RIAA helped turn the campaign into a public relations nightmare. (One memorable case involved a Mr Larry Scantlebury, war veteran and grandfather of three, who passed away during the course of the litigation against him. The plaintiffs sought to continue the suit post-mortem against his children – albeit after a 60-day stay to allow them “time to grieve”.[6]) There was relatively little support for the suits, and only a tiny percentage of the actions made it to trial. And even those few that resulted in the sought-after awards of statutory damages can’t be described as untrammeled successes. The most notorious involved a Minnesota single mother of four, Jammie Thomas, who was sued for sharing 24 songs via the Kazaa P2P file sharing program. Massive publicity followed the farcical situation as statutory damages of over $222 000 US were awarded at her first trial, raised to $1.92 million by the jury at a second, slashed to $54 000 by the trial judge’s exercise of remittitur (being the maximum that he considered not to be “monstrous and shocking”[7]), and then raised again to $1.5 million by a third jury.[8] In a second case, undergraduate student Joel Tenenbaum was sued for infringing the copyright in 30 songs, and the jury awarded $675 000 in statutory damages.[9] The District Court judge subsequently found this to be “unconstitutionally excessive”, and substituted an award of $67 500 in total.[10] As of early 2011, both matters were still being appealed.

The publicity over these and other cases triggered public questioning of the industry’s motives and business model, and escalated the growing discontent of its customer base. The loss of goodwill brought about by the direct litigation campaign might have been acceptable collateral damage had it been effective, but it did not bring about any reduction in the amount of file sharing. Indeed, despite the unprecedented number of lawsuits initiated or threatened, P2P infringement actually appeared to increase over the relevant period.[11] In late 2008 the music industry abruptly announced an abandonment of its mass litigation strategy against end users (although unfortunately for Thomas and Tenenbaum, no abandonment of existing suits).[12]

The failure of the direct litigation campaign was predictable. It has long been recognized that liability imposed directly upon wrongdoers will sometimes be ineffective.[13] Professor Reinier Kraakman argues that this will be the case where “‘too many’ wrongdoers remain unresponsive to the range of practicable legal penalties.”[14] That was precisely the case for P2P file sharers: the number of participating infringers was so high, the cost of pursuing them so great, and the chances of their being apprehended so remote, that the threat of direct liability – even with the possibility of astronomical penalties – left individual infringers largely unmoved.[15] Where direct liability will be predictably ineffective, the “standard legal response” is to seek a remedy by targeting the intermediaries or “gatekeepers” responsible for committing or enabling large-scale infringement.[16] As Professor Tim Wu has pointed out, until recently, copyright law was “entirely dependent on gatekeeper enforcement”:[17]

 

. . . [C]opyright law achieved compliance through the imposition of liability on a limited number of intermediaries – those capable of copying and distributing works on a mass scale. The gatekeepers were book publishers at first; later gatekeepers included record manufacturers, film studios, and others who produced works on a mass scale. Their role resembled that of doctors with respect to prescription drugs – they prevented evasion of the law by blocking the opportunity to buy an infringing product in the first place.[18]

Traditionally, rights holders had considerable success in using legal doctrines based on these principles of gatekeeper enforcement to shut down activities that facilitated copyright infringement, whether they were swap meets whose proprietors tacitly permitted vendors to sell infringing records, dance halls whose operators didn’t secure licenses allowing visiting bands to perform copyrighted music, or advertising agencies who created campaigns for purveyors of “suspiciously” cheap records.[19] Such enforcement efforts were also successful in deterring many later market entrants from engaging in the kinds of conduct that had previously resulted in liability, and thus further limiting eventual third party infringement. When they commenced their 10-year struggle to apply the same principles to P2P software providers, rights holders undoubtedly expected to achieve the same outcome.

A unique vulnerability to anti-regulatory code

To start to understand why those lengthy, expensive and ultimately successful efforts to shut down individual P2P file sharing technologies had little or no impact on the current availability of file sharing software, it is necessary to understand something about the unique properties of software code. For some time now it has been recognized that code can have regulatory effects – or, as Professor Lawrence Lessig famously put it, that “code is law”.[20] As he explains, “[t]he software and hardware that make cyberspace what it is constitute a set of constraints on how you can behave”.[21] For example, software code may regulate behavior by imposing a password requirement on users seeking to gain access to a particular service.[22] Historically, rights holders have used a variety of code-based measures as part of their efforts to promote compliance amongst end-users, with the most notable example being Sony’s disastrous rootkit experiment.[23] In the P2P file sharing context, however, the idea that code regulates is less significant than the separate but related idea that code can be anti-regulatory in effect. Wu explains that “the reason [why] code matters for law at all is its capability to define behavior on a mass scale. This capability can mean constraints on behavior, in which case code regulates, but it can also mean shaping behavior into legally advantageous forms.”[24] Wu analogizes such anti-regulatory programmers to tax lawyers. “[They look] for loopholes or ambiguities in the operation of law (or, sometimes, ethics). More precisely, [they look] for places where the stated goals of the law are different than its self-defined or practical limits. The designer then redesigns behavior to exploit the legal weakness.”[25]
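To make that idea concrete, the sketch below imagines a hypothetical service whose access rules live entirely in its code. The service, user names and password are invented for illustration; the point is simply that the constraint is enforced by the software itself rather than by any legal rule.

```python
# A minimal sketch of "code as regulator": a hypothetical service that
# refuses access unless the user presents recognized credentials. The
# constraint is enforced by the software itself, not by any legal rule.

REGISTERED_USERS = {"alice": "s3cret"}  # hypothetical credential store


def request_access(username: str, password: str) -> bool:
    """Grant access only if the supplied credentials match the store."""
    return REGISTERED_USERS.get(username) == password


print(request_access("alice", "s3cret"))   # True: the code lets this user in
print(request_access("mallory", "guess"))  # False: the code simply says no
```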

As the following chapters will demonstrate, post-Napster P2P developers engaged in precisely this kind of behavior, routinely seeking to code their software in ways that sidestepped the limits of the existing law while nonetheless still facilitating vast amounts of infringement. This book explores the great lengths they went to in their efforts to fall outside the strict letter of existing secondary liability formulations, including by coding their software to utilize encryption, to eliminate liability-attracting centralization or to facilitate copying in unanticipated new ways. Some of these strategies enjoyed a remarkable degree of success – for example, Chapters 3 and 4 will demonstrate that those behind the Grokster and Morpheus file sharing applications were so successful in coding their way out of liability that the US Supreme Court had to create a new legal doctrine to defeat them. Such P2P file sharing technologies highlighted for the first time the copyright law’s peculiar vulnerability to attack by anti-regulatory code. However, the reasons for that vulnerability remain largely unexplored.

The best explanation to date comes from the groundbreaking article “When Code Isn’t Law”, in which Wu identifies two reasons for this susceptibility. The first is the law’s longstanding reliance on gatekeeper enforcement mechanisms, which was introduced above. Gatekeeper enforcement schemes are premised on the idea that relatively few people are capable of widespread copying and distribution.[26] Thus, as Wu explains, they “have an obvious weakness: They depend on a specialized good or service remaining specialized.”[27] P2P file sharing technologies subvert that assumption by placing the ability to efficiently and cheaply distribute books, movies, music and other content in the hands of individual consumers. The second is the dearth of normative support for the law among individual users. Wu’s reasoning on this point was based on empirical studies that suggested individual end users had a widely held belief that copying copyrighted material for a friend was acceptable, whereas selling it on a commercial basis was not.[28] Wu argues that P2P file sharing applications “brilliantly” exploit this distinction between commercial and non-commercial copying:

 

P2P clients create no sensation or impression of stealing . . . Instead, the user is invited to a “community” of peers who exchange song files. A user, importantly, has no sense that she is “selling” copyrighted materials. The design therefore exploits the distinction between the acceptance of non-commercial copying and the non-acceptance of commercial copying. While the economic consequences of peer filesharing could be large, the superficial absence of commercial exchange makes filesharing more acceptable under the norms of home copying.[29]

Thus, Wu argued, by eliminating gatekeepers and by exploiting the fact that many individuals don’t have any ethical problem with “sharing” content with others online, P2P software providers have sometimes managed to avoid the law’s traditional enforcement measures.[30]

A new reason for that vulnerability

This book puts forward a third reason for that vulnerability, which not only explains why the pre-P2P secondary liability law proved so peculiarly unsuited to the task of dealing with purveyors of anti-regulatory code, but also why even successful litigation against providers of P2P software has failed to curb the technology’s spread. It is premised on the fact that software is radically and fundamentally different from physical world technologies in a number of ways. As the following chapter will demonstrate, the US pre-P2P secondary liability law evolved from decades of decisions relating almost exclusively to physical world scenarios and technologies.

Necessarily, the resulting principles were based on certain assumptions that had long proved correct in the physical world paradigm. This book is premised on the idea that there is a gap between those physical world assumptions and the realities of P2P software development, which it dubs the physical world/software world divide. It argues that, by failing to fully recognize the unique characteristics that distinguish software code and software development from their physical world predecessors, the law has been and continues to be vulnerable to exploitation by those who understand that those traditional or physical world assumptions do not always hold good in the software context.

Legal scholarship has touched upon the distinctions between software worlds and physical worlds in several different contexts, particularly in considering whether and how computer software should be provided with patent and copyright law protection,[31] and in considering the jurisdictional and choice of law difficulties associated with enforcing laws in cyberspace.[32] However, the implications of those distinctions remain poorly understood. It is not surprising that we are slow to acknowledge the revolutionary properties of code. Katsh explains that, historically, new technologies are frequently “perceived not as something with unique characteristics that will create new institutions and change old ones, but rather as something that simply extends the capabilities of . . . existing technolog[ies].”[33] Thus “early films were labeled ‘moving pictures’ and were not immediately understood to be a new art form”;[34] “the first cars were called ‘horseless carriages’ and looked as though they were designed to be pulled by a horse”;[35] and early personal computers “were called ‘typewriters with memory.’ ”[36] As Katsh explains, the danger of equating unlike technologies is that it may “mask the revolutionary character of the new technology”.[37] In turn, this can lead to legal standards that miss their targets because they fail to take into account the properties that make the new innovation unique.

This book will argue that this is precisely what has occurred in the P2P file sharing context. It begins by identifying four main physical world assumptions that lie at the heart of the pre-P2P US secondary liability law. The gap between those assumptions and the realities of P2P software development is explored as the book progresses, providing a third explanation for the secondary liability law’s vulnerability to anti-regulatory code, and satisfactorily explaining for the first time why the litigation strategy against P2P providers was ultimately unsuccessful in bringing about any meaningful reduction in the amount of P2P development and infringement.

Since this theory focuses inquiry on the characteristics of software code that make it different and unique as compared to physical world equivalents, it’s necessary to conceptually separate software from hardware. Software refers to the “programs and procedures required to enable a computer to perform a specific task”,[38] while hardware is the physical equipment necessary to execute software’s commands. The definition of “code” adopted in the existing legal literature typically conflates the two by defining “code” as the “information technology architecture,” or “the hardware and software,” that constitutes a particular technology.[39] It is easy for these lines to become blurred, as software is increasingly incorporated in much of the hardware we use in day-to-day life, including MP3 players, personal video recorders, cars, microwave ovens, and so on.

However, it is necessary to separate them in this context, since the equation of hardware and software risks masking the unique characteristics of software code on its own account, and particularly the ways in which it differs from the physical world technologies that came before it.

Physical world assumptions

Everybody is bound by physical world rules

This first assumption is the most abstract and poorly understood of the four, and understanding it requires delving into some of the conceptual differences between software worlds and physical worlds.

Consider what we know about the physical world. We have an immediate and “intuitive” understanding about how it works.[40] “Apples, when released fall down, not up. Actions are causally related to consequences. We expect things to behave sensibly. Our intuitive notion of what is ‘sensible’ is based on common-sense experiences, learned from earliest childhood, and rooted in the physical world.”[41] As Katsh explains:

 

In the ‘real world,’ time and space are ever-present constraints, with the laws of physics frequently limiting many of our desires to do something or be somewhere. The list of constraints to which we accommodate ourselves is significant. We respect the laws of gravity. We understand that no more than one object can occupy the same place. We recognize that we can only be in one place at one time and that there are some places we cannot go to because there is not enough time or because they are too far away.[42]

What is less well understood is that physical world rules do not necessarily apply to software. In fact, neither the laws of physics nor any other “law [or] principle known in the physical world” has any application in the virtual context.[43] As Professor Joseph Weizenbaum explains:

 

There is a distinction between physically embodied machines, whose ultimate function is to transduce energy or deliver power, and abstract machines, ie, machines that exist only as ideas. The laws which the former embody must be a subset of the laws that govern the real world. The laws that govern the behavior of abstract machines are not necessarily so constrained. One may, for example, design an abstract machine whose internal signals are propagated among its components at speeds greater than the speed of light, in clear violation of physical law.[44]

Unbound by physical world rules, software code is incredibly malleable. Indeed, Professor James Moor identified that “logical malleability” as software code’s revolutionary characteristic.[45] The medium’s inherent freedom and flexibility led Weizenbaum in 1976 to famously describe computer programmers as “creator[s] of universes . . . of virtually unlimited complexity.”[46] That unrestrained capability can, of course, be reined back by other code. An operating system, for example, can impose limits as to how software designed for that platform must operate. However, the internet was deliberately designed to be as free and open as possible for future developers and, as a result, developers of internet-based P2P file sharing programs have very few code-based constraints upon them.[47] All of this means that entities in a software world “can be made . . . to overlap, interconnect, and interact in ways that are not possible or feasible in the physical world.”[48] Programmers can write software that will do things that are simply not possible or feasible when limited by physical world constraints.

As Chapter 2 will demonstrate, copyright law evolved in response to decades of litigation involving physical world scenarios and technologies. The intuitive and unacknowledged understanding that we all have of the physical world’s constraints has played a large role in informing the law’s response to those scenarios. There can be no doubt that judges must sometimes have been influenced by unspoken and unacknowledged assumptions that if certain things were infeasible, impossible or impractical in the physical world, they were infeasible, impossible or impractical full stop.

Since these assumptions did hold good in the physical world context, the secondary liability law long worked well, and secondary infringements were limited, being “for the most part, crude, marginal transactions, the subjects of swap meets and unlicensed kiosks.”[49] As the following chapters will demonstrate, however, secondary liability principles based on the assumption that physical world rules apply can result in unanticipated outcomes when applied to situations where they simply do not. For example, a law that implicitly assumes that knowledge of a wrongdoing will be a natural corollary of a defendant’s culpability may struggle to respond to a defendant that utilizes encryption software to eliminate such knowledge. This might be the kind of phenomenon that Mitch Kapor and John Perry Barlow were hinting at when they observed in 1990 that “the old concepts of property, expression, identity, movement, and context, based as they are on physical manifestation, do not apply succinctly in a world where there can be none.”[50]

Developing and distributing distribution technologies is expensive

The final three assumptions identified in this work are less abstract, and flow on closely from one another. The first of them relates to expense. As Professor Jessica Litman has observed, “[o]ur copyright law was designed in an era in which mass distribution of copies of works required a significant capital investment.”[51] There can be no doubt that the creation of physical world distribution technologies capable of vast amounts of infringement, such as printing presses, photocopiers, and VCRs, typically requires large investments in research, development and infrastructure.[52] Even if the initial invention of a physical world distribution technology is achieved cheaply – and history is filled with examples of hobbyist inventors on shoestring budgets making amazing breakthroughs[53] – bringing it to market, mass-manufacturing it, and promoting and delivering it all require considerable amounts of cash.

The sizeable investment necessary to develop, manufacture and deliver a physical distribution technology creates high barriers to market entry that limit the number of manufacturers to relatively few – a fact that has long made it easier for content owners to enforce their rights against secondary infringers. One of the reasons that the copyright law evolved to rely on gatekeeper enforcement measures, as outlined above, was because these factors prevented end users from participating in widespread dissemination of copyrighted works. As Professor Jane Ginsburg explains:

Copyright owners have traditionally avoided targeting end users of copyrighted works. This is in part because pursuing the ultimate consumer is costly and unpopular. But the primary reason has been because end users did not copy works of authorship – or if they did copy, the reproduction was insignificant and rarely the subject of widespread further dissemination. Rather, the entities creating and disseminating copies (or public performances or displays) were intermediaries between the creators and the consumers: for example, publishers, motion picture producers, and producers of phonograms. Infringements, rather than being spread throughout the user population, were concentrated higher up the chain of distribution of works. Pursuing the intermediary therefore offered the most effective way to enforce copyright interests.[54]

A further corollary to the large investment necessary to create such technologies is that their providers are likely to be easily identifiable and deep-pocketed, making them attractive defendants in the event they step out of line.

Distribution technologies are developed for profit

The next assumption is that distribution technologies are developed for profit. As Professor Jonathan Zittrain has observed, “[b]efore the advent of modems and networks, major physical-world infringers typically needed a business model because mass-scale copyright infringements required substantial investment in copying and distribution infrastructure.”[55] Thus the assumption that distribution technologies would be developed for profit was inextricably tied to the large investments that were considered to be an integral part of developing and distributing them in the first place: once that initial investment had been made, there was strong motivation to obtain some financial return.

This traditional need to make a massive investment and then to recoup those expenses has significant implications. As Paul Ganley has explained:

 

The normal phases of R&D, product design, manufacture, unit testing and distribution all help to constrain the wilder excesses of copyright infringing potential. The inherent checks and balances in the structure of legitimate businesses help to ensure that companies will shy away from such costly and time consuming exercises if they believe there is no legitimate avenue for them to recoup their substantial investment.[56]

This assumption was reflected in various theories of secondary liability for copyright infringement. It is most explicit in the vicarious liability doctrine, of which one element is a “direct financial interest” in the infringement.[57] However, the imposition of contributory liability has also often appeared to have been inspired largely by the profit motives of the defendants.[58] Once again, this assumption worked to keep the total number of providers relatively small. It also kept them in line. Few providers were inclined to skirt the edges of the law too closely, since litigation by aggrieved rights holders would cut dramatically into their anticipated profits.

Rational developers of distribution technologies won’t share their secrets with consumers or competitors

The final relevant assumption is that providers of distribution technologies won’t share the secrets of their inventions. This follows on closely from the assumption that distribution technologies are expensive to develop. Having spent that money to research, develop, manufacture and distribute a technology, the provider has no incentive to share that technology with its competitors. Again, this is one of the reasons why the gatekeeper-enforcement regime worked so well before software distribution technologies emerged. The disinclination to allow technologies to be copied further limited the number of technology providers, and enabled gatekeeper-based laws to effectively keep them under control.

But this is getting ahead of the story. These assumptions, and the gaping mismatch between them and the realities of P2P software development, will be revisited a little later. For now, a better starting point is right back when people first started to get really interested in making music available online.

Evolution of the revolution

The online music equation

Since digital computers were first invented, people have gone to incredible lengths to make them play and share music. When MIT hacker Peter Samson was given access to a $3 million US computer in the 1960s, along with virtually unlimited possibilities for its use, he homed in on its single audio speaker – a basic device lacking any controls for pitch, amplitude or tone – and convinced it to output music.[59] As computing technology became more accessible, the demand for ways in which to play and share music online grew too. In 1993 a handful of college students founded the Internet Underground Music Archive (“IUMA”), which went on to become a pioneer of internet music distribution. Inspired by the failure of one of their number to sign his band to a major label (despite, or perhaps because of, such musical offerings as “Cold Turd on a Paper Plate”), the IUMA sought to make niche music available to a larger audience. The service offered online hosting of music and information on its website on behalf of unsigned bands in exchange for a small fee. Files were compressed using a technology known as MP2, which reduced files to manageable sizes by sacrificing sound quality. Although download times were long, the music obscure and the fidelity poor, IUMA rapidly gained popularity around the world. “Even when traffic was minimal, music clips were being downloaded from as far away as Russia – an appealing prospect to bands unaccustomed to being heard outside their hometowns.”[60]

At the same time that the IUMA was demonstrating the demand for online music distribution, an immense repertoire of unsecured digital music was being quietly built up, courtesy of the recording industry’s shift from vinyl and tape to compact disc. Industry executives had made a fateful decision not to incorporate digital rights management technology into the new format: the massive size of digital music files, the prohibitive cost of CD-burning technology, and the slowness and scarcity of internet connections had led insiders to conclude that widespread unauthorized copying would never be an issue. After all, the standard speed of a dial-up internet connection in the early 1980s was around 2400 bits per second. Assuming the download remained constantly at this maximum speed and the connection never dropped out, it would still take a user around a month to download a standard 650MB CD. As well as being time consuming and unreliable, downloading that much music was likely to cost far more (through internet access fees) than simply purchasing the CD from a record shop.

By the late 1990s, however, the equation had changed. Powerful desktop computers had become cheap enough for extensive business and home adoption. The development of the World Wide Web and search engines, plus faster and cheaper access plans, had made the internet more accessible than ever before. And the cost of data storage fell through the floor – from $16.25 US per megabyte in 1981, to $0.003 US per megabyte by 1996.[61] The last remaining barrier to widespread unauthorized online distribution of top quality digital music files was their prohibitively large size. This was overcome in 1996, when Germany’s Fraunhofer Institut Integrierte Schaltungen made widely available a technology it had developed during the previous decade.[62] Called “ISO-MPEG Audio Layer-3”, or “MP3”, the technology enabled CD-quality sound files to be compressed by a factor of 12 with minimal compromise of the original sound quality.[63] By that point, all of the elements necessary to make the online distribution of music a practical proposition were in place.
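That “around a month” figure follows from simple arithmetic. The short illustrative calculation below reproduces it using only the numbers given above: a 650MB disc, a constant 2400 bits per second, and MP3’s roughly 12:1 compression.

```python
# Back-of-envelope reproduction of the figures in the text: downloading a
# 650MB CD over a constant 2400 bit-per-second dial-up link, with and
# without MP3's roughly 12:1 compression.

CD_SIZE_BYTES = 650 * 1024 * 1024   # a standard 650MB compact disc
LINK_SPEED_BPS = 2400               # dial-up speed, in bits per second
MP3_COMPRESSION = 12                # approximate MP3 compression factor

cd_bits = CD_SIZE_BYTES * 8
days_uncompressed = cd_bits / LINK_SPEED_BPS / 86_400   # 86 400 seconds per day
days_as_mp3 = days_uncompressed / MP3_COMPRESSION

print(f"Whole CD, uncompressed: ~{days_uncompressed:.0f} days")  # ~26 days: around a month
print(f"Same music as MP3s:     ~{days_as_mp3:.1f} days")        # ~2.2 days at the same speed
```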

Joining the dots

By early 1997 a few dozen individuals, mostly college students, had begun to host websites offering a range of copyrighted popular music for free download to anyone who stumbled across them on the web. One of these pioneers was David Weekly, a Stanford student, who hosted his site courtesy of his school’s internet connection. Although his site hosted unauthorized versions of copyright-protected popular music, Weekly later explained that he was not motivated by commercial considerations. “None of us really had a patent interest in illegally copying music; we were simply blown away by the ‘cool factor’ of the new medium.”[64] Demand for the free digital music on Weekly’s site was massive: within a week and a half, it had become responsible for 80 percent of his campus’ outgoing internet traffic.[65] Shortly afterwards, Weekly received a call from his university’s network security branch. A record label had complained that he was distributing music in breach of copyright and, together with dozens of others, his site was shut down.[66] But the online music movement kept gaining momentum. A website hosted at the mp3.com domain received 10 000 hits the day it launched, even though it had not been advertised and despite the fact it didn’t host a single piece of music.[67] And when a teenaged programmer from Arizona wrote a program to play MP3s, it was downloaded 15 million times in just 18 months[68] before eventually being bought by AOL for a reported $100 million US.[69] Despite the incredible level of demand that these stories demonstrate, the RIAA resisted the trend towards online music. It stonewalled early efforts to legitimately distribute popular music online,[70] sought to prevent importation of an early portable MP3 player,[71] and tried to lock music up by working with technology companies to develop ways to “protect the playing, storing, and distributing of digital music”.[72] Most significant to the eventual development of P2P file sharing technology, however, it sought to shut down every website it could identify as hosting unauthorized MP3s.

As soon as MP3s started appearing on websites, the RIAA hired investigators to identify infringing sites. Then it issued ultimatums: remove the infringing content or face legal action for infringement. From 1998, such takedown demands became formalized under the Digital Millennium Copyright Act’s (“DMCA”) newly introduced safe harbor provisions, which are contingent on the “expeditious” removal of allegedly infringing content upon receipt of notice.[73] Matt Oppenheim, then the senior vice president of business and legal affairs for the RIAA, estimated that, by June 2003, copyright owners had sent more than half a million DMCA “cease and desist” notices.[74] Such takedown demands often had the desired effect, particularly where the sites were being unknowingly hosted by universities or corporations. These hosts, analogous to the traditional gatekeepers identified above, generally cooperated by quickly removing the offending content.[75] This is precisely what had happened to David Weekly, the student whose Stanford-hosted MP3 site had been so enthusiastically embraced by the internet-using public. Where the RIAA’s notices were disregarded, the music industry was known to back up the threat with action. In 1997, for example, rights holders filed three lawsuits within a 24-hour period against unnamed defendants, alleging that there was infringing content on their websites. After preliminary injunctions were granted, the unauthorized music was quickly removed from those sites.[76]

Some individuals reacted to this campaign by developing ways of lessening the effect of the takedown strategy. Most of the work in creating a website lies in the original coding of its design and layout. Once that has been done, it is a simple matter to add or edit content, or relocate the entire site elsewhere. Taking advantage of these characteristics, many providers of infringing MP3s began distributing their offerings via a system of multiple sites. One site would list and provide hyperlinks to available songs and other content, but not itself host any infringing songs. The music itself would be hosted at a completely separate location, typically one of the many free quasi-anonymous web-hosting facilities that were being launched around the same time. Users who clicked the links could seamlessly save the relevant content regardless of where it was hosted. When the inevitable cease-and-desist letter reached the host of the content, it would quickly be removed. However, its providers would then simply upload the MP3s at a new location (or find other copies that were already online), update their links to reflect the changed locations, and have the music available again almost immediately. Indeed, the entire enforcement process was likely to be considerably less expensive and time consuming for the distributors of the infringing content than for the RIAA itself.

Nonetheless, the RIAA’s strategy enjoyed a significant amount of success. Because of the inevitable time lag between the infringing MP3 files being removed and the links pages being updated, attempts by internet users to download files were often met with “file not found” errors. This transformed the process of downloading music via the web into a time consuming and frustrating experience. “[T]here were no easy, continuous, reliable sources for pirated music on the Net at large.”[77] Such was the number of fruitless searches, Professor Stuart Biegel observes, that “many commentators predicted that the controversy was ending and that the RIAA had won.”[78] At this stage, Zittrain argues, the music industry had “battled at least to a stalemate, if not better”.[79]

Changing the rules

But that situation soon changed dramatically. While the RIAA’s enforcement tactics probably frustrated some users into returning to traditional record stores, others persevered, following link after link in search of one that had not yet been disabled. One user happened to complain to his college roommate about this frustrating glut of dead links.[80] That roommate, Shawn Fanning, reasoned that the shortcomings of existing online music distribution could be bypassed by developing an application that maintained a fluid index that could tell users what music was available at any given moment.[81] It would be far less vulnerable to the notice-and-takedown regime because the content would be hosted by the individuals who wanted to share files and would go online and offline as they did; and its real-time structure would make it impervious to the scourge of dead links. Fanning put these ideas into practice via a program called Napster, releasing the first beta version on 1 June 1999.[82]

How Napster worked

Napster users were required to nominate a folder on their computer in which to store downloaded music. Unless the user expressly opted out, the content stored in this folder would be scanned for MP3 files each time the user connected to the service, and information about those files would be added to a central index maintained on Napster’s servers. However, at no stage did the service itself copy the music files. Napster’s user interface, depicted in Figure 1.1 below, allowed users to search for desired content using a number of search fields, including artist name and song title, file size, bit rate and other characteristics. A higher bit rate usually meant better sound fidelity, and a correspondingly larger file.
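The indexing step just described can be pictured with a short sketch. This is an illustrative reconstruction based only on the account above, not Napster’s actual code: the client scans the user’s nominated folder and reports file metadata (never the files themselves) to a central index, and the corresponding entries disappear when the user disconnects.

```python
# Illustrative reconstruction of Napster-style indexing, based only on the
# description in the text (not actual Napster code). The client reports
# metadata about the MP3s in the user's nominated folder (never the music
# files themselves) to a central index keyed by the connected user.

from pathlib import Path

central_index: dict[str, list[dict]] = {}   # username -> shared-file metadata


def scan_share_folder(folder: Path) -> list[dict]:
    """Collect metadata for each MP3 in the user's nominated folder."""
    return [
        {"filename": f.name, "size_bytes": f.stat().st_size}
        for f in folder.glob("*.mp3")
    ]


def connect(username: str, share_folder: Path) -> None:
    """On connection, register the user's current library with the index."""
    central_index[username] = scan_share_folder(share_folder)


def disconnect(username: str) -> None:
    """On disconnection, the user's files drop out of the index immediately."""
    central_index.pop(username, None)
```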

When a user entered a search, the parameters would be transmitted to the Napster servers, which would compare them to the information contained in the index and return a list of results.[83] Once a desired file was located, users could request a copy by double-clicking the file name or selecting it and clicking the “Get Selected Song(s)” button at the bottom of the screen. Upon receiving that request, Napster servers would query the host user to ascertain whether or not it was willing and able to send that file. If it was, Napster would communicate the IP address and other relevant details of the host user to the requesting user.[84] At that point, Napster’s role in the transaction would be complete, and the actual transfer would take place directly over the internet between the hosting and requesting users.[85] As soon as a user disconnected from the Napster service, the central index would be updated to reflect the change to the available content. This system of dynamic updating meant that Napster users, unlike those downloading music from the web, had no problems with broken or outdated links.

The Napster service relied on central servers to give users a fixed point on the internet to which to connect and to facilitate their searches. This meant that individual users could only connect to the network if Napster’s servers permitted them to do so, and had to communicate with those servers to obtain any information about files currently available on the network. However, once those servers provided information as to the location of desired files, Napster’s architecture enabled data to be transferred directly between individuals. This represented a revolution in the way in which data was transmitted online.
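The search-and-transfer sequence can be sketched in the same hedged spirit. The usernames, address and filename below are invented; the point is simply that the server matches queries against its index and hands back the host’s location, after which the copy travels directly between the two users.

```python
# A hedged sketch of the search-and-transfer flow described above, with
# invented names and data. The server's role ends once it tells the
# requesting user where a matching file can be found; the transfer itself
# then happens directly between the two peers.

peers = {
    # username -> (IP address, shared filenames), held on the central servers
    "host_user": ("203.0.113.7", ["example_song.mp3"]),
}


def search(query: str) -> list[tuple[str, str]]:
    """Return (username, filename) pairs whose shared files match the query."""
    return [
        (user, name)
        for user, (_, files) in peers.items()
        for name in files
        if query.lower() in name.lower()
    ]


def request_download(host: str) -> str | None:
    """If the host is still connected, hand its address to the requester."""
    entry = peers.get(host)
    return entry[0] if entry else None


results = search("example")
if results:
    host, filename = results[0]
    address = request_download(host)
    print(f"fetch {filename} directly from the peer at {address}")
```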

Before Napster popularized P2P communications architectures, most common internet transactions utilized a client-server model. In client-server relationships, the server controls the provision of both access and content. Clients have no input as to what information will be made available, or who will be able to reach it. Examples of client-server relationships include the World Wide Web (which is exclusively accessed through web servers) and email systems (which utilize servers to deliver incoming and outgoing mail).

The widespread adoption of client-server architectures had been largely driven by the practicalities of IP address allocation. Every computer on the internet has a unique internet protocol or “IP” address, which allows it to be distinguished from every other. In the early days of the internet, when relatively few machines were connected, virtually all internet-connected computers had static IP addresses, which remained the same each time they connected to the internet. The benefit of a static IP address is that you can always find the same resource at the same location. However, the number of available IP addresses is finite, and when more people began using the internet it became impracticable to allocate a dedicated address to every device. Thus a system of dynamic IP addresses developed, whereby internet service providers (“ISPs”) were assigned a pool of IP addresses that could be allocated to their users as needed. An internet user with a dynamic IP service is likely to have a different IP address or internet location each time they go online or reboot their modem. Because the IP addresses of individual users tend to change rapidly, causing users to blink in and out of the network at different points, they are referred to as existing at “the edges of the internet”.[86] Such uncertain connectivity long limited the ability of many users to host, share and distribute information.

But Napster changed all that. It exploited those underutilized resources by taking advantage of the computing power and internet connections of its highly transient and unpredictable users, who effectively functioned as both clients and servers, in that they both requested content and distributed it to others.

This architecture allowed Napster Inc to maintain accurate real-time indexes of available files, to facilitate communications between users and to respond quickly to searches. It also scaled effectively: the additional demand created by increased numbers of users could be handled by simply adding more servers to the central array. However, the model also had some noteworthy drawbacks. For one thing, it was relatively expensive, since Napster Inc was obliged to purchase the server hardware required to power it – a cost that grew in line with the service’s popularity. Perhaps the most notable downside, however, was the network’s vulnerability: its central servers gave it a single point of failure, and if that switch was flipped the entire operation would disappear in an instant.

These disadvantages were of little concern to users, who were much more interested in the service’s ready availability of free, high-quality music files. Word of the new Mecca for infringement spread quickly. The more people connected, the more music became available on the network, and the more attractive and popular the service became. Indeed, its popularity and illicit use became such that Zittrain described it as “the open air drug market of copyright infringement”.[87] Aghast at this sudden torrent of infringement, and unable to shut it down via the usual tactics, in December 1999, 18 members of the Recording Industry Association of America sued Napster Inc for copyright infringement.

 


[1] Eric Bangeman, “P2P traffic shifts away from music, towards movies” Ars Technica <http://arstechnica.com/tech-policy/news/2007/07/p2p-traffic-shifts-away-from-music-towards-movies.ars>, (6 July 2007) accessed at 18 November 2010.

[2] Reported at Eric Bangeman, “P2P responsible for as much as 90 percent of all ‘Net traffic” Ars Technica, <http://arstechnica.com/news.ars/post/20070903-p2p-responsible-for-as-much-as-90-percent-of-all-net-traffic.html>, (3 September 2007) accessed at 6 September 2007.

[3] See 17 USC § 504.

[4] “RIAA v The People: Four years later” Electronic Frontier Foundation, <http://www.eff.org/IP/P2P/riaa_at_four.pdf>, (29 August 2007) accessed at 7 September 2007 at 2 (internal note omitted); Grant Gross, “Despite Lawsuits, P-to-P Use Still Growing” PC World, <http://www.pcworld.com/article/id,138138-c,onlineentertainment/article.html>, (5 October 2007) accessed at 10 October 2007.

[5] See Mark Motivans, Intellectual Property Theft, 2002, Bureau of Justice Statistics, 13 <http://www.ojp.usdoj.gov/bjs/pub/pdf/ipt02.pdf> accessed at 11 November 2005.

[6] Anders Bylund, “RIAA defendant dies, heirs given 60 days to grieve before depositions”, Ars Technica, <http://arstechnica.com/old/content/2006/08/7487.ars>, (12 August 2006), accessed at 8 November 2010.

[7] Capitol Records Inc v Thomas-Rasset, 680 F Supp 2d 1045, at 1057 (D Minn 2010).

[8] See minutes of decision at Scribd, <http://www.scribd.com/doc/40927654/Jammie-Thomas-Rasset-Verdict>, (3 November 2010) last accessed at 28 January 2011.

[9] Sony BMG Music Entertainment v Tenenbaum, 93 USPQ 2d 1867 (D Mass, 2009).

[10] Sony BMG Music Entertainment v Tenenbaum, 2010 WL 2705499, at 3 (D Mass, 2010).

[11] See eg Eliot Van Buskirk, “Survey Says Most Share Files without Fear of RIAA Lawsuits” Wired, <http://blog.wired.com/music/2007/04/survey_says_mos.html>, (24 April 2007) accessed at 10 October 2007.

[12] Sarah McBride and Ethan Smith, “Music Industry to Abandon Mass Suits”, Wall Street Journal.

[13] On this point see particularly Reinier H Kraakman, “Gatekeepers: The Anatomy of a Third-Party Enforcement Strategy” (1986) 2(1) Journal of Law, Economics & Organization 53, at 56–7. For an excellent discussion of the theory of indirect liability from an economist’s perspective see also Doug Lichtman and Eric Posner, “Holding Internet Service Providers Accountable” (2006) 14 Supreme Court Economic Review 221.

[14] Kraakman, above n 13, at 56.

[15] Alfred C Yen, “Third-Party Copyright Liability after Grokster” (2005) 91 Minnesota Law Review 184 (“The normal remedy for copyright infringement is litigation against infringers. However, the number of computer based infringers is so large that copyright holders cannot find and sue them all.” (internal note omitted)); David Lindsay, “Internet intermediary liability: a comparative analysis in the context of the Digital Agenda reforms” (2006) 1&2 Copyright Reporter 70–86, at 73 (“As pursuing individual infringers is costly, questions ar[i]se regarding the liability of intermediaries that are not involved with the publication of material, but that participate in the communication, location or storage of material”); Jacqueline D Lipton, “Solving the Digital Piracy Puzzle: Disaggregating Fair Use from the DMCA’s Anti-Device Provisions” (2005) 19 Harv J Law & Tech 111 (arguing that rights holders pursue secondary infringers “[b]ecause of the economic reality of pursuing direct infringers”).

[16] See eg Lichtman and Posner, “Holding Internet Service Providers Accountable”, above n 13; Jane C Ginsburg, “Putting Cars on the Information Superhighway’: Authors, Exploiters, and Copyright in Cyberspace” (1995) 95 Colum L Rev 1466, at 1488. However, cf Mark A Lemley and R Anthony Reese, “A Quick and Inexpensive System for Resolving Peer-to-peer Copyright Disputes” (2005) 23 Cardozo Arts & Ent LJ 1 (suggesting that changing the law to make it easier to sue direct infringers may be more appropriate than shutting down digital distribution technologies).

[17] Tim Wu, “When Code Isn’t Law” (2003) 89 Virginia Law Review 679, at 713.

[18] Ibid, at 712.

[19] Fonovisa Inc v Cherry Auction Inc, 76 F 3d 259, (9th Cir 1996); Dreamland Ball Room v Shapiro, Bernstein & Co, 36 F 2d 354 (7th Cir 1929); Screen Gems-Columbia Music Inc v Mark-Fi Records Inc, 256 F Supp 399 (DCNY 1966).

[20] Lawrence Lessig, Code and other laws of cyberspace (Basic Books, New York, 1999) 6. Those famous words were first enunciated by architecture and media professor William J Mitchell, who, in the context of explaining the significance of cyberspace, wrote that “[o]ut there on the electronic frontier, code is the law.” William J Mitchell, City of Bits (The MIT Press, Cambridge, 1995) 111. For more on this idea that code regulates, see also Lawrence Lessig, “Reading the Constitution in Cyberspace” (1996) 45 Emory LJ 869, at 896–7 (one of the earliest legal explorations of the idea that software can constrain or “regulate” behavior); M Ethan Katsh, “Software Worlds and the First Amendment: Virtual Doorkeepers in Cyberspace” (1996) University of Chicago Legal Forum 335, particularly at 335–43 (exploring “the role of software in structuring the online environment”); Joel R Reidenberg, “Lex Informatica: The formulation of information policy rules through technology” (1998) 76 Tex L Rev 553, particularly at 568 (arguing that technology is a source of rule-making separate to traditional law) and at 569–74 (analogizing features of the “lex informatica” to traditional legal regulation); James Boyle, “Foucault in Cyberspace: Surveillance, Sovereignty, and Hardwired Censors” (1997) 66 U Cin L Rev 177, at 201 (predicting, two years before Napster was developed, that “there will be a continuing technological struggle between content providers, their customers, their competitors, and future creators.”); R Polk Wagner, “On Software Regulation” (2005) 78 S Cal L Rev 457 (elaborating on the code/law relationship, particularly the substitutability of code and law); Jay P Kesan and Rajiv C Shah, “Shaping Code” (2005) 18 Harv J Law & Tech 319, at 320 (canvassing a number of different ways in which code can be and is in fact used to regulate behavior).

[21] Lessig, Code and other laws of cyberspace, above n 20 at 89.

[22] Ibid.

[23] See eg Robert McMillan, “Sony rootkit settlement with states reaches $5.75M”, <http://www.infoworld.com/d/security-central/sony-rootkit-settlement- states-reaches-575m-558>, (21 December 2006) accessed at 8 July 2009.

[24] Wu, “When Code Isn’t Law”, above n 17, at 707–708.

[25] Ibid, at 708.

[26] Ibid, at 683; 685.

[27] Ibid, at 716.

[28] Ibid, at 724.

[29] Ibid, at 724–5.

[30] Ibid, at 685; 716–17; 722–6.

[31] See eg William Greubel, “A Comedy of Errors: Defining ‘Component’ in a Global Information Technology Market – Accounting for Innovation by Penalizing the Innovators” (2006) 24 J Marshall J Computer & Info L 507; Robert Plotkin, “Computer Programming and the Automation of Invention: A Case for Software Patent Reform” 2003 UCLA JL & Tech 7; Pamela Samuelson, “Contu Revisited: The Case Against Copyright Protection for Computer Programs in Machine-readable Form” 1984 Duke Law Journal 663; Jane C Ginsburg, “Four Reasons and a Paradox: The Manifest Superiority of Copyright over Sui Generis Protection of Computer Software” (1994) 94 Colum L Rev 2559; John Swinson, “Copyright or Patent or Both: An Algorithmic Approach to Computer Software Protection” (1991) 5 Harv J Law & Tech 145. See also Jacqueline D Lipton, “IP’s Problem Child: Shifting the Paradigms for Software Protection” (2006) 58 Hastings Law Journal 205 (which more recently canvasses some of the practical realities of software development that she argues makes it “unsuitable” to provide copyright protection to the underlying source code).

[32] See eg Biegel, Beyond our control? Confronting the limits of our legal system in the age of cyberspace (the MIT Press, Cambridge, 2001), at 25–49; David R Johnson and David Post, “Law and Borders: The Rise of Law in Cyberspace” (1996) 48 Stan L Rev 1367; Lawrence Lessig, “The Zones of Cyberspace” (1996) 48 Stan L Rev 1403.

[33] M Ethan Katsh, Law in a Digital World (Oxford University Press, New York, 1995) 135.

[34] Ibid.

[35] Ibid, at 24, citing James Martin, Hyperdocuments and how to create them (Prentice-Hall, Englewood Cliffs, New Jersey, 1990) 9.

[36] Katsh, Law in a Digital World, above n 33, at 135.

[37] Ibid, at 136.

[38] “Software” Oxford English Dictionary, <http://dictionary.oed.com>, accessed at 10 September 2007.

[39] See eg Lessig, Code and other laws of cyberspace, above n 20 at 6; Egbert Dommering and Lodewijk Asscher, Coding Regulation (TMC Asser Press, The Hague, 2006) 2; Kesan and Shah, “Shaping Code”, above n 20, at 320. However, cf Lessig, “Reading the Constitution in Cyberspace”, above n 20, at 896 in which Lessig seemed to define “code” as software alone.

[40] Boris Beizer, “Software is Different” (2000) 10 Annals of Software Engineering 293, at 295.

[41] Ibid.

[42] Katsh, “Software Worlds and the First Amendment”, above n 20, at 341–2.

[43] Joseph Weizenbaum, Computer Power and Human Reason (WH Freeman and Company, San Francisco, 1976) 111. Regarding the idea of software not being bound by physical laws see also Yingxu Wang, “Keynote Lecture: On the Informatics Laws of Software” (Paper presented at the Proceedings of the First IEEE International Conference on Cognitive Informatics), at 1; Beizer, above n 40, at 296; Katsh, “Software Worlds and the First Amendment”, above n 20, at 341–2; Juris Hartmanis, “Turing Award Lecture: On Computational Complexity and the Nature of Computer Science” (1994) 37(10) Communications of the ACM 37, at 39; Alan M Davis, “Fifteen Principles of Software Engineering” (1994) 11(6) IEEE Software 94, at 94; Greubel, above n 31.

[44] Weizenbaum, above n 43, at 111.

[45] James H Moor, “What is Computer Ethics?” (1985) 16(4) Metaphilosophy 266, at 269.

[46] Weizenbaum, above n 43, at 115.

[47] For a detailed explanation of the way in which the internet is coded, and why that structure allows free development of new protocols such as those needed for P2P file sharing, see Lawrence B Solum and Minn Chung, “The Layers Principle: Internet Architecture and the Law” (2004) 79 Notre Dame L Rev 815. For a history of the way in which the internet was developed see Barry M Leiner et al, “A Brief History of the Internet” Internet Society, <http://www.isoc.org/internet/history/brief.shtml>, last accessed at 7 March 2005. For detail regarding the philosophies of the internet’s creators, see Steven Levy, Hackers: Heroes of the Computer Revolution (Dell Publishing, New York, 1994) 40–9. For an analysis of the effects changes to the code or architecture of the internet may have, see Biegel, above n 32, at 187–211.

[48] Katsh, “Software Worlds and the First Amendment”, above n 20, at 339.

[49] Thomas Hays, “The Evolution and Decentralisation of Secondary Liability for Infringements of Copyright-Protected Works: Part 1” (2006) 8(12) European Intellectual Property Review 617, at 617.

[50] Mitchell Kapor and John Perry Barlow, “Across the Electronic Frontier” Electronic Frontier Foundation, <http://www.eff.org/Misc/Publications/John_Perry_Barlow/HTML/eff.html>, (10 July 1990) last accessed at 3 September 2007.

[51] Jessica Litman, “The Copyright Revision Act of 2026” (2009) 13 Marquette Intellectual Property Review 249, at 253.

[52] Jonathan Zittrain, “A History of Online Gatekeeping” (2006) 19 Harv J Law & Tech 253, at 255.

[53] Many such examples are discussed throughout Tim Wu, The Master Switch (Knopf, New York, 2010).

[54] Ginsburg, “Putting Cars on the ‘Information Superhighway’ ”, above n 16, at 1488.

[55] Zittrain, “A History of Online Gatekeeping”, above n 52, at 255.

[56] Paul Ganley, “Surviving Grokster: Innovation and the Future of Peer-to-Peer” (2006) 28(1) EIPR 2006 15, at 22.

[57] Gershwin Publishing Corp v Columbia Artists Management Inc, 443 F 2d 1159, at 1162 (2nd Cir 1971).

[58] For example, in Gershwin Publishing Corp v Columbia Artists Management Inc, the case in which the modern contributory infringement framework was developed, the Court’s finding of liability seemed influenced by the fact that the defendant had significantly profited from the infringing concerts, although profit was not, strictly speaking, an element of the tort. See Gershwin Publishing Corp v Columbia Artists Management Inc, 443 F 2d 1159 (2nd Cir 1971). Similarly, in Fonovisa Inc v Cherry Auction Inc, it seems that the Court was influenced by the fact that Cherry Auction was profiting from the increased revenue that resulted from the infringement. See Fonovisa Inc v Cherry Auction Inc, 76 F 3d 259 (9th Cir 1996).

[59] Levy, above n 47, at 33–4.

[60] John Alderman, Sonic Boom: Napster, P2P and the Future of Music (Fourth Estate, London, 2002) 13–15.

[61] “Forgotten Pioneer (20 Years of Hardware)” (2003) 21(3) PC World 91.

[62] David Weekly, “The Online MP3 Book” <http://david.weekly.org/mp3book/a.php3>, accessed at 25 February 2005.

[63] “Audio & Multimedia MPEG Audio Layer-3 – History” Fraunhofer Institut Integrierte Schaltungen, <http://www.iis.fraunhofer.com/amm/techinf/layer3/index.html#1>, accessed at 25 February 2005.

[64] Weekly, above n 62.

[65] Ibid.

[66] Ibid.

[67] Joseph Menn, All the rave: the rise and fall of Shawn Fanning’s Napster (Crown Business, New York, 2003) 33.

[68] Karl Taro Greenfeld, “Disabling the System” (1999) 4(4) Time Magazine 26.

[69] David Kushner, “The World’s Most Dangerous Geek” Rolling Stone, <http://www.rollingstone.com/news/story/_/id/5938320?rnd=1098404116735&hasplayer=true>, (13 January 2004) accessed at 22 March 2005; Jim Hu, “Controversial Winamp creator resigns from AOL” CNet News.com, <http://news.com.com/2100-1032_3-5147599.html?part=rss&tag=feed&subj=news>, (26 January 2004) accessed at 31 March 2005.

[70] This strategy can most notably be illustrated by the experience of Liquid Audio, as discussed in Alderman, above n 60, at 40–6.

[71] See eg Recording Industry Association of America v Diamond Multimedia Systems, 180 F 3d 1072 (9th Cir 1999).

[72] This was primarily attempted through the unsuccessful “Secure Digital Music Initiative.” For details see “The Secure Digital Music Initiative” Secure Digital Music Initiative, <http://www.sdmi.org/>, accessed at 4 April 2005; Biegel, above n 32, at 207.

[73] See generally 17 USC § 512.

[74] See “Online News Hour – Forum: Copyright Conundrum” Public Broadcasting Service, <http://www.pbs.org/newshour/forum/june03/copyright5.html>, (June 2003) accessed at 8 March 2005.

[75] Zittrain, “A History of Online Gatekeeping”, above n 52, at 272.

[76] See A&M Records Inc v Internet Site Known As Fresh Kutz, 97-CV-1099H (JFS) (SD Cal 10 June 1997); Sony Music Entertainment Inc v Internet Site, 97CIV4245 (SDNY 1997); and MCA Records Inc v Internet Site, 397CV1360-T (ND Tex 1997).

[77] Zittrain, “A History of Online Gatekeeping”, above n 52, at 272.

[78] Biegel, above n 32, at xii.

[79] Zittrain, “A History of Online Gatekeeping”, above n 52, at 275.

[80] A&M Records Inc v Napster Inc, 114 F Supp 2d 896, at 902 (ND Cal 2000).

[81] Menn, above n 67, at 27.

[82] Wu, “When Code Isn’t Law”, above n 17, at 728.

[83] A&M Records Inc v Napster Inc, 239 F 3d 1004, at 1012 (9th Cir 2001).

[84] A&M Records Inc v Napster Inc, 2000 US Dist LEXIS 6243, at 5 (ND Cal 2000).

[85] A&M Records Inc v Napster Inc, 239 F 3d 1004, at 1012 (9th Cir 2001).

[86] Clay Shirky, “What is P2P. And what isn’t.” O’Reilly P2P, <http://www.openp2p.com/pub/a/p2p/2000/11/24/shirky1-whatisp2p.html>, (24 November 2000) accessed at 5 September 2007.

[87] Zittrain, “A History of Online Gatekeeping”, above n 52, at 281.