Code and Other Laws of Cyberspace
Larry Lessig, Harvard Law School (now at Stanford Law School)
- The Internet is not "inherently" unregulatable: it is unregulatable only
because of the choice of code that runs there. Code can be deployed that enables
regulation through tracking and identification.
- Commerce thrives best in an infrastructure where some regulation and tracking
is in place, but it cannot necessarily bring this level of regulation to the Internet all
by itself. However, it and the government can be (uneasy) allies in making some
kinds of regulation the norm; e.g. the DMCA, or cooperation between industry and
government to regulate encryption.
- The government need not regulate individual behavior to achieve this: it can
regulate commerce, and commerce has a business interest in complying with regulations.
The effect of regulations could be to incentivize individuals to identify
themselves and be trackable, by making it very inconvenient to do otherwise.
Example: cookies (see the cookie sketch after this list).
- Because commerce is incentivized to comply, the effect of regulation, though not
complete, can be significant. Example: cryptography legislation had a huge effect on
Netscape products even though "bits know no borders". This is the
"bovinity principle": a small number of rules, consistently applied, suffice to
control a herd of large animals. This kind of regulation works because we are sheep.
Examples: DAT copy protection; Netscape encryption; DVD regional coding and
encryption. Imperfect regulation can still be effective regulation.
- In addition, pacts between political entities can be used to "zone" the
Internet. Example: "I'll block Nevada residents from accessing New York porno
sites, if you (Nevada) agree to block New York users from Nevada gambling sites."
If history is any guide, political goals make such agreements likely; a certificate
infrastructure would make them enforceable for any kind of digital transaction (see the
certificate sketch after this list).
- Digital regulation is inexpensive and (usually) unobtrusive to the user. In
meatspace, regulation is costly and intrusive, forcing a constant tradeoff: the high cost
of regulation to the government translates into personal liberty for the individual, while
highly desirable regulation is potentially obtrusive to the individual. So once an
architecture is in place that enables cheap regulation, we can expect the degree of
regulation to increase dramatically.
- Open source somewhat mitigates this problem: because the code being used is not opaque
and its design not centralized, it becomes difficult for government to impose regulatory
pressure using this level of indirection.
- What about ecash? It's unregulatable (if properly implemented) and unobtrusive,
and satisfies at least some of the concerns of commerce, without government
involvement in regulation.
- What about products like Freedom and PGP? Is it just a matter of making them
painfully easy to use (below the "bovinity threshold")?
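The cookie example mentioned above deserves a concrete illustration. The following is a
minimal sketch (mine, not Lessig's) of how a cookie turns an anonymous browser into a
trackable one: once a site hands out an identifier, every later request from that browser
links back to a single server-side history. The names (visitor_id, handle_request) are
invented for the example.

    # Minimal sketch of cookie-based identification; names are illustrative.
    import uuid
    from http.cookies import SimpleCookie

    SESSIONS = {}  # server-side history, keyed by the identifier we hand out

    def handle_request(cookie_header: str, path: str) -> str:
        """Record the visit; return a Set-Cookie header if the browser is new."""
        cookie = SimpleCookie(cookie_header)
        if "visitor_id" in cookie:
            visitor = cookie["visitor_id"].value       # returning, identifiable browser
            set_cookie = ""
        else:
            visitor = uuid.uuid4().hex                 # first visit: mint an identifier
            set_cookie = f"Set-Cookie: visitor_id={visitor}"
        SESSIONS.setdefault(visitor, []).append(path)  # every later visit links back here
        return set_cookie

    # Two requests from the same browser become one linkable history:
    header = handle_request("", "/home")               # first request, no cookie yet
    vid = header.split("=")[1]
    handle_request(f"visitor_id={vid}", "/purchase")   # browser sends the cookie back
    print(SESSIONS)                                    # {'<id>': ['/home', '/purchase']}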
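The certificate-based zoning idea can also be made concrete. Below is a hedged sketch (my
construction, not from the book) of how a site could enforce an interstate pact if a
certificate infrastructure let it learn which jurisdiction a user is certified to be in;
the certificate is modeled as a plain dict, where a real system would verify an X.509
attribute instead.

    # Hypothetical pact: each class of site lists the jurisdictions it must block.
    BLOCKED = {
        "ny_adult_site": {"NV"},      # New York blocks Nevada residents
        "nv_gambling_site": {"NY"},   # Nevada blocks New York residents
    }

    def admit(site: str, client_cert: dict) -> bool:
        """Admit the request only if the certified jurisdiction is not blocked."""
        jurisdiction = client_cert.get("jurisdiction")
        if jurisdiction is None:
            return False              # no credential: default-deny is what makes the pact bite
        return jurisdiction not in BLOCKED.get(site, set())

    print(admit("nv_gambling_site", {"jurisdiction": "NY"}))  # False: blocked by the pact
    print(admit("nv_gambling_site", {"jurisdiction": "CA"}))  # True: the pact doesn't cover CA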
Trusted systems and how digital regulation affects intellectual property
If trusted systems (e.g. copy protection in DAT drives, CSS in DVD players) are widely
adopted, they make possible several damaging things, including perfect regulation (e.g.
loss of fair use) and loss of anonymity (potentially chilling controversy and criticism).
Each case arises because of a latent ambiguity in the constitutional
framing surrounding the particular issue. In many cases it is reasonable to argue
that the framers themselves might have been divided on the issue, since technology has
provided tools that were unimaginable to them at the time.
In the case of fair use, we must decide whether the original motivation was primarily
the public good, or primarily a concession to the fact that certain user behaviors could
not be effectively regulated because doing so was prohibitively expensive. Since technology makes
the second constraint go away, under the second interpretation the "loss" of
fair use is not a loss at all, but the achievement of a desirable end that the (imperfect)
fair use laws couldn't achieve on their own. Under the first interpretation, public
good is harmed by the loss of what was intended a priori to be a public benefit,
in which case we should find a way to protect it despite the ability to build a
perfect system that excludes it. Similar reasoning applies to anonymity.
In the case of privacy, there are (at least) three conceptions of privacy that one
could argue are protected by the Fourth Amendment: privacy as a way of minimizing
intrusive burdens on individuals (search & seizure); privacy as a way of protecting
individual dignity (even if the search is not intrusive); and privacy as a substantive
check on government power (by effectively making some kinds of legislation, such as what
you do in your bedroom, impossible to enforce in practice). When the Fourth was
framed, the technology of the time was such that all three views suggested striking the
same balance. Digital technology has allowed these to be unbundled (e.g. a worm can
search unintrusively but still offend dignity; cheap surveillance increases the
enforceability of certain kinds of laws over individual behavior), so again we must choose
which conception(s) of privacy are worth explicitly preserving.
- One of Lessig's former students once suggested that the burden of property protection in
cyberspace (and of cyber self-defense) be moved to individuals, since the cost balance so
favored individual enforcement. Historically, technologists have found schemes to
defeat most of the "protective" measures enforced by opaque code (A5, DeCSS,
etc.). How about taking this approach for fair use, striking down the DMCA and arguing
that we should simply be allowed to pursue ways of preserving our own fair use rights
("frontier justice") and allowing the balance to find itself?
As we've seen, legislation can be, and has been, made to affect the Internet. In
general, to the extent that some of our liberties derive partly from the high cost of
regulating a behavior, when those costs fall we must choose how (legislatively) to
architect the new system.
Sovereignty
The Internet imposes its own sovereignty (being transnational, with bits knowing no
boundaries, etc., and with fairly well-defined and sometimes self-regulating communities).
Although there is precedent for individuals to be simultaneously subject to the
rules of multiple sovereigns (e.g. international companies, state and federal
laws), in most cases where there is a potential for bona fide conflicts, they are
expected to occur between "sophisticated" actors (e.g. companies) and not
individuals. The mechanisms deployed for these don't work well when applied to
individuals.
As the world becomes more socially and economically integrated because of the effects
of the Internet, we will come to see ourselves as "global citizens", just as
Americans saw themselves increasingly as national citizens (and less as state citizens)
during the social and economic integration following the early expansion of the Colonies.
Things that were once considered "local issues" are now everyone's
concern. We must choose what kind of a space we want to create to be citizens of.
Choices will be made; the only question is by whom. We may be weary and
skeptical of the products of our own democratic government, but if we still believe in the
ideal of the democratic process, we had better be sure that ideal is embodied in the
Internet architecture we choose to create.
Responses
- One of the benefits of open source is that the ideals (and restrictions) imposed by it
are transparent, for all to see. Unfortunately, the current legal system
favors opaque over open code in terms of the IP protection afforded.
- Ideally, in a democracy, deliberation and reason are the forces behind collective
decision making. This happens in microcosm in many jury trials. But it no
longer happens in government, in part because in an effort to appease voters, officials
rely on (usually bad) poll data and make short-horizon decisions. Technology can be
used to make this worse (constant polling and no hysteresis) or better (deliberative
polling that allows the polled parties to inform themselves and form an opinion over the
course of a couple of days). We can and should inject some of this back into our own
system.
- Do-nothingism (extreme libertarianism about code writing) leads to things like Y2K.
As it showed, cyberspace is not "elsewhere", it is right here.
- In short: we are confronted by a revolutionary technology, but at a time when we are not
ready for revolution because of our own skepticism about our governmental system.
- Law and social norms work only because the governed are aware of them.
They work mainly ex post (prosecution after a crime), although, subjectively, one
might claim that the fear of punishment makes them work to some extent ex ante.
But code can work whether or not the user is aware of what it is doing, and it can
work entirely ex ante (preventing you from doing something rather than punishing
afterward); a minimal sketch of this ex ante quality follows below. Law and social
norms can be made more codelike the more they are internalized.
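A minimal sketch (my example, not Lessig's) of the same rule enforced the two ways just
described: ex post, law-style, the act happens and punishment may follow; ex ante,
code-style, the act is simply made impossible, whether or not the user is aware of the
rule. The Work class and no_copy flag are invented for the illustration.

    from dataclasses import dataclass

    @dataclass
    class Work:
        title: str
        no_copy: bool = False              # flag a trusted system would honor

    violations = []                        # ex post: a record for possible prosecution

    def copy_ex_post(work: Work) -> Work:
        if work.no_copy:
            violations.append(work.title)  # noted for later punishment...
        return Work(work.title, work.no_copy)  # ...but the copy is still made

    def copy_ex_ante(work: Work) -> Work:
        if work.no_copy:
            raise PermissionError("copying disabled")  # the act is prevented outright
        return Work(work.title, work.no_copy)

    song = Work("protected recording", no_copy=True)
    copy_ex_post(song)                     # succeeds; enforcement comes later, if at all
    try:
        copy_ex_ante(song)                 # never happens; no awareness or fear required
    except PermissionError as err:
        print(err)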