Andreas von Bonin, LL.M. '98
© Copyright Andreas v. Bonin, 1998.

Content Control on the Internet - Substitutes for Government Regulation

Seminar Paper for "Free Speech on the Information Superhighway"
A. Introduction
I. Why Content Regulation?
II. Is Cyberspace Different?
III. Non-Governmental Interest in Abridging Speech
B. Why Government Cannot Control Content on the Internet
I. Constitutionality of Government Mandated Content Regulation on the Internet
1. Unprotected Speech
2. Protected Speech
II. Effectiveness of Government Mandated Content Regulation on the Internet
1. Technical Feasibility
2. Enforceability
C. Content Control Without Government Participation
I. Whose Duty Is It?
1. The Look Upward
2. The Look Downward
II. How Can It Be Done Effectively?
1. The Reception of Unwanted Content
a. The "Cyberspace Sovereignty" Approach
b. The Empowerment Approach
2. The Disclosure of Private Information
a. The "Cyberspace Sovereignty" Approach
b. The Empowerment Approach
3. Comparison of the Two Approaches
III. Will We End up in an Industry-Dominated World With Even Less Free Speech and Privacy?
1. A More Pessimistic View
2. A Government Duty to Reinstall Free Speech and Privacy?
D. Conclusion
A. Introduction

The easy availability of forbidden or otherwise socially undesirable content on the Internet is one of the few dissonances in the worldwide euphoria that has accompanied the advent of this new medium.
I. Why Content Regulation?

The question as to why content should be regulated at all has long been settled by American society. Society rejects the notion that conversation is "just speech" and thus hurts no one. It does not want to leave the victims of defamation, plagiarism or propaganda to defend themselves alone; it wants to protect their intellectual property and their dignity with the full force of governmental authority. And minors should be kept from being exposed to content that might be harmful to them. In Cyberspace no less than elsewhere, the need to ban certain speech is felt, and so is the need for content regulation on the Internet(1). But while we insist that our governments protect certain areas from infringement by words, sounds and pictures, we also want to see everyone's freedom of speech guaranteed to a substantial extent. This calls for a constant balancing of two competing interests: upholding the right of Free Speech against protecting those vulnerable in society.
II. Is Cyberspace Different?

This very battle has been fought in every medium. When it comes to the Internet, some say that the application of existing laws is sufficient(2), some say we need a whole new legal framework(3). Both might be right. Libel and slander are no less worth punishing whether committed in a newspaper or on a website. Trademark and copyright infringements are not permitted by the law - whether in the physical world or on the Net. Society does not want drug dealing, be it on the street corner or via mailing list, e-mail or chat room. Sexually explicit material should not be available to minors - neither in a bookstore(4) nor on a computer screen.

On the other hand, one has to consider that the Internet, like no other medium, offers vast opportunities to the individual. Everyone can access all kinds of content more easily than ever before and communicate with friends and strangers no matter where they are, but everyone can also become a publisher to a worldwide audience with one mouse click. The cost of these enormous advances might be lower standards in the verifiability of content(5), in protection against unwanted communication(6), in privacy(7) and in defending intellectual property rights.(8)
III. Non-Governmental Interest in Abridging Speech
Not only governments but also commercial entities have an interest in regulating content on the Internet. As mass use of the Internet takes off, companies and businesses want to advertise, invest and provide service in a medium that they can market as family-friendly. Afraid of pornography and hate speech, the average American family is likely to stay unwired, thus preventing further commercialization and profits. Service providers wish to avoid possible liability for third-party content they transport to the unsuspecting user(9). To reach these goals, companies active on the Internet will attempt to shape their contractual relationships accordingly. Unlike the government interest in abridging speech, commercial interests to the same end are not controlled by any Free Speech guarantees but "only" by the market. The First Amendment only blocks governmental action, not industrial policy(10).
This
threat to the variety of speech in a medium undergoing commercialization has to
be kept in mind when substitutes for government regulation on the Internet are
examined.(11)
B. Why Government Cannot Control Content on the Internet
Whether any national government is able to effectively ban undesirable speech from the Internet is a controversial matter. Two issues feature prominently. First, the question arises whether government can constitutionally regulate content on the Internet. After recent Supreme Court decisions, the answer to that question is clearer (see infra I.). If so, the second question becomes whether government can effectively regulate content on the Internet (see infra II.).
I. Constitutionality of Government Mandated Content Regulation on the
Internet
1. Unprotected Speech

The First Amendment to the United States Constitution only protects certain kinds of speech. Some areas of speech, e.g. obscenity, have long been declared unprotected and can therefore be banned from the Internet.(12)

2. Protected Speech

Indecency(13), speech advocating violence(14) or being otherwise dangerous(15), as well as speech that might infringe copyrights of third parties(16), are, to some degree, protected by the First Amendment.
Any federal(17) regulation that attempts to keep certain content from the Net is - as a content-based regulation - subject to strict First Amendment scrutiny(18). Consequently, the regulation has to pursue a compelling state interest and has to be narrowly tailored to that purpose. What constitutes a compelling state interest cannot be answered in the abstract.(19)
The protection of minors is at least one.(20) Some say government has no compelling interest at all in regulating computer speech, because the medium is not pervasive and many means of effective user-control exist.(21) Still, the possibility remains that an active legislature will decide to target other interests, particularly the prevention of crimes such as child abuse, which is arguably promoted by the consumption of violent or sexually explicit material in Cyberspace. Supporters of such restrictions have attempted to justify such regulation with reference to the "secondary effects" doctrine. This doctrine requires that the challenged law be directed not at the speech itself, but rather at the secondary effects caused by the "establishments that engage in such speech."(22)
If clear evidence shows that the regulation targets secondary effects, it escapes strict scrutiny as a mere "time, place and manner restriction".(23) But the doctrine has been limited by the courts: to prevent direct regulation of protected speech under the label of the "secondary effects" doctrine, they have held that the "emotive impact of speech on its audience is not a secondary effect".(24)
In the cable medium, the zoning of indecent content to leased access channels was recently struck down on overbreadth grounds.(25) In Reno v. ACLU, the majority rejected with some ease the argument that the CDA was a mere "zoning law". The framing of the statute clearly indicated that its purpose was to "protect children from the primary effects of 'indecent' and 'patently offensive' speech, rather than any 'secondary' effect of such speech."(26) Nonetheless, the zoning approach might be a possible avenue for government regulation in the future.(27)
Even if it promotes a compelling state interest, the statute must survive the second prong of the strict scrutiny test, namely whether it is narrowly tailored to the compelling interest(28). Given the technical situation of today's world, every regulation that attempts to ban content for a specific age group fails this test, because for the Internet services of e-mail and newsgroups no effective method of age verification exists(29). Although not yet adjudicated, it is unlikely that regulations banning violent or otherwise dangerous content (such as bomb-making instructions) will pass constitutional muster, given the vagueness of the terms and the resulting chilling effects on protected speech(30).
With respect to copyright law, the courts have so far attempted to maintain the same standard of protection on the Internet as in the physical world.(31) Government is constitutionally able and mandated(32) to protect intellectual property. There can be no Free Speech right to distribute other people's content freely. Legally, this does not change in Cyberspace.
Summing up, since Reno v. ACLU government cannot prohibit from the Internet any kind of speech protected by the First Amendment based on the compelling interest of protecting a specific (age) group. Constitutionally unprotected speech, third-party distribution of copyright-protected content and probably the "time, place and manner" of content reception can still be regulated without being subject to strict scrutiny.
II. Effectiveness of Government Mandated Content Regulation on the Internet
Even for obscenity and other types of constitutionally unprotected speech, criminalization makes sense only if it is technically feasible and if there is a chance of enforcement.
1. Technical Feasibility

It is highly debated whether the Internet, designed to overcome obstacles like a total breakdown of substantial parts of the network, is responsive to government regulation at all.(33) Massive regulation of the hardware structure of the Net - such as a complete shift to a hierarchic network with a monitorable bottleneck, as has been undertaken in some countries - seems hardly reconcilable with that design. Installation of firewalls and blocking software might be a technically feasible solution, but even if government-mandated, it faces the same constitutional problems as attempts to regulate speech directly.(35)
2. Enforceability

Today, half of the content on the Internet comes from outside the United States. Nevertheless, American citizens have been convicted for violations of American content regulations(37). One might conclude that a strong domestic enforcement system can at least eliminate part of the content this community considers improper. But criminal sanctions are a pointless exercise when foreigners who commit the same crimes do so with impunity and the offending material remains accessible(38). If local authorities attempt to regulate content only because it is accessible from within their jurisdiction, they will have to accept that any other local authority - motivated by a different set of values - has the same right to do so.(39) This leads to the conclusion that refraining from top-down regulation may prove the more viable way of dealing with content on the worldwide Internet, at least if alternative control mechanisms are likely to emerge (see infra C.).
Copyright infringement happens daily on the Internet. Cutting, copying, pasting and reshaping content presents few problems in a digital medium(40). Sometimes information is copied automatically for technical reasons.(41)

In any case, regulating, litigating and proving every copyright infringement on the Net - be it the use of a Mickey Mouse image on a private homepage, an inline link, or copying a newspaper article and sending it via e-mail(42) - is a practical impossibility.
On the whole it can be said that, even for speech that can constitutionally be banned, (a national) government is not the appropriate entity to regulate the content of information transmitted across Cyberspace.(43)
C. Content Control Without Government
Participation
Most Americans are skeptical about government involvement in Internet content regulation. In 1995, only 6% believed that it should be the duty of government.(44)

Whose duty is it then? How can it be done effectively? Will we end up in an industry-dominated world with even less Free Speech and Privacy?
I. Whose Duty Is It?

Once the premise is accepted that a national government is not the appropriate entity to govern content-related issues on the Internet, commentators look both upward and downward.

1. The Look Upward

Those who believe the major problem lies in the international character of the Internet tend to favor multinational or international regulation. The European Union in particular has spearheaded this option.(45)
But this alternative seems to create more problems than it solves. In the first instance, there are still governments and politicians involved. International rulemaking has never been bound to the strict standards of national First Amendment protection; instead it has to be oriented toward the lowest common denominator. Second, international lawmaking is an evolving field: there are few reliable procedures, enforcement is flawed, and there is the risk of differing application between countries. Finally, regardless of the international framework (UN, OECD, ITU) involved, international lawmaking takes an inordinately long time to reach agreement.
2. The Look Downward

Writers who come from an Internet background and are familiar with the particularities of an on-line community propose a different approach: their solution lies in handing regulatory power down to users, markets and communities. Two key concepts should be distinguished: "Cyberspace sovereignty"(46) and "user empowerment"(47). Both concepts share certain characteristics: they reject traditional government activity in actual regulation, and they envision the informed, prepared-to-make-choices type of consumer/user.
II. How Can It Be Done Effectively?
Any means of content control is effective if it meets the interests of both speakers and listeners reliably and at the lowest possible cost. This means it should not limit the speakers' ability to say what they want to say, as long as the listeners have the ability to protect themselves or their dependents against exposure to unwanted speech. At the same time, speakers must be able to determine the amount of information they want to convey on a case-by-case basis, according to the cost they are willing to pay for protecting this information. Effectiveness thus also means transparency and flexibility: listeners must be able to know at all times what they have chosen to reject,(48) and speakers must know what they have chosen to convey. Both must have the possibility to alter their preferences at any given time. And although the most effective system is one that produces no failures, we nevertheless want to see compliance enforced.

Consequently, two main issues have to be addressed to determine effective means of content regulation: the "reception of unwanted content"(49) and the "disclosure of private information"(50). Both are two sides of the same coin.
1. The Reception of Unwanted Content
Unwanted content can include more than material that is "harmful to minors" or generally "obscene", "violent" or "defamatory". Government-defined categories of speech matter little here, because user-side control can be enforced independently of the category of speech. Users and communities can define for themselves which content they declare "unwanted". E-mail spamming(51), certain political speech and pointcast advertisement might be as unwanted as obscenity for the individual user. Thus even speech that enjoys First Amendment protection may be rejected by a considerable number of users.
a.
The "Cyberspace Sovereignty" Approach
The "Cyberspace sovereignty" approach recognizes that interaction on the Net is completely unrelated to geographical boundaries in the "physical world."(52)

Instead of territorial jurisdictions, some kind of self-regulated government (or a variety of them) will exist in Cyberspace. It may be diffuse and based on common sense, but an entity (or multiple entities) will develop a set of rules for conduct in the virtual space. There will be values that, although perhaps different from community to community, will be enforced, and failure to comply will be sanctioned. The analogy drawn by the promoters of this approach is the "Law Merchant" of the Middle Ages.(53)
The Law Merchant was a set of rules that emerged from the customs of traveling merchants. It existed alongside incumbent laws and was enforced by special merchant courts. The judges sitting on these courts were senior merchants themselves and thus recognized the need for speedy, practical and flexible dispute resolution.(54)
The process of rule-making in Cyberspace should consequently start with low-level regulation such as self-help and contracts(55). As these solutions are found reasonable by more and more people of the community, they become universally accepted (customary) rules of the "Law Cyberspace". In addition, model codes, provider policies(56) and "netiquette"(57) - a set of rules of behavior from the early days of the Internet(58) - are to be relevant sources of law. Although this view also mentions competition between several governments, this is seen as an intermediate stage on the way to the development of an overall set of rules, the "Law Cyberspace".
In this model, a user who does not want to be exposed to certain kinds of content would in the first place shield herself by not accessing it, and ultimately by the use of filtering software. Where mere self-help is not sufficient because of external effects (e.g. when pornography affects the moral tenor or the physical safety of the whole community(59)), the user will resort to negotiating with her service provider to block pornography at the highest possible level of the network(60). Where the problem is a lack of knowledge (e.g. a user does not want any material that infringes someone's copyright, but she cannot determine what material to block), she would have to rely on the enforcement of a netiquette rule or a policy of the providers' association. If no such rule exists, she has to start a campaign against copyright violation or open her own service and thus try to establish a custom.
b.
The Empowerment Approach
This approach recognizes the same means of regulation, such as self-help and contracts, but it does not focus on the creation of an overall body of law in Cyberspace. Although it defines different layers of "jurisdiction", the underlying rationale remains that on the Net, "governments" can coexist and "citizenship" under one jurisdiction is voluntary.(61)
More sophisticated technology and market forces, according to this view, will provide the individual user with sufficient means to create the form of virtual environment she wants. Rating and filtering software plays an important role in this model, as it deals with the question of avoiding exposure to unwanted content. Based on the PICS system(62), it is possible to give every user a powerful tool to select content. Esther Dyson, a longtime Net activist, describes PICS in her recently published book Release 2.0:(63)
"PICS is the underlying technology for
tools to create and publish labels and for the filters and other tools to
recognize them. It allows a Website owner or a third party to label a site or
individual page, and it allows any PICS-enabled browser or other software tool
to find and interpret the label or rating. The label can either be physically
on the site, or ratings can be collected elsewhere by a third-party ratings
bureau that users and browsers can refer to automatically over the Net. Anyone
can rate and label his own site, and anyone - interest group, commercial
service, community manager - can set up a ratings bureau. And vice versa: A
user can specify not just which ratings, but which rating service he wants. Just
as in other markets, some rating services will be more widely consulted than
others, but the PICS standard in principle means an open, decentralized market
where content descriptions and people's (or parents') preferences can be
matched.
"Service bureaus will maintain electronically readable lists of
ratings not posted on the sites themselves. Thus you could go to, say, the
"Over time, this technical approach of labels and selection can do
more than just protect children. (...) (I)t could allow an adult to find
articles rated 'insightful' by a favorite critic, all recipes rated spicy by
Julia Child, or all sites rated 'pure French' by the French government. (...)
"PICS also offers a way of specifying the
source of the ratings, so that you can search for labels from a rater you
trust, and some means of authenticating the rater, the item rated, and the
rating. Optionally, of course. Anyone could rate
something anonymously, and anyone else could decide whether to pay attention to
such unsourced ratings."
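Dyson's description boils down to a simple matching rule: compare each rated category of a site against the limits the user has chosen, and fall back to a default for unlabeled sites. The sketch below illustrates only that logic; the category names, numeric levels and dictionary label format are invented for clarity and are not the actual PICS syntax.

```python
# Toy illustration of PICS-style label matching (hypothetical format,
# not real PICS syntax). A label maps rating categories to levels; a
# user profile maps categories to the maximum level she will accept.

def allowed(label, limits, block_unrated=True):
    """Return True if every category the user cares about is within her limits.

    If the site carries no label at all, the `block_unrated` default
    decides what happens.
    """
    if not label:
        return not block_unrated
    # Categories absent from the label are treated as level 0 (harmless).
    return all(label.get(cat, 0) <= limit for cat, limit in limits.items())

site_label  = {"sex": 3, "violence": 0, "language": 1}  # a site's (self-)rating
user_limits = {"sex": 1, "violence": 2, "language": 2}  # a parent's preferences

print(allowed(site_label, user_limits))   # blocked: "sex" level 3 exceeds limit 1
print(allowed({}, user_limits))           # unrated site, blocked by default
```

Because the limits live on the user's side, any rating bureau whose labels use the same categories can feed this check, which is the open, decentralized market Dyson describes.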
Today's technology makes it possible to attach to any Website a discussion about its content from different points of view. It provides the user with means to decide which voices to listen to in that "rating discussion". Nevertheless, broad implementation of rating systems raises the following problems:
(1) Unrated sites: Given the vast amount of content present on the Internet, rating of all sites by one rating bureau appears to be an impossible task(64); to assume 100% self-rating with one type of software is equally unrealistic. Thus, whether defaulted one way or the other(65), the user has to decide whether he wants his browser to block all unrated sites or to let them all through. The first alternative would amount to coercion to self-rate. The second would fail, as it is likely that all the "problematic" sites will offer an "unrated version". The only solution to this problem is to encourage rating competition and the use of software that supports a variety of rating systems on a "non-discriminatory basis".(66)
(2) Who decides on the standards? Self-rating will probably help to increase the percentage of rated sites quickly, but it lacks objectivity(67). Third-party rating, like every theater critique or book review, depends heavily on the background and policy preferences of the rater.(68) The categories offered by the major built-in and stand-alone rating programs can never reflect the full context and nuances of the content in question and can never hope to satisfy individual wants and needs.(69)
But as there is no government action involved in this process, the absence of equal treatment, objectivity, due process and uniform correctness is not actionable. Instead, market and competition are supposed to deal with these shortfalls. Every group may rate the sites of its interest first. No matter how ideologically motivated the ratings are, they will invariably be only one voice in the discussion about a site, and the user will have to decide whether he wants to hear it. As long as a variety of competing systems evolves and survives, no incumbent rater can afford to mis-rate a competitor's site intentionally: reputation means a lot in a market environment. Furthermore, in an ideal model, filtering software will be able to distinguish different categories. An educational site like the "Critical Path AIDS Project"(70) might receive a "crude" or "explicit" rating in the "sex" category, but will be rated "important" in an "educational value" category. So parents can set their software to filter "explicit sex" unless it has "important educational value", and make the site available to their children.
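The parental rule just described - block "explicit sex" unless the site also has "important educational value" - can be expressed as a short predicate over a site's category ratings. The category names and numeric thresholds below are hypothetical placeholders, not those of any actual rating service.

```python
# Sketch of a cross-category filtering rule (hypothetical categories/levels).

def blocked(ratings):
    """Block explicitly sexual content unless it has high educational value."""
    explicit = ratings.get("sex", 0) >= 3                   # "crude"/"explicit"
    educational = ratings.get("educational value", 0) >= 2  # "important"
    return explicit and not educational

aids_project = {"sex": 3, "educational value": 2}  # explicit but educational
adult_site   = {"sex": 3}                          # explicit only

print(blocked(aids_project))  # False: the educational exception applies
print(blocked(adult_site))    # True
```

The point of the sketch is that the exception lives in the user's rule, not in the rating itself, so the same labels can serve parents with very different preferences.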
(3) Rating might encourage more government regulation: Once we commit ourselves to content control through ratings, government might want to secure the accuracy of labels through legislation criminalizing mis-rating and/or non-rating.(71) Such a statute would face various constitu