Big Liberty is Watching

A century after Orwell’s birth, reality overtakes his classic.

Roberto Lovato

Twenty-two-year-old college student Steve Fernandez of Harrisville, Rhode Island, says he came to pay homage to liberty and what the idea stands for. After being patted down in a holding area in front of the ferry that will transport him to Liberty Island, he stands in a long security queue, where he is eyed by police dogs, NYPD officers, park rangers, and helicopters that circle overhead.

These security measures have been in place since the attacks of September 11, 2001. “We have everything here, all the newest technology,” says Park Ranger Mark Morse, assuring tourists with what seems like a canned script. “I’m not allowed to say much, but just know that we are very well protected.”

“We have to make sacrifices. If that’s what it takes, then that’s what it takes,” says Fernandez, a bass player in a death rock band. When he is informed, however, that—according to interviews with security personnel—“everything” includes dozens of infrared surveillance cameras, vibration sensors, experimental facial recognition monitors, and other now ubiquitous electronic surveillance devices, his brow furrows. Hearing that these technologies are used to digitally capture, transfer, and store the images of tourists like him, Fernandez bursts out: “That’s bullshit! What do we have, liberty or Big Brother?”

The bits that make up Steve’s confused and angry face may now float in the cyberspace of colossal government databases. In a process known as “datamining,” digital technology can rapidly sort through oceans of public and private databases to see if Steve’s image and behavior fit any of the profiles and threat scenarios constructed by government officials since 9/11.

The technology is powerful. If Steve uses his credit card to rent a truck and buy fertilizer for his uncle’s farm, he fits a scenario. If he checks out a book on the Arabic language at the Jesse M. Smith Memorial Library on Main Street in Harrisville, he fits a scenario. Under Section 215 of the Patriot Act, the FBI can force the library or Amazon.com to hand over Steve’s borrowing or purchasing records on demand—without his consent and without notification. If he decides to take scuba lessons during his weekend off, he’ll fit other scenarios.
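
To see why such innocuous transactions can trip an alarm, it helps to make the mechanism concrete. The sketch below illustrates the kind of rule-based scenario matching such systems are described as using; every field, weight, and threshold here is invented for the example, not drawn from any actual government program:

```python
# Illustrative, hypothetical sketch of rule-based scenario matching.
# A real datamining system would score records against threat profiles;
# the fields, weights, and cutoff below are invented for illustration.

record = {
    "truck_rental": True,         # rented a truck with a credit card
    "fertilizer_purchase": True,  # bought fertilizer for the farm
    "arabic_book_loan": True,     # borrowed a book on the Arabic language
    "scuba_lessons": False,       # hasn't signed up (yet)
}

# Each rule pairs a transaction type with a suspicion weight.
rules = [
    ("truck_rental", 2),
    ("fertilizer_purchase", 3),
    ("arabic_book_loan", 1),
    ("scuba_lessons", 2),
]

score = sum(weight for field, weight in rules if record.get(field))
THRESHOLD = 4  # arbitrary cutoff for this example

if score >= THRESHOLD:
    print(f"Record flagged for review (score {score})")
```

Each transaction is innocent on its own; it is the combination that matches a profile, which is why ordinary errands can add up to a “scenario.”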

In fact, any commercial transaction Steve engages in may be subject to scrutiny. Most general business records no longer have statutory privacy protections—legal regulations detailing how businesses should handle information about employees and clients. As a result, many companies willingly hand data over to the government.

The world’s largest electronic retailer, for one, is glad to help. “There’s no need for a court order,” Joe Sullivan, eBay’s director of compliance and law enforcement relations, told his audience at a February cybercrime conference in Connecticut. “I don’t know another Web site that has a privacy policy as flexible as eBay’s.”

————–

Since the World Trade Center attacks, the Bush administration has invested hundreds of millions of dollars in the hopes of using datamining to identify, track, and capture suspected terrorists. The most controversial of the Bush initiatives are the Total Information Awareness program (now called the Terrorist Information Awareness program), or TIA, and the Computer Assisted Passenger Prescreening System (CAPPS II). Both TIA and CAPPS II are still only prototypes, but privacy advocates are alarmed.

Conceived under the auspices of the Defense Advanced Research Projects Agency (DARPA), TIA is an ambitious and highly secretive project. According to one TIA expert, interviewed on condition of anonymity, “None of us consulted by DARPA really knows if and how TIA is being deployed.” The TIA program proposes to process huge amounts of public and private data about U.S. citizens and foreigners: bank transactions, cell phone and computer communications, casino transactions, gun purchases, car and video rentals, and medical transcripts are only some of the datasets the system is purportedly designed to scan and cross-reference.

More targeted, CAPPS II is triggered by the airline reservation system. In one of Delta Airlines’ lesser-known partnerships, the carrier joined the Bush administration’s practice run of the CAPPS II Traveler Tracker datamining program. As part of the test, the airline ran credit and criminal background checks on passengers at three unidentified airports.

This June marked the 100th anniversary of Orwell’s birth, and critics still love to invoke 1984. But the rapid proliferation of such public-private surveillance partnerships is only one facet of our current era that Orwell’s classic dystopian novel couldn’t predict.

Sir Bernard Crick—preeminent Orwell biographer, political scientist, and author of The American Science of Politics: Its Origins and Conditions—agrees that the novel doesn’t really reflect the United States post-9/11. “Big Brother describes the kind of atmosphere of a Stalinist or Nazi state, which God knows America is many miles from at the moment,” he says. Big Brother and 1984, he adds, “were meant as satire, not prophecy.”

“In an open society in which government abuses its power, I think it’s a different kind of mechanism in which people are almost bought off by prosperity, by wanting to be left alone in their own private life. The state is just there to protect one from outside interference,” Crick says.

While 1984 may no longer fit the bill, a frightening new surveillance order is coalescing, and activists are groping for metaphors to describe it. Unprecedented technological and legal powers are altering privacy beyond anything recognizable in science fiction or political discourse. In Fernandez’s terms, Big Brother and Lady Liberty have digitally morphed into Big Liberty: a hybrid that’s not exactly totalitarian, but is certainly not freedom-loving.

————–

“The current administration is getting deeper and deeper into mining public and private databases, with essentially no privacy protections for American citizens,” says Peter Swire, who served as chief counselor for privacy at the U.S. Office of Management and Budget under the Clinton administration.

Swire witnessed firsthand the profound shift from privacy-centered to surveillance-centered policy. Between 1999 and 2001, his job was to make sure privacy and civil liberties were a priority in surveillance policy discussions at the FBI, CIA, National Security Agency, and other agencies. The position has been vacant since he left.

“We worked hard to make sure that law enforcement and intelligence agencies had the tools they needed to match current challenges,” Swire says. “We also tried to make sure that these tools had safeguards.”

Datamining technologies combine advanced storage devices, statistical analysis tools, artificial intelligence systems, and other state-of-the-art hardware and software. With these tools, a great many Americans are being electronically patted down without their knowledge or consent. Once a person is singled out for scrutiny, he or she stays marked until proven innocent. More than 4.5 percent of the U.S. population is now on one of 12 different “watch lists” developed and maintained by nine federal government agencies, including the Transportation Security Administration, CIA, FBI, and Immigration and Naturalization Service. Disturbingly, under the Patriot Act, such suspects may be tried with secret evidence in a closed Foreign Intelligence Surveillance Act court. “Right now,” Swire says, “the criteria guiding Bush administration policy seem to be, one, maximizing the appearance of security and, two, subordinating questions about secrecy and civil liberties to the overwhelming necessity of the war [on terror].”

This cluster of tools forms a “government Google,” which takes advantage of the fact that many Americans now live and work among devices embedded with microchips. “We used to think of Big Brother as things like government wiretaps, where government is directly receiving the information,” says Swire. “Today, the datafeed doesn’t come from a government telescreen like in 1984. The datafeed now comes from your phone calls, your tax records, your bank transactions, your Social Security number, your grocery purchases, your insurance claims, your credit history, your medical records.”

————–

Despite the avalanche of seemingly empirical evidence such systems produce, critics are troubled by what datamining shares with every other “scientific” endeavor: Murphy’s Law, or the inevitability of bugs. Commercial users of these technologies have already discovered gaps in reliability, but errors, abuse, and spillover effects in government datamining programs can have human consequences far beyond minor glitches.

A 1997 study of datamining’s effectiveness in catching credit card crooks, conducted by the Financial Services Technology Consortium, sorted through 500,000 samples of credit card transactions, 100,000 of which were known to be fraudulent. The datamining technology used in the experiment caught 80,000, or four out of five, of the fraudulent transactions, which experts consider pretty good. But it still missed 20,000 fraudulent transactions. Most troublingly, it also flagged another 80,000 legitimate transactions as “false positives.”

In the world of credit, a false positive causes someone to lose money. But false positives in the post-9/11 world could result in imprisonment, according to Robert Ellis Smith, author of The Law of Privacy Explained. “None of these huge databases have an accuracy rate that exceeds 85 percent,” says Smith. “And when you’re talking about mistakes in big databases, you’re talking about millions of people who are going to be singled out inaccurately. Not to mention that terrorist suspects will pass through some of the inaccuracies as well.”

What if the error rates of the FSTC study are applied to real-world scenarios using real-world numbers of people? Suppose that, instead of credit card fraud, datamining projects like CAPPS II or TIA searched for 500 terrorists in a population of 200 million. Applying the FSTC study’s error rates, the search would catch 400 of the 500 terrorists while flagging some 40 million innocent people as potential suspects.
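
The arithmetic behind that projection turns on base rates: when the number of real targets is tiny, even a modest false-positive rate swamps the true hits. Here is a minimal sketch of the calculation, assuming the FSTC study’s figures carry over to the screening scenario (the 500-terrorist, 200-million-person population is illustrative, not an actual system parameter):

```python
# Base-rate arithmetic using the 1997 FSTC study's error rates.
# The screening scenario below is hypothetical, used only to show
# why a "pretty good" detector still floods analysts with false hits.

detection_rate = 80_000 / 100_000       # 80% of known-bad transactions caught
false_positive_rate = 80_000 / 400_000  # 20% of legitimate transactions flagged

population = 200_000_000
actual_terrorists = 500
innocents = population - actual_terrorists

caught = detection_rate * actual_terrorists        # ~400 real suspects found
falsely_flagged = false_positive_rate * innocents  # ~40 million innocents flagged

print(f"Terrorists caught: {caught:,.0f}")
print(f"Innocents flagged: {falsely_flagged:,.0f}")
print(f"Flagged people who are terrorists: 1 in {falsely_flagged / caught:,.0f}")
```

Run the numbers and roughly one in every hundred thousand flagged people is an actual terrorist; the rest are false positives.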

Barbara Simons, a former computer scientist at IBM Research who has taught policy courses at Stanford, says that datamining programs raise as many questions as they are designed to answer. “Computers aren’t especially good at dealing with human behavior; computers deal with numbers and algorithms,” she says. “Given that the description of TIA implies that it will examine information about a great many people, a large number of false positives is an inevitable result.”

In one odd example, the former child star of The Adventures of Ozzie and Harriet was singled out for questioning while flying. He was one of several persons named “David Nelson” who were repeatedly put on “no fly” lists by Transportation Security Administration officials in California, Oregon, Alaska, and South Dakota. Incidents like these, say privacy advocates, are only the tip of the iceberg.

A less obvious, but no less damaging, consequence of such systems is the way that collecting massive amounts of data also increases the probability of abuse. Datamining programs require armies of what are known as “trusted users”: law enforcement officers, customs officials, and others authorized to handle information culled from the projects. Multiple FBI studies show that trusted users are more likely to commit security breaches than unauthorized users are. Meanwhile, the Bush administration has denied numerous requests from the Electronic Privacy Information Center, a prominent civil liberties group, for information about the use and effectiveness of post-9/11 antiterrorist datamining projects, according to the group’s spokesman.

————–

Advocates are trying to defend what’s left of “fair information practices,” protocols established in response to technological developments and government abuses of privacy in the past. Some of the more common fair information practices adopted around the world since the ’70s include: guarantees that a person can know what information the government has about him or her and how it is used; mechanisms allowing persons to correct or amend a record of identifiable information about themselves; mechanisms limiting the use of and ensuring the reliability of data; the prohibition of secret, personal-data record-keeping systems; and limitations on the amount of time government information is retained. These practices informed the historic privacy laws passed between the early ’70s and the early ’90s, such as the Fair Credit Reporting Act of 1970 and the Privacy Act of 1974.

“We’ve dealt with these problems before,” says former Georgia Congressman and staunch libertarian Bob Barr. “We had them back in the ’50s and ’60s, when government agencies were keeping files on certain people and certain organizations.”

Barr and his fellow conservatives at organizations like the Eagle Forum and the Cato Institute share liberals’ opposition to government invasions of privacy. “The point is not that 1984 is where we are today—we aren’t there,” Barr says, “but rather that there are clear elements of it in the new government powers and capabilities, and that we are headed in a very dangerous direction if we don’t take some steps to bring things more into balance.”

Some say that surveillance technology serves as much to control public opinion as to ferret out terrorists. Edward Tenner—a senior research associate at the National Museum of American History and the author of Why Things Bite Back: Technology and the Revenge of Unintended Consequences—says that the use of such technology has historically been bolstered by what he calls the “illusion of control.”

Tenner spent years studying the surveillance systems developed by the Soviet Union, Nazi Germany, and East Germany’s Stasi. He found Wizard of Oz-like states projecting an illusion of control, which they reinforced through ongoing threats to the populace. While these totalitarian regimes were efficient in the use of technologies like the punch card, archival research after their fall indicates that what passed for security was more like smoke and mirrors. The surveillance operations were surprisingly small, says Tenner, and much information was lost or simply never existed.

“The reputation [for] fear and efficiency had really intimidated the population, and that was all that was needed,” he said. “The concept of datamining is not very popular among actual scientists. They’ll tell you that when somebody’s doing datamining, they don’t really know what they’re looking for; they don’t have a theory. The inherent problem is that it encourages all sorts of spurious theories when you have so much data. These vast accumulations of data are invitations to incredible amounts of mischief.”

With such precedents, what illusions might the superior technology of today spin? History suggests, and Orwell might agree, that citizens should keep an eye on the watchers—and on their own backs.

This article was produced under the 2003 George Washington Williams Fellowship for Journalists of Color, a project sponsored by the Independent Press Association.


Roberto Lovato is a 2003 George Washington Williams fellow and a writer with Pacific News Service.