After telling him to step outside with his hands in the air, the officers lowered their guns and explained. They had received a report — later determined to be unfounded — that a suspect in a domestic disturbance had fled into Gill’s house. The police officers asked the then-26-year-old if one of them could do a sweep of the premises. Afraid and feeling he had no alternative, Gill agreed. One officer remained with him, while the other conducted the search. After that they took down Gill’s identification information. Then they were gone — but not out of his life.
Instead, Gill became the subject of a “suspicious activity report,” or SAR, which police officers fill out when they believe they’re encountering a person or situation that “reasonably” might be connected in some way to terrorism. The one-page report, filed shortly after the May 2012 incident, offered no hint of terrorism. It did, however, suggest that the two officers had focused on Gill’s religion, noting that his “full conversion to Islam as a young [white male] and pious demeanor is [sic] rare.”
The report also indicated that the officer who entered the house had looked at Gill’s computer screen and recalled something “similar to ‘Games that fly under the radar’” on it. According to the SAR, this meant Gill “had potential access to flight simulators via the Internet.” Gill suspects that he was probably looking at a website about video games. The SAR also noted earlier police encounters with Gill, in his mosque and on the street. It recorded his “full beard and traditional garb” and claimed that he avoided “eye contact.”
In short, the Chico Police Department was secretly keeping tabs on Gill as a suspected terrorist. Yet nowhere in the SAR was there a scintilla of evidence that he was engaged in any kind of criminal activity whatsoever. Nevertheless, that report was uploaded to the Central California Intelligence Center, one of a network of Department of Homeland Security-approved domestic intelligence fusion centers. It was then disseminated through the federal government’s domestic intelligence-sharing network as well as uploaded into an FBI database known as e-Guardian, after which the Bureau opened a file on Gill.
We do not know how many government agencies now associate Wiley Gill’s good name with terrorism. We do know that the nation’s domestic-intelligence network is massive, including at least 59 federal agencies, over 300 Defense Department units, and approximately 78 state-based fusion centers, as well as the multitude of law enforcement agencies they serve. We also know that local law enforcement agencies have themselves raised concerns about the system’s lack of privacy protections.
The SAR database is part of an ever-expanding domestic surveillance system established after 9/11 to gather intelligence on potential terrorism threats. At an abstract level, such a system may seem sensible: far better to prevent terrorism before it happens than to investigate and prosecute after a tragedy. Based on that reasoning, the government exhorts Americans to “see something, say something” — the SAR program’s slogan.
Indeed, just this week at a conference in New York City, FBI Director James Comey asked the public to report any suspicions they have to authorities. “When the hair on the back of your neck stands, listen to that instinct and just tell somebody,” said Comey. And seeking to reassure those who do not want to get their fellow Americans in trouble based on instinct alone, the FBI director added, “We investigate in secret for a very good reason, we don’t want to smear innocent people.”
There are any number of problems with this approach, starting with its premise. Predicting who exactly is a future threat before a person has done anything wrong is a perilous undertaking. That’s especially the case if the public is encouraged to report suspicions of neighbors, colleagues, and community members based on a “hair-on-the-back-of-your-neck” threshold. Nor is it any comfort that the FBI promises to protect the innocent by investigating “suspicious” people in secret. The civil liberties and privacy implications are, in fact, truly hair-raising, particularly when the Bureau engages in abusive and discriminatory sting operations and other rights violations.
At a fundamental level, suspicious activity reporting, as well as the digital and physical infrastructure of networked computer servers and fusion centers built around it, depends on what the government defines as suspicious. As it happens, this turns out to include innocuous, First Amendment-protected behavior.
As a start, a little history: the Nationwide Suspicious Activity Reporting Initiative was established in 2008 as a way for federal agencies, law enforcement, and the public to report and share potential terrorism-related information. The federal government then developed a list of 16 behaviors that it considered “reasonably indicative of criminal activity associated with terrorism.” Nine of those 16 behaviors, as the government acknowledges, could have nothing to do with criminal activity and are constitutionally protected, including snapping photographs, taking notes, and “observation through binoculars.”
Under federal regulations, the government can only collect and maintain criminal intelligence information on an individual if there is a “reasonable suspicion” that he or she is “involved in criminal conduct or activity and the information is relevant to that criminal conduct or activity.” The SAR program officially lowered that bar significantly, violating the federal government’s own guidelines for maintaining a “criminal intelligence system.”
There’s good reason for, at a minimum, using a reasonable suspicion standard. Anything less and it’s garbage in, garbage out, meaning counterterrorism “intelligence” databases become anything but intelligent.
In 2013, the ACLU of Northern California obtained nearly 2,000 SARs from two state fusion centers, which collect, store, and analyze such reports, and then share those that their intelligence analysts find worthwhile across what the federal government calls its Information Sharing Environment. This network connects the fusion centers to other federal agencies, and directly to the FBI. Their contents proved revealing.
A number of reports were concerned with “ME” — Middle Eastern — males. One headline proclaimed, “Suspicious ME Males Buy Several Large Pallets of Water at REDACTED.” Another read, “Suspicious Activities by a ME Male in Lodi, CA.” And just what was so suspicious about this male? Read into the document and you discover that a sergeant at the Elk Grove Police Department had long been “concerned about a residence in his neighborhood occupied by a Middle Eastern male adult physician who is very unfriendly.” And it’s not just “Middle Eastern males” who provoke such suspicion. Get involved in a civil rights protest against the police and California law enforcement might report you, too. A June 2012 SAR was headlined “Demonstration Against Law Enforcement Use of Excessive Force” and reported that “a scheduled protest” by demonstrators “concerned about the use of excessive force by law enforcement officers” was about to occur.
What we have here isn’t just a failure to communicate genuine threat information, but the transformation of suspicion into pernicious ideological, racial, and religious profiling, often disproportionately targeting activists and American Muslims. Again, that’s not surprising. Throughout our history, in times of real or perceived fear of amorphously defined threats, government suspicion focuses on those who dissent or look or act differently.
Law enforcement officials, including the Los Angeles Police Department’s top counterterrorism officer, have themselves expressed skepticism about suspicious activity reporting, out of concern that it could overload the system with useless information.
In 2012, George Washington University’s Homeland Security Policy Institute surveyed counterterrorism personnel working in fusion centers and in a report generally accepting of SARs noted that the program had “flooded fusion centers, law enforcement, and other security outfits with white noise,” complicating “the intelligence process” and distorting “resource allocation and deployment decisions.” In other words, it was wasting time and sending personnel off on wild goose chases.
A few months later, a scathing report from the Senate subcommittee on homeland security described similar intelligence problems in state-based fusion centers. It found that Department of Homeland Security (DHS) personnel assigned to the centers “forwarded ‘intelligence’ of uneven quality — oftentimes shoddy, rarely timely, sometimes endangering citizens’ civil liberties and Privacy Act protections… and more often than not unrelated to terrorism.”
Effectiveness doesn’t exactly turn out to be one of the SAR program’s strong suits, though the government has obscured this by citing the growing number of SARs that have triggered FBI investigations. However, according to a report from the Government Accountability Office (GAO), the FBI doesn’t track whether SARs uploaded into the domestic intelligence network actually help thwart terrorism or lead to arrests or convictions.
You are, of course, what you measure — in this case, not much; and yet, despite its dubious record, the SAR program is alive and kicking. According to the GAO, the number of reports in the system exploded by 750%, from 3,256 in January 2010 to 27,855 in October 2012.
And being entered in such a system, as Wiley Gill found out, can prove just the beginning of your problems. Several months after his home was searched, his telephone rang. It was a Chico police officer who told Gill to shut down his Facebook page. Gill refused, responding that there was only one reason he thought the police wanted his account deleted: its references to Islam. The phone call ended ominously with the officer warning Gill that he was on a “watchlist.”
The officer may have been referring to yet another burgeoning secret database that the federal government calls its “consolidated terrorism watchlist.” Inclusion in this database — and on government blacklists that are generated from it — can bring more severe repercussions than unwarranted law enforcement attention. It can devastate lives.
When small business owner Abe Mashal reached the ticket counter at Chicago’s Midway Airport on April 20, 2010, an airline representative informed him that he was on the no-fly list and could not travel to Spokane, Washington, on business. Suddenly, the former Marine found himself surrounded by TSA agents and Chicago police. Later, FBI agents questioned him at the airport and at home about his Muslim faith and his family members.
The humiliation and intimidation didn’t end there. A few months later, FBI agents returned to interview Mashal, focusing again on his faith and family. Only this time they had an offer to make: if he became an FBI informant, his name would be deleted from the no-fly list and he would be paid for his services. Such manipulative quid pro quos have been made to others.
As of August 2013, there were approximately 47,000 people, including 800 U.S. citizens and legal permanent residents like Mashal, on that secretive no-fly list, all branded as “known or suspected terrorists.” All were barred from flying to, from, or over the United States without ever being given a reason why. On 9/11, just 16 names had been on the predecessor “no transport” list. The resulting increase of 293,650% — perhaps more since 2013 — isn’t an accurate gauge of danger, especially given that names are added to the list based on vague, broad, and error-prone standards.
In 2007, the Department of Homeland Security established the Traveler Redress Inquiry Program through which those who believe they are wrongly blacklisted can theoretically attempt to correct the government’s error. But banned flyers quickly find themselves frustrated because they have to guess what evidence they must produce to refute the government’s unrevealed basis for watchlisting them in the first place. Redress then becomes a grim bureaucratic wonderland. In response to queries, blacklisted people receive a letter from the DHS that gives no explanation for why they were not allowed to board a plane, no confirmation of whether they are actually on the no-fly list, and no certainty about whether they can fly in the future. In the end, the only recourse for such victims is to roll the dice by buying a ticket, going to the airport, and hoping for the best.
There is hope, however. In August, four years after the ACLU filed a lawsuit on behalf of 13 people on the no-fly list, a judge ruled that the government’s redress system is unconstitutional. In early October, the government notified Mashal and six others that they were no longer on the list. Six of the ACLU’s clients remain unable to fly, but at least the government now has to disclose just why they have been put in that category, so that they can contest their blacklisting. Soon, others should have the same opportunity.
The No Fly List is only the best known of the government’s web of terrorism watchlists. Many more exist, derived from the same master list. Currently, there are more than one million names in the Terrorist Identities Datamart Environment, a database maintained by the National Counterterrorism Center. This classified source feeds the Terrorist Screening Database (TSDB), operated by the FBI’s Terrorist Screening Center. The TSDB is an unclassified but still secret list known as the “master watchlist,” containing what the government describes as “known or suspected terrorists,” or KSTs.
According to documents recently leaked to the Intercept, as of August 2013 that master watchlist contained 680,000 people, including 5,000 U.S. citizens and legal permanent residents. The government can add people’s names to it according to a shaky “reasonable suspicion” standard. There is, however, growing evidence that what’s “reasonable” to the government may only remotely resemble what that word means in everyday usage. Information from a single source, even an uncorroborated Facebook post, can allow a government agent to watchlist an individual with virtually no outside scrutiny. Perhaps that’s why 40% of those on the master watchlist have “no recognized terrorist group affiliation,” according to the government’s own records.
The Terrorist Screening Database is then used to fill other lists. In the context of aviation, this means the no-fly list, as well as the selectee and expanded selectee lists. Transportation security agents subject travelers on the latter two lists to extra screenings, which can include prolonged and invasive interrogation and searches of laptops, phones, and other electronic devices. Around the border, there’s the State Department’s Consular Lookout and Support System, which it uses to flag people it thinks shouldn’t get a visa, and the TECS System, which Customs and Border Protection uses to determine whether someone can enter the country.
Inside the United States, no watchlist may be as consequential as the one that goes by the moniker of the Known or Appropriately Suspected Terrorist File. The names on this blacklist are shared with more than 17,000 state, local, and tribal police departments nationwide through the FBI’s National Crime Information Center (NCIC). Unlike any other information disseminated through the NCIC, the KST File reflects mere suspicion of involvement with criminal activity, so law enforcement personnel across the country are given access to a database of people who have secretly been labeled terrorism suspects with little or no actual evidence, based on virtually meaningless criteria.
This opens up the possibility of increased surveillance and tense encounters with the police, not to speak of outright harassment, for a large but undivulged number of people. When a police officer stops a person for a driving infraction, for instance, information about his or her KST status will pop up as soon as the driver’s license is checked. According to FBI documents, police officers who get a KST hit are warned to “approach with caution” and “ask probing questions.”
When officers believe they’re about to go face to face with a terrorist, bad things can happen. It’s hardly a stretch of the imagination, particularly after a summer of police shootings of unarmed men, to suspect that an officer approaching a driver whom he believes to be a terrorist will be quicker to go for his gun. Meanwhile, the watchlisted person may never even know why his encounters with police have taken such a peculiar and menacing turn. According to the FBI’s instructions, under no circumstances is a cop to tell a suspect that he or she is on a watchlist.
And once someone is on this watchlist, good luck getting off it. According to the government’s watchlist rulebook, even a jury can’t help you. “An individual who is acquitted or against whom charges are dismissed for a crime related to terrorism,” it reads, “may nevertheless meet the reasonable standard and appropriately remain on, or be nominated to, the Terrorist Watchlist.”
The SARs program and the consolidated terrorism watchlist are just two domestic government databases of suspicion. Many more exist. Taken together, they should be seen as a new form of national ID for a growing group of people accused of no crime, who may have done nothing wrong, but are nevertheless secretly labeled by the government as suspicious or worse. Innocent until proven guilty has been replaced with suspicious until determined otherwise.
Think of it as a new shadow system of national identification for a shadow government that is increasingly averse to operating in the light. It’s an ID its “owners” don’t carry around with them, yet it’s imposed on them whenever they interact with government agents or agencies. It can alter their lives in disastrous ways, often without their knowledge.