Tangled Webs

    Will Technology Keep Us Safe?
Issue 7.5
Nov 12, 2002



 


I travel a lot; too much really. I see more movies on airplanes than I do on video, and some months I spend more nights in hotels than in my own bed. Like most travelers, I've been understanding of the delays caused by new security precautions in the US. I think it's sad that America has reached a point where we must examine everyone's shoes before allowing them to board a plane, but if it helps keep us safe I'm all for it.

I am much less reassured by some of the more technical security measures.



Face Recognition


Companies such as Visionics and Viisage develop face-recognition software. They claim that when hooked up to airport security monitors, their products can pick terrorists out of the crowd with over 99% accuracy. The promise is appealing, but there are three main problems with this technology.

First, it just doesn't work. Actual tests using airport employees demonstrated accuracy of only about 50%, and that was using high-quality, recent photographs. Second, a major airport handles hundreds of thousands of people every day. Even if these systems someday achieve 99% accuracy, they would be falsely identifying a "terrorist" every few minutes. Security personnel would either learn to ignore these systems or would wind up running around like the Keystone Cops.
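To make that arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The daily passenger count and the reading of "99% accuracy" as a 1% false-positive rate are illustrative assumptions of mine, not vendor or airport figures.

    # Base-rate arithmetic for face-recognition false alarms.
    # Assumed figures: 300,000 passengers a day at a major hub, and a
    # 1% false-positive rate (an optimistic reading of "99% accuracy").
    passengers_per_day = 300_000
    false_positive_rate = 0.01

    false_alarms_per_day = passengers_per_day * false_positive_rate
    minutes_between_alarms = (24 * 60) / false_alarms_per_day

    print(f"False alarms per day: {false_alarms_per_day:,.0f}")     # ~3,000
    print(f"Minutes between alarms: {minutes_between_alarms:.1f}")  # ~0.5

Even under these generous assumptions, the alarms come far faster than anyone could hope to investigate them.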

The third, and perhaps most subtle, problem with this approach is that we really don't know who the terrorists are. It's not that the FBI can't locate them; they just don't know who they are looking for. In fact, some of the 9/11 hijackers bought tickets and boarded the planes using their real names.

Despite the problems, these systems are being rolled out in US airports. Beyond the false sense of security they create, they are simply a huge but relatively harmless waste of money.



Data Mining


People have always been captivated by the notion that computers can be smarter than we are. Stories about stock-picking neural networks and medical expert systems that diagnose diseases better than physicians are perennial news favorites. Such technologies inevitably prove useful under limited and controlled conditions, but wind up having little or no widespread utility.

The Transportation Security Administration (TSA) will take this notion to a ridiculous extreme early next year when it rolls out the Computer Assisted Passenger Prescreening System II (CAPPSII). CAPPSII is a massive expansion of the existing passenger screening network. The TSA claims the system will screen passengers automatically by accessing dozens of public and private databases and using artificial intelligence to identify the terrorists.

It won't work, of course. Computers can learn to diagnose diabetes because there are clear symptoms of the disease that can be programmed in. Stock-picking computers actually run well-defined trading models developed and refined by human beings. With terrorists, however, the best minds in the field can't begin to identify distinguishing traits. It is ridiculous to think that a computer system examining a random collection of data will be able to spot them.

CAPPS has never caught a terrorist and probably never will, but once deployed, its mission will quickly be expanded to include all sorts of "worthy causes." For example, CAPPSII will certainly be used to stop people who have an outstanding warrant -- and what's wrong with that?

CAPPSII could also be used to find people behind on their alimony payments. Come to think of it, since the state DMV databases are connected, airports could become a convenient way to collect unpaid tickets. And then there's the IRS.

CAPPSII is being deployed, it has tremendous momentum behind it, and there is no public accountability. Unless something changes, airports will be no safer, but they will become a central clearinghouse for every state and federal agency that thinks you owe it money.



The List


The national No Fly List is a fairly low-tech security measure, but it best illustrates the failings of most technological approaches to finding terrorists. A dozen or so agencies, including the CIA, FBI, INS, and State Department, add names to The List, but no single agency is responsible for administering it. If your name somehow winds up on The List, there is no official way to have it removed or even find out how it got there.

The List has not yet snared any terrorists, but it has resulted in the detention of quite a few peace activists and members of left-leaning organizations, including one 74-year-old nun. Again, no one can say exactly why these people are being detained or delayed, and not a few angry voices are citing political or even malicious intent.

I think the answer is more innocent, but no less dangerous. Again, the problem is that there is no clear profile of a terrorist. About all the experts can seem to agree on is that these people certainly don't think like we do. This may sound simplistic, but the people who end up on The List are not those who have a history of violence or illegal activity, but those whose thinking is most unlike that of federal law-enforcement officers.

I think that law-enforcement personnel honestly believe that the people they place on The List pose a threat, but the only effect The List has had so far is to deny people the right to travel because of their political views.

This has to change.

The cornerstone of any free society is the freedom to disagree with authority. Unfortunately, all of human history shows that when people are scared, those who are different will fall under suspicion. Technology itself provides no solutions. Technology will not make us safe. Technology only amplifies and dehumanizes human bias.

 



© Copyright 2002, Tim Romero, t3@t3.org
This article first appeared in the November 12, 2002 edition of The Japan Times.
Tangled Webs may be distributed freely provided this copyright notice is included.
The Tangled Webs Archive is located at http://www.t3.org/tangledwebs/