Bruce Schneier sums up this whole absurdity nicely:
The problem with all these measures is that they’re only effective if we guess the plot correctly. Defending against a particular tactic or target makes sense if tactics and targets are few. But there are hundreds of tactics and millions of targets, so all these measures will do is force the terrorists to make a minor modification to their plot.
It’s magical thinking: If we defend against what the terrorists did last time, we’ll somehow defend against what they do next time. Of course this doesn’t work. We take away guns and bombs, so the terrorists use box cutters. We take away box cutters and corkscrews, and the terrorists hide explosives in their shoes. We screen shoes, they use liquids. We limit liquids, they sew PETN into their underwear. We implement full-body scanners, and they’re going to do something else. This is a stupid game; we should stop playing it.
I have no real problem with security officials seeing me naked (though I imagine many do have an issue with that), but I am pissed off because these scanners (costing $250,000 each) are a waste of taxpayer money that could be better spent elsewhere. Guessing the terrorists’ tactics is a security strategy that is bound to fail.
What we need is security that’s effective even if we can’t guess the next plot: intelligence, investigation and emergency response. Our foiling of the liquid bombers demonstrates this. They were arrested in London, before they got to the airport. It didn’t matter if they were using liquids — which they chose precisely because we weren’t screening for them — or solids or powders. It didn’t matter if they were targeting airplanes or shopping malls or crowded movie theaters. They were arrested, and the plot was foiled. That’s effective security.
We need security strategies that don’t depend on our ability to guess the next terrorist plot correctly.
Finally, we need to be indomitable. The real security failure on Christmas Day was in our reaction. We’re reacting out of fear, wasting money on the story rather than securing ourselves against the threat. Abdulmutallab succeeded in causing terror even though his attack failed.
If we refuse to be terrorized, if we refuse to implement security theater and remember that we can never completely eliminate the risk of terrorism, then the terrorists fail even if their attacks succeed.
We need real security, not security theater.
UPDATE: And while we are at it, let’s re-emphasize that security agencies don’t need more surveillance powers to gather more data. They need better, more focused data, not more data that only serves to bury the important information.
With the attempted terror attack on Christmas, it appears that this focus on doing more surveillance rather than better security played a major part in “failing to connect the dots” and allowed the plot to get as far as it did. The EFF points us to a report noting that the reason why Abdulmutallab was allowed on an airplane into the US in the first place — despite widespread warnings — was that there was a backlog in processing all the data:
Abdulmutallab never made it onto a no-fly list because there are simply too many reports of suspicious individuals being submitted on a daily basis, which causes the system to be “clogged” — overloaded — with information having nothing to do with Terrorism. As a result, actually relevant information ends up obscured or ignored.
At what point do people realize that collecting more data doesn’t make us more secure, and can actually do the opposite? As is pointed out at the Salon link above, the idea that you even can sacrifice liberty for security is wrong. The famous saying may say that you “deserve neither,” but increasingly people are realizing that sacrificing liberty doesn’t necessarily get you more security anyway.
UPDATE 2: More importantly, as pointed out by the Effect Measure blog, even if we could deploy an unrealistically accurate scanner, it would set off roughly 30,000 false alarms for every bomber it catches.
let’s assume that by investing a gazillion dollars we could deploy some sophisticated technology at every airport within our borders and coming to and from the US that was so accurate it only had a false positive once in 100,000 passengers, i.e., it was 100% sensitive and 99.999% specific. I doubt we can make a machine that accurate, but let’s just suppose we could.
How many false positives would that produce? According to the Department of Transportation, during the last year there were about 710 million enplanements (US carriers, October 2008 – September 2009; excludes all-cargo services, includes domestic and international). That would produce 7100 false alarms, about 20 a day. How many passengers carrying explosives would the technology pick up? Well, we’ve had exactly 2 since 2001 (Richard Reid the shoe bomber and the current underpants bomber), or .25/710,000,000 enplanements (it’s actually less because enplanements have decreased substantially since 2001). So the probability of an alarm being correct is about 1 in 30,000 or .000033. For that yield there is the cost of research and development of the technology, acquiring and installing it, operating and maintaining it and the extra time of all the passengers. There will also be an effect on air travel generally, stressing an already economically desperate industry. To the extent that this increases miles traveled by road, we have to add that cost and the cost in lives of motor vehicle accidents into the mix.
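The quoted back-of-the-envelope arithmetic is a textbook base-rate calculation, and it checks out. Here is a short sketch in Python using the figures from the quote (the 710 million enplanements, the 1-in-100,000 false positive rate, and the 2 bombers in the 8 years since 2001):

```python
# Base-rate check of the Effect Measure scanner calculation.
# Assumes a hypothetical scanner: 100% sensitive, 99.999% specific.

enplanements = 710_000_000         # ~annual US enplanements (Oct 2008 - Sep 2009)
false_positive_rate = 1 / 100_000  # 99.999% specificity
bombers_per_year = 2 / 8           # 2 bombers in the 8 years since 2001

false_alarms = enplanements * false_positive_rate
print(f"False alarms per year: {false_alarms:,.0f}")      # 7,100
print(f"False alarms per day:  {false_alarms / 365:.0f}") # ~19-20

# Positive predictive value: probability that a given alarm is a real bomber.
ppv = bombers_per_year / (bombers_per_year + false_alarms)
print(f"P(alarm is real) = {ppv:.6f}")  # ~0.000035, i.e. roughly 1 in 30,000
```

Even with a wildly optimistic specificity, the tiny base rate of actual bombers means essentially every alarm is a false one.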
Mark Chu-Carroll sums up why worrying about airplane bombers, or terrorism in general, is absurd:
And on the topic of airport security: put the numbers into context, and you’ll realize that all of the panic over terrorism on airplanes is really amazingly overblown. The chances of being hurt by someone who got past airport security, even without things like the full-body scanners being deployed after this latest panic, are smaller than dying in your dentist’s office from an anaesthesia error. And how often does anyone worry about that?