The security mirage
So, security is two different things: it's a feeling, and it's a reality. And they're different. You could feel secure even if you're not. And you can be secure even if you don't feel it. Really, we have two separate concepts mapped onto the same word. And what I want to do in this talk is to split them apart -- figuring out when they diverge and how they converge. And language is actually a problem here. There aren't a lot of good words for the concepts we're going to talk about.
So if you look at security in economic terms, it's a trade-off. Every time you get some security, you're always trading off something. Whether this is a personal decision -- whether you're going to install a burglar alarm in your home -- or a national decision -- whether you're going to invade a foreign country -- you're going to trade off something: money or time, convenience, capabilities, maybe fundamental liberties. And the question to ask when you look at a security anything is not whether this makes us safer, but whether it's worth the trade-off. You've heard in the past several years that the world is safer because Saddam Hussein is not in power. That might be true, but it's not terribly relevant. The question is: Was it worth it? And you can make your own decision, and then you'll decide whether the invasion was worth it. That's how you think about security: in terms of the trade-off.
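[Editor's note: a minimal sketch of the "is it worth it?" question as an expected-cost comparison. This is illustrative only and not from the talk; every number below -- the burglary probability, losses, alarm cost, and risk reduction -- is invented.]

```python
# Illustrative sketch: the "worth the trade-off?" question as a toy
# expected-cost comparison. All numbers are invented for illustration.

def expected_loss(p_incident: float, loss: float) -> float:
    """Expected annual loss from an incident with probability p_incident."""
    return p_incident * loss

# Hypothetical burglar-alarm numbers:
p_burglary = 0.02      # assumed annual chance of a break-in
avg_loss = 5_000.0     # assumed cost of a burglary, in dollars
alarm_cost = 400.0     # assumed annual cost of the alarm
risk_reduction = 0.5   # assumed fraction of burglaries the alarm prevents

without_alarm = expected_loss(p_burglary, avg_loss)
with_alarm = alarm_cost + expected_loss(p_burglary * (1 - risk_reduction), avg_loss)

# The measure is "worth it" only if it lowers the total expected cost.
print(f"expected cost without alarm: ${without_alarm:.2f}/year")
print(f"expected cost with alarm:    ${with_alarm:.2f}/year")
print("worth the trade-off?", with_alarm < without_alarm)
```

With these particular made-up numbers the alarm is not worth it, which is the point of the next paragraph: the answer depends on your own probabilities and costs, so there's often no single right answer.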
Now, there's often no right or wrong here. Some of us have a burglar alarm system at home and some of us don't. And it'll depend on where we live, whether we live alone or have a family, how much cool stuff we have, how much we're willing to accept the risk of theft. In politics also, there are different opinions. A lot of times, these trade-offs are about more than just security, and I think that's really important. Now, people have a natural intuition about these trade-offs. We make them every day.
Last night in my hotel room, when I decided to double-lock the door; or you in your car when you drove here; when we go eat lunch and decide the food's not poison and we'll eat it -- we make these trade-offs again and again, multiple times a day. We often won't even notice them. They're just part of being alive; we all do it. Every species does it. Imagine a rabbit in a field, eating grass. And the rabbit sees a fox. That rabbit will make a security trade-off: "Should I stay, or should I flee?" And if you think about it, the rabbits that are good at making that trade-off will tend to live and reproduce, and the rabbits that are bad at it will get eaten or starve. So you'd think that we, as a successful species on the planet -- you, me, everybody -- would be really good at making these trade-offs. Yet it seems, again and again, that we're hopelessly bad at it. And I think that's a fundamentally interesting question.
I'll give you the short answer. The answer is, we respond to the feeling of security and not the reality. Now, most of the time, that works. Most of the time, feeling and reality are the same. Certainly that's true for most of human prehistory. We've developed this ability because it makes evolutionary sense. One way to think of it is that we're highly optimized for risk decisions that are endemic to living in small family groups in the East African Highlands in 100,000 BC. 2010 New York, not so much. Now, there are several biases in risk perception, and a lot of good experiments on them. And you can see certain biases that come up again and again.
I'll give you four. First, we tend to exaggerate spectacular and rare risks and downplay common risks -- so, flying versus driving. Second, the unknown is perceived to be riskier than the familiar. One example would be: people fear kidnapping by strangers, when the data shows that kidnapping by relatives is much more common. This is for children. Third, personified risks are perceived to be greater than anonymous risks. So, Bin Laden is scarier because he has a name. And the fourth is: people underestimate risks in situations they do control and overestimate them in situations they don't control. So once you take up skydiving or smoking, you downplay the risks. If a risk is thrust upon you -- terrorism is a good example -- you'll overplay it, because you don't feel like it's in your control. There are a bunch of other cognitive biases that affect our risk decisions.
There's the availability heuristic, which basically means we estimate the probability of something by how easy it is to bring instances of it to mind. So you can imagine how that works. If you hear a lot about tiger attacks, there must be a lot of tigers around. If you don't hear about lion attacks, there aren't a lot of lions around. This works until you invent newspapers, because what newspapers do is repeat rare risks again and again. I tell people: if it's in the news, don't worry about it, because by definition, news is something that almost never happens. (Laughter) When something is so common, it's no longer news. Car crashes, domestic violence -- those are the risks you worry about.
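[Editor's note: a toy simulation of that dynamic, not from the talk; the event rates and coverage rates below are invented. It shows how a reader who judges frequency by remembered news mentions will rate a rare, heavily covered risk as comparable to a common, rarely covered one.]

```python
# Toy model of the availability heuristic under selective news coverage.
# All rates are invented: a common risk is rarely reported, a rare risk
# is almost always reported, so remembered mentions misrank the risks.
import random

random.seed(0)

DAYS = 365
p_event = {"car crash": 0.9, "shark attack": 0.003}       # daily chance it happens
p_coverage = {"car crash": 0.01, "shark attack": 1.0}      # chance it makes the news

actual = {event: 0 for event in p_event}
mentions = {event: 0 for event in p_event}

for _ in range(DAYS):
    for event, p in p_event.items():
        if random.random() < p:          # the event actually occurs
            actual[event] += 1
            if random.random() < p_coverage[event]:  # and maybe gets reported
                mentions[event] += 1

for event in p_event:
    print(f"{event}: happened {actual[event]}x, "
          f"remembered from the news {mentions[event]}x")
# The reader's sense of relative frequency tracks mentions, not occurrences.
```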
We're also a species of storytellers. We respond to stories more than data. And there's some basic innumeracy going on. I mean, the joke "One, two, three, many" is kind of right. We're really good at small numbers. One mango, two mangoes, three mangoes; 10,000 mangoes, 100,000 mangoes -- it's still more mangoes than you can eat before they rot. So one half, one quarter, one fifth -- we're good at that. One in a million, one in a billion -- they're both almost never. So we have trouble with the risks that aren't very common.
And what these cognitive biases do is act as filters between us and reality. And the result is that feeling and reality get out of whack -- they get different. Now, you either have a feeling where you feel more secure than you are -- that's a false sense of security -- or the other way, and that's a false sense of insecurity. I write a lot about "security theater," which is products that make people feel secure but don't actually do anything. There's no real word for stuff that makes us secure but doesn't make us feel secure. Maybe it's what the CIA is supposed to do for us.
So back to economics. If economics, if the market, drives security, and if people make trade-offs based on the feeling of security, then the smart thing for companies to do, given the economic incentives, is to make people feel secure. And there are two ways to do this. One, you can make people actually secure and hope they notice. Or two, you can make people just feel secure and hope they don't notice. Right?
So what makes people notice? Well, a couple of things: understanding of the security, of the risks, the threats, the countermeasures, how they work. If you know stuff, you're more likely to have your feelings match reality. Enough real-world examples help. We all know the crime rate in our neighborhood, because we live there, and we get a feeling about it that basically matches reality. Security theater is exposed when it's obvious that it's not working properly.
OK. So what makes people not notice? Well, a poor understanding. If you don't understand the risks, you don't understand the costs, you're likely to get the trade-off wrong, and your feeling doesn't match reality. Not enough examples. There's an inherent problem with low-probability events. If, for example, terrorism almost never happens, it's really hard to judge the efficacy of counter-terrorist measures. This is why you keep sacrificing virgins, and why your unicorn defenses are working just great. There aren't enough examples of failures. Also, feelings that cloud the issues -- the cognitive biases I talked about earlier: fears, folk beliefs -- basically, an inadequate model of reality.
So let me complicate things. I have feeling and reality. I want to add a third element: I want to add "model." Feeling and model are in our head; reality is the outside world. It doesn't change, it's real. Feeling is based on our intuition; model is based on reason. That's basically the difference. In a primitive and simple world, there's really no reason for a model, because feeling is close to reality. You don't need a model. But in a modern and complex world, you need models to understand a lot of the risks we face. There's no feeling about germs. You need a model to understand them. This model is an intelligent representation of reality. It's, of course, limited by science, by technology. We couldn't have a germ theory of disease before we invented the microscope to see them. It's limited by our cognitive biases. But it has the ability to override our feelings.
Where do we get these models? We get them from others. We get them from religion, from culture, teachers, elders. A couple of years ago, I was in South Africa on safari. The tracker I was with grew up in Kruger National Park. He had some very complex models of how to survive. And it depended on whether you were attacked by a lion, leopard, rhino, or elephant -- and when you had to run away, when you couldn't run away, when you had to climb a tree, when you could never climb a tree. I would have died in a day. But he was born there, and he understood how to survive. I was born in New York City. I could have taken him to New York, and he would have died in a day. (Laughter) Because we had different models based on our different experiences.
Models can come from the media, from our elected officials. Think of models of terrorism, child kidnapping, airline safety, car safety. Models can come from industry. The two I'm following are surveillance cameras and ID cards; quite a lot of our computer security models come from there. A lot of models come from science. Health models are a great example. Think of cancer, bird flu, swine flu, SARS. All of our feelings of security about those diseases come from models given to us, really, by science filtered through the media.
So models can change. Models are not static. As we become more comfortable in our environments, our model can move closer to our feelings. An example might be: if you go back 100 years, when electricity was first becoming common, there were a lot of fears about it. There were people who were afraid to push doorbells, because there was electricity in there, and that was dangerous. For us, we're very facile around electricity. We change light bulbs without even thinking about it. Our model of security around electricity is something we were born into. It hasn't changed as we were growing up. And we're good at it. Or think of the risks on the Internet across generations -- how your parents approach Internet security, versus how you do, versus how our kids will. Models eventually fade into the background. "Intuitive" is just another word for familiar. So as your model gets close to reality and converges with feeling, you often don't even know it's there.
A nice example of this came from last year and swine flu. When swine flu first appeared, the initial news caused a lot of overreaction. Now, it had a name, which made it scarier than the regular flu, even though the regular flu was more deadly. And people thought doctors should be able to deal with it, so there was that feeling of lack of control. And those two things made the risk seem greater than it was. As the novelty wore off and the months went by, there was some amount of tolerance; people got used to it. There was no new data, but there was less fear. By autumn, people thought the doctors should have solved this already. And there's kind of a bifurcation: people had to choose between fear and acceptance -- actually, fear and indifference -- and they kind of chose suspicion. And when the vaccine appeared last winter, there were a lot of people -- a surprising number -- who refused to get it. And it's a nice example of how people's feelings of security change, how their model changes, sort of wildly, with no new information, with no new input. This kind of thing happens a lot.
I'm going to give one more complication. We have feeling, model, reality. I have a very relativistic view of security: I think it depends on the observer. And most security decisions have a variety of people involved. And stakeholders with specific trade-offs will try to influence the decision. I call that their agenda. And you see agendas -- this is marketing, this is politics -- trying to convince you to have one model versus another, trying to convince you to ignore a model and trust your feelings, marginalizing people with models you don't like. This is not uncommon.
An example, a great example, is the risk of smoking. In the history of the past 50 years, the smoking risk shows how a model changes, and it also shows how an industry fights against a model it doesn't like. Compare that to the secondhand smoke debate -- probably about 20 years behind. Think about seat belts. When I was a kid, no one wore a seat belt. Nowadays, no kid will let you drive if you're not wearing a seat belt. Compare that to the airbag debate, probably about 30 years behind. All examples of models changing.
What we learn is that changing models is hard. Models are hard to dislodge. If they equal your feelings, you don't even know you have a model. And there's another cognitive bias I'll call confirmation bias, where we tend to accept data that confirms our beliefs and reject data that contradicts our beliefs. So evidence against our model, we're likely to ignore, even if it's compelling. It has to get very compelling before we'll pay attention. New models that extend over long periods of time are hard. Global warming is a great example. We're terrible at models that span 80 years. We can do "to the next harvest." We can often do "until our kids grow up." But "80 years," we're just not good at. So it's a very hard model to accept. We can have both models in our head simultaneously -- that kind of problem where we're holding both beliefs together, the cognitive dissonance. Eventually, the new model will replace the old model.
Strong feelings can create a model. September 11 created a security model in a lot of people's heads. Also, personal experiences with crime can do it, a personal health scare, a health scare in the news. You'll see these called "flashbulb events" by psychiatrists. They can create a model instantaneously, because they're very emotive. So in the technological world, we don't have experience to judge models. And we rely on others. We rely on proxies. And this works, as long as it's the correct others. We rely on government agencies to tell us what pharmaceuticals are safe. I flew here yesterday. I didn't check the airplane. I relied on some other group to determine whether my plane was safe to fly. We're here, and none of us fear the roof is going to collapse on us, not because we checked, but because we're pretty sure the building codes here are good. It's a model we just accept pretty much by faith. And that's OK.
Now, what we want is for people to get familiar enough with better models -- have it reflected in their feelings -- to allow them to make security trade-offs. When these go out of whack, you have two options. One, you can fix people's feelings -- directly appeal to feelings. It's manipulation, but it can work. The second, more honest way is to actually fix the model. Change happens slowly. The smoking debate took 40 years -- and that was an easy one. Some of this stuff is hard. Really, though, information seems like our best hope. And I lied. Remember I said feeling, model, reality, and that reality doesn't change? It actually does. We live in a technological world; reality changes all the time. So we might have, for the first time in our species: feeling chases model, model chases reality, reality's moving -- they might never catch up. We don't know.
But in the long term, both feeling and reality are important. And I want to close with two quick stories to illustrate this. 1982 -- I don't know if people will remember this -- there was a short epidemic of Tylenol poisonings in the United States. It's a horrific story. Someone took a bottle of Tylenol, put poison in it, closed it up, put it back on the shelf, and someone else bought it and died. This terrified people. There were a couple of copycat attacks. There wasn't any real risk, but people were scared. And this is how the tamper-proof drug industry was invented. Those tamper-proof caps? That came from this. It's complete security theater. As a homework assignment, think of 10 ways to get around it. I'll give you one: a syringe. But it made people feel better. It made their feeling of security more closely match the reality.
Last story: a few years ago, a friend of mine gave birth. I visited her in the hospital. It turns out, when a baby's born now, they put an RFID bracelet on the baby and a corresponding one on the mother, so if anyone other than the mother takes the baby out of the maternity ward, an alarm goes off. I said, "Well, that's kind of neat. I wonder how rampant baby snatching is out of hospitals." I went home and looked it up. It basically never happens. (Laughter) But if you think about it, if you are a hospital, and you need to take a baby away from its mother, out of the room to run some tests, you had better have some good security theater, or she's going to rip your arm off. (Laughter)
So it's important for us -- those of us who design security, who look at security policy, or even look at public policy in ways that affect security -- to remember: it's not just reality; it's feeling and reality. What's important is that they be about the same: if our feelings match reality, we make better security trade-offs. Thank you. (Applause)
Title: The security mirage
Speaker: Bruce Schneier
Description: The feeling of security and the reality of security don't always match, says computer-security expert Bruce Schneier. At TEDxPSU, he explains why we spend billions addressing news-story risks, like the "security theater" now playing at your local airport, while neglecting more probable risks -- and how we can break this pattern.
Video Language: English
Duration: 20:44