
The security mirage

  • 0:00 - 0:02
    So security is two different things:
  • 0:02 - 0:04
    it's a feeling, and it's a reality.
  • 0:04 - 0:06
    And they're different.
  • 0:06 - 0:08
    You can feel secure
  • 0:08 - 0:10
    even if you're not.
  • 0:10 - 0:12
    And you can be secure
  • 0:12 - 0:14
    even if you don't feel it.
  • 0:14 - 0:16
    Really, we have two separate concepts
  • 0:16 - 0:18
    mapped onto the same word.
  • 0:18 - 0:20
    And what I want to do in this talk
  • 0:20 - 0:22
    is to split them apart --
  • 0:22 - 0:24
    figuring out when they diverge
  • 0:24 - 0:26
    and how they converge.
  • 0:26 - 0:28
    And language is actually a problem here.
  • 0:28 - 0:30
    There aren't a lot of good words
  • 0:30 - 0:33
    for the concepts we're going to talk about.
  • 0:33 - 0:35
    So if you look at security
  • 0:35 - 0:37
    in economic terms,
  • 0:37 - 0:39
    it's a trade-off.
  • 0:39 - 0:41
    Every time you get some security,
  • 0:41 - 0:43
    you're always trading off something.
  • 0:43 - 0:45
    Whether this is a personal decision --
  • 0:45 - 0:47
    whether you're going to install a burglar alarm in your home --
  • 0:47 - 0:50
    or a national decision -- whether you're going to invade some foreign country --
  • 0:50 - 0:52
    you're going to trade off something,
  • 0:52 - 0:55
    either money or time, convenience, capabilities,
  • 0:55 - 0:58
    maybe fundamental liberties.
  • 0:58 - 1:01
    And the question to ask when you look at a security anything
  • 1:01 - 1:04
    is not whether this makes us safer,
  • 1:04 - 1:07
    but whether it's worth the trade-off.
  • 1:07 - 1:09
    You've heard in the past several years,
  • 1:09 - 1:11
    the world is safer because Saddam Hussein is not in power.
  • 1:11 - 1:14
    That might be true, but it's not terribly relevant.
  • 1:14 - 1:17
    The question is, was it worth it?
  • 1:17 - 1:20
    And you can make your own decision,
  • 1:20 - 1:22
    and then you'll decide whether the invasion was worth it.
  • 1:22 - 1:24
    That's how you think about security --
  • 1:24 - 1:26
    in terms of the trade-off.
  • 1:26 - 1:29
    Now there's often no right or wrong here.
  • 1:29 - 1:31
    Some of us have a burglar alarm system at home,
  • 1:31 - 1:33
    and some of us don't.
  • 1:33 - 1:35
    And it'll depend on where we live,
  • 1:35 - 1:37
    whether we live alone or have a family,
  • 1:37 - 1:39
    how much cool stuff we have,
  • 1:39 - 1:41
    how much we're willing to accept
  • 1:41 - 1:43
    the risk of theft.
  • 1:43 - 1:45
    In politics also,
  • 1:45 - 1:47
    there are different opinions.
  • 1:47 - 1:49
    A lot of times, these trade-offs
  • 1:49 - 1:51
    are about more than just security,
  • 1:51 - 1:53
    and I think that's really important.
  • 1:53 - 1:55
    Now people have a natural intuition
  • 1:55 - 1:57
    about these trade-offs.
  • 1:57 - 1:59
    We make them every day --
  • 1:59 - 2:01
    last night in my hotel room,
  • 2:01 - 2:03
    when I decided to double-lock the door,
  • 2:03 - 2:05
    or you in your car when you drove here,
  • 2:05 - 2:07
    when we go eat lunch
  • 2:07 - 2:10
    and decide the food's not poisoned and we'll eat it.
  • 2:10 - 2:12
    We make these trade-offs again and again,
  • 2:12 - 2:14
    multiple times a day.
  • 2:14 - 2:16
    We often won't even notice them.
  • 2:16 - 2:18
    They're just part of being alive; we all do it.
  • 2:18 - 2:21
    Every species does it.
  • 2:21 - 2:23
    Imagine a rabbit in a field, eating grass,
  • 2:23 - 2:26
    and the rabbit's going to see a fox.
  • 2:26 - 2:28
    That rabbit will make a security trade-off:
  • 2:28 - 2:30
    "Should I stay, or should I flee?"
  • 2:30 - 2:32
    And if you think about it,
  • 2:32 - 2:35
    the rabbits that are good at making that trade-off
  • 2:35 - 2:37
    will tend to live and reproduce,
  • 2:37 - 2:39
    and the rabbits that are bad at it
  • 2:39 - 2:41
    will get eaten or starve.
  • 2:41 - 2:43
    So you'd think
  • 2:43 - 2:46
    that we, as a successful species on the planet --
  • 2:46 - 2:48
    you, me, everybody --
  • 2:48 - 2:51
    would be really good at making these trade-offs.
  • 2:51 - 2:53
    Yet it seems, again and again,
  • 2:53 - 2:56
    that we're hopelessly bad at it.
  • 2:56 - 2:59
    And I think that's a fundamentally interesting question.
  • 2:59 - 3:01
    I'll give you the short answer.
  • 3:01 - 3:03
    The answer is, we respond to the feeling of security
  • 3:03 - 3:06
    and not the reality.
  • 3:06 - 3:09
    Now most of the time, that works.
  • 3:10 - 3:12
    Most of the time,
  • 3:12 - 3:15
    feeling and reality are the same.
  • 3:15 - 3:17
    Certainly that's true
  • 3:17 - 3:20
    for most of human prehistory.
  • 3:20 - 3:23
    We've developed this ability
  • 3:23 - 3:25
    because it makes evolutionary sense.
  • 3:25 - 3:27
    One way to think of it
  • 3:27 - 3:29
    is that we're highly optimized
  • 3:29 - 3:31
    for risk decisions
  • 3:31 - 3:34
    that are endemic to living in small family groups
  • 3:34 - 3:37
    in the East African highlands in 100,000 B.C.
  • 3:37 - 3:40
    2010 New York, not so much.
  • 3:41 - 3:44
    Now there are several biases in risk perception.
  • 3:44 - 3:46
    There are a lot of good experiments on this.
  • 3:46 - 3:49
    And you can see certain biases that come up again and again.
  • 3:49 - 3:51
    So I'll give you four.
  • 3:51 - 3:54
    We tend to exaggerate spectacular and rare risks
  • 3:54 - 3:56
    and downplay common risks --
  • 3:56 - 3:59
    so flying versus driving.
  • 3:59 - 4:01
    The unknown is perceived
  • 4:01 - 4:04
    to be riskier than the familiar.
  • 4:05 - 4:07
    One example would be,
  • 4:07 - 4:10
    people fear kidnapping by strangers
  • 4:10 - 4:13
    when the data shows that kidnapping by relatives is much more common.
  • 4:13 - 4:15
    This is for children.
  • 4:15 - 4:18
    Third, personified risks
  • 4:18 - 4:21
    are perceived to be greater than anonymous risks --
  • 4:21 - 4:24
    so Bin Laden is scarier because he has a name.
  • 4:24 - 4:26
    And the fourth
  • 4:26 - 4:28
    is people underestimate risks
  • 4:28 - 4:30
    in situations they do control
  • 4:30 - 4:34
    and overestimate them in situations they don't control.
  • 4:34 - 4:37
    So once you take up skydiving or smoking,
  • 4:37 - 4:39
    you downplay the risks.
  • 4:39 - 4:42
    If a risk is thrust upon you -- terrorism was a good example --
  • 4:42 - 4:45
    you'll overplay it because you don't feel like it's in your control.
  • 4:47 - 4:50
    There are a bunch of other cognitive biases like these
  • 4:50 - 4:53
    that affect our risk decisions.
  • 4:53 - 4:55
    There's the availability heuristic,
  • 4:55 - 4:57
    which basically means
  • 4:57 - 5:00
    we estimate the probability of something
  • 5:00 - 5:04
    by how easy it is to bring instances of it to mind.
  • 5:04 - 5:06
    So you can imagine how that works.
  • 5:06 - 5:09
    If you hear a lot about tiger attacks, there must be a lot of tigers around.
  • 5:09 - 5:12
    If you don't hear about lion attacks, there aren't a lot of lions around.
  • 5:12 - 5:15
    This works until you invent newspapers.
  • 5:15 - 5:17
    Because what newspapers do
  • 5:17 - 5:19
    is they repeat again and again
  • 5:19 - 5:21
    rare risks.
  • 5:21 - 5:23
    I tell people, if it's in the news, don't worry about it.
  • 5:23 - 5:25
    Because by definition,
  • 5:25 - 5:28
    news is something that almost never happens.
  • 5:28 - 5:30
    (Laughter)
  • 5:30 - 5:33
    When something is so common, it's no longer news --
  • 5:33 - 5:35
    car crashes, domestic violence --
  • 5:35 - 5:38
    those are the risks you worry about.
  • 5:38 - 5:40
    We're also a species of storytellers.
  • 5:40 - 5:43
    We respond to stories more than data.
  • 5:43 - 5:45
    And there's some basic innumeracy going on.
  • 5:45 - 5:48
    I mean, the joke "One, Two, Three, Many" is kind of right.
  • 5:48 - 5:51
    We're really good at small numbers.
  • 5:51 - 5:53
    One mango, two mangoes, three mangoes,
  • 5:53 - 5:55
    10,000 mangoes, 100,000 mangoes --
  • 5:55 - 5:58
    it's still more mangoes than you can eat before they rot.
  • 5:58 - 6:01
    So one half, one quarter, one fifth -- we're good at that.
  • 6:01 - 6:03
    One in a million, one in a billion --
  • 6:03 - 6:06
    they're both almost never.
  • 6:06 - 6:08
    So we have trouble with the risks
  • 6:08 - 6:10
    that aren't very common.
  • 6:10 - 6:12
    And what these cognitive biases do
  • 6:12 - 6:15
    is they act as filters between us and reality.
  • 6:15 - 6:17
    And the result
  • 6:17 - 6:19
    is that feeling and reality get out of whack,
  • 6:19 - 6:22
    they get different.
  • 6:22 - 6:25
    Now you either have a feeling -- you feel more secure than you are.
  • 6:25 - 6:27
    There's a false sense of security.
  • 6:27 - 6:29
    Or the other way,
  • 6:29 - 6:31
    and that's a false sense of insecurity.
  • 6:31 - 6:34
    I write a lot about "security theater,"
  • 6:34 - 6:37
    which are products that make people feel secure,
  • 6:37 - 6:39
    but don't actually do anything.
  • 6:39 - 6:41
    There's no real word for stuff that makes us secure,
  • 6:41 - 6:43
    but doesn't make us feel secure.
  • 6:43 - 6:46
    Maybe it's what the CIA's supposed to do for us.
  • 6:48 - 6:50
    So back to economics.
  • 6:50 - 6:54
    If economics, if the market, drives security,
  • 6:54 - 6:56
    and if people make trade-offs
  • 6:56 - 6:59
    based on the feeling of security,
  • 6:59 - 7:01
    then the smart thing for companies to do
  • 7:01 - 7:03
    for the economic incentives
  • 7:03 - 7:06
    is to make people feel secure.
  • 7:06 - 7:09
    And there are two ways to do this.
  • 7:09 - 7:11
    One, you can make people actually secure
  • 7:11 - 7:13
    and hope they notice.
  • 7:13 - 7:16
    Or two, you can make people just feel secure
  • 7:16 - 7:19
    and hope they don't notice.
  • 7:20 - 7:23
    So what makes people notice?
  • 7:23 - 7:25
    Well a couple of things:
  • 7:25 - 7:27
    understanding of the security,
  • 7:27 - 7:29
    of the risks, the threats,
  • 7:29 - 7:32
    the countermeasures, how they work.
  • 7:32 - 7:34
    But if you know stuff,
  • 7:34 - 7:37
    you're more likely to have your feelings match reality.
  • 7:37 - 7:40
    Having enough real-world examples helps.
  • 7:40 - 7:43
    Now we all know the crime rate in our neighborhood,
  • 7:43 - 7:46
    because we live there, and we get a feeling about it
  • 7:46 - 7:49
    that basically matches reality.
  • 7:49 - 7:52
    Security theater's exposed
  • 7:52 - 7:55
    when it's obvious that it's not working properly.
  • 7:55 - 7:59
    Okay, so what makes people not notice?
  • 7:59 - 8:01
    Well, a poor understanding.
  • 8:01 - 8:04
    If you don't understand the risks, you don't understand the costs,
  • 8:04 - 8:06
    you're likely to get the trade-off wrong,
  • 8:06 - 8:09
    and your feeling doesn't match reality.
  • 8:09 - 8:11
    Not enough examples.
  • 8:11 - 8:13
    There's an inherent problem
  • 8:13 - 8:15
    with low probability events.
  • 8:15 - 8:17
    If, for example,
  • 8:17 - 8:19
    terrorism almost never happens,
  • 8:19 - 8:21
    it's really hard to judge
  • 8:21 - 8:24
    the efficacy of counter-terrorist measures.
  • 8:25 - 8:28
    This is why you keep sacrificing virgins,
  • 8:28 - 8:31
    and why your unicorn defenses are working just great.
  • 8:31 - 8:34
    There aren't enough examples of failures.
  • 8:35 - 8:38
    Also, feelings that are clouding the issues --
  • 8:38 - 8:40
    the cognitive biases I talked about earlier,
  • 8:40 - 8:43
    fears, folk beliefs,
  • 8:43 - 8:46
    basically an inadequate model of reality.
  • 8:47 - 8:50
    So let me complicate things.
  • 8:50 - 8:52
    I have feeling and reality.
  • 8:52 - 8:55
    I want to add a third element. I want to add model.
  • 8:55 - 8:57
    Feeling and model are in our head;
  • 8:57 - 8:59
    reality is the outside world.
  • 8:59 - 9:02
    It doesn't change; it's real.
  • 9:02 - 9:04
    So feeling is based on our intuition.
  • 9:04 - 9:06
    Model is based on reason.
  • 9:06 - 9:09
    That's basically the difference.
  • 9:09 - 9:11
    In a primitive and simple world,
  • 9:11 - 9:14
    there's really no reason for a model
  • 9:14 - 9:17
    because feeling is close to reality.
  • 9:17 - 9:19
    You don't need a model.
  • 9:19 - 9:21
    But in a modern and complex world,
  • 9:21 - 9:23
    you need models
  • 9:23 - 9:26
    to understand a lot of the risks we face.
  • 9:27 - 9:29
    There's no feeling about germs.
  • 9:29 - 9:32
    You need a model to understand them.
  • 9:32 - 9:34
    So this model
  • 9:34 - 9:37
    is an intelligent representation of reality.
  • 9:37 - 9:40
    It's, of course, limited by science,
  • 9:40 - 9:42
    by technology.
  • 9:42 - 9:45
    We couldn't have a germ theory of disease
  • 9:45 - 9:48
    before we invented the microscope to see them.
  • 9:49 - 9:52
    It's limited by our cognitive biases.
  • 9:52 - 9:54
    But it has the ability
  • 9:54 - 9:56
    to override our feelings.
  • 9:56 - 9:59
    Where do we get these models? We get them from others.
  • 9:59 - 10:02
    We get them from religion, from culture,
  • 10:02 - 10:04
    teachers, elders.
  • 10:04 - 10:06
    A couple years ago,
  • 10:06 - 10:08
    I was in South Africa on safari.
  • 10:08 - 10:11
    The tracker I was with grew up in Kruger National Park.
  • 10:11 - 10:14
    He had some very complex models of how to survive.
  • 10:14 - 10:16
    And it depended on if you were attacked
  • 10:16 - 10:18
    by a lion or a leopard or a rhino or an elephant --
  • 10:18 - 10:21
    and when you had to run away, and when you couldn't run away, and when you had to climb a tree --
  • 10:21 - 10:23
    when you could never climb a tree.
  • 10:23 - 10:26
    I would have died in a day,
  • 10:26 - 10:28
    but he was born there,
  • 10:28 - 10:30
    and he understood how to survive.
  • 10:30 - 10:32
    I was born in New York City.
  • 10:32 - 10:35
    I could have taken him to New York, and he would have died in a day.
  • 10:35 - 10:37
    (Laughter)
  • 10:37 - 10:39
    Because we had different models
  • 10:39 - 10:42
    based on our different experiences.
  • 10:43 - 10:45
    Models can come from the media,
  • 10:45 - 10:48
    from our elected officials.
  • 10:48 - 10:51
    Think of models of terrorism,
  • 10:51 - 10:54
    child kidnapping,
  • 10:54 - 10:56
    airline safety, car safety.
  • 10:56 - 10:59
    Models can come from industry.
  • 10:59 - 11:01
    The two I'm following are surveillance cameras,
  • 11:01 - 11:03
    ID cards,
  • 11:03 - 11:06
    quite a lot of our computer security models come from there.
  • 11:06 - 11:09
    A lot of models come from science.
  • 11:09 - 11:11
    Health models are a great example.
  • 11:11 - 11:14
    Think of cancer, of bird flu, swine flu, SARS.
  • 11:14 - 11:17
    All of our feelings of security
  • 11:17 - 11:19
    about those diseases
  • 11:19 - 11:21
    come from models
  • 11:21 - 11:24
    given to us, really, by science filtered through the media.
  • 11:25 - 11:28
    So models can change.
  • 11:28 - 11:30
    Models are not static.
  • 11:30 - 11:33
    As we become more comfortable in our environments,
  • 11:33 - 11:37
    our model can move closer to our feelings.
  • 11:38 - 11:40
    So an example might be,
  • 11:40 - 11:42
    if you go back 100 years
  • 11:42 - 11:45
    when electricity was first becoming common,
  • 11:45 - 11:47
    there were a lot of fears about it.
  • 11:47 - 11:49
    I mean, there were people who were afraid to push doorbells,
  • 11:49 - 11:52
    because there was electricity in there, and that was dangerous.
  • 11:52 - 11:55
    For us, we're very facile around electricity.
  • 11:55 - 11:57
    We change light bulbs
  • 11:57 - 11:59
    without even thinking about it.
  • 11:59 - 12:03
    Our model of security around electricity
  • 12:03 - 12:06
    is something we were born into.
  • 12:06 - 12:09
    It hasn't changed as we were growing up.
  • 12:09 - 12:12
    And we're good at it.
  • 12:12 - 12:14
    Or think of the risks
  • 12:14 - 12:16
    on the Internet across generations --
  • 12:16 - 12:18
    how your parents approach Internet security,
  • 12:18 - 12:20
    versus how you do,
  • 12:20 - 12:23
    versus how our kids will.
  • 12:23 - 12:26
    Models eventually fade into the background.
  • 12:27 - 12:30
    Intuitive is just another word for familiar.
  • 12:30 - 12:32
    So as your model is close to reality,
  • 12:32 - 12:34
    and it converges with feelings,
  • 12:34 - 12:37
    you often don't know it's there.
  • 12:37 - 12:39
    So a nice example of this
  • 12:39 - 12:42
    came from last year and swine flu.
  • 12:42 - 12:44
    When swine flu first appeared,
  • 12:44 - 12:48
    the initial news caused a lot of overreaction.
  • 12:48 - 12:50
    Now it had a name,
  • 12:50 - 12:52
    which made it scarier than the regular flu,
  • 12:52 - 12:54
    even though the regular flu was more deadly.
  • 12:54 - 12:58
    And people thought doctors should be able to deal with it.
  • 12:58 - 13:00
    So there was that feeling of lack of control.
  • 13:00 - 13:02
    And those two things
  • 13:02 - 13:04
    made the risk seem greater than it was.
  • 13:04 - 13:07
    As the novelty wore off, the months went by,
  • 13:07 - 13:09
    there was some amount of tolerance,
  • 13:09 - 13:11
    people got used to it.
  • 13:11 - 13:14
    There was no new data, but there was less fear.
  • 13:14 - 13:16
    By autumn,
  • 13:16 - 13:18
    people thought
  • 13:18 - 13:20
    the doctors should have solved this already.
  • 13:20 - 13:22
    And there's kind of a bifurcation --
  • 13:22 - 13:24
    people had to choose
  • 13:24 - 13:28
    between fear and acceptance --
  • 13:28 - 13:30
    actually fear and indifference --
  • 13:30 - 13:33
    they kind of chose suspicion.
  • 13:33 - 13:36
    And when the vaccine appeared last winter,
  • 13:36 - 13:39
    there were a lot of people -- a surprising number --
  • 13:39 - 13:42
    who refused to get it --
  • 13:43 - 13:45
    as a nice example
  • 13:45 - 13:48
    of how people's feelings of security change, how their model changes,
  • 13:48 - 13:50
    sort of wildly
  • 13:50 - 13:52
    with no new information,
  • 13:52 - 13:54
    with no new input.
  • 13:54 - 13:57
    This kind of thing happens a lot.
  • 13:57 - 14:00
    I'm going to give one more complication.
  • 14:00 - 14:03
    We have feeling, model, reality.
  • 14:03 - 14:05
    I have a very relativistic view of security.
  • 14:05 - 14:08
    I think it depends on the observer.
  • 14:08 - 14:10
    And most security decisions
  • 14:10 - 14:14
    have a variety of people involved.
  • 14:14 - 14:16
    And stakeholders
  • 14:16 - 14:19
    with specific trade-offs
  • 14:19 - 14:21
    will try to influence the decision.
  • 14:21 - 14:23
    And I call that their agenda.
  • 14:23 - 14:25
    And you see agenda --
  • 14:25 - 14:28
    this is marketing, this is politics --
  • 14:28 - 14:31
    trying to convince you to have one model versus another,
  • 14:31 - 14:33
    trying to convince you to ignore a model
  • 14:33 - 14:36
    and trust your feelings,
  • 14:36 - 14:39
    marginalizing people with models you don't like.
  • 14:39 - 14:42
    This is not uncommon.
  • 14:42 - 14:45
    An example, a great example, is the risk of smoking.
  • 14:46 - 14:49
    The history of the smoking risk over the past 50 years
  • 14:49 - 14:51
    shows how a model changes,
  • 14:51 - 14:54
    and it also shows how an industry fights against
  • 14:54 - 14:56
    a model it doesn't like.
  • 14:56 - 14:59
    Compare that to the secondhand smoke debate --
  • 14:59 - 15:02
    probably about 20 years behind.
  • 15:02 - 15:04
    Think about seat belts.
  • 15:04 - 15:06
    When I was a kid, no one wore a seat belt.
  • 15:06 - 15:08
    Nowadays, no kid will let you drive
  • 15:08 - 15:10
    if you're not wearing a seat belt.
  • 15:11 - 15:13
    Compare that to the airbag debate --
  • 15:13 - 15:16
    probably about 30 years behind.
  • 15:16 - 15:19
    All examples of models changing.
  • 15:21 - 15:24
    What we learn is that changing models is hard.
  • 15:24 - 15:26
    Models are hard to dislodge.
  • 15:26 - 15:28
    If they equal your feelings,
  • 15:28 - 15:31
    you don't even know you have a model.
  • 15:31 - 15:33
    And there's another cognitive bias
  • 15:33 - 15:35
    I'll call confirmation bias,
  • 15:35 - 15:38
    where we tend to accept data
  • 15:38 - 15:40
    that confirms our beliefs
  • 15:40 - 15:43
    and reject data that contradicts our beliefs.
  • 15:44 - 15:46
    So evidence against our model,
  • 15:46 - 15:49
    we're likely to ignore, even if it's compelling.
  • 15:49 - 15:52
    It has to get very compelling before we'll pay attention.
  • 15:53 - 15:55
    New models that extend over long periods of time are hard.
  • 15:55 - 15:57
    Global warming is a great example.
  • 15:57 - 15:59
    We're terrible
  • 15:59 - 16:01
    at models that span 80 years.
  • 16:01 - 16:03
    We can do the next harvest.
  • 16:03 - 16:06
    We can often do until our kids grow up.
  • 16:06 - 16:09
    But 80 years, we're just not good at.
  • 16:09 - 16:12
    So it's a very hard model to accept.
  • 16:12 - 16:16
    We can have both models in our head simultaneously,
  • 16:16 - 16:19
    right, that kind of problem
  • 16:19 - 16:22
    where we're holding both beliefs together,
  • 16:22 - 16:24
    right, the cognitive dissonance.
  • 16:24 - 16:26
    Eventually,
  • 16:26 - 16:29
    the new model will replace the old model.
  • 16:29 - 16:32
    Strong feelings can create a model.
  • 16:32 - 16:35
    September 11th created a security model
  • 16:35 - 16:37
    in a lot of people's heads.
  • 16:37 - 16:40
    Also, personal experiences with crime can do it,
  • 16:40 - 16:42
    a personal health scare,
  • 16:42 - 16:44
    a health scare in the news.
  • 16:44 - 16:46
    You'll see these called flashbulb events
  • 16:46 - 16:48
    by psychiatrists.
  • 16:48 - 16:51
    They can create a model instantaneously,
  • 16:51 - 16:54
    because they're very emotive.
  • 16:54 - 16:56
    So in the technological world,
  • 16:56 - 16:58
    we don't have experience
  • 16:58 - 17:00
    to judge models.
  • 17:00 - 17:02
    And we rely on others. We rely on proxies.
  • 17:02 - 17:06
    I mean, this works as long as it's the correct others.
  • 17:06 - 17:08
    We rely on government agencies
  • 17:08 - 17:13
    to tell us what pharmaceuticals are safe.
  • 17:13 - 17:15
    I flew here yesterday.
  • 17:15 - 17:17
    I didn't check the airplane.
  • 17:17 - 17:19
    I relied on some other group
  • 17:19 - 17:22
    to determine whether my plane was safe to fly.
  • 17:22 - 17:25
    We're here, none of us fear the roof is going to collapse on us,
  • 17:25 - 17:28
    not because we checked,
  • 17:28 - 17:30
    but because we're pretty sure
  • 17:30 - 17:33
    the building codes here are good.
  • 17:33 - 17:35
    It's a model we just accept
  • 17:35 - 17:37
    pretty much by faith.
  • 17:37 - 17:40
    And that's okay.
  • 17:42 - 17:44
    Now, what we want
  • 17:44 - 17:46
    is for people to get familiar enough
  • 17:46 - 17:48
    with better models --
  • 17:48 - 17:50
    have it reflected in their feelings --
  • 17:50 - 17:54
    to allow them to make security trade-offs.
  • 17:54 - 17:56
    Now when these go out of whack,
  • 17:56 - 17:58
    you have two options.
  • 17:58 - 18:00
    One, you can fix people's feelings,
  • 18:00 - 18:02
    directly appeal to feelings.
  • 18:02 - 18:05
    It's manipulation, but it can work.
  • 18:05 - 18:07
    The second, more honest way
  • 18:07 - 18:10
    is to actually fix the model.
  • 18:11 - 18:13
    Change happens slowly.
  • 18:13 - 18:16
    The smoking debate took 40 years,
  • 18:16 - 18:19
    and that was an easy one.
  • 18:19 - 18:21
    Some of this stuff is hard.
  • 18:21 - 18:23
    I mean really though,
  • 18:23 - 18:25
    information seems like our best hope.
  • 18:25 - 18:27
    And I lied.
  • 18:27 - 18:29
    Remember I said feeling, model, reality;
  • 18:29 - 18:32
    I said reality doesn't change. It actually does.
  • 18:32 - 18:34
    We live in a technological world;
  • 18:34 - 18:37
    reality changes all the time.
  • 18:37 - 18:40
    So we might have -- for the first time in our species --
  • 18:40 - 18:43
    feeling chases model, model chases reality, reality's moving --
  • 18:43 - 18:46
    they might never catch up.
  • 18:47 - 18:49
    We don't know.
  • 18:49 - 18:51
    But in the long-term,
  • 18:51 - 18:54
    both feeling and reality are important.
  • 18:54 - 18:57
    And I want to close with two quick stories to illustrate this.
  • 18:57 - 18:59
    1982 -- I don't know if people will remember this --
  • 18:59 - 19:02
    there was a short epidemic
  • 19:02 - 19:04
    of Tylenol poisonings in the United States.
  • 19:04 - 19:07
    It's a horrific story. Someone took a bottle of Tylenol,
  • 19:07 - 19:10
    put poison in it, closed it up, put it back on the shelf.
  • 19:10 - 19:12
    Someone else bought it and died.
  • 19:12 - 19:14
    This terrified people.
  • 19:14 - 19:16
    There were a couple of copycat attacks.
  • 19:16 - 19:19
    There wasn't any real risk, but people were scared.
  • 19:19 - 19:21
    And this is how
  • 19:21 - 19:23
    the tamper-proof drug industry was invented.
  • 19:23 - 19:25
    Those tamper-proof caps, that came from this.
  • 19:25 - 19:27
    It's complete security theater.
  • 19:27 - 19:29
    As a homework assignment, think of 10 ways to get around it.
  • 19:29 - 19:32
    I'll give you one, a syringe.
  • 19:32 - 19:35
    But it made people feel better.
  • 19:35 - 19:37
    It made their feeling of security
  • 19:37 - 19:39
    match the reality more closely.
  • 19:39 - 19:42
    Last story, a few years ago, a friend of mine gave birth.
  • 19:42 - 19:44
    I visit her in the hospital.
  • 19:44 - 19:46
    It turns out when a baby's born now,
  • 19:46 - 19:48
    they put an RFID bracelet on the baby,
  • 19:48 - 19:50
    put a corresponding one on the mother,
  • 19:50 - 19:52
    so if anyone other than the mother takes the baby out of the maternity ward,
  • 19:52 - 19:54
    an alarm goes off.
  • 19:54 - 19:56
    I said, "Well, that's kind of neat.
  • 19:56 - 19:58
    I wonder how rampant baby snatching is
  • 19:58 - 20:00
    out of hospitals."
  • 20:00 - 20:02
    I go home, I look it up.
  • 20:02 - 20:04
    It basically never happens.
  • 20:04 - 20:06
    But if you think about it,
  • 20:06 - 20:08
    if you are a hospital,
  • 20:08 - 20:10
    and you need to take a baby away from its mother,
  • 20:10 - 20:12
    out of the room to run some tests,
  • 20:12 - 20:14
    you better have some good security theater,
  • 20:14 - 20:16
    or she's going to rip your arm off.
  • 20:16 - 20:18
    (Laughter)
  • 20:18 - 20:20
    So it's important for us,
  • 20:20 - 20:22
    those of us who design security,
  • 20:22 - 20:25
    who look at security policy,
  • 20:25 - 20:27
    or even look at public policy
  • 20:27 - 20:29
    in ways that affect security.
  • 20:29 - 20:32
    It's not just reality; it's feeling and reality.
  • 20:32 - 20:34
    What's important
  • 20:34 - 20:36
    is that they be about the same.
  • 20:36 - 20:38
    Because if our feelings match reality,
  • 20:38 - 20:40
    we make better security trade-offs.
  • 20:40 - 20:42
    Thank you.
  • 20:42 - 20:44
    (Applause)
Title:
The security mirage
Speaker:
Bruce Schneier
Description:

The feeling of security and the reality of security don't always match, says computer-security expert Bruce Schneier. At TEDxPSU, he explains why we spend billions addressing news story risks, like the "security theater" now playing at your local airport, while neglecting more probable risks -- and how we can break this pattern.

Video Language:
English
Project:
TEDTalks
Duration:
20:44