
The security mirage

  • 0:01 - 0:03
    So, security is two different things:
  • 0:03 - 0:06
    it's a feeling, and it's a reality.
  • 0:06 - 0:07
    And they're different.
  • 0:07 - 0:10
    You could feel secure even if you're not.
  • 0:10 - 0:12
    And you can be secure
  • 0:12 - 0:14
    even if you don't feel it.
  • 0:14 - 0:16
    Really, we have two separate concepts
  • 0:17 - 0:18
    mapped onto the same word.
  • 0:19 - 0:22
    And what I want to do in this talk
    is to split them apart --
  • 0:22 - 0:26
    figuring out when they diverge
    and how they converge.
  • 0:26 - 0:29
    And language is actually a problem here.
  • 0:29 - 0:31
    There aren't a lot of good words
  • 0:31 - 0:33
    for the concepts
    we're going to talk about.
  • 0:34 - 0:38
    So if you look at security
    in economic terms,
  • 0:38 - 0:40
    it's a trade-off.
  • 0:40 - 0:44
    Every time you get some security,
    you're always trading off something.
  • 0:44 - 0:46
    Whether this is a personal decision --
  • 0:46 - 0:49
    whether you're going to install
    a burglar alarm in your home --
  • 0:49 - 0:50
    or a national decision,
  • 0:50 - 0:52
    whether you're going to invade
    a foreign country --
  • 0:52 - 0:56
    you're going to trade off something:
    money or time, convenience, capabilities,
  • 0:56 - 0:58
    maybe fundamental liberties.
  • 0:58 - 1:02
    And the question to ask
    when you look at a security anything
  • 1:02 - 1:05
    is not whether this makes us safer,
  • 1:05 - 1:07
    but whether it's worth the trade-off.
  • 1:07 - 1:10
    You've heard in the past
    several years, the world is safer
  • 1:10 - 1:12
    because Saddam Hussein is not in power.
  • 1:12 - 1:15
    That might be true,
    but it's not terribly relevant.
  • 1:15 - 1:18
    The question is: Was it worth it?
  • 1:18 - 1:20
    And you can make your own decision,
  • 1:20 - 1:23
    and then you'll decide
    whether the invasion was worth it.
  • 1:23 - 1:27
    That's how you think about security:
    in terms of the trade-off.
  • 1:27 - 1:29
    Now, there's often no right or wrong here.
  • 1:30 - 1:33
    Some of us have a burglar alarm
    system at home and some of us don't.
  • 1:33 - 1:36
    And it'll depend on where we live,
  • 1:36 - 1:38
    whether we live alone or have a family,
  • 1:38 - 1:40
    how much cool stuff we have,
  • 1:40 - 1:43
    how much we're willing
    to accept the risk of theft.
  • 1:44 - 1:47
    In politics also,
    there are different opinions.
  • 1:47 - 1:52
    A lot of times, these trade-offs
    are about more than just security,
  • 1:52 - 1:54
    and I think that's really important.
  • 1:54 - 1:57
    Now, people have a natural intuition
    about these trade-offs.
  • 1:57 - 1:59
    We make them every day.
  • 2:00 - 2:03
    Last night in my hotel room,
    when I decided to double-lock the door,
  • 2:03 - 2:05
    or you in your car when you drove here;
  • 2:06 - 2:07
    when we go eat lunch
  • 2:07 - 2:10
    and decide the food's not
    poison and we'll eat it.
  • 2:10 - 2:13
    We make these trade-offs again and again,
  • 2:13 - 2:15
    multiple times a day.
  • 2:15 - 2:16
    We often won't even notice them.
  • 2:16 - 2:19
    They're just part
    of being alive; we all do it.
  • 2:19 - 2:21
    Every species does it.
  • 2:21 - 2:24
    Imagine a rabbit in a field, eating grass.
  • 2:24 - 2:26
    And the rabbit sees a fox.
  • 2:27 - 2:29
    That rabbit will make
    a security trade-off:
  • 2:29 - 2:31
    "Should I stay, or should I flee?"
  • 2:31 - 2:33
    And if you think about it,
  • 2:33 - 2:35
    the rabbits that are good
    at making that trade-off
  • 2:35 - 2:37
    will tend to live and reproduce,
  • 2:37 - 2:40
    and the rabbits that are bad at it
  • 2:40 - 2:41
    will get eaten or starve.
  • 2:42 - 2:43
    So you'd think
  • 2:44 - 2:49
    that we, as a successful species
    on the planet -- you, me, everybody --
  • 2:49 - 2:51
    would be really good
    at making these trade-offs.
  • 2:52 - 2:55
    Yet it seems, again and again,
    that we're hopelessly bad at it.
  • 2:57 - 2:59
    And I think that's a fundamentally
    interesting question.
  • 2:59 - 3:01
    I'll give you the short answer.
  • 3:01 - 3:04
    The answer is, we respond
    to the feeling of security
  • 3:04 - 3:05
    and not the reality.
  • 3:07 - 3:09
    Now, most of the time, that works.
  • 3:10 - 3:12
    Most of the time,
  • 3:12 - 3:14
    feeling and reality are the same.
  • 3:16 - 3:19
    Certainly that's true
    for most of human prehistory.
  • 3:20 - 3:23
    We've developed this ability
  • 3:23 - 3:26
    because it makes evolutionary sense.
  • 3:27 - 3:30
    One way to think of it
    is that we're highly optimized
  • 3:30 - 3:32
    for risk decisions
  • 3:32 - 3:34
    that are endemic to living
    in small family groups
  • 3:34 - 3:37
    in the East African Highlands
    in 100,000 BC.
  • 3:38 - 3:40
    2010 New York, not so much.
  • 3:42 - 3:45
    Now, there are several biases
    in risk perception.
  • 3:45 - 3:47
    A lot of good experiments in this.
  • 3:47 - 3:50
    And you can see certain biases
    that come up again and again.
  • 3:50 - 3:52
    I'll give you four.
  • 3:52 - 3:55
    First, we tend to exaggerate
    spectacular and rare risks
  • 3:55 - 3:57
    and downplay common risks --
  • 3:57 - 3:58
    so, flying versus driving.
  • 3:59 - 4:03
    Second, the unknown is perceived
    to be riskier than the familiar.
  • 4:06 - 4:08
    One example would be:
  • 4:08 - 4:10
    people fear kidnapping by strangers,
  • 4:10 - 4:14
    when the data supports that kidnapping
    by relatives is much more common.
  • 4:14 - 4:16
    This is for children.
  • 4:16 - 4:20
    Third, personified risks
    are perceived to be greater
  • 4:20 - 4:21
    than anonymous risks.
  • 4:21 - 4:24
    So, Bin Laden is scarier
    because he has a name.
  • 4:25 - 4:26
    And the fourth is:
  • 4:26 - 4:31
    people underestimate risks
    in situations they do control
  • 4:31 - 4:34
    and overestimate them
    in situations they don't control.
  • 4:34 - 4:37
    So once you take up skydiving or smoking,
  • 4:37 - 4:39
    you downplay the risks.
  • 4:40 - 4:43
    If a risk is thrust upon you --
    terrorism is a good example --
  • 4:43 - 4:44
    you'll overplay it,
  • 4:44 - 4:46
    because you don't feel
    like it's in your control.
  • 4:47 - 4:50
    There are a bunch
    of other of these cognitive biases,
  • 4:50 - 4:53
    that affect our risk decisions.
  • 4:54 - 4:56
    There's the availability heuristic,
  • 4:56 - 5:00
    which basically means we estimate
    the probability of something
  • 5:00 - 5:03
    by how easy it is to bring
    instances of it to mind.
  • 5:05 - 5:06
    So you can imagine how that works.
  • 5:06 - 5:10
    If you hear a lot about tiger attacks,
    there must be a lot of tigers around.
  • 5:10 - 5:13
    If you don't hear about lion attacks,
    there aren't a lot of lions around.
  • 5:13 - 5:16
    This works, until you invent newspapers,
  • 5:16 - 5:20
    because what newspapers do
    is repeat again and again
  • 5:20 - 5:22
    rare risks.
  • 5:22 - 5:24
    I tell people: if it's in the news,
    don't worry about it,
  • 5:24 - 5:29
    because by definition, news is something
    that almost never happens.
  • 5:29 - 5:31
    (Laughter)
  • 5:31 - 5:33
    When something is so common,
    it's no longer news.
  • 5:34 - 5:36
    Car crashes, domestic violence --
  • 5:36 - 5:38
    those are the risks you worry about.
  • 5:38 - 5:41
    We're also a species of storytellers.
  • 5:41 - 5:43
    We respond to stories more than data.
  • 5:43 - 5:46
    And there's some basic
    innumeracy going on.
  • 5:46 - 5:49
    I mean, the joke "One, two,
    three, many" is kind of right.
  • 5:49 - 5:51
    We're really good at small numbers.
  • 5:51 - 5:54
    One mango, two mangoes, three mangoes,
  • 5:54 - 5:56
    10,000 mangoes, 100,000 mangoes --
  • 5:56 - 5:59
    it's still more mangoes
    than you can eat before they rot.
  • 5:59 - 6:02
    So one half, one quarter,
    one fifth -- we're good at that.
  • 6:02 - 6:04
    One in a million, one in a billion --
  • 6:04 - 6:05
    they're both almost never.
  • 6:06 - 6:10
    So we have trouble with the risks
    that aren't very common.
  • 6:10 - 6:12
    And what these cognitive biases do
  • 6:13 - 6:15
    is they act as filters
    between us and reality.
  • 6:16 - 6:20
    And the result is that feeling
    and reality get out of whack,
  • 6:20 - 6:21
    they get different.
  • 6:22 - 6:26
    Now, you either have a feeling --
    you feel more secure than you are,
  • 6:26 - 6:28
    there's a false sense of security.
  • 6:28 - 6:31
    Or the other way, and that's a false
    sense of insecurity.
  • 6:32 - 6:35
    I write a lot about "security theater,"
  • 6:35 - 6:37
    which are products
    that make people feel secure,
  • 6:37 - 6:39
    but don't actually do anything.
  • 6:39 - 6:42
    There's no real word for stuff
    that makes us secure,
  • 6:42 - 6:44
    but doesn't make us feel secure.
  • 6:44 - 6:47
    Maybe it's what the CIA
    is supposed to do for us.
  • 6:48 - 6:50
    So back to economics.
  • 6:50 - 6:54
    If economics, if the market,
    drives security,
  • 6:54 - 6:59
    and if people make trade-offs
    based on the feeling of security,
  • 6:59 - 7:04
    then the smart thing for companies to do
    for the economic incentives
  • 7:04 - 7:06
    is to make people feel secure.
  • 7:07 - 7:09
    And there are two ways to do this.
  • 7:09 - 7:12
    One, you can make people actually secure
  • 7:12 - 7:13
    and hope they notice.
  • 7:13 - 7:16
    Or two, you can make people
    just feel secure
  • 7:16 - 7:18
    and hope they don't notice.
  • 7:19 - 7:21
    Right?
  • 7:21 - 7:24
    So what makes people notice?
  • 7:24 - 7:26
    Well, a couple of things:
  • 7:26 - 7:28
    understanding of the security,
  • 7:28 - 7:30
    of the risks, the threats,
  • 7:30 - 7:32
    the countermeasures, how they work.
  • 7:32 - 7:34
    But if you know stuff, you're more likely
  • 7:35 - 7:37
    to have your feelings match reality.
  • 7:38 - 7:41
    Having enough real-world examples helps.
  • 7:41 - 7:44
    We all know the crime rate
    in our neighborhood,
  • 7:44 - 7:46
    because we live there,
    and we get a feeling about it
  • 7:46 - 7:48
    that basically matches reality.
  • 7:50 - 7:52
    Security theater is exposed
  • 7:52 - 7:55
    when it's obvious
    that it's not working properly.
  • 7:56 - 7:59
    OK. So what makes people not notice?
  • 7:59 - 8:01
    Well, a poor understanding.
  • 8:01 - 8:05
    If you don't understand the risks,
    you don't understand the costs,
  • 8:05 - 8:07
    you're likely to get the trade-off wrong,
  • 8:07 - 8:09
    and your feeling doesn't match reality.
  • 8:09 - 8:11
    Not enough examples.
  • 8:12 - 8:15
    There's an inherent problem
    with low-probability events.
  • 8:16 - 8:19
    If, for example, terrorism
    almost never happens,
  • 8:19 - 8:24
    it's really hard to judge the efficacy
    of counter-terrorist measures.
  • 8:25 - 8:29
    This is why you keep sacrificing virgins,
  • 8:29 - 8:32
    and why your unicorn defenses
    are working just great.
  • 8:32 - 8:34
    There aren't enough examples of failures.
  • 8:36 - 8:39
    Also, feelings that cloud the issues --
  • 8:39 - 8:43
    the cognitive biases I talked
    about earlier: fears, folk beliefs --
  • 8:43 - 8:46
    basically, an inadequate model of reality.
  • 8:48 - 8:50
    So let me complicate things.
  • 8:50 - 8:52
    I have feeling and reality.
  • 8:52 - 8:55
    I want to add a third element.
    I want to add "model."
  • 8:56 - 8:58
    Feeling and model are in our head,
  • 8:58 - 9:01
    reality is the outside world;
    it doesn't change, it's real.
  • 9:03 - 9:05
    Feeling is based on our intuition,
  • 9:05 - 9:06
    model is based on reason.
  • 9:07 - 9:09
    That's basically the difference.
  • 9:09 - 9:11
    In a primitive and simple world,
  • 9:11 - 9:13
    there's really no reason for a model,
  • 9:15 - 9:17
    because feeling is close to reality.
  • 9:17 - 9:19
    You don't need a model.
  • 9:19 - 9:21
    But in a modern and complex world,
  • 9:22 - 9:26
    you need models to understand
    a lot of the risks we face.
  • 9:27 - 9:29
    There's no feeling about germs.
  • 9:30 - 9:32
    You need a model to understand them.
  • 9:33 - 9:37
    This model is an intelligent
    representation of reality.
  • 9:37 - 9:42
    It's, of course, limited
    by science, by technology.
  • 9:43 - 9:45
    We couldn't have a germ theory of disease
  • 9:45 - 9:48
    before we invented
    the microscope to see them.
  • 9:49 - 9:52
    It's limited by our cognitive biases.
  • 9:53 - 9:56
    But it has the ability
    to override our feelings.
  • 9:56 - 9:59
    Where do we get these models?
    We get them from others.
  • 9:59 - 10:05
    We get them from religion,
    from culture, teachers, elders.
  • 10:05 - 10:08
    A couple years ago,
    I was in South Africa on safari.
  • 10:08 - 10:11
    The tracker I was with grew up
    in Kruger National Park.
  • 10:11 - 10:14
    He had some very complex
    models of how to survive.
  • 10:15 - 10:18
    And it depended on whether you were attacked
    by a lion, leopard, rhino, or elephant --
  • 10:18 - 10:21
    and when you had to run away,
    when you couldn't run away,
  • 10:21 - 10:24
    when you had to climb a tree,
    when you could never climb a tree.
  • 10:24 - 10:26
    I would have died in a day.
  • 10:27 - 10:31
    But he was born there,
    and he understood how to survive.
  • 10:31 - 10:33
    I was born in New York City.
  • 10:33 - 10:36
    I could have taken him to New York,
    and he would have died in a day.
  • 10:36 - 10:37
    (Laughter)
  • 10:37 - 10:41
    Because we had different models
    based on our different experiences.
  • 10:43 - 10:46
    Models can come from the media,
  • 10:46 - 10:47
    from our elected officials ...
  • 10:48 - 10:51
    Think of models of terrorism,
  • 10:51 - 10:53
    child kidnapping,
  • 10:53 - 10:56
    airline safety, car safety.
  • 10:56 - 10:58
    Models can come from industry.
  • 10:59 - 11:02
    The two I'm following
    are surveillance cameras,
  • 11:02 - 11:04
    ID cards,
  • 11:04 - 11:07
    quite a lot of our computer
    security models come from there.
  • 11:07 - 11:09
    A lot of models come from science.
  • 11:09 - 11:11
    Health models are a great example.
  • 11:11 - 11:14
    Think of cancer, bird flu,
    swine flu, SARS.
  • 11:15 - 11:20
    All of our feelings of security
    about those diseases
  • 11:20 - 11:24
    come from models given to us, really,
    by science filtered through the media.
  • 11:26 - 11:27
    So models can change.
  • 11:28 - 11:30
    Models are not static.
  • 11:30 - 11:34
    As we become more comfortable
    in our environments,
  • 11:34 - 11:37
    our model can move closer to our feelings.
  • 11:39 - 11:41
    So an example might be,
  • 11:41 - 11:43
    if you go back 100 years ago,
  • 11:43 - 11:46
    when electricity was first
    becoming common,
  • 11:46 - 11:48
    there were a lot of fears about it.
  • 11:48 - 11:50
    There were people who were afraid
    to push doorbells,
  • 11:50 - 11:53
    because there was electricity
    in there, and that was dangerous.
  • 11:53 - 11:56
    For us, we're very facile
    around electricity.
  • 11:56 - 11:59
    We change light bulbs
    without even thinking about it.
  • 12:00 - 12:06
    Our model of security around electricity
    is something we were born into.
  • 12:06 - 12:09
    It hasn't changed as we were growing up.
  • 12:09 - 12:11
    And we're good at it.
  • 12:12 - 12:17
    Or think of the risks on the Internet
    across generations --
  • 12:17 - 12:19
    how your parents approach
    Internet security,
  • 12:19 - 12:20
    versus how you do,
  • 12:20 - 12:22
    versus how our kids will.
  • 12:23 - 12:26
    Models eventually fade
    into the background.
  • 12:27 - 12:30
    "Intuitive" is just
    another word for familiar.
  • 12:31 - 12:34
    So when your model is close to reality
    and converges with your feelings,
  • 12:35 - 12:36
    you often don't even know it's there.
  • 12:38 - 12:42
    A nice example of this came
    from last year and swine flu.
  • 12:43 - 12:45
    When swine flu first appeared,
  • 12:45 - 12:48
    the initial news caused
    a lot of overreaction.
  • 12:48 - 12:50
    Now, it had a name,
  • 12:50 - 12:52
    which made it scarier
    than the regular flu,
  • 12:52 - 12:54
    even though the regular flu was more deadly.
  • 12:55 - 12:58
    And people thought doctors
    should be able to deal with it.
  • 12:58 - 13:01
    So there was that feeling
    of lack of control.
  • 13:01 - 13:04
    And those two things
    made the risk seem greater than it was.
  • 13:04 - 13:07
    As the novelty wore off
    and the months went by,
  • 13:07 - 13:10
    there was some amount of tolerance;
    people got used to it.
  • 13:11 - 13:14
    There was no new data,
    but there was less fear.
  • 13:14 - 13:17
    By autumn,
  • 13:17 - 13:20
    people thought the doctors
    should have solved this already.
  • 13:20 - 13:22
    And there's kind of a bifurcation:
  • 13:22 - 13:28
    people had to choose
    between fear and acceptance --
  • 13:29 - 13:31
    actually, fear and indifference --
  • 13:31 - 13:33
    and they kind of chose suspicion.
  • 13:34 - 13:37
    And when the vaccine appeared last winter,
  • 13:37 - 13:39
    there were a lot of people --
    a surprising number --
  • 13:40 - 13:41
    who refused to get it.
  • 13:44 - 13:47
    And it's a nice example of how
    people's feelings of security change,
  • 13:47 - 13:49
    how their model changes,
  • 13:49 - 13:50
    sort of wildly,
  • 13:51 - 13:53
    with no new information,
    with no new input.
  • 13:55 - 13:57
    This kind of thing happens a lot.
  • 13:58 - 14:00
    I'm going to give one more complication.
  • 14:00 - 14:03
    We have feeling, model, reality.
  • 14:03 - 14:06
    I have a very relativistic
    view of security.
  • 14:06 - 14:08
    I think it depends on the observer.
  • 14:08 - 14:14
    And most security decisions
    have a variety of people involved.
  • 14:15 - 14:21
    And stakeholders with specific trade-offs
    will try to influence the decision.
  • 14:21 - 14:23
    And I call that their agenda.
  • 14:24 - 14:28
    And you see agenda --
    this is marketing, this is politics --
  • 14:28 - 14:31
    trying to convince you to have
    one model versus another,
  • 14:31 - 14:33
    trying to convince you to ignore a model
  • 14:33 - 14:36
    and trust your feelings,
  • 14:36 - 14:38
    marginalizing people
    with models you don't like.
  • 14:39 - 14:41
    This is not uncommon.
  • 14:42 - 14:46
    An example, a great example,
    is the risk of smoking.
  • 14:47 - 14:49
    In the history of the past 50 years,
  • 14:49 - 14:51
    the smoking risk shows
    how a model changes,
  • 14:51 - 14:56
    and it also shows how an industry fights
    against a model it doesn't like.
  • 14:57 - 15:00
    Compare that to the secondhand
    smoke debate --
  • 15:00 - 15:02
    probably about 20 years behind.
  • 15:03 - 15:04
    Think about seat belts.
  • 15:04 - 15:06
    When I was a kid, no one wore a seat belt.
  • 15:06 - 15:10
    Nowadays, no kid will let you drive
    if you're not wearing a seat belt.
  • 15:11 - 15:14
    Compare that to the airbag debate,
  • 15:14 - 15:16
    probably about 30 years behind.
  • 15:17 - 15:19
    All examples of models changing.
  • 15:22 - 15:24
    What we learn is that changing
    models is hard.
  • 15:25 - 15:27
    Models are hard to dislodge.
  • 15:27 - 15:29
    If they equal your feelings,
  • 15:29 - 15:31
    you don't even know you have a model.
  • 15:32 - 15:34
    And there's another cognitive bias
  • 15:34 - 15:36
    I'll call confirmation bias,
  • 15:36 - 15:40
    where we tend to accept data
    that confirms our beliefs
  • 15:40 - 15:43
    and reject data
    that contradicts our beliefs.
  • 15:44 - 15:48
    So evidence against our model,
    we're likely to ignore,
  • 15:48 - 15:49
    even if it's compelling.
  • 15:49 - 15:52
    It has to get very compelling
    before we'll pay attention.
  • 15:54 - 15:56
    New models that extend over
    long periods of time are hard.
  • 15:56 - 15:58
    Global warming is a great example.
  • 15:58 - 16:02
    We're terrible at models
    that span 80 years.
  • 16:02 - 16:04
    We can do "to the next harvest."
  • 16:04 - 16:06
    We can often do "until our kids grow up."
  • 16:06 - 16:09
    But "80 years," we're just not good at.
  • 16:10 - 16:12
    So it's a very hard model to accept.
  • 16:13 - 16:16
    We can have both models
    in our head simultaneously --
  • 16:17 - 16:24
    that kind of problem where
    we hold both beliefs at once --
  • 16:24 - 16:25
    the cognitive dissonance.
  • 16:25 - 16:28
    Eventually, the new model
    will replace the old model.
  • 16:29 - 16:31
    Strong feelings can create a model.
  • 16:32 - 16:38
    September 11 created a security model
    in a lot of people's heads.
  • 16:38 - 16:41
    Also, personal experiences
    with crime can do it,
  • 16:41 - 16:42
    personal health scare,
  • 16:42 - 16:44
    a health scare in the news.
  • 16:45 - 16:48
    You'll see these called
    "flashbulb events" by psychiatrists.
  • 16:49 - 16:51
    They can create a model instantaneously,
  • 16:51 - 16:53
    because they're very emotive.
  • 16:55 - 16:56
    So in the technological world,
  • 16:56 - 16:59
    we don't have experience to judge models.
  • 17:00 - 17:03
    And we rely on others. We rely on proxies.
  • 17:03 - 17:05
    And this works, as long as
    it's the correct others.
  • 17:06 - 17:09
    We rely on government agencies
  • 17:09 - 17:13
    to tell us what pharmaceuticals are safe.
  • 17:13 - 17:15
    I flew here yesterday.
  • 17:15 - 17:17
    I didn't check the airplane.
  • 17:17 - 17:20
    I relied on some other group
  • 17:20 - 17:22
    to determine whether
    my plane was safe to fly.
  • 17:23 - 17:26
    We're here, none of us fear the roof
    is going to collapse on us,
  • 17:26 - 17:28
    not because we checked,
  • 17:28 - 17:32
    but because we're pretty sure
    the building codes here are good.
  • 17:33 - 17:36
    It's a model we just accept
  • 17:36 - 17:38
    pretty much by faith.
  • 17:38 - 17:39
    And that's OK.
  • 17:43 - 17:49
    Now, what we want is people
    to get familiar enough with better models,
  • 17:49 - 17:51
    have them reflected in their feelings,
  • 17:51 - 17:54
    to allow them to make security trade-offs.
  • 17:55 - 17:59
    When these go out of whack,
    you have two options.
  • 17:59 - 18:03
    One, you can fix people's feelings,
    directly appeal to feelings.
  • 18:03 - 18:05
    It's manipulation, but it can work.
  • 18:06 - 18:08
    The second, more honest way
  • 18:08 - 18:10
    is to actually fix the model.
  • 18:11 - 18:14
    Change happens slowly.
  • 18:14 - 18:18
    The smoking debate took 40 years --
    and that was an easy one.
  • 18:20 - 18:22
    Some of this stuff is hard.
  • 18:22 - 18:26
    Really, though, information
    seems like our best hope.
  • 18:26 - 18:27
    And I lied.
  • 18:27 - 18:31
    Remember I said feeling, model, reality;
    reality doesn't change?
  • 18:31 - 18:33
    It actually does.
  • 18:33 - 18:34
    We live in a technological world;
  • 18:34 - 18:37
    reality changes all the time.
  • 18:38 - 18:41
    So we might have,
    for the first time in our species:
  • 18:41 - 18:44
    feeling chases model, model chases
    reality, reality's moving --
  • 18:44 - 18:45
    they might never catch up.
  • 18:47 - 18:48
    We don't know.
  • 18:50 - 18:52
    But in the long term,
  • 18:52 - 18:54
    both feeling and reality are important.
  • 18:54 - 18:57
    And I want to close with two quick
    stories to illustrate this.
  • 18:57 - 19:00
    1982 -- I don't know if people
    will remember this --
  • 19:00 - 19:03
    there was a short epidemic
    of Tylenol poisonings
  • 19:03 - 19:05
    in the United States.
  • 19:05 - 19:06
    It's a horrific story.
  • 19:06 - 19:08
    Someone took a bottle of Tylenol,
  • 19:08 - 19:11
    put poison in it, closed it up,
    put it back on the shelf,
  • 19:11 - 19:13
    someone else bought it and died.
  • 19:13 - 19:14
    This terrified people.
  • 19:14 - 19:17
    There were a couple of copycat attacks.
  • 19:17 - 19:19
    There wasn't any real risk,
    but people were scared.
  • 19:19 - 19:23
    And this is how the tamper-proof
    drug industry was invented.
  • 19:23 - 19:26
    Those tamper-proof caps?
    That came from this.
  • 19:26 - 19:27
    It's complete security theater.
  • 19:27 - 19:30
    As a homework assignment,
    think of 10 ways to get around it.
  • 19:30 - 19:32
    I'll give you one: a syringe.
  • 19:32 - 19:35
    But it made people feel better.
  • 19:35 - 19:39
    It made their feeling of security
    better match the reality.
  • 19:40 - 19:43
    Last story: a few years ago,
    a friend of mine gave birth.
  • 19:43 - 19:44
    I visited her in the hospital.
  • 19:45 - 19:46
    It turns out, when a baby's born now,
  • 19:46 - 19:50
    they put an RFID bracelet on the baby,
    a corresponding one on the mother,
  • 19:50 - 19:54
    so if anyone other than the mother takes
    the baby out of the maternity ward,
  • 19:54 - 19:55
    an alarm goes off.
  • 19:55 - 19:57
    I said, "Well, that's kind of neat.
  • 19:57 - 20:01
    I wonder how rampant
    baby snatching is out of hospitals."
  • 20:01 - 20:02
    I go home, I look it up.
  • 20:02 - 20:03
    It basically never happens.
  • 20:03 - 20:05
    (Laughter)
  • 20:05 - 20:08
    But if you think about it,
    if you are a hospital,
  • 20:08 - 20:11
    and you need to take a baby
    away from its mother,
  • 20:11 - 20:12
    out of the room to run some tests,
  • 20:12 - 20:14
    you better have some good
    security theater,
  • 20:14 - 20:16
    or she's going to rip your arm off.
  • 20:16 - 20:18
    (Laughter)
  • 20:19 - 20:21
    So it's important for us,
  • 20:21 - 20:23
    those of us who design security,
  • 20:23 - 20:25
    who look at security policy --
  • 20:26 - 20:29
    or even look at public policy
    in ways that affect security.
  • 20:30 - 20:33
    It's not just reality;
    it's feeling and reality.
  • 20:33 - 20:35
    What's important
  • 20:35 - 20:37
    is that they be about the same.
  • 20:37 - 20:39
    It's important because,
    when our feelings match reality,
  • 20:39 - 20:41
    we make better security trade-offs.
  • 20:41 - 20:43
    Thank you.
  • 20:43 - 20:45
    (Applause)
Title:
The security mirage
Speaker:
Bruce Schneier
Description:

The feeling of security and the reality of security don't always match, says computer-security expert Bruce Schneier. At TEDxPSU, he explains why we spend billions addressing news story risks, like the "security theater" now playing at your local airport, while neglecting more probable risks -- and how we can break this pattern.

Video Language:
English
Team:
closed TED
Project:
TEDTalks
Duration:
20:44