Reconceptualizing Security | Bruce Schneier | TEDxPSU

  • 0:23 - 0:25
    So security is two different things:
  • 0:25 - 0:28
    it's a feeling, and it's a reality.
  • 0:28 - 0:29
    And they're different.
  • 0:29 - 0:31
    You could feel secure
  • 0:31 - 0:33
    even if you're not.
  • 0:33 - 0:35
    And you can be secure
  • 0:35 - 0:37
    even if you don't feel it.
  • 0:37 - 0:39
    Really, we have two separate concepts
  • 0:39 - 0:41
    mapped onto the same word.
  • 0:41 - 0:43
    And what I want to do in this talk
  • 0:43 - 0:45
    is to split them apart --
  • 0:45 - 0:47
    figuring out when they diverge
  • 0:47 - 0:49
    and how they converge.
  • 0:49 - 0:51
    And language is actually a problem here.
  • 0:51 - 0:53
    There aren't a lot of good words
  • 0:53 - 0:56
    for the concepts
    we're going to talk about.
  • 0:56 - 0:58
    So if you look at security
  • 0:58 - 1:00
    in economic terms,
  • 1:00 - 1:02
    it's a trade-off.
  • 1:02 - 1:04
    Every time you get some security,
  • 1:04 - 1:06
    you're always trading off something.
  • 1:06 - 1:08
    Whether this is a personal decision --
  • 1:08 - 1:11
    whether you're going to install
    a burglar alarm in your home --
  • 1:11 - 1:15
    or a national decision -- whether you're
    going to invade some foreign country --
  • 1:15 - 1:16
    you're going to trade off something,
  • 1:16 - 1:19
    either money or time,
    convenience, capabilities,
  • 1:19 - 1:21
    maybe fundamental liberties.
  • 1:21 - 1:24
    And the question to ask
    when you look at a security anything
  • 1:24 - 1:27
    is not whether this makes us safer,
  • 1:27 - 1:30
    but whether it's worth the trade-off.
  • 1:30 - 1:32
    You've heard in the past several years,
  • 1:32 - 1:35
    the world is safer because
    Saddam Hussein is not in power.
  • 1:35 - 1:37
    That might be true,
    but it's not terribly relevant.
  • 1:37 - 1:40
    The question is, was it worth it?
  • 1:40 - 1:43
    And you can make your own decision,
  • 1:43 - 1:46
    and then you'll decide
    whether the invasion was worth it.
  • 1:46 - 1:48
    That's how you think about security --
  • 1:48 - 1:49
    in terms of the trade-off.
  • 1:49 - 1:52
    Now there's often no right or wrong here.
  • 1:52 - 1:54
    Some of us have
    a burglar alarm system at home,
  • 1:54 - 1:56
    and some of us don't.
  • 1:56 - 1:58
    And it'll depend on where we live,
  • 1:58 - 2:00
    whether we live alone or have a family,
  • 2:00 - 2:02
    how much cool stuff we have,
  • 2:02 - 2:04
    how much we're willing to accept
  • 2:04 - 2:06
    the risk of theft.
  • 2:06 - 2:08
    In politics also,
  • 2:08 - 2:10
    there are different opinions.
  • 2:10 - 2:12
    A lot of times, these trade-offs
  • 2:12 - 2:14
    are about more than just security,
  • 2:14 - 2:16
    and I think that's really important.
  • 2:16 - 2:18
    Now people have a natural intuition
  • 2:18 - 2:20
    about these trade-offs.
  • 2:20 - 2:22
    We make them every day --
  • 2:22 - 2:24
    last night in my hotel room,
  • 2:24 - 2:26
    when I decided to double-lock the door,
  • 2:26 - 2:28
    or you in your car when you drove here,
  • 2:28 - 2:30
    when we go eat lunch
  • 2:30 - 2:33
    and decide the food's not poison
    and we'll eat it.
  • 2:33 - 2:35
    We make these trade-offs
    again and again,
  • 2:35 - 2:37
    multiple times a day.
  • 2:37 - 2:39
    We often don't even notice them.
  • 2:39 - 2:41
    They're just part of being alive;
    we all do it.
  • 2:41 - 2:44
    Every species does it.
  • 2:44 - 2:46
    Imagine a rabbit in a field, eating grass,
  • 2:46 - 2:49
    and the rabbit's going to see a fox.
  • 2:49 - 2:51
    That rabbit will make
    a security trade-off:
  • 2:51 - 2:53
    "Should I stay, or should I flee?"
  • 2:53 - 2:55
    And if you think about it,
  • 2:55 - 2:58
    the rabbits that are good
    at making that trade-off
  • 2:58 - 3:00
    will tend to live and reproduce,
  • 3:00 - 3:02
    and the rabbits that are bad at it
  • 3:02 - 3:04
    will get eaten or starve.
  • 3:04 - 3:06
    So you'd think
  • 3:06 - 3:09
    that we, as a successful species
    on the planet --
  • 3:09 - 3:11
    you, me, everybody --
  • 3:11 - 3:14
    would be really good
    at making these trade-offs.
  • 3:14 - 3:16
    Yet it seems, again and again,
  • 3:16 - 3:19
    that we're hopelessly bad at it.
  • 3:19 - 3:22
    And I think that's a fundamentally
    interesting question.
  • 3:22 - 3:24
    I'll give you the short answer.
  • 3:24 - 3:26
    The answer is, we respond
    to the feeling of security
  • 3:26 - 3:29
    and not the reality.
  • 3:29 - 3:32
    Now most of the time, that works.
  • 3:33 - 3:35
    Most of the time,
  • 3:35 - 3:37
    feeling and reality are the same.
  • 3:38 - 3:40
    Certainly that's true
  • 3:40 - 3:43
    for most of human prehistory.
  • 3:43 - 3:46
    We've developed this ability
  • 3:46 - 3:48
    because it makes evolutionary sense.
  • 3:49 - 3:51
    One way to think of it
  • 3:51 - 3:52
    is that we're highly optimized
  • 3:52 - 3:54
    for risk decisions
  • 3:54 - 3:57
    that are endemic to living
    in small family groups
  • 3:57 - 4:00
    in the East African highlands
    in 100,000 B.C.
  • 4:00 - 4:03
    2010 New York, not so much.
  • 4:06 - 4:10
    Now there are several biases
    in risk perception.
  • 4:10 - 4:11
    There are a lot of good experiments on this.
  • 4:11 - 4:15
    And you can see certain biases
    that come up again and again.
  • 4:15 - 4:17
    So I'll give you four.
  • 4:17 - 4:20
    We tend to exaggerate
    spectacular and rare risks
  • 4:20 - 4:22
    and downplay common risks --
  • 4:22 - 4:24
    so flying versus driving.
  • 4:24 - 4:26
    The unknown is perceived
  • 4:26 - 4:28
    to be riskier than the familiar.
  • 4:31 - 4:32
    One example would be,
  • 4:32 - 4:35
    people fear kidnapping by strangers
  • 4:35 - 4:39
    when the data show that kidnapping
    by relatives is much more common.
  • 4:39 - 4:41
    This is for children.
  • 4:41 - 4:43
    Third, personified risks
  • 4:43 - 4:46
    are perceived to be greater
    than anonymous risks --
  • 4:46 - 4:48
    so Bin Laden is scarier
    because he has a name.
  • 4:50 - 4:51
    And the fourth
  • 4:51 - 4:54
    is people underestimate risks
  • 4:54 - 4:56
    in situations they do control
  • 4:56 - 4:59
    and overestimate them
    in situations they don't control.
  • 4:59 - 5:02
    So once you take up skydiving or smoking,
  • 5:02 - 5:04
    you downplay the risks.
  • 5:04 - 5:08
    If a risk is thrust upon you
    -- terrorism was a good example --
  • 5:08 - 5:11
    you'll overplay it because you don't feel
    like it's in your control.
  • 5:11 - 5:15
    There are a bunch of other
    cognitive biases
  • 5:15 - 5:17
    that affect our risk decisions.
  • 5:18 - 5:20
    There's the availability heuristic,
  • 5:20 - 5:22
    which basically means
  • 5:22 - 5:25
    we estimate the probability of something
  • 5:25 - 5:28
    by how easy it is
    to bring instances of it to mind.
  • 5:30 - 5:31
    So you can imagine how that works.
  • 5:31 - 5:35
    If you hear a lot about tiger attacks,
    there must be a lot of tigers around.
  • 5:35 - 5:38
    You don't hear about lion attacks,
    there aren't a lot of lions around.
  • 5:38 - 5:41
    This works until you invent newspapers.
  • 5:41 - 5:42
    Because what newspapers do
  • 5:42 - 5:45
    is they repeat again and again
  • 5:45 - 5:46
    rare risks.
  • 5:46 - 5:49
    I tell people, if it's in the news,
    don't worry about it.
  • 5:49 - 5:50
    Because by definition,
  • 5:50 - 5:53
    news is something
    that almost never happens.
  • 5:53 - 5:55
    (Laughter)
  • 5:55 - 5:59
    When something is so common,
    it's no longer news --
  • 5:59 - 6:01
    car crashes, domestic violence --
  • 6:01 - 6:03
    those are the risks you worry about.
  • 6:03 - 6:05
    We're also a species of storytellers.
  • 6:05 - 6:08
    We respond to stories more than data.
  • 6:08 - 6:11
    And there's some basic
    innumeracy going on.
  • 6:11 - 6:14
    I mean, the joke
    "One, Two, Three, Many" is kind of right.
  • 6:14 - 6:16
    We're really good at small numbers.
  • 6:16 - 6:18
    One mango, two mangoes, three mangoes,
  • 6:18 - 6:20
    10,000 mangoes, 100,000 mangoes --
  • 6:20 - 6:23
    it's still more mangoes
    than you can eat before they rot.
  • 6:23 - 6:27
    So one half, one quarter, one fifth
    -- we're good at that.
  • 6:27 - 6:29
    One in a million, one in a billion --
  • 6:29 - 6:31
    they're both almost never.
  • 6:31 - 6:33
    So we have trouble with the risks
  • 6:33 - 6:35
    that aren't very common.
  • 6:35 - 6:37
    And what these cognitive biases do
  • 6:37 - 6:40
    is they act as filters
    between us and reality.
  • 6:41 - 6:43
    And the result
  • 6:43 - 6:45
    is that feeling and reality
    get out of whack,
  • 6:45 - 6:47
    they get different.
  • 6:47 - 6:51
    Now you either have a feeling
    -- you feel more secure than you are.
  • 6:51 - 6:53
    There's a false sense of security.
  • 6:53 - 6:54
    Or the other way,
  • 6:54 - 6:56
    and that's a false sense of insecurity.
  • 6:56 - 6:59
    I write a lot about "security theater,"
  • 6:59 - 7:02
    which are products
    that make people feel secure,
  • 7:02 - 7:04
    but don't actually do anything.
  • 7:04 - 7:07
    There's no real word
    for stuff that makes us secure,
  • 7:07 - 7:09
    but doesn't make us feel secure.
  • 7:09 - 7:12
    Maybe it's what the CIA's supposed to do
    for us.
  • 7:13 - 7:15
    So back to economics.
  • 7:15 - 7:18
    If economics, if the market,
    drives security,
  • 7:18 - 7:21
    and if people make trade-offs
  • 7:21 - 7:23
    based on the feeling of security,
  • 7:23 - 7:27
    then the smart thing for companies to do
  • 7:27 - 7:29
    given the economic incentives
  • 7:29 - 7:31
    is to make people feel secure.
  • 7:32 - 7:34
    And there are two ways to do this.
  • 7:34 - 7:37
    One, you can make people actually secure
  • 7:37 - 7:38
    and hope they notice.
  • 7:38 - 7:41
    Or two, you can make people
    just feel secure
  • 7:41 - 7:43
    and hope they don't notice.
  • 7:46 - 7:48
    So what makes people notice?
  • 7:49 - 7:51
    Well, a couple of things:
  • 7:51 - 7:53
    understanding of the security,
  • 7:53 - 7:54
    of the risks, the threats,
  • 7:54 - 7:56
    the countermeasures, how they work.
  • 7:56 - 7:58
    But if you know stuff,
  • 7:58 - 8:02
    you're more likely to have
    your feelings match reality.
  • 8:03 - 8:06
    Enough real-world examples help.
  • 8:06 - 8:08
    Now we all know the crime rate
    in our neighborhood,
  • 8:08 - 8:11
    because we live there,
    and we get a feeling about it
  • 8:11 - 8:13
    that basically matches reality.
  • 8:15 - 8:17
    Security theater's exposed
  • 8:17 - 8:19
    when it's obvious
    that it's not working properly.
  • 8:21 - 8:23
    Okay, so what makes people not notice?
  • 8:23 - 8:26
    Well, a poor understanding.
  • 8:26 - 8:29
    If you don't understand the risks,
    you don't understand the costs,
  • 8:29 - 8:31
    you're likely to get the trade-off wrong,
  • 8:31 - 8:34
    and your feeling doesn't match reality.
  • 8:34 - 8:37
    Not enough examples.
  • 8:37 - 8:38
    There's an inherent problem
  • 8:38 - 8:40
    with low probability events.
  • 8:40 - 8:42
    If, for example,
  • 8:42 - 8:44
    terrorism almost never happens,
  • 8:44 - 8:46
    it's really hard to judge
  • 8:46 - 8:49
    the efficacy of counter-terrorist
    measures.
  • 8:50 - 8:53
    This is why you keep sacrificing virgins,
  • 8:53 - 8:56
    and why your unicorn defenses
    are working just great.
  • 8:56 - 8:59
    There aren't enough examples of failures.
  • 9:00 - 9:03
    Also, feelings that are clouding
    the issues --
  • 9:03 - 9:06
    the cognitive biases
    I talked about earlier,
  • 9:06 - 9:07
    fears, folk beliefs,
  • 9:09 - 9:11
    basically an inadequate model of reality.
  • 9:13 - 9:15
    So let me complicate things.
  • 9:15 - 9:17
    I have feeling and reality.
  • 9:17 - 9:20
    I want to add a third element.
    I want to add model.
  • 9:21 - 9:23
    Feeling and model are in our heads;
  • 9:23 - 9:25
    reality is the outside world.
  • 9:25 - 9:26
    It doesn't change; it's real.
  • 9:28 - 9:30
    So feeling is based on our intuition.
  • 9:30 - 9:32
    Model is based on reason.
  • 9:32 - 9:34
    That's basically the difference.
  • 9:34 - 9:36
    In a primitive and simple world,
  • 9:36 - 9:38
    there's really no reason for a model
  • 9:40 - 9:42
    because feeling is close to reality.
  • 9:42 - 9:44
    You don't need a model.
  • 9:44 - 9:47
    But in a modern and complex world,
  • 9:47 - 9:48
    you need models
  • 9:48 - 9:51
    to understand a lot of the risks we face.
  • 9:52 - 9:55
    There's no feeling about germs.
  • 9:55 - 9:57
    You need a model to understand them.
  • 9:57 - 9:59
    So this model
  • 9:59 - 10:02
    is an intelligent representation
    of reality.
  • 10:02 - 10:05
    It's, of course, limited by science,
  • 10:05 - 10:07
    by technology.
  • 10:08 - 10:10
    We couldn't have a germ theory of disease
  • 10:10 - 10:13
    before we invented
    the microscope to see them.
  • 10:14 - 10:16
    It's limited by our cognitive biases.
  • 10:18 - 10:19
    But it has the ability
  • 10:19 - 10:21
    to override our feelings.
  • 10:21 - 10:24
    Where do we get these models?
    We get them from others.
  • 10:24 - 10:27
    We get them from religion, from culture,
  • 10:27 - 10:29
    teachers, elders.
  • 10:29 - 10:31
    A couple years ago,
  • 10:31 - 10:33
    I was in South Africa on safari.
  • 10:33 - 10:36
    The tracker I was with
    grew up in Kruger National Park.
  • 10:36 - 10:39
    He had some very complex models
    of how to survive.
  • 10:39 - 10:41
    And it depended on if you were attacked
  • 10:41 - 10:43
    by a lion or a leopard
    or a rhino or an elephant --
  • 10:43 - 10:46
    and when you had to run away,
    and when you couldn't run away,
  • 10:46 - 10:50
    and when you had to climb a tree --
    when you could never climb a tree.
  • 10:50 - 10:52
    I would have died in a day,
  • 10:52 - 10:54
    but he was born there,
  • 10:54 - 10:56
    and he understood how to survive.
  • 10:56 - 10:58
    I was born in New York City.
  • 10:58 - 11:01
    I could have taken him to New York,
    and he would have died in a day.
  • 11:01 - 11:03
    (Laughter)
  • 11:03 - 11:04
    Because we had different models
  • 11:04 - 11:07
    based on our different experiences.
  • 11:08 - 11:10
    Models can come from the media,
  • 11:10 - 11:12
    from our elected officials.
  • 11:13 - 11:16
    Think of models of terrorism,
  • 11:16 - 11:18
    child kidnapping,
  • 11:18 - 11:21
    airline safety, car safety.
  • 11:21 - 11:23
    Models can come from industry.
  • 11:25 - 11:27
    The two I'm following
    are surveillance cameras
  • 11:27 - 11:29
    and ID cards;
  • 11:29 - 11:32
    quite a lot of our computer security
    models come from there.
  • 11:32 - 11:34
    A lot of models come from science.
  • 11:34 - 11:36
    Health models are a great example.
  • 11:36 - 11:39
    Think of cancer, of bird flu,
    swine flu, SARS.
  • 11:40 - 11:42
    All of our feelings of security
  • 11:42 - 11:44
    about those diseases
  • 11:44 - 11:46
    come from models
  • 11:46 - 11:49
    given to us, really, by science filtered
    through the media.
  • 11:51 - 11:53
    So models can change.
  • 11:53 - 11:55
    Models are not static.
  • 11:55 - 11:58
    As we become more comfortable
    in our environments,
  • 11:58 - 12:02
    our model can move closer to our feelings.
  • 12:04 - 12:06
    So an example might be,
  • 12:06 - 12:08
    if you go back 100 years,
  • 12:08 - 12:11
    when electricity was first
    becoming common,
  • 12:11 - 12:13
    there were a lot of fears about it.
  • 12:13 - 12:16
    I mean, there were people
    who were afraid to push doorbells,
  • 12:16 - 12:19
    because there was electricity in there,
    and that was dangerous.
  • 12:19 - 12:21
    For us, we're very facile
    around electricity.
  • 12:21 - 12:23
    We change light bulbs
  • 12:23 - 12:25
    without even thinking about it.
  • 12:25 - 12:28
    Our model of security around electricity
  • 12:28 - 12:31
    is something we were born into.
  • 12:32 - 12:34
    It hasn't changed as we were growing up.
  • 12:34 - 12:36
    And we're good at it.
  • 12:37 - 12:39
    Or think of the risks
  • 12:39 - 12:41
    on the Internet across generations --
  • 12:41 - 12:44
    how your parents approach
    Internet security,
  • 12:44 - 12:46
    versus how you do,
  • 12:46 - 12:48
    versus how our kids will.
  • 12:48 - 12:50
    Models eventually fade
    into the background.
  • 12:52 - 12:55
    Intuitive is just another word
    for familiar.
  • 12:56 - 12:58
    So when your model is close to reality
  • 12:58 - 13:00
    and converges with your feelings,
  • 13:00 - 13:02
    you often don't know it's there.
  • 13:03 - 13:05
    So a nice example of this
  • 13:05 - 13:07
    came from last year and swine flu.
  • 13:08 - 13:10
    When swine flu first appeared,
  • 13:10 - 13:13
    the initial news caused
    a lot of overreaction.
  • 13:13 - 13:15
    Now it had a name,
  • 13:15 - 13:18
    which made it scarier
    than the regular flu,
  • 13:18 - 13:20
    even though the regular flu was more deadly.
  • 13:20 - 13:23
    And people thought doctors
    should be able to deal with it.
  • 13:23 - 13:26
    So there was that feeling
    of lack of control.
  • 13:26 - 13:27
    And those two things
  • 13:27 - 13:29
    made the risk seem greater than it was.
  • 13:29 - 13:32
    As the novelty wore off,
    the months went by,
  • 13:32 - 13:34
    there was some amount of tolerance,
  • 13:34 - 13:36
    people got used to it.
  • 13:36 - 13:39
    There was no new data,
    but there was less fear.
  • 13:39 - 13:41
    By autumn,
  • 13:41 - 13:43
    people thought
  • 13:43 - 13:45
    the doctors should have solved this
    already.
  • 13:45 - 13:47
    And there's kind of a bifurcation --
  • 13:47 - 13:49
    people had to choose
  • 13:49 - 13:51
    between fear and acceptance --
  • 13:54 - 13:56
    actually fear and indifference --
  • 13:56 - 13:58
    they kind of chose suspicion.
  • 13:59 - 14:02
    And when the vaccine appeared last winter,
  • 14:02 - 14:04
    there were a lot of people
    -- a surprising number --
  • 14:04 - 14:07
    who refused to get it --
  • 14:08 - 14:10
    as a nice example
  • 14:10 - 14:14
    of how people's feelings of security
    change, how their model changes,
  • 14:14 - 14:16
    sort of wildly
  • 14:16 - 14:17
    with no new information,
  • 14:17 - 14:19
    with no new input.
  • 14:20 - 14:23
    This kind of thing happens a lot.
  • 14:23 - 14:25
    I'm going to give one more complication.
  • 14:25 - 14:27
    We have feeling, model, reality.
  • 14:28 - 14:31
    I have a very relativistic view
    of security.
  • 14:31 - 14:33
    I think it depends on the observer.
  • 14:33 - 14:35
    And most security decisions
  • 14:35 - 14:38
    have a variety of people involved.
  • 14:39 - 14:41
    And stakeholders
  • 14:41 - 14:44
    with specific trade-offs
  • 14:44 - 14:46
    will try to influence the decision.
  • 14:46 - 14:48
    And I call that their agenda.
  • 14:49 - 14:51
    And you see agenda --
  • 14:51 - 14:53
    this is marketing, this is politics --
  • 14:53 - 14:56
    trying to convince you to have
    one model versus another,
  • 14:56 - 14:58
    trying to convince you to ignore a model
  • 14:58 - 15:01
    and trust your feelings,
  • 15:01 - 15:04
    marginalizing people
    with models you don't like.
  • 15:05 - 15:07
    This is not uncommon.
  • 15:07 - 15:11
    An example, a great example,
    is the risk of smoking.
  • 15:12 - 15:14
    In the history of the past 50 years,
    the smoking risk
  • 15:14 - 15:17
    shows how a model changes,
  • 15:17 - 15:19
    and it also shows
    how an industry fights against
  • 15:19 - 15:21
    a model it doesn't like.
  • 15:22 - 15:25
    Compare that
    to the secondhand smoke debate --
  • 15:25 - 15:27
    probably about 20 years behind.
  • 15:27 - 15:30
    Think about seat belts.
  • 15:30 - 15:32
    When I was a kid, no one wore a seat belt.
  • 15:32 - 15:34
    Nowadays, no kid will let you drive
  • 15:34 - 15:36
    if you're not wearing a seat belt.
  • 15:37 - 15:39
    Compare that to the airbag debate --
  • 15:39 - 15:42
    probably about 30 years behind.
  • 15:42 - 15:45
    All examples of models changing.
  • 15:47 - 15:50
    What we learn is that
    changing models is hard.
  • 15:50 - 15:53
    Models are hard to dislodge.
  • 15:53 - 15:55
    If they equal your feelings,
  • 15:55 - 15:57
    you don't even know you have a model.
  • 15:57 - 15:59
    And there's another cognitive bias
  • 15:59 - 16:01
    I'll call confirmation bias,
  • 16:01 - 16:03
    where we tend to accept data
  • 16:03 - 16:06
    that confirms our beliefs
  • 16:06 - 16:08
    and reject data
    that contradicts our beliefs.
  • 16:14 - 16:16
    So evidence against our model,
  • 16:16 - 16:19
    we're likely to ignore,
    even if it's compelling.
  • 16:19 - 16:22
    It has to get very compelling
    before we'll pay attention.
  • 16:23 - 16:26
    New models that extend over
    long periods of time are hard.
  • 16:26 - 16:28
    Global warming is a great example.
  • 16:28 - 16:29
    We're terrible
  • 16:29 - 16:31
    at models that span 80 years.
  • 16:31 - 16:33
    We can do the next harvest.
  • 16:33 - 16:36
    We can often do until our kids grow up.
  • 16:36 - 16:39
    But 80 years, we're just not good at.
  • 16:39 - 16:41
    So it's a very hard model to accept.
  • 16:42 - 16:46
    We can have both models
    in our head simultaneously,
  • 16:46 - 16:49
    that kind of problem
  • 16:50 - 16:53
    where we're holding both beliefs together --
  • 16:53 - 16:55
    the cognitive dissonance.
  • 16:55 - 16:56
    Eventually,
  • 16:56 - 16:58
    the new model will replace the old model.
  • 16:58 - 17:01
    Strong feelings can create a model.
  • 17:02 - 17:05
    September 11th created a security model
  • 17:05 - 17:07
    in a lot of people's heads.
  • 17:07 - 17:10
    Also, personal experiences
    with crime can do it,
  • 17:10 - 17:12
    a personal health scare,
  • 17:12 - 17:14
    a health scare in the news.
  • 17:14 - 17:16
    You'll see these called flashbulb events
  • 17:16 - 17:18
    by psychiatrists.
  • 17:19 - 17:21
    They can create a model instantaneously,
  • 17:21 - 17:23
    because they're very emotive.
  • 17:24 - 17:26
    So in the technological world,
  • 17:26 - 17:28
    we don't have experience
  • 17:28 - 17:30
    to judge models.
  • 17:30 - 17:32
    And we rely on others. We rely on proxies.
  • 17:32 - 17:36
    I mean, this works
    as long as it's the correct others.
  • 17:36 - 17:38
    We rely on government agencies
  • 17:38 - 17:42
    to tell us what pharmaceuticals are safe.
  • 17:43 - 17:44
    I flew here yesterday.
  • 17:44 - 17:47
    I didn't check the airplane.
  • 17:47 - 17:49
    I relied on some other group
  • 17:49 - 17:52
    to determine whether
    my plane was safe to fly.
  • 17:52 - 17:55
    We're here, none of us fear
    the roof is going to collapse on us,
  • 17:55 - 17:57
    not because we checked,
  • 17:57 - 17:59
    but because we're pretty sure
  • 17:59 - 18:02
    the building codes here are good.
  • 18:03 - 18:05
    It's a model we just accept
  • 18:05 - 18:07
    pretty much by faith.
  • 18:08 - 18:10
    And that's okay.
  • 18:12 - 18:14
    Now, what we want
  • 18:14 - 18:16
    is for people to get familiar enough
  • 18:16 - 18:18
    with better models --
  • 18:18 - 18:20
    have it reflected in their feelings --
  • 18:20 - 18:23
    to allow them to make security trade-offs.
  • 18:24 - 18:26
    Now when these go out of whack,
  • 18:26 - 18:28
    you have two options.
  • 18:28 - 18:30
    One, you can fix people's feelings,
  • 18:30 - 18:32
    directly appeal to feelings.
  • 18:32 - 18:35
    It's manipulation, but it can work.
  • 18:35 - 18:37
    The second, more honest way
  • 18:37 - 18:40
    is to actually fix the model.
  • 18:41 - 18:43
    Change happens slowly.
  • 18:43 - 18:46
    The smoking debate took 40 years,
  • 18:46 - 18:47
    and that was an easy one.
  • 18:50 - 18:52
    Some of this stuff is hard.
  • 18:52 - 18:53
    I mean really though,
  • 18:53 - 18:55
    information seems like our best hope.
  • 18:55 - 18:57
    And I lied.
  • 18:57 - 19:00
    Remember I said feeling, model, reality;
  • 19:00 - 19:02
    I said reality doesn't change.
    It actually does.
  • 19:02 - 19:04
    We live in a technological world;
  • 19:04 - 19:06
    reality changes all the time.
  • 19:07 - 19:10
    So we might have
    -- for the first time in our species --
  • 19:10 - 19:13
    feeling chases model,
    model chases reality, reality's moving --
  • 19:13 - 19:15
    they might never catch up.
  • 19:17 - 19:18
    We don't know.
  • 19:20 - 19:22
    But in the long-term,
  • 19:22 - 19:24
    both feeling and reality are important.
  • 19:24 - 19:27
    And I want to close with two quick stories
    to illustrate this.
  • 19:27 - 19:29
    1982 -- I don't know if people
    will remember this --
  • 19:29 - 19:32
    there was a short epidemic
  • 19:32 - 19:34
    of Tylenol poisonings
    in the United States.
  • 19:34 - 19:37
    It's a horrific story.
    Someone took a bottle of Tylenol,
  • 19:37 - 19:40
    put poison in it, closed it up,
    put it back on the shelf.
  • 19:40 - 19:42
    Someone else bought it and died.
  • 19:42 - 19:44
    This terrified people.
  • 19:44 - 19:46
    There were a couple of copycat attacks.
  • 19:46 - 19:49
    There wasn't any real risk,
    but people were scared.
  • 19:49 - 19:50
    And this is how
  • 19:50 - 19:53
    the tamper-proof drug industry
    was invented.
  • 19:53 - 19:55
    Those tamper-proof caps,
    that came from this.
  • 19:55 - 19:57
    It's complete security theater.
  • 19:57 - 20:00
    As a homework assignment,
    think of 10 ways to get around it.
  • 20:00 - 20:02
    I'll give you one, a syringe.
  • 20:02 - 20:04
    But it made people feel better.
  • 20:05 - 20:07
    It made their feeling of security
  • 20:07 - 20:09
    better match the reality.
  • 20:10 - 20:12
    Last story, a few years ago,
    a friend of mine gave birth.
  • 20:12 - 20:14
    I visit her in the hospital.
  • 20:14 - 20:16
    It turns out when a baby's born now,
  • 20:16 - 20:18
    they put an RFID bracelet on the baby,
  • 20:18 - 20:20
    put a corresponding one on the mother,
  • 20:20 - 20:23
    so if anyone other than the mother
    takes the baby out of the maternity ward,
  • 20:23 - 20:24
    an alarm goes off.
  • 20:24 - 20:26
    I said, "Well, that's kind of neat.
  • 20:26 - 20:29
    I wonder how rampant baby snatching is
  • 20:29 - 20:30
    out of hospitals."
  • 20:30 - 20:32
    I go home, I look it up.
  • 20:32 - 20:34
    It basically never happens.
  • 20:35 - 20:36
    But if you think about it,
  • 20:36 - 20:38
    if you are a hospital,
  • 20:38 - 20:40
    and you need to take a baby
    away from its mother,
  • 20:40 - 20:42
    out of the room to run some tests,
  • 20:42 - 20:44
    you better have some
    good security theater,
  • 20:44 - 20:46
    or she's going to rip your arm off.
  • 20:46 - 20:48
    (Laughter)
  • 20:48 - 20:50
    So it's important for us,
  • 20:50 - 20:52
    those of us who design security,
  • 20:52 - 20:55
    who look at security policy,
  • 20:55 - 20:57
    or even look at public policy
  • 20:57 - 20:59
    in ways that affect security, to realize:
  • 20:59 - 21:02
    it's not just reality;
    it's feeling and reality.
  • 21:02 - 21:04
    What's important
  • 21:04 - 21:06
    is that they be about the same.
  • 21:06 - 21:09
    It's important because,
    if our feelings match reality,
  • 21:09 - 21:11
    we make better security trade-offs.
  • 21:11 - 21:12
    Thank you.
  • 21:12 - 21:14
    (Applause)
Description:

Bruce Schneier is an internationally-renowned security technologist and author. Described by The Economist as a "security guru," he is best known as a refreshingly candid and lucid security critic and commentator. When people want to know how security really works, they turn to Schneier.
