
Arne Padmos: Why is GPG "damn near unusable"?

  • 0:11 - 0:18
    Applause
  • 0:18 - 0:20
    So, good morning everyone
  • 0:20 - 0:24
    my name is Arne and today
  • 0:24 - 0:26
    I'll be hoping to entertain you
  • 0:26 - 0:32
    a bit with some GPG usability issues.
  • 0:32 - 0:34
    thanks for being here this early in the morning.
  • 0:34 - 0:37
    I know, some of you have had a short night
  • 0:37 - 0:43
    In short for the impatient ones:
  • 0:43 - 0:47
    Why is GnuPG damn near unusable?
  • 0:47 - 0:52
    Well, actually, I don’t know
  • 0:52 - 0:53
    Laughter
  • 0:53 - 0:58
    So more research is needed … as always.
  • 0:58 - 1:00
    Because it's not like using a thermometer.
  • 1:00 - 1:04
    We're doing something between social science and security
  • 1:04 - 1:11
    But I will present some interesting perspectives
  • 1:12 - 1:17
    or at least what I hope you'll find interesting perspectives.
  • 1:17 - 1:20
    This talk is about some possible explanations
  • 1:20 - 1:25
    that usable security research can offer to the question
  • 1:25 - 1:27
    Now some context, something about myself,
  • 1:27 - 1:34
    so you have a bit of an idea where I'm coming from
  • 1:34 - 1:39
    and what coloured glasses I have on.
  • 1:39 - 1:44
    So pretty much my background is in Mathematics,
  • 1:44 - 1:48
    Computer science, and—strangely enough—International relations
  • 1:48 - 1:52
    My professional background is that I've been doing
  • 1:52 - 1:57
    embedded system security evaluations and training
  • 1:57 - 2:03
    and I've also been a PhD student, studying the usability of security.
  • 2:03 - 2:08
    Currently, I teach the new generation,
  • 2:08 - 2:15
    hoping to bring some new blood into the security world.
  • 2:15 - 2:18
    I want to do some expectation setting
  • 2:18 - 2:21
    I want to say, what this talk is not about.
  • 2:21 - 2:24
    I will also give some helpful pointers for
  • 2:24 - 2:30
    those of you that are interested in these other areas.
  • 2:30 - 2:34
    I will not go into too much detail about the issue of truth
  • 2:34 - 2:38
    in security science.
  • 2:38 - 2:40
    Here are some links to some interesting papers that cover this
  • 2:40 - 2:43
    in a lot of detail.
  • 2:43 - 2:46
    Neither will I be giving a security primer.
  • 2:46 - 2:50
    There are some nice links to books on the slide.
  • 2:50 - 2:55
    I'll also not be giving a cryptography primer or a history lesson.
  • 2:55 - 2:58
    Neither will I be giving an introduction to PGP
  • 2:58 - 3:03
    And, interestingly enough, even though the talk is titled
  • 3:03 - 3:07
    “why is GPG damn near unusable”, I will
  • 3:07 - 3:10
    not really be doing much PGP bashing
  • 3:10 - 3:15
    I think it's quite, actually, a wonderful effort and other people
  • 3:15 - 3:21
    have pretty much done the PGP/GnuPG bashing for me.
  • 3:21 - 3:26
    And, as I've already mentioned, I will not be giving any definite answers
  • 3:26 - 3:29
    but a lot of “it depends.”
  • 3:29 - 3:34
    But then you might ask “well, it depends. What does it depend on?”
  • 3:34 - 3:37
    Well, for one: What users you’re looking at
  • 3:37 - 3:40
    which goals they have in mind and
  • 3:40 - 3:44
    in what context, what environment they’re doing these things.
  • 3:44 - 3:48
    So, instead I want to kindle your inspiration
  • 3:48 - 3:54
    I want to offer you a new view on the security environment
  • 3:54 - 4:00
    and I'll also give you some concrete exercises that you can try out
  • 4:00 - 4:02
    at home or at the office.
  • 4:02 - 4:08
    Some “do’s” and “don’ts” and pointers for further exploration:
  • 4:08 - 4:10
    This is a short overview of the talk
  • 4:10 - 4:16
    I'll start with the background story to why I’m giving this talk
  • 4:16 - 4:21
    then an overview of the usable security research area,
  • 4:21 - 4:25
    some principles and methods for usability,
  • 4:25 - 4:29
    some case studies, and then some open questions that remain.
  • 4:29 - 4:36
    So, the story. Well. It all started with this book.
  • 4:37 - 4:41
    When I was reading about the Snowden revelations,
  • 4:41 - 4:47
    I read, how Snowden tried to contact Glenn Greenwald.
  • 4:47 - 4:53
    On December 1st, he sent an email, saying, well, writing to Glenn:
  • 4:53 - 5:00
    “If you don’t use PGP, some people will never be able to contact you.”
  • 5:00 - 5:05
    “Please install this helpful tool and if you need any help,
  • 5:05 - 5:08
    please request so.”
  • 5:08 - 5:12
    Three days later, Glenn Greenwald says: “Sorry, I don’t
  • 5:12 - 5:17
    know how to do that, but I’ll look into it.”
  • 5:17 - 5:22
    and Snowden writes back: “Okay, well, sure. And again:
  • 5:22 - 5:24
    If you need any help, I can facilitate contact.”
  • 5:24 - 5:29
    Now, a mere seven weeks later,
  • 5:29 - 5:30
    Laughter
  • 5:30 - 5:37
    Glenn is like “okay, well, I’ll do it within the next days or so.”
  • 5:37 - 5:38
    Okay, sure …
  • 5:38 - 5:43
    Snowden’s like “my sincerest thanks”.
  • 5:43 - 5:46
    But actually in the meantime, Snowden was growing a bit impatient
  • 5:46 - 5:51
    ’cause, okay, “why are you not encrypting?”
  • 5:51 - 5:55
    So he sent an email to Micah Lee, saying, “okay, well, hello,
  • 5:55 - 6:02
    I’m a friend, can you help me get in contact with Laura Poitras?”
  • 6:02 - 6:06
    In addition to that, he made a ten-minute video for Glenn Greenwald
  • 6:06 - 6:09
    Laughter
  • 6:09 - 6:11
    … describing how to use GPG.
  • 6:11 - 6:17
    And actually I have quite a lot of screenshots of that video
  • 6:17 - 6:20
    and it's quite entertaining.
  • 6:20 - 6:24
    ’cause, of course, Snowden was getting increasingly
  • 6:24 - 6:28
    bothered by the whole situation.
  • 6:28 - 6:33
    Now, this is the video that Snowden made
  • 6:33 - 6:40
    “GPG for Journalists. For Windows.”
  • 6:41 - 6:47
    Laughter
  • 6:49 - 6:51
    I’ll just click through it, because I think,
  • 6:51 - 6:54
    the slides speak for themselves.
  • 6:54 - 7:01
    Take notes of all the usability issues you can identify.
  • 7:02 - 7:06
    So just click through the wizard, generate a new key,
  • 7:06 - 7:12
    enable “expert settings”, ’cause we want 3000-bit keys
  • 7:12 - 7:16
    We want a very long password, etc.
  • 7:16 - 7:19
    And now, of course, we also wanna go and find keys
  • 7:19 - 7:22
    on the keyserver.
  • 7:22 - 7:25
    And we need to make sure that we don’t
  • 7:25 - 7:29
    write our draft messages in GMail
  • 7:29 - 7:32
    or Thunderbird with Enigmail, for that matter.
  • 7:32 - 7:37
    Although that issue has been solved.
  • 7:40 - 7:42
    So, I think you can start seeing
  • 7:42 - 7:45
    why Glenn Greenwald
    —even if he did open this video—
  • 7:45 - 7:53
    was like
    “okay, well, I’m not gonna bother.”
  • 7:53 - 7:57
    And Snowden is so kind to say, after 12 minutes,
  • 7:57 - 8:02
    “if you have any remaining questions, please contact me.”
  • 8:02 - 8:06
    At this year’s HOPE conference,
  • 8:06 - 8:12
    Snowden actually did a call to arms and he said
  • 8:12 - 8:17
    “Okay, we need people to evaluate our security systems.
  • 8:17 - 8:24
    we need people to go and do red teaming. But in addition to that,
  • 8:26 - 8:31
    we also need to look at the user experience issue.”
  • 8:31 - 8:37
    So this is a transcript of his kind of manifesto
  • 8:37 - 8:42
    and he says: “GPG is really damn near unusable”
  • 8:42 - 8:47
    because, well, of course, you might know command line
  • 8:47 - 8:50
    and then, okay, you might be okay
  • 8:50 - 8:53
    but “Gam Gam at the home”, she is never going to be able
  • 8:53 - 8:59
    to use GnuPG.
  • 8:59 - 9:09
    And he also notes that, okay, we were part of a technical elite
  • 9:09 - 9:16
    and he calls on us to work on the technical literacy of people
  • 9:16 - 9:18
    because what he explicitly warns against is
  • 9:18 - 9:23
    a high priesthood of technology.
  • 9:23 - 9:28
    Okay, that’s a nice call to arms
  • 9:28 - 9:33
    but are we actually up for a new dawn?
  • 9:33 - 9:40
    Well, I wanna go into the background of usable security
  • 9:40 - 9:45
    and I wanna show you that we’ve actually been
  • 9:45 - 9:48
    in a pretty dark time.
  • 9:48 - 9:56
    So, back in 1999, there was this paper:
    “Why Johnny can’t encrypt”
  • 9:56 - 10:01
    which described mostly the same broken interface
  • 10:01 - 10:10
    so if you remember, if you go back to the video of which
  • 10:10 - 10:15
    I showed some screenshots, then, well, if you look at
  • 10:15 - 10:22
    these screenshots from 1999, well,
    is there a lot of difference?
  • 10:22 - 10:24
    Not really! Nothing much has changed.
  • 10:24 - 10:26
    There are still the same
  • 10:26 - 10:32
    conceptual barriers,
    and same crappy defaults.
  • 10:32 - 10:36
    And most astonishingly, in the paper there
  • 10:36 - 10:39
    is a description of a user study where
  • 10:39 - 10:45
    users were given 90 minutes to encrypt an email
  • 10:45 - 10:50
    and most were unable to do so.
  • 10:50 - 10:55
    I think, that pretty much describes “damn near unusable.”
  • 10:55 - 11:03
    A timeline from, well, before 1999 to now
  • 11:03 - 11:06
    of the usable security research.
  • 11:06 - 11:10
    So, quite a lot has happened
  • 11:10 - 11:15
    although it is still a growing field.
  • 11:15 - 11:21
    It started—the idea of usable security,
    it was explicitly defined first—
  • 11:21 - 11:30
    in 1975, but it was only
    in 1989 that
  • 11:30 - 11:33
    the first usability tests were carried out.
  • 11:33 - 11:38
    And only in 1996 that
  • 11:38 - 11:45
    the concept of “user-centered security”
    was described.
  • 11:45 - 11:49
    An interesting paper, also from 1999, shows how
  • 11:49 - 11:55
    contrary to the general description of users as lazy
  • 11:55 - 12:01
    and basically as the weakest link in security,
  • 12:01 - 12:06
    this paper describes users as pretty rational beings
  • 12:06 - 12:10
    who see security as an overhead and …
    who don't
  • 12:10 - 12:17
    understand the usefulness of what they’re doing.
  • 12:17 - 12:21
    The study of PGP 5.0, I’ve talked about that already,
  • 12:21 - 12:28
    and there was also a study of the
    Kazaa network in 2002.
  • 12:28 - 12:30
    And it was found that a lot of users were
  • 12:30 - 12:35
    accidentally sharing files from personal pictures,
  • 12:35 - 12:42
    who knows, maybe credit-card details,
    you never know, right?
  • 12:42 - 12:48
    In 2002, a lot of the knowledge of usable security design
  • 12:48 - 12:51
    was concretised in ten key principles
  • 12:51 - 12:54
    and if you’re interested,
  • 12:54 - 13:04
    I do recommend you look at the paper.
  • 13:04 - 13:10
    A solution to the PGP problem was proposed in
  • 13:10 - 13:12
    2004, well, actually,
  • 13:12 - 13:15
    it was proposed earlier
    but it was tested in 2005.
  • 13:15 - 13:20
    And it was found that actually
    if we automate encryption
  • 13:20 - 13:24
    and if we automate key exchange then, well,
  • 13:24 - 13:26
    things are pretty workable, except that
  • 13:26 - 13:30
    users still fall for
    phishing attacks, of course.
  • 13:30 - 13:38
    But last year, another study
    identified that, well,
  • 13:38 - 13:42
    making security transparent is all nice and well
  • 13:42 - 13:46
    but it's also dangerous because users
  • 13:46 - 13:50
    are less likely to trust the system and
  • 13:50 - 13:54
    are less likely to really understand
    what’s really happening.
  • 13:54 - 13:59
    So a paper this year also identified another issue:
  • 13:59 - 14:04
    Users generally have a very bad understanding
  • 14:04 - 14:06
    of the email architecture.
  • 14:06 - 14:09
    An email goes from point A to point B.
  • 14:09 - 14:16
    And what happens in-between is unknown.
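As a rough illustration of what actually sits “in between”: every delivered message carries one Received header per mail-server hop, and reading those headers back to front reconstructs the path. A minimal sketch in Python; the example message, hostnames, and addresses are all invented:

```python
import email

# Invented example; a real message gains one "Received:" header per hop.
raw = (
    "Received: from mx.example.org (mx.example.org [192.0.2.10])\n"
    "\tby inbox.example.net; Sun, 28 Dec 2014 11:30:00 +0000\n"
    "Received: from mail.example.com (mail.example.com [198.51.100.5])\n"
    "\tby mx.example.org; Sun, 28 Dec 2014 11:29:58 +0000\n"
    "From: alice@example.com\n"
    "To: bob@example.net\n"
    "Subject: hello\n"
    "\n"
    "An email does not hop from A to B in one magic step.\n"
)

msg = email.message_from_string(raw)
# Each server prepends its header, so reverse the list to read sender -> recipient.
for i, hop in enumerate(reversed(msg.get_all("Received", [])), start=1):
    print(f"hop {i}: {hop.split(';')[0]}")
```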
  • 14:16 - 14:23
    So, before I go on to general usability principles
  • 14:23 - 14:28
    from the founding pillar of the usable security field
  • 14:28 - 14:34
    I wanna give some examples of usability failures.
  • 14:34 - 14:37
    You might be familiar with project VENONA.
  • 14:37 - 14:42
    This was an effort by the US intelligence agencies
  • 14:42 - 14:45
    to try and decrypt Soviet communications.
  • 14:45 - 14:49
    And they actually were pretty successful.
  • 14:49 - 14:54
    They uncovered a lot of spying and, well,
  • 14:54 - 14:56
    how did they do this?
  • 14:56 - 15:00
    The Soviets were using one-time pads and,
  • 15:00 - 15:03
    well, if you reuse a one-time pad,
  • 15:03 - 15:08
    then you leak a lot of information about the plaintext.
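The VENONA weakness is easy to demonstrate: if the same pad is XORed into two messages, XORing the two ciphertexts cancels the pad entirely and hands the analyst the XOR of the two plaintexts, which crib-dragging can then unravel. A minimal sketch with invented messages:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"ATTACK AT DAWN ON THE EASTERN FRONT"
p2 = b"RETREAT TO THE WESTERN RIVER AT ONCE"
pad = os.urandom(max(len(p1), len(p2)))  # a genuine one-time pad ...

c1 = xor(p1, pad)
c2 = xor(p2, pad)                        # ... fatally reused for a second message

# The pad cancels out of c1 XOR c2, leaving p1 XOR p2 for the analyst.
print(xor(c1, c2) == xor(p1, p2))        # True
```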
  • 15:08 - 15:10
    Well, what we also see happening a lot is
  • 15:10 - 15:13
    low password entropy.
  • 15:13 - 15:19
    We have people choosing password “123456”, etc.
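A back-of-the-envelope estimate shows how little entropy is there, even under the generous assumption that every character is drawn uniformly from its apparent character set; in reality “123456” tops every cracking dictionary, so its effective entropy is close to zero:

```python
import math

def naive_entropy_bits(password: str) -> float:
    """Generous upper bound: length * log2(size of the apparent character set)."""
    pools = ["0123456789", "abcdefghijklmnopqrstuvwxyz", "ABCDEFGHIJKLMNOPQRSTUVWXYZ"]
    charset = sum(len(pool) for pool in pools if any(c in pool for c in password))
    return len(password) * math.log2(charset) if charset else 0.0

print(naive_entropy_bits("123456"))                     # ~19.9 bits, at the very best
print(naive_entropy_bits("correcthorsebatterystaple"))  # ~117.5 bits
```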
  • 15:19 - 15:25
    And what I just described, the study looking into
  • 15:25 - 15:29
    the mental models of users,
  • 15:29 - 15:32
    of the email architecture and how it works
  • 15:32 - 15:35
    well, at the top you have
  • 15:35 - 15:38
    still a pretty simplified description of how things work
  • 15:38 - 15:41
    and at the bottom we have an actual drawing
  • 15:41 - 15:43
    of a research participant when asked:
  • 15:43 - 15:50
    “Can you draw how an email
    goes from point A to point B?”
  • 15:50 - 15:54
    And it’s like:
    “Well, it goes from one place to the other.”
  • 15:54 - 15:59
    Okay …
  • 15:59 - 16:01
    clicking sounds
  • 16:01 - 16:04
    Okay, so this died.
  • 16:13 - 16:19
    So, these are two screenshots of enigmail
  • 16:19 - 16:22
    Well, if I hadn’t marked them
  • 16:22 - 16:27
    as the plaintext and encrypted email that would be sent
  • 16:27 - 16:31
    you probably wouldn’t have spotted which was which
  • 16:31 - 16:35
    this is a pretty big failure in
  • 16:35 - 16:40
    the visibility of the system.
  • 16:40 - 16:45
    You don’t see anything? Ah.
  • 16:45 - 16:48
    Audience: “That’s the point!”
    That's the point, yes!
  • 16:48 - 16:58
    laughter
    applause
  • 16:58 - 17:02
    On the left we have a screenshot of GPG and
  • 17:02 - 17:04
    as I’ve already described,
  • 17:04 - 17:09
    command line people, we like command lines
  • 17:09 - 17:11
    but normal people don’t.
  • 17:11 - 17:14
    And what we also see is a lot of the jargon that is
  • 17:14 - 17:17
    currently being used even in GUI applications
  • 17:17 - 17:23
    so on the right there is PGP 10.0.
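To make the command-line jargon point concrete, this is roughly the sequence a newcomer faces. The sketch below drives standard gpg options from Python; the keyserver, key ID, file name, and address are invented placeholders:

```python
import subprocess

# 1. Fetch the recipient's public key from a keyserver
#    (placeholder keyserver and key ID).
subprocess.run(["gpg", "--keyserver", "hkps://keyserver.example.org",
                "--recv-keys", "0xDEADBEEFDEADBEEF"], check=True)

# 2. Sign and encrypt a draft written in a local file -- not in the
#    webmail compose window, where the provider sees the plaintext.
subprocess.run(["gpg", "--armor", "--sign", "--encrypt",
                "--recipient", "journalist@example.org",
                "draft.txt"], check=True)

# gpg writes the ASCII-armored result to draft.txt.asc,
# ready to paste into an email body or attach.
```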
  • 17:23 - 17:26
    Now I wanna close these examples with
  • 17:26 - 17:29
    well, you might be wondering: “what is this?”
  • 17:29 - 17:33
    This is actually an example of a security device
  • 17:33 - 17:37
    from, I think it’s around 4000 years ago.
  • 17:37 - 17:38
    Like, people could use this.
  • 17:38 - 17:43
    Why can’t we get it right today?
  • 17:43 - 17:46
    Something that you should try,
  • 17:46 - 17:49
    this is a little homework exercise,
  • 17:49 - 17:52
    take a laptop to your grandma, show her PGP,
  • 17:52 - 17:55
    can she use it—yes or no?
  • 17:55 - 18:02
    Probably not, but who knows?
  • 18:02 - 18:04
    Now I wanna go into
  • 18:04 - 18:10
    the usability cornerstones of usable security.
  • 18:10 - 18:13
    I wanna start with heuristics
  • 18:13 - 18:16
    some people call them “rules of thumb,” other people
  • 18:16 - 18:19
    call them “the ten holy commandments”
  • 18:19 - 18:23
    For example, the ten commandments of Dieter Rams,
  • 18:23 - 18:27
    there are the ten commandments of Jakob Nielsen,
  • 18:27 - 18:28
    of Don Norman
  • 18:28 - 18:35
    and it really depends on who you believe in, etc.
  • 18:35 - 18:37
    But at the cornerstone of all of these is that
  • 18:37 - 18:40
    design is made for people.
  • 18:40 - 18:46
    And, well, actually, Google says it quite well
  • 18:46 - 18:49
    in their guiding mission:
  • 18:49 - 18:53
    “Focus on the user and all else will follow.”
  • 18:53 - 18:55
    Or, as a usability maxim:
  • 18:55 - 18:57
    “thou shalt test with thy user”
  • 18:57 - 19:01
    Don’t just give them the thing.
  • 19:01 - 19:03
    But there is one problem with these heuristics
  • 19:03 - 19:07
    and with this advice of just going with your user.
  • 19:07 - 19:11
    Because it’s pretty abstract advice.
  • 19:11 - 19:12
    What do you do?
  • 19:12 - 19:14
    You go out into the world to get practice.
  • 19:14 - 19:18
    You start observing people.
  • 19:18 - 19:20
    One nice exercise to try is:
  • 19:20 - 19:21
    go to the vending machine,
  • 19:21 - 19:25
    for example the ones at the S-Bahn.
  • 19:25 - 19:26
    Just stand next to it
  • 19:26 - 19:28
    and observe people buying tickets.
  • 19:28 - 19:31
    It’s quite entertaining, actually.
  • 19:31 - 19:34
    Laughter
  • 19:34 - 19:36
    And something you can also do is
  • 19:36 - 19:38
    search for usability failures.
  • 19:38 - 19:40
    This is what you already do when
  • 19:40 - 19:41
    you’re observing people.
  • 19:41 - 19:45
    But even just google for “usability failure”,
  • 19:45 - 19:48
    “GUI fail”, etc., and you will find
  • 19:48 - 19:53
    lots of entertaining stuff.
  • 19:53 - 19:55
    Those were some heuristics
  • 19:55 - 19:56
    but what about the principles
  • 19:56 - 20:02
    that lie behind those?
  • 20:02 - 20:06
    Usability or interaction design
  • 20:06 - 20:09
    is a cycle between the user and the system.
  • 20:09 - 20:10
    The user and the world.
  • 20:10 - 20:12
    The user acts on the world
  • 20:12 - 20:14
    and gets feedback.
  • 20:14 - 20:17
    They interpret that.
  • 20:17 - 20:18
    One important concept is
  • 20:18 - 20:20
    for things to be visible.
  • 20:20 - 20:21
    For the underlying system state
  • 20:21 - 20:22
    to be visible and
  • 20:22 - 20:24
    you get appropriate feedback
  • 20:24 - 20:26
    from the system.
  • 20:26 - 20:31
    So these are Don Norman’s gulfs
    of execution and evaluation
  • 20:31 - 20:35
    sort of yin and yang.
  • 20:35 - 20:39
    And there is two concrete problems
  • 20:39 - 20:40
    to illustrate.
  • 20:40 - 20:42
    For example, the button problem
  • 20:42 - 20:46
    that “how do you know what happens
    when you push the button?”
  • 20:46 - 20:51
    and “how do you know how to push it?”
  • 20:51 - 20:53
    I unfortunately don’t have a picture of it
  • 20:53 - 20:58
    but at Oxford station, the taps in the bathrooms
  • 20:58 - 21:02
    they say “push” and you need to turn.
  • 21:02 - 21:05
    Laughter
  • 21:05 - 21:09
    Then there is the toilet door problem.
  • 21:09 - 21:12
    The problem of “how do you know
    what state a system is in”.
  • 21:12 - 21:16
    How do you know whether an email will be encrypted?
  • 21:16 - 21:20
    This is a picture …
  • 21:20 - 21:22
    basically there is two locks.
  • 21:22 - 21:26
    One is actually broken and it’s …
    when pushing the button that's on the
  • 21:26 - 21:29
    door handle, you usually lock the door.
  • 21:29 - 21:32
    But … well … it broke. So that must have been
  • 21:32 - 21:36
    an entertaining accident.
  • 21:36 - 21:39
    Another, as I’ve already described,
  • 21:39 - 21:44
    another important concept is that of mental models.
  • 21:44 - 21:48
    It’s a question of what idea the user forms
  • 21:48 - 21:53
    of the system by interacting with it?
  • 21:53 - 21:56
    How do they acquire knowledge?
  • 21:56 - 21:59
    For example, how to achieve discoverability
  • 21:59 - 22:01
    of the system?
  • 22:01 - 22:06
    And how to ensure that while
    a user is discovering the system
  • 22:06 - 22:09
    that they are less likely to make mistakes?
  • 22:09 - 22:14
    So this is the concept of poka-yoke
  • 22:14 - 22:18
    and it’s … here is an example
    you also see with floppy disks,
  • 22:18 - 22:22
    with USB sticks, etc.
  • 22:22 - 22:24
    It’s engineered such that users are
  • 22:24 - 22:27
    less likely to make a mistake.
  • 22:27 - 22:31
    Then there’s also the idea
    of enabling knowledge transfer
  • 22:31 - 22:33
    So how can we do this?
  • 22:33 - 22:35
    One thing is metaphors.
  • 22:35 - 22:40
    And I’m not sure how many of you recognise this,
  • 22:40 - 22:44
    this is Microsoft BOB.
  • 22:44 - 22:46
    Traditionally, PC systems have been built
  • 22:46 - 22:52
    on the desktop metaphor.
  • 22:52 - 22:58
    Laughter
    Microsoft BOB had a little too much of it.
  • 22:58 - 23:05
    To enable knowledge transfer,
    you can also standardise systems.
  • 23:05 - 23:09
    And one important tool for this is design languages
  • 23:09 - 23:12
    so if you’re designing for iOS, go look at
  • 23:12 - 23:16
    the design language, the Human
    Interface Guidelines of iOS.
  • 23:16 - 23:20
    The same for Windows – go look
    at the Metro Design Guidelines.
  • 23:20 - 23:26
    As for Android, look at Material Design.
  • 23:26 - 23:30
    Because, another interesting exercise
    to try out
  • 23:30 - 23:33
    relating to design languages
  • 23:33 - 23:38
    and also to get familiar with how designers try to
  • 23:38 - 23:41
    communicate with users is to
  • 23:41 - 23:45
    look at an interface and try to decode
  • 23:45 - 23:49
    what the designer is trying to say to the user.
  • 23:49 - 23:54
    And another interesting exercise is to look at
  • 23:54 - 23:59
    not usability but UNusability.
  • 23:59 - 24:01
    So there is this pretty interesting book
  • 24:01 - 24:05
    called “evil by design” and it goes into
  • 24:05 - 24:08
    all the various techniques that designers use
  • 24:08 - 24:17
    to fool users, to get them to buy an extra hotel, car, etc.
  • 24:17 - 24:22
    and, well, RyanAir is pretty much the worst offender
  • 24:22 - 24:25
    so a good example to study.
  • 24:25 - 24:31
    Applause
  • 24:31 - 24:34
    So, what if you wanna go out into the world
  • 24:34 - 24:40
    and are gonna apply these principles, these heuristics?
  • 24:40 - 24:42
    The first thing to know is that
  • 24:42 - 24:45
    design has to be a process
  • 24:45 - 24:51
    whereby the first part is actually defining your problem.
  • 24:51 - 24:54
    You first brain-storm
  • 24:54 - 24:58
    then you try to narrow down to concrete requirements
  • 24:58 - 25:03
    after which you go and try out
    the various approaches
  • 25:03 - 25:06
    and test these.
  • 25:06 - 25:09
    What materials do usability experts actually use?
  • 25:09 - 25:16
    Well, of course there’s expensive tools, Axure, etc.
  • 25:16 - 25:19
    but I think one of the most used materials
  • 25:19 - 25:25
    is still the post-it note. Just paper and pens.
  • 25:25 - 25:29
    And, okay, where do you wanna go and test?
  • 25:29 - 25:32
    Well, actually, go out into the field.
  • 25:32 - 25:36
    Go to the ticket machine of the S-Bahn.
  • 25:36 - 25:39
    But also go and test in the lab, so that you have
  • 25:39 - 25:42
    a controlled environment.
  • 25:42 - 25:45
    And then you ask “okay, how do I test?”
  • 25:45 - 25:50
    Well, first thing is: Go and get some real people.
  • 25:50 - 25:55
    Of course, it’s … it can be difficult to actually
  • 25:55 - 26:01
    get people into the lab but it’s not impossible.
  • 26:01 - 26:02
    So once you have people in the lab,
  • 26:02 - 26:05
    here are some methods.
  • 26:05 - 26:07
    There are so many usability evaluation methods
  • 26:07 - 26:09
    that I’m not gonna list them all and
  • 26:09 - 26:13
    I encourage you to go and look them up yourself
  • 26:13 - 26:15
    ’cause it’s also very personal what works for you
  • 26:15 - 26:20
    and what works in your situation.
  • 26:20 - 26:23
    When using these methods you wanna
  • 26:23 - 26:26
    evaluate how well a solution works
  • 26:26 - 26:30
    So you’re gonna look at some metrics
  • 26:30 - 26:31
    so that at the end of your evaluation
  • 26:31 - 26:35
    you can say “okay, we’ve done a good job”,
  • 26:35 - 26:40
    “this can go better”,
    “Okay, maybe we can move that”, …
  • 26:40 - 26:44
    So these are the standardised ones, so
  • 26:44 - 26:48
    how effective people are, etc.
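The standardised trio meant here is the ISO 9241-11 one: effectiveness, efficiency, and satisfaction. A minimal sketch of turning raw lab-session notes into those three numbers; the five participants, their times, and the SUS-style satisfaction scores are all invented:

```python
# Invented results from five participants asked to send one encrypted email.
sessions = [
    {"completed": True,  "seconds": 540, "sus_score": 55},
    {"completed": False, "seconds": 900, "sus_score": 30},
    {"completed": True,  "seconds": 720, "sus_score": 48},
    {"completed": False, "seconds": 900, "sus_score": 25},
    {"completed": True,  "seconds": 610, "sus_score": 60},
]

completers = [s for s in sessions if s["completed"]]
effectiveness = len(completers) / len(sessions)                        # task success rate
efficiency = sum(s["seconds"] for s in completers) / len(completers)   # mean time on task
satisfaction = sum(s["sus_score"] for s in sessions) / len(sessions)   # e.g. SUS, 0-100

print(f"effectiveness: {effectiveness:.0%}, "
      f"efficiency: {efficiency:.0f} s per completed task, "
      f"satisfaction: {satisfaction:.0f}/100")
```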
  • 26:48 - 26:53
    You can read …
  • 26:53 - 26:56
    For a quick start guide on how to
  • 26:56 - 26:59
    perform usability studies, this is quite a nice one.
  • 26:59 - 27:00
    And the most important thing to remember
  • 27:00 - 27:05
    is that preparation is half the work.
  • 27:05 - 27:08
    The first thing is to check that everything is working,
  • 27:08 - 27:17
    make sure that you have everyone
    you need in the room, etc.
  • 27:17 - 27:23
    And maybe most importantly,
    usability and usable security,
  • 27:23 - 27:26
    well, usable security is still a growing field, but
  • 27:26 - 27:31
    usability is a very large field and most likely
  • 27:31 - 27:35
    all the problems that you are going to face
  • 27:35 - 27:37
    or at least a large percentage, other people
  • 27:37 - 27:39
    have faced before.
  • 27:39 - 27:44
    So this book is, well, it describes a lot of the stories
  • 27:44 - 27:48
    of user experience professionals and the things
  • 27:48 - 27:52
    that they’ve come up against.
  • 27:52 - 27:56
    A homework exercise, if you feel like it,
  • 27:56 - 28:01
    is basically analysing who your user is
  • 28:01 - 28:07
    and where they’re going to use the application.
  • 28:07 - 28:10
    And also something to think about is
  • 28:10 - 28:13
    how might you involve your user?
  • 28:13 - 28:17
    Not just during the usability testing,
  • 28:17 - 28:21
    but also afterwards.
  • 28:21 - 28:28
    Now I wanna go into some case
    studies of encryption systems.
  • 28:28 - 28:30
    Now there’s quite a lot, and these are not all,
  • 28:30 - 28:35
    it’s just a small selection but I wanna focus on three.
  • 28:35 - 28:40
    I wanna focus at the OpenPGP standard,
    Cryptocat and TextSecure.
  • 28:40 - 28:43
    So, OpenPGP, well …
  • 28:43 - 28:46
    email is now almost 50 years old,
  • 28:46 - 28:52
    we have an encryption standard—S/MIME,
    it is widely …
  • 28:52 - 28:56
    well, it’s widely usable but it’s not widely used …
  • 28:56 - 29:04
    and GnuPG is used widely but is not installed by default
  • 29:04 - 29:10
    and when usability teaches us one thing
  • 29:10 - 29:14
    it’s that defaults rule.
  • 29:14 - 29:18
    Because users don’t change defaults.
  • 29:18 - 29:23
    Now you might ask “Okay,
    PGP is not installed by default,
  • 29:23 - 29:27
    so is there actually still a future for OpenPGP?”
  • 29:27 - 29:30
    Well, I’d argue: Yes.
    We have browser plug-ins
  • 29:30 - 29:33
    which make it easier for users
  • 29:33 - 29:38
    JavaScript crypto … I’ll come back to that later …
  • 29:38 - 29:43
    But when we look at Mailvelope, we see, well,
  • 29:43 - 29:48
    the EFF scorecard, it has a pretty decent rating
  • 29:48 - 29:56
    at least compared to that of native PGP implementations.
  • 29:56 - 29:59
    And also Google has announced and has been working
  • 29:59 - 30:01
    for quite some time on their own plug-in for
  • 30:01 - 30:03
    end-to-end encryption.
  • 30:03 - 30:08
    And Yahoo! is also involved in that.
  • 30:08 - 30:11
    And after the Snowden revelations there has been
  • 30:11 - 30:15
    a widespread surge in the interest
  • 30:15 - 30:18
    in encrypted communications
  • 30:18 - 30:23
    and this is one website where a lot of these are listed.
  • 30:23 - 30:28
    And one project that I’d especially like to emphasise
  • 30:28 - 30:32
    is Mailpile because I think it looks
  • 30:32 - 30:35
    like a very interesting approach
  • 30:35 - 30:38
    whereby the question is:
  • 30:38 - 30:41
    Can we use OpenPGP as a stepping stone?
  • 30:41 - 30:47
    OpenPGP is not perfect, meta-data is not protected,
  • 30:47 - 30:48
    header is not protected, etc.
  • 30:48 - 30:52
    But maybe when we get people into the ecosystem,
  • 30:52 - 30:56
    we can try and gradually move
    them to more secure options.
  • 30:56 - 30:59
    Now, what about Cryptocat?
  • 30:59 - 31:04
    So, Cryptocat is an online chat platform
  • 31:04 - 31:07
    that … yes … uses JavaScript.
  • 31:07 - 31:11
    And of course, JavaScript crypto is bad
  • 31:11 - 31:15
    but it can be made better.
  • 31:15 - 31:20
    And I think JavaScript crypto is not the worst problem.
  • 31:20 - 31:23
    Cryptocat had a pretty disastrous problem
  • 31:23 - 31:27
    whereby all messages that were sent
  • 31:27 - 31:31
    were pretty easily decryptable.
  • 31:31 - 31:33
    But actually, this is just history repeating itself
  • 31:33 - 31:39
    ’cause PGP 1.0 used something called BassOmatic,
  • 31:39 - 31:45
    the BassOmatic cipher, which is also pretty weak.
  • 31:45 - 31:50
    And Cryptocat is improving, which is the important thing.
  • 31:50 - 31:51
    There is now a browser plug-in and
  • 31:51 - 31:54
    of course, there’s an app for that and
  • 31:54 - 31:57
    actually, Cryptocat is doing really, really well
  • 31:57 - 31:59
    in the EFF benchmarks.
  • 31:59 - 32:05
    And Cryptocat is asking the one question that a lot
  • 32:05 - 32:07
    of other applications are not asking, which is:
  • 32:07 - 32:09
    “How can we actually make crypto fun?”
  • 32:09 - 32:12
    When you start Cryptocat, there’s noises
  • 32:12 - 32:15
    and there’s interesting facts about cats
  • 32:15 - 32:19
    Laughter
  • 32:19 - 32:21
    … depends on whether you like cats, but still!
  • 32:21 - 32:23
    Keeps you busy!
  • 32:23 - 32:29
    Now, the last case: TextSecure
    also has pretty good marks
  • 32:29 - 32:32
    and actually just like CryptoCat,
  • 32:32 - 32:36
    the App store distribution model is something that
  • 32:36 - 32:39
    I think is a valuable one for usability.
  • 32:39 - 32:42
    It makes it easy to install.
  • 32:42 - 32:46
    And something that TextSecure is also looking at is
  • 32:46 - 32:52
    synchronisation options for your address book.
  • 32:52 - 32:57
    And I think the most interesting development is
  • 32:57 - 33:00
    on the one side, the CyanogenMod integration,
  • 33:00 - 33:05
    so that people will have encryption enabled by default.
  • 33:05 - 33:10
    ’Cause as I mentioned: People don’t change defaults.
  • 33:10 - 33:15
    And this one is a bit more controversial, but
  • 33:15 - 33:18
    there’s also the WhatsApp partnership.
  • 33:18 - 33:21
    And of course people will say “it’s not secure”,
  • 33:21 - 33:23
    we know, we know,
  • 33:23 - 33:24
    EFF knows!
  • 33:24 - 33:29
    But at least, it’s more secure than nothing at all.
  • 33:29 - 33:31
    Because: Doesn’t every little bit help?
  • 33:31 - 33:33
    Well, I’d say: yes.
  • 33:33 - 33:36
    And at least, it’s one stepping stone.
  • 33:36 - 33:40
    And, well, all of these are open-source,
  • 33:40 - 33:42
    so you can think for yourself:
  • 33:42 - 33:45
    How can I improve these?
  • 33:45 - 33:50
    Now, there’s still some open questions remaining
  • 33:50 - 33:53
    in the usable security field and in the
  • 33:53 - 33:56
    wider security field as well.
  • 33:56 - 33:59
    I won’t go into all of these,
  • 33:59 - 34:03
    I wanna focus on the issues that developers have,
  • 34:03 - 34:06
    issues of end user understanding
  • 34:06 - 34:09
    and of identity management.
  • 34:09 - 34:14
    Because in the development environment
  • 34:14 - 34:18
    there’s the crypto-plumbing problem, as some people call it.
  • 34:18 - 34:21
    How do we standardise on a cryptographic algorithm?
  • 34:21 - 34:25
    How do we make everyone use the same system?
  • 34:25 - 34:29
    Because, again, it’s history repeating itself.
  • 34:29 - 34:36
    With PGP, we had RSA, changed for
    DSA because of patent issues
  • 34:36 - 34:40
    IDEA changed for CAST5 because of patent issues
  • 34:40 - 34:42
    and now we have something similar:
  • 34:42 - 34:44
    ’cause for PGP the question is:
  • 34:44 - 34:46
    Which curve do we choose?
  • 34:46 - 34:51
    ’cause this is from Bernstein, who has got a whole list
  • 34:51 - 34:56
    of, well not all the curves,
    but a large selection of them
  • 34:56 - 34:58
    analysing the security
  • 34:58 - 35:01
    but how do you make, well, pretty much
  • 35:01 - 35:06
    the whole world agree on a single standard?
  • 35:06 - 35:11
    And also, can we move toward safer languages?
  • 35:11 - 35:18
    And I’ve been talking about the
    usability of encryption systems
  • 35:18 - 35:22
    for users, but what about for developers?
  • 35:22 - 35:26
    So, API usability, and as I’ve mentioned:
  • 35:26 - 35:28
    Language usability.
  • 35:28 - 35:32
    And on top of that, it is not just a technical issue,
  • 35:32 - 35:35
    because, of course, we secure microchips,
  • 35:35 - 35:42
    but we also wanna secure social systems.
  • 35:42 - 35:45
    Because, in principle, we live in an open system,
  • 35:45 - 35:51
    in an open society and a system cannot audit itself.
  • 35:51 - 35:56
    So, okay, what do we do, right?
    I don’t know.
  • 35:56 - 35:58
    I mean, that’s why it’s an open question!
  • 35:58 - 36:01
    ’Cause how do we ensure the authenticity of,
  • 36:01 - 36:06
    I don’t know, my Intel processor in my laptop?
  • 36:06 - 36:07
    How do I know that the
  • 36:07 - 36:09
    random number generator isn’t bogus?
  • 36:09 - 36:14
    Well, I know it is, but …
    laughter
  • 36:14 - 36:17
    Then, there’s the issue of identity management
  • 36:17 - 36:19
    related to key management, like
  • 36:19 - 36:25
    who has the keys to the kingdom?
  • 36:25 - 36:27
    One approach, as I’ve already mentioned, is
  • 36:27 - 36:29
    key continuity management.
  • 36:29 - 36:32
    Whereby we automate both key exchange
  • 36:32 - 36:36
    and encryption.
  • 36:36 - 36:38
    So one principle is trust on first use,
  • 36:38 - 36:44
    whereby, well, one approach would be to attach your key
  • 36:44 - 36:46
    to any email you send out and anyone who receives
  • 36:46 - 36:50
    this email just assumes it’s the proper key.
  • 36:50 - 36:53
    Of course, it’s not fully secure,
  • 36:53 - 36:56
    but at least, it’s something.
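A minimal sketch of that trust-on-first-use idea; the file name, addresses, and fingerprints are invented, and a real client would also have to handle key rollover and expiry:

```python
import json
import pathlib

STORE = pathlib.Path("known_senders.json")

def check_key(sender: str, fingerprint: str) -> str:
    """Trust on first use: pin the first key seen per sender and only
    complain if a later message arrives with a different one."""
    known = json.loads(STORE.read_text()) if STORE.exists() else {}
    if sender not in known:
        known[sender] = fingerprint
        STORE.write_text(json.dumps(known))
        return "first contact: key pinned"
    if known[sender] == fingerprint:
        return "key matches the one seen before"
    return "WARNING: key changed -- verify out of band"

print(check_key("ed@example.org", "A1B2C3D4E5F6A7B8"))  # first contact: key pinned
print(check_key("ed@example.org", "A1B2C3D4E5F6A7B8"))  # key matches the one seen before
print(check_key("ed@example.org", "FFFF000011112222"))  # WARNING: key changed
```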
  • 36:56 - 36:59
    And this is really, I think, the major question
  • 36:59 - 37:01
    in interoperability:
  • 37:01 - 37:05
    How do you ensure that you can access your email
  • 37:05 - 37:09
    from multiple devices?
  • 37:09 - 37:11
    Now, of course, there is meta-data leakage,
  • 37:11 - 37:14
    PGP doesn’t protect meta-data,
  • 37:14 - 37:17
    and, you know, your friendly security agency knows
  • 37:17 - 37:18
    where you went last summer …
  • 37:18 - 37:19
    So, what do we do?
  • 37:19 - 37:24
    We do anonymous routing, we send over Tor, but
  • 37:24 - 37:26
    I mean, how do we roll that out?
  • 37:26 - 37:28
    I think the approach that
  • 37:28 - 37:30
    Mailpile is trying to take is interesting
  • 37:30 - 37:33
    and, of course still an open question, but
  • 37:33 - 37:37
    interesting research nonetheless.
  • 37:37 - 37:39
    Then there’s the introduction problem of
  • 37:39 - 37:44
    okay, how, I meet someone here, after the talk,
  • 37:44 - 37:46
    they tell me who they are,
  • 37:46 - 37:50
    but either I get their card—which is nice—or
  • 37:50 - 37:52
    they say what their name is.
  • 37:52 - 37:56
    But they’re not gonna tell me, they’re not gonna spell out
  • 37:56 - 37:58
    their fingerprint.
  • 37:58 - 38:02
    So the idea of Zooko’s triangle is that identifiers
  • 38:02 - 38:08
    are either human-meaningful,
    secure or decentralised.
  • 38:08 - 38:09
    Pick two.
  • 38:09 - 38:13
    So here’s some examples of identifiers,
  • 38:13 - 38:16
    so for Bitcoin: Lots of random garbage.
  • 38:16 - 38:19
    For OpenPGP: Lots of random garbage
  • 38:19 - 38:22
    For miniLock: Lots of random garbage
  • 38:22 - 38:26
    So, I think an interesting research problem is:
  • 38:26 - 38:30
    Can we actually make these things memorable?
  • 38:30 - 38:32
    You know, I can memorise email addresses,
  • 38:32 - 38:34
    I can memorise phone numbers,
  • 38:34 - 38:40
    I can not memorise these. I can try, but …
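One direction such research takes is encoding the “random garbage” into words, trading length for memorability. The sketch below is purely illustrative: the four-word list is a toy stand-in for something like the PGP word list, and the key bytes are an invented placeholder:

```python
import hashlib

WORDS = ["apple", "brick", "cloud", "drum"]  # toy 2-bit alphabet; real schemes use far larger lists

def fingerprint_to_words(pubkey: bytes, n_words: int = 8) -> str:
    """Hash the key and spend 2 bits of the digest per word."""
    bits = int.from_bytes(hashlib.sha256(pubkey).digest(), "big")
    out = []
    for _ in range(n_words):
        out.append(WORDS[bits & 0b11])
        bits >>= 2
    return " ".join(out)

# Eight short words instead of 40 hex characters -- still long, but
# something a person could at least read out loud after a talk.
print(fingerprint_to_words(b"-----BEGIN PGP PUBLIC KEY BLOCK----- (invented key material)"))
```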
  • 38:40 - 38:45
    Then, the last open question I wanna focus on
  • 38:45 - 38:49
    is that of end-user understanding.
  • 38:49 - 38:54
    So of course, we’ll know that all devices are monitored.
  • 38:54 - 39:00
    But does the average user?
  • 39:00 - 39:05
    Do they know what worms can do?
  • 39:05 - 39:09
    Have they read these books?
  • 39:09 - 39:15
    Do they know where GCHQ is?
  • 39:15 - 39:21
    Do they know that Cupertino has
    pretty much the same powers?
  • 39:21 - 39:24
    Laughter
  • 39:24 - 39:29
    Do they know they’re living in a panopticon to come?
  • 39:29 - 39:32
    Laughter
  • 39:32 - 39:38
    Do they know that people are
    killed based on meta-data?
  • 39:38 - 39:41
    Well, I think not.
  • 39:41 - 39:46
    And actually this is a poster from the university
  • 39:46 - 39:47
    where I did my Master’s
  • 39:47 - 39:51
    and interestingly enough, it was founded by a guy
  • 39:51 - 39:56
    who made a fortune selling sugar pills.
  • 39:56 - 40:03
    You know, snake oil, we also have this in crypto.
  • 40:03 - 40:06
    And how is the user to know
  • 40:06 - 40:08
    whether something is secure or not?
  • 40:08 - 40:11
    Of course, we have the secure messaging scorecard
  • 40:11 - 40:15
    but can users find these?
  • 40:15 - 40:21
    Well, I think, there’s three aspects
    to end-user understanding
  • 40:21 - 40:24
    which is knowledge acquisition,
    knowledge transfer,
  • 40:24 - 40:27
    and the verification and updating of this knowledge.
  • 40:27 - 40:31
    So, as I’ve already mentioned,
    we can do dummy-proofing
  • 40:31 - 40:38
    and we can create transparent systems.
  • 40:38 - 40:41
    For knowledge transfer, we can
  • 40:41 - 40:44
    look at appropriate metaphors and design languages.
  • 40:44 - 40:47
    And for verification we can
  • 40:47 - 40:51
    try approaches such as truthful advertising.
  • 40:51 - 40:56
    And, last but not least, we can do user-testing.
  • 40:56 - 41:03
    Because all these open questions that I’ve described
  • 41:03 - 41:06
    and all this research that has been done,
  • 41:06 - 41:11
    I think it’s missing one key issue, which is that
  • 41:11 - 41:14
    the usability people and the security people
  • 41:14 - 41:17
    tend not to really talk to one another.
  • 41:17 - 41:21
    The open-source developers and the users:
  • 41:21 - 41:23
    Are they talking enough?
  • 41:23 - 41:27
    I think that’s something, if we want a new dawn,
  • 41:27 - 41:31
    that’s something that I think we should approach.
  • 41:31 - 41:35
    Yeah, so, from my side, that’s it.
  • 41:35 - 41:37
    I’m open for any questions.
  • 41:37 - 41:49
    Applause
  • 41:49 - 41:52
    Herald: Arne, thank you very much for your brilliant talk
  • 41:52 - 41:55
    Now, if you have any questions to ask, would you please
  • 41:55 - 41:58
    line up at the microphones in the aisles?!
  • 41:58 - 42:00
    The others who’d like to leave now,
  • 42:00 - 42:04
    I’d ask you kindly to please leave very quietly
  • 42:04 - 42:09
    so we can hear what the people
    asking questions will tell us.
  • 42:09 - 42:14
    And those at the microphones,
    if you could talk slowly,
  • 42:14 - 42:19
    then those translating have no problems in translating
  • 42:19 - 42:21
    what is being asked. Thank you very much.
  • 42:21 - 42:27
    And I think we’ll start with mic #4 on the left-hand side.
  • 42:27 - 42:32
    Mic#4: Yes, so, if you’ve been to any successful
  • 42:32 - 42:36
    crypto party, you know that crypto parties very quickly
  • 42:36 - 42:41
    turn not into discussions about software,
    how to use software,
  • 42:41 - 42:44
    but into threat model discussions.
  • 42:44 - 42:47
    And to actually get users
    to think about what they’re
  • 42:47 - 42:49
    trying to protect themselves from, and even if a certain
  • 42:49 - 42:53
    messaging app is secure, that still means nothing.
  • 42:53 - 42:56
    ’Cause there is lots of other stuff that’s going on.
  • 42:56 - 42:57
    Can you talk a little bit about that and
  • 42:57 - 43:00
    how that runs into this model about, you know,
  • 43:00 - 43:02
    how we need to educate users and, while we’re at it,
  • 43:02 - 43:04
    what we want to educate them about.
  • 43:04 - 43:06
    And what they actually need to be using.
  • 43:06 - 43:10
    Arne: Well, I think that’s an interesting point
  • 43:10 - 43:14
    and I think, one issue, one big issue is:
  • 43:14 - 43:17
    okay, we can throw lots of crypto parties
  • 43:17 - 43:21
    but we’re never gonna be able to throw enough parties.
  • 43:21 - 43:23
    I … with one party, if you’re very lucky,
  • 43:23 - 43:25
    you’re gonna educate 100 people.
  • 43:25 - 43:29
    I mean, just imagine how many parties
    you’d need to throw. Right?
  • 43:29 - 43:33
    I mean, it’s gonna be a heck of party, but … yeah.
  • 43:33 - 43:39
    And I think, secondly, the question of threat modeling,
  • 43:39 - 43:43
    I think, sure, that’s helpful to do, but
  • 43:43 - 43:48
    I think, users do first need an understanding of,
  • 43:48 - 43:49
    for example, the email architecture.
  • 43:49 - 43:52
    Cause, how can they do threat
    modeling when they think
  • 43:52 - 43:55
    that an email magically pops
    from one computer to the next?
  • 43:55 - 43:59
    I think, that is pretty much impossible.
  • 43:59 - 44:01
    I hope that …
  • 44:01 - 44:05
    Herald: Thank you very much, so …
    Microphone #3, please.
  • 44:05 - 44:07
    Mic#3: Arne, thank you very much for your talk.
  • 44:07 - 44:10
    There’s one aspect that I didn’t see in your slides.
  • 44:10 - 44:13
    And that is the aspect of the language that we use
  • 44:13 - 44:17
    to describe concepts in PGP—and GPG, for that matter.
  • 44:17 - 44:20
    And I know that there was a paper last year
  • 44:20 - 44:22
    about why King George can’t encrypt and
  • 44:22 - 44:24
    they were trying to propose a new language.
  • 44:24 - 44:26
    Do you think that such initiatives are worthwhile
  • 44:26 - 44:29
    or are we stuck with this language and should we make
  • 44:29 - 44:32
    as good use of it as we can?
  • 44:32 - 44:38
    Arne: I think that’s a good point
    and actually the question
  • 44:38 - 44:45
    of “okay, what metaphors do you wanna use?” … I think
  • 44:45 - 44:47
    we’re pretty much stuck with the language
  • 44:47 - 44:50
    that we’re using for the moment but
  • 44:50 - 44:54
    I think it does make sense
    to go and look into the future
  • 44:54 - 44:58
    at alternative models.
  • 44:58 - 45:01
    Yeah, so I actually wrote a paper that also
  • 45:01 - 45:05
    goes into that a bit, looking at
  • 45:05 - 45:09
    the metaphor of handshakes to exchange keys.
  • 45:09 - 45:10
    So, for example, you could have
  • 45:10 - 45:16
    an embedded device as a ring or wristband,
  • 45:16 - 45:19
    it could even be a smartwatch, for that matter.
  • 45:19 - 45:22
    Could you use that shaking of hands to
  • 45:22 - 45:24
    build trust-relationships?
  • 45:24 - 45:30
    And that might be a better
    metaphor than key-signing,
  • 45:30 - 45:31
    webs of trust, etc.
  • 45:31 - 45:35
    ’Cause I think, that is horribly broken
  • 45:35 - 45:40
    as a concept—I mean, trying
    to explain that to users.
  • 45:40 - 45:43
    Herald: Thank you. And … at the back in the middle.
  • 45:43 - 45:45
    Signal angel: Thanks. A question from the internet:
  • 45:45 - 45:47
    [username?] from the Internet wants to know if you’re
  • 45:47 - 45:52
    aware of the PEP project, the “pretty easy privacy”
  • 45:52 - 45:53
    and your opinions on that.
  • 45:53 - 45:55
    And another question is:
  • 45:55 - 46:02
    How important is the trust level of the crypto to you?
  • 46:02 - 46:04
    Arne: Well, yes, actually, there’s this screenshot
  • 46:04 - 46:10
    of the PEP project in the slides
  • 46:10 - 46:15
    … in the part about why WhatsApp is horribly insecure, and
  • 46:15 - 46:19
    of course, I agree, and yeah.
  • 46:19 - 46:22
    I’ve looked into the PEP project for a bit
  • 46:22 - 46:25
    and I think, yeah, I think it’s an interesting
  • 46:25 - 46:28
    approach but I still have to read up on it a bit more.
  • 46:28 - 46:31
    Then, for the second question,
  • 46:31 - 46:38
    “how important is the trust in the crypto?”:
  • 46:38 - 46:42
    I think that’s an important one.
  • 46:42 - 46:43
    Especially the question of
  • 46:43 - 46:53
    “how do we build social systems
    to ensure reliable cryptography?”
  • 46:53 - 46:57
    So one example is the Advanced
    Encryption Standard competition.
  • 46:57 - 47:00
    Everyone was free to send in entries,
  • 47:00 - 47:02
    their design principles were open
  • 47:02 - 47:06
    and this is in complete contrast
    to the Data Encryption Standard
  • 47:06 - 47:12
    whose design principles are, I think, still Top Secret.
  • 47:12 - 47:16
    So yeah, I think, the crypto is
    something we need to build on
  • 47:16 - 47:22
    but, well, actually, the crypto is
    again built on other systems
  • 47:22 - 47:28
    where the trust in these systems
    is even more important.
  • 47:28 - 47:31
    Herald: Okay, thank you, microphone #2, please.
  • 47:31 - 47:34
    Mic#2: Yes, Arne, thank you very
    much for your excellent talk.
  • 47:34 - 47:38
    I wonder about what to do with feedback
  • 47:38 - 47:41
    on usability in open-source software.
  • 47:41 - 47:42
    So, you publish something on GitHub
  • 47:42 - 47:44
    and you’re with a group of people
  • 47:44 - 47:45
    who don’t know each other and
  • 47:45 - 47:48
    one publishes something,
    the other publishes something,
  • 47:48 - 47:51
    and then: How do we know
    this software is usable?
  • 47:51 - 47:54
    In commercial software, there’s all kind of hooks
  • 47:54 - 47:56
    on the website, on the app,
  • 47:56 - 47:59
    to send feedback to the commercial vendor.
  • 47:59 - 48:02
    But in open-source software,
    how do you gather this information?
  • 48:02 - 48:05
    How do you use it, is there any way to do this
  • 48:05 - 48:06
    in an anonymised way?
  • 48:06 - 48:09
    I haven’t seen anything related to this.
  • 48:09 - 48:11
    Is this one of the reasons why
    open-source software is maybe
  • 48:11 - 48:15
    less usable than commercial software?
  • 48:15 - 48:20
    Arne: It might be. It might be.
  • 48:20 - 48:23
    But regarding your question, like, how do you know
  • 48:23 - 48:30
    whether a commercial software is usable, well,
  • 48:30 - 48:32
    you … one way is looking at:
  • 48:32 - 48:35
    Okay, what kind of statistics do you get back?
  • 48:35 - 48:38
    But if you push out something totally unusable
  • 48:38 - 48:40
    and then, I mean, you’re going to expect
  • 48:40 - 48:45
    that the statistics come back looking like shit.
  • 48:45 - 48:50
    So, the best approach is to
    design usability in from the start.
  • 48:50 - 48:51
    The same with security.
  • 48:51 - 48:55
    And I think, that is also …
    so even if you have …
  • 48:55 - 48:59
    you want privacy for end users, I think it’s still possible
  • 48:59 - 49:02
    to get people into the lab and look at:
  • 49:02 - 49:03
    Okay, how are they using the system?
  • 49:03 - 49:06
    What things can we improve?
  • 49:06 - 49:08
    And what things are working well?
  • 49:08 - 49:11
    Mic#2: So you’re saying, you should only publish
  • 49:11 - 49:19
    open-source software for users
    if you also tested in a lab?
  • 49:19 - 49:23
    Arne: Well, I think, this is a bit of a discussion of:
  • 49:23 - 49:26
    Do we just allow people to build
    houses however they want to
  • 49:26 - 49:28
    or do we have building codes?
  • 49:28 - 49:32
    And … I think … well, actually, this proposal of holding
  • 49:32 - 49:36
    software developers responsible for what they produce,
  • 49:36 - 49:40
    if it’s commercial software, I mean,
    that proposal has been
  • 49:40 - 49:42
    made a long time ago.
  • 49:42 - 49:43
    And the question is:
  • 49:43 - 49:48
    How would that work in an
    open-source software community?
  • 49:48 - 49:50
    Well, actually, I don’t have an answer to that.
  • 49:50 - 49:53
    But I think, it’s an interesting question.
  • 49:53 - 49:54
    Mic#2: Thank you.
  • 49:54 - 49:58
    Herald: Thank you very much. #1, please.
  • 49:58 - 50:01
    Mic#1: You said that every little bit helps,
  • 50:01 - 50:04
    so if we have systems that
    don’t provide a lot of … well …
  • 50:04 - 50:07
    are almost insecure, they
    provide just a bit of security, then
  • 50:07 - 50:10
    that is still better than no security.
  • 50:10 - 50:13
    My question is: Isn’t that actually worse because
  • 50:13 - 50:15
    this promotes a false sense of security and
  • 50:15 - 50:20
    that makes people just use
    the insecure broken systems
  • 50:20 - 50:24
    just to say “we have some security with us”?
  • 50:24 - 50:26
    Arne: I completely agree but
  • 50:26 - 50:29
    I think that currently people … I mean …
  • 50:29 - 50:31
    when you think an email goes
  • 50:31 - 50:34
    from one system to the other directly
  • 50:34 - 50:41
    and I mean … from these studies
    that I’ve done, I’ve met
  • 50:41 - 50:46
    quite some people who still think email is secure.
  • 50:46 - 50:50
    So, of course, you might give
    them a false sense of security
  • 50:50 - 50:53
    when you give them a
    more secure program but
  • 50:53 - 50:54
    at least it’s more secure than email—right?
  • 50:54 - 50:56
    I mean …
  • 50:56 - 50:57
    Mic#1: Thank you.
  • 50:57 - 51:00
    Herald: Thank you. There’s another
    question on the Internet.
  • 51:00 - 51:03
    Signal angel: Yes, thank you. Question from the Internet:
  • 51:03 - 51:06
    What crypto would you finally
    recommend your grandma to use?
  • 51:06 - 51:10
    Arne: laughs
  • 51:10 - 51:16
    Well … Unfortunately, my grandma
    has already passed away.
  • 51:16 - 51:20
    I mean … her secrets will be safe …
  • 51:27 - 51:32
    Actually, I think something like where
  • 51:32 - 51:37
    Crypto is enabled by default, say …
    iMessage, I mean
  • 51:37 - 51:42
    of course, there’s backdoors,
    etc., but at least
  • 51:42 - 51:45
    it is more secure than plain SMS.
  • 51:45 - 51:53
    So I would advise my grandma
    to use … well, to look at …
  • 51:53 - 51:56
    actually I’d first analyse
    what does she have available
  • 51:56 - 51:59
    and then I would look at okay
    which is the most secure
  • 51:59 - 52:04
    and still usable?
  • 52:04 - 52:07
    Herald: Thank you very much, so mic #3, please.
  • 52:07 - 52:11
    Mic#3: So, just wondering:
  • 52:11 - 52:15
    You told that there is
    a problem with the missing
  • 52:15 - 52:20
    default installation of GPG
    on operating systems but
  • 52:20 - 52:25
    I think, this is more of a
    problem of which OS you choose
  • 52:25 - 52:28
    because at least I don’t
    know any Linux system which
  • 52:28 - 52:34
    doesn’t have GPG installed today by default.
  • 52:34 - 52:40
    If you use … at least I’ve used
    the normal workstation setup.
  • 52:40 - 52:43
    Arne: Yes, I think you already
    answered your own question:
  • 52:43 - 52:47
    Linux.
    Laughter
  • 52:47 - 52:51
    Unfortunately, Linux is not yet widely the default.
  • 52:51 - 52:53
    I mean, I’d love it to be, but … yeah.
  • 52:53 - 52:58
    Mic#3: But if I send an email to Microsoft and say:
  • 52:58 - 53:03
    Well, install GPG by default, they’re not gonna
  • 53:03 - 53:04
    listen to me.
  • 53:04 - 53:08
    And I think, for all of us, we should do
  • 53:08 - 53:09
    a lot more of that.
  • 53:09 - 53:14
    Even if Microsoft is the devil for most of us.
  • 53:14 - 53:16
    Thank you.
  • 53:16 - 53:20
    Arne: Well … We should be doing more of what?
  • 53:20 - 53:26
    Mic#3: Making more demands
    to integrate GPG by default
  • 53:26 - 53:29
    in Microsoft products, for example.
  • 53:29 - 53:31
    Arne: Yes, I completely agree.
  • 53:31 - 53:34
    Well, what you already see happening …
  • 53:34 - 53:36
    or I mean, it’s not very high-profile yet,
  • 53:36 - 53:39
    but for example I mean … I’ve referred to
  • 53:39 - 53:43
    the EFF scorecard a couple of times but
  • 53:43 - 53:50
    that is some pressure to encourage developers
  • 53:50 - 53:53
    to include security by default.
  • 53:53 - 53:57
    But, I think I’ve also mentioned, one of the big problems
  • 53:57 - 54:01
    is: users at the moment … I mean …
  • 54:01 - 54:04
    developers might say: my system is secure.
  • 54:04 - 54:07
    I mean … what does that mean?
  • 54:07 - 54:10
    Do we hold developers and
    commercial entities …
  • 54:10 - 54:12
    do we hold them to, well,
  • 54:12 - 54:14
    truthful advertising standards or not?
  • 54:14 - 54:17
    I mean, I would say: Let’s go and look at
  • 54:17 - 54:21
    what are companies claiming and
  • 54:21 - 54:23
    do they actually stand up to that?
  • 54:23 - 54:26
    And if not: Can we actually sue them?
  • 54:26 - 54:28
    Can we make them tell the truth about
  • 54:28 - 54:31
    what is happening and what is not?
  • 54:31 - 54:33
    Herald: So, we’ve got about 2 more minutes left …
  • 54:33 - 54:37
    So it’s a maximum of two more questions, #2, please.
  • 54:37 - 54:43
    Mic#2: Yeah, so … Every security system fails.
  • 54:43 - 54:50
    So I’m interested in what sort of work has been done on
  • 54:50 - 54:57
    how do users recover from failure?
  • 54:57 - 55:01
    Everything will get subverted,
  • 55:01 - 55:04
    your best friend will sneak
    your key off your computer,
  • 55:04 - 55:06
    something will go wrong with that, you know …
  • 55:06 - 55:10
    your kids will grab it …
  • 55:10 - 55:13
    and just, is there, in general, has somebody looked at
  • 55:13 - 55:17
    these sorts of issues?
  • 55:17 - 55:19
    Is there research on it?
  • 55:19 - 55:22
    Arne: Of various aspects of the problem but
  • 55:22 - 55:26
    as far as I’m aware not for the general issue
  • 55:26 - 55:30
    and not any field studies specifically looking at
  • 55:30 - 55:34
    “Okay, what happens when a key is compromised, etc.”
  • 55:34 - 55:38
    I mean, we do have certain cases of things happening
  • 55:38 - 55:42
    but nothing structured.
  • 55:42 - 55:45
    No structured studies, as far as I’m aware.
  • 55:45 - 55:47
    Herald: Thank you. #3?
  • 55:47 - 55:52
    Mic#3: Yeah, you mentioned
    Mailpile as a stepping stone
  • 55:52 - 55:56
    for people to start using GnuPG and stuff, but
  • 55:56 - 56:05
    you also talked about an
    average user seeing mail as just
  • 56:05 - 56:09
    coming from one place and
    then ending up in another place.
  • 56:09 - 56:12
    Shouldn’t we actually talk about
  • 56:12 - 56:18
    how to make encryption transparent for the users?
  • 56:18 - 56:21
    Why should they actually care about these things?
  • 56:21 - 56:25
    Shouldn’t it be embedded in the protocols?
  • 56:25 - 56:29
    Shouldn’t we actually talk about
    embedding them in the protocols,
  • 56:29 - 56:32
    stop using insecure protocols
  • 56:32 - 56:36
    and having all of these,
    you talked a little bit about it,
  • 56:36 - 56:39
    as putting it in the defaults.
  • 56:39 - 56:43
    But shouldn’t we emphasise that a lot more?
  • 56:43 - 56:47
    Arne: Yeah, I think we should
    certainly be working towards
  • 56:47 - 56:50
    “How do we get security by default?”
  • 56:50 - 56:54
    But I think … I’ve mentioned it shortly that
  • 56:54 - 56:58
    making things transparent also has a danger.
  • 56:58 - 57:01
    I mean, this whole, it’s a bit like …
  • 57:01 - 57:03
    saying “a system should be transparent” is a bit like
  • 57:03 - 57:06
    marketing speak, because actually
  • 57:06 - 57:09
    we don’t want systems to be completely transparent,
  • 57:09 - 57:13
    ’cause we also wanna be able
    to engage with the systems.
  • 57:13 - 57:16
    Are the systems working as they should be?
  • 57:16 - 57:20
    So, I mean, this is a difficult balance to find, but yeah …
  • 57:20 - 57:25
    Something that you achieve through usability studies,
  • 57:25 - 57:29
    security analysis, etc.
  • 57:29 - 57:31
    Herald: All right, Arne,
    thank you very much for giving
  • 57:31 - 57:34
    us your very inspiring talk,
  • 57:34 - 57:36
    thank you for sharing your information with us.
  • 57:36 - 57:38
    Please give him a round of applause.
  • 57:38 - 57:41
    Thank you very much.
    applause
  • 57:41 - 57:52
    subtitles created by c3subtitles.de
    Join, and help us!
Title:
Arne Padmos: Why is GPG "damn near unusable"?
Description:

http://media.ccc.de/browse/congress/2014/31c3_-_6021_-_en_-_saal_g_-_201412281130_-_why_is_gpg_damn_near_unusable_-_arne_padmos.html

GPG has been correctly described as "damn near unusable". Why is this so? What does research into usable security tell us? This talk covers the history, methods, and findings of the research field, as well as proposed solutions and open questions.

Arne Padmos

Video Language:
English
Duration:
57:52
