Applause
So, good morning, everyone.
My name is Arne, and today
I'll be hoping to entertain you
a bit with some GPG usability issues.
Thanks for being here this early in the morning.
I know some of you have had a short night.
In short for the impatient ones:
Why is GnuPG damn near unusable?
Well, actually, I don’t know
Laughter
So more research is needed … as always.
Because it's not like using a thermometer.
We're doing something between social science and security.
But I will present some interesting perspectives,
or at least what I hope you'll find interesting perspectives.
This talk is about some possible explanations
that usable security research can offer to this question.
Now some context, something about myself,
so you have a bit of an idea where I'm coming from
and what coloured glasses I have on.
So pretty much my background is in mathematics,
computer science, and—strangely enough—international relations.
My professional background is that I've been doing
embedded system security evaluations and training
and I've also been a PhD student, studying the usability of security.
Currently, I teach the new generation,
hoping to bring some new blood into the security world.
I want to do some expectation setting:
I want to say what this talk is not about.
I will also give some helpful pointers for
those of you that are interested in these other areas.
I will not go into too much detail about the issue of truth
in security science.
Here are some links to some interesting papers that cover this
in a lot of detail.
Neither will I be giving a security primer.
There are some nice links to books on the slide.
I'll also not be giving a cryptography primer or a history lesson.
Neither will I be giving an introduction to PGP
And, interestingly enough, even though the talk is titled
“why is GPG damn near unusable”, I will
not really be doing much PGP bashing
I think it's actually quite a wonderful effort, and other people
have pretty much done the PGP/GnuPG bashing for me.
And, as I've already mentioned, I will not be giving any definite answers,
but rather a lot of “it depends.”
But then you might ask “well, it depends. What does it depend on?”
Well, for one: What users you’re looking at
which goals they have in mind and
in what context, what environment they’re doing these things.
So, instead I want to kindle your inspiration
I want to offer you a new view on the security environment
and I'll also give you some concrete exercises that you can try out
at home or at the office.
Some “dos” and “don’ts” and pointers for further exploration:
This is a short overview of the talk.
I'll start with the background story of why I'm giving this talk,
then an overview of the usable security research area,
some principles and methods for usability,
some case studies, and then some open questions that remain.
So, the story. Well. It all started with this book.
When I was reading about the Snowden revelations,
I read how Snowden tried to contact Glenn Greenwald.
On December 1st, he sent an email to Glenn, saying:
“If you don’t use PGP, some people will never be able to contact you.”
“Please install this helpful tool and if you need any help,
please request so.”
Three days later, Glenn Greenwald says: “Sorry, I don’t
know how to do that, but I’ll look into it.”
and Snowden writes back: “Okay, well, sure. And again:
If you need any help, I can facilitate contact.”
Now, a mere seven weeks later,
Laughter
Glenn is like “okay, well, I’ll do it within the next days or so.”
Okay, sure …
Snowden’s like “my sincerest thanks”.
But actually in the meantime, Snowden was growing a bit impatient
’cause, okay, “why are you not encrypting?”
So he sent an email to Micah Lee, saying, “okay, well, hello,
I'm a friend, can you help me get in contact with Laura Poitras?”
In addition to that, he made a ten-minute video for Glenn Greenwald
Laughter
… describing how to use GPG.
And actually I have quite a lot of screenshots of that video
and it's quite entertaining.
’cause, of course, Snowden was getting increasingly
bothered by the whole situation.
Now, this is the video that Snowden made
“GPG for Journalists. For Windows.”
Laughter
I’ll just click through it, because I think,
the slides speak for themselves.
Take note of all the usability issues you can identify.
So just click through the wizard, generate a new key,
enable “expert settings”, ’cause we want 3000-bit keys
We want a very long password, etc.
And now, of course, we also wanna go and find keys
on the keyserver.
And we need to make sure that we don't
write our draft messages in GMail,
or in Thunderbird and Enigmail, for that matter.
Although that issue has since been solved.
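For a sense of what the video walks through, here is a minimal sketch of the same steps scripted with the python-gnupg wrapper; the wrapper, the addresses, the passphrase, and the keyserver are illustrative assumptions on my part, since the video itself uses a Windows GUI.

```python
# A sketch of the video's steps via the python-gnupg wrapper
# (illustrative: addresses, passphrase and keyserver are made up).
import os
import gnupg

os.makedirs('/tmp/gpg-demo', exist_ok=True)
gpg = gnupg.GPG(gnupghome='/tmp/gpg-demo')   # isolated demo keyring

# Generate a new key pair ("expert settings": long key, long passphrase).
key_input = gpg.gen_key_input(
    key_type='RSA',
    key_length=3072,
    name_email='journalist@example.org',
    passphrase='a very long passphrase goes here',
)
key = gpg.gen_key(key_input)
print('new key fingerprint:', key.fingerprint)

# Look up a correspondent's key on a keyserver.
found = gpg.search_keys('glenn@example.org',
                        keyserver='hkps://keys.openpgp.org')
print(found)
```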
So, I think you can start seeing
why Glenn Greenwald
—even if he did open this video—
was like
“okay, well, I’m not gonna bother.”
And Snowden is so kind to say, after 12 minutes,
“if you have any remaining questions, please contact me.”
At this year’s HOPE conference,
Snowden actually did a call to arms, and he said:
“Okay, we need people to evaluate our security systems.
we need people to go and do red team. But in addition to that,
we also need to look at the user experience issue.”
So this is a transcript of his kind of manifesto
and he says: “GPG is really damn near unusable”
because, well, of course, if you know the command line,
then, okay, you might be okay,
but “Gam Gam at the home”, she is never going to be able
to use GnuPG.
And he also notes that, okay, we were part of a technical elite
and he calls on us to work on the technical literacy of people
because what he explicitly warns against is
a high priesthood of technology.
Okay, that’s a nice call to arms
but are we actually up for a new dawn?
Well, I wanna go into the background of usable security
and I wanna show you that we’ve actually been
in a pretty dark time.
So, back in 1999, there was this paper:
“Why Johnny can’t encrypt”
which described mostly the same broken interface
so if you remember, if you go back to the video of which
I showed some screenshots, then, well, if you look at
these screenshots from 1999, well,
is there a lot of difference?
Not really! Nothing much has changed.
There are still the same
conceptual barriers,
and same crappy defaults.
And most astonishingly, in the paper there
is a description of a user study where
users were given 90 minutes to encrypt an email
and most were unable to do so.
I think, that pretty much describes “damn near unusable.”
Here is a timeline from, well, before 1999 to now
of usable security research.
So, quite a lot has happened
although it is still a growing field.
The idea of usable security
was first explicitly defined
in 1975, but it was
only in 1989 that
the first usability tests were carried out.
And it was only in 1996 that
the concept of “user-centered security”
was described.
An interesting paper, also from 1999, shows how,
contrary to the general description of users as lazy
and basically as the weakest link in security,
users are actually pretty rational beings
who see security as an overhead,
and who don't
understand the usefulness of what they're doing.
The study of PGP 5.0, I've talked about that already,
and there was also a study of the
Kazaa network in 2002.
And it was found that a lot of users were
accidentally sharing files: personal pictures,
who knows, maybe credit-card details,
you never know, right?
In 2002, a lot of the knowledge of usable security design
was concretised in ten key principles
and if you’re interested,
I do recommend you to look at the paper.
A solution to the PGP problem was proposed in
2004, well, actually,
it was proposed earlier
but it was tested in 2005.
And it was found that actually
if we automate encryption
and if we automate key exchange then, well,
things are pretty workable, except that
users still fall for
phishing attacks, of course.
But last year, another study
identified that, well,
making security transparent is all nice and well,
but it's also dangerous, because users
are then less likely to trust the system and
less likely to really understand
what's actually happening.
So a paper this year also identified another issue:
Users generally have very bad understanding
of the email architecture.
An email goes from point A to point B.
And what happens in-between is unknown.
So, before I go on to general usability principles,
the founding pillar of the usable security field,
I wanna give some examples of usability failures.
You might be familiar with project VENONA.
This was an effort by the US intelligence agencies
to try and decrypt Soviet communications.
And they actually were pretty successful.
They uncovered a lot of spying and, well,
how did they do this?
The Soviets were reusing their one-time pads and,
well, if you reuse a one-time pad,
then you leak a lot of information about the plaintext.
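A minimal sketch, with made-up messages and a made-up pad, of why reuse is fatal: XOR-ing two ciphertexts that share a pad cancels the key and leaves the XOR of the two plaintexts.

```python
# Reusing a one-time pad: the attacker XORs the two ciphertexts
# and the pad drops out, leaving p1 XOR p2 (made-up messages).
import os

p1 = b'ATTACK AT DAWN'
p2 = b'RETREAT AT SIX'          # same length as p1
pad = os.urandom(len(p1))       # the pad that gets (wrongly) reused

c1 = bytes(x ^ k for x, k in zip(p1, pad))
c2 = bytes(x ^ k for x, k in zip(p2, pad))

leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(p1, p2))
# c1 XOR c2 == p1 XOR p2: no key material is left. From here, classic
# crib-dragging (guessing words like b' AT ') recovers both plaintexts.
```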
Well, what we also see happening a lot is
low password entropy.
We have people choosing the password “123456”, etc.
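For scale, a rough back-of-the-envelope entropy calculation, assuming uniformly random choice, which real users demonstrably don't make:

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Bits of entropy for a uniformly random password."""
    return length * math.log2(alphabet_size)

print(entropy_bits(10, 6))    # 6 random digits: ~19.9 bits
print(entropy_bits(62, 8))    # 8 chars of [a-zA-Z0-9]: ~47.6 bits
print(entropy_bits(7776, 6))  # 6-word Diceware phrase: ~77.5 bits
# And "123456" itself is on every top-10 list, so in practice it has
# effectively ~0 bits, whatever the uniform formula says.
```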
And here is what I just described, the study looking into
the mental models that users have
of the email architecture and how it works:
well, at the top you have
a still pretty simplified description of how things work,
and at the bottom we have an actual drawing
by a research participant when asked:
“Can you draw how an email
goes from point A to point B?”
And it's like:
“Well, it goes from one place to the other.”
Okay …
Clicking sounds
Okay, so this died.
So, these are two screenshots of Enigmail.
Well, if I hadn't marked them
as the plaintext and the encrypted email that would be sent,
you probably wouldn't have spotted which was which.
This is a pretty big failure in
the visibility of the system.
You don’t see anything? Ah.
Audience: “That’s the point!”
That's the point, yes!
Laughter
Applause
On the left we have a screenshot of GPG and,
as I've already described,
command-line people, we like command lines,
but normal people don't.
And what we also see is a lot of the jargon that is
currently being used even in GUI applications
so on the right there is PGP 10.0.
Now I wanna close these examples with …
well, you might be wondering: “what is this?”
This is actually an example of a security device
from, I think, around 4000 years ago.
Like, people could use this.
Why can’t we get it right today?
Something that you should try,
as a little homework exercise:
take a laptop to your grandma, show her PGP,
can she use it—yes or no?
Probably not, but who knows?
Now I wanna go into
the usability cornerstones of usable security.
I wanna start with heuristics
some people call them “rules of thumb,” other people
call them “the ten holy commandments”
For example, there are the ten commandments of Dieter Rams,
the ten commandments of Jakob Nielsen,
those of Don Norman,
and it really depends on who you believe in, etc.
But at the cornerstone of all of these is that
design is made for people.
And, well, actually, Google says it quite well
in their guiding mission:
“Focus on the user and all else will follow.”
Or, as a usability maxim:
“thou shalt test with thy user”
Don’t just give them the thing.
But there is one problem with these heuristics
and with this advice of just going with your user.
Because it's pretty abstract advice.
What do you do?
You go out into the world to get practice.
You start observing people.
One nice exercise to try is:
go to the vending machine,
for example the ones at the S-Bahn.
Just stand next to it
and observe people buying tickets.
It’s quite entertaining, actually.
Laughter
And something you can also do is
search for usability failures.
This is what you already do when
you’re observing people.
But even just google for “usability failure”,
“GUI fail”, etc., and you will find
lots of entertaining stuff.
Those were some heuristics,
but what about the principles
that lie behind them?
Usability or interaction design
is a cycle between the user and the system.
The user and the world.
The user acts on the world
and gets feedback.
They interpret that.
One important concept is
for things to be visible:
for the underlying system state
to be visible, and for
you to get appropriate feedback
from the system.
So these are Don Norman’s gulfs
of execution and evaluation,
sort of a yin and yang.
And there are two concrete problems
to illustrate this.
For example, the button problem:
how do you know what happens
when you push the button,
and how do you know how to push it?
I unfortunately don't have a picture of it,
but at Oxford station, the taps in the bathrooms,
they say “push” and you need to turn them.
Laughter
Then there is the toilet door problem.
The problem of “how do you know
what state a system is in”.
How do you know whether an email will be encrypted?
This is a picture …
basically there are two locks.
One is actually broken and it’s …
when pushing the button that's on the
door handle, you usually lock the door.
But … well … it broke. So that must have been
an entertaining accident.
Another important concept, as I've already described,
is that of mental models.
It's a question of what idea the user has
of the system from interacting with it.
How do they acquire knowledge?
For example, how to achieve discoverability
of the system?
And how to ensure that while
a user is discovering the system
that they are less likely to make mistakes?
So this is the concept of poka-yoke,
and it's … here is an example;
you also see it with floppy disks,
with USB sticks, etc.
It’s engineered such that users are
less likely to make a mistake.
Then there’s also the idea
of enabling knowledge transfer
So how can we do this?
One thing is metaphors.
And I’m not sure how many of you recognise this,
this is Microsoft Bob.
Traditionally, PC systems have been built
on the desktop metaphor.
Laughter
Microsoft Bob had a little too much of it.
To enable knowledge transfer,
you can also standardise systems.
And one important tool for this is design languages
so if you’re designing for iOS, go look at
the design language, the Human
Interface Guidelines of iOS.
The same for Windows – go look
at the Metro Design Guidelines.
As for Android, look at Material Design.
Another interesting exercise
to try out,
relating to design languages
and also to get familiar with how designers try to
communicate with users, is to
look at an interface and try to decode
what the designer is trying to say to the user.
And another interesting exercise is to look at
not usability but UNusability.
So there is this pretty interesting book
called “Evil by Design” and it goes into
all the various techniques that designers use
to fool users, to get them to buy an extra hotel, car, etc.
And, well, Ryanair is pretty much the worst offender,
so a good example to study.
Applause
So, what if you wanna go out into the world
and apply these principles, these heuristics?
The first thing to know is that
design has to be a process,
where the first part is actually defining your problem.
You first brainstorm,
then you try to narrow down to concrete requirements
after which you go and try out
the various approaches
and test these.
What materials do usability experts actually use?
Well, of course there’s expensive tools, Axure, etc.
but I think one of the most used materials
is still the post-it note. Just paper and pens.
And, okay, where do you wanna go and test?
Well, actually, go out into the field.
Go to the ticket machine of the S-Bahn.
But also go and test in the lab, so that you have
a controlled environment.
And then you ask “okay, how do I test?”
Well, first thing is: Go and get some real people.
Of course, it’s … it can be difficult to actually
get people into the lab but it’s not impossible.
So once you have people in the lab,
here are some methods.
There are so many usability evaluation methods
that I’m not gonna list them all and
I encourage you to go and look them up yourself
’cause it’s also very personal what works for you
and what works in your situation.
When using these methods you wanna
evaluate how well a solution works.
So you're gonna look at some metrics,
so that at the end of your evaluation
you can say “okay, we've done a good job”,
“this can go better”,
“okay, maybe we can move that”, …
So these are the standardised ones:
how effective people are, etc.
You can read …
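The standardised notions here are effectiveness, efficiency, and satisfaction. As a minimal sketch, not from the talk: scoring a hypothetical test session with made-up data, using the System Usability Scale questionnaire to stand in for satisfaction.

```python
# Made-up results from a hypothetical usability test session.
tasks_completed = [True, True, False, True, False]   # effectiveness
seconds_on_task = [35, 48, 90, 40, 90]               # efficiency

effectiveness = sum(tasks_completed) / len(tasks_completed)
mean_time = sum(seconds_on_task) / len(seconds_on_task)

# Satisfaction via the System Usability Scale: ten 1-5 items;
# odd items score (answer - 1), even items (5 - answer), total * 2.5.
sus_answers = [4, 2, 4, 1, 5, 2, 4, 2, 4, 2]
sus = 2.5 * sum((a - 1) if i % 2 == 0 else (5 - a)
                for i, a in enumerate(sus_answers))

print(f'effectiveness: {effectiveness:.0%}')
print(f'mean time on task: {mean_time:.0f}s')
print(f'SUS score: {sus}/100')
```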
For a quick start guide on how to
perform usability studies, this is quite a nice one.
And the most important thing to remember
is that preparation is half the work.
First thing: check that everything is working,
make sure that you have everyone
you need in the room, etc.
And maybe most importantly,
usability and usable security,
well, usable security is still a growing field, but
usability is a very large field and most likely
all the problems that you are going to face
or at least a large percentage, other people
have faced before.
So this book is, well, it describes a lot of the stories
of user experience professionals and the things
that they’ve come up against.
A homework exercise, if you feel like it,
is basically analysing who your user is
and where they're going to use the application.
And also something to think about is
how might you involve your user?
Not just during the usability testing,
but also afterwards.
Now I wanna go into some case
studies of encryption systems.
Now, there are quite a lot, and these are not all,
it's just a small selection, but I wanna focus on three:
the OpenPGP standard,
Cryptocat, and TextSecure.
So, OpenPGP, well …
email is now almost 50 years old,
we have an encryption standard—S/MIME—
well, it's widely usable but it's not widely used …
and GnuPG is used widely but is not installed by default.
And if usability teaches us one thing,
it's that defaults rule.
Because users don't change defaults.
Now you might ask “Okay,
PGP is not installed by default,
so is there actually still a future for OpenPGP?”
Well, I’d argue: Yes.
We have browser plug-ins,
which make it easier for users.
JavaScript crypto … I'll come back to that later …
But when we look at Mailvelope, we see, well,
the EFF scorecard, it has a pretty decent rating
at least compared to that of native PGP implementations.
And also Google has announced and has been working
for quite some time on their own plug-in for
end-to-end encryption.
And Yahoo! is also involved in that.
And after the Snowden revelations there has been
a widespread surge in the interest
in encrypted communications,
and this is one website where a lot of these are listed.
And one project that I'd especially like to emphasise
is Mailpile, because I think it looks
like a very interesting approach,
where the question is:
can we use OpenPGP as a stepping stone?
OpenPGP is not perfect, metadata is not protected,
headers are not protected, etc.
But maybe when we get people into the ecosystem,
we can try and gradually move
them to more secure options.
Now, what about Cryptocat?
So, Cryptocat is an online chat platform
that … yes … uses JavaScript.
And of course, JavaScript crypto is bad
but it can be made better.
And I think JavaScript crypto is not the worst problem.
Cryptocat had a pretty disastrous problem
whereby all messages that were sent
were pretty easily decryptable.
But actually, this is just history repeating itself
’cause PGP 1.0 used something called BassOmatic,
the BassOmatic cipher, which was also pretty weak.
And Cryptocat is improving, which is the important thing.
There is now a browser plug-in and
of course, there’s an app for that and
actually, Cryptocat is doing really, really well
in the EFF benchmarks.
And Cryptocat is asking the one question that a lot
of other applications are not asking, which is:
“How can we actually make crypto fun?”
When you start Cryptocat, there’s noises
and there’s interesting facts about cats
Laughter
… depends on whether you like cats, but still!
Keeps you busy!
Now, the last case: TextSecure
also has pretty good marks,
and actually, just like Cryptocat,
the app-store distribution model is something that
I think is a valuable one for usability.
It makes it easy to install.
And something that TextSecure is also looking at is
synchronisation options for your address book.
And I think the most interesting development is
on the one side, the CyanogenMod integration,
so that people will have encryption enabled by default.
’Cause as I mentioned: People don’t change defaults.
And this one is a bit more controversial, but
there’s also the WhatsApp partnership.
And of course people will say “it’s not secure”,
we know, we know,
EFF knows!
But at least, it’s more secure than nothing at all.
Because: Doesn’t every little bit help?
Well, I’d say: yes.
And at least, it’s one stepping stone.
And, well, all of these are open-source,
so you can think for yourself:
How can I improve these?
Now, there’s still some open questions remaining
in the usable security field and in the
wider security field as well.
I won’t go into all of these,
I wanna focus on the issues that developers have,
issues of end-user understanding,
and of identity management.
Because in the development environment
there's the crypto-plumbing problem, as some people call it.
How do we standardise on a cryptographic algorithm?
How do we make everyone use the same system?
Because, again, it’s history repeating itself.
With PGP, we had RSA changed for
DSA because of patent issues,
IDEA changed for CAST5 because of patent issues,
and now we have something similar,
'cause for PGP the question is:
which curve do we choose?
'Cause this is from Bernstein, who has got a whole list
of, well, not all the curves,
but a large selection of them,
analysing their security.
But how do you make, well, pretty much
the whole world agree on a single standard?
And also, can we move toward safer languages?
And I’ve been talking about the
usability of encryption systems
for users, but what about for developers?
So, API usability, and as I’ve mentioned:
Language usability.
And on top of that, it is not just a technical issue,
because, of course, we secure microchips,
but we also wanna secure social systems.
Because, in principle, we live in an open system,
in an open society and a system cannot audit itself.
So, okay, what do we do, right?
I don’t know.
I mean, that’s why it’s an open question!
'Cause how do we ensure the authenticity of,
I don't know, the Intel processor in my laptop?
How do I know that the
random number generator isn’t bogus?
Well, I know it is, but …
Laughter
Then, there’s the issue of identity management
related to key management, like
who has the keys to the kingdom?
One approach, as I've already mentioned, is
key continuity management,
whereby we automate both
key exchange and encryption.
So one principle is trust on first use,
whereby, well, one approach would be to attach your key
to any email you send out, and anyone who receives
this email just assumes it's the proper key.
Of course, it’s not fully secure,
but at least, it’s something.
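A minimal sketch of the trust-on-first-use idea, in the spirit of SSH's known_hosts; the helper and its key store are hypothetical, not any real mail client's code.

```python
# Trust on first use: remember the first key seen for an address,
# and warn loudly if it later changes. Names are illustrative only.
import json, os

STORE = os.path.expanduser('~/.tofu-keys.json')

def check_key(address: str, fingerprint: str) -> str:
    known = {}
    if os.path.exists(STORE):
        with open(STORE) as f:
            known = json.load(f)
    if address not in known:
        known[address] = fingerprint      # first contact: pin the key
        with open(STORE, 'w') as f:
            json.dump(known, f)
        return 'new: key pinned on first use'
    if known[address] == fingerprint:
        return 'ok: matches the pinned key'
    return 'WARNING: key changed, possible man-in-the-middle!'

print(check_key('glenn@example.org', '1A2B 3C4D 5E6F'))
```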
And this is really, I think, the major question
in interoperability:
How do you ensure that you can access your email
from multiple devices?
Now, of course, there is meta-data leakage,
PGP doesn’t protect meta-data,
and, you know, your friendly security agency knows
where you went last summer …
So, what do we do?
We do anonymous routing, we send over Tor, but,
I mean, how do we roll that out?
I think the approach that
Mailpile is taking is interesting
and, of course still an open question, but
interesting research nonetheless.
Then there's the introduction problem:
okay, I meet someone here, after the talk,
they tell me who they are,
and either I get their card—which is nice—or
they say what their name is.
But they're not gonna tell me, they're not gonna spell out,
their fingerprint.
So the idea of Zooko’s triangle is that identifiers
are either human-meaningful,
secure or decentralised.
Pick two.
So here’s some examples of identifiers,
so for Bitcoin: Lots of random garbage.
For OpenPGP: Lots of random garbage
For miniLock: Lots of random garbage
So, I think an interesting research problem is:
Can we actually make these things memorable?
You know, I can memorise email addresses,
I can memorise phone numbers,
I cannot memorise these. I can try, but …
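One research direction here is encoding key material as words, which is what the PGP word list does for fingerprints; a minimal sketch with a tiny made-up word list (the real scheme alternates between two 256-word lists, one byte per word).

```python
# Fingerprint bytes -> words, in the spirit of the PGP word list.
# The 8-word list is a toy stand-in for the real 256-word lists.
import os

WORDS = ['apple', 'bravo', 'cedar', 'delta',
         'eagle', 'fable', 'gamma', 'hotel']

def words_for(fingerprint: bytes) -> str:
    bits = ''.join(f'{b:08b}' for b in fingerprint)
    # 3 bits per word with this toy list; trailing bits are dropped.
    chunks = [bits[i:i + 3] for i in range(0, len(bits) - 2, 3)]
    return ' '.join(WORDS[int(c, 2)] for c in chunks)

fp = os.urandom(4)    # stand-in for a real 20-byte fingerprint
print(fp.hex(), '->', words_for(fp))
```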
Then, the last open question I wanna focus on
is that of end-user understanding.
So of course, we all know that all devices are monitored.
But does the average user?
Do they know what worms can do?
Have they read these books?
Do they know where GCHQ is?
Do they know that Cupertino has
pretty much the same powers?
Laughter
Do they know they’re living in a panopticon to come?
Laughter
Do they know that people are
killed based on meta-data?
Well, I think not.
And actually this is a poster from the university
where I did my Master’s
and interestingly enough, it was founded by a guy
who made a fortune selling sugar pills.
You know, snake oil, we also have this in crypto.
And how is the user to know
whether something is secure or not?
Of course, we have the secure messaging scorecard
but can users find these?
Well, I think there are three aspects
to end-user understanding,
which are knowledge acquisition,
knowledge transfer,
and the verification and updating of this knowledge.
So, as I’ve already mentioned,
we can do dummy-proofing
and we can create transparent systems.
For knowledge transfer, we can
look at appropriate metaphors and design languages.
And for verification we can
look at truth in advertising.
And, last but not least, we can do user-testing.
Because all these open questions that I’ve described
and all this research that has been done,
I think it’s missing one key issue, which is that
the usability people and the security people
tend not to really talk to one another.
The open-source developers and the users:
Are they talking enough?
I think that's something, if we want a new dawn,
that we should address.
Yeah, so, from my side, that’s it.
I’m open for any questions.
Applause
Herald: Arne, thank you very much for your brilliant talk.
Now, if you have any questions to ask, would you please
line up at the microphones in the aisles?
The others who’d like to leave now,
I’d ask you kindly to please leave very quietly
so we can hear what the people
asking questions will tell us.
And those at the microphones,
if you could talk slowly,
then those translating have no problems in translating
what is being asked. Thank you very much.
And I think we’ll start with mic #4 on the left-hand side.
Mic#4: Yes, so, if you've been to any successful
crypto party, you know that crypto parties very quickly
turn into discussions not about software,
how to use software,
but about threat models.
And you need to actually get users
to think about what they're
trying to protect themselves from. And if a certain
messaging app is secure, that still means nothing,
'cause there is lots of other stuff that's going on.
Can you talk a little bit about that and
how that runs into this model about, you know,
how we need to educate users and, while we're at it,
what we want them to be educated about,
and what they actually need to be using.
Arne: Well, I think that’s an interesting point
and I think, one issue, one big issue is:
okay, we can throw lots of crypto parties
but we’re never gonna be able to throw enough parties.
I … with one party, if you're very lucky,
you're gonna educate 100 people.
I mean, just imagine how many parties
you’d need to throw. Right?
I mean, it's gonna be a heck of a party, but … yeah.
And I think, secondly, the question of threat modeling,
I think, sure, that’s helpful to do, but
I think, users do first need an understanding of,
for example, the email architecture.
'Cause, how can they do threat
modeling when they think
that an email magically pops
from one computer to the next?
I think, that is pretty much impossible.
I hope that …
Herald: Thank you very much, so …
Microphone #3, please.
Mic#3: Arne, thank you very much for your talk.
There’s one aspect that I didn’t see in your slides.
And that is the aspect of the language that we use
to describe concepts in PGP—and GPG, for that matter.
And I know that there was a paper last year
about why King George can’t encrypt and
they were trying to propose a new language.
Do you think that such initiatives are worthwhile,
or are we stuck with this language and should we make
as good use of it as we can?
Arne: I think that's a good point,
and actually the question
of “okay, what metaphors do you wanna use?” … I think
we're pretty much stuck with the language
that we're using for the moment, but
I think it does make sense
to go and look into the future
at alternative models.
Yeah, so I actually wrote a paper that also
goes into that a bit, looking at
the metaphor of handshakes to exchange keys.
So, for example, you could have
an embedded device as a ring or wristband,
it could even be a smartwatch, for that matter.
Could you use that shaking of hands to
build trust-relationships?
And that might be a better
metaphor than key-signing,
webs of trust, etc.
'Cause I think that is horribly broken,
I mean the concept, trying
to explain that to users.
Herald: Thank you. And … at the back in the middle.
Signal angel: Thanks. A question from the internet:
[username?] from the Internet wants to know if you’re
aware of the PEP project, the “pretty easy privacy”
and your opinions on that.
And another question is:
How important is the trust level of the crypto to you?
Arne: Well, yes, actually, there's this screenshot
of the PEP project in the slides …
in the part on why WhatsApp is horribly insecure, and
of course, I agree, and yeah.
I’ve looked into the PEP project for a bit
and I think, yeah, I think it’s an interesting
approach but I still have to read up on it a bit more.
Then, for the second question,
“how important is the trust in the crypto?”:
I think that’s an important one.
Especially the question of
“how do we build social systems
to ensure reliable cryptography?”
So one example is the Advanced
Encryption Standard competition.
Everyone was free to send in entries,
their design principles were open,
and this is in complete contrast
to the Data Encryption Standard,
for which, I think, the design principles are still Top Secret.
So yeah, I think, the crypto is
something we need to build on
but, well, actually, the crypto is
again built on other systems
where the trust in these systems
is even more important.
Herald: Okay, thank you, microphone #2, please.
Mic#2: Yes, Arne, thank you very
much for your excellent talk.
I wonder about what to do with feedback
on usability in open-source software.
So, you publish something on GitHub
and you’re with a group of people
who don’t know each other and
one publishes something,
the other publishes something,
and then: How do we know
this software is usable?
In commercial software, there’s all kind of hooks
on the website, on the app,
to send feedback to the commercial vendor.
But in open-source software,
how do you gather this information?
How do you use it, is there any way to do this
in an anonymised way?
I haven’t seen anything related to this.
Is this one of the reasons why
open-source software is maybe
less usable than commercial software?
Arne: It might be. It might be.
But regarding your question, like, how do you know
whether a piece of commercial software is usable, well,
you … one way is looking at:
okay, what kind of statistics do you get back?
But if you push out something totally unusable,
then, I mean, you're going to expect
that the statistics come back looking like shit.
So, the best approach is to
design usability in from the start.
The same with security.
And I think, that is also …
so even if you have …
you want privacy for end users, I think it's still possible
to get people into the lab and look at:
okay, how are they using the system?
What things can we improve?
And what things are working well?
Mic#2: So you’re saying, you should only publish
open-source software for users
if you also tested in a lab?
Arne: Well, I think, this is a bit of a discussion of:
Do we just allow people to build
houses however they want to
or do we have building codes?
And … I think … well, actually, this proposal of holding
software developers responsible for what they produce,
if it's commercial software, I mean,
that proposal was made
a long time ago.
And the question is:
How would that work in an
open-source software community?
Well, actually, I don’t have an answer to that.
But I think, it’s an interesting question.
Mic#2: Thank you.
Herald: Thank you very much. #1, please.
Mic#1: You said that every little bit helps,
so if we have systems that
don't provide a lot of … well …
are almost insecure, they
provide just a bit of security, then
that is still better than no security.
My question is: Isn’t that actually worse because
this promotes a false sense of security and
that makes people just use
the insecure broken systems
just to say “we have some security with us”?
Arne: I completely agree, but
I think that currently people … I mean …
when you think an email goes
from one system to the other directly …
and I mean, in these studies
that I've done, I've met
quite some people who still think email is secure.
So, of course, you might give
them a false sense of security
when you give them a
more secure program but
at least it’s more secure than email—right?
I mean …
Mic#1: Thank you.
Herald: Thank you. There’s another
question on the Internet.
Signal angel: Yes, thank you. Question from the Internet:
What crypto would you finally
recommend your grandma to use?
Arne: laughs
Well … Unfortunately, my grandma
has already passed away.
I mean … her secrets will be safe …
Actually, I think something where
crypto is enabled by default, say …
iMessage, I mean,
of course, there's backdoors,
etc., but at least
it is more secure than plain SMS.
So I would advise my grandma
to use … well, to look at …
actually, I'd first analyse
what she has available,
and then I would look at, okay,
which is the most secure
and still usable?
Herald: Thank you very much, so mic #3, please.
Mic#3: So, just wondering:
you said that there is
a problem with the missing
default installation of GPG
on operating systems, but
I think this is more of a
problem of which OS you choose,
because at least I don't
know any Linux system which
doesn't have GPG installed by default today.
If you use … at least I've used
the normal workstation setup.
Arne: Yes, I think you already
answered your own question:
Linux.
Laughter
Unfortunately, Linux is not yet widely the default.
I mean, I’d love it to be, but … yeah.
Mic#3: But if I send an email to Microsoft and say:
Well, install GPG by default, they’re not gonna
listen to me.
And I think, for all of us, we should do
a lot more of that.
Even if Microsoft is the devil for most of us.
Thank you.
Arne: Well … We should be doing more of what?
Mic#3: Making more demands
to integrate GPG by default
in Microsoft products, for example.
Arne: Yes, I completely agree.
Well, what you already see happening …
or I mean, it’s not very high-profile yet,
but for example, I mean … I've referred to
the EFF scorecard a couple of times, but
that is some pressure to encourage developers
to include security by default.
But, I think I’ve also mentioned, one of the big problems
is: users at the moment … I mean …
developers might say: my system is secure.
I mean … what does that mean?
Do we hold developers and
commercial entities …
do we hold them to, well,
truthful advertising standards or not?
I mean, I would say: let's go and look at
what companies are claiming and
whether they actually stand up to that.
And if not: can we actually sue them?
Can we make them tell the truth about
what is happening and what is not?
Herald: So, we’ve got about 2 more minutes left …
So it’s a maximum of two more questions, #2, please.
Mic#2: Yeah, so … every security system fails.
So I'm interested in what sort of work has been done on
how users recover from failure.
Everything will get subverted,
your best friend will sneak
your key off your computer,
something will go wrong with that, you know …
your kids will grab it …
and just, is there, in general, has somebody looked at
these sorts of issues?
Is there research on it?
Arne: On various aspects of the problem, but,
as far as I'm aware, not on the general issue,
and not any field studies specifically looking at
“okay, what happens when a key is compromised”, etc.
I mean, we do have certain cases of things happening,
but nothing structured.
No structured studies, as far as I'm aware.
Herald: Thank you. #3?
Mic#3: Yeah, you mentioned
Mailpile as a stepping stone
for people to start using GnuPG and stuff, but
you also talked about an
average user seeing mail as just
coming from one place and
then ending up in another place.
Shouldn’t we actually talk about
how to make encryption transparent for the users?
Why should they actually care about these things?
Shouldn’t it be embedded in the protocols?
Shouldn't we actually talk about
embedding them in the protocols,
stopping the use of insecure protocols,
and having all of these,
you talked a little bit about it,
put in as the defaults?
But shouldn't we emphasise that a lot more?
Arne: Yeah, I think we should
certainly be working towards
“How do we get security by default?”
But I think … I've mentioned briefly that
making things transparent also has a danger.
I mean, this whole … “a system should be transparent”
is a bit like
marketing speak, because actually
we don't want systems to be completely transparent,
'cause we also wanna be able
to engage with the systems.
Are the systems working as they should be?
So, I mean, this is a difficult balance to find, but yeah …
Something that you achieve through usability studies,
security analysis, etc.
Herald: All right, Arne,
thank you very much for giving
us your very inspiring talk,
thank you for sharing your information with us.
Please give him a round of applause.
Thank you very much.
Applause
subtitles created by c3subtitles.de
Join, and help us!