On the Security and Privacy of Modern Single Sign-On in the Web (33c3)

  • 0:05 - 0:13
    33C3 preroll music
  • 0:13 - 0:18
    Presenter: So I think we're all set.
    Without further ado, I would like to
  • 0:18 - 0:24
    introduce Guido Schmitz and
    Daniel Fett, who are going to be giving
  • 0:24 - 0:30
    this talk on single-sign-on on the web.
    Give them a big round of applause. And I
  • 0:30 - 0:37
    hope you're looking forward to the talk
  • 0:37 - 0:46
    Guido: OK, hello, everybody, welcome to
    our talk on the security and privacy of
  • 0:46 - 0:53
    modern single-sign-on in the web. So in this
    talk, Daniel and I are going to
  • 0:53 - 1:02
    present not only OAuth and OpenID
    Connect, but also some thoughts about the
  • 1:02 - 1:10
    analysis of all these standards. So first,
    a brief introduction of who we are. We are
  • 1:10 - 1:18
    researchers from the University of Trier,
    but soon of the University of Stuttgart. And
  • 1:18 - 1:24
    also we happen to be the founders of the
    Maschinendeck, the hackerspace in Trier
  • 1:24 - 1:32
    and the Pi and More raspberry jam. If
    you're interested in anything else that
  • 1:32 - 1:39
    we are doing, you can just follow us on
    Twitter. So what is the single-sign-on
  • 1:39 - 1:45
    about? What are we talking about? So
    probably all of you have seen websites
  • 1:45 - 1:52
    like this, like TripAdvisor, where you can
    use a lot of different methods to sign in:
  • 1:52 - 1:56
    you can sign in with your Facebook
    account, with your Google account, you can
  • 1:56 - 2:01
    register an account at their page with an
    email address and password, or you can use
  • 2:01 - 2:09
    your Samsung account, or by now probably
    even more systems. And if you click,
  • 2:09 - 2:13
    for example, on this login with Facebook
    button, a new window pops up, prompting
  • 2:13 - 2:18
    for your Facebook credentials, or if you
    are already signed into Facebook, it just asks
  • 2:18 - 2:26
    for confirmation. So this is a setting we
    are looking at and we have two parties
  • 2:26 - 2:31
    here. We have TripAdvisor as
    the so-called relying party and we have
  • 2:31 - 2:39
    Facebook as the so-called identity
    provider. And the basic principle of how
  • 2:39 - 2:45
    this works is the following. So first you
    go with your browser to the relying party.
  • 2:45 - 2:53
    You say, I want to log in at that RP. Then
    you contact your identity provider,
  • 2:53 - 2:58
    authenticate there. And this identity
    provider then issues some kind of token
  • 2:58 - 3:04
    and this token you give to the relying
    party. And the relying party can now use
  • 3:04 - 3:12
    this token to access some parts of your
    account at the identity provider. And this
  • 3:12 - 3:16
    is called authorization. For example, the
    relying party can use this token now to
  • 3:16 - 3:24
    post on your Facebook timeline or retrieve
    your friends list from Facebook. And
  • 3:24 - 3:31
    the relying party can also retrieve some
    unique user identifier and then consider
  • 3:31 - 3:36
    you to be logged in with that user
    identifier. And then this is
  • 3:36 - 3:43
    authentication. And the RP can then, for
    example, set a session cookie and mark it:
  • 3:43 - 3:49
    remember that this
    session belongs to this user. So this is
  • 3:49 - 3:55
    the basic principle. Why should
    we use single-sign-on, or why shouldn't we
  • 3:55 - 4:02
    use single-sign-on? So for users, it's
    very convenient. You don't have to
  • 4:02 - 4:07
    remember which account you used where,
    which password and so on. You just click
  • 4:07 - 4:11
    on "login with Facebook" and you're all
    done. Of course, this comes with a lack of
  • 4:11 - 4:18
    privacy because Facebook then always knows
    where you log in. And also the identity
  • 4:18 - 4:22
    provider you choose is also a single
    point of failure. If that one closes down
  • 4:22 - 4:29
    or changes its terms and conditions, then
    you perhaps cannot log into your accounts
  • 4:29 - 4:37
    anymore at some third-party web pages. As for
    relying parties, they need to store less
  • 4:37 - 4:41
    data. They don't have to care about
    password databases that can leak. They
  • 4:41 - 4:44
    don't have to care about user
    registration, password recovery, and all the
  • 4:44 - 4:50
    hassle that comes with user accounts. But
    they also have less control over this,
  • 4:50 - 4:55
    over the users accounts because they
    outsource the authentication to this
  • 4:55 - 4:59
    identity provider, and also here the
    identity provider is a single point of
  • 4:59 - 5:05
    failure. For identity providers, the
    advantage is clear. They get more user
  • 5:05 - 5:12
    data and they can provide some service for
    their users, which perhaps makes
  • 5:12 - 5:19
    it more attractive for users to use that
    identity provider. On the downside, they
  • 5:19 - 5:23
    also have to take care of more user
    data. They have to store and protect it,
  • 5:23 - 5:27
    and they have the overhead of
    implementing and running the
  • 5:27 - 5:37
    single-sign-on system. So what are these
    single-sign-on systems now? Now I will
  • 5:37 - 5:43
    show you some prominent examples, so there
    is OAuth 1.0. So this is a not so modern
  • 5:43 - 5:51
    single-sign-on system, it's now 10 years
    old. Many flaws are known for this system
  • 5:51 - 5:56
    and basically nobody uses it anymore
    except for Twitter. So Twitter uses a
  • 5:56 - 6:05
    modified version of OAuth 1, which more or
    less fixes all the known flaws. But in
  • 6:05 - 6:12
    general, we can say don't use OAuth 1.
    There is also OpenID, which is also quite
  • 6:12 - 6:19
    old, nine years. It's not that user
    friendly. It's a standard that's meant to
  • 6:19 - 6:25
    be super flexible for every corner use
    case that the developers at that time
  • 6:25 - 6:31
    thought of. And this makes it also
    extremely hard to use correctly because
  • 6:31 - 6:39
    you have a lot of things going on. Things
    change during an OpenID run, and it's
  • 6:39 - 6:47
    not that nice to develop something for
    OpenID. So also, OpenID, don't use this.
  • 6:47 - 6:53
    And now modern single-sign-on systems, for
    example, OAuth 2, which is also used in
  • 6:53 - 7:02
    login with Facebook. This is completely
    incompatible with OAuth 1, and OAuth 2 uses
  • 7:02 - 7:13
    the so-called Bearer token approach. The
    whole protocol is based on some random
  • 7:13 - 7:19
    values that are passed around. But there's
    no crypto involved except for the
  • 7:19 - 7:27
    transport layer, HTTPS for example.
    And OAuth 2 is used almost everywhere. So
  • 7:27 - 7:32
    it's the most popular of these systems,
    but it has never been developed for
  • 7:32 - 7:38
    authentication and really, it's not meant
    for authentication. And if you google for
  • 7:38 - 7:45
    OAuth 2 and authentication, you
    sometimes stumble upon the following
  • 7:45 - 7:51
    picture. So these two guys are members of
    the OAuth working group and they are
  • 7:51 - 7:55
    really insist it's not meant for
    authentication at all. It's just for
  • 7:55 - 8:03
    authorization. OK. Nonetheless, it is used
    in practice also for authentication.
  • 8:03 - 8:11
    Facebook, for example, uses it for
    authentication. And so the protocol is now
  • 8:11 - 8:15
    five years old. Many flaws have been
    discovered. Most of them have been fixed.
  • 8:15 - 8:22
    I will talk about some of these flaws
    later in the talk. So this is OAuth 2 and
  • 8:22 - 8:28
    there's also OpenID Connect. OpenID
    Connect is quite new. It's one and a
  • 8:28 - 8:35
    half years old and it's an authentication
    layer on top of OAuth. It's the first
  • 8:35 - 8:39
    real definition of how you
    should use OAuth for authentication, but it
  • 8:39 - 8:45
    also changes the standard, so it can be
    seen as a protocol of its own. But
  • 8:45 - 8:50
    OpenID Connect, despite the name,
    is also completely incompatible with
  • 8:50 - 8:57
    OpenID. And it also has some dynamic
    features like identity
  • 8:57 - 9:03
    provider discovery and stuff like that. So
    this leads us to the Web, single-sign-on
  • 9:03 - 9:10
    chart of confusion. So we have OAuth 1,
    which is the marketing predecessor of OAuth
  • 9:10 - 9:17
    2, but completely incompatible with OAuth 2,
    and OAuth 2 serves as the foundation for
  • 9:17 - 9:21
    login with Facebook, for authentication,
    and also for OpenID Connect. For OpenID
  • 9:21 - 9:26
    Connect, there is OpenID, which is the
    marketing predecessor of OpenID Connect,
  • 9:26 - 9:33
    but the same here: the two are not compatible with
    each other. And OpenID Connect is used,
  • 9:33 - 9:43
    for example, by Google. So these are the
    most commonly used single-sign-on systems.
  • 9:43 - 9:49
    There's also some others, for example,
    Mozilla Persona. Who of you has heard
  • 9:49 - 9:59
    about Mozilla Persona? Oh, OK. Around
    five percent, more or less. So, the
  • 9:59 - 10:06
    original name is BrowserID and there the
    idea was that the email providers become
  • 10:06 - 10:12
    the identity providers. So this comes from
    the thought that for classical websites
  • 10:12 - 10:17
    where you have to register, they send you
    emails with tokens you can click on to log
  • 10:17 - 10:24
    into your account and to reset your
    password. So your e-mail
  • 10:24 - 10:29
    provider already is a kind of identity
    provider. So why don't we just use it
  • 10:29 - 10:35
    directly in the Web? And Mozilla Persona
    is the first single-sign-on system with
  • 10:35 - 10:41
    the goal that we have some kind of privacy
    in the sense that the identity provider
  • 10:41 - 10:46
    does not learn where you use your
    accounts. So we will talk about this also
  • 10:46 - 10:51
    later in this talk. So it was developed by
    Mozilla and the first idea was to
  • 10:51 - 10:59
    integrate this protocol in the browsers,
    which never happened. So they moved to
  • 10:59 - 11:05
    the goal of having a pure Web
    implementation using just HTML5. And they
  • 11:05 - 11:14
    also built bridges to OpenID and OAuth 2
    to get some big identity providers in the
  • 11:14 - 11:19
    system. But this whole approach
    failed. But it's still interesting if you
  • 11:19 - 11:28
    want to look for privacy. OK, there are
    also some other protocols I haven't talked
  • 11:28 - 11:33
    about, and now I will hand over to Daniel.
  • 11:33 - 11:36
    Daniel Fett: So what is this talk all
    about?
  • 11:36 - 11:40
    applause
  • 11:40 - 11:46
    Daniel: So what is this talk all about? So
    what we want to do is we want to analyze
  • 11:46 - 11:51
    whether web mechanisms, in this case
    websites and protocols, are secure when
  • 11:51 - 11:56
    they are implemented correctly. So this
    means: if we follow all the standards and
  • 11:56 - 12:01
    all the best practices, in other words:
    Are the standards and protocols that
  • 12:01 - 12:11
    define the web secure? The current state
    of the art is that we have a lot of
  • 12:11 - 12:18
    documents that define some login
    mechanism, for example, like OAuth, and we
  • 12:18 - 12:23
    have an expert or group of experts and
    they look at this and after a while they
  • 12:23 - 12:30
    say, well, this seems kind of OK to me. So
    they say it's secure. So this is the
  • 12:30 - 12:35
    current state of the art. And what we want
    to do as part of our research is to
  • 12:35 - 12:42
    change this, in a way that has
    already been successful for other things on the
  • 12:42 - 12:48
    Internet, for example, for TLS. We want to
    create a model of the Web infrastructure
  • 12:48 - 12:53
    and of Web applications, a formal model.
    And these models, of course, they are also
  • 12:53 - 13:02
    always incomplete, but nonetheless useful,
    as has been shown with TLS 1.3. So we
  • 13:02 - 13:08
    create this model and then we put a lot of
    work into this. And finally, hopefully we
  • 13:08 - 13:18
    can create proofs of security for
    mechanisms of our standards. So of course
  • 13:18 - 13:27
    the hard part is number 2 here, as always.
    Some things our model cannot capture, and
  • 13:27 - 13:32
    we don't want to capture them. So, for
    example, phishing attacks or clickjacking
  • 13:32 - 13:37
    attacks or just stupid users: "Let's send
    that password to the attacker." These are
  • 13:37 - 13:43
    things that are out of the scope of the
    stuff that we are looking at. In the same
  • 13:43 - 13:53
    manner, compromised browsers or
    compromised databases and so on. When we
  • 13:53 - 14:00
    have this model for a Web application, one
    important question and maybe the most
  • 14:00 - 14:05
    important question is what is security and
    what is privacy if you want to look at
  • 14:05 - 14:11
    privacy as well? So we have to define this
    and luckily we can define this if we have
  • 14:11 - 14:16
    a formal model like we have. In the
    following, of course, I'm not going to
  • 14:16 - 14:21
    present all the formal stuff, this is
    boring. Therefore, I have a high level
  • 14:21 - 14:25
    overview of what authentication
    properties, for example, look like.
  • 14:25 - 14:31
    Authentication in the Web single-sign-on
    system means that an attacker that even
  • 14:31 - 14:36
    has full control over the network, say the NSA,
    should not be able to use a service of a
  • 14:36 - 14:41
    relying party as an honest user. So the
    NSA should be unable to log into my
  • 14:41 - 14:48
    account at least. Yeah, if they're not
    forcing the owner of the relying party or
  • 14:48 - 14:57
    something. And this is an obvious property.
    There's a slightly less obvious property,
  • 14:57 - 15:02
    which says that an attacker should not be
    able to authenticate an honest browser to
  • 15:02 - 15:10
    a relying party as the attacker. So the
    attacker should be unable to force Alice's
  • 15:10 - 15:14
    browser to be logged in under the
    attacker's identity. This is a property
  • 15:14 - 15:19
    that is often also called session fixation
    or session swapping, because if the
  • 15:19 - 15:23
    attacker would be able to do this, he
    could, for example, force me to be logged
  • 15:23 - 15:28
    in at some search engine. And if I then
    search something with a search engine and
  • 15:28 - 15:33
    I'm logged into the attacker's account,
    then the attacker would be able to read
  • 15:33 - 15:37
    what I'm searching for in this search
    engine. OK, so these are the
  • 15:37 - 15:42
    authentication properties. Then we also
    have another property that is important,
  • 15:42 - 15:50
    namely session integrity. Session
    integrity means that if the relying party
  • 15:50 - 15:55
    acts on Alice's behalf at the identity
    provider or retrieves Alice's data at the
  • 15:55 - 16:03
    identity provider, then Alice has explicitly
    expressed her consent to log in at this
  • 16:03 - 16:12
    relying party. So that's session
    integrity. A third property that we
  • 16:12 - 16:19
    have is privacy, and privacy in this case
    means that a malicious identity provider
  • 16:19 - 16:26
    should not be able to tell whether the
    user logs in at the relying party A or party
  • 16:26 - 16:34
    B. So, for example, if OAuth would have
    privacy, which it doesn't, then Facebook
  • 16:34 - 16:37
    would be unable to tell whether I log in
    at, say, Wikipedia or myfavoritebeer.com.
  • 16:37 - 16:46
    There are also other notions of privacy,
    which we, however, will not look at in
  • 16:46 - 16:59
    this talk.
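    Such a privacy property is typically phrased as an
    indistinguishability statement; schematically (notation
    simplified for this writeup, not the speakers' exact formal
    definition):

    ```latex
    % A web system W is private w.r.t. a malicious IdP if, for all
    % relying parties r_1, r_2, the runs in which the honest user
    % logs in at r_1 or at r_2 look the same from the IdP's view:
    \forall\, r_1, r_2:\quad
      W(\text{user logs in at } r_1) \;\approx\; W(\text{user logs in at } r_2)
    % where \approx denotes indistinguishability for the IdP.
    ```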
  • 16:59 - 17:10
    Guido: OK. Let's start with a closer look
    at OAuth. When I say OAuth, I always mean
  • 17:10 - 17:22
    OAuth 2 not the older OAuth 1. So OAuth 2
    is mainly defined in RFC6749 and also some
  • 17:22 - 17:31
    other RFC's and some other documents.
    OAuth itself has four different modes it
  • 17:31 - 17:35
    can run in. So there is the Implicit
    Mode, the Authorization Code Mode, the
  • 17:35 - 17:42
    Resource Owner Password Credentials mode,
    the Client Credentials mode and all these
  • 17:42 - 17:51
    modes can have some options, which I won't
    list here. And out of these four modes,
  • 17:51 - 17:56
    the first two Implicit Mode and the
    Authorization Code Mode are the most
  • 17:56 - 18:03
    common ones. So let's have a closer look
    at these modes. So the Implicit Mode works
  • 18:03 - 18:09
    like this. Here we have an example with
    some random relying party and Facebook as
  • 18:09 - 18:14
    the identity provider. So first you say I
    want to log in with Facebook at the
  • 18:14 - 18:21
    relying party, then your browser gets
    redirected to Facebook. Facebook prompts
  • 18:21 - 18:25
    you for your authentication data or for
    some confirmation if you're already logged
  • 18:25 - 18:37
    in at Facebook. And then Facebook issues a
    token that's called the access token. And
  • 18:37 - 18:41
    Facebook redirects your browser back to
    the relying party and puts the access
  • 18:41 - 18:50
    token in the URI. And then for some
    technical reasons, we need some additional
  • 18:50 - 19:00
    steps to retrieve the access token from
    the URI because it's in the fragment part.
  • 19:00 - 19:05
    And then finally, the relying party gets
    to retrieve this access token. Now,
  • 19:05 - 19:13
    an access token is
    basically the same thing as in
  • 19:13 - 19:19
    the first high-level overview when
    I just talked about tokens: an access token
  • 19:19 - 19:25
    is such a token which gives the relying
    party access to the user's account at
  • 19:25 - 19:32
    Facebook. And now the relying party can
    retrieve data on the user's behalf at
  • 19:32 - 19:38
    Facebook, or it can retrieve a user
    identifier and then consider this user to
  • 19:38 - 19:45
    be logged in and issue, for example, some
    cookie. So this is the Implicit Mode.
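    The Implicit Mode steps just described can be sketched roughly
    as follows. All endpoints, client IDs, and token values here are
    made up for illustration; real identity providers use their own
    URLs and parameters:

    ```python
    from urllib.parse import urlencode

    # Step 2: the RP redirects the browser to the IdP's authorization
    # endpoint. "response_type=token" selects the Implicit Mode.
    auth_request = "https://idp.example/authorize?" + urlencode({
        "response_type": "token",
        "client_id": "example-rp",
        "redirect_uri": "https://rp.example/callback",
    })

    # Step 4: the IdP redirects back with the access token in the URI
    # *fragment*, which the browser never sends to the RP server.
    redirect = "https://rp.example/callback#access_token=abc123&token_type=bearer"

    # That is why the extra technical steps are needed: a script on the
    # callback page reads the fragment and forwards it to the RP server.
    fragment = redirect.split("#", 1)[1]
    params = dict(pair.split("=", 1) for pair in fragment.split("&"))
    print(params["access_token"])  # abc123
    ```
    
    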
  • 19:45 - 19:53
    There is also the Authorization Code Mode.
    There, things start similarly. The user says
  • 19:53 - 19:58
    I want to log in with Facebook, gets
    redirected to Facebook, authenticates at
  • 19:58 - 20:03
    Facebook and then Facebook, instead of
    issuing an access token, it issues the so-
  • 20:03 - 20:09
    called authorization code and the relying
    party then takes the authorization code
  • 20:09 - 20:15
    and redeems it for an access token
    directly at Facebook. So we have here
  • 20:15 - 20:22
    one intermediate step for this
    authorization code and then the access
  • 20:22 - 20:29
    token the relying party retrieved can
    then be used to act on the user's behalf at
  • 20:29 - 20:38
    Facebook or consider the user to be logged
    in. So let's talk about selected attacks
  • 20:38 - 20:51
    on OAuth. First, let's talk a bit about
    known attacks. There are attacks like the
  • 20:51 - 20:58
    so-called cut and paste attacks where you
    reuse some of these tokens like access
  • 20:58 - 21:03
    token, authorization code, or there are
    also some other tokens, which I haven't
  • 21:03 - 21:10
    talked about. So I left out some details
    before. It's about reusing these tokens
  • 21:10 - 21:18
    from different flows, mixing them into a
    new flow, and then breaking the system. So
  • 21:18 - 21:26
    there are a lot of cut and paste attacks
    known. And the OAuth working group is
  • 21:26 - 21:34
    continuously trying to find ways to
    prevent these cut-and-paste attacks.
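    One standard countermeasure against such cross-flow token reuse
    is binding each flow to the browser session with the `state`
    parameter (RFC 6749, section 10.12). A minimal sketch of the
    relying-party side, with session handling simplified to a dict:

    ```python
    import secrets

    sessions = {}  # session id -> state we issued for that flow (simplified)

    def begin_flow(session_id):
        # Fresh, unguessable value, included in the authorization request.
        state = secrets.token_urlsafe(32)
        sessions[session_id] = state
        return state

    def on_redirect(session_id, returned_state):
        # Accept the returned code/token only if the state matches the one
        # issued to THIS session; tokens cut from another flow won't match.
        expected = sessions.pop(session_id, None)
        return expected is not None and secrets.compare_digest(expected, returned_state)

    state = begin_flow("session-1")
    assert on_redirect("session-1", state)      # same flow: accepted
    assert not on_redirect("session-1", state)  # replay: rejected, state is single-use
    ```
    
    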
  • 21:34 - 21:39
    Another problem is if you don't use HTTPS,
    then you are screwed because a man in the
  • 21:39 - 21:46
    middle can easily read everything out, all
    the tokens that are exchanged. So if you
  • 21:46 - 21:51
    are in some Wi-Fi and the guy next to you
    is sniffing on the Wi-Fi, you log in and
  • 21:51 - 21:59
    don't use HTTPS because some developers
    forgot that there is something called
  • 21:59 - 22:08
    HTTPS, then basically the whole thing is
    screwed. And also, if you just rely on
  • 22:08 - 22:15
    cookies, then you're also screwed because
    cookies lack integrity. It's very easy to
  • 22:15 - 22:22
    just inject cookies into your browser over
    HTTP, and then these cookies will later
  • 22:22 - 22:34
    also be used over HTTPS. So the cookies
    are also not a good thing to rely on. So
  • 22:34 - 22:40
    let's talk about some attacks we have
    found in our research. There is the 307
  • 22:40 - 22:55
    redirect attack and it works like this.
    First, we have some regular OAuth flow. In
  • 22:55 - 23:00
    this OAuth flow, if you have a closer look
    at what happens here in steps two to four,
  • 23:00 - 23:05
    we have the user authentication. And after
    this authentication, the user gets
  • 23:05 - 23:12
    redirected back to the relying party. If you
    look more into the details of these
  • 23:12 - 23:21
    requests: first you have this request
    where you go to your identity provider.
  • 23:21 - 23:27
    So
    you just came from the relying party where
  • 23:27 - 23:32
    you want to log in, clicked on that
    button "log in with this IdP", got
  • 23:32 - 23:38
    redirected, and then your browser contacts
    this identity provider: "I have been
  • 23:38 - 23:47
    redirected to you in an OAuth flow. Please
    authenticate the user." So this is step
  • 23:47 - 23:54
    2.a. Then your identity provider returns
    some form where you have to enter your
  • 23:54 - 23:58
    username and password usually, and then
    you enter username and password and these
  • 23:58 - 24:04
    are sent over to the identity provider.
    And now if this identity provider
  • 24:04 - 24:11
    redirects you back to the relying party
    and uses the wrong HTTP location redirect
  • 24:11 - 24:19
    method for this, namely the 307 method,
    then the following happens. The browser is
  • 24:19 - 24:24
    instructed to just re-post all of your
    credentials. So if you're logging in at
  • 24:24 - 24:31
    some malicious relying party, that relying
    party gets your username and password. So
  • 24:31 - 24:37
    this happens if you use a 307 redirect.
    Fortunately, we didn't find any identity
  • 24:37 - 24:43
    provider in the wild that actually uses 307.
    But you can never exclude that there is
  • 24:43 - 24:50
    some implementation which makes actually
    use of this location redirect method.
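    The difference between the two status codes can be modeled in a
    few lines. This is a simplified model of browser redirect
    behavior per the HTTP specifications, not a full client:

    ```python
    def follow_redirect(status, method, body):
        """Simplified model of what a browser does with a redirect."""
        if status == 307:
            # 307 preserves the request method AND body: a POSTed login
            # form is re-POSTed, credentials included, to the new target.
            return method, body
        if status == 303:
            # 303 always turns the follow-up request into a bodyless GET.
            return "GET", None
        raise ValueError("status not modeled here")

    creds = "user=alice&password=hunter2"
    print(follow_redirect(307, "POST", creds))  # ('POST', 'user=alice&password=hunter2')
    print(follow_redirect(303, "POST", creds))  # ('GET', None)
    ```
    
    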
  • 24:50 - 24:55
    Also, if you look at the standards where
    these are defined, it's not always clear
  • 24:55 - 25:02
    which redirect method has which details
    and behavior. And also the OAuth working
  • 25:02 - 25:07
    group didn't think about this. So in their
    standard, they write you can just use any
  • 25:07 - 25:16
    method. And surely the mitigation here is
    don't use 307, for example, use 303
  • 25:16 - 25:22
    instead. So the next attack is the
    identity provider mix-up attack. I will
  • 25:22 - 25:31
    present this in Implicit Mode and only one
    variant of this attack. So here in this
  • 25:31 - 25:39
    attack, we have to have the following
    setting. From step two on all these
  • 25:39 - 25:44
    requests are usually encrypted. But the
    very, very first request there, we cannot
  • 25:44 - 25:50
    be sure it is encrypted because a lot of
    relying parties when you go to the
  • 25:50 - 25:57
    website, you go over HTTP. And for this
    very first piece of information, we just click "I
  • 25:57 - 26:03
    want to use Facebook to log in there." You
    could easily assume this is not
  • 26:03 - 26:10
    sensitive information. So this very
    first request goes out unencrypted, or
  • 26:10 - 26:16
    if you, for example, consider other
    attacks like TLS stripping, then you also
  • 26:16 - 26:25
    cannot guarantee that this request is
    encrypted. So now for an attacker who, for
  • 26:25 - 26:32
    example, sits in the same Wi-Fi network as
    you, so probably the guy next to you could
  • 26:32 - 26:37
    easily mount the attack as follows. So
    when your browser sends this request to
  • 26:37 - 26:44
    relying party, login with Facebook, the
    attacker can easily change this and change
  • 26:44 - 26:51
    it to just use the identity provider that
    is run by the attacker. Remember, you can
  • 26:51 - 26:56
    have a lot of different options of
    identity providers and with some
  • 26:56 - 27:05
    extensions, this list can also be extended
    dynamically just by entering some domains.
  • 27:05 - 27:09
    And then the relying party thinks, OK,
    that user wants to use the attacker's
  • 27:09 - 27:18
    identity provider. It answers with a
    redirect to the attacker's web page. But
  • 27:18 - 27:22
    now, as the attacker still
    sits in the middle, he can just change it
  • 27:22 - 27:29
    back to Facebook. So the whole dance
    continues as usual. You go to Facebook,
  • 27:29 - 27:34
    authenticate there, you get redirected
    back. There's probably some access token
  • 27:34 - 27:41
    and then eventually the relying party
    retrieves this access token and wants to
  • 27:41 - 27:47
    use it. So what happens? It
    won't use this access token at Facebook,
  • 27:47 - 27:52
    but at the attacker
    instead, because it still thinks that the
  • 27:52 - 27:58
    attacker is the identity provider that
    is used here. So in practice, if you want
  • 27:58 - 28:03
    to mount this attack, then you have to
    take care of more details, like when you
  • 28:03 - 28:08
    want to break authentication instead of
    authorization. So in the version I just
  • 28:08 - 28:14
    presented, the attacker gets the token and can
    act on the user's behalf at Facebook or at
  • 28:14 - 28:20
    whatever other identity provider
    was used. So this is not limited to Facebook.
  • 28:20 - 28:27
    But for authentication at the relying
    party, there are some further steps
  • 28:27 - 28:32
    needed. There are also
    some other details that have to be taken
  • 28:32 - 28:38
    care of, like client identifiers, which
    are used by relying parties to identify
  • 28:38 - 28:44
    themselves to identity providers, the same
    for client credentials, which are
  • 28:44 - 28:51
    optional, by the way. And in OpenID
    Connect, the layer on top of OAuth, if this
  • 28:51 - 28:57
    is used, then you need to take care of
    some other stuff, like the switching of
  • 28:57 - 29:03
    some signatures or exchanging some
    signatures and so on. But it's still
  • 29:03 - 29:08
    possible, and we successfully attacked
    real-world applications. And this
  • 29:08 - 29:12
    definitely works. And there are also some
    variants that do not rely on that first
  • 29:12 - 29:20
    request going over HTTP. But explaining
    all the variants would take a whole talk
  • 29:20 - 29:28
    on its own, so we now skip this and talk
    about mitigation. So the mitigation we
  • 29:28 - 29:35
    propose is quite simple. So the one
    problem is here in step three. This
  • 29:35 - 29:42
    access token is just some opaque string.
    The relying party cannot see who issued that
  • 29:42 - 29:47
    access token. So it needs some further
    information carried along with this
  • 29:47 - 29:52
    token, and that is: who is the identity
    provider which issued this access token?
  • 29:52 - 29:59
    And if you have this information carried
    along, then the relying party can
  • 29:59 - 30:05
    easily detect this attack and see that
    there is a mismatch between step five and
  • 30:05 - 30:12
    step 1.a, where the relying
    party received the message that the attacker's
  • 30:12 - 30:16
    identity provider is to be used, while in
    five it gets the message that it has an access
  • 30:16 - 30:21
    token and it's from Facebook. So there's a
    mismatch and this whole flow can be
  • 30:21 - 30:31
    aborted without the attack being
    successful. So this is the mitigation we
  • 30:31 - 30:36
    talk to the OAuth working group at the
    IETF, so they invited us to a kind of
  • 30:36 - 30:43
    emergency meeting to discuss this attack
    and we scheduled public disclosure of
  • 30:43 - 30:50
    these attacks. So at the beginning of this
    year, in June, we had a district at
  • 30:50 - 30:55
    security workshop which took place in
    June. New RFC, with this service the
  • 30:55 - 31:03
    mitigations is in preparation. And also
    the working group is interested in the
  • 31:03 - 31:11
    kind of formal analysis that we
    carry out for these kinds of standards. So
  • 31:11 - 31:19
    to sum up the security of OAuth 2: if these
    fixes are applied and there are no
  • 31:19 - 31:27
    implementation errors, then we can say
    that in terms of security, OAuth 2 is quite
  • 31:27 - 31:37
    good. We have a formal proof in our model
    for this. But regarding privacy, OAuth 2
  • 31:37 - 31:42
    does not provide any privacy at all.
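    The mix-up mitigation described above amounts to simple
    bookkeeping at the relying party; roughly, with flow identifiers
    and example hostnames invented for illustration:

    ```python
    flows = {}  # flow id -> the IdP the user selected in step 1 (simplified)

    def start_login(flow_id, chosen_idp):
        # Step 1: remember which identity provider this flow was started for.
        flows[flow_id] = chosen_idp

    def on_token_redirect(flow_id, issuer):
        # Mitigation: the redirect now also says who issued the token;
        # abort the flow on any mismatch with the IdP recorded in step 1.
        return flows.get(flow_id) == issuer

    start_login("flow-1", "https://attacker.example")  # attacker rewrote step 1
    assert not on_token_redirect("flow-1", "https://facebook.example")  # mismatch: abort

    start_login("flow-2", "https://facebook.example")  # honest flow
    assert on_token_redirect("flow-2", "https://facebook.example")
    ```
    
    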
  • 31:42 - 31:48
    Daniel: Speaking about privacy, we
    mentioned earlier already that there was a
  • 31:48 - 31:54
    single-sign-on system that tried to
    provide privacy, namely BrowserID alias
  • 31:54 - 32:00
    Mozilla Persona. So as we already said
    before, this is a Web based single-sign-on
  • 32:00 - 32:05
    system with design goals of having no
    central authority and providing better
  • 32:05 - 32:14
    privacy. Spoiler alert: they failed at
    both. So how does BrowserID work? So let's
  • 32:14 - 32:20
    have a look at this on a very high level
    first. So like Guido already said in
  • 32:20 - 32:25
    BrowserID, the mail provider is the
    identity provider. So we have a user,
  • 32:25 - 32:32
    Alice, alice@mailprovider.com and in the
    first phase when using BrowserID, she does
  • 32:32 - 32:38
    the following, she goes to her identity
    provider and first creates a
  • 32:38 - 32:44
    public/private keypair. And then she sends
    the public key in a document with an
  • 32:44 - 32:51
    identity to the mail provider, and the mail
    provider then signs this document. And
  • 32:51 - 32:56
    this creates the so-called user
    certificate. And this certificate is then
  • 32:56 - 33:02
    sent back to Alice. Now, in the second
    phase, if Alice wants to actually log in
  • 33:02 - 33:10
    at some website, then she does the
    following: she creates another document
  • 33:10 - 33:14
    containing the identity of the website
    where she wants to log in, say Wikipedia,
  • 33:14 - 33:24
    and to do so, she signs the identity of
    Wikipedia with her own private key. And
  • 33:24 - 33:28
    this creates the so-called identity
    assertion. Now, Alice sends both documents
  • 33:28 - 33:34
    to Wikipedia and Wikipedia can then, of
    course, check these documents, because
  • 33:34 - 33:40
    first, it can retrieve
    the public key of the mail provider and
  • 33:40 - 33:46
    check the user certificate, and then also
    can check the identity assertion. And
  • 33:46 - 33:53
    yeah, then Wikipedia can consider Alice to
    be logged in. So this was the basic idea
  • 33:53 - 33:58
    of BrowserID, which is quite nice and
    clean and simple. And then they started to
  • 33:58 - 34:04
    implement this using just the browser
    features, including all the workarounds
  • 34:04 - 34:10
    for the Internet Explorer and so on and so
    on. And they ended up with a quite
  • 34:10 - 34:19
    complicated system. So here we have on the
    left side Alice's browser with two windows,
  • 34:19 - 34:24
    namely Wikipedia and the login dialog,
    which is provided by a central authority
  • 34:24 - 34:30
    which they tried to avoid: login.persona.org.
    Inside both of these windows are other
  • 34:30 - 34:35
    iframes, and inside one of these iframes
    there's another iframe. And on the right we
  • 34:35 - 34:40
    have the servers. So the relying party,
    the identity provider and the central
  • 34:40 - 34:46
    authority login.persona.org. And just to
    give you an idea of how complex the system
  • 34:46 - 34:53
    ended up, they all talk to each other
    using HTTP requests, but also using
  • 34:53 - 35:00
    postMessages and also using
    XMLHttpRequests. And as you can see, the system
  • 35:00 - 35:08
    became quite complex. To add even more
    complexity, they did the following. They
  • 35:08 - 35:12
    thought, well, some users, they are
    already using Gmail or Yahoo! So let's
  • 35:12 - 35:19
    provide some nice, yeah, interface for
    them. They provided the so-called identity
  • 35:19 - 35:25
    bridge specifically for Gmail and Yahoo!
    Which at the time supported OpenID
  • 35:25 - 35:31
    authentication only. And they created two
    new servers, the so-called bridging
  • 35:31 - 35:39
    servers, one for Gmail called Sideshow and
    the other one for Yahoo! Called BigTent.
  • 35:39 - 35:46
    Now, the user authenticates
    to the bridging server, using OpenID and
  • 35:46 - 35:55
    then the bridging server has an interface to
    the standard BrowserID interface. So one
  • 35:55 - 35:59
    problem was that OpenID identities, they
    are not email addresses. And so in OpenID
  • 35:59 - 36:06
    you add an attribute, which is called the
    email attribute. And, um, we're talking
  • 36:06 - 36:12
    about this email attribute in a minute. So
    let's have a look at how these identity
  • 36:12 - 36:18
    bridges work. We are not going into all
    the details of the BrowserID or Persona
  • 36:18 - 36:21
    protocol because this would be too
    complicated. But the identity bridge is
  • 36:21 - 36:29
    interesting and also important for some of
    the attacks that we found. So in the
  • 36:29 - 36:33
    identity bridge the following happens, so
    here on the left, we have Alice's browser
  • 36:33 - 36:38
    and in the middle we have the Sideshow
    identity bridge, and on the right side, we
  • 36:38 - 36:45
    have Gmail, which could also be Yahoo in
    this case. But let's say it's Gmail. So
  • 36:45 - 36:52
    first, the user says that she wants to log
    in at Sideshow and then Sideshow sends an
  • 36:52 - 36:59
    OpenID request requesting the signed
    email attribute from Gmail. This request
  • 36:59 - 37:05
    is then forwarded to Gmail. Gmail sees the
    request, the user logs in at Gmail for
  • 37:05 - 37:12
    authentication. And then Gmail creates
    this OpenID assertion, which contains the
  • 37:12 - 37:18
    signed email address attribute for Alice
    and as you can see, this is all in the
  • 37:18 - 37:26
    green box. So all properly signed. Nice.
    And now Alice's browser redirects this
  • 37:26 - 37:32
    document to Sideshow. Now Sideshow doesn't
    check the contents of this assertion for
  • 37:32 - 37:39
    itself. Instead, it sends these things to
    Gmail, Gmail checks everything that is
  • 37:39 - 37:45
    signed, and tells Sideshow: yes, this
    document looks correct to me, everything
  • 37:45 - 37:52
    that was signed was signed by me and was
    not tampered with. Then Sideshow looks at
  • 37:52 - 37:57
    the document and sees Alice wanted to log
    in. So the user must be
  • 37:57 - 38:04
    Alice, and provides a cookie because
    the user is now logged in as Alice. So
  • 38:04 - 38:12
    far, simple. Now for some of the attacks
    that we found. First attack, identity
  • 38:12 - 38:19
    forgery. So here we have essentially the
    same that we saw before, the same setting,
  • 38:19 - 38:24
    except now we don't have Alice's browser
    on the left. We have the attacker's browser
  • 38:24 - 38:31
    on the left. The attacker can go to
    Sideshow and say, I want to sign in. Now
  • 38:31 - 38:38
    Sideshow sends this OpenID request to the
    attacker, who can change this request:
  • 38:38 - 38:44
    the attacker can just remove the request
    for the email attribute from the request,
  • 38:44 - 38:52
    which is still a valid OpenID request.
    Gmail sees this request, and now the
  • 38:52 - 38:56
    attacker logs in. The attacker doesn't
    have Alice's user data so the attacker
  • 38:56 - 39:03
    just logs in with his own credentials. And
    now Gmail creates an OpenID assertion
  • 39:03 - 39:11
    containing the signed attribute, which was
    requested, which was nothing. So
  • 39:11 - 39:16
    essentially, the document is empty, at
    least without any email address. Now, the
  • 39:16 - 39:24
    attacker can simply add a new attribute to
    this document containing an email address
  • 39:24 - 39:31
    that he has chosen arbitrarily. This, of
    course, is not signed, which is not a
  • 39:31 - 39:38
    problem because this document can be
    partly signed and this document is
  • 39:38 - 39:44
    forwarded to Gmail. Gmail now analyzes
    this document and sees whether there is a
  • 39:44 - 39:50
    signed part in this document, and checks
    this signed part. The signed part doesn't
  • 39:50 - 40:00
    contain anything useful, but it is
    correct: the signature is valid. So
  • 40:00 - 40:05
    it sends back to Sideshow: I checked,
    this document looks fine to me. Now
  • 40:05 - 40:10
    Sideshow looks at the document, sees that
    there is an email attribute, uses this email
  • 40:10 - 40:18
    attribute and the attacker is signed into
    any Gmail account that he likes, using
  • 40:18 - 40:25
    BrowserID. OK, so this is bad, as you can
    imagine. And we told the Mozilla guys
  • 40:25 - 40:31
    about this and they were quite fast. So we
    were really surprised. They were really
  • 40:31 - 40:35
    quick. So I think it was in the middle of
    the night for most of them, but they
  • 40:35 - 40:41
    scrambled in the back and they wrote some
    patches and so on and so on. And I think
  • 40:41 - 40:47
    it was not even 24 hours until it was all
    deployed and fixed. So that was
  • 40:47 - 40:54
    quite good. But then we took another look
    at the system and we found identity
  • 40:54 - 41:03
    forgery number two, which is actually
    remarkably similar and works as follows. So
  • 41:03 - 41:06
    the attacker sends the authentication
    request, you know this part; Sideshow
  • 41:06 - 41:13
    wants the signed email attribute, and the
    attacker now doesn't change anything, the
  • 41:13 - 41:18
    attacker just forwards this request to
    Gmail. Gmail asks for the credentials, the
  • 41:18 - 41:26
    attacker signs in, and Gmail sends back the
    OpenID assertion containing the signed
  • 41:26 - 41:32
    email address of the attacker. So no
    attack up to this point. But now the
  • 41:32 - 41:37
    attacker can do the following. The
    attacker adds another attribute, another
  • 41:37 - 41:47
    email attribute. And yeah, you can guess
    what happens. The document is forwarded to
  • 41:47 - 41:54
    Gmail. Gmail checks the signed part of the
    document, which is still fine, sends back
  • 41:54 - 41:58
    to Sideshow that everything is fine with
    this document. And Sideshow selects the
  • 41:58 - 42:05
    wrong email address. Yeah, and now the
    attacker is signed into any user
  • 42:05 - 42:14
    account again. OK, so this was the second
    identity forgery attack. We also found
  • 42:14 - 42:19
    another attack, which is not very
    spectacular. So this
  • 42:19 - 42:27
    was all, of course, about authentication.
    We also took a look at privacy. So,
  • 42:27 - 42:34
    remember, privacy means, in the words
    of Mozilla, that the BrowserID protocol never
  • 42:34 - 42:43
    leaks tracking information back to the
    identity provider, except it does. So
  • 42:43 - 42:46
    ideally, the identity provider should be
    unable to tell where a user logs in. In
  • 42:46 - 42:55
    fact, this is broken, because in the
    browser, the following happens. If
  • 42:55 - 43:00
    malicious identity provider wants to find
    out whether a user is logged in at some
  • 43:00 - 43:06
    specific relying party or not, then the
    malicious identity provider can just open
  • 43:06 - 43:14
    an iframe containing the website of that
    relying party he wants to probe. Now the
  • 43:14 - 43:19
    following happens, the normal JavaScript
    of BrowserID runs in this relying party
  • 43:19 - 43:25
    because it has BrowserID support,
    obviously, and creates an iframe and
  • 43:25 - 43:32
    inside this iframe, another iframe will be
    created. But this innermost iframe will
  • 43:32 - 43:40
    only be created if the user logged in at
    this RP before. Now, since the outermost
  • 43:40 - 43:44
    and the innermost iframe, they come from
    the same source and of course they can
  • 43:44 - 43:50
    collaborate and communicate. They can for
    example, just send a postMessage saying:
  • 43:50 - 43:56
    the user logged in at this relying party
    before. So an identity provider can easily
  • 43:56 - 44:03
    probe whether a user logged in at some
    relying party or not. And this
  • 44:03 - 44:06
    unfortunately cannot be fixed without a
    major redesign of BrowserID, because they
  • 44:06 - 44:13
    relied on all these iframes and so on.
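The probe can be condensed into a toy simulation. This is plain Python standing in for browser behavior, and the origin names are invented; real BrowserID frames and messages are more involved:

```python
# Toy model of the login probe -- not browser code. The malicious
# IdP embeds the relying party in an iframe; BrowserID's script
# there creates a login.persona.org iframe, whose innermost child
# frame (served from the IdP's own origin) is only created if the
# user logged in at this relying party before.

def frames_after_loading_rp(logged_in_before: bool) -> list[str]:
    """Origins of the frames nested under the probing IdP's page."""
    frames = ["rp.example", "login.persona.org"]
    if logged_in_before:
        frames.append("evil-idp.example")  # innermost, IdP's origin
    return frames

def idp_can_probe(frames: list[str], idp_origin: str) -> bool:
    # Browsers let same-origin frames communicate (e.g. via
    # postMessage), so the outer IdP page learns whether its
    # same-origin inner frame exists -- and thus the login state.
    return idp_origin in frames

assert idp_can_probe(frames_after_loading_rp(True), "evil-idp.example")
assert not idp_can_probe(frames_after_loading_rp(False), "evil-idp.example")
```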
    Yeah. So I think this can be considered
  • 44:13 - 44:19
    broken beyond repair. We also found some
    variants of these privacy attacks which
  • 44:19 - 44:26
    rely on other mechanisms. But essentially,
    yeah, you get the idea here. Privacy
  • 44:26 - 44:36
    of BrowserID is broken. OK, so to sum up
    BrowserID, we found attacks, but we also
  • 44:36 - 44:45
    were able to fix them with respect to
    security, and we also used our formal
  • 44:45 - 44:51
    methods to prove the security of the
    fixed BrowserID system. But privacy is
  • 44:51 - 44:55
    broken beyond repair.
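The verification bug behind both forgery attacks boils down to a signature that covers only a subset of the document while the email attribute is read from the whole document. A hedged sketch, with HMAC standing in for Gmail's real OpenID signature and invented field names:

```python
import hashlib
import hmac

IDP_KEY = b"gmail-signing-key"  # stand-in for the IdP's real key

def sign(fields: dict[str, str]) -> str:
    """Toy signature over a canonical encoding of some fields."""
    msg = "&".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return hmac.new(IDP_KEY, msg.encode(), hashlib.sha256).hexdigest()

# Gmail signs exactly the attributes that were requested -- the
# attacker stripped the email attribute from the request, so the
# signed part is essentially empty.
signed_part = {"openid.mode": "id_res"}
assertion = {**signed_part, "sig": sign(signed_part)}

# The attacker appends an unsigned email attribute of his choosing.
assertion["email"] = "alice@gmail.com"

def flawed_bridge_check(doc: dict[str, str]) -> str:
    # The bridge's bug: verify only that the *signed subset* is
    # intact, then trust the email attribute from the full document.
    signed = {k: v for k, v in doc.items() if k == "openid.mode"}
    assert hmac.compare_digest(doc["sig"], sign(signed))
    return doc["email"]  # unsigned, attacker-controlled!

assert flawed_bridge_check(assertion) == "alice@gmail.com"
```

The deployed fix corresponds to accepting the email attribute only when it is inside the signed subset (and rejecting duplicate attributes).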
  • 44:55 - 45:01
    Guido: OK, this leads us to the question,
    can we build a single-sign-on system that
  • 45:01 - 45:09
    provides security and privacy on the Web?
    So we thought a lot about this question.
  • 45:09 - 45:16
    And then we used our formal model to
    design such a single-sign-on system. And
  • 45:16 - 45:23
    we could also then use the formal model to
    prove that these properties are actually
  • 45:23 - 45:31
    fulfilled. So the basic principle of the
    system is called SPRESSO for Secure
  • 45:31 - 45:38
    Privacy Respecting Single-Sign-On is the
    following. We have the user with a
  • 45:38 - 45:44
    browser. This user wants to log in at some
    relying party, for example, at Wikipedia.
  • 45:44 - 45:51
    So here we have the same idea as in
    BrowserID to use the email address and the
  • 45:51 - 45:58
    e-mail provider as the identity provider.
    So the user enters the email address and
  • 45:58 - 46:08
    then the relying party asks for some proof
    of this identity. So the user goes to her
  • 46:08 - 46:14
    email provider, which is identity provider
    in this case, authenticates there. And
  • 46:14 - 46:21
    then the identity provider creates a
    document that proves Alice's identity
  • 46:21 - 46:26
    and then forwards this document to the
    relying party. And the relying party can
  • 46:26 - 46:32
    check if everything is all right and then
    consider the user to be logged in. So
  • 46:32 - 46:37
    let's have a closer look on how this
    system works. So here again, we have
  • 46:37 - 46:45
    Alice's browser, the window of the relying
    party. Alice enters her email address. The
  • 46:45 - 46:49
    email address is sent to the relying party.
    And now the relying party creates a
  • 46:49 - 46:56
    document that contains the identity of the
    relying party itself. And this document is
  • 46:56 - 47:02
    encrypted and we call this document the
    tag. So now the tag is sent, along with
  • 47:02 - 47:07
    the key that was used to encrypt this
    document, so this is symmetric encryption
  • 47:07 - 47:16
    with a fresh key, to the browser.
    And now in the browser, the SPRESSO code
  • 47:16 - 47:22
    opens a new window of the identity
    provider that is given by the domain of
  • 47:22 - 47:34
    the email address and sends the tag over
    to this window. This login dialog prompts
  • 47:34 - 47:41
    the user to authenticate, so the user now
    enters her password. And this is sent
  • 47:41 - 47:47
    along with the tag to the server and now
    the server creates this document I've just
  • 47:47 - 47:56
    spoken of in the last slide and this
    document, we call it the user certificate,
  • 47:56 - 48:02
    user assertion, sorry, user assertion. We
    send it back to the window at the login
  • 48:02 - 48:10
    dialog and now we have a problem. We could
    just send it over to the Wikipedia window.
  • 48:10 - 48:17
    But I'll show you in a minute why this is a
    bad idea. So instead, now we have a third
  • 48:17 - 48:25
    party, the forwarder which serves just a
    single static JavaScript file, and this is
  • 48:25 - 48:33
    loaded in an iframe in the login dialog.
    And this iframe gets the user assertion
  • 48:33 - 48:40
    and it also gets the key and now it can
    decrypt the tag, see who the intended
  • 48:40 - 48:47
    receiver is, and then it sends the user
    assertion over to the window of the
  • 48:47 - 48:51
    relying party which forwards it to the
    server of the relying party, which can
  • 48:51 - 48:57
    then check if everything is all right
    and consider the user to be logged in. So
  • 48:57 - 49:04
    why do we need this forwarder? So at first
    it may look strange. So let's look what
  • 49:04 - 49:09
    happens if we just don't have this
    forwarder. So let's assume the user wants
  • 49:09 - 49:15
    to log in at some malicious relying party
    at attacker.com and enters her email
  • 49:15 - 49:19
    address. But the attacker wants to
    impersonate the user and log in
  • 49:19 - 49:24
    at some other relying party. Let's say to
    Wikipedia, for example, and the attacker
  • 49:24 - 49:31
    goes to Wikipedia, says, hi, I'm Alice. I
    want to log in. Wikipedia creates a tag.
  • 49:31 - 49:39
    This is sent over to the attacker who just
    relays it to the user. The protocol runs
  • 49:39 - 49:46
    on, the user authenticates to her identity
    provider. And then we just sent the tag
  • 49:46 - 49:52
    over as the identity provider does not
    know who is the intended receiver's
  • 49:52 - 49:58
    because we want to have this privacy
    feature. This just went through and the
  • 49:58 - 50:03
    attacker gets the user certificate and
    user assertion and forwards it to
  • 50:03 - 50:09
    Wikipedia and then the attacker is
    considered to be Alice. And this is that.
  • 50:09 - 50:17
    So we need some mechanism to ensure that
    the user assertion is not forwarded to some
  • 50:17 - 50:22
    random party, but only to the intended
    receiver. And for this, we have this
  • 50:22 - 50:29
    forwarder. Now, you may think this
    forwarder could also be malicious, but
  • 50:29 - 50:34
    let's talk about this in a second.
    So let's just look at what the forwarder
  • 50:34 - 50:42
    does: the forwarder gets the user
    assertion and the key to decrypt
  • 50:42 - 50:48
    the tag. And now it can instruct the
    browser to send a postMessage, but only to
  • 50:48 - 50:55
    give it to a window of Wikipedia. So the
    browser checks whether the receiver is a
  • 50:55 - 51:00
    window of Wikipedia or not. And if it's
    not, it doesn't deliver this message. So
  • 51:00 - 51:07
    this protects the user assertion
    from being leaked. And now you may
  • 51:07 - 51:12
    think this forwarder may be malicious and
    deliver some other script that does
  • 51:12 - 51:18
    strange things like forwarding things
    straight to the attacker directly. But we can
  • 51:18 - 51:22
    enforce that the correct script is running
    inside that iframe using subresource
  • 51:22 - 51:30
    integrity, where you just tell the browser:
    in this window, only this code may run.
  • 51:30 - 51:36
    And in this case the forwarder cannot just
    put some arbitrary or malicious code in
  • 51:36 - 51:41
    this iframe. And also there is no
    information that leaks back from the
  • 51:41 - 51:49
    browser to the forwarder. So to sum this
    up: as I just presented, SPRESSO
  • 51:49 - 51:55
    features privacy and authentication. It's
    open and decentralized. So you don't need
  • 51:55 - 52:03
    any specific central party. It's compliant
    with web standards, based on HTML5.
  • 52:03 - 52:10
    And we have a formal proof that all these
    properties we talked about at the beginning
  • 52:10 - 52:15
    actually hold and you can find a demo and
    more information on spresso.me.
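The core of the flow Guido described can be sketched as follows. The toy stream cipher and the origin strings are stand-ins (a real deployment would use an authenticated cipher and bind more data into the tag); the point is only that the identity provider sees just the opaque tag, while the forwarder, holding tag and key, can recover the intended receiver:

```python
import hashlib
import secrets

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: keystream = SHA-256(key || block counter).
    # Illustrative only; use a proper AEAD cipher in practice.
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        out += bytes(a ^ b for a, b in zip(data[i:i + 32], block))
    return bytes(out)

# 1. The relying party encrypts its own origin under a fresh
#    symmetric key -> this is the "tag".
key = secrets.token_bytes(16)
tag = xor_stream(key, b"https://wikipedia.example")

# 2. The browser hands the tag to the identity provider, which
#    signs an assertion over (tag, email) without ever learning
#    who the relying party is.

# 3. The forwarder receives both tag and key, recovers the
#    intended receiver, and delivers the assertion only to a
#    window of that origin (via a targeted postMessage).
def forwarder_target(tag: bytes, key: bytes) -> str:
    return xor_stream(key, tag).decode()

assert forwarder_target(tag, key) == "https://wikipedia.example"
```

Without step 3's check, a malicious relying party could relay the tag of another site and replay the resulting assertion there, which is exactly the attack the forwarder prevents.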
  • 52:15 - 52:26
    Daniel: OK, now to conclude the talk, what
    is the takeaway? First of all, we talked, I
  • 52:26 - 52:31
    think most of the time about OAuth 2.0.
    Most of the results also translate to
  • 52:31 - 52:38
    OpenID Connect. We have formally proven
    the security of the protocol of OAuth and
  • 52:38 - 52:47
    also OpenID Connect, which is a nice
    result, of course, if you're OK with
  • 52:47 - 52:53
    having no privacy, because OAuth and
    OpenID Connect don't have the kind of
  • 52:53 - 53:01
    privacy that we talked about. Regarding
    OAuth 1.0 and OpenID, I think that can be
  • 53:01 - 53:05
    considered deprecated and shouldn't be
    used. BrowserID, Mozilla Persona was a
  • 53:05 - 53:15
    nice experiment, but it is dead now and also
    has broken privacy. With SPRESSO we have
  • 53:15 - 53:19
    shown, however, that you can achieve
    privacy on web single-sign-on using
  • 53:19 - 53:26
    standard HTML5 features and standard web
    features. But of course for now it is a
  • 53:26 - 53:34
    proof of concept. As you have seen, we
    don't even have a nice logo yet. So
  • 53:34 - 53:40
    one target audience is certainly
    developers. Developers: use
  • 53:40 - 53:47
    libraries wherever possible. For example,
    the pyoidc library is written by members of
  • 53:47 - 53:57
    the OAuth and OpenID working groups. So
    they know what they do. Hopefully. Also
  • 53:57 - 54:03
    regarding RFCs, they are hard to read and
    information is often spread across several
  • 54:03 - 54:09
    documents. They are often not written
    clearly and they are not always up to
  • 54:09 - 54:16
    date, but they are still an important
    reference. And I think it's a good advice
  • 54:16 - 54:23
    to look at RFCs from time to time, even
    if they are hard to read. So thank you
  • 54:23 - 54:28
    very much for your attention. If you want
    to talk to us, come to us at the
  • 54:28 - 54:34
    Maschinendeck assembly in hall 3, or
    join us at the next Pi and More, shameless
  • 54:34 - 54:40
    plug here, January 14 in Krefeld, or at
    University of Stuttgart starting in
  • 54:40 - 54:47
    January. Thank you very much.
  • 54:47 - 54:52
    applause
  • 54:52 - 54:57
    Presenter: Now we have eight minutes for
    questions, what do we have from the
  • 54:57 - 55:03
    Internet?
    Question (internet): So we've got two
  • 55:03 - 55:11
    questions from the Internet. Can you
    hear me? So in the diagram you showed in one
  • 55:11 - 55:16
    of the first slides, why does the
    authentication follow authorization?
  • 55:16 - 55:22
    Shouldn't it normally be the other way
    around?
  • 55:22 - 55:33
    Presenter: Can you try to repeat the
    question?
  • 55:33 - 55:40
    Internet: Yeah, sorry, at the diagram you
    showed in one of the first slides. Why
  • 55:40 - 55:46
    does the authentication
    follow authorization? Shouldn't it
  • 55:46 - 55:53
    normally be the other way around?
    Guido: OK, so these are two concepts that
  • 55:53 - 56:04
    are kind of orthogonal to each other. So
    you can either do authentication to ensure
  • 56:04 - 56:09
    yourself of the user's identity or you can
    act on the user's behalf at the identity
  • 56:09 - 56:14
    provider, like posting on the user's
    Facebook timeline or doing different
  • 56:14 - 56:22
    things there. But for authentication, you
    need to retrieve some unique user
  • 56:22 - 56:29
    identifier. And for this, you basically make
    use of the authorization mechanism. So
  • 56:29 - 56:35
    you get authorized to access this unique
    user identifier and you use this then for
  • 56:35 - 56:40
    authentication.
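Guido's answer, as a toy mock (the endpoint shape and field names are invented; a real relying party would make an HTTPS call to the provider's userinfo API):

```python
# OAuth only *authorizes* access to an API; authentication is
# layered on top by using the access token to retrieve a unique
# user identifier and treating that identifier as the login.

MOCK_USERINFO = {  # what the IdP's userinfo API might return per token
    "token-abc": {"sub": "user-1234", "email": "alice@idp.example"},
}

def fetch_userinfo(access_token: str) -> dict[str, str]:
    # Stand-in for an HTTPS GET to the provider's userinfo endpoint.
    return MOCK_USERINFO[access_token]

def authenticate(access_token: str) -> str:
    # The RP is *authorized* to read the profile; it *authenticates*
    # the user by adopting the stable "sub" identifier as the login.
    return fetch_userinfo(access_token)["sub"]

assert authenticate("token-abc") == "user-1234"
```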
    Presenter: Thank you. Questions from here.
  • 56:40 - 56:47
    Question 2: So for the SPRESSO protocol,
    you said you need the forwarding party to
  • 56:47 - 56:51
    check whether this certificate was
    actually from Wikipedia or not from
  • 56:51 - 56:55
    attacker.com. But could Alice do this
    check herself?
  • 56:55 - 57:02
    Guido: You mean that you should present
    the user something and the user accepts
  • 57:02 - 57:07
    this and or declines this in this sense?
    Or...
  • 57:07 - 57:12
    Question 2: She has the challenge that is
    signed by her email provider and she has
  • 57:12 - 57:16
    the key that encrypted Wikipedia's
    identity. So she could use that to decrypt
  • 57:16 - 57:19
    it and check if it's Wikipedia or
    attacker.com.
  • 57:19 - 57:23
    Guido: Yeah, yeah. I mean, in principle,
    yes.
  • 57:23 - 57:27
    Daniel: So you mean the user?
    Question 2: Yeah, yes.
  • 57:27 - 57:30
    Daniel: Yes. The user could, of course,
    check. We could ask the user, did you
  • 57:30 - 57:36
    really want to sign attacker.com or
    wikipedia.com? But of course, we all know
  • 57:36 - 57:42
    that users are not good at making such decisions.
    So, yeah.
  • 57:42 - 57:47
    Presenter: Thank you. Questions from here?
    Question 3: Hi, thanks for the informative
  • 57:47 - 57:52
    talk, but I wanted to add a remark. It is
    highly unfair to call users stupid for
  • 57:52 - 57:55
    falling victim to clickjacking and
    phishing, because the attackers are working
  • 57:55 - 58:02
    professionally on enabling
    clickjacking and phishing. And if you need
  • 58:02 - 58:06
    a 4K monitor just to see that there is
    some JavaScript added with like a thousand
  • 58:06 - 58:14
    zero delimiters, it is impossible to blame
    the user for being stupid or falling
  • 58:14 - 58:20
    victim to clickjacking.
    Daniel: Yes, that's correct. Also,
  • 58:20 - 58:28
    sometimes you just can't see it. So yes.
    Presenter: Thank you. Questions from down
  • 58:28 - 58:35
    there? Sorry. Questions?
    Question 4: You talked about formal
  • 58:35 - 58:44
    verification of both the OAuth and your
    protocol. I wanted to know what kind of
  • 58:44 - 58:50
    program or whatever you used like ProVerif
    or Tamarin or whatever. And also, I think
  • 58:50 - 58:57
    you just proved the, you just verified a
    subset of OAuth?
  • 58:57 - 59:03
    Daniel: Let's start with the second
    question first. So for OAuth, we really
  • 59:03 - 59:08
    tried to introduce as many options as we
    could find in the standard, so to say.
  • 59:08 - 59:14
    OAuth is a very loose standard. So they
    give you a lot of options. In many ways.
  • 59:14 - 59:19
    We had to exclude some of them for
    practical reasons when modeling the stuff.
  • 59:19 - 59:26
    But we included almost all of the options
    that are provided. And we also have a
  • 59:26 - 59:31
    detailed write up of what the options are
    and what we excluded and what we included.
  • 59:31 - 59:37
    And now for the first part of the
    question, our model currently is a manual
  • 59:37 - 59:44
    model. So what we do is pen and paper
    proofs. The reasoning behind this is that
  • 59:44 - 59:51
    if you have tools, they are always, in
    some sense, limiting you. And when we
  • 59:51 - 59:58
    started out with this work, there was or
    there were two models, essentially already
  • 59:58 - 60:04
    existing web models, formal models, in
    the same area as we are. But they were
  • 60:04 - 60:12
    both based on a model checker, so one on
    ProVerif, one on another modeling tool...
  • 60:12 - 60:15
    Guido: Alloy.
    Daniel: Alloy. And both were limited by
  • 60:15 - 60:20
    the possibilities that you had in these
    model checkers. So we went the other
  • 60:20 - 60:25
    way around, what we wanted to do was a
    manual model that includes, that models
  • 60:25 - 60:30
    the web really precisely and
    comprehensively. And then as a second step
  • 60:30 - 60:36
    what we are currently working on or
    discussing is to transfer this into
  • 60:36 - 60:39
    some kind of tool.
    Question 4: Thank you.
  • 60:39 - 60:43
    Presenter: Two more questions, questions
    from the Internet?
  • 60:43 - 60:51
    Internet: So I was wondering if you know
    about ND-Auth(?) and RealMe Auth and what
  • 60:51 - 60:57
    you think about the question of using
    domain names vs. email addresses as the
  • 60:57 - 61:01
    user identifier.
    Daniel: Could you repeat that a bit
  • 61:01 - 61:05
    louder?
    Internet: So if you have any comments
  • 61:05 - 61:12
    about ND-Auth(?) and RealMe Auth, which use
    the domain name as identifier rather than
  • 61:12 - 61:16
    an email address.
    Daniel: So we didn't look at these
  • 61:16 - 61:21
    systems.
    Presenter: Yes, last question.
  • 61:21 - 61:27
    Question 5: The question regarding the
    forwarder and the privacy protection, I
  • 61:27 - 61:32
    realized with the forwarder as far as I
    understand, the forwarder is used in its
  • 61:32 - 61:39
    own iframe to prevent the IDP from taking
    control of the verification process,
  • 61:39 - 61:45
    knowing who the final system is?
    Guido: Yes.
  • 61:45 - 61:50
    Question 5: But what if the identity
    provider and the forwarder collaborate,
  • 61:50 - 61:58
    then the privacy would be broken?
    Guido: Yes. If
    we have these parties collaborating then
  • 61:58 - 62:04
    of course privacy is broken. So we
    haven't shown all the details of the
  • 62:04 - 62:13
    system. So this is really hard to prevent.
    But in SPRESSO, the relying party is
  • 62:13 - 62:20
    allowed to choose which forwarder has to
    be used. So the relying party should choose a forwarder
  • 62:20 - 62:26
    run by some trustworthy party. So this is
    the countermeasure to prevent
  • 62:26 - 62:31
    collaboration. But if these parties
    collaborate, then you are screwed. Yes.
  • 62:31 - 62:35
    Daniel: So I think it's also important to
    add that the forwarder is kind of a
  • 62:35 - 62:42
    semi-trusted party, because on the one
    hand, we can enforce that it uses the
  • 62:42 - 62:50
    correct code. Of course, the IDP then has
    to enforce this. On the other hand, you
  • 62:50 - 62:54
    still have some side channels like, for
    example, timing. So if you control the
  • 62:54 - 63:00
    parties, then you could check which IP
    addresses access, for example, the
  • 63:00 - 63:07
    forwarder and the IDP at the same time, and
    so on. So there are some side channels. So
  • 63:07 - 63:12
    the idea that we have to minimize this
    risk is to provide a set of trusted
  • 63:12 - 63:20
    forwarders that could be, for example,
    provided by some trusted parties like
  • 63:20 - 63:25
    Mozilla or the EFF or something, so that
    you have a set of forwarders to choose from
  • 63:25 - 63:29
    and hopefully choose a trusted one.
    Question 5: Thank you.
  • 63:29 - 63:32
    Daniel: You're welcome.
    Presenter: Guido Schmitz and Daniel Fett,
  • 63:32 - 63:49
    thank you so much for the great talk.
    Please give a great round of applause.
  • 63:49 - 63:54
    applause
  • 63:54 - 63:57
    postroll music
  • 63:57 - 64:05
    Subtitles created by many many volunteers and
    the c3subtitles.de team. Join us, and help us!
Title:
On the Security and Privacy of Modern Single Sign-On in the Web (33c3)
Description:

Video Language:
English
Duration:
01:04:05
