
37C3 - Writing secure software

  • 0:00 - 0:14
    37C3 preroll music
  • 0:14 - 0:16
    basically textbooks have been written
  • 0:16 - 0:20
    about it countless talks have been
  • 0:20 - 0:22
    illuminating all of the errors
  • 0:22 - 0:27
    of our ways and still all this sucky
  • 0:27 - 0:30
    software is out there but
  • 0:30 - 0:33
    Fefe over here the hero of our show
  • 0:33 - 0:37
    has put all of these best
  • 0:37 - 0:40
    practices into, you know, his work to
  • 0:40 - 0:43
    try to create a secure website he's
  • 0:43 - 0:47
    going to show us how it's done so that
  • 0:47 - 0:52
    we can all sleep way better at night and
  • 0:52 - 0:55
    with that template go back and
  • 0:55 - 0:57
    secure our own software and so with
  • 0:57 - 1:00
    that I'm going to hand it right over to
  • 1:00 - 1:02
    Fefe give him a round of applause
  • 1:02 - 1:12
    applause
  • 1:13 - 1:15
    thank you I have to start
  • 1:15 - 1:18
    with an apology because I did submit
  • 1:18 - 1:20
    this talk but it was rejected so the
  • 1:20 - 1:22
    slides are not at the stage where they
  • 1:22 - 1:24
    should be these are our slides for a
  • 1:24 - 1:26
    previous version of the talk it contains
  • 1:26 - 1:28
    all the material and I tried to update
  • 1:28 - 1:30
    it more but that destroyed the flow so
  • 1:30 - 1:33
    we're stuck with it basically the
  • 1:33 - 1:36
    difference was the audience so while
  • 1:36 - 1:38
    I expect more developers here the other
  • 1:38 - 1:39
    audience was more hackers and
  • 1:39 - 1:43
    business people so I tried to get them
  • 1:43 - 1:46
    from where they are and the main question
  • 1:46 - 1:48
    usually is "are we there yet?" right
  • 1:48 - 1:51
    so about me you've probably
  • 1:51 - 1:53
    seen this before I'm a code auditor by
  • 1:53 - 1:55
    trade I have a small company and
  • 1:55 - 1:57
    companies show us their code and I show
  • 1:57 - 2:00
    them bugs I find in them quite easy
  • 2:02 - 2:04
    but before we start I have a small
  • 2:04 - 2:06
    celebration to do this actually happened
  • 2:06 - 2:09
    just a day before the first time I
  • 2:09 - 2:12
    talked about this so Kaspersky
  • 2:12 - 2:15
    announced they found some malware linked
  • 2:15 - 2:17
    against dietlibc
  • 2:17 - 2:18
    which I have written so this is
  • 2:18 - 2:19
    like a
  • 2:19 - 2:26
    applause
  • 2:27 - 2:29
    some of the malware people
  • 2:29 - 2:31
    know what's good
  • 2:31 - 2:33
    so basically the main question when I
  • 2:33 - 2:36
    talk to customers is we spend so much
  • 2:36 - 2:39
    money on this why isn't it working
  • 2:39 - 2:42
    and the answer is you're doing it wrong
  • 2:42 - 2:46
    so I will try to show now what exactly
    is wrong
  • 2:46 - 2:50
    and there's a small preface here people
  • 2:50 - 2:52
    usually say there's no time to do this
  • 2:52 - 2:54
    right and that's just wrong you have
  • 2:54 - 2:57
    exactly as much time per day as other
  • 2:57 - 2:59
    people who did great things so you can
  • 2:59 - 3:02
    do great things too you just need to do it
  • 3:03 - 3:05
    so let's play a little warm-up game
  • 3:05 - 3:07
    it's called how it started and how
  • 3:07 - 3:10
    it's going so let's have a demo round
  • 3:10 - 3:11
    IBM Watson is revolutionizing
  • 3:11 - 3:15
    10 industries and it's going like this
  • 3:15 - 3:17
    whatever happened to IBM Watson that's a
  • 3:17 - 3:20
    typical pattern in the security industry
  • 3:20 - 3:23
    right so here's another one how it started
  • 3:23 - 3:25
    revolutionize security with AI
  • 3:25 - 3:27
    right we all know where this is going
  • 3:27 - 3:28
    laughter
  • 3:28 - 3:31
    right so that's the pattern
  • 3:31 - 3:34
    let's play IT security minesweeper
  • 3:33 - 3:35
    right so everybody here probably
  • 3:35 - 3:37
    knows who Gartner is they publish
  • 3:37 - 3:39
    recommendations and they even have a
  • 3:39 - 3:41
    voting section where people can say
  • 3:41 - 3:43
    this is the best product in this section
  • 3:43 - 3:45
    right so let's look at a few of them and
  • 3:45 - 3:48
    see what happened to people who trusted
    Gartner
  • 3:48 - 3:51
    first is a firewall right so how
  • 3:51 - 3:54
    it started the number one recommendation
  • 3:54 - 3:57
    is for Fortinet and they have a lot of
  • 3:57 - 3:59
    marketing gibberish
  • 3:59 - 4:01
    laughter
  • 4:01 - 4:03
    and if you look how it's going it's not
  • 4:03 - 4:05
    going so good
  • 4:06 - 4:08
    so let's extend the pattern a bit
  • 4:08 - 4:10
    why what happened to me in this regard
  • 4:10 - 4:12
    so I don't need a firewall
  • 4:12 - 4:14
    I don't have any ports open that I need
    blocking right
  • 4:14 - 4:16
    so you don't need this
  • 4:16 - 4:19
    strictly speaking you don't need it
  • 4:19 - 4:20
    next discipline endpoint protection
  • 4:20 - 4:25
    so it started with Trellix this is the
  • 4:25 - 4:27
    number one recommendation on Gartner
  • 4:27 - 4:29
    I hadn't heard of them they're like a McAfee
  • 4:29 - 4:30
    FireEye joint venture or something
  • 4:30 - 4:31
    who cares
  • 4:31 - 4:35
    they also have great marketing gibberish
  • 4:35 - 4:36
    and then if you look at what happened
  • 4:36 - 4:39
    it's like they made it worse
  • 4:39 - 4:43
    okay so this didn't apply to me
  • 4:43 - 4:45
    either because I don't use snake oil
  • 4:45 - 4:47
    let's see the third one password manager
  • 4:47 - 4:49
    also very popular
  • 4:50 - 4:52
    how it started recommended LastPass
  • 4:52 - 4:54
    you probably know where this is going
  • 4:54 - 4:56
    laughter
  • 4:57 - 5:00
    yeah they got owned and then
  • 5:00 - 5:01
    people got owned
  • 5:03 - 5:05
    so you may notice a pattern here
  • 5:05 - 5:07
    this didn't apply to me because
  • 5:07 - 5:09
    I disabled password authentication and use
  • 5:09 - 5:11
    public keys which have been available for
  • 5:11 - 5:14
    decades right so small bonus
  • 5:14 - 5:17
    the last one 2FA
  • 5:18 - 5:20
    Gartner recommends Duo which has
  • 5:20 - 5:22
    been bought by Cisco but doesn't matter
  • 5:24 - 5:25
    so if you look at what Duo does
  • 5:25 - 5:27
    your server asks the cloud for
  • 5:27 - 5:30
    permission the cloud goes to the telephone
  • 5:30 - 5:34
    the telephone shows a popup you click yes
  • 5:32 - 5:35
    and then the cloud tells the server it's
  • 5:35 - 5:37
    okay you can let them in if you look
  • 5:37 - 5:39
    really closely you can notice the cloud
  • 5:39 - 5:42
    doesn't have to do the popup it can just
  • 5:42 - 5:44
    say sure so this comes pre-owned
  • 5:44 - 5:46
    there is no need to hack anything here
  • 5:46 - 5:47
    laughter
  • 5:47 - 5:49
    and something many people don't
  • 5:49 - 5:51
    realize you don't need two factor
  • 5:51 - 5:53
    if you have public key that's already the
    second factor
  • 5:54 - 5:55
    Okay, so
  • 5:56 - 5:58
    yeah let's skip over this briefly
  • 5:58 - 6:00
    Splunk is the recommended option here
  • 6:00 - 6:02
    and they make the organization
  • 6:02 - 6:04
    more resilient unless you install it
  • 6:04 - 6:07
    laughter
  • 6:07 - 6:16
    applause
  • 6:16 - 6:18
    okay so this one is dear to my heart
  • 6:18 - 6:21
    because people start arguing about
  • 6:21 - 6:22
    whether to install patches and
  • 6:22 - 6:25
    which patch to install first and it used
  • 6:25 - 6:28
    to be simple you look for problems
  • 6:28 - 6:29
    then you install the patches and then
  • 6:29 - 6:32
    it got a bit more complicated and
  • 6:32 - 6:33
    the result is this right
  • 6:33 - 6:36
    that's a famous podcast in Germany
  • 6:36 - 6:39
    it's about a municipality that got owned
  • 6:39 - 6:42
    by ransomware and then had to call the
  • 6:42 - 6:43
    army for help
  • 6:43 - 6:44
    inaudible chatter in crowd
  • 6:44 - 6:47
    and what you should do I'm including
  • 6:47 - 6:48
    this for completeness install all patches
  • 6:48 - 6:50
    immediately but that's a separate talk
  • 6:50 - 6:53
    right so you may notice a pattern here
  • 6:53 - 6:54
    the IT security industry
  • 6:54 - 6:56
    recommends something and
  • 6:56 - 6:58
    if you do it you're [ __ ] so don't do it
  • 6:58 - 7:01
    in case you can't read this says snake
  • 7:01 - 7:03
    repellent granules and then there's a
  • 7:03 - 7:05
    snake sleeping next to it
  • 7:05 - 7:06
    laughter
  • 7:06 - 7:07
    coughing
  • 7:08 - 7:11
    right so if we can't trust the
  • 7:11 - 7:13
    recommendations of the industry what
    shall we do
  • 7:13 - 7:15
    and so I had a lot of
  • 7:15 - 7:17
    time on my hands because I didn't have
  • 7:17 - 7:20
    to clean up after crappy IT security
  • 7:20 - 7:22
    industry recommendations so
  • 7:22 - 7:24
    what did I do with my time
  • 7:24 - 7:27
    and I decided I need a blog
  • 7:27 - 7:30
    some time ago now and I started
  • 7:30 - 7:33
    thinking what do I need and it's
  • 7:33 - 7:35
    actually not that much I could have just
  • 7:35 - 7:38
    shown basically static content a little
  • 7:38 - 7:40
    search function would be good but it's
  • 7:40 - 7:43
    optional um I didn't need comments for
  • 7:43 - 7:45
    legal reasons because people start
  • 7:45 - 7:48
    posting like links to malware or
  • 7:48 - 7:50
    whatever I don't want that I don't
  • 7:50 - 7:52
    need that right so the first version was
  • 7:52 - 7:54
    actually really easy it was a small
  • 7:54 - 7:56
    standard web server and I had the
  • 7:56 - 7:58
    blog entries as static HTML files
  • 7:58 - 8:00
    one file per month it was actually really
  • 8:00 - 8:02
    easy if you want to search you can just
  • 8:02 - 8:05
    ask Google and limit it to my site so
  • 8:05 - 8:07
    posting was also easy I had a little
  • 8:07 - 8:10
    script that I could run on the server
  • 8:10 - 8:13
    and I just SSH in and SSH I trust for
  • 8:13 - 8:15
    authentication so there's no new attack
  • 8:15 - 8:17
    surface I have that anyway and this is a
  • 8:17 - 8:20
    great design it's secure it's simple
  • 8:20 - 8:22
    there's low risk it's also high
  • 8:22 - 8:25
    performance but you couldn't do a talk
  • 8:25 - 8:27
    about it at the CCC right so
  • 8:27 - 8:30
    it's too boring so I started to introduce
  • 8:30 - 8:31
    risk in my setup
  • 8:31 - 8:34
    laughter
  • 8:34 - 8:36
    so the first idea was I had
  • 8:36 - 8:38
    written a small web server I could just
  • 8:38 - 8:40
    implement the blog in the web server
  • 8:40 - 8:43
    because you know it's my code anyway
  • 8:43 - 8:47
    but that has downsides if the blog
  • 8:47 - 8:49
    is running in the web server then it can
  • 8:49 - 8:51
    access all the memory of the web server
  • 8:51 - 8:53
    in particular it can see the TLS private
  • 8:53 - 8:55
    key and that I don't want people to
  • 8:55 - 8:58
    extract right so it can't be a module
  • 8:58 - 9:00
    in the web server
  • 9:00 - 9:03
    and the obvious solution is
  • 9:03 - 9:06
    it has to run in a different user ID on
  • 9:06 - 9:08
    Linux I'm using Linux but any
  • 9:08 - 9:10
    Unix or Windows would be the same
  • 9:10 - 9:12
    basically it runs in a different user ID
  • 9:12 - 9:14
    and then if you take over the
  • 9:14 - 9:16
    process of the blog because there's some
  • 9:16 - 9:19
    bug in it you couldn't access the TLS
  • 9:19 - 9:22
    key and while I did that the industry
  • 9:22 - 9:23
    was doing this
  • 9:23 - 9:24
    chatter
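    A rough sketch of that separation in C, assuming the web server starts with enough privilege to switch users; the UID/GID values and the path are placeholders, not the real setup:

      /* Sketch: run the blog handler under its own unprivileged user,
       * so a compromised handler cannot read the web server's memory
       * and therefore not the TLS private key. */
      #include <grp.h>
      #include <sys/types.h>
      #include <sys/wait.h>
      #include <unistd.h>

      static void spawn_blog_handler(void) {
          pid_t pid = fork();
          if (pid == 0) {                     /* child: becomes the blog process */
              if (setgroups(0, NULL) != 0 ||  /* drop supplementary groups */
                  setgid(2001) != 0 ||        /* placeholder "blog" group  */
                  setuid(2001) != 0)          /* placeholder "blog" user   */
                  _exit(111);                 /* never run with leftover privileges */
              execl("/srv/blog/blog.cgi", "blog.cgi", (char *)NULL);
              _exit(112);                     /* exec failed */
          } else if (pid > 0) {
              waitpid(pid, NULL, 0);          /* the parent keeps the TLS key to itself */
          }
      }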
  • 9:24 - 9:25
    that's like the running gag of this
  • 9:25 - 9:28
    talk I show all kinds of interesting
  • 9:28 - 9:29
    things the industry did and then show
  • 9:29 - 9:31
    what I did in that time right so
  • 9:32 - 9:33
    next question
  • 9:33 - 9:35
    where's the content I could just have
  • 9:35 - 9:37
    files on disk like static HTML as before
  • 9:37 - 9:40
    but I think that's not professional enough
  • 9:40 - 9:42
    right so for a good CCC talk you
  • 9:42 - 9:44
    need to be more professional
  • 9:44 - 9:45
    also for a different
  • 9:45 - 9:47
    project I had just written an LDAP server
  • 9:47 - 9:51
    so I decided to reuse it and
  • 9:51 - 9:52
    while I did that the industry did this
  • 9:52 - 9:54
    I took this photo at the airport of
  • 9:54 - 9:56
    Jerusalem so this is an actual ad it's
  • 9:56 - 9:57
    not photoshopped right it's for
  • 9:57 - 9:59
    Northrop Grumman which is a
  • 9:59 - 10:03
    military contractor and it's about full
  • 10:03 - 10:06
    spectrum cyber across all domains
  • 10:06 - 10:07
    chatter
  • 10:07 - 10:10
    so why would I write my own LDAP server
  • 10:10 - 10:12
    mostly because it's small and
  • 10:12 - 10:15
    because I'm an auditor by trade I know
  • 10:15 - 10:18
    that if you want a chance to actually
  • 10:18 - 10:20
    audit the code it needs to be small
  • 10:20 - 10:22
    because that's a limited resource
  • 10:22 - 10:24
    the time you can spend on auditing code
  • 10:24 - 10:27
    right so Postgres is a common SQL
  • 10:27 - 10:30
    database slapd is the OpenLDAP
  • 10:30 - 10:33
    implementation of the server and tinyldap
  • 10:33 - 10:35
    is mine and you see it's much faster
  • 10:35 - 10:37
    and much smaller
  • 10:39 - 10:41
    yeah so there was more to this
  • 10:41 - 10:44
    ad campaign I collected a few funny images
  • 10:45 - 10:49
    right so um if someone manages to
  • 10:49 - 10:52
    hack the blog CGI or whatever module
  • 10:52 - 10:55
    I use to connect the blog to the
  • 10:55 - 10:57
    web server they can open any file that
  • 10:57 - 11:00
    the blog can read right the UID can read
  • 11:00 - 11:03
    so I should probably do something
  • 11:03 - 11:06
    about that that was the next step and
  • 11:06 - 11:08
    the industry was starting to think about
  • 11:08 - 11:09
    vulnerability management
  • 11:11 - 11:13
    so there is a mechanism on Unix
  • 11:13 - 11:15
    on Linux I did a separate talk about that
  • 11:15 - 11:17
    on the last Congress
  • 11:17 - 11:19
    it's called seccomp and seccomp it's like
  • 11:19 - 11:21
    a firewall for sys calls so I can use
  • 11:21 - 11:24
    seccomp to block open the open syscall which
  • 11:24 - 11:27
    is used to open files but if I have
  • 11:27 - 11:29
    to use open myself
  • 11:29 - 11:32
    then I can't block it right so what
  • 11:32 - 11:33
    to do about that for example my blog
  • 11:33 - 11:36
    calls localtime which converts Unix
  • 11:36 - 11:38
    time into the local time zone and for
  • 11:38 - 11:40
    that it opens a file containing the
  • 11:40 - 11:44
    description of the system time zone
  • 11:44 - 11:47
    and that calls open right so if
  • 11:47 - 11:49
    I just disabled the open system call from
  • 11:49 - 11:51
    my blog then it couldn't do the time
  • 11:51 - 11:54
    translation and this is actually
  • 11:54 - 11:58
    an old problem that also applies to setuid
  • 11:58 - 12:00
    programs and has applied to them
  • 12:00 - 12:03
    for decades so what you can do is you
  • 12:03 - 12:06
    can reorganize your code so before you
  • 12:06 - 12:08
    block or before you drop privileges
  • 12:08 - 12:11
    generally speaking you do the open
  • 12:11 - 12:14
    calls in this example and
  • 12:14 - 12:17
    then you disable open and then you look
  • 12:17 - 12:19
    at the data provided by the attacker
  • 12:19 - 12:21
    because if the attacker or any untrusted
  • 12:21 - 12:24
    source is trying to hack you it is via
  • 12:24 - 12:26
    data it gives you right it's
  • 12:26 - 12:28
    the environment is compromised so you look
  • 12:28 - 12:30
    at what kind of uh elements in the
  • 12:30 - 12:32
    environment are attacker supplied and
  • 12:32 - 12:34
    before you look at a single byte in them
  • 12:34 - 12:36
    you do all the dangerous stuff if you can
  • 12:36 - 12:38
    right so in this case I call local
  • 12:38 - 12:42
    time once before I drop the open sys call
  • 12:42 - 12:45
    and then my libc will cache the
  • 12:45 - 12:48
    time zone data and the next time I call it
  • 12:48 - 12:50
    after I have looked at the attacker
  • 12:50 - 12:52
    supplied data there is no need to call
  • 12:52 - 12:54
    open right so that's a major advantage
  • 12:54 - 12:57
    of seccomp over similar technologies like
  • 12:57 - 13:03
    SELinux where all the prohibitions
  • 13:03 - 13:04
    on sys calls are
  • 13:04 - 13:07
    applied to the whole process so
  • 13:07 - 13:09
    this is an example and you should make
  • 13:09 - 13:10
    use of it you should look at your
  • 13:10 - 13:12
    process and you can see if you have the
  • 13:12 - 13:14
    source code at least you can see which
  • 13:14 - 13:16
    parts do I need to do before I can drop
  • 13:16 - 13:19
    privileges and you move them up right so
  • 13:19 - 13:20
    that's what I did
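    A minimal sketch of that reordering, assuming the filter is built with libseccomp (the talk only says seccomp is used, so the library choice and the function name are illustrative):

      /* Sketch: do the dangerous work first (the open() hidden inside
       * localtime()), then forbid open/openat before touching any
       * attacker-supplied input. Relies on the libc caching the time
       * zone data after the first call, as described in the talk. */
      #include <errno.h>
      #include <seccomp.h>   /* libseccomp; link with -lseccomp */
      #include <stdlib.h>
      #include <time.h>

      static void warm_up_then_lock_down(void) {
          time_t now = time(NULL);
          localtime(&now);    /* forces the tzdata file to be opened and cached */

          scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);  /* allow-by-default for brevity */
          if (!ctx) abort();
          /* From here on, any attempt to open a file fails with EPERM. */
          seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(open), 0);
          seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(openat), 0);
          if (seccomp_load(ctx) != 0) abort();
          seccomp_release(ctx);

          /* ...only now start parsing the request; later localtime()
             calls are served from the cached time zone data. */
      }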
  • 13:22 - 13:25
    this is actually a mockup from
  • 13:25 - 13:27
    the Estonian cyber security center
  • 13:29 - 13:30
    so this is real
  • 13:31 - 13:32
    okay so
  • 13:32 - 13:35
    next thought so let's
  • 13:35 - 13:38
    say someone hacks the blog module and
  • 13:38 - 13:40
    someone else uses the same module but
  • 13:40 - 13:43
    supplies a password right
  • 13:43 - 13:45
    this is a common problem
  • 13:45 - 13:47
    in websites there's some kind of login
  • 13:47 - 13:49
    something you get maybe a session token
  • 13:49 - 13:52
    or whatever and if someone manages to
  • 13:52 - 13:54
    take over the middleware
  • 13:54 - 13:56
    or like the server component
  • 13:56 - 13:59
    they can see all other connections too
  • 13:59 - 14:00
    if they are handled by the same
  • 14:00 - 14:03
    process right that's a major problem
  • 14:03 - 14:06
    and you can do something about it
  • 14:06 - 14:08
    so that's the good news here
  • 14:10 - 14:13
    and in my example it led to me using CGI
  • 14:13 - 14:16
    instead of fast CGI which
  • 14:16 - 14:18
    is a newer version of CGI
  • 14:18 - 14:21
    and the idea with fast CGI is that you
  • 14:21 - 14:24
    don't spawn a new process for every
  • 14:24 - 14:27
    request but you have like a Unix domain
  • 14:27 - 14:30
    socket or another socket to a fast CGI
  • 14:30 - 14:32
    process and that opens maybe a thread
  • 14:32 - 14:36
    per request or something but usually
  • 14:36 - 14:37
    in fast CGI you try to handle the
  • 14:37 - 14:39
    requests in the same process and then
  • 14:39 - 14:42
    you can use that process to cache data so
  • 14:42 - 14:45
    there's a perf advantage to using fast CGI
  • 14:45 - 14:47
    but for security reasons
  • 14:47 - 14:50
    I don't use fast CGI so I can't do
  • 14:50 - 14:53
    caching right so that's a major downside
  • 14:53 - 14:54
    and you would expect the blog to be
  • 14:54 - 14:57
    really really slow in the end so
  • 14:57 - 14:59
    first thing I need to use CGI instead of
  • 14:59 - 15:02
    fast CGI and secondly you could still
  • 15:02 - 15:05
    use debug APIs so if you use GDB or
  • 15:05 - 15:08
    another debugger to look at another
  • 15:08 - 15:10
    process they use an API called ptrace
  • 15:10 - 15:13
    but that's a sys call so I can use seccomp
  • 15:13 - 15:16
    to disallow ptrace if I do those two
  • 15:16 - 15:20
    and the attacker takes over a blog process
  • 15:20 - 15:23
    all they can see is the data they supply
  • 15:23 - 15:27
    themselves right that's a major advantage
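    The ptrace lockdown itself can be one more rule in the same kind of seccomp filter; a short sketch, again assuming libseccomp:

      /* Sketch: a per-request handler that may not debug its siblings.
       * With ptrace forbidden, a hijacked handler cannot attach to
       * another request's process and read its session or password. */
      #include <errno.h>
      #include <seccomp.h>
      #include <stdlib.h>

      static void forbid_ptrace(void) {
          scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
          if (!ctx) abort();
          seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(ptrace), 0);
          if (seccomp_load(ctx) != 0) abort();
          seccomp_release(ctx);
      }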
  • 15:28 - 15:30
    Okay so ENISA is actually an EU agency
  • 15:30 - 15:32
    which I find really disturbing
  • 15:32 - 15:33
    because they're burning lots of taxpayer
  • 15:33 - 15:38
    money anyway so let's assume the attacker
  • 15:38 - 15:41
    can hack my blog they can still circumvent
  • 15:41 - 15:43
    any access control I do in the blog
  • 15:43 - 15:46
    so for example if I have an admin site
  • 15:46 - 15:49
    or some login page as part of the website
  • 15:49 - 15:52
    and it's handled through the same program
  • 15:52 - 15:55
    and the access control is done in the blog
  • 15:55 - 15:57
    CGI and someone manages
  • 15:57 - 15:59
    to hack my blog CGI they could
  • 15:59 - 16:03
    just skip that so it's really hard
  • 16:03 - 16:06
    to do access restrictions that can be
  • 16:06 - 16:08
    circumvented if you do them in your own
  • 16:08 - 16:10
    code so the solution is not do it in
  • 16:10 - 16:13
    your own code I don't do any access
  • 16:13 - 16:16
    restriction in the blog I do it in the
  • 16:16 - 16:18
    LDAP server so if you connect to my blog
  • 16:18 - 16:21
    and supply a password then the blog
  • 16:21 - 16:22
    doesn't know if the password is
  • 16:22 - 16:24
    right or not for example
  • 16:24 - 16:26
    there's an interface where you can add
  • 16:26 - 16:28
    new blog entries or you can edit an old
  • 16:28 - 16:30
    one and for that you need to supply
  • 16:30 - 16:32
    credentials but the blog CGI doesn't know
  • 16:32 - 16:33
    if they are right or not it opens
  • 16:33 - 16:35
    the connections to the LDAP server with
  • 16:35 - 16:37
    that credential and then the LDAP server
  • 16:37 - 16:41
    says yes or no so since we removed
  • 16:41 - 16:44
    access to the ptrace syscall and the
  • 16:44 - 16:47
    processes are isolated from each other
  • 16:47 - 16:48
    that means there is nothing to
  • 16:48 - 16:50
    circumvent here so if someone hacks my
  • 16:50 - 16:53
    blog the only advantage they get is
  • 16:53 - 16:55
    they can do the exact same stuff they
  • 16:55 - 16:57
    could do before basically they can just
  • 16:57 - 16:58
    talk to the LDAP server
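    A sketch of that delegation using OpenLDAP's libldap client API; the talk uses its own LDAP client library, so the URI, the DN layout and the helper name here are made up for illustration:

      /* Sketch: the CGI never checks the password itself. It just tries
       * to bind with the caller-supplied credentials and lets the LDAP
       * server make the decision. */
      #include <ldap.h>      /* OpenLDAP client library; link with -lldap -llber */
      #include <string.h>

      static int credentials_ok(const char *user_dn, const char *password) {
          if (password == NULL || *password == '\0')
              return 0;      /* avoid the classic unauthenticated-bind trap */

          LDAP *ld = NULL;
          if (ldap_initialize(&ld, "ldap://127.0.0.1:389") != LDAP_SUCCESS)
              return 0;

          struct berval cred;
          cred.bv_val = (char *)password;
          cred.bv_len = strlen(password);

          /* Simple bind: success means the server accepted the credentials. */
          int rc = ldap_sasl_bind_s(ld, user_dn, LDAP_SASL_SIMPLE, &cred,
                                    NULL, NULL, NULL);
          ldap_unbind_ext_s(ld, NULL, NULL);
          return rc == LDAP_SUCCESS;
      }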
  • 17:00 - 17:01
    okay so I'm starting to get into
  • 17:01 - 17:04
    James Bond territory here right
  • 17:04 - 17:06
    with the attacks they're getting more
  • 17:06 - 17:09
    convoluted right so the industry started
  • 17:09 - 17:11
    doing threat intelligence feeds which
  • 17:11 - 17:13
    are useless don't spend money on those
  • 17:13 - 17:16
    okay so let's say the attacker hacked my
  • 17:16 - 17:19
    blog and then went to my tinyldap and now
  • 17:19 - 17:22
    is attacking tinyldap then they can
  • 17:22 - 17:24
    watch other logins because tinyldap
  • 17:24 - 17:27
    handles connections from other instances
  • 17:27 - 17:29
    of the blog too right so the same
  • 17:29 - 17:31
    problem we had before we just moved the
  • 17:31 - 17:33
    goal post a little and we need to
  • 17:33 - 17:36
    prevent this and the obvious solution
  • 17:36 - 17:38
    is to do the same thing we did
  • 17:38 - 17:41
    with the blog we have one process of
  • 17:41 - 17:45
    the LDAP server per request and then we
  • 17:45 - 17:49
    just disallow ptrace right so now you
  • 17:49 - 17:51
    can't watch even if you get code execution
  • 17:51 - 17:54
    inside the LDAP server you can't watch
  • 17:54 - 17:56
    what passwords other people use
  • 17:56 - 17:59
    you can still see okay the industry
  • 17:59 - 18:01
    does some [ __ ] again you can still see
  • 18:01 - 18:04
    the password in the LDAP store right so
  • 18:04 - 18:06
    the LDAP server has to have a version of
  • 18:06 - 18:08
    the password to authenticate against and
  • 18:08 - 18:11
    the industry best practice is to
  • 18:11 - 18:13
    use salted hashes so the password is
  • 18:13 - 18:14
    not actually in the store
  • 18:15 - 18:17
    still if someone manages to attack
  • 18:17 - 18:20
    tinyldap through the blog they can
  • 18:20 - 18:22
    extract the hashes and try to crack them
  • 18:22 - 18:25
    but since I'm the only one adding users
  • 18:25 - 18:28
    I can control the password complexity so
  • 18:28 - 18:30
    good luck brute forcing that right
  • 18:32 - 18:38
    okay so this is actually a real problem
  • 18:38 - 18:39
    not for my blog specifically
  • 18:39 - 18:42
    but for other web services or services
  • 18:42 - 18:43
    that are reachable from the internet
  • 18:43 - 18:45
    what if an attacker doesn't want to steal
  • 18:45 - 18:48
    my data but wants to encrypt it
  • 18:48 - 18:50
    so the ransomware what can you do
  • 18:50 - 18:54
    about that and my idea was to make
  • 18:54 - 18:56
    the data store read only so the
  • 18:56 - 18:58
    LDAP server has a data store that contains
  • 18:58 - 19:01
    all the blog entries and it's read only
  • 19:01 - 19:03
    to the LDAP process you can only read
  • 19:03 - 19:05
    from it and if you want to write to it
  • 19:05 - 19:08
    for example to add a new entry it gets
  • 19:08 - 19:10
    appended to a second file which I call the
  • 19:10 - 19:13
    journal so SQL databases have a similar
  • 19:13 - 19:16
    concept and they use it to roll back
  • 19:16 - 19:18
    transactions I can do the same thing
  • 19:18 - 19:19
    it's basically a log file
  • 19:19 - 19:23
    and that means all the differences from
  • 19:23 - 19:26
    the last time the store was created
  • 19:26 - 19:28
    the read only store all the differences
  • 19:28 - 19:30
    are sequentially in the log file
  • 19:30 - 19:33
    in the journal so that the performance
  • 19:33 - 19:35
    gets worse the bigger the journal gets
  • 19:35 - 19:37
    so every now and then I need to combine
  • 19:37 - 19:40
    the read only part and the journal
  • 19:40 - 19:42
    into a new bigger read only part and
  • 19:42 - 19:43
    I do that manually
  • 19:46 - 19:48
    because tinyldap couldn't do it because
  • 19:48 - 19:50
    I didn't allow tinyldap to write the store
  • 19:50 - 19:52
    right that was part of the security here
  • 19:53 - 19:57
    and so with seccomp I can just disable
  • 19:57 - 19:59
    sys calls I can also install filters so I
  • 19:59 - 20:01
    can say open is allowed but only if you
  • 20:01 - 20:03
    use O_APPEND O_APPEND in the open sys
  • 20:03 - 20:06
    call on Unix means every write you do to
  • 20:06 - 20:09
    this descriptor is automatically
  • 20:09 - 20:12
    added to the end so I know if someone
  • 20:12 - 20:16
    manages to access the tinyldap
  • 20:16 - 20:19
    binary and can write to my journal then
  • 20:19 - 20:21
    the only place the changes can show up
  • 20:21 - 20:23
    is at the end and that's actually a really
  • 20:23 - 20:25
    good thing to have because it means
  • 20:25 - 20:28
    if someone hacks me and adds junk to
  • 20:28 - 20:30
    my blog I just remove it at the end
  • 20:30 - 20:33
    and I'm good again compare that to a
  • 20:33 - 20:35
    usual SQL database if someone wrote
  • 20:35 - 20:38
    to the database you need to restore
  • 20:38 - 20:41
    a backup because
  • 20:41 - 20:43
    they could have changed anything anywhere
  • 20:43 - 20:45
    right so but tinyldap doesn't even have
  • 20:45 - 20:47
    file system level permissions to change
  • 20:47 - 20:49
    anything in the store so I can
  • 20:49 - 20:51
    sleep soundly
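    A sketch of that append-only setup in C, assuming libseccomp for the argument filter; the path is a placeholder and the read-only store is assumed to be opened or mmapped before the lockdown, as with the localtime trick earlier:

      /* Sketch: after lockdown, files can only be opened with O_APPEND,
       * so writes can only land at the end of the journal and the
       * read-only store cannot be reopened for writing. */
      #include <errno.h>
      #include <fcntl.h>
      #include <seccomp.h>
      #include <stdlib.h>
      #include <string.h>
      #include <unistd.h>

      static int open_journal_then_lock_down(void) {
          int fd = open("/srv/blog/journal", O_WRONLY | O_APPEND | O_CREAT, 0600);
          if (fd < 0) abort();

          scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
          if (!ctx) abort();
          /* Deny open/openat unless the flags argument has O_APPEND set. */
          seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(open), 1,
                           SCMP_A1(SCMP_CMP_MASKED_EQ, O_APPEND, 0));
          seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(openat), 1,
                           SCMP_A2(SCMP_CMP_MASKED_EQ, O_APPEND, 0));
          if (seccomp_load(ctx) != 0) abort();
          seccomp_release(ctx);
          return fd;
      }

      /* Every record ends up at the end of the file; vandalism is undone
       * by truncating the journal back to the last good record. */
      static void append_record(int fd, const char *record) {
          write(fd, record, strlen(record));
      }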
  • 20:52 - 20:54
    yeah the industry spent money on
  • 20:54 - 20:56
    cyber security mesh architecture
  • 20:57 - 20:59
    right so the journal integration has
  • 20:59 - 21:01
    to be done by me manually out of band
  • 21:01 - 21:04
    so it's not something an automated process
  • 21:04 - 21:06
    does I do it manually
  • 21:06 - 21:08
    and when I'm doing it
  • 21:08 - 21:10
    because it's not that much data it's
  • 21:10 - 21:12
    like for a week or two I can just read it
  • 21:12 - 21:15
    again and see if something doesn't look
  • 21:15 - 21:19
    right this may not be possible in all
  • 21:19 - 21:21
    other scenarios but you have to
  • 21:21 - 21:23
    realize if you have bigger data it's
  • 21:23 - 21:25
    usually not all the data that's big
  • 21:25 - 21:27
    most of it is usually static and read only
  • 21:27 - 21:30
    and then you have some logs that are
  • 21:30 - 21:33
    you know billing data that grows and grows
  • 21:33 - 21:35
    but usually there's part of the data and
  • 21:35 - 21:39
    this is the part with the you know
  • 21:39 - 21:42
    personally identifying information or
  • 21:42 - 21:46
    billing details that stuff is usually
  • 21:46 - 21:48
    small and mostly static and you could
  • 21:48 - 21:51
    use this strategy for that too
  • 21:53 - 21:57
    well yeah okay
  • 21:57 - 21:59
    so the attacker can still write garbage
  • 21:59 - 22:01
    to my blog that's still not good
  • 22:01 - 22:04
    right but since all they can do is append
  • 22:04 - 22:06
    to the journal I can use my text editor
  • 22:06 - 22:09
    open the journal and truncate at some
  • 22:09 - 22:11
    point and then I get all my data back
  • 22:11 - 22:14
    till the point where they start to [???]
  • 22:14 - 22:16
    the blog right this is still bad but
  • 22:16 - 22:19
    it's a very good position to be in
  • 22:19 - 22:21
    if there's an emergency because you
  • 22:21 - 22:24
    can basically investigate calmly first
  • 22:24 - 22:26
    you turn off write access then you
  • 22:26 - 22:29
    delete the vandalism and the journal
  • 22:29 - 22:33
    and you know you haven't lost anything
  • 22:33 - 22:35
    because if you want to delete an entry
  • 22:35 - 22:37
    in the blog you could do that too but
  • 22:37 - 22:39
    that means at the end of the journal you
  • 22:39 - 22:41
    append a statement saying delete this
  • 22:41 - 22:43
    record and I can just remove that and I
  • 22:43 - 22:46
    get the record back right so there's no
  • 22:46 - 22:49
    way for someone vandalizing my blog to
  • 22:49 - 22:51
    damage any data that was in it before
  • 22:51 - 22:54
    all they can do is append junk at the end
  • 22:54 - 22:56
    and I can live with that right
  • 22:56 - 22:58
    this should be the guiding thought
  • 22:58 - 23:01
    behind any security you do
  • 23:01 - 23:03
    if someone hacks you will be in a very
  • 23:03 - 23:05
    stressful position the boss will be
  • 23:05 - 23:08
    behind you breathing down your neck are
  • 23:08 - 23:10
    we done yet? is it fixed? and you want to
  • 23:10 - 23:12
    have as little to do as possible at that
  • 23:12 - 23:15
    time you want to move all the stress
  • 23:15 - 23:17
    to before you get hacked because then
  • 23:17 - 23:19
    you have more time
  • 23:20 - 23:22
    okay the industry did other things again
  • 23:25 - 23:28
    so what if the attacker doesn't write
  • 23:28 - 23:30
    garbage to the journal but writes some
  • 23:30 - 23:33
    exploit to the journal that the next
  • 23:33 - 23:35
    tinyldap instance that reads the
  • 23:35 - 23:38
    journal gets compromised by it
  • 23:39 - 23:43
    that is a possibility and that would be
  • 23:43 - 23:46
    bad so agreed that's still a problem
  • 23:46 - 23:50
    but realize how preposterous the scenario
  • 23:50 - 23:52
    is so we are talking about an attacker
  • 23:52 - 23:55
    who found a stable zero day in the blog
  • 23:55 - 23:57
    and then used that and another
  • 23:57 - 24:00
    stable zero day in tinyldap to write
  • 24:00 - 24:02
    to the journal and then have a
  • 24:03 - 24:06
    third zero day to compromise the journal
  • 24:06 - 24:09
    parsing code so I mean
  • 24:09 - 24:11
    yes it is still a problem but we reduced
  • 24:11 - 24:14
    the risk significantly
  • 24:14 - 24:15
    and that is what
  • 24:15 - 24:18
    I'm trying to tell you here
  • 24:18 - 24:21
    it's not all or nothing it's good enough
  • 24:21 - 24:24
    if you can halve the risk that's already
  • 24:24 - 24:26
    very important and you should do it
  • 24:26 - 24:31
    so the more of the risk you can slice off
  • 24:31 - 24:33
    the better off you will be
  • 24:33 - 24:34
    if something happens
  • 24:35 - 24:38
    right because the smaller the code is
  • 24:38 - 24:40
    that is still attackable the
  • 24:40 - 24:42
    more you can audit it and be sure it's
  • 24:42 - 24:44
    good you show it to your friends and
  • 24:44 - 24:47
    they can audit it too and you
  • 24:47 - 24:49
    need to save yourself that time because
  • 24:49 - 24:51
    it happens every now and then that I get
  • 24:51 - 24:53
    to see the whole code base and
  • 24:53 - 24:55
    the usual code base for commercial
  • 24:55 - 24:57
    products is like gigabytes of source code
  • 24:57 - 25:00
    nobody can read that like
  • 25:00 - 25:01
    I'm good I'm not that good
  • 25:03 - 25:05
    so this is a good place to be in
  • 25:05 - 25:08
    I think right so the industry was selling
  • 25:08 - 25:10
    DDOS mitigation sure whatever
  • 25:10 - 25:12
    so what happens if someone attacks
  • 25:12 - 25:15
    the web server that is still a big
  • 25:15 - 25:18
    problem and it's actually
  • 25:20 - 25:23
    it's full damage right
  • 25:23 - 25:24
    that's the worst that can happen if
  • 25:24 - 25:26
    someone manages to attack the web server
  • 25:26 - 25:28
    they can see all traffic coming through
  • 25:28 - 25:30
    they can look inside TLS secured
  • 25:30 - 25:32
    connections and they can sniff all the
  • 25:32 - 25:35
    passwords so that's really bad
  • 25:35 - 25:37
    unfortunately there is not too much
  • 25:37 - 25:39
    you can do about that
  • 25:41 - 25:44
    you could do a separation
  • 25:44 - 25:46
    so this is something people have been
  • 25:46 - 25:48
    talking about for a while OpenSSH is
  • 25:48 - 25:50
    doing this they moved the dangerous crypto
  • 25:50 - 25:52
    stuff in a second process and use
  • 25:52 - 25:54
    sandboxing to lock down that process
  • 25:54 - 25:56
    that could be done but nobody has done
  • 25:56 - 25:59
    it for OpenSSL yet so OpenSSL doesn't
  • 25:59 - 26:01
    support that my web server
  • 26:01 - 26:03
    also supports mbed TLS they don't
  • 26:03 - 26:05
    support that either so I could spend time
  • 26:05 - 26:07
    on that and I've been actually
  • 26:07 - 26:09
    spending some time already but
  • 26:09 - 26:11
    it's not ready yet but this would be a
  • 26:11 - 26:13
    good way to reduce the risk and you may
  • 26:13 - 26:16
    notice that the tools I'm using to
  • 26:16 - 26:18
    reduce risks are actually just a handful
  • 26:18 - 26:21
    it's not you know
  • 26:21 - 26:23
    witchcraft I'm not inventing new
  • 26:23 - 26:26
    ways to look at things I'm doing the
  • 26:26 - 26:28
    same thing again I'm identifying the
  • 26:28 - 26:30
    part of the code that's dangerous and
  • 26:30 - 26:33
    then I think about how I can make that
  • 26:33 - 26:35
    part smaller maybe put it in a different
  • 26:35 - 26:37
    process lock it down so we need to do
  • 26:37 - 26:39
    the same thing with the web server
  • 26:39 - 26:41
    obviously but it's an ongoing process
  • 26:43 - 26:47
    yeah so again whatever why
  • 26:47 - 26:49
    haven't I done that yet uh so in my
  • 26:49 - 26:51
    web server it's a build time
  • 26:51 - 26:53
    decision if you want SSL support or not
  • 26:53 - 26:55
    and you can see the binary is
  • 26:55 - 26:58
    significantly bigger if you have SSL
  • 26:58 - 27:00
    and I'm showing you this because it means
  • 27:00 - 27:02
    the bulk of the attack surface is the SSL
  • 27:02 - 27:05
    code it's not my code so if I can
  • 27:05 - 27:07
    put the SSL code in a different process
  • 27:07 - 27:11
    they still need to see the private key
  • 27:11 - 27:12
    because that's what TLS needs
  • 27:12 - 27:14
    the private key otherwise it can't
  • 27:14 - 27:16
    do the crypto so the bulk of the attack
  • 27:16 - 27:18
    surface would still have access to the
  • 27:18 - 27:20
    key I can still do it because there
  • 27:20 - 27:21
    might be bugs in my code and not the
  • 27:21 - 27:25
    SSL code but that's just 5% of
  • 27:25 - 27:27
    the overall attack surface so
  • 27:28 - 27:30
    I will probably do it at some point
  • 27:30 - 27:32
    but it's I don't expect miracles from it
  • 27:32 - 27:35
    bugs in OpenSSL will kill me
  • 27:35 - 27:37
    there's not much I can do about that
  • 27:40 - 27:41
    laughter
  • 27:42 - 27:44
    okay so I know what you're thinking
  • 27:44 - 27:47
    loud laughter
  • 27:48 - 27:51
    what about kernel bugs?
  • 27:51 - 27:52
    so I looked at a few of the recent
  • 27:52 - 27:55
    kernel bugs and it turns out that they
  • 27:55 - 27:57
    usually apply to sys calls that are rarely
  • 27:57 - 28:00
    used in regular programs and because
  • 28:00 - 28:02
    I'm blocking all the sys calls I don't
  • 28:02 - 28:04
    really need none of them apply to me
  • 28:04 - 28:07
    right and this is a pattern
  • 28:07 - 28:10
    with Kernel bugs
  • 28:10 - 28:12
    there is a project called Sandstorm
  • 28:13 - 28:17
    that also uses ptrace and seccomp tracing
  • 28:17 - 28:19
    to reduce the sys call
  • 28:19 - 28:22
    surface and then puts regular services
  • 28:22 - 28:25
    into a sandbox for web services and
  • 28:25 - 28:28
    they evaded all kinds of kernel bugs
  • 28:28 - 28:30
    just because of that so this is
  • 28:30 - 28:32
    like a zero effort thing because
  • 28:32 - 28:35
    obviously if you have a list of sys calls
  • 28:35 - 28:36
    you use a whitelist and you
  • 28:36 - 28:38
    have a list of things that are
  • 28:38 - 28:40
    explicitly allowed and the rest is disabled
  • 28:40 - 28:42
    not the other way around right
  • 28:42 - 28:44
    so none of the usual Kernel bugs apply
  • 28:44 - 28:47
    to me because of the seccomp stuff
  • 28:47 - 28:49
    I already do so Kernel bugs aren't as big
  • 28:49 - 28:52
    of a problem as you might think at least
  • 28:52 - 28:54
    I still have them if I haven't patched
  • 28:54 - 28:56
    but you can't get to them via the blog
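    A sketch of such a whitelist with libseccomp; the allow list below is invented for illustration, a real one is derived from what the program actually needs:

      /* Sketch: deny by default, then allow only the handful of syscalls
       * the request handler really uses. Everything else, including the
       * obscure syscalls most kernel bugs live in, kills the process. */
      #include <seccomp.h>
      #include <stdlib.h>

      static void install_whitelist(void) {
          scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL_PROCESS);  /* default: kill */
          if (!ctx) abort();

          const int allowed[] = {
              SCMP_SYS(read),  SCMP_SYS(write), SCMP_SYS(close),
              SCMP_SYS(fstat), SCMP_SYS(mmap),  SCMP_SYS(munmap),
              SCMP_SYS(brk),   SCMP_SYS(exit_group),
          };
          for (size_t i = 0; i < sizeof(allowed) / sizeof(allowed[0]); i++)
              seccomp_rule_add(ctx, SCMP_ACT_ALLOW, allowed[i], 0);

          if (seccomp_load(ctx) != 0) abort();
          seccomp_release(ctx);
      }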
  • 28:57 - 29:00
    so I have a small confession to make
  • 29:00 - 29:02
    I'm a bit of a troll and that applies
  • 29:02 - 29:05
    to this project as well so I used the
  • 29:05 - 29:10
    worst programming language I used C right
  • 29:10 - 29:12
    so I'm trolling the security people
  • 29:12 - 29:14
    and then I'm trolling the Java people
  • 29:14 - 29:15
    who have been saying you should use
  • 29:15 - 29:17
    multi-threading for performance and not
  • 29:17 - 29:19
    have one process per request
  • 29:19 - 29:21
    so I'm doing actually two fork and exec
  • 29:21 - 29:22
    per request
  • 29:23 - 29:25
    I'm trolling the database people
  • 29:25 - 29:26
    I don't have any caching
  • 29:26 - 29:28
    I don't have connection pools
  • 29:28 - 29:30
    and the perf people too because I'm
  • 29:30 - 29:32
    still faster than most of the regular
  • 29:32 - 29:35
    solutions so there's really
  • 29:35 - 29:37
    no downside if you architect your
  • 29:37 - 29:39
    software to use this kind of thing
  • 29:39 - 29:42
    it will be slower than other ways to do it
  • 29:42 - 29:44
    but most other software isn't as fast
  • 29:44 - 29:47
    anyway so there's enough headroom that
  • 29:47 - 29:50
    you can use to do security instead of
  • 29:50 - 29:52
    performance you will still be faster
  • 29:53 - 29:56
    so let's recap the methodology I used
  • 29:57 - 30:00
    first I make a list of all the attacks
  • 30:00 - 30:01
    I can think of and this means
  • 30:01 - 30:03
    concrete attacks so what could happen
  • 30:03 - 30:05
    and what would
  • 30:05 - 30:07
    be the problem then right and then
  • 30:07 - 30:09
    I think for every item on the list
  • 30:09 - 30:11
    I consider how to prevent this
  • 30:11 - 30:14
    can I prevent this? what do I need to do
  • 30:14 - 30:16
    and then I do it right so that's easy
  • 30:16 - 30:18
    it's like the Feynman problem solving
  • 30:18 - 30:20
    algorithm in spirit and this
  • 30:20 - 30:23
    process is called threat modeling it's
  • 30:23 - 30:25
    like a dirty word because it
  • 30:25 - 30:27
    sounds like there's effort involved and
  • 30:27 - 30:29
    nobody wants to do it but it's really
  • 30:29 - 30:31
    it's easy it's just these steps
  • 30:31 - 30:33
    you look at your software you
  • 30:33 - 30:35
    consider all the ways it could be attacked
  • 30:35 - 30:36
    and then you consider what you
  • 30:36 - 30:38
    could do to prevent the attack or in
  • 30:38 - 30:40
    some cases you can't prevent the attack
  • 30:40 - 30:43
    and then you say well that's a risk I have
    to live with
  • 30:43 - 30:44
    right so that's called threat modeling
  • 30:44 - 30:46
    you should try it it's awesome
  • 30:48 - 30:50
    and you saw that I'm trying
  • 30:50 - 30:52
    to optimize something here I go for a
  • 30:52 - 30:55
    specific target in this case I want
  • 30:55 - 30:57
    as little code as possible
  • 30:58 - 31:00
    the more code there is the more bugs
  • 31:00 - 31:02
    there will be that's a very old
  • 31:02 - 31:05
    insight I think it was originally
  • 31:05 - 31:07
    an IBM study and they basically found
  • 31:07 - 31:09
    that the number of bugs in code is a
  • 31:09 - 31:11
    function of the lines of code in the code
  • 31:11 - 31:13
    so there's a little more to it but
  • 31:13 - 31:15
    basically it's true and it's not just
  • 31:15 - 31:17
    any code I want to have less of
  • 31:18 - 31:20
    if the code is dangerous I particularly
  • 31:20 - 31:22
    want to have less of it and the most
  • 31:22 - 31:25
    important category to make smaller is
  • 31:25 - 31:27
    the code that enforces security
  • 31:27 - 31:29
    guarantees so like one security
  • 31:29 - 31:31
    guarantee would be you can't log in
  • 31:31 - 31:34
    if you don't have the right password right
  • 31:34 - 31:36
    so the code that checks that I want it to
  • 31:36 - 31:38
    be as small as possible one or two
  • 31:38 - 31:41
    lines of code if I can manage it and
  • 31:41 - 31:43
    then it's obvious if it's wrong or
  • 31:43 - 31:45
    not the more complex the code is the
  • 31:45 - 31:48
    less easy it is to see if
  • 31:48 - 31:49
    it's correct or not and that's what you
  • 31:49 - 31:51
    want in the end you want to be sure the
  • 31:51 - 31:53
    code is correct so how far did I get
  • 31:53 - 31:55
    it's actually pretty amazing I think
  • 31:55 - 31:58
    you can write an LDAP server in 5000 lines
  • 31:58 - 32:03
    of code the blog is 3500 lines of code
  • 32:03 - 32:05
    plus the LDAP client library
  • 32:05 - 32:06
    plus zlib
  • 32:07 - 32:09
    but I'm only using zlib to compress not to
  • 32:09 - 32:11
    decompress so most attack scenarios
  • 32:11 - 32:14
    don't apply to my usage of zlib
  • 32:14 - 32:17
    and the web server is also pretty small
  • 32:17 - 32:18
    if you only look at the HTTP code
  • 32:18 - 32:21
    unfortunately it also contains the
  • 32:21 - 32:24
    SSL Library which is orders of magnitude
  • 32:24 - 32:26
    more than my code and that's how you
  • 32:26 - 32:28
    want it you want the biggest risk not to
  • 32:28 - 32:31
    be in the new code but in old code
  • 32:32 - 32:35
    that someone else already audited if you
  • 32:35 - 32:36
    can manage it right so this is the
  • 32:36 - 32:39
    optimization strategy try to have as
  • 32:39 - 32:41
    little dangerous code as possible sounds
  • 32:41 - 32:43
    like a no-brainer but if you look at
  • 32:43 - 32:45
    modern software development you will
  • 32:45 - 32:47
    find out they do the exact opposite pull
  • 32:47 - 32:49
    in as many frameworks as they can
  • 32:51 - 32:52
    so this strategy is called
  • 32:52 - 32:55
    TCB minimization you should try it and
  • 32:55 - 32:57
    I gave a talk about it already it's
  • 32:57 - 32:59
    actually pretty easy so
  • 33:00 - 33:03
    I told you what I did to the
  • 33:03 - 33:04
    blog to
  • 33:05 - 33:08
    diminish the danger that can be done
  • 33:08 - 33:10
    if someone manages to take it over and
  • 33:10 - 33:12
    this is actually part of the
  • 33:12 - 33:15
    TCB minimization process so the blog was a
  • 33:15 - 33:18
    high risk area and then I took away
  • 33:18 - 33:21
    privileges and removed access checks and
  • 33:21 - 33:24
    in the end even if I give you remote
  • 33:24 - 33:26
    code execution in the blog process you
  • 33:26 - 33:28
    can't do anything you couldn't do before
  • 33:28 - 33:31
    right so it's no longer part of the TCB
  • 33:31 - 33:33
    the TCB is the part that enforces
  • 33:33 - 33:35
    security guarantees which the blog CGI
  • 33:35 - 33:37
    doesn't anymore
  • 33:38 - 33:39
    so that's what you want to do
  • 33:39 - 33:41
    you want to end up in the smallest TCB
  • 33:41 - 33:44
    you can possibly manage and every
  • 33:44 - 33:47
    step on the way is good so no step is
  • 33:47 - 33:49
    too small right if you can shave off
  • 33:49 - 33:51
    even a little routine do it
  • 33:53 - 33:55
    this is the minimization part of TCB
  • 33:55 - 33:57
    minimization right I was able to
  • 33:57 - 34:00
    remove the blog from the TCB tinyldap
  • 34:00 - 34:03
    still has a risk so you saw
  • 34:03 - 34:05
    the threat model if someone manages to
  • 34:05 - 34:07
    take over tinyldap they can read the
  • 34:07 - 34:09
    hashes and try to crack them that's
  • 34:09 - 34:12
    still bad but I can live with it right
  • 34:12 - 34:15
    if they vandalize the blog I can undo
  • 34:15 - 34:17
    the damage without going to the
  • 34:17 - 34:19
    tape library so that's good
  • 34:20 - 34:22
    if you compare that to the industry
  • 34:22 - 34:25
    standard you will find that my approach
  • 34:25 - 34:27
    is much better usually in
  • 34:27 - 34:29
    the industry you see platform decisions
  • 34:29 - 34:31
    done by management not by the techies
  • 34:31 - 34:33
    and it's untroubled by expertise or
  • 34:33 - 34:35
    risk analysis and you get a
  • 34:35 - 34:38
    diffusion of responsibility because if
  • 34:38 - 34:40
    you try to find out who's
  • 34:40 - 34:42
    responsible for anything you find
  • 34:42 - 34:44
    well it's that team over there but we
  • 34:44 - 34:45
    don't really know and then you find out
  • 34:45 - 34:47
    the team dissolved last week and it's
  • 34:47 - 34:50
    really horrible and brand new we have
  • 34:50 - 34:52
    AI tools which is also a diffusion of
  • 34:52 - 34:54
    responsibility
  • 34:56 - 34:57
    and then you get people
  • 34:57 - 34:59
    arguing well it's so bad it can't get
  • 34:59 - 35:01
    any worse let's go to the cloud where
  • 35:01 - 35:02
    obviously it gets worse
  • 35:02 - 35:06
    immediately so I prefer my way
  • 35:07 - 35:08
    I think in the end it's important to
  • 35:08 - 35:11
    realize that the the lack of security
  • 35:11 - 35:13
    you may have in your projects right now
  • 35:13 - 35:16
    is self-imposed there is no guy with a
  • 35:16 - 35:18
    shotgun behind you
  • 35:18 - 35:20
    threatening you, you can do it you just have
  • 35:20 - 35:24
    to start right so this is self-imposed
  • 35:24 - 35:25
    helplessness you can actually help
  • 35:25 - 35:27
    yourself you just have to start
  • 35:29 - 35:32
    right how did we get here this is
  • 35:32 - 35:34
    obviously not a good place to be
  • 35:34 - 35:36
    like all the software is crappy and
  • 35:36 - 35:38
    there's a few it's not just that people
  • 35:38 - 35:40
    are dumb there's a few reasons for that
  • 35:40 - 35:43
    so back in the day you used to have
  • 35:43 - 35:45
    bespoke applications that were written
  • 35:45 - 35:48
    for a specific purpose and they used the
  • 35:48 - 35:50
    waterfall model and you had the
  • 35:50 - 35:52
    requirements specification and it was
  • 35:52 - 35:55
    lots of bureaucracy and really horrible
  • 35:55 - 35:58
    but it also meant that you knew what
  • 35:58 - 36:00
    the application had to be able to
  • 36:00 - 36:03
    do so that means you can make sure
  • 36:03 - 36:06
    anything else is forbidden if you know
  • 36:06 - 36:08
    what the application needs to be able to
  • 36:08 - 36:10
    do you can make sure it doesn't do any
  • 36:10 - 36:12
    other stuff and that is security if you
  • 36:12 - 36:15
    think about it deny everything that the
  • 36:15 - 36:17
    application wasn't supposed to be doing
  • 36:17 - 36:19
    and then that's what an attacker would
  • 36:19 - 36:21
    do if they take over the machine right
  • 36:22 - 36:24
    so if you know beforehand what you're
  • 36:24 - 36:26
    trying to get to you can actually
  • 36:26 - 36:29
    implement least privilege even architecturally
  • 36:29 - 36:30
    as I've shown you
  • 36:31 - 36:33
    now we have more of an IKEA model
  • 36:33 - 36:36
    you buy parts that are designed by
  • 36:36 - 36:38
    their own teams and the teams designing
  • 36:38 - 36:39
    the parts don't know what the final
  • 36:39 - 36:42
    product will look like right in some
  • 36:42 - 36:44
    cases even you don't know what the final
  • 36:44 - 36:46
    product will look like but it's even
  • 36:46 - 36:48
    worse if you consider that the
  • 36:48 - 36:50
    team building the part you make your
  • 36:50 - 36:52
    software from doesn't know what it will
  • 36:52 - 36:54
    be used for so it has to be as generic
  • 36:54 - 36:56
    as possible right the more it can be
  • 36:56 - 36:58
    done with it the better and that's
  • 36:58 - 37:01
    the opposite of security right security
  • 37:01 - 37:03
    means understanding what you need to do
  • 37:03 - 37:05
    and then disallowing the rest and this
  • 37:05 - 37:09
    means be as generic as you can the parts
  • 37:09 - 37:11
    are optimized for genericity what's the
  • 37:11 - 37:16
    name genericism I don't know so they are
  • 37:15 - 37:18
    optimized to be as flexible as possible
  • 37:18 - 37:20
    and they are chosen by flexibility
  • 37:22 - 37:24
    the developer of the part usually
  • 37:24 - 37:26
    has no idea what it will be used for
  • 37:26 - 37:27
    and that means you can't do least
  • 37:27 - 37:31
    privilege because you don't know what
  • 37:31 - 37:34
    the least privilege will be so
  • 37:34 - 37:36
    this is actually a big mess so if
  • 37:36 - 37:38
    you use parts programmed by other people
  • 37:38 - 37:40
    you will have to invest extra effort to
  • 37:40 - 37:43
    find out what kind of stuff you can make
  • 37:43 - 37:45
    it not do because it will definitely be
  • 37:45 - 37:47
    able to do more than you need and the
  • 37:47 - 37:50
    more you can clamp down the more
  • 37:50 - 37:52
    security you will have it's even
  • 37:52 - 37:54
    worse if you do agile development
  • 37:54 - 37:55
    because then by definition you don't
  • 37:55 - 37:57
    know what the end result will be so
  • 37:58 - 38:00
    if you don't know that you can't do
  • 38:00 - 38:01
    security lockdown
  • 38:02 - 38:03
    so another argument why we got
  • 38:03 - 38:06
    here is economics of scale so it used to
  • 38:06 - 38:08
    be that if you build some kind of device
  • 38:08 - 38:10
    that needs to do something like I don't
  • 38:10 - 38:13
    know microwave
  • 38:14 - 38:17
    then you you find parts and
  • 38:17 - 38:19
    you combine the parts and you solder
  • 38:19 - 38:21
    them together and then they solve the
  • 38:21 - 38:24
    problem but these days you don't
  • 38:24 - 38:27
    solder parts anymore you assemble from
  • 38:27 - 38:29
    pre-made parts and these are usually
  • 38:29 - 38:32
    programmable right so a little ARM chip
  • 38:32 - 38:35
    cost like a tenth of a cent so why use
  • 38:35 - 38:37
    a special part if you can use an ARM chip
  • 38:37 - 38:39
    and then program it but that means
  • 38:39 - 38:41
    you still need to use software that
  • 38:41 - 38:43
    actually solves the problem the hardware
  • 38:43 - 38:45
    is generic and that means the hardware
  • 38:45 - 38:47
    can be hacked and this is turning out to
  • 38:47 - 38:50
    be a problem right if you had a brake
  • 38:50 - 38:53
    20 years ago you knew it braked right
  • 38:53 - 38:55
    but now it's programmable
  • 38:55 - 38:57
    and people haven't realized
  • 38:57 - 38:59
    how bad that is but it is bad right so
  • 38:59 - 39:00
    that will bite us in the
  • 39:00 - 39:03
    ass oops
  • 39:03 - 39:06
    so the response from the industry
  • 39:06 - 39:08
    has so far been the ostrich method
  • 39:08 - 39:11
    basically we install stuff that we know
  • 39:11 - 39:13
    is untrustworthy and so we
  • 39:13 - 39:15
    install other stuff on top of it that's
  • 39:15 - 39:18
    also untrustworthy and then we call it
  • 39:18 - 39:20
    telemetry or big data and do some risk
  • 39:20 - 39:24
    logging analysis in [???] or whatever
  • 39:25 - 39:27
    and in the end the attack surface
  • 39:27 - 39:30
    has mushroomed like a nuclear explosion
  • 39:30 - 39:32
    right so that's our fault
  • 39:32 - 39:34
    nobody has forced us to do this you
  • 39:34 - 39:36
    don't need to do this in your own
  • 39:36 - 39:39
    projects that's the hopeful message of
  • 39:39 - 39:41
    this talk in conclusion if you remember
  • 39:41 - 39:43
    nothing else from this talk remember
  • 39:43 - 39:45
    that threat modeling is a thing and you
  • 39:45 - 39:46
    should try it TCB minimization actually
  • 39:46 - 39:49
    helps least privilege is another facet
  • 39:49 - 39:52
    of the same thing and if you can use
  • 39:52 - 39:54
    append only data storage you should
  • 39:54 - 39:55
    consider it
  • 39:55 - 39:57
    - blockchain
    - yeah not a blockchain
  • 39:57 - 39:58
    append only data storage
  • 39:58 - 39:59
    it's not blockchain
  • 39:59 - 40:01
    laughter
  • 40:01 - 40:12
    applause
  • 40:12 - 40:13
    - two more two more
  • 40:13 - 40:14
    - two more slides
  • 40:14 - 40:15
    - yeah two more slides
  • 40:15 - 40:16
    - sorry I'm an imposter
  • 40:16 - 40:17
    - no problem
  • 40:17 - 40:18
    so the rule of thumb
  • 40:18 - 40:20
    should be if the blog of some
  • 40:20 - 40:23
    unwashed hobbyist from the Internet is
  • 40:23 - 40:26
    more secure than your IT security then
  • 40:26 - 40:28
    you should improve your IT security
  • 40:28 - 40:30
    right that shouldn't happen
  • 40:31 - 40:34
    all right so that's all from my
  • 40:34 - 40:35
    talk I think we still have time for
  • 40:35 - 40:38
    questions do we? yes okay awesome okay
  • 40:38 - 40:40
    now you can put your hands together
  • 40:40 - 40:48
    applause
  • 40:48 - 40:50
    so if you want to ask a question
  • 40:50 - 40:52
    we have four microphones in the room
  • 40:52 - 40:56
    1 2 3 4 and I'm going to take a
  • 40:56 - 40:58
    question the first question from
  • 40:58 - 41:00
    the internet the internet is saying you
  • 41:00 - 41:02
    actually got hacked can you elaborate
  • 41:02 - 41:04
    on what happened?
  • 41:04 - 41:06
    yes actually there was an
  • 41:06 - 41:07
    incident where someone was able to post
  • 41:07 - 41:09
    stuff to my blog and because I had append
  • 41:09 - 41:13
    only data storage I shrugged it off
  • 41:13 - 41:15
    basically so use append only data storage
  • 41:15 - 41:17
    it will save your ass at some point
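
(To make the append-only point concrete: a rough C sketch of the access
pattern, assuming a Linux system; the path is a placeholder and this is an
illustration of the idea, not the actual blog implementation.)

    /* Append-only storage sketch: the untrusted request handler should only
     * ever hold a descriptor like this one, opened with O_APPEND, and should
     * lack the rights to reopen or truncate the file - otherwise
     * "append only" is just a convention. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        /* With O_APPEND every write() goes atomically to the current end of
         * the file, so existing records are never overwritten in place. */
        int fd = open("/var/spool/blog/posts.log",
                      O_WRONLY | O_APPEND | O_CREAT, 0600);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        char record[256];
        int n = snprintf(record, sizeof(record),
                         "%lld new post: hello world\n",
                         (long long)time(NULL));
        if (n <= 0 || write(fd, record, (size_t)n) != (ssize_t)n) {
            perror("write");
            close(fd);
            return 1;
        }

        close(fd);
        return 0;
    }

A hijacked writer can still append bogus records, but it cannot silently
rewrite or delete the history that is already there - which is the property
that made the incident above easy to shrug off.
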
  • 41:17 - 41:19
    the problem was a bug in my
  • 41:19 - 41:21
    access control lists I had used some
  • 41:21 - 41:24
    access control list in my LDAP server
  • 41:24 - 41:26
    and I had a line in it that
  • 41:26 - 41:28
    I should have removed but I forgot to
  • 41:28 - 41:30
    remove it and that meant you could post
  • 41:30 - 41:33
    without having credentials but it
  • 41:33 - 41:35
    happened and it wasn't bad because my
  • 41:35 - 41:38
    architecture prevented damage as
  • 41:38 - 41:40
    people are leaving the room could you
  • 41:40 - 41:43
    leave very quietly thank you
  • 41:43 - 41:44
    microphone number one
  • 41:44 - 41:46
    - yeah is there a secure alternative
  • 41:46 - 41:48
    for Windows and MacOS?
  • 41:48 - 41:50
    - secure alternative well so
  • 41:50 - 41:53
    basically the principles
  • 41:53 - 41:56
    I showed in this talk you can apply to
  • 41:56 - 42:00
    those two so usually you will not be
  • 42:00 - 42:02
    hacked because your MacOS or
  • 42:02 - 42:05
    Windows had a bug that happens too but
  • 42:05 - 42:07
    the bigger problem is that the software
  • 42:07 - 42:09
    you wrote had a bug or that the
  • 42:09 - 42:12
    software that you use had a bug so
  • 42:12 - 42:14
    I'm trying to tell you Linux isn't
  • 42:14 - 42:17
    particularly more secure than Windows
  • 42:17 - 42:19
    it's just that basically you can write
  • 42:19 - 42:21
    secure software and insecure software on
  • 42:21 - 42:23
    any operating system you should still
  • 42:23 - 42:25
    use Linux because it has advantages but
  • 42:25 - 42:27
    if you apply these techniques to
  • 42:27 - 42:29
    your software it will be secure on
  • 42:29 - 42:32
    MacOS and Windows as well right so this
  • 42:32 - 42:34
    is not for end users selecting the
  • 42:34 - 42:36
    software if you select software you have
  • 42:36 - 42:38
    to trust the vendor
  • 42:38 - 42:40
    there's no way around that but if
  • 42:40 - 42:42
    you write your own software then you can
  • 42:42 - 42:44
    reduce the risk to a point where you can
  • 42:44 - 42:46
    live with it and sleep soundly
  • 42:46 - 42:49
    - sure is there a technical alternative
  • 42:49 - 42:51
    or something similar like seccomp for
  • 42:51 - 42:53
    Windows and MacOS so can you drop your
  • 42:53 - 42:55
    privileges after you have opened a file
  • 42:55 - 42:56
    for example
  • 42:56 - 42:59
    - so for MacOS I'm not sure but I know
  • 42:59 - 43:02
    that FreeBSD NetBSD and OpenBSD have an
  • 43:02 - 43:05
    equivalent thing I think MacOS has it too
  • 43:05 - 43:08
    but I'm not sure about that for Windows
  • 43:08 - 43:10
    there are sandboxing methods you can
  • 43:10 - 43:12
    look at the Chrome source code for
  • 43:12 - 43:14
    example they have a Sandbox it's open
  • 43:14 - 43:16
    source you can use that to do this kind
  • 43:16 - 43:17
    of thing
  • 43:17 - 43:18
    - okay thanks
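
(A minimal Linux-only sketch of the pattern the question is about - open the
file first, then lock the process down - using seccomp strict mode, which
afterwards permits only read, write, _exit and sigreturn; the input path is a
placeholder. The BSD and Windows mechanisms mentioned in the answer work
differently.)

    /* Open a file while still unrestricted, then enter seccomp strict mode. */
    #include <fcntl.h>
    #include <linux/seccomp.h>
    #include <stdio.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/etc/hostname", O_RDONLY);   /* placeholder input */
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* From here on the kernel kills the process on any syscall other
         * than read, write, _exit or sigreturn - even another open(). */
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
            perror("prctl");
            return 1;
        }

        char buf[4096];
        ssize_t n;
        while ((n = read(fd, buf, sizeof(buf))) > 0)
            write(STDOUT_FILENO, buf, (size_t)n);

        /* Use the raw exit syscall: the normal libc exit path may issue
         * syscalls (e.g. exit_group) that strict mode does not allow. */
        syscall(SYS_exit, 0);
        return 0;   /* not reached */
    }

For anything less drastic than strict mode you would install a BPF filter
(SECCOMP_MODE_FILTER) that allow-lists exactly the syscalls the worker needs.
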
  • 43:18 - 43:21
    - so microphone number two except down
  • 43:21 - 43:22
    that's gone so I might go with
  • 43:22 - 43:24
    mic number three in that case
  • 43:25 - 43:28
    sorry four four yes
  • 43:28 - 43:29
    - will your next talk be about writing
  • 43:29 - 43:32
    secure software in Windows and
  • 43:32 - 43:33
    if not how many assets would you
  • 43:33 - 43:35
    request to compensate for all the pain?
  • 43:36 - 43:38
    - no
  • 43:38 - 43:39
    laughter
  • 43:39 - 43:41
    it's not a question of money
  • 43:41 - 43:43
    laughter
  • 43:43 - 43:45
    - okay microphone one
  • 43:45 - 43:47
    - have you tried removing unnecessary
  • 43:47 - 43:49
    features from openSSL?
  • 43:50 - 43:52
    - yes actually I've done this
  • 43:52 - 43:55
    pretty early but it's still
  • 43:55 - 43:57
    much bigger than my code
  • 43:57 - 44:00
    so for example openSSL has support for
  • 44:00 - 44:03
    UDP based TLS but there's a lot of
  • 44:03 - 44:06
    shared ciphers in there you can remove
  • 44:06 - 44:07
    ciphers you don't need and that
  • 44:07 - 44:09
    helps a bit but it's still the
  • 44:09 - 44:12
    biggest part of the web server by far
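
(Complementary to stripping OpenSSL at build time, which is what the answer
describes, you can at least pin down at run time what a server context will
negotiate; a rough sketch against OpenSSL 1.1.1 or later - the cipher strings
are examples, not a recommendation.)

    /* Restrict an OpenSSL server context to a small, explicit allow-list. */
    #include <openssl/ssl.h>

    SSL_CTX *make_server_ctx(void) {
        SSL_CTX *ctx = SSL_CTX_new(TLS_server_method());
        if (ctx == NULL)
            return NULL;

        /* Refuse everything older than TLS 1.2. */
        if (SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION) != 1)
            goto fail;

        /* TLS 1.2 ciphers: a short allow-list instead of the defaults. */
        if (SSL_CTX_set_cipher_list(ctx,
                "ECDHE-ECDSA-AES128-GCM-SHA256:"
                "ECDHE-RSA-AES128-GCM-SHA256") != 1)
            goto fail;

        /* TLS 1.3 cipher suites are configured separately. */
        if (SSL_CTX_set_ciphersuites(ctx,
                "TLS_AES_128_GCM_SHA256:TLS_CHACHA20_POLY1305_SHA256") != 1)
            goto fail;

        /* Switch off features a simple web server does not need. */
        SSL_CTX_set_options(ctx, SSL_OP_NO_COMPRESSION | SSL_OP_NO_RENEGOTIATION);
        return ctx;

    fail:
        SSL_CTX_free(ctx);
        return NULL;
    }

None of this shrinks the amount of OpenSSL code that is linked in - that is
the point about it still being the biggest part of the web server - it only
narrows what a peer can negotiate.
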
  • 44:12 - 44:14
    - I think there was an internet question
  • 44:14 - 44:17
    was there no doesn't look like it
  • 44:19 - 44:21
    no yes no no yes okay
  • 44:21 - 44:24
    then microphone four
  • 44:24 - 44:27
    - as someone who is connected or
  • 44:27 - 44:30
    was connected to an industry which has
  • 44:30 - 44:32
    programmable brakes
  • 44:35 - 44:38
    what is your opinion about things like
  • 44:38 - 44:39
    like Misra?
  • 44:40 - 44:42
    - well well so there are standards
  • 44:42 - 44:44
    in the automotive industry for example
  • 44:44 - 44:47
    like Misra to make sure you write better
  • 44:47 - 44:50
    code and it's mostly compliance
  • 44:50 - 44:51
    so they give you rules like
  • 44:51 - 44:54
    you shouldn't use recursion in your code
  • 44:54 - 44:55
    for example and
  • 44:55 - 44:57
    the functions should be this big
  • 44:57 - 44:59
    at most and this is more I mean it
  • 44:59 - 45:01
    will probably help a bit but it's much
  • 45:01 - 45:03
    better to invest in good
  • 45:03 - 45:05
    architecture but you may have noticed
  • 45:05 - 45:09
    I've said I wrote the code in C and
  • 45:09 - 45:11
    I said nothing about what I did to make
  • 45:11 - 45:14
    sure it's good code so that's
  • 45:14 - 45:15
    a different dimension that's
  • 45:15 - 45:17
    orthogonal right
  • 45:17 - 45:21
    so follow those standards it
  • 45:21 - 45:22
    will make your code a bit better
  • 45:22 - 45:25
    probably but it won't solve all the
  • 45:25 - 45:27
    problems and I think personally you
  • 45:27 - 45:29
    should do both you should make sure or
  • 45:29 - 45:31
    try to make sure that there's as few
  • 45:31 - 45:33
    bugs as possible in your code there's
  • 45:33 - 45:34
    ways to do that I had a talk about that
  • 45:34 - 45:36
    too but after you do that you should
  • 45:36 - 45:37
    still have these kind of
  • 45:37 - 45:40
    architectural guard rails that
  • 45:40 - 45:42
    keep you on track even if someone
  • 45:42 - 45:44
    manages to take over the process
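
(A toy illustration of the kind of rule MISRA-style standards impose - the
"no recursion" example mentioned above. The rule buys a predictable stack
bound, not a secure architecture, which is the distinction the answer draws.)

    #include <stdint.h>

    /* Recursive version: stack depth grows with n, which is exactly what
     * rules like "no recursion" are meant to rule out in safety code. */
    uint32_t sum_recursive(uint32_t n) {
        if (n == 0u)
            return 0u;
        return n + sum_recursive(n - 1u);
    }

    /* Iterative version: the same result with constant, easily bounded
     * stack usage. */
    uint32_t sum_iterative(uint32_t n) {
        uint32_t total = 0u;
        uint32_t i;
        for (i = 1u; i <= n; i++)
            total += i;
        return total;
    }
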
  • 45:45 - 45:47
    - so now I think there was an internet
  • 45:47 - 45:48
    question
  • 45:48 - 45:50
    - yes the internet is asking
  • 45:50 - 45:54
    how would it work to scale this
  • 45:54 - 45:55
    truly impressive security architecture up
  • 45:55 - 45:59
    for more use cases and a
  • 45:59 - 46:01
    larger team or would the team size and
  • 46:01 - 46:03
    the feature creep ruin it
  • 46:03 - 46:04
    - yes so
  • 46:05 - 46:06
    hello hello
  • 46:07 - 46:08
    - oh no
  • 46:08 - 46:11
    laughter
  • 46:12 - 46:14
    - well I'm sorry
  • 46:14 - 46:20
    applause
  • 46:20 - 46:39
    postroll music
Title:
37C3 - Writing secure software
Description:

Video Language:
English
Duration:
46:39
