I know that I'm standing between you and, uh, whatever Wednesday night beer or dinner is.
Actually, this will be a pretty short talk,
but more than anything I would like to hear from you, so...
My name is Arseny.
I uh I do not work for Dell
haha by the way haha
I do not work for Dell
I'm a solutions architect, and I worked on some Monetary Authority of Singapore, uh,
technology risk guidelines before, many years ago.
They were related to VMware, and I co-authored a comparison
to help the industry basically leverage the best practices
and map them: if there's a regulatory requirement, how to answer it;
if there's a security team within a big financial organisation giving you a hard time, how to answer their questions.
So I basically want to do the same for containers, for Docker.
I'll basically share a digest of the open materials that I found
using the Google search engine,
but I really want to hear from you, because I believe there's a very different crowd today,
and I really want to hear what you do,
what you do for security in your organisation.
To start with I'm not a security expert
So don't worry, it's fine
I'm not a security expert and never have been.
And I don't want to boil the ocean
It's really not something I will be able to do in 20 minutes
So forget about it
So 2 questions:
Who has any test-driven security pipelines in their production environment today?
Does anyone do test-driven security today?
Cool
Who has... OK, before even going to SAT, this acronym from financial markets:
who works for an insurer or some regulated enterprise
that runs Docker in production?
1...2...ok
Got it
That's good for me, because I cannot screw up
that significantly, since everything I say is just based on my experience and whatnot.
So, uh, I want to just start with a problem statement.
Remember the good old days when we had a very clear demarcation line between
where the developers are and where the operations people are?
The dependencies here: this is where the gray line starts to appear with containerisation.
So, who actually keeps an eye on all the dependencies for the containers?
Within one container you may have a particular version of Python; within another container running on the same host you may have, like, uh, [inaudible] of other dependencies.
So, who has to work on that?
So, in an ideal containerised world it looks like this: you have an application with all the prerequisites, all the dependencies installed, and everything
is beautiful
But then, all of a sudden, your guys call you and they say:
hey, AC, we forgot, we actually have a newer release,
we just migrated to Java, we want to migrate to Java, can you just make sure that the image we are building on,
the production security image, is now running on Java? We no longer need all these dependencies.
We need it, like, in 5 minutes.
Well, it's the DevOps world, so you do what you do:
you install the needed prerequisites, update the image of the container
send it back to the rebuild pipeline, everything works, fine, nothing breaks
But then they wake up and they say, like, oh, by the way, we needed Java 8, we forgot.
It's actually optimised for Java 8, can you do the same but for Java 8?
And you do that, and in the end, the risk with all these images is that
you end up with an enormous amount of, uh, configuration drift.
So you may have, like, ports open and something listening on those ports,
and maybe not even patched to the proper levels.
And this is how configuration drift looked for one particular image.
It's on [inaudible].io, and there's, like, an nginx image that has been open and downloaded, like, a thousand times,
and it has been used in multiple projects, as you can imagine,
across, uh, the universe, across the internet, and there were, like,
264 medium-level vulnerabilities and 126 high-level vulnerabilities,
just because everyone patched it a little bit here,
patched it a little bit there, updated the image, and the latest became this huge drift
from, like, where it should have been.
Right? So...
And actually, this is a very interesting find that I came across.
Look at this: information security job postings tend to go down...
and all these other jobs tend to go up!
So the problem, I guess, will only be amplified in the future.
So if we look at the DevOps factory
I'll try to put it in here
This is the usual process: it's a mix of a logical diagram and a process flow that leads from the development of a particular business requirement, from business partners and business analysts,
all the way into production, on the operations side, the green side.
Do you see anything on this diagram, related to security at all?
This is DockerCon, this is, uh, 2017; it's actually from the DockerCon 2017 keynote, from MetLife's presentation.
So, a couple of months ago, and nothing here says anything about security.
It does mention, OK, we have a CI/CD tool, and magically we do push/pull, tag and image scanning; no details.
So I guess this is an area where we need to at least put a bookmark for ourselves: we need to research it,
because not too much material or best practice is currently available in an easily digestible format
on how to do that stuff...
I actually included one marketing slide later...
But again, it doesn't actually give you all the best practices for how to actually do stuff.
So here's the new reality. There's a new term, DevSecOps: development, security and operations.
The new reality is that we have completely decentralised ownership of deployment, so we need to get concerned about security as early as we can.
We should break it down into easily digestible tasks all the way through the pipeline,
and make sure there's no configuration drift in the image, for example,
and make sure that the injection points are designed together with the developers, so they understand the concerns,
so they don't come rushing to you asking for Java 8
and then provoke this configuration drift and hence give you a lot of vulnerabilities.
At a one-on-one level, how secure is Docker?
And is it the kind of conversation that we may have with the security guys just over coffee in a big firm?
Or in a Starbucks, if you see someone?
I went to CVE Details and looked up the currently open Docker vulnerabilities.
So, for the vendor called Docker, there are currently 14 vulnerabilities,
some still pending from 2016, not yet patched. It's not too bad, actually.
14 CVEs, eh!
Just to give you an anchor point: what about Kubernetes? Kubernetes actually has 3.
So there are 3 vulnerabilities currently known to the global population of our planet that are related to Kubernetes.
And to give you another anchor point: our favourite product from the very old days, ESXi, currently has 31 vulnerabilities.
So the answer is: it's quite secure, just by nature.
Docker and Kubernetes are well built; they are not really vulnerable as they are.
How about running in the cloud?
There's a report, and again I'll share all the materials and a digest of the links with you: when you run the CloudFormation deployment template that Docker provides with Docker Cloud,
there are 5 issues in total, and 2 of them are low-risk.
Basically, they are over-provisioned CloudFormation IAM roles,
with just a little bit more permissions than needed.
I would say: brilliant, actually.
Same for Azure: there was independent research that was published recently, and there are only 3 issues related to rolling out Docker on top of Azure.
Bear in mind, those issues again relate more to preferences than to real problems.
Now, what has to happen? We used to run apps, and again I apologise, this is just a digest of multiple different slides, right?
We used to run apps through development into user acceptance testing, and then we would send them to the security acceptance tests, which were a mandatory part of the process in the old, legacy world.
So, in a bank for example, the security acceptance team would take, like, a couple of days, then they would call you and say "Dude, this is not working, we are not passing the scan..." It used to be, like, a waterfall approach.
They looked for everything: the security acceptance team would scrutinise all the code that you push into production; they would go and check every particular piece of source code as well, crawl it for customer data, crawl it for stored credentials, whatnot.
That was acceptable when we did not have to do releases 5 times an hour. It's not really what we can do these days
with DevOps. So the new analysis for vulnerabilities, the new analysis for security, has to be a bottom-up one.
We need to identify the specific classes per application: what are the most likely vulnerable areas in this app, or in this microservice, or in that microservice?
We need to care about just those, and we need to push hard to eliminate these problems early, before they actually hit any last-minute security tests;
we need to push for that in the development pipeline as much as we can,
and that would be the only way to support the velocity of security.
Some tidbits: there are really good security best practices published by Docker themselves.
They talk about the usual things, and when I say usual things, I mean the things that the regulator, the things that your security officer, will care about.
Centralised logging: yes!
[inaudible] reference architecture: yes, please do something with CloudWatch.
A lot of cloud deployments are in AWS; please use CloudTrail for the API traces as well. In GCP everything is centralised quite differently, with Stackdriver.
Do content signing if you use a repository; there's a service within Docker called Notary that allows you to create hashed, some kind of, snapshots of the code
of a particular revision, so that there's no tampering, no man-in-the-middle substitution.
All these best practices are covered; there's a special, very nice document out there.
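The Notary signing mentioned above is exposed through Docker Content Trust; a minimal sketch, with a hypothetical registry name:

```shell
# Enable Docker Content Trust: pushes are signed via Notary and pulls
# verify the signature, so an unsigned or tampered tag is rejected.
export DOCKER_CONTENT_TRUST=1

# Pull only succeeds if the tag has a valid Notary signature:
docker pull alpine:3.6

# Push signs the tag on the way out (repository name is hypothetical):
docker push registry.example.com/my-app:1.0
```

Both commands need a running Docker daemon; the point is that verification is switched on per shell, not baked into the image.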
Now, there's also something for the host itself that runs Docker, and that is something that could be incorporated into the security practices within your firm.
There's an audit tool that kind of comes as part of an intrusion prevention scanner,
so you run a small container on the worker node itself,
and as you can see, it maps in /var/run and /etc,
polls the usual Linux sockets and figures out: is the host underneath, running the containers, configured properly?
Is it configured as per the best practices, against the known, latest vulnerabilities?
This is part of Docker; this is the audit tool. You run it as one command line, and it produces a report like this.
Is your Docker current?
Are the Docker audit files in place?
Are you logging into the right folders?
Do you use the proper logging level: is it, like, exceptions only, or everything that you do?
etc, etc, etc
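The one-command audit described above matches the open-source Docker Bench for Security; a sketch of how it's typically launched (the exact set of mounted volumes may vary by version):

```shell
# Docker Bench for Security audits the host and daemon against the CIS
# Docker Benchmark. It runs as a container on the worker node, with the
# host's socket and config mounted in read-only:
docker run --rm -it --net host --pid host \
  --cap-add audit_control \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /etc:/etc:ro \
  docker/docker-bench-security
# The report lists PASS/WARN items: daemon version, audit rules, logging, etc.
```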
Now, there's also a separate one, and again I seek your input; maybe you can try [inaudible] more tools around it.
There's a separate project that can do pretty much the same thing, but both on-prem and in the cloud,
and it's called Clair; it's developed by the CoreOS team.
What it does is pull the vulnerability updates into a small data store and then run a per-layer analysis of your Docker image.
If you run it on your version N-2 it takes, like, an hour,
but since you've done it on the layer that was N-2, on N-1 it only scans the incremental changes on that layer, so you don't have to redo the entire Docker scan; you significantly reduce the time
for the vulnerability scan, unless there's something not neat.
Take a look at this Clair tool; it's incorporated into [inaudible] providers.
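One common way to wire Clair into a pipeline is sketched below, using the third-party `arminc` images and the `clair-scanner` client; the image names and flags are assumptions and may differ between versions:

```shell
# Start a Postgres store pre-seeded with vulnerability data, then Clair itself:
docker run -d --name clair-db arminc/clair-db:latest
docker run -d --name clair --link clair-db:postgres \
  -p 6060:6060 arminc/clair-local-scan:latest

# Scan an image layer by layer; on rebuilds only changed layers are re-analysed:
clair-scanner --clair=http://localhost:6060 --ip=<host-ip> nginx:latest
```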
Security artefacts. This is a pain point, but, I mean, you probably heard about the ransomware for MongoDB
that recently hit the entire planet?
People did not set passwords at all,
but the other problem is, when you do set passwords, you hardcode them.
So the best practice is to avoid hardcoding and to pass the credentials, pass the security artefacts, through environment variables.
You do that, for example, by setting a -e variable that is then available for your app to pull,
so you literally pass something into the container from the outer world when you run it,
and then you take it and use it. That is a very simple best practice,
but for some reason people, especially the developer guys, tend to forget it when they push apps, and there are so many bitcoins paid as ransom,
because someone commits the code to public GitHub and it's all there.
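The -e pattern described above looks like this; the image and variable names are hypothetical:

```shell
# Pass the secret in from the outside world at run time instead of baking
# it into the image or committing it to the repository:
docker run --rm -e DB_PASSWORD="$DB_PASSWORD" my-app:latest

# Inside the container, the app just reads its environment, e.g. in Python:
#   import os
#   password = os.environ["DB_PASSWORD"]
```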
When I was looking at a lot of resources, there are, like, some attack-vector analyses; it takes like 15 minutes to completely break in,
especially through Jenkins; I have a slide on that.
It's insane: Jenkins allows you to crack it through the console, even if it's really well password-protected.
Anyhow, please pass them through environment variables. If you are using AWS EC2 Container Service, you can also pass them in the CloudFormation template:
define them, and use the CloudFormation script to retrieve them from some secure vault or from another source
into CloudFormation, so that even the CloudFormation template that then builds a lot of Docker containers on AWS will not have them hard-coded;
you just script around getting them from some other location.
And you can retrieve them from the instance metadata: if you are running on EC2 or Google Cloud, you can just curl the instance metadata from within the app, at bootstrap,
and then retrieve the tokens that you need to access other services.
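The instance-metadata approach on EC2 looks roughly like this; the role name is hypothetical, and GCP exposes a similar endpoint that requires a `Metadata-Flavor: Google` header:

```shell
# From inside the instance (or a container on it), fetch temporary role
# credentials at bootstrap; nothing is hard-coded in the image:
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/my-app-role
# Returns a JSON document with AccessKeyId, SecretAccessKey and a Token
# that expires and rotates automatically.
```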
Another tip: the previous speaker had it on one of their slides. There's a very interesting startup called Cilium that does an additional layer of tapping into the network at the packet level.
They insert themselves before Docker's virtualised adapters, adding themselves almost like a kernel module, and they allow packet filtering for the containers running on the worker node, defined in BPF format.
The BPF system is already adopted by Docker; for example, it allows and denies particular syscalls, so from within a Docker container there are, by default, some syscalls that you cannot run at all.
There are 44 banned syscalls: you cannot run mount or process trace, you cannot reboot the worker node from within the container.
So Docker itself already uses the BPF framework, but then there is this Cilium company, and this is their process architecture
that allows you to do packet filtering even at L7, on the path level, with very complex rules to kill packets, to deny access, to deny traffic to a particular application.
It runs as an agent on a Docker node; they also have a project where they insert themselves into the Kubernetes kubelet, and they have monitoring and special CLI hooks.
They configure almost like iptables routing, sorry, iptables filtering, at the packet level, and again it's very low-latency; it inserts itself at the kernel level.
Very interesting, quite a sophisticated startup; there's a very good session about it, about an hour long.
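The default syscall filtering mentioned above can be seen on any Docker host; a sketch, with a hypothetical custom profile path:

```shell
# Under Docker's default seccomp profile (plus dropped capabilities),
# privileged syscalls such as mount(2) are denied inside the container:
docker run --rm alpine sh -c 'mount -t tmpfs none /mnt'   # permission denied

# A stricter or looser profile can be supplied explicitly:
docker run --rm --security-opt seccomp=./my-profile.json alpine sh

# Or seccomp can be disabled entirely (don't do this in production):
docker run --rm --security-opt seccomp=unconfined alpine sh
```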
The marketing slides from Docker
Docker promotes... anyone from Docker here?
There's none from Dell... none from Docker...
The guy from Puppet left...
No vendors?
This is the marketing slide we discussed; it looks like everything runs out of the box. I cannot say for sure, I'm not a user of the Enterprise Edition of Docker...
But they really made a significant effort to add aspects like RBAC, a trusted registry, your on-site, completely secured, key-managed source for container images
that avoids MITM attacks substituting your app's code.
There's a lot of effort put in place to meet compliance requirements out of the box with Docker Enterprise Edition.
You can basically label a particular container or control group and allow particular users that are LDAP-authenticated, not just the local user
that runs things, but a particular LDAP user, to take some actions against control groups of containers
or the hosts themselves; it's very definable.
There's nothing like that in Kubernetes; there are similar things in the cloud, but again, not as granular.
The best practices for hardening Linux containers: reduce your attack surface. Very simple, right?
Remember the configuration-drift story that we started with...
Lots of ports, and who knows what they are doing there.
I tend to isolate not only along development, staging, user-acceptance and production, but also along risk, so try to compartmentalise.
This is best practice; if anyone has worked at a bank, there are business criticalities, going from BC5, the top-most one, like, do-or-die, go fix it, down to BC1, like, OK, let's leave it; if it's down for a week, no one will care.
Do this along risk, do this along exposure, and then try to scrutinise the security one by one.
Create, like, a structured tree of decisions: what's my most business-critical asset? What are the exposures? What are the attack vectors?
Apply and enable all security-relevant configuration options;
if you already pay for Docker Enterprise Edition, or if you already have something enabled by default when you install it, do not tamper with it.
Keep everything up to date,
like the gentleman from Puppet said.
Regularly test security recommendations 1-4.
Everything you do, put in an Outlook calendar.
It's absolutely normal to do: put an Outlook calendar reminder once every 2 months, and just go and sanity-check: did we change anything? Did we miss anything?
Remember, complexity breeds insecurity.
The more complex, the more sophisticated your set of scripts is, the higher the probability that there will be something hardcoded or something that your dev team will inherit from you,
and it will increase the attack-vector exposure.
Two very cool talks that I've seen:
one was at Enigma, from a guy who works for Mozilla.
They push a lot of releases; it's pretty much a weekly stable.
Of course they use the pipeline; they define the baseline for their security: what should be our central, focal point for every release?
If they do add-ons... it's much more effort to control add-ons for Firefox, because it's a community thing.
A developer submits an add-on, and it still has to go through the same pipeline, because it's part of the dependency release for the core browser.
They write tests and insert them into the CI/CD pipeline.
The security baseline for them, for the Mozilla browser, looks something like this.
They define scores: what would be the build score for Jenkins to use?
Some tests may fail, some have to pass; they define the baseline.
They write tests using ZAP; I think it's Zed Attack Proxy.
One of their focal points is ZAP, and ZAP has an integration plugin for Jenkins.
It was released not too long ago.
It's quite new; there are only 15 installs.
They had a separate plugin for Jenkins and then moved to a completely new project in Jenkins plugins.
It's now used in about 400 installs in the world, which is not bad for security.
The idea behind it is that you are able to define a ZAP scan,
I'm not sure if I have a slide on it
The idea is here: you can define a site. Is it going to be, like, an all-headers scan? Let's try to exploit all possible web-server vulnerabilities,
let's crunch it with a lot of header-related blocks and see whether it crashes or not.
Some client sessions: you can test your particular app from different paths.
It orchestrates your HTTP or HTTPS scans, and it can simulate a total attack on conventional web servers.
After that they just hook it into the Jenkins pipeline, and as you see, the scan takes about, like, 30 seconds.
So it's not a really long scan if you run it on a beefed-up container,
and it's a container itself; it only takes, like, half a minute in the entire build process.
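The kind of ZAP scan they hook into Jenkins can be reproduced with ZAP's own container; a sketch, where the target URL is hypothetical and flags may vary by ZAP version:

```shell
# Run the ZAP baseline scan as a container step in the pipeline; it spiders
# the target, passively scans it, and exits non-zero on policy violations,
# which fails the Jenkins build:
docker run --rm -t owasp/zap2docker-stable zap-baseline.py \
  -t https://staging.example.com -r zap-report.html
```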
There's a big set of different libraries that you can tie into the Jenkins testing; they test different stacks, e.g. Node.js, Rails, retire.js.
All these projects are there for you to test your app in your preferred IDE,
to test it against the known, current vulnerabilities; there's a full list of them.
If you google "awesome devsecops" you'll be able to find this list.
Once you've established the criticality, the risk to your applications, start with threat modelling: is data going to end up in the wrong hands?
Or is it going to be an accidental compromise, or a brute-force attack?
Then, based on this logic, expand it. If it's going to be a human failure, like someone losing their laptop...
Losing a laptop gets special attention here.
Remember this magic ~/. ?
Do not lose laptops, or use encryption on the SSD.
If you lose the laptop, there could be a lot of things in your ~.
Focus on code as early as possible; ensure that SHA-256 is used, not MD5.
Ensure that code repositories are private, not public, especially if you hardcode security artefacts into them.
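The SHA-256-over-MD5 point is easy to check from any shell; the digest below is the standard SHA-256 of the string "hello":

```shell
# MD5 has practical collision attacks; use SHA-256 for integrity checks.
printf 'hello' | sha256sum
# -> 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824  -
```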
Make sure that the orchestration involves all the testing we've discussed
there are many other cool things that I wanted to share
This is something that I tried today, at 3pm today, on Shodan: I looked at the X-Hudson HTTP response header,
and I came up with 5 Jenkins instances that are completely unprotected with a password.
Like, totally!
I logged into one of them, and it runs on AWS.
Guys actually run it in production; there was, like, an "Arun Kumar daily" job.
I looked him up on LinkedIn; he actually works for TATA Consultancy.
I never thought it would be that easy!
Please make sure all these aspects are reviewed at least twice a year,
or every 2 months: return and sanity-check what the security exposure is.
Employees: who loves Gist?
Make sure at least it's not Google-crawlable, OK?
Slack tokens are there, AWS creds, as we discussed...
One good piece of advice: Jenkins has to be segmented. If you run it on AWS, it has to be behind a network ACL, segmented to only allow access from a known set of hosts.
The core take-away from the Jay Kumar example:
everything should be authenticated, all across.
I'll write it all up in terms of links and everything; the short link is bit.ly/sg-docket-security.
I spent about 15-20 hours watching these videos and going through these PDFs; there's a lot of cool material. Reach out to me on LinkedIn if you have any ideas; I'll be happy to learn from you!
It's a big area, we are not experts, let's move on together!
There's a DevSecOps group in Singapore...
They also did a conference in Feb.
Am I in the wrong room?
That's the way I structure my thoughts.
Any questions?
[inaudible]
vulnez.com
website looks brilliant
[inaudible]
they have an API
Outlook notification doesn't scale
Scan packages...
Let me correct myself: once in 2 months is for reviewing the process,
not for actually checking it; that has to be every build.
vulnhub
Thanks for that. And the second one you proposed is SourceClear, right?
Thanks for these
What's the name of the project?
black duck
What sort of applications, what sort of language do you use it for?
Java, allright
I love it
There's twistlock
These companies are more likely to be startups; are they not long-term?
Twistlock and Aqua
All these websites they are brilliant
Any other questions?
Thank you!