Containers remain unfamiliar territory for many security professionals today. Ed Moyle, General Manager & Chief Content Officer at Prelude Institute, talks us through some benefits of containerization and how a security professional can best enable the positive while minimizing potential pitfalls.
If you’re a cyber security (information security) professional, you know that assessing risk for a new technology is hard. In fact, it’s arguably harder (more time-consuming, and requiring more information) to make a risk decision about a technology than it is to learn to use that technology in the first place.
For example, consider the act of driving a car. What do you need to know to safely operate a motor vehicle? Driving laws, certainly, but also the vehicle’s instrument panel and controls, operational processes like steering, and so on.
Now consider what you’d need to know to tell if a particular vehicle is safe to operate. In addition to the rules of the road, you’d also need to evaluate a host of information like the maintenance history of the vehicle, the weather conditions, safety features like airbags/seatbelts, the route to be driven, and numerous other pieces of information.
Since no business ever starts by discussing technology usage plans with the security team (most security pros find out about a new technology when the organization is already neck-deep in it), security pros not only need to evaluate more information to determine whether a technology is safe to operate, but have comparatively less time to do it than someone deciding when, how, and where to employ it.
Many security professionals are therefore in a state of “catch-up” when it comes to application containerization (technologies like Docker and rkt): they’re trying to rapidly assimilate the security model, understand the proposed usage for their environment, and evaluate how (or if) their overall security profile is impacted. This is tricky for a few reasons.
First, there’s the natural dynamic outlined above, which requires security practitioners to research and evaluate more information than their technologist peers in order to judge safe operation.
Second, it’s made difficult by important, though seemingly arcane, elements of the container security model that differ from more familiar technologies like OS virtualization (e.g. the nuances of how the segmentation boundary is enforced).
Third, there’s the presence of recently discovered vulnerabilities like the Kubernetes API issue (CVE-2018-1002105) and the recent issue in runc (CVE-2019-5736). Issues like these, while mitigable, can muddy the waters and erode confidence in the security model of containers and their supporting orchestration services.
Because of all this, many security pros are understandably a little apprehensive about jumping headlong onto the container bandwagon.
For example, they sometimes focus exclusively on the potential pitfalls of unguarded container-engine use outlined in the recent NIST guidance on containers (SP 800-190: Application Container Security Guide): namespace isolation, use in multi-tenant situations, inter-container segmentation enforcement, malicious containers, and so on.
As with many things, though, there can be a corresponding security benefit when containers are used appropriately and judiciously. In fact, if you know where to look, and if the security team is involved in the architectural discussions associated with container deployment, there is a very real opportunity to improve the security posture of the organization through the use of application containers.
There are a few dimensions across which potential improvements can occur.
First, containers can help enforce segmentation between application components and their underlying supporting components. Unlike a virtual OS instance or, indeed, a “bare metal” OS deployment, a new container’s default state exposes zero services: ports remain closed until and unless they are explicitly published.
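As an illustrative sketch (the service names and images here are hypothetical), a Docker Compose file makes this explicit: a container’s port reaches the outside world only when it appears under `ports:`.

```yaml
# docker-compose.yml -- hypothetical two-service deployment
services:
  api:
    image: example/api:1.0
    ports:
      - "8443:8443"   # explicitly published: reachable from the host
  db:
    image: postgres:15
    # no "ports:" entry: the database stays closed to the host and
    # is reachable only by other services on the same compose network
```

Nothing is opened by accident; every externally reachable service is visible at a glance in the deployment definition, which is itself useful for review.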
Likewise, containers have value to the developer because they make application dependencies (middleware, supporting components, and libraries) more portable. This can be valuable for the security team too, as it makes application deployments more modular, and thereby potentially easier to manage and track.
Since supporting elements are packaged together with the application elements that require them, they can be tracked and organized more logically, and phased out more easily when they are no longer required.
That modularity can also serve to minimize an attacker’s ability to move laterally through the environment in the event of application compromise.
For example, an attacker who compromises an application is confined to the compromised container and denied direct access to other resources. Can they still attack other elements of the environment from there? Potentially. But the additional layer requires them to invest more time, thereby increasing the likelihood of discovery.
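One way to impose that extra layer, sketched here with hypothetical Docker Compose service names, is to place components on separate networks so that a compromised front end has no direct route to the data tier.

```yaml
# docker-compose.yml -- segmenting tiers with user-defined networks
services:
  web:
    image: example/web:1.0
    networks: [frontend]          # no path to the backend network
  api:
    image: example/api:1.0
    networks: [frontend, backend] # the only bridge between tiers
  db:
    image: postgres:15
    networks: [backend]           # not directly reachable from "web"
networks:
  frontend:
  backend:
```

An attacker landing in the `web` container cannot address the database at all; they must first pivot through `api`, which is exactly the kind of extra work that buys defenders detection time.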
Benefits can become more pronounced once we start to account for supporting security features of production container engines and the associated orchestration ecosystem.
For example, many container runtimes support signing of container images so that untrusted or modified images are detected and prevented from running. Orchestration platforms like Kubernetes can employ role-based access control (RBAC) to restrict users to just the actions they need.
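A minimal Kubernetes RBAC sketch (the namespace, role, and user names are hypothetical) might grant a developer read-only access to pods in one namespace and nothing more:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: web-app        # permissions are scoped to this namespace
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]   # read-only: no create or delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: web-app
subjects:
- kind: User
  name: jane                # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because the role is scoped to a namespace and an explicit verb list, a compromised or careless account in that role cannot modify workloads or read secrets elsewhere in the cluster.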
Moreover, features for rapid provisioning and deprovisioning of containers can make it easier to tear down application components with less overhead, thereby preventing the build-up of misconfigurations, backup and temporary files/directories, one-off configuration tweaks and other byproducts of long-lived production systems.
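In Kubernetes terms, one way to lean into that ephemerality (again a sketch, with hypothetical names and images) is to treat containers as disposable: changes ship as a new image rather than as in-place edits, and a read-only root filesystem discourages drift from accumulating in the first place.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: example/api:1.0          # updates roll out as new images;
                                        # old containers are torn down
        securityContext:
          readOnlyRootFilesystem: true  # blocks one-off tweaks and
                                        # temp-file build-up inside the pod
```

(Some applications need scratch space; in that case a dedicated writable volume can be mounted, which keeps the writable surface explicit and reviewable.)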
Taking this forward
How does a practically-minded security professional best enable the positive while minimizing potential pitfalls? There are a few things that we can do to support this.
First and most importantly, it’s critical (in my opinion) that security pros invest the time to become educated about the container ecosystem: understand how containers work, their security model, and what features (container signing, SELinux/AppArmor, the capabilities of the orchestration platform) are available for you to use.
Being educated not only positions you to make informed decisions about the risk/reward elements of your organization’s use of containers, but also helps you know how to best employ the features of these tools to support that usage.
The second thing you can do is to be an active participant in the architecture and design of how your organization will use containers. This is particularly important not only so that you are apprised of decisions as they’re made, but also to help you extend the existing mechanisms you have in place for securing application deployments into the container landscape.
For example, if you’re using application threat modeling to evaluate and support application deployment, participating in the architecture and deployment conversations can give you a chance to adapt those models to a container landscape.