Moving Containers: Uncovering Challenges. Finding Solutions.
Posted by: GuidePoint Security
In this blog post, we recap the high points of our August 30, 2022 conversation with AWS and Lacework. We talk about cloud migrations, application modernization, and applying security to modernized applications such as distributed architectures. We hope you find the information valuable. Let’s begin…
First, let’s examine the cloud adoption journey. This is the journey that our customers typically take when they’re either migrating or building cloud native applications.
Based on the experience of millions of business customers, AWS sees organizations moving to the cloud in four stages. We call the first stage the “project phase”. In this stage, companies are typically experimenting with new projects, running workloads on AWS, and gauging and evaluating the cloud, generally on a project-by-project basis.
The second stage is the “foundation phase”. This phase is where organizations begin to define the core constructs such as network connectivity and identity and access management. They are able to see the benefits of the cloud and incorporate essential components such as observability platforms and security.
The third stage is focused more on migration. Companies begin to scale their adoption of the AWS cloud and grow their cloud footprint. They will typically develop an application modernization strategy at this point, figuring out which applications will be migrated and how they’ll be modernized, whether that means containers or serverless, and on which platform, such as Kubernetes, Amazon ECS, or Amazon EKS.
The fourth and last stage is often referred to as the “reinvent” or “cloud operations optimization” stage. This is where organizations begin taking full advantage of the cloud, accelerating time to market with the technologies they were figuring out in stage three: serverless models, microservices, containerization, and other cloud native technologies and architectures.
Now that we have identified the four stages of the cloud adoption model and hopefully you feel more comfortable with it, the question is… how do you begin this journey? Traditionally, it starts with understanding your IT inventory: what do you have, what can you transition to the cloud, and what can you retire? If an application is no longer valuable to the business, it may be necessary to remove it altogether.
Once you have clear insight into your applications and have determined which ones are better suited for the cloud, you must decide how to move them. “Rehosting” typically means moving an application as-is to a more modern environment for cost savings or performance gains; what we call a lift-and-shift motion. It eases operations and gives you the minimum viable components of your cloud presence.
“Replatforming” is when you modify the infrastructure the application uses. An example is taking an existing component of an application, such as a self-managed database, and moving it to a managed service. There are no drastic changes to the underlying business logic.
“Refactoring” is when you’re rearchitecting an application to take advantage of modern cloud services and architectures. As you move to distributed architectures through refactoring, you need speed. You need the agility of DevOps, and you want to optimize your software delivery mechanisms to include automated security and compliance testing, both of which are necessary to put code into production in a safe and compliant way.
“Repurchasing” is typically what we think of as drop and shop. You’re taking applications that you’ve used and replacing them with SaaS applications.
Why is this important?
In the world of DevOps, we marry security and compliance and come up with DevSecOps. Organizations want to make sure that in the build portion of the pipeline, as well as in the operating or runtime environment, you’re catching vulnerabilities, compliance violations, and misconfigurations before they become critical issues.
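As a concrete illustration of a build-time gate, here is a minimal sketch in Python that shells out to the open-source Trivy scanner and fails the pipeline when critical vulnerabilities are found. The image name and the critical-only threshold are illustrative assumptions, not a prescribed configuration.

```python
import json
import subprocess
import sys

IMAGE = "myapp:latest"  # hypothetical image name for illustration

def scan_image(image: str) -> dict:
    """Run a Trivy scan and return the parsed JSON report."""
    result = subprocess.run(
        ["trivy", "image", "--format", "json", image],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

def critical_findings(report: dict) -> list:
    """Collect CRITICAL-severity vulnerability IDs from the report."""
    findings = []
    for target in report.get("Results", []):
        for vuln in target.get("Vulnerabilities") or []:
            if vuln.get("Severity") == "CRITICAL":
                findings.append(vuln["VulnerabilityID"])
    return findings

if __name__ == "__main__":
    crits = critical_findings(scan_image(IMAGE))
    if crits:
        print(f"Build failed, {len(crits)} critical vulnerabilities: {crits}")
        sys.exit(1)  # non-zero exit fails the CI stage
    print("No critical vulnerabilities; promoting image.")
```

The same scan-parse-fail pattern extends to compliance and misconfiguration checks, so issues are caught before anything reaches the runtime environment.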
Most folks are probably aware of the Amazon Web Services (AWS) Shared Responsibility Model. In that model, AWS is responsible for the underlying infrastructure that your data, applications, and workloads land on, while customers are responsible for what they put into the cloud, which we call “security in the cloud”. There are various security subdomains that we look at, and we think about how they map to the five core functions of the NIST Cybersecurity Framework: Identify, Protect, Detect, Respond, and Recover.
Say you’re on an entire cloud migration journey. What does a good cloud security program look like? Before we build anything, what does the foundation look like? Are there compliance requirements we have to adhere to? Are there geographical boundaries? Are there third-party relationships or integrations we need to take into account? Most of these questions should already have been answered at this point.
For example, what actually makes up a container? You still have to secure containers, just like you do a normal server, but you secure them differently. If you have three different apps hosted on one piece of infrastructure, you’ve got your host operating system with the Docker engine running on top of it, and that engine can host multiple containers. Each of those containers will have its own application, its own application libraries, and its own Docker configuration.
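To make that layering concrete, here is a minimal sketch using the Docker SDK for Python to enumerate the containers sharing a single host; it assumes the docker package is installed and the Docker daemon is running locally.

```python
import docker  # Docker SDK for Python: pip install docker

# Connect to the local Docker daemon (assumes it is running on this host)
client = docker.from_env()

# Enumerate every container on this one host; each carries its own
# application, libraries, and configuration on top of the shared engine
for container in client.containers.list(all=True):
    image_tag = (container.image.tags or ["<untagged>"])[0]
    print(f"{container.name}: image={image_tag}, status={container.status}")
```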
How do we protect the overall host? You’ve got the containers themselves, but the host is still, at its root, a server that needs protecting. A very popular way to run containers today is with Kubernetes, because it does a really good job of managing multiple containers efficiently and automating much of the spinning up and spinning down of containers.
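As a small sketch of that automation, the official Kubernetes Python client can scale a deployment up or down with a single patch call. The deployment name and namespace below are hypothetical, and the sketch assumes kubectl-style credentials in your local kubeconfig.

```python
from kubernetes import client, config  # pip install kubernetes

# Load cluster credentials from the local kubeconfig
config.load_kube_config()
apps = client.AppsV1Api()

NAME, NAMESPACE = "web-frontend", "default"  # hypothetical deployment

def scale(replicas: int) -> None:
    """Ask Kubernetes to spin containers up or down to the desired count."""
    apps.patch_namespaced_deployment_scale(
        name=NAME,
        namespace=NAMESPACE,
        body={"spec": {"replicas": replicas}},
    )
    print(f"Requested {replicas} replicas of {NAME}")

scale(5)  # spin up under load
scale(2)  # spin back down when demand drops
```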
When you get into an AWS environment, you have several options for managing containers. You can build out your own hosts using Amazon Elastic Compute Cloud (EC2): you fire up an EC2 instance, build your Docker or Kubernetes host, and then put all of your containers on those EC2 instances.
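A minimal boto3 sketch of that self-managed approach is below; the AMI ID, key pair, and instance type are placeholders you would replace with your own values.

```python
import boto3  # AWS SDK for Python

ec2 = boto3.client("ec2", region_name="us-east-1")

# First-boot script that turns the instance into a Docker host
USER_DATA = """#!/bin/bash
yum update -y
yum install -y docker
systemctl enable --now docker
"""

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",             # placeholder key pair
    UserData=USER_DATA,
)
print("Launched Docker host:", response["Instances"][0]["InstanceId"])
```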
So what actually happens when your containers are running? How do you know when something bad or malicious is happening on a container? How can you make sure that your Virtual Private Cloud (VPC) is configured correctly? We see it all the time: somebody sets up a VPC without paying close attention, keeps the default settings, and opens ports to the whole world instead of limiting access to a specific IP range. And finally, are your libraries vulnerable? When we utilize containers in the cloud, we tend to throw point solutions at them, or reuse solutions we’ve always used on premises. That doesn’t always work because of the complexity of the cloud.
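Catching the open-to-the-world misconfiguration can be automated. This boto3 sketch flags security group rules that allow ingress from 0.0.0.0/0; the region is an assumption.

```python
import boto3  # AWS SDK for Python

ec2 = boto3.client("ec2", region_name="us-east-1")

# Flag any security group rule that allows ingress from anywhere
for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group.get("IpPermissions", []):
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                port = rule.get("FromPort", "all")  # absent on all-traffic rules
                print(f"{group['GroupId']} ({group['GroupName']}): "
                      f"port {port} open to the world")
```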
Another thing you have to think about when managing container security in the cloud is finite resources. When you’ve got up to 50 cloud accounts across multi-cloud environments, how do you gain visibility into your cloud workloads? Do you have visibility into your containers to understand what is going on?
This is where the Lacework cloud container security approach can really help. The Lacework Polygraph® Data Platform can help you visualize all of the containers in your environment. Not only may you be building containers with vulnerable libraries; you may be running those vulnerable libraries in production as well. Having full visibility into all of the containers spread throughout your environment, at runtime and in production, is a huge deal.
There are important questions to ask when securing containers: are you securing all the networking around them? What about the IAM policies you have set relating to the containers; are they overly permissive? Do you have MFA enabled for all of your users? Being able to analyze what is happening in your cloud environment and present it from a behavior-based approach, rather than a rules-based approach, is key.
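The MFA question, at least, is straightforward to answer programmatically. This boto3 sketch reports IAM users with no MFA device enrolled; it assumes credentials with IAM read permissions.

```python
import boto3  # AWS SDK for Python

iam = boto3.client("iam")

# Report every IAM user who has no MFA device enrolled
paginator = iam.get_paginator("list_users")
for page in paginator.paginate():
    for user in page["Users"]:
        name = user["UserName"]
        if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
            print(f"User without MFA: {name}")
```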
We fundamentally approach security and cloud security as a data problem. We ingest the API calls and the user activity from the cloud, and from the workload we ingest application launches, running processes, network behavior, and configurations. From there, we analyze the data to understand what is normal for you and what is not.
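To illustrate the behavior-based idea, here is a toy sketch (not Lacework’s actual algorithm): learn a baseline of what normally runs on a workload, then flag anything outside it.

```python
# Toy behavior-based detection: anything not seen during the baseline
# period is treated as anomalous. Real platforms model far richer signals
# (users, network flows, configurations), but the principle is the same.

baseline_events = ["nginx", "gunicorn", "python3", "cron", "sshd",
                   "nginx", "gunicorn", "python3"]

# "Normal" is simply the set of processes observed while baselining
baseline = set(baseline_events)

def check(process_name: str) -> None:
    """Flag process launches never seen during the baseline period."""
    if process_name not in baseline:
        print(f"ANOMALY: unexpected process launched: {process_name}")
    else:
        print(f"ok: {process_name}")

check("gunicorn")  # seen before: normal
check("xmrig")     # never seen: flagged
```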
It’s important to empower security teams in the tools where they already live. Forcing an operations team into an unfamiliar cloud operations tool usually doesn’t work, so we offer a lot of different integrations with tools like Jira and Slack, where people can work in the tool they’re most comfortable with.
While container technology has been a leading innovation, providing IT operations with a more resilient and flexible environment, security must be included in your approach.
GuidePoint Security