Kubernetes application security
Kubernetes applications in the cloud have become the de facto standard for modernizing workloads. Their multi-layered architecture, however, creates many potential entry points for undesired activity. You need security procedures to safeguard your applications against these threats. A defense-in-depth plan, for example, helps teams strengthen their overall security posture and reduce the single points of failure that could result in a data breach.
In this tutorial, we’ll go over best practices for preventing some of the most common security threats. They can appear in:
- application code and third-party dependencies;
- container images and workloads;
- Kubernetes pods and clusters;
- the cloud infrastructure that hosts your application.
A security platform for this kind of environment is typically made up of the following components:
- Cloud Security Posture Management (CSPM): detects misconfigurations in your environment that violate recommended security and compliance standards.
- Cloud Workload Security (CWS): monitors file and process activity in real time across production workloads.
- Cloud SIEM: detects targeted attacks on your resources.
- Application Security (AppSec): gives real-time visibility into attacks against web application and API logic.
With these products, you can detect unsafe service misconfigurations and mitigate real threats at every level of your Kubernetes infrastructure.
Secure application code
Attackers can compromise a cluster by exploiting design flaws or implementation errors in application code. According to the Open Web Application Security Project (OWASP), some of the most prevalent code-level vulnerabilities are:
- insufficient logging procedures;
- obsolete or vulnerable third-party dependencies;
- form fields that are not sanitized or validated, along with insecure password storage and data transfer methods.
A few best practices for writing safer code, gaining better visibility into code health, and identifying attacks that target your application and APIs will help you avoid these risks. These practices are essential for establishing the multi-layered security policies that safeguard Kubernetes infrastructure in both development and production environments.
Use code analysis tools to conduct regular audits.
Although logging key application events is a smart first step toward proactively identifying code-level security flaws, you may still introduce other weaknesses in your code’s design and structure, especially if your environment is rapidly changing. Conducting frequent audits of your application code with code analysis tools can help you detect these security issues early in the development process, allowing you to correct them before an attacker exploits them.
Analysis tools like Semgrep and SonarQube identify faulty code and offer suggestions for how to fix it. For example, form fields that do not sanitize or validate user-submitted data may allow SQL injection attacks. In these cases, an attacker submits database queries as input, which can modify or destroy data in your database (e.g., DROP TABLE Customers).
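To illustrate the fix these tools typically suggest, here is a minimal sketch in Python using the standard library’s sqlite3 module (the customers table and function name are invented for this example). Parameterized queries make the driver treat user input strictly as data, so it can never rewrite the statement:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers (name) VALUES ('Alice')")

def find_customer(conn, name):
    # The ? placeholder binds `name` as a value, not as SQL text,
    # so input like "'; DROP TABLE customers; --" cannot alter the query.
    cur = conn.execute("SELECT id, name FROM customers WHERE name = ?", (name,))
    return cur.fetchall()

print(find_customer(conn, "Alice"))                        # → [(1, 'Alice')]
print(find_customer(conn, "'; DROP TABLE customers; --"))  # → [] (no injection)
```

The equivalent vulnerable pattern would build the query with string concatenation; static analyzers flag exactly that construct.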
Check for security concerns in third-party dependencies.
Most apps rely on open source dependencies controlled by a third party (for example, libraries, packages, and frameworks), which means you don’t have as much control over their design or security.
A flaw in a third-party library can quickly jeopardize your application’s security. On the other hand, keeping track of the state of each dependency takes time and effort. A new version of a third-party library may contain a vital security patch alongside a change that breaks your application code or infrastructure. It’s critical to be aware of these constraints so that securing your application doesn’t break it.
Regularly scanning code dependencies helps you understand their current state and evaluate the risks of moving your code to a new or patched dependency version. Scanners such as OWASP Dependency-Check provide further information about a compromised library, including the affected versions, the severity of the vulnerability, and which versions you should upgrade to in order to fix the problem. They can also be connected to your CI/CD workflows, so you can check whether any part of your code interacts with compromised libraries before deploying it. These safeguards allow you to make informed judgments about updating dependencies and lower the chances of introducing flaws or breaking changes.
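As a sketch of the CI/CD integration, a hypothetical GitHub Actions job (the job name and project name are illustrative, and it assumes the OWASP Dependency-Check CLI is already installed on the runner) could fail the build whenever a dependency carries a high-severity CVE:

```yaml
# Hypothetical CI job; names are placeholders.
jobs:
  dependency-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Scan dependencies for known CVEs
        run: |
          # Fail the build if any dependency has a CVE scored CVSS >= 7
          dependency-check.sh --project my-app --scan . --failOnCVSS 7
```

Failing the pipeline at this step guarantees vulnerable dependencies are caught before the code reaches production.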
Track application activities at the code and container levels
Regular audits of your application’s code and third-party dependencies are essential for security, but they may not be enough to prevent all threats. Because Kubernetes environments are diverse and dynamic, attackers have additional options for concealing their activities. They may, for example, target specific containers or go after smaller, more vulnerable application components that are easy to overlook.
It’s vital to have visibility into the following to mitigate this risk:
- actions against application code and APIs;
- accounts that interact with application services;
- file, process, and kernel activity on containers.
Keeping track of modifications to application files, directories, and running processes can help you figure out how an attack unfolded and what the attacker’s goals were. An attacker may, for example, use a SQL injection attack to launch a shell from a database process. This exploit takes advantage of improperly sanitized application fields to let an attacker breach a host or gain access to other vital application services.
To better understand the source of these events, you can enable audit logging, which provides a wealth of information about Kubernetes activity. We’ll go through audit logging in further depth later.
Secure container images and workloads
In distributed environments, applications are divided into smaller workloads, each running in its own container. Many teams rely on publicly available container images that contain the necessary operating system and binaries, significantly lowering the development time for a given task. At the same time, containerized apps are becoming more sophisticated and consuming more resources, which increases the possibility of introducing new vulnerabilities into your workloads. This section looks at some best practices for keeping them secure.
Make sure container images come from a reputable source.
Pulling images from a public registry carries risks comparable to using third-party libraries in application code. Because you have no thorough understanding of the structure of third-party container images, you might mistakenly pull one with obsolete dependencies or potentially hazardous code.
To avoid issues like this, use images that authorized users have signed and that come from a trusted source that actively updates them, such as a well-known firm or an open-source organization. Using and monitoring the registries provided by your cloud provider, such as Amazon Elastic Container Registry (Amazon ECR) and Azure Container Registry, can also help reduce risk. It’s crucial to be vigilant when a new image is unexpectedly added to your registry; it could mean that an attacker is attempting to establish persistence by uploading a container with malicious code.
The use of privileged containers should be limited.
Privileged containers have direct access to the host’s resources and devices. An attacker with access to one of these containers can manipulate host resources in various ways, such as adding their own SSH public keys to the host’s /root/.ssh/authorized_keys file. Though there are certain advantages to using privileged containers, such as running GPU-enabled workloads in Kubernetes clusters, limiting their use and keeping track of their status in your environment is critical.
Because privileged containers have the same capabilities as the host, distinguishing between malicious and regular behavior can be challenging. This problem grows as applications use hundreds of containers to serve workloads and spin up new containers at regular intervals, making it impractical to track the status of each container manually.
Improve the separation of container workloads from host resources.
Container isolation establishes a barrier between container workloads and hosts, limiting access to system resources for both workloads and attackers. While restricting the use of privileged containers is one technique to safeguard host resources, other configuration settings can help improve container isolation:
- Container runtimes. Take advantage of built-in security features, such as the ability to enforce signed and encrypted images, by using a runtime like CRI-O.
- Resource limits. Set I/O, memory, and CPU restrictions for containers to help defend against denial-of-service attacks.
- Kernel capabilities. Based on specific use cases, assign a reduced set of capabilities (e.g., mount operations, filesystem access) to containers to prevent access to essential resources.
These configuration parameters work together to provide various layers of protection for your containers.
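A single pod manifest can combine several of these layers. The sketch below (the pod name and image are placeholders) refuses privileged mode and privilege escalation, drops all kernel capabilities, mounts the root filesystem read-only, and caps CPU and memory:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                       # illustrative name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0  # placeholder image
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]                    # add back only what the app needs
      resources:
        limits:                            # bounds resource-exhaustion attacks
          cpu: "500m"
          memory: "256Mi"
```

Starting from a fully locked-down spec and selectively re-adding capabilities keeps each container’s blast radius as small as possible.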
Secure Kubernetes clusters
Kubernetes groups workloads into one or more pods that share network and storage resources, and it manages and scales them in clusters. Users and service accounts can modify pods, services, deployments, and more through the Kubernetes API server. Because Kubernetes is in charge of orchestrating your application, you should configure cluster resources properly to minimize the risk of an attack. In this section, we’ll look at several ideas for securing Kubernetes clusters that can complement your container-level configurations.
Use audit logs to record Kubernetes activity.
Audit logging records all interactions between the API server, application services, and users, giving you more information about the source of fraudulent behavior in a system. You can deliver your audit logs to a service that includes detection rules to help you identify potential risks, such as:
- unauthorized access to the API server;
- changes to Kubernetes resources;
- adjustments to a service account.
These events could signal that your clusters have other security flaws, such as a misconfigured API server or pods with elevated access.
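Audit logging is configured by pointing the API server’s --audit-policy-file flag at a policy object. A minimal illustrative policy might record full request and response bodies for pod operations while keeping only metadata for everything else, so secret values never land in the logs:

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Capture full request and response bodies for pod operations.
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["pods"]
  # Never record the contents of secrets, only who touched them.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
  # Catch-all: log metadata for all other requests.
  - level: Metadata
```

Rules are evaluated in order and the first match wins, which is why the catch-all rule comes last.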
Access to the Kubernetes API should be restricted.
If an attacker gains access to the Kubernetes API server, any component of your application can be easily changed or destroyed. Fortunately, the API server offers several restrictions you can enable to avoid this danger, ensuring that only authenticated users with appropriate permissions can access the Kubernetes API. On cloud platforms, authentication controls are applied by default.
For example, you can use OAuth2 authentication services like OpenID Connect, so that any user attempting to access the API server must be verified and access is limited to your company. You can also use role-based access control (RBAC) to authorize requests to the server from specific authenticated users. RBAC allows you to create roles that reflect your company’s organization and authorize access to Kubernetes resources, so only the users or groups who need the API server have access to it.
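As a minimal RBAC sketch (the namespace and user are hypothetical), the following grants a single user read-only access to pods in one namespace and nothing else:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: app-team            # illustrative namespace
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # read-only verbs
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: app-team
  name: read-pods
subjects:
  - kind: User
    name: jane@example.com       # placeholder user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Scoping roles to namespaces like this, rather than using cluster-wide bindings, keeps a compromised account from affecting resources outside its team’s boundary.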
Limiting access to the Kubernetes API server also helps protect secrets, such as API keys, user passwords, and certificates, that are stored there for use across workloads, external services, and accounts. By default, secrets are stored unencrypted in the server’s underlying data store, making them accessible to anyone with access to etcd.
To secure this sensitive data, enable encryption at rest for secrets. Kubernetes supports several different encryption providers and recommends using your cloud provider’s key management service (KMS) for maximum security. Because the decryption keys are stored remotely rather than in Kubernetes, an attacker would need access to both the Kubernetes API server and the keys.
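Encryption at rest is enabled by passing an EncryptionConfiguration file to the API server via the --encryption-provider-config flag. A sketch using a KMS plugin (the provider name and socket path are placeholders for your cloud provider’s plugin) looks like this:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      # Preferred: envelope encryption through the cloud KMS plugin.
      - kms:
          apiVersion: v2
          name: cloud-kms                          # placeholder plugin name
          endpoint: unix:///var/run/kms-plugin.sock
      # Fallback so existing unencrypted secrets can still be read
      # while they are rewritten under the new provider.
      - identity: {}
```

The first provider in the list is used for writes, so after enabling this, rewriting existing secrets (for example with a bulk update) migrates them to encrypted storage.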
Create segregated pods with restricted access.
Because pods and individual containers share comparable configurations and contexts, such as network policies and resource restrictions, you can apply the same isolation rules to prevent attackers from creating or altering pods or gaining access to other containers. Kubernetes provides out-of-the-box security policies via an admission controller to give you more control over pod settings in a cluster; pods must comply with your policies in order to deploy successfully.
Based on Kubernetes recommendations, these policies offer varying levels of protection, including:
- limiting privileged pods and privilege escalation;
- limiting the capabilities of pods (e.g., running mount operations, modifying processes);
- restricting access to the host’s namespaces, ports, and filesystem.
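In current Kubernetes versions, the built-in admission controller that enforces these levels is Pod Security admission (the successor to PodSecurityPolicy). You opt a namespace into a profile with labels; the namespace name below is illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production                                # illustrative namespace
  labels:
    # Reject pods that violate the "restricted" profile
    # (no privileged mode, no privilege escalation, dropped capabilities).
    pod-security.kubernetes.io/enforce: restricted
    # Also surface warnings to users at apply time.
    pod-security.kubernetes.io/warn: restricted
```

Starting with the warn label alone lets you see which workloads would break before switching enforcement on.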
Secure Kubernetes infrastructure in the cloud
The cloud provider that hosts your application is the final layer of infrastructure. Most providers offer managed services, such as Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), and Azure Kubernetes Service (AKS), that simplify the deployment and scaling of your container environment. However, managed services are subject to the same security vulnerabilities as the rest of your infrastructure. The following best practices can help you better understand what’s happening on your platform and make sure that any cloud resources supporting your Kubernetes application are configured securely.
Activate audit logging.
As previously mentioned, Kubernetes audit logs contain detailed information about cluster-level behavior. You can also gather platform-specific audit logs to gain visibility into events across a cloud provider, such as logins, profile or resource edits, and resource status changes. The most prevalent vulnerabilities in a cloud environment are application resources and cloud accounts that are not configured according to your security standards, and enabling these logs and knowing how to read them can help you find such vulnerabilities. Depending on your provider, you can use AWS CloudTrail logs, Azure platform logs, or Google Cloud audit logs to collect this activity.
For cloud accounts, follow the principle of least privilege.
Different users and services require different access levels in cloud-based Kubernetes applications, which can lead to permission misconfigurations that attackers can abuse. For example, an attacker could take advantage of a misconfigured IAM policy to take control of a GKE service account and change the configuration of an application cluster. Creating minimally privileged user and service accounts, and granting additional permissions only as needed, helps protect Kubernetes resources from unauthorized access. The GKE, EKS, and AKS documentation includes best practices for establishing secure identity-based rules that complement your RBAC policies and container-level configurations.
Access to the metadata API of the provider should be limited.
Cloud platforms typically provide a metadata API server that stores metadata about resources in the environment, such as the names of GKE virtual machine instances. Running pods can access this metadata, including cloud credentials, identity tokens, and other sensitive information. Attackers can also use a provider’s metadata API to investigate Kubernetes infrastructure and find new resources to target. For example, using a compromised pod in an EKS cluster, an attacker might query Amazon EC2’s instance metadata service (IMDS) for the credentials of an EC2 instance, then use that information to access account-level details about the cluster and manipulate cluster resources.
Accessing the EC2 IMDS through an interactive session is unusual in production environments, so it’s important to notice this behavior and determine whether it came from a legitimate source or is a sign of a broader threat to your resources. Network policies that restrict traffic from pods to your cloud provider’s metadata API can help you mitigate this type of danger.
Continuing our previous EKS example, you can use the Calico network policy engine to establish appropriate policies for your clusters. These safeguards ensure that if an attacker gains access to a pod, they will not be able to acquire credentials for other cloud resources.
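As a sketch, the same effect can be expressed with a standard Kubernetes NetworkPolicy (which Calico enforces): allow all egress from pods except traffic to the IMDS address, 169.254.169.254. The policy name and namespace are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-imds-egress         # illustrative name
  namespace: default
spec:
  podSelector: {}                # applies to every pod in the namespace
  policyTypes: ["Egress"]
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32   # block the instance metadata service
```

Workloads that legitimately need instance metadata (for example, for IAM roles) should instead receive credentials through a mechanism like IAM roles for service accounts, so the blanket block stays safe to apply.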
A multi-layered approach to securing Kubernetes applications
In this tutorial, we looked at best practices for safeguarding your Kubernetes application at every level, from application code to the cloud provider hosting your Kubernetes resources. We also looked at how to monitor your entire Kubernetes stack and spot significant issues in real time. This multi-layered approach to security helps you detect real threats and attacks and remediate exploitable misconfigurations.
About Enteros
IT organizations routinely spend days and weeks troubleshooting production database performance issues across multitudes of critical business systems. Fast and reliable resolution of database performance problems by Enteros enables businesses to generate and save millions of direct revenue, minimize waste of employees’ productivity, reduce the number of licenses, servers, and cloud resources and maximize the productivity of the application, database, and IT operations teams.
The views expressed on this blog are those of the author and do not necessarily reflect the opinions of Enteros Inc. This blog may contain links to the content of third-party sites. By providing such links, Enteros Inc. does not adopt, guarantee, approve, or endorse the information, views, or products available on such sites.