B2B Engineering Insights & Architectural Teardowns

Confidential Containers in Kubernetes without Trusting the Cluster

Confidential Containers change the security model of Kubernetes: protecting data in use without trusting the platform and administrators.

The problem does not manifest at the level of network isolation or RBAC. It runs deeper, in the trust placed in the execution environment itself. Classic Kubernetes assumes that cluster administrators and the infrastructure are trustworthy. When working with sensitive data, especially in AI workloads, this assumption breaks down. Data can be protected “at rest” and “in transit,” but it remains vulnerable “in use,” that is, in application memory. This becomes a bottleneck for scenarios with data sovereignty and zero-trust requirements.

The solution emerging in the industry is confidential computing. In the context of Kubernetes, it is implemented through Confidential Containers. The approach is built around the idea of not trusting the platform, the Kubernetes administrators, or even the host. Instead, the system verifies the integrity of the environment through attestation before granting access to secrets. This is a trade-off between security and operational complexity: we get a stricter trust model but pay for it with a more complex stack, including a dependency on a TEE (trusted execution environment), additional services, and a verification chain.

The implementation in OpenShift demonstrates how this model lands in practice. The architecture separates roles and clusters: there is a “trusted” environment where the Trustee service operates, and an “untrusted” cluster where the confidential workloads themselves run. This separation is deliberate: it reduces the blast radius and establishes clear trust boundaries. Installation begins with operators and configuration, with the cluster intentionally deployed without pre-installed components. This approach makes the dependencies and control points in the system easy to see.
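As a sketch, installing the workload-side operator on the untrusted cluster follows the standard OLM pattern. The namespace, package, channel, and catalog names below are assumptions modeled on the OpenShift sandboxed containers operator and should be verified against the actual operator catalog:

```yaml
# Illustrative OLM installation of the sandboxed/confidential containers
# operator on the untrusted cluster. Names are assumptions; check your catalog.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-sandboxed-containers-operator
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: sandboxed-containers-group
  namespace: openshift-sandboxed-containers-operator
spec:
  targetNamespaces:
    - openshift-sandboxed-containers-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: sandboxed-containers-operator
  namespace: openshift-sandboxed-containers-operator
spec:
  channel: stable
  name: sandboxed-containers-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

The Trustee service would be installed separately, in the trusted environment, which is exactly the role-and-cluster separation described above.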

Next comes the most interesting part: application behavior. The basic scenario is “blackbox deployment.” A container with an AI workload (for example, fraud detection) runs unchanged but receives an additional layer of protection — memory encryption. From the perspective of Kubernetes resources, nothing changes. This is an important point: Confidential Containers do not require rewriting the application. They change the execution environment.
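The “nothing changes in Kubernetes resources” claim can be made concrete: in a typical Confidential Containers setup, the only visible difference is the `runtimeClassName` field, which routes the pod to a TEE-backed runtime. The runtime class name and image below are illustrative assumptions; the actual name depends on the installation:

```yaml
# The pod spec is unchanged except for runtimeClassName: the selected
# runtime launches the container inside a TEE-backed VM with memory
# encryption. "kata-cc" and the image reference are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: fraud-detection
spec:
  runtimeClassName: kata-cc   # install-specific; verify the actual class name
  containers:
    - name: model
      image: registry.example.com/fraud-detection:latest
      resources:
        limits:
          memory: "4Gi"
```

Removing the `runtimeClassName` line yields an ordinary pod, which is the point: the application itself is a black box to the mechanism.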

The next level involves working with external data. In the traditional model, the pod loads an encrypted dataset and receives the decryption key through an init container or cluster secrets, which means the key is potentially accessible to administrators. In the Confidential Containers model, a TEE stack and the Trustee service are added, and the key is released only after successful attestation. If the environment is compromised or does not match the expected state, the key is not provided. This shifts control from Kubernetes to a cryptographically verifiable trust chain.
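The weakness of the traditional pattern described above is easy to see in manifest form. In the sketch below (image names and secret names are illustrative), the decryption key is an ordinary cluster Secret, so anyone with permission to read Secrets in the namespace, or with access to etcd, can recover it. This is precisely what the attestation-gated key release replaces:

```yaml
# Traditional pattern: the dataset key lives in a cluster Secret and is
# handed to an init container. Any administrator who can read Secrets
# can read the key. In Confidential Containers, this Secret disappears;
# the key is released by the Trustee only after attestation.
apiVersion: v1
kind: Pod
metadata:
  name: dataset-consumer
spec:
  initContainers:
    - name: decrypt-dataset
      image: registry.example.com/decrypt-tool:latest   # illustrative
      env:
        - name: DATASET_KEY
          valueFrom:
            secretKeyRef:
              name: dataset-key   # readable via the Kubernetes API
              key: key
      volumeMounts:
        - name: data
          mountPath: /data
  containers:
    - name: app
      image: registry.example.com/fraud-detection:latest
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      emptyDir: {}
```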

Another scenario is sealed secrets. In standard Kubernetes, secrets are accessible via the API and can be extracted with the right permissions. With Confidential Containers, the secret is never stored in the cluster in plaintext. It is injected at container startup, after the environment has been verified through the Trustee, and the process is initiated by components inside the container's guest Linux environment. As a result, the secret exists only within the trusted execution boundary.
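A sketch of what this looks like from the cluster side: the Kubernetes Secret holds only an opaque reference that points at a resource in the Trustee's key broker, not the value itself. The envelope below follows the Confidential Containers sealed-secret convention (a `sealed.` prefix around a base64url JSON payload containing a `kbs:///` URI), but treat the exact field names and URI as assumptions to verify against the project documentation:

```yaml
# The cluster stores only a sealed reference. Guest components inside
# the TEE resolve the kbs:/// URI against the Trustee after attestation;
# the plaintext never appears in the Kubernetes API or etcd.
apiVersion: v1
kind: Secret
metadata:
  name: sealed-api-token
stringData:
  # The middle segment is base64url JSON, roughly:
  # {"version":"0.1.0","type":"vault","provider":"kbs",
  #  "name":"kbs:///default/api-token/token"}
  token: sealed.fakejwsheader.eyJ2ZXJzaW9uIjoiMC4xLjAiLCJ0eXBlIjoidmF1bHQifQ.fakesignature
```

An administrator reading this Secret through the API sees only the reference; without passing attestation, the Trustee will not release the underlying value.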

What ultimately changes? The level of isolation approaches that of virtual machines while the container model is preserved, a compromise between security and orchestration convenience. The original material provides no performance or latency figures, so the overhead cannot be assessed. Architecturally, however, it is clear that adding a TEE and attestation increases complexity and potentially affects startup time and throughput.

The practical value of the approach lies in reducing trust in the infrastructure. This is especially relevant for hybrid cloud and edge scenarios, where control over the environment is limited. At the same time, tooling such as Red Hat Demo Platform lowers the entry barrier: ready-made infrastructure allows exploring system behavior without building a cluster from scratch. This makes the technology more accessible but does not eliminate its fundamental complexity.

Confidential Containers are not “just another runtime.” They are a shift in the Kubernetes trust model. And like any shift, they require a reevaluation of the usual assumptions about where control ends and risk begins.
