
Bare Metal Cluster Support #263

Open
Firaenix opened this issue Oct 14, 2022 · 10 comments
Labels
question Further information is requested

Comments

@Firaenix

Hi,

I love the look of this project and am considering trying it out on a cluster for myself.
However I was wondering if Constellation currently has support for Bare Metal clusters? I'd like to bootstrap it to replace some microk8s clusters. Is something like this in the pipeline?

Thanks,
Nick

Firaenix added the question label Oct 14, 2022
@m1ghtym0
Member

Hi Nick,
Thanks for your kind words; glad you like it :-)

Constellation currently does not support Bare Metal.
In our latest release, we added a local mode called MiniConstellation (like Minikube, kind, ...). It is based on QEMU. Please see our documentation for more details on how to use it.

Bare Metal support is on our roadmap, especially with regard to running Constellation on the Edge.
However, it's not something that currently has a high priority for us.
Are Edge scenarios also why you are asking for Bare Metal?

@jeffersonbenson

Personally, I would be looking into Constellation for edge cases and especially on bare metal devices. The documentation seems to check a lot of boxes for me, but not having bare metal support at scale puts a blocker in place for me.

@m1ghtym0
Member

m1ghtym0 commented Dec 2, 2022

Hi @jeffersonbenson, thanks for your feedback. Would love to learn more about your use case and the requirements you have for bare metal. If you're interested we can have a short call to discuss. You can reach me via me@edgeless.systems.

@revoltez

Hello @m1ghtym0

I'm interested in Constellation and I was wondering: are there any technical challenges preventing you from having bare metal support? I assume the issue is related to variations in hardware and software across different machines, leading to diverse reference PCR values, so you can't possibly have Constellation register all of them?

Which is why it's easier to register only the most common (rarely changing) ones, like Azure, GCP, and AWS?

If that's the issue, is the solution to configure the Constellation CLI to register a custom list of PCR values as another trusted reference, so that the target environment can provide the correct quotes against that list?

And is there a particular date when bare metal will be supported?

And is there a way to modify a Constellation cluster to include a custom k8s pod (like an HTTP server) that gets started automatically after the VM starts?

@m1ghtym0
Member

Hi @revoltez,

I'm interested in Constellation and I was wondering: are there any technical challenges preventing you from having bare metal support? I assume the issue is related to variations in hardware and software across different machines, leading to diverse reference PCR values, so you can't possibly have Constellation register all of them?
Which is why it's easier to register only the most common (rarely changing) ones, like Azure, GCP, and AWS?

Yes, partially: the much more diverse foundational layer (hardware/software) is challenging.
However, if we talk about "Bare Metal," there is an additional layer of resource management, networking, storage, etc.; Constellation would need to take care of what is currently covered by the hypervisor / cloud infrastructure stack.
This is the biggest challenge for Constellation Bare Metal.

If you're talking about "Bare Metal" as in "on top of a hypervisor but without a CSP/API for resource management,"
it would be closer to what we currently support for running Constellation on-prem.
You can use Constellation with QEMU/KVM directly as well. Is that something you're interested in?

If that's the issue, is the solution to configure the Constellation CLI to register a custom list of PCR values as another trusted reference, so that the target environment can provide the correct quotes against that list?

You can configure the trusted PCR values in the config. Constellation automatically pulls and verifies the PCR values for our release images. However, you can set your own image / specify your own PCR values for your purpose.
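To make that concrete: the trusted measurements live in the cluster config file. A rough sketch of what overriding PCRs in constellation-conf.yaml could look like for a QEMU/vTPM setup (the exact schema depends on your Constellation version, and the digests below are placeholders, not real values):

```yaml
# Sketch only: a constellation-conf.yaml excerpt for a QEMU/vTPM cluster.
# Field names approximate a recent config schema; replace the placeholder
# digests with the values measured for your own image.
attestation:
  qemuVTPM:
    measurements:
      4:
        expected: "<hex digest for PCR 4>"   # e.g. taken from measured-boot output
        warnOnly: false                      # enforce this PCR during attestation
      9:
        expected: "<hex digest for PCR 9>"
        warnOnly: false
      11:
        expected: "<hex digest for PCR 11>"
        warnOnly: false
```

PCRs 4, 9, and 11 appear here only because they come up later in this thread; which PCRs you pin depends on your image and attestation variant.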

And is there a particular date when bare metal will be supported?

There is no planned date for "bare metal" support. Depending on your requirements (virtualization-based or true bare metal), a QEMU/KVM setup with the mentioned configuration options for PCRs could be a good start.

And is there a way to modify a Constellation cluster to include a custom k8s pod (like an HTTP server) that gets started automatically after the VM starts?

Theoretically, you could modify/hijack Constellation's initialization process to do that, but you would require a custom image for that. Why wouldn't you just follow the usual k8s workflow? Create the Constellation/K8s cluster first and deploy your k8s pod in a second step. You could automate the entire process, even the cluster creation with Constellation's Terraform provider.
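For the "deploy your k8s pod in a second step" part, nothing Constellation-specific is needed; a plain Kubernetes Deployment applied with kubectl works. A minimal sketch (nginx stands in for the HTTP server here; swap in your own image):

```yaml
# http-server.yaml -- apply with: kubectl apply -f http-server.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: http-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: http-server
  template:
    metadata:
      labels:
        app: http-server
    spec:
      containers:
        - name: http-server
          image: nginx:1.25   # placeholder; any HTTP server image works
          ports:
            - containerPort: 80
```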

@revoltez

revoltez commented Feb 21, 2024

Thanks @m1ghtym0 for taking the time to answer my questions, I really appreciate it.

You can use Constellation with QEMU/KVM directly as well. Is that something you're interested in?

Yes, that's exactly what I'm interested in, but AFAIK it does not provide CVM capabilities, as it just relies on the TPM?

Theoretically, you could modify/hijack Constellation's initialization process to do that, but you would require a custom image for that. Why wouldn't you just follow the usual k8s workflow?

So, in a nutshell, the idea is to create a PaaS based on Constellation that allows regular users to deploy their pods to someone else's Constellation cluster by connecting to a server, which is a pod running in the cluster. In order for the users to trust that the server behaves correctly (pulling the correct OCI images, monitoring as expected, etc.), the provider needs to provide attestation to the users that it is indeed an authentic server. Having an authentic cluster does not say much about the authenticity of the pods running within it (to external users that didn't deploy those pods), which is why the server pod needs to be measured alongside the other services (joinService, keyService, etc.) in the OS image in the measured-boot chain.

At least, this is so far the only viable option I can see, since regular users don't have access to the Constellation CLI to apply their pods directly; nor should they, because they shouldn't control the cluster.

If that's the only way, are there specific docs for extending the image? My first guess is somewhere around modifying the Constellation Helm charts to include my custom pod (if they are also baked in and measured with the rootfs), somewhere around here: https://github.com/edgelesssys/constellation/tree/main/internal/constellation/helm/charts/edgeless/constellation-services

I'm also struggling a bit with building the image locally, as the steps require connecting to AWS here: https://github.com/edgelesssys/constellation/blob/main/dev-docs/workflows/build-develop-deploy.md#authenticate-with-your-cloud-provider

Can't I simply build the image locally and run it without pushing to the GitHub registry, and deploy it locally like MiniConstellation?

@m1ghtym0
Member

So, in a nutshell, the idea is to create a PaaS based on Constellation that allows regular users to deploy their pods to someone else's Constellation cluster by connecting to a server, which is a pod running in the cluster. In order for the users to trust that the server behaves correctly (pulling the correct OCI images, monitoring as expected, etc.), the provider needs to provide attestation to the users that it is indeed an authentic server. Having an authentic cluster does not say much about the authenticity of the pods running within it (to external users that didn't deploy those pods), which is why the server pod needs to be measured alongside the other services (joinService, keyService, etc.) in the OS image in the measured-boot chain.

Can you elaborate a bit on your threat model here? There seem to be two parties: the PaaS owner, who controls/manages the Constellation cluster, and the workload owner. From your description, it sounds like the threat model is that the workload owner does not trust the PaaS owner and hence needs to run the workload in a Constellation cluster to attest the integrity and authenticity of the environment.
However, in Constellation's threat model, the cluster owner has full control/access over that cluster. Workload attestation and provider exclusion are explicitly not part of the security guarantees that Constellation can give you.

Extending the node image can indeed help with workload attestation; however, without restricting the cluster owner's access to the Kubernetes API, you still won't be able to exclude the cluster owner from the TCB.
If you'd like to discuss this further in a quick chat, please feel free to send me an email at me@edgeless.systems

At least, this is so far the only viable option I can see, since regular users don't have access to the Constellation CLI to apply their pods directly; nor should they, because they shouldn't control the cluster.

If that's the only way, are there specific docs for extending the image? My first guess is somewhere around modifying the Constellation Helm charts to include my custom pod (if they are also baked in and measured with the rootfs), somewhere around here: https://github.com/edgelesssys/constellation/tree/main/internal/constellation/helm/charts/edgeless/constellation-services

I'm also struggling a bit with building the image locally, as the steps require connecting to AWS here: https://github.com/edgelesssys/constellation/blob/main/dev-docs/workflows/build-develop-deploy.md#authenticate-with-your-cloud-provider

That is only necessary if you build images for running Constellation on a CSP like AWS, or for uploading release images for Constellation. It is not necessary for local deployments.

Can't I simply build the image locally and run it without pushing to the GitHub registry, and deploy it locally like MiniConstellation?

You can build the images locally and deploy them with QEMU: https://docs.edgeless.systems/constellation/getting-started/first-steps-local#create-a-cluster

The QEMU setup with custom images isn't fully documented. Here are some links for building your images:

The code for generating TPM measurements is located here:
https://github.com/edgelesssys/constellation/blob/main/image/measured-boot/cmd/main.go

You can generate the measurements for the PCRs as follows and add them to your config file:

bazel run --run_under="sudo -E" //image/measured-boot/cmd constellation.raw  output.json
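For intuition about what these PCR values are: a TPM PCR cannot be written directly; it can only be "extended", i.e. hash-chained, so the final value commits to the entire ordered sequence of measured boot components. A minimal sketch of the generic SHA-256 extend rule (illustration only, not Constellation's actual measurement code, which lives in the measured-boot tool above):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM PCR extend: new_pcr = SHA-256(old_pcr || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

# Most PCRs start at all zeroes (32 zero bytes for the SHA-256 bank).
pcr = bytes(32)

# Each boot stage is hashed and extended in order; changing any
# component, or the order, changes the final PCR value.
for component in [b"bootloader", b"kernel", b"initramfs"]:
    pcr = pcr_extend(pcr, hashlib.sha256(component).digest())

print(pcr.hex())
```

This is why using your own image requires regenerating the reference values with the measured-boot tool rather than reusing the released ones.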

@revoltez

revoltez commented Feb 23, 2024

Can you elaborate a bit on your threat model here? There seem to be two parties: the PaaS owner, who controls/manages the Constellation cluster, and the workload owner. From your description, it sounds like the threat model is that the workload owner does not trust the PaaS owner and hence needs to run the workload in a Constellation cluster to attest the integrity and authenticity of the environment.
However, in Constellation's threat model, the cluster owner has full control/access over that cluster. Workload attestation and provider exclusion are explicitly not part of the security guarantees that Constellation can give you.

Yes, that's correct. That would be a different threat model and, as you described, Constellation does not provide security guarantees in this scenario due to the provider's control. But it's so far the most suitable option for our needs.

You can build the images locally and deploy them with QEMU: https://docs.edgeless.systems/constellation/getting-started/first-steps-local#create-a-cluster

I have built some images using Bazel. Running bazel build //image/system:qemu built four images (qemu_qemu-vtpm_console, qemu_qemu-vtpm_debug, qemu_qemu-vtpm_nightly, qemu_qemu-vtpm_stable) under the folder constellation/bazel-out/k8-opt/bin/image/system.

I decided to use qemu_qemu-vtpm_stable, and the command to extract the measurements was successful; I updated PCR[4, 9, 11] (the rest are zero). Then I copied the constellation.raw file and renamed it to v2.14.2.raw, which is my current (outdated) Constellation version, to avoid Constellation downloading the image. Correct me if I'm wrong in this step, because it worked!

However, I found this when reading here: https://github.com/edgelesssys/constellation/blob/main/internal/constellation/helm/helm.go#L18:

The charts themselves are embedded in the CLI binary, and values are dynamically updated depending on configuration.

Does that mean modifying the Helm charts won't result in modifying the OS image, since they are in the CLI?
If so, is the nodelock/bootstrapper package the go-to place for a quick injection of a simple pod? Specifically after the state has transitioned to locked, which means a running cluster has either been found or initialized.

@m1ghtym0
Member

I have built some images using Bazel. Running bazel build //image/system:qemu built four images (qemu_qemu-vtpm_console, qemu_qemu-vtpm_debug, qemu_qemu-vtpm_nightly, qemu_qemu-vtpm_stable) under the folder constellation/bazel-out/k8-opt/bin/image/system.

I decided to use qemu_qemu-vtpm_stable, and the command to extract the measurements was successful; I updated PCR[4, 9, 11] (the rest are zero). Then I copied the constellation.raw file and renamed it to v2.14.2.raw, which is my current (outdated) Constellation version, to avoid Constellation downloading the image. Correct me if I'm wrong in this step, because it worked!

Yes, that is one option. Alternatively, you can create a JSON file in the workspace directory.
The filename the CLI expects will be printed when you try to reference an image version that doesn't exist upstream.
E.g.: constellation_v2_ref_main_stream_debug_v2.16.0-pre.0.20240222124304-00d39ff7fa04_image_info.json:

{
  "ref": "main",
  "stream": "debug",
  "version": "v2.16.0-pre.0.20240222124304-00d39ff7fa04",
  "list": [
    {
      "csp": "QEMU",
      "attestationVariant": "qemu-vtpm",
      "reference": "file:///path/to/image.raw"
    }
  ]
}

However, I found this when reading here: https://github.com/edgelesssys/constellation/blob/main/internal/constellation/helm/helm.go#L18:

The charts themselves are embedded in the CLI binary, and values are dynamically updated depending on configuration.

Does that mean modifying the Helm charts won't result in modifying the OS image, since they are in the CLI? If so, is the nodelock/bootstrapper package the go-to place for a quick injection of a simple pod? Specifically after the state has transitioned to locked, which means a running cluster has either been found or initialized.

That is correct. You would need to embed the container image into the OS image and hijack the bootstrapper for deployment. However, attestation-wise, the way Constellation is designed, there is no guarantee for the workload owner that the container (HTTP server) they are talking to is still the same one deployed by the bootstrapper. How do you envision the attestation guarantee looking for the workload owner?

@revoltez

when you try to reference an image version that doesn't exist upstream.

I could not find in the Constellation CLI how to specify the image to download. I tried simply creating a constellation.json file pointing to constellation.raw, but it skipped it! I also tried naming it constellation_v2_ref_main_stream_debug_v2.16.0-pre.0.20240222124304-00d39ff7fa04_image_info.json, but that also didn't work, and it proceeded to download the image from upstream.

How do you envision the attestation guarantee looking for the workload owner?

We intend to restrict the provider's access to the Constellation cluster (maybe by applying security policies to the control plane and the server pod). This will limit the provider's operations on the cluster and transform the server into an immutable, read-only pod.


4 participants