
Mounting a Persistent Volume Claim as a Volume within a Pod's Container spec doesn't seem to work. #2217

Open
dtm2451 opened this issue Apr 8, 2024 · 9 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@dtm2451

dtm2451 commented Apr 8, 2024

What happened (please include outputs or screenshots):
A pod is created from the manifest below, but the volume meant to target a persistent_volume_claim is instead created as an EmptyDir, and the container volume_mounts meant to target this volume are skipped entirely.

Specifically, when I run kubectl describe pod/test-pod, its container has no mount associated with the target name, and I see the following for the volume that should be a PersistentVolumeClaim:

Volumes:
  vulcan-cache:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>

What you expected to happen:
I expect the pod to be created with

Volumes:
  vulcan-cache:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  vulcan-cache-claim
    ReadOnly:   false

and its container to have

   Mounts:
      /running from vulcan-cache (rw)

I was able to produce this successfully by spinning the pod up from an equivalent yml file using kubectl directly.

How to reproduce it (as minimally and precisely as possible):

  1. Set up a persistent volume and persistent volume claim. Create yml files containing the following:

Persistent volume config: (Adjust 'path' as needed to something that exists on your k8s node)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: vulcan-cache
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 10Gi
  hostPath:
    path: "/home"
    type: Directory

Persistent Volume Claim config:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vulcan-cache-claim
  namespace: default
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  volumeName: vulcan-cache

Then run kubectl create -f path/to/each/config.yml for both files.

  2. Use the kubernetes client to try to spin up a pod from this manifest (assume kub_cli is an already-configured CoreV1Api client):
pod_manifest = {
    'apiVersion': 'v1',
    'kind': 'Pod',
    'metadata': {
        'name': "test-pod",
        'namespace': 'default',
    },
    'spec': {
        "volumes": [{
            "name": "vulcan-cache",
            "persistent_volume_claim": {"claim_name": "vulcan-cache-claim"},
            }],
        'containers': [{
            'name': 'test-container',
            'image': 'nginx',
            'image_pull_policy': 'IfNotPresent',
            "args": 
                ["ls", "/running"],
            "volume_mounts":  [{
                "name": "vulcan-cache",
                "mount_path": "/running",
                }]
        }],
    }
}
kub_cli.create_namespaced_pod(body=pod_manifest, namespace='default')

Anything else we need to know?:

I am fairly new to working with Kubernetes, but I believe this is the intended way to mount persistent storage into pod containers: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#claims-as-volumes

Environment:

  • Kubernetes version (kubectl version):
Client Version: v1.29.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.3
  • OS (e.g., MacOS 10.13.6): Linux, Pop!_OS 22.04
  • Python version (python --version): 3.10.12 and 3.8.17. (MRE tested directly only with 3.10.12)
  • Python client version (pip list | grep kubernetes): 29.0.0
@dtm2451 dtm2451 added the kind/bug Categorizes issue or PR as related to a bug. label Apr 8, 2024
@showjason
Contributor

/assign

@showjason
Contributor

I will try to reproduce the issue and find the root cause.

@showjason
Contributor

showjason commented Apr 11, 2024

After debugging, it looks like the function sanitize_for_serialization can't serialize the pod_manifest properly in this case. For example, the field persistent_volume_claim isn't mapped to persistentVolumeClaim, so Kubernetes doesn't recognize it and the volume type defaults to EmptyDir.
I will investigate further to figure it out.

Alternatively, there is another approach you can try: use the V1Volume and V1PersistentVolumeClaim ... model classes to compose the pod you want, similar to this example.
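
For reference, here is a rough, untested sketch of that alternative (assuming kub_cli is a configured CoreV1Api client, as in your reproduction):

from kubernetes import client

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="test-pod", namespace="default"),
    spec=client.V1PodSpec(
        volumes=[client.V1Volume(
            name="vulcan-cache",
            # snake_case attributes are fine on model objects; each model's
            # attribute_map converts them to camelCase during serialization
            persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                claim_name="vulcan-cache-claim"))],
        containers=[client.V1Container(
            name="test-container",
            image="nginx",
            image_pull_policy="IfNotPresent",
            args=["ls", "/running"],
            volume_mounts=[client.V1VolumeMount(
                name="vulcan-cache",
                mount_path="/running")])]))

kub_cli.create_namespaced_pod(body=pod, namespace="default")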

@showjason
Contributor

@dtm2451, you need to modify the pod_manifest: all of the snake_case fields must be converted to camelCase, e.g. persistent_volume_claim => persistentVolumeClaim (see the corrected fragment below).
Or you can choose the alternative I mentioned in my former comment:

Alternatively, there is another approach you can try: use the V1Volume and V1PersistentVolumeClaim ... model classes to compose the pod you want, similar to this example.
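
For example, the volume-related parts of your pod_manifest would become something like:

"volumes": [{
    "name": "vulcan-cache",
    "persistentVolumeClaim": {"claimName": "vulcan-cache-claim"},
    }],
...
"volumeMounts": [{
    "name": "vulcan-cache",
    "mountPath": "/running",
    }]

(and image_pull_policy likewise becomes imagePullPolicy).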

@dtm2451
Author

dtm2451 commented Apr 12, 2024

Oh oh! Thank you so much for your time investigating.

Sounds like this is user error on my side then, but an added warning would be nice! I didn't catch that I'd left this portion of my manifest in snake_case rather than camelCase, since snake_case is clearly the way all key names work in the python client! Some warning when elements are skipped due to such a conversion failure would be VERY nice!

@dtm2451
Author

dtm2451 commented Apr 12, 2024

Wait actually, I responded too quickly there.

My understanding of the python client is that fields are designed around snake_case conversions of what one would normally provide in camelCase directly to kubectl. That is what I built towards here -- exactly the path you point towards in:

Alternatively, there is another approach you can try: use the V1Volume and V1PersistentVolumeClaim ... model classes to compose the pod you want, similar to this example.

So it does still seem like a bug in the client to me if the snake_case version of persistent_volume_claim is incorrect here!

FWIW, and contrary to my understanding of the documentation (though perhaps my understanding is what's wrong?), when I swap the entirety of my pod_manifest to camelCase (not the seemingly intended snake_case), I can produce the pod I want from kub_cli.create_namespaced_pod(body=pod_manifest, namespace='default').

@dtm2451
Author

dtm2451 commented Apr 12, 2024

For example, in what I understand to be the documentation of how to define a V1Volume for the python client, the field "persistent_volume_claim" (not "persistentVolumeClaim") is typed as V1PersistentVolumeClaimVolumeSource, and following that link we also find "claim_name" and "read_only" fields (not "claimName" and "readOnly").

@showjason
Contributor

@dtm2451, I couldn't agree with you more that the fields are designed around snake_case. In this case, the difference is the type of the request body: if the body is plain JSON (e.g. a hard-coded dict), the python client passes it through as kubectl would, and it isn't reasonable for the client to rewrite that JSON. If the body is a Kubernetes resource object instantiated via the client's model classes, then supporting snake_case does make sense. I hope my understanding answers your question!
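
A rough sketch of the difference from my debugging, calling sanitize_for_serialization directly (outputs abbreviated):

from kubernetes.client import (ApiClient, V1Volume,
                               V1PersistentVolumeClaimVolumeSource)

api = ApiClient()

# A plain dict is passed through key-for-key; snake_case keys are NOT renamed,
# so the server receives them verbatim and ignores the unknown fields:
api.sanitize_for_serialization(
    {"persistent_volume_claim": {"claim_name": "vulcan-cache-claim"}})
# -> {'persistent_volume_claim': {'claim_name': 'vulcan-cache-claim'}}

# A model object is serialized via its attribute_map, producing camelCase keys:
api.sanitize_for_serialization(V1Volume(
    name="vulcan-cache",
    persistent_volume_claim=V1PersistentVolumeClaimVolumeSource(
        claim_name="vulcan-cache-claim")))
# -> {'name': 'vulcan-cache',
#     'persistentVolumeClaim': {'claimName': 'vulcan-cache-claim'}}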

@dtm2451
Author

dtm2451 commented Apr 18, 2024

I'm not sure I quite follow the logic behind

for this kind of cases, it's not reasonable to modify json via python client

fully. Specifically because the case here is a python dict, which is of course similar to JSON yet fully python native. I suppose I'm simply curious for more detail on why it becomes unreasonable for the client to parse and modify it. Is there a specific function that I should be passing the pod_manifest dict through before handing it to create_namespaced_pod, perhaps?
