k8s_drain erroneously deletes daemonset when both delete options delete_emptydir_data and ignore_daemonsets are set to true #622

Open
MrWolfZ opened this issue May 19, 2023 · 0 comments
Labels
type/bug (Something isn't working), verified (The issue is reproduced)

Comments


MrWolfZ commented May 19, 2023

SUMMARY

When draining a node with both delete_emptydir_data and ignore_daemonsets set to true, the k8s_drain module deletes daemonset-managed pods even though ignore_daemonsets should exclude them.
ISSUE TYPE
  • Bug Report
COMPONENT NAME

kubernetes.core.k8s_drain

ANSIBLE VERSION
ansible [core 2.13.9]
  configured module search path = ['/home/dev/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/dev/.local/lib/python3.8/site-packages/ansible
  ansible collection location = /home/dev/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/bin/ansible
  python version = 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0]
  jinja version = 3.1.2
  libyaml = True
COLLECTION VERSION
Collection      Version
--------------- -------
kubernetes.core 2.3.2
CONFIGURATION

OS / ENVIRONMENT

Ubuntu 20.04.5 LTS
Kubernetes 1.27.1

STEPS TO REPRODUCE
  • create a daemonset with an emptyDir volume (a minimal example manifest is shown after the task below)
  • use the k8s_drain module as shown below to drain a node that is running an instance of the daemonset
    - name: drain node {{ node }}
      kubernetes.core.k8s_drain:
        state: drain
        name: '{{ node }}'
        delete_options:
          delete_emptydir_data: true
          ignore_daemonsets: true
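
For reference, a minimal daemonset manifest of the kind described in the first step might look like the following (the name and pause image are placeholders, not taken from the original report):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: emptydir-demo            # placeholder name
    spec:
      selector:
        matchLabels:
          app: emptydir-demo
      template:
        metadata:
          labels:
            app: emptydir-demo
        spec:
          containers:
            - name: pause
              image: registry.k8s.io/pause:3.9   # any image works
              volumeMounts:
                - name: scratch
                  mountPath: /scratch
          volumes:
            - name: scratch
              emptyDir: {}           # the volume type that triggers the bug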
EXPECTED RESULTS

Since the ignore_daemonsets option is set to true, the daemonset-managed pod should not be deleted (this matches the behaviour of kubectl drain <node> --ignore-daemonsets --delete-emptydir-data).

ACTUAL RESULTS

The daemonset-managed pod is erroneously deleted.

Looking at the module's code, this happens because the two conditions are evaluated independently: any pod with an emptyDir volume is deleted regardless of the other flags (e.g. even unmanaged pods are deleted when force is set to false). In my understanding (and seemingly how kubectl behaves), the delete_emptydir_data option should only filter the list of pods that the other criteria have already deemed suitable for deletion.
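
To illustrate, here is a minimal sketch of the filtering semantics I would expect (hypothetical code, not the actual module implementation; all names are illustrative):

    class DrainError(Exception):
        pass

    def is_daemonset_pod(pod):
        # pod is a dict as returned by the Kubernetes API
        refs = pod.get("metadata", {}).get("ownerReferences") or []
        return any(r.get("controller") and r.get("kind") == "DaemonSet" for r in refs)

    def is_unmanaged(pod):
        refs = pod.get("metadata", {}).get("ownerReferences") or []
        return not any(r.get("controller") for r in refs)

    def has_emptydir(pod):
        vols = pod.get("spec", {}).get("volumes") or []
        return any("emptyDir" in v for v in vols)

    def should_evict(pod, *, ignore_daemonsets, delete_emptydir_data, force):
        # Ownership checks come first, mirroring kubectl drain.
        if is_daemonset_pod(pod):
            if ignore_daemonsets:
                return False  # skip daemonset pods entirely
            raise DrainError("node has daemonset-managed pods (set ignore_daemonsets)")
        if is_unmanaged(pod) and not force:
            raise DrainError("node has unmanaged pods (set force)")
        # Only pods that passed the checks above are gated on emptyDir usage;
        # the bug is that this check currently runs independently of them.
        if has_emptydir(pod) and not delete_emptydir_data:
            raise DrainError("pod uses emptyDir storage (set delete_emptydir_data)")
        return True

With this ordering, ignore_daemonsets short-circuits before the emptyDir check is ever reached, which matches the kubectl behaviour described above.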

@gravesm added the type/bug, verified and jira labels May 25, 2023
@gravesm removed the jira label May 20, 2024