
containerd: SElinux Relabelling broken for Kubernetes volume mounts #1138

Closed
log1cb0mb opened this issue Mar 22, 2022 · 24 comments

@log1cb0mb

log1cb0mb commented Mar 22, 2022

Issue:
The default SELinux permissions seem to have changed between the previous release v35.20220213.3.0 and v35.20220227.3.0.
Kubelet is denied read/write access under the default /var/lib/kubelet directory. The issue shows up specifically for container volume mounts under /var/lib/kubelet/pods, which is labeled system_u:object_r:var_lib_t:s0 by default; with the latest release, container_t is no longer allowed to access var_lib_t.

Reference audit logs for coredns pod:
Mar 22 12:28:49 dc1-worker1.k8s.lab audit[4455]: AVC avc: denied { read } for pid=4455 comm="coredns" name="Corefile" dev="vda4" ino=9438676 scontext=system_u:system_r:container_t:s0:c666,c775 tcontext=system_u:object_r:var_lib_t:s0 tclass=lnk_file permissive=0

Reproduction steps
Steps to reproduce the behavior:

  1. Fresh FCOS v35.20220227.3.0
  2. /var/lib/kubelet with its default SELinux context labels (not relabeled, which obviously fixes the issue)
  3. Spin up a pod that uses volume mounts (e.g. ConfigMap-based, etc.)

Expected behavior
Kubelet is able to create the required volume mounts and any other content under /var/lib/kubelet/*. Additionally, with the SELinux relabeling feature in use, kubelet should not only be able to create content under that directory but also relabel, for example, a volume mount with the respective container's SELinux context labels.

Actual behavior
container_t processes are no longer allowed to access var_lib_t-labeled directories/files.

System details

  • Bare Metal/QEMU
  • v35.20220227.3.0 - 5.16.13-200.fc35.x86_64

Additional information
I already tested the same scenario on the previous v35.20220213.3.0 release, and the expected behavior is observed there. So far I have been unable to find any reference indicating that this is intended behavior or that the default permissions were knowingly changed. Could it be that the default policy has been tightened, removing this access, and now requires proper relabeling or an explicit SELinux policy?

Note: the issue can also be fixed by restoring context labels (sudo restorecon -R -v /var/lib/kubelet/), or, if kubelet runs as a container, by mounting /var/lib/kubelet with the :z flag to relabel the whole directory. The latter is not ideal from a security point of view and is a particularly bad idea for large persistent volumes.
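For reference, the two workarounds just mentioned can be sketched as a small script. This is a minimal sketch only: the path is the default from this report, the guard makes it a harmless no-op on hosts without SELinux tooling, and the podman invocation is illustrative.

```shell
# Sketch of the two workarounds above; assumes an SELinux-enabled host.
kubelet_dir="/var/lib/kubelet"

if command -v restorecon >/dev/null 2>&1 && [ -d /sys/fs/selinux ]; then
  # Option 1: restore the policy's default labels recursively.
  sudo -n restorecon -R -v "$kubelet_dir" || echo "restorecon needs root"
else
  echo "no SELinux tooling here; nothing to do"
fi

# Option 2 (kubelet running as a container): mount with :z so the runtime
# relabels the whole tree -- as noted, a bad idea for large persistent volumes:
#   podman run -v /var/lib/kubelet:/var/lib/kubelet:z ... <kubelet-image>
```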

@dustymabe (Member)

Thanks for the detailed report! Just noting down here the differences in package set between 35.20220213.3.0 and 35.20220227.3.0 for now:

ostree diff commit from: fedora/x86_64/coreos/stable^ (16afaa9af694537aa00cb3e259eeca549908696dbc8d7d548f1f49cf2d0f693e)
ostree diff commit to:   fedora/x86_64/coreos/stable (60d938261803ef4f9d927e774db51a96c75073eff81744ee5f528ab33f985ecd)
Upgraded:                                 
  audit-libs 3.0.7-1.fc35 -> 3.0.7-2.fc35 
  bsdtar 3.5.2-2.fc35 -> 3.5.3-1.fc35                                                   
  btrfs-progs 5.16.1-1.fc35 -> 5.16.2-1.fc35 
  container-selinux 2:2.173.1-1.fc35 -> 2:2.177.0-1.fc35
  containerd 1.5.8-1.fc35 -> 1.6.0-1.fc35                                               
  containers-common 4:1-41.fc35 -> 4:1-45.fc35
  expat 2.4.3-1.fc35 -> 2.4.4-1.fc35       
  flatpak-session-helper 1.12.4-1.fc35 -> 1.12.5-1.fc35
  git-core 2.34.1-1.fc35 -> 2.35.1-1.fc35                                               
  glib2 2.70.3-2.fc35 -> 2.70.4-1.fc35                                                  
  gnutls 3.7.2-2.fc35 -> 3.7.2-3.fc35
  kernel 5.15.18-200.fc35 -> 5.16.13-200.fc35
  kernel-core 5.15.18-200.fc35 -> 5.16.13-200.fc35
  kernel-modules 5.15.18-200.fc35 -> 5.16.13-200.fc35
  libarchive 3.5.2-2.fc35 -> 3.5.3-1.fc35
  libblkid 2.37.3-1.fc35 -> 2.37.4-1.fc35                                                                                                                                        
  libfdisk 2.37.3-1.fc35 -> 2.37.4-1.fc35                                                                                                                                        
  libibverbs 38.1-2.fc35 -> 39.0-1.fc35
  libmount 2.37.3-1.fc35 -> 2.37.4-1.fc35
  libreport-filesystem 2.15.2-6.fc35 -> 2.15.2-7.fc35
  libsmartcols 2.37.3-1.fc35 -> 2.37.4-1.fc35       
  libuuid 2.37.3-1.fc35 -> 2.37.4-1.fc35
  libxml2 2.9.12-6.fc35 -> 2.9.13-1.fc35 
  linux-firmware 20211216-127.fc35 -> 20220209-129.fc35
  linux-firmware-whence 20211216-127.fc35 -> 20220209-129.fc35
  polkit 0.120-1.fc35.1 -> 0.120-1.fc35.2                                               
  polkit-libs 0.120-1.fc35.1 -> 0.120-1.fc35.2
  selinux-policy 35.13-1.fc35 -> 35.15-1.fc35
  selinux-policy-targeted 35.13-1.fc35 -> 35.15-1.fc35
  util-linux 2.37.3-1.fc35 -> 2.37.4-1.fc35
  util-linux-core 2.37.3-1.fc35 -> 2.37.4-1.fc35
  vim-data 2:8.2.4314-1.fc35 -> 2:8.2.4460-1.fc35
  vim-minimal 2:8.2.4314-1.fc35 -> 2:8.2.4460-1.fc35
Added:                                     
  libtool-ltdl-2.4.6-42.fc35.x86_64    

@dustymabe (Member)

dustymabe commented Mar 22, 2022

In the container-selinux upstream repo I see:

$ git log --oneline v2.173.1^..v2.177.0
e9ec0d4 (tag: v2.177.0) Allow userdomains to execute conmon_exec_t and use it as an entrypoint
82be248 (tag: v2.176.0) Allow conmon_exec_t as an entrypoint
b3e56e2 (tag: v2.175.0) Add boolean to allow containers to use any device
95e524a (tag: v2.174.0) Bump to v2.174.0
ef33309 Merge pull request #166 from 0xC0ncord/conmon-ranged-transition
662890a Add explicit range transition for conmon
a31e3e6 (tag: v2.173.2) Update package for new file context
b7c56fc Merge pull request #165 from fire833/main
cf3da79 Update file labeling type for /var/lib/kubelet
0ea4477 (tag: v2.173.1) Bump version to handle fixes in interface

So maybe containers/container-selinux@cf3da79 is the cause for the change in behavior. I'll have to dig in more tomorrow.

Two other notes:

  • on your end, running some testing-stream nodes would help find issues like this sooner
  • on our end, we need to re-enable the k8s tests.

@log1cb0mb (Author)

on your end running testing nodes will help find stuff like this sooner

I can arrange something like that, but hopefully a QEMU-based virtual lab should suffice?

@dustymabe (Member)

dustymabe commented Mar 23, 2022

on your end running testing nodes will help find stuff like this sooner

I can arrange something like that, but hopefully a QEMU-based virtual lab should suffice?

It should :) - obviously a test environment as close to your production environment as possible will give you the best results, but in the real world that can be hard to come by. In general, a virtual lab should give you a good amount of coverage.

@dustymabe (Member)

hey @log1cb0mb. I'm trying to figure out what is going on. I started up a node on 35.20220213.3.0 and one on 35.20220227.3.0 and did some inspections. Here are the commands I'm running:

rpm-ostree status
matchpathcon /var/lib/kubelet /var/lib/kubelet/pods
sudo mkdir -p /var/lib/kubelet/pods
sudo touch /var/lib/kubelet/pods/foofile
sudo ls -ldZ /var/lib/kubelet/ /var/lib/kubelet/pods
sudo ls -lZ /var/lib/kubelet/pods/foofile
sudo runcon system_u:system_r:container_t:s0 stat /var/lib/kubelet /var/lib/kubelet/pods /var/lib/kubelet/pods/foofile
sudo restorecon -vR /var/lib/kubelet/
sudo ls -ldZ /var/lib/kubelet/ /var/lib/kubelet/pods
sudo ls -lZ /var/lib/kubelet/pods/foofile
sudo runcon system_u:system_r:container_t:s0 stat /var/lib/kubelet /var/lib/kubelet/pods /var/lib/kubelet/pods/foofile
[core@cosa-devsh ~]$ rpm-ostree status
State: idle
Deployments:
* fedora:fedora/x86_64/coreos/stable
                   Version: 35.20220213.3.0 (2022-03-01T01:45:12Z)
                    Commit: 16afaa9af694537aa00cb3e259eeca549908696dbc8d7d548f1f49cf2d0f693e
              GPGSignature: Valid signature by 787EA6AE1147EEE56C40B30CDB4639719867C58F
[core@cosa-devsh ~]$ 
[core@cosa-devsh ~]$ matchpathcon /var/lib/kubelet /var/lib/kubelet/pods
/var/lib/kubelet        system_u:object_r:var_lib_t:s0
/var/lib/kubelet/pods   system_u:object_r:container_file_t:s0
[core@cosa-devsh ~]$ 
[core@cosa-devsh ~]$ sudo mkdir -p /var/lib/kubelet/pods
sudo touch /var/lib/kubelet/pods/foofile
sudo ls -ldZ /var/lib/kubelet/ /var/lib/kubelet/pods
drwxr-xr-x. 3 root root unconfined_u:object_r:var_lib_t:s0 18 Apr  1 20:21 /var/lib/kubelet/
drwxr-xr-x. 2 root root unconfined_u:object_r:var_lib_t:s0 21 Apr  1 20:21 /var/lib/kubelet/pods
[core@cosa-devsh ~]$ 
[core@cosa-devsh ~]$ sudo ls -lZ /var/lib/kubelet/pods/foofile
-rw-r--r--. 1 root root unconfined_u:object_r:var_lib_t:s0 0 Apr  1 20:21 /var/lib/kubelet/pods/foofile
[core@cosa-devsh ~]$ 
[core@cosa-devsh ~]$ sudo runcon system_u:system_r:container_t:s0 stat /var/lib/kubelet /var/lib/kubelet/pods /var/lib/kubelet/pods/foofile
  File: /var/lib/kubelet
  Size: 18              Blocks: 0          IO Block: 4096   directory
Device: fc04h/64516d    Inode: 11534470    Links: 3
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Context: unconfined_u:object_r:var_lib_t:s0
Access: 2022-04-01 20:21:52.315681725 +0000
Modify: 2022-04-01 20:21:52.315681725 +0000
Change: 2022-04-01 20:21:52.315681725 +0000
 Birth: 2022-04-01 20:21:52.315681725 +0000
  File: /var/lib/kubelet/pods
  Size: 21              Blocks: 0          IO Block: 4096   directory
Device: fc04h/64516d    Inode: 12583046    Links: 2
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Context: unconfined_u:object_r:var_lib_t:s0
Access: 2022-04-01 20:21:52.315681725 +0000
Modify: 2022-04-01 20:21:52.329681725 +0000
Change: 2022-04-01 20:21:52.329681725 +0000
 Birth: 2022-04-01 20:21:52.315681725 +0000
stat: cannot statx '/var/lib/kubelet/pods/foofile': Permission denied
[core@cosa-devsh ~]$ sudo restorecon -vR /var/lib/kubelet/
Relabeled /var/lib/kubelet/pods from unconfined_u:object_r:var_lib_t:s0 to unconfined_u:object_r:container_file_t:s0
Relabeled /var/lib/kubelet/pods/foofile from unconfined_u:object_r:var_lib_t:s0 to unconfined_u:object_r:container_file_t:s0
[core@cosa-devsh ~]$ 
[core@cosa-devsh ~]$ sudo ls -ldZ /var/lib/kubelet/ /var/lib/kubelet/pods
drwxr-xr-x. 3 root root unconfined_u:object_r:var_lib_t:s0        18 Apr  1 20:21 /var/lib/kubelet/
drwxr-xr-x. 2 root root unconfined_u:object_r:container_file_t:s0 21 Apr  1 20:21 /var/lib/kubelet/pods
[core@cosa-devsh ~]$ 
[core@cosa-devsh ~]$ sudo ls -lZ /var/lib/kubelet/pods/foofile
-rw-r--r--. 1 root root unconfined_u:object_r:container_file_t:s0 0 Apr  1 20:21 /var/lib/kubelet/pods/foofile
[core@cosa-devsh ~]$ 
[core@cosa-devsh ~]$ sudo runcon system_u:system_r:container_t:s0 stat /var/lib/kubelet /var/lib/kubelet/pods /var/lib/kubelet/pods/foofile
  File: /var/lib/kubelet
  Size: 18              Blocks: 0          IO Block: 4096   directory
Device: fc04h/64516d    Inode: 11534470    Links: 3
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Context: unconfined_u:object_r:var_lib_t:s0
Access: 2022-04-01 20:22:09.362681886 +0000
Modify: 2022-04-01 20:21:52.315681725 +0000
Change: 2022-04-01 20:21:52.315681725 +0000
 Birth: 2022-04-01 20:21:52.315681725 +0000
  File: /var/lib/kubelet/pods
  Size: 21              Blocks: 0          IO Block: 4096   directory
Device: fc04h/64516d    Inode: 12583046    Links: 2
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Context: unconfined_u:object_r:container_file_t:s0
Access: 2022-04-01 20:22:20.568682006 +0000
Modify: 2022-04-01 20:21:52.329681725 +0000
Change: 2022-04-01 20:22:09.362681886 +0000
 Birth: 2022-04-01 20:21:52.315681725 +0000
  File: /var/lib/kubelet/pods/foofile
  Size: 0               Blocks: 0          IO Block: 4096   regular empty file
Device: fc04h/64516d    Inode: 12583047    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: unconfined_u:object_r:container_file_t:s0
Access: 2022-04-01 20:21:52.329681725 +0000
Modify: 2022-04-01 20:21:52.329681725 +0000
Change: 2022-04-01 20:22:09.363681886 +0000
 Birth: 2022-04-01 20:21:52.329681725 +0000
[core@cosa-devsh ~]$ rpm-ostree status
State: idle
Deployments:
* fedora:fedora/x86_64/coreos/stable
                   Version: 35.20220227.3.0 (2022-03-14T19:06:29Z)
                    Commit: 60d938261803ef4f9d927e774db51a96c75073eff81744ee5f528ab33f985ecd
              GPGSignature: Valid signature by 787EA6AE1147EEE56C40B30CDB4639719867C58F
[core@cosa-devsh ~]$ matchpathcon /var/lib/kubelet /var/lib/kubelet/pods
/var/lib/kubelet        system_u:object_r:container_var_lib_t:s0
/var/lib/kubelet/pods   system_u:object_r:container_file_t:s0
[core@cosa-devsh ~]$ 
[core@cosa-devsh ~]$ sudo mkdir -p /var/lib/kubelet/pods
[core@cosa-devsh ~]$ sudo touch /var/lib/kubelet/pods/foofile
[core@cosa-devsh ~]$ sudo ls -ldZ /var/lib/kubelet/ /var/lib/kubelet/pods
drwxr-xr-x. 3 root root unconfined_u:object_r:var_lib_t:s0 18 Apr  1 20:24 /var/lib/kubelet/
drwxr-xr-x. 2 root root unconfined_u:object_r:var_lib_t:s0 21 Apr  1 20:25 /var/lib/kubelet/pods
[core@cosa-devsh ~]$ 
[core@cosa-devsh ~]$ sudo ls -lZ /var/lib/kubelet/pods/foofile
-rw-r--r--. 1 root root unconfined_u:object_r:var_lib_t:s0 0 Apr  1 20:25 /var/lib/kubelet/pods/foofile
[core@cosa-devsh ~]$ 
[core@cosa-devsh ~]$ sudo runcon system_u:system_r:container_t:s0 stat /var/lib/kubelet /var/lib/kubelet/pods /var/lib/kubelet/pods/foofile
  File: /var/lib/kubelet
  Size: 18              Blocks: 0          IO Block: 4096   directory
Device: fc04h/64516d    Inode: 12583046    Links: 3
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Context: unconfined_u:object_r:var_lib_t:s0
Access: 2022-04-01 20:24:58.398718406 +0000
Modify: 2022-04-01 20:24:58.398718406 +0000
Change: 2022-04-01 20:24:58.398718406 +0000
 Birth: 2022-04-01 20:24:58.398718406 +0000
  File: /var/lib/kubelet/pods
  Size: 21              Blocks: 0          IO Block: 4096   directory
Device: fc04h/64516d    Inode: 13631622    Links: 2
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Context: unconfined_u:object_r:var_lib_t:s0
Access: 2022-04-01 20:24:58.398718406 +0000
Modify: 2022-04-01 20:25:00.780718406 +0000
Change: 2022-04-01 20:25:00.780718406 +0000
 Birth: 2022-04-01 20:24:58.398718406 +0000
stat: cannot statx '/var/lib/kubelet/pods/foofile': Permission denied
[core@cosa-devsh ~]$ 
[core@cosa-devsh ~]$ sudo restorecon -vR /var/lib/kubelet/
Relabeled /var/lib/kubelet from unconfined_u:object_r:var_lib_t:s0 to unconfined_u:object_r:container_var_lib_t:s0
Relabeled /var/lib/kubelet/pods from unconfined_u:object_r:var_lib_t:s0 to unconfined_u:object_r:container_file_t:s0
Relabeled /var/lib/kubelet/pods/foofile from unconfined_u:object_r:var_lib_t:s0 to unconfined_u:object_r:container_file_t:s0
[core@cosa-devsh ~]$ 
[core@cosa-devsh ~]$ sudo ls -ldZ /var/lib/kubelet/ /var/lib/kubelet/pods
drwxr-xr-x. 3 root root unconfined_u:object_r:container_var_lib_t:s0 18 Apr  1 20:24 /var/lib/kubelet/
drwxr-xr-x. 2 root root unconfined_u:object_r:container_file_t:s0    21 Apr  1 20:25 /var/lib/kubelet/pods
[core@cosa-devsh ~]$ 
[core@cosa-devsh ~]$ sudo ls -lZ /var/lib/kubelet/pods/foofile
-rw-r--r--. 1 root root unconfined_u:object_r:container_file_t:s0 0 Apr  1 20:25 /var/lib/kubelet/pods/foofile
[core@cosa-devsh ~]$ 
[core@cosa-devsh ~]$ sudo runcon system_u:system_r:container_t:s0 stat /var/lib/kubelet /var/lib/kubelet/pods /var/lib/kubelet/pods/foofile
  File: /var/lib/kubelet
  Size: 18              Blocks: 0          IO Block: 4096   directory
Device: fc04h/64516d    Inode: 12583046    Links: 3
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Context: unconfined_u:object_r:container_var_lib_t:s0
Access: 2022-04-01 20:25:14.558718406 +0000
Modify: 2022-04-01 20:24:58.398718406 +0000
Change: 2022-04-01 20:25:14.558718406 +0000
 Birth: 2022-04-01 20:24:58.398718406 +0000
  File: /var/lib/kubelet/pods
  Size: 21              Blocks: 0          IO Block: 4096   directory
Device: fc04h/64516d    Inode: 13631622    Links: 2
Access: (0755/drwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Context: unconfined_u:object_r:container_file_t:s0
Access: 2022-04-01 20:25:14.558718406 +0000
Modify: 2022-04-01 20:25:00.780718406 +0000
Change: 2022-04-01 20:25:14.558718406 +0000
 Birth: 2022-04-01 20:24:58.398718406 +0000
  File: /var/lib/kubelet/pods/foofile
  Size: 0               Blocks: 0          IO Block: 4096   regular empty file
Device: fc04h/64516d    Inode: 13631623    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: unconfined_u:object_r:container_file_t:s0
Access: 2022-04-01 20:25:00.780718406 +0000
Modify: 2022-04-01 20:25:00.780718406 +0000
Change: 2022-04-01 20:25:14.560718406 +0000
 Birth: 2022-04-01 20:25:00.780718406 +0000

In each case the files were dumbly created using mkdir and touch, which IIUC just creates files with the same context as their parent directory. The initial runcon fails on foofile, but after the labels are fixed it succeeds. The only change in policy between the two machines with respect to these files that I can see is the container-selinux change mentioned in #1138 (comment).

If you can reliably reproduce the issue (i.e. install a node, confirm it works, upgrade, observe the failure plus AVC denials), it might be worth inspecting the Corefile under /var/lib/kubelet/ before and after the upgrade to see what the SELinux contexts are.

Also - it's always a plus if we can boil this down to a reproducer that doesn't require a Kubernetes cluster (a single container, just file manipulation).
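A sketch of such a reproducer, based on the runcon/stat session above. Assumptions: root on an SELinux-enforcing FCOS node; the guard makes it a no-op elsewhere, and whether the first stat is denied depends on which policy version is installed.

```shell
# Minimal non-Kubernetes reproducer sketch, derived from the stat session above.
repro() {
  if [ -d /sys/fs/selinux ] && [ "$(id -u)" = "0" ]; then
    mkdir -p /var/lib/kubelet/pods
    touch /var/lib/kubelet/pods/foofile  # inherits the parent's var_lib_t label

    # Expected: denied on 35.20220227.3.0, allowed on 35.20220213.3.0.
    runcon system_u:system_r:container_t:s0 \
      stat /var/lib/kubelet/pods/foofile >/dev/null 2>&1 \
      && echo "allowed" || echo "denied"

    # After restorecon, the same stat should succeed on both releases.
    restorecon -R /var/lib/kubelet/
    runcon system_u:system_r:container_t:s0 \
      stat /var/lib/kubelet/pods/foofile >/dev/null 2>&1 \
      && echo "allowed after relabel"
  else
    echo "skipping: needs root on an SELinux host"
  fi
}
repro
```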

@log1cb0mb (Author)

log1cb0mb commented Apr 3, 2022

@dustymabe I think I have found the difference between the two versions and their context labels.
I set up a Kubernetes cluster with two worker nodes: one on 35.20220213.3.0 and the other on 35.20220227.3.0. The nodes are set up exactly the same way: fresh installs, with Kubernetes bootstrapped using the same config.

Working node:

[root@dc1-worker1 kubelet]# sudo rpm-ostree status
State: idle
Deployments:
* fedora:fedora/x86_64/coreos/stable
                   Version: 35.20220213.3.0 (2022-03-01T01:45:12Z)
                BaseCommit: 16afaa9af694537aa00cb3e259eeca549908696dbc8d7d548f1f49cf2d0f693e
              GPGSignature: Valid signature by 787EA6AE1147EEE56C40B30CDB4639719867C58F
           LayeredPackages: conntrack-tools python python3-libselinux

# coredns pod volume mount:

[root@dc1-worker1 kubelet]# ls -alZ pods/a1feb462-dfdd-40e2-942a-f92f3ac99917/volumes/kubernetes.io~configmap/config/
total 0
drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c584,c689 76 Apr  3 10:47 .
drwxr-xr-x. 3 root root system_u:object_r:var_lib_t:s0                  20 Apr  3 10:47 ..
drwxr-xr-x. 2 root root system_u:object_r:container_file_t:s0:c584,c689 22 Apr  3 10:47 ..2022_04_03_10_47_20.3644846533
lrwxrwxrwx. 1 root root system_u:object_r:container_file_t:s0:c584,c689 32 Apr  3 10:47 ..data -> ..2022_04_03_10_47_20.3644846533
lrwxrwxrwx. 1 root root system_u:object_r:container_file_t:s0:c584,c689 15 Apr  3 10:47 Corefile -> ..data/Corefile

Non-working node:

[root@dc2-worker1 kubelet]# sudo rpm-ostree status
State: idle
Deployments:
* fedora:fedora/x86_64/coreos/stable
                   Version: 35.20220227.3.0 (2022-03-14T19:06:29Z)
                BaseCommit: 60d938261803ef4f9d927e774db51a96c75073eff81744ee5f528ab33f985ecd
              GPGSignature: Valid signature by 787EA6AE1147EEE56C40B30CDB4639719867C58F
           LayeredPackages: conntrack-tools python python3-libselinux

# coredns pod volume mount:

[root@dc2-worker1 kubelet]# ls -alZ pods/4e118a5b-c54b-42fb-853b-088a6fd7e0dd/volumes/kubernetes.io~configmap/config/
total 0
drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c134,c755 76 Apr  3 10:47 .
drwxr-xr-x. 3 root root system_u:object_r:var_lib_t:s0                  20 Apr  3 10:47 ..
drwxr-xr-x. 2 root root system_u:object_r:container_file_t:s0:c134,c755 22 Apr  3 10:47 ..2022_04_03_10_47_20.1642956151
lrwxrwxrwx. 1 root root system_u:object_r:var_lib_t:s0                  32 Apr  3 10:47 ..data -> ..2022_04_03_10_47_20.1642956151
lrwxrwxrwx. 1 root root system_u:object_r:var_lib_t:s0                  15 Apr  3 10:47 Corefile -> ..data/Corefile

# audit log:
Apr 03 10:50:20 dc2-worker1.k8s.lab audit[16097]: AVC avc:  denied  { read } for  pid=16097 comm="coredns" name="Corefile" dev="vda4" ino=67109200 scontext=system_u:system_r:container_t:s0:c134,c755 tcontext=system_u:object_r:var_lib_t:s0 tclass=lnk_file permissive=0 

As can be seen, Corefile is not labeled with the container-specific context, which explains why the container is unable to access the file.

It seems this is not exactly a CoreOS SELinux policy change issue, but rather kubelet (or the runtime) not labeling directories/files correctly.

Note that the coredns pods are replicas of the same deployment, so there are no differences there.

Based on inspecting the actual container, selinux_relabel is in fact set:

      "mounts": [
        {
          "container_path": "/etc/coredns",
          "host_path": "/var/lib/kubelet/pods/4e118a5b-c54b-42fb-853b-088a6fd7e0dd/volumes/kubernetes.io~configmap/config",
          "readonly": true,
          "selinux_relabel": true
        },

Now I am not sure why the relabel flag is not working. I suspect this has something to do with containerd, since ultimately it is the container runtime that performs the relabeling based on information passed by kubelet. And yes, there is a version change between the CoreOS releases: 1.5.8 to 1.6.0.
Could this be related to GHSA-c9cp-9c75-9v8c?

UPDATE: tried 1.6.2; it did not help, though I am not entirely sure I got containerd updated correctly.

@log1cb0mb (Author)

All right, confirmed: it is indeed containerd 1.6.0. Tested with containerd 1.5.8 on 35.20220227.3.0, and Corefile is correctly labeled:

[root@dc2-worker1 ~]# sudo rpm-ostree status
State: idle
Deployments:
* fedora:fedora/x86_64/coreos/stable
                   Version: 35.20220227.3.0 (2022-03-14T19:06:29Z)
                BaseCommit: 60d938261803ef4f9d927e774db51a96c75073eff81744ee5f528ab33f985ecd
              GPGSignature: Valid signature by 787EA6AE1147EEE56C40B30CDB4639719867C58F
           LayeredPackages: conntrack-tools python python3-libselinux
[root@dc2-worker1 ~]# containerd --version
containerd github.com/containerd/containerd v1.5.8 1e5ef943eb76627a6d3b6de8cd1ef6537f393a71
[root@dc2-worker1 ~]# ls -alZ /var/lib/kubelet/pods/a279964f-674e-455b-b6e3-29d06a676425/volumes/kubernetes.io~configmap/config/
total 0
drwxrwxrwx. 3 root root system_u:object_r:container_file_t:s0:c122,c252 76 Apr  3 11:55 .
drwxr-xr-x. 3 root root system_u:object_r:var_lib_t:s0                  20 Apr  3 11:55 ..
drwxr-xr-x. 2 root root system_u:object_r:container_file_t:s0:c122,c252 22 Apr  3 11:55 ..2022_04_03_11_55_41.1009638654
lrwxrwxrwx. 1 root root system_u:object_r:container_file_t:s0:c122,c252 32 Apr  3 11:55 ..data -> ..2022_04_03_11_55_41.1009638654
lrwxrwxrwx. 1 root root system_u:object_r:container_file_t:s0:c122,c252 15 Apr  3 11:55 Corefile -> ..data/Corefile

@dustymabe (Member)

@log1cb0mb - great work tracking that down! Any chance there is an upstream containerd issue already reported for this problem? In the best case it has already been fixed upstream and we can get the fix backported.

@log1cb0mb (Author)

I cannot seem to find any similar issue reported, so I am going to open one.

@dustymabe dustymabe changed the title FCOS (v35.20220227.3.0) Changed default SElinux permissions for containers containerd: Changed default SElinux permissions for containers Apr 25, 2022
@dustymabe dustymabe changed the title containerd: Changed default SElinux permissions for containers containerd: SElinux Relabelling broken for Kubernetes volume mounts Apr 25, 2022
@dustymabe (Member)

New information: containerd/containerd#6767 (comment)

So maybe the issue is related to opencontainers/selinux needing an update. I pinged the BZ requesting the release bump.

And then I think containerd would (maybe) need to be rebuilt? cc @olivierlemasle.

@log1cb0mb (Author)

log1cb0mb commented Apr 26, 2022

I am building containerd with the fix (opencontainers/selinux#173) as we speak, to test whether it indeed helps.

Update: it worked! Built containerd with these changes: containerd/containerd@768af24

@log1cb0mb (Author)

@dustymabe Once this is merged: containerd/containerd#6865
Would it be possible to rebuild containerd and backport it to the existing stable stream, or how does that actually work?

@dustymabe (Member)

@log1cb0mb - we'd need to get that backported to the Fedora RPM (@olivierlemasle is the maintainer).

@dustymabe (Member)

If we can't get hold of him here, then opening a BZ against containerd would be the avenue to take.

@log1cb0mb (Author)

@dustymabe will this Bug 2079779 do?

@dustymabe (Member)

@dustymabe will this Bug 2079779 do?

Yep. That should do!

The fix landed upstream in containerd/containerd@cb84b5a

@log1cb0mb (Author)

log1cb0mb commented May 12, 2022

@dustymabe As the packages are now available, I have been trying to find a way to upgrade containerd in the existing release. Is that even possible, or does it have to wait for an upcoming release?

@sedlund

sedlund commented May 12, 2022

You can grab it from https://koji.fedoraproject.org/koji/buildinfo?buildID=1965128 and then rpm-ostree override replace <rpm>, which is how I downgraded existing machines.
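The downgrade @sedlund describes can be sketched as follows. The RPM filename is illustrative (download the actual build from the koji link above), and the guard makes this a no-op off rpm-ostree hosts.

```shell
# Sketch: pin a known-good containerd build on an rpm-ostree host.
pin_containerd() {
  if ! command -v rpm-ostree >/dev/null 2>&1; then
    echo "not an rpm-ostree host; skipping"
    return 0
  fi
  # Replace the base image's containerd with a locally downloaded RPM
  # (filename illustrative), then reboot into the new deployment.
  sudo rpm-ostree override replace ./containerd-*.x86_64.rpm
  sudo systemctl reboot
}
pin_containerd
```

Once a release ships the fix, the pin can be dropped again with rpm-ostree override reset containerd.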

@dustymabe dustymabe added status/pending-testing-release Fixed upstream. Waiting on a testing release. and removed status/pending-upstream-release Fixed upstream. Waiting on an upstream component source code release. labels May 13, 2022
@dustymabe (Member)

Hey @log1cb0mb, can you try out the latest testing-devel build (36.20220516.20.0 or newer) to see if this is fixed for you? You can grab those images from the unofficial builds browser.

@log1cb0mb (Author)

@dustymabe Just tested the build and it is fixed. 👍🏻

@dustymabe (Member)

The fix for this went into testing stream release 36.20220522.2.1. Please try out the new release and report issues.

@dustymabe dustymabe added status/pending-stable-release Fixed upstream and in testing. Waiting on stable release. and removed status/pending-testing-release Fixed upstream. Waiting on a testing release. labels May 25, 2022
@log1cb0mb (Author)

Tested and fixed.

@lucab (Contributor)

lucab commented May 30, 2022

Another similar report is at #1211 (comment), also confirmed fixed in testing.

@dustymabe (Member)

The fix for this went into stable stream release 36.20220522.3.0.

@dustymabe dustymabe removed the status/pending-stable-release Fixed upstream and in testing. Waiting on stable release. label Jun 7, 2022