
Add CIS kubernetes CIS-1.9 for k8s v1.27 - v1.29 #1617

Open
wants to merge 4 commits into main from cis-1.9
Conversation


@andypitcher andypitcher commented May 17, 2024

Parent issue:

CIS Kubernetes Benchmark CIS-1.9

CIS Workbench: https://workbench.cisecurity.org/benchmarks/16828
K8s version: v1.27 to v1.29
Changelog details in CIS Workbench:
All checks remain the same as in CIS-1.8; only the following were changed:

Pull request details

  • created cfg/cis-1.9

  • policies.yaml

    • 5.1.1 to 5.1.6 were adapted from Manual to Automated
    • 5.1.3 got broken down into 5.1.3.1 and 5.1.3.2
    • 5.1.6 got broken down into 5.1.6.1 and 5.1.6.2
    • version was set to cis-1.9
  • node.yaml master.yaml controlplane.yaml etcd.yaml

    • version was set to cis-1.9
  • master.yaml

    • 1.1.13 and 1.1.14 had their titles and remediations changed along with their tests (they now test against multiple values)
  • Added CIS-1.9 to the global configmap and docs/

  • Set the go-linter version from latest to v1.57.2, as per 86a42b5

Description of policies.yaml changes

Note: kubectl needs to be added to kube-bench's Dockerfile (and anywhere else it is needed).

5.1.1 Ensure that the cluster-admin role is only used where required (Automated)

Test details: https://workbench.cisecurity.org/sections/2493119/recommendations/4022566
Change from previous version: Manual to Automated
Test: Retrieves all clusterrolebindings with their role names and subjects, and looks for the cluster-admin role.
Condition: is_compliant is false (FAIL) if a binding's name is not cluster-admin but it grants the cluster-admin role. The default cluster-admin binding itself is not meant to fail, since it's a built-in role.
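
For readability, here is the audit script from the debug logs below, unescaped and reflowed (the default check runs against all clusterrolebindings; the logged examples name a single binding). Note that the logged version echoes ${rolebinding}, an unset variable, instead of ${role_binding}, which is why the role_binding field prints empty in the outputs below; the reflowed version uses the corrected name.

```bash
kubectl get clusterrolebindings -o=custom-columns=NAME:.metadata.name,ROLE:.roleRef.name,SUBJECT:.subjects[*].name --no-headers \
| while read -r role_name role_binding subject; do
    # Non-compliant: a binding whose name is not cluster-admin but whose
    # roleRef grants the cluster-admin role.
    if [[ "${role_name}" != "cluster-admin" && "${role_binding}" == "cluster-admin" ]]; then
      is_compliant="false"
    else
      is_compliant="true"
    fi
    echo "**role_name: ${role_name} role_binding: ${role_binding} subject: ${subject} is_compliant: ${is_compliant}"
  done
```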

FAIL
(For explanation purposes, the following check was run with a single role_name: role-test-1.)

I0517 21:59:41.795837  755921 check.go:110] -----   Running check 5.1.1   -----
I0517 21:59:41.955134  755921 check.go:309] Command: "kubectl get clusterrolebindings role-test-1 -o=custom-columns=NAME:.metadata.name,ROLE:.roleRef.name,SUBJECT:.subjects[*].name --no-headers | while read -r role_name role_binding subject\ndo\n  if [[ \"${role_name}\" != \"cluster-admin\" && \"${role_binding}\" == \"cluster-admin\" ]]; then\n    is_compliant=\"false\"\n  else\n    is_compliant=\"true\"\n  fi;\n  echo \"**role_name: ${role_name} role_binding: ${rolebinding} subject: ${subject} is_compliant: ${is_compliant}\"\ndone"
I0517 21:59:41.955348  755921 check.go:310] Output:
 "**role_name: role-test-1 role_binding:  subject: role-test-object is_compliant: false\n"
I0517 21:59:41.955433  755921 check.go:231] Running 1 test_items
I0517 21:59:41.955542  755921 test.go:153] In flagTestItem.findValue false
I0517 21:59:41.955611  755921 test.go:247] Flag 'is_compliant' exists
I0517 21:59:41.955707  755921 check.go:255] Used auditCommand
I0517 21:59:41.955755  755921 check.go:287] Returning from execute on tests: finalOutput &check.testOutput{testResult:false, flagFound:false, actualResult:"**role_name: role-test-1 role_binding:  subject: role-test-object is_compliant: false", ExpectedResult:"'is_compliant' is equal to 'true'"}
I0517 21:59:41.955832  755921 check.go:184] Command: "" TestResult: false State: "FAIL"
[INFO] 5 Kubernetes Policies
[INFO] 5.1 RBAC and Service Accounts
[FAIL] 5.1.1 Ensure that the cluster-admin role is only used where required (Automated)
	 **role_name: role-test-1 role_binding:  subject: role-test-object is_compliant: false

== Remediations policies ==
5.1.1 Identify all clusterrolebindings to the cluster-admin role. Check if they are used and
if they need this role or if they could use a role with fewer privileges.
Where possible, first bind users to a lower privileged role and then remove the
clusterrolebinding to the cluster-admin role : kubectl delete clusterrolebinding [name]
Condition: is_compliant is false if rolename is not cluster-admin and rolebinding is cluster-admin.


== Summary policies ==
0 checks PASS
1 checks FAIL
0 checks WARN
0 checks INFO

== Summary total ==
0 checks PASS
1 checks FAIL
0 checks WARN
0 checks INFO

PASS
(For explanation purposes, the following check was run with a single role_name: cluster-admin.)

I0517 22:03:06.462059  756407 check.go:110] -----   Running check 5.1.1   -----
I0517 22:03:06.677065  756407 check.go:309] Command: "kubectl get clusterrolebindings cluster-admin -o=custom-columns=NAME:.metadata.name,ROLE:.roleRef.name,SUBJECT:.subjects[*].name --no-headers | while read -r role_name role_binding subject\ndo\n  if [[ \"${role_name}\" != \"cluster-admin\" && \"${role_binding}\" == \"cluster-admin\" ]]; then\n    is_compliant=\"false\"\n  else\n    is_compliant=\"true\"\n  fi;\n  echo \"**role_name: ${role_name} role_binding: ${rolebinding} subject: ${subject} is_compliant: ${is_compliant}\"\ndone"
I0517 22:03:06.677142  756407 check.go:310] Output:
 "**role_name: cluster-admin role_binding:  subject: system:masters is_compliant: true\n"
I0517 22:03:06.677160  756407 check.go:231] Running 1 test_items
I0517 22:03:06.677222  756407 test.go:153] In flagTestItem.findValue true
I0517 22:03:06.677236  756407 test.go:247] Flag 'is_compliant' exists
I0517 22:03:06.677245  756407 check.go:255] Used auditCommand
I0517 22:03:06.677256  756407 check.go:287] Returning from execute on tests: finalOutput &check.testOutput{testResult:true, flagFound:false, actualResult:"**role_name: cluster-admin role_binding:  subject: system:masters is_compliant: true", ExpectedResult:"'is_compliant' is equal to 'true'"}
I0517 22:03:06.677276  756407 check.go:184] Command: "" TestResult: true State: "PASS"
[INFO] 5 Kubernetes Policies
[INFO] 5.1 RBAC and Service Accounts
[PASS] 5.1.1 Ensure that the cluster-admin role is only used where required (Automated)

== Summary policies ==
1 checks PASS
0 checks FAIL
0 checks WARN
0 checks INFO

== Summary total ==
1 checks PASS
0 checks FAIL
0 checks WARN
0 checks INFO

5.1.2 Minimize access to secrets (Automated)

Test details: https://workbench.cisecurity.org/sections/2493119/recommendations/4022567
Change from previous version: Manual to Automated
Test: kubectl auth can-i get,list,watch secrets --all-namespaces --as=system:authenticated.
Condition: PASS when flag canGetListWatchSecretsAsSystemAuthenticated is no.
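
As a sketch of how such a check is expressed in policies.yaml: the audit command and flag are taken from the log below, while the surrounding field names follow kube-bench's existing check schema, so the exact stanza in this PR may differ slightly.

```yaml
- id: 5.1.2
  text: "Minimize access to secrets (Automated)"
  audit: |
    echo "canGetListWatchSecretsAsSystemAuthenticated: $(kubectl auth can-i get,list,watch secrets --all-namespaces --as=system:authenticated)"
  tests:
    test_items:
      - flag: "canGetListWatchSecretsAsSystemAuthenticated"
        compare:
          op: eq
          value: "no"
  remediation: |
    Where possible, remove get, list and watch access to Secret objects in the cluster.
  scored: true
```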

PASS

I0517 22:18:27.986835  758454 check.go:110] -----   Running check 5.1.2   -----
I0517 22:18:28.185305  758454 check.go:309] Command: "echo \"canGetListWatchSecretsAsSystemAuthenticated: $(kubectl auth can-i get,list,watch secrets --all-namespaces --as=system:authenticated)\""
I0517 22:18:28.185358  758454 check.go:310] Output:
 "canGetListWatchSecretsAsSystemAuthenticated: no\n"
I0517 22:18:28.185392  758454 check.go:231] Running 1 test_items
I0517 22:18:28.185462  758454 test.go:153] In flagTestItem.findValue no
I0517 22:18:28.185487  758454 test.go:247] Flag 'canGetListWatchSecretsAsSystemAuthenticated' exists
I0517 22:18:28.185502  758454 check.go:255] Used auditCommand
I0517 22:18:28.185526  758454 check.go:287] Returning from execute on tests: finalOutput &check.testOutput{testResult:true, flagFound:false, actualResult:"canGetListWatchSecretsAsSystemAuthenticated: no", ExpectedResult:"'canGetListWatchSecretsAsSystemAuthenticated' is equal to 'no'"}
I0517 22:18:28.185553  758454 check.go:184] Command: "" TestResult: true State: "PASS"
[INFO] 5 Kubernetes Policies
[INFO] 5.1 RBAC and Service Accounts
[PASS] 5.1.2 Minimize access to secrets (Automated)

== Summary policies ==
1 checks PASS
0 checks FAIL
0 checks WARN
0 checks INFO

== Summary total ==
1 checks PASS
0 checks FAIL
0 checks WARN
0 checks INFO

5.1.3 Minimize wildcard use in Roles and ClusterRoles (Automated)

Test details: https://workbench.cisecurity.org/sections/2493119/recommendations/4022568
Change from previous version: Manual to Automated
Note: Broken down into two checks, 5.1.3.1 and 5.1.3.2, to facilitate the analysis of 5.1.3 (both are documented as artifacts in 5.1.3).

5.1.3.1 Minimize wildcard use in Roles (Automated)

Test: Retrieves all roles along with their respective rules.
Condition: is_compliant is false (FAIL) if ["*"] is found in rules (including verbs, resources, etc.).
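
Reflowed from the escaped command in the logs below, with the default --all-namespaces scope:

```bash
kubectl get roles --all-namespaces -o custom-columns=ROLE_NAMESPACE:.metadata.namespace,ROLE_NAME:.metadata.name --no-headers \
| while read -r role_namespace role_name; do
    # Fetch the role's rules as compact JSON and flag any literal ["*"]
    # wildcard (in verbs, resources or apiGroups).
    role_rules=$(kubectl get role -n "${role_namespace}" "${role_name}" -o=json | jq -c '.rules')
    if echo "${role_rules}" | grep -q "\[\"\*\"\]"; then
      is_compliant="false"
    else
      is_compliant="true"
    fi
    echo "**role_name: ${role_name} role_namespace: ${role_namespace} role_rules: ${role_rules} is_compliant: ${is_compliant}"
  done
```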

FAIL
(For explanation purposes, the following check was run against a single namespace, mynamespace-system. The default runs against --all-namespaces.)

I0517 22:33:13.218641  760554 check.go:110] -----   Running check 5.1.3.1   -----
I0517 22:33:13.557568  760554 check.go:309] Command: "kubectl get roles -n mynamespace-system -o custom-columns=ROLE_NAMESPACE:.metadata.namespace,ROLE_NAME:.metadata.name --no-headers | while read -r role_namespace role_name\ndo\n  role_rules=$(kubectl get role -n \"${role_namespace}\" \"${role_name}\" -o=json | jq -c '.rules')\n  if echo \"${role_rules}\" | grep -q \"\\[\\\"\\*\\\"\\]\"; then\n    is_compliant=\"false\"\n  else\n    is_compliant=\"true\"\n  fi;\n  echo \"**role_name: ${role_name} role_namespace: ${role_namespace} role_rules: ${role_rules} is_compliant: ${is_compliant}\"\ndone"
I0517 22:33:13.557633  760554 check.go:310] Output:
 "**role_name: mynamespace-role role_namespace: mynamespace-system role_rules: [{\"apiGroups\":[\"\"],\"resources\":[\"services\"],\"verbs\":[\"watch\",\"list\",\"get\",\"patch\"]},{\"apiGroups\":[\"batch\"],\"resources\":[\"jobs\"],\"verbs\":[\"watch\",\"list\",\"get\",\"delete\"]},{\"apiGroups\":[\"\"],\"resources\":[\"configmaps\",\"pods\",\"secrets\"],\"verbs\":[\"*\"]},{\"apiGroups\":[\"apps\"],\"resources\":[\"daemonsets\"],\"verbs\":[\"*\"]}] is_compliant: false\n"
I0517 22:33:13.557653  760554 check.go:231] Running 1 test_items
I0517 22:33:13.557723  760554 test.go:153] In flagTestItem.findValue false
I0517 22:33:13.557736  760554 test.go:247] Flag 'is_compliant' exists
I0517 22:33:13.557744  760554 check.go:255] Used auditCommand
I0517 22:33:13.557755  760554 check.go:287] Returning from execute on tests: finalOutput &check.testOutput{testResult:false, flagFound:false, actualResult:"**role_name: mynamespace-role role_namespace: mynamespace-system role_rules: [{\"apiGroups\":[\"\"],\"resources\":[\"services\"],\"verbs\":[\"watch\",\"list\",\"get\",\"patch\"]},{\"apiGroups\":[\"batch\"],\"resources\":[\"jobs\"],\"verbs\":[\"watch\",\"list\",\"get\",\"delete\"]},{\"apiGroups\":[\"\"],\"resources\":[\"configmaps\",\"pods\",\"secrets\"],\"verbs\":[\"*\"]},{\"apiGroups\":[\"apps\"],\"resources\":[\"daemonsets\"],\"verbs\":[\"*\"]}] is_compliant: false", ExpectedResult:"'is_compliant' is equal to 'true'"}
I0517 22:33:13.557784  760554 check.go:184] Command: "" TestResult: false State: "FAIL"
[INFO] 5 Kubernetes Policies
[INFO] 5.1 RBAC and Service Accounts
[FAIL] 5.1.3.1 Minimize wildcard use in Roles (Automated)
	 **role_name: mynamespace-role role_namespace: mynamespace-system role_rules: [{"apiGroups":[""],"resources":["services"],"verbs":["watch","list","get","patch"]},{"apiGroups":["batch"],"resources":["jobs"],"verbs":["watch","list","get","delete"]},{"apiGroups":[""],"resources":["configmaps","pods","secrets"],"verbs":["*"]},{"apiGroups":["apps"],"resources":["daemonsets"],"verbs":["*"]}] is_compliant: false

== Remediations policies ==
5.1.3.1 Where possible replace any use of wildcards ["*"] in roles with specific
objects or actions.
Condition: is_compliant is false if ["*"] is found in rules.
Parent: 5.1.3.


== Summary policies ==
0 checks PASS
1 checks FAIL
0 checks WARN
0 checks INFO

PASS
(For explanation purposes, the following check was run against the namespace kube-system. The default runs against --all-namespaces.)

I0517 22:36:59.733548  761074 check.go:110] -----   Running check 5.1.3.1   -----
I0517 22:37:00.948955  761074 check.go:309] Command: "kubectl get roles -n kube-system -o custom-columns=ROLE_NAMESPACE:.metadata.namespace,ROLE_NAME:.metadata.name --no-headers | while read -r role_namespace role_name\ndo\n  role_rules=$(kubectl get role -n \"${role_namespace}\" \"${role_name}\" -o=json | jq -c '.rules')\n  if echo \"${role_rules}\" | grep -q \"\\[\\\"\\*\\\"\\]\"; then\n    is_compliant=\"false\"\n  else\n    is_compliant=\"true\"\n  fi;\n  echo \"**role_name: ${role_name} role_namespace: ${role_namespace} role_rules: ${role_rules} is_compliant: ${is_compliant}\"\ndone"
I0517 22:37:00.949037  761074 check.go:310] Output:
 "**role_name: extension-apiserver-authentication-reader role_namespace: kube-system role_rules: [{\"apiGroups\":[\"\"],\"resourceNames\":[\"extension-apiserver-authentication\"],\"resources\":[\"configmaps\"],\"verbs\":[\"get\",\"list\",\"watch\"]}] is_compliant: true\n**role_name: system::leader-locking-kube-controller-manager role_namespace: kube-system role_rules: [{\"apiGroups\":[\"\"],\"resources\":[\"configmaps\"],\"verbs\":[\"watch\"]},{\"apiGroups\":[\"\"],\"resourceNames\":[\"kube-controller-manager\"],\"resources\":[\"configmaps\"],\"verbs\":[\"get\",\"update\"]}] is_compliant: true\n**role_name: system::leader-locking-kube-scheduler role_namespace: kube-system role_rules: [{\"apiGroups\":[\"\"],\"resources\":[\"configmaps\"],\"verbs\":[\"watch\"]},{\"apiGroups\":[\"\"],\"resourceNames\":[\"kube-scheduler\"],\"resources\":[\"configmaps\"],\"verbs\":[\"get\",\"update\"]}] is_compliant: true\n**role_name: system:controller:bootstrap-signer role_namespace: kube-system role_rules: [{\"apiGroups\":[\"\"],\"resources\":[\"secrets\"],\"verbs\":[\"get\",\"list\",\"watch\"]}] is_compliant: true\n**role_name: system:controller:cloud-provider role_namespace: kube-system role_rules: [{\"apiGroups\":[\"\"],\"resources\":[\"configmaps\"],\"verbs\":[\"create\",\"get\",\"list\",\"watch\"]}] is_compliant: true\n**role_name: system:controller:token-cleaner role_namespace: kube-system role_rules: [{\"apiGroups\":[\"\"],\"resources\":[\"secrets\"],\"verbs\":[\"delete\",\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"\",\"events.k8s.io\"],\"resources\":[\"events\"],\"verbs\":[\"create\",\"patch\",\"update\"]}] is_compliant: true\n"
I0517 22:37:00.949119  761074 check.go:231] Running 1 test_items
I0517 22:37:00.949177  761074 test.go:153] In flagTestItem.findValue true
I0517 22:37:00.949191  761074 test.go:247] Flag 'is_compliant' exists
I0517 22:37:00.949297  761074 test.go:153] In flagTestItem.findValue true
I0517 22:37:00.949318  761074 test.go:247] Flag 'is_compliant' exists
I0517 22:37:00.949407  761074 test.go:153] In flagTestItem.findValue true
I0517 22:37:00.949420  761074 test.go:247] Flag 'is_compliant' exists
I0517 22:37:00.949475  761074 test.go:153] In flagTestItem.findValue true
I0517 22:37:00.949488  761074 test.go:247] Flag 'is_compliant' exists
I0517 22:37:00.949511  761074 test.go:153] In flagTestItem.findValue true
I0517 22:37:00.949520  761074 test.go:247] Flag 'is_compliant' exists
I0517 22:37:00.949544  761074 test.go:153] In flagTestItem.findValue true
I0517 22:37:00.949554  761074 test.go:247] Flag 'is_compliant' exists
I0517 22:37:00.949566  761074 check.go:255] Used auditCommand
I0517 22:37:00.949596  761074 check.go:287] Returning from execute on tests: finalOutput &check.testOutput{testResult:true, flagFound:false, actualResult:"**role_name: extension-apiserver-authentication-reader role_namespace: kube-system role_rules: [{\"apiGroups\":[\"\"],\"resourceNames\":[\"extension-apiserver-authentication\"],\"resources\":[\"configmaps\"],\"verbs\":[\"get\",\"list\",\"watch\"]}] is_compliant: true\n**role_name: system::leader-locking-kube-controller-manager role_namespace: kube-system role_rules: [{\"apiGroups\":[\"\"],\"resources\":[\"configmaps\"],\"verbs\":[\"watch\"]},{\"apiGroups\":[\"\"],\"resourceNames\":[\"kube-controller-manager\"],\"resources\":[\"configmaps\"],\"verbs\":[\"get\",\"update\"]}] is_compliant: true\n**role_name: system::leader-locking-kube-scheduler role_namespace: kube-system role_rules: [{\"apiGroups\":[\"\"],\"resources\":[\"configmaps\"],\"verbs\":[\"watch\"]},{\"apiGroups\":[\"\"],\"resourceNames\":[\"kube-scheduler\"],\"resources\":[\"configmaps\"],\"verbs\":[\"get\",\"update\"]}] is_compliant: true\n**role_name: system:controller:bootstrap-signer role_namespace: kube-system role_rules: [{\"apiGroups\":[\"\"],\"resources\":[\"secrets\"],\"verbs\":[\"get\",\"list\",\"watch\"]}] is_compliant: true\n**role_name: system:controller:cloud-provider role_namespace: kube-system role_rules: [{\"apiGroups\":[\"\"],\"resources\":[\"configmaps\"],\"verbs\":[\"create\",\"get\",\"list\",\"watch\"]}] is_compliant: true\n**role_name: system:controller:token-cleaner role_namespace: kube-system role_rules: [{\"apiGroups\":[\"\"],\"resources\":[\"secrets\"],\"verbs\":[\"delete\",\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"\",\"events.k8s.io\"],\"resources\":[\"events\"],\"verbs\":[\"create\",\"patch\",\"update\"]}] is_compliant: true", ExpectedResult:"'is_compliant' is equal to 'true'"}
I0517 22:37:00.949670  761074 check.go:184] Command: "" TestResult: true State: "PASS"
[INFO] 5 Kubernetes Policies
[INFO] 5.1 RBAC and Service Accounts
[PASS] 5.1.3.1 Minimize wildcard use in Roles (Automated)

== Summary policies ==
1 checks PASS
0 checks FAIL
0 checks WARN
0 checks INFO

== Summary total ==
1 checks PASS
0 checks FAIL
0 checks WARN
0 checks INFO

5.1.3.2 Minimize wildcard use in ClusterRoles (Automated)

Test: Retrieves all clusterroles along with their respective rules.
Condition: is_compliant is false (FAIL) if ["*"] is found in rules (including verbs, resources, etc.).
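
Reflowed from the escaped command in the logs below, run against all clusterroles as per the default:

```bash
kubectl get clusterroles -o custom-columns=CLUSTERROLE_NAME:.metadata.name --no-headers \
| while read -r clusterrole_name; do
    # Fetch the clusterrole's rules as compact JSON and flag any literal
    # ["*"] wildcard (in verbs, resources or apiGroups).
    clusterrole_rules=$(kubectl get clusterrole "${clusterrole_name}" -o=json | jq -c '.rules')
    if echo "${clusterrole_rules}" | grep -q "\[\"\*\"\]"; then
      is_compliant="false"
    else
      is_compliant="true"
    fi
    echo "**clusterrole_name: ${clusterrole_name} clusterrole_rules: ${clusterrole_rules} is_compliant: ${is_compliant}"
  done
```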

FAIL
(For explanation purposes, the following check was run against the clusterrole system:kubelet-api-admin. The default runs against all clusterroles.)

I0517 22:45:07.966527  765283 check.go:110] -----   Running check 5.1.3.2   -----
I0517 22:45:08.336184  765283 check.go:309] Command: "kubectl get clusterroles system:kubelet-api-admin -o custom-columns=CLUSTERROLE_NAME:.metadata.name --no-headers | while read -r clusterrole_name\ndo\n  clusterrole_rules=$(kubectl get clusterrole \"${clusterrole_name}\" -o=json | jq -c '.rules')\n  if echo \"${clusterrole_rules}\" | grep -q \"\\[\\\"\\*\\\"\\]\"; then\n    is_compliant=\"false\"\n  else\n    is_compliant=\"true\"\n  fi;\necho \"**clusterrole_name: ${clusterrole_name} clusterrole_rules: ${clusterrole_rules} is_compliant: $is_compliant\"\ndone"
I0517 22:45:08.336240  765283 check.go:310] Output:
 "**clusterrole_name: system:kubelet-api-admin clusterrole_rules: [{\"apiGroups\":[\"\"],\"resources\":[\"nodes\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"\"],\"resources\":[\"nodes\"],\"verbs\":[\"proxy\"]},{\"apiGroups\":[\"\"],\"resources\":[\"nodes/log\",\"nodes/metrics\",\"nodes/proxy\",\"nodes/stats\"],\"verbs\":[\"*\"]}] is_compliant: false\n"
I0517 22:45:08.336463  765283 check.go:231] Running 1 test_items
I0517 22:45:08.336563  765283 test.go:153] In flagTestItem.findValue false
I0517 22:45:08.336625  765283 test.go:247] Flag 'is_compliant' exists
I0517 22:45:08.336702  765283 check.go:255] Used auditCommand
I0517 22:45:08.336776  765283 check.go:287] Returning from execute on tests: finalOutput &check.testOutput{testResult:false, flagFound:false, actualResult:"**clusterrole_name: system:kubelet-api-admin clusterrole_rules: [{\"apiGroups\":[\"\"],\"resources\":[\"nodes\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"\"],\"resources\":[\"nodes\"],\"verbs\":[\"proxy\"]},{\"apiGroups\":[\"\"],\"resources\":[\"nodes/log\",\"nodes/metrics\",\"nodes/proxy\",\"nodes/stats\"],\"verbs\":[\"*\"]}] is_compliant: false", ExpectedResult:"'is_compliant' is equal to 'true'"}
I0517 22:45:08.336844  765283 check.go:184] Command: "" TestResult: false State: "FAIL"
[INFO] 5 Kubernetes Policies
[INFO] 5.1 RBAC and Service Accounts
[FAIL] 5.1.3.2 Minimize wildcard use in ClusterRoles (Automated)
	 **clusterrole_name: system:kubelet-api-admin clusterrole_rules: [{"apiGroups":[""],"resources":["nodes"],"verbs":["get","list","watch"]},{"apiGroups":[""],"resources":["nodes"],"verbs":["proxy"]},{"apiGroups":[""],"resources":["nodes/log","nodes/metrics","nodes/proxy","nodes/stats"],"verbs":["*"]}] is_compliant: false

== Remediations policies ==
5.1.3.2 Where possible replace any use of wildcards ["*"] in clusterroles with specific
objects or actions.
Condition: is_compliant is false if ["*"] is found in rules.
Parent: 5.1.3.


== Summary policies ==
0 checks PASS
1 checks FAIL
0 checks WARN
0 checks INFO

== Summary total ==
0 checks PASS
1 checks FAIL
0 checks WARN
0 checks INFO

PASS
(For explanation purposes, the following check was run against the clusterrole view. The default runs against all clusterroles.)

I0517 22:40:27.412740  761659 check.go:110] -----   Running check 5.1.3.2   -----
I0517 22:40:27.741921  761659 check.go:309] Command: "kubectl get clusterroles view -o custom-columns=CLUSTERROLE_NAME:.metadata.name --no-headers | while read -r clusterrole_name\ndo\n  clusterrole_rules=$(kubectl get clusterrole \"${clusterrole_name}\" -o=json | jq -c '.rules')\n  if echo \"${clusterrole_rules}\" | grep -q \"\\[\\\"\\*\\\"\\]\"; then\n    is_compliant=\"false\"\n  else\n    is_compliant=\"true\"\n  fi;\necho \"**clusterrole_name: ${clusterrole_name} clusterrole_rules: ${clusterrole_rules} is_compliant: $is_compliant\"\ndone"
I0517 22:40:27.742013  761659 check.go:310] Output:
 "**clusterrole_name: view clusterrole_rules: [{\"apiGroups\":[\"\"],\"resources\":[\"configmaps\",\"endpoints\",\"persistentvolumeclaims\",\"persistentvolumeclaims/status\",\"pods\",\"replicationcontrollers\",\"replicationcontrollers/scale\",\"serviceaccounts\",\"services\",\"services/status\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"\"],\"resources\":[\"bindings\",\"events\",\"limitranges\",\"namespaces/status\",\"pods/log\",\"pods/status\",\"replicationcontrollers/status\",\"resourcequotas\",\"resourcequotas/status\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"\"],\"resources\":[\"namespaces\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"discovery.k8s.io\"],\"resources\":[\"endpointslices\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"apps\"],\"resources\":[\"controllerrevisions\",\"daemonsets\",\"daemonsets/status\",\"deployments\",\"deployments/scale\",\"deployments/status\",\"replicasets\",\"replicasets/scale\",\"replicasets/status\",\"statefulsets\",\"statefulsets/scale\",\"statefulsets/status\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"autoscaling\"],\"resources\":[\"horizontalpodautoscalers\",\"horizontalpodautoscalers/status\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"batch\"],\"resources\":[\"cronjobs\",\"cronjobs/status\",\"jobs\",\"jobs/status\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"extensions\"],\"resources\":[\"daemonsets\",\"daemonsets/status\",\"deployments\",\"deployments/scale\",\"deployments/status\",\"ingresses\",\"ingresses/status\",\"networkpolicies\",\"replicasets\",\"replicasets/scale\",\"replicasets/status\",\"replicationcontrollers/scale\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"policy\"],\"resources\":[\"poddisruptionbudgets\",\"poddisruptionbudgets/status\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"networking.k8s.io\"],\"resources\":[\"ingresses\",\"ingresses/status\",\"networkpolicies\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"metrics.k8s.io\"],\"resources\":[\"pods\",\"nodes\"],\"verbs\":[\"get\",\"list\",\"watch\"]}] is_compliant: true\n"
I0517 22:40:27.742122  761659 check.go:231] Running 1 test_items
I0517 22:40:27.742264  761659 test.go:153] In flagTestItem.findValue true
I0517 22:40:27.742294  761659 test.go:247] Flag 'is_compliant' exists
I0517 22:40:27.742312  761659 check.go:255] Used auditCommand
I0517 22:40:27.742359  761659 check.go:287] Returning from execute on tests: finalOutput &check.testOutput{testResult:true, flagFound:false, actualResult:"**clusterrole_name: view clusterrole_rules: [{\"apiGroups\":[\"\"],\"resources\":[\"configmaps\",\"endpoints\",\"persistentvolumeclaims\",\"persistentvolumeclaims/status\",\"pods\",\"replicationcontrollers\",\"replicationcontrollers/scale\",\"serviceaccounts\",\"services\",\"services/status\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"\"],\"resources\":[\"bindings\",\"events\",\"limitranges\",\"namespaces/status\",\"pods/log\",\"pods/status\",\"replicationcontrollers/status\",\"resourcequotas\",\"resourcequotas/status\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"\"],\"resources\":[\"namespaces\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"discovery.k8s.io\"],\"resources\":[\"endpointslices\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"apps\"],\"resources\":[\"controllerrevisions\",\"daemonsets\",\"daemonsets/status\",\"deployments\",\"deployments/scale\",\"deployments/status\",\"replicasets\",\"replicasets/scale\",\"replicasets/status\",\"statefulsets\",\"statefulsets/scale\",\"statefulsets/status\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"autoscaling\"],\"resources\":[\"horizontalpodautoscalers\",\"horizontalpodautoscalers/status\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"batch\"],\"resources\":[\"cronjobs\",\"cronjobs/status\",\"jobs\",\"jobs/status\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"extensions\"],\"resources\":[\"daemonsets\",\"daemonsets/status\",\"deployments\",\"deployments/scale\",\"deployments/status\",\"ingresses\",\"ingresses/status\",\"networkpolicies\",\"replicasets\",\"replicasets/scale\",\"replicasets/status\",\"replicationcontrollers/scale\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"policy\"],\"resources\":[\"poddisruptionbudgets\",\"poddisruptionbudgets/status\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"networking.k8s.io\"],\"resources\":[\"ingresses\",\"ingresses/status\",\"networkpolicies\"],\"verbs\":[\"get\",\"list\",\"watch\"]},{\"apiGroups\":[\"metrics.k8s.io\"],\"resources\":[\"pods\",\"nodes\"],\"verbs\":[\"get\",\"list\",\"watch\"]}] is_compliant: true", ExpectedResult:"'is_compliant' is equal to 'true'"}
I0517 22:40:27.742487  761659 check.go:184] Command: "" TestResult: true State: "PASS"
[INFO] 5 Kubernetes Policies
[INFO] 5.1 RBAC and Service Accounts
[PASS] 5.1.3.2 Minimize wildcard use in ClusterRoles (Automated)

== Summary policies ==
1 checks PASS
0 checks FAIL
0 checks WARN
0 checks INFO

== Summary total ==
1 checks PASS
0 checks FAIL
0 checks WARN
0 checks INFO

5.1.4 Minimize access to create pods (Automated)

Test details: https://workbench.cisecurity.org/sections/2493119/recommendations/4022569
Change from previous version: Manual to Automated
Test: kubectl auth can-i create pods --all-namespaces --as=system:authenticated.
Condition: PASS when flag canCreatePodsAsSystemAuthenticated is no.

PASS

I0517 22:49:58.802203  765953 check.go:110] -----   Running check 5.1.4   -----
I0517 22:49:58.972015  765953 check.go:309] Command: "echo \"canCreatePodsAsSystemAuthenticated: $(kubectl auth can-i create pods --all-namespaces --as=system:authenticated)\""
I0517 22:49:58.972059  765953 check.go:310] Output:
 "canCreatePodsAsSystemAuthenticated: no\n"
I0517 22:49:58.972078  765953 check.go:231] Running 1 test_items
I0517 22:49:58.972153  765953 test.go:153] In flagTestItem.findValue no
I0517 22:49:58.972168  765953 test.go:247] Flag 'canCreatePodsAsSystemAuthenticated' exists
I0517 22:49:58.972179  765953 check.go:255] Used auditCommand
I0517 22:49:58.972193  765953 check.go:287] Returning from execute on tests: finalOutput &check.testOutput{testResult:true, flagFound:false, actualResult:"canCreatePodsAsSystemAuthenticated: no", ExpectedResult:"'canCreatePodsAsSystemAuthenticated' is equal to 'no'"}
I0517 22:49:58.972219  765953 check.go:184] Command: "" TestResult: true State: "PASS"
[INFO] 5 Kubernetes Policies
[INFO] 5.1 RBAC and Service Accounts
[PASS] 5.1.4 Minimize access to create pods (Automated)

== Summary policies ==
1 checks PASS
0 checks FAIL
0 checks WARN
0 checks INFO

== Summary total ==
1 checks PASS
0 checks FAIL
0 checks WARN
0 checks INFO

5.1.5 Ensure that default service accounts are not actively used (Automated)

Test details: https://workbench.cisecurity.org/sections/2493119/recommendations/4022570
Change from previous version: Manual to Automated
Test: Retrieves every default serviceaccount and searches for the presence and value of automountServiceAccountToken.
Condition: FAIL if automountServiceAccountToken is notset OR true. Note: for readability, notset is substituted in when the returned value is null or <none>.
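
Reflowed from the escaped command in the logs below, with the default --all-namespaces scope. The jq expression rewrites a null automountServiceAccountToken to the literal notset so the test can match on it, and xargs -L 1 trims each output line:

```bash
kubectl get serviceaccount --all-namespaces --field-selector metadata.name=default -o=json \
| jq -r '.items[] | " namespace: \(.metadata.namespace), kind: \(.kind), name: \(.metadata.name), automountServiceAccountToken: \(.automountServiceAccountToken | if . == null then "notset" else . end)"' \
| xargs -L 1
```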

FAIL
(For explanation purposes, the following check was run against a single namespace, kube-system. The default runs against --all-namespaces.)

I0517 22:56:42.633432  766845 check.go:110] -----   Running check 5.1.5   -----
I0517 22:56:42.800967  766845 check.go:309] Command: "kubectl get serviceaccount -n kube-system --field-selector metadata.name=default -o=json | jq -r '.items[] | \" namespace: \\(.metadata.namespace), kind: \\(.kind), name: \\(.metadata.name), automountServiceAccountToken: \\(.automountServiceAccountToken | if . == null then \"notset\" else . end )\"' | xargs -L 1"
I0517 22:56:42.801159  766845 check.go:310] Output:
 "namespace: kube-system, kind: ServiceAccount, name: default, automountServiceAccountToken: notset\n"
I0517 22:56:42.801370  766845 check.go:231] Running 1 test_items
I0517 22:56:42.801861  766845 test.go:153] In flagTestItem.findValue notset
I0517 22:56:42.802049  766845 test.go:247] Flag 'automountServiceAccountToken' exists
I0517 22:56:42.802167  766845 check.go:255] Used auditCommand
I0517 22:56:42.802248  766845 check.go:287] Returning from execute on tests: finalOutput &check.testOutput{testResult:false, flagFound:false, actualResult:"namespace: kube-system, kind: ServiceAccount, name: default, automountServiceAccountToken: notset", ExpectedResult:"'automountServiceAccountToken' is equal to 'false'"}
I0517 22:56:42.802553  766845 check.go:184] Command: "" TestResult: false State: "FAIL"
[INFO] 5 Kubernetes Policies
[INFO] 5.1 RBAC and Service Accounts
[FAIL] 5.1.5 Ensure that default service accounts are not actively used. (Automated)
	 namespace: kube-system, kind: ServiceAccount, name: default, automountServiceAccountToken: notset

== Remediations policies ==
5.1.5 Create explicit service accounts wherever a Kubernetes workload requires specific access
to the Kubernetes API server.
Modify the configuration of each default service account to include this value
`automountServiceAccountToken: false`.


== Summary policies ==
0 checks PASS
1 checks FAIL
0 checks WARN
0 checks INFO

== Summary total ==
0 checks PASS
1 checks FAIL
0 checks WARN
0 checks INFO

PASS
(For explanation purposes, the following check was run against a single namespace, mynamespace-system. The default runs against --all-namespaces.)

I0517 22:57:20.417749  766946 check.go:110] -----   Running check 5.1.5   -----
I0517 22:57:20.616997  766946 check.go:309] Command: "kubectl get serviceaccount -n mynamespace-system --field-selector metadata.name=default -o=json | jq -r '.items[] | \" namespace: \\(.metadata.namespace), kind: \\(.kind), name: \\(.metadata.name), automountServiceAccountToken: \\(.automountServiceAccountToken | if . == null then \"notset\" else . end )\"' | xargs -L 1"
I0517 22:57:20.617245  766946 check.go:310] Output:
 "namespace: mynamespace-system, kind: ServiceAccount, name: default, automountServiceAccountToken: false\n"
I0517 22:57:20.617380  766946 check.go:231] Running 1 test_items
I0517 22:57:20.617578  766946 test.go:153] In flagTestItem.findValue false
I0517 22:57:20.617732  766946 test.go:247] Flag 'automountServiceAccountToken' exists
I0517 22:57:20.617819  766946 check.go:255] Used auditCommand
I0517 22:57:20.617937  766946 check.go:287] Returning from execute on tests: finalOutput &check.testOutput{testResult:true, flagFound:false, actualResult:"namespace: mynamespace-system, kind: ServiceAccount, name: default, automountServiceAccountToken: false", ExpectedResult:"'automountServiceAccountToken' is equal to 'false'"}
I0517 22:57:20.617988  766946 check.go:184] Command: "" TestResult: true State: "PASS"
[INFO] 5 Kubernetes Policies
[INFO] 5.1 RBAC and Service Accounts
[PASS] 5.1.5 Ensure that default service accounts are not actively used. (Automated)

== Summary policies ==
1 checks PASS
0 checks FAIL
0 checks WARN
0 checks INFO

== Summary total ==
1 checks PASS
0 checks FAIL
0 checks WARN
0 checks INFO

5.1.6 Ensure that Service Account Tokens are only mounted where necessary (Automated)

Test details: https://workbench.cisecurity.org/sections/2493119/recommendations/4022571
Change from previous version: Manual to Automated
Note: Broken down into two checks, 5.1.6.1 and 5.1.6.2, to facilitate the analysis of 5.1.6 (both are documented as artifacts in 5.1.6).

5.1.6.1 Ensure that Service Account Tokens are only mounted where necessary - ServiceAccount (Automated)

Test: Retrieves every serviceaccount and searches for the presence and value of automountServiceAccountToken.
Condition: FAIL if automountServiceAccountToken is notset OR true. Note: for readability, notset is substituted in when the returned value is null or <none>.
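
This is the same audit as 5.1.5 but without the metadata.name=default field selector, so it covers every serviceaccount (reflowed from the logs below, default scope):

```bash
kubectl get serviceaccount --all-namespaces -o=json \
| jq -r '.items[] | " namespace: \(.metadata.namespace), svacc_name: \(.metadata.name), Kind: \(.kind), automountServiceAccountToken: \(.automountServiceAccountToken | if . == null then "notset" else . end)"' \
| xargs -L 1
```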

FAIL
(For explanation purposes, the following check was run against a single namespace, mynamespace-system. The default runs against --all-namespaces.)
mynamespace-system has two serviceaccounts: default, which is compliant because automountServiceAccountToken: false, and svctest, which is not, because automountServiceAccountToken: notset.

I0518 00:42:03.803666  780772 check.go:110] -----   Running check 5.1.6.1   -----
I0518 00:42:04.030298  780772 check.go:309] Command: "kubectl get serviceaccount -n mynamespace-system -o=json | jq -r '.items[] | \" namespace: \\(.metadata.namespace), svacc_name: \\(.metadata.name), Kind: \\(.kind), automountServiceAccountToken: \\(.automountServiceAccountToken | if . == null then \"notset\" else . end )\"' | xargs -L 1"
I0518 00:42:04.030678  780772 check.go:310] Output:
 "namespace: mynamespace-system, svacc_name: default, Kind: ServiceAccount, automountServiceAccountToken: false\nnamespace: mynamespace-system, svacc_name: svctest, Kind: ServiceAccount, automountServiceAccountToken: notset\n"
I0518 00:42:04.030823  780772 check.go:231] Running 1 test_items
I0518 00:42:04.031090  780772 test.go:153] In flagTestItem.findValue false
I0518 00:42:04.031256  780772 test.go:247] Flag 'automountServiceAccountToken' exists
I0518 00:42:04.031412  780772 test.go:153] In flagTestItem.findValue notset
I0518 00:42:04.031729  780772 test.go:247] Flag 'automountServiceAccountToken' exists
I0518 00:42:04.031761  780772 check.go:255] Used auditCommand
I0518 00:42:04.031849  780772 check.go:287] Returning from execute on tests: finalOutput &check.testOutput{testResult:false, flagFound:false, actualResult:"namespace: mynamespace-system, svacc_name: default, Kind: ServiceAccount, automountServiceAccountToken: false\nnamespace: mynamespace-system, svacc_name: svctest, Kind: ServiceAccount, automountServiceAccountToken: notset", ExpectedResult:"'automountServiceAccountToken' is equal to 'false'"}
I0518 00:42:04.031945  780772 check.go:184] Command: "" TestResult: false State: "FAIL"
[INFO] 5 Kubernetes Policies
[INFO] 5.1 RBAC and Service Accounts
[FAIL] 5.1.6.1 Ensure that Service Account Tokens are only mounted where necessary - ServiceAccount (Automated)
	 namespace: mynamespace-system, svacc_name: default, Kind: ServiceAccount, automountServiceAccountToken: false
	 namespace: mynamespace-system, svacc_name: svctest, Kind: ServiceAccount, automountServiceAccountToken: notset

== Remediations policies ==
5.1.6.1 Modify the definition of service accounts which do not need to mount service
account tokens to disable it, with `automountServiceAccountToken: false`.
Parent: 5.1.6.


== Summary policies ==
0 checks PASS
1 checks FAIL
0 checks WARN
0 checks INFO

== Summary total ==
0 checks PASS
1 checks FAIL
0 checks WARN
0 checks INFO

PASS
(For explanation purposes, the following check was run against a single namespace, mynamespace-system. The default runs against --all-namespaces.)
mynamespace-system has two serviceaccounts, default and svctest, both compliant because automountServiceAccountToken: false. svctest was made compliant by setting automountServiceAccountToken: false and applying the change (see the FAIL case above).

I0518 00:48:13.397775  781678 check.go:110] -----   Running check 5.1.6.1   -----
I0518 00:48:13.570910  781678 check.go:309] Command: "kubectl get serviceaccount -n mynamespace-system -o=json | jq -r '.items[] | \" namespace: \\(.metadata.namespace), svacc_name: \\(.metadata.name), Kind: \\(.kind), automountServiceAccountToken: \\(.automountServiceAccountToken | if . == null then \"notset\" else . end )\"' | xargs -L 1"
I0518 00:48:13.570965  781678 check.go:310] Output:
 "namespace: mynamespace-system, svacc_name: default, Kind: ServiceAccount, automountServiceAccountToken: false\nnamespace: mynamespace-system, svacc_name: svctest, Kind: ServiceAccount, automountServiceAccountToken: false\n"
I0518 00:48:13.571004  781678 check.go:231] Running 1 test_items
I0518 00:48:13.571073  781678 test.go:153] In flagTestItem.findValue false
I0518 00:48:13.571092  781678 test.go:247] Flag 'automountServiceAccountToken' exists
I0518 00:48:13.571128  781678 test.go:153] In flagTestItem.findValue false
I0518 00:48:13.571138  781678 test.go:247] Flag 'automountServiceAccountToken' exists
I0518 00:48:13.571150  781678 check.go:255] Used auditCommand
I0518 00:48:13.571174  781678 check.go:287] Returning from execute on tests: finalOutput &check.testOutput{testResult:true, flagFound:false, actualResult:"namespace: mynamespace-system, svacc_name: default, Kind: ServiceAccount, automountServiceAccountToken: false\nnamespace: mynamespace-system, svacc_name: svctest, Kind: ServiceAccount, automountServiceAccountToken: false", ExpectedResult:"'automountServiceAccountToken' is equal to 'false'"}
I0518 00:48:13.571203  781678 check.go:184] Command: "" TestResult: true State: "PASS"
[INFO] 5 Kubernetes Policies
[INFO] 5.1 RBAC and Service Accounts
[PASS] 5.1.6.1 Ensure that Service Account Tokens are only mounted where necessary - ServiceAccount (Automated)

== Summary policies ==
1 checks PASS
0 checks FAIL
0 checks WARN
0 checks INFO

== Summary total ==
1 checks PASS
0 checks FAIL
0 checks WARN
0 checks INFO

5.1.6.2 Ensure that Service Account Tokens are only mounted where necessary - Pods (Automated)

Note: This check has been improved by using more context, based on the k8s docs: if both the ServiceAccount and the Pod's .spec specify a value for automountServiceAccountToken, the Pod spec takes precedence.
Test: Retrieves all Pods and searches for the presence and value of automountServiceAccountToken, then compares it with the ServiceAccount each Pod uses (and whether that ServiceAccount sets automountServiceAccountToken).
Condition: a Pod's is_compliant is true when (see the reflowed audit script below):
- the ServiceAccount has automountServiceAccountToken: false and the Pod has automountServiceAccountToken: false or notset
- the ServiceAccount has automountServiceAccountToken: true or notset and the Pod has automountServiceAccountToken: false
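
For readability, here is the audit script from the logs below, unescaped and reflowed with the default --all-namespaces scope. Note that the script only treats an explicit automountServiceAccountToken: true on the ServiceAccount (not notset) as overridable by a Pod-level false:

```bash
kubectl get pods --all-namespaces -o custom-columns=POD_NAMESPACE:.metadata.namespace,POD_NAME:.metadata.name,POD_SERVICE_ACCOUNT:.spec.serviceAccount,POD_IS_AUTOMOUNTSERVICEACCOUNTTOKEN:.spec.automountServiceAccountToken --no-headers \
| while read -r pod_namespace pod_name pod_service_account pod_is_automountserviceaccounttoken; do
    # Retrieve automountServiceAccountToken for the ServiceAccount and the Pod,
    # normalizing null/<none> to "notset".
    svacc_is_automountserviceaccounttoken=$(kubectl get serviceaccount -n ${pod_namespace} ${pod_service_account} -o json \
      | jq -r '.automountServiceAccountToken' | sed -e 's/<none>/notset/g' -e 's/null/notset/g')
    pod_is_automountserviceaccounttoken=$(echo ${pod_is_automountserviceaccounttoken} | sed -e 's/<none>/notset/g' -e 's/null/notset/g')
    # The Pod spec takes precedence over the ServiceAccount when both are set.
    if [[ "${svacc_is_automountserviceaccounttoken}" == "false" && ( "${pod_is_automountserviceaccounttoken}" == "false" || "${pod_is_automountserviceaccounttoken}" == "notset" ) ]]; then
      is_compliant="true"
    elif [[ "${svacc_is_automountserviceaccounttoken}" == "true" && "${pod_is_automountserviceaccounttoken}" == "false" ]]; then
      is_compliant="true"
    else
      is_compliant="false"
    fi
    echo "**namespace: ${pod_namespace} pod_name: ${pod_name} service_account: ${pod_service_account} pod_is_automountserviceaccounttoken: ${pod_is_automountserviceaccounttoken} svacc_is_automountServiceAccountToken: ${svacc_is_automountserviceaccounttoken} is_compliant: ${is_compliant}"
  done
```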

FAIL
(For explanation purposes, the following check was run against a single namespace, mynamespace-system. The default runs against --all-namespaces.)

I0519 02:41:15.065860  994379 check.go:110] -----   Running check 5.1.6.2   -----
I0519 02:41:15.375753  994379 check.go:309] Command: "kubectl get pods -n mynamespace-system -o custom-columns=POD_NAMESPACE:.metadata.namespace,POD_NAME:.metadata.name,POD_SERVICE_ACCOUNT:.spec.serviceAccount,POD_IS_AUTOMOUNTSERVICEACCOUNTTOKEN:.spec.automountServiceAccountToken --no-headers | while read -r pod_namespace pod_name pod_service_account pod_is_automountserviceaccounttoken\ndo\n  # Retrieve automountServiceAccountToken's value for ServiceAccount and Pod, set to notset if null or <none>.\n  svacc_is_automountserviceaccounttoken=$(kubectl get serviceaccount -n ${pod_namespace} ${pod_service_account} -o json | jq -r '.automountServiceAccountToken' | sed -e 's/<none>/notset/g' -e 's/null/notset/g')\n  pod_is_automountserviceaccounttoken=$(echo ${pod_is_automountserviceaccounttoken} | sed -e 's/<none>/notset/g' -e 's/null/notset/g')\n  if [[ \"${svacc_is_automountserviceaccounttoken}\" == \"false\"  && ( \"${pod_is_automountserviceaccounttoken}\" == \"false\" || \"${pod_is_automountserviceaccounttoken}\" == \"notset\" ) ]]; then\n    is_compliant=\"true\"\n  elif [[ \"${svacc_is_automountserviceaccounttoken}\" == \"true\" && \"${pod_is_automountserviceaccounttoken}\" == \"false\" ]]; then\n    is_compliant=\"true\"\n  else\n    is_compliant=\"false\"\n  fi\n  echo \"**namespace: ${pod_namespace} pod_name: ${pod_name} service_account: ${pod_service_account} pod_is_automountserviceaccounttoken: ${pod_is_automountserviceaccounttoken} svacc_is_automountServiceAccountToken: ${svacc_is_automountserviceaccounttoken} is_compliant: ${is_compliant}\"\ndone"
I0519 02:41:15.375822  994379 check.go:310] Output:
 "**namespace: mynamespace-system pod_name: pod1 service_account: mynamespace-system-svacc pod_is_automountserviceaccounttoken: notset svacc_is_automountServiceAccountToken: notset is_compliant: false\n"
I0519 02:41:15.375854  994379 check.go:231] Running 1 test_items
I0519 02:41:15.375927  994379 test.go:153] In flagTestItem.findValue false
I0519 02:41:15.375950  994379 test.go:247] Flag 'is_compliant' exists
I0519 02:41:15.375963  994379 check.go:255] Used auditCommand
I0519 02:41:15.375991  994379 check.go:287] Returning from execute on tests: finalOutput &check.testOutput{testResult:false, flagFound:false, actualResult:"**namespace: mynamespace-system pod_name: pod1 service_account: mynamespace-system-svac pod_is_automountserviceaccounttoken: notset svacc_is_automountServiceAccountToken: notset is_compliant: false", ExpectedResult:"'is_compliant' is equal to 'true'"}
I0519 02:41:15.376021  994379 check.go:184] Command: "" TestResult: false State: "FAIL"
[INFO] 5 Kubernetes Policies
[INFO] 5.1 RBAC and Service Accounts
[FAIL] 5.1.6.2 Ensure that Service Account Tokens are only mounted where necessary - Pods (Automated)
	 **namespace: mynamespace-system pod_name: pod1 service_account: mynamespace-system-svacc pod_is_automountserviceaccounttoken: notset svacc_is_automountServiceAccountToken: notset is_compliant: false

== Remediations policies ==
5.1.6.2 Modify the definition of pods which do not need to mount service
account tokens to disable it, with `automountServiceAccountToken: false`.
If both the ServiceAccount and the Pod's .spec specify a value for automountServiceAccountToken, the Pod spec takes precedence.
Condition: Pod is_compliant to true when
  - ServiceAccount is automountServiceAccountToken: false and Pod is automountServiceAccountToken: false or notset
  - ServiceAccount is automountServiceAccountToken: true or notset and Pod is automountServiceAccountToken: false
Parent: 5.1.6.


== Summary policies ==
0 checks PASS
1 checks FAIL
0 checks WARN
0 checks INFO

== Summary total ==
0 checks PASS
1 checks FAIL
0 checks WARN
0 checks INFO

PASS
(For explanation purposes, the following check was run against a single namespace, mynamespace-system. The default runs against --all-namespaces.)

I0519 02:36:20.477873  993661 check.go:110] -----   Running check 5.1.6.2   -----
I0519 02:36:21.147446  993661 check.go:309] Command: "kubectl get pods -n mynamespace-system -o custom-columns=POD_NAMESPACE:.metadata.namespace,POD_NAME:.metadata.name,POD_SERVICE_ACCOUNT:.spec.serviceAccount,POD_IS_AUTOMOUNTSERVICEACCOUNTTOKEN:.spec.automountServiceAccountToken --no-headers | while read -r pod_namespace pod_name pod_service_account pod_is_automountserviceaccounttoken\ndo\n  # Retrieve automountServiceAccountToken's value for ServiceAccount and Pod, set to notset if null or <none>.\n  svacc_is_automountserviceaccounttoken=$(kubectl get serviceaccount -n ${pod_namespace} ${pod_service_account} -o json | jq -r '.automountServiceAccountToken' | sed -e 's/<none>/notset/g' -e 's/null/notset/g')\n  pod_is_automountserviceaccounttoken=$(echo ${pod_is_automountserviceaccounttoken} | sed -e 's/<none>/notset/g' -e 's/null/notset/g')\n  if [[ \"${svacc_is_automountserviceaccounttoken}\" == \"false\"  && ( \"${pod_is_automountserviceaccounttoken}\" == \"false\" || \"${pod_is_automountserviceaccounttoken}\" == \"notset\" ) ]]; then\n    is_compliant=\"true\"\n  elif [[ \"${svacc_is_automountserviceaccounttoken}\" == \"true\" && \"${pod_is_automountserviceaccounttoken}\" == \"false\" ]]; then\n    is_compliant=\"true\"\n  else\n    is_compliant=\"false\"\n  fi\n  echo \"**namespace: ${pod_namespace} pod_name: ${pod_name} service_account: ${pod_service_account} pod_is_automountserviceaccounttoken: ${pod_is_automountserviceaccounttoken} svacc_is_automountServiceAccountToken: ${svacc_is_automountserviceaccounttoken} is_compliant: ${is_compliant}\"\ndone"
I0519 02:36:21.147778  993661 check.go:310] Output:
 "**namespace: mynamespace-system pod_name: small-pod service_account: svctest pod_is_automountserviceaccounttoken: notset svacc_is_automountServiceAccountToken: false is_compliant: true\n**namespace: mynamespace-system pod_name: small-pod2 service_account: svctest pod_is_automountserviceaccounttoken: false svacc_is_automountServiceAccountToken: false is_compliant: true\n**namespace: mynamespace-system pod_name: small-pod3 service_account: default pod_is_automountserviceaccounttoken: notset svacc_is_automountServiceAccountToken: false is_compliant: true\n"
I0519 02:36:21.148012  993661 check.go:231] Running 1 test_items
I0519 02:36:21.148194  993661 test.go:153] In flagTestItem.findValue true
I0519 02:36:21.148360  993661 test.go:247] Flag 'is_compliant' exists
I0519 02:36:21.148425  993661 test.go:153] In flagTestItem.findValue true
I0519 02:36:21.148495  993661 test.go:247] Flag 'is_compliant' exists
I0519 02:36:21.148573  993661 test.go:153] In flagTestItem.findValue true
I0519 02:36:21.148630  993661 test.go:247] Flag 'is_compliant' exists
I0519 02:36:21.148671  993661 check.go:255] Used auditCommand
I0519 02:36:21.148761  993661 check.go:287] Returning from execute on tests: finalOutput &check.testOutput{testResult:true, flagFound:false, actualResult:"**namespace: mynamespace-system pod_name: small-pod service_account: svctest pod_is_automountserviceaccounttoken: notset svacc_is_automountServiceAccountToken: false is_compliant: true\n**namespace: mynamespace-system pod_name: small-pod2 service_account: svctest pod_is_automountserviceaccounttoken: false svacc_is_automountServiceAccountToken: false is_compliant: true\n**namespace: mynamespace-system pod_name: small-pod3 service_account: default pod_is_automountserviceaccounttoken: notset svacc_is_automountServiceAccountToken: false is_compliant: true", ExpectedResult:"'is_compliant' is equal to 'true'"}
I0519 02:36:21.148871  993661 check.go:184] Command: "" TestResult: true State: "PASS"
[INFO] 5 Kubernetes Policies
[INFO] 5.1 RBAC and Service Accounts
[PASS] 5.1.6.2 Ensure that Service Account Tokens are only mounted where necessary - Pods (Automated)

== Summary policies ==
1 checks PASS
0 checks FAIL
0 checks WARN
0 checks INFO

== Summary total ==
1 checks PASS
0 checks FAIL
0 checks WARN
0 checks INFO

Description of master.yaml changes

1.1.13 Ensure that the default administrative credential file permissions are set to 600 (Automated)

Test details: https://workbench.cisecurity.org/sections/2493110/recommendations/4022542
Note: This check has been adapted for 1.9 to verify both admin.conf and super-admin.conf. 1.1.13 will fail if neither admin.conf nor super-admin.conf is present.
Test: Retrieves the file permissions of /etc/kubernetes/admin.conf and /etc/kubernetes/super-admin.conf, using use_multiple_values: true.
Condition: File permissions should be 600 for both files (if super-admin.conf is present, which is the case on k8s 1.29+).
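
As a sketch of the corresponding master.yaml stanza: the audit loop is taken from the log below, and use_multiple_values: true makes kube-bench evaluate each output line separately. Field names and the bitmask comparison follow kube-bench's existing schema, so the exact stanza in this PR may differ slightly.

```yaml
- id: 1.1.13
  text: "Ensure that the default administrative credential file permissions are set to 600 (Automated)"
  audit: |
    for conf in /etc/kubernetes/{admin.conf,super-admin.conf}; do
      if test -e $conf; then stat -c "permissions=%a %n" $conf; fi
    done
  tests:
    test_items:
      - flag: "permissions"
        use_multiple_values: true
        compare:
          op: bitmask
          value: "600"
  scored: true
```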

FAIL
(In the case of k8s 1.29+, where both files are present)

I0528 20:30:37.664719 2842595 check.go:110] -----   Running check 1.1.13   -----
I0528 20:30:37.672869 2842595 check.go:309] Command: "for conf in /etc/kubernetes/{admin.conf,super-admin.conf}; do if test -e $conf; then stat -c \"permissions=%a %n\" $conf; fi; done"
I0528 20:30:37.672918 2842595 check.go:310] Output:
 "permissions=600 /etc/kubernetes/admin.conf\npermissions=644 /etc/kubernetes/super-admin.conf\n"
I0528 20:30:37.672951 2842595 check.go:231] Running 1 test_items
I0528 20:30:37.673006 2842595 test.go:153] In flagTestItem.findValue 600
I0528 20:30:37.673028 2842595 test.go:247] Flag 'permissions' exists
I0528 20:30:37.673063 2842595 test.go:153] In flagTestItem.findValue 644
I0528 20:30:37.673078 2842595 test.go:247] Flag 'permissions' exists
I0528 20:30:37.673093 2842595 check.go:255] Used auditCommand
I0528 20:30:37.673127 2842595 check.go:287] Returning from execute on tests: finalOutput &check.testOutput{testResult:false, flagFound:false, actualResult:"permissions=600 /etc/kubernetes/admin.conf\npermissions=644 /etc/kubernetes/super-admin.conf", ExpectedResult:"permissions has permissions 644, expected 600 or more restrictive"}
I0528 20:30:37.673163 2842595 check.go:184] Command: "" TestResult: false State: "FAIL"
[INFO] 1 Control Plane Security Configuration
[INFO] 1.1 Control Plane Node Configuration Files
[FAIL] 1.1.13 Ensure that the default administrative credential file permissions are set to 600 (Automated)
	 permissions=600 /etc/kubernetes/admin.conf
	 permissions=644 /etc/kubernetes/super-admin.conf

== Remediations master ==
1.1.13 Run the below command (based on the file location on your system) on the control plane node.
          For example, chmod 600 /etc/kubernetes/admin.conf
          On Kubernetes 1.29+ the super-admin.conf file should also be modified, if present.
          For example, chmod 600 /etc/kubernetes/super-admin.conf


== Summary master ==
0 checks PASS
1 checks FAIL
0 checks WARN
0 checks INFO

PASS
(In the case of k8s 1.29+, where both files are present)

I0528 20:32:58.191576 2842934 check.go:110] -----   Running check 1.1.13   -----
I0528 20:32:58.198458 2842934 check.go:309] Command: "for conf in /etc/kubernetes/{admin.conf,super-admin.conf}; do if test -e $conf; then stat -c \"permissions=%a %n\" $conf; fi; done"
I0528 20:32:58.198504 2842934 check.go:310] Output:
 "permissions=600 /etc/kubernetes/admin.conf\npermissions=600 /etc/kubernetes/super-admin.conf\n"
I0528 20:32:58.198518 2842934 check.go:231] Running 1 test_items
I0528 20:32:58.198601 2842934 test.go:153] In flagTestItem.findValue 600
I0528 20:32:58.198621 2842934 test.go:247] Flag 'permissions' exists
I0528 20:32:58.198651 2842934 test.go:153] In flagTestItem.findValue 600
I0528 20:32:58.198676 2842934 test.go:247] Flag 'permissions' exists
I0528 20:32:58.198689 2842934 check.go:255] Used auditCommand
I0528 20:32:58.198726 2842934 check.go:287] Returning from execute on tests: finalOutput &check.testOutput{testResult:true, flagFound:false, actualResult:"permissions=600 /etc/kubernetes/admin.conf\npermissions=600 /etc/kubernetes/super-admin.conf", ExpectedResult:"permissions has permissions 600, expected 600 or more restrictive"}
I0528 20:32:58.198759 2842934 check.go:184] Command: "" TestResult: true State: "PASS"
[INFO] 1 Control Plane Security Configuration
[INFO] 1.1 Control Plane Node Configuration Files
[PASS] 1.1.13 Ensure that the default administrative credential file permissions are set to 600 (Automated)
[...]

FAIL
(In the case of k8s older than 1.29, where only admin.conf is present, with permissions 640)

I0528 20:34:47.063307 2843249 check.go:110] -----   Running check 1.1.13   -----
I0528 20:34:47.068231 2843249 check.go:309] Command: "for conf in /etc/kubernetes/{admin.conf,super-admin.conf}; do if test -e $conf; then stat -c \"permissions=%a %n\" $conf; fi; done"
I0528 20:34:47.068267 2843249 check.go:310] Output:
 "permissions=640 /etc/kubernetes/admin.conf\n"
I0528 20:34:47.068280 2843249 check.go:231] Running 1 test_items
I0528 20:34:47.068342 2843249 test.go:153] In flagTestItem.findValue 640
I0528 20:34:47.068354 2843249 test.go:247] Flag 'permissions' exists
I0528 20:34:47.068365 2843249 check.go:255] Used auditCommand
I0528 20:34:47.068388 2843249 check.go:287] Returning from execute on tests: finalOutput &check.testOutput{testResult:false, flagFound:false, actualResult:"permissions=640 /etc/kubernetes/admin.conf", ExpectedResult:"permissions has permissions 640, expected 600 or more restrictive"}
I0528 20:34:47.068410 2843249 check.go:184] Command: "" TestResult: false State: "FAIL"
[INFO] 1 Control Plane Security Configuration
[INFO] 1.1 Control Plane Node Configuration Files
[FAIL] 1.1.13 Ensure that the default administrative credential file permissions are set to 600 (Automated)
	 permissions=640 /etc/kubernetes/admin.conf
[...]

PASS
(In the case of k8s older than 1.29, where only admin.conf is present, with permissions 600)

I0528 20:33:15.731478 2843029 check.go:110] -----   Running check 1.1.13   -----
I0528 20:33:15.736819 2843029 check.go:309] Command: "for conf in /etc/kubernetes/{admin.conf,super-admin.conf}; do if test -e $conf; then stat -c \"permissions=%a %n\" $conf; fi; done"
I0528 20:33:15.736860 2843029 check.go:310] Output:
 "permissions=600 /etc/kubernetes/admin.conf\n"
I0528 20:33:15.736871 2843029 check.go:231] Running 1 test_items
I0528 20:33:15.736924 2843029 test.go:153] In flagTestItem.findValue 600
I0528 20:33:15.736939 2843029 test.go:247] Flag 'permissions' exists
I0528 20:33:15.736953 2843029 check.go:255] Used auditCommand
I0528 20:33:15.736977 2843029 check.go:287] Returning from execute on tests: finalOutput &check.testOutput{testResult:true, flagFound:false, actualResult:"permissions=600 /etc/kubernetes/admin.conf", ExpectedResult:"permissions has permissions 600, expected 600 or more restrictive"}
I0528 20:33:15.737016 2843029 check.go:184] Command: "" TestResult: true State: "PASS"
[INFO] 1 Control Plane Security Configuration
[INFO] 1.1 Control Plane Node Configuration Files
[PASS] 1.1.13 Ensure that the default administrative credential file permissions are set to 600 (Automated)
[...]

1.1.14 Ensure that the default administrative credential file ownership is set to root:root (Automated)

Test details: https://workbench.cisecurity.org/sections/2493110/recommendations/4022546
Note:

  • This check has been adapted for 1.9 to verify both admin.conf and super-admin.conf.
  • 1.1.14 will fail if neither admin.conf nor super-admin.conf is present.
  • The flag root:root has been replaced with ownership; this works and is cleaner. We could now adapt all related checks to this format.
  • Scenarios for k8s older than 1.29 have been tested; basically, if only admin.conf is present, it's fine.

Test: Retrieves the file ownership of /etc/kubernetes/admin.conf and /etc/kubernetes/super-admin.conf, using use_multiple_values: true.
Condition: File ownership should be root:root for both files (if super-admin.conf is present, which is the case on k8s 1.29+).
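
The remark about replacing the root:root flag refers to matching on a named ownership= key rather than on the literal string root:root. As a sketch of the resulting test (schema per existing kube-bench checks; the exact stanza in this PR may differ):

```yaml
tests:
  test_items:
    - flag: "ownership"
      use_multiple_values: true
      compare:
        op: eq
        value: "root:root"
```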

FAIL
(In the case of k8s 1.29+, where both files are present; admin.conf has been set to nobody:nobody on purpose)

I0528 20:47:33.181826 2845010 check.go:110] -----   Running check 1.1.14.1   -----
I0528 20:47:33.190239 2845010 check.go:309] Command: "for adminconf in /etc/kubernetes/{admin.conf,super-admin.conf}; do if test -e $adminconf; then stat -c \"ownership=%U:%G %n\" $adminconf; fi; done"
I0528 20:47:33.190312 2845010 check.go:310] Output:
 "ownership=nobody:nobody /etc/kubernetes/admin.conf\nownership=root:root/etc/kubernetes/super-admin.conf\n"
I0528 20:47:33.190355 2845010 check.go:231] Running 1 test_items
I0528 20:47:33.190409 2845010 test.go:153] In flagTestItem.findValue nobody:nobody
I0528 20:47:33.190425 2845010 test.go:247] Flag 'ownership' exists
I0528 20:47:33.190452 2845010 check.go:255] Used auditCommand
I0528 20:47:33.190485 2845010 check.go:287] Returning from execute on tests: finalOutput &check.testOutput{testResult:false, flagFound:false, actualResult:"ownership=nobody:nobody /etc/kubernetes/admin.conf\nownership=root:root /etc/kubernetes/super-admin.conf", ExpectedResult:"'ownership' is equal to 'root:root'"}
I0528 20:47:33.190513 2845010 check.go:184] Command: "" TestResult: false State: "FAIL"
[INFO] 1 Control Plane Security Configuration
[INFO] 1.1 Control Plane Node Configuration Files
[FAIL] 1.1.14.1 Ensure that the default administrative credential file ownership is set to root:root (Automated)
	 ownership=nobody:nobody /etc/kubernetes/admin.conf
	 ownership=root:root /etc/kubernetes/super-admin.conf

== Remediations master ==
1.1.14.1 Run the below command (based on the file location on your system) on the control plane node.
For example, chown root:root /etc/kubernetes/admin.conf
On Kubernetes 1.29+ the super-admin.conf file should also be modified, if present.
For example, chown root:root /etc/kubernetes/super-admin.conf
[...]

PASS
(In case of k8s 1.29+ where both files are present)

I0528 20:52:01.766176 2845642 check.go:110] -----   Running check 1.1.14   -----
I0528 20:52:01.777458 2845642 check.go:309] Command: "for adminconf in /etc/kubernetes/{admin.conf,super-admin.conf}; do if test -e $adminconf; then stat -c \"ownership=%U:%G %n\" $adminconf; fi; done"
I0528 20:52:01.777622 2845642 check.go:310] Output:
 "ownership=root:root /etc/kubernetes/admin.conf\nownership=root:root /etc/kubernetes/super-admin.conf\n"
I0528 20:52:01.777759 2845642 check.go:231] Running 1 test_items
I0528 20:52:01.778219 2845642 test.go:153] In flagTestItem.findValue root:root
I0528 20:52:01.779164 2845642 test.go:247] Flag 'ownership' exists
I0528 20:52:01.779640 2845642 test.go:153] In flagTestItem.findValue root:root
I0528 20:52:01.779660 2845642 test.go:247] Flag 'ownership' exists
I0528 20:52:01.779673 2845642 check.go:255] Used auditCommand
I0528 20:52:01.779688 2845642 check.go:287] Returning from execute on tests: finalOutput &check.testOutput{testResult:true, flagFound:false, actualResult:"ownership=root:root /etc/kubernetes/admin.conf\nownership=root:root /etc/kubernetes/super-admin.conf", ExpectedResult:"'ownership' is equal to 'root:root'"}
I0528 20:52:01.779720 2845642 check.go:184] Command: "" TestResult: true State: "PASS"
[INFO] 1 Control Plane Security Configuration
[INFO] 1.1 Control Plane Node Configuration Files
[PASS] 1.1.14 Ensure that the default administrative credential file ownership is set to root:root (Automated)
[...]

Description of node.yaml changes

4.3.1 Ensure that the kube-proxy metrics service is bound to localhost (Automated)

Test details: https://workbench.cisecurity.org/sections/2535189/recommendations/4095050
Test: Retrieves the kube-proxy process definition along with kubeproxyconf, and looks for the --metrics-bind-address flag or the metricsBindAddress config value.
Condition: The metrics bind address, if present, should be bound to a localhost IP address to reduce the exposure of sensitive information. The default configuration is 127.0.0.1:10249.

  • It will PASS if the flag is found in the process definition or in kubeproxyconf with a value of 127.0.0.1:(any-port), OR if the flag is not set at all (the default value 127.0.0.1:10249 already complies with this recommendation). A sketch of such a check entry is shown below.
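A rough sketch of what such an entry in node.yaml could look like follows. The $kubeproxybin and $kubeproxyconf variables, the regex, and the exact test layout are illustrative assumptions and may not match the PR's final YAML:

```yaml
- id: 4.3.1
  text: "Ensure that the kube-proxy metrics service is bound to localhost (Automated)"
  # $kubeproxybin / $kubeproxyconf are placeholders for the configured
  # kube-proxy binary name and config file path
  audit: "/bin/ps -fC $kubeproxybin"
  audit_config: "/bin/sh -c 'if test -e $kubeproxyconf; then cat $kubeproxyconf; fi'"
  tests:
    bin_op: or
    test_items:
      # Flag explicitly bound to a localhost address (any port)
      - flag: "--metrics-bind-address"
        compare:
          op: regex
          value: "^127\\.0\\.0\\.1:\\d+$"
      # Config-file equivalent of the flag
      - path: "{.metricsBindAddress}"
        compare:
          op: regex
          value: "^127\\.0\\.0\\.1:\\d+$"
      # Flag not set at all: the default 127.0.0.1:10249 complies
      - flag: "--metrics-bind-address"
        set: false
  remediation: |
    If kube-proxy is run with a config file, set metricsBindAddress to
    127.0.0.1:10249 (or another localhost address); otherwise remove or
    correct the --metrics-bind-address flag on the kube-proxy process.
  scored: true
```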

@andypitcher andypitcher force-pushed the cis-1.9 branch 2 times, most recently from 4f244d5 to 05c45a6 Compare May 19, 2024 02:53
@andypitcher andypitcher marked this pull request as ready for review May 19, 2024 03:03
Contributor

@pjbgf pjbgf left a comment

LGTM

  - policies.yaml
      - 5.1.1 to 5.1.6 were adapted from Manual to Automated
      - 5.1.3 got broken down into 5.1.3.1 and 5.1.3.2
      - 5.1.6 got broken down into 5.1.6.1 and 5.1.6.2
      - version was set to cis-1.9
  - node.yaml master.yaml controlplane.yaml etcd.yaml
      - version was set to cis-1.9
Collaborator

@mozillazg mozillazg left a comment

Thanks for your contribution! I've added some comments. Please check them when you get a chance. Thanks!

Modify the configuration of each default service account to include this value
`automountServiceAccountToken: false`.
scored: true
- id: 5.1.6.1
Collaborator

Suggested change
- id: 5.1.6.1
- id: 5.1.6

Contributor Author

This is the same approach I used for 5.1.3: 5.1.6 was too complex to satisfy with only one check, so I used the Artifacts as a baseline. More details in the PR description.

  • 5.1.6.1 Ensure that Service Account Tokens are only mounted where necessary - ServiceAccount (Automated)
  • 5.1.6.2 Ensure that Service Account Tokens are only mounted where necessary - Pods (Automated)
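To make the first half concrete, here is a minimal sketch of what a 5.1.6.1 audit loop could look like, following the same kubectl custom-columns pattern as 5.1.1. This is an illustration, not necessarily the exact implementation in this PR; note that an unset automountServiceAccountToken renders as <none> and is treated as non-compliant here:

```yaml
- id: 5.1.6.1
  text: "Ensure that Service Account Tokens are only mounted where necessary - ServiceAccount (Automated)"
  audit: |
    kubectl get serviceaccounts -A -o=custom-columns=NS:.metadata.namespace,NAME:.metadata.name,AUTOMOUNT:.automountServiceAccountToken --no-headers | while read -r ns name automount
    do
      # A ServiceAccount is compliant only when token automounting is
      # explicitly disabled
      if [ "${automount}" = "false" ]; then
        is_compliant="true"
      else
        is_compliant="false"
      fi
      echo "**namespace: ${ns} serviceaccount: ${name} automount: ${automount} is_compliant: ${is_compliant}"
    done
  use_multiple_values: true
  tests:
    test_items:
      - flag: "is_compliant"
        compare:
          op: eq
          value: "true"
```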

remediation: |
Where possible, remove get, list and watch access to Secret objects in the cluster.
scored: true
- id: 5.1.3.1
Collaborator

Suggested change
- id: 5.1.3.1
- id: 5.1.3

Contributor Author

As mentioned in the PR description, I chose to split 5.1.3 into two checks (5.1.3.1 and 5.1.3.2), which are the referenced artifacts in CIS Workbench. This gives more accuracy and reduces the complexity of having to test 5.1.3.1 and 5.1.3.2 within the same check.
To make sure we keep the same references, I've added a Parent: 5.1.3 in the remediation.
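As an illustration of the split (assuming it follows the Roles vs ClusterRoles artifacts), a 5.1.3.1 audit detecting wildcard use in Roles could look roughly like the following; the column expressions and compliance logic are assumptions, not the PR's exact YAML:

```yaml
- id: 5.1.3.1
  text: "Minimize wildcard use in Roles (Automated)"
  audit: |
    kubectl get roles -A -o=custom-columns=NS:.metadata.namespace,NAME:.metadata.name,VERBS:.rules[*].verbs,RESOURCES:.rules[*].resources --no-headers | while read -r ns name verbs resources
    do
      # Flag any Role whose verbs or resources contain the "*" wildcard
      if echo "${verbs} ${resources}" | grep -q '\*'; then
        is_compliant="false"
      else
        is_compliant="true"
      fi
      echo "**namespace: ${ns} role: ${name} is_compliant: ${is_compliant}"
    done
  use_multiple_values: true
  tests:
    test_items:
      - flag: "is_compliant"
        compare:
          op: eq
          value: "true"
```

A 5.1.3.2 counterpart would do the same over clusterroles.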

Here are the details:

[Workbench screenshots of the 5.1.3.1 and 5.1.3.2 artifacts]

WDYT?

Collaborator

IMHO, we should follow the CIS benchmark and not split the recommendation in this version. As you can see, both 5.1.3.1 and 5.1.3.2 are drafts.

  • This may break downstream.
  • We may have to explain it to everyone who has questions about this.

@chen-keinan WDYT?

Collaborator

I agree, the CIS benchmark should be the exact guide to follow, to avoid breaking changes.

Contributor Author

Ok, I will see how I can adapt the 5.1.3 and 5.1.6 checks, and propose a new version.

    - Expand 1.1.13/1.1.14 checks by adding super-admin.conf to the permission and ownership verification
    - Remove 1.2.12 Ensure that the admission control plugin SecurityContextDeny is set if PodSecurityPolicy is not used (Manual)
    - Adjust numbering from 1.2.12 to 1.2.29
    - Change 5.2.3 to 5.2.9 titles from Automated to Manual
    - Create 4.3 kube-config group
    - Create 4.3.1 Ensure that the kube-proxy metrics service is bound to localhost (Automated)
Contributor Author

andypitcher commented May 29, 2024

Thanks for your contribution! I've added some comments. Please check them when you get a chance. Thanks!

@mozillazg Thanks for your review. I initially used what was written in the CIS-1.9 PDF under ChangeLog; this makes me think that not every change has been listed there, so thanks for bringing up the other places that needed modifications.

I've made some changes and comments based on your review; let me know WDYT?

One other thing: kubectl needs to be added to kube-bench's Dockerfile (and other needed places) since 5.x checks rely on it.
