
Kilo on RKE cluster: non-master pods CANNOT access the kube-apiserver endpoint #218

Open
vladimir22 opened this issue Jul 21, 2021 · 5 comments


@vladimir22

vladimir22 commented Jul 21, 2021

Hi @squat, I strongly believe in your project and hope you could help me with the final issue...

Hi @squat, I strongly believe in your project and hope you can help me with this final issue...

I have successfully installed Kilo on my RKE cluster as a CNI:

  • all pods are running,
  • service discovery is working,
  • the connection between RKE nodes is secured,
  • pods on the master node have access to the kube-apiserver endpoint

but I have hit another issue:
pods on the non-master nodes CANNOT access the kube-apiserver (kubernetes.default -> 10.45.0.1:443 -> 172.25.132.35:6443)

This is critical for k8s operators like Istio, Prometheus, Infinispan, etc., and I am stuck on it...

I suspect something is wrong with the network routing (the KILO-NAT and KILO-IPIP iptables chains).
Please check my k8s configuration, test pods, and the iptables of the master node and node-1; perhaps you can spot the reason:

kubectl get nodes -o wide

NAME                        STATUS   ROLES                      AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
foundation-musanin-master   Ready    controlplane,etcd,worker   47h   v1.17.4   172.25.132.35    <none>        CentOS Linux 7 (Core)   3.10.0-1160.31.1.el7.x86_64   docker://19.3.8
foundation-musanin-node-1   Ready    worker                     47h   v1.17.4   172.25.132.55    <none>        CentOS Linux 7 (Core)   3.10.0-1160.31.1.el7.x86_64   docker://19.3.8
foundation-musanin-node-2   Ready    worker                     47h   v1.17.4   172.25.132.230   <none>        CentOS Linux 7 (Core)   3.10.0-1160.31.1.el7.x86_64   docker://19.3.8
## Install kilo DaemonSet:  https://raw.githubusercontent.com/squat/kilo/main/manifests/kilo-typhoon.yaml
 
kubectl delete ds -n kube-system kilo
 
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: kilo
  namespace: kube-system
  labels:
    app.kubernetes.io/name: kilo
data:
  cni-conf.json: |
    {
       "cniVersion":"0.3.1",
       "name":"kilo",
       "plugins":[
          {
             "name":"kubernetes",
             "type":"bridge",
             "bridge":"kube-bridge",
             "isDefaultGateway":true,
             "forceAddress":true,
             "mtu": 1420,
             "ipam":{
                "type":"host-local"
             }
          },
          {
             "type":"portmap",
             "snat":true,
             "capabilities":{
                "portMappings":true
             }
          }
       ]
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kilo
  namespace: kube-system
  labels:
    app.kubernetes.io/name: kilo
    app.kubernetes.io/part-of: kilo
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: kilo
      app.kubernetes.io/part-of: kilo
  template:
    metadata:
      labels:
        app.kubernetes.io/name: kilo
        app.kubernetes.io/part-of: kilo
    spec:
      serviceAccountName: kilo
      hostNetwork: true
      containers:
      - name: kilo
        image: squat/kilo
        imagePullPolicy: Always ## the image is updated frequently
        ## list args details: https://kilo.squat.ai/docs/kg/#usage
        args:
        - --kubeconfig=/etc/kubernetes/config
        - --hostname=\$(NODE_NAME)
        - --create-interface=true
        - --interface=kilo0
        - --log-level=debug
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        ports:
        - containerPort: 1107
          name: metrics
        securityContext:
          privileged: true
        volumeMounts:
        - name: cni-conf-dir
          mountPath: /etc/cni/net.d
        - name: kilo-dir
          mountPath: /var/lib/kilo
        - name: kubeconfig
          mountPath: /etc/kubernetes
          readOnly: true
        - name: lib-modules
          mountPath: /lib/modules
          readOnly: true
        - name: xtables-lock
          mountPath: /run/xtables.lock
          readOnly: false
      initContainers:
      - name: install-cni
        image: squat/kilo
        ## CAUTION!!!: the init container removes all CNI configs on the node (dir: /etc/cni/net.d)
        command:
        - /bin/sh
        - -c
        - set -e -x;
          cp /opt/cni/bin/* /host/opt/cni/bin/;
          TMP_CONF="\$CNI_CONF_NAME".tmp;
          echo "\$CNI_NETWORK_CONFIG" > \$TMP_CONF;
          rm -f /host/etc/cni/net.d/*;
          mv \$TMP_CONF /host/etc/cni/net.d/\$CNI_CONF_NAME
        env:
        - name: CNI_CONF_NAME
          value: 10-kilo.conflist
        - name: CNI_NETWORK_CONFIG
          valueFrom:
            configMapKeyRef:
              name: kilo
              key: cni-conf.json
        volumeMounts:
        - name: cni-bin-dir
          mountPath: /host/opt/cni/bin
        - name: cni-conf-dir
          mountPath: /host/etc/cni/net.d
      tolerations:
      - effect: NoSchedule
        operator: Exists
      - effect: NoExecute
        operator: Exists
      volumes:
      - name: cni-bin-dir
        hostPath:
          path: /opt/cni/bin
      - name: cni-conf-dir
        hostPath:
          path: /etc/cni/net.d
      - name: kilo-dir
        hostPath:
          path: /var/lib/kilo
      - name: kubeconfig
        configMap:
          name: kubeconfig-in-cluster
      - name: lib-modules
        hostPath:
          path: /lib/modules
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
EOF
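
A quick sanity check that the DaemonSet rolled out and that the init container wrote the expected CNI config (10-kilo.conflist, per CNI_CONF_NAME above):

kubectl -n kube-system rollout status ds/kilo
## on each node:
ls /etc/cni/net.d/
cat /etc/cni/net.d/10-kilo.conflist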

kubectl get pods --all-namespaces -o wide

NAMESPACE            NAME                                      READY   STATUS      RESTARTS   AGE     IP               NODE                        NOMINATED NODE   READINESS GATES
default              master                                    1/1     Running     0          10s     10.44.0.18       foundation-musanin-master   <none>           <none>
default              node1                                     1/1     Running     0          10s     10.44.2.31       foundation-musanin-node-1   <none>           <none>
default              node2                                     1/1     Running     0          10s     10.44.1.20       foundation-musanin-node-2   <none>           <none>
kube-system          coredns-7c5566588d-gwjg4                  1/1     Running     1          6h      10.44.2.23       foundation-musanin-node-1   <none>           <none>
kube-system          coredns-7c5566588d-kxphm                  1/1     Running     1          6h      10.44.1.18       foundation-musanin-node-2   <none>           <none>
kube-system          coredns-autoscaler-65bfc8d47d-kqs2m       1/1     Running     1          6h10m   10.44.2.22       foundation-musanin-node-1   <none>           <none>
kube-system          kilo-gmkrl                                1/1     Running     0          88m     172.25.132.55    foundation-musanin-node-1   <none>           <none>
kube-system          kilo-sj9jz                                1/1     Running     0          88m     172.25.132.230   foundation-musanin-node-2   <none>           <none>
kube-system          kilo-tsn5v                                1/1     Running     0          88m     172.25.132.35    foundation-musanin-master   <none>           <none>
kube-system          metrics-server-6b55c64f86-m5t28           1/1     Running     2          28h     10.44.0.13       foundation-musanin-master   <none>           <none>
kube-system          rke-coredns-addon-deploy-job-j2k85        0/1     Completed   0          28h     172.25.132.35    foundation-musanin-master   <none>           <none>
kube-system          rke-ingress-controller-deploy-job-48m49   0/1     Completed   0          28h     172.25.132.35    foundation-musanin-master   <none>           <none>
kube-system          rke-metrics-addon-deploy-job-8vdhx        0/1     Completed   0          28h     172.25.132.35    foundation-musanin-master   <none>           <none>
kube-system          rke-network-plugin-deploy-job-g796v       0/1     Completed   0          28h     172.25.132.35    foundation-musanin-master   <none>           <none>
local-path-storage   local-path-provisioner-5bd6f65fdf-2kqks   1/1     Running     2          22h     10.44.0.14       foundation-musanin-master   <none>           <none>
pf                   echoserver-977db48cd-f4msv                1/1     Running     1          21h     10.44.2.21       foundation-musanin-node-1   <none>           <none>

kubectl get service --all-namespaces -o wide

NAMESPACE       NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                        AGE   SELECTOR
default         kubernetes                           ClusterIP   10.45.0.1       <none>        443/TCP                        28h   <none>
ingress-nginx   default-http-backend                 ClusterIP   10.45.193.115   <none>        80/TCP                         28h   app=default-http-backend
kube-system     kube-dns                             ClusterIP   10.45.0.10      <none>        53/UDP,53/TCP,9153/TCP         28h   k8s-app=kube-dns
kube-system     metrics-server                       ClusterIP   10.45.53.231    <none>        443/TCP                        28h   k8s-app=metrics-server
kube-system     monitoring-kube-prometheus-kubelet   ClusterIP   None            <none>        10250/TCP,10255/TCP,4194/TCP   20h   <none>
pf              echoserver                           ClusterIP   10.45.214.208   <none>        8080/TCP                       22h   app=echoserver

kubectl get endpoints --all-namespaces -o wide

NAMESPACE            NAME                                 ENDPOINTS                                                                  AGE
default              kubernetes                           172.25.132.35:6443                                                         28h
ingress-nginx        default-http-backend                 <none>                                                                     28h
kube-system          kube-controller-manager              <none>                                                                     28h
kube-system          kube-dns                             10.44.1.18:53,10.44.2.23:53,10.44.1.18:53 + 3 more...                      28h
kube-system          kube-scheduler                       <none>                                                                     28h
kube-system          metrics-server                       10.44.0.13:443                                                             28h
kube-system          monitoring-kube-prometheus-kubelet   172.25.132.230:10255,172.25.132.35:10255,172.25.132.55:10255 + 6 more...   20h
local-path-storage   rancher.io-local-path                <none>                                                                     28h
pf                   echoserver                           10.44.2.21:8080                                                            22h

The echoserver pod is accessible from all nodes:

## access by service name is OK
kubectl exec -it master -- curl http://echoserver.pf:8080 - is OK
kubectl exec -it node1 -- curl http://echoserver.pf:8080 - is OK
kubectl exec -it node2 -- curl http://echoserver.pf:8080 - is OK
## access by service IP is OK
kubectl exec -it master -- curl http://10.45.214.208:8080 - is OK
kubectl exec -it node1 -- curl http://10.45.214.208:8080 - is OK
kubectl exec -it node2 -- curl http://10.45.214.208:8080 - is OK

but the kube-apiserver API is accessible only from the master pod!!!

## access to kube-apiserver from master is OK
kubectl exec -it master -- curl -kv https://kubernetes.default/api - is OK
kubectl exec -it master -- curl -kv https://172.25.132.35:6443/api - is OK


## access to kube-apiserver from RKE nodes - Operation timed out:
kubectl exec -it node1 -- curl -kv https://kubernetes.default/api - ERROR
kubectl exec -it node2 -- curl -kv https://kubernetes.default/api - ERROR
*   Trying 10.45.0.1:443...
* connect to 10.45.0.1 port 443 failed: Operation timed out
* Failed to connect to kubernetes.default port 443: Operation timed out
* Closing connection 0
curl: (28) Failed to connect to kubernetes.default port 443: Operation timed out
command terminated with exit code 28

kubectl exec -it node1 -- curl -kv https://172.25.132.35:6443/api - ERROR
kubectl exec -it node2 -- curl -kv https://172.25.132.35:6443/api - ERROR
*   Trying 172.25.132.35:6443...
* connect to 172.25.132.35 port 6443 failed: Operation timed out
* Failed to connect to 172.25.132.35 port 6443: Operation timed out
* Closing connection 0
curl: (28) Failed to connect to 172.25.132.35 port 6443: Operation timed out
command terminated with exit code 28
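
To see where the SYN dies, tcpdump can help (assuming tcpdump is installed on the nodes); for example, while re-running the curl from node1:

## on the master:
sudo tcpdump -ni eth0 'tcp port 6443 and host 172.25.132.55'
## on node-1, to check whether the packets leave via eth0 or kilo0:
sudo tcpdump -ni kilo0 'tcp port 6443'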

Below are the network settings of the master node (foundation-musanin-master):

sudo iptables-save > ~/temp/20210720_iptables_master

# Generated by iptables-save v1.4.21 on Tue Jul 20 09:18:53 2021
*mangle
:PREROUTING ACCEPT [7023874:6584966013]
:INPUT ACCEPT [4419450:1084240089]
:FORWARD ACCEPT [3616:218503]
:OUTPUT ACCEPT [4385189:1292219696]
:POSTROUTING ACCEPT [4388805:1292438199]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Tue Jul 20 09:18:53 2021
# Generated by iptables-save v1.4.21 on Tue Jul 20 09:18:53 2021
*filter
:INPUT ACCEPT [721284:174744895]
:FORWARD ACCEPT [603:36368]
:OUTPUT ACCEPT [714871:209627914]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KILO-IPIP - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -j KUBE-FIREWALL
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -p ipv4 -m comment --comment "Kilo: jump to IPIP chain" -j KILO-IPIP
-A INPUT -p ipv4 -m comment --comment "Kilo: reject other IPIP traffic" -j DROP
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -j KUBE-FIREWALL
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KILO-IPIP -s 172.25.132.35/32 -m comment --comment "Kilo: allow IPIP traffic" -j ACCEPT
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 10.44.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 10.44.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-SERVICES -d 10.45.193.115/32 -p tcp -m comment --comment "ingress-nginx/default-http-backend: has no endpoints" -m tcp --dport 80 -j REJECT --reject-with icmp-port-unreachable
COMMIT
# Completed on Tue Jul 20 09:18:53 2021
# Generated by iptables-save v1.4.21 on Tue Jul 20 09:18:53 2021
*nat
:PREROUTING ACCEPT [409724:899785231]
:INPUT ACCEPT [1027:177582]
:OUTPUT ACCEPT [7378:462422]
:POSTROUTING ACCEPT [7501:469896]
:DOCKER - [0:0]
:KILO-NAT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SEP-2ZKD5TCM6OYP5Z4F - [0:0]
:KUBE-SEP-AWXKJFKBRSHYCFIX - [0:0]
:KUBE-SEP-CN66RQ6OOEA3MWHE - [0:0]
:KUBE-SEP-DWYO5FEH5J22EHP6 - [0:0]
:KUBE-SEP-OC25G42R336ONLNP - [0:0]
:KUBE-SEP-P2V4QTKODS75TTCV - [0:0]
:KUBE-SEP-QPYANCAVUCDJHCNQ - [0:0]
:KUBE-SEP-TB7SM5ETL6QLHI7C - [0:0]
:KUBE-SEP-YYRNE5UY2T7VHRUE - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-7ECNF7D6GS4GHP3A - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-LC5QY66VUV2HJ6WZ - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 10.44.0.0/24 -m comment --comment "Kilo: jump to KILO-NAT chain" -j KILO-NAT
-A DOCKER -i docker0 -j RETURN
-A KILO-NAT -d 10.4.0.1/32 -m comment --comment "Kilo: do not NAT packets destined for WireGuared IPs" -j RETURN
-A KILO-NAT -d 10.44.0.0/24 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -d 172.25.132.35/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -d 10.4.0.1/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -d 10.4.0.2/32 -m comment --comment "Kilo: do not NAT packets destined for WireGuared IPs" -j RETURN
-A KILO-NAT -d 10.44.2.0/24 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -d 172.25.132.55/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -d 10.4.0.2/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -d 10.4.0.3/32 -m comment --comment "Kilo: do not NAT packets destined for WireGuared IPs" -j RETURN
-A KILO-NAT -d 10.44.1.0/24 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -d 172.25.132.230/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -d 10.4.0.3/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -m comment --comment "Kilo: NAT remaining packets" -j MASQUERADE
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-2ZKD5TCM6OYP5Z4F -s 10.44.2.23/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-2ZKD5TCM6OYP5Z4F -p tcp -m tcp -j DNAT --to-destination 10.44.2.23:9153
-A KUBE-SEP-AWXKJFKBRSHYCFIX -s 10.44.2.21/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-AWXKJFKBRSHYCFIX -p tcp -m tcp -j DNAT --to-destination 10.44.2.21:8080
-A KUBE-SEP-CN66RQ6OOEA3MWHE -s 10.44.0.13/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-CN66RQ6OOEA3MWHE -p tcp -m tcp -j DNAT --to-destination 10.44.0.13:443
-A KUBE-SEP-DWYO5FEH5J22EHP6 -s 172.25.132.35/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-DWYO5FEH5J22EHP6 -p tcp -m tcp -j DNAT --to-destination 172.25.132.35:6443
-A KUBE-SEP-OC25G42R336ONLNP -s 10.44.1.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-OC25G42R336ONLNP -p tcp -m tcp -j DNAT --to-destination 10.44.1.18:53
-A KUBE-SEP-P2V4QTKODS75TTCV -s 10.44.2.23/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-P2V4QTKODS75TTCV -p tcp -m tcp -j DNAT --to-destination 10.44.2.23:53
-A KUBE-SEP-QPYANCAVUCDJHCNQ -s 10.44.2.23/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-QPYANCAVUCDJHCNQ -p udp -m udp -j DNAT --to-destination 10.44.2.23:53
-A KUBE-SEP-TB7SM5ETL6QLHI7C -s 10.44.1.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-TB7SM5ETL6QLHI7C -p tcp -m tcp -j DNAT --to-destination 10.44.1.18:9153
-A KUBE-SEP-YYRNE5UY2T7VHRUE -s 10.44.1.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-YYRNE5UY2T7VHRUE -p udp -m udp -j DNAT --to-destination 10.44.1.18:53
-A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.45.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.53.231/32 -p tcp -m comment --comment "kube-system/metrics-server: cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.45.53.231/32 -p tcp -m comment --comment "kube-system/metrics-server: cluster IP" -m tcp --dport 443 -j KUBE-SVC-LC5QY66VUV2HJ6WZ
-A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.214.208/32 -p tcp -m comment --comment "pf/echoserver:http cluster IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.45.214.208/32 -p tcp -m comment --comment "pf/echoserver:http cluster IP" -m tcp --dport 8080 -j KUBE-SVC-7ECNF7D6GS4GHP3A
-A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.45.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.45.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.45.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-7ECNF7D6GS4GHP3A -j KUBE-SEP-AWXKJFKBRSHYCFIX
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-OC25G42R336ONLNP
-A KUBE-SVC-ERIFXISQEP7F7OF4 -j KUBE-SEP-P2V4QTKODS75TTCV
-A KUBE-SVC-JD5MR3NA4I4DYORP -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-TB7SM5ETL6QLHI7C
-A KUBE-SVC-JD5MR3NA4I4DYORP -j KUBE-SEP-2ZKD5TCM6OYP5Z4F
-A KUBE-SVC-LC5QY66VUV2HJ6WZ -j KUBE-SEP-CN66RQ6OOEA3MWHE
-A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-DWYO5FEH5J22EHP6
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-YYRNE5UY2T7VHRUE
-A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-QPYANCAVUCDJHCNQ
COMMIT

sudo ifconfig

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:fd:d8:9f:77  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.132.35  netmask 255.255.254.0  broadcast 172.25.133.255
        inet6 fe80::ef5:e143:acdd:a51a  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::2483:a3e7:45f:fe20  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::d43f:5c55:1e06:d744  prefixlen 64  scopeid 0x20<link>
        ether 00:15:5d:6a:82:4e  txqueuelen 1000  (Ethernet)
        RX packets 6930994  bytes 6920692627 (6.4 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 335105  bytes 346039095 (330.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

kilo0: flags=209<UP,POINTOPOINT,RUNNING,NOARP>  mtu 1420
        inet 10.4.0.1  netmask 255.255.0.0  destination 10.4.0.1
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 1000  (UNSPEC)
        RX packets 20  bytes 3312 (3.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 23  bytes 2288 (2.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

kube-bridge: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1420
        inet 10.44.0.1  netmask 255.255.255.0  broadcast 10.44.0.255
        inet6 fe80::bc1c:b3ff:fe5d:e206  prefixlen 64  scopeid 0x20<link>
        ether be:1c:b3:5d:e2:06  txqueuelen 1000  (Ethernet)
        RX packets 137428  bytes 20489107 (19.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 135650  bytes 68464111 (65.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 4892759  bytes 1199043137 (1.1 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4892759  bytes 1199043137 (1.1 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1480
        inet 10.44.0.1  netmask 255.255.255.255
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth48a778ff: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1420
        inet6 fe80::f0e7:acff:fef4:ee48  prefixlen 64  scopeid 0x20<link>
        ether f2:e7:ac:f4:ee:48  txqueuelen 0  (Ethernet)
        RX packets 89964  bytes 15910534 (15.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 89612  bytes 25614788 (24.4 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vetha48db6f0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1420
        inet6 fe80::9c48:ffff:fed2:3616  prefixlen 64  scopeid 0x20<link>
        ether 9e:48:ff:d2:36:16  txqueuelen 0  (Ethernet)
        RX packets 47376  bytes 6492961 (6.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 46019  bytes 42837623 (40.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethd75ae550: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1420
        inet6 fe80::4a0:fbff:fe27:5a3f  prefixlen 64  scopeid 0x20<link>
        ether 06:a0:fb:27:5a:3f  txqueuelen 0  (Ethernet)
        RX packets 30  bytes 3138 (3.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 32  bytes 5997 (5.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

sudo ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:6a:82:4e brd ff:ff:ff:ff:ff:ff
    inet 172.25.132.35/23 brd 172.25.133.255 scope global noprefixroute dynamic eth0
       valid_lft 81355sec preferred_lft 81355sec
    inet6 fe80::d43f:5c55:1e06:d744/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::ef5:e143:acdd:a51a/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::2483:a3e7:45f:fe20/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:fd:d8:9f:77 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: kube-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1420 qdisc noqueue state UP group default qlen 1000
    link/ether be:1c:b3:5d:e2:06 brd ff:ff:ff:ff:ff:ff
    inet 10.44.0.1/24 brd 10.44.0.255 scope global kube-bridge
       valid_lft forever preferred_lft forever
    inet6 fe80::bc1c:b3ff:fe5d:e206/64 scope link
       valid_lft forever preferred_lft forever
5: vetha48db6f0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1420 qdisc noqueue master kube-bridge state UP group default
    link/ether 9e:48:ff:d2:36:16 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::9c48:ffff:fed2:3616/64 scope link
       valid_lft forever preferred_lft forever
6: veth48a778ff@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1420 qdisc noqueue master kube-bridge state UP group default
    link/ether f2:e7:ac:f4:ee:48 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::f0e7:acff:fef4:ee48/64 scope link
       valid_lft forever preferred_lft forever
7: kilo0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none
    inet 10.4.0.1/16 brd 10.4.255.255 scope global kilo0
       valid_lft forever preferred_lft forever
8: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.44.0.1/32 brd 10.44.0.1 scope global tunl0
       valid_lft forever preferred_lft forever
12: vethd75ae550@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1420 qdisc noqueue master kube-bridge state UP group default
    link/ether 06:a0:fb:27:5a:3f brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::4a0:fbff:fe27:5a3f/64 scope link
       valid_lft forever preferred_lft forever

sudo ip r

default via 172.25.132.1 dev eth0 proto dhcp metric 100
10.4.0.0/16 dev kilo0 proto kernel scope link src 10.4.0.1
10.44.0.0/24 dev kube-bridge proto kernel scope link src 10.44.0.1
10.44.1.0/24 via 10.4.0.3 dev kilo0 proto static onlink
10.44.2.0/24 via 10.4.0.2 dev kilo0 proto static onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.25.132.0/23 dev eth0 proto kernel scope link src 172.25.132.35 metric 100

sudo wg

interface: kilo0
  public key: c7yyiSaA9nvLVFz60Rkr42+xdvC4BVPaGDKJ+5v5QTU=
  private key: (hidden)
  listening port: 51820

peer: yA6LdCuJT7y+pRNvlhds8GeeEGoT1Q/PUhF++GZ8gB0=
  endpoint: 172.25.132.55:51820
  allowed ips: 10.44.2.0/24, 172.25.132.55/32, 10.4.0.2/32
  latest handshake: 1 hour, 37 minutes, 22 seconds ago
  transfer: 2.38 KiB received, 1.66 KiB sent

peer: IErj++lf80jkWOEEVsH97G6tTbNGViCZ12s2Gedl5kg=
  endpoint: 172.25.132.230:51820
  allowed ips: 10.44.1.0/24, 172.25.132.230/32, 10.4.0.3/32
  latest handshake: 1 hour, 37 minutes, 22 seconds ago
  transfer: 880 B received, 592 B sent

Below are the network settings of node-1 (foundation-musanin-node-1):

sudo iptables-save > ~/temp/20210720_iptables_node1


# Generated by iptables-save v1.4.21 on Tue Jul 20 09:30:41 2021
*security
:INPUT ACCEPT [97587:60160550]
:FORWARD ACCEPT [36309:18066799]
:OUTPUT ACCEPT [82224:74167649]
COMMIT
# Completed on Tue Jul 20 09:30:41 2021
# Generated by iptables-save v1.4.21 on Tue Jul 20 09:30:41 2021
*raw
:PREROUTING ACCEPT [929995:1885420176]
:OUTPUT ACCEPT [82339:74210968]
COMMIT
# Completed on Tue Jul 20 09:30:41 2021
# Generated by iptables-save v1.4.21 on Tue Jul 20 09:30:41 2021
*mangle
:PREROUTING ACCEPT [3205552:6261897059]
:INPUT ACCEPT [511213:559075107]
:FORWARD ACCEPT [111629:55798071]
:OUTPUT ACCEPT [334930:130814428]
:POSTROUTING ACCEPT [446559:186612499]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
COMMIT
# Completed on Tue Jul 20 09:30:41 2021
# Generated by iptables-save v1.4.21 on Tue Jul 20 09:30:41 2021
*filter
:INPUT ACCEPT [8232:5406612]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [6668:2199433]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:KILO-IPIP - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A INPUT -p ipv4 -m comment --comment "Kilo: jump to IPIP chain" -j KILO-IPIP
-A INPUT -p ipv4 -m comment --comment "Kilo: reject other IPIP traffic" -j DROP
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KILO-IPIP -s 172.25.132.55/32 -m comment --comment "Kilo: allow IPIP traffic" -j ACCEPT
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 10.44.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 10.44.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-SERVICES -d 10.45.193.115/32 -p tcp -m comment --comment "ingress-nginx/default-http-backend: has no endpoints" -m tcp --dport 80 -j REJECT --reject-with icmp-port-unreachable
COMMIT
# Completed on Tue Jul 20 09:30:41 2021
# Generated by iptables-save v1.4.21 on Tue Jul 20 09:30:41 2021
*nat
:PREROUTING ACCEPT [54265:145774843]
:INPUT ACCEPT [193:32980]
:OUTPUT ACCEPT [277:20460]
:POSTROUTING ACCEPT [277:20460]
:DOCKER - [0:0]
:KILO-NAT - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-PROXY-CANARY - [0:0]
:KUBE-SEP-2ZKD5TCM6OYP5Z4F - [0:0]
:KUBE-SEP-AWXKJFKBRSHYCFIX - [0:0]
:KUBE-SEP-CN66RQ6OOEA3MWHE - [0:0]
:KUBE-SEP-DWYO5FEH5J22EHP6 - [0:0]
:KUBE-SEP-OC25G42R336ONLNP - [0:0]
:KUBE-SEP-P2V4QTKODS75TTCV - [0:0]
:KUBE-SEP-QPYANCAVUCDJHCNQ - [0:0]
:KUBE-SEP-TB7SM5ETL6QLHI7C - [0:0]
:KUBE-SEP-YYRNE5UY2T7VHRUE - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-7ECNF7D6GS4GHP3A - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-LC5QY66VUV2HJ6WZ - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 10.44.2.0/24 -m comment --comment "Kilo: jump to KILO-NAT chain" -j KILO-NAT
-A DOCKER -i docker0 -j RETURN
-A KILO-NAT -d 10.4.0.1/32 -m comment --comment "Kilo: do not NAT packets destined for WireGuared IPs" -j RETURN
-A KILO-NAT -d 10.44.0.0/24 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -d 172.25.132.35/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -d 10.4.0.1/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -d 10.4.0.2/32 -m comment --comment "Kilo: do not NAT packets destined for WireGuared IPs" -j RETURN
-A KILO-NAT -d 10.44.2.0/24 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -d 172.25.132.55/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -d 10.4.0.2/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -d 10.4.0.3/32 -m comment --comment "Kilo: do not NAT packets destined for WireGuared IPs" -j RETURN
-A KILO-NAT -d 10.44.1.0/24 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -d 172.25.132.230/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -d 10.4.0.3/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN
-A KILO-NAT -m comment --comment "Kilo: NAT remaining packets" -j MASQUERADE
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-2ZKD5TCM6OYP5Z4F -s 10.44.2.23/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-2ZKD5TCM6OYP5Z4F -p tcp -m tcp -j DNAT --to-destination 10.44.2.23:9153
-A KUBE-SEP-AWXKJFKBRSHYCFIX -s 10.44.2.21/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-AWXKJFKBRSHYCFIX -p tcp -m tcp -j DNAT --to-destination 10.44.2.21:8080
-A KUBE-SEP-CN66RQ6OOEA3MWHE -s 10.44.0.13/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-CN66RQ6OOEA3MWHE -p tcp -m tcp -j DNAT --to-destination 10.44.0.13:443
-A KUBE-SEP-DWYO5FEH5J22EHP6 -s 172.25.132.35/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-DWYO5FEH5J22EHP6 -p tcp -m tcp -j DNAT --to-destination 172.25.132.35:6443
-A KUBE-SEP-OC25G42R336ONLNP -s 10.44.1.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-OC25G42R336ONLNP -p tcp -m tcp -j DNAT --to-destination 10.44.1.18:53
-A KUBE-SEP-P2V4QTKODS75TTCV -s 10.44.2.23/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-P2V4QTKODS75TTCV -p tcp -m tcp -j DNAT --to-destination 10.44.2.23:53
-A KUBE-SEP-QPYANCAVUCDJHCNQ -s 10.44.2.23/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-QPYANCAVUCDJHCNQ -p udp -m udp -j DNAT --to-destination 10.44.2.23:53
-A KUBE-SEP-TB7SM5ETL6QLHI7C -s 10.44.1.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-TB7SM5ETL6QLHI7C -p tcp -m tcp -j DNAT --to-destination 10.44.1.18:9153
-A KUBE-SEP-YYRNE5UY2T7VHRUE -s 10.44.1.18/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-YYRNE5UY2T7VHRUE -p udp -m udp -j DNAT --to-destination 10.44.1.18:53
-A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.45.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.45.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.45.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.53.231/32 -p tcp -m comment --comment "kube-system/metrics-server: cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.45.53.231/32 -p tcp -m comment --comment "kube-system/metrics-server: cluster IP" -m tcp --dport 443 -j KUBE-SVC-LC5QY66VUV2HJ6WZ
-A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.214.208/32 -p tcp -m comment --comment "pf/echoserver:http cluster IP" -m tcp --dport 8080 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.45.214.208/32 -p tcp -m comment --comment "pf/echoserver:http cluster IP" -m tcp --dport 8080 -j KUBE-SVC-7ECNF7D6GS4GHP3A
-A KUBE-SERVICES ! -s 10.44.0.0/16 -d 10.45.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.45.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-7ECNF7D6GS4GHP3A -j KUBE-SEP-AWXKJFKBRSHYCFIX
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-OC25G42R336ONLNP
-A KUBE-SVC-ERIFXISQEP7F7OF4 -j KUBE-SEP-P2V4QTKODS75TTCV
-A KUBE-SVC-JD5MR3NA4I4DYORP -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-TB7SM5ETL6QLHI7C
-A KUBE-SVC-JD5MR3NA4I4DYORP -j KUBE-SEP-2ZKD5TCM6OYP5Z4F
-A KUBE-SVC-LC5QY66VUV2HJ6WZ -j KUBE-SEP-CN66RQ6OOEA3MWHE
-A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-DWYO5FEH5J22EHP6
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-YYRNE5UY2T7VHRUE
-A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-QPYANCAVUCDJHCNQ
COMMIT
# Completed on Tue Jul 20 09:30:41 2021

sudo ifconfig

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:d3:e3:11:ff  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.25.132.55  netmask 255.255.254.0  broadcast 172.25.133.255
        inet6 fe80::ef5:e143:acdd:a51a  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::2483:a3e7:45f:fe20  prefixlen 64  scopeid 0x20<link>
        inet6 fe80::d43f:5c55:1e06:d744  prefixlen 64  scopeid 0x20<link>
        ether 00:15:5d:6a:82:4f  txqueuelen 1000  (Ethernet)
        RX packets 7373002  bytes 7733829348 (7.2 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 267181  bytes 71056769 (67.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

kilo0: flags=209<UP,POINTOPOINT,RUNNING,NOARP>  mtu 1420
        inet 10.4.0.2  netmask 255.255.0.0  destination 10.4.0.2
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 1000  (UNSPEC)
        RX packets 131  bytes 17188 (16.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 126  bytes 19100 (18.6 KiB)
        TX errors 1  dropped 0 overruns 0  carrier 0  collisions 0

kube-bridge: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1420
        inet 10.44.2.1  netmask 255.255.255.0  broadcast 10.44.2.255
        inet6 fe80::1487:8cff:fe05:c2a8  prefixlen 64  scopeid 0x20<link>
        ether 16:87:8c:05:c2:a8  txqueuelen 1000  (Ethernet)
        RX packets 93131  bytes 6175021 (5.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 96966  bytes 66719614 (63.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 157924  bytes 81506918 (77.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 157924  bytes 81506918 (77.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tunl0: flags=193<UP,RUNNING,NOARP>  mtu 1480
        inet 10.44.2.1  netmask 255.255.255.255
        tunnel   txqueuelen 1000  (IPIP Tunnel)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth2c02f12a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1420
        inet6 fe80::88b3:b4ff:fe84:309e  prefixlen 64  scopeid 0x20<link>
        ether 8a:b3:b4:84:30:9e  txqueuelen 0  (Ethernet)
        RX packets 71509  bytes 5683947 (5.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 72642  bytes 25937437 (24.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth6dd5fd0c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1420
        inet6 fe80::48dd:7cff:fea4:b183  prefixlen 64  scopeid 0x20<link>
        ether 4a:dd:7c:a4:b1:83  txqueuelen 0  (Ethernet)
        RX packets 51  bytes 4469 (4.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 37  bytes 5999 (5.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth9acbfe8d: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1420
        inet6 fe80::f4b5:afff:fe7c:2599  prefixlen 64  scopeid 0x20<link>
        ether f6:b5:af:7c:25:99  txqueuelen 0  (Ethernet)
        RX packets 86  bytes 13277 (12.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 144  bytes 10648 (10.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vethc96781bd: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1420
        inet6 fe80::bca9:9dff:fea5:7ebc  prefixlen 64  scopeid 0x20<link>
        ether be:a9:9d:a5:7e:bc  txqueuelen 0  (Ethernet)
        RX packets 21413  bytes 1776386 (1.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 24271  bytes 40768549 (38.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

sudo ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:6a:82:4f brd ff:ff:ff:ff:ff:ff
    inet 172.25.132.55/23 brd 172.25.133.255 scope global noprefixroute dynamic eth0
       valid_lft 80954sec preferred_lft 80954sec
    inet6 fe80::d43f:5c55:1e06:d744/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::ef5:e143:acdd:a51a/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::2483:a3e7:45f:fe20/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:d3:e3:11:ff brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
4: kube-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1420 qdisc noqueue state UP group default qlen 1000
    link/ether 16:87:8c:05:c2:a8 brd ff:ff:ff:ff:ff:ff
    inet 10.44.2.1/24 brd 10.44.2.255 scope global kube-bridge
       valid_lft forever preferred_lft forever
    inet6 fe80::1487:8cff:fe05:c2a8/64 scope link
       valid_lft forever preferred_lft forever
5: veth9acbfe8d@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1420 qdisc noqueue master kube-bridge state UP group default
    link/ether f6:b5:af:7c:25:99 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::f4b5:afff:fe7c:2599/64 scope link
       valid_lft forever preferred_lft forever
6: vethc96781bd@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1420 qdisc noqueue master kube-bridge state UP group default
    link/ether be:a9:9d:a5:7e:bc brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::bca9:9dff:fea5:7ebc/64 scope link
       valid_lft forever preferred_lft forever
7: veth2c02f12a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1420 qdisc noqueue master kube-bridge state UP group default
    link/ether 8a:b3:b4:84:30:9e brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::88b3:b4ff:fe84:309e/64 scope link
       valid_lft forever preferred_lft forever
8: kilo0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none
    inet 10.4.0.2/16 brd 10.4.255.255 scope global kilo0
       valid_lft forever preferred_lft forever
9: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.44.2.1/32 brd 10.44.2.1 scope global tunl0
       valid_lft forever preferred_lft forever
17: veth6dd5fd0c@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1420 qdisc noqueue master kube-bridge state UP group default
    link/ether 4a:dd:7c:a4:b1:83 brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::48dd:7cff:fea4:b183/64 scope link
       valid_lft forever preferred_lft forever

sudo ip r

default via 172.25.132.1 dev eth0 proto dhcp metric 100
10.4.0.0/16 dev kilo0 proto kernel scope link src 10.4.0.2
10.44.0.0/24 via 10.4.0.1 dev kilo0 proto static onlink
10.44.1.0/24 via 10.4.0.3 dev kilo0 proto static onlink
10.44.2.0/24 dev kube-bridge proto kernel scope link src 10.44.2.1
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.25.132.0/23 dev eth0 proto kernel scope link src 172.25.132.55 metric 100

sudo wg

interface: kilo0
  public key: yA6LdCuJT7y+pRNvlhds8GeeEGoT1Q/PUhF++GZ8gB0=
  private key: (hidden)
  listening port: 51820

peer: IErj++lf80jkWOEEVsH97G6tTbNGViCZ12s2Gedl5kg=
  endpoint: 172.25.132.230:51820
  allowed ips: 10.44.1.0/24, 172.25.132.230/32, 10.4.0.3/32
  latest handshake: 1 hour, 44 minutes, 22 seconds ago
  transfer: 2.30 KiB received, 2.35 KiB sent

peer: c7yyiSaA9nvLVFz60Rkr42+xdvC4BVPaGDKJ+5v5QTU=
  endpoint: 172.25.132.35:51820
  allowed ips: 10.44.0.0/24, 172.25.132.35/32, 10.4.0.1/32
  latest handshake: 1 hour, 44 minutes, 29 seconds ago
  transfer: 1.66 KiB received, 2.38 KiB sent

kgctl graph

digraph kilo {
        label="10.4.0.0/16";
        labelloc=t;
        outputorder=nodesfirst;
        overlap=false;
        "foundation-musanin-master"->"foundation-musanin-node-1"[ dir=both ];
        "foundation-musanin-master"->"foundation-musanin-node-2"[ dir=both ];
        "foundation-musanin-node-1"->"foundation-musanin-node-2"[ dir=both ];
        subgraph "cluster_location_location:foundation-musanin-master" {
        label="location:foundation-musanin-master";
        style="dashed,rounded";
        "foundation-musanin-master" [ label="location:foundation-musanin-master\nfoundation-musanin-master\n10.44.0.0/24\n172.25.132.35\n10.4.0.1\n172.25.132.35:51820", rank=1, shape=ellipse ];

}
;
        subgraph "cluster_location_location:foundation-musanin-node-1" {
        label="location:foundation-musanin-node-1";
        style="dashed,rounded";
        "foundation-musanin-node-1" [ label="location:foundation-musanin-node-1\nfoundation-musanin-node-1\n10.44.2.0/24\n172.25.132.55\n10.4.0.2\n172.25.132.55:51820", rank=1, shape=ellipse ];

}
;
        subgraph "cluster_location_location:foundation-musanin-node-2" {
        label="location:foundation-musanin-node-2";
        style="dashed,rounded";
        "foundation-musanin-node-2" [ label="location:foundation-musanin-node-2\nfoundation-musanin-node-2\n10.44.1.0/24\n172.25.132.230\n10.4.0.3\n172.25.132.230:51820", rank=1, shape=ellipse ];

}
;
        subgraph "cluster_peers" {
        label="peers";
        style="dashed,rounded";

}
;

}
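
The digraph above can be rendered with Graphviz (circo ships with Graphviz), e.g.:

kgctl graph | circo -Tsvg > cluster.svg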

I am stuck on the iptables routing, and it is unclear to me how to set up access from node-1 (foundation-musanin-node-1) to the kube-apiserver (kubernetes.default -> 10.45.0.1:443 -> 172.25.132.35:6443).

Hoping for your help.

Thanks in advance,
Vladimir.

@leonnicolas
Collaborator

leonnicolas commented Jul 21, 2021

Maybe it is not related, but some things seem odd to me.
Your nodes have private IPs and you are using mesh-granularity=location (the default), yet the nodes are still WireGuard peers of each other. Can you share the labels of your nodes? Maybe you put them in different locations even though they are in the same private network? The graph you shared also suggests this.
I am not sure whether it can work to have nodes in the same "real" location but in different "Kilo" locations.

EDIT: If you still want to encrypt all traffic, use mesh-granularity=full instead of the different locations.
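
In the DaemonSet above that would be an extra kg argument, e.g. (a sketch; check kg --help for the exact flag name and values):

        args:
        - --mesh-granularity=full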

@squat
Owner

squat commented Jul 21, 2021

Hmm, the funny thing here is that your private IPs are being used as the public endpoints for WireGuard. This creates some tricky situations. The problem here (I think) is that the master node is dropping martian packets. Imagine the following situation:

  • a Pod on node1 (source: 10.44.2.31) makes an HTTP request to the k8s API;
  • this DNS name is resolved to 10.45.0.1, which in turn gets forwarded to 172.25.132.35;
  • this request creates a packet that is routed directly over the eth0 interface (routing table entry 172.25.132.0/23 dev eth0 proto kernel scope link);
  • the IP packet is not masqueraded (iptables rule -A KILO-NAT -d 172.25.132.35/32 -m comment --comment "Kilo: do not NAT packets destined for known IPs" -j RETURN);
  • master receives this packet on the eth0 interface;
  • the Linux kernel on master identifies the source IP of the packet, 10.44.2.31, and sees that packets to this IP should be routed over the kilo0 interface (routing table entry 10.44.2.0/24 via 10.4.0.2 dev kilo0 proto static);
  • the fact that the incoming interface and the outgoing interface do not match causes the kernel to label this packet a martian packet and to drop it (this can be confirmed as sketched below).
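
A quick way to confirm the martian-packet theory is to enable martian logging on master and re-run the failing curl (log_martians is a standard kernel sysctl):

sudo sysctl -w net.ipv4.conf.all.log_martians=1
sudo sysctl -w net.ipv4.conf.eth0.log_martians=1
## after re-running the request:
dmesg | grep -i martian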

This is all a funny side effect of the reuse of the private IPs in the cluster as public endpoints.
In order for this to work, node1 would have to send packets to master's IP address over the WireGuard network, but we can't do this because this IP address is the tunnel's endpoint.
The only way around this is to ensure that IP addresses are ONLY either endpoints OR private IPs but not both.
Questions for you:

  • Do you have a true private network connecting these nodes or is the network shared?
  • If the network is private, then why not put all of the nodes in a shared Kilo location?
  • Instead, if you really want to use the VPN between these nodes, you could disable their private IPs (https://kilo.squat.ai/docs/annotations#force-internal-ip); a sketch follows this list. Note: this means that requests to/from node IP addresses will be NATed and so they will work; however, these packets will not go over the WireGuard tunnel.
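
A sketch of that last option, assuming (per the linked annotations docs; please double-check there) that an empty annotation value disables a node's private IP:

for n in foundation-musanin-master foundation-musanin-node-1 foundation-musanin-node-2; do
  kubectl annotate node "$n" kilo.squat.ai/force-internal-ip="" --overwrite
done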

I wonder if there's a nicer way we could deal with this in Kilo to enable fully private clusters @leonnicolas

@squat
Owner

squat commented Jul 21, 2021

Also, can you share the logs from the Kilo pod on master? Ideally with debug log level :))
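
For reference, with the pod listing above that would be something like (the DaemonSet already runs kg with --log-level=debug):

kubectl -n kube-system logs kilo-tsn5v --tail=200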

@vladimir22
Author

vladimir22 commented Jul 22, 2021

@squat I appreciate your detailed response. Yes, you are right; we are using:

  • RKE clusters in a vCloud environment
  • the public and internal IPs are the same
  • no load balancers outside the clusters (access is by direct IP)

The idea was to cover all internal RKE nodes with VPN connections and to add custom external peers.

The configuration above corresponds to the following location settings:

kubectl label node foundation-musanin-master kilo.squat.ai/location="foundation-musanin-master"  --overwrite
kubectl label node foundation-musanin-node-1 kilo.squat.ai/location="foundation-musanin-node-1"  --overwrite
kubectl label node foundation-musanin-node-2 kilo.squat.ai/location="foundation-musanin-node-2"  --overwrite

In that case, the VPN connections between the nodes were already working, and only one minor thing (kube-apiserver access from non-master pods) spoiled all the stuff :)

@vladimir22
Author

Let me share another batch of data for other cases:

  1. When I use all nodes in a single location, I run into an issue: service discovery is not working, and I CANNOT access any pod because wg is not configured properly. Details below:

kubectl label node foundation-musanin-master kilo.squat.ai/location="musanin"  --overwrite
kubectl label node foundation-musanin-node-1 kilo.squat.ai/location="musanin"  --overwrite
kubectl label node foundation-musanin-node-2 kilo.squat.ai/location="musanin"  --overwrite
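
After relabeling, the Kilo pods can be restarted so they pick up the new topology immediately:

kubectl -n kube-system rollout restart ds/kilo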

kgctl graph

digraph kilo {
        label="10.4.0.0/16";
        labelloc=t;
        outputorder=nodesfirst;
        overlap=false;
        "foundation-musanin-master"->"foundation-musanin-node-1"[ dir=both ];
        "foundation-musanin-master"->"foundation-musanin-node-2"[ dir=both ];
        subgraph "cluster_location_location:" {
        label="location:";
        style="dashed,rounded";
        "foundation-musanin-master" [ label="location:\nfoundation-musanin-master\n10.44.0.0/24\n172.25.132.35\n10.4.0.1\n172.25.132.35:51820", rank=1, shape=ellipse ];
        "foundation-musanin-node-1" [ label="location:\nfoundation-musanin-node-1\n10.44.1.0/24\n172.25.132.55", shape=ellipse ];
        "foundation-musanin-node-2" [ label="location:\nfoundation-musanin-node-2\n10.44.2.0/24\n172.25.132.230", shape=ellipse ];

}
;
        subgraph "cluster_peers" {
        label="peers";
        style="dashed,rounded";

}
;

}
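
For reference, the dot output above can be rendered with Graphviz; the circo pipeline below is the one shown in the Kilo docs:

kgctl graph | circo -Tsvg > cluster.svg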

kubectl get pods --all-namespaces -o wide

NAMESPACE            NAME                                      READY   STATUS      RESTARTS   AGE     IP               NODE                        NOMINATED NODE   READINESS GATES
default              master                                    1/1     Running     0          13m     10.44.0.4        foundation-musanin-master   <none>           <none>
default              node1                                     1/1     Running     0          13m     10.44.1.4        foundation-musanin-node-1   <none>           <none>
default              node2                                     1/1     Running     0          13m     10.44.2.5        foundation-musanin-node-2   <none>           <none>
kube-system          coredns-7c5566588d-6zsf8                  1/1     Running     0          136m    10.44.1.2        foundation-musanin-node-1   <none>           <none>
kube-system          coredns-7c5566588d-jsf5t                  1/1     Running     0          136m    10.44.0.2        foundation-musanin-master   <none>           <none>
kube-system          coredns-autoscaler-65bfc8d47d-h525p       1/1     Running     0          136m    10.44.2.2        foundation-musanin-node-2   <none>           <none>
kube-system          kilo-rm7xl                                1/1     Running     0          3m43s   172.25.132.230   foundation-musanin-node-2   <none>           <none>
kube-system          kilo-tb5v9                                1/1     Running     0          3m43s   172.25.132.35    foundation-musanin-master   <none>           <none>
kube-system          kilo-tvgpl                                1/1     Running     0          3m43s   172.25.132.55    foundation-musanin-node-1   <none>           <none>
kube-system          metrics-server-6b55c64f86-6t49b           1/1     Running     0          136m    10.44.2.3        foundation-musanin-node-2   <none>           <none>
kube-system          rke-coredns-addon-deploy-job-hb6tp        0/1     Completed   0          136m    172.25.132.35    foundation-musanin-master   <none>           <none>
kube-system          rke-ingress-controller-deploy-job-lbxn6   0/1     Completed   0          136m    172.25.132.35    foundation-musanin-master   <none>           <none>
kube-system          rke-metrics-addon-deploy-job-tv2b8        0/1     Completed   0          136m    172.25.132.35    foundation-musanin-master   <none>           <none>
kube-system          rke-network-plugin-deploy-job-tlchx       0/1     Completed   0          136m    172.25.132.35    foundation-musanin-master   <none>           <none>
local-path-storage   local-path-provisioner-5bd6f65fdf-525fc   1/1     Running     0          133m    10.44.0.3        foundation-musanin-master   <none>           <none>
pf                   echoserver-977db48cd-fcvsj                1/1     Running     0          15m     10.44.2.4        foundation-musanin-node-2   <none>           <none>

kubectl get service -n pf -o wide

NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE   SELECTOR
echoserver   ClusterIP   10.45.241.43   <none>        8080/TCP   98m   app=echoserver
## failed access by hostname
kubectl exec -it master -- curl -kv http://echoserver.pf:8080
kubectl exec -it node1 -- curl -kv http://echoserver.pf:8080
kubectl exec -it node2 -- curl -kv http://echoserver.pf:8080

* Could not resolve host: echoserver.pf
* Closing connection 0
curl: (6) Could not resolve host: echoserver.pf
command terminated with exit code 6
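
The resolution failure presumably means the Pod cannot reach CoreDNS on another node at all. A quick check I could run (a sketch; it assumes the cluster DNS Service sits at 10.45.0.10 and that the test image ships nslookup):

## query the cluster DNS Service directly from inside the Pod
kubectl exec -it node1 -- nslookup echoserver.pf.svc.cluster.local 10.45.0.10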


## failed access by service IP
kubectl exec -it master -- curl -kv http://10.45.241.43:8080
kubectl exec -it node1 -- curl -kv http://10.45.241.43:8080
kubectl exec -it node2 -- curl -kv http://10.45.241.43:8080

*   Trying 10.45.241.43:8080...
* connect to 10.45.241.43 port 8080 failed: Host is unreachable
* Failed to connect to 10.45.241.43 port 8080: Host is unreachable
* Closing connection 0
curl: (7) Failed to connect to 10.45.241.43 port 8080: Host is unreachable
command terminated with exit code 7


## only the POD on the same node has access by POD IP
kubectl exec -it master -- curl -kv http://10.44.2.4:8080
kubectl exec -it node1 -- curl -kv http://10.44.2.4:8080


*   Trying 10.44.2.4:8080...
* connect to 10.44.2.4 port 8080 failed: Host is unreachable
* Failed to connect to 10.44.2.4 port 8080: Host is unreachable
* Closing connection 0
curl: (7) Failed to connect to 10.44.2.4 port 8080: Host is unreachable
command terminated with exit code 7

kubectl exec -it node2 -- curl -kv http://10.44.2.4:8080 - OK

foundation-musanin-master configuration:

sudo wg

interface: kilo0
  public key: c7yyiSaA9nvLVFz60Rkr42+xdvC4BVPaGDKJ+5v5QTU=
  private key: (hidden)
  listening port: 51820

sudo ip r

default via 172.25.132.1 dev eth0 proto dhcp metric 100
10.4.0.0/16 dev kilo0 proto kernel scope link src 10.4.0.1
10.44.0.0/24 dev cni0 proto kernel scope link src 10.44.0.1
10.44.0.0/24 dev kube-bridge proto kernel scope link src 10.44.0.1
10.44.1.0/24 via 172.25.132.55 dev tunl0 proto static onlink
10.44.2.0/24 via 172.25.132.230 dev tunl0 proto static onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.25.132.0/23 dev eth0 proto kernel scope link src 172.25.132.35 metric 100

sudo ip a

43: kilo0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
    link/none
    inet 10.4.0.1/16 brd 10.4.255.255 scope global kilo0
       valid_lft forever preferred_lft forever

foundation-musanin-node-1 configuration:

sudo wg

interface: kilo0
  public key: yA6LdCuJT7y+pRNvlhds8GeeEGoT1Q/PUhF++GZ8gB0=
  private key: (hidden)
  listening port: 51820

sudo ip r

default via 172.25.132.1 dev eth0 proto dhcp metric 100
10.44.1.0/24 dev cni0 proto kernel scope link src 10.44.1.1
10.44.1.0/24 dev kube-bridge proto kernel scope link src 10.44.1.1
10.44.2.0/24 via 172.25.132.230 dev tunl0 proto static onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.25.132.0/23 dev eth0 proto kernel scope link src 172.25.132.55 metric 100

sudo ip a

27: kilo0: <POINTOPOINT,NOARP> mtu 1420 qdisc noqueue state DOWN group default qlen 1000
    link/none
    inet 10.4.0.1/16 brd 10.4.255.255 scope global kilo0
       valid_lft forever preferred_lft forever

foundation-musanin-node-2 configuration:

sudo wg

interface: kilo0
  public key: IErj++lf80jkWOEEVsH97G6tTbNGViCZ12s2Gedl5kg=
  private key: (hidden)
  listening port: 51820

sudo ip r

default via 172.25.132.1 dev eth0 proto dhcp metric 100
10.4.0.1 via 172.25.132.35 dev tunl0 proto static onlink
10.44.0.0/24 via 172.25.132.35 dev tunl0 proto static onlink
10.44.1.0/24 via 172.25.132.55 dev tunl0 proto static onlink
10.44.2.0/24 dev cni0 proto kernel scope link src 10.44.2.1
10.44.2.0/24 dev kube-bridge proto kernel scope link src 10.44.2.1
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.25.132.0/23 dev eth0 proto kernel scope link src 172.25.132.230 metric 100

sudo ip a

28: kilo0: <POINTOPOINT,NOARP> mtu 1420 qdisc noqueue state DOWN group default qlen 1000
    link/none
    inet 10.4.0.1/16 brd 10.4.255.255 scope global kilo0
       valid_lft forever preferred_lft forever
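
Note that none of the sudo wg dumps above show a [Peer] section, and kilo0 is DOWN on both workers. For comparison, the configuration Kilo intends to apply to a node can be printed with kgctl (a sketch using its showconf subcommand):

kgctl showconf node foundation-musanin-master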

LOGS:

kubectl logs -n kube-system kilo-rm7xl

{"caller":"mesh.go:141","component":"kilo","level":"debug","msg":"using 172.25.132.230/23 as the private IP address","ts":"2021-07-22T09:42:22.29522959Z"}
{"caller":"mesh.go:146","component":"kilo","level":"debug","msg":"using 172.25.132.230/23 as the public IP address","ts":"2021-07-22T09:42:22.295347894Z"}
{"caller":"main.go:223","msg":"Starting Kilo network mesh '6309529a3ff0fd98a78ef2f352d5996387ef0293'.","ts":"2021-07-22T09:42:22.299384255Z"}
{"caller":"cni.go:60","component":"kilo","err":"failed to read IPAM config from CNI config list file: no IP ranges specified","level":"warn","msg":"failed to get CIDR from CNI file; overwriting it","ts":"2021-07-22T09:42:22.400952104Z"}
{"caller":"cni.go:68","component":"kilo","level":"info","msg":"CIDR in CNI file is empty","ts":"2021-07-22T09:42:22.401003506Z"}
{"CIDR":"10.44.2.0/24","caller":"cni.go:73","component":"kilo","level":"info","msg":"setting CIDR in CNI file","ts":"2021-07-22T09:42:22.401019107Z"}
{"caller":"mesh.go:268","component":"kilo","event":"add","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:42:22.509533933Z"}
{"caller":"mesh.go:279","component":"kilo","event":"add","in-mesh":false,"level":"debug","msg":"received non ready node","node":{"Endpoint":null,"Key":"","NoInternalIP":false,"InternalIP":null,"LastSeen":0,"Leader":false,"Location":"","Name":"foundation-musanin-node-1","PersistentKeepalive":0,"Subnet":{"IP":"10.44.1.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":""},"ts":"2021-07-22T09:42:22.509630436Z"}
{"caller":"mesh.go:297","component":"kilo","event":"add","level":"info","node":{"Endpoint":null,"Key":"","NoInternalIP":false,"InternalIP":null,"LastSeen":0,"Leader":false,"Location":"","Name":"foundation-musanin-node-1","PersistentKeepalive":0,"Subnet":{"IP":"10.44.1.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":""},"ts":"2021-07-22T09:42:22.509772342Z"}
{"caller":"mesh.go:268","component":"kilo","event":"add","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:42:22.509812444Z"}
{"caller":"mesh.go:270","component":"kilo","event":"add","level":"debug","msg":"processing local node","node":{"Endpoint":null,"Key":"","NoInternalIP":false,"InternalIP":null,"LastSeen":0,"Leader":false,"Location":"","Name":"foundation-musanin-node-2","PersistentKeepalive":0,"Subnet":{"IP":"10.44.2.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":""},"ts":"2021-07-22T09:42:22.509824844Z"}
{"caller":"mesh.go:387","component":"kilo","level":"debug","msg":"local node differs from backend","ts":"2021-07-22T09:42:22.509857446Z"}
{"caller":"mesh.go:393","component":"kilo","level":"debug","msg":"successfully reconciled local node against backend","ts":"2021-07-22T09:42:22.520490269Z"}
{"DiscoveredEndpoints":{},"caller":"mesh.go:803","component":"kilo","level":"debug","msg":"Discovered WireGuard NAT Endpoints","ts":"2021-07-22T09:42:22.521797321Z"}
{"caller":"mesh.go:536","component":"kilo","level":"info","msg":"WireGuard configurations are different","ts":"2021-07-22T09:42:22.576117087Z"}
{"caller":"mesh.go:268","component":"kilo","event":"add","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:42:22.583246571Z"}
{"caller":"mesh.go:279","component":"kilo","event":"add","in-mesh":false,"level":"debug","msg":"received non ready node","node":{"Endpoint":null,"Key":"","NoInternalIP":false,"InternalIP":null,"LastSeen":0,"Leader":false,"Location":"","Name":"foundation-musanin-master","PersistentKeepalive":0,"Subnet":{"IP":"10.44.0.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":""},"ts":"2021-07-22T09:42:22.583339575Z"}
{"caller":"mesh.go:297","component":"kilo","event":"add","level":"info","node":{"Endpoint":null,"Key":"","NoInternalIP":false,"InternalIP":null,"LastSeen":0,"Leader":false,"Location":"","Name":"foundation-musanin-master","PersistentKeepalive":0,"Subnet":{"IP":"10.44.0.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":""},"ts":"2021-07-22T09:42:22.583374476Z"}
{"DiscoveredEndpoints":{},"caller":"mesh.go:803","component":"kilo","level":"debug","msg":"Discovered WireGuard NAT Endpoints","ts":"2021-07-22T09:42:22.584253011Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:42:22.585213549Z"}
{"caller":"mesh.go:270","component":"kilo","event":"update","level":"debug","msg":"processing local node","node":{"Endpoint":{"DNS":"","IP":"172.25.132.230","Port":51820},"Key":"SUVyaisrbGY4MGprV09FRVZzSDk3RzZ0VGJOR1ZpQ1oxMnMyR2VkbDVrZz0=","NoInternalIP":false,"InternalIP":{"IP":"172.25.132.230","Mask":"///+AA=="},"LastSeen":1626946942,"Leader":false,"Location":"","Name":"foundation-musanin-node-2","PersistentKeepalive":0,"Subnet":{"IP":"10.44.2.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":"location"},"ts":"2021-07-22T09:42:22.585255751Z"}
{"caller":"mesh.go:387","component":"kilo","level":"debug","msg":"local node differs from backend","ts":"2021-07-22T09:42:22.585350755Z"}
{"caller":"mesh.go:393","component":"kilo","level":"debug","msg":"successfully reconciled local node against backend","ts":"2021-07-22T09:42:22.596500099Z"}
{"DiscoveredEndpoints":{},"caller":"mesh.go:803","component":"kilo","level":"debug","msg":"Discovered WireGuard NAT Endpoints","ts":"2021-07-22T09:42:22.598070762Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:42:22.5990341Z"}
{"caller":"mesh.go:297","component":"kilo","event":"update","level":"info","node":{"Endpoint":{"DNS":"","IP":"172.25.132.55","Port":51820},"Key":"eUE2TGRDdUpUN3krcFJOdmxoZHM4R2VlRUdvVDFRL1BVaEYrK0daOGdCMD0=","NoInternalIP":false,"InternalIP":{"IP":"172.25.132.55","Mask":"///+AA=="},"LastSeen":1626946942,"Leader":false,"Location":"","Name":"foundation-musanin-node-1","PersistentKeepalive":0,"Subnet":{"IP":"10.44.1.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":"location"},"ts":"2021-07-22T09:42:22.599087803Z"}
{"DiscoveredEndpoints":{},"caller":"mesh.go:803","component":"kilo","level":"debug","msg":"Discovered WireGuard NAT Endpoints","ts":"2021-07-22T09:42:22.60003574Z"}
{"caller":"mesh.go:550","component":"kilo","level":"debug","msg":"local node is not the leader","ts":"2021-07-22T09:42:22.691546388Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:42:22.70363607Z"}
{"caller":"mesh.go:270","component":"kilo","event":"update","level":"debug","msg":"processing local node","node":{"Endpoint":{"DNS":"","IP":"172.25.132.230","Port":51820},"Key":"SUVyaisrbGY4MGprV09FRVZzSDk3RzZ0VGJOR1ZpQ1oxMnMyR2VkbDVrZz0=","NoInternalIP":false,"InternalIP":{"IP":"172.25.132.230","Mask":"///+AA=="},"LastSeen":1626946942,"Leader":false,"Location":"","Name":"foundation-musanin-node-2","PersistentKeepalive":0,"Subnet":{"IP":"10.44.2.0","Mask":"////AA=="},"WireGuardIP":{"IP":"10.4.0.1","Mask":"//8AAA=="},"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":"location"},"ts":"2021-07-22T09:42:22.703711473Z"}
{"caller":"mesh.go:387","component":"kilo","level":"debug","msg":"local node differs from backend","ts":"2021-07-22T09:42:22.703762875Z"}
{"caller":"mesh.go:393","component":"kilo","level":"debug","msg":"successfully reconciled local node against backend","ts":"2021-07-22T09:42:22.718906679Z"}
{"DiscoveredEndpoints":{},"caller":"mesh.go:803","component":"kilo","level":"debug","msg":"Discovered WireGuard NAT Endpoints","ts":"2021-07-22T09:42:22.721843296Z"}


kubectl logs -n kube-system kilo-tb5v9

{"caller":"mesh.go:356","component":"kilo","level":"debug","msg":"successfully checked in local node in backend","ts":"2021-07-22T09:56:22.875063564Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:56:22.87513167Z"}
{"caller":"mesh.go:270","component":"kilo","event":"update","level":"debug","msg":"processing local node","node":{"Endpoint":{"DNS":"","IP":"172.25.132.35","Port":51820},"Key":"Yzd5eWlTYUE5bnZMVkZ6NjBSa3I0Mit4ZHZDNEJWUGFHREtKKzV2NVFUVT0=","NoInternalIP":false,"InternalIP":{"IP":"172.25.132.35","Mask":"///+AA=="},"LastSeen":1626947782,"Leader":false,"Location":"","Name":"foundation-musanin-master","PersistentKeepalive":0,"Subnet":{"IP":"10.44.0.0","Mask":"////AA=="},"WireGuardIP":{"IP":"10.4.0.1","Mask":"//8AAA=="},"DiscoveredEndpoints":{},"AllowedLocationIPs":null,"Granularity":"location"},"ts":"2021-07-22T09:56:22.875154972Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:56:23.030299863Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:56:23.083005071Z"}
{"DiscoveredEndpoints":{},"caller":"mesh.go:803","component":"kilo","level":"debug","msg":"Discovered WireGuard NAT Endpoints","ts":"2021-07-22T09:56:52.682453593Z"}
{"caller":"mesh.go:356","component":"kilo","level":"debug","msg":"successfully checked in local node in backend","ts":"2021-07-22T09:56:52.883267527Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:56:52.884284105Z"}
{"caller":"mesh.go:270","component":"kilo","event":"update","level":"debug","msg":"processing local node","node":{"Endpoint":{"DNS":"","IP":"172.25.132.35","Port":51820},"Key":"Yzd5eWlTYUE5bnZMVkZ6NjBSa3I0Mit4ZHZDNEJWUGFHREtKKzV2NVFUVT0=","NoInternalIP":false,"InternalIP":{"IP":"172.25.132.35","Mask":"///+AA=="},"LastSeen":1626947812,"Leader":false,"Location":"","Name":"foundation-musanin-master","PersistentKeepalive":0,"Subnet":{"IP":"10.44.0.0","Mask":"////AA=="},"WireGuardIP":{"IP":"10.4.0.1","Mask":"//8AAA=="},"DiscoveredEndpoints":{},"AllowedLocationIPs":null,"Granularity":"location"},"ts":"2021-07-22T09:56:52.884331009Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:56:53.049789826Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:56:53.099891577Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:57:07.885698782Z"}
{"caller":"mesh.go:270","component":"kilo","event":"update","level":"debug","msg":"processing local node","node":{"Endpoint":{"DNS":"","IP":"172.25.132.35","Port":51820},"Key":"Yzd5eWlTYUE5bnZMVkZ6NjBSa3I0Mit4ZHZDNEJWUGFHREtKKzV2NVFUVT0=","NoInternalIP":false,"InternalIP":{"IP":"172.25.132.35","Mask":"///+AA=="},"LastSeen":1626947812,"Leader":false,"Location":"","Name":"foundation-musanin-master","PersistentKeepalive":0,"Subnet":{"IP":"10.44.0.0","Mask":"////AA=="},"WireGuardIP":{"IP":"10.4.0.1","Mask":"//8AAA=="},"DiscoveredEndpoints":{},"AllowedLocationIPs":null,"Granularity":"location"},"ts":"2021-07-22T09:57:07.885764487Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:57:22.398091373Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:57:22.398200282Z"}
{"caller":"mesh.go:270","component":"kilo","event":"update","level":"debug","msg":"processing local node","node":{"Endpoint":{"DNS":"","IP":"172.25.132.35","Port":51820},"Key":"Yzd5eWlTYUE5bnZMVkZ6NjBSa3I0Mit4ZHZDNEJWUGFHREtKKzV2NVFUVT0=","NoInternalIP":false,"InternalIP":{"IP":"172.25.132.35","Mask":"///+AA=="},"LastSeen":1626947812,"Leader":false,"Location":"","Name":"foundation-musanin-master","PersistentKeepalive":0,"Subnet":{"IP":"10.44.0.0","Mask":"////AA=="},"WireGuardIP":{"IP":"10.4.0.1","Mask":"//8AAA=="},"DiscoveredEndpoints":{},"AllowedLocationIPs":null,"Granularity":"location"},"ts":"2021-07-22T09:57:22.398216883Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:57:22.398270087Z"}
{"DiscoveredEndpoints":{},"caller":"mesh.go:803","component":"kilo","level":"debug","msg":"Discovered WireGuard NAT Endpoints","ts":"2021-07-22T09:57:22.68519494Z"}
{"caller":"mesh.go:356","component":"kilo","level":"debug","msg":"successfully checked in local node in backend","ts":"2021-07-22T09:57:22.889197319Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:57:22.889942876Z"}
{"caller":"mesh.go:270","component":"kilo","event":"update","level":"debug","msg":"processing local node","node":{"Endpoint":{"DNS":"","IP":"172.25.132.35","Port":51820},"Key":"Yzd5eWlTYUE5bnZMVkZ6NjBSa3I0Mit4ZHZDNEJWUGFHREtKKzV2NVFUVT0=","NoInternalIP":false,"InternalIP":{"IP":"172.25.132.35","Mask":"///+AA=="},"LastSeen":1626947842,"Leader":false,"Location":"","Name":"foundation-musanin-master","PersistentKeepalive":0,"Subnet":{"IP":"10.44.0.0","Mask":"////AA=="},"WireGuardIP":{"IP":"10.4.0.1","Mask":"//8AAA=="},"DiscoveredEndpoints":{},"AllowedLocationIPs":null,"Granularity":"location"},"ts":"2021-07-22T09:57:22.88999168Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:57:23.070523755Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:57:23.126743776Z"}

kubectl logs -n kube-system kilo-tb5v9
{"caller":"mesh.go:141","component":"kilo","level":"debug","msg":"using 172.25.132.35/23 as the private IP address","ts":"2021-07-22T09:42:22.373165056Z"}
{"caller":"mesh.go:146","component":"kilo","level":"debug","msg":"using 172.25.132.35/23 as the public IP address","ts":"2021-07-22T09:42:22.373289366Z"}
{"caller":"main.go:223","msg":"Starting Kilo network mesh '6309529a3ff0fd98a78ef2f352d5996387ef0293'.","ts":"2021-07-22T09:42:22.377477892Z"}
{"caller":"cni.go:60","component":"kilo","err":"failed to read IPAM config from CNI config list file: no IP ranges specified","level":"warn","msg":"failed to get CIDR from CNI file; overwriting it","ts":"2021-07-22T09:42:22.478660578Z"}
{"caller":"cni.go:68","component":"kilo","level":"info","msg":"CIDR in CNI file is empty","ts":"2021-07-22T09:42:22.478718683Z"}
{"CIDR":"10.44.0.0/24","caller":"cni.go:73","component":"kilo","level":"info","msg":"setting CIDR in CNI file","ts":"2021-07-22T09:42:22.478736384Z"}
{"caller":"mesh.go:268","component":"kilo","event":"add","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:42:22.586319169Z"}
{"caller":"mesh.go:270","component":"kilo","event":"add","level":"debug","msg":"processing local node","node":{"Endpoint":null,"Key":"","NoInternalIP":false,"InternalIP":null,"LastSeen":0,"Leader":false,"Location":"","Name":"foundation-musanin-master","PersistentKeepalive":0,"Subnet":{"IP":"10.44.0.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":""},"ts":"2021-07-22T09:42:22.586390874Z"}
{"caller":"mesh.go:387","component":"kilo","level":"debug","msg":"local node differs from backend","ts":"2021-07-22T09:42:22.586520985Z"}
{"caller":"mesh.go:393","component":"kilo","level":"debug","msg":"successfully reconciled local node against backend","ts":"2021-07-22T09:42:22.59864613Z"}
{"DiscoveredEndpoints":{},"caller":"mesh.go:803","component":"kilo","level":"debug","msg":"Discovered WireGuard NAT Endpoints","ts":"2021-07-22T09:42:22.60031746Z"}
{"caller":"mesh.go:536","component":"kilo","level":"info","msg":"WireGuard configurations are different","ts":"2021-07-22T09:42:22.653762925Z"}
{"caller":"mesh.go:268","component":"kilo","event":"add","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:42:22.656816763Z"}
{"caller":"mesh.go:279","component":"kilo","event":"add","in-mesh":false,"level":"debug","msg":"received non ready node","node":{"Endpoint":null,"Key":"","NoInternalIP":false,"InternalIP":null,"LastSeen":0,"Leader":false,"Location":"","Name":"foundation-musanin-node-1","PersistentKeepalive":0,"Subnet":{"IP":"10.44.1.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":""},"ts":"2021-07-22T09:42:22.656874368Z"}
{"caller":"mesh.go:297","component":"kilo","event":"add","level":"info","node":{"Endpoint":null,"Key":"","NoInternalIP":false,"InternalIP":null,"LastSeen":0,"Leader":false,"Location":"","Name":"foundation-musanin-node-1","PersistentKeepalive":0,"Subnet":{"IP":"10.44.1.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":""},"ts":"2021-07-22T09:42:22.656916371Z"}
{"DiscoveredEndpoints":{},"caller":"mesh.go:803","component":"kilo","level":"debug","msg":"Discovered WireGuard NAT Endpoints","ts":"2021-07-22T09:42:22.65869681Z"}
{"caller":"mesh.go:268","component":"kilo","event":"add","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:42:22.659554577Z"}
{"caller":"mesh.go:279","component":"kilo","event":"add","in-mesh":false,"level":"debug","msg":"received non ready node","node":{"Endpoint":null,"Key":"","NoInternalIP":false,"InternalIP":null,"LastSeen":0,"Leader":false,"Location":"","Name":"foundation-musanin-node-2","PersistentKeepalive":0,"Subnet":{"IP":"10.44.2.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":""},"ts":"2021-07-22T09:42:22.65959418Z"}
{"caller":"mesh.go:297","component":"kilo","event":"add","level":"info","node":{"Endpoint":null,"Key":"","NoInternalIP":false,"InternalIP":null,"LastSeen":0,"Leader":false,"Location":"","Name":"foundation-musanin-node-2","PersistentKeepalive":0,"Subnet":{"IP":"10.44.2.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":""},"ts":"2021-07-22T09:42:22.659635883Z"}
{"DiscoveredEndpoints":{},"caller":"mesh.go:803","component":"kilo","level":"debug","msg":"Discovered WireGuard NAT Endpoints","ts":"2021-07-22T09:42:22.661288012Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:42:22.662964542Z"}
{"caller":"mesh.go:297","component":"kilo","event":"update","level":"info","node":{"Endpoint":{"DNS":"","IP":"172.25.132.230","Port":51820},"Key":"SUVyaisrbGY4MGprV09FRVZzSDk3RzZ0VGJOR1ZpQ1oxMnMyR2VkbDVrZz0=","NoInternalIP":false,"InternalIP":{"IP":"172.25.132.230","Mask":"///+AA=="},"LastSeen":1626946942,"Leader":false,"Location":"","Name":"foundation-musanin-node-2","PersistentKeepalive":0,"Subnet":{"IP":"10.44.2.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":"location"},"ts":"2021-07-22T09:42:22.663004746Z"}
{"DiscoveredEndpoints":{},"caller":"mesh.go:803","component":"kilo","level":"debug","msg":"Discovered WireGuard NAT Endpoints","ts":"2021-07-22T09:42:22.66383051Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:42:22.749645698Z"}
{"caller":"mesh.go:297","component":"kilo","event":"update","level":"info","node":{"Endpoint":{"DNS":"","IP":"172.25.132.55","Port":51820},"Key":"eUE2TGRDdUpUN3krcFJOdmxoZHM4R2VlRUdvVDFRL1BVaEYrK0daOGdCMD0=","NoInternalIP":false,"InternalIP":{"IP":"172.25.132.55","Mask":"///+AA=="},"LastSeen":1626946942,"Leader":false,"Location":"","Name":"foundation-musanin-node-1","PersistentKeepalive":0,"Subnet":{"IP":"10.44.1.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":"location"},"ts":"2021-07-22T09:42:22.749812511Z"}
{"DiscoveredEndpoints":{},"caller":"mesh.go:803","component":"kilo","level":"debug","msg":"Discovered WireGuard NAT Endpoints","ts":"2021-07-22T09:42:22.750900396Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:42:22.872869802Z"}
{"caller":"mesh.go:297","component":"kilo","event":"update","level":"info","node":{"Endpoint":{"DNS":"","IP":"172.25.132.230","Port":51820},"Key":"SUVyaisrbGY4MGprV09FRVZzSDk3RzZ0VGJOR1ZpQ1oxMnMyR2VkbDVrZz0=","NoInternalIP":false,"InternalIP":{"IP":"172.25.132.230","Mask":"///+AA=="},"LastSeen":1626946942,"Leader":false,"Location":"","Name":"foundation-musanin-node-2","PersistentKeepalive":0,"Subnet":{"IP":"10.44.2.0","Mask":"////AA=="},"WireGuardIP":{"IP":"10.4.0.1","Mask":"//8AAA=="},"DiscoveredEndpoints":null,"AllowedLocationIPs":null,"Granularity":"location"},"ts":"2021-07-22T09:42:22.872944408Z"}
{"DiscoveredEndpoints":{},"caller":"mesh.go:803","component":"kilo","level":"debug","msg":"Discovered WireGuard NAT Endpoints","ts":"2021-07-22T09:42:22.874607538Z"}

kubectl logs -n kube-system kilo-tvgpl

{"caller":"mesh.go:550","component":"kilo","level":"debug","msg":"local node is not the leader","ts":"2021-07-22T09:57:23.114232984Z"}
{"caller":"mesh.go:561","component":"kilo","error":"failed to delete rule: no such file or directory","level":"error","ts":"2021-07-22T09:57:23.114893035Z"}
{"caller":"mesh.go:356","component":"kilo","level":"debug","msg":"successfully checked in local node in backend","ts":"2021-07-22T09:57:23.126617742Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:57:23.127445406Z"}
{"caller":"mesh.go:270","component":"kilo","event":"update","level":"debug","msg":"processing local node","node":{"Endpoint":{"DNS":"","IP":"172.25.132.55","Port":51820},"Key":"eUE2TGRDdUpUN3krcFJOdmxoZHM4R2VlRUdvVDFRL1BVaEYrK0daOGdCMD0=","NoInternalIP":false,"InternalIP":{"IP":"172.25.132.55","Mask":"///+AA=="},"LastSeen":1626947843,"Leader":false,"Location":"","Name":"foundation-musanin-node-1","PersistentKeepalive":0,"Subnet":{"IP":"10.44.1.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":{},"AllowedLocationIPs":null,"Granularity":"location"},"ts":"2021-07-22T09:57:23.127482909Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:57:52.896760817Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:57:53.087422463Z"}
{"DiscoveredEndpoints":{},"caller":"mesh.go:803","component":"kilo","level":"debug","msg":"Discovered WireGuard NAT Endpoints","ts":"2021-07-22T09:57:53.117206066Z"}
{"caller":"mesh.go:550","component":"kilo","level":"debug","msg":"local node is not the leader","ts":"2021-07-22T09:57:53.130811719Z"}
{"caller":"mesh.go:561","component":"kilo","error":"failed to delete rule: no such file or directory","level":"error","ts":"2021-07-22T09:57:53.131117342Z"}
{"caller":"mesh.go:356","component":"kilo","level":"debug","msg":"successfully checked in local node in backend","ts":"2021-07-22T09:57:53.143393992Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:57:53.143484499Z"}
{"caller":"mesh.go:270","component":"kilo","event":"update","level":"debug","msg":"processing local node","node":{"Endpoint":{"DNS":"","IP":"172.25.132.55","Port":51820},"Key":"eUE2TGRDdUpUN3krcFJOdmxoZHM4R2VlRUdvVDFRL1BVaEYrK0daOGdCMD0=","NoInternalIP":false,"InternalIP":{"IP":"172.25.132.55","Mask":"///+AA=="},"LastSeen":1626947873,"Leader":false,"Location":"","Name":"foundation-musanin-node-1","PersistentKeepalive":0,"Subnet":{"IP":"10.44.1.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":{},"AllowedLocationIPs":null,"Granularity":"location"},"ts":"2021-07-22T09:57:53.143518001Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:58:08.370304068Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:58:22.907335487Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:58:23.101513105Z"}
{"DiscoveredEndpoints":{},"caller":"mesh.go:803","component":"kilo","level":"debug","msg":"Discovered WireGuard NAT Endpoints","ts":"2021-07-22T09:58:23.132699317Z"}
{"caller":"mesh.go:550","component":"kilo","level":"debug","msg":"local node is not the leader","ts":"2021-07-22T09:58:23.157593343Z"}
{"caller":"mesh.go:561","component":"kilo","error":"failed to delete rule: no such file or directory","level":"error","ts":"2021-07-22T09:58:23.158234692Z"}
{"caller":"mesh.go:356","component":"kilo","level":"debug","msg":"successfully checked in local node in backend","ts":"2021-07-22T09:58:23.164685391Z"}
{"caller":"mesh.go:268","component":"kilo","event":"update","level":"debug","msg":"syncing nodes","ts":"2021-07-22T09:58:23.166577337Z"}
{"caller":"mesh.go:270","component":"kilo","event":"update","level":"debug","msg":"processing local node","node":{"Endpoint":{"DNS":"","IP":"172.25.132.55","Port":51820},"Key":"eUE2TGRDdUpUN3krcFJOdmxoZHM4R2VlRUdvVDFRL1BVaEYrK0daOGdCMD0=","NoInternalIP":false,"InternalIP":{"IP":"172.25.132.55","Mask":"///+AA=="},"LastSeen":1626947903,"Leader":false,"Location":"","Name":"foundation-musanin-node-1","PersistentKeepalive":0,"Subnet":{"IP":"10.44.1.0","Mask":"////AA=="},"WireGuardIP":null,"DiscoveredEndpoints":{},"AllowedLocationIPs":null,"Granularity":"location"},"ts":"2021-07-22T09:58:23.166620741Z"}
