
octavia-ingress-controller #2520

Open
yangzhilie opened this issue Jan 11, 2024 · 6 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@yangzhilie

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug
/kind feature

What happened:
We are following https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/octavia-ingress-controller/using-octavia-ingress-controller.md to create the octavia-ingress-controller. We are on OpenStack Yoga, and the octavia-ingress-controller image is registry.k8s.io/provider-os/octavia-ingress-controller:v1.28.1.

```
kube-system   octavia-ingress-controller-0   0/1   CrashLoopBackOff   15 (2m28s ago)   53m
```

We are getting the following error from the octavia-ingress-controller pod:

```
❯ k logs octavia-ingress-controller-0 -n kube-system
2024/01/10 17:47:16 Running command:
Command env: (log-file=, also-stdout=false, redirect-stderr=true)
Run from directory:
Executable path: /bin/octavia-ingress-controller
Args (comma-delimited): /bin/octavia-ingress-controller,--config=/etc/config/octavia-ingress-controller-config.yaml
2024/01/10 17:47:16 Now listening for interrupts
INFO [2024-01-10T17:47:16Z] Using config file file=/etc/config/octavia-ingress-controller-config.yaml
W0110 17:47:16.597288 12 client_config.go:618] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
FATAL [2024-01-10T17:47:16Z] failed to initialize openstack client error="Get "/": unsupported protocol scheme ""
2024/01/10 17:47:16 running command: exit status 1
```


What you expected to happen:
The octavia-ingress-controller pod should start successfully.

How to reproduce it:
Apply the deployment.yaml to create the octavia-ingress-controller.

Anything else we need to know?:
The following are our service account, ConfigMap, and deployment YAML files:

serviceaccount.yaml

```yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: octavia-ingress-controller
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: octavia-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: octavia-ingress-controller
    namespace: kube-system
```

config.yaml

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: octavia-ingress-controller-config
  namespace: kube-system
data:
  config: |
    cluster-name: cedev17
    openstack:
      auth_url: https://srelab501.wpc.az1.eng.pdx.wd:5000/
      project_domain_name: Default
      user_domain_name: Default
      project_name: cedev17.t501.eng.pdx.wd
      project_id: 94ba42c68e1346189b666f17e49e22f5
      username: admin
      user-id: b0f2b611d99e444cbe1c1fa068940411
      password: password
      region_name: RegionOne
      cacert: /etc/pki/tls/certs/ca-bundle.crt
    octavia:
      subnet-id: 37abc0d8-f5ab-4109-8864-622ab4b47b1f
      floating-network-id: 42d4ba58-ccd6-407b-a887-5727ee7fe275
      manage-security-groups: false
      provider: amphora
```



deployment.yaml

```yaml
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: octavia-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: octavia-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: octavia-ingress-controller
  serviceName: octavia-ingress-controller
  template:
    metadata:
      labels:
        k8s-app: octavia-ingress-controller
    spec:
      serviceAccountName: octavia-ingress-controller
      tolerations:
        - effect: NoSchedule # Make sure the pod can be scheduled on master kubelet.
          operator: Exists
        - key: CriticalAddonsOnly # Mark the pod as a critical add-on for rescheduling.
          operator: Exists
        - effect: NoExecute
          operator: Exists
      imagePullSecrets:
        - name: regcred
      containers:
        - name: octavia-ingress-controller
          image: docker-dev-artifactory.workday.com/wpc5/dev/octavia-ingress-controller:v1.28.1
          imagePullPolicy: IfNotPresent
          args:
            - /bin/octavia-ingress-controller
            - --config=/etc/config/octavia-ingress-controller-config.yaml
          volumeMounts:
            - mountPath: /etc/kubernetes
              name: kubernetes-config
              readOnly: true
            - name: ingress-config
              mountPath: /etc/config
      hostNetwork: true
      volumes:
        - name: kubernetes-config
          hostPath:
            path: /etc/kubernetes
            type: Directory
        - name: ingress-config
          configMap:
            name: octavia-ingress-controller-config
            items:
              - key: config
                path: octavia-ingress-controller-config.yaml
```
Environment:

  • openstack-cloud-controller-manager (or other related binary) version: openstack 5.8.1
  • OpenStack version: yoga
  • Others:
@dulek
Contributor

dulek commented Jan 12, 2024

Copying my openstack-discuss answer:

This looks like an issue with the pod's connectivity to the OpenStack API, and to Keystone in particular. I'd try debugging it by replacing the StatefulSet's command with `sleep inf`, logging into the pod, and investigating connectivity to the OpenStack API from there.

In general you'll get more help by raising an issue on GitHub. cloud-provider-openstack belongs to K8s, not OpenStack.
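The suggestion above can be sketched as a debug-only override of the StatefulSet pod spec (hypothetical; the container name matches the manifests in this issue, and `curl` is assumed to be present in the image):

```yaml
# Debug override: replace the controller entrypoint with sleep so the pod
# stays running instead of crash-looping; then exec in and probe Keystone:
#   kubectl -n kube-system exec -it octavia-ingress-controller-0 -- sh
#   curl -v https://srelab501.wpc.az1.eng.pdx.wd:5000/
spec:
  template:
    spec:
      containers:
        - name: octavia-ingress-controller
          command: ["sleep", "inf"]
```

Remember to remove the override once connectivity is confirmed, so the controller binary runs again.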

@jichenjc
Contributor

I don't remember clearly, but there have been CSI-related issues before where a URL like the following couldn't be resolved for various reasons:
auth_url: https://srelab501.wpc.az1.eng.pdx.wd:5000/ (per @dulek above as well)

Can you try changing the DNS name to an IP, if that's OK for you?

@yangzhilie
Author

Thanks for your reply. I tried changing the DNS name to an IP, but unfortunately the pod is still showing the same error:

```
FATAL [2024-01-16T18:33:58Z] failed to initialize openstack client error="Get "/": unsupported protocol scheme ""
```

@jichenjc
Contributor

I googled and it seems something in the URL has an issue, e.g. https://nanxiao.me/en/fix-unsupported-protocol-scheme-issue-in-golang/
Are you able to try connecting directly with the URL?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 16, 2024
@k8s-triage-robot
Copy link

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 16, 2024