Percona Helm Setup Login Failure #2427

Open · Labels: bug (Bug report)

drpdishant opened this issue Aug 23, 2023 · 6 comments

Description

Logging in to a PMM instance installed via Helm on Kubernetes does not work.

Expected Results

  • Login should succeed with the generated admin password.
  • Login should succeed with a new password set via the CLI.

Actual Results

Unable to log in.
(screenshot: login error)

Version

PMM Server: v2.36
Helm Chart Version: pmm-1.2.2

Steps to reproduce

  • Created a fresh deployment of PMM on a Kubernetes cluster:
helm upgrade --install pmm percona/pmm
  • Accessed it via NodePort.
  • Attempted to log in using the generated admin password.
(screenshot: login form)
  • Unable to log in.
(screenshot: login error)
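
For reference, the generated admin password in the percona/pmm chart is kept in a Kubernetes Secret; the names `pmm-secret` and `PMM_ADMIN_PASSWORD` below are assumed chart defaults and may differ per release. A sketch of retrieving and decoding it:

```shell
# Assumed secret/key names from percona/pmm chart defaults; verify with
# `kubectl get secrets` in your namespace:
#   kubectl get secret pmm-secret -o jsonpath='{.data.PMM_ADMIN_PASSWORD}' | base64 --decode
#
# jsonpath emits the raw base64 value; decoding round-trips cleanly, as this
# self-contained demonstration with a sample value shows:
encoded=$(printf '%s' 's3cr3t-Pa55' | base64)
printf '%s\n' "$(printf '%s' "$encoded" | base64 --decode)"
```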

Relevant logs

File /srv/pmm-distribution doesn't exist. Initizlize /srv...
Copy plugins and VERSION file
Generate self-signed certificates for nginx
Init Postgres
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "C".
The default database encoding has accordingly been set to "SQL_ASCII".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /srv/postgres14 ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /usr/pgsql-14/bin/pg_ctl -D /srv/postgres14 -l logfile start

Temporary start postgres and enable pg_stat_statements
waiting for server to start....2023-08-23 05:12:44.879 UTC [95] LOG:  redirecting log output to logging collector process
2023-08-23 05:12:44.879 UTC [95] HINT:  Future log output will appear in directory "log".
 done
server started
CREATE EXTENSION
waiting for server to shut down.... done
server stopped
2023-08-23 05:12:46,257 INFO Included extra file "/etc/supervisord.d/alertmanager.ini" during parsing
2023-08-23 05:12:46,258 INFO Included extra file "/etc/supervisord.d/dbaas-controller.ini" during parsing
2023-08-23 05:12:46,259 INFO Included extra file "/etc/supervisord.d/grafana.ini" during parsing
2023-08-23 05:12:46,259 INFO Included extra file "/etc/supervisord.d/pmm.ini" during parsing
2023-08-23 05:12:46,259 INFO Included extra file "/etc/supervisord.d/prometheus.ini" during parsing
2023-08-23 05:12:46,259 INFO Included extra file "/etc/supervisord.d/qan-api2.ini" during parsing
2023-08-23 05:12:46,259 INFO Included extra file "/etc/supervisord.d/victoriametrics.ini" during parsing
2023-08-23 05:12:46,259 INFO Included extra file "/etc/supervisord.d/vmalert.ini" during parsing
2023-08-23 05:12:46,259 INFO Included extra file "/etc/supervisord.d/vmproxy.ini" during parsing
2023-08-23 05:12:46,259 INFO Set uid to user 0 succeeded
2023-08-23 05:12:46,296 INFO RPC interface 'supervisor' initialized
2023-08-23 05:12:46,297 INFO supervisord started with pid 1
2023-08-23 05:12:47,316 INFO spawned: 'pmm-update-perform-init' with pid 146
2023-08-23 05:12:47,327 INFO spawned: 'postgresql' with pid 148
2023-08-23 05:12:47,332 INFO spawned: 'clickhouse' with pid 150
2023-08-23 05:12:47,340 INFO spawned: 'grafana' with pid 152
2023-08-23 05:12:47,344 INFO spawned: 'nginx' with pid 153
2023-08-23 05:12:47,367 INFO spawned: 'victoriametrics' with pid 156
2023-08-23 05:12:47,384 INFO spawned: 'vmalert' with pid 161
2023-08-23 05:12:47,389 INFO spawned: 'alertmanager' with pid 162
2023-08-23 05:12:47,392 INFO spawned: 'vmproxy' with pid 163
2023-08-23 05:12:47,396 INFO spawned: 'qan-api2' with pid 166
2023-08-23 05:12:47,404 INFO spawned: 'pmm-managed' with pid 170
2023-08-23 05:12:47,412 INFO spawned: 'pmm-agent' with pid 173
2023-08-23 05:12:48,281 INFO exited: qan-api2 (exit status 1; not expected)
2023-08-23 05:12:48,304 INFO success: pmm-update-perform-init entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-08-23 05:12:48,351 INFO success: postgresql entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-08-23 05:12:48,352 INFO success: clickhouse entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-08-23 05:12:48,352 INFO success: grafana entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-08-23 05:12:48,353 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-08-23 05:12:48,353 INFO success: victoriametrics entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-08-23 05:12:48,383 INFO success: vmalert entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-08-23 05:12:48,386 INFO success: alertmanager entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-08-23 05:12:48,398 INFO success: vmproxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-08-23 05:12:48,400 INFO success: pmm-managed entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-08-23 05:12:48,413 INFO success: pmm-agent entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-08-23 05:12:49,356 INFO spawned: 'qan-api2' with pid 272
2023-08-23 05:12:49,726 INFO exited: qan-api2 (exit status 1; not expected)
2023-08-23 05:12:51,735 INFO spawned: 'qan-api2' with pid 347
2023-08-23 05:12:52,740 INFO success: qan-api2 entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2023-08-23 05:13:20,243 INFO exited: pmm-update-perform-init (exit status 0; expected)

Code of Conduct

  • I agree to follow Percona Community Code of Conduct
BupycHuk (Member) commented:

Hello @drpdishant, we noticed that in your screenshot, there is a lot of space in front of the admin value. Could you please recheck?
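
If the password was copied with surrounding whitespace, as the screenshot suggests, trimming it before use avoids a spurious mismatch; a minimal sketch:

```shell
# Simulate a password pasted with stray surrounding whitespace, then strip it.
raw='   admin   '
trimmed=$(printf '%s' "$raw" | sed -e 's/^[[:space:]]*//' -e 's/[[:space:]]*$//')
printf '[%s]\n' "$trimmed"   # brackets make any leftover spaces visible
```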


fmonera commented Sep 22, 2023

The same happens to me. I tried versions 2.39.0 and 2.38.1; it is impossible to log in with either.

rishavmehra commented:

@BupycHuk, can you please assign this issue to me? I would like to work on it.

BupycHuk assigned rishavmehra and unassigned BupycHuk and artemgavrilov on Oct 2, 2023
michizubi-SRF commented:

I experience the same issue on a newly deployed installation of version 2.40.1. I also tested 2.40.0 and 2.39.0; neither works.

michizubi-SRF commented:

In the meantime I figured out that, apparently, special characters in the password lead to this issue. As soon as I used a password consisting of only letters and numbers it was correctly set and I was able to log in.

Can you reproduce that?
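
Given that observation, one workaround is to set a password containing only letters and digits. The generation below is plain shell; `change-admin-password` is the helper documented for the PMM Server container, and the pod name `pmm-0` is an assumption:

```shell
# Generate a 20-character alphanumeric-only password as a workaround for the
# special-character issue described above.
newpass=$(head -c 512 /dev/urandom | LC_ALL=C tr -dc 'A-Za-z0-9' | head -c 20)
printf '%s\n' "${#newpass}"   # length check: prints 20

# Apply it inside the PMM Server pod (pod name is an assumption):
#   kubectl exec -it pmm-0 -- change-admin-password "$newpass"
```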

pamanseau commented:

I have the same issue after upgrading the Helm chart from 1.3.1 to 1.3.4.
I used the password from the pmm-secrets Secret and didn't note the previous one.
I suspect the upgrade changed the password in the Secret but not in the database.
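
If the Secret and the database did get out of sync on upgrade, re-applying the Secret's current value inside the pod should reconcile them. The sketch below is a dry run that only prints the commands, since every name in it (`pmm-secret`, `PMM_ADMIN_PASSWORD`, `pmm-0`) is an assumption from chart defaults:

```shell
# Dry run: print the re-sync commands instead of executing them; the secret
# name, key, and pod name below are assumptions from chart defaults.
secret=pmm-secret
key=PMM_ADMIN_PASSWORD
pod=pmm-0
echo "kubectl get secret $secret -o jsonpath='{.data.$key}' | base64 --decode"
echo "kubectl exec -it $pod -- change-admin-password <decoded-password>"
```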

7 participants