PMM Server 2.36.0 cannot restart successfully; pg fails. #1986
Comments
I also tried to change the pg directory permissions and rename the directory, and found that the permissions were changed back after restarting the pod. Did a script or program force the folder permissions to be updated? Before the restart:
After restarting the pod:
Hi @cdmikechen, what version of the helm chart (pmm chart version) and which repo do you use for PMM? There are a couple of things that could change those permissions: an init container, the storage provisioner, or some update procedure. As you said, you use OKD; we don't officially support OpenShift yet, as PMM requires root in the container. Why was the pod restarted? Did you run some update procedure? Thanks,
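To narrow down which of those is rewriting the permissions, it can help to inspect the effective security context and init containers of the running workload. A hedged sketch, assuming the StatefulSet is named pmm and lives in the current namespace (adjust names to your deployment):

```shell
# Pod-level security context actually applied (fsGroup here would explain
# the kubelet re-chowning the volume on every restart):
kubectl get sts pmm -o jsonpath='{.spec.template.spec.securityContext}'

# Init containers that might chown/chmod the volume:
kubectl get sts pmm -o jsonpath='{.spec.template.spec.initContainers[*].name}'

# Permissions as seen from inside the container:
kubectl exec pmm-0 -- ls -ld /srv/postgres14
```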
@denisok However, there is another problem:
What version of the helm chart (pmm chart version) and which repo do you use for PMM? What do the logs and events show for that pod and all the containers in it?
@denisok
Hi. I think the pmm-client failure is very similar to this issue that I've created: https://jira.percona.com/browse/PMM-11893
I ran into the same issue with pmm-server using helm chart version 1.2.5 and pmm-server 2.39.0. I did not set any security context in the helm chart values, and the deployed StatefulSet had them empty. I then learned that our k8s cluster applies a default security context at both the pod and container level. Here is the pod security context:

    securityContext:
      fsGroup: 1
      seccompProfile:
        type: RuntimeDefault
      supplementalGroups:
        - 1

After a restart, this is what

After some trial and error, I found that this helm chart value allowed pmm to survive restarts:

    podSecurityContext:
      fsGroupChangePolicy: OnRootMismatch

The effective pod security context:

    securityContext:
      fsGroup: 1
      fsGroupChangePolicy: OnRootMismatch
      seccompProfile:
        type: RuntimeDefault
      supplementalGroups:
        - 1

Starting fresh, this is what

and reboot:

I hope there are plans to support running without root.
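Assuming the chart passes podSecurityContext through to the StatefulSet verbatim (as the values snippet above suggests), the workaround can also be applied at upgrade time without editing a values file. The release name pmm and chart percona/pmm are assumptions; substitute your own:

```shell
# OnRootMismatch makes the kubelet skip the recursive fsGroup chown/chmod
# when the volume root already has the expected group and mode, so the
# pg-enforced 0700 permissions survive pod restarts.
helm upgrade pmm percona/pmm \
  --reuse-values \
  --set podSecurityContext.fsGroupChangePolicy=OnRootMismatch
```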
Description
I installed pxc-operator and pmm-server using helm chart 1.12.1. When pmm was first deployed, it started correctly. After the pod restarted, I found that the pg service kept failing.
I checked the pg logs in /srv/logs and found that the pg directory permissions were not correct. I used the following commands to fix the pg directory permissions and start pg. Pg started after the first change, but after I restarted the pod, the directory permissions were forcibly changed again by an unknown script or program, which reproduced the exception above.
    chmod 700 -R /srv/postgres14
    su postgres -c "/usr/pgsql-14/bin/pg_ctl start -D /srv/postgres14"
Expected Results
The directory permissions for the postgres data directory should not change; pg enforces private permissions on its data directory as a mandatory condition for startup.
Actual Results
The pg directory permissions were changed after the pod restarted, and pg failed to start.
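pg's refusal to start when its data directory is group- or world-accessible can be reproduced outside the pod. A minimal sketch using a throwaway directory (/tmp/pgdata_demo is a demo path, not a real PGDATA), mimicking the check the postmaster performs:

```shell
# Simulate the state a recursive fsGroup chmod can leave behind, then
# apply the same fix used in this issue.
demo=/tmp/pgdata_demo
mkdir -p "$demo"

chmod 770 "$demo"                       # group-writable: pg would refuse to start
perms=$(stat -c '%a' "$demo")
if [ "$perms" != "700" ] && [ "$perms" != "750" ]; then
  echo "FATAL: data directory \"$demo\" has invalid permissions ($perms)"
fi

chmod 700 "$demo"                       # the fix from the issue
stat -c '%a' "$demo"                    # now private to the owner
```

On PostgreSQL 11+ the server also accepts 0750 (group read/execute); anything looser aborts startup with a FATAL error like the one simulated above.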
Version
pmm-server and pmm-client 2.36.0
OKD 4.11
Steps to reproduce
No response
Relevant logs
I had checked the /srv permissions and found that: