
Utilize systemd-notify to add status details #2604

Closed
dgoetz opened this issue Apr 19, 2021 · 12 comments · Fixed by #3006

Comments

@dgoetz

dgoetz commented Apr 19, 2021

Is your feature request related to a problem? Please describe.
I come from a discussion in the Foreman community where we want to get the status for debugging and scaling the system. In the past the puma-systemd plugin was used and now puma-status, but having this directly in puma without an additional plugin and configuration would be a great feature for us and our users.

Describe the solution you'd like
We primarily need the number and state of the workers, like the systemd plugin provided, but the resource usage from the status plugin could also be helpful. Just think about what a sysadmin might want to see and be able to influence, and provide this information if gathering it is doable without causing performance issues.

Describe alternatives you've considered
At the moment we add the status plugin and describe how to use it; in the future we might also add some graphical interface for monitoring it to the Foreman web interface.

Additional context
While these considerations come from how we use or plan to use it in the Foreman project, I think other projects and users can benefit from it as well.

@jacobherrington
Contributor

@dgoetz Is this still something you'd find useful? It seems that Foreman added the status plugin; does that solve the issue for you?

@Nowaker

Nowaker commented Feb 22, 2022

@jacobherrington What you linked isn't very useful. With support for Type=notify already built-in, this request is a natural progression towards more integration with systemd.
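For context, the Type=notify integration mentioned here boils down to a very small protocol: systemd sets the `NOTIFY_SOCKET` environment variable for the service, and the process sends newline-separated `KEY=VALUE` datagrams (such as `READY=1`) to that Unix socket. A minimal sketch in Ruby (not Puma's actual implementation; the `sd_notify` helper name is just illustrative):

```ruby
# Minimal sketch of the sd_notify protocol, assuming the unit uses
# Type=notify (systemd then sets NOTIFY_SOCKET for the service's processes).
require "socket"

def sd_notify(state)
  path = ENV["NOTIFY_SOCKET"] or return false
  # Abstract-namespace sockets are announced with a leading "@";
  # on the wire that first byte is a NUL.
  path = "\0#{path[1..]}" if path.start_with?("@")
  sock = Socket.new(:UNIX, :DGRAM)
  begin
    sock.connect(Socket.pack_sockaddr_un(path))
    sock.write(state)
  ensure
    sock.close
  end
  true
end

# Tell systemd that startup has finished (a no-op outside a notify unit):
sd_notify("READY=1")
```

The same channel accepts `STATUS=...` messages, which is what this feature request is about: pushing a human-readable status line that `systemctl status` then displays.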

@ekohl
Contributor

ekohl commented Feb 22, 2022

This comment reminded me that at some point I did play with it. I've submitted it as #2833 but I don't know if I have time to finish it. Anyone should feel free to take it as a start and complete it.

@Nowaker

Nowaker commented Feb 22, 2022

@ekohl Thanks for letting me know! I'll check it out, and maybe complete it in March. We'll see. ;)

@nateberkopec
Member

Thanks for sharing where you got to @ekohl !

@dentarg
Member

dentarg commented Jan 12, 2023

Anyone here that can take a look at #3006? I'm not using Puma with systemd myself

@dgoetz
Author

dgoetz commented Jan 12, 2023

I monkey-patched my Foreman installation and it works. I now have a nice additional line showing the Status.

● foreman.service - Foreman
   Loaded: loaded (/usr/lib/systemd/system/foreman.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/foreman.service.d
           └─installer.conf
   Active: active (running) since Thu 2023-01-12 15:38:27 CET; 3s ago
     Docs: https://theforeman.org
 Main PID: 2703 (rails)
   Status: "Puma 5.6.5: cluster: 5/5, worker_status: [{ 5/5 threads, 5 available, 0 backlog },{ 5/5 threads, 5 available, 0 backlog },{ 5/5 threads, 5 available, 0 backlog },{ 5/5 threads, 5 available, 0 backlog },{ 5/5 threads, 5 availa>
    Tasks: 101 (limit: 50687)
   Memory: 604.5M
   CGroup: /system.slice/foreman.service
           ├─2703 puma 5.6.5 (unix:///run/foreman.sock) [foreman]
           ├─2745 puma: cluster worker 0: 2703 [foreman]
           ├─2746 puma: cluster worker 1: 2703 [foreman]
           ├─2747 puma: cluster worker 2: 2703 [foreman]
           ├─2752 puma: cluster worker 3: 2703 [foreman]
           └─2753 puma: cluster worker 4: 2703 [foreman]

Jan 12 15:38:27 katello.foreman foreman[2703]: [2703] * Preloading application
Jan 12 15:38:27 katello.foreman foreman[2703]: [2703] * Activated unix:///run/foreman.sock
Jan 12 15:38:27 katello.foreman foreman[2703]: [2703] Use Ctrl-C to stop
Jan 12 15:38:27 katello.foreman foreman[2703]: [2703] * Starting control server on unix:///usr/share/foreman/tmp/sockets/pumactl.sock
Jan 12 15:38:27 katello.foreman foreman[2703]: [2703] - Worker 2 (PID: 2747) booted in 0.1s, phase: 0
Jan 12 15:38:27 katello.foreman foreman[2703]: [2703] - Worker 3 (PID: 2752) booted in 0.11s, phase: 0
Jan 12 15:38:27 katello.foreman foreman[2703]: [2703] - Worker 4 (PID: 2753) booted in 0.11s, phase: 0
Jan 12 15:38:27 katello.foreman foreman[2703]: [2703] - Worker 1 (PID: 2746) booted in 0.13s, phase: 0
Jan 12 15:38:27 katello.foreman foreman[2703]: [2703] - Worker 0 (PID: 2745) booted in 0.15s, phase: 0
Jan 12 15:38:27 katello.foreman systemd[1]: Started Foreman.

@ekohl @ares @ehelms (as you all were part of the initial discussion) anything else that should be added for it being even more helpful?
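A status line like the one above could be assembled and pushed to systemd along these lines (a minimal sketch, not the monkey-patch itself; `WorkerStat` and its fields are hypothetical stand-ins for what Puma's stats actually expose):

```ruby
require "socket"

# Hypothetical per-worker stats, standing in for Puma's real stats output.
WorkerStat = Struct.new(:running, :max_threads, :pool_capacity, :backlog)

# Format one "{ running/max threads, N available, N backlog }" cell,
# mirroring the Status= line shown above.
def format_worker(w)
  "{ #{w.running}/#{w.max_threads} threads, " \
    "#{w.pool_capacity} available, #{w.backlog} backlog }"
end

def status_line(version, workers)
  booted = workers.size
  "Puma #{version}: cluster: #{booted}/#{booted}, " \
    "worker_status: [#{workers.map { |w| format_worker(w) }.join(',')}]"
end

# Push the line to systemd (a no-op outside a Type=notify unit).
def notify_status(line)
  path = ENV["NOTIFY_SOCKET"] or return
  sock = Socket.new(:UNIX, :DGRAM)
  begin
    sock.connect(Socket.pack_sockaddr_un(path))
    sock.write("STATUS=#{line}")
  ensure
    sock.close
  end
end

workers = Array.new(5) { WorkerStat.new(5, 5, 5, 0) }
notify_status(status_line("5.6.5", workers))
```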

@ehelms

ehelms commented Jan 16, 2023

Looks great from my perspective; if you are happy with it, I think it's safe to close it out.

@ekohl
Contributor

ekohl commented Jan 16, 2023

It looks like the worker status gets truncated there, so I wonder if it should somehow be made shorter. On the other hand, that's also something that could be iterated on.

@dgoetz
Author

dgoetz commented Jan 17, 2023

> It looks like the worker status gets truncated there, so I wonder if it should somehow be made shorter. On the other hand, that's also something that could be iterated on.

The truncation comes from systemd not breaking the line at the maximum width of my console; instead you can scroll sideways, so all the information is present even if you do not see it at first glance.
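As a side note, the standard systemctl flags for this are `--full` (alias `-l`, disable line truncation) and `--no-pager`, so the whole status line can be shown without sideways scrolling:

```shell
# Show the full Status= line without pager truncation.
systemctl status foreman.service --full --no-pager
```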

@nateberkopec
Member

Closing out for now, please open new issues if you'd like something added on.
