Enabling profiling at runtime #17583
base: master
Conversation
…Technically the first branch should only be exercised one time
Compare 1403f62 to c120c03
if not self._running:
    return

with self._mutex:
Why do we need self._mutex when checking if self._running is True, but not when checking if it's False?
Could you please add an inline comment somewhere saying why the mutex is necessary in the first place? You can probably also include why this needs to be a singleton while you're at it. And if you really want to convey more clearly that this should not be initialized outside of the module, you could make the class private as well: s/Profiling/_Profiling/.
My 2 cents:
I know the convention is to add tests for all modules in utils. I appreciate that you stuck to it.
At the same time, unless you plan to expose datadog_checks.base.utils.profiling outside of the base package (which right now it doesn't sound like you do), I'd treat that module as internal, i.e. nix this test and try to cover as much ground as you can with test_agent_check.py.
If we do decide to keep this test, let's not reach into the class and patch the private attribute _profiler, but make the external dependency explicit with either a public attribute or (better) something we pass to __init__.
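The dependency-injection approach the reviewer suggests could look roughly like this. It is a sketch under assumptions: `profiler_factory` and `FakeProfiler` are hypothetical names introduced here for illustration, not part of the PR.

```python
class Profiling:
    # Passing the profiler factory to __init__ makes the external
    # dependency explicit, so tests can supply a fake instead of
    # patching the private _profiler attribute. (Hypothetical sketch.)
    def __init__(self, profiler_factory):
        self._profiler_factory = profiler_factory
        self._profiler = None
        self._running = False

    def start(self):
        if self._running:
            return
        self._profiler = self._profiler_factory()
        self._profiler.start()
        self._running = True


class FakeProfiler:
    # Test double: records that start() was called, does no real profiling.
    def __init__(self):
        self.started = False

    def start(self):
        self.started = True


# A test can now inject the fake through the public constructor.
profiling = Profiling(profiler_factory=FakeProfiler)
profiling.start()
```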
self._running = False

def status(self):
    return "running" if self._running else "stopped"
Do we anticipate more status types? If not, we could turn this into running(self) -> bool, right?
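The boolean accessor the reviewer proposes could be sketched as follows (hypothetical code, not the PR's implementation):

```python
class Profiling:
    # Sketch: replace the string-returning status() with a boolean
    # accessor, since only two states exist. (Names are assumptions.)
    def __init__(self):
        self._running = False

    def running(self) -> bool:
        return self._running
```

Callers can then branch with a plain truth test instead of comparing strings.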
@pytest.mark.parametrize(
    "integration_profiling",
    ["true", "false"],
Suggested change: ["true", "false"], → [True, False],
That way you can later just say if integration_profiling:
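Applied to the parametrized test, the suggestion would read roughly like this. The test name and body are hypothetical; only the parameter name and values come from the diff.

```python
import pytest


# With real booleans the branch is a plain truth test,
# not a string comparison. (Hypothetical test sketch.)
@pytest.mark.parametrize("integration_profiling", [True, False])
def test_profiling_toggle(integration_profiling):
    if integration_profiling:
        status = "running"
    else:
        status = "stopped"
    assert status == ("running" if integration_profiling else "stopped")
```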
What does this PR do?
During the Agent innovation week, @iglendd, @coignetp and myself are working on enabling continuous profiling for the Agent and the integrations at runtime. The main idea is that we would be able to toggle continuous profiling when requesting a flare.
Currently the integrations support profiling via the integration_profiling configuration from the Agent (a.k.a. datadog.yml). The profiler is only enabled once, at init time. This PR introduces a naive profiling util that would allow us to enable and disable profiling at any point within a check.
The Profiling class is a singleton, which can be used in the check's code to enable and disable profiling.
Within the Check#run function we check the datadog_agent configuration for the value integration_profiling; if it is enabled, we start profiling. The main idea of using a singleton is to avoid creating many profiler instances, and to ensure that only one profiler instance is running at any time.
Disclaimer
The PR is in its early stages and is missing tests. I want to get some feedback from the integrations team before investing more time in it. Once validated, I will add tests and any other necessary changes to make the PR production-ready.
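The runtime toggle described above could be sketched as follows. This is a simplified, hypothetical illustration: `FakeAgentConfig`, `ProfilingSingleton` and `run_check` are stand-ins invented here for the real `datadog_agent` bindings, the Profiling singleton, and Check#run.

```python
class FakeAgentConfig:
    # Stand-in for the datadog_agent configuration interface (assumption).
    def __init__(self, settings):
        self._settings = settings

    def get_config(self, key):
        return self._settings.get(key)


class ProfilingSingleton:
    # Simplified stand-in for the PR's Profiling singleton.
    def __init__(self):
        self.running = False

    def start(self):
        self.running = True

    def stop(self):
        self.running = False


profiling = ProfilingSingleton()


def run_check(datadog_agent):
    # At the top of the check's run, read the runtime configuration
    # value and toggle the shared profiler accordingly.
    if datadog_agent.get_config("integration_profiling"):
        profiling.start()
    else:
        profiling.stop()
    # ... the actual check logic would run here ...


run_check(FakeAgentConfig({"integration_profiling": True}))
```

Because the singleton is shared, flipping integration_profiling between runs starts or stops the one profiler instance rather than creating new ones.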
Motivation
Additional Notes
Review checklist (to be filled by reviewers)
Add the qa/skip-qa label if the PR doesn't need to be tested during QA.
Add the backport/<branch-name> label to the PR and it will automatically open a backport PR once this one is merged.