Restrict pylint parallelism to available RAM #2918
Conversation
Hello @martinpitt! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found: There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻 Comment last updated at 2020-10-13 05:05:24 UTC
pylint uses *lots* of RAM (more than 1 GiB per process), and the default `jobs=0` setting runs one process per CPU (`nproc`). This exceeds the available memory in many standard environments (such as m1.medium == 2 CPUs/4 GiB RAM, or m1.large == 4 CPUs/8 GiB RAM) and causes eternal hangs of the pylintrc test when a job process gets OOM-killed. Instead, set the parallelism based on the amount of free memory -- one job for each 2 GB of available RAM. This is an underestimate, but let's not use up the entire system memory. Also cap it at the number of available CPUs, in case this runs on a system with lots of RAM but only a few CPUs.
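As a rough illustration of the sizing rule described above (a sketch, not the PR's actual code -- the helper name and the exact memory probe are assumptions; `SC_AVPHYS_PAGES` is Linux-only):

```python
import os

def estimate_pylint_jobs(bytes_per_job: int = 2 * 1024 ** 3) -> int:
    """Pick a --jobs value for pylint: one job per ~2 GB of available
    RAM, capped at the CPU count. Hypothetical helper, Linux/glibc only."""
    # Approximate available physical memory as free pages times page size.
    avail = os.sysconf("SC_AVPHYS_PAGES") * os.sysconf("SC_PAGE_SIZE")
    jobs = max(1, avail // bytes_per_job)
    # Never run more jobs than there are CPUs.
    return min(jobs, os.cpu_count() or 1)

print(estimate_pylint_jobs())
```

On a 2 CPU / 4 GiB machine this yields at most 2 jobs, matching the "one job per 2 GB, capped at CPU count" heuristic from the description.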
Force-pushed: 8a3cf17 to be12ca7
Looks good to me. Thanks!
```diff
@@ -116,6 +117,23 @@ def run(self):

     def _prepare_args(self):
         args = []

+        # pylint uses a lot of memory and doesn't handle ENOMEM well, so set --jobs based on memory
```
Do you think it might be worth it to report the issue to pylint?
pylint-dev/pylint#1495 has existed for years, but I'll report a more specific one. Possibly fixed by/related to pylint-dev/pylint#3869
Looks good to me. Thank you!
Looks good to me. Thanks for this improvement!
I filed pylint-dev/pylint#3899 about the hang after OOM, and followed up on pylint-dev/pylint#1495 with a current reproducer for the memory leak.