This is probably a feature request.

I've been finding that running `pytest -n auto` can take a long time or even just seem to hang when testing one of my codebases. I believe the culprit is that a number of tests using matrix multiplication or parallelized numba functions are being hit at the same time. Both of these cases default to using the number of cores as the number of threads, so my CPU becomes heavily oversubscribed.
Would it be possible for pytest-xdist to use something like `threadpoolctl` to limit the number of threads each worker uses? Ideally it could be similar to how `joblib` or `dask` set the number of threads available to each worker to something like `hardware_threads // num_workers`.
Alternatively, is there a good way I could set this behaviour myself? Ideally without hardcoding the number of threads to use.
I've worked out a basic version of what I'd like to have. I suspect it would hit a number of edge cases with xdist's more advanced features, but it covers the common case.
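Here's a minimal sketch of the idea, assuming `threadpoolctl` is installed and relying on the `PYTEST_XDIST_WORKER_COUNT` environment variable that pytest-xdist sets in each worker:

```python
import os

import pytest
from threadpoolctl import threadpool_limits


@pytest.fixture(autouse=True)
def limit_thread_pools():
    # PYTEST_XDIST_WORKER_COUNT is set by pytest-xdist in each worker
    # process; it is absent when running without -n.
    num_workers = int(os.environ.get("PYTEST_XDIST_WORKER_COUNT", "1"))
    # Split the hardware threads evenly across workers, keeping at
    # least one thread per worker.
    max_threads = max(1, (os.cpu_count() or 1) // num_workers)
    # threadpool_limits caps the BLAS and OpenMP thread pools (used by
    # matrix multiplication, and by numba's parallel loops when its
    # threading layer is OpenMP) for the duration of each test,
    # restoring the previous limits afterwards.
    with threadpool_limits(limits=max_threads):
        yield
```

Dropping this into `conftest.py` with `autouse=True` applies the limit to every test without hardcoding a thread count, and it degrades gracefully to the full core count when xdist isn't in use.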