I manually changed the variable from 40 down to 2, and in my test setup the computation time dropped from 4 minutes to 40 seconds -- most of the time was clearly spent waiting before querying the status of jobs that had already completed.
Thanks again!
As this is only present in snakemake_interface_executor_plugins and code making use of it, not in the snakemake core, I too wonder how this can be tuned.
Meanwhile, I also find this value a bit high: particularly when teaching, but also during development (when many jobs fail), such a default hinders progress.
So, is there a way to set this without tinkering with the code? Would a default of 5 seconds (or even lower) be more sensible? If a slurmdbd, as suggested in the code comment, can take half a minute to answer a query, something is wrong with the cluster, and we might reasonably be catching such issues when querying the job status, right?
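Since there is no documented CLI flag for this, one workaround discussed in threads like this is to patch the module-level settings object before Snakemake instantiates the executor. The sketch below uses a minimal stand-in dataclass to illustrate the pattern; the real `CommonSettings` lives in snakemake_interface_executor_plugins, and the field name here mirrors the one under discussion, so verify both against your installed plugin version before relying on this:

```python
from dataclasses import dataclass


# Minimal stand-in for the plugin's CommonSettings; the real class is
# provided by snakemake_interface_executor_plugins. The field name is an
# assumption based on this issue -- check your installed version.
@dataclass
class CommonSettings:
    init_seconds_before_status_checks: int = 40  # plugin default discussed here


# The plugin exposes a module-level settings object; patching it before
# the executor is constructed changes the delay for the whole run.
common_settings = CommonSettings()
common_settings.init_seconds_before_status_checks = 5

print(common_settings.init_seconds_before_status_checks)  # → 5
```

In practice the patch would target the plugin's own module (e.g. `snakemake_executor_plugin_slurm.common_settings`) from a small wrapper script that then invokes Snakemake's API, rather than a local dataclass as shown.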
I have everything up and running nicely with the latest versions of snakemake and snakemake-executor-plugin-slurm pulled from GitHub.
Thanks for all your hard work!
One quick question that I can't suss out from the available documentation: how do I modify the CommonSettings parameters, and in particular this one:
snakemake-executor-plugin-slurm/snakemake_executor_plugin_slurm/__init__.py, line 43 at commit 58af422
My current command line looks like this, but no modification seems to have any effect on that variable: