
How to set CommonSettings from command line? #73

Open
choosehappy opened this issue Apr 17, 2024 · 1 comment

@choosehappy
I have everything up and running nicely with the latest versions of Snakemake and snakemake-executor-plugin-slurm pulled from GitHub.

Thanks for all your hard work!

One quick question that I can't suss out from the available documentation: how do I modify the CommonSettings parameters, and in particular this one:

My current command line looks like this, but no modification seems to have any effect on that variable:

snakemake  --executor slurm -j 20  --latency-wait 1 --default-resources  "slurm_account=root" "slurm_partition=LocalQ" mem_mb=0 --config json_file=sectra_json_example.json

I manually changed the variable from 40 down to 2, and in my test the computation time drops from 4 minutes to 40 seconds -- most of the time is clearly spent waiting before querying for jobs that have already completed.
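For illustration, the change being tested can be sketched with a minimal stand-in. This assumes a frozen `CommonSettings` dataclass carrying an `init_seconds_before_status_checks` field that defaults to 40; the class and field names are inferred from this thread and the interface package, not confirmed against the plugin's code:

```python
from dataclasses import dataclass, replace

# Stand-in mirroring the relevant field of the plugin's CommonSettings
# (the real dataclass lives in snakemake_interface_executor_plugins;
# the field name and the 40-second default are assumptions based on
# this thread, not confirmed API).
@dataclass(frozen=True)
class CommonSettings:
    init_seconds_before_status_checks: int = 40

settings = CommonSettings()
# Lower the initial wait before the first job-status poll, as tested above.
# Since the dataclass is frozen, build a modified copy instead of mutating:
tuned = replace(settings, init_seconds_before_status_checks=2)
print(tuned.init_seconds_before_status_checks)  # → 2
```

The point of the sketch is that the value is baked into a settings object at plugin-definition time, which is why no command-line flag in the invocation above reaches it.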

Thanks again!

@cmeesters
Collaborator

As this is only present in snakemake_interface_executor_plugins and in code that makes use of it, not in the Snakemake core, I too wonder how it can be tuned.

Meanwhile, I too find this value a bit high: particularly when teaching, but also during development (when many jobs fail), such a default hinders progress.

So, is there a way to set this without tinkering with the code? Would a default of 5 seconds (or even lower) be more sensible? If a slurmdb, as suggested in the code comment, can take half a minute to answer a query, something is wrong with the cluster, and we might reasonably catch such issues when querying the job status, right?
