Execution

Deploying the pipeline

With config.yaml configured for your run-mode of choice, with paths to the necessary configuration and input files, the workflow can be executed on any infrastructure using the snakemake command, supplied with further Snakemake command-line arguments depending on your environment (e.g. --profile to specify a profile, or --cluster to submit jobs to an HPC). In addition to these options, you must also supply the --configfile option on the command line to point to the configuration that you would like to use.

Test your configuration by performing a dry-run:

snakemake --use-conda --configfile config/config.yaml -n

Execute the workflow locally via:

snakemake --use-conda --configfile config/config.yaml --cores $N

Execute the workflow on a cluster using something like:

snakemake --use-conda --cluster sbatch --configfile config/config.yaml --jobs 10
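As an alternative to passing --cluster directly, cluster settings can be collected in a Snakemake profile and selected with --profile. A minimal sketch of such a profile for Slurm is shown below; the file path, resource placeholders, and values are assumptions for illustration, not part of this pipeline:

```yaml
# Hypothetical profile: ~/.config/snakemake/slurm/config.yaml
use-conda: true
jobs: 10
cluster: "sbatch --mem={resources.mem_mb} --cpus-per-task={threads}"
```

The workflow could then be launched with snakemake --profile slurm --configfile config/config.yaml.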

The pipeline will automatically create a subdirectory for logs in logs/.

Logging

All job-specific logs will be directed to a logs/ subdirectory in the home analysis directory of the pipeline. This directory is automatically created for you upon execution of the pipeline. For example, if you run the pipeline on a Slurm cluster with default parameters, these log files will follow the naming structure of snakejob.<name_of_rule>.<job_number>.
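This naming structure can be parsed with standard shell parameter expansion, for example to recover the rule name from a job log file. The log file name below is hypothetical:

```shell
# Hypothetical log name following the snakejob.<name_of_rule>.<job_number> pattern
logfile="snakejob.align_reads.42"
rule="${logfile#snakejob.}"   # drop the "snakejob." prefix -> "align_reads.42"
rule="${rule%.*}"             # drop the trailing job number -> "align_reads"
echo "$rule"                  # prints "align_reads"
```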