Executors

In the Nextflow framework architecture, the executor is the component that determines the system where a pipeline process is run and supervises its execution.

The executor provides an abstraction between the pipeline processes and the underlying execution system. This allows you to write the pipeline functional logic independently from the actual processing platform.

In other words, you can write your pipeline script once and have it running on your computer, a cluster resource manager, or the cloud — simply change the executor definition in the Nextflow configuration file.

AWS Batch

Nextflow supports the AWS Batch service that allows job submission in the cloud without having to spin up and manage a cluster of virtual machines. AWS Batch uses Docker containers to run tasks, which greatly simplifies pipeline deployment.

The pipeline processes must specify the Docker image to use by defining the container directive, either in the pipeline script or the nextflow.config file.

To enable this executor, set process.executor = 'awsbatch' in the nextflow.config file.

The pipeline can be launched either on a local computer or on an EC2 instance. An EC2 instance is recommended for heavy or long-running workloads. Additionally, an S3 bucket must be used as the pipeline work directory.
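For example, a minimal nextflow.config sketch might look like the following; the container image, job queue, bucket, and region are placeholders to adapt to your environment:

process {
    executor  = 'awsbatch'
    queue     = 'my-batch-queue'             // AWS Batch job queue (placeholder)
    container = 'quay.io/example/tool:1.0'   // placeholder container image
}
workDir = 's3://my-bucket/work'              // S3 bucket used as the work directory
aws {
    region = 'eu-west-1'                     // placeholder AWS region
}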

Resource requests and other job characteristics can be controlled via the following process directives:

See the AWS Batch page for further configuration details.

Azure Batch

New in version 21.04.0.

Nextflow supports the Azure Batch service that allows job submission in the cloud without having to spin up and manage a cluster of virtual machines. Azure Batch uses Docker containers to run tasks, which greatly simplifies pipeline deployment.

The pipeline processes must specify the Docker image to use by defining the container directive, either in the pipeline script or the nextflow.config file.

To enable this executor, set process.executor = 'azurebatch' in the nextflow.config file.

The pipeline can be launched either on a local computer or on a cloud virtual machine. A cloud VM is recommended for heavy or long-running workloads. Additionally, an Azure Blob storage container must be used as the pipeline work directory.
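For example, a minimal nextflow.config sketch might look like the following; the pool, container image, storage container, account names, and keys are placeholders:

process {
    executor  = 'azurebatch'
    queue     = 'my-pool'                    // Azure Batch pool (placeholder)
    container = 'quay.io/example/tool:1.0'   // placeholder container image
}
workDir = 'az://my-container/work'           // Blob storage container used as the work directory
azure {
    storage {
        accountName = '<storage account name>'
        accountKey  = '<storage account key>'
    }
    batch {
        location    = 'westeurope'           // placeholder Azure region
        accountName = '<batch account name>'
        accountKey  = '<batch account key>'
    }
}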

Resource requests and other job characteristics can be controlled via the following process directives:

See the Azure Batch page for further configuration details.

Bridge

New in version 22.09.1-edge.

Bridge is an abstraction layer to ease batch system and resource manager usage in heterogeneous HPC environments.

It is open source software that can be installed on top of existing classical job schedulers such as Slurm or LSF. Bridge allows you to submit jobs, get information on running jobs, stop jobs, get information on the cluster system, and so on.

For more details on how to install the Bridge system, see the documentation.

To enable the Bridge executor, set process.executor = 'bridge' in the nextflow.config file.
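For example, a minimal nextflow.config sketch; the queue name and resource values are placeholders:

process {
    executor = 'bridge'
    queue    = 'my-partition'   // placeholder queue/partition
    cpus     = 4
    memory   = '8 GB'
    time     = '2h'
}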

Resource requests and other job characteristics can be controlled via the following process directives:

Flux Executor

New in version 22.11.0-edge.

The flux executor allows you to run your pipeline script using the Flux Framework.

Nextflow manages each process as a separate job that is submitted to the cluster by using the flux mini submit command.

To enable the Flux executor, set process.executor = 'flux' in the nextflow.config file.

Resource requests and other job characteristics can be controlled via the following process directives:

Note

Flux does not support the memory directive.

Note

By default, Flux will send all output to the .command.log file. To send this output to stdout and stderr instead, set flux.terminalOutput = true in your config file.
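For example, a minimal nextflow.config sketch combining both settings:

process.executor    = 'flux'
flux.terminalOutput = true   // send task output to stdout/stderr instead of .command.log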

GA4GH TES

Warning

Experimental: may change in a future release.

The Task Execution Schema (TES) project by the GA4GH standardization initiative is an effort to define a standardized schema and API for describing batch execution tasks in a portable manner.

Nextflow supports the TES API via the tes executor, which allows the submission of workflow tasks to a remote execution backend exposing a TES API endpoint.

To use this feature, define the following variables in the workflow launching environment:

export NXF_MODE=ga4gh
export NXF_EXECUTOR=tes
export NXF_EXECUTOR_TES_ENDPOINT='http://back.end.com'

It is important that the endpoint is specified without the trailing slash; otherwise, the resulting URLs will not be normalized and the requests to TES will fail.

You will then be able to run your workflow over TES using the usual Nextflow command line. Be sure to specify the Docker image to use, e.g.:

nextflow run rnaseq-nf -with-docker alpine

Note

If the variable NXF_EXECUTOR_TES_ENDPOINT is omitted, the default endpoint is http://localhost:8000.

Tip

You can use a local Funnel server using the following launch command line:

./funnel server --Server.HTTPPort 8000 --LocalStorage.AllowedDirs $HOME run

(tested with version 0.8.0 on macOS)

Warning

Make sure the TES backend can access the Nextflow work directory when data is exchanged using a local or shared file system.

Known Limitations

  • Automatic deployment of workflow scripts in the bin folder is not supported.

    Changed in version 23.07.0-edge: Automatic upload of the bin directory is now supported.

  • Process output directories are not supported. For details see #76.

  • Glob patterns in process output declarations are not supported. For details see #77.

Google Cloud Batch

New in version 22.07.1-edge.

Google Cloud Batch is a managed computing service that allows the execution of containerized workloads in the Google Cloud Platform infrastructure.

Nextflow provides built-in support for the Cloud Batch API, which allows the seamless deployment of a Nextflow pipeline in the cloud, offloading the process executions as Batch jobs.

The pipeline processes must specify the Docker image to use by defining the container directive, either in the pipeline script or the nextflow.config file. Additionally, the pipeline work directory must be located in a Google Storage bucket.

To enable this executor, set process.executor = 'google-batch' in the nextflow.config file.
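For example, a minimal nextflow.config sketch; the project, location, bucket, and container image are placeholders:

process {
    executor  = 'google-batch'
    container = 'quay.io/example/tool:1.0'   // placeholder container image
}
workDir = 'gs://my-bucket/work'              // Google Storage bucket used as the work directory
google {
    project  = 'my-gcp-project'              // placeholder project ID
    location = 'us-central1'                 // placeholder location
}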

Resource requests and other job characteristics can be controlled via the following process directives:

See the Google Cloud Batch page for further configuration details.

Google Life Sciences

New in version 20.01.0.

Google Cloud Life Sciences is a managed computing service that allows the execution of containerized workloads in the Google Cloud Platform infrastructure.

Nextflow provides built-in support for the Life Sciences API, which allows the seamless deployment of a Nextflow pipeline in the cloud, offloading the process executions as pipelines.

The pipeline processes must specify the Docker image to use by defining the container directive, either in the pipeline script or the nextflow.config file. Additionally, the pipeline work directory must be located in a Google Storage bucket.

To enable this executor, set process.executor = 'google-lifesciences' in the nextflow.config file.
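For example, a minimal nextflow.config sketch, analogous to the Cloud Batch one above; the project, region, bucket, and container image are placeholders:

process {
    executor  = 'google-lifesciences'
    container = 'quay.io/example/tool:1.0'   // placeholder container image
}
workDir = 'gs://my-bucket/work'              // Google Storage bucket used as the work directory
google {
    project = 'my-gcp-project'               // placeholder project ID
    region  = 'europe-west2'                 // placeholder region
}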

Resource requests and other job characteristics can be controlled via the following process directives:

See the Google Life Sciences page for further configuration details.

HTCondor

Warning

Experimental: may change in a future release.

The condor executor allows you to run your pipeline script by using the HTCondor resource manager.

Nextflow manages each process as a separate job that is submitted to the cluster using the condor_submit command.

The pipeline must be launched from a node where the condor_submit command is available, which is typically the cluster login node.

Note

The HTCondor executor for Nextflow does not currently support HTCondor's ability to transfer input/output data to the corresponding job's compute node. Therefore, the data must be made accessible to the compute nodes through a shared file system, using the directory from which the Nextflow workflow is launched (or the work directory specified via the -w option).

To enable the HTCondor executor, set process.executor = 'condor' in the nextflow.config file.
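For example, a minimal nextflow.config sketch; the resource values are placeholders:

process {
    executor = 'condor'
    cpus     = 2
    memory   = '4 GB'
    time     = '1h'
}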

Resource requests and other job characteristics can be controlled via the following process directives:

HyperQueue

New in version 22.05.0-edge.

Warning

Experimental: may change in a future release.

The hyperqueue executor allows you to run your pipeline script by using the HyperQueue job scheduler.

Nextflow manages each process as a separate job that is submitted to the cluster using the hq command line tool.

The pipeline must be launched from a node where the hq command is available, which is typically the cluster login node.

To enable the HyperQueue executor, set process.executor = 'hq' in the nextflow.config file.

Resource requests and other job characteristics can be controlled via the following process directives:

Ignite

Warning

This feature is no longer maintained.

Changed in version 22.01.0-edge: The ignite executor must be enabled via the nf-ignite plugin.

The ignite executor allows you to run a pipeline on an Apache Ignite cluster.

To enable this executor, set process.executor = 'ignite' in the nextflow.config file.
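For example, a minimal nextflow.config sketch enabling the plugin and the executor:

plugins {
    id 'nf-ignite'           // required since version 22.01.0-edge
}
process.executor = 'ignite'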

Resource requests and other job characteristics can be controlled via the following process directives:

See the Apache Ignite page to learn how to configure Nextflow to deploy and run an Ignite cluster in your infrastructure.

Kubernetes

The k8s executor allows you to run a pipeline on a Kubernetes cluster.
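For example, a minimal nextflow.config sketch; the namespace, service account, persistent volume claim name, and mount path are placeholders:

process {
    executor  = 'k8s'
    container = 'quay.io/example/tool:1.0'   // placeholder container image
}
k8s {
    namespace        = 'my-namespace'        // placeholder namespace
    serviceAccount   = 'my-service-account'  // placeholder service account
    storageClaimName = 'my-pvc'              // placeholder persistent volume claim
    storageMountPath = '/workspace'          // placeholder mount path
}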

Resource requests and other job characteristics can be controlled via the following process directives:

See the Kubernetes page to learn how to set up a Kubernetes cluster to run Nextflow pipelines.

Local

The local executor is used by default. It runs the pipeline processes on the computer where Nextflow is launched. The processes are parallelised by spawning multiple threads, taking advantage of the multi-core architecture of the CPU.

The local executor is useful for developing and testing a pipeline script on your computer, before switching to a cluster or cloud environment with production data.

Note

While the local executor limits the number of concurrent tasks based on requested vs available resources, it does not enforce task resource requests. In other words, it is possible for a local task to use more CPUs and memory than it requested, in which case it may starve other tasks. An exception to this behavior is when using Docker or Podman containers, in which case the resource requests are enforced by the container runtime.
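If needed, the resources that the local executor considers available can be capped in the executor scope; a minimal sketch, with placeholder values:

process.executor = 'local'   // the default, shown here for clarity
executor {
    cpus   = 8               // max CPUs the local executor may allocate to tasks
    memory = '32 GB'         // max memory the local executor may allocate to tasks
}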

LSF

The lsf executor allows you to run your pipeline script using a Platform LSF cluster.

Nextflow manages each process as a separate job that is submitted to the cluster using the bsub command.

The pipeline must be launched from a node where the bsub command is available, which is typically the cluster login node.

To enable the LSF executor, set process.executor = 'lsf' in the nextflow.config file.

Resource requests and other job characteristics can be controlled via the following process directives:

Note

LSF supports both per-core and per-job memory limits. Nextflow assumes that LSF works in the per-core mode, thus it divides the requested memory by the number of requested cpus.

When LSF is configured to work in the per-job memory limit mode, you must specify this limit with the perJobMemLimit option in the executor scope of your Nextflow config file.

See also the Platform LSF documentation.
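For example, when the cluster is configured with per-job memory limits, a minimal nextflow.config sketch might be:

process.executor = 'lsf'
executor {
    perJobMemLimit = true   // tell Nextflow that LSF applies memory limits per job, not per core
}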

Moab

New in version 19.07.0.

Warning

Experimental: may change in a future release.

The moab executor allows you to run your pipeline script using the Moab resource manager by Adaptive Computing.

Nextflow manages each process as a separate job that is submitted to the cluster using the msub command provided by the resource manager.

The pipeline must be launched from a node where the msub command is available, which is typically the cluster login node.

To enable the Moab executor, set process.executor = 'moab' in the nextflow.config file.

Resource requests and other job characteristics can be controlled via the following process directives:

NQSII

The nqsii executor allows you to run your pipeline script using the NQSII resource manager.

Nextflow manages each process as a separate job that is submitted to the cluster using the qsub command provided by the scheduler.

The pipeline must be launched from a node where the qsub command is available, which is typically the cluster login node.

To enable the NQSII executor, set process.executor = 'nqsii' in the nextflow.config file.

Resource requests and other job characteristics can be controlled via the following process directives:

OAR

New in version 19.11.0-edge.

The oar executor allows you to run your pipeline script using the OAR resource manager.

Nextflow manages each process as a separate job that is submitted to the cluster using the oarsub command.

The pipeline must be launched from a node where the oarsub command is available, which is typically the cluster login node.

To enable the OAR executor, set process.executor = 'oar' in the nextflow.config file.

Resource requests and other job characteristics can be controlled via the following process directives:

Known Limitations

  • Multiple clusterOptions should be semicolon-separated to ensure that the OAR job script is accurately formatted:

    clusterOptions = '-t besteffort;--project myproject'
    

PBS/Torque

The pbs executor allows you to run your pipeline script using a resource manager from the PBS/Torque family of batch schedulers.

Nextflow manages each process as a separate job that is submitted to the cluster using the qsub command provided by the scheduler.

The pipeline must be launched from a node where the qsub command is available, which is typically the cluster login node.

To enable the PBS executor, set process.executor = 'pbs' in the nextflow.config file.
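For example, a minimal nextflow.config sketch; the queue name and resource values are placeholders:

process {
    executor = 'pbs'
    queue    = 'workq'      // placeholder queue
    cpus     = 4
    memory   = '8 GB'
    time     = '12h'
}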

Resource requests and other job characteristics can be controlled via the following process directives:

PBS Pro

The pbspro executor allows you to run your pipeline script using the PBS Pro resource manager.

Nextflow manages each process as a separate job that is submitted to the cluster using the qsub command provided by the scheduler.

The pipeline must be launched from a node where the qsub command is available, which is typically the cluster login node.

To enable the PBS Pro executor, set process.executor = 'pbspro' in the nextflow.config file.

Resource requests and other job characteristics can be controlled via the following process directives:

SGE

The sge executor allows you to run your pipeline script using a Sun Grid Engine cluster or a compatible platform (Open Grid Engine, Univa Grid Engine, etc.).

Nextflow manages each process as a separate grid job that is submitted to the cluster using the qsub command.

The pipeline must be launched from a node where the qsub command is available, which is typically the cluster login node.

To enable the SGE executor, set process.executor = 'sge' in the nextflow.config file.
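For example, a minimal nextflow.config sketch; the queue and parallel environment names are site-specific placeholders:

process {
    executor = 'sge'
    queue    = 'all.q'      // placeholder queue
    penv     = 'smp'        // placeholder parallel environment, used when requesting more than one CPU
    cpus     = 8
    memory   = '16 GB'
}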

Resource requests and other job characteristics can be controlled via the following process directives:

SLURM

The slurm executor allows you to run your pipeline script using the SLURM resource manager.

Nextflow manages each process as a separate job that is submitted to the cluster using the sbatch command.

The pipeline must be launched from a node where the sbatch command is available, which is typically the cluster login node.

To enable the SLURM executor, set process.executor = 'slurm' in the nextflow.config file.

Resource requests and other job characteristics can be controlled via the following process directives:

Note

SLURM partitions can be specified with the queue directive.

Note

Nextflow does not provide direct support for SLURM multi-clusters. If you need to submit workflow executions to a cluster other than the current one, specify it with the SLURM_CLUSTERS variable in the launch environment.

New in version 23.07.0-edge: Some SLURM clusters require memory allocations to be specified with --mem-per-cpu instead of --mem. You can specify executor.perCpuMemAllocation = true in the Nextflow configuration to enable this behavior. Nextflow will automatically compute the memory per CPU for each task (by default 1 CPU is used).
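For example, a minimal nextflow.config sketch; the partition name and resource values are placeholders, and perCpuMemAllocation is only needed on clusters that require --mem-per-cpu:

process {
    executor = 'slurm'
    queue    = 'compute'    // SLURM partition (placeholder)
    cpus     = 4
    memory   = '16 GB'
    time     = '6h'
}
executor {
    perCpuMemAllocation = true   // only for clusters requiring --mem-per-cpu (23.07.0-edge or later)
}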