@@ -35,13 +35,21 @@ Free to use and open source under [MIT License](https://github.com/It4innovation

## Usage

- Initialize the recommended version of HyperQueue on Puhti and Mahti like this:
+ Load the default version of HyperQueue on Puhti and Mahti like this:

```bash
module load hyperqueue
```

- Use `module spider` to locate other versions.
+ To load a specific version, use:
+
+ ```bash
+ module load hyperqueue/<version>
+ ```
+
+ Replace `<version>` with a suitable version number ([see above](#available)).
+ Use `module spider` to locate other versions.
+
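For example, you could first list the installed versions and then load one of them (a minimal sketch; the version number below is only an illustration, check the `module spider` output for the real ones):

```bash
# Show all HyperQueue versions installed in the module system
module spider hyperqueue

# Load one specific version (1.7.0 is a placeholder, not necessarily available)
module load hyperqueue/1.7.0
```
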
To access CSC's HyperQueue modules on LUMI,
remember to first run `module use /appl/local/csc/modulefiles`.

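Put together, the LUMI setup described above would look like this (a short sketch of the two commands):

```bash
# Make CSC's local module tree visible on LUMI, then load HyperQueue
module use /appl/local/csc/modulefiles
module load hyperqueue
```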
@@ -62,7 +70,7 @@ until all are done or the batch job time limit is reached.

Let's assume we have a `tasks` file with a list of commands we want to run using
eight threads each. **Do not use `srun` in the commands!** HyperQueue will launch
- the tasks using the allocated resources as requested. For example,
+ the tasks using the allocated resources as requested. For example,

```text
command1 arguments1
@@ -103,7 +111,7 @@ directory structure looks as follows:
└── task # Executable task script for HyperQueue
```

- **Task**
+ #### Task

We assume that HyperQueue tasks are independent and run on a single node.
Here is an example of a simple, executable `task` script written in Bash.
@@ -116,7 +124,7 @@ sleep 1
The overhead per task is around 0.1 milliseconds.
Therefore, we can efficiently execute even very small tasks.

- **Batch job**
+ #### Batch job

In a Slurm batch job, each Slurm task corresponds to one HyperQueue worker.
We can increase the number of workers by increasing the number of Slurm tasks.
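
As an illustration of this mapping, a batch script header along these lines would start two workers, one per node (a sketch only; the resource values are placeholders, and the full script used in this guide is shown below):

```bash
#!/bin/bash
# Two Slurm tasks -> two HyperQueue workers, one per node (placeholder values)
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=40
```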
@@ -184,15 +192,15 @@ allocation.
#SBATCH --time=00:15:00
```

- **Module**
+ #### Module

We load the HyperQueue module to make the `hq` command available.

```bash
module load hyperqueue
```

- **Server**
+ #### Server

Next, we specify where HyperQueue places the server files.
All `hq` commands respect this variable, so we set it before using any `hq` commands.
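
A minimal sketch of this step, assuming the server directory is placed under the current working directory (`HQ_SERVER_DIR` is the environment variable HyperQueue reads; the exact path is only an illustrative choice and may differ from the full script):

```bash
# Tell all later `hq` commands where the server files live
export HQ_SERVER_DIR="$PWD/hq-server-$SLURM_JOB_ID"
mkdir -p "$HQ_SERVER_DIR"
```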
@@ -222,9 +230,9 @@ hq server start &
until hq job list &> /dev/null; do sleep 1; done
```

- **Workers**
+ #### Workers

- </ --
+ <!--
Next, we start HyperQueue workers in the background with the number of CPUs and the amount
of memory defined in the batch script. We access those values using the `SLURM_CPU_PER_TASK`
and `SLURM_MEM_PER_CPU` environment variables. By starting the workers using the `srun`
@@ -271,7 +279,7 @@ srun --overlap --cpu-bind=none --mpi=none hq worker start \
hq worker wait "$SLURM_NTASKS"
```

- **Computing tasks**
+ #### Computing tasks

Now we can submit tasks with `hq submit` to the server, which executes them on the
available workers. It is a non-blocking command; thus, we do not need to run it in
@@ -296,7 +304,7 @@ complex task dependencies, we can use HyperQueue as the executor for other workf
managers, such as [Snakemake](#using-snakemake-or-nextflow-with-hyperqueue) or
[Nextflow](#using-snakemake-or-nextflow-with-hyperqueue).

- **Stopping the workers and the server**
+ #### Stopping the workers and the server

Once we are done running all of our tasks, we shut down the workers and server to
avoid a false error from Slurm when the job ends.
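
A minimal sketch of this shutdown step (the exact commands in the full batch script may differ slightly):

```bash
# Stop all workers first, then shut down the server
hq worker stop all
hq server stop
```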
@@ -335,17 +343,13 @@ is requested.

=== "Single node"

-     File: `task`
-
-     ```bash
+     ```bash title="task"
    #!/bin/bash
    echo "Hello from task $HQ_TASK_ID!" > "output/$HQ_TASK_ID.out"
    sleep 1
    ```

-     File: `batch.sh`
-
-     ```bash
+     ```bash title="batch.sh"
    #!/bin/bash
    #SBATCH --account=<project>
    #SBATCH --partition=small
@@ -392,35 +396,27 @@ is requested.

    The archive `input.tar.gz` used in this example extracts into the `input` directory.

-     File: `extract`
-
-     ```bash
+     ```bash title="extract"
    #!/bin/bash
    tar xf input.tar.gz -C "$LOCAL_SCRATCH"
    mkdir -p "$LOCAL_SCRATCH/output"
    ```

-     File: `task`
-
-     ```bash
+     ```bash title="task"
    #!/bin/bash
    cd "$LOCAL_SCRATCH"
    cat "input/$HQ_TASK_ID.inp" > "output/$HQ_TASK_ID.out"
    sleep 1
    ```

-     File: `archive`
-
-     ```bash
+     ```bash title="archive"
    #!/bin/bash
    cd "$LOCAL_SCRATCH"
    tar czf "output-$SLURMD_NODENAME.tar.gz" output
    cp "output-$SLURMD_NODENAME.tar.gz" "$SLURM_SUBMIT_DIR"
    ```

-     File: `batch.sh`
-
-     ```bash
+     ```bash title="batch.sh"
    #!/bin/bash
    #SBATCH --account=<project>
    #SBATCH --partition=large