Commit 50131ec

fix wrong comment, some formatting (#2733)
* fix wrong comment, some formatting
* Update hyperqueue.md
1 parent: 235598a

File tree

1 file changed: +25, -29 lines


docs/apps/hyperqueue.md

Lines changed: 25 additions & 29 deletions
@@ -35,13 +35,21 @@ Free to use and open source under [MIT License](https://github.com/It4innovation
 
 ## Usage
 
-Initialize the recommended version of HyperQueue on Puhti and Mahti like this:
+Load the default version of HyperQueue on Puhti and Mahti like this:
 
 ```bash
 module load hyperqueue
 ```
 
-Use `module spider` to locate other versions.
+To load a specific version, use:
+
+```bash
+module load hyperqueue/<version>
+```
+
+Replace `<version>` with suitable version number ([see above](#available)).
+Use `module spider` to locate other versions.
+
 To access CSC's HyperQueue modules on LUMI,
 remember to first run `module use /appl/local/csc/modulefiles`.
 
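
For context (not part of the commit), the `module spider` lookup referenced in the added lines is standard Lmod usage. A minimal sketch; the version number shown is hypothetical:

```bash
# List all HyperQueue versions known to the module system
module spider hyperqueue

# Show what is needed to load one specific version (hypothetical number)
module spider hyperqueue/0.16.0
```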

@@ -62,7 +70,7 @@ until all are done or the batch job time limit is reached.
 
 Let's assume we have a `tasks` file with a list of commands we want to run using
 eight threads each. **Do not use `srun` in the commands!** HyperQueue will launch
-the tasks using the allocated resources as requested. For example, 
+the tasks using the allocated resources as requested. For example,
 
 ```text
 command1 arguments1
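
For context: the page submits one HyperQueue task per line of this `tasks` file. A minimal sketch of such a submission, assuming HyperQueue's documented `--each-line` option (which exposes each line to the task via the `HQ_ENTRY` environment variable); the exact invocation the page uses may differ:

```bash
# One task per line of `tasks`; each task receives its line in $HQ_ENTRY.
# --cpus=8 matches the eight threads per command mentioned above.
hq submit --each-line tasks --cpus=8 bash -c 'eval "$HQ_ENTRY"'
```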
@@ -103,7 +111,7 @@ directory structure looks as follows:
 └── task # Executable task script for HyperQueue
 ```
 
-**Task**
+#### Task
 
 We assume that HyperQueue tasks are independent and run on a single node.
 Here is an example of a simple, executable `task` script written in Bash.
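
A side note on these context lines: the `task` script must carry execute permissions for HyperQueue to run it. A one-line sketch (standard shell, not from the commit):

```bash
chmod +x task   # make the task script executable
```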
@@ -116,7 +124,7 @@ sleep 1
 The overhead per task is around 0.1 milliseconds.
 Therefore, we can efficiently execute even very small tasks.
 
-**Batch job**
+#### Batch job
 
 In a Slurm batch job, each Slurm task corresponds to one HyperQueue worker.
 We can increase the number of workers by increasing the number of Slurm tasks.
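
For context on the worker-per-task mapping in these lines, a sketch of the relevant Slurm directives; the values are hypothetical, and the page's real batch script appears in a later hunk:

```bash
#SBATCH --nodes=2            # hypothetical: two nodes
#SBATCH --ntasks-per-node=1  # one Slurm task -> one HyperQueue worker each
```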
@@ -184,15 +192,15 @@ allocation.
 #SBATCH --time=00:15:00
 ```
 
-**Module**
+#### Module
 
 We load the HyperQueue module to make the `hq` command available.
 
 ```bash
 module load hyperqueue
 ```
 
-**Server**
+#### Server
 
 Next, we specify where HyperQueue places the server files.
 All `hq` commands respect this variable, so we set it before using any `hq` commands.
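
The variable mentioned in these context lines is presumably `HQ_SERVER_DIR`, HyperQueue's server-directory setting. A minimal sketch of setting it to a job-specific path (the path itself is a hypothetical choice):

```bash
# All subsequent hq commands will look for the server files here
export HQ_SERVER_DIR="$PWD/hq-server-$SLURM_JOB_ID"
mkdir -p "$HQ_SERVER_DIR"
```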
@@ -222,9 +230,9 @@ hq server start &
 until hq job list &> /dev/null ; do sleep 1 ; done
 ```
 
-**Workers**
+#### Workers
 
-</--
+<!--
 Next, we start HyperQueue workers in the background with the number of CPUs and the amount
 of memory defined in the batch script. We access those values using the `SLURM_CPU_PER_TASK`
 and `SLURM_MEM_PER_CPU` environment variables. By starting the workers using the `srun`
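
For context, the hunk header above shows the worker-launch command mid-line. A condensed sketch of how that launch and the `hq worker wait` in the next hunk fit together, using only flags visible in this diff plus `--cpus`, a documented `hq worker start` option (note that Slurm itself spells the CPU variable `SLURM_CPUS_PER_TASK`):

```bash
# Start one worker per Slurm task in the background
srun --overlap --cpu-bind=none --mpi=none hq worker start \
    --cpus="$SLURM_CPUS_PER_TASK" &

# Block until all workers have registered with the server
hq worker wait "$SLURM_NTASKS"
```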
@@ -271,7 +279,7 @@ srun --overlap --cpu-bind=none --mpi=none hq worker start \
 hq worker wait "$SLURM_NTASKS"
 ```
 
-**Computing tasks**
+#### Computing tasks
 
 Now we can submit tasks with `hq submit` to the server, which executes them on the
 available workers. It is a non-blocking command; thus, we do not need to run it in
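
A minimal sketch of the non-blocking pattern these lines describe; the `./task` target is an assumption based on the surrounding page, and the `all` selector for `hq job wait` is an assumption as well:

```bash
# Returns immediately after the job is queued on the server
hq submit --cpus=8 ./task

# Block until all submitted jobs have finished
hq job wait all
```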
@@ -296,7 +304,7 @@ complex task dependencies, we can use HyperQueue as the executor for other workf
 managers, such as [Snakemake](#using-snakemake-or-nextflow-with-hyperqueue) or
 [Nextflow](#using-snakemake-or-nextflow-with-hyperqueue).
 
-**Stopping the workers and the server**
+#### Stopping the workers and the server
 
 Once we are done running all of our tasks, we shut down the workers and server to
 avoid a false error from Slurm when the job ends.
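
A sketch of the shutdown sequence these lines describe, using HyperQueue's stop commands (the `all` selector for workers is an assumption):

```bash
hq worker stop all   # stop every worker first
hq server stop       # then shut down the server itself
```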
@@ -335,17 +343,13 @@ is requested.
 
 === "Single node"
 
-File: `task`
-
-```bash
+```bash title="task"
 #!/bin/bash
 echo "Hello from task $HQ_TASK_ID!" > "output/$HQ_TASK_ID.out"
 sleep 1
 ```
 
-File: `batch.sh`
-
-```bash
+```bash title="batch.sh"
 #!/bin/bash
 #SBATCH --account=<project>
 #SBATCH --partition=small
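
For context, running this example end to end might look as follows; creating the `output` directory first is an assumption implied by the redirect in the `task` script:

```bash
mkdir -p output    # task writes to output/$HQ_TASK_ID.out
chmod +x task      # ensure the task script is executable
sbatch batch.sh    # submit; each task leaves one file in output/
```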
@@ -392,35 +396,27 @@ is requested.
 
 The archive `input.tar.gz` used in this example extracts into `input` directory.
 
-File: `extract`
-
-```bash
+```bash title="extract"
 #!/bin/bash
 tar xf input.tar.gz -C "$LOCAL_SCRATCH"
 mkdir -p "$LOCAL_SCRATCH/output"
 ```
 
-File: `task`
-
-```bash
+```bash title="task"
 #!/bin/bash
 cd "$LOCAL_SCRATCH"
 cat "input/$HQ_TASK_ID.inp" > "output/$HQ_TASK_ID.out"
 sleep 1
 ```
 
-File: `archive`
-
-```bash
+```bash title="archive"
 #!/bin/bash
 cd "$LOCAL_SCRATCH"
 tar czf "output-$SLURMD_NODENAME.tar.gz" output
 cp "output-$SLURMD_NODENAME.tar.gz" "$SLURM_SUBMIT_DIR"
 ```
 
-File: `batch.sh`
-
-```bash
+```bash title="batch.sh"
 #!/bin/bash
 #SBATCH --account=<project>
 #SBATCH --partition=large
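
For context, the `archive` step above leaves one `output-<nodename>.tar.gz` per worker node in the submit directory. A sketch of combining them after the job ends (not part of the commit):

```bash
# Unpack every per-node archive; the output/ contents merge together
for archive in output-*.tar.gz; do
    tar xf "$archive"
done
```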
