---
date: 2018-08-27
title: "Singularity Quick Start"
description: A Quick Start to using Singularity on the Sherlock Cluster
categories:
 - tutorial
type: Tutorial
set: clusters
set_order: 11
tags: [resources]
---

This is a quick start to using Singularity on the Sherlock cluster. You should be familiar
with how to log in and generally use a command line. If you run into trouble, please
<a href="https://www.github.com/vsoch/lessons/issues" target="_blank">ask us for help</a>.

## Login

<a href="https://www.sherlock.stanford.edu/docs/getting-started/connecting/" target="_blank">Here are</a>
the official "How to Connect to Sherlock" docs. Generally you
<a href="https://vsoch.github.io/lessons/kerberos/" target="_blank">set up kerberos</a> and do this:

```bash
ssh <username>@login.sherlock.stanford.edu
```

I'm lazy, so my preference is to define this setup in my `~/.ssh/config` file. By setting
the login node to be a specific one, I can maintain a session over time. Let's say my
username is "tacos":

```bash

Host sherlock
    User tacos
    Hostname sh-ln05.stanford.edu
    GSSAPIDelegateCredentials yes
    GSSAPIAuthentication yes
    ControlMaster auto
    ControlPersist yes
    ControlPath ~/.ssh/%l%r@%h:%p
```

Then to log in I don't need to type the longer string, I can just type:

```bash
ssh sherlock
```

## Interactive Node
You generally shouldn't run anything computationally intensive on the login nodes! It's not
just to be courteous: the processes will be killed and you'll have to start over.
Don't waste your time doing this; grab an interactive node to start off:

```bash
# interactive node
sdev

# same thing, but ask for different memory or time
srun --time 8:00:00 --mem 32000 --pty bash
```

If your PI has <a href="https://srcc.stanford.edu/sherlock-high-performance-computing-cluster" target="_blank">purchased nodes</a>
and you have a partition for your lab,
this means that your jobs will run a lot faster (you get priority as a member of the group!) and you
should use that advantage:

```bash
srun --time 8:00:00 --mem 32000 --partition mygroup --pty bash
```

If you don't have any specific partition, the implied one is `--partition normal`.
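
As a quick illustration (a sketch; adjust the time and memory to what you actually need), these two requests are equivalent:

```bash
# No partition specified: falls back to the normal partition
srun --time 2:00:00 --mem 8000 --pty bash

# The same request, with the default made explicit
srun --time 2:00:00 --mem 8000 --partition normal --pty bash
```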

## Load Singularity
Let's get Singularity set up! You may want to add this to your bash profile
(`$HOME/.bashrc`, or similar depending on your shell) so that you don't need to do it
manually every time.

```bash

module use system
module load singularity
```
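
If you want this loaded automatically at login, one way (a sketch, assuming you use bash) is to append the two lines to your `~/.bashrc`:

```bash
# Run once: append the module commands to your bash profile
echo "module use system" >> $HOME/.bashrc
echo "module load singularity" >> $HOME/.bashrc

# Pick up the change in the current shell
source $HOME/.bashrc
```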

A very important variable to export is `SINGULARITY_CACHEDIR`. This is where Singularity stores
pulled images, built images, and image layers (e.g., when you pull a Docker image it first pulls
the `.tar.gz` layers before assembling them into a container binary). What happens if you **don't** export
this variable? The cache defaults to your HOME, your HOME quickly gets filled up (containers
are large files!) and then you are locked out.

> Don't do that.

```bash

export SINGULARITY_CACHEDIR=$SCRATCH/.singularity
mkdir -p $SINGULARITY_CACHEDIR
```

Also note for the above that we are creating the directory, which is something that needs to be
done once (and then never again!).
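
To double-check that the cache is going where you expect (a quick sanity check, not required), you can inspect the variable and the directory:

```bash
# Confirm the cache location and see how much space it is using
echo $SINGULARITY_CACHEDIR
du -sh $SINGULARITY_CACHEDIR
```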

# Examples

Now let's jump into examples. I'm just going to show you, because many of the commands
speak for themselves, and I'll add further notes if needed. What you should know is that we
are going to reference a container with a "uri" (uniform resource identifier), which most
commonly takes one of these forms (see the quick example after this list):

 - **docker://**: a container on Docker Hub
 - **shub://**: a container on Singularity Hub
 - **container.simg**: a container image file
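
As a quick illustration (a sketch; `container.simg` here stands in for any local image file you have already pulled or built), the same command works against all three reference types:

```bash
# A container on Docker Hub
singularity exec docker://ubuntu echo "hello from Docker Hub"

# A container on Singularity Hub
singularity exec shub://vsoch/hello-world echo "hello from Singularity Hub"

# A local container image file
singularity exec container.simg echo "hello from a local image"
```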

## Pull

You **could** just run or execute a command against a container reference directly, but I recommend
that you pull the container first.

```bash

singularity pull shub://vsoch/hello-world
singularity run $SCRATCH/.singularity/vsoch-hello-world-master-latest.simg
```

It's common that you might want to name a container based on its GitHub commit (for Singularity Hub),
its image file hash, or a custom name that you really like.

```bash

singularity pull --name meatballs.simg shub://vsoch/hello-world
singularity pull --hash shub://vsoch/hello-world
singularity pull --commit shub://vsoch/hello-world
```

## Exec

The "exec" command will execute a command inside a container.

```bash

singularity pull docker://vanessa/salad
singularity exec $SCRATCH/.singularity/salad.simg /code/salad spoon
singularity exec $SCRATCH/.singularity/salad.simg /code/salad fork
```

```bash
# Other options for what you can do (without sudo)
singularity build container.simg docker://ubuntu
singularity exec container.simg echo "Custom commands"
```

## Run
The run command will run the container's runscript, which is the executable (or `ENTRYPOINT`/`CMD`
in Docker speak) that the creator intended to be used.

```bash

singularity pull docker://vanessa/pokemon
singularity run $SCRATCH/.singularity/pokemon.simg run catch
```

## Options
And here is a quick listing of other useful commands! This is by no means
an exhaustive list, but I've found these helpful to debug and understand a container.
First, inspect things!

```bash

singularity inspect -l $SCRATCH/.singularity/pokemon.simg # labels
singularity inspect -e $SCRATCH/.singularity/pokemon.simg # environment
singularity inspect -r $SCRATCH/.singularity/pokemon.simg # runscript
singularity inspect -d $SCRATCH/.singularity/pokemon.simg # Singularity recipe definition
```

Using `--pwd` below might be necessary if the working directory is important.
Singularity doesn't respect Docker's `WORKDIR` Dockerfile
instruction; instead, it usually uses your present working directory.

```bash

# Set the present working directory with --pwd when you run the container
singularity run --pwd /code container.simg
```
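
To convince yourself of the difference (a quick sketch; `container.simg` is whatever image you are working with), compare the working directory with and without the flag:

```bash
# Prints the host directory you launched from (the default behavior)
singularity exec container.simg pwd

# Prints /code, the directory we asked for
singularity exec --pwd /code container.simg pwd
```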

Singularity is great because most of the time, you don't need to think about
binds. The paths that you use most often (e.g., your home, scratch, and tmp)
are "inside" the container. I hesitate to use that word because the
boundary really is seamless. Thus, if your host supports
<a href="https://en.wikipedia.org/wiki/OverlayFS" target="_blank">overlayfs</a> and the configuration allows it, your container
will by default see all the bind mounts on the host. You can specify a custom mount
(again, if the administrative configuration allows it) with `-B` or `--bind`.

```bash

# scratch, home, tmp are already mounted :) But use --bind/-B to do custom
singularity run --bind $HOME/pancakes:/opt/pancakes container.simg
```
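
If you need more than one custom mount, the bind specifications can be comma separated (a sketch; the paths here are made-up examples):

```bash
# Bind a data folder and a results folder into the container in one go
singularity exec -B $HOME/pancakes:/opt/pancakes,$SCRATCH/results:/results container.simg ls /results
```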

I often have conflicts with my PYTHONPATH and need to unset it,
or even just clean the environment entirely.

```bash

PYTHONPATH= singularity run docker://continuumio/miniconda3 python
singularity run --cleanenv docker://continuumio/miniconda3 python
```
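
If you're curious what the container actually sees (a sanity check, not required; the PYTHONPATH value here is just for the demo), you can compare the environment with and without `--cleanenv`:

```bash
# A host variable set before running leaks into the container by default
export PYTHONPATH=$HOME/python-packages
singularity exec docker://continuumio/miniconda3 env | grep PYTHONPATH

# With --cleanenv the host environment is scrubbed, so nothing is printed
singularity exec --cleanenv docker://continuumio/miniconda3 env | grep PYTHONPATH
```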

What does writing a script look like? You are going to treat Singularity as
you would any other executable! It's good practice to load the module in your
<a href="https://researchapps.github.io/job-maker/" target="_blank">SBATCH</a>
scripts too.

```bash

module use system
module load singularity
singularity run --cleanenv docker://continuumio/miniconda3 python $SCRATCH/analysis/script.py
```
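
Putting that into a complete submission script might look something like this (a sketch; the job name, time, and memory are placeholders to adjust, and `$SCRATCH/analysis/script.py` is the example script from above):

```bash
#!/bin/bash
#SBATCH --job-name=singularity-analysis   # hypothetical job name
#SBATCH --time=2:00:00                    # walltime, adjust to your workload
#SBATCH --mem=8000                        # memory in MB

module use system
module load singularity

singularity run --cleanenv docker://continuumio/miniconda3 python $SCRATCH/analysis/script.py
```

You would then submit it with `sbatch`, e.g. `sbatch analysis.sbatch`.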

## Build
If you build your own container, you have a few options!

 - Create an [automated build on Docker Hub](https://docs.docker.com/docker-hub/builds/), then pull the container via the `docker://` uri.
 - Add a `Singularity` recipe file to a GitHub repository and build it automatically on [Singularity Hub](https://www.singularity-hub.org).
 - [Ask @vsoch for help](https://www.github.com/researchapps/sherlock) to build a custom container!
 - Check out all the [example](https://github.com/researchapps/sherlock) containers to get you started.
 - [Browse the Containershare](https://vsoch.github.io/containershare/) and use the [Forward tool](https://www.github.com/vsoch/forward/) for interactive jupyter (and other) notebooks.
 - [Build your container](https://www.sylabs.io/guides/2.6/user-guide/quick_start.html#build-images-from-scratch) locally, and then use [scp](http://www.hypexr.org/linux_scp_help.php) to copy it to Sherlock:

```bash
scp container.simg <username>@login.sherlock.stanford.edu:/scratch/users/<username>/container.simg
```

Have a question? Or want to make these notes better? Please <a href="https://www.github.com/vsoch/lessons/issues">open an issue</a>.