Annotated SLURM Script

From UFRC
 
Revision as of 23:01, 23 February 2016


HiPerGator 2.0 documentation

This is a walk-through for a basic SLURM scheduler job script. Annotations are marked with bullet points. You can click on the link below to download the raw job script file without the annotations. Values in angle brackets are placeholders; replace them with your own values, e.g. change '<JOBNAME>' to something like 'blast_proj22'.

Download raw source of the [{{#fileLink: slurm_job.sh}} slurm_job.sh] file.

* Set the shell to use

{{#fileAnchor: slurm_job.sh}}
<source lang=bash>
#!/bin/bash
</source>
;Common arguments

* Name the job to make it easier to see in the job queue

{{#fileAnchor: slurm_job.sh}}
<source lang=bash>
#SBATCH --job-name=<JOBNAME>
</source>
;Email
:Your email address to use for all batch system communications

{{#fileAnchor: slurm_job.sh}}
<source lang=bash>
#SBATCH --mail-user=<EMAIL>
</source>
;What emails to send
:NONE - no emails
:ALL - all emails
:END,FAIL - only email if the job fails and email the summary at the end of the job

{{#fileAnchor: slurm_job.sh}}
<source lang=bash>
#SBATCH --mail-type=FAIL,END
</source>
;Standard Output and Error log files
:Use file patterns
:: %j - job id
:: %A-%a - array job id (A) and task id (a)

{{#fileAnchor: slurm_job.sh}}
<source lang=bash>
#SBATCH --output <my_job-%j.out>
#SBATCH --error <my_job-%j.err>
</source>
;Number of compute nodes (standalone computers) to use

{{#fileAnchor: slurm_job.sh}}
<source lang=bash>
#SBATCH --nodes=1
</source>
;Number of processor cores to use on each node

{{#fileAnchor: slurm_job.sh}}
<source lang=bash>
#SBATCH --cpus-per-task=1
</source>
;Total job memory in MB
:For example, 2gb ~ 2000mb

{{#fileAnchor: slurm_job.sh}}
<source lang=bash>
#SBATCH --mem=2000
</source>
;Job run time in [DAYS-]HOURS:MINUTES:SECONDS
:The [DAYS-] part is optional; use it when convenient

{{#fileAnchor: slurm_job.sh}}
<source lang=bash>
#SBATCH --time=72:00:00
</source>
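For example, the same 72-hour limit can be written with the day field (a sketch; adjust the limit to your job's needs):

<source lang=bash>
# Three days, zero hours/minutes/seconds
#SBATCH --time=3-00:00:00
</source>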
;Optional:
:A group to use if you belong to multiple groups. Otherwise, do not use.

{{#fileAnchor: slurm_job.sh}}
<source lang=bash>
#SBATCH --account=<GROUP>
</source>
:A job array, which will create many jobs (called array tasks) that differ only in the '<code>$SLURM_ARRAY_TASK_ID</code>' variable, similar to [[Torque_Job_Arrays]] on HiPerGator 1

{{#fileAnchor: slurm_job.sh}}
<source lang=bash>
#SBATCH --array=<BEGIN-END>
</source>
;Example of five tasks:
<source lang=bash>
#SBATCH --array=1-5
</source>
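A common pattern is to use the task ID to pick a per-task input file. This is a minimal sketch; the <code>input_N.fa</code> file names are hypothetical, and <code>$SLURM_ARRAY_TASK_ID</code> is set by SLURM inside each array task:

<source lang=bash>
#!/bin/bash
#SBATCH --array=1-5
# Task 1 reads input_1.fa, task 2 reads input_2.fa, and so on
INPUT="input_${SLURM_ARRAY_TASK_ID}.fa"
echo "Task ${SLURM_ARRAY_TASK_ID} will process ${INPUT}"
</source>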
;END OF SLURM SETTINGS
----

;Recommended convenient shell code to put into your job script

* If we're inside a job, change to the directory the job was submitted from instead of /home/$USER.

{{#fileAnchor: slurm_job.sh}}
<source lang=bash>
[[ -d $SLURM_SUBMIT_DIR ]] && cd $SLURM_SUBMIT_DIR
</source>
* Add host, time, and directory name for later troubleshooting

{{#fileAnchor: slurm_job.sh}}
<source lang=bash>
date;hostname;pwd
</source>

Below is the shell script part - the commands you will run to analyze your data. The following is an example.

* Load the software you need

{{#fileAnchor: slurm_job.sh}}
<source lang=bash>
module load ncbi_blast
</source>

* Run the program

{{#fileAnchor: slurm_job.sh}}
<source lang=bash>
blastn -db nt -query input.fa -outfmt 6 -out results.xml

date
</source>
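Once the script is saved (e.g. as <code>slurm_job.sh</code>), submit it with <code>sbatch</code> and monitor it with <code>squeue</code>. These are standard SLURM commands; run them on a login node:

<source lang=bash>
# Submit the job script to the scheduler
sbatch slurm_job.sh

# Check the status of your jobs in the queue
squeue -u $USER
</source>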