Update qchem.md #686

Merged
merged 11 commits on Apr 8, 2025
85 changes: 55 additions & 30 deletions docs/Documentation/Applications/qchem.md

## Running Q-Chem

The `q-chem` module should be loaded to set up the necessary environment; the `module help` output can provide more detail. In particular, the modulefile does not set the required environment variable `QCSCRATCH`, since its value is likely unique to each run. Set it in your Slurm script or at the command line via `export` (bash) or `setenv` (csh).
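Since each run typically needs its own scratch directory, a minimal bash sketch is (the base path and directory name here are illustrative, not a site requirement):

```shell
# Sketch: point Q-Chem at a per-run scratch directory (bash syntax).
# On the cluster you would normally use the scratch filesystem,
# e.g. /scratch/$USER/...; ${TMPDIR:-/tmp} is used here only so the
# snippet runs anywhere. $$ (the shell PID) keeps the name unique.
export QCSCRATCH=${TMPDIR:-/tmp}/qchem_$$
mkdir -p "$QCSCRATCH"
echo "Q-Chem scratch files will go to $QCSCRATCH"
```

The csh equivalent of the first line would be `setenv QCSCRATCH /path/to/scratch`.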

The simplest means of starting a Q-Chem job is via the supplied `qchem` wrapper. The general syntax is:

`qchem -slurm <-nt number_of_OpenMP_threads> <input file> <output file> <savename>`

For example, to run a job with 104 threads:

`qchem -slurm -nt 104 example.in`

!!! tip "Note"
The Q-Chem input file must be in the same directory in which you issue the qchem command. In other words, `qchem ... SOMEPATH/<input file>` won't work.

For a full list of which types of calculation are parallelized and the types of parallelism, see the [Q-Chem User's Manual](https://manual.q-chem.com/6.2/).

To save certain intermediate files (*e.g.*, for restart), a save directory name must be provided. If none is provided, all scratch files are deleted automatically when the job ends. If one is provided, a directory `$QCSCRATCH/savename` is created to hold the saved files. To save all intermediate files, add the `-save` option.
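As a sketch, assuming a (hypothetical) input file `example.in`, output file `example.out`, and a save directory named `my_save`:

```shell
# Keep restart files under $QCSCRATCH/my_save ("my_save" is our choice of name)
qchem -slurm -nt 104 example.in example.out my_save

# Keep ALL intermediate scratch files as well, via the -save option
qchem -slurm -nt 104 -save example.in example.out my_save
```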

A template Slurm script to run Q-Chem with 104 threads is:

### Sample Submission Script for Kestrel

```bash
#!/bin/bash
#SBATCH --job-name=my_qchem_job
#SBATCH --account=my_allocation_ID
#SBATCH --nodes=1
#SBATCH --tasks-per-node=104
#SBATCH --time=01:00:00
#SBATCH --exclusive
#SBATCH --mail-type=BEGIN,END,FAIL
#SBATCH [email protected]
#SBATCH --output=std-%j.out
#SBATCH --error=std-%j.err

# Load the Q-Chem environment
module load q-chem/6.2

# Go to the location of job files, presumably from where this file was submitted
cd $SLURM_SUBMIT_DIR

# Set up scratch space: prefer the node-local NVMe drive when it exists
if [ -e /dev/nvme0n1 ]; then
    SCRATCH=$TMPDIR
    echo "This node has a local storage drive and will use $SCRATCH as the scratch path"
else
    SCRATCH=/scratch/$USER/$SLURM_JOB_ID
    echo "This node does not have a local storage drive and will use $SCRATCH as the scratch path"
fi

mkdir -p $SCRATCH

export QCSCRATCH=$SCRATCH
export QCLOCALSCR=$SCRATCH

jobnm=qchem_test

# Use MPI across nodes; use OpenMP threads within a single node
if [ $SLURM_JOB_NUM_NODES -gt 1 ]; then
    QCHEMOPT="-mpi -np $SLURM_NTASKS"
else
    QCHEMOPT="-nt $SLURM_NTASKS"
fi

echo "Running Q-Chem with this command: qchem $QCHEMOPT $jobnm.com $jobnm.out"
qchem $QCHEMOPT $jobnm.com $jobnm.out

# Clean up scratch space
rm $SCRATCH/*
rmdir $SCRATCH
```

To run this script on HPC systems other than Kestrel, adjust the number of threads (`--tasks-per-node`) to match the core count of the node.

A large number of example Q-Chem input files are available in `/nopt/nrel/apps/q-chem/<version>/samples`.
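For instance, you could browse the samples and copy one into your working directory to adapt (a sketch; `<version>` and the sample filename are placeholders to fill in on the cluster):

```shell
# List the available sample inputs for an installed Q-Chem version
# (find installed versions with `module avail q-chem`)
ls /nopt/nrel/apps/q-chem/<version>/samples

# Copy a sample (hypothetical name) into the current directory as a template
cp /nopt/nrel/apps/q-chem/<version>/samples/some_sample.in .
```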

## Running BrianQC
BrianQC is the GPU version of Q-Chem and can perform Q-Chem calculations on GPUs, which is significantly faster for some larger ab initio jobs. BrianQC uses the same input files as Q-Chem. To run BrianQC, make the following changes to the sample Slurm script above:

1. Add this line to the header section: `#SBATCH --gres=gpu:1`
    > **Reviewer comment (Collaborator):** If only using 1 GPU, would you recommend removing `--exclusive` and changing the number of cores, so that the job isn't charged for the full node and 3 GPUs aren't idle? For GPU jobs, RAM also needs to be requested.

2. Load the BrianQC module instead of Q-Chem: `module load brianqc`
3. Add `-gpu` to `$QCHEMOPT`, like:
    ```bash
    if [ $SLURM_JOB_NUM_NODES -gt 1 ]; then
        QCHEMOPT="-gpu -mpi -np $SLURM_NTASKS"
    else
        QCHEMOPT="-gpu -nt $SLURM_NTASKS"
    fi
    ```
4. Submit jobs through the GPU login nodes on Kestrel, or add `#SBATCH -p gpu` to the header of the Slurm file if running on Swift.
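Picking up the reviewer comment above, a sketch of a single-GPU header that avoids `--exclusive` and requests memory explicitly might look like the following. The core count and memory values are illustrative assumptions to adapt to your allocation, not tested recommendations:

```shell
#!/bin/bash
# Sketch of a single-GPU BrianQC job header: request only part of the node
# rather than --exclusive, so the job is not charged for the full node and
# the other GPUs are not left idle. The values 32 cores and 80G of RAM are
# assumptions for illustration only.
#SBATCH --job-name=my_brianqc_job
#SBATCH --account=my_allocation_ID
#SBATCH --nodes=1
#SBATCH --tasks-per-node=32
#SBATCH --gres=gpu:1
#SBATCH --mem=80G
#SBATCH --time=01:00:00

module load brianqc
```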