
Commit 936b5dc

Update README.md with UserDocs migration and additional links.
Thanks!
1 parent d422b39 commit 936b5dc

1 file changed: +3 -26 lines

Parallel_Computing/MPI/README.md

Lines changed: 3 additions & 26 deletions
@@ -2,11 +2,9 @@

## Introduction

-This web-page is intended to help you compile and run MPI applications on the cluster.
+This web-page is intended to help you compile and run MPI applications on the cluster. For more information on MPI and OpenMPI, see our [User Docs entry](https://docs.rc.fas.harvard.edu/kb/mpi-message-passing-interface/).

-The Message Passing Interface (MPI) library allows processes in your parallel application to communicate with one another by sending and receiving messages. There is no default MPI library in your environment when you log in to the cluster. You need to choose the desired MPI implementation for your applications. This is done by loading an appropriate MPI module. Currently the available MPI implementations on our cluster are [OpenMPI](https://www.open-mpi.org/) and [Mpich](https://www.mpich.org/). For both implementations the MPI libraries are compiled and built with either the [Intel compiler suite](https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html) or the [GNU compiler suite](https://www.gnu.org/software/gcc/). These are organized in [software modules](https://docs.rc.fas.harvard.edu/kb/modules-intro/).
-
-For instance, if you want to use OpenMPI compiled with the GNU compiler, you need to load the appropriate compiler and MPI modules. Below are some possible combinations; check <code>module spider MODULENAME</code> to get a full listing of possibilities.
+If you want to use OpenMPI compiled with the GNU compiler, you need to load the appropriate compiler and MPI modules. Below are some possible combinations; check <code>module spider MODULENAME</code> to get a full listing of possibilities.

```bash
# GCC + OpenMPI, e.g.,
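
As a point of reference, the module-selection workflow described in this hunk can be sketched as follows; the GCC and OpenMPI module versions shown are placeholders, not necessarily what is installed, so confirm them with <code>module spider</code> first.

```bash
# List the OpenMPI builds available as modules and the compilers they pair with
module spider openmpi

# Load a matching compiler + MPI pair (versions below are placeholders --
# use whatever "module spider" reports on the cluster)
module load gcc/12.2.0-fasrc01 openmpi/4.1.5-fasrc01

# Sanity check: the MPI compiler wrappers should now be on your PATH
which mpicc mpicxx mpif90
mpirun --version
```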
@@ -27,20 +25,6 @@ module load intel/24.0.1-fasrc01 intelmpi/2021.11-fasrc01

For reproducibility and consistency it is recommended to use the complete module name with the module load command, as illustrated above. Modules on the cluster get updated often, so check whether more recent ones are available. The modules are set up so that you can have only one MPI module loaded at a time; if you try to load a second one, it will automatically unload the first. This is done to avoid dependency collisions.

-There are four ways you can set up your MPI environment on the cluster:
-
-* Put the module load command in your startup files.<br>
-Most users will find this option the most convenient. You will likely only want to use a single version of MPI for all your work. This method also works with all MPI modules currently available on the cluster.
-
-* Load the module in your current shell.<br>
-For the current MPI versions you do not need to have the module load command in your startup files. If you submit a job, the remote processes will inherit the submission shell environment and use the proper MPI library. Note that this method does not work with older versions of MPI.
-
-* Load the module in your job script.<br>
-If you will be using different versions of MPI for different jobs, you can put the module load command in your script. You need to ensure your script can execute the module load command properly.
-
-* Do not use modules and set environment variables yourself.<br>
-You do not need to use modules; you can hard-code the paths instead. However, these locations may change without warning, so set them in one location only and do not scatter them throughout your scripts. This option can be useful if you have a customized local build of MPI you would like to use with your applications.
-
## Your First MPI Program

The examples below are included in this repository.
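
A minimal sketch of compiling and test-running a first MPI example, assuming a hypothetical source file <code>mpitest.c</code> that produces the <code>mpitest.x</code> binary referenced later in this README (module versions are again placeholders):

```bash
# Load the same compiler + MPI modules you will run with (placeholder versions)
module load gcc/12.2.0-fasrc01 openmpi/4.1.5-fasrc01

# Compile the example with the MPI wrapper compiler
mpicc -o mpitest.x mpitest.c

# From inside an interactive Slurm allocation (e.g., obtained with salloc),
# launch a few ranks to verify the build
srun -n 4 --mpi=pmi2 ./mpitest.x
```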
@@ -229,14 +213,7 @@ srun -n $SLURM_NTASKS --mpi=pmi2 ./mpitest.x

> **NOTE:** In the above example we use Intel and IntelMPI, <code>module load intel/23.2.0-fasrc01 intelmpi/2021.10.0-fasrc01</code>. As a rule, you **must** load exactly the same modules you used to compile your code.

-
-### Submit the jobs to the queue
-
-The <code>sbatch</code> command, followed by the batch-job script name, e.g., <code>run.sbatch</code>, is used to submit your batch script to the cluster compute nodes. Upon submission a job ID is returned, such as:
-
-```bash
-sbatch run.sbatch
-```
+Here is useful information on [how to run jobs](https://docs.rc.fas.harvard.edu/kb/running-jobs/) and [convenient Slurm commands](https://docs.rc.fas.harvard.edu/kb/convenient-slurm-commands/).

### Monitor your job

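For the submit-and-monitor steps touched by this hunk, here is a short sketch using the <code>run.sbatch</code> script name from the removed text; the job ID in the comments is illustrative output, not a real job.

```bash
# Submit the batch script; Slurm replies with the assigned job ID,
# e.g. "Submitted batch job 12345678" (the ID shown here is illustrative)
sbatch run.sbatch

# Monitor the job while it is pending or running
squeue -u $USER

# After completion, summarize the job record (substitute the real job ID)
sacct -j 12345678 --format=JobID,JobName,State,Elapsed,MaxRSS
```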