README.md (+153 -60)
@@ -50,32 +50,54 @@ each list element:
 
 ### slurm.conf
 
-`openhpc_slurm_partitions`: Optional. List of one or more slurm partitions, default `[]`. Each partition may contain the following values:
-* `groups`: If there are multiple node groups that make up the partition, a list of group objects can be defined here.
-  Otherwise, `groups` can be omitted and the following attributes can be defined in the partition object:
-* `name`: The name of the nodes within this group.
-* `cluster_name`: Optional. An override for the top-level definition `openhpc_cluster_name`.
-* `extra_nodes`: Optional. A list of additional node definitions, e.g. for nodes in this group/partition not controlled by this role. Each item should be a dict, with keys/values as per the ["NODE CONFIGURATION"](https://slurm.schedmd.com/slurm.conf.html#lbAE) docs for slurm.conf. Note the key `NodeName` must be first.
-* `ram_mb`: Optional. The physical RAM available in each node of this group ([slurm.conf](https://slurm.schedmd.com/slurm.conf.html) parameter `RealMemory`) in MiB. This is set using ansible facts if not defined, equivalent to `free --mebi` total * `openhpc_ram_multiplier`.
-* `ram_multiplier`: Optional. An override for the top-level definition `openhpc_ram_multiplier`. Has no effect if `ram_mb` is set.
+`openhpc_nodegroups`: Optional, default `[]`. List of mappings, each defining a
+unique set of homogenous nodes:
+* `name`: Required. Name of node group.
+* `ram_mb`: Optional. The physical RAM available in each node of this group
+  in MiB. This is set using ansible facts if not defined, equivalent to
+  `free --mebi` total * `openhpc_ram_multiplier`.
+* `ram_multiplier`: Optional. An override for the top-level definition
+  `openhpc_ram_multiplier`. Has no effect if `ram_mb` is set.
 * `gres`: Optional. List of dicts defining [generic resources](https://slurm.schedmd.com/gres.html). Each dict must define:
   - `conf`: A string with the [resource specification](https://slurm.schedmd.com/slurm.conf.html#OPT_Gres_1) but requiring the format `<name>:<type>:<number>`, e.g. `gpu:A100:2`. Note the `type` is an arbitrary string.
   - `file`: A string with the [File](https://slurm.schedmd.com/gres.conf.html#OPT_File) (path to device(s)) for this resource, e.g. `/dev/nvidia[0-1]` for the above example.
-
   Note [GresTypes](https://slurm.schedmd.com/slurm.conf.html#OPT_GresTypes) must be set in `openhpc_config` if this is used.
-
-* `default`: Optional. A boolean flag for whether this partition is the default. Valid settings are `YES` and `NO`.
-* `maxtime`: Optional. A partition-specific time limit following the format of [slurm.conf](https://slurm.schedmd.com/slurm.conf.html) parameter `MaxTime`. The default value is
-  given by `openhpc_job_maxtime`. The value should be quoted to avoid Ansible conversions.
-* `partition_params`: Optional. Mapping of additional parameters and values for [partition configuration](https://slurm.schedmd.com/slurm.conf.html#SECTION_PARTITION-CONFIGURATION).
-
-For each group (if used) or partition any nodes in an ansible inventory group `<cluster_name>_<group_name>` will be added to the group/partition. Note that:
-- Nodes may have arbitrary hostnames but these should be lowercase to avoid a mismatch between inventory and actual hostname.
-- Nodes in a group are assumed to be homogenous in terms of processor and memory.
-- An inventory group may be empty or missing, but if it is not then the play must contain at least one node from it (used to set processor information).
-
-
-`openhpc_job_maxtime`: Maximum job time limit, default `'60-0'` (60 days). See [slurm.conf](https://slurm.schedmd.com/slurm.conf.html) parameter `MaxTime` for format. The default is 60 days. The value should be quoted to avoid Ansible conversions.
+* `features`: Optional. List of [Features](https://slurm.schedmd.com/slurm.conf.html#OPT_Features) strings.
+* `node_params`: Optional. Mapping of additional parameters and values for [node configuration](https://slurm.schedmd.com/slurm.conf.html#SECTION_NODE-CONFIGURATION).
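
As a sketch of how the new nodegroup options fit together, a single entry might look like the following (the group name, RAM size, feature strings and GRES device paths are purely illustrative):

```yaml
openhpc_nodegroups:
  # nodes for this group are taken from the inventory group <openhpc_cluster_name>_gpu
  - name: gpu
    ram_mb: 512000            # MiB; if omitted, derived from ansible facts * openhpc_ram_multiplier
    features:
      - a100                  # arbitrary Feature strings
    gres:
      - conf: gpu:A100:2      # format <name>:<type>:<number>
        file: /dev/nvidia[0-1]
    node_params:
      CoreSpecCount: 2        # any additional slurm.conf node parameters
```

Remember that `GresTypes` must also be set in `openhpc_config` whenever `gres` is used.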

-To deploy, create a playbook which looks like this:
-
-    ---
-    - hosts:
-      - cluster_login
-      - cluster_control
-      - cluster_batch
-      become: yes
-      roles:
-        - role: openhpc
-          openhpc_enable:
-            control: "{{ inventory_hostname in groups['cluster_control'] }}"
-            batch: "{{ inventory_hostname in groups['cluster_batch'] }}"
-            runtime: true
-          openhpc_slurm_service_enabled: true
-          openhpc_slurm_control_host: "{{ groups['cluster_control'] | first }}"
-          openhpc_slurm_partitions:
-            - name: "compute"
-          openhpc_cluster_name: openhpc
-          openhpc_packages: []
-    ...
+[hpc_control]
+cluster-control
+```
 
+```yaml
+#playbook.yml
+---
+- hosts: all
+  become: yes
+  tasks:
+    - import_role:
+        name: stackhpc.openhpc
+      vars:
+        openhpc_cluster_name: hpc
+        openhpc_enable:
+          control: "{{ inventory_hostname in groups['cluster_control'] }}"
+          batch: "{{ inventory_hostname in groups['cluster_compute'] }}"
+          runtime: true
+        openhpc_slurm_control_host: "{{ groups['cluster_control'] | first }}"
+        openhpc_nodegroups:
+          - name: compute
+        openhpc_partitions:
+          - name: compute
 ---
+```
+
+### Multiple nodegroups
+
+This example shows how partitions can span multiple types of compute node.
+
+This example inventory describes three types of compute node (login and
+control nodes are omitted for brevity):
+
+```ini
+# inventory/hosts:
+...
+[hpc_general]
+# standard compute nodes
+cluster-general-0
+cluster-general-1
+
+[hpc_large]
+# large memory nodes
+cluster-largemem-0
+cluster-largemem-1
+
+[hpc_gpu]
+# GPU nodes
+cluster-a100-0
+cluster-a100-1
+...
+```
+
+Firstly, `openhpc_nodegroups` is set to capture these inventory groups and
+apply any node-level parameters - in this case the `largemem` nodes have
+two cores reserved for system use (`CoreSpecCount`), and GRES is configured for the GPU nodes:
+
+```yaml
+openhpc_cluster_name: hpc
+openhpc_nodegroups:
+  - name: general
+  - name: large
+    node_params:
+      CoreSpecCount: 2
+  - name: gpu
+    gres:
+      - conf: gpu:A100:2
+        file: /dev/nvidia[0-1]
+```
+
+Now two partitions can be configured - a default one with a short time limit and
+no large memory nodes for testing jobs, and another with all hardware and longer
+job runtime for "production" jobs:
+
+```yaml
+openhpc_partitions:
+  - name: test
+    nodegroups:
+      - general
+      - gpu
+    maxtime: '1:0:0'  # 1 hour
+    default: 'YES'
+  - name: general
+    nodegroups:
+      - general
+      - large
+      - gpu
+    maxtime: '2-0'  # 2 days
+    default: 'NO'
+```
+Users will select the partition using the `--partition` argument and request nodes
+with appropriate memory or GPUs using the `--mem` and `--gres` or `--gpus*`
+options for `sbatch` or `srun`.
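
For example, with the partitions above a GPU job might be submitted with something like `sbatch --partition=general --gres=gpu:A100:1 --mem=16G job.sh`, where `job.sh` and the resource values are only placeholders.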
+
+Finally, some additional configuration must be provided for GRES:
+```yaml
+openhpc_config:
+  GresTypes:
+    - gpu
+```
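
With this in place, the generated `slurm.conf` would be expected to contain a corresponding `GresTypes=gpu` line.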
<b id="slurm_ver_footnote">1</b> Slurm 20.11 removed `accounting_storage/filetxt` as an option. This version of Slurm was introduced in OpenHPC v2.1 but the OpenHPC repos are common to all OpenHPC v2.x releases. [↩](#accounting_storage)