@@ -29,8 +29,42 @@ Open MPI version v6.0.0
   delivered through the Open MPI internal "OMPIO" implementation
   (which has been the default for quite a while, anyway).
 
-- Added MPI-4.1 ``MPI_Status_*`` functions.
+- Added support for MPI-4.1 functions to access and update ``MPI_Status``
+  fields.
 
 - MPI-4.1 has deprecated the use of the Fortran ``mpif.h`` include
   file.  Open MPI will now issue a warning when the file is included
   and the Fortran compiler supports the ``#warning`` directive.
+
+- Added support for the MPI-4.1 memory allocation kind info object and
+  values introduced in the MPI Memory Allocation Kinds side-document.
+
+- Added support for Intel Ponte Vecchio GPUs.
+
+- Extended the functionality of the accelerator framework to support
+  intra-node device-to-device transfers for AMD and NVIDIA GPUs
+  (independent of UCX or Libfabric).
+
+- Added support for MPI sessions when using UCX.
+
+- Added support for the MPI-4.1 ``MPI_REQUEST_GET_STATUS_[ALL|ANY|SOME]``
+  functions.
+
+- Improvements to collective operations:
+
+  - Added new ``xhc`` collective component to optimize shared memory
+    collective operations using XPMEM.
+
+  - Added new ``acoll`` collective component optimizing single-node
+    collective operations on AMD Zen-based processors.
+
+  - Added new algorithms to optimize Alltoall and Alltoallv in the
+    ``han`` component when XPMEM is available.
+
+  - Introduced new algorithms and parameterizations for Reduce, Allgather,
+    and Allreduce in the base collective component, and adjusted the
+    ``tuned`` component to better utilize these collectives.
+
+  - Added new JSON file format to tune the ``tuned`` collective component.
+
+  - Extended the ``accelerator`` collective component to support
+    more collective operations on device buffers.
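
As a concrete illustration of the MPI-4.1 status-accessor entry above, here is a
minimal sketch in C. It uses ``MPI_Status_get_source``, ``MPI_Status_get_tag``,
``MPI_Status_set_source``, and ``MPI_Status_set_tag`` as representative members
of the accessor family; these names come from the MPI-4.1 standard rather than
from this diff, so treat the snippet as an untested sketch.

.. code-block:: c

   #include <mpi.h>
   #include <stdio.h>

   int main(int argc, char **argv)
   {
       MPI_Init(&argc, &argv);

       int rank;
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);

       /* Self-sendrecv so every rank has a filled-in status to inspect. */
       int sendval = rank, recvval = -1;
       MPI_Status status;
       MPI_Sendrecv(&sendval, 1, MPI_INT, rank, 42,
                    &recvval, 1, MPI_INT, rank, 42,
                    MPI_COMM_WORLD, &status);

       /* MPI-4.1 getters replace direct reads of status.MPI_SOURCE and
        * status.MPI_TAG. */
       int source, tag;
       MPI_Status_get_source(&status, &source);
       MPI_Status_get_tag(&status, &tag);
       printf("rank %d: source=%d tag=%d\n", rank, source, tag);

       /* The matching setters update the same fields, e.g. when a tool or
        * library layer constructs a status on the application's behalf. */
       MPI_Status_set_source(&status, MPI_PROC_NULL);
       MPI_Status_set_tag(&status, MPI_ANY_TAG);

       MPI_Finalize();
       return 0;
   }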
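
Similarly, a small sketch of the ``ALL`` variant from the
``MPI_REQUEST_GET_STATUS_[ALL|ANY|SOME]`` entry: unlike ``MPI_Testall``,
``MPI_Request_get_status_all`` reports completion without deallocating or
deactivating the requests. The signature used here is the one defined by
MPI-4.1; again, an untested sketch rather than Open MPI-specific example code.

.. code-block:: c

   #include <mpi.h>
   #include <stdio.h>

   int main(int argc, char **argv)
   {
       MPI_Init(&argc, &argv);

       int rank, size;
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
       MPI_Comm_size(MPI_COMM_WORLD, &size);

       int right = (rank + 1) % size;
       int left  = (rank + size - 1) % size;

       int sendval = rank, recvval = -1;
       MPI_Request reqs[2];
       MPI_Status  stats[2];

       MPI_Irecv(&recvval, 1, MPI_INT, left,  0, MPI_COMM_WORLD, &reqs[0]);
       MPI_Isend(&sendval, 1, MPI_INT, right, 0, MPI_COMM_WORLD, &reqs[1]);

       /* Poll all requests non-destructively: flag becomes 1 once every
        * operation has completed, but the requests stay valid. */
       int flag = 0;
       while (!flag) {
           MPI_Request_get_status_all(2, reqs, &flag, stats);
           /* ... other useful work could be interleaved here ... */
       }

       /* The requests still have to be completed and freed as usual. */
       MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

       printf("rank %d received %d from rank %d\n", rank, recvval, left);

       MPI_Finalize();
       return 0;
   }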