Intel Parallel Studio XE 2019 Update 3 has been installed on the cluster and is available via
module load intel/2019.3
This version makes significant changes to the underlying fabrics used for MPI communication.*
In particular the variables:
- I_MPI_FABRICS
- I_MPI_DAPL_UD
- I_MPI_DAPL_UD_PROVIDER
are no longer required. Setting I_MPI_FABRICS (to shm:shm) for test runs on the login nodes is also no longer necessary. These variables receive the correct values when the intel/2019.3 module is loaded, so please check that you are not resetting them to old values after the module load.
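A minimal way to check whether your shell startup files still export stale values is to grep them (the file list below is only an example; adjust it to wherever you set environment variables):

```shell
# Look for leftover exports of the deprecated Intel MPI variables
# in the usual shell startup files (example list, adjust as needed)
found=0
for f in "$HOME/.bashrc" "$HOME/.bash_profile" "$HOME/.profile"; do
    if [ -f "$f" ] && grep -Hn 'I_MPI_FABRICS\|I_MPI_DAPL' "$f"; then
        found=1
    fi
done
if [ "$found" -eq 0 ]; then
    echo "no stale I_MPI settings found"
fi
```

If the grep reports any matches, remove those export lines (or comment them out) before loading intel/2019.3.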
This version of the Intel Parallel Studio includes the Intel Distribution for Python but does not activate it, to avoid confusion with the standard Anaconda Python distributions. If you wish to use the Intel Python distribution, you can find Python 2 and Python 3 versions at
- /p/system/packages/intel/parallel_studio_xe_2019_update3/intelpython2
- /p/system/packages/intel/parallel_studio_xe_2019_update3/intelpython3
Please unload any Anaconda modules before using these. To use the Python MPI interfaces, you may need to load the intel/2019.3 module or source the mpivars.sh file, e.g.
source /p/system/packages/intel/parallel_studio_xe_2019_update3/intelpython3/bin/mpivars.sh
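As a rough sketch of the steps above (assuming mpi4py is included in the Intel Python distribution, and using the paths listed here), a quick check of the Python MPI interface could look like this; treat it as an illustration rather than a tested recipe:

```shell
# Start from a clean environment so no Anaconda module shadows Intel Python
module purge
module load intel/2019.3

# Path to the Intel Python 3 installation listed above
IPY=/p/system/packages/intel/parallel_studio_xe_2019_update3/intelpython3

# Pick up the MPI runtime settings shipped with Intel Python
source "$IPY/bin/mpivars.sh"

# Assumption: mpi4py is part of the Intel Python distribution;
# each rank prints its rank number
mpirun -n 2 "$IPY/bin/python" -c 'from mpi4py import MPI; print(MPI.COMM_WORLD.Get_rank())'
```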
For more information on these changes, please have a look at the links below.
Get Started with Intel® Parallel Studio XE 2019 Composer Edition for Linux*
Get Started with Intel® Distribution for Python*
Working with libfabric* on Intel® MPI Library Cluster Systems
A full documentation set can be found here:
/p/system/packages/intel/parallel_studio_xe_2019_update3/documentation_2019/en
This can be copied to your local machine for offline reading.
If you have any questions, comments, or problems, please drop us a line at cluster-support@pik-potsdam.de
* Specifically: the DAPL, TMI, and OFA fabrics have been deprecated since Intel® MPI Library 2017 Update 1. Intel MPI Library 2019 does not support these fabrics; it supports only libfabric.