Building MPI Yourself
MPI does not require any special system privileges to set up or use on most clusters. There are many variants of MPI, and different users may need different versions. For EMAN2, OpenMPI, the most widely used MPI distribution, works well in almost all situations, though there are some version-specific issues; we currently recommend version 1.4.3. This page describes how to compile and install OpenMPI yourself in your own account.
Beware! Even if you compile MPI yourself, you will still need to install pydusa. In fact, by installing pydusa first you *might* be able to avoid compiling your own MPI at all, so try that first: Install pydusa for EMAN2/SPARX ; EMAN2/Parallel/PyDusa
For an independent OpenMPI compilation, download the version of OpenMPI you want or need to use (for example, openmpi-1.4.3.tar.gz; any given OpenMPI version can be found with a quick web search, and it is worth asking someone who already has MPI working on your cluster which version they recommend). Transfer the tar/zip file to your home directory on the cluster of interest and untar/unzip it. Once you untar it, you should see a directory with all the OpenMPI source files (for example, openmpi-1.4.3), as in the sketch below.
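For instance, assuming you have already copied openmpi-1.4.3.tar.gz into your home directory (a sketch; substitute the file name of whichever version you chose):

cd ~
tar -xzf openmpi-1.4.3.tar.gz
ls openmpi-1.4.3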
To compile, you will need to create a separate folder within your home directory where OpenMPI will be installed. For example:
mkdir openmpi
- Then go to the source files directory:
cd openmpi-1.4.3
Once you're in there, run the configure script, telling it where your OpenMPI build should be installed. This is done through the --prefix option, which you set to the installation directory you just created (the openmpi directory in this example); you can always check the available options by typing ./configure --help. You also have to specify --disable-dlopen:
./configure --prefix=$HOME/openmpi --disable-dlopen
Still within the source directory (openmpi-1.4.3), run the following commands:
make
make install
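If your machine has several cores, the build can be sped up by asking make to run parallel jobs (a generic GNU make option, not specific to OpenMPI; adjust the job count to your machine):

make -j4
make install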
If it compiles successfully, you now have to add MPI to your PATH by modifying your .bashrc file, a hidden file that should be inside your home directory. You can easily open it with the text editor vi, which comes with Linux and OSX by default:
cd ~
vi .bashrc
- Note that THE ORDER in which you add directories to your PATH variable *MATTERS*. You want the system to find your build of MPI *first*, so add its directory at the beginning of your PATH (that is, put the following lines before any other lines containing the word "export" in your .bashrc file):
export PATH=$HOME/openmpi/bin:$PATH:/usr/local/bin
export LD_LIBRARY_PATH=$HOME/openmpi/lib:$LD_LIBRARY_PATH
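After saving .bashrc, reload it and check that the shell now finds your own build first (mpicc and mpirun are standard OpenMPI executables; the expected paths below assume the $HOME/openmpi prefix used in this example):

source ~/.bashrc
which mpicc      # should print $HOME/openmpi/bin/mpicc
mpirun --version # should report the OpenMPI version you built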
Once this is correctly set, go to the ~/EMAN2/mpi_eman/ directory and use Makefile.linux2 to compile the mpi_eman_c library:
cd ~/EMAN2/mpi_eman/
make -f Makefile.linux2 install
If it runs successfully, two new files should have been generated:
- mpi_eman_c.o
- mpi_eman_c.so
Copy the mpi_eman_c.so file to ~/EMAN2/lib/:
cp mpi_eman_c.so ~/EMAN2/lib/
You should now be able to run the test scripts in the ~/EMAN2/mpi_eman/ directory, which are described on the main EMAN2/Parallel/Mpi page.
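Before trying the EMAN2 test scripts, a quick generic sanity check (not EMAN2-specific; it assumes your cluster lets you run mpirun interactively) is to launch a trivial command across a few processes:

mpirun -n 4 hostname

If four host names are printed, your OpenMPI build is working.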
If you're getting errors running your jobs on a cluster using MPI, please *DO* remember to clear the cache on each node (replace username with your own user name):
cexec rm -rf /scratch/username