This Guide corresponds to Version 1.2.1 of mpich. It was processed by LaTeX on Tue Sep 5 14:51:06 2000.
MPI (Message-Passing Interface) is a standard specification for
message-passing libraries. Mpich is a portable implementation of
the full MPI specification for a wide variety of parallel computing
environments, including workstation clusters and massively parallel
processors (MPPs). Mpich contains, along with the MPI library itself, a
programming environment for working with MPI programs. The programming
environment includes a portable startup mechanism, several profiling
libraries for studying the performance of MPI programs, and an X interface
to all of the tools. This guide explains how to compile, test, and
install mpich and its related tools.
This document describes how to obtain and install mpich [12], the portable implementation of the MPI Message-Passing Standard. Details on using the mpich implementation are presented in a separate User's Guide for mpich [9]. Version 1.2.1 of mpich is primarily a bug-fix and increased-portability release, particularly for Linux-based clusters.
New in 1.2.1:
Improved support for assorted Fortran and Fortran 90 compilers. In particular, a single version of MPICH can now be built to use several different Fortran compilers; see the installation manual (in doc/install.ps.gz) for details.
It is also now easier to compile MPI programs with a C compiler different from the one that MPICH was built with; see the installation manual.
Significant upgrades have been made to the MPD system of daemons that provide fast startup of MPICH jobs, management of stdio, and a crude parallel debugger based on gdb. See the README file in the mpich/mpid/mpd directory and the mpich User's Guide for information on how to use the MPD system with mpich.
The NT version of MPICH has been further enhanced and is available separately; see the MPICH download page http://www.mcs.anl.gov/mpi/mpich/download.html.
The MPE library for logging and program visualization has been much improved. See the file mpe/README for more details.
A new version of ROMIO, 1.0.3, is included. See romio/README for details.
A new version of the C++ interface from the University of Notre Dame is also included.
Known problems and bugs with this release are documented in the file mpich/KnownBugs.
There is an FAQ at http://www.mcs.anl.gov/mpi/mpich/faq.html. See this if you get "permission denied", "connection reset by peer", or "poll: protocol failure in circuit setup" when trying to run MPICH.
There is a paper on jumpshot available at ftp://ftp.mcs.anl.gov/pub/mpi/jumpshot.ps.gz . A paper on MPD is available at ftp://ftp.mcs.anl.gov/pub/mpd.ps.gz.
Full MPI 1.2 compliance, including cancel of sends
IMPI (Interoperable MPI [3]) style flow control.
A Windows NT version is now available as open source. The installation process for this version is different; this manual covers only the Unix version of mpich.
Support for SMP-clusters in mpirun.
A Fortran 90 MPI module (actually two; see Section Fortran 90 Modules).
Support for MPI_INIT_THREAD (but only for MPI_THREAD_SINGLE)
Support for VPATH-style installations, along with an installation process and a choice of directory names that are closer to the GNU-recommended approach
A new, scalable log file format, SLOG, for use with the MPE logging tools. SLOG files can be read by a new version of Jumpshot which is included with this release.
Updated ROMIO
A new device for networked clusters, similar to the p4 device but based on daemons and thus supporting a number of new convenience features, including fast startup. See the User's Guide for details.
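The MPI_INIT_THREAD support mentioned in the list above can be exercised from C roughly as follows. This is a minimal sketch, not taken from the mpich distribution; since mpich 1.2.1 supports only MPI_THREAD_SINGLE, the level reported back in provided will be at most that.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int provided;

    /* Request single-threaded operation; mpich 1.2.1 supports
       only MPI_THREAD_SINGLE, so that is what it can provide. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_SINGLE, &provided);

    if (provided == MPI_THREAD_SINGLE)
        printf("MPI_THREAD_SINGLE support provided\n");

    MPI_Finalize();
    return 0;
}
```

Such a program would typically be compiled with mpicc and started with mpirun, as described in the User's Guide.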
The ROMIO subsystem implements a large part of the MPI-2 standard for parallel I/O. For details on which file systems ROMIO runs on and on its current limitations, see the ROMIO documentation in romio/doc. ROMIO was implemented by Rajeev Thakur at Argonne.
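As an illustration of the kind of parallel I/O ROMIO provides, the sketch below has each process write its own block of a shared file with MPI_File_open and MPI_File_write_at. It is a hedged example, not code from the distribution: the file name datafile is arbitrary, and on some systems ROMIO expects a prefix naming the file-system type (see romio/doc).

```c
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_File   fh;
    MPI_Status status;
    int        rank;
    int        buf[4] = { 0, 1, 2, 3 };

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Open (creating if necessary) one file shared by all processes. */
    MPI_File_open(MPI_COMM_WORLD, "datafile",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* Each rank writes its 4 ints at a rank-dependent offset,
       so the writes do not overlap. */
    MPI_File_write_at(fh, (MPI_Offset) rank * sizeof(buf),
                      buf, 4, MPI_INT, &status);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}
```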
The MPI-2 standard C++ bindings are available for the MPI-1 functions. These were implemented by Andrew Lumsdaine and Jeff Squyres of Notre Dame.
A new device, the globus2 device, which replaces the previous Globus device, is available. It was implemented by Nick Karonis of Northern Illinois University and Brian Toonen of Argonne National Laboratory.
A new program visualization program, called Jumpshot, is available as an alternative to the upshot and nupshot programs. The principal implementor is Omer Zaki of Argonne and Angelo State University.