RGX_List_Processor_Names

List all the processor/machine names in the given communicator, either on stdout or in a returned character array. There are two different calling interfaces, listed as follows:

Synopsis


  int RGX_List_Processor_Names( MPI_Comm  World_Comm,
                                ...
                              )

  int RGX_List_Processor_Names( MPI_Comm  World_Comm,
                                int       N_task_out = -1 )

  int RGX_List_Processor_Names( MPI_Comm  World_Comm,
                                int      *N_task_out_ptr,
                                char     *processor_names_out )

Unmodified Input Variables

World_Comm - Input communicator containing all processors for the grid (usually MPI_COMM_WORLD)
N_task_out_ptr - Pointer to the number of tasks, N_task, in the communicator. If N_task = -1, no pointer to a character array of processor names is assumed and the names go to stdout. If N_task > 0, a pointer to a character array of sufficient size must be supplied; the input value of N_task is the number of processor names the array can hold. If N_task is less than the number of processors in the communicator, only the first N_task processor names are returned.

Modified Output Variables

processor_names_out - Pointer to a character array of size at least (number of processors) * MPI_MAX_PROCESSOR_NAME. If N_task > 0, the array will contain all the processor names; the ii-th processor name can be accessed at processor_names_out[ ii * MPI_MAX_PROCESSOR_NAME ].
returned value - returns MPI_SUCCESS on successful completion

Notes on Fortran Interface

The RGX routines which carry this note have a Fortran interface. If the C function defined here returns an integer status, the Fortran interface has an additional argument, ierr, at the end of its argument list.

Notes on Minimal Set of Variables

In order to use RGX for a regular grid calculation, a minimal set of variables must be used with the RGX_XXX routines. The set of variables is listed as follows:


   const int     MD_type = MPI_DOUBLE_PRECISION;
   double       *Task_xGrid;         // Data of local task grid
   int           Ndim;               // Dimensionality of the grid
   MPI_Comm      RGX_Comm;           // Communicator of all sub-grids
   int           N_task;             // No. of processes/tasks in Communicator
   int          *Glb_Grid_sz;        // Global grid sizes :
                                     //   Glb_Grid_sz(Ndim)
   int          *task_Ngbrs;         // tid of the nearest neighbor tasks :
                                     //   task_Ngbrs(2, Ndim)
   int          *task_Grid_endpts;   // Local sub-grid's end points :
                                     //   task_Grid_endpts(Ndim, 2)
   int          *N_task_1dims;       // No. of tasks per dimension :
                                     //   N_task_1dims(Ndim)
   int          *stride_width;       // Width of the stride of ghost points :
                                     //   stride_width(Ndim)
   MPI_Datatype *task_bndry_strides; // Local task/grid's boundary strides :
                                     //   task_bndry_strides(Ndim)

Memory for each of these pointer variables can be allocated dynamically or statically, but it must be contiguous.

Supported Data Type of the Block Array

There are currently five datatypes supported in RGX, identified by their MPI_Datatype handles: MPI_INTEGER, MPI_REAL, MPI_DOUBLE_PRECISION, MPI_COMPLEX and MPI_DOUBLE_COMPLEX.

Definition Location

This subroutine is defined in librgx.a.

Location: ../src/librgx/RGX_List_Processor_Names.c