Newsgroups: comp.parallel.mpi
From: sdblackb@uncc.edu (Stuart D Blackburn)
Subject: Problem with MPI_Scatter on LAM implementation
Organization: University of NC at Charlotte
Date: 16 Aug 1995 20:02:52 GMT
Message-ID: <40titc$qfo@news.uncc.edu>

	I am trying to port a PVM program to MPI (in particular,
the LAM implementation of MPI). The program hangs at the
MPI_Scatter call, and I cannot see why. Below is the code
in question:

--------- From an included header file -----------
#define Height 10
#define Width 10
#define Depth 10

/* N must be set equal to a factor of Depth */
#define N 2
#define MIN_ITERATIONS Depth

/* The number of alterable ranks for each worker */
#define NUM_RANK Depth/N	

---- The code in question with its variable declarations ----

    int me, numprocs;
    float *data, my_data[NUM_RANK + 2][Height + 2][Width + 2];

    /* enroll in MPI */
    MPI_Init(&argc, &argv);

    /* world communicator/group setup */
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    MPI_Comm_rank(MPI_COMM_WORLD, &me);

    if (DEBUG) printf("I am %i, and there are %i processes in MPI_COMM_WORLD\n",
        me, numprocs);

    if (me == 0) { /* Then I am the master */

        data = (float *) malloc((Depth * (Height + 2) * (Width + 2)) * sizeof(float));
        if (data == NULL) {
            printf("ERROR: Could not allocate memory for data array.\nEXITING\n");
            MPI_Finalize();
            exit(3);
        }

        init_array(data);

        if (DEBUG) printf("I %i am the master, and this is initialized data\n", me);
        if (DEBUG) print_results(data);

        /* Scatter the data array */
        MPI_Scatter(&data, NUM_RANK * (Width + 2) * (Height + 2), MPI_FLOAT,
            &(my_data[1][0][0]), NUM_RANK * (Width + 2) * (Height + 2), MPI_FLOAT,
            0, MPI_COMM_WORLD);

    } else { /* I am a slave */

        /* Receive my portion of the data array */
        MPI_Scatter(&data, NUM_RANK * (Width + 2) * (Height + 2), MPI_FLOAT,
            &(my_data[1][0][0]), NUM_RANK * (Width + 2) * (Height + 2), MPI_FLOAT,
            0, MPI_COMM_WORLD);
    }

----------------------------------------------------------------

	I am starting 2 copies of this with mpirun. The print_results()
function prints the initialized array just fine. Everything seems
reasonable up to that point, but both processes seem to hang when they
hit MPI_Scatter. The data to be scattered is technically just a flat
array of Depth*(Height+2)*(Width+2) floats, but I am using it as if it
were declared as data[Depth][Height+2][Width+2]; I don't see that as a
problem. I am scattering NUM_RANK*(Width+2)*(Height+2) floats to each
process, where NUM_RANK is defined as Depth/N, so the amount of data
should divide evenly among the processes.
	What am I doing wrong here?
	Is there something fundamentally different about the way
PVM's scatter works compared to MPI's that I am missing?
	
	Another thing: when this hangs, it seems to confuse lamd, and
I can't get it to tell me anything about what is going on with the tasks,
buffers, etc. I have to restart the virtual machine in order to run
anything.

	Thanks for any assistance that you can offer.
	

       _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
      _/   Stuart D. Blackburn, Computer Science Graduate Student   _/
     _/          University of North Carolina at Charlotte         _/
    _/    Graduate Teaching Assistant (CSCI 1201 and 1202 Labs)   _/
   _/      225 E. North St. (PO Box 1012), Albemarle, NC 28002   _/
  _/         Home: (704) 982 0763          Office: 547-4574     _/
 _/E-mail:sdblackb@uncc.edu (http://www.coe.uncc.edu/~sdblackb)_/
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/



