This assignment implements a simple parallel data structure.  This structure
is a two-dimensional regular mesh of points, divided into slabs, with each
slab allocated to a different processor.  In the simplest C form, the full
data structure is 
<PRE>
	double x[maxn][maxn];
</PRE>
and we want to arrange it so that each processor has a local piece:
<PRE>
	double xlocal[maxn/size][maxn];
</PRE>
where <CODE>size</CODE> is the size of the communicator (i.e., the number of
processors).
<P>
If that were all there was to it, there would be nothing to do.
However, for the computation that we're going to perform on this data
structure, we'll need the adjacent values.  That is, to compute a new 
<CODE>x[i][j]</CODE>, we will need 
<PRE>
x[i][j+1]
x[i][j-1]
x[i+1][j]
x[i-1][j]
</PRE>
The last two of these are a problem if they are not in
<CODE>xlocal</CODE> but are instead on the adjacent processors.
To handle this difficulty, we define <IT>ghost points</IT> that will
contain the values of these adjacent points.  
<P>
Write code to divide the array x into equal-sized strips and to copy the 
adjacent edges to the neighboring processors.  Assume that x is maxn by maxn,
and that maxn is evenly divisible by the number of processors.
For simplicity, you may assume a fixed-size array and a fixed (or minimum) 
number of processors.
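One way the exchange might look (a sketch under the two-ghost-row layout
assumed above, not the only correct ordering): each process sends its top
owned row up and receives into its bottom ghost row, then sends its bottom
owned row down and receives into its top ghost row.  Because the topmost
process posts its receive at once, the sends complete in a chain and no
deadlock occurs, though the exchange is serialized.

```c
#include <mpi.h>

#define MAXN 12   /* run with exactly 4 processes so MAXN/size = 3 */

int main(int argc, char *argv[])
{
    double xlocal[MAXN / 4 + 2][MAXN];
    int rank, size;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* ... fill xlocal here: owned rows with the rank, ghost rows with -1 ... */

    /* Send the top owned row up, receive into the bottom ghost row. */
    if (rank < size - 1)
        MPI_Send(xlocal[MAXN / size], MAXN, MPI_DOUBLE, rank + 1, 0,
                 MPI_COMM_WORLD);
    if (rank > 0)
        MPI_Recv(xlocal[0], MAXN, MPI_DOUBLE, rank - 1, 0,
                 MPI_COMM_WORLD, &status);

    /* Send the bottom owned row down, receive into the top ghost row. */
    if (rank > 0)
        MPI_Send(xlocal[1], MAXN, MPI_DOUBLE, rank - 1, 1,
                 MPI_COMM_WORLD);
    if (rank < size - 1)
        MPI_Recv(xlocal[MAXN / size + 1], MAXN, MPI_DOUBLE, rank + 1, 1,
                 MPI_COMM_WORLD, &status);

    /* ... verify the ghost rows here ... */

    MPI_Finalize();
    return 0;
}
```

Compile with <CODE>mpicc</CODE> and run with <CODE>mpiexec -n 4</CODE>.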
<P>
To test the routine, have each processor fill its section with the rank of
the process and the ghost points with -1.  After the exchange takes place,
check that the ghost points have the proper values.  Assume that the domain
is not periodic; that is, the top process (rank = size - 1) sends and
receives data only from the one under it (rank = size - 2), and the bottom
process (rank = 0) sends and receives data only from the one above it (rank =
1).  Consider a maxn of 12 and use 4 processors to start with.

<CENTER>
<IMG SRC="ghost.gif">
</CENTER>

For this exercise, use <CODE>MPI_Send</CODE> and <CODE>MPI_Recv</CODE>.  See
the related exercises for alternatives that use the nonblocking operations or
<CODE>MPI_Sendrecv</CODE>.
<P>
A more detailed description of this operation may be found in Chapter 4 of
<IT>Using MPI</IT>.
