Newsgroups: comp.parallel.mpi
From: James P Monaco <jamesm>
Subject: MPI for nCube help
Organization: Texas A&M University, College Station, TX
Date: 30 Jun 1995 14:33:01 GMT
Message-ID: <3t11ut$4uc@news.tamu.edu>


I am helping a Ph.D. student construct a large parallel program on the nCube.  We
chose MPI as our message-passing library and decided (without much research) to
use MPICH.  (We are using an nCube 2 with a Sun front end running SunOS 4.1.1.)
Benchmarking is important to us, so accordingly we want quality profiling tools.
Yet, because of the nature of the nCube, the MPE graphics don't work, making
visual message passing with vissmess impossible.  (Well, maybe not impossible,
but exceedingly difficult.)  Furthermore, the debugger (ndb, much like dbx) and
all related tools that come with the nCube segmentation-fault when they try to
load the symbol table from any code that calls MPI functions.  We compiled MPI
with the debug flag.
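For reference, our build looks roughly like the lines below (a sketch, assuming the MPICH 1.x mpicc wrapper works on the Sun front end; we have not confirmed that the -mpilog option behaves the same under the nCube port):

```shell
# Sketch of a typical MPICH 1.x build line, NOT verified on the nCube port.
# -g keeps symbols for the debugger; -mpilog links in the MPE logging library,
# which writes a post-mortem logfile rather than needing live X graphics.
mpicc -g -mpilog -o myprog myprog.c
# After a run, the resulting logfile can be examined offline with upshot.
```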

    I guess my question is: can we recompile the code in some manner so that the
debugger will work?  Or is there another implementation or extension of MPI
that supports visual message passing (or other quality profilers for message
passing) on an nCube?

Any help would be greatly appreciated.


thanks
james
