Newsgroups: comp.parallel,comp.parallel.pvm
From: chasman@chem.columbia.edu (David Chasman)
Subject: Re: Distributed Proccessing - How to measure speedups?
Organization: Center for Biomolecular Simulation
Date: Wed, 27 Jul 1994 12:37:52 GMT
Message-ID: <CtLn35.7zG@dcs.ed.ac.uk>

In article <Ct8JBz.KDJ@dcs.ed.ac.uk> nfotis@theseas.ntua.gr (Nick C. Fotis) writes:
>
>- How can / should I measure the efficient execution of programs in a
>  heterogeneous network?
>
>
>We don't know what to measure anymore - the CPU seconds spent in each CPU
>are rather irrelevant, as we may have CPUs from 30 SPECfp to 300 SPECfp each
>- and the network delays aren't the same on each machine.
>
>We cannot isolate the network, since it's not our own, and the wall-clock
>time is not adequate metric, since he does research on efficient parallel
>algorithms (till now on homogeneous, shared memory machines)
>

	The relevant resource on any machine here is
	"megaflop seconds" (MFS):

	MFS = SPECfp * ( user_time / elapsed_time )

	So, the efficiency of any code is:

	E_serial = Time_of_execution / MFS

	-------------------------------------
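
	A minimal numeric sketch of the serial metric, with invented
	numbers (a hypothetical 100-SPECfp host, 120 s of user time
	over 150 s elapsed on a shared machine); Time_of_execution is
	taken to be the wall-clock time, which the formula above
	leaves implicit:

```python
specfp = 100.0        # hypothetical SPECfp rating of the host
user_time = 120.0     # CPU seconds charged to the job
elapsed_time = 150.0  # wall-clock seconds of the run

# MFS = SPECfp * ( user_time / elapsed_time )
mfs = specfp * (user_time / elapsed_time)

# E_serial = Time_of_execution / MFS
# (Time_of_execution assumed to be the wall-clock time)
e_serial = elapsed_time / mfs

print(mfs)       # 80.0
print(e_serial)  # 1.875
```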

	In a parallel or distributed environment - 

	MFS = sum_i ( SPECfp_i * ( user_time_i / elapsed_time_i ) ) 

	the index i runs over the processors used.

	Once again, the efficiency of the code is:

	E_parallel = Time_of_execution / MFS

	---------------------------------------

	So to look at the relative efficiency of a parallel code -

	E_relative = E_parallel / E_serial 

	1.0 is perfect
	0.0 is a disaster

	( 1 / E_relative )  tells you how many more megaflop
			    seconds you used than if you had
			    availed yourself of a serial implementation.

	
	i.e. E_relative = 0.5  --->  you used ( 1/0.5 = ) 2 times as many
				     MFS as you needed.
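
	The whole recipe, applied mechanically to invented numbers for
	a three-host heterogeneous network (all SPECfp ratings and
	timings below are made up, and the serial baseline is assumed
	to run flat out on the 100-SPECfp host):

```python
# (SPECfp_i, user_time_i, elapsed_time_i) per host -- invented values
hosts = [
    (30.0,  40.0, 60.0),
    (100.0, 50.0, 60.0),
    (300.0, 55.0, 60.0),
]

# MFS = sum_i ( SPECfp_i * ( user_time_i / elapsed_time_i ) )
mfs_parallel = sum(spec * (u / e) for spec, u, e in hosts)

time_parallel = 60.0                 # wall clock of the parallel run
e_parallel = time_parallel / mfs_parallel

# Hypothetical serial baseline: 300 s user time == 300 s elapsed
mfs_serial = 100.0 * (300.0 / 300.0)
e_serial = 300.0 / mfs_serial

e_relative = e_parallel / e_serial   # 1.0 perfect, 0.0 disaster
extra_mfs_factor = 1.0 / e_relative  # MFS used vs. the serial run
```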