Statically build PARSEC
1. cmake is difficult to build statically, and doing so is unnecessary since it is only a tool used to build several of the benchmarks.
So, to keep things simple, build cmake dynamically linked:
parsecmgmt -a build -p tools
2. Add -static to your CFLAGS and CXXFLAGS, then run:
parsecmgmt -a build -p apps kernels
3. Three benchmarks (bodytrack, facesim and vips) are still dynamically linked due to libtool, so link them manually:
Go to the log directory and find the log file for the last build.
Search for the strings "-o bodytrack", "-o facesim" and "-o vips" to locate the final link commands.
Then go to the corresponding build directory, statically link those benchmarks, and manually copy the binaries to the install directory (a sketch of steps 2 and 3 follows this list).
4. DONE
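A rough sketch of steps 2 and 3, assuming a PARSEC 2.x-style tree where the gcc build configuration lives in config/gcc.bldconf, build logs are written under log/, and binaries are installed under pkgs/*/*/inst/; the exact paths and log file names may differ in your version, so treat this as a template rather than exact commands:

# Step 2: append -static after the existing CFLAGS/CXXFLAGS/LDFLAGS definitions
# in $PARSECDIR/config/gcc.bldconf (or add it directly to those definitions)
export CFLAGS="${CFLAGS} -static"
export CXXFLAGS="${CXXFLAGS} -static"
export LDFLAGS="${LDFLAGS} -static"

# Rebuild with the new flags
parsecmgmt -a build -p apps kernels

# Step 3: find the final link commands for the libtool-built benchmarks
cd "$PARSECDIR/log"
grep -- "-o bodytrack" <latest-build-log>
grep -- "-o facesim"   <latest-build-log>
grep -- "-o vips"      <latest-build-log>

# Re-run each reported link command from its build directory with -static added,
# then copy the resulting binary over the dynamically linked one under the
# benchmark's inst/ directory.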
Thursday, May 31, 2012
Sunday, May 27, 2012
SPEC OMP 2001 318.galgel fails to compile with gfortran 4.4
1. Add -ffixed-form.
2. In /data/Benchmarks/SPEC/OMP2001_v3.2_x86/benchspec/OMPM2001/318.galgel_m/src/bifg21.f90, change
Poj2(NKY*(L-1)+M,1:K) = - MATMUL( LPOP(1:K,1:N), VI(K+1:K+N) )
to
Poj2(NKY*(L-1)+M,1:K) = - MATMUL( LPOP(1:K,1:N), VI(K+1:K+N))
i.e., remove the space before the closing parenthesis. (Presumably the statement runs past column 72 once -ffixed-form is in effect, so the trailing parenthesis gets truncated; removing the space pulls the line back within the fixed-form limit. See the check below.)
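A quick sanity check for the column-72 guess (my assumption, not something stated in the SPEC docs): list every line in the galgel sources that extends past column 72, since gfortran's fixed-form mode ignores anything beyond that column unless -ffixed-line-length-none (or -ffixed-line-length-132) is given:

awk 'length($0) > 72 { print FILENAME ":" FNR ": " length($0) " columns" }' \
    /data/Benchmarks/SPEC/OMP2001_v3.2_x86/benchspec/OMPM2001/318.galgel_m/src/*.f90

Any line reported here is a candidate for the same kind of shortening as the Poj2 statement above.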
Monday, May 14, 2012
20120513 - 503
Error: 1x318.galgel_m 1x324.apsi_m 1x326.gafort_m
Success: 1x310.wupwise_m 1x312.swim_m 1x314.mgrid_m 1x316.applu_m 1x320.equake_m 1x328.fma3d_m 1x330.art_m 1x332.ammp_m
ARM SPEC regression test, 503 vs. 467
ARM
-14.08 445.gobmk
-6.85 456.hmmer
-10.92 458.sjeng
-3.10 471.omnetpp
-3.02 473.astar
-4.51 483.xalancbmk
4.63 403.gcc
-5.12 458.sjeng
-3.89 462.libquantum
-5.87 464.h264ref
-5.18 471.omnetpp
Saturday, May 12, 2012
Friday, May 11, 2012
314.mgrid fails to compile (resolved!)
Compiling 314.mgrid fails with:
==========================================================================
/usr/bin/gfortran-4.4.2 -fopenmp -O3 -m32 -march=prescott -mmmx -msse -msse2 -msse3 -msse4 -mfpmath=sse -fforce-addr -fivopts -fsee -ftree-vectorize -pipe mgrid.f -o mgrid
Error from make 'specmake build 2> make.err | tee make.out':
mgrid.f: In function 'resid':
mgrid.f:365: error: lastprivate variable 'i2' is private in outer context
mgrid.f:365: error: lastprivate variable 'i1' is private in outer context
mgrid.f: In function 'psinv':
mgrid.f:408: error: lastprivate variable 'i2' is private in outer context
mgrid.f:408: error: lastprivate variable 'i1' is private in outer context
==========================================================================
Related Post:
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=33904
And in the following post on the OpenMP mailing list, this is confirmed to be a bug in mgrid.f:
http://openmp.org/pipermail/omp/2007/001101.html
Bug description:
> Hi!
>
> Is
>
>       SUBROUTINE foo(a, b, n)
>       DOUBLE PRECISION a, b
>       INTEGER*8 i1, i2, i3, n
>       DIMENSION a(n,n,n), b(n,n,n)
> !$OMP PARALLEL
> !$OMP+DEFAULT(SHARED)
> !$OMP+PRIVATE(I3)
> !$OMP DO
> !$OMP+LASTPRIVATE(I1,I2)
>       DO i3 = 2, n-1, 1
>        DO i2 = 2, n-1, 1
>         DO i1 = 2, n-1, 1
>          a(i1, i2, i3) = b(i1, i2, i3);
>  600    CONTINUE
>         ENDDO
>        ENDDO
>       ENDDO
> !$OMP END DO NOWAIT
> !$OMP END PARALLEL
>       RETURN
>       END
>
> valid? My reading of the standard is it is not, because both I1
> and I2 are sequential loop iterator vars in a parallel construct
> and as such should be predetermined private rather than implicitly
> determined shared (OpenMP 2.5, 2.8.1.1). It is not present
> in any of the clauses on the parallel construct which could possibly
> override it. 2.8.3.5 about the lastprivate clause in the first restriction
> says that the vars can't be private in the parallel region.
> Several other compilers accept this code though.
>
> In OpenMP 3.0 draft the wording is even clearer, because it talks there
> about the loop iterators being predetermined private in a task region,
> and !$omp do doesn't create a new task region.
>
> Or am I wrong with this?
>
> Thanks.
>
>   Jakub
He is right!!!
Solution:
Replacing !$OMP+DEFAULT(SHARED) with !$OMP+SHARED(I1,I2) makes the code compile successfully with gfortran (sketched below). Alternatively, keeping DEFAULT(SHARED) and fusing the OMP PARALLEL directive with the OMP DO directive (i.e. using OMP PARALLEL DO) also solves the problem.
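For concreteness, here is the quoted test case with the first workaround applied (DEFAULT(SHARED) replaced by SHARED(I1,I2); the stray label and semicolon are dropped). This only illustrates the shape of the edit; the actual directives in mgrid.f may differ slightly:

      SUBROUTINE foo(a, b, n)
      DOUBLE PRECISION a, b
      INTEGER*8 i1, i2, i3, n
      DIMENSION a(n,n,n), b(n,n,n)
!$OMP PARALLEL
!$OMP+SHARED(I1,I2)
!$OMP+PRIVATE(I3)
!$OMP DO
!$OMP+LASTPRIVATE(I1,I2)
      DO i3 = 2, n-1, 1
        DO i2 = 2, n-1, 1
          DO i1 = 2, n-1, 1
            a(i1, i2, i3) = b(i1, i2, i3)
          ENDDO
        ENDDO
      ENDDO
!$OMP END DO NOWAIT
!$OMP END PARALLEL
      RETURN
      END

The second workaround would instead collapse the two directives into a single !$OMP PARALLEL DO (with a matching !$OMP END PARALLEL DO) while keeping DEFAULT(SHARED).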
Tuesday, May 8, 2012
SPEC OMP 2001
Name Remarks
310.wupwise_m Quantum chromodynamics
312.swim_m Shallow water modeling
314.mgrid_m Multi-grid solver in 3D potential field
316.applu_m Parabolic/elliptic partial differential equations
318.galgel_m Fluid dynamics: analysis of oscillatory instability
320.equake_m Finite element simulation; earthquake modeling
324.apsi_m Solves problems regarding temperature, wind, velocity and distribution of pollutants
326.gafort_m Genetic algorithm
328.fma3d_m Finite element crash simulation
330.art_m Neural network simulation; adaptive resonance theory
332.ammp_m Computational Chemistry
http://www.spec.org/omp2001/docs/runspec.html