Community GSI Users Page
Frequently Asked Questions for GSI
Most build or run problems must be diagnosed using the log files. For build errors, pipe the standard out and standard error into a log file with a command such as (for csh):
./compile |& tee build.log
Search the log file for any instance of the word "Error." Its presence indicates a build error. Be certain to use the exact spelling with a capital "E." If the build fails, but the word "Error" is not present in the log file, it typically indicates that the build failed during the link phase. Information on the failed linking phase will be present at the very end of the log file. Try searching there.
For run errors, it is useful to examine the "stdout" file located in the run directory.
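As a concrete illustration of the log search described above, the following sketch creates a stand-in build.log (the file name and contents are purely illustrative) and searches it the same way:

```shell
# Create a stand-in log file purely for illustration.
printf 'compiling foo.f90\nError: undefined reference\n' > build.log

# Case-sensitive search: matches "Error" but not "error".
grep -n "Error" build.log

# Link-phase failures appear at the very end of the log.
tail -n 5 build.log
```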
Summary of Common Problems
General Issues
Build Issues
- Building GSI with the PGI compiler
- Building GSI with the Intel compiler
- Building GSI with the IBM AIX XLF compiler
- Building GSI with MPI
- Building GSI for HWRF
- Building GSI on currently unsupported platforms
Run Time Issues
- Issues running with MPI
- Run time issues related to memory
- Run time problem reading prepbufr files from NCEP
Where to find help outside of the users guide
Q: How do I get help if my questions are not answered in the User's Guide?
A: First, refer to the documentation on this website, specifically this FAQ. If that doesn't answer your question, then email: gsi_help@ucar.edu.
Referencing GSI in publications
Q: How do I reference the GSI User's Guide in publications?
A: Please refer to Citation.
Building GSI with the PGI compiler
Problems building with a PGI compiler before version 11
Q: I have an older version of the PGI compiler and I'm experiencing build errors.
A: Because of compiler-related issues, it is strongly recommended that you use the latest version of the PGI compiler available on your system. Otherwise, compiler errors may prevent the code from building. Please check Compiler Support for the compilers the code has been tested with.
Building GSI with the Intel compiler
Q: I'm experiencing build errors with the Intel compiler.
A: Two types of errors tend to occur with the Intel compiler.
- Errors having to do with the MKL library
- Errors having to do with linking to MPI
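For the MPI-linking case, one quick diagnostic (a sketch; it assumes an MPICH-style wrapper such as Intel MPI's mpiifort, which accepts the -show flag) is to print the underlying compile/link command without compiling anything:

```shell
# Print the underlying compiler and link flags the wrapper would use.
# -show is supported by MPICH-style wrappers, including Intel MPI's mpiifort.
if command -v mpiifort >/dev/null 2>&1; then
  mpiifort -show
else
  echo "mpiifort not found; try 'mpif90 -show' or load your MPI module"
fi
```

The library paths printed here are often the quickest way to spot a mismatched MKL or MPI installation.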
Building GSI with the IBM AIX XLF compiler
Q: I'd like to build GSI on an IBM AIX machine using the XLF Fortran compiler.
A: Unfortunately, the DTC team no longer has access to an IBM AIX machine to test GSI and update the build system. As a result, the DTC provides only legacy support for IBM AIX platforms. Users have informed us of issues they experienced while building on the IBM AIX platform, and their solutions are provided here.
1. The following build system files need to be modified:
- src/main/makefile_DTC
- src/libs/gfsio/Makefile
- src/libs/bufr/Makefile
- src/libs/sp/Makefile
Change

.F90.o:
	$(CPP) $(CPP_FLAGS) $(CPP_F90FLAGS) $*.F90 > $*.fpp
	$(F90) $(FFLAGS) -c $*.fpp
	$(RM) $*.fpp

to

.F90.o:
	$(CPP) $(CPP_FLAGS) $(CPP_F90FLAGS) $*.F90 > $*.f90
	$(F90) $(FFLAGS) -c $*.f90
	$(RM) $*.f90

and change

.F.o:
	$(CPP) $(CPP_FLAGS) $(CPP_F90FLAGS) $*.F > $*.fpp
	$(SFC) -c $(FFLAGS_BUFR) $*.fpp
	$(RM) $*.fpp

to

.F.o:
	$(CPP) $(CPP_FLAGS) $(CPP_F90FLAGS) $*.F > $*.f
	$(SFC) -c $(FFLAGS_BUFR) $*.f
	$(RM) $*.f
2. Modify the source code file src/libs/gsdcloud/hydro_mxr_thompson.f90
Change

tc0 = MIN(-0.1, tc)  ! the types of these two variables are single and double precision, respectively

to

tc0 = MIN(-0.1_r_kind, tc)  ! keep these two variables the same type

and change

qnr_3d(i,j,k) = max(1.0_r_kind, qnr_3d(i,j,k))  ! the types of these two variables are double and single precision, respectively

to

qnr_3d(i,j,k) = max(1.0_r_single, qnr_3d(i,j,k))  ! keep these two variables the same type
3. Modify file src/libs/w3/Makefile, by deleting line 16:
$(CP) *.mod $(INCMOD)
as all three module files (args_mod.mod, GBLEVN_MODULE.mod, mersenne_twister.mod) already exist in the include/ directory.
4. When using LAPACK v5.2 rather than the ESSL mathematics libraries, some function names have changed, as described at http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?infotype=an&subtype=ca&appname=gpateam&supplier=897&letternum=ENUS213-026
The way to resolve the compile errors "ld: 0711-317 ERROR: Undefined symbol: .dgeev" and "ld: 0711-317 ERROR: Undefined symbol: .dspev" is:
- Change the function name DGEEV to DGEEVX in the files src/main/bicglanczos.F90 and src/main/lanczos.F90
- Change the function name dspev to dspevx in the file src/main/lanczos.F90
- Modify the files src/main/makefile_DTC, src/main/Makefile.dependency, and src/main/Makefile by deleting the lines related to m_dgeevx.F90.
Issues Building with MPI
Q: I'm experiencing build issues related to MPI.
A: The community build system employed by GSI assumes that your computing system comes with a fairly vanilla installation of MPI. This means it uses the traditional conventions for naming MPI wrapper scripts. On some newer platforms with vendor-supplied versions of MPI, the MPI Fortran wrappers can function differently and/or have different names. For instance, any of these commands might be found on a Linux system for invoking the MPI wrapper script to build Fortran code:
- mpif90 -f90=pgf90
- mpif90
- mpiifort
- mpfort
The MPI wrapper commands used by the build are set through the variables DM_FC, DM_F90, and DM_CC in your configure.gsi file.
If you experience any difficulty building the MPI components of the code, check these issues in the order listed.
- Does the build complain that it doesn't recognize the "-f90=pgf90" argument to "mpif90"? If so remove it from the "DM_FC" and "DM_F90" variables in the configure.gsi file, and try recompiling.
- Does mpif90 exist on your system? Check this by typing which mpif90. If the command responds with Command not found, the standard Fortran wrapper for MPI is not being found.
- Is MPI even in your path? Type "env" or "echo $PATH" and look for a path containing the letters "mpi". If one exists, check the contents of its "bin/" directory for one of the alternatives to "mpif90".
- If all else fails, contact your system administrator for help.
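The checks above can be sketched as a short shell snippet (the wrapper names listed are illustrative; your MPI distribution may use others):

```shell
# Look for a usable MPI Fortran wrapper among common names.
found=""
for w in mpif90 mpiifort mpfort; do
  command -v "$w" >/dev/null 2>&1 && found="$found $w"
done
if [ -n "$found" ]; then
  echo "MPI Fortran wrapper(s) found:$found"
else
  echo "no standard MPI wrapper found; PATH entries mentioning mpi:"
  echo "$PATH" | tr ':' '\n' | grep -i mpi || echo "(none)"
fi
```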
Building GSI for HWRF
Q: I am unable to compile GSI for use with the HWRF system.
A: The HWRF version of GSI differs slightly from the standard community version, so prior to building GSI for HWRF you must set the environment variable HWRF to 1.
- For csh: "setenv HWRF 1"
- For ksh/bash: "export HWRF=1"
Building GSI on currently unsupported platforms
Building GSI on Mac OSX using the PGI compiler
Q: Why doesn't the community GSI support the Mac OSX platform with the PGI compiler?
A: The community GSI development team only supports platforms that it has continuous access to for porting and testing. On occasion, a user provides the build information for a platform that the community GSI does not support; when this happens, we share that information with the user community. Based on a user contribution, GSI can be compiled on a Mac OSX platform using the PGI v11 compiler with these configure file settings.
# Darwin (MACOS), PGI compilers (pgf90 & pgcc) (dmpar,optimize)
#
COREDIR = $(GSI)
INC_DIR = $(COREDIR)/include
BYTE_ORDER = LITTLE_ENDIAN
SFC = pgf90 -mp -tp=core2
SF90 = pgf90 -Mfree -mp -tp=core2
SCC = pgcc -tp=core2
INC_FLAGS = -I $(INC_DIR) -module $(INC_DIR) -I $(NETCDF)/include
FFLAGS_i4r4 = -i4 -r4
FFLAGS_i4r8 = -i4 -r8
FFLAGS_DEFAULT = -C
FFLAGS = $(FFLAGS_DEFAULT) $(INC_FLAGS) -DLINUX -DMACOS -DPGI
#
CPP = cpp
CPP_FLAGS = -C -P -D$(BYTE_ORDER) -D_REAL8_ -DWRF -DLINUX -DPGI
CPP_F90FLAGS =
DM_FC = mpif90 -tp=core2
DM_F90 = mpif90 -Mfree -tp=core2
DM_CC = mpicc -tp=core2
FC = $(DM_FC)
F90 = $(DM_F90)
CC = $(DM_CC)
CFLAGS = -O0 -DLINUX -DMACOS -DUNDERSCORE
CFLAGS2 = -DLINUX -DMACOS -Dfunder -DFortranByte=char -DFortranInt=int -DFortranLlong='long long'
MYLIBsys = -L$(PGI)/lib -llapack -lblas
Issues Running with MPI
Q: The run script fails with an mpi related error.
A: The community GSI run script assumes that your computing system comes with a fairly vanilla installation of MPI. This means it uses the traditional conventions for naming MPI wrapper scripts. On some newer platforms with vendor-supplied versions of MPI, the MPI run wrappers can function differently and/or have different names. For instance, either or both of these run commands might be found on a particular Linux system for running parallel code:
- mpirun
- mpiexec
The community run script assumes the first of these, along with some minor modifications for batch systems based on the value of the "ARCH" variable in the run script. If your system does not use "mpirun," you will need to modify the run script for your particular computing environment. This part of the script starts at line 87 and runs through line 132.
87  case $ARCH in
88     'IBM_LSF')
89        ###### IBM LSF (Load Sharing Facility)
90        BYTE_ORDER=Big_Endian
91        RUN_COMMAND="mpirun.lsf " ;;
92
93     'IBM_LoadLevel')
94        ###### IBM LoadLevel
95        BYTE_ORDER=Big_Endian
96        RUN_COMMAND="poe " ;;
97
98     'LINUX')
99        BYTE_ORDER=Little_Endian
100       if [ $GSIPROC = 1 ]; then
101          #### Linux workstation - single processor
102          RUN_COMMAND=""
103       else
104          ###### Linux workstation - mpi run
105          RUN_COMMAND="mpirun -np ${GSIPROC} -machinefile ~/mach "
106       fi ;;

Note that on the IBM AIX platform (line 91), the run script calls "mpirun.lsf". On some Linux workstations (line 105), a machine file is required. On most large clusters, attempting to specify a machine file will typically result in an error. It is up to the user to make the necessary modifications for their particular computing system.
Once again, if all else fails, contact your system administrator for help.
Run time issues related to memory
Out of Memory Error
Q: The run crashes and the stdout file in the run directory complains about not being able to allocate memory.
A: Increase the stack size by adding one of the following commands:
In bash/ksh: ulimit -s 524288
In tcsh/csh: limit stacksize 524288
to your run script. If that fails, try increasing the number of processors used to run your analysis.
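A defensive variant of the bash/ksh form above (a sketch, not part of the released run script) clamps the request to the hard limit so the command cannot fail outright:

```shell
# Request 524288 KB (512 MB) of stack, but never more than the hard limit.
want=524288
hard=$(ulimit -Hs)
if [ "$hard" != "unlimited" ] && [ "$want" -gt "$hard" ]; then
  want=$hard
fi
ulimit -s "$want"
echo "stack size now (KB): $(ulimit -s)"
```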
Run time problem reading prepbufr files obtained from NCEP
Q: When running GSI on Linux platforms, there is a problem reading prepbufr files obtained from the NCEP ftp server and/or the file gdas1.t12z.prepbufr.nr from the tutorial exercise.
A: This may be caused by what is known as the Endian problem: different computer hardware platforms may use different byte orders to represent information. For details, see the Wikipedia article on Endianness. Typically this is only an issue on current systems when sharing binary I/O between an IBM ("Big-Endian") and a Linux ("Little-Endian") system.
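If you are unsure which byte order your machine uses, a quick check (a sketch using the standard od utility; not part of GSI) is to read the two bytes 00 01 as one 16-bit integer:

```shell
# The bytes 00 01 form the value 256 on a Little-Endian machine,
# and the value 1 on a Big-Endian one.
val=$(printf '\000\001' | od -An -td2 | tr -d ' ')
if [ "$val" = "256" ]; then
  echo "Little-Endian (typical of Linux and Intel Mac)"
else
  echo "Big-Endian (typical of IBM AIX)"
fi
```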
The prepbufr format is such a binary I/O format. The prepbufr files from the NCEP ftp server, and the file gdas1.t12z.prepbufr.nr from the tutorial exercises, are Big-Endian files. A conversion C code, ssrc.c, is located in the ./util directory of the GSI distribution. This byte-swapping code takes a prepbufr file generated on an IBM platform (Big-Endian) and converts it to a prepbufr file that can be read on a Linux or Intel Mac platform (Little-Endian).
Compile ssrc.c with any C compiler. To convert an IBM prepbufr file, take the resulting executable (e.g., ssrc.exe) and run it as follows:
ssrc.exe < name of Big Endian prepbufr file > name of Little Endian prepbufr file
Starting with release version 3.2 of BUFRLIB, the library can automatically identify the byte order and perform the conversion, so BUFR/prepbufr files in any byte order can be used by GSI directly.