January 2005
PSU/NCAR Mesoscale Modeling System Tutorial Class Notes and User’s Guide: MM5 Modeling System Version 3
Mesoscale and Microscale Meteorology Division National Center for Atmospheric Research
NCAR MM5 Tutorial Class Staff:
Jimy Dudhia
Dave Gill
Kevin Manning
Wei Wang
Cindy Bruyere
Sudie Kelly, Administrative Assistant
Katy Lackey, Administrative Assistant
MM5 Tutorial
CONTENTS

1 INTRODUCTION
1.1 Introduction to MM5 Modeling System 3
1.2 The MM5 Model Horizontal and Vertical Grid 6
1.3 Nesting 9
1.4 Lateral Boundary Conditions 10
1.5 Nonhydrostatic Dynamics Versus Hydrostatic Dynamics 11
1.6 Reference State in the Nonhydrostatic Model 11
1.7 Four-Dimensional Data Assimilation 12
1.8 Land-Use Categories 12
1.9 Map Projections and Map-Scale Factors 13
1.10 Data Required to Run the Modeling System 13

2 Getting Started
2.1 Purpose 3
2.2 Program portability 3
2.3 Prerequisite 4
2.4 Where to obtain program tar files? 5
2.5 What is Contained in a Program tar File? 7
2.6 Steps to run MM5 modeling system programs 7
2.7 Functions of Job Decks or Scripts 8
2.8 What to Modify in a Job Deck/Script? 8
2.9 How to Build the Executable and Run the Program? 10
2.10 Input Files 11
2.11 Output Files 11
2.12 Representation of Date in MM5 Modeling System Programs 11
2.13 Where to Find Data at NCAR? 12
2.14 Other Data Sources 14

3 MAKE UTILITY
3.1 The UNIX make Utility 3
3.2 make Functionality 3
3.3 The Makefile 4
3.4 Sample make Syntax 5
3.5 Macros 5
3.6 Internal Macros 5
3.7 Default Suffixes and Rules 6
3.8 Sample Program Dependency Chart 7
3.9 Sample Program Components for make Example 8
3.10 makefile Examples for the Sample Program 9
3.11 Make Command Used in MM5 Preprocessing Programs 10
3.12 An Example of Top-level Makefile 10
3.13 An Example of Low-level Makefile 13

4 TERRAIN
4.1 Purpose 3
4.2 Input Data 4
4.3 Defining Mesoscale Domains 16
4.4 Interpolation 19
4.5 Adjustment 22
4.6 Fudging function 23
4.7 Script Variables 24
4.8 Parameter statement 24
4.9 Namelist Options 24
4.10 How to run TERRAIN 26
4.11 TERRAIN Didn’t Work: What Went Wrong? 28
4.12 TERRAIN Files and Unit Numbers 29
4.13 TERRAIN tar File 30
4.14 terrain.deck 31

5 REGRID
5.1 Purpose 3
5.2 Structure 3
5.3 A schematic 4
5.4 Input to pregrid 4
5.5 Input to regridder 5
5.6 Output from regridder 5
5.7 Intermediate Data Format 5
5.8 Pregrid VTables 8
5.9 Pregrid program functioning 9
5.10 Handy pregrid utility programs 9
5.11 How to run REGRID 10
5.12 pregrid.csh 11
5.13 The regridder Namelist options 13
5.14 REGRID tar File 15
5.15 Data 15

6 Objective Analysis (little_r)
6.1 Purpose of Objective Analysis 3
6.2 RAWINS or LITTLE_R? 4
6.3 Source of Observations 4
6.4 Objective Analysis techniques in LITTLE_R and RAWINS 4
6.5 Quality Control for Observations 6
6.6 Additional Observations 7
6.7 Surface FDDA option 7
6.8 Objective Analysis on Model Nests 8
6.9 How to Run LITTLE_R 8
6.10 Output Files 10
6.11 Plot Utilities 11
6.12 LITTLE_R Observations Format 12
6.13 LITTLE_R Namelist 15
6.14 Fetch.deck 22

7 INTERPF
7.1 Purpose 3
7.2 INTERPF Procedure 3
7.3 Surface Pressure Computation 5
7.4 Hydrostatic Vertical Interpolation 6
7.5 Integrated Mean Divergence Removal 6
7.6 Base State Computation 8
7.7 Initialization of Nonhydrostatic Model 9
7.8 Substrate Temperature and the LOWBDY_DOMAINn file 9
7.9 Shell Variables (for IBM job deck only) 10
7.10 Parameter Statements 10
7.11 FORTRAN Namelist Input File 11
7.12 How to Run INTERPF 12
7.13 INTERPF didn’t Work! What Went Wrong? 13
7.14 File I/O 14
7.15 INTERPF tar File 15

8 MM5
8.1 Purpose 3
8.2 Basic Equations of MM5 3
8.3 Physics Options in MM5 7
8.4 Interactions of Parameterizations 17
8.5 Boundary conditions 17
8.6 Nesting 18
8.7 Four-Dimensional Data Assimilation (FDDA) 20
8.8 How to run MM5 22
8.9 Input to MM5 24
8.10 Output from MM5 24
8.11 MM5 Files and Unit Numbers 27
8.12 Configure.user Variables 28
8.13 Script Variables for IBM Batch Deck 30
8.14 Namelist Variables 30
8.15 Some Common Errors Associated with MM5 Failure 36
8.16 MM5 tar File 37
8.17 Configure.user 39
8.18 mm5.deck 54

9 MAKE AND MM5
9.1 make and MM5 3
9.2 Configure.user File 4
9.3 Makefiles 5
9.4 CPP 11

10 NESTDOWN
10.1 Purpose 3
10.2 NESTDOWN Procedure 3
10.3 Base State Computation 5
10.4 Shell Variables (for IBM job deck only) 5
10.5 Parameter Statements 5
10.6 FORTRAN Namelist Input File 5
10.7 Horizontal Interpolation 7
10.8 Vertical Corrections after Horizontal Interpolation 8
10.9 How to Run NESTDOWN 10
10.10 NESTDOWN didn’t Work! What Went Wrong? 10
10.11 File I/O 11
10.12 NESTDOWN tar File 12

11 INTERPB
11.1 Purpose 3
11.2 INTERPB Procedure 3
11.3 Sea Level Pressure Computation 4
11.4 Vertical Interpolation/Extrapolation 6
11.5 Parameter Statements 8
11.6 FORTRAN Namelist Input File 8
11.7 How to Run INTERPB 11
11.8 INTERPB didn’t Work! What Went Wrong? 12
11.9 File I/O 12
11.10 INTERPB tar File 13

12 GRAPH
12.1 Purpose 3
12.2 Typical GRAPH Jobs 4
12.3 Plotting Table File: g_plots.tbl 5
12.4 Default Option Settings File: g_defaults.nml 7
12.5 Map Options File: g_map.tbl 8
12.6 Plot Color Options File: g_color.tbl 10
12.7 How to Run GRAPH 11
12.8 Available 2-D Horizontal Fields 14
12.9 Available Cross-Section Only Fields 16
12.10 Available 3-D Fields (as 2-D Horizontal or Cross-Section) 17
12.11 Some Hints for Running GRAPH 19
12.12 Sample Graph Plot File 20
12.13 Graph tar file 21
12.14 Script file to run Graph job 21
12.15 An Alternative Plotting Package: RIP 24

13 I/O FORMAT
13.1 Introduction 3
13.2 Version 3 File Format 3
13.3 Explanation of Output Field List 7
13.4 Big Header Record for TERRAIN Output 7
13.5 Big Header Record for REGRID output 8
13.6 Big Header Record for little_r/RAWINS Output 9
13.7 Big Header Record for little_r Surface FDDA Output 10
13.8 Big Header Record for INTERPF Output 11
13.9 Big Record Header for LOWBDY Output 12
13.10 Big Record Header for BDYOUT Output 13
13.11 Big Record Header for MM5 Output 14
13.12 Big Record Header for Interpolated, Pressure-level MM5 Output 18
13.13 Special Data Format in MM5 Modeling System 20

14 Utility Programs
14.1 Purpose 3
14.2 Utility Programs 3

15 EXERCISE
15.1 Test Case 3
15.2 Obtaining Program Tar Files 3
15.3 Getting Started 3
15.4 Experiment Design 4
15.5 Terrain and Land-Use Data 4
15.6 Objective Analysis 4
15.7 Interpolation 5
15.8 Model Simulation 5
15.9 Viewing Model Output 6

Appendix A Derivation of Basic MM5 Equations
Appendix B MM5 Model Code
Appendix C How to Use the Noah Land-Surface Model Option
Appendix D MPP MM5 - The Distributed-memory (DM) Extension
Appendix E 3DVAR
Appendix F RAWINS
Appendix G Alternative Plotting Package - RIP
Appendix H Running MM5 Jobs on IBM

References
PREFACE
The MM5 tutorial class is sponsored by the Mesoscale and Microscale Meteorology Division (MMM) at the National Center for Atmospheric Research. The class of January 2005 is the final official MM5 tutorial offered by the Mesoscale Prediction Group of MMM. The first tutorial class was offered in 1993, and a total of nearly 800 participants were trained at NCAR during the past 12 years. The tutorial notes are available on the MM5 Web page (http://www.mmm.ucar.edu/mm5/documents/tutorial-v3-notes.html and http://www.mmm.ucar.edu/mm5/documents/MM5_tut_Web_notes/TutTOC.html). An online tutorial, which takes a new user step by step through setting up and running the MM5 modeling system programs, is available at http://www.mmm.ucar.edu/mm5/mm5v3/tutorial/teachyourself.html. General information regarding the MM5 modeling system, model applications, documentation and user support can also be found on the MM5 Web page (http://www.mmm.ucar.edu/mm5/mm5-home.html).

This version of the notes is edited for MM5 modeling system Version 3, release 3-7. The major changes in release 3-7 are improvements to the MM5 code. The pre- and post-processors did not change much between releases 3-6 and 3-7. Most of the chapters in these notes have therefore not changed much, except Chapter 8: MM5, which reflects all the new code development in release 3-7.

The MM5 3DVAR code was officially released in June 2003. An introduction to this code has been added, and is available in Appendix E. More information on the 3DVAR system can be obtained from http://www.mmm.ucar.edu/3dvar.
1: INTRODUCTION

Introduction to MM5 Modeling System 1-3
The MM5 Model Horizontal and Vertical Grid 1-6
Nesting 1-9
Lateral Boundary Conditions 1-10
Nonhydrostatic Dynamics Versus Hydrostatic Dynamics 1-11
Reference State in the Nonhydrostatic Model 1-11
Four-Dimensional Data Assimilation 1-12
Land-Use Categories 1-12
Map Projections and Map-Scale Factors 1-13
Data Required to Run the Modeling System 1-13
1.1 Introduction to MM5 Modeling System

The Fifth-Generation NCAR/Penn State Mesoscale Model is the latest in a series that developed from a mesoscale model used by Anthes at Penn State in the early 1970s and later documented by Anthes and Warner (1978). Since that time it has undergone many changes designed to broaden its applications. These include (i) a multiple-nest capability, (ii) nonhydrostatic dynamics, (iii) a four-dimensional data assimilation (Newtonian nudging) capability, (iv) an increased number of physics options, and (v) portability to a wider range of computer platforms, including OpenMP and MPI systems.

The purpose of this introduction is to acquaint the user with some concepts used in the MM5 modeling system. Flow charts of the complete modeling system are depicted in the schematic diagrams in Fig. 1.1. They show the order of the programs and the flow of the data, and briefly describe the programs’ primary functions. Fig. 1.1a shows the flow chart when objective analysis (LITTLE_R/RAWINS) is used, while Fig. 1.1b depicts the flow when 3-dimensional variational analysis (3DVAR) is used.

Terrestrial and isobaric meteorological data are horizontally interpolated (programs TERRAIN and REGRID) from a latitude-longitude grid to a mesoscale, rectangular domain on either a Mercator, Lambert Conformal, or Polar Stereographic projection. Since the interpolation of the meteorological data does not necessarily provide much mesoscale detail, the interpolated data may be enhanced (program LITTLE_R/RAWINS) with observations from the standard network of surface and rawinsonde stations using a successive-scan Cressman or multiquadric technique. Program INTERPF then performs the vertical interpolation from pressure levels to the σ coordinate of the MM5 model. Alternatively, program 3DVAR may be used to ingest data on model σ levels.

After an MM5 model integration, program INTERPB can be used to interpolate data from σ levels back to pressure levels, while program NESTDOWN can be used to interpolate model-level data to a finer grid to prepare for a new model integration. Graphics programs (RIP and GRAPH) may be used to view modeling system output data on both pressure and σ levels.
[Figure: flow chart. Terrestrial data sets (old, USGS and SiB land use; old and USGS terrain; other LSM data) feed TERRAIN; global/regional analyses (ETA, NNRP, NCEP AVN, ERA, ECMWF TOGA, ...) feed REGRID; surface and rawinsonde observations feed little_r/RAWINS; INTERPF, INTERPB and NESTDOWN link the analyses to MM5; GRAPH/RIP provide additional plotting capability.]

Fig 1.1a The MM5 modeling system flow chart.
[Figure: flow chart as in Fig 1.1a, but conventional and satellite observations and background error statistics feed 3DVAR, which together with INTERPF prepares the input to MM5.]

Fig 1.1b The MM5 modeling system flow chart, when using 3DVAR.
1.2 The MM5 Model Horizontal and Vertical Grid

It is useful to first introduce the model’s grid configuration. The modeling system usually gets and analyzes its data on pressure surfaces, but these have to be interpolated to the model’s vertical coordinate before being input to the model. The vertical coordinate is terrain following (see Fig. 1.2), meaning that the lower grid levels follow the terrain while the upper surface is flat. Intermediate levels progressively flatten as the pressure decreases toward the chosen top pressure. A dimensionless quantity σ is used to define the model levels,
σ = ( p0 − pt ) ⁄ ( ps0 − pt )    (1.1)
where p0 is the reference-state pressure, pt is a specified constant top pressure, and ps0 is the reference-state surface pressure. Section 1.6 provides more discussion of the definition of the reference state. It can be seen from the equation and Fig. 1.2 that σ is zero at the model top and one at the model surface, and each model level is defined by a value of σ. The model vertical resolution is defined by a list of values between zero and one that do not necessarily have to be evenly spaced. Commonly the resolution in the boundary layer is much finer than above, and the number of levels may vary from ten to forty, although there is no limit in principle.

The horizontal grid has an Arakawa-Lamb B-staggering of the velocity variables with respect to the scalars. This is shown in Fig. 1.3, where it can be seen that the scalars (T, q, etc.) are defined at the center of the grid square, while the eastward (u) and northward (v) velocity components are collocated at the corners. The center points of the grid squares will be referred to as cross points, and the corner points are dot points. Hence horizontal velocity is defined at dot points, for example, and when data is input to the model the preprocessors do the necessary interpolations to assure consistency with the grid.

All the above variables are defined in the middle of each model vertical layer, referred to as half-levels and represented by the dashed lines in Fig. 1.2. Vertical velocity is carried at the full levels (solid lines). In defining the σ levels it is the full levels that are listed, including levels at 0 and 1. The number of model layers is therefore always one less than the number of full σ levels. Note also the I, J, and K index directions in the modeling system.
The finite differencing in the model is, of course, crucially dependent upon the grid staggering wherever gradients or averaging are required to represent terms in the equations, and more details of this can be found in the model description document (Grell et al., 1994).
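The full-level/half-level bookkeeping above can be sketched in a few lines. This is an illustration, not MM5 code: the σ values and pressures below are made-up examples, and Eq. (1.1) is simply rearranged as p0 = pt + σ (ps0 − pt).

```python
# Sketch (not MM5 source): reference pressure on full and half sigma levels.
# The sigma list, ps0 and pt below are illustrative values, not MM5 defaults.

def full_to_half_levels(full_sigma):
    """Half levels (where T, q, u, v live) lie midway between full levels."""
    return [0.5 * (a + b) for a, b in zip(full_sigma[:-1], full_sigma[1:])]

def pressure_at(sigma, ps0, pt):
    """Reference-state pressure implied by Eq. (1.1): p0 = pt + sigma*(ps0 - pt)."""
    return pt + sigma * (ps0 - pt)

full_sigma = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.78,
              0.84, 0.89, 0.93, 0.96, 0.98, 0.99, 1.0]   # 16 full levels
half_sigma = full_to_half_levels(full_sigma)             # 15 model layers

ps0 = 100000.0   # reference surface pressure, Pa (illustrative)
pt = 10000.0     # model top pressure, Pa (illustrative)

print(len(full_sigma), len(half_sigma))   # 16 15: one more full level than layers
print(pressure_at(1.0, ps0, pt))          # 100000.0: sigma = 1 at the surface
print(pressure_at(0.0, ps0, pt))          # 10000.0: sigma = 0 at the top
```

Note how the uneven spacing near σ = 1 concentrates layers in the boundary layer, as the text describes.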
[Figure: K increases downward from 1 at the model top (σ = 0.0, p = pt) through successive full-sigma levels (0.1, 0.2, ..., 0.96) to the surface (σ = 1.00, p = ps); u, v, T, q and p′ are carried on the half levels, w and σ̇ on the full levels.]

Figure 1.2 Schematic representation of the vertical structure of the model. The example is for 15 vertical layers. Dashed lines denote half-sigma levels, solid lines denote full-sigma levels.
[Figure: grid indexed from (1,1) to (IMAX, JMAX), with the I index increasing in one horizontal direction and J in the other; T, q, p′ and w are carried at cross points, u and v at dot points.]

Figure 1.3 Schematic representation showing the horizontal Arakawa B-grid staggering of the dot (•) and cross (x) grid points. The smaller inner box is a representative mesh staggering for a 3:1 coarse-grid distance to fine-grid distance ratio.
[Figure: four nested domains, numbered 1 through 4.]

Fig 1.4 Example of a nesting configuration. The shading shows three different levels of nesting.
1.3 Nesting

MM5 contains a multiple-nesting capability, with up to nine domains running at the same time and completely interacting. A possible configuration is shown in Fig. 1.4. The nesting ratio is always 3:1 for two-way interaction. “Two-way interaction” means that the nest’s input from the coarse mesh comes via its boundaries, while the feedback to the coarser mesh occurs over the nest interior. It can be seen that multiple nests are allowed on a given level of nesting (e.g. domains 2 and 3 in Fig. 1.4), and they are also allowed to overlap. Domain 4 is at the third level, meaning that its grid size and time step are nine times less than those of domain 1. Each sub-domain has a “mother domain” in which it is completely embedded, so that for domains 2 and 3 the mother domain is 1, and for domain 4 it is 3. Nests may be turned on and off at any time in the simulation, noting that whenever a mother nest is terminated all its descendant nests are also turned off. Moving a domain is also possible during a simulation, provided that it is not a mother domain to an active nest and that it is not the coarsest mesh. There are three ways of doing two-way nesting (based on a switch called IOVERW). These are

• Nest interpolation (IOVERW=0). The nest is initialized by interpolating coarse-mesh fields. Topography, land use and coastlines only retain the coarse-mesh resolution. This option should be used with moving nests. It requires no additional input files.
• Nest analysis input (IOVERW=1). This requires a model input file to be prepared for the
nest in addition to the coarse mesh. This allows the inclusion of high-resolution topography and initial analyses in the nest. Usually such a nest has to start up at the same time as the coarse mesh.
• Nest terrain input (IOVERW=2). This new option requires just a terrain/land-use input file; the meteorological fields are interpolated from the coarse mesh and vertically adjusted to the new terrain. Such a nest can be started at any time in the simulation, but there will be a period over which the model adjusts to the new topography.

One-way nesting is also possible in MM5. Here the model is first run to create an output that is interpolated using any ratio (not restricted to 3:1), and a boundary file is also created once a one-way nested domain location is specified. Typically the boundary file may be hourly (depending on the output frequency of the coarse domain), and this data is time-interpolated to supply the nest. Therefore one-way nesting differs from two-way nesting in having no feedback and coarser temporal resolution at the boundaries. The one-way nest may also be initialized with enhanced-resolution data and terrain. It is important that the terrain is consistent with the coarser mesh in the boundary zone, and the TERRAIN preprocessor needs to be run with both domains to ensure this.
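The 3:1 two-way refinement described above implies a simple rule for grid spacing and time step at each nest level, sketched below. The coarse-domain values (27 km, 81 s) are illustrative, not MM5 defaults.

```python
# Sketch: grid spacing and time step implied by two-way nest levels (3:1 ratio).
# Level 1 is the coarse domain; each deeper level refines both by a factor of 3.

def nest_resolution(coarse_dx, coarse_dt, nest_level):
    """Return (dx, dt) for a nest at the given level of nesting."""
    factor = 3 ** (nest_level - 1)
    return coarse_dx / factor, coarse_dt / factor

dx1, dt1 = nest_resolution(27.0, 81.0, 1)   # domain 1 (coarse mesh)
dx3, dt3 = nest_resolution(27.0, 81.0, 3)   # a third-level nest, e.g. domain 4

print(dx1, dt1)   # 27.0 81.0
print(dx3, dt3)   # 3.0 9.0 -- nine times finer, as stated for domain 4 above
```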
1.4 Lateral Boundary Conditions

Running any regional numerical weather prediction model requires lateral boundary conditions. In MM5 all four boundaries have specified horizontal winds, temperature, pressure and moisture fields, and can have specified microphysical fields (such as cloud) if these are available. Therefore, prior to running a simulation, boundary values have to be set in addition to initial values for these fields. The boundary values come from analyses at the future times, from a previous coarser-mesh simulation (one-way nest), or from another model’s forecast (in real-time forecasts). For real-time forecasts the lateral boundaries will ultimately depend on a global-model forecast. In studies of past cases the analyses providing the boundary conditions may be enhanced by observation analysis (little_r or RAWINS) in the same way as the initial conditions are. Where upper-air analyses are used, the boundary values may only be available 12-hourly, while model-generated boundary conditions may come at a higher frequency, such as 6-hourly or even 1-hourly. The model uses these discrete-time analyses by linearly interpolating them in time to the model time.

The analyses completely specify the behavior of the outer row and column of the model grid. In the next four rows and columns in from the boundary, the model is nudged towards the analyses, and there is also a smoothing term. The strength of this nudging decreases linearly away from the boundaries. To apply this condition, the model uses a boundary file with information for the five points nearest each of the four boundaries at each boundary time. This is a rim of points from the future analyses described above. The interior values from these analyses are not required unless data assimilation by grid-nudging is being performed, so disk space is saved by having the boundary file contain just the rim values for each field.
Two-way nest boundaries are similar but are updated every coarse-mesh timestep and have no relaxation zone. The specified zone is two grid-points wide instead of one.
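The five-point relaxation rim described above can be sketched as a weight profile. The exact linear profile below is an illustration of the idea, not MM5's actual relaxation coefficients, which differ in detail.

```python
# Sketch of the lateral boundary treatment: the outermost point is fully
# specified by the analysis, the next four points are relaxed toward it with
# a weight that falls off linearly, and the interior feels no boundary forcing.
# The specific weights are illustrative, not MM5's actual coefficients.

def boundary_weight(n, rim=5):
    """Relaxation weight at point n counted from the boundary (n = 1 is the edge)."""
    if n == 1:
        return 1.0                      # specified: fully analysis-driven
    if n <= rim:
        return (rim - n + 1) / rim      # linear decrease toward the interior
    return 0.0                          # interior: no boundary forcing

print([boundary_weight(n) for n in range(1, 8)])
# [1.0, 0.8, 0.6, 0.4, 0.2, 0.0, 0.0]
```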
1.5 Nonhydrostatic Dynamics Versus Hydrostatic Dynamics

Historically the Penn State/NCAR Mesoscale Model has been hydrostatic, because typical horizontal grid sizes in mesoscale models are comparable with or greater than the vertical depth of the features of interest. In that regime the hydrostatic approximation holds and the pressure is completely determined by the overlying air’s mass. However, when the scale of resolved features in the model has an aspect ratio near unity, or when the horizontal scale becomes shorter than the vertical scale, nonhydrostatic effects can no longer be neglected. MM5 Version 3 only supports the nonhydrostatic solver. The only additional term in nonhydrostatic dynamics is vertical acceleration, which contributes to the vertical pressure gradient so that hydrostatic balance is no longer exact. Pressure perturbations from a reference state (described in the next section), together with vertical momentum, become extra three-dimensional predicted variables that have to be initialized.
1.6 Reference State in the Nonhydrostatic Model

The reference state is an idealized temperature profile in hydrostatic equilibrium. It is specified by the equation
T0 = Ts0 + A ln( p0 ⁄ p00 )    (1.2)
T0(p0) is specified by three constants: p00 is sea-level pressure, taken to be 10^5 Pa; Ts0 is the reference temperature at p00; and A is a measure of lapse rate, usually taken to be 50 K, representing the temperature difference between p00 and p00/e = 36788 Pa. These constants are chosen in the INTERPF program. Usually just Ts0 needs to be selected, based on a typical sounding in the domain. The reference profile represents a straight line on a T-log p thermodynamic diagram. The accuracy of the fit is not important, and typically Ts0 is taken to the nearest 10 K (e.g. 270, 280, 290, 300 K in polar, midlatitude winter, midlatitude summer, and tropical conditions, respectively). A closer fit does, however, reduce the pressure-gradient force error associated with sloped coordinate surfaces over terrain, so Ts0 should be selected by comparison with the lower-tropospheric profile.

The surface reference pressure therefore depends entirely upon the terrain height. It can be derived from (1.2) using the hydrostatic relation,

Z = − ( R Ts0 ⁄ g ) ln( p0 ⁄ p00 ) − ( R A ⁄ 2g ) [ ln( p0 ⁄ p00 ) ]²    (1.3)
and this quadratic can be solved for p0 (surface) given Z, the terrain elevation. Once this is done, the heights of the model σ levels are found from
p0 = ps0 σ + ptop    (1.4)

where
ps0 = p0(surface) − ptop    (1.5)
and then (1.3) is used to find Z from p0. It can be seen that since the reference state is independent of time, the height of a given grid point is constant. Since Version 3.1 the reference state can include an isothermal layer at the top to better approximate the stratosphere. This is defined by a single additional temperature (Tiso) which acts as a lower limit for the base-state temperature. Using this effectively raises the model top height.
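Equations (1.2)-(1.5) can be worked through numerically. The sketch below follows the constants given in the text (p00 = 10^5 Pa, A = 50 K); R = 287 J kg⁻¹ K⁻¹ and g = 9.81 m s⁻² are the usual values, and Ts0 = 290 K and ptop = 10^4 Pa are illustrative choices, not defaults of any MM5 program.

```python
import math

# Sketch of the reference-state computation of Sec. 1.6 (illustrative values).
R, G = 287.0, 9.81        # gas constant (J/kg/K), gravity (m/s^2)
P00, A = 1.0e5, 50.0      # sea-level reference pressure (Pa), lapse measure (K)

def t0(p0, ts0=290.0):
    """Reference temperature profile, Eq. (1.2)."""
    return ts0 + A * math.log(p0 / P00)

def p0_surface(z, ts0=290.0):
    """Invert the quadratic in ln(p0/p00), Eq. (1.3), for terrain height z (m)."""
    a = R * A / (2.0 * G)
    b = R * ts0 / G
    # physical root: the one with p0 closest to (and below) p00
    x = (-b + math.sqrt(b * b - 4.0 * a * z)) / (2.0 * a)
    return P00 * math.exp(x)

def p0_at_sigma(sigma, z, ptop=1.0e4, ts0=290.0):
    """Reference pressure of a sigma level, Eqs. (1.4)-(1.5)."""
    ps0 = p0_surface(z, ts0) - ptop
    return ps0 * sigma + ptop

# at sea level the quadratic root is zero, so p0(surface) = p00
print(round(p0_surface(0.0)))                                   # 100000
# terrain at 1500 m sits at a lower reference surface pressure
print(p0_surface(1500.0) < P00)                                 # True
# sigma = 1 recovers the surface pressure
print(abs(p0_at_sigma(1.0, 1500.0) - p0_surface(1500.0)) < 1e-6)  # True
```

Since the reference state is independent of time, repeating this once at startup fixes the height of every grid point for the whole run, as the text notes.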
1.7 Four-Dimensional Data Assimilation

In situations where data over an extended time period are to be input to the model, four-dimensional data assimilation (FDDA) is an option that allows this to be done. Essentially, FDDA allows the model to be run with forcing terms that “nudge” it towards the observations or an analysis. The benefit is that after a period of nudging the model has been fitted, to some extent, to all the data over that time interval while also remaining close to a dynamical balance. This has advantages over just initializing with an analysis at a single synoptic time, because adding data over a period effectively increases the data resolution: observations at a station are carried downstream by the model and may help fill data voids at later times.

The two primary uses for FDDA are dynamical initialization and four-dimensional datasets. Dynamical initialization uses FDDA over a pre-forecast period to optimize the initial conditions for a real-time forecast. It has been shown that the added data is beneficial to forecasts compared to a static initialization from an analysis at the initial time. The second application, four-dimensional datasets, is a method of producing dynamically balanced analyses that have a variety of uses, from budget to tracer studies. The model maintains realistic continuity in the flow and geostrophic and thermal-wind balances while nudging assimilates data over an extended period.

Two methods of data assimilation exist, depending on whether the data is gridded or individual observations. Gridded data, taking the form of analyses on the model grid, are used to nudge the model point by point with a given time constant. This is often most useful on larger scales, where an analysis can accurately represent the atmosphere between the observations that go into it.
For smaller scales, asynoptic data, or special platforms such as profilers or aircraft, where full analyses cannot be made, individual observations may be used to nudge the model. Here each observation is given a time window and a radius of influence over which it affects the model grid. The weight of the observation at a grid point thus depends upon its spatial and temporal distance from the observation, and several observations may influence a point at a given time.
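The Newtonian relaxation idea behind both nudging methods can be sketched in one line of arithmetic: the model tendency gains an extra term proportional to the difference between the analysis (or observation) and the model. The nudging coefficient and the toy values below are illustrative, not MM5's actual FDDA settings.

```python
# Sketch of Newtonian relaxation ("nudging"): du/dt = forcing + G*(analysis - model).
# g_nudge (1/s) and all values below are illustrative, not MM5 FDDA defaults.

def step(u_model, u_analysis, forcing, g_nudge, dt):
    """One explicit time step of the nudged tendency equation."""
    return u_model + dt * (forcing + g_nudge * (u_analysis - u_model))

u, ua = 0.0, 10.0          # model value vs. analysis value (arbitrary units)
for _ in range(100):       # 100 steps of 60 s with no physical forcing
    u = step(u, ua, forcing=0.0, g_nudge=3.0e-4, dt=60.0)

# after the nudging period the model has relaxed part-way toward the analysis,
# rather than jumping to it, which is what keeps it near dynamical balance
print(0.0 < u < ua)        # True
```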
1.8 Land-Use Categories

The model has the option of three sets of land-use categorizations (Table 4.2) that are assigned
along with elevation in the TERRAIN program from archived data. These have 13, 16, or 24 categories (type of vegetation, desert, urban, water, ice, etc.). Each grid cell of the model is assigned one of the categories, and this determines surface properties such as albedo, roughness length, longwave emissivity, heat capacity and moisture availability. Additionally, if a snow cover dataset is available, the surface properties may be modified accordingly. The values in the table are also variable according to summer or winter season (for the northern hemisphere). Note that the values are climatological and may not be optimal for a particular case, especially moisture availability. A simpler land-use option distinguishes only between land and water, and gives the user control over the values of surface properties for these categories.
1.9 Map Projections and Map-Scale Factors

The modeling system has a choice of several map projections: Lambert Conformal is suitable for mid-latitudes, Polar Stereographic for high latitudes, and Mercator for low latitudes. The x and y directions in the model do not correspond to west-east and north-south except for the Mercator projection; therefore the observed wind generally has to be rotated to the model grid, and the model u and v components need to be rotated before comparison with observations. These transformations are accounted for in the model pre-processors that provide data on the model grid, and in the post-processors. The map-scale factor, m, is defined by

m = (distance on grid) ⁄ (actual distance on earth)

and its value is usually close to one, varying with latitude. The projections in the model preserve the shape of small areas, so that dx = dy everywhere, but the grid length varies across the domain to allow the representation of a spherical surface on a plane surface. Map-scale factors need to be accounted for in the model equations wherever horizontal gradients are used.
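The last point above can be made concrete with a centered finite difference. Since m = (grid distance)/(true distance), the true distance between two grid points is the grid distance divided by m, so a gradient computed on the grid must be multiplied by m to become a gradient in real space. The values below are illustrative.

```python
# Sketch of map-scale-factor bookkeeping for a horizontal gradient.
# m = (distance on grid) / (actual distance on earth), so the true separation
# of two grid points 2*dx apart on the grid is 2*dx/m, and
# df/dx(true) = m * (f_east - f_west) / (2*dx). Values are illustrative.

def true_gradient(f_east, f_west, dx_grid, m):
    """Centered difference converted from grid space to earth distance."""
    return m * (f_east - f_west) / (2.0 * dx_grid)

# where m = 1, grid distance equals true distance
print(true_gradient(2.0, 0.0, 1000.0, m=1.0))              # 0.001
# where m > 1, the true distance is shorter, so the true gradient is larger
print(true_gradient(2.0, 0.0, 1000.0, m=1.05) > 0.001)     # True
```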
1.10 Data Required to Run the Modeling System

Since the MM5 modeling system is primarily designed for real-data studies and simulations, it requires the following datasets to run:

• Topography and land use (in categories);
• Gridded atmospheric data that have at least these variables: sea-level pressure, wind, temperature, relative humidity and geopotential height, at these pressure levels: surface, 1000, 850, 700, 500, 400, 300, 250, 200, 150 and 100 mb;
• Observation data that contain soundings and surface reports.

Mesouser provides a basic set of topography, land-use and vegetation data that have global coverage but variable resolution. The Data Support Section of the Scientific Computing Division at NCAR has an extensive archive of atmospheric data, from gridded analyses to observations. For information on how to obtain data from NCAR, please visit http://www.scd.ucar.edu/dss/index.html.
2: GETTING STARTED

Purpose 2-3
Program portability 2-3
Prerequisite 2-4
Where to obtain program tar files? 2-5
What is Contained in a Program tar File? 2-7
Steps to run MM5 modeling system programs 2-7
Functions of Job Decks or Scripts 2-8
What to Modify in a Job Deck/Script? 2-8
  Shell Variables 2-8
  Parameter Statements 2-9
  Fortran Namelist 2-9
How to Build the Executable and Run the Program? 2-10
  Creating FORTRAN Executable 2-10
  Linking files to Fortran units 2-10
  Execution 2-10
Input Files 2-11
Output Files 2-11
Representation of Date in MM5 Modeling System Programs 2-11
Where to Find Data at NCAR? 2-12
Other Data Sources 2-14
2.1 Purpose

This chapter discusses general aspects of the MM5 modeling system programs, including

• what is required on your computer in order to compile and run MM5 programs;
• where and how to obtain program tar files and utility programs;
• the function of a job script (or job deck);
• parameter statements and namelists;
• how to set up, compile and run the modeling system programs;
• date representation in the MM5 modeling system;
• where to find data to run MM5.

2.2 Program portability

The MM5 modeling system programs (TERRAIN, REGRID, LITTLE_R/RAWINS, INTERPF, NESTDOWN, INTERPB, RIP/GRAPH and MM5) can all be run on Unix workstations, PCs running Linux, Crays and IBMs. Running the MM5 programs on a Linux PC requires either the Portland Group or Intel Fortran and C compilers. The primary reasons for these choices are that 1) they support Cray pointers, which are used in several programs, including the MM5 model; and 2) they have Fortran 90 compilers.

The MM5 modeling system programs are mostly Fortran programs that require compilation on your local computer. Some (Fortran 77) programs need recompilation each time you change the model configuration; other (Fortran 90) programs need only be compiled once.

You should get to know your computer and compiler: find out how much usable memory you have on the computer, and what version of the compiler you have. This information comes in handy when you encounter problems compiling or running the modeling system programs and report them to mesouser. If you are thinking about purchasing a computer, get at least 0.5 to 1 GB of memory and a few GB of disk. As most of the MM5 preprocessor programs are being migrated to Fortran 90, you will need an f90 compiler to compile those programs. The following table lists the source code type of each program and the compiler required to compile it:
Program Name    Source Code    Compiler required
TERRAIN         Fortran 77     f77 (or f90)
REGRID          Fortran 90     f90
LITTLE_R        Fortran 90     f90
RAWINS          Fortran 77     f77 (or f90)
INTERPF         Fortran 90     f90
MM5             Fortran 77     f77 (or f90)
NESTDOWN        Fortran 90     f90
INTERPB         Fortran 90     f90
RIP/GRAPH       Fortran 77     f77 (or f90)
The MM5 programs do not require NCAR Graphics to run. It is a matter of convenience to have it, since a few programs use it to help you configure model domains and prepare data. Some of the visualization software that comes with the MM5 system (the RIP and GRAPH programs) is based on NCAR Graphics. NCAR Graphics is licensed software, but part of it has become free, and this is the part that the MM5 modeling system requires. For more information on NCAR Graphics, please see its Web page: http://ngwww.ucar.edu/.
2.3 Prerequisite

There are a few things a user needs to prepare before starting to run jobs on a workstation.
• If you have NCAR Graphics on your system, make sure you have the following line in your .cshrc file:
setenv NCARG_ROOT /usr/local
or
setenv NCARG_ROOT /usr/local/ncarg

This enables a user to load the NCAR Graphics libraries when compiling programs that use NCAR Graphics (Terrain, Little_r, Rawins, RIP and Graph).
• If you need to remotely copy files between two workstations, make sure you have an .rhosts file on both workstations. A typical .rhosts file looks like this:

chipeta.ucar.edu username
blackforest.ucar.edu username
• Make sure that you browse through the ~mesouser directories on NCAR's computers, or the mesouser/ directory on anonymous ftp. All job decks, program tar files, data catalogs, and utility programs reside in these directories.
2.4 Where to obtain program tar files?

MM5 modeling system programs are archived in three locations: NCAR's anonymous ftp site, NCAR IBM-accessible disk, and NCAR's Mass Storage System (MSS).

On the ftp site, the source code tar files are archived under /mesouser/MM5V3:

/mesouser/MM5V3/TERRAIN.TAR.gz
/mesouser/MM5V3/REGRID.TAR.gz
/mesouser/MM5V3/LITTLE_R.TAR.gz
/mesouser/MM5V3/RAWINS.TAR.gz
/mesouser/MM5V3/INTERPF.TAR.gz
/mesouser/MM5V3/MM5.TAR.gz
/mesouser/MM5V3/MPP.TAR.gz
/mesouser/MM5V3/NESTDOWN.TAR.gz
/mesouser/MM5V3/INTERPB.TAR.gz
/mesouser/MM5V3/GRAPH.TAR.gz
/mesouser/MM5V3/RIP4.TAR.gz
/mesouser/MM5V3/RIP.TAR.gz
/mesouser/MM5V3/FETCH.TAR.gz
/mesouser/mm53dvar/3dvar.tar.gz

On NCAR's IBM, the source code tar files and job decks reside in ~mesouser/MM5V3:

~mesouser/MM5V3/TERRAIN.TAR.gz
~mesouser/MM5V3/REGRID.TAR.gz
~mesouser/MM5V3/INTERPF.TAR.gz
~mesouser/MM5V3/LITTLE_R.TAR.gz
~mesouser/MM5V3/RAWINS.TAR.gz
~mesouser/MM5V3/MM5.TAR.gz
~mesouser/MM5V3/MPP.TAR.gz
~mesouser/MM5V3/NESTDOWN.TAR.gz
~mesouser/MM5V3/INTERPB.TAR.gz
~mesouser/MM5V3/GRAPH.TAR.gz
~mesouser/MM5V3/RIP4.TAR.gz
~mesouser/MM5V3/RIP.TAR.gz
~mesouser/MM5V3/FETCH.TAR.gz
~mesouser/MM5V3/CRAY/*.deck.cray
~mesouser/MM5V3/IBM/*.deck

On MSS, the source code tar files are archived in /MESOUSER/MM5V3:

/MESOUSER/MM5V3/TERRAIN.TAR.gz
/MESOUSER/MM5V3/REGRID.TAR.gz
/MESOUSER/MM5V3/INTERPF.TAR.gz
/MESOUSER/MM5V3/LITTLE_R.TAR.gz
/MESOUSER/MM5V3/RAWINS.TAR.gz
/MESOUSER/MM5V3/MM5.TAR.gz
/MESOUSER/MM5V3/MPP.TAR.gz
/MESOUSER/MM5V3/NESTDOWN.TAR.gz
/MESOUSER/MM5V3/INTERPB.TAR.gz
/MESOUSER/MM5V3/GRAPH.TAR.gz
/MESOUSER/MM5V3/RIP4.TAR.gz
/MESOUSER/MM5V3/RIP.TAR.gz
/MESOUSER/MM5V3/FETCH.TAR.gz

Previous releases are also available on ftp and MSS. To obtain files from the IBM and MSS, you need an NCAR SCD computing account. To access program tar files from NCAR's anonymous ftp site, do the following (taking the MM5 tar file as an example):

# ftp ftp.ucar.edu
Name: anonymous
Password: your-email-address
ftp> cd mesouser/MM5V3
ftp> binary
ftp> get MM5.TAR.gz
ftp> quit

Once you download these tar files, use the Unix gunzip command to decompress the .gz files,

gunzip MM5.TAR.gz

and untar the file by using the command

tar -xvf MM5.TAR

After you untar the file, a program directory will be created. In this example, an MM5 directory will be created with all source code files inside it.

All utility programs are archived under the MM5V3/Util/ directory on ftp and on the NCAR IBM-accessible disk. The list of utility programs is:
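The decompress-and-untar steps can be scripted. The sketch below builds a stand-in MM5.TAR.gz locally (so it runs anywhere, with no network access) and then applies the same gunzip and tar commands described above; the file contents are placeholders, not the real distribution:

```shell
#!/bin/sh
# Create a dummy MM5.TAR.gz so the unpack steps can be exercised offline
# (the real file comes from ftp.ucar.edu as shown above).
set -e
workdir=$(mktemp -d)
cd "$workdir"
mkdir MM5
echo "dummy source file" > MM5/Makefile
tar -cf MM5.TAR MM5
gzip MM5.TAR              # produces MM5.TAR.gz
rm -rf MM5

# The two commands from the text:
gunzip MM5.TAR.gz         # decompress the .gz file
tar -xvf MM5.TAR          # untar; recreates the MM5/ program directory

ls MM5                    # the program directory now exists
```

The same two commands apply unchanged to any of the program tar files listed above.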
Program/script Name        Function
cray2ibm.f                 convert Cray MM5 binary data to IBM binary data
cray2ibm-intermediate.f    convert Cray intermediate binary data to IBM binary data
ieeev3.csh                 convert Cray binary data to standard 32-bit IEEE data
readv3.f                   read program for MM5 V3 modeling system output
v22v3.tar.gz               program tar file to convert V2 data to V3
v32v2.tar.gz               program tar file to convert V3 MM5 model data to V2
tovis5d.tar.gz             program tar file to convert MM5 model data to Vis5D data
MM5toGrADS.TAR.gz          program tar file to convert MM5 model data to GrADS data
2.5 What is Contained in a Program tar File?

A program tar file contains all source code (excluding NCAR Graphics), a makefile, and instructions (in a README file) required to compile and run that particular program. As an example, the files contained in the RAWINS tar file are listed below:

CHANGES        Description of changes to the program
Diff/          Will contain difference files between consecutive releases
Makefile       Makefile to create the program executable
README         General information about the program directory
Templates/     Job script directory
con.tbl        Table file for plots
map.tbl        Table file for plots
src/           Program source code directory and low-level makefile
2.6 Steps to run MM5 modeling system programs

Typically there are several steps to set up and run the modeling system programs. For detailed instructions on how to compile and run a particular program, read the respective chapter and the README file inside the tar file.

For FORTRAN 77 programs TERRAIN and RAWINS:
1) Type make x.deck to create the job script that compiles and runs the program.
2) Edit x.deck to select appropriate shell variables, parameter statements, and namelist values.
3) Type x.deck to (compile and) run the program.

For FORTRAN 77 program GRAPH:
1) Edit include files if necessary.
2) Type make to create the program executable.
3) Type graph.csh n m mm5-modeling-system-output-file to run.

For FORTRAN 90 programs REGRID, LITTLE_R, INTERPF, INTERPB and NESTDOWN:
1) Type make to compile the program.
2) Edit the job script and/or the namelist.input file.
3) Type the executable name to run the program, e.g. regridder.
2.7 Functions of Job Decks or Scripts

Most MM5 modeling system programs have a job deck or script to help you run the program. Some are called x.deck, and some x.csh. While x.deck can be used either for a batch job (such as on an IBM) or an interactive job, x.csh is only for interactive use. They have very similar functions. These job decks and scripts assume the program source code is local; most also expect all input files to be local. To obtain an appropriate job script for your computer, type ‘make x.deck’ to create a deck for program x (program name in lower case, e.g. make terrain.deck). The general job deck construct and functions are the following:
• job switches, which usually appear in the first part of a deck;
• parameter statements, used in Fortran 77 programs to define domain and data dimensions;
• a FORTRAN namelist, used during program execution to select runtime options;
• a section that does not normally require user modification, which links input files to Fortran units, creates the executable based on the parameter statement setup, and obtains data from anonymous ftp sites (as in the case of the TERRAIN program).
2.8 What to Modify in a Job Deck/Script?

2.8.1 Shell Variables

Since the MM5 modeling system is designed for multiple applications, there are many options for how a job may be run. These options include different sources of input terrestrial and meteorological data, ways to do the objective analysis, running the model with or without the 4DDA option, whether an MM5 job is an initial or restart run, etc. A user is required to go through the shell variables and make the appropriate selections for the application at hand. The following example is taken from pregrid.csh; the selection concerns the type of global analysis used to create the first-guess fields:

#
#   Select the source of 3-d analyses
#
#set SRC3D = ON84
#set SRC3D = NCEP
set SRC3D = GRIB      # Many GRIB-format datasets
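Only the last uncommented `set` line takes effect: each assignment simply overwrites the previous one, so a source is selected by leaving exactly one line active. The same mechanics are shown below in POSIX sh (the decks themselves use csh syntax, so this is an illustration, not a fragment of pregrid.csh):

```shell
#!/bin/sh
# Mirror of the pregrid.csh selection block: later assignments win.
SRC3D=ON84
SRC3D=NCEP
SRC3D=GRIB      # the last uncommented line is the one the script uses
echo "Selected 3-d analysis source: $SRC3D"
```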
Other examples of shell variables are listed below; they need to be defined by users for each program of the MM5 modeling system:

Program Name      Shell Variables
TERRAIN           ftpdata, Where30sTer
REGRID/pregrid    SRC3D, SRCSST, SRCSNOW, SRCSOIL, VTxxx
RAWINS            INOBS, SFCsw, BOGUSsw, InRaobs, InSfc3h, InSfc6h
MM5               STARTsw, FDDAsw (in IBM batch deck only)
These shell variables will be discussed in detail in the other chapters of this document.
2.8.2 Parameter Statements

The FORTRAN 77 MM5 modeling system programs require a user to set parameter statements in a deck or script (TERRAIN and RAWINS), or directly in an include file (GRAPH). These statements define the parameterized dimensions of a FORTRAN 77 program before compilation takes place. The Unix cat command is used in a deck to create the FORTRAN include files containing these parameter statements. These are direct modifications to the source code, so strict FORTRAN syntax must be observed. The usage of cat is shown below:

cat > src/param.incl << EOF
      PARAMETER (.....)
      .....................
EOF

This creates a Fortran include file param.incl in the src/ directory; this include file will be used by a number of subroutines during compilation. As an example, the following is taken from terrain.deck:

cat > src/parame.incl.tmp << EOF
C IIMX,JJMX are the maximum size of the domains, NSIZE = IIMX*JJMX
      PARAMETER (IIMX = 136, JJMX = 181, NSIZE = IIMX*JJMX)
EOF
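The heredoc mechanics can be tried in isolation. The sketch below writes an include file the same way the deck does; the PARAMETER values are the ones quoted from terrain.deck above, used purely for illustration:

```shell
#!/bin/sh
set -e
mkdir -p src
# Write a Fortran include file exactly as the deck's "cat << EOF" does:
# everything up to the terminating EOF line goes into the file verbatim.
cat > src/parame.incl.tmp << EOF
C IIMX,JJMX are the maximum size of the domains, NSIZE = IIMX*JJMX
      PARAMETER (IIMX = 136, JJMX = 181, NSIZE = IIMX*JJMX)
EOF
cat src/parame.incl.tmp
```

Note that the heredoc body is written verbatim here because it contains no `$` variables; in the real decks, shell variables inside the heredoc are expanded before the file is written.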
2.8.3 Fortran Namelist

The MM5 modeling system uses FORTRAN namelists to provide a way of selecting different options without re-compiling the program. If the namelist is inside a shell script or a deck, the Unix cat command is used to create the namelist file during execution of the script. The variables in the namelist for each program are described in detail in other chapters of this document. The format is the following:

cat > nml << EOF
&xxxx
 .........
&END
EOF
where xxxx is the name of the namelist. Since namelists are not part of the ANSI FORTRAN 77 standard, FORTRAN 77 compilers on different machines may use slightly different namelist syntax. Namelists in the FORTRAN 90 programs do not have this problem. The following is an example of namelist MAPBG from program TERRAIN for Cray, SGI, SUN and Compaq:

&MAPBG
 PHIC  = 36.0,       ; CENTRAL LATITUDE (minus for southern hemisphere)
 XLONC = -85.0,      ; CENTRAL LONGITUDE (minus for western hemisphere)
 IEXP  = .T.,        ; .T. EXTENDED COARSE DOMAIN, .F. NOT EXTENDED.
 AEXP  = 360.,       ; APPROX EXPANSION (KM)
 IPROJ = 'LAMCON',   ; MAP PROJECTION
&END
For most Fortran 90 programs, edit the namelist file namelist.input directly.
After the user has 1) correctly set the shell variables, 2) modified the parameter statements, and 3) set up the FORTRAN namelist, there is typically no more user modification required in the deck. The rest of the script can be treated as a black box.
2.9 How to Build the Executable and Run the Program?

After you set the parameter statements, either in a deck or directly in an include file, for any FORTRAN 77 program, type x.deck or x.csh to compile and run the program. Alternatively, type make to compile, and then type x.deck to run. For a FORTRAN 90 program, simply type make to compile, and type the executable name to run.
2.9.1 Creating FORTRAN Executable

The Unix make utility is used to generate FORTRAN executables in the MM5 modeling system. The rules and compile options a make command uses are contained in the Makefile for programs TERRAIN, REGRID, LITTLE_R/RAWINS, INTERPF, INTERPB, NESTDOWN, RIP and GRAPH, and in configure.user for the MM5 program. For more information on make, please see Chapters 3 and 9 of this document.
2.9.2 Linking files to Fortran units

For most FORTRAN 77 programs, named files are linked to Fortran units inside a deck or shell script prior to execution of the program. For example,

ln -s terrain.namelist fort.15

This command makes a soft link between the filename terrain.namelist and Fortran unit number 15.
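A quick way to see the effect of such a link (the file content below is a placeholder, not a real TERRAIN namelist):

```shell
#!/bin/sh
set -e
# Create a named file, then link it to Fortran unit 15 as the decks do.
echo "&terrain_namelist /" > terrain.namelist
ln -sf terrain.namelist fort.15

# Reading through the link yields the original file's contents,
# which is how the Fortran program sees its unit-15 input.
cat fort.15
readlink fort.15
```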
2.9.3 Execution

timex ./X.exe >&! X.print.out
or
time ./X.exe >& X.print.out

where X is the program name. The Unix command timex or time is used to get a timing of the executable run. Example:

#
#   run MM5
#
timex mm5.exe >&! mm5.print.out
At the end of your mm5.print.out file, you will see something like:

real    1028.8
user    1009.7
sys        2.4

which tells you how long the mm5 job has taken in terms of wallclock time (real).
2.10 Input Files

MM5 modeling system programs require several datasets to run. Mesouser provides the terrestrial datasets for program TERRAIN. Programs REGRID, LITTLE_R and RAWINS require other archived data, or real-time data, to run.

Since V3.6, maximum snow albedo data at 1 degree resolution can be ingested into the model via REGRID using the ALMX_FILE file, which is supplied with the REGRID.TAR.gz file. It is suggested that this file be used if one intends to use the Noah LSM option in MM5. Monthly albedo fields at 0.15 degree resolution can also be ingested via REGRID for Noah LSM use. These data are available from

/MESOUSER/DATASETS/REGRID/MONTHLY_ALBEDO.TAR.gz

or from

ftp://ftp.ucar.edu/mesouser/MM5V3/REGRID_DATA/MONTHLY_ALBEDO.TAR.gz
2.11 Output Files

When a job is completed, certain output files are generated, named programname_DOMAINx (e.g., REGRID_DOMAIN1, LITTLE_R_DOMAIN1, etc.). It is up to the user to archive the output. If you want to keep the output files, move them to a disk where you can keep them. If you run the same program again, these files will be overwritten.
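Since a rerun silently overwrites files such as REGRID_DOMAIN1, a simple hedge is to move each run's output into its own directory before starting the next run. A minimal sketch (the run label and directory layout are assumptions, not an MM5 convention):

```shell
#!/bin/sh
set -e
# Pretend a REGRID run just finished and left its output file here.
echo "binary output" > REGRID_DOMAIN1

# Move it into a per-run archive directory before a rerun clobbers it.
run_id="run_2005012412"              # hypothetical run label
mkdir -p "archive/$run_id"
mv REGRID_DOMAIN1 "archive/$run_id/"
ls "archive/$run_id"
```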
2.12 Representation of Date in MM5 Modeling System Programs

Dates are represented in the MM5 modeling system programs by up to 24 characters in the form yyyy-mm-dd_hh:mm:ss:xxxx, where yyyy represents the 4-digit year, mm the 2-digit month, dd the 2-digit day, hh the 2-digit hour, mm (again) the 2-digit minutes, ss the 2-digit seconds, and xxxx ten-thousandths of a second (optional). This replaces the 8-digit integer date representation, YYMMDDHH, used in previous modeling system codes. For example, 1200 UTC 24 January 2005 is represented as 2005-01-24_12:00:00 in the model. Note that all model times refer to Universal Time (Greenwich Mean Time), not local time.
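The MM5-style date string (without the optional fractional part) can be generated directly with the date command. The -d flag used here is GNU date syntax, which is an assumption about your platform; BSD/macOS date uses -j -f instead:

```shell
#!/bin/sh
set -e
# Format a fixed UTC time in MM5's yyyy-mm-dd_hh:mm:ss form.
# "-d" parses an arbitrary date string (GNU date only).
stamp=$(date -u -d "2005-01-24 12:00:00" +%Y-%m-%d_%H:%M:%S)
echo "$stamp"     # -> 2005-01-24_12:00:00
```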
2.13 Where to Find Data at NCAR?

The Data Support Section of NCAR's Scientific Computing Division provides catalogs of the data used by the MM5 modeling system programs REGRID and LITTLE_R/RAWINS. These catalogs are available from the Data Support Section of NCAR/SCD:

http://dss.ucar.edu/datasets/dsNNN.x/MSS-file-list.html

or

ftp://ncardata.ucar.edu/datasets/dsNNN.x

where NNN.x is a dataset identifier. The datasets most commonly used with MM5 are listed below:
Dataset Identifier    Dataset Name
DS082.0               NCEP GLOBAL TROPO ANALS, DAILY 1976JUL-1997MAR
DS083.0               NCEP GLOBAL TROPO ANALS, DAILY 1997MAY-CON (GRIB)
DS083.2               NCEP Final Analysis (GRIB, 1 degree resolution) 1999SEP15-CON
DS090.0               NCEP/NCAR Global Reanalysis, 6 hourly, monthly (1948-Current)
DS111.2               ECMWF TOGA GLOBAL SFC & UPPER AIR ANALS, DAILY 1985-CON
DS115                 ECMWF Global Reanalysis (1979-1993)
DS118                 ECMWF Global Reanalysis (ERA40), 1957SEP-2002AUG
DS609.2               NCEP Eta model output (GRID212) 1995MAY01-CON
DS240.0               U.S. NAVY FNOC N.HEM SEA SFC TEMP ANALS, DAILY 1961NOV-1993DEC
DS353.4               NCEP ADP GLOBAL UPPER AIR OBS SUBSETS, DAILY 1973-CON
DS464.0               NCEP ADP GLOBAL SFC OBS, DAILY JUL1976-CON
Information on the NCEP/NCAR Reanalysis Project (NNRP) and on the European Centre reanalyses can be found at

http://dss.ucar.edu/pub/reanalyses.html

The NCEP Eta model data (the AWIP data, GRID 212), the NCEP Final Analysis and the ECMWF ERA40 are recent additions to NCAR's archive. Information about these data can be found at:

http://dss.ucar.edu/datasets/ds609.2.html
http://dss.ucar.edu/datasets/ds083.2.html
http://dss.ucar.edu/datasets/ds118.0.html
http://dss.ucar.edu/datasets/ds118.1.html
When choosing to run the Noah land-surface model option in MM5, one can use the NNRP, AWIP or Final Analysis datasets at NCAR. These datasets contain the additional fields required by the LSM to initialize soil temperature and moisture. A recent addition that can also be used as input to the Noah LSM option is the AGRMET data supplied by AFWA. The AGRMET data provide soil temperature, soil moisture, soil water, land-sea mask and soil height data. One can use this dataset in combination with any other 3-dimensional meteorological analyses. This dataset can be obtained from

/MESOUSER/DATASETS/AGRMET/

The data are available since October 2002. Documentation regarding these data is available from

http://www.mmm.ucar.edu/mm5/mm5v3/data/agrmet.html
A sample of the catalog for NCEP dataset DS083.0 is shown below:

Y47606   1998OCT01-1998OCT31, 12524 BLKS,   86.0MB
Y48077   1998NOV01-1998NOV30, 12120 BLKS,   83.3MB
Y48277   1998DEC01-1998DEC31, 12524 BLKS,   85.9MB

The MSS filenames corresponding to these files are

/DSS/Y47606
/DSS/Y48077
/DSS/Y48277

A sample of the catalog for the NCEP global upper air observation dataset DS353.4 looks like

Y47652   1998OCT01-1998OCT31, 9096 BLKS                LIST A98
Y48086   1998NOV01-1998NOV30, 8688 BLKS                LIST A98
Y48286   1998DEC01-1998DEC31, 8541 BLKS, SEE NOTES -   LIST A98
         NOTE: ADPUPA 1998DEC15 MISSING                LIST A98
Similarly, the MSS file name corresponding to the Oct 1998 dataset is /DSS/Y47652.

File specifics for the global analyses used as input to REGRID are no longer required: shell scripts are provided that access the data MASTER file, find the MSS file names, and obtain the files based on the user-selected data source and date. If you run LITTLE_R or RAWINS on your local computer, you will still need to go to the catalog, find the file name on the MSS, and access the file from NCAR's computers. Note that you need to use the -fBI option with msread to obtain observations to be used on your workstation. A small utility program, fetch.csh, may also be used to obtain observational data.

NCAR has a couple of free datasets available from the NCAR/DSS web site. These datasets are updated monthly, and are available for the latest 12 months. The free datasets are:
DS083.0    NCEP GLOBAL TROPO ANALS, DAILY 1997MAY-CON (GRIB)
DS083.2    NCEP Final Analysis (GRIB, 1 degree resolution) 1999SEP15-CON
DS090.0    NCEP/NCAR Global Reanalysis, 6 hourly, monthly (1948-Current)
DS353.4    NCEP ADP GLOBAL UPPER AIR OBS SUBSETS, DAILY 1973-CON
DS464.0    NCEP ADP GLOBAL SFC OBS, DAILY JUL1976-CON
For more information on the free datasets, see

http://www.mmm.ucar.edu/mm5/mm5v3/data/free_data.html

If you don't have access to NCAR's data, you need to consider where you can obtain similar data to run the modeling system.
2.14 Other Data Sources

Fractional sea-ice data are available from the Near Real-Time SSM/I EASE-Grid Daily Global Ice Concentration and Snow Extent dataset (National Snow and Ice Data Center, Boulder, CO, USA). The data can be obtained from their web site:

http://nsidc.org/data/nise1.html

Please become a registered user before downloading data (there is no charge).

SST data on a 0.5 degree grid are available from NCEP's ftp server:

ftp://ftpprd.ncep.noaa.gov/pub/emc/mmab/history/sst

File names have the format rtg_sst_grib_0.5.YYYYMMDD. The data are in GRIB format and are available since February 2001.

If you are interested in running MM5 in real-time, a good source of data is NCEP's ftp server:

ftp://ftpprd.ncep.noaa.gov/pub/data/nccf/com

NCEP provides a number of datasets from their global and regional models at this site. For example:

NAM/Eta 40 km data: nam/prod/nam.YYYYMMDD/nam.tXXz.grbgrbYY.tm00
  (e.g. nam/prod/nam.20021223/nam.t18z.grbgrb12.tm00)
GFS/AVN 1 deg data: gfs/prod/gfs.YYYYMMDD/gfs.tXXz.pgrbfYY
  (e.g. gfs/prod/gfs.20021223/gfs.t12z.pgrbf12)
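File names on these servers embed the date, model cycle and forecast hour, so a download script typically builds them from the current date. The sketch below shows only the string construction (no network access); the names follow the patterns quoted above, with the date fixed so the output is reproducible:

```shell
#!/bin/sh
set -e
# Build NCEP-style file names from a date, cycle hour and forecast hour,
# following the patterns quoted in the text.
ymd=20021223
cycle=18        # model cycle hour (the tXXz part)
fhour=12        # forecast hour

eta_file="nam/prod/nam.${ymd}/nam.t${cycle}z.grbgrb${fhour}.tm00"
gfs_file="gfs/prod/gfs.${ymd}/gfs.t12z.pgrbf${fhour}"
sst_file="rtg_sst_grib_0.5.${ymd}"

echo "$eta_file"
echo "$gfs_file"
echo "$sst_file"
```

In a real-time setting, `ymd` would come from `date -u +%Y%m%d` rather than a fixed string.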
3 MAKE UTILITY

The UNIX make Utility
make Functionality
The Makefile
Sample make Syntax
Macros
Internal Macros
Default Suffixes and Rules
Sample Program Dependency Chart
Sample Program Components for make Example
makefile Examples for the Sample Program
Make Command Used in MM5 Preprocessing Programs
An Example of Top-level Makefile
An Example of Low-level Makefile
3.1 The UNIX make Utility

This chapter is designed to provide the MM5 user with both an overview of the UNIX make command and an understanding of how make is used within the MM5 system.

UNIX supplies the user with a broad range of tools for maintaining and developing programs. Naturally, the user who is unaware of their existence, doesn't know how to use them, or thinks them unnecessary will probably not benefit from them. In the course of reading this chapter it is hoped that you not only become aware of make but also come to understand why you need it. In the same way you use dbx when you want to debug a program, or sort when you need to sort a file, you use make when you want to "make" a program. While make is general enough to be used in a variety of ways, its primary purpose is the generation and maintenance of programs.

But why bother with a separate make utility in the first place? When you wrote your first program it probably consisted of one file, which you compiled with a command such as "f77 hello.f". As long as the number of files is small, you can easily track the modified files and recompile any programs that depend on them. As the number of files grows, you may write a script that contains the compiler commands and reduces the amount of repetitive typing. But as the number of files increases further, your script becomes complicated, and every time you run it every file is recompiled even though you may have modified only a single file. If you modify an include file, it is your responsibility to make sure that the appropriate files are recompiled.

Wouldn't it be nice to have something smart enough to recompile only the files that need to be recompiled? Something that would automatically recognize that a buried include file has been changed and recompile as necessary? Something that would optimize the regeneration procedure by executing only the build steps that are required? You do have that something: it is make. When you begin to work on larger projects, make ceases to be a nicety and becomes a necessity.
3.2 make Functionality
The most basic notion underlying make is that of dependencies between files. Consider the following command:

f77 -o average mainprog.o readit.o meanit.o printit.o

Consider the object file mainprog.o and its associated source code file mainprog.f. Since a change in mainprog.f necessitates recompiling mainprog.o, we say that mainprog.o is dependent upon mainprog.f. By the same reasoning, the final program average is dependent upon mainprog.o; average in this context is the target program (the program we wish to build). In this fashion we can build up a tree of dependencies for the target, with each node having a subtree of its own dependencies (see Figure 3.1). Thus the target average is dependent upon both mainprog.o and mainprog.f, while mainprog.o is dependent only upon mainprog.f. Whenever mainprog.f is newer than mainprog.o, average will be rebuilt.

make uses the date and time of last modification of a file -- the time you see when using the UNIX ls command -- to determine whether a dependent file is newer than the target file. By recognizing this time relationship between target and dependent, make can keep track of which files are up-to-date with one another. This is a reasonable approach, since compilers produce files sequentially: the creation of an object file necessitates the pre-existence of its source code file. Whenever make finds that the proper time relationship between files does not hold, it attempts to regenerate the target files by executing a user-specified list of commands, on the assumption that the commands will restore the proper time relationships between the source files and the files dependent upon them.

The make command allows the specification of a hierarchical tree of dependency relationships. Such relationships are a natural part of the structure of programs. Consider our program average (see Figure 3.1). This application consists of four source code files and three include files. Each of the four source code files must be compiled to create the four object files, which in turn are used to create the final program average. There is a natural dependency relationship between the four types of files: include files, FORTRAN sources, objects, and executables. make uses this relationship, plus a specification of the dependency rules between files, to determine when a procedure (such as recompilation) is required. make relieves people who are constantly recompiling the same code of the tedium of keeping track of all the complexities of their project, while avoiding inefficiency by minimizing the number of steps required to rebuild the executable.
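The timestamp comparison make performs is the same one the shell exposes through the `-nt` ("newer than") test operator; a quick illustration using the file names from the example above:

```shell
#!/bin/sh
set -e
# Emulate make's staleness check: is the source newer than the target?
touch mainprog.o        # "target" built first
sleep 1
touch mainprog.f        # "source" modified afterwards

if [ mainprog.f -nt mainprog.o ]; then
    echo "mainprog.o is out of date; make would recompile it"
fi
```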
3.3 The Makefile

Even with a small number of files, the dependency relationships in a programming project can be confusing to follow. In the case of MM5, with hundreds of files, an English description would be unusable. Since make requires some definition of the dependencies, it requires that you prepare an auxiliary file -- the makefile -- that describes the dependencies between files in the project. Two kinds of information must be placed in a makefile: dependency relations and generation commands. The dependency relations are used to determine when a file must be regenerated from its supporting source files. The generation commands tell make how to build out-of-date files from those source files. The makefile therefore contains two distinct line formats: one called rules, the other commands.
3.4 Sample make Syntax

targetfile : dependencies
< tab >command1
< tab >command2

myprog.exe : mysource1.f mysource2.f
< tab >f77 -o myprog.exe mysource1.f mysource2.f

A rule begins in the first position of the line and has the format targetfile : dependencies. The name or names to the left of the colon are the names of target files. The names to the right of the colon are the files upon which the target is dependent. That is, if the files to the right are newer than the files to the left, the target file must be rebuilt. A dependency rule may be followed by one or more command lines. A command line must begin with at least one tab character; otherwise it will not be recognized by make and will probably cause make to fail. This is a common cause of problems for new users. Other than this, make places no restrictions on command lines: when make uses command lines to rebuild a target, it passes them to the shell to be executed, so any command acceptable to the shell is acceptable to make.
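A minimal, self-contained demonstration of this rule/command structure (using `cat` in place of f77 so it runs without a Fortran compiler; the file names are placeholders):

```shell
#!/bin/sh
set -e
# Write a one-rule makefile. printf is used so the command line really
# begins with the mandatory tab character (\t).
printf 'myprog.out: mysource1.txt mysource2.txt\n\tcat mysource1.txt mysource2.txt > myprog.out\n' > Makefile.demo

echo hello > mysource1.txt
echo world > mysource2.txt

make -f Makefile.demo     # builds myprog.out from its dependencies
make -f Makefile.demo     # target already up to date: nothing is rebuilt
cat myprog.out
```

The second invocation is the point of the exercise: because myprog.out is newer than both dependencies, make executes no commands at all.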
3.5 Macros

make macro definitions and usage look very similar to UNIX environment variables, and they serve much the same purpose. If the macro STRING1 has been defined to have the value STRING2, then each occurrence of $(STRING1) is replaced with STRING2. The parentheses are optional if STRING1 is a single character.

MyFlags = -a -b -c -d

In this example, every usage of $(MyFlags) would be replaced by make with the string "-a -b -c -d" before executing any shell command.
3.6 Internal Macros

$@    The name of the current target.
$<    The name of a dependency file, derived as if selected for use with an implicit rule.
$?    The list of dependencies that are newer than the target.
$*    The basename of the current target, derived as if selected for use with an implicit rule.
D     Directory-path modifier, as in $(@D), $(<D).
F     File-name modifier, as in $(@F), $(<F).
3.7 Default Suffixes and Rules

.f.o:
< tab >$(FC) $(FFLAGS) -c $<

.f:
< tab >$(FC) $(FFLAGS) $(LDFLAGS) $< -o $@

.SUFFIXES:
< tab >.o .c .f

In the Makefile you may notice a line beginning with .SUFFIXES near the top of the file, followed by a number of targets (e.g., .f.o). In addition, you may notice that the MAKE macro is commonly defined using the -r option. These definitions are all designed to deal with what are known as make's implicit suffix rules. An implicit suffix rule defines the relationship between files based on their suffixes. If no explicit rule exists and the suffix of the target is one recognized by make, make uses the command associated with the implicit suffix rule. So if there is no explicit rule in the Makefile that deals with the target mainprog.o, make will recognize the suffix .o as indicating an object file and will look in its list of implicit suffix rules to decide how to update the target. If there is a file named mainprog.f in the directory, make will compile mainprog.o using the .f.o rule. If instead there is a file named mainprog.c, make will compile mainprog.o using the .c.o implicit suffix rule. If both source files are in the directory, the rule used depends on the particular implementation of make.

The -r option to make turns off the implicit suffix rules. On most platforms we do not use the implicit suffix rules, preferring to define our own. We do this by declaring which suffixes participate in suffix rules -- via the .SUFFIXES target -- and then defining the rules themselves in the low-level makefiles. For example, one of the suffix rules we specify is

.F.o:
< tab >$(RM) $@
< tab >$(FC) -c $(FCFLAGS) $*.F

The reason we have this suffix rule is that all our Fortran files are named *.F, and they are run through cpp (the C pre-processor) before being compiled.
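A runnable sketch of a user-defined suffix rule, analogous to the .F.o rule above but using tr in place of a Fortran compiler (the .txt/.up suffixes and file names are invented for the demonstration):

```shell
#!/bin/sh
set -e
# A makefile that declares its own suffixes and defines a .txt -> .up
# suffix rule. Inside the rule, $< is the source and $@ the target.
printf '.SUFFIXES: .txt .up\n.txt.up:\n\ttr a-z A-Z < $< > $@\n\nall: note.up\n' > Makefile.sfx

echo "suffix rules at work" > note.txt
make -f Makefile.sfx all    # no explicit rule for note.up, so the
cat note.up                 # .txt.up suffix rule builds it from note.txt
```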
3.8 Sample Program Dependency Chart

average
  mainprog.o
    mainprog.f
  readit.o
    readit.f
      unit.include
      data.include
  meanit.o
    meanit.f
      sum.include
      data.include
  printit.o
    printit.f
      sum.include
      data.include

Fig. 3.1 Sample program dependency chart (each file depends on those indented beneath it).
3.9 Sample Program Components for make Example

mainprog.f
------------------------
      program mainprog
      call readit
      call meanit
      call printit
      stop 99999
      end

readit.f
------------------------
      subroutine readit
      include 'unit.include'
      include 'data.include'
      open (iunit,file='input.data',
     *      access='sequential',
     *      form='formatted')
      read (iunit,100) data
  100 format(f10.4)
      close (iunit)
      return
      end

meanit.f
------------------------
      subroutine meanit
      include 'data.include'
      include 'sum.include'
      do 100 l = 1, length
        sum = sum + data (l)
  100 continue
      sum = sum / float(length)
      return
      end

printit.f
------------------------
      subroutine printit
      include 'data.include'
      include 'sum.include'
      print *,(l,data(l),l=1,length)
      print *,'average = ',sum
      return
      end

unit.include
------------------------
      parameter (iunit=7)

sum.include
------------------------
      common /avg/ sum

data.include
------------------------
      parameter (length=10)
      common /space/ data(length)
3.10 makefile Examples for the Sample Program

#
# first makefile example
#
average : mainprog.o readit.o meanit.o printit.o
	f77 -o average mainprog.o readit.o meanit.o printit.o
mainprog.o : mainprog.f
	f77 -c mainprog.f
readit.o : readit.f unit.include data.include
	f77 -c readit.f
meanit.o : meanit.f data.include sum.include
	f77 -c meanit.f
printit.o : printit.f data.include sum.include
	f77 -c printit.f

#
# second makefile example
#
average : mainprog.o readit.o meanit.o printit.o
	f77 -o $@ mainprog.o readit.o meanit.o printit.o
mainprog.o : mainprog.f
	f77 -c $<
readit.o : readit.f unit.include data.include
	f77 -c $<
meanit.o : meanit.f data.include sum.include
	f77 -c $*.f
printit.o : printit.f data.include sum.include
	f77 -c $*.f
#
# third makefile example
#
OBJS = mainprog.o readit.o meanit.o printit.o
average : $(OBJS)
	f77 -o $@ $(OBJS)
readit.o : readit.f unit.include data.include
	f77 -c $<
meanit.o : meanit.f data.include sum.include
	f77 -c $<
printit.o : printit.f data.include sum.include
	f77 -c $<

#
# fourth makefile example
#
.f.o:
	rm -f $@
	f77 -c $*.f
OBJS = mainprog.o readit.o meanit.o printit.o
average : $(OBJS)
	f77 -o $@ $(OBJS)
readit.o : unit.include data.include
meanit.o : data.include sum.include
printit.o : data.include sum.include
3.11 Make Command Used in MM5 Preprocessing Programs

The make rules, the defined dependencies (sometimes not the default ones), and the compiler/loader options are defined in Makefiles. The general syntax for the make command is

make "rule1" "rule2"
3.12 An Example of Top-level Makefile

# Top-level Makefile for TERRAIN
#
# Macros, these should be generic for all machines
#
.IGNORE:
AR      = ar ru
CD      = cd
LN      = ln -s
MAKE    = make -i -f Makefile
RM      = /bin/rm -f
RM_LIST = *.o *.f core .tmpfile terrain.exe data_area.exe rdem.exe
NCARGRAPHICS  = NCARG
#NCARGRAPHICS = NONCARG
#
# Targets for supported architectures
#
default:
	uname -a > .tmpfile
	grep CRAY .tmpfile ; \
	if [ $$? = 0 ]; then echo "Compiling for CRAY" ; \
	( $(CD) src ; $(MAKE) all \
	"RM= $(RM)" "RM_LIST= $(RM_LIST)" \
	"LN= $(LN)" "MACH= CRAY" \
	"MAKE= $(MAKE)" "CPP= /opt/ctl/bin/cpp" \
	"CPPFLAGS= -I. -C -P -D$(NCARGRAPHICS) -DRECLENBYTE" \
	"FC= f90" "FCFLAGS= -I." \
	"LDOPTIONS= " "CFLAGS= " \
	"LOCAL_LIBRARIES= -L/usr/local/lib -lncarg -lncarg_gks -lncarg_c -lX11 -lm" ) ; \
	else \
	grep OSF .tmpfile ; \
	if [ $$? = 0 ]; then echo "Compiling for Compaq" ; \
	( $(CD) src ; $(MAKE) all \
	"RM= $(RM)" "RM_LIST= $(RM_LIST)" \
	"LN= $(LN)" "MACH= DEC" \
	"MAKE= $(MAKE)" "CPP= /usr/bin/cpp" \
	"CPPFLAGS= -I. -C -P -D$(NCARGRAPHICS)" \
	"FC= f77" "FCFLAGS= -I. -convert big_endian -fpe" \
	"LDOPTIONS= " "CFLAGS= " \
	"LOCAL_LIBRARIES= -L/usr/local/ncarg/lib -lncarg -lncarg_gks -lncarg_c -lX11 -lm" ) ; \
	else \
	grep IRIX .tmpfile ; \
	if [ $$? = 0 ]; then echo "Compiling for SGI" ; \
	( $(CD) src ; $(MAKE) all \
	"RM= $(RM)" "RM_LIST= $(RM_LIST)" \
	"LN= $(LN)" "MACH= SGI" \
	"MAKE= $(MAKE)" "CPP= /lib/cpp" \
	"CPPFLAGS= -I. -C -P -D$(NCARGRAPHICS)" \
	"FC= f77" "FCFLAGS= -I. -n32" \
	"LDOPTIONS= -n32" "CFLAGS= -I. -n32" \
	"LOCAL_LIBRARIES= -L/usr/local/ncarg/lib -L/usr/local/lib -lncarg -lncarg_gks -lncarg_c -lX11 -lm" ) ; \
	else \
	grep HP .tmpfile ; \
	if [ $$? = 0 ]; then echo "Compiling for HP" ; \
	( $(CD) src ; $(MAKE) all \
	"RM= $(RM)" "RM_LIST= $(RM_LIST)" \
	"LN= $(LN)" "MACH= HP" \
	"MAKE= $(MAKE)" "CPP= /opt/langtools/lbin/cpp" \
	"CPPFLAGS= -I. -C -P -D$(NCARGRAPHICS) -DRECLENBYTE" \
	"FC= f77" "FCFLAGS= -I. -O" \
	"LDOPTIONS= " "CFLAGS= -Aa" \
	"LOCAL_LIBRARIES= -L/usr/local/ncarg/lib -L/usr/local/lib -lncarg -lncarg_gks -lncarg_c -lX11 -lm" ) ; \
	else \
	grep SUN .tmpfile ; \
	if [ $$? = 0 ]; then echo "Compiling for SUN" ; \
	( $(CD) src ; $(MAKE) all \
	"RM= $(RM)" "RM_LIST= $(RM_LIST)" \
	"LN= $(LN)" "MACH= SUN" \
	"MAKE= $(MAKE)" "CPP= /usr/ccs/lib/cpp" \
	"CPPFLAGS= -I. -C -P -D$(NCARGRAPHICS) -DRECLENBYTE" \
	"FC= f77" "FCFLAGS= -I." \
	"LDOPTIONS= " "CFLAGS= -I." \
	"LOCAL_LIBRARIES= -L/usr/local/ncarg/lib -L/usr/openwin/lib -L/usr/dt/lib -lncarg -lncarg_gks -lncarg_c -lX11 -lm" ) ; \
	else \
	grep AIX .tmpfile ; \
	if [ $$? = 0 ]; then echo "Compiling for IBM" ; \
	( $(CD) src ; $(MAKE) all \
	"RM= $(RM)" "RM_LIST= $(RM_LIST)" \
	"LN= $(LN)" "MACH= IBM" \
	"MAKE= $(MAKE)" "CPP= /usr/lib/cpp" \
	"CPPFLAGS= -I. -C -P -D$(NCARGRAPHICS) -DRECLENBYTE" \
	"FC= xlf" "FCFLAGS= -I. -O -qmaxmem=-1" \
	"LDOPTIONS= " "CFLAGS= -I." \
	"LOCAL_LIBRARIES= -L/usr/local/lib32/r4i4 -lncarg -lncarg_gks -lncarg_c -lX11 -lm" ) ; \
	fi ; \
	fi ; \
	fi ; \
	fi ; \
	fi ; \
	fi ; \
	( $(RM) terrain.exe ; $(LN) src/terrain.exe . ) ;

terrain.deck:
	uname -a > .tmpfile
	grep OSF .tmpfile ; \
	if [ $$? = 0 ]; then \
	echo "Making terrain deck for Compaq" ; \
	( cp Templates/terrain.deck.dec terrain.deck ) ; \
	else \
	grep CRAY .tmpfile ; \
	if [ $$? = 0 ]; then \
	echo "Making terrain deck for CRAY" ; \
	( cp Templates/terrain.deck.cray terrain.deck ) ; \
	else \
	grep IRIX .tmpfile ; \
	if [ $$? = 0 ]; then \
	echo "Making terrain deck for SGI" ; \
	( cp Templates/terrain.deck.sgi terrain.deck ) ; \
	else \
	grep HP .tmpfile ; \
	if [ $$? = 0 ]; then \
	echo "Making terrain deck for HP" ; \
	( cp Templates/terrain.deck.hp terrain.deck ) ; \
	else \
	grep SUN .tmpfile ; \
	if [ $$? = 0 ]; then \
	echo "Making terrain deck for SUN" ; \
	( cp Templates/terrain.deck.sun terrain.deck ) ; \
	else \
	grep AIX .tmpfile ; \
	if [ $$? = 0 ]; then \
	echo "Making terrain deck for IBM" ; \
	( cp Templates/terrain.deck.ibm terrain.deck ) ; \
	fi ; \
	fi ; \
	fi ; \
	fi ; \
	fi ; \
	fi ;

code:
	( $(CD) src ; $(MAKE) code \
	"MAKE= $(MAKE)" \
	"CPP= /usr/bin/cpp" \
	"CPPFLAGS= -I. -C -P -DDEC" )

clean:
	( $(CD) src ; $(MAKE) clean "CD = $(CD)" "RM = $(RM)" "RM_LIST = $(RM_LIST)" )
	$(RM) $(RM_LIST)
3.13 An Example of Low-level Makefile

# Lower level Makefile for TERRAIN
#
# Suffix rules and commands
#######################
FIX01 =
#######################
.IGNORE:
.SUFFIXES: .F .f .i .o

.F.o:
	$(RM) $@
	$(CPP) $(CPPFLAGS) -D$(MACH) $(FIX01) $*.F > $*.f
	$(FC) -c $(FCFLAGS) $*.f
	$(RM) $*.f
.F.f:
	$(CPP) $(CPPFLAGS) -D$(MACH) $(FIX01) $*.F > $@
.f.o:
	$(RM) $@
	$(FC) -c $(FCFLAGS) $(FIX01) $*.f

OBJS = ia.o anal2.o bint.o bndry.o crlnd.o crter.o dfclrs.o exaint.o \
	finprt.o fudger.o interp.o label.o lakes.o \
	latlon.o llxy.o mxmnll.o nestll.o oned.o \
	outpt.o output.o pltter.o rdldtr.o replace.o rflp.o setup.o sint.o \
	smth121.o smther.o smthtr.o terdrv.o terrain.o tfudge.o vtran.o \
	xyobsll.o hiresmap.o plots.o crvst.o \
	crvst30s.o nestbdy.o crsoil.o equate.o labels.o labelv.o patch.o \
	plotcon.o watercheck.o crlwmsk.o soil_tg.o water_vfr.o check_data.o \
	terrestial_info.o write_fieldrec.o
SRC  = $(OBJS:.o=.f)

cray dec hp ibm sgi sun default:
	@echo "you need to be up a directory to make terrain.exe"

all::	terrain.exe data_area.exe rdem.exe

terrain.exe: $(OBJS)
	$(FC) -o $@ $(LDOPTIONS) $(OBJS) $(LOCAL_LIBRARIES)

code:	$(SRC)
#
# for preprocessor 1
#
OBJS1 = latlon.o llxy.o mxmnll.o nestll.o rflp.o setup.o outpt.o vtran.o \
	search.o data30s.o data_area.o
SRC1  = $(OBJS1:.o=.i)

data_area.exe: $(OBJS1)
	$(RM) $@
	$(FC) -o $@ $(OBJS1) $(LDOPTIONS) $(LOCAL_LIBRARIES) $(LDLIBS)

code1:	$(SRC1)

#
# for preprocessor 2
#
OBJS2 = cr30sdata.o read30s.o rdem.o ia.o
SRC2  = $(OBJS2:.o=.i)

rdem.exe: $(OBJS2)
	$(RM) $@
	$(FC) -o $@ $(OBJS2) $(LDOPTIONS) $(LOCAL_LIBRARIES) $(LDLIBS)

code2:	$(SRC2)
# -------------------------------------------------------------------------
# DO NOT DELETE THIS LINE -- make depend depends on it.
anal2.o: parame.incl nestdmn.incl
bndry.o: maps.incl option.incl
crlnd.o: parame.incl paramed.incl ltdata.incl fudge.incl option.incl
crlnd.o: maps.incl nestdmn.incl trfudge.incl ezwater.incl
crlwmsk.o: parame.incl paramesv.incl paramed.incl maps.incl nestdmn.incl
crlwmsk.o: ltdata.incl
crsoil.o: parame.incl paramesv.incl paramed.incl ltdata.incl
crter.o: parame.incl paramed.incl nestdmn.incl option.incl ltdata.incl
crvst.o: parame.incl paramed.incl ltdata.incl
crvst30s.o: parame.incl paramed.incl nestdmn.incl maps.incl ltdata.incl
data_area.o: parame.incl maps.incl nestdmn.incl ltdata.incl
exaint.o: parame.incl
finprt.o: option.incl parame.incl paramesv.incl headerv3.incl
interp.o: option.incl ltdata.incl
labels.o: paramesv.incl vs_cmn2.incl
labelv.o: paramesv.incl vs_cmn2.incl
latlon.o: maps.incl option.incl
llxy.o: maps.incl
mxmnll.o: parame.incl maps.incl option.incl
nestbdy.o: parame.incl
nestll.o: option.incl
output.o: option.incl paramesv.incl ltdata.incl headerv3.incl nestdmn.incl
output.o: maps.incl namelist.incl vs_cmn2.incl vs_cmn1.incl
pltter.o: parame.incl maps.incl nestdmn.incl option.incl paramesv.incl
pltter.o: vs_cmn1.incl vs_cmn2.incl
rdldtr.o: paramed.incl paramesv.incl space.incl
replace.o: parame.incl option.incl paramesv.incl vs_cmn1.incl maps.incl
replace.o: nestdmn.incl
rflp.o: maps.incl
search.o: parame.incl maps.incl nestdmn.incl ltdata.incl option.incl
setup.o: ezwater.incl parame.incl paramesv.incl maps.incl nestdmn.incl
setup.o: fudge.incl trfudge.incl option.incl ltdata.incl namelist.incl
setup.o: vs_cmn1.incl vs_cmn2.incl vs_data.incl
sint.o: parame.incl
smth121.o: parame.incl
smthtr.o: parame.incl
terdrv.o: paramed.incl parame.incl paramesv.incl maps.incl nestdmn.incl
terdrv.o: option.incl ltdata.incl trfudge.incl space.incl vs_cmn1.incl
terdrv.o: vs_cmn2.incl
terrain.o: parame.incl paramesv.incl maps.incl nestdmn.incl option.incl
terrain.o: ezwater.incl
terrestial_info.o: maps.incl
tfudge.o: parame.incl paramesv.incl vs_cmn1.incl maps.incl nestdmn.incl
vtran.o: parame.incl
xyobsll.o: maps.incl

clean:
	$(RM) $(RM_LIST)
4 TERRAIN
Purpose 4-3
Tasks of TERRAIN 4-3
Overview of TERRAIN 4-4
Input Data 4-4
Source Data 4-4
Data Format 4-5
Input Data Sources and File Sizes 4-7
Data Information 4-12
Lists of Landuse/Vegetation and Soil Categories 4-12
Defining Mesoscale Domains 4-16
Interpolation 4-19
Overlapping parabolic interpolation 4-19
Cressman-type objective analysis 4-21
Adjustment 4-22
Reset the nested domain boundary values 4-22
Feedback 4-23
Fudging function 4-23
Water body correction 4-23
Land-use fudge 4-23
Script Variables 4-24
Parameter statement 4-24
Namelist Options 4-24
MAPBG: Map Background Options 4-24
DOMAINS: Domain Setting Options 4-24
OPTN: Function Options 4-25
Land-use Fudging Options (used when IFFUDG=T) 4-26
Skip the EZFUDGE over the boxes (used when IFTFUG=T) 4-26
Heights of water bodies 4-26
How to run TERRAIN 4-26
TERRAIN Didn't Work: What Went Wrong? 4-28
TERRAIN Files and Unit Numbers 4-29
TERRAIN tar File 4-30
terrain.deck 4-31
4.1 Purpose

The program that begins any complete forecast simulation in the MM5 modeling system is TERRAIN (Fig. 1.1). This program horizontally interpolates (or analyzes) regular latitude-longitude terrain elevation and vegetation (land-use) data onto the chosen mesoscale domains (see Fig. 4.1). If the land-surface model (LSM) will be used in the MM5 model, additional fields such as soil types, vegetation fraction, and annual deep soil temperature are also generated.
Figure 4.1

4.1.1 Tasks of TERRAIN

There are essentially two tasks the program TERRAIN performs:
1. Set up the mesoscale domains: coarse and fine grids (except for moving nests);
2. Produce terrestrial data fields for all of the mesoscale domains, which will first be used by
REGRID, and later (optionally) by MM5 and NESTDOWN. The program also computes a few constant fields required by the modeling system: latitude and longitude, map-scale factors, and the Coriolis parameter.

4.1.2 Overview of TERRAIN

The TERRAIN program is composed of four parts (Fig. 4.2):
1. Source data input;
2. Interpolation from lat/lon source data to the mesoscale grid;
3. Nest interface adjustment and feedback; and
4. Output of terrain elevation, land use and other terrestrial data in MM5 format.
Figure 4.2 Schematic of the TERRAIN program: input (read source data over the search area), interpolation (overlapping bi-parabolic interpolation or Cressman-type analysis), adjustment (boundary blending and feedback), and output (print, plots, binary files).
4.2 Input Data

4.2.1 Source Data

The data available as input to the program TERRAIN include terrain elevation, land-use/vegetation, land-water mask, soil types, vegetation fraction and deep soil temperature. Most data are available at six resolutions: 1 degree, 30, 10, 5 and 2 minutes, and 30 seconds. Here is the list of
available data:

1. Elevation data at six resolutions from USGS: 1-degree, 30-, 10-, 5-, 2-minute (5 files) and 30-second (33 tiles directly from USGS). All lower-resolution data (1 degree to 2 minutes) are created from the 30-second USGS data.

2. Three types of source vegetation/land-use data are available:
(a) 13-category, global coverage, with resolutions of 1 degree, 30 and 10 minutes (3 files);
(b) 17-category, North-American coverage, with resolutions of 1 degree, 30, 10, 5 and 2 minutes, and 30 seconds (6 files);
(c) 25-category, global coverage, with resolutions of 1 degree, 30, 10, 5 and 2 minutes, and 30 seconds (6 files; all lower-resolution data are created from the 30-second USGS version 2 land cover data).

3. Two types of land-water mask data:
(a) 17-category, North-American coverage, with resolutions of 1 degree, 30, 10, 5 and 2 minutes, and 30 seconds (6 files);
(b) 25-category, global coverage, with resolutions of 1 degree, 30, 10, 5 and 2 minutes, and 30 seconds (6 files).

4. For the LSM option in MM5, soil, vegetation fraction, and annual deep soil temperature data are needed. The source data files are:
(a) 17-category, six resolutions of global soil data (6 files);
(b) 12 monthly, 10-minute, global vegetation fraction data (1 file);
(c) 1-degree, global annual deep soil temperature data (1 file).

More description of the data is available in Section 4.2.3.

4.2.2 Data Format

Since the original data come from different sources, they have different formats and layouts. These data sets are translated to a standard format used by the TERRAIN program. The data arrangement and format in the reformatted data files are as follows:

• Latitude by latitude from north to south; within one latitude, the data points are arranged from west to east, usually starting from 0 degrees longitude (or the dateline).
• Two-character arrays are used to store the elevation and deep soil temperature data (maximum value < 2^15 = 32768) (Fig. 4.3), and one-character arrays store all other data (values < 100) (Fig. 4.4).
• All source data files are direct-access, which makes data reading efficient.
• All data are assumed to be valid at the center of a grid box. Hence there are 360x180 data points for the 1-degree data, (360x2)x(180x2) for the 30-minute data, (360x120)x(180x120) data points for the 30-second data, and so on.
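The two-character packing described above can be illustrated with a short sketch. The high-byte-first ordering and the helper name below are assumptions for illustration, not code from TERRAIN:

```python
# Sketch: decoding one record (one latitude row) of a reformatted
# two-character data file. Assumption: each value occupies two bytes,
# high byte first, so values stay below 2**15 = 32768.

def decode_record(raw: bytes) -> list:
    """Turn a record of 2-byte values into a list of integers."""
    return [(raw[i] << 8) | raw[i + 1] for i in range(0, len(raw), 2)]

# A 1-degree latitude row has 360 points, i.e. 720 bytes per record.
row = bytes([0x01, 0x2C] * 360)      # every point packed as the value 300
heights = decode_record(row)         # 360 values, all equal to 300
```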
Figure 4.3 Layout of a reformatted two-character data file: a direct-access file of (number of records) x (number of points) two-character values, starting at (xlati, xloni).

Figure 4.4 Layout of a reformatted one-character data file (same arrangement as Fig. 4.3).
4.2.3 Input Data Sources and File Sizes

• Elevation:

Table 4.1a Terrain Height Data

Resolution                  Data source*                          Coverage     Size (bytes)
1 deg. (111.0 km)           USGS                                  Global       129,600
30 min. (55.0 km)           USGS                                  Global       518,400
10 min. (18.5 km)           USGS                                  Global       4,665,600
5 min. (9.25 km)            USGS                                  Global       18,662,400
2 min. (3.70 km)            USGS                                  Global       116,640,000
Tiled 30 sec. (0.925 km)**  GTOPO30 (USGS EROS Data Center,       Global       57,600,000 or
                            late 1996); 33 tiles of 40 deg lon.                51,840,000
                            x 50 deg lat. or 60 deg lon. x                     for each tile
                            30 deg lat.
30 sec. (0.925 km)          USGS                                  Global***    1,866,240,000

* Except for the tiled 30-sec. data (GTOPO30), the data reconstruction from the original source data was completed separately, prior to TERRAIN. All lower-resolution elevation datasets are created from the USGS global 30-second dataset since Version 3.4.
** For details of the GTOPO30 data, see http://www.scd.ucar.edu/dss/datasets/ds758.0.html. The tiled 30-second elevation data are available from the USGS EROS Data Center's anonymous ftp site edcftp.cr.usgs.gov under the directory /pub/data/gtopo30/global.
*** This single-tile global 30-second file is available through a request to mesouser, or on MSS: /MESOUSER/MM5V3/DATA/SINGLE-TILE-GLOBAL-30S-ELEVATION.gz. The data reconstruction for the 30-second data is included in ftp30s.csh, which is used by the TERRAIN job deck. The reconstruction procedure contains three steps: (1) determine which tiles of the elevation data are needed, based on the information in the namelist (data_area.exe); (2) fetch the data from the ftp site (or the MSS if one runs at NCAR) (dem_read); (3) reconstruct the data in TERRAIN standard input format from the tiled data and provide the necessary information to TERRAIN (rdem.exe). The outputs are new_30sdata and new_30sdata_info, located in the Data/ directory.
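The sizes in Table 4.1a can be cross-checked from the grid-box-center convention of Section 4.2.2: a global elevation file at resolution d degrees holds (360/d) x (180/d) two-byte values. A small sketch (the function name is illustrative):

```python
# Global elevation file size: (360/d) x (180/d) points, 2 bytes each.
def elevation_file_bytes(res_deg: float) -> int:
    nx = round(360 / res_deg)        # points per latitude row
    ny = round(180 / res_deg)        # number of rows (records)
    return nx * ny * 2

assert elevation_file_bytes(1.0) == 129_600              # 1-degree file
assert elevation_file_bytes(0.5) == 518_400              # 30-minute file
assert elevation_file_bytes(1 / 120) == 1_866_240_000    # 30-second file
```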
• Vegetation/Land-use

(1) Global 13-category data from the PSU/NCAR tape

Table 4.1b PSU/NCAR Land-use Data

Resolution           Data source   Coverage   Size (bytes)
1 deg. (111.0 km)    PSU/NCAR      Global     842,400
30 min. (55.0 km)    PSU/NCAR      Global     3,369,600
10 min. (18.5 km)    PSU/NCAR      Global     30,326,400

The 13 categories are listed in Table 4.2a. The data are represented by 13 percentage values for the 13 categories at each lat/lon grid point.

(2) North-American 17-category data used by the Simple Biosphere (SiB) model (from USGS)
Table 4.1c 17-category SiB Vegetation Data

Resolution           Data source              Coverage             Size (bytes)
1 deg. (111.0 km)    Simple Biosphere model   0°-90°N, 60°-180°W   183,600
30 min. (55.0 km)    Simple Biosphere model   0°-90°N, 60°-180°W   734,400
10 min. (18.5 km)    Simple Biosphere model   0°-90°N, 60°-180°W   6,609,600
5 min. (9.25 km)     Simple Biosphere model   0°-90°N, 60°-180°W   26,438,400
2 min. (3.70 km)     Simple Biosphere model   0°-90°N, 60°-180°W   165,240,000
30 sec. (0.925 km)   Simple Biosphere model   0°-90°N, 60°-180°W   155,520,000

The 17 categories are listed in Table 4.2b. The 30-sec data are represented by one category-ID number at each lat/lon grid point. The lower-resolution (1-deg, 30-, 10-, 5- and 2-min) data are derived from the 30-sec data, and are represented by 17 percentage values for the 17 categories at each lat/lon grid point.
(3) Global 25-category data from the U.S. Geological Survey (USGS)

Table 4.1d 25-category USGS Vegetation Data

Resolution           Data source   Coverage   Size (bytes)
1 deg. (111.0 km)    USGS          Global     1,620,000
30 min. (55.0 km)    USGS          Global     6,480,000
10 min. (18.5 km)    USGS          Global     58,320,000
5 min. (9.25 km)     USGS          Global     233,280,000
2 min. (3.70 km)     USGS          Global     1,458,000,000
30 sec. (0.925 km)   USGS          Global     933,120,000

The 25 categories are listed in Table 4.2c. The 30-sec data are represented by one category-ID number at each lat/lon grid point. The lower-resolution (1-deg, 30-, 10-, 5- and 2-min) data are derived from the 30-sec data, and are represented by 25 percentage values for the 25 categories at each lat/lon grid point.

• Land-water mask

(1) North-American land-water mask files derived from the SiB vegetation data
Table 4.1e SiB Land-Water Mask Data

Resolution           Data source      Coverage             Size (bytes)
1 deg. (111.0 km)    SiB Vegetation   0°-90°N, 60°-180°W   10,800
30 min. (55.0 km)    SiB Vegetation   0°-90°N, 60°-180°W   43,200
10 min. (18.5 km)    SiB Vegetation   0°-90°N, 60°-180°W   388,800
5 min. (9.25 km)     SiB Vegetation   0°-90°N, 60°-180°W   1,555,200
2 min. (3.70 km)     SiB Vegetation   0°-90°N, 60°-180°W   9,720,000
30 sec. (0.925 km)   SiB Vegetation   0°-90°N, 60°-180°W   155,520,000

The SiB land-water mask data files are derived from the SiB vegetation data files. At each lat/lon grid point there is one number indicating land (1), water (0), or missing data (-1) at that point.
(2) Global land-water mask files derived from the USGS vegetation data

Table 4.1f USGS Land-Water Mask Data

Resolution           Data source       Coverage   Size (bytes)
1 deg. (111.0 km)    USGS Vegetation   Global     64,800
30 min. (55.0 km)    USGS Vegetation   Global     259,200
10 min. (18.5 km)    USGS Vegetation   Global     2,332,800
5 min. (9.25 km)     USGS Vegetation   Global     9,331,200
2 min. (3.70 km)     USGS Vegetation   Global     58,320,000
30 sec. (0.925 km)   USGS Vegetation   Global     933,120,000

The land-water mask data files are derived from the USGS vegetation data files. At each lat/lon grid point there is one number indicating land (1), water (0), or missing data (-1) at that point.
• Soil

Table 4.1g Global 17-category Soil Data

Resolution           Data source*   Coverage   Size (bytes)
1 deg. (111.0 km)    FAO+STATSGO    Global     1,101,600
30 min. (55.0 km)    FAO+STATSGO    Global     4,406,400
10 min. (18.5 km)    FAO+STATSGO    Global     39,657,600
5 min. (9.25 km)     FAO+STATSGO    Global     158,630,400
2 min. (3.70 km)     FAO+STATSGO    Global     991,440,000
30 sec. (0.925 km)   FAO+STATSGO    Global     933,120,000

* The 17-category global soil data files are generated as follows:
(1) the global 5-minute United Nations FAO soil data are converted to 17-category data, using the same categories as the STATSGO data (available since V3.5);
(2) North-American STATSGO 30-sec soil data are used;
(3) global high-resolution soil data are produced from the 5-min FAO data;
(4) North-American lower-resolution (1-deg, 30-, 10-, 5- and 2-min) soil data are derived from the 30-sec North-American soil data;
(5) the FAO and STATSGO data are combined for each of the resolutions;
(6) both top soil layer (0 - 30 cm) and bottom soil layer (30 - 100 cm) data are provided. Obtaining a particular dataset can be set in terrain.deck.
The 17 categories are listed in Table 4.2d. Similar to the vegetation data, the 30-sec data are represented by one category-ID number at each lat/lon grid point, and the lower-resolution (1-deg, 30-, 10-, 5- and 2-min) data are represented by 17 percentage values for the 17 categories at each lat/lon grid point.
• Vegetation fraction

Table 4.1h Global Monthly Vegetation Fraction Data

Resolution          Data source   Coverage   Size (bytes)
10 min. (18.5 km)   AVHRR         Global     27,993,600

The original 10-min vegetation fraction data contain 12 percentage values for the 12 months at each lat/lon grid point, but cover only 55°S to 75°N. To give the data file global coverage, a vegetation fraction of zero was assigned over the high-latitude areas.
• Soil temperature

Table 4.1i Global Annual Deep Soil Temperature Data

Resolution          Data source      Coverage   Size (bytes)
1 deg. (111.0 km)   ECMWF analysis   Global     129,600
The resolution of the 1-deg annual deep soil temperature data is rather low. For some grid points located on small islands in the ocean, the deep soil temperature cannot be obtained by interpolation from this source dataset. In that case, an annual deep soil temperature Tg is assigned based on the latitude of the point, ϕ:

    Tg = C0 + C1 sin(A) + C2 cos(A)

where

    A = 0.5 × 3.1415926 × (89.5 − ϕ) / 89.5

and C0 = 242.06, C1 = 59.736, C2 = 1.9445.
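The latitude-based fallback can be written directly from the formula above (a sketch; temperatures are in kelvin):

```python
import math

# Annual deep soil temperature fallback for isolated island points,
# following the latitude-based formula in Section 4.2.3.
C0, C1, C2 = 242.06, 59.736, 1.9445

def deep_soil_temp(lat_deg: float) -> float:
    a = 0.5 * 3.1415926 * (89.5 - lat_deg) / 89.5
    return C0 + C1 * math.sin(a) + C2 * math.cos(a)

# At the equator the formula gives roughly 302 K; near the north pole
# it falls to about 244 K.
```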
4.2.4 Data Information

If a user has different source data, the data must be translated to the above standard format in a direct-access file. In addition, the following information should be provided to the TERRAIN program through a DATA statement in setup.F, or in vs_data.incl and paramesv.incl:

• Number of categories
• ID number of the water category
• Data resolution in degrees
• Initial latitude and longitude
• Total number of records (latitudes)
• The number of data points (longitudes) in a latitude
• File name to be linked to the Fortran unit number
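Taken together, this information amounts to a small metadata record per dataset. The sketch below shows the seven items for a hypothetical 5-minute, 25-category dataset; the dictionary keys and values are illustrative only, since the real entries go into the Fortran DATA statements:

```python
# Hypothetical metadata for a user-supplied 5-minute, 25-category dataset.
my_dataset = {
    "n_categories": 25,            # number of categories
    "water_category_id": 16,       # ID number of the water category
    "resolution_deg": 1 / 12,      # 5 minutes = 1/12 degree
    "start_lat": 90.0,             # initial latitude
    "start_lon": 0.0,              # initial longitude
    "n_records": 180 * 12,         # total number of records (latitudes)
    "points_per_record": 360 * 12, # data points (longitudes) per latitude
}
```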
Note: (1) If your own data contain missing data, you must provide the missing value and modify the interpolation subroutine INTERP or ANAL2 to process missing values. (2) For plotting the map of vegetation and soil, one may need to modify the existing color tables, especially if the number of categories has been changed.

4.2.5 Lists of Landuse/Vegetation and Soil Categories

Table 4.2a Description of 13-category (PSU/NCAR) land-use categories and physical parameters for N.H. summer (15 April - 15 October) and winter (15 October - 15 April). Columns give summer (Sum) and winter (Win) values of albedo (%), moisture availability (%), emissivity (% at 9 µm), roughness length (cm), and thermal inertia (cal cm-2 K-1 s-1/2).

                                      Albedo    Moisture  Emissivity Roughness Thermal Inertia
ID  Landuse Description               Sum  Win  Sum  Win  Sum  Win   Sum  Win  Sum   Win
 1  Urban land                        18   18   5    10   88   88    50   50   0.03  0.03
 2  Agriculture                       17   23   30   60   92   92    15   5    0.04  0.04
 3  Range-grassland                   19   23   15   30   92   92    12   10   0.03  0.04
 4  Deciduous forest                  16   17   30   60   93   93    50   50   0.04  0.05
 5  Coniferous forest                 12   12   30   60   95   95    50   50   0.04  0.05
 6  Mixed forest and wet land         14   14   35   70   95   95    40   40   0.05  0.06
 7  Water                             8    8    100  100  98   98    .01  .01  0.06  0.06
 8  Marsh or wet land                 14   14   50   75   95   95    20   20   0.06  0.06
 9  Desert                            25   25   2    5    85   85    10   10   0.02  0.02
10  Tundra                            15   70   50   90   92   92    10   10   0.05  0.05
11  Permanent ice                     80   82   95   95   95   95    0.01 0.01 0.05  0.05
12  Tropical or sub tropical forest   12   12   50   50   95   95    50   50   0.05  0.05
13  Savannah                          20   20   15   15   92   92    15   15   0.03  0.03
Table 4.2b Description of 17-category (SiB) vegetation categories and physical parameters for N.H. summer (15 April - 15 October) and winter (15 October - 15 April). Columns as in Table 4.2a.

                            Albedo    Moisture  Emissivity Roughness Thermal Inertia
ID  Vegetation Description  Sum  Win  Sum  Win  Sum  Win   Sum  Win  Sum   Win
 1  Evergrn. Broadlf.       12   12   50   50   95   95    50   50   0.05  0.05
 2  Broadlf. Decids.        16   17   30   60   93   93    50   50   0.04  0.05
 3  Decids. Evergrn.        14   14   35   70   95   95    40   40   0.05  0.06
 4  Evergrn. Needlf.        12   12   30   60   95   95    50   50   0.04  0.05
 5  Decids. Needlf.         16   17   30   60   93   93    50   50   0.04  0.05
 6  Grnd. Tree Shrb.        20   20   15   15   92   92    15   15   0.03  0.03
 7  Ground only             19   23   15   30   92   92    12   10   0.03  0.04
 8  Broadlf. Shrb.P.G.      19   23   15   30   92   92    12   10   0.03  0.04
 9  Broadlf. Shrb.B.S.      19   23   15   30   92   92    12   10   0.03  0.04
10  Grndcvr. DT. Shrb       15   70   50   90   92   92    10   10   0.05  0.05
11  Bare Soil               25   25   2    5    85   85    10   10   0.02  0.02
12  Agricltr. or C3 Grs     17   23   30   60   92   92    15   5    0.04  0.04
13  Perst. Wetland          14   14   50   75   95   95    20   20   0.06  0.06
14  Dry Coast Cmplx         19   23   15   30   92   92    12   10   0.03  0.04
15  Water                   8    8    100  100  98   98    .01  .01  0.06  0.06
16  Ice cap & Glacier       80   82   95   95   95   95    5    5    0.05  0.05
17  No data
Table 4.2c Description of 25-category (USGS) vegetation categories and physical parameters for N.H. summer (15 April - 15 October) and winter (15 October - 15 April). Columns as in Table 4.2a.

                            Albedo    Moisture  Emissivity Roughness Thermal Inertia
ID  Vegetation Description  Sum  Win  Sum  Win  Sum   Win  Sum  Win  Sum   Win
 1  Urban                   15   15   10   10   88    88   80   80   0.03  0.03
 2  Drylnd Crop. Past.      17   23   30   60   98.5  92   15   5    0.04  0.04
 3  Irrg. Crop. Past.       18   23   50   50   98.5  92   15   5    0.04  0.04
 4  Mix. Dry/Irrg.C.P.      18   23   25   50   98.5  92   15   5    0.04  0.04
 5  Crop./Grs. Mosaic       18   23   25   40   99    92   14   5    0.04  0.04
 6  Crop./Wood Mosc         16   20   35   60   98.5  93   20   20   0.04  0.04
 7  Grassland               19   23   15   30   98.5  92   12   10   0.03  0.04
 8  Shrubland               22   25   10   20   88    88   10   10   0.03  0.04
 9  Mix Shrb./Grs.          20   24   15   25   90    90   11   10   0.03  0.04
10  Savanna                 20   20   15   15   92    92   15   15   0.03  0.03
11  Decids. Broadlf.        16   17   30   60   93    93   50   50   0.04  0.05
12  Decids. Needlf.         14   15   30   60   94    93   50   50   0.04  0.05
13  Evergrn. Broadlf.       12   12   50   50   95    95   50   50   0.05  0.05
14  Evergrn. Needlf.        12   12   30   60   95    95   50   50   0.04  0.05
15  Mixed Forest            13   14   30   60   94    94   50   50   0.04  0.06
16  Water Bodies            8    8    100  100  98    98   .01  .01  0.06  0.06
17  Herb. Wetland           14   14   60   75   95    95   20   20   0.06  0.06
18  Wooded wetland          14   14   35   70   95    95   40   40   0.05  0.06
19  Bar. Sparse Veg.        25   25   2    5    85    85   10   10   0.02  0.02
20  Herb. Tundra            15   60   50   90   92    92   10   10   0.05  0.05
21  Wooden Tundra           15   50   50   90   93    93   30   30   0.05  0.05
22  Mixed Tundra            15   55   50   90   92    92   15   15   0.05  0.05
23  Bare Grnd. Tundra       25   70   2    95   85    95   10   5    0.02  0.05
24  Snow or Ice             55   70   95   95   95    95   5    5    0.05  0.05
25  No data
Table 4.2d Description of 17-category soil categories and physical parameters. Columns: maximum (saturation) soil moisture content, reference soil moisture content, wilting-point soil moisture, air-dry soil moisture, saturation soil potential, saturation soil conductivity (10-6), B parameter, saturation soil diffusivity (10-6), and soil diffusivity/conductivity coefficient.

                        Max     Ref     Wilt    Airdry  Sat     Sat       B       Sat      Diffu./
ID  Soil Description    moist   moist   point   moist   potent  conduct.  param.  diffus.  condu.
 1  Sand                0.339   0.236   0.010   0.010   0.069   1.07      2.79    0.608    -0.472
 2  Loamy Sand          0.421   0.283   0.028   0.028   0.036   14.10     4.26    5.14     -1.044
 3  Sandy Loam          0.434   0.312   0.047   0.047   0.141   5.23      4.74    8.05     -0.569
 4  Silt Loam           0.476   0.360   0.084   0.084   0.759   2.81      5.33    23.9      0.162
 5  Silt                0.476   0.360   0.084   0.084   0.759   2.81      5.33    23.9      0.162
 6  Loam                0.439   0.329   0.066   0.066   0.355   3.38      5.25    14.3     -0.327
 7  Sandy Clay Loam     0.404   0.314   0.067   0.067   0.135   4.45      6.66    9.90     -1.491
 8  Silty Clay Loam     0.464   0.387   0.120   0.120   0.617   2.04      8.72    23.7     -1.118
 9  Clay Loam           0.465   0.382   0.103   0.103   0.263   2.45      8.17    11.3     -1.297
10  Sandy Clay          0.406   0.338   0.100   0.100   0.098   7.22      10.73   18.7     -3.209
11  Silty Clay          0.468   0.404   0.126   0.126   0.324   1.34      10.39   9.64     -1.916
12  Clay                0.468   0.412   0.138   0.138   0.468   0.974     11.55   11.2     -2.138
13  Organic Materials   0.439   0.329   0.066   0.066   0.355   3.38      5.25    14.3     -0.327
14  Water               1.0     0.0     0.0     0.0     0.0     0.0       0.0     0.0       0.0
15  Bedrock             0.200   0.108   0.006   0.006   0.069   141.0     2.79    136.0    -1.111
16  Other               0.421   0.283   0.028   0.028   0.036   14.10     4.26    5.14     -1.044
17  No data
4.3 Defining Mesoscale Domains

There are a number of key parameters a user must specify in order to define mesoscale domains. These are:

• Map projection; three types are available:
  - Lambert conformal
  - Polar stereographic
  - Mercator

• Coarse domain parameters:
  - Central latitude and longitude
  - Expanded domain information (useful for objective analysis)
  - Domain size (number of grid points in each direction; IX is in the Y direction)
  - Grid distance in km

• Nested domain parameters:
  - Location of grid point (1,1) in its mother domain
  - Mother domain ID
  - Domain size (number of grid points in each direction)
  - Grid distance in km (must have a ratio of 3-to-1 with the mother domain for 2-way runs)

The latitudes and longitudes of mesoscale grids should be in the range of

    −90° ≤ ϕ ≤ 90°      (4.1)

    −180° ≤ λ ≤ 180°    (4.2)

There are some restrictions in defining a nest in the program:
- A nest domain must start and end at a coarse domain grid point, whether it is a one-way or two-way nest. This means that for a two-way nest the number of grid points in the nest must satisfy: (number of nest grid points − 1)/3 is an integer.
- A nest must be at least 5 coarse grid points away from the coarse domain boundary. This is necessary to ensure enough data points are available when the coarse-to-fine grid data interpolation is performed for the nest interface adjustment (see below).
- The TERRAIN program cannot be used to generate overlapping nests. Overlapping nests and moving nests can only be dealt with in the MM5 model, which interpolates the data for these nests from the coarse domain data (see Fig. 4.5).
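The first two restrictions are easy to check before running TERRAIN. A sketch of the checks (the function and argument names are illustrative, not TERRAIN namelist variables):

```python
def valid_two_way_nest_size(n_points: int) -> bool:
    """Two-way nest: (number of nest grid points - 1)/3 must be an integer."""
    return n_points > 1 and (n_points - 1) % 3 == 0

def far_enough_from_boundary(start: int, end: int, n_coarse: int) -> bool:
    """Nest edges (in coarse-grid indices) must stay at least 5 coarse grid
    points away from the coarse domain boundary (points 1 and n_coarse)."""
    return start - 1 >= 5 and n_coarse - end >= 5

# A 31-point nest satisfies (31 - 1)/3 = 10, so it is a legal 2-way nest size.
```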
Figure 4.5 Example of a multiple-nest configuration: Domain 1 at level 1; Domains 2, 3 and 4 at level 2; Domain 5 at level 3.
The mesoscale domain information specified by a user in the namelist is used to set up a search area for reading and storing input data in memory. Using this information, the program calculates the maximum and minimum latitude/longitude for the search area. The formulas to calculate the latitude/longitude (λ, ϕ) from mesoscale grid indices (I, J), and vice versa, for the different map projections can be found on pages 10-17 of the documentation "Terrain and Land Use for the Fifth-Generation Penn State/NCAR Mesoscale Modeling System (MM5): Program TERRAIN".

• In most situations, determination of the search area is straightforward.

• In the case of a domain across the dateline, the longitudes at some of the points must be converted prior to the calculation:

    λ = λ − 360°    (4.3)

• In the case of poles inside the domain, determination of the search area is more complicated. Users may refer to pages 22-25 of the documentation of the TERRAIN program (Guo and Chen 1994).
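One common use of the shift in Eq. (4.3) is to map longitudes given in a 0°-360° convention back into the −180° to 180° range so the search-area extremes can be computed consistently; a sketch (the exact convention used by TERRAIN is an assumption here):

```python
# Apply the -360 degree shift of Eq. (4.3) to longitudes beyond 180 E,
# mapping the range [0, 360) onto (-180, 180].
def convert_lon(lon_deg: float) -> float:
    return lon_deg - 360.0 if lon_deg > 180.0 else lon_deg
```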
Figure 4.6 Examples of a domain across the dateline and of a domain with a pole inside it (domains D01-D04).
4.4 Interpolation

4.4.1 Overlapping parabolic interpolation

• Used for terrain height, vegetation/land use, soil, vegetation fraction, and deep soil temperature
• A spherical (latitude-longitude) coordinate for the input data is assumed
• 16-point, 2-dimensional parabolic fit (see pages 81-82 of Guo and Chen 1994)

Figure 4.7 16-point, two-dimensional parabolic interpolation

Figure 4.8 One-dimensional overlapping parabolic interpolation
There are three types of vegetation/land-use data with different numbers of categories (N = 13, 17 or 25; see Tables 4.1b, 4.1c, and 4.1d) and one type of soil data with 17 categories (Table 4.1g) available. At each data point there are N percentage values for the N categories in the source data with 1-deg, 30-, 10-, 5- and 2-min resolution. The overlapping parabolic interpolation method is applied to obtain the percentage for each vegetation/land-use or soil category at the mesoscale grid points. If the water coverage (category 7, 15, or 16 for the 13-, 17- and 25-category vegetation/land-use data, respectively, and category 14 for the 17-category soil data) is more than 50% at a point, the category with the maximum percentage (water) is assigned to that point. If the water coverage is less than 50%, the category with the maximum percentage excluding water is assigned to that point.

When the 30-sec vegetation/land-use and soil source data are used, the overlapping parabolic interpolation method cannot be applied to obtain the percentages at the mesoscale grids, because the source data are represented by category ID numbers. Another algorithm was developed to calculate the percentages at the mesoscale grids. The same rule used for the lower-resolution data above is also used to determine the dominant category at each of the mesoscale grid points.

The overlapping parabolic interpolation method is also applied to obtain values of the monthly vegetation fraction and annual deep soil temperature at the mesoscale grid points. For the vegetation fraction, 12 monthly percentage values are assigned to each mesoscale grid point, and for the annual deep soil temperature there is one value at each grid point. Because the resolution of the deep soil temperature data is rather low (1 deg), its value at some 'land points' cannot be obtained from the interpolation procedure.
To remedy this problem, two steps are taken: (1) a weighted average of the values at neighboring points is assigned to those points; (2) if a temperature still cannot be found (e.g., for small isolated islands), a latitude-based value from the formula in Section 4.2.3 is assigned.

After the mesoscale fields of terrain elevation, vegetation/land-use, soil, and vegetation fraction are produced, the land-water mask data, or the EZFUDGE function (for elevation only), are used to correct the land/water boundaries.

The input vegetation/land-use and soil data to TERRAIN are percentage values (1-deg, 30-, 10-, 5- and 2-min data) or category ID numbers (30-sec data) for the N categories on a latitude/longitude grid; the output from TERRAIN is the dominant category ID number on the mesoscale grid. In the MM5 model without LSM, the dominant vegetation/land-use category ID number is translated into physical parameters of the surface characteristics, such as albedo, moisture availability, emissivity, roughness length, and thermal inertia, as shown in Tables 4.2a-c for the three types of land-use data (provided in the MM5/Run/LANDUSE.TBL file). For the LSM option in MM5, the land properties are defined in the model from the dominant vegetation and soil category ID numbers, and a vegetation fraction field is derived from the model time and the monthly vegetation fraction fields (each assumed valid in the middle of its month) from TERRAIN.
4.4.2 Cressman-type objective analysis

• Used for terrain elevation only
• No first-guess field is used
• Only a single-pass scan is performed

The weighting function is defined as

    W_s = (R^2 - r_s^2) / (R^2 + r_s^2)   for r_s <= R
    W_s = 0                               for r_s > R                     (4.4)

where

    r_s^2 = (I - I_obs)^2 + (J - J_obs)^2                                 (4.5)

and the analyzed terrain height is

    HT(I,J) = [ sum_{s=1}^{SN} W_s * ht_s ] / [ sum_{s=1}^{SN} W_s ]      (4.6)

In the TERRAIN program, both the overlapping parabolic interpolation and the Cressman-type objective analysis are available as interpolation options for terrain elevation. No systematic comparison of the two methods has been performed; they are kept in the current program for historical reasons (both come from the TERRAIN program of the MM4 modeling system). In general, a large radius of influence gives smoother results (with weaker terrain gradients), while a small radius of influence may cause a "no data available" error for certain grid boxes if a low-resolution dataset is used. It is recommended that users choose the source dataset whose resolution is comparable to the grid distance of the given domain.
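Equations (4.4)-(4.6) amount to a single-pass, distance-weighted average. A minimal sketch (illustrative Python, not the model's Fortran):

```python
def cressman_weight(rs2, R):
    # Eq. (4.4): W_s = (R^2 - r_s^2) / (R^2 + r_s^2) for r_s <= R, else 0
    if rs2 <= R * R:
        return (R * R - rs2) / (R * R + rs2)
    return 0.0

def cressman_height(I, J, obs, R):
    """Eqs. (4.5)-(4.6) at grid point (I, J).

    obs is a list of (I_obs, J_obs, ht) source-data points; R is the
    radius of influence in grid units (namelist variable RID).
    """
    num = den = 0.0
    for i_obs, j_obs, ht in obs:
        rs2 = (I - i_obs) ** 2 + (J - j_obs) ** 2   # Eq. (4.5)
        w = cressman_weight(rs2, R)
        num += w * ht
        den += w
    if den == 0.0:
        # Corresponds to the "no data available" error mentioned in the text.
        raise ValueError("no source data within radius R of (%s, %s)" % (I, J))
    return num / den   # Eq. (4.6)
```

A point coinciding with a single source datum gets exactly that height (weight 1); two equidistant data simply average.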
4.5 Adjustment

When MM5 is run as a multiple-nest simulation, each nest domain obtains its lateral boundary conditions from its mother domain during the integration and, in a two-way nested application, feeds its results back to the mother domain. After the terrain height, land-use, and other terrestrial files are produced for each domain, the following procedure must be completed to make those fields consistent between the domains:

• reset the nested domain boundary values, for both 1-way and 2-way applications, and
• feed the nest domain information back to the mother domain, for 2-way applications.

4.5.1 Reset the nested domain boundary values

For both 1-way and 2-way nests, these steps are taken to reset nest boundary values:
1. Interpolate the mother domain's terrain heights to the nest grid using the monotonic interpolation scheme (ratio = 3) or the bi-parabolic interpolation scheme (ratio ≠ 3).
2. For rows and columns 1 to 3 (2-way) or 1 to 4 (1-way) along the nest domain boundaries, replace the terrain heights with the mother domain's values.
3. For rows and columns 4 to 6 (2-way) or 5 to 7 (1-way), blend the nest domain's values with the mother domain's values.
Figure 4.8 Schematic of the nest boundary adjustment. [Figure: within the mother domain, the outer rows/columns of the nest are replaced with the mother domain's values, the next rows/columns blend the nest domain's values with its mother domain's values, and the interior is fed back to the mother domain.]
The user must leave enough space (at least 5 grid points) between the nest's boundary and its mother domain's boundary so that the (high-order) interpolation can be applied. If there is not enough space between the boundaries, the program will stop and issue a warning message.
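The replace-and-blend steps in Section 4.5.1 can be sketched as follows (illustrative Python; the actual blending weights used by TERRAIN are not given in the text, so a simple linear ramp is assumed here):

```python
def reset_nest_boundary(nest, mother, one_way=False):
    """Reset nest boundary terrain heights from the mother domain.

    nest and mother are 2-D lists of equal shape; mother holds the mother
    domain's heights already interpolated to the nest grid. Rows/columns
    1..3 (2-way) or 1..4 (1-way) are replaced outright; the next three
    rows/columns are blended (assumed linear ramp).
    """
    replace = 4 if one_way else 3
    blend = 3
    ni, nj = len(nest), len(nest[0])
    out = [row[:] for row in nest]
    for i in range(ni):
        for j in range(nj):
            # 1-based distance from the nearest nest boundary
            d = min(i, j, ni - 1 - i, nj - 1 - j) + 1
            if d <= replace:
                out[i][j] = mother[i][j]
            elif d <= replace + blend:
                w = (d - replace) / float(blend + 1)   # assumed linear ramp
                out[i][j] = w * nest[i][j] + (1.0 - w) * mother[i][j]
    return out
```

Points deeper than the blend zone are left untouched, so the nest keeps its own high-resolution terrain in the interior.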
4.5.2 Feedback

For the two-way nest application, the interior values of the terrain, land-use, and other terrestrial fields in a nest domain are used to overwrite the mother domain's values. This is necessary to ensure that, at coinciding grid points between the nests, the terrestrial values are identical for all the domains. The feedback is done from the finest nest domain down to the coarsest domain.
4.6 Fudging function

4.6.1 Water body correction

• Based on land-water mask data files

If a user chooses the 24-category land-use data (VEGTYPE = 1), or processes the LSM data (LSMDATA = .T.), or does not use the EZFUDGE function (IFEZFUG = .FALSE.; all namelist-controlled options), the land-water mask files generated from the vegetation data are used to correct the vegetation/land-use and soil categories, the vegetation fraction, and the elevation of water bodies. This is recommended because the vegetation data represent the land mass fairly accurately and, in most cases (e.g., outside the US), have better resolution than the map information from NCAR Graphics.

• Based on the EZMAP utility from NCAR Graphics

NCAR Graphics' mapping utility may be used to identify water bodies. The information from a call to ARGTAI can be used to correct the land-use categories and the elevation of water bodies. When IFEZFUG = .T., spurious inland lakes can be eliminated and the terrain heights are matched better to the coastline. The heights of some of the larger lakes in the US are defined in the namelist record EZFUDGE; a user can define more lakes there. We recommend setting IFEZFUG = .T. to correct possible errors in the source land-use data only if VEGTYPE = 1 and LSMDATA = FALSE. The map data used in NCAR Graphics are rather old: the coastlines for many parts of the world are very coarse and some are even incorrect, so using the land-water mask files generally makes the coastlines more realistic. Using IFEZFUG = .T. may also require more computer memory and CPU time. To skip the EZFUDGE option over specific areas, turn on the switch IFTFUG and specify the LAT/LON boxes in the namelist record FUDGET.

4.6.2 Land-use fudge

After the TERRAIN program finishes, the user should check the results carefully.
Sometimes the program does not generate satisfactory land-use categories at some grid points because of errors in the original dataset, or a user may want to modify the land-use categories for a numerical experiment. TERRAIN gives the user a chance to modify the land-use categories at up to 200 grid points for each domain. In the namelist, the switch IFFUDG = .T. allows point-by-point fudging of the land-use data. The locations (IFUG, JFUG) and land-use values (LNDFUG) are specified in the namelist record FUDGE. After the namelist variables IFFUG, NDFUG, IFUG, JFUG, and LNDFUG are modified, the user needs to run the TERRAIN program again to obtain the corrected land-use output.
4.7 Script Variables

ftpdata        Switch to indicate whether to ftp data (T) or not (F).

Where30sTer    Switch to indicate where the tiled global 30-sec dataset is:
               = ftp: ftp the data; = directory: data have already been
               ftp'ed, untarred, and reside in a local directory.

users          Users inside NCAR set users = MMM; otherwise set users = Others.
               This causes the terrain job script to use a different ftp
               script to ftp data.

BotSoil        Uncomment this line to obtain bottom soil layer (30 - 100 cm) data.
4.8 Parameter statements

parame.incl    Specifies the maximum dimensions (IIMX, JJMX) of any domain
               (expanded or not expanded).

paramed.incl   Specifies the maximum dimensions (ITRH, JTRH) of the arrays
               holding the source data. They depend on the source data
               resolution, map projection, etc.
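For the 30-sec data (120 points per degree), the comments in terrain.deck suggest estimating ITRH and JTRH from the domain's latitude/longitude extent. A quick sketch (illustrative Python; the extent values below are hypothetical inputs a user would read from Data30s/rdem.out):

```python
import math

def estimate_source_dims(min_lat, max_lat, min_lon, max_lon, pts_per_deg=120):
    """Estimate ITRH/JTRH for paramed.incl when using 30-sec data,
    following the deck comments: (XMAXLAT - XMINLAT) * 120 for ITRH
    and (XMAXLON - XMINLON) * 120 for JTRH."""
    itrh = int(math.ceil((max_lat - min_lat) * pts_per_deg))
    jtrh = int(math.ceil((max_lon - min_lon) * pts_per_deg))
    return itrh, jtrh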
4.9 Namelist Options

4.9.1 MAPBG: Map Background Options

PHIC
Central latitude of the coarse domain in degrees North; latitudes in the SH are negative.
XLONC
Central longitude of the coarse domain in degrees East. Longitudes between Greenwich and the Dateline are negative.
IEXP
Logical flag to use the expanded coarse domain (T) or not (F).
AEXP
Approximate expansion (km) of the grid on all sides of the coarse domain.
IPROJ
Map projection: ‘LAMCON’ for Lambert Conformal, ‘POLSTR’ for Polar Stereographic, and ‘MERCAT’ for Mercator.
4.9.2 DOMAINS: Domain Setting Options

MAXNES
The maximum number of domains. The TERRAIN program allows at most 100 domains.
NESTIX
The I(y)-direction dimensions for each of the domains.
NESTJX
The J(x)-direction dimensions for each of the domains.
DIS
The grid distance for each of the domains in km.
NUMNC
The mother domain’s ID number for each of the domains. For the coarse domain, always set NUMNC=1.
NESTI
The I location in its mother domain of the nest domain's lower-left corner point (1,1).

NESTJ
The J location in its mother domain of the nest domain's lower-left corner point (1,1).
RID
The radius of influence in units of grid points, used only for the Cressman-type objective analysis (IFANAL=T).
NTYPE
The source terrain height and land-use data type for each of the domains: 1 = one degree; 2 = 30 min; 3 = 10 min; 4 = 5 min; 5 = 2 min; 6 = 30 sec.
NSTTYP
To indicate the nest type: 1=one way nest; 2=two way nest.
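Section 4.4.2 recommends choosing the source dataset with a resolution comparable to the grid distance of the domain. Using the approximate dataset resolutions listed in terrain.deck, that advice can be sketched as follows (illustrative Python; the selection rule is a reasonable heuristic, not code from TERRAIN):

```python
# Approximate grid spacing (km) of each NTYPE source dataset,
# from the comments in terrain.deck.
NTYPE_RES_KM = {1: 111.0, 2: 56.0, 3: 19.0, 4: 9.0, 5: 4.0, 6: 0.9}

def suggest_ntype(dis_km):
    """Coarsest dataset whose resolution does not exceed the grid
    distance DIS (km); matches the NTYPE choices in the sample deck
    (DIS = 90, 30, 9, 3 -> NTYPE = 2, 3, 4, 6)."""
    fine_enough = [n for n, res in NTYPE_RES_KM.items() if res <= dis_km]
    if not fine_enough:
        return 1   # fall back to the 1-deg data
    return max(fine_enough, key=lambda n: NTYPE_RES_KM[n])
```

This avoids both the "no data available" problem of a too-coarse dataset and the unnecessary download and memory cost of a too-fine one.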
4.9.3 OPTN: Function Options

IFTER
Logical flag: = T, create terrain height and other terrestrial fields; = F, create the map background only.
IFANAL
Interpolation method: .T. -- Cressman type objective analysis; .F. -- Overlapping parabolic interpolation.
ISMTHTR
To choose smoothing method: 1= 1-2-1 smoother; 2= smoother/desmoother.
IFEZFUG
To activate the EZFUDGE function: .T. turns on; .F. is off.
IFFUDG
Need to do land-use fudging (T) or not (F).
IFTFUG
Need to skip the EZFUDGE function over certain areas (T) or not (F).
IPRINTD
Print out the latitude and longitude of the mesoscale grids (T) or not (F).
IPRTHT
Print out all processing fields on the mesoscale grids (T) or not (F).
IPRINT
= 1: much more print output in terrain.print.out. Helpful when an error occurs.
FIN
Contour interval (meter) of terrain height plots.
TRUELAT1
The first true latitude for the map projection. Default value = 91.0 means the standard values will be used for the projections. True lat/long may only be changed for Lambert-Conformal and Polar Stereographic projections.
TRUELAT2
The second true latitude for the map projection. Default value = 91.0 means the standard value will be used for the projection. (Use this for the Lambert conformal projection only.)
IFILL
Plots are color filled (T) or not (F).
LSMDATA
Switch to indicate whether to create vegetation, soil, vegetation fraction, and deep soil temperature files for LSM in MM5.
VEGTYPE
Switch to indicate which vegetation dataset to use: = 0: use old 13-category dataset; = 1: use 24-category USGS dataset; = 2: use 16-category SiB dataset.
VSPLOT
Switch to indicate whether to plot the dominant vegetation, soil, and vegetation fraction (T) or not (F).
IEXTRA
Switch to indicate whether to output and plot the percentage values of vegetation and soil types. Required for ISOIL=3 or Pleim-Xiu LSM option in MM5.
4.9.4 Land-use Fudging Options (used when IFFUDG=T)

IFFUG
To indicate which domains need to be fudged (T) or not (F).
NDFUG
The number of fudge points for each of the domains. The maximum NDFUG is 200; that is, a user can fudge at most 200 land-use points in each domain.
IFUG
The I location of the fudge points for each of the domains. IFUG is a 2-dimensional array IFUG(200,100); the first index corresponds to points and the second to domains.
JFUG
The J location of the fudge points for each of the domains.
LNDFUG
The land-use category of the fudge points for each of the domains.
4.9.5 Skip the EZFUDGE over the boxes (used when IFTFUG=T)

Note: The maximum number of boxes is 10. The user can use STARTLAT(10), ..., to specify the boxes over which no EZFUDGE is to be done.

STARTLAT
The latitudes of the lower-left corner of the area.
ENDLAT
The latitudes of the upper-right corner of the area.
STARTLON
The longitudes of the lower-left corner of the area.
ENDLON
The longitudes of the upper-right corner of the area.
4.9.6 Heights of water bodies

The heights of water bodies can be specified in the record EZFUDGE in the namelist file as follows. The index in parentheses refers to a specific water body, listed in the file "ezids", that is known to NCAR Graphics. For the Great Lakes in the US, the heights have already been specified. Users can add more water-body surface heights (in meters above sea level), but only for water bodies identifiable in NCAR Graphics.

HTPS( 441) = -.001    ; Ocean
HTPS( 550) = 183.     ; Lake Superior
-------------
4.10 How to run TERRAIN

1. Get the source code. The current TERRAIN release resides on NCAR's anonymous ftp site, ftp.ucar.edu:mesouser/MM5V3/TERRAIN.TAR.gz. You may download TERRAIN.TAR.gz to your working directory from the web page ftp://ftp.ucar.edu/mesouser/MM5V3, or copy it from ~mesouser/MM5V3/TERRAIN.TAR.gz on NCAR's SCD machines.

2. Create the terrain.deck. Uncompress ("gunzip TERRAIN.TAR.gz") and untar ("tar -xvf TERRAIN.TAR") the file; a directory TERRAIN will be created. Go into the TERRAIN directory and type "make terrain.deck", which creates a C-shell script, terrain.deck. This deck is created specifically for your computer. If your system does not have NCAR Graphics, you must modify the "Makefile" in the TERRAIN/ directory: set NCARGRAPHICS = NONCARG and remove the libraries on the LOCAL_LIBRARIES line. Note that the TERRAIN program does not require NCAR Graphics to run, but having it will make life a lot easier because you can see where you have set your domains. Although NCAR Graphics is licensed software, part of it has become free to download; see the NCAR Graphics web page for details: ngwww.ucar.edu.

3. Edit terrain.deck. There are three parts of terrain.deck that need to be edited:
(a) Shell variables: ftpdata, Where30sTer, and users. Instructions on how to set these shell variables can be found in terrain.deck, or refer to Section 4.7 in this chapter.
(b) Parameter statements in parame.incl and paramed.incl (edit them in the terrain.deck): parameters IIMX and JJMX in parame.incl are used to declare the arrays holding the mesoscale gridded data, while parameters ITRH and JTRH in paramed.incl are used to declare the arrays holding the input lat/lon data (refer to the instructions in terrain.deck or Section 4.8 in this chapter).
(c) Records in terrain.namelist: MAPBG, DOMAINS, and OPTN. If you would like to fudge the land-use or add more heights of water bodies, the records FUDGE, FUDGET, and EZFUDGE need to be modified as well. Refer to the instructions in terrain.deck or Section 4.9 in this chapter.

4. Run terrain.deck by typing "./terrain.deck". TERRAIN needs two kinds of input: (a) terrain.namelist and (b) data files for elevation, land-use, etc. The terrain.namelist is created from terrain.deck, and the necessary data files are obtained from ftp sites based on the types of data the user specifies in the namelist. Beware that the minimum size of the downloaded data is 57 Mb, and it can go up to 362 Mb if one requests the USGS land-use data and land-water mask data. Hosting the 30-sec datasets will require a few Gb of disk space.

5.
Check your output. TERRAIN produces three kinds of output:
(a) A log file from compilation, make.terrain.out, and a print file from running the program, terrain.print.out. Check make.terrain.out to see if the compilation was successful, and terrain.print.out to see if the program ran successfully. When the TERRAIN job is successful, you should get the message "== NORMAL TERMINATION OF TERRAIN PROGRAM ==" at the end of the terrain.print.out file. If the TERRAIN job failed, you can also find error messages and look for clues in this file.
(b) A plot file, TER.PLT (or gmeta), if NCAR Graphics is used (type "idt TER.PLT" to view). Because TERRAIN is the first component of the MM5 modeling system and produces constant fields used in the model, NCAR Graphics is used in the program to produce plots so users can check the output carefully. When LSMDATA = FALSE, 7 frames are plotted for each domain: map background, color and black/white terrain height, land-use (vegetation), mesh, a schematic raob station map, and a map showing the rest of the nests (only 6 frames for the finest domain, which has no nest map). When LSMDATA = TRUE, 15 additional frames are plotted: deep soil temperature, soil category, 12 monthly vegetation fraction percentages, and land-water mask. When IEXTRA = TRUE, more frames are plotted.
(c) Binary files, TERRAIN_DOMAIN1, TERRAIN_DOMAIN2, ...: these are the terrestrial data files for each mesoscale domain, used by REGRID, MM5 or NESTDOWN. You may check the size of each file to make sure it was created correctly (i.e., does not have zero size).

Useful 'make' commands:

make clean       If you are going to recompile, it is best to type 'make clean' first. It removes all generated files (object files and executables).
make dataclean   Removes the downloaded data in the Data/ directory, and the Data30s/ directory itself.
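The checks in step 5 are easy to automate. A sketch (illustrative Python; the file names are those produced by TERRAIN, and the helper function itself is hypothetical):

```python
import os

def terrain_succeeded(run_dir=".", domains=(1,)):
    """Return True if terrain.print.out contains the normal-termination
    message and every TERRAIN_DOMAINn file exists with non-zero size."""
    with open(os.path.join(run_dir, "terrain.print.out")) as f:
        ok = "NORMAL TERMINATION OF TERRAIN PROGRAM" in f.read()
    for n in domains:
        path = os.path.join(run_dir, "TERRAIN_DOMAIN%d" % n)
        ok = ok and os.path.isfile(path) and os.path.getsize(path) > 0
    return ok
```

Running it in the job directory after the deck finishes gives a quick pass/fail before inspecting the plots.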
4.11 TERRAIN Didn't Work: What Went Wrong?

If the TERRAIN job fails, check whether one of the following is the cause:

• First, make sure the compilation was successful. Check that the following executables were produced:

  terrain.exe   - main terrain executable
  rdnml         - utility to read namelist variables and figure out what data to download
  data_area.exe - utility to figure out which 30-sec elevation data tiles to download
  rdem.exe      - utility to read the 30-sec elevation data and reformat it for the terrain program

If they were not generated, check the make.terrain.out file for compiler errors. To recompile, type

  make clean
and run ./terrain.deck again.

• Missing NCAR Graphics environment variable: check that you have included the following line in your .cshrc file:

  setenv NCARG_ROOT /usr/local (or /usr/local/ncarg)

This is required for making plots using NCAR Graphics.
• Program aborted in subroutine SETUP: most likely the map background information was not provided correctly. Check the namelist record MAPBG and the variables TRUELAT1 and TRUELAT2.
• The program stopped abnormally: check terrain.print.out to find the maximum dimensions required. For example, when a polar projection is specified and the pole is inside the domain, JTRH should be much larger than ITRH, while for other projections ITRH and JTRH may be comparable. Also, IIMX and JJMX should be the maximum dimensions including any expanded domain.
“The nest 2 is too close to the boundary of the domain 1 ...” and STOP in subroutine TFUDGE: This means there are not enough grid points between domains’ boundaries, change the domain settings (e.g. NESTI and NESTJ), and run the program again.
•
The grid size or the dimensions of the nested domain are specified incorrectly (do not match the mother domain). Please check the messages in terrain.print.out to find the correct ones.
•
The necessary input data files have not been accessed correctly via ftp. You may check the directories, Data and Data30s, to see if the necessary source data files are there. Type ‘make dataclean’ can remove all data files before one starts again.
•
When the constant fields (for example, whole domain located over ocean) are generated, the plotting errors will occurred if IFILL = TRUE. Set IFILL = FALSE or reset your domains.
•
If running the TERRAIN job on a CRAY computer, probably a huge memory is required and more CPU time are needed because all integer numbers are represented by a 8-byte word and all operations are done on the 8-byte word. So, if possible, we suggest that users run the TERRAIN job on workstations.
4.12 TERRAIN Files and Unit Numbers

Table 4.3 List of shell names, fortran unit numbers and their descriptions for TERRAIN

Shell name                           Unit number                   Description
terrain.namelist                     fort.15                       namelist
*.tbl                                fort.17                       tables used for plotting
ezids                                fort.18                       area ID file used by ezmap
raobsta.ieee                         fort.19                       global RAOB station list
LNDNAME(1), (2), (3)                 fort.20, 22, 24               1-deg, 30-, and 10-min source land-use files
TERNAME(1), (2), (3), (4), (5)       fort.21, 23, 25, 27, 29       1-deg, 30-, 10-, 5- and 2-min source terrain files
new_30sdata                          fort.31                       30-sec source terrain file
TERRAIN_DOMAINn                      fort.7(n-1)                   TERRAIN output file for domain ID n
LWNAME(1), (2), (3), (4), (5), (6)   fort.32, 33, 34, 35, 36, 37   1-deg, 30-, 10-, 5-, 2-min, and 30-sec land-water mask files
VGNAME(1), (2), (3), (4), (5), (6)   fort.38, 39, 40, 41, 42, 43   1-deg, 30-, 10-, 5-, 2-min, and 30-sec vegetation files
SONAME(1), (2), (3), (4), (5), (6)   fort.44, 45, 46, 47, 48, 49   1-deg, 30-, 10-, 5-, 2-min, and 30-sec soil files
VFNAME                               fort.50                       12 monthly 10-min vegetation fraction files
TSNAME                               fort.51                       1-deg annual deep soil temperature file
new_30sdata_info                     fort.97                       global 30-sec elevation data information
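The one non-obvious entry in Table 4.3 is the output unit: fort.7(n-1) means the digit 7 followed by n-1. As a small sketch (illustrative Python; the helper name is hypothetical):

```python
def terrain_output_unit(n):
    """Fortran unit name for TERRAIN_DOMAINn per Table 4.3:
    fort.7(n-1), i.e. domain 1 -> fort.70, domain 2 -> fort.71, ..."""
    return "fort.7%d" % (n - 1)
```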
4.13 TERRAIN tar File

The terrain.tar file contains the following files and directories:

CHANGES        Description of changes to the Terrain program
Data/          Data directory
Makefile       Makefile to create terrain.deck and executable
README         General information about the Terrain directory
Templates/     Job deck and tables directory
con.tbl        Table file for terrain height plot
confi.tbl      Table file for color terrain height plot
confiP.tbl     Table file for vegetation fraction percentage plots
confiT.tbl     Table file for deep soil temperature plot
ezids          File for NCAR Graphics geographic area identifiers
lsco.tbl       Table file for soil category plot
luco.tbl       Table file for old land-use plot
lvc1.tbl       Table file for SiB vegetation category plot
lvc2.tbl       Table file for USGS vegetation category plot
map.tbl        Table file for plots
maparea.tbl    Table file for plots
mapfi.tbl      Table file for plots
raobsta.ieee   Radiosonde locations for plots
src/           Terrain source code
In the directory src/, paramesv0.incl and vs_data0.incl are the parameter and data statements for the SiB data; paramesv1.incl and vs_data2.incl are the parameter and data statements for the USGS data. The Data/ directory contains namelists for the USGS and SiB input files; these namelists are cat'ed onto the terrain namelist file during the run. Also present in the directory are the ftp scripts that ftp the general terrain data from the NCAR ftp site and the 30-sec USGS terrain dataset from the USGS ftp site. These ftp scripts may be run separately from the Terrain program to obtain data. If users have their own vegetation or soil data with different definitions, these parameter and data statement files must be created, as well as the corresponding color tables for plots.
4.14 terrain.deck

#!/bin/csh -f
# terrain.csh
# set echo
#
# Set this if you would like to ftp terrain data
#
set ftpdata = true
#set ftpdata = false
#
# Set the following for ftp'ing 30 sec elevation data from USGS ftp site
#
set Where30sTer = ftp
#set Where30sTer = /your-data-directory
if ( $Where30sTer == ftp) then
#
# Use this if you are ftping from other places
#
# set users = Others
#
# Use this if you are ftping from inside NCAR
#
  set users = MMM
else
  set users =
endif
#
# Uncomment the following line if using the 30-100 cm layer soil file
#
# set BotSoil
#
# --------------------------------------------------------------
# 1. Set up parameter statements
# --------------------------------------------------------------
#
cat > src/parame.incl.tmp << EOF
C IIMX,JJMX are the maximum size of the domains, NSIZE = IIMX*JJMX
      PARAMETER (IIMX = 100, JJMX = 100, NSIZE = IIMX*JJMX)
EOF
cat > src/paramed.incl.tmp << EOF
C ITRH,JTRH are the maximum size of the terrain data.
C NOBT = ITRH*JTRH, here assuming
C ITRH= 270 ( 45 deg. in north-south direction, 10 min. resolution)
C JTRH= 450 ( 75 deg. in north-south direction, 10 min. resolution)
C NOTE:
C IF USING GLOBAL 30SEC ELEVATION DATASET FROM USGS, NEED TO SET
C BOTH ITRH AND JTRH BIG. TRY THE COMMENTED PARAMETER LINE FIRST.
C THIS WILL REQUIRE APPROX. 0.9 GB MEMORY ON A 32-BIT IEEE MACHINE.
C AN ESTIMATE OF THE DIMENSION SIZE CAN BE MADE FROM Data30s/rdem.out
C AFTER THE FIRST JOB FAILS. USE (XMAXLAT-XMINLAT)*120 TO ESTIMATE
C ITRH, AND (XMAXLON-XMINLON)*120 TO ESTIMATE JTRH.
      PARAMETER (ITRH = 500, JTRH = 500, NOBT = ITRH*JTRH)
C     PARAMETER (ITRH = 1500, JTRH = 1800, NOBT = ITRH*JTRH)
EOF
#
# --------------------------------------------------------------
# 2. Set up NAMELIST
# --------------------------------------------------------------
#
if ( -e terrain.namelist ) rm terrain.namelist
cat > terrain.namelist << EOF
 &MAPBG
 PHIC  =  36.0,     ; CENTRAL LATITUDE (minus for southern hemisphere)
 XLONC = -85.0,     ; CENTRAL LONGITUDE (minus for western hemisphere)
 IEXP  = .F.,       ; .T. EXPANDED COARSE DOMAIN, .F. NOT EXPANDED.
                    ; USEFUL IF RUNNING RAWINS/little_r
 AEXP  = 360.,      ; APPROX EXPANSION (KM)
 IPROJ = 'LAMCON',  ; LAMBERT-CONFORMAL MAP PROJECTION
;IPROJ = 'POLSTR',  ; POLAR STEREOGRAPHIC MAP PROJECTION
;IPROJ = 'MERCAT',  ; MERCATOR MAP PROJECTION
 &END
 &DOMAINS
 MAXNES =  2,       ; NUMBER OF DOMAINS TO PROCESS
 NESTIX = 35, 49, 136, 181, 211, 221,   ; GRID DIMENSIONS IN Y DIRECTION
 NESTJX = 41, 52, 181, 196, 211, 221,   ; GRID DIMENSIONS IN X DIRECTION
 DIS    = 90., 30., 9., 3.0, 1.0, 1.0,  ; GRID DISTANCE
 NUMNC  =  1,  1,  2,  3,  4,  5,       ; MOTHER DOMAIN ID
 NESTI  =  1, 10, 28, 35, 45, 50,       ; LOWER LEFT I OF NEST IN MOTHER DOMAIN
 NESTJ  =  1, 17, 25, 65, 55, 50,       ; LOWER LEFT J OF NEST IN MOTHER DOMAIN
 RID    = 1.5, 1.5, 1.5, 3.1, 2.3, 2.3, ; RADIUS OF INFLUENCE IN GRID UNITS (IFANAL=T)
 NTYPE  =  2,  3,  4,  6,  6,  6,       ; INPUT DATA RESOLUTION
;         1: 1 deg (~111 km) global terrain and landuse
;         2: 30 min ( ~56 km) global terrain and landuse
;         3: 10 min ( ~19 km) global terrain and landuse
;         4:  5 min (  ~9 km) global terrain and landuse
;         5:  2 min (  ~4 km) global terrain and landuse
;         6: 30 sec ( ~.9 km) global terrain and landuse
 NSTTYP =  1,  2,  2,  2,  2,  2,       ; 1 -- ONE WAY NEST, 2 -- TWO WAY NEST
 &END
 &OPTN
 IFTER   = .TRUE.,  ; .T.-- TERRAIN, .F.-- PLOT DOMAIN MAPS ONLY
 IFANAL  = .F.,     ; .T.-- OBJECTIVE ANALYSIS, .F.-- INTERPOLATION
 ISMTHTR = 2,       ; 1: 1-2-1 smoother, 2: two-pass smoother/desmoother
 IFEZFUG = .F.,     ; .T. USE NCAR GRAPHICS EZMAP WATER BODY INFO TO FUDGE THE LAND USE
                    ; .F. USE LANDWATER MASK DATA
 IFTFUG  = .F.,     ; .T. DON'T DO EZFUDGE WITHIN THE USER-SPECIFIED
                    ; LAT/LON BOXES, need to define namelist fudget
 IFFUDG  = .F.,     ; .T. POINT-BY-POINT FUDGING OF LANDUSE,
                    ; need to define namelist fudge
 IPRNTD  = .F.,     ; PRINT OUT LAT. AND LON. ON THE MESH
 IPRTHT  = .F.,     ; PRINT OUT ALL PROCESSING FIELDS ON THE MESH
 IPRINT  = 0,       ; = 1: A LOT MORE PRINT OUTPUT IN terrain.print.out
 FIN     = 100., 100., 100., 100., 100., 100.,
                    ; CONTOUR INTERVAL (meter) FOR TERRAIN HEIGHT PLOT
;TRUELAT1= 91.,     ; TRUE LATITUDE 1
;TRUELAT2= 91.,     ; TRUE LATITUDE 2, use this if IPROJ='LAMCON'
 IFILL   = .TRUE.,  ; .TRUE. --- color filled plots
 LSMDATA = .FALSE., ; .TRUE. --- Create the data for LSM
 VEGTYPE = 1,       ; LANDUSE DATA TYPE: =0: old 13 cat; =1: 24 cat USGS; =2: 16 cat SiB
 VSPLOT  = .TRUE.,  ; .TRUE. --- plot Vege., Soil, Vege. Frc. percentages
 IEXTRA  = .FALSE., ; .TRUE. --- Create extra data for Pleim-Xiu LSM
 &END
 &FUDGE
; USE ONLY IF IFFUDG = .T., POINT-BY-POINT FUDGING OF LANDUSE,
; IFFUG FOR EACH OF THE NESTS: .F. NO FUDGING, .T. FUDGING
 IFFUG = .F., .F.,  ; FUDGE FLAGS
; NDFUG : THE NUMBER OF FUDGING POINTS FOR EACH OF NESTS
 NDFUG = 0, 0,
; LOCATION (I,J) AND LANDUSE VALUES FOR EACH OF THE NESTS
; NOTE: REGARDLESS OF IFFUG AND NDFUG, 200 VALUES MUST BE GIVEN FOR
; EACH NEST, OR ELSE THE INDEXING WILL GET MESSED UP
; The example below is for two domains. Add more for domain 3 and up
; if needed. Do not remove 0 values for domain 1 and/or 2 even if
; they are not used.
;
 IFUG(1,1)   = 200*0,  ; I location for fudge points in domain 1
 IFUG(1,2)   = 200*0,  ; I location for fudge points in domain 2
 JFUG(1,1)   = 200*0,  ; J location for fudge points in domain 1
 JFUG(1,2)   = 200*0,  ; J location for fudge points in domain 2
 LNDFUG(1,1) = 200*0,  ; land-use value at fudge points for domain 1
 LNDFUG(1,2) = 200*0,  ; land-use value at fudge points for domain 2
 &END
 &FUDGET
; USE ONLY IF IFTFUG=.T., WHICH MEANS TERRAIN WON'T DO EZFUDGE WITHIN
; THE USER-SPECIFIED LAT/LON BOXES. THIS OPTION IS USED WHEN THERE
; ARE INLAND BODIES OF WATER THAT ARE DEFINED IN THE LAND USE
; DATA SET BUT NOT IN THE EZMAP DATA SET. THIS OPTION PREVENTS
; THOSE BODIES OF WATER FROM BEING WIPED OUT BY EZFUDGE
 NFUGBOX = 2        ; NUMBER OF SUBDOMAINS IN WHICH TO
                    ; TURN OFF EZMAP LAND USE FUDGING
 STARTLAT =  45.0,  44.0,  ; LATITUDES OF LOWER-LEFT CORNERS OF SUBDOMAINS
 ENDLAT   =  46.5,  45.0,  ; LATITUDES OF UPPER-RIGHT CORNERS OF SUBDOMAINS
 STARTLON = -95.0, -79.8,  ; LONGITUDES OF LOWER-LEFT CORNERS OF SUBDOMAINS
 ENDLON   = -92.6, -78.5,  ; LONGITUDES OF UPPER-RIGHT CORNERS OF SUBDOMAINS
 &END
 &EZFUDGE
; USE ONLY IF IFEZFUG=.T., WHICH TURNS ON EZMAP WATER BODY FUDGING OF LANDUSE.
; USERS: FEEL FREE TO ADD ANY MORE LAKE SURFACE HEIGHTS THAT YOU'LL NEED.
; HTPS IS THE HEIGHT IN METERS AND THE INDEX OF HTPS CORRESPONDS TO THE ID
; OF THE 'PS' AREA IN THE FILE ezmap_area_ids.
;
 HTPS(441) = -.001   ; Oceans -- Do NOT change this one
 HTPS(550) = 183.    ; Lake Superior
 HTPS(587) = 177.    ; Lakes Michigan and Huron
 HTPS(618) = 176.    ; Lake St. Clair
 HTPS(613) = 174.    ; Lake Erie
 HTPS(645) = 75.     ; Lake Ontario
 HTPS(480) = 1897.   ; Lake Tahoe
 HTPS(500) = 1281.   ; Great Salt Lake
 &END
EOF
#
# ------------------------------------------------------------------------------
#
# END OF USER MODIFICATION
#
# ------------------------------------------------------------------------------
#
# Check to see if recompilation is needed
# Need to make here so that rdnml may be used
#
cd src
../Templates/incldiff.sh parame.incl.tmp parame.incl
../Templates/incldiff.sh paramed.incl.tmp paramed.incl
cd ..
make >& make.terrain.out
#
# Create a namelist without comments
#
sed -f Templates/no_comment.sed terrain.namelist | grep "[A-Z,a-z]" > terlif.tmp
mv terlif.tmp terrain.namelist
#
# Set default script variables
#
set LandUse = OLD
#
set DataType = `src/rdnml < terrain.namelist`
echo $DataType
#
if ( $DataType[4] == 1 ) set IfProcData
if ( $DataType[4] == 0 ) set ftpdata = false
if ( $DataType[5] == 1 ) set LandUse = USGS
if ( $DataType[5] == 2 ) set LandUse = SiB
if ( $DataType[3] == 1 ) set IfUsgsTopo
#
# reset LandUse if BotSoil is set -- use bottom soil files
#
if ( $?BotSoil ) set LandUse = USGS2
#
# link to Fortran units
#
set ForUnit = fort.
rm ${ForUnit}1* ${ForUnit}2* ${ForUnit}4*
#
if ( $LandUse == OLD )   cat Data/namelist.usgsdata  >> terrain.namelist
if ( $LandUse == USGS )  cat Data/namelist.usgsdata  >> terrain.namelist
if ( $LandUse == USGS2 ) cat Data/namelist.usgsdata2 >> terrain.namelist
if ( $LandUse == SiB )   cat Data/namelist.sibdata   >> terrain.namelist
cat > endnml << EOF
 &END
EOF
cat endnml >> terrain.namelist
rm endnml
#
ln -s terrain.namelist ${ForUnit}15
ln -s ezids ${ForUnit}18
ln -s raobsta.ieee ${ForUnit}16
# ----------------------------------------------------------------------
#
# Update parameter statements for vegetation dataset
# (may require partial recompilation)
#
if ( $LandUse == SiB ) then
  cp src/paramesv0.incl src/paramesv.incl.tmp
  ./Templates/incldiff.sh src/paramesv.incl.tmp src/paramesv.incl
  cp src/vs_data0.incl src/vs_data.incl.tmp
  ./Templates/incldiff.sh src/vs_data.incl.tmp src/vs_data.incl
  make >& make2.print.out
else if ( $LandUse == USGS ) then
  cp src/paramesv1.incl src/paramesv.incl.tmp
  ./Templates/incldiff.sh src/paramesv.incl.tmp src/paramesv.incl
  cp src/vs_data2.incl src/vs_data.incl.tmp
  ./Templates/incldiff.sh src/vs_data.incl.tmp src/vs_data.incl
  make >& make2.print.out
endif
# ----------------------------------------------------------------------
#
# should I ftp the data?
#
if ( $ftpdata == true && $?BotSoil ) then
  # ftp other data plus bottom soil data
  echo 'about to start ftping'
  cp Data/ftp2.csh ftp.csh
  chmod +x ftp.csh
  ./ftp.csh >& ftp.out
# rm ftp.csh ftp.out
else
  # ftp other data plus top soil data
  echo 'about to start ftping'
  cp Data/ftp.csh ftp.csh
  chmod +x ftp.csh
  ./ftp.csh >& ftp.out
# rm ftp.csh ftp.out
endif
#
if ( $?IfUsgsTopo && $?IfProcData ) then
  echo 'about to start ftping 30 sec tiled elevation data from USGS'
  cp Data/ftp30s.csh .
  chmod +x ftp30s.csh
  ./ftp30s.csh $Where30sTer $users >& ftp30s.out
# rm ftp30s.csh ftp30s.out
endif
# ----------------------------------------------------------------------
#
# Execute terrain
#
unlimit
date
./terrain.exe >&! terrain.print.out
#
rm ${ForUnit}*
5: REGRID
Purpose 5-3
Structure 5-3
A schematic 5-4
Input to pregrid 5-4
Input to regridder 5-5
Output from regridder 5-5
Intermediate Data Format 5-5
  General format description 5-5
  File Naming conventions 5-5
  File format 5-6
  Special field names 5-7
Pregrid VTables 5-8
Pregrid program functioning 5-9
Handy pregrid utility programs 5-9
How to run REGRID 5-10
pregrid.csh 5-11
The regridder Namelist options 5-13
  RECORD1 5-13
  RECORD2 5-14
  RECORD3 5-14
  RECORD4 5-14
  RECORD5 5-15
REGRID tar File 5-15
Data 5-15
  NCEP GDAS 5-16
  NCEP/NCAR Reanalysis 5-16
  NCEP Eta 5-17
  NCEP AVN 5-17
  ECMWF TOGA Global Analysis 5-17
  ECMWF Reanalysis (ERA15) 5-17
  ECMWF Reanalysis (ERA40) 5-18
  Other data 5-18
5.1 Purpose
The purpose of REGRID is to read archived gridded meteorological analyses and forecasts on pressure levels and to interpolate those analyses from some native grid and map projection to the horizontal grid and map projection defined by the MM5 preprocessor program TERRAIN. REGRID handles pressure-level and surface analyses; two-dimensional interpolation is performed on these levels. Other types of levels, such as constant height surfaces, isentropic levels, or model sigma or eta levels, are not handled.

REGRID is the second step in the flow diagram of the MM5 modeling system (Fig. 1.1). It expects input from the TERRAIN program, and creates files ready for RAWINS, LITTLE_R, or INTERPF. These files are generally used as the first guess for an objective analysis (RAWINS or LITTLE_R), or as analyses which are to be directly interpolated to the MM5 model levels for initial and boundary conditions for MM5 (INTERPF).

An additional capability of REGRID is to insert, or "bogus in", a tropical cyclone in the analysis. This is a fairly specialized usage of REGRID, and will not be discussed in any detail here. For details on this tropical cyclone bogussing method, see:
http://www.mmm.ucar.edu/mm5/mm5v3/tc-report.pdf (pdf format)
http://www.mmm.ucar.edu/mm5/mm5v3/tc-report.doc (word format)
5.2 Structure
REGRID is not a single program, but a suite of programs that handles the various tasks of the REGRID package. The tasks are split into two main components: 1) data input (i.e., reading the original meteorological analyses), and 2) interpolation to the MM5 grid. The data input task is handled by the programs collectively known as "pregrid", and the interpolation to the MM5 grid is handled by the program "regridder". Communication between these programs is accomplished via intermediate files written in a fairly simple format. The pregrid tasks are further subdivided into programs which read specific data formats, while the regridder tasks are managed in a single program. The intent is that individual users can easily write their own data input programs (i.e., their own pregrid programs) if necessary, thus introducing their own data into the MM5 Modeling System. This division separates the fairly messy and very dataset-specific task of reading data from the more general task of interpolation. By this division, REGRID can be easily expanded to ingest more data sets, and users can more easily ingest their own data sets for use in the MM5 system.
5.3 A schematic
Thinking of REGRID as a package:

    TERRAIN --+
              +--> REGRID --> LITTLE_R
    Analyses -+

Considering the components of REGRID:

    Analyses --> PREGRID --> Intermediate files --> REGRIDDER --> LITTLE_R
                                                        ^
                                                        |
                                                     TERRAIN
5.4 Input to pregrid
The pregrid program expects to find files of gridded meteorological analyses. Currently, pregrid can read many data sets formatted in GRIB Edition 1 (hereinafter referred to as GRIB), as well as several GRIB and non-GRIB data sets that have traditionally been available to MM5 users. Most of the individual pregrid programs, particularly those dealing with GRIB datasets, also expect to find tables which tell the pregrid program what fields to access from the input files. These are referred to as "Vtables" and are discussed in greater detail below. A Fortran namelist file passes user-specified options to pregrid; for pregrid, this is mostly date information.
5.5 Input to regridder
The regridder program expects the files from pregrid to contain the fields of temperature, horizontal wind components, relative humidity, height of pressure levels, sea-level pressure, sea-surface temperature, and snow-cover data. Other fields may be used as well, interpolated and passed on to the rest of the modeling system. When you set up and run the pregrid programs, you should verify that the files you pass to regridder contain the necessary fields. One way to verify this is to run regridder and see what error messages it gives you. From the TERRAIN files, regridder finds terrain, land-use, and map data. A Fortran namelist file passes user-specified options to regridder.
5.6 Output from regridder
The regridder program creates a file called "REGRID_DOMAIN#". This file contains the data at every time period for a single domain. The file is in MM5v3 format, which is discussed in greater detail in Chapter 13, "I/O Format".
5.7 Intermediate Data Format
Key to the REGRID package is the data format used for passing data from pregrid to regridder. Data are passed from the pregrid programs to regridder via files written in the format described in this section.
5.7.1 General format description
Fields are written to the intermediate files as two-dimensional horizontal (i.e., pressure-level or surface) slabs of data. Each horizontal slab contains a single level of a single variable (i.e., 500 mb RH, surface T, etc.). Any number of horizontal slabs may be written to a single file. The slabs in a given file are not necessarily all from the same data source, or all on the same grid or map projection, but they should all represent data valid at the same time. The order of slabs in the file does not matter.
5.7.2 File Naming conventions
Each file contains data for a single time. The file names consist of a prefix (possibly denoting the source of the data), followed by a colon, followed by a time-stamp in the form YYYY-MM-DD_HH. Regridder uses the file names as discussed below. For example, analyses from the ON84-formatted data from NCEP for 17 Jun 2002 at 12 UTC may be written to a file called "ON84:2002-06-17_12".
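As a sketch of this convention, the file name can be generated from a prefix and a valid time with a few lines of Python (the helper below is hypothetical, for illustration only; it is not part of the REGRID package):

```python
from datetime import datetime

def intermediate_filename(prefix: str, valid_time: datetime) -> str:
    # prefix, a colon, then a YYYY-MM-DD_HH time stamp
    # (hypothetical helper, not part of REGRID)
    return f"{prefix}:{valid_time.strftime('%Y-%m-%d_%H')}"

print(intermediate_filename("ON84", datetime(2002, 6, 17, 12)))
# prints ON84:2002-06-17_12
```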
5.7.3 File format
The files are written as sequential-access unformatted FORTRAN records. Four records are used for each horizontal slab. The first record is a format version number, currently 3; this is intended to facilitate backward compatibility as the intermediate format is adapted for additional grids. The second record contains information common to all types of gridded data recognized by regridder. The third record contains information specific to the particular grid type represented; this record varies depending on the grid type. The fourth record is the 2-dimensional slab of data.

-Record 1: IFV
-Record 2: HDATE, XFCST, FIELD, UNITS, DESC, XLVL, NX, NY, IPROJ
 if (IPROJ == 0) (Cylindrical equidistant projection)
   -Record 3: STARTLAT, STARTLON, DELTALAT, DELTALON
 if (IPROJ == 1) (Mercator projection)
   -Record 3: STARTLAT, STARTLON, DX, DY, TRUELAT1
 if (IPROJ == 3) (Lambert conformal projection)
   -Record 3: STARTLAT, STARTLON, DX, DY, XLONC, TRUELAT1, TRUELAT2
 if (IPROJ == 5) (Polar-stereographic projection)
   -Record 3: STARTLAT, STARTLON, DX, DY, XLONC, TRUELAT1
-Record 4: SLAB

where:

integer :: IFV      : The PREGRID format version number, currently 3
char*24 :: HDATE    : The time, in format "YYYY-MM-DD_HH:mm:ss"
real    :: XFCST    : Forecast time (in hours) of the data in the slab
char*9  :: FIELD    : Field name; those with special meaning are described below
char*25 :: UNITS    : Units describing the field in the slab
char*46 :: DESC     : Text description of the field
real    :: XLVL     : Pressure level (Pa) of the data. 200100 Pa indicates
                      surface data; 201300 Pa indicates sea-level data
integer :: NX       : Slab dimension (number of gridpoints) in the X direction
integer :: NY       : Slab dimension (number of gridpoints) in the Y direction
integer :: IPROJ    : Flag denoting the projection. Recognized values are:
                        0: Cylindrical Equidistant (regular lat/lon) projection
                        1: Mercator projection
                        3: Lambert conformal projection
                        5: Polar stereographic projection
real    :: STARTLAT : Starting latitude (degrees north)
real    :: STARTLON : Starting longitude (degrees east)
real    :: DELTALAT : Latitude increment (degrees) for lat/lon grid
real    :: DELTALON : Longitude increment (degrees) for lat/lon grid
real    :: DX       : Grid-spacing in x (km at TRUELAT1 (and TRUELAT2))
real    :: DY       : Grid-spacing in y (km at TRUELAT1 (and TRUELAT2))
real    :: XLONC    : Center longitude of the projection
real    :: TRUELAT1 : Standard latitude used for Mercator, polar stereographic,
                      and Lambert conformal projections
real    :: TRUELAT2 : Second standard latitude value used for the Lambert
                      conformal projection
real    :: SLAB     : Two-dimensional array (NX,NY) of data
5.7.4 Special field names
The variable FIELD indicates the physical variable in the slab. Certain values of FIELD are recognized by pregrid and regridder for specific treatment. Slabs identified by an unrecognized value of FIELD are simply interpolated horizontally and written out by regridder. Recognized field names are:

T         *  Air Temperature (K)
U         *  Grid-relative u-component of the horizontal wind (m s-1)
V         *  Grid-relative v-component of the horizontal wind (m s-1)
RH        *  Relative humidity (%)
HGT       *  Geopotential height (GPM)
PMSL      *  Sea-level pressure (Pa)
SST, TSEASFC, or SKINTEMP
          *  Sea-surface Temperature or Skin Temperature (K)
SNOWCOVR     Binary flag for the presence (1.0) / absence (0.0) of snow
             on the ground
SOILT010  F  Ground temperature of a layer below ground (K)
SOILT040  F  Ground temperature of a layer below ground (K)
SOILT100  F  Ground temperature of a layer below ground (K)
SOILT200  F  Ground temperature of a layer below ground (K)
SOILT400  F  Ground temperature of a layer below ground (K)
SOILM010  F  Soil moisture of a layer below ground (fraction)
SOILM040  F  Soil moisture of a layer below ground (fraction)
SOILM100  F  Soil moisture of a layer below ground (fraction)
SOILM200  F  Soil moisture of a layer below ground (fraction)
SOILM400  F  Soil moisture of a layer below ground (fraction)
SEAICE    F  Binary flag for the presence (1.0) / absence (0.0) of sea ice.
             The value should be 0.0 or 1.0. The grib.misc pregrid code makes
             a check on SEAICE: if a value is > 0.5, SEAICE is set to 1.0;
             otherwise, SEAICE is set to 0.0.
LANDSEA   F  Binary flag for land (1.0) / water (0.0) masking
SOILHGT   F  Terrain elevation of the input data set (not of the MM5 model
             terrain), in meters
WEASD        Water equivalent of accumulated snow depth (kg m-2)
SPECHUMD  †  Specific Humidity
DEWPT     †  Dewpoint (K)
DEPR      †  Dewpoint Depression (K)
VAPP      †  Vapor Pressure (Pa)
GEOPT     †  Geopotential (m2/s2)

*  Fields absolutely required by regridder.
F  Fields used in MM5 only for the Noah Land Surface Model.
†  Fields recognized by pregrid for internal conversions.
5.8 Pregrid VTables
Pregrid is intended to read a wide variety of data sets. Since many data sets are archived and distributed in GRIB format, and the non-GRIB data sets we read use many of the same ideas for describing given fields, it is convenient to use the GRIB data sets as an example. The GRIB format describes each field by several code numbers. However, we cannot include the code tables in the program code itself, because these code numbers are not consistent from one data set to another. Also, pregrid must have the capability to ingest analyses that we have not anticipated. Therefore, we have to supply coded information to the program through some sort of input file. The pregrid VTables are the means we have chosen to do this. These tables are essentially a conversion from the GRIB method of referring to fields to the MM5-System method of referring to fields. The body of the VTables consists of one or more lines describing the fields we want to extract from the analysis files. A couple of examples are in order:

GRIB  | Level | Level | Level | REGRID    | REGRID    | REGRID                |
Code  | Code  |   1   |   2   | Name      | Units     | Description           |
------+-------+-------+-------+-----------+-----------+-----------------------+
  11  |  100  |   *   |       | T         | K         | Temperature           |
  11  |  105  |   2   |       | T         | K         | Temperature at 2 m    |
------+-------+-------+-------+-----------+-----------+-----------------------+
The first four columns of the Vtable represent the GRIB method of identifying fields. The last three columns represent the MM5 method of identifying fields.

The GRIB Code is the code number identifying the variable to access. For example, in NCEP GRIB files, temperature is generally coded as 11.

The Level Code is the code number identifying the type of level on which the variable is expected. For example, GRIB Level Code 100 refers to pressure levels, and GRIB Level Code 105 refers to a fixed height (in meters) above the ground.

Level 1 is the GRIB code for the value of the level. An asterisk (*) means to get data from every level of the type defined by the Level Code. This (*) wild-card is effective only for pressure levels (level code 100).

Level 2 is often needed for types of levels (such as averages or sums over a depth) which are defined by two values.

REGRID Name is the character string identifying the field to the rest of the modeling system.

REGRID Units are the units used for this field in the rest of the modeling system. This is simply descriptive text to remind the user what the units are. Do not attempt to change the units in which a field is output by changing the Units string. It will not work, and you will wind up confusing yourself later.

REGRID Description is a text description of the field.

There are a few subtleties to the VTables. A situation that sometimes occurs is that we want a field that is not included in the source files, but may be derived from fields which are in those files. One example is relative humidity. Some data sets may archive specific humidity instead. Yet we can derive RH from specific humidity, temperature, and pressure. We want to write out RH, but not write out specific humidity. Since we need specific humidity to compute relative humidity, we need to ask for specific humidity in the Vtables. The signal in the VTables that a certain field is not to be written out is a blank REGRID Description.
Since we want to write out relative humidity, we include the relative humidity in the VTables in the usual way (with no GRIB Code, since it wouldn't be found anyway). The conversion from specific humidity to relative humidity is coded into the program, so pregrid will create the relative humidity field.

GRIB  | Level | Level | Level | REGRID    | REGRID    | REGRID                |
Code  | Code  |   1   |   2   | Name      | Units     | Description           |
------+-------+-------+-------+-----------+-----------+-----------------------+
  11  |  100  |   *   |       | T         | K         | Temperature           |
  51  |  100  |   *   |       | SPECHUMD  | kg kg{-1} |                       |
      |  100  |   *   |       | RH        | %         | Relative Humidity     |
------+-------+-------+-------+-----------+-----------+-----------------------+
Those conversions already coded into pregrid are:
- Relative humidity, from specific humidity (SPECHUMD), pressure, and temperature
- Relative humidity, from dewpoint (DEWPT), pressure, and temperature
- Relative humidity, from dewpoint depression (DEPR), pressure, and temperature
- Relative humidity, from vapor pressure (VAPP), pressure, and temperature
- Height, from geopotential (GEOPT)

This list may grow as users encounter various situations in which a computed field is necessary.

There are several VTables already set up for certain data sets that we have commonly accessed. Most of these are found in the directory pregrid/grib.misc. If you want to access a different GRIB-formatted data set, you must first determine which variables are included in that data set, and find the appropriate code numbers that are used by that data set. If you want to extract additional variables from a data set, you are responsible for finding the appropriate GRIB Code and Level Code numbers. You may find NCEP Office Note 388, a description of the GRIB Edition 1 format, useful. This document can be found in many places on the internet, including: http://www.nco.ncep.noaa.gov/pmb/docs/on388.
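To make the Vtable layout concrete, a pipe-delimited table like the examples above can be parsed as follows. This is a hypothetical Python illustration only; the actual pregrid reader is Fortran code and its column handling may differ:

```python
def parse_vtable(text):
    # Parse the pipe-delimited body of a pregrid Vtable into dictionaries.
    # (Illustrative sketch; not the real pregrid parser.)
    names = ["grib_code", "level_code", "level1", "level2",
             "name", "units", "description"]
    entries = []
    for line in text.splitlines():
        if "|" not in line:
            continue                   # skip blank and ----+---- separator lines
        cells = [c.strip() for c in line.split("|")]
        if cells and cells[-1] == "":
            cells.pop()                # drop the empty field after a trailing pipe
        if cells[0].lower().startswith(("grib", "code")):
            continue                   # skip the two header lines
        entries.append(dict(zip(names, cells)))
    return entries
```

A blank description cell then survives as an empty string, mirroring the "do not write this field out" signal described above.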
5.9 Pregrid program functioning The pregrid programs first read the namelist to determine the starting and ending times of the period of interest, and to find the desired time-interval of data. It then reads the VTable to determine which variables to extract from the source files. Then for each source file, the program scans through the data, pulling out all analyses which fall between the starting and ending times, and which are listed in the VTable. These analyses are written to preliminary files (named by time and written in the intermediate format). Once a record with a time greater than the user-specified ending time has been read, processing on that analysis file stops and the next file is opened (i.e., records in the source file are assumed to be sorted by time; this assumption can be turned off in the namelist). This cycle repeats until all the source files have been scanned. Once that cycle is finished, the preliminary files are reread and derived fields are computed. Temporal interpolation is performed as necessary to fill in missing time periods. The final intermediate files are written.
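The temporal interpolation mentioned above is linear in time between the two bounding analysis times. A sketch of the idea, using a hypothetical helper rather than the pregrid code itself:

```python
def time_interpolate(t, t0, slab0, t1, slab1):
    # Linear-in-time interpolation of gridded values to time t, given
    # bounding analyses at t0 and t1 (hypothetical helper, not pregrid).
    # Times may be in any consistent unit; slabs are equal-length sequences.
    if not (t0 <= t <= t1) or t0 == t1:
        raise ValueError("t must lie between two distinct analysis times")
    w = (t - t0) / (t1 - t0)  # weight toward the later analysis
    return [(1.0 - w) * a + w * b for a, b in zip(slab0, slab1)]
```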
5.10 Handy pregrid utility programs
We have created a handful of handy programs you should be aware of:

gribprint [-v | -V] file
    Scans through a GRIB-formatted file, printing out a few details of each
    GRIB record. With the -v option, prints more details of the GRIB header.
    With the -V option, prints way too much of the actual data. This program
    is made automatically when you issue the top-level make. It is found in
    the pregrid/util directory.

plotfmt file
    Makes plots of each field in a file created by pregrid. This program
    requires NCAR Graphics. To make this program, go to the util directory
    and execute "make plotfmt", or compile plotfmt.F using NCAR Graphics and
    linking against the library libpgu.a.

get_ncep.csh    downloads archives of the GRIB-formatted NCEP GDAS analyses.
get_on84.csh    downloads archives of the ON84-formatted NCEP GDAS analyses.
get_fnl.csh     downloads archives of the ON84-formatted NCEP GDAS analyses.
get_nnrp.csh    downloads archives from the NCEP/NCAR Reanalysis project.
get_era.csh     downloads archives from the ECMWF Reanalysis project.
get_awip.csh    downloads archives of the NCEP Eta model output (GRIB212).
get_toga.csh    downloads archives of the ECMWF TOGA global analyses.

All the get_*.csh scripts are available from ~mesouser/MM5V3/Util. These scripts can be run on NCAR IBM computers to download analysis data from the mass store. Be sure to first check the DSS catalogs for missing analyses.
5.11 How to run REGRID
1) Get the source code. The current REGRID release resides on NCAR's anonymous FTP site, ftp://mesouser/MM5V3/REGRID.TAR.gz. There may be a regrid.tar file available elsewhere for the tutorial. Uncompress ("gunzip regrid.tar.gz") and untar ("tar -xvf regrid.tar") the file. This creates a top-level REGRID directory called, strangely enough, REGRID.
2) Make the executables. To do this, go into the REGRID directory and type "make". The makefiles we've set up attempt to recognize the type of system you are using, and select the appropriate compile and load options. Watch while your machine builds all the executables. If this doesn't work, you may find yourself having to go into the Makefiles yourself and tune some of the compiler and load options.
3) Get the analysis files. It may be convenient to put these files in a directory of their own. For users of NCAR's machines interested in historical cases, investigate the get_*** scripts mentioned in "Handy pregrid utility programs".
4) Set up to run pregrid. The "pregrid.csh" shell in the pregrid subdirectory is handy. This is discussed below. If you need to create your own Vtables, do it now.
5) Make sure the pregrid.csh script is executable: "chmod u+x pregrid.csh"
6) Run pregrid.csh: "pregrid.csh"
7) Check your output. Make sure that pregrid created files for every time between your starting and ending dates. Check the printout to see which fields are available at which times.
8) Set up to run regridder. Get your terrain output file, go to the regridder subdirectory, and edit the namelist for your specific case.
9) Run regridder: "regridder". This creates a file "REGRID_DOMAIN#".
5.12 pregrid.csh
A shell script has been created, called pregrid.csh, as a higher-level user interface for the pregrid programs. The top part of the pregrid.csh script looks something like this (variables the user may have to set are noted with a vertical bar to the left in the printed notes):

#############################################################################
#!/bin/csh -f
#
# set echo
#
# Put your input files for pregrid into the directory you specify as DataDir:
#
set DataDir = /usr/tmp/username/REGRID
#
# Specify the source of 3-d analyses
#
  set SRC3D = ON84   # Old ON84-formatted NCEP GDAS analyses
# set SRC3D = NCEP   # Newer GRIB-formatted NCEP GDAS analyses
# set SRC3D = GRIB   # Many GRIB-format datasets
#
# InFiles: Tell the program where you have put the analysis files, and what
# you have called them. If SRC3D has the value "GRIB", then the Vtables you
# specify below in the script variable VT3D will be used to interpret the
# files you specify in the ${InFiles} variable.
#
set InFiles = ( ${DataDir}/NCEP* )
#
# Specify the source of SST analyses
#
# set SRCSST = ON84
# set SRCSST = NCEP
# set SRCSST = NAVY
  set SRCSST = $SRC3D
#
# InSST: Tell the program where the files with SST analyses are. Do this
# only if SST analyses are coming from files not named above in InFiles.
# If SRCSST has the value "GRIB", then the Vtables you specify below in the
# script variable VTSST will be used to interpret the files you specify in
# the ${InSST} variable.
#
set InSST = ( )
#
# Select the source of snow-cover analyses (entirely optional)
#
  set SRCSNOW = $SRC3D
# set SRCSNOW = ON84
# set SRCSNOW = GRIB
#
# InSnow: Set InSnow only if the snow-cover analyses are from files not
# listed in InFiles. If SRCSNOW has the value "GRIB", then the Vtables you
# specify below in the script variable VTSNOW will be used to interpret the
# files you specify in the ${InSnow} variable.
#
set InSnow = ()
#
# Select the source of soil model analyses (entirely optional)
#
set SRCSOIL = $SRC3D
#
# InSoil: Set InSoil only if the soil analyses are from files not listed in
# InFiles. If SRCSOIL has the value "GRIB", then the Vtables you specify
# below in the script variable VTSOIL will be used to interpret the files
# you specify in the ${InSoil} variable.
#
set InSoil = ()
#
# Build the Namelist
#
if ( -e ./pregrid.namelist ) then
   rm ./pregrid.namelist
endif
cat << End_Of_Namelist | sed -e 's/#.*//; s/ *$//' > ./pregrid.namelist
&record1
#
# Set the starting date of the time period you want to process:
#
 START_YEAR  = 1993      # Year  (Four digits)
 START_MONTH = 03        # Month ( 01 - 12 )
 START_DAY   = 13        # Day   ( 01 - 31 )
 START_HOUR  = 00        # Hour  ( 00 - 23 )

 END_YEAR    = 1993      # Year  (Four digits)
 END_MONTH   = 03        # Month ( 01 - 12 )
 END_DAY     = 14        # Day   ( 01 - 31 )
 END_HOUR    = 00        # Hour  ( 00 - 23 )
#
# Define the time interval to process.
#
 INTERVAL    = 43200     # Time interval (seconds) to process. This is most
                         # sanely the same as the time interval for which
                         # the analyses were archived, but you can really
                         # set this to just about anything, and pregrid will
                         # interpolate in time and/or skip over time periods
                         # for your regridding pleasure.
/
End_Of_Namelist
#
# Tell the pregrid programs which Vtables to use. Do this only if you have
# selected GRIB-formatted input using SRC___ = GRIB above. The directories
# referenced here are relative to REGRID/pregrid/. The Vtable files
# specified in VT3D will be applied to the files specified in the InFiles
# variable. Similarly, the Vtable files specified in VTSST, VTSNOW, and
# VTSOIL will be applied to the files listed above in InSST, InSnow, and
# InSoil, respectively.
#
set VT3D   = ( grib.misc/Vtable.NNRP3D )
set VTSST  = ( grib.misc/Vtable.NNRPSST )
set VTSNOW = ( grib.misc/Vtable.xxxxSNOW )
set VTSOIL = ( grib.misc/Vtable.xxxxSOIL )

########################################################################
########################################################################
######                                                            ######
######                   END USER MODIFICATION                    ######
######                                                            ######
########################################################################
########################################################################
The rest of the shell performs some file shuffling and linking to put files in places that the pregrid programs expect. The shell links the source files to files of specific names which the pregrid programs expect. The shell builds a file called “Vtable” from the individual Vtables named by the user in the shell. The shell then executes the program, and moves the final output files to the pregrid directory.
5.13 The regridder Namelist options
The regridder program is run entirely through the namelist file. The regridder namelist is separated into five namelist records.
5.13.1 RECORD1
The first namelist record handles the temporal information required by the regridder program: basically, when do I start, when do I stop, and how many intermediate steps are to be taken between those bounding times. This namelist record is identical to that of pregrid (see pregrid.csh, above).
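A minimal RECORD1 might therefore look like the following, reusing the date and interval values from the pregrid.csh example in Section 5.12; treat this as an illustration rather than a complete list of RECORD1 options:

```
&record1
 START_YEAR  = 1993
 START_MONTH = 03
 START_DAY   = 13
 START_HOUR  = 00
 END_YEAR    = 1993
 END_MONTH   = 03
 END_DAY     = 14
 END_HOUR    = 00
 INTERVAL    = 43200
/
```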
5.13.2 RECORD2
The second record for regridder deals with information concerning the vertical levels and other user options. The user defines the top of the analysis and which "new" levels to add to the first-guess data (through vertical interpolation from the surrounding layers, linear in pressure). Other options are an SST temperature threshold below which a sea-ice flag will be set (useful if you do not have a sea-ice field in your input dataset; if using the LSM or Polar options in MM5, do NOT use this threshold), and an option to select a linear (4-point) interpolation as opposed to a higher-order interpolation.

&record2
 ptop_in_Pa           = 10000
 new_levels_in_Pa     = 97500, 95000, 92500, 90000,
                        87500, 85000, 82500, 80000,
                        77500, 75000, 72500, 70000,
                        67500, 65000, 62500, 60000,
                        57500, 55000, 52500, 50000,
                        47500, 45000, 42500, 40000,
                        37500, 35000, 32500, 30000,
                        27500, 25000, 22500, 20000,
                        17500, 15000, 12500, 10000
 sst_to_ice_threshold = -9999
 linear_interpolation = .FALSE. /
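The "linear in pressure" interpolation used to fill the new levels can be sketched as follows. This is a hypothetical Python helper for illustration, not the regridder source:

```python
def interp_new_level(p_new, pressures, values):
    # Interpolate a value to pressure p_new (Pa), linear in pressure,
    # from the two surrounding analysis levels. 'pressures' must be
    # sorted descending (surface first), matching 'values' one-to-one.
    # (Hypothetical helper, not the regridder code.)
    for k in range(len(pressures) - 1):
        p_lower, p_upper = pressures[k], pressures[k + 1]  # numerically high, low
        if p_lower >= p_new >= p_upper:
            w = (p_lower - p_new) / (p_lower - p_upper)
            return (1.0 - w) * values[k] + w * values[k + 1]
    raise ValueError("p_new lies outside the range of the input levels")
```

For example, a level at 75000 Pa added between analyses at 100000 Pa and 50000 Pa takes the simple average of the two surrounding values.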
5.13.3 RECORD3
The third record is used to input the pregrid output names to the regridder program. The file names include the root of the file name (up to but not including the ":", and may include directory information). The character string after the ":" is the date, which is internally generated by the regridder program based on the information provided in RECORD1. For example, to input the file "../test/FILE:1996-07-30_00:00:00", data would be entered as given below. Multiple files for the same time may be used as input; it is typical, for example, for the sea-surface temperature to be defined in a different file than the wind fields. The user appends as many file names as required on the root_nml line (a limit of 20 is currently enforced). The optional constants_full_name is the name of a file that may contain fields which are to be kept constant through all time periods. This is mostly useful for fields like SST or snow cover which may frequently be missing from archives. There are also some special (and optional) albedo datasets which have been prepared in the intermediate format, and are best accessed through constants_full_name. The terrain_file_name is the file name of the output file from the terrain program.

&record3
 root                = '../test/FILE',
 constants_full_name = './SST-CONSTANT',
 terrain_file_name   = './terrain' /
5.13.4 RECORD4
The fourth record controls the print-out in the regridder program. Until something goes wrong, keep everything turned off.

&record4
 print_echo            = .FALSE.,
 print_debug           = .FALSE.,
 print_mask            = .FALSE.,
 print_interp          = .FALSE.,
 print_link_list_store = .FALSE.,
 print_array_store     = .FALSE.,
 print_header          = .FALSE.,
 print_output          = .FALSE.,
 print_file            = .FALSE.,
 print_f77_info        = .TRUE. /
5.13.5 RECORD5
The fifth record controls the tropical cyclone bogussing scheme. Unless you are dealing with tropical cyclones or hurricanes, keep this turned off (insert_bogus_storm = .FALSE.).

&record5
 insert_bogus_storm = .FALSE.
 num_storm          = 1
 latc_loc           = 36.
 lonc_loc           = -35.
 vmax               = 50.
/
5.14 REGRID tar File
The regrid.tar file contains the following files and directories:

REGRID/README           general information about REGRID
REGRID/CHANGES          description of changes since earlier releases
REGRID/configure.rules  rules for make
REGRID/pregrid          pregrid directory
REGRID/regridder        regridder directory
REGRID/Makefile         Makefile to create REGRID executables
REGRID/regrid.deck      batch job deck for NCAR's Cray
I would further direct your attention to the directory REGRID/pregrid/Doc/html, which contains documentation on REGRID in html format. Direct your web browser to REGRID/pregrid/Doc/html/Documentation_Home.html. The contents of the html documentation are approximately what is in these tutorial notes, though the organization and details differ.
5.15 Data
Users with NCAR computing accounts have ready access to a variety of gridded meteorological analyses and forecast products. Some of the more commonly-used archived analyses include:
  NCEP GDAS
  NCEP/NCAR Reanalysis
  NCEP EDAS
  ECMWF Reanalysis
  MRF/AVN "final analysis"
Users who do not have NCAR computing accounts must find their own ways to get gridded analyses or forecasts. Various real-time analyses and forecasts may be available via anonymous ftp, for example. Real-time analyses and forecasts that have been found useful include:
  NCEP ETA analysis and forecast
  NCEP AVN analysis and forecast
Information on NCEP real-time analyses and forecasts may be found at http://www.emc.ncep.noaa.gov and http://www.emc.ncep.noaa.gov/data
5.15.1 NCEP GDAS
The NCEP GDAS (Global Data Assimilation System) analysis as archived at NCAR is the traditional option for analyses. Analyses are available every 12 hours. Data are archived on a 2.5 degree x 2.5 degree lat/lon grid. Data are available from the mid 1970's to recent months (updated periodically). Through March 1997, data are in ON84 format; beginning in April 1997, data are in GRIB format. For more information see http://dss.ucar.edu/datasets/ds082.0 (for the ON84-formatted data through March 1997) or http://dss.ucar.edu/datasets/ds083.0 (for the GRIB-formatted data beginning April 1997).
Peculiarities/Caveats:
• Northern and southern hemispheres of a given global field are archived as separate fields.
• Snow-cover data are archived sporadically in the ON84 dataset.
• SST are archived once per day in the ON84 dataset.
5.15.2 NCEP/NCAR Reanalysis
This data set is a global analysis beginning in 1948, using a single analysis system for the entire dataset. Analyses are available every six hours. Data are archived on a 2.5 x 2.5 degree lat/lon grid and a gaussian grid (~1.9 degrees lat, 2.5 degrees lon). The data REGRID accesses are in GRIB format. For further details on the NCEP/NCAR Reanalysis Project, see http://dss.ucar.edu/pub/reanalysis.
Peculiarities/Caveats:
• Much of the surface data are actually six-hour forecasts.
• Sea-surface temperature is not archived. Skin temperature may be used with some caution. Be aware that ground temperatures and sea-surface temperatures at coastlines may be unrealistic.
5.15.3 NCEP Eta
The NCEP Eta is a regional analysis and forecast for North America.
Peculiarities/Caveats:
• Sea-surface temperature is not archived. Skin temperature may be used with some caution. Be aware that ground temperatures and sea-surface temperatures at coastlines may be unrealistic.
• The archived specific humidity is converted to relative humidity by pregrid.
For further information about the Eta archives at NCAR, see http://dss.ucar.edu/pub/gcip. For further information about real-time Eta analyses and forecasts, including where to find them, see http://www.emc.ncep.noaa.gov/mmb/research/meso.products.html
5.15.4 NCEP AVN
The NCEP AVN is a global analysis and forecast. Products are available in real time from NCEP. For further information about the real-time AVN analyses and forecasts, see http://www.emc.ncep.noaa.gov/modelinfo
5.15.5 ECMWF TOGA Global Analysis
The archives at NCAR begin January 1985. Data are archived on a 2.5 x 2.5 degree lat/lon grid. Times are 00 UTC and 12 UTC.
Peculiarities/Caveats:
• Sea-surface temperature is not archived. Skin temperature may be used with some caution. Be aware that ground temperatures and sea-surface temperatures at coastlines may be unrealistic.
• Geopotential must be converted to geopotential height by REGRID. Thus, both geopotential and geopotential height must be specified in the Vtable.
• ECMWF uses different parameter tables (i.e., GRIB code numbers) than NCEP for many variables.
Use of this archive is restricted: NCAR may distribute this data to US scientists, scientists visiting US organizations, and Canadian scientists affiliated with UCAR member organizations only. This data must not be used for commercial purposes. ECMWF must be given credit in any publications in which this data is used. A permission form must be signed and returned to DSS for use of this data. For further information about the ECMWF TOGA analysis archives at NCAR, see http://dss.ucar.edu/datasets/ds111.2.
5.15.6 ECMWF Reanalysis (ERA15)
The ECMWF Reanalysis is a global analysis of 15 years’ worth of data. Archives are from Jan 1979 through Dec 1993. Data are archived on a 2.5 x 2.5 degree lat/lon grid, in GRIB format.
Peculiarities/Caveats:
• Sea-surface temperature is not archived. Skin temperature may be used with some caution. Be aware that ground temperatures and sea-surface temperatures at coastlines may be unrealistic.
• Geopotential must be converted to geopotential height by REGRID. Thus, both geopotential and geopotential height must be specified in the Vtable.
• ECMWF uses different parameter tables (i.e., GRIB code numbers) than NCEP for many variables.
Use of this archive is restricted: NCAR may distribute this data to US scientists, scientists visiting US organizations, and Canadian scientists affiliated with UCAR member organizations only. This data must not be used for commercial purposes. ECMWF must be given credit in any publications in which this data is used. A permission form must be signed and returned to DSS for use of this data. For further information about the ECMWF Reanalysis archives at NCAR, see http://dss.ucar.edu/pub/reanalysis.html.
5.15.7 ECMWF Reanalysis (ERA40)
The ECMWF Reanalysis is a global analysis of 40 years’ worth of data. Archives are from Sep 1957 through Aug 2002. Data are archived on a 2.5 x 2.5 degree lat/lon grid, in GRIB format.
Peculiarities/Caveats:
• Sea-surface temperature is not archived. Skin temperature may be used with some caution. Be aware that ground temperatures and sea-surface temperatures at coastlines may be unrealistic.
• Geopotential must be converted to geopotential height by REGRID. Thus, both geopotential and geopotential height must be specified in the Vtable.
• ECMWF uses different parameter tables (i.e., GRIB code numbers) than NCEP for many variables.
• The soil data are available in levels 0-7, 7-28, 28-100, and 100-255 cm.
Use of this archive is restricted: NCAR may distribute this data to US scientists, scientists visiting US organizations, and Canadian scientists affiliated with UCAR member organizations only. This data must not be used for commercial purposes. ECMWF must be given credit in any publications in which this data is used. A permission form must be signed and returned to DSS for use of this data. For further information about the ECMWF Reanalysis archives at NCAR, see http://dss.ucar.edu/pub/reanalysis.html.
5.15.8 Other data
Daily global SST analyses in GRIB format are available via ftp from NCEP at ftp://ftpprd.ncep.noaa.gov/pub/emc/mmab/history/sst/rtg_sst_grb_0.5.
Daily Northern Hemisphere snow-cover analyses in GRIB format are available via ftp from NCEP at ftp://ftp.ncep.noaa.gov/pub/gcp/sfcflds/oper/live.
Northern and southern hemispheric sea-ice datasets are available in near-real-time through the National Snow and Ice Data Center. See the REGRID/pregrid/nise/README file for details.
Data are at approximately 1/4 degree spacing. A separate pregrid program has been set up to ingest these data.
Global soil temperature and moisture fields (the “AGRMET” dataset) have been graciously provided by the United States Air Force Weather Agency’s (AFWA) Specialized Models Team. The data are archived with a 1 to 2 month delay, on NCAR’s MSS: /MESOUSER/DATASETS/AGRMET/AGRMET_.tar. The files are on the order of 1 Gigabyte of data per month. Global fields every three hours on a 0.5 x 0.5 degree grid are available. The authors would like to express their appreciation to the Air Force Weather Agency’s Specialized Models Team for providing the AGRMET data.
Two albedo datasets are available, intended for use in MM5 with the NOAH LSM. The maximum snow albedo is in file REGRID/regridder/ALMX_FILE. This is a global, 1.0 x 1.0 degree dataset, in REGRID intermediate format. A monthly climatological albedo dataset (without snow) is available on NCAR’s MSS, /MESOUSER/DATASETS/REGRID/MONTHLY_ALBEDO.TAR.gz. This is an approximately 15-km global dataset, again in REGRID intermediate format.
6 Objective Analysis (little_r)
Contents:
Purpose of Objective Analysis
RAWINS or LITTLE_R?
Source of Observations
Objective Analysis techniques in LITTLE_R and RAWINS
  Cressman Scheme
  Ellipse Scheme
  Banana Scheme
  Multiquadric scheme
Quality Control for Observations
  Quality Control on Individual Reports
  The ERRMAX test
  The Buddy test
Additional Observations
Surface FDDA option
Objective Analysis on Model Nests
How to Run LITTLE_R
  Get the source code
  Generate the executable
  Prepare the observations files
  Edit the namelist for your specific case
  Run the program
  Check your output
Output Files
  LITTLE_R_DOMAIN#
  SFCFDDA_DOMAIN#
  results_out_[sfc_fdda_]YYYY-MM-DD_HH:mm:ss:tttt
  useful_out_[sfc_fdda_]YYYY-MM-DD_HH:mm:ss:tttt
  discard_out_[sfc_fdda_]YYYY-MM-DD_HH:mm:ss:tttt
  qc_out_[sfc_fdda_]YYYY-MM-DD_HH:mm:ss:tttt
  obs_out_[sfc_fdda_]YYYY-MM-DD_HH:mm:ss:tttt
  plotobs_out_[sfc_fdda_]YYYY-MM-DD_HH:mm:ss:tttt
Plot Utilities
  plot_sounding
  plot_level
LITTLE_R Observations Format
  QCFlags
LITTLE_R Namelist
Fetch.deck
6.1 Purpose of Objective Analysis
The goal of objective analysis in meteorological modeling is to improve meteorological analyses (the first guess) on the mesoscale grid by incorporating information from observations. Traditionally, these observations have been “direct” observations of temperature, humidity, and wind from surface and radiosonde reports. As remote sensing techniques come of age, more and more “indirect” observations are available to researchers and operational modelers. Effective use of these indirect observations for objective analysis is not a trivial task. Methods commonly employed for indirect observations include three-dimensional or four-dimensional variational techniques (“3DVAR” and “4DVAR”, respectively), which can be used for direct observations as well.
The MM5 system has long included packages for objective analysis of direct observations: the RAWINS program and the LITTLE_R program. A recent addition to the MM5 system is the 3DVAR package, which allows for variational assimilation of both direct observations and certain types of indirect observations. This chapter discusses the objective analysis program, LITTLE_R, which is perhaps best suited to new MM5 users. Some reference is made to the older RAWINS program (some details are available in Appendix F). Discussion of 3DVAR is reserved for Appendix E.
The analyses input to LITTLE_R and RAWINS as the first guess are usually fairly low-resolution analyses output from program REGRID. LITTLE_R and RAWINS may also use an MM5 forecast (through a back-end interpolation from sigma to pressure levels) as the first guess. LITTLE_R and RAWINS capabilities include:
• Choice of Cressman-style or Multiquadric objective analysis.
• Various tests to screen the data for suspect observations.
• Procedures to input bogus data.
• Expanded Grid: If you used an expanded grid in TERRAIN and REGRID, the objective analysis can incorporate data from outside your grid to improve analyses near the boundaries. These programs cut down the expanded grid to the unexpanded dimensions on output.
Output from the objective analysis programs is used to:
• Provide fields for initial and boundary conditions (through program INTERPF).
• Provide 3-d fields for analysis-nudging FDDA (through program INTERPF).
• Provide surface fields for surface-analysis-nudging FDDA.

6.2 RAWINS or LITTLE_R?
Users are strongly encouraged to use LITTLE_R for the objective analysis step. Most of what you’ll need to do is done more easily in LITTLE_R than in RAWINS.
6.3 Source of Observations
Input of observations is perhaps the greatest difference between LITTLE_R and RAWINS. RAWINS was developed around a specific set of data in a specific format. Incorporating data into RAWINS from unexpected sources or in different formats tends to be a challenge. LITTLE_R specifies its own format for input (which has its own challenges), but is better suited to users adapting their own data. RAWINS incorporates data from NCEP operational global surface and upper-air observations subsets as archived by the Data Support Section (DSS) at NCAR.
• Upper-air data: RAOBS (ADPUPA), in NMC ON29 format.
• Surface data: NMC Surface ADP data, in NMC ON29 format.
NMC Office Note 29 can be found in many places on the World Wide Web, including:
http://www.emc.ncep.noaa.gov/mmb/papers/keyser/on29.htm

LITTLE_R reads observations provided by the user in formatted ASCII text files. The LITTLE_R tar file includes programs for converting the above NMC ON29 files into the LITTLE_R Observations Format. A user-contributed (i.e., unsupported) program is available on the MM5 ftp site for converting observations files from the GTS to LITTLE_R format. Users are responsible for converting other observations they may want to provide LITTLE_R into the LITTLE_R format. The details of this format are provided in section 6.12.
6.4 Objective Analysis techniques in LITTLE_R and RAWINS

6.4.1 Cressman Scheme
Three of the four objective analysis techniques used in LITTLE_R and RAWINS are based on the Cressman scheme, in which several successive scans nudge a first-guess field toward the neighboring observed values.
The standard Cressman scheme assigns to each observation a circular radius of influence R. The first-guess field at each gridpoint P is adjusted by taking into account all the observations which influence P. The differences between the first-guess field and the observations are calculated, and a distance-weighted average of these difference values is added to the value of the first-guess at P. Once all gridpoints have been adjusted, the adjusted field is used as the first guess for another adjustment cycle. Subsequent passes each use a smaller radius of influence.
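The correction pass just described can be sketched in a few lines (an illustrative Python sketch, not MM5 code; the classic Cressman weight function w = (R*R - d*d)/(R*R + d*d) is assumed):

```python
def cressman_weight(d, R):
    """Classic Cressman weight: 1.0 at the observation, falling to 0.0 at distance R."""
    if d >= R:
        return 0.0
    return (R * R - d * d) / (R * R + d * d)

def adjust_point(first_guess, obs, R):
    """One Cressman correction pass at a single grid point P.

    obs is a list of (distance_to_P, observed_value, first_guess_at_obs)
    tuples; only observations within the radius of influence R affect P.
    """
    num = den = 0.0
    for d, obs_val, fg_at_obs in obs:
        w = cressman_weight(d, R)
        num += w * (obs_val - fg_at_obs)   # distance-weighted differences
        den += w
    if den == 0.0:
        return first_guess                 # no observations influence P
    return first_guess + num / den

# Two observations inside R influence P; the far one gets zero weight:
print(adjust_point(10.0,
                   [(0.0, 12.0, 10.0), (50.0, 11.0, 10.0), (200.0, 99.0, 10.0)],
                   R=100.0))              # → 11.625
```

A full pass would repeat adjust_point over every grid point, and subsequent scans would use a smaller R, as described above.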
[Figure: The standard Cressman scheme. Observations O1 and O2 fall within the circular radius of influence and affect grid point P; O3 does not.]

6.4.2 Ellipse Scheme
In analyses of wind and relative humidity (fields strongly deformed by the wind) at pressure levels, the circles from the standard Cressman scheme are elongated into ellipses oriented along the flow. The stronger the wind, the greater the eccentricity of the ellipses. This scheme reduces to the circular Cressman scheme under low-wind conditions.
[Figure: The ellipse scheme. The region of influence around grid point P is elongated along the streamline through observations O1 and O2.]
6.4.3 Banana Scheme
In analyses of wind and relative humidity at pressure levels, the circles from the standard Cressman scheme are elongated in the direction of the flow and curved along the streamlines. The result is a banana shape. This scheme reduces to the Ellipse scheme under straight-flow conditions, and to the standard Cressman scheme under low-wind conditions.
6.4.4 Multiquadric scheme
The Multiquadric scheme uses hyperboloid radial basis functions to perform the objective analysis. Details of the multiquadric technique may be found in Nuss and Titley, 1994: “Use of multiquadric interpolation for meteorological objective analysis,” Mon. Wea. Rev., 122, 1611-1631. Use this scheme with caution, as it can produce some odd results in areas where only a few observations are available.
6.5 Quality Control for Observations
A critical component of LITTLE_R and RAWINS is the screening for bad observations. Many of these QC checks are done automatically in RAWINS (with no user control); they are optional in LITTLE_R.

6.5.1 Quality Control on Individual Reports
Most of these QC checks are done automatically in RAWINS (with no user control); most are optional in LITTLE_R.
• Gross Error Checks (sane values, pressure decreases with height, etc.).
• Remove spikes from temperature and wind profiles.
• Adjust temperature profiles to remove superadiabatic layers.
• No comparisons to other reports or to the first-guess field.
6.5.2 The ERRMAX test The ERRMAX quality-control check is optional (but highly recommended) in both LITTLE_R and RAWINS.
• Limited user control over data removal. The user may set thresholds which vary the tolerance of the error check.
• Observations are compared to the first-guess field.
• If the difference value (obs - first-guess) exceeds a certain threshold, the observation is discarded.
• Threshold varies depending on the field, level, and time of day.
• Works well with a good first-guess field.

6.5.3 The Buddy test
The Buddy check is optional (but highly recommended) in both LITTLE_R and RAWINS.
• Limited user control over data removal. The user may set weighting factors which vary the tolerance of the error check.
• Observations are compared to both the first guess and neighboring observations.
• If the difference value of an observation (obs - first-guess) varies significantly from the distance-weighted average of the difference values of neighboring observations, the observation is discarded.
• Works well in regions with good data density.
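Both checks reduce to simple comparisons against the (obs - first-guess) difference values. A minimal Python sketch of the two decisions follows; the threshold, radius, and tolerance values are hypothetical stand-ins, not the actual LITTLE_R tolerances:

```python
def errmax_keep(obs, first_guess, threshold):
    """Error-maximum check: keep the observation only if its departure
    from the first guess is within the threshold."""
    return abs(obs - first_guess) <= threshold

def buddy_keep(diff, neighbors, radius, tolerance):
    """Buddy check: compare this observation's difference (obs - first guess)
    to the distance-weighted average difference of neighboring observations.

    neighbors is a list of (difference, distance) pairs for nearby observations.
    """
    num = den = 0.0
    for nbr_diff, dist in neighbors:
        if dist < radius:
            w = (radius**2 - dist**2) / (radius**2 + dist**2)  # Cressman-style weight
            num += w * nbr_diff
            den += w
    if den == 0.0:
        return True    # no buddies found: this test cannot reject the observation
    return abs(diff - num / den) <= tolerance

# A 10 K departure against a (hypothetical) 7 K threshold is discarded:
print(errmax_keep(283.0, 293.0, threshold=7.0))                                  # → False
# A 2 K departure whose neighbors also depart by about 2 K is kept:
print(buddy_keep(2.0, [(1.8, 10.0), (2.2, 20.0)], radius=100.0, tolerance=1.0))  # → True
```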
6.6 Additional Observations
Input of additional observations, or modification of existing (and erroneous) observations, can be a useful tool at the objective analysis stage.
In LITTLE_R, additional observations are provided to the program the same way (in the same format) as standard observations. Indeed, additional observations must be in the same file as the rest of the observations. Existing (erroneous) observations can be modified easily, as the observations input format is ASCII text. Identifying an observation report as “bogus” simply means that it is assumed to be good data -- no quality control is performed for that report.
In RAWINS, the methods of adding or modifying observations are rather difficult to work with, requiring additional files with cryptic conventions. All observations provided through these files are assumed to be “good”; no quality control is performed on these observations. Don’t try this unless it’s absolutely necessary, and you’re the patient sort. However, some people actually manage to use these procedures successfully. See notes on NBOGUS, KBOGUS, NSELIM in Appendix F.
6.7 Surface FDDA option
The surface FDDA option creates additional analysis files for the surface only, usually with a smaller time interval between analyses (i.e., more frequently) than the full upper-air analyses. The purpose of these surface analysis files is for later use in MM5 with the surface analysis nudging option. This capability is turned on by setting the namelist option F4D = .TRUE., and selecting the time interval in seconds for the surface analyses with option INTF4D.
A separate set of observations files is needed for the surface FDDA option in LITTLE_R. These files must be listed by the namelist record2 option sfc_obs_filename. A separate observations file must be supplied for each analysis time from the start date to the end date at time interval INTF4D. The LAGTEM option controls how the first-guess field is created for surface analysis files. Typically, the surface and upper-air first-guess is available at twelve-hour intervals (00 Z and 12 Z), while the surface analysis interval may be set to 3 hours (10800 seconds). So at 00 Z and 12 Z, the available surface first-guess is used. If LAGTEM is set to .FALSE., the surface first-guess at other times will be temporally interpolated from the first-guess at 00 Z and 12 Z. If LAGTEM is set to .TRUE., the surface first guess at other times is the objective analysis from the previous time.
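For a single grid-point value, the LAGTEM = .FALSE. behavior amounts to linear interpolation in time between the bracketing first-guess analyses (a minimal illustration; the function and variable names here are hypothetical, not LITTLE_R code):

```python
def interp_first_guess(t, t0, fg0, t1, fg1):
    """Temporal interpolation of a surface first-guess value (LAGTEM = .FALSE.).

    t, t0, t1 are times in seconds; fg0 and fg1 are the first-guess values
    at the bracketing analysis times (e.g. 00 Z and 12 Z).
    """
    frac = (t - t0) / float(t1 - t0)
    return fg0 + frac * (fg1 - fg0)

# The 03 Z first guess between a 00 Z value of 280 K and a 12 Z value of 288 K:
print(interp_first_guess(10800, 0, 280.0, 43200, 288.0))   # → 282.0
```

With LAGTEM = .TRUE., the previous time’s objective analysis would be used in place of this interpolation.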
6.8 Objective Analysis on Model Nests LITTLE_R and RAWINS have the capability to perform the objective analysis on a nest. This is done manually with a separate LITTLE_R or RAWINS process, performed on REGRID files for the particular nest. Often, however, such a step is unnecessary; it complicates matters for the user and may introduce errors into the forecast. At other times, extra information available to the user, or extra detail that objective analysis may provide on a nest, makes objective analysis on a nest a good option. The main reason to do objective analysis on a nest is if you have observations available with horizontal resolution somewhat greater than the resolution of your coarse domain. There may also be circumstances in which the representation of terrain on a nest allows for better use of surface observations (i.e., the model terrain better matches the real terrain elevation of the observation). The main problem introduced by doing objective analysis on a nest is inconsistency in initial conditions between the coarse domain and the nest. Observations that fall just outside a nest will be used in the analysis of the coarse domain, but discarded in the analysis of the nest. With different observations used right at a nest boundary, one can get very different analyses.
6.9 How to run LITTLE_R

6.9.1 Get the source code
The source code is available via anonymous ftp, at:
   ftp://ftp.ucar.edu/mesouser/MM5V3/LITTLE_R.TAR.gz
Download this file to your local machine, uncompress it, and untar it:
   gzip -d LITTLE_R.TAR.gz
   tar -xvf LITTLE_R.TAR
You should now have a directory called LITTLE_R. Change to that directory:
   cd LITTLE_R
6.9.2 Generate the executable
The LITTLE_R executable is generated through the Make utility. For a variety of common platforms and architectures, the Makefile is already set up to build the executable. Simply type:
   make
If your system is a little unusual, you may find yourself having to edit options in the Makefile.

6.9.3 Prepare the observations files
For the tutorial exercises, there are prepared observations files for you to use. See the notes on the assignment.
A program is available for users with access to NCAR’s computers to download archived observations and reformat them into the LITTLE_R Observations Format. See the information about the “fetch.deck” program in section 6.14. A program is also available for reformatting observations from the GTS stream. For other sources of data, the user is responsible for putting data into the LITTLE_R Observations Format; hence the detailed discussion of the observations format in section 6.12.
In general, there are two overall strategies for organizing observations into observations files. The easiest strategy is to simply put all observations into a single file. The second strategy, which saves LITTLE_R some processing time, is to sort observations into separate files by time.

6.9.4 Edit the namelist for your specific case
For details about the namelist, see section 6.13. The settings you will change most often are the start date, end date, and file names. Pay particularly careful attention to the file name settings. Mistakes in observations file names can go unnoticed, because LITTLE_R will happily process the wrong files, and if there are no data in the (wrongly-specified) file for a particular time, LITTLE_R will happily provide you with an analysis of no observations.

6.9.5 Run the program
Run the program by invoking the command:
   little_r >! print.out
The “>! print.out” part of that command simply redirects printout into a file called “print.out”.

6.9.6 Check your output
Examine the “print.out” file for error messages or warning messages. The program should have created the file called “LITTLE_R_DOMAIN<#>”, named according to the domain number. Additional output files containing information about observations found, used, and discarded will probably be created as well. Important things to check include the number of observations found for your objective analysis, and the number of observations used at various levels. This can alert you to possible problems in specifying observations files or time intervals. This information is included in the printout file.
You may also want to experiment with a couple of simple plot utility programs, and there are a number of additional output files which you might find useful; both are discussed below.
6.10 Output Files
The LITTLE_R program generates several ASCII text files to detail the actions taken on observations through a time cycle of the program (sorting, error checking, quality control flags, vertical interpolation). In support of users wishing to plot the observations used for each variable (at each level, at each time), a file is created with this information. Primarily, the ASCII text files are for consumption by the developers for diagnostic purposes. The main output of the LITTLE_R program is the gridded, pressure-level data set to be passed to the INTERPF program (file LITTLE_R_DOMAIN<#>).
In each of the files listed below, the text “_YYYY-MM-DD_HH:mm:ss.tttt” allows each time period that is processed by LITTLE_R to output a separate file. The only unusual information in the date string is the final four digits “tttt”, which give the decimal time to ten thousandths (!) of a second. The bracketed “[sfc_fdda_]” indicates that the surface FDDA option of LITTLE_R creates the same set of files with the string “sfc_fdda_” inserted.

6.10.1 LITTLE_R_DOMAIN<#>
The final analysis file at surface and pressure levels. Generating this file is the primary goal of running LITTLE_R.

6.10.2 SFCFDDA_DOMAIN<#>
Use of the surface FDDA option in LITTLE_R creates a file called “SFCFDDA_DOMAIN<#>”. This file contains the surface analyses at INTF4D intervals: analyses of T, u, v, RH, qv, psfc, pmsl, and a count of observations within 250 km of each grid point.

6.10.3 result_out_[sfc_fdda_]YYYY-MM-DD_HH:mm:ss.tttt
This file contains a listing of all of the observations available for use by the LITTLE_R program. The observations have been sorted and the duplicates have been removed. Observations outside of the analysis region have been removed. Observations with no information have been removed. All reports for each separate location (different levels but at the same time) have been combined to form a single report.
Interspersed with the output data are lines to separate each report. This file contains reports discarded for QC reasons.

6.10.4 useful_out_[sfc_fdda_]YYYY-MM-DD_HH:mm:ss.tttt
This file contains a listing of all of the observations available for use by the LITTLE_R program. The observations have been sorted and the duplicates have been removed. Observations outside of the analysis region have been removed. Observations with no information have been removed. All reports for each separate location (different levels but at the same time) have been combined to form a single report. Data which have had the “discard” flag internally set (data which will not be sent to the quality control or objective analysis portions of the code) are not listed in this output. No additional lines are introduced to the output, allowing this file to be reused as an observation input file.

6.10.5 discard_out_[sfc_fdda_]YYYY-MM-DD_HH:mm:ss.tttt
This file only contains a listing of the discarded reports. This is a good place to begin a search to determine why an observation didn’t make it into the analysis. This file has additional lines interspersed within the output to separate each report.

6.10.6 qc_out_[sfc_fdda_]YYYY-MM-DD_HH:mm:ss.tttt
The information contained in the qc_out file is similar to the useful_out. The data have gone through a more expensive test to determine if each report is within the analysis region, and the data have been given various quality control flags. Unless a blatant error in the data is detected (such as a negative sea-level pressure), the observation data are not typically modified, but only assigned quality control flags. Any data failing the error maximum or buddy check tests are not used in the objective analysis.

6.10.7 obs_used_for_oa_out_[sfc_fdda_]YYYY-MM-DD_HH:mm:ss.tttt
This file lists data by variable and by level, where each observation that has gone into the objective analysis is grouped with all of the associated observations for plotting or some other diagnostic purpose. The first line of this file is the FORTRAN format required to input the data. There are titles over the data columns to aid in identifying the information. Below are a few lines from a typical file:

( 3x,a8,3x,i6,3x,i5,3x,a8,3x,2(g13.6,3x),2(f7.2,3x),i7 )
Number of Observations  00001214
Variable   Press    Obs      Station   Obs           Obs-1st       X          Y          QC
Name       Level    Number   ID        Value         Guess         Location   Location   Value
U          1001     1        CYYT       6.39806       4.67690      161.51     122.96     0
U          1001     2        CWRA       2.04794       0.891641     162.04     120.03     0
U          1001     3        CWVA       1.30433      -1.80660      159.54     125.52     0
U          1001     4        CWAR       1.20569       1.07567      159.53     121.07     0
U          1001     5        CYQX       0.470500     -2.10306      156.58     125.17     0
U          1001     6        CWDO       0.789376     -3.03728      155.34     127.02     0
U          1001     7        CWDS       0.846182      2.14755      157.37     118.95     0
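Since the file’s first line gives the Fortran input format, the data columns can be sliced directly by field width. A Python sketch (the skip/width values are transcribed from the format line; the sample line reconstructs the first data row with the same widths):

```python
def parse_oa_line(line):
    """Parse one data line of obs_used_for_oa_out, following the Fortran format
    ( 3x,a8,3x,i6,3x,i5,3x,a8,3x,2(g13.6,3x),2(f7.2,3x),i7 )."""
    fields = []
    pos = 0
    for skip, width, conv in [(3, 8, str), (3, 6, int), (3, 5, int), (3, 8, str),
                              (3, 13, float), (3, 13, float),
                              (3, 7, float), (3, 7, float), (3, 7, int)]:
        pos += skip                                    # skip the 3x spacer
        fields.append(conv(line[pos:pos + width].strip()))
        pos += width
    return fields

# Reconstruct the first data line of the listing with the same field widths:
line = ("   " + f"{'U':>8}" + "   " + f"{1001:6d}" + "   " + f"{1:5d}"
        + "   " + f"{'CYYT':>8}" + "   " + f"{6.39806:13.6g}" + "   " + f"{4.67690:13.6g}"
        + "   " + f"{161.51:7.2f}" + "   " + f"{122.96:7.2f}" + "   " + f"{0:7d}")
print(parse_oa_line(line))   # → ['U', 1001, 1, 'CYYT', 6.39806, 4.6769, 161.51, 122.96, 0]
```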
6.10.8 plotobs_out_[sfc_fdda_]YYYY-MM-DD_HH:mm:ss.tttt
Observations files used by the plotting program plot_level.
6.11 Plot Utilities
The LITTLE_R package provides two utility programs for plotting observations. These programs are called “plot_soundings” and “plot_level”. These optional programs use NCAR Graphics, and are built automatically if the PROGS option in the top-level Makefile is set to $(I_HAVE_NCARG). Both programs prompt the user for additional input options.

6.11.1 plot_soundings
Program plot_soundings plots soundings. This program generates soundings from either the quality-controlled (“qc_out_yyyy-mm-dd_hh:mm:ss:ffff”) or the non-quality-controlled (“useful_out_yyyy-mm-dd_hh:mm:ss:ffff”) upper-air data. Only data that are on the requested analysis levels are processed. The program asks the user for an input filename, and creates the file “soundings.cgm”.

6.11.2 plot_level
Program plot_level creates station plots for each analysis level. These plots contain both observations that have passed all QC tests and observations that have failed the QC tests. Observations that have failed the QC tests are plotted in various colors according to which test was failed. The program prompts the user for a date of the form yyyymmddhh, and plots the observations from file “plotobs_out_yyyy-mm-dd_hh:00:00:00.0000”. The program creates the file “levels.cgm”.
6.12 LITTLE_R Observations Format
To make the best use of the LITTLE_R program, it is important for users to understand the LITTLE_R Observations Format. Observations are conceptually organized in terms of reports. A report consists of a single observation or set of observations associated with a single latitude/longitude coordinate. Examples:

• a surface station report including observations of temperature, pressure, humidity, and winds.
• an upper-air station’s sounding report with temperature, humidity, and wind observations at many height or pressure levels.
• an aircraft report of temperature at a specific lat/lon/height.
• a satellite-derived wind observation at a specific lat/lon/height.

Each report in the LITTLE_R Observations Format consists of at least four records:
• a report header record
• one or more data records
• an end data record
• an end report record

The report header record is a 600-character-long record (don’t worry, much of it is unused and needs only dummy values) which contains certain information about the station and the report as a whole: location, station id, station type, station elevation, etc. The report header record is described fully in the following table; items not used on input need only dummy values:
Report header format:

Variable          Fortran I/O format   Description
latitude          F20.5                station latitude (north positive)
longitude         F20.5                station longitude (east positive)
id                A40                  ID of station
name              A40                  Name of station
platform          A40                  Description of the measurement device
source            A40                  GTS, NCAR/ADP, BOGUS, etc.
elevation         F20.5                station elevation (m)
num_vld_fld       I10                  Number of valid fields in the report
num_error         I10                  Number of errors encountered during the decoding of this observation
num_warning       I10                  Number of warnings encountered during the decoding of this observation
seq_num           I10                  Sequence number of this observation
num_dups          I10                  Number of duplicates found for this observation
is_sound          L10                  T/F: multiple levels or a single level
bogus             L10                  T/F: bogus report or normal one
discard           L10                  T/F: duplicate and discarded (or merged) report
sut               I10                  Seconds since 0000 UTC 1 January 1970
julian            I10                  Day of the year
date_char         A20                  YYYYMMDDHHmmss
slp, qc           F13.5, I7            Sea-level pressure (Pa) and a QC flag
ref_pres, qc      F13.5, I7            Reference pressure level (for thickness) (Pa) and a QC flag
ground_t, qc      F13.5, I7            Ground temperature (K) and QC flag
sst, qc           F13.5, I7            Sea-surface temperature (K) and QC
psfc, qc          F13.5, I7            Surface pressure (Pa) and QC
precip, qc        F13.5, I7            Precipitation accumulation and QC
t_max, qc         F13.5, I7            Daily maximum T (K) and QC
t_min, qc         F13.5, I7            Daily minimum T (K) and QC
t_min_night, qc   F13.5, I7            Overnight minimum T (K) and QC
p_tend03, qc      F13.5, I7            3-hour pressure change (Pa) and QC
p_tend24, qc      F13.5, I7            24-hour pressure change (Pa) and QC
cloud_cvr, qc     F13.5, I7            Total cloud cover (oktas) and QC
ceiling, qc       F13.5, I7            Height (m) of cloud base and QC
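Because the header is fixed-format text, its leading fields can be sliced by the column widths in the header table. A hypothetical Python sketch (only the first seven fields are parsed; the sample station values are invented for illustration):

```python
def parse_header_start(record):
    """Slice the leading fields of a LITTLE_R report header record.

    Widths follow the header table: two F20.5 reals (latitude, longitude),
    four A40 strings (id, name, platform, source), then F20.5 elevation.
    The rest of the 600-character record is left unparsed in this sketch.
    """
    return {
        "latitude":  float(record[0:20]),
        "longitude": float(record[20:40]),
        "id":        record[40:80].strip(),
        "name":      record[80:120].strip(),
        "platform":  record[120:160].strip(),
        "source":    record[160:200].strip(),
        "elevation": float(record[200:220]),
    }

# Build a sample header fragment with the same widths (illustrative values):
hdr = (f"{47.46100:20.5f}" + f"{-111.38500:20.5f}"
       + "72776".ljust(40) + "GREAT FALLS".ljust(40)
       + "FM-35 TEMP".ljust(40) + "GTS".ljust(40)
       + f"{1131.00000:20.5f}")
print(parse_header_start(hdr)["id"])   # → 72776
```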
Following the report header record are the data records. These data records contain the observations of pressure, height, temperature, dewpoint, wind speed, and wind direction. There are a number of other fields in the data record that are not used on input. Each data record contains data for a single level of the report. For report types which have multiple levels (e.g., upper-air station sounding reports), each pressure or height level has its own data record. For report types with a single level (such as surface station reports or a satellite wind observation), the report will have a single data record. The data record contents and format are summarized in the following table:
Format of data records
pressure, qc     F13.5, I7   Pressure (Pa) of observation, and QC
height, qc       F13.5, I7   Height (m MSL) of observation, and QC
temperature, qc  F13.5, I7   Temperature (K) and QC
dew_point, qc    F13.5, I7   Dewpoint (K) and QC
speed, qc        F13.5, I7   Wind speed (m s-1) and QC
direction, qc    F13.5, I7   Wind direction (degrees) and QC
u, qc            F13.5, I7   u component of wind (m s-1), and QC
v, qc            F13.5, I7   v component of wind (m s-1), and QC
rh, qc           F13.5, I7   Relative humidity (%) and QC
thickness, qc    F13.5, I7   Thickness (m), and QC
The end data record is simply a data record with pressure and height fields both set to -777777. After all the data records and the end data record, an end report record must appear. The end report record is simply three integers which really aren’t all that important.
Format of end_report records
num_vld_fld   I7   Number of valid fields in the report
num_error     I7   Number of errors encountered during the decoding of the report
num_warning   I7   Number of warnings encountered during the decoding of the report
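As a concrete illustration, a data record laid out as ten (F13.5, I7) pairs can be packed and unpacked with fixed-width slicing. This is only a sketch: the field list transcribes the table above, the helper names are our own (not part of LITTLE_R), and -888888 is assumed here as the conventional missing value.

```python
# One (value, qc) pair per field, each written as F13.5 then I7,
# i.e. a 20-character chunk per field (assumed layout).
FIELDS = ["pressure", "height", "temperature", "dew_point", "speed",
          "direction", "u", "v", "rh", "thickness"]

def pack_data_record(values):
    """Write one data record; values maps field name -> (value, qc)."""
    return "".join("%13.5f%7d" % values[name] for name in FIELDS)

def parse_data_record(line):
    """Split a 200-character data record back into (value, qc) pairs."""
    out = {}
    for i, name in enumerate(FIELDS):
        chunk = line[20 * i: 20 * (i + 1)]
        out[name] = (float(chunk[:13]), int(chunk[13:]))
    return out

# Round-trip example: mostly-missing record with two valid fields.
values = {name: (-888888.0, 0) for name in FIELDS}  # assumed missing value
values["pressure"] = (85000.0, 0)
values["temperature"] = (270.15, 0)
record = pack_data_record(values)
parsed = parse_data_record(record)
```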
6.12.1 QC Flags
In the observations files, most of the meteorological data fields also have space for an additional integer quality-control flag. The quality-control values are of the form 2^n, where n takes on positive integer values. This allows the various quality-control flags to be additive yet permits the decomposition of the total sum into constituent components. Following are the current quality-control flags that are applied to observations.

pressure interpolated from first-guess height      = 2 **  1 =      2
temperature and dew point both = 0                 = 2 **  4 =     16
wind speed and direction both = 0                  = 2 **  5 =     32
wind speed negative                                = 2 **  6 =     64
wind direction < 0 or > 360                        = 2 **  7 =    128
level vertically interpolated                      = 2 **  8 =    256
value vertically extrapolated from single level    = 2 **  9 =    512
sign of temperature reversed                       = 2 ** 10 =   1024
superadiabatic level detected                      = 2 ** 11 =   2048
vertical spike in wind speed or direction          = 2 ** 12 =   4096
convective adjustment applied to temperature field = 2 ** 13 =   8192
no neighboring observations for buddy check        = 2 ** 14 =  16384
--------------------------------------------------------------------
fails error maximum test                           = 2 ** 15 =  32768
fails buddy test                                   = 2 ** 16 =  65536
observation outside of domain detected by QC       = 2 ** 17 = 131072
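Because the flags are distinct powers of two, a summed QC value can be decomposed with simple bitwise tests. A small sketch (the dictionary merely transcribes the list above):

```python
# QC flag bits as listed above; a total QC value is their sum.
QC_FLAGS = {
    2:      "pressure interpolated from first-guess height",
    16:     "temperature and dew point both = 0",
    32:     "wind speed and direction both = 0",
    64:     "wind speed negative",
    128:    "wind direction < 0 or > 360",
    256:    "level vertically interpolated",
    512:    "value vertically extrapolated from single level",
    1024:   "sign of temperature reversed",
    2048:   "superadiabatic level detected",
    4096:   "vertical spike in wind speed or direction",
    8192:   "convective adjustment applied to temperature field",
    16384:  "no neighboring observations for buddy check",
    32768:  "fails error maximum test",
    65536:  "fails buddy test",
    131072: "observation outside of domain detected by QC",
}

def decompose_qc(qc):
    """Return the individual flag bits that sum to the given QC value."""
    return [bit for bit in sorted(QC_FLAGS) if qc & bit]

# e.g. a QC value of 2304 = 256 + 2048 means the level was vertically
# interpolated and a superadiabatic level was detected.
```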
6.13 LITTLE_R Namelist
The LITTLE_R namelist file is called “namelist.input”, and must be in the directory from which LITTLE_R is run. The namelist consists of nine namelist records, named “record1” through “record9”, each having a loosely related area of content. Each namelist record, which may extend over several lines in the namelist.input file, begins with “&record<#>” (where <#> is the namelist record number) and ends with a slash “/”. The data in namelist record1 define the analysis times to process:

&record1
 start_year  = 1990
 start_month =   03
 start_day   =   13
 start_hour  =   00
 end_year    = 1990
 end_month   =   03
 end_day     =   14
 end_hour    =   00
 interval    = 21600
/
NAMELIST RECORD1
Namelist Variable   Variable Type   Description
start_year          INTEGER         4-digit year of the starting time to process
start_month         INTEGER         2-digit month of the starting time to process
start_day           INTEGER         2-digit day of the starting time to process
start_hour          INTEGER         2-digit hour of the starting time to process
end_year            INTEGER         4-digit year of the ending time to process
end_month           INTEGER         2-digit month of the ending time to process
end_day             INTEGER         2-digit day of the ending time to process
end_hour            INTEGER         2-digit hour of the ending time to process
interval            INTEGER         time interval (s) between consecutive times to process
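For instance, the record1 settings shown above (13 March 1990 00 UTC through 14 March 1990 00 UTC, every 21600 s) imply five analysis times. A quick sketch of that bookkeeping:

```python
from datetime import datetime, timedelta

def analysis_times(start, end, interval_s):
    """Enumerate the analysis times implied by namelist record1."""
    times, t = [], start
    while t <= end:
        times.append(t)
        t += timedelta(seconds=interval_s)
    return times

times = analysis_times(datetime(1990, 3, 13, 0), datetime(1990, 3, 14, 0), 21600)
# 00, 06, 12 and 18 UTC on the 13th, plus 00 UTC on the 14th
```

An obs_filename entry would be needed for each of these five times.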
The data in record2 define the names of the input files:

&record2
 fg_filename      = '../data/REGRID_DOMAIN1'
 obs_filename     = '../data/obs1300'
 obs_filename     = '../data/obs1306'
 obs_filename     = '../data/obs1312'
 sfc_obs_filename = '../data/obs1300'
 sfc_obs_filename = '../data/obs1303'
 sfc_obs_filename = '../data/obs1306'
 sfc_obs_filename = '../data/obs1309'
 sfc_obs_filename = '../data/obs1312'
/
NAMELIST RECORD2
Namelist Variable   Variable Type   Description
fg_filename         CHARACTER       file name (may include directory information) of the first-guess fields; there is only a single name
obs_filename        CHARACTER       file name(s) (may include directory information) of the observation files; one is required for each time period to run through the objective analysis
sfc_obs_filename    CHARACTER       file name(s) (may include directory information) of the observation files to be used for the surface analysis option (only when F4D=.TRUE.)
The obs_filename and sfc_obs_filename settings can get confusing, and deserve some additional explanation. Use of these files is related to the times and time interval set in namelist record1, and to the F4D options set in namelist record7. The obs_filename files are used for the analyses of the full 3D dataset, both at upper-air levels and at the surface. The obs_filename files should contain all observations, upper-air and surface, to be used for a particular analysis at a particular time. The sfc_obs_filename is used only when F4D=.TRUE., that is, if surface analyses are being created for surface FDDA nudging. The sfc_obs_filenames may be the same files as the obs_filenames, and they should probably contain both surface and upper-air observations. The designation "sfc_obs_filename" is not intended to indicate that the observations in the file are at the surface, but rather that the file is used only for a surface analysis prepared for surface FDDA nudging. There must be an obs_filename listed for each time period for which an objective analysis is desired. Time periods are processed sequentially from the starting date to the ending date by the time interval, all specified in namelist record1. For the first time period, the file named first in obs_filename is used. For the second time period, the file named second in obs_filename is used. This pattern is repeated until all files listed in obs_filename have been used. For subsequent time periods (if any), the first guess is simply passed to the output file without objective analysis. If the F4D option is selected, the files listed in sfc_obs_filename are similarly processed for surface analyses, this time with the time interval as specified by INTF4D.
The data in record3 concern the space allocated within the program for observations. These are values that should not frequently need to be modified:

&record3
 max_number_of_obs       = 10000
 fatal_if_exceed_max_obs = .TRUE.
/
NAMELIST RECORD3
Namelist Variable         Variable Type   Description
max_number_of_obs         INTEGER         anticipated maximum number of reports per time period
fatal_if_exceed_max_obs   LOGICAL         T/F flag allowing the user to decide the severity of not having enough space to store all of the available observations
The data in record4 set the quality control options. There are four specific tests that may be activated by the user:

&record4
 qc_test_error_max        = .TRUE.
 qc_test_buddy            = .TRUE.
 qc_test_vert_consistency = .FALSE.
 qc_test_convective_adj   = .FALSE.
 max_error_t              = 8
 max_error_uv             = 10
 max_error_z              = 16
 max_error_rh             = 40
 max_error_p              = 400
 max_buddy_t              = 10
 max_buddy_uv             = 12
 max_buddy_z              = 16
 max_buddy_rh             = 40
 max_buddy_p              = 400
 buddy_weight             = 1.0
 max_p_extend_t           = 1300
 max_p_extend_w           = 1300
/
NAMELIST RECORD4 - QC Options
Namelist Variable          Variable Type   Description
qc_test_error_max          LOGICAL         check the difference between the first-guess and the observation
qc_test_buddy              LOGICAL         check the difference between a single observation and neighboring observations
qc_test_vert_consistency   LOGICAL         check for vertical spikes in temperature, dew point, wind speed and wind direction
qc_test_convective_adj     LOGICAL         remove any super-adiabatic lapse rate in a sounding by conservation of dry static energy
For the error maximum tests, there is a threshold for each variable. These values are scaled for time of day, surface characteristics and vertical level.
NAMELIST RECORD4 - Error Max Tolerances
Namelist Variable   Variable Type   Description
max_error_t         REAL            maximum allowable temperature difference (K)
max_error_uv        REAL            maximum allowable horizontal wind component difference (m/s)
max_error_z         REAL            not used
max_error_rh        REAL            maximum allowable relative humidity difference (%)
max_error_p         REAL            maximum allowable sea-level pressure difference (Pa)
For the buddy check test, there is a threshold for each variable. These values are similar to standard deviations.
NAMELIST RECORD4 - Buddy Check Tolerances
Namelist Variable   Variable Type   Description
max_buddy_t         REAL            maximum allowable temperature difference (K)
max_buddy_uv        REAL            maximum allowable horizontal wind component difference (m/s)
max_buddy_z         REAL            not used
max_buddy_rh        REAL            maximum allowable relative humidity difference (%)
max_buddy_p         REAL            maximum allowable sea-level pressure difference (Pa)
buddy_weight        REAL            value by which the buddy thresholds are scaled
For satellite and aircraft observations, data are often horizontally spaced with only a single vertical level. The following two entries describe how far the user assumes that the data are valid in pressure space.
NAMELIST RECORD4 - Single Level Extension
Namelist Variable   Variable Type   Description
max_p_extend_t      REAL            pressure difference (Pa) through which a single temperature report may be extended
max_p_extend_w      REAL            pressure difference (Pa) through which a single wind report may be extended
The data in record5 control the enormous amount of print-out which may be produced by the LITTLE_R program. These values are all logical flags, where .TRUE. will generate output and .FALSE. will turn off output.

&record5
 print_obs_files = .TRUE.
 print_found_obs = .FALSE.
 print_header    = .FALSE.
 print_analysis  = .FALSE.
 print_qc_vert   = .FALSE.
 print_qc_dry    = .FALSE.
 print_error_max = .FALSE.
 print_buddy     = .FALSE.
 print_oa        = .FALSE.
/
The data in record7 concern the use of the first-guess fields and the surface FDDA analysis options. Always use the first guess.

&record7
 use_first_guess = .TRUE.
 f4d             = .TRUE.
 intf4d          = 10800
 lagtem          = .FALSE.
/
NAMELIST RECORD7
Namelist Variable   Variable Type   Description
use_first_guess     LOGICAL         always use the first guess (use_first_guess=.TRUE.)
f4d                 LOGICAL         turns on (.TRUE.) or off (.FALSE.) the creation of surface analysis files
intf4d              INTEGER         time interval in seconds between surface analysis times
lagtem              LOGICAL         use the previous time period's final surface analysis as this time period's first guess (lagtem=.TRUE.); or use a temporal interpolation between upper-air times as the first guess for this surface analysis (lagtem=.FALSE.)
The data in record8 concern the smoothing of the data after the objective analysis. The differences (observation minus first-guess) of the analyzed fields are smoothed, not the full fields:

&record8
 smooth_type       = 1
 smooth_sfc_wind   = 1
 smooth_sfc_temp   = 0
 smooth_sfc_rh     = 0
 smooth_sfc_slp    = 0
 smooth_upper_wind = 0
 smooth_upper_temp = 0
 smooth_upper_rh   = 0
/
NAMELIST RECORD8
Namelist Variable   Variable Type   Description
smooth_type         INTEGER         1 = five-point stencil of 1-2-1 smoothing; 2 = smoother-desmoother
smooth_sfc_wind     INTEGER         number of smoothing passes for surface winds
smooth_sfc_temp     INTEGER         number of smoothing passes for surface temperature
smooth_sfc_rh       INTEGER         number of smoothing passes for surface relative humidity
smooth_sfc_slp      INTEGER         number of smoothing passes for sea-level pressure
smooth_upper_wind   INTEGER         number of smoothing passes for upper-air winds
smooth_upper_temp   INTEGER         number of smoothing passes for upper-air temperature
smooth_upper_rh     INTEGER         number of smoothing passes for upper-air relative humidity
The data in record9 concern the objective analysis options. There is no user control to select the various Cressman extensions for the radius of influence (circular, elliptical or banana). If the Cressman option is selected, ellipse or banana extensions will be applied as the wind conditions warrant.

&record9
 oa_type             = 'MQD'
 mqd_minimum_num_obs = 50
 mqd_maximum_num_obs = 1000
 radius_influence    = 12
 oa_min_switch       = .TRUE.
 oa_max_switch       = .TRUE.
/
NAMELIST RECORD9
Namelist Variable     Variable Type   Description
oa_type               CHARACTER       “MQD” for multiquadric; “Cressman” for the Cressman-type scheme; this string is case sensitive
mqd_minimum_num_obs   INTEGER         minimum number of observations for MQD
mqd_maximum_num_obs   INTEGER         maximum number of observations for MQD
radius_influence      INTEGER         radius of influence in grid units for the Cressman scheme
oa_min_switch         LOGICAL         T = switch to Cressman if too few observations for MQD; F = no analysis if too few observations
oa_max_switch         LOGICAL         T = switch to Cressman if too many observations for MQD; F = no analysis if too many observations
6.14 Fetch.deck
An IBM job deck is provided to allow users with NCAR IBM access to use the traditional observation archives available to MM5 users from the NCAR Mass Storage System. It is located in the LITTLE_R/util directory, and is called “fetch.deck.ibm”. This job script retrieves the data from the archives for a requested time period, converts it to the LITTLE_R observations format, and stores these reformatted files on the Mass Storage System for the user to retrieve. The critical portion of the script is printed below:

# ********************************************
# ****** fetch interactive/batch C shell *****
# *******       NCAR IBM's only         ******
# *******          f90 only             ******
# ********************************************
#
# This shell fetches ADP data from the NCAR MSS system and converts it
# into a format suitable for the little_r program. The data are stored
# on the NCAR MSS. Three types of data files are created:
#
#   obs:DATE             : Upper-air and surface data used as input to little_r
#   surface_obs_r:DATE   : Surface data needed for FDDA in little_r (if no
#                          FDDA will be done, these are not needed, since
#                          they are also contained in obs:DATE)
#   upper-air_obs_r:DATE : Upper-air data (this file is contained in the
#                          obs:DATE file, and is not needed for input to little_r)
#
# ExpName should be the user's case or experiment (used in the MSS name).
# This is where the data will be stored on the MSS.

set ExpName = MM5V3/TEST    # MSS path name for output
set RetPd   = 365           # MSS retention period in days

# The only user inputs to the fetch program are the beginning and ending
# dates of the observations, and a bounding box for the observation search.
# These dates are given as YYYYMMDDHH. The ADP data are global, and include
# the surface observations and upper-air soundings. A restrictive bounding
# box (where possible) reduces the cost substantially.
#
# Note: No observational data are available prior to 1973, and no or
# limited surface observations are available prior to 1976.

set starting_date = 1993031300
set ending_date   = 1993031400

set lon_e =  180
set lon_w = -180
set lat_s = -90
set lat_n =  90

#########################################################
#########                                       #########
#########      END OF USER MODIFICATIONS        #########
#########                                       #########
#########################################################
7  INTERPF

Purpose 7-3
INTERPF Procedure 7-3
Surface Pressure Computation 7-5
Hydrostatic Vertical Interpolation 7-6
Integrated Mean Divergence Removal 7-6
Base State Computation 7-8
Initialization of Nonhydrostatic Model 7-9
Substrate Temperature and the LOWBDY_DOMAINn file 7-9
Shell Variables (for IBM job deck only) 7-10
Parameter Statements 7-10
FORTRAN Namelist Input File 7-11
How to Run INTERPF 7-12
INTERPF didn’t Work! What Went Wrong? 7-13
File I/O 7-14
INTERPF tar File 7-15
7.1 Purpose
The INTERPF program handles the data transformation required to go from the analysis programs to the mesoscale model. This entails vertical interpolation, diagnostic computation, and data reformatting. INTERPF takes REGRID, RAWINS, LITTLE_R, or INTERPB output data as input to generate a model initial condition, lateral boundary condition and lower boundary condition. The INTERPF program runs on the following platforms: Compaq/Alpha, Cray, Fujitsu, HP, IBM, SGI, Sun, PCs running Linux (Fedora with PGI or Intel compilers), and Mac (OS X with xlf). The INTERPF code is written in FORTRAN 90.
7.2 INTERPF Procedure
• input LITTLE_R, RAWINS, REGRID, or INTERPB data
• pressure-level Qv for Psfc
• interpolate variables from pressure coordinates to hydrostatic σ
  - u, v, RH: linear in pressure
  - theta: linear in ln pressure
• remove integrated mean divergence
• compute base state
• compute w
• re-interpolate u, v, T, Qv (optionally QC, QR, QI, QS, QG)
• compute perturbation pressure
• save Tsfc and SST for daily mean for the lower boundary file
• output current data for the boundary file
• output interpolated data for the initial conditions
• output data for the lower boundary file
7: INTERPF
REGRID
LITTLE_R / RAWINS
INTERPF
INTERPB
MM5
NESTDOWN
Fig 7.1 MM5 modeling system flow chart for INTERPF.
7-4
MM5 Tutorial
7: INTERPF
7.3 Surface Pressure Computation
Please note that the “X” used in the following computations throughout the entire chapter signifies an arithmetic multiplication, not a cross product.

1. first guess for representative T, p 100 hPa above the surface

   P_sfc = P_slv X (P_slv / 850)^(-TER/H_850)                          (7.1)

2. extrapolate T_slv

   T_slv = T_100-up X (P_slv / P_100-up)^γ                             (7.2)

   γ = ln(T_850/T_700) / ln(850/700),  if 700 ≤ P_100-up ≤ 850         (7.3)

3. corrected T_sfc

   T_sfc = T_slv - γ_s X TER                                           (7.4)

4a. use the mean temperature underground to estimate surface pressure

   P_sfc = P_slv X exp{ -TER X g / [R X (T_sfc + T_slv)/2] }           (7.5)

4b. OR use the daily mean surface temperature to compute surface pressure

   P_sfc = P_slv X (1 + γ_S X TER / T_avg)^(-g/(R X γ_S))              (7.6)
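Step 4a can be transcribed numerically as follows. This is an illustrative sketch of Eq. 7.5, not the INTERPF source, and the constants g and R are assumed values.

```python
import math

G = 9.81     # gravitational acceleration (m s-2), assumed value
R = 287.04   # dry-air gas constant (J kg-1 K-1), assumed value

def psfc_from_mean_t(p_slv, terrain, t_sfc, t_slv):
    """Eq. 7.5: surface pressure from sea-level pressure using the
    mean of the surface and sea-level temperatures."""
    t_mean = 0.5 * (t_sfc + t_slv)
    return p_slv * math.exp(-terrain * G / (R * t_mean))

# At zero terrain elevation the surface pressure equals the sea-level
# pressure; at 1500 m with a 288/298 K column it is roughly 85 kPa.
```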
7.4 Hydrostatic Vertical Interpolation
The process of going from pressure levels to the σ coordinate requires only strictly bounded interpolation. Since the σ coordinate is defined to be contained within the maximum and minimum pressure, no extrapolations are required. A generated surface field is available as a coding option inside INTERPF via the namelist. Vertical interpolation uses linear techniques exclusively, typically linear in pressure or linear in ln pressure. Hydrostatic pressure is defined as

   P_ijk = σ_k X p*_ij + P_top                                         (7.7)

where σ is a 1-D vertical coordinate, σ=1 at the ground and σ=0 at the model lid; p* is the arithmetic difference of the 2-D field of surface pressure and a constant (Ptop); and Ptop is the constant pressure at the model lid. A value α on a σ surface at pressure P_σ is interpolated linearly in pressure between the isobaric levels A and B immediately above and below:

   α_σ = [ α_PA X (P_B - P_σ) + α_PB X (P_σ - P_A) ] / (P_B - P_A)     (7.8)

[Figure: a σ surface cutting through several isobaric layers, with levels P_A and P_B bracketing the interpolation point P_σ]
Fig. 7.2 A vertical profile of a σ surface cutting through several isobaric layers. The heavy dot is the location on the σ surface for which a vertical interpolation is requested. The arrows (labeled 1 through 3) represent consecutive grid points that use three separate surrounding layers along a σ surface.
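Equations 7.7 and 7.8 can be sketched directly; the function names and level ordering below are illustrative assumptions, not INTERPF code.

```python
def sigma_pressure(sigma, p_star, p_top):
    """Eq. 7.7: hydrostatic pressure on a sigma surface."""
    return sigma * p_star + p_top

def interp_linear_in_p(p_sigma, p_levels, values):
    """Eq. 7.8: bounded linear-in-pressure interpolation; p_levels is
    sorted in increasing pressure, and no extrapolation is permitted."""
    for k in range(len(p_levels) - 1):
        pa, pb = p_levels[k], p_levels[k + 1]
        if pa <= p_sigma <= pb:
            aa, ab = values[k], values[k + 1]
            return (aa * (pb - p_sigma) + ab * (p_sigma - pa)) / (pb - pa)
    raise ValueError("sigma pressure lies outside the isobaric levels")

# sigma = 0.8 with p* = 95000 Pa and ptop = 5000 Pa lies at 81000 Pa,
# between the 70000 and 85000 Pa analysis levels.
p = sigma_pressure(0.8, 95000.0, 5000.0)
t = interp_linear_in_p(p, [70000.0, 85000.0, 100000.0], [280.0, 290.0, 300.0])
```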
7.5 Integrated Mean Divergence Removal
Removing the integrated mean divergence allows the model to begin with a smaller amount of initial condition noise than the analysis contains. Given the average upper-air station separation, any high-frequency, column-averaged vertical motion in the analysis is spurious at best. Again, the computations are scalar, and the “X” signifies scalar multiplication.

1. pressure-weighted u, v on each σ

   PU_ijk = p*_ij X u_ijk ,  PV_ijk = p*_ij X v_ijk                    (7.9)

2. vertically average p*u, p*v

   U_integ,ij = Σ_k PU_ijk X Δσ_k ,  V_integ,ij = Σ_k PV_ijk X Δσ_k    (7.10)

3. divergence of the vertically-averaged pressure-weighted wind [m is the map-scale factor for dot (D) and cross (X) points]

   DIV_ij = m_X² X [ Δ(U_integ,ij/m_D)/Δx + Δ(V_integ,ij/m_D)/Δy ]     (7.11)

4. solve for the velocity potential, with assumed boundary conditions

   ∇²χ_ij = DIV_ij ,  χ_ij ≡ 0 on the boundary                         (7.12)

5. mean divergent wind components

   U_DIV,ij = (m_D/p*) X Δχ/Δx ,  V_DIV,ij = (m_D/p*) X Δχ/Δy          (7.13)

6. vertical weighting

   require:  Σ_k w_k X Δσ_k = 1                                        (7.14)

   presently:  w_k = 2 X (1 - σ_k)                                     (7.15)

7. corrected wind components

   U_corrected,ijk = u_ijk - U_DIV,ij X w_k ,  V_corrected,ijk = v_ijk - V_DIV,ij X w_k   (7.16)
7.6 Base State Computation
The base state for the MM5 model is constructed from several constants prescribing a surface-level temperature and pressure, a temperature profile which may include an isothermal layer above the tropopause, and analytic expressions for a reference pressure and the height of the nonhydrostatic σ surfaces. Other than the terrain elevation, only these constants are required by the modeling system as user input to completely define the base state.

1. constants
   • P00: reference sea-level pressure (in the INTERPF namelist)
   • Ts0: reference sea-level temperature (in the INTERPF namelist)
   • A: reference temperature lapse rate (in the INTERPF namelist)
   • PTOP: reference pressure at the model top (in the REGRIDDER and INTERPF namelists)
   • TISO: (optional) temperature at which the reference temperature becomes constant (possibly for use in modeling the stratosphere) (in the INTERPF namelist)

2. reference p*

   P_s0 = P_00 X exp{ -T_s0/A + [ (T_s0/A)² - 2g X TER/(A X R) ]^(1/2) } - P_TOP   (7.17)

3. reference pressure (3-D)

   P_0 = P_s0 X σ + P_TOP                                              (7.18)

4. reference temperature (3-D)

   T_0 = T_s0 + A X ln(P_0/P_00)                                       (7.19)

5. reference height

   z = -[ (R X A)/(2g) X (ln(P_0/P_00))² + (R X T_s0)/g X ln(P_0/P_00) ]   (7.20)

This provides a fixed (in time) height for each σ surface, since each i,j,k location is a function of the fixed σ values and the terrain elevation. If the user has requested the use of the isothermal temperature option from the namelist, the temperature and height computations are modified. First, the minimum temperature allowable is defined as the isothermal temperature. The pressure at the location of the switch to the isothermal temperature is computed. From this pressure (PISO), the isothermal height is found, and then the adjusted reference height:

   Z_ISO = -[ (R X A)/(2g) X (ln(P_ISO/P_00))² + (R X T_s0)/g X ln(P_ISO/P_00) ]   (7.21)

   z = Z_ISO - (R X T_ISO)/g X ln(P_0/P_ISO)                           (7.22)
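Equations 7.17-7.20 are easy to transcribe directly. The sketch below is illustrative only: the reference constants and the values of g and R are example assumptions, not INTERPF defaults.

```python
import math

G, R = 9.81, 287.04               # assumed physical constants
P00, TS0, A = 1.0e5, 275.0, 50.0  # example reference values
PTOP = 10000.0                    # example model-lid pressure (Pa)

def reference_state(terrain, sigma):
    """Base state at one grid point: Eqs. 7.17-7.20."""
    ps0 = P00 * math.exp(
        -TS0 / A + math.sqrt((TS0 / A) ** 2 - 2.0 * G * terrain / (A * R))
    ) - PTOP                      # Eq. 7.17: reference p*
    p0 = ps0 * sigma + PTOP       # Eq. 7.18: reference pressure
    lnp = math.log(p0 / P00)
    t0 = TS0 + A * lnp            # Eq. 7.19: reference temperature
    z = -((R * A / (2.0 * G)) * lnp ** 2 + (R * TS0 / G) * lnp)  # Eq. 7.20
    return ps0, p0, t0, z

# With zero terrain, p* reduces to P00 - PTOP, and sigma = 1 recovers
# the reference sea-level values (t0 = TS0, z = 0).
ps0, p0, t0, z = reference_state(0.0, 1.0)
```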
7.7 Initialization of Nonhydrostatic Model
INTERPF first generates a hydrostatic input file on the hydrostatic sigma levels, based on actual surface pressure rather than reference pressure. To initialize the data for the nonhydrostatic model, a further small vertical interpolation is needed to move to the nonhydrostatic sigma levels. This involves first calculating the heights of the hydrostatic levels, then doing a linear-in-height interpolation of u, v, T and q to the nonhydrostatic levels. While sea-level pressure, u, v, T and q are known from the input data sets, the nonhydrostatic model requires two more variables to be initialized.
• Vertical velocity (w) is simply calculated from the pressure velocity (ω) obtained by integrating horizontal velocity divergence vertically while still on the hydrostatic sigma levels. Divergence removal has already ensured that this integration will give no vertical motion at the top of the model domain. This ω is then interpolated to the nonhydrostatic levels and converted to w (w=-ω/ρg). In practice, the results are not sensitive to whether w is initialized this way or equal to zero.
• Pressure perturbation (p′) has to be initialized to give a hydrostatic balance. Once virtual temperature is known on the nonhydrostatic model levels, the model’s vertical velocity equation in finite-difference form is used with the acceleration and advection terms set to zero. This leaves a relation between Tv(z) and the vertical gradient of p′. Given the sea-level pressure, p′ at the lowest sigma level can be estimated, and then, given the profile of virtual temperature, vertical integration gives p′ at the other levels. This balance ensures that the initial vertical acceleration is zero in each model column.
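The w initialization described above uses w = -ω/(ρg); with density from the ideal gas law this is a one-liner. The names and constants here are illustrative assumptions:

```python
G, R = 9.81, 287.04  # assumed constants (m s-2, J kg-1 K-1)

def omega_to_w(omega, pressure, t_virtual):
    """Convert pressure velocity omega (Pa/s) to vertical velocity w (m/s):
    w = -omega / (rho * g), with rho = p / (R * Tv)."""
    rho = pressure / (R * t_virtual)
    return -omega / (rho * G)

# Rising motion: omega = -1 Pa/s at 850 hPa with Tv = 280 K gives
# w of roughly +0.1 m/s.
w = omega_to_w(-1.0, 85000.0, 280.0)
```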
7.8 Substrate Temperature and the LOWBDY_DOMAINn file
There are three primary binary output files from the INTERPF program: MMINPUT_DOMAINn, BDYOUT_DOMAINn and LOWBDY_DOMAINn. The MMINPUT_DOMAINn file contains the time-dependent 3D and 2D fields, such as wind, temperature, moisture and pressure. The BDYOUT_DOMAINn file contains the lateral boundaries of the 3D fields, typically four rows worth of data. The LOWBDY_DOMAINn file contains either daily means of, or time-varying, surface temperature fields (surface air temperature and sea surface temperature), and optionally sea-ice and snow cover fields. The surface air temperature is either the temperature field defined at the surface from the input pressure-level data set (typically), or the lowest σ-level temperature field (if the namelist option was set to not use the input surface data in the vertical interpolation). This field is used as the constant, deep-soil temperature whenever the land surface model is not selected. The variable used as the sea surface temperature in REGRID is not well defined. Based on user selections, the sea surface temperature could be the water temperature, the skin temperature or the 1000 hPa temperature. Users with high-resolution land use may find that they have very “hot” lakes during the summer. If the user selected the skin temperature from the PREGRID Vtable, a daily mean of both the surface air temperature and the sea surface temperature is computed and output in the LOWBDY_DOMAINn file. The purpose of the daily mean is to reduce the diurnal variation of the “constant” temperature and provide more realistic inland lake temperatures. This is the reason it is recommended that users always prepare an analysis/forecast that extends for at least a full day. If the user selected the SST from the PREGRID Vtable, then the INTERPF program automatically provides time-varying fields of both SST and the surface air temperature. When in doubt, the user should assume that the temperature at the ground is the skin temperature and not suitable for use as a time-varying field for SST.
7.9 Shell Variables (for NCAR IBM job deck only)
All of the MM5 system job decks for the IBM are written as C-shell executables. Strict adherence to C-shell syntax is required in this section.

Table 7.1: INTERPF IBM deck shell variables.
C-shell Variable Name   Options and Use
ExpName                 location of MSS files; keep the same as used for the deck generating the input file for this program
RetPd                   time in days to retain data on the MSS after last access
input_file              MSS root name for the p-level input to INTERPF
7.10 Parameter Statements
Ha! There aren’t any.
7.11 FORTRAN Namelist Input File
Most of the available options for the INTERPF code are handled through the namelist input file. Since this file is a FORTRAN namelist (FORTRAN 90 standard), its syntax is very specific. There are six namelist records (record0 through record5). In general, all of the namelist records must be filled with the user’s description of the data.

Table 7.2: INTERPF namelist values: RECORD0 and RECORD1.
Namelist Record   Namelist Variable   Description
RECORD0           INPUT_FILE          input file from REGRID, RAWINS, LITTLE_R, or INTERPB, complete with directory structure
RECORD1           START_YEAR          starting time, 4-digit INTEGER of the year
RECORD1           START_MONTH         starting time, 2-digit INTEGER of the month
RECORD1           START_DAY           starting time, 2-digit INTEGER of the day
RECORD1           START_HOUR          starting time, 2-digit INTEGER of the hour
RECORD1           END_YEAR            ending time, 4-digit INTEGER of the year
RECORD1           END_MONTH           ending time, 2-digit INTEGER of the month
RECORD1           END_DAY             ending time, 2-digit INTEGER of the day
RECORD1           END_HOUR            ending time, 2-digit INTEGER of the hour
RECORD1           INTERVAL            time interval in seconds between analysis periods
RECORD1           LESS_THAN_24H       T/F flag of whether to force less than 24 h in the analysis (FALSE by default)
Table 7.3: INTERPF namelist values: RECORD2 and RECORD3.
Namelist Record   Namelist Variable   Description
RECORD2           SIGMA_F_BU          input sigma levels, full levels, bottom-up (1.0 through 0.0)
RECORD2           PTOP                pressure of the model lid (Pa)
RECORD2           ISFC                how many sigma levels to include in the use of the lowest-level analysis for the vertical interpolation; 0 = normal interpolation, 1 = use the surface level for the lowest sigma layer, n>1 = use the surface level for n sigma layers in the interpolation
RECORD3           P0                  reference sea-level pressure (Pa)
RECORD3           TLP                 reference temperature lapse rate (K {ln Pa}-1)
RECORD3           TS0                 reference sea-level temperature (K)
RECORD3           TISO                isothermal temperature (K); if this is left as 0 there is no effect; this is the temperature that the reference profile assumes when the temperature would otherwise be less than TISO
Table 7.4: INTERPF namelist values: RECORD4 and RECORD5.
Namelist Record   Namelist Variable   Description
RECORD4           REMOVEDIV           T/F flag, remove the integrated mean divergence
RECORD4           USESFC              T/F flag, use the input surface data in the vertical interpolation
RECORD4           WRTH2O              T/F flag, saturation is with respect to liquid water
RECORD4           PSFC_METHOD         INTEGER, 0 => (Tslv + Tsfc)/2; 1 => surface pressure from diurnally averaged surface temperature
RECORD5           IFDATIM             INTEGER, number of time periods of initial condition output required (only 1 is necessary if not doing analysis nudging); “-1” is the magic value that means output all of the time periods
7.12 How to Run INTERPF
1) Obtain the source code tar file from one of the following places:
   Anonymous ftp: ftp://ftp.ucar.edu/mesouser/MM5V3/INTERPF.TAR.gz
   On the NCAR MSS: /MESOUSER/MM5V3/INTERPF.TAR.gz
2) gunzip the file and untar it. A directory INTERPF will be created; cd to INTERPF.
3) Type ‘make’ to create an executable for your platform.
4) On an NCAR IBM, edit interpf.deck.ibm (located in ~mesouser/MM5V3/IBM) to select script options and namelist options. On workstations, edit the namelist.input file for the namelist options.
5) On an NCAR IBM, type interpf.deck.ibm to compile and execute the program. It is usually good practice to pipe the output to a log file so that if the program fails you can take a look at it. To do so, type, for example: interpf.deck.ibm >& interpf.log. On a workstation, run the executable directly (interpf >& interpf.log).

INTERPF requires one of the following input files: REGRID_DOMAINn, RAWINS_DOMAINn, LITTLE_R_DOMAINn, or MMOUTP_DOMAINn (where n is the domain identifier). The location of the input data, including directory structure, is defined in the namelist file.

Output files from INTERPF (input files for MM5): MMINPUT_DOMAINn, BDYOUT_DOMAINn, LOWBDY_DOMAINn (where n is the domain identifier). These files are written to the current working directory; the user has no control over this naming convention.
7.13 INTERPF didn’t Work! What Went Wrong? • Most of the errors from INTERPF that do not end with a "segmentation fault", "core dump", or "floating point error" are accompanied with a print statement. Though the message itself may not contain enough substance to correct the problem, it will lead you to the section of the code that failed, which should provide more diagnostic information. The last statement that INTERPF prints during a controlled failed run is the diagnostic error.
• To see if INTERPF completed successfully, first check whether the "STOP 99999" statement appears. Also check that INTERPF processed each of the times requested in the namelist. The initial condition file should be written to after each analysis time, up to the number of time periods requested by the namelist. The boundary condition file is written to after each analysis time, beginning with the second time period. The lower boundary file is written just once.
• When INTERPF tells you "Relaxation did not converge in 20000 iterations", you may be doing an idealized run with non-divergent winds. Set REMOVEDIV = .FALSE. in the namelist so that the mean divergence removal is not performed.
• Remember that to generate a single boundary condition file, you must have at least two
time periods, so that a lateral boundary tendency may be computed. Even if you are not
going to run a long forecast, it is advantageous to provide a full day for the lower boundary condition file, as this file contains the daily mean of the surface air temperature and the daily mean of the SST.
• When INTERPF runs into an interpolation error that it did not expect (i.e. forced to do an
extrapolation when none should be required), INTERPF will stop and print the offending (I,J,K) and pressure values. If the problem cannot be fixed simply by amending the provided σ or pressure surfaces, it is usually trickier and implies that the analysis data may be in error.
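The "STOP 99999" completion check described above can be automated with a small shell helper. This is just a sketch; the log-file name is whatever you redirected the INTERPF output to:

```shell
# Report whether an INTERPF log contains the normal-termination marker.
check_interpf_log() {
  if grep -q "STOP 99999" "$1"; then
    echo "INTERPF completed"
  else
    echo "INTERPF failed: inspect the end of $1"
  fi
}
```

For example, run check_interpf_log interpf.log after a run.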
7.14 File I/O

The interpolation program ingests and creates several files during an INTERPF run. The binary input files and all of the output files are written with unformatted FORTRAN write statements (binary, sequential access). One of the input files is a human-readable namelist file of run-time options. The following tables list the input and output files.
Table 7.5: INTERPF program input files.

  File Name                                      Description
  namelist.input                                 namelist file containing run-time options
  LITTLE_R_DOMAINn, RAWINS_DOMAINn,              MM5-system meteorological data on pressure
  REGRID_DOMAINn, or MMOUTP_DOMAINn              levels, input to INTERPF
  (specified in namelist file)
Table 7.6: INTERPF program output files.

  File Name           Description
  MMINPUT_DOMAINn     initial condition for MM5
  BDYOUT_DOMAINn      lateral boundary condition for MM5
  LOWBDY_DOMAINn      lower boundary condition (reservoir temperature, mean or
                      time-varying SST, sea ice, fractional sea ice, snow cover)
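Because these files are unformatted FORTRAN sequential files, they can be scanned outside the modeling system. The sketch below assumes the common record framing (a 4-byte byte count before and after each record) and big-endian data; both are compiler- and platform-dependent, so treat it as illustrative only:

```python
import struct

def read_fortran_records(path):
    """Read Fortran unformatted sequential records: most compilers frame
    each record with a 4-byte byte count before and after the payload.
    The marker size and endianness vary by compiler; '>i' (big-endian
    4-byte) is an assumption here."""
    records = []
    with open(path, "rb") as f:
        while True:
            head = f.read(4)
            if len(head) < 4:          # end of file
                break
            (nbytes,) = struct.unpack(">i", head)
            payload = f.read(nbytes)   # raw record contents
            f.read(4)                  # skip the trailing byte count
            records.append(payload)
    return records
```

Each returned element is one raw record, which can then be unpacked further according to the MM5 header/data record layout.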
7.15 INTERPF tar File

The interpf.tar file contains the following files and directories:

  CHANGES             Description of changes to the INTERPF program
  Doc                 Contains a couple of README files
  Makefile            Makefile to create INTERPF executable
  README              General information about the INTERPF directory
  interpf.deck.cray   job deck for usage on one of the NCAR Cray machines
  namelist.input      input namelist file for run-time options
  src/                INTERPF source code
8: MM5

Chapter contents:

Purpose 8-3
Basic Equations of MM5 8-3
Physics Options in MM5 8-7
  Cumulus Parameterizations (ICUPA) 8-7
  PBL Schemes (IBLTYP) 8-8
  Explicit Moisture Schemes (IMPHYS) 8-10
  Radiation Schemes (IFRAD) and Diffusion 8-13
  Surface Schemes (ISOIL) 8-14
Interactions of Parameterizations 8-17
Boundary conditions 8-17
  Lateral boundary conditions (IBOUDY) 8-17
  Lower boundary conditions 8-18
  Upper boundary condition (IFUPR) 8-18
Nesting 8-18
  One-way nesting 8-18
  Two-way nesting 8-18
  Two-way nest initialization options (IOVERW) 8-18
  Two-way nesting feedback options (IFEED) 8-19
Four-Dimensional Data Assimilation (FDDA) 8-20
  Introduction 8-20
  FDDA Method 8-20
  Uses of FDDA 8-20
  Data used in FDDA 8-21
How to run MM5 8-22
  Compiling MM5 8-22
  Running MM5 8-22
  Running MM5 Batch Job on NCAR’s IBM 8-23
  Useful make commands 8-23
Input to MM5 8-24
Output from MM5 8-24
MM5 Files and Unit Numbers 8-27
Configure.user Variables 8-28
Script Variables for IBM Batch Deck 8-30
Namelist Variables 8-30
  OPARAM 8-31
  LPARAM 8-31
  NPARAM 8-34
  PPARAM 8-35
  FPARAM 8-35
Some Common Errors Associated with MM5 Failure 8-36
MM5 tar File 8-37
Configure.user 8-39
mm5.deck 8-54
8 MM5

8.1 Purpose

• This is the numerical weather prediction part of the modeling system.
• MM5 can be used for a broad spectrum of theoretical and real-time studies, including applications of both predictive simulation and four-dimensional data assimilation to monsoons, hurricanes, and cyclones.
• On the smaller meso-beta and meso-gamma scales (2-200 km), MM5 can be used for studies involving mesoscale convective systems, fronts, land-sea breezes, mountain-valley circulations, and urban heat islands.
8.2 Basic Equations of MM5

In terms of terrain-following coordinates (x, y, σ), these are the equations for the nonhydrostatic model's basic variables, excluding moisture.

Pressure:

$$\frac{\partial p'}{\partial t} - \rho_0 g w + \gamma p \nabla\cdot\mathbf{v} = -\mathbf{v}\cdot\nabla p' + \frac{\gamma p}{T}\left(\frac{\dot{Q}}{c_p} + \frac{T_0}{\theta_0} D_\theta\right) \qquad (8.1)$$

Momentum (x-component):

$$\frac{\partial u}{\partial t} + \frac{m}{\rho}\left(\frac{\partial p'}{\partial x} - \frac{\sigma}{p^*}\frac{\partial p^*}{\partial x}\frac{\partial p'}{\partial \sigma}\right) = -\mathbf{v}\cdot\nabla u + v\left(f + u\frac{\partial m}{\partial y} - v\frac{\partial m}{\partial x}\right) - ew\cos\alpha - \frac{uw}{r_{earth}} + D_u \qquad (8.2)$$

Momentum (y-component):

$$\frac{\partial v}{\partial t} + \frac{m}{\rho}\left(\frac{\partial p'}{\partial y} - \frac{\sigma}{p^*}\frac{\partial p^*}{\partial y}\frac{\partial p'}{\partial \sigma}\right) = -\mathbf{v}\cdot\nabla v - u\left(f + u\frac{\partial m}{\partial y} - v\frac{\partial m}{\partial x}\right) + ew\sin\alpha - \frac{vw}{r_{earth}} + D_v \qquad (8.3)$$

Momentum (z-component):

$$\frac{\partial w}{\partial t} - \frac{\rho_0}{\rho}\frac{g}{p^*}\frac{\partial p'}{\partial \sigma} + \frac{g}{\gamma}\frac{p'}{p} = -\mathbf{v}\cdot\nabla w + g\frac{p_0}{p}\frac{T'}{T_0} - \frac{g R_d}{c_p}\frac{p'}{p} + e\,(u\cos\alpha - v\sin\alpha) + \frac{u^2 + v^2}{r_{earth}} + D_w \qquad (8.4)$$

Thermodynamics:

$$\frac{\partial T}{\partial t} = -\mathbf{v}\cdot\nabla T + \frac{1}{\rho c_p}\left(\frac{\partial p'}{\partial t} + \mathbf{v}\cdot\nabla p' - \rho_0 g w\right) + \frac{\dot{Q}}{c_p} + \frac{T_0}{\theta_0} D_\theta \qquad (8.5)$$

Advection terms can be expanded as

$$\mathbf{v}\cdot\nabla A \equiv mu\frac{\partial A}{\partial x} + mv\frac{\partial A}{\partial y} + \dot{\sigma}\frac{\partial A}{\partial \sigma} \qquad (8.6)$$

where

$$\dot{\sigma} = -\frac{\rho_0 g}{p^*} w - \frac{m\sigma}{p^*}\frac{\partial p^*}{\partial x} u - \frac{m\sigma}{p^*}\frac{\partial p^*}{\partial y} v \qquad (8.7)$$

The divergence term can be expanded as

$$\nabla\cdot\mathbf{v} = m^2\frac{\partial}{\partial x}\left(\frac{u}{m}\right) - \frac{m\sigma}{p^*}\frac{\partial p^*}{\partial x}\frac{\partial u}{\partial \sigma} + m^2\frac{\partial}{\partial y}\left(\frac{v}{m}\right) - \frac{m\sigma}{p^*}\frac{\partial p^*}{\partial y}\frac{\partial v}{\partial \sigma} - \frac{\rho_0 g}{p^*}\frac{\partial w}{\partial \sigma} \qquad (8.8)$$
Notes about the equations:

• Appendix A shows derivations of Equations 8.1, 8.4, 8.5 and 8.7, and shows the coordinate transformation from z to sigma coordinates.
• In the model, Equation 8.1 does not include the last term in parentheses on the right. This term, which represents a pressure increase due to heating that forces the air to expand, is neglected.
• Equations 8.2-8.4 include terms (eu and ew) representing the usually neglected component of the Coriolis force, where e = 2Ω cos λ, α = φ − φc, λ is latitude, φ is longitude, and φc is the central longitude.
• The u ∂m/∂y, v ∂m/∂x and rearth terms represent curvature effects, and m is the map-scale factor.
• Equations 8.2, 8.3 and 8.8 include terms to account for the sloped sigma surfaces when calculating horizontal gradients.
• Prognostic equations also exist for water vapor and microphysical variables such as cloud and precipitation (if used). These include the advection and various source/sink terms.

Spatial finite differencing

The above equations are finite differenced on the B grid mentioned in Chapter 1. Second-order centered finite differences represent the gradients, except for the precipitation fall term, which uses a first-order upstream scheme for positive definiteness. Often horizontal averaging is required to determine the gradient in the correct position. Vertical interpolations allow for the variable vertical grid size. More details are in Grell et al. (1994), NCAR Tech. Note 398.

Temporal finite differencing

A second-order leapfrog time-step scheme is used for these equations, but some terms are handled with a time-splitting scheme. Note that Equations 8.1-8.4 contain extra terms on the left of the equals sign. These are the so-called fast terms, responsible for sound waves, that have to be calculated on a shorter time step. In the leapfrog scheme, the tendencies at time n are used to step the variables from time n-1 to n+1.
This is used for most of the right-hand terms (advection, Coriolis, buoyancy). A forward step is used for diffusion and microphysics, where the tendencies are calculated at time n-1 and used to step the variables from n-1 to n+1. Some radiation and cumulus options use a constant tendency over periods of many model timesteps and are only recalculated every 30 minutes or so. However, for certain terms the model timestep is too long for stability, and these have to be predicted with a shorter step. Examples are the sound-wave terms shown in the equations, the precipitation fall term, and the PBL tendencies, which may also be split in certain situations. When the timestep is split, certain variables and tendencies are updated more frequently. For sound waves, u, v, w and p′ all need to be updated each short step using the tendency terms on the left of 8.1-8.4, while the terms on the right are kept fixed. There are usually four of these short steps between n-1 and n+1, after which u, v, w and p′ are up to date.

Certain processes are treated implicitly for numerical stability. An implicit time scheme is one in which the tendencies of variables depend not only on the present and past values, but also on the future values. These schemes are often numerically stable for all timesteps, but usually require a matrix inversion to implement. In MM5, implicit schemes are used only in 1-d column calculations for vertical sound waves and vertical diffusion, so that the matrix is tridiagonal, making it straightforward to solve directly.
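The tridiagonal systems mentioned above are solved directly. A standalone sketch of the standard forward-elimination/back-substitution (Thomas) algorithm, not taken from the MM5 source, looks like this:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system A x = d, where a is the sub-diagonal
    (a[0] unused), b the main diagonal, c the super-diagonal (c[-1]
    unused).  Implicit vertical diffusion in a single model column leads
    to exactly this kind of system; this solver is illustrative only."""
    n = len(b)
    cp = np.zeros(n)
    dp = np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                    # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The cost is O(n) per column, which is why the 1-d implicit treatment is cheap compared to a general matrix inversion.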
[Figure: Schematic of the time-splitting scheme. The first time step (n=1 to n=2) is a forward step. Thereafter, during each long (leapfrog) step ∆t from n-1 to n+1, T, qv, qc, etc. are advanced using the advection, physics, boundary, Coriolis and diffusion tendencies evaluated at time n, while u, v, w and p′ are advanced over several short (forward) steps ∆τ using the pressure-gradient and divergence terms.]
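The long-step leapfrog structure described above can be illustrated on a toy oscillator (illustrative Python, not model code): tendencies evaluated at time n advance the state from n-1 to n+1; in MM5 the fast sound-wave terms would additionally be advanced with about four short forward steps ∆τ inside each long step ∆t.

```python
import numpy as np

def leapfrog_oscillator(w, dt, nsteps):
    """Leapfrog integration of dx/dt = v, dv/dt = -w^2 x.  The state at
    n+1 comes from the state at n-1 plus 2*dt times the tendency at n;
    a single forward step starts the scheme."""
    x_old, v_old = 1.0, 0.0                              # state at n-1
    x = x_old + dt * v_old                               # forward first step -> n
    v = v_old - dt * w**2 * x_old
    for _ in range(nsteps):
        x_new = x_old + 2.0 * dt * v                     # tendency at n steps n-1 -> n+1
        v_new = v_old - 2.0 * dt * w**2 * x
        x_old, v_old, x, v = x, v, x_new, v_new
    return x, v
```

Because the leapfrog scheme is neutrally stable for w*dt < 1, the oscillation amplitude is very well preserved over many steps, which is the property that makes it attractive for the wave-like terms.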
8.3 Physics Options in MM5

8.3.1 Cumulus Parameterizations (ICUPA)

1. None
Use no cumulus parameterization at grid sizes < 5-10 km.

[Figure: Illustration of cumulus processes: updraft and downdraft with entrainment and detrainment, and compensating subsidence above the boundary layer.]
2. Anthes-Kuo
Based on moisture convergence; mostly applicable to larger grid sizes (> 30 km). Tends to produce much convective rainfall and less resolved-scale precipitation. Specified heating profile; moistening dependent upon relative humidity.

3. Grell
Based on rate of destabilization or quasi-equilibrium; a simple single-cloud scheme with updraft and downdraft fluxes and compensating motion determining the heating/moistening profile. Useful for smaller grid sizes (10-30 km); tends to allow a balance between resolved-scale rainfall and convective rainfall. Shear effects on precipitation efficiency are considered. See Grell et al. (1994).

4. Arakawa-Schubert
A multi-cloud scheme that is otherwise like the Grell scheme. Based on a cloud population; allows for entrainment into updrafts and for downdrafts. Suitable for larger scales (> 30 km grid sizes); possibly expensive compared to other schemes. Shear effects on precipitation efficiency are considered. Also see Grell et al. (1994).
5. Fritsch-Chappell
Based on relaxation to a profile determined by updraft, downdraft and subsidence region properties. The convective mass flux removes 50% of the available buoyant energy in the relaxation time. Fixed entrainment rate. Suitable for 20-30 km scales because of the single-cloud assumption and local subsidence. See Fritsch and Chappell (1980) for details. This scheme predicts both updraft and downdraft properties and also detrains cloud and precipitation. Shear effects on precipitation efficiency are also considered.

6. Kain-Fritsch
Similar to Fritsch-Chappell, but uses a sophisticated cloud-mixing scheme to determine entrainment/detrainment, and removes all available buoyant energy in the relaxation time. See Kain and Fritsch (1993) for details. This scheme predicts both updraft and downdraft properties and also detrains cloud and precipitation. Shear effects on precipitation efficiency are also considered.

7. Betts-Miller
Based on relaxation adjustment to a reference post-convective thermodynamic profile over a given period. Suitable for > 30 km grids, but has no explicit downdraft, so it may not be suitable for severe convection. See Betts (1986), Betts and Miller (1986), Betts and Miller (1993) and Janjic (1994) for details.

8. Kain-Fritsch 2
A new version of Kain-Fritsch that includes shallow convection. This is similar to a scheme being run in test mode in the Eta model (Kain 2002).

Shallow Cumulus - (ISHALLO=1)
Handles non-precipitating clouds, assumed to be uniform with strong entrainment, small radius, and no downdrafts. Based on the Grell and Arakawa-Schubert schemes. Assumes equilibrium between cloud strength and sub-grid (PBL) forcing.
8.3.2 PBL Schemes (IBLTYP) and Diffusion

0. None
No surface layer; unrealistic in real-data simulations.

1. Bulk PBL
Suitable for coarse vertical resolution in the boundary layer, e.g. > 250 m vertical grid sizes. Two stability regimes.

2. High-resolution Blackadar PBL
Suitable for a high-resolution PBL, e.g. 5 layers in the lowest km, surface layer < 100 m thick. Four stability regimes, including a free-convective mixed layer. Uses split time steps for stability.

3. Burk-Thompson PBL
Suitable for both coarse- and high-resolution PBL. Predicts turbulent kinetic energy for use in vertical mixing, based on Mellor-Yamada formulas. See Burk and Thompson (1989) for details. This is the only PBL option that does not call the SLAB scheme, as it has its own force-restore ground temperature prediction.

4. Eta PBL
This is the Mellor-Yamada scheme as used in the Eta model, Janjic (1990, MWR) and Janjic (1994, MWR). It predicts TKE and has local vertical mixing. The scheme calls the SLAB routine or the LSM for surface temperature and has to use ISOIL=1 or 2 (not 0) because of its long time step. Its cost is between the MRF PBL and high-resolution Blackadar PBL schemes. Before SLAB or the LSM, the scheme calculates exchange coefficients using similarity theory; after SLAB/LSM it calculates vertical fluxes with an implicit diffusion scheme.
[Figure: Illustration of PBL processes: sensible and latent heat fluxes and friction in the surface layer; local and nonlocal mixing within the PBL layers; entrainment at the PBL top; vertical diffusion in the stable layer/free atmosphere above.]
5. MRF PBL
Also known as the Hong-Pan PBL; suitable for high resolution in the PBL (as for the Blackadar scheme). An efficient scheme based on the Troen-Mahrt representation of the countergradient term and a K profile in the well-mixed PBL, as implemented in the NCEP MRF model. See Hong and Pan (1996) for details. This scheme calls either the SLAB routine or the LSM and should have ISOIL=1 or 2. Vertical diffusion uses an implicit scheme to allow longer time steps.

6. Gayno-Seaman PBL
Also based on Mellor-Yamada TKE prediction. It is distinguished from the others by the use of liquid-water potential temperature as a conserved variable, allowing the PBL to operate more accurately in saturated conditions (Ballard et al., 1991; Shafran et al. 2000). Its cost is comparable with the Blackadar scheme's because it uses split time steps.
7. Pleim-Chang PBL
This scheme only works with ISOIL=3 (see later). The PBL scheme is a derivative of the Blackadar PBL scheme called the Asymmetric Convective Model (Pleim and Chang, 1992, Atm. Env.), using a variation on Blackadar's non-local vertical mixing.

Moist vertical diffusion - (IMVDIF=1)
IBLTYP = 2, 5 and 7 have this option. It allows diffusion in cloudy air to mix toward a moist adiabat by basing its mixing on moist stability instead of dry stability. From Version 3.5 it can mix cloudy air upwards into clear air, in addition to mixing internally within cloudy layers.

Thermal roughness length - (IZ0TOPT=0,1,2)
IBLTYP = 2 and 5 have the option of using a different roughness length for heat/moisture than that used for momentum: the thermal roughness length. IZ0TOPT=0 is the default (old) scheme, IZ0TOPT=1 is the Garratt formulation, and IZ0TOPT=2 is the Zilitinkevich formulation (used by the Eta model). Changing the thermal roughness length affects the partitioning of sensible and latent heat fluxes, and affects the total flux over water.

Horizontal diffusion - (ITPDIF=0,1,2)
ITPDIF=0,1 are two methods of doing horizontal temperature diffusion. ITPDIF=1 (default) horizontally diffuses only the perturbation from the base-state temperature. This partially offsets the effect of the coordinate slope over topography, which arises because the diffusion is along model levels. ITPDIF=0 diffuses the full temperature (like all other fields) instead. ITPDIF=2 is a new option in Version 3.7. It applies to temperature, moisture and cloud water, and is a purely horizontal diffusion accounting more accurately for coordinate slope and valley walls (Zangl, 2002 MWR).
8.3.3 Explicit Moisture Schemes (IMPHYS)

1. Dry
No moisture prediction. Zero water vapor.

2. Stable Precip
Nonconvective precipitation. Large-scale saturation is removed and rained out immediately. No rain evaporation or explicit cloud prediction.

3. Warm Rain
Cloud and rain water fields are predicted explicitly with microphysical processes. No ice-phase processes.

4. Simple Ice (Dudhia)
Adds ice-phase processes to the above without adding memory. No supercooled water, and immediate melting of snow below the freezing level. This can also be run with a look-up table version (MPHYSTBL=1) for efficiency.

5. Mixed-Phase (Reisner 1)
Adds supercooled water to the above and allows for slow melting of snow. Memory is added for cloud ice and snow. No graupel or riming processes. See Reisner et al. (1998) for details. This can also be run with a look-up table version (MPHYSTBL=1) for efficiency; since Version 3.7 an optimized version of this code is also available (MPHYSTBL=2).

6. Goddard microphysics
Includes an additional equation for prediction of graupel. Suitable for cloud-resolving models. See Lin et al. (JCAM, 1983) and Tao et al. (1989, 1993) for details. The scheme was updated for Version 3.5 to include graupel or hail properties.

7. Reisner graupel (Reisner 2)
Based on the mixed-phase scheme but adds graupel and ice number concentration prediction equations. Also suitable for cloud-resolving models. The scheme was updated significantly between Versions 3.4 and 3.5, and again between 3.5 and 3.6. Version 3.6 also has a capability for calling the scheme less frequently than every time step, but this is not standard and requires code editing to implement (the Web pages show the procedure).

8. Schultz microphysics
A highly efficient and simplified scheme (based on Schultz 1995 with some further changes), designed to run fast and be easy to tune for real-time forecast systems. It contains ice and graupel/hail processes.
[Figure: Illustration of microphysics processes, showing the water species and conversion paths in the Hsie warm rain, Dudhia simple ice, Reisner mixed-phase, and Goddard mixed-phase schemes: Qv (vapor), Qc (cloud water), Qr (rain), Qi (cloud ice), Qs (snow) and Qg (graupel), with the 0 C level separating ice and water processes.]
8.3.4 Radiation Schemes (IFRAD)

0. None
No mean tendency applied to atmospheric temperature; unrealistic in long-term simulations.

1. Simple cooling
The atmospheric cooling rate depends only on temperature. No cloud interaction or diurnal cycle.

0 or 1. Surface radiation
Used with the above two options. It provides diurnally varying shortwave and longwave fluxes at the surface for use in the ground energy budget. These fluxes are calculated based on atmospheric column-integrated water vapor and low/middle/high cloud fraction estimated from relative humidity.

2. Cloud-radiation scheme
Sophisticated enough to account for longwave and shortwave interactions with explicit cloud and clear air. As well as atmospheric temperature tendencies, this provides surface radiation fluxes. It may be expensive but has little memory requirement. In Version 3.7 the namelist switches LEVSLP and OROSHAW can be used with this option: LEVSLP enables slope effects on solar radiation, and OROSHAW allows shadowing effects on nearby grid cells.

3. CCM2 radiation scheme
Multiple spectral bands in shortwave and longwave, with cloud treated based on either resolved clouds (ICLOUD=1) or RH-derived cloud fraction (ICLOUD=2). Suitable for larger grid scales, and probably more accurate for long time integrations. Also provides radiative fluxes at the surface. See Hack et al. (1993) for details. As with the other radiation schemes, ICLOUD=0 can be used to remove cloud effects on the radiation. Up until Version 3.5, this scheme was only able to interact with RH-derived clouds.

4. RRTM longwave scheme
Combined with the cloud-radiation shortwave scheme when IFRAD=4 is chosen. This longwave scheme is a new, highly accurate and efficient method provided by AER Inc. (Mlawer et al. 1997). It is the Rapid Radiative Transfer Model and uses a correlated-k model to represent the effects of the detailed absorption spectrum, taking into account water vapor, carbon dioxide and ozone. It is implemented in MM5 to also interact with the model cloud and precipitation fields in a similar way to IFRAD=2.
[Figure: Illustration of free-atmosphere radiation processes: shortwave reflection, scattering and absorption by model-layer cloud and clear sky; longwave absorption and emission; surface albedo and surface emissivity.]
8.3.5 Surface Schemes (ISOIL)

None - (ITGFLG=3)
No ground temperature prediction. Fixed surface temperature; not realistic.

0. Force/restore (Blackadar) scheme
A single slab with a fixed-temperature substrate. The slab temperature is based on an energy budget, with a depth assumed to represent the depth of the diurnal temperature variation (~10-20 cm).

1. Five-Layer Soil model
Temperature is predicted in layers of approximately 1, 2, 4, 8 and 16 cm, with a fixed substrate below, using the vertical diffusion equation. Thermal inertia is the same as in the force/restore scheme, but the diurnal temperature variation is vertically resolved, allowing a more rapid response of the surface temperature. See Dudhia (1996 MM5 workshop abstracts) for details. Cannot be used with the Burk-Thompson PBL (IBLTYP=3).

2. Noah Land-Surface Model
[Note: this was the OSU LSM until MM5 Version 3.5; from 3.6 it is updated and renamed the Noah LSM, a unified model between NCAR, NCEP and AFWA.]
The land-surface model is capable of predicting soil moisture and temperature in four layers (10, 30, 60 and 100 cm thick), as well as canopy moisture and water-equivalent snow depth. It also outputs surface and underground run-off accumulations. The LSM makes use of vegetation and soil type in handling evapotranspiration, and has effects such as soil conductivity and gravitational flux of moisture. In MM5 it may be called instead of the SLAB model in the MRF and Eta PBL schemes, taking surface-layer exchange coefficients as input along with radiative forcing and precipitation rate, and outputting the surface fluxes for the PBL scheme. This scheme uses a diagnostic equation to obtain a skin temperature, and the exchange coefficients have to allow for this by use of a suitable molecular diffusivity layer to act as a resistance to heat transfer. See Chen and Dudhia (2001). It also handles sea-ice surfaces. All the aforementioned processes were in the OSU LSM. The Noah LSM has some modifications and additional processes to better handle snow cover, predict physical snow depth, and represent frozen-soil effects. In addition to soil moisture, soil water is a separate 4-layer variable, and soil moisture is taken to be the total of soil water and soil ice. Physical snow height is also diagnosed and output. The Noah LSM can also optionally use satellite-derived climatological albedo, supplied by REGRID, instead of relating albedo to land-use type. See Appendix D for practical guidance on setting up the modeling system to use the LSM.

3. Pleim-Xiu Land-Surface Model
This is coupled to the Pleim-Chang PBL (IBLTYP=7); it is a combined land-surface and PBL model. It represents soil moisture and temperature in two layers (a surface layer at 1 cm, and a root zone at 1 m), as well as canopy moisture. It handles soil surface, canopy and evapotranspiration moisture fluxes. It also makes use of percentage land-use and soil data from TERRAIN to aggregate soil and vegetation properties, rather than using a single dominant type. Soil moisture can be initialized from land-use moisture availability, from a soil moisture input grid (as with the Noah LSM), or via nudging, using the model-minus-observed surface temperature error to correct soil moisture. The model also has optional plant-growth and leaf-out algorithms, making it suitable for long-term simulations. See Xiu and Pleim (2000).
[Figure: Illustration of surface processes: longwave/shortwave radiation and sensible (SH) and latent (LH) heat fluxes over snow, land and constant-temperature water; ground heat flux and soil diffusion through the soil layers to a constant-temperature substrate.]
Bucket Soil Moisture Model - (IMOIAV=1,2)
This can be run with ISOIL=0 or 1. It keeps a budget of soil moisture, allowing moisture availability to vary with time, particularly in response to rainfall and evaporation rates. The soil moisture can be initialized from land-use type and season (LANDUSE.TBL) as before (IMOIAV=1), or from a 10-cm soil moisture input as with the Noah LSM (IMOIAV=2).

Snow Cover Model - (IFSNOW=0,1,2)
When the LSM is not used, this switch determines how snow cover is handled. IFSNOW=0 means snow cover is ignored. IFSNOW=1 uses the input snow-cover (0/1) flag to determine land-surface properties such as albedo and soil moisture; these stay fixed during the simulation. Since Version 3.5 there is an option (IFSNOW=2) to predict snow cover using an input water-equivalent snow depth. It updates the water-equivalent snow depth according to a heat and moisture budget in the SLAB routine, and accumulates snow from the microphysical schemes (currently IMPHYS=4, 5, or 7). In Version 3.7 this can be used with the bucket soil moisture model (IMOIAV=1 or 2).

Polar Mods - (IPOLAR=1)
The so-called Polar Mods were developed by the Byrd Polar Research Center at Ohio State University to better handle Antarctic conditions for forecasting purposes. IPOLAR=1 is a compile-time option, so it is set in the configure.user file. The Polar Mods have several effects and should be applied only with ISOIL=1. The main changes are (i) to increase the number of prognostic soil layers from 5 to 7, and (ii) to allow for sea-ice fraction effects on the heat and moisture fluxes and mean ground temperature. Sea-ice fraction can either be diagnosed from sea-surface temperature (IEXSI=1) or read in from a dataset (IEXSI=2). It is also recommended that the Eta PBL be used with this option, as it has been modified to account for ice-surface fluxes. The soil model is modified to account for snow and ice properties in heat conduction. The Polar Mods also slightly modify the Simple Ice and Reisner 1 microphysics schemes to use the Meyers formula for ice number concentration. In release 3.7 the MRF PBL also has modifications to work with this option.
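The physics options described in this section are chosen through the LPARAM namelist in mm5.deck. A sketch of the kind of settings involved is given below; the switch names are those used in this chapter, but the values and the per-domain array layout are illustrative only, so check the mm5.deck shipped with the code for the authoritative format:

```fortran
 &LPARAM
   IMPHYS  = 4,4,4,    ! explicit moisture: Simple Ice on all three domains
   ICUPA   = 3,3,3,    ! cumulus: Grell
   IBLTYP  = 5,5,5,    ! PBL: MRF (Hong-Pan)
   IFRAD   = 2,        ! radiation: cloud-radiation scheme
   ISOIL   = 2,        ! surface: Noah LSM (needs a PBL that calls the LSM)
   ISHALLO = 0,        ! no shallow cumulus
 &END
```

Note that not all combinations are valid; for example, as described above, the Noah LSM (ISOIL=2) requires the MRF or Eta PBL, and ISOIL=3 works only with the Pleim-Chang PBL (IBLTYP=7).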
8.4 Interactions of Parameterizations
[Figure: Direct interactions of parameterizations among the cumulus, microphysics, radiation, PBL and surface schemes: cloud detrainment (cumulus to microphysics); cloud effects and cloud fraction (to radiation); downward SW and LW and surface emission/albedo (radiation and surface); sensible and latent heat (SH, LH) and surface fluxes (surface and PBL); surface T, Qv and wind (PBL to surface).]
8.5 Boundary conditions

8.5.1 Lateral boundary conditions (IBOUDY)

0. Fixed
Does not allow time variation at the lateral boundaries. Not recommended for real-data applications.

2. Time-dependent/Nest
The outer two rows and columns have specified values of all predicted fields. Recommended for nests, where time-dependent values are supplied by the parent domain. Not recommended for the coarse mesh, where only one outer row and column would be specified.

3. Relaxation/inflow-outflow
The outer row and column are specified by the time-dependent value; the next four points are relaxed towards the boundary values with a relaxation constant that decreases linearly away from the boundary. Recommended for the coarse mesh, where boundary values are supplied by the BDYOUT_DOMAIN1 file. Fields without boundary values (such as some moisture variables) are set to zero on inflow and zero-gradient on outflow boundaries.
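The linearly decreasing relaxation of IBOUDY=3 can be sketched as follows. The exact MM5 relaxation coefficients differ, so the weights here only illustrate the shape: a specified outer row, four relaxed rows, and zero influence in the interior.

```python
import numpy as np

def lateral_relax_weights(nrelax=4):
    """Illustrative boundary-relaxation weights: row 0 is the specified
    outer row (weight 1), rows 1..nrelax are relaxed toward the boundary
    value with a coefficient decreasing linearly to zero in the interior.
    These are not the actual MM5 coefficients."""
    weights = [1.0]
    for n in range(1, nrelax + 1):
        weights.append((nrelax - n + 1) / (nrelax + 1))
    weights.append(0.0)  # fully interior point: no boundary influence
    return np.array(weights)
```

With the default of four relaxed rows this gives weights of 1.0, 0.8, 0.6, 0.4, 0.2 and then 0.0 in the interior, showing how the boundary influence fades smoothly rather than abruptly.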
8.5.2 Lower boundary conditions

The LOWBDY_DOMAINn file provides sea-surface temperature, substrate temperature, and optionally snow cover and sea ice. The switch ISSTVAR allows multiple times in this file (created by INTERPF) to be read in as the model runs, which is the method of updating these fields in long-term simulations.
8.5.3 Upper boundary condition (IFUPR)

0. No upper boundary condition
A rigid lid with no vertical motion at the model top. This may be preferable for very coarse mesh simulations (50 km or larger grids).

1. Upper radiative condition
The top vertical motion is calculated to reduce reflection of energy from the model top, preventing spurious noise or energy build-up over topography. Recommended for grid lengths below 50 km. It works better for hydrostatic gravity-wave scales than for inertial or nonhydrostatic scales.
8.6 Nesting

8.6.1 One-way nesting

When a single-domain or multiple-domain run completes, its domain output can be put into NESTDOWN to create an input file at higher resolution (any integer ratio in dx), together with new lateral and lower boundary files (see the NESTDOWN chapter). NESTDOWN allows the addition of higher-resolution elevation and land-use data. This is known as a one-way nest because it is forced purely by the coarse-mesh boundaries and obviously has no feedback on the coarse-mesh run. When INTERPB becomes available, it will be possible to put model output on pressure levels and reanalyze with observations, as well as to choose different vertical levels for the nest, by using INTERPF and NESTDOWN.
8.6.2 Two-way nesting

Multiple domains can be run in MM5 at the same time. Up to nine domains on four levels of nesting are allowed, with each nest level having one third of its parent domain's grid length. Each domain takes information from its parent domain every timestep, and runs three timesteps for each parent step before feeding back information to the parent domain on the coincident interior points. Figure 1.3 illustrates the staggering with the 3:1 ratio. The feedback distinguishes two-way nesting from one-way nesting, and allows nests to affect the coarse-mesh solution, usually leading to better behavior at outflow boundaries. However, there is a significant overhead cost associated with the boundary interpolation and feedback at every timestep, particularly on distributed-memory machines.
8.6.3 Two-way nest initialization options (IOVERW)

IOVERW is the overwrite switch that determines whether a nested input file is used to replace coarse-mesh information, or whether the coarse domain is just interpolated to start the nest.

0. Interpolation
No nested input file is required. All the information, including topography, is interpolated from the coarse mesh to start the nest. This is suitable for nests that start later than the coarse mesh, or for moving and overlapping nests. It can be used in situations where improved topography is not essential, such as over water or smooth terrain.

1. Nest input file
This requires an MMINPUT file to be read in for the nest. The input file contains all the meteorological and terrain fields at higher resolution, and so may provide a more accurate initial analysis. This should only be applied when the coarse mesh and nest both start at the same time, because an analysis at a later time is unlikely to match the coarse-mesh boundary conditions.

2. Terrain input file
This only requires the TERRAIN file for the nest. The meteorological fields are interpolated from the coarse mesh, but the terrain and land-use are replaced with the higher-resolution fields from TERRAIN. A vertical adjustment is carried out to put the interpolated fields on terrain-following levels consistent with the new nest terrain. This has the benefit of allowing fine-topography nests to start later than the coarse mesh.
8.6.4 Two-way nesting feedback options (IFEED)

These options determine how a nest feeds back its interior information to its parent domain.

0. No feedback
Feedback is turned off. This is similar to a one-way nest, except that boundary conditions are updated by the parent domain every timestep. Not recommended except for tests.

1. 9-point weighted average
Feedback uses a weighted average of nest points onto the coarse-mesh point, not just the coincident value. Not the primary recommended choice, because the terrain elevation is not consistent with this feedback.

2. 1-point feedback with no smoothing
The coincident point is fed back. Not recommended except for tests.

3. 1-point feedback with smoother-desmoother
The coincident point is fed back, and the coarse-mesh fields are then filtered using a smoother-desmoother to remove two-grid-length noise. Recommended option.

4. 1-point feedback with heavy smoothing
The coincident point is fed back, and the coarse-mesh fields are then smoothed with a 1-2-1 smoother that removes two-grid-length noise and strongly damps other short wavelengths. Could be used if the nest region appears excessively noisy when viewing coarse-mesh output.
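The effect of a smoother-desmoother of the kind used for IFEED = 3 can be illustrated in one dimension. The sketch below is a generic two-pass filter of this type, not code from the MM5 source; the coefficients 0.5 and -0.52 are the classical textbook choices, assumed here for illustration. The smoothing pass removes the two-grid-length wave exactly, and the desmoothing pass largely restores longer wavelengths:

```python
def pass_121(f, nu):
    """One 1-2-1 pass: f_i += (nu/2) * (f_{i+1} - 2*f_i + f_{i-1})."""
    g = list(f)
    for i in range(1, len(f) - 1):
        g[i] = f[i] + 0.5 * nu * (f[i + 1] - 2.0 * f[i] + f[i - 1])
    return g

def smoother_desmoother(f, nu_smooth=0.5, nu_desmooth=-0.52):
    """Smooth then desmooth; nu_smooth = 0.5 zeroes the 2*dx wave exactly."""
    return pass_121(pass_121(f, nu_smooth), nu_desmooth)

# A pure two-grid-length wave is removed entirely in the interior,
# while a smooth (here, linear) field passes through unchanged.
noise = [(-1.0) ** i for i in range(10)]
filtered = smoother_desmoother(noise)
```

With nu = 0.5 the pass is a 0.25/0.5/0.25 average, whose response is exactly zero at wavelength 2*dx; the negative second pass compensates the damping that the first pass applies to longer, physically meaningful waves.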
8.7 Four-Dimensional Data Assimilation (FDDA)

8.7.1 Introduction

FDDA is a method of running a full-physics model while incorporating observations. The model equations assure dynamical consistency, while the observations keep the model close to the true conditions and make up for errors and gaps in the initial analysis and for deficiencies in the model physics. The MM5 model uses the Newtonian-relaxation, or nudging, technique.
8.7.2 FDDA Method

There are two distinct nudging methods; the model can use them individually or combined.

Analysis or Grid Nudging
Newtonian relaxation terms are added to the prognostic equations for wind, temperature, and water vapor. These terms relax the model value towards a given analysis. The technique is implemented by obtaining analyses on the model grid over the data-assimilation period; these are fed to the model in its standard input format. The model linearly interpolates the analyses in time to determine the value towards which it relaxes its solution. The user defines the time scale of the relaxation constants for each variable.

Station or Observational Nudging
In situations where analysis nudging is not practical, such as at high resolution or with asynoptic data, obs nudging is a useful alternative. This method again uses relaxation terms, but here, much as in objective-analysis techniques, the relaxation term is based on the model error at observation stations and acts to reduce that error. Each observation has a radius of influence, a time window, and a relaxation time scale that determine where, when, and how much it affects the model solution. A given model grid point may be within the radius of influence of several observations, whose contributions are weighted according to distance. To implement this method, an observation input file is required that lists, in chronological order, the 3D position and value of each observation in a specific format.
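In schematic form, grid nudging adds a relaxation term of the form -G(x - x_an) to each nudged prognostic equation, where x_an is the analysis value interpolated linearly in time and G is the nudging coefficient (units s-1). The one-variable sketch below is not MM5 code; the variable names and the forward-Euler stepping are illustrative only:

```python
def nudge_step(x, t, dt, g, analyses, forcing=0.0):
    """One forward-Euler step of dx/dt = forcing - g * (x - x_an(t)),
    where x_an(t) is linearly interpolated between two bracketing analyses."""
    (t0, a0), (t1, a1) = analyses           # (time, value) analysis pairs
    w = (t - t0) / (t1 - t0)
    x_an = (1.0 - w) * a0 + w * a1          # linear interpolation in time
    return x + dt * (forcing - g * (x - x_an))

# With no other forcing, the model value relaxes toward the analysis
# on a time scale of roughly 1/g (here about an hour).
x, t, dt = 280.0, 0.0, 60.0                 # e.g. temperature (K), 60 s step
analyses = ((0.0, 285.0), (10800.0, 286.0)) # analyses 3 h apart
for _ in range(180):                        # integrate 3 h
    x = nudge_step(x, t, dt, g=3.0e-4, analyses=analyses)
    t += dt
```

Because the term is proportional to the error, the initial 5 K discrepancy decays exponentially, and thereafter the model tracks the slowly drifting analysis with only a small lag.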
8.7.3 Uses of FDDA Four-Dimensional Data Assimilation has three basic uses -
• Dynamic Initialization: Data assimilation by the above methods is applied during a pre-forecast period for which additional observations or analyses exist. The nudging terms then switch off as the forecast begins. This has two advantages over the standard static initialization: (i) it can make use of asynoptic data during the pre-forecast period, and generally contains more observational information at the forecast start time; and (ii) there is a reduced spin-up or shock effect at the forecast start, owing to the better balance of the initial model conditions.
• Dynamic Analysis: This is the same as dynamic initialization, except that the intent is to produce a four-dimensionally consistent analysis, taking into account the dynamical balances provided by the model and the observations introduced by nudging. Such an analysis may be used to initialize higher-resolution simulations, or for kinematic studies such as chemical and tracer transport.
• Boundary Conditions: By using data assimilation on the coarse mesh while nesting with a finer mesh, the fine mesh is provided with superior boundary conditions compared to the standard linear interpolation of analyses, because the boundaries have much higher time resolution of features passing through them into the fine mesh.
Note: For scientific case studies and forecasts, the model should have no data assimilation terms active during the period of interest, as these represent non-physical terms in the equations.
8.7.4 Data used in FDDA

Analysis nudging
When doing three-dimensional analysis nudging, no additional input data files are required: MM5 can use the same MMINPUT file, or a copy of it named MMINPUT2. If surface FDDA is desired, the user must set F4D = TRUE in the namelist of the RAWINS job deck, which makes the job create a (typically 3-hourly) surface analysis file to be used in MM5. Surface FDDA works with all the boundary-layer options except 0, 1, and 3, since it needs boundary-layer-top information that those schemes do not provide.

Station nudging
There is no standard software available to create the input data file for observational nudging. The input file is a binary file containing 9 real numbers per record, in order of increasing time. The READ statement in the model is the following:

READ (NVOL,END=111) TIMEOB,RIO,RJO,RKO,(VAROBS(IVAR),IVAR=1,5)

where NVOL is the input Fortran unit number, and

TIMEOB: Julian date in dddhh. Example: 16623.5 = Julian day 166 and hour 2330 UTC
RIO: y-location - I dot-point location on coarse mesh (may be a fraction of a grid)
RJO: x-location - J dot-point location on coarse mesh (may be a fraction of a grid)
RKO: z-location - K half-σ level (must be on half σ levels)
VAROBS(1): u wind - in m/sec, rotated to model grid
VAROBS(2): v wind - in m/sec, rotated to model grid
VAROBS(3): temperature - in Kelvin
VAROBS(4): water vapor mixing ratio - in kg/kg
VAROBS(5): Pstar - in cb (only used in hydrostatic model)
A user may include more information at the end of a record; it is not read by the model, but can be used to identify the station and data type. The no-data value is 99999. If running the model in nonhydrostatic mode, 99999. can be used to fill the Pstar slot.
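Because the model reads this file with a Fortran unformatted sequential READ, each record must be framed by the compiler's record-length markers. The sketch below writes one observation record from Python, assuming 4-byte native-endian reals and 4-byte record markers matching the compiled model (both are compiler-dependent, so verify against your build); the station-identifier trailer is a hypothetical example of the optional extra information:

```python
import struct

NO_DATA = 99999.0

def write_obs_record(f, timeob, rio, rjo, rko, u, v, t, q,
                     pstar=NO_DATA, trailer=b""):
    """Write one Fortran unformatted sequential record: 9 native-endian
    4-byte reals, optionally followed by extra identifying bytes."""
    payload = struct.pack("9f", timeob, rio, rjo, rko, u, v, t, q, pstar)
    payload += trailer
    marker = struct.pack("i", len(payload))   # leading/trailing length markers
    f.write(marker + payload + marker)

with open("MM5OBS_DOMAIN1", "wb") as f:
    # Day 166, 2330 UTC; obs at (I, J) = (20.5, 31.2), half-sigma level 23;
    # nonhydrostatic run, so Pstar is filled with the no-data value.
    write_obs_record(f, 16623.5, 20.5, 31.2, 23.0,
                     u=5.2, v=-1.4, t=287.6, q=0.0081,
                     trailer=b"STATION_0001")
```

Records for successive times are simply appended in the same way, keeping TIMEOB non-decreasing as the format requires.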
8.8 How to run MM5

Get the source code. The current MM5 release resides on NCAR's anonymous ftp site, ftp.ucar.edu:mesouser/MM5V3/MM5.TAR.gz. You may download MM5.TAR.gz to your working directory from ftp://ftp.ucar.edu/mesouser/MM5V3, or copy it from ~mesouser/MM5V3/MM5.TAR.gz on NCAR's SCD machines.

There are two steps to compiling and running the MM5 system:
• Choosing compilation options and compiling the code.
• Modifying the run-time options and executing the program.

8.8.1 Compiling MM5

• Edit the file “configure.user”.
• Type ‘make’ (see 8.8.3 for running a batch job on NCAR’s IBM).

The user chooses the compilation options appropriate to his/her system by editing the “configure.user” file. This file is included in every Makefile used in compiling the model, so it contains many rules, but the user need only be concerned with three things:

• Find the section of compilation options appropriate for your machine. Uncomment the RUNTIME_SYSTEM variable and the compiler options.
• Make sure that the general utilities required in a UNIX environment for compilation are available and appropriate. For example, there are many versions of the program “make” - if yours has special quirks and/or options, this is the place to indicate them.
• Set model options in sections 5 and 6 of configure.user. These are used to set up domain sizes and the 4DDA and physics options for (selective) compilation purposes.

If you wish to compile and run the model on a distributed-memory machine (such as an IBM SP2, Cray T3E, SGI Origin 2000 with MPI, or a Linux cluster):

• obtain the additional tar file, MPP.TAR.gz, then gunzip and untar it in the MM5 top directory;
• edit the configure.user file, and select and uncomment the appropriate RUNTIME_SYSTEM and compiler flags;
• type ‘make mpp’ to build an executable.

More information on this topic is provided in README.MPP in the MM5 tar file, in Appendix D of this document, and on the Web page: http://www.mmm.ucar.edu/mm5/mpp.html
8.8.2 Running MM5

• Create the “mm5.deck” script by typing ‘make mm5.deck’ - RUNTIME_SYSTEM must be set correctly to get the right deck.
• Edit the mm5.deck script to set appropriate namelist values.
• Run the script by typing ‘mm5.deck’.
Basic Run: Set at least these namelist variables in mm5.deck: TIMAX, TISTEP, TAPFRQ, NESTIX, NESTJX, NESTI, NESTJ.

Restart Run: In addition to the above namelist variables, set IFREST = .TRUE., and IXTIMR = restart time in minutes (which can be found at the end of the mm5.print.out file from the previous run).

One-Way Run: Treat a one-way run in exactly the same manner as a basic run.
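As an illustration, a restart at forecast minute 360 of a 30-km coarse domain (so TISTEP = 90 s, following the 3*dx(km) recommendation in Section 8.14.1) might use OPARAM settings along these lines. The variable names are those defined in Section 8.14.1; the values are hypothetical:

```
 &OPARAM
 TIMAX  = 720.,
 TISTEP = 90.,
 IFREST = .TRUE.,
 IXTIMR = 360,
 IFSAVE = .TRUE.,
 SAVFRQ = 360.,
 IFTAPE = 1,
 TAPFRQ = 180.,
 &END
```

In practice these values are set by editing mm5.deck, which writes the mmlif namelist file when executed, rather than by editing mmlif directly.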
8.8.3 Running an MM5 Batch Job on NCAR’s IBM

• If you want to work in batch mode, whether to compile and/or execute, get a copy of mm5.deck.ibm from the mesouser directory, ~mesouser/MM5V3/IBM, on NCAR’s blackforest/babyblue/bluesky. Alternatively, you may generate the deck once you have the MM5.TAR.gz file on your local machine: first gunzip and untar the tar file, edit the configure.user file to define RUNTIME_SYSTEM=‘‘sp2’’, then type ‘make mm5.deck’. This deck has the relevant part of the configure.user file inside it, and is designed to be used for both interactive and batch mode.

• If you would like to compile interactively on an IBM, you can either use the above deck, or use the IBM interactive deck by setting RUNTIME_SYSTEM=‘‘IBM’’ and then typing ‘make mm5.deck’. The mm5.deck generated this way looks like the other workstation decks. Compiling on an IBM is similar to compiling on any other workstation.

• When you use the interactive deck to compile, you will still need to use the batch deck to submit a batch job for execution. Before you submit the batch job, remember to tar up your entire directory structure and save it somewhere (whether NCAR’s MSS or your local archive). Your batch job needs to access this tar file (default name mm5exe.tar) for execution.
Note: The mmlif (namelist file) for running MM5 is now generated from both your configure.user file (section 6 of the configure.user) and mm5.deck.
8.8.4 Useful make commands

make clean: Removes all generated files and returns the code to its original state. Use it before recompiling.

make code: Creates *.f files from *.F files and places them in the directory pick/. Useful for looking at the code in a single directory. All files related to the options selected in the configure.user file will be created.
8.9 Input to MM5

Files from the INTERPF program, for a basic run:

• Model initial condition file(s): MMINPUT_DOMAINx (MMINPUT_DOMAIN2, 3, ... are optional).
• Lateral and lower boundary condition files for the coarsest domain: BDYOUT_DOMAIN1, LOWBDY_DOMAINx (LOWBDY_DOMAIN2, 3, ... are optional; the model will use them if they are present).
• Nest terrain file(s) from program TERRAIN: TERRAIN_DOMAIN2, 3, etc., if using the IOVERW = 2 option.

Files from the MM5 program, if it is a restart run:

• Model save file(s) from the previous run: rename SAVE_DOMAINx to RESTART_DOMAINx.

Files from RAWINS/LITTLE_R, if running the gridded 4DDA option with surface analysis:

• FDDA surface analysis: SFCFDDA_DOMAINx

Files generated by the user, if running the observational nudging option:

• FDDA 4D obs file(s): MM5OBS_DOMAINx

mmlif: a namelist file containing user-specified options; created when mm5.deck is executed.
LANDUSE.TBL: user-modifiable landuse characteristics (in ASCII), provided.
RRTM_DATA: RRTM radiation scheme data file, provided.
BUCKET.TBL: user-modifiable constants used in the bucket soil moisture model, provided.
VEGPARM.TBL: user-modifiable constants used in the Noah LSM, provided.
SOILPARM.TBL: user-modifiable constants used in the Noah LSM, provided.
GENPARM.TBL: user-modifiable constants used in the Noah LSM, provided.

Note that the workstation mm5.deck expects all input files (named as above) to be present in the Run/ directory. See the mm5.deck for details.
8.10 Output from MM5

A number of files are written out during MM5 integration. These are:

• history files (MMOUT_DOMAINx), if IFTAPE = 1; the output frequency is set by TAPFRQ (and INCTAP).
• restart files (SAVE_DOMAINx), if IFSAVE = .TRUE.; the output frequency is set by SAVFRQ.

Output from each domain is written to a different file. For example, domain 1’s history file is written to MMOUT_DOMAIN1, and its restart file to SAVE_DOMAIN1. Each output file contains data for all output times for that domain. On NCAR’s IBMs, we recommend using BUFFRQ to split the output so that no file exceeds 6,000 Mb (the file-size limit for the MSS).

For each time period the model history output includes:
• A general header record describing the model configuration.
• A subheader describing the field that follows, and then the field itself. This is repeated for all fields in an output time.

3D forecast fields, dimensioned (IX, JX, KX or KX+1) for that domain, include (note that the variables are NO LONGER coupled in Version 3):

1 U: U-wind (m/s)
2 V: V-wind (m/s)
3 T: Temperature (K)
4 Q: Water vapor mixing ratio (kg/kg) (if IMPHYS≥2)
5 CLW: Cloud water mixing ratio (kg/kg) (if IMPHYS≥3)
6 RNW: Rain water mixing ratio (kg/kg) (if IMPHYS≥3)
7 ICE: Ice cloud mixing ratio (kg/kg) (if IMPHYS≥5)
8 SNOW: Snow mixing ratio (kg/kg) (if IMPHYS≥5)
9 GRAUPEL: Graupel (kg/kg) (if IMPHYS≥6)
10 NCI: Number concentration of ice (if IMPHYS=7)
11 TKE: Turbulent k.e. (J/kg) (if IBLTYP=3,4,6)
12 RAD TEND: Atmospheric radiation tendency (K/day) (if FRAD≥2)
13 W: Vertical velocity (m/s) (on full σ-levels)
14 PP: Perturbation pressure (Pa)

U and V are on dot points; all other 3D fields are on cross points.
2D forecast fields dimensioned (IX, JX) include:

1 PSTARCRS: Pstar (cb)
2 GROUND T: Ground temperature (K)
3 RAIN CON: Accum. convective rainfall (cm)
4 RAIN NON: Accum. nonconv. rainfall (cm)
5 PBL HGT: PBL height (m)
6 REGIME: PBL regime (category, 1-4)
7 SHFLUX: Surface sensible heat flux (W/m2)
8 LHFLUX: Surface latent heat flux (W/m2)
9 UST: Frictional velocity (m/s)
10 SWDOWN: Surface downward shortwave radiation (W/m2)
11 LWDOWN: Surface downward longwave radiation (W/m2)
12 MAVAIL: Surface moisture availability (if IMOIAV=1,2)
13 SOIL T x: Soil temperature in a few layers (K) (if ISOIL=1,2)
14 SOIL M x: Soil moisture in a few layers (m3/m3) (if ISOIL=2)
15 SOIL W x: Soil water in a few layers (m3/m3) (if ISOIL=2)
16 SFCRNOFF: Surface runoff (mm) (if ISOIL=2)
17 UGDRNOFF: Underground runoff (mm) (if ISOIL=2)
18 SNOWCOVR: Snow cover (variable if ISOIL=2)
19 SNOWH: Physical snow height (m) (if ISOIL=2, or IFSNOW=2)
20 WEASD: Water-equivalent snow depth (mm) (if ISOIL=2, or IFSNOW=2)
21 CANOPYM: Canopy moisture (m) (if ISOIL=2)
22 GRNFLX: Ground heat flux (W/m2) (if ISOIL=2, 3)
23 ALB: Albedo (fraction) (if ISOIL=2)
24 ALBSNOMX: Maximum snow albedo (%) (if ISOIL=2, and RDMAXALB=T)
25 MONALBnn: Monthly albedo (%) (if ISOIL=2, or RDBRDALB=T)
26 ALBEDO: Background albedo (%) (if ISOIL=2, RDBRDALB=T)
27 VEGFRC: Vegetation coverage (if ISOIL=2)
28 SWOUT: Top outgoing shortwave radiation (if FRAD>=2)
29 LWOUT: Top outgoing longwave radiation (if FRAD>=2)
30 T2: 2 m temperature (K) (if IBLTYP=2, 4, 5)
31 Q2: 2 m mixing ratio (kg/kg) (if IBLTYP=2, 4, 5)
32 U10: 10 m u component of wind (m/sec) (if IBLTYP=2,4,5)
33 V10: 10 m v component of wind (m/sec) (if IBLTYP=2,4,5)
34 M-O LENG: Monin-Obukhov length (m) (if ISOIL=3)
35 NET RAD: Surface net radiation (W/m2) (if ISOIL=3)
36 ALBEDO: Surface albedo (fraction) (if ISOIL=3)
37 RA: Aerodynamic resistance (s/m) (if ISOIL=3)
38 RS: Surface resistance (s/m) (if ISOIL=3)
39 LAI: Leaf area index (area/area) (if ISOIL=3)
40 VEGFRC: Vegetation fraction (fraction) (if ISOIL=3)
41 ZNT: Roughness length (m) (if ISOIL=3)
42 ISLTYP: Soil texture type (if ISOIL=3)
43 SUMFB: Mass flux updraft (if ICUPA=8)
44 SPSRC: Source layer updraft (if ICUPA=8)
45 SEAICEFR: Seaice fraction (if IPOLAR=1)
46 TGSI: Seaice temperature (if IPOLAR=1)

2D constant fields dimensioned (IX, JX) include:

47 TERRAIN: Terrain elevation (m)
48 MAPFACCR: Map scale factor (cross points)
49 MAPFACDT: Map scale factor (dot points)
50 CORIOLIS: Coriolis parameter (/s)
51 RES TEMP: Substrate temperature (K)
52 LATITCRS: Latitude (deg)
53 LONGICRS: Longitude (deg)
54 LANDUSE: Land-use category
55 SNOWCOVR: Snow cover (if ISOIL < 2)
56 TSEASFC: Sea surface temperature (K)
57 SEAICE: Seaice (dimensionless) (if ISOIL=2)

All of the 2D fields are on cross points, except MAPFACDT, which is on dot points.
Other special output:

58 SIGMAH: Model half-sigma levels
59 ALBD: Surface albedo from LANDUSE.TBL
60 SLMO: Surface moisture availability from LANDUSE.TBL
61 SFEM: Surface emissivity from LANDUSE.TBL
62 SFZ0: Surface roughness from LANDUSE.TBL
63 THERIN: Surface thermal inertia from LANDUSE.TBL
64 SFHC: Soil heat capacity from LANDUSE.TBL
65 SCFX: Snow cover effect from LANDUSE.TBL
If one sets IFTSOUT = .TRUE. and defines TSLAT and TSLON for the time-series locations, one will obtain time-series output in fort.26 for domain 1, fort.27 for domain 2, and so on, for serial runs (for MPI runs the time series is, unfortunately, scattered among the various rsl.out.* files). The time-series output contains the following data:

xtime, time-step, its, jts, t-sfc, q-sfc, u-sfc, v-sfc, pstar, pp-sfc, rainc, rainnc, clw, glw, hfx, qfx, gsw, t-ground

where

xtime: model time (unit minutes)
time-step: the nth time step
its, jts: I, J locations in the model grid for the time-series point
t-sfc: 2-m or lowest σ-level temperature (unit K)
q-sfc: 2-m or lowest σ-level mixing ratio (unit kg/kg)
u-sfc, v-sfc: 10-m or lowest σ-level winds (unit m s-1), rotated to earth coordinates
pstar: reference p* (unit cb, or 10*hPa)
pp-sfc: perturbation pressure at the lowest σ level (unit Pa)
rainc, rainnc: accumulated convective and non-convective surface precipitation (unit cm)
clw: column-integrated cloud liquid/ice (unit mm)
glw, gsw: surface downward longwave and shortwave radiation (unit W m-2)
hfx, qfx: surface sensible and latent heat (* latent heat of vaporization) fluxes (unit W m-2)
t-ground: ground or skin (if ISOIL = 2) temperature (unit K)
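A whitespace-delimited line in this layout can be mapped onto named fields with a few lines of Python. This is a hypothetical reader, assuming one record per line in the column order listed above; check your fort.2x files first, since the exact Fortran formatting may differ between versions:

```python
TS_FIELDS = ("xtime", "time_step", "its", "jts", "t_sfc", "q_sfc",
             "u_sfc", "v_sfc", "pstar", "pp_sfc", "rainc", "rainnc",
             "clw", "glw", "hfx", "qfx", "gsw", "t_ground")

def parse_ts_line(line):
    """Map one time-series record onto the 18 documented field names."""
    values = [float(v) for v in line.split()]
    if len(values) != len(TS_FIELDS):
        raise ValueError("expected %d columns, got %d"
                         % (len(TS_FIELDS), len(values)))
    return dict(zip(TS_FIELDS, values))

# Hypothetical record for one time-series point.
rec = parse_ts_line("60.0 40 25 31 287.4 0.0079 3.2 -1.1 89.5 "
                    "-12.3 0.00 0.04 0.6 310.2 45.0 80.1 512.3 288.1")
```

The same function applied line-by-line over a whole fort.26 file yields one dictionary per output time, convenient for plotting station traces.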
8.11 MM5 Files and Unit Numbers

MM5 accesses most files by referring to the file names. The Fortran unit numbers associated with the files are assigned as follows:

Table 8.1 File names, Fortran unit numbers, and their description for MM5.

File name                              Unit number               Description

INPUT
mmlif                                  fort.10                   Input, namelist file
LOWBDY_DOMAIN1                         fort.21, 22, ...          Lower boundary file, contains substrate temp and SST
BDYOUT_DOMAIN1                         fort.9                    Lateral boundary file created by program INTERPF
LANDUSE.TBL                            fort.19                   Physical properties for landuse categories
BUCKET.TBL                             fort.18                   Max, min moisture availability range, evaporation rate
VEGPARM.TBL                            fort.19                   Used if ISOIL = 2
SOILPARM.TBL                           fort.19                   Used if ISOIL = 2
GENPARM.TBL                            fort.19                   Used if ISOIL = 2
RRTM_DATA                              fort.20                   RRTM radiation scheme data
MMINPUT_DOMAINx (TERRAIN_DOMAIN2..)    fort.11, 12, ... 19       Initial condition files created by program INTERPF (or NESTDOWN); or TERRAIN output files for nests
MMINPUT(2)_DOMAINx                     fort.31, 32, ... 39       3D analysis nudging files (same as initial condition files)
SFCFDDA_DOMAINx                        fort.71, 72, ... 79       Surface analysis nudging files created by program
                                       (fort.81, 82, ... 89)     LITTLE_R/RAWINS
MM5OBS_DOMAINx                         fort.61, 62, ... 69       Observation nudging files created by user’s own program
RESTART_DOMAINx                        fort.91, 92, ... 99       Restart files (same as SAVE_DOMAINx files)

OUTPUT
MMOUT_DOMAINx                          fort.41, 42, ... 49       MM5 model history files
SAVE_DOMAINx                           fort.51, 52, ..., 59      Restart files
SHUTDO_DOMAINx                         fort.61, 62, ..., 69      Shutdown restart files
fort.26, fort.27, ...                  fort.26, 27, ..., 29      Time-series output (IFTSOUT=T)
8.12 Configure.user Variables

The ‘configure.user’ file is the first file one needs to edit (if one is running a Cray batch job, one need edit only the mm5.deck, and these variables appear inside the deck). Except for the first variable, the rest are used for setting up the model’s memory; these variables are referred to as precompilation variables. Sections 1, 4 and the make rules will be explained in Chapter 9.

RUNTIME_SYSTEM: computer system to run the model on.

FDDAGD: =1, for 4DDA grid analysis nudging; =0, no 4DDA.

FDDAOB: =1, for 4DDA observation nudging; =0, no obs 4DDA.

MAXNES: maximum number of domains in the simulation. Note, though, that there are only 4 default nest levels (i.e. 1 coarse domain and 3 nests).

MIX, MJX, MKX: maximum number of grid points in I, J, and K.

IMPHYS: options for explicit moisture schemes:
=1, dry; =2, removal of super-saturation; =3, warm rain (Hsie); =4, simple ice (Dudhia); =5, mixed phase (Reisner); =6, mixed phase with graupel (Goddard); =7, mixed phase with graupel (Reisner); =8, mixed phase with graupel (Schultz).

MPHYSTBL: =0, not using the look-up-table version; =1, use the look-up-table version of explicit scheme options 4 and 5; =2, use the new optimized version (with vmass libraries).

ICUPA: options for cumulus parameterization schemes:
=1, none; =2, Anthes-Kuo; =3, Grell; =4, Arakawa-Schubert; =5, Fritsch-Chappell; =6, Kain-Fritsch; =7, Betts-Miller; =8, Kain-Fritsch 2 (with shallow convection).

IBLTYP: options for planetary boundary layer schemes:
=0, no PBL; =1, bulk PBL; =2, Blackadar PBL; =3, Burk-Thompson PBL; =4, Eta PBL; =5, MRF PBL; =6, Gayno-Seaman PBL; =7, Pleim-Chang PBL.

FRAD: options for atmospheric radiation schemes:
=0, none; =1, simple cooling; =2, cloud (Dudhia) (requires IMPHYS ≥ 3); =3, CCM2; =4, RRTM longwave scheme.

IPOLAR: =0, none; =1, polar physics (ISOIL .ne. 2).

ISOIL: =0, no soil model; =1, use the multi-layer soil model (requires IBLTYP=2, 4, 5, 6); =2, Noah LSM (requires IBLTYP=4, 5); =3, Pleim-Xiu LSM (requires IBLTYP=7).

ISHALLO: =1, use shallow convective scheme (not well tested); =0, no.
8.13 Script Variables for IBM Batch Deck

ExpName: experiment name used in setting the MSS pathname for output.
InName: input MSS pathname.
RetPd: Mass Store retention period (days).
compile: =yes, compile the mm5 code; =no, expect an existing executable.
execute: =yes, execute the model; =no, compile the code only.
UseMySource: =yes, use your own source code; =no, use the mesouser version of the source code.
CaseName: MSS pathname for this run.
STARTsw: =NoReStart, start model run at hour zero (initialize); =ReStart, restart model run.
FDDAsw: =NoFDDA, no FDDA input files; =Anly, gridded FDDA input files; =Obs, obs FDDA input files; =Both, gridded and obs FDDA input files.
InBdy: MSS name of lateral boundary file.
InLow: MSS name of lower boundary condition file.
InMM: MSS name(s) of model input files.
InRst: MSS name(s) of model restart files.
In4DSfc: MSS name of surface analysis used for 4DDA.
In4DObs: MSS name of FDDA obs files.
Host: = [email protected]:/usr/tmp/username, host computer from which to rcp the user’s program tar file.
OutMM: MSS name for output.
8.14 Namelist Variables

A namelist file, called mmlif, is created when mm5.deck is executed. In MM5, this file is created partially from the configure.user file (section 6) and partially from mm5.deck.
8.14.1 OPARAM

TIMAX: forecast length in minutes.
TISTEP: time step in seconds for the coarsest domain (recommend 3*dx(km)).
IFREST: =TRUE, for a restart; =FALSE, for an initial run.
IXTIMR: integer time in minutes for restart.
IFSAVE: =TRUE, if saving data for restart; =FALSE, for no restart output.
SVLAST: =TRUE, if only saving the last time; =FALSE, save multiple times.
SAVFRQ: frequency of restart output in minutes.
IFTAPE: =1, for model output; =0, no model output.
TAPFRQ: frequency of model history file output in minutes.
BUFFRQ: how frequently to split model output files, in minutes (ignored if < TAPFRQ).
INCTAP: multiples of TAPFRQ for outputting.
IFRSFA: =TRUE, if it is a restart run using FDDA and multiple input files. Use with CDATEST.
IFSKIP: =TRUE, skip input files to start the model - do NOT use this for a restart.
CDATEST: DATE (yyyy-mm-dd_hh:mm:ss) of the start file, used with IFSKIP/IFRSFA.
IFPRT: =1, for printed output fields; =0, for no printed output fields.
PRTFRQ: frequency of printed output fields in minutes.
MASCHK: integer frequency in number of time steps for budget/rainfall prints (coarsest mesh) - may not give correct answers on parallel computers.
IFTSOUT: =TRUE, output time series; =FALSE, do not output time series.
TSLAT: latitudes of time-series output locations.
TSLON: longitudes of time-series output locations.
8.14.2 LPARAM

1) Defined in mm5.deck:

RADFRQ: frequency in minutes of radiation calculations (surface and atmospheric).
IMVDIF: =1, for moist vertical diffusion in clouds (requires IMPHYS>2, and IBLTYP=2, 5 or 7); =0, vertical diffusion is dry.
IVQADV: =0, vertical moisture advection uses log interpolation (old method); =1, vertical moisture advection uses linear interpolation (affects all moisture variables).
IVTADV: =0, vertical temperature advection uses log interpolation (old method); =1, vertical temperature advection uses linear interpolation.
ITHADV: =0, temperature advection and adiabatic term use temperature (old method); =1, temperature advection and adiabatic term use potential temperature.
ITPDIF: =1, for diffusion using perturbation temperature in the NH model; =2, use horizontal diffusion (new in version 3.7); =0, not using this function (new in V2).
TDKORR: =2, temperature-gradient correction for horizontal diffusion (ITPDIF=2) at ground level uses ground temperature; =1, the correction at ground level uses a one-sided difference of air temperature.
ICOR3D: =1, for full 3D Coriolis force (requires INHYD=1); =0, for the traditional approximation.
IEXSI: =0, no seaice; =1, seaice fraction diagnosed from sea-surface temperature (requires IPOLAR=1); =2, seaice fraction read from the LOWBDY file (requires IPOLAR=1).
IFUPR: =1, for upper radiative boundary condition (NH runs only); =0, rigid upper boundary in nonhydrostatic runs.
LEVSLP: nest level (corresponding to LEVIDN) at which solar radiation starts to account for orography (to switch off, set to a large value). Only available for SWRAD (IFRAD=2,4).
OROSHAW: =1, include the effect of orography shadowing (this only has an effect if LEVSLP is also set; not available for MM5 MPP runs; only available for SWRAD (IFRAD=2,4)); =0, do not include the effect of orography shadowing.
ITADVM: =1, use instability limiter for temperature advection; =0, do not.
IQADVM: =1, use instability limiter for QV/CLW advection; =0, do not.
IBOUDY: boundary condition options: =0, fixed, for coarse domain only; =2, time-dependent (used for 2-way nested boundary conditions); =3, relaxation inflow/outflow, for coarse domain only.
IFDRY: =1, for fake dry run with no latent heat release (requires IMPHYS>1, and ICUPA=1).
ISSTVAR: =1, update SST during a simulation (and snow cover and sea ice, if they are available); must have at least the SST field in the input. =0, do not update SST (and snow cover and sea ice) during a simulation.
IMOIAV: used for the bucket moisture scheme. =0, do not use the bucket scheme; =1, use the bucket scheme, with soil moisture initialized from the moisture availability values in LANDUSE.TBL; =2, use the bucket scheme, with soil moisture initialized from the soil moisture fields in the MMINPUT files.
IFSNOW: =1, snow cover effects (requires input SNOWC field from REGRID); =2, snow-cover prediction (requires input WEASD field from REGRID, and use of IMPHYS=4, 5, and 7).
ISFMTHD: method for calculation of 2m/10m diagnostics: =0, old method; =1, new method for stable conditions (IBLTYP=2 and 5 only).
IZ0TOPT: thermal roughness length option, for IBLTYP=2 and 5 only: =0, default (old) scheme; =1, Garratt formulation; =2, Zilitinkevich formulation.
ISFFLX: =1, compute surface heat and moisture fluxes; =0, no fluxes.
ITGFLG: =1, ground temperature predicted; =3, constant ground temperature.
ISFPAR: =1, use TERRAIN-generated land-use categories; =0, use only 2 (land/water) categories.
ICLOUD: =1, consider cloud effects on surface radiation when FRAD=0,1; consider clouds in both surface and atmospheric radiation when FRAD=2,3,4; =0, do not consider cloud effects on radiation; =2, (IFRAD=3 only) radiation interacts with RH-derived cloud fraction only.
IEVAP: =1, normal evaporative cooling; =0, no evaporative effects; =-1, no precip evaporative cooling (for IMPHYS=3, 4, and 5).
ISMRD: soil moisture initialization method, for IBLTYP=7 (Pleim-Xiu scheme) only: =0, use moisture availability from LANDUSE.TBL; =2, use soil moisture input from REGRID.
ISTLYR: bottoms of the soil layers expected as input, for ISOIL=2,3.
ISMLYR: bottoms of the soil layers expected as input, for ISOIL=2,3.
RDMAXALB: whether to read in max snow albedo, for ISOIL=2 (Noah LSM) only: =FALSE, do not use max snow albedo; =TRUE, use the max snow albedo present in the MMINPUT file.
RDBRDALB: whether to read in climatological monthly albedo, for ISOIL=2 (Noah LSM): =FALSE, do not use climatological monthly albedo; =TRUE, use the climatological monthly albedo present in the MMINPUT file.

2) Defined in configure.user, or internally produced:

IFRAD: see ‘Configure.user Variables’.
ICUPA: see ‘Configure.user Variables’.
IBLTYP: see ‘Configure.user Variables’.
ISHALLO: see ‘Configure.user Variables’.
ISOIL: see ‘Configure.user Variables’.
IPOLAR: see ‘Configure.user Variables’.
8.14.3 NPARAM

LEVIDN: level of nest for each domain (0 for domain 1; default valid values are 0-3).
NUMNC: ID number of the parent domain for each domain (1 for domain 1).
NESTIX: I-dimension of each domain.
NESTJX: J-dimension of each domain.
NESTI: south-west corner point I for each domain.
NESTJ: south-west corner point J for each domain.
XSTNES: starting time in minutes for each domain.
XENNES: ending time in minutes for each domain.
IOVERW: =1, for initializing a nest from the nest input file, usually at the model starting time; =0, for interpolating a nest from the parent mesh, usually during model integration; =2, for initializing a domain with high-resolution terrain, usually during model integration.
IACTIV: =1, if this domain is active at restart; =0, if this domain is inactive.
IMOVE: =0, if the domain does not move; =1, if the domain will move.
IMOVCO: number of the first move (always 1 at the beginning; may change for restarts).
IMOVEI: increment in I (parent-domain grids) of this move for this domain.
IMOVEJ: increment in J (parent-domain grids) of this move for this domain.
IMOVET: time in minutes of this move for this domain (relative to the beginning of the coarse-mesh run).

Note: the default number of moves is 10.

IFEED: feedback from nest to coarse mesh in 2-way nests: =0, no feedback; =1, 9-point weighted average; =2, 1-point feedback with no smoothing; =3, 1-point feedback with smoother/desmoother (recommended); =4, 1-point feedback with heavy smoothing.
8.14.4 PPARAM ZZLND
= roughness length over land (m) (if ISFPAR=0)
ZZWTR
= roughness length over water (m) (if ISFPAR=0)
ALBLND
= albedo over land (if ISFPAR=0)
THILND
= thermal inertia of land (cal-1 cm-2 K-1 s-0.5, if ISFPAR=0)
XMAVA
= moisture availability over land (if ISFPAR=0)
CONF
= non-convective precip saturation criterion (fraction ≤ 1, for IMPHYS=1)
SOILFAC
= a factor to make the 5-layer soil model time step more conservative. A higher number makes the soil time step shorter (range typically 1.0 - 2.0). Used with IBLTYP=1, 2, 4, 5, and 6.
CZO,OZO
= constants in Charnock relation for water roughness length. Used in IBLTYP = 2, 5 and 6.
CKH
= factor to control background diffusion coefficient used in the model. Default value is 1., which gives the same diffusion as versions before 3.5 if one uses 3xDX as the time step.
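For reference, a PPARAM section with representative values might look like the following. These are illustrative defaults only; consult the mm5deck shipped with your release before changing them, and note that they apply only when ISFPAR=0.

```fortran
 &PPARAM
 ZZLND  = 0.1,     ; roughness length over land (m)
 ZZWTR  = 0.0001,  ; roughness length over water (m)
 ALBLND = 0.15,    ; albedo over land
 THILND = 0.04,    ; thermal inertia of land
 XMAVA  = 0.3,     ; moisture availability over land
 CONF   = 1.0,     ; non-convective precip saturation criterion (IMPHYS=1)
 &END
```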
8.14.5 FPARAM
FDASTA
(MAXSES); time (min) for initiation of FDDA.
FDAEND
(MAXSES); time (min) for termination of FDDA.
I4D
(MAXSES, 2); will FDDA analysis nudging be employed, (0=no; 1=yes).
DIFTIM
(MAXNES, 2); time (min) between input analyses for analysis nudging.
IWIND
(MAXSES, 2); will the wind field be nudged from analyses, (0=no; 1=yes).
GV
(MAXSES, 2); analysis-nudging coefficient (s-1) for wind.
ITEMP
(MAXSES, 2); will the temperature be nudged from analyses, (0=no; 1=yes).
GT
(MAXSES, 2); analysis-nudging coefficient (s-1) for temperature.
IMOIS
(MAXSES, 2); will the mixing ratio be nudged from analyses, (0=no; 1=yes).
GQ
(MAXSES, 2); analysis-nudging coefficient (s-1) for mixing ratio.
IROT
(MAXSES); will vorticity be nudged from analyses, (0=no; 1=yes).
GR
(MAXSES, 2); analysis-nudging coefficient (m2 s-1) for vorticity.
INONBL
(MAXSES, 4); will PBL fields be nudged from 3-D analyses when not using surface-analysis nudging within PBL. (0=yes; 1=exclude certain variables depending on integer value of second index).
RINBLW
radius of influence (km) for surface-analysis nudging where the horizontal weighting function depends on surface data density.
NPFG
coarse-grid time-step frequency for select diagnostic print of analysis nudging.
I4DI
(MAXSES); will FDDA observation nudging be employed, (0=no; 1=yes).
ISWIND
(MAXSES); will the wind field be nudged from observations, (0=no; 1=yes).
GIV
(MAXSES); observation-nudging coefficient (s-1) for wind.
ISTEMP
(MAXSES); will the temperature be nudged from observations, (0=no; 1=yes).
GIT
(MAXSES); observation-nudging coefficient (s-1) for temperature.
ISMOIS
(MAXSES); will the mixing ratio be nudged from observations, (0=no; 1=yes).
GIQ
(MAXSES); observation-nudging coefficient (s-1) for mixing ratio.
RINXY
default horizontal radius of influence (km) for distance-weighted nudging corrections (for observation nudging).
RINSIG
vertical radius of influence (in sigma) for distance-weighted nudging corrections (for observation nudging).
TWINDO
(time window)/2 (min) over which an observation will be used for nudging.
NPFI
coarse-grid time-step frequency for select diagnostic print of observation nudging.
IONF
observation-nudging frequency in coarse grid time steps for observation-nudging calculations.
IDYNIN
for dynamic initialization using a ramp-down function to gradually turn off the FDDA before the pure forecast (1=yes, 0=no).
DTRAMP
the time period in minutes over which the nudging (obs nudging and analysis nudging) is ramped down from one to zero. Set DTRAMP negative if FDDA is to be ramped down BEFORE the end-of-data time (DATEND), and positive if the FDDA ramp-down period extends beyond the end-of-data time.
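Putting several of these variables together, a minimal analysis-nudging setup for the coarse domain might look like this in the FPARAM section. The coefficients shown are commonly used starting values, not requirements, and only the first (coarse-domain) array elements are illustrated:

```fortran
 &FPARAM
 FDASTA = 0.,       ; start FDDA at the beginning of the run
 FDAEND = 720.,     ; nudge over the first 12 h
 I4D    = 1,0,      ; 3-D analysis nudging on for domain 1
 DIFTIM = 720.,0.,  ; analyses available every 12 h
 IWIND  = 1,0,      ; nudge the wind field...
 GV     = 2.5E-4,   ; ...with this coefficient (s-1)
 ITEMP  = 1,0,      ; nudge temperature
 GT     = 2.5E-4,
 IMOIS  = 1,0,      ; nudge mixing ratio
 GQ     = 1.E-5,
 &END
```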
8.15 Some Common Errors Associated with MM5 Failure
When an MM5 job is completed, always check for at least the following:
• The "STOP 99999" print statement indicates that MM5 completed without crashing.
• When running a batch job on NCAR's computers, check that the mswrite commands all completed successfully in the shell, and that the files were written to the pathnames you expected.
• Check the top of the "mm5.print.out" file to see whether all domains are correctly initiated (for a multiple-domain job) and whether the physics options are correctly specified.
If an MM5 job has failed, check for these common problems:
• If the model stops immediately after printing 'NON-HYDROSTATIC RUN' with a 'Segmentation fault' or sometimes a 'Bus error', the model is most likely not getting enough memory to run. On most machines, typing 'unlimit' before you run the model will solve this.
• "Read past end-of-file": this is usually followed by a Fortran unit number. Check this unit number against Table 8.1 to find out which file MM5 has a problem with. Check all the MSREAD statements in the printout to be sure that files were read properly from the MSS, and also make sure that the file sizes are not zero. Double-check experiment names and MSS pathnames.
• "Unrecognized namelist variable": this usually means there is a typo in the namelist.
• Unmatched physics options: for instance, the following may appear in the print output:
STOP SEE ERRORS IN PRINT-OUT
Browsing through the output, one may find messages like:
ERROR: IFRAD=2 REQUIRES IRDDIM=1 AND IMPHYS>3
which tell the user what needs to be done to correct the problem.
• Uncompiled options:
STOP SEE ERRORS IN PRINT-OUT
Browsing through the output, one may find messages like:
ERROR: IFRAD=2, OPTION NOT COMPILED
which tell the user that the chosen option has not been compiled.
• When restarting a job, do not re-compile. If you must re-compile, do not change anything in the configure.user file.
• If the job stopped with a long list of "CFL>1...", it usually means the time step (TISTEP in the namelist) is too big. Shorten TISTEP and re-submit.
• For a multi-domain run, check these namelist variables carefully:
LEVIDN = 0,1,1,1,1,1,1,1,1,1, ; level of nest for each domain
NUMNC = 1,1,1,1,1,1,1,1,1,1, ; ID of mother domain for each nest
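The completion checks above can be scripted. The following is a minimal sketch that scans the print output for the normal-stop message and, failing that, lists the likely error lines described in this section. The mock mm5.print.out written at the top is only so the sketch is self-contained; point "out" at your real output file.

```shell
# Create a mock print file for illustration; use your real mm5.print.out.
printf 'NONHYDROSTATIC RUN\nSTOP 99999\n' > mm5.print.out

out=mm5.print.out
if grep -q 'STOP 99999' "$out"; then
    status="MM5 completed without crashing"
else
    status="MM5 did not reach STOP 99999"
    # Pull out the usual suspects described in this section.
    grep -nE 'ERROR|CFL>1|Read past end-of-file|Unrecognized namelist' "$out"
fi
echo "$status"
```

Because the mock file contains the normal-stop message, this sketch reports a clean completion; on a failed run the grep line points you at the relevant entries in Table 8.1.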
8.16 MM5 tar File
The mm5.tar file contains the following files and directories:

CHANGES                Description of changes to the MM5 program
Makefile               Makefile to create MM5 executable
README                 General information about the MM5 directory and how to run MM5
README.MPP             General information on how to compile and run on DM machines
Diff/                  Diff files for each new release
Run/                   Where MM5 runs
Templates/             MM5 job decks for different machines
Util/                  Utility programs for cpp
configure.user         Make rules and model configuration
configure.user.linux   As above, for a PC running the Linux OS on single and OMP processors
domain/, dynamics/, fdda/, include/, memory/, physics/, pick/
                       FORTRAN source directories
The file README contains basic instructions on how to compile and run the model. The file README.MPP contains basic information and instructions on how to compile and run MPP MM5. The model is executed in the directory Run/. Bug fixes and changes to the source code and tar file are described in the file CHANGES and in the diff.* files in the directory Diff/. All FORTRAN files are in lower-case directories separated according to their function. See the chart at the end of this chapter for a complete list of FORTRAN files. When the 'make code' command is executed, all .F and .f files selected for compiling are copied into the pick/ directory. A single cat command then lets a user generate a source listing (see the README file in the directory pick/).
8.17 Configure.user
(This file is included here for reference only. Use the most up-to-date one from the MM5 tar file.)
#
# Sections
# 1. System Variables
# 3. Fortran options
#    3a.  Cray (YMP, J90)
#         Note: set RUNTIME_SYSTEM="CRAY_IA" for Cray interactive job
#    3a2. Cray X1 Single Node OpenMP version
#    3b.  IRIX.6.X (SGI_Origin,SGI_R10000,SGI_R8000 which support OpenMP)
#    3b2. IRIX.6.X (SGI_Origin,SGI_R10000,SGI_R8000)
#    3c.  IRIX.5.2/5.3, IRIX.6.X (SGI_R4000/SGI_R4400/SGI_R5000)
#         Note: set RUNTIME_SYSTEM="SGI_R4000" for SGI_R4400/SGI_R5000
#    3d.  SUN Fortran (solaris,SPARC20/SPARC64)
#    3e.  DEC_ALPHA (OSF/1)
#    3e2. DEC_ALPHA (4100/8400; use OpenMP parallel directives)
#    3f.  IBM (AIX)
#    3f2. IBM, OpenMP (AIX)
#    3g.  HP (UX)
#    3h.  HP (SPP-UX) for HP Exemplar S/X-Class Systems
#    3i1. PC_PGF (LINUX/Portland Group Inc.)
#    3i2. PC_INTEL (LINUX/INTEL)
#    3j.  MAC (OSX/xlf)
# 4. General commands
# 5. Options for making "./include/parame.incl"
# 6. Physics Options (memory related)
# 7. MPP Options (Set no options in section 3)
#    7a.  IBM SP2
#    7a.1 IBM SP2 with SMP nodes
#    7b.  Cray T3E
#    7c.  SGI Origin 2000
#    7d.  HP Exemplar
#    7e.  Compaq ALPHA/MPI
#    7e.1 ALPHA Linux with MPI
#    7f.  Fujitsu VPP
#    7g1. Network of Linux PCs with MPI (PGI)
#    7g2. Network of Linux PCs with MPI (INTEL)
#    7h.  NEC SX/5 (under development)
#    7i.  Sun MPI
#    7j.  Cray X1
# 7k. Cray XD1, PGI Fortran
#
#-----------------------------------------------------------------------------
# 1. System Variables
#-----------------------------------------------------------------------------
SHELL = /bin/sh
RANLIB = echo
.SUFFIXES: .F .i .o .f .c
#-----------------------------------------------------------------------------
# 3. Fortran options
#    Uncomment the ones you need, including RUNTIME_SYSTEM
#-----------------------------------------------------------------------------
LIBINCLUDE = $(DEVTOP)/include
#-----------------------------------------------------------------------------
# 3a. Cray
#     Note: - imsl library is only needed if running Arakawa-Schubert cumulus
#             scheme; and the location of the library may be different on
#             non-NCAR Crays.
#           - if you are using the new program environment on Cray, set
#             CPP = /opt/ctl/bin/cpp
#           - select the right compilation option for Cray - you may use the
#             f90 option on paiute
#           - -x omp is needed for f90 compiler version 3.0.2.6 and above.
#             Check the man page.
#-----------------------------------------------------------------------------
#RUNTIME_SYSTEM = "CRAY_IA"
#FC = f90
#FCFLAGS = -D$(RUNTIME_SYSTEM) -I$(LIBINCLUDE) -O task1 -x omp
#CFLAGS =
#CPP = /opt/ctl/bin/cpp
#CPPFLAGS = -I$(LIBINCLUDE) -C -P
#LDOPTIONS =
#LOCAL_LIBRARIES = -L /usr/local/lib -l imsl
#MAKE = make -i -r
#-----------------------------------------------------------------------------
# 3a2. Cray X1 Single Node OpenMP version
#-----------------------------------------------------------------------------
#RUNTIME_SYSTEM = "crayx1"
## Use these for X1 cross compiler
#X1_CROSS_COMP = "gcc"
#X1_CROSS_CFLG = ""
## Use these for X1 native (trigger) compiler
##X1_CROSS_COMP = "cc"
##X1_CROSS_CFLG = "-hcommand"
#FC = ftn
### OpenMP in SSP mode
#FCFLAGS = -Ossp,task1,inline0 -xcsd,mic -sword_pointer -I$(LIBINCLUDE) -D$(RUNTIME_SYSTEM)
#LDOPTIONS = -Ossp,task1
### Multi-streaming single MSP mode
###FCFLAGS = -O3 -Ogen_private_callee -xomp,mic -sword_pointer -I$(LIBINCLUDE) -D$(RUNTIME_SYSTEM)
###LDOPTIONS =
#CFLAGS =
#CPP = cpp
#CPPFLAGS = -I$(LIBINCLUDE) -C -P
#LOCAL_LIBRARIES =
#MAKE = make -i -r
#-----------------------------------------------------------------------------
# 3b. IRIX.6.X (SGI_Origin,SGI_R10000,SGI_R8000 which support OpenMP)
#     Use OpenMP directives for multi-processor runs.
# - set RUNTIME_SYSTEM = SGI_Origin # - works with 7.2.1 and above compiler # - select appropriate XLOCAL0 macro for loader option # # - For parallel execution of MM5 set the following environment variables: # setenv OMP_NUM_THREADS # setenv _DSM_PLACEMENT ROUND_ROBIN # - For parallel execution on a processor set without contention: # setenv _DSM_WAIT SPIN # setenv OMP_DYNAMIC FALSE # setenv MPC_GANG OFF # - For parallel execution on a contented set of processors: # setenv _DSM_WAIT YEILD # setenv OMP_DYNAMIC TRUE # setenv MPC_GANG OFF #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "SGI_Origin" #FC = f77 #ABI = -n32 # 2 GB address space ##ABI = -64 # For 64-bit address space #IO = -mpio #PREC = # default 32-bit floating-point presicion. ##PREC = -r8 # 64-bit floating-point precision. ##Conversion program between different precisions of mminput and bdyout available from [email protected] #MP = -mp -MP:old_mp=OFF ##MP = -mp -MP:open_mp=OFF # Use SGI multiprocessing directives #OPT = -O3 -OPT:roundoff=3:IEEE_arithmetic=3 -OPT:reorg_common=OFF ##debugging#OPT = -g -DEBUG:div_check:subscript_check=ON:trap_uninitialized=ON ##select appropriate XLOCAL loader
#XLOCAL0 = ### Burk-Thompson PBL (IBLTYP=3) option mp directives ##XLOCAL0 = -Wl,-Xlocal,bt1_,-Xlocal,blk1_,-Xlocal,blk2_ ### Noah LSM (ISOIL=2) option mp directives ##XLOCAL0 = -Wl,-Xlocal,rite_,-Xlocal,abci_ ### Gayno-Seaman PBL (IBLTYP=6) option mp directives ##XLOCAL0 = -Wl,-Xlocal,fog1d_,-Xlocal,surface1_,-Xlocal,surface2_,-Xlocal,surface3_,-Xlocal,comsurfslab_ #FCFLAGS = -I$(LIBINCLUDE) -D$(RUNTIME_SYSTEM) $(ABI) $(IO) $(PREC) $(MP) $(OPT) #CFLAGS = #CPP = /usr/lib/cpp #CPPFLAGS = -I$(LIBINCLUDE) -C -P #LDOPTIONS = $(ABI) $(PREC) $(MP) $(OPT) $(XLOCAL0) #LOCAL_LIBRARIES = -lfastm #MAKE = make -i -r -P #----------------------------------------------------------------------------# 3b2. IRIX.6.X (SGI_Origin,SGI_R10000,SGI_R8000) # Use SGI directives for multi-processor runs. # - set RUNTIME_SYSTEM = SGI_R8000 # - use the appropriate LDOPTIONS if compiling Burk-Thompson PBL, # Gayno-Seaman PBL, or Noah land-surface module # - use 7.0 and above compiler # - do not use -lfastm for R10000 and Origin series for compiler # versions 7.0 and 7.1, unless patches are installed. 
For more # information please see MM5 Web page: # http://www.mmm.ucar.edu/mm5/mm5v2-sgi.html #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "SGI_R8000" #FC = f77 #FCFLAGS = -I$(LIBINCLUDE) -O3 -n32 -mips4 -mp -OPT:roundoff=3:IEEE_arithmetic=3 #CFLAGS = #CPP = /usr/lib/cpp #CPPFLAGS = -I$(LIBINCLUDE) -C -P #LDOPTIONS = -n32 -mips4 -mp ###Burk-Thompson (IBLTYP=3) option mp directives ##LDOPTIONS = -n32 -mips4 -mp -Wl,-Xlocal,bt1_,-Xlocal,blk1_,-Xlocal,blk2_ ###Noah LSM (ISOIL=2) option mp directives ##LDOPTIONS = -n32 -mips4 -mp -Wl,-Xlocal,rite_,-Xlocal,abci_ ### Gayno-Seaman (IBLTYP=6) option mp directives ##LDOPTIONS = -n32 -mips4 -mp -Wl,-Xlocal,fog1d_,-Xlocal,surface1_,-Xlocal,surface2_,-Xlocal,surface3_,-Xlocal,comsurfslab_ #LOCAL_LIBRARIES = -lfastm ##LOCAL_LIBRARIES = #MAKE = make -i -r #----------------------------------------------------------------------------# 3c. IRIX.6.X (SGI_R4400/SGI_R4000/SGI_R5000) #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "SGI_R4000" #FC = f77 #FCFLAGS = -I$(LIBINCLUDE) -mips2 -32 -O2 -Nn30000 -Olimit 1500 #CFLAGS = #CPP = /usr/lib/cpp #CPPFLAGS = -I$(LIBINCLUDE) -C -P #LDOPTIONS = #LOCAL_LIBRARIES = -lfastm #MAKE = make -i -r #----------------------------------------------------------------------------# 3d. SUN (solaris,SPARC20/SPARC64) #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "SUN" #FC = f90 #FCFLAGS = -fast -O2 -I$(LIBINCLUDE) #CFLAGS = #LDOPTIONS = -fast -O2 #CPP = /usr/lib/cpp #CPPFLAGS = -I$(LIBINCLUDE) -C -P
#LOCAL_LIBRARIES = #MAKE = make -i -r #----------------------------------------------------------------------------# 3e. DEC_ALPHA (OSF/1) #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "DEC_ALPHA" #FC = f90 #FCFLAGS = -cpp -D$(RUNTIME_SYSTEM) -I$(LIBINCLUDE) -c -O4 -Olimit 2000 -automatic \ # -fpe0 -align dcommons -align records -convert big_endian ###FCFLAGS = -cpp -D$(RUNTIME_SYSTEM) -DIBMopt -DvsLIB -I$(LIBINCLUDE) -c -O4 -Olimit 2000 -automatic \ ### -fpe0 -align dcommons -align records -convert big_endian #CFLAGS = #CPP = cpp #CPPFLAGS = -I$(LIBINCLUDE) -C -P #LDOPTIONS = -math_library accurate #LOCAL_LIBRARIES = #MAKE = make -i -r #----------------------------------------------------------------------------# 3e2. DEC_ALPHA (4100/8400 Series) # Use OpenMP directives for multi-processor runs. # - set RUNTIME_SYSTEM = DEC_ALPHA #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "DEC_ALPHA" #FC = f90 #FCFLAGS = -omp -cpp -D$(RUNTIME_SYSTEM) -I$(LIBINCLUDE) -c -O4 -Olimit 2000 \ #-automatic -fpe0 -align dcommons -align records -convert big_endian #CFLAGS = #CPP = cpp #CPPFLAGS = -I$(LIBINCLUDE) -C -P #LDOPTIONS = -omp -math_library accurate #LOCAL_LIBRARIES = #MAKE = make -i -r #----------------------------------------------------------------------------# 3f. IBM (AIX) #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "IBM" #FC = xlf #FCFLAGS = -I$(LIBINCLUDE) -O3 -qarch=auto -qmaxmem=-1 #CPP = /usr/lib/cpp #CFLAGS = #CPPFLAGS = -I$(LIBINCLUDE) -C -P -Drs6000 #LDOPTIONS = -qmaxmem=-1 -O3 -qarch=auto #LOCAL_LIBRARIES = -lmass #MAKE = make -i #----------------------------------------------------------------------------# 3f2. IBM (AIX) # - Depending on problem size and machine memory size, the settings # of maxstack and maxdata may need to be modified. 
# - If the newer thread-safe mass library is available, add
#   the -lmass_r option to LOCAL_LIBRARIES.
#-----------------------------------------------------------------------------
#RUNTIME_SYSTEM = "IBM"
#FC = xlf_r
#FCFLAGS = -I$(LIBINCLUDE) -O2 -qarch=auto -qmaxmem=-1 -qsmp=omp:noauto -qnosave -qstrict -qnocclines
#CPP = /usr/lib/cpp
#CFLAGS =
#CPPFLAGS = -I$(LIBINCLUDE) -C -P -Drs6000
#LDOPTIONS = -qmaxmem=-1 -O2 -qarch=auto -bmaxstack:512000000 -bmaxdata:2000000000
#LOCAL_LIBRARIES = -lxlsmp -lmass_r
#LOCAL_LIBRARIES = -lxlsmp
#MAKE = make -i
#-----------------------------------------------------------------------------
# 3g. HP (UX) #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "HP" #FC = f77 #FCFLAGS = -I$(LIBINCLUDE) -O #CPP = /usr/lib/cpp #CFLAGS = -Aa #CPPFLAGS = -I$(LIBINCLUDE) -C -P #LDOPTIONS = #LOCAL_LIBRARIES = #MAKE = make -i -r #----------------------------------------------------------------------------# 3h. HP-SPP (SPP-UX), and HP-SPP_IA #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "HP-SPP" #FC = f77 #PA8K = +DA2.0N +DS2.0a #ARCH = ${PA8K} #PROFILE = #INLINE = +Olimit +Oinline=_saxpy,vadv,hadv,sinty,sintx,slab,diffut #PARALLEL = +O3 +Oparallel +Onofail_safe +Onoautopar +Onodynsel # ## Use the following FCFLAGS to build single-threaded executable ##FCFLAGS = ${PROFILE} ${ARCH} -I$(LIBINCLUDE) +O3 +Oaggressive \ ## +Olibcalls ${INLINE} # ## Use the following FCFLAGS to build a parallel executable #FCFLAGS = ${PROFILE} ${ARCH} -I$(LIBINCLUDE) ${PARALLEL} \ # +O3 +Oaggressive +Olibcalls ${INLINE} # #CPP = /usr/lib/cpp #CFLAGS = ${PROFILE} -Aa #CPPFLAGS = -I$(LIBINCLUDE) -C -P #LDOPTIONS = ${FCFLAGS} -Wl,-aarchive_shared -Wl,+FPD #LOCAL_LIBRARIES = -Wl,/usr/lib/pa1.1/libm.a #MAKE = gmake -j 4 -i -r #----------------------------------------------------------------------------# 3i1. PC_PGF77 (LINUX/Portland Group Inc.) 
# pgf77 version 1.6 and above # May use pgf90 if the version is 3.1-4 #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "linux" #FC = pgf90 #FCFLAGS = -I$(LIBINCLUDE) -O2 -Mcray=pointer -tp p6 -pc 32 -Mnoframe -byteswapio ##FCFLAGS = -I$(LIBINCLUDE) -O2 -Mcray=pointer -tp p6 -pc 32 -Mnoframe byteswapio -mp \ ##-Mnosgimp #CPP = /lib/cpp #CFLAGS = -O #CPPFLAGS = -I$(LIBINCLUDE) #LDOPTIONS = -O2 -Mcray=pointer -tp p6 -pc 32 -Mnoframe -byteswapio ##LDOPTIONS = -O2 -Mcray=pointer -tp p6 -pc 32 -Mnoframe -byteswapio -mp #LOCAL_LIBRARIES = #MAKE = make -i -r #----------------------------------------------------------------------------# 3i2. PC_INTEL (LINUX/INTEL) #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "linux" #FC = ifort #FCFLAGS = -I$(LIBINCLUDE) -O2 -tp p6 -pc 32 -convert big_endian #CPP = /lib/cpp #CFLAGS = -O #CPPFLAGS = -I$(LIBINCLUDE) #LDOPTIONS = -O2 -tp p6 -pc 32 -convert big_endian #LOCAL_LIBRARIES = #MAKE = make -i -r #-----------------------------------------------------------------------------
# 3j. MAC (OSX/xlf) #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "macxlf" #FC = xlf #FCFLAGS = -I$(LIBINCLUDE) -qarch=auto #CPP = /usr/bin/cpp #CFLAGS = -O -DNOUNDERSCORE #CPPFLAGS = -I$(LIBINCLUDE) -I. -C -P -DIBM -xassembler-with-cpp #LDOPTIONS = -Wl,-stack_size,10000000,-stack_addr,0xc0000000 #LOCAL_LIBRARIES = #MAKE = make -i -r #RANLIB = ranlib #----------------------------------------------------------------------------# 4. General commands #----------------------------------------------------------------------------AR = ar ru RM = rm -f RM_CMD = $(RM) *.CKP *.ln *.BAK *.bak *.o *.i core errs ,* *~ *.a \ .emacs_* tags TAGS make.log MakeOut *.f ! GREP = grep -s CC = cc #----------------------------------------------------------------------------# 5. Options for making ./include/parame.incl #----------------------------------------------------------------------------# # FDDAGD (integer) - "1" -> FDDA gridded run FDDAGD = 0 # # FDDAOBS (integer) - "1" -> FDDA obs run FDDAOBS = 0 # # MAXNES (integer) - Max Number of Domains in simulation MAXNES = 2 # # MIX,MJX (integer) - Maximum Dimensions of any Domain MIX = 49 MJX = 52 # MKX (integer) - Number of half sigma levels in model MKX = 23 #----------------------------------------------------------------------------# 6. Physics Options # The first MAXNES values in the list will be used for the corresponding # model nests; the rest in the list can be used to compile other options. # The exception is FRAD, of which only the first value is used in the model, # (i.e., only one radiation option is used for all nests). The rest allow # other options to be compiled. # Compilation of Arakawa-Schubert cumulus scheme requires imsl. 
#----------------------------------------------------------------------------# IMPHYS - for explicit moisture schemes (array,integer) IMPHYS = "4,4,1,1,1,1,1,1,1,1" # - Dry,stable,warm rain,simple ice,mix phase, # - 1 ,2 ,3 ,4 ,5 # - graupel(gsfc),graupel(reisner2),schultz # -,6 ,7 ,8 MPHYSTBL = 0 # - 0=do not use look-up tables for moist # physics # - 1=use look-up tables for moist physics # (currently only simple ice and mix phase # are available) # - 2=optimized exmoisr routine (need vslib, if not # available set -DvsLIB in compile flags) # # ICUPA - for cumulus schemes (array,integer)
# - None,Kuo,Grell,AS,FC,KF,BM,KF2 1,2,3,4,5,6,7,8 ICUPA = "3,3,1,1,1,1,1,1,1,1" # # IBLTYP - for planetary boundary layer (array,integer) # - 0=no PBL fluxes,1=bulk,2=Blackadar, # 3=Burk-Thompson,4=Eta M-Y,5=MRF, # 6=Gayno-Seaman,7=Pleim-Xiu IBLTYP = "5,5,0,0,0,0,0,0,0,0" # # FRAD - for atmospheric radiation (integer) # - Radiation cooling of atmosphere # 0=none,1=simple,2=cloud,3=ccm2,rrtm=4 FRAD = "2,0,0,0,0" # # IPOLAR - (integer) for polar model used only if ISOIL=1 # 0=not polar (5-layer soil model) # 1=polar (7-layer snow/soil model) IPOLAR = 0 # # ISOIL - for multi-layer soil temperature model (integer) # - 0=no,1=yes (only works with IBLTYP=2,4,5,6) # 2=Noah land-surface scheme (IBLTYP=4,5 only) # 3=Pleim-Xiu LSM (IBLTYP=7 only) ISOIL = 1 # # ISHALLO (array,integer) - Shallow Convection Option # 1=shallow convection,0=No shallow convection ISHALLO = "0,0,0,0,0,0,0,0,0,0" #----------------------------------------------------------------------------# 7. MPP options # # For general information and updated "helpdesk" information see # http://www.mmm.ucar.edu/mm5/mpp # http://www.mmm.ucar.edu/mm5/mpp/helpdesk # #----------------------------------------------------------------------------# # Presently, of the MPP platforms only the "sp2" # is supplied with the "make deck" capability. # # MPP Software Layer MPP_LAYER=RSL #MPP_LAYER=NNTSMS # # PROCMIN_NS - minimum number of processors allowed in N/S dim # PROCMIN_NS = 1 # # PROCMIN_EW - minimum number of processors allowed in E/W dim # PROCMIN_EW = 1 # # ASSUME_HOMOGENOUS_ENVIRONMENT - on a machine with a heterogeneous # mix of processors (different speeds) setting this compile time # constant to 0 (zero) allows the program to detect the speed of each # processor at the beginning of a run and then to attempt to come up with # an optimal (static) mapping. Set this to 0 for a heterogeneous # mix of processors, set it to 1 for a homogeneous mix. 
Unless you # are certain you have a heterogeneous mix of processors, leave this # set to 1. Currently, this option is ignored on platforms other # than the IBM SP. # ASSUME_HOMOGENEOUS_ENVIRONMENT = 1 # #----------------------------------------------------------------------------# 7a. IBM SP2 # type 'make mpp' for the SP2
#----------------------------------------------------------------------------#RUNTIME_SYSTEM = "sp2" #MPP_TARGET=$(RUNTIME_SYSTEM) #MFC = mpxlf_r #MCC = mpcc_r #MLD = mpxlf_r #FCFLAGS = -O2 -qmaxmem=-1 -qarch=auto -qfloat=hsflt #LDOPTIONS = -bmaxdata:0x70000000 #LOCAL_LIBRARIES = -lmassv ##LOCAL_LIBRARIES = -lmass ###LOCAL_LIBRARIES = -lessl #MAKE = make -i -r #AWK = awk #SED = sed #CAT = cat #CUT = cut #EXPAND = expand #M4 = m4 #CPP = /lib/cpp -C -P #CPPFLAGS = -DMPI -Drs6000 -DSYSTEM_CALL_OK -DIBMopt ##CPPFLAGS = -DMPI -Drs6000 -DSYSTEM_CALL_OK -DIBMopt -DvsLIB #CFLAGS = -DNOUNDERSCORE -DMPI #ARCH_OBJS = milliclock.o #IWORDSIZE = 4 #RWORDSIZE = 4 #LWORDSIZE = 4 #----------------------------------------------------------------------------# 7a.1 IBM SP with Silver or Winterhawk nodes # type 'make mpp' for the SP2 # - You must compile with XLF or MPXLF version 6.1 or greater. # - Check with your system admin before linking to lessl or lmass. # - Note for running on blue.llnl.gov: # newmpxlf_r is LLNL specific wrapper around HPF 6.1 w/ HPF off. # - If the newer thread-safe mass library is available, add # the -lmass_r option to LOCAL_LIBRARIES. # - For very large domains, use -bmaxdata:2000000000 -bmaxstack:268435456 # for load options (Peter Morreale/SCD) # - If you enable -O3 optimization, add -qstrict as well #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "sp2" #MPP_TARGET=$(RUNTIME_SYSTEM) ## On llnl.blue.gov, (3/99) ##MFC = time newmpxlf_r ##MCC = mpcc_r ##MLD = newmpxlf_r ## On systems with R6.1 or greater of IBM Fortran. 
#MFC = time mpxlf_r
#MCC = mpcc_r
#MLD = mpxlf_r
#FCFLAGS = -O2 -qarch=auto -qcache=auto -qzerosize -qsmp=noauto -qnosave -qmaxmem=-1 \
#          -qspillsize=2000
#LDOPTIONS = -qsmp=noauto -bmaxdata:0x70000000
##LOCAL_LIBRARIES = -lmass_r
##LOCAL_LIBRARIES = -lessl
#LOCAL_LIBRARIES =
#MAKE = make -i -r
#AWK = awk
#SED = sed
#CAT = cat
#CUT = cut
#EXPAND = expand
#M4 = m4
#CPP = /lib/cpp -C -P
#CPPFLAGS = -DMPI -Drs6000 -DSYSTEM_CALL_OK
#CFLAGS = -DNOUNDERSCORE -DMPI
#ARCH_OBJS = milliclock.o #IWORDSIZE = 4 #RWORDSIZE = 4 #LWORDSIZE = 4 #----------------------------------------------------------------------------# 7b. T3E #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "t3e" #MPP_TARGET=$(RUNTIME_SYSTEM) #MFC = f90 #MCC = cc #MLD = $(MFC) ##FCFLAGS = -g #FCFLAGS = -O2 #LDOPTIONS = #LOCAL_LIBRARIES = #MAKE = make -i -r #AWK = awk #SED = sed #CAT = cat #CUT = cut #EXPAND = expand #M4 = m4 #CPP = /opt/ctl/bin/cpp -C -P #CPPFLAGS = -DMPI -DT3E #CFLAGS = -DNOUNDERSCORE -Dt3e -DT3E -DMPI #ARCH_OBJS = error_dupt3d.o t3etraps.o set_to_nan.o milliclock.o #IWORDSIZE = 8 #RWORDSIZE = 8 #LWORDSIZE = 8 #----------------------------------------------------------------------------# 7c. Origin 2000 # Note that the MPP version of MM5 is not supported for compilation under # the "modules" environment. To see if you are using modules to control # compiler versions on your machine, type "module list". # # It may be necessary to modify the MPI run time environment on the # Origin as follows: # # setenv MPI_MSGS_PER_PROC 4096 # # See also http://www.mmm.ucar.edu/mm5/mpp/helpdesk/20000621.txt # #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "o2k" #MPP_TARGET=$(RUNTIME_SYSTEM) #MFC = f90 -64 -mips4 -w #MCC = cc -64 -mips4 -w #MLD = f90 -64 -mips4 ##FCFLAGS = -g #FCFLAGS = -O3 -OPT:roundoff=3:IEEE_arithmetic=3 -OPT:fold_arith_limit=2001 #LDOPTIONS = #LOCAL_LIBRARIES = -lfastm -lmpi #MAKE = make -i -r #AWK = awk #SED = sed #CAT = cat #CUT = cut #EXPAND = expand #M4 = m4 #CPP = /lib/cpp -C -P #CPPFLAGS = -DMPI -DO2K -DDEC_ALPHA -DSYSTEM_CALL_OK #CFLAGS = -DO2K -DMPI -DDEC_ALPHA #ARCH_OBJS = milliclock.o #IWORDSIZE = 4 #RWORDSIZE = 4 #LWORDSIZE = 4 #-----------------------------------------------------------------------------
# 7d. HP Exemplar #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "hp" #MPP_TARGET=$(RUNTIME_SYSTEM) #MFC = f77 #MCC = mpicc #MLD = mpif77 ##FCFLAGS = +DA2.0N +DS2.0a -g #FCFLAGS = +DA2.0N +DS2.0a +O3 #LDOPTIONS = #LOCAL_LIBRARIES = #MAKE = make -i -r #AWK = awk #SED = sed #CAT = cat #CUT = cut #EXPAND = expand #M4 = m4 #CPP = /lib/cpp -C -P #CPPFLAGS = -DMPI -DSYSTEM_CALL_OK #CFLAGS = -DNOUNDERSCORE -DMPI #ARCH_OBJS = milliclock.o #IWORDSIZE = 4 #RWORDSIZE = 4 #LWORDSIZE = 4 #----------------------------------------------------------------------------# 7e. Compaq ALPHA/MPI/OpenMP (Thanks to Dave Sherden) # - For multi-threaded MPI processes (useful on dm-clusters of SMP # nodes; such as fir.mmm.ucar.edu), uncomment the definition # of the macro: SPECIAL_OMP. # - If running with MPICH (public domain MPI) uncomment # first set of definitions for MFC, MCC, MLD and LDOPTIONS. If using # the Compaq/DEC MPI, uncomment the second set. # - On prospect.ucar.edu (ES40), add the -lelan option to LDOPTIONS. #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "alpha" #MPP_TARGET=$(RUNTIME_SYSTEM) ###### If using OpenMP for SMP parallelism on each MPI process ### ##SPECIAL_OMP = -omp ###### If using MPICH ### #MFC = f77 #MCC = mpicc #MLD = mpif77 #LDOPTIONS = $(SPECIAL_OMP) ###### If using DEC MPI (e.g. on fir.mmm.ucar.edu) ### ###### Compaq ES40 Cluster (prospect.ucar.edu) requires -lelan for OpenMP ##MFC = f90 ##MCC = cc ##MLD = f90 ##LDOPTIONS = -lmpi -lelan $(SPECIAL_OMP) ##LDOPTIONS = -lmpi $(SPECIAL_OMP) ###### #FCFLAGS = -O4 -Olimit 2000 -fpe0 -align dcommons -align records \ # -convert big_endian $(SPECIAL_OMP) #LOCAL_LIBRARIES = #MAKE = make -i -r #AWK = awk #SED = sed #CAT = cat #CUT = cut #EXPAND = expand #M4 = m4 #CPP = cpp -C -P #CPPFLAGS = -DMPI -DDEC_ALPHA -DSYSTEM_CALL_OK #CFLAGS = -DMPI -DDEC_ALPHA #ARCH_OBJS = milliclock.o
#IWORDSIZE = 4 #RWORDSIZE = 4 #LWORDSIZE = 4 #----------------------------------------------------------------------------# 7e.1 ALPHA Linux with MPI (Thanks Greg Lindahl, HPTi) # (This has run on jet.fsl.noaa.gov) #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "alpha" #MPP_TARGET=$(RUNTIME_SYSTEM) ####### If using OpenMP for SMP parallelism on each MPI process ### ##SPECIAL_OMP = -omp ####### #MFC = fort #MCC = mpicc #MLD = mpif77 #UNDERSCORE = -DF2CSTYLE #LDOPTIONS = $(SPECIAL_OMP) -static #FCFLAGS = -O5 -arch ev6 -tune ev6 -align dcommons -align records \ # -convert big_endian $(SPECIAL_OMP) #LOCAL_LIBRARIES = #MAKE = make -i -r #AWK = awk #SED = sed #CAT = cat #CUT = cut #EXPAND = expand #M4 = m4 #CPP = /lib/cpp -traditional -C -P #CPPFLAGS = -DMPI -DDEC_ALPHA $(UNDERSCORE) -DSYSTEM_CALL_OK #CFLAGS = -DMPI -DDEC_ALPHA $(UNDERSCORE) #ARCH_OBJS = milliclock.o #IWORDSIZE = 4 #RWORDSIZE = 4 #LWORDSIZE = 4 #----------------------------------------------------------------------------# 7f. Fujitsu VPP # # These options have been updated for the newer VPP5000 system. If you # find that you have trouble compiling on your system, try removing the # -KA32 and -Ka4 option from FCFLAGS, LDOPTIONS, CFLAGS and from # MPP/RSL/RSL/makefile.vpp. Note that to successfully compile the RSL # library (MPP/RSL/RSL) you need the following two environment variables # set (syntax may vary with shells other than csh): # # Older systems: # # setenv MPIINCDIR /usr/lang/mpi/include # setenv MPILIBS '-Wl,-P -L/usr/lang/mpi/lib -lmpi -lmp' # # Newer systems: # # setenv MPIINCDIR /usr/lang/mpi2/include32 # setenv MPILIBS '-Wl,-P -L/usr/lang/mpi2/lib32 -lmpi -lmp' # # Note for older systems. The configure.user is set up for VPP5000. # For older (VPP300/700) systems, it may be necessary to remove the # -KA32 and -Ka4 flags in the settings below. # # Note with v3.4: VECTOR=1 works only with IMPHYS=5, IBLTYP=5, and ICUPA=3. 
# Other IMPHYS options and ICUPA options will work but won't be vector # optimized. IBLTYP=2 will not compile with VECTOR=1. # # Debugging VECTOR=1 option on non-vector platforms: see MPP/README_VECDEBUG # #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "vpp" #MPP_TARGET=$(RUNTIME_SYSTEM) #MFC = frt
#MCC = cc #MLD = frt ### debugging ### FCFLAGS = -Sw -g -Pdos -lmpi -lmp ### debugging; for debugging without MPI (also need to compile RSL with DSTUBS) ### FCFLAGS = -Sw -g -Pdos -Of,-P,-E #FCFLAGS = -Sw -Wv,-Of,-te,-ilfunc,-noalias,-m3,-P255 \ # -Oe,-P -Kfast -Pdos -lmpi -lmp -KA32 #FCVFLAGS = -Sw -Wv,-te,-noalias,-ilfunc,-Of,-m3,-P255 \ # -Of,-e,-P,-u -Kfast -Pdos -lmpi -lmp -KA32 #LDOPTIONS = -Wl,-P -L$(MPILIBS) -lmpi -J -lmp -KA32 #LOCAL_LIBRARIES = #MAKE = make -i -r #AWK = awk #SED = sed #CAT = cat #CUT = cut #EXPAND = $(CAT) #M4 = m4 #CPP = /lib/cpp -C -P ### Uncomment only for debugging without MPI ### CPPFLAGS = -DMPI -Dvpp -I$(MPIINCDIR) -DKMA -DSTUBS -DSYSTEM_CALL_OK ### CFLAGS = -DMPI -Dvpp -I$(MPIINCDIR) -KA32 -Ka4 -DSTUBS ### Normal settings for CPPFLAGS and CFLAGS #CPPFLAGS = -DMPI -Dvpp -I$(MPIINCDIR) -DKMA -DSYSTEM_CALL_OK #CFLAGS = -DMPI -Dvpp -I$(MPIINCDIR) -KA32 -Ka4 #ARCH_OBJS = milliclock.o #IWORDSIZE = 4 #RWORDSIZE = 4 #LWORDSIZE = 4 #FLIC_MACROS = LMvpp.m4 #VECTOR = 1 #----------------------------------------------------------------------------# 7g1. Linux PCs. Need Portland Group pgf77 and MPICH. # # The following information has been added to this file with MM5v3.2: # # This expects mpif77 and mpicc to be installed on your system in # $(LINUX_MPIHOME)/bin . These should be configured to use the Portland Group # pgf77 (v3 or higher) and gcc, respectively. For information on how to # download, install, and configure mpich on your system, see: # # http://www.mcs.anl.gov/mpi/mpich # # Information on Portland Group compiler: # # http://www.pgroup.com # # If using a different Fortran compiler, modify FCFLAGS and LDOPTIONS as # needed. The compiler should be capable of doing little- to big-endian # conversion and it should understand integer (Cray-style) pointers. It # is recommended that the same fortran compiler be used to compile # mpich. Edit the LINUX_MPIHOME macro, below, to point to the top level mpich # directory. 
See also: # # http://www.mmm.ucar.edu/mm5/mpp/linuxhelp.html (by Steve Webb, NCAR/RAP) # # Note for pgf77 on RedHat Linux6: patches available from Portland Group at: # # http://www.pgroup.com/downloads/rh6patches.html # #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "linux" #MPP_TARGET=$(RUNTIME_SYSTEM) ## edit the following definition for your system
#LINUX_MPIHOME = /usr/local/mpich #MFC = $(LINUX_MPIHOME)/bin/mpif77 #MCC = $(LINUX_MPIHOME)/bin/mpicc #MLD = $(LINUX_MPIHOME)/bin/mpif77 #FCFLAGS = -O2 -Mcray=pointer -tp p6 -pc 32 -Mnoframe -byteswapio #LDOPTIONS = -O2 -Mcray=pointer -tp p6 -pc 32 -Mnoframe -byteswapio #LOCAL_LIBRARIES = -L$(LINUX_MPIHOME)/build/LINUX/ch_p4/lib -lfmpich -lmpich #MAKE = make -i -r #AWK = awk #SED = sed #CAT = cat #CUT = cut #EXPAND = expand #M4 = m4 #CPP = /lib/cpp -C -P -traditional #CPPFLAGS = -DMPI -Dlinux -DSYSTEM_CALL_OK #CFLAGS = -DMPI -I$(LINUX_MPIHOME)/include #ARCH_OBJS = milliclock.o #IWORDSIZE = 4 #RWORDSIZE = 4 #LWORDSIZE = 4 #----------------------------------------------------------------------------# 7g2. Linux PCs. Need INTEL and MPICH. #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "linux" #MPP_TARGET=$(RUNTIME_SYSTEM) ### edit the following definition for your system #LINUX_MPIHOME = /usr/local/mpich-intel #MFC = $(LINUX_MPIHOME)/bin/mpif77 #MCC = $(LINUX_MPIHOME)/bin/mpicc #MLD = $(LINUX_MPIHOME)/bin/mpif77 #FCFLAGS = -O2 -convert big_endian -pc32 #LDOPTIONS = -O2 -convert big_endian -pc32 #LOCAL_LIBRARIES = -L$(LINUX_MPIHOME)/build/LINUX/ch_p4/lib -lfmpich -lmpich #MAKE = make -i -r #AWK = awk #SED = sed #CAT = cat #CUT = cut #EXPAND = /usr/bin/expand #M4 = m4 #CPP = /lib/cpp -C -P #CPPFLAGS = -traditional -DMPI -Dlinux #CFLAGS = -DMPI -I/usr/local/mpi/include #ARCH_OBJS = milliclock.o #IWORDSIZE = 4 #RWORDSIZE = 4 #LWORDSIZE = 4 #----------------------------------------------------------------------------# 7h. 
NEC SX-4 (under development) #----------------------------------------------------------------------------#RUNTIME_SYSTEM = sx #MPP_TARGET=$(RUNTIME_SYSTEM) #MFC = f90 #MCC = cc #MLD = $(MFC) #FCFLAGS = -V -E P -Wf"-init stack=zero heap=zero -O nooverlap" -USX -float0 \ # -D$(RUNTIME_SYSTEM) -I$(LIBINCLUDE) -Wf"-L transform fmtlist summary" -g #FCFLAGS = -V -E P -C vopt -Wf"-init stack=zero heap=zero -O nooverlap" \ # -ew -USX -float0 -D$(RUNTIME_SYSTEM) -I$(LIBINCLUDE) \ # -Wf"-L transform fmtlist summary" #LDOPTIONS = -float0 -lmpi -lmpiw -g #CFLAGS = #LOCAL_LIBRARIES = #MAKE = make -i -r #AWK = awk #SED = sed
#CAT = cat #CUT = cut #EXPAND = expand #M4 = m4 #CPP = /lib/cpp -C -P #CPPFLAGS = -DMPI -Dvpp -I$(LIBINCLUDE) -C -P -DDEC_ALPHA -DSYSTEM_CALL_OK #CFLAGS = -DMPI -Dvpp -DDEC_ALPHA #ARCH_OBJS = milliclock.o #IWORDSIZE = 4 #RWORDSIZE = 4 #LWORDSIZE = 4 #ASSUME_HOMOGENEOUS_ENVIRONMENT = 1 #FLIC_MACROS = LMvpp.m4 #VECTOR = 1 #----------------------------------------------------------------------------# 7i. Sun MPI (tested on k2.ucar.edu) #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "sunmpi" #MPP_TARGET=$(RUNTIME_SYSTEM) ###### If using OpenMP for SMP parallelism on each MPI process ### ##SPECIAL_OMP = ?? #MFC = mpf90 #MCC = mpcc #MLD = mpf90 #LDOPTIONS = -fast -O2 -lmpi ####### #FCFLAGS = -fast -O2 $(SPECIAL_OMP) #LOCAL_LIBRARIES = #MAKE = make -i -r #AWK = awk #SED = sed #CAT = cat #CUT = cut #EXPAND = expand #M4 = m4 #CPP = cpp -C -P #CPPFLAGS = -DMPI -DSYSTEM_CALL_OK #CFLAGS = -DMPI #ARCH_OBJS = milliclock.o #IWORDSIZE = 4 #RWORDSIZE = 4 #LWORDSIZE = 4 #----------------------------------------------------------------------------# 7j. Cray X1 #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "crayx1" #MPP_TARGET=$(RUNTIME_SYSTEM) #MFC = ftn #MCC = cc #MLD = $(MFC) ## Use these for X1 cross compiler #X1_CROSS_COMP = "gcc" #X1_CROSS_CFLG = "" ## Use these for X1 native (trigger) compiler ##X1_CROSS_COMP = "cc" ##X1_CROSS_CFLG = "-hcommand" # #FCFLAGS = -x omp,mic -O3 -Ofp3 -Ogen_private_callee -V -ra -sword_pointer -D$(RUNTIME_SYSTEM) ##FCFLAGS = -x omp,mic -Oscalar2,stream3,vector3 -Ofp3 -Ogen_private_callee -V -ra -sword_pointer -D$(RUNTIME_SYSTEM) # #LDOPTIONS = #LOCAL_LIBRARIES = -lmalloc #MAKE = make -i -r #AWK = awk
#SED = sed #CAT = cat #CUT = cut #EXPAND = expand #M4 = m4 #CPP = cpp -C -P #CPPFLAGS = -DMPI -D$(RUNTIME_SYSTEM) -DKMA #CFLAGS = -V -O3 -h display_opt -h report=imsvf -DMPI -D$(RUNTIME_SYSTEM) #ARCH_OBJS = error_dupt3d.o set_to_nan.o milliclock.o #IWORDSIZE = 4 #RWORDSIZE = 4 #LWORDSIZE = 4 #----------------------------------------------------------------------------# 7k. Cray XD1, Linux Opteron. Need Portland Group pgf90. # # The following information has been added to this file with MM5v3.6.3: # # Information on Portland Group compiler: # # http://www.pgroup.com # # If using a different Fortran compiler, modify FCFLAGS and LDOPTIONS as # needed. The compiler should be capable of doing little- to big-endian # conversion and it should understand integer (Cray-style) pointers. It # is recommended that the same fortran compiler be used to compile # mpich. Edit the LINUX_MPIHOME macro, below, to point to the top level mpich # directory. See also: # # http://www.mmm.ucar.edu/mm5/mpp/linuxhelp.html (by Steve Webb, NCAR/RAP) # # Note for pgf77 on RedHat Linux6: patches available from Portland Group at: # # http://www.pgroup.com/downloads/rh6patches.html # #----------------------------------------------------------------------------#RUNTIME_SYSTEM = "linux" #MPP_TARGET=$(RUNTIME_SYSTEM) # edit the following definition for your system #LINUX_MPIHOME = /usr/mpich/mpich-1.2.5 ### mpif77, mpicc are not yet installed on XD1 #MFC = $(LINUX_MPIHOME)/bin/mpif77 #MCC = $(LINUX_MPIHOME)/bin/mpicc #MLD = $(LINUX_MPIHOME)/bin/mpif77 #MFC = pgf90 #MCC = pgcc #MLD = pgf90 #FCFLAGS = -DDEC_ALPHA -O3 -fastsse -Mnoreentrant -Mcray=pointer -Mnoframe -byteswapio #LDOPTIONS = -DDEC_ALPHA -O3 -Mcray=pointer -Mnoframe -byteswapio # ### need to point to header and libs for mpich explicitly for XD1 #OBJS_PATH = /opt/benchmark/shome/CONTRIB #LOCAL_OBJS = $(OBJS_PATH)/if.o $(OBJS_PATH)/strdup.o $(OBJS_PATH)/farg.o #LIB_PATH = -L $(PGI)/linux86-64/5.1/lib -L $(LINUX_MPIHOME)/lib -L /lib64
#LOCAL_LIBRARIES = $(LIB_PATH) -lgcc -lmpich -lfmpich -lrapl -lmpichfsup -lpthread $(LOCAL_OBJS) # #MAKE = make -i -r #AWK = awk #SED = sed #CAT = cat #CUT = cut #EXPAND = expand #M4 = m4 #CPP = /lib/cpp -C -P -traditional #CPPFLAGS = -DDEC_ALPHA -DMPI -Dlinux -DSYSTEM_CALL_OK #CFLAGS = -O3 -DDEC_ALPHA -DMPI -I$(LINUX_MPIHOME)/include #ARCH_OBJS = milliclock.o
#IWORDSIZE = 4 #RWORDSIZE = 4 #LWORDSIZE = 4 #----------------------------------------------------------------------------# Don't touch anything below this line #----------------------------------------------------------------------------.F.i: $(RM) $@ $(CPP) $(CPPFLAGS) $*.F > $@ mv $*.i $(DEVTOP)/pick/$*.f cp $*.F $(DEVTOP)/pick .c.o: $(RM) $@ && \ $(CC) -c $(CFLAGS) $*.c .F.o: $(RM) $@ $(FC) -c $(FCFLAGS) $*.F .F.f: $(RM) $@ $(CPP) $(CPPFLAGS) $*.F > $@ .f.o: $(RM) $@ $(FC) -c $(FCFLAGS) $*.f
8.18 mm5.deck This is a Bourne shell script. Slight variations may exist on different machines. (This file is included here for reference only. Use the most up-to-date one from MM5.TAR file.) #!/bin/sh # # Version 3 of mm5 job deck # # The mm5 executable (mm5.exe) expects to find the following files # in the Run/ directory: # MMINPUT_DOMAIN1 -| # BDYOUT_DOMAIN1 | --> output files from Interpf # LOWBDY_DOMAIN1 -| # TERRAIN_DOMAIN[2,3..] if running nests --> output from Terrain # # If it is a restart run: # RESTART_DOMAIN1[,2,3..] --> output from MM5 run: renamed from # SAVE_DOMAIN1[,2,3...] # # If it is gridded FDDA run with surface analysis nudging: # SFCFDDA_DOMAIN1[2,3,...] # # If it is observational nudging run: # MM5OBS_DOMAIN1[,2,3..] --> user-created observation files # # Output from a MM5 run: # If IFTAPE = 1
# MMOUT_DOMAIN1[,2,3...] --> one output for each domain # If IFSAVE = TRUE # SAVE_DOMAIN1[,2,3...] # # # temp files should be accessible umask 022 # # Select appropriate FDDAsw if doing gridded analysis FDDA # #FDDAsw=yes # gridded FDDA input switch FDDAsw=no # # Sections # 1. Options for namelist (“mmlif”) # 2. Running... # #----------------------------------------------------------------------------# 1. Options for namelist (“mmlif”) #----------------------------------------------------------------------------# # The first dimension (column) of the arrays denotes the domain # identifier. # Col 1 = Domain #1, Col 2 = Dom #2, etc. # cat > ./Run/oparam << EOF &OPARAM ; ; ************* FORECAST TIME AND TIME STEP ****************** ; TIMAX = 720., ; forecast length in minutes TISTEP = 240., ; coarse domain DT in model, use 3*DX ; ; ************** OUTPUT/RESTART OPTIONS *************** ; IFREST = .FALSE., ; whether this is a restart IXTIMR = 720, ; restart time in minutes IFSAVE = .TRUE., ; save data for restart SVLAST = .TRUE., ; T: only save the last file for restart ; F: save multiple files SAVFRQ = 360., ; how frequently to save data (in minutes) IFTAPE = 1, ; model output: 0,1 TAPFRQ = 180., ; how frequently to output model results (in minutes) BUFFRQ = 0., ; how frequently to split model output files (in minutes), ; ignored if < TAPFRQ INCTAP = 1,1,1,1,1,1,1,1,1,1, ; multipliers of TAPFRQ for outputting IFRSFA = .FALSE., ; IF this is a RESTART run, AND IF FDDA is ON, AND ; IF multiple input FILES are used, set this to .TRUE. ; set CDATEST to the INITIAL time of the first run IFSKIP = .FALSE., ; whether to skip input files - DO NOT use this for ; restart also need to set CDATEST if set to .TRUE. 
CDATEST = '1993-03-13_00:00:00', ; IF IFSKIP=.TRUE., this will be the date from which the code should start ; IF IFRSFA=.TRUE., this will be the INITIAL date from the first model run IFPRT = 0, ; sample print out: =1, a lot of print PRTFRQ = 720., ; Print frequency for sample output (in minutes)
 MASCHK = 99999,     ; mass conservation check (KTAU or no. of time steps)
 IFTSOUT = .FALSE.,  ; output time series (default 30 points)
 TSLAT = 0.0,0.0,0.0,0.0,0.0,  ; latitudes of time series points (S is negative)
 TSLON = 0.0,0.0,0.0,0.0,0.0,  ; longitudes of time series points (W is negative)
 &END
EOF
cat > ./Run/lparam << EOF
 &LPARAM
 ;
 ; 1. user-chosen options I
 ;
 RADFRQ = 30.,   ;atmospheric radiation calculation frequency (in minutes)
 IMVDIF = 1,     ;moist vertical diffusion in clouds - 0, 1 (IBLTYP=2,5 only)
 IVQADV = 1,     ;vertical moisture advection uses log interpolation - 0, linear - 1
 IVTADV = 1,     ;vertical temperature advection uses theta interpolation-0,linear-1
 ITPDIF = 1,     ;sigma-diffusion using temperature - 0, sigma-diffusion using
                 ;perturbation temperature - 1, z-diffusion - 2
 TDKORR = 2,     ;temperature gradient correction for z-diffusion at ground level
                 ;uses -1- ground temp, -2- one-sided difference of air temp
 ICOR3D = 1,     ;3D Coriolis force - 0, 1
 IEXSI  = 0,     ;initial sea-ice - 0, 1(base on SST), 2(read in) (ISOIL=1 only)
 IFUPR  = 1,     ;upper radiative boundary condition - 0, 1
 LEVSLP = 9,     ;nest level (correspond to LEVIDN) at which solar radiation
                 ;start to account for orography
                 ;set large to switch off
                 ;only have an effect for very high resolution model domains
 OROSHAW = 0,    ;include effect of orography shadowing
                 ;ONLY has an effect if LEVSLP is also set
                 ;0=no effect (default),
                 ;1=orography shadowing taken into account
                 ;NOT AVAILABLE FOR MPI RUNS
 ITADVM = 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                 ; 0: default - instability limiter not used
                 ; 1: use instability limiter for temp advection
 IQADVM = 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                 ; 0: default - instability limiter not used
                 ; 1: use instability limiter for QV/CLW advection
 ;
 ; 2. do not change IBOUDY
 ;
 IBOUDY = 3, 2, 2, 2, 2, 2, 2, 2, 2, 2,
                 ;boundary conditions
                 ; (fixed, time-dependent, relaxation -0,2,3)
 ;
 ; 3. keep the following 8 variables as they are
 ;    unless doing sensitivity runs
 ;
 IFDRY = 0,      ;fake-dry run (no latent heating) - 0, 1
                 ;for IMPHYS = 2,3,4,5,6,7 (requires ICUPA = 1)
ISSTVAR= 0, ;varying SST in time - 1, otherwise, 0 IMOIAV = 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ;bucket soil moisture scheme. 0 - not used, ;1 - used w/o extra input, 2 - used w/ soil moisture input
IFSNOW = 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ;SNOW COVER EFFECTS - 0, 1
; (only if snow data are generated in REGRID), 2 (simple snow model ; only if WEASD is provided by REGRID) ISFMTHD= 1, ;method for calculation of 2m/10m diagnostics ;0 - old method, 1 - new method for stable conditions IZ0TOPT = 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ;Thermal roughness length option ; - 0 (default), 1 (Garratt), 2 (Zilitinkevich) ISFFLX = 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ;surface fluxes - 0, 1 ITGFLG = 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ;surface temperature prediction ; 1:yes, 3:no ISFPAR = 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ;surface characteristics - 0, 1 ICLOUD = 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ;cloud effects on radiation - 0, 1 ; currently for IFRAD = 1,2 IEVAP = 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ;evap of cloud/rainwater - <0, 0, >0 ; (currently for IMPHYS=3,4,5 only) ISMRD = 0, ;soil moisture initialization by PX LSM: ; =0, use moisture avail from LANDUSE.TBL; ; =2, use soil moisture from REGRID; ; ; Default soil layers expected as input for ISOIL 2 & 3 ; These values reflect the BOTTOM of the soil layer available ISTLYR = 10,40,100,200, ISMLYR = 10,40,100,200, ;ISTLYR = 10,200,0,0, ;ISMLYR = 10,200,0,0, ; Other common layers used by EC models (for instance ERA40) ;ISTLYR = 7,28,100,255, ;ISMLYR = 7,28,100,255, ; ; Next two switches for new version of NOAH LSM (ISOIL=2) RDMAXALB = .FALSE., ; T: use climatological max snow albedo RDBRDALB = .FALSE., ; T: use climatological monthly albedo ; (not landuse table) EOF cat > ./Run/nparam << EOF &NPARAM ; ; ************** NEST AND MOVING NEST OPTIONS *************** ; LEVIDN = 0,1,2,1,1,1,1,1,1,1, ; level of nest for each domain NUMNC = 1,1,2,1,1,1,1,1,1,1, ; ID of mother domain for each nest NESTIX = 35, 49, 31, 46, 46, 46, 46, 46, 46, 46, ; domain size i NESTJX = 41, 52, 31, 61, 61, 61, 61, 61, 61, 61, ; domain size j NESTI = 1, 10, 8, 1, 1, 1, 1, 1, 1, 1, ; start location i NESTJ = 1, 17, 9, 1, 1, 1, 1, 1, 1, 1, ; start location i XSTNES = 0., 0.,900., 0., 0., 0., 0., 0., 0., 0., ; domain initiation XENNES 
=1440.,1440.,1440.,720.,720.,720.,720.,720.,720.,720.;domain termination IOVERW = 1, 2, 0, 0, 0, 0, 0, 0, 0, 0, ; overwrite nest input ; 0=interpolate from coarse mesh (for nest domains); ; 1=read in domain initial conditions ; 2=read in nest terrain file IACTIV = 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, ; ; in case of restart: is this domain active? ; ; ************* MOVING NEST OPTIONS ****************** ; IMOVE = 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ; move domain 0,1 IMOVCO = 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ; 1st move #
 ; imovei(j,k)=L, ; I-INCREMENT MOVE (DOMAIN J, MOVE NUMBER K) IS L
 IMOVEI = 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  ; I move #1
          0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  ; I move #2
          0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  ; I move #3
          0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  ; I move #4
          0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  ; I move #5
 IMOVEJ = 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  ; J move #1
          0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  ; J move #2
          0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  ; J move #3
          0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  ; J move #4
          0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  ; J move #5
 IMOVET = 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  ; time of move #1
          0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  ; time of move #2
          0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  ; time of move #3
          0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  ; time of move #4
          0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  ; time of move #5
 IFEED  = 3,  ; no feedback / 9-pt weighted average / 1-pt feedback w/o smoothing /
              ; light smoothing / heavy smoothing - 0,1,2,3, and 4
&END EOF cat > ./Run/pparam << EOF &PPARAM ; ; ************* MISCELLANEOUS OPTIONS ***************** ; ; The values for the following 5 variables are only used if ISFPAR = 0 ; (i.e. only land/water surface categories) ; ZZLND = 0.1, ; roughness length over land in meters ZZWTR = 0.0001, ; roughness length over water in meters ALBLND = 0.15, ; albedo THINLD = 0.04, ; surface thermal inertia XMAVA = 0.3, ; moisture availability over land as a decimal fraction of one ; CONF = 1.0, ; non-convective precipitation saturation threshold (=1: 100%) &END EOF cat > ./Run/fparam << EOF &FPARAM ; ; ************* 4DDA OPTIONS ********************** ; ; THE FIRST DIMENSION (COLUMN) IS THE DOMAIN IDENTIFIER: ; COLUMN 1 = DOMAIN #1, COLUMN 2 = DOMAIN #2, ETC. ; ; START TIME FOR FDDA (ANALYSIS OR OBS) FOR EACH DOMAIN ; (IN MINUTES RELATIVE TO MODEL INITIAL TIME) FDASTA=0.,0.,0.,0.,0.,0.,0.,0.,0.,0. ; ENDING TIME FOR FDDA (ANALYSIS OR OBS) FOR EACH DOMAIN ; (IN MINUTES RELATIVE TO MODEL INITIAL TIME) FDAEND=780.,0.,0.,0.,0.,0.,0.,0.,0.,0., ; ; **************** ANALYSIS NUDGING ****************** ; ; THE FIRST DIMENSION (COLUMN) OF THE ARRAYS DENOTES THE
; DOMAIN IDENTIFIER: ; COLUMN 1 = DOMAIN #1, COLUMN 2 = DOMAIN #2, ETC. ; THE SECOND DIMENSION (ROW OR LINE) EITHER REFERS TO THE 3D VS ; SFC ANALYSIS OR WHICH VARIABLE IS ACCESSED: ; LINE 1 = 3D, LINE 2 = SFC OR ; LINE 1 = U, LINE 2 = V, LINE 3 = T, LINE 4 = Q ; ; IS THIS A GRID 4DDA RUN? 0 = NO; 1 = YES I4D= 0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0, ; ; SPECIFY THE TIME IN MINUTES BETWEEN THE INPUT (USUALLY ; FROM INTERP) USED FOR GRID FDDA DIFTIM=720.,720.,0.,0.,0.,0.,0.,0.,0.,0., ; 3D ANALYSIS NUDGING 180.,180.,0.,0.,0.,0.,0.,0.,0.,0., ; SFC ANALYSIS NUDGING ; ; GRID NUDGE THE WIND FIELD? 0 = NO; 1 = YES IWIND=1,1,0,0,0,0,0,0,0,0, ; 3D ANALYSIS NUDGING 1,1,0,0,0,0,0,0,0,0, ; SFC ANALYSIS NUDGING ; ; NUDGING COEFFICIENT FOR WINDS ANALYSES GV=2.5E-4,1.0E-4,0.,0.,0.,0.,0.,0.,0.,0., ; 3D ANALYSIS NUDGING 2.5E-4,1.0E-4,0.,0.,0.,0.,0.,0.,0.,0., ; SFC ANALYSIS NUDGING ; ; GRID NUDGE THE TEMPERATURE FIELD? 0 = NO; 1 = YES ITEMP=1,1,0,0,0,0,0,0,0,0, ; 3D ANALYSIS NUDGING 1,1,0,0,0,0,0,0,0,0, ; SFC ANALYSIS NUDGING ; ; NUDGING COEFFICIENT FOR TEMPERATURE ANALYSES GT=2.5E-4,1.0E-4,0.,0.,0.,0.,0.,0.,0.,0., ; 3D ANALYSIS NUDGING 2.5E-4,1.0E-4,0.,0.,0.,0.,0.,0.,0.,0., ; SFC ANALYSIS NUDGING ; IMOIS=1,1,0,0,0,0,0,0,0,0, ; 3D ANALYSIS NUDGING 1,1,0,0,0,0,0,0,0,0, ; SFC ANALYSIS NUDGING ; ; NUDGING COEFFICIENT FOR THE MIXING RATIO ANALYSES GQ=1.E-5,1.E-5,0.,0.,0.,0.,0.,0.,0.,0., ; 3D ANALYSIS NUDGING 1.E-5,1.E-5,0.,0.,0.,0.,0.,0.,0.,0., ; SFC ANALYSIS NUDGING ; ; GRID NUDGE THE ROTATIONAL WIND FIELD? 
0 = NO; 1 = YES IROT=0,0,0,0,0,0,0,0,0,0, ; 3D ANALYSIS NUDGING ; ; NUDGING COEFFICIENT FOR THE ROTATIONAL COMPONENT OF THE WINDS GR=5.E6,5.E6,0.,0.,0.,0.,0.,0.,0.,0., ; 3D ANALYSIS NUDGING ; ; IF GRID NUDGING (I4D(1,1)=1) AND YOU WISH TO EXCLUDE THE ; BOUNDARY LAYER FROM FDDA OF COARSE GRID THREE DIMENSIONAL ; DATA (USUALLY FROM INTERP), ; 0 = NO, INCLUDE BOUNDARY LAYER NUDGING ; 1 = YES, EXCLUDE BOUNDARY LAYER NUDGING INONBL =0,0,0,0,0,0,0,0,0,0, ; U WIND 0,0,0,0,0,0,0,0,0,0, ; V WIND 1,1,1,1,1,1,1,1,1,1, ; TEMPERATURE 1,1,1,1,1,1,1,1,1,1, ; MIXING RATIO ; ; RADIUS OF INFLUENCE FOR SURFACE ANALYSIS (KM). ; IF I4D(2,1)=1 OR I4D(2,2)=1, ETC, DEFINE RINBLW (KM) USED
; IN SUBROUTINE BLW TO DETERMINE THE HORIZONTAL VARIABILITY ; OF THE SURFACE-ANALYSIS NUDGING AS A FUNCTION OF SURFACE ; DATA DENSITY. OVER LAND, THE STRENGTH OF THE SURFACE; ANALYSIS NUDGING IS LINEARLY DECREASED BY 80 PERCENT AT ; THOSE GRID POINTS GREATER THAN RINBLW FROM AN OBSERVATION ; TO ACCOUNT FOR DECREASED CONFIDENCE IN THE ANALYSIS ; IN REGIONS NOT NEAR ANY OBSERVATIONS. RINBLW=250., ; ; SET THE NUDGING PRINT FREQUENCY FOR SELECTED DIAGNOSTIC ; PRINTS IN THE GRID (ANALYSIS) NUDGING CODE (IN CGM ; TIMESTEPS) NPFG=50, ; ; **************** OBSERVATION NUDGING *************** ; ; ; INDIVIDUAL OBSERVATION NUDGING. VARIABLES THAT ARE ARRAYS ; USE THE FIRST DIMENSION (COLUMN) AS THE DOMAIN IDENTIFIER: ; COLUMN 1 = DOMAIN #1, COLUMN 2 = DOMAIN #2, ETC. ; ; IS THIS INDIVIDUAL OBSERVATION NUDGING? 0 = NO; 1 = YES I4DI =0,0,0,0,0,0,0,0,0,0, ; ; OBS NUDGE THE WIND FIELD FROM STATION DATA? 0 = NO; 1 = YES ISWIND =1,0,0,0,0,0,0,0,0,0, ; ; NUDGING COEFFICIENT FOR WINDS FROM STATION DATA GIV =4.E-4,4.E-4,0.,0.,0.,0.,0.,0.,0.,0., ; ; OBS NUDGE THE TEMPERATURE FIELD FROM STATION DATA? 0 = NO; 1 = YES ISTEMP=1,0,0,0,0,0,0,0,0,0, ; ; NUDGING COEFFICIENT FOR TEMPERATURES FROM STATION DATA GIT =4.E-4,4.E-4,0.,0.,0.,0.,0.,0.,0.,0., ; ; OBS NUDGE THE MIXING RATIO FIELD FROM STATION DATA? 0 = NO; 1 = YES ISMOIS=1,0,0,0,0,0,0,0,0,0, ; ; NUDGING COEFFICIENT FOR THE MIXING RATIO FROM STATION DATA GIQ =4.E-4,4.E-4,0.,0.,0.,0.,0.,0.,0.,0., ; ; THE OBS NUDGING RADIUS OF INFLUENCE IN THE ; HORIZONTAL IN KM FOR CRESSMAN-TYPE DISTANCE-WEIGHTED ; FUNCTIONS WHICH SPREAD THE OBS-NUDGING CORRECTION ; IN THE HORIZONTAL. RINXY=240., ; ; THE OBS NUDGING RADIUS OF INFLUENCE IN THE ; VERTICAL IN SIGMA UNITS FOR CRESSMAN-TYPE DISTANCE; WEIGHTED FUNCTIONS WHICH SPREAD THE OBS-NUDGING ; CORRECTION IN THE VERTICAL. RINSIG=0.001, ; ; THE HALF-PERIOD OF THE TIME WINDOW, IN MINUTES, OVER
; WHICH AN OBSERVATION WILL AFFECT THE FORECAST VIA OBS
; NUDGING. THAT IS, THE OBS WILL INFLUENCE THE FORECAST
; FROM TIMEOBS-TWINDO TO TIMEOBS+TWINDO. THE TEMPORAL
; WEIGHTING FUNCTION IS DEFINED SUCH THAT THE OBSERVATION
; IS APPLIED WITH FULL STRENGTH WITHIN TWINDO/2. MINUTES
; BEFORE OR AFTER THE OBSERVATION TIME, AND THEN LINEARLY
; DECREASES TO ZERO TWINDO MINUTES BEFORE OR AFTER THE
; OBSERVATION TIME.
TWINDO=40.0,
;
; THE NUDGING PRINT FREQUENCY FOR SELECTED DIAGNOSTIC PRINT
; IN THE OBS NUDGING CODE (IN CGM TIMESTEPS)
NPFI=20,
;
; FREQUENCY (IN CGM TIMESTEPS) TO COMPUTE OBS NUDGING WEIGHTS
IONF=2,
IDYNIN=0,   ;for dynamic initialization using a ramp-down function to gradually
            ; turn off the FDDA before the pure forecast, set idynin=1 [y=1, n=0]
DTRAMP=60., ;the time period in minutes over which the
            ; nudging (obs nudging and analysis nudging) is ramped down
            ; from one to zero. Set dtramp negative if FDDA is to be ramped
            ; down BEFORE the end-of-data time (DATEND), and positive if the
            ; FDDA ramp-down period extends beyond the end-of-data time.
&END
EOF
#
#-----------------------------------------------------------------
#
# create namelist: mmlif, and remove comments from namelist:
#
make mmlif
cd ./Run
sed -f ../Util/no_comment.sed mmlif | grep [A-Z,a-z] > mmlif.tmp
mv mmlif.tmp mmlif
rm fparam lparam nparam oparam pparam
#
# copy gridded FDDA files
#
if [ $FDDAsw = yes ]; then
  echo "Copy grid fdda file"
  for i in MMINPUT_DOMAIN[1-9]
  do
    Num=`echo $i | grep [1-9]$ | sed 's/.*\(.\)/\1/'`
    cp $i MMINPUT2_DOMAIN$Num
    echo "cp $i MMINPUT2_DOMAIN$Num"
  done
fi
#
#-----------------------------------------------------------------
#
# run MM5
#
date
echo "timex mm5.exe >! mm5.print.out"
timex ./mm5.exe > mm5.print.out 2>&1
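The namelist clean-up near the end of the deck can be sketched in isolation. The inline sed expression below is only a stand-in for Util/no_comment.sed (whose contents are not listed in this guide), and the letter-matching grep plays the role of the deck's grep [A-Z,a-z]:

```shell
# Make a tiny commented namelist, then strip ';' comments and drop
# any lines left without letters, as the deck does for mmlif.
cat > mmlif << 'EOF'
 &OPARAM
 TIMAX  = 720.,   ; forecast length in minutes
 TISTEP = 240.,   ; coarse domain DT in model, use 3*DX
 &END
EOF

sed 's/;.*$//' mmlif | grep '[A-Za-z]' > mmlif.tmp
mv mmlif.tmp mmlif
cat mmlif   # the comments are gone; the four namelist lines remain
```

Stripping the comments matters because the Fortran namelist reader in the model does not understand the `;` annotation convention used in the deck.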
List of MM5 Fortran Files
Run:
  mm5.F

boundary:
  bdyin.F bdyrst.F bdyten.F nudge.F bdyval.F lbdyin.F

memory/address:
  addall.F addrx1c.F addrx1n.F

domain:
  drivers: nstlev1.F nstlev2.F nstlev3.F nstlev4.F nstlev5.F
  initial: init.F param.F paramr.F initts.F
  io:      conadv.F conmas.F initsav.F mapsmp.F outprt.F output.F outsav.F
           outtap.F rdinit.F savread.F shutdo.F tmass.F vtran.F rdter.F
           write_big_header.F write_fieldrec.F write_flag.F dm_io.F outts.F
  nest:    bdyovl1.F chknst.F exaint.F exchani.F exchanj.F feedbk.F filslb.F
           initnest.F ovlchk.F sint.F sintx.F sinty.F stotndi.F stotndt.F
           subch.F nestlsm.F
  util:    couple.F date.F dots.F dcpl3d.F dcpl3dwnd.F decouple.F equate.F
           fill.F fillcrs.F skipf.F smt2.F smther.F xtdot.F

fdda:
  grid: blbrgd.F blnudgd.F blw.F bufslgd.F bufvdgd.F conv3.F in4dgd.F
        intpsgd.F nopro.F nudgd.F setupgd.F qsatgd.F
  obs:  errob.F in4dob.F nudob.F
  util: fdaoff.F unity.F setfd.F

dynamics:
  advection/simple: hadv.F vadv.F vad2.F
  nonhydro:         solve.F sound.F
  hzdiffu/simple:   coef_diffu.F diffu.F diffint.F diffmoi.F diffth.F diffthd.F

physics:
  cumulus/as:      aramb.F araout.F arasch.F clodwd.F cloudw.F cupara4.F
                   entr.F kerhel.F soundd.F zx4lp.F
  cumulus/bm:      bmpara.F cupara7.F lutbl.F spline.F tpfc.F
  cumulus/fc:      cupara5.F fcpara.F tp.F
  cumulus/grell:   cup.F cupara3.F maximi.F minimi.F
  cumulus/kf:      cupara6.F dtfrz.F dtfrznew.F kfpara.F tpdd.F tpmix.F
  cumulus/kuo:     cupara2.F
  cumulus/shallow: araouts.F cloudws.F entrs.F kerhels.F shallcu.F shallow.F
  cumulus/kf2:     cupara8.F kfdrive.F kfpara2.F dtfrz2.F lutab.F tp_cape.F
                   tpmix2.F tpmix2dd.F
  cumulus/shared:  kfbmdata.F heipre.F dtfrz.F maxim.F minim.F moiene.F
                   precip.F zunc.F condload.F envirtht.F prof5.F
  explicit/nonconv:  nconvp.F
  explicit/simple:   exmoiss.F lexmoiss.F
  explicit/reisner1: exmoisr.F lexmoisr.F zexmoisr.F
  explicit/reisner2: exmoisg.F
  explicit/gsfc:     falflux.F godmic.F satice.F
  explicit/schultz:  schultz.F schultz_mic.F
  explicit/shared:   consat.F gamma.F settbl.F
  pbl_sfc/dry:     cadjmx.F convad.F gauss.F
  pbl_sfc/bulk:    blkpbl.F
  pbl_sfc/hirpbl:  hirpbl.F
  pbl_sfc/myepbl:  difcof.F mixleg.F myepbl.F prodq2.F sfcdif.F vdifh.F
                   vdifq.F vdifv.F
  pbl_sfc/btpbl:   bound.F erase.F esatpb.F hoskeep.F initpb.F navypb.F
                   outpb.F uvcomp.F
  pbl_sfc/mrfpbl:  mrfpbl.F tridi2.F
  pbl_sfc/gspbl:   gspbl.F
  pbl_sfc/noahlsm: surfce.F sflx.F
  pbl_sfc/util:    slab.F
  radiation/sfc:    sfcrad.F trans.F transm.F
  radiation/simple: radcool.F
  radiation/cloud:  lwrad.F swrad.F
  radiation/ccm2:   cldems.F colmod.F fetchd.F getabe.F getdat.F putabe.F
                    radabs.F radclr.F radclw.F radcsw.F radctl.F radded.F
                    radems.F radini.F radinp.F radout.F radtpl.F resetr.F
                    stored.F wheneq.F whenfgt.F zenitm.F
  radiation/rrtm:   mm5atm.F rrtm.F rrtm_gasabs.F rrtm_init.F rrtm_k_g.F
                    rrtm_rtrn.F rrtm_setscef.F rrtm_taumol.F
  radiation/util:   solar1.F inirad.F o3data.F

include:
  addr0.incl addras.incl addrasn.incl addrcu.incl addrcun.incl addrfog.incl
  addrfogn.incl btblk1.incl btblk2.incl chardate.incl config.INCL
  dusolve1.incl fddagd.incl fddaob.incl fog1d.incl fogstuf.incl functb.INCL
  functb.incl hdrv3.incl hdtabl.incl jrg.incl landinp.incl landuse.incl
  navypb.incl nestl.incl nhcnst.incl nhtens.incl nncnst.incl nnnhyd.incl
  nnnhydb.incl nonhyd.incl nonhydb.incl parakcu.incl param2.incl param3.incl
  parame parbmcu.incl parccm2.incl pargrcu.incl parkfcu.incl pbltb.incl
  pbltke.incl pbltken.incl pmoist.incl pnavyn.incl point2d.incl point2dn.incl
  point3d.incl point3dn.incl pointbc.incl pointbcn.incl radccm2.incl
  radiat.incl rpstar.incl rpstarn.incl soil.incl soilcnst.incl soiln.incl
  soilp.incl soilpn.incl sum.incl surface.incl surfslab.incl uprad.incl
  varia.incl various.incl variousn.incl bucket.incl comicl.incl parpx.incl
  zdiff2.incl zdiffu.incl paramgen_LSM paramsoil_STAT paramveg_USGS
9 MAKE AND MM5
make and MM5 9-3 Logical Subdivision of code 9-3 Minimize Portability Concerns 9-3 Conditional Compilation 9-4 Configure.user File 9-4 Makefiles 9-5 Example: Top-Level Makefile 9-5 Example: Mid-Level Makefile 9-8 Example: Low-Level Makefile 9-10 CPP 9-11 CPP “inclusion” 9-12 CPP “conditionals” 9-12
9.1 make and MM5

The use of make in the MM5 project is motivated by a number of needs.

9.1.1 Logical Subdivision of code

MM5 is written in FORTRAN and organized with the goal of providing a logical structure for code development and encouraging modular development of new options and subroutines. In addition, the layout is meant to give the user/developer some “pointers” to the location of routines of particular interest. The hope is to create something that appears simple to the casual user while allowing more convenient access for the power user.

This structure is implemented implicitly by taking advantage of the Unix file system structure. Since directories are arranged as trees, the subroutines are subdivided into conceptual groups. The include directory contains the include files for the various subroutines. The domain, dynamics, fdda, physics, and memory directories contain the subroutines divided by function. The Run directory holds the main program source code.

make is the glue that holds this complicated structure together. As you have seen, make executes commands by spawning shells. These shells can in turn run make in subdirectories. This ability to nest makefiles is very powerful, since it allows you to recursively build an entire directory tree.

9.1.2 Minimize Portability Concerns

Writing portable code involves not only following language standards, but also creating a development structure that is equally standard. Every time code moves to a new machine you need to worry not only about your code but also about compilers, the operating system and the system environment, available libraries, and options for all of the above. The answer to this problem is two-fold: use only standard tools, and minimize the use of esoteric options.

make is such a standard tool - you will find make on every working UNIX machine you encounter. While each vendor's make may differ in significant ways, they all support a core
subset of functionality. This means that a basic makefile with no bells and whistles will work on the vast majority of machines available on the market.

“All makes are equal, but some makes are more equal than others.” Every decent Unix box will have make and cpp; however, each may throw in non-standard options. The quotation above reminds us to keep to standard constructions whenever possible.

9.1.3 Conditional Compilation

One of the stated goals is conditional compilation. This is done in two different ways. make keys off the user's options to skip compilation of the directories that are not required, and when a source file is compiled, cpp is used to avoid including code that is not required. So make skips unnecessary compilation, while cpp modifies what is compiled within each file.
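As a sketch of the second mechanism, a Fortran source file can bracket optional code with cpp directives, and a -D flag of the kind carried in CPPFLAGS selects the branch. The fragment and the subroutine name below are invented for illustration, and `cc -E -x c` is used only as a convenient standalone preprocessor, not the model's actual $(CPP) invocation:

```shell
# Hypothetical guarded fragment; EXCHANGE_HALO is a made-up name.
cat > cond.F << 'EOF'
#ifdef MPI
      CALL EXCHANGE_HALO
#else
      CONTINUE
#endif
EOF

cc -E -P -x c -DMPI cond.F   # keeps the distributed-memory branch
cc -E -P -x c cond.F         # keeps the serial branch
```

The same source file therefore serves every configuration: which lines reach the compiler is decided entirely by the -D flags in CPPFLAGS.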
9.2 Configure.user File

Since make needs rules and defined dependencies (sometimes not the default ones), and there are more than 65 makefiles in the MM5 V3 directory structure (more than 70 MM5 subdirectories, over 300 Fortran files, and more than 100 C files), it would be an enormous task to make any change in each of these makefiles. A simple solution is to define all the rules and dependencies in one file and pass this file to all makefiles when make is executed. These definitions constitute part of the configure.user file. This section explains the rules and dependencies as defined in the configure.user file.

SHELL             Defines the shell under which make is run.
.SUFFIXES         Defines the suffixes the makefiles use.
FC                Macro to define the Fortran compiler.
FCFLAGS           Macro to define any FORTRAN compiler options.
CFLAGS            Macro to define any C compiler options.
CPP               Macro to define where to locate the C pre-processor on the machine.
CPPFLAGS          Macro to define any cpp options.
LDOPTIONS         Macro to define any loader options.
LOCAL_LIBRARIES   Macro to define any local libraries that the compiler may access.
MAKE              Macro to define the make command.
-I$(LIBINCLUDE)   Where to search for include files when compiling.
-C                cpp option: all comments (except those found on cpp directive lines) are passed along.
-P                cpp option: preprocess the input without producing the line control information used by the next pass of the C compiler.
-i                make option: ignore error codes returned by invoked commands.
-r                make option: remove any default suffix rules.
AR                Macro to define archive options.
RM                Macro to define remove options.
RM_CMD            Macro to define what to remove when RM is executed.
GREP              Macro similar to grep.
CC                Macro to define the C compiler.
The following, which appears at the end of the configure.user file, defines the suffix rules a makefile uses. For example, .F.o: defines the rule to go from a .F file to a .o file; in that case, make first removes any existing out-of-date .o file and then compiles the .F file.

.F.i:
	$(RM) $@
	$(CPP) $(CPPFLAGS) $*.F > $@
	mv $*.i $(DEVTOP)/pick/$*.f
	cp $*.F $(DEVTOP)/pick

.c.o:
	$(RM) $@ && \
	$(CC) -c $(CFLAGS) $*.c

.F.o:
	$(RM) $@
	$(FC) -c $(FCFLAGS) $*.F

.F.f:
	$(RM) $@
	$(CPP) $(CPPFLAGS) $*.F > $@

.f.o:
	$(RM) $@
	$(FC) -c $(FCFLAGS) $*.f
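As a generic, self-contained illustration of how such suffix rules behave (this is not MM5 code; the .txt/.up suffixes and the tr "compiler" are stand-ins, since no Fortran compiler is assumed to be installed):

```shell
# Build a tiny makefile whose .txt.up suffix rule plays the role of .F.o:
# given a target X.up, make finds X.txt and applies the rule automatically.
mkdir -p /tmp/sfxdemo
cd /tmp/sfxdemo
printf '.SUFFIXES: .txt .up\n.txt.up:\n\trm -f $@\n\ttr a-z A-Z < $< > $@\n' > Makefile
echo 'hello suffix rules' > greet.txt
make greet.up     # make locates greet.txt and runs the .txt.up recipe
cat greet.up
```

Just as with the MM5 rules, the recipe never names greet.txt or greet.up explicitly; the suffix pair does the matching.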
9.3 Makefiles

make is a tool that executes "makefiles". Makefiles contain "targets" and "dependencies". A target is what you want to build; a dependency is what must be up to date before the target can be built. We use a 3-tiered makefile structure following the directory structure:

• Top Level: hides everything. The casual user edits the parameters and then just types "make". We take care of the rest.
• Middle (branching) Level: where branching occurs. These would be modified for something like the addition of a new moist physics scheme.
• Lowest (compilation) Level: where object files are made. Change this when adding files. In addition, the power user will run make in these lower directories to avoid remaking the whole structure.

Examples of each makefile follow.
9.3.1 Example: Top-Level Makefile

# Makefile for top directory

DEVTOP = .

include ./configure.user

all:
	(cd Util; $(MAKE)); \
	./parseconfig; \
	(cd include; $(MAKE)); \
	(cd memory; $(MAKE)); \
	(cd fdda; $(MAKE)); \
	(cd domain; $(MAKE)); \
	(cd physics; $(MAKE)); \
	(cd dynamics; $(MAKE)); \
	(cd Run; $(MAKE));

code:
	find . -name \*.i -exec rm {} \; ; \
	(cd Util; $(MAKE)); \
	./parseconfig; \
	(cd include; $(MAKE)); \
	(cd include; $(MAKE) code); \
	(cd memory; $(MAKE) code); \
	(cd fdda; $(MAKE) code); \
	(cd domain; $(MAKE) code); \
	(cd physics; $(MAKE) code); \
	(cd dynamics; $(MAKE) code); \
	(cd Run; $(MAKE) code);

little_f:
	(cd Util; $(MAKE)); \
	./parseconfig; \
	(cd include; $(MAKE)); \
	(cd memory; $(MAKE) little_f); \
	(cd fdda; $(MAKE) little_f); \
	(cd domain; $(MAKE) little_f); \
	(cd physics; $(MAKE) little_f); \
	(cd dynamics; $(MAKE) little_f); \
	(cd Run; $(MAKE) little_f);

mm5.deck:
	./Util/makedeck.csh $(RUNTIME_SYSTEM);

clean:
	(cd Util; $(MAKE) clean); \
	(cd include; $(MAKE) clean); \
	(cd memory; $(MAKE) clean); \
	(cd fdda; $(MAKE) clean); \
	(cd physics; $(MAKE) clean); \
	(cd domain; $(MAKE) clean); \
	(cd dynamics; $(MAKE) clean); \
	(cd Run; $(MAKE) clean); \
	if [ -f libutil.a ]; then $(RM) libutil.a; fi;

rm_obj:
	(cd Util; $(MAKE) clean); \
	(cd include; $(MAKE) clean); \
	(cd memory; $(MAKE) clean); \
	(cd fdda; $(MAKE) clean); \
	(cd physics; $(MAKE) clean); \
	(cd domain; $(MAKE) clean); \
	(cd dynamics; $(MAKE) clean); \
	(cd Run; $(MAKE) rm_obj); \
	if [ -f libutil.a ]; then $(RM) libutil.a; fi;

LineNumberer:
	$(CC) -o ./LineNumberer Util/LineNumberer.c;

mmlif:
	(cd Run; $(MAKE) mmlif);

#####################################################
#
#	Additions for MPP
#
#####################################################
#
# To clean after changes to configure.user, type 'make mpclean'.
# To uninstall everything relating to the MPP option, type 'make uninstall'.
# To partially remake the installation, remove MPP/mpp_install and type 'make mpp'.
#

mpclean: clean
	(cd MPP/build ; /bin/rm -fr *.o *.f *.dm *.b)

mpp: MPP/mpp_install
	(cd Util; $(MAKE))
	./parseconfig
	(cd include; $(MAKE))
	(cd include; $(MAKE) code)
	(sed '/t touch anything below this line/,$$d' configure.user \
	  > ./MPP/conf.mpp)
	(cd MPP; $(MAKE) col_cutter)
	(cd MPP/build; \
	  /bin/rm -f .tmpobjs ; \
	  $(CPP) -I../../pick ../mpp_objects_all > .tmpobjs ; \
	  $(MAKE) -f Makefile.$(MPP_LAYER) )

MPP/mpp_install:
	(cd include; $(MAKE) code )
	(cd MPP/RSL/RSL ; $(MAKE) $(MPP_TARGET) )
	(cd MPP/FLIC ; $(MAKE) ; $(MAKE) clean )
	(cd MPP/FLIC/FLIC ; $(MAKE) ; \
	  $(MAKE) clean ; \
	  /bin/rm -f flic ; \
	  sed s+INSTALL_STRING_FLICDIR+`pwd`+ flic.csh > flic ; \
	  chmod +x flic )
	(csh MPP/Makelinks $(MPP_LAYER) $(MPP_TARGET) )
	touch MPP/mpp_install

uninstall:
	(cd include; $(MAKE) clean)
	(cd memory; $(MAKE) clean)
	(cd fdda; $(MAKE) clean)
	(cd physics; $(MAKE) clean)
	(cd domain; $(MAKE) clean)
	(cd dynamics; $(MAKE) clean)
	(cd Run; $(MAKE) clean)
	if [ -f libutil.a ]; then $(RM) libutil.a; fi
	(cd MPP/FLIC/FLIC; /bin/rm -f dm ; $(MAKE) clean )
	(cd MPP/FLIC; $(MAKE) clean ; /bin/rm -fr bin )
	(cd MPP/RSL/RSL; $(MAKE) clean ; /bin/rm -f librsl.a )
	/bin/rm -f MPP/FLIC/h/*.h
	/bin/rm -fr MPP/build
	/bin/rm -f parseconfig
	/bin/rm -f MPP/col_cutter
	/bin/rm -f Run/mm5.exe
	/bin/rm -f Run/mm5.mpp
	/bin/rm -f pick/*.incl *.h
	/bin/rm -f MPP/mpp_install
Note: there are several targets in the top-level makefile: all, code, little_f (for the IBM xlf compiler, or any Fortran compiler that does not allow the use of cpp), mm5.deck, clean, LineNumberer, and mpclean, mpp, etc. for the MPP extension. If a user does not specify a target, make uses the first one it sees; in this case, that is the 'all' target. Any target that is not placed first must be named explicitly; for example, we use 'make mm5.deck' to make a job deck. The command for the target 'all' is to cd to a particular directory and execute make there (the macro $(MAKE) is defined in the configure.user file).
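The two behaviors the note relies on, default-first-target selection and the recursive (cd dir; $(MAKE)) idiom, can be seen in a toy example. This is not MM5 code; the /tmp paths and echoed strings are hypothetical stand-ins:

```shell
# A top makefile whose 'all' target recurses into a subdirectory makefile,
# mirroring the (cd include; $(MAKE)) lines of the MM5 top-level makefile.
mkdir -p /tmp/mkdemo/sub
printf 'all:\n\t@echo top-default\n\t@(cd sub; $(MAKE))\nclean:\n\t@echo top-clean\n' > /tmp/mkdemo/Makefile
printf 'all:\n\t@echo sub-built\n' > /tmp/mkdemo/sub/Makefile
( cd /tmp/mkdemo && make )       > /tmp/mkdemo/out1   # no target: 'all' runs, recursing into sub/
( cd /tmp/mkdemo && make clean ) > /tmp/mkdemo/out2   # explicit target: only 'clean' runs
```

Running with no argument builds 'all' (and the subdirectory), while 'make clean' touches only the clean recipe.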
9.3.2 Example: Mid-Level Makefile

# Makefile for directory physics/pbl_sfc

DEVTOP = ../..

include ../../configure.user

lib:
	@tmpfile='.tmpfile'; \
	echo $(IBLTYP) > $$tmpfile; \
	$(GREP) "0" $$tmpfile; \
	if [ $$? = 0 ]; then \
	  echo "IBLTYP = 0"; \
	  (cd dry; $(MAKE) all); \
	else \
	  echo "IBLTYP != 0"; \
	fi; \
	$(GREP) "1" $$tmpfile; \
	if [ $$? = 0 ]; then \
	  echo "IBLTYP = 1"; \
	  (cd bulk; $(MAKE) all); \
	  (cd dry; $(MAKE) all); \
	else \
	  echo "IBLTYP != 1"; \
	fi; \
	$(GREP) "2" $$tmpfile; \
	if [ $$? = 0 ]; then \
	  echo "IBLTYP = 2"; \
	  (cd hirpbl; $(MAKE) all); \
	else \
	  echo "IBLTYP != 2"; \
	fi; \
	$(GREP) "3" $$tmpfile; \
	if [ $$? = 0 ]; then \
	  echo "IBLTYP = 3"; \
	  (cd btpbl; $(MAKE) all); \
	else \
	  echo "IBLTYP != 3"; \
	fi; \
	$(GREP) "4" $$tmpfile; \
	if [ $$? = 0 ]; then \
	  echo "IBLTYP = 4"; \
	  (cd btpbl; $(MAKE) all); \
	else \
	  echo "IBLTYP != 4"; \
	fi; \
	$(GREP) "5" $$tmpfile; \
	if [ $$? = 0 ]; then \
	  echo "IBLTYP = 5"; \
	  (cd mrfpbl; $(MAKE) all); \
	else \
	  echo "IBLTYP != 5"; \
	fi; \
	$(GREP) "6" $$tmpfile; \
	if [ $$? = 0 ]; then \
	  echo "IBLTYP = 6"; \
	  (cd btpbl; $(MAKE) all); \
	else \
	  echo "IBLTYP != 6"; \
	fi; \
	(cd util; $(MAKE) all);

code:
	@tmpfile='.tmpfile'; \
	echo $(IBLTYP) > $$tmpfile; \
	$(GREP) "1" $$tmpfile; \
	if [ $$? = 0 ]; then \
	  echo "IBLTYP = 1"; \
	  (cd bulk; $(MAKE) code); \
	  (cd dry; $(MAKE) code); \
	else \
	  echo "IBLTYP != 1"; \
	fi; \
	$(GREP) "0" $$tmpfile; \
	if [ $$? = 0 ]; then \
	  echo "IBLTYP = 0"; \
	  (cd dry; $(MAKE) code); \
	else \
	  echo "IBLTYP != 0"; \
	fi; \
	$(GREP) "2" $$tmpfile; \
	if [ $$? = 0 ]; then \
	  echo "IBLTYP = 2"; \
	  (cd hirpbl; $(MAKE) code); \
	else \
	  echo "IBLTYP != 2"; \
	fi; \
	$(GREP) "3" $$tmpfile; \
	if [ $$? = 0 ]; then \
	  echo "IBLTYP = 3"; \
	  (cd btpbl; $(MAKE) code); \
	else \
	  echo "IBLTYP != 3"; \
	fi; \
	$(GREP) "4" $$tmpfile; \
	if [ $$? = 0 ]; then \
	  echo "IBLTYP = 4"; \
	  (cd btpbl; $(MAKE) code); \
	else \
	  echo "IBLTYP != 4"; \
	fi; \
	$(GREP) "5" $$tmpfile; \
	if [ $$? = 0 ]; then \
	  echo "IBLTYP = 5"; \
	  (cd mrfpbl; $(MAKE) code); \
	else \
	  echo "IBLTYP != 5"; \
	fi; \
	$(GREP) "6" $$tmpfile; \
	if [ $$? = 0 ]; then \
	  echo "IBLTYP = 6"; \
	  (cd btpbl; $(MAKE) code); \
	else \
	  echo "IBLTYP != 6"; \
	fi; \
	(cd util; $(MAKE) code);

little_f:
	@tmpfile='.tmpfile'; \
	echo $(IBLTYP) > $$tmpfile; \
	$(GREP) "0" $$tmpfile; \
	if [ $$? = 0 ]; then \
	  echo "IBLTYP = 0"; \
	  (cd dry; $(MAKE) little_f); \
	else \
	  echo "IBLTYP != 0"; \
	fi; \
	$(GREP) "1" $$tmpfile; \
	if [ $$? = 0 ]; then \
	  echo "IBLTYP = 1"; \
	  (cd bulk; $(MAKE) little_f); \
	  (cd dry; $(MAKE) little_f); \
	else \
	  echo "IBLTYP != 1"; \
	fi; \
	$(GREP) "2" $$tmpfile; \
	if [ $$? = 0 ]; then \
	  echo "IBLTYP = 2"; \
	  (cd hirpbl; $(MAKE) little_f); \
	else \
	  echo "IBLTYP != 2"; \
	fi; \
	$(GREP) "3" $$tmpfile; \
	if [ $$? = 0 ]; then \
	  echo "IBLTYP = 3"; \
	  (cd btpbl; $(MAKE) little_f); \
	else \
	  echo "IBLTYP != 3"; \
	fi; \
	$(GREP) "4" $$tmpfile; \
	if [ $$? = 0 ]; then \
	  echo "IBLTYP = 4"; \
	  (cd btpbl; $(MAKE) little_f); \
	else \
	  echo "IBLTYP != 4"; \
	fi; \
	$(GREP) "5" $$tmpfile; \
	if [ $$? = 0 ]; then \
	  echo "IBLTYP = 5"; \
	  (cd mrfpbl; $(MAKE) little_f); \
	else \
	  echo "IBLTYP != 5"; \
	fi; \
	$(GREP) "6" $$tmpfile; \
	if [ $$? = 0 ]; then \
	  echo "IBLTYP = 6"; \
	  (cd btpbl; $(MAKE) little_f); \
	else \
	  echo "IBLTYP != 6"; \
	fi; \
	(cd util; $(MAKE) little_f);

clean:
	(cd btpbl; $(MAKE) clean); \
	(cd bulk; $(MAKE) clean); \
	(cd dry; $(MAKE) clean); \
	(cd hirpbl; $(MAKE) clean); \
	(cd mrfpbl; $(MAKE) clean); \
	(cd util; $(MAKE) clean);
Note: This example shows how the branching is done with the mid-level makefile. The makefile first echoes the string IBLTYP, defined in the configure.user file, to a temporary file, .tmpfile. It then checks, using grep, whether any of the options is present (in this case, IBLTYP may be 0 through 6). If one is found, make changes to the directory that contains the subroutines for that option and executes the make command there. Again there are several targets in this mid-level makefile: lib, code, little_f, and clean. The default is the target lib.

9.3.3 Example: Low-Level Makefile

# Makefile for directory physics/pbl_sfc/mrfpbl
DEVTOP = ../../..

include ../../../configure.user

CURRENT_DIR = $(DEVTOP)/physics/pbl_sfc/mrfpbl

OBJS =\
	mrfpbl.o \
	tridi2.o

SRC =\
	mrfpbl.i \
	tridi2.i

SRCF =\
	mrfpbl.f \
	tridi2.f

LIBTARGET = util
TARGETDIR = ../../../

all:: $(OBJS)
	$(AR) $(TARGETDIR)lib$(LIBTARGET).a $(OBJS)

code:: $(SRC)

little_f:: $(SRCF) $(OBJS)
	$(AR) $(TARGETDIR)lib$(LIBTARGET).a $(OBJS)

# ---------------------------------------------------------------
# common rules for all Makefiles - do not edit
emptyrule::

clean::
	$(RM_CMD) "#"*

# ---------------------------------------------------------------
# DO NOT DELETE THIS LINE -- make depend depends on it.
mrfpbl.o: ../../../include/parame.inc ../../../include/rpstar.incl
mrfpbl.o: ../../../include/varia.incl ../../../include/dusolve1.incl
mrfpbl.o: ../../../include/param2.incl ../../../include/param3.incl
mrfpbl.o: ../../../include/pmoist.incl ../../../include/point3d.incl
mrfpbl.o: ../../../include/point2d.incl ../../../include/various.incl
mrfpbl.o: ../../../include/nonhyd.incl ../../../include/nhcnst.incl
mrfpbl.o: ../../../include/soil.incl ../../../include/soilcnst.incl
mrfpbl.o: ../../../include/addrcu.incl ../../../include/pbltb.incl
tridi2.o: ../../../include/parame.incl
Note: In this example, when make is executed ('make -i -r'), it first looks for a target, for example 'all'. It finds that the target 'all' depends on a group of object files (defined by the macro OBJS). The rules for making the object files are defined in the configure.user file, i.e. the .F.o: rule. make checks whether any .o file is out-of-date with respect to its .F file, or with respect to any of the include files used by the .F file. The dependencies on include files are listed at the end of the makefile. After the .o files are made, the command on the following line archives them into libutil.a using the macro AR defined in configure.user.
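The effect of those explicit include-file dependency lines can be sketched generically. Nothing below is MM5 code: the file names are stand-ins and cat plays the compiler, since no Fortran compiler is assumed. Touching an include file makes make rebuild only the objects that list it:

```shell
# a.o depends on its source AND an include file; b.o only on its source.
mkdir -p /tmp/depdemo
cd /tmp/depdemo
echo 'PARAMETER (MLX=6)' > parame.incl
echo body-a > a.src
echo body-b > b.src
printf 'a.o: a.src parame.incl\n\tcat a.src parame.incl > a.o\nb.o: b.src\n\tcat b.src > b.o\n' > Makefile
make a.o b.o > build1.log   # first build: both objects made
sleep 1                     # ensure the touch below gets a newer timestamp
touch parame.incl           # simulate editing the include file
make a.o b.o > build2.log   # only a.o is out of date, so only a.o rebuilds
```

The second log shows the a.o recipe running again while b.o is reported up to date, which is exactly why the MM5 low-level makefiles carry those long dependency lists.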
9.4 CPP

The cpp pre-processor is about as old as Unix itself. A pre-processor scans a file and makes modifications according to user-supplied definitions. Typically this facility is used for global substitutions, conditional code inclusion, file inclusion, and function templating. We use only the cpp "conditional code inclusion" and "file inclusion" features. Because we use cpp, our Fortran source files are named .F, in contrast to .f. Many machines recognize .F files as ones that must be run through cpp before being compiled.

9.4.1 CPP "inclusion"

One cpp directive is "#include". This directive indicates that the named file should be included in the source prior to compilation. Example:

      SUBROUTINE SOLVE(IEXEC,INEST,NN)
# include <parame.incl>
turns into

      SUBROUTINE SOLVE(IEXEC,INEST,NN)
C PARAME
C
C--- ADDITIONAL MEMORY REQUIREMENTS FOR RUNS ,
C--- GRIDDED FDDA RUNS (IFDDAG=1) AND OBS FDDA RUNS (IFDDAO=1),
C--- NONHYDROSTATIC RUNS (INHYD=1), HIGHER ORDER PBL RUNS (INAV=1),
C--- EXPLICIT MOISTURE SCHEME (IEXMS=1), ARAKAWA-SCHUBERT
C--- CONVECTIVE PARAMETERIZATION (IARASC=1), ATMOSPHERIC
C--- RADIATION (IRDDIM=1), MIXED-PHASE ICE SCHEME (IICE=1),
C--- GRAUPEL SCHEME (IICEG=1), KAIN-FRITSCH AND FRITSCH-CHAPPELL
C--- CONVECTIVE PARAMETERIZATIONS (IKFFC=1), AND GAYNO-SEAMAN PBL (IGSPBL=1),
C--- 5-LAYER SOIL (ISLDIM=1,MLX=6), OSU LAND SFC (ILDDIM=1,MLX=4).
C
      INTEGER IARASC,IEXMS,IFDDAG,IFDDAO,IICE,IICEG,IKFFC,ILDDIM,INAV
      INTEGER INAV2,INAV3,IGSPBL,INHYD,IRDDIM,ISLDIM,MLX
      PARAMETER (IFDDAG=1,IFDDAO=1,INHYD=1,INAV=0,INAV2=0,INAV3=0,
     1           IICE=0,IICEG=0,IEXMS=1,IKFFC=0,IARASC=0,IRDDIM=1,
     2           IGSPBL=0,ISLDIM=1,ILDDIM=0,MLX=6)

9.4.2 CPP "conditionals"

cpp also recognizes conditional directives. You define a macro in your source code using the "#define" directive; you can then use the "#ifdef" test on this macro to selectively include code. Example: in defines.incl, there are statements such as:

#define IMPHYS4 1
#define IMPHYS1 1
#define ICUPA3 1
#define IBLT2 1

In SOLVE, the .F file has

#ifdef ICUPA3
C
C--- ICUPA=3: GRELL
C
9-12
MM5 Tutorial
9: MAKE AND MM5
      IF(ICUPA(INEST).EQ.3)THEN
        DO J=JBNES,JENES-1
          DO K=1,KL
            DO I=IBNES,IENES-1
              CLDFRA(I,K) = 0.0
            END DO
          END DO
          CALL CUPARA3(T3D,QV3D,PSB,T3DTEN,QV3DTEN,RAINC,CLDFRA,HT,U3D,
     +                 V3D,PP3D,INEST,J,IBNES,IENES-1)
          DO K=1,KL
            DO I=IBNES,IENES-1
              CLDFRA3D(I,J,K)=CLDFRA(I,K)
            ENDDO
          ENDDO
        ENDDO
      ENDIF
#endif
... and so on. In this example, only ICUPA3 is defined (#define ICUPA3 1 in defines.incl), so the call to CUPARA3 is kept in the final source code. The other cumulus schemes are not selected, so the calls to those schemes won't be included in the source code to be compiled.
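A minimal stand-alone demonstration of the same mechanism (hypothetical file and routine names; this assumes a system cpp is installed, just as the MM5 build does):

```shell
# Two #ifdef blocks; only the defined macro's block survives preprocessing.
cat > /tmp/solve_demo.F <<'EOF'
#define ICUPA3 1
#ifdef ICUPA3
      CALL CUPARA3
#endif
#ifdef ICUPA2
      CALL CUPARA2
#endif
EOF
cpp -P /tmp/solve_demo.F > /tmp/solve_demo.f   # -P: no line-control markers
cat /tmp/solve_demo.f
```

The resulting .f file contains the CUPARA3 call but no trace of the CUPARA2 block, mirroring how unselected cumulus schemes drop out of the compiled MM5 source.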
10 NESTDOWN

Purpose 10-3
NESTDOWN Procedure 10-3
Base State Computation 10-5
Shell Variables (for IBM job deck only) 10-5
Parameter Statements 10-5
FORTRAN Namelist Input File 10-5
Horizontal Interpolation 10-7
Vertical Corrections after Horizontal Interpolation 10-8
Temperature correction 10-8
Horizontal-wind correction 10-9
How to Run NESTDOWN 10-10
NESTDOWN didn't Work! What Went Wrong? 10-10
File I/O 10-11
NESTDOWN tar File 10-12
10.1 Purpose

The NESTDOWN program horizontally interpolates σ-coordinate data from a coarse grid to a fine grid. Modifying the number of vertical levels or their distribution is permitted, usually to increase vertical resolution to complement a finer horizontal grid. The input data are on σ coordinates, either output from the mesoscale model or an initial condition file for model input. The output remains on σ coordinates. The program requires the fine grid TERRAIN data as input. Optionally, the coarse-grid lower boundary file may also be required. If this program is used to produce a higher resolution model run from a coarse grid, there are several advantages: 1) the model has lateral boundary conditions whose physics are consistent with the coarse grid model; 2) the lateral boundary conditions are available at a relatively high temporal frequency; 3) the vertical structure of the atmosphere is not significantly modified through vertical interpolation. Without the inclusion of observations, though, the model solution is free to drift. The NESTDOWN program runs on the following platforms: Compaq/Alpha, Cray, DEC, HP, IBM, SGI, Sun, PCs running Linux (Fedora with PGI or Intel compilers), and Mac (OS X with xlf). The NESTDOWN code is written in FORTRAN 90.
10.2 NESTDOWN Procedure

• ingest model input or model output (must be on σ levels) and TERRAIN data
• horizontally interpolate 3d data from the coarse grid to the fine grid
• horizontally interpolate 2d data that is not a subset of the TERRAIN file
• horizontally interpolate 2d masked fields
• compute the base state for the coarse grid and the fine grid, and the base state for vertical nesting
• adjust the 3d fine grid temperature from base state differences
• fine grid: Qv -> RH, recompute Qv with the new fine grid temperature
• if vertical nesting, vertically interpolate all 3d arrays, linear in Z
• save output for the daily mean for the lower boundary file
• output current data for the boundary file
• output interpolated data for initial conditions
• output data for the lower boundary
Fig. 10.1 A schematic diagram of different NESTDOWN jobs. The NESTDOWN program is able to ingest a coarse grid file which is output from INTERPF. The NESTDOWN program is also able to ingest model output. Both NESTDOWN jobs send output to the mesoscale model.
10.3 Base State Computation The base state computation is handled identically to that described in Chapter 7 INTERPF. However, two base states are computed: one for the coarse grid and one for the fine grid. The only difference between the two is the terrain elevation, as all of the other constants use the same values (P00, Ts0, A, PTOP, TISO). If vertical nesting is requested, then the base state for the new sigma level distribution is also computed, but only for the fine grid.
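For convenience, the reference (base state) temperature profile used in these computations can be sketched as below. This is an informal reading of the Chapter 7 INTERPF formulation, so Chapter 7 remains the authority; the TISO floor written here is this writer's interpretation of the constants listed above:

```latex
% Reference temperature as a function of reference pressure alone,
% using the shared constants P00, Ts0, A, TISO:
T_0(p_0) \;=\; \max\!\left( T_{ISO},\; T_{s0} + A \,\ln\frac{p_0}{P_{00}} \right)
```

Because P00, Ts0, A, PTOP, and TISO are shared, the coarse and fine base states differ only through the surface reference pressure implied by each grid's terrain elevation, which is the point made above.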
10.4 Shell Variables (for NCAR IBM job deck only)

All of the MM5 system job decks for the NCAR IBMs are written as C-shell executables. Strict adherence to C-shell syntax is required in this section.

Table 10.1: NESTDOWN IBM deck shell variables.

C-shell Variable   Options and Use
ExpName            location of MSS files; keep the same as used for the deck generating the input file for this program
InName             directory location of the LOWBDY and TERRAIN files on the MSS
RetPd              time in days to retain data on the MSS after last access
InData             local names of the MM5 output, the LOWBDY, and the fine grid TERRAIN files
10.5 Parameter Statements Guess what? No domain-specific FORTRAN PARAMETER statements required.
10.6 FORTRAN Namelist Input File

Most of the available options for the NESTDOWN code are handled through the namelist input file. Since this is a FORTRAN namelist (a FORTRAN 90 standard), the syntax is very strict. There are six namelist records for NESTDOWN. There are no default values; the entire namelist must be correctly modified for each program execution. However, there are a few values that do not often need to be changed.
Table 10.2: NESTDOWN namelist values: RECORD0.

Namelist Record   Namelist Variable    Description
RECORD0           INPUT_FILE           CHARACTER string, coarse grid input file from INTERPF or MM5, complete with directory structure
RECORD0           INPUT_LOWBDY_FILE    CHARACTER string, LOWBDY file from the coarse grid INTERPF, complete with directory structure (OPTIONAL; if USE_MM5_LOWBDY = .FALSE., this file is not necessary)
RECORD0           INPUT_TERRAIN_FILE   CHARACTER string, fine grid input file from TERRAIN, complete with directory structure
Table 10.3: NESTDOWN namelist values: RECORD1.

Namelist Record   Namelist Variable   Description
RECORD1           START_YEAR          starting time, 4 digit INTEGER of the year
RECORD1           START_MONTH         starting time, 2 digit INTEGER of the month
RECORD1           START_DAY           starting time, 2 digit INTEGER of the day
RECORD1           START_HOUR          starting time, 2 digit INTEGER of the hour
RECORD1           END_YEAR            ending time, 4 digit INTEGER of the year
RECORD1           END_MONTH           ending time, 2 digit INTEGER of the month
RECORD1           END_DAY             ending time, 2 digit INTEGER of the day
RECORD1           END_HOUR            ending time, 2 digit INTEGER of the hour
RECORD1           INTERVAL            time interval in seconds between analysis/forecast periods
RECORD1           LESS_THAN_24H       T/F flag of whether to force less than 24 h in the analysis (FALSE by default)
Table 10.4: NESTDOWN namelist values: RECORD2.

Namelist Record   Namelist Variable      Description
RECORD2           SIGMA_F_BU             REAL array, new sigma distribution, full levels, bottom-up (only required for vertical nesting)
RECORD2           SST_TO_ICE_THRESHOLD   REAL, temperature at which the SST is cold enough to turn the "water" category into an "ice" category (not advised for LSM or polar physics in MM5)
Table 10.5: NESTDOWN namelist values: RECORD4, 5 and 6.

Namelist Record   Namelist Variable   Description
RECORD4           WRTH2O              T/F flag, saturation is with respect to liquid water (inoperative currently)
RECORD5           IFDATIM             INTEGER, number of time periods of initial condition output required (only 1 is necessary if not doing analysis nudging); "-1" is the magic choice for selecting that all time periods are to be output
RECORD6           INTERP_METHOD       INTEGER, horizontal interpolation choice: 1 = overlapping parabolic, BINT (fast); 2 = positive definite, SINT (slow)
RECORD6           USE_MM5_LOWBDY      T/F flag, use the information inside the input file to build the LOWBDY file; TRUE implies that the user provides the INPUT_LOWBDY_FILE from the coarse grid as input
10.7 Horizontal Interpolation

There are several horizontal interpolation options used inside NESTDOWN, though the user only controls the selection of positive definite (slow) vs. overlapping parabolic (faster). If the user selected a 3:1 ratio in the TERRAIN program, a tailored 3:1 interpolator is employed; other ratios require the BINT function. For masked fields, a linear 4-point interpolation is selected if all four surrounding points are valid. If at least one of the points is inappropriate for a masked interpolation and at least one of the points is valid, then an average of the valid points (of the four surrounding values) is computed. Masked fields with no valid surrounding points are given various default values based on the field name.
10.8 Vertical Corrections after Horizontal Interpolation

The horizontal interpolation takes place on σ coordinates. With the introduction of the fine grid terrain elevation data, there are locations with a significant amount of orographic change. The 3D temperature, 3D mixing ratio, 2D ground temperature, and 2D soil temperatures are slightly modified to reduce the initial imbalance the model would otherwise feel from these fields.

10.8.1 Temperature correction

The air temperature and ground temperature adjustments are based upon the difference in the reference temperature at the particular (i,j,k) location. The "F" subscript denotes the new variable with the fine grid terrain elevation, the "C" subscript denotes the coarse terrain values, and the "R" refers to the reference temperature. Other than a few constants, the reference temperature differences are a function of terrain elevation differences only. For the ground temperature and soil temperature adjustments, the reference temperature difference at the lowest σ level is used as the correction.
$$T_{F_{ijk}} \;=\; T_{C_{ijk}} + \left(T_{R_F} - T_{R_C}\right)_{ijk} \qquad (10.1)$$
The original fine grid mixing ratio, computed with the original fine grid temperature, is converted to relative humidity. This is re-diagnosed as mixing ratio, but using the adjusted fine grid temperature. Essentially, RH is conserved.
10.8.2 Horizontal-wind correction

Fig. 10.2 With the introduction of new terrain elevation data in NESTDOWN, the σ level locations (fixed in height) are consequently modified.

To keep surface winds on the new domain from inappropriately picking up characteristics of the free atmosphere, a limit is enforced on the depth over which the vertical gradient is computed. The option to modify the horizontal winds based on the elevation differences between the horizontally interpolated terrain elevation and the fine grid input terrain is currently commented out, which matches the treatment given to fine grid domains initialized within MM5.

1. Restrict the vertical extrapolation to at most one level:

$$\Delta P \;=\; \max\!\left(\min\!\left(P_{F_N} - P_F,\; P_{FL} - P_F\right),\; P_{FU} - P_F\right) \qquad (10.2)$$

2. Compute the vertical gradient of the horizontal wind components:

$$\frac{\delta F}{\delta P} \;=\; \frac{F_{K+1} - F_{K-1}}{P_{FL} - P_{FU}} \qquad (10.3)$$

3. Compute the new wind component:

$$F_N \;=\; F + \Delta P\,\frac{\delta F}{\delta P} \qquad (10.4)$$

Here (as in Fig. 10.2) P_F, P_FU, and P_FL are the pressures at levels K, K-1, and K+1, and P_FN is the pressure of the new level K.
10.9 How to Run NESTDOWN

1) Obtain the source code tar file from one of the following places:
   Anonymous ftp: ftp://ftp.ucar.edu/mesouser/MM5V3/NESTDOWN.TAR.gz
   On the NCAR MSS: /MESOUSER/MM5V3/NESTDOWN.TAR.gz
2) gunzip and untar the file.
3) Type 'make' to create an executable for your platform.
4) On the NCAR IBMs, edit nestdown.deck.ibm (located in ~mesouser/MM5V3/IBM) to select script options and namelist options. On a workstation, edit the namelist.input file for the namelist options.
5) On the NCAR IBMs, type nestdown.deck.ibm to compile and execute the program. It is usually good practice to redirect the output to a file so that if the program fails, you can take a look at the log file. To do so, type: nestdown.deck.ibm >& nestdown.log, for example. On a workstation, or on an IBM running interactively, run the executable directly (nestdown >& nestdown.log).

NESTDOWN input files:
   Either MMINPUT_DOMAINn or MMOUT_DOMAINn
   TERRAIN_DOMAINm
   LOWBDY_DOMAINn (optional)
where n is the coarse grid domain identifier on input and m is the fine grid domain identifier. The location of the input files (directory information) is provided in the namelist.input file.

NESTDOWN output files:
   MMINPUT_DOMAINm
   LOWBDY_DOMAINm
   BDYOUT_DOMAINm
where m is the fine grid domain identifier on output. These files are created in the current working directory. The user has no control over the naming convention.
10.10 NESTDOWN didn't Work! What Went Wrong?

• Most of the errors from NESTDOWN that do not end with a "segmentation fault", "core dump", or "floating point error" are accompanied by a simple print statement. Though the message itself may not contain enough substance to correct the problem, it will lead you to the section of the code that failed, which should provide more diagnostic information. The last statement that NESTDOWN prints during a controlled failed run is the diagnostic error.

• To see if NESTDOWN completed successfully, first check whether the "STOP 99999" statement appears. Also check that NESTDOWN processed each of the requested times from the namelist. The initial condition file should be written to after each analysis time (up to the number of time periods requested in the namelist file). The boundary condition file is written to after each analysis time, beginning with the second time period. The LOWBDY file is written only once (either at the end of the NESTDOWN program if the LOWBDY file is computed, or at the beginning of the program if the LOWBDY file is directly interpolated).
• Remember that to generate a single boundary condition file, you must have at least two time periods, so that a lateral boundary tendency may be computed. Even if you are not going to run a long forecast, it is advantageous to provide a full day for the lateral boundary condition file, as this file contains the daily mean of the surface air temperature and the daily mean of the SST. Users may mitigate this situation by providing the coarse grid's LOWBDY file as input, which contains the previous analysis' daily mean fields.
10.11 File I/O

The NESTDOWN program has input and output files that are ingested and created during program execution. The gridded input files and all of the output files are unformatted FORTRAN (binary, sequential access). One of the input files is a human-readable, namelist-formatted file of run-time options. The following tables list the input and output files for the NESTDOWN program.
Table 10.6: NESTDOWN program input files.

File Name                            Description
namelist.input                       namelist file containing run-time options
MMINPUT_DOMAINn or MMOUT_DOMAINn     coarse grid input data on σ levels
LOWBDY_DOMAINn                       optional input file; contains the coarse grid reservoir temperature and mean SST
TERRAIN_DOMAINm                      the fine grid terrestrial information
Table 10.7: NESTDOWN program output files.

File Name           Description
MMINPUT_DOMAINm     fine grid initial condition for MM5
BDYOUT_DOMAINm      fine grid lateral boundary condition for MM5
LOWBDY_DOMAINm      fine grid lower boundary condition (reservoir temperature and mean SST)
10.12 NESTDOWN tar File

The nestdown.tar file contains the following files and directories:

CHANGES          Description of changes to the NESTDOWN program
Makefile         Makefile to create the NESTDOWN executable
README           General information about the NESTDOWN directory
namelist.input   Namelist file containing run-time options
nestdown.deck    Job deck for use on one of NCAR's Crays
src/             NESTDOWN source code
11 INTERPB

Purpose 11-3
INTERPB Procedure 11-3
Sea Level Pressure Computation 11-4
Vertical Interpolation/Extrapolation 11-6
Interpolation (non-hydrostatic) 11-7
Parameter Statements 11-8
FORTRAN Namelist Input File 11-8
How to Run INTERPB 11-11
INTERPB didn't Work! What Went Wrong? 11-12
File I/O 11-12
INTERPB tar File 11-13
11.1 Purpose The INTERPB program handles the data transformation required to go from the mesoscale model on σ coordinates back to pressure levels. This program only handles vertical interpolation and a few diagnostics. The output from this program is suitable for input to REGRIDDER (to re-grid a model forecast), LITTLE_R (for pressure-level re-analysis), INTERPF (for pressure to σ interpolation for generating model input) and GRAPH (for visualization and diagnostic computation). In practice, much of the post-analysis performed with MM5 data is done interactively with diagnostic and visualization tools that can handle simple vertical coordinate transformations on-the-fly. The INTERPB program can run on the following platforms: Compaq/Alpha, Cray, DEC, HP, IBM, SGI, Sun, PCs running Linux (Fedora with PGI or Intel compilers), and MAC (OSX with xlf).
11.2 INTERPB Procedure

• input model input or model output data (INTERPF, NESTDOWN, MM5)
• compute total pressure on dot and cross points
• compute RH and Z on σ levels
• compute 2D fields on both staggerings: surface pressure, sea level pressure, latitude, longitude, "computationally expedient" surface pressure
• extrapolate below ground and above the top σ surface
    variables: u and v, temperature, moisture, height, pressure, and ELSE
    options: extrapolate or constant
• interpolate to selected pressure levels
    options: linear in pressure, linear in ln pressure, linear in p^κ
• output interpolated data
Fig. 11.1 All INTERPB jobs ingest σ-level model output (from MM5, NESTDOWN, or INTERPF). This data is interpolated to the requested isobaric surfaces by INTERPB. The output from INTERPB is suitable for use by the MM5 programs RAWINS/LITTLE_R, INTERPF, REGRIDDER, and GRAPH.
11.3 Sea Level Pressure Computation

1. Find the two σ levels surrounding the point 100 hPa above the surface (level A above, level B below), and compute T at this level by interpolation that is linear in ln p:

T = \frac{T_{\sigma_A} \ln(P_{\sigma_B}/P) + T_{\sigma_B} \ln(P/P_{\sigma_A})}{\ln(P_{\sigma_B}/P_{\sigma_A})}    (11.1)

Fig. 11.2 To minimize diurnal effects on the sea-level pressure computation, a pressure and temperature 100 hPa above the surface are used to compute a "surface" pressure and "surface" temperature. (The figure shows the level P, T lying 100 hPa above the ground, bracketed by the σ levels P_A, T_A above and P_B, T_B below.)

2. Find T_S (surface temperature), T_m (mean temperature in the layer above ground), Z (height of the level 100 hPa above the surface), and T_SLV (sea-level temperature), where γ_S is the standard lapse rate:

T_S = T \left( \frac{P_{SFC}}{P} \right)^{R \gamma_S / g}    (11.2)

T_m = \frac{T_S + T}{2}    (11.3)

Z = Z_{SFC} - \frac{R}{g} \ln\left( \frac{P}{P_{SFC}} \right) T_m    (11.4)

T_{SLV} = T + \gamma_S Z    (11.5)

3. Then sea level pressure is calculated as

P_{SLV} = P_{SFC} \exp\left[ \frac{g Z_{SFC}}{R (T_S + T_{SLV})/2} \right]    (11.6)
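The three steps can be collected into a short function. This is an illustrative Python sketch of Eqs. (11.2)-(11.6), not the INTERPB source; the constants (R = 287.04 J kg-1 K-1, g = 9.81 m s-2, standard lapse rate γ_S = 6.5 K/km) are the conventional values.

```python
import math

R = 287.04        # gas constant for dry air (J kg-1 K-1)
G = 9.81          # gravitational acceleration (m s-2)
GAMMA_S = 0.0065  # standard lapse rate (K m-1)

def sea_level_pressure(p_sfc, z_sfc, t, p):
    """Reduce surface pressure to sea level following Eqs. 11.2-11.6.

    p_sfc : surface pressure (Pa)
    z_sfc : terrain height (m)
    t, p  : temperature (K) and pressure (Pa) ~100 hPa above the surface
    """
    ts = t * (p_sfc / p) ** (R * GAMMA_S / G)        # Eq. 11.2: surface temperature
    tm = 0.5 * (ts + t)                              # Eq. 11.3: layer-mean temperature
    z = z_sfc - (R / G) * math.log(p / p_sfc) * tm   # Eq. 11.4: height of level p
    t_slv = t + GAMMA_S * z                          # Eq. 11.5: sea-level temperature
    # Eq. 11.6: reduce the surface pressure to sea level
    return p_sfc * math.exp(G * z_sfc / (R * 0.5 * (ts + t_slv)))
```

At sea level (z_sfc = 0) the exponential is unity and the function returns the surface pressure unchanged; over elevated terrain it returns a value larger than p_sfc, as expected.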
11.4 Vertical Interpolation/Extrapolation

Extrapolation is required near the surface when

p^*_{ij} \sigma_{k=KX} + P_{TOP} + P'_{ijk} < P_{int-bot}    (11.7)

where P_int-bot is typically 1000 hPa. This is handled in a subroutine specifically to allow pipelining of the expensive inner loops of the vertical interpolation scheme. Extrapolation is required near the top of the model when

p^*_{ij} \sigma_{k=1} + P_{TOP} + P'_{ijk} > P_{int-top}    (11.8)

where P_int-top is typically P_TOP. Every column of σ-level data has a fictitious level inserted in the column, below the 1000 hPa level (the chosen value is 1001 hPa).

Fig. 11.3 Extrapolation is required on INTERPB jobs when the requested pressure is below the lowest σ level (σ = KX, nearest the ground). A fictitious level is generated at P = 1001 hPa so that the 1000 hPa level is always available without extrapolation.
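The two tests above can be sketched for a single column. This is an illustrative Python sketch (the function and variable names are mine, not INTERPB's); the pressure at each σ level follows Eq. (11.9):

```python
def needs_extrapolation(p_star, sigma, p_prime, p_top, p_requested):
    """Flag whether a requested isobaric level falls outside the sigma column.

    sigma and p_prime are ordered k = 1 (model top) ... k = KX (lowest level).
    All pressures are in Pa.
    """
    # Pressure at the bottom (k = KX) and top (k = 1) sigma levels, per Eq. 11.9
    p_bottom = p_star * sigma[-1] + p_top + p_prime[-1]
    p_highest = p_star * sigma[0] + p_top + p_prime[0]
    below_ground = p_requested > p_bottom   # Eq. 11.7: downward extrapolation needed
    above_top = p_requested < p_highest     # Eq. 11.8: upward extrapolation needed
    return below_ground, above_top
```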
11.4.1 Interpolation (non-hydrostatic)

As with the front-end interpolation, the back-end interpolation is handled as either linear in pressure, linear in ln pressure, or linear in p^κ. The vertical interpolation on the back end may not be entirely contained within the bounds of valid data, resulting in extrapolation. The non-hydrostatic pressure from the forecast data is given as

P_{\sigma\,ijk} = p^*_{ij} \sigma_k + P_{top} + P'_{ijk}    (11.9)

• P_σijk : 3-D pressure at each (i,j,k) of the σ-level variable
• p*ij : 2-D field of reference surface pressure minus a constant (Ptop)
• σk : 1-D vertical coordinate
• Ptop : reference pressure at model lid
• P'ijk : 3-D pressure perturbation from reference state

Fig. 11.4 For INTERPB jobs, most of the data placed on an isobaric surface is interpolated between the nearest two surrounding σ levels (P_σA, α_σA above and P_σB, α_σB below the target level P, α_P).

For the linear-in-pressure case,

\alpha_P = \frac{\alpha_{\sigma_A} (P_{\sigma_B} - P) + \alpha_{\sigma_B} (P - P_{\sigma_A})}{P_{\sigma_B} - P_{\sigma_A}}    (11.10)
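Equation (11.10) and its two variants differ only in the coordinate used for the linear weighting. The following is a hedged Python sketch of this generalization (my own code, mirroring the three RECORD4 method strings; κ = R/c_p ≈ 0.2857):

```python
import math

KAPPA = 0.2857  # R / c_p for dry air

def interp_to_pressure(alpha_a, p_a, alpha_b, p_b, p, method="linear in p"):
    """Interpolate alpha between sigma levels A (above) and B (below) to
    pressure p, per Eq. 11.10 generalized to the three namelist methods."""
    if method == "linear in p":
        f = lambda x: x
    elif method == "linear in log p":
        f = math.log
    elif method == "linear in p**kappa":
        f = lambda x: x ** KAPPA
    else:
        raise ValueError("unknown method: " + method)
    fa, fb, fp = f(p_a), f(p_b), f(p)
    # Linear weighting in the transformed pressure coordinate
    return (alpha_a * (fb - fp) + alpha_b * (fp - fa)) / (fb - fa)
```

Whatever the method, the formula returns alpha_a exactly when p equals p_a and alpha_b when p equals p_b, since the transform is monotone.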
11.5 Parameter Statements

As with the other post-processing programs, there are no domain-specific FORTRAN PARAMETER statements to edit in INTERPB.
11.6 FORTRAN Namelist Input File

Most of the available options for the INTERPB code are handled through the namelist input file. Since this is a FORTRAN namelist (part of the Fortran 90 standard), the syntax is very strict. There are four standard namelist records for INTERPB (plus an optional hidden record, described below). There are no default values for the standard records; the entire namelist must be correctly modified for each program execution.

Table 11.1: INTERPB namelist values: RECORD0.

Namelist Record | Namelist Variable | Description
RECORD0         | INPUT_FILE        | CHARACTER string, σ-level input file (from MM5, INTERPF, or NESTDOWN), complete with directory structure
Table 11.2: INTERPB namelist values: RECORD1.

Namelist Record | Namelist Variable | Description
RECORD1         | START_YEAR        | starting time, 4 digit INTEGER of the year
RECORD1         | START_MONTH       | starting time, 2 digit INTEGER of the month
RECORD1         | START_DAY         | starting time, 2 digit INTEGER of the day
RECORD1         | START_HOUR        | starting time, 2 digit INTEGER of the hour
RECORD1         | END_YEAR          | ending time, 4 digit INTEGER of the year
RECORD1         | END_MONTH         | ending time, 2 digit INTEGER of the month
RECORD1         | END_DAY           | ending time, 2 digit INTEGER of the day
RECORD1         | END_HOUR          | ending time, 2 digit INTEGER of the hour
RECORD1         | INTERVAL          | time interval in seconds between analysis/forecast periods
Table 11.3: INTERPB namelist values: RECORD2.

Namelist Record | Namelist Variable     | Description
RECORD2         | pressure_bu_no_sfc_Pa | array of REALs, pressures (Pa) from 100000 up to (but not necessarily including) PTOP; the surface pressure is NOT included
Table 11.4: INTERPB namelist values: RECORD3.

Namelist Record | Namelist Variable | Description
RECORD3         | print_info        | LOGICAL: TRUE = send extra printout to standard out; FALSE = suppress the extra printout
The entries in the RECORD4 namelist record (described in the following three tables) are optional. Default values for each variable have been established to remain internally consistent with the interpolation assumptions made throughout the modeling system. This record is not available from the standard namelist.input file in the INTERPB directory; it is located in the ./INTERPB/.hidden/namelist.input file. This record allows the user access to different methods for the vertical interpolation and for the extrapolations performed above or below the model half σ levels.
Table 11.5: INTERPB namelist values: RECORD4.

Namelist Record | Namelist Variable    | Description
RECORD4         | uv_interp_method     | CHARACTER, "linear in p", "linear in log p", or "linear in p**kappa" for u and v
RECORD4         | t_interp_method      | CHARACTER, "linear in p", "linear in log p", or "linear in p**kappa" for temperature
RECORD4         | moist_interp_method  | CHARACTER, "linear in p", "linear in log p", or "linear in p**kappa" for all moisture species
RECORD4         | height_interp_method | CHARACTER, "linear in p", "linear in log p", or "linear in p**kappa" for height
RECORD4         | p_interp_method      | CHARACTER, "linear in p", "linear in log p", or "linear in p**kappa" for pressure
RECORD4         | else_interp_method   | CHARACTER, "linear in p", "linear in log p", or "linear in p**kappa" for everything else
Table 11.6: INTERPB namelist values: RECORD4.

Namelist Record | Namelist Variable | Description
RECORD4         | uv_extrap_up      | CHARACTER, "constant" or "extrapolate" for u and v
RECORD4         | t_extrap_up       | CHARACTER, "constant" or "extrapolate" for temperature
RECORD4         | moist_extrap_up   | CHARACTER, "constant" or "extrapolate" for all moisture species
RECORD4         | height_extrap_up  | CHARACTER, "constant" or "extrapolate" for height
RECORD4         | p_extrap_up       | CHARACTER, "constant" or "extrapolate" for pressure
RECORD4         | else_extrap_up    | CHARACTER, "constant" or "extrapolate" for everything else
Table 11.7: INTERPB namelist values: RECORD4.

Namelist Record | Namelist Variable | Description
RECORD4         | uv_extrap_low     | CHARACTER, "constant" or "extrapolate" for u and v
RECORD4         | t_extrap_low      | CHARACTER, "constant" or "extrapolate" for temperature
RECORD4         | moist_extrap_low  | CHARACTER, "constant" or "extrapolate" for all moisture species
RECORD4         | height_extrap_low | CHARACTER, "constant" or "extrapolate" for height
RECORD4         | p_extrap_low      | CHARACTER, "constant" or "extrapolate" for pressure
RECORD4         | else_extrap_low   | CHARACTER, "constant" or "extrapolate" for everything else
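Putting the required records together, a complete namelist.input might look like the following sketch (the input file path, dates, and pressure list are illustrative only; the optional RECORD4 is omitted since it lives in the hidden file):

```
&RECORD0
 INPUT_FILE = '../MM5/Run/MMOUT_DOMAIN1' /

&RECORD1
 START_YEAR  = 1993
 START_MONTH =   03
 START_DAY   =   13
 START_HOUR  =   00
 END_YEAR    = 1993
 END_MONTH   =   03
 END_DAY     =   14
 END_HOUR    =   00
 INTERVAL    = 21600 /

&RECORD2
 PRESSURE_BU_NO_SFC_Pa = 100000, 92500, 85000, 70000, 50000, 40000,
                         30000, 25000, 20000, 15000, 10000 /

&RECORD3
 PRINT_INFO = .FALSE. /
```

Note that INTERVAL is in seconds (21600 s = 6 h) and that the RECORD2 pressures are in Pa, ordered from 100000 Pa upward, with the surface pressure excluded.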
11.7 How to Run INTERPB

1) Obtain the source code tar file from one of the following places:
   Anonymous ftp: ftp://ftp.ucar.edu/mesouser/MM5V3/INTERPB.TAR.gz
   On NCAR MSS: MESOUSER/MM5V3/INTERPB.TAR.gz
2) gunzip and untar the INTERPB.TAR.gz file.
3) Type 'make' to create an executable for your platform.
   Users may choose instead to run the interpb program with the available job deck on the NCAR IBMs. This file, interpb.deck.ibm, is located on the NCAR IBM blackforest machine at ~mesouser/MM5V3/IBM. As with other job decks for the IBMs, the proper NQS instructions are included at the top, and most of the namelist options are handled through shell variables. Files are read from and written to the NCAR MSS.
4) Edit the namelist.input file to select the run-time options.
5) Run the executable directly by typing 'interpb >& interpb.log'.

INTERPB expects an input file (such as MMOUT_DOMAINn or MMINPUT_DOMAINn, where n is the domain identifier of the input data) to be provided. This is specified in the namelist.input file.
INTERPB outputs the files: MMOUTP_DOMAINn, REGRID_DOMAINn, and FILE_MMOUTP:blah, where n is the domain identifier of the input data and blah is the date string typical of the intermediate format files. The user has no control over the output file naming convention.
11.8 INTERPB didn't Work! What Went Wrong?

• Most of the errors from INTERPB that do not end with a "segmentation fault", "core dump", or "floating point error" are accompanied by a simple print statement. Though the message itself may not contain enough substance to correct the problem, it will lead you to the section of the code that failed, which should provide more diagnostic information. The last statement that INTERPB prints during a controlled failed run is the diagnostic error.

• To see if INTERPB completed successfully, first check to see if the "STOP 99999" statement appears. Also check that INTERPB processed each of the requested times from the namelist.

• When INTERPB runs into an interpolation error that it did not expect (i.e., it is forced to do an extrapolation when none should be required), INTERPB will stop and print out the offending (I,J,K) and pressure values.
11.9 File I/O

The interpolation program has input and output files that are ingested and created during an INTERPB run. The gridded input file and the output files are unformatted FORTRAN I/O (binary, sequential access). One of the input files is a human-readable, namelist-formatted file of run-time options. The following tables list the input and output files.

Table 11.8: INTERPB program input files.

File Name      | Description
namelist.input | namelist file containing run-time options
MMOUT_DOMAINn  | model output file on σ coordinates, where n is the domain identifier
Table 11.9: INTERPB program output files.

File Name                      | Description
MMOUTP_DOMAINn, REGRID_DOMAINn | pressure-level files suitable for input to LITTLE_R, INTERPF and GRAPH, where n is the domain identifier (MMOUTP has all model output 3d fields; REGRID has only the traditional 3d arrays: wind, temperature, moisture, height)
FILE_MMOUTP:blah               | intermediate format file, suitable for input to the REGRIDDER program (as it would come from PREGRID), where blah is the 13-character date string
11.10 INTERPB tar File

The INTERPB.tar file contains the following files and directories:

.hidden/        Special namelist.input file with interpolation and extrapolation options
Makefile        Makefile to create the INTERPB executable
README          General information about the INTERPB directory
namelist.input  Namelist file containing run-time options
src/            INTERPB source code
12: GRAPH

Purpose
Typical GRAPH Jobs
Plotting Table File: g_plots.tbl
Default Option Settings File: g_defaults.nml
Map Options File: g_map.tbl
Plot Color Options File: g_color.tbl
How to Run GRAPH
Available 2-D Horizontal Fields
Available Cross-Section Only Fields
Available 3-D Fields (as 2-D Horizontal or Cross-Section)
Some Hints for Running GRAPH
Sample Graph Plot File
Graph tar File
Script File to Run a Graph Job
An Alternative Plotting Package: RIP
12.1 Purpose

The GRAPH program generates simple diagnostics and plots for some standard meteorological variables. The GRAPH code will process multiple times and vertical levels, computing the same diagnostics for each time and level. It provides simple vertical interpolation capability, cross-section figures, and skew-T plots, and it can overlay two plots. The GRAPH code is written to be used as a batch processor, so all graphical choices are made from tables.

The GRAPH code can process data from TERRAIN, REGRID, little_r and RAWINS, INTERPF, MM5, NESTDOWN, LOWBDY, and INTERPB, but it cannot plot boundary condition data. The GRAPH code does not produce any standard output for use by a subsequent program.

The GRAPH code in the MM5 system is built on the NCAR Graphics library (licensed software: http://ngwww.ucar.edu, although the part of it that has become free is sufficient for all MM5 modeling system programs that require NCAR Graphics). It can be run on IBMs, Crays, workstations, and PCs running Linux where NCAR Graphics is installed. When working on an IBM, a user can run GRAPH in batch or interactive mode. Examples of interactive GRAPH use are shown in Section 12.7.

Note on compiling GRAPH on a PC: when compiling on a PC running Linux using the Portland Group Fortran compiler, a library called libf2c.a may be required. This library is needed because the NCAR Graphics library is compiled with GNU f77, while the GRAPH program requires PGF77 (or PGF90, in order to deal with pointers). This library may or may not be available on your system. If it isn't, you may obtain it from the internet for free.
12.2 Typical GRAPH Jobs

Fig. 12.1 Schematic diagram showing GRAPH accepting data from the outputs of the MM5 modeling system programs TERRAIN, REGRID, little_r/RAWINS, INTERPF/NESTDOWN, and MM5.
12.3 Plotting Table File: g_plots.tbl

This table is used to define the times, levels and fields to be processed and plotted by Graph. An example is shown below:

TIME LEVELS: FROM 1993-03-13_00:00:00 TO 1993-03-14_00:00:00 BY 21600  (A)
PRESSURE LEVEL MANDATORY: FROM SFC TO PTOP                             (B)
PRESSURE LEVEL NON STANDARD: FROM SFC TO PTOP BY 3                     (C)
SIGMA LEVEL: FROM 23 TO KMAX BY 5                                      (D)
TITLE: MM5 Tutorial                                                    (E)
------------------------------------------------------------------------------
PLOT | FIELD | UNITS | CONTOUR  | SMOOTH || OVERLAY | UNITS |CONTOUR |SMOOTH
T/F  |       |       | INTERVAL | PASSES || FIELD   |       |INTERVAL|PASSES
-----|-------|-------|----------|--------||---------|-------|--------|------
T    | TER   | m     | 100      | 0      ||         |       |        |
T    | WIND  | m/s   | 5        | 0      || BARB    | m/s   | 2      | 0
TP500| HEIGHT| m     | 30       | 0      || VOR     |10**5/s| 0      | 0
TI305| PV    | PVU   | 1        | 0      || P       | mb    | 20     | 0
X    | 5     | 5     | 23       | 8      || PSLV    | mb    | 2      | 0
X    | THETA | K     | 3        | 0      || CXW     | m/s   | 10     | 0
X    | 5     | 18    | 23       | 5      || PSLV    | mb    | 2      | 0
X    | THETA | K     | 3        | 0      || CXW     | m/s   | 10     | 0
T    |SKEWTLL|72469 DEN DENVER, CO |39.75 |-104.87 || |     |        |
T    |SKEWTXY|STATION IN (X,Y)     | 19   | 30     || |     |        |
------------------------------------------------------------------------------
Description of Table Header Rows:

(A) TIME — beginning and ending times are given as YYYY-MM-DD_HH:MM:SS, and the time increment is given in seconds by the number after BY. If one doesn't use :SS, the increment should be in minutes, and if one doesn't use :MM, the increment should be in hours. 'BY 0' means to plot every output time.

(B) MANDATORY — used for pressure-level datasets (such as from DATAGRID and Rawins). Will plot every mandatory level between the maximum and minimum levels requested. ALL and NONE may also be used to replace the entire string after the colon.

(C) NON STANDARD — used for pressure-level datasets. Will plot every level between the maximum and minimum levels requested. Optional use of BY n will make plots at every n levels. ALL and NONE may also be used to replace the entire string after the colon.

(D) SIGMA — used for σ-level data. Will plot the levels specified by index (K=1 is at the top of the model). An increment is required, defined by the number after BY. Can also use ALL or NONE.

(E) TITLE — except for a colon (:), any alphanumeric character can be used to make a simple 1-line, 80-character title.
Description of Table Columns:

PLOT T/F — True or False to plot this field. Removing the line from the table has the same effect as F. If the user requests a cross-section plot, the letter is X. If the user requests a plot on a pressure level, the first 2 characters are TP, followed by the pressure value (TP500 is the 500 mb level); if the user requests a plot on an isentropic surface, the first 2 characters are TI, followed by the potential temperature value (TI330 is the 330 K level). The last two options only work with σ data.

FIELD — a field name to be plotted. See the complete lists in Tables 8.1, 8.2 and 8.3. If the field is a skew-T, the interpretation of the following columns is changed (see the explanation below).

UNITS — units used on a plot. For some fields, there are different standard units available. If you don't know the unit, use '?'.

CONTOUR INTERVAL — real or integer value used to contour a plot. If you don't know what contour interval to use, use '0'. For a vector field (e.g. BARB), this value specifies the grid interval. For a streamline field (VESL), this value specifies how sparse or dense the streamlines are drawn.

SMOOTH PASSES — number of passes of the smoother-desmoother used for each horizontal plot.

OVERLAY FIELD — a field name for the overlay plot. May be left blank.
To create a pressure-level plot from sigma-level data:

TP500| HEIGHT| m   | 30 | 0 || VOR | 10**5/s | 0  | 0

To create an isentropic-level plot from sigma-level data:

TI305| PV    | PVU | 1  | 0 || P   | mb      | 20 | 0

To plot a skew-T: If the plot is a skew-T (SKEWTLL or SKEWTXY), the UNITS column is used to define the location name, and the lat/long or X/Y appear in the following two columns. e.g.

T    |SKEWTLL|72469 DEN DENVER, CO |39.75 |-104.87 || | | |

To plot a vertical cross-section: For a cross-section plot, the location is defined by the 4 numbers in the columns following 'X', and they are in the order X1, Y1, X2, and Y2. e.g.,

X    | 5     | 5   | 23 | 8 || PSLV | mb  | 2  | 0
X    | THETA | K   | 3  | 0 || CXW  | m/s | 10 | 0
12.4 Default Option Settings File: g_defaults.nml

This is a namelist file and it is optional. If this file exists in the current working directory when the Graph program starts executing, the file's contents replace the previously set defaults in the Fortran code. Since this is a namelist-structured file, lines may be removed. Comments after ';' are not allowed on most platforms, but are shown here for easy reference only.

&JOEDEF          ; defaults for graph
; ISTART=1,      ; sub-domain plot beginning I location
; JSTART=1,      ; sub-domain plot beginning J location
; IEND=37,       ; sub-domain plot ending I location
; JEND=49,       ; sub-domain plot ending J location
 LW1=2000,       ; line width, 1000 is thinnest
 LW2=2000,       ; line width for overlay plot
 DASH1=-682,     ; dash pattern, standard NCAR GKS method
 DASH2=-682,     ; 4092, 3640, 2730, -682
 COLOR1=12, COLOR4=12,
 COLOR2=9,  COLOR5=9,
 COLOR3=8,  COLOR6=8,
 HDRINFO=F,      ; true=print header and stop
 LOGP=0,         ; cross section: 0=linear in p; 1=linear in ln p
 XPTOP=200.,     ; top of cross section plots (mb)
 LABLINE=1,      ; 0: no contour line labels
 LABMESG=0,      ; 1: no message below conrec plot
 NOZERO=0,       ; 0: allow zero line; 1: no min/max zero line;
                 ; 2: no zero whatsoever
 IHIRES=0,       ; 1: use high resolution US county line/China coastline
&END
Description of variables in the namelist:

ISTART   integer  for a subdomain plot, this is the I-direction starting point
JSTART   integer  for a subdomain plot, this is the J-direction starting point
IEND     integer  for a subdomain plot, this is the I-direction ending point
JEND     integer  for a subdomain plot, this is the J-direction ending point
LW1      integer  line width for the first plot; 1000 is the thinnest
LW2      integer  line width for the overlay plot
DASH1    integer  dash pattern for the first plot; standard NCAR GKS method.
                  A '-' before a number means contours of positive values are
                  solid and negative values are dashed.
                  682: shorter-dashed line; 2730: short-dashed line;
                  3640: medium-dashed line; 4092: long-dashed line
DASH2    integer  dash pattern for the overlay plot
COLOR1   integer  color index for the first contour plot, labeled lines
COLOR2   integer  color index for the overlay plot, labeled lines
COLOR3   integer  color index for a dot-point plot, labeled lines
COLOR4   integer  color index for the first contour plot, unlabeled lines
COLOR5   integer  color index for the overlay plot, unlabeled lines
COLOR6   integer  color index for a dot-point plot, unlabeled lines
HDRINFO  logical  T: will only print the record header
LOGP     integer  for cross-section plots: whether the vertical coordinate is
                  plotted in linear p (LOGP=0) or log p (LOGP=1)
XPTOP    real     top of a cross-section plot (in mb)
LABLINE  integer  =0: no contour line labels
LABMESG  integer  =1: no message below conrec plot
NOZERO   integer  =1: no min/max zero line; =2: no zero line whatsoever
IHIRES   integer  =1: use high-resolution US county/Asia coastline
To use higher-resolution US county lines or Asia coastline, set IHIRES=1, and name the outline file hipone.ascii. These files may be downloaded from the ftp://ftp.ucar.edu/mesouser/Data/GRAPH directory.
12.5 Map Options File: g_map.tbl

This table is used to modify map background specifics for a Graph plot.

------------------------------------------------------------------------------
MAP DETAILS
LL | DASH | INT | LB | LSZ | LQL | P | TTL | TSZ | TQL | OUT | DOT | LW | SP
------------------------------------------------------------------------------
A  | PB   | D   | M  | 12  | 00  | Y | Y   | 8   | 00  | PS  | N   | D  |
------------------------------------------------------------------------------
MAP COLORS
LL LINES | LABELS | TITLE | STATES | COUNTRIES | CONTINENTS | PERIMETER
------------------------------------------------------------------------------
1        | 1      | 1     | 1      | 1         | 1          | 1
------------------------------------------------------------------------------
Description of variables in g_map.tbl: (Text provided by Dr. Mark Stoelinga of the University of Washington.)

LL — lat/lon lines over land only (L), water only (W), none (N), or both land and water (D, A, or E)
DASH — lat/lon lines are dashed: large (L), medium (M), small (SM), tiny (T), solid (SO), publ. style (P), or default (D) [LL.ne.N]
INT — lat/lon grid interval in degrees, or D for default [LL.ne.N]
LB — M for only MAPDRV labels (lat/lon on perimeter), N for none, or D or A for both
LSZ — lat/lon label size, 1 to 25 [LB.ne.N]
LQL — label quality [LB.ne.N]:
  00 - Complex characters / High quality
  01 - Complex characters / Medium quality
  02 - Complex characters / Low quality
  10 - Duplex characters / High quality
  11 - Duplex characters / Medium quality
  12 - Duplex characters / Low quality
  D  - Default = 11
P — draw just a line perimeter (N) or a line perimeter with ticks (Y) [DASH.ne.P.or.LL.eq.N]
TTL — title flag: read the next two title parameters (Y) or skip to the outline parameter (N)
TSZ and TQL — the same as LSZ and LQL except they refer to the title [both TTL.eq.Y]
OUT — determines which geo-political outlines will be drawn:
  NO - no outlines
  CO - continental outlines only
  US - U.S. state outlines only
  PS - Continental + International + State outlines
  PO - Continental + International outlines
DOT — determines whether geo-political boundaries will be dotted (Y) or solid (N) [OUT.ne.NO]
LW — gives the line width, in multiples of the default (which is 1000 "units"); D gives the default line width [OUT.ne.NO.and.DOT.eq.N] (LW=2 would double the line width for geographic boundaries)
SP — gives the dot spacing; default (D) is 12 [OUT.ne.NO.and.DOT.eq.Y]
With each parameter is given a conditional statement. If that conditional statement is not met, then that particular box should be left blank. The most common error that occurs when the routine attempts to read this table is "Too many entries on line", which simply means that the routine expected a box to be blank, but it wasn't.

One can also make color-filled maps. To do so, add the following in the g_map.tbl:

MAP FILL
WATER | SIX COLOR INDICIES WITH WHICH TO COLOR IN THE MAP
------------------------------------------------------------------------
1     | 2 | 2 | 2 | 2 | 2 | 2
------------------------------------------------------------------------

In this example, the water will be colored white, and the land light grey, according to the color table described below.
12.6 Plot Color Options File: g_color.tbl

This table is used to define the color codes referred to in the Graph program.

------------------------------------------------------------------------------
COLOR TABLE
COLOR               | RED  | GREEN | BLUE | NUMBER
------------------------------------------------------------------------------
WHITE               | 1.00 | 1.00  | 1.00 | 1
LIGHT GRAY          | 0.66 | 0.66  | 0.66 | 2
DARK GRAY           | 0.40 | 0.40  | 0.40 | 3
BLACK               | 0.00 | 0.00  | 0.00 | 4
SKY BLUE            | 0.20 | 0.56  | 0.80 | 5
BLUE                | 0.00 | 0.00  | 1.00 | 6
LIGHT YELLOW        | 0.80 | 0.80  | 0.00 | 7
MAGENTA             | 1.00 | 0.00  | 1.00 | 8
YELLOW              | 1.00 | 1.00  | 0.00 | 9
GREEN               | 0.00 | 1.00  | 0.00 | 10
FOREST GREEN        | 0.14 | 0.25  | 0.14 | 11
CYAN                | 0.00 | 1.00  | 1.00 | 12
TAN                 | 0.40 | 0.30  | 0.20 | 13
BROWN               | 0.25 | 0.20  | 0.15 | 14
ORANGE              | 1.00 | 0.50  | 0.00 | 15
RED                 | 1.00 | 0.00  | 0.00 | 16
MID-BLUE            | 0.00 | 0.50  | 1.00 | 17
DULL MID-BLUE       | 0.00 | 0.15  | 0.30 | 18
BRIGHT FOREST GREEN | 0.20 | 0.40  | 0.20 | 19
DULL ORANGE         | 0.60 | 0.30  | 0.00 | 20
------------------------------------------------------------------------------
To make a color contour plot, change the background color from black to white using the following g_color.tbl:

------------------------------------------------------------------------------
COLOR TABLE
COLOR      | RED  | GREEN | BLUE | NUMBER
------------------------------------------------------------------------------
WHITE      | 1.00 | 1.00  | 1.00 | 0
BLACK      | 0.00 | 0.00  | 0.00 | 1
LIGHT GRAY | 0.66 | 0.66  | 0.66 | 2
DARK GRAY  | 0.40 | 0.40  | 0.40 | 3
BLACK      | 0.00 | 0.00  | 0.00 | 4

.... and change the color used for maps in the MAP COLORS section of the g_map.tbl from 1 to a color code other than white for the borders, tick marks, and map background.
12.7 How to Run GRAPH

Obtaining the Graph tar file

To run GRAPH interactively, the first step is to obtain the GRAPH tar file. The GRAPH tar file, GRAPH.TAR.gz, can be obtained from ~mesouser/MM5V3 (or /fs/othrorgs/home0/mesouser/MM5V3) on NCAR's IBM, from /MESOUSER/MM5V3/GRAPH.TAR.gz on the MSS, or from the anonymous ftp site (ftp://ftp.ucar.edu:mesouser/MM5V3). This tar file contains the GRAPH source code and makefiles, as well as the table files required to produce plots.

To get the tar file from the anonymous ftp site:
1) ftp ftp.ucar.edu
2) login as anonymous
3) use your full email address as the password
4) cd mesouser/MM5V3
5) set the transfer to binary (or image), usually this is just "bin"
6) get GRAPH.TAR.gz
7) quit

Or to get the tar file on NCAR's IBM:
cd /ptmp/$USER
msread GRAPH.TAR.gz /MESOUSER/MM5V3/GRAPH.TAR.gz
or
cp ~mesouser/MM5V3/GRAPH.TAR.gz .

Compiling the Graph code

Once you have GRAPH.TAR.gz in the IBM's working directory or on the local workstation, the build process is to gunzip the file, untar it, and make the executable.
1) gunzip GRAPH.TAR.gz
2) tar -xvf GRAPH.TAR
After untarring the file, you should find the GRAPH directory and, among others, the following files in the GRAPH directory:
Makefile
g_color.tbl
g_defaults.nml
g_map.tbl
g_plots.tbl
graph.csh
3) If your dataset dimensions are greater than 200x200x40, you need to edit two files in the src/ directory: scratch.incl and data.incl
4) Type "make", and this will create a graph executable called graph.exe. (If working on NCAR's IBM, a user can simply copy the graph-run-ibm.tar.gz file from
~mesouser/MM5V3/IBM, unzip and untar it. An executable is inside.)
5) Edit the g_plots.tbl and g_defaults.nml (if needed) files.
6) If a user is working on NCAR's IBM, he/she needs to retrieve data from the MSS by typing the following:
msread MMOUT_DOMAIN1[_01 through _99] MSSfilename &
The '&' puts the msread command in the background.

Running the Graph Program

Graph can only process output from one domain at a time. To run Graph, type "graph.csh 1 1 MMOUT_DOMAIN1", or

graph.csh 1 3 MMOUT_DOMAIN1
          |  |  |
          |  |  MM5 V3 format file name, or root (without suffix [_01, _02]) if multiple files
          |  Number of files to process
          Into how many pieces the metacode is to be split upon successful GRAPH completion (using "med")

The graph.csh script tries to figure out what options you have placed on the command line. For example,
a) to run graph with one data file:
graph.csh 1 1 MMOUT_DOMAIN1
b) to run graph with 3 files named MMOUT_DOMAIN1, MMOUT_DOMAIN1_01, MMOUT_DOMAIN1_02:
graph.csh 1 3 MMOUT_DOMAIN1
c) to run graph with 3 files named MMOUT_DOMAIN1_00, MMOUT_DOMAIN1_01, MMOUT_DOMAIN1_02:
graph.csh 1 3 MMOUT_DOMAIN1
d) to run graph with 3 files named MMOUT_DOMAIN1_00, MMOUT_DOMAIN1_01, MMOUT_DOMAIN1_02:
graph.csh 1 3 MMOUT_DOMAIN1*
Viewing the Graphic Output

The plot files generated by Graph are metacode files called 'gmeta' (and gmeta.split1, gmeta.split2, etc. if you choose to split the file), which can be viewed with the NCAR Graphics utility idt, and/or transformed to postscript files using ctrans (also an NCAR Graphics utility). For example, to transform a gmeta file to a postscript file:

ctrans -d ps.mono gmeta > gmeta.ps   (for a black-and-white plot), or
ctrans -d ps.color gmeta > gmeta.ps  (for a color plot)

Or to view the output interactively using an interface:

idt gmeta
12.8 Available 2-D Horizontal Fields

Table 8.1 List of 2-D horizontal fields available for plotting.

Field ID     Description                           Default Units   Optional Units
CORIOLIS     Coriolis parameter                    1/s
ICLW         integrated cloud water                cm              mm, in
IRNW         integrated rain water                 cm              mm, in
LATITDOT     latitude                              degrees
LI           lifted index                          K               C
LNDUS        land use categories                   (no units)
LHFLUX       surface latent heat flux              W/m2
LONGIDOT     longitude                             degrees
LWDOWN       longwave downward radiation           W/m2
MAPFACDT     map scale factor                      (no units)
PBL HGT      PBL height                            m
PRECIPT      total accumulated precipitation       mm              cm, in
PRECIPC      convective accumulated precip         mm              cm, in
PRECIPN      stable accumulated precip             mm              cm, in
PRECIPTT     total precip during time interval     mm              cm, in
PRECIPTN     stable precip during time interval    mm              cm, in
PRECIPTC     convective precip during interval     mm              cm, in
PRH2O        precipitable water                    cm              mm, in
PSLV         sea level pressure                    mb              hPa, Pa, inHg
PSFC         surface pressure                      mb              hPa, Pa, inHg
PTEND        pressure change                       mb              hPa, Pa
RAINT        total accumulated precipitation       mm              cm, in
RAINC        convective accumulated precip         mm              cm, in
RAINN        stable accumulated precip             mm              cm, in
RTENDT       total precip during time interval     mm              cm, in
RTENDC       convective precip during interval     mm              cm, in
RTENDN       stable precip during time interval    mm              cm, in
REGIME       PBL regimes (values 1-4)              category
SHFLUX       surface sensible heat flux            W/m2
SOIL T 1-6   soil temp in 1/2/4/8/16 cm layer      K
SWDOWN       shortwave downward radiation          W/m2
TER          terrain elevation                     m               ft
TGD          ground temperature                    K               C
THK          thickness                             m
TSEASFC      sea surface temperature               K               C
UST          frictional velocity                   m/s

if IPOLAR = 1
SEAICEFR     sea ice fraction                      (no units)

if ISOIL = 2
SOIL T 1-4   soil temp in 10/40/100/200 cm layer   K
SOIL M 1-4   soil moisture in above layers         m3/m3
SOIL W 1-4   soil water in above layers            m3/m3
SFCRNOFF     surface runoff                        mm
UGDRNOFF     underground runoff                    mm
CANOPYM      canopy moisture                       m
SNODPTH      water-equivalent of snow depth        mm
SNOWH        physical snow depth                   m
SEAICE       sea ice flag                          (no units)
ALB          albedo                                fraction
ALBSNOMX     maximum snow albedo                   %

if FRAD >= 2
SWOUT        top outgoing shortwave radiation      W/m2
LWOUT        top outgoing longwave radiation       W/m2

if IBLTYP = 5
T2M/T2       2 m temperature                       K               C, F
TD2M         2 m dewpoint temperature              K               C, F
TDD2M        2 m dewpoint depression               K               C, F
Q2M/Q2       2 m mixing ratio                      kg/kg           g/kg
U10          10 m model u wind component           m/sec           knots
V10          10 m model v wind component           m/sec           knots
WIND10M      10 m wind speed                       m/sec           knots
BARB10M      10 m wind barb                        m/sec           knots
VECT10M      10 m wind vector                      m/sec           knots
VESL10M      10 m streamline

if ISOIL = 3 and IBLTYP = 7
M-O LENG     Monin-Obukhov length                  m
NET RAD      surface net radiation                 W/m2
GRNFLX       ground heat flux                      W/m2
ALBEDO       surface albedo                        fraction
VEGFRG       vegetation coverage                   fraction
LAI          leaf area index                       area/area
RA           aerodynamic resistance                s/m
RS           surface resistance                    s/m
ZNT          roughness length                      m
ISLTYP       soil texture type                     category
12.9 Available Cross-Section Only Fields

Table 8.2 List of cross-section-only fields available for plotting.

Field ID   Description                                   Default Units
AM         absolute momentum                             m/s
AXW        wind speed tangential to the cross-section    m/s
CUV        horizontal wind barb in plane                 m/s
CXW        circulation vectors in cross-section plane    m/s
XXW        wind speed normal to the cross-section        m/s
12.10 Available 3-D Fields (as 2-D Horizontal or Cross-Section)

Table 8.3 List of 3-D fields available for plotting.

Field ID   Description                                  Default Units   Optional Units
AGL        above ground level                           m               cm, Dm
BARB       wind barbs                                   m/s             kt, cm/s
CLB        cloud boundary                               g/kg            kg/kg, mg/kg
CLW        cloud water                                  g/kg            kg/kg, mg/kg
DIV        divergence of horizontal wind                10**5/s         1/s
GRA        graupel                                      g/kg            kg/kg, mg/kg
H          geopotential height                          m
HEIGHT     geopotential height                          m
ICE        cloud ice                                    g/kg            kg/kg, mg/kg
MDIV       moisture divergence                          10**7/s         1/s
MSE        moist static energy                          J/kg
MSS        saturated moist static energy                J/kg
NCI        number concentration of ice                  number/m3
OMG        vertical motion (pressure level data only)   ub/s            mb/s, hPa/s
P          pressure                                     mb              Pa, hPa
PP         pressure perturbation                        mb              Pa, hPa
PV         potential vorticity                          PVU
QDIV       q-vector divergence (p data only)
QV         mixing ratio                                 g/kg            kg/kg
QVEC       q-vectors (p data only)
RDTEND     atmospheric radiative tendency               K/day           K/h
RH         relative humidity                            %
RNW        rain water                                   g/kg            kg/kg, mg/kg
SLW        super-cooled liquid water                    g/kg            kg/kg, mg/kg
SNOW       snow                                         g/kg            kg/kg, mg/kg
T          temperature                                  K               C, F
TD         dew point temperature                        K               C, F
TDD        dew point depression                         K               C, F
THETA      potential temperature                        K               C, F
THETAE     equivalent potential temperature             K               C, F
TKE        turbulent kinetic energy                     J/kg
U          u-component of wind                          m/s             kt, cm/s
V          v-component of wind                          m/s             kt, cm/s
VAB        absolute vorticity                           10**5/s         1/s
VECT       horizontal wind vectors                      m/s             kt, cm/s
VESL       horizontal wind streamlines                  m/s
VOR        relative vorticity                           10**5/s         1/s
W          w-component of wind                          m/s             kt, cm/s
WIND       wind speed                                   m/s             kt
12.11 Some Hints for Running GRAPH

• Make sure the following line is included in your .cshrc file on NCAR's IBM or your local computer:
  setenv NCARG_ROOT /usr/local   or   setenv NCARG_ROOT /usr/local/ncarg
• NCAR Graphics has recently been upgraded to include better country/political boundaries. This is especially true over Europe. GRAPH's default is to use NCAR Graphics version 4.1, but GRAPH can also run with older/newer versions.
  - If you have an older version, remove the "-DNCARG41" directive from the Makefile.
  - If you have NCAR Graphics version 4.2, change the "-DNCARG41" directive in the Makefile to "-DNCARG42".
• The GRAPH program uses the information in the record header to define the size and location of the data. This limits the "wrong" data that the user can provide to the requested fields and levels to be plotted.
• GRAPH prints out information to allow you to track the program's status. It should inform the user that it is processing each of the requested time periods, and for each of those, each of the requested variables and levels.
• If the GRAPH program is not processing a time that you have requested, and it should be based upon the intervals that you have set, ask GRAPH to plot every time by setting the time increment to 0.
• GRAPH only vertically interpolates data that is on a σ coordinate.
• Contour intervals for precipitation are fixed for the RAIN and RTEND fields and user-modifiable for the PRECIP fields.
• Do not request a subdomain and also process soundings. The GRAPH program may not place the sounding at the correct location for the large domain.
• Errors related to the NAMELIST or other temporary files are common when porting GRAPH to a different architecture. Use the NAMELIST format given in the architecture's FORTRAN manual, and make sure to remove all temporary files and links prior to each initiation of the GRAPH C-shell.
• When GRAPH is compiled on a different architecture, the length of the records for the direct-access files must be specified in bytes (4 or 8 per word) or words. This information is found in the include file word_length.incl.
• If Graph stops with the message NEED MORE DATA SPACE, you need to increase the dimensions in the data.incl and scratch.incl files.
• An error message related to 'Direct Access Files' is usually an indication that the dimensions specified in the data.incl and scratch.incl files are not large enough, that the available memory is not large enough, or that the word length is not correct for your particular computer architecture.
12.12 Sample Graph Plot File

For some horizontal plots, please refer to Chapter 15, pages 15-9 and 15-10.

Figure 12.1 A NW-SE cross section of potential temperature (unit K and contour interval 4 K) and 2-D in-plane circulation.

Figure 12.2 A skew-T plot from a 24-h simulation at Albany, New York.
12.13 Graph tar file

The graph.tar file contains the following files and directories:

CHANGES         Description of changes to the Graph program
Diff/           Will contain difference files between consecutive releases
Makefile        Makefile to create Graph executable
README          General information about the Graph directory and how to run Graph
Templates/      Job deck directory: batch deck for Cray only
g_color.tbl     Color table for Graph job
g_defaults.nml  NAMELIST file for Graph job
g_map.tbl       Map table for Graph job
g_plots.tbl     Table for selecting plot variables
graph.csh       C-shell script to run Graph interactively
src/            Graph source code and low-level Makefile
12.14 Script file to run Graph job

#!/bin/csh -f
#
#	this is INTERACTIVE or BATCH
#
if ( $?ENVIRONMENT ) then
   echo "environment variable defined as $ENVIRONMENT"
else
   setenv ENVIRONMENT INTERACTIVE
   echo "environment variable defined as $ENVIRONMENT"
endif
#
#	initializations, no user modification required
#
set FILE_EXT = ( 00 01 02 03 04 05 06 07 08 09 \
                 10 11 12 13 14 15 16 17 18 19 \
                 20 21 22 23 24 25 26 27 28 29 \
                 30 31 32 33 34 35 36 37 38 39 \
                 40 41 42 43 44 45 46 47 48 49 \
                 50 51 52 53 54 55 56 57 58 59 \
                 60 61 62 63 64 65 66 67 68 69 )
#
#	is it color
#
# set Color = BW
set Color = CO
#
#	If this is an HP-UX machine, the generic name for fortran
#	files starts with "ftn" not "fort."
#
model >& /dev/null
set OK = $status
if ( $OK == 0 ) then
   set ForUnit = ftn
#  echo "This is an HP-UX"
else
   set ForUnit = fort.
#  echo "This is not an HP-UX"
endif
if ( -e numsplit.tbl ) rm numsplit.tbl
if ( -e med.input    ) rm med.input
if ( ( `uname` == AIX ) || ( `uname` == SunOS ) || ( `uname` == HP-UX ) ) then
   if ( -e ${ForUnit}20 ) rm ${ForUnit}2*
   if ( -e ${ForUnit}30 ) rm ${ForUnit}3*
   if ( -e ${ForUnit}40 ) rm ${ForUnit}4*
   if ( -e ${ForUnit}50 ) rm ${ForUnit}5*
   if ( -e ${ForUnit}60 ) rm ${ForUnit}6*
   if ( -e ${ForUnit}70 ) rm ${ForUnit}7*
   if ( -e ${ForUnit}80 ) rm ${ForUnit}8*
else
   if ( ( -e ${ForUnit}20 ) || ( -l ${ForUnit}20 ) ) rm ${ForUnit}2*
   if ( ( -e ${ForUnit}30 ) || ( -l ${ForUnit}30 ) ) rm ${ForUnit}3*
   if ( ( -e ${ForUnit}40 ) || ( -l ${ForUnit}40 ) ) rm ${ForUnit}4*
   if ( ( -e ${ForUnit}50 ) || ( -l ${ForUnit}50 ) ) rm ${ForUnit}5*
   if ( ( -e ${ForUnit}60 ) || ( -l ${ForUnit}60 ) ) rm ${ForUnit}6*
   if ( ( -e ${ForUnit}70 ) || ( -l ${ForUnit}70 ) ) rm ${ForUnit}7*
   if ( ( -e ${ForUnit}80 ) || ( -l ${ForUnit}80 ) ) rm ${ForUnit}8*
endif
#
#	simple error check on call
#
if (( $#argv == 0 ) && ( $ENVIRONMENT == INTERACTIVE )) then
   echo -n "into how many pieces is the metafile to be split (1) "
   set NumSplit = "$<"
   echo -n "how many input files are there (1) "
   set TotFiles = "$<"
   echo -n "what is the name of the file (MMOUT_DOMAIN1) "
   set FileName = "$<"
else if ( $#argv < 3 ) then
   echo "graph.deck: error in call"
   echo "usage: graph.deck ns nf filename [filename2 filename3 ...]"
   echo "       where ns is the number of files into which metacode is split"
   echo "       where nf is the number of input files"
   echo "       where filename is either the root name, or several names"
   exit (1)
else if ( $#argv == 3 ) then
   set NumSplit = $1
   set TotFiles = $2
   set FileName = $3
else if ( $#argv > 3 ) then
   set NumSplit = $1
   set TotFiles = $2
endif
#
#	consistency checks on input
#
if (( $NumSplit < 1 ) || ( $NumSplit > 99 )) then
   set NumSplit = 1
endif
cat >! numsplit.tbl << EOF
$NumSplit
EOF
if (( $TotFiles < 1 ) || ( $TotFiles > 69 )) then
   set TotFiles = 1
endif
if ( $#argv == 0 ) then
   if (( ! -e $FileName ) && ( ! -e ${FileName}_00 )) then
      echo "file $FileName does not exist"
      exit (2)
   endif
else if ( $#argv >= 3 ) then
   if (( ! -e $argv[3] ) && ( ! -e $argv[3]_00 ) ) then
      echo "file $argv[3] does not exist"
      exit (3)
   endif
endif
if ( ! -e graph.exe ) ln -s src/graph.exe graph.exe
chmod +x graph.exe
#
#	make sure we have all the required tables
#
if ( -e g_map.tbl ) then
   echo "using local copy of g_map.tbl"
else
   echo "need a copy of g_map.tbl"
   exit (4)
endif
if ( -e g_color.tbl ) then
   echo "using local copy of g_color.tbl"
else
   echo "need a copy of g_color.tbl"
   exit (5)
endif
if ( -e g_plots.tbl ) then
   echo "using local copy of g_plots.tbl"
else
   echo "need a copy of g_plots.tbl"
   exit (6)
endif
if ( -e g_defaults.nml ) then
   echo "using local copy of g_defaults.nml"
else
   echo "need a copy of g_defaults.nml"
   exit (7)
endif
if ( ( `uname` == AIX ) || ( `uname` == SunOS ) || ( `uname` == HP-UX ) ) then
   if ( -e ${ForUnit}18 ) rm ${ForUnit}18
else
   if ( ( -e ${ForUnit}18 ) || ( -l ${ForUnit}18 ) ) rm ${ForUnit}18
endif
ln -s g_plots.tbl ${ForUnit}18
if ( -e .assign ) rm .assign
if ((( $TotFiles == 1 ) && ( $#argv == 3 )) || \
    (( $TotFiles >  1 ) && ( $#argv >  3 ))) then
   shift
   shift
   set NUMFIL = 1
   while ( $#argv )
      @ UNIT = 19 + $NUMFIL
      ln -s $argv[1] ${ForUnit}$UNIT
#     assign -a $argv[1] -Ff77 -Nieee ${ForUnit}$UNIT
      shift
      @ NUMFIL ++
   end
else if ( (( $TotFiles > 1 ) && ( $#argv == 3 )) || ( $#argv == 0 ) ) then
   if ( ( `uname` == AIX ) || ( `uname` == SunOS ) || ( `uname` == HP-UX ) ) then
      set AIX = true
   else
      set AIX = false
   endif
   set NUMFIL = 1
   while ( $NUMFIL <= $TotFiles )
      @ UNIT = 19 + $NUMFIL
      if ( $AIX == true ) then
         if ( ( $NUMFIL == 1 ) && ( -e $FileName ) ) then
            ln -s $FileName ${ForUnit}$UNIT
#           assign -a $FileName -Ff77 -Nieee ${ForUnit}$UNIT
            @ NUMFIL ++
         else if ( ( $NUMFIL == 1 ) && ( -e ${FileName}_00 ) ) then
            ln -s ${FileName}_$FILE_EXT[${NUMFIL}] ${ForUnit}$UNIT
#           assign -a ${FileName}_$FILE_EXT[${NUMFIL}] -Ff77 -Nieee ${ForUnit}$UNIT
            @ NUMFIL ++
         else
            ln -s ${FileName}_$FILE_EXT[${NUMFIL}] ${ForUnit}$UNIT
#           assign -a ${FileName}_$FILE_EXT[${NUMFIL}] -Ff77 -Nieee ${ForUnit}$UNIT
            @ NUMFIL ++
         endif
      else if ( $AIX == false ) then
         if ( ( $NUMFIL == 1 ) && ( ( -e $FileName ) || ( -l $FileName ) ) ) then
            ln -s $FileName ${ForUnit}$UNIT
#           assign -a $FileName -Ff77 -Nieee ${ForUnit}$UNIT
            @ NUMFIL ++
         else if ( ( $NUMFIL == 1 ) && ( ( -e ${FileName}_00 ) || ( -l ${FileName}_00 ) ) ) then
            ln -s ${FileName}_$FILE_EXT[${NUMFIL}] ${ForUnit}$UNIT
#           assign -a ${FileName}_$FILE_EXT[${NUMFIL}] -Ff77 -Nieee ${ForUnit}$UNIT
            @ NUMFIL ++
         else
            ln -s ${FileName}_$FILE_EXT[${NUMFIL}] ${ForUnit}$UNIT
#           assign -a ${FileName}_$FILE_EXT[${NUMFIL}] -Ff77 -Nieee ${ForUnit}$UNIT
            @ NUMFIL ++
         endif
      endif
   end
endif
#
#	run graph program
#
graph.exe
#
#	split metacode apart
#
if ( $NumSplit != 1 ) then
   cat med.input
   med -f med.input
else if ( $NumSplit == 1 ) then
#  cat med.input
#  cp gmeta gmeta.split1
endif
rm med.input numsplit.tbl
rm tmp.*
echo "GRAPH run complete"
12.15 An Alternative Plotting Package: RIP

RIP (which stands for Read, Interpolate and Plot) is a graphics program developed by Dr. Mark Stoelinga of the University of Washington. It is a popular alternative to GRAPH, especially when it comes to color plots, and is also based on NCAR Graphics. A description of this package is available in Appendix G.
13 I/O FORMAT
Introduction 13-3
Version 3 File Format 13-3
Big Header 13-5
Sub Header 13-5
Special Header Location 13-6
Output Units 13-6
Explanation of Output Field List 13-7
Big Header Record for TERRAIN Output 13-7
Terrain Output Fields (with LSM option) 13-8
Big Header Record for REGRID output 13-8
REGRID Output Fields (with LSM option) 13-9
Big Header Record for little_r/RAWINS Output 13-9
little_r/RAWINS Output Fields (with LSM option) 13-10
Big Header Record for little_r/RAWINS Surface FDDA Output 13-10
little_r/RAWINS Surface FDDA Output Fields 13-11
Big Header Record for INTERPF Output 13-11
INTERPF Output Fields (with LSM option) 13-11
Big Header Record for LOWBDY Output 13-12
LOWBDY Output Fields 13-13
Big Header Record for BDYOUT Output 13-13
BDYOUT Output Fields 13-13
Big Header Record for MM5 Output 13-14
MM5 Output Fields 13-17
Big Header Record for Interpolated, Pressure-level MM5 Output 13-18
Interpolated MM5 Output Fields 13-19
Special Data Format in MM5 Modeling System 13-20
Data format for observational nudging 13-20
Description of observational nudging variables 13-20
Data format for surface observations file 13-20
Description of surface observation variables 13-21
Data format for upper-air observations file 13-21
Description of upper-air observation variables 13-21
Data format for raw upperair observations file 13-22
Description of raw upperair observation variables 13-22
13 I/O FORMAT
13.1 Introduction

The Version 1 header has been in use since 1994. Since then, several ways to improve on it became apparent, and these are put into effect with Version 3. First, Version 1 files are unnecessarily long: before every time period there are about 3.5 Mbytes of header record, only a small fraction of which is used. Version 1 was also a first effort at a self-describing format, in that the header contains information about all the fields in the file. However, it was a little awkward to use because the information on each field is buried in the header. When adding a new field, the location and order in the header had to be kept consistent, as well as updating the total number of fields, so the files were difficult to manipulate. Version 3 improves in both respects: the files are shorter, and it becomes easier to add or retrieve selected fields. The length of the header is reduced. The header still has 20 sections (the second index, which indicates the program name), but now only 50 integers and 20 reals, together with their 80-character descriptions, are in each section. This makes the header size a little over 100 kbytes. Moreover, there is only one header, at the beginning of the file, but there is enough generality in the format to allow more headers at other times, such as at the beginning of a restart run or when a nest moves. The header still contains information about the preprocessor options, and domain characteristics and location. However, it no longer contains 1-dimensional fields, such as sigma or pressure levels, nor information about what is in the rest of the file. Version 3 introduces the concept of a sub-header: a 1-record description directly ahead of each field. This description includes information on the name, dimensionality, index order, index range, size and time of the following field.
Flags in the file indicate whether to read a "big header", or a sub-header and field, or whether it is the end of a time period. It is thus easy to insert a field, as long as it is accompanied by the relevant flag and sub-header. It is also easy to search for a given field, by reading sub-headers until a match is found and then reading the following field.
13.2 Version 3 File Format

An MM5 Version 3 modeling system output file contains the following records:

(first time period)
big header flag (integer value of 0)
big header
sub-header flag (integer value of 1)
sub-header
field
sub-header flag (integer value of 1)
sub-header
field
sub-header flag (integer value of 1)
sub-header
field
....
....
end-of-time-period flag (integer value of 2)
(second time period)
sub-header flag (integer value of 1)
sub-header
field
sub-header flag (integer value of 1)
sub-header
field
sub-header flag (integer value of 1)
sub-header
field
....
....
end-of-time-period flag (integer value of 2)
(and so on ....)

No particular order of fields is assumed, other than that they are chronologically grouped. When reading files in the modeling system, each field has to be read and matched to an expected 8-character name before being assigned to a variable in the program. Note that 1D, 2D, and 3D fields could be mixed, but that the sub-header gives enough information to assign an appropriate array to the read statement. Thus, a simple read program would look like:

10    continue
      read (input_unit, end=900) flag
      if (flag.eq.0) then
         read (input_unit) big header
         go to 10
      else if (flag.eq.1) then
         read (input_unit) sub header
         read (input_unit) field
         go to 10
      else if (flag.eq.2) then
         print *, 'end of time period'
         go to 10
      end if
900   continue
In Version 3, there are two boundary condition files: one contains only the lateral boundary condition arrays, and the other is the lower boundary condition file, which contains fields such as substrate temperature and SST. Both files have the same file structure as the rest of the modeling system output files.
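The flag-driven record structure above can be sketched in Python. This is an illustrative toy, not MM5 code: it assumes big-endian Fortran unformatted sequential records (a 4-byte length marker before and after each record), and the "header" and "field" payloads here are stand-ins rather than the real layouts:

```python
import struct

def write_rec(f, payload):
    # Fortran unformatted sequential record: length, payload, length
    marker = struct.pack(">i", len(payload))
    f.write(marker + payload + marker)

def read_rec(f):
    head = f.read(4)
    if len(head) < 4:
        return None                      # end of file
    n = struct.unpack(">i", head)[0]
    payload = f.read(n)
    f.read(4)                            # skip trailing length marker
    return payload

# Build a toy file: big header, one sub-header/field pair, end flag.
with open("toy_v3.bin", "wb") as f:
    write_rec(f, struct.pack(">i", 0))       # big-header flag
    write_rec(f, b"BIG HEADER")              # stand-in big header
    write_rec(f, struct.pack(">i", 1))       # sub-header flag
    write_rec(f, b"SUB HEADER: T")           # stand-in sub-header
    write_rec(f, struct.pack(">f", 258.47))  # stand-in field
    write_rec(f, struct.pack(">i", 2))       # end-of-time-period flag

# Scan the file by flags, mirroring the Fortran loop above.
seen = []
with open("toy_v3.bin", "rb") as f:
    while (rec := read_rec(f)) is not None:
        flag = struct.unpack(">i", rec)[0]
        if flag == 0:
            read_rec(f)                  # the big header record
            seen.append("big header")
        elif flag == 1:
            read_rec(f)                  # the sub-header record
            read_rec(f)                  # the field record
            seen.append("field")
        elif flag == 2:
            seen.append("end of time period")
print(seen)
```

The same skip-by-record-length trick is what makes it cheap to search a V3 file for one field without decoding every record.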
13.2.1 Big Header

The big header has four 2-D arrays, similar to those in the V1/V2 system, which we refer to in the V3 modeling system programs as BHI, BHR, BHIC, and BHRC. The dimensions of these arrays are BHI(50,20), BHR(20,20), BHIC(50,20), BHRC(20,20), where BHI is an integer array and BHIC is the companion array that contains the description of what is in BHI. Similarly, BHR is a real array, and BHRC contains the description of what is in BHR. The first value in the header, BHI(1,1), still represents data types, but there are some changes, as shown below:

BHI(1,1)   Data Type
1          Terrain
2          Regrid
3          Little_R / Rawins
4          Rawins' surface analysis
5          Model initial condition file
6          Model lower boundary condition file
7          Model lateral boundary condition file
8          Interpolated model output on pressure levels
11         Model output

MM5 model output actually occupies header locations 11 through 16.
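For quick reference when inspecting headers, the codes above can be held in a small lookup table; this dictionary is an illustrative sketch, not part of the MM5 distribution:

```python
# BHI(1,1) data-type codes, transcribed from the table above
V3_DATA_TYPES = {
    1: "Terrain",
    2: "Regrid",
    3: "Little_R / Rawins",
    4: "Rawins' surface analysis",
    5: "Model initial condition file",
    6: "Model lower boundary condition file",
    7: "Model lateral boundary condition file",
    8: "Interpolated model output on pressure levels",
    11: "Model output",
}

def describe_dataset(bhi_1_1):
    # Return the dataset type for a BHI(1,1) value, or "unknown"
    return V3_DATA_TYPES.get(bhi_1_1, "unknown")

print(describe_dataset(6))
```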
13.2.2 Sub Header

A sub-header contains the following information:

ndim, start_index(4), end_index(4), xtime, staggering, ordering, current_date, name, units, description

where

ndim:          integer     dimension of the field
start_index:   integer(4)  starting indices of the field array (generally 1's)
end_index:     integer(4)  ending indices of the field array (generally IX, JX, KX, and 1;
                           the fourth dimension is not yet used)
xtime:         real        the integration or forecast time for this field (unit in minutes)
staggering:    char*4      whether the field is at dot or cross point (character C or D)
ordering:      char*4      the order of the field array dimensions (4-character string with
                           the following values):
                           YXP: 3-D field, pressure data dimensioned by (IX,JX,KXP)
                           YXS: 3-D field, sigma data dimensioned by (IX,JX,KXS)
                           YXW: 3-D field, sigma data dimensioned by (IX,JX,KXS+1)
                                (e.g. vertical motion in MM5)
                           YX:  2-D field, with array dimensioned by (IX,JX) with IX in
                                the Y direction
                           CA:  2-D field, with array dimensioned by (landuse-categories,2);
                                arrays storing land property values, such as albedo,
                                roughness length, etc.
                           XSB: 3-D field, containing north and south boundary arrays,
                                dimensioned by (JX,KXS,5)
                           YSB: 3-D field, containing west and east boundary arrays,
                                dimensioned by (IX,KXS,5)
                           XWB: 3-D field, containing north and south boundary arrays for
                                vertical motion, dimensioned by (JX,KXS+1,5)
                           YWB: 3-D field, containing west and east boundary arrays for
                                vertical motion, dimensioned by (IX,KXS+1,5)
                           P:   1-D field, pressure level array
                           S:   1-D field, sigma level array
current_date:  char*24     24-character representation of the date valid for this field
name:          char*9      8-character field name (kept the same as in the Version 1/2 system)
units:         char*25     25-character unit description
description:   char*46     field description (kept mostly the same as in the Version 1/2 system)
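The fixed-width sub-header layout lends itself to a struct-style sketch. The byte-level encoding below (big-endian, 4-byte integers and reals, space-padded character fields) is an assumption made for illustration only; the real record is written by Fortran and carries its own record-length markers:

```python
import struct

# One hypothetical flat encoding of the sub-header tuple:
# ndim, start_index(4), end_index(4), xtime, staggering, ordering,
# current_date, name, units, description
SUBHDR = struct.Struct(">i4i4if4s4s24s9s25s46s")

packed = SUBHDR.pack(
    3,                        # ndim
    1, 1, 1, 1,               # start_index (generally 1's)
    35, 41, 23, 1,            # end_index (IX, JX, KX, and 1)
    0.0,                      # xtime (minutes)
    b"C   ",                  # staggering: cross point
    b"YXS ",                  # ordering: 3-D sigma-level field
    b"1993-03-13_00:00:00.0000",        # current_date (24 chars)
    b"T".ljust(9),            # field name
    b"K".ljust(25),           # units
    b"Temperature".ljust(46)) # description

fields = SUBHDR.unpack(packed)
print(fields[0], fields[5:9])   # ndim and end_index
```

With this layout the packed record is 152 bytes, which is why a flag-plus-sub-header scan of a file is so much cheaper than re-reading a multi-megabyte V1/V2 header.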
13.2.3 Special Header Location

There are a few locations in the big header that contain special information about the data (where X is the number indicating the program name):

BHI(1,1):     Program name for this dataset
BHI(2,X):     Version 3 MM5 system format edition number
BHI(3,X):     Program version number
BHI(4,X):     Program minor revision number
BHI(5-10,X):  Beginning time for this dataset
BHI(12,X):    Number of vertical levels in this dataset
BHR(1,X):     Time interval in the dataset
13.2.4 Output Units

The units used in the modeling system output are mostly MKS. For example:

Pressure:   Pascal
Distance:   m

Sigma-level data in V3 are no longer coupled (e.g. multiplied by p*).
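For readers used to V1/V2 files, the coupling difference can be shown with made-up numbers (both p* and the temperature below are hypothetical, chosen only to illustrate the relation):

```python
pstar = 85.0           # hypothetical p* (surface reference pressure minus ptop)
t_stored_v3 = 258.47   # K: V3 stores the field directly (decoupled)

# V1/V2 stored the coupled field, p* times T; to compare a V1/V2
# sigma-level field with V3, divide out p*.
t_stored_v12 = t_stored_v3 * pstar
t_recovered = t_stored_v12 / pstar
print(round(t_recovered, 2))
```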
13.3 Explanation of Output Field List

In the following pages, you will see a complete list of the big record header in each MM5 dataset, and a list of the output fields from the dataset. The output field list prints the following: field name; dimension of the field (either 1, 2 or 3); first (y), second (x), and third (pressure or sigma) dimensions of the field; fourth dimension of the field (1; not used); whether the field is a cross (C) or dot (D) point field; how the field is ordered (YXS, for example); a value from the field (in the middle of the domain); and the unit for the field. For example:

T        3  35  41  23   1 C YXS : 258.47454834 K

reads: field ID (T), dimension of the field (3), the 1st, 2nd, 3rd and 4th dimensions of the field (35, 41, 23, 1), cross or dot point field (C), field order (YXS), field value (258.47454834), and field unit (K).
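A line in this format is easy to take apart programmatically; the following Python sketch (a hypothetical helper, not part of the MM5 tools) splits one field-list line into its parts:

```python
def parse_field_line(line):
    # Split "NAME ndim d1 d2 d3 d4 C/D ORDER : value units"
    head, tail = line.split(":", 1)
    parts = head.split()
    # The field name may itself contain a space (e.g. "LAND USE"),
    # so take the seven fixed columns from the right.
    name = " ".join(parts[:-7])
    ndim, d1, d2, d3, d4 = (int(p) for p in parts[-7:-2])
    stagger, ordering = parts[-2], parts[-1]
    value, units = tail.split(None, 1)
    return dict(name=name, ndim=ndim, dims=(d1, d2, d3, d4),
                stagger=stagger, ordering=ordering,
                value=float(value), units=units.strip())

rec = parse_field_line("T        3  35  41  23   1 C YXS : 258.47454834 K")
print(rec["name"], rec["dims"], rec["units"])
```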
13.4 Big Header Record for TERRAIN Output

TERRAIN Portion of big header:

***Integers:
BHI(  1, 1):     1 : PROGRAM NAME: TERRAIN
BHI(  2, 1):     1 : TERRAIN VERSION 3 MM5 SYSTEM FORMAT EDITION NUMBER
BHI(  3, 1):     6 : TERRAIN PROGRAM VERSION NUMBER
BHI(  4, 1):     1 : TERRAIN PROGRAM MINOR REVISION NUMBER
BHI(  5, 1):    35 : COARSE DOMAIN GRID DIMENSION IN I (N-S) DIRECTION
BHI(  6, 1):    41 : COARSE DOMAIN GRID DIMENSION IN J (E-W) DIRECTION
BHI(  7, 1):     1 : MAP PROJECTION. 1: LAMBERT CONFORMAL, 2: POLAR STEREOGRAPHIC, 3: MERCATOR
BHI(  8, 1):     0 : IS COARSE DOMAIN EXPANDED?, 1: YES, 0: NO
BHI(  9, 1):    35 : EXPANDED COARSE DOMAIN GRID DIMENSION IN I DIRECTION
BHI( 10, 1):    41 : EXPANDED COARSE DOMAIN GRID DIMENSION IN J DIRECTION
BHI( 11, 1):     0 : GRID OFFSET IN I DIR DUE TO COARSE GRID EXPANSION
BHI( 12, 1):     0 : GRID OFFSET IN J DIR DUE TO COARSE GRID EXPANSION
BHI( 13, 1):     1 : DOMAIN ID
BHI( 14, 1):     1 : MOTHER DOMAIN ID
BHI( 15, 1):     0 : NEST LEVEL (0: COARSE MESH)
BHI( 16, 1):    35 : DOMAIN GRID DIMENSION IN I DIRECTION
BHI( 17, 1):    41 : DOMAIN GRID DIMENSION IN J DIRECTION
BHI( 18, 1):     1 : I LOCATION IN THE MOTHER DOMAIN OF THE DOMAIN POINT (1,1)
BHI( 19, 1):     1 : J LOCATION IN THE MOTHER DOMAIN OF THE DOMAIN POINT (1,1)
BHI( 20, 1):     1 : DOMAIN GRID SIZE RATIO WITH RESPECT TO COARSE DOMAIN
BHI( 21, 1):     1 : DOMAIN GRID SIZE RATIO WITH RESPECT TO MOTHER DOMAIN
BHI( 22, 1):     2 : SMOOTHER (1: 1-2-1, 2: SMOOTHER-DESMOOTHER)
BHI( 23, 1):    16 : USGS 25-CATEGORY LAND USE: WATER CATEGORY
BHI( 24, 1):     1 : IS THIS DOMAIN A ONE-WAY OR TWO-WAY NEST? 1: 1-WAY, 2: 2-WAY. 1 FOR DOMAIN 1

***Floats:
BHR(  1, 1):  90000.00 : COARSE DOMAIN GRID DISTANCE (m)
BHR(  2, 1):     36.00 : COARSE DOMAIN CENTER LATITUDE (degree)
BHR(  3, 1):    -85.00 : COARSE DOMAIN CENTER LONGITUDE (degree)
BHR(  4, 1):      0.72 : CONE FACTOR
BHR(  5, 1):     60.00 : TRUE LATITUDE 1 (degree)
BHR(  6, 1):     30.00 : TRUE LATITUDE 2 (degree)
BHR(  7, 1):     90.00 : POLE POSITION IN DEGREE LATITUDE
BHR(  8, 1): 360000.00 : APPROX EXPANSION (m)
BHR(  9, 1):  90000.00 : GRID DISTANCE (m) OF THIS DOMAIN
BHR( 10, 1):      1.00 : I LOCATION IN THE COARSE DOMAIN OF THE DOMAIN POINT (1,1)
BHR( 11, 1):      1.00 : J LOCATION IN THE COARSE DOMAIN OF THE DOMAIN POINT (1,1)
BHR( 12, 1):     35.00 : I LOCATION IN THE COARSE DOMAIN OF THE DOMAIN POINT (IX,JX)
BHR( 13, 1):     41.00 : J LOCATION IN THE COARSE DOMAIN OF THE DOMAIN POINT (IX,JX)
BHR( 14, 1):      0.50 : TERRAIN DATA RESOLUTION (in degree)
BHR( 15, 1):      0.50 : LANDUSE DATA RESOLUTION (in degree)
13.4.1 Terrain Output Fields (with LSM option)

0000-00-00_00:00:00                            0.00000 Hours
TERRAIN    2  35  41   1   1  C  YX :    475.45861816  m
LAND USE   2  35  41   1   1  C  YX :     11.00000000  category
VEGFRC01   2  35  41   1   1  C  YX :     39.89308548  %
VEGFRC02   2  35  41   1   1  C  YX :     43.45289612  %
VEGFRC03   2  35  41   1   1  C  YX :     50.00000000  %
VEGFRC04   2  35  41   1   1  C  YX :     68.57250977  %
VEGFRC05   2  35  41   1   1  C  YX :    100.00000000  %
VEGFRC06   2  35  41   1   1  C  YX :     98.89189148  %
VEGFRC07   2  35  41   1   1  C  YX :     99.99304199  %
VEGFRC08   2  35  41   1   1  C  YX :     94.87342834  %
VEGFRC09   2  35  41   1   1  C  YX :     94.89308167  %
VEGFRC10   2  35  41   1   1  C  YX :     77.98728943  %
VEGFRC11   2  35  41   1   1  C  YX :     48.47255707  %
VEGFRC12   2  35  41   1   1  C  YX :     40.46560287  %
TEMPGRD    2  35  41   1   1  C  YX :    285.15682983  K
LANDMASK   2  35  41   1   1  C  YX :      1.00000000  category
SOILINDX   2  35  41   1   1  C  YX :      6.00000000  category
LATITCRS   2  35  41   1   1  C  YX :     35.58544922  degree
LONGICRS   2  35  41   1   1  C  YX :    -85.50784302  degree
MAPFACCR   2  35  41   1   1  C  YX :      0.98003817  dimensionless
LATITDOT   2  35  41   1   1  D  YX :     35.16879654  degree
LONGIDOT   2  35  41   1   1  D  YX :    -86.00925446  degree
MAPFACDT   2  35  41   1   1  D  YX :      0.98123306  dimensionless
CORIOLIS   2  35  41   1   1  D  YX :      0.00008400  1/s
13.5 Big Header Record for REGRID output

REGRID Portion of big header:

***Integers:
BHI(  2, 2):     1 : REGRID Version 3 MM5 System Format Edition Number
BHI(  3, 2):     6 : REGRID Program Version Number
BHI(  4, 2):     1 : REGRID Program Minor Revision Number
BHI(  5, 2):  1993 : Four digit year of the start time
BHI(  6, 2):     3 : Two digit month (01 through 12) of the start time
BHI(  7, 2):    13 : Two digit day (01 through 31) of the start time
BHI(  8, 2):     0 : Two digit hour (00 through 23) of the start time
BHI(  9, 2):     0 : Two digit minute (00 through 59) of the start time
BHI( 10, 2):     0 : Two digit second (00 through 59) of the start time
BHI( 11, 2):     0 : Four digit ten thousandth of a second (0000 through 9999) of the start time
BHI( 12, 2):    21 : Anticipated number of vertical levels in 3d data

***Floats:
BHR(  1, 2): 43200.00 : Time increment between analysis times (s)
BHR(  2, 2): 10000.00 : Top pressure used in analysis, pressure defining model lid (Pa)
13.5.1 REGRID Output Fields (with LSM option)

1993-03-13_00:00:00.0000                       0.00000 Hours
T          3  35  41  21   1  C  YXP :    262.65667725  K
U          3  35  41  21   1  D  YXP :      7.14344168  m/s
V          3  35  41  21   1  D  YXP :     10.38981056  m/s
RH         3  35  41  21   1  C  YXP :     96.45741272  %
H          3  35  41  21   1  C  YXP :   3529.16406250  m
ALBSNOMX   2  35  41   1   1  C  YX  :     38.97982025  %
PSEALVLC   2  35  41   1   1  C  YX  : 101799.22656250  Pa
SKINTEMP   2  35  41   1   1  C  YX  :    276.31039429  K
WEASD      2  35  41   1   1  C  YX  :      0.00000000  kg m{-2}
SOILT010   2  35  41   1   1  C  YX  :    277.13571167  K
SOILT200   2  35  41   1   1  C  YX  :    280.43368530  K
SOILT400   2  35  41   1   1  C  YX  :    286.95452881  K
SOILM010   2  35  41   1   1  C  YX  :      0.33993921  fraction
SOILM200   2  35  41   1   1  C  YX  :      0.37924942  fraction
SEAICE     2  35  41   1   1  C  YX  :      0.00000000  0/1 Flag
SNOWCOVR   2  35  41   1   1  C  YX  :      0.00000000  0/1 Flag
PSEALVLD   2  35  41   1   1  D  YX  : 101726.61718750  Pa
TERRAIN    2  35  41   1   1  C  YX  :    475.45861816  m
LAND USE   2  35  41   1   1  C  YX  :     11.00000000  category
VEGFRC01   2  35  41   1   1  C  YX  :     39.89308548  %
VEGFRC02   2  35  41   1   1  C  YX  :     43.45289612  %
VEGFRC03   2  35  41   1   1  C  YX  :     50.00000000  %
VEGFRC04   2  35  41   1   1  C  YX  :     68.57250977  %
VEGFRC05   2  35  41   1   1  C  YX  :    100.00000000  %
VEGFRC06   2  35  41   1   1  C  YX  :     98.89189148  %
VEGFRC07   2  35  41   1   1  C  YX  :     99.99304199  %
VEGFRC08   2  35  41   1   1  C  YX  :     94.87342834  %
VEGFRC09   2  35  41   1   1  C  YX  :     94.89308167  %
VEGFRC10   2  35  41   1   1  C  YX  :     77.98728943  %
VEGFRC11   2  35  41   1   1  C  YX  :     48.47255707  %
VEGFRC12   2  35  41   1   1  C  YX  :     40.46560287  %
TEMPGRD    2  35  41   1   1  C  YX  :    285.15682983  K
LANDMASK   2  35  41   1   1  C  YX  :      1.00000000  category
SOILINDX   2  35  41   1   1  C  YX  :      6.00000000  category
LATITCRS   2  35  41   1   1  C  YX  :     35.58544922  degree
LONGICRS   2  35  41   1   1  C  YX  :    -85.50784302  degree
MAPFACCR   2  35  41   1   1  C  YX  :      0.98003817  dimensionless
LATITDOT   2  35  41   1   1  D  YX  :     35.16879654  degree
LONGIDOT   2  35  41   1   1  D  YX  :    -86.00925446  degree
MAPFACDT   2  35  41   1   1  D  YX  :      0.98123306  dimensionless
CORIOLIS   2  35  41   1   1  D  YX  :      0.00008400  1/s
PRESSURE   1  21   1   1   1  P  P   :  65000.00000000  Pa
13.6 Big Header Record for little_r/RAWINS Output

RAWINS Portion of big header:

***Integers:
BHI(  2, 3):     1 : little_r Version 3 MM5 System Format Edition Number
BHI(  3, 3):     6 : little_r Program Version Number
BHI(  4, 3):     1 : little_r Program Minor Revision Number
BHI(  5, 3):  1993 : FOUR-DIGIT YEAR OF THE START TIME (1900 - 2099)
BHI(  6, 3):     3 : TWO-DIGIT MONTH OF THE START TIME (01-12)
BHI(  7, 3):    13 : TWO-DIGIT DAY OF THE START TIME (01-31)
BHI(  8, 3):     0 : TWO-DIGIT HOUR OF THE START TIME (00-23)
BHI(  9, 3):     0 : TWO-DIGIT MINUTE OF THE START TIME (00-59)
BHI( 10, 3):     0 : TWO-DIGIT SECOND OF THE START TIME (00-59)
BHI( 11, 3):     0 : FOUR-DIGIT TEN-THOUSANDTH OF A SECOND OF THE START TIME (0000-9999)
BHI( 12, 3):    21 : NUMBER OF PRESSURE LEVELS IN OUTPUT, INCLUDING SURFACE LEVEL

***Floats:
BHR(  1, 3): 43200.00 : TIME DIFFERENCE (s) BETWEEN OUTPUT ANALYSIS TIMES
BHR(  2, 3):    10.00 : MAXIMUM TEMPERATURE DIFFERENCE ALLOWED IN ERROR MAX (K)
BHR(  3, 3):    13.00 : MAXIMUM SPEED DIFFERENCE ALLOWED IN ERROR MAX (m/s)
BHR(  4, 3):     6.00 : MAXIMUM SEA-LEVEL PRESSURE DIFFERENCE ALLOWED IN ERROR MAX (Pa)
BHR(  5, 3):     1.00 : TOLERANCE FOR BUDDY CHECK (0 = NO BUDDY CHECK)
13.6.1 little_r/RAWINS Output Fields (with LSM option)

1993-03-13_00:00:00.0000                       0.00000 Hours
T          3  35  41  21   1  C  YXP :    263.60055542  K
U          3  35  41  21   1  D  YXP :      7.07863760  m/s
V          3  35  41  21   1  D  YXP :      9.47566032  m/s
RH         3  35  41  21   1  C  YXP :    100.00000000  %
H          3  35  41  21   1  C  YXP :   3529.16406250  m
ALBSNOMX   2  35  41   1   1  C  YX  :     38.97982025  %
PSEALVLC   2  35  41   1   1  C  YX  : 101740.06250000  Pa
SKINTEMP   2  35  41   1   1  C  YX  :    276.31039429  K
WEASD      2  35  41   1   1  C  YX  :      0.00000000  kg m{-2}
SOILT010   2  35  41   1   1  C  YX  :    277.13571167  K
SOILT200   2  35  41   1   1  C  YX  :    280.43368530  K
SOILT400   2  35  41   1   1  C  YX  :    286.95452881  K
SOILM010   2  35  41   1   1  C  YX  :      0.33993921  fraction
SOILM200   2  35  41   1   1  C  YX  :      0.37924942  fraction
SEAICE     2  35  41   1   1  C  YX  :      0.00000000  0/1 Flag
SNOWCOVR   2  35  41   1   1  C  YX  :      0.00000000  0/1 Flag
PSEALVLD   2  35  41   1   1  D  YX  : 101726.61718750  Pa
TERRAIN    2  35  41   1   1  C  YX  :    475.45861816  m
LAND USE   2  35  41   1   1  C  YX  :     11.00000000  category
VEGFRC01   2  35  41   1   1  C  YX  :     39.89308548  %
VEGFRC02   2  35  41   1   1  C  YX  :     43.45289612  %
VEGFRC03   2  35  41   1   1  C  YX  :     50.00000000  %
VEGFRC04   2  35  41   1   1  C  YX  :     68.57250977  %
VEGFRC05   2  35  41   1   1  C  YX  :    100.00000000  %
VEGFRC06   2  35  41   1   1  C  YX  :     98.89189148  %
VEGFRC07   2  35  41   1   1  C  YX  :     99.99304199  %
VEGFRC08   2  35  41   1   1  C  YX  :     94.87342834  %
VEGFRC09   2  35  41   1   1  C  YX  :     94.89308167  %
VEGFRC10   2  35  41   1   1  C  YX  :     77.98728943  %
VEGFRC11   2  35  41   1   1  C  YX  :     48.47255707  %
VEGFRC12   2  35  41   1   1  C  YX  :     40.46560287  %
TEMPGRD    2  35  41   1   1  C  YX  :    285.15682983  K
LANDMASK   2  35  41   1   1  C  YX  :      1.00000000  category
SOILINDX   2  35  41   1   1  C  YX  :      6.00000000  category
LATITCRS   2  35  41   1   1  C  YX  :     35.58544922  degree
LONGICRS   2  35  41   1   1  C  YX  :    -85.50784302  degree
MAPFACCR   2  35  41   1   1  C  YX  :      0.98003817  dimensionless
LATITDOT   2  35  41   1   1  D  YX  :     35.16879654  degree
LONGIDOT   2  35  41   1   1  D  YX  :    -86.00925446  degree
MAPFACDT   2  35  41   1   1  D  YX  :      0.98123306  dimensionless
CORIOLIS   2  35  41   1   1  D  YX  :      0.00008400  1/s
PRESSURE   1  21   1   1   1  P  P   :  65000.00000000  Pa
13.7 Big Header Record for little_r/RAWINS Surface FDDA Output

SFC RAWINS Portion of big header:
little_r Version 3
***Integers:
BHI(  2, 4):      1 : MM5 System Format Edition Number
BHI(  3, 4):      6 : little_r Program Version Number
BHI(  4, 4):      1 : little_r Program Minor Revision Number
BHI(  5, 4):   1993 : FOUR-DIGIT YEAR OF THE START TIME (1900-2099)
BHI(  6, 4):      3 : TWO-DIGIT MONTH OF THE START TIME (01-12)
BHI(  7, 4):     13 : TWO-DIGIT DAY OF THE START TIME (01-31)
BHI(  8, 4):      0 : TWO-DIGIT HOUR OF THE START TIME (00-23)
BHI(  9, 4):      0 : TWO-DIGIT MINUTE OF THE START TIME (00-59)
BHI( 10, 4):      0 : TWO-DIGIT SECOND OF THE START TIME (00-59)
BHI( 11, 4):      0 : FOUR-DIGIT TEN-THOUSANDTH OF A SECOND OF THE START TIME (0000-9999)
BHI( 12, 4):      1 : NUMBER OF PRESSURE LEVELS IN OUTPUT, INCLUDING SURFACE LEVEL

***Floats:
BHR(  1, 4):  10800.00 : TIME DIFFERENCE (seconds) BETWEEN SURFACE ANALYSES
13.7.1 little_r/RAWINS Surface FDDA Output Fields

1993-03-13_00:00:00.0000   0.00000 Hours

T         2  35  41  1  1  C  YX :    273.91900635  K
U         2  35  41  1  1  D  YX :     -0.72313094  m/s
V         2  35  41  1  1  D  YX :     -4.36657429  m/s
RH        2  35  41  1  1  C  YX :     91.08442688  %
Q         2  35  41  1  1  C  YX :      0.00384220  kg kg{-1}
PSTARCRS  2  35  41  1  1  C  YX :  85925.07812500  Pa
PSEALVLC  2  35  41  1  1  C  YX : 101738.28125000  Pa
TOBBOX    2  35  41  1  1  C  YX :     13.19999981  Obs within 250 km
13.8 Big Header Record for INTERPF Output

INTERP Portion of big header:
INTERP Version 3
***Integers:
BHI(  2, 5):      1 : MM5 System Format Edition Number
BHI(  3, 5):      6 : INTERP Program Version Number
BHI(  4, 5):      1 : INTERP Program Minor Revision Number
BHI(  5, 5):   1993 : Four-digit year of start time
BHI(  6, 5):      3 : Month of the year of the start time (1-12)
BHI(  7, 5):     13 : Day of the month of the start time (1-31)
BHI(  8, 5):      0 : Hour of the day of the start time (0-23)
BHI(  9, 5):      0 : Minute of the start time (0-59)
BHI( 10, 5):      0 : Second of the start time (0-59)
BHI( 11, 5):      0 : Ten thousandths of a second of the start time (0-9999)
BHI( 12, 5):     23 : Number of half-sigma layers in the model input data (top down)

***Floats:
BHR(  1, 5):  43200.00 : Time difference (seconds) between model IC input files
BHR(  2, 5): 100000.00 : Non-hydrostatic base state sea-level pressure (Pa)
BHR(  3, 5):    275.00 : Non-hydrostatic base state sea-level temperature (K)
BHR(  4, 5):     50.00 : Non-hydrostatic base state lapse rate d(T)/d(ln P)
BHR(  5, 5):      0.00 : Non-hydrostatic base state isothermal stratospheric temperature (K)
13.8.1 INTERPF Output Fields (with LSM option)

1993-03-13_00:00:00.0000   0.00000 Hours

T         3  35  41  23  1  C  YXS :    259.52917480  K
U         3  35  41  23  1  D  YXS :     13.80757618  m/s
V         3  35  41  23  1  D  YXS :     14.90762901  m/s
Q         3  35  41  23  1  C  YXS :      0.00224648  kg/kg
PP        3  35  41  23  1  C  YXS :   1911.64038086  Pa
W         3  35  41  24  1  C  YXW :      0.04734292  m/s
GROUND T  2  35  41   1  1  C  YX  :    273.84320068  K
PSTARCRS  2  35  41   1  1  C  YX  :  84232.07031250  Pa
TSFC      2  35  41   1  1  C  YX  :    273.84320068  K
USFC      2  35  41  1  1  D  YX :     -0.79890132  m/s
VSFC      2  35  41  1  1  D  YX :     -4.37785530  m/s
RHSFC     2  35  41  1  1  C  YX :     85.66202545  %
HSFC      2  35  41  1  1  C  YX :    475.45861816  m
ALBSNOMX  2  35  41  1  1  C  YX :     38.97982025  %
PSEALVLC  2  35  41  1  1  C  YX : 101740.06250000  Pa
TSEASFC   2  35  41  1  1  C  YX :    276.31039429  K
WEASD     2  35  41  1  1  C  YX :      0.00000000  kg m{-2}
SOILT010  2  35  41  1  1  C  YX :    277.13571167  K
SOILT200  2  35  41  1  1  C  YX :    280.43368530  K
SOILT400  2  35  41  1  1  C  YX :    286.95452881  K
SOILM010  2  35  41  1  1  C  YX :      0.33993921  fraction
SOILM200  2  35  41  1  1  C  YX :      0.37924942  fraction
SEAICE    2  35  41  1  1  C  YX :      0.00000000  0/1 Flag
SNOWCOVR  2  35  41  1  1  C  YX :      0.00000000  0/1 Flag
PSEALVLD  2  35  41  1  1  D  YX : 101726.61718750  Pa
TERRAIN   2  35  41  1  1  C  YX :    475.45861816  m
LAND USE  2  35  41  1  1  C  YX :     11.00000000  category
VEGFRC01  2  35  41  1  1  C  YX :     39.89308548  %
VEGFRC02  2  35  41  1  1  C  YX :     43.45289612  %
VEGFRC03  2  35  41  1  1  C  YX :     50.00000000  %
VEGFRC04  2  35  41  1  1  C  YX :     68.57250977  %
VEGFRC05  2  35  41  1  1  C  YX :    100.00000000  %
VEGFRC06  2  35  41  1  1  C  YX :     98.89189148  %
VEGFRC07  2  35  41  1  1  C  YX :     99.99304199  %
VEGFRC08  2  35  41  1  1  C  YX :     94.87342834  %
VEGFRC09  2  35  41  1  1  C  YX :     94.89308167  %
VEGFRC10  2  35  41  1  1  C  YX :     77.98728943  %
VEGFRC11  2  35  41  1  1  C  YX :     48.47255707  %
VEGFRC12  2  35  41  1  1  C  YX :     40.46560287  %
TEMPGRD   2  35  41  1  1  C  YX :    285.15682983  K
LANDMASK  2  35  41  1  1  C  YX :      1.00000000  category
SOILINDX  2  35  41  1  1  C  YX :      6.00000000  category
LATITCRS  2  35  41  1  1  C  YX :     35.58544922  degree
LONGICRS  2  35  41  1  1  C  YX :    -85.50784302  degree
MAPFACCR  2  35  41  1  1  C  YX :      0.98003817  dimensionless
LATITDOT  2  35  41  1  1  D  YX :     35.16879654  degree
LONGIDOT  2  35  41  1  1  D  YX :    -86.00925446  degree
MAPFACDT  2  35  41  1  1  D  YX :      0.98123306  dimensionless
CORIOLIS  2  35  41  1  1  D  YX :      0.00008400  1/s
SIGMAH    1  23   1  1  1  H  S  :      0.52499998  sigma
PRESSURE  1  21   1  1  1  P  P  :  65000.00000000  Pa
13.9 Big Header Record for LOWBDY Output

MM5 Substrate Temp File big header:
INTERP Version 3
***Integers:
BHI(  2, 6):      1 : MM5 System Format Edition Number
BHI(  3, 6):      6 : INTERP Program Version Number
BHI(  4, 6):      1 : INTERP Program Minor Revision Number
BHI(  5, 6):   1993 : Four-digit year of start time
BHI(  6, 6):      3 : Month of the year of the start time (1-12)
BHI(  7, 6):     13 : Day of the month of the start time (1-31)
BHI(  8, 6):      0 : Hour of the day of the start time (0-23)
BHI(  9, 6):      0 : Minute of the start time (0-59)
BHI( 10, 6):      0 : Second of the start time (0-59)
BHI( 11, 6):      0 : Ten thousandths of a second of the start time (0-9999)
BHI( 12, 6):      1 : Number of levels in the lower boundary condition file

***Floats:
BHR(  1, 6):  43200.00 : Time difference (seconds) during which the lower boundary condition is valid
13.9.1 LOWBDY Output Fields

1993-03-13_00:00:00.0000   0.00000 Hours

TSEASFC   2  35  41  1  1  C  YX :  271.56176758  K
RES TEMP  2  35  41  1  1  C  YX :  271.93566895  K
SEAICE    2  35  41  1  1  C  YX :    0.00000000  0/1 Flag
SNOWCOVR  2  35  41  1  1  C  YX :    0.00000000  0/1 Flag
13.10 Big Header Record for BDYOUT Output

MM5 Boundary File big header:
INTERP Version 3
***Integers:
BHI(  2, 7):      1 : MM5 System Format Edition Number
BHI(  3, 7):      6 : INTERP Program Version Number
BHI(  4, 7):      1 : INTERP Program Minor Revision Number
BHI(  5, 7):   1993 : Four-digit year of start time
BHI(  6, 7):      3 : Month of the year of the start time (1-12)
BHI(  7, 7):     13 : Day of the month of the start time (1-31)
BHI(  8, 7):      0 : Hour of the day of the start time (0-23)
BHI(  9, 7):      0 : Minute of the start time (0-59)
BHI( 10, 7):      0 : Second of the start time (0-59)
BHI( 11, 7):      0 : Ten thousandths of a second of the start time (0-9999)
BHI( 12, 7):      1 : Number of levels in the lower boundary condition file

***Floats:
BHR(  1, 7):  43200.00 : Time difference (seconds) during which the lateral boundary condition is valid
13.10.1 BDYOUT Output Fields

1993-03-13_00:00:00.0000   0.00000 Hours

UEB    3  35  23  5  1  D  YSB :   1284.62487793  kPa m/s
UWB    3  35  23  5  1  D  YSB :    991.48913574  kPa m/s
UNB    3  41  23  5  1  D  XSB :   1166.55163574  kPa m/s
USB    3  41  23  5  1  D  XSB :   1022.80157471  kPa m/s
UEBT   3  35  23  5  1  D  YSB :     -0.00204996  kPa m/s/s
UWBT   3  35  23  5  1  D  YSB :      0.00547681  kPa m/s/s
UNBT   3  41  23  5  1  D  XSB :     -0.00773819  kPa m/s/s
USBT   3  41  23  5  1  D  XSB :      0.05125543  kPa m/s/s
VEB    3  35  23  5  1  D  YSB :    768.32391357  kPa m/s
VWB    3  35  23  5  1  D  YSB :  -2715.44311523  kPa m/s
VNB    3  41  23  5  1  D  XSB :   -554.05780029  kPa m/s
VSB    3  41  23  5  1  D  XSB :    897.45916748  kPa m/s
VEBT   3  35  23  5  1  D  YSB :      0.01266296  kPa m/s/s
VWBT   3  35  23  5  1  D  YSB :      0.03035343  kPa m/s/s
VNBT   3  41  23  5  1  D  XSB :     -0.00653175  kPa m/s/s
VSBT   3  41  23  5  1  D  XSB :     -0.02874910  kPa m/s/s
TEB    3  35  23  5  1  C  YSB :  23971.13867188  kPa K
TWB    3  35  23  5  1  C  YSB :  19236.74023438  kPa K
TNB    3  41  23  5  1  C  XSB :  20320.28515625  kPa K
TSB    3  41  23  5  1  C  XSB :  24464.29296875  kPa K
TEBT   3  35  23  5  1  C  YSB :      0.00578337  kPa K/s
TWBT   3  35  23  5  1  C  YSB :     -0.00315023  kPa K/s
TNBT   3  41  23  5  1  C  XSB :     -0.00162751  kPa K/s
TSBT   3  41  23  5  1  C  XSB :      0.00058096  kPa K/s
QEB    3  35  23  5  1  C  YSB :      0.10734902  kPa kg/kg
QWB    3  35  23  5  1  C  YSB :      0.01490723  kPa kg/kg
QNB    3  41  23  5  1  C  XSB :      0.00913386  kPa kg/kg
QSB    3  41  23  5  1  C  XSB :      0.08680473  kPa kg/kg
QEBT   3  35  23  5  1  C  YSB :      0.00000125  kPa kg/kg/s
QWBT   3  35  23  5  1  C  YSB :      0.00000099  kPa kg/kg/s
QNBT   3  41  23  5  1  C  XSB :     -0.00000006  kPa kg/kg/s
QSBT   3  41  23  5  1  C  XSB :     -0.00000078  kPa kg/kg/s
WEB    3  35  24  5  1  C  YWB :     -0.10622037  kPa m/s
WWB    3  35  24  5  1  C  YWB :     -6.22259235  kPa m/s
WNB    3  41  24  5  1  C  XWB :      0.65768749  kPa m/s
WSB    3  41  24  5  1  C  XWB :      2.64147258  kPa m/s
WEBT   3  35  24  5  1  C  YWB :      0.00002296  kPa m/s/s
WWBT   3  35  24  5  1  C  YWB :      0.00007693  kPa m/s/s
WNBT   3  41  24  5  1  C  XWB :     -0.00005458  kPa m/s/s
WSBT   3  41  24  5  1  C  XWB :     -0.00014272  kPa m/s/s
PPEB   3  35  23  5  1  C  YSB : 296879.78125000  kPa Pa
PPWB   3  35  23  5  1  C  YSB : 197363.87500000  kPa Pa
PPNB   3  41  23  5  1  C  XSB : -18028.70507812  kPa Pa
PPSB   3  41  23  5  1  C  XSB : 289880.34375000  kPa Pa
PPEBT  3  35  23  5  1  C  YSB :     -0.09617984  kPa Pa/s
PPWBT  3  35  23  5  1  C  YSB :      0.09014142  kPa Pa/s
PPNBT  3  41  23  5  1  C  XSB :     -0.13370353  kPa Pa/s
PPSBT  3  41  23  5  1  C  XSB :     -1.02497864  kPa Pa/s
13.11 Big Header Record for MM5 Output

MM5 Portion of big header:
MM5 Version 3
***Integers:
BHI(  2,11):      1 : MM5 System Format Edition Number
BHI(  3,11):      6 : MM5 Program Version Number
BHI(  4,11):      1 : MM5 Program Minor Revision Number
BHI(  5,11):   1993 : FOUR-DIGIT YEAR OF START TIME
BHI(  6,11):      3 : INTEGER MONTH OF START TIME
BHI(  7,11):     13 : DAY OF THE MONTH OF THE START TIME
BHI(  8,11):      0 : HOUR OF THE START TIME
BHI(  9,11):      0 : MINUTES OF THE START TIME
BHI( 10,11):      0 : SECONDS OF THE START TIME
BHI( 11,11):      0 : TEN THOUSANDTHS OF A SECOND OF THE START TIME
BHI( 12,11):     23 : MKX: NUMBER OF LAYERS IN MM5 OUTPUT
BHI( 13,11):      0 : IFDDAG: 1=GRIDDED FDDA OPTION COMPILED, 0=NOT COMPILED
BHI( 14,11):      0 : IFDDAO: 1=OBS FDDA OPTION COMPILED, 0=NOT COMPILED
BHI( 15,11):      0 : INAV: 1=TKE ARRAY PRESENT, 0=NOT PRESENT
BHI( 16,11):      0 : INAV2: 1=TKE ARRAY PRESENT, 0=NOT PRESENT
BHI( 17,11):      0 : INAV3: 1=TKE ARRAY PRESENT, 0=NOT PRESENT
BHI( 18,11):      0 : IICE: 1=CLOUD ICE AND SNOW ARRAYS PRESENT, 0=NOT PRESENT
BHI( 19,11):      0 : IICEG: 1=GRAUPEL AND NUMBER CONC ARRAYS PRESENT, 0=NOT PRESENT
BHI( 20,11):      1 : IEXMS: 1=CLOUD WATER AND RAIN WATER ARRAYS PRESENT, 0=NOT PRESENT
BHI( 21,11):      0 : IKFFC: 1=KF AND/OR FC ARRAYS PRESENT, 0=NOT PRESENT
BHI( 22,11):      0 : IARASC: 1=ARAKAWA-SCHUBERT ARRAYS PRESENT, 0=NOT PRESENT
BHI( 23,11):      1 : IRDDIM: 1=ATMOSPHERIC RADIATION TENDENCY ARRAY PRESENT, 0=NOT PRESENT
BHI( 24,11):      0 : ISLDIM: 1=5-LAYER SOIL MODEL ARRAYS PRESENT, 0=NOT PRESENT
BHI( 25,11):      1 : ILDDIM: 1=LAND-SURFACE MODEL ARRAYS PRESENT, 0=NOT PRESENT

***Floats:
BHR(  1,11):  10800.00 : INTTIM: TIME DIFFERENCE IN MODEL OUTPUT
***Integers:
BHI(  1,12):      0 : IFREST: 1=RESTARTED JOB; 0=NOT A RESTARTED JOB
BHI(  2,12):      0 : IXTIMR: TIME OF RESTART
BHI(  3,12):      1 : IFSAVE: 1=DATA SAVED FOR RESTART; 0=DATA NOT SAVED FOR RESTART
BHI(  4,12):      1 : IFTAPE: 1=OUTPUT DATA SAVED FOR GRIN; 0=NO OUTPUT FOR GRIN
BHI(  5,12):  99999 : MASCHK: MASS CONSERVATION CHECK FREQUENCY (MINUTES)

***Floats:
BHR(  1,12):    720.00 : TIMAX: SIMULATION END TIME (MINUTES)
BHR(  2,12):    240.00 : TISTEP: COARSE-DOMAIN TIME STEP IN SECONDS
BHR(  3,12):    360.00 : SAVFRQ: TIME INTERVAL (MINUTES) THAT DATA WERE SAVED FOR RESTART
BHR(  4,12):    180.00 : TAPFRQ: TIME INTERVAL (MINUTES) THAT DATA WERE SAVED FOR GRIN
BHR(  5,12):      0.00 : BUFFRQ: TIME FREQ USED TO SPLIT OUTPUT FILES (MINUTES); IGNORED IF < TAPFRQ
***Integers:
BHI(  1,13):      2 : IFRAD: 0=NO RADIATIVE COOLING; 1=SIMPLE; 2=CLOUD RADIATION; 3=CCM2; 4=RRTMLW
BHI(  2,13):      3 : ICUPA: 1-8: NO/ANTHES-KUO/GRELL/A-S/F-C/KAIN-FRITSCH/BETTS-MILLER/KAIN-FRITSCH2
BHI(  3,13):      4 : IMPHYS: 1-8: DRY/STABLE/WARM RAIN/SIMPLE ICE/REISNER1/GODDARD/REISNER2/SCHULTZ
BHI(  4,13):      5 : IBLTYP: 0=FRICTIONLESS; 1=BULK; 2=BLACKADAR; 3=B-T; 4=ETA M-Y; 5=MRF; 6=G-S; 7=PX
BHI(  5,13):      2 : ISOIL: 0=BLACKADAR SLAB MODEL, 1=MULTI-LAYER, 2=LAND-SURFACE SCHEME
BHI(  6,13):      0 : ISHALLO: 1=SHALLOW CONVECTION SCHEME USED; 0=SHALLOW CONVECTION SCHEME NOT USED
BHI(  7,13):      1 : IMVDIF: 1=MOIST-ADIABATIC VERTICAL DIFFUSION IN CLOUDS INCLUDED; 0=NOT INCLUDED
BHI(  8,13):      1 : IVQADV: 0=LOG, 1=LINEAR INTERPOLATION OF MOISTURE IN VERTICAL ADVECTION
BHI(  9,13):      1 : IVTADV: 0=THETA, 1=LINEAR INTERPOLATION OF TEMPERATURE IN VERTICAL ADVECTION
BHI( 10,13):      1 : ITHADV: 0=STANDARD, 1=USING POTENTIAL TEMPERATURE IN TEMP ADVECTION
BHI( 11,13):      1 : ITPDIF: 1=HORIZONTAL DIFFUSION OF PERTURBATION TEMPERATURE ONLY; 0=FULL T
BHI( 12,13):      1 : ICOR3D: 1=FULL CORIOLIS WITH VERTICAL COMPONENT; 0=VERTICAL COMPONENT NEGLECTED
BHI( 13,13):      1 : IFUPR: 1=UPPER RADIATIVE BOUNDARY CONDITION USED; 0=NOT USED
BHI( 14,13):      0 : IFDRY: 1=FAKE DRY RUN; 0=NOT A FAKE DRY RUN
BHI( 15,13):      3 : IBOUDY: 0=FIXED; 2=TIME-DEPENDENT SPECIFIED; 3=RELAXATION ZONE/IO DEPENDENT
BHI( 16,13):      0 : IFSNOW: 1=SNOW-COVER EFFECTS CONSIDERED; 0=NOT CONSIDERED
BHI( 17,13):      1 : ISSFLX: 1=SURFACE HEAT AND MOISTURE FLUXES CALCULATED; 0=NOT CALCULATED
BHI( 18,13):      1 : ITGFLG: 1=TG CALCULATED FROM BUDGET; 2=SINUSOIDAL FUNCTION; 3=SPECIFIED CONSTS
BHI( 19,13):      1 : ISFPAR: 1=SFC/LAND-USE PARAMETERS VARIABLE; 0=SFC/LAND-USE PARAMETERS CONSTANT
BHI( 20,13):      1 : ICLOUD: 0=CLOUD EFFECTS NOT CONSIDERED; 1, 2=CLOUD EFF THRU CLOUD WATER/ICE OR RH
BHI( 21,13):      0 : ICDCON: 1=DRAG COEFFS ARE CONSTANT F(TER-ELEV) IN BULK PBL; 0=COEFFS VARIABLE
BHI( 22,13):      1 : IVMIXM: 1=VERTICAL MIXING OF MOMENTUM CONSIDERED; 0=NOT CONSIDERED
BHI( 23,13):      1 : IEVAP: -1=EVAP OF RAIN NOT CONSIDERED; 0=EVAP NOT CONSIDERED; 1=EVAP CONSIDERED
BHI( 24,13):      1 : ICUSTB: 1=STABILITY CHECK IN CUMULUS SCHEME ACTIVATED; 0=NOT ACTIVATED
BHI( 25,13):      0 : IMOIAV: 1=USE BUCKET MODEL W/O EXTRA INPUT; 2=USE BUCKET MODEL W SOIL MOIS INPUT
BHI( 26,13):      0 : IBMOIST: 1=BOUNDARY AND INITIAL WATER/ICE SPECIFIED; 0=NOT SPECIFIED
BHI( 27,13):      0 : IFOGMD: 1=FOG MODEL IS USED; 0=FOG MODEL IS NOT USED
BHI( 28,13):      0 : ISSTVAR: 1=SST VARYING IN TIME; 0=SST DOES NOT VARY IN TIME

***Floats:
BHR(  1,13):     30.00 : RADFRQ: FREQUENCY THAT SOLAR RADIATION IS COMPUTED (MINUTES)
BHR(  2,13):      1.00 : HYDPRE: 1.0=WATER LOADING EFFECTS IN HYDROSTATIC EQN; 0.0=NO WATER LOADING
***Integers:
BHI(  1,14):      1 : IOVERW: 1=NEST INITIAL CONDITIONS OVERWRITTEN WITH USER ANALYSIS; 0=INTERPOLATE
BHI(  2,14):      3 : IFEED: 0=NO FB; 1=9-PT FB; 2=1-PT FB, NO SMOOTH; 3=LIGHT SMOOTH; 4=HEAVY SMOOTH
BHI(  3,14):     10 : MAXMV: MAXIMUM NUMBER OF MOVES ALLOWED
BHI(  4,14):      0 : IMOVE: 1=THIS DOMAIN MOVES; 0=THIS DOMAIN DOES NOT MOVE
BHI(  5,14):      1 : IMOVCO: MOVE NUMBER
BHI(  6,14) - BHI( 15,14):  0 : IMOVET: TIME OF MOVE (MINUTES FROM START OF FORECAST)
BHI( 16,14) - BHI( 25,14):  0 : IMOVEI: I INCREMENT OF MOVE
BHI( 26,14) - BHI( 35,14):  0 : IMOVEJ: J INCREMENT OF MOVE

***Floats:
BHR(  1,14):      0.00 : XSTNES: STARTING TIME (MINUTES) OF DOMAIN
BHR(  2,14):   1440.00 : XENNES: ENDING TIME (MINUTES) OF DOMAIN

***Integers: (none)
***Floats:
BHR(  1,15):      0.10 : ZZLND: ROUGHNESS LENGTH (M) OVER LAND (WHEN ISFPAR=1)
BHR(  2,15):      0.00 : ZZWTR: ROUGHNESS LENGTH (M) OVER WATER (WHEN ISFPAR=1)
BHR(  3,15):      0.15 : ALBLND: ALBEDO OVER LAND (WHEN ISFPAR=1)
BHR(  4,15):      0.04 : THINLD: THERMAL INERTIA OVER LAND (WHEN ISFPAR=1)
BHR(  5,15):      0.30 : XMAVA: MOISTURE AVAILABILITY OVER LAND AS A FRACTION OF 1 (WHEN ISFPAR=1)
BHR(  6,15):      1.00 : CONF: CONDENSATION THRESHOLD
BHR(  7,15):      1.00 : other namelist variables
***Integers:
BHI(  1,16):      0 : I4D(3D): 1=3-D ANALYSIS NUDGING; 0=NO 3-D ANALYSIS NUDGING
BHI(  2,16):      1 : IWIND(3D): 1=3-D ANALYSIS NUDGING OF WIND; 0=NO 3-D ANALYSIS NUDGING OF WIND
BHI(  3,16):      1 : ITEMP(3D): 1=3-D ANALYSIS NUDGING OF TEMP; 0=NO 3-D ANALYSIS NUDGING OF TEMP
BHI(  4,16):      1 : IMOIS(3D): 1=3-D ANALYSIS NUDGING OF MOISTURE; 0=NO 3-D ANALYSIS NUDGING OF MOISTURE
BHI(  5,16):      0 : IROT: 1=3-D ANALYSIS NUDGING OF ROTATIONAL WIND; 0=ROT. WIND NOT NUDGED
BHI(  6,16):      0 : I4D(SFC): 1=SFC ANALYSIS NUDGING; 0=NO SFC ANALYSIS NUDGING
BHI(  7,16):      1 : IWIND(SFC): 1=SFC ANALYSIS NUDGING OF WIND; 0=NO SFC ANALYSIS NUDGING OF WIND
BHI(  8,16):      1 : ITEMP(SFC): 1=SFC ANALYSIS NUDGING OF TEMP; 0=NO SFC ANALYSIS NUDGING OF TEMP
BHI(  9,16):      1 : IMOIS(SFC): 1=SFC ANALYSIS NUDGING OF MOISTURE; 0=NO SFC ANALYSIS NUDGING OF MOISTURE
BHI( 10,16):      0 : INONBL(U): 0=B.L. NUDGING OF U INCLUDED; 1=B.L. NUDGING OF U EXCLUDED
BHI( 11,16):      0 : INONBL(V): 0=B.L. NUDGING OF V INCLUDED; 1=B.L. NUDGING OF V EXCLUDED
BHI( 12,16):      1 : INONBL(T): 0=B.L. NUDGING OF T INCLUDED; 1=B.L. NUDGING OF T EXCLUDED
BHI( 13,16):      1 : INONBL(M.R.): 0=B.L. NUDGING OF M.R. INCLUDED; 1=B.L. NUDGING OF M.R. EXCLUDED
BHI( 14,16):      0 : I4DI: 1=OBSERVATION NUDGING; 2=NO OBSERVATION NUDGING
BHI( 15,16):      1 : ISWIND: 1=OBS NUDGING OF THE WIND FIELD; 2=NO OBS NUDGING OF THE WIND FIELD
BHI( 16,16):      1 : ISTEMP: 1=OBS NUDGING OF THE TEMP FIELD; 2=NO OBS NUDGING OF THE TEMP FIELD
BHI( 17,16):      1 : ISMOIS: 1=OBS NUDGING OF THE MIXING RATIO FIELD; 2=NO OBS NUDGING OF MIX. RAT.
BHI( 18,16):      2 : IONF: FREQUENCY (COARSE-GRID TIMESTEPS) TO COMPUTE OBS-NUDGING WEIGHTS
BHI( 19,16):      0 : IDYNIN: 1=USE RAMPING FUNCTION AT END OF FDDA; 0=NO RAMP

***Floats:
BHR(  1,16):      0.00 : FDASTA: STARTING TIME FOR FDDA (MINUTES)
BHR(  2,16):    780.00 : FDAEND: ENDING TIME FOR FDDA (MINUTES)
BHR(  3,16):    720.00 : DIFTIM(3D): TIME INTERVAL (MINUTES) BETWEEN 3-D ANALYSES FOR NUDGING
BHR(  4,16):    180.00 : DIFTIM(SFC): TIME INTERVAL (MINUTES) BETWEEN SURFACE ANALYSES FOR NUDGING
BHR(  5,16):      0.00 : GV(3D): NUDGING COEFFICIENT FOR 3-D ANALYSIS FDDA OF WINDS
BHR(  6,16):      0.00 : GT(3D): NUDGING COEFFICIENT FOR 3-D ANALYSIS FDDA OF TEMP
BHR(  7,16):      0.00 : GQ(3D): NUDGING COEFFICIENT FOR 3-D ANALYSIS FDDA OF MIXING RATIO
BHR(  8,16):  5000000. : GR(3D): NUDGING COEFFICIENT FOR 3-D ANALYSIS FDDA OF ROTATIONAL WIND COMPONENT
BHR(  9,16):      0.00 : GV(SFC): NUDGING COEFFICIENT FOR SFC ANALYSIS FDDA OF WINDS
BHR( 10,16):      0.00 : GT(SFC): NUDGING COEFFICIENT FOR SFC ANALYSIS FDDA OF TEMP
BHR( 11,16):      0.00 : GQ(SFC): NUDGING COEFFICIENT FOR SFC ANALYSIS FDDA OF MIXING RATIO
BHR( 12,16):    250.00 : RINBLW: RADIUS OF INFLUENCE FOR SURFACE ANALYSIS NUDGING
BHR( 13,16):      0.00 : GIV: NUDGING COEFFICIENT FOR OBS NUDGING OF THE WIND FIELD
BHR( 14,16):      0.00 : GIT: NUDGING COEFFICIENT FOR OBS NUDGING OF THE TEMP FIELD
BHR( 15,16):      0.00 : GIQ: NUDGING COEFFICIENT FOR OBS NUDGING OF THE MIXING RATIO FIELD
BHR( 16,16):    240.00 : RINXY: OBS NUDGING RADIUS OF INFLUENCE (KM) IN THE HORIZONTAL
BHR( 17,16):      0.00 : RINSIG: OBS NUDGING RADIUS OF INFLUENCE (SIGMA UNITS) IN THE VERTICAL
BHR( 18,16):      0.67 : TWINDO: OBS NUDGING HALF PERIOD (MINUTES) OF THE TIME WINDOW
BHR( 19,16):     60.00 : DTRAMP: IF IDYNIN=1, RAMPING TIME IN MINUTES
13.11.1 MM5 Output Fields

1993-03-13_00:00:00.0000   0.00000 Hours

U         3  35  41  23  1  D  YXS :     13.80757618  m/s
V         3  35  41  23  1  D  YXS :     14.90762901  m/s
T         3  35  41  23  1  C  YXS :    259.52917480  K
Q         3  35  41  23  1  C  YXS :      0.00224648  kg/kg
CLW       3  35  41  23  1  C  YXS :      0.00000000  kg/kg
RNW       3  35  41  23  1  C  YXS :      0.00000000  kg/kg
RAD TEND  3  35  41  23  1  C  YXS :      0.00000000  K/DAY
W         3  35  41  24  1  C  YXW :      0.04734292  m/s
PP        3  35  41  23  1  C  YXS :   1911.64025879  Pa
PSTARCRS  2  35  41   1  1  C  YX  :  84232.07031250  Pa
GROUND T  2  35  41   1  1  C  YX  :    273.84320068  K
RAIN CON  2  35  41   1  1  C  YX  :      0.00000000  cm
RAIN NON  2  35  41  1  1  C  YX :       0.00000000  cm
TERRAIN   2  35  41  1  1  C  YX :     475.45858765  m
MAPFACCR  2  35  41  1  1  C  YX :       0.98003811  (DIMENSIONLESS)
MAPFACDT  2  35  41  1  1  D  YX :       0.98123312  (DIMENSIONLESS)
CORIOLIS  2  35  41  1  1  D  YX :       0.00008400  1/s
RES TEMP  2  35  41  1  1  C  YX :     285.15682983  K
LATITCRS  2  35  41  1  1  C  YX :      35.58544922  DEGREES
LONGICRS  2  35  41  1  1  C  YX :     -85.50784302  DEGREES
LAND USE  2  35  41  1  1  C  YX :      11.00000000  category
TSEASFC   2  35  41  1  1  C  YX :     271.56176758  K
PBL HGT   2  35  41  1  1  C  YX :       0.00000000  m
REGIME    2  35  41  1  1  C  YX :       0.00000000  (DIMENSIONLESS)
SHFLUX    2  35  41  1  1  C  YX :       0.00000000  W/m^2
LHFLUX    2  35  41  1  1  C  YX :       0.00000000  W/m^2
UST       2  35  41  1  1  C  YX :       0.00000000  m/s
SWDOWN    2  35  41  1  1  C  YX :       0.00000000  W/m^2
LWDOWN    2  35  41  1  1  C  YX :       0.00000000  W/m^2
SWOUT     2  35  41  1  1  C  YX :       0.00000000  W/m^2
LWOUT     2  35  41  1  1  C  YX :       0.00000000  W/m^2
SOIL T 1  2  35  41  1  1  C  YX :     277.13571167  K
SOIL T 2  2  35  41  1  1  C  YX :     277.79531860  K
SOIL T 3  2  35  41  1  1  C  YX :     279.27941895  K
SOIL T 4  2  35  41  1  1  C  YX :     281.52362061  K
SOIL M 1  2  35  41  1  1  C  YX :       0.33993921  m^3/m^3
SOIL M 2  2  35  41  1  1  C  YX :       0.34780127  m^3/m^3
SOIL M 3  2  35  41  1  1  C  YX :       0.36549085  m^3/m^3
SOIL M 4  2  35  41  1  1  C  YX :       0.37924942  m^3/m^3
SOIL W 1  2  35  41  1  1  C  YX :       0.33993921  m^3/m^3
SOIL W 2  2  35  41  1  1  C  YX :       0.34780127  m^3/m^3
SOIL W 3  2  35  41  1  1  C  YX :       0.36549085  m^3/m^3
SOIL W 4  2  35  41  1  1  C  YX :       0.37924942  m^3/m^3
CANOPYM   2  35  41  1  1  C  YX :       0.00000000  m
WEASD     2  35  41  1  1  C  YX :       0.00000000  mm
SNOWH     2  35  41  1  1  C  YX :       0.00000000  m
SNOWCOVR  2  35  41  1  1  C  YX :       0.00000000  fraction
ALB       2  35  41  1  1  C  YX :       0.17000000  fraction
GRNFLX    2  35  41  1  1  C  YX :       0.00000000  W m{-2}
VEGFRC    2  35  41  1  1  C  YX :      48.91479874  fraction
SEAICE    2  35  41  1  1  C  YX :       0.00000000  (DIMENSIONLESS)
SFCRNOFF  2  35  41  1  1  C  YX :       0.00000000  mm
UGDRNOFF  2  35  41  1  1  C  YX :       0.00000000  mm
T2        2  35  41  1  1  C  YX :       0.00000000  K
Q2        2  35  41  1  1  C  YX :       0.00000000  kg kg{-1}
U10       2  35  41  1  1  C  YX :       0.00000000  m s{-1}
V10       2  35  41  1  1  C  YX :       0.00000000  m s{-1}
ALBD      2  27   2  1  1  -  CA :      12.00000000  PERCENT
SLMO      2  27   2  1  1  -  CA :       0.50000000  fraction
SFEM      2  27   2  1  1  -  CA :       0.94999999  fraction
SFZ0      2  27   2  1  1  -  CA :      50.00000000  cm
THERIN    2  27   2  1  1  -  CA :       5.00000000  100*cal cm^-2 K^-1 s^1/2
SFHC      2  27   2  1  1  -  CA : 2920000.00000000  J m^-3 K^-1
SCFX      2  27   1  1  1  -  CA :       0.00000000  fraction
SIGMAH    1  23   1  1  1  H  S  :       0.52499998  sigma
13.12 Big Header Record for Interpolated, Pressure-level MM5 Output

Interpolated MM5 Portion of big header:
INTERPB Version 3
***Integers:
BHI(  2, 8):      1 : MM5 System Format Edition Number
BHI(  3, 8):      2 : INTERPB Program Version Number
BHI(  4, 8):      0 : INTERPB Program Minor Revision Number
BHI(  5, 8):   1993 : Four-digit year of start time
BHI(  6, 8):      3 : Month of the year of the start time (1-12)
BHI(  7, 8):     13 : Day of the month of the start time (1-31)
BHI(  8, 8):      0 : Hour of the day of the start time (0-23)
BHI(  9, 8):      0 : Minute of the start time (0-59)
BHI( 10, 8):      0 : Second of the start time (0-59)
BHI( 11, 8):      0 : Ten thousandths of a second of the start time (0-9999)
BHI( 12, 8):     20 : Number of pressure levels in the output, bottom up, including surface

***Floats:
BHR(  1, 8):  21600.00 : Time difference (seconds) between isobaric interpolated model output files
13.12.1 Interpolated MM5 Output Fields

1993-03-13_00:00:00.0000   0.00000 Hours

U         3  35  41  20  1  D  YXP :     11.72152519  m/s
V         3  35  41  20  1  D  YXP :     12.60861397  m/s
T         3  35  41  20  1  C  YXP :    261.79498291  K
Q         3  35  41  20  1  C  YXP :      0.00264599  kg/kg
CLW       3  35  41  20  1  C  YXP :      0.00000000  kg/kg
RNW       3  35  41  20  1  C  YXP :      0.00000000  kg/kg
RAD TEND  3  35  41  20  1  C  YXP :      0.00000000  K/DAY
W         3  35  41  20  1  C  YXP :      0.04727403  m/s
PP        3  35  41  20  1  C  YXP :   1809.39270020  Pa
H         3  35  41  20  1  C  YXP :   4142.13671875  m
RH        3  35  41  20  1  C  YXP :     98.65138245  %
PSTARCRS  2  35  41   1  1  C  YX  :  84232.07031250  Pa
GROUND T  2  35  41   1  1  C  YX  :    273.84320068  K
RAIN CON  2  35  41   1  1  C  YX  :      0.00000000  cm
RAIN NON  2  35  41   1  1  C  YX  :      0.00000000  cm
TERRAIN   2  35  41   1  1  C  YX  :    475.45858765  m
MAPFACCR  2  35  41   1  1  C  YX  :      0.98003811  (DIMENSIONLESS)
MAPFACDT  2  35  41   1  1  D  YX  :      0.98123312  (DIMENSIONLESS)
CORIOLIS  2  35  41   1  1  D  YX  :      0.00008400  1/s
RES TEMP  2  35  41   1  1  C  YX  :    285.15682983  K
LATITCRS  2  35  41   1  1  C  YX  :     35.58544922  DEGREES
LONGICRS  2  35  41   1  1  C  YX  :    -85.50784302  DEGREES
LAND USE  2  35  41   1  1  C  YX  :     11.00000000  category
TSEASFC   2  35  41   1  1  C  YX  :    271.56176758  K
PBL HGT   2  35  41   1  1  C  YX  :      0.00000000  m
REGIME    2  35  41   1  1  C  YX  :      0.00000000  (DIMENSIONLESS)
SHFLUX    2  35  41   1  1  C  YX  :      0.00000000  W/m^2
LHFLUX    2  35  41   1  1  C  YX  :      0.00000000  W/m^2
UST       2  35  41   1  1  C  YX  :      0.00000000  m/s
SWDOWN    2  35  41   1  1  C  YX  :      0.00000000  W/m^2
LWDOWN    2  35  41   1  1  C  YX  :      0.00000000  W/m^2
SWOUT     2  35  41   1  1  C  YX  :      0.00000000  W/m^2
LWOUT     2  35  41   1  1  C  YX  :      0.00000000  W/m^2
SOIL T 1  2  35  41   1  1  C  YX  :    277.13571167  K
SOIL T 2  2  35  41   1  1  C  YX  :    277.79531860  K
SOIL T 3  2  35  41   1  1  C  YX  :    279.27941895  K
SOIL T 4  2  35  41   1  1  C  YX  :    281.52362061  K
SOIL M 1  2  35  41   1  1  C  YX  :      0.33993921  m^3/m^3
SOIL M 2  2  35  41   1  1  C  YX  :      0.34780127  m^3/m^3
SOIL M 3  2  35  41   1  1  C  YX  :      0.36549085  m^3/m^3
SOIL M 4  2  35  41   1  1  C  YX  :      0.37924942  m^3/m^3
SOIL W 1  2  35  41   1  1  C  YX  :      0.33993921  m^3/m^3
SOIL W 2  2  35  41   1  1  C  YX  :      0.34780127  m^3/m^3
SOIL W 3  2  35  41   1  1  C  YX  :      0.36549085  m^3/m^3
SOIL W 4  2  35  41   1  1  C  YX  :      0.37924942  m^3/m^3
CANOPYM   2  35  41   1  1  C  YX  :      0.00000000  m
WEASD     2  35  41   1  1  C  YX  :      0.00000000  mm
SNOWH     2  35  41   1  1  C  YX  :      0.00000000  m
SNOWCOVR  2  35  41   1  1  C  YX  :      0.00000000  fraction
ALB       2  35  41   1  1  C  YX  :      0.17000000  fraction
GRNFLX    2  35  41   1  1  C  YX  :      0.00000000  W m{-2}
VEGFRC    2  35  41   1  1  C  YX  :     48.91479874  fraction
SEAICE    2  35  41   1  1  C  YX  :      0.00000000  (DIMENSIONLESS)
SFCRNOFF  2  35  41   1  1  C  YX  :      0.00000000  mm
UGDRNOFF  2  35  41   1  1  C  YX  :      0.00000000  mm
T2        2  35  41   1  1  C  YX  :      0.00000000  K
Q2        2  35  41  1  1  C  YX :      0.00000000  kg kg{-1}
U10       2  35  41  1  1  C  YX :      0.00000000  m s{-1}
V10       2  35  41  1  1  C  YX :      0.00000000  m s{-1}
PSFC      2  35  41  1  1  C  YX :  95925.21093750  Pa
PSEALVLC  2  35  41  1  1  C  YX : 101745.21093750  Pa
PSEALVLD  2  35  41  1  1  D  YX : 101671.34375000  Pa
LATITDOT  2  35  41  1  1  D  YX :     35.16881180  degrees
LONGIDOT  2  35  41  1  1  D  YX :    -86.00924683  degrees
RH SFC    2  35  41  1  1  C  YX :     82.59911346  %
PRESSURE  1  20   1  1  1  P  P  :  60000.00000000  Pa
13.13 Special Data Format in MM5 Modeling System
Several data files in the MM5 modeling system do not conform to the format described above, so special programs are needed to read them. These files are the input file required for observational nudging, a file for boundary conditions, and three observation output files from Rawins.
13.13.1 Data format for observational nudging
This input file to the MM5 model is a binary file containing 9 real numbers per record; the records are ordered by increasing time. The READ statement in the model is the following:
READ(NVOL,END=111) TIMEOB,RIO,RJO,RKO,(VAROBS(IVAR),IVAR=1,5)
where NVOL is the input Fortran unit number.
13.13.2 Description of observational nudging variables

Name        Description
TIMEOB:     Julian date in dddhh (with fractional hours). Example: 16623.5 = Julian day 166, hour 23.5 (2330 UTC)
RIO:        y-location: I dot-point location on the coarse mesh
RJO:        x-location: J dot-point location on the coarse mesh
RKO:        z-location: K half-sigma level
VAROBS(1):  u wind (m/s), rotated to the model grid
VAROBS(2):  v wind (m/s), rotated to the model grid
VAROBS(3):  temperature (K)
VAROBS(4):  water vapor mixing ratio (kg/kg)
VAROBS(5):  Pstar (cb; only used in the hydrostatic model)
A user may include more information at the end of a record; it is not read by the model, but it can be used to identify the station and data type. The no-data value is 99999. If running the model in nonhydrostatic mode, 99999. can be used to fill the Pstar slot.
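The record layout above can be exercised outside of Fortran. The sketch below writes and then reads one observation-nudging record in Fortran sequential unformatted form, i.e. the 9 reals framed by 4-byte record-length markers. The marker width, byte order, and real precision all depend on the compiler and platform that produce the actual file; big-endian 32-bit IEEE reals are assumed here, and the function names are illustrative, not part of MM5.

```python
import io
import struct

# One record: TIMEOB, RIO, RJO, RKO, VAROBS(1..5) = 9 reals.
# Assumption: big-endian 32-bit IEEE reals with 4-byte record markers.
REC_FMT = ">9f"

def write_obs_record(f, timeob, rio, rjo, rko, varobs):
    """Write one observation-nudging record with Fortran-style length markers."""
    payload = struct.pack(REC_FMT, timeob, rio, rjo, rko, *varobs)
    marker = struct.pack(">i", len(payload))
    f.write(marker + payload + marker)

def read_obs_record(f):
    """Read one record; returns a 9-tuple of floats, or None at end of file."""
    head = f.read(4)
    if len(head) < 4:
        return None
    (nbytes,) = struct.unpack(">i", head)
    values = struct.unpack(REC_FMT, f.read(nbytes))
    f.read(4)                     # skip the trailing length marker
    return values

if __name__ == "__main__":
    buf = io.BytesIO()
    # Julian day 166, 2330 UTC -> TIMEOB = 16623.5; 99999. fills the
    # unused Pstar slot for a nonhydrostatic run.
    write_obs_record(buf, 16623.5, 20.0, 25.0, 5.0,
                     [3.2, -1.5, 285.4, 0.008, 99999.0])
    buf.seek(0)
    rec = read_obs_record(buf)
    print(rec[0])   # 16623.5
```

Reading a real MM5 obs-nudging file on a little-endian machine would need the same byte-swapping treatment as any other MM5 binary (compare the `-byteswapio` flag used for readv3.f in Chapter 14).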
13.13.3 Data format for surface observations file This is an output file written by program Rawins after the surface data have gone through error checks. Only those surface observations that have passed the error checks are included. These are the surface observations that were used in the objective analysis. The local file name is
sfc4dobs.out, the NCAR MSS filename is SFC4DOBS_DOMAINx, and the Fortran unit number in program Rawins is unit 60. This file may be used to construct an observation nudging file for use in MM5. It is an unformatted Fortran output file; each record contains the surface observations for a single station. The Fortran READ statement for each record is:
READ (60) JDATE,JSTA,JLAT,JLON,JSFCEL,XU,XV,XT,XRH,XSLP
13.13.4 Description of surface observation variables

Name     Description
JDATE    16-character string date in yyyy-mm-dd_hh:mm
JSTA     station identifier (character*8)
JLAT     station latitude * 10 (integer)
JLON     station longitude * 10 (integer)
JSFCEL   station elevation (integer)
XU       u component of wind rotated to model grid (m/s)
XV       v component of wind rotated to model grid (m/s)
XT       temperature (C)
XRH      Rawins' RH form; to obtain real RH: RH=100*(1.-XRH*XRH)
XSLP     sea-level pressure (mb)
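The XRH transformation is worth spelling out, since the stored value is not a percentage. A minimal helper (the function name is illustrative, not from MM5) that applies the formula above:

```python
def rawins_rh_to_percent(xrh):
    """Convert Rawins' stored RH form to relative humidity in percent,
    using RH = 100*(1. - XRH*XRH) as documented for the sfc4dobs.out file."""
    return 100.0 * (1.0 - xrh * xrh)

if __name__ == "__main__":
    print(rawins_rh_to_percent(0.0))   # 100.0 (saturated air)
    print(rawins_rh_to_percent(1.0))   # 0.0  (completely dry)
```

Note that XRH = 0 corresponds to saturation and XRH = 1 to zero relative humidity, so the stored form decreases as humidity increases.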
13.13.5 Data format for upper-air observations file
This is an output file written by program Rawins after the upper-air data have gone through error checks. Only those upper-air observations that have passed the error checks are included; these are the upper-air observations that were used in the objective analysis. The local file name is upr4dobs.out, the NCAR MSS filename is UPR4DOBS_DOMAINx, and the Fortran unit number in program Rawins is unit 61. This file may be used to construct an observation nudging file for use in MM5. It is an unformatted Fortran output file; each record contains the upper-air observations for a single station. The Fortran READ statement for each record is:
READ (61) JDATE,JSTA,JLAT,JLON,JSFCEL,JTOTLV,
          (XP(L),L=1,JTOTLV),(XU(L),L=1,JTOTLV),(XV(L),L=1,JTOTLV),
          (XT(L),L=1,JTOTLV),(XRH(L),L=1,JTOTLV)
13.13.6 Description of upper-air observation variables

Name     Description
JDATE    16-character string date in yyyy-mm-dd_hh:mm
JSTA     station identifier (character*6)
JLAT     station latitude * 10 (integer)
JLON     station longitude * 10 (integer)
JSFCEL   station elevation (integer)
JTOTLV   number of data levels
XP       pressure values
XU       u component of wind rotated to model grid (m/s)
XV       v component of wind rotated to model grid (m/s)
XT       temperature (C)
XRH      Rawins' RH form; to obtain real RH: RH=100*(1.-XRH*XRH)
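Unlike the surface file, each upper-air record is variable-length: the five data arrays are all dimensioned by JTOTLV, which is itself stored in the record. The sketch below shows how that self-describing layout can be parsed, assuming big-endian 32-bit integers and reals, character fields stored as raw bytes, and standard 4-byte sequential record markers; all names and the byte-order choice are assumptions, not MM5 code.

```python
import io
import struct

def write_upperair_record(f, jdate, jsta, jlat, jlon, jsfcel,
                          xp, xu, xv, xt, xrh):
    """Pack one upper-air record in Fortran sequential unformatted form."""
    n = len(xp)
    payload = jdate.encode().ljust(16)[:16]          # 16-character date
    payload += jsta.encode().ljust(6)[:6]            # 6-character station id
    payload += struct.pack(">4i", jlat, jlon, jsfcel, n)
    for arr in (xp, xu, xv, xt, xrh):                # each JTOTLV reals
        payload += struct.pack(">%df" % n, *arr)
    marker = struct.pack(">i", len(payload))
    f.write(marker + payload + marker)

def read_upperair_record(f):
    """Unpack one record; returns None at end of file."""
    head = f.read(4)
    if len(head) < 4:
        return None
    (nbytes,) = struct.unpack(">i", head)
    buf = f.read(nbytes)
    f.read(4)                                        # trailing marker
    jdate = buf[:16].decode()
    jsta = buf[16:22].decode()
    jlat, jlon, jsfcel, jtotlv = struct.unpack(">4i", buf[22:38])
    arrays, off = [], 38
    for _ in range(5):                               # XP, XU, XV, XT, XRH
        arrays.append(struct.unpack(">%df" % jtotlv, buf[off:off + 4 * jtotlv]))
        off += 4 * jtotlv
    return jdate, jsta, jlat, jlon, jsfcel, jtotlv, arrays

if __name__ == "__main__":
    buf = io.BytesIO()
    write_upperair_record(buf, "1993-03-13_00:00", "72327 ",
                          361, -866, 210,
                          [85000.0, 70000.0], [1.0, 2.0], [3.0, 4.0],
                          [-3.0, -8.0], [0.25, 0.5])
    buf.seek(0)
    rec = read_upperair_record(buf)
    print(rec[5])   # 2
```

The key point the sketch illustrates is that JTOTLV must be read before the arrays can be sized, exactly as the implied-DO loops in the Fortran READ statement do.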
13.13.7 Data format for raw upper-air observations file
This is an output file written by program Rawins before the upper-air data have gone through error checks. The local file name is rawobs.out, the NCAR MSS filename is RAOBS_DOMAINx, and the Fortran unit number in program Rawins is unit 11. This file is an unformatted Fortran output file. Each record contains the upper-air observations for a single station. The Fortran READ statement for each record is:
READ (11) JIDENT,NTT,POT,TOT,HOT,ZOT,NWW,PWW,PWT,DOT,SOT,
          LAT,LON,JSFCEL,JYR,JMO,JDY,JHR
13.13.8 Description of raw upper-air observation variables

Name     Description
JIDENT   station identifier (character*6)
NTT      number of pressure levels for temperature and height reports
POT      pressure values for temperature and height reports
TOT      temperature (C)
HOT      dew point temperature (C)
ZOT      height (m)
NWW      number of pressure levels for wind reports
PWT      pressure values for wind reports
DOT      wind direction (this is NOT rotated to the model grid)
SOT      wind speed (knots)
LAT      station latitude * 10
LON      station longitude * 10
JSFCEL   station elevation (integer)
JYR      year
JMO      month
JDY      day
JHR      hour
14: UTILITY PROGRAMS

Utility Programs

Purpose 14-3
Utility Programs 14-3
readv3.f 14-3
ieeev3.csh 14-4
V2-to-V3 Converter 14-4
V3-to-V2 Converter 14-5
Get Scripts 14-5
Fetch 14-6
Cray-to-IBM Converters 14-6
tovis5d 14-7
MM5toGrADS 14-8
14
Utility Programs
14.1 Purpose
A number of utility programs are available to users. These programs are intended to help users work with MM5 input and output data. The programs and program tar files may be obtained from NCAR's anonymous ftp site, ftp://ftp.ucar.edu/mesouser/MM5V3/Util, and from the NCAR SCD disk: /fs/othrorgs/home0/mesouser/MM5V3/Util.
14.2 Utility Programs

14.2.1 readv3.f
Function
This utility reads any V3 MM5 output file and prints out the header, partial sub-headers, and a sample value from every field in the dataset. It can serve as a starting point for building a user's own utility programs for data processing and analysis.
How to Run It
This program is written in free-format FORTRAN. To compile it on a Compaq machine:
f90 -free -convert big_endian readv3.f
To compile on a Linux machine, type
pgf90 -Mfreeform -pc 32 -byteswapio readv3.f
To compile on an SGI, type
f90 -freeform readv3.f
To run it, type
a.out v3-filename
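The -convert big_endian and -byteswapio compile flags exist because V3 files are big-endian, while Compaq and Linux/Intel machines are little-endian. The effect of reading the same bytes with the wrong byte order can be seen with a small Python check (illustrative, not part of readv3.f):

```python
import struct

raw = b"\x00\x00\x00\x01"
big = struct.unpack(">i", raw)[0]     # interpreted big-endian
little = struct.unpack("<i", raw)[0]  # interpreted little-endian (byte-swapped)
```

Without the byte-swap flags, a little-endian machine would read the file headers as nonsense values like the second one.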
14.2.2 ieeev3.csh
Function
This script converts MM5 modeling system output data from Cray binary to standard IEEE format. It runs only on Crays, since it must read Cray binary data.
How to Run It
Obtain the script from ~mesouser/MM5V3/Util on NCAR's IBM, or from the ftp://ftp.ucar.edu/mesouser/MM5V3/Util directory. To run it, type
ieeev3.csh v3-filename-in-Cray-format
It creates an IEEE file named v3-filename-in-Cray-format.ieee.
14.2.3 V2-to-V3 Converter
Function
This utility converts all V2 modeling system output to V3 format, including the boundary condition file. It is intended for users who have data in V2 format and would like to use it in the V3 system.
How to Run It
This program is written in free-format FORTRAN 90 and consists of a main program and a few modules. To compile it, simply type 'make'; two executables will be built: v22v3.exe and readv3.exe. To run it, type
v22v3.exe v2-filename
The converted file is named v2-filename.v3.
To convert a V2 boundary condition file, one must convert the boundary file together with the mminput file that corresponds to it. For example,
v22v3.exe mminput_domain1 bdyout_domain1
This creates three V3 files named mminput_domain1.v3, bdyout_domain1.v3 and lowbdy.v3 (the last is a new file in V3 containing the lower boundary condition fields, such as substrate temperature and SST).
14.2.4 V3-to-V2 Converter
Function
This utility converts ONLY V3 MM5 model output to V2 format. It is intended to give users a smooth transition until all the utility programs they have developed are converted for V3.
How to Run It
This program is written in free-format FORTRAN 90 and is built similarly to v22v3. To compile it, simply type 'make'; two executables will be built: v32v2.exe and readv2.exe. To run it, type
v32v2.exe v3-mm5-filename
It creates an output file named v3-mm5-filename.v2.
14.2.5 Get Scripts
Function
These job scripts may be used to obtain analysis data for REGRID from NCAR's data archive. They can be run on NCAR's IBM either interactively or in batch mode, and should also run on other NCAR/SCD machines that have access to the MSS. The available scripts are:
get_on84   NCEP GDAS data in ON84 format (dss.ucar.edu/datasets/ds082.0)
get_ncep   NCEP GDAS data in GRIB format (dss.ucar.edu/datasets/ds083.0)
get_fnl    NCEP Final Analysis data in GRIB format (dss.ucar.edu/datasets/ds083.2)
get_nnrp   NCEP Global Reanalysis data in GRIB format (dss.ucar.edu/datasets/ds090.0)
get_awip   NCEP Eta model data (the AWIP data, GRID 212) (dss.ucar.edu/datasets/ds609.2)
get_era    ECMWF Reanalysis data (dss.ucar.edu/datasets/ds115)
get_toga   ECMWF TOGA data (dss.ucar.edu/datasets/ds111.2)
How to Run It
Obtain the script for the data you wish to download from ~mesouser/MM5V3/Util, edit the dates at the top of the script to specify the times you are interested in, and run the script. The variables to edit are:

startdate   Start date from which data will be extracted, format YYYY-MM-DD+HH
ndates      Number of time periods (e.g., if data are available at 12-hour intervals, ndates=3 will give you 24 hours of data, but if the data are available every 6 hours, ndates=3 will give you only 12 hours of data)
itimint     Interval of the available data (default is the available time interval)
Once the script has completed, the extracted data will be available in /ptmp/$USER/REGRID/pregrid/nnrp (for NNRP data), /ptmp/$USER/REGRID/pregrid/era (for ERA data), /ptmp/$USER/REGRID/pregrid/grib_misc (for AWIP data), etc.
14.2.6 Fetch
Function
This job script may be used to obtain data for LITTLE_R and RAWINS from NCAR's data archive. It can be run on NCAR's IBM either interactively or in batch mode, and should also run on other NCAR/SCD machines that have access to the MSS.
How to Use It
The program is written in free-format FORTRAN 90 and is distributed in a program tar file. To use it, get fetch-little_r-data.deck.ibm from the ~mesouser/MM5V3/Util directory if you want to obtain data for LITTLE_R (fetch.deck is also in the LITTLE_R/util directory in the program tar file), or get fetch-rawins-data.deck.ibm from the same directory if obtaining data for Rawins. Edit the deck to define the starting and ending dates (starting_date and ending_date in the deck). Either type
fetch-little_r-data.deck.ibm
to run it interactively (in /ptmp/$USER), or type
llsubmit fetch-little_r-data.deck.ibm
to submit the deck to the IBM as a batch job.
14.2.7 Cray-to-IBM Converters
Function
There are two programs that convert Cray binary data to IEEE-formatted data.
These programs must be run on NCAR's IBM, since they require a special library.

cray2ibm.f                convert MM5 V3 output from Cray binary to IEEE-formatted data
cray2ibm-intermediate.f   convert intermediate files (as produced by pregrid) from Cray binary to IEEE-formatted data
How to Use It
The programs can be obtained from NCAR's computer under the ~mesouser/MM5V3/Util directory. To compile on the IBM, type
xlf90 -O -o cray2ibm.exe cray2ibm.f -L/usr/local/lib32/r8i4 -lncaru
or
xlf90 -O -o cray2ibm.exe cray2ibm-intermediate.f -L/usr/local/lib32/r8i4 -lncaru
To run it, type
cray2ibm.exe filename
14.2.8 tovis5d
Function
This utility converts MM5 V3 (and MM5 V2) σ-level data (MMINPUT_DOMAINx and MMOUT_DOMAINx) to a form Vis5D can accept. The program can also calculate diagnostic fields selected by the user via a namelist option.
How to Run It
The new version of the tovis5d program is written in FORTRAN 90. The program tar file can be downloaded from ftp://ftp.ucar.edu/mesouser/MM5V3/Util/tovis5d.tar.gz. When it is uncompressed and untarred, a directory TOVIS5D/ is created. To compile the program, cd to TOVIS5D and type 'make', which will return a list of make commands you can use. To compile on a Compaq Alpha machine, type 'make dec'. If 'make' is successful, an executable named tovis5d will be built and linked to the top directory. To select namelist options, edit tovis5d.csh. To run it, type
tovis5d.csh mm5-filename
It creates a file for Vis5D named vis5d.file, and a log file named tovis5d.log. For detailed instructions, please read the README file inside the tar file.
14.2.9 MM5toGrADS
Function
MM5toGrADS is a utility program for the MM5 modeling system that converts MM5 output to data that can be displayed with the GrADS software (which can be freely downloaded from http://grads.iges.org/grads). The advantage of this software is that it does not need any special libraries to run, and the user can create plots interactively. To be able to display the data, GrADS must be loaded on your system, and the user must have at least a basic understanding of the GrADS software. Development of this software was primarily done by George H. Bryan of Pennsylvania State University. The software has been supported by mesouser since the beginning of 2002. MM5toGrADS can plot output from most of the MM5 programs: TERRAIN_DOMAINx, REGRID_DOMAINx, LITTLE_R_DOMAINx, RAWINS_DOMAINx, MMINPUT_DOMAINx, LOWBDY_DOMAINx, and MMOUT_DOMAINx.
Namelist

Table 14-1: MM5toGrADS namelist RECORD1

Namelist Variable   Description
TIMIN        First model output time to process.
TIMAX        Last model output time to process. Set this to -99 to get all times from TIMIN to the last time available in the input file.
NSKIP        Skip increment.
IFLINUX      If you need to byte-swap data on your machine, set to 1.
IFMAP        Interpolate the map background.
IFSFC        Set to 1 if only processing surface data.
IFSKEW       Set to 0 if 3D fields are generated, or to 1 if data from a single point is required.
ISKW, JSKW   The i and j point location when generating data for a single point only (the case where IFSKEW=1).
ZTYPE        =1, data will be displayed on the native vertical coordinate of the dataset; =2, data will be interpolated to pressure levels (must also set plev).
plev         Pressure levels to interpolate to (for ZTYPE=2).
RECORD10, 11, 12 and 13 are lists of switches to either plot ("1") or skip ("0") a specific variable:
• RECORD10 is a list of native 3D variables that can be plotted;
• RECORD11 is a list of derived 3D variables;
• RECORD12 is a list of native 2D variables; and
• RECORD13 is a list of derived 2D variables.
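A RECORD1 fragment might look like the following; the variable names are those in Table 14-1, but the values and the exact record layout are illustrative only — use the namelist template shipped in the MM5toGrADS tar file as the authoritative reference (in particular, check that file for the units expected by plev).

```
&RECORD1
 TIMIN   = 0,
 TIMAX   = -99,
 NSKIP   = 1,
 IFLINUX = 0,
 IFMAP   = 1,
 IFSFC   = 0,
 IFSKEW  = 0,
 ISKW    = 1,
 JSKW    = 1,
 ZTYPE   = 2,
 plev    = 1000., 850., 700., 500., 300.,
/
```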
How to Run It
1) Obtain the source code tar file from one of the following places:
   Anonymous ftp: ftp://ftp.ucar.edu/mesouser/MM5V3/MM5toGrADS.TAR.gz
   On NCAR MSS: /MESOUSER/MM5V3/MM5toGrADS.TAR.gz
2) gunzip the file and untar it. A directory MM5toGrADS will be created; cd to MM5toGrADS.
3) Type 'make' to create an executable for your platform.
4) Edit the namelist to set up the plotting parameters and choose which fields to process.
5) Edit the mm5_to_grads.csh script to indicate the input and output file names.
6) Create the graphics output by running the mm5_to_grads.csh script. This will generate the GrADS .dat and .ctl files.
7) View the output by invoking the GrADS software. Example:
   grads -l -c "open grads_output"
   (where grads_output is the .ctl file created in step 6)
15: EXERCISE
EXERCISE

Test Case 15-3
Obtaining Program Tar Files 15-3
Getting Started 15-3
Experiment Design 15-4
Terrain and Land-Use Data 15-4
Objective Analysis 15-4
Interpolation 15-5
Model Simulation 15-5
Viewing Model Output 15-6
15
EXERCISE
15.1 Test Case
The test case used in this class is the 'Storm of the Century' (SOC) case of 13-14 March 1993. Our model simulation will start at 0000 UTC 13 March 1993 and run for 24 hours, until 0000 UTC 14 March. You can read more about the case at the end of this chapter.
15.2 Obtaining Program Tar Files
Create a working directory, such as 'tutorial' or 'MM5V3', cd to it, and obtain the MM5 program tar files from NCAR's anonymous ftp site:

> ftp ftp.ucar.edu
Name: anonymous
Password: your-email-address
> cd /mesouser/MM5V3
> binary
> prompt
> mget *.gz
> quit

'prompt' and 'mget' allow you to download all files ending in .gz. Uncompress each program tar file:
gunzip X.tar.gz
and untar it:
tar -xvf X.tar
15.3 Getting Started Before you start to work on the first program, TERRAIN, in the MM5 modeling system, check to
see if you have NCAR Graphics installed on your machine, since program TERRAIN makes good use of NCAR Graphics when you configure the model domains. Refer to Chapter 2, section 2.3 to see if NCARG_ROOT is set. If your NCAR Graphics is installed in a directory other than those listed, you may need to edit the Makefiles in TERRAIN, RAWINS and GRAPH to change how the NCAR Graphics library is loaded (look for LOCAL_LIBRARIES for your particular platform). Once you have done this, you are ready to start with program TERRAIN.
15.4 Experiment Design
Figure 15.1 shows the model domain set-up for the SOC case. The domain configurations for the case study are summarized below.
Coarse domain (domain 1): Centered at 36.0 N and 85.0 W (remember that west longitude is negative). The grid size is 90 km, and the domain IX (north-south) and JX (east-west) dimensions are 35x41. Use an expanded coarse domain for the objective analysis; the expansion on each side of the coarse domain is 360 km.
Nest domain 1 (domain 2): Resides inside the coarse domain. Domain 2 dimensions are 49x52, and the grid spacing is 30 km. The (1,1) point of the fine mesh is located at (10, 17) in the coarse mesh.
15.5 Terrain and Land-Use Data
Run the TERRAIN program on the two domains according to Fig. 15.1 to obtain the terrain and land-use data for REGRID. Use either the old 13-category landuse data or the new landuse data. Figure 15.2 shows the terrain maps for the two domains. For this exercise we will not use the land-surface model option.
15.6 Objective Analysis
Use the NCEP ON84 first-guess field and NCEP ON84 SST for REGRID. Obtain the first-guess fields on the coarse domain only, at the 10 mandatory pressure levels (default) plus a few new levels. The mandatory pressure levels are 100000, 85000, 70000, 50000, 40000, 30000, 25000, 20000, 15000, and 10000 Pascals; the new levels are 95000, 92500, 90000, 80000, 75000, 65000, 60000, 55000, 45000, and 35000 Pascals. Define ptop_in_Pa = 10000 Pascals. Obtain the first guess for 3 time periods: 1993-03-13_00, 1993-03-13_12, and 1993-03-14_00. Use the snow-cover option. (Note: if you choose to run little_r later, you must create all the new levels in REGRID.) Run little_r (or Rawins) for the coarse domain only. Create the surface FDDA file for the coarse domain. Analyzed sea-level pressure fields are shown in Fig. 15.3.
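As a quick sanity check, the mandatory and additional levels above merge into a single strictly decreasing set of 20 analysis levels (a Python sketch; the values are copied from the text):

```python
# REGRID analysis levels from the exercise, in Pascals.
mandatory = [100000, 85000, 70000, 50000, 40000, 30000,
             25000, 20000, 15000, 10000]
new_levels = [95000, 92500, 90000, 80000, 75000, 65000,
              60000, 55000, 45000, 35000]

# Merge into one strictly decreasing list.
all_levels = sorted(set(mandatory + new_levels), reverse=True)
```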
15.7 Interpolation
Define the full σ levels in the namelist.input file as 1., .99, .98, .96, .93, .89, .85, .8, .75, .7, .65, .6, .55, .5, .45, .4, .35, .3, .25, .2, .15, .1, 0.05, and 0.0. Use INTERPF to obtain the initial, lateral and lower boundary condition files for domain 1, to run MM5 for the 3 time periods. Set isfc = 1, usesfc = .TRUE., and the reference state parameters p0, tlp and ts0.
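INTERPF expects the full σ levels to run monotonically from 1.0 at the surface to 0.0 at the model top. A short check of the list above (Python sketch; check_sigma_levels is a hypothetical helper, not part of INTERPF):

```python
def check_sigma_levels(levels):
    # Full sigma levels must start at 1.0, end at 0.0,
    # and decrease strictly in between.
    if levels[0] != 1.0 or levels[-1] != 0.0:
        return False
    return all(a > b for a, b in zip(levels, levels[1:]))

# The 24 full sigma levels from the exercise (23 model layers).
levels = [1.0, .99, .98, .96, .93, .89, .85, .8, .75, .7, .65, .6,
          .55, .5, .45, .4, .35, .3, .25, .2, .15, .1, 0.05, 0.0]
```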
15.8 Model Simulation
Run the model on the desktop workstation, starting by obtaining the mm5 tar file. Start the MM5 model at 0000 UTC 13 March 1993 and integrate for 24 hours. Use the boundary and initial condition files from Interpf for domain 1, and the domain 2 terrain file from the Terrain program, as input. Output history files every 6 hours. Save files for restart and set SVLAST = .TRUE.. Set the time step to 270 sec. Use IOVERW = 2 for the nest. Use the following physics in the MM5 simulation; remember that some switches are set in the configure.user file and some in mm5.deck.
1) Run 1 (this run requires about 40 Mb of memory and 2100 sec if you are the only user running):
•Grell convective scheme; simple explicit scheme with ice on domains 1 and 2.
•Cloud radiation scheme with a radiation frequency of 30 minutes.
•MRF PBL with the multi-layer soil model.
•Moist vertical diffusion in clouds.
•3-D Coriolis force for nonhydrostatic MM5.
•Radiative top boundary condition.
•With surface fluxes.
•Cloud effect on radiation.
•Account for snow-cover effects (IFSNOW = 1).
•No shallow convection.
2) Run 2 (optional):
Option 1:
•First run the model for 12 hours. Save files for restart.
•Restart the model at hour 12, and continue the integration to 0000 UTC 14 March.
Option 2:
•Use all of the above physics options.
•Do 3D and surface analysis nudging on domain 1. No FDDA on domain 2.
Option 3:
•Use all of the above physics options.
•Design and move the second domain at 3-h intervals. Use only the coarse-mesh initial condition file and let the model interpolate all fields to the fine mesh. Note: ideally, when using the moving-nest option, one should rerun terrain for 1 domain only.
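The 270-s time step is consistent with the common MM5 rule of thumb that the time step in seconds should be about three times the grid distance in kilometers (a sketch; the factor of 3 is a guideline, not a hard limit):

```python
def suggested_timestep_seconds(dx_km: float) -> int:
    # Rule of thumb for MM5: dt (s) ~ 3 x dx (km),
    # e.g. 270 s for the 90-km coarse grid used in this exercise.
    return int(3 * dx_km)
```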
MM5 simulated sea-level pressure and precipitation are shown in Fig. 15.4.
15.9 Viewing Model Output
Use the RIP or Graph program to view the model output you generate. For RIP, follow the instructions given in Appendix G. For Graph, follow the instructions given in Chapter 12. Run RIP/Graph for both domain 1 and domain 2 model outputs.
Figure 15.1 MM5 model domain configuration for the Storm of the Century case (2 domains: Domain 1 and Domain 2).
Fig. 15.2 The terrain maps of the coarse (Domain 1) and fine (Domain 2) mesh. The contour interval is 100 m.
Figure 15.3 Objective analyses of sea-level pressure fields from RAWINS at (a) 0000 UTC 13, (b) 1200 UTC 13, and (c) 0000 UTC 14 March 1993.
Figure 15.4 Simulated sea-level pressure valid at (a) 1200 UTC 13 March and (c) 0000 UTC 14 March, and 6-hour accumulated precipitation valid at (b) 1200 UTC 13 March and (d) 0000 UTC 14 March.
The following is a reprint of an article from the Web page: NATIONAL CLIMATIC DATA CENTER RESEARCH CUSTOMER SERVICE GROUP TECHNICAL REPORT 93-01 THE BIG ONE! A REVIEW OF THE MARCH 12-14, 1993 “STORM OF THE CENTURY” NEAL LOTT, PHYSICAL SCIENTIST, MAY 14, 1993
****************************************************************************** On March 12-15, a storm now called “The Storm of the Century” struck the eastern seaboard. Following are the highlights of the information gathered about the storm thus far: 1) The preliminary death toll for the U.S. is approximately 270, and 48 people were reported as missing at sea (Gulf of Mexico and Atlantic, including Canadian waters). This is over 3 times the combined death toll of 79 attributed to hurricanes Hugo and Andrew. The death toll includes those caused by direct and indirect (e.g., shoveling snow) results of the storm. Due to the widespread nature of the storm, assessing its toll has been quite difficult for damage survey teams--hurricanes are easier to assess due to their more limited areal coverage. The following breakdown by state (not including lost at sea) is still preliminary (its summation does not reflect all deaths from the storm): Florida 44; New York 23; South Carolina 1; Alabama 16; Georgia 15; Tennessee 14, North Carolina 19; Kentucky 5; Virginia 13; Maryland 3; West Virginia 4; Maine 2; Pennsylvania 49 2) Thousands of people were isolated by record snowfalls, especially in the Georgia, North Carolina, and Virginia mountains. Over 200 hikers were rescued from the North Carolina and Tennessee mountains. Curfews were enforced in many counties and cities as ‘states of emergency’ were declared. The National Guard was deployed in many areas to protect lives and property. Generally, all interstate highways from Atlanta northward were closed. 3) For the first time, every major airport on the east coast was closed at one time or another by the storm. The Asheville, NC airport was closed for 3 days. Snowfall rates of 2-3 inches per hour were common during the height of the storm. Generally, New York’s Catskill Mountains along with most of the central and southern Appalachians received at least 2 feet of snow. 
In areas to the east, wind-driven sleet occurred in some areas, with central New Jersey reporting 2.5 inches of sleet on top of 12 inches of snow--somewhat of an “ice-cream sandwich” affect. 4) Hundreds of roof collapses occurred due to the weight of the heavy wet snow. Over 3 million customers were without electrical power at one time due to fallen trees and high winds. 5) At least 18 homes fell into the sea on Long Island due to the pounding surf. About 200 homes along North Carolina’s Outer Banks were damaged and may be uninhabitable. Over 160 people were rescued at sea by the Coast Guard in the Gulf of Mexico and Atlantic. At least 1 freighter sank in the Gulf of Mexico. 6) Florida was struck by an estimated 15 tornadoes, and 44 deaths in Florida were attributed either to the tornadoes or other severe weather. A 12-foot storm surge occurred in Taylor County, FL resulting in at least 7 deaths. Also, up to 6 inches of snow fell in the Florida panhandle. 7) 3 storm-related deaths were reported in Quebec and 1 in Ontario. About 110 miles south of Cape Sable Island, Nova Scotia, a 177-meter ship sank in heavy seas, with all 33 of its crew lost at sea. 65-foot waves were reported in the area. Also, a wind gust of 131 MPH occurred at Grand Etang, Nova Scotia. Some parts of northern New Brunswick experienced temperature drops of 45 degrees Fahrenheit in 18 hours. 3 deaths occurred in Cuba (Havana was blacked out), and a tornado left 5000 people homeless in Reynosa, Mexico (near Texas border). 
8) Highest recorded wind gusts included: 144 MPH on Mount Washington, NH 109 MPH in the Dry Tortugas (west of Key West, FL) 101 MPH on Flattop Mountain, NC (by NCDC employee Grant Goodge--due to ice accumulation on anemometer, he estimated 105- 107 MPH) 98 MPH in South Timbalier, LA 92 MPH on South Marsh Island, LA 90 MPH in Myrtle Beach, SC 89 MPH in Fire Island, NY 83 MPH in Vero Beach, FL 81 MPH in Boston, MA 71 MPH at La Guardia Arpt, NY 9) Snowfall totals included: 56 inches on Mount LeConte, TN 50 inches on Mount Mitchell, NC (14-foot drifts) 44 inches in Snowshoe, WV 43 inches in Syracuse, NY
36 inches in Latrobe, PA (10-foot drifts) 35 inches in Lincoln, NH 30 inches in Beckley, WV 29 inches in Page County, VA 27 inches in Albany, NY 25 inches in Pittsburgh, PA 24 inches in Mountain City, GA 20 inches in Chattanooga, TN 19 inches in Portland, ME 19 inches in Asheville, NC 17 inches near Birmingham, AL (6-foot drifts) 16 inches in Roanoke, VA 13 inches in Washington, DC 9 inches in Boston, MA 4 inches in Atlanta, GA 10) Record low temperatures included (some records for March): -12 degrees in Burlington, VT and Caribou, ME -11 degrees in Syracuse, NY -10 degrees on Mount LeConte, TN -5 degrees in Elkins, WV -4 degrees in Waynesville, NC and Rochester, NY 1 degree in Pittsburgh, PA 2 degrees in Asheville, NC and Birmingham, AL 6 degrees in Knoxville, TN 8 degrees in Greensboro, NC 1 degree in Beckley, WV 11 degrees in Chattanooga, TN and Philadelphia, PA 15 degrees in New York-JFK and Washington, DC 17 degrees in Montgomery, AL 18 degrees in Columbia, SC and Atlanta, GA 19 degrees in Augusta, GA 21 degrees in Mobile, AL 25 degrees in Savannah, GA and Pensacola, FL 31 degrees in Daytona Beach, FL 11) Record low sea-level pressures included: 28.38 inches in White Plains, NY 28.43 inches in Philadelphia, PA 28.43 inches at JFK Arpt, NY 28.45 inches in Dover, DE 28.51 inches in Boston, MA 28.53 inches in Augusta, ME 28.54 inches in Norfolk, VA 28.54 inches in Washington, DC 28.61 inches in Raleigh-Durham, NC 28.63 inches in Columbia, SC 28.73 inches in Augusta, GA 28.74 inches in Greenville-Spartanburg, SC 28.89 inches in Asheville, NC 12) The National Weather Service’s Office of Hydrology estimated the volume of water that fell as snow as 44 million acre-feet. This is comparable to 40 days’ flow on the Mississippi River at New Orleans. For example, the NWS office at the Asheville, NC airport reported a snow/water ratio of 4.2 to 1 from core samples of new snow. 
Numerous core samples taken in a nearby area by an NCDC employee showed similar results with a ratio of 5.3 to 1. This equated to 4-5 inches of liquid equivalent precipitation (or even higher in some areas) from the storm. Areas north of Asheville which reported up to 4 feet of snow probably received ‘dryer’ snow with similar liquid equivalent amounts. Due to the weight of the heavy snow, damage to trees and some buildings was extensive. Polk County, NC reported 99% of its electrical customers without power at one point during the storm. Some sleet also occurred during the storm, contributing to its ‘heavy’ nature. Many of the power outages occurred before the high winds arrived (due to snow-induced tree damage). 13) Overall damage figures are not yet complete, but the insured property damage estimates now exceed $1.6 billion. Therefore, this was the 4th costliest storm in U.S. history, and by far the most costly extra-tropical storm. Some estimates of total damages and costs from the storm now exceed $6 billion.
Appendix A: Derivation of Basic MM5 Equations

A.1 Derivation of Thermodynamic Equation

First Law of Thermodynamics:

dQ = c_v \, dT + p \, d\alpha = c_p \, dT - \alpha \, dp \qquad (A.1)

since from the gas law R \, dT = p \, d\alpha + \alpha \, dp and c_p - c_v = R. The temperature tendency is therefore given by

c_p \frac{DT}{Dt} = \frac{1}{\rho}\frac{Dp}{Dt} + \dot{Q} \qquad (A.2)

A.2 Derivation of Pressure Tendency Equation
From the gas law,

\frac{1}{p}\frac{Dp}{Dt} = \frac{1}{\rho}\frac{D\rho}{Dt} + \frac{1}{T}\frac{DT}{Dt} \qquad (A.3)

Continuity and thermodynamics lead to

\frac{1}{p}\frac{Dp}{Dt} = -\nabla\cdot\mathbf{v} + \frac{\dot{Q}}{c_p T} + \frac{1}{c_p \rho T}\frac{Dp}{Dt} \qquad (A.4)

However, c_p \rho T = (c_p / R)\, p, so

\frac{1}{p}\frac{Dp}{Dt}\left(1 - \frac{R}{c_p}\right) = -\nabla\cdot\mathbf{v} + \frac{\dot{Q}}{c_p T} \qquad (A.5)

But 1 - R/c_p = c_v/c_p = 1/\gamma, therefore

\frac{Dp}{Dt} = -\gamma p \, \nabla\cdot\mathbf{v} + \frac{\gamma p \dot{Q}}{c_p T} \qquad (A.6)
A.3 Forms of the Vertical Momentum Equation

\frac{Dw}{Dt} + \frac{1}{\rho}\frac{\partial p}{\partial z} + g = D_w \qquad (A.7)

Defining \alpha = 1/\rho,

\frac{Dw}{Dt} + \alpha\frac{\partial p}{\partial z} + g = D_w \qquad (A.8)

Defining a hydrostatic reference state and a perturbation, \alpha = \alpha_0 + \alpha', p = p_0 + p',

\frac{Dw}{Dt} + (\alpha_0 + \alpha')\left(\frac{\partial p_0}{\partial z} + \frac{\partial p'}{\partial z}\right) + g = D_w \qquad (A.9)

By definition, \alpha_0 \frac{\partial p_0}{\partial z} = -g, so

\frac{Dw}{Dt} + \alpha'\frac{\partial p_0}{\partial z} + \alpha_0\frac{\partial p'}{\partial z} + \alpha'\frac{\partial p'}{\partial z} = D_w \qquad (A.10)

which can be written as

\frac{Dw}{Dt} + \alpha\frac{\partial p'}{\partial z} - g\frac{\alpha'}{\alpha_0} = D_w \qquad (A.11)

This can be expanded as

\frac{Dw}{Dt} + \alpha\frac{\partial p'}{\partial z} - g\frac{\alpha - \alpha_0}{\alpha_0} = D_w \qquad (A.12)

In terms of \rho, this is

\frac{Dw}{Dt} + \frac{1}{\rho}\frac{\partial p'}{\partial z} - g\frac{1/\rho - 1/\rho_0}{1/\rho_0} = D_w \qquad (A.13)

which is
\frac{Dw}{Dt} + \frac{1}{\rho}\frac{\partial p'}{\partial z} + g\frac{\rho'}{\rho} = D_w \qquad (A.14)
The buoyancy term can be expressed in terms of temperature and pressure perturbations because

-\frac{\rho'}{\rho} = \frac{\rho_0}{\rho} - 1 = \frac{p_0 T}{p T_0} - 1 = \frac{p_0}{p}\left(\frac{T'}{T_0} - \frac{p'}{p_0}\right) \qquad (A.15)

So

\frac{Dw}{Dt} + \frac{1}{\rho}\frac{\partial p'}{\partial z} - g\frac{p_0}{p}\left(\frac{T'}{T_0} - \frac{p'}{p_0}\right) = D_w \qquad (A.16)
A.4 Coordinate Transformation

General coordinate transformation (x, y, z) \rightarrow (x, y, \sigma):

\left(\frac{\partial}{\partial x}\right)_z \rightarrow \left(\frac{\partial}{\partial x}\right)_\sigma - \left(\frac{\partial z}{\partial x}\right)_\sigma \frac{\partial}{\partial z} \qquad (A.17)

but \delta z = -\frac{\delta p_0}{\rho_0 g} = -\frac{p^* \delta\sigma + \sigma \, \delta p^*}{\rho_0 g}, so

\left(\frac{\partial}{\partial x}\right)_z \rightarrow \left(\frac{\partial}{\partial x}\right)_\sigma - \frac{\sigma}{p^*}\frac{\partial p^*}{\partial x}\frac{\partial}{\partial \sigma} \qquad (A.18)

A.5 Derivation of the \dot{\sigma} Relation

\sigma = \frac{p_0 - p_{top}}{p_{surf} - p_{top}} = \frac{p_0 - p_{top}}{p^*} \qquad (A.19)
where p_{top} and p_{surf} are the values of p_0 at the top and at the surface, and p^* = p_{surf} - p_{top}.

\dot{\sigma} = \frac{D\sigma}{Dt} \qquad (A.20)

Therefore

\dot{\sigma} = \frac{1}{p^*}\frac{Dp_0}{Dt} - \frac{p_0 - p_{top}}{(p^*)^2}\frac{Dp^*}{Dt} \qquad (A.21)

Expanding the total derivatives, noting that p_0 = p_0(z) and p^* = p^*(x,y), and also that p_0 is hydrostatic,

\dot{\sigma} = -\frac{\rho_0 g}{p^*} w - \frac{\sigma}{p^*}\left(u\frac{\partial p^*}{\partial x} + v\frac{\partial p^*}{\partial y}\right) \qquad (A.22)
Appendix B
Appendix B: MM5 Model Code General Notes
The model source code for Version 3 contains more than 220 subroutines and about 55,500 lines of code, including comments. It is written to be portable to many platforms: its Fortran is generally standard, and it is self-contained in that it does not require additional libraries to run.
Vectorization
Since the code was developed originally for efficiency on a Cray, it is written to vectorize as efficiently as possible. A vectorized loop on these machines is many times faster than an unvectorized one. To achieve this, the inner do-loops are often in a horizontal direction, both to maximize the vector length and to reduce the possibility of index dependencies that would inhibit vectorization. This is not optimal for non-vector machines with small caches, because it can lead to more frequent memory calls as these loops are executed, as opposed to the case where the inner loop is short. In practice, however, the code runs well on RISC/cache machines. Even though most of the physics operates on vertical columns, the physics routines take a whole north-south slice (see next section) so that vectorization can be done over the I-index (south-north) (Fig. B1).
Figure B1: array orientation — k is the vertical index, i (south-north) is the vector direction, and j (west-east) is the parallel direction.
Parallelization
Again because of the development on shared-memory parallel Cray processors, the code is structured to make efficient use of them. Use of multiple processors in parallel speeds up a task by an amount depending on the parallel efficiency; typically for MM5, eight processors can speed up a job by a factor of six. Although this costs more in CPU time, it has benefits in real-time forecast applications in getting the forecast out quickly, and sometimes the charging algorithm favors efficiently parallelized jobs, as they may get the benefit of special low-cost queues. The code also contains parallelization directives for SGIs and can in principle be parallelized on any multi-processor workstation.
To achieve efficient parallelization, outer do-loops are often distributed across processors. For instance, there are several parallel J-loops in the SOLVE routine. J is the west-east horizontal index, and by having the outer loop over this index the physics calculations are done in north-south vertical slices. When a J-loop is multi-tasked, each value of J goes to a different processor, so that each operates on a different north-south slice. As processors finish the calculations in their slice they take the next available one. It can be seen that the code has to be written carefully to allow this to work: essentially, each J-slice's calculations must be independent of the results generated by other slices.
Multi-tasked sections of code have a clear distinction between shared and private (or local) memory. Shared memory is seen by all the processors, while private memory is seen only by a single processor, as each processor has its own copy of these arrays. Often the multi-tasking preprocessor is able to decide which variables are shared or private, but some declarations are required, particularly for variables passed into subroutines within a multi-tasked loop.
In general, scalars and arrays that are constant in the parallel region (read only) should be shared, while those that do change their value (are written to) should be private. The major exception to this is an array with an index corresponding to the multi-tasking index (e.g. a J index in a DO J multi-tasked loop). These arrays have to be shared, and special care is required if the array is written to. It is safest to avoid references to J+1, J-1, etc. elements within a multi-tasked J loop: the computation will execute the J loop in essentially random order, so no dependence on results from other J-slices should exist.
Common blocks within parallel sections of code also have to be treated carefully. If each task needs its own copy of a common block, such as for storing temporary variables that are not dimensioned by the tasked loop index (usually J), there is a Cray directive that accomplishes this:
CDIR$ TASKCOMMON common-block-name
A new standard for parallel directives, now recognized by many vendors, is OpenMP. The OpenMP directive for the above looks like
c$omp threadprivate (/common-block-name/)
On older SGI compilers (without OpenMP) this is achieved by a special -Xlocal declaration in the load options. Without such directives, all tasks will attempt to use the same memory space, leading to unpredictable results. Note that this is rarely required in MM5, since most common blocks contain domain-wide variables and constants. However, the Burk-Thompson and Gayno-Seaman PBL schemes use common blocks for storing temporary values to pass between their own subroutines, as do the Noah land-surface model and the RRTM radiation scheme.
Parallelization is implemented by placing a special directive ahead of the parallel loop. For example, the following loop parallelizes over J:

cmic$ do all autoscope
c$doacross
c$& share(klp1,qdot,wtens,il,jl),
c$& local(i,j)
c$omp parallel do default(shared)
c$omp private(i,j)
      DO J=1,JL
        DO I=1,IL
          QDOT(I,J,KLP1)=0.
          WTENS(I,J,KLP1)=0.
        ENDDO
      ENDDO

where cmic$ represents a Cray directive, c$ represents SGI directives, and c$omp represents OpenMP directives. Note that these appear as comments to a Fortran compiler and only have special meaning to the parallel preprocessors.
Use of pointers

Perhaps the most nonstandard aspect of the code is the use of Cray pointers, which are now supported on most platforms. In the model these are used to allow the code to operate on multiple domains without the need for an additional array index to identify the domain. When the code is doing calculations for a given nest, the pointers give the locations in memory of all the arrays associated with that nest. Thus subroutines using these arrays, such as UA (a 3D array of the x-direction wind component), have to have a pointer statement as follows:

      POINTER ( IAUA, UA(MIX,MJX,MKX) )

IAUA is an address locating the first element of UA, which is dimensioned by parameters MIX, MJX and MKX. The model typically uses about 300 such addresses to locate all the information on a given nest. These 300 variables, representing everything from 0-dimensional scalars to 4-dimensional arrays, are actually stored end-to-end in two super-arrays, one for reals (ALLARR) and one for integers (INTALL) (Fig. B2); there are also additional super-arrays for FDDA. The pointers locate the starting position of each variable in the super-array. This array is dimensioned by the sum of all the array sizes, which may reach a few million, as its first index, and by the number of domains as its second index. Routine ADDALL, called once at the beginning of the simulation, calculates all these addresses based on the sizes of the arrays, gets their absolute addresses with the LOC function, and stores these in a 2D array (IAXALL) dimensioned by about 300 and the number of domains. Each time the model calculations shift from one domain to another, the addresses in the pointers, such as IAUA, have to be changed by a call to routine ADDRX1C, which takes the new domain number as an argument. Sometimes information for two domains is needed at once, such as when a nest feeds back to the coarser mesh, and ADDRX1N is used to locate the addresses for the second domain.
These routines take the relevant addresses from IAXALL and put them into common blocks such as /ADDR1/ (see below), overwriting the common blocks each time the routines are called.
      COMMON/ADDR1/ IAUA, IAUB, IAVA, IAVB, IATA, IATB, IAQVA, IAQVB, ..

There are several common blocks of pointers, and these are passed to various routines together with the pointer statements. The use of pointers allows the routines not to require domain-number specifiers. Alternative methods would either require a large number of EQUIVALENCE statements, or equivalencing through passing a large number of arguments into certain routines.
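Because C has real pointers, the super-array scheme can be sketched compactly. In this toy version (names like allarr, iaxall, addall and addrx1c echo, but do not reproduce, the MM5 routines; sizes are made up), one flat array per domain holds every field end-to-end, a precomputed offset table plays the role of IAXALL, and "switching domains" is just resetting a few pointers:

```c
#include <stdlib.h>

/* Toy version of the MM5 super-array: all fields for all domains live
 * end-to-end in static storage; a per-domain offset table (cf. IAXALL)
 * records where each field starts.  Sizes here are illustrative only. */
enum { NFIELDS = 2, MAXNES = 3, FIELD_SIZE = 4 };

static double allarr[MAXNES][NFIELDS * FIELD_SIZE]; /* cf. ALLARR */
static size_t iaxall[MAXNES][NFIELDS];              /* cf. IAXALL */

/* cf. ADDALL: compute each field's starting offset once, per domain. */
void addall(void)
{
    for (int d = 0; d < MAXNES; d++)
        for (int f = 0; f < NFIELDS; f++)
            iaxall[d][f] = (size_t)f * FIELD_SIZE;
}

/* cf. ADDRX1C: point "ua" and "ub" at the requested domain's storage,
 * so callers can work on any domain without a domain array index. */
void addrx1c(int domain, double **ua, double **ub)
{
    *ua = &allarr[domain][iaxall[domain][0]];
    *ub = &allarr[domain][iaxall[domain][1]];
}
```

Calling addrx1c(2, &ua, &ub) after addrx1c(1, &ua, &ub) re-aims the same pointer variables at domain 2's storage, which is the effect the ADDRX1C call has on the pointer common blocks in MM5.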
Figure B2: Storage layout for multiple domains. The real super-array ALLARR has a first dimension of roughly 10^6 (the sum of all array sizes) and a second dimension of MAXNES (the number of domains); for each domain, the variables UA, UB, VA, VB, ... are stored end-to-end. The pointer table IAXALL, dimensioned roughly 300 by the number of domains, holds the addresses IAUA, IAUB, IAVA, IAVB, ... of each variable in each domain.
Distributed-memory version

Distributed-memory machines are becoming increasingly common. These machines can run a gridded domain by distributing the grid across a number of independent processors, each of which calculates and stores information only for a sub-area of the grid. At various points during the calculation there needs to be communication between processors, but the coding has to minimize this to maintain efficiency. In 1998, in release 2.8, we added a capability for MM5 to run on distributed-memory machines. This involves several code pre-compilation steps, and uses mostly the same code as the standard MM5, with 'ifdef MPP' being used to isolate specific differences. The pre-compilation involves two stages. In the first, using FLIC (Fortran Loop and Index Converter), DO loops are replaced by
generic FLIC directives and other areas of the code are modified to allow for distributed memory. In the second stage, RSL (Run-time System Library) commands are inserted; RSL is a high-level library built on the low-level MPI standard for message passing. This leads to an automated code conversion which is run as part of the "make" process when MPP options are selected. John Michalakes (Argonne National Laboratory) has developed FLIC and RSL and applied them to MM5. The resulting extension of MM5 to these platforms makes efficient use of multiple distributed processors on machines such as the IBM SP2 and Cray T3E, and more recently Fujitsu, Compaq, and PC clusters.
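Independent of FLIC and RSL, the core idea of the distributed-memory decomposition is that each processor owns a rectangular sub-area of the grid. The following sketch (a generic even-split decomposition, not the actual RSL algorithm; all names are illustrative) computes the index range a given processor owns along one grid dimension; applying it to both the I and J dimensions yields each processor's patch:

```c
/* Split n grid points among np processors as evenly as possible and
 * return the half-open range [start, end) owned by processor "rank".
 * The first (n % np) processors get one extra point each.  This is a
 * generic decomposition sketch, not the algorithm RSL actually uses. */
void patch_range(int n, int np, int rank, int *start, int *end)
{
    int base  = n / np;   /* points every processor gets        */
    int extra = n % np;   /* leftover points, one to each of    */
                          /* the first "extra" ranks            */
    *start = rank * base + (rank < extra ? rank : extra);
    *end   = *start + base + (rank < extra ? 1 : 0);
}
```

For example, 10 points over 3 processors gives ranges [0,4), [4,7), [7,10): every point is owned by exactly one processor, which is what lets each processor store only its sub-area of the model arrays.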
Brief Code Description

This is a brief, and not very thorough, description of the model's code. The main program is MM5 (filename ./Run/mm5.F). This calls routines to initialize the pointer addresses (ADDALL) and the model constants (PARAM), to restart (INITSAV) or initialize and read in (INIT) the model arrays, and to read in boundary conditions (BDYIN). It then executes the main time loop of the program. If there is no nest, the time loop just has a call to SOLVE and occasional calls to OUTPUT. SOLVE is the routine that calls all the physics and dynamics routines and is responsible for all the model calculations. If there is a nest, there is a call to STOTNDI before SOLVE to define the initial nest boundary values by interpolation from the coarse mesh, and after SOLVE there is a call to a driver routine, NSTLEV1. The main program is also responsible, through CHKNST, for initializing and ending nests.

NSTLEV1 calls STOTNDT, which calculates the nest boundary tendency based on that of the coarse mesh, known after SOLVE has been called. It then executes three nested timesteps, each one third of the coarse mesh's, in which it calls SOLVE for the nested domain, thus advancing that domain to the same time as the coarse mesh. It then calls FEEDBK to overwrite the coarse-mesh values that coincide with nested grid points. If there is a further nested level, NSTLEV1 will also call STOTNDI and NSTLEV2 for each subdomain at the next level.

SOLVE is the main solver routine in which all the fields are advanced to the next time level. It does this by calling routines that calculate tendencies due to advection (VADV, HADV), diffusion (DIFFU, DIFFUT), PBL (e.g. HIRPBL), cumulus (e.g. CUPARA2), explicit moisture (e.g. EXMOISS), radiation (e.g. LWRAD, SWRAD), FDDA (NUDOB, NUDGD), boundary conditions (NUDGE), and dynamics (SOUND). In addition, some tendencies are added within SOLVE itself, such as the adiabatic temperature term and the buoyancy and Coriolis momentum terms.
SOUND handles some terms using a short timestep. These terms, responsible for acoustic modes, are the pressure-gradient terms in the momentum equations and the divergence term in the pressure tendency. Thus the momentum components and pressure are updated only after SOUND, while all the other prognostic variables are updated in SOLVE.
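The calling structure described above can be summarized as a control-flow skeleton. This C sketch mirrors only the structure (one coarse SOLVE per step, then three nested timesteps and a feedback when a nest is active); the function and field names are placeholders for the Fortran routines, with counters standing in for the real physics:

```c
/* Skeleton of the MM5 two-domain time loop: one coarse-mesh SOLVE per
 * step, then three nested SOLVEs (the nest timestep is one third of
 * the coarse one) followed by feedback to the coarse mesh.  Counters
 * stand in for the real calculations; only the control flow matches
 * the description of mm5.F / nstlev1.F, not the real code. */
typedef struct { int coarse_solves, nest_solves, feedbacks; } counters;

void solve_domain(int domain, counters *c)   /* cf. SOLVE */
{
    if (domain == 1) c->coarse_solves++;
    else             c->nest_solves++;
}

void run(int nsteps, int nested, counters *c)
{
    for (int step = 0; step < nsteps; step++) {
        solve_domain(1, c);                    /* coarse mesh          */
        if (nested) {                          /* cf. NSTLEV1          */
            for (int sub = 0; sub < 3; sub++)  /* 3:1 timestep ratio   */
                solve_domain(2, c);            /* nest catches up      */
            c->feedbacks++;                    /* cf. FEEDBK           */
        }
    }
}
```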
Figure: Calling structure of mm5.F. After the start, ADDALL and PARAM are called, then INITSAV (restart) or INIT, then BDYIN. The time loop calls CHKNST, STOTNDI and NSTLEV1 when nesting is active, SOLVE every step, and OUTPUT at output times, until the end of the run.
Figure: Calling structure of nstlev1.F. Entered from MM5, it calls STOTNDT, then in the nest time loop calls SOLVE (plus STOTNDI and NSTLEV2 if there is further nesting), then FEEDBK, before returning to MM5.
Figure: Calling structure of solve.F. Entered from MM5 or NSTLEV, it computes, in order: advection, radiation schemes, PBL schemes, cumulus schemes, the shallow cumulus scheme, horizontal diffusion, lateral boundary tendencies, FDDA tendencies, explicit moisture schemes, and SOUND, before returning to MM5 or NSTLEV.
Appendix C: How to Use Noah Land-Surface Model Option

Since Version 3.6, the Oregon State University / NCEP Eta Land-Surface Model (OSU LSM) in MM5 (Chen and Dudhia 1999) has been replaced by an updated version of the model, known as the Noah LSM, which includes many improvements from NCEP, NCAR, AFWA and UCLA. As with the OSU LSM, using the Noah LSM option in MM5 requires additional inputs to initialize the model.
C.1 Noah LSM Requirements in Pre-Processing Programs

To use the Noah LSM option, the MM5 model requires several additional input fields. The Version 3 TERRAIN program provides an annual-mean deep soil temperature adjusted to model terrain elevation, a monthly climatological vegetation fraction, a dominant soil type, and a dominant vegetation type in each grid cell. All of the inputs to the TERRAIN program are provided by mesouser. The REGRID program provides soil moisture, soil temperature at various depths, water-equivalent snow depth, sea ice, and optionally canopy moisture and soil water content. These additional fields are currently available from the NCEP/NCAR Reanalysis (NNRP, 2.5-degree resolution over the past 40+ years), NCEP's global Final Analyses (FNL, 1.0-degree resolution since Sept 1999) and the Eta AWIP analyses (for the continental US only, about 40-km resolution since May 1995). These datasets are archived at NCAR. The additional soil data are also available in real time from NCEP's ftp site from Eta and AVN model outputs (ftp://ftpprd.ncep.noaa.gov/pub/data/nccf/), or from AFWA-produced AGRMET data (http://www.mmm.ucar.edu/mm5/doc.html). These input data are illustrated in Fig. C-1.

Two additional types of input can be ingested in the REGRID/regridder program as well: 1-degree global maximum snow albedo, and 0.15-degree monthly climatological (snow-free) albedo. The recommendation is to use the maximum snow albedo (which is used in the Noah LSM to limit values of albedo when snow is present), and to use the climatological albedo with caution (for example, one may want to use this field only when the grid size is above the data resolution, which is about 16 km). These two datasets are provided by mesouser.
The maximum snow albedo data are provided in the REGRID tar file (REGRID/regridder/ALMX_FILE), and one can obtain the monthly albedo from
ftp://ftp.ucar.edu/mesouser/MM5V3/REGRID_DATA/MONTHLY_ALBEDO.TAR.gz
or on MSS:
/MESOUSER/DATASETS/REGRID/MONTHLY_ALBEDO.TAR.gz
All LSM fields are passed along in programs RAWINS/LITTLE_R and INTERPF.
Fig. C-1: The MM5 Version 3 modeling system flow chart, with input data to and output data from the Noah LSM shown. Additional inputs enter through TERRAIN from the terrestrial datasets (annual deep soil temperature, soil category, vegetation category, monthly vegetation fraction) and through REGRID and RAWINS/little_r from global/regional analyses (NNRP, NCEP FNL, ECMWF ERA/TOGA, AVN, Eta) and observations (surface, rawinsonde): soil temperature, soil moisture and, optionally, soil water in layers; analysis soil height; water-equivalent snow depth; sea ice; and optionally canopy moisture, snow height, maximum snow albedo and monthly albedo. These pass through INTERPF into MM5. Additional MM5 outputs are soil temperature, moisture and water in 4 layers, surface and underground runoff, canopy moisture, water-equivalent snow depth, snow height, ground heat flux, and surface albedo.
C.2 How to Set Program Switches to Run Noah LSM

Terrain

In terrain.deck, set namelist variable LSMDATA = .T., and select value 1 or 2 for VEGTYPE, e.g.
   VEGTYPE = 1,
(In V3.3 or earlier terrain.deck, select
   set NewLandUseOnly = FALSE
   set LandSurface = TRUE
and either
   set VegType = USGS
or
   set VegType = SiB
The rest of the terrain.deck is the same.)
Note that SiB data are only available over North America, but these categories correspond to those used in the Eta model's operational LSM. Either set can be used in MM5's version of the Noah LSM. Also note that the SiB classification lacks an 'urban' category.

These setups will make use of the terrestrial datasets to create the following additional fields on the model grid in the TERRAIN output:
1. VEGFRCnn (nn=1,12): vegetation fraction monthly climatology
2. TEMPGRD: annual-mean ground temperature adjusted to model terrain elevation
3. SOILINDX: dominant soil type (currently 30-second data over the US, 5-minute elsewhere)

Note that for soil types one may choose either the soil-type data over the top layer (0 - 30 cm) or the bottom layer (30 - 100 cm). Selecting the bottom soil data can be done by uncommenting the script variable BotSoil near the top of terrain.deck. A comparison of top and bottom soil types over the continental US may be found online at http://www.mmm.ucar.edu/mm5/mm5v3/new-soil.html.
REGRID

The datasets that have the required additional fields to run the Noah LSM in MM5 are the Eta (AWIP or Eta212 grid), NNRP and NCEP FNL data archived at NCAR (DSS609.2, DSS090.0, and DSS083.2, respectively). Real-time data from NCEP's Eta and AVN can also be used. To get the NNRP data from the NCAR archive, use either get_nnrp.deck.ibm for a batch IBM job, or
get_nnrp.csh for running interactively from the pregrid/nnrp directory. To get the FNL data from the NCAR archive, go to http://dss.ucar.edu/datasets/ds083.2/inventories/ and download individual files as needed, or use the get_fnl.deck script to download multiple files. To get NCAR-archived AWIP data, use the get_awip.deck script, or to download the files manually, follow the instructions below. (Note that the Eta dataset only covers the continental US. It starts in May 1995, and may have missing periods.)

1. Use a Web browser to go to ftp://ncardata.ucar.edu/datasets/ds609.2/inventories/eta.inv and find out which Gxxxxx file contains the time period of interest. You should get both 3Danal and SFanal files.

2. On NCAR's computer, type the following to get the dataset in non-cos-blocked format:
   msread -fBI Gxxxxx /DSS/Gxxxxx
(You may then ftp this file back to your local workstation to do the rest. Note, though, that these files are very big: each 3Danal file is about 1 GB, and each SFanal file about 250 MB. If ftping big files is a problem, do steps 3 and 4 on the IBM, and ftp the files after step 4.)

3. Type the following to obtain a list of all files contained in the Gxxxxx file:
   tar tvf Gxxxxx > tar.list
Or find the file names from ftp://ncardata.ucar.edu/datasets/ds609.2/inventories/TARLIST and click on the appropriate tarlist file.

4. Extract the tm00 files, using G40001 (containing upper-air data) and G40006 (containing surface data) as an example:
   tar -xvf G40001 9706_3Danal/97062400.AWIP3D00.tm00
and
   tar -xvf G40006 9706_SFanal/97062400.AWIPSF00.tm00
Repeat the last two commands several times to obtain all time periods. The tm00 files are to be used by the pregrid program. (If one wants to use other tmXX files, please refer to the DSS document at http://dss.ucar.edu/datasets/ds609.2/docs/awip212.html.) The extracted file for each time period is considerably smaller: about 5 MB for the upper-air data and 1.2 MB for the surface data.
You can ftp each file back to your workstation, or tar them up and then ftp the tar file back. Note: if you would like to run REGRID on NCAR's IBM using the AWIP dataset, you can either run the job interactively if your domain size is not too big (the IBM allows 32 MB of memory for interactive jobs only), or you can modify the deck to extract the files only.
In pregrid.csh, make sure you have either Eta (AWIP or Eta212 grid), NNRP, or FNL data for the relevant dates, set SRCSOIL to either $SRC3D or different input files, and set one of:
   set VTSOIL = ../grib.misc/Vtable.AWIPSOIL
   set VTSOIL = ../grib.misc/Vtable.NNRPSOIL
   set VTSOIL = ../grib.misc/Vtable.AVNSOIL
For snow data, you will need to set SRCSNOW for input files and use one of the Vtable files for VTSNOW (Vtable.AWIPSNOW, Vtable.NNRPSNOW, or Vtable.AVNSNOW). If you have other LSM data, check the Vtables above to see which fields may be used.

The fields added to the standard meteorological fields by setting SRCSOIL (and SRCSNOW) are:
1. SOILTnnn: soil temperature at various depths (nnn in cm; unit: K)
2. SOILMnnn: soil moisture at various depths (nnn in cm; unit: fraction)
3. SOILHGT: analysis surface elevation, used in REGRID to adjust soil temperature (unit: m)
4. WEASD: water-equivalent snow depth (optional but highly desirable; unit: kg m-2)
5. SEAICE: sea-ice mask (optional but highly desirable; 0 or 1)
6. SOILWnnn: soil water (optional and currently from AGRMET only; unit: m3/m3)
where nnn is 010, 040, 100 or 200 (10, 40, 100 and 200 cm, respectively) for the various datasets we support. Among these fields, only the soil temperature, soil moisture, and soil height fields are required by the Noah LSM option, but we recommend that the WEASD and SEAICE fields be made available.

If your LSM data contain fields at levels other than those listed, you can still use them. For example, if you have soil temperature at 10 and 200 cm only, or at other levels, you can modify the Vtable to extract these fields. When you run MM5, use the namelist options ISTLYR and ISMLYR (available since V3.2) to define where your data are (see below).

Since V3.6, REGRID may ingest AGRMET LSM data. This dataset is produced by AFWA, and has been made available to MM5 users since October 2002 (with a two-month delay in real time).
The data are archived at NCAR on MSS: /MESOUSER/DATASETS/AGRMET/. Vtables for this dataset are provided in REGRID/pregrid/grib.misc/Vtable.AGRMETxxxx. Soil temperature, moisture, water, land-sea mask and soil height fields are extracted from this dataset. One may use this dataset in combination with other three-dimensional meteorological input. In regridder, do not use sst_to_ice_threshold in the namelist.input, i.e. do not turn sea water into land ice; use the SEAICE field in the input instead.
MM5

To use the Noah LSM option, set ISOIL=2 in configure.user prior to compilation. IBLTYP=4 or 5 (the Eta or MRF PBL) must be used for now; these are the only PBL schemes coupled to the LSM. If you have ingested climatological albedo fields from REGRID, you may choose whether or not to use them in MM5. Set the following namelist variables to .FALSE. if you don't want to use them in the model:
   RDMAXALB = .FALSE.,
   RDBRDALB = .FALSE.,
We generally recommend that one use only the climatological maximum snow albedo. To use input soil temperature and moisture, one must add the input levels to the namelist variables ISTLYR and ISMLYR in the LPARAM namelist section. For example,
   ISTLYR = 10,200,0,0,
   ISMLYR = 10,200,0,0,
indicates that the input soil temperature and moisture come in at the 10 and 200 cm levels only. For input coming in at levels 10, 40, 100 and 200 cm, one would set
   ISTLYR = 10,40,100,200,
   ISMLYR = 10,40,100,200,
Other common layers, found in ECMWF models, are 7, 28, 100, 255 cm. These layers can be input to the model by setting
   ISTLYR = 7,28,100,255,
   ISMLYR = 7,28,100,255,
Note that one can only input up to 4 levels of soil temperature and moisture. The prediction levels in the current MM5/Noah LSM are 5, 25, 70, and 150 cm, and are bounded between the surface and 300 cm below. The climatological deep soil temperature generated in program TERRAIN is used as the lower boundary condition for the Noah LSM, while an open boundary condition is used for soil moisture and soil water.

Additional Noah LSM prognostic outputs from MM5 are:
1. SOILTn (n=1,4): soil temperature at all 4 soil levels, unit K
2. SOILMn (n=1,4): soil moisture at all 4 soil levels, unit m3/m3
3. SOILWn (n=1,4): soil water at all 4 soil levels, unit m3/m3
4. CANOPYM: canopy moisture, unit m
5. SNOWH: snow height, unit m
6. WEASD: water-equivalent snow depth, unit mm
7. SFCRNOFF: surface runoff accumulation, unit mm
8. UGDRNOFF: underground runoff accumulation, unit mm
9. GRNFLX: ground heat flux, unit W m-2
10. ALB: albedo, unit fraction
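MM5's actual initialization code is not reproduced here, but the level mapping that ISTLYR/ISMLYR imply can be illustrated: values given at the input depths must be mapped onto the model's prediction depths (5, 25, 70, 150 cm). A hedged sketch of one way to do this, simple linear interpolation in depth (the function is illustrative, not the routine MM5 uses):

```c
/* Linearly interpolate a soil profile given at ndepth input depths
 * (in cm, increasing, cf. ISTLYR/ISMLYR) to an arbitrary target depth.
 * Outside the input range the nearest value is held constant.
 * Illustrative only: MM5's init code may treat the layers differently. */
double soil_interp(const double *depth, const double *val, int ndepth,
                   double target)
{
    if (target <= depth[0]) return val[0];
    if (target >= depth[ndepth - 1]) return val[ndepth - 1];
    for (int k = 0; k < ndepth - 1; k++)
        if (target <= depth[k + 1]) {
            double w = (target - depth[k]) / (depth[k + 1] - depth[k]);
            return val[k] + w * (val[k + 1] - val[k]);
        }
    return val[ndepth - 1]; /* not reached */
}
```

With input at 10 and 200 cm (ISTLYR = 10,200,0,0,), for instance, the 5 cm model level takes the 10 cm input value, while 70 and 150 cm fall between the two inputs.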
Note 1: If using NNRP soil moisture, you may want to check init.F and initnest.F, where there is a correction for some known biases in soil moisture. Comment out GOTO 1001 and GOTO 2001 in those routines, respectively, if you want to use this correction.

Note 2: If you want diagnostic LSM prints at a grid point, change the line in surfce.F (SURFCE.284) to NOOUT=1 and edit the IF statements determining where and how often to output data.
Note 3: When using NESTDOWN to generate input files for a one-way nested run, again make sure that the sst_to_ice_threshold option is not used, and check the interpolated fields (especially masked fields such as sea ice, snow height, soil water, and other LSM fields) carefully.
Appendix D: MPP MM5 - The Distributed Memory (DM) Extension

Overview

Users may download additional components that support execution of the model on distributed-memory (DM) parallel machines. The DM option to MM5 has been implemented so that users who do not wish to run on these types of machines may simply download and build the model as before, without downloading the additional components for the DM option. Further, all additional components for the DM option reside in a new directory, MPP, within the top-level MM5 source directory, and this MPP directory need exist only if the DM extension is to be employed. Therefore, the first step to obtaining and building the model for a distributed-memory parallel machine is to obtain and unarchive the main model distribution file (a compressed UNIX tar archive file) into a directory. Then, a secondary DM-specific distribution file (also a compressed UNIX tar archive file) is unarchived, creating the MPP directory. After downloading, uncompressing, and unarchiving the main model and the DM-specific components, the model may be compiled and run.

The first section of this tutorial provides details and step-by-step instructions for downloading and unarchiving the main model and DM-specific components. The next section describes configuration and compilation of the model. This is followed by a section with information on running the model on a distributed-memory parallel machine. Additional sources of information on the DM-parallel option to MM5 include the on-line Helpdesk, which is an archive of support email, the Tips and Troubleshooting page, and the Benchmarking page. These instructions have been updated for MM5 Version 3.
Downloading

All the standard job decks and source code tar files needed to run the MM5 model and its pre- and post-processors are available via anonymous ftp from ftp://ftp.ucar.edu/mesouser/MM5V3. The model and the additional code for the DM-parallel option are distributed in two archive files:
- MM5.TAR.gz, which contains the standard distribution of the current release of MM5, and
- MPP.TAR.gz, the MPP directory and contents comprising the DM option to MM5.
If you do not intend to use the DM-parallel option to run on a distributed-memory parallel machine, you do not need the second file, nor do you need to read further in this documentation; please refer to the standard MM5 documentation for instructions on running MM5 without the DM-parallel option. If you intend to run the model on a distributed-memory parallel computer using the DM option, download both these files onto your local machine. If you are using ftp directly, be sure to type "bin" at the ftp prompt before downloading, to make certain that the files are transferred in binary format; otherwise, you may be unable to uncompress and unarchive the files.

Note: Version 3 test data is available in the TESTDATA subdirectory of the MM5V3 directory on the ftp site.
Once you have downloaded the files onto your local machine, they should be uncompressed and unarchived. The following sequence of UNIX shell commands will create a complete MM5 source directory directly below the one in which the MM5.TAR.gz and MPP.TAR.gz archive files are stored, and will fill the directory with the necessary files and directories to compile the model with the DM-parallel option. Upon completion of the commands, the current working directory will be the just-created MM5 top-level source directory. The "%" (percent) characters are not to be typed; they represent the command-line prompt, which may be different on your system.

   % gzip -d -c MM5.TAR.gz | tar xf -
   % cd MM5
   % gzip -d -c ../MPP.TAR.gz | tar xf -

At this point, the MM5 code and the DM-option components have been unarchived into your MM5 directory. The next section describes how to configure and compile the model.
Configuration and compilation

Configuring, compiling, and running the model with the distributed-memory parallel option involves the following steps:
1. Modify the configure.user file to reflect the machine-specific and scenario-specific settings for your run.
2. Compile the model to use the distributed-memory parallel option.
3. Create the job deck for your run (optional).
4. Submit the job (platform specific).

1. Configuring the model

The configure.user file in the top level of the MM5 directory comprises sections for machine- and scenario-specific settings, some of which may need to be modified. The sections are as follows:
1. System variables: not affected
3. Fortran Options: not affected (these are ignored when compiling with "make mpp")
4. General commands: not affected
5. Model configuration: NHYDRO, FDDAGD, FDDAOBS, MAXNES, MIX, MJX, MKX
6. Physics options: IMPHYS, MPHYSTBL, ICUPA, IBLTYP, FRAD, ISOIL, ISHALLO
7. MPP options: this section begins with a number of important settings that apply to all DM-parallel platforms. Two important settings are PROCMIN_NS and PROCMIN_EW, used to reduce the memory usage of the model on larger numbers of processors. (Please see "Notes on the PROCMIN variables" at the end of this tutorial for more details on the PROCMIN variables.) The remainder of the section is analogous to Section 3 and contains sets of compiler, linker, and other options for the computers on which the DM-parallel option is currently supported. The settings for your computer should be uncommented (remove the leading # character) and edited as necessary. Please refer to the sample configure.user file for details.
Except for Section 7, the configure.user file that is distributed with the model is already set up for the "Storm of the Century" case, whose data are available in the ftp TESTDATA directory. A configure.user file is also provided with the largedomain case, with the following case-specific settings:
   MAXNES = 1
   MIX = 200
   MJX = 250
   MKX = 27
Note that MM5 Version 3 is distributed with a configure.user.linux file, but this is not intended for use with the DM-parallel option; use the settings in the appropriate subsection of Section 7 of the configure.user file for Linux Beowulf clusters of PCs.

2. Making the Executable

Issue the command "make mpp" from the main MM5 directory. Assuming a successful compile, the resulting executable file will be named mm5.mpp and it will appear in the directory named "Run" in the main MM5 directory. Note that the first time you execute this command after downloading and unarchiving the file in the MM5 directory, some additional installation will occur; specifically, software in the MPP directory will be compiled and some additional directories and links between files will be set up (e.g., MM5/MPP/build).

VERY IMPORTANT: Once the DM-parallel version has been installed in a directory, it will not compile properly if moved or copied to another location without first uninstalling and allowing the code to re-install itself in the new location. This is because certain absolute directory paths are set up during installation; moving the model invalidates these paths. To uninstall the model, simply type "make uninstall". To rebuild the code in a different configuration when it hasn't been moved from its installed location, it is sufficient to type "make mpclean". There is additional discussion of compilation issues for the DM-parallel version of the code at the end of this document.

3. Making the mm5.deck (optional)

Issue the command "make mm5.deck". This will produce a file named mm5.deck in your MM5 directory.
The original purpose of the mm5.deck in MM5 was to generate the namelist (mmlif) file, set up links to data files, and then execute the model, and non-DM-parallel users may still use it this way. However, because file systems, run commands, and queuing mechanisms vary from installation to installation, the mm5.deck files that are generated for DM-parallel platforms do little more than generate the namelist. Even then, the mm5.deck file created by "make mm5.deck" may still need to be edited for your specific run. In particular, the length of the run in minutes (TIMAX), the length of the model time step in seconds (TISTEP), the output frequency in minutes (TAPFRQ), whether to write model restarts (IFSAVE), the run-time size of your domain(s) (NESTIX, NESTJX), their spatial location (NESTI, NESTJ), and their start and end times (XSTNES, XENNES) may need to be changed, either in the mm5.deck or in the mmlif namelist file that
results from executing the mm5.deck. The mm5.deck generated by default (with the configure.user file that is distributed with the model) is suitable for use with the Storm of the Century case data. The largedomain data case comes with an mm5.deck already included; however, in this case it may be necessary to edit the resulting mmlif file to change occurrences of the Fortran-77 style "&END" to the Fortran-90 style "/" delimiters (IBM).
Running the Model

In the Run directory, or in some other run directory, the following files or symbolic links should exist:

   mm5.mpp               Executable file
   BDYOUT_DOMAIN1        Copy of or symbolic link to lateral boundary conditions file
   mmlif                 Copy of or symbolic link to mmlif namelist file
   LOWBDY_DOMAIN1        Lower boundary conditions
   LANDUSE.TBL           Land-surface properties (distributed with MM5)
   MMINPUT_DOMAIN1       Initial conditions file for coarse domain
   MMINPUT_DOMAIN[2-9]   Initial conditions (IOVER=1)
   TERRAIN_DOMAIN[2-9]   Terrain file (IOVER=2) for nests
   MMINPUT2_DOMAIN1      Used only for FDDA on IBM; a copy of MMINPUT_DOMAIN1
   restrts               Directory for restart files (only needed if using restarts)
Run the parallel model using the command for your system. Many parallel machines will use some version of the mpirun command. Here are some examples:

   1) mpirun -np 16 mm5.mpp
   2) mpirun -np 16 -machinefile machines mm5.mpp
   3) dmpirun -pf procfile
   4) mprun -np 4 mm5.mpp
   5) poe mm5.mpp -rmpool 1 -procs 2 -pgmmodel spmd
Examples (1) and (2) run the code on 16 processors. The -machinefile option (compatible with MPICH) in example (2) allows you to specify processors in a file, here named “machines”. Examples (3), (4), and (5) are for DEC MPI, Sun MPI, and IBM, respectively. Check the manual pages for the mpirun command on your system. You may also need to interact with a queuing system or batch scheduler. If you are running on a Linux cluster, please see “Note on Linux systems”
at the end of this tutorial. When the job runs, it will create files in the run directory:

   MMOUT_DOMAIN1                        Output files from coarse domain
   MMOUT_DOMAIN[2-9]                    Output files from nests
   rsl.out.0000                         Standard output from processor zero
   rsl.out.0001, rsl.out.0002, ...      Standard output from other processors
   rsl.error.0000                       Standard error from processor zero
   rsl.error.0001, rsl.error.0002, ...  Standard error from other processors
Additional notes on compilation

As with the non-distributed-memory versions of the model, the distributed-memory version is compiled using the UNIX make utility and controlled by settings within the configure.user file in the main MM5 directory. This section provides some additional detail that may be helpful in working with the DM-parallel model.

Design note: a general note on design philosophy. Users and developers will notice some additional complexity within the build mechanism for the DM option. There is a conservation-of-complexity principle at work here: we avoided introducing changes for DM parallelism (message passing, loop and data restructuring, etc.) into the MM5 Fortran source by transferring them, instead, to a library (RSL), a source translator (FLIC), and the MM5 build mechanism. Thus, the changes for DM parallelism are effected automatically and transparently to the source code in a series of pre-compilation steps, without the necessity of a separate version of the model source code. Most of the complexity of the DM-parallel build mechanism has been hidden from users within several commands that are close in look and usage to the commands used to build the non-DM version of MM5. Even so, some knowledge of what is going on behind the scenes when the model is built is important, and is provided in the discussion that follows.

There are several DM-specific options to the "make" command. The command "make mpp" is used to build the model and is analogous to just typing "make" to build the non-DM version. The command "make mpclean" is used to put the code back to a non-configured state by deleting object (.o) files from the source tree; it is analogous to the "make clean" command for the non-DM version. The command "make uninstall" is unique to the DM version of the model, and is used to return the code to a pristine, more or less as-downloaded state.
Command: make mpp

The “make mpp” command builds (compiles) the model based on the settings in the configure.user file and places the resulting executable, mm5.mpp, in the directory named Run in the top-level MM5 directory. The first time it is run after unarchiving the MM5.TAR.gz and MPP.TAR.gz files into the MM5 directory, the “make mpp” command installs software needed by the DM version in the directory named MPP in the MM5 directory prior to building the model. It also sets up the MPP/build directory and creates symbolic links to MM5 source files in the other directories of the MM5 source tree. The DM code is compiled only in the MPP/build directory, not in the various subdirectories of the MM5 source tree where the non-DM version is compiled. This, and the fact that the executable file has a different name from the non-DM version, allows the DM and non-DM versions of the model to be compiled and to coexist within the same MM5 directory. Subsequent invocations of “make mpp” will only recompile the model and not re-install the MPP software.

Prior to compilation, the MPP/build directory will contain a Makefile.RSL and a large number of files, actually symbolic links, with names that end in .F, and a few that end in .c. Note to programmers and developers: since these are symbolic links, you may edit the source files either in their local directory (e.g. dynamics/nonhydro/solve3.F) or through the symbolic link (MPP/build/solve3.F). After compilation, the directory will also contain the intermediate object files, with names that end in .o. The “make mpp” command will only recompile files that have been touched since their corresponding .o file was compiled.

For debugging purposes, it is possible to also have the .f files retained in the MPP/build directory. To do this, uncomment (remove the # character from) the line “# RM = echo” in the file MPP/build/Makefile.RSL (the file will only exist in this directory if the code has been built with “make mpp” at least once). This will prevent intermediate files from being deleted between compiles.

Command: make mpclean

The “make mpclean” command is used to restore the model to a clean, un-compiled state.
The “make mpclean” command removes all of the files ending in .o, and any other intermediate files, from the MPP/build directory. This should be done whenever the model is reconfigured in the configure.user file, prior to rebuilding the code with the “make mpp” command. Usually, when making code changes to MM5 source files, it is not necessary to run “make mpclean” between compiles. However, any time code describing a COMMON block is altered, the model should be completely rebuilt by running “make mpclean” first.

Command: make uninstall

The “make uninstall” command undoes what was done to install the DM option the first time the “make mpp” command was executed. It uninstalls software needed by the DM option and removes the MPP/build directory, including all symbolic links. The effect is to restore the code to an “as downloaded” state with respect to the DM option (it will not undo changes you have made to the configure.user file or to model source files). Because the installation of the DM option sets up some non-relative directory path information within the MPP directory, the “make uninstall” command should be used whenever the code is moved to a different directory. Otherwise, there are very few instances for which the “make uninstall” command is necessary; “make mpclean” will usually suffice.
Notes on the PROCMIN variables

The PROCMIN_NS and PROCMIN_EW variables, which appear in section 7 of the configure.user file, can be used to reduce the memory required by the DM-parallel model when running on larger numbers of processors. These two variables determine the horizontal dimensions of the MM5 arrays on each processor AT COMPILE TIME. Roughly speaking, PROCMIN_NS divides MIX and PROCMIN_EW divides MJX. (This is not exact, due to “pad” or “ghost” cells maintained in the DM-parallel arrays.) Therefore, the larger you make PROCMIN_NS and PROCMIN_EW, the smaller the per-processor memory. The product PROCMIN_NS x PROCMIN_EW specifies the MINIMUM number of processors for which the MM5 executable is valid. If you try running the model with FEWER processors than this product, the internal array dimensions (determined at compile time!) will not be large enough to hold the effective subdomain sizes on each processor. If this happens, the model will abort with the following runtime message in the rsl.error.0000 file: ‘MPASPECT: UNABLE TO GENERATE PROCESSOR MESH. STOPPING.’ In this case, you must either run with more processors, or reduce PROCMIN_NS and/or PROCMIN_EW and recompile.

An executable compiled with PROCMIN_NS=1 and PROCMIN_EW=1 uses the maximum per-processor memory, but is valid for running with any number of processors. For the most efficient use of memory, set PROCMIN_NS and PROCMIN_EW so that their product equals the number of processors you will be using. For a given product (e.g., 16 processors), runtimes may vary by up to 10 or even 15 percent depending on the ratio of PROCMIN_NS to PROCMIN_EW. For example, on most systems we have found that setting PROCMIN_NS=2 and PROCMIN_EW=8 (a 2x8 decomposition) is more efficient than a 4x4 or 8x2 decomposition.
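The two rules above (per-processor sizes shrink roughly in proportion to the PROCMIN values, and the executable is valid only on at least PROCMIN_NS x PROCMIN_EW processors) can be sketched as follows. This is a purely illustrative Python sketch, not MM5 code; the fixed `pad` stands in for the ghost cells, so the dimensions are only approximate.

```python
# Illustrative sketch (not MM5 source): how PROCMIN_NS/PROCMIN_EW relate
# to compile-time array sizes and the minimum valid processor count.

def per_processor_dims(mix, mjx, procmin_ns, procmin_ew, pad=2):
    """Approximate per-processor array dimensions. PROCMIN_NS divides MIX
    and PROCMIN_EW divides MJX; 'pad' is a stand-in for ghost cells."""
    return (mix // procmin_ns + pad, mjx // procmin_ew + pad)

def mesh_is_valid(nprocs, procmin_ns, procmin_ew):
    """The executable is valid only when run on at least
    PROCMIN_NS x PROCMIN_EW processors."""
    return nprocs >= procmin_ns * procmin_ew

# A 2x8 compile is valid on 16 processors but not on 8
# (the 8-processor run would hit the MPASPECT abort):
assert mesh_is_valid(16, 2, 8)
assert not mesh_is_valid(8, 2, 8)
```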
Note on Linux systems

Users of Linux clusters should be aware that the details of configuring, compiling, and running the MM5 model will vary with their particular system configuration and installation. Processor chip, operating system, compilers, and interconnect are all variables that make your system unique. If you experience problems compiling and/or running the model on your system, you should scan the on-line Helpdesk for Linux-specific notices.

Note on time-series output

If one sets IFTSOUT = .TRUE. and defines TSLAT and TSLON for the time-series locations, one obtains time-series output in fort.26 for domain 1, fort.27 for domain 2, and so on, for serial runs. For MPI runs, the time-series output is not written to the fort.2? files but is instead (unfortunately) scattered among the various rsl.out.* files. The rsl.out.* file that contains a given time series corresponds to the processor that calculated it. This means that if two time-series outputs were requested, they may be written to two different rsl.out.* files, as their respective locations may have placed them on two different processors.
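Because the MPI time series are scattered across the rsl.out.* files, one practical approach is to sweep all of them and pull out the time-series records. The sketch below is illustrative only: the exact record format in rsl.out.* is not specified here, so the `marker` string used to recognize time-series lines is an assumption you would adapt to your output.

```python
# Illustrative helper (not part of MM5): gather time-series records that
# are scattered across the per-processor rsl.out.* files. The 'marker'
# prefix used to recognize time-series lines is a placeholder assumption.
import glob

def collect_timeseries(marker, pattern="rsl.out.*"):
    """Return all lines beginning with 'marker' from files matching
    'pattern', in file-name order (processor order)."""
    lines = []
    for fname in sorted(glob.glob(pattern)):
        with open(fname) as f:
            lines.extend(line for line in f if line.startswith(marker))
    return lines
```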
Note on restart runs

For serial runs, SAVE_DOMAINn files are created, which need to be renamed to RESTART_DOMAINn before submitting a restart job. For MPI runs, restart files are written to the directory MM5/Run/restrt. These files already follow the correct naming convention for a restart, so no renaming is required. A restart file is written for each processor. Because these restart files contain ONLY information pertinent to the processor that created them, it is essential to restart with EXACTLY the same number of processors as was used during the initial run.
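For serial runs, the SAVE_DOMAINn-to-RESTART_DOMAINn renaming can be scripted. The helper below is hypothetical (not part of MM5); it assumes the SAVE_DOMAINn files sit in the given run directory.

```python
# Hypothetical helper (not part of MM5): rename serial-run SAVE_DOMAINn
# files to the RESTART_DOMAINn names that a restart job expects.
import os

def rename_for_restart(run_dir="."):
    """Rename every SAVE_DOMAIN* file in run_dir to RESTART_DOMAIN*;
    returns the list of new names."""
    renamed = []
    for name in sorted(os.listdir(run_dir)):
        if name.startswith("SAVE_DOMAIN"):
            new = "RESTART_" + name[len("SAVE_"):]   # SAVE_DOMAIN1 -> RESTART_DOMAIN1
            os.rename(os.path.join(run_dir, name), os.path.join(run_dir, new))
            renamed.append(new)
    return renamed
```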
Appendix E: 3DVAR

E.1 Introduction

This document provides an overview of the MM5 3DVAR system and is taken from the more extensive 3DVAR Technical Note (Barker et al. 2003). The code has been designed as a community data assimilation system flexible enough to allow a variety of research studies to be performed (e.g. the impact of new observation types; it is globally relocatable; etc.). In addition, the code has from the start of the project been geared toward operational implementation, so computational efficiency and robustness have also been major design features. Further details, including additional documentation, an online tutorial and results, may be found on the MM5 3DVAR web page: http://www.mmm.ucar.edu/3dvar
E.1.1 The Data Assimilation Problem

A data assimilation system combines all available information on the atmospheric state in a given time-window to produce an estimate of atmospheric conditions valid at a prescribed analysis time. Sources of information used to produce the analysis include observations, previous forecasts (the background or first-guess state), their respective errors, and the laws of physics. The analysis can be used in a number of ways, including:

• Providing initial conditions for a numerical weather forecast (initialization).
• Studying climate through the merging of observations and numerical models (reanalysis).
• Assessing the impact of individual components of the existing observation network via Observation System Experiments (OSEs).
• Predicting the potential impact of proposed new components of a future observation network via Observation System Simulation Experiments (OSSEs).
The importance of accurate initial conditions to the success of an assimilation/forecast numerical weather prediction (NWP) system is well known. The relative importance of forecast errors due to errors in the initial conditions, compared to other sources of error such as physical parameterizations, boundary conditions and forecast dynamics, depends on a number of factors, e.g. resolution, domain, data density and orography, as well as on the forecast product of interest. However, judging from the current and planned resources (computational and human) that both the operational and research communities are devoting to data assimilation, better initial conditions are increasingly considered vital for a whole range of NWP applications. Initial applications of the MM5 3DVAR system have focused on providing initial conditions from which to integrate MM5 forecasts. Future use of the system for regional climate modeling, OSEs and OSSEs is an exciting possibility.
E.1.2 Variational Data Assimilation

In recent years, much effort has been spent on the development of variational (VAR) data assimilation systems to replace previously used schemes, e.g. the Cressman (MM5), Newtonian nudging (FDDA - MM5), optimum interpolation (OI - NCEP, ECMWF, HIRLAM, NRL, etc.) and analysis correction (UKMO) algorithms. Practical considerations have led to a variety of alternative implementations of VAR systems.
The basic goal of the MM5 3DVAR system is to produce an "optimal" estimate of the true atmospheric state at analysis time through iterative solution of a prescribed cost function (Ide et al. 1997):

J(x) = J_b + J_o = \frac{1}{2}(x - x_b)^T B^{-1} (x - x_b) + \frac{1}{2}(y - y^o)^T (E + F)^{-1} (y - y^o).   (E.1)
The VAR problem can be summarized as the iterative solution of Eq. (E.1) to find the analysis state x that minimizes J(x). This solution represents the a posteriori maximum likelihood (minimum variance) estimate of the true state of the atmosphere given the two sources of a priori data: the background (previous forecast) xb and observations yo (Lorenc 1986). The fit to individual data points is weighted by estimates of their errors: B, E and F are the background, observation (instrumental) and representivity error covariance matrices, respectively. Representivity error is an estimate of the inaccuracies introduced in the observation operator H used to transform the gridded analysis x to observation space y = Hx for comparison against observations. This error will be resolution dependent and may also include a contribution from approximations (e.g. linearizations) in H. The quadratic cost function given by Eq. (E.1) assumes that observation and background errors are statistically described by Gaussian probability density functions with zero mean error. Alternative cost functions may be used that relax these assumptions (e.g. Dharssi et al. 1992). Eq. (E.1) additionally neglects correlations between observation and background errors.

The use of adjoint operations, which can be viewed as a multidimensional application of the chain rule for partial differentiation, permits efficient calculation of the gradient of the cost function. Modern minimization techniques (e.g. quasi-Newton, preconditioned conjugate gradient) are used to efficiently combine the cost function, its gradient and the analysis information to produce the "optimal" analysis. The theoretical problem of minimizing the cost function J(x) is equivalent to the previous-generation OI technique in the linear case. Despite this equivalence, previously developed operational 3/4DVAR systems, e.g. NCEP (1992), ECMWF (1996/8), Meteo-France (1998/2000) and UKMO (1999), have led to improved forecast scores relatively quickly after implementation through their more flexible design. Below are listed the practical advantages of VAR systems over their predecessors.
• Observations can easily be assimilated directly, without the need for prior retrieval. This results in a consistent treatment of all observations and, because the observation errors are less correlated (with each other and with the background errors), in practical simplifications to the analysis algorithm.
• The VAR solution is found using all observations simultaneously, unlike the OI technique, for which a data selection into artificial sub-domains is required.
• Asynoptic data can be assimilated near its validity time. This is implicit in 4DVAR but can also be achieved using a "rapidly-updating" 3DVAR technique.
• Balance constraints (e.g. weak geostrophy, hydrostatic balance) can be built into the preconditioning of the cost-function minimization. In 4DVAR, use is also made of the implicit balance of the forecast model.
Having expounded the advantages of variational data assimilation, it is wise also to recognize its weaknesses. Although the variational analysis is frequently described as "optimal", this label is subject to a number of assumptions. Firstly, given both imperfect observations and imperfect prior (e.g. background) information as inputs to the assimilation system, the quality of the output analysis depends crucially on the accuracy of the prescribed errors. Secondly, although the variational method allows for the inclusion of linearized dynamical/physical processes, the real errors in the NWP system may be highly nonlinear. This limits the usefulness of variational data assimilation in highly nonlinear regimes, e.g. the convective scale or the tropics. It is hoped that the 3DVAR system will be used in future studies to investigate these research topics.

In the development of variational data assimilation systems at the operational centers, 3DVAR has been seen as a necessary prerequisite to the ultimate goal of four-dimensional (e.g. 4DVAR/Kalman-filter-type) assimilation algorithms. The initial concentration on 3DVAR has been partly motivated by a lack of computing resources (with the current exceptions of ECMWF and Meteo-France, which now run 4DVAR operationally). Without the cut-off time restrictions of the weather centers, the research community has tended to bypass 3DVAR and concentrate on applications of 4DVAR to new/asynoptic data types, e.g. Doppler radar.
E.2 Overview Of 3DVAR In The MM5 Modeling System

This section provides an overview of the 3DVAR system as used in the MM5 modeling environment. The basic layout is illustrated in Fig. 1 for both cold-starting mode, where the background forecast originates from another model and/or grid, and cycling mode, where the background forecast is a short-range MM5 forecast from a previous 3DVAR analysis. The three input files (first guess, observations and background errors) and the output (analysis) file are shown as circles. Highlighted rectangles indicate code especially written for use with 3DVAR and MM5. Clear rectangles represent preexisting code. The following is a summary of the various components of the system.
[Fig. 1 (schematic): MM5 background preprocessing produces xb, the observation preprocessor produces yo, and the background error calculation produces B; these three inputs feed 3DVAR, which produces the analysis xa, after which the boundary conditions are updated and the forecast is run.]
FIG. 1. The various components of the 3DVAR system (highlighted) and their interaction with pre-existing components of the MM5 modeling system. Note the background preprocessing is only required if 3DVAR is being run in “cold-starting” mode.
E.2.1 Background Preprocessing

In cold-starting mode, standard MM5 preprocessing programs may be used to reformat and interpolate forecast fields from a variety of sources to the target MM5 domain. These packages are:

• TERRAIN - defines the domain, orography, land use, etc.
• PREGRID - reads the background forecast in its native format, e.g. RUC, ETA, AVN, ECMWF, etc.
• REGRIDDER - horizontally interpolates the background to the MM5 domain.
• INTERPF - vertically interpolates the background field to the MM5 sigma-height levels.
For further details on any of the above MM5 preprocessing packages, refer to the documentation on the MM5 web page: http://www.mmm.ucar.edu/mm5. In cycling mode, background processing is not required, as the background field xb input to 3DVAR is already on the MM5 grid.
E.2.2 The Observation Preprocessor (3DVAR_OBSPROC)

The observation preprocessor provides the observations yo for ingest into 3DVAR. The program 3DVAR_OBSPROC has been specially written for use with the MM5 3DVAR system. It performs the following functions:

• Reads in the observation file in decoder (MM5 LITTLE_R) format.
• Reads in run-time parameters from a namelist file.
• Performs spatial and temporal checks to select only observations located within the target domain and within a specified time-window.
• Calculates heights for observations whose vertical coordinate is pressure.
• Merges duplicate observations (same location, place, type) and chooses the observation nearest the analysis time for stations with observations at several times.
• Estimates the error for each observation.
• Outputs the observation file in ASCII 3DVAR format.
Further details may be found in Section 3 of the NCAR Technical Note (Barker et al. 2003).
E.2.3 Background Error Calculation

Background error covariance statistics are used in the 3DVAR cost function to weight errors in features of the background field. The assimilation system will filter those background structures that have high error relative to more accurately known background features and observations. In reality, errors in the background field are synoptically dependent, i.e. they vary from day to day depending on the current weather situation. Current implementations of 3DVAR, however, tend to use climatological background errors, although research is ongoing into the specification and use of background "errors of the day". The NMC-method (Parrish and Derber 1992) is a popular method for estimating climatological background error covariances. In this approach, background errors are assumed to be well approximated by averaged forecast-difference statistics (e.g. a month-long series of 24-h minus 12-h forecasts valid at the same time):
B = \overline{(x_b - x_t)(x_b - x_t)^T} = \overline{\varepsilon_b \varepsilon_b^T} \approx \overline{(x_{T+24} - x_{T+12})(x_{T+24} - x_{T+12})^T},   (E.2)
where x_t is the true atmospheric state and ε_b is the background error. The overbar denotes an average over time and/or space. Technical details of the NMC-method code developed in NCAR/MMM may be found in section 8 of the NCAR Technical Note (Barker et al. 2003). In the current MM5 3DVAR, the background errors are computed for a variety of resolutions, and a seasonal dependence is introduced simply by using forecast-difference statistics valid at different times of the year (e.g. winter, summer).

It is clear that the background errors should estimate the errors in the analysis/forecast used as the starting point for the 3DVAR minimization. In cold-starting mode, the background field originates from a different model (e.g. AVN, CWBGM). In contrast, a cycling application requires errors representative of a short-range forecast run from a previous 3DVAR analysis. Background errors will vary between applications and should ideally be tuned for each domain. This is time-consuming, but important, work. A recalculation of background error should be considered whenever the background field changes. Scenarios where this might occur include:
• Using an alternative source for the background field in cold-starting mode.
• The cold-starting background has been upgraded (e.g. a change of resolution, or additional observations used in a global analysis background).
• A change to the MM5 configuration in a cycling run.
The initial period of a new cycling application must initially use background errors interpolated from another source of similar resolution/location. Once the new domain has been running for a period (e.g. one month), a better estimate of the background error may be obtained. This is an iterative process: changing the background error used in 3DVAR will again modify the background errors of the resulting short-range forecast used as background.

The calculation of background error covariances requires significant resources that are not always available. Given this limitation, and the fact that the background errors derived by the NMC-method are climatological estimates, approximations are inevitable. 3DVAR includes a number of namelist variables that allow some tuning of the background error files at run-time.
E.2.4 3DVAR System Overview

Although the 3DVAR code is completely new, the particular 3DVAR implementation described below is similar in basic design to that implemented operationally at the UK Meteorological Office in 1999 (Lorenc et al. 2000). In summary, the main features of the MM5 3DVAR system include:

• Incremental formulation of the model-space cost function given by Eq. (E.1), with the outer loop implemented to account for the nonlinear effects of the observation operators and to perform "variational data quality control".
• Quasi-Newton (QN) and conjugate gradient (CG) minimization algorithms.
• Analysis increments on an unstaggered "Arakawa-A" grid. In the MM5 environment, the input background wind field is interpolated from the Arakawa-B grid of MM5. On output, the unstaggered analysis wind increments are interpolated back to the MM5 B-grid.
• Analysis performed on the sigma-height levels of MM5.
• Jb preconditioning via a "control variable transform" U defined by B = UU^T. The preconditioned control variables are chosen as streamfunction, velocity potential, unbalanced pressure and a choice between specific and relative humidity.
• A linearized mass-wind balance (including both geostrophic and cyclostrophic terms) used to define a balanced pressure.
• Climatological background error covariances estimated via the NMC-method of averaged forecast differences. Values are tuned by comparison with estimates derived from observation-minus-background difference (innovation vector) statistics.
• Representation of the horizontal component of background error via isotropic recursive filters. The vertical component is applied through projection onto climatologically averaged eigenvectors of vertical error (estimated via the NMC-method). Horizontal and vertical errors are nonseparable (horizontal scales vary with vertical eigenvector).
Further details can be found via links from the NCAR/MMM 3DVAR web site (http://www.mmm.ucar.edu/3dvar), including links to results of extended testing as well as to the code (Fortran90 transformed to html using software designed in NCAR/MMM). The code itself contains a significant level of documentation.
E.2.5 Update Boundary Conditions

In order to run MM5 (or any other forecast model supported by the 3DVAR system) using the 3DVAR analysis as initial conditions, the lateral boundary conditions must first be modified to reflect the differences between the background forecast and the analysis. This process is described in section 10 of the NCAR Technical Note (Barker et al. 2003).
E.3 The Observation Preprocessor (3DVAR_OBSPROC)

The observation preprocessor provides the observations yo for ingest into 3DVAR and has been specially developed for MM5 applications of 3DVAR. The 3DVAR_OBSPROC program makes use of Fortran90 and requires an F90-compliant compiler. It has been successfully run on DEC Alpha, IBM-SP, Fujitsu VPP5000, NEC SX-5 and PC/Linux machines.
E.3.1 Observation Preprocessor Tasks

The observation preprocessor performs the following functions:

1. Reads in the observation file in decoder (LITTLE_R) format. This is the format output by the MM5 decoder routines and previously used by the preexisting MM5 LITTLE_R analysis package. It was adopted as the input format for 3DVAR_OBSPROC in order to allow easy comparison of 3DVAR with LITTLE_R (which 3DVAR is intended to replace). A description of the LITTLE_R data format can be found at: http://www.mmm.ucar.edu/mm5/documents/MM5_tut_Web_notes/OA/OA.html

2. Reads in run-time parameters from a namelist file. An example is given below:

&record1
 obs_gts_filename = '/mmmtmp/bresch/3dv/obs',
 obs_err_filename = 'obserr.txt',
 obs_gps_filename = 'NOGPS',
 first_guess_file = '/mmmtmp/bresch/3dv/MMINPUT_DOMAIN2',
/
&record2
 time_earlier  = -90,
 time_analysis = '2001-06-27_12:00:00',
 time_later    = 90,
/
&record3
 max_number_of_obs       = 58000,
 fatal_if_exceed_max_obs = .TRUE.,
/
&record4
 qc_test_vert_consistency = .TRUE.,
 qc_test_convective_adj   = .TRUE.,
 qc_test_above_lid        = .TRUE.,
 remove_above_lid         = .TRUE.,
 Thining_SATOB            = .FALSE.,
 Thining_SSMI             = .FALSE.,
/
&record5
 print_gts_read       = .TRUE.,
 print_gpspw_read     = .TRUE.,
 print_recoverp       = .TRUE.,
 print_duplicate_loc  = .TRUE.,
 print_duplicate_time = .TRUE.,
 print_recoverh       = .TRUE.,
 print_qc_vert        = .TRUE.,
 print_qc_conv        = .TRUE.,
 print_qc_lid         = .TRUE.,
 print_uncomplete     = .TRUE.,
 user_defined_area    = .FALSE.,
/
&record6
 x_left   = 1.,
 x_right  = 100.,
 y_bottom = 1.,
 y_top    = 100.,
/
3. Performs spatial and temporal checks to select only observations located within the target domain and within a specified time-window.

4. Calculates heights for observations whose vertical coordinate is pressure.

5. Merges duplicate observations (same location, place, type) and chooses the observation nearest the analysis time for stations with observations at several times.

6. Estimates the error for each observation. Values are input from the "obserr.txt" file, which contains observation errors at standard pressure levels for a number of different observation types. The errors tabulated in "obserr.txt" originate from NCEP but have been modified at NCAR after comparisons against O-B data.

7. Outputs the observation file in ASCII MM5 3DVAR format, ready for input to 3DVAR. An example header of the observation file is given below.

TOTAL  =  8170, MISS. =-888888.,
SYNOP  =  1432, METAR =  164, SHIP  =    86, TEMP  =  180, AMDAR =     0,
AIREP  =   265, PILOT =    0, SATEM =     0, SATOB = 6043, GPSPW =     0,
SSMT1  =     0, SSMT2 =    0, TOVS  =     0, OTHER =    0,
PHIC   =  28.50, XLONC = 116.00, TRUE1 = 10.00, TRUE2 = 45.00,
TS0    = 275.00, TLP   =  50.00, PTOP  = 7000., PS0   =100000.,
IXC    =    67, JXC   =   81, IPROJ =     1, IDD   =    1, MAXNES=    10,
NESTIX =    67,  67,  67,  67,  67,  67,  67,  67,  67,  67,
NESTJX =    81,  81,  81,  81,  81,  81,  81,  81,  81,  81,
NUMC   =     1,   1,   1,   1,   1,   1,   1,   1,   1,   1,
DIS    = 135.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00,
NESTI  =     1,   1,   1,   1,   1,   1,   1,   1,   1,   1,
NESTJ  =     1,   1,   1,   1,   1,   1,   1,   1,   1,   1,
INFO   = PLATFORM, DATE, NAME, LEVELS, LATITUDE, LONGITUDE, ELEVATION, ID.
SRFC   = SLP, PW (DATA,QC,ERROR).
EACH   = PRES, SPEED, DIR, HEIGHT, TEMP, DEW PT, HUMID (DATA,QC,ERROR)*LEVELS.
INFO_FMT = (A12,1X,A19,1X,A40,1X,I6,3(F12.3,11X),6X,A5)
SRFC_FMT = (F12.3,I4,F7.2,F12.3,I4,F7.2)
EACH_FMT = (3(F12.3,I4,F7.2),11X,3(F12.3,I4,F7.2),11X,1(F12.3,I4,F7.2))
……..observations……..
The header contains information on the number of observations of each type and on the grid that has been used to select observations. The final three lines above define the formats used to store the individual observations, which follow the header and are subsequently read by 3DVAR. The observation preprocessor also has the capability to input observations in BUFR format; this latter format is not used in MM5 applications.

8. 3DVAR_OBSPROC outputs numerous diagnostics files that detail the quality control decisions taken and the error estimates used.
E.3.2 Quality Control Flags used in 3DVAR_OBSPROC and 3DVAR

A variety of quality control checks are performed by the observation preprocessor. Quality control flags are set for all observations and output ready for input into 3DVAR. The following flags are currently used:

missing_data            = -88, & ! Data is missing with the value of missing_r
outside_of_domain       = -77, & ! Data outside horizontal domain or time window; data set to missing_r
wrong_direction         = -15, & ! Wind direction < 0 or > 360 => direction set to missing_r
negative_spd            = -14, & ! Wind speed is negative => speed set to missing_r
zero_spd                = -13, & ! Wind speed is zero => speed set to missing_r
wrong_wind_data         = -12, & ! Spike in wind profile => direction and speed set to missing_r
zero_t_td               = -11, & ! t or td = 0 => t or td, rh and qv set to missing_r
t_fail_supa_inver       = -10, & ! Superadiabatic temperature
wrong_t_sign            =  -9, & ! Spike in temperature profile
above_model_lid         =  -8, & ! Height above model lid => no action
far_below_model_surface =  -7, & ! Height far below model surface => no action
below_model_surface     =  -6, & ! Height below model surface => no action
standard_atmosphere     =  -5, & ! Missing h, p or t => datum interpolated from standard atmosphere
from_background         =  -4, & ! Missing h, p or t => datum interpolated from model
fails_error_max         =  -3, & ! Datum fails error max check => no action
fails_buddy_check       =  -2, & ! Datum fails buddy check => no action
no_buddies              =  -1, & ! Datum has no buddies => no action
good_quality            =   0, & ! OBS datum has good quality
convective_adjustment   =   1, & ! Convective adjustment check => apply correction to t, td, rh and qv
surface_correction      =   2, & ! Surface datum => apply correction to datum
Hydrostatic_recover     =   3, & ! Height from hydrostatic assumption with the OBS data calibration
Reference_OBS_recover   =   4, & ! Height from reference state with the OBS data calibration
Other_check             =  88    ! Passed other quality checks
The observation preprocessor code can be downloaded from the 3DVAR tutorial page, reached by clicking "Online 3DVAR tutorial" at http://www.mmm.ucar.edu/3dvar.
E.4 The 3DVAR System

As discussed above, the role of the 3DVAR assimilation system is to use the three input data sources xb, yo and B to produce analysis increments xai, which are recombined with the background xb to produce an analysis xa = xb + I xai from which to run MM5. The operator I represents post-processing of the analysis increments within 3DVAR, e.g. modifications to ensure that the humidity analysis stays within physical limits.
E.4.1 Overview
[Fig. 2 (flowchart): the inputs xb, B and yo, together with the WRF and 3DVAR namelist files, feed the sequence Setup MPP, Read Namelist, Setup Background, Setup Background Errors, Setup Observations, Calculate O-B, Minimize Cost Function, Compute Analysis, Calculate Diagnostics, Output Analysis and Tidy Up, with an "outer loop" returning from the analysis to the O-B calculation; the outputs are the diagnostics file and the analysis xa.]
FIG. 2: Illustration of the major steps taken during the 3DVAR analysis procedure.
The top-level structure of 3DVAR is shown in Fig. 2. 3DVAR runs under the WRF model framework to permit access to the WRF MPP software required for applications of 3DVAR on multiple-processor platforms. In the future, this arrangement will also allow easy access to other parts of the WRF system (e.g. I/O) from 3DVAR (and vice versa). The 3DVAR algorithm is called as a "mediation layer" subroutine from the WRF driver.
The following summarizes the role of each step in the 3DVAR algorithm:

1. Setup MPP: Details of the run configuration are read in from a WRF namelist file. Tile, memory and domain dimensions are calculated and stored.

2. Read [3DVAR] Namelist: 3DVAR run-time options are read in from a namelist file. These options are described more fully in Appendix D of the NCAR Technical Note (Barker et al. 2003); details can also be found on the 3DVAR tutorial page.

3. Setup Background: The background field xb is read in (MM5 format for MM5 applications, WRF format for WRF). Variables required by 3DVAR are stored in the xb Fortran90 derived data type (e.g. xb % u, xb % v, etc.). Any additional fields present in the input file are ignored.

4. Setup Background Errors: Components of the background error (eigenvectors, eigenvalues, lengthscales and balance regression coefficients) are read in (currently in MM5 format) and stored in the be derived data type (e.g. be % v1, be % reg_coeff, etc.).

5. Setup Observations: Observations yo and metadata (output from the observation preprocessor) are read in (in either MM5 3DVAR ASCII or BUFR format) and stored in the ob derived data type (e.g. ob % synop % lat, ob % sonde % u, etc.). Basic quality control checks are again applied (e.g. domain checks) and an initial quality control flag is assigned.

6. Calculate O-B: For valid data, the innovation vector yo - yb is calculated and stored in the iv derived data type (of similar design to the ob structure but including additional metadata). The transform yb = H(xb) of the full-resolution background xb to observation space uses the nonlinear observation operator H. This transform involves both a change from model to observation variables and interpolation from grid points to the observation location. A "maximum error check" is applied to all values within the innovation vector iv, comparing each O-B value against a maximum value defined as a multiple of the observation error for that observation.
Various namelist parameters exist to tune QC checks as well as ones to choose which QC flags to ignore. 7. Minimize Cost Function: The minimization of the 3DVAR cost function proceeds iteratively as described below. Diagnostic output includes cost function and gradient norm values for each iteration. 8. Calculate Analysis: Having found the control variables that minimize the cost function, a final transform of the analysis increments to model (i.e. gridded u, v, T, p, q) space is performed. The increments are added to the background values to produce the analysis. Finally, checks are performed to ensure certain variables are within physically reasonable limits (e.g. relative humidity is greater than zero and less that 100%). The increments are adjusted if analysis values fall outside this range. 9. Compute Diagnostics: Assimilation statistics (minimum, maximum, mean and root mean square) are calculated and output for study e.g. O-B, O-A statistics for each observation type, A-B (increment) statistics for each model variable. Output files are described in Appendix E of the NCAR Technical Note (Barker et al. 2003).
10. Output Analysis: Both the analysis and the analysis increments are output.

11. Tidy Up: Dynamically allocated memory is deallocated and summary run-time data are output.

The "outer loop" seen in Fig. 2 permits the recalculation of the innovation vector using the analysis as an improved "background". The recalculation of O-B uses the full nonlinear observation operator H and hence provides a way of introducing nonlinearities into the analysis procedure. In addition, quality control checks based on maximum O-B values can be repeated. This amounts to a crude "variational quality control": observations previously rejected because of a too-large O-B value may be accepted in subsequent outer loops if the new O-B drops below the specified maximum value.
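The O-B calculation, the "maximum error check", and the outer-loop re-check described above can be sketched numerically. The function and variable names below (innovations, max_error_check, max_error_factor) are illustrative stand-ins, not 3DVAR's actual identifiers, and the numbers are invented:

```python
# Sketch of O-B innovations and the maximum-error QC check, with an
# outer-loop re-check against an improved background. Illustrative only.
import numpy as np

def innovations(obs_values, model_equivalents):
    """O-B: observation minus background interpolated to observation space."""
    return obs_values - model_equivalents

def max_error_check(o_minus_b, obs_errors, max_error_factor=5.0):
    """Accept observations whose |O-B| is within a multiple of the obs error."""
    return np.abs(o_minus_b) <= max_error_factor * obs_errors

obs = np.array([272.0, 280.0, 301.5])         # e.g. temperatures (K)
obs_err = np.array([1.0, 1.0, 1.0])
background = np.array([271.2, 279.5, 285.0])  # H(xb) at the obs locations

omb = innovations(obs, background)
ok = max_error_check(omb, obs_err)            # third obs fails a 5-sigma check

# In a subsequent outer loop the analysis serves as the "background";
# a previously rejected observation may now pass the same check.
analysis = np.array([271.8, 279.9, 298.0])    # illustrative improved background
ok2 = max_error_check(innovations(obs, analysis), obs_err)
```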
E.4.2 3DVAR Preconditioning Method

This subsection contains some of the mathematics behind the solution method chosen for the 3DVAR system. As stated above, the basic problem is to find the analysis state xa that minimizes a chosen cost function, here given by Eq. (1). For a model state x with n degrees of freedom, calculation of the background term Jb of the cost function requires O(n^2) calculations. For a typical NWP model with n ~ 10^6 - 10^7 (number of grid points times number of independent variables), direct solution is prohibitively expensive. One practical solution to this problem is to perform a preconditioning via a control variable transform defined by x' = Uv, where x' = x - xb. The transform U is chosen to approximately satisfy the relationship B = UU^T. Using the incremental formulation (Courtier et al. 1994) and the control variable transform, Eq. (1) may be rewritten

J(v) = Jb + Jo = (1/2) v^T v + (1/2) (yo' - HUv)^T (E + F)^-1 (yo' - HUv),     (E.3)
where yo' = yo - H(xb) is the innovation vector and H is the linearization of the potentially nonlinear observation operator H used in the calculation of yo'. In this form, the background term is essentially diagonalized, reducing the number of calculations required from O(n^2) to O(n). In addition, the background error covariance matrix equals the identity matrix I in control variable space, hence preconditioning the minimization procedure. The use of the incremental method has a number of advantages. Firstly, use of linear control variable transforms allows the straightforward use of adjoints in the calculation of the gradient of the cost function. Secondly, any imbalance introduced through the analysis procedure is limited to the (small) increments that are added to the balanced first guess. This generally leads to a more balanced analysis than that obtained using a technique in which the full-field analysis is constructed. The transformation x' = Uv must be designed to ensure the validity of the B = UU^T relationship. One goal is to transform to variables whose errors are largely uncorrelated with each other, thus reducing B to block-diagonal form. In addition, each component of v is essentially scaled by the appropriate background error variance to allow an accurate penalization in the transformed Jb cost function.
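Equation (E.3) can be exercised at toy size. In the sketch below the matrices are tiny random stand-ins for U, H and R = E + F, and the minimizing control vector is found by solving the gradient equation directly (an iterative minimization would be used at NWP problem sizes); none of this is 3DVAR source code:

```python
# Numerical sketch of the preconditioned incremental cost function
# J(v) = 1/2 v^T v + 1/2 (yo' - H U v)^T R^-1 (yo' - H U v),  R = E + F.
import numpy as np

rng = np.random.default_rng(0)
n, p = 6, 3                       # toy model / observation space sizes
U = rng.standard_normal((n, n))   # control variable transform, B = U U^T
H = rng.standard_normal((p, n))   # linearized observation operator
R = np.eye(p)                     # observation + representativeness errors
yo_inc = rng.standard_normal(p)   # innovation vector yo' = yo - H(xb)

def cost(v):
    d = yo_inc - H @ U @ v
    return 0.5 * v @ v + 0.5 * d @ np.linalg.solve(R, d)

def grad(v):
    d = yo_inc - H @ U @ v
    return v - U.T @ H.T @ np.linalg.solve(R, d)

# grad(v) = 0 gives (I + U^T H^T R^-1 H U) v = U^T H^T R^-1 yo'
A = np.eye(n) + U.T @ H.T @ np.linalg.solve(R, H @ U)
v_a = np.linalg.solve(A, U.T @ H.T @ np.linalg.solve(R, yo_inc))
x_inc = U @ v_a                   # analysis increment x' = U v
```

Note that the Jb term contributes only the identity to the Hessian A, which is the preconditioning effect described above.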
The 3DVAR control variable transform x' = Uv is in practice composed of a series of operations, x' = Up Uv Uh v. The transformation always proceeds from control space to model space (but is reversed in the adjoint code and in the calculation of control variable background error statistics via the NMC method). The individual operators represent, in order, the horizontal, vertical and change-of-physical-variable transforms. In the MM5 3DVAR algorithm, the horizontal transform Uh is performed using recursive filters to represent horizontal background error correlations. The vertical transform Uv is applied via a projection from eigenvectors of a climatological estimate of the vertical component of background error onto model levels. Finally, the physical variable transform Up converts control variables to model variables (e.g. u, v, T, p, q). Each stage of the control variable transform is discussed in the NCAR Technical Note (Barker et al. 2003).
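The recursive-filter idea behind Uh can be illustrated in one dimension: a forward and a backward first-order sweep spread a point increment into a smooth, approximately Gaussian shape whose width plays the role of a horizontal correlation lengthscale. This is a sketch under assumed parameters (alpha, number of passes), not 3DVAR's implementation:

```python
# Illustrative 1D first-order recursive filter: forward plus backward
# sweeps smooth a field; repeated passes approach Gaussian correlations.
import numpy as np

def recursive_filter_1d(field, alpha, passes=2):
    f = np.array(field, dtype=float)
    for _ in range(passes):
        for i in range(1, f.size):            # forward (causal) sweep
            f[i] = alpha * f[i - 1] + (1 - alpha) * f[i]
        for i in range(f.size - 2, -1, -1):   # backward (anti-causal) sweep
            f[i] = alpha * f[i + 1] + (1 - alpha) * f[i]
    return f

spike = np.zeros(41)
spike[20] = 1.0                # a unit increment at a single grid point
response = recursive_filter_1d(spike, alpha=0.5)
# The spike spreads into a smooth, nearly symmetric correlation-like shape.
```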
E.4.3 3DVAR Source Code Organization

This section provides a brief tour of the 3DVAR code. Significant effort has been made to make the code self-documenting, so this section should be seen as a prelude to looking at the code itself. The use of Fortran90 has a number of advantages in designing a flexible, clear code. Firstly, the use of derived data types, e.g. to store observations and their metadata, significantly reduces the clutter that would be required in an equivalent Fortran77 code, in which all components (e.g. station identifiers, locations, quality control flags, errors, values, etc.) would be separate entities. The entire observation structure can be passed as a single subroutine argument in which the details are hidden. An extra advantage is that if a low-level routine requires additional components of the data type, the calling tree above that routine stays the same. The ability to use subroutine and variable names longer than Fortran77's six-character limit (Fortran90 allows up to 31 characters) also improves readability. Care must be taken in the use of some Fortran90 intrinsic procedures and dynamic allocation of memory. Experience has shown that, on certain platforms, use of these features may increase CPU time relative to their Fortran77 counterparts.
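The derived-type layout described above (e.g. ob % synop % lat) can be mimicked with Python dataclasses for illustration; the field names below are invented examples, and the real code uses Fortran90 TYPEs:

```python
# Illustration of the derived-type idea: the whole observation structure
# travels as one argument, so adding a component to SynopReport does not
# change the signature of routines like count_valid.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SynopReport:
    lat: float
    lon: float
    pressure: float      # station pressure (hPa); illustrative field
    qc_flag: int = 0     # 0 = passed initial quality control

@dataclass
class ObStructure:
    synop: List[SynopReport] = field(default_factory=list)
    # sonde, metar, ... components would follow the same pattern

def count_valid(ob: ObStructure) -> int:
    return sum(1 for r in ob.synop if r.qc_flag == 0)

ob = ObStructure(synop=[SynopReport(40.2, -103.2, 841.4),
                        SynopReport(41.2, -103.7, 812.4, qc_flag=1)])
```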
FIG. 3: 3DVAR source code organization.
The 3DVAR source code is split into subdirectories containing logically distinct algorithms. Fig. 3 illustrates the setup for an earlier serial version of the code. As well as making the 3DVAR code easier to follow, the idea is to identify aspects that could be used, replaced or shared with code in the wider WRF framework in which 3DVAR resides, e.g. general dynamics, physics and interpolation code. Each subdirectory within Fig. 3 is identified with a particular Fortran90 module file, i.e. all the routines within the subdirectory are "Fortran90 INCLUDEd" in a single module file with the same name as the subdirectory (and the filetype .f90). Fig. 4 gives an example for the DA_VToX_Transforms subdirectory of Fig. 3. By convention, the module file in the DA_VToX_Transforms subdirectory shown in Fig. 4 is named DA_VToX_Transforms.f90 and CONTAINS all other (scientific) routines within the directory. The references within DA_VToX_Transforms.f90 to other subroutines within the DA_VToX_Transforms directory are seen in Fig. 5.
FIG. 4: 3DVAR single subdirectory source code organization.
Other reasons for adopting this code structure include the use of available automatic makefile generation scripts (which search .f90 files and the routines specified in their INCLUDE lines). Also, experience has shown that this approach makes the use of automatic Fortran-to-HTML tools much easier - common subdirectory, file and subroutine naming conventions are required to utilize this very useful facility.
FIG. 5: Example 3DVAR single module organization - DA_VToX_Transforms.f90
E.5 How to Run 3DVAR

Please refer to the online tutorial, available from the MM5 3DVAR web page, http://www.mmm.ucar.edu/3dvar/, to learn how to run 3DVAR.
Appendix F: RAWINS

F.1 Source of Observations
• NMC operational global surface and upper-air observation subsets as archived by the Data Support Section (DSS) at NCAR.
  - Upper-air data: RAOBS (ADPUPA), in NMC ON29 format.
  - Surface data: NMC Surface ADP data, in NMC ON29 format.
NMC Office Note 29 can be found at: http://www.emc.ncep.noaa.gov/mmb/papers/keyser/on29.htm.
F.2 Bogus Options
See examples in the appendices of the RAWINS/DATAGRID Tech Note.

KBOGUS: Change Existing Observations.
• Requires an additional input file, the KBOGUS file, called KBOG_REMOTE in your local working directory.

NBOGUS: Insert Station Reports.
• Requires an additional input file, the NBOGUS file, called NBOG_REMOTE in your local working directory.

NSELIM: Remove Station Reports.
• Uses the NBOGUS file.

AUTOBOGUS: Subjective Examination of Suspect Data.
• Gives the user maximum control over data removal.
• Two submittals of RAWINS are required.
• First submittal:
  - Screens the data (ERRMX check) and creates analyses.
  - Plots analyses with suspect observations overlaid (file AB.PLT).
  - Creates an Autobogus file listing suspect observations (called AUBG_OUT in your local working directory).
• User control:
  - Examine the plots and the list of suspect observations.
  - Decide which of the suspect observations should be included in the final analyses.
  - Edit the Autobogus file AUBG_OUT (T,F,B).
• Second submittal:
  - Reads the edited Autobogus file AUBG_OUT from your local working directory.
  - Creates a new analysis including all observations flagged T or B in the Autobogus file.
F.3 Script Variables

Submit

0 = Not an Autobogus submittal. 1 = First Autobogus submittal. 2 = Second Autobogus submittal.
InObs
ARCHIVE = use archived NMC observations. UNIOBS = Obsolete option. Do not use.
SFCsw
Flag to indicate whether surface observations are to be accessed (SFCsw = SFC, recommended) or not (SFCsw = NoSFC).
BOGUSsw
Flag to indicate the type of bogus data to be used: NoBOG: Not a bogus job. Konly: kbogus data available (no nbogus). Nonly: nbogus data available (no kbogus). KandN: nbogus and kbogus data available.
InDatg
Pathname for the REGRID output used as input to RAWINS.
InRaobs
Pathname for the RAOB data (see the ~mesouser/catalog/catalog.raob file; for pre-1984 cases, see ~mesouser/catalog/catalog.raob.1973-1985).
InSfc6h
Pathname for the 6-hrly surface data and ship data (see ~mesouser/catalog/catalog.sfc, LIST A).
InSfc3h
Pathname for the 3-hrly surface data (see ~mesouser/catalog/catalog.sfc, LIST B).
upa_unidate
8-digit date (YYMMDDHH) for upper-air data from Unidata. If more than one time period of data is to be input, upa_unidate may have more than one date.
sfc_unidate
8-digit date for surface data from Unidata. Like upa_unidate, sfc_unidate may have more than one date.
F.4 Parameters

IMX, JMX
Must be equal to the I and J dimensions of the domain being processed. Must include expansion if the expanded grid is processed.
LMX
Must be greater than or equal to the maximum number of levels (mandatory plus new plus surface).
IRB
Must be greater than the number of rawinsonde reports which will be processed.
IRS
Must be greater than IRB + the number of surface stations which will be processed.
F.5 Namelist Variables

NNEWPL
Number of new pressure levels to interpolate to.
GNLVL
The pressures at the new levels (bottom to top, mb).
IWTSCM
Type of weighting scheme for objective analysis: 1 = Cressman (circular). 2 = Ellipse. 3 = Banana (recommended). 4 = Multiquadric (worth a try, but use with caution).
IWIND
1 = Use the surface wind output from REGRID as the first guess. 2 = Use the 1000 mb wind output from REGRID as the first guess.
UNIOBS
T/F: Use Unidata observations (obsolete; do not touch).
RWSUBM
Same as script variable SUBMIT (do not touch).
IUINTVL
Time interval of raob data in hours (should be 12).
ISFCS3
T/F: Use 3-hrly surface data (recommended).
ISFCS6
T/F: Use 6-hrly surface and ship data (recommended).
F4D
T/F: Create a surface FDDA file.
INTF4D
Time interval (hours) for FDDA output (either 3 or 6).
LAGTEM
T: Use a 3 hour lag-time for FDDA first guess. F: Use a first guess interpolated from the 12-hour surface first guesses for FDDA first guess.
NSELIM
T/F: Specific raobs (Nbogus option) are to be deleted (one flag for each time). Flags refer to 12-hour intervals for non-FDDA jobs and to INTF4D intervals for FDDA jobs.
BUDWGT
Weighting factor for the BUDDY test.
ERRMXW
Maximum difference allowed (m/s) for the ERRMX check for winds.
ERRMXT
Maximum difference allowed (K) for the ERRMX check for temperature.
ERRMXP
Maximum difference allowed (mb) for the ERRMX check for pressure.
IPLOT
T/F: Plot the raobs (One flag for each time period).
ISKEWT
Plot the raobs as skew-T (ISKEWT=1) or Stuve (ISKEWT=2) diagrams.
ISPRINT
T/F: Print surface input observations.
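The circular Cressman scheme selected by IWTSCM=1 above weights each observation by its distance r from the analysis grid point within an influence radius R; the classic weight function is sketched below (the ellipse and banana schemes modify the shape of the influence region). This is an illustration of the standard Cressman formula, not RAWINS source code:

```python
# Classic Cressman (1959) weight: w = (R^2 - r^2) / (R^2 + r^2) for r < R,
# and zero outside the influence radius R. Units of r and R must match.
def cressman_weight(r, R):
    if r >= R:
        return 0.0
    return (R * R - r * r) / (R * R + r * r)

# An observation at the grid point gets full weight; weight falls to zero
# at the edge of the influence circle.
w_center = cressman_weight(0.0, 100.0)
w_half = cressman_weight(50.0, 100.0)
w_edge = cressman_weight(100.0, 100.0)
```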
F.6 How to Run RAWINS
1) Obtain the source code tar file from NCAR's ftp site: ftp://ftp.ucar.edu/mesouser/MM5V3/RAWINS.TAR.gz
2) gunzip the file ("gunzip RAWINS.TAR.gz") and untar it ("tar -xf RAWINS.TAR"). The directory RAWINS will be created.
3) Go to the RAWINS directory ("cd RAWINS") and create the RAWINS job deck by typing "make rawins.deck".
4) Edit rawins.deck for script options, input data locations/file names, parameter statements and namelist values.
5) Make sure you have the required input files: REGRID_DOMAINx and observation files from NCAR's archive. You may use fetch.job from the Templates/ directory to obtain observation files from NCAR's MSS.
6) Type "rawins.deck >& log" to compile and run the program.
7) Check the files "log" and "rawins.print.out" for possible compile and run-time errors. If the job is successful, you should at least obtain the RAWINS output file RAWINS_DOMAINx. More information can be found in the README file in the RAWINS directory.
F.7 Check Your Output
Always check the printout returned by RAWINS. You should check for at least the following:
• The “STOP 99999” print statement is the signal that RAWINS completed without crashing. This does not necessarily mean, however, that RAWINS did what you expected it to.
• Print statements soon after rawins.exe has begun executing echo many of the settings from your namelist, describe the files that are read, and describe what RAWINS expects to do.
• Number of stations found. Check to see if this number is reasonable for your area of interest.
• “DATA REMOVED BY ...” will list the data points which have been removed in the analysis procedure by the ERRMAX check and the BUDDY check.
F.8 RAWINS didn't Work! What Went Wrong?
Various problems may cause RAWINS to crash. Careful examination of the printout will often reveal the problem and the solution. Some common problems:
• Read past end-of-file: Check all the input filenames to make sure that the proper files were read. Double-check that the original files are the right ones.
• Double-check parameter settings. Remember the expanded dimensions if you used an expanded domain for TERRAIN and REGRID.
• “REPORT TYPE IRTYP = ### NOT KNOWN. STOPPING”: The most likely problem is that observations input files have not been correctly specified. Check that the InRaobs script variable refers to an upper-air observation file, and that the InSfc6h and InSfc3h script variables refer to files from list A and B respectively from the catalog.sfc file.
• RAWINS gets confused if the PTOP value (top pressure level, specified in REGRID) is lower than (i.e., higher up in the atmosphere than) 50 mb.
• RAWINS is currently limited to processing 50 time periods in one submittal.
F.9 RAWINS Files and Unit Numbers
RAWINS reads and writes a number of different files, and accesses most of them by referring to Fortran unit numbers. Unit numbers are assigned as follows:

Table 6.1 File names, Fortran unit numbers and their descriptions for RAWINS.

File name            Unit number         Description
REGRID_DOMAINx       fort.4              First-guess fields output from REGRID (input)
rawins.namelist      fort.14             Namelist for user options (input)
raobsA, B, etc.      fort.15, 16, etc.   Input upper-air observations
sfc3hrA, B, etc.     fort.20, 21, etc.   Input 3-hourly surface observations
sfc6hrA, B, etc.     fort.25, 26, etc.   Input 6-hourly surface observations
shpvolA, B, etc.     fort.30, 31, etc.   Input ship and buoy observations
autobog              fort.10             Input autobogus list as edited by user
kbogus               fort.12             Input kbogus list
nbogus               fort.13             Input nbogus list
RAWINS_DOMAINx       fort.2              Main output file: analyzed fields
RAWOBS_DOMAINx       fort.11             Upper-air observations as read by RAWINS (output)
UPR4DOBS_DOMAINx     fort.61             Upper-air observations processed by RAWINS (output)
SFCFDDA_DOMAINx      fort.39             Output surface analyses for FDDA
rawab.out            fort.40             Output autobogus list of suspect stations
SFC4DOBS_DOMAINx     fort.60             Surface observations processed by RAWINS (output)
F.10 NBOGUS example
In the following example, the following namelist options are assumed:

F4D = T
INTF4D = 6
NBOGUS = T, F, T, T, F

Sample NBOGUS file:

1994-01-26_00:00 00001  1  4   40.20 -103.20
1000.0 99999.0 99999.0 99999.0 99999.0 99999.0
 850.0  1441.2     6.0     2.7   100.1    14.2
 700.0  3011.3    -2.5    10.2   184.6    13.8
 500.0  5599.8   -19.8    15.0   192.6     9.3
1994-01-26_00:00 00001 -1  2   40.20 -103.20
 852.0  1422.0     5.3     2.0    99.2     5.8
 795.0  1988.6     5.0     0.9   143.8    15.5
1994-01-26_00:00 00002  1  5   41.20 -103.70
1000.0 99999.0 99999.0 99999.0 99999.0 99999.0
 850.0 99999.0 99999.0 99999.0 99999.0 99999.0
 700.0  3020.7    -2.8    11.1 99999.0 99999.0
 500.0  5602.4   -20.8    14.1 99999.0 99999.0
 400.0  7204.8   -34.7     6.2 99999.0 99999.0
1994-01-26_00:00 00002 -1  3   41.20 -103.70
 844.5  1501.0     6.2     2.3 99999.0 99999.0
 780.0  2152.1     4.0    11.1 99999.0 99999.0
 685.0  3192.0    -3.5    16.6 99999.0 99999.0
999
888
1994-01-26_00:12 00001  1  4   40.20 -103.20
1000.0 99999.0 99999.0 99999.0 99999.0 99999.0
 850.0 99999.0 99999.0 99999.0 99999.0 99999.0
 700.0  2909.9    -8.8     1.2   111.3    14.9
 500.0  5467.3   -21.5     4.4    74.0    24.3
1994-01-26_00:12 00001 -1  1   40.20 -103.20
99999.0 99999.0 99999.0 99999.0 99999.0 99999.0
999
888
1994-01-26_00:18 00002   41.20 -103.70
 841.4    -7.3     0.9    68.7     4.2
1994-01-26_00:18 00003   41.60 -104.80
 812.4    -5.9     0.7    26.1     2.6
888
The fields in the station header records are date, station number, an integer flag for mandatory (1) or significant (-1) levels, number of levels, latitude, and longitude. The fields in the data records are pressure (mb), height (m), temperature (°C), dewpoint depression (°C), wind direction (degrees from north), and wind speed (m/s). Note that since Version 3, the date format used in the nbogus and kbogus files has been changed to character*16. The format for the nbogus file has been changed to

10   FORMAT(22X,A16,1X,A5,2I3,2F8.2)

for upper-air data (mandatory and significant levels), and for surface data

10   FORMAT(22X,A16,1X,A5,6X,2F7.1)
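The column positions implied by the upper-air format above, FORMAT(22X,A16,1X,A5,2I3,2F8.2), can be decoded by slicing fixed character ranges: 22 skipped columns, a 16-character date, one blank, a 5-character station id, two 3-digit integers (level flag and level count), and latitude/longitude as F8.2 fields. The helper below and its sample line are illustrative constructions, not part of RAWINS:

```python
# Sketch of parsing an NBOGUS upper-air header per FORMAT(22X,A16,1X,A5,2I3,2F8.2).
def parse_nbogus_header(line):
    line = line.ljust(66)                  # pad short lines, as Fortran would
    return {
        "date":    line[22:38].strip(),    # A16: e.g. 1994-01-26_00:00
        "station": line[39:44].strip(),    # A5
        "levtype": int(line[44:47]),       # I3: 1 mandatory, -1 significant
        "nlevels": int(line[47:50]),       # I3
        "lat":     float(line[50:58]),     # F8.2
        "lon":     float(line[58:66]),     # F8.2
    }

# Hand-built sample record matching the column layout above.
record = (" " * 22 + "1994-01-26_00:00" + " " + "00001"
          + "  1" + "  4" + "   40.20" + " -103.20")
hdr = parse_nbogus_header(record)
```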
F.11 NBOGUS Notes:
• See Appendix D of the DATAGRID/RAWINS document for further details. Note, though, that the format for dates has been changed (see above).
• The NBOGUS file is read with formatted read statements. Any misplaced numbers are likely to cause RAWINS to fail or, worse, to misinterpret the NBOGUS file without failing. It is thus extremely important to carefully examine the RAWINS output for an NBOGUS submittal.
• Mandatory and significant-level data from a given upper-air report are separated.
• Mandatory levels below ground are filled with 99999.0.
• For an upper-air report with mandatory-level data only, insert one significant level with all fields filled with 99999.0. With significant-level data only, insert two mandatory-level records with all fields set to 99999.0.
• The namelist variable NBOGUS refers to ITIMINT hourly intervals if the F4D option is not used, and to INTF4D hourly intervals if it is.
• For times when NBOGUS is false, no information is included in the NBOGUS file (not even the 888 and 999 lines discussed in the following two items).
• 999 in the date column denotes the end of upper-air data for a particular time.
• 888 in the date column denotes the end of surface data for a particular time.
• If no surface data are available, a line with 888 in the date column immediately follows the line with 999 in the date column.
• For times when upper-air data would normally be expected, but there are only surface bogus reports, include a line with 999 in the date column before starting with the surface reports.
• For times when no upper-air data are expected, start with the surface data (without inserting a line with 999 in the date column).
F.12 KBOGUS Notes:
The format of KBOGUS data in V3 RAWINS is

40   FORMAT(I4,I2,1X,A8,1X,F7.1,1X,F7.1,4(1X,F7.1),2X,I2,4X,A16)

Note that the format for the date has been changed from 'I8' to 'A16'.
Appendix G: Alternative Plotting Package - RIP

G.1 Purpose
Program RIP is an alternative plotting utility for the MM5 modeling system. RIP (which stands for Read/Interpolate/Plot) is a Fortran program that invokes NCAR Graphics routines for the purpose of visualizing output from the Fourth- and Fifth-Generation versions of the PSU/NCAR Mesoscale Modeling System (MM4/MM5). It has been under continuous development since 1991, primarily by Mark Stoelinga at both NCAR and the University of Washington. RIP makes shaded/color plots and the overlaying of more than two fields much easier. The package became so popular among MM5 users that it has since been incorporated into the MM5 programs and is supported by mesouser.
RIP can plot output from most of the MM5 programs: TERRAIN_DOMAINx, REGRID_DOMAINx, RAWINS_DOMAINx, LITTLE_R_DOMAINx, MMINPUT_DOMAINx, LOWBDY_DOMAINx, and MMOUT_DOMAINx. RIP can produce the following output:
• horizontal plots on σ, pressure, θ, θe, or PV surfaces;
• vertical plots with σ, pressure or log pressure, π, θ, θe, or PV as the coordinate surface;
• skew-T/log p soundings at a grid point or lat/lon location, with optional hodographs and sounding quantities;
• single-point profiles;
• forward and backward trajectories; and
• input data for Vis5D.
The program is well documented. Documentation can be found inside the program tar file in the Doc/ directory, and on the MM5 home page at http://www.mmm.ucar.edu/mm5/doc.html. The newest version of this code is called RIP4. The difference between this new version and the older version is that RIP4 can also ingest WRF model data.
G.2 RIPDP
RIP does not read MM5 output directly, so the RIP Data Preparation program (RIPDP) is used to read the MM5 output files, process the data, and write it out in a format that RIP can read. Once RIPDP has run, every variable at each time period in the MM5 output file will have been placed in a separate file. Note that this process generates a substantial number of RIP input files. RIPDP uses a namelist to select and process the required data from the MM5 output file:

&userin
 ptimes=0,-72,1, ptimeunit='h', tacc=90., discard='LANDMASK','H2SO4',
 iexpandedout=1
&end
Table G-1: RIPDP namelist

ptimes (INTEGER)
    Times to process. This can be a string of times, or a series in the form A,-B,C, which means "times from hour A to hour B, every C hours".
ptimeunit (CHARACTER)
    Time units. This can be either 'h' (hours), 'm' (minutes), or 's' (seconds).
tacc (REAL)
    Time tolerance in seconds. Any time in the model output that is within tacc seconds of a time specified in ptimes will be processed.
discard (CHARACTER)
    All variables listed here will not be processed.
iexpandedout (INTEGER)
    Only relevant for TERRAIN or REGRID output. If the output is on an expanded domain and this flag is set to 1, the output will be plotted on the expanded domain.
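The "A,-B,C" series convention used by ptimes (e.g. ptimes=0,-72,1 for every hour from 0 to 72) can be expanded mechanically. The helper below is a hypothetical illustration of that convention, not RIPDP code:

```python
# Expand a ptimes-style list: a negative entry -B followed by a step C
# turns the preceding value A into the series A, A+C, ..., B.
def expand_ptimes(values):
    times, i = [], 0
    while i < len(values):
        if values[i] < 0:                 # -B terminator, next entry is C
            a, b, c = times.pop(), -values[i], values[i + 1]
            t = a
            while t <= b:
                times.append(t)
                t += c
            i += 2
        else:
            times.append(values[i])
            i += 1
    return times

hours = expand_ptimes([0, -72, 1])        # ptimes=0,-72,1
plain = expand_ptimes([0, 6, 12])         # a plain string of times
```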
G.3 RIP UIF
Once the data is in the correct format, the desired plots can be generated with the use of a RIP UIF (User Input File):

&userin
 idotitle=1,titlecolor='def.foreground',
 ptimes=0,6,12, ptimeunits='h',tacc=120,timezone=-7,iusdaylightrule=1,
 iinittime=1,ivalidtime=1,inearesth=0,
 flmin=.09, frmax=.92, fbmin=.10, ftmax=.85,
 ntextq=0,ntextcd=0,fcoffset=0.0,idotser=0,
 idescriptive=1,icgmsplit=0,maxfld=10,itrajcalc=0,imakev5d=0
&end
&trajcalc
 rtim=15,ctim=6,dtfile=3600.,dttraj=600.,vctraj='s',
 xjtraj=95,90,85,80,75,70,65,80.6,80.6,80.6,80.6,80.6,80.6,
 yitraj=50,55,60,65,70,75,80,77,77,77,77,77,77,
 zktraj=.9,.9,.9,.9,.9,.9,.9,.99,.9,.8,.7,.6,.5,
 ihydrometeor=0
&end
===========================================================================
----------------------   Plot Specification Table   ----------------------
===========================================================================
feld=xlus; ptyp=hh; chfl; cosq=1,dark.gray,2,light.yellow,3,light.green,>
   4,yellow,5,yellow,6,light.green,7,light.yellow,8,light.green,>
   9,light.green,10,light.yellow,11,green,12,dark.green,13,green,>
   14,dark.green,15,green,16,light.blue,17,green,18,green,>
   19,light.gray,20,light.gray,21,dark.green,22,light.gray,>
   23,light.gray,24,white
feld=map; ptyp=hb
feld=tic; ptyp=hb time=0
===========================================================================
feld=ter; ptyp=hc; cint=100; colr=red
feld=map; ptyp=hb
feld=tic; ptyp=hb time=0
===========================================================================
feld=ter; ptyp=hc; cint=50; cmth=fill; cosq=-1e-5,light.blue,1e-5,white,>
   3000,brown
feld=map; ptyp=hb
feld=tic; ptyp=hb time=0
===========================================================================
feld=tmc; ptyp=hc; vcor=s; levs=b1; cint=2; cmth=fill;>
   cosq=-32,light.violet,-24,violet,-16,blue,-8,green,0,yellow,8,red,>
   16,orange,24,brown,32,light.gray
feld=slp; ptyp=hc; cint=2; linw=2
feld=uuu,vvv; ptyp=hv; vcmx=-1; colr=white; intv=5
feld=map; ptyp=hb
feld=tic; ptyp=hb
===========================================================================
feld=tmc; ptyp=hc; vcor=p; levs=850,700,-300,100; cint=2; cmth=fill;>
   cosq=-32,light.violet,-24,violet,-16,blue,-8,green,0,yellow,8,red,>
   16,orange,24,brown,32,light.gray
feld=ght; ptyp=hc; cint=30; linw=2
feld=uuu,vvv; ptyp=hv; vcmx=-1; colr=white; intv=5
feld=map; ptyp=hb
feld=tic; ptyp=hb
===========================================================================
feld=pvo; ptyp=vc; crsa=10,30; crsb=30,10; vcor=p; vwin=1050,200; cint=.25;>
   cmth=fill; cosq=0,white,4,dark.gray; cbeg=0; cend=5
feld=the; ptyp=vc; cint=2; colr=red
feld=uuu,vvv,omg; ptyp=vv
feld=tic; ptyp=vb
===========================================================================
feld=tic; ptyp=sb; sloc=KORD; hodo; sndg
feld=tmc; ptyp=sc; colr=red
feld=tdp; ptyp=sc; colr=blue
feld=uuu,vvv; ptyp=sv; colr=dark.green; hodo; sndg
===========================================================================
A RIP UIF consists of two namelists (userin, which controls the general input specifications, and trajcalc, which controls the creation of trajectories), and a Plot Specification Table (PST), which controls the plotting of the required frames.
Table G-2: RIP namelist: userin

idotitle (INTEGER)
    Controls the first part of the title line.
titlecolor (CHARACTER)
    Controls the color of the title lines.
ptimes (INTEGER)
    Times to process. This can be a string of times, or a series in the form A,-B,C, which means "times from hour A to hour B, every C hours".
ptimeunits (CHARACTER)
    Time units. This can be either 'h' (hours), 'm' (minutes), or 's' (seconds).
tacc (REAL)
    Time tolerance in seconds. Any time in the model output that is within tacc seconds of a time specified in ptimes will be processed.
timezone (INTEGER)
    Specifies the offset from Greenwich time.
iusdaylightrule (INTEGER)
    Flag to determine whether US daylight saving is applied.
iinittime (INTEGER)
    Controls the plotting of the initial time on the plots.
ivalidtime (INTEGER)
    Controls the plotting of the plot valid time.
inearesth (INTEGER)
    Plot the time as two digits rather than four digits.
flmin (REAL)
    Left frame limit.
frmax (REAL)
    Right frame limit.
fbmin (REAL)
    Bottom frame limit.
ftmax (REAL)
    Top frame limit.
ntextq (INTEGER)
    Quality of the text.
ntextcd (INTEGER)
    Text font.
fcoffset (INTEGER)
    Change the initial time to something other than the output initial time.
idotser (INTEGER)
    Generate time-series output files (no plots; only an ASCII file that can be used as input to a plotting program).
idescriptive (INTEGER)
    Use more descriptive plot titles.
icgmsplit (INTEGER)
    Split metacode into several files.
maxfld (INTEGER)
    Reserve memory for RIP.
itrajcalc (INTEGER)
    Generate trajectory output files (use namelist trajcalc when this is set).
imakev5d (INTEGER)
    Generate output for Vis5D.
G.4 RIP PST
The second part of the RIP UIF consists of the Plot Specification Table. The PST provides all of the user control over particular aspects of individual frames and overlays. The basic structure of the PST is as follows:
• The first line of the PST is a line of consecutive equal signs. This line, as well as the next two lines, is ignored by RIP; it is simply a banner marking the start of the PST section.
• After that there are several groups of one or more lines, separated by a full line of equal signs. Each group of lines is a frame specification group (FSG), and it describes what will be plotted in a single frame of metacode. Each FSG must be ended with a full line of equal signs, so that RIP can determine where individual frames start and end.
• Each line within an FSG is referred to as a plot specification line (PSL). An FSG that consists of three PSLs will result in a single metacode frame with three overlaid plots.
Examples of frame specification groups (FSGs):

===========================================================================
feld=tmc; ptyp=hc; vcor=p; levs=850,700,-300,100; cint=2; cmth=fill;>
   cosq=-32,light.violet,-24,violet,-16,blue,-8,green,0,yellow,8,red,>
   16,orange,24,brown,32,light.gray
feld=ght; ptyp=hc; cint=30; linw=2
feld=uuu,vvv; ptyp=hv; vcmx=-1; colr=white; intv=5
feld=map; ptyp=hb
feld=tic; ptyp=hb
===========================================================================
This FSG will generate 5 overlaid plots:
• Temperature in degrees C (feld=tmc), plotted as a horizontal contour plot (ptyp=hc) on pressure levels (vcor=p). The pressure levels used will be 850, and 700 to 300 in steps of 100 mb (thus 6 plots will be generated: on 850, 700, 600, 500, 400, and 300 mb). The contour interval is set to 2 (cint=2), and shaded plots (cmth=fill) will be generated with a color range from light violet to light gray.
• Geopotential height (feld=ght), also plotted as a horizontal contour plot. This time the contour interval is 30 (cint=30), and contour lines with a line width of 2 (linw=2) will be used.
• Wind vectors (feld=uuu,vvv), plotted as barbs (vcmx=-1).
• A map background (feld=map), and
• Tic marks placed on the plot (feld=tic).

===========================================================================
feld=pvo; ptyp=vc; crsa=10,30; crsb=30,10; vcor=p; vwin=1050,200; cint=.25;>
   cmth=fill; cosq=0,white,4,dark.gray; cbeg=0; cend=5
feld=the; ptyp=vc; cint=2; colr=red
feld=uuu,vvv,omg; ptyp=vv
feld=tic; ptyp=vb
===========================================================================
This FSG will generate 4 overlaid plots:

• Potential vorticity (feld=pvo). This will be plotted as a vertical contour plot (ptyp=vc), from grid point 10,30 to grid point 30,10 (crsa=10,30; crsb=30,10). The vertical coordinate used is pressure (vcor=p), with a window from 1050 to 200 mb (vwin=1050,200). The contour interval is set to .25 (cint=.25), and shaded plots (cmth=fill) will be generated with a color range from white to dark gray. Only values between 0 and 5 will be plotted (cbeg=0; cend=5).
• Potential temperature (feld=the) will also be plotted as a vertical contour plot. This time the contour interval will be 2 (cint=2), and the contour lines will be plotted in red (colr=red).
• 3D circulation vectors (feld=uuu,vvv,omg) in the plane of the cross section, and
• Tic marks will be placed on the plot (feld=tic).

===========================================================================
feld=tic; ptyp=sb; sloc=KORD; hodo; sndg
feld=tmc; ptyp=sc; colr=red
feld=tdp; ptyp=sc; colr=blue
feld=uuu,vvv; ptyp=sv; colr=dark.green; hodo; sndg
===========================================================================
This FSG will generate a sounding plot:

• Temperature in degrees C (tmc) and dew point temperature (tdp) will be plotted in red and blue on a skew-T/log p diagram.
• Wind barbs (uuu,vvv) will be plotted on the side in green.
• The station for which the plot is generated is ORD (sloc=KORD; see the station list in the RIP directory for station names and locations).
• A hodograph (hodo) as well as sounding information (sndg) will be plotted.
G.5 How to Run RIP

1) Obtain the source code tar file from one of the following places:
      Anonymous ftp: ftp://ftp.ucar.edu/mesouser/MM5V3/RIP.TAR.gz
      On NCAR MSS: /MESOUSER/MM5V3/RIP.TAR.gz
2) gunzip the file and untar it. A directory RIP will be created; cd to RIP.
3) Set up the environment variables NCARG_ROOT and RIP_ROOT:
      setenv NCARG_ROOT /usr/local/ncarg
      (note: the location of the NCAR Graphics library may vary on your machine)
      setenv RIP_ROOT your-rip-directory
      (example: setenv RIP_ROOT /home/usr/MM5V3/RIP)
4) Type ‘make’ to obtain a list of machines on which the code can be compiled. Type ‘make your-machine’ to create an executable for your platform.
5) Edit ripdp_sample.in to set up the namelist for the data processing, and run ripdp:
      ripdp -n ripdp_sample.in model-data-set-name data-file-1 data-file-2 data-file-3 ...
   (Use ripdp_mm5 for RIP4)
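Steps 2 and 3 above can be exercised end to end with a stand-in tar file. This is a sketch: the real RIP.TAR.gz comes from ftp.ucar.edu or the NCAR MSS, and the NCARG_ROOT path is machine-dependent; the decks use csh setenv, while this sketch uses the Bourne-shell equivalent so it runs anywhere:

```shell
#!/bin/sh
# Sketch of steps 2-3: uncompress, untar, and set the environment.
set -e
work=$(mktemp -d)
cd "$work"

# Stand-in for the downloaded RIP.TAR.gz (normally fetched by ftp):
mkdir RIP && touch RIP/Makefile
tar -cf RIP.TAR RIP && gzip RIP.TAR && rm -r RIP

gunzip RIP.TAR.gz          # step 2: uncompress ...
tar -xf RIP.TAR            # ... and untar; creates ./RIP
cd RIP

# step 3: environment variables (sh syntax; paths are illustrative)
NCARG_ROOT=/usr/local/ncarg; export NCARG_ROOT
RIP_ROOT=$work/RIP;          export RIP_ROOT

ls Makefile                # the 'make' targets of step 4 live here
```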
   where:
      model-data-set-name is the name (including a path, if the data are to be written to a directory) that will be used as a prefix for all the newly created processed data files. Since this step creates a lot of files, it is good practice to write them to a directory.
      data-file-1 ... are the MM5 output files that are going to be viewed with RIP (if you have split files, list them all).
6) Edit rip_sample.in to set up the namelists for plotting, and enter the specific plots you wish to create in the PST.
7) Run RIP:
      rip -f model-data-set-name rip_sample.in
8) If successful, this will create a metacode file rip_sample.cgm, which can be viewed with the NCAR Graphics utility ‘idt’.
Appendix H: Running MM5 Jobs on IBM

H.1 Purpose

Batch job decks (shell scripts) are used to run the MM5 modeling system programs on the IBMs. Job decks for the NCAR IBMs (blackforest or babyblue) are written in C-shell, and can be used in either batch or interactive mode. Users with syntax questions about C-shell constructs should refer to the man page for “csh”. New IBM users should browse through the NCAR Scientific Computing Division (SCD) users’ guide (URL: http://www.scd.ucar.edu/computers/blackforest) to get familiar with the IBM environment. Questions regarding IBM usage should be directed to the SCD consultants at [email protected]. For information on obtaining a computer account at NCAR, please visit SCD’s Web page: http://www.scd.ucar.edu/css/resources/requesthome.html.

All the modeling system job decks are structured similarly. The discussion in this section focuses on the general structure of a job deck. Shell variables that are common throughout the MM5 modeling system job decks are discussed. The examples are taken from the TERRAIN, REGRID, RAWINS, INTERPF, GRAPH, and MM5 job decks located in the ~mesouser/MM5V3 directory on blackforest/babyblue, or in the program tar files in the /mesouser/MM5V3 directory on NCAR’s anonymous ftp (ftp.ucar.edu).
H.2 Prerequisite

There are several things a user needs to prepare before starting to run jobs on NCAR’s IBMs.

• By default, your logon shell on the IBMs is the Korn shell (ksh). To change it to sh, csh, or tcsh, please follow the instructions in:
http://www.scd.ucar.edu/docs/ibm/ref/admin.html#shell
• Make sure your .cshrc file is up to date and contains the following:
      setenv NCARG_ROOT /usr/local
      setenv NCARG_LIB /usr/local/lib32/r4i4
  This enables a user to use NCAR Graphics. The environment variable NCARG_ROOT is required on a workstation where NCAR Graphics is available.
• Make sure you have an up-to-date .rhosts file on both your local machine and the IBM. A typical .rhosts file looks like this:
blackforest.ucar.edu username babyblue.ucar.edu username
• Make sure that you browse through the ~mesouser directories on the IBMs, or the mesouser/ directory on anonymous ftp. All job decks, program tar files, data catalogs, and utility programs reside in these directories.
H.3 Functions of Job Decks

The general job deck construct and functions are the following:

• batch job commands that are used on the IBMs to request CPU time and memory
• input/output pathnames to retrieve and archive data files
• job switches and output options
• where to obtain the Fortran program tar files
• parameter statements used in the Fortran 77 programs to define domain and data dimensions
• a FORTRAN namelist used during program execution to select runtime options
• a section that does not normally require user modification

All IBM job decks are structured similarly. Each deck begins with a number of Loadleveler commands required for batch job execution on an IBM. These are followed by the shell variables needed to set up data input and output pathnames, job switches, your source code location, etc. The next section of a job deck contains the PARAMETER statements required by the Fortran programs, and the namelist used during execution. The remainder of the job deck, which handles the data input, the generation of the executable from the FORTRAN source code, and the data archival, does not typically need user modification. A user needs to modify this section only to adjust the default settings when running the modeling system at another site.

If a user plans to work on NCAR’s IBMs, all he/she needs to acquire are the job decks from the ~mesouser/MM5V3 directory. These decks will enable a user to run in batch as well as in interactive mode on the IBM. When using these decks interactively, cd to /ptmp/$USER (this directory will not disappear when one logs off, and should never be removed manually), and copy the tar files to that directory before running.
H.4 How to Run IBM Jobs?

Since the IBM is primarily designed for batch jobs, most of the MM5 programs can be run on an IBM with batch job decks. To run an MM5 modeling system job deck on NCAR’s IBMs, obtain the decks from the ~mesouser/MM5V3 directory, edit them, and submit them to the IBM by typing

   llsubmit mm5-modeling-system-job-deck

To see all jobs and their status in all IBM SP-cluster system queues, type

   llq

To cancel one or more jobs from the Loadleveler queue, type

   llcancel job

Note 1: All jobs on the IBM should be run in the /ptmp/$USER directory.

Note 2: Large jobs (runs with big model domains) require a lot of memory. To run jobs efficiently, make sure that each processor will not require more than 250-300 MB. To check the memory requirements of your run, use the “size” command:

   size mm5.mpp

If the size of the executable becomes large, increase the values of PROCMIN_NS and PROCMIN_EW (see Appendix D for details on these variables) to reduce the memory requirement of each processor. Example: the SOC case with PROCMIN_NS and PROCMIN_EW both set to 1 will require roughly 47 MB per processor to run. With PROCMIN_NS and PROCMIN_EW both set to 2, this drops to 18 MB.

• The PROCMIN_NS and PROCMIN_EW variables are in the configure.user file.
• After changing these variables, first do a ‘make uninstall’ before doing ‘make mpp’.

Once you have determined the optimal settings for PROCMIN_NS and PROCMIN_EW, change the requested processors in the job script (see section H.5.1 below) to match these settings.

Example 1: If you set PROCMIN_NS and PROCMIN_EW both to 2, set “#@ node = 1” and “#@ tasks_per_node = 4” to run on 4 processors.

Example 2: If you set PROCMIN_NS to 2 and PROCMIN_EW to 4, set “#@ node = 2” and “#@ tasks_per_node = 4” to run on 8 processors. (Hint: On bluesky this will change to “#@ node = 1” and “#@ tasks_per_node = 8”, since bluesky has 8 processors per node.)

Hint: For maximum efficiency, always match the PROCMIN_NS and PROCMIN_EW variables to the number of processors you run on. For the SOC case, say you set both PROCMIN_NS and PROCMIN_EW to 1, but run on 4 processors. In this case the model will run, but EACH of the 4 processors will use 47 MB of memory. If you compile with PROCMIN_NS and PROCMIN_EW both set to 2 and run on 4 processors, each of the processors will only use 18 MB of memory, which is more efficient.
H.5 What to Modify in a Job Deck?

H.5.1 Batch Job Queueing Commands

Each of the IBM job decks has several lines of header commands for the Loadleveler queueing system. Inclusion of these commands allows the decks to be submitted to the batch queue on the IBM with the “llsubmit” command (e.g. llsubmit my.job). The Loadleveler commands inside the job decks are preceded by “# @”; when these jobs are run interactively, the Loadleveler commands are interpreted as comments.

#!/bin/csh
# @ job_type = parallel
# @ environment = COPY_ALL;MP_EUILIB=us
# @ job_name = my_job
# @ output = my_job.out
# @ error = my_job.err
# @ node = 1
# @ network.MPI = css0,shared,us
# @ tasks_per_node = 4
# @ node_usage = not_shared
# @ checkpoint = no
# @ wall_clock_limit = 1800
# @ class = com_reg
# @ queue

set MP_SHARED_MEMORY = yes
if ( ! -e /ptmp/$USER ) then
   mkdir /ptmp/$USER
endif
set TMPDIR = /ptmp/$USER
cd $TMPDIR
.......
.......
cd Run
timex poe ./mm5.mpp
exit
# @ job_type = parallel
When you run a parallel job, always set “job_type = parallel”, because the default is serial. In a parallel job, you must also specify tasks_per_node and the number of nodes, because this tells the parallel resource manager, POE, how to distribute the parallel tasks.

# @ environment = COPY_ALL;MP_EUILIB=us
Specifies your initial environment variables when your job step starts.

# @ job_name = my_job
Specifies the name of the job.

# @ output = my_job.out
Specifies the file to which standard output is written. If you do not include this keyword, the stdout output will be discarded.

# @ error = my_job.err
Specifies the file to which standard-error output is written. If you do not include this keyword, the stderr output will be discarded.

# @ node = 1
Number of nodes required for your job. Note: You also need to set #@ tasks_per_node to provide instructions about the parallel tasks in your job and the node(s) on which they will run. The number of processors used is calculated as

   processors = node * tasks_per_node

In this example (together with #@ tasks_per_node = 4), 4 processors will be used. If you change #@ node to 8 (and leave #@ tasks_per_node = 4), 32 processors will be used.

# @ tasks_per_node = 4
Number of tasks to perform per node. Use together with #@ node. Note: blackforest has 4 processors per node and bluesky has 8 processors per node, so never set this higher than 4 for blackforest or higher than 8 for bluesky.
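The processor-count relation above is simple arithmetic; a quick shell check of the two settings discussed in the text:

```shell
#!/bin/sh
# processors = node * tasks_per_node, for the keyword values above.
node=1; tasks_per_node=4
echo $((node * tasks_per_node))    # 4

node=8                              # leaving tasks_per_node = 4
echo $((node * tasks_per_node))    # 32
```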
# @ node_usage = not_shared
Whether to share the nodes acquired for your tasks. If your task is not compute-intensive, you can allow tasks from other users to share the processors on the node(s) you are using by setting node_usage=shared. For compute-intensive tasks, you can achieve maximum performance by setting node_usage=not_shared.

# @ wall_clock_limit = 1800
Specifies the maximum wall-clock time that LoadLeveler will allow the job to run. This prevents runaway jobs from consuming excessive resources, and can help your job run sooner when the IBM SP-cluster system queues are full. If wall_clock_limit is not explicitly set, the default is the maximum value allowed for the class (queue). Expressed in seconds or as hh:mm:ss (hours, minutes, seconds).

# @ class = com_reg
The name of the job queue to which this script will be submitted. Currently, the IBM SP-cluster systems have these queues for users:
• share (a pool of a few serial nodes for handling many processes such as compilations, MSS transfer steps within a batch job, and other serial jobs)
• com_reg, com_pr, com_sb, com_spec (for Community Computing users using LoadLeveler to submit production jobs at four priority levels: regular, premium, standby, and special; these queues can provide multiple-node resources for batch jobs)
• interactive (for brief interactive work; this class serves as a queue for POE to get system resources and is not typically accessed by users)

# @ queue
Causes LoadLeveler to submit the preceding macros to the queue named in the “class=” macro.

More information on these and other Loadleveler commands is available on the Web:
http://www.scd.ucar.edu/docs/ibm/ref/ll.html
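Since wall_clock_limit accepts either plain seconds or hh:mm:ss, the conversion for the 1800-second example above is plain arithmetic:

```shell
#!/bin/sh
# Convert a wall_clock_limit given in seconds to hh:mm:ss form.
secs=1800
hms=$(printf '%02d:%02d:%02d' \
     $((secs / 3600)) $((secs % 3600 / 60)) $((secs % 60)))
echo "$hms"    # 00:30:00
```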
H.5.2 Shell Variables

After the Loadleveler commands, the next user modification is the shell variables that are used to set up NCAR IBM’s Mass Storage System (MSS) pathnames and job switches. The job decks have all of the optional settings available, with the inactive lines commented out (with a # in the first column).

- MSS-related setup

On NCAR’s IBMs, the input and output files generated by the MM5 modeling system are stored on the MSS. Users need to set up the MSS pathnames for the input and output files so the script can retrieve the input files prior to the execution of the program, and store the output files after the program is completed. For complete information on how to use the MSS facility at NCAR, please review SCD’s online documentation: http://www.scd.ucar.edu/docs/catalog/cat.mss.html. Usage of the MSS is charged against your GAU allocation. If you are running the job decks on other IBMs which do not have an MSS, the MSS-related shell variables can be defined as a pathname to a data disk. Example:

set ExpName = TEST/EXP1        # MSS pathname
set RetPd   = 365              # Retention period in days for MSWRITEs
#
# MSS filenames for terrain data sets
#
set InRegrid = ( ${ExpName}/REGRID_DOMAIN1 )
#
set InRaobs  = /DSS/xxxxxx     # MSS Filename(s) for ADP RAOB
ExpName: This is used to set up the MSS input and output pathnames; it is recommended to use something that describes the experiment. A user should use the same ExpName in every job deck of the MM5 modeling system for a single experiment. If ExpName doesn’t start with a /, the user’s NCAR IBM login name (upper case) is assumed as the MSS root name.

RetPd: This sets the retention period in days for the MSS files. It is one of the options for the local NCAR mswrite command. The upper limit for the retention period is 32767 days.

InX, OutX: One of the conventions used to distinguish input files from output files is the name given to the shell variable holding the MSS path locations. Shell variables starting with the string “In” are files coming from the MSS; they are inputs to a program. Shell variables starting with the string “Out” are to be archived to the MSS; they are outputs from a program. The same MSS file can be an output file from one of the job decks, and then be an input file to a subsequent program. In a normal submittal, the InX and OutX variables do not need to be changed. However, if a user needs to input an MSS file under another user’s login name, these variables need to be modified. (Note that InX and OutX only provide the MSS pathname; the file names are hard-wired in the section of the shell scripts that does not typically require user modification. For example, an output file from program TERRAIN would be TEST/EXP1/TERRAIN_DOMAIN1.)

Host and domain: These are used to remotely copy files between NCAR’s IBMs and the user’s local machine. The shell variable Host should have the form [email protected]:directory, where username is the user’s login name on the local machine, host and domain are the local machine’s name and address, and directory is the directory to copy a file to or from. If the user’s local login name is the same as the login name on NCAR’s IBMs, username may be omitted.
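How the deck assembles a full MSS name from ExpName and a hard-wired file name can be sketched in a couple of lines (the variable name OutTer below is illustrative, not a name taken from the decks):

```shell
#!/bin/sh
# Sketch: the OutX variable supplies only the pathname; the file name
# (here TERRAIN_DOMAIN1) is hard-wired in the script body.
ExpName=TEST/EXP1
OutTer=$ExpName                       # illustrative output pathname
mss_file=${OutTer}/TERRAIN_DOMAIN1
echo "$mss_file"                      # TEST/EXP1/TERRAIN_DOMAIN1
```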
In order for a remote shell facility such as rcp to work on another machine without logging on, you must have a file called .rhosts in the login directory of each remote machine that you want to copy files to or from. The .rhosts file should contain a list of machine and username pairs:

host1.domain1 username1
host2.domain2 username2
For example, if mesouser wants to have remote access between NCAR’s blackforest and the local machines mmm1 and mmm2, the .rhosts file on blackforest should have the following lines

mmm1.mmm.ucar.edu mesouser
mmm2.mmm.ucar.edu mesouser

and the .rhosts file on machines mmm1 and mmm2 should have the lines

blackforest.ucar.edu mesouser
blackforest mesouser

Note that in this example there are two entries for blackforest. Users should test from their local machine to see whether only one is needed. Users also need to protect their .rhosts files to avoid security problems, by using the following command:

chmod 600 .rhosts
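The chmod 600 protection above can be exercised on a scratch copy of a .rhosts file, so the commands can be tried anywhere without touching a real login directory:

```shell
#!/bin/sh
# Build a throwaway .rhosts-style file and restrict it to the owner.
rhosts=$(mktemp)
cat > "$rhosts" << 'EOF'
mmm1.mmm.ucar.edu mesouser
mmm2.mmm.ucar.edu mesouser
EOF
chmod 600 "$rhosts"              # owner read/write only
ls -l "$rhosts" | cut -c1-10     # -rw-------
```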
If mesouser wants to rcp a file back to machine mmm1 under the directory /usr/tmp/mesouser, the Host variable should be set to

set Host = [email protected]:/usr/tmp/mesouser
- Job Switches

Since the MM5 modeling system is designed for multiple applications, there are many options for how a job may be run. These options include different sources of input meteorological data, ways to do the objective analysis, running the model hydrostatically or nonhydrostatically, whether the job is an initial or restart run, etc. A user is required to go through the shell variables and make a selection of job switches. The following example is taken from the regrid.deck; the selection is with regard to the type of global analysis used to create the first-guess fields:

#
# Select the source of first-guess analyses from among the following
# datasets archived at NCAR
#
set ARCHIVE = GDAS      # NCEP Global Data Assimilation System,
                        # archives begin 1997-04-01
#
# set ARCHIVE = ON84    # NCEP GDAS Archives through 1997-03-31
#
# set ARCHIVE = NNRP    # NCEP/NCAR Reanalysis Project
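The deck itself is edited by hand, but the date boundaries quoted in the comments above imply a simple rule, sketched here for the two NCEP GDAS-based choices (pick_archive is a hypothetical helper, not part of the deck; the NNRP option is not covered by this sketch):

```shell
#!/bin/sh
# Illustrative helper: pick ARCHIVE from the case start date, using
# the boundary quoted in the deck comments (GDAS begins 1997-04-01).
pick_archive() {    # argument: start date as YYYYMMDD
    if [ "$1" -ge 19970401 ]; then
        echo GDAS   # archives begin 1997-04-01
    else
        echo ON84   # archives through 1997-03-31
    fi
}

pick_archive 19930312    # ON84
```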
This table lists the shell variables that need to be defined by users for each program of the MM5 modeling system:

Program Name    Shell Variables
TERRAIN         TerPlt
REGRID          ARCHIVE
RAWINS          Submit, INOBS, SFCsw, BOGUSsw
MM5             compile, execute, STARTsw, FDDAsw

These shell variables are discussed in detail in the other chapters of this document.
H.5.3 Accessing Source Code

Typically this is the way source code tar files are accessed in a (batch) IBM job deck:

#
# Where is your source code?
#
# set UseMySource = yes
set UseMySource = no
#
# ------ GET RAWINS TAR FILE ---------------------------
#
if ( $UseMySource == yes ) then
   cp $Home/rawins.tar .
   tar -xf rawins.tar
else if ( $UseMySource == no ) then
   msread rawins.tar.gz /MESOUSER/MM5V3/RAWINS.TAR.gz
#  cp /fs/othrorgs/home0/mesouser/MM5V3/RAWINS.TAR.gz .
   gunzip rawins.tar.gz
   tar -xf rawins.tar
endif
rm rawins.tar
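The flow of the csh branch above can be exercised away from the NCAR machines by stubbing out msread. This is a Bourne-shell sketch run in a scratch directory; the stub merely fabricates a gzipped tar under the requested local name, where the real msread would copy it from the MSS:

```shell
#!/bin/sh
# sh rendering of the deck's fetch/unpack branch, with msread stubbed.
set -e
work=$(mktemp -d); cd "$work"

msread() {   # stub: "fetch" a gzipped tar to the local name in $1
    mkdir src && touch src/rawins.F
    tar -cf - src | gzip > "$1"
    rm -r src
}

UseMySource=no
if [ "$UseMySource" = yes ]; then
    cp "$Home/rawins.tar" . && tar -xf rawins.tar
else
    msread rawins.tar.gz /MESOUSER/MM5V3/RAWINS.TAR.gz
    gunzip rawins.tar.gz      # leaves rawins.tar
    tar -xf rawins.tar        # unpacks the src/ directory
fi
rm rawins.tar
ls src/rawins.F
```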
As an example, the files contained in the program RAWINS tar file are listed below:

CHANGES        Description of changes to the program
Diff/          Will contain difference files between consecutive releases
Makefile       Makefile to create the program executable
README         General information about the program directory
Templates/     Job deck directory
con.tbl        Table file for plots
map.tbl        Table file for plots
src/           Program source code directory
H.5.4 Parameter Statements

The parameter statements used by the FORTRAN 77 programs to define the dimensions of domains or data sizes are typically specified in FORTRAN include files. These are direct modifications of the source code, implying that strict FORTRAN syntax must be observed. One can modify the include file directly in the source code directory, or modify it inside a job deck. In a deck, the Unix cat command is used to create the FORTRAN include file. The usage of cat is shown below:

cat > src/param.incl << EOF
      PARAMETER (.....)
      .....................
EOF
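The cat-heredoc mechanism can be tried against a scratch src/ directory (the real deck writes into the src/ directory of the just-untarred program):

```shell
#!/bin/sh
# Write a small FORTRAN include file with a here-document, as the
# decks do, then show the result.
work=$(mktemp -d)
mkdir "$work/src"
cat > "$work/src/param.incl" << EOF
      PARAMETER ( IMX=45, JMX=52 )
EOF
cat "$work/src/param.incl"
```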
As an example, the following is taken from rawins.deck:

cat > src/paramdim.incl << EOF
C
C   IMX, JMX MUST CORRESPOND TO THE DIMENSIONS IN THE INPUT FILE. THESE
C   WILL BE THE EXPANDED DIMENSIONS IF THIS IS THE COARSE GRID, AND THE
C   EXPANDED OPTION WAS SELECTED.
C
C   LMX MUST BE GREATER THAN OR EQUAL TO THE MAXIMUM NUMBER OF LEVELS
C   (PRESSURE LEVELS + SURFACE).
C
      PARAMETER ( IMX=45, JMX=52 )
C
EOF
H.5.5 Fortran Namelist

The MM5 modeling system uses FORTRAN namelists to provide a way of selecting runtime options without re-compiling the programs. The Unix cat command is used to create the namelist files during the execution of the shell script. The variables in the namelists are described in detail in the chapters of this document specific to the individual programs. The format is the following:

cat >! xxxx << EOF
 &......
 .........
 &
;-----------------------------------
EOF

Since the namelist is not an ANSI FORTRAN 77 standard, the FORTRAN 77 compilers on different machines may use different syntax for the namelist. An example:

#
# Set the starting date of the time period you want to process:
#
 START_YEAR  = 1993     # Year  (Four digits)
 START_MONTH = 03       # Month ( 01 - 12 )
 START_DAY   = 12       # Day   ( 01 - 31 )
 START_HOUR  = 00       # Hour  ( 00 - 23 )
 END_YEAR    = 1993     # Year  (Four digits)
 END_MONTH   = 03       # Month ( 01 - 12 )
 END_DAY     = 15       # Day   ( 01 - 31 )
 END_HOUR    = 00       # Hour  ( 00 - 23 )
#
# Define the time interval to process.
#
 INTERVAL = 43200       # Time interval (seconds) to process.
/
.........
###############################################################################
###################                                   ########################
###################       END USER MODIFICATION       ########################
###################                                   ########################
###############################################################################
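The dates in the example above span 1993-03-12_00 to 1993-03-15_00, i.e. 3 days; with INTERVAL = 43200 seconds (12 hours), the number of analysis times processed, counting both end points, works out as:

```shell
#!/bin/sh
# Number of analysis times implied by the start/end dates and INTERVAL.
INTERVAL=43200
span_secs=$((3 * 86400))                 # 3 days in seconds
ntimes=$((span_secs / INTERVAL + 1))     # both end points included
echo "$ntimes"    # 7
```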
After the user has 1) modified the Loadleveler commands to set the proper class for the job, 2) correctly set the shell variables for the switches and MSS pathnames, 3) modified the parameter statements, and 4) set up the FORTRAN namelist, there is typically no more user modification required in the deck. The rest of the script can be treated as a black box.
H.6 What is in the Remainder of a Job Deck?

H.6.1 Acquiring Input Files from the Mass Storage System

Syntax:

msread local-filename MSSfilename

where local-filename is the filename in the current directory, and MSSfilename is the filename on the MSS. Example:

#
# terrain files
#
msread terrain ${InTerr}

where InTerr is defined earlier in the job script.

msread: This is a local NCAR command that copies an MSS file to disk.
H.6.2 Setting Up Fortran Input and Output Units

Syntax:

ln -s filename Fortran-unit

This command makes a symbolic link between filename and Fortran-unit. Example:

#
# set up fortran input files for RAWINS
#
if ( -e assign.rawins ) rm assign.rawins
setenv FILENV assign.rawins
ln -s autobog fort.10
ln -s kbogus fort.12
ln -s nbogus fort.13
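The linking above can be exercised with empty stand-in files in a scratch directory (on many compilers, fort.NN is the default file name a FORTRAN unit NN opens):

```shell
#!/bin/sh
# Link data files to the fort.NN names a FORTRAN program will open.
set -e
work=$(mktemp -d); cd "$work"
touch autobog kbogus nbogus       # stand-ins for the real input files
ln -s autobog fort.10
ln -s kbogus  fort.12
ln -s nbogus  fort.13
ls -l fort.10 fort.12 fort.13     # three symlinks
```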
H.6.3 Creating FORTRAN Executable

The Unix make utility is used to generate the FORTRAN executables in the MM5 modeling system. The rules and compile options a make command uses are contained in the Makefile for the programs TERRAIN, REGRID, RAWINS, INTERPF, NESTDOWN and GRAPH, and in configure.user for the program MM5. For more information on make, please see Chapters 3 and 8 of this document.
H.6.4 Execution

X.exe >&! X.print.out

where X is the program name. Example:

#
# run RAWINS
#
date
if ( -e acct ) rm acct
rawins.exe >&! rawins.print.out
ja -s >! acct
cat acct >> rawins.print.out

date: This command prints the current date and time.
ja -s: This UNICOS command ends the job accounting information; the -s option produces the summary report.
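Note that “>&!” is csh syntax: it redirects both stdout and stderr to the file, overwriting it even when noclobber is set. The Bourne-shell equivalent is “> file 2>&1”, sketched here with echo standing in for the program:

```shell
#!/bin/sh
# sh equivalent of csh's ">&!": capture stdout and stderr in one file.
work=$(mktemp -d); cd "$work"
( echo "normal output"; echo "an error" 1>&2 ) > prog.print.out 2>&1
cat prog.print.out
```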
H.6.5 Save Files

When a job is completed, some of the files used in and output from the execution of a program are bundled with the tar command before being archived on the MSS:

tar -cf X.tar ......

An example:

tar -cf rawins.out.tar src/rawins.exe rawins.namelist rawins.print.out
Example:

tar -cf rawins.out.tar src/rawins.exe rawins.namelist rawins.print.out
mswrite -t $RetPd rawins.out.tar ${OutRaw}/rawins_domain${DomId}.out.tar
#
# rawins output files
#
ls -ls
mswrite -t $RetPd RAWINS_DOMAIN${DomId} ${OutRaw}/RAWINS_DOMAIN${DomId}
echo mswrite -t $RetPd RAWINS_DOMAIN${DomId} ${OutRaw}/RAWINS_DOMAIN${DomId}
if ( $Submit != 2 ) then
   mswrite -t $RetPd SND.PLT ${OutRaw}/SND.PLT_DOMAIN${DomId}
endif
#
if ( -e SFCFDDA_DOMAIN${DomId} ) then
   mswrite -t $RetPd SFCFDDA_DOMAIN${DomId} ${OutRaw}/SFCFDDA_DOMAIN${DomId}
endif
#
mswrite -t $RetPd RAWOBS_DOMAIN${DomId} ${OutRaw}/RAWOBS_DOMAIN${DomId}
mswrite -t $RetPd SFC4DOBS_DOMAIN${DomId} ${OutRaw}/SFC4DOBS_DOMAIN${DomId}
mswrite -t $RetPd UPR4DOBS_DOMAIN${DomId} ${OutRaw}/UPR4DOBS_DOMAIN${DomId}
#
if ( $Submit == 1 ) then
   mswrite -t $RetPd rawab.out ${OutRaw}/AUBG_SUB1_DOMAIN${DomId}
   mswrite -t $RetPd AB.PLT ${OutRaw}/AUB_SUB1_PLT_DOMAIN${DomId}
else if ( $Submit == 2 ) then
   mswrite -t $RetPd autobog ${OutRaw}/AUBG_SUB2_DOMAIN${DomId}
endif

mswrite: This is a local NCAR command used to copy an existing file to the MSS.