Quick Start#

Running the real-from-ideal and real-from-dwd-ana cases#

To run other cases than these, please refer to the shortest guide to success.

Available cases:

  • real-from-ideal: Start with a test simulation (ltestcase=.TRUE.) to produce a first-guess file and an analysis file to start a second simulation.

  • real-from-dwd-ana: Start with a first-guess and an analysis from DWD (relying on a local archive to do so).

  • build-only: Only build the ICON model; do not run any simulation.

NOTE: The current implementation of real-from-dwd-ana relies on the local archives at LMU. The templates to provide the initial conditions have to be modified to run from a different local system.

With Autosubmit installed, we can run these ICON workflows by following these steps:

1. Create an experiment:#

autosubmit expid -min -repo https://gitlab.com/auto-icon/auto-icon.git -conf "conf/<ICON_CASE>.yml" -b main -H <HPCARCH> -d "Launching an ICON experiment with autosubmit."

This will create a new experiment folder and give us the corresponding experiment id.

autosubmit create <expid> -np

This clones the auto-icon repository.

2. Additional configuration#

Create an additional yaml file with some more configuration in <expid>/conf/.

NOTE: It is important to add these settings in a separate file because minimal.yml is the first file to be read, and all other configuration files overwrite its contents. If we try to override a variable that also appears in the default configuration files (e.g. local_destination_folder) inside minimal.yml, it won't have any effect.

The name of the file doesn’t matter as long as it is a .yml file, for example my_configuration.yml.

To this file we will add the following:

  • A path on the local machine to which the output data will be transferred:

data_management:
  # Where do we put the output files afterwards?
  local_destination_folder: /Path/to/output/folder

  • Some details about how to build ICON (if required, see below).

NOTE: Since the LMU example builds everything from scratch, it is probably the best starting point for building ICON with Spack on a platform without existing configure scripts.

On some systems it is necessary to link against specific libraries or to use existing packages. Some additional parameters need to be changed in the configuration to allow that. Here are two examples:

  • At LMU:

spack:
  compiler: "gcc@11.3.0" # desired compiler for spack
  root: "$SCRATCH/autoicon-spack" # path to a spack install, will be downloaded to if not present
  externals: "slurm"
  user_cache_path:  "$SCRATCH/autoicon-spackcache" # spack puts data here when bootstrapping, leave empty to use home folder
  user_config_path: "$SCRATCH/autoicon-spackconfig" # spack puts data here when bootstrapping, leave empty to use home folder
  disable_local_config: false # if true, spack installs into spack source dir
  upstreams: "/software/opt/focal/x86_64/spack/2023.02/spack/opt/spack"

icon:
  build_cmd: "icon-nwp@%ICON.VERSION%% %SPACK.COMPILER%+debug~mpichecks target=x86_64_v2 source=dkrz_https"
  version: 2.6.5-nwp0

  • At LRZ:

spack:
  init: "module load user_spack" # command to load spack environment, e.g. module load spack, use spack/setup-env.sh if empty
  compiler: "gcc@11.2.0" # desired compiler for spack
  root: "$PLATFORM_SCRATCH/autoicon-spack" # path to a spack install, will be downloaded to if not present
  externals: "slurm"
  user_cache_path:  "$PLATFORM_SCRATCH/autoicon-spackcache" # spack puts data here when bootstrapping, leave empty to use home folder
  user_config_path: "$PLATFORM_SCRATCH/autoicon-spackconfig" # spack puts data here when bootstrapping, leave empty to use home folder

icon:
  build_cmd: "icon-nwp@%ICON.VERSION%% %SPACK.COMPILER%~mpichecks source=dkrz_https ^openmpi/amct7nx"
  load_cmd: "icon-nwp@%ICON.VERSION%% %SPACK.COMPILER%~mpichecks source=dkrz_https"
  version: 2.6.5-nwp0

NOTE: At LRZ, the compute nodes don’t have git available by default.

To work around this, we can compile the model and set up the Python environment on the login nodes. To do that, we can override the platform on which these jobs run by adding the following content to my_configuration.yml:

Jobs:
  BUILD_ICON:
    PLATFORM: LRZ_LOGIN
  BUILD_PYTHON_ENVIRONMENT:
    PLATFORM: LRZ_LOGIN
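Putting the pieces above together, a complete my_configuration.yml for the LMU example could look like the following sketch. All values are copied from the snippets above; the output path is a placeholder that must be adapted to your system.

```yaml
# my_configuration.yml -- sketch combining the snippets above (LMU example)
data_management:
  # Placeholder: where the output files are transferred to
  local_destination_folder: /Path/to/output/folder

spack:
  compiler: "gcc@11.3.0"
  root: "$SCRATCH/autoicon-spack"
  externals: "slurm"
  user_cache_path: "$SCRATCH/autoicon-spackcache"
  user_config_path: "$SCRATCH/autoicon-spackconfig"
  disable_local_config: false
  upstreams: "/software/opt/focal/x86_64/spack/2023.02/spack/opt/spack"

icon:
  build_cmd: "icon-nwp@%ICON.VERSION%% %SPACK.COMPILER%+debug~mpichecks target=x86_64_v2 source=dkrz_https"
  version: 2.6.5-nwp0
```

Since this file is read after minimal.yml, these values take precedence over the defaults.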

3. Platforms#

On an HPC system other than Levante or HoreKa, the platform needs to be added. To do so, adjust the file proj/git_project/conf/common/platforms.yml or add a configuration file conf/platforms.yml following the given template. In any case except art, the user info has to be added there. For the art case, a few further steps have to be done, following these instructions.

NOTE: If we need to run on a login node (e.g. LRZ_LOGIN), we need to add the login node as a platform as well, specifying type: ps.
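As a sketch of what such a login-node platform entry could look like, the fragment below uses common Autosubmit platform keys; the hostname, user, project, and scratch path are hypothetical placeholders that must be replaced with your own values.

```yaml
# Sketch of a login-node platform entry (all values are placeholders)
Platforms:
  LRZ_LOGIN:
    # "ps" runs jobs directly on the host instead of going through a scheduler
    TYPE: ps
    HOST: login.hpc.example.org
    USER: my_user
    PROJECT: my_project
    SCRATCH_DIR: /scratch
```

With this entry in place, jobs such as BUILD_ICON can be redirected to LRZ_LOGIN as shown in the Jobs override above.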

4. Create and run#

Now you should be able to run

autosubmit create <expid> -np

for creation of the final experiment workflow and

autosubmit run <expid>

for running the experiment. To run in the background, use nohup autosubmit run <expid> & instead (cf. Autosubmit user guide).

A step-by-step guide for the full workflow is provided in the documentation.