Configuring buildtest
The buildtest configuration file controls the behavior of buildtest. A JSON schema file, settings.schema.json, defines the structure of the configuration file. For more details on the schema attributes, see the Settings Schema Documentation.
Default Configuration
The default configuration for buildtest can be found in the git repo at buildtest/settings/config.yml, relative to the root of buildtest. Users may override the default configuration by creating a custom file at $HOME/.buildtest/config.yml. Shown below is the default configuration.
executors:
  local:
    bash:
      description: submit jobs on local machine using bash shell
      shell: bash
    sh:
      description: submit jobs on local machine using sh shell
      shell: sh
    python:
      description: submit jobs on local machine using python shell
      shell: python
config:
  editor: vi
Executors
Executors are responsible for running jobs. Currently, buildtest supports the following executors:
local
slurm
lsf
There is an ssh executor defined in the schema, but it is not yet implemented in buildtest.
The local executor is responsible for submitting jobs locally. Currently, buildtest supports the bash, sh and python shells. Executors are referenced in your buildspec with the executor key as follows:

executor: local.bash

The executors key in the buildtest settings is of type object; its sub-fields are local, ssh, and slurm.
Local Executors
In the example below we define a local executor named bash, which is referenced in a buildspec with executor: local.bash:
executors:
  local:
    bash:
      shell: bash
Each local executor requires the shell key, which must match the pattern ^(/bin/bash|/bin/sh|sh|bash|python).*. Any buildspec that references the executor local.bash will submit the job as bash /path/to/test.sh.
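The pattern above can be exercised with a quick regular-expression check; a minimal sketch (the pattern is taken verbatim from the schema, the example values are illustrative):

```python
import re

# Pattern for the "shell" key of a local executor, as defined in settings.schema.json
SHELL_PATTERN = re.compile(r"^(/bin/bash|/bin/sh|sh|bash|python).*")

# Values such as "bash", "bash --login", or "/bin/sh" match the pattern
for shell in ["bash", "sh", "python", "bash --login", "/bin/bash"]:
    assert SHELL_PATTERN.match(shell)

# A shell outside the pattern, e.g. "zsh", is rejected
assert SHELL_PATTERN.match("zsh") is None
```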
You can pass options to the shell, which are applied on each job submission. For instance, if you want the bash executor to submit jobs in login mode, you can do the following:
executors:
  local:
    login_bash:
      shell: bash --login
Then you can reference this executor as executor: local.login_bash, and your tests will be submitted via bash --login /path/to/test.sh.
Slurm Executors
The slurm executors are defined in the following section:
executors:
  slurm:
    <slurm-executor1>:
    <slurm-executor2>:
Slurm executors are responsible for submitting jobs to the Slurm resource manager. You can define as many slurm executors as you wish, so long as you give each executor a unique name. Generally, you will need one slurm executor per partition or qos at your site. Let's take a look at an example slurm executor called normal:
executors:
  slurm:
    normal:
      options: ["-C haswell"]
      qos: normal
This executor can be referenced in a buildspec as executor: slurm.normal. This executor defines the following:

qos: normal will add -q normal to the launcher command. buildtest will check if the qos is found in the Slurm configuration; if not found, buildtest will reject the job submission.
The options key is used to pass any options to the launcher command. In this example we add -C haswell.
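To make the mapping concrete, here is a hypothetical sketch of how an executor's settings could be assembled into a launcher command; build_launch_cmd is illustrative only and not part of buildtest:

```python
def build_launch_cmd(launcher, qos=None, options=None, testpath="/path/to/test.sh"):
    """Compose a launcher command from executor settings (illustrative only)."""
    cmd = [launcher]
    if qos:
        cmd += ["-q", qos]      # qos: normal -> -q normal
    cmd += options or []        # options: ["-C haswell"] passed through verbatim
    cmd.append(testpath)
    return " ".join(cmd)

# slurm.normal example from above
print(build_launch_cmd("sbatch", qos="normal", options=["-C haswell"]))
# sbatch -q normal -C haswell /path/to/test.sh
```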
buildtest configuration for Cori @ NERSC
Let's take a look at the Cori buildtest configuration:
executors:
  defaults:
    pollinterval: 10
    launcher: sbatch
  local:
    bash:
      description: submit jobs on local machine using bash shell
      shell: bash
    sh:
      description: submit jobs on local machine using sh shell
      shell: sh
    python:
      description: submit jobs on local machine using python shell
      shell: python
  slurm:
    debug:
      description: jobs for debug qos
      qos: debug
      cluster: cori
    shared:
      description: jobs for shared qos
      qos: shared
    bigmem:
      description: bigmem jobs
      cluster: escori
      qos: bigmem
    xfer:
      description: xfer qos jobs
      qos: xfer
    gpu:
      description: submit jobs to GPU partition
      options: ["-C gpu"]
      cluster: escori
config:
  editor: vi
  paths:
    prefix: $HOME/cache/
In this configuration, we define three LocalExecutors (local.bash, local.sh and local.python) and five SlurmExecutors (slurm.debug, slurm.shared, slurm.bigmem, slurm.xfer, and slurm.gpu).
We also introduce the defaults section, which sets default configuration for executors. At the moment, launcher and pollinterval are the available fields in defaults, and they apply only to the SlurmExecutor and LSFExecutor. Currently, buildtest supports batch submission via sbatch, so all SlurmExecutors will inherit sbatch as their launcher.
The pollinterval field sets the interval, in seconds, at which the SlurmExecutor polls a job while it is active in the queue (PENDING, RUNNING).
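The polling behavior can be sketched as a simple loop; this is an illustrative stub, not buildtest's actual poller (which queries the scheduler), and poll_job is a hypothetical name:

```python
import time

# Job states considered "active in the queue", per the description above
ACTIVE_STATES = {"PENDING", "RUNNING"}

def poll_job(get_state, pollinterval=10):
    """Poll a job's state every `pollinterval` seconds until it leaves the queue."""
    while (state := get_state()) in ACTIVE_STATES:
        time.sleep(pollinterval)
    return state

# Stubbed state lookup: job is pending, then running, then completes
states = iter(["PENDING", "RUNNING", "COMPLETED"])
print(poll_job(lambda: next(states), pollinterval=0))  # COMPLETED
```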
At Cori, jobs are submitted via qos instead of partition, so each slurm executor has the qos key. The description key is a brief description of the executor, which you can use to document its behavior. The cluster field specifies which Slurm cluster to use; at Cori, in order to use the bigmem qos we need to specify -M escori, where escori is the Slurm cluster. buildtest will detect the Slurm configuration and check that the cluster is a valid cluster name. In addition, sacct will poll the job against the cluster name (sacct -M <cluster>).
The options field is used to specify any additional command-line options to the launcher (sbatch). The slurm.gpu executor is used to submit to Cori GPU, which requires sbatch -M escori -C gpu. Any additional #SBATCH options are defined in the buildspec using the sbatch key.
buildtest configuration for Ascent @ OLCF
Ascent is a training system for Summit at OLCF, which uses IBM Load Sharing Facility (LSF) as its batch scheduler. Ascent has two queues: batch and test. To define an LSF executor we set the top-level key lsf in the executors section.
The default launcher is bsub, which can be defined under defaults. With pollinterval: 10, buildtest will poll LSF jobs every 10 seconds using bjobs. The pollinterval accepts a range between 10 and 300 seconds, as defined in the schema. To avoid polling the scheduler excessively, pick a number that is best suited for your site.
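A minimal sketch of the 10-300 second constraint on pollinterval (the range comes from the schema; the function name is hypothetical):

```python
def validate_pollinterval(seconds):
    """Check pollinterval against the 10-300 second range defined in the schema."""
    if not 10 <= seconds <= 300:
        raise ValueError(f"pollinterval must be between 10 and 300 seconds, got {seconds}")
    return seconds

validate_pollinterval(10)   # ok: matches the Ascent example above
```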
executors:
  defaults:
    launcher: bsub
    pollinterval: 10
  local:
    bash:
      description: submit jobs on local machine using bash shell
      shell: bash
    sh:
      description: submit jobs on local machine using sh shell
      shell: sh
    python:
      description: submit jobs on local machine using python shell
      shell: python
  lsf:
    batch:
      queue: batch
    test:
      queue: test
config:
  editor: vi
  paths:
    prefix: /tmp
buildspec roots
buildtest can detect buildspecs using the buildspec_roots keyword. For example, we clone the repo https://github.com/buildtesters/buildtest-cori at /Users/siddiq90/Documents/buildtest-cori:

config:
  editor: vi
  paths:
    buildspec_roots:
      - /Users/siddiq90/Documents/buildtest-cori

If you run buildtest buildspec find --clear, it will detect all buildspecs in buildspec_roots; buildtest will find all files with the .yml extension. By default, buildtest will add $BUILDTEST_ROOT/tutorials to the search path, where $BUILDTEST_ROOT is the root of the buildtest repo.
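The .yml discovery under buildspec_roots can be sketched with pathlib; this is an illustrative sketch, not buildtest's actual implementation:

```python
from pathlib import Path

def find_buildspecs(roots):
    """Recursively collect all files with a .yml extension under each root."""
    buildspecs = []
    for root in roots:
        buildspecs += sorted(str(p) for p in Path(root).rglob("*.yml"))
    return buildspecs
```

For the configuration above, find_buildspecs(["/Users/siddiq90/Documents/buildtest-cori"]) would return every .yml file in the cloned repo.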
Example Configurations
buildtest provides a few example configurations for configuring buildtest. These can be retrieved by running buildtest schema -n settings.schema.json --examples or the short option (-e), which will validate each example against the schema file settings.schema.json.
$ buildtest schema -n settings.schema.json -e
File: /Users/siddiq90/Documents/buildtest/buildtest/schemas/examples/settings.schema.json/valid/local-executor.yml
Valid State: True
________________________________________________________________________________
executors:
  local:
    bash:
      description: submit jobs on local machine using bash shell
      shell: bash
      before_script: |
        time
        echo "commands run before job"
      after_script: |
        time
        echo "commands run after job"
    sh:
      description: submit jobs on local machine using sh shell
      shell: sh
    python:
      description: submit jobs on local machine using python shell
      shell: python
config:
  editor: vi
  paths:
    prefix: /tmp
File: /Users/siddiq90/Documents/buildtest/buildtest/schemas/examples/settings.schema.json/valid/ssh-executor.yml
Valid State: True
________________________________________________________________________________
executors:
  ssh:
    localhost:
      host: localhost
      user: siddiq90
      identity_file: ~/.ssh/id_rsa
config:
  editor: vi
  paths:
    prefix: /tmp
File: /Users/siddiq90/Documents/buildtest/buildtest/schemas/examples/settings.schema.json/valid/slurm-example.yml
Valid State: True
________________________________________________________________________________
executors:
  defaults:
    pollinterval: 20
  slurm:
    normal:
      options: ["-C haswell"]
      qos: normal
      before_script: |
        time
        echo "commands run before job"
      after_script: |
        time
        echo "commands run after job"
config:
  editor: vi
  paths:
    prefix: /tmp
File: /Users/siddiq90/Documents/buildtest/buildtest/schemas/examples/settings.schema.json/valid/combined_executor.yml
Valid State: True
________________________________________________________________________________
executors:
  local:
    bash:
      description: submit jobs on local machine
      shell: bash -v
  slurm:
    haswell:
      launcher: sbatch
      options:
        - "-p haswell"
        - "-t 00:10"
  lsf:
    batch:
      launcher: bsub
      options:
        - "-q batch"
        - "-t 00:10"
  ssh:
    login:
      host: cori
      user: root
      identity_file: ~/.ssh/nersc
config:
  editor: vi
  paths:
    prefix: /tmp
    clonepath: /tmp/repo
    logdir: /tmp/logs
    testdir: /tmp/buildtest/tests
File: /Users/siddiq90/Documents/buildtest/buildtest/schemas/examples/settings.schema.json/valid/lsf-example.yml
Valid State: True
________________________________________________________________________________
executors:
  defaults:
    pollinterval: 10
    launcher: bsub
  lsf:
    batch:
      description: "LSF Executor name 'batch' that submits jobs to 'batch' queue"
      queue: batch
      options: ["-W 20"]
      before_script: |
        time
        echo "commands run before job"
      after_script: |
        time
        echo "commands run after job"
    test:
      description: "LSF Executor name 'test' that submits jobs to 'test' queue"
      launcher: bsub
      queue: test
      options: ["-W 20"]
config:
  editor: vi
If you want to retrieve the full JSON schema file, run buildtest schema -n settings.schema.json --json or the short option -j.