Configuring buildtest¶
The buildtest configuration file controls buildtest's runtime behavior. It is defined by a JSON Schema file named settings.schema.json. For more details on all properties, see the Settings Schema Documentation.
Default Configuration¶
The default buildtest configuration is located at buildtest/settings/config.yml relative to the root of the repository. Users may override the default configuration by creating their own buildtest configuration at $HOME/.buildtest/config.yml, in which case buildtest will read the user configuration instead.
Shown below is the default configuration provided by buildtest.
editor: vi
executors:
  local:
    bash:
      description: submit jobs on local machine using bash shell
      shell: bash
    sh:
      description: submit jobs on local machine using sh shell
      shell: sh
    python:
      description: submit jobs on local machine using python shell
      shell: python
What is an executor?¶
An executor is responsible for running a test, capturing its output/error files and return code. An executor can be a local executor, which runs tests on the local machine, or a batch executor, which is modelled as a partition/queue. A batch executor is responsible for dispatching a job, polling the job until it finishes, and gathering job metrics from the scheduler.
Executor Declaration¶
executors is a JSON object whose structure looks as follows:
executors:
  local:
    <local1>:
    <local2>:
    <local3>:
  slurm:
    <slurm1>:
    <slurm2>:
    <slurm3>:
  lsf:
    <lsf1>:
    <lsf2>:
    <lsf3>:
The LocalExecutors are defined in the local section, where each executor must have a unique name:
executors:
  local:
A LocalExecutor can use the bash, sh, or python shell, and is referenced in a buildspec using the executor field as follows:
executor: local.bash
An executor is referenced in a buildspec in the format <type>.<name>, where type is one of local, slurm, or lsf (defined in the executors section) and name is the executor name. In the example above, local.bash refers to the LocalExecutor using the bash shell. SlurmExecutors and LSFExecutors are defined with the same structure.
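The <type>.<name> convention can be sketched with a small, hypothetical helper. This is illustrative only and is not buildtest's actual implementation:

```python
# Hypothetical sketch: split an executor reference such as "local.bash"
# into its executor type and name. Not buildtest's actual code.

VALID_TYPES = {"local", "slurm", "lsf"}

def parse_executor(reference):
    """Split an executor reference '<type>.<name>' into (type, name)."""
    exec_type, _, name = reference.partition(".")
    if exec_type not in VALID_TYPES or not name:
        raise ValueError(f"invalid executor reference: {reference!r}")
    return exec_type, name

print(parse_executor("local.bash"))   # ('local', 'bash')
print(parse_executor("slurm.debug"))  # ('slurm', 'debug')
```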
In the example below, we define a local executor named bash that is referenced in a buildspec as executor: local.bash:
executors:
  local:
    bash:
      shell: bash
Local executors require the shell key, which must match the pattern ^(/bin/bash|/bin/sh|sh|bash|python).*. Any buildspec that references the local.bash executor will submit jobs using the bash shell.
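The shell pattern quoted above can be checked directly with a regular expression. A minimal sketch, using the pattern exactly as stated in the schema:

```python
import re

# The shell pattern from settings.schema.json, as quoted above.
SHELL_PATTERN = re.compile(r"^(/bin/bash|/bin/sh|sh|bash|python).*")

# Values with trailing options (e.g. "bash --login") still match
# because of the trailing ".*"; unlisted shells like zsh do not.
for shell in ["bash", "bash --login", "/bin/sh", "python", "zsh"]:
    ok = bool(SHELL_PATTERN.match(shell))
    print(f"{shell!r}: {'valid' if ok else 'invalid'}")
```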
You can pass options to the shell, which are applied to each job submission. For instance, if you want a bash login executor, you can do the following:
executors:
  local:
    login_bash:
      shell: bash --login
Then you can reference this executor as executor: local.login_bash, and your tests will be submitted via bash --login /path/to/test.sh.
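The mapping from the shell value to the submitted command line can be sketched as follows. This is an illustrative helper, not buildtest's actual code:

```python
import shlex

# Illustrative sketch: split the executor's shell value into arguments
# and append the test script, producing e.g. "bash --login /path/to/test.sh".
# Hypothetical helper, not buildtest's actual implementation.
def build_command(shell, test_script):
    return shlex.split(shell) + [test_script]

print(build_command("bash --login", "/path/to/test.sh"))
# ['bash', '--login', '/path/to/test.sh']
```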
buildtest configuration for Cori @ NERSC¶
Let’s take a look at the Cori buildtest configuration:
editor: vi
buildspec_roots:
  - $HOME/buildtest-cori
executors:
  defaults:
    pollinterval: 10
    launcher: sbatch
    max_pend_time: 90
  local:
    bash:
      description: submit jobs on local machine using bash shell
      shell: bash
    sh:
      description: submit jobs on local machine using sh shell
      shell: sh
    python:
      description: submit jobs on local machine using python shell
      shell: python
    e4s:
      description: "E4S testsuite locally"
      shell: bash
      before_script: |
        cd $SCRATCH
        git clone https://github.com/E4S-Project/testsuite.git
        cd testsuite
        source /global/common/software/spackecp/luke-wyatt-testing/spack/share/spack/setup-env.sh
        source setup.sh
  slurm:
    debug:
      description: jobs for debug qos
      qos: debug
      cluster: cori
    shared:
      description: jobs for shared qos
      qos: shared
      max_pend_time: 180
    bigmem:
      description: bigmem jobs
      cluster: escori
      qos: bigmem
      max_pend_time: 300
    xfer:
      description: xfer qos jobs
      qos: xfer
      cluster: escori
    gpu:
      description: submit jobs to GPU partition
      options: ["-C gpu"]
      cluster: escori
      max_pend_time: 300
    premium:
      description: submit jobs to premium queue
      qos: premium
    e4s:
      description: "E4S runner"
      qos: debug
      cluster: cori
      options:
        - "-C haswell"
      before_script: |
        source /global/common/software/spackecp/luke-wyatt-testing/spack/share/spack/setup-env.sh
        source $HOME/buildtest-cori/e4s/setup.sh
In this configuration, we define the following executors:

LocalExecutors: local.bash, local.sh, local.python, local.e4s

SlurmExecutors: slurm.debug, slurm.shared, slurm.bigmem, slurm.xfer, slurm.gpu, slurm.premium, slurm.e4s
We introduce the defaults section, which defines configuration applied to all executors:
defaults:
  pollinterval: 10
  launcher: sbatch
  max_pend_time: 90
The launcher field is applicable to SlurmExecutors and LSFExecutors; in this case, launcher: sbatch sets sbatch as the job launcher for all executors. The pollinterval field sets the interval, in seconds, at which jobs are polled while active in the queue. The max_pend_time is the maximum time a job may remain pending within an executor; if a job exceeds the limit, buildtest will cancel it by invoking scancel or bkill for Slurm or LSF jobs, respectively. The pollinterval, launcher, and max_pend_time fields have no effect on LocalExecutors.
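The relationship between defaults and per-executor settings can be sketched as a simple dictionary merge. This is an illustrative, hypothetical helper, not buildtest's actual implementation:

```python
# Illustrative sketch: per-executor settings override the defaults section.
# Hypothetical helper, not buildtest's actual code.
DEFAULTS = {"pollinterval": 10, "launcher": "sbatch", "max_pend_time": 90}

def effective_settings(executor_config, defaults=DEFAULTS):
    merged = dict(defaults)       # start from the defaults section
    merged.update(executor_config)  # executor-level keys take precedence
    return merged

shared = {"qos": "shared", "max_pend_time": 180}
print(effective_settings(shared))
# {'pollinterval': 10, 'launcher': 'sbatch', 'max_pend_time': 180, 'qos': 'shared'}
```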
At Cori, jobs are submitted via qos rather than partition, so we model one slurm executor per qos. The qos field specifies which Slurm QOS to use when submitting a job. The description key is a brief description of the executor, used only for documentation purposes. The cluster field specifies which Slurm cluster to use (i.e. sbatch --clusters=<string>). In order to use the bigmem, xfer, or gpu qos at Cori, we need to specify the escori cluster (i.e. sbatch --clusters=escori).
buildtest will detect the Slurm configuration and check that the qos, partition, and cluster match the buildtest specification. In addition, buildtest supports multi-cluster job submission and monitoring from a remote cluster. This means that if you specify the cluster field, buildtest will poll jobs using sacct with the cluster name as follows: sacct -M <cluster>.
The options field is used to specify additional command-line options for the launcher (sbatch). For the slurm.gpu executor, we use options: -C gpu in order to submit to the Cori GPU cluster, which requires sbatch -M escori -C gpu. Any additional #SBATCH options are defined in the buildspec; for more details see Batch Scheduler Support.
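Assembling the launcher command line from the executor fields above can be sketched as follows. This is a hypothetical helper for illustration (using the long option --clusters rather than -M), not buildtest's actual implementation:

```python
# Illustrative sketch: build the sbatch command line from an executor's
# qos, cluster, and options fields (e.g. the slurm.gpu executor above).
# Hypothetical helper, not buildtest's actual code.
def sbatch_command(executor, script):
    cmd = ["sbatch"]
    if "qos" in executor:
        cmd.append(f"--qos={executor['qos']}")
    if "cluster" in executor:
        cmd.append(f"--clusters={executor['cluster']}")
    cmd.extend(executor.get("options", []))
    cmd.append(script)
    return cmd

gpu = {"cluster": "escori", "options": ["-C gpu"]}
print(" ".join(sbatch_command(gpu, "test.sh")))
# sbatch --clusters=escori -C gpu test.sh
```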
The max_pend_time option can be overridden per executor; for example, the section below overrides the default to 300 seconds:
bigmem:
  description: bigmem jobs
  cluster: escori
  qos: bigmem
  max_pend_time: 300
The max_pend_time is used to cancel a job only while it is pending in the queue, not once it is running. buildtest starts a timer at job submission and, at every poll interval (the pollinterval field), checks whether the job has exceeded max_pend_time, but only while the job is in the PENDING (Slurm) or PEND (LSF) state. If the pending time exceeds the max_pend_time limit, buildtest will cancel the job using scancel or bkill, depending on the scheduler. buildtest will remove cancelled jobs from the poll queue; in addition, cancelled jobs won't be reported in the test report.
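The cancellation rule just described can be sketched as a small predicate. This is an illustrative, hypothetical helper, not buildtest's actual code:

```python
# Illustrative sketch of the max_pend_time check: a job is cancelled only
# if it has sat in a pending state longer than the limit. Running jobs are
# never cancelled by this check. Not buildtest's actual implementation.
PENDING_STATES = {"PENDING", "PEND"}  # Slurm / LSF pending states

def should_cancel(state, elapsed_seconds, max_pend_time):
    return state in PENDING_STATES and elapsed_seconds > max_pend_time

print(should_cancel("PENDING", 120, 90))  # True: pending past the limit
print(should_cancel("RUNNING", 120, 90))  # False: running jobs are left alone
```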
buildspec roots¶
buildtest can discover buildspecs using the buildspec_roots keyword. This field is a list of directory paths to search for buildspecs. For example, we clone the repository https://github.com/buildtesters/buildtest-cori at $HOME/buildtest-cori and assign this path to buildspec_roots as follows:
buildspec_roots:
  - $HOME/buildtest-cori
This field is used with the buildtest buildspec find command. If you rebuild your buildspec cache using the --clear option, buildtest will detect all buildspecs in the directories specified by buildspec_roots. buildtest will recursively find all files with the .yml extension and validate each buildspec with the appropriate schema. By default, buildtest adds $BUILDTEST_ROOT/tutorials and $BUILDTEST_ROOT/general_tests to the search path, where $BUILDTEST_ROOT is the root of the repository.
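The recursive .yml discovery described above can be sketched with pathlib. This is an illustrative helper, not buildtest's actual implementation:

```python
import tempfile
from pathlib import Path

# Illustrative sketch: recursively collect all .yml files under each
# buildspec root. Not buildtest's actual code.
def find_buildspecs(roots):
    found = []
    for root in roots:
        found.extend(sorted(Path(root).rglob("*.yml")))
    return found

# Demonstrate with a throwaway directory tree.
with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "apps").mkdir()
    (Path(tmp) / "apps" / "hello.yml").write_text("...")
    (Path(tmp) / "top.yml").write_text("...")
    print([p.name for p in find_buildspecs([tmp])])
# ['hello.yml', 'top.yml']
```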
before_script and after_script for executors¶
Often, you may want to run a set of commands before or after more than one test. For this reason, we support before_script and after_script sections per executor, each of string type, where you can specify multi-line commands. This can be demonstrated with an executor named local.e4s responsible for building the E4S Testsuite:
e4s:
  description: "E4S testsuite locally"
  shell: bash
  before_script: |
    cd $SCRATCH
    git clone https://github.com/E4S-Project/testsuite.git
    cd testsuite
    source /global/common/software/spackecp/luke-wyatt-testing/spack/share/spack/setup-env.sh
    source setup.sh
The e4s executor clones the E4S Testsuite in $SCRATCH, activates a spack environment, and runs the initialization script via source setup.sh. buildtest will write a before_script.sh and after_script.sh for every executor. These can be found in the var/executors directory, as shown below:
$ tree var/executors/
var/executors/
|-- local.bash
| |-- after_script.sh
| `-- before_script.sh
|-- local.e4s
| |-- after_script.sh
| `-- before_script.sh
|-- local.python
| |-- after_script.sh
| `-- before_script.sh
|-- local.sh
| |-- after_script.sh
| `-- before_script.sh
4 directories, 8 files
The before_script and after_script fields are available for all executors; if a field is not specified, the corresponding file will be empty. Every test will source the before and after script for its executor.
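The var/executors layout shown above can be sketched as follows. This is an illustrative, hypothetical helper, not buildtest's actual code:

```python
import tempfile
from pathlib import Path

# Illustrative sketch of the var/executors layout: one directory per
# executor, each holding before_script.sh and after_script.sh (empty when
# the executor does not define the corresponding field). Not buildtest's
# actual implementation.
def write_executor_scripts(base, name, before="", after=""):
    exec_dir = Path(base) / name
    exec_dir.mkdir(parents=True, exist_ok=True)
    (exec_dir / "before_script.sh").write_text(before)
    (exec_dir / "after_script.sh").write_text(after)
    return exec_dir

with tempfile.TemporaryDirectory() as tmp:
    d = write_executor_scripts(tmp, "local.bash", before="module load gcc\n")
    print(sorted(p.name for p in d.iterdir()))
# ['after_script.sh', 'before_script.sh']
```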
The editor: vi setting is used to open buildspecs in the vi editor; this is used by commands like buildtest buildspec edit. For more details see Editing Buildspecs. The editor field can be vi, vim, nano, or emacs depending on your editor preference.
buildtest configuration for Ascent @ OLCF¶
Ascent is a training system for Summit at OLCF, which uses IBM Load Sharing Facility (LSF) as its batch scheduler. Ascent has two queues: batch and test. To declare LSF executors, we define them under the lsf section within the executors section. The default launcher is bsub, which can be defined under defaults. A pollinterval of 10 will poll LSF jobs every 10 seconds using bjobs. The pollinterval accepts a range between 10 and 300 seconds, as defined in the schema. To avoid polling the scheduler excessively, pick a number best suited to your site:
editor: vi
executors:
  defaults:
    launcher: bsub
    pollinterval: 10
    max_pend_time: 45
  local:
    bash:
      description: submit jobs on local machine using bash shell
      shell: bash
    sh:
      description: submit jobs on local machine using sh shell
      shell: sh
    python:
      description: submit jobs on local machine using python shell
      shell: python
  lsf:
    batch:
      queue: batch
      description: Submit job to batch queue
    test:
      queue: test
      description: Submit job to test queue
CLI to buildtest configuration¶
The buildtest config command provides access to the buildtest configuration; shown below is the command usage.
$ buildtest config --help
usage: buildtest [options] [COMMANDS] config [-h] {view,validate,summary} ...
optional arguments:
-h, --help show this help message and exit
subcommands:
buildtest configuration
{view,validate,summary}
view View Buildtest Configuration File
validate Validate buildtest settings file with schema.
summary Provide summary of buildtest settings.
View buildtest configuration¶
If you want to view the buildtest configuration, you can run the following:
$ buildtest config view
editor: vi
executors:
  local:
    bash:
      description: submit jobs on local machine using bash shell
      shell: bash
    sh:
      description: submit jobs on local machine using sh shell
      shell: sh
    python:
      description: submit jobs on local machine using python shell
      shell: python
Note
buildtest config view will display the contents of the user buildtest settings ~/.buildtest/config.yml if found; otherwise, it will display the default configuration.
Validate buildtest configuration¶
To check if your buildtest settings are valid, run buildtest config validate. This will validate your configuration against the schema settings.schema.json. The output will be the following:
$ buildtest config validate
/Users/siddiq90/Documents/buildtest/buildtest/settings/config.yml is valid
Note
If you defined a user setting (~/.buildtest/config.yml), buildtest will validate this file instead of the default one.
If there is an error during validation, the output from jsonschema.exceptions.ValidationError will be displayed in the terminal. For example, the error below indicates there was an error on the editor key in the config object, which expects the editor to be one of the enum values [vi, vim, nano, emacs]:
$ buildtest config validate
Traceback (most recent call last):
File "/Users/siddiq90/.local/share/virtualenvs/buildtest-1gHVG2Pd/bin/buildtest", line 11, in <module>
load_entry_point('buildtest', 'console_scripts', 'buildtest')()
File "/Users/siddiq90/Documents/buildtest/buildtest/main.py", line 32, in main
check_settings()
File "/Users/siddiq90/Documents/buildtest/buildtest/config.py", line 71, in check_settings
validate(instance=user_schema, schema=config_schema)
File "/Users/siddiq90/.local/share/virtualenvs/buildtest-1gHVG2Pd/lib/python3.7/site-packages/jsonschema/validators.py", line 899, in validate
raise error
jsonschema.exceptions.ValidationError: 'gedit' is not one of ['vi', 'vim', 'nano', 'emacs']
Failed validating 'enum' in schema['properties']['config']['properties']['editor']:
{'default': 'vim',
'enum': ['vi', 'vim', 'nano', 'emacs'],
'type': 'string'}
On instance['config']['editor']:
'gedit'
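The enum check that produced this error can be mimicked in plain Python. A minimal sketch of the rule (this reimplements the jsonschema 'enum' behavior for illustration; it is not buildtest code):

```python
# Minimal sketch of the editor enum check from settings.schema.json:
# the value must be one of the allowed editors. Mimics jsonschema's
# 'enum' validation; not buildtest's actual implementation.
EDITOR_ENUM = ["vi", "vim", "nano", "emacs"]

def validate_editor(value):
    if value not in EDITOR_ENUM:
        raise ValueError(f"{value!r} is not one of {EDITOR_ENUM}")
    return value

print(validate_editor("vim"))  # vim
try:
    validate_editor("gedit")
except ValueError as err:
    print(err)  # 'gedit' is not one of ['vi', 'vim', 'nano', 'emacs']
```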
Configuration Summary¶
You can get a summary of buildtest using buildtest config summary; this will display information from several sources in a single command.
$ buildtest config summary
buildtest version: 0.8.1
buildtest Path: /Users/siddiq90/Documents/buildtest/bin/buildtest
Machine Details
______________________________
Operating System: darwin
Hostname: DOE-7086392.local
Machine: x86_64
Processor: i386
Python Path /Users/siddiq90/.local/share/virtualenvs/buildtest-1gHVG2Pd/bin/python
Python Version: 3.7.3
User: siddiq90
Buildtest Settings
________________________________________________________________________________
Buildtest Settings: /Users/siddiq90/.buildtest/config.yml
Buildtest Settings is VALID
Executors: ['local.bash', 'local.sh', 'local.python']
Buildspec Cache File: /Users/siddiq90/Documents/buildtest/var/buildspec-cache.json
Number of buildspecs: 2
Number of Tests: 33
Tests: ['/Users/siddiq90/Documents/buildtest/tutorials/systemd.yml', '/Users/siddiq90/Documents/buildtest/tutorials/run_only_distro.yml', '/Users/siddiq90/Documents/buildtest/tutorials/shell_examples.yml', '/Users/siddiq90/Documents/buildtest/tutorials/environment.yml', '/Users/siddiq90/Documents/buildtest/tutorials/python-hello.yml', '/Users/siddiq90/Documents/buildtest/tutorials/vars.yml', '/Users/siddiq90/Documents/buildtest/tutorials/selinux.yml', '/Users/siddiq90/Documents/buildtest/tutorials/shebang.yml', '/Users/siddiq90/Documents/buildtest/tutorials/pass_returncode.yml', '/Users/siddiq90/Documents/buildtest/tutorials/hello_world.yml', '/Users/siddiq90/Documents/buildtest/tutorials/root_user.yml', '/Users/siddiq90/Documents/buildtest/tutorials/tags_example.yml', '/Users/siddiq90/Documents/buildtest/tutorials/run_only_platform.yml', '/Users/siddiq90/Documents/buildtest/tutorials/python-shell.yml', '/Users/siddiq90/Documents/buildtest/tutorials/skip_tests.yml', '/Users/siddiq90/Documents/buildtest/tutorials/sleep.yml', '/Users/siddiq90/Documents/buildtest/tutorials/compilers/vecadd.yml', '/Users/siddiq90/Documents/buildtest/tutorials/compilers/gnu_hello.yml', '/Users/siddiq90/Documents/buildtest/tutorials/compilers/pre_post_build_run.yml', '/Users/siddiq90/Documents/buildtest/tutorials/compilers/passing_args.yml', '/Users/siddiq90/Documents/buildtest/general_tests/configuration/disk_usage.yml', '/Users/siddiq90/Documents/buildtest/general_tests/configuration/ssh_localhost.yml', '/Users/siddiq90/Documents/buildtest/general_tests/configuration/systemd-default-target.yml', '/Users/siddiq90/Documents/buildtest/general_tests/configuration/ulimits.yml', '/Users/siddiq90/Documents/buildtest/general_tests/sched/slurm/sacctmgr.yml', '/Users/siddiq90/Documents/buildtest/general_tests/sched/slurm/scontrol.yml', '/Users/siddiq90/Documents/buildtest/general_tests/sched/slurm/squeue.yml', '/Users/siddiq90/Documents/buildtest/general_tests/sched/slurm/sinfo.yml', 
'/Users/siddiq90/Documents/buildtest/general_tests/sched/lsf/bmgroups.yml', '/Users/siddiq90/Documents/buildtest/general_tests/sched/lsf/lsinfo.yml', '/Users/siddiq90/Documents/buildtest/general_tests/sched/lsf/bugroup.yml', '/Users/siddiq90/Documents/buildtest/general_tests/sched/lsf/bqueues.yml', '/Users/siddiq90/Documents/buildtest/general_tests/sched/lsf/bhosts.yml']
Buildtest Schemas
________________________________________________________________________________
Available Schemas: ['script-v1.0.schema.json', 'compiler-v1.0.schema.json', 'global.schema.json', 'settings.schema.json']
Supported Sub-Schemas
________________________________________________________________________________
script-v1.0.schema.json : /Users/siddiq90/Documents/buildtest/buildtest/schemas/script-v1.0.schema.json
Examples Directory for schema: /Users/siddiq90/Documents/buildtest/buildtest/schemas/examples
compiler-v1.0.schema.json : /Users/siddiq90/Documents/buildtest/buildtest/schemas/compiler-v1.0.schema.json
Examples Directory for schema: /Users/siddiq90/Documents/buildtest/buildtest/schemas/examples
Example Configurations¶
buildtest provides a few example configurations for configuring buildtest. These can be retrieved by running buildtest schema -n settings.schema.json --examples, or with the short option (-e), which will validate each example against the schema file settings.schema.json.
$ buildtest schema -n settings.schema.json -e
File: /Users/siddiq90/Documents/buildtest/buildtest/schemas/examples/settings.schema.json/valid/local-executor.yml
________________________________________________________________________________
editor: vi
executors:
  local:
    bash:
      description: submit jobs on local machine using bash shell
      shell: bash
      before_script: |
        time
        echo "commands run before job"
      after_script: |
        time
        echo "commands run after job"
    sh:
      description: submit jobs on local machine using sh shell
      shell: sh
    python:
      description: submit jobs on local machine using python shell
      shell: python
File: /Users/siddiq90/Documents/buildtest/buildtest/schemas/examples/settings.schema.json/valid/slurm-example.yml
________________________________________________________________________________
editor: emacs
buildspec_roots:
  - $HOME/buildtest-cori
testdir: /tmp/buildtest
executors:
  defaults:
    pollinterval: 20
    launcher: sbatch
    max_pend_time: 30
  slurm:
    normal:
      options: ["-C haswell"]
      qos: normal
      before_script: |
        time
        echo "commands run before job"
      after_script: |
        time
        echo "commands run after job"
File: /Users/siddiq90/Documents/buildtest/buildtest/schemas/examples/settings.schema.json/valid/combined_executor.yml
________________________________________________________________________________
editor: vi
executors:
  local:
    bash:
      description: submit jobs on local machine
      shell: bash -v
  slurm:
    haswell:
      launcher: sbatch
      options:
        - "-p haswell"
        - "-t 00:10"
  lsf:
    batch:
      launcher: bsub
      options:
        - "-q batch"
        - "-t 00:10"
File: /Users/siddiq90/Documents/buildtest/buildtest/schemas/examples/settings.schema.json/valid/lsf-example.yml
________________________________________________________________________________
editor: vi
executors:
  defaults:
    pollinterval: 10
    launcher: bsub
    max_pend_time: 45
  lsf:
    batch:
      description: "LSF Executor name 'batch' that submits jobs to 'batch' queue"
      queue: batch
      options: ["-W 20"]
      before_script: |
        time
        echo "commands run before job"
      after_script: |
        time
        echo "commands run after job"
    test:
      description: "LSF Executor name 'test' that submits jobs to 'test' queue"
      launcher: bsub
      queue: test
      options: ["-W 20"]
If you want to retrieve the full JSON schema file for the buildtest configuration, you can run buildtest schema -n settings.schema.json --json, or the short option -j.