Batch Scheduler Support

Slurm

buildtest can submit jobs to Slurm assuming you have Slurm executors defined in your configuration file. The SlurmExecutor class is responsible for managing Slurm jobs and will perform the following actions:

  1. Check for the Slurm binaries sbatch and sacct.

  2. Dispatch the job and acquire the job ID using sacct.

  3. Poll all Slurm jobs until they have finished.

  4. Gather job results once the job is complete via sacct.

buildtest will dispatch Slurm jobs and poll all jobs until they are complete. If a job is in PENDING or RUNNING state, buildtest will keep polling at a set interval defined by the pollinterval setting in buildtest. Once a job is no longer in PENDING or RUNNING state, buildtest will gather the job results and wait until all jobs have finished.
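The polling behavior described above can be sketched as a simple loop. This is a minimal illustration, not buildtest's actual implementation; `get_job_state` stands in for a hypothetical helper that queries sacct for the job state.

```python
import time

# States in which buildtest keeps polling, per the description above
ACTIVE_STATES = {"PENDING", "RUNNING"}

def poll_job(get_job_state, pollinterval=30, sleep=time.sleep):
    """Poll a job until it leaves the PENDING/RUNNING states and
    return its final state."""
    while True:
        state = get_job_state()
        if state not in ACTIVE_STATES:
            return state
        sleep(pollinterval)

# Simulate a job that is pending, then running, then completed.
states = iter(["PENDING", "RUNNING", "RUNNING", "COMPLETED"])
final = poll_job(lambda: next(states), pollinterval=30, sleep=lambda s: None)
print(final)  # COMPLETED
```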

In this example we have a Slurm executor cori.slurm.knl_debug; in addition, we can specify #SBATCH directives using the sbatch field. The sbatch field is a list of strings, and buildtest will insert the #SBATCH directive in front of each value.

Shown below is an example buildspec

version: "1.0"
buildspecs:
  slurm_metadata:
    description: Get metadata from compute node when submitting job
    type: script
    executor: cori.slurm.knl_debug
    tags: [jobs]
    sbatch:
      - "-t 00:05"
      - "-N 1"
    run: |
      export SLURM_JOB_NAME="firstjob"
      echo "jobname:" $SLURM_JOB_NAME
      echo "slurmdb host:" $SLURMD_NODENAME
      echo "pid:" $SLURM_TASK_PID
      echo "submit host:" $SLURM_SUBMIT_HOST
      echo "nodeid:" $SLURM_NODEID
      echo "partition:" $SLURM_JOB_PARTITION

buildtest will add the #SBATCH directives at the top of the script, followed by the content of the run section. Shown below is the example test content. For every Slurm job, buildtest will insert #SBATCH --job-name, #SBATCH --output and #SBATCH --error lines, which are determined by the name of the test.

#!/bin/bash
#SBATCH -t 00:05
#SBATCH -N 1
#SBATCH --job-name=slurm_metadata
#SBATCH --output=slurm_metadata.out
#SBATCH --error=slurm_metadata.err
export SLURM_JOB_NAME="firstjob"
echo "jobname:" $SLURM_JOB_NAME
echo "slurmdb host:" $SLURMD_NODENAME
echo "pid:" $SLURM_TASK_PID
echo "submit host:" $SLURM_SUBMIT_HOST
echo "nodeid:" $SLURM_NODEID
echo "partition:" $SLURM_JOB_PARTITION

The cori.slurm.knl_debug executor in our configuration file is defined as follows

system:
  cori:
    executors:
      slurm:
        knl_debug:
          qos: debug
          cluster: cori
          options:
          - -C knl,quad,cache
          description: debug queue on KNL partition

With this setting, any buildspec test that uses the cori.slurm.knl_debug executor will result in the following launch command: sbatch --qos debug --clusters=cori -C knl,quad,cache /path/to/script.sh.
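The translation from executor settings to launch options can be sketched as follows. This is an illustrative mapping (qos to --qos, cluster to --clusters, options appended verbatim) based on the description above; the function name is hypothetical, not buildtest's actual API.

```python
def build_launch_command(executor, script):
    """Assemble the sbatch launch command from an executor definition."""
    cmd = ["sbatch"]
    if "qos" in executor:
        cmd += ["--qos", executor["qos"]]
    if "cluster" in executor:
        cmd.append(f"--clusters={executor['cluster']}")
    # 'options' entries are passed through to sbatch as-is
    cmd += executor.get("options", [])
    cmd.append(script)
    return " ".join(cmd)

knl_debug = {"qos": "debug", "cluster": "cori", "options": ["-C knl,quad,cache"]}
print(build_launch_command(knl_debug, "/path/to/script.sh"))
# sbatch --qos debug --clusters=cori -C knl,quad,cache /path/to/script.sh
```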

Unlike the LocalExecutor, the Run Stage will dispatch the Slurm job and poll until the job is completed. Once the job is complete, it will gather the results and terminate. In the Run Stage, buildtest will mark the test status as N/A because the job is submitted to the scheduler and pending in the queue. In order to get the job result, we need to wait until the job is complete; then we gather results and determine the test state. buildtest keeps track of all buildspecs, test scripts to be run, and their results. A test using the LocalExecutor will run in the Run Stage, where the returncode is retrieved and the status can be calculated immediately. For Slurm jobs, buildtest dispatches the job and processes the next job. buildtest will show the output of all tests after the Polling Stage along with the test results. A Slurm job with exit code 0 will be marked with status PASS.

Shown below is an example build for this test

$ buildtest build -b buildspecs/jobs/metadata.yml
User:  siddiq90
Hostname:  cori02
Platform:  Linux
Current Time:  2021/09/03 10:08:49
buildtest path: /global/homes/s/siddiq90/github/buildtest/bin/buildtest
buildtest version:  0.10.2
python path: /global/homes/s/siddiq90/.conda/envs/buildtest/bin/python
python version:  3.8.8
Test Directory:  /global/u1/s/siddiq90/github/buildtest/var/tests
Configuration File:  /global/u1/s/siddiq90/.buildtest/config.yml
Command: /global/homes/s/siddiq90/github/buildtest/bin/buildtest build -b buildspecs/jobs/metadata.yml

+-------------------------------+
| Stage: Discovering Buildspecs |
+-------------------------------+

+--------------------------------------------------------------------------+
| Discovered Buildspecs                                                    |
+==========================================================================+
| /global/u1/s/siddiq90/github/buildtest-cori/buildspecs/jobs/metadata.yml |
+--------------------------------------------------------------------------+
Discovered Buildspecs:  1
Excluded Buildspecs:  0
Detected Buildspecs after exclusion:  1

+---------------------------+
| Stage: Parsing Buildspecs |
+---------------------------+

Valid Buildspecs:  1
Invalid Buildspecs:  0
/global/u1/s/siddiq90/github/buildtest-cori/buildspecs/jobs/metadata.yml: VALID


Total builder objects created: 1
builders: [slurm_metadata/303d1e32]


name            id        description                                         buildspecs
--------------  --------  --------------------------------------------------  ------------------------------------------------------------------------
slurm_metadata  303d1e32  Get metadata from compute node when submitting job  /global/u1/s/siddiq90/github/buildtest-cori/buildspecs/jobs/metadata.yml

+----------------------+
| Stage: Building Test |
+----------------------+

 name           | id       | type   | executor             | tags     | testpath
----------------+----------+--------+----------------------+----------+--------------------------------------------------------------------------------------------------------------------------------
 slurm_metadata | 303d1e32 | script | cori.slurm.knl_debug | ['jobs'] | /global/u1/s/siddiq90/github/buildtest/var/tests/cori.slurm.knl_debug/metadata/slurm_metadata/303d1e32/slurm_metadata_build.sh

+---------------------+
| Stage: Running Test |
+---------------------+

______________________________
Launching test: slurm_metadata
Test ID: 303d1e32-52eb-4d77-9a36-04a5143c4cbd
Executor Name: cori.slurm.knl_debug
Running Test:  /global/u1/s/siddiq90/github/buildtest/var/tests/cori.slurm.knl_debug/metadata/slurm_metadata/303d1e32/slurm_metadata_build.sh
slurm_metadata/303d1e32 JobID: 46508594 dispatched to scheduler
Polling Jobs in 30 seconds
slurm_metadata/303d1e32: Job 46508594 is complete!
slurm_metadata/303d1e32: Writing output file: /global/u1/s/siddiq90/github/buildtest/var/tests/cori.slurm.knl_debug/metadata/slurm_metadata/303d1e32/slurm_metadata.out
slurm_metadata/303d1e32: Writing error file: /global/u1/s/siddiq90/github/buildtest/var/tests/cori.slurm.knl_debug/metadata/slurm_metadata/303d1e32/slurm_metadata.err

+-----------------------+
| Completed Polled Jobs |
+-----------------------+

 name           | id       | executor             | status   |   returncode |   runtime
----------------+----------+----------------------+----------+--------------+-----------
 slurm_metadata | 303d1e32 | cori.slurm.knl_debug | PASS     |            0 |   30.9923

+----------------------+
| Stage: Test Summary  |
+----------------------+

 name           | id       | executor             | status   |   returncode |   runtime
----------------+----------+----------------------+----------+--------------+-----------
 slurm_metadata | 303d1e32 | cori.slurm.knl_debug | PASS     |            0 |   30.9923



Passed Tests: 1/1 Percentage: 100.000%
Failed Tests: 0/1 Percentage: 0.000%


Writing Logfile to: /tmp/buildtest_2159tqkz.log
A copy of logfile can be found at $BUILDTEST_ROOT/buildtest.log -  /global/homes/s/siddiq90/github/buildtest/buildtest.log

The SlurmExecutor class is responsible for processing Slurm jobs, which may include dispatching, polling, gathering, or cancelling a job. The SlurmExecutor will gather job metrics via sacct.

buildtest can check status based on the Slurm job state, which is defined by the State field in sacct. In the next example, we introduce the field slurm_job_state, which is part of the status field. This field expects one of the following values: [COMPLETED, FAILED, OUT_OF_MEMORY, TIMEOUT]. The example simulates a failed job by requesting a wall time shorter than the job's runtime, with an expected job state of TIMEOUT.

version: "1.0"
buildspecs:
  wall_timeout:
    type: script
    executor: cori.slurm.haswell_debug
    sbatch: [ "-t '00:00:10'", "-n 1"]
    description: "This job simulates job timeout by sleeping for 300sec while requesting 5sec"
    tags: ["jobs", "fail"]
    run: sleep 180
    status:
      slurm_job_state: "TIMEOUT"

If we run this test, buildtest will mark it as PASS because the Slurm job state matches the expected result defined by the field slurm_job_state. This job will be in TIMEOUT state because the requested wall time is shorter than the time the job sleeps.

(buildtest) siddiq90@cori02> buildtest build -b buildspecs/jobs/fail/timeout.yml
User:  siddiq90
Hostname:  cori02
Platform:  Linux
Current Time:  2021/09/03 13:34:13
buildtest path: /global/homes/s/siddiq90/github/buildtest/bin/buildtest
buildtest version:  0.10.2
python path: /global/homes/s/siddiq90/.conda/envs/buildtest/bin/python
python version:  3.8.8
Test Directory:  /global/u1/s/siddiq90/github/buildtest/var/tests
Configuration File:  /global/u1/s/siddiq90/.buildtest/config.yml
Command: /global/homes/s/siddiq90/github/buildtest/bin/buildtest build -b buildspecs/jobs/fail/timeout.yml

+-------------------------------+
| Stage: Discovering Buildspecs |
+-------------------------------+

+------------------------------------------------------------------------------+
| Discovered Buildspecs                                                        |
+==============================================================================+
| /global/u1/s/siddiq90/github/buildtest-cori/buildspecs/jobs/fail/timeout.yml |
+------------------------------------------------------------------------------+
Discovered Buildspecs:  1
Excluded Buildspecs:  0
Detected Buildspecs after exclusion:  1

+---------------------------+
| Stage: Parsing Buildspecs |
+---------------------------+

Valid Buildspecs:  1
Invalid Buildspecs:  0
/global/u1/s/siddiq90/github/buildtest-cori/buildspecs/jobs/fail/timeout.yml: VALID


Total builder objects created: 1
builders: [wall_timeout/ae385691]


name          id        description                                                                  buildspecs
------------  --------  ---------------------------------------------------------------------------  ----------------------------------------------------------------------------
wall_timeout  ae385691  This job simulates job timeout by sleeping for 300sec while requesting 5sec  /global/u1/s/siddiq90/github/buildtest-cori/buildspecs/jobs/fail/timeout.yml

+----------------------+
| Stage: Building Test |
+----------------------+

 name         | id       | type   | executor                 | tags             | testpath
--------------+----------+--------+--------------------------+------------------+-------------------------------------------------------------------------------------------------------------------------------
 wall_timeout | ae385691 | script | cori.slurm.haswell_debug | ['jobs', 'fail'] | /global/u1/s/siddiq90/github/buildtest/var/tests/cori.slurm.haswell_debug/timeout/wall_timeout/ae385691/wall_timeout_build.sh

+---------------------+
| Stage: Running Test |
+---------------------+

______________________________
Launching test: wall_timeout
Test ID: ae385691-9eb4-413c-ac5b-f2be1bcc449e
Executor Name: cori.slurm.haswell_debug
Running Test:  /global/u1/s/siddiq90/github/buildtest/var/tests/cori.slurm.haswell_debug/timeout/wall_timeout/ae385691/wall_timeout_build.sh
wall_timeout/ae385691 JobID: 46518859 dispatched to scheduler
Polling Jobs in 30 seconds


Current Jobs
_______________


+--------------+----------+--------------------------+----------+----------+---------+
|     name     |    id    |         executor         |  jobID   | jobstate | runtime |
+--------------+----------+--------------------------+----------+----------+---------+
| wall_timeout | ae385691 | cori.slurm.haswell_debug | 46518859 | RUNNING  |  30.38  |
+--------------+----------+--------------------------+----------+----------+---------+
Polling Jobs in 30 seconds


Current Jobs
_______________


+--------------+----------+--------------------------+----------+----------+---------+
|     name     |    id    |         executor         |  jobID   | jobstate | runtime |
+--------------+----------+--------------------------+----------+----------+---------+
| wall_timeout | ae385691 | cori.slurm.haswell_debug | 46518859 | RUNNING  | 60.521  |
+--------------+----------+--------------------------+----------+----------+---------+
Polling Jobs in 30 seconds
wall_timeout/ae385691: Job 46518859 is complete!
wall_timeout/ae385691: Writing output file: /global/u1/s/siddiq90/github/buildtest/var/tests/cori.slurm.haswell_debug/timeout/wall_timeout/ae385691/wall_timeout.out
wall_timeout/ae385691: Writing error file: /global/u1/s/siddiq90/github/buildtest/var/tests/cori.slurm.haswell_debug/timeout/wall_timeout/ae385691/wall_timeout.err

+-----------------------+
| Completed Polled Jobs |
+-----------------------+

 name         | id       | executor                 |    jobID | jobstate   | status   |   returncode |   runtime
--------------+----------+--------------------------+----------+------------+----------+--------------+-----------
 wall_timeout | ae385691 | cori.slurm.haswell_debug | 46518859 | TIMEOUT    | PASS     |            0 |   90.6563

+----------------------+
| Stage: Test Summary  |
+----------------------+

 name         | id       | executor                 | status   | returncode_match   | regex_match   | runtime_match   |   returncode |   runtime
--------------+----------+--------------------------+----------+--------------------+---------------+-----------------+--------------+-----------
 wall_timeout | ae385691 | cori.slurm.haswell_debug | PASS     | False              | False         | False           |            0 |   90.6563



Passed Tests: 1/1 Percentage: 100.000%
Failed Tests: 0/1 Percentage: 0.000%


Writing Logfile to: /tmp/buildtest_yr61l5t9.log
A copy of logfile can be found at $BUILDTEST_ROOT/buildtest.log -  /global/homes/s/siddiq90/github/buildtest/buildtest.log

buildtest marked this test PASS because the jobstate TIMEOUT matches the value provided by slurm_job_state in the buildspec.

LSF

buildtest can support job submission to IBM Spectrum LSF if you have defined LSF executors in your configuration file.

The bsub property can be used to specify #BSUB directives in the job script. This example will use the ascent.lsf.batch executor that was defined in the buildtest configuration.

version: "1.0"
buildspecs:
  hostname:
    type: script
    executor: ascent.lsf.batch
    bsub: [ "-W 10",  "-nnodes 1"]

    run: jsrun hostname

The LSFExecutor polls jobs and retrieves the job state using bjobs -noheader -o 'stat' <JOBID>. The LSFExecutor will poll a job so long as it is in the PEND or RUN state. Once the job is no longer in either of these states, the LSFExecutor will gather the job results. buildtest will retrieve the following format fields using bjobs to get the job record: job_name, stat, user, user_group, queue, proj_name, pids, exit_code, from_host, exec_host, submit_time, start_time, finish_time, nthreads, exec_home, exec_cwd, output_file, error_file.
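The two bjobs queries described above can be sketched as plain command strings. The field list is taken from this section; the helper functions are illustrative, not buildtest's actual implementation.

```python
# Format fields buildtest retrieves via bjobs, per the list above
FORMAT_FIELDS = [
    "job_name", "stat", "user", "user_group", "queue", "proj_name",
    "pids", "exit_code", "from_host", "exec_host", "submit_time",
    "start_time", "finish_time", "nthreads", "exec_home", "exec_cwd",
    "output_file", "error_file",
]

def poll_cmd(jobid):
    # Query only the job state, suppressing the header line
    return f"bjobs -noheader -o 'stat' {jobid}"

def gather_cmd(jobid):
    # Query the full job record using the format fields above
    fields = " ".join(FORMAT_FIELDS)
    return f"bjobs -noheader -o '{fields}' {jobid}"

print(poll_cmd(12345))  # bjobs -noheader -o 'stat' 12345
```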

PBS

buildtest supports job submission to the PBS Pro and OpenPBS schedulers. Assuming you have configured PBS executors in your configuration file, you can submit jobs to a PBS executor by selecting the appropriate executor via the executor property in the buildspec. The #PBS directives can be specified using the pbs field, which is a list of PBS options that get inserted at the top of the script. Shown below is an example buildspec using the script schema.

 version: "1.0"
 buildspecs:
   pbs_sleep:
     type: script
     executor: generic.pbs.workq
     pbs: ["-l nodes=1", "-l walltime=00:02:00"]
     run: sleep 10

buildtest will poll PBS jobs using qstat -x -f -F json <jobID> until the job is finished. Note that we use the -x option to retrieve finished jobs, which is required in order for buildtest to detect the job state upon completion.
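Extracting the job state from that JSON output can be sketched as follows. The trimmed payload below is a hypothetical example of what qstat -x -f -F json returns; the "Jobs" and "job_state" keys follow PBS Pro's JSON layout, but consult your scheduler's actual output for the exact structure.

```python
import json

# Hypothetical, trimmed qstat -x -f -F json payload for job 394.pbs
payload = """
{
  "Jobs": {
    "394.pbs": {
      "job_state": "F",
      "Exit_status": 0
    }
  }
}
"""

def job_state(qstat_json, jobid):
    """Return the job_state field for a given job ID."""
    data = json.loads(qstat_json)
    return data["Jobs"][jobid]["job_state"]

print(job_state(payload, "394.pbs"))  # F
```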

Shown below is an example build of the buildspec using PBS scheduler.

[pbsuser@pbs tests]$ python3.7 ./bin/buildtest -c tests/settings/pbs.yml build -b tests/examples/pbs/sleep.yml --poll-interval=5
User:  pbsuser
Hostname:  pbs
Platform:  Linux
Current Time:  2021/09/03 20:40:24
buildtest path: /tmp/GitHubDesktop/buildtest/bin/buildtest
buildtest version:  0.10.2
python path: /bin/python
python version:  3.7.0
Test Directory:  /tmp/GitHubDesktop/buildtest/var/tests
Configuration File:  /tmp/GitHubDesktop/buildtest/tests/settings/pbs.yml
Command: ./bin/buildtest -c tests/settings/pbs.yml build -b tests/examples/pbs/sleep.yml --poll-interval=5

+-------------------------------+
| Stage: Discovering Buildspecs |
+-------------------------------+

+-----------------------------------------------------------+
| Discovered Buildspecs                                     |
+===========================================================+
| /tmp/GitHubDesktop/buildtest/tests/examples/pbs/sleep.yml |
+-----------------------------------------------------------+
Discovered Buildspecs:  1
Excluded Buildspecs:  0
Detected Buildspecs after exclusion:  1

+---------------------------+
| Stage: Parsing Buildspecs |
+---------------------------+

Valid Buildspecs:  1
Invalid Buildspecs:  0
/tmp/GitHubDesktop/buildtest/tests/examples/pbs/sleep.yml: VALID


Total builder objects created: 1
builders: [pbs_sleep/631998a2]


name       id        description    buildspecs
---------  --------  -------------  ---------------------------------------------------------
pbs_sleep  631998a2                 /tmp/GitHubDesktop/buildtest/tests/examples/pbs/sleep.yml

+----------------------+
| Stage: Building Test |
+----------------------+

 name      | id       | type   | executor          | tags   | testpath
-----------+----------+--------+-------------------+--------+------------------------------------------------------------------------------------------------------
 pbs_sleep | 631998a2 | script | generic.pbs.workq |        | /tmp/GitHubDesktop/buildtest/var/tests/generic.pbs.workq/sleep/pbs_sleep/631998a2/pbs_sleep_build.sh

+---------------------+
| Stage: Running Test |
+---------------------+

______________________________
Launching test: pbs_sleep
Test ID: 631998a2-dc7c-4407-9b3f-552be9a11161
Executor Name: generic.pbs.workq
Running Test:  /tmp/GitHubDesktop/buildtest/var/tests/generic.pbs.workq/sleep/pbs_sleep/631998a2/pbs_sleep_build.sh
[pbs_sleep] JobID: 394.pbs dispatched to scheduler
Polling Jobs in 5 seconds


Current Jobs
_______________


+-----------+----------+-------------------+---------+----------+---------+
|   name    |    id    |     executor      |  jobID  | jobstate | runtime |
+-----------+----------+-------------------+---------+----------+---------+
| pbs_sleep | 631998a2 | generic.pbs.workq | 394.pbs |    R     |  5.143  |
+-----------+----------+-------------------+---------+----------+---------+
Polling Jobs in 5 seconds
pbs_sleep/631998a2: Job 394.pbs is complete!
pbs_sleep/631998a2: Writing output file: /tmp/GitHubDesktop/buildtest/var/tests/generic.pbs.workq/sleep/pbs_sleep/631998a2/pbs_sleep.o394
pbs_sleep/631998a2: Writing error file: /tmp/GitHubDesktop/buildtest/var/tests/generic.pbs.workq/sleep/pbs_sleep/631998a2/pbs_sleep.e394

+-----------------------+
| Completed Polled Jobs |
+-----------------------+

 name      | id       | executor          | jobID   | jobstate   | status   |   returncode |   runtime
-----------+----------+-------------------+---------+------------+----------+--------------+-----------
 pbs_sleep | 631998a2 | generic.pbs.workq | 394.pbs | F          | PASS     |            0 |    10.193

+----------------------+
| Stage: Test Summary  |
+----------------------+

 name      | id       | executor          | status   | returncode_match   | regex_match   | runtime_match   |   returncode |   runtime
-----------+----------+-------------------+----------+--------------------+---------------+-----------------+--------------+-----------
 pbs_sleep | 631998a2 | generic.pbs.workq | PASS     | N/A                | N/A           | N/A             |            0 |    10.193



Passed Tests: 1/1 Percentage: 100.000%
Failed Tests: 0/1 Percentage: 0.000%


Writing Logfile to: /tmp/buildtest_moa4gi1x.log
A copy of logfile can be found at $BUILDTEST_ROOT/buildtest.log -  /tmp/GitHubDesktop/buildtest/buildtest.log

Cobalt

Cobalt is a job scheduler developed by Argonne National Laboratory that runs on its compute resources and the IBM BlueGene series. Cobalt resembles PBS in terms of its command line interface, with commands such as qsub and qstat, however the two schedulers differ slightly in behavior.

Cobalt support has been tested on the JLSE and Theta systems. Cobalt directives are specified using #COBALT; these can be set using the cobalt property, which accepts a list of strings. Shown below is an example using the cobalt property.

version: "1.0"
buildspecs:
  yarrow_hostname:
    executor: jlse.cobalt.yarrow
    type: script
    cobalt: ["-n 1", "--proccount 1", "-t 10"]
    run: hostname

In this example, we allocate 1 node with 1 processor for 10 minutes. This is translated into the following job script.

#!/usr/bin/bash
#COBALT -n 1
#COBALT --proccount 1
#COBALT -t 10
#COBALT --jobname yarrow_hostname
source /home/shahzebsiddiqui/buildtest/var/executors/cobalt.yarrow/before_script.sh
hostname
source /home/shahzebsiddiqui/buildtest/var/executors/cobalt.yarrow/after_script.sh

When the job starts, Cobalt will write a log file <JOBID>.cobaltlog, which is provided by the scheduler for troubleshooting. The output and error files are generated once the job finishes. A Cobalt job progresses through the job states starting -> pending -> running -> exiting. buildtest will capture Cobalt job details using qstat -lf <JOBID>, and this is updated in the report file.

buildtest will poll the job at a set interval, running qstat --header State <JobID> to check the state of the job; if the job is finished, then we gather the results. Once a job is finished, qstat will no longer report the job, which causes an issue where buildtest can't poll it since qstat returns nothing. This is a transient issue depending on when you poll the job; generally at ALCF, qstat will not report an existing job within 30 sec after the job is terminated. buildtest will assume that if it is able to poll a job in the exiting state, the job is complete; if it is unable to retrieve this state, it checks for the output and error files. If the files exist, we assume the job is complete and buildtest will gather the results.

buildtest will determine the exit code by parsing the Cobalt log file, which contains a line such as:

Thu Nov 05 17:29:30 2020 +0000 (UTC) Info: task completed normally with an exit code of 0; initiating job cleanup and removal

qstat has no job record for capturing the returncode, so buildtest must rely on the Cobalt log file.
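Parsing the exit code out of a .cobaltlog line like the one shown above can be sketched with a simple regular expression; this is an illustration, not buildtest's actual implementation.

```python
import re

# Sample .cobaltlog line, as shown above
line = ("Thu Nov 05 17:29:30 2020 +0000 (UTC) Info: task completed normally "
        "with an exit code of 0; initiating job cleanup and removal")

def parse_exit_code(logline):
    """Extract the integer exit code from a cobaltlog completion line."""
    match = re.search(r"exit code of (\d+)", logline)
    return int(match.group(1)) if match else None

print(parse_exit_code(line))  # 0
```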

Jobs that exceed max_pend_time

Recall from Configuring buildtest that max_pend_time will cancel jobs that exceed the time limit. buildtest will start a timer for each job right after submission and keep track of the elapsed time; if the job is in a pending state and exceeds max_pend_time, then the job will be cancelled.
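The cancellation check described above can be sketched as follows. This is an illustrative sketch assuming a generic "PENDING" state label (schedulers report pending differently, e.g. PD in Slurm or H in PBS), not buildtest's actual implementation.

```python
import time

def should_cancel(job_state, submit_time, max_pend_time, now=None):
    """Return True if a pending job has exceeded max_pend_time seconds
    since submission."""
    now = time.time() if now is None else now
    pend_time = now - submit_time
    return job_state == "PENDING" and pend_time > max_pend_time

# A job pending for 6.2 sec with a max_pend_time of 5 sec gets cancelled.
print(should_cancel("PENDING", submit_time=0, max_pend_time=5, now=6.2))  # True
```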

We can also override the max_pend_time configuration via the command line option --max-pend-time. To demonstrate, here is an example where a job was cancelled after pending longer than max_pend_time. Note that a cancelled job is not reported in the final output nor updated in the report, hence it won't be present in buildtest report. In this example, we only had one test, so upon job cancellation there were no tests to report and buildtest terminates after the run stage.

[pbsuser@pbs buildtest]$ python3.7 ./bin/buildtest -c tests/settings/pbs.yml build -b tests/examples/pbs/hold.yml --poll-interval=3 --max-pend-time=5
User:  pbsuser
Hostname:  pbs
Platform:  Linux
Current Time:  2021/09/03 20:45:34
buildtest path: /tmp/GitHubDesktop/buildtest/bin/buildtest
buildtest version:  0.10.2
python path: /bin/python
python version:  3.7.0
Test Directory:  /tmp/GitHubDesktop/buildtest/var/tests
Configuration File:  /tmp/GitHubDesktop/buildtest/tests/settings/pbs.yml
Command: ./bin/buildtest -c tests/settings/pbs.yml build -b tests/examples/pbs/hold.yml --poll-interval=3 --max-pend-time=5

+-------------------------------+
| Stage: Discovering Buildspecs |
+-------------------------------+

+----------------------------------------------------------+
| Discovered Buildspecs                                    |
+==========================================================+
| /tmp/GitHubDesktop/buildtest/tests/examples/pbs/hold.yml |
+----------------------------------------------------------+
Discovered Buildspecs:  1
Excluded Buildspecs:  0
Detected Buildspecs after exclusion:  1

+---------------------------+
| Stage: Parsing Buildspecs |
+---------------------------+

Valid Buildspecs:  1
Invalid Buildspecs:  0
/tmp/GitHubDesktop/buildtest/tests/examples/pbs/hold.yml: VALID


Total builder objects created: 1
builders: [pbs_hold_job/db8014c4]


name          id        description    buildspecs
------------  --------  -------------  --------------------------------------------------------
pbs_hold_job  db8014c4  PBS Hold Job   /tmp/GitHubDesktop/buildtest/tests/examples/pbs/hold.yml

+----------------------+
| Stage: Building Test |
+----------------------+

 name         | id       | type   | executor          | tags   | testpath
--------------+----------+--------+-------------------+--------+-----------------------------------------------------------------------------------------------------------
 pbs_hold_job | db8014c4 | script | generic.pbs.workq |        | /tmp/GitHubDesktop/buildtest/var/tests/generic.pbs.workq/hold/pbs_hold_job/db8014c4/pbs_hold_job_build.sh

+---------------------+
| Stage: Running Test |
+---------------------+

______________________________
Launching test: pbs_hold_job
Test ID: db8014c4-547b-487e-9d2e-f3c743addff9
Executor Name: generic.pbs.workq
Running Test:  /tmp/GitHubDesktop/buildtest/var/tests/generic.pbs.workq/hold/pbs_hold_job/db8014c4/pbs_hold_job_build.sh
[pbs_hold_job] JobID: 395.pbs dispatched to scheduler
Polling Jobs in 3 seconds


Current Jobs
_______________


+--------------+----------+-------------------+---------+----------+---------+
|     name     |    id    |     executor      |  jobID  | jobstate | runtime |
+--------------+----------+-------------------+---------+----------+---------+
| pbs_hold_job | db8014c4 | generic.pbs.workq | 395.pbs |    H     |  3.167  |
+--------------+----------+-------------------+---------+----------+---------+
Polling Jobs in 3 seconds
pbs_hold_job/db8014c4: Cancelling Job: 395.pbs because job exceeds max pend time: 5 sec with current pend time of 6.214

Cancelled Jobs: [pbs_hold_job/db8014c4]
Unable to run any tests

Cray Burst Buffer & Data Warp

For Cray systems, you may want to stage-in or stage-out from your burst buffer; this can be configured using the #DW directive. For a list of Data Warp examples see the section on DataWarp Job Script Commands.

In buildtest we support the properties BB and DW, which are lists of job directives that get inserted as #BB and #DW into the test script. To demonstrate, let's start off with an example where we create a persistent burst buffer named databuffer of size 10GB, striped. We access the burst buffer using the DW directive. Finally, we cd into the databuffer and write a 5GB random file.

Note

BB and DW directives are generated after the scheduler directives. The #BB directives come before #DW. buildtest will automatically add the #BB and #DW directives when using the properties BB and DW.

version: "1.0"
buildspecs:
  create_burst_buffer:
    type: script
    executor: cori.slurm.debug
    batch:
      nodecount: "1"
      timelimit: "5"
      cpucount: "1"
    sbatch: ["-C knl"]
    description: Create a burst buffer
    tags: [jobs]
    BB:
      - create_persistent name=databuffer capacity=10GB access_mode=striped type=scratch
    DW:
      - persistentdw name=databuffer
    run: |
      cd $DW_PERSISTENT_STRIPED_databuffer
      pwd
      dd if=/dev/urandom of=random.txt bs=1G count=5 iflag=fullblock
      ls -lh $DW_PERSISTENT_STRIPED_databuffer/

Next we run this test; inspecting the generated test, we see that the #BB and #DW directives are inserted after the scheduler directives.

#!/bin/bash
#SBATCH --nodes=1
#SBATCH --time=5
#SBATCH --ntasks=1
#SBATCH --job-name=create_burst_buffer
#SBATCH --output=create_burst_buffer.out
#SBATCH --error=create_burst_buffer.err
#BB create_persistent name=databuffer capacity=10GB access_mode=striped type=scratch
#DW persistentdw name=databuffer
cd $DW_PERSISTENT_STRIPED_databuffer
pwd
dd if=/dev/urandom of=random.txt bs=1G count=5 iflag=fullblock
ls -lh $DW_PERSISTENT_STRIPED_databuffer

We can confirm there is an active burst buffer by running the following:

$ scontrol show burst | grep databuffer
    Name=databuffer CreateTime=2020-10-29T13:06:21 Pool=wlm_pool Size=20624MiB State=allocated UserID=siddiq90(92503)