
How to batch execute variations of your Test Case

Written by Ines
Updated over 3 weeks ago

This guide walks you through using the Test Asset Batch feature to run the same test case multiple times with different data — different URLs, browsers, credentials, or any variable your test uses.

Overview

With Test Asset Batch, you upload one or more CSV files as test assets. Each row in the CSV defines a unique variation of your test case. When you trigger a run via the API, every row becomes its own test run — all grouped under a single batch.

This is ideal for:

  • Testing the same flow across multiple browsers or devices

  • Running a login test with different user credentials

  • Validating a page across many URLs

  • Any scenario where you want to parameterize your test case

Step 1: Create Your CSV File

Create a CSV file where each row represents one test variation.

There are two types of columns:

Data Columns

These are columns whose names match the variable placeholders in your test case (without the [] brackets). The values override your test case variables with the highest priority.

For example, if your test step says:

Navigate to [ENVIRONMENT_URL] and log in with [USERNAME]

Your CSV should have columns named ENVIRONMENT_URL and USERNAME:

```csv
ENVIRONMENT_URL,USERNAME
https://staging.example.com,alice
https://production.example.com,bob
```
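Conceptually, each row's values are substituted into the bracketed placeholders of your test steps. The sketch below illustrates that matching rule in Python; the function name `apply_row` is illustrative, not part of the platform:

```python
import re

def apply_row(step: str, row: dict) -> str:
    """Replace [PLACEHOLDER] tokens in a test step with matching CSV column values.
    Columns match placeholders by name, without the [] brackets; unknown
    placeholders are left intact."""
    return re.sub(
        r"\[([A-Za-z0-9_]+)\]",
        lambda m: row.get(m.group(1), m.group(0)),
        step,
    )

result = apply_row(
    "Navigate to [ENVIRONMENT_URL] and log in with [USERNAME]",
    {"ENVIRONMENT_URL": "https://staging.example.com", "USERNAME": "alice"},
)
# result: "Navigate to https://staging.example.com and log in with alice"
```

Each CSV row produces one such substituted variant, and each variant becomes its own test run.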

Native Columns (Browser/Environment Overrides)

Columns prefixed with _ override execution settings per row. Use these to vary the browser, resolution, environment, or other execution parameters across rows.

| Column | Description | Example Values |
| --- | --- | --- |
| _rowname | Display name for the row in results | Chrome Login, Mobile Safari |
| _browserType | Browser engine | Chromium, Chrome, Firefox, Safari |
| _resolution | Viewport size | 1920x1080, 375x812 |
| _deviceType | Device category | Desktop, Mobile, Tablet |
| _deviceName | Specific device | iPhone 14, Pixel 7 |
| _location | Browser geolocation | ParisSelfHosted |
| _locale | Browser language | en-US, fr-FR, de-DE |
| _darkTheme | Dark mode | true, false |
| _javascript | JavaScript enabled | true, false |
| _ignoreHttpsErrors | Skip HTTPS validation | true, false |
| _avoidDetection | Stealth mode | true, false |
| _deviceScaleFactor | Pixel density | 1, 2, 3 |
| _forcedColors | Forced color mode | active, none |
| _username | HTTP auth username | admin |
| _password | HTTP auth password | secret123 |
| _proxy | Proxy server | |
| _highlightElements | Highlight actions | true, false |
| _environmentId | Override environment | UUID of the environment |
| _personaId | Override persona | UUID of the persona |

Complete Example CSV

Here is a CSV that tests a login flow across 3 browsers with different users:

```csv
ENVIRONMENT_URL,USERNAME,PASSWORD,_rowname,_browserType,_resolution
https://app.example.com/login,alice,pass123,Chrome Desktop,Chrome,1920x1080
https://app.example.com/login,bob,pass456,Firefox Desktop,Firefox,1440x900
https://app.example.com/login,charlie,pass789,Safari Mobile,Safari,375x812
```

This produces 3 test runs:

  1. Chrome Desktop — logs in as alice on Chrome at 1920x1080

  2. Firefox Desktop — logs in as bob on Firefox at 1440x900

  3. Safari Mobile — logs in as charlie on Safari at 375x812
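For each row, the data columns and the _-prefixed native columns play different roles: the former override test case variables, the latter override execution settings. A minimal Python sketch of that split (the helper `split_row` is illustrative, not platform code):

```python
import csv
import io

CSV_TEXT = """ENVIRONMENT_URL,USERNAME,PASSWORD,_rowname,_browserType,_resolution
https://app.example.com/login,alice,pass123,Chrome Desktop,Chrome,1920x1080
https://app.example.com/login,bob,pass456,Firefox Desktop,Firefox,1440x900
"""

def split_row(row: dict):
    """Separate data columns (variable overrides) from native _-prefixed
    columns (execution-setting overrides)."""
    data = {k: v for k, v in row.items() if not k.startswith("_")}
    native = {k: v for k, v in row.items() if k.startswith("_")}
    return data, native

rows = list(csv.DictReader(io.StringIO(CSV_TEXT)))
data, native = split_row(rows[0])
# data   -> variable overrides: ENVIRONMENT_URL, USERNAME, PASSWORD
# native -> execution overrides: _rowname, _browserType, _resolution
```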

Step 2: Upload the CSV as a Test Asset

Upload your CSV file to your project in the Thunders web app:

  1. Open your Project

  2. Go to the Test Assets tab

  3. Click Upload and select your CSV file

  4. Note the generated reference name displayed in the assets list (e.g., FILE_LOGINS_CSV)

The reference name follows the pattern: FILE_<FILENAME>_<EXTENSION>

  • The filename is uppercased

  • Hyphens, spaces, dots, and parentheses become underscores

  • Multiple consecutive underscores are collapsed to one

Examples:

  • logins.csv becomes FILE_LOGINS_CSV

  • test-data.csv becomes FILE_TEST_DATA_CSV

  • my users (v2).csv becomes FILE_MY_USERS_V2_CSV
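The naming rules above can be sketched as a small normalization function. This is a reconstruction of the documented rules, not the platform's actual code:

```python
import re

def asset_reference(filename: str) -> str:
    """Derive the test asset reference name from an uploaded file name,
    following the documented FILE_<FILENAME>_<EXTENSION> pattern."""
    name = filename.upper()
    # Hyphens, spaces, dots, and parentheses become underscores
    name = re.sub(r"[-. ()]", "_", name)
    # Collapse consecutive underscores and trim any stray edge underscores
    name = re.sub(r"_+", "_", name).strip("_")
    return f"FILE_{name}"

print(asset_reference("my users (v2).csv"))  # FILE_MY_USERS_V2_CSV
```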

Step 3: Run via the API with Test Asset References

Add the testAssetReferences field to your API request body when calling the /api/test-cases/run or /api/test-sets/run endpoint:

```json
{
  "ProjectId": "c8e34ec4-2464-43c7-8db8-1b3a47a22337",
  "TestCaseIds": [
    "a836fadc-377a-46fe-96b0-21f37c626bf9"
  ],
  "EnvironmentId": "eca24252-e566-40a8-b2b0-707b7efa85d8",
  "PersonaId": "70d0ac52-fe94-4d89-aba9-8ef28dd8c04c",
  "BrowserSettings": {
    "Location": "ParisSelfHosted",
    "browserType": "Chromium",
    "DeviceType": "Desktop",
    "Resolution": "1440x900"
  },
  "TestAssetReferences": ["FILE_LOGINS_CSV"]
}
```

The BrowserSettings in the request body serve as defaults. Any native column in the CSV (like _browserType) overrides the corresponding default for that specific row.

References can also be wrapped in brackets: [FILE_LOGINS_CSV].

Headers

| Header | Value |
| --- | --- |
| Authorization | Bearer YOUR_THUNDER_TEST_TOKEN |
| X-MS-API-ROLE | M2M |
| Content-Type | application/json |
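Putting the body and headers together, a request could be assembled like this using only the Python standard library (the token value is a placeholder; the IDs are the sample values from this guide):

```python
import json
import urllib.request

API_URL = "https://api.thunders.ai/api/test-cases/run"

def build_request(token: str) -> urllib.request.Request:
    """Build the batch-run request with the documented headers and body."""
    payload = {
        "ProjectId": "c8e34ec4-2464-43c7-8db8-1b3a47a22337",
        "TestCaseIds": ["a836fadc-377a-46fe-96b0-21f37c626bf9"],
        "EnvironmentId": "eca24252-e566-40a8-b2b0-707b7efa85d8",
        "PersonaId": "70d0ac52-fe94-4d89-aba9-8ef28dd8c04c",
        "BrowserSettings": {
            "Location": "ParisSelfHosted",
            "browserType": "Chromium",
            "DeviceType": "Desktop",
            "Resolution": "1440x900",
        },
        "TestAssetReferences": ["FILE_LOGINS_CSV"],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "X-MS-API-ROLE": "M2M",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("YOUR_THUNDER_TEST_TOKEN")
# urllib.request.urlopen(req)  # sends the run; one test run is created per CSV row
```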

Step 4: Combine Multiple CSV Files (Optional)

You can reference multiple CSV files in a single request. Their rows are concatenated into one batch with continuous indexing.

"TestAssetReferences": ["FILE_DESKTOP_BROWSERS_CSV", "FILE_MOBILE_DEVICES_CSV"]

If desktop-browsers.csv has 3 rows and mobile-devices.csv has 2 rows, the resulting batch will contain 5 test runs indexed 1 through 5.

This is useful for organizing your test data into logical groups (e.g., one CSV per device category, one per region, one per user role).
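The concatenation behavior can be sketched as follows. The `_index` key is purely illustrative (the guide describes continuous indexing but does not name a field for it):

```python
import csv
import io

def batch_rows(*csv_texts):
    """Concatenate rows from several CSV files into one batch,
    numbering them continuously across files."""
    batch = []
    for text in csv_texts:
        for row in csv.DictReader(io.StringIO(text)):
            batch.append({"_index": len(batch) + 1, **row})
    return batch

desktop = "_browserType\nChromium\nFirefox\nSafari\n"   # 3 rows
mobile = "_deviceName\niPhone 14\nPixel 7\n"            # 2 rows
runs = batch_rows(desktop, mobile)
# runs: 5 entries, indexed 1 through 5
```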

Step 5: Review Results

After execution, each batch row appears as a separate test run in the Test Runs tab. You can identify batch runs by:

  • Batch Row Name: The _rowname column value (or a default like 1/3, 2/3, 3/3 if no _rowname was provided)

  • Batch Data Overrides: The variable values that were injected for that specific row

All runs from the same batch share a batchId, making it easy to filter and group results.
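If you fetch run results programmatically, the shared batchId makes grouping straightforward. A minimal sketch, assuming each run is a dict carrying a `batchId` key as described above:

```python
from collections import defaultdict

def group_by_batch(runs):
    """Group test runs by their shared batchId for filtering and reporting."""
    groups = defaultdict(list)
    for run in runs:
        groups[run["batchId"]].append(run)
    return dict(groups)

runs = [
    {"batchId": "batch-1", "rowName": "1/3"},
    {"batchId": "batch-1", "rowName": "2/3"},
    {"batchId": "batch-2", "rowName": "Chrome Login"},
]
grouped = group_by_batch(runs)
# grouped: {"batch-1": [two runs], "batch-2": [one run]}
```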

GitHub Actions Example

Here is a complete GitHub Actions workflow that runs a batch of test variations:

```yaml
name: Run Thunders Batch Tests

on:
  workflow_dispatch:

jobs:
  run-batch-tests:
    runs-on: ubuntu-latest
    steps:
      - name: Run Batch Test Variations
        uses: fjogeleit/http-request-action@v1
        with:
          url: 'https://api.thunders.ai/api/test-cases/run'
          method: 'POST'
          customHeaders: >
            {
              "Authorization": "Bearer ${{ secrets.THUNDER_TEST_TOKEN }}",
              "X-MS-API-ROLE": "M2M",
              "Content-Type": "application/json"
            }
          data: >
            {
              "ProjectId": "c8e34ec4-2464-43c7-8db8-1b3a47a22337",
              "TestCaseIds": ["a836fadc-377a-46fe-96b0-21f37c626bf9"],
              "EnvironmentId": "eca24252-e566-40a8-b2b0-707b7efa85d8",
              "PersonaId": "70d0ac52-fe94-4d89-aba9-8ef28dd8c04c",
              "BrowserSettings": {
                "Location": "ParisSelfHosted",
                "browserType": "Chromium",
                "DeviceType": "Desktop",
                "Resolution": "1440x900"
              },
              "TestAssetReferences": ["FILE_LOGINS_CSV", "FILE_MOBILE_CSV"]
            }
```

Quick Reference

| Concept | Detail |
| --- | --- |
| CSV upload | Test Assets tab in your project |
| Reference format | FILE_<FILENAME>_<EXTENSION> (uppercased, special chars become _) |
| API parameter | testAssetReferences (array of strings) |
| Native columns | Prefix with _ to override browser/environment settings |
| Data columns | No prefix; override test case variable placeholders |
| Multiple CSVs | Pass multiple references; rows merge sequentially |
| Batch grouping | All runs share a batchId |
| Row naming | Use _rowname column or defaults to index/total |
