Performance Testing

Mockarty includes a built-in performance testing engine that lets you write JavaScript-based load tests against any protocol Mockarty supports: HTTP, gRPC, SOAP, GraphQL, WebSocket, SSE, Kafka, RabbitMQ, MCP, and more. Scripts are compatible with the k6 ESM format, so you can reuse existing k6 scripts or write new ones in a familiar syntax.


Why Performance Test Your APIs?

Performance issues rarely surface during development. A single endpoint that responds in 5ms under no load can degrade to 2 seconds when 500 users hit it simultaneously. Performance testing helps you:

  • Identify bottlenecks before they reach production. Find the endpoint, database query, or external dependency that breaks under load.
  • Establish baselines. Know your system’s throughput (requests/second) and latency percentiles (p50, p95, p99) so you can detect regressions.
  • Validate SLAs. Define pass/fail thresholds like “p95 latency must be under 200ms” and automatically fail builds that violate them.
  • Plan capacity. Determine how many concurrent users your infrastructure can handle before you need to scale.
  • Test resilience. Simulate traffic spikes, slow consumers, and error storms to verify graceful degradation.

Script Engine Overview

The performance engine executes JavaScript scripts using the goja runtime. Scripts follow a CommonJS module pattern internally, but Mockarty auto-converts k6-style ESM imports (import http from 'k6/http') into the native format.

Key Concepts

  • Virtual User (VU): A simulated user executing your script in a loop. Each VU runs its own isolated JavaScript runtime.
  • Iteration: One complete execution of your default function. A VU runs iterations continuously until the test ends.
  • Duration: How long the test runs. Defaults to 30 seconds if not specified.
  • Stages: Time-based phases that ramp VUs (or RPS) up and down, enabling realistic traffic patterns.
  • RPS Mode: Instead of fixing the VU count, you set a target requests-per-second and the engine auto-adjusts VUs to maintain it.
  • Thresholds: Pass/fail criteria evaluated against final metrics. A breached threshold marks the test as failed.
  • Abort Criteria: Conditions that trigger early test termination (e.g., error rate exceeding 50% for 30 seconds).

Script Format Auto-Detection

Mockarty automatically detects three script formats and converts them:

  1. Mockarty native: require('mockarty/http') with module.exports.default
  2. k6 ESM: import http from 'k6/http' with export default function
  3. Legacy mk/pm: mk.http.get(...) and mk.test(...) patterns

You can write in whichever format you prefer. The engine converts everything to its internal CommonJS format before execution.
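Auto-detection can be pictured as a simple scan of the script source. The sketch below is illustrative only; the patterns it matches are assumptions based on the three formats listed above, not the engine's actual implementation:

```javascript
// Illustrative sketch of script format detection (the engine's real logic is
// internal; these patterns are assumptions based on the documented formats).
function detectFormat(src) {
  if (/^\s*import\s.+from\s+['"]k6(\/|['"])/m.test(src)) return 'k6-esm';
  if (/\bmk\.(http|test)\b/.test(src)) return 'legacy-mk';
  return 'mockarty-native'; // require('mockarty/...') with module.exports
}
```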


Writing Your First Performance Test

Here is a minimal HTTP load test:

import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 10,
  duration: '30s',
};

export default function () {
  const res = http.get('https://api.example.com/users');

  check(res, {
    'status is 200': (r) => r.status === 200,
    'response time < 500ms': (r) => r.timings.duration < 500,
  });

  sleep(1);
}

Or equivalently in Mockarty native format:

var http = require('mockarty/http');

module.exports.options = {
  vus: 10,
  duration: '30s',
};

module.exports.default = function () {
  var res = http.get('https://api.example.com/users');

  check(res, {
    'status is 200': function (r) { return r.status === 200; },
    'response time < 500ms': function (r) { return r.timings.duration < 500; },
  });

  sleep(1);
};

Both scripts produce identical results. The engine converts k6 imports automatically.

Step-by-Step Breakdown

  1. Import modules — Load the HTTP module (and any others you need).
  2. Define options — Set VU count, duration, stages, thresholds.
  3. Write the default function — This is your iteration. Each VU calls it in a loop.
  4. Add checks — Validate response status, body, timing.
  5. Add sleep — Simulate user think time between requests (in seconds).

Test Options

Options control how the test runs. Define them as export const options (k6 format) or module.exports.options (native format).

  • vus (int, default 1): Number of concurrent virtual users.
  • duration (string, default "30s"): Test duration (e.g., "30s", "5m", "1h"). Go duration format.
  • iterations (int, default 0 = unlimited): Total iterations across all VUs. The test stops when reached.
  • rps (int, default 0 = VU mode): Target requests per second. The engine auto-adjusts the VU count to maintain this rate.
  • maxVUs (int, default 200): Safety cap on the VU count in RPS mode. Prevents runaway scaling.
  • stages ([]Stage, default []): Time-based VU or RPS ramp stages. Overrides vus and duration.
  • thresholds (map, default {}): Pass/fail criteria evaluated against final metrics.
  • abortCriteria ([]Criterion, default []): Conditions that trigger early test termination.

Stage Definition

Each stage has a duration and a target:

export const options = {
  stages: [
    { duration: '30s', target: 20 },   // Ramp up to 20 VUs over 30s
    { duration: '1m', target: 20 },    // Stay at 20 VUs for 1 minute
    { duration: '30s', target: 0 },    // Ramp down to 0 VUs over 30s
  ],
};

The engine linearly interpolates VU count between stages.
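The interpolation can be modeled like this (an illustrative sketch, not the engine's code; durations are in seconds and the first stage ramps from 0 VUs):

```javascript
// Illustrative model of stage interpolation (not the engine's actual code).
// Durations are in seconds; the first stage ramps up from 0 VUs.
function vusAt(stages, t) {
  var prevTarget = 0;
  var elapsed = 0;
  for (var i = 0; i < stages.length; i++) {
    var end = elapsed + stages[i].duration;
    if (t <= end) {
      var progress = (t - elapsed) / stages[i].duration; // 0..1 within stage
      return Math.round(prevTarget + (stages[i].target - prevTarget) * progress);
    }
    prevTarget = stages[i].target;
    elapsed = end;
  }
  return prevTarget; // after the last stage, hold its target
}

// The three stages from the example above, expressed in seconds:
var stages = [
  { duration: 30, target: 20 }, // ramp 0 -> 20
  { duration: 60, target: 20 }, // hold 20
  { duration: 30, target: 0 },  // ramp 20 -> 0
];
```

With these stages, the model yields 10 VUs at t=15s (halfway up the ramp) and 10 VUs again at t=105s on the way down.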

RPS Stages

For RPS-based control, use targetRPS instead of target:

export const options = {
  maxVUs: 100,
  stages: [
    { duration: '30s', targetRPS: 50 },   // Ramp to 50 req/s
    { duration: '1m', targetRPS: 200 },   // Ramp to 200 req/s
    { duration: '30s', targetRPS: 0 },    // Ramp down
  ],
};

The engine uses a proportional controller with damping to adjust VU count, avoiding oscillation. VU changes are limited to ±20% per second.
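A proportional controller of this kind can be sketched as follows. This is hypothetical pseudologic for illustration; the engine's actual controller and its constants are internal:

```javascript
// Hypothetical sketch of RPS-mode VU adjustment (illustrative only). Each
// control tick: estimate per-VU throughput, compute the VU count needed for
// the target rate, and clamp the change to +/-20% of the current count.
function nextVUs(currentVUs, observedRPS, targetRPS, maxVUs) {
  if (observedRPS <= 0) return Math.min(currentVUs + 1, maxVUs); // bootstrap
  var perVU = observedRPS / currentVUs;        // req/s each VU delivers
  var desired = targetRPS / perVU;             // VUs needed at that rate
  var maxStep = Math.max(1, Math.ceil(currentVUs * 0.2)); // +/-20% damping
  var delta = Math.max(-maxStep, Math.min(maxStep, desired - currentVUs));
  return Math.max(1, Math.min(maxVUs, Math.round(currentVUs + delta)));
}
```

The damping term is what prevents oscillation: even if the error suggests doubling the VU count, the controller only moves 20% of the way per tick.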

Execution Modes

The engine runs in one of two modes:

  • Performance mode — Multiple VUs, timeseries collection, charts. Active when vus > 1, duration > 0, stages are defined, or iterations > 0.
  • Functional mode — Single VU, single iteration, detailed check results. Active when no performance options are set. Useful for verifying script correctness before scaling up.
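The mode decision can be summarized as a small predicate (an illustrative sketch using the option names from the Test Options section, not the engine's code):

```javascript
// Illustrative sketch of the mode decision (not the engine's code). An
// explicitly set duration counts as a performance option here.
function executionMode(options) {
  var o = options || {};
  var hasStages = Array.isArray(o.stages) && o.stages.length > 0;
  var perf = (o.vus || 0) > 1 || !!o.duration || hasStages || (o.iterations || 0) > 0;
  return perf ? 'performance' : 'functional';
}
```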

Available Modules

Mockarty provides 14 built-in modules covering all supported protocols and common utilities.

mockarty/http

HTTP client for REST API testing.

var http = require('mockarty/http');

// GET request
var res = http.get('https://api.example.com/users');

// POST with JSON body and headers
var res = http.post('https://api.example.com/users',
  JSON.stringify({ name: 'Alice', email: 'alice@example.com' }),
  { headers: { 'Content-Type': 'application/json' } }
);

// Other methods
http.put(url, body, params);
http.patch(url, body, params);
http.del(url, params);
http.head(url, params);
http.options(url, params);
http.request('CUSTOM_METHOD', url, body, params);

Response object:

  • status (int): HTTP status code.
  • body (string): Response body as a string.
  • json() (function): Parse the body as JSON.
  • headers (object): Response headers.
  • timings.duration (float): Request duration in milliseconds.
  • error (string): Error message (empty on success).

Automatic metrics: http_reqs (counter), http_req_duration (trend), http_req_failed (rate), data_sent, data_received.

Route normalization: URLs are automatically normalized for per-route reporting. UUID segments become {uuid}, numeric segments become {id}. Example: GET /users/42/orders becomes GET /users/{id}/orders.
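A rough model of this normalization (hypothetical; the engine's exact rules may differ):

```javascript
// Hypothetical sketch of route normalization: UUID path segments collapse to
// {uuid} and purely numeric segments to {id}, so per-route stats aggregate
// across entity IDs.
var UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function normalizeRoute(path) {
  return path
    .split('/')
    .map(function (seg) {
      if (UUID_RE.test(seg)) return '{uuid}';
      if (/^\d+$/.test(seg)) return '{id}';
      return seg;
    })
    .join('/');
}
```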


mockarty/grpc

gRPC client with server reflection support.

var grpc = require('mockarty/grpc');

// Connect with reflection (discovers services automatically)
var client = grpc.connect('localhost:4770', {
  reflect: true,
  plaintext: true,
});

// Invoke a unary RPC
var res = client.invoke('mypackage.UserService/GetUser', {
  user_id: '123',
}, {
  headers: { 'authorization': 'Bearer token123' },
});

check(res, {
  'gRPC status OK': function (r) { return r.status === 'OK'; },
  'has user name': function (r) { return r.message.name !== ''; },
});

// Timings available
// res.timings.duration — request duration in ms

client.close();

Connect options:

  • reflect (bool): Use server reflection to discover services. Required.
  • plaintext (bool): Disable TLS (for local testing).
  • tls (bool): Enable TLS with system certificates.

Automatic metrics: grpc_reqs (counter), grpc_req_duration (trend), grpc_req_failed (rate).


mockarty/soap

SOAP/XML web service client.

var soap = require('mockarty/soap');

var res = soap.call('http://localhost:8080/soap', {
  action: 'GetWeather',
  body: '<GetWeatherRequest><City>London</City></GetWeatherRequest>',
  headers: {
    'X-Custom': 'value',
  },
});

check(res, {
  'SOAP status 200': function (r) { return r.status === 200; },
  'has body': function (r) { return r.body.length > 0; },
});

If the body is not already wrapped in a SOAP envelope, the module wraps it automatically.
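The auto-wrapping can be pictured like this (a hypothetical sketch; the actual envelope the module emits may differ):

```javascript
// Hypothetical sketch of SOAP envelope auto-wrapping (illustrative only).
function wrapEnvelope(body) {
  if (body.indexOf(':Envelope') !== -1 || body.indexOf('<Envelope') !== -1) {
    return body; // already wrapped, pass through unchanged
  }
  return '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">' +
    '<soap:Body>' + body + '</soap:Body></soap:Envelope>';
}
```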

Automatic metrics: soap_reqs (counter), soap_req_duration (trend), soap_req_failed (rate).


mockarty/ws

WebSocket client for bidirectional communication testing.

var ws = require('mockarty/ws');

var conn = ws.connect('ws://localhost:8080/ws', {
  headers: { 'Authorization': 'Bearer token' },
});

// Send a message
conn.send(JSON.stringify({ type: 'subscribe', channel: 'orders' }));

// Send binary data
conn.sendBinary('raw binary data');

// Receive a message (optional timeout in seconds, default 30)
var msg = conn.receive(5);

check(msg, {
  'received message': function (m) { return m.length > 0; },
});

conn.close();

Automatic metrics: ws_sessions (counter), ws_messages_sent (counter), ws_messages_received (counter), ws_connecting (trend), ws_send_duration (trend), ws_recv_duration (trend), ws_connect_failed (rate).


mockarty/sse

Server-Sent Events client.

var sse = require('mockarty/sse');

// Connect and collect events
var events = sse.connect('http://localhost:8080/events', {
  timeout: '10s',   // Connection timeout
  limit: 5,         // Max events to collect (0 = unlimited until timeout)
  headers: {
    'Authorization': 'Bearer token',
  },
});

check(events, {
  'received events': function (e) { return e.length > 0; },
  'first event has data': function (e) { return e[0].data !== ''; },
});

// Each event object: { id, event, data, retry }

Automatic metrics: sse_events_received (counter), sse_connecting (trend), sse_connect_failed (rate).


mockarty/kafka

Apache Kafka producer and consumer.

var kafka = require('mockarty/kafka');

// Create a producer
var producer = kafka.producer({
  brokers: ['localhost:9092'],
});

// Produce messages
producer.produce({
  topic: 'orders',
  messages: [
    { key: 'order-1', value: JSON.stringify({ amount: 99.99 }) },
    { key: 'order-2', value: JSON.stringify({ amount: 149.50 }) },
  ],
});

// Create a consumer
var consumer = kafka.consumer({
  brokers: ['localhost:9092'],
  groupId: 'perf-test-group',
});

// Consume messages
var messages = consumer.consume({
  topic: 'orders',
  limit: 10,
  timeout: '5s',
});

check(messages, {
  'consumed messages': function (m) { return m.length > 0; },
});

// Each message: { key, value, topic, partition, offset }

Automatic metrics: kafka_messages_produced (counter), kafka_messages_consumed (counter), kafka_produce_duration (trend), kafka_consume_duration (trend), kafka_produce_failed (rate).


mockarty/rabbitmq

RabbitMQ publisher and consumer.

var rabbitmq = require('mockarty/rabbitmq');

var conn = rabbitmq.connect('amqp://guest:guest@localhost:5672/');

// Publish a message
conn.publish({
  exchange: '',
  routingKey: 'task_queue',
  body: JSON.stringify({ task: 'process_order', orderId: '12345' }),
  contentType: 'application/json',
});

// Consume messages
var messages = conn.consume({
  queue: 'task_queue',
  limit: 5,
  timeout: '10s',
});

check(messages, {
  'received messages': function (m) { return m.length > 0; },
});

// Each message: { body, routingKey, exchange, contentType }

conn.close();

Automatic metrics: rmq_messages_published (counter), rmq_messages_consumed (counter), rmq_publish_duration (trend), rmq_consume_duration (trend), rmq_publish_failed (rate).


mockarty/mcp

MCP (Model Context Protocol) client for testing MCP servers.

var mcp = require('mockarty/mcp');

var client = mcp.connect('http://localhost:8910/sse');

// List available tools
var tools = client.listTools();

// Call a tool
var result = client.callTool('get_weather', {
  city: 'London',
});

check(result, {
  'no error': function (r) { return !r.isError; },
  'has content': function (r) { return r.content && r.content.length > 0; },
});

// result.content = [{ type: "text", text: "..." }, ...]
// result.timings.duration — duration in ms

client.close();

Automatic metrics: mcp_reqs (counter), mcp_req_duration (trend), mcp_req_failed (rate).


mockarty/redis

Redis client with connection pooling shared across VUs.

var redis = require('mockarty/redis');

// Open connection (uses MOCKARTY_REDIS_ADDR env var by default)
var client = redis.open('redis://localhost:6379/0');

// String commands
client.set('user:1:name', 'Alice', 3600);   // key, value, ttl_seconds
var result = client.get('user:1:name');       // { value: "Alice", error: null }
client.incr('counter');
client.decr('counter');

// Hash commands
client.hset('user:1', 'email', 'alice@example.com');
var email = client.hget('user:1', 'email');
var all = client.hgetall('user:1');

// List commands
client.lpush('queue', 'item1', 'item2');
var item = client.rpop('queue');
var items = client.lrange('queue', 0, -1);

// Set commands
client.sadd('tags', 'golang', 'performance');
var members = client.smembers('tags');
var exists = client.sismember('tags', 'golang');

// Sorted set commands
client.zadd('leaderboard', 100, 'player1');
var score = client.zscore('leaderboard', 'player1');
var top = client.zrange('leaderboard', 0, 9);

// Key management
client.del('key1', 'key2');
client.exists('key1');
client.expire('key1', 3600);
var ttl = client.ttl('key1');

All commands return { value, error } objects. Connection pools are shared across VUs and cleaned up automatically when the test ends. Keys created via set, hset, lpush, rpush, sadd, and zadd get a default 12-hour TTL to prevent leftover test data.

Automatic metrics: redis_commands (counter), redis_cmd_duration (trend), redis_cmd_failed (rate).


mockarty/sql

SQL database client (PostgreSQL) with connection pooling.

var sql = require('mockarty/sql');

var db = sql.open('postgres', 'postgres://user:pass@localhost:5432/testdb?sslmode=disable');

// Query (returns array of row objects)
var users = db.query('SELECT id, name, email FROM users WHERE active = $1 LIMIT $2', [true, 100]);

check(users, {
  'found users': function (u) { return u.length > 0; },
  'first user has name': function (u) { return u[0].name !== ''; },
});

// Execute (returns { rowsAffected: N })
var result = db.exec('UPDATE users SET last_login = NOW() WHERE id = $1', [users[0].id]);

// Parameterized queries — pass params as array or variadic args
db.query('SELECT * FROM orders WHERE user_id = $1 AND status = $2', userId, 'pending');

Connection pools are shared across VUs. The close() method is a no-op at the VU level; pools are cleaned up when the engine shuts down.


mockarty/encoding

Encoding/decoding utilities.

var encoding = require('mockarty/encoding');

// JSON
var obj = encoding.jsonParse('{"name":"Alice"}');   // → { name: "Alice" }
var str = encoding.jsonStringify({ name: 'Alice' }); // → '{"name":"Alice"}'

// Base64
var encoded = encoding.base64Encode('Hello, World!');   // → "SGVsbG8sIFdvcmxkIQ=="
var decoded = encoding.base64Decode('SGVsbG8sIFdvcmxkIQ=='); // → "Hello, World!"

mockarty/faker

Fake data generation for realistic test payloads.

var faker = require('mockarty/faker');

var user = {
  id: faker.uuid(),
  firstName: faker.firstName(),
  lastName: faker.lastName(),
  email: faker.email(),
  username: faker.username(),
  password: faker.password(),
  phone: faker.phoneNumber(),
  ip: faker.ipv4(),
  bio: faker.sentence(),
  registered: faker.rfc3339(),
  creditCard: faker.ccNumber(),
  jwt: faker.jwt(),
};

Available functions:

  • Person: firstName, firstNameMale, firstNameFemale, lastName, name, titleMale, titleFemale
  • Internet: email, username, password, url, domainName, ipv4, ipv6, macAddress
  • UUID: uuid, uuidDigit
  • Address: latitude, longitude
  • Datetime: date, timeString, monthName, yearString, dayOfWeek, dayOfMonth, timestamp, unixTime, rfc3339, century, timezone, timePeriod
  • Text: word, sentence, paragraph
  • Payment: ccType, ccNumber, currency, amountWithCurrency
  • Phone: phoneNumber, tollFreePhoneNumber, e164PhoneNumber
  • Types: bool, positiveInt, negativeInt, jwt

All faker functions are thread-safe and use their own random state.


mockarty/data

Shared data arrays for data-driven tests.

var data = require('mockarty/data');

// SharedArray runs the factory ONCE, then shares the result across all VUs
var users = data.SharedArray('users', function () {
  return [
    { username: 'alice', password: 'pass123' },
    { username: 'bob', password: 'pass456' },
    { username: 'charlie', password: 'pass789' },
  ];
});

module.exports.default = function () {
  // Access by index (VU-safe, read-only)
  var idx = Math.floor(Math.random() * users.length);
  var user = users.get(idx);

  var http = require('mockarty/http');
  http.post('https://api.example.com/login',
    JSON.stringify(user),
    { headers: { 'Content-Type': 'application/json' } }
  );
};

The factory function executes exactly once (the first VU to request it), and subsequent VUs receive a reference to the same data without copying. This is ideal for loading large datasets (CSV files, JSON arrays) without duplicating memory across VUs.


mockarty/core

Core utility functions. These are also available as globals (check, sleep, group, fail), so importing the module is optional.

var core = require('mockarty/core');

// Check — validate a value against named assertions
core.check(response, {
  'status is 200': function (r) { return r.status === 200; },
});

// Sleep — pause execution for N seconds
core.sleep(1.5);

// Group — organize checks and requests into named groups
core.group('user login flow', function () {
  // requests and checks inside the group
});

// Fail — immediately abort this iteration
core.fail('Unexpected response');

Global Functions

The following functions are always available without importing any module:

  • check(value, checks): Run named assertions. Returns true if all pass.
  • sleep(seconds): Pause VU execution.
  • group(name, fn): Group related operations for reporting.
  • fail(reason): Abort the current iteration with an error.

Custom Metrics

The metric constructors (Counter, Trend, Rate, and Gauge) are available as globals, so you can create custom metrics without importing anything:

// Counter — accumulates a total
var myCounter = new Counter('my_custom_counter');
myCounter.add(1);

// Trend — tracks a distribution (min, max, avg, percentiles)
var myTrend = new Trend('my_custom_trend');
myTrend.add(42.5);

// Rate — tracks pass/fail ratio
var myRate = new Rate('my_custom_rate');
myRate.add(true);   // passed
myRate.add(false);  // failed

// Gauge — tracks a current value
var myGauge = new Gauge('my_custom_gauge');
myGauge.add(7);

Custom metrics appear in thresholds and the final report alongside built-in metrics.


Thresholds and SLAs

Thresholds define pass/fail criteria for your test. If any threshold is breached, the test result is marked as failed.

export const options = {
  vus: 50,
  duration: '2m',
  thresholds: {
    'http_req_duration': [
      'avg < 200',       // Average response time under 200ms
      'p(95) < 500',     // 95th percentile under 500ms
      'p(99) < 1000',    // 99th percentile under 1s
      'max < 3000',      // No request over 3s
    ],
    'http_req_failed': [
      'rate < 0.01',     // Less than 1% errors
    ],
    'checks': [
      'rate > 0.95',     // 95% of checks pass
    ],
    'iterations': [
      'count > 1000',    // At least 1000 iterations completed
    ],
    // Custom metrics work too
    'my_custom_trend': [
      'avg < 100',
      'p(90) < 200',
    ],
  },
};

Threshold Expression Syntax

<stat> <operator> <value>

Stats: avg, min, max, med, count, rate, p(N) (where N is the percentile, e.g., p(50), p(95), p(99))

Operators: <, <=, >, >=, ==, !=
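To make the grammar concrete, here is an illustrative evaluator for a single expression (not the engine's implementation):

```javascript
// Illustrative evaluator for the threshold grammar above (not the engine's
// code). `stats` maps stat names like 'avg' or 'p(95)' to measured values.
function evalThreshold(expr, stats) {
  var m = expr.match(/^\s*(p\(\d+\)|\w+)\s*(<=|>=|==|!=|<|>)\s*([0-9.]+)\s*$/);
  if (!m) throw new Error('bad threshold expression: ' + expr);
  var actual = stats[m[1]];   // e.g. stats['p(95)'] or stats['avg']
  var limit = parseFloat(m[3]);
  switch (m[2]) {
    case '<':  return actual < limit;
    case '<=': return actual <= limit;
    case '>':  return actual > limit;
    case '>=': return actual >= limit;
    case '==': return actual === limit;
    case '!=': return actual !== limit;
  }
}
```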

Threshold Results

After the test completes, each threshold is evaluated and reported:

All thresholds passed:
  PASS [http_req_duration] avg < 200: actual=87.34
  PASS [http_req_duration] p(95) < 500: actual=245.12
  PASS [http_req_failed] rate < 0.01: actual=0.00

Or with failures:

Threshold violations:
  PASS [http_req_duration] avg < 200: actual=87.34
  FAIL [http_req_duration] p(95) < 500: actual=612.89
  PASS [http_req_failed] rate < 0.01: actual=0.00

Abort Criteria

Abort criteria allow the engine to stop a test early when conditions indicate something is severely wrong, preventing wasted time and resources.

export const options = {
  vus: 100,
  duration: '10m',
  abortCriteria: [
    {
      name: 'High error rate',
      metric: 'http_req_failed',
      stat: 'rate',
      condition: '>',
      value: 0.5,
      duration: '30s',      // Must sustain for 30s before aborting
      enabled: true,
    },
    {
      name: 'Latency spike',
      metric: 'http_req_duration',
      stat: 'p95',
      condition: '>',
      value: 5000,           // p95 > 5 seconds
      duration: '1m',
      enabled: true,
    },
  ],
};

The duration field requires the condition to be sustained for the specified period before triggering. This prevents abort on transient spikes. If duration is omitted or "0s", the abort triggers immediately.
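The sustain logic can be modeled as a small state machine (an illustrative sketch, not the engine's code): the breach timestamp is remembered on the first breached sample, cleared whenever the condition recovers, and the abort fires only once the breach has been continuous for the configured duration.

```javascript
// Illustrative sketch of the sustained-condition check (not the engine's code).
// `tick` is called on each evaluation with whether the criterion is currently
// breached and the current time in seconds; it returns true when the breach
// has been sustained long enough to abort.
function makeAbortTracker(durationSec) {
  var breachedSince = null;
  return function tick(breached, nowSec) {
    if (!breached) { breachedSince = null; return false; } // recovered: reset
    if (breachedSince === null) breachedSince = nowSec;     // breach started
    return nowSec - breachedSince >= durationSec;
  };
}
```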

Supported metrics for abort: http_req_failed (rate), http_req_duration (avg, p90, p95, p99, min, max), http_reqs (count, rate), checks (rate).

When an abort triggers, the test report includes stoppedReason: "auto_stop: High error rate".


APDEX Score

Every performance test automatically calculates an APDEX (Application Performance Index) score. APDEX is an industry-standard metric that translates response times into a 0-to-1 user satisfaction score.

How APDEX Is Calculated

APDEX uses a satisfaction threshold T (default: 500ms) to classify every request:

  • Satisfied (response_time <= T): Users are satisfied with the performance.
  • Tolerating (T < response_time <= 4T): Users notice delays but tolerate them.
  • Frustrated (response_time > 4T, or any error): Users are frustrated; every errored request counts as frustrated.

Formula:

APDEX = (Satisfied + Tolerating * 0.5) / Total
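As a quick sanity check, the formula can be evaluated directly (the counts below are made-up illustration values, not from a real report):

```javascript
// Worked example of the APDEX formula (counts are illustrative).
function apdex(satisfied, tolerating, frustrated) {
  var total = satisfied + tolerating + frustrated;
  return (satisfied + tolerating * 0.5) / total;
}

// 900 satisfied, 80 tolerating, 20 frustrated out of 1000 requests:
// (900 + 80 * 0.5) / 1000 = 0.94
```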

Rating Scale

  • Score >= 0.94: Excellent
  • Score >= 0.85: Good
  • Score >= 0.70: Fair
  • Score >= 0.50: Poor
  • Score < 0.50: Unacceptable

APDEX in Reports

APDEX is computed both globally and per-route:

{
  "summary": {
    "apdex": {
      "score": 0.935,
      "rating": "Good",
      "satisfied": 4500,
      "tolerating": 350,
      "frustrated": 150,
      "total": 5000,
      "thresholdT": 500
    },
    "per_route_stats": [
      {
        "route": "GET /users/{id}",
        "count": 2500,
        "apdex": {
          "score": 0.98,
          "rating": "Excellent"
        }
      },
      {
        "route": "POST /orders",
        "count": 2500,
        "apdex": {
          "score": 0.86,
          "rating": "Good"
        }
      }
    ]
  }
}

Per-route APDEX helps you identify which specific endpoints are dragging down the overall score.


Distributed Execution

For larger tests, Mockarty supports distributed execution via Runner Agents. Instead of running the test on the admin node, the coordinator dispatches performance tasks to remote runner agents.

How It Works

  1. Admin node receives the test request and creates a task of type performance.
  2. Coordinator (gRPC service on port 5773) matches the task to an available runner agent with the performance capability.
  3. Runner agent picks up the task, executes the script using the perfengine, and streams progress updates back.
  4. Admin node collects the final report and stores it in the database.

Setting Up a Runner Agent

# Build the runner agent
go build -o runner-agent ./cmd/runner-agent

# Run it
COORDINATOR_ADDR=admin-node:5773 \
API_TOKEN=mki_your_integration_token \
RUNNER_NAME=perf-runner-1 \
CAPABILITIES=performance \
MAX_CONCURRENT=4 \
./runner-agent

  • COORDINATOR_ADDR: Admin node gRPC address (host:5773).
  • API_TOKEN: Integration token (create via Admin UI > Integrations).
  • RUNNER_NAME: Human-readable name for this runner.
  • CAPABILITIES: Comma-separated capabilities: performance, api_test.
  • MAX_CONCURRENT: Maximum concurrent test executions on this runner.
  • SHARED: If true, accepts tasks from all namespaces.
  • NAMESPACE: If not shared, only accepts tasks for this namespace.

For detailed setup instructions, see Integrations. For deployment architecture details, see Scaling Architecture.


Scheduling

Performance tests can run on a cron schedule for continuous performance monitoring.

Creating a Schedule

Schedules link a saved performance configuration to a cron expression:

POST /ui/api/perf-schedules
{
  "name": "Nightly Load Test",
  "configId": "config-uuid-here",
  "cronExpression": "0 2 * * *",
  "namespace": "default",
  "enabled": true,
  "options": {
    "vus": 50,
    "duration": "5m"
  }
}

The cron expression uses standard 5-field format: minute hour day-of-month month day-of-week.

How Scheduling Works

  1. A background loop checks for due schedules every 60 seconds.
  2. When a schedule’s nextRunAt time has passed, the coordinator creates a performance task.
  3. The task is dispatched to an available runner agent (or executed locally if no runners are configured).
  4. After execution, nextRunAt is recalculated from the cron expression.
  5. If a cron expression is invalid, the schedule is automatically disabled to prevent infinite retries.

Schedule-specific options override the saved configuration’s options, allowing you to run the same script with different parameters on different schedules.


Reading Results

Summary Metrics

Every test report includes a summary with these key metrics:

  • http_req_duration.avg: Average response time (ms).
  • http_req_duration.med: Median (p50) response time.
  • http_req_duration.p90: 90th percentile response time.
  • http_req_duration.p95: 95th percentile response time.
  • http_req_duration.p99: 99th percentile response time.
  • http_req_duration.min: Fastest response.
  • http_req_duration.max: Slowest response.
  • http_reqs.count: Total requests made.
  • http_reqs.rate: Requests per second (throughput).
  • http_req_failed.rate: Error rate (0.0-1.0).
  • data_sent: Total bytes sent.
  • data_received: Total bytes received.
  • vus_max: Peak concurrent VU count.
  • iterations.count: Total iterations completed.
  • checks.rate: Check pass rate (0.0-1.0).

Additional Indices

  • stddev_ms: Standard deviation of response times (ms).
  • cv_pct: Coefficient of variation (%); lower means more consistent latency.
  • throughput_stability_cv: CV of RPS over time; lower means more stable throughput.
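Both indices derive from the standard deviation. A minimal sketch of the computation (illustrative only; population standard deviation is assumed here, and the engine's exact estimator is not documented):

```javascript
// Illustrative computation of the consistency indices (population stddev
// assumed; the engine's exact estimator may differ).
function mean(xs) {
  return xs.reduce(function (a, b) { return a + b; }, 0) / xs.length;
}
function stddev(xs) {
  var m = mean(xs);
  var variance = xs.reduce(function (a, x) { return a + (x - m) * (x - m); }, 0) / xs.length;
  return Math.sqrt(variance);
}
function cvPct(xs) {
  return (stddev(xs) / mean(xs)) * 100; // lower = more consistent
}
```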

Per-Route Breakdown

The report includes per-route statistics covering all protocols:

  • HTTP: GET /users/{id}, POST /orders
  • gRPC: gRPC mypackage.UserService/GetUser
  • SOAP: SOAP GetWeather
  • MCP: MCP tool get_weather
  • Kafka: Kafka produce orders, Kafka consume orders
  • RabbitMQ: RMQ publish /task_queue, RMQ consume task_queue
  • WebSocket: WS connect /ws, WS send /ws, WS recv /ws
  • SSE: SSE /events

Each route entry includes count, error rate, latency percentiles, and its own APDEX score.

Timeseries Data

The report includes timeseries data sampled every 3 seconds for chart rendering:

  • Latency (avg and p95 over time)
  • Throughput (requests per second over time)
  • Active VUs over time
  • Error rate over time

Protocol-Specific Summaries

When you use non-HTTP modules, the report includes additional summaries:

{
  "grpc_reqs": { "count": 5000, "err_rate": 0.02, "duration": { "avg": 12.3, "p95": 45.6 } },
  "kafka_msgs": { "total": 10000, "produced": 5000, "consumed": 5000, "err_rate": 0.0 },
  "rmq_msgs": { "total": 8000, "produced": 4000, "consumed": 4000 },
  "ws_msgs": { "connections": 100, "sent": 5000, "received": 4800 },
  "sse_events": { "connections": 50, "events": 25000 }
}

Advanced Patterns

Ramp-Up / Ramp-Down (Stress Test)

Simulate a gradual traffic increase to find the breaking point:

export const options = {
  stages: [
    { duration: '2m', target: 50 },    // Warm up
    { duration: '3m', target: 200 },   // Ramp to peak
    { duration: '5m', target: 200 },   // Sustain peak
    { duration: '2m', target: 50 },    // Ramp down
    { duration: '1m', target: 0 },     // Cool down
  ],
  thresholds: {
    'http_req_duration': ['p(95) < 1000'],
    'http_req_failed': ['rate < 0.05'],
  },
};

Correlation (Extract Tokens from Responses)

Chain requests by extracting values from one response and using them in the next:

var http = require('mockarty/http');

module.exports.default = function () {
  // Step 1: Login and extract token
  var loginRes = http.post('https://api.example.com/auth/login',
    JSON.stringify({ username: 'testuser', password: 'testpass' }),
    { headers: { 'Content-Type': 'application/json' } }
  );

  var token = loginRes.json().token;

  // Step 2: Use token in subsequent requests
  var profileRes = http.get('https://api.example.com/profile', {
    headers: { 'Authorization': 'Bearer ' + token },
  });

  check(profileRes, {
    'profile loaded': function (r) { return r.status === 200; },
  });

  // Step 3: Create an order
  var orderRes = http.post('https://api.example.com/orders',
    JSON.stringify({ product: 'widget', quantity: 1 }),
    { headers: {
      'Authorization': 'Bearer ' + token,
      'Content-Type': 'application/json',
    }}
  );

  var orderId = orderRes.json().id;

  // Step 4: Check order status
  var statusRes = http.get('https://api.example.com/orders/' + orderId, {
    headers: { 'Authorization': 'Bearer ' + token },
  });

  check(statusRes, {
    'order exists': function (r) { return r.status === 200; },
    'order is pending': function (r) { return r.json().status === 'pending'; },
  });
};

Custom Metrics

Track application-specific metrics alongside built-in ones:

var http = require('mockarty/http');

var loginDuration = new Trend('login_duration');
var orderSuccess = new Rate('order_success_rate');
var totalOrders = new Counter('total_orders');

module.exports.options = {
  vus: 20,
  duration: '5m',
  thresholds: {
    'login_duration': ['avg < 300', 'p(95) < 1000'],
    'order_success_rate': ['rate > 0.95'],
  },
};

module.exports.default = function () {
  var start = Date.now();
  var res = http.post('https://api.example.com/login',
    JSON.stringify({ user: 'test', pass: 'test' }),
    { headers: { 'Content-Type': 'application/json' } }
  );
  loginDuration.add(Date.now() - start);

  var orderRes = http.post('https://api.example.com/orders',
    JSON.stringify({ item: 'widget' }),
    { headers: { 'Content-Type': 'application/json' } }
  );

  orderSuccess.add(orderRes.status === 201);
  totalOrders.add(1);

  sleep(1);
};

Data-Driven Tests

Use SharedArray to load test data that is created once and shared across all VUs:

var http = require('mockarty/http');
var data = require('mockarty/data');
var faker = require('mockarty/faker');

// Create 1000 test users once, shared across all VUs
var users = data.SharedArray('test-users', function () {
  var result = [];
  for (var i = 0; i < 1000; i++) {
    result.push({
      email: faker.email(),
      password: faker.password(),
      name: faker.name(),
    });
  }
  return result;
});

module.exports.options = {
  vus: 50,
  duration: '5m',
};

module.exports.default = function () {
  // Each VU picks a random user
  var idx = Math.floor(Math.random() * users.length);
  var user = users.get(idx);

  http.post('https://api.example.com/register',
    JSON.stringify(user),
    { headers: { 'Content-Type': 'application/json' } }
  );

  sleep(0.5);
};

Multi-Protocol Test

Test HTTP, gRPC, and Kafka in a single script:

var http = require('mockarty/http');
var grpc = require('mockarty/grpc');
var kafka = require('mockarty/kafka');

var grpcClient = grpc.connect('localhost:4770', { reflect: true, plaintext: true });
var kafkaProducer = kafka.producer({ brokers: ['localhost:9092'] });

module.exports.options = {
  vus: 10,
  duration: '2m',
};

module.exports.default = function () {
  // HTTP: Create order via REST
  var orderRes = http.post('http://localhost:8080/api/orders',
    JSON.stringify({ product: 'widget', qty: 1 }),
    { headers: { 'Content-Type': 'application/json' } }
  );

  check(orderRes, {
    'order created': function (r) { return r.status === 201; },
  });

  // gRPC: Verify inventory via gRPC
  var invRes = grpcClient.invoke('inventory.Service/GetStock', {
    product_id: 'widget',
  });

  check(invRes, {
    'gRPC OK': function (r) { return r.status === 'OK'; },
    'stock available': function (r) { return r.message.quantity > 0; },
  });

  // Kafka: Publish order event
  kafkaProducer.produce({
    topic: 'order-events',
    messages: [{ key: 'widget', value: JSON.stringify({ event: 'order_placed' }) }],
  });

  sleep(1);
};
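A detail worth knowing about `check()`: it evaluates every named predicate (it does not short-circuit on the first failure), records each result into the `checks` rate metric, and returns `true` only when all predicates pass. A framework-free sketch of those semantics, for intuition only:

```javascript
// Simplified sketch of k6-style check() semantics. Illustrative only;
// the optional `record` callback stands in for the engine's metrics sink.
function check(value, predicates, record) {
  let allPassed = true;
  for (const name of Object.keys(predicates)) {
    const passed = Boolean(predicates[name](value));
    if (record) record(name, passed); // a real engine feeds the 'checks' rate metric here
    if (!passed) allPassed = false;   // keep evaluating the remaining predicates
  }
  return allPassed;
}

const ok = check({ status: 201 }, {
  'order created': (r) => r.status === 201,
  'is a teapot': (r) => r.status === 418,
});
// ok === false: one predicate failed, but both were evaluated and recorded
```

Because failed checks only affect the `checks` metric (and any threshold on it), a failing check never aborts the iteration by itself — use thresholds or abort criteria for that.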

Real-World Example: E-Commerce Checkout Flow

A comprehensive load test simulating a realistic e-commerce user journey:

import http from 'k6/http';
import { check, sleep, group } from 'k6';

const BASE_URL = 'https://api.mystore.com';

export const options = {
  stages: [
    { duration: '1m', target: 25 },     // Warm up
    { duration: '3m', target: 100 },    // Ramp to steady state
    { duration: '5m', target: 100 },    // Sustain load
    { duration: '2m', target: 200 },    // Peak traffic
    { duration: '2m', target: 200 },    // Sustain peak
    { duration: '1m', target: 0 },      // Cool down
  ],
  thresholds: {
    'http_req_duration': ['p(95) < 800', 'p(99) < 2000'],
    'http_req_failed': ['rate < 0.02'],
    'checks': ['rate > 0.98'],
  },
  abortCriteria: [
    {
      name: 'Error rate too high',
      metric: 'http_req_failed',
      stat: 'rate',
      condition: '>',
      value: 0.3,
      duration: '1m',
      enabled: true,
    },
  ],
};

export default function () {
  const headers = { 'Content-Type': 'application/json' };

  // 1. Login
  let token;
  group('login', function () {
    const loginRes = http.post(BASE_URL + '/auth/login',
      JSON.stringify({
        email: 'loadtest-' + Math.floor(Math.random() * 10000) + '@example.com',
        password: 'TestPass123!',
      }),
      { headers: headers }
    );

    check(loginRes, {
      'login status 200': (r) => r.status === 200,
      'login has token': (r) => r.json().token !== undefined,
    });

    token = loginRes.json().token;
  });

  sleep(1);

  const authHeaders = {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer ' + token,
  };

  // 2. Browse products
  group('browse', function () {
    const listRes = http.get(BASE_URL + '/products?page=1&limit=20', {
      headers: authHeaders,
    });

    check(listRes, {
      'products loaded': (r) => r.status === 200,
      'has products': (r) => r.json().items.length > 0,
    });

    // View a random product detail
    const products = listRes.json().items;
    const randomProduct = products[Math.floor(Math.random() * products.length)];

    const detailRes = http.get(BASE_URL + '/products/' + randomProduct.id, {
      headers: authHeaders,
    });

    check(detailRes, {
      'product detail loaded': (r) => r.status === 200,
    });
  });

  sleep(2);

  // 3. Add to cart
  let cartId;
  group('add to cart', function () {
    const cartRes = http.post(BASE_URL + '/cart/items',
      JSON.stringify({
        productId: 'prod-001', // fixed ID keeps the cart payload deterministic
        quantity: 2,
      }),
      { headers: authHeaders }
    );

    check(cartRes, {
      'item added to cart': (r) => r.status === 200 || r.status === 201,
    });

    cartId = cartRes.json().cartId;
  });

  sleep(1);

  // 4. Checkout
  group('checkout', function () {
    const checkoutRes = http.post(BASE_URL + '/checkout',
      JSON.stringify({
        cartId: cartId,
        paymentMethod: 'credit_card',
        shippingAddress: {
          street: '123 Load Test Ave',
          city: 'Performance City',
          zip: '12345',
        },
      }),
      { headers: authHeaders }
    );

    check(checkoutRes, {
      'checkout succeeded': (r) => r.status === 200 || r.status === 201,
      'has order ID': (r) => r.json().orderId !== undefined,
    });
  });

  sleep(3);
}

This script:

  • Simulates realistic think times with sleep() between actions
  • Uses group() blocks for per-step reporting in the results
  • Extracts authentication tokens and uses them across requests
  • Ramps traffic through warm-up, steady state, and peak phases
  • Defines thresholds for p95/p99 latency and error rate
  • Includes an abort criterion to stop early if errors exceed 30%
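The staged ramp can also be reasoned about numerically: within each stage, the target VU count is interpolated linearly from the previous stage's endpoint. This helper is an illustrative sketch of that typical behavior (durations in seconds for simplicity), not Mockarty's scheduler:

```javascript
// Linear interpolation of the target VU count across k6-style stages.
// Illustrative only; real engines also handle sub-second scheduling.
function targetVUs(stages, elapsedSec, startVUs = 0) {
  let from = startVUs;
  let t = 0;
  for (const { durationSec, target } of stages) {
    if (elapsedSec <= t + durationSec) {
      const frac = (elapsedSec - t) / durationSec;
      return Math.round(from + (target - from) * frac);
    }
    t += durationSec;
    from = target;
  }
  return from; // past the last stage: hold its final target
}

// The first three stages of the checkout-flow example above:
const stages = [
  { durationSec: 60, target: 25 },   // warm up
  { durationSec: 180, target: 100 }, // ramp to steady state
  { durationSec: 300, target: 100 }, // sustain load
];
```

For example, 150 seconds in (halfway through the ramp stage), the target is roughly 63 VUs: halfway between the warm-up's 25 and the steady-state 100.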

See Also

  • API Reference — REST API endpoints for managing performance configs, running tests, and retrieving results
  • Integrations — Setting up Runner Agents for distributed test execution
  • Faker Reference — All available fake data generators for mock responses and test data
  • Scaling Architecture — Distributed deployment with Runner Agents and resolvers
  • Recorder — Record live traffic and export as performance scripts
  • Fuzzing — Automated security testing for your APIs