Integrations Guide
Table of Contents
- Overview
- Integration Tokens
- Mock Resolver Nodes
- Runner Agents
- Server Generator & API-First Approach
- CI/CD Integration
- Webhook Notifications
- Telemetry & Monitoring
- Architecture Diagrams
Overview
Mockarty supports a distributed architecture that allows you to scale mock serving, test execution,
and code generation across multiple nodes. This guide covers how to connect external components
to your Mockarty admin node and adopt an API-first development workflow.
There are three external component types that can connect to Mockarty:
| Component | Purpose | Connection |
|---|---|---|
| Mock Resolver | Handles mock resolution traffic, offloads admin node | gRPC coordinator (port 5773) |
| Runner Agent | Executes API tests and performance tests remotely | gRPC coordinator (port 5773) |
| Server Generator | Generates standalone protocol servers from API specs | HTTP API (admin node) |
All machine-to-machine communication is secured with integration tokens — dedicated
authentication credentials separate from user API tokens.
Integration Tokens
What Are Integration Tokens?
Integration tokens are authentication credentials designed for machine-to-machine communication
between Mockarty nodes. They are distinct from user API tokens:
| Token Type | Prefix | Purpose |
|---|---|---|
| User API token | `mk_*` | User automation, CI/CD, REST API calls |
| Integration token | `mki_*` | Node-to-node authentication (resolver, runner, orchestrator) |
Integration tokens are hashed with bcrypt before storage — Mockarty never stores the plaintext
token. The token is displayed only once at creation time. Copy it immediately.
Creating Integration Tokens
Via the UI
- Navigate to Settings > Integrations
- Click Create Integration
- Select the token type:
  - `mock_resolver` — for Mock Resolver nodes
  - `test_runner` — for Runner Agent nodes
  - `orchestrator` — for Server Generator orchestrator
- Enter a descriptive name (e.g., `resolver-eu-west-1`, `runner-ci-pipeline`)
- Click Create
- Copy the token from the confirmation dialog
Via the API
curl -X POST http://localhost:5770/ui/api/integrations \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer mk_your_admin_api_token" \
  -d '{
    "name": "resolver-prod-1",
    "type": "mock_resolver"
  }'
Response:
{
  "integration": {
    "id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "name": "resolver-prod-1",
    "type": "mock_resolver",
    "enabled": true,
    "createdAt": "2026-03-13T10:00:00Z"
  },
  "token": "mki_abc123def456..."
}
Important: The `token` field is returned only in the creation response. Store it securely.
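Since the token is shown only once, automation should capture it at creation time. A minimal sketch that extracts the token from the creation response with `sed` (the compact JSON string below stands in for real `curl -s` output):

```shell
#!/bin/sh
# Normally: RESPONSE=$(curl -s -X POST http://localhost:5770/ui/api/integrations ...)
RESPONSE='{"integration":{"id":"a1b2c3d4","name":"resolver-prod-1","type":"mock_resolver","enabled":true},"token":"mki_abc123def456"}'

# Pull out the one-time "token" field; it is never returned again.
TOKEN=$(printf '%s' "$RESPONSE" | sed -n 's/.*"token":"\([^"]*\)".*/\1/p')

# Fail fast if the field is missing (wrong endpoint, error response, etc.).
[ -n "$TOKEN" ] || { echo "no token in response" >&2; exit 1; }
echo "$TOKEN"
```

In real pipelines, pipe the result into your secret store rather than echoing it.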
Token Types
mock_resolver — Grants the node permission to:
- Receive mock configurations from the admin node
- Resolve incoming mock requests
- Report health status back to the coordinator
test_runner — Grants the node permission to:
- Receive API test and performance test tasks
- Report test results back to the admin node
- Access mock definitions needed for test execution
orchestrator — Grants the node permission to:
- Access the Server Generator orchestrator API
- Create and manage generated servers
- Sync mock definitions for generated servers
Revoking Tokens
To revoke an integration token:
- Navigate to Settings > Integrations
- Find the integration in the list
- Click Disable to temporarily suspend, or Delete to permanently revoke
Via API:
# Disable (soft revoke)
curl -X PUT http://localhost:5770/ui/api/integrations/INTEGRATION_ID \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer mk_your_admin_api_token" \
  -d '{"enabled": false}'

# Delete (permanent)
curl -X DELETE http://localhost:5770/ui/api/integrations/INTEGRATION_ID \
  -H "Authorization: Bearer mk_your_admin_api_token"
When a token is revoked, the corresponding node loses connectivity and stops receiving updates
at the next heartbeat cycle (typically within 30 seconds).
Mock Resolver Nodes
Purpose
Mock Resolver nodes handle mock resolution traffic — the actual serving of mock responses to
your applications and tests. By offloading resolution to dedicated nodes, you keep the admin
node focused on UI, configuration, and coordination tasks.
Architecture
Your App/Tests ──HTTP──▶ Mock Resolver :5780
                               │
                          gRPC :5773
                               │
                               ▼
                        Admin Node :5770
The resolver connects to the admin node via the gRPC coordinator on port 5773. It receives
mock configurations and caches them locally for fast resolution. Updates are pushed from the
admin node in real time.
When to Use
- High mock traffic: resolver nodes can be scaled horizontally
- Separation of concerns: keep admin UI and mock serving on different nodes
- Network isolation: place resolvers closer to your test infrastructure
- High availability: multiple resolvers provide redundancy
Setup
Step 1: Create an Integration Token
Create a token of type mock_resolver (see Integration Tokens).
Step 2: Download the Binary
Download the mock-resolver binary for your platform from the Mockarty releases page.
Step 3: Configure Environment Variables
| Variable | Required | Default | Description |
|---|---|---|---|
| `COORDINATOR_ADDR` | Yes | — | Address of the admin node gRPC coordinator (e.g., `admin-host:5773`) |
| `API_TOKEN` | Yes | — | Integration token (`mki_*`) |
| `RESOLVER_NAME` | No | hostname | Human-readable name for this resolver |
| `HTTP_PORT` | No | `5780` | Port for serving mock responses |
| `NAMESPACE` | No | `sandbox` | Namespace this resolver serves |
| `LOG_LEVEL` | No | `info` | Logging level (`debug`, `info`, `warn`, `error`) |
Step 4: Run
export COORDINATOR_ADDR="admin.example.com:5773"
export API_TOKEN="mki_your_resolver_token"
export RESOLVER_NAME="resolver-1"
export HTTP_PORT="5780"
./mock-resolver
The resolver will connect to the coordinator, download current mock configurations, and begin
serving mock responses.
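For long-running deployments outside Docker, the same environment can be handed to a process supervisor. A sketch of a systemd unit (the binary path, hostname, and token are illustrative placeholders):

```ini
# /etc/systemd/system/mock-resolver.service (illustrative)
[Unit]
Description=Mockarty Mock Resolver
After=network-online.target

[Service]
Environment=COORDINATOR_ADDR=admin.example.com:5773
Environment=API_TOKEN=mki_your_resolver_token
Environment=RESOLVER_NAME=resolver-1
Environment=HTTP_PORT=5780
ExecStart=/opt/mockarty/mock-resolver
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now mock-resolver`.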
Routing Traffic
Once the resolver is running, direct your mock traffic to it instead of the admin node:
| Traffic Type | Target |
|---|---|
| Mock requests (HTTP, gRPC, etc.) | Resolver node (port 5780) |
| Admin UI, API, configuration | Admin node (port 5770) |
Example — point your tests at the resolver:
# Before (directly to admin)
curl http://admin:5770/api/users/123
# After (to resolver)
curl http://resolver-1:5780/api/users/123
Multiple Resolvers
You can run multiple resolver nodes and load-balance across them:
# Resolver 1
COORDINATOR_ADDR="admin:5773" API_TOKEN="mki_token1" RESOLVER_NAME="resolver-1" HTTP_PORT=5780 ./mock-resolver
# Resolver 2
COORDINATOR_ADDR="admin:5773" API_TOKEN="mki_token2" RESOLVER_NAME="resolver-2" HTTP_PORT=5781 ./mock-resolver
Place an HTTP load balancer (nginx, HAProxy, cloud LB) in front of them for even distribution.
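For instance, a minimal nginx configuration that round-robins across the two resolvers above (hostnames and ports are illustrative):

```nginx
upstream mockarty_resolvers {
    # Round-robin by default; add health checks or hashing as needed
    server resolver-1:5780;
    server resolver-2:5781;
}

server {
    listen 80;
    location / {
        proxy_pass http://mockarty_resolvers;
    }
}
```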
Health Check
Each resolver exposes a health endpoint:
curl http://resolver-1:5780/health/live
Use this for load balancer health checks and monitoring.
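In scripts, for example before switching traffic over to a new resolver, you may want to block until this endpoint answers. A small POSIX-shell sketch (host and port are illustrative):

```shell
#!/bin/sh
# Poll a liveness URL until it responds, or give up after N attempts.
wait_for_live() {
  url=$1; attempts=${2:-30}
  i=1
  while [ "$i" -le "$attempts" ]; do
    if curl -sf "$url" >/dev/null 2>&1; then
      return 0            # endpoint answered: component is live
    fi
    sleep 1
    i=$((i + 1))
  done
  return 1                # gave up after $attempts tries
}

# Example:
# wait_for_live "http://resolver-1:5780/health/live" && echo "resolver is live"
```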
Dashboard
Each resolver node runs a lightweight dashboard showing connection status, resolved mock counts,
and performance metrics:
http://resolver-1:6780/
Docker Example
docker run -d \
--name mockarty-resolver \
-e COORDINATOR_ADDR="admin-host:5773" \
-e API_TOKEN="mki_your_resolver_token" \
-e RESOLVER_NAME="resolver-docker" \
-e HTTP_PORT="5780" \
-p 5780:5780 \
-p 6780:6780 \
mockarty/mock-resolver:latest
Runner Agents
Purpose
Runner Agents execute API test collections and performance/load tests remotely. By offloading
test execution to dedicated runner nodes, heavy load tests do not impact the admin node’s
performance.
Architecture
Admin Node :5770
       │
  gRPC :5773 (coordinator dispatches tasks)
       │
       ▼
Runner Agent :6770 (executes tests, reports results)
The runner connects to the admin node via the gRPC coordinator. When a user triggers a test
run or performance test, the coordinator dispatches the task to an available runner. Results
are streamed back to the admin node.
When to Use
- Load testing: performance tests generate significant traffic — run them on dedicated nodes
- Distributed testing: run tests from different network locations or regions
- Parallel execution: multiple runners can execute tasks concurrently
- Resource isolation: keep test execution separate from admin operations
Setup
Step 1: Create an Integration Token
Create a token of type test_runner (see Integration Tokens).
Step 2: Download the Binary
Download the runner-agent binary for your platform from the Mockarty releases page.
Step 3: Configure Environment Variables
| Variable | Required | Default | Description |
|---|---|---|---|
| `COORDINATOR_ADDR` | Yes | — | Address of the admin node gRPC coordinator (e.g., `admin-host:5773`) |
| `API_TOKEN` | Yes | — | Integration token (`mki_*`) |
| `RUNNER_NAME` | No | hostname | Human-readable name for this runner |
| `SHARED` | No | `false` | If `true`, runner receives tasks from ALL namespaces |
| `NAMESPACE` | No | `sandbox` | Namespace this runner serves (ignored if `SHARED=true`) |
| `MAX_CONCURRENT` | No | `5` | Maximum concurrent task executions |
| `CAPABILITIES` | No | `api_test` | Comma-separated capabilities: `api_test`, `performance` |
| `DASHBOARD_PORT` | No | `6770` | Port for the runner dashboard |
| `LOG_LEVEL` | No | `info` | Logging level |
Step 4: Run
export COORDINATOR_ADDR="admin.example.com:5773"
export API_TOKEN="mki_your_runner_token"
export RUNNER_NAME="runner-1"
export SHARED="true"
export CAPABILITIES="api_test,performance"
export MAX_CONCURRENT="10"
./runner-agent
Scope: Shared vs. Namespace Runners
Shared runners (SHARED=true):
- Receive tasks from all namespaces
- Ideal for general-purpose test infrastructure
- Typically deployed by platform/DevOps teams
Namespace runners (SHARED=false, NAMESPACE=team-backend):
- Receive tasks only from their assigned namespace
- Ideal for team-specific infrastructure
- Can be deployed closer to team resources
Example configuration:
# Shared runner — handles all namespaces
SHARED=true CAPABILITIES="api_test,performance" ./runner-agent
# Team-specific runner — only "payments" namespace
SHARED=false NAMESPACE="payments" CAPABILITIES="api_test" ./runner-agent
Capabilities
Capabilities determine what task types a runner can accept:
| Capability | Task Types |
|---|---|
| `api_test` | API test collection runs, scheduled test executions |
| `performance` | Load tests, stress tests, performance benchmarks |
A runner can have multiple capabilities:
CAPABILITIES="api_test,performance"
If a runner only has api_test, it will never receive performance test tasks, and vice versa.
Multiple Runners
You can run multiple runner agents. The coordinator distributes tasks automatically based on
availability and capabilities:
# Runner 1 — API tests only
RUNNER_NAME="runner-api-1" CAPABILITIES="api_test" DASHBOARD_PORT=6770 ./runner-agent
# Runner 2 — Performance tests only
RUNNER_NAME="runner-perf-1" CAPABILITIES="performance" DASHBOARD_PORT=6771 ./runner-agent
# Runner 3 — Both
RUNNER_NAME="runner-all-1" CAPABILITIES="api_test,performance" DASHBOARD_PORT=6772 ./runner-agent
Note: If running multiple runners on the same host, assign different `DASHBOARD_PORT` values
to avoid port conflicts.
Dashboard
Each runner provides a dashboard showing task status, execution history, and resource usage:
http://runner-1:6770/
Docker Example
docker run -d \
--name mockarty-runner \
-e COORDINATOR_ADDR="admin-host:5773" \
-e API_TOKEN="mki_your_runner_token" \
-e RUNNER_NAME="runner-docker" \
-e SHARED="true" \
-e CAPABILITIES="api_test,performance" \
-e MAX_CONCURRENT="10" \
-p 6770:6770 \
mockarty/runner-agent:latest
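The resolver and runner containers shown in this guide can also be managed together with Docker Compose; a sketch (tokens and the coordinator address are placeholders):

```yaml
# docker-compose.yml (illustrative)
services:
  resolver:
    image: mockarty/mock-resolver:latest
    environment:
      COORDINATOR_ADDR: admin-host:5773
      API_TOKEN: mki_your_resolver_token
      RESOLVER_NAME: resolver-compose
    ports:
      - "5780:5780"
      - "6780:6780"
  runner:
    image: mockarty/runner-agent:latest
    environment:
      COORDINATOR_ADDR: admin-host:5773
      API_TOKEN: mki_your_runner_token
      SHARED: "true"
      CAPABILITIES: api_test,performance
      MAX_CONCURRENT: "10"
    ports:
      - "6770:6770"
```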
Server Generator & API-First Approach
What Is the Server Generator?
The mockarty-server-generator is a standalone binary that generates fully functional protocol
servers from API specifications. Generated servers serve mock responses sourced from your
Mockarty instance, enabling an API-first development workflow.
API-First Workflow
The API-first approach lets you define your API contract before writing any implementation code:
- Define your API spec (OpenAPI, .proto, GraphQL schema, etc.)
- Generate a standalone server from the spec using `mockarty-server-generator`
- Mock responses are automatically created in Mockarty based on the spec
- Develop your client application against the generated server
- Replace the generated server with your real implementation when ready
This workflow enables frontend and backend teams to work in parallel — the frontend team
develops against the generated mock server while the backend team implements the real service.
License Requirement
Server generation requires the api-first-generator feature in your Mockarty license.
Check your license status at Settings > License.
Supported Protocols
| Protocol | Input Format | Server Type |
|---|---|---|
| OpenAPI / Swagger | `.yaml`, `.json` | HTTP REST server |
| gRPC | `.proto` | gRPC server |
| MCP | `.json` config | MCP server (SSE transport) |
| GraphQL | `.graphql` schema | GraphQL server |
| SOAP | `.wsdl` | SOAP server |
| Kafka | `.json` config | Kafka consumer/producer |
| RabbitMQ | `.json` config | RabbitMQ consumer/publisher |
| SSE | `.json` config | Server-Sent Events server |
| WebSocket | `.json` config | WebSocket server |
Three Operating Modes
1. CLI Mode
Generate server code from specs directly on the command line:
# Generate an OpenAPI server
./mockarty-server-generator \
-server-type openapi \
-input ./api/openapi.yaml \
-output ./generated-server \
-create-mocks \
-mockarty-url http://localhost:5770 \
-api-token mk_your_token
# Generate a gRPC server
./mockarty-server-generator \
-server-type grpc \
-input ./proto/service.proto \
-output ./generated-grpc-server \
-create-mocks \
-mockarty-url http://localhost:5770 \
-api-token mk_your_token
The -create-mocks flag automatically creates mock definitions in Mockarty based on the spec.
2. Orchestrator Mode
Run the server generator as a REST API service for managing generated servers programmatically:
./mockarty-server-generator orchestrator \
-port 8888 \
-api-token mk_your_token \
-mockarty-url http://localhost:5770
The orchestrator exposes endpoints for creating, listing, and managing generated servers.
It requires an API token for authentication.
3. Experimental UI
A web interface for interactive server generation:
./mockarty-server-generator experimental-ui \
-port 8888 \
-api-token mk_your_token \
-mockarty-url http://localhost:5770
Access the UI at http://localhost:8888/ui.
Generated Server Output
The generator produces a standalone Go project with vendored dependencies:
generated-server/
  main.go
  go.mod
  go.sum
  vendor/
    ...
Build and run the generated server:
cd generated-server
go build -mod=vendor -o server .
./server
The generated server is fully self-contained — no external dependency downloads needed at build time.
Generated Server Environment Variables
| Variable | Default | Description |
|---|---|---|
| `HTTP_ADMIN_BASE_URL` | — | URL of the Mockarty admin node (e.g., `http://admin:5770`) |
| `NAMESPACE` | `sandbox` | Namespace for mock resolution |
| `HTTP_PORT` | `8080` | Port for the generated server |
| `API_TOKEN` | — | API token for authenticating with Mockarty |
Smart Merge
When you re-run the generator with -create-mocks against an existing set of mocks, the
generator performs a smart merge:
- New endpoints are added as new mocks
- Existing mocks are updated (routes, methods) without losing manually added conditions, store references, or custom response modifications
- Deleted endpoints are not automatically removed (manual cleanup required)
This allows you to iterate on your API spec without losing manual mock customizations.
Docker Example (Orchestrator)
docker run -d \
--name mockarty-server-generator \
-e API_TOKEN="mk_your_token" \
-e MOCKARTY_URL="http://admin-host:5770" \
-p 8888:8888 \
mockarty/server-generator:latest \
orchestrator -port 8888
For detailed usage of all server types, input formats, and configuration options, see the
Server Generator Guide.
CI/CD Integration
User API Tokens for Automation
For CI/CD pipelines, use user API tokens (mk_*), not integration tokens.
Create an API token at Settings > API Tokens or via the admin API.
Creating Mocks in CI
Automate mock creation as part of your CI pipeline:
# Create a mock for the users endpoint
curl -X POST http://mockarty:5770/mock/create \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer mk_your_ci_token" \
  -d '{
    "id": "ci-users-get",
    "http": {
      "route": "/api/users/:id",
      "httpMethod": "GET"
    },
    "response": {
      "statusCode": 200,
      "payload": {
        "id": "$.pathParam.id",
        "name": "$.fake.FirstName",
        "email": "$.fake.Email"
      }
    }
  }'
Running Test Collections via API
Trigger test collection runs from your CI pipeline:
# Run a test collection
curl -X POST http://mockarty:5770/ui/api/test-runs/run \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer mk_your_ci_token" \
  -d '{
    "collectionId": "your-collection-uuid",
    "environmentId": "your-env-uuid"
  }'
Performance Tests via API
Trigger performance/load tests:
curl -X POST http://mockarty:5770/ui/api/perf/run \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer mk_your_ci_token" \
  -d '{
    "script": {
      "collectionId": "your-collection-uuid",
      "environmentId": "your-env-uuid"
    },
    "options": {
      "duration": "60s",
      "concurrency": 10
    }
  }'
Importing OpenAPI Specs
Auto-generate mocks from OpenAPI specs in your CI pipeline:
curl -X POST http://mockarty:5770/ui/api/openapi/import \
  -H "Authorization: Bearer mk_your_ci_token" \
  -F "file=@./api/openapi.yaml" \
  -F "namespace=ci-tests"
GitHub Actions Example
name: Integration Tests with Mockarty

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      mockarty:
        image: mockarty/mockarty:latest
        ports:
          - 5770:5770
        env:
          DB_DSN: "postgres://..."
    steps:
      - uses: actions/checkout@v4
      - name: Wait for Mockarty
        run: |
          for i in $(seq 1 30); do
            curl -s http://localhost:5770/health/live && break
            sleep 2
          done
      - name: Import OpenAPI Spec
        run: |
          curl -X POST http://localhost:5770/ui/api/openapi/import \
            -H "Authorization: Bearer ${{ secrets.MOCKARTY_TOKEN }}" \
            -F "file=@./api/openapi.yaml"
      - name: Run Tests
        run: npm test
        env:
          API_BASE_URL: http://localhost:5770
Webhook Notifications
Configuration
Configure webhooks at Settings > Webhooks in the admin UI.
Each webhook requires:
- URL: The endpoint to receive notifications
- Events: Which events trigger the webhook
- Secret (optional): Used to sign the payload for verification
Event Types
| Event | Description |
|---|---|
| `mock.resolved` | A mock request was matched and a response was served |
| `mock.created` | A new mock was created |
| `mock.deleted` | A mock was deleted |
| `test_run.completed` | An API test collection run finished |
| `undefined_request.received` | A request was received with no matching mock |
Payload Format
Webhook payloads are JSON:
{
  "event": "test_run.completed",
  "timestamp": "2026-03-13T10:30:00Z",
  "data": {
    "testRunId": "run-uuid",
    "collectionId": "collection-uuid",
    "status": "failed",
    "totalTests": 25,
    "passed": 23,
    "failed": 2,
    "duration": "45s"
  }
}
If a webhook secret is configured, the payload is signed with HMAC-SHA256 and the signature
is included in the X-Mockarty-Signature header.
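Receivers should verify this signature before trusting a payload. A minimal sketch using `openssl` (it assumes the header carries the hex-encoded digest of the raw body; confirm the exact encoding against your webhook settings):

```shell
#!/bin/sh
# Verify a webhook body against the X-Mockarty-Signature header value.
# Assumption: the signature is the hex-encoded HMAC-SHA256 of the raw body.
verify_webhook() {
  secret=$1; body=$2; received_sig=$3
  expected=$(printf '%s' "$body" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
  [ "$expected" = "$received_sig" ]
}

# Example (computing the signature the same way the sender would):
# SIG=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $NF}')
# verify_webhook "$SECRET" "$BODY" "$SIG" && echo "signature ok"
```

Always compare against the raw request bytes; re-serializing the JSON first will change the digest.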
Retry Policy
Failed webhook deliveries are retried with exponential backoff:
- Attempt 1: Immediate
- Attempt 2: After 30 seconds
- Attempt 3: After 2 minutes
- Attempt 4: After 10 minutes
- Attempt 5: After 1 hour
After 5 failed attempts, the webhook delivery is marked as failed. You can view delivery
history and retry manually from the webhooks settings page.
Use Cases
- Slack notifications: Post to a Slack channel when a test run fails
- Auto-mock creation: Listen for `undefined_request.received` events and automatically create mocks for unhandled endpoints
- Audit trail: Forward all events to a logging service for compliance
- Dashboard updates: Trigger dashboard refreshes when mocks change
Telemetry & Monitoring
OpenTelemetry Integration
Mockarty supports OpenTelemetry for distributed tracing. Configure via environment variables:
| Variable | Default | Description |
|---|---|---|
| `OTEL_EXPORTER_TYPE` | `none` | Exporter type: `jaeger`, `zipkin`, `otlp`, `none` |
| `OTEL_EXPORTER_ENDPOINT` | — | Exporter endpoint URL |
| `OTEL_SAMPLING_RATE` | `1.0` | Sampling rate (0.0 to 1.0) |
| `OTEL_SERVICE_NAME` | `mockarty` | Service name in traces |
Example configuration:
export OTEL_EXPORTER_TYPE="otlp"
export OTEL_EXPORTER_ENDPOINT="http://otel-collector:4317"
export OTEL_SAMPLING_RATE="0.1"
Prometheus Metrics
All Mockarty components expose Prometheus metrics at /metrics:
# Admin node
curl http://admin:5770/metrics
# Resolver node
curl http://resolver:5780/metrics
Key metrics include:
- `mockarty_mock_resolutions_total` — Total mock resolutions by route and status
- `mockarty_request_duration_seconds` — Request duration histogram
- `mockarty_active_mocks` — Number of active mocks
- `mockarty_store_operations_total` — Store read/write operations
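These can be queried directly in Prometheus or Grafana; for example (PromQL sketches, assuming `mockarty_request_duration_seconds` is exposed as a standard Prometheus histogram with `_bucket` series):

```promql
# Mock resolution rate (per second, 5-minute window)
sum(rate(mockarty_mock_resolutions_total[5m]))

# 95th-percentile request latency
histogram_quantile(0.95, sum(rate(mockarty_request_duration_seconds_bucket[5m])) by (le))
```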
Health Endpoints
All components expose health endpoints for monitoring and load balancer configuration:
| Component | Endpoint | Port |
|---|---|---|
| Admin node | `GET /health/live` | 5770 |
| Mock Resolver | `GET /health/live` | 5780 |
| Runner Agent | `GET /health/live` | 6770 |
| Generated servers | `GET /health` | 8080 (default) |
Health response:
{
  "status": "ok",
  "releaseId": "1.2.3",
  "uptime": "72h15m30s"
}
Grafana Dashboard
Mockarty provides a Grafana dashboard template for visualizing key metrics. Import the
dashboard JSON from the Mockarty releases page or configure manually using the Prometheus
metrics listed above.
Recommended dashboard panels:
- Mock resolution rate (requests/second)
- Response time percentiles (p50, p95, p99)
- Active mocks count
- Error rate by route
- Store operation latency
- Connected resolver and runner count
Architecture Diagrams
Basic Setup (Single Node)
┌──────────────┐ ┌──────────────────┐
│ Your App / │──HTTP──▶│ Mockarty Admin │
│ Tests │ │ :5770 │
└──────────────┘ │ │
│ - Mock serving │
│ - Admin UI │
│ - API │
│ - Test runner │
└──────────────────┘
Distributed Setup (Recommended for Production)
┌──────────────┐
┌───▶│ Resolver 1 │
│ │ :5780 │
┌──────────────┐ │ └──────┬───────┘
│ Your App / │────┤ │ gRPC :5773
│ Tests │ │ ┌──────▼───────────────┐
└──────────────┘ │ │ Mockarty Admin │
│ │ :5770 │
└───▶│ │
│ - Admin UI │
┌──────│ - Coordinator │
│ │ - Configuration │
│ └──────────────────────┘
│ gRPC :5773
│
┌──────▼───────┐
│ Runner Agent │
│ :6770 │
│ │
│ - API tests │
│ - Load tests │
└───────────────┘
Full Architecture with Server Generator
┌─────────────────────────────────────────────────────────────────────┐
│ Development Environment │
│ │
│ ┌────────────┐ ┌─────────────┐ ┌────────────────────────┐ │
│ │ API Spec │───▶│ Server │───▶│ Generated Server │ │
│ │ (OpenAPI) │ │ Generator │ │ :8080 │ │
│ └────────────┘ └──────┬──────┘ │ │ │
│ │ │ Serves mock responses │ │
│ creates mocks │ from Mockarty │ │
│ │ └───────────┬────────────┘ │
│ ▼ │ │
│ ┌──────────────┐ │ │
│ ┌──────────┐ │ Mockarty │◀───────────────┘ │
│ │ Frontend │────▶│ Admin │ fetches mock data │
│ │ App │ │ :5770 │ │
│ └──────────┘ └──────┬───────┘ │
│ │ │
│ ┌──────┴───────┐ │
│ │ │ │
│ ┌──────▼──────┐ ┌────▼────────┐ │
│ │ Resolver │ │ Runner │ │
│ │ :5780 │ │ Agent │ │
│ └─────────────┘ └─────────────┘ │
└─────────────────────────────────────────────────────────────────────┘
Port Reference
| Component | Default Port | Purpose |
|---|---|---|
| Admin node (HTTP) | 5770 | Admin UI, API, mock serving |
| Admin node (gRPC coordinator) | 5773 | Node coordination |
| Mock Resolver (HTTP) | 5780 | Mock serving |
| Mock Resolver (Dashboard) | 6780 | Resolver dashboard |
| Runner Agent (Dashboard) | 6770 | Runner dashboard |
| Generated server (HTTP) | 8080 | Generated server traffic |
| Server Generator Orchestrator | 8888 | Orchestrator API & UI |
See Also
- Installation & Deployment — Docker Compose examples, environment variables, and TLS setup
- Server Generator Guide — Detailed guide to the server generator CLI for all 9 protocols
- Administration Guide — User management, namespaces, authentication, and security
- API Reference — Full REST API documentation for programmatic mock management
- Quick Start Guide — Get up and running in 10 minutes