Clients Guide

Integrating Deterministic Distributed Compute

This guide explains how to integrate Forge Pool into your system as a compute client.

It assumes familiarity with:

Quickstart
Concepts
Execution Model


Integration Model

Forge Pool integrates as a deterministic execution backend.

Your application:

  • constructs Kernel execution requests
  • submits canonical execution envelopes
  • stores replay-critical metadata
  • consumes reduced outputs and artifacts

Forge Pool:

  • validates identity and policy
  • plans shards deterministically
  • schedules distributed agents
  • reduces results deterministically
  • emits replay metadata
  • records billing context

You do not interact directly with Agents.

You interact with:

  • Web Core API — the public execution entry point
  • HQ — control, observability, and billing surface

Authentication

Production Base URL

```text
https://api.forgepool.io
```

Required Headers

```http
Authorization: Bearer <your_api_key>
Content-Type: application/json
```

Alternative deployments may also support:

```http
X-FORGE-API-KEY: <your_api_key>
```

Project API tokens are:

  • scoped per project
  • rate limited
  • revocable
  • tied to billing and quota context
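The required headers can be built once and reused across requests. A minimal sketch, assuming you load the project token from your own secret store (`FORGE_API_KEY` here is a placeholder, and `auth_headers` is an illustrative helper, not part of the API):

```python
# Base URL and header construction for Forge Pool requests.
# FORGE_API_KEY is a placeholder; load the real project token from a secret store.
FORGE_BASE_URL = "https://api.forgepool.io"
FORGE_API_KEY = "<your_api_key>"

def auth_headers(api_key: str) -> dict:
    """Return the headers required by the public execution API."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
```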

See: → Authentication


The Execution Contract

All public compute requests use the canonical Kernel execution endpoint:

```http
POST /api/v0/ops/execute
```

Every request must contain:

  • ctx
  • op
  • seed
  • policy
  • args

Example structure:

```json
{
  "ctx": {
    "job_id": "app-req-001",
    "trace_id": "trace-001",
    "billing": { "mode": "billable" }
  },
  "op": {
    "name": "mc",
    "version": 1,
    "profile": "insurance.v1"
  },
  "seed": {
    "mode": "explicit",
    "value": "ROOT_SEED_001"
  },
  "policy": {
    "target": "cpu",
    "min_agents": 1,
    "max_agents": 50,
    "verify": "spotcheck"
  },
  "args": {
    "iterations": 10000000,
    "claim_freq": 2.0,
    "claim_severity_mu": 6.0,
    "claim_severity_sig": 1.0
  }
}
```

This structure is invariant across workload families.

Adapters and client-side domain layers map business input into this canonical format.
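Such an adapter can be a small pure function. A sketch for the insurance Monte Carlo profile shown above (`build_envelope` and its parameter names are illustrative, not part of the API; the field names follow the contract):

```python
# Illustrative adapter: map business input into the canonical Kernel envelope.
# The envelope field names follow the execution contract; this helper is not
# part of the Forge Pool API.

def build_envelope(job_id: str, trace_id: str, seed: str, iterations: int,
                   claim_freq: float, mu: float, sigma: float) -> dict:
    return {
        "ctx": {
            "job_id": job_id,
            "trace_id": trace_id,
            "billing": {"mode": "billable"},
        },
        "op": {"name": "mc", "version": 1, "profile": "insurance.v1"},
        "seed": {"mode": "explicit", "value": seed},
        "policy": {"target": "cpu", "min_agents": 1, "max_agents": 50,
                   "verify": "spotcheck"},
        "args": {"iterations": iterations, "claim_freq": claim_freq,
                 "claim_severity_mu": mu, "claim_severity_sig": sigma},
    }
```

Keeping the adapter pure makes the full request payload trivial to persist for replay.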


Example — Insurance Monte Carlo

```bash
curl -X POST https://api.forgepool.io/api/v0/ops/execute \
  -H "Authorization: Bearer <your_api_key>" \
  -H "Content-Type: application/json" \
  -d '{ ... }'
```

Response (simplified):

```json
{
  "ok": true,
  "job_id": "01K...",
  "status": "COMPLETED",
  "hub": {
    "metrics": {
      "wall_ms": 37317,
      "agents_used": 10,
      "shards": 10
    },
    "output": {
      "loss": {
        "mean": 1853.46,
        "variance": 22930414.49
      }
    },
    "replay": {
      "root_seed": "ROOT_SEED_001"
    }
  },
  "billing": {
    "credits": 1.42,
    "eur": 0.17
  }
}
```

The primary response surface is the reduced output; raw shard-level results are not returned.
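The replay-critical fields can be pulled out of a completed response in one place. A sketch against the simplified response shape above (`extract_replay_record` is an illustrative helper):

```python
# Sketch: extract the fields worth persisting from a completed response.
# The response shape follows the simplified example above; this helper is
# illustrative, not part of the API.

def extract_replay_record(resp: dict) -> dict:
    """Return the fields to persist for replay, audit, and billing review."""
    if not resp.get("ok"):
        raise RuntimeError(f"execution failed: {resp}")
    return {
        "job_id": resp["job_id"],
        "root_seed": resp["hub"]["replay"]["root_seed"],
        "mean_loss": resp["hub"]["output"]["loss"]["mean"],
        "credits": resp["billing"]["credits"],
    }
```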


Execution Lifecycle

When you submit a request:

  1. Web Core validates auth and policy.
  2. The request is registered.
  3. Hub deterministically plans shards.
  4. Agents execute shards in isolation.
  5. Results are aggregated deterministically.
  6. Verification runs when policy requires it.
  7. Replay metadata is recorded.
  8. Billing context is finalized.
  9. The response is returned.

Completed jobs are immutable truth surfaces for later replay and audit.


Determinism and Replay

For regulated or institutional use:

  • always specify explicit seed
  • persist the full request payload
  • store root_seed from response
  • store job_id
  • persist any required artifact references

Re-executing an identical request contract produces identical results under the same runtime doctrine.
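One way to operationalize this is to persist the request payload together with the returned replay metadata, then compare a re-execution against the stored record. A minimal sketch, with illustrative helper names and a response shape matching the example above:

```python
import json

# Sketch: persist request + replay metadata as one audit record, then check
# that a re-execution reproduces the stored output. Helper names are
# illustrative, not part of the API.

def replay_record(request_payload: dict, response: dict) -> str:
    """Serialize the replay-critical state of a completed job."""
    record = {
        "request": request_payload,
        "job_id": response["job_id"],
        "root_seed": response["hub"]["replay"]["root_seed"],
        "output": response["hub"]["output"],
    }
    return json.dumps(record, sort_keys=True)

def is_reproduced(record_json: str, new_response: dict) -> bool:
    """True if a re-execution matches the stored seed and output."""
    record = json.loads(record_json)
    return (new_response["hub"]["replay"]["root_seed"] == record["root_seed"]
            and new_response["hub"]["output"] == record["output"])
```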

Determinism is enforced at:

  • shard planning
  • seed derivation
  • reduction behavior
  • workload identity

See: → Determinism & Replay
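To illustrate the seed-derivation principle only: the actual scheme is internal to the Hub, but the key property is that each shard seed is a pure function of the root seed and the shard index, so planning the same job twice yields the same seeds. A hypothetical sketch:

```python
import hashlib

# Illustration only: the real derivation scheme is internal to the Hub.
# This shows the principle that a shard seed is a deterministic function
# of (root_seed, shard_index), independent of scheduling order.

def derive_shard_seed(root_seed: str, shard_index: int) -> str:
    digest = hashlib.sha256(f"{root_seed}:{shard_index}".encode()).hexdigest()
    return digest[:16]
```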


Policy Configuration

The policy block defines execution constraints:

  • target: cpu / gpu / any
  • min_agents: minimum parallelization target
  • max_agents: upper bound on scaling
  • verify: none / spotcheck / full

Policy allows:

  • cost control
  • latency tuning
  • trust strengthening
  • heterogeneous routing

Policy influences planning. It does not redefine workload semantics.
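A client can reject a malformed policy block before submission. A sketch using the allowed values listed above (`validate_policy` is an illustrative client-side check, not an API call):

```python
# Sketch: client-side validation of a policy block before submission.
# Allowed values follow the documented policy fields; this validator is
# illustrative, not part of the API.

VALID_TARGETS = {"cpu", "gpu", "any"}
VALID_VERIFY = {"none", "spotcheck", "full"}

def validate_policy(policy: dict) -> None:
    if policy["target"] not in VALID_TARGETS:
        raise ValueError(f"unknown target: {policy['target']}")
    if policy["verify"] not in VALID_VERIFY:
        raise ValueError(f"unknown verify mode: {policy['verify']}")
    if not 1 <= policy["min_agents"] <= policy["max_agents"]:
        raise ValueError("require 1 <= min_agents <= max_agents")
```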


Platform Memory Surfaces

For advanced workflows, Forge exposes supporting platform surfaces.

KV

Lightweight structured state and coordination references.

Blob

Artifact and object storage.

VMem

Reusable execution-adjacent numeric or intermediate memory surfaces.

These surfaces support:

  • multi-step workflows
  • cached distributions
  • staged execution graphs
  • artifact persistence
  • replay and audit references

Memory surfaces are accessed through dedicated endpoints.


Observability

In HQ → Jobs you can inspect:

  • shard allocation
  • agent participation
  • replay seed
  • verification status
  • execution metrics
  • billing records

Observability is structural, not optional.


Scaling and Limits

Each project may enforce:

  • rate limits
  • iteration caps
  • concurrency limits
  • billing boundaries
  • verification policy boundaries

Enterprise clients may request:

  • private Hub instances
  • regional routing
  • dedicated agent pools
  • custom verification policies

Production Checklist

Before going live:

  • use explicit seeds in production-critical jobs
  • log job_id and root_seed
  • validate replay reproducibility
  • test failure handling for 4xx and 5xx cases
  • monitor credit burn rate
  • confirm policy tuning
  • review shard distribution and verification behavior
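For the failure-handling item, a coarse retry classifier is often enough. A sketch following general HTTP conventions (the specific retry policy, including treating 429 as retryable, is an assumption, not documented Forge Pool behavior):

```python
# Sketch: coarse 4xx/5xx handling. The retry policy below is an assumption
# based on general HTTP conventions, not documented Forge Pool semantics.

def should_retry(status_code: int, attempt: int, max_attempts: int = 3) -> bool:
    if attempt >= max_attempts:
        return False
    # 5xx: transient server-side failure; retry with backoff.
    if 500 <= status_code < 600:
        return True
    # 429: rate limited; retry after backing off.
    if status_code == 429:
        return True
    # Other 4xx: client error; fix the request instead of retrying.
    return False
```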

Integration Philosophy

Forge Pool is not a black-box compute API.

It is a deterministic execution substrate.

Your system owns business logic. Forge Pool owns execution integrity.