
Kernel Execution Model

This document defines the canonical execution behavior of the Forge Pool Kernel: how a request becomes shards, how verification is applied, how results are reduced, and what must be produced for replay and audit.


1. Execution Contract

Every Kernel workload is defined by a canonical envelope.

```json
{
  "ctx": {
    "job_id": "optional-client-id",
    "run_id": "optional-studio-run-id",
    "stage_id": "optional-studio-stage-id",
    "trace_id": "optional-trace-id",
    "billing": {
      "mode": "billable | test",
      "reason": "api | studio_run | lab | internal"
    }
  },
  "op": {
    "name": "mc",
    "version": 1,
    "profile": "insurance.v1"
  },
  "args": {
    "... profile-specific parameters ..."
  },
  "policy": {
    "target": "cpu | gpu | any",
    "min_agents": 0,
    "max_agents": 0,
    "timeout_ms": 0,
    "verify": "none | spotcheck | full"
  },
  "seed": {
    "mode": "derived | explicit",
    "value": "optional-string"
  },
  "artifacts": {
    "persist": true,
    "kv_namespace": "jobs",
    "include": ["result_json", "executions_json", "metrics_json"]
  }
}
```

This structure is canonical.

It ensures execution is:

  • deterministic
  • shardable
  • verifiable
  • replayable
  • auditable via artifacts

2. Shard → Execute → Verify → Aggregate

The Kernel enforces a four-phase execution pattern:

Shard

The workload is divided into deterministic partitions (shards). Shard planning must be reproducible under identical inputs.

Execute

Agents execute independently using shard parameters and a shard-derived seed.

Verify

Verification is policy-driven:

  • none: no redundancy
  • spotcheck: duplicate a subset of shards
  • full: redundant execution across all shards (or a configured ratio)

Verification exists because Agents are treated as mixed-trust executors.

Aggregate

The Hub reduces shard outputs via deterministic reduction. Agent ordering and routing must not affect the final result.
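The four phases can be sketched as a minimal single-process loop. This is illustrative only: function names such as `plan_shards`, `run_shard`, and `reduce_results` are assumptions, not Kernel API, and the shard workload is a stand-in.

```python
import hashlib

def derive_shard_seed(root_seed: str, shard_id: int) -> str:
    # Shard seeds derive deterministically from the root seed.
    return hashlib.sha256(f"{root_seed}:{shard_id}".encode()).hexdigest()

def plan_shards(args: dict, n: int) -> list[dict]:
    # Shard planning must be reproducible under identical inputs.
    return [{"shard_id": i, "args": args} for i in range(n)]

def run_shard(shard: dict, seed: str) -> dict:
    # Stand-in for agent-side execution; uses only the shard seed,
    # never ambient entropy, so replays reproduce the same output.
    value = int(seed[:8], 16) % 1000
    return {"shard_id": shard["shard_id"], "value": value}

def verify_spotcheck(shards, seeds, results, sample_ids):
    # Spotcheck: re-execute a subset of shards and compare outputs,
    # because Agents are treated as mixed-trust executors.
    for i in sample_ids:
        if run_shard(shards[i], seeds[i]) != results[i]:
            raise RuntimeError(f"verification failed on shard {i}")

def reduce_results(results: list[dict]) -> dict:
    # Deterministic reduction: sort by shard_id so agent ordering
    # and routing cannot affect the final result.
    ordered = sorted(results, key=lambda r: r["shard_id"])
    return {"sum": sum(r["value"] for r in ordered), "shards": len(ordered)}

shards = plan_shards({"profile": "insurance.v1"}, n=4)
seeds = [derive_shard_seed("root-seed", s["shard_id"]) for s in shards]
results = [run_shard(s, seed) for s, seed in zip(shards, seeds)]
verify_spotcheck(shards, seeds, results, sample_ids=[0, 2])
final = reduce_results(results)
```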


3. Convergence Detection (probabilistic workloads)

For probabilistic workloads, execution may stop early if convergence is achieved.

Convergence is measurable, not assumed. Early stop triggers when any of the following holds:

  • target confidence band reached
  • variance stabilizes
  • iteration budget exhausted (hard stop, even without convergence)

If convergence is used, the result must still be replayable using the same contract.
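One common way to make the confidence-band criterion concrete is a standard-error check on the running sample, sketched below. This is an assumption about the statistic, not the Kernel's specified test; note the seeded RNG, since time-based entropy is prohibited inside execution.

```python
import math
import random

def converged(samples: list[float], half_width: float, z: float = 1.96) -> bool:
    # Converged when the z-scaled standard error of the mean fits
    # inside the target confidence half-width.
    n = len(samples)
    if n < 2:
        return False
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    return z * math.sqrt(var / n) <= half_width

rng = random.Random(42)      # seeded: no time-based entropy
max_iters = 100_000          # iteration budget (hard stop)
samples: list[float] = []
for i in range(max_iters):
    samples.append(rng.gauss(10.0, 2.0))
    if i % 1000 == 0 and converged(samples, half_width=0.05):
        break
```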


4. Deterministic Seed Semantics

Seed handling must:

  • propagate the seed to all shards
  • preserve it across replay
  • prohibit time-based entropy inside execution

Derived mode produces a root seed and shard seeds deterministically. Explicit mode pins the provided root seed.
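Both modes can be sketched with a hash chain. The derivation function and envelope-key format below are assumptions for illustration; the Kernel's actual derivation is not specified here.

```python
import hashlib

def root_seed(seed: dict, envelope_key: str) -> str:
    # Explicit mode pins the provided value; derived mode hashes a
    # stable key built from the envelope, so replay reproduces it.
    if seed["mode"] == "explicit":
        return seed["value"]
    return hashlib.sha256(envelope_key.encode()).hexdigest()

def shard_seed(root: str, shard_id: int) -> str:
    # Each shard gets a deterministic child seed from the root.
    return hashlib.sha256(f"{root}/{shard_id}".encode()).hexdigest()

root = root_seed({"mode": "derived"}, "mc:1:insurance.v1")
seeds = [shard_seed(root, i) for i in range(3)]
```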

Replay requires the same:

  • op.name
  • op.version
  • op.profile
  • args
  • seed (mode + value, or derived behavior)
  • reduction rules

(See: Replay)


5. Output Requirements

All workloads must return:

  • output (domain result surface)
  • iteration count / work performed
  • shard execution metadata (executions)
  • verification outcomes (if enabled)
  • replay reference (root_seed / replay_key)
  • artifact references (kv/blob pointers + hashes)

No workload may return a non-reproducible artifact.
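As a sketch, a reduced result surface covering the requirements above might look like the following. Field names are illustrative, not a fixed schema, and elided values are shown as placeholders.

```json
{
  "output": { "...": "domain result surface" },
  "iterations": 0,
  "executions": [
    { "agent_id": "...", "shard_id": 0, "seed": "...", "result_hash": "..." }
  ],
  "verification": { "mode": "spotcheck", "checked": 0, "mismatches": 0 },
  "replay": { "root_seed": "...", "replay_key": "..." },
  "artifacts": [
    { "key": "jobs/...", "sha256": "..." }
  ]
}
```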


6. Artifact Discipline

Artifacts are first-class execution truth:

  • immutable once written
  • hash-addressed
  • sufficient for audit and replay

Typical artifacts include:

  • reduced result JSON
  • executions list (agent_id, shard_id, seed, result_hash, wall/cpu metrics)
  • verification report
  • metrics snapshot
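The immutability and hash-addressing rules can be sketched with an in-memory store. This is a sketch under the assumption of canonical JSON encoding; the Kernel's actual KV/blob backend is not described here.

```python
import hashlib
import json

class ArtifactStore:
    """Hash-addressed, write-once artifact store (in-memory sketch)."""

    def __init__(self):
        self._blobs: dict[str, bytes] = {}

    def put(self, obj) -> str:
        # Canonical encoding (sorted keys, fixed separators) so the
        # same content always yields the same address.
        blob = json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()
        key = hashlib.sha256(blob).hexdigest()
        # Immutable once written: existing keys are never overwritten,
        # and re-putting identical content is a no-op.
        self._blobs.setdefault(key, blob)
        return key

    def get(self, key: str):
        return json.loads(self._blobs[key])

store = ArtifactStore()
key = store.put({"result": 42, "shards": 4})
```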

7. Failure Model (high level)

Failures are explicit and recorded:

  • invalid request (schema / auth / quota)
  • hub scheduling failure
  • agent execution failure
  • verification failure
  • aggregation failure

A failure still produces a traceable record (job_id, error code, and minimal metadata) suitable for audit and incident review.
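A minimal failure record matching this taxonomy could be shaped as below. The error-code names and fields are illustrative assumptions, not the Kernel's actual error schema.

```python
from dataclasses import dataclass
from typing import Optional

# Codes mirroring the failure taxonomy above (names illustrative).
FAILURE_CODES = {
    "invalid_request", "scheduling_failed", "agent_failed",
    "verification_failed", "aggregation_failed",
}

@dataclass(frozen=True)
class FailureRecord:
    """Traceable record emitted on failure, suitable for audit."""
    job_id: str
    code: str
    message: str
    shard_id: Optional[int] = None  # set for agent/verification failures

    def __post_init__(self):
        if self.code not in FAILURE_CODES:
            raise ValueError(f"unknown failure code: {self.code}")

rec = FailureRecord(job_id="j-123", code="verification_failed",
                    message="shard 2 mismatch", shard_id=2)
```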