
Execution Path

This document explains what happens when a workload is executed through Forge Pool.

While different workloads use different primitives and profiles, all of them pass through the same canonical runtime path.


Canonical Path

Client / System
  → Web / API surface
  → Hub ingress
  → Shard planning
  → Scheduling and dispatch
  → Agents
  → Primitive + Profile execution
  → Verification
  → Aggregation
  → Final result

Step 1 — Request Intake

A client or integrated system submits a workload request.

At this stage, the system handles:

  • authentication
  • structural validation
  • request normalization
  • admission into the runtime

This stage ensures that only valid work enters the core execution system.
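The intake checks above can be sketched as a single validate-and-normalize step. This is a minimal illustration, not the real schema: the field names (`primitive`, `profile`, `params`) and the normalization rules are assumptions.

```python
# Hypothetical intake sketch: structural validation plus normalization.
# Field names and rules are illustrative, not Forge Pool's actual schema.

REQUIRED_FIELDS = {"primitive", "profile", "params"}

def normalize_request(raw: dict) -> dict:
    """Reject structurally invalid requests; return a normalized copy."""
    missing = REQUIRED_FIELDS - raw.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    # Normalization: canonicalize identifiers and keep only known keys,
    # so downstream stages see one consistent request shape.
    return {
        "primitive": raw["primitive"].strip().lower(),
        "profile": raw["profile"].strip().lower(),
        "params": dict(raw["params"]),
    }
```

Anything that fails this step is rejected before it reaches the Hub, which is what keeps invalid work out of the core execution system.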


Step 2 — Hub Ingress

The Hub receives the request and establishes execution context.

This includes:

  • job identity
  • project scope
  • runtime metadata
  • policy constraints
  • workload classification

The Hub does not compute at this stage.

It prepares the request for distributed execution.
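The execution context the Hub establishes can be pictured as an immutable record carrying the items listed above. The field names here are assumptions for illustration; the real context object may differ.

```python
from dataclasses import dataclass, field

# Illustrative execution context assembled at Hub ingress.
# Frozen so later stages can read it but never mutate it.

@dataclass(frozen=True)
class ExecutionContext:
    job_id: str                                   # job identity
    project: str                                  # project scope
    primitive: str                                # workload classification
    profile: str
    metadata: dict = field(default_factory=dict)  # runtime metadata
    policies: tuple = ()                          # policy constraints

ctx = ExecutionContext(
    job_id="job-001", project="demo",
    primitive="mc@1", profile="eta.v1",
)
```

Making the context immutable matches the Hub's role at this stage: it establishes the facts of the job without computing anything.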


Step 3 — Shard Planning

The workload is decomposed into shards appropriate to its primitive and profile.

Examples:

  • Monte Carlo → iteration blocks
  • BLAS → matrix tiles
  • media processing → segments
  • ensemble modeling → member partitions

Shard planning must remain deterministic for reproducibility.
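For the Monte Carlo case, deterministic shard planning can be sketched as a pure function from workload size to iteration blocks: the same inputs always yield the same shard list. The shard record shape is an assumption for illustration.

```python
def plan_mc_shards(total_iterations: int, block_size: int) -> list:
    """Deterministically split a Monte Carlo workload into iteration blocks.

    Because this is a pure function of its inputs, replanning the same
    job always reproduces the same shards, which is what makes replay
    and auditing possible.
    """
    shards = []
    start = 0
    index = 0
    while start < total_iterations:
        end = min(start + block_size, total_iterations)
        shards.append({"shard": index, "start": start, "count": end - start})
        start = end
        index += 1
    return shards
```

The same principle applies to the other primitives: tiles, segments, and member partitions are all computed from the workload description, never from runtime state.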


Step 4 — Scheduling

The Scheduler assigns shards to Agents based on:

  • capability
  • throughput history
  • network quality
  • reliability
  • fairness and policy

The goal is not only speed, but correct and stable distributed execution.
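A simple way to picture the assignment decision is a weighted score over the signals listed above, with a deterministic tie-break. The signal names and linear weighting are illustrative; the real Scheduler likely uses richer models of each factor.

```python
def score_agent(agent: dict, weights: dict) -> float:
    """Combine scheduling signals into a single placement score (illustrative)."""
    return (
        weights["capability"] * agent["capability"]
        + weights["throughput"] * agent["throughput"]
        + weights["network"] * agent["network"]
        + weights["reliability"] * agent["reliability"]
    )

def pick_agent(agents: list, weights: dict) -> dict:
    # Break score ties by agent id so the assignment stays deterministic.
    return max(agents, key=lambda a: (score_agent(a, weights), a["id"]))
```

Fairness and policy would enter as filters or weight adjustments before scoring; the point is that placement is a reproducible decision, not an ad hoc one.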


Step 5 — Dispatch

Shards are dispatched from the Hub to Agents using the transport layer.

The system uses QUIC for:

  • multiplexed transport
  • reliability under imperfect network conditions
  • efficient result return
  • scalable Agent connectivity
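Independently of the transport, each dispatched shard can be framed with an integrity checksum so the receiving Agent can detect corruption. This is a hypothetical wire shape: the real system rides QUIC streams, which this sketch does not model.

```python
import hashlib
import json

def make_dispatch_envelope(job_id: str, shard: dict) -> bytes:
    """Frame a shard with a SHA-256 digest of its canonical JSON body."""
    body = json.dumps({"job": job_id, "shard": shard}, sort_keys=True).encode()
    digest = hashlib.sha256(body).hexdigest()
    return json.dumps({"digest": digest, "body": body.decode()}).encode()

def verify_envelope(frame: bytes) -> dict:
    """Check the digest and return the decoded shard message."""
    outer = json.loads(frame)
    body = outer["body"].encode()
    if hashlib.sha256(body).hexdigest() != outer["digest"]:
        raise ValueError("corrupt frame")
    return json.loads(body)
```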

Step 6 — Agent Execution

Agents execute the assigned shard inside an isolated runtime boundary.

At this point:

  • the primitive family is known
  • the profile semantics are known
  • the shard parameters are fixed
  • seeded determinism is applied where required

This is where actual compute occurs.
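Seeded determinism can be sketched as deriving one independent RNG stream per shard from a single job seed, so the whole job is replayable while shards stay statistically independent. The derivation scheme here (hashing the seed and shard index) is an assumption, not necessarily the one Forge uses.

```python
import hashlib
import random

def shard_rng(job_seed: int, shard_index: int) -> random.Random:
    """Derive a reproducible, per-shard RNG stream from one job seed.

    Hashing (job_seed, shard_index) gives each shard its own stream;
    re-running the same shard of the same job always replays the same
    sequence.
    """
    material = f"{job_seed}:{shard_index}".encode()
    derived = int.from_bytes(hashlib.sha256(material).digest()[:8], "big")
    return random.Random(derived)
```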


Step 7 — Primitive and Profile Execution

The primitive defines the computation family.

The profile defines the exact workload semantics.

Examples:

  • mc@1 + eta.v1
  • graph@1 + financial.contagion.v1

This is the canonical computation layer of Forge.
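The identifier shapes in the examples above can be split mechanically: `<family>@<version>` for the primitive and `<name>.v<version>` for the profile. This parser only reflects the two examples shown; the real identifier grammar may be stricter.

```python
def parse_workload_id(primitive_id: str, profile_id: str) -> dict:
    """Split identifiers like "mc@1" and "eta.v1" into their parts.

    Shapes assumed from the examples: "<family>@<version>" for the
    primitive, "<name>.v<version>" for the profile.
    """
    family, _, prim_version = primitive_id.partition("@")
    name, _, prof_version = profile_id.rpartition(".v")
    return {
        "family": family,
        "primitive_version": int(prim_version),
        "profile": name,
        "profile_version": int(prof_version),
    }
```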


Step 8 — Result Return

Agents return structured partial outputs to the Hub.

These outputs may include:

  • scalar metrics
  • histograms
  • matrix tiles
  • media segments
  • diagnostics
  • verification metadata

Returned outputs remain partial until reduction is complete.


Step 9 — Verification

Depending on workload and policy, the system may perform:

  • redundant shard comparison
  • statistical consistency checks
  • structural validation
  • integrity checks on returned outputs

Verification strengthens correctness and trust.
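Redundant shard comparison, the first check above, reduces to asking whether independently executed copies of the same shard agree. A relative tolerance covers floating-point outputs; the tolerance value here is an illustrative policy choice, not a Forge default.

```python
import math

def compare_redundant(results: list, rel_tol: float = 1e-9) -> bool:
    """Check that redundantly executed copies of a shard agree.

    Deterministic integer outputs would agree exactly; a relative
    tolerance accommodates floating-point outputs whose low bits may
    differ across hardware.
    """
    first = results[0]
    return all(math.isclose(r, first, rel_tol=rel_tol) for r in results[1:])
```

A disagreement would typically trigger re-execution or quarantine of the offending Agent, per policy.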


Step 10 — Aggregation

The Aggregation Layer merges partial results into one deterministic final output.

Aggregation logic depends on workload type, but the goal is always the same:

  • preserve correctness
  • preserve reproducibility
  • produce a normalized final result
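For a Monte Carlo-style workload, the deterministic merge can be sketched as a reduction over partials in a fixed shard order, so floating-point accumulation is identical across runs. The partial shape (`count` / `sum`) is an assumption for illustration.

```python
def aggregate_partials(partials: dict) -> dict:
    """Merge per-shard partial sums into one deterministic final result.

    Iterating shards in sorted order fixes the reduction order, so the
    accumulated floating-point result is the same on every run,
    regardless of the order in which Agents returned their outputs.
    """
    total_count = 0
    total_sum = 0.0
    for shard_index in sorted(partials):
        part = partials[shard_index]
        total_count += part["count"]
        total_sum += part["sum"]
    return {"count": total_count, "mean": total_sum / total_count}
```

Arrival order varies run to run; reduction order must not.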

Step 11 — Final Delivery

The Hub constructs and returns the final response.

This may include:

  • final result values
  • metadata
  • diagnostics
  • execution references
  • runtime statistics

At this point, the workload is complete.


What Does Not Happen in the Execution Path

The following are not part of the canonical compute path:

  • ad hoc client-side compute
  • adapter-defined compute truth
  • local semantic overrides of primitive behavior
  • hidden side effects that alter outcomes

These are explicitly excluded because they would break system integrity.


Why This Matters

Forge is not defined only by distributed work.

It is defined by controlled distributed execution.

That means every request follows a known, auditable path from intake to result.