Build an Adapter
Overview
Forge adapters are open by design.
There is no single required implementation template, framework, or internal class structure.
However, every Forge-compatible adapter should be built around the same core idea:
Shape input.
Orchestrate execution if needed.
Shape output.
Never redefine compute truth.
This page describes how to build an adapter that remains compatible with the Forge execution model.
Design Goal
A valid adapter should answer three questions clearly:
- What input does it accept?
- Does it call Forge primitives or not?
- What output does it produce?
If those three things are legible, the adapter is already on the right path.
Canonical Build Path
The typical build path looks like this:
```text
Define responsibility
→ define input contract
→ define output contract
→ implement validation
→ implement transformation
→ optionally implement execution orchestration
→ finalize output
→ add observability and replay references
```
Not every adapter uses every step, but this is the standard design path.
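The build path above can be sketched as a single pipeline function. This is a minimal, hypothetical skeleton; none of these function names are part of any Forge API, and the bodies stand in for real logic.

```python
# Hypothetical adapter skeleton following the canonical build path.
# All names here are illustrative, not a Forge API.

def validate(raw: dict) -> dict:
    # Fail early on obviously invalid state.
    if "portfolio" not in raw:
        raise ValueError("missing required field: portfolio")
    return raw

def transform(valid: dict) -> dict:
    # Convert external state into adapter-usable state.
    return {"positions": valid["portfolio"]}

def execute_if_needed(internal: dict) -> dict:
    # A non-compute adapter simply passes state through;
    # a compute adapter would delegate to Forge primitives here.
    return internal

def shape_output(result: dict) -> dict:
    # Finalize a bounded, legible output.
    return {"summary": {"position_count": len(result["positions"])}}

def run_adapter(raw_input: dict) -> dict:
    validated = validate(raw_input)
    internal = transform(validated)
    result = execute_if_needed(internal)
    return shape_output(result)
```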
Step 1 — Define Responsibility
Before writing code, define the adapter’s primary responsibility.
Examples:
- normalize external portfolio input
- orchestrate a probabilistic risk execution flow
- aggregate multiple primitive outputs
- export a result surface to an external system
Do not start with implementation details.
Start with:
What boundary role does this adapter serve?
That answer determines everything else.
Step 2 — Define the Input Contract
Every adapter needs an explicit input contract.
At minimum, define:
- required fields
- optional fields
- structural expectations
- validation rules
- default behavior
Questions to answer:
- What does the adapter require to run?
- What may be missing?
- What is invalid?
- What can be defaulted?
The input contract should be strict enough to prevent ambiguity, but flexible enough to support real-world use.
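One way to make the contract explicit is a small data class that encodes required fields, optional fields, defaults, and validation in one place. The field names below are illustrative assumptions for a portfolio-ingest adapter, not a Forge schema.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical input contract for a portfolio-ingest adapter.
# Field names are illustrative, not a Forge schema.

@dataclass
class PortfolioInput:
    positions: list             # required: list of {"symbol", "weight"} dicts
    currency: str = "USD"       # optional, defaulted
    as_of: Optional[str] = None # optional, may be missing

    def __post_init__(self):
        if not self.positions:
            raise ValueError("positions is required and must be non-empty")
        for p in self.positions:
            if "symbol" not in p or "weight" not in p:
                raise ValueError(f"invalid position: {p}")
```

Constructing `PortfolioInput(positions=[...])` either yields a fully defaulted, validated object or fails immediately with the reason, which is exactly the strict-but-flexible balance the contract should strike.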
Step 3 — Define the Output Contract
The adapter should emit a clear and bounded output.
Typical output surfaces include:
- normalized internal envelopes
- primitive-ready execution results
- aggregated domain summaries
- export-safe result bundles
- replay or trace metadata
The output contract should make it obvious whether the adapter:
- only transformed state
- used Forge compute
- combined upstream results
- produced a delivery-oriented output
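A simple way to make those distinctions visible is an output envelope that carries provenance alongside the payload. This is a sketch under assumed names; `used_forge_compute` and `sources` are illustrative fields, not a Forge type.

```python
from dataclasses import dataclass

# Hypothetical output envelope that makes compute participation explicit.

@dataclass(frozen=True)
class AdapterOutput:
    payload: dict             # the shaped result
    used_forge_compute: bool  # True only if primitives were actually called
    sources: tuple            # upstream execution references, if any

out = AdapterOutput(
    payload={"var_95": 0.042},
    used_forge_compute=True,
    sources=("exec-123",),
)
```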
Step 4 — Decide Whether It Uses Forge Compute
This is a critical design decision.
Adapters may be:
Non-compute adapters
These do not call primitives.
They usually handle:
- ingestion
- validation
- normalization
- export shaping
- ecosystem bridging
Compute-participating adapters
These call one or more Forge primitives.
They usually handle:
- execution planning
- profile selection
- multi-stage orchestration
- result collection and shaping
Be explicit.
A good adapter never leaves this ambiguous.
Step 5 — Implement Validation
Validation should happen early.
Typical validation responsibilities include:
- required field checks
- schema correctness
- domain constraint checks
- supported mode / policy checks
- execution safety checks
Validation should fail explicitly and preserve error context.
Do not defer obviously invalid state deeper into the pipeline.
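A validation pass that collects explicit, contextual errors might look like the following sketch. `SUPPORTED_MODES` and the required fields are assumptions for illustration.

```python
# Hypothetical early-validation pass that fails explicitly and
# preserves error context. "mode" is an illustrative policy check.

SUPPORTED_MODES = {"fast", "balanced", "precise"}

def validate_request(req: dict) -> list:
    errors = []
    # Required field checks.
    for field_name in ("primitive", "args"):
        if field_name not in req:
            errors.append(f"missing required field: {field_name}")
    # Supported mode / policy check.
    mode = req.get("mode", "balanced")
    if mode not in SUPPORTED_MODES:
        errors.append(
            f"unsupported mode: {mode!r} (allowed: {sorted(SUPPORTED_MODES)})"
        )
    return errors

errs = validate_request({"args": {}, "mode": "turbo"})
```

Returning a list of errors (rather than stopping at the first) keeps the full error context visible to the caller.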
Step 6 — Implement Transformation
Transformation converts external state into adapter-usable state.
Typical transformation steps:
- normalize names and field structures
- reshape arrays or matrices
- standardize units
- derive internal envelopes
- enrich missing metadata when appropriate
This is usually where an adapter becomes truly useful.
It isolates the rest of the system from external messiness.
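For example, a transformation step that normalizes field names and standardizes units could look like this sketch; the alias table and percentage convention are assumptions, not a real external format.

```python
# Hypothetical transformation: normalize field names and standardize
# units before anything downstream sees the data.

FIELD_ALIASES = {"Ticker": "symbol", "Wt%": "weight_pct"}

def normalize_position(raw: dict) -> dict:
    # Normalize names: apply known aliases, lowercase the rest.
    pos = {FIELD_ALIASES.get(k, k.lower()): v for k, v in raw.items()}
    # Standardize units: percentage weights become fractions.
    if "weight_pct" in pos:
        pos["weight"] = pos.pop("weight_pct") / 100.0
    return pos
```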
Step 7 — Build Execution Requests (If Needed)
If the adapter participates in compute, it should build canonical primitive requests.
Typical request fields include:
- primitive name
- primitive version
- profile
- args
- seed
- policy
- artifact preferences
The adapter should not invent new execution semantics.
It should map domain state into the existing canonical execution model.
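A request builder that maps domain state onto those canonical fields might look like the following. The field names mirror the list above; the builder itself and values like `"risk.var"` are illustrative, not a Forge API.

```python
# Hypothetical request builder: domain state in, canonical
# primitive request out. Field values are illustrative.

def build_primitive_request(positions, profile="standard", seed=42):
    return {
        "primitive": "risk.var",             # primitive name (illustrative)
        "version": "1",                      # primitive version
        "profile": profile,                  # execution profile
        "args": {"positions": positions},    # domain state mapped into args
        "seed": seed,                        # for reproducibility
        "policy": {"timeout_s": 30},         # execution policy
        "artifacts": {"include_trace": True} # artifact preferences
    }
```

Note that the builder only maps state; it does not define any new execution semantics of its own.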
Step 8 — Delegate Compute
Adapters do not perform canonical compute.
They delegate it.
That means:
- send execution requests through the canonical execution surface
- preserve primitive boundaries
- collect responses without mutating primitive meaning
The adapter may orchestrate one primitive or many.
It may run:
- sequential flows
- staged flows
- multi-primitive chains
But the compute itself remains outside the adapter.
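Delegation can be sketched as an orchestration loop that hands every request to an injected execution surface and never computes locally. Both `execute` and `fake_execute` are hypothetical stand-ins for the canonical execution surface.

```python
# Hypothetical orchestration: the adapter builds requests and delegates
# each one to an injected execution surface; compute stays outside.

def run_staged_flow(requests, execute):
    """`execute` stands in for the canonical execution surface."""
    responses = []
    for req in requests:
        resp = execute(req)     # delegate; never compute locally
        responses.append(resp)  # collect without mutating meaning
    return responses

# A fake execution surface, for demonstration only.
def fake_execute(req):
    return {"primitive": req["primitive"], "value": len(req["args"])}
```

Because the execution surface is passed in, the same orchestration logic works for sequential flows, staged flows, and multi-primitive chains.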
Step 9 — Shape the Output
After validation, transformation, and optional execution, the adapter should shape final output.
This may include:
- summary construction
- domain-specific result formatting
- chart / table shaping
- export bundle generation
- replay reference packaging
- trace metadata
Output shaping is allowed.
Output falsification is not.
The adapter must preserve the meaning of upstream results.
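The shaping-versus-falsification distinction is easy to keep concrete: format for presentation, but carry the upstream value through unchanged. A minimal sketch, with hypothetical field names:

```python
# Hypothetical output shaping: format the upstream result for display
# while preserving its meaning verbatim.

def shape_result(primitive_response: dict) -> dict:
    value = primitive_response["value"]
    return {
        "display": f"VaR(95%): {value:.1%}",          # presentation only
        "value": value,                               # meaning preserved
        "execution_ref": primitive_response.get("ref") # traceability kept
    }
```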
Step 10 — Preserve Traceability
If the adapter participates in canonical execution, it should preserve enough information to make the flow understandable.
Typical trace surfaces include:
- request identifiers
- primitive execution references
- replay tokens
- adapter version
- execution metadata
- stage names
This is essential for:
- debugging
- auditing
- replay
- trust
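One way to bundle those trace surfaces is a small envelope attached to every adapter output. The structure and version string below are assumptions for illustration.

```python
import uuid

# Hypothetical trace envelope bundling the surfaces listed above.

def make_trace(stage: str, execution_refs: list,
               adapter_version: str = "0.1.0") -> dict:
    return {
        "request_id": str(uuid.uuid4()),       # unique per adapter run
        "stage": stage,                        # stage name
        "execution_refs": list(execution_refs),# primitive execution refs
        "adapter_version": adapter_version,    # which adapter produced this
    }
```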
Recommended Internal Structure
Forge does not require one internal code layout.
Still, a practical structure usually separates responsibilities such as:
- handler / transport boundary
- service / orchestration logic
- validator
- mapper / request builder
- selector / response shaper
- shared utilities
A typical service layout may look like:
```text
adapter/
  handler
  service
  validator
  mapper
  selector
  types
  util
```
This is not mandatory.
It is simply a proven way to keep adapters understandable.
Minimal Adapter Shapes
Minimal Non-Compute Adapter
Good for:
- ingest
- export
- bridge adapters
Typical shape:
```text
Input
→ Validate
→ Transform
→ Output
```
Minimal Compute Adapter
Good for:
- single primitive orchestration
Typical shape:
```text
Input
→ Validate
→ Transform
→ Build primitive request
→ Execute
→ Shape output
```
Multi-Stage Adapter
Good for:
- chained domain flows
- perception pipelines
- institutional orchestration
Typical shape:
```text
Input
→ Validate
→ Transform
→ Stage 1 execution
→ Stage 2 execution
→ Aggregate
→ Finalize
```
Example Design Questions
Before finalizing an adapter, ask:
Responsibility
- Is this adapter mainly ingest, execution, aggregation, output, or bridge?
Truth
- Where does compute truth come from?
Boundary
- Is it clear what happens locally vs what happens through Forge execution?
Replay
- If this adapter participates in canonical flow, can the request path be reconstructed?
Honesty
- Does the output clearly reflect what actually happened?
If the answer to any of these is “not really”, the adapter needs refinement.
Anti-Patterns
Avoid these patterns:
Hidden compute
Running local compute while presenting it as Forge execution
Semantic override
Changing what a primitive result means
Boundary collapse
Mixing validation, compute, aggregation, and output shaping into unreadable logic without role clarity
Opaque outputs
Returning results without traceability or execution context
Fake determinism
Claiming replayability without preserving the inputs or execution references required for replay
Forge-Compatible Adapter Checklist
A solid adapter should have:
- a clear responsibility
- an explicit input contract
- an explicit output contract
- visible validation
- visible transformation logic
- explicit compute participation or non-participation
- boundary clarity
- preserved traceability
- no primitive semantic override
Build Philosophy
Forge is strict at the core and open at the edge.
That means adapter authors have freedom in implementation, but not freedom to distort execution truth.
This is the correct balance:
- maximum ecosystem flexibility
- minimum semantic drift
Summary
To build a Forge-compatible adapter:
- define the boundary clearly
- keep the responsibility legible
- use primitives for compute truth
- preserve replay and traceability
- be honest about what the adapter does
That is enough to build adapters that scale with the system instead of fragmenting it.
