Audits

Status (as of February 6, 2026): Signals v1 is under audit and no public audit reports are available yet. Until reports are published, treat the protocol as unaudited and do your own research (DYOR) before using it.

"Under audit" means independent review is ongoing, not completed. Until reports are public, unknown issues may exist.

Audits reduce uncertainty but do not remove it. An ongoing audit means the review process is incomplete, and findings may still be pending. That is a material difference from a completed audit with published fixes.

Audits cover only a defined scope and a specific code snapshot. They do not guarantee safety, and they do not remove trust assumptions around oracles, governance, and chain liveness.

Audit results should be read as evidence about a particular scope, at a particular time. They are not a blanket statement about the protocol's safety. Even a strong audit can miss issues, and a narrow audit scope can leave key surfaces untouched.

Signals v1 is also upgradeable. When implementations change, the relevant question is which code was reviewed and whether later deployments match that reviewed snapshot.

What an audit would cover in Signals v1

Signals v1 is not a single contract. It is a composed system with a few distinct mechanism surfaces:

  • Core and entrypoints: the proxy entrypoints that define public behavior and route calls to modules.
  • Trading and pricing: range position open/increase/decrease/close logic, CLMSR cost math, rounding, and fee overlay accounting.
  • Lifecycle and finality: market creation validation, settlement state machine, finalization transitions, and the settlement-to-claim gating rule.
  • Oracle integration: signed pull validation, admissible sample constraints, candidate selection rule, and fallback settlement path.
  • Safety and accounting: admissible market creation gates, daily maker-side accounting, payout reserve escrow, and fee waterfall allocation.
  • Upgrade and authorization surface: governance-controlled wiring, executor allowlists, pause behavior, and upgradeable implementation changes.

In practice, this usually maps to the following contracts and modules:

  • SignalsCore and its proxy surface
  • SignalsPosition and the claim semantics surface
  • TradeModule, MarketLifecycleModule, OracleModule, RiskModule, LPVaultModule
  • Fee policy contracts referenced by markets

An audit scope that omits one of these surfaces can still be useful, but its conclusions should be interpreted narrowly. Signals is intentionally modular. That improves composability, but it also means an incomplete audit can leave the highest-risk surface untouched.

Two surfaces are easy to underestimate:

  • Module wiring and delegatecall context. Behavior is composed through module addresses but executed in the core storage context. A wiring mistake is not "just configuration." It changes the executed code path.
  • Boundary arithmetic. Rounding direction and unit conversions show up as tiny discrepancies per trade. Those discrepancies are part of the economic contract because they compound across volume.
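
The compounding point can be made concrete. A minimal sketch, assuming a hypothetical settlement token with 6-decimal precision (the precision, names, and numbers are illustrative, not Signals parameters):

```python
from decimal import Decimal, ROUND_FLOOR, ROUND_CEILING

# Hypothetical 6-decimal settlement token; illustrative only.
UNIT = Decimal("0.000001")

def quote(cost: Decimal, *, favor_protocol: bool) -> Decimal:
    """Round a raw cost to token precision in a fixed direction."""
    mode = ROUND_CEILING if favor_protocol else ROUND_FLOOR
    return cost.quantize(UNIT, rounding=mode)

raw = Decimal("1.0000004")              # raw cost from the pricing math
up = quote(raw, favor_protocol=True)    # 1.000001
down = quote(raw, favor_protocol=False) # 1.000000

# Per-trade discrepancy is at most one token unit...
assert up - down <= UNIT
# ...but across 1,000,000 identical trades it compounds to a full token.
assert (up - down) * 1_000_000 == Decimal("1.000000")
```

The direction of rounding decides who absorbs that compounded difference, which is why it belongs in scope rather than being dismissed as a precision detail.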

Scope boundaries that affect conclusions

Audit results depend heavily on scope boundaries. A report can be correct within its scope and still leave large surfaces unevaluated.

Examples of boundaries that materially change conclusions:

  • auditing only the core proxy entrypoints but not the delegated modules
  • auditing the delegated modules but not the proxy wiring and authorization
  • auditing trading math but not settlement windows and claim gating
  • auditing settlement but not the maker-side accounting and fee waterfall
  • auditing Solidity logic but not economic behavior under adversarial sequences

Signals exposes configuration and module wiring on-chain, but a report still needs to state which wiring and which deployed implementations were reviewed.

Upgradeable scope and regime boundaries

Signals v1 uses stable proxy entrypoints. That has two implications for audits.

First, an audit can cover the logic of a module or an implementation contract without covering the deployed regime. A deployed regime is defined by:

  • proxy addresses,
  • current implementation addresses behind those proxies,
  • module wiring addresses referenced by the core,
  • and fee policy contracts referenced by live markets.

Second, a report is most actionable when it includes a mapping between the reviewed snapshot and the deployed regime. "Reviewed commit hash X" is more useful than "reviewed Signals v1" because upgrades are explicit.
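
The snapshot-to-regime comparison can be sketched as a field-by-field diff. All names here (Regime, regime_diff, the module labels, the addresses) are hypothetical illustrations, not the Signals ABI:

```python
from dataclasses import dataclass

# Illustrative model of a deployed regime, per the four bullets above.
@dataclass
class Regime:
    proxies: frozenset[str]            # stable proxy entrypoint addresses
    implementations: dict[str, str]    # proxy -> current implementation address
    modules: dict[str, str]            # module name -> wired module address
    fee_policies: frozenset[str]       # fee policies referenced by live markets

def regime_diff(reviewed: Regime, deployed: Regime) -> list[str]:
    """List the surfaces where the deployed regime departs from the reviewed snapshot."""
    gaps = []
    for proxy, impl in deployed.implementations.items():
        if reviewed.implementations.get(proxy) != impl:
            gaps.append(f"implementation behind {proxy} not in reviewed snapshot")
    for name, addr in deployed.modules.items():
        if reviewed.modules.get(name) != addr:
            gaps.append(f"module wiring for {name} not in reviewed snapshot")
    for policy in sorted(deployed.fee_policies - reviewed.fee_policies):
        gaps.append(f"fee policy {policy} not in reviewed snapshot")
    return gaps

reviewed = Regime(frozenset({"0xCoreProxy"}),
                  {"0xCoreProxy": "0xImplV1"},
                  {"TradeModule": "0xTradeV1"},
                  frozenset({"0xFeeA"}))
deployed = Regime(frozenset({"0xCoreProxy"}),
                  {"0xCoreProxy": "0xImplV2"},   # upgraded after the review
                  {"TradeModule": "0xTradeV1"},
                  frozenset({"0xFeeA"}))

assert regime_diff(reviewed, reviewed) == []
assert regime_diff(reviewed, deployed) == [
    "implementation behind 0xCoreProxy not in reviewed snapshot"
]
```

A "reviewed commit hash X" style report gives a user the left-hand side of this comparison; without it, there is nothing to diff the live regime against.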

For an upgradeable system, the strongest audit publication includes:

  • the audited code snapshot identifier,
  • a list of contract addresses and bytecode hashes that were reviewed,
  • and remediation notes that map findings to concrete upgrades.

This is not paperwork. It is how an audit becomes an object that can be tied to on-chain regime boundaries.
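
As a sketch of what tying a report to on-chain regime boundaries looks like: given a report's list of reviewed bytecode digests, deployed bytecode can be checked against it. This uses sha256 from the Python standard library as a stand-in digest; on-chain tooling conventionally uses keccak256, and a real check would fetch the code over RPC (e.g. eth_getCode) rather than take a hex string:

```python
import hashlib

def bytecode_digest(deployed_bytecode_hex: str) -> str:
    """Digest of deployed bytecode. sha256 is a stand-in; audits typically use keccak256."""
    code = bytes.fromhex(deployed_bytecode_hex.removeprefix("0x"))
    return hashlib.sha256(code).hexdigest()

def matches_report(address: str, deployed_bytecode_hex: str,
                   reviewed: dict[str, str]) -> bool:
    """True iff the address's deployed bytecode digest matches the report's entry."""
    return reviewed.get(address) == bytecode_digest(deployed_bytecode_hex)

# Hypothetical data: a report listing one reviewed contract.
code_hex = "0x600160005260206000f3"
report = {"0xSignalsCore": bytecode_digest(code_hex)}

assert matches_report("0xSignalsCore", code_hex, report)
assert not matches_report("0xSignalsCore", "0x6002", report)  # different bytecode
```

A mismatch here does not prove a vulnerability; it proves only that the live code is not the code the auditor read, which is exactly the fact a user needs surfaced.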

Economic review versus security review

Signals v1 has two intertwined dimensions.

The security dimension is correctness of state transitions: trading and settlement windows, signature validation, authorization, payout reserve escrow, and vault accounting safety.

The economic dimension is behavior under adversarial sequences: slippage under large trades, monotonicity under rounding, boundary behavior at settlement time, and maker-side loss behavior under stress.

An audit can be strong on one dimension and weak on the other. A complete review usually includes:

  • contract logic correctness,
  • mathematical parity tests against reference models,
  • and scenario-based analysis across timing boundaries.
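
A parity test against a reference model can be sketched as follows. It assumes the standard LMSR-style cost function C(q) = b · ln(Σ exp(qᵢ/b)) and an implementation that reports results in 18-decimal fixed point; both are illustrative assumptions, and the actual Signals math and units may differ:

```python
import math

WAD = 10**18  # 18-decimal fixed point, a common on-chain convention; illustrative

def clmsr_cost_reference(q: list[float], b: float) -> float:
    """High-precision reference model: C(q) = b * ln(sum(exp(q_i / b)))."""
    m = max(x / b for x in q)  # log-sum-exp shift for numerical stability
    return b * (m + math.log(sum(math.exp(x / b - m) for x in q)))

def clmsr_cost_wad(q: list[float], b: float) -> int:
    """Stand-in for an on-chain implementation: the cost truncated to WAD units."""
    return int(clmsr_cost_reference(q, b) * WAD)

# Parity check: the fixed-point result should track the reference model.
q, b = [10.0, 5.0, 0.0], 100.0
ref = clmsr_cost_reference(q, b)
assert math.isclose(ref, clmsr_cost_wad(q, b) / WAD, rel_tol=1e-12)
# A real harness would also pin the rounding direction, not just the tolerance.
```

Scenario-based analysis then runs sequences through the same harness: large trades, trades straddling a settlement boundary, and adversarial orderings, comparing each step against the reference.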

Audit reports

No public audit reports are available yet.

When reports are published, list them here with:

  • Auditor name(s)
  • Report link(s)
  • Scope (contracts/components)
  • Audited code snapshot or deployment references
  • Dates

Scope notes

Signals v1 combines several subsystems: an on-chain pricing engine, a daily settlement state machine, claim semantics for range positions, and maker-side accounting. An audit can cover all of that, or it can focus on a subset.

For Signals, scope boundaries that often affect conclusions include:

  • Pricing math and rounding behavior.
  • Settlement windows, candidate validity checks, and finality transitions.
  • Claim semantics for [lowerTick, upperTick) positions.
  • Fee overlay and fee accounting.
  • Upgradeability and governance authorization surfaces.
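
The half-open claim rule is small but worth pinning down precisely, because the upper boundary is exactly the kind of surface a review should exercise. A minimal sketch (the function name and tick values are hypothetical):

```python
def position_wins(settlement_tick: int, lower_tick: int, upper_tick: int) -> bool:
    """A range position pays out iff the settlement tick lands in [lowerTick, upperTick)."""
    return lower_tick <= settlement_tick < upper_tick

assert position_wins(100, 100, 110)       # lower bound is inclusive
assert not position_wins(110, 100, 110)   # upper bound is exclusive
assert not position_wins(99, 100, 110)    # below the range
```

An off-by-one at either boundary would let adjacent, supposedly disjoint ranges both pay (or neither pay) on the same settlement tick.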

Until public audit reports are available, the protocol should be treated as unaudited regardless of internal testing or formal reasoning.

Scope boundaries also interact with deployment. A report may audit contract code without auditing the deployment configuration, or it may audit a subset of modules while assuming other components are correct. Even with configuration and module wiring exposed on-chain, a report still needs to state which deployed configuration it covers.

Interpreting audit reports

Audit reports typically include findings, fixes, and residual risks. Two points often carry as much weight as the raw finding list:

  • Scope and snapshot: which contracts, which dependencies, and which code hash or deployment the auditor reviewed.
  • Remediation status: whether findings were fixed, deferred, or accepted as known risks, and whether fixes are reflected in deployed bytecode.

An audit can be high quality and still miss issues. It can also identify issues that are outside the protocol's control, such as oracle trust assumptions or chain-level liveness risk. Those are addressed in the trust model, not "fixed" in code.

Audits also vary in type. A security audit focuses on correctness and exploitability of contract logic. An economic review focuses on mechanism and incentive behavior under adversarial trade sequences. Signals spans both: the pricing engine is mathematical, but it is also an on-chain system with rounding, timing windows, and an authority surface.

Audit reports often contain findings that are "not bugs" but are still regime facts. Examples include:

  • a design choice to gate market creation,
  • a choice to make finalization permissioned in v1,
  • a choice to keep LP access private in v1.

These are not vulnerabilities in the narrow sense, but they are trust surfaces. A high-quality report will describe them explicitly and separate them from implementation defects.

When reports are published, a complete publication set usually includes:

  • The report document itself.
  • A code snapshot identifier or deployment mapping.
  • Remediation notes that map findings to fixes.
  • A short statement of what remains as a known risk surface in v1.

Audit publication is also part of transparency. A report is most useful when it can be tied to a specific deployed regime and when fixes can be traced to a concrete upgrade or configuration change.

Until that publication exists, "under audit" should be treated strictly as an in-progress status, not as a completed security statement.