This is an alpha version of an analytical tool. Outputs are quantitative ranges produced under your declared assumptions — they characterise plausible futures, not predictions.
This tool does not provide financial, medical, or legal advice. All decisions remain yours. Apply your own professional judgement to any result produced here.
Portfolio / multi-outcome runs (schema built, full UI in development)
Persistent sessions and user profiles
Live data stream integration
We are asking alpha testers to report on:
Frame proposals — does the system propose the right frame for your problem?
Input parsing — are your declared values and labels correctly interpreted?
Result notes — are the plain-language result interpretations accurate and useful?
Sensitivity output — does the driver ranking match your domain intuition?
Source type classification — does the multiplier assigned to your input feel calibrated?
Any dead ends, crashes, or responses that feel wrong
Epistemic Forecasting System
Session setup
ALPHA
Domain
Experience
Gates
Which domain are you working in?
This helps me use the right terminology and suggest appropriate frame types. More domains are being added — if yours isn't listed, choose the closest match.
Finance & investment
Revenue, returns, valuations, cost modelling
Healthcare & life sciences
Clinical outcomes, trial results, population metrics
More domains coming — early access users shape what's added next.
How do you work with uncertainty?
This calibrates how I explain results and communicate ranges. You can change this at any time.
Exploring
Plain language explanations
New to probabilistic forecasting or working with it occasionally. I'll explain what ranges and percentiles mean as we go.
Professional
Guided with context
Comfortable with uncertainty ranges and confidence levels. You understand p10/p90 but want context on how they're derived here.
Expert
Full diagnostics, minimal friction
You build or interpret statistical models regularly. Source type classification, sensitivity indices, and mode selection shown by default.
Before you run — three things to confirm
These are epistemic gates — not legal boilerplate. Each one describes how to read the output correctly. They stay visible throughout your session.
1
Declared assumptions only
The engine uses only the values you declare. It does not fill gaps, apply defaults, or pull from external sources. The output is entirely conditional on your inputs.
2
Characterisation, not prediction
The output range shows what is plausible given your declared assumptions — not what will happen. The range reflects input uncertainty, not prediction error.
3
Downside acknowledged
The p10 lower bound is a genuine possibility under your assumptions — not a statistical artefact. You are confirming you have considered it before proceeding.
These gates are recorded in your framing receipt and stay visible throughout the session. They can't be bypassed — they protect the integrity of the output.
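The gate mechanism above can be sketched as a small state object: a run is allowed only once all three confirmations are recorded, and each confirmation is kept for the receipt. This is a minimal illustration, not the actual implementation; the class and gate names are hypothetical.

```python
class FramingGates:
    """Hypothetical sketch of the three epistemic gates.

    A run is permitted only when every gate has been explicitly
    confirmed; confirmations are retained for the framing receipt.
    """

    REQUIRED = ("declared_assumptions_only",
                "characterisation_not_prediction",
                "downside_acknowledged")

    def __init__(self):
        self._confirmed = set()

    def confirm(self, gate: str) -> None:
        if gate not in self.REQUIRED:
            raise ValueError(f"unknown gate: {gate}")
        self._confirmed.add(gate)

    @property
    def status(self) -> str:
        # e.g. "0/3" before any confirmation, "3/3" when ready
        return f"{len(self._confirmed)}/{len(self.REQUIRED)}"

    def can_run(self) -> bool:
        return len(self._confirmed) == len(self.REQUIRED)
```

The key design point the copy describes is that there is no bypass path: `can_run()` only returns true after all three explicit confirmations.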
Source types are set to Expert estimate for all inputs. Review the dropdowns if your inputs were obtained differently.
✕
+ Declare input via shell
Value / File / Dataframe
Review this analysis structure before continuing.
Are you combining values measured in the same unit — for example, revenues from different products — or constructing a composite index across different measures?
Variable
Value
Low
High
Weight
All variables must be in the same unit for portfolio mode. Check variable units before running.
Adjusting the weights changed the sensitivity ranking from your initial entry. The current ranking reflects the weights as shown. Review it before running.
Weight sum: 1.000 ✓
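The two portfolio checks above (same unit across variables, weights summing to 1.000) can be sketched as a single validation pass. The row shape and function name here are hypothetical; the checks themselves are the ones the panel describes.

```python
def validate_portfolio(rows, tol=1e-6):
    """Validate portfolio-mode inputs before a run.

    rows: list of (variable, unit, weight) tuples — a hypothetical
    shape for illustration. Raises ValueError if variables mix units
    or the weights do not sum to 1.000 within tolerance.
    """
    units = {unit for _, unit, _ in rows}
    if len(units) > 1:
        raise ValueError(f"portfolio mode requires one unit, got: {sorted(units)}")

    total = sum(weight for _, _, weight in rows)
    if abs(total - 1.0) > tol:
        raise ValueError(f"weights sum to {total:.3f}, expected 1.000")
    return True
```

A composite index across different measures would fail the unit check here, which is why the setup step asks whether you are combining same-unit values or constructing an index.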
Portfolio run
Shared inputs
No shared inputs declared yet.
❯
Run
This output characterises plausible futures under declared assumptions. It is not a recommendation. All decisions remain yours.
Result
No run yet
◎
Declare inputs and run to see results
Portfolio result
Joint scenarios
Median estimate
—
under these assumptions
p10 — · 80% range · p90 —
Range reflects input uncertainty, not prediction error
Awaiting run
Gate confirmations
0/3
Gates not yet confirmed.
Declared inputs
0 declared
No inputs declared yet.
Shell log
0 entries
No actions logged.
Retrospective prompt
When this horizon passes, recording the actual outcome helps calibrate future analyses.
❮
Receipt
Range simulation
* Each run generates a fresh random sample — dot positions vary between runs. The shape and range reflect your declared uncertainty consistently.
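The behaviour described above — a fresh random sample each run, with a stable shape and range — can be sketched as a small Monte Carlo draw over the declared low/central/high values. The triangular distribution here is an illustrative assumption; the actual engine's sampling model may differ.

```python
import random

def simulate_range(value, low, high, n=2000, seed=None):
    """Draw a fresh sample from the declared range and summarise it.

    Each call without a seed produces different draws (dot positions
    vary between runs) while p10/median/p90 stay consistent with the
    declared uncertainty. A triangular distribution peaked at the
    central value is an assumption for this sketch.
    """
    rng = random.Random(seed)
    draws = sorted(rng.triangular(low, high, value) for _ in range(n))
    return {
        "p10": draws[int(0.10 * n)],
        "median": draws[n // 2],
        "p90": draws[int(0.90 * n)],
    }
```

Passing a seed is useful only for reproducible tests; the tool's normal behaviour corresponds to calling it unseeded.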
How was this value obtained?
Uncertainty multiplier: 1.30
The range you declared is the starting point. How you obtained the value determines how far the system extends it.
Assistant
Gate 1
Problem within scope — single declared outcome, quantifiable inputs confirmed.
Gate 2
Inputs are declared assumptions — not predictions. Uncertainty will be propagated through the engine.
Gate 3
Downside acknowledged — the p10 scenario is a plausible outcome under your declared assumptions.
Shell
Shell — not yet initialised. Assistant initialises after onboarding completes. → Shell settings
Data stream
Data stream — not configured. No model API connection active. Available on paid plans. → Model API settings
No session
—
Gates 0/3
▾ Overview
▸ How it works
▸ Technical basis
Definitions
Source type reference
The source type classifies how an input value was obtained. It determines the uncertainty multiplier applied to your declared range — more direct sources receive narrower multipliers.
Direct measurement ×1.00
You measured the value directly using a calibrated instrument or sensor. No interpretation or third-party reporting involved.
Range impact: declared range used as-is — no expansion.
Recorded observation ×1.10
The value appears in an official record, audited dataset, or published report. You did not measure it yourself but the source is verifiable.
Range impact: slight expansion to account for transcription and reporting uncertainty.
User attested observation ×1.25
You observed the value yourself but did not formally record or instrument it. Based on memory, notes, or informal log.
Range impact: moderate expansion for recall and unverified observation uncertainty.
Expert estimate ×1.30
Derived through professional judgement, domain knowledge, or structured elicitation. Not directly measured but grounded in expertise.
Range impact: standard expansion — default for professionally sourced inputs.
Reported estimate ×1.35
Someone else estimated the value and reported it to you. You are accepting their estimate without independent verification.
Range impact: additional expansion for indirect sourcing and transmission uncertainty.
User assumption ×1.65
You chose this value as a working assumption with no empirical basis. Typically used as a placeholder or sensitivity input.
Range impact: largest expansion — assumption inputs drive the widest output uncertainty.
Mode threshold note: When all inputs are Expert estimate or better, the engine may qualify for Mode 1 (scenario range). Inputs at Reported estimate or User assumption push toward Mode 2 (distributional). The mode shown in the run panel reflects the combination of all declared source types.
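The multiplier table and the mode threshold note can be sketched together: each source type widens the declared range by its multiplier, and the run qualifies for Mode 1 only when every input is Expert estimate or better. Symmetric expansion about the range midpoint is an assumption of this sketch; the engine's actual expansion rule may differ.

```python
# Multipliers as listed in the source type reference, in
# increasing order of uncertainty.
MULTIPLIERS = {
    "direct_measurement": 1.00,
    "recorded_observation": 1.10,
    "user_attested_observation": 1.25,
    "expert_estimate": 1.30,
    "reported_estimate": 1.35,
    "user_assumption": 1.65,
}

def expand_range(low, high, source_type):
    """Widen the declared range by the source-type multiplier.

    Symmetric expansion about the midpoint is an illustrative
    assumption, not a documented engine behaviour.
    """
    m = MULTIPLIERS[source_type]
    mid = (low + high) / 2
    half = (high - low) / 2 * m
    return mid - half, mid + half

def select_mode(source_types):
    """Mode 1 (scenario range) only if every input is
    expert_estimate or better; otherwise Mode 2 (distributional)."""
    ordered = list(MULTIPLIERS)  # insertion order = increasing multiplier
    cutoff = ordered.index("expert_estimate")
    return 1 if all(ordered.index(s) <= cutoff for s in source_types) else 2
```

For example, a declared range of 90–110 stays 90–110 under direct measurement but widens to 83.5–116.5 under user assumption, and a single reported-estimate input is enough to push the run to Mode 2.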