Q3 2026 launch · Limited to 100 Pro launch licences · 10 tester spots open — reserve below →

SYSTEMATIC BITCOIN TRADING · LAUNCHING Q3 2026

Structural state modelling across
interacting timeframes,
with explainable execution logic.

Not an indicator stack. Not a signal bot. Not an AI black box. CORE is a self-contained research and execution platform that models market structure — channels, regimes, band geometry — across six timeframes, then exposes every decision the model makes. Local execution, walkforward-validated, fully auditable.

125k — lines of audited Python
78 — modular subsystems
6 — validation methodologies built in
100% — local; your data never leaves

WHY CORE EXISTS

Most trading systems collapse into one of two extremes: rigid indicator stacks that can't adapt, or opaque AI models nobody can inspect. CORE was built because neither is acceptable for serious systematic trading.

The system is structured as a sequence of independent layers — signal generation, structural context, policy logic, ML scoring, execution, diagnostics. Each one is configurable in isolation. Each one is replaceable. Each one is auditable on its own terms. The architecture isn't a stylistic preference; it's the only way to iterate safely on a system that handles real money.

That separation buys three things you can't get from a monolithic black box: explainable decisions, validation without hidden coupling, and the ability to change one component without rebuilding the rest. The screenshots, the validation report, the strategy builder — they all flow from this one architectural commitment.

Built for traders who want the keys.

Every signal, filter, and gate is configurable and inspectable. The screenshot below is one tab — there are sixteen more.

[App screenshot — tabs: CORE PC · Trade Policy · ML Workshop · Autonomous · XAI Viewer · Settings. Base strategies shown: PC Hybrid, Band Crossover, Band Bounce, RSI Strategy. Timeframes: 5m + 15m, with 1m / 1h / 4h / 1d selectable. Equity panel (12-month walkforward, BTC/USDT): +117% net annualised · 52.8% win rate · 523 trades · +0.28% avg PnL.]

How CORE thinks.

Most trading software treats the market as a stream of indicator readings. CORE treats it as a structured object — channels, regimes, band geometry, multi-timeframe state — and runs every decision through a deterministic pipeline you can inspect at every stage.

01 / INGEST

Market structure

Parallel channels, rainbow bands, geometric state. Six timeframes in parallel.

02 / CLASSIFY

Regime detection

Trend / range / divergence per TF. HTF context and alignment scoring.

03 / GATE

Trade Policy

Composable rules across 31 dimensions. BLOCK, ALLOW, REVERSE per signal.

04 / SCORE

ML confidence

XGBoost classifier on 41 features. Per-trade confidence, OOD detection.

05 / EXECUTE

Local execution

Hardware-bound, encrypted models, signed binaries. Native exchange routing.

06 / EXPLAIN

Diagnostics & replay

Per-decision drivers, full replay viewer, validation suite re-runnable on your data.

Every stage is independently auditable and independently toggleable. Run the policy layer alone for rule-based trading. Add ML scoring as a confirmation gate. Or stack the Pro layers (Rainbow Policy v2, Stretch Policy) on top for parallel signal streams. The pipeline is the same; what you put through it is yours to configure.
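
For readers who think in code, here is a toy sketch of that idea — stage names and the dict-based decision object are illustrative stand-ins, not CORE's internal API. Each stage can transform or veto a candidate decision, and any stage can be switched off on its own:

```python
from typing import Callable, Optional

# Illustrative only: stage names and the decision object are assumptions.
Decision = dict
Stage = Callable[[Decision], Optional[Decision]]

def run_pipeline(candidate: Decision, stages) -> Optional[Decision]:
    for name, enabled, stage in stages:
        if not enabled:
            continue                                    # toggled off: skip, don't veto
        candidate = stage(candidate)
        if candidate is None:                           # stage vetoed the trade
            return None
        candidate.setdefault("trace", []).append(name)  # every step leaves an audit trail
    return candidate

# Policy-only trading: ML scoring switched off, everything else on.
stages = [
    ("classify_regime", True,  lambda d: {**d, "regime": "trend"}),
    ("trade_policy",    True,  lambda d: d if d["regime"] == "trend" else None),
    ("ml_score",        False, lambda d: d),
    ("execute",         True,  lambda d: {**d, "order": "submitted"}),
]
print(run_pipeline({"signal": "pc_break_long"}, stages))
```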

What's under the hood.

Five pillars, each with its own thesis. None of them are cosmetic, none of them are bolt-ons. Auditable, testable, documented in the included reference.

Trade any timeframe. Switch on the fly.

Every timeframe is a first-class citizen — there's no "real" TF the system is built around with the others as bolt-ons. The same engine that finds 1-minute scalps finds daily swings. The same policy layer governs both. The same ML pipeline scores both. Switching scope is a configuration choice, not a redeploy.

1m 5m 15m 1h 4h 1d + multi-TF fusion
1. Bootstrap — download data window
2. Configure — pick TFs, set policies
3. Auto-trade — system runs end-to-end

Signal generation

A strategy builder, not a black box. Six independently implemented base strategies that you compose into a complete signal pipeline — any one as the trigger, any other as a confirmation gate. The composability is the point: structural-channel logic confirmed by band crossover, or the RSI strategy gated by parallel-channel context. The strategies are the alphabet; you write the language.

  • Parallel Channels / PC Hybrid / PC Breaks — three flavours of channel-break detection, with two distinct engines (AUTO with DAR + wave anchoring, or MANUAL parameterised regression-fit).
  • Band Crossover — 4-band rainbow system (A/B/C/D) with configurable trigger pairs, cross styles (penetrate / fully cross), per-band position and angle filters.
  • Band Bounce — touch-and-react logic on the rainbow A-band with RSI confirmation and slope filters.
  • RSI Strategy — configurable OB/OS thresholds with turning detection, importable as either a base or a confirmation lane.
  • Composable filters — any base strategy can be stacked as a confirmation on any other (e.g. PC break confirmed by band crossover, or band bounce gated by RSI zone). Per-side, per-TF.
6 BASE STRATEGIES · COMPOSABLE · 7 CANONICAL SIGNAL TYPES
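
Purely as an illustration of what "compose" means here — the class and field names below are hypothetical, not CORE's actual interfaces — one strategy can act as the trigger and another as the confirmation gate:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-ins for a bar of market data and a candidate signal.
@dataclass
class Bar:
    close: float
    band_a: float                  # upper rainbow band (illustrative)
    band_b: float                  # lower rainbow band (illustrative)
    channel_break: Optional[str]   # "long", "short" or None

@dataclass
class Signal:
    side: str
    source: str

def pc_break_trigger(bar: Bar) -> Optional[Signal]:
    """Base strategy: emit a signal when the parallel channel breaks."""
    return Signal(bar.channel_break, "pc_break") if bar.channel_break else None

def band_crossover_confirms(bar: Bar, sig: Signal) -> bool:
    """Confirmation lane: longs only above the A-band, shorts only below the B-band."""
    return bar.close > bar.band_a if sig.side == "long" else bar.close < bar.band_b

def composed_signal(bar: Bar) -> Optional[Signal]:
    """PC break as the trigger, band crossover as the confirmation gate."""
    candidate = pc_break_trigger(bar)
    return candidate if candidate and band_crossover_confirms(bar, candidate) else None

# A bar that breaks the channel long and closes above the A-band passes both lanes.
print(composed_signal(Bar(close=101.0, band_a=100.5, band_b=99.0, channel_break="long")))
```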

Trade Policy & filtering

A composable decision layer that turns market structure into execution rules. Thirty-one filter dimensions — regime, RSI, slope, volume, PC quality, rainbow context, HTF alignment — combinable per signal type, per timeframe, with three deterministic actions. Rules are JSON, version-controllable, exportable. The policy layer is the discipline: it enforces what you decided when you weren't trading, on the trades that come when you are.

  • 31 filter dimensions: regime states, RSI zones, slope buckets, ATR/vol buckets, PC qualities, rainbow context, HTF alignment, and more.
  • Quick TF Gate — multi-timeframe selection auto-generates blocking rules.
  • Stretch policy bypass — independent signal layer for band-stretch transitions.
  • Archetype gating — load a clusters CSV, restrict trades to recognised setups.
31 DIMENSIONS · 3 ACTIONS · COMPOSABLE
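
To make "rules are JSON" concrete, a rule might look something like the sketch below. The field names and the small evaluation helper are illustrative assumptions, not the shipped schema:

```python
import json

# A hypothetical policy rule — field names are illustrative, not CORE's schema.
rule_json = """
{
  "name": "block-counter-trend-longs",
  "signal_type": "band_crossover_long",
  "timeframe": "5m",
  "conditions": {"regime": ["range", "divergence"], "htf_alignment": "against"},
  "action": "BLOCK"
}
"""

def evaluate(rule: dict, signal: dict) -> str:
    """Return BLOCK / ALLOW / REVERSE for a candidate signal, deterministically."""
    if rule["signal_type"] != signal["type"] or rule["timeframe"] != signal["tf"]:
        return "ALLOW"                                  # rule does not apply to this signal
    for key, allowed in rule["conditions"].items():
        allowed = allowed if isinstance(allowed, list) else [allowed]
        if signal["context"].get(key) not in allowed:
            return "ALLOW"                              # condition not met, rule does not fire
    return rule["action"]

rule = json.loads(rule_json)
signal = {"type": "band_crossover_long", "tf": "5m",
          "context": {"regime": "range", "htf_alignment": "against"}}
print(evaluate(rule, signal))                           # -> BLOCK
```

Because the rule is plain data, it diffs cleanly, version-controls cleanly, and exports cleanly — that is the point of keeping policy out of code.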

ML pipeline & workshop

Confidence scoring, not signal generation. The XGBoost classifier doesn't decide what to trade — the strategy and policy layers do that. The model assigns a probability of profit to candidate signals, and you choose the threshold that controls how aggressive the system is. ML Workshop lets you search for the feature subset that performs best out-of-sample, validate any candidate model against six orthogonal methodologies, and verify that what runs live matches what passed validation.

  • 41-feature engineering surface — structural, regime, PC channel, multi-timeframe context.
  • Automated feature search with composite scoring — hard floors on n/month, win rate, max drawdown, holdout disagreement.
  • Auto-retrain at configurable intervals; SGD partial-fit between full retrains.
  • Models versioned as preset bundles — share, audit, roll back.
  • Live-vs-replay 2×2 contingency to verify deployed model matches research.
41 FEATURES · XGBOOST + SGD · 6 VALIDATION METHODS
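
A stripped-down sketch of the scoring stage: only the choice of an XGBoost classifier comes from the description above; the synthetic data, feature layout, and threshold value are placeholders for illustration.

```python
import numpy as np
from xgboost import XGBClassifier

# Synthetic stand-in for featurised candidate signals: 41 features, binary profit label.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(2000, 41))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 3] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)

def gate(features: np.ndarray, threshold: float = 0.65) -> bool:
    """The strategy and policy layers decide *what* to trade; the model only
    assigns a probability of profit, and the threshold sets aggressiveness."""
    confidence = model.predict_proba(features.reshape(1, -1))[0, 1]
    return confidence >= threshold

candidate = rng.normal(size=41)
print("trade" if gate(candidate) else "skip")
```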

What using CORE looks like.

A research-and-deployment loop, not a one-click bot. The sequence below is how a model goes from raw historical data to live execution — four phases, each with its own outputs, each independently inspectable. You can stop after any phase. You can re-enter from any phase. Most operators iterate phases 2 and 3 several times before deploying.

PHASE 01

Generate snapshots from historical data

In the Live Trading tab, download a bootstrap window sized to your hardware and your patience — bigger windows mean richer training data and longer compute. Set your signal scope (e.g. 5m, 15m, 1h) and run with ML frozen: no retraining, just the existing logic running through history.

What you get out: ML snapshots capturing every candidate signal in a fully-featured form, plus a trade audit reconstructing the exact decisions the system made at every bar. These are the raw materials for everything downstream.

OUTPUTS
  • ml_snapshots.jsonl
  • trade_audit_v2.jsonl
  • xai_explain.jsonl
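
Because these outputs are line-delimited JSON, they are easy to inspect with a few lines of Python outside the app. The field names in this sketch are assumptions for illustration, not the documented snapshot schema:

```python
import json
from collections import Counter

# Count candidate signals per timeframe / signal type in a snapshot file.
# Field names ("tf", "signal_type") are assumed, not taken from the real schema.
counts = Counter()
with open("ml_snapshots.jsonl") as fh:
    for line in fh:
        snap = json.loads(line)
        counts[(snap.get("tf"), snap.get("signal_type"))] += 1

for (tf, sig), n in counts.most_common():
    print(f"{str(tf):>4}  {str(sig):<24} {n}")
```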
PHASE 02

Find the model in ML Workshop

Open ML Workshop, point it at your snapshots, and run a Check Model on a baseline feature set. Compare single-TF training to multi-TF combinations — adding context from other timeframes is often where the signal comes from. The validation numbers don't lie: you'll see exactly which combination produces a model that holds up out-of-sample.

When you have a baseline that's working, set hard floors (n/month, win rate, max drawdown, holdout disagreement) and hit Search Model. CORE explores feature subsets via backward elimination and swap perturbation, scoring each candidate against your floors. The model that comes out is often not the one you'd have built by hand — and often performs better.

VALIDATION
  • CPCV
  • Walkforward
  • Quarterly hold-out
  • Monte Carlo
  • Timeshift gate
  • Reverse-trade gate
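
A toy sketch of the backward-elimination half of that search (swap perturbation omitted), with hard floors applied to every candidate. The score() function below is synthetic; in the real run each candidate feature set is trained and validated:

```python
# Backward elimination with hard floors — illustrative mechanics only.
ALL_FEATURES = [f"f{i}" for i in range(12)]          # stand-in for the 41-feature surface
FLOORS = {"win_rate": 0.52, "n_per_month": 40, "max_dd": 0.20}

def score(features: list) -> dict:
    """Synthetic stand-in for a full Check Model run."""
    informative = sum(1 for f in features if int(f[1:]) < 8)   # pretend f0..f7 carry signal
    noise = len(features) - informative                        # and f8..f11 only add noise
    win_rate = 0.48 + 0.01 * informative - 0.004 * noise
    return {"win_rate": win_rate,
            "n_per_month": 30 + 5 * len(features),
            "max_dd": 0.12 + 0.01 * noise,
            "composite": win_rate - 0.002 * len(features)}

def passes_floors(m: dict) -> bool:
    return (m["win_rate"] >= FLOORS["win_rate"]
            and m["n_per_month"] >= FLOORS["n_per_month"]
            and m["max_dd"] <= FLOORS["max_dd"])

current, best = list(ALL_FEATURES), score(ALL_FEATURES)
improved = True
while improved and len(current) > 2:
    improved = False
    for f in list(current):                          # try dropping each feature in turn
        trial = [x for x in current if x != f]
        m = score(trial)
        if passes_floors(m) and m["composite"] > best["composite"]:
            current, best, improved = trial, m, True
            break                                    # accept the drop, restart the sweep

print(f"{len(current)} features kept, win rate {best['win_rate']:.1%}, "
      f"{best['n_per_month']} trades/month")
```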
PHASE 03

Verify on data the model has never seen

Take fresh data from after the training window and run an independent walkforward through it in ML Workshop. The output mirrors exactly what you'd see if you'd run the model live on that period — no leakage, no peeking, just the model meeting unseen reality.

This is the phase that tells you whether the search-found model is real or whether it found patterns that won't repeat. If the validation report and the independent walkforward agree, you have a deployable model. If they disagree, back to phase two — that's the loop.

CHECKS
  • fresh-data WR
  • per-threshold PnL
  • regime stability
  • per-TF performance
PHASE 04

Deploy to live execution

Back to Live Trading. Download a smaller bootstrap — just enough to warm the most demanding feature window. Upload your validated model, set it frozen (no retrain), choose your position size, leverage, and entry style (maker-hybrid for zero-fee fills if your venue supports it).

In Autonomous, set your minimum confidence threshold and a daily trade cap, then enable Auto Trade. The system walks forward through your bootstrap while fresh bars stream from the exchange. Once it catches up to the live edge, execution begins. From there it's a research-and-monitoring rhythm: check diagnostics daily, retrain on accumulated audits weekly or monthly, adjust policies as regimes shift.

CONTROLS
  • min confidence
  • trades / day cap
  • position size
  • leverage
  • maker-hybrid entry
  • daily loss limit
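
As a rough picture of how those controls combine, here is a hypothetical sketch — the names and defaults are illustrative, not CORE's settings schema:

```python
from dataclasses import dataclass

# Illustrative representation of the live controls listed above.
@dataclass
class AutonomousConfig:
    min_confidence: float = 0.60
    max_trades_per_day: int = 6
    position_size_pct: float = 2.0      # % of equity per trade
    leverage: int = 3
    maker_hybrid_entry: bool = True
    daily_loss_limit_pct: float = 3.0   # stand down for the day past this drawdown

def may_trade(cfg: AutonomousConfig, confidence: float,
              trades_today: int, day_pnl_pct: float) -> bool:
    """Every gate has to pass before an order is allowed out."""
    return (confidence >= cfg.min_confidence
            and trades_today < cfg.max_trades_per_day
            and day_pnl_pct > -cfg.daily_loss_limit_pct)

cfg = AutonomousConfig()
print(may_trade(cfg, confidence=0.71, trades_today=2, day_pnl_pct=-1.2))   # True
```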

The architecture isn't a guess. It's the residue of 50+ documented research scripts, calendar stress tests across multiple market regimes, and the failures those tests surfaced. What you see is what survived.

Validation, not vibes.

Every model ships with a validation report. Numbers below are from a recent non-optimised reference model trained on 5m + 15m + 1h data, ~17k training snapshots, evaluated on roughly 5k held-out trades per quarter. Methodology and dataset are described in the report, which is included with each model preset.

Total OOS return: +223% — quarterly hold-out, 2-year period
Win rate @ 0.50: 53.1% — n = 794 OOS trades
Win rate @ 0.65: 70.0% — n = 20, high selectivity
Validation suite: 6 / 6 — walkforward · holdout · CPCV · MC · leak gates

Quarterly hold-out — every trade scored once, fully out-of-sample

Threshold   Trades   Win rate   Avg PnL   Sum PnL
0.50        794      53.1%      +0.28%    +223.1%
0.55        278      60.4%      +0.40%    +111.2%
0.60         77      67.5%      +0.49%    +37.6%
0.65         20      70.0%      +0.54%    +10.8%

Both win rate and average per-trade PnL improve monotonically with the confidence threshold — the score is doing real work, picking better setups rather than just smaller ones. Sample size shrinks at higher thresholds, as expected; per-trade economics keep improving.

Walkforward — same numbers, different methodology

Train window   Trades / yr   Win rate   Avg PnL   Net annualised
12 months      524           52.8%      +0.28%    +117.3%
9 months       608           50.9%      +0.25%    +116.3%
6 months       770           ~50%       +0.22%    +121.9%
3 months       964           42.1%      +0.14%    +30.1%

Three different train-window sizes converge on roughly the same number — that's the consistency signal. The 3m window underperforms because the model needs more training history to stabilise; longer windows recover. All numbers are at threshold 0.50, which is intentionally permissive to keep sample sizes meaningful.

Methodology

Quarterly hold-out

The trading period is split into 4 quarters. For each quarter, the model is trained on the other three and evaluated on the held-out quarter. Every trade is scored exactly once, on data the model has never seen. This is the most rigorous out-of-sample test in the suite.
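
In code, the scheme looks roughly like this toy sketch — synthetic trades and a placeholder model, with only the split-and-score-once logic carried over:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy quarterly hold-out: synthetic "trades" with win/loss labels, placeholder model.
rng = np.random.default_rng(2)
n = 4000
X = rng.normal(size=(n, 8))
y = (X[:, 0] + rng.normal(size=n) > 0).astype(int)
quarter = np.repeat(np.arange(4), n // 4)            # which quarter each trade falls in

oos_pred = np.empty(n)
for q in range(4):
    train, test = quarter != q, quarter == q          # train on three quarters, hold one out
    model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    oos_pred[test] = model.predict(X[test])           # every trade scored exactly once, OOS

print("OOS hit rate:", round(float((oos_pred == y).mean()), 3))
```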

Walkforward

Rolling-window simulation. Train on N months, evaluate on the next month, advance, retrain. Closest analogue to live operation: model never sees future data, retrains on a fixed schedule. Reported across train windows of 3 / 6 / 9 / 12 months for stability comparison.
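
The rolling loop, again as a toy sketch on synthetic data with a placeholder model — the same train-window comparison reported in the table above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy rolling walkforward: train on the last `train_months` months, evaluate on
# the next month, advance, retrain. Data and model are placeholders.
rng = np.random.default_rng(3)
months, per_month = 24, 500
X = rng.normal(size=(months * per_month, 8))
y = (X[:, 0] + rng.normal(size=len(X)) > 0).astype(int)

def walkforward(train_months: int) -> float:
    hits, total = 0, 0
    for m in range(train_months, months):
        tr = slice((m - train_months) * per_month, m * per_month)
        te = slice(m * per_month, (m + 1) * per_month)
        model = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
        hits += int((model.predict(X[te]) == y[te]).sum())
        total += per_month
    return hits / total

for w in (3, 6, 9, 12):
    print(f"{w:>2}m train window: {walkforward(w):.1%} out-of-sample hit rate")
```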

Combinatorial Purged Cross-Validation (CPCV)

10 batches × 45 combos with embargo periods between train and test slices. Detects look-ahead bias. Treated as a sanity check, not a primary metric — CPCV is known to be optimistic versus walkforward.

Monte Carlo permutation test

Real CPCV baseline run alongside 200 permutations of shuffled labels. If the model performs no better on real data than on randomly relabelled data, the apparent edge is luck — the test rejects it. Adds a bootstrap 95% confidence interval around the real win rate. Reports a p-value and a verdict (significant / marginal / not significant). The reference model passes at p < 0.05.
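
The mechanics, reduced to a toy sketch — placeholder data and model; in the product this runs against the CPCV baseline rather than a plain cross-validation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy permutation test: is the model better on real labels than on shuffled ones?
rng = np.random.default_rng(4)
X = rng.normal(size=(2000, 8))
y = (X[:, 0] + rng.normal(size=2000) > 0).astype(int)

def cv_accuracy(labels):
    return cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean()

real = cv_accuracy(y)
null = np.array([cv_accuracy(rng.permutation(y)) for _ in range(200)])
p_value = (null >= real).mean()          # share of shuffled runs that match the real score

print(f"real {real:.3f} vs shuffled mean {null.mean():.3f}, p = {p_value:.3f}")
```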

Reverse-labels gate

Train the model on losses labelled as wins and wins as losses. A leaking model — one that's accidentally seeing future information — will still pick the inverted winners and post a high WR. A clean model collapses to chance. Reference model passed: 26.1% WR vs the 60% threshold for failure.

Timeshift gate

Shift labels forward by 20 / 50 / 100 bars and retrain. A leaking model still picks future winners through the shift; a clean one drops to noise. Reference model passed: 0% WR at all shifts.
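
Both leak gates reduce to the same toy mechanics — corrupt the labels, retrain, and check that performance collapses. Placeholder data and model below, for illustration only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy versions of the reverse-labels and timeshift gates.
rng = np.random.default_rng(5)
X = rng.normal(size=(3000, 8))
y = (X[:, 0] + rng.normal(size=3000) > 0).astype(int)
split = 2400                                           # simple train / holdout split

def fit(labels):
    return LogisticRegression(max_iter=1000).fit(X[:split], labels[:split])

def holdout_wr(model) -> float:
    return float((model.predict(X[split:]) == y[split:]).mean())

# Reverse-labels gate: train on inverted labels. A clean model ends up far
# below chance on the true labels; a leaking one would still score high.
print("reverse-labels WR:", round(holdout_wr(fit(1 - y)), 3))

# Timeshift gate: train against labels pulled from k bars in the future.
# A clean model drops to noise at every shift.
for k in (20, 50, 100):
    print(f"timeshift +{k} WR:", round(holdout_wr(fit(np.roll(y, -k))), 3))
```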

Disclosure. The numbers above are from a single non-optimised reference model. Your results in live trading will differ. Training data is 1-minute BTC bars across approximately two years, exposing the model to thousands of intraday drawdown, panic, and reversal episodes — including the January 2026 selloff, which it traded through without breaking down. Backtest results, regardless of methodology, do not guarantee future performance. Trading involves substantial risk of loss. CORE is software; it does not eliminate risk, it exposes it.

Pricing.

Monthly subscription — keeps you on the latest build, the latest models, the latest validation tooling. Pay annually and save up to 25%. Live sales open Q3 2026 — reserve your spot below.

TRIAL
£0
7 days · no card required · launches with the product
  • All trading subsystems unlocked
  • Full backtest + walkforward suite
  • ML Workshop with feature search
  • Live execution disabled
  • Hardware-bound licence
Notify me at launch
PRO
£599 /month
or £5,391 / year — save £1,797
  • Everything in Standard
  • Rainbow Policy v2 — rule-first structural layer
  • Stretch Policy — parallel signal engine
  • Multi-venue execution (Binance, Bitget)
  • Two machines per licence
  • Priority email support
Reserve Pro

Reserve your spot.

CORE launches Q3 2026, capped at 100 Pro licences. Three ways to get on the list. Email goes nowhere except your inbox when there's actual news.

NOTIFY ME · No cap

Just tell me when it's live.

No commitment. One email at launch, one at major milestones. Unsubscribe in one click.

  • Notified before public launch
  • Launch pricing locked at signup
  • Zero spam, no upsells
TESTER · EARLY ACCESS · 0 / 10 spots

Help shape it. Get half off.

Run pre-launch builds. Find bugs. Give honest feedback. We need ~2 hours/week. Not a casual signup — quiet ghosters get their slot reassigned.

  • 50% off any tier for 6 months at launch
  • Direct line to the developer during testing
  • Founding tester credit in the app (opt-in)
  • First look at every new feature

We read every application. Selected testers will be contacted within 7 days.

Questions.

If yours isn't here, the answer is probably yes.

Why a waitlist?
CORE is software-complete and validated, but the business side isn't ready yet — payment processing, accounting, support workflow, licence server, terms of service. Rather than half-launch, we're building the back-end properly and using the runway to find serious early users. Reserving a spot now locks in launch pricing and guarantees access to one of the 100 Pro licences (or a Standard licence at the price shown).
What does testing involve?
Install pre-launch builds, run them on real or paper accounts, report what breaks and what's confusing, suggest improvements. Roughly 2 hours/week for ~3 months before launch. Not a free-software giveaway — we need genuine engagement. In exchange: 50% off any tier for 6 months at launch, founding-tester credit, direct line to the developer. Ten spots, very selective. We read every application.
Is this a signals service?
No. CORE is software you install. There's no chat group, no Telegram, no copy trading. You run it, configure it, and own the outcome.
What about hardware?
Modern laptop or desktop, Windows. 8GB RAM minimum, 16GB recommended for ML training. Not designed for VPS deployment yet — that's roadmap.
Source code?
Distributed as encrypted, signed binaries. Source is not public. Commercial software, not open source.
How do I know the numbers are real?
Every model ships with its own validation report — the same format you see above. The validation suite is built into the product; you can re-run any of it on your own data. The results aren't curated by us, they're produced by the same code that runs the model.
What if my model overfits?
The Workshop's feature search includes hard floors on holdout disagreement and walkforward consistency. Models that look great on training data but degrade on holdout get auto-rejected. The leak-safety gates (timeshift + reverse-labels) catch subtle data leakage that would otherwise pass standard validation.
Cancellation and refunds?
Cancel monthly anytime — your subscription runs until the end of the period you've paid for, then stops. Annual plans aren't refundable mid-term, which is why monthly exists: try the system for a month or two before committing. The 7-day trial exists for the same reason.
Will it make me money?
Trading is risky and most retail traders lose money. CORE is a tool that lets you execute and test strategies more rigorously than spreadsheets and intuition allow. It does not guarantee returns. Past backtest performance is not predictive of future results, and live execution exposes risks (slippage, partial fills, exchange downtime, regime change) that no backtest fully captures.