SYSTEMATIC BITCOIN TRADING · LAUNCHING Q3 2026
Not an indicator stack. Not a signal bot. Not an AI black box. CORE is a self-contained research and execution platform that models market structure — channels, regimes, band geometry — across six timeframes, then exposes every decision the model makes. Local execution, walkforward-validated, fully auditable.
WHY CORE EXISTS
Most trading systems collapse into one of two extremes: rigid indicator stacks that can't adapt, or opaque AI models nobody can inspect. CORE was built because neither is acceptable for serious systematic trading.
The system is structured as a sequence of independent layers — signal generation, structural context, policy logic, ML scoring, execution, diagnostics. Each one is configurable in isolation. Each one is replaceable. Each one is auditable on its own terms. The architecture isn't a stylistic preference; it's the only way to iterate safely on a system that handles real money.
That separation buys three things you can't get from a monolithic black box: explainable decisions, validation without hidden coupling, and the ability to change one component without rebuilding the rest. The screenshots, the validation report, the strategy builder — they all flow from this one architectural commitment.
Every signal, filter, and gate is configurable and inspectable. The screenshot below is one tab — there are sixteen more.
Most trading software treats the market as a stream of indicator readings. CORE treats it as a structured object — channels, regimes, band geometry, multi-timeframe state — and runs every decision through a deterministic pipeline you can inspect at every stage.
Parallel channels, rainbow bands, geometric state. Six timeframes in parallel.
Trend / range / divergence per TF. HTF context and alignment scoring.
Composable rules across 31 dimensions. BLOCK, ALLOW, REVERSE per signal.
XGBoost classifier on 41 features. Per-trade confidence, OOD detection.
Hardware-bound, encrypted models, signed binaries. Native exchange routing.
Per-decision drivers, full replay viewer, validation suite re-runnable on your data.
Every stage is independently auditable and independently toggleable. Run the policy layer alone for rule-based trading. Add ML scoring as a confirmation gate. Or stack the Pro layers (Rainbow Policy v2, Stretch Policy) on top for parallel signal streams. The pipeline is the same; what you put through it is yours to configure.
Five pillars, each with its own thesis. None of them are cosmetic, none of them are bolt-ons. Auditable, testable, documented in the included reference.
Every timeframe is a first-class citizen: there is no privileged TF the system is built around with the rest tacked on. The same engine that finds 1-minute scalps finds daily swings. The same policy layer governs both. The same ML pipeline scores both. Switching scope is a configuration choice, not a redeploy.
A strategy builder, not a black box. Six independently implemented base strategies that you compose into a complete signal pipeline — any one as the trigger, any other as a confirmation gate. The composability is the point: structural-channel logic confirmed by band crossover, or an RSI strategy gated by parallel-channel context. The strategies are the alphabet; you write the language.
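To make the trigger-plus-gate idea concrete, here is a minimal sketch. The names and types are invented for illustration; they are not CORE's actual strategy interface.

```python
from typing import Callable, Optional

# Illustrative types only; CORE's real strategy interface is not shown here.
Signal = dict  # e.g. {"side": "long", "tf": "15m"}
Strategy = Callable[[dict], Optional[Signal]]  # bar context -> signal or None

def compose(trigger: Strategy, gate: Strategy) -> Strategy:
    """Pass the trigger's signal through only when the gate agrees on direction."""
    def combined(ctx: dict) -> Optional[Signal]:
        sig = trigger(ctx)
        if sig is None:
            return None
        confirm = gate(ctx)
        if confirm is not None and confirm["side"] == sig["side"]:
            return sig  # confirmed: pass the trigger's signal through
        return None     # unconfirmed: suppress
    return combined

# e.g. a structural-channel trigger confirmed by a band-crossover gate:
# strategy = compose(parallel_channel_strategy, band_crossover_strategy)
```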
A composable decision layer that turns market structure into execution rules. Thirty-one filter dimensions — regime, RSI, slope, volume, PC quality, rainbow context, HTF alignment — combinable per signal type, per timeframe, with three deterministic actions. Rules are JSON, version-controllable, exportable. The policy layer is the discipline: it enforces what you decided when you weren't trading, on the trades that come when you are.
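For a feel of the format, here is what a rule might look like on disk. This is a hypothetical example with invented field names, not a shipped preset or CORE's actual schema; only the shape of the idea is real.

```python
import json

# Hypothetical rule: field names are invented for illustration.
rule = {
    "name": "block-range-longs-against-htf",
    "signal_type": "pc_breakout",
    "timeframe": "15m",
    "conditions": {                      # a handful of the 31 dimensions
        "regime": ["range"],
        "htf_alignment": {"max": 0.3},
        "rsi": {"min": 60},
    },
    "action": "BLOCK",                   # BLOCK | ALLOW | REVERSE
}

# Rules serialise to JSON, so they diff cleanly and live in version control.
print(json.dumps(rule, indent=2))
```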
Confidence scoring, not signal generation. The XGBoost classifier doesn't decide what to trade — the strategy and policy layers do that. The model assigns a probability of profit to candidate signals, and you choose the threshold that controls how aggressive the system is. ML Workshop lets you search for the feature subset that performs best out-of-sample, validate any candidate model against six orthogonal methodologies, and verify that what runs live matches what passed validation.
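In sketch form, using xgboost's scikit-learn API on synthetic stand-in data, the gate is a thin layer over predict_proba:

```python
import numpy as np
from xgboost import XGBClassifier

# Sketch of the scoring idea on synthetic stand-in data: 41 features per
# candidate signal, label 1 meaning the trade was profitable. The classifier
# prices candidates; a threshold decides which survive.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(5000, 41))
y_train = rng.integers(0, 2, size=5000)

model = XGBClassifier(n_estimators=300, max_depth=4)
model.fit(X_train, y_train)

def passing_candidates(model, X, threshold=0.55):
    """Keep only candidates whose estimated probability of profit clears the bar."""
    p_win = model.predict_proba(X)[:, 1]
    return np.flatnonzero(p_win >= threshold), p_win

# Raising the threshold trades sample size for per-trade quality, the same
# trade-off shown in the validation tables further down the page.
keep, p = passing_candidates(model, rng.normal(size=(200, 41)))
```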
Two parallel decision systems that see things the base pipeline cannot. Rainbow Policy v2 enforces structural rules in a locked evaluation order — TF, regime, direction, band, lreg, geo — making the system act on archetype-level setups rather than per-tick reactions. Stretch Policy is its own signal generator: a separate model with a 46-feature encoder dedicated to band-stretch geometry, identifying the deformed-band scenarios the base strategies don't have language for.
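A toy version of the locked-order idea, with invented gate contents; only the ordering comes from the product:

```python
# Gates run in a fixed order (TF, regime, direction, band, lreg, geo);
# the first failing gate blocks the setup.
GATES = ("tf", "regime", "direction", "band", "lreg", "geo")

def evaluate_setup(setup: dict, rules: dict) -> tuple[bool, str]:
    for gate in GATES:
        check = rules.get(gate)
        if check is not None and not check(setup):
            return False, f"blocked at {gate}"
    return True, "archetype matched"

# Because the order never varies, two people reading the same audit log
# agree on exactly why a setup was blocked.
```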
A research-and-deployment loop, not a one-click bot. The sequence below is how a model goes from raw historical data to live execution — four phases, each with its own outputs, each independently inspectable. You can stop after any phase. You can re-enter from any phase. Most operators iterate phases 2 and 3 several times before deploying.
In the Live Trading tab, download a bootstrap window sized to your hardware and your patience — bigger windows mean richer training data and longer compute times. Set your signal scope (e.g. 5m, 15m, 1h) and run with ML frozen: no retraining, just the existing logic running through history.
What you get out: ML snapshots capturing every candidate signal with all of its features computed, plus a trade audit reconstructing the exact decisions the system made at every bar. These are the raw materials for everything downstream.
Open ML Workshop, point it at your snapshots, and run a Check Model on a baseline feature set. Compare single-TF training to multi-TF combinations — adding context from other timeframes is often where the signal comes from. The validation numbers don't lie: you'll see exactly which combination produces a model that holds up out-of-sample.
When you have a baseline that's working, set hard floors (n/month, win rate, max drawdown, holdout disagreement) and hit Search Model. CORE explores feature subsets via backward elimination and swap perturbation, scoring each candidate against your floors. The model that comes out is often not the one you'd have built by hand — and often performs better.
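In outline, the elimination half of that search looks like this. It is a sketch under assumed floor names and an assumed score_fn contract, not CORE's implementation, and it omits the swap-perturbation step the search pairs with it:

```python
def backward_eliminate(features, score_fn, floors):
    """Drop one feature at a time, keeping any drop that still clears every
    hard floor and improves the headline metric."""
    current, best = list(features), score_fn(list(features))
    improved = True
    while improved and len(current) > 1:
        improved = False
        for f in list(current):
            trial = [x for x in current if x != f]
            result = score_fn(trial)  # e.g. walkforward stats for this subset
            if meets_floors(result, floors) and result["avg_pnl"] > best["avg_pnl"]:
                current, best, improved = trial, result, True
                break
    return current, best

def meets_floors(r, floors):
    return (r["trades_per_month"] >= floors["n_month"]
            and r["win_rate"] >= floors["win_rate"]
            and r["max_drawdown"] <= floors["max_drawdown"])
```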
Take fresh data from after the training window and run an independent walkforward through it in ML Workshop. The output mirrors exactly what you would have seen had you run the model live on that period — no leakage, no peeking, just the model meeting unseen reality.
This is the phase that tells you whether the search-found model is real or whether it found patterns that won't repeat. If the validation report and the independent walkforward agree, you have a deployable model. If they disagree, back to phase two — that's the loop.
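The walkforward mechanic itself is simple enough to sketch. fit_model and evaluate_oos are hypothetical helpers standing in for whatever ML Workshop runs internally; the window arithmetic is the point.

```python
import pandas as pd

def walkforward(df: pd.DataFrame, train_months: int = 6) -> pd.DataFrame:
    """Train on the trailing window, evaluate the next month, advance, retrain.
    The model never sees a bar later than the slice it is scored on."""
    months = df.index.to_period("M").unique().sort_values()
    results = []
    for i in range(train_months, len(months)):
        train = df[df.index.to_period("M").isin(months[i - train_months:i])]
        test = df[df.index.to_period("M") == months[i]]
        model = fit_model(train)                 # hypothetical helper
        results.append(evaluate_oos(model, test))  # hypothetical helper
    return pd.concat(results)
```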
Back to Live Trading. Download a smaller bootstrap — just enough to warm the most demanding feature window. Upload your validated model, set it frozen (no retrain), choose your position size, leverage, and entry style (maker-hybrid for zero-fee fills if your venue supports it).
In Autonomous, set your minimum confidence threshold and a daily trade cap, then enable Auto Trade. The system walks forward through your bootstrap while fresh bars stream from the exchange. Once it catches up to the live edge, execution begins. From there it's a research-and-monitoring rhythm: check diagnostics daily, retrain on accumulated audits weekly or monthly, adjust policies as regimes shift.
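Reduced to its two dials, the live gate is one predicate. Names are illustrative; the real settings live in the Autonomous tab.

```python
# Minimal sketch of the live gate: confidence floor plus daily cap.
def should_execute(p_win: float, trades_today: int,
                   min_confidence: float = 0.60, daily_cap: int = 5) -> bool:
    return p_win >= min_confidence and trades_today < daily_cap
```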
The architecture isn't a guess. It's the residue of 50+ documented research scripts, calendar stress tests across multiple market regimes, and the failures those tests surfaced. What you see is what survived.
Three real tabs from a live build. The rest — autonomous training, live trading, analytics, diagnostics, XAI viewer — you'll see on day one.
Replay any historical run with structural overlays in place. PC channels, entry and exit markers, the full trade-outcome panel. Step through trade-by-trade, or jump to any TF.
The decision layer that turns market structure into execution discipline — 31 filter dimensions, three actions, all combinable per signal type and timeframe.
Run the full validation suite from a single workflow. CPCV, Walkforward, Timeshift, Reverse-trade, Monte Carlo, Quarterly hold-out — all wired into the same pipeline that ships with every model.
Every model ships with a validation report. Numbers below are from a recent non-optimised reference model trained on 5m + 15m + 1h data, ~17k training snapshots, evaluated on roughly 5k held-out trades per quarter. Methodology and dataset are described in the report, which is included with each model preset.
| Threshold | Trades | Win rate | Avg PnL / trade | Cum PnL |
|---|---|---|---|---|
| 0.50 | 794 | 53.1% | +0.28% | +223.1% |
| 0.55 | 278 | 60.4% | +0.40% | +111.2% |
| 0.60 | 77 | 67.5% | +0.49% | +37.6% |
| 0.65 | 20 | 70.0% | +0.54% | +10.8% |
Both win rate and average per-trade PnL improve monotonically with the confidence threshold — the score is doing real work, selecting better setups rather than merely fewer of them. Sample size shrinks at higher thresholds, as expected; per-trade economics keep improving.
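For the curious, a table like this falls out of a few lines over scored trades. The arrays stand in for per-trade model confidence and realised per-trade PnL (%).

```python
import numpy as np

def threshold_table(p_win: np.ndarray, pnl: np.ndarray,
                    thresholds=(0.50, 0.55, 0.60, 0.65)):
    rows = []
    for t in thresholds:
        sel = pnl[p_win >= t]  # trades whose confidence clears the threshold
        rows.append({
            "threshold": t,
            "trades": int(sel.size),
            "win_rate": float((sel > 0).mean()) if sel.size else float("nan"),
            "avg_pnl": float(sel.mean()) if sel.size else float("nan"),
            "cum_pnl": float(sel.sum()),
        })
    return rows
```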
| Train window | Trades / yr | Win rate | Avg PnL / trade | Net annualised PnL |
|---|---|---|---|---|
| 12 months | 524 | 52.8% | +0.28% | +117.3% |
| 9 months | 608 | 50.9% | +0.25% | +116.3% |
| 6 months | 770 | ~50% | +0.22% | +121.9% |
| 3 months | 964 | 42.1% | +0.14% | +30.1% |
The three longer train windows (6, 9 and 12 months) converge on roughly the same net annualised figure — that's the consistency signal. The 3-month window underperforms because the model needs more training history to stabilise; the longer windows recover. All numbers are at threshold 0.50, which is intentionally permissive to keep sample sizes meaningful.
The trading period is split into 4 quarters. For each quarter, the model is trained on the other three and evaluated on the held-out quarter. Every trade is scored exactly once, on data the model has never seen. This is the most rigorous out-of-sample test in the suite.
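In miniature, with hypothetical helpers, the fold structure is:

```python
# Quarterly hold-out: four folds, each quarter scored by a model that never
# trained on it. df is a DataFrame indexed by bar timestamp; fit_model and
# score_trades are hypothetical helpers.
def quarterly_holdout(df):
    scored = []
    for q in df.index.to_period("Q").unique():
        test = df[df.index.to_period("Q") == q]   # the held-out quarter
        train = df[df.index.to_period("Q") != q]  # the other three
        scored.append(score_trades(fit_model(train), test))
    return scored  # every trade scored exactly once, always out-of-sample
```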
Rolling-window simulation. Train on N months, evaluate on the next month, advance, retrain. Closest analogue to live operation: model never sees future data, retrains on a fixed schedule. Reported across train windows of 3 / 6 / 9 / 12 months for stability comparison.
10 batches × 45 combos with embargo periods between train and test slices. Detects look-ahead bias. Treated as a sanity check, not a primary metric — CPCV is known to be optimistic versus walkforward.
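One plausible reading of those numbers, offered as an assumption rather than a statement of CORE's exact construction: 45 is C(10, 2), the standard CPCV choice of 2 test groups out of 10 contiguous batches. A sketch of that split geometry, embargo included:

```python
from itertools import combinations

def cpcv_splits(n_bars: int, n_groups: int = 10, embargo: int = 100):
    bounds = [(g * n_bars // n_groups, (g + 1) * n_bars // n_groups)
              for g in range(n_groups)]
    for pair in combinations(range(n_groups), 2):  # C(10, 2) = 45 combos
        test, banned = set(), set()
        for g in pair:
            lo, hi = bounds[g]
            test.update(range(lo, hi))
            # Embargo: ban training bars adjacent to the test slice so
            # information cannot bleed across the boundary.
            banned.update(range(max(0, lo - embargo), min(n_bars, hi + embargo)))
        yield sorted(test), [i for i in range(n_bars) if i not in banned]
```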
Real CPCV baseline run alongside 200 permutations of shuffled labels. If the model performs no better on real data than on randomly relabelled data, the apparent edge is luck — the test rejects it. Adds a bootstrap 95% confidence interval around the real win rate. Reports a p-value and a verdict (significant / marginal / not significant). The reference model passes at p < 0.05.
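The skeleton of such a test, with a stand-in scoring helper; only the null-distribution logic is shown:

```python
import numpy as np

def permutation_pvalue(X, y, score_fn, n_perm=200, seed=0):
    """score_fn stands in for the CPCV win-rate computation."""
    rng = np.random.default_rng(seed)
    real = score_fn(X, y)
    null = [score_fn(X, rng.permutation(y)) for _ in range(n_perm)]
    # One-sided p-value with the +1 correction for finite permutations.
    p = (1 + sum(s >= real for s in null)) / (n_perm + 1)
    return real, p  # small p: the edge beats shuffled labels
```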
Train the model on losses labelled as wins and wins as losses. A leaking model — one that's accidentally seeing future information — will still pick the inverted winners and post a high WR. A clean model collapses to chance. Reference model passed: 26.1% WR vs the 60% threshold for failure.
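The whole test fits in a few lines. fit_and_winrate is a hypothetical helper that trains on the given labels and reports out-of-sample win rate.

```python
import numpy as np

def reverse_trade_test(X, y: np.ndarray) -> float:
    wr = fit_and_winrate(X, 1 - y)  # losses labelled as wins, and vice versa
    return wr  # clean model: near chance; a high WR here flags leakage
```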
Shift labels forward by 20 / 50 / 100 bars and retrain. A leaking model still picks future winners through the shift; a clean one drops to noise. Reference model passed: 0% WR at all shifts.
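And its outline, reusing the same hypothetical helper:

```python
def timeshift_test(X, y, shifts=(20, 50, 100)):
    """Pair each bar's features with a label from k bars ahead, retrain,
    and watch for collapse."""
    results = {}
    for k in shifts:
        results[k] = fit_and_winrate(X[:-k], y[k:])  # label leads features by k
    return results  # a clean model drops to noise at every shift
```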
Monthly subscription — keeps you on the latest build, the latest models, the latest validation tooling. Pay annually and save up to 25%. Live sales open Q3 2026 — reserve your spot below.
CORE launches Q3 2026, capped at 100 Pro licences. Three ways to get on the list. Your email address goes nowhere except your own inbox, and only when there's actual news.
No commitment. One email at launch, one at major milestones. Unsubscribe in one click.
Reserve a Pro launch licence. We'll contact you when sales open. No payment now — this is just your spot in the queue. Cap is real: when 100 are reserved, the list closes.
Run pre-launch builds. Find bugs. Give honest feedback. We need ~2 hours/week. Not a casual signup — quiet ghosters get their slot reassigned.
We read every application. Selected testers will be contacted within 7 days.
If yours isn't here, the answer is probably yes.