A production trading platform built for sub-10ms internal latency, thousands of orders per second, and services that can be hosted near your execution venues.
Services on the hot path can be scaled and hosted in different locations near your execution venues. Core services interact via a custom socket protocol that adds only microseconds of latency.
A signal generated by a strategy passes through four hand-offs and reaches the market within milliseconds.
Market data connectors consume real-time data from brokers and vendors of your choice.
A strategy running in the Strategies Service evaluates incoming market data and emits a trade decision.
The signal is sent to the Accounts Service, which checks pre-trade risk, position limits, and account state, then creates an authoritative order record.
The Accounts Service forwards the order to the Execution Service hosting the relevant broker or exchange connector, which translates the order into the broker's native protocol and submits it.
The fill arrives via the same connector. The Accounts Service updates positions, recomputes PnL, and propagates state changes to Strategies Service, operations dashboards, the client portal, and audit logs.
The same flow runs in backtests, with Backtesting Workers playing all roles in-process and consuming the same market data and static configs that live trading reads.
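The hand-off from strategy signal to authoritative order record can be sketched as follows. This is a minimal illustration, not the platform's real API: the `Signal`, `Order`, and `AccountsService` names and the risk check are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    symbol: str
    side: str      # "buy" or "sell"
    qty: int

@dataclass
class Order:
    order_id: int
    symbol: str
    side: str
    qty: int
    status: str = "new"

class AccountsService:
    """Checks pre-trade risk, then creates the authoritative order record."""
    def __init__(self, max_position: int):
        self.max_position = max_position
        self.position = 0
        self.next_id = 1
        self.orders = {}

    def accept(self, sig: Signal):
        delta = sig.qty if sig.side == "buy" else -sig.qty
        if abs(self.position + delta) > self.max_position:
            return None                      # rejected by pre-trade risk
        order = Order(self.next_id, sig.symbol, sig.side, sig.qty)
        self.orders[order.order_id] = order
        self.next_id += 1
        return order                         # forwarded on to execution

    def on_fill(self, order_id: int, filled_qty: int) -> None:
        # the fill comes back via the same connector; update positions here
        order = self.orders[order_id]
        delta = filled_qty if order.side == "buy" else -filled_qty
        self.position += delta
        order.status = "filled"

acct = AccountsService(max_position=100)
order = acct.accept(Signal("ESZ5", "buy", 10))
acct.on_fill(order.order_id, 10)
```

The key structural point is that the strategy never talks to the venue directly: every order passes through the risk check and becomes a record the Accounts Service owns.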
The platform is organized into a hot path, a research stack, a shared data layer, and an access layer. Each layer can be scaled independently.
The authoritative source of truth for orders, fills, positions, and account state. Enforces pre-trade risk and account constraints. Processes thousands of orders and trades per second. Scales by account and by region.
Runs strategy code and execution algos. Subscribes to market data and emits signals to the Accounts Service. One service can host many strategies; scales by strategy and by region.
Hosts broker and exchange connectors. Translates platform orders into the native protocol of each venue.
All components can scale by region: strategies trading on CME can use Accounts, Strategies, and Execution Services hosted near CME, while services running Binance connectors live in Tokyo.
Run strategies against historical data with the same execution semantics as live. Scale horizontally across thousands of cores on demand — workers spin up from a compute fleet, run, and terminate.
Keep results from all backtesting and optimization runs in a centralized database.
Ingest from exchange direct feeds, broker feeds, and vendor providers. Normalize to an internal schema and forward to consumers in real time.
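A sketch of what normalization to an internal schema means in practice. The two vendor formats and all field names here are invented for illustration; the point is that downstream consumers see one `Tick` shape regardless of source.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tick:                  # hypothetical internal schema
    symbol: str
    price: float
    size: int
    ts_ns: int               # event timestamp, nanoseconds

def normalize_vendor_a(raw: dict) -> Tick:
    # imagined vendor A format: string fields, microsecond timestamps
    return Tick(raw["sym"], float(raw["px"]), int(raw["sz"]), raw["t_us"] * 1_000)

def normalize_vendor_b(raw: dict) -> Tick:
    # imagined vendor B format: numeric fields, millisecond timestamps
    return Tick(raw["instrument"], float(raw["last"]), int(raw["volume"]),
                raw["ts_ms"] * 1_000_000)

a = normalize_vendor_a({"sym": "BTC-USD", "px": "64250.5", "sz": "2",
                        "t_us": 1700000000250000})
b = normalize_vendor_b({"instrument": "BTC-USD", "last": 64250.5, "volume": 2,
                        "ts_ms": 1700000000250})
```

Both raw messages describe the same event, so both normalizers produce the same `Tick`.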
Persistent storage of historical market data, shared between live and backtesting. Backtests read the same data live trading consumes — not a separate "research" dataset.
Set contracts, trading sessions, instrument configurations, and calendars once. Consumed by both backtest and live, so that what you backtest matches what you deploy.
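For illustration, a static instrument definition might look like the fragment below; the field names and layout are assumptions, not the platform's actual schema. The same file is read by backtesting workers and by live services.

```yaml
# Hypothetical static config, shared by backtest and live
instruments:
  ES:
    exchange: CME
    tick_size: 0.25
    multiplier: 50
    session: "17:00-16:00 America/Chicago"
calendars:
  CME: us_futures
```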
An internal web app for your operators and quants to configure, deploy, and monitor strategies, and to access reports.
An optional external app where investors view performance and reports, and initiate deposits and withdrawals.
Backtests run the identical strategy code, against the identical market data, with the identical static configuration that runs in live trading.
No separate research-vs-production code path, no separate dataset, no separate config. A strategy that behaved a certain way in backtest behaves the same way in production — modulo real-world execution.
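The parity guarantee can be expressed as a simple structural idea: the strategy is written against one interface, and only the runner changes. This sketch uses invented names (`Strategy`, `run`) and toy logic; it shows the shape of the guarantee, not the real API.

```python
class Strategy:
    """One code path: the same class runs in backtest and live."""
    def __init__(self):
        self.signals = []

    def on_tick(self, price: float) -> None:
        # toy logic: buy whenever price drops below 100
        if price < 100:
            self.signals.append(("buy", price))

def run(strategy: Strategy, feed) -> None:
    # `feed` is an iterator of prices: historical data in backtest,
    # a live market-data stream in production. The strategy cannot tell
    # the difference, which is what makes results transferable.
    for price in feed:
        strategy.on_tick(price)

backtest = Strategy()
run(backtest, [101.0, 99.5, 100.2, 98.0])   # historical replay
```

In live trading the same `Strategy` instance would be driven by the real-time feed instead of a list, with no change to the strategy code itself.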
Core services communicate over a custom socket protocol that adds only microseconds of latency.
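Low protocol overhead typically comes from compact, fixed binary framing rather than text serialization. The wire layout below is purely an assumption for illustration (the actual protocol is proprietary): a little-endian header carrying payload length, message type, and a nanosecond timestamp.

```python
import struct

# Hypothetical frame header: payload length (u32), msg type (u16), ts_ns (u64)
HEADER = struct.Struct("<IHQ")

def encode(msg_type: int, ts_ns: int, payload: bytes) -> bytes:
    return HEADER.pack(len(payload), msg_type, ts_ns) + payload

def decode(frame: bytes):
    length, msg_type, ts_ns = HEADER.unpack_from(frame)
    payload = frame[HEADER.size:HEADER.size + length]
    return msg_type, ts_ns, payload

frame = encode(1, 1_700_000_000_000_000_000, b"ESZ5|buy|10")
```

Fixed-offset headers like this can be parsed without allocation or text scanning, which is what keeps per-message overhead in the microsecond range.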
The Accounts Service uses a write-ahead log (WAL) that can be replayed after a failure. All events are asynchronously persisted in the centralized database for reporting.
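The write-ahead pattern is: append the event to the log first, then apply it to in-memory state, so state can always be rebuilt by replaying the log. A minimal in-memory sketch (the real WAL is durable storage, and these names are illustrative):

```python
class Accounts:
    def __init__(self, wal=None):
        self.wal = wal if wal is not None else []
        self.positions = {}
        for event in list(self.wal):        # replay the log on startup
            self._apply(event)

    def record_fill(self, symbol: str, qty: int) -> None:
        event = ("fill", symbol, qty)
        self.wal.append(event)              # 1. append to the log first
        self._apply(event)                  # 2. only then mutate state

    def _apply(self, event) -> None:
        _, symbol, qty = event
        self.positions[symbol] = self.positions.get(symbol, 0) + qty

svc = Accounts()
svc.record_fill("ES", 2)
svc.record_fill("ES", -1)
recovered = Accounts(wal=svc.wal)           # simulate restart + replay
```

Because every mutation is logged before it is applied, the recovered instance reconstructs exactly the pre-crash positions.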
The platform is designed to run as a mesh, not a monolith. Different services for different strategies and venues can live in different regions.
A typical deployment puts instances of Accounts Service, Strategies Service, and Execution Service close to your primary broker, and backtesting workers in whichever region offers the cheapest compute.
The platform is deployed into your environment — AWS, Azure, GCP, OVH, or on-premise. Each customer runs on a dedicated deployment, isolated from any other customer's data and compute.
We operate the platform: deployments, upgrades, monitoring, and on-call. You retain ownership of strategy code, market data, broker credentials, and capital. We do not custody funds and do not see broker credentials unless you explicitly grant access for support.
Stateful services run with replication and automated failover within a region. Stateless services (Strategies, Execution) restart on failure with no loss of in-flight state — the Accounts Service, as the order management system, is the source of truth and replays missed updates after reconnection.
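Replay-after-reconnect usually rests on monotonic sequence numbers: each consumer remembers the last sequence it applied, and on reconnection asks the source of truth for everything after it. A sketch under that assumption, with invented names:

```python
class SourceOfTruth:
    """Stands in for the service that owns authoritative state."""
    def __init__(self):
        self.log = []                       # (seq, update) pairs

    def publish(self, update) -> int:
        seq = len(self.log) + 1
        self.log.append((seq, update))
        return seq

    def replay_from(self, last_seen: int):
        return [u for s, u in self.log if s > last_seen]

class Consumer:
    """A stateless service that may restart and lose its connection."""
    def __init__(self):
        self.last_seen = 0
        self.applied = []

    def apply(self, seq: int, update) -> None:
        self.applied.append(update)
        self.last_seen = seq

    def reconnect(self, source: SourceOfTruth) -> None:
        for update in source.replay_from(self.last_seen):
            self.last_seen += 1
            self.applied.append(update)

oms = SourceOfTruth()
consumer = Consumer()
consumer.apply(oms.publish("fill-1"), "fill-1")
oms.publish("fill-2")                       # missed: consumer was down
oms.publish("fill-3")
consumer.reconnect(oms)                     # gap is replayed on reconnect
```

The restarted service needs no local durable state of its own; catching up is a single replay request keyed on the last applied sequence number.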
Configuration changes and deployments are audited and reversible. A staging environment runs against live market data without sending real orders. Observability data — metrics, traces, logs — flows into your existing tooling.
When the platform is deployed into your cloud, you retain ownership of:
Written by your team, versioned in your Git, deployed via the Control Panel.
Collected into your data layer, persisted in your storage.
Held in your cloud's secret store.
Held in your broker accounts. The platform places orders on your behalf but does not custody funds.
Start backtesting today. Go live when ready.
All at a cost comparable to a single engineer, without capital investment.