00 — practices
Five tracks. One bar. AI that earns its place in production.
We work with desks, labs, and engineering teams who measure the work in PnL or in uptime. Pick the track that fits. We will tell you if it does not.
tracks: 5 (trading · finance · crypto · research · engineering)
engagement cap: 5 (never more, on purpose)
typical scope: 2–12 wks (research to PnL)
reply window: 2 business days (bad fits told plainly)
01 — the bench
Named in operator language. Scoped to weeks, not quarters.
01
Trading & execution
Signal pipelines, regime detection, slippage models, and execution logic that survives the next regime change.
- alpha capture
- TCA
- venue routing
- live monitoring
02
Finance
Portfolio construction, factor research, risk overlays — the boring half done well, with code your quants can actually read.
- factor research
- risk overlays
- backtest infra
- portfolio analytics
03
Crypto
Market-maker tooling, on-chain data pipelines, OTC ops, and the parts of DeFi where AI actually helps.
- MM tooling
- on-chain data
- OTC ops
- monitoring
04
Research
Eval harnesses, RAG systems that scale, and the work of taking an interesting paper to something that runs at 06:00 every weekday.
- eval harnesses
- RAG infra
- paper-to-prod
- experiment tooling
05
Engineering
Code-gen workflows, internal tooling, dev infra — applying AI where it earns its place inside an engineering org.
- internal tools
- code-gen workflows
- dev infra
- eval-in-CI
02 — fit
Five engagements at a time, no more.
That cap is a feature. It keeps the work serious and the calendar honest. Most engagements run 2 to 12 weeks. Some are a one-week look at a notebook nobody dares touch. Others ship a daily job that runs at 06:00 and stays running.
We will say no to work we cannot do well, including anything that wants a model where a script works.
03 — contact
Send a sentence about the work.
We reply within two business days. If we are a bad fit, we will tell you and usually point you toward someone who is.