Why I Chose Rust for 0xInsider

No QA team. No on-call rotation. One person, one binary, one financial product. The compiler is my entire engineering org.

Trevor I. Lasn
· 13 min read
Building 0xinsider.com — see who's winning across prediction markets (Polymarket, Kalshi, and more) — and what they're trading right now.

I have no QA team. No on-call rotation. No second pair of eyes on my pull requests at 1am. I’m one person running a production financial terminal that tracks 7,000+ prediction market traders across Polymarket and Kalshi, streams every large trade in real time, and computes P&L analytics on millions of records. If a bug ships, there’s nobody to page. If a data race corrupts a trader’s P&L at 3am, I find out when a user opens a support ticket.

That’s why I chose Rust. Not because it’s trendy. Not because the benchmarks look good. Because when you’re solo and the product handles financial data, the compiler has to be your engineering team — your QA, your code reviewer, your safety net. If it compiles, the concurrency is correct, every error path is handled, and every database query matches the schema. I push on Friday night and sleep fine.

I’ve been writing TypeScript for years. I started writing Rust seriously in October 2025, building trading bots for prediction markets. That work pulled me deep into the APIs, the on-chain data, the trading patterns — and eventually led to 0xInsider. It’s an intelligence terminal for prediction markets: real-time trade feeds, trader analytics, performance grading, leaderboards, portfolio breakdowns. One Rust binary powers all of it.

Here’s what that one binary actually does.

One process, many subsystems

Everything runs in a single process, all the time.

main()
├─ HTTP server (user requests, hundreds of concurrent connections)
├─ sync_loop (10 concurrent trader syncs, 11-phase pipeline each)
├─ polymarket_websocket (primary real-time whale trade detection via RTDS)
├─ whale_trades_poller (60s backfill for trades missed during WS reconnects)
├─ periodic_tasks (rankings, materialized view refresh, classifiers)
├─ discovery_loop (daily wallet discovery, new trader detection)
├─ kalshi_websocket (live market data stream)
├─ sports_websocket (live market data stream)
├─ trade_flow_monitor (real-time flow analysis)
└─ export_worker (background dataset exports)

Each trader sync is an 11-phase pipeline — fetch activity from external APIs, parse and validate, upsert into a time-partitioned database, resolve market metadata, recompute market stats and P&L, fetch positions, classify the trader’s strategy, compute derived metrics, backfill missed trades, run alerts, and reconcile resolved markets with integrity checks. Ten of those run simultaneously while the WebSocket ingests trades in real time and the HTTP server handles user requests without flinching.

In most languages, the risk is subtle. Two tasks touch the same state, one mutates it, and you get a data race that only manifests under production load at 2am. In Rust, if two tasks access shared state without proper synchronization, the code won’t compile. Not “it might race condition” — it refuses to build. That’s a fundamentally different contract. The concurrency bugs don’t happen at runtime because they can’t exist at compile time.
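
That contract is easy to demonstrate in miniature. The sketch below uses plain threads and a Mutex instead of tokio tasks, and `concurrent_total` is an illustrative name, not something from the 0xInsider codebase. Delete the Mutex and hand the threads a bare `&mut i64`, and the program stops compiling.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Ten writers updating one shared total. The Mutex is not optional:
// hand a plain `&mut i64` to two threads and the borrow checker
// rejects the program instead of letting it race at runtime.
fn concurrent_total() -> i64 {
    let total = Arc::new(Mutex::new(0i64));

    let handles: Vec<_> = (0..10)
        .map(|_| {
            let total = Arc::clone(&total);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    *total.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    let n = *total.lock().unwrap();
    n
}

fn main() {
    // Every increment is accounted for: 10 threads x 1_000 each.
    assert_eq!(concurrent_total(), 10_000);
    println!("ok");
}
```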

But the interesting part isn’t the safety guarantees everyone talks about. It’s the specific architectural patterns Rust enables that I couldn’t replicate — or couldn’t trust — in other languages.

190 SQL queries, zero runtime surprises

The backend has 190+ SQL query files and 247 database migrations. The schema changes constantly — I ship features almost daily, which means new columns, renamed fields, altered types. Every single query is verified against the real database schema at compile time using SQLx macros.

src/traders/profile.rs
let row = sqlx::query_file_as!(
    TraderRow,
    "queries/traders/get_trader_profile.sql",
    address
)
.fetch_optional(&state.pool)
.await
.map_err(db_err!("fetching trader"))?;

The SQL lives in a separate .sql file. At compile time, SQLx reads it, checks it against the live database schema, and verifies every column name, every type, every parameter. The TraderRow struct must match the query’s output exactly or it won’t compile.

Rename a column in migration 248 and forget to update one of those 190 query files? cargo build fails. Not a runtime crash in production three days later when someone happens to hit the right endpoint. The build itself won’t finish. For a solo developer shipping schema changes weekly, this is the difference between confidence and anxiety. I refactor the database schema the same way I’d refactor code — rename freely, change types, restructure joins — and the compiler tells me every file I missed.

No ORM does this. Prisma and TypeORM check your types at codegen or declaration time, but both still rely on runtime deserialization and can silently drift from the live schema. SQLx validates the exact query, the exact types, and the exact result shape — at compile time, with zero runtime overhead. It’s the closest thing to a database type system I’ve found.

External APIs lie

If you build on third-party data, you already know this. Fields change types without warning. An endpoint that returned JSON yesterday returns HTML today. A field that was always present stops showing up. Documentation says one thing, the actual response says another.

When you’re computing financial metrics — P&L, position sizes, win rates — a mishandled field doesn’t just cause a crash. It causes a wrong number. A NaN that sneaks into a P&L calculation. A trade size parsed as shares when it should be dollars. A null that silently becomes zero and wipes out someone’s profit history. Those bugs are worse than crashes because nobody notices until the data is already wrong.

In Rust, Serde forces you to declare the exact shape you expect from every external response:

src/polymarket_ws/parsing.rs
#[derive(Debug, serde::Deserialize)]
pub struct RtdsTrade {
    pub proxy_wallet: Option<String>,    // not always present
    pub size: Option<serde_json::Value>, // can be int or float
    pub price: Option<f64>,
    pub timestamp: Option<i64>,
}

Every field is Option<T> because the API doesn’t guarantee any of them. The code that consumes this struct has to handle the None case — the compiler won’t let you pretend a field exists when it might not. No undefined is not a function. No Cannot read property 'size' of null. You deal with the messiness upfront, or your code doesn’t compile.

This sounds like more work. It is — on the first day. After that, it’s less work, because you never debug phantom NaNs in production. Every edge case is handled once, at parse time, and the compiler ensures nothing downstream ever sees unvalidated data.
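
That “handled once, at parse time” step can be sketched without serde: funnel every optional field through a single validation function, and the `?` operator on Option turns each missing field into an explicit early exit. The names below (ValidTrade, validate) are illustrative, not from the codebase.

```rust
// Hypothetical validated counterpart to an all-Option trade struct.
#[derive(Debug, PartialEq)]
pub struct ValidTrade {
    pub wallet: String,
    pub size: f64,
    pub price: f64,
    pub timestamp: i64,
}

// One choke point: `?` on Option bails out on any missing field,
// so nothing downstream ever sees a half-populated trade.
pub fn validate(
    wallet: Option<String>,
    size: Option<f64>,
    price: Option<f64>,
    timestamp: Option<i64>,
) -> Option<ValidTrade> {
    Some(ValidTrade {
        wallet: wallet?,
        size: size?,
        price: price?,
        timestamp: timestamp?,
    })
}

fn main() {
    // Missing price: the whole trade is rejected, not zero-filled.
    assert!(validate(Some("0xabc".to_string()), Some(100.0), None, Some(1)).is_none());
    println!("ok");
}
```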

Every failure mode, named and handled

External APIs go down. They rate-limit you. They return garbage. A market that existed yesterday returns 404 today. When you’re integrating with multiple exchanges that each have a dozen failure modes, “try/catch everything” isn’t a strategy — it’s a prayer. You’ll forget one edge case at 2am and ship a silent failure.

In Rust, I model every failure mode as an enum variant. The compiler forces me to handle each one:

src/error.rs
pub enum ApiError {
    RateLimited,             // wait and retry with backoff
    NotFound,                // this trader/market is gone, skip forever
    ResourceGone(String),    // market deleted, skip
    InvalidResponse(String), // returned garbage, don't retry
    Timeout(u64),            // transient, retry
    ExchangeUnavailable { status: u16, body: String },
    InvalidRequest(String),
    UpstreamAuth(String),
}

RateLimited goes back in the queue with exponential backoff. NotFound gets skipped forever. InvalidResponse gets logged and dropped — no point retrying garbage. Each variant carries exactly the data needed to make the right recovery decision.

The real payoff comes later. When I add a new variant — say, a new exchange introduces a rate limit with a retry-after header — the compiler flags every match statement in the codebase that doesn’t handle it. Every single one. Not “you should probably check your error handling.” A hard compile error in every file that matches on this enum. No silent failures. No empty catch {}. No “we swallowed an error somewhere and now a user’s portfolio shows $0.”
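
In sketch form, the recovery policy is a single exhaustive match. The variant list below is trimmed, and RetryDecision is an illustrative name rather than the real type; the point is that the match has no catch-all arm, so a new variant breaks the build until it gets a policy.

```rust
// Illustrative recovery decision type, not from the 0xInsider codebase.
#[derive(Debug, PartialEq)]
enum RetryDecision {
    Backoff,
    SkipForever,
    Drop,
    Retry,
}

// Trimmed-down subset of the ApiError enum from the article.
enum ApiError {
    RateLimited,
    NotFound,
    InvalidResponse(String),
    Timeout(u64),
}

// No `_ =>` arm on purpose: adding a variant to ApiError turns every
// call site like this one into a compile error until it is handled.
fn decide(err: &ApiError) -> RetryDecision {
    match err {
        ApiError::RateLimited => RetryDecision::Backoff,
        ApiError::NotFound => RetryDecision::SkipForever,
        ApiError::InvalidResponse(_) => RetryDecision::Drop,
        ApiError::Timeout(_) => RetryDecision::Retry,
    }
}

fn main() {
    assert_eq!(decide(&ApiError::RateLimited), RetryDecision::Backoff);
    println!("ok");
}
```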

The fast lane: priority scheduling with biased select

When a user visits a trader’s profile on 0xInsider, the system syncs that trader’s latest data on demand. But the sync loop is also running bulk jobs — background re-syncs, discovery, periodic refreshes. Without careful scheduling, a user’s profile visit could wait behind ten bulk jobs. The page would feel slow for no good reason.

Rust’s async runtime gives me something most languages can’t: priority scheduling with tokio::select! and its biased mode.

src/sync/sync_loop.rs
loop {
    tokio::select! {
        biased;
        Some(job) = fast_lane_rx.recv() => {
            // Profile visits get handled immediately
            spawn_sync(job, &fast_semaphore).await;
        }
        _ = redis_interval.tick() => {
            // Bulk jobs only run when the fast lane is empty
            if let Some(job) = queue.dequeue().await {
                spawn_sync(job, &bulk_semaphore).await;
            }
        }
    }
}

The biased keyword tells Tokio to check the fast lane channel first, every iteration. A profile visit never waits behind a bulk sync — it jumps the queue. Two separate semaphores (3 fast lane slots, 7 bulk slots) ensure that a burst of bulk work can’t starve interactive requests, and vice versa.

In Go, select is deliberately fair — it picks a random ready case. You can’t prioritize without building your own scheduling layer. In Node.js, the event loop processes callbacks in insertion order. Rust’s biased select is a single keyword that gives you deterministic priority scheduling for free. It directly affects UX: trader profiles load fast because the sync never waits in line.
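
Stripped of the async machinery, the policy that biased select encodes is just “always check the fast lane first.” A stdlib sketch, with `next_job` as an assumed name:

```rust
use std::collections::VecDeque;

// The scheduling policy `select! { biased; ... }` expresses, minus the
// async machinery: drain the fast lane before touching bulk work.
fn next_job(
    fast: &mut VecDeque<&'static str>,
    bulk: &mut VecDeque<&'static str>,
) -> Option<&'static str> {
    fast.pop_front().or_else(|| bulk.pop_front())
}

fn main() {
    let mut fast = VecDeque::from(["profile-visit"]);
    let mut bulk = VecDeque::from(["bulk-resync-1", "bulk-resync-2"]);

    // The profile visit jumps the queue even though bulk jobs are waiting.
    assert_eq!(next_job(&mut fast, &mut bulk), Some("profile-visit"));
    assert_eq!(next_job(&mut fast, &mut bulk), Some("bulk-resync-1"));
    println!("ok");
}
```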

Backpressure that can’t deadlock

0xInsider detects large trades in real time via a persistent WebSocket connection to the exchange. Every trade that settles on-chain flows through this socket. The reader parses each message, filters for trades above the dollar threshold, and pushes them downstream for database insertion and enrichment — order book snapshots, signal scoring, trader auto-import, SSE broadcast to the frontend. Users see a large trade in the terminal within a second of it settling.

That’s the happy path. The failure mode is what matters: if the database slows down during a bulk sync or a materialized view refresh, the insertion step backs up. Without backpressure, the WebSocket reader blocks waiting for the flusher, misses a heartbeat, disconnects, and triggers a reconnect storm — losing trades during every reconnect window.

I use a bounded mpsc channel between the reader and the flusher:

src/polymarket_ws/stream.rs
let (flush_tx, flush_rx) = tokio::sync::mpsc::channel::<Vec<PendingTrade>>(8);

The channel holds 8 batches maximum. If the database is slow and the channel fills up, try_send() returns Err — the reader drops that batch and keeps reading the socket. The WebSocket connection stays alive. No missed heartbeat, no reconnect, no trade gap.

Any trades dropped during a slow flush get picked up by a REST poller that runs every 60 seconds as a backfill. The two systems write to the same table with the same unique constraint — ON CONFLICT DO NOTHING — so duplicates are impossible. Both sort their inserts by (condition_id, transaction_hash) to prevent deadlocks when they write concurrently.

In most languages, coordinating a real-time reader, a batched flusher, and a backfill poller — all writing to the same table without deadlocks or data loss — requires careful manual synchronization. In Rust, the bounded channel enforces backpressure, the compiler ensures the reader and flusher can’t share mutable state, and the type system prevents you from accidentally using a blocking send where you need a non-blocking one.
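
The non-blocking send is easy to see with the stdlib’s bounded channel, whose try_send has the same full-channel semantics as tokio’s (capacity shrunk to 2 here so it fills immediately):

```rust
use std::sync::mpsc::sync_channel;

fn main() {
    // Stdlib stand-in for a bounded tokio mpsc channel, capacity 2
    // instead of 8 so the demo fills it right away.
    let (flush_tx, _flush_rx) = sync_channel::<Vec<u64>>(2);

    assert!(flush_tx.try_send(vec![1]).is_ok());
    assert!(flush_tx.try_send(vec![2]).is_ok());

    // Channel full: try_send returns Err instead of blocking, so a
    // reader in this position drops the batch and keeps the socket
    // alive rather than stalling and missing a heartbeat.
    assert!(flush_tx.try_send(vec![3]).is_err());
    println!("ok");
}
```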

Deterministic resource cleanup

0xInsider streams trades to browsers via server-sent events. Hundreds of concurrent connections, each holding a slot. When a user closes their tab, disconnects, or times out, that slot needs to be released — immediately, not “eventually when the GC runs.”

In Rust, I wrote a guard struct with a Drop implementation:

src/sse.rs
struct SseConnectionGuard {
    user_id: i32,
    connections: Arc<DashMap<i32, AtomicU32>>,
}

impl Drop for SseConnectionGuard {
    fn drop(&mut self) {
        if let Some(count) = self.connections.get(&self.user_id) {
            count.fetch_sub(1, Ordering::Relaxed);
        }
        GLOBAL_SSE_COUNT.fetch_sub(1, Ordering::Relaxed);
    }
}

The guard moves into the async stream. When the stream ends — for any reason — Rust drops the guard and the counter decrements. Client disconnects? Dropped. Server timeout? Dropped. Panic somewhere upstream? Still dropped. You can’t forget to clean up because cleanup isn’t your job. The language handles it.

This pattern repeats everywhere in the system. Semaphore permits release when the sync task finishes — even if it panics. Database connections return to the pool when the query handler exits scope. Redis connections are reclaimed automatically. In a system with ten concurrent subsystems sharing resources, deterministic cleanup isn’t a nice-to-have. It’s the reason the connection pool isn’t exhausted after 48 hours and the SSE slot counter doesn’t drift.
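
The guarantee is easy to verify in isolation. SlotGuard below is a minimal stand-in for the real guard, not the production type: the counter is back to zero as soon as the handler’s scope ends, and it would be even if the handler panicked and unwound.

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Global slot counter, like GLOBAL_SSE_COUNT in the article.
static SLOTS: AtomicU32 = AtomicU32::new(0);

// Minimal stand-in for SseConnectionGuard: the decrement lives in
// Drop, so it runs on normal exit, early return, or panic unwind.
struct SlotGuard;

impl SlotGuard {
    fn acquire() -> SlotGuard {
        SLOTS.fetch_add(1, Ordering::Relaxed);
        SlotGuard
    }
}

impl Drop for SlotGuard {
    fn drop(&mut self) {
        SLOTS.fetch_sub(1, Ordering::Relaxed);
    }
}

fn handle_connection() {
    let _guard = SlotGuard::acquire();
    // ... stream events; the guard releases the slot when this scope ends.
}

fn main() {
    handle_connection();
    // No manual cleanup call anywhere, yet the slot is released.
    assert_eq!(SLOTS.load(Ordering::Relaxed), 0);
    println!("ok");
}
```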

Two pools, zero cross-contamination

The backend runs two separate database connection pools — one for the API (user requests) and one for the sync pipeline (background data processing). They have different configurations because they serve different workloads:

src/main.rs
// API pool: fast queries, short timeouts
let api_pool = PgPoolOptions::new()
    .max_connections(api_pool_size) // default 8
    .acquire_timeout(Duration::from_secs(5))
    .after_connect(|conn, _| Box::pin(async move {
        conn.execute("SET statement_timeout = '30s'").await?;
        conn.execute("SET work_mem = '4MB'").await?;
        Ok(())
    }))
    .connect(&database_url).await?;

// Sync pool: heavy queries, generous timeouts
let sync_pool = PgPoolOptions::new()
    .max_connections(sync_pool_size) // default 10
    .acquire_timeout(Duration::from_secs(30))
    .after_connect(|conn, _| Box::pin(async move {
        conn.execute("SET statement_timeout = '600s'").await?;
        conn.execute("SET work_mem = '8MB'").await?;
        Ok(())
    }))
    .connect(&database_url).await?;

The API pool has a 30-second statement timeout. If a query takes longer, it gets killed — something is wrong, and the user shouldn’t wait. The sync pool gets 600 seconds because a full trader sync fetches thousands of records, computes P&L, and reconciles market outcomes. That’s legitimately slow work.

The important thing: the compiler enforces which functions use which pool. API handlers receive &ApiState which holds the API pool. Sync tasks receive &SyncState which holds the sync pool. You can’t accidentally run a 10-minute sync query on the API pool and block every user request. The type system makes the mistake impossible.
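
The enforcement mechanism is nothing exotic: two distinct state types, each owning its own pool. In the sketch below, Pool is a stand-in for sqlx::PgPool and the field names are illustrative.

```rust
// Stand-in for sqlx::PgPool; only the distinct wrapper types matter here.
struct Pool {
    label: &'static str,
}

// Two distinct types, one per workload. Neither can be passed where
// the other is expected.
struct ApiState { pool: Pool }
struct SyncState { pool: Pool }

fn api_handler(state: &ApiState) -> &'static str {
    state.pool.label
}

fn sync_task(state: &SyncState) -> &'static str {
    // Calling api_handler(state) here would be a type error,
    // not a runtime surprise.
    state.pool.label
}

fn main() {
    let api = ApiState { pool: Pool { label: "api" } };
    let sync = SyncState { pool: Pool { label: "sync" } };
    assert_eq!(api_handler(&api), "api");
    assert_eq!(sync_task(&sync), "sync");
    println!("ok");
}
```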

Three lines that cut memory 30%

Rust lets you swap the global memory allocator. The default system allocator fragments heavily under concurrent workloads — ten sync workers, hundreds of rate limiter entries, cache objects being created and destroyed constantly. Jemalloc is designed for exactly this pattern:

src/main.rs
#[cfg(not(target_env = "msvc"))]
#[global_allocator]
static GLOBAL: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;

Three lines. No code changes anywhere else. The allocator handles every allocation in the entire process — Tokio’s task scheduler, DashMap entries, database connection buffers, JSON parsing. Memory usage dropped roughly 30% under production load.

You can’t do this in Go, Node.js, or Python. The runtime owns the allocator. In Rust, it’s a compile-time decision with zero runtime overhead. For a single-process system that runs everything — API server, sync workers, WebSocket handlers, job queue — memory efficiency compounds.

One binary, one deploy

The whole system compiles to a single binary. No runtime dependencies, no version mismatches between services, no “works locally but the container is missing a library.” The Dockerfile is a multi-stage build — compile the Rust code, copy one binary into a minimal Debian image. The final container is about 150MB: the entire API server, all sync workers, WebSocket handlers, and the job queue processor.
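
A multi-stage build along those lines looks roughly like this. The image tags, paths, and binary name are assumptions for illustration, not the real Dockerfile:

```dockerfile
# Stage 1: build. Image tag and binary name are hypothetical.
FROM rust:1.77 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

# Stage 2: runtime. Only the compiled binary is copied over;
# the Rust toolchain and build cache never reach the final image.
FROM debian:bookworm-slim
COPY --from=builder /app/target/release/backend /usr/local/bin/backend
CMD ["backend"]
```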

For a solo developer, this matters more than it sounds. One binary means one thing to deploy, one thing to monitor, one process to restart, one set of logs to search. No Docker Compose orchestrating five services. No service mesh. No “the worker is up but the API isn’t” at 2am. When the deploy goes out, everything goes out together, and either all of it works or none of it does.

The tradeoffs

Rust is not all upside. I’ve been writing it for five months. The borrow checker still fights me regularly — there are moments where I know what I want to do and spend 20 minutes convincing the compiler I’m not about to cause a use-after-free. Compile times are painful. A clean build takes minutes, not seconds. And the learning curve is genuinely steep. Coming from years of TypeScript, Rust demanded a different kind of thinking — but that discipline is exactly why the system is as solid as it is.

The roadmap makes the tradeoff clearer. Right now, 0xInsider reads and displays data. What’s coming is direct trade execution from the terminal, copy trading where you mirror a top trader’s positions automatically, and automated strategies. Placing a real trade on behalf of a user, with real money, is a much higher safety bar than showing them a chart. Copy trading means the system makes decisions autonomously — buying and selling based on another trader’s activity. A race condition there doesn’t show a stale number. It loses someone’s money. I’d rather already be in a language built for that than rewrite the backend when the stakes get higher.

The trade isn’t writing speed. It’s the bugs that never happen. The production incidents that never fire. The race conditions that can’t compile. The SQL schema drift caught at build time instead of at 3am. For a financial product with a team of one, I’d pick Rust again without hesitating.


Trevor I. Lasn

Building 0xinsider.com — see who's winning across prediction markets (Polymarket, Kalshi, and more) — and what they're trading right now. Product engineer based in Tartu, Estonia, building and shipping for over a decade.



This article was originally published on https://www.trevorlasn.com/blog/why-i-chose-rust-for-my-backend. It was written by a human and polished using grammar tools for clarity.