A production Polymarket bot is a low-latency event pipeline: a chain listener feeds a scoring engine, which feeds risk gates, which feed an EIP-712 signer, which feeds a private-mempool submitter, which feeds a confirmer and an append-only ledger. Poly Syncer's automated Polymarket trading stack runs this pipeline at roughly 600 ms p99 on the Elite lane and ~1.5 s p99 on Pro across BNB Chain and Ethereum, signing every order with EIP-712 typed data and routing through Flashbots-style protected endpoints. This piece walks the architecture component by component, with the latency budget that constrains every design choice.
The pipeline at a glance
Before diving into the components, here is the data flow in text-diagram form. A "leader fill" is the on-chain event we react to.
```
[ Polymarket exchange ]
        |
  leader fill event
        v
[ Listener ] ---logs ringbuffer---> [ Observability ]
        |
        v
[ Decoder + dedupe ]
        |
        v
[ Scoring / freshness gate ]     -- composite, last-refresh, sample N
        |
        v
[ Per-user risk gates ]          -- USDC range, category whitelist,
        |                           daily loss cap, position cap,
        |                           per-wallet ceiling
        v
[ Sizer ]                        -- pro-rata to leader, USDC-bounded
        |
        v
[ EIP-712 signer (HSM-backed) ]  -- nonce, deadline, chainId
        |
        v
[ Submitter ]                    -- private mempool / protected RPC
        |
        v
[ Confirmer ]                    -- block inclusion + receipts
        |
        v
[ Append-only ledger ]           -- user-visible trade history
```
Every arrow is a budget line. The total budget at p99 is the sum of the slowest path through the boxes plus network round-trips. To hit 600 ms p99 on Elite, no single component can spend more than ~150 ms at p99, and most are well under 50 ms.
1. The listener
The listener is the part of automated Polymarket trading that can never sleep. It maintains durable WebSocket subscriptions to multiple RPC endpoints — we run a fleet of seven on BNB Chain and five on Ethereum mainnet, geographically split — and consumes logs rather than blocks, because logs arrive a few hundred milliseconds before the block-level notification on most providers.
Three properties matter:
- Multi-RPC consensus. We accept an event the moment two of the seven providers have emitted matching logs. This kills the failure mode where a single laggy or compromised provider feeds bad data.
- Reorg awareness. Polygon reorgs are rare but real. Every observed event carries a confidence level; a 1-block confirmation routes through a smaller-position path while ≥3 confirmations unlocks full sizing.
- Backpressure. The listener writes to a bounded ringbuffer. If downstream stages stall, we shed the oldest events rather than running out of memory; the same buffer feeds an audit log so dropped events are recoverable. Both the consensus rule and this buffer are sketched below.
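To make the consensus rule and the backpressure behaviour concrete, here is a minimal TypeScript sketch rather than the production implementation: accept an event on the second provider that reports a matching log, and shed the oldest buffered item to an audit hook instead of growing. The `LogEvent` shape, the 10,000-item capacity, and the audit hook are illustrative.

```typescript
// Minimal sketch (not production code) of accept-on-second-match plus a
// bounded ringbuffer that sheds its oldest entry under backpressure.
type LogEvent = { txHash: string; logIndex: number; provider: string };

const CONSENSUS_THRESHOLD = 2; // accept on the second matching provider
const seenBy = new Map<string, Set<string>>(); // event key -> providers that reported it
// (production would expire these entries; the downstream dedupe stage covers duplicates)

const eventKey = (e: LogEvent) => `${e.txHash}:${e.logIndex}`;

// Returns true exactly once per event: on the arrival that satisfies consensus.
function acceptOnSecondMatch(e: LogEvent): boolean {
  const providers = seenBy.get(eventKey(e)) ?? new Set<string>();
  providers.add(e.provider);
  seenBy.set(eventKey(e), providers);
  return providers.size === CONSENSUS_THRESHOLD;
}

// Bounded ringbuffer: when downstream stalls, shed the oldest event to the
// audit hook instead of growing without bound.
class RingBuffer<T> {
  private items: T[] = [];
  constructor(private capacity: number, private onShed: (item: T) => void) {}
  push(item: T): void {
    if (this.items.length >= this.capacity) {
      const oldest = this.items.shift();
      if (oldest !== undefined) this.onShed(oldest); // recoverable via the audit log
    }
    this.items.push(item);
  }
  pop(): T | undefined {
    return this.items.shift(); // FIFO: oldest accepted event first
  }
}

const queue = new RingBuffer<LogEvent>(10_000, (e) => console.warn("shed", eventKey(e)));
```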
Listener latency budget
p50 RPC log delivery is roughly 80 ms from block production; p99 is around 240 ms on the slower providers. Multi-RPC consensus adds essentially zero latency because we accept on the second arrival, which is typically still under the p50 of the slowest stream. Internal queue overhead is ~1.2 ms.
2. Decoder and dedupe
Raw logs arrive as ABI-encoded topics and data; we decode them against the exchange ABI into typed records (market, side, price, size, taker address, leader address). A SHA-256 of (txHash, logIndex) keys our dedupe set, which lives in a Redis cluster with a 30-second TTL. The dedupe gate is what protects us when multi-RPC consensus accepts the same event twice because the two confirming providers arrive only a few ms apart.
This is also where we apply the first idempotency check. Every downstream submission carries the same event hash; if a transient failure triggers a retry, the dedupe key prevents double-fill. Idempotency is the single property that makes the rest of the pipeline retryable without correctness risk.
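A minimal sketch of that dedupe key, assuming Redis via `ioredis` and a `dedupe:` key prefix (both illustrative): the first writer wins inside the 30-second TTL window, and every later arrival or retry sees the key and no-ops.

```typescript
// Sketch of the event-hash dedupe gate: SHA-256 of (txHash, logIndex) with a
// 30-second TTL in Redis. Library choice and key prefix are illustrative.
import { createHash } from "node:crypto";
import Redis from "ioredis";

const redis = new Redis(); // assumes a local Redis for the sketch

export function eventId(txHash: string, logIndex: number): string {
  return createHash("sha256").update(`${txHash}:${logIndex}`).digest("hex");
}

// Returns true only for the first sighting of the event inside the TTL window;
// false means a duplicate (second consensus arrival or a retry).
export async function claimEvent(txHash: string, logIndex: number): Promise<boolean> {
  const key = `dedupe:${eventId(txHash, logIndex)}`;
  // SET key value EX 30 NX -> "OK" on first write, null if the key already exists.
  const res = await redis.set(key, "1", "EX", 30, "NX");
  return res === "OK";
}
```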
3. Scoring and freshness gate
Not every leader fill should trigger a copy. The scoring stage answers "is this leader still on the leaderboard, with a fresh enough composite, and is the trade itself in-distribution for that leader?" Three checks:
- Composite cache lookup. The most recent 15-minute composite for the leader wallet must be at or above the user's threshold (default Sharpe ≥ 1.6 for new users).
- Sample size floor. Leader must have ≥ 30 trades in the rolling window; users can require ≥ 100 in /dashboard/settings.
- Trade-shape sanity. If the leader's typical position is $50–$300 and this fill is $14,000, we flag it as out-of-distribution and route it through a slower confirmation path; the size on the user side is bounded regardless.
The full scoring stack is documented in our wallet scoring post and /methodology.
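For illustration, the three checks reduce to something like the sketch below. The `LeaderProfile` shape and the 10x out-of-distribution multiplier are assumptions for the example, not the production thresholds.

```typescript
// Sketch of the scoring / freshness gate. Field names are illustrative;
// defaults mirror the prose (Sharpe >= 1.6, >= 30 trades in the window).
type LeaderProfile = {
  sharpe15m: number;                 // most recent 15-minute composite
  sampleSize: number;                // trades in the rolling window
  typicalFillUsd: [number, number];  // e.g. [50, 300]
};

type ScoreDecision = "copy" | "copy_slow_path" | "skip";

function scoreGate(
  leader: LeaderProfile,
  fillUsd: number,
  userSharpeFloor = 1.6,
  userSampleFloor = 30,
): ScoreDecision {
  if (leader.sharpe15m < userSharpeFloor) return "skip";  // stale or weak composite
  if (leader.sampleSize < userSampleFloor) return "skip"; // not enough trades
  const [, typicalHigh] = leader.typicalFillUsd;
  // Out-of-distribution fill: route through the slower confirmation path.
  // The 10x multiplier here is an illustrative choice.
  if (fillUsd > typicalHigh * 10) return "copy_slow_path";
  return "copy";
}
```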
4. Per-user risk gates
This is where automated Polymarket trading becomes safe. Every user has a configuration object loaded into memory at engine start; risk gates run as a chain of pure functions over (event, user_config, user_state).
| Gate | Default | Configurable at |
|---|---|---|
| USDC min / max per trade | $10 / $200 | /dashboard/settings |
| Category whitelist | All 25 | /dashboard/settings |
| Daily loss cap | 5% of bankroll | /dashboard/settings |
| Open position cap | 15 | /dashboard/settings |
| Per-leader exposure cap | 20% of bankroll | /dashboard/settings |
| Resolution proximity gate | Skip if <10 min to resolve | not configurable |
The resolution-proximity gate is hard-coded because firing a copy on a market that resolves in 90 seconds is almost always a mistake — the leader is closing for liquidity reasons, not edge reasons.
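A sketch of the gate chain, assuming illustrative field names and that the gates evaluate the proposed user-side size: each gate is a pure function over (event, user_config, user_state) that either passes or returns a skip reason, and the first failure wins.

```typescript
// Sketch of the risk gates as a chain of pure functions. Shapes are
// illustrative; defaults mirror the table above.
type CopyEvent = {
  category: string;
  minutesToResolution: number;
  proposedUsd: number; // proposed user-side size
  leader: string;
};
type UserConfig = {
  minUsd: number;           // $10 default
  maxUsd: number;           // $200 default
  categories: Set<string>;  // whitelist, all 25 by default
  dailyLossCapPct: number;  // 5 by default
  maxOpenPositions: number; // 15 by default
  perLeaderCapPct: number;  // 20 by default
};
type UserState = {
  bankrollUsd: number;
  dailyPnlUsd: number;      // negative when down on the day
  openPositions: number;
  exposureByLeaderUsd: Record<string, number>;
};

// null = pass; a string is the skip reason written to the ledger.
type Gate = (e: CopyEvent, cfg: UserConfig, st: UserState) => string | null;

const gates: Gate[] = [
  (e, cfg) => (e.proposedUsd < cfg.minUsd || e.proposedUsd > cfg.maxUsd ? "usdc_range" : null),
  (e, cfg) => (cfg.categories.has(e.category) ? null : "category_not_whitelisted"),
  (_e, cfg, st) =>
    -st.dailyPnlUsd >= (cfg.dailyLossCapPct / 100) * st.bankrollUsd ? "daily_loss_cap" : null,
  (_e, cfg, st) => (st.openPositions >= cfg.maxOpenPositions ? "position_cap" : null),
  (e, cfg, st) =>
    (st.exposureByLeaderUsd[e.leader] ?? 0) >= (cfg.perLeaderCapPct / 100) * st.bankrollUsd
      ? "per_leader_cap"
      : null,
  (e) => (e.minutesToResolution < 10 ? "resolution_proximity" : null), // hard-coded gate
];

// First failing gate wins; null means every gate passed and the event proceeds to the sizer.
function runGates(e: CopyEvent, cfg: UserConfig, st: UserState): string | null {
  for (const gate of gates) {
    const reason = gate(e, cfg, st);
    if (reason) return reason;
  }
  return null;
}
```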
5. The sizer
Sizing is pro-rata to the leader's bankroll, then bounded by the user's USDC range. If the leader put 2.4% of their bankroll into the trade, we compute 2.4% of the user's effective bankroll and clip to [min, max]. We do not blindly mirror the leader's USDC amount, because copying a $5,000 fill with $500 of capital produces nothing sensible.
The sizer also applies fractional Kelly damping when the user has the Kelly toggle enabled in settings; the default damping is 0.5x full-Kelly, which empirically halves drawdown without halving long-run growth.
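A sketch of the sizing math follows; exactly how the Kelly damping composes with the pro-rata fraction is an illustrative assumption.

```typescript
// Sketch of the sizer: pro-rata to the leader's bankroll fraction, optional
// fractional-Kelly damping, then clipped to the user's USDC range.
function sizeOrder(
  leaderFillUsd: number,
  leaderBankrollUsd: number,
  userBankrollUsd: number,
  minUsd: number,
  maxUsd: number,
  kellyDamping: number | null = null, // e.g. 0.5 when the Kelly toggle is on
): number {
  const leaderFraction = leaderFillUsd / leaderBankrollUsd; // e.g. 2.4% of leader bankroll
  let sizeUsd = leaderFraction * userBankrollUsd;           // same fraction of user bankroll
  if (kellyDamping !== null) sizeUsd *= kellyDamping;       // 0.5x full-Kelly by default
  return Math.min(Math.max(sizeUsd, minUsd), maxUsd);       // clip to [min, max]
}

// Example: the leader puts $2,400 of a $100k bankroll (2.4%) into a market; a
// user with a $5,000 bankroll and a $10-$200 range copies $120.
const example = sizeOrder(2_400, 100_000, 5_000, 10, 200); // 120
```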
6. The EIP-712 signer
Every Polymarket order is an off-chain signed message that the exchange contract verifies on-chain at fill time. The signature scheme is EIP-712 typed structured data, which is the standard for human-readable, replay-resistant signatures across EVM chains.
Why EIP-712 specifically:
- Domain separator binds the signature to a specific contract on a specific chain, so a signature for a BNB Chain order cannot be replayed on Ethereum mainnet, and vice versa.
- Typed fields let users (and security tooling) see exactly what they are authorizing instead of an opaque hex blob.
- Nonce + deadline in the message prevent replay even within the same domain.
Our signer is HSM-backed for the platform-side keys and never touches user keys — users always sign from their own non-custodial wallet, with a session key delegated to the engine via a scoped permit. Signer p99 is ~12 ms.
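As an illustration of the signing step, here is a typed-data sketch using ethers v6. The `Order` struct and domain values are placeholders rather than Polymarket's actual schema, but the domain separator, nonce, and deadline fields are where the replay resistance described above comes from.

```typescript
// Sketch of EIP-712 typed-data signing with ethers v6. Struct and domain
// values are illustrative placeholders, not the exchange's real schema.
import { Wallet, TypedDataDomain, TypedDataField } from "ethers";

const domain: TypedDataDomain = {
  name: "ExchangeOrder",   // illustrative
  version: "1",
  chainId: 56,             // binds the signature to one chain (56 = BNB Chain)
  verifyingContract: "0x0000000000000000000000000000000000000001", // placeholder
};

const types: Record<string, TypedDataField[]> = {
  Order: [
    { name: "market", type: "bytes32" },
    { name: "side", type: "uint8" },
    { name: "price", type: "uint256" },
    { name: "size", type: "uint256" },
    { name: "nonce", type: "uint256" },    // monotonic per signer
    { name: "deadline", type: "uint256" }, // now + 60 s
  ],
};

async function signOrder(wallet: Wallet, order: Record<string, unknown>): Promise<string> {
  // ethers v6: signTypedData hashes domain + struct per EIP-712 and signs the digest.
  return wallet.signTypedData(domain, types, order);
}
```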
7. Submitter and the private mempool
If we broadcast a copy order to the public mempool, anyone running a sandwich bot can see it and react before block inclusion. On Polygon and on BNB Chain, the public mempool is observable to anyone with an RPC subscription. The defense is to submit through a private path: a relay that holds the transaction until block construction, bypassing the public mempool.
We use protected endpoints in the Flashbots-compatible model: the transaction reaches the validator/builder directly without ever appearing in a public node's pending pool. The cost is a small inclusion-time variance; the benefit is empirically measured at roughly 7–14 bps of slippage saved per fill on volatile markets, well above the cost. The full analysis is in our MEV-protection writeup.
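Mechanically, the submission is an ordinary JSON-RPC call pointed at a protected endpoint instead of a public node. The sketch below assumes a placeholder relay URL and the standard eth_sendRawTransaction method; some protected relays expose their own bundle or private-transaction methods instead.

```typescript
// Sketch of submitting a signed transaction to a protected RPC endpoint rather
// than the public mempool. The relay URL is a placeholder, not a real endpoint.
const RELAY_URL = "https://protected-relay.example";

async function submitPrivate(signedRawTx: string): Promise<string> {
  const res = await fetch(RELAY_URL, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_sendRawTransaction", // the relay holds the tx until block construction
      params: [signedRawTx],
    }),
  });
  const { result, error } = await res.json();
  if (error) throw new Error(`relay rejected: ${error.message}`); // caller falls through to the secondary relay
  return result as string; // tx hash
}
```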
Submitter latency budget
- Sign the tx: ~12 ms p99
- POST to relay: ~35 ms p99
- Relay-to-builder: ~80 ms p99
- Block inclusion: 0–2 blocks (Polygon ~2 s blocks; one block typical for Elite, one to two for Pro)
8. The confirmer
Once submitted, the confirmer subscribes to receipt events for the txHash and tracks block depth. A receipt at depth = 1 marks the position as provisional; at depth = 3 it is final. The user-facing dashboard updates on provisional; the ledger writes the immutable record on final.
If the receipt does not arrive within 12 seconds, we treat the submission as lost and surface a recoverable error rather than retrying blindly. Idempotency keys ensure that a manual retry from the user does not double-fill.
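A sketch of that confirmation loop, assuming an ethers v6 provider and an illustrative 250 ms polling interval: the 12-second deadline applies to receipt arrival, after which block depth is tracked until the position is final.

```typescript
// Sketch of the confirmer: provisional at depth 1, final at depth 3, lost if no
// receipt arrives within 12 seconds. Polling interval is an illustrative choice.
import { JsonRpcProvider } from "ethers";

type Confirmation = "provisional_then_final" | "lost";
const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms));

async function confirm(
  provider: JsonRpcProvider,
  txHash: string,
  onProvisional: () => void, // dashboard updates here (depth >= 1)
  onFinal: () => void,       // ledger writes the immutable record here (depth >= 3)
): Promise<Confirmation> {
  const receiptDeadline = Date.now() + 12_000;

  // Phase 1: wait up to 12 s for the receipt; no receipt means the submission is
  // treated as lost and surfaced as a recoverable error (no blind retry).
  let receipt = await provider.getTransactionReceipt(txHash);
  while (!receipt) {
    if (Date.now() > receiptDeadline) return "lost";
    await sleep(250);
    receipt = await provider.getTransactionReceipt(txHash);
  }
  onProvisional();

  // Phase 2: track block depth until the position is final.
  while (true) {
    const depth = (await provider.getBlockNumber()) - receipt.blockNumber + 1;
    if (depth >= 3) {
      onFinal();
      return "provisional_then_final";
    }
    await sleep(250);
  }
}
```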
9. The append-only ledger
Every event the pipeline produces — received, scored, gated, sized, signed, submitted, confirmed, finalized — lands in a Postgres-backed append-only ledger keyed by event hash. The ledger has three readers: the user-facing trade history view, the audit/compliance subsystem, and the analytics/backtest pipeline. Nothing is ever updated in place; corrections are written as compensating entries.
This pattern — event sourcing — is non-negotiable for a financial system. It makes every behaviour debuggable after the fact and makes "what did the engine do at 14:32:08.241?" a one-query answer.
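A sketch of the write path, assuming Postgres via the `pg` driver and an illustrative `(event_id, stage)` uniqueness key: inserts are idempotent, and corrections append a compensating entry instead of updating the original row.

```typescript
// Sketch of the append-only ledger write path. Table name, columns, and the
// uniqueness key are illustrative assumptions, not the production schema.
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from environment variables

// One row per pipeline event: received, scored, gated, sized, signed,
// submitted, confirmed, finalized, or a compensating correction.
async function appendLedger(eventId: string, stage: string, payload: object): Promise<void> {
  await pool.query(
    `INSERT INTO ledger (event_id, stage, payload, recorded_at)
     VALUES ($1, $2, $3, now())
     ON CONFLICT (event_id, stage) DO NOTHING`, // retries no-op instead of duplicating
    [eventId, stage, JSON.stringify(payload)],
  );
}

// A correction never rewrites history: it appends a compensating entry that
// references the stage it supersedes.
async function appendCompensation(eventId: string, supersedesStage: string, payload: object): Promise<void> {
  await appendLedger(eventId, `compensate:${supersedesStage}`, payload);
}
```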
The full latency budget
Composing the components, the Elite lane budget at p99 is:
| Stage | p50 (ms) | p99 (ms) |
|---|---|---|
| Listener (multi-RPC log delivery) | 80 | 240 |
| Decode + dedupe | 1.0 | 3.5 |
| Scoring / freshness gate | 2.0 | 9.0 |
| Risk gates | 0.6 | 2.1 |
| Sizer | 0.3 | 1.0 |
| EIP-712 signer (HSM) | 4.0 | 12.0 |
| Submitter (relay POST + builder) | 55 | 140 |
| Block inclusion (Polygon, 1 block target) | 0 | ~200 |
| Total order submission | ~145 | ~610 |
The Pro lane has the same architecture but a smaller RPC fleet (3 vs 7), no co-location, and queues that admit Pro work behind Elite during congestion. The result is roughly ~1.5 s p99 instead of 600 ms. Plan tier mapping is in /dashboard/billing.
Failover
Failover is layered. From innermost to outermost:
- Per-RPC failover. A provider that misses three log heartbeats inside 10 seconds is taken out of rotation; multi-RPC consensus continues with the remaining providers.
- Region failover. Each lane runs in two AWS regions; a region-level fault triggers DNS-level cutover within ~28 seconds.
- Submitter failover. If the primary protected relay is unreachable, the submitter falls through to a secondary, then to a public-mempool fallback — the latter is logged and surfaced to users so they know an MEV-exposed fill happened.
- Engine kill switch. Operators (and individual users via /dashboard/settings) can halt the engine instantly; an in-flight signed order with a deadline far enough out is the only thing that can still land.
Replay protection
Every signed order carries a nonce sourced from the signer's monotonic counter and a deadline of (now + 60 s). The exchange contract enforces nonce uniqueness; the deadline limits the window where a stalled relay could land an old order against a moved market. Combined with EIP-712 domain separators on chainId, the replay attack surface is narrow: an attacker would need to capture a signed order before its deadline and reach the same chain's relay before the legitimate path. The private-mempool submission path also denies them the capture step.
Why we run on BNB Chain and Ethereum
Polymarket settles in USDC on EVM-compatible chains. We support both BNB Chain and Ethereum mainnet at the engine layer; users choose the chain at plan signup based on where their USDC already lives and their preference for fee profile. BNB Chain offers ~3-second blocks and sub-cent gas; Ethereum offers maximum settlement assurance and the deepest USDC liquidity. The architecture above is chain-agnostic — the only chain-specific code is the RPC fleet config and the domain separator in EIP-712. Bridging context lives in our USDC payment network guide.
What we deliberately do not do
- We do not custody user funds. Every trade is signed from the user's own wallet using a scoped session key. The engine cannot move USDC out of the user's wallet to anywhere except the exchange contract.
- We do not run a centralized order book. All orders go to Polymarket's own exchange contract; Poly Syncer is a client, not a venue.
- We do not retry signed orders blindly. A failed submission either succeeds at the secondary relay or surfaces an error; never both.
- We do not optimize for vanity metrics. Sub-100 ms p99 would be marketing-friendly but would require co-location with specific validators, which we believe creates opaque trust dependencies. 600 ms p99 with multi-RPC consensus is the right tradeoff.
Frequently asked questions
Why EIP-712 instead of a simpler signature scheme?
Three reasons: domain separation prevents cross-chain replay; typed fields are auditable by tooling and by the user's wallet UI; and EIP-712 is the de facto standard for off-chain order signing across EVM venues, which means hardware wallets and security tooling already understand it.
What is the actual measured p99 latency from leader fill to user fill?
Roughly 600 ms on the Elite lane and ~1.5 s on the Pro lane, end-to-end from leader-fill log delivery to user-order block inclusion. Per-component p50/p99 are in the table above. We publish rolling-window numbers on the changelog.
Why a private mempool? Isn't that just for Ethereum mainnet?
No. Public-mempool MEV is a Polygon and BNB Chain phenomenon as much as an Ethereum one. The private-mempool submission path provides ~7–14 bps of measured slippage protection on volatile Polymarket fills regardless of chain. The architectural detail is the same; the relay endpoints differ per chain.
What happens if the engine sees a leader's trade but the user has insufficient USDC?
The risk-gate stage detects the shortfall before signing and emits a "skipped: insufficient balance" event to the ledger. No signature is produced. The user sees the skip in their trade history and a banner in the dashboard prompting a top-up. We never attempt partial fills automatically.
How is idempotency enforced across retries?
Every event hashes to a deterministic event_id (SHA-256 of txHash + logIndex). The dedupe table keys on event_id with a 30-second TTL; the signer keys on event_id when minting the nonce; the ledger keys on event_id as primary key. A retry at any stage maps to the same event_id and either no-ops or surfaces the existing record.
Where is the source of truth for "did the trade happen?"
The on-chain receipt at depth = 3 is the source of truth. The Poly Syncer ledger is a derived view; if the ledger and on-chain disagree, on-chain wins and the ledger is corrected with a compensating entry. This is the standard event-sourced reconciliation pattern.
Where to dive deeper
The full engineering stack — including the indexer, the scoring batch jobs, and the observability surface — is documented in the whitepaper, with the API surface at /developers. For the trader's-eye view of what this architecture delivers in practice, the leaderboard guide and setup walkthrough are the places to start. Plan-level latency tier choices are at /dashboard/billing.