Service status
Each service below is independently monitored from three geographic regions with one-second probes. Uptime is defined as the percentage of probes returning a successful response within the service-level objective. p50 and p99 latency are measured end-to-end, including network round-trip from the probe to the service.
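As a concrete sketch of that definition, a probe counts toward uptime only if it both succeeds and lands within the latency SLO. The `ProbeResult` type and sample values below are illustrative, not our production code:

```python
from dataclasses import dataclass

@dataclass
class ProbeResult:
    ok: bool          # probe received a well-formed, authenticated response
    latency_s: float  # end-to-end latency, probe -> service -> probe

def uptime_pct(probes: list[ProbeResult], slo_s: float) -> float:
    """Percentage of probes that succeeded within the latency SLO."""
    good = sum(1 for p in probes if p.ok and p.latency_s <= slo_s)
    return 100.0 * good / len(probes)

# Scored against the Mirror Engine's 1.8 s SLO: the slow success fails too.
window = [ProbeResult(True, 0.42), ProbeResult(True, 2.10), ProbeResult(False, 0.0)]
print(f"{uptime_pct(window, slo_s=1.8):.2f}%")  # -> 33.33%
```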
| Service | Status | Uptime (90d) | p50 latency | p99 latency |
|---|---|---|---|---|
| Mirror Engine | Operational | 99.98% | 420 ms | 1.8 s |
| Public RPC | Operational | 99.92% | 1.4 s | 7.2 s |
| Premium RPC | Operational | 99.97% | 380 ms | 1.6 s |
| Co-located Node (Elite) | Operational | 99.99% | 180 ms | 610 ms |
| API | Operational | 99.96% | 74 ms | 310 ms |
| WebSocket Feed | Operational | 99.94% | 38 ms | 180 ms |
| Panel | Operational | 99.99% | 120 ms | 480 ms |
The Mirror Engine end-to-end SLO is 1.8 seconds p99, measured from leader fill confirmation on Polygon to copier fill submission. The Co-located Node is reserved for Elite traffic and runs in the same data center as a tier-one Polygon validator, which is what makes sub-second mirroring practical.
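In other words, mirror latency is the difference between two timestamps. The sketch below, with illustrative field names and epoch values rather than our production pipeline, shows the quantity the SLO constrains:

```python
MIRROR_P99_SLO_S = 1.8  # Mirror Engine end-to-end SLO from the table above

def mirror_latency_s(leader_fill_confirmed_at: float,
                     copier_fill_submitted_at: float) -> float:
    """Seconds from leader fill confirmation on Polygon to copier fill
    submission -- the interval the 1.8 s p99 SLO is defined over."""
    return copier_fill_submitted_at - leader_fill_confirmed_at

# Illustrative Unix timestamps: a 0.95 s mirror, comfortably inside the SLO.
lat = mirror_latency_s(1_772_000_000.00, 1_772_000_000.95)
print(f"{lat:.2f} s, within SLO: {lat <= MIRROR_P99_SLO_S}")
```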
Recent incidents
Every operational event that touched user-visible behaviour in the last 90 days is listed below. Detailed post-mortems for medium-or-higher severity incidents are filed in the changelog within seven days of resolution.
Premium RPC — Minor · 11 minutes
Window: 2026-03-14 09:42 UTC → 2026-03-14 09:53 UTC.
Summary: Elevated p99 latency (4.2 s vs. 1.8 s SLO) following an upstream Polygon validator restart. Mirror SLO maintained via failover to secondary pool.
Resolution: Automatic failover; primary pool restored within 11 minutes. No fills lost. No funds at risk.
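For readers curious what "automatic failover" means mechanically, here is a minimal sketch of the try-primary-then-secondary pattern, assuming generic HTTP RPC endpoints; the URLs, timeout, and pool layout are illustrative, not our internal topology:

```python
import requests

# Illustrative pool; real membership and health checks are internal.
RPC_POOLS = ["https://rpc-primary.example.com", "https://rpc-secondary.example.com"]

def rpc_call(payload: dict, timeout_s: float = 2.0) -> dict:
    """Try each pool in order, failing over on timeout or transport error,
    so a degraded primary never blocks the mirror path."""
    last_err = None
    for url in RPC_POOLS:
        try:
            resp = requests.post(url, json=payload, timeout=timeout_s)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException as err:
            last_err = err  # fall through to the next pool
    raise RuntimeError("all RPC pools failed") from last_err
```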
Panel — Minor · 23 minutes
Window: 2026-02-22 17:08 UTC → 2026-02-22 17:31 UTC.
Summary: Panel UI returned 502 errors on a subset of requests after a deploy regression in the static asset pipeline. Mirror engine and API unaffected.
Resolution: Rolled back deploy; root cause was a missing CDN cache invalidation. Added invalidation gate to deploy pipeline.
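The gate itself can be thought of as a hard-failing pipeline step. The sketch below uses a hypothetical `cdn-cli` command purely for illustration, since the real deploy tooling is internal:

```python
import subprocess
import sys

def invalidation_gate(paths: list[str]) -> None:
    """Deploy now aborts if CDN invalidation fails, rather than silently
    serving stale assets (the root cause of this incident)."""
    # "cdn-cli" is a hypothetical placeholder for the real CDN tooling.
    result = subprocess.run(["cdn-cli", "invalidate", *paths])
    if result.returncode != 0:
        sys.exit("CDN invalidation failed -- aborting deploy")
```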
WebSocket Feed — Minor · 4 minutes
Window: 2026-01-09 23:14 UTC → 2026-01-09 23:18 UTC.
Summary: Brief disconnects across approximately 12% of subscribed clients during an internal load-balancer rotation.
Resolution: Connection draining now waits for client ACK before terminating sockets. Reconnect logic on the client side handled the gap transparently for most users.
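A sketch of that drain-then-ACK handshake, assuming an asyncio WebSocket server whose connection object exposes send/recv/close (as in the third-party websockets library); the drain and drain_ack message types are illustrative, not our wire protocol:

```python
import asyncio
import json

ACK_TIMEOUT_S = 5.0  # illustrative bound on how long we wait for the ACK

async def drain_connection(ws) -> None:
    """Announce the drain, wait for the client to acknowledge it has
    reconnected elsewhere, then close -- instead of cutting the socket."""
    await ws.send(json.dumps({"type": "drain"}))  # hypothetical control frame
    try:
        while True:
            msg = json.loads(await asyncio.wait_for(ws.recv(), ACK_TIMEOUT_S))
            if msg.get("type") == "drain_ack":
                break  # client confirmed; safe to terminate
    except asyncio.TimeoutError:
        pass  # no ACK; close anyway so the rotation can finish
    await ws.close(code=1001)  # 1001 = "going away"
```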
Subscribe to incident notifications
Poly Syncer doesn’t collect emails for the product, but we do operate a status mailing list. Email [email protected] with the subject line `subscribe` and you’ll receive notifications for every new incident, status change, and resolved post-mortem. Unsubscribe at any time by replying with `unsubscribe`.
For programmatic access, the public status endpoint at https://api.polysyncer.com/v1/status returns the same data as this page in JSON, refreshed every 30 seconds. The schema is documented in the API reference.
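A minimal polling sketch against that endpoint. The response field names (services, name, status) are assumptions for illustration; the documented schema in the API reference is authoritative:

```python
import time
import requests

STATUS_URL = "https://api.polysyncer.com/v1/status"

def watch_status(poll_s: float = 30.0) -> None:
    """Poll the public status endpoint; the payload refreshes every 30 s,
    so polling faster than that gains nothing."""
    while True:
        data = requests.get(STATUS_URL, timeout=10).json()
        for svc in data.get("services", []):  # field names are illustrative
            if svc.get("status") != "operational":
                print(f"{svc.get('name')}: {svc.get('status')}")
        time.sleep(poll_s)
```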
How we measure
Uptime probes run every second from monitoring nodes in three independent regions and authenticate to the same endpoints real users use. We do not exclude scheduled maintenance from uptime numbers — if a service is down for any reason, it counts. Latency percentiles are computed over a 90-day rolling window using the t-digest algorithm, which is accurate to better than 0.01% at the tails.
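For illustration, the exact (non-streaming) form of the percentile computation looks like the sketch below. Production uses t-digest instead, so the 90-day window can be maintained incrementally without storing every sample; the nearest-rank method here is a simplification, not the t-digest algorithm:

```python
import math

def percentile(samples_ms: list[float], p: float) -> float:
    """Nearest-rank percentile: exact, but O(n log n) and needs every
    sample in memory -- the cost t-digest avoids at the tails."""
    ordered = sorted(samples_ms)
    rank = math.ceil(p / 100.0 * len(ordered))
    return ordered[max(rank - 1, 0)]

window_ms = [38.0, 41.0, 44.0, 52.0, 180.0]  # illustrative latency samples
print(percentile(window_ms, 50))  # 44.0
print(percentile(window_ms, 99))  # 180.0
```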