
Regional Latency

Tracking the latency between cloud hosting platforms and Neon's Postgres database regions. Visit the Frequently Asked Questions section to learn more.

The percentiles below represent the best-to-worst query latency between Neon's US East (N. Virginia) region and the listed cloud hosting platforms over the past 12 hours. Open an issue on GitHub if you'd like to see a new provider or region added.

DigitalOcean

New York City (New York, USA)

P50      P75      P95      P99
11 ms    12 ms    13 ms    16 ms

Fly

Ashburn, Virginia (US)

P50      P75      P95      P99
17 ms    19 ms    23 ms    27 ms

Railway

US East (Virginia, USA)

P50      P75      P95      P99
7 ms     7 ms     8 ms     12 ms

Vercel

Washington, D.C., USA

P50      P75      P95      P99
6 ms     7 ms     15 ms    135 ms

Frequently Asked Questions

  • What is the purpose of this application?

    By understanding the latency of a basic SELECT query to a Neon database from your preferred hosting platform and region, you can make informed decisions about where to deploy your application backend and Neon database to reduce the impact of latency between the two.

  • How can I estimate the impact of database queries on API response times using this dashboard?

    You can use the P50 and P99 measurements to approximate the best- and worst-case round-trip time for each database query issued from your hosting platform to your Postgres database on Neon. Optimize your queries, use JOINs, and batch multiple queries into a single round-trip to reduce the impact of latency on your application's performance. A sketch of batching two statements into one round-trip appears after this FAQ.

  • How often is the data updated?

    New data is collected every 15 minutes.

  • Is the application and benchmark code open-source?

    Yes. Application code can be found in the neon-latency-tracker repository on GitHub. Benchmark code that runs on each hosting platform is located in the neon-query-bench repository on GitHub.

  • How are the Neon Postgres databases configured for testing?

    The Neon databases used for testing are set to our Free Tier size of 0.25 CU. Neon's auto-suspend feature is disabled to prevent cold starts from being factored into the latency measurements. Production applications will rarely encounter a cold start, since they receive consistent traffic.

  • What Postgres driver is used?

    Neon's serverless driver for Postgres is used to perform requests from each hosting platform to a Neon database. The driver is designed to provide a low-latency experience on both serverless and traditional hosting platforms. A minimal usage sketch appears after this FAQ.

  • Are the tests performed using an ORM?

    Drizzle ORM is used to perform the queries in the benchmark. Many projects use ORMs to interact with Postgres and to manage their schemas, so using an ORM in the benchmark provides a more realistic set of measurements. A short Drizzle sketch appears after this FAQ.

  • What resources are provisioned on the hosting platforms?

    The default instance size and scaling options are used on each hosting platform. For example, on DigitalOcean App Platform the instance has 1 vCPU and 1 GB of memory.

  • Are serverless environments warmed up prior to testing?

    Yes. The deployment platforms involved are "warmed up" before we measure latency. The warm-up process involves performing a fixed number of queries prior to beginning latency measurements. This ensures that the latency measurements are indicative of a real-world application handling production traffic, i.e. an application that doesn't see frequent cold starts. A sketch of the warm-up and measurement loop appears after this FAQ.

  • Do you have benchmarks that demonstrate the overhead of Neon cold starts?

    Sure do! If you're building a low-traffic application and are curious about the impact of auto-suspend on query times, take a look at this benchmark instead.
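
The batching advice above can be illustrated with Neon's serverless driver. This is a minimal sketch, assuming a DATABASE_URL environment variable and hypothetical users and posts tables; the driver's transaction() helper batches both statements into a single round-trip, so the network latency is paid once instead of twice.

    import { neon } from '@neondatabase/serverless';

    // Assumes DATABASE_URL points at a Neon database.
    const sql = neon(process.env.DATABASE_URL!);

    // Issued separately, these two queries would each pay a full round-trip.
    // Batched with transaction(), they travel in a single round-trip.
    const userId = 1; // hypothetical value for illustration
    const [user, posts] = await sql.transaction([
      sql`SELECT * FROM users WHERE id = ${userId}`,
      sql`SELECT * FROM posts WHERE user_id = ${userId}`,
    ]);
    console.log(user, posts);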
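Relating to the serverless driver question, the following is a minimal sketch of a timed SELECT like the one the benchmark measures; it assumes a DATABASE_URL environment variable, and the real benchmark code lives in the neon-query-bench repository.

    import { neon } from '@neondatabase/serverless';

    // Assumes DATABASE_URL points at a Neon database.
    const sql = neon(process.env.DATABASE_URL!);

    // Time a single round-trip for a basic SELECT.
    const start = performance.now();
    const rows = await sql`SELECT 1 AS ok`;
    console.log(`query took ${(performance.now() - start).toFixed(1)} ms`, rows);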
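For the ORM question, a query issued through Drizzle ORM on top of the serverless driver might look like the sketch below; the users table is hypothetical, and the benchmark's actual schema and queries live in the neon-query-bench repository.

    import { neon } from '@neondatabase/serverless';
    import { drizzle } from 'drizzle-orm/neon-http';
    import { pgTable, serial, text } from 'drizzle-orm/pg-core';

    // Hypothetical table definition, for illustration only.
    const users = pgTable('users', {
      id: serial('id').primaryKey(),
      name: text('name'),
    });

    // Drizzle wraps the serverless driver's HTTP client.
    const db = drizzle(neon(process.env.DATABASE_URL!));

    // A simple SELECT issued through the ORM.
    const firstTen = await db.select().from(users).limit(10);
    console.log(firstTen);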
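Finally, the warm-up and measurement process has roughly the shape sketched below; the warm-up and sample counts are placeholders, and the real logic lives in the neon-query-bench repository.

    import { neon } from '@neondatabase/serverless';

    const sql = neon(process.env.DATABASE_URL!);

    const WARMUP_QUERIES = 10;   // placeholder count; timings are discarded
    const MEASURED_QUERIES = 50; // placeholder count; timings are recorded

    // Warm up: issue a fixed number of queries and ignore their timings,
    // so platform cold starts don't skew the measurements.
    for (let i = 0; i < WARMUP_QUERIES; i++) {
      await sql`SELECT 1`;
    }

    // Measure: record the round-trip time of each subsequent query.
    const timings: number[] = [];
    for (let i = 0; i < MEASURED_QUERIES; i++) {
      const start = performance.now();
      await sql`SELECT 1`;
      timings.push(performance.now() - start);
    }

    // Report simple percentiles from the sorted samples.
    timings.sort((a, b) => a - b);
    const percentile = (q: number) =>
      timings[Math.min(timings.length - 1, Math.ceil(q * timings.length) - 1)];
    console.log(`P50: ${percentile(0.5).toFixed(1)} ms, P99: ${percentile(0.99).toFixed(1)} ms`);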