Today I tried signing into MyChart because I got an email notification about a new statement (ugh). The log-in experience was so slow (seconds) that I immediately closed the window and went back to my daily life. I’ll probably never log in again. Point being, a bad user experience is sometimes the last user experience.
For growth-focused companies, the priority right out of the gate is user experience. They know it needs to be great or they won’t collect enough users and enough revenue (or VC money) to go and build out the feature-rich application of their dreams. One obstacle that stands in the way of user experience is the speed of light. It’s too slow (hot take!). Humans perceive anything under 100 milliseconds as “instant”. Anything over 200 milliseconds can feel laggy (which leads to people like me never logging into their MyChart).
One way to lower latency, and keep your user experiences under that 100-millisecond threshold, is to keep data close to users. This isn’t breaking news. But what is new and interesting is the variety of ways to go about locating data close to users.
Geo-partitioning is the ability to attach data (at the row level) to a location. This lets you control data locality in the database itself, rather than relying on manual schema changes and complex, brittle application logic. Geo-partitioning is also distinctly different from plain ‘partitioning’ because it combines the values of the data with the physical implementation of the database itself. In a distributed SQL database, each node runs on a server, and that server has a location.
Often, geo-partitioning is discussed within the context of Data Localization, which is the ability to pin customer data to a specific location to comply with regulations that require data to be domiciled within geographic boundaries.
What gets lost in the conversation about data location is the fact that geo-partitioning, or keeping user data close to users, enhances performance.
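To make that concrete, here is a minimal sketch of what geo-partitioning can look like in CockroachDB SQL. The users table, its region values, and the region=us-east1 style locality labels are illustrative assumptions (they presume nodes were started with matching --locality flags), not the exact steps of the tutorial linked below.

```sql
-- Hypothetical "users" table whose primary key starts with a region column,
-- so rows can be partitioned by where their users live.
CREATE TABLE users (
    region STRING NOT NULL,
    id UUID NOT NULL DEFAULT gen_random_uuid(),
    email STRING,
    PRIMARY KEY (region, id)
);

-- Attach rows to named partitions based on the value in the region column.
ALTER TABLE users PARTITION BY LIST (region) (
    PARTITION us_east VALUES IN ('us-east1'),
    PARTITION us_central VALUES IN ('us-central1'),
    PARTITION us_west VALUES IN ('us-west1')
);

-- Pin each partition's replicas to nodes in the matching region (assumes nodes
-- were started with --locality=region=us-east1, etc.), keeping data near users.
ALTER PARTITION us_east OF TABLE users
    CONFIGURE ZONE USING constraints = '[+region=us-east1]';
ALTER PARTITION us_central OF TABLE users
    CONFIGURE ZONE USING constraints = '[+region=us-central1]';
ALTER PARTITION us_west OF TABLE users
    CONFIGURE ZONE USING constraints = '[+region=us-west1]';
```

With something like this in place, a query for a row whose region is 'us-west1' can be served entirely by nodes in that region, which is what drives the latency improvements in the demo below.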
In this video demo of geo-partitioning, you can watch how the implementation of geo-partitioning improves application performance by reducing data access latencies.
The video features a 9-node deployment across 3 US regions on GCE. Before geo-partitioning is added, the 99th-percentile query latency sits in the hundreds of milliseconds. After geo-partitioning, 99% of all queries complete in 4 milliseconds or less, and 90% of all SQL queries execute in less than 2 milliseconds. In some cases, latency is even sub-millisecond.
We think this kind of performance improvement is really exciting and our esteemed docs team built out a tutorial so you can get your hands in the geo-partitioning soil. We hope that you’ll use the tutorial to play with geo-partitioning for your own applications. And we look forward to your questions about how you can leverage this feature to enhance your performance.
If you’d like to dig deeper into geo-partitioning you can reference these resources to learn more:
• Blog Post: What Global Data Actually Looks Like
• Webinar: The Power of Data Locality in Distributed SQL
• Documentation: Partitioning
• Documentation: the ALTER TABLE…PARTITION BY command
Please feel free to connect with us in the CockroachDB Forum, on Twitter, or in our community Slack to share your feedback.