Author of the blog here. I had a great time writing this. By far the most complex article I've ever put together, with literally thousands of lines of js to build out these interactive visuals. I hope everyone enjoys.
The animations are fantastic and awesome job with the interactivity. I find myself having to explain latency to folks often in my work and being able to see the extreme difference in latencies for something like a HDD vs SSD makes it much easier to understand for some people.
Edit: And for real, fantastic work, this is awesome.
Thank you! The visuals definitely add something special to this post specifically since time is a big element in explaining latencies.
I love this kind of datavis.
We are generally bad at internalizing comparisons at these scales. The visualizations make a huge difference in building more detailed intuitions.
Really nice work, thank you!
The level of your effort really shows through. If you had to ballpark guess, how much time do you think you put in? and I realize keyboard time vs kicking around in your head time are quite different
Thank you! I started this back in October, but of course have worked on plenty of other things in the meantime. But this was easily 200+ hours of work spread out over that time.
If this helps as context, the git diff for merging this into our website was: +5,820 −1
The visuals are awesome; the bouncing-box is probably the best illustration of relative latency I've seen.
Your "1 in a million" comment on durability is certainly too pessimistic once you consider the briefness of the downtime before a new server comes in and re-replicates everything, right? I would think if your recovery is 10 minutes for example, even if each of three servers is guaranteed to fail once in the month, I think it's already like 1 in two million? and if it's a 1% chance of failure in the month failure of all three overlapping becomes extremely unlikely.
Thought I would note this because one-in-a-million is not great if you have a million customers ;)
> Your "1 in a million" comment on durability is certainly too pessimistic once you consider the briefness of the downtime before a new server comes in and re-replicates everything, right?
Absolutely. Our actual durability is far, far, far higher than this. We believe that nobody should ever worry about losing their data, and that's the peace of mind we provide.
> Instead of relying on a single server to store all data, we can replicate it onto several computers. One common way of doing this is to have one server act as the primary, which will receive all write requests. Then 2 or more additional servers get all the data replicated to them. With the data in three places, the likelihood of losing data becomes very small.
Is my understanding correct, that this means you propagate writes asynchronously from the primary to the secondary servers (without waiting for an "ACK" from them for writes)?
For PlanetScale Metal, we use semi-sync replication. The primary needs to get an ack from at least one replica before committing.
Is that ack sent once the request is received or once it is stored on the remote disk?
1 in a million is the probability that all three servers die in one month, without swapping out the broken ones. So at some point in the month all the data is gone.
If you replace the failed (or failing) node right away, the failure probability goes down greatly. You would then need the probability of a node going down within a 30-minute window, assuming the migration can be done in 30 minutes.
(i hope this calculation is correct)
If 1% probability per month, then 1%/(43800/30) = (1/1460)% probability per 30 min, or about 6.8e-6.
For three instances: (6.8e-6)^3 = 3.2e-16 probability per 30 min that all go down. (Careful with the percent signs here: the 1% has to become 0.01 before cubing.)
Calculated for one month: 3.2e-16 * (43800/30) = 4.7e-13.
So roughly one in two trillion that all three servers go down in the same 30-minute window somewhere in the month. After the 30 minutes another replica will already be available, making the data safe.
I'm happy to be corrected. The probability course was some years back :)
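A quick sanity check of that arithmetic, under the same independence assumptions (Python):

# Sanity check of the arithmetic above, assuming independent,
# uniformly distributed failures and a 43,800-minute month.
p_month = 0.01                 # 1% chance a single node fails in a month
windows = 43800 / 30           # 1,460 thirty-minute windows per month
p_window = p_month / windows   # ~6.8e-06 per node per window
p_all = p_window ** 3          # all three nodes in the same window: ~3.2e-16
p_loss = p_all * windows       # at some point during the month: ~4.7e-13
print(p_window, p_all, p_loss)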
One thing I will suggest: you’re assuming failures are non-correlated and have an equally weighted chance per unit of time.
Neither is a good assumption from my experience. Failures being correlated to any degree greatly increases the chances of what the aviation world refers to as “the holes in the Swiss cheese lining up”.
Kudos to whoever patiently & passionately built these. On an off topic - This is a great perspective for building realistic course work for middle & high school students. I'm sure they learn faster & better with visuals like these.
It would be incredibly cool if this were used in high school curricula.
Half on topic: what libs/etc did you use for the animations? Not immediately obvious from the source page.
(it's a topic I'm deeply familiar with so I don't have a comment on the content, it looks great on a skim!) - but I've been sketching animations for my own blog and not liked the last few libs I tried.
Thanks!
I heavily, heavily abused d3.js to build these.
Small FYI that I couldn't see them in Chrome 133.0.6943.142 on MacOS. Firefox works.
It's the complete opposite for me — there are no animations in Firefox even with uBlock Origin disabled, but Brave shows them fine.
The browser console spams this link: https://react.dev/errors/418?invariant=418
edit: looks like it's caused by a userstyles extension injecting a dark theme into the page; React doesn't like it and the page silently breaks.
Ohhh interesting! Obviously not ideal, but I guess just an extension issue?
Interesting. Running any chrome extensions that might be messing with things? Alternatively, if you can share any errors you're getting in the console lmk.
Oh, looks like it. I disabled extensions one by one til I found it was reflect.app's extension. Edit: reported on their discord.
False alarm :) Amazing work!!
Amazing presentation. It really helps to understand the concepts.
The only add is that it understates the impact of SSD parallelism. 8 Channel controllers are typical for high end devices and 4K random IOPS continue to scale with queue depth, but for an introduction the example is probably complex enough.
It is great to see PlanetScale moving in this direction and sharing the knowledge.
Thank you for the info! Do you have any good references on this for those who want to learn more?
Just going off specs sheets from manufacturers and reviews (mostly consumer products, so enterprise should be the same or better).
There are only a few major NAND manufacturers: Samsung, Micron, Kioxia / Western Digital, SK Hynix, and their branded products are usually the best.
There are also several 3rd party controller developers: Phison, Marvell, Silicon Motion, which I think are the largest, and then a bunch of others.
I hadn't looked at this in a couple years, so 16 channel controllers are more common now, but only on high end enterprise devices.
4KB random read/write specs are definitely not trustable without testing. They are usually at max queue depth and, at least for consumer devices, based on writing to a buffer in SLC mode, so they will be a lot lower once the buffer is exhausted. Enterprise specs might be more realistic but there isn't as much public testing data available.
Great work! Thank you for making this.
This is beautiful and brilliant, and also is a great visual tool to explain how some of the fundamental algorithms and data structures originate from the physical characteristics of storage mediums.
I wonder if anyone remembers the old days where you programmed your own custom defrag util to place your boot libs and frequently used apps to the outer tracks of the hard drive, so they are loaded faster due to the higher linear velocity of the outermost track :)
The visualizations are excellent, very fun to look at and play with, and they go along with the article extremely well. You should be proud of this, I really enjoyed it.
Thank you!
Hi, what actually are _metal_ instances that are being used when you're on EC2 that have local NVME attached? Last time I looked, apart from the smallest/slowest Graviton, you have to spend circa 2.3k USD/mo to get a bare-metal instance from AWS - https://blog.alexellis.io/how-to-run-firecracker-without-kvm...
Hi there, PS employee here. In AWS, the instance types backing our Metal class are currently in the following families: r6id, i4i, i3en and i7ie. We're deploying across multiple clouds, and our "Metal" product designation has no direct link to Amazon's bare-metal offerings.
Were you at all inspired by the work of Bartosz Ciechanowski? My first thought was that you all might have hired him to do the visuals for this post :)
Bartosz Ciechanowski is incredible at this type of stuff. Sam Rose has some great interactive blogs too. Both have had big hits here on HN.
I don’t see any animations on Safari. Also, I’d much prefer a variable-width font, monospace prose is hard to read. While I can use Reader Mode, that removes the text coloring, and would likely also hide the visuals (if they were visible in the first place).
Interesting! Any errors you can report? Should work in safari but maybe you have something custom going on, or an older version?
I don't see a single visual. I don't use the web with javascript. Why not embed static images instead or in addition?
The visuals add a lot to this article. A big theme throughout is latency, and the visuals help the reader see why tape is slower than an HDD, which is slower than an SSD, etc. Also, it's just plain fun!
I'm curious, what do you do on the internet without js these days?
> I'm curious, what do you do on the internet without js these days?
Browse the web, send/receive email, read stories, play games, the usual. I primarily use native apps and selectively choose what sites are permitted to use javascript, instead of letting websites visited on a whim run javascript willy nilly.
I respect it. In my very biased opinion, it's worth enabling for this article.
They're not just static images or animations, they're interactive widgets.
I've been advocating for SQLite+NVMe for a while now. For me it is a new kind of pattern you can apply to get much further into trouble than usual. In some cases, you might actually make it out to the other side without needing to scale horizontally.
Latency is king in all performance matters. Especially in those where items must be processed serially. Running SQLite on NVMe provides a latency advantage that no other provider can offer. I don't think running in memory is even a substantial uplift over NVMe persistence for most real world use cases.
> I've been advocating for SQLite+NVMe for a while now.
Why SQLite instead of a traditional client-server database like Postgres? Maybe it's a smidge faster on a single host, but you're just making it harder for yourself the moment you have 2 webservers instead of 1, and both need to write to the database.
> Latency is king in all performance matters.
This seems misleading. First of all, your performance doesn't matter if you don't have consistency, which is what you now have to figure out the moment you have multiple webservers. And secondly, database latency is generally miniscule compared to internet round-trip latency, which itself is miniscule compared to the "latency" of waiting for all page assets to load like images and code libraries.
> Especially in those where items must be processed serially.
You should be avoiding serial database queries as much as possible in the first place. You should be using joins whenever possible instead of separate queries, and whenever not possible you should be issuing queries asynchronously at once as much as possible, so they execute in parallel.
Until you hit the single-writer limitation in SQLite, you do not need to spend more CPU cycles on Postgres.
I'd recommend going with postgres if there is a good chance you'll need it, instead of starting with SQLite and switching later - as their capabilities and data models are quite different.
For small traffic, it's pretty simple to run it on the same host as the web app, and Unix auth means there are no passwords to manage. And once you need to have multiple writers, there is no need to rewrite all the database queries.
That’s a limitation you’ll hit pretty quickly unless you’ve specifically planned your architecture to be mostly read-only SQLite or one SQLite per session.
You certainly won’t hit it with most corporate OLAP processing, which is nearly all read-only SQLite. Writes are generally batched and processed outside ‘normal’ business hours, where the limitations of SQLite writing are irrelevant.
The entire point is to avoid the network hop.
Application <-> SQLite <-> NVMe
has orders of magnitude less latency than
Application <-> Postgres Client <-> Network <-> Postgres Server <-> NVMe
> You should be avoiding serial database queries as much as possible in the first place.
I don't get to decide this. The business does.
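To put a rough number on the in-process path, here's a micro-benchmark sketch in Python; the table and file names are made up, and absolute numbers depend heavily on hardware and page-cache state:

# Rough sketch: time point reads through in-process SQLite.
# "app.db" and the kv table are made up for illustration.
import sqlite3, time

con = sqlite3.connect("app.db")
con.execute("CREATE TABLE IF NOT EXISTS kv (k INTEGER PRIMARY KEY, v TEXT)")
con.execute("INSERT OR REPLACE INTO kv VALUES (1, 'hello')")
con.commit()

n = 10_000
start = time.perf_counter()
for _ in range(n):
    con.execute("SELECT v FROM kv WHERE k = 1").fetchone()
print(f"{(time.perf_counter() - start) / n * 1e6:.1f} us per read, no network hop")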
Postgres supports Unix sockets when running on the same machine. That’s what I use, for a significant latency improvement over the TCP stack even at 127.0.0.1.
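For reference, a minimal sketch of connecting over the socket with psycopg2; the socket directory varies by distro (/var/run/postgresql is the Debian/Ubuntu default), and the database/user names are made up:

# Minimal sketch: connect to local Postgres over a Unix socket.
# Passing a directory path as "host" makes libpq use the socket
# in that directory instead of TCP.
import psycopg2

con = psycopg2.connect(host="/var/run/postgresql", dbname="app", user="app")
with con.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())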
"...has orders of magnitude less latency than..."
[citation needed]. Local network access shouldn't be much different than local IPC.
> Local network access
In what production scenarios do MySQL, Postgres, DB2, Oracle, et. al., live on the same machine as the application that uses them?
I am pretty sure most of these vendors would offer strict guidance to not do that.
> I am pretty sure most of these vendors would offer strict guidance to not do that.
Then you'd be wrong. Running Postgres or MySQL on the same host where Apache is running is an extremely common scenario for sites starting out. They run together on 512 MB instances just fine. And on an SSD, that can often handle a surprising amount of traffic.
It’s not a stretch to imagine that a scenario where you’re willing to run SQLite locally is also one where it’s acceptable to run Postgres locally. You’ve presumably already got the sharding problem solved, so why not? It’s less esoteric of an architecture than multiwriter SQLite.
Like 95% of websites that aren’t Amazon or google? Ton of sites that run in a single small vm. Postgres scales down quite nicely and will happily run in say, 512MB.
Context switches plus mmap accesses are often slower than mmap accesses.
I’ve tested this before and Postgres is measurably faster over Unix socket than over local network.
You don't have IPC for sqlite, do you?
You do if you access the same database from multiple processes.
Perhaps I wasn't clear enough in my comment. When I said "database latency is generally miniscule compared to internet round-trip latency", I meant between the user and the website. Because they're often thousands of miles away, there are network buffers, etc.
But no, a local network hop doesn't introduce "orders of magnitude" more latency. The article itself describes how it is only 5x slower within a datacenter for the roundtrip part -- not 100x or 1,000x as you are claiming. But even that is generally significantly less than the time it takes the database to actually execute the query -- so maybe you see a 1% or 5% speedup of your query. It's just not a major factor, since queries are generally so fast anyways.
The kind of database latency that you seem to be trying to optimize for is a classic example of premature optimization. In the context of a web application, you're shaving microseconds for a page load time that is probably measured in hundreds of milliseconds for the user.
> I don't get to decide this. The business does.
You have enough power to design the entire database architecture, but you can't write and execute queries more efficiently, following best practices?
SQLite can be run in process. Latency and bandwidth can be made 10x worse by process context switching alone. Plus, being able to get away with N+1 queries could save a lot of dev time depending on the crew, at least pre-Claude (though the dev still needs to learn that the speed problem is due to this and refactor the query, or write it fast the first time).
The SQLite file format is laid out to hedge against HDD fragmentation. It wouldn't benefit as much as it would after changing to a more modern, SSD-native layout, then using NVMe.
I still measure 1-2ms of latency with an NVMe disk on my Desktop computer, doing fsync() on a file on a ext4 filesystem.
Update: about 800us on a more modern system.
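For anyone who wants to reproduce this, a rough sketch of the measurement (numbers vary a lot by drive, filesystem, and mount options):

# Rough sketch: write 4 KiB and fsync in a loop, report the mean.
import os, time

fd = os.open("fsync_test.bin", os.O_WRONLY | os.O_CREAT, 0o644)
n = 1000
start = time.perf_counter()
for _ in range(n):
    os.write(fd, b"x" * 4096)
    os.fsync(fd)
print(f"{(time.perf_counter() - start) / n * 1e6:.0f} us per write+fsync")
os.close(fd)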
Not so sure that's true. This is single-threaded direct I/O doing a fio randwrite workload on a WD 850X Gen4 SSD:
I checked again with O_DIRECT and now I stand corrected. I didn't know that O_DIRECT could make such a huge difference. Thanks!
Oops, O_DIRECT does not actually make that big of a difference. I had updated my ad-hoc test to use O_DIRECT, but didn't check that write() now returned errors because of wrong alignment ;-)
As mentioned in the sibling comment, syncs are still slow. My initial 1-2ms number came from a desktop I bought in 2018, to which I added an NVMe drive connected to an M.2 slot in 2022. On my current test system I'm seeing avg latencies of around 250us, sometimes a lot more (there are fluctuations).
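# put the following in a file "fio.job" and run "fio fio.job"
# enable either direct=1 (O_DIRECT) or fsync=1 (fsync() after each write())
[Job1]
#direct=1
fsync=1
readwrite=randwrite
bs=64k # size of each write()
size=256m # total size written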
Random writes and fsync aren't the same thing. A single unflushed random write on a consumer SSD is extremely fast because it's not durable.
You're right. Sync writes are ten times as slow. 331µs.
Add sync=1 to your fio O_DIRECT write tests (not fsync, but sync=1) and you’ll see a big difference on consumer SSDs without power loss protection for their controller buffers. It adds the FUA flag (force unit access) to the write requests to ensure persistence of your writes, O_DIRECT alone won’t do that
I believe that's power saving in action. A single operation at idle is slow, the drive needs time to wake from idle.
Enterprise NVMe can do fsync much faster than consumer hardware. This is because they can cheat and report a successful fsync() before the data has actually been flushed to flash. They have backup capacitors which allow them to flush caches in case of power loss, so there is no data loss.
Here PM983 doing `fio --name=fsync_test --ioengine=sync --rw=randwrite --bs=4k --size=1G --numjobs=1 --runtime=10s --time_based --fsync=1`
The same test on SN850X
What drive is this and does it need a trim? Not all NVMe devices are created equal, especially in consumer drives. In a previous role I was responsible for qualifying drives. Any datacenter or enterprise class drive that had that sort of latency in direct IO write benchmarks after proper pre-conditioning would have failed our validation.
My current one reads SAMSUNG MZVL21T0HCLR-00BH1 and is built into a quite new work laptop. I can't get below around 250us avg.
On my older system I had a WD_BLACK SN850X but had it connected to an M.2 slot which may be limiting. This is where I measured 1-2ms latency.
Is there any good place to get numbers of what is possible with enterprise hardware today? I've struggled for some time to find a good source.
Unfortunately, this data is harder to find than it should be. For instance, just looking at Kioxia, which I've found to be very performant, their datasheets for the CD series drives don't mention write latency at all. Blocks and Files[1] mentions that they claim <255us average, so they must have published that somewhere. This is why we would extensively test multiple units ourselves, following proper preconditioning as defined by SNIA. Averaging 250us for direct writes is pretty good.
[1] https://blocksandfiles.com/2023/08/07/kioxias-rocketship-dat...
I'm not an expert, but I think an enterprise NVMe will have some sort of power loss protection, so it can afford to acknowledge fsync from RAM/caches, since they will still be written out on power loss. Consumer NVMe drives AFAIK lack this, so fsync will force the data to be written to flash.
I assume fsyncing a whole file does more work than just ensuring that specific blocks made it to the WAL which it can achieve with direct IO or maybe sync_file_range.
NVMe is just a protocol. There are drives that are absolute shit and others that cost as much as luxury automobiles. In either case not quite DRAM latency because it is expansion bus attached.
SQLite doesn't work super well with parallel writes. It supports them, yes, but in a somewhat clunky way, and writes can still fail. To avoid problems with parallel writing, besides setting a specific (clunky) mode of operation, you can use the trick of funneling all writes through a single thread in the app. That usually makes the already-complicated parallel code slightly more complicated.
If only one thread of writing is required, then SQLite works absolutely great.
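A minimal sketch of that single-writer trick in Python (WAL mode assumed so readers aren't blocked; the events table is made up):

# Minimal sketch: funnel all writes through one queue and one
# dedicated thread, so SQLite never sees concurrent writers.
import sqlite3, threading, queue

writes = queue.Queue()

def writer():
    con = sqlite3.connect("app.db")
    con.execute("PRAGMA journal_mode=WAL")  # readers proceed during writes
    con.execute("CREATE TABLE IF NOT EXISTS events (payload TEXT)")
    while True:
        sql, params = writes.get()
        con.execute(sql, params)
        con.commit()
        writes.task_done()

threading.Thread(target=writer, daemon=True).start()

# Any thread can enqueue a write without sharing a connection:
writes.put(("INSERT INTO events (payload) VALUES (?)", ("hello",)))
writes.join()  # block until the writer has committed everything queued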
> If only one thread of writing is required, then SQLite works absolutely great.
The whole point of getting your commands down to microsecond execution time is so that you can get away with just one thread of writing.
Entire financial exchanges operate on this premise.
Entire financial exchanges are not running single threaded writes to their persistent data store. If they are, and you have a link, I’d love to be proven wrong.
https://www.infoq.com/presentations/LMAX/
https://use.expensify.com/blog/scaling-sqlite-to-4m-qps-on-a...
I had a lot of fun with Coolify running my app and my database on the same machine. It was pretty cool to see zero latency in my SQL queries, just the cost of the engine.
I'm always curious about latency for all these newdb offerings like PlanetScale/Neon/Supabase.
It seems like they don't emphasise strongly enough: _make sure you colocate your server in the same cloud/az/region/dc as our db_. I suspect a large fraction of their users don't realise this, and have loads of server-db traffic happening very slowly over the public internet. It won't take many slow db reads (get session, get a thing, get one more) to trash your server's response latency.
Can I just say that I love how informative this was that I completely forgot it was to promote a product? Excellent visuals and interactivity.
Seeing the disk IO animation reminded me of Melvin Kaye[0]:
Mel never wrote time-delay loops, either, even when the balky Flexowriter required a delay between output characters to work right. He just located instructions on the drum so each successive one was just past the read head when it was needed; the drum had to execute another complete revolution to find the next instruction.
[0] https://pages.cs.wisc.edu/~markhill/cs354/Fall2008/notes/The...
I was reminded of Mel as well! If you haven't seen it, Usagi Electric on YouTube has gotten a drum-memory system from the 1950s nearly fully-functional again.
I think there's something about distributed storage which is not appreciated in this article:
1. Some systems do not support replication out of the box. Sure your cassandra cluster and mysql can do master slave replication, but lots of systems cannot.
2. Your life becomes much harder with NVMe storage in the cloud, as you need to respect maintenance intervals and cloud-initiated drains. If you do not hook into those systems and drain your data to a different node, the data goes poof. Separating storage from compute allows the cloud operator to drain and move around compute as needed, and since the data is independent from the compute — and the cloud operator manages that data system and draining for that system as well — the operator can manage workload placements without the customer needing to be involved.
Good points. PlanetScale's durability and reliability are built on replication - MySQL replication - and all the operational software we've written to maintain replication in the face of servers coming and going, network partitions, and all the rest of the weather one faces in the cloud.
Replicated network-attached storage that presents a "local" filesystem API is a powerful way to create durability in a system that doesn't build it in like we have.
Agreed, if you are a mature enough and well-funded organization, you probably should be using NVMe and then running distributed systems on top of the NVMe drives to manage replication yourself.
This is where s2.dev could in theory come to the rescue. Able to keep up with the streaming bandwidth, but durable.
I assume DRBD still exists although it's certainly easier to use EBS.
what do you mean by drains?
AWS, for one example, provide a feed of upcoming "events" in EC2 in which certain instances will need to be rebooted or terminated entirely due to whatever maintenance they're doing on the physical infrastructure.
If you miss a termination event you miss your chance to copy that data elsewhere. Of course, if you're _always_ copying the data elsewhere, you can rest easy.
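A rough sketch of watching that feed with boto3 (assumes AWS credentials and a default region are already configured):

# Rough sketch: list scheduled maintenance events for EC2 instances.
import boto3

ec2 = boto3.client("ec2")
resp = ec2.describe_instance_status(IncludeAllInstances=True)
for status in resp["InstanceStatuses"]:
    for event in status.get("Events", []):
        # event["Code"] is e.g. "instance-stop" or "system-maintenance"
        print(status["InstanceId"], event["Code"], event.get("NotBefore"))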
Nice blog. There is also the problem that cloud storage is generally "just unusually slow" (this has been noted by others before, but here is a nice summary of the problem: http://databasearchitects.blogspot.com/2024/02/ssds-have-bec...)
Having recently added support for storing our incremental indexes in https://github.com/feldera/feldera on S3/object storage (we had NVMe for longer due to obvious performance advantages mentioned in the previous article), we'd be happy for someone to disrupt this space with a better offering ;).
That database architects blog is a great read.
Metal looks super cool, however at my last job when we tried using instance local SSD's on GCP, there were serious reliability issues (e.g. blocks on the device losing data). Has this situation changed? What machine types are you using?
Our workaround was this: https://discord.com/blog/how-discord-supercharges-network-di...
Neat workaround! We only started working with GCP Local SSDs in 2024 and can report we haven't experienced read or write failures due to bad sectors in any of our testing.
That said, we're running a redundant system in which MySQL semi-sync replication ensures every write is durable to two machines, each in a different availability zone, before that write's acknowledged to the client. And our Kubernetes operator plus Vitess' vtorc process are working together to aggressively detect and replace failed or even suspicious replicas.
In GCP we find the best results on n2d-highmem machines. In AWS, though, we run on pretty much all the latest-generation types with instance storage.
This is really cool, and PlanetScale Metal looks really solid, too. Always a huge sucker for seeing huge latency drops on releases: https://planetscale.com/blog/upgrading-query-insights-to-met....
If this is true, then how do "serverless" database providers like Neon advertise "low latency" access? They use object storage like S3, which I imagine is an order of magnitude worse than networked storage for latency.
edit: apparently they build a kafkaesque layer of caching. No thank you, I'll just keep my data on locally attached NVMe.
For years, I just didn't get why replicated databases always stick with EBS and deal with its latency. Like, replication is already there, why not be brave and just go with local disks? At my previous orgs, where we ran Elasticsearch for temporary logs/metrics storage, I proposed we do exactly that since we didn't even have major reliability requirements. But I couldn't convince them back then, we ended up with even worse AWS Elasticsearch.
I get that local disks are finite, yeah, but I think the core/memory/disk ratio would be good enough for most use cases, no? There are plenty of local disk instances with different ratios as well, so I think a good balance could be found. You could even use local hard disk ones with 20TB+ disks for implementing hot/cold storage.
Big kudos to the PlanetScale team, they're like, finally doing what makes sense. I mean, even AWS themselves don't run Elasticsearch on local disks! Imagine running ClickHouse, Cassandra, all of that on local disks.
I looked into this with an idea of running SQL Server Availability Groups on the Azure Las_v3 series VMs, which have terabytes of local SSD.
The main issue was that after a stop-start event, the disks are wiped. SQL Server can’t automatically handle this, even if the rest of the cluster is fine and there are available replicas. It won’t auto repair the node that got reset. The scripting and testing required to work around this would be unsupportable in production for all but the bravest and most competent orgs.
Really, really great article. The visualization of random writes is very nicely done.
On:
> Another issue with network-attached storage in the cloud comes in the form of limiting IOPS. Many cloud providers that use this model, including AWS and Google Cloud, limit the amount of IO operations you can send over the wire. [...]
> If instead you have your storage attached directly to your compute instance, there are no artificial limits placed on IO operations. You can read and write as fast as the hardware will allow for.
I feel like this might be a dumb series of questions, but:
1. The ratelimit on "IOPS" is precisely a ratelimit on a particular kind of network traffic, right? Namely traffic to/from an EBS volume? "IOPS" really means "EBS volume network traffic"?
2. Does this save me money? And if yes, is from some weird AWS arbitrage? Or is it more because of an efficiency win from doing less EBS networking?
I see pretty clearly putting storage and compute on the same machine strictly a latency win, because you structurally have one less hop every time. But is it also a throughput-per-dollar win too?
> 1. The ratelimit on "IOPS" is precisely a ratelimit on a particular kind of network traffic, right? Namely traffic to/from an EBS volume? "IOPS" really means "EBS volume network traffic"?
The EBS volume itself has a provisioned capacity of IOPS and throughput, and the EC2 instance it's attached to will have its own limits as well across all the EBS volumes attached to it. I would characterize it more as a different model. An EBS volume isn't just a slice of a physical PCB attached to a PCIe bus; it's a share in a large distributed system spanning a large number of physical drives, with its own dedicated network capacity to/from compute, like a SAN.
> 2. Does this save me money? And if yes, is from some weird AWS arbitrage? Or is it more because of an efficiency win from doing less EBS networking?
It might. It's a set of trade-offs.
That makes sense. The weirdness of https://docs.aws.amazon.com/ebs/latest/userguide/ebs-io-char... makes more sense now. Reminds me of DynamoDB capacity units.
For network-attached storage, IOPS limits cap operations per second, not bandwidth, since IO operations can happen at different sizes (e.g. 4K vs. 16K blocks).
More specific details for EC2 instances can be seen in the docs here: https://docs.aws.amazon.com/ec2/latest/instancetypes/gp.html...
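As a rough worked example of why the IO size matters (the IOPS number here is made up, not an actual AWS limit):

# Illustrative arithmetic only: the same IOPS ceiling yields very
# different throughput depending on the IO size.
iops_limit = 16_000  # a hypothetical provisioned IOPS cap
for block_kib in (4, 16):
    print(f"{block_kib} KiB IOs -> {iops_limit * block_kib / 1024:.0f} MiB/s")
# 4 KiB IOs -> 62 MiB/s
# 16 KiB IOs -> 250 MiB/s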
Great nerdbaiting ad. I read all the way to the bottom of it, and bookmarked it to send to my kids if I feel they are not understanding storage architectures properly. :)
The nerdbaiting will now provide generational benefit!
I love the visuals, and if it's ok with you will probably link them to my class material on block devices in a week or so.
One small nit:
> A typical random read can be performed in 1-3 milliseconds.
Um, no. A 7200 RPM platter completes a rotation in 8.33 milliseconds, so rotational delay for a random read is uniformly distributed between 0 and 8.33ms, i.e. mean 4.16ms.
>a single disk will often have well over 100,000 tracks
By my calculations a Seagate IronWolf 18TB has about 615K tracks per surface given that it has 9 platters and 18 surfaces, and an outer diameter read speed of about 260MB/s. (or 557K tracks/inch given typical inner and outer track diameters)
For more than you ever wanted to know about hard drive performance and the mechanical/geometrical considerations that go into it, see https://www.msstconference.org/MSST-history/2024/Papers/msst...
Whoah, thanks for sharing the paper.
I reviewed it three times for different conferences :-)
I’m still annoyed they didn’t include the drain time equation I used for calculating track width, which falls out of one of their equations.
Oh, and I’m very glad you showed differing track sizes across the platter. (BTW, did you know track sizes differ between platters? Google “disks are like snowflakes”)
That great infographic at the top illustrates one big reason why 'dev instances in the cloud' is a bad idea.
Amazing! The visualizations are so great!
Nice article, but the replicated approach isn't exactly comparing like with like. To achieve the same semantics you'd need to block for a response from the remote backup servers which would end up with the same latency as the other cloud providers...
Fantastic article, well explained and beautiful diagrams. Thank you bddicken for writing this!
You are welcome!
Probably the best diagrams I've ever seen in a blog post.
Very nice animations.
Gosh, this is beautiful. Fantastic work, Ben. <3
Can someone share their experience in creating such diagrams? What libraries and tools are useful for such interactive diagrams?
For this particular one I used d3.js, but honestly this isn't really the type of thing it's designed for. I've also used GSAP for this type of thing on this article I wrote about database sharding.
https://planetscale.com/blog/database-sharding
Do you mean something for data visualization, or tricks for condensing large data sets with cursors?
https://d3js.org/
Best of luck =3
Hrm "unlimited IOPS"? I suppose contrasted against the abysmal IOPS available to Cloud block devs. A good modern NVMe enterprise drive is specced for (order of magnitude) 10^6 to 10^7 IOPS. If you can saturate that from database code, then you've got some interesting problems, but it's definitely not unlimited.
Technically any drive has a finite IOPS capacity. We have found that no matter how hard we tried, we could not get MySQL to exhaust the max IOPS of the underlying hardware. You hit CPU limits long before hitting IOPS limits. Thus "infinite IOPS."
Plenty of text but also many cool animations. I'm a sucker for visual aids. It's a good balance.
We are working on a platform that lets you measure this stuff with pretty high precision in real time.
You can check out our sandbox here:
https://yeet.cx/play
what local nvme is getting 20us? Nitro?
That was a cool advertisement, I must give them that.
Disk latency, and one's aversion to it, is IMHO the only way Hetzner costs can run up on you. You want to keep the database on local disk, and not their very slow attached Volumes (Hetzner's EBS). In short, you can have relatively light workloads that end up on somewhat expensive VMs because you need 500GB or more of local disk. 1TB of local disk is the biggest VM they offer in the US, at 300 EUR a month.