If you have ever compared two VPS plans and wondered why one with a smaller disk feels faster than another with a bigger one, IOPS is usually the missing piece. IOPS, short for input/output operations per second, is the count of individual read or write requests a storage device can complete in one second. On a VPS, that single number often predicts perceived performance better than gigabytes of capacity or advertised throughput.
The reason is simple. Most server workloads — databases, web apps, mail queues, container layers — generate a stream of small, scattered reads and writes. They do not move giant files end to end. They hammer the disk with short requests, and the device's IOPS budget decides how many of those it can clear per second before everything else starts waiting.
This guide explains what IOPS actually measures, how it is tested on Linux, how it differs between SSDs and HDDs, and why it should shape how you size a VPS plan.
Quick answer
IOPS is the number of read or write operations a storage device can complete per second. A higher IOPS number means the device can serve more concurrent small requests before queuing or latency becomes a problem.
For a VPS, you should care about IOPS when the workload is:
- a database (MySQL, PostgreSQL, Redis persistence)
- a busy CMS like WordPress or Magento
- a mail or messaging server
- a container host running many small services
- anything with lots of users hitting small files at the same time
You can mostly ignore IOPS when the workload is dominated by a few large sequential file transfers, like a backup target or a media archive — those care more about throughput.
What IOPS actually measures
An I/O operation is a single read or write request handed to the storage device. IOPS is the count of those requests completed per second. It does not say how big each operation was, and it does not say how long any individual request waited. That is why IOPS is almost always reported alongside two other numbers:
- Block size — how big each request is (commonly 4 KiB for random workloads, 1 MiB for sequential)
- Latency — how long each request takes from submission to completion
- Throughput — total bytes per second, which is roughly IOPS × block size
If you ever see an IOPS number quoted without a block size or workload description, treat it as marketing rather than measurement. A device that does 100,000 IOPS at 4 KiB random reads is not the same as one that does 100,000 IOPS at 1 MiB sequential writes — the second case is moving 256× more data.
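To make the relationship concrete, here is the back-of-the-envelope arithmetic in shell, using the hypothetical 100,000 IOPS figure from the example above:

```shell
# Throughput ~= IOPS x block size, with the figures from the paragraph above.
iops=100000
bs_4k=$((4 * 1024))                # 4 KiB in bytes
bs_1m=$((1024 * 1024))             # 1 MiB in bytes
echo "4 KiB random:     $((iops * bs_4k / 1024 / 1024)) MiB/s"   # 390 MiB/s
echo "1 MiB sequential: $((iops * bs_1m / 1024 / 1024)) MiB/s"   # 100000 MiB/s
```

Same IOPS number, wildly different amounts of data moved — which is exactly why the block size has to travel with the quote.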
Random vs sequential, and why it matters
Two workloads can produce wildly different IOPS on the same device:
- Sequential I/O reads or writes blocks that sit next to each other on disk. It is friendly to every storage technology and produces the highest throughput.
- Random I/O reads or writes blocks scattered across the device. It is much harder on traditional spinning disks and is the workload most server applications actually generate.
Databases, virtual machine images, container layers, and most web application backends are dominated by small random I/O. That is why the IOPS number quoted for a VPS plan usually refers to small-block random performance — it is the worst-case workload, and the one most likely to be the bottleneck.
Two other parameters shape the result:
- Queue depth — how many requests are in flight at once. Low queue depth (1–4) reflects a single-threaded app; high queue depth (32+) reflects a busy server with many concurrent users.
- Block size — small blocks (4 KiB) favour high IOPS; large blocks (128 KiB+) favour high throughput.
A meaningful IOPS quote should specify all three: read/write mix, queue depth, and block size.
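fio (introduced below) can express all three of those parameters in an ini-style job file. A sketch of such a job — the libaio engine, the 70/30 read mix, and the file path are illustrative choices, not recommendations:

```ini
[global]
ioengine=libaio     ; async engine so iodepth is actually honoured
direct=1            ; bypass the page cache
runtime=30
time_based=1

[mixed-70-30]
rw=randrw           ; random mixed workload (the read/write mix)
rwmixread=70        ; 70% reads, 30% writes
bs=4k               ; block size
iodepth=32          ; queue depth
size=1G
filename=/var/tmp/fio-mixed
```

Run it with fio mixed.fio and the output reports read and write IOPS separately, under exactly the conditions the file states.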
SSDs vs HDDs: the gap is enormous
The single biggest factor in your VPS storage performance is the underlying device class. The order-of-magnitude differences between drive types are well established in vendor datasheets and storage textbooks:
| Device class | Typical random 4K IOPS | Why |
|---|---|---|
| 7,200 rpm HDD | ~75–200 | Mechanical seek time dominates |
| 15,000 rpm enterprise HDD | ~175–210 | Faster spindle, still mechanical |
| SATA SSD | ~10,000–100,000 | No moving parts, SATA bus limit |
| NVMe SSD | ~100,000–1,000,000+ | PCIe direct attach, deep parallelism |
A modern NVMe SSD can deliver thousands of times more random IOPS than a fast HDD. That is why even a small NVMe-backed VPS often feels much faster than a much larger HDD-backed plan — there is no contest at the small-random-I/O layer where most server workloads actually live.
If you are planning a database or a busy multi-tenant app, prefer NVMe-backed VPS storage and treat HDD-backed plans as backup or archive targets only. For a deeper comparison of the two SSD families, see the related guide on NVMe vs SSD performance, and for choosing the filesystem that sits on top of that storage layer, see XFS vs ext4 for a VPS.
How to measure IOPS on a Linux VPS
You do not have to take a hosting provider's word for it. Two standard Linux tools cover almost every IOPS question you will have on a VPS, and both are in the default Ubuntu and Debian repositories. On a fresh Ubuntu 24.04 image, apt-get install fio sysstat pulls in fio 3.36 and sysstat 12.6.1 — the same versions you will get on most current VPS images.
fio — synthetic benchmarking
fio (Flexible I/O Tester) is the de-facto Linux storage benchmark. It lets you describe an exact workload — read/write mix, block size, queue depth, duration — and reports the resulting IOPS, throughput, and latency.
A simple random-read IOPS test against a test file looks like this:
sudo apt-get install -y fio
fio --name=randread --filename=/var/tmp/fio-test \
  --rw=randread --bs=4k --size=1G --iodepth=32 \
  --ioengine=libaio --runtime=30 --time_based --direct=1
Warning: fio writes real data. Always point it at a dedicated test file (like /var/tmp/fio-test) and never at a raw block device that holds anything you care about.
The output reports IOPS, bandwidth, and latency percentiles. That is the right number to compare across VPS plans, not the marketing figure on the order page.
iostat — live observation
iostat (from the sysstat package) reports what real applications are actually doing to the disk right now. The most useful invocation is:
sudo apt-get install -y sysstat
iostat -dx 1
With -x, the r/s and w/s columns are read and write operations completed per second — that is your IOPS, split by direction and broken down by device — with average request size (rareq-sz, wareq-sz) and queue depth (aqu-sz) alongside. Run it during peak traffic and you will see whether your workload is anywhere near the device's IOPS budget.
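Under the hood, those figures come from kernel counters in /proc/diskstats, which you can sample yourself. A minimal sketch, assuming a Linux guest — the script picks the first listed device for illustration; on a real VPS you would name your actual disk (vda, sda, ...):

```shell
# /proc/diskstats: field 3 is the device name, field 4 is reads completed
# since boot, field 8 is writes completed. Sampling the counters twice,
# one second apart, gives a rough live IOPS figure for that device.
dev=$(awk 'NR==1 {print $3}' /proc/diskstats)
read1=$(awk -v d="$dev" '$3 == d {print $4}' /proc/diskstats)
write1=$(awk -v d="$dev" '$3 == d {print $8}' /proc/diskstats)
sleep 1
read2=$(awk -v d="$dev" '$3 == d {print $4}' /proc/diskstats)
write2=$(awk -v d="$dev" '$3 == d {print $8}' /proc/diskstats)
echo "read IOPS:  $((read2 - read1))"
echo "write IOPS: $((write2 - write1))"
```

iostat does the same sampling continuously and averages over its interval, which is why its first line of output (the since-boot average) should be ignored.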
IOPS, latency, and throughput together
A storage device has three performance dimensions, and they trade off against each other:
- IOPS — how many requests per second
- Latency — how long each request takes
- Throughput — total bytes per second
You cannot maximise all three at once. Push queue depth up and IOPS climbs but latency gets worse. Use a larger block size and throughput climbs but IOPS drops. The right balance depends on the workload: a database wants low latency at moderate IOPS, a backup job wants high throughput and does not care about latency.
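One way to see the trade-off is Little's law, which ties the dimensions together: the number of requests in flight equals IOPS times average latency, so sustainable IOPS is roughly queue depth divided by latency. A minimal sketch with assumed example figures:

```shell
# Little's law: in-flight requests ~= IOPS x average latency.
# Rearranged: sustainable IOPS ~= queue depth / average latency.
qd=32            # assumed requests in flight
lat_us=400       # assumed average completion latency: 0.4 ms
echo "IOPS ~= $(( qd * 1000000 / lat_us ))"    # 80000
```

Hold the queue depth fixed and let latency double, and the achievable IOPS halves — which is what a saturated disk queue looks like from the application's side.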
When you evaluate a VPS plan, ask for all three numbers — and the test conditions that produced them. A single IOPS figure with no context tells you almost nothing.
Why IOPS matters for a VPS
On shared VPS infrastructure, IOPS is often the first resource to feel constrained. CPU and RAM limits are visible and easy to monitor. Storage IOPS limits are invisible until the disk queue fills up and every request starts waiting in line. The symptom is the same one users describe as "the site got slow" — but the cause is not CPU or memory, it is I/O wait.
If you are sizing or troubleshooting a VPS, treat IOPS as a first-class capacity number alongside CPU cores and RAM. For database-heavy or container-heavy workloads it will usually be the dimension that decides whether the plan is comfortable or constantly running hot.
FAQ
What is a good IOPS number for a VPS?
It depends on the workload, but as a rough guide a single busy WordPress or small database site is usually happy in the low thousands of random 4K IOPS. A high-traffic e-commerce or analytics workload often wants tens of thousands. Modern NVMe-backed VPS plans usually clear both bars comfortably.
Is higher IOPS always better?
Not on its own. A higher IOPS number with much higher latency or a much smaller block size may be worse for your real workload than a lower IOPS number under realistic conditions. Always compare IOPS, latency, and block size together.
What is the difference between IOPS and throughput?
IOPS counts operations per second; throughput counts bytes per second. They are related by block size: throughput ≈ IOPS × block size. Random small-block workloads are usually IOPS-bound; sequential large-block workloads are usually throughput-bound.
If you want a VPS where the storage layer is fast enough that IOPS rarely becomes the bottleneck, start with a Cloud VPS plan on NVMe storage and benchmark it yourself with fio before you migrate production traffic.
Closing summary
IOPS is the most useful single number for predicting how a VPS will feel under real server workloads. It measures the rate of small read and write operations, and it is almost always the bottleneck before CPU or RAM on database-heavy or multi-tenant systems.
Use fio to benchmark and iostat to observe, compare drives in the right device class, and never trust an IOPS number that does not come with a block size, queue depth, and read/write mix. Once you read storage specs that way, choosing the right VPS plan gets a lot easier.