Why Your Database Query Is Getting Slower Over Time (When Nothing Changed)
You wrote a query. It was fast. It ran in milliseconds. Everyone was happy.
Months passed. The query now takes seconds. Sometimes minutes. The code has not changed. The database version has not changed. The server is the same. Nothing changed.
Or so you think.
Something did change. You just were not looking at the right thing. The query is the same. The data is not. The usage patterns are not. The database’s internal state is not.
Let me explain why database queries get slower over time and what you can do about it.
The Story of the Query That Slowed Down
I inherited a system with a slow query. The previous team said it used to be fast. The code was untouched for two years. Nothing changed.
I did not believe them. Something changed.
I looked at the query. It was simple. A join between two tables. A where clause on a date range. An order by. Nothing suspicious.
Then I looked at the data. The orders table had 50,000 rows when the query was written. It now had 5,000,000 rows.
The query was scanning more data. Not because the query changed. Because the table grew.
The query was written to fetch “orders from the last 7 days.” That used to be a few thousand rows. Now it was hundreds of thousands. The query did not change. The volume of data it processed did.
The fix was not to change the query. It was to add the index that the table's growth had made necessary.
| When Query Was Written | Now |
|---|---|
| 50,000 rows in orders table | 5,000,000 rows |
| Last 7 days = 2,000 rows | Last 7 days = 200,000 rows |
| Query ran in 50ms | Query ran in 8 seconds |
| No index needed | Index essential |
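Here is roughly what that looked like, sketched in PostgreSQL-flavored SQL. The table and column names are hypothetical, but the shape is the same.

```sql
-- Hypothetical schema: orders(id, customer_id, created_at, total),
-- customers(id, name). The query that slowed down:
SELECT o.id, c.name, o.total
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at >= now() - interval '7 days'
ORDER BY o.created_at DESC;

-- The fix: an index on the filter column, so the last 7 days
-- can be found directly instead of scanning all 5,000,000 rows.
CREATE INDEX idx_orders_created_at ON orders (created_at);
```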
The Hidden Changes No One Tracks
Queries slow down over time because of hidden changes. Here are the most common culprits.
The table grew.
This is the most obvious and most ignored. Your query was fast when the table was small. The table is not small anymore. The query still works the same way. But “same” means scanning more data.
The distribution of data changed.
Maybe your query looks for records where status = 'pending'. When the query was written, 1% of records were pending. Now 50% are pending. The database's execution plan assumes the old distribution. It is now wrong.
The indexes got fragmented.
Indexes are not magic. They need maintenance. Over time, as data is inserted, updated, and deleted, indexes become fragmented. The database has to do more work to use them.
The statistics are outdated.
Databases use statistics to decide how to run queries. Statistics about how many rows match a condition. About how data is distributed. If statistics are old, the database makes bad decisions.
The cache is no longer helping.
Your query was fast because the data was in memory. Cached. Over time, as more data is accessed, your data got evicted. Now the database has to read from disk.
| Hidden Change | Why It Slows Queries |
|---|---|
| Table grew | More data to scan |
| Data distribution changed | Wrong execution plan |
| Index fragmentation | Indexes work harder |
| Outdated statistics | Bad query decisions |
| Cache eviction | Disk instead of memory |
The Execution Plan That Changed (Even Though the Query Did Not)
Here is something that surprises many developers.
The same query can have different execution plans at different times.
The database decides how to run your query based on statistics. It looks at table sizes. Index cardinality. Data distribution. It makes a choice.
When the data changes, the database’s choice may change. Sometimes it changes to a worse plan.
I saw this happen. A query that used an index for six months suddenly stopped using it. The database decided a full table scan was faster. It was wrong. The query went from 10ms to 10 seconds.
Why did the database change its mind? Statistics were updated. A new index was added elsewhere. The table size crossed a threshold. The database made a different choice. A bad choice.
| Good Plan | Bad Plan |
|---|---|
| Uses index | Full table scan |
| Returns quickly | Reads millions of rows |
| Low I/O | High I/O |
| Based on accurate stats | Based on outdated or misleading stats |
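You can watch the database make this choice. A minimal sketch, assuming PostgreSQL; most databases have an equivalent of EXPLAIN.

```sql
-- EXPLAIN ANALYZE runs the query and reports the plan actually used.
EXPLAIN ANALYZE
SELECT o.id, o.total
FROM orders o
WHERE o.created_at >= now() - interval '7 days';

-- What to look for in the output:
--   "Index Scan using idx_orders_created_at" -> the good plan
--   "Seq Scan on orders"                     -> the full table scan
--   estimated rows far from actual rows      -> stale statistics
```

Capture the plan while the query is still fast. A saved good plan gives you something to diff against when it degrades.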
The Index That Stopped Being Useful
Indexes are not set-and-forget. They degrade over time.
Here is what happens to an index over months of usage.
Inserts add new rows. The index has to grow. If the index is on an ever-increasing value (like an auto-increment ID), every new entry lands on the rightmost leaf pages. The tree stays balanced, but that right edge becomes a write hotspot, and the constant page splits cost work on every insert.
Updates change indexed values. The database has to move entries around. This creates fragmentation.
Deletes leave holes in the index. The index still has space reserved for rows that no longer exist.
After enough of this, the index is less efficient. The database has to navigate through empty space. It has to follow more pointers. The index that once made your query fast now barely helps.
| Index Problem | What Happens |
|---|---|
| Inserts | Index grows, may become unbalanced |
| Updates | Entries move, causing fragmentation |
| Deletes | Holes remain, wasted space |
| Overall | Index efficiency decreases over time |
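You can measure this decay instead of guessing. A PostgreSQL sketch, assuming the pgstattuple extension is available and reusing the hypothetical index from earlier.

```sql
-- pgstattuple ships with PostgreSQL but must be enabled.
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- Inspect a B-tree index. Low avg_leaf_density and high
-- leaf_fragmentation are signs the index has degraded.
SELECT avg_leaf_density, leaf_fragmentation
FROM pgstatindex('idx_orders_created_at');

-- Rebuild without blocking writes (PostgreSQL 12+).
REINDEX INDEX CONCURRENTLY idx_orders_created_at;
```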
The Statistics That Lied to the Database
Databases are smart. But they are only as smart as their information.
Statistics tell the database:
- How many rows are in this table?
- How many distinct values in this column?
- What is the most common value?
- How are values distributed?
If statistics are stale, the database makes bad guesses.
Your query looks for status = 'active'. The statistics say 10% of rows are active. In reality, 90% are active now. The database chooses an index scan because it expects few rows. But with 90% of rows matching, a full table scan would be faster. The database chose wrong. Your query is slow.
Updating statistics would fix this. But statistics do not always refresh automatically, and even when they do, the schedule may not match your data's rate of change.
| If Statistics Say | But Reality Is | Database Chooses | Result |
|---|---|---|---|
| 10% of rows match | 90% of rows match | Index scan | Slow; a full scan would win |
| 1,000 rows | 1,000,000 rows | In-memory operations | Spills to disk |
| Mostly unique values | Mostly duplicates | Plan tuned for high selectivity | Inefficient plan |
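Refreshing statistics is usually a single command. In PostgreSQL, for the hypothetical orders table:

```sql
-- Recompute planner statistics for one table.
-- (Rough equivalents: UPDATE STATISTICS in SQL Server,
--  ANALYZE TABLE in MySQL.)
ANALYZE orders;

-- Peek at what the planner now believes about a column.
SELECT n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'orders' AND attname = 'status';
```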
How to Diagnose a Query That Got Slower
When a query slows down over time, do not change the query first. Investigate.
Step 1: Check how much data the query is processing now vs. before.
Same query. Different data volume. That alone explains most slowdowns.
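One way to check, sketched for PostgreSQL with the hypothetical orders table:

```sql
-- Current row count (approximate) and on-disk size. Compare against
-- what the table looked like when the query was written.
SELECT relname,
       n_live_tup                                    AS approx_rows,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_stat_user_tables
WHERE relname = 'orders';
```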
Step 2: Look at the execution plan.
Has it changed? Is the database using the indexes you expect? Is it doing a full table scan where it used to use an index?
Step 3: Check index fragmentation.
Most databases have commands to show index fragmentation. If fragmentation is high, rebuild the index.
Step 4: Update statistics.
Force the database to refresh its knowledge about your data. Then check if the execution plan improves.
Step 5: Check cache hit ratio.
Is your data still in memory? Or is the database reading from disk for every query?
| Diagnostic | What to Look For |
|---|---|
| Data volume | Has the table grown significantly? |
| Execution plan | Did the plan change? Is it using indexes? |
| Index fragmentation | High fragmentation? Rebuild needed. |
| Statistics | Are they up to date? |
| Cache hit ratio | Is data in memory or on disk? |
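For step 5, PostgreSQL already keeps the counters; this sketch turns them into a per-table hit ratio.

```sql
-- Share of block reads served from memory. A hot table sitting
-- well below ~0.99 is probably being read from disk.
SELECT relname,
       heap_blks_hit,
       heap_blks_read,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 4) AS hit_ratio
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC
LIMIT 10;
```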
The Fixes That Work
Once you know why the query slowed down, you can fix it.
If the table grew:

- Add an index that matches the query's where clause
- Consider partitioning the table by date (see the sketch after the summary table below)
- Archive old data that is rarely accessed

If the execution plan changed:

- Update statistics
- Force a specific plan with query hints (last resort)
- Rewrite the query to be more explicit about what you want

If indexes are fragmented:

- Rebuild or reorganize indexes on a schedule
- Monitor fragmentation monthly

If statistics are stale:

- Set up automatic statistics updates
- Update statistics after large data changes

If the cache is cold:

- Warm up the cache on application startup
- Consider a dedicated caching layer
| Root Cause | Fix |
|---|---|
| Table grew | Add index, partition, archive |
| Bad execution plan | Update stats, query hints, rewrite |
| Index fragmentation | Rebuild indexes regularly |
| Stale statistics | Auto-update stats |
| Cold cache | Warm cache, add caching layer |
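For the partitioning fix, here is what declarative range partitioning looks like in PostgreSQL. The table is hypothetical and stripped down.

```sql
-- Partition by date so "last 7 days" queries touch only
-- the most recent partitions.
CREATE TABLE orders (
    id         bigint      NOT NULL,
    created_at timestamptz NOT NULL,
    total      numeric
) PARTITION BY RANGE (created_at);

CREATE TABLE orders_2024_q1 PARTITION OF orders
    FOR VALUES FROM ('2024-01-01') TO ('2024-04-01');
CREATE TABLE orders_2024_q2 PARTITION OF orders
    FOR VALUES FROM ('2024-04-01') TO ('2024-07-01');

-- Old data can be archived a partition at a time.
ALTER TABLE orders DETACH PARTITION orders_2024_q1;
```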
The Query That Taught Me to Monitor
There was a query that ran every hour. It aggregated sales data for the last 24 hours. It was fast for two years.
Then it started timing out.
Nothing changed. Same code. Same database. Same server.
I dug in. The sales table had grown from 100,000 rows to 10,000,000 rows. The query scanned the entire table every hour. It used to scan 100,000 rows. Now it scanned 10,000,000.
The query did not change. The data volume did. The query was not designed to scale.
The fix was not to optimize the query. It was to change the approach. Instead of scanning the last 24 hours every hour, we maintained a rolling aggregate. A summary table updated incrementally.
The query went from scanning millions of rows to scanning a few thousand. It became fast again. And it will stay fast even as the table grows.
| Before | After |
|---|---|
| Scan all rows every hour | Maintain rolling aggregate |
| Slower as table grows | Constant time regardless of table size |
| 10,000,000 rows scanned | 5,000 rows scanned |
| Timing out | Sub-second |
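A sketch of the rolling-aggregate idea in PostgreSQL. The sales table and the summary table's shape are hypothetical, and a production version would track a high-water mark instead of trusting the clock.

```sql
-- Small summary table that replaces the hourly full scan.
CREATE TABLE sales_hourly (
    bucket      timestamptz PRIMARY KEY,
    total_sales numeric NOT NULL,
    order_count bigint  NOT NULL
);

-- Run every hour: fold only the newest rows into the summary.
INSERT INTO sales_hourly (bucket, total_sales, order_count)
SELECT date_trunc('hour', created_at), sum(total), count(*)
FROM sales
WHERE created_at >= now() - interval '1 hour'
GROUP BY 1
ON CONFLICT (bucket) DO UPDATE
SET total_sales = sales_hourly.total_sales + EXCLUDED.total_sales,
    order_count = sales_hourly.order_count + EXCLUDED.order_count;

-- The report now reads a few summary rows, not millions of raw ones.
SELECT sum(total_sales)
FROM sales_hourly
WHERE bucket >= now() - interval '24 hours';
```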
Preventing Slowdowns Before They Happen
You cannot prevent all slowdowns. But you can build systems that degrade gracefully.
Monitor query performance over time.
Do not wait for users to complain. Track query execution times. Alert when a query gets 2x slower.
Review queries before they go to production.
Ask: how will this query perform when the table has 10x more data? 100x more? Design for growth.
Set up regular maintenance.
Update statistics. Rebuild indexes. Archive old data. Do this on a schedule.
Test with realistic data volumes.
Do not test with 1,000 rows when production will have 1,000,000. Your tests will lie to you.
Build for the data you will have, not the data you have now.
Today’s fast query is tomorrow’s slow query. Plan ahead.
| Prevention | Why It Matters |
|---|---|
| Monitor performance | Catch slowdowns early |
| Review with growth in mind | Design for scale |
| Regular maintenance | Keep indexes and stats healthy |
| Test with realistic data | Uncover scaling issues before production |
| Plan for future data | Avoid rewrites later |
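For the monitoring piece, a PostgreSQL sketch, assuming the pg_stat_statements extension is installed and preloaded (column names as of PostgreSQL 13).

```sql
-- Queries ranked by average runtime. Snapshot this regularly
-- and alert when a query's avg_ms doubles.
SELECT query,
       calls,
       round(mean_exec_time::numeric, 2)  AS avg_ms,
       round(total_exec_time::numeric, 2) AS total_ms
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
```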
Closing Thoughts
Something did change. You just were not looking at the right thing.
The code stayed the same. The data did not.
Your database is not betraying you. It is just responding to a world that changed around it. Tables grew. Data distribution shifted. Indexes fragmented. Statistics aged. Caches cooled.
None of these are failures. They are normal. They are expected. They are why databases need maintenance, monitoring, and occasional rethinking.
The next time a query slows down for “no reason,” look deeper. Check the data volume. Look at the execution plan. Check index fragmentation. Update statistics.
The reason is there. You just have to find it.
And once you find it, you can fix it. Not by changing the query. By changing what is around it.
That is the difference between a developer who fights symptoms and one who solves root causes.