As backend developers, we've all been there—a slow query appears, and our first instinct is to add an index. But here's a truth that separates senior engineers from juniors: more indexes don't always mean better performance. In fact, over-indexing can cripple your database's write performance, bloat your storage, and introduce subtle bugs that are incredibly hard to diagnose.
The Hidden Costs of Indexes
Every index you create comes with hidden costs that compound over time:
- Write Overhead: Every INSERT and DELETE must update every index on the table, and every UPDATE must update each index that covers a changed column. With 10 indexes, a single-row insert can mean 10 extra B-tree modifications.
- Storage Bloat: Indexes consume disk space—sometimes more than the table itself. I've seen production databases where indexes were 3x larger than the actual data.
- Memory Pressure: Index pages compete with table data for the buffer cache, and the planner must track statistics for each index. More indexes = more memory consumption.
- Optimizer Confusion: With too many indexes, the query planner might choose suboptimal execution plans, leading to slower queries.
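The write-overhead point is easy to demonstrate. Here's a minimal sketch using SQLite for portability (the schema, row counts, and index choices are illustrative, not taken from any real workload) that times the same bulk insert with and without a pile of secondary indexes:

```python
import sqlite3
import time

def time_inserts(extra_indexes, n_rows=20000):
    """Insert n_rows into a fresh in-memory table and return elapsed
    seconds; extra_indexes is how many single-column secondary indexes
    to create before inserting."""
    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE orders (
        id INTEGER PRIMARY KEY, user_id INT, product_id INT,
        status TEXT, created_at TEXT, c1 INT, c2 INT, c3 INT, c4 INT)""")
    cols = ["user_id", "product_id", "status", "created_at",
            "c1", "c2", "c3", "c4"]
    for col in cols[:extra_indexes]:
        con.execute(f"CREATE INDEX idx_{col} ON orders({col})")
    rows = [(i, i % 1000, i % 50, "new", "2026-01-01", i, i, i, i)
            for i in range(n_rows)]
    start = time.perf_counter()
    con.executemany("INSERT INTO orders VALUES (?,?,?,?,?,?,?,?,?)", rows)
    con.commit()
    elapsed = time.perf_counter() - start
    con.close()
    return elapsed

bare = time_inserts(0)
heavy = time_inserts(8)
print(f"no secondary indexes: {bare:.3f}s, eight indexes: {heavy:.3f}s")
```

Exact numbers depend on your machine, but the heavily indexed run is consistently slower, because every row insertion also inserts into each secondary B-tree.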
Real-World Case Study: The E-Commerce Disaster
In one of my projects, an e-commerce platform was experiencing severe performance degradation during peak sales. The orders table had 23 indexes—added over time by different developers solving different problems. Here's what we found:
-- The problematic table structure
CREATE TABLE orders (
id BIGINT PRIMARY KEY,
user_id BIGINT,
product_id BIGINT,
status VARCHAR(50),
created_at TIMESTAMP,
updated_at TIMESTAMP,
-- ... 15 more columns
);
-- Too many overlapping indexes!
CREATE INDEX idx_user ON orders(user_id);
CREATE INDEX idx_user_status ON orders(user_id, status);
CREATE INDEX idx_user_created ON orders(user_id, created_at);
CREATE INDEX idx_user_status_created ON orders(user_id, status, created_at);
-- ... 19 more indexes
The solution? We analyzed query patterns using EXPLAIN ANALYZE and consolidated to just 5 carefully designed composite indexes. The result:
- INSERT latency dropped from 45ms to 8ms (82% improvement)
- Storage reduced by 2.3GB
- Peak throughput increased by 340%
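The key insight behind the consolidation is the leftmost-prefix rule: a composite index already serves queries that filter on a prefix of its columns, so narrower indexes like idx_user above are pure overhead. A small sketch in SQLite (not the production database from the case study) shows the planner picking the composite index for a user_id-only lookup:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY, user_id INT, status TEXT, created_at TEXT)""")
# Only the widest composite index exists -- no separate idx_user.
con.execute(
    "CREATE INDEX idx_user_status_created "
    "ON orders(user_id, status, created_at)")

# The composite's leftmost column serves user_id-only lookups.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE user_id = 123"
).fetchall()
print(plan)
```

The plan output names idx_user_status_created, confirming the single-column index would add write cost without adding read capability.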
The Golden Rules of Indexing Strategy
1. Analyze Before You Index
Never add an index without evidence. Use these tools:
-- MySQL: enable the slow query log
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1; -- log queries slower than 1 second
-- PostgreSQL: enable pg_stat_statements
-- (the module must be listed in shared_preload_libraries first)
CREATE EXTENSION pg_stat_statements;
SELECT query, calls, mean_exec_time, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 20;
-- On PostgreSQL 12 and earlier, the columns are mean_time and total_time.
2. Understand Composite Index Column Order
The order of columns in a composite index matters tremendously. Follow the ESR Rule (Equality, Sort, Range):
-- Query pattern
SELECT * FROM orders
WHERE status = 'completed' -- Equality
AND user_id = 123 -- Equality
AND created_at > '2026-01-01' -- Range
ORDER BY created_at DESC; -- Sort
-- Optimal index
CREATE INDEX idx_optimal ON orders(status, user_id, created_at);
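With equality columns leading, the index narrows to one (status, user_id) slice, and created_at entries within that slice are already sorted, so the range filter and the ORDER BY both come for free. A quick SQLite sketch (illustrative only; column-order mechanics are the same idea in MySQL and PostgreSQL) makes this visible:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY, user_id INT, status TEXT, created_at TEXT)""")
# ESR order: equality columns first, range/sort column last.
con.execute("CREATE INDEX idx_esr ON orders(status, user_id, created_at)")

plan = con.execute("""
    EXPLAIN QUERY PLAN
    SELECT * FROM orders
    WHERE status = 'completed' AND user_id = 123
      AND created_at > '2026-01-01'
    ORDER BY created_at DESC""").fetchall()
print(plan)
```

The plan shows a single SEARCH on idx_esr with no separate sort step (no "USE TEMP B-TREE FOR ORDER BY" line), because the index delivers rows already in created_at order.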
3. Audit Indexes Regularly
Set up quarterly index audits. Here's a PostgreSQL query to find rarely used, non-unique indexes:
-- PostgreSQL: find rarely used, non-unique indexes
-- (idx_scan counters accumulate since the last statistics reset)
SELECT
    schemaname || '.' || relname AS table_name,
    indexrelname AS index_name,
    pg_size_pretty(pg_relation_size(i.indexrelid)) AS index_size,
    idx_scan AS times_used
FROM pg_stat_user_indexes ui
JOIN pg_index i ON ui.indexrelid = i.indexrelid
WHERE NOT indisunique
  AND idx_scan < 50
ORDER BY pg_relation_size(i.indexrelid) DESC;
Key Takeaways
- Treat indexes as a trade-off between read and write performance
- Always measure query patterns before adding indexes
- Consolidate overlapping indexes into efficient composite indexes
- Audit and remove unused indexes quarterly
- Document why each index exists in your schema
Remember: The best index is one that serves multiple query patterns efficiently, not one index per query.