Look, I get it. You haven't thought about Big O since college — if you ever did at all. You've been shipping production code for years, and .sort() just works. Why would you care about algorithmic complexity when you have Claude Code writing half your codebase anyway?
(Quick disclaimer: everything in this article applies to any AI coding tool — Copilot, Cursor, Codex, you name it. I just happen to use Claude Code almost exclusively these days. It's my little buddy, "claudinho" as we say in Brazilian Portuguese. So that's what I'll reference throughout, but the principles are tool-agnostic.)
Here's the thing: understanding Big O doesn't make you faster at writing code. It makes you faster at evaluating the code an AI writes for you. And that's a completely different — and way more valuable — skill in 2026.
Let me explain.
You're not the coder anymore. You're the reviewer.
When you use Claude Code, your job shifts. You go from "person who writes the solution" to "person who decides if the solution is good enough." And that's where most developers get stuck.
Claude Code will happily generate a function that works. It passes the tests. It looks clean. Ship it, right?
Not so fast.
I was working on a feature recently where I asked Claude Code to filter and deduplicate a list of user records. It gave me something like this:
```javascript
function getUniqueUsers(users) {
  return users.filter((user, index) => {
    return users.findIndex((u) => u.id === user.id) === index;
  });
}
```

Clean. Readable. Works perfectly with 50 users. But what happens when you hit 10,000 users? Or 100,000?
If you don't know Big O, you look at that and think "neat." If you do know Big O, you immediately see the problem: .filter() runs through the entire array (that's O(n)), and inside each iteration, .findIndex() also runs through the entire array (another O(n)). That's O(n²). For 100,000 records, that's potentially 10 billion comparisons.
The fix? A Set.
```javascript
function getUniqueUsers(users) {
  const seen = new Set();
  return users.filter((user) => {
    if (seen.has(user.id)) return false;
    seen.add(user.id);
    return true;
  });
}
```

Same result. O(n) instead of O(n²). The difference between "works fine" and "crashes the tab."
And here's the kicker: I didn't rewrite this by hand. I told Claude Code: "This is O(n²) because of the nested findIndex. Can you refactor it to O(n) using a Set or Map?" And it did. Instantly.
The Big O knowledge didn't help me write the code. It helped me prompt for better code.
The cheat sheet you actually need
You don't need to memorise proofs. You need to recognise patterns. Here's what matters in practice:
O(1) — Constant time. Accessing an object property, looking up a Map/Set value. The holy grail. No matter how big the data gets, the operation takes the same time.
O(n) — Linear. One loop through the data. A .map(), a .filter(), a .forEach(). Totally fine for most things.
O(n²) — Quadratic. A loop inside a loop over the same data. This is the red flag. When you see nested iterations in Claude Code's output, alarm bells should go off.
O(log n) — Logarithmic. Binary search. You cut the problem in half each step. Like looking up a word in a dictionary — you don't go page by page.
O(n log n) — Linearithmic. What good sorting algorithms achieve (merge sort, quicksort). When Claude Code uses .sort(), this is what's happening under the hood.
That's it. Five patterns. You don't need more for day-to-day work.
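If the list above feels abstract, here is each pattern as a few lines of JavaScript. The data and helper names are purely illustrative:

```javascript
const users = [{ id: 1 }, { id: 2 }, { id: 3 }];
const byId = new Map(users.map((u) => [u.id, u]));

// O(1): a Map lookup takes the same time no matter how many users there are.
byId.get(2);

// O(n): one pass through the data.
const ids = users.map((u) => u.id);

// O(n²): a pass inside a pass over the same data — the red flag.
const pairs = users.flatMap((a) => users.map((b) => [a.id, b.id]));

// O(log n): binary search halves the problem each step (input must be sorted).
function binarySearch(sorted, target) {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) lo = mid + 1;
    else hi = mid - 1;
  }
  return -1;
}

// O(n log n): what a good comparison sort costs under the hood.
const sortedIds = [...ids].sort((a, b) => a - b);
```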
How this changes your Claude Code workflow
Once you internalise these patterns, your prompts get specific. Instead of:
"Write a function that finds duplicate emails in a list"
You write:
"Write a function that finds duplicate emails in a list. Use a frequency map for O(n) time complexity. The list could have 500k+ entries."
See the difference? The first prompt might give you a perfectly correct O(n²) solution. The second prompt gives you the right solution and communicates your constraints. You're speaking the same language as the model.
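For reference, here's a sketch of what that second prompt should get you back — one pass to build a frequency map, one pass to collect repeats (not Claude's literal output, just the shape of it):

```javascript
function findDuplicateEmails(emails) {
  // First O(n) pass: count occurrences of each email.
  const counts = new Map();
  for (const email of emails) {
    counts.set(email, (counts.get(email) ?? 0) + 1);
  }
  // Second O(n) pass over the map: keep anything seen more than once.
  return [...counts.entries()]
    .filter(([, count]) => count > 1)
    .map(([email]) => email);
}
```

Two linear passes, so still O(n) overall — even at 500k+ entries this stays comfortably fast.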
Here are a few more examples of how Big O thinking improves your prompts:
Before: "Sort this array of objects by date"
After: "Sort this array of objects by date. It's already mostly sorted — would insertion sort be more efficient here than the default .sort()?"
Before: "Check if this string is a palindrome"
After: "Check if this string is a palindrome using two pointers from both ends, O(n) time and O(1) space — no need to reverse the string"
Before: "Find the common elements between two arrays"
After: "Find the common elements between two arrays. Convert the smaller one to a Set first for O(n + m) instead of O(n × m)"
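As a sketch of what that last "after" prompt should produce (function and variable names are illustrative):

```javascript
function commonElements(a, b) {
  // Build the Set from the smaller array: O(min(n, m)) extra space.
  const [small, large] = a.length <= b.length ? [a, b] : [b, a];
  const lookup = new Set(small);
  // One pass over the larger array; Set.has is O(1). Total: O(n + m).
  // The outer Set dedupes the result.
  return [...new Set(large.filter((x) => lookup.has(x)))];
}
```

Compare that with the naive version — `a.filter((x) => b.includes(x))` — which re-scans `b` for every element of `a`: that's the O(n × m) the prompt is steering away from.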
You're not writing the code. You're directing it. And direction requires vocabulary.
The uncomfortable truth about AI-assisted development
Here's what nobody talks about: AI coding tools are making the gap between junior and senior developers wider, not smaller.
A junior developer uses Claude Code and gets a working solution. A senior developer uses Claude Code and gets an optimised solution — because they know what to ask for. They can look at generated code and say "this won't scale" or "this is doing three passes when one would work."
Big O is one of those fundamentals that seems academic until you realize it's the vocabulary you need to have meaningful conversations with your AI tools. It's the difference between being a passenger and being the navigator.
And the beautiful irony? You don't even need to implement the algorithms yourself. You just need to understand them well enough to know when Claude Code is handing you an O(n²) solution dressed up in clean syntax.
"Cool, but my software doesn't need this"
Now, if you're anything like me — a bit of a geek, the kind of person who finds algorithmic puzzles genuinely fun — the complexity stuff alone is enough to get you hooked. There's something deeply satisfying about turning an O(n²) into an O(n). It's like solving a puzzle where the reward is your code running a thousand times faster.
But I know some of you are reading this and thinking: "Vinny, that's nice. But my API handles 200 requests a day, not 200,000. I'm not building Google Search. Why should I care?"
Fair. Let's talk about what those unnecessary operations actually cost.
Those 100 million comparisons that could've been 10,000? They're not free. Someone is paying for that CPU time. On AWS, on Vercel, on whatever you're running. If it's serverless, every extra millisecond is an extra fraction of a cent. Sounds tiny, right? Now multiply that by your user count. By the number of calls per day. By every endpoint where you accepted an O(n²) from your AI tool without questioning it.
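To make that concrete, here's a back-of-envelope sketch. Every number below is an assumption you should replace with your own — the per-GB-second rate is roughly what AWS publishes for Lambda, the rest is made up for illustration:

```javascript
// Illustrative serverless cost estimate — all inputs are assumptions.
const extraMsPerCall = 50;             // extra CPU time the O(n²) path burns per call
const callsPerDay = 100_000;           // across all users
const memoryGb = 0.5;                  // memory allocated to the function
const pricePerGbSecond = 0.0000166667; // roughly AWS Lambda's published x86 rate

const extraGbSecondsPerDay = (extraMsPerCall / 1000) * callsPerDay * memoryGb;
const extraCostPerMonth = extraGbSecondsPerDay * pricePerGbSecond * 30;
// A dollar or two a month — for ONE endpoint. Now multiply across every
// endpoint where an unquestioned O(n²) shipped, and every service running it.
```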
It adds up. It always adds up.
And here's the part that's easy to ignore but hard to un-see once you see it: wasted computation is wasted energy. Every CPU cycle that didn't need to happen is electricity that didn't need to be consumed, heat that didn't need to be dissipated, carbon that didn't need to be emitted. We're at a point in computing history where the difference between 1,000 operations and 100,000 might be invisible to the human eye. Your user won't notice. Your loading spinner won't even appear.
But the planet notices. Your infrastructure bill notices.
It's like using a sledgehammer to push in a thumbtack. It gets the job done, but at what cost?
Being mindful of complexity isn't just about building things that scale. It's about not burning resources for no reason. It's about craft. And in an era where AI can generate mountains of "working" code in seconds, the developers who care about efficiency aren't just better engineers — they're more responsible ones.
Where to go from here
If you're convinced and want to actually learn this stuff, here's my honest suggestion: don't start with a textbook. Start with code you've already shipped.
Open a recent project. Look at the functions with loops. Ask yourself: "Is there a loop inside a loop here? Could I use a Map or Set to eliminate one?" Run it through Claude Code and ask it to analyse the time complexity.
You'll be surprised how many O(n²) solutions are hiding in production code that "works fine" — until it doesn't.
