I have a rule I apply to every system I build: change one variable at a time.
It sounds simple. It sounds obvious. Most developers will nod and agree with it immediately. Then they'll spend a weekend changing six things at once, see an unexpected result, and spend the next three days trying to figure out which of the six things caused it.
I know because I was that developer. And nothing cured me of it faster than building an automated cryptocurrency trading bot.
How It Started: The Problem Manual Trading Actually Creates
I want to be honest about why I built the bot, because the answer changed over time.
The first version of the answer was: I want to make money while I sleep.
That's true, but it's not the real answer. The real answer came about three months into manually trading and realizing what manual trading actually is: a system designed to make you blame yourself for losses and take credit for wins. You stay up too late watching charts. You make a trade because a number feels right. You hold a position longer than you should because letting go feels like admitting you were wrong. You second-guess every decision because you're human and humans are not built for emotionless, repetitive, high-frequency decisions at 2am.
The problem wasn't the market. The problem was me — specifically, my presence in the process.
So the goal evolved: eliminate the variable of human emotion entirely. Build something that executes the same logic the same way every single time, regardless of what the price is doing, what I'm feeling, or what time it is. The bot doesn't have opinions. It doesn't have anxiety. It doesn't have a bad day.
That was version one of the real answer.
The second version came later: I want to capture opportunities that are physically impossible for a human to act on.
Spread discrepancies in OTC markets exist in windows that close in seconds. No human, no matter how fast or focused, can scan, evaluate, and execute consistently at the speed required to capture them reliably. Automation isn't just more convenient than manual trading in these cases — it's the only viable tool for the job.
By the time I was deep into development, the original "make money while I sleep" framing felt almost quaint. The real project had become: build a system that's faster, more consistent, and more disciplined than any human can be.
The One-Variable Rule in Practice
Here's where the engineering philosophy comes in.
When you're building a trading system, you're really building a decision engine with a dozen tunable parameters: scan frequency, spread thresholds, trade size, target pairs, profit-taking triggers, cooldown windows, error handling logic, and more. Every one of these variables affects the output. And the output — whether a trade was profitable — is a lagging signal. You don't know if your configuration worked until after the fact, and even then the market conditions during that run are never exactly replicated.
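To make that concrete, here is a minimal sketch of what such a parameter set might look like. Every name and default here is illustrative, not the bot's actual configuration; the point is that an immutable config plus a one-field `replace` makes "change one variable at a time" a property of the code rather than of willpower.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class BotConfig:
    """Tunable parameters of the decision engine (all names/values illustrative)."""
    scan_interval_s: float = 5.0       # seconds between market scans
    spread_threshold: float = 0.004    # minimum spread (0.4%) that triggers a trade
    trade_size: float = 100.0          # notional size per trade
    target_pairs: tuple = ("BTC/USDT", "ETH/USDT")
    take_profit: float = 0.006         # profit-taking trigger
    cooldown_s: float = 60.0           # wait after a trade before re-entering
    max_retries: int = 3               # error-handling retry budget

# dataclasses.replace() returns a new config differing in exactly one field,
# so every experiment is a clean diff against the baseline.
baseline = BotConfig()
candidate = replace(baseline, scan_interval_s=2.0)
```

Because the config is frozen, you can't quietly mutate a second parameter mid-run; each candidate is an explicit, auditable one-field change.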
This creates an obvious trap: you get a bad result, so you tweak four things at once, get a better result, and tell yourself you fixed it. But which of the four things fixed it? You don't know. And the next time conditions shift, you're flying blind.
The one-variable rule is the only discipline that breaks this trap. You pick one parameter, change it, run enough cycles to generate a statistically meaningful sample, evaluate the result, then decide whether to keep the change or revert it. Then — and only then — do you move to the next variable.
It is slower. It requires patience that feels almost painful when you want to just ship something. But it produces something more valuable than a fast result: it produces knowledge. You don't just end up with a configuration that works. You end up understanding why it works.
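The loop described above can be sketched in a few lines. This is a simplified model, not the bot's real evaluation code: `run_once` stands in for one full scan-and-trade cycle returning that cycle's P&L, and "keep or revert" is reduced to a bare mean comparison (a real evaluation would also want a significance test on the samples).

```python
import statistics

def run_cycles(config, n_cycles, run_once):
    """Run n_cycles under a fixed config and collect per-cycle P&L samples.
    `run_once(config)` is a stand-in for one scan/trade cycle."""
    return [run_once(config) for _ in range(n_cycles)]

def evaluate_change(baseline_cfg, candidate_cfg, run_once, n_cycles=500):
    """One iteration of the discipline: run both configs on comparable
    samples, keep the candidate only if it actually beats the baseline,
    otherwise revert. Only then do you move to the next variable."""
    base = run_cycles(baseline_cfg, n_cycles, run_once)
    cand = run_cycles(candidate_cfg, n_cycles, run_once)
    keep = statistics.mean(cand) > statistics.mean(base)
    return candidate_cfg if keep else baseline_cfg
```

The discipline lives in the structure: the function accepts exactly one candidate, so there is never a result you can't attribute to a single change.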
The Scan Frequency Case Study
The most interesting lesson I learned from this entire process came from testing scan frequency.
My assumption going in was straightforward: scan more often, catch more opportunities, make more money. More scans per unit time means more potential trigger events, which means more trades, which means more accumulation. Linear logic.
It was wrong.
What I actually found was that there's a meaningful relationship between scan frequency and signal quality. Scan too fast and you're not just capturing real opportunities — you're also capturing noise. You execute on micro-movements that look like spreads but aren't durable enough to close profitably. You increase transaction volume without increasing profitable volume. Worse, in some API environments, you start hitting rate limits or creating race conditions in your own execution logic.
Scan too slow and you're leaving real opportunities on the table — windows open and close before your next cycle runs.
The right frequency isn't "as fast as possible." It's the frequency that optimizes the ratio of quality signals to noise, accounting for the specific exchange's API behavior and the volatility characteristics of the pairs you're targeting.
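One way to frame that tradeoff in code: score each candidate interval by how many durable signals it captures, weighted by its signal-to-noise ratio, and pick the maximum. This is a hedged sketch, not the bot's actual scoring function; `capture(interval)` stands in for a test run at that scan interval, and "durable" stands in for whatever definition of a profitably closeable spread you're using.

```python
def signal_quality_ratio(signals):
    """Fraction of captured signals durable enough to close profitably."""
    if not signals:
        return 0.0
    durable = sum(1 for s in signals if s["durable"])
    return durable / len(signals)

def best_scan_interval(intervals, capture):
    """Pick the interval that best balances volume and quality.
    Scanning faster captures more signals but a worse ratio (noise);
    scanning slower improves the ratio but misses open windows."""
    def score(interval):
        signals = capture(interval)
        durable = sum(1 for s in signals if s["durable"])
        return durable * signal_quality_ratio(signals)
    return max(intervals, key=score)
```

Under this scoring, the fastest interval loses because its noise drags the ratio down, and the slowest loses because it simply captures too little; the optimum sits in between, which matches what the testing showed.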
I wouldn't have found that without isolating the variable. If I'd been changing scan frequency alongside spread thresholds at the same time, I'd have never cleanly separated the effect. The result would have worked or not worked, and I would have learned nothing transferable.
Across more than 1,800 scan cycles, each version of the bot taught me something specific — not because I got lucky with a configuration, but because I was disciplined about not changing more than one thing at a time.
What This Has to Do With Software Engineering Generally
Everything I just described applies to any complex system you build.
Debugging a production issue while simultaneously pushing a dependency update, a config change, and a refactor? You're doing multi-variable testing. When it breaks worse or fixes itself mysteriously, you've learned nothing you can reproduce.
Testing a new database indexing strategy while also migrating to a new query pattern? Same problem. The outcome — fast or slow — doesn't tell you which change was responsible.
The one-variable discipline is uncomfortable because it slows you down in the moment. It requires you to run the system in a state you know isn't optimal so you can learn from a controlled comparison. It requires treating your own work as an experiment rather than a solution.
But the engineers I've watched ship the most reliable systems are almost always the ones who are obsessive about this. Not because they follow a rule, but because they've internalized something true: a system you understand is worth more than a system that happens to be working.
The trading bot taught me to internalize it. Not because someone told me the rule — I knew the rule. But because the market doesn't let you cheat. You can't charm it, you can't wing it, and you definitely can't out-feel it.
Change one variable. Run enough cycles. Learn something real. Repeat.
That's the whole methodology.