Of course, you can’t really predict the future, and it’s easy to accidentally set up a backtest that assumes you can (be careful!).
But there’s a simple technique for simulating how your strategy would have played out “into the future.”
It’s called in-sample/out-of-sample testing (sometimes IS/OOS).
It sounds more complex than it really is (data scientists like to over-complicate things sometimes), but here’s how I like to think about it.
When you create and optimize a trading strategy, you can never be sure it will continue to work. But wouldn’t it be nice to look into the future and see if it does?
IS/OOS testing lets you do that by segmenting the trades in your backtest into two sets: a “training” set and a “test” set.
So instead of optimizing across your entire set of backtested trades, you only optimize on the training set.
Then, when you feel you’re at a stopping point and you’re happy with your strategy, you can see how it performs on the test set – the trades you held out, which are therefore “out of sample.”
If you’ve overfitted your strategy, the performance on the test set is not going to match up with the performance on the training set, so it should be easy to spot.
This lets you pretend you went back in time and developed your strategy at a previous date with all the information you could have known at that moment, and then try to “predict the future” from that point.
This is exactly the situation you’re in now – asking yourself whether your strategy is likely to keep performing in the future.
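To make that concrete, here’s a minimal sketch of what an IS/OOS split can look like in Python. The column names, the placeholder trade data, and the chronological 70/30 split are all illustrative assumptions on my part, not a recommendation for how to build the split (that’s coming next time):

```python
import numpy as np
import pandas as pd

# Placeholder backtest results -- in practice this would be the list of
# trades your backtest produced. Column names are illustrative assumptions.
rng = np.random.default_rng(42)
trades = pd.DataFrame({
    "entry_time": pd.date_range("2020-01-01", periods=200, freq="D"),
    "return_pct": rng.normal(0.1, 1.0, size=200),
})

# Split the backtested trades into an in-sample ("training") set and an
# out-of-sample ("test") set. A simple chronological 70/30 split is used
# here purely for illustration; how to choose the split is a separate question.
trades = trades.sort_values("entry_time")
cutoff = int(len(trades) * 0.7)
in_sample = trades.iloc[:cutoff]       # optimize the strategy on these trades only
out_of_sample = trades.iloc[cutoff:]   # hold these out until you're done tweaking

def summarize(subset: pd.DataFrame) -> dict:
    """Basic stats used to compare in-sample vs. out-of-sample performance."""
    r = subset["return_pct"]
    return {
        "trades": len(r),
        "avg_return": round(r.mean(), 3),
        "win_rate": round((r > 0).mean(), 3),
    }

print("in-sample:     ", summarize(in_sample))
print("out-of-sample: ", summarize(out_of_sample))
# If the out-of-sample numbers look much worse than the in-sample numbers,
# that's a strong hint the strategy is overfit to the training trades.
```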
I love this because it’s a simple thing you can do outside of market hours that allows you to build confidence in your strategy before putting real money on the line.
Next time, I’ll address how to create your training set and test set. There are two common ways of doing it, and, in trading, one of them is very wrong.
-Dave