Papers With Backtest: An Algorithmic Trading Journey

A Deep Dive into Two Centuries of Statistical Evidence for Successful Trend Following Trading Strategies

15 min | 02/08/2025

Description

Can trend following strategies truly outperform random chance in the world of algorithmic trading? Join us in this enlightening episode of Papers With Backtest: An Algorithmic Trading Journey as we dissect the groundbreaking research paper 'Two Centuries of Trend Following' authored by Lempérière, Deremble, Seager, Potters, and Bouchaud from Capital Fund Management. This episode dives deep into the statistical significance of trend following, revealing a t-statistic of 5.9 since 1960 and an astonishing nearly 10 over the last two centuries—strong evidence that these strategies are not mere products of luck.


Discover how the authors meticulously analyzed data spanning four major asset classes: commodities, currencies, stock indices, and bonds, utilizing futures data from 1960 and spot price proxies dating back to 1800. We unpack their innovative methodology, which employs exponential moving averages to identify trend signals, allowing for a comprehensive understanding of how these strategies perform across various asset classes and time periods.


Throughout the discussion, we explore the implications of a saturation effect in trend strength, shedding light on the critical differences between long-term and short-term trend strategies. As the financial landscape evolves, understanding these dynamics becomes increasingly vital for traders looking to enhance their algorithmic trading approaches.


Despite the challenges posed by recent market fluctuations, our analysis underscores the robustness of trend following strategies. We highlight the key findings from the paper that suggest not only the efficacy of these methods but also their relevance in today’s trading environment. Whether you're an experienced trader or new to algorithmic trading, this episode is packed with insights that can sharpen your trading acumen.


Join us as we navigate through the complexities of trend following and its implications for future trading strategies. With a focus on empirical data and rigorous analysis, this episode is essential listening for anyone serious about mastering the art of algorithmic trading. Tune in to Papers With Backtest and equip yourself with the knowledge to elevate your trading game!


Hosted by Ausha. See ausha.co/privacy-policy for more information.

Transcription

  • Speaker #0

    Hello. Welcome back to Papers with Backtest podcast. Today, we dive into another algo trading research paper.

  • Speaker #1

    Hi there. Yes. Today, we're looking at a really interesting one called Two Centuries of Trend Following. It's by Lempérière, Deremble, Seager, Potters, and Bouchaud. They're all from Capital Fund Management or CFM. CFM,

  • Speaker #0

    right. Okay. So Two Centuries of Trend Following. Let's unpack that. What's the core question they're tackling here?

  • Speaker #1

    Well, fundamentally, they're asking, Does trend following actually work? You know, the basic idea of buying what's going up and selling what's going down. Does that generate anomalous returns?

  • Speaker #0

    Anomalous meaning like better than you just expect by chance.

  • Speaker #1

    Exactly. And they looked across four big asset classes, commodities, currencies, stock indices and bonds. OK,

  • Speaker #0

    standard stuff. But the two centuries part, that sounds pretty ambitious.

  • Speaker #1

    It really is. They use futures data going back to 1960, which is already quite a long period. Yeah. But then using spot price data as proxies, they managed to push the analysis back for some series all the way to 1800. Wow. 1800. That's incredible. So what was the big headline finding from all that digging?

  • Speaker #0

    The main takeaway, and this is probably what should grab your attention most, is that they argue trend following is one of the most statistically significant anomalies out there.

  • Speaker #1

    Really? More than others we hear about.

  • Speaker #0

    Well, the numbers are pretty compelling. They found a T statistic of around five just since 1960. And if you go back across the full two centuries, that T-stat jumps to nearly 10.

  • Speaker #1

    10. Okay, for listeners not deep into stats, a high T-stat basically means it's very unlikely the results are just random luck, right?

  • Speaker #0

    Precisely. Super unlikely. And importantly, this holds true even after they account for the general upward drift you see in many markets over time.
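
For intuition, the t-statistic of a P&L series is just its mean divided by its standard error. A back-of-the-envelope sketch (our own illustration, not code from the paper):

```python
import numpy as np

def pnl_t_stat(monthly_pnl):
    """t-statistic of a P&L series: mean divided by the standard error.
    Around 2 is roughly 95% significance; 5+ leaves little room for luck."""
    pnl = np.asarray(monthly_pnl, dtype=float)
    return pnl.mean() / pnl.std(ddof=1) * np.sqrt(len(pnl))
```

Note the square-root-of-sample-size factor: even a modest monthly edge becomes overwhelmingly significant when the sample spans two centuries of data.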

  • Speaker #1

    So it's not just about being long in a rising market. There's something specific to the trend signal itself.

  • Speaker #0

    That's what the data suggests. Its persistence across such a long time span and across different types of assets is what makes it so interesting for anyone like you, the learner, trying to figure out what really drives market behavior.

  • Speaker #1

    OK, so our mission for this deep dive then is to really get into the weeds of how they defined and tested this strategy.

  • Speaker #0

    Exactly. We want to understand the specific trading rules they simulated and look closely at the back tested results they reported over these very long historical periods.

  • Speaker #1

    Great, let's do it. Starting with the trading rules, how did they actually define the trend they were following? What was the indicator?

  • Speaker #0

    Okay, so the core was an exponential moving average, an EMA, of past monthly prices. Crucially, they calculated this average excluding the current month's price.

  • Speaker #1

    Ah, okay. So looking back, but not including the very latest price point in the average itself.

  • Speaker #0

    Right. And this EMA served as their reference level. The actual signal, let's call it S, was then calculated: they took the difference between the previous month's closing price and that reference EMA level.

  • Speaker #1

    Okay, so price versus its recent average. Makes sense.

  • Speaker #0

    But then they divided that difference by a measure of recent volatility.

  • Speaker #1

    Ah, volatility. So adjusting for how choppy the market's been.

  • Speaker #0

    Exactly. The volatility, sigma, was also calculated using an exponential moving average, but this time on the absolute monthly price changes. Same time frame, n months.

  • Speaker #1

    Got it. So the signal strength depends not just on how far the price is from its average, but also on how much prices have been swinging around lately. And this leads to a constant risk approach.

  • Speaker #0

    Yes. The division by volatility means you're sort of scaling the position size based on risk. And remember, this initial analysis is without trading cost.

  • Speaker #1

    OK, clear. And how did they calculate the profit and loss, the P&L, in this simulation?

  • Speaker #0

    So the theoretical quantity they traded for any asset was proportional to the inverse of that volatility measure, sigma. So less volatile assets get a larger theoretical position size. Right.

  • Speaker #1

    Constant risk again.

  • Speaker #0

    Yep. And the sign of the trade, whether they went long or short, was just determined by the sign of that trend signal we just discussed.

  • Speaker #1

    Positive signal go long, negative signal go short. Simple enough.

  • Speaker #0

    Then the P&L for each step was basically the sign of their position times the next price change, divided by that initial volatility measure. They just sum those up over time.
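
Putting those rules together, here is a minimal Python sketch of the signal and P&L as we've described them. This is our own reconstruction, not code from the paper; the exact EMA convention and index alignment are assumptions:

```python
import numpy as np

def ema(x, n):
    """Exponential moving average with an n-month timescale (alpha = 1/n)."""
    x = np.asarray(x, dtype=float)
    out = np.empty(len(x))
    out[0] = x[0]
    for t in range(1, len(x)):
        out[t] = (1 - 1 / n) * out[t - 1] + (1 / n) * x[t]
    return out

def trend_pnl(prices, n=5):
    """Signal = (previous close - EMA reference) / sigma; trade sign(signal)
    with size 1/sigma; each P&L step = sign * next price change / sigma."""
    p = np.asarray(prices, dtype=float)
    ref = ema(p, n)                                        # reference level
    sigma = np.maximum(ema(np.abs(np.diff(p)), n), 1e-12)  # volatility proxy
    # the signal at month t compares the close at t with the EMA through t-1,
    # so the current month's price is excluded from its own reference
    s = (p[1:-1] - ref[:-2]) / sigma[:-1]
    pnl = np.sign(s) * np.diff(p)[1:] / sigma[:-1]
    return pnl.cumsum()
```

On a steadily rising price series, every signal is positive and every monthly P&L increment is positive, which is the behavior the rule is designed to capture.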

  • Speaker #1

    And you mentioned n months for the look-back period. Did they stick to one specific n?

  • Speaker #0

    They focused primarily on n equals five months for most of the presented results.

  • Speaker #1

    Five months, okay.

  • Speaker #0

    But they did check other values and stated the results were pretty robust. The main conclusions didn't really change much if they used, say, three months or eight months instead.

  • Speaker #1

    Good to know it wasn't just cherry-picking one specific time frame. Now let's talk about the assets they tested this on, starting with the futures data since 1960. What was in the basket?

  • Speaker #0

    It was a pretty diverse mix. They had seven commodity contracts, things like crude oil, natural gas, corn, wheat, sugar.

  • Speaker #1

    Yeah, the usual suspects.

  • Speaker #0

    Right. Then seven 10-year government bond contracts from major countries. Australia, Canada, Germany, Japan, Switzerland, UK, US.

  • Speaker #1

    Okay. Global bonds.

  • Speaker #0

    Seven stock index futures, too, from similar developed markets. And finally, six major currency futures contracts. Aussie dollar, Canadian dollar, the mark/euro, yen, Swiss franc, British pound, all against the U.S. dollar implicitly.

  • Speaker #1

    Right. So good spread across asset classes and geographies. Where did they get all this data from?

  • Speaker #0

    They mentioned using global financial data, GFD, as the source.

  • Speaker #1

    GFD. OK. So they ran the five-month strategy on this basket from 1960 onwards. What did the overall results look like?

  • Speaker #0

    Pretty impressive, actually. The aggregated P&L shown in their figure one looks fairly steady over time. The overall T statistic came out at 5.9.

  • Speaker #1

    And we said 5.9 is highly significant. What about risk-adjusted returns? Sharpe ratio?

  • Speaker #0

    The Sharpe ratio was 0.8, which is quite good for a strategy across multiple asset classes over decades.

  • Speaker #1

    Yeah, 0.8 is solid. But you mentioned earlier they de-biased the results. What was that about?

  • Speaker #0

    Right. So many of these markets, particularly stock indices, tend to drift upwards over the long term. If you're just long, you make money from that drift. They wanted to isolate the profit that came specifically from timing the trends, not just from being exposed to the market's general upward direction. So they statistically removed that long-term drift component.
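
One simple way to picture that de-biasing is to regress the strategy's P&L increments on the asset's raw price changes and keep the residual. This is our own illustration of the idea; the paper's exact procedure may differ:

```python
import numpy as np

def debias(pnl_increments, price_changes):
    """Strip the long-only drift contribution from a P&L series by
    regressing it on the asset's raw price changes and keeping the
    residual (a simple reading of the paper's de-biasing step)."""
    pnl = np.asarray(pnl_increments, dtype=float)
    drift = np.asarray(price_changes, dtype=float)
    d = drift - drift.mean()
    beta = np.dot(pnl - pnl.mean(), d) / np.dot(d, d)  # OLS slope
    return pnl - beta * drift
```

By construction the residual is uncorrelated with the price changes, so whatever significance survives cannot come from simply being long a rising market.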

  • Speaker #1

    Ah, I see. So peeling away the buy and hold part of the return, did the trend following effect survive that?

  • Speaker #0

    It did. Even after de-biasing, the t-statistic was still 5.0, still very significant.

  • Speaker #1

    Wow. OK. And was the strength consistent? Did it work everywhere all the time?

  • Speaker #0

    That's another key finding. They looked at results broken down by sector, commodities, bonds, indices, currencies, and also decade by decade since 1960. The t-statistic for the raw trend strategy was above 2.1 for every sector in every decade. Even the de-biased t-stats stayed above 1.6 across the board. That suggests real universality.

  • Speaker #1

    That level of consistency is pretty remarkable. OK, now let's get to the really long-term stuff, going back before 1960. How did they manage that?

  • Speaker #0

    Yeah, this is where it gets fascinating. Since futures didn't really exist in the same way back then, they had to use proxies.

  • Speaker #1

    Proxies like what?

  • Speaker #0

    Spot prices mostly. So actual exchange rates for currencies, cash prices for commodities, stock market index levels, and government bond yields or rates for the bond component.

  • Speaker #1

    Okay. Using the underlying cash market data, were there any issues with that data going way back? Oh,

  • Speaker #0

    definitely limitations. For currencies, things were complicated before 1973 because of the Bretton Woods fixed exchange rate system and the gold standard before that. So less free-floating trend potential there.

  • Speaker #1

    Right. Fixed rates wouldn't trend much.

  • Speaker #0

    And liquid government bond markets, as we know them, really only developed widely after World War I, maybe around 1918 onwards.

  • Speaker #1

    So gaps there, too.

  • Speaker #0

    But for stock indices and commodities, the data goes back much, much further. They mentioned UK index data potentially back to 1693 and some commodity prices like sugar back to the 1780s.

  • Speaker #1

    Incredible historical reach. But the big question is, how well do these spot price proxies actually represent what futures would have done if they'd existed? Can we trust the results?

  • Speaker #0

    That's obviously crucial. So they did a validation exercise. They compared the trend following results using futures versus using spot prices during the period when both were available, generally since the early 1980s.

  • Speaker #1

    And what did that comparison show?

  • Speaker #0

    Overall, the correlation was pretty high, 91% between the P&L from the futures strategy and the spot strategy since 1982.

  • Speaker #1

    91%. That's quite close.

  • Speaker #0

    It is. It was a bit lower for commodities, around 65%. And they attribute that difference to the cost to carry, things like storage costs, interest rates, which affect futures, but not spot prices directly.

  • Speaker #1

    Okay, that makes sense for commodities.

  • Speaker #0

    But even with that caveat for commodities, they argue the overall high correlation validates using the spot data for the earlier periods. They even suggest the spot results might be a conservative estimate of what futures might have achieved.

  • Speaker #1

    Right, because futures might offer more trending opportunities sometimes.

  • Speaker #0

    Potentially, or at least different dynamics. So, okay, armed with these proxies, they ran the analysis over the full two centuries. What did those results show?

  • Speaker #1

    Even stronger, somehow. The overall performance shown in their figure four is quite something. The t-statistic for the full period is over 10, specifically 10.5. A t-stat over 10? That's almost unbelievable for financial data over that long a period.

  • Speaker #0

    It's exceptionally high. And even after de-biasing for the simple buy-and-hold contribution, it was still 9.8.

  • Speaker #1

    Still incredibly strong. What about the Sharpe ratio over two centuries?

  • Speaker #0

    It was 0.72. Still very respectable. Slightly lower than the post-1960 futures-only period, but covering a much, much longer and more varied history.

  • Speaker #1

    And how did just buying and holding do over that same two-century period?

  • Speaker #0

    Interestingly, the t-statistic for the passive long-only drift component was 4.6, which is significant. Of course, markets do tend to go up. But it's notably lower than the de-biased trend following t-stat of 9.8.

  • Speaker #1

    So the trend timing component added more statistical significance than just holding the assets.

  • Speaker #0

    That's the implication, yes. Except perhaps for commodities where the drift was stronger.

  • Speaker #1

    And did this two-century performance hold up across the different asset classes too? Yes,

  • Speaker #0

    they found significant performance on each individual sector. The raw T-stats were all above 2.9 and the de-biased ones were all above 2.7. So consistently positive across the board.

  • Speaker #1

    And the decade by decade consistency you mentioned for the post 1960 period, did that hold up over 200 years?

  • Speaker #0

    This is maybe the most remarkable chart, figure seven in the paper. It shows the rolling performance and based on their analysis, the strategy showed positive performance in every single decade across those two centuries.

  • Speaker #1

    Every decade, including like the Great Depression, World Wars, stagflation.

  • Speaker #0

    According to their backtest using this methodology, yes, positive P&L contribution in every 10-year block.

  • Speaker #1

    That kind of resilience is hard to wrap your head around. Okay, the paper also mentioned something about a saturation effect. What's that about?

  • Speaker #0

    Right. So they didn't just look at the final P&L. They also analyzed the relationship between the strength of their trend signal and the actual price change that happened in the next month.

  • Speaker #1

    Okay. Trying to see if a stronger signal led to a proportionally bigger price move.

  • Speaker #0

    Exactly. And they found evidence that it didn't, really. The relationship wasn't linear. Very strong signals didn't seem to predict proportionally larger future moves; the effect kind of flattened out.

  • Speaker #1

    So the trend sort of runs out of steam when it gets really extreme.

  • Speaker #0

    That's one interpretation. They modeled this using a hyperbolic tangent function, which basically captures this idea of saturation, an S-curve shape. It fit the data better than a simple straight line or even a cubic relationship.
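
The fitted shape is easy to picture: for small signals tanh is nearly linear, and for extreme signals it flattens out. A tiny sketch, with scale parameters a and b that are our own illustrative choices rather than the paper's fitted values:

```python
import numpy as np

def saturating_response(signal, a=1.0, b=1.0):
    """S-curve map from signal strength to expected next-month move:
    roughly linear (slope a*b) for small signals, capped at +/- a for
    extreme ones. a and b are illustrative, not fitted, values."""
    return a * np.tanh(b * np.asarray(signal, dtype=float))
```

The cap at plus or minus a is exactly the saturation effect: beyond some point, a stronger trend signal buys you no extra predicted move.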

  • Speaker #1

    Why might that happen? Why would strong trends saturate?

  • Speaker #0

    Their hypothesis involves fundamentalist traders. The idea is that when a trend becomes very strong, very stretched, perhaps other traders who look more at underlying value step in and start pushing back, resisting the trend, which causes it to level off or saturate.

  • Speaker #1

    Interesting. So a sort of mean reversion kicking in at the extremes driven by other market participants.

  • Speaker #0

    Possibly, yes. It suggests the market dynamics aren't purely trend following all the time.

  • Speaker #1

    Now we have to address the elephant in the room. Trend following, especially systematic trend following by CTAs, has had a tougher time more recently, say post-2011 or so. Did the paper touch on that?

  • Speaker #0

    Yes, they definitely acknowledge that. They show a chart, figure six, which does illustrate a relative plateau in the aggregated trend performance in the years leading up to the paper's publication. There was a lot of discussion around that time about overcrowding or the strategy just... not working as well.

  • Speaker #1

    Right. The whole death of trend narrative.

  • Speaker #0

    Exactly. But they immediately put this recent period into the context of their two century back test.

  • Speaker #1

    How does it look in that longer view?

  • Speaker #0

    Well, figure seven, the one showing decade by decade performance, makes it clear that similar periods of flat or lower performance have happened before. They point to the 1940s as another example.

  • Speaker #1

    Ah, so maybe the recent struggles aren't unprecedented in the very long run.

  • Speaker #0

    That's the argument they make. They emphasize that even with recent weaker periods, the cumulative 10-year performance in their backtest has never been negative over the entire two centuries.

  • Speaker #1

    Never negative over any 10-year window in 200 years. That's quite a statement.

  • Speaker #0

    It is. And they contrast this long-term persistence with the fate of much shorter-term trend following. They show another chart, figure 8, suggesting that trends over very short horizons, like three days, seem to have decayed significantly, especially since the 1990s. So,

  • Speaker #1

    a divergence. long-term trend holding up while short-term trend fades.

  • Speaker #0

    That seems to be what their data indicates. The multi-month trend signal they focused on shows this remarkable long-term persistence, whereas very fast mean reversion might be impacting the shortest time scales.

  • Speaker #1

    Fascinating. Okay, so wrapping this up, what's the main message you think listeners should take away from this deep dive into the paper?

  • Speaker #0

    I think the key takeaway is the sheer weight of evidence they present for trend following as they defined it being a persistent, statistically significant phenomenon. not just for decades, but potentially for centuries and across diverse markets.

  • Speaker #1

    That long-term robustness, even with recent bumps, is the core story.

  • Speaker #0

    Absolutely. And the analysis of the saturation effect adds another layer, suggesting market dynamics might be more complex than simple linear trends. The contrast between the resilience of long-term trends and the apparent decay of short-term ones is also really thought-provoking.

  • Speaker #1

    It definitely is. Which leads to a final thought for you, the listener. Given this long history and robustness of long-term trend following shown in the paper, what could really be driving the challenges seen more recently in shorter-term trend strategies? And what might this divergence tell us about how market anomalies and trading itself are evolving? Something to ponder.

  • Speaker #0

    A very relevant question for today's markets.

  • Speaker #1

    Thank you for tuning in to Papers with Backtest podcast. We hope today's episode gave you useful insights. Join us next time as we break down more research. And for more papers and backtests, find us at https://paperswithbacktest.com. Happy trading!

Chapters

  • Introduction to Trend Following Research

    00:03

  • Main Findings of the Paper

    00:20

  • Methodology of Trend Following Strategy

    00:41

  • Trading Rules and Signal Calculation

    02:23

  • Asset Classes and Data Sources

    04:37

  • Long-Term Performance Analysis

    09:10

  • Discussion on Saturation Effect

    11:03

  • Key Takeaways and Conclusion

    14:00

Description

Can trend following strategies truly outperform random chance in the world of algorithmic trading? Join us in this enlightening episode of Papers With Backtest: An Algorithmic Trading Journey as we dissect the groundbreaking research paper 'Two Centuries of Trend Following' authored by L'Imperiere, Durambol, Seeger, Potters, and Bouchot from Capital Fund Management. This episode dives deep into the statistical significance of trend following, revealing a T-statistic of 5.9 since 1960 and an astonishing nearly 10 over the last two centuries—strong evidence that these strategies are not mere products of luck.


Discover how the authors meticulously analyzed data spanning four major asset classes: commodities, currencies, stock indices, and bonds, utilizing futures data from 1960 and spot price proxies dating back to 1800. We unpack their innovative methodology, which employs exponential moving averages to identify trend signals, allowing for a comprehensive understanding of how these strategies perform across various asset classes and time periods.


Throughout the discussion, we explore the implications of a saturation effect in trend strength, shedding light on the critical differences between long-term and short-term trend strategies. As the financial landscape evolves, understanding these dynamics becomes increasingly vital for traders looking to enhance their algorithmic trading approaches.


Despite the challenges posed by recent market fluctuations, our analysis underscores the robustness of trend following strategies. We highlight the key findings from the paper that suggest not only the efficacy of these methods but also their relevance in today’s trading environment. Whether you're an experienced trader or new to algorithmic trading, this episode is packed with insights that can sharpen your trading acumen.


Join us as we navigate through the complexities of trend following and its implications for future trading strategies. With a focus on empirical data and rigorous analysis, this episode is essential listening for anyone serious about mastering the art of algorithmic trading. Tune in to Papers With Backtest and equip yourself with the knowledge to elevate your trading game!


Hosted by Ausha. See ausha.co/privacy-policy for more information.

Transcription

  • Speaker #0

    Hello. Welcome back to Papers with Backtest podcast. Today, we dive into another algo trading research paper.

  • Speaker #1

    Hi there. Yes. Today, we're looking at a really interesting one called Two Centuries of Trend Following. It's by L'Imperiere, Durambol, Seeger, Potters, and Bouchot. They're all from Capital Fund Management or CFM. CFM,

  • Speaker #0

    right. Okay. So Two Centuries of Trend Following. Let's unpack that. What's the core question they're tackling here?

  • Speaker #1

    Well, fundamentally, they're asking, Does trend following actually work? You know, the basic idea of buying what's going up and selling what's going down. Does that generate anomalous returns?

  • Speaker #0

    Anomalous meaning like better than you just expect by chance.

  • Speaker #1

    Exactly. And they looked across four big asset classes, commodities, currencies, stock indices and bonds. OK,

  • Speaker #0

    standard stuff. But the two centuries part, that sounds pretty ambitious.

  • Speaker #1

    It really is. They use futures data going back to 1960, which is already quite a long period. Yeah. But then using spot price data as proxies, they managed to. pushed the analysis back for some series all the way to 1800. Wow. 1800. That's incredible. So what was the big headline finding from all that digging?

  • Speaker #0

    The main takeaway, and this is probably what should grab your attention most, is that they argue trend following is one of the most statistically significant anomalies out there.

  • Speaker #1

    Really? More than others we hear about.

  • Speaker #0

    Well, the numbers are pretty compelling. They found a T statistic of around five just since 1960. And if you go back across the full two centuries, that T-stat jumps to nearly 10.

  • Speaker #1

    10. Okay, for listeners not deep into stats, a high T-stat basically means it's very unlikely the results are just random luck, right?

  • Speaker #0

    Precisely. Super unlikely. And importantly, this holds true even after the account for the general upward drift you see in many markets over time.

  • Speaker #1

    So it's not just about being long in a rising market. There's something specific to the trend signal itself.

  • Speaker #0

    That's what the data suggests. Its persistence across such a long time span and across different types of assets is what makes it so interesting for anyone like you, the learner, trying to figure out what really drives market behavior.

  • Speaker #1

    OK, so our mission for this deep dive then is to really get into the weeds of how they defined and tested this strategy.

  • Speaker #0

    Exactly. We want to understand the specific trading rules they simulated and look closely at the back tested results they reported over these very long historical periods.

  • Speaker #1

    Great, let's do it. starting with the trading rules, how did they actually define the trend they were following? What was the indicator?

  • Speaker #0

    Okay, so the core was an exponential moving average, an EMA, of past monthly prices. Crucially, they calculated this average excluding the current month's price.

  • Speaker #1

    Ah, okay. So looking back, but not including the very latest price point in the average itself.

  • Speaker #0

    Right. And this EMA served as their reference level. The actual signal, let's call it S, was then calculated. Oh. They took the difference between the previous month's closing price And that Russian's EMA level.

  • Speaker #1

    Okay, so price versus its recent average. Makes sense.

  • Speaker #0

    But then they divided that difference by a measure of recent volatility.

  • Speaker #1

    Ah, volatility. So adjusting for how choppy the market's been.

  • Speaker #0

    Exactly. The volatility, sigma, was also calculated using an exponential moving average, but this time on the absolute monthly price changes. Same time frame, in months.

  • Speaker #1

    Got it. So the signal strength depends not just on how far the price is from its average, but also on how much prices have been swinging around lately. And this leads to a constant risk approach.

  • Speaker #0

    Yes. The division by volatility means you're sort of scaling the position size based on risk. And remember, this initial analysis is without trading cost.

  • Speaker #1

    OK, clear. And how did they calculate the profit and loss, the P&L, in this simulation?

  • Speaker #0

    So the theoretical quantity they traded for any asset was proportional to the inverse of that volatility measure, sigma. So less volatile assets get a larger theoretical position size. Right.

  • Speaker #1

    Constant risk again.

  • Speaker #0

    Yep. And the sign of the trade, whether they went long or short, was just determined by the sign of that trend signals we just discussed.

  • Speaker #1

    Positive signal go long, negative signal go short. Simple enough.

  • Speaker #0

    Then the P&L for each step was basically the sign of their position times the next price change divided by that initial volatility measure. They just sum those up over time.

  • Speaker #1

    And you mentioned end months for the look back period. Did they stick to one specific end?

  • Speaker #0

    They focused primarily on n equals five months for most of the presented results.

  • Speaker #1

    Five months, okay.

  • Speaker #0

    But they did check other values and stated the results were pretty robust. The main conclusions didn't really change much if they used, say, three months or eight months instead.

  • Speaker #1

    Good to know it wasn't just cherry-picking one specific time frame. Now let's talk about the assets they tested this on, starting with the futures data since 1960. What was in the basket?

  • Speaker #0

    It was a pretty diverse mix. They had seven commodity contracts, things like crude oil, Natural gas, corn, wheat, sugar.

  • Speaker #1

    Yeah, the usual suspects.

  • Speaker #0

    Right. Then seven 10-year government bond contracts from major countries. Australia, Canada, Germany, Japan, Switzerland, UK, US.

  • Speaker #1

    Okay. Global bonds.

  • Speaker #0

    Seven stock index futures, too, from similar developed markets. And finally, six major currency futures contracts. Aussie dollar, Canadian dollar, the marquero, yen, Swiss franc, British pound, all against the U.S. dollar implicitly.

  • Speaker #1

    Right. So good spread across asset classes and geographies. Where did they get all this data from?

  • Speaker #0

    They mentioned using global financial data, GFD, as the source.

  • Speaker #1

    GFD. OK. So they run the five month strategy on this basket from 1960 onwards. What do the overall results look like?

  • Speaker #0

    Pretty impressive, actually. The aggregated P&L shown in their figure one looks fairly steady over time. The overall T statistic came out at 5.9.

  • Speaker #1

    And we said 5.9 is highly significant. What about risk-adjusted returns? Sharp ratio.

  • Speaker #0

    The Sharpe ratio was 0.8, which is quite good for a strategy across multiple asset classes over decades.
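For intuition on how those two numbers relate: for a monthly P&L series, the t-statistic is the mean divided by its standard error, which works out to the annualised Sharpe ratio times the square root of the number of years. A quick check on hypothetical data (the 0.2 drift and 55-year span are illustrative, not the paper's figures):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical monthly P&L with a small positive edge, spanning 55 years
monthly = rng.normal(loc=0.2, scale=1.0, size=12 * 55)

# t-statistic: mean divided by its standard error
t_stat = monthly.mean() / monthly.std(ddof=1) * np.sqrt(len(monthly))
# Annualised Sharpe ratio of the same series
sharpe = monthly.mean() / monthly.std(ddof=1) * np.sqrt(12)

# The two are linked: t-stat equals Sharpe times sqrt(years)
print(np.isclose(t_stat, sharpe * np.sqrt(55)))  # True
```

So a Sharpe of 0.8 sustained over roughly 55 years implies a t-stat of about 0.8 * sqrt(55), which is around 5.9, exactly the ballpark the paper reports.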

  • Speaker #1

    Yeah, 0.8 is solid. But you mentioned earlier they de-biased the results. What was that about?

  • Speaker #0

    Right. So many of these markets, particularly stock indices, tend to drift upwards over the long term. If you're just long, you make money from that drift. They wanted to isolate the profit that came specifically from timing the trends, not just from being exposed to the market's general upward direction. So they statistically removed that long-term drift component.
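One common way to do that kind of de-biasing (a sketch under that assumption; the paper's exact procedure may differ) is to regress the strategy's P&L on the passive long-only P&L and keep the residual:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical monthly market moves with an upward drift, and a strategy
# whose P&L is partly genuine timing and partly plain drift exposure
market = rng.normal(0.05, 1.0, 600)
strategy = 0.1 + 0.3 * market + rng.normal(0.0, 1.0, 600)

# OLS beta of the strategy on the market, then strip out that component
beta = np.cov(strategy, market)[0, 1] / np.var(market, ddof=1)
debiased = strategy - beta * market

t_raw = strategy.mean() / strategy.std(ddof=1) * np.sqrt(len(strategy))
t_debiased = debiased.mean() / debiased.std(ddof=1) * np.sqrt(len(debiased))
```

Whatever t-statistic survives in `debiased` is attributable to timing rather than to simply being long a drifting market.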

  • Speaker #1

    Ah, I see. So peeling away the buy and hold part of the return, did the trend following effect survive that?

  • Speaker #0

    It did. Even after de-biasing, the t-statistic was still 5.0, still very significant.

  • Speaker #1

    Wow. OK. And was the strength consistent? Did it work everywhere all the time?

  • Speaker #0

    That's another key finding. They looked at results broken down by sector (commodities, bonds, indices, currencies) and also decade by decade since 1960. The t-statistic for the raw trend strategy was above 2.1 for every sector in every decade. Even the de-biased t-stats stayed above 1.6 across the board. That suggests real universality.

  • Speaker #1

    That level of consistency is pretty remarkable. OK, now let's get to the really long-term stuff, going back before 1960. How did they manage that?

  • Speaker #0

    Yeah, this is where it gets fascinating. Since futures didn't really exist in the same way back then, they had to use proxies.

  • Speaker #1

    Proxies like what?

  • Speaker #0

    Spot prices mostly. So actual exchange rates for currencies, cash prices for commodities, stock market index levels, and government bond yields or rates for the bond component.

  • Speaker #1

    Okay. Using the underlying cash market data, were there any issues with that data going way back?

  • Speaker #0

    definitely limitations. For currencies, things were complicated before 1973 because of the Bretton Woods fixed exchange rate system and the gold standard before that. So less free-floating trend potential there.

  • Speaker #1

    Right. Fixed rates wouldn't trend much.

  • Speaker #0

    And liquid government bond markets, as we know them, really only developed widely after World War I, maybe around 1918 onwards.

  • Speaker #1

    So gaps there too.

  • Speaker #0

    But for stock indices and commodities, the data goes back much, much further. They mentioned UK index data potentially back to 1693 and some commodity prices like sugar back to the

  • Speaker #1

    1780s. Incredible historical reach. But the big question is, how well do these spot price proxies actually represent what futures would have done if they'd existed? Can we trust the results?

  • Speaker #0

    That's obviously crucial. So they did a validation exercise. They compared the trend following results using futures versus using spot prices during the period when both were available. generally since the early

  • Speaker #1

    1980s. And what did that comparison show?

  • Speaker #0

    Overall, the correlation was pretty high, 91%, between the P&L from the futures strategy and the spot strategy since 1982.

  • Speaker #1

    91%. That's quite close.

  • Speaker #0

    It is. It was a bit lower for commodities, around 65%. And they attribute that difference to the cost of carry, things like storage costs and interest rates, which affect futures but not spot prices directly.
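The validation step itself is conceptually simple: run the same rule on both price series over the overlapping window and correlate the two P&L streams. Here is a toy version with synthetic P&L series that share a common trend-driven component (the 91% and 65% figures are the paper's; nothing here reproduces them):

```python
import numpy as np

rng = np.random.default_rng(4)
# A shared trend-driven component plus strategy-specific noise
# (standing in for effects like the cost of carry)
common = rng.normal(0.0, 1.0, 400)
futures_pnl = common + 0.3 * rng.normal(0.0, 1.0, 400)
spot_pnl = common + 0.3 * rng.normal(0.0, 1.0, 400)

corr = np.corrcoef(futures_pnl, spot_pnl)[0, 1]
print(round(corr, 2))  # close to 1 / (1 + 0.3**2), about 0.92 for this setup
```

The larger the strategy-specific noise relative to the shared component, the lower the correlation, which is one way to read the weaker commodity number.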

  • Speaker #1

    Okay, that makes sense for commodities.

  • Speaker #0

    But even with that caveat for commodities, they argue the overall high correlation validates using the spot data for the earlier periods. They even suggest the spot results might be a conservative estimate of what futures might have achieved.

  • Speaker #1

    Right, because futures might offer more trending opportunities sometimes.

  • Speaker #0

    Potentially, or at least different dynamics. So, okay, armed with these proxies, they ran the analysis over the full two centuries. What did those results show?

  • Speaker #1

    Even stronger, somehow. The overall performance shown in their figure four is quite something. The t-statistic for the full period came in over 10, specifically 10.5. A t-stat over 10 is almost unbelievable for financial data over that long a period.

  • Speaker #0

    It's exceptionally high. And even after de-biasing for the simple buy-and-hold contribution, it was still 9.8.

  • Speaker #1

    Still incredibly strong. What about the Sharpe ratio over two centuries?

  • Speaker #0

    It was 0.72. Still very respectable. Slightly lower than the post-1960 futures-only period, but covering a much, much longer and more varied history.

  • Speaker #1

    And how did just buying and holding do over that same two-century period?

  • Speaker #0

    Interestingly, the t-statistic for the passive long-only drift component was 4.6, which is significant. Of course, markets do tend to go up, but it's notably lower than the de-biased trend-following t-stat of 9.8.

  • Speaker #1

    So the trend timing component added more statistical significance than just holding the assets.

  • Speaker #0

    That's the implication, yes. Except perhaps for commodities where the drift was stronger.

  • Speaker #1

    And did this two-century performance hold up across the different asset classes too? Yes,

  • Speaker #0

    they found significant performance on each individual sector. The raw T-stats were all above 2.9 and the de-biased ones were all above 2.7. So consistently positive across the board.

  • Speaker #1

    And the decade by decade consistency you mentioned for the post 1960 period, did that hold up over 200 years?

  • Speaker #0

    This is maybe the most remarkable chart, figure seven in the paper. It shows the rolling performance, and based on their analysis, the strategy showed positive performance in every single decade across those two centuries.

  • Speaker #1

    Every decade, including like the Great Depression, World Wars, stagflation.

  • Speaker #0

    According to their backtest using this methodology, yes, positive P&L contribution in every 10-year block.

  • Speaker #1

    That kind of resilience is hard to wrap your head around. Okay, the paper also mentioned something about a saturation effect. What's that about?

  • Speaker #0

    Right. So they didn't just look at the final P&L. They also analyzed the relationship between the strength of their trend signal and the actual price change that happened in the next month.

  • Speaker #1

    Okay. Trying to see if a stronger signal led to a proportionally bigger price move.

  • Speaker #0

    Exactly. And they found evidence that it didn't, really. The relationship wasn't linear. Very strong signals didn't seem to predict proportionally larger future moves; the effect kind of flattened out.

  • Speaker #1

    So the trend sort of runs out of steam when it gets really extreme.

  • Speaker #0

    That's one interpretation. They modeled this using a hyperbolic tangent function, which basically captures this idea of saturation, an S-curve shape. It fit the data better than a simple straight line or even a cubic relationship.
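The saturation idea is easy to illustrate: if the true response of next-month moves to the signal is tanh-shaped, a tanh fit will beat a straight line. A synthetic sketch (the 0.3 amplitude and noise level are arbitrary choices, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(2)
# Signal strengths and a saturating (tanh-shaped) response plus noise
s = rng.normal(0.0, 1.5, 5000)
y = 0.3 * np.tanh(s) + rng.normal(0.0, 0.5, 5000)

tanh_s = np.tanh(s)
# Least-squares amplitude for each one-parameter model (both through the origin)
slope = (s @ y) / (s @ s)               # linear model: y ~ slope * s
amp = (tanh_s @ y) / (tanh_s @ tanh_s)  # saturating model: y ~ amp * tanh(s)

mse_linear = np.mean((y - slope * s) ** 2)
mse_tanh = np.mean((y - amp * tanh_s) ** 2)
print(mse_tanh < mse_linear)  # True: the S-curve captures the flattening
```

The linear model has to compromise between the steep middle and the flat tails, which is exactly why a saturating fit wins on this kind of data.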

  • Speaker #1

    Why might that happen? Why would strong trends saturate?

  • Speaker #0

    Their hypothesis involves fundamentalist traders. The idea is that when a trend becomes very strong, very stretched, perhaps other traders who look more at underlying value step in and start pushing back, resisting the trend, which causes it to level off or saturate.

  • Speaker #1

    Interesting. So a sort of mean reversion kicking in at the extremes driven by other market participants.

  • Speaker #0

    Possibly, yes. It suggests the market dynamics aren't purely trend following all the time.

  • Speaker #1

    Now we have to address the elephant in the room. Trend following, especially systematic trend following by CTAs, has had a tougher time more recently, say post-2011 or so. Did the paper touch on that?

  • Speaker #0

    Yes, they definitely acknowledge that. They show a chart, figure six, which does illustrate a relative plateau in the aggregated trend performance in the years leading up to the paper's publication. There was a lot of discussion around that time about overcrowding or the strategy just... not working as well.

  • Speaker #1

    Right. The whole death of trend narrative.

  • Speaker #0

    Exactly. But they immediately put this recent period into the context of their two century back test.

  • Speaker #1

    How does it look in that longer view?

  • Speaker #0

    Well, figure seven, the one showing decade by decade performance, makes it clear that similar periods of flat or lower performance have happened before. They point to the 1940s as another example.

  • Speaker #1

    Ah, so maybe the recent struggles aren't unprecedented in the very long run.

  • Speaker #0

    That's the argument they make. They emphasize that even with recent weaker periods, the cumulative 10-year performance in their backtest has never been negative over the entire two centuries.
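The statistic behind that claim is just a rolling 120-month cumulative P&L. A sketch of how you would check it on any monthly P&L series (the series here is random noise with drift, so unlike the paper's backtest it will not necessarily stay positive everywhere):

```python
import numpy as np

rng = np.random.default_rng(3)
# Stand-in for two centuries of monthly strategy P&L
monthly_pnl = rng.normal(0.15, 1.0, 200 * 12)

window = 120  # 10 years of months
cum = np.cumsum(monthly_pnl)
# Cumulative P&L over every rolling 10-year window
rolling_10y = cum[window:] - cum[:-window]

share_positive = (rolling_10y > 0).mean()
print(f"{share_positive:.0%} of 10-year windows positive")
```

In the paper's figure seven, the analogous share is 100% across the full two centuries.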

  • Speaker #1

    Never negative over any 10-year window in 200 years. That's quite a statement.

  • Speaker #0

    It is. And they contrast this long-term persistence with the fate of much shorter-term trend following. They show another chart, figure 8, suggesting that trends over very short horizons, like three days, seem to have decayed significantly, especially since the 1990s.

  • Speaker #1

    A divergence: long-term trend holding up while short-term trend fades.

  • Speaker #0

    That seems to be what their data indicates. The multi-month trend signal they focused on shows this remarkable long-term persistence, whereas very fast mean reversion might be impacting the shortest time scales.

  • Speaker #1

    Fascinating. Okay, so wrapping this up, what's the main message you think listeners should take away from this deep dive into the paper?

  • Speaker #0

    I think the key takeaway is the sheer weight of evidence they present for trend following, as they defined it, being a persistent, statistically significant phenomenon, not just for decades but potentially for centuries, and across diverse markets.

  • Speaker #1

    That long-term robustness, even with recent bumps, is the core story.

  • Speaker #0

    Absolutely. And the analysis of the saturation effect adds another layer, suggesting market dynamics might be more complex than simple linear trends. The contrast between the resilience of long-term trends and the apparent decay of short-term ones is also really thought-provoking.

  • Speaker #1

    It definitely is. Which leads to a final thought for you, the listener. Given this long history and robustness of long-term trend following shown in the paper, what could really be driving the challenges seen more recently in shorter-term trend strategies? And what might this divergence tell us about how market anomalies and trading itself are evolving? Something to ponder.

  • Speaker #0

    A very relevant question for today's markets.

  • Speaker #1

    Thank you for tuning in to Papers with Backtest podcast. We hope today's episode gave you useful insights. Join us next time as we break down more research. And for more papers and backtests, find us at https://paperswithbacktests.com. Happy trading!

Chapters

  • Introduction to Trend Following Research

    00:03

  • Main Findings of the Paper

    00:20

  • Methodology of Trend Following Strategy

    00:41

  • Trading Rules and Signal Calculation

    02:23

  • Asset Classes and Data Sources

    04:37

  • Long-Term Performance Analysis

    09:10

  • Discussion on Saturation Effect

    11:03

  • Key Takeaways and Conclusion

    14:00


Description

Can trend following strategies truly outperform random chance in the world of algorithmic trading? Join us in this enlightening episode of Papers With Backtest: An Algorithmic Trading Journey as we dissect the groundbreaking research paper 'Two Centuries of Trend Following' authored by Lempérière, Deremble, Seager, Potters, and Bouchaud from Capital Fund Management. This episode dives deep into the statistical significance of trend following, revealing a T-statistic of 5.9 since 1960 and an astonishing nearly 10 over the last two centuries—strong evidence that these strategies are not mere products of luck.


Discover how the authors meticulously analyzed data spanning four major asset classes: commodities, currencies, stock indices, and bonds, utilizing futures data from 1960 and spot price proxies dating back to 1800. We unpack their innovative methodology, which employs exponential moving averages to identify trend signals, allowing for a comprehensive understanding of how these strategies perform across various asset classes and time periods.


Throughout the discussion, we explore the implications of a saturation effect in trend strength, shedding light on the critical differences between long-term and short-term trend strategies. As the financial landscape evolves, understanding these dynamics becomes increasingly vital for traders looking to enhance their algorithmic trading approaches.


Despite the challenges posed by recent market fluctuations, our analysis underscores the robustness of trend following strategies. We highlight the key findings from the paper that suggest not only the efficacy of these methods but also their relevance in today’s trading environment. Whether you're an experienced trader or new to algorithmic trading, this episode is packed with insights that can sharpen your trading acumen.


Join us as we navigate through the complexities of trend following and its implications for future trading strategies. With a focus on empirical data and rigorous analysis, this episode is essential listening for anyone serious about mastering the art of algorithmic trading. Tune in to Papers With Backtest and equip yourself with the knowledge to elevate your trading game!


Hosted by Ausha. See ausha.co/privacy-policy for more information.

Transcription

  • Speaker #0

    Hello. Welcome back to Papers with Backtest podcast. Today, we dive into another algo trading research paper.

  • Speaker #1

    Hi there. Yes. Today, we're looking at a really interesting one called Two Centuries of Trend Following. It's by L'Imperiere, Durambol, Seeger, Potters, and Bouchot. They're all from Capital Fund Management or CFM. CFM,

  • Speaker #0

    right. Okay. So Two Centuries of Trend Following. Let's unpack that. What's the core question they're tackling here?

  • Speaker #1

    Well, fundamentally, they're asking, Does trend following actually work? You know, the basic idea of buying what's going up and selling what's going down. Does that generate anomalous returns?

  • Speaker #0

    Anomalous meaning like better than you just expect by chance.

  • Speaker #1

    Exactly. And they looked across four big asset classes, commodities, currencies, stock indices and bonds. OK,

  • Speaker #0

    standard stuff. But the two centuries part, that sounds pretty ambitious.

  • Speaker #1

    It really is. They use futures data going back to 1960, which is already quite a long period. Yeah. But then using spot price data as proxies, they managed to. pushed the analysis back for some series all the way to 1800. Wow. 1800. That's incredible. So what was the big headline finding from all that digging?

  • Speaker #0

    The main takeaway, and this is probably what should grab your attention most, is that they argue trend following is one of the most statistically significant anomalies out there.

  • Speaker #1

    Really? More than others we hear about.

  • Speaker #0

    Well, the numbers are pretty compelling. They found a T statistic of around five just since 1960. And if you go back across the full two centuries, that T-stat jumps to nearly 10.

  • Speaker #1

    10. Okay, for listeners not deep into stats, a high T-stat basically means it's very unlikely the results are just random luck, right?

  • Speaker #0

    Precisely. Super unlikely. And importantly, this holds true even after the account for the general upward drift you see in many markets over time.

  • Speaker #1

    So it's not just about being long in a rising market. There's something specific to the trend signal itself.

  • Speaker #0

    That's what the data suggests. Its persistence across such a long time span and across different types of assets is what makes it so interesting for anyone like you, the learner, trying to figure out what really drives market behavior.

  • Speaker #1

    OK, so our mission for this deep dive then is to really get into the weeds of how they defined and tested this strategy.

  • Speaker #0

    Exactly. We want to understand the specific trading rules they simulated and look closely at the back tested results they reported over these very long historical periods.

  • Speaker #1

    Great, let's do it. starting with the trading rules, how did they actually define the trend they were following? What was the indicator?

  • Speaker #0

    Okay, so the core was an exponential moving average, an EMA, of past monthly prices. Crucially, they calculated this average excluding the current month's price.

  • Speaker #1

    Ah, okay. So looking back, but not including the very latest price point in the average itself.

  • Speaker #0

    Right. And this EMA served as their reference level. The actual signal, let's call it S, was then calculated. Oh. They took the difference between the previous month's closing price And that Russian's EMA level.

  • Speaker #1

    Okay, so price versus its recent average. Makes sense.

  • Speaker #0

    But then they divided that difference by a measure of recent volatility.

  • Speaker #1

    Ah, volatility. So adjusting for how choppy the market's been.

  • Speaker #0

    Exactly. The volatility, sigma, was also calculated using an exponential moving average, but this time on the absolute monthly price changes. Same time frame, in months.

  • Speaker #1

    Got it. So the signal strength depends not just on how far the price is from its average, but also on how much prices have been swinging around lately. And this leads to a constant risk approach.

  • Speaker #0

    Yes. The division by volatility means you're sort of scaling the position size based on risk. And remember, this initial analysis is without trading cost.

  • Speaker #1

    OK, clear. And how did they calculate the profit and loss, the P&L, in this simulation?

  • Speaker #0

    So the theoretical quantity they traded for any asset was proportional to the inverse of that volatility measure, sigma. So less volatile assets get a larger theoretical position size. Right.

  • Speaker #1

    Constant risk again.

  • Speaker #0

    Yep. And the sign of the trade, whether they went long or short, was just determined by the sign of that trend signals we just discussed.

  • Speaker #1

    Positive signal go long, negative signal go short. Simple enough.

  • Speaker #0

    Then the P&L for each step was basically the sign of their position times the next price change divided by that initial volatility measure. They just sum those up over time.

  • Speaker #1

    And you mentioned end months for the look back period. Did they stick to one specific end?

  • Speaker #0

    They focused primarily on n equals five months for most of the presented results.

  • Speaker #1

    Five months, okay.

  • Speaker #0

    But they did check other values and stated the results were pretty robust. The main conclusions didn't really change much if they used, say, three months or eight months instead.

  • Speaker #1

    Good to know it wasn't just cherry-picking one specific time frame. Now let's talk about the assets they tested this on, starting with the futures data since 1960. What was in the basket?

  • Speaker #0

    It was a pretty diverse mix. They had seven commodity contracts, things like crude oil, Natural gas, corn, wheat, sugar.

  • Speaker #1

    Yeah, the usual suspects.

  • Speaker #0

    Right. Then seven 10-year government bond contracts from major countries. Australia, Canada, Germany, Japan, Switzerland, UK, US.

  • Speaker #1

    Okay. Global bonds.

  • Speaker #0

    Seven stock index futures, too, from similar developed markets. And finally, six major currency futures contracts. Aussie dollar, Canadian dollar, the marquero, yen, Swiss franc, British pound, all against the U.S. dollar implicitly.

  • Speaker #1

    Right. So good spread across asset classes and geographies. Where did they get all this data from?

  • Speaker #0

    They mentioned using global financial data, GFD, as the source.

  • Speaker #1

    GFD. OK. So they run the five month strategy on this basket from 1960 onwards. What do the overall results look like?

  • Speaker #0

    Pretty impressive, actually. The aggregated P&L shown in their figure one looks fairly steady over time. The overall T statistic came out at 5.9.

  • Speaker #1

    And we said 5.9 is highly significant. What about risk-adjusted returns? Sharp ratio.

  • Speaker #0

    The sharp ratio was 0.8, which is quite good for a strategy across multiple asset classes over decades.

  • Speaker #1

    Yeah, 0.8 is solid. But you mentioned earlier they de-biased the results. What was that about?

  • Speaker #0

    Right. So many of these markets, particularly stock indices, tend to drift upwards over the long term. If you're just long, you make money from that drift. They wanted to isolate the profit that came specifically from timing the trends, not just from being exposed to the market's general upper direction. So they statistically remove that long-term drift component.

  • Speaker #1

    Ah, I see. So peeling away the buy and hold part of the return, did the trend following effect survive that?

  • Speaker #0

    It did. Even after de-biasing, the t-statistic was still 5.0, still very significant.

  • Speaker #1

    Wow. OK. And was the strength consistent? Did it work everywhere all the time?

  • Speaker #0

    That's another key finding. They looked at results broken down by sector commodities, bonds, indices, currencies, and also decade by decade since 1960. The P statistic for the raw trend strategy was above 2.1 for every sector in every decade. Even the Debiased T-Stats stayed above 1.6 across the board. That suggests real universality.

  • Speaker #1

    That level of consistency is pretty remarkable. OK, now let's get to the really long-term stuff going back. before 1960. How did they manage that?

  • Speaker #0

    Yeah, this is where it gets fascinating. Since futures didn't really exist in the same way back then, they had to use proxies.

  • Speaker #1

    Proxies like what?

  • Speaker #0

    Spot prices mostly. So actual exchange rates for currencies, cash prices for commodities, stock market index levels, and government bond yields or rates for the bond component.

  • Speaker #1

    Okay. Using the underlying cash market data, were there any issues with that data going way back? Oh,

  • Speaker #0

    definitely limitations. For currencies, things were complicated before 1973 because of the Bretton Woods fixed exchange rate system and the gold standard before that. So less free-floating trend potential there.

  • Speaker #1

    Right. Fixed rates wouldn't trend much.

  • Speaker #0

    And liquid government bond markets, as we know them, really only developed widely after World War I, maybe around 1918 onwards.

  • Speaker #1

    So BAP's there too.

  • Speaker #0

    But for stock indices and commodities, the data goes back much, much further. They mentioned UK index data potentially back to 1693 and some commodity prices like sugar back to the

  • Speaker #1

    1780s. Incredible historical reach. But the big question is, how well do these spot price proxies actually represent what futures would have done if they'd existed? Can we trust the results?

  • Speaker #0

    That's obviously crucial. So they did a validation exercise. They compared the trend following results using futures versus using spot prices during the period when both were available. generally since the early

  • Speaker #1

    1980s. And what did that comparison show?

  • Speaker #0

    Overall, the correlation was pretty high, 91% between the P&L from the future strategy and the spot strategy since 1982.

  • Speaker #1

    91%. That's quite close.

  • Speaker #0

    It is. It was a bit lower for commodities, around 65%. And they attribute that difference to the cost to carry, things like storage costs, interest rates, which affect futures, but not spot prices directly.

  • Speaker #1

    Okay, that makes sense for commodities.

  • Speaker #0

    But even with that caveat for commodities, they argue the overall high correlation validates using the spot data for the earlier periods. They even suggest the spot results might be a conservative estimate of what futures might have achieved.

  • Speaker #1

    Right, because futures might offer more trending opportunities sometimes.

  • Speaker #0

    Potentially, or at least different dynamics. So, okay, armed with these proxies, they ran the analysis over the full two centuries. What did those results show?

  • Speaker #1

    Even stronger, somehow. The overall performance shown in their figure four is quite something. The t statistic for the full period over 10, specifically 10.5, a t stat over 10. That's almost unbelievable for financial data over that long a period.

  • Speaker #0

    It's exceptionally high. And even after de-biasing for the simple buy and hold contribution was still 9.8.

  • Speaker #1

    Still incredibly strong. What about the Sharpe ratio over two centuries?

  • Speaker #0

    It was 0.72. Still very respectable. Slightly lower than the post-1960 futures-only period, but covering a much, much longer and more varied history.

  • Speaker #1

    And how did just buying and holding do over that same two-century period?

  • Speaker #0

    Interestingly, the T-statistic for the passive long-only drift component was 4.6, which is significant. Of course, markets do tend to go up, but it's notably lower than the de-biased trend following T-STAT of 9.8.

  • Speaker #1

    So the trend timing component added more statistical significance than just holding the assets.

  • Speaker #0

    That's the implication, yes. Except perhaps for commodities where the drift was stronger.

  • Speaker #1

    And did this two-century performance hold up across the different asset classes too? Yes,

  • Speaker #0

    they found significant performance on each individual sector. The raw T-stats were all above 2.9 and the de-biased ones were all above 2.7. So consistently positive across the board.

  • Speaker #1

    And the decade by decade consistency you mentioned for the post 1960 period, did that hold up over 200 years?

  • Speaker #0

    This is maybe the most remarkable chart, figure seven in the paper. It shows the rolling performance and based on their analysis, the strategy showed positive performance in every single decade across those two centuries.

  • Speaker #1

    Every decade, including like the Great Depression, World Wars, stagflation.

  • Speaker #0

    According to their backtest using this methodology, yes, positive P&L contribution. in every 10-year block.

  • Speaker #1

    That kind of resilience is hard to wrap your head around. Okay, the paper also mentioned something about a saturation effect. What's that about?

  • Speaker #0

    Right. So they didn't just look at the final P&L. They also analyzed the relationship between the strength of their trend signal- Mess. And the actual price change that happened in the next month.

  • Speaker #1

    Okay. Trying to see if a stronger signal led to a proportionally bigger price move.

  • Speaker #0

    Exactly. And they found evidence that it didn't, really. The relationship wasn't linear. Very strong signals, didn't seem to predict. proportionally larger future moves, the effect kind of flattened out.

  • Speaker #1

    So the trend sort of runs out of steam when it gets really extreme.

  • Speaker #0

    That's one interpretation. They modeled this using a hyperbolic tangent function, which basically captures this idea of saturation and S-curve shape. It fit the data better than a simple straight line or even a cubic relationship.

  • Speaker #1

    Why might that happen? Why would strong trends saturate?

  • Speaker #0

    Their hypothesis involves fundamentalist traders. The idea is that when a trend becomes very strong, very stretched, perhaps other traders who look more at underlying value step in and start pushing back, resisting the trend, which causes it to level off or saturate.

  • Speaker #1

    Interesting. So a sort of mean reversion kicking in at the extremes driven by other market participants.

  • Speaker #0

    Possibly, yes. It suggests the market dynamics aren't purely trend following all the time.

  • Speaker #1

    Now we have to address the elephant in the room. Trend following, especially systematic trend following by CTAs, has had a tougher time more recently, say post-2011 or so. Did the paper touch on that?

  • Speaker #0

    Yes, they definitely acknowledge that. They show a chart, figure six, which does illustrate a relative plateau in the aggregated trend performance in the years leading up to the paper's publication. There was a lot of discussion around that time about overcrowding or the strategy just... not working as well.

Description

Can trend following strategies truly outperform random chance in the world of algorithmic trading? Join us in this enlightening episode of Papers With Backtest: An Algorithmic Trading Journey as we dissect the groundbreaking research paper 'Two Centuries of Trend Following' authored by Lempérière, Deremble, Seager, Potters, and Bouchaud from Capital Fund Management. This episode dives deep into the statistical significance of trend following, revealing a t-statistic of 5.9 since 1960 and an astonishing nearly 10 over the last two centuries: strong evidence that these strategies are not mere products of luck.


Discover how the authors meticulously analyzed data spanning four major asset classes: commodities, currencies, stock indices, and bonds, utilizing futures data from 1960 and spot price proxies dating back to 1800. We unpack their innovative methodology, which employs exponential moving averages to identify trend signals, allowing for a comprehensive understanding of how these strategies perform across various asset classes and time periods.


Throughout the discussion, we explore the implications of a saturation effect in trend strength, shedding light on the critical differences between long-term and short-term trend strategies. As the financial landscape evolves, understanding these dynamics becomes increasingly vital for traders looking to enhance their algorithmic trading approaches.


Despite the challenges posed by recent market fluctuations, our analysis underscores the robustness of trend following strategies. We highlight the key findings from the paper that suggest not only the efficacy of these methods but also their relevance in today’s trading environment. Whether you're an experienced trader or new to algorithmic trading, this episode is packed with insights that can sharpen your trading acumen.


Join us as we navigate through the complexities of trend following and its implications for future trading strategies. With a focus on empirical data and rigorous analysis, this episode is essential listening for anyone serious about mastering the art of algorithmic trading. Tune in to Papers With Backtest and equip yourself with the knowledge to elevate your trading game!


Hosted by Ausha. See ausha.co/privacy-policy for more information.

Transcription

  • Speaker #0

    Hello. Welcome back to Papers with Backtest podcast. Today, we dive into another algo trading research paper.

  • Speaker #1

    Hi there. Yes. Today, we're looking at a really interesting one called Two Centuries of Trend Following. It's by Lempérière, Deremble, Seager, Potters, and Bouchaud. They're all from Capital Fund Management or CFM. CFM,

  • Speaker #0

    right. Okay. So Two Centuries of Trend Following. Let's unpack that. What's the core question they're tackling here?

  • Speaker #1

    Well, fundamentally, they're asking, Does trend following actually work? You know, the basic idea of buying what's going up and selling what's going down. Does that generate anomalous returns?

  • Speaker #0

    Anomalous meaning like better than you just expect by chance.

  • Speaker #1

    Exactly. And they looked across four big asset classes, commodities, currencies, stock indices and bonds. OK,

  • Speaker #0

    standard stuff. But the two centuries part, that sounds pretty ambitious.

  • Speaker #1

    It really is. They use futures data going back to 1960, which is already quite a long period. Yeah. But then, using spot price data as proxies, they managed to push the analysis back for some series all the way to 1800. Wow. 1800. That's incredible. So what was the big headline finding from all that digging?

  • Speaker #0

    The main takeaway, and this is probably what should grab your attention most, is that they argue trend following is one of the most statistically significant anomalies out there.

  • Speaker #1

    Really? More than others we hear about.

  • Speaker #0

    Well, the numbers are pretty compelling. They found a T statistic of around five just since 1960. And if you go back across the full two centuries, that T-stat jumps to nearly 10.

  • Speaker #1

    10. Okay, for listeners not deep into stats, a high T-stat basically means it's very unlikely the results are just random luck, right?

  • Speaker #0

    Precisely. Super unlikely. And importantly, this holds true even after they account for the general upward drift you see in many markets over time.
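For intuition, a t-statistic here is just the mean of the strategy's P&L series divided by its standard error. A minimal stdlib-Python sketch (the toy return series is purely illustrative, not from the paper):

```python
import math

def t_statistic(pnl):
    """Mean of a P&L series over its standard error.
    Roughly: above ~2 is conventionally significant; 5+ is very
    unlikely to be luck, and ~10 is extraordinary."""
    n = len(pnl)
    mean = sum(pnl) / n
    var = sum((x - mean) ** 2 for x in pnl) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Toy monthly returns with a small positive edge (illustrative only).
returns = [0.01, -0.005, 0.02, 0.004, -0.002, 0.015, 0.007, -0.001]
print(round(t_statistic(returns), 2))
```

Because the standard error shrinks with the square root of the sample size, two centuries of data can push the t-stat very high even at a moderate Sharpe ratio.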

  • Speaker #1

    So it's not just about being long in a rising market. There's something specific to the trend signal itself.

  • Speaker #0

    That's what the data suggests. Its persistence across such a long time span and across different types of assets is what makes it so interesting for anyone like you, the learner, trying to figure out what really drives market behavior.

  • Speaker #1

    OK, so our mission for this deep dive then is to really get into the weeds of how they defined and tested this strategy.

  • Speaker #0

    Exactly. We want to understand the specific trading rules they simulated and look closely at the back tested results they reported over these very long historical periods.

  • Speaker #1

    Great, let's do it. Starting with the trading rules, how did they actually define the trend they were following? What was the indicator?

  • Speaker #0

    Okay, so the core was an exponential moving average, an EMA, of past monthly prices. Crucially, they calculated this average excluding the current month's price.

  • Speaker #1

    Ah, okay. So looking back, but not including the very latest price point in the average itself.

  • Speaker #0

    Right. And this EMA served as their reference level. The actual signal, let's call it S, was then calculated: they took the difference between the previous month's closing price and that reference EMA level.

  • Speaker #1

    Okay, so price versus its recent average. Makes sense.

  • Speaker #0

    But then they divided that difference by a measure of recent volatility.

  • Speaker #1

    Ah, volatility. So adjusting for how choppy the market's been.

  • Speaker #0

    Exactly. The volatility, sigma, was also calculated using an exponential moving average, but this time on the absolute monthly price changes. Same time frame, n months.

  • Speaker #1

    Got it. So the signal strength depends not just on how far the price is from its average, but also on how much prices have been swinging around lately. And this leads to a constant risk approach.

  • Speaker #0

    Yes. The division by volatility means you're sort of scaling the position size based on risk. And remember, this initial analysis is without trading cost.

  • Speaker #1

    OK, clear. And how did they calculate the profit and loss, the P&L, in this simulation?

  • Speaker #0

    So the theoretical quantity they traded for any asset was proportional to the inverse of that volatility measure, sigma. So less volatile assets get a larger theoretical position size. Right.

  • Speaker #1

    Constant risk again.

  • Speaker #0

    Yep. And the sign of the trade, whether they went long or short, was just determined by the sign of that trend signal S we just discussed.

  • Speaker #1

    Positive signal go long, negative signal go short. Simple enough.

  • Speaker #0

    Then the P&L for each step was basically the sign of their position times the next price change divided by that initial volatility measure. They just sum those up over time.
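Putting those pieces together, here is a rough Python sketch of the rules as described: an EMA reference that excludes the latest month, a volatility-normalized signal, and sign-times-next-move P&L. This is an illustrative reconstruction under assumed conventions (e.g. the EMA decay parameterization), not the authors' actual code:

```python
import math

def ema(values, n):
    """Exponential moving average with an n-month span
    (alpha = 2/(n+1) is an assumed convention; the paper's
    exact decay parameter may differ)."""
    alpha = 2.0 / (n + 1)
    out = values[0]
    for v in values[1:]:
        out = alpha * v + (1 - alpha) * out
    return out

def trend_signal(prices, n=5):
    """Signal S: the latest monthly close minus the EMA of the
    *prior* prices (latest month excluded), scaled by an EMA of
    absolute monthly price changes over the same time frame."""
    ref = ema(prices[:-1], n)
    changes = [abs(b - a) for a, b in zip(prices, prices[1:])]
    sigma = ema(changes, n) or 1e-9  # guard against zero volatility
    return (prices[-1] - ref) / sigma

def pnl_step(signal, next_change, sigma):
    """One period of P&L: sign of the position times the next price
    change, divided by volatility (constant-risk position sizing)."""
    return math.copysign(1.0, signal) * next_change / sigma
```

With monthly closes trending steadily upward, trend_signal comes out positive, so the rule goes long; a steady downtrend flips it short.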

  • Speaker #1

    And you mentioned n months for the look-back period. Did they stick to one specific n?

  • Speaker #0

    They focused primarily on n equals five months for most of the presented results.

  • Speaker #1

    Five months, okay.

  • Speaker #0

    But they did check other values and stated the results were pretty robust. The main conclusions didn't really change much if they used, say, three months or eight months instead.

  • Speaker #1

    Good to know it wasn't just cherry-picking one specific time frame. Now let's talk about the assets they tested this on, starting with the futures data since 1960. What was in the basket?

  • Speaker #0

    It was a pretty diverse mix. They had seven commodity contracts, things like crude oil, natural gas, corn, wheat, sugar.

  • Speaker #1

    Yeah, the usual suspects.

  • Speaker #0

    Right. Then seven 10-year government bond contracts from major countries. Australia, Canada, Germany, Japan, Switzerland, UK, US.

  • Speaker #1

    Okay. Global bonds.

  • Speaker #0

    Seven stock index futures, too, from similar developed markets. And finally, six major currency futures contracts. Aussie dollar, Canadian dollar, the mark (later the euro), yen, Swiss franc, British pound, all against the U.S. dollar implicitly.

  • Speaker #1

    Right. So good spread across asset classes and geographies. Where did they get all this data from?

  • Speaker #0

    They mentioned using global financial data, GFD, as the source.

  • Speaker #1

    GFD. OK. So they run the five month strategy on this basket from 1960 onwards. What do the overall results look like?

  • Speaker #0

    Pretty impressive, actually. The aggregated P&L shown in their figure one looks fairly steady over time. The overall T statistic came out at 5.9.

  • Speaker #1

    And we said 5.9 is highly significant. What about risk-adjusted returns? Sharpe ratio?

  • Speaker #0

    The Sharpe ratio was 0.8, which is quite good for a strategy across multiple asset classes over decades.

  • Speaker #1

    Yeah, 0.8 is solid. But you mentioned earlier they de-biased the results. What was that about?

  • Speaker #0

    Right. So many of these markets, particularly stock indices, tend to drift upwards over the long term. If you're just long, you make money from that drift. They wanted to isolate the profit that came specifically from timing the trends, not just from being exposed to the market's general upward direction. So they statistically removed that long-term drift component.
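One simple way to picture that de-biasing step: subtract each market's long-term average return before crediting the strategy, so only the timing component remains. A sketch under that assumption (the paper's exact procedure may differ):

```python
def debias(returns):
    """Strip the long-term drift by removing the sample mean return,
    so remaining P&L reflects trend timing rather than a passive
    long exposure. (An assumed simplification of the paper's step.)"""
    mu = sum(returns) / len(returns)
    return [r - mu for r in returns]

raw = [0.02, 0.01, 0.03, -0.01, 0.02]  # drifts upward on average
adjusted = debias(raw)
print(abs(sum(adjusted)) < 1e-12)      # drift removed: sums to ~zero
```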

  • Speaker #1

    Ah, I see. So peeling away the buy and hold part of the return, did the trend following effect survive that?

  • Speaker #0

    It did. Even after de-biasing, the t-statistic was still 5.0, still very significant.

  • Speaker #1

    Wow. OK. And was the strength consistent? Did it work everywhere all the time?

  • Speaker #0

    That's another key finding. They looked at results broken down by sector (commodities, bonds, indices, currencies) and also decade by decade since 1960. The t-statistic for the raw trend strategy was above 2.1 for every sector in every decade. Even the de-biased t-stats stayed above 1.6 across the board. That suggests real universality.

  • Speaker #1

    That level of consistency is pretty remarkable. OK, now let's get to the really long-term stuff going back. before 1960. How did they manage that?

  • Speaker #0

    Yeah, this is where it gets fascinating. Since futures didn't really exist in the same way back then, they had to use proxies.

  • Speaker #1

    Proxies like what?

  • Speaker #0

    Spot prices mostly. So actual exchange rates for currencies, cash prices for commodities, stock market index levels, and government bond yields or rates for the bond component.

  • Speaker #1

    Okay. Using the underlying cash market data, were there any issues with that data going way back? Oh,

  • Speaker #0

    definitely limitations. For currencies, things were complicated before 1973 because of the Bretton Woods fixed exchange rate system and the gold standard before that. So less free-floating trend potential there.

  • Speaker #1

    Right. Fixed rates wouldn't trend much.

  • Speaker #0

    And liquid government bond markets, as we know them, really only developed widely after World War I, maybe around 1918 onwards.

  • Speaker #1

    So gaps there, too.

  • Speaker #0

    But for stock indices and commodities, the data goes back much, much further. They mentioned UK index data potentially back to 1693 and some commodity prices like sugar back to the

  • Speaker #1

    1780s. Incredible historical reach. But the big question is, how well do these spot price proxies actually represent what futures would have done if they'd existed? Can we trust the results?

  • Speaker #0

    That's obviously crucial. So they did a validation exercise. They compared the trend following results using futures versus using spot prices during the period when both were available. generally since the early

  • Speaker #1

    1980s. And what did that comparison show?

  • Speaker #0

    Overall, the correlation was pretty high, 91% between the P&L from the future strategy and the spot strategy since 1982.

  • Speaker #1

    91%. That's quite close.

  • Speaker #0

    It is. It was a bit lower for commodities, around 65%. And they attribute that difference to the cost of carry, things like storage costs, interest rates, which affect futures, but not spot prices directly.

  • Speaker #1

    Okay, that makes sense for commodities.

  • Speaker #0

    But even with that caveat for commodities, they argue the overall high correlation validates using the spot data for the earlier periods. They even suggest the spot results might be a conservative estimate of what futures might have achieved.

  • Speaker #1

    Right, because futures might offer more trending opportunities sometimes.

  • Speaker #0

    Potentially, or at least different dynamics. So, okay, armed with these proxies, they ran the analysis over the full two centuries. What did those results show?

  • Speaker #1

    Even stronger, somehow. The overall performance shown in their figure four is quite something. The t-statistic for the full period is over 10, specifically 10.5. A t-stat over 10? That's almost unbelievable for financial data over that long a period.

  • Speaker #0

    It's exceptionally high. And even after de-biasing for the simple buy-and-hold contribution, it was still 9.8.

  • Speaker #1

    Still incredibly strong. What about the Sharpe ratio over two centuries?

  • Speaker #0

    It was 0.72. Still very respectable. Slightly lower than the post-1960 futures-only period, but covering a much, much longer and more varied history.

  • Speaker #1

    And how did just buying and holding do over that same two-century period?

  • Speaker #0

    Interestingly, the t-statistic for the passive long-only drift component was 4.6, which is significant. Of course, markets do tend to go up, but it's notably lower than the de-biased trend following t-stat of 9.8.

  • Speaker #1

    So the trend timing component added more statistical significance than just holding the assets.

  • Speaker #0

    That's the implication, yes. Except perhaps for commodities where the drift was stronger.

  • Speaker #1

    And did this two-century performance hold up across the different asset classes too? Yes,

  • Speaker #0

    they found significant performance on each individual sector. The raw T-stats were all above 2.9 and the de-biased ones were all above 2.7. So consistently positive across the board.

  • Speaker #1

    And the decade by decade consistency you mentioned for the post 1960 period, did that hold up over 200 years?

  • Speaker #0

    This is maybe the most remarkable chart, figure seven in the paper. It shows the rolling performance and based on their analysis, the strategy showed positive performance in every single decade across those two centuries.

  • Speaker #1

    Every decade, including like the Great Depression, World Wars, stagflation.

  • Speaker #0

    According to their backtest using this methodology, yes, positive P&L contribution. in every 10-year block.

  • Speaker #1

    That kind of resilience is hard to wrap your head around. Okay, the paper also mentioned something about a saturation effect. What's that about?

  • Speaker #0

    Right. So they didn't just look at the final P&L. They also analyzed the relationship between the strength of their trend signal S and the actual price change that happened in the next month.

  • Speaker #1

    Okay. Trying to see if a stronger signal led to a proportionally bigger price move.

  • Speaker #0

    Exactly. And they found evidence that it didn't, really. The relationship wasn't linear. Very strong signals didn't seem to predict proportionally larger future moves; the effect kind of flattened out.

  • Speaker #1

    So the trend sort of runs out of steam when it gets really extreme.

  • Speaker #0

    That's one interpretation. They modeled this using a hyperbolic tangent function, which basically captures this idea of saturation, an S-curve shape. It fit the data better than a simple straight line or even a cubic relationship.
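To see why a saturating curve can beat a straight line, here is a self-contained toy: synthetic signal/next-move pairs that flatten at the extremes, the closed-form best line through the origin, and a crude grid search for a*tanh(b*s). Purely illustrative data, not the paper's fit:

```python
import math

# Synthetic (signal, next-move) pairs that flatten at the extremes,
# mimicking the saturation effect (illustrative data only).
data = [(k / 2.0, math.tanh(k / 2.0)) for k in range(-8, 9)]

def sse(predict):
    """Sum of squared errors of a prediction function over the data."""
    return sum((predict(s) - y) ** 2 for s, y in data)

# Best straight line through the origin, y = c*s (closed form).
c = sum(s * y for s, y in data) / sum(s * s for s, _ in data)
linear_err = sse(lambda s: c * s)

# Crude grid search for the saturating model y = a*tanh(b*s).
tanh_err = min(
    sse(lambda s, a=a, b=b: a * math.tanh(b * s))
    for a in (0.5, 1.0, 1.5)
    for b in (0.5, 1.0, 1.5)
)
print(tanh_err < linear_err)  # prints True: the S-curve fits better
```

On real data the comparison is statistical rather than exact, but the shape of the argument is the same: past a point, a bigger signal stops predicting a proportionally bigger move.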

  • Speaker #1

    Why might that happen? Why would strong trends saturate?

  • Speaker #0

    Their hypothesis involves fundamentalist traders. The idea is that when a trend becomes very strong, very stretched, perhaps other traders who look more at underlying value step in and start pushing back, resisting the trend, which causes it to level off or saturate.

  • Speaker #1

    Interesting. So a sort of mean reversion kicking in at the extremes driven by other market participants.

  • Speaker #0

    Possibly, yes. It suggests the market dynamics aren't purely trend following all the time.

  • Speaker #1

    Now we have to address the elephant in the room. Trend following, especially systematic trend following by CTAs, has had a tougher time more recently, say post-2011 or so. Did the paper touch on that?

  • Speaker #0

    Yes, they definitely acknowledge that. They show a chart, figure six, which does illustrate a relative plateau in the aggregated trend performance in the years leading up to the paper's publication. There was a lot of discussion around that time about overcrowding or the strategy just... not working as well.

  • Speaker #1

    Right. The whole death of trend narrative.

  • Speaker #0

    Exactly. But they immediately put this recent period into the context of their two century back test.

  • Speaker #1

    How does it look in that longer view?

  • Speaker #0

    Well, figure seven, the one showing decade by decade performance, makes it clear that similar periods of flat or lower performance have happened before. They point to the 1940s as another example.

  • Speaker #1

    Ah, so maybe the recent struggles aren't unprecedented in the very long run.

  • Speaker #0

    That's the argument they make. They emphasize that even with recent weaker periods, the cumulative 10-year performance in their backtest has never been negative over the entire two centuries.

  • Speaker #1

    Never negative over any 10-year window in 200 years. That's quite a statement.

  • Speaker #0

    It is. And they contrast this long-term persistence with the fate of much shorter-term trend following. They show another chart, figure 8, suggesting that trends over very short horizons, like three days, seem to have decayed significantly, especially since the 1990s. So,

  • Speaker #1

    A divergence: long-term trend holding up while short-term trend fades.

  • Speaker #0

    That seems to be what their data indicates. The multi-month trend signal they focused on shows this remarkable long-term persistence, whereas very fast mean reversion might be impacting the shortest time scales.

  • Speaker #1

    Fascinating. Okay, so wrapping this up, what's the main message you think listeners should take away from this deep dive into the paper?

  • Speaker #0

    I think the key takeaway is the sheer weight of evidence they present for trend following, as they defined it, being a persistent, statistically significant phenomenon, not just for decades but potentially for centuries, and across diverse markets.

  • Speaker #1

    That long-term robustness, even with recent bumps, is the core story.

  • Speaker #0

    Absolutely. And the analysis of the saturation effect adds another layer, suggesting market dynamics might be more complex than simple linear trends. The contrast between the resilience of long-term trends and the apparent decay of short-term ones is also really thought-provoking.

  • Speaker #1

    It definitely is. Which leads to a final thought for you, the listener. Given this long history and robustness of long-term trend following shown in the paper, what could really be driving the challenges seen more recently in shorter-term trend strategies? And what might this divergence tell us about how market anomalies and trading itself are evolving? Something to ponder.

  • Speaker #0

    A very relevant question for today's markets.

  • Speaker #1

    Thank you for tuning in to Papers with Backtest podcast. We hope today's episode gave you useful insights. Join us next time as we break down more research. And for more papers and backtests, find us at https://paperswithbacktests.com. Happy trading!

Chapters

  • Introduction to Trend Following Research

    00:03

  • Main Findings of the Paper

    00:20

  • Methodology of Trend Following Strategy

    00:41

  • Trading Rules and Signal Calculation

    02:23

  • Asset Classes and Data Sources

    04:37

  • Long-Term Performance Analysis

    09:10

  • Discussion on Saturation Effect

    11:03

  • Key Takeaways and Conclusion

    14:00
