
Hold your breath and hallucinate: Humanizing machine learning in investment management

Nothing is more universal than breathing and dreaming. Yet, relative to their importance, they have been given little nuanced focus. The same can be said for machine learning in investment management.

Professions related to breathing, like pulmonology, traditionally focus on the maladies of the lungs – not necessarily HOW we breathe or how to improve our capacity1. Similarly, sleep’s importance has been heavily studied and understood – yet dreams are still viewed as “epiphenomena”, merely byproducts, though certainly fun to analyze (like that pesky nightmare of missing your final exam!).

Recent research by neuroscientist Erik Hoel proposes that dreaming is actually analogous to a machine learning process – the brain’s way of augmenting data and creating hallucinatory “out-of-sample” experiences to prevent overfitting of “in-sample” real-world experiences2. This is fascinating – these findings help explain the nuance of the human process that machine learning (e.g., neural networks) tries to imitate.
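The overfitting analogy has a direct counterpart in practice: data augmentation. Here is a minimal, illustrative sketch in Python (all figures synthetic, not real market data) – noise-injected copies of an observed return series play the role of hallucinated “out-of-sample” experiences that a model never saw in the original data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "in-sample" experience: 100 observed daily returns (synthetic)
returns = rng.normal(0.0005, 0.01, size=100)

def augment(data, n_copies=5, noise_scale=0.5):
    """Create hallucinated "out-of-sample" copies by jittering the
    observed data with noise scaled to its own volatility."""
    sigma = data.std()
    copies = [data + rng.normal(0, noise_scale * sigma, size=data.shape)
              for _ in range(n_copies)]
    return np.concatenate([data] + copies)

augmented = augment(returns)
print(len(returns), len(augmented))  # training set grows sixfold
```

A model fit on the augmented set sees plausible variations of history rather than memorizing one realized path – the same intuition Hoel attributes to dreaming.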

Even as artificial intelligence (AI) / machine learning becomes more mainstream (e.g., ChatGPT), it is still misapplied in investing. This TMT serves as an intro (with longer-form content to follow) to how we leverage & humanize machine learning in our research & portfolio construction processes to help alleviate issues, like overfitting, in our ideas and views.

Clarity of thought and better decisions

“Correlation is not causation;” “Quantitative models are backward-looking and overfit to history;” and “Garbage in, garbage out”. These are just a few of the countless sayings that, especially as a quant, can be irksome. Not because they are invalid – but rather, again, they lack nuance. These sentiments are like pulmonologists – focusing on the “bad” – not on how models, even ultimately wrong ones, can help us make better decisions. In his book Beyond Diversification, Sebastien Page, head of Global Multi-Asset at T. Rowe Price, addresses this out of the gate3:

“But our goal is not to describe all the complexities of real life. Rather, we want to provide tools to help decision-making. If the model improves the odds that we’ll make the right decisions, then we should use it.” – JPP

Most, if not all, models are wrong. But at the end of the day, we are looking to improve our capacity to make better decisions – our breath (literally and figuratively) as an investment advisor. Similar to the diversification of a portfolio, at ALPS we start by marrying quantitative models with fundamental research and expertise to reduce reliance on one (type of) model. In what other ways can we improve our capacity to breathe?

Wicked machines

Page correctly notes that “historical data are all we have”, so how can we augment them? We have previously discussed resampling & ergodicity – “dream-like” concepts that help reduce our reliance on historical data and help model potential future scenarios. We will delve into these and related concepts more deeply in subsequent pieces.
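As a taste of the resampling idea, consider this hedged sketch (the return figures are hypothetical, chosen purely for illustration): bootstrapping a short history into thousands of alternative multi-year paths, so decisions rest on a distribution of outcomes rather than the one path history happened to take:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical annual returns, for illustration only
history = np.array([0.12, -0.05, 0.08, 0.20, -0.15, 0.07, 0.10, 0.03])

def bootstrap_paths(returns, horizon=10, n_paths=10_000):
    """Resample historical returns with replacement to generate
    alternative multi-year paths we never actually lived through."""
    idx = rng.integers(0, len(returns), size=(n_paths, horizon))
    sampled = returns[idx]
    # Compound each resampled path into a terminal growth factor
    return np.prod(1 + sampled, axis=1)

terminal = bootstrap_paths(history)
print(np.percentile(terminal, [5, 50, 95]))  # a range of outcomes, not one history
```

The spread between the 5th and 95th percentile paths is exactly the kind of “dreamed” experience a single historical sample cannot provide.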

For now, it is important HOW we think about applying machine learning techniques. It is easy to fall into the trap of taking out-of-the-box, open-source machine learning models and applying them directly to financial markets – to predicting Asset X’s return over the next day, or quarter, etc. This leads to poorly constructed models because it lacks context, particularly about the type of domain we are playing in. In his book Range, David Epstein discusses the difference between “kind” and “wicked” domains4:

[In] “kind” learning environments, patterns repeat over and over, and feedback is extremely accurate and usually very rapid. In golf or chess, a ball or piece is moved according to rules and within defined boundaries, a consequence is quickly apparent, and similar challenges occur repeatedly…

In wicked domains, the rules of the game are often unclear or incomplete, there may or may not be repetitive patterns and they may not be obvious, and feedback is often delayed, inaccurate, or both. In the most devilishly wicked learning environments, experience will reinforce the exact wrong lessons.

Most open-source libraries are built within kind domains (e.g., facial recognition) – where patterns repeat and feedback is accurate. To apply to the wicked domain of financial markets, we have to dream differently – dare we say “lucid dreaming”.

For us, there is extra emphasis on explainability vs. model performance – how can we traverse the black box and communicate what the model is picking up on vs. eking out an extra point of accuracy? Sometimes this involves iterations of modeling – starting with a complex model, identifying key features, and then using those features in a simplified model. It also includes a dose of creativity.
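One way that iteration can look in code – an illustrative sketch on synthetic data, not our production process: fit a “complex” model across many candidate signals (here a ridge regression stands in for that role), keep the features with the largest coefficients, then refit a simple, explainable model on just those:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic data: 200 observations, 20 candidate signals, only 3 matter
n, p = 200, 20
X = rng.normal(size=(n, p))
true_beta = np.zeros(p)
true_beta[[2, 7, 11]] = [0.8, -0.5, 0.6]
y = X @ true_beta + rng.normal(0, 0.5, size=n)

# Step 1: "complex" model - ridge regression across all 20 signals
lam = 1.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Step 2: identify key features by coefficient magnitude
top = np.argsort(np.abs(beta_ridge))[-3:]

# Step 3: simplified, explainable model using only those features
beta_ols, *_ = np.linalg.lstsq(X[:, top], y, rcond=None)
print(sorted(top.tolist()), beta_ols.round(2))
```

The final three-signal model is something we can actually explain and debate, while the complex first pass did the work of traversing the black box.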

Instead of saying Asset X will return X% vs. benchmark Y%, we may have more confidence in saying that Asset X will outperform the benchmark in this type of market regime (i.e., classification). Or we may be less concerned with individual predictions and more concerned with identifying potential anomalies in the market, such as a few macro drivers explaining an abnormally high share of the market’s variance. This nuanced domain expertise ultimately helps us make better decisions.
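That anomaly-flagging idea can be sketched with principal components: measure how much of total cross-asset variance the top few components explain, and watch for spikes. The data below are synthetic and the driver structure is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily returns for 15 assets over 250 days
n_days, n_assets = 250, 15
common = rng.normal(0, 0.02, size=(n_days, 1))       # one shared macro driver
idio = rng.normal(0, 0.01, size=(n_days, n_assets))  # asset-specific noise
returns = 0.9 * common + idio  # most variance comes from the single driver

def variance_concentration(R, k=3):
    """Share of total variance explained by the top-k principal
    components of the cross-asset return covariance."""
    R = R - R.mean(axis=0)
    cov = np.cov(R, rowvar=False)
    eig = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return eig[:k].sum() / eig.sum()

share = variance_concentration(returns)
print(f"Top-3 components explain {share:.0%} of variance")
# A spike in this share can flag a regime where a handful of
# macro drivers dominate the whole market
```

No individual return forecast is made here – the output is a single regime indicator, which is often the more defensible claim.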


We are here to help clients achieve their dreams so they can breathe easier. However, like dreams, some of our Two Minute Tuesdays get cut short (for good reason!). Be on the lookout for longer-form pieces where you can hopefully find some serendipitous ideas.

Important Disclosures & Definitions

1 Nestor, James. Breath: The New Science of a Lost Art. Riverhead Books, 2020.

2 Hoel, Erik. “The Overfitted Brain: Dreams Evolved to Assist Generalization.” Patterns, 2021.

3 Page, Sebastien. Beyond Diversification: What Every Investor Needs to Know About Asset Allocation, 2021.

4 Epstein, David. Range: Why Generalists Triumph in a Specialized World, 2019.

Performance data quoted represents past performance. Past performance is no guarantee of future results; current performance may be higher or lower than performance quoted.

One may not invest directly in an index.

AAI000192  3/31/2024
