Fact, Fiction and the Stories Told By Data
Mar 25, 2025

The Kickoff
January always feels like a prologue, but now that February is here, the markets are well and truly in motion—no more holiday liquidity lulls, just data, volatility, and the realities of the year taking shape. There's a lot to unpack in this edition, which sets the tone for a busy and analytics-rich year ahead.
The Compass
Here's a rundown of what you can find in this edition:
Our newest additions in members and features
Exclusive insights from our chat with Dimitri Bianco, FRM, Head of Quantitative Risk and Research at Agora Data
A statistical look into why alternative data hasn’t saved hedge funds
Highlights from the latest global news after a tumultuous open to the year
How to tell a better data story
We love Alt Data Weekly, here's why
Insider Trading
We’re starting the year strong with key milestones across product, team, and fundraising. Our dev team is expanding, with a new quant hire joining this month—and more to come. Our pre-seed round is closing successfully at the end of February, fuelling the next stage of growth. Meanwhile, Quanted has now surpassed 2.5 million features, strengthening our platform’s ability to uncover alpha.
On the product side, we’re doubling down on explainability. We're actively working on enhancing our reports to give quants even deeper insights into feature relevance, dataset selection, and model evolution. This includes developing a Cumulative Feature Impact framework to track how performance evolves as features are added, alongside dataset usage insights to highlight the most frequently leveraged sources. We're also exploring advanced visualisations—SHAP values, ICE plots, feature interaction heat-maps, and more—to provide a clearer, more intuitive breakdown of feature importance and model behaviour. These enhancements are in progress, and we’re excited to roll them out as we push towards even more robust and interpretable results.
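To make the Cumulative Feature Impact idea concrete, here is a toy sketch of the underlying mechanic: refit a model on progressively larger feature subsets and track how explanatory power evolves as each feature is added. It is an illustrative simplification (plain OLS, in-sample R-squared, synthetic data), not our production framework.

```python
import numpy as np

def cumulative_feature_impact(X, y, feature_names):
    """Fit OLS on progressively larger feature subsets and track in-sample R^2.

    Returns a list of (feature_name, r_squared_after_adding) pairs, a crude
    proxy for how much each added feature improves explanatory power.
    """
    results = []
    for k in range(1, X.shape[1] + 1):
        # Intercept column plus the first k features
        Xk = np.column_stack([np.ones(len(y)), X[:, :k]])
        beta, *_ = np.linalg.lstsq(Xk, y, rcond=None)
        resid = y - Xk @ beta
        r2 = 1 - resid.var() / y.var()
        results.append((feature_names[k - 1], r2))
    return results

# Toy demo: y depends on the first two features only, so R^2 should
# jump for f1 and f2 and then plateau for f3 and f4.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=500)
for name, r2 in cumulative_feature_impact(X, y, ["f1", "f2", "f3", "f4"]):
    print(f"{name}: R^2 = {r2:.3f}")
```

Plotted as a curve, the plateau is what tells you where additional features stop paying for their complexity.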
January has been a treat—a fast start to what promises to be an exciting year. With momentum on our side, we’re looking forward to continuing upwards and to the right.
The Tradewinds
Expert Exchange
We sat down with Dimitri Bianco, FRM, Head of Quant Risk and Research at Agora Data, Inc., where he develops and validates risk models driving the firm’s AI-powered financial solutions. With over a decade of experience in quant finance, covering credit, operational, market, and PPNR risk, he has worked across every stage of model development—from building and implementation to auditing and refinement.
Beyond Agora, Dimitri is a recognised industry voice having spoken at American Bankers Association events and top AI and quant finance conferences. His passion for the field extends to his YouTube channel, where he offers a candid look at career development, technical skills, and the evolving role of quants. We discussed how quant finance has evolved, the growing complexity of AI/ML, and the importance of collaboration, adaptability, and explainability.
Reflecting on the trajectory of your career in quant research, how has the industry evolved from when you first started and what has been the most memorable moment for you so far?
I work on the sell-side of the quant finance industry, and when I started, implementation of the Dodd-Frank Act was just starting to gain traction at the banks. Bringing in “quants” was a very new concept to many banks on the risk management side; however, throughout my career it has been very rewarding to help shape the standards of model risk management. The rise of machine learning and AI has made my job more challenging. Quant research and quant model development require understanding math, statistics, and finance at a very deep level. This has been a blessing, as new tools are now more readily available, but it has also made the job more challenging, as there is so much more to learn than there was in the past. Also challenging is sorting through the available tools to determine what is right and what is wrong in the context of proper use.
One of the most memorable moments in my career came in 2018 during my time at Santander. I was working on a small team of five people and was in complete flow. The team was so synced together that the quality of work was high while also producing a high amount of output. My mentor Qingchao Li made a large impact on me that year, and I learned many new methods in the credit risk space that I had not seen before, such as reject inference, which expanded my perspective and deepened my expertise in the field.
What were the 3 key skills you honed during your career that elevated your performance most?
The three skills that I have honed over the years, and continue to work on today, are increasing my math intelligence, teaching skills, and the ability to communicate to a wide range of audiences. Math helps in better understanding and testing ideas. Teaching and communication are equally critical for developing a supportive and efficient team, creating positive challenges to ideas from across the business and industry, and gaining buy-in from other parts of the firm.
Could you provide a couple of technical insights or innovations in recent quant research that you believe would be particularly interesting to financial professionals?
The advances and application of reinforcement learning and inverse reinforcement learning to asset allocation have been interesting. Igor Halperin has some interesting research on this topic and a great textbook with code to help understand and test ideas. There could be interesting applications of reinforcement learning in analyzing market competition in exchanges as well as other industries such as the sell-side of auto finance.
What upcoming innovations in predictive analytics excite you the most, and how do you think they will impact the market?
I am most excited for more research and methods on explainability for machine learning models. Models will always have an end user such as a trader, operations staff, investor, or consumer, and gaining trust and buy-in often comes with being able to explain the general ideas and drivers of a model. Many of the machine learning models in use in the industry are just pattern recognition with limited ability to forecast. These models are brittle, as they do not hold up when new data arrives and economic cycles change. By adding more explainability, it will be easier to select models with causal relationships instead of fitting patterns.
How do you see the reliance on external data sources for alpha generation changing the relationship between quants and their model development process?
The reliance on external data sources seems to be a natural evolution of business efficiency. Firms can get an edge from data, modeling methodology and application, or a combination of the two. Collecting data internally could give you an edge; however, hiring and maintaining such a team is expensive, and there is still a risk that others will catch on quickly to your data sources and collection methods. Model development is often slowed down and becomes inefficient, as a lot of time is spent working with data teams to ensure data quality and get data into a simple and usable format. External data sources help reduce model development time and potential errors from data handling.
What is the next major project or initiative you’re working on, and how do you see it improving the domain?
At Agora Data, I will be dedicating more time to a project on quantum modeling for consumer finance. Modeling losses, profits, and consumer behavior naturally follows economic waves, which can be computed with classical computing, but only in a discrete manner. The proper method of computing and modeling waves is to use quantum computing. Applying quantum mechanics should provide more insightful (explainable) models while also increasing model accuracy and robustness.
Before we wrap up, is there anything else you’d like to share with our readers?
I am proud to host our second annual Quaint Quant Conference in the Dallas–Fort Worth area on April 18th. This started two years ago as a coffee meetup to get quants together to share ideas. It turned into the Quaint Quant Conference in 2024, when Agora Data decided to sponsor the meetup and turn it into a conference. For 2025, I am excited to have both Agora Data and the University of Texas at Dallas sponsoring the event. Planning has just begun, and we already have a great list of experts from around the US flying in to present and share ideas. I am still trying to keep the conference feeling quaint, meaning a good place to share ideas and learn from others. Too many conferences feel cold: you show up, listen to some ideas, and leave. There needs to be more collaboration and support in the quant community, and this is the event to help bring the community closer together across the buy side, sell side, and academia.
Numbers & Narratives
The Alternative Data Illusion: Why Most Funds Are Still Behind
Alternative data is now a staple in institutional investing, but most firms still aren’t extracting its full value. A recent Bloomberg survey, which polled 166 portfolio managers and analysts managing $820 billion total in AUM, and the Hedgeweek 2024 report reveal the growing divide between data adoption and real alpha generation:
• 66% of hedge funds use data for portfolio management and risk analysis (Hedgeweek), yet only 28% leverage it for alpha generation. This signals that while data is being incorporated into workflows, few funds have figured out how to turn it into a systematic trading advantage.
• 79% of investment managers cite integrating multiple datasets as their biggest challenge (Bloomberg), making data usability the primary bottleneck. Most firms aren’t short on data—they’re short on structured, tradeable signals.
• 85% of investment managers expect their reliance on third-party software for data analytics to increase over the next five years (Bloomberg), signaling that even larger firms are struggling to manage in-house data infrastructure at scale.
• 75% of funds believe consumer spending data will provide an outsized informational edge (Bloomberg), yet many still lack the tools to extract timely insights from alternative sources before the market prices them in.
• 66% of hedge funds are exploring AI-driven predictive analytics, while 45% are using AI for news and sentiment analysis (Hedgeweek). But AI models are only as good as their inputs—poor data structure means poor predictive power.
The real problem isn’t a lack of data—it’s poor execution. Firms are drowning in raw information but struggling to clean, standardise, and integrate it efficiently.
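One concrete version of that cleaning-and-integration problem is aligning a slow alternative-data feed to market timestamps without lookahead bias. The sketch below, with hypothetical feed names and values, uses pandas' point-in-time `merge_asof` join plus a simple z-score standardisation.

```python
import pandas as pd

# Hypothetical raw feeds: a daily card-spending index and an intraday price series.
spend = pd.DataFrame({
    "date": pd.to_datetime(["2025-01-02", "2025-01-03", "2025-01-06"]),
    "spend_idx": [101.2, 99.8, 103.5],
})
prices = pd.DataFrame({
    "ts": pd.to_datetime(["2025-01-03 14:30", "2025-01-06 15:00", "2025-01-07 10:00"]),
    "px": [50.1, 50.9, 51.3],
})

# Point-in-time join: each price row gets the latest spending print at or
# before its timestamp, avoiding the lookahead bias a naive calendar join risks.
merged = pd.merge_asof(
    prices.sort_values("ts"), spend.sort_values("date"),
    left_on="ts", right_on="date", direction="backward",
)

# Standardise the signal so downstream models see comparable scales.
merged["spend_z"] = (
    merged["spend_idx"] - merged["spend_idx"].mean()
) / merged["spend_idx"].std()
print(merged[["ts", "px", "spend_idx", "spend_z"]])
```

In production the same pattern scales to dozens of feeds; the point is that the join direction and timestamps, not the volume of data, are what make the signal tradeable.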
By now it's clear that the next decade won’t be won by those who simply buy more data—it will belong to firms that can engineer fragmented datasets into structured, actionable intelligence faster than the competition. The winners aren’t chasing more sources. They’re solving the execution gap before everyone else catches up.
Link to the Hedgeweek report
Link to the Bloomberg report
Market Pulse
The DeepSeek Effect
DeepSeek’s 95% cost reduction in AI training and operations has triggered a fundamental repricing of AI infrastructure stocks. The assumption that compute demand scales linearly with AI performance is now under pressure, leading to Nvidia’s ~$600B selloff, the largest single-day loss of market value for any company in history. This shift extends beyond GPUs—ASML (-7%) and Siemens Energy (-20%) saw losses as markets questioned the sustainability of high-CAPEX AI supply chains. A parallel to the 2018 crypto mining glut emerges, where overinvestment in compute infrastructure outpaced real demand, leading to excess supply and compressed margins. However, Jevons Paradox suggests AI adoption could accelerate, shifting the constraint from compute to proprietary data access. Meanwhile, allegations that DeepSeek “distilled” OpenAI’s models raise the risk of AI IP litigation, a factor that could drive sector volatility beyond traditional valuation models. The +8.9% Nvidia recovery and Nasdaq’s +2% bounce indicate that capital is now rotating toward AI firms with data advantages over pure compute plays. Quants must adjust volatility models, weighing legal defensibility alongside compute scaling.
A Closer Look at Outlooks
The 2025 investment outlooks signal a market where sector dispersion, liquidity distortions, and macro shifts challenge traditional quant models. J.P. Morgan's focus on dispersion marks a break from narrow market leadership, suggesting that quants relying on momentum factors may need to recalibrate as AI-driven capital spending reshapes returns. This aligns with Morgan Stanley's concerns over S&P 500 concentration, where reliance on mega-cap tech raises fragility—if leadership shifts, dispersion trades could outperform. Fixed income adds complexity—Goldman Sachs expects rate cuts to support bonds, but Julius Baer warns resilient U.S. growth may keep real yields high, requiring quants to rethink duration exposure. Meanwhile, Deutsche Bank’s focus on fiscal dominance suggests liquidity-sensitive signals need rethinking as government spending overtakes central bank intervention. BlackRock and Franklin Templeton highlight China’s slowdown, raising questions about historical EM risk premia. As policy divergence reshapes pricing inefficiencies, quants must pivot from backward-looking factors to capturing emerging structural distortions in real-time.
Navigational Nudges
Numbers don’t sell strategies—narratives do. In quant finance, the ability to translate complex insights into a compelling story is as important as the insights themselves. The best quants know that uncovering edges is only one piece of the puzzle—making others believe in them is just as important.
Here’s how to tell a better data story:
Use Multiple Lenses
One number never tells the full story. Instead of just showing Sharpe or alpha, present alternative angles. What happens when volatility spikes? Does the model hold across asset classes? How does liquidity impact execution? A single frame is easy to poke holes in—a well-rounded view is much harder to dismiss.
Start with the 'Why'
Most data stories fail because they start with raw numbers. Instead, frame the insight as a problem to solve: “Most momentum models collapse in high-volatility periods—except this one. Here’s why.” That makes people lean in before you even show a single chart.
Make Data Feel Alive
Static backtests bore people. Instead of showing endless tables, bring the numbers to life. Animate how factor exposures shift over time. Highlight when a model’s edge starts fading. Show regime changes dynamically instead of burying them in a spreadsheet. The more intuitive the visuals, the faster your audience will get it.
Own the Uncertainty
Nothing builds credibility faster than admitting risk. Every model has failure points—acknowledge them before someone else does. What assumptions does it rely on? Where does it struggle? What happens in a liquidity crunch? A transparent story is a trustworthy one.
Layer the Complexity
Good storytelling isn’t about simplifying—it’s about structuring complexity. Start with the big idea in one sentence. Then, explain why it matters in a few lines. Only after that, dive into the supporting details. If you drop everything on your audience at once, they’ll tune out before they get to the good part.
A strong model doesn’t sell itself. A strong story does. In quant finance, the way you frame an insight can be the difference between it being ignored or driving real conviction.
The Knowledge Buffet
Alt Data Weekly by John Farrall
There’s a lot of noise in alternative data, but this is one of the few things to look forward to in your inbox every week. It’s carefully and intentionally curated down to what’s actually worth paying attention to beyond the headlines and trends, and that’s why it’s become a regular point of discussion in our office.
The insights often challenge assumptions and surface ideas that aren’t being talked about elsewhere. This is one of those few newsletters that consistently delivers real value, making it a valuable resource for anyone working with data at a high level.
The Closing Bell
If 2025 is your year of identifying valuable data to enrich your models, learn how our tool can help you do that in less than 30 minutes by scheduling an intro call with our founders below.
On the product side, we’re doubling down on explainability. We're actively working on enhancing our reports to give quants even deeper insights into feature relevance, dataset selection, and model evolution. This includes developing a Cumulative Feature Impact framework to track how performance evolves as features are added, alongside dataset usage insights to highlight the most frequently leveraged sources. We're also exploring advanced visualisations—SHAP values, ICE plots, feature interaction heat-maps, and more—to provide a clearer, more intuitive breakdown of feature importance and model behaviour. These enhancements are in progress, and we’re excited to roll them out as we push towards even more robust and interpretable results.
January has been a treat—a fast start to what promises to be an exciting year. With momentum on our side, we’re looking forward to continuing upwards and to the right.
The Tradewinds
Expert Exchange
We sat down with Dimitri Bianco, FRM, Head of Quant Risk and Research at Agora Data, Inc., where he develops and validates risk models driving the firm’s AI-powered financial solutions. With over a decade of experience in quant finance, covering credit, operational, market, and PPNR risk, he has worked across every stage of model development—from building and implementation to auditing and refinement.
Beyond Agora, Dimitri is a recognised industry voice, having spoken at American Bankers Association events and top AI and quant finance conferences. His passion for the field extends to his YouTube channel, where he offers a candid look at career development, technical skills, and the evolving role of quants. We discussed how quant finance has evolved, the growing complexity of AI/ML, and the importance of collaboration, adaptability, and explainability.
Reflecting on the trajectory of your career in quant research, how has the industry evolved from when you first started and what has been the most memorable moment for you so far?
I work on the sell-side of the quant finance industry, and when I started, the implementation of the Dodd-Frank Act was just starting to gain traction at the banks. Bringing in “quants” was a very new concept to many of the banks on the risk management side; throughout my career it has been very rewarding to help shape the standards of model risk management. The rise of machine learning and AI has made my job more challenging. Quant research and quant model development require understanding math, statistics, and finance at a very deep level. New tools being more readily available has been a blessing, but it has also made the job more challenging, as there is so much more to learn than there was in the past. Also challenging is sorting through the available tools to determine what is right and what is wrong in the context of proper use.
One of the most memorable moments in my career came in 2018 during my time at Santander. I was working on a small team of five people and was in complete flow. The team was so synced together that the quality of work was high while also producing a high amount of output. My mentor Qingchao Li made a large impact on me that year, and I learned many new methods in the credit risk space that I had not seen before, such as reject inference, which expanded my perspective and deepened my expertise in the field.
What were the 3 key skills you honed during your career that elevated your performance most?
The three skills that I have honed over the years, and continue to work on today, are increasing my math intelligence, my teaching skills, and my ability to communicate to a wide range of audiences. Math helps in better understanding and testing ideas. Teaching and communication are equally critical for developing a supportive and efficient team, creating positive challenges to ideas from across the business and industry, and gaining buy-in from other parts of the firm.
Could you provide a couple of technical insights or innovations in recent quant research that you believe would be particularly interesting to financial professionals?
The advances in and application of reinforcement learning and inverse reinforcement learning to asset allocation have been interesting. Igor Halperin has some interesting research on this topic and a great textbook with code to help understand and test ideas. There could be interesting applications of reinforcement learning in analyzing market competition in exchanges, as well as in other industries such as the sell-side of auto finance.
What upcoming innovations in predictive analytics excite you the most, and how do you think they will impact the market?
I am most excited for more research and methods on explainability for machine learning models. Models will always have an end user, such as a trader, operations staff, investor, or consumer, and gaining trust and buy-in often comes with being able to explain the general ideas and drivers of a model. Many of the machine learning models in use across the industry are just pattern recognition with limited ability to forecast. These models are brittle, as they do not hold up when new data arrives and economic cycles change. With more explainability, it will be easier to select models with causal relationships instead of ones that merely fit patterns.
How do you see the reliance on external data sources for alpha generation changing the relationship between quants and their model development process?
The reliance on external data sources seems to be a natural evolution of business efficiency. Firms can get an edge from data, from modeling methodology and application, or from a combination of the two. Collecting data internally could give you an edge; however, hiring and maintaining such a team is expensive, and there is still a risk that others will catch on quickly to your data sources and collection methods. Model development is often slowed down and becomes inefficient when a lot of time is spent working with data teams to ensure data quality and get data into a simple, usable format. External data sources help reduce model development time and the potential for errors introduced by data handling.
What is the next major project or initiative you’re working on, and how do you see it improving the domain?
At Agora Data, I will be dedicating more time to a project on quantum modeling for consumer finance. Modeling losses, profits, and consumer behavior naturally follows economic waves, which can be computed with classical computing, though only in a discrete manner. The proper method of computing and modeling waves is to use quantum computing. Applying quantum mechanics should provide more insightful (explainable) models while also increasing model accuracy and robustness.
Before we wrap up, is there anything else you’d like to share with our readers?
I am proud to have our second annual Quaint Quant Conference here in the Dallas-Fort Worth area on April 18th. This started two years ago as a coffee meetup to get quants together to share ideas, and it became the Quaint Quant Conference in 2024 when Agora Data decided to sponsor the meetup and turn it into a conference. For 2025 I am excited to have both Agora Data and the University of Texas at Dallas sponsoring the event. Planning has just begun and we already have a great list of experts from around the US flying in to present and share ideas. I am still trying to keep the conference feeling quaint, meaning a good place to share ideas and learn from others. Too many conferences feel cold: you show up, listen to some ideas, and leave. There needs to be more collaboration and support in the quant community, and this is the event to help bring the community closer together across the buy-side, sell-side, and academia.
Numbers & Narratives
The Alternative Data Illusion: Why Most Funds Are Still Behind
Alternative data is now a staple in institutional investing, but most firms still aren’t extracting its full value. A recent Bloomberg survey, which polled 166 portfolio managers and analysts managing $820 billion total in AUM, and the Hedgeweek 2024 report reveal the growing divide between data adoption and real alpha generation:
• 66% of hedge funds use data for portfolio management and risk analysis (Hedgeweek), yet only 28% leverage it for alpha generation. This signals that while data is being incorporated into workflows, few funds have figured out how to turn it into a systematic trading advantage.
• 79% of investment managers cite integrating multiple datasets as their biggest challenge (Bloomberg), making data usability the primary bottleneck. Most firms aren’t short on data—they’re short on structured, tradeable signals.
• 85% of investment managers expect their reliance on third-party software for data analytics to increase over the next five years (Bloomberg), signaling that even larger firms are struggling to manage in-house data infrastructure at scale.
• 75% of funds believe consumer spending data will provide an outsized informational edge (Bloomberg), yet many still lack the tools to extract timely insights from alternative sources before the market prices them in.
• 66% of hedge funds are exploring AI-driven predictive analytics, while 45% are using AI for news and sentiment analysis (Hedgeweek). But AI models are only as good as their inputs—poor data structure means poor predictive power.
The real problem isn’t a lack of data—it’s poor execution. Firms are drowning in raw information but struggling to clean, standardise, and integrate it efficiently.
By now it's clear enough to say that the next decade won’t be won by those who simply buy more data—it will belong to firms that can engineer fragmented datasets into structured, actionable intelligence faster than the competition. The winners aren’t chasing more sources. They’re solving the execution gap before everyone else catches up.
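The execution gap described above is often mundane engineering: vendor feeds disagree on date formats, ticker casing, and units, and nothing is tradeable until they share one index. Here is a minimal pandas sketch of that standardisation step—the feed names, columns, and values are entirely hypothetical, not real vendor schemas:

```python
# Two hypothetical alt-data feeds with clashing schemas: the work is
# mapping both onto a common (date, ticker) panel before any research.
import pandas as pd

card_spend = pd.DataFrame({
    "dt": ["2025-01-02", "2025-01-03"],        # ISO dates
    "sym": ["aapl", "aapl"],                   # lowercase tickers
    "spend_usd_mm": [120.0, 135.0],
})
web_traffic = pd.DataFrame({
    "date": ["01/02/2025", "01/03/2025"],      # US-style dates
    "ticker": ["AAPL", "AAPL"],                # uppercase tickers
    "visits": [1_200_000, 1_150_000],
})

def standardise(df, date_col, sym_col, value_col, name):
    """Map one vendor feed onto a shared (date, ticker) index."""
    out = pd.DataFrame({
        "date": pd.to_datetime(df[date_col]),
        "ticker": df[sym_col].str.upper(),
        name: df[value_col].astype(float),
    })
    return out.set_index(["date", "ticker"])

# Outer join keeps dates where either feed has coverage.
panel = standardise(card_spend, "dt", "sym", "spend_usd_mm", "spend").join(
    standardise(web_traffic, "date", "ticker", "visits", "visits"), how="outer"
)
print(panel)
```

Trivial as it looks, this normalisation layer is exactly where the surveyed firms report losing time—each new feed multiplies the schema reconciliation work.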
Link to the Hedgeweek report
Link to the Bloomberg report
Market Pulse
The DeepSeek Effect
DeepSeek’s 95% cost reduction in AI training and operations has triggered a fundamental repricing of AI infrastructure stocks. The assumption that compute demand scales linearly with AI performance is now under pressure, leading to Nvidia’s $600B selloff, the largest single-day loss of market value in history. This shift extends beyond GPUs—ASML (-7%) and Siemens Energy (-20%) saw losses as markets questioned the sustainability of high-CAPEX AI supply chains. A parallel to the 2018 crypto mining glut emerges, where overinvestment in compute infrastructure outpaced real demand, leading to excess supply and compressed margins. However, Jevons Paradox suggests AI adoption could accelerate, shifting the constraint from compute to proprietary data access. Meanwhile, allegations that DeepSeek “distilled” OpenAI’s models raise the risk of AI IP litigation, a factor that could drive sector volatility beyond traditional valuation models. The +8.9% Nvidia recovery and Nasdaq’s +2% bounce indicate that capital is now rotating toward AI firms with data advantages over pure compute plays. Quants must adjust volatility models, weighing legal defensibility alongside compute scaling.
A Closer Look at Outlooks
The 2025 investment outlooks signal a market where sector dispersion, liquidity distortions, and macro shifts challenge traditional quant models. J.P. Morgan's focus on dispersion marks a break from narrow market leadership, suggesting that quants relying on momentum factors may need to recalibrate as AI-driven capital spending reshapes returns. This aligns with Morgan Stanley's concerns over S&P 500 concentration, where reliance on mega-cap tech raises fragility—if leadership shifts, dispersion trades could outperform. Fixed income adds complexity—Goldman Sachs expects rate cuts to support bonds, but Julius Baer warns resilient U.S. growth may keep real yields high, requiring quants to rethink duration exposure. Meanwhile, Deutsche Bank’s focus on fiscal dominance suggests liquidity-sensitive signals need rethinking as government spending overtakes central bank intervention. BlackRock and Franklin Templeton highlight China’s slowdown, raising questions about historical EM risk premia. As policy divergence reshapes pricing inefficiencies, quants must pivot from backward-looking factors to capturing emerging structural distortions in real-time.
Navigational Nudges
Numbers don’t sell strategies—narratives do. In quant finance, the ability to translate complex insights into a compelling story is as important as the insights themselves. The best quants know that uncovering edges is only one piece of the puzzle —making others believe in them is just as important.
Here’s how to tell a better data story:
Use Multiple Lenses
One number never tells the full story. Instead of just showing Sharpe or alpha, present alternative angles. What happens when volatility spikes? Does the model hold across asset classes? How does liquidity impact execution? A single frame is easy to poke holes in—a well-rounded view is much harder to dismiss.
Start with the 'Why'
Most data stories fail because they start with raw numbers. Instead, frame the insight as a problem to solve: “Most momentum models collapse in high-volatility periods—except this one. Here’s why.” That makes people lean in before you even show a single chart.
Make Data Feel Alive
Static backtests bore people. Instead of showing endless tables, bring the numbers to life. Animate how factor exposures shift over time. Highlight when a model’s edge starts fading. Show regime changes dynamically instead of burying them in a spreadsheet. The more intuitive the visuals, the faster your audience will get it.
Own the Uncertainty
Nothing builds credibility faster than admitting risk. Every model has failure points—acknowledge them before someone else does. What assumptions does it rely on? Where does it struggle? What happens in a liquidity crunch? A transparent story is a trustworthy one.
Layer the Complexity
Good storytelling isn’t about simplifying—it’s about structuring complexity. Start with the big idea in one sentence. Then, explain why it matters in a few lines. Only after that, dive into the supporting details. If you drop everything on your audience at once, they’ll tune out before they get to the good part.
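The "multiple lenses" tip above can be made concrete with a toy calculation: report the headline Sharpe, then re-examine the same return series conditional on a volatility regime. Everything below is simulated, with the regime effect deliberately exaggerated for illustration:

```python
# Toy "multiple lenses" sketch: one headline Sharpe vs. the same series
# split by a synthetic volatility regime. All numbers are simulated.
import numpy as np

rng = np.random.default_rng(7)
n = 2_520                                    # roughly ten years of daily returns
high_vol = rng.random(n) < 0.2               # ~20% of days flagged high-vol
vol = np.where(high_vol, 0.02, 0.01)         # wider dispersion on stressed days
edge = np.where(high_vol, -0.002, 0.002)     # the edge flips sign under stress
returns = rng.normal(edge, vol)

def sharpe(r):
    """Annualised Sharpe ratio of a daily return series (rf assumed zero)."""
    return float(np.sqrt(252) * r.mean() / r.std())

headline = sharpe(returns)                   # the single number most decks stop at
calm = sharpe(returns[~high_vol])            # second lens: calm regime
stressed = sharpe(returns[high_vol])         # third lens: stressed regime
print(f"headline {headline:.2f} | calm {calm:.2f} | high-vol {stressed:.2f}")
```

The headline figure averages away the sign flip; the conditional view is the extra lens that surfaces it—and, per the tip on owning uncertainty, it hands you the failure mode to disclose before someone else finds it.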
A strong model doesn’t sell itself. A strong story does. In quant finance, the way you frame an insight can be the difference between it being ignored or driving real conviction.
The Knowledge Buffet
Alt Data Weekly by John Farrall
There’s a lot of noise in alternative data, but this is one of the few things to look forward to in your inbox every week. It’s carefully and intentionally curated down to what’s actually worth paying attention to beyond the headlines and trends, which is why it’s become a regular point of discussion in our office.
The insights often challenge assumptions and surface ideas that aren’t being talked about elsewhere. This is one of those few newsletters that consistently delivers real value, making it a genuinely useful resource for anyone working with data at a high level.
The Closing Bell
If 2025 is your year of identifying valuable data to enrich your models, learn how our tool can help you do that in less than 30 minutes by scheduling an intro call with our founders below.