After the 2008 financial crisis, regulators adopted macroprudential regulation, shifting their focus from the health of individual institutions to the stability of the financial system as a whole. Central banks began monitoring indicators such as asset bubbles and excessive leverage in an effort to identify risks earlier. For years, however, their main constraint was the lack of granular, real-time data.
That constraint has largely eased. Today, regulators have access to vast datasets that offer increasingly detailed views of balance sheets, not only within traditional banking, but also across the expanding shadow banking sector.
This data-rich environment opens the door for artificial intelligence. AI models can process vast amounts of information and identify complex patterns that would be invisible to human analysts, raising the prospect of forecasting market distress with far greater precision. Integrating AI into financial stability efforts, however, presents complex challenges that require careful examination.
The Allure and the Dilemma of Predictive AI
The primary allure of using AI in financial regulation lies in its predictive power. To demonstrate this capability, the researchers built a graph transformer and trained it on 14 years of financial holdings data; the model could reconstruct investor positions with remarkable accuracy. In a striking test, even though its training data ended in 2019, the model accurately predicted trading behaviors during the chaotic market crash of 2020, triggered by the onset of the COVID-19 pandemic. Such a tool could give regulators a "granular signal" of where vulnerabilities are building, allowing them to focus their resources and policies on specific corners of the market, such as the less transparent shadow banking sector, where much of the systemic risk now resides.
However, this predictive power comes with a fundamental dilemma, which Coppola describes as a "Faustian bargain." The problem is that models based purely on historical data can overlook the underlying structural forces that govern the economy. An AI model might be able to pinpoint where a financial fire is about to start with startling accuracy, but it cannot explain why it is happening. This lack of causal understanding is a critical flaw for a regulator. If a model signals trouble, but cannot explain its root cause, how can a policymaker be sure that a specific intervention will actually fix the problem? The AI essentially offers a high-probability alert without providing a usable theory of change, leaving regulators in the difficult position of having to act on a "black box" recommendation without fully understanding the potential consequences.
Navigating Moral Hazard and the Path to Model-Informed Regulation
The mere existence of a model known to predict crises could fundamentally change the behavior of the very investors the model is meant to oversee. Banks and other financial institutions, aware that a regulator's AI is monitoring certain asset classes or leverage levels, might become more willing to take risks. For example, they could assume that if a crisis were truly imminent, the AI would detect it and prompt the regulator to intervene, effectively creating a safety net that encourages greater risk-taking. At the same time, investors might strategically steer clear of the specific investments or activities the model is known to scrutinize. Over time, this dynamic could make the predictive model less effective and shift risk into unmonitored areas, ultimately increasing overall systemic vulnerability.
To resolve these tensions, the researchers proposed moving beyond a binary choice between pure AI prediction and traditional economic models. Instead, they advocate for a hybrid approach they call “model-informed” regulation. Their research provides a blueprint for this by demonstrating that AI models perform best when used in conjunction with existing economic theory. In their framework, the AI’s role is not to provide the final answer, but to serve as a powerful diagnostic tool. It can sift through massive datasets to detect anomalies, pinpoint pockets of high vulnerability, and suggest hypotheses about where and how the system is stressed. Then, traditional economic models can be applied to interpret the AI’s findings, diagnose the underlying structural issues, and design effective, targeted policy responses.
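To make the two-stage division of labor concrete, the toy sketch below separates an AI-style anomaly screen (stage 1) from a structural interpretation step (stage 2). It is purely illustrative: the function names, sectors, leverage figures, and thresholds are invented for this example and are not taken from the researchers' model.

```python
# Toy sketch of a "model-informed" pipeline: a statistical screen flags
# suspect sectors, and a simple structural rule interprets each flag
# before any policy response is considered. All numbers are illustrative.

def ai_flag_anomalies(leverage_by_sector, z_threshold=1.5):
    """Stage 1 (AI-style screen): flag sectors whose leverage is an
    outlier relative to the cross-sectional mean (population z-score)."""
    values = list(leverage_by_sector.values())
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return {s: v for s, v in leverage_by_sector.items()
            if std > 0 and (v - mean) / std > z_threshold}

def structural_diagnosis(flagged, short_term_funding_share):
    """Stage 2 (economic-model check): attribute each flag to a
    candidate structural cause rather than acting on the raw signal."""
    diagnoses = {}
    for sector in flagged:
        if short_term_funding_share.get(sector, 0.0) > 0.5:
            diagnoses[sector] = "run-prone short-term funding"
        else:
            diagnoses[sector] = "needs further structural review"
    return diagnoses

# Illustrative inputs: one sector carries visibly elevated leverage.
leverage = {"banks": 10.0, "hedge_funds": 31.0,
            "insurers": 9.0, "pensions": 8.0}
funding = {"hedge_funds": 0.7}

flags = ai_flag_anomalies(leverage)
print(structural_diagnosis(flags, funding))
```

The point of the sketch is the hand-off: the screen only proposes where to look, and a separate, interpretable rule supplies the candidate "why" that a policymaker can interrogate before intervening.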
AI as Guide, Not Governor
The researchers present a nuanced vision for AI in financial regulation, recognizing its potential to support more proactive, data-driven risk management. At the same time, they caution against handing over decision-making entirely to algorithms. The central challenge lies in integrating AI’s predictive capabilities with established economic theory.
Their model-informed framework positions AI as a tool for guiding inquiry rather than serving as a final policy arbiter. This hybrid approach helps avoid the limitations of purely predictive systems while strengthening traditional models. Ultimately, success will depend on sustained research and development, with central banks moving deliberately to ensure that AI tools enhance stability rather than introduce new risks into the system.
Disclaimer: The views expressed here are those of the author expressed in their private capacity and do not necessarily represent the views of AZoM.com Limited T/A AZoNetwork the owner and operator of this website. This disclaimer forms part of the Terms and conditions of use of this website.