AI and financial stability: Mitigating risks, harnessing benefits

Panel of experts speaking at the FSOC/Brookings AI Conference
Brookings Institution
Editor's note:

This post is a summary of a conference held on June 6-7, 2024. Watch the full videos and read the preview here. Quotes were edited for clarity.

On June 6-7, the Financial Stability Oversight Council (FSOC) and the Brookings Institution convened leaders across government, industry, academia, nonprofits, and trade associations to discuss the financial stability implications of artificial intelligence (AI). Sandra Lee, the Treasury Department's Deputy Assistant Secretary for FSOC, highlighted that the conference aimed "to promote thoughtful discussion on the risks associated with AI and how to mitigate these risks while still harnessing AI's benefits." Over two days at Treasury and the Brookings Institution, participants shared views on policy ideas and the past, present, and future of AI in financial services. A summary of key highlights follows; videos of the full conference are available.

AI and systemic risk

Treasury Secretary Janet Yellen noted in her remarks that Treasury had recently issued a public report on the uses, opportunities, and risks of AI in the financial services sector. Secretary Yellen stated that AI, "when used appropriately, can improve efficiency, accuracy, and access to financial products." She went on to note that "specific vulnerabilities may arise from the complexity and opacity of AI models; inadequate risk management frameworks to account for AI risks; and interconnections that emerge as many market participants rely on the same data and models." She offered a potential approach for exploring some of these issues, stating that "scenario analysis could help regulators and firms identify potential future vulnerabilities and inform what we can do to enhance resilience."

Acting Comptroller of the Currency Michael Hsu's remarks touched on this same theme, noting that "AI can be used as a tool or a weapon." Acting Comptroller Hsu discussed the evolution of AI, "where it is used at first to produce inputs to human decisionmaking, then as a co-pilot to enhance human actions, and finally as an agent executing decisions on its own on behalf of humans." Acting Comptroller Hsu stated that "the risks and negative consequences of weak controls increase steeply as one moves from AI as input to AI as co-pilot to AI as agent." He emphasized FSOC's centrality in this discussion, noting that "the FSOC is uniquely positioned to contribute to this, given its role and ability to coordinate among agencies, organize research, seek industry feedback, and make recommendations to Congress."

Regulation of AI usage in finance

How best to regulate AI usage-related risks in finance dominated much of the discussion throughout the conference. Lisa Rice, CEO of the National Fair Housing Alliance, argued that the appropriate regulatory framework for AI must involve substantial testing of technology before it can be released into society. She proposed "having a team of trusted institutions to work in a collaborative fashion to explore the models and really vigorously test them, to see how they perform, red-team them, blue-team them, to see if you can identify any bias or harm, and also see if you can compel them or constrain them to be fairer from the outset." Terah Lyons, JPMorgan Chase Managing Director & Global Head of AI Policy, saw a "need for commercial organizations to have clarified guidance from regulators, supervisors, and other authorities with respect to the way that AI should be responsibly implemented." Some participants were interested in seeing as much harmonization as possible, with a shift from state to federal regulation and to international standards (given the EU AI Act). American University Law Professor Hilary Allen made the point that "there may be places where the stakes are so high, and hallucination is not worth the risk, that we need rules to 'just say no.'"

Regulators' ability to build expertise on AI and appropriately regulate AI was another common theme. Erie Meyer, Consumer Financial Protection Bureau Chief Technologist and Senior Advisor to the Director, stated: "We have to have the right talent in the room to do the work … to better meet the moment to understand how these firms are working, where the risks are, what rocks we should be looking under, and what we should do about them." Fabio Natalucci, International Monetary Fund Deputy Director of the Monetary and Capital Markets Department, suggested that regulators monitor developments; assess vulnerabilities, such as "whether this is an amplification of old mechanisms that we understand from before … that just operates faster in different contexts"; and determine whether the regulatory framework needs to be adjusted, including asking "whether the risk model that we use is appropriate or if we need new models."

Compliance challenges and financial stability implications

Secretary Yellen noted that FSOC member agencies "have frameworks and tools that can help mitigate risks related to the use of AI, such as model risk management guidance and third-party risk management. That said, there are also new issues to confront, and this is a rapidly evolving field." Acting Comptroller Hsu echoed this sentiment: "What starts off as responsible innovation can quickly snowball into a hyper-competitive race … In time, risks grow undetected or unaddressed until there is an eventual reckoning. We saw this with derivatives and financial engineering leading up to the 2008 financial crisis and with crypto leading up to 2022's crypto winter." EY Partner Anuj Mallick observed that governance structures around AI are starting to evolve: "it's not just how you deploy the technology and the governance that used to be around that, but it's actually bringing in legal, compliance, risk functions into it, to be able to understand the actual outcome."

Allen noted that, "if everybody is relying on the same kind of data, and everybody is using the same few algorithms, everyone is going to be acting in lockstep. We know from the run-up to 2008 that herd behavior is very dangerous when things go badly." Brookings Senior Fellow Nicol Turner Lee also cited "the challenge of the same cloud computing companies, and the same third-party companies selling the same data to a variety of companies, which could end up with collusion or some type of price-fixing that we're not aware of."

Acting Comptroller Hsu hypothesized another scenario: "the nightmare paperclip/Skynet scenario for financial stability does not require big leaps of the imagination. Say an AI agent is programmed to maximize stock returns … The AI agent concludes that to maximize stock returns, it should take short positions in a set of banks and spread information to prompt runs and destabilize them." Samara Cohen, BlackRock Chief Investment Officer of ETF and Index Investments, added that "the potential for confidence to be undermined by various forms of cybersecurity issues, by deepfakes, by the intentional misuse of data in a model is critically important and something we would look to the regulatory system to safeguard." Discover Executive Vice President Keith Toney echoed: "The greater risk would be if I was on a single cloud platform, and if that cloud provider gets compromised."

AI's role in exacerbating bias or facilitating financial inclusion

Broad consensus emerged that AI has the potential both to exacerbate bias and to facilitate financial inclusion. Virginia Commissioner of Insurance Scott White noted a hypothetical use of AI and algorithms for micro-pricing, as well as the emergence of "algorithmic models that can now process [huge datasets], so it can amplify potential biases." Dominic Delmolino, Amazon Web Services Vice President of Worldwide Public Sector Technology & Innovation, relayed that his thinking has evolved: "I used to believe that if I just had all the data, I could solve any problem. Now, it's not that the data informs me, but how I select the data, for what purpose, for what domain, and for what use, that becomes that much more important." Rice noted that AI "can see race and gender quicker than the human eye can, so we're finding all kinds of ways that these systems are perpetuating bias and have been discriminating against people, locking them out of the financial markets and the housing markets—and now we have to train regulators and also train the industry on ways we can use AI to innovate and protect consumers and expand the market responsibly."

On the other hand, many participants noted the potential for AI to be part of the solution toward financial inclusion. As the conversation highlighted the discrimination throughout the financial services arena, Turner Lee commented that "when humans are in charge in financial services, maybe we're actually defeating" the objective of reducing discrimination. Jo Ann Barefoot, CEO and co-founder of the Alliance for Innovative Regulation, noted that "there is potential that this problem of the consumer of financial services … not being attentive, or not being highly financially literate, or not being sophisticated, or being too busy, all of it—that the AI agent may be part of the solution." A lively conversation between the panelists was moderated by CNBC's Jon Fortt, who observed that many Americans would rather "chase GameStop or buy Bitcoin or get on DraftKings … Those are the things that are potentially going to distract that underbanked user, and there's much more of an incentive, than 'Hey, the rate on your Discover Bank account is actually going to be a little bit better than the one that you're getting.'"

Regulator usage of AI

Barefoot encouraged regulators to adopt AI: "What if we were using AI in our bank examinations for fair lending, to look for more data, more sophisticated analysis?" Brookings Senior Fellow Aaron Klein echoed this: "In 2007 … regulators had been telling us banks had never been safer, as witnessed by the lack of any failures," but "two years later, any human intelligence would have told you that the financial system was incredibly shaky in January 2007. Would an AI have done anything different? Had regulators been incorporating AI, would there have been a different response?"

Todd Conklin, Treasury Chief AI Officer and Deputy Assistant Secretary for Cybersecurity and Critical Infrastructure Protection, noted that federal agencies' adoption of technology has already occurred but is gradual: "About 10 years ago, we started our first cloud modernization effort within our national security infrastructure, and a lot of that investment was to create the foundation for our AI analytics program. And we completed that modernization effort a few years ago and are now finally starting to see the fruit of that investment through our AI development."

The conference concluded with several breakout sessions where participants engaged in free-flowing discussions synthesizing the themes of AI and markets, humans in the loop, and regulating AI. The conference was a step toward implementing FSOC's recommendation in its annual report that "financial institutions, market participants, and regulatory and supervisory authorities further build expertise and capacity to monitor AI innovation and usage and identify emerging risks."


  • Acknowledgements and disclosures

    The authors would like to thank Riki Fujii-Rajani and Kyle Lee for their assistance.

    Brookings is financed through the support of a diverse array of foundations, corporations, governments, and individuals, as well as an endowment. A list of donors can be found in our annual reports published online here. The findings, interpretations, and conclusions in this report are solely those of its author(s) and are not influenced by any donation.