A Walk Down the New Wall Street
Managing the Arrival, Risks, and Promise of AI in Investment Management
INTRODUCTION
Article Objective
With the ever-increasing fervor around the use of AI/GenAI, several colleagues have shared with me over the past several weeks that individuals are hungry for more than mere musings on the SEC’s now-rescinded rule proposal on conflicts of interest and predictive analytics. The general sentiment has been that, although content exists around AI/GenAI, much of it is not specific to our industry, and the content that is relevant has not yet reached the level of practical application (or even foundational education). This article is intended to be an incremental but foundational step toward curing that. To that end, the goals of this piece are as follows:
Provide basic foundational information about AI/GenAI relative to our industry (such as its prevalence of use, benefits, and basic AI/GenAI vocabulary),
Provide a more particular sense of the types of risks AI/GenAI used by investment managers can present, and
Provide ideas regarding how to potentially address those risks from a governance, risk, and compliance perspective.
Key Points
Additionally, although the remainder of this article will delve more deeply, there are three fundamental points to be gleaned:
The use of traditional artificial intelligence (AI) and generative AI (GenAI) by investment managers has evolved from a state of potentiality to a state of prevalence, and is quickly approaching the status of a functional and strategic imperative. The main contributing factors include the high level of accessibility of such tools, as well as their demonstrable and potential benefits (e.g. creating ultra-customized portfolios, processing vast amounts of data to hone security selection recommendations, creating operational efficiency, and generating risk and other assessments based on myriad data types).
Accompanying the benefits of AI/GenAI are also various tangible risks (e.g. portfolio recommendations based on culled data that is inaccurate or even fabricated, the inability to understand how certain tools have arrived at a recommendation or decision, etc.). These risks have the potential to impact investors and prospective investors, as well as firm operations and even broader markets themselves.
Even in the absence of specific regulation, general risk management and fiduciary duty principles suggest that these risks should be accounted for in an investment manager’s governance, risk, and compliance apparatus. Accordingly, investment managers should consider implementing Responsible AI or AI Usage programs that address any number of relevant topics, such as AI/GenAI tool implementation and monitoring, employee education/“reskilling,” and oversight of 3rd party service providers who themselves use or rely on AI/GenAI (among other topics).
And with that, let’s go for a walk down the new Wall Street, shall we?
PREVALENCE & BENEFITS
AI/GenAI has meaningfully reached the shores of investment management; it is no longer a vague figure on a distant horizon. Its accessibility, coupled with demonstrable and potential benefits, has not only contributed to its arrival but has turned its use from a “nice to have” for firms into a fast-approaching competitive and functional imperative.
In 2022, a CFA Institute study had already revealed that 81% of institutional investors were more interested in investing in a fund that relies on AI and big data tools than in a fund that relies primarily on human judgment to make investment decisions. The same survey found that 87% of respondents trusted their asset manager more because of increased use of technology. In retrospect, the 2022 survey appears to have been prescient. In 2023, Vanguard claimed to be using GenAI as part of its portfolio management process, and JP Morgan was reported to have begun using GenAI for compliance management (reviewing legal documents and extracting essential data points and clauses). More recent industry research produced this year has shown that over 50% of investment managers are using AI within their investment strategy or asset class research, with over 30% planning to do so. Similar research this year indicates that 92% of alternative fund managers are already using AI as part of their risk and compliance procedures, with 55% having started two years ago. And just this past July, TIAA announced the rollout of a GenAI platform, and T. Rowe Price noted it has seen a 30% increase in productivity attributable to its AI usage.
The reasons for this ramp-up make sense. GenAI models such as GPT, Llama2 (Meta’s flagship GenAI tool), Titan (Amazon’s flagship GenAI tool), and Claude2 (Anthropic’s flagship GenAI tool) are as accessible to an ordinary individual as they are to the world’s wealthiest corporations and most advanced nation states. Additionally, the benefits of AI/GenAI – demonstrable or potential – are numerous and compelling, and include but are not limited to the following (Appendix B contains a more expansive list):
The ability to sift through large amounts of data – and take into account new data types – that are relevant to security selection and market analysis
The ability to provide ultra-customized portfolios at scale
The ability to create personalized and tailored marketing materials based on open-source (i.e. public) information
The ability to create robotic telephonic or web-based client and prospective client interactions
The ability to automatically generate risk and other assessments based on processing and analyzing vast amounts and types of data (structured, semi-structured, and unstructured).
Between AI/GenAI’s accessibility and these types of benefits, its increased use is not surprising, nor is the appetite by both investment managers and clients to begin leveraging and experiencing it. Indeed, such technology bears the promise of solving for age-old challenges such as optimizing the suitability and personalization of investment advice, enhancing market and other risk forecasting, and driving operational excellence so that human investment and risk professionals can devote their thinking to higher-level cognitive activities. Without doubt, AI/GenAI’s arrival on the block holds great promise for clients and industry professionals alike.
POTENTIAL RISKS
Accompanying AI/GenAI’s potential benefits are potential risks as well. These risks are far more numerous than simply presenting conflicts of interest considerations (which is the narrow scope of the SEC’s rescinded predictive analytics rule proposal).
Conflicts of interest are certainly one important category of risk AI/GenAI presents. They go to the heart of one leg of an investment manager’s fiduciary duty – the duty of loyalty. However, the SEC’s rule proposal seemed to fall short of addressing the other leg of a manager’s fiduciary duty – the duty of care.
AI and GenAI are vulnerable to more than just providing recommendations or communications that may steer an investor or prospective investor toward a decision that is in the manager’s rather than the client’s or prospective client’s best interest. As we all know, the topic of “AI washing” has emerged in our space in the same way “greenwashing” did, and it will undoubtedly continue to be scrutinized by the SEC as a matter of course. Beyond the regulatory interests du jour, however, are a number of very practical risks AI/GenAI poses, which, given their nature, should also capture our attention – and which will likely land on the regulatory radar at some point as well.
Fundamentally, AI/GenAI is dependent on the quality of the data it pulls in, and it can pull in data from a variety of sources (public and otherwise). This data can have flaws such as embedded biases (e.g. prior to making a security recommendation, a GenAI tool reviews information from the internet stating that females are less effective CEOs of publicly traded companies than males), or it can come in forms that create complexities for analysis and processing (e.g. semi-structured or unstructured data can be more challenging to assess, making conclusions or recommendations less reliable). Additionally, AI/GenAI outputs are vulnerable to other categorical risks, among others: hallucinations (providing a response or recommendation that is, or is based on, information that is wrong); data drift (when the data, or the type of data, being fed into an AI/GenAI tool changes over time relative to the data the model was considering and using at the outset); model drift (decay in a model’s predictive power as a result of changes in real-world data, such as a spam-detection model becoming less effective as the typical content used in spam campaigns changes over time); and lack of explainability (where a decision or recommendation by AI/GenAI cannot be explained, or the logic behind it cannot be seen and understood).
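To make the data drift concept concrete, here is a minimal, hypothetical sketch (not drawn from any particular vendor tool or regulatory standard) of how a monitoring process might quantify drift in a model input using the Population Stability Index, a common heuristic for comparing a current data distribution against the distribution seen at deployment:

```python
import math
import random

def psi(baseline, current, bins=10, eps=1e-4):
    """Population Stability Index: a common data-drift heuristic.
    Buckets are derived from baseline quantiles; identical distributions
    score near 0, while large shifts push the score well above ~0.25."""
    srt = sorted(baseline)
    # Quantile cut points taken from the baseline distribution
    cuts = [srt[int(len(srt) * i / bins)] for i in range(1, bins)]

    def proportions(data):
        counts = [0] * bins
        for x in data:
            idx = sum(1 for c in cuts if x >= c)  # bucket x falls into
            counts[idx] += 1
        # Floor at eps so the log below never sees a zero proportion
        return [max(c / len(data), eps) for c in counts]

    p_base, p_cur = proportions(baseline), proportions(current)
    return sum((c - b) * math.log(c / b) for b, c in zip(p_base, p_cur))

random.seed(0)
baseline = [random.gauss(0.0, 1.0) for _ in range(5000)]  # data at deployment
stable   = [random.gauss(0.0, 1.0) for _ in range(5000)]  # same regime
shifted  = [random.gauss(1.0, 1.0) for _ in range(5000)]  # regime change

print(round(psi(baseline, stable), 3))   # low score: no drift flagged
print(round(psi(baseline, shifted), 3))  # high score: drift flagged
```

In practice, a monitoring program would run a check like this periodically over each model input and escalate when the score crosses an agreed threshold; the point of the sketch is simply that drift is measurable, not that this particular metric or threshold is the right one for any given tool.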
These types of risks can manifest in an investment manager’s business in a variety of ways. While Appendix B contains a more extensive list, such risks may include (among others):
Investment professionals being unable to see or understand the factors a GenAI tool took into account when making a recommendation
A tool placing outsized weight on a factor that has been deemed less relevant to security analysis
A tool obtaining or accessing material non-public information as part of the data retrieval process
Investment recommendations being based on factors that are simply made up or wrong
Investment managers engaging in AI washing
All of these risks create general risk management and ethical considerations that warrant addressing even absent specific regulatory guidance. Additionally, I would not rule out these risks turning into true compliance risks even in the absence of targeted regulation. The SEC continues to examine and investigate certain advisers’ use of AI, and in prior analogous contexts, it has brought cases against investment managers for failing to supervise and understand models that employed algorithms for things such as automated trading, wash sale monitoring, and performance reporting, to name a few. The topic also continues to garner attention with FINRA. Accordingly, it seems prudent for investment managers to implement some framework or architecture that is designed to mitigate AI/GenAI risk specific to their organizations.
RESPONSIBLE AI/AI USAGE
To address the types of risks discussed in the above section, investment managers need to ensure they are practicing Responsible AI and have a Responsible AI and/or AI Usage program in place. Leveraging existing governance, risk, and compliance practices and methodologies, it is feasible for investment managers to design such programs even in the absence of specific regulatory guidance or standards.
“Responsible AI” is a broad term that encompasses the business and ethical choices associated with how organizations adopt and deploy AI capabilities. Implicit within this concept are of course other standards, such as ensuring AI/GenAI tools are working as and when they are intended. Although the SEC has not provided guidance in this regard, investment managers are not without footholds. First, common governance, risk, and compliance practices and methodologies are not poorly suited to managing the types of risk AI/GenAI engenders; the challenge lies more in climbing a steep learning curve. Additionally, FINRA has provided high-level guidance, and a more particular analogous resource to consider is the guidance on model development, implementation, and use promulgated by the Board of Governors of the Federal Reserve. Such guidance touches upon topics such as disciplined model development and implementation processes, as well as ongoing monitoring and testing. Last, a variety of public policy statements and even legislation have been undertaken both in our own country (with the Biden Administration having issued an Executive Order and a blueprint concerning the responsible use of AI/GenAI) and on a global scale (including the EU Artificial Intelligence Act, as well as other steps taken by the UK, China, the G7, and other international collaborations). These policy statements and legislative actions provide the contours of the relevant considerations and issues.
Using these resources and perspectives as a starting point, the following selected topics (among others) would seem to be reasonable elements of a Responsible AI/AI Usage program at an investment manager (Appendix C contains a more detailed blueprint):
TABLE 1: BLUEPRINT FOR A RESPONSIBLE AI/AI USAGE PROGRAM
Education & “Reskilling” | → Providing initial and ongoing education to employees on how AI and GenAI work |
AI/GenAI Tool Development & Implementation | → Ensuring AI/GenAI tools (and applications thereof) meet certain standards before being deployed and on an ongoing basis |
Ongoing Monitoring, Testing & Reporting | → Conducting real-time monitoring and systematic periodic back-testing of AI/GenAI outputs for instances of risk events that have occurred or might occur |
Disclosures | → Updating disclosures to clients/prospective clients to detail risks and potential conflicts of interest associated with the manager’s use of AI/GenAI |
Cyber & Information Security | → Implementing measures to ensure that an AI/GenAI tool (and applications thereof) is not anomalously exposed to cyber and information security risks |
Business Continuity Planning | → Integrating an AI/GenAI tool’s use into a firm’s BCP program as an identified dependency (depending on the criticality of the tool/solution to the manager) |
Governance | → Implementing governance bodies and processes, as well as policies & procedures, that serve to facilitate and oversee AI/GenAI tool (and application thereof) development, implementation, and ongoing monitoring |
Use of AI/GenAI by 3rd Party Service Providers | → Assessing 3rd party vendors’ or service providers’ own use of AI/GenAI, as well as the vendor’s/service provider’s Responsible AI program |
TAKEAWAY RESOURCES & PARTING THOUGHTS
In addition to the body of this article itself, I have included three appendices and supplemental materials designed to serve as handy references for you as you head off into conferences, board meetings, and cocktail parties. Those appendices, which I urge you to read at some point and at least keep in your folios, are as follows:
Appendix A – Common AI Terms. Appendix A serves as a glossary of terms commonly used and heard surrounding the topic of AI/GenAI. While not all of them are used in this article per se, I wanted to equip people with a basic vocabulary so that language does not become a barrier to learning in this space. Additionally, throughout this article, the font color for certain words and phrases is azure. These azure-colored words and phrases represent terms whose definitions appear in Appendix A (though in some instances I provide shorthand definitions in the body of the article itself where it seemed more needed).
Appendix B – Potential AI Uses, Benefits & Potential Risks. Appendix B serves as a map that more particularly shows potential uses and benefits of AI/GenAI, as well as the corresponding potential risks of such uses. It is intended to provide a deeper level of insight compared to most publicly available research and commentary on AI/GenAI in our industry.
Appendix C – Blueprint for a Responsible AI Program. Even absent regulation, AI/GenAI can create tangible risks for clients, prospective clients, and investment managers themselves. As a result, the information in Appendix C is meant to put forth a potential Responsible AI program framework investment managers could consider adopting in some shape or degree, depending on their current or aspirational uses of AI/GenAI.
More broadly than the aforementioned resources, I certainly acknowledge that numerous topics touched upon in this article could be the subject of an article in their own right. The same can be said of topics not even touched upon in this article beyond the context of mere vocabulary (e.g. artificial neural networks, deep learning, unsupervised learning, etc.). If you’re anything like me, the new epoch of AI/GenAI in general – not just for our industry – sparks a complicated mix of excitement and fear, the proportion of which can vacillate each day. The practical and existential questions it raises are at once fascinating and overwhelming. But I do believe we have the tools to manage this new phase of human evolution, and productive and helpful steps are being taken to navigate these new waters (not by the SEC, mind you). While much work in our industry needs to be done, I believe we are more than capable of doing it, provided we collaborate (and that includes public-private collaboration). If we do, I’ve no doubt we’ll achieve outcomes in line with the sentiment author and video game enthusiast Joanna Maciejewska has expressed: “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.”
I’ll drink to that, Joanna . . . and with my own, human hand.
Thanks for reading.
Bibliography
In the interests of facilitating the type of “reskilling” that all of us will need to do to live and thrive in our industry’s next phase, I have shared the list of research I used in preparing or have referred to in this article.
Beane, Matt, “Gen AI Is Coming for Remote Workers First,” Harvard Business Review, July 22, 2024.
Board of Governors of the Federal Reserve System, SR 11-7, “Guidance on Model Risk Management,” April 4, 2011.
Board of the International Organization of Securities Commissions, “The Use of Artificial Intelligence and Machine Learning by Market Intermediaries and Asset Managers: Final Report,” https://www.iosco.org/library/pubdocs/pdf/IOSCOPD684.pdf, September 2021.
Bradford, Shelby, “Artificial Neural Networks: Learning by Doing,” The Scientist University, March 1, 2024.
Broby, Daniel, “The Use of Predictive Data Analytics in Finance,” The Journal of Finance and Data Science, May 20, 2022.
Caiazza, Amy B., “The Use of Artificial Intelligence by Investment Advisers: Considerations Based on an Adviser’s Fiduciary Duty,” https://www.wsgr.com/en/insights/the-use-of-artificial-intelligence-by-investment-advisers-considerations-based-on-an-advisers-fiduciary-duties.html, May 28, 2020.
Croce, Brian, “Vanguard CEO Says AI Will Revolutionize Asset Management,” Pension & Investments, May 24, 2023.
Donelan, Michelle, “A Pro-Innovation Approach to AI Regulation: Government Response,” https://www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response, February 6, 2024.
Eloundou, Tyna et al., “GPTs Are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models,” https://arxiv.org/abs/2303.10130, March 17, 2023.
European Parliament, “EU AI Act: First Regulation on Artificial Intelligence,” https://www.europarl.europa.eu/topics/en/article/20230601STO93840/eu-ai-act-first-regulation-on-artificial-intelligence, August 6, 2023.
Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/, October 30, 2023.
Feldman, Robin et al., “AI Governance in the Financial Industry,” Stanford Journal of Law, Business & Finance, 2022.
Gensler, Gary, “Jack Bogle, Haystacks, and Putting the Interest of the Clients First: Prepared Remarks before the 2024 Conference on Emerging Trends in Asset Management,” https://www.sec.gov/newsroom/speeches-statements/gensler-etam-051624, May 16, 2024.
Goel, Abhinav et al., “The Transformation Imperative: Generative AI in Wealth and Asset Management,” https://www.ey.com/en_us/insights/financial-services/generative-ai-transforming-wealth-and-asset-management, October 31, 2023.
Hickey, Bridget, “Video: T. Rowe Developers See 30% Jump in Productivity with AI: COO,” Ignites, July 17, 2024.
IA-6353, “Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers,” July 26, 2023.
In the Matter of Prosper Funding, LLC, Investment Advisers Act Release No. 10630, April 19, 2019.
In the Matter of Timothy S. Dembski, Investment Advisers Act Release No. 4671, March 24, 2017.
In the Matter of Wealthfront Advisers, Investment Advisers Act Release No. 5086, Dec. 21, 2018.
Jenkins, Robert, “How Might AI Impact Investment Management?,” https://www.lseg.com/en/insights/data-analytics/how-might-ai-impact-investment-management, October 12, 2023.
Kaczmarski, Kamil et al., “The AI Tipping Point,” https://www.oliverwyman.com/content/dam/oliver-wyman/vs/publications/2023/october/Oliver_Wyman_Morgan_Stanley_Global_Wealth_and_Asset_Management_report_2023_The_Generative_AI_Tipping%20Point1.pdf, 2023.
Kennedy, Joe, “Generative AI for Asset and Wealth Management: Thinking beyond Use Cases,” https://www.pwc.com/us/en/tech-effect/ai-analytics/generative-ai-asset-wealth-management.html, September 28, 2023.
Madiega, Tambiama, “Artificial Intelligence Act,” Briefing: EU Legislation in Progress, March 2024.
Martin, Alan, “Robotics and Artificial Intelligence,” AI Business, November 26, 2021.
McIntyre, Chris et al., “How Asset Managers Can Transform with Generative AI,” https://bcg.com/publications/2023/how-genai-can-transform-asset-management, July 31, 2023.
Microsoft Source, “Microsoft and LinkedIn Release the 2024 Work Trend Index on the State of AI at Work,” https://news.microsoft.com/2024/05/08/microsoft-and-linkedin-release-the-2024-work-trend-index-on-the-state-of-ai-at-work/, May 8, 2024.
Niederberger, Ursula et al., “AI Integration in Investment Management 2024 Global Manager Survey,” https://www.mercer.com/assets/global/en/shared-assets/global/attachments/pdf-2024-Mercer-AI-integration-in-investment-management-2024-global-manager-survey-report-03212024.pdf, 2024.
Ocorian, “Navigating Opportunities and Risks in the Global Financial Landscape: Outlook 2024,” https://25500968.fs1.hubspotusercontent-eu1.net/hubfs/25500968/PDF%20downloads/Ocorian%20Outlook%202024%20Report%20(4).pdf, 2024.
OECD, “Artificial Intelligence, Machine Learning and Big Data in Finance: Opportunities, Challenges and Implications for Policy Makers,” https://www.oecd.org/finance/artificial-intelligence-machine-learning-big-data-in-finance.htm, 2021.
Preece, Rhodri, CFA, “Ethics and Artificial Intelligence in Investment Management: A Framework for Professionals,” https://www.cfainstitute.org/-/media/documents/article/industry-research/Ethics-and-Artificial-Intelligence-in-Investment-Management_Online.pdf, 2022.
Regulatory Notice 24-09, “FINRA Reminds Members of Regulatory Obligations when Using Generative Artificial Intelligence and Large Language Models,” June 27, 2024.
Rosidi, Nate, “Multimodal Models Explained,” Natural Language Processing, March 27, 2023.
Schoff, Ken et al., “Empowering Wealth Managers with Generative AI,” Professional Wealth Management, April 11, 2024.
Suleyman, Mustafa et al., The Coming Wave (Random House), 2023.
Volz, Beagan Wilcox, “TIAA Rolls Out Gen AI Platform,” Ignites, July 18, 2024.
The White House, “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People,” https://www.whitehouse.gov/ostp/ai-bill-of-rights/, October 2022.
Wintermeyer, Lawrence, “AI Is Getting to Work in the Highly Regulated Investment Management Industry,” Forbes, February 22, 2024.
Zeyi, Yang, “Four Things to Know about China’s New AI Rules in 2024,” https://www.technologyreview.com/2024/01/17/1086704/china-ai-regulation-changes-2024/, January 17, 2024.