Your wealth management firm is probably using artificial intelligence (AI), but are you managing the risks?
AI has been a significant focus in 2025, with an AI-focused Executive Order encouraging federal agencies to use the technology more efficiently and the SEC exploring its benefits, risks, and potential future regulatory guidance for investment advisers and wealth managers.
While only 5% of investment adviser firms are using AI for client-facing interactions, 40% have implemented AI tools internally. Nearly half of all compliance professionals have increased compliance testing around AI, yet 44% still have no formal testing or validation of the outputs from their AI tools, a major compliance red flag if not addressed quickly and properly.
As with any new technology, you can’t just plug in AI and assume it will handle compliance on its own. As your wealth management firm adopts or expands its use of AI, what are the key risk areas to watch, and how can you ensure AI is managed effectively while staying aligned with evolving regulatory expectations? Let’s take a closer look.
Related: How Generative AI Impacts Your FI’s Risk Management Program
Do you know what data your AI systems and vendors access? Does it include “covered information” under Reg S-P or Reg S-ID? While AI isn’t explicitly mentioned in the regulations, registered investment advisers (RIAs) must ensure their AI systems are developed, trained, and deployed in alignment with the requirements.
Under the final rule, broker-dealers, RIAs, and other covered institutions must take specific steps to ensure the privacy, safeguarding, and disposal of customer information. Requirements include:
- Establishing a written incident response program for unauthorized access to or use of customer information
- Notifying affected individuals within 30 days of becoming aware that their sensitive information was, or is reasonably likely to have been, accessed or used without authorization
- Overseeing service providers, including ensuring they notify the firm of breaches involving customer information
- Applying the safeguards and disposal rules to a broader scope of “customer information”
- Maintaining written records documenting compliance
Take action now: With the Reg S-P deadlines looming (larger firms with $1.5 billion or more in assets under management must comply by December 3, 2025, while smaller firms have until June 3, 2026), RIAs should ensure their data collection, storage, and usage practices meet the updated S‑P requirements, embedding safeguards across the full AI lifecycle, from development and training to deployment and continuous monitoring.
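To make that safeguarding step concrete, here is a minimal sketch of a pre-transmission screen that flags potential covered information before notes or documents are passed to an external AI tool. The patterns, function names, and blocking logic are illustrative assumptions, not techniques prescribed by Reg S-P:

```python
import re

# Illustrative regexes for common "covered information" fields.
# Both the patterns and this screening approach are assumptions for
# demonstration; Reg S-P does not prescribe specific techniques.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account_number": re.compile(r"\b\d{8,12}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_for_covered_info(text: str) -> dict:
    """Return any matches of sensitive patterns found in the text."""
    return {name: rx.findall(text) for name, rx in PATTERNS.items() if rx.search(text)}

if __name__ == "__main__":
    note = "Met with Jane Doe, SSN 123-45-6789, re: account 4455667788."
    hits = screen_for_covered_info(note)
    if hits:
        # Block or redact the note before sending it to an external AI tool.
        print("Covered information detected:", hits)
```

A real control would go much further, covering more identifier formats, redaction, and audit logging, but even a simple checkpoint like this gives compliance teams a place to intercept sensitive data in the AI pipeline.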
While Reg S‑ID requirements aren’t new, the rise of AI and other digital tools has heightened identity theft risks, making it essential for RIAs to identify red flags, detect them when they occur, respond appropriately, and regularly update their programs. With sensitive client data at stake, ongoing SEC scrutiny, and new technology creating potential exposure points, a strong S‑ID program helps prevent financial loss, regulatory issues, and reputational harm.
Take action now: Earlier this year, the SEC charged two broker-dealers with securities violations, including failing to implement sound policies and procedures to protect customers from identity theft. Staying current on identity theft prevention and other cybersecurity best practices is not only good risk management; it could save you millions of dollars in penalties.
Related: Essential Risk Assessments for Financial Institutions
Given the potential risks AI introduces, from algorithmic bias to data security and intellectual property concerns, AI governance remains a critical priority for firms.
To address these challenges, RIAs should implement human oversight, conduct sensitivity and scenario analyses, and run bias testing to ensure AI systems operate fairly and transparently.
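As a simple illustration of what bias testing can look like in practice, the hypothetical sketch below compares an AI tool’s recommendation rates across client segments and flags large gaps for human review. The segment labels, sample data, and 10% disparity threshold are illustrative assumptions, not regulatory standards:

```python
# Minimal demographic-parity style check on an AI tool's outputs.
# Records pair a client segment with the tool's yes/no recommendation;
# the sample data and 0.10 disparity threshold are illustrative assumptions.
from collections import defaultdict

def recommendation_rates(records: list) -> dict:
    """Compute the share of positive recommendations per client segment."""
    totals, positives = defaultdict(int), defaultdict(int)
    for segment, recommended in records:
        totals[segment] += 1
        positives[segment] += recommended
    return {seg: positives[seg] / totals[seg] for seg in totals}

def flag_disparity(rates: dict, threshold: float = 0.10) -> bool:
    """Flag for human review if segment rates differ by more than the threshold."""
    return max(rates.values()) - min(rates.values()) > threshold

if __name__ == "__main__":
    outputs = [("segment_a", True), ("segment_a", True), ("segment_a", False),
               ("segment_b", True), ("segment_b", False), ("segment_b", False)]
    rates = recommendation_rates(outputs)
    print(rates, "-> needs review:", flag_disparity(rates))
```

In practice, a firm would pair a statistical check like this with documented human review of flagged results, which also supports the oversight and transparency goals above.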
Take action now: While there are no comprehensive AI-specific regulations yet, now is the time to be proactive and ensure your firm and vendors use AI responsibly. Treat AI as part of your firm's larger risk management program, and communicate how your institution and vendors (including service providers) use AI in internal and external outreach.
Related: What is AI Auditing and Why Does It Matter?
Last year, the SEC took action against two firms that claimed in their marketing to use AI-enabled investment models when they weren’t using such technology, a practice known as “AI washing.” The SEC found the advisers violated the Marketing Rule, which prohibits RIAs from making untrue or unsubstantiated claims in advertisements.
While former acting SEC Chairman Mark Uyeda cautioned against heavy-handed rules that might hinder AI innovation, investment advisers should still be careful and precise in their AI-related marketing and outreach.
Take action now: If you say you use AI, how exactly are you using it? Can you substantiate those claims? Ensure you can satisfactorily answer these questions and others from examiners and auditors down the road.
When monitoring your firm’s internal AI use, you should also assess how your vendors use the technology. What systems are they deploying? What services are they supporting? Go beyond traditional vendors and consider service providers, such as marketing firms, portfolio management sub-advisers, and AI tool providers that process meeting notes or emails.
If a vendor or service provider is noncommunicative or unwilling to provide information about their AI usage, consider escalating the issue to higher leadership.
Take action now: As part of your firm's vendor management program, regularly review your third parties’ controls, performance, and adherence to contractual obligations.
Related: 5 Business Continuity Red Flags in Vendor Relationships and How to Address Them
Now that we’ve identified some of the essential AI risk areas, let’s dive into how investment advisers and wealth management firms can proactively mitigate AI-related risks in their products, services, and outreach:
- Align data collection, storage, and usage practices with Reg S-P and Reg S-ID, embedding safeguards across the full AI lifecycle
- Establish AI governance with human oversight, sensitivity and scenario analyses, and bias testing
- Substantiate every AI-related claim in your marketing and outreach
- Review vendors’ and service providers’ AI usage, controls, and contractual compliance as part of your vendor management program
As you reevaluate your firm’s AI strategy, consider how you can effectively use AI while mitigating risks. As AI usage grows, so will the opportunities and challenges. Is your firm prepared for the future?
Do you know how your vendors and service providers are using AI? Learn best practices for managing third-party AI risk in our webinar.