Your wealth management firm is probably using artificial intelligence (AI), but are you managing the risks?
AI has been a significant focus in 2025, with an AI-focused Executive Order encouraging federal agencies to use the technology more efficiently and the SEC exploring its benefits and risks, along with potential future regulatory guidance for investment advisers and wealth managers.
While only 5% of investment adviser firms are using AI for client-facing interactions, 40% have implemented AI tools internally. Nearly half of all compliance professionals have increased compliance testing around AI. Still, 44% have no formal testing or validation of the outputs from their AI tools — a major compliance red flag if not addressed quickly and properly.
Like any new technology, you can’t just plug in AI and assume it will handle compliance on its own. As your wealth management firm adopts or expands its use of AI, what are the key risk areas to watch, and how can you ensure AI is managed effectively while staying aligned with evolving regulatory expectations? Let’s take a closer look.
Related: How Generative AI Impacts Your FI’s Risk Management Program
AI risk areas
Data security and privacy
Do you know what data your AI systems and vendors access? Does it include “covered information” under Reg S-P or Reg S-ID? While AI isn’t explicitly mentioned in the regulations, registered investment advisers (RIAs) must ensure their AI systems are developed, trained, and deployed in alignment with the requirements.
Reg S-P requirements
Under the final rule, broker-dealers, RIAs, and other covered institutions must take specific steps to ensure the privacy, safeguarding, and disposal of customer information. Requirements include:
- Comprehensive Incident Response Programs: RIAs must establish written policies and procedures for detecting, responding to, and recovering from unauthorized access to customer information — including incidents involving AI systems.
- 30-Day Breach Notification: Firms must notify affected individuals within 30 days of discovering unauthorized access to sensitive customer information, such as a breach involving AI data or systems.
- Service Provider Oversight: RIAs must implement oversight procedures for third-party service providers, including AI vendors, and ensure they report breaches (or suspected breaches) involving client data within 72 hours. The adviser remains responsible for compliance even if notification duties are delegated. (A simple deadline-tracking sketch follows this list.)
- Recordkeeping Requirements: Firms must maintain documentation supporting compliance with the Safeguards and Disposal Rules for at least five years, including records of incidents, investigations, and notifications.
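To make the two notification windows above concrete, here is a minimal Python bookkeeping sketch. It assumes the 30-day clock runs from discovery and the 72-hour clock runs from when the vendor detects the incident; treat it as a date-arithmetic illustration, not legal guidance on how the windows are measured.

```python
from datetime import datetime, timedelta

def client_notification_deadline(discovered: datetime) -> datetime:
    """Latest point to notify affected individuals: 30 days from discovery."""
    return discovered + timedelta(days=30)

def vendor_report_deadline(vendor_detected: datetime) -> datetime:
    """Latest point for a vendor to report a breach to the adviser: 72 hours."""
    return vendor_detected + timedelta(hours=72)

# Hypothetical incident discovered at 9:00 a.m. on December 10, 2025.
incident = datetime(2025, 12, 10, 9, 0)
print("Notify affected individuals by:", client_notification_deadline(incident))
print("Vendor must report to the adviser by:", vendor_report_deadline(incident))
```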
Take action now: With the Reg S-P deadlines looming (larger firms with $1.5 billion or more in assets under management must comply by December 3, 2025, while smaller firms have until June 3, 2026), RIAs should ensure their data collection, storage, and usage practices meet the updated S‑P requirements, embedding safeguards across the full AI lifecycle — from development and training to deployment and continuous monitoring.
Reg S-ID requirements
While Reg S‑ID requirements aren’t new, the rise of AI and other digital tools has heightened identity theft risks, making it essential for RIAs to identify red flags, detect them when they occur, respond appropriately, and regularly update their programs. With sensitive client data at stake, ongoing SEC scrutiny, and new technology creating potential exposure points, a strong S‑ID program helps prevent financial loss, regulatory issues, and reputational damage.
Take action now: Earlier this year, the SEC charged two broker-dealers with securities violations, including failing to implement sound policies and procedures to protect customers from identity theft. Staying current on identity theft and other cybersecurity best practices is not only good risk management — it could save you millions of dollars in penalties.
Related: Essential Risk Assessments for Financial Institutions
Governance and oversight
Given the potential risks AI introduces — from algorithmic bias to data security and intellectual property concerns — governance remains a critical priority for firms.
To address these challenges, RIAs should implement human oversight, conduct sensitivity and scenario analyses, and run bias testing to ensure AI systems operate fairly and transparently. A simple example of what such a test can look like follows below.
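As an illustration, here is a minimal Python sketch of one common form of bias testing: comparing model approval rates across client segments. The sample records, the scoring function, the 0.6 approval threshold, and the ten-point disparity trigger are all hypothetical placeholders, not a prescribed methodology.

```python
from collections import defaultdict

def approval_rates(records, score_fn, threshold=0.6):
    """Share of records the model would approve, broken out by segment."""
    approved = defaultdict(int)
    totals = defaultdict(int)
    for rec in records:
        seg = rec["segment"]
        totals[seg] += 1
        if score_fn(rec) >= threshold:
            approved[seg] += 1
    return {seg: approved[seg] / totals[seg] for seg in totals}

def max_disparity(rates):
    """Largest gap in approval rates between any two segments."""
    return max(rates.values()) - min(rates.values())

# Hypothetical review sample; in practice, pull a representative holdout set.
sample = [
    {"segment": "A", "score": 0.7},
    {"segment": "A", "score": 0.5},
    {"segment": "B", "score": 0.4},
    {"segment": "B", "score": 0.3},
]
rates = approval_rates(sample, score_fn=lambda rec: rec["score"])
if max_disparity(rates) > 0.10:
    print("Disparity exceeds threshold; escalate for human review:", rates)
```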
Take action now: While there are no comprehensive AI regulations yet, now is the time to be proactive and ensure your firm and vendors use AI responsibly. Treat AI as part of your firm's larger risk management program, and communicate how your firm and vendors (including service providers) use AI in internal and external outreach.
Related: What is AI Auditing and Why Does It Matter?
Marketing and advertising
Last year, the SEC took action against two firms that claimed to use AI-enabled investment models in their marketing when they weren’t using such technology — an example of AI washing. The SEC found the advisers violated the Marketing Rule, which prohibits RIAs from releasing untrue or unsubstantiated advertisements.
While former acting SEC Chairman Mark Uyeda cautioned against overly prescriptive rules that might hinder AI innovation, investment advisers should still be careful and precise in their AI-related marketing and outreach.
Take action now: If you say you use AI, how exactly are you using it? Can you substantiate those claims? Ensure you can satisfactorily answer these questions and others from examiners and auditors down the road.
Vendor and service provider management
When monitoring your firm’s internal AI, you should also assess how your vendors use it. What systems are they deploying? What services are they supporting? Go beyond traditional vendors and consider service providers, such as marketing firms, portfolio management sub-advisers, and AI tool providers that process meeting notes or emails.
If a vendor or service provider is unresponsive or unwilling to provide information about its AI usage, consider escalating the issue to senior leadership.
Take action now: As part of your firm's vendor management program, regularly review your third parties’ controls, performance, and adherence to contractual obligations.
Related: 5 Business Continuity Red Flags in Vendor Relationships and How to Address Them
Using AI effectively and transparently
Now that we’ve identified some of the essential AI risk areas, let’s dive into how investment advisers and wealth management firms can proactively mitigate AI-related risks in their products, services, and outreach:
- Maintain strong risk management controls. Establishing measures, processes, and mechanisms to mitigate risks is crucial. Due diligence and risk assessments, qualified staff accountable for AI risk, and an AI usage policy are just a few examples of AI risk management controls.
- Proactively manage vendor relationships. Perform due diligence on third-party providers, document oversight and risk management practices, and ensure vendors adhere to your firm's compliance obligations.
- Create an AI governance policy. Set clear expectations for how your team can and cannot use AI tools. Include examples of acceptable use (e.g., drafting internal notes) and prohibited use (e.g., entering client PII into public AI systems) to ensure consistent and compliant adoption across the firm. (A sketch of one way to enforce the PII rule follows this list.)
- Review AI-generated content promptly. If your firm uses AI tools for meeting summaries or notes, establish a process for timely review. Regulators haven’t yet defined how these records will be treated during exams, but accuracy, completeness, and archival standards remain high, so treat them like any other compliance record.
- Review marketing and outreach campaigns. Before you publish or promote any marketing materials that reference AI, ensure your compliance team reviews them for accuracy, clear disclosures, appropriate audience targeting, and delivery methods. If you're using vendors to distribute content, review how they distribute it and what data or criteria are used to reach the audience.
- Enhance information security. Your IT function should focus on mitigating harm from data breaches and other cyber vulnerabilities, including AI cybersecurity risks, which arise when bad actors use AI to access systems or launch more efficient cyberattacks.
- Ensure secure storage policies. Your firm has a plethora of sensitive information to keep safe. As you implement AI or any new technology, ensure your protection and security procedures, including physical and environmental controls, are updated.
- Stay current on regulatory updates. We can expect more risk alerts and guidance from the SEC as examiners begin seeing how AI is implemented. A compliance management solution that sends automatic updates tailored to your firm and its services can save you valuable time and resources so you can focus on your AI use strategy.
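For the AI governance policy item above, here is a minimal Python sketch of a pre-submission screen that blocks prompts containing PII-like patterns before they reach a public AI tool. The regex patterns (U.S. SSNs, simple account-number shapes, email addresses) are illustrative assumptions and far from exhaustive; a production screen would use a vetted PII detector.

```python
import re

# Illustrative patterns only; tune to the identifiers your firm actually holds.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account_number": re.compile(r"\b\d{8,12}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def pii_findings(text: str) -> dict:
    """Return PII-like matches found in the text, keyed by pattern name."""
    findings = {}
    for label, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            findings[label] = matches
    return findings

def safe_to_submit(text: str) -> bool:
    """Gate a prompt before it leaves the firm for a public AI tool."""
    findings = pii_findings(text)
    if findings:
        print("Blocked: possible PII detected:", ", ".join(findings))
        return False
    return True

prompt = "Summarize the plan for client 123-45-6789."
if safe_to_submit(prompt):
    pass  # forward to the approved AI tool
```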
As you reevaluate your firm’s AI strategy, consider how you can effectively use AI while mitigating risks. As AI usage grows, so will the opportunities and challenges. Is your firm prepared for the future?
Do you know how your vendors and service providers are using AI? Learn best practices for managing third-party AI risk in our webinar.
