Your wealth management firm is probably using artificial intelligence (AI), but are you managing the risks?
AI adoption in wealth management has accelerated rapidly — 40% of investment adviser firms have implemented AI tools internally, but 44% of those firms have no formal testing or validation of their outputs. That's a significant compliance gap, and regulators know it. The SEC and FINRA have both signaled they're paying close attention to how firms — and their vendors and service providers — are using AI. In February 2026, the U.S. Department of the Treasury wrapped up a major public-private initiative to develop practical tools that help financial organizations adopt AI more securely — a signal that AI risk management is now a cross-agency priority, not just an SEC concern.
The expectation is clear: even if your firm hasn't deployed AI directly, you're still responsible for understanding how it's being used throughout your extended network.
Like any new technology, you can’t just plug in AI and assume it will handle compliance on its own. So what are the key risk areas to watch, and how can you ensure AI is managed effectively while staying aligned with evolving regulatory expectations?
Related: Emerging Securities Risks: What Firms Need to Know in 2026
AI risk areas
Data security and privacy
Do you know what data your AI systems and vendors access? Does it include “covered information” under Reg S-P or Reg S-ID? While AI isn’t explicitly mentioned in the regulations, registered investment advisers (RIAs) must ensure their AI systems are developed, trained, and deployed in alignment with the requirements.
Reg S-P requirements
Under the final rule, broker-dealers, RIAs, and other covered institutions must take specific steps to ensure the privacy, safeguarding, and disposal of customer information. Requirements include:
- Comprehensive Incident Response Programs: RIAs must establish written policies and procedures for detecting, responding to, and recovering from unauthorized access to customer information — including incidents involving AI systems.
- 30-Day Breach Notification: Firms are required to notify affected individuals within 30 days of discovering unauthorized access to sensitive customer information, such as a breach involving AI data or systems.
- Service Provider Oversight: RIAs must implement oversight procedures for third-party service providers, including AI vendors, and ensure they report breaches (or suspected breaches) involving client data within 72 hours. The adviser remains responsible for compliance even if notification duties are delegated.
- Recordkeeping Requirements: Firms must maintain documentation supporting compliance with the Safeguards and Disposal Rules for at least five years, including records of incidents, investigations, and notifications.
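The deadlines above lend themselves to simple automated tracking. Below is a minimal sketch of a deadline calculator for a discovered incident — the helper name `incident_deadlines` and the record structure are illustrative assumptions, not a prescribed implementation, and a real program would also account for the "at least" nature of the five-year retention floor.

```python
from datetime import datetime, timedelta

# Deadlines drawn from the Reg S-P requirements listed above.
NOTIFY_INDIVIDUALS = timedelta(days=30)   # 30-day breach notification
VENDOR_REPORT = timedelta(hours=72)       # service provider breach report
RECORD_RETENTION_YEARS = 5                # minimum recordkeeping period

def incident_deadlines(discovered_at: datetime) -> dict:
    """Return key compliance deadlines for an incident discovered at the given time."""
    return {
        "notify_individuals_by": discovered_at + NOTIFY_INDIVIDUALS,
        "vendor_report_due_by": discovered_at + VENDOR_REPORT,
        "retain_records_until": discovered_at.replace(
            year=discovered_at.year + RECORD_RETENTION_YEARS
        ),
    }

deadlines = incident_deadlines(datetime(2026, 1, 10, 9, 0))
print(deadlines["notify_individuals_by"])  # 2026-02-09 09:00:00
```

Even a sketch like this makes the point: once a breach is discovered, the 30-day and 72-hour clocks start immediately, so discovery timestamps need to be captured and documented.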
Take action now: The large-firm deadline (AUM of $1.5 billion or more) passed on December 3, 2025, and smaller firms must comply by June 3, 2026. Whether your firm is big or small, ensure your data collection, storage, and usage practices meet the requirements with proper safeguards across the full AI lifecycle, from development and training to deployment and ongoing monitoring.
Related: Understanding the SEC’s Regulation S-P Vendor and Incident Response Requirements
Reg S-ID requirements
While Reg S‑ID requirements aren’t new, the rise of AI and other digital tools has heightened identity theft risks, making it essential for RIAs to identify red flags, detect them when they occur, respond appropriately, and regularly update their programs. With sensitive client data at stake, ongoing SEC scrutiny, and new technology creating potential exposure points, a strong S‑ID program helps prevent financial loss, regulatory penalties, and reputational harm.
Take action now: Earlier this year, the SEC charged two broker-dealers with securities violations, including failing to implement sound policies and procedures to protect customers from identity theft. Staying current on identity theft and other cybersecurity best practices is not only good risk management — it could save you millions of dollars in penalties.
Related: Essential Risk Assessments for Financial Institutions
Governance and oversight
Given the potential risks AI introduces — from algorithmic bias to data security and intellectual property concerns — AI governance remains a critical priority for firms.
To address these challenges, RIAs should implement human oversight, conduct sensitivity and scenario analyses, and run bias testing to ensure AI systems operate fairly and transparently.
Take action now: The SEC's 2026 Examination Priorities identified "Emerging Financial Technology" as a key risk area. FINRA's enterprise-wide initiative signals the same expectation from another direction. Examiners will expect RIAs to inventory AI use across the firm — including by affiliates and service providers — and show that governance policies are actually being followed, not just written down.
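An AI-use inventory doesn't need to be elaborate to be useful. Here's a minimal sketch of what one record might capture — the field names and example entries are assumptions for illustration, not a regulatory schema:

```python
from dataclasses import dataclass, field

# Illustrative inventory record covering AI use by the firm, its
# affiliates, and its service providers.
@dataclass
class AIUseRecord:
    tool: str                 # e.g., a meeting-notes summarizer
    operator: str             # "internal", "affiliate", or "vendor"
    data_accessed: list = field(default_factory=list)  # e.g., ["client emails"]
    last_validated: str = ""  # date outputs were last tested or reviewed

inventory = [
    AIUseRecord("meeting-notes summarizer", "vendor", ["client emails"], "2026-01-15"),
    AIUseRecord("portfolio screening model", "affiliate", ["holdings data"]),
]

# Flag entries with no documented validation — the gap examiners look for.
unvalidated = [r.tool for r in inventory if not r.last_validated]
print(unvalidated)  # ['portfolio screening model']
```

The point of the `last_validated` field is that a written policy alone isn't enough — the inventory should produce evidence that outputs are actually being tested.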
Related: What is AI Auditing and Why Does It Matter?
Marketing and advertising
Last year, the SEC took action against two firms that claimed to use AI-enabled investment models in their marketing when they weren’t using such technology — an example of AI washing. The SEC found the advisers violated the Marketing Rule, which prohibits RIAs from disseminating untrue or unsubstantiated advertisements.
While former acting SEC Chairman Mark Uyeda cautioned against heavy-handed rules that might hinder AI innovation, investment advisers should still be careful and precise in their AI-related marketing and outreach.
Take action now: In December 2025, the SEC issued a Marketing Rule risk alert flagging missing disclosures on testimonials and endorsements, third-party ratings used without due diligence, and advisers whose policies reflected the rule, but whose practices didn't. If your firm uses testimonials, influencers, referral programs, or third-party ratings, make sure what you do matches what your policies say.
Vendor and service provider management
When monitoring your firm’s internal AI, you should also assess how your vendors use it. What systems are they deploying? What services are they supporting? Go beyond traditional vendors and consider service providers, such as marketing firms, portfolio management sub-advisers, and AI tool providers that process meeting notes or emails.
This includes affiliates. Entities that feel like internal relationships can still function as third parties when it comes to AI oversight obligations. Regulators expect RIAs to understand how AI is being used throughout their extended network — not just by arm's-length vendors.
If a vendor or service provider is unresponsive or unwilling to provide information about its AI usage, consider escalating the issue to senior leadership.
Take action now: As part of your firm's vendor management program, regularly review your third parties’ controls, performance, and adherence to contractual obligations.
Download the Checklist: Vendor and Service Provider Due Diligence
Using AI effectively and transparently
Now that we’ve identified some of the essential AI risk areas, let’s dive into how investment advisers and wealth management firms can proactively mitigate AI-related risks in their products, services, and outreach:
- Maintain strong risk management controls. Establishing measures, processes, and mechanisms to mitigate risks is crucial. Due diligence and risk assessments, qualified staff for risk accountability, and an AI usage policy are just a few examples of AI risk management controls.
- Proactively manage vendor relationships. Perform due diligence on third-party providers, document oversight and risk management practices, and ensure vendors adhere to your firm's compliance obligations.
- Create an AI governance policy. Set clear expectations for how your team can and cannot use AI tools. Include examples of acceptable use (e.g., drafting internal notes) and prohibited use (e.g., entering client PII into public AI systems) to ensure consistent and compliant adoption across the firm.
- Review AI-generated content promptly. If your firm uses AI tools for meeting summaries or notes, establish a process for timely review. Regulators haven’t yet defined how these records will be treated during exams, but accuracy, completeness, and archival standards remain high, so treat them like any other compliance record.
- Review marketing and outreach campaigns. Before you publish or promote any marketing materials that reference AI, ensure your compliance team reviews them for accuracy, clear disclosures, appropriate audience targeting, and delivery methods. If you're using vendors to distribute content, review how they distribute it and what data or criteria are used to reach the audience.
- Enhance information security. Work with IT to mitigate harm from data breaches and other cyber vulnerabilities, including AI-specific risks that arise when bad actors use AI to access systems or mount more efficient cyberattacks.
- Ensure secure storage policies. Your firm holds a large volume of sensitive information to keep safe. As you implement AI or any new technology, ensure your protection and security procedures, including physical and environmental controls, are updated.
- Stay current on regulatory updates. We can expect more risk alerts and guidance from the SEC as examiners begin seeing how AI is implemented. A compliance management solution that sends automatic updates tailored to your firm and its services can save you valuable time and resources so you can focus on your AI use strategy.
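The prohibited-use example above — entering client PII into public AI systems — can be backed by a simple automated screen before text leaves the firm. The sketch below uses a few illustrative regex patterns; a production control would rely on a vetted data loss prevention (DLP) tool rather than these simplified checks:

```python
import re

# Illustrative patterns for common U.S. identifiers. These are
# simplified examples, not a complete or production-grade PII screen.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account_number": re.compile(r"\b\d{8,12}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_for_pii(text: str) -> list:
    """Return the PII categories detected in text destined for a public AI tool."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

hits = screen_for_pii("Client SSN 123-45-6789 asked about rollover options.")
print(hits)  # ['ssn']
```

A screen like this works best as a gate in front of approved AI tools, paired with training so staff understand why a prompt was blocked.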
Download the Checklist: Managing Your Vendors' AI Risk
Final thoughts
As you reevaluate your firm’s AI strategy, consider how you can effectively use AI while mitigating risks. As AI usage grows, so will the opportunities and challenges. Is your firm prepared for the future?
Not sure which vendors regulators actually care about — or where to start? Watch our on-demand webinar for a practical, plain-language foundation for managing third-party relationships the right way.

