
Compliance-Defensible AI Playbook

Justin McAfee, CFA

A framework for RIA compliance officers and firm principals. Not legal advice; specific policy decisions should be reviewed with counsel.

I. Executive Summary

Registered investment advisers sit at an uncomfortable intersection: client demand, staff demand, and competitive pressure all push toward AI adoption, while the regulatory picture feels unsettled enough that many firms have defaulted to either blanket bans or ad-hoc use with no written oversight. Neither posture holds up under examination, and neither is necessary.

Artificial intelligence is now the single largest compliance concern cited by RIA compliance officers. In the 2025 Investment Management Compliance Testing Survey of 577 compliance professionals, 57% ranked AI and predictive analytics as the hottest compliance topic, surpassing anti-money laundering (41%) and cybersecurity (38%). At the same time, only 40% of firms have formally adopted AI for internal use, 5% use AI in client interactions, and — most tellingly — 44% of firms that have adopted AI report no formal testing or validation of its outputs. The gap between adoption and governance is where exam findings and enforcement actions are emerging.

The core thesis

Most AI use cases available to an RIA today are defensible under current law, provided four controls are documented, repeatable, and integrated into the firm's written compliance program:

  1. Human-in-the-loop review of every AI output that reaches a client, a prospect, or a regulator.
  2. A data boundary that keeps client non-public personal information (NPI) out of public models and restricts sensitive data to vetted, contract-bound vendors.
  3. Supervision and testing under Rule 206(4)-7 — tool inventory, output sampling, disclosure-match, and failure logging.
  4. Accurate disclosure in Form ADV Part 2A and marketing materials about what the firm does — and does not — do with AI.

The SEC's enforcement record to date is consistent with this framing. The two headline AI-washing cases — Delphia and Global Predictions, settled in March 2024 — were not about the sophistication of AI tools. They were about saying something in ADV filings, press releases, or on a website that was not true. The Marketing Rule, the Compliance Rule, the Recordkeeping Rule, and Section 206 fiduciary principles already reached the conduct. No new rule was needed.

Where the rulemaking landscape stands

The SEC's 2023 proposed rule on Conflicts of Interest Associated with the Use of Predictive Data Analytics was formally withdrawn on June 12, 2025, as part of a package of 14 rescinded proposals. That eliminates the prescriptive conflict-neutralization regime many firms were bracing for, but it does not reduce existing obligations. The 2026 Division of Examinations Priorities, published November 17, 2025, elevate AI oversight across multiple categories — cybersecurity, automated investment tools, fiduciary duty, and marketing — and signal that examiners will test whether firms' AI disclosures match their actual practices.

How to use this document

This playbook moves from framework to implementation. Section II maps the regulatory stack that actually applies, including the ERISA overlay for advisers who serve retirement plans or IRAs. Section III is a tool-category risk matrix — the operational core — rating common AI use cases Green, Yellow, or Red and identifying the controls that move a tool from one rating to another, followed by a special-focus section on AI that touches the investment stack directly. Section IV details the four controls that make most AI use defensible, and a companion section addresses E&O and cyber insurance considerations. Section V tracks what is likely to change in the next twelve to twenty-four months. Regulatory guidance and vendor capabilities are moving fast — the framework below is a snapshot and should be read against whatever has shifted since.

Nothing in this playbook constitutes legal advice. Specific policy decisions should be reviewed with outside counsel and reflected in the firm's written compliance program under Advisers Act Rule 206(4)-7.


II. The Regulatory Stack (What Actually Applies)

No federal rule today addresses AI in the adviser context by name. That is frequently misread as "the rules don't apply yet." They do. The SEC and FINRA have consistently taken the position that existing rules are technology-neutral and apply to AI use the same way they apply to any other tool an adviser deploys. The regimes below are the ones most often implicated in RIA AI use cases.

A. SEC Division of Examinations Focus on AI

The Division of Examinations has, in a series of Risk Alerts and Examination Priorities going back to 2023, made clear that compliance policies, disclosures, and marketing practices are core examination areas — all of which are implicated by AI adoption — even before any AI-specific rule exists.

The 2024 and 2025 Examination Priorities added explicit AI-related focus areas, and the Fiscal Year 2026 Priorities (published November 17, 2025) elevate the scrutiny further. The 2026 Priorities specifically call out:

  • Accurate representations about AI capabilities — the so-called "AI washing" risk. Marketing materials, Form ADV disclosures, and client communications must accurately describe the extent, nature, and limitations of the firm's AI usage.
  • Adequate policies and procedures governing AI usage, including for fraud prevention, back-office operations, anti-money laundering, and trading.
  • Alignment between a firm's AI disclosures and what is actually happening inside the firm.
  • Regulation S-P compliance, particularly incident response programs that account for AI vendors and third-party data flows.

Practical takeaway: examiners are now asking for a firm's AI inventory, its written AI policy, its vendor due-diligence file, and evidence of how it tests AI outputs. Firms that can produce these materials quickly tend to fare better than firms that cannot.

B. Predictive Data Analytics Proposal — Withdrawn

In July 2023, the SEC proposed a rule that would have required broker-dealers and investment advisers to eliminate or neutralize conflicts of interest arising from "covered technologies" used in investor interactions. The definition was broad enough to implicate calculators, spreadsheets, and most client-facing software, and the proposal drew widespread industry pushback, including a detailed comment letter from the Investment Adviser Association.

On June 12, 2025, the SEC formally withdrew the proposal as one of fourteen Gensler-era rulemakings the current Commission has opted not to finalize. Any future federal AI-specific rule for advisers would require a new proposal and comment period.

This is good news for planning purposes — there is no prescriptive conflict-neutralization regime on the horizon — but it is not a reduction in obligations. The underlying risks the proposal addressed (AI bias, opaque decision-making, conflicts of interest) still exist and are policed through existing anti-fraud, fiduciary, and compliance rules. As Proskauer observed post-withdrawal, the responsibility has shifted from regulatory mandate to firm self-governance.

C. The Marketing Rule — Advisers Act Rule 206(4)-1

The Marketing Rule is the most active enforcement vector for AI-related conduct by RIAs. Every SEC AI-specific settlement against an adviser to date has included a Marketing Rule charge. The rule applies to any direct or indirect communication offering advisory services, including websites, social media, email, pitch decks, and one-on-one communications that include hypothetical performance.

What the Marketing Rule requires when AI is involved:

  • Claims about AI use must be true and substantiated. Statements like "AI-driven forecasts," "proprietary machine learning model," or "first regulated AI financial advisor" must be supported by the adviser's actual practices and by documents the firm can produce on request.
  • AI-generated testimonials and endorsements fall under the same disclosure and oversight requirements as any other testimonial. Synthetic voices, synthetic likenesses, and AI-composed quotes attributed to clients are high-risk and generally should be avoided. Real testimonials edited or enhanced by AI require, at minimum, that the disclosures, compensation status, and compliance oversight records reflect the underlying conduct accurately.
  • Hypothetical performance includes output from AI models that project returns, backtest strategies, or illustrate portfolios. The rule requires written policies reasonably designed to ensure hypothetical performance is shown only to audiences for whom it is relevant, with sufficient context to be understood, and the adviser must have a reasonable basis to believe the presentation complies. AI-generated projections shown on a public website are a recurring enforcement target.
  • Records must substantiate both the content of the advertisement and the basis for believing any testimonial or third-party material complies.

The December 2025 Risk Alert on Marketing Rule observations called out firms that had adopted updated policies but failed to implement them consistently. Drafting the policy is not enough; training, supervision logs, and a review workflow need to exist.

The AI-washing enforcement pattern. Two settled matters in March 2024 — Delphia (USA) Inc. and Global Predictions, Inc. — established how the SEC applies existing rules to overstated AI claims. Delphia paid $225,000; Global Predictions paid $175,000. In both cases, the advisers represented AI and machine-learning capabilities in Form ADV, press releases, websites, and client communications that they did not, in fact, possess. The Commission charged violations of Section 206(2), Section 206(4), the Marketing Rule, and the Compliance Rule. Notably, Delphia continued making misleading statements for roughly two years after a 2021 SEC examination had flagged the issue, which aggravated the compliance-program charge. The playbook lesson: fix disclosures promptly and make sure all channels are updated, not just Form ADV.

D. The Compliance Rule — Advisers Act Rule 206(4)-7

The Compliance Rule requires every SEC-registered adviser to (i) adopt and implement written policies and procedures reasonably designed to prevent violations of the Advisers Act, (ii) review those policies at least annually, and (iii) designate a Chief Compliance Officer responsible for administration.

AI implicates the Compliance Rule in two ways. First, the use of an AI tool in any business process — marketing drafting, client communication, meeting documentation, research, trading — is a change to firm processes and should be reflected in the written compliance program. Second, failure to have policies that address AI has itself been charged as a Compliance Rule violation, most visibly in Delphia and Global Predictions. A firm that does not have a written AI acceptable use policy, a vendor diligence process, and a testing protocol should not be surprised when an examiner flags the gap.

The annual review contemplated by the rule is a natural moment to re-inventory AI tools, test disclosures against actual practice, and refresh vendor assessments. Firms should document the review with a written memo the CCO signs.

E. The Recordkeeping Rule — Advisers Act Rule 204-2

Rule 204-2 requires advisers to keep books and records — including communications that recommend or propose investment action, records substantiating performance claims, and records supporting advertisements — for five years, with the first two years readily accessible. The rule is also technology-neutral: an AI-generated summary of a client meeting, a prompt-and-output pair used to draft a client email, or a chatbot transcript that discussed a recommendation can all be records within scope.

Practical implications for AI use:

  • Meeting notes generated by an AI assistant are records of the client conversation and should flow into the firm's system of record (typically the CRM) on the same retention schedule as any other advisor note.
  • Prompt and output logs for AI tools that help draft client communications should be retained when the output reaches the client. Several vendors now retain these automatically; confirm retention duration and export capability during diligence.
  • If marketing claims or disclosures describe AI use, the records that substantiate those claims (model documentation, vendor contracts, testing records) must be available for production.
  • Deleting AI outputs on a whim — or allowing a vendor to auto-delete under a "zero retention" setting — may create a recordkeeping problem if the output constitutes a required record. The solution is usually to export to the firm's archive before deletion, not to disable the vendor's privacy settings.
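
The export-before-delete sequence is simple enough to operationalize rather than leave to memory. Below is a minimal sketch of the ordering, assuming a local JSON archive as the firm's books-and-records store; the vendor-side deletion call is a placeholder, since vendors expose deletion through different mechanisms:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def archive_then_delete(record: dict, archive_dir: str, vendor_delete) -> Path:
    """Write an AI output to the firm's archive, then delete it vendor-side.

    `vendor_delete` is a hypothetical callable standing in for whatever
    export/delete mechanism the vendor actually offers. The only invariant
    that matters is the ordering: archive first, delete second.
    """
    Path(archive_dir).mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = Path(archive_dir) / f"{record['record_id']}-{stamp}.json"
    path.write_text(json.dumps(record, indent=2), encoding="utf-8")
    vendor_delete(record["record_id"])  # runs only after the archive write succeeds
    return path
```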

F. Regulation S-P and State Privacy Laws

On May 16, 2024, the SEC adopted significant amendments to Regulation S-P. Larger RIAs (those with $1.5 billion or more in AUM) had to comply by December 3, 2025; smaller RIAs must comply by June 3, 2026. The amendments require every covered RIA to:

  • Develop, implement, and maintain a written incident response program designed to detect, respond to, and recover from unauthorized access to or use of customer information.
  • Notify affected individuals within 30 days of determining that sensitive customer information was — or is reasonably likely to have been — accessed or used without authorization.
  • Oversee service providers with access to customer information through ongoing due diligence and monitoring. This directly implicates AI vendors.
  • Maintain specified records related to the above for five years.

For AI purposes, the amendments do three things. First, they make the firm responsible for what its AI vendors do with client data — the service provider oversight language leaves no room for a "the vendor handles it" defense. Second, they expand what counts as "customer information" to include data the firm receives from third parties, which covers data flowing through AI tools integrated with CRM, custodian, and financial planning platforms. Third, they create a 30-day clock that starts when the firm determines unauthorized access is reasonably likely — which means AI-related incidents (a prompt injection, a data exfiltration from a vendor, a misconfigured integration) need to be caught, triaged, and documented fast.
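
The 30-day arithmetic is trivial, but firms routinely mis-anchor it to the incident date rather than the determination date. A minimal sketch of deadline tracking follows; the field names are illustrative rather than drawn from the rule text:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class SPIncident:
    """Tracks a customer-information incident against the 30-day notice window."""
    description: str
    discovered: date                   # when the anomaly was first detected
    determination: date | None = None  # when the firm determined unauthorized
                                       # access was reasonably likely

    def notice_deadline(self) -> date | None:
        # The 30-day clock runs from the determination date, not discovery.
        if self.determination is None:
            return None
        return self.determination + timedelta(days=30)

incident = SPIncident(
    description="Misconfigured AI meeting-notes integration exposed client notes",
    discovered=date(2026, 2, 3),
    determination=date(2026, 2, 5),
)
print(incident.notice_deadline())  # 2026-03-07
```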

State privacy overlays. State laws interact with AI in ways that can catch RIAs operating nationally by surprise:

  • California amended the CCPA via AB 1008 to clarify that personal information can exist in "abstract digital formats," including AI systems capable of outputting personal information. The practical effect is that AI model outputs containing personal data fall squarely within CCPA scope. SB 1223 added neural data to the definition of sensitive personal information.
  • New York's cybersecurity regulation (23 NYCRR Part 500), in an October 16, 2024 industry letter, addressed AI specifically. Although primarily aimed at entities regulated by the NY Department of Financial Services, its four-risk framework (AI-enabled social engineering, AI-enhanced cyberattacks, risks from the vast data AI requires, and third-party AI supply-chain risk) has become a widely referenced playbook for cybersecurity diligence more generally.
  • The Colorado AI Act, the first comprehensive state AI law in the United States, expressly reaches financial services through its definition of "consequential decisions." A 2025 special session delayed enforcement to June 30, 2026. Developers and deployers of high-risk AI systems will face documentation, impact assessment, and notice obligations.
  • Two-party consent states (California, Florida, Pennsylvania, Illinois, Massachusetts, Washington, and others) require all-party consent before recording a conversation. Recording-based AI meeting tools need a consent workflow that defaults to the stricter standard rather than the adviser's home state rule.
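
A consent workflow that defaults to the stricter standard is straightforward to encode. Here is a minimal sketch; the state set is partial and illustrative, and the authoritative list should be maintained with counsel rather than hard-coded:

```python
# Partial, illustrative list of all-party-consent states; maintain the
# authoritative list with counsel rather than hard-coding it.
ALL_PARTY_CONSENT = {"CA", "FL", "PA", "IL", "MA", "WA"}

def needs_all_party_consent(adviser_state: str, client_state: str | None) -> bool:
    """Apply the stricter of the home-state and client-state rules.

    An unknown client location defaults to the stricter standard, which is
    the conservative posture recommended above.
    """
    if client_state is None:
        return True
    return bool({adviser_state.upper(), client_state.upper()} & ALL_PARTY_CONSENT)

# A meeting tool's consent gate would then refuse to record until consent exists:
assert needs_all_party_consent("TX", "CA") is True   # client in California
assert needs_all_party_consent("TX", None) is True   # unknown location
```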

G. Fiduciary Duty and Reg BI Parallels

Section 206 of the Advisers Act imposes a fiduciary duty that includes a duty of care and a duty of loyalty. Nothing in AI adoption changes that. If an AI tool influences what is recommended to a client, the adviser remains responsible for ensuring the recommendation is in the client's best interest and that any material conflicts are disclosed. For dual registrants, Regulation Best Interest applies a parallel care obligation to broker-dealer recommendations. FINRA has been explicit in Regulatory Notice 24-09 and its 2026 Annual Regulatory Oversight Report that existing rules — including Rule 3110 supervision, Rule 2210 communications, and Rule 4511 recordkeeping — apply to generative AI use the same way they apply to any other tool.

The operational implication: "black box" is not a defense. Former Director of the SEC's Division of Investment Management William Birdthistle reminded advisers in March 2024 that it is not acceptable for an adviser's understanding of AI to be "information goes in, magic happens, something comes out" — a frame that has since been echoed in FINRA guidance and in SEC examination questions. Firms are expected to be able to explain, at a reasonable level of detail, what their AI tools do, what data they use, how outputs are reviewed, and how errors are caught.

H. ERISA and the Retirement-Plan Overlay

RIAs that advise 401(k) or other ERISA plans, provide rollover recommendations, or serve participant-level retirement clients are subject to the Department of Labor's fiduciary framework in addition to the Advisers Act. AI adoption does not change that framework, but it creates new surfaces on which ERISA's prudence requirements are tested.

The standard is process-based. ERISA § 404(a)(1)(B) imposes a "prudent expert" standard that is higher than common-law trust prudence and is judged on whether the fiduciary's conduct was prudent at the time, not on outcome. AI outputs produced without documented selection rationale, bias testing, or human review are particularly exposed because they leave no trail of the prudent-process evaluation ERISA requires.

The current rules-of-the-road. The DOL's 2024 Retirement Security Rule was vacated by federal courts in mid-2024 before its effective date, DOL dropped its Fifth Circuit appeal in November 2025, and EBSA restored the prior framework on March 18, 2026. Until a replacement is adopted, the pre-amendment PTE 2020-02 — Impartial Conduct Standards, written fiduciary acknowledgment, documented specific reasons for the rollover, annual retrospective review — is the operative compliance regime for rollover advice. DOL has not issued AI-specific fiduciary guidance for retirement plans; practitioners apply § 404 prudence and the EBSA cybersecurity guidance (originally 2021, reconfirmed 2024) by analogy.

Practical implications for RIAs advising plans or IRAs:

  • Vendor selection and monitoring of an AI tool is itself a fiduciary act. Diligence documentation — bias testing, data handling, contractual protections — must be retained as a prudent-process record, separate from Advisers Act vendor diligence.
  • If AI materially informs rollover analysis, document how it interacts with PTE 2020-02's specific-reasons requirement. The PTE requires a written explanation of why the rollover is in the participant's best interest; an AI-generated rationale must be reviewed, verified, and authored by the fiduciary.
  • Participant-facing AI (chatbots, projection tools, enrollment wizards) must satisfy ERISA fiduciary communication standards in addition to the Advisers Act requirements that already push those tools to Red in Section III.
  • Systematic outcomes that favor the plan sponsor or a particular fund lineup — even inadvertently, through training data — can expose the fiduciary to a process claim under § 404.

III. Tool Category Risk Matrix

The matrix below is the operational core of this playbook. Each entry is a common AI use case available to RIAs today. The rating reflects the typical risk profile when the listed controls are in place:

  • GREEN — routine adoption appropriate with standard policy, vendor diligence, and human review.
  • YELLOW — adoption appropriate but requires heightened controls (consent workflow, data handling restrictions, elevated supervision, or specific client disclosure).
  • RED — generally should not be adopted today, or should be used only under very narrow, exception-based conditions with express client consent and legal review.

Ratings assume the firm has the foundational program described in Section IV. A Green tool adopted without any policy, diligence, or review is not Green. The tools named on each entry's Examples line are illustrative of the vendor category. Inclusion does not constitute an endorsement, and omission does not constitute a criticism. Specific vendor evaluation must be conducted in light of the firm's circumstances.

Each entry below lists the use case and its rating, illustrative examples, the regulatory hooks implicated, the required controls, and the corresponding vendor diligence items.
1. Meeting transcription & notes
YELLOW
Examples: Jump, Zocks, Fathom, Fellow, Zoom AI Companion, Microsoft Copilot
Regulatory hooks:
  • Rule 204-2 (notes = records)
  • Rule 206(4)-7 (supervision)
  • Reg S-P (NPI to vendor)
  • State two-party consent
  • Fiduciary duty (record accuracy)
Required controls:
  • Written consent under the stricter of home-state or client-state rules
  • Advisor review before CRM post — no auto-post
  • 5-year retention mirroring CRM
  • Prefer non-recording tools for two-party-consent clients
  • Flag notes that discuss specific recommendations
Vendor diligence:
  • SOC 2 Type II (not Type I)
  • DPA with explicit "no training on our data"
  • Zero-retention audio option or documented schedule
  • U.S. hosting preferred
  • CRM integration fidelity (Redtail, Wealthbox, Salesforce FSC)
  • Encryption at rest & in transit; SSO/MFA
  • Incident notice consistent with Reg S-P 30-day clock

2. Client email drafting
YELLOW
Examples: Microsoft Copilot Enterprise, enterprise ChatGPT, Gemini Business, internal RAG
Regulatory hooks:
  • Rule 204-2 (outbound comms)
  • Rule 206(4)-7
  • Marketing Rule if offering services or including performance
  • Reg S-P & state privacy (NPI in prompts)
  • Section 206 fiduciary duty
Required controls:
  • Enterprise / tenant-isolated models only — no consumer tools
  • Human review and send by licensed person; no auto-send
  • Ban on account numbers, full names + balances, SSNs, DOBs in prompts
  • Archive outbound mail per normal protocol
  • Quarterly CCO sampling: drafted vs. sent
Vendor diligence:
  • Enterprise tier, no model training on firm data (in writing)
  • Tenant isolation — no cross-customer leakage
  • Geographic data residency
  • Audit log access
  • DLP integration (SSN / account exfiltration)
  • SOC 2 Type II; CCPA / GDPR posture as applicable

3. Research & market summaries
GREEN
Examples: Perplexity Enterprise, internal RAG over research feeds, Bloomberg AI, Claude for research
Regulatory hooks:
  • Rule 204-2 if supporting a recommendation
  • Marketing Rule if shared with prospects
  • Fiduciary duty of care (verify citations / hallucinations)
Required controls:
  • Analyst verifies every cited figure at source before client use
  • Label output "AI-drafted, analyst-reviewed" until finalized
  • No raw AI summaries externally without licensed review
  • Retain citations alongside summaries that support recommendations
Vendor diligence:
  • Citation integrity (tool shows sources)
  • No training on firm prompts
  • Enterprise tier with tenant isolation
  • SOC 2 Type II
  • Retrieval scope documented

4. CRM enrichment & data entry
GREEN
Examples: AI agents that move meeting notes to CRM fields, auto-populate tasks, draft follow-ups
Regulatory hooks:
  • Rule 204-2 (CRM = advisory records)
  • Rule 206(4)-7
  • Reg S-P (NPI between systems)
Required controls:
  • Write access limited to structured fields and notes
  • No auto-approval of tasks affecting client accounts
  • Audit trail retained for the underlying record's period
  • Periodic reconciliation against source system of record
Vendor diligence:
  • Exportable bidirectional sync logs
  • Role-based access controls
  • DPA covers sub-processors
  • SOC 2 Type II
  • Incident SLA meets Reg S-P timelines

5. Financial plan drafting
YELLOW
Examples: AI in eMoney, MoneyGuidePro, RightCapital, or LLMs drafting narrative sections
Regulatory hooks:
  • Section 206 fiduciary duty
  • Marketing Rule if projections used to solicit
  • Rule 204-2 (plan = record)
  • Form ADV 2A methodology disclosure
Required controls:
  • Licensed planner signs off on every deliverable; human is author
  • Projections anchored to firm's standard assumption set; deviations documented
  • Update Form ADV 2A Items 4 & 8 if AI is material to methodology
  • No full client data files in consumer-tier LLMs
Vendor diligence:
  • Planning-software module or tenant-isolated enterprise AI
  • Assumptions and models documented and versioned
  • U.S. data residency
  • SOC 2 Type II; planning-stack integration

6. Marketing content generation
YELLOW
Examples: Blog drafts, LinkedIn posts, website copy, newsletters, brochures
Regulatory hooks:
  • Marketing Rule 206(4)-1 — largest enforcement surface
  • Rule 204-2(a)(11) (ad retention)
  • Rule 206(4)-7
Required controls:
  • CCO or designee pre-use review against marketing checklist
  • Substantiation file for every factual claim — AI does not create substantiation
  • No AI testimonials, synthetic quotes, or stock imagery presented as clients
  • AI-capability claims verified against actual practice
  • Retain final version and approval record for 5 years
Vendor diligence:
  • Enterprise tier, no training on prompts
  • Output filter for prohibited claims (best, guaranteed, risk-free, etc.)
  • Watermarking / provenance for images
  • Clear terms on rights to output

7. Client-facing chatbots
RED
Examples: LLM-powered chat on firm website, in client portal, or for prospect intake
Regulatory hooks:
  • Section 206 — any conversation can drift into advice
  • Marketing Rule if soliciting or describing services
  • Form ADV 2A disclosure (materially new channel)
  • Reg S-P if the bot takes NPI
  • State wiretapping / chat-recording laws
  • Reg BI for dual registrants
Required controls:
  • Strict scope limit to non-advisory topics with hard guardrails
  • Unambiguous "AI, not advice" disclosure
  • Escalation to a licensed human for anything advice-adjacent
  • Conversation logs retained; regular compliance sampling
  • Legal review pre-launch; Form ADV 2A amendment
Vendor diligence:
  • Testable guardrails / refusal behavior
  • Full conversation logs exportable and retainable
  • Prompt-injection hardening
  • Consumer PII handling
  • SOC 2 Type II; CCPA posture for California residents
  • Contractual cap on model autonomy

8. Voice cloning & video avatars
RED
Examples: Advisor voice synthesis, deepfake avatars for video, AI-voiced client outreach
Regulatory hooks:
  • Marketing Rule — high misleading-representation risk
  • Section 206 anti-fraud
  • State right-of-publicity laws
  • NYDFS Part 500 / parallel cyber guidance (impersonation)
  • Reg S-P (incident response if the clone is misused)
Required controls:
  • Default: do not adopt for client-facing use
  • Internal-only use requires strict access controls and a ban on external distribution
  • Firmwide deepfake-fraud training and documented callback / safe-word protocol
  • No AI-voiced outreach without express written consent and legal review
Vendor diligence:
  • Generally unsuitable for client-facing work regardless of vendor capability
  • For internal training: watermarking, C2PA content credentials, access logs, revocation capability
  • Contractual prohibition on deepfake creation of real individuals without consent

How to use the matrix

  • Treat the rating as a starting point, not a conclusion. A Yellow tool used with stronger controls than typical may be defensible as Green for a particular firm; a Green tool used with weaker controls than typical may be Yellow or Red in practice.
  • The Required Controls above are cumulative on top of the four foundational controls in Section IV. A tool is never "safe" in isolation — the firm's overall program is what makes any individual tool defensible.
  • Vendor Diligence items should be refreshed annually and captured in a file the CCO can produce on examination. A one-page Vendor Diligence Summary per tool is a reasonable artifact to produce.
  • When in doubt — particularly for any Red-rated item — involve outside counsel before deployment, not after.
  • The matrix covers AI at the operational layer. AI that touches the investment process itself — portfolio analytics, trade allocation, best execution, agentic tools that place orders or move funds — is addressed separately in the Special Focus section that follows.

Special Focus: AI in the Investment Stack

The risk matrix above covers AI at the operational layer — meetings, emails, marketing, CRM, planning support. A different and in some ways sharper set of compliance questions arises when AI touches the investment layer: portfolio analytics, trade recommendations, rebalancing, best execution, trade allocation, and "agentic" AI that can place orders or move funds. These uses sit directly on top of the Advisers Act fiduciary duty, and the guardrails are stricter than anything in Section III.

Best execution and the duty of care

The SEC's 2019 Fiduciary Interpretation Release (IA-5248) frames the duty of care as including three sub-duties: advice in the client's best interest, best execution where the adviser selects broker-dealers, and monitoring over the life of the relationship. AI-driven order routing or best-execution analysis does not dilute those duties — it becomes the evidence of how the firm discharges them. Examiners will ask, in substance: how does the firm evaluate best execution when a model the firm does not fully understand is choosing where orders go? A defensible answer requires documented testing, periodic review against alternative venues, and a named supervisor who can articulate the methodology.

Trade allocation and cherry-picking exposure

Recent SEC enforcement — including the Western Asset / Ken Leech matter (November 2024) — relies heavily on statistical analysis of trade blotters to surface allocation patterns that systematically favor certain accounts. An AI allocation system that produces the same patterns — even inadvertently, through skewed training data or optimization for a metric other than fairness — creates the same enforcement exposure. The defense is a documented, auditable allocation methodology and periodic fairness testing, not model sophistication.

Robo-advice and code-as-advice risk

The closest pre-generative-AI precedent is the SEC's 2017 IM Guidance on Robo-Advisers, which addressed disclosure, information-gathering for suitability, and compliance programs for automated advice. The cleanest precedent for "model produced bad output" liability is the SEC's $9 million settlement with Betterment (April 2023), which charged coding errors in an automated tax-loss-harvesting service under Advisers Act antifraud and compliance provisions. Earlier, Wealthfront settled for $250,000 (December 2018) over related TLH misstatements. Neither case involved generative AI, but both illustrate that the SEC will charge an adviser for client harm caused by a production system whether the output's "author" is a line of Python or a language model.

Custody Rule and agentic AI

Advisers Act Rule 206(4)-2 defines custody broadly: an adviser has custody whenever it has "any authority to obtain possession" of client funds. The Division of Investment Management's Custody Rule FAQ (Question II.4) established that an adviser with a standing letter of authorization is deemed to have custody unless seven specific conditions are satisfied — the client owns the third-party account, the adviser cannot change the payee, and so on.

An agentic AI with discretion over the amount, payee, or timing of fund movements almost certainly puts the adviser into custody, which triggers the surprise-examination requirement, separate reporting, and heightened recordkeeping. The SEC has issued no no-action letter on point; the SLOA framework is applied by analogy only. Until the Commission speaks, firms deploying agents that can initiate fund movement should assume custody consequences and involve counsel before go-live.

"Black box" is especially hard to defend in the investment process

Rule 206(4)-7 requires a compliance program reasonably designed to prevent violations of the Advisers Act. The 2026 Exam Priorities explicitly call for "written policies, procedures, or guidance that address the acceptable uses of AI and provide for appropriate human oversight." The SEC has not adopted an explicit explainability rule, but the combination of these requirements — plus the fiduciary duty of care — means an adviser who cannot explain, at a reasonable level of detail, what an investment-process AI tool does and why it produced a given output is operating a program that is not reasonably designed.

Banking regulators have long applied SR 11-7 model risk management principles — validation, documentation, governance, monitoring — to any model in a revenue-critical decision process. SR 11-7 does not formally bind RIAs, but examiners increasingly treat its four pillars as the de facto expectation for firms running quantitative models, and the 2026 Priorities' human-oversight language moves in the same direction. An RIA running investment-process AI without analogous documentation is an outlier.

For dual registrants, FINRA Regulatory Notice 15-09 on algorithmic trading supervision is directly applicable and specifically requires supervision at every stage of algorithm development (not just post-deployment), written descriptions comprehensible without code review, kill-switches, and cross-algorithm manipulation surveillance.

What to do operationally

  • Inventory every AI system that produces an output used in investment decision-making. Include vendor-embedded AI — OMS copilots, portfolio-analytics assistants — alongside in-house tools.
  • For each system, document inputs, model class, training or configuration data, expected output, the human review step, and failure modes.
  • For any agent that can place orders or move funds, assume custody treatment and involve counsel before launch.
  • Test allocation and best-execution logic for systematic bias quarterly at minimum, and retain the testing (a minimal sketch follows this list).
  • Ensure the Form ADV Part 2A description of methodology matches what the AI is actually doing. A brochure that describes "fundamental analysis by the investment team" while the underlying recommendations are model-generated is the Delphia failure mode, relocated.
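
For the quarterly bias-testing item above, the statistical core is small: group allocated trades by account type and look for systematic performance skew. Below is a deliberately minimal sketch using hypothetical blotter field names; real testing should be designed against the firm's actual allocation methodology:

```python
from collections import defaultdict
from statistics import mean

def allocation_skew(trades: list[dict]) -> dict[str, float]:
    """Average same-day return by account group for allocated trades.

    `trades` rows use hypothetical blotter field names, e.g.
    {"group": "performance_fee", "day1_return": 0.004}. A gap between
    groups that persists across quarters is the cherry-picking pattern
    that blotter-based enforcement analysis is designed to surface.
    """
    by_group: dict[str, list[float]] = defaultdict(list)
    for t in trades:
        by_group[t["group"]].append(t["day1_return"])
    return {g: mean(r) for g, r in by_group.items()}

print(allocation_skew([
    {"group": "performance_fee", "day1_return": 0.0040},
    {"group": "flat_fee",        "day1_return": -0.0010},
    {"group": "performance_fee", "day1_return": 0.0035},
    {"group": "flat_fee",        "day1_return": -0.0005},
]))
```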

IV. The Four Controls That Make Anything Defensible

If the risk matrix in Section III is the question — "which tools, and how?" — this section is the answer-in-four-parts to the question examiners actually ask: what does your program look like? Every SEC AI-related settlement against an RIA to date could have been avoided, or substantially mitigated, by the four controls below. They are neither exotic nor expensive. They require discipline more than technology.

These controls should be woven into the firm's existing compliance program under Rule 206(4)-7, not maintained as a separate AI regime. Examiners want to see AI treated as one more category of business process, not as an exception.

Control 1 — Human-in-the-Loop Sign-Off

The single most important control, and the one most commonly under-documented. "Human-in-the-loop" is not a gesture — it is a named person, a defined moment, and a retained record.

What the control requires:

  • A named reviewer. For any AI output that will reach a client or a regulator, the firm's policy should identify who reviews it (by role, not individual: e.g., "the advisor on the relationship," "the CCO or designee," "a licensed supervisor"). "Somebody checks it" is not a control.
  • A defined moment of review. Before what action does the review happen — before send, before post, before filing, before CRM commit? The moment should be operationally enforced where possible (e.g., email drafting tools that require a human send action), and policy-enforced where it cannot be.
  • Documented evidence. The review leaves a trail: a sent-from field, a signed memo, a CRM note, a supervisor approval in a marketing review tool, or at minimum a logged approval event. If an examiner asks "how do you know this particular output was reviewed," the answer is a document, not a recollection.
  • Judgment, not rubber-stamping. The reviewer must have the expertise and the time to actually evaluate the output. An associate approving 200 AI-drafted emails a day is not a human in the loop; they are a bottleneck pretending to be a control.
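
One way to make the "logged approval event" concrete is an append-only record per reviewed output. The sketch below is this playbook's suggestion, not a prescribed format; any structure works that captures who reviewed what, when, and before which action:

```python
import json
from datetime import datetime, timezone

def log_approval(path: str, output_id: str, reviewer_role: str,
                 reviewer: str, action_gated: str) -> None:
    """Append one human-review event to a JSON-lines approval log.

    Append-only JSONL keeps the trail tamper-evident enough for most
    purposes and trivially exportable for an examiner.
    """
    event = {
        "output_id": output_id,          # ties back to the AI output record
        "reviewer_role": reviewer_role,  # the role named in policy
        "reviewer": reviewer,            # the individual who performed review
        "action_gated": action_gated,    # e.g. "email_send", "crm_commit"
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

log_approval("approvals.jsonl", "email-2026-0142",
             "advisor on the relationship", "j.smith", "email_send")
```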

Where the control is easy to weaken:

  • Auto-post settings. Meeting-note tools that post to CRM the instant the meeting ends eliminate the review step. Turn the auto-post off and require the advisor to review before commit.
  • Scheduled sends. AI agents that schedule client emails for future delivery without a human send action should be configured so the scheduling step itself requires approval.
  • "Pre-approved templates." A template approved once does not make every AI-generated instance approved; the instance-level review still matters.
  • High-volume marketing generation. If the firm pushes out dozens of AI-drafted social posts per week, the review process has to scale with it.

Control 2 — Data Boundary Policy

The data boundary defines what client information can touch an AI tool, which tools are approved for which data types, and what happens at the vendor on the other end. Get this wrong and the firm has a Reg S-P problem, a fiduciary-duty problem, or both.

The three tiers. The cleanest framework splits firm data into three tiers and assigns each tier to approved tools:

  • Tier 1 — Public / generic. Market commentary, publicly available research, non-client-specific questions. Any enterprise tool with a data-processing agreement is acceptable. Consumer-tier tools are tolerable only with an explicit training-opt-out confirmation.
  • Tier 2 — Internal and redacted. Anonymized case studies, internal memos, numerical examples with no client identifiers. Enterprise tools with tenant isolation and documented no-training clauses only. Consumer tools prohibited.
  • Tier 3 — NPI, MNPI, and privileged content. Full names tied to balances, Social Security numbers, account numbers, dates of birth, client health information, details that identify a specific client's holdings or strategy, legal-privileged communications. Only tools under a signed master services agreement with explicit data handling terms, preferably hosted in U.S. infrastructure, with a completed vendor diligence file on record. Consumer AI tools — ChatGPT Free/Plus, Claude.ai Free/Pro without a business agreement, Gemini personal, Microsoft Copilot Free — are prohibited for Tier 3 data without exception.

The zero-retention caveat. "Zero retention" is a commercial representation, not a legal shield. Morrison Foerster's 2025 guidance to advisers flagged the issue plainly: even vendors that do not retain data in the ordinary course can be compelled to produce outputs or processed documents under court order or subpoena. Zero retention reduces the probability of a vendor-side data incident; it does not eliminate the adviser's obligations under Reg S-P, fiduciary duty, or state privacy law. Zero retention is a useful finding in diligence; it is not a substitute for diligence.

Practical boundary rules:

  • Nothing identifying a specific client goes into a prompt without that client's information being within the scope of the vendor's contract.
  • No SSNs, account numbers, or dates of birth in any prompt, regardless of vendor tier. There is almost never a legitimate business reason, and the downside is substantial. (A mechanical pre-send check is sketched after this list.)
  • Client names and balances together are NPI. Either anonymize before prompting, or use a Tier-3-approved tool.
  • MNPI handling follows existing firm procedures. As long as approved tools operate within the firm's existing information walls, AI does not create a new pathway for MNPI leakage that those walls fail to address.
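
The ban on SSNs, account numbers, and dates of birth is one of the few boundary rules that can be enforced mechanically before a prompt leaves the firm. A minimal sketch follows; the patterns are illustrative and deliberately over-broad, and production DLP should match the firm's actual account-number formats:

```python
import re

# Illustrative patterns only, deliberately over-broad. Production DLP should
# match the firm's actual account-number formats and client roster.
BLOCKED_PATTERNS = {
    "ssn":            re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "account_number": re.compile(r"\b\d{8,12}\b"),
    "date_of_birth":  re.compile(r"\b\d{1,2}/\d{1,2}/(?:19|20)\d{2}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the Tier 3 identifier types detected in a prompt.

    An empty list means the prompt passed this necessarily incomplete
    mechanical check, not that it is free of NPI; a client name tied to
    a balance still requires human judgment.
    """
    return [name for name, pat in BLOCKED_PATTERNS.items() if pat.search(prompt)]

hits = check_prompt("Draft a note to the client re account 123456789.")
print(hits)  # ['account_number'] -> block the prompt before it leaves the firm
```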

Control 3 — Supervision and Testing

Rule 206(4)-7 requires that compliance policies be "reasonably designed." Testing is how the firm — and the examiner — confirms that the policies are working. The 2025 IMCT Survey finding that 44% of firms using AI have no formal testing or validation is the single clearest indicator of where enforcement attention is headed.

What testing should cover:

  • Inventory accuracy. Does the firm know, at any given moment, every AI tool its staff is using? Shadow IT in AI tools is common; periodic attestations and network telemetry are both reasonable approaches.
  • Output sampling. For each approved tool, a documented sample of outputs is reviewed periodically (monthly or quarterly for high-volume uses; semi-annually for low-volume). The sample is compared against the firm's standards for accuracy, completeness, and compliance with the Marketing Rule and fiduciary duty.
  • Disclosure match. Periodic test that what Form ADV 2A and marketing materials say about AI matches what the firm is actually doing. This is the Delphia test. Any drift triggers an ADV amendment and a marketing review.
  • Failure logging. When AI produces a bad output — hallucinated citation, wrong client reference, incorrect calculation, inappropriate recommendation — it is logged. A low failure rate is not the goal; a known failure rate is. Zero logged failures over a year is a signal that the logging is broken, not that the AI is perfect.
  • Vendor change monitoring. AI vendors roll out new features, new default settings, and new sub-processors on their own cadence. The firm needs a standing process to catch material changes — a vendor who quietly starts training on customer data, or adds a new sub-processor in a non-U.S. jurisdiction, creates a compliance event the firm needs to act on.

Sample rate guidance. Sample rates are firm- and use-case-specific, but the following ranges are defensible starting points for an initial program (one implementation is sketched after the list):

  • High-volume client communications (AI-drafted emails, marketing posts): 5–10% of outputs reviewed monthly, with all outputs reviewed pre-send at the individual level (Control 1).
  • Meeting notes: all reviewed by the advisor pre-commit (Control 1); a CCO sample of 10–20 per quarter spot-checked against source transcripts where available.
  • Financial plans: 100% reviewed by a licensed planner; CCO samples 5–10% for methodology drift.
  • Research summaries: analyst-verified at source level (Control 1); CCO samples 10% quarterly.
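
Applying these rates needs no specialized tooling. Below is a minimal sketch of a reproducible sampler, assuming the illustrative rates above; seeding on the period lets the CCO regenerate the exact sample for an examiner:

```python
import random

SAMPLE_RATES = {               # starting points from the guidance above
    "client_email": 0.10,      # high-volume comms: 5-10% monthly
    "marketing_post": 0.10,
    "research_summary": 0.10,  # CCO samples 10% quarterly
}

def periodic_sample(output_ids: list[str], use_case: str, period: str) -> list[str]:
    """Select a reproducible review sample for one period's outputs.

    Seeding the RNG on (use_case, period) means the exact sample can be
    regenerated on examination, which makes the testing verifiable.
    """
    rng = random.Random(f"{use_case}:{period}")
    k = max(1, round(len(output_ids) * SAMPLE_RATES[use_case]))
    return sorted(rng.sample(output_ids, k))

ids = [f"email-{i:04d}" for i in range(230)]  # one month of AI-drafted emails
queue = periodic_sample(ids, "client_email", "2026-03")
print(len(queue))  # 23 outputs routed to the CCO review queue
```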

Document the sample rate in the policy. Adjust it with experience. Revise it in the annual 206(4)-7 review.

Control 4 — Disclosure

Disclosure does two jobs simultaneously. It satisfies the adviser's fiduciary duty to inform clients of material facts about the advisory relationship, and it immunizes the firm against the AI-washing charges that have now been brought against multiple RIAs. Done well, disclosure is a shield; done badly or omitted, it is the vulnerability that regulators exploit first.

Where disclosure goes:

  • Form ADV Part 2A. If AI is material to methods of analysis (Item 8), the advisory business (Item 4), the types of services offered, or the risks the adviser's approach presents, it belongs in the brochure. Material in this context means: a reasonable investor would want to know.
  • The firm's website. If marketing copy describes AI use, it must describe it accurately. If the firm has no AI page, silence is defensible. Overstatement is not.
  • Client communications. Significant new uses of AI (a client-facing chatbot, a major change to how plans are drafted) may warrant direct client notice, particularly for firms serving high-touch clients who will notice the change regardless.
  • Advisory contracts, where practice changes materially.

Principles for good AI disclosure:

  • Be clear on what you do — and don't — use AI for. The Delphia pattern was claiming capabilities the firm did not have. Do not describe aspirational AI use as current practice. If the firm uses AI to draft a newsletter, say that; do not say AI informs investment decisions unless it does.
  • Avoid hypothetical language to describe actual practices. "The firm may use AI" when the firm does use AI is a flag — Debevoise guidance to RIAs on Form ADV 2A review is specific on this point, and the SEC has charged firms for using "may" to describe what they in fact do.
  • Be accurate about risks. Standard AI risks — hallucinations, bias, data quality, cybersecurity exposure, vendor concentration — should be disclosed in terms the client can understand. The SEC has historically charged firms for using hypothetical risk language to describe risks that have already materialized.
  • Update on material change. New categories of AI use, new vendor classes, or new failure modes trigger an ADV amendment.
  • Consent where needed. Meeting recording for any client in a two-party-consent state requires consent. Use of client-identifying data in a new AI tool may require notice even where technical consent is not required, because it is the kind of material fact a client would want to know.

What AI disclosure is not:

  • It is not a substitute for the other three controls. A beautifully disclosed AI program without human review, data boundaries, or testing is still an enforcement target.
  • It is not a one-time exercise. Disclosure drifts from practice over time. The annual compliance review and Form ADV amendment cycle exist to catch the drift; both should explicitly test AI statements.
  • It is not a marketing exercise. Overstated AI disclosure is the enforcement pattern. Understated disclosure has not been an enforcement issue for any RIA to date. When in doubt, claim less rather than more.

The four controls as a program

These four controls operate together. Human review catches AI output errors before they reach clients; data boundaries keep NPI from leaving the firm in the first place; testing finds the patterns human review misses and closes the loop back into policy; disclosure aligns what the firm says externally with what it does internally. A weakness in any one control creates exposure in the others — a testing program that finds nothing to log usually means failures are going undetected, not that the other controls are flawless.

A practical way to operationalize the four controls is a single one-page AI Control Summary per approved tool, maintained by the CCO, that identifies: (a) the tool and its tier, (b) the named human reviewer, (c) the approved data tier, (d) the sample and testing cadence, and (e) the disclosure location. When an examiner asks for the firm's AI program, this page per tool — plus the acceptable use policy and the vendor diligence file — is what to hand over first.
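
The one-pager lends itself to a fixed schema. Here is a minimal sketch of the five fields as a structured record; the field names mirror items (a) through (e) above and are a suggestion, not a regulatory format:

```python
from dataclasses import dataclass

@dataclass
class AIControlSummary:
    """One-page control summary per approved tool; fields mirror (a)-(e)."""
    tool: str                 # (a) the tool and its tier
    reviewer_role: str        # (b) the named human reviewer, by role
    data_tier: str            # (c) the approved data tier (1-3)
    testing_cadence: str      # (d) the sample and testing cadence
    disclosure_location: str  # (e) where the use is disclosed

summary = AIControlSummary(
    tool="Meeting transcription assistant (enterprise tier)",
    reviewer_role="Advisor on the relationship, pre-CRM commit",
    data_tier="Tier 3 under signed MSA",
    testing_cadence="CCO spot-check of 10-20 notes per quarter",
    disclosure_location="Form ADV 2A Item 4; client consent workflow",
)
```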

The written artifacts that support the controls. A defensible program produces four documents that tie back to the controls above: a firm-wide acceptable use policy (Controls 1–3), a vendor due diligence questionnaire completed before any tool enters the approved register (Control 2), a Form ADV Part 2A disclosure paragraph reviewed annually against actual practice (Control 4), and an employee training attestation signed at onboarding and annually (Controls 1 and 3). These artifacts are firm-specific. Templates circulate freely in industry but almost always need substantial rework to match a firm's services, client base, and regulatory posture; they are best developed with compliance counsel or a qualified consultant rather than lifted wholesale.


Insurance Considerations: E&O and Cyber Coverage for AI

The four controls reduce the probability of AI-caused client harm. Insurance is what sits underneath when controls fail. Most RIAs have not re-underwritten their E&O or cyber programs with AI in mind, and the coverage picture in 2026 is materially different from what it was eighteen months ago.

There is still no reported U.S. decision adjudicating AI-caused losses under an RIA's E&O or cyber policy. Everything below is prospective — issues to raise with the firm's broker at renewal, not decided law.

What has shifted in the market

  • "Silent AI" is the defining ambiguity. Most E&O policies define "professional services" without reference to AI and without explicit inclusion. When an AI tool — not a natural person — is the proximate producer of bad advice, coverage hinges on how the carrier reads the definition. This is the same structural problem the industry had with "silent cyber" a decade ago, and it is being resolved the same way: through carrier-specific endorsements, some clarifying, some excluding.
  • AI exclusions are proliferating, but unevenly. Berkley introduced an "Absolute AI Exclusion" in 2025 intended for D&O, E&O, and fiduciary-liability placements; it excludes claims arising from "any actual or alleged use, deployment, or development of Artificial Intelligence," including AI-generated content, inadequate AI governance, chatbot communications, and regulatory actions. Other carriers have adopted narrower endorsements. Some are adding affirmative AI coverage by endorsement. The market is bifurcated — a single characterization of "industry posture" is not defensible.
  • Cyber policies currently cover AI-enabled social engineering (deepfakes, voice cloning, AI phishing), but social-engineering sublimits are typically capped at $250,000 — materially below the loss profile of a successful deepfake fraud on an advisory firm. (Coalition; WTW.)

What a CCO should do at renewal

  • Ask the broker, in writing, for the carrier's position on (a) AI-generated professional advice, (b) hallucinated or incorrect AI outputs delivered to clients, (c) AI-enabled social engineering losses, and (d) regulatory defense costs in AI-washing actions. Get the answer in writing from the carrier, not only the broker.
  • Review all endorsements on the current E&O and cyber policies for AI-related language — inclusive or exclusive. Flag any Berkley-style absolute AI exclusion and any narrower versions, and separately flag any carrier that has added affirmative AI coverage.
  • Reconcile the firm's register of approved AI tools against the insurance application at renewal. Non-disclosure of a material AI use case, discovered after a claim, is the clean path to a denial.
  • For firms using agentic AI or investment-process AI (see Special Focus: AI in the Investment Stack), consider whether the E&O tower is sized to the scenario of a systematic error across many accounts rather than a single-client errors-and-omissions event. Mass-error exposure is the coverage question most likely to outrun the tower's limits.
  • Raise social-engineering sublimits and ensure deepfake / voice-clone incidents are explicitly enumerated. A $250,000 sublimit is not a control; it is a deductible on the real loss.

Related insurance developments

The NAIC AI Model Bulletin (December 2023) governs insurers' own use of AI — not their insureds directly — but shapes how carriers underwrite AI-exposed accounts and is driving new AI-usage questionnaires at E&O renewal. NYDFS's October 16, 2024 industry letter does not mandate insurance, but it raises the operational-resilience bar that cyber underwriters scrutinize at renewal, particularly for deepfake and voice-phishing controls.

The Canadian Moffatt v. Air Canada decision (February 2024) — the leading published precedent on principal liability for chatbot misstatements — is not U.S. law. But an RIA whose chatbot makes a misleading investment statement would face analogous Advisers Act liability under Rule 206(4)-1 and Section 206 antifraud provisions, and an insurer could additionally invoke an "intentional misrepresentation" carve-out. The case reinforces the Red rating on client-facing chatbots in Section III.


V. What's Coming

The regulatory landscape for AI in investment advisory is moving in two directions simultaneously. At the federal level, the trend since early 2025 has been deregulatory — the most prescriptive AI-specific rule proposal was withdrawn, and the 2026 Exam Priorities rely on existing statutes and rules rather than signaling new rulemaking. At the state level, and through FINRA guidance for dual registrants, the trend is in the opposite direction: more specific frameworks, more disclosure obligations, and more emphasis on governance and vendor oversight.

This section tracks what an RIA's CCO should be watching over the next twelve to twenty-four months, with the understanding that the specifics will continue to move.

A. Predictive Data Analytics Rule — Status: Withdrawn; Watch for Re-Proposal

The SEC's 2023 Predictive Data Analytics ("PDA") proposal would have required advisers and broker-dealers to identify, evaluate, and neutralize or eliminate conflicts of interest associated with a broad category of "covered technologies" used in investor interactions. Industry pushback focused on the breadth of the definition, which could have captured spreadsheets, calculators, and most client-facing software, and on the prescriptive remediation regime.

On June 12, 2025, the Commission formally withdrew the proposal as part of a fourteen-rule rescission package. The withdrawal was substantive, not procedural: the Commission stated that it does not intend to finalize the proposal, and any future rulemaking on the subject would require a new notice and comment period.

What this means for RIAs:

  • There is no federal PDA rule coming in the near term. Firms that built their AI compliance programs around avoiding a future PDA rule obligation can shift resources to the controls described in Section IV.
  • The substantive risks the PDA proposal addressed — AI-driven conflicts of interest, AI bias in recommendations, opaque model behavior — remain live and are policed under existing fiduciary duty, anti-fraud, Marketing Rule, and Compliance Rule standards. The Commission's enforcement posture on AI has, if anything, become more active on these grounds since the proposal was withdrawn.
  • A future Commission could re-propose a more narrowly tailored rule. Monitor the SEC's regulatory agenda publications (typically Spring and Fall) for any signal of re-proposal.
  • The Division of Examinations' 2026 Priorities already incorporate the substance of the PDA concerns. Expect examiners to probe AI conflicts and disclosures even without a PDA rule in place.

B. State-Level AI Regulation

States are moving faster and more specifically than the federal government. Three state frameworks are particularly worth watching for RIAs with clients or operations across jurisdictions.

Colorado Artificial Intelligence Act (SB24-205). The Colorado AI Act, signed in May 2024, is the first comprehensive state framework governing high-risk AI systems in the United States. It imposes obligations on both developers and deployers of high-risk AI used in "consequential decisions" — a defined term that expressly includes "financial or lending services."

The law's enforcement date has moved. Originally set for February 1, 2026, it was pushed to June 30, 2026 during the General Assembly's 2025 special session, during which lawmakers also considered but ultimately did not pass substantive revisions. Enforcement authority rests with the Colorado Attorney General.

Key obligations for covered deployers (which likely includes RIAs deploying high-risk AI tools for Colorado residents):

  • Implement a risk management program for the high-risk AI system, reasonably designed to address algorithmic discrimination.
  • Complete an impact assessment for each high-risk AI system.
  • Provide notice to consumers subject to a consequential decision made using a high-risk AI system, including a description of the decision, the principal reasons, and a right to correct data.
  • Notify the Attorney General of any algorithmic discrimination discovered.

The practical question for an RIA: do any of the firm's AI tools make or contribute to consequential decisions about Colorado residents? A meeting-note tool or a marketing-draft tool probably does not. A tool that scores prospect suitability, screens accounts for advisory enrollment, or materially contributes to a recommendation probably does — and those tools need the Colorado compliance overlay.

California — CCPA, AB 1008, and SB 1223. California has expanded the California Consumer Privacy Act's reach into AI in two practical ways:

  • AB 1008 (signed September 2024) clarifies that personal information under the CCPA includes data in "abstract digital formats," specifically "artificial intelligence systems that are capable of outputting personal information." The practical consequence for an RIA is that AI model outputs containing personal data about California residents are within the CCPA's reach, and the rights of California consumers (access, deletion, correction) apply to that data.
  • SB 1223 (also signed September 2024) adds “neural data” to the CCPA’s list of sensitive personal information.

In parallel, the California Privacy Protection Agency has been developing regulations for automated decision-making technology. RIAs serving California residents should plan for CCPA overlays on any AI tool that outputs personal information about those clients, and should expect the California regulatory posture on AI to continue to evolve.

New York — NYDFS Industry Letter and Part 500. While NYDFS does not directly regulate most SEC-registered investment advisers, its October 16, 2024 industry letter on AI-related cybersecurity risks has become a de facto reference for the financial services sector. The letter applies the NYDFS Cybersecurity Regulation (23 NYCRR Part 500) framework to AI, identifying four risk categories: AI-enabled social engineering, AI-enhanced cyberattacks, exposure or theft of the vast amounts of nonpublic information AI systems require, and third-party AI supply-chain dependencies.

For RIAs, the letter is valuable as a checklist. Its four categories map cleanly to Regulation S-P's service-provider oversight obligations and should inform a firm's vendor diligence process. Dual registrants or RIAs that are part of banking or insurance organizations regulated by NYDFS will have direct compliance obligations.
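
One way to operationalize the letter as a checklist is to key the firm's vendor-diligence questionnaire to its four categories. The structure and question wording below are hypothetical; substitute the questions the firm's Regulation S-P service-provider oversight process actually asks.

```python
# Hypothetical vendor-diligence checklist keyed to the four NYDFS risk
# categories. Question text is illustrative, not drawn from the letter.
NYDFS_CHECKLIST = {
    "ai_enabled_social_engineering": [
        "How does the vendor authenticate support requests against deepfake voice or video?",
    ],
    "ai_enhanced_cyberattacks": [
        "What are the vendor's patch cadence and incident-response commitments?",
    ],
    "nonpublic_data_exposure": [
        "What client data does the tool ingest, where is it stored, and is it used for model training?",
    ],
    "third_party_supply_chain": [
        "Which sub-processors and foundation models does the vendor depend on?",
    ],
}

def incomplete_categories(answers: dict[str, list[str]]) -> list[str]:
    """Return categories where the diligence file has fewer answers than questions."""
    return [cat for cat, questions in NYDFS_CHECKLIST.items()
            if len(answers.get(cat, [])) < len(questions)]

print(incomplete_categories({"third_party_supply_chain": ["vendor relies on a hosted foundation model"]}))
```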

Other state activity to watch:

  • Texas, Virginia, and several other states have enacted or introduced AI legislation. None has adopted a framework as comprehensive as Colorado's, but the trajectory is clear: expect more state-level action, not less, through 2026 and 2027.
  • The National Association of Insurance Commissioners and state insurance regulators have been active on AI in the insurance sector; for RIAs affiliated with insurance operations, these developments are more immediately relevant.
  • Recording-consent laws, whether two-party or all-party, continue to apply and have not been softened for AI tools. The safest posture is to treat every meeting recording as requiring all-party consent, regardless of where the firm or the other participants are located.

C. FINRA and the Dual-Registrant Overlay

For RIAs that are also FINRA-member broker-dealers, the FINRA regime applies in parallel to the Advisers Act requirements discussed throughout this playbook. FINRA's public posture on AI has been consistent and pragmatic:

  • Regulatory Notice 24-09 (June 2024) reminded members that FINRA rules are technology-neutral and apply to generative AI and large language models just as they apply to any other tool. The Notice identified Rule 3110 (Supervision), Rule 2210 (Communications with the Public), Rule 4511 (Records), Rule 3120 (Supervisory Control System), and anti-fraud rules as particularly implicated by AI use.
  • The 2026 FINRA Annual Regulatory Oversight Report (published December 9, 2025) carries forward the themes of Notice 24-09 and adds observations specific to AI agents — autonomous systems that take actions on behalf of users. FINRA's guidance for agents emphasizes human-in-the-loop oversight, guardrails, access controls, and clear escalation paths, all of which map closely to the controls in Section IV of this playbook.
  • FINRA's Advertising Regulation function continues to apply Rule 2210 to AI-generated communications. Any AI-drafted communication that meets the definition of a "communication with the public" is subject to the same content standards, approval requirements, and recordkeeping obligations as any other such communication.

Practical implications for dual registrants:

  • Policies addressing AI need to work under both the Advisers Act and FINRA rules. The controls in Section IV are constructed to satisfy both regimes.
  • Written supervisory procedures (WSPs) under Rule 3110 should expressly address AI, and the supervisor responsible for each AI tool should be named; a minimal register sketch follows this list.
  • Advertising review processes under Rule 2210 need to account for AI-generated content, including output that appears on websites and on social media platforms such as LinkedIn.
  • FINRA's third-party risk management guidance (including Regulatory Notice 21-29) applies to AI vendors. Diligence and ongoing monitoring should be documented.
  • For registered representatives who are also investment adviser representatives, the AI policies of the broker-dealer and the investment adviser need to be harmonized — conflicts between the two regimes create compliance risk.
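
As a concrete illustration of the WSP point above, a dual registrant can keep a single supervision register that names the responsible supervisor and the applicable rule set for each approved tool. Everything below, including the names, tools, and rule citations, is hypothetical; the firm's actual WSPs govern.

```python
# Illustrative supervision register for a dual registrant. All entries are
# hypothetical; each approved AI tool gets a named supervisor (per Rule 3110
# WSPs), the rule sets its review must satisfy, and an escalation contact.
from dataclasses import dataclass, field

@dataclass
class SupervisedTool:
    tool: str
    supervisor: str                     # a named individual, not a department
    applicable_rules: list[str] = field(default_factory=list)
    escalation_contact: str = "CCO"

SUPERVISION_REGISTER = [
    SupervisedTool("ai-drafted-marketing", "J. Doe",
                   ["FINRA Rule 2210", "Advisers Act Marketing Rule"]),
    SupervisedTool("meeting-note-assistant", "R. Roe",
                   ["FINRA Rule 3110", "FINRA Rule 4511"]),
]
```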

D. Federal Trends Worth Monitoring

Beyond the withdrawn PDA proposal, several federal developments bear watching:

  • SEC Examination Priorities are published annually (typically October or November). Watch for AI-related emphasis in the FY 2027 Priorities when they are released in late 2026.
  • The SEC's Division of Enforcement continues to pursue AI-washing matters under existing rules. The September 2025 Marketing Rule enforcement action — the first under Chair Atkins — signaled that enforcement in this area has not slowed, even as rulemaking has.
  • The FTC has been active on AI-related fraud and consumer protection. While not a direct regulator of RIAs, FTC enforcement patterns often foreshadow SEC and FINRA activity.
  • Congressional activity on AI has been persistent but has not yet produced a comprehensive federal framework. A sector-specific federal AI law for financial services remains possible but is not imminent.
  • The NIST AI Risk Management Framework has become a widely cited reference for firms seeking a structured approach to AI governance. While not legally binding, adopting its vocabulary and structure tends to make examinations easier.
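
For firms adopting the NIST vocabulary, one plausible crosswalk from the framework's four core functions to this playbook's controls looks like the sketch below. The function names are NIST's; the mapping is this playbook's suggestion, not NIST guidance.

```python
# One plausible crosswalk from the NIST AI RMF core functions to the four
# controls in Section IV. The mapping is a suggestion, not NIST guidance.
NIST_RMF_CROSSWALK = {
    "GOVERN":  "Written AI policy, named tool owners, vendor files",
    "MAP":     "Tool inventory and the data boundary (Control 2)",
    "MEASURE": "Output sampling, disclosure-match testing, failure logging (Control 3)",
    "MANAGE":  "Human-in-the-loop review and disclosure kept in sync with practice (Controls 1 and 4)",
}
```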

E. Keeping an AI Compliance Program Current

An AI compliance program has a practical half-life of roughly nine to twelve months — tools, vendor terms, and regulatory posture all move faster than most firms plan for. A defensible program includes:

  • Quarterly review of the firm's register of approved AI tools. Pricing, sub-processors, default settings, and model versions all change on vendor cadences, not firm cadences (a minimal currency-check sketch follows this list).
  • Triggered review of the risk matrix whenever a new category of tool enters the firm, a vendor's material terms change, or a significant enforcement action shifts the legal backdrop.
  • Alignment of the annual Rule 206(4)-7 compliance review with an AI-specific refresh — including testing that Form ADV disclosures still match actual practice.
  • A reading list that covers SEC Examination Priorities, FINRA Annual Regulatory Oversight Reports, state attorney general AI guidance, and at least one practitioner compliance newsletter.
  • An annual outside-counsel review of AI disclosures — particularly Form ADV Part 2A — against actual practice. The Delphia pattern is the single most likely enforcement theory, and also the easiest to prevent.
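
A currency check like the one described in the first bullet is easy to automate. The sketch below assumes a simple register with last-review dates and flags anything past the quarterly or annual cadence; the field names and the 90/365-day thresholds are assumptions to be replaced with the firm's actual compliance calendar.

```python
# Minimal program-currency check: flag tools overdue for the quarterly register
# review or the annual Form ADV disclosure-match test. Field names and the
# 90/365-day thresholds are assumptions; use the firm's actual cadences.
from datetime import date, timedelta

TOOL_REGISTER = [
    {"tool": "meeting-note-assistant",
     "last_register_review": date(2025, 9, 30),
     "last_disclosure_match": date(2025, 3, 31)},
]

def overdue_reviews(as_of: date, quarterly: int = 90, annual: int = 365) -> list[str]:
    """Return human-readable flags for reviews past their cadence."""
    flags = []
    for entry in TOOL_REGISTER:
        if as_of - entry["last_register_review"] > timedelta(days=quarterly):
            flags.append(f"{entry['tool']}: quarterly register review overdue")
        if as_of - entry["last_disclosure_match"] > timedelta(days=annual):
            flags.append(f"{entry['tool']}: disclosure-match test overdue")
    return flags

print(overdue_reviews(date(2026, 1, 15)))
```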

Closing note

The firms that have been charged with AI-related violations over the last two years were not charged because AI itself is dangerous. They were charged because they said things they could not substantiate, because they lacked written policies the rules required them to have, or because they allowed disclosure to drift out of sync with practice. None of those failure modes is novel; they are the same failure modes that have driven adviser enforcement for decades. The four controls in Section IV exist to prevent them. A firm that adopts AI with discipline — with a policy, a vendor file, a testing cadence, accurate disclosures, and a CCO who understands what is being deployed — can take advantage of genuine productivity gains without assuming meaningful additional regulatory risk.

The firms that take a blanket-ban posture are not safer; they are simply deferring the question. Examiners looking at AI in 2026 and beyond will be looking at firms that use it and firms that do not — and the firms with well-documented programs will fare better than either the firms using AI without governance or the firms avoiding it without strategy.




Eigen Consulting helps independent RIAs implement AI under a defensible compliance framework. Get in touch to discuss your firm's AI program.