Why “Monitored” Matters More Than “AI-Labeled” in Securities

What the SEC Crackdown Means for Financial Advisors and Investors

In an era dominated by technological buzzwords, “AI-powered” has become one of the most overused—and increasingly scrutinized—phrases in financial services. While artificial intelligence (AI) holds real potential to enhance investment strategies, improve forecasting, and optimize portfolio management, it has also opened the door to a dangerous trend: AI washing—when firms exaggerate or outright fabricate their use of AI in marketing materials or investor communications.

This trend is no longer just a PR issue; it’s now a regulatory one.

In a landmark enforcement action, the U.S. Securities and Exchange Commission (SEC) charged two investment advisory firms—Delphia (USA) Inc. and Global Predictions Inc.—for falsely promoting their services as AI-powered. Despite centering their marketing around machine learning and predictive algorithms, regulators found the actual use of AI at these firms to be minimal or non-existent. The result? Both companies settled and paid a combined $400,000 in civil penalties.

But the penalties weren’t the real message. The real message, according to the SEC, is this: “AI washing” is securities fraud—plain and simple.

1. Accountability and Transparency

The Delphia and Global Predictions case underscores a growing expectation: if you’re going to advertise AI, it needs to be real, explainable, and governed.

Unlike “AI-labeled” offerings that rely heavily on buzzwords with little substance, monitored systems are backed by documented compliance procedures, human oversight, and clear audit trails. These systems aren’t just effective—they’re credible. And credibility matters more than ever.

Delphia, for example, claimed it used proprietary AI models on user-contributed data to make better investment predictions. However, the SEC found no evidence of such AI being implemented. In fact, the firm didn’t even collect or use the very data it claimed formed the backbone of its predictive models. Similarly, Global Predictions marketed itself as the “first regulated AI financial advisor,” yet couldn’t substantiate its use of AI in any material way.

When firms fail to explain or document their AI processes, they erode investor trust. Today’s investors don’t just want to see flashy algorithms—they want transparency. Who’s overseeing the AI? What happens when it produces an error? Can a human override it?

Firms that can’t answer these questions clearly will find themselves increasingly out of favor with both regulators and clients.

2. Regulatory Action Is Increasing

The SEC’s action against Delphia and Global Predictions is not a one-off. It’s part of a much broader trend in regulatory scrutiny of AI in the financial sector.

U.S. Developments:

  • Predictive Data Analytics Proposal: The SEC has proposed new rules requiring firms to manage conflicts of interest when using predictive data analytics (PDA)—especially when AI could influence a customer’s investment decisions.

  • Marketing Rule Enforcement: Under the updated SEC Marketing Rule, firms must now maintain records to support any marketing claims. Misrepresenting the role or capability of AI is no different from any other form of misleading promotion.

  • AI Sweep Exams: The SEC’s Division of Examinations has conducted sweep exams of advisors and funds, asking how they use AI and whether those claims are accurate.

International Momentum:

Across jurisdictions, regulators are converging around a common principle: adopting AI doesn’t excuse firms from accountability—it amplifies the need for it.

3. Legal and Financial Risk

AI misrepresentation isn’t just a regulatory risk; it’s a legal and financial one. Investors, emboldened by recent enforcement actions, are pursuing class-action lawsuits when AI-related promises don’t materialize.

Key Legal Trends:

Take, for instance, a publicly traded tech firm that advertised “cutting-edge AI” behind its marketing platform. When investors later discovered the technology was either not operational or significantly overstated, and the stock price fell, the lawsuits poured in. Or consider Apple, which faced a shareholder suit over allegedly overstating progress on AI integration with Siri—an issue tied directly to investor confidence and market performance.

The lesson is clear: if your AI claims move your stock price, you’d better be able to prove them.

4. Governance and Trust Matter More Than Branding

In the finance world, trust is everything. And trust is built on governance, not gimmicks.

The temptation to brand every product as “AI-enhanced” may generate short-term excitement, but long-term credibility requires more than a buzzword. Regulators and investors are starting to ask deeper questions:

  • How is your AI model trained?

  • Who reviews its decisions?

  • Can clients understand how it works?

  • What controls exist if something goes wrong?

Characteristics of Monitored Systems:

  • Human Oversight: There must be someone responsible for monitoring the AI, validating its outputs, and intervening if necessary.

  • Auditability: Can decisions made by the AI be traced and explained?

  • Risk Management: How does the AI handle edge cases, anomalies, or shifts in market behavior?

Financial institutions that prioritize these elements are far more likely to win the trust of regulators and clients alike. As Deloitte puts it, “Human oversight and transparency are two pillars of reliability and trustworthiness in AI use.”
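
To make these characteristics concrete, here is a minimal Python sketch of what a “monitored” system can look like in practice: every recommendation is written to an audit log, and low-confidence outputs are held for human sign-off before execution. The names here (MonitoredAdvisor, REVIEW_THRESHOLD, the field layout) are illustrative assumptions, not any firm’s actual system or a reference implementation.

```python
# Illustrative sketch of a "monitored" AI advisor: every recommendation is
# logged for auditability, and low-confidence outputs require human sign-off.
# All names (MonitoredAdvisor, REVIEW_THRESHOLD, field names) are hypothetical.
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

logging.basicConfig(filename="audit_trail.log", level=logging.INFO)

REVIEW_THRESHOLD = 0.80  # below this confidence, a human must sign off

@dataclass
class Recommendation:
    portfolio_id: str
    action: str              # e.g. "rebalance" or "hold"
    confidence: float        # model's self-reported confidence, 0..1
    model_version: str
    timestamp: str
    needs_human_review: bool

class MonitoredAdvisor:
    """Wraps any model callable with audit logging and a human-review gate."""

    def __init__(self, model, model_version: str):
        self.model = model  # callable: features dict -> (action, confidence)
        self.model_version = model_version

    def recommend(self, portfolio_id: str, features: dict) -> Recommendation:
        action, confidence = self.model(features)
        rec = Recommendation(
            portfolio_id=portfolio_id,
            action=action,
            confidence=confidence,
            model_version=self.model_version,
            timestamp=datetime.now(timezone.utc).isoformat(),
            needs_human_review=confidence < REVIEW_THRESHOLD,
        )
        # Auditability: persist inputs and outputs so the decision can be
        # traced and explained after the fact.
        logging.info(json.dumps({"inputs": features, **asdict(rec)}))
        return rec

def execute(rec: Recommendation, reviewer_approved: bool = False) -> None:
    # Human oversight: a flagged recommendation cannot run without sign-off.
    if rec.needs_human_review and not reviewer_approved:
        raise PermissionError("Held for human review; reviewer sign-off required.")
    print(f"Executing {rec.action} for portfolio {rec.portfolio_id}")
```

The point of the sketch is the shape, not the details: decisions are traceable, a versioned model is named in every log entry, and the override path is explicit rather than a silent default.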

5. From Hype to Compliance: Best Practices

So, what should firms do if they truly want to incorporate AI into their investment products or services?

Compliance Best Practices:

  1. Document Everything: Maintain technical documentation of AI models, including inputs, logic, and outputs.

  2. Substantiate Marketing Claims: Any public statements about AI use must be backed by actual implementation.

  3. Maintain Human Review Loops: Ensure compliance and risk teams review AI decisions regularly.

  4. Conduct Bias Testing: Especially important if AI is involved in lending, credit scoring, or portfolio allocation (see the sketch below).

  5. Disclose AI Limitations: Be honest about what the AI can’t do. Transparency goes a long way in managing expectations.

Remember: Oversight is the differentiator. In an industry governed by fiduciary duty and investor protection, responsibility doesn’t end when an algorithm begins. It begins because of it.
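
As a concrete illustration of the bias testing called for in item 4, the sketch below compares approval rates across groups and applies the common “four-fifths” rule of thumb. It assumes a simple (group, approved) record format; real fairness testing would involve multiple metrics and proper statistical tests, so treat this as a starting point, not a compliance program.

```python
# Illustrative bias check for a credit-scoring or allocation model:
# compare approval rates across groups using the "four-fifths" (80%)
# rule of thumb. Variable names and the record format are hypothetical.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved_bool) pairs."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

def four_fifths_check(rates, threshold=0.8):
    """Flag groups whose rate falls below threshold * the highest group rate."""
    best = max(rates.values())
    return {g: r / best >= threshold for g, r in rates.items()}

# Toy data: group A approved 2 of 3, group B approved 1 of 3.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)   # {"A": 0.67, "B": 0.33}
print(four_fifths_check(rates))     # {"A": True, "B": False} -> investigate B
```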

Conclusion: Real AI Requires Real Governance

The SEC has spoken: AI hype will not protect you from enforcement. If your company markets an AI-powered solution in the financial space, you must be prepared to show your work—because regulators, investors, and litigators are all watching.

In this environment, firms that focus on truth, transparency, and oversight will stand out. They will not only avoid enforcement actions but will also attract clients looking for dependable, well-governed investment options.

In the long run, “monitored” matters more than “AI-labeled.” Because when performance dips or markets crash, it’s not the buzzword that keeps your clients loyal—it’s the trust they’ve placed in your systems, your team, and your integrity.