In an era dominated by technological buzzwords, “AI-powered” has become one of the most overused—and increasingly scrutinized—phrases in financial services. While artificial intelligence (AI) holds real potential to enhance investment strategies, improve forecasting, and optimize portfolio management, it has also opened the door to a dangerous trend: AI washing—when firms exaggerate or outright fabricate their use of AI in marketing materials or investor communications.
This trend is no longer just a PR issue; it’s now a regulatory one.
In a landmark enforcement action, the U.S. Securities and Exchange Commission (SEC) charged two investment advisory firms—Delphia (USA) Inc. and Global Predictions Inc.—for falsely promoting their services as AI-powered. Despite centering their marketing around machine learning and predictive algorithms, regulators found the actual use of AI at these firms to be minimal or non-existent. The result? Both companies settled and paid a combined $400,000 in civil penalties.
But the penalties weren’t the real message. The real message, according to the SEC, is this: “AI washing” is securities fraud—plain and simple.
The Delphia and Global Predictions case underscores a growing expectation: if you’re going to advertise AI, it needs to be real, explainable, and governed.
Unlike some “AI-labeled” offerings that rely heavily on buzzwords with little substance, monitored systems are backed by documented compliance procedures, human oversight, and clear audit trails. These systems aren’t just effective—they’re credible. And credibility matters more than ever.
Delphia, for example, claimed it used proprietary AI models on user-contributed data to make better investment predictions. However, the SEC found no evidence of such AI being implemented. In fact, the firm didn’t even collect or use the very data it claimed formed the backbone of its predictive models. Similarly, Global Predictions marketed itself as the “first regulated AI financial advisor,” yet couldn’t substantiate its use of AI in any material way.
When firms fail to explain or document their AI processes, they erode investor trust. Today’s investors don’t just want to see flashy algorithms—they want transparency. Who’s overseeing the AI? What happens when it produces an error? Can a human override it?
Firms that can’t answer these questions clearly will find themselves increasingly out of favor with both regulators and clients.
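The oversight questions above can be made concrete. The sketch below is purely illustrative and assumes nothing about any firm's actual architecture (the class and method names are hypothetical): it shows one common human-in-the-loop pattern, in which every model recommendation is written to an audit trail and nothing executes without an explicit, attributable human decision.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AuditEntry:
    """One immutable record per model recommendation, approved or not."""
    timestamp: str
    recommendation: str
    reviewer: str
    approved: bool
    note: str

@dataclass
class GovernedAdvisor:
    """Hypothetical wrapper: an AI recommendation never acts on its own."""
    audit_trail: list = field(default_factory=list)

    def model_recommend(self, ticker: str) -> str:
        # Stand-in for a real model; here it just emits a fixed signal.
        return f"BUY {ticker}"

    def review(self, ticker: str, reviewer: str,
               approved: bool, note: str = "") -> Optional[str]:
        rec = self.model_recommend(ticker)
        # Every decision is logged with who reviewed it and why,
        # giving regulators and clients a clear audit trail.
        self.audit_trail.append(AuditEntry(
            timestamp=datetime.now(timezone.utc).isoformat(),
            recommendation=rec,
            reviewer=reviewer,
            approved=approved,
            note=note,
        ))
        # The human gate: a rejected recommendation simply never executes.
        return rec if approved else None

advisor = GovernedAdvisor()
advisor.review("ACME", reviewer="j.doe", approved=True)
advisor.review("ACME", reviewer="j.doe", approved=False,
               note="Model drift flagged during monthly validation")
print(len(advisor.audit_trail))  # prints 2; both decisions were recorded
```

The design choice matters: because rejections are logged alongside approvals, the firm can show not only that a human *could* override the model, but that overrides actually happen and are documented.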
The SEC’s action against Delphia and Global Predictions is not a one-off. It’s part of a much broader trend in regulatory scrutiny of AI in the financial sector.
Across jurisdictions, regulators are converging around a common principle: adopting AI doesn’t dilute a firm’s accountability—it amplifies the need for it.
AI misrepresentation isn’t just a regulatory risk; it’s a legal and financial one. Investors, emboldened by recent enforcement actions, are pursuing class-action lawsuits when AI-related promises don’t materialize.
Take, for instance, a publicly traded tech firm that advertised “cutting-edge AI” behind its marketing platform. Investors later discovered that the technology was either not operational or significantly overstated. After the stock price fell, the lawsuits poured in. Or consider Apple, which faced a shareholder suit over allegedly overstating progress on AI integration with Siri—an issue tied directly to investor confidence and market performance.
The lesson is clear: if your AI claims move your stock price, you’d better be able to prove them.
In the finance world, trust is everything. And trust is built on governance, not gimmicks.
The temptation to brand every product as “AI-enhanced” may generate short-term excitement, but long-term credibility requires more than a buzzword. Regulators and investors are starting to ask deeper questions about how models are governed, explained, and supervised.
Financial institutions that prioritize these elements are far more likely to win the trust of regulators and clients alike. As Deloitte puts it, “Human oversight and transparency are two pillars of reliability and trustworthiness in AI use.”
So, what should firms do if they truly want to incorporate AI into their investment products or services? Start with the fundamentals: document how models actually work, validate every marketing claim against real capabilities, and keep a named human accountable for each outcome.
Remember: Oversight is the differentiator. In an industry governed by fiduciary duty and investor protection, responsibility doesn’t end when an algorithm begins. It begins because of it.
The SEC has spoken: AI hype will not protect you from enforcement. If your company markets an AI-powered solution in the financial space, you must be prepared to show your work—because regulators, investors, and litigators are all watching.
In this environment, firms that focus on truth, transparency, and oversight will stand out. They will not only avoid enforcement actions but will also attract clients looking for dependable, well-governed investment options.
In the long run, “monitored” matters more than “AI-labeled.” Because when performance dips or markets crash, it’s not the buzzword that keeps your clients loyal—it’s the trust they’ve placed in your systems, your team, and your integrity.