Last year, the SEC charged two investment advisers with making misleading claims about their use of artificial intelligence. The firms agreed to pay a combined $400,000 in civil penalties. Not a headline-grabber in dollar terms, but the implication was clear: inflate what your AI can do, and there will be legal consequences.
As someone who has built an AI company from the ground up, I have seen how fast the language around AI gets away from the actual tech. Terms like “intelligent,” “autonomous,” or “human-level” get thrown around with no explanation. Sometimes, with no product behind them at all.
This kind of exaggeration is what regulators are calling AI washing. And if you work in law, compliance or any profession where accuracy matters, it poses not only a technology problem but a risk-management one as well.
What Is “AI Washing,” and Why Does It Matter?
Like greenwashing in sustainability, AI washing refers to inflated or false claims about the use of artificial intelligence in products or services. In some sectors, this amounts to little more than bad marketing. But in law, finance, healthcare and defense (sectors built on accountability and trust) the consequences of overstating what a product can do are far more serious.
In 2024, the number of securities class action filings related to AI misstatements doubled compared to the previous year, a surge that shows regulators and investors alike are increasingly vigilant about "AI washing."
When firms claim their tools are “AI-powered” without clarity, transparency or evidence, they put clients, investors and users at risk. Decision-makers may rely on systems they don’t understand or assume capabilities that don’t exist. In regulated industries, this opens the door to liability and undermines trust.
A Call for Clarity and Accountability
AI systems, particularly those built on machine learning, can be incredibly powerful and useful. A 2024 Thomson Reuters survey found that 63% of lawyers have used AI tools in their practice, with 12% using them regularly, primarily for tasks like summarizing case law and drafting documents. But it's important to remember that AI tools are not magic. They require large, sometimes sensitive datasets. They can carry bias or errors. And most importantly, they work best within clearly defined parameters.
For legal professionals evaluating AI tools or advising clients who develop them, a few key questions matter more than any buzzword: What exactly does the system do? How was it trained? What are its failure rates? Can you verify its outputs?
For example, if a platform claims to use AI to redact sensitive information in legal documents, decision-makers should understand the difference between pattern-matching and machine learning approaches, and the risks each one introduces. A tool that performs well on standard templates might struggle with unstructured or multilingual content. Without clarity, users are left guessing, and sensitive data is at risk of being exposed.
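To make that distinction concrete, here is a minimal sketch in Python contrasting the two approaches. It assumes the open-source spaCy library and its small English model for the machine-learning side; the patterns, entity labels and sample text are illustrative, not a production redaction pipeline.

```python
import re

# --- Pattern matching: a regex-based redactor ---
# Fast and predictable, but it only catches formats it was explicitly
# told about (here, US-style SSNs and simple email addresses).
PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # e.g., 123-45-6789
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # naive email pattern
]

def redact_with_patterns(text: str) -> str:
    for pattern in PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

# --- Machine learning: an NER-based redactor ---
# Can flag names and places the regexes would miss, but it is
# probabilistic: it has a failure rate, and that rate shifts on
# unstructured or multilingual text.
def redact_with_ner(text: str) -> str:
    import spacy
    nlp = spacy.load("en_core_web_sm")  # assumes the model is installed
    doc = nlp(text)
    for ent in reversed(doc.ents):  # reverse so character offsets stay valid
        if ent.label_ in {"PERSON", "ORG", "GPE"}:
            text = text[:ent.start_char] + "[REDACTED]" + text[ent.end_char:]
    return text

sample = "Contact Jane Doe at jane.doe@example.com, SSN 123-45-6789."
print(redact_with_patterns(sample))  # catches the SSN and email, misses the name
```

The regex version misses "Jane Doe" entirely; the NER version can catch the name but will sometimes mislabel or skip entities. Which failure mode a firm can live with is exactly the kind of question a vendor should be able to answer.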

Being clear about what an AI system can’t do is just as important as what it can. That level of specificity builds credibility. It also empowers users to stay informed, flag risks and participate in ongoing quality control.
Building Trust in an Oversaturated Market
Industries across the board are rewarding flash over function. But sustainable innovation (especially in the legal domain) comes from systems that are explainable, auditable and precise. That starts with honest conversations about what AI can actually do.
It’s worth remembering: AI doesn’t need to be framed as “revolutionary” to be valuable. Solving a focused, high-stakes problem with consistency is often more impactful than trying to automate broad categories of human decision-making. Legal professionals know this better than most; precision matters more than potential.
The Legal Industry Can Set the Tone
Lawyers have always been the ones to ask the hard questions, and this is a moment that calls for that instinct. Don’t let “AI-powered” be the end of the conversation. Make it the start of due diligence.
We don’t need to slow down innovation. But we do need clarity. If a tool relies on machine learning, it should come with documentation, explainability and a clear scope of what it can (and can’t) do.
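One way to deliver that clarity is a lightweight "model card." The sketch below, in Python, shows what machine-readable scope documentation might look like for a hypothetical redaction tool; every field name and value here is an illustrative assumption, not an established standard.

```python
# A minimal sketch of scope documentation (a lightweight "model card")
# for a hypothetical redaction tool. Field names and values are
# illustrative assumptions, not an industry standard.
MODEL_CARD = {
    "name": "contract-redactor",  # hypothetical product name
    "technique": "named-entity recognition (supervised machine learning)",
    "training_data": "English-language commercial contract templates",
    "intended_use": "flagging personal data in standard contract formats",
    "out_of_scope": [  # what the tool can't do, stated up front
        "handwritten or scanned-image documents",
        "non-English text",
        "entity types beyond names, organizations and locations",
    ],
    "evaluation": "per-entity recall and precision on a held-out test set",
}
```

Whatever form it takes, documentation like this turns "AI-powered" from a slogan into a claim a buyer can actually verify.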