Bar associations addressed the highest-stakes AI uses first: filings, research, client advice. Marketing content is next.
Can AI generate law firm marketing content? Absolutely.
The more important question is how law firm marketing teams should be using it, and what systems they should implement to ensure the output complies with lawyer advertising regulations and is legally accurate.
Finally, and critically, the point of content marketing is to generate business, so content needs to rank well. With a sea of AI content churned out every day, there is a real question whether any of it can rank at all without significant human intervention.
I’ve been producing legal content for law firms for over a decade, and I’ve watched AI change content marketing in real time across hundreds of websites. The same pattern keeps repeating: the AI output goes live. Clients read it. Nobody reviews it for accuracy. Nobody asks whether it meets professional responsibility standards.
That’s not a marketing problem. That’s a professional responsibility problem.
The rules that govern this content already exist. Most firms just haven’t looked at them that way yet.
Bar Association AI Guidance Has a Blind Spot
The ABA’s Year 2 Report on the Impact of AI on the Practice of Law, released December 2025, documents a genuine surge in formal AI guidance across jurisdictions. The focus is consistent: court filings, legal research tools, client intake, billing practices.
Florida Bar Advisory Opinion 24-1 is one of the most detailed issued to date. It covers AI chatbots in intake and AI in legal research at length. It says almost nothing about the practice area pages and blog posts sitting on your firm’s website right now.
That’s the gap.
Your prospective clients read that content before they ever call you. It shapes whether they trust the firm, understand what you do, and decide to reach out. That content is a communication from the attorney. Attorney communications don’t get a pass from professional responsibility rules because you didn’t personally write them.
ABA Formal Opinion 512 (2024) made clear that lawyers remain responsible for work product regardless of whether AI assisted in producing it.
That principle doesn’t stop at the courthouse door.
The Rules That Already Apply
While bar associations should update their guidance to address marketing content directly, current rules already apply. No new rule is needed for this to become a disciplinary issue.
As the ABA Law Practice Magazine noted in early 2026, three existing model rules already cover AI-generated marketing content directly.
- Model Rule 7.1 prohibits false or misleading communications about a lawyer’s services. If an AI tool produced content with inaccurate legal information, outdated statutes, or misleading claims about your firm’s record, and it went live without attorney review, that’s your problem. Not the vendor’s.
- Model Rule 1.1 requires competence, including an understanding of the benefits and risks of technology you use in practice. Publishing AI content without understanding what the tool was trained on, or what errors it tends to produce, is a competence issue.
- Model Rule 5.3 governs supervision of non-lawyer assistance, and it extends to third-party contractors. Your marketing agency is a third-party vendor. You remain responsible for what they produce under your name.
One more layer worth noting: Google’s E-E-A-T framework rewards named attorney authorship, verifiable credentials, and demonstrated legal knowledge when ranking content. AI-generated content without attorney review fails that standard. Law firm content has to do two things at once: align with professional responsibility rules and establish the firm as a genuine authority.
It’s a tall order on every front.
The framework is already there. It just hasn’t been applied to marketing content yet.
Where Enforcement Is Heading
The legislative pace is accelerating faster than most firms realize.
In 2025, for the first time, lawmakers in all 50 states introduced AI-related legislation, and 145 bills were enacted into law. As of March 2026, 45 states have introduced 1,561 more. Washington state passed HB 1170 in March 2026, requiring disclosure when content includes AI-generated elements.
That’s the direction, not the exception.
Pennsylvania already mandates explicit disclosure of AI use in all court submissions. Oregon Ethics Opinion 2025-205 addressed similar disclosure and supervision requirements. The trajectory from court filings to client-facing content is a short one.
The sanctions picture is sharpening too. Damien Charlotin’s AI hallucination case database at HEC Paris has catalogued over 1,200 cases globally in which AI produced hallucinated content submitted to courts, growing at five to six new documented cases every day.
In Johnson v. Dunn, No. 2:21-cv-1701 (N.D. Ala. July 23, 2025), three attorneys at a prominent national firm were sanctioned and disqualified from the case. Each was required to provide the sanctions order to every client, opposing counsel, and presiding judge in every pending matter where they served as counsel.
The duty to use AI responsibly attaches to the attorney personally. Not the tool. Not the vendor.
What a Law Firm AI Policy for Content Actually Looks Like
I’m not suggesting firms avoid using AI for content. AI speeds up production. Attorney review is what makes it publishable and more likely to rank.
Again, the question isn’t whether firms can use AI for marketing content. They can. The question is what attorneys need to do to ensure compliance and accuracy, and how to add value so that content produced with AI can actually rank and drive results.
The answer is a documented law firm AI policy that governs how AI is used in content production. Some firms are building this in-house. Others are working with attorney-led content agencies that build review and accuracy verification into every piece from the start.
A real policy covers four things.
- Attorney review before publication: Every AI-generated piece, including practice area pages, blog posts, and FAQ answers, gets reviewed by a licensed attorney before it goes live. That review gets documented.
- Accuracy verification: AI produces content that can be legally wrong. Statutes that don’t exist. Standards that have changed. Jurisdiction-specific claims that don’t reflect local law. The reviewing attorney verifies against primary sources, not just reads for tone.
- Vendor oversight: If your marketing agency produces AI-assisted content for your firm, your engagement should address how content is reviewed, who owns accuracy, and what happens when errors surface. Their use of AI doesn’t reduce your responsibility for what gets published.
- A content audit: If you’ve been publishing AI-generated content without a formal review process, that’s a current problem. Inaccurate content on a live website is inaccurate right now.
This is the same supervisory standard you already apply to work from associates and paralegals. Most firms just haven’t thought to apply it to their marketing vendor.
The Gap Between Adoption and Governance Is Where Exposure Lives
According to the Clio 2025 Legal Trends Report, 79% of legal professionals used AI tools last year. Only 44% of firms had any formal AI governance policy in place.
That gap is where the exposure lives.
Bar associations addressed the highest-stakes AI uses first: filings, research, client advice. Marketing content is next. The professional responsibility framework is already in place. The firms that build a documented review process now won’t be caught flat-footed when that guidance lands.