If your law firm’s content strategy relies heavily on AI-generated articles, you may be in for a rude awakening sometime soon (if you haven’t been already). New research from Google engineers shows that detection of AI-generated text is getting significantly more accurate.
The study, conducted by two researchers at Google DeepMind, describes a hybrid AI content detection system that identifies machine-written content far more reliably than previous methods, even in situations where AI detection has historically struggled.
One of the authors of the research leads work on detecting LLM-generated content. That alone tells you something important: Google is paying close attention to how AI is being used on websites.
Now this isn’t to say that all AI content is bad, or that you shouldn’t use AI to help you create content. But you need to be careful about how you use it on your website, because it can end up hurting your visibility on Google and answer engines, your traffic, your lead volume, and (most importantly) your revenue.
Why Does Google Care If You Use AI-Generated Content?
Costs Involved
First, indexing and managing the entire web is incredibly expensive. In 2026, Google’s capital expenditure (capex) is projected to reach nearly $185 billion, largely driven by the need to build massive data centers to handle both AI search (Gemini/AI Overviews) and the sheer volume of content being produced. That cost also includes the compute energy needed for AI processing. Now, imagine how much AI content is being generated every day all over the world. That creates a pretty massive line item for Google. Google has no incentive to index AI content that adds no value for users beyond what already exists.
Google’s systems are now specifically tuned to detect “Scaled Content Abuse.” If your AI content is just a regurgitation of existing web content, Google has no reason to include it.
Google Doesn’t Reward “Easy”
Take it from someone who has analyzed Google’s algorithm nearly every day for over 20 years: Google doesn’t often reward pressing the “easy” button when it comes to anything SEO related. At least, not for any extended period of time.
In 2024, a massive leak of Google Search’s internal engineering documentation (thousands of pages of API data) confirmed what many of us who have been studying the algorithm for a very long time suspected. Two specific metrics in those documents stand out when we talk about AI:
- contentEffort: This is a literal indicator used in their ranking systems. It suggests that Google attempts to measure the amount of work involved in creating a piece of content. Because AI can generate 1,000 words in an instant, its “effort score” is effectively zero.
- originalContentScore: Google uses this to weigh how much of your page is truly unique compared to existing data. Since LLMs (Large Language Models) are trained on existing web data, the content they produce is not original. It is essentially a mirror of what has already been said in the documents the model was trained on.
If your strategy is built on “low effort” and “low originality,” that’s not the right recipe for long-term success.
The Dangers for Law Firms That Use AI Content the Wrong Way

Over the past couple of years, we have already seen many websites (law firm websites included) lose significant organic traffic after publishing large volumes of low-quality AI-generated content. If your law firm relies on traffic and leads from Google, be very careful here. Whether you write content in-house or use an agency, you’d better know how AI is being used. Many law firm marketing agencies right now are taking the easy (i.e., low-contentEffort) route, and that’s going to cause huge problems for many law firms. Don’t go the cheap and easy route today, or it may cost you significant revenue in the future.
Additionally, you may lose Google’s trust, which ultimately means reduced SEO performance overall.
How AI Content Detection Works Today
Most systems used to detect AI-generated text rely on two main approaches:
1. Watermarking
2. AI Classifiers
Watermarking
Some AI providers embed statistical patterns into the text generated by their models. These patterns act as a kind of invisible watermark. Detection tools can analyze text and determine whether those patterns are present. However, watermark detection works best when the model has a lot of freedom in choosing words. For example:
- “Write a blog post titled ‘What to Do After a Car Accident’”
In open-ended prompts like this, the AI has many possible ways to generate text, which makes watermark signals easier to detect. But when prompts are factual and constrained, the model has fewer choices. For example:
- “What is the statute of limitations in California for personal injury?”
In situations like these, the watermark signal becomes weaker and harder for detection tools to pick up.
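To make the idea concrete, here is a minimal Python sketch of “green list” style watermark detection. This is a toy illustration, not Google’s actual scheme: a watermarking model nudges each word choice toward a pseudorandom “green” half of the vocabulary, and a detector holding the key checks whether green words appear more often than chance.

```python
import hashlib
import math

def is_green(prev_token, token, key="demo-key"):
    # Pseudorandom ~50/50 split of word choices, seeded by the previous
    # token -- a toy stand-in for how statistical watermarks partition
    # the vocabulary at each generation step.
    digest = hashlib.sha256(f"{key}:{prev_token}:{token}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(tokens, key="demo-key"):
    """z-score of how often 'green' tokens appear versus the 50%
    expected by chance. A watermarking model biases generation toward
    green tokens, so large positive scores suggest watermarked text."""
    n = len(tokens) - 1  # number of (previous, current) token pairs
    if n < 1:
        return 0.0
    greens = sum(is_green(p, t, key) for p, t in zip(tokens, tokens[1:]))
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)
```

Notice that the z-score grows with text length: a short or tightly constrained answer simply doesn’t contain enough word choices to produce a statistically strong signal, which is exactly the limitation described above.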
AI Classifiers
Another common method uses machine learning classifiers. These models are trained to recognize statistical and stylistic patterns that appear more often in AI-generated writing. Classifier systems are generally more consistent than watermarking, but they have their own problem. Sometimes they incorrectly flag human-written content as AI.
If you have ever used AI content detection tools (we use multiple here at iLawyer), you have probably seen this happen. We have on many occasions. False positives are something search engines like Google want to avoid.
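For a rough sense of what “statistical and stylistic patterns” means, here is a toy Python sketch of the kind of features a classifier might be trained on. Real detectors learn far richer patterns from neural representations; these two hand-built features are purely illustrative.

```python
def stylometric_features(text):
    """Two toy stylistic features of the kind AI-text classifiers learn
    from. Purely illustrative -- not any real detector's feature set."""
    # Crude sentence split on terminal punctuation.
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    words = text.split()
    lengths = [len(s.split()) for s in sentences]
    mean_len = sum(lengths) / len(lengths)
    # Low variance in sentence length ("burstiness") is a pattern some
    # classifiers associate with machine-generated text.
    variance = sum((l - mean_len) ** 2 for l in lengths) / len(lengths)
    # Type-token ratio: vocabulary diversity.
    ttr = len({w.lower().strip(".,!?") for w in words}) / len(words)
    return {"sentence_length_variance": variance, "type_token_ratio": ttr}
```

Hand-built rules like these are also exactly why classifiers produce false positives: plenty of careful human writing is uniform and repetitive, and plenty of AI output is not.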
A Hybrid Detection Model Works Much Better
The research from Google shows that combining both techniques creates a far stronger detection system. Instead of relying only on watermark signals or only on classifiers, the hybrid system analyzes both. When the two signals were combined in the study, detection accuracy improved dramatically.
The hybrid system also performed well when:
- only short pieces of text were available
- the text had been lightly edited or paraphrased
Those are exactly the kinds of changes people often make when trying to disguise AI-generated content.
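The study’s exact fusion method isn’t reproduced here, but the core idea of combining the two signals can be sketched in a few lines of Python. The weights below are invented for illustration only.

```python
import math

def hybrid_ai_score(watermark_z, classifier_prob,
                    watermark_weight=1.0, classifier_weight=2.0):
    """Fuse a watermark z-score and a classifier's AI-probability into a
    single score in (0, 1). The weights are made up for illustration; the
    study's actual fusion method is more sophisticated."""
    # Squash the unbounded z-score into (0, 1) so both signals share a scale.
    watermark_signal = 1 / (1 + math.exp(-watermark_z))
    # Logistic combination: either signal alone can move the score, and
    # agreement between the two moves it further than either by itself.
    logit = (watermark_weight * (watermark_signal - 0.5)
             + classifier_weight * (classifier_prob - 0.5))
    return 1 / (1 + math.exp(-logit))
```

With neutral inputs (z-score of 0, classifier probability of 0.5) the score sits at exactly 0.5; when both signals agree, the score is pushed further toward 0 or 1 than either signal would manage alone, which is why the combination is more robust to short or lightly edited text.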
How Google May Handle AI Content Going Forward
Google does not prohibit the use of AI tools for writing. But Google does care about quality, originality, and value. Pages of AI text that simply repeat what already exists on the internet do not provide value to users. Google knows this, and (cost pressures aside) that is why it removes large amounts of such content from its index.
When websites publish large volumes of AI content, Google has responded in predictable ways. This can mean:
- reduced rankings for individual pages
- pages removed entirely from the index
- reduced rankings across entire websites
- algorithmic penalties
- lowered trust levels for domains (which results in ranking drops and traffic losses)
What About Answer Engines?
Answer engines such as ChatGPT, Gemini, and Perplexity frequently rely on Google search results and trusted web sources when deciding which law firms or websites to reference in their answers. If your pages are removed from Google’s index or your site ends up algorithmically penalized, your chances of being cited by answer engines will drop significantly. Data from AI visibility tracking tools shows this correlation very clearly.
For law firms in 2026, strong Google rankings still play a major role in both traditional SEO and answer engine visibility.
Could Websites Be Penalized by Both Google and AI Answer Engines?
Yes, it’s possible. At this time, answer engines still rely heavily on Google search results to find trusted sources. If a law firm’s rankings get crushed in Google, its visibility in AI-generated answers will drop as well.
Answer engines are also getting better at identifying low-quality or spammy sources. As these systems evolve, they will become better at detecting the spammy AI visibility tactics being used right now by many agencies, marketers, and businesses. Currently, they are pretty bad at it and frankly lack the experience and systems Google has for detecting and preventing manipulation. However, I would expect them to eventually penalize brands much as Google penalizes websites.