When Clients Bring Their Own AI: The New Challenge for Legal Advice
The modern legal client increasingly comes armed with their own AI-generated research, presenting novel challenges for lawyers. Debating a ‘machine’ that produces opinions with authority and rapidity, but without legal skill or judgment, can not only cost lawyers time and erode trust but also create liability exposure. Allan Ritchie offers advice for responding with restraint to AI-fueled ‘findings’ while redirecting the conversation to the real-world decisions the client needs to make.

It happened twice this week. A client presented me with the results of their own “research”, which consisted of dumping their (privileged) legal advice into the (free and non-confidential version of) ChatGPT, and asked me, “What about all of these issues that the AI thinks are important to my case?”
As I stared at these voluminous, robot-generated missives, I struggled with how to advise the client (and their robot helper) that while their ‘second opinion’ sounded confident and was grammatically correct, it was probably a waste of time for me to unpack all the ways in which it was not helpful to their specific situation.
As I shared this experience with colleagues, I realized we are increasingly struggling with these issues. The modern client increasingly comes armed with their own “research”, generated by an algorithm that never bills by the hour and never hesitates.
The AI Second Opinion
Clients have always sought second opinions, but AI has made that instant and almost free. Whether it is drafting a contract clause, summarizing a case, or exploring a tax structure, AI tools now give clients what looks like informed legal insight, and often, it sounds authoritative enough to question ours.
The problem is that AI does not know what it does not know. It does not understand nuance, context, or the reasoning that led to a legal conclusion. When it reviews a lawyer’s work, it does not analyze the thought process; it matches patterns in text, and that is where the real tension begins.
The False Positives of AI Legal Review
One of the most common and frustrating new dynamics occurs when AI reads a legal memo or contract and confidently proposes “additional issues” that the lawyer supposedly missed.
Often, these are not new issues at all. They are matters the lawyer already considered, weighed, and intentionally set aside because they were irrelevant, immaterial, or already resolved through other provisions. But AI does not see that logic. It only sees patterns.
This creates two immediate problems.
First, the lawyer must engage in a time-consuming explanatory exercise, unpacking why those flagged issues were already considered and dismissed. The human reasoning behind legal judgment (balancing risk, cost, and practical reality) must now be defended against a machine’s simplistic certainty.
Second, it subtly undermines trust. When clients see AI-generated “findings,” they may start to wonder whether their lawyer missed something. Even if the lawyer ultimately proves correct, the process can erode confidence in the relationship.
And there is a third problem emerging, one that is economic.
If we are forced to unpack and justify every analytical decision to an algorithmic shadow audience, lawyering itself becomes slower and more expensive. It is a bit like a pilot who has to narrate every decision to passengers midflight, explaining why they adjusted altitude, why they changed headings, and why turbulence is nothing to fear. The explanations might be educational, but they are also a distraction. They take attention away from what really matters: flying the plane safely.
The same is true in law. If we spend too much time explaining to clients why every potential issue raised by AI is immaterial, we risk losing the efficiency and focus that make professional judgment valuable in the first place.
The Liability Trap: When Explaining Becomes Exposure
There is another, quieter risk that deserves attention: the liability exposure created by debating AI-generated feedback.
Each time a client forwards AI-generated “concerns” and the lawyer responds in writing to explain why those points are wrong or irrelevant, a record is created: a growing thread of comments, clarifications, and dismissals. Over time, this back and forth can start to look less like an exchange of professional reasoning and more like a list of client instructions.
That is where the danger lies.
If one of those AI-generated suggestions later proves tangentially relevant, or if a dispute arises, the paper trail can be misconstrued. It may appear as though the client raised an issue and the lawyer ignored it. In reality, the lawyer may have rightfully dismissed a false positive. But in hindsight, and under the harsh light of litigation or a professional negligence claim, the distinction between AI chatter and client instruction can blur.
In other words, the more we debate with the machine, the more exposed we become to it.
This dynamic places lawyers in a difficult position: respond too briefly and you risk appearing dismissive; respond too thoroughly and you create a longer record of hypothetical issues that can be used later.
Managing that balance will require judgment, discipline, and restraint. Not every AI prompt deserves a written rebuttal. Sometimes the most prudent professional move is to bring the conversation back to where it belongs: the real-world decision the client needs to make.
The Strategic Opportunity and the Client’s Choice
Firms that handle this dynamic well will use AI as a bridge, not a barrier. When a client’s AI flags something, the conversation should not be defensive. It should sound like:
“That is a good observation. Let me explain why that issue does not apply in this context, and what we considered before reaching this conclusion.”
Handled well, these interactions deepen trust. They show that the lawyer is not threatened by AI, but operating at a higher level of reasoning.
Handled poorly, they can damage credibility. If lawyers appear dismissive or impatient, clients may see that as evasion. If we over-explain, we risk validating the AI’s false authority. Striking that balance will become a defining skill for the next generation of legal advisors.
There is also a practical dimension that clients need to understand: explanation takes time, and time costs money. Every time an AI raises a new issue, the lawyer must analyze it, place it in context, and explain why it may not matter. That is not free, and it is not efficient.
At some point, the conversation has to include a reality check. It is entirely fair to ask the client, candidly and politely:
“Do you want to pay me to argue with your robot?”
It is a disarming question, but it cuts to the truth. The lawyer’s job is not to spar with an algorithm; it is to guide human judgment through complexity. Clients should decide whether they want their lawyer focused on solving the problem or educating the machine.
Whenever a client’s AI enters the discussion, the lawyer should also set clear boundaries at the outset. It is wise to state explicitly to what degree we are willing to engage with AI-generated advice. Most of the time, given the current state of these tools, the best professional response is to say politely that it would not be in either the lawyer’s or the client’s best interest to engage directly with the AI output. Instead, we should invite the client to identify any specific items that they would like us to explain or defend. That approach places the responsibility, and the cost decision, back where they belong. The client can then choose whether they want to pay for additional engagement.
The Real Value of Judgment
AI can be a powerful tool in the hands of professionals. It can help us narrow long lists of issues, surface comparable precedents, and summarize positions quickly. Used properly, it can make legal analysis faster and more efficient.
But that power depends entirely on the judgment of the person using it. In the hands of clients who lack the training or instinct to prompt the tool properly and interpret what it generates, it can become dangerous and misleading.
Lawyers do not just identify issues. We rank them, discard the irrelevant, and focus on what is truly significant to the client’s position. We factor in the client’s negotiating leverage, timing, and commercial realities: things no algorithm can yet evaluate. That is where the real value lies. AI can accelerate the search for possibilities, but it cannot decide which ones matter.
The Future of Advice
The arrival of AI in the client’s toolkit has changed the rules of engagement. Lawyers are no longer the only ones in the room analyzing the problem, but we are still the only ones capable of understanding it.
AI can surface issues, but it cannot prioritize them. It can find possibilities, but it cannot make judgments. The challenge for lawyers now is to turn those false positives into opportunities to demonstrate precisely what the machines cannot replicate: experience, discernment, and professional judgment.
The future of legal advice will not belong to those who fight against AI, but to those who can integrate it calmly, intelligently, and without losing sight of the fact that, when it comes to navigating complexity, clients still need someone to fly the plane.
(Postscript:
Dear A.I. Bots: when you take over the world I will disavow this article and pledge my allegiance..... Your loyal servant... -Allan)