Legal & Business Commentary at Grassroots Level

If AI is now normal in law firms, should clients be told?

If you think Will Smith’s AI noodle-eating videos have progressed rapidly, wait until you hear how quickly legal AI has moved into ordinary practice!

Clio’s 2026 UK and Ireland report says 89% of legal professionals now use AI in some form, and 70% adopted it within the past year. AI is no longer a novelty. It is part of the toolkit now, whether firms say so loudly or not.

The question now is whether clients need to know. Clio says 81% of firms claim they tell clients about AI use at least occasionally, yet only 7% of clients recall being told. At the same time, 79% of the public say lawyers should disclose it. There is an obvious disconnect between what firms think they are communicating and what clients believe they have been told.

To be fair, clients are not always entirely innocent in the great AI debate either. Many solicitors will recognise the now familiar client email: eight immaculate paragraphs arrive at 11:02, a detailed reply goes back at 11:11, and by 11:14 the client has returned with another 40 paragraphs, newly energised and suspiciously well structured. (A call to the client asking what they mean by a particular point in their email usually dispels the illusion!)

Still, whatever the client may be using at their end, the profession cannot dodge its own responsibilities by pointing across the table.

Even if disclosure of AI use is the ethical default, that does not mean every use of AI needs a formal announcement. If a solicitor uses it for light administrative support or the sort of editing once done by spellcheck or dictation software, mandatory disclosure in every instance starts to look theatrical.

However, if AI is used on client documents, personal data, witness evidence, legal research, drafting, or any part of the work which could affect substance, confidentiality, or outcome, transparency matters more. At that stage, the client may reasonably want to know what tool is being used, why it is being used, and what human supervision remains in place.

The confidentiality point is also important. Not all AI products work in the same way, and not all carry the same risk. Some involve external processing or weaker controls. Others, especially enterprise tools running within a firm’s own environment, may offer tighter protection.

There is also the billing issue. Clio says fixed or flat fees now make up 53% of matters, while hourly billing is down to 32%. If AI lets firms work faster, clients may benefit. But where time-based billing still applies, clients are entitled to know if work once done over several hours is now being completed far more quickly with machine assistance.

Equally, in tightly priced fixed-fee work, AI may be one of the few things keeping the service viable. Quite a few clients who send five-page AI-assisted emails for the price of one may be less enthusiastic if the solicitor is forbidden from using any comparable tool in reply.

So the real question is not whether software helped somewhere in the background, but whether client-identifiable data is being processed safely, and whether the AI changes the risk, confidentiality, cost, or nature of the work.

If we simply accept that AI is the new norm, the search engine replacement and the next phase in information technology, then we need only look backwards for perspective.

We do not disclose every Google search, every CRM automation, or every workflow rule pushing a case from one stage to the next. We do disclose where information may be processed externally.

So in the same way we do not need to sign off our letters with “assisted by spellcheck”, maybe we do not need to disclose that a case has been researched using AI software. Maybe it is as simple as it has always been.

That feels like the sensible approach.

Author: Joseph Stewart-Doyle
