Hallucinated Cases Are Not The Real Problem
We need to start moving away from the hallucinated cases discourse sooner rather than later.
I am at times a bit confused as to why lawyers seem so concerned about hallucinated cases, when it has always been someone's job on the legal team to verify and check the work before submissions are made.
I cannot see how AI changes that responsibility, other than where overconfidence and blind-faith reliance on its output creep in.
Even then, are we saying that AI-generated content and citations are not being checked against the source and aligned with the submissions before anything goes out?
If that is what is happening, the problem is not AI. The problem is the lawyering process, so there is little point blaming the AI.
Lawyers need to be at the beginning and the end of serious AI-generated legal work.
This is really important because the adoption of AI agents will add another layer of complexity to the hallucination problem as they become increasingly autonomous.
I understand why judges are concerned about hallucinated cases.
They have a duty to protect the integrity of the common law and to ensure invented authorities do not make their way into the law, and that is a real concern. For the legal profession, however, I think the discussion is becoming misplaced, because the duty on lawyers has not changed: check your work.
That was always the job.
That is what clients expect and pay for.
It is still the job now.
The issue of consumers using AI for legal matters and hallucinated cases is different.
That is not really a law firm quality-assurance issue.
It is instead a legal-process and access-to-justice issue, one that raises different questions about risk, access to public tools and appropriate safeguards before consumers file in court.
The real priority at the moment, in my view, should be AI literacy and competency for lawyers, and it should be mandatory.
That is where the focus should be as AI continues to develop at scale.
We cannot have lawyers knowing less about AI and how to use it than consumers or indeed their own clients, nor abdicating their professional responsibilities out of blind faith in technology that appears almost magical or too good to be true.