
Family Law and AI Hallucinations

Hardly a day goes by without news that a lawyer, and increasingly a client, has been sanctioned for filing AI-assisted legal research that contains hallucinations. Do programs like LexisNexis and Westlaw entirely solve the hallucination risk, or are family law researchers still at risk?

Cyberdyne Legal Research

Being a lawyer today means relying on artificial intelligence the same way we rely on human staff. AI helps with analyzing documents and discovery, as well as with core legal tasks, from researching caselaw to drafting documents. But make no mistake, AI models can hallucinate fake results. And, as any fan of the Terminator movies knows, AI may also lead to human extinction.

For now, though, courts are dealing with the more immediate problems of hallucinations and the misuse of AI in court filings. Most of these cases, if not all of them, involve citations to non-existent legal authority or quotes attributed to cases that do not contain the quoted material, the products of what have come to be termed “AI hallucinations.” A legal AI hallucination occurs when a generative AI model gives you information that appears plausible but is in fact wrong, fabricated, or unsupported by the cited source.

Clients are getting in trouble too. A recent federal case offers a different take on hallucinations: there, it appears AI was used not to hallucinate the law, but to hallucinate the facts. If a typical hallucination is an AI answer built on made-up cases, inventing facts would be a huge new risk in a high-stakes divorce.

In that case, the plaintiff filed a sworn declaration opposing a motion for summary judgment. The declaration contained multiple fabricated quotations, along with manufactured citations to deposition transcripts, presented as if they came from sworn testimony.

In reality, the declaration grossly mischaracterized the testimony and other facts in the record. At oral argument, the lawyers used some of these fabricated “facts” to argue to the Court that the case presented genuine disputes of material fact.

More troubling, the client and his former counsel refused to accept responsibility for creating and submitting the declaration, despite having had multiple opportunities to do so. As a sanction, the court ultimately ordered the lawyer and the client to pay thousands of dollars in attorneys’ fees to the other side.

Semantic Collapse and Legal Research

I recently wrote an article about the new players in AI: retrieval-augmented generation, or RAG, systems. The leading AI legal research tools are RAG systems, and empirical analysis has found that even these tools, like those offered by LexisNexis and Thomson Reuters, may still generate hallucinations in a non-trivial number of cases.

My Commentator article noted studies showing that RAG research models may still hallucinate. But it may be getting worse. An even newer claim has come to light, dubbed “Semantic Collapse”: supposedly, once an AI platform’s document set reaches about 10,000 documents, the system starts treating valuable data like random noise.

In one recent study, four document sets each contained around 300 pages of documents that answered the test questions. Each set, however, was padded with a different number of additional, irrelevant pages, ranging from 1,000 to 100,000. An ideal RAG system should behave identically across all four sets.

In practice, the added irrelevant pages tricked the RAG system into retrieving the wrong answer for a given query, and the more irrelevant pages were introduced, the more likely a wrong answer became. The study’s conclusion was that RAG performance tends to degrade as the number of documents increases.
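To make the mechanism concrete, here is a toy sketch of my own, not code from the study and not any vendor’s actual system. It assumes random stand-in vectors for document embeddings and a plain cosine-similarity search, and it shows one way retrieval can degrade as a document set grows: as more irrelevant pages are added to the index, the best-scoring irrelevant page creeps closer to the query, and the lead held by the truly relevant page keeps shrinking.

```python
# Toy illustration only: random vectors stand in for real embeddings, and a
# plain cosine-similarity search stands in for a commercial RAG retriever.
import numpy as np

rng = np.random.default_rng(0)
dim = 64  # toy embedding dimension

def unit(v):
    """Scale vectors to unit length so dot products equal cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

query = unit(rng.normal(size=dim))                    # the research question
relevant = unit(query + 0.2 * rng.normal(size=dim))   # the page that actually answers it
filler = unit(rng.normal(size=(100_000, dim)))        # irrelevant pages added to the index

# As the index grows, the best-scoring irrelevant page creeps toward the query,
# so the relevant page's lead shrinks (and can eventually disappear).
for n in (1_000, 10_000, 100_000):
    best_filler = float((filler[:n] @ query).max())
    margin = float(relevant @ query) - best_filler
    print(f"{n:>7,} irrelevant pages: relevant page's lead = {margin:+.3f}")
```

In this toy run the relevant page’s lead only shrinks as the index grows, which mirrors the trend the study reports, though real legal research systems use far more sophisticated retrieval than this sketch.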

The AI Paradox

There are fair and unfair observations to make about this new study. It is true that a vector search may become less sharp at distinguishing highly relevant documents from irrelevant ones as the volume of documents increases. At the same time, the study was done by a competing maker of RAG systems, which raises the problem of bias.

More importantly, there is an inherent paradox when we use AI. It is called the AI trust paradox: the more confident and human-sounding an AI chatbot becomes, the more we trust it. The problem is that we can’t. All AI systems, even the ones that seem reliable, can get it wrong. While AI can increase our efficiency, we need to treat these tools as inexperienced assistants that need our guidance.

The U.S. District Court case is here.

My Florida Bar Commentator article on AI is here.

Artificial Intelligence and Professional Responsibility

Family lawyers are becoming early adopters of Artificial Intelligence, and we are facing new challenges to our professional responsibility. My new article in the Florida Bar Commentator examines how generative AI forces lawyers to expand the traditional duties of candor, confidentiality, and competence to include this new relationship we have with our non-human assistants.

AI Law

Generative AI is a subset of the much broader world of artificial intelligence, one that focuses on creating the text, images, and music we use in our practices and personal lives. Generative AI systems, like Claude and ChatGPT, are the best-known examples.

AI is evolving rapidly. In February 2019, when OpenAI released GPT-2, the model could barely count to five and threw insults at users. A mere four years later, Stanford Law School researchers administered the Uniform Bar Exam to GPT-4, and it passed both the multiple-choice and written portions, scoring in the 90th percentile overall.

The 2023 Future Ready Lawyer Report showed that seventy-six percent of legal professionals in corporate legal departments, and sixty-eight percent of those in law firms, use generative AI at least once a week.

Along those lines, eighty-five percent of law firm lawyers and eighty-four percent of in-house lawyers say they expect to make greater use of technology to improve productivity. So what could go wrong?

A lot can go wrong with AI. So much, in fact, that the Florida Bar has issued Ethics Opinion 24-1. One easy mistake involves confidentiality: before uploading your clients’ confidential information into an AI chatbot, review your AI system’s privacy policies, and avoid uploading any client information unless the AI platform encrypts your data.

Lawyers who rely on generative AI for research, drafting, communication, and client intake have the same responsibilities, and face many of the same risks, as they do when relying on paralegals and assistants. A 2024 study of general-purpose chatbots found that AI models hallucinated as much as eighty-two percent of the time on legal queries.

Ultimately, a lawyer is responsible for the work product that their nonlawyer assistants and AI programs create. This is true regardless of whether that work product was originally drafted or researched by a nonlawyer or an AI program.

The Federal Reserve Bank of Dallas recently published a paper hoping to alleviate concerns that AI will become our evil overlords. Unfortunately, the paper admitted that, under some scenarios:

“AI eventually surpasses human intelligence, the machines become malevolent, and this eventually leads to human extinction.”

The article is available from the Florida Bar Family Law Section Website here.