Family lawyers are becoming early adopters of Artificial Intelligence, and we are facing new challenges to our professional responsibility. My new article in the Florida Bar Commentator examines how generative AI forces lawyers to expand the traditional duties of candor, confidentiality, and competence to include this new relationship we have with our non-human assistants.

Generative AI is a subset of the much broader field of Artificial Intelligence, one focused on creating the text, images, and music we use in our practice and personal lives. Generative AI systems, like Claude and ChatGPT, are the best-known examples.
AI is evolving rapidly. In February 2019, when OpenAI released GPT-2, it could barely count to five and threw insults at users. A mere four years later, Stanford Law School administered the Uniform Bar Exam to GPT-4, and it passed both the multiple-choice and written portions of the exam and scored in the 90th percentile overall.
The 2023 Future Ready Lawyer Report showed that seventy-six percent of legal professionals in corporate legal departments and sixty-eight percent of law firms use generative AI at least once a week.
Along those lines, eighty-five percent of law firm lawyers and eighty-four percent of in-house lawyers say they expect to make greater use of technology to improve productivity. So what could go wrong?
A lot can go wrong with AI. So much can go wrong that the Florida Bar has issued Ethics Opinion 24-1. One of the easiest mistakes to make involves confidentiality. Before uploading your clients’ confidential information into an AI chatbot, review the AI system’s privacy policies, and avoid uploading any client information unless the platform encrypts your data.
Lawyers who rely on generative AI for research, drafting, communication, and client intake have the same responsibilities, and face many of the same risks, as when relying on paralegals and assistants. A 2024 study of general-purpose chatbots found that AI models hallucinated as much as eighty-two percent of the time on legal queries.
Ultimately, a lawyer is responsible for the work product that their nonlawyer assistants and AI programs create. This is true regardless of whether that work product was originally drafted or researched by a nonlawyer or an AI program.
The Federal Reserve Bank of Dallas recently published a paper hoping to alleviate concerns that AI will become our evil overlords. Unfortunately, the paper conceded that, under some scenarios:
“AI eventually surpasses human intelligence, the machines become malevolent, and this eventually leads to human extinction.”
The article is available from the Florida Bar Family Law Section Website here.