The Akron Legal News


ABA tackles GenAI

RICHARD WEINER
Technology for Lawyers

Published: August 30, 2024

The American Bar Association has released its 15-page Formal Opinion 512 on the use of generative artificial intelligence tools (like ChatGPT). The opinion is dated July 29, 2024.
That is a pretty timely response to tools that came on the market only a couple of years ago and that are difficult to understand at best.
On some level, of course, they seem to be easy. You just type a question into a window and a chatbot responds to it. The difficult part for lawyers is assigning any level of accuracy or trust to the responses.
A different issue, and one that needs to be discussed much more widely, is whether the non-lawyer public can trust GenAI to deliver an accurate answer to a legal question. The answer is a qualified “no,” but that’s a discussion for next time.
Law is a topic that is frankly too subtle for most GenAI chatbots to handle accurately at scale. For one thing, their training data is usually at least a year out of date. For another, they can’t get behind paywalls (like LexisNexis/Westlaw). When they hit those limits, their responses can be honest or dishonest. The dishonest responses are “hallucinations.” There have been a lot of those, to the detriment of anyone who tried to use them in court.
The fundamental rule governing attorneys’ use of GenAI tools is the OG programmer saw: Garbage In, Garbage Out (GIGO). Bad, unqualified, incomplete, or irrelevant output comes from the same kind of input, and that is just as true of legal AI.
In fact, though the opinion doesn’t break this down, there are three different kinds of legal AI. In descending order of trustworthiness, they are:
1. Mega outfits like Thomson (West) and LexisNexis that are developing their own in-house GenAI that can fetch actual citations of actual cases in the proper context (as trustworthy as GenAI can be);
2. Small startups concentrating on legal AI that don’t have the kinds of resources that the big folks have (lots of them); and
3. Open-to-the-public GenAI like ChatGPT that pretends to understand the question but often doesn’t, and therefore occasionally hallucinates or outright lies. This produces cringeworthy lawyer headlines, bar suspensions, etc.
But, as I indicated, this first stab isn’t terrible. It seems to concentrate on #3 above, and it has a few things to say to help the profession get past the embarrassing headlines that those hallucinations have dropped on it.
Rather than creating new rules governing attorney usage of GenAI, the Opinion refers members to parts of other rules that might apply. Specifically, the opinion says that it “identifies some ethical issues involving the use of GAI tools and offers general guidance for lawyers attempting to navigate this emerging landscape.”
The opinion “states that to ensure clients are protected, lawyers and law firms using GAI must ‘fully consider their applicable ethical obligations,’ which includes duties to provide competent legal representation, to protect client information, to communicate with clients and to charge reasonable fees consistent with time spent using GAI.”
It added that the ABA committee and state and local bar association ethics committees will likely continue to “offer updated guidance on professional conduct issues relevant to specific GAI tools as they develop.”
Model Rules specifically mentioned in the opinion are:
Model Rule 1.1 (Competence). The new Opinion suggests a risk/benefit analysis in using GenAI to deliver services to clients. In other words, you are still the delivery system, not the chatbot.
Model Rule 1.6 (Confidentiality). I cannot say this enough: anything you type into the chat window can become public information. The data is owned by the company that owns the chatbot, and there is no way to get it back.
Model Rule 1.4 (Communications). You have a fiduciary duty to be the source of legal communications to the client. The chatbot does not.
Model Rule 1.5 (Fees). You can charge clients for the actual time spent using chatbots, but you can’t charge them for the time you spent training yourself on the use of the chatbot. Or, I guess, for defending yourself after following a hallucination that gets you thrown out of court.
TTFN.
You can read Formal Opinion 512 here: https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/ethics-opinions/aba-formal-opinion-512.pdf