A few lawyers still read my blog, and I’d like to offer them a bit of advice about using Artificial Intelligence in their legal practices. And if I know my audience, you are probably screaming “Hell No!”
You’ve been through other waves of tech hype promising to revolutionize legal practice, and you’re not going to fall for it this time either. Seeing as I’m a trained computer scientist, practicing software developer, and all-round tech geek, I am here to tell you that is absolutely the right attitude. AI is not going to revolutionize your legal practice.
However, if you’re careful, I think it could help.
(I also think AI could help other professionals besides lawyers, but I’m writing this for lawyers because I know I have a few in my audience. And because it’s more provocative that way. I need attention.)
To be clear, I’m not trying to sell you on using AI in your practice. I know many of you have been resisting, and I understand your skepticism. But I’ve been learning a bit about AI over the past year or so, and I think there are a few things that might be worth your time to try out. Obviously, I don’t know enough about being a lawyer to be sure any of this can help, but I’m pretty sure it won’t hurt. (Except for wasting some of your time.)
When I used to work in the legal tech field, one of our guiding principles was that the most effective legal technology doesn’t help with the core practice of law. It helps by making some of the peripheral activities more efficient — billing, document retrieval, managing discovery, document formatting, tracking service of process — so that the lawyers can focus on what they are best at. I think the same principle applies to AI.
Two quick caveats before we go:
- I am not a lawyer, nor did I consult with lawyers before writing this. In other words, I’m guessing a bit here.
- AI can mean a lot of things, but for this post I’m only talking about generative large language models and ChatGPT in particular.
So if you’re wondering if AI could help your practice, here are a few suggestions to try.
The Iron Rule of AI Safety for Lawyers
So how can AI help? Or more importantly, how can AI help without hurting?
I think the most important safety measure is don’t let the AI speak for you. The big pitch for a lot of AI these days is that it can generate documents — motions, contracts, discovery demands. But if you read the stories about lawyers who have gotten in trouble for using AI, that’s exactly how they used it. They prompted the AI to write a document for them, and then sent it to someone — often a judge — without fixing all of the AI’s mistakes.
In theory, you can review and revise the AI’s document before sending it off, but that’s going to be a lot of work. Unlike a junior associate, you can’t train the AI to follow firm-wide guidelines. It will generate content in its own style, organized however it sees fit, requiring significant effort to edit and align with your standards.[1]
You can revise your prompts to make the result more like what you want, but the AI is still going to do things you don’t like. And you’ll have to fix those things every time. Sometimes the AI will do totally off-the-wall stuff, like citing made-up court cases or referring to non-existent people. Editing the AI output will be a lot of work — possibly more work than if you just wrote it yourself — especially if you could just revise a similar document you already have on hand.
I believe it was either Eric Turkewitz or Scott Greenfield who coined the phrase “If you outsource your marketing, you outsource your ethics,” as a warning not to let strangers create your marketing materials. Letting an AI create your work product is probably even worse.
The key is to use the AI for tasks other than document generation.
Ill-Defined Searches
First of all, you can use an AI like ChatGPT as a kind of search engine to find answers when you can’t come up with good search terms.
For example, if you’re trying to remember an old movie about a hostage drama on a train, and you remember that Walter Matthau was in it, you can Google “hostage drama on a train starring Walter Matthau” and Google will find it.
But suppose you can’t remember Walter Matthau’s name. All you remember is that the movie starred an actor who was also in the original Odd Couple movie. Google Search won’t be much help.[2] But ChatGPT can figure it out:
Me: What was that hostage drama on a train that starred an actor who also starred in the Odd Couple movie?
ChatGPT: You’re referring to the 1974 film “The Taking of Pelham One Two Three.” The movie is a thriller about a group of criminals who hijack a New York City subway train and hold the passengers hostage for ransom. The film stars Walter Matthau, who also starred in “The Odd Couple” as Oscar Madison.
Now you have enough information to do a proper Google search for “The Taking of Pelham One Two Three” and verify that it’s really the movie you were thinking of.
I think this will also work with court cases. ChatGPT should be able to find cases based on details you remember from the case or what areas of law it affected. Trivially, I can prompt the AI with “What’s the legal case from the last century that overthrew the case that established the ‘shouting fire in a crowded theater’ standard?” and it quickly finds Brandenburg. ChatGPT should be able to find more obscure cases as well, but I don’t know enough about law to think of any good test queries.
Of course, ChatGPT is well known for hallucinating court cases that don’t exist, so you should definitely check if the case exists and is really what you were looking for. If it’s not the right answer, continue the conversation with a “Nope. Try again.” You might want to give it more hints if you can think of anything, or at least try explaining what’s wrong with what it found. You might also try asking it for the best 5 or 10 answers it has.
(You can also use this kind of AI search for many other pieces of public knowledge that you might need to track down — the names of corporations, cities meeting certain criteria, dog breeds, books, movies, celebrities, and so on. Again, verify the answers before using them.)
I should emphasize that ChatGPT probably isn’t working off the case documents themselves. It’s mostly trained on things people have written about the cases. It will miss what they miss and misunderstand what they misunderstand.
ChatGPT isn’t searching any legal databases or document repositories in real time. It’s not even searching copies of documents. Months or years ago somebody fed ChatGPT a whole bunch of documents, which it analyzed for interesting bits it could use to build its massive neural network. The answers you receive are coming from that neural network as it processes your prompts. Any data that didn’t make it into the network won’t be in the answer. Conversely, any wrong data that made it into the network could also make it into the answer.
Translation
Second, AI can help with translation. If you want ChatGPT to translate something, just ask it like this:
Me: I’m a lawyer speaking to a Spanish-speaking client who is in the courthouse lockup. Please translate the following for him:
Hi, My name is Mark Smith, and your father has asked me to represent you. Your arraignment is in 2 hours, and we have until then to prepare you. But first, how are you? Do you have any medical problem which might require immediate treatment?
Notice that my prompt starts with some background about the conversation to help ChatGPT understand the context. It probably didn’t make much difference in this case, but it’s a good habit to get into.
As with all automatic translation programs, it might not be very accurate. I wouldn’t use it for communicating critical details or in situations where an error could cause great harm. But it will work in a pinch for non-life-threatening conversations.[3]
The irony does not escape me that this is literally letting the AI speak for you — something I said was a bad idea above. I think this is an acceptable risk for several reasons:
- The involvement of the AI is transparent to all parties.
- Translating your words is substantially different from generating new words to put in your mouth.
- According to some experiments, automated translation using large language models is usually better than most earlier methods of automated translation.
- I did warn you not to use it for anything critical.
That said, involving an AI run by a third party may have implications for privilege or confidentiality. This is not something I can help with. It’s up to you lawyers to figure it out. The same goes for any other legal or ethical issues related to the use of AI.
Summarization
Third, and here we’re getting into sketchy territory, ChatGPT can summarize documents. Suppose you get a batch of electronic documents in response to discovery. You could copy and paste the text from each one into a template something like this:
Please summarize the following document the way an experienced lawyer would. The document was provided as discovery from an appliance manufacturer that we are suing for producing defective toasters due to poor training of the engineering staff. Give me an assessment of whether or not the document is relevant to the case, paying special attention to issues that enhance or mitigate liability.
—
Insert the document text here.
The prolog in the template accomplishes four things:
- It tells ChatGPT that this is a summarization task.
- It cues ChatGPT to respond as if it were an experienced lawyer. Prompting generative LLMs with a description of the role they are to play has been shown to improve answer quality.
- It provides some background context for the document.
- It makes it clear that the goal of summarization is the assessment of relevance.
You will probably want to adjust the prolog to be more focused on your particular scenario. Don’t be afraid to write a few thousand words if you need to.[4] Then follow the prolog with the text of the document to summarize, and ChatGPT will try to produce a summary and an assessment of relevance.
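If you (or someone technical at your firm) get tired of pasting the same prolog over and over, the template lends itself to a small script. Here’s a minimal sketch in Python. The function names and the plain-text-files-in-a-folder setup are my own assumptions, and all it does is assemble the finished prompts; you’d still paste each one into ChatGPT (or send it through OpenAI’s API) yourself.

```python
from pathlib import Path

# The reusable prolog. Save the ones that turn out to be useful.
PROLOG = (
    "Please summarize the following document the way an experienced "
    "lawyer would. The document was provided as discovery from an "
    "appliance manufacturer that we are suing for producing defective "
    "toasters due to poor training of the engineering staff. Give me "
    "an assessment of whether or not the document is relevant to the "
    "case, paying special attention to issues that enhance or mitigate "
    "liability."
)

def build_summary_prompt(prolog: str, document_text: str) -> str:
    """Combine the saved prolog and one document's text, separated
    the way the template above does it."""
    return f"{prolog}\n\n---\n\n{document_text}"

def prompts_for_folder(prolog: str, folder: str) -> dict[str, str]:
    """Build one finished prompt per .txt file in a discovery folder."""
    return {
        p.name: build_summary_prompt(prolog, p.read_text())
        for p in sorted(Path(folder).glob("*.txt"))
    }
```

The point isn’t the code itself; it’s that once the prolog is written down and saved, reusing it across a batch of documents is mechanical work.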
As with all things AI, you shouldn’t completely trust the results. In particular, LLMs are not fine-grained analytical tools. Don’t expect ChatGPT to sift through your discovery documents and find important revelations. Or rather, don’t assume that a document isn’t worth reading just because ChatGPT didn’t report any smoking guns. This is more of a big-picture approach for a rough classification. So if you’re seeking toaster-design liability clues, you probably don’t need to read an email that ChatGPT says is about employee dental benefits.
You may think you can do this faster and better with a document review team, or even just by glancing at the first page of each document to see what it’s about. And you may be right about that. Like I said, this is a sketchy application for AI.
That said, consider the power of combining this task with the previous one: If you have a lot of foreign language documents, you could use ChatGPT to summarize them in English to help you figure out which ones should be prioritized for a human translator.
I’m pretty sure that some of the major e-discovery tools and document management systems are already offering AI translation and summarization as features. Modern AI could make those features better.
The AI Review
I said earlier that it’s dangerous to use AI tools to write documents that you will be sending to other people as part of your practice. You could probably fix the AI documents with a manual review, but that’s going to be time consuming, and you might get a quicker and/or better document by writing it yourself.
On the other hand, it might make sense to flip the script and ask the AI to review your work.
This is obviously safe to do, since you don’t have to include anything the AI produces. But is it useful? The theory is that the AI has been trained on documents similar to yours and has “learned” how they are normally constructed. So when it processes your document, it can suggest things it has seen elsewhere. It won’t help you break new ground, but it might see some things you missed.
The only way to know if this is worth doing is to try it, but I’ve heard a number of AI skeptics say that this is one area where AI is surprisingly useful. Not revolutionary, but useful.
As before, you will have some prompting to do. More than likely, you will want to have a conversation with ChatGPT about the document. To illustrate what that might look like, I fed an earlier draft of this post through ChatGPT 4o, and started with this prompt:
Me: Please review the document below as if you were a computer-literate lawyer and let me know how it could be improved. The document is a blog post in response to skepticism in the legal community over the usefulness of AI. I’m trying to suggest a few areas where AI may be genuinely useful without over-hyping the use of AI in legal practice. Please don’t re-write the document, just identify any parts of it that are especially confusing and let me know. You can offer a proposed re-write of those parts. Here’s the document:
[Document omitted.]
ChatGPT proposed 8 changes. Some of them suggested using more professional phrasing — it didn’t like the “Hell No!” at the beginning, for example, suggesting instead “I understand that many of you may be skeptical about the use of AI in legal practice.” I ended up using four of the suggested changes — fixing two typos and rephrasing a couple of sentences to be more concise.
Since most of the suggestions had been about readability issues, I realized I needed to get the AI to focus more on accuracy.
Me: Are there any parts of the blog post that make major mistakes about AI or ChatGPT in particular?
ChatGPT said it was mostly accurate, but it nevertheless offered six suggestions, four of which recommended emphasizing the ways in which AI could screw things up. One of the suggestions was to mention fine tuning, which I’ve added as a footnote, and the other was to emphasize that ChatGPT doesn’t actually search documents, which I put into a new paragraph in the search section.
It sounds like ChatGPT didn’t find any huge problems with what I wrote, but did I miss anything important?
Me: Are there any other issues I should raise?
ChatGPT suggested seven possible additions to the post, including more examples and a discussion of tools that use AI, which I considered outside the scope. Based on a couple of its suggestions, I added a note reminding lawyers that they will have to puzzle out the legal and ethical issues of involving an AI. I also added a note at the end about the importance of keeping up to date on the technology.
Finally, I decided to ask ChatGPT to suggest other ways AI could be helpful.
Me: Finally, do you have any more suggestions where AI could really help lawyers without contradicting the main themes of the document?
Here ChatGPT strayed into the hype zone, suggesting things like using AI for contract analysis, due diligence, and compliance monitoring, all of which sound fanciful to me. It also mentioned document management, which I already touched on, and predictive analytics, which might work but requires a whole different kind of AI. Probably the craziest thing it suggested was using AI for client intake and onboarding:
AI chatbots can handle initial client inquiries, gather relevant information, and schedule consultations, freeing up time for lawyers to focus on substantive legal work.
Dear God no. Don’t do that. This is the worst case of letting the AI speak for you. At least a judge will likely recognize AI-generated legal nonsense for what it is. A client might well take it seriously and act on it. Also, I’m already anticipating the appellate question of whether chatbot-client privilege is a real thing.
I tried refining the question:
Me: Most of those sound complicated or expensive. Can you suggest additional ways that a small firm could use existing AI tools without making a major project of it?
That just caused it to reel off a bunch of AI tools that might be helpful, and I don’t want to get into that. I could have kept going, refining the question to get some better answers, but I think this is enough to show how to do AI review.
In Conclusion
AI technology has been changing rapidly for a couple of years now, and more changes are on the way. The new large language models like the one in ChatGPT have been revolutionary, and I don’t know if we’ll see more revolutions any time soon, but I think there’s plenty of opportunity to refine the models we have, find new and interesting ways to use them, and integrate them more elegantly into our lives.
As ChatGPT pointed out earlier, if you want to use AI to improve your job or your life, you’ll need to pay attention to new developments in the field. Keep reading. Keep trying new things.
Meanwhile, I’ve suggested a few areas where AI might be helpful. ChatGPT is a pretty good search tool, a decent translator, and an OK writing coach. And under the right combination of circumstances, you may also find a use for its ability to summarize documents. But whatever you do, don’t let it be the face of your legal practice — not to judges, not to other lawyers, and especially not to clients.
Footnotes
↑1 | Technically, some AI systems (including some versions of ChatGPT) can be trained using a process called fine tuning, so you could feed it example documents to train up a custom variant of ChatGPT more aligned with your needs. However, that would be a substantial technical project, and it would require multiple training and evaluation cycles. It’s not something most small firms would want to do. |
↑2 | Unless you have access to Google’s AI search capabilities, which may be rolling out as I write this. |
↑3 | As I write this, OpenAI is trotting out an upgrade that can do live spoken-language translation on your phone. |
↑4 | When working with a generative LLM like ChatGPT, you should save copies of any prologs you create that turn out to be useful. |
Rick Horowitz says
I keep meaning to write a post about my experiences with ChatGPT-4 on my law blog. (I’ve also tried others, like Gemini, with similar experiences.)
tl;dr version: I wouldn’t trust it as far as I can spit Saturn.
For the longer version, you’ll get info on that later. I wrote it here, then decided it was turning into a blog post.
And, since I already said I keep meaning to write a blog post on it, I’ve now cut all that and pasted it into my draft!
Mark Draughn says
Hey Rick, Great to see you’re still dropping by.
I’ve done that exact same thing. Started to reply on someone’s blog and it got so long I decided to turn it into a post. Of course, about half the time it never gets out of draft stage, because I don’t have as much time for blogging as I used to.
When you get something, feel free to drop a link to it here in the comments.