Ellie’s manifesto is post no. 1 in our series, “AI goes to law school.”
The first time I was told that AI would become an important part of future legal practice, I didn’t think it was true. I hoped it wasn’t true. As someone who never learned how to code, and occasionally struggles even to make photocopies, I was resistant to the idea that I would have to learn about technology when I wanted to learn about the law.
My only experiences with AI were news stories of lawyers receiving sanctions for fake case citations, blog posts decrying the theft of creative works, and frustrating conversations where a chatbot kept insisting I make banana bread even though I didn’t own any bananas. But as expert after expert came to discuss AI with us at my summer internship, it seemed like AI in law would become inevitable.
I decided that if I was ever going to jump in, it should be as soon as possible—and I’m glad I did. During my final year of law school, I enrolled in Vanderbilt’s AI in Law Lab practicum. AI is not what I thought it was, and it’s not what it was two years ago or even six months ago. Law students should know what it is and how to use it. Here’s where I would start.
1. Take some time to learn what AI is and how it works.
AI might not be exactly what you expect. It is not a search engine or a supercomputer (even though, for some reason, Google insists on providing AI answers to everything now). It does not always provide the “right” answer. It often fails to do basic math. It struggles to count how many times the letter “r” appears in the word “strawberry.” But it is great at tasks that seem much more complicated, like summarizing hundreds of pages of information or drafting documents based on complex instructions. When I started using AI, many of my frustrations came from not understanding why the technology struggled so much with seemingly simple tasks. The answer is in how it works.
The best way to understand what you should and shouldn’t use AI for is to develop some understanding of what it is, in a technical sense. The simplest way I’ve heard it described is that it’s essentially a really fancy autocomplete, like you might use on your cell phone: based on all the information it has, it predicts the next best word until it generates an answer to your prompt. Of course, the reality is a lot more complicated than that. But that alone starts to explain why it is so good at writing (it has a lot of examples of what words usually go together) and not as good at providing truthful information (it does not “know” the answer; it predicts the answer).
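If a concrete illustration helps, here is a deliberately tiny sketch of the “fancy autocomplete” idea: count which word most often follows each word in a small sample of text, then generate by always picking the most likely next word. This toy is my own illustration, not how real AI systems are actually built — they use vastly larger models and far more sophisticated prediction — but it captures the core intuition that the system predicts rather than knows.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": learn which word most often follows each word
# in a tiny sample of legal-sounding text, then predict greedily.
corpus = (
    "the court held that the statute applies . "
    "the court found that the claim fails . "
    "the court held that the claim survives ."
).split()

# Tally, for every word, which words appear immediately after it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the sample."""
    return following[word].most_common(1)[0][0]

def complete(start, length=5):
    """Greedily extend `start` by the most likely next word, `length` times."""
    words = [start]
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(complete("the"))  # fluent-sounding, but it "knows" nothing
```

Run it and you get a string like “the court held that the court” — grammatical-looking, confidently produced, and meaningless. That is the failure mode to keep in mind: fluency is not knowledge.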
Get into the nuts and bolts, at whatever level is comfortable for you. You can read articles, listen to podcasts, or watch YouTube videos to develop an understanding of how the technology works. This will help you predict which tasks it will be helpful for, and which tasks will require a lot more supervision on your part. Once you’ve started to develop that understanding, you should try it out. I recommend starting with low-stakes or even non-legal tasks. Ask a chatbot to do math, write stories, critique your writing, help you plan a vacation, or anything else you can think of. You’ll start to get a sense of what it handles well, and what it doesn’t.
If you decide to use AI in your legal practice, this experimentation and understanding can help you avoid mistakes and use your time wisely. If you understand what AI is bad at and why, you’re less likely to get into trouble by relying on it for something like case citations. And as you get a sense of what AI is good at and why, you’ll be able to find tasks that AI can help you complete more efficiently, rather than tasks it would just be quicker to do yourself.
2. Continue to develop expertise about the law.
Of course, you have to do this whether you choose to work with AI or not. But the important part is that AI is not, and will not be, a substitute for actual legal knowledge. Because AI does not “know” the right answer and is often happy to confidently provide an incorrect one, it is important that you know what the right answer should look like.
Despite many tries, I’ve yet to get a chatbot to correctly answer a complicated legal question. I only know this for certain because I found the answers myself. But that doesn’t mean AI can’t be helpful to legal practice.
Drafting any kind of document always takes me a long time, and probably longer than it should. I’ve used AI to help me write press releases, communications to clients, and communications to other lawyers. One strategy that has worked well for me is to do my own research and take notes, as if I were writing a memo to another lawyer in my firm. This is the fastest way for me to write because it’s what I have the most practice with—something else might work better for you.
I then feed that memo into my preferred AI technology (without any confidential information) and ask it to help me turn it into something different, such as a letter communicating those same findings to the client. This technique works well for me because I have already done the research myself and drawn my own conclusions, so it’s easy to verify whatever the AI spits out. But it saves me time by jump-starting a drafting process that can otherwise be stressful or tedious.
There are a lot of ways to incorporate AI into your legal workflow, but they will all require you to know the law and to know what the product you’re creating should look like. So it’s important to learn those things first, and learn them well.
If you’re getting started with AI, I recommend testing out its capabilities by comparing its work to work you’ve already done. If you’ve written a memo, give your preferred AI program the same prompt and see how its memo stacks up—again, you’ll learn what it’s pretty good at and what it’s not. If you choose to use it for real work in the future, you’ll have a sense of what to look out for and what touches you’ll need to add that you know it will probably miss.
3. Stay informed and think outside the box.
At the end of the day, whether you’ll be allowed to use AI for professional tasks such as discovery, legal research, or drafting documents will depend on where you work and your employer’s policies on AI use. But if you’re interested in these technologies, you will be able to find ways to incorporate them into your legal career. While I do occasionally use AI at work, most of what I do is not replaceable by AI (at least not yet). Instead, I often use AI for tasks outside of work.
One of my favorite projects was to build a custom chatbot intended to provide advice to new lawyers. I designed it to help practice conversations with other lawyers and create checklists for projects and meetings, among other things. I recently used it to help me prepare for an interview by having it ask me questions about my interests and qualifications and provide feedback on my answers. Although I can’t speak to how realistic this practice conversation was, it got me to think quickly and deeply about my answers in a way that reading common interview questions online would not.
Before enrolling in an AI practicum, I never would have known that building a tool like this was possible, at least for me. Now, even if my future firm does not allow AI for work assignments, I feel confident that I will use it in other aspects of my legal career. During my semester in the practicum, I was frequently introduced to AI tools that caused me to think, “I didn’t know AI could do that.” Maybe the biggest change since I started learning about AI is that I have a better idea of what’s possible, which leads me to try new things and, when they go well, incorporate them into my life or work.
If you’re interested in using AI, think about joining a newsletter or following accounts that will let you know what’s out there. It might surprise you. From there, be creative and try new things. You might find a new favorite use for AI that you’d never thought about. And, if your firm does welcome AI use, you’ll be prepared to jump right in.