Billy’s manifesto is post no. 2 in our series, “AI goes to law school.”
When it comes to using AI as a law student, my advice is simple:
If you think AI may be useful for something, just try it.
That is, of course, overly simplistic, but it is by far the most important lesson I learned from my time in the Vanderbilt AI Law Lab. These tools are remarkably accessible and surprisingly simple to use. There is an abundance of free resources to help you learn what these models are and how they work, how to use them, and how to tailor them to whatever problem you hope to solve. The most fruitful thing you can do with AI is iterate: apply it to an issue, then refine your prompts and inputs again and again until you achieve your desired result.
It’s surprising that such a massively complex technological system can be so easily poked and prodded from the comfort of our homes, without specialized tools or knowledge, but it is precisely because we can do so that we get so much out of it. I have rarely come across anything where trial and error is so consistently rewarded, with such a low barrier to entry. There is no reason I can imagine not to try.
While that is my primary takeaway from the lab, I have a few other insights that may help you better understand these models, their utility, and our relationship with them. Here are three that I found worth considering.
1. These models generate text; they do not generate facts.
It may seem basic, but understanding this is key to a healthy relationship with AI. The fatal errors people make while using models like Claude and ChatGPT stem from a fundamental misunderstanding of what these tools do and what they can provide. Though they can sound remarkably well-informed on a subject, it is paramount to remember that they generate text through pattern recognition over a vast library of training material; they are not reasoning their way through your question and providing you with an expert opinion.
Certainly, there are times when the outputs are perfect, perhaps indistinguishable from a bespoke, synthesized response backed by research and coherent reasoning. But the outputs can also be downright bizarre, baffling, perhaps even dangerous, and, though continuously improving, deeply fallible. Take the model’s responses with a grain of salt: they are an excellent starting point, but you should verify any assertions, and they are not something you can cite.
If you keep in mind that these models make errors, and treat them as an assistant that points you in the right direction rather than a source of finished products, they can be tremendously useful for conducting legal research.
2. Do not undervalue the ability to jumpstart creation.
My emphasis on the errors these models make is in no way meant to downplay the incredible utility of this sort of text generation. Just because you can’t take every output as gospel doesn’t mean those outputs aren’t useful. The amount of time a model can save by writing a first draft of an email or a newsletter is massive.
You can tell the AI what you’d like, the tone it should take, the intended recipient, and so on, then take its outputs, sometimes elegant and sometimes wooden, and refine them to serve your purpose and reflect the way you would have written them yourself. Remember that these outputs are not a finished product. In many ways, receiving the output is only the first step; the second is extensive editing on your part to make it your own. The models are excellent at creating a template, or a jumping-off point, but it is up to you to fill in the details.
3. Be realistic, be curious, and when in doubt, keep experimenting.
You must be realistic in your expectations for these models’ outputs. Assuming they don’t make errors, or relying on them too heavily, is a recipe for disaster, but so is assuming they make too many errors to be useful. What’s more, it would be a fatal mistake to think that one failed attempt at a particular output means the model will never be able to generate what you’re seeking.
Step back, consider the language of your prompt, and perhaps provide the model with documents to reference, or try a different model that might be better suited to your problem. Keep iterating and keep honing, and the outputs will improve; it’s remarkable how quickly and easily you can right the ship sometimes.
If you don’t know how to address a problem you’re having, go looking for other users’ solutions. As with many promising, nascent technologies, there are countless guides, blogs, discussions, and more dedicated to helping us puzzle through this fascinating new tool, and it would be foolish not to make use of the immense, emergent repository of collective iteration, experimentation, and troubleshooting available to us.
Before VAILL, I used AI mostly as a brainstorming tool or a jumping-off point, but now I feel confident using AI to manipulate spreadsheets, do data analysis, write basic code, and even create my own GPT. I cannot recommend VAILL enough if you are interested in AI, but I also have to emphasize how incredible it is that you could learn almost all of this on your own, today.
It can be hard to believe that a technology attracting billions and billions in investment, one so frequently called world-changing, would be accessible to the average person, but it is, right now. We are in a remarkable moment when an exciting and potentially massively influential technology is available to anyone with a computer. If I could impress only one thing upon you, it’s that you can do all of this. You can try to write code. You can run tests. You can create a GPT. If you’re curious, nothing is stopping you from starting today.
Next time you wonder to yourself, can AI do this?
Try it and find out.