New episode of our video podcast, Speaking of Litigation: What if the key to navigating your most complex legal challenges lies in the capabilities of artificial intelligence (AI)?
Join Epstein Becker Green attorneys Alkida Kacani and Christopher Farella as they sit down with Jonathan Murphy, Senior Manager of Forensics at BDO, to examine how AI is revolutionizing the practice of law.
Discover how advanced technologies are refining e-discovery, optimizing predictive analytics, and transforming document review processes. The discussion also takes a deep look into the ethical considerations of integrating AI into legal work, from safeguarding sensitive information to maintaining professional standards in a highly dynamic field.
Podcast: Amazon Music, Apple Podcasts, Audacy, Audible, Deezer, Goodpods, iHeartRadio, Overcast, Pandora, PlayerFM, Pocket Casts, Spotify, YouTube, YouTube Music
Transcript
[00:00:00] Alkida Kacani: Today we will explore the real impact of innovation on the legal profession and dive into a question that's on everyone's mind: Is AI making litigation more efficient and revolutionizing it, or is it a Pandora's box we're opening? Hello everyone, and welcome to Speaking of Litigation. I'm Alkida Kacani, a litigation partner with Epstein Becker Green, where I spend my time navigating the complex world of commercial disputes, government investigations, and False Claims Act cases.
[00:00:29] Alkida Kacani: With me today is Jonathan Murphy, Senior Manager of Forensics and a certified forensic examiner at BDO, where he regularly consults with clients on the use of machine learning and AI technologies in the e-discovery, document review, and investigation spaces. And Chris Farella, a colleague and fellow partner at Epstein Becker Green, where he is also a litigator. But today we are asking him to wear his other hat, that of the general counsel to EBG, where he has to address ethical issues concerning how the firm and our attorneys practice law. Welcome and thank you to you both for being here.
[00:01:10] Christopher Farella: Thank you.
[00:01:11] Jonathan Murphy: Thanks, Alkida.
[00:01:12] Alkida Kacani: So let's get into it. Everywhere you turn these days, it feels like everyone's talking about AI, and for good reason.
[00:01:19] Alkida Kacani: It's rapidly changing the way we live and work and the legal world is no exception. It's undeniable that AI is transforming the practice of law. In litigation especially, AI is becoming a powerful tool that's helping attorneys with everything from reviewing mountains of documents during e-discovery, to using predictive analysis to gauge how a case might play out, or even speeding up legal research.
[00:01:44] Alkida Kacani: But as with everything else it's not all smooth sailing. There are real ethical questions to address also, such as the transparency in decision making or the risk of relying too heavily on AI. So Jonathan and Chris, I'm going to ask you both these questions. Let's talk a little bit about how AI is actually being used.
[00:02:06] Alkida Kacani: What are some of the key ways AI is showing up in litigation today?
[00:02:09] Jonathan Murphy: Yeah. In the e-discovery space, our primary interaction with AI is twofold. The traditional administration stuff you'd expect, people summarizing call notes, people proofreading their emails, using it to quality check billing entries, all of those admin aspects of the legal industry.
[00:02:29] Jonathan Murphy: On the more traditional side of things, it's document review. Anything that is a low hanging fruit or considered a sort of lower level task, a slightly monotonous task. Anything that you can do to bring in AI to that aspect is becoming a bit of a revolutionary game changer. We've had AI in the e-discovery space for a really long time.
[00:02:50] Jonathan Murphy: Anyone who's familiar with prioritized review, active learning, conceptual analytics, and cluster wheels knows these have been around for years, and those are forms of AI too. But it's the public's imagination being captured by generative AI, things like ChatGPT, that has completely changed things and encouraged a lot of e-discovery vendors, software and otherwise, to integrate that kind of natural language processing technology into their e-discovery platforms to help in legal matters.
[00:03:21] Christopher Farella: And also what's happening too is there's been an expansion of AI into things such as searches. Google itself has now incorporated Gemini into its search function. So when you do a search, you now get a summary of some of the high points, with links to the articles or websites it draws them from.
[00:03:43] Christopher Farella: Some of the commercial legal research groups also have their own AI built into the new software applications that they offer. In my role as general counsel, I've been asked about deposition summaries and about having AI be part of Zoom calls or Teams calls, where it can transcribe what's going on.
[00:04:06] Christopher Farella: These are useful applications. They have some specific concerns that we'll probably get into a little bit later, but you still want to make sure that you're doing the work because AI can be very helpful. But you have to understand what you're looking at and you have to verify and be transparent about what it is that you're doing.
[00:04:26] Jonathan Murphy: Chris makes a great point about Google. If you Google legal questions now, Google will say generative AI is experimental. Go and consult a professional. And so it is trying to prompt you to say, go and talk to the lawyers as well. And in addition to what we've talked about as well, we've also got the idea that there are corporations, companies who have had huge amounts of historic data, huge amounts of previous litigation. There's another section of AI, which is using that historical information for predictive purposes, whether it's how a particular judge might behave on a particular day of the week, or how things have gone for people with a similar matter to yours in a similar jurisdiction.
[00:05:08] Jonathan Murphy: So there's a lot of things both on the day-to-day task side of things and the wider reaching legal industry that are being hugely impacted by the AI.
[00:05:18] Alkida Kacani: Interesting. So Jonathan, obviously the use of AI in e-discovery is a big one. No one misses the days of sifting through thousands of emails manually.
[00:05:31] Alkida Kacani: But as we can imagine with that kind of automation, there's also a concern about accuracy and oversight. So what do you see as the biggest limitations of AI in these tools?
[00:05:42] Jonathan Murphy: Yeah, you raise a great point about accuracy there. I think there's a lot of trust and a lot of faith that needs to happen in the security of the platforms you are using.
[00:05:53] Jonathan Murphy: And this goes beyond AI generally. You need to make sure that if your data is being hosted in a platform where it's being accessed by somebody else, that that data is secure, whether you're using AI or not. So these are very traditional things that we should be expecting from the people we work with.
[00:06:11] Jonathan Murphy: There's so much sensitive corporate data out there that it has to be protected. And I think sometimes when people hear about the AI, they imagine their data going elsewhere and being read by someone, but ultimately that same concern applies to the non-AI world as well. If you've got 20 junior associates or paralegals working on your document review, you have to trust and put whatever measures you can in place to make sure they're not taking photos of the screen or downloading the documents.
[00:06:42] Jonathan Murphy: And most software, most vendors, have things in place to help with this, to audit the actions of people, to make sure they're behaving in an appropriate and ethical way. We expect that from our AI, and a lot of the software vendors who have created or integrated AI into their products will very happily talk to you about their security white papers.
[00:07:03] Jonathan Murphy: Let you know how it all works, the ethical considerations they've added in, as much as they can. There is a degree of black box that they can't necessarily share with you because it becomes a little bit like the secret spices, their proprietary information, which can be a little bit of a concern.
[00:07:25] Jonathan Murphy: I think other than that, the other major concern that folks tend to have related to the AI is either a cost component of how much it costs to use and integrate this AI, or how consistent it is. There is an interesting consistency problem when it comes to AI, in the sense that two people asking the same AI the same question might get slightly nuanced outputs.
[00:07:46] Jonathan Murphy: And we've gotten so used to, in the e-discovery space, something like a term search being very binary. Either this document has this word in it or it doesn't. That feels very scientific. And when you talk to people about the AI, and the more nuanced information it might give you or the more varied information it might give out to you, that can make people feel a little bit more uncomfortable. Because it doesn't feel quite as scientific as the binary, yes or no.
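(To make the contrast concrete, here is a toy sketch, with invented document IDs and contents, of the binary term search described above: each document either contains the term or it doesn't.)

```python
# Toy illustration of a binary term search in e-discovery.
# Document IDs and contents are invented for this example.
documents = {
    "DOC-001": "Minutes of the merger discussion with outside counsel.",
    "DOC-002": "Quarterly sales figures and regional forecasts.",
    "DOC-003": "Email thread about the proposed merger terms.",
}

term = "merger"
# Each document either matches or it doesn't: no nuance, no ranking.
hits = sorted(doc_id for doc_id, text in documents.items()
              if term.lower() in text.lower())
print(hits)  # ['DOC-001', 'DOC-003']
```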
[00:08:20] Alkida Kacani: Chris, this brings us to the elephant in the room, ethics. So from a legal standpoint, what ethical concerns should attorneys be thinking about when using AI?
[00:08:32] Christopher Farella: So to start off, the ethics rules now, they require that lawyers have technical competence.
[00:08:39] Christopher Farella: It's no longer a time when you can just say, I don't know how to do that. The printer doesn't like me, or whatever the excuse was in the past, that's gone. So both under the ABA as well as under state ethical rules, that's a common theme. And you have to understand the benefits and risks associated with using the technology.
[00:08:59] Christopher Farella: So it's not only getting to understand what you're using, but understanding what risks are posed by that use, and that's where we want to concentrate with AI, is some of the risks. It's not saying that you need to necessarily be a computer expert to do it. You need to know enough because you may have to report this to the court as to what happened.
[00:09:19] Christopher Farella: How did that search that Jonathan was just talking about, how did it come to those conclusions? So you need to understand that to make sure that you have the proper candor before the court. Again, with AI, the concern is really how do you keep your information confidential? And that becomes a huge piece of understanding what the risks are.
[00:09:40] Christopher Farella: A number of clients don't want their information put into AI. Well, today AI is running in the background of a lot of different things. So you have to be very clear with your clients about what exactly are you asking us to do and what are you not asking us to do, and how do we use your information properly?
[00:09:58] Christopher Farella: And when you look at some of the states and how they regulate the use of AI, they haven't created new rules. They're just using a lot of the rules that we already have in place for other technology. And they come up with a couple of themes. One is, be truthful and accurate. So lawyers may have to make sure that the content that they use is based upon proper sources.
[00:10:19] Christopher Farella: Again, don't let things hallucinate. Don't just submit things to the court where the facts or legal reasoning haven't been checked. Make sure you verify all the output, from something very simple to, you know, complex legal briefs or arguments.
[00:10:36] Christopher Farella: And again, maintain confidentiality. The prompts that you use or documents that you upload, have you anonymized those properly so that your client isn't helping teach the tool, the AI tool, on their information. So, those are a number of things to worry about. And then at the end of the day, communicate with your client.
[00:10:55] Christopher Farella: Make sure your client understands exactly how you're using it in the firm. Is it being used to ideate some briefs or letters, or is it being used, you know, in a more tangential way where you're just doing some research and you're not necessarily using your client's data?
[00:11:11] Jonathan Murphy: Just to add into that, Chris made an amazing point there about confidentiality and anonymizing data.
[00:11:18] Jonathan Murphy: Even if you're not using an e-discovery tool to use the AI for document review, most everyone has access to things like ChatGPT, Apple Intelligence, and the other AI models on their phone. In theory, unless your company has restricted access to these websites, there's nothing stopping someone from taking the email they're about to send to their clients, putting it into ChatGPT or otherwise, and saying, refine this email for me.
[00:11:45] Jonathan Murphy: Make sure it's concise, clear, friendly. But that information that's going in, that could be privileged information, confidential information that is just going into the AI and people are having it read and interpret their emails. And unless there are restrictions in place about that, that AI may not have the same guarantees about not learning and absorbing your information that some of the e-discovery software has to go into.
[00:12:12] Jonathan Murphy: And so you've got people who may be using this anyway, and I think encouraging them and training them and educating them on why we can't just take the legal email or the document that we want to send to our client and have it proofread as if it was just spell check, is really important because I think people don't think about it.
[00:12:32] Jonathan Murphy: So I think it's just an amazing point Chris made about the confidentiality, and thinking about things before you do them.
[00:12:38] Christopher Farella: And to jump on what Jonathan said, too: you not only have to educate the lawyers, which, just as an aside, our firm does, but you also have to make sure that you tell your clients. Using the example of transcribing a meeting, if you're representing a board and the board meeting is being transcribed, there's going to be a printed transcript of the board meeting.
[00:13:01] Christopher Farella: And you have to be very careful about warning your client about the potential risk to privilege. If, let's say, you as a lawyer are making a statement about a case or giving legal advice to the board at that meeting, that part of the transcript either gets excised out or somehow altered or redacted, or it's restricted in who gets to look at it. Because now you've got a transcript of that meeting, and some of the advice that you gave, floating around, and it can easily fall into the wrong hands and inadvertently become produced, or part of an email that went out to people that weren't supposed to see it in the first place.
[00:13:41] Jonathan Murphy: What might have been a tangential conversation at the water cooler is now a thing that was recorded, transcribed, and available to everyone.
[00:13:49] Jonathan Murphy: It's another source of evidence, and I think that's an amazing component of the AI as well. We've seen it already: for corporations and companies that use AI, the prompts they're using in their day-to-day business are discoverable information, quite apart from legal teams using it.
[00:14:08] Jonathan Murphy: It's just another thing to have to think about in terms of people creating data and the ever-growing amount of it.
[00:14:14] Alkida Kacani: So it's clear AI tools are becoming more integrated into, you know, legal research, case management, client meetings, internal meetings. So how do we ensure that clients fully understand the trade-offs in relying on AI?
[00:14:30] Jonathan Murphy: Yeah, it's a great question, and I think there's a couple of components to it, and part of it is about asking the right questions. If you are using AI, someone should be competent and confident in understanding how it works, and if you are working with an e-discovery vendor on that AI, you need to make sure that you understand what they're telling you about the software.
[00:14:50] Jonathan Murphy: If they're saying it works in this way, it can do this and it can't do that. Don't let them talk circles around you. Ask questions until you understand. Chris made a great point earlier about competency in technology and that's always been the case.
[00:15:08] Jonathan Murphy: This is just another aspect to understand what's going on. So I think clear communication, informed consent about what the software, what the AI can and cannot do, is really important. And I think the comparison with this when you're talking to your clients is always the human component. I think it's a very fair comparison to compare the work that the AI is going to do to a human.
[00:15:33] Jonathan Murphy: And so whether that's document review, and the 20 junior associates doing the document review versus the AI, or the legal research being a series of questions typed into a GPT model versus somebody spending hours going through books, there is a contrast between what the humans do and what the AI does that is really important.
[00:15:58] Jonathan Murphy: And we've talked about this before amongst ourselves, that there's a really important aspect in your legal education on how you are treating AI. Because it's very easy to say to your client, you know, this is just the computer. Anything that relates to the computer we need to hand off to somebody else.
[00:16:15] Jonathan Murphy: But this is an opportunity to be involved, engaged, and really understand it, because if you are using AI for your document review, for example, the instructions you give to the AI, the prompt, those are your case strategy. Those are how you are approaching the investigation, the litigation, and it's not the same as your list of search terms that you might disclose.
[00:16:40] Jonathan Murphy: It is a very different thing, and there's an interesting debate happening at the moment about prompts as discoverable. I definitely fall on the side that those are your personal information, those are your strategy. Those shouldn't be given to the other side, because it shows your train of thought, and I think it is completely different than your list of dtSearch terms.
[00:17:01] Christopher Farella: So I agree with Jonathan. I think that's a very important point, that we have to remember that AI is just another tool and we have to adjust to a tool. It's not changing the legal practice the way we know it. We still have to keep our clients’ information confidential. We still have to act competently.
[00:17:19] Christopher Farella: In that case, we still have to protect our thought processes. And there's the work product doctrine that comes into play. If you handwrote a bunch of notes on how you were going to attack a certain document or strategize a case, you wouldn't be handing that over to the adversary. And it's the same thing with your prompts, because you're trying to use this tool to bring forth information about the case and help you with your strategy and maybe help you with some of the legal analysis. But yeah, search terms are becoming part of the sort of meet and confer aspects of the rules in the federal court as well as in the state courts, where you should be sharing information about how you are searching databases.
[00:18:00] Christopher Farella: Because these databases are so voluminous. You do want to try to conserve resources so you're not endlessly battling in court about, you know, what did you produce, what did you not produce? It's much better for everyone involved to have an understanding of how it is that you're going about doing it.
[00:18:18] Christopher Farella: How you get there is a whole different story. That's your mental impressions and thoughts and strategy. So those should be protected.
[00:18:26] Alkida Kacani: So Chris, as AI continues to shape litigation, what can be done to ensure there's ongoing training in this area?
[00:18:34] Christopher Farella: So, I think it presents a number of issues. One is definitely, lawyers need to stay on top of the legal obligations and ethical obligations on how you work through AI and use AI.
[00:18:48] Christopher Farella: I think it's the duty of general counsel such as myself, as well as, you know, attorneys in CLE proceedings. A lot of the states require you now to have some technical CLEs, or continuing legal education. So make sure you stay on top of the latest cases. How are judges viewing this? For example, a recent case involved text messages that were redacted for relevance, which is something that we've all wanted to do with the explosion of data, because there's always so much stuff around the real conversations. But there's been no mechanism, and in this case the court actually didn't allow it, because no one asked for it. They just did it.
[00:19:34] Christopher Farella: And the court said, you can't just do that. You need to talk to the other side and maybe get an agreement that you can redact for relevance. So there's a lot to keep up with: you have to follow the case law, see how it's evolving, see how the technology's evolving. What are both the benefits and the risks?
[00:19:53] Christopher Farella: And so again, staying in touch with your general counsel, others, and taking courses, I think are the big part of it. But also bring in your experts within your firm. You know, if you have an e-discovery group the way we do, bring them into the discussion. Bring your vendors such as Jonathan and BDO into the discussion.
[00:20:12] Christopher Farella: Ask them questions about how best to go about, you know, finding information or protecting certain information.
[00:20:19] Alkida Kacani: Jonathan, turning to you, how can vendors better support the legal teams by educating them on the AI tools, while being mindful not to overstep into giving legal advice?
[00:20:31] Jonathan Murphy: Yeah, it's a great question, and I think Chris and you talked about education there as a huge driver. The great thing is, everyone wants to talk about AI. Everyone wants to be part of the conversation. There are so many opportunities to find CLE, CPE on technology and AI. I can say for myself, if any member of a legal team came to me and said, I want to talk more about how this AI works, it would be like my Christmas and birthdays all came at once. I love getting to geek out about this stuff.
[00:21:02] Jonathan Murphy: And as we mentioned earlier, making sure that someone who maybe understands the tech on a different level than you doesn't just say things that sound really impressive, but makes sure they're meaningful to you. So I think deliberate, intentional, practical things you can learn are always going to be valued, and much better than just pontificating about something.
[00:21:24] Jonathan Murphy: I think going in with the intention of here's how I'm going to use this. Here's a strategy we can do, here's something we can take away. I think empowerment through the knowledge of the AI is incredible. So I think I absolutely agree with that. On the vendor side, I think I can say for myself that there's always going to be an aspect where no matter what we are doing, we aren't the legal people.
[00:21:44] Jonathan Murphy: We are not the legal experts. Whether that's search terms you provide, and there's a great contrast in that, that if you provide me a list of search terms, or search terms from the other side, where the logic of those search terms or the particular syntax doesn't work, I can consult with you and I can say this isn't going to work for your purposes.
[00:22:04] Jonathan Murphy: Here's how you need to change it. But the AI is a little bit vaguer than that. There are some general rules, but think of it more like a style guide. You don't want to use double negatives, you don't want to use too much legalese. You want it to be clear and concise, but anyone can learn and train and practice how to write well.
[00:22:24] Jonathan Murphy: That's the skill that needs to be involved. And most vendors, most software people, most e-discovery vendors, they're not going to put themselves in the risky position of being the ones writing these prompts. In the same way that if you brought in your human document reviewers, it's someone from the legal team who needs to explain to the document reviewers what is relevant, what is privilege, what is of importance in this matter.
[00:22:48] Jonathan Murphy: That information comes from the legal experts, so that absolutely has to stay there. We can provide guidance and tell you what we did, but we are not likely to be the ones writing these prompts anytime soon because you are the investigators. You are the litigators, you are the ones who understand what the matter is.
[00:23:07] Jonathan Murphy: It's just on my side it's more making sure the technology provides you results that you can verify and statistics you can use to prove that. So I think a lot of that is really important. And the other aspect of risk, like with any machine technology, we want to make sure you know what it does and doesn't do.
[00:23:26] Jonathan Murphy: If we run search terms on a data set, we also need to make sure you know you have a lot of video files in your data set, and those aren't going to be search responsive, or you have a lot of pictures and those aren't going to be responsive. The same thing applies to the AI. We need to let you know where it's going to work, what it's going to do, where it's not going to work.
[00:23:51] Jonathan Murphy: And if you are using it, like it's used nowadays, to make judgments or predictions about documents without having to put eyes on every single one of them, we need to make sure you've got empirical statistics that you can present in a court of law, to a judge, to the other side, to say we used this AI and here is why it worked well. Here is the precision. Here is the validation of what we did.
[00:24:13] Jonathan Murphy: Here is the recall of the document set that allows you to go, we did it well, and here's proof that we did it well. So I think those are the main aspects of what vendors can do to help legal teams mitigate risk with AI.
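(A minimal sketch of the precision and recall statistics Jonathan refers to, computed against a hypothetical human-reviewed control sample; all document IDs below are invented for illustration.)

```python
# Toy validation of an AI-assisted review: compare the model's
# relevance calls against a human-reviewed control sample.

def precision_recall(predicted_relevant, truly_relevant):
    """Precision: of the docs the AI flagged, how many were truly relevant.
    Recall: of the truly relevant docs, how many the AI found."""
    predicted = set(predicted_relevant)
    truth = set(truly_relevant)
    true_positives = predicted & truth
    precision = len(true_positives) / len(predicted) if predicted else 0.0
    recall = len(true_positives) / len(truth) if truth else 0.0
    return precision, recall

# Hypothetical control set: doc IDs the AI flagged vs. human judgments.
ai_flagged = ["DOC-001", "DOC-002", "DOC-003", "DOC-005"]
human_relevant = ["DOC-001", "DOC-002", "DOC-004", "DOC-005"]

p, r = precision_recall(ai_flagged, human_relevant)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.75 recall=0.75
```

In practice these figures come from a statistically sampled control set reviewed by humans, but the arithmetic behind the reported numbers is this simple.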
[00:24:29] Christopher Farella: From the legal team perspective, I think the most practical advice is: use your head.
[00:24:36] Christopher Farella: Use what you have that AI doesn't, which is your brain. And think about whether you are fulfilling those things that you were doing prior to AI coming onto the market. Are you being competent? Are you protecting the client data? Are you being truthful and transparent with the court and your adversaries?
[00:24:55] Christopher Farella: Are you making sure that what you are submitting to the court has been verified and is supportable under the rules, because you're still facing the exact same rules and regulations that you had pre-AI. AI hasn't spawned any new rules. It's been applying the old rules to the new technology.
[00:25:16] Christopher Farella: So, you know, we were amused a couple of years ago by the attorney who couldn't get the cat filter off of his Zoom. Right? And that was a very humorous thing, and at that time we needed to have a little humor. But that was a harmless thing. Now we have attorneys who are submitting documents to the court that have been completely made up by AI, and they haven't done those checks. And the press has taken that, and of course that becomes sort of the example versus really the exception. Just go back to the basics. When you filed a brief prior to AI, what did you do? You read through it, you made sure the cites were right, you made sure the facts were correct and supportable. Still do the same thing with AI.
[00:25:54] Alkida Kacani: Well, thank you both for that lively discussion. Let me leave you with this final thought. As AI continues to shape the future of litigation and practice of law, the question isn't just how we can use these tools to be more efficient, but how we can ensure that in doing so, we don't compromise the very principles of justice and fairness that the legal system is built on.
[00:26:17] Alkida Kacani: again, thank you Chris and Jonathan for joining us today. Your insights have been incredibly valuable and I appreciate you taking the time to share your expertise.
[00:26:27] Christopher Farella: Thank you. It's been a pleasure.
[00:26:28] Jonathan Murphy: Thank you, Alkida.
[00:26:29] Alkida Kacani: And a special thank you to our listeners. You can find Speaking of Litigation on YouTube or wherever you get your podcasts.
About Speaking of Litigation®
No business likes litigation. Lawsuits and trials can be stressful, unpredictable, and often confounding—even for battle-scarred business leaders. But they’re something almost every business must confront. The Speaking of Litigation® video podcast pulls back the curtain for an inside look at the various stages of litigation and the key strategic issues businesses face along the way. Knowledge is power, and this show empowers executives and in-house counsel to make better decisions before, during, and after disputes. Subscribe to Speaking of Litigation® for a steady flow of practical, thought-provoking insights about litigation from Epstein Becker Green litigators.
Trouble playing podcast? Please contact us at thisweek@ebglaw.com and mention whether you were at home or working within a corporate network. We'd also love to hear your suggestions for future episode topics.
SPEAKING OF LITIGATION® is a registered trademark of Epstein Becker & Green, P.C.
Spread the Word

Would your colleagues, professional network, or friends benefit from Speaking of Litigation? Please share each episode on LinkedIn, Facebook, X, YouTube, and YouTube Music, and encourage your connections to subscribe for email notifications.