Episode 10: How Your AI Input Becomes Legal Evidence — with Dr. Angie Burks
Dr. Angie Burks is a Visiting Senior Lecturer at the Kelley School of Business at Indiana University. Her work sits at the intersection of business, law, and engineering, with a focus on the legal liabilities of digital communication and artificial intelligence. She has taught negotiation and mediation to over 800 students and professionals and previously served as a faculty member at The Ohio State University’s College of Engineering. In this episode, Dr. Burks discusses how what we type into AI can become legal evidence, the permanence of digital communication, and her P.A.C.E.D. framework for using technology thoughtfully.
To learn more about her work, search "Angie Burks" online or visit her profile through Indiana University.
The Center of Excellence for Women & Tech supports women students, faculty, staff, and alumni in building confidence and leadership through technology. Stay connected with us on Instagram at @IUwomenandtech.
Special thanks to Rebecca Ramsey and the Center for Language Technology (CeLT) for podcast production support. CeLT has supported language learning, media development, and instructional technology at Indiana University since 1959.
Kosha (Intro): Hello, and welcome to the Women of IU podcast, the show that highlights and celebrates the incredible work women do every day at Indiana University and inspires future women leaders. I’m your host, Kosha Patel.
Today we’re diving into a topic that affects all of us: how your AI input can become legal evidence.
Our guest today is Dr. Angie Burks, a visiting senior lecturer at Indiana University’s Kelley School of Business. Professor Burks blends business, law, and engineering in her teaching and research, focusing on the legal liabilities of artificial intelligence, electronic communication, and corporate engineering ethics.
Before joining IU, she spent 12 years teaching at The Ohio State University’s College of Engineering, where she led courses in ethics and presented nationally on topics such as emails as legal evidence and the mechanical failures at Toyota. She has received multiple teaching awards and has taught negotiation and mediation to more than 800 graduate students and executives worldwide.
Professor Burks, thank you so much for joining us today. It’s an honor to have you here on the Women of IU podcast.
Angie: Kosha, thank you so much for having me. I really appreciate what the Center of Excellence for Women & Technology is doing. I also want to acknowledge Jeannette Lehr, who has been such a great assistant director for the center. I’m very grateful to be a part of this platform, so thank you to both of you. The center is truly having an impact across the country.
Kosha: Thank you — we are so glad to have you here. To start off, what originally sparked your interest in ethics and the legal side of digital communication?
Angie: I became interested in ethics and the legal liabilities of electronic communication when I was in law school. I’ll begin with emails, and then connect this to AI.
Many people believe that once you delete an email, it disappears forever. But that simply isn’t true. When I was in law school, I worked with attorneys handling employment discrimination cases. One day, a man, whom we’ll call Jeff, came in and said he had been fired because he was gay, and no one would take his case. The attorneys invited him to tell his story, and after hearing him, they agreed to take it.
When they reached out to the company and explained that he believed he was fired due to his sexual orientation, the company said something along the lines of, “We would never do that.” Remember, this was about 25 years ago. The attorneys asked whether they could look into the company servers, and the company agreed. Once they did, they discovered emails employees had assumed were deleted — but were still there.
Jeff’s boss had emailed another employee saying he thought Jeff was gay. Both employees deleted the email, but the exchange continued with comments about AIDS, touching doorknobs, and homophobic fears. All “deleted,” but all still discoverable. Those emails became evidence, and the company was held liable.
There was another case involving two secretaries in Manhattan who casually gossiped over email. One claimed a coworker was drinking on the job, and the other said another employee was having an affair. Again, deleted, deleted, deleted — but later surfaced as evidence.
That’s when I fully understood that every email and every text message is a legal document. It has a timestamp and an identifiable author, and it can be admitted as evidence in court.
The same logic now applies to AI.
AI concerns me because people confide personal thoughts to these systems the way they would to a journal. That isn’t always bad, but those thoughts can become legal evidence. Sam Altman, the CEO of OpenAI, recently confirmed that if a user’s input is involved in a lawsuit, the company may be required to produce that data. Privacy settings do not erase legal liability. Your words live somewhere, whether you see them or not.
So that is where my interest began — and how it led me straight into AI today.
Kosha: That’s powerful, especially because most people don’t realize digital actions have that sort of permanence. Even when something is “deleted,” it still exists somewhere. Your real-world examples make that so clear.
I want to build on something you said. Students and professionals use AI every single day, without thinking about legal implications. What are the key things you think people should know before they start typing into tools like ChatGPT?
Angie: The most important thing to understand is that what you type into AI can become legal evidence.
Let me share two scenarios I give to students. The first involves a sorority member we’ll call Kate. Her sorority has a pledge challenge: drink two gallons of water in five minutes. They think it’s harmless because it isn’t alcohol. But one of the pledges, a young woman we’ll call Sarah, collapses. Kate panics and turns to ChatGPT, typing something like, “Sarah was forced to drink two gallons of water in five minutes during pledging. She’s unresponsive. How do we revive her before the ambulance gets here?”
She’s scared, she wants help, and she doesn’t think about the legal footprint she’s creating.
If Sarah dies from water intoxication, that chat log could be used as evidence showing they forced her to drink it — and those sorority members could face criminal charges. What she typed in panic, thinking it would vanish, could follow her into court.
Now for the second example. Imagine a student named Amy. While in college, she interns at Samsung on a smartphone project. After graduation, she gets a job at Apple, also on smartphone development. At home, she uses AI to integrate concepts she learned at Samsung with what she’s building at Apple. The product turns out great, but later Samsung engineers see similarities and file a trade secret lawsuit. During discovery, Amy’s AI chat history could be subpoenaed. If it shows she fed confidential Samsung information into AI and incorporated it into Apple’s product, Apple could be held liable, and so could she.
What we type into AI is not invisible. It can be retrieved. It can be used. It can be evidence.
A brilliant AI scholar, Kate Devlin of King’s College London, said something I think everyone should remember: what we think is private in AI is not truly private. We are sharing information with a tech company, not a confidential service. It wasn’t built to be one.
Kosha: Wow. I think that reframes AI use for a lot of people. Most of us don’t think, “This could be read back to me in court one day.” We treat AI like it’s a private notebook or assistant, and it’s really not that at all.
You mentioned that you teach your students a framework to help them slow down and think before they type. Can you share more about that?
Angie: Yes — I teach something called P.A.C.E.D., which is a way to pause and think before sending something digitally.
P stands for “Is it publishable?” Ask yourself how you would feel if your AI input showed up on the front page of the New York Times or was shared publicly without context. A stands for “How would I feel if this were admitted into court?” Think back to Kate or Amy. Would they want those chats read aloud to a judge? Probably not. C also relates to the courtroom: court is a place where context disappears and words harden into evidence. E stands for “Am I emotional?” We type differently when we’re tired, stressed, angry, or overwhelmed, and emotional moments lead to digital regret. And D means “Once I delete it, do I understand it isn’t really gone?” Deleting removes your view of the data, not the data itself.
So P.A.C.E.D. is less about fear and more about awareness. It reminds people to approach AI intentionally rather than impulsively.
Kosha: I love that. It feels like a gut-check system — something simple you could mentally run through in seconds before typing. And I think your “emotional” point is especially relevant to students. When you’re stressed before a deadline, it’s easy to type something like, “My professor banned AI but please just write this for me,” without realizing how permanent that input could be.
Let’s shift a bit. Do you think AI is changing how people think about honesty, authenticity, and integrity — in the classroom, in the workplace, in life?
Angie: I think AI has the power to change our moral compass, yes. Not because AI is unethical, but because temptation usually is. When something is fast, easy, and invisible, shortcuts grow more appealing. We start to think, “No one will know,” but machines and metadata always know. We have to be intentional about the kind of character we carry into digital spaces.
That brings us to the intersection of humanity and technology.
Kosha: Yes — say more about that.
