Episode 3: Whose Business Is It Anyway?
Discussion with Stephanie Moore and Angie Raymond in the IU Kelley School of Business about business ethics, the law, and the influence of AI on the workplace of the future.
Stephanie Moore
Lecturer of Business Law and Ethics in the Kelley School of Business
Angie Raymond
Associate Professor of Business Law and Ethics in the Kelley School and an adjunct faculty member in the Maurer School of Law
INTRO MUSIC 1
Laurie Burns McRobbie:
Welcome to Creating Equity in an AI-Enabled World: conversations about the ethical issues raised by the intersections of artificial intelligence technologies and us. I'm Laurie Burns McRobbie, University Fellow in the Center of Excellence for Women and Technology. Each episode of this podcast series will engage members of the IU community in discussion about how we should think about AI in the real world, how it affects all of us, and more importantly, how we can use these technologies to create a more equitable world.
INTRO MUSIC 2
Laurie Burns McRobbie:
Today, we're talking about AI in the workplace, and we ask the question: whose business is it anyway? AI is increasingly embedded in the employee experience, notably in the hiring process. And of course, technology companies are in the business of developing AI-enabled products that could create or perpetuate inequities, or ideally correct for them. AI systems in the workplace affect how workers experience their working lives, and companies that don't develop their products with equity in mind are influencing the world we all experience. So what does this mean for the very nature of work and human effort? And how should we be preparing students for the future of work? In the studio with me today are two faculty members from the Kelley School of Business at IU Bloomington. Stephanie Moore is a lecturer of business law and ethics in the Kelley School, where her interests and expertise include business ethics, critical thinking, and conflict resolution. Angie Raymond is Associate Professor of Business Law and Ethics, and an adjunct faculty member in the Maurer School of Law. Angie focuses on internet law, big data, and ethics and law in the information age. Stephanie, Angie, and I will discuss the intersection of business ethics and AI ethics from both a workplace perspective and a legal perspective. So welcome, Angie and Stephanie. I'm going to start by asking each of you to talk about your respective areas of research and teaching in the context of how we're defining ethical AI. Stephanie, do you want to kick us off?
Stephanie Moore:
Sure. So I teach and research business law, business ethics, and critical thinking. And as we continue to move in the direction of more technology and AI use in the workplace, making sure that we continue to work toward the most ethical use of that technology is of increasing importance. We know that human decision making alone is rife with biases. We also know that the use of AI can reproduce some of those human biases. The intersection here is how the ethical use of AI in business can help solve some of those inequities. Ethical use of AI has to be an intentional process, where we interrogate every step of the system. So: what are we doing well, what are the issues, what needs improvement, how can we pivot, and what is the most effective and ethical relationship between humans and the AI we develop and use?
Laurie Burns McRobbie:
Thank you. Angie?
Angie Raymond:
So thank you for the invitation and the wonderful introduction. I teach several classes on ethics and technology, both at the Kelley School and through Kelley Direct, and of course some of my work is at both the graduate and the undergraduate level. I've been researching in this area for a considerable time, primarily focusing on surveillance, privacy, governance, and AI ethics. But most importantly, my favorite work is on best practices and playbooks, where we consider how we should design a framework to ask questions about how we use, how we gather, how we store, and how we share data. How do we create a framework where decision making is not completely done by the technology, but is instead used to augment individuals and make them better at whatever their goal is?
Laurie Burns McRobbie:
That's great. Thank you to both of you. That's a great way to kick off this discussion. Well, technology has been in the workplace for decades. What do you think it is about AI that changes the landscape? I can think of an example here, of course: AI technologies are used widely in hiring now, which they weren't previously, and there was a big issue a few years ago when Amazon discovered that its hiring algorithms were excluding women because the data being referenced reflected past hiring practices. So what else are you seeing in terms of how companies and organizations are utilizing AI more equitably?
Stephanie Moore:
Well, the first thing companies should do is be absolutely transparent about their use of AI. When Apple changed their policy to require app developers to ask users for permission before tracking user activity across other apps and websites, 96% of users chose not to be tracked. So there was this misconception that people didn't care that much about tracking and about their privacy. But given the opt-in option, almost everyone chose not to be tracked. Some big companies are losing a lot of money. But prior to the opt-in approach, they were arguably using that data without users' informed consent. So this is a much more ethical use. Companies are also using AI for things like hiring, performance evaluations, and the like. So they interview with prompted questions, trying to get at things like an employee's emotional quotient. They use these emotional quotient evaluations, where they have employees take a variety of tests looking at stress tolerance and self- and social awareness, and then they use this data for all kinds of things: to assess leadership potential, to hire, to design teams. So if you're a company and you're using something like that, how aware are your employees of the implications of that AI-generated data for their career trajectory? Ethical use presupposes that employees understand, know, and have given informed consent. Now, one of the tangible benefits of AI in hiring is that it does set criteria and stick to those criteria in the process. We know that human-centered hiring processes are rife with subjective determinations. So sometimes an employer won't even start with a set of objective criteria, and even if they do, they may not adhere to it because of various external and subjective factors. So AI can be very effective in keeping the process centered, objective, and clean.
Laurie Burns McRobbie:
And I think this also reinforces your point about transparency, because in setting those criteria, the company and the hiring supervisor have to think about criteria that are fair across multiple kinds of populations and work styles. Angie, Stephanie briefly touched on concerns about performance and privacy in the workplace, which are huge issues. What's the current status of employee protection in that area?
Angie Raymond:
Well, as you mentioned, in all fairness, a lot of what we're speaking about is not necessarily new. But it's gotten much wider reach, and in fact, it's much more pervasive. For example, the law is pretty well settled that employers can read, and in fact do read, your work emails; they can restrict the use of and search through your work computer; and in many cases, any device connected to their network gets searched, at least at some level, as well. None of that is really new. There are a couple of really new things. First of all, as Stephanie mentioned, people seem to care a lot more, and they're a lot more aware of these practices. The technology is much more pervasive, and it's much more connected, to such an extent that it's actually very hard to draw the line between work and home. And of course, we now have tons of far-reaching mechanisms of communication, like digital social connections, that put new pressure on the reach of what used to be a pretty simple conversation between friends; now it has the capacity to reach the world. I think one of the things we really need to think about is this balance: how do we actually create a balance? The law is going to draw some very bright lines for us, and one hopes, in many instances, that some of these lines may change a bit as we begin to understand the pervasiveness of the surveillance that the technology is capable of. However, in many instances, this balance comes from policy created by the employers, and there are some employers that are doing a wonderful job. So for example, it is in fact the case that your IT department can know what device you're using on the network, at what time, for how long, and for what purpose. That might not be a bad thing, right? Cybersecurity should be at the forefront of all of our minds. But the question really needs to be not whether or not they should surveil us at that level to protect the infrastructure of the communications network, an essential function of cybersecurity. The question that we need to be asking is: where else should that information go? And in most employees' minds, the simple answer is nowhere else. If your goal is to surveil for the purposes of protecting the infrastructure, absolutely; there are very few people who would disagree with that. But with a few limited exceptions, mostly in the area of criminal behavior, for the most part that information should stay with the IT department, and it should serve the function of cybersecurity, not be put into the hands of your employer for the purposes of surveillance.
Laurie Burns McRobbie:
Let's talk a bit more about the ways in which equity issues can be exacerbated by how we're using and creating AI. Stephanie, where do you see those issues being exacerbated?
Stephanie Moore:
A lot of times these issues are exacerbated because of the intersection of our humanness with the algorithms. So I think one of the main issues is when companies believe that because something is AI enabled, it is intrinsically fair. That isn't necessarily true, for a number of reasons, one of which is that, at the base, humans have set and tweaked those algorithms. So if the criterion is that candidates must have attended a top 10 university, for example, some bias is going to seep in. Who is kept out of that pool? Another issue is that there could be hidden biases in algorithms that we may not even consider: perhaps an algorithm that cuts out interrupted work experience would disadvantage women, or maybe a GPA cutoff could disadvantage students from lower-income backgrounds who had to work and go to school, things like that. Sometimes a company might even use the resumes of their current workforce to train the AI. So if the current workforce isn't diverse already, then that algorithm would reproduce and entrench systemic inequities in the new hires. They're looking to bring in a more diverse workforce, but they're using the non-diverse workforce to train their algorithms. In all of these cases, the issue lies with the AI development. Another issue we see is in companies that use facial recognition systems; they really need to be aware that these are still very imperfect. There was a landmark federal study in 2019 by the National Institute of Standards and Technology, which found empirical evidence that most facial recognition algorithms exhibit demographic differentials that can worsen their accuracy based on a person's age, gender, or race. Facial recognition systems misidentified people of color more often than white people: Asian and African American people were up to 100 times more likely to be misidentified than white men, Native American people had the highest false positive rate of all ethnicities, women had a higher misidentification rate than men, and elderly people and children had high error rates. So the only place where we saw a relatively high rate of accuracy was with middle-aged white men. I do think that companies are getting better at identifying some of these more obvious problems, but we do have to continually interrogate these algorithms to make sure that they aren't reproducing systemic inequalities. Of course, companies also use AI to track employees, which has a number of the benefits and disadvantages that Angie started to talk about.
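To make the point about training on the current workforce concrete, here is a minimal Python sketch. Everything in it is invented for illustration: the toy data, the feature names, and the scoring rule are assumptions, not a description of any real company's hiring system. It shows how a screen that simply learns which resume features are common among past hires will quietly rank down candidates who differ from them, such as someone with an interrupted work history.

```python
# Hypothetical illustration only: a toy screening "model" that scores candidates
# by how closely their resume features match the company's existing workforce.
# If the existing workforce is not diverse, the learned preferences reproduce that skew.

from collections import Counter

# Assumed, made-up training data: features extracted from current employees' resumes.
current_workforce = [
    {"school": "top10", "gap_in_employment": False},
    {"school": "top10", "gap_in_employment": False},
    {"school": "top10", "gap_in_employment": False},
    {"school": "state", "gap_in_employment": False},
]

def learn_preferences(workforce):
    """'Train' by counting how often each feature value appears among current hires."""
    prefs = {}
    for feature in workforce[0]:
        counts = Counter(emp[feature] for emp in workforce)
        total = len(workforce)
        prefs[feature] = {value: count / total for value, count in counts.items()}
    return prefs

def score(candidate, prefs):
    """Score a candidate by summing the learned frequency of each feature value.
    Values unseen among current hires (e.g., a career gap) contribute nothing,
    so candidates who differ from past hires are quietly ranked lower."""
    return sum(prefs[feature].get(value, 0.0) for feature, value in candidate.items())

prefs = learn_preferences(current_workforce)

candidates = [
    {"school": "top10", "gap_in_employment": False},  # looks like past hires
    {"school": "state", "gap_in_employment": True},   # career gap -> penalized
]

for c in candidates:
    print(c, "->", round(score(c, prefs), 2))
```

The ongoing interrogation Stephanie describes amounts to inspecting what such learned preferences actually reward, before they are allowed to screen anyone.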
Laurie Burns McRobbie:
Yeah, we'll talk more about that, too. Angie, I want to have you speak to that, but also, perhaps, to what we can learn from previous work that considers equitable uses of technology in the workplace.
Angie Raymond:
I think there are a couple of incredibly important lessons. The first one is, in my opinion, that some of this technology was not yet ready for prime time, and it got unleashed, and we have to demand a better process before these are put into place. It shouldn't come as a surprise to anyone that misidentification occurred in facial recognition technology. That's sort of computer science 101: if you feed a whole bunch of pictures of a particular race, culture, or ethnicity into the training model, that's what it's going to spit out. So we should be a little bit worried about how that was released on the world. The other thing we need to think about is, let's pretend there were ten people sitting in a room who were oblivious to that fact; someone left that room and did a sales pitch, and a whole group of other people, who oftentimes listened more to the sales pitch than to actually digging in on how the technology works, also made the decision to release it. So these are multiple levels of problems. But then we have to give credit where credit is due. The only reason we're talking about these is because it was caught. No one intended for this to be out there. And I think that's one of the wonderful parts about talking about AI in any context: mistakes are made, but they are not usually glaring examples of intentional discriminatory processes being unleashed. And that is an amazing credit to the industry, for at least taking on board the fact that we need to keep an eye on this, we're going to keep track of what's coming out of it, and when we discover something that's harmful in whatever way it is, from misidentification to blatant discrimination, we're going to do something about it, as opposed to burying it under the headlines. And I think that's an incredibly important thing. We need to learn the lesson that maybe we should be doing a little bit better at keeping an eye on it before we release it. But certainly what we need to be doing is giving credit where credit's due: they caught this, stopped using it, and are attempting to make adjustments.
Laurie Burns McRobbie:
What are some specific examples of how you see these kinds of appropriate uses in the working world?
Angie Raymond:
Let's use the example of tracking in the workplace. Performance, obviously, is something that employers are incredibly concerned about, and on the face of it, one might not assume that tracking in the workplace is necessarily bad. Of course, there are all kinds of advantages to tracking if you're, for example, a UPS driver. It's not only that they're tracking you in that traditional sense; they also might know if you're in an accident, or if you haven't returned to your truck for 10 minutes, maybe you've had some type of emergency, or maybe the packages in the truck are too heavy and need to be reconfigured. There are lots of good parts of this kind of tracking, even though we tend to assume it's used for bad purposes. Maybe what they're realizing is your route is too big, or you're expected to carry too many packages, or all of the deliveries on your particular route tend to be heavier packages because of the neighborhood you deliver in. All of these things can be wonderful employee safety measures that don't necessarily need to be a bad thing. So once again, we need to talk about how this technology is used. And how we use this technology can also start to feel a lot like Big Brother. I would assume we're all aware of Amazon's culture of surveillance, where there have been arguments that workers are surveilled to such a level that they can't even take a bathroom break. Now, you and I have all worked in a workplace where we understand the friend who loves to hang out in the bathroom, reading text messages and email, and is just using that as an excuse to avoid work, versus someone who genuinely needs to be in the bathroom for 20 minutes. This is a perfect example of a situation where we can't rely on technology alone. But Amazon decided to only listen to the technology: if you were on break for too long, you got an automated response and you were fired, off with their heads. Well, that's simply silly. Of course there are tons of reasons, and particular days, where I might need a little longer in the bathroom; give me a break. What we need is some human in that part of the conversation. The other thing that we tend to find, quite frankly, in these situations is that there is a developing narrative, almost a conspiracy theory, that everything tracks us, everything surveils us. That may be true, quite honestly. But the real question is: how is that information and that data used, and to what purpose? If it's for safety, and it's genuinely used for safety, maybe we're okay with it. Popping up a new recommendation on Netflix? Perfect. Popping up something that tells my supervisor that I was in the bathroom longer than I should have been? Creepy, and probably not appropriate. And that, for me, is where, for the most part, I draw the line.
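Angie's bathroom-break example is, at bottom, an argument about where an automated threshold should hand off to a person. Below is a minimal sketch of that design difference; the threshold, field names, and review queue are assumptions made up for illustration, not a description of any real employer's system.

```python
# Hypothetical sketch: the same tracking signal, handled two ways.
# Values and names are invented for illustration only.

from dataclasses import dataclass

BREAK_LIMIT_MINUTES = 20  # assumed policy threshold

@dataclass
class BreakEvent:
    employee_id: str
    minutes_away: int

def fully_automated(event: BreakEvent) -> str:
    # The approach being criticized: the technology alone decides the outcome.
    return "terminate" if event.minutes_away > BREAK_LIMIT_MINUTES else "ok"

def human_in_the_loop(event: BreakEvent, review_queue: list) -> str:
    # The alternative: the same signal only flags the event for a person,
    # who can weigh context the sensor cannot see (illness, a bad day, etc.).
    if event.minutes_away > BREAK_LIMIT_MINUTES:
        review_queue.append(event)
        return "flagged for human review"
    return "ok"

queue: list = []
event = BreakEvent(employee_id="driver-42", minutes_away=27)

print(fully_automated(event))           # -> terminate
print(human_in_the_loop(event, queue))  # -> flagged for human review
print(len(queue), "event(s) awaiting a supervisor's judgment")
```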
Stephanie Moore:
So this is where we ask what makes it more ethical. There's a clear benefit, right, in doing some tracking of employees: higher productivity, safety, and so on. But we also have to ask, a benefit for whom? With informed consent and representation, there's more ability to balance benefit against disadvantage and make that tracking more ethical.
Laurie Burns McRobbie:
Indeed. So Angie, this is a question you can probably speak to, which is the status of the law as it applies to tech, specifically in employment right now.
Angie Raymond:
Well, sorry, Hoosier fans, I am a longtime Illinois resident, so I am very proud of what Illinois has done in the area of employment and technology. For those of you who aren't familiar, it's a great place to look: it was one of the first states to do things like make rules about whether or not your employer can demand your Facebook password, for example. And that's probably some of the easiest stuff. But right now, of course, we have the Illinois Biometric Information Privacy Act, passed in 2008. It has actually taken on a lot of the issues that we're talking about, and in many instances the technology has caught up with the employment policies that are associated with the act, so most people point to this as being very forward looking. Before the act, for the most part, you could use biometric identifiers for almost any purpose in employment, and of course it won't surprise you that written employee policies often supported that choice: to collect, to store, to use that information in almost any way the employer wanted. This act curtailed that significantly. So, for example, as Stephanie has mentioned many times at this point, the employer has to inform employees in writing of the collection they're doing, the purpose of the collection, and how long that information will be stored; of course, that is very reminiscent of the GDPR out of the European Union. It requires obtaining written consent for all this collection, and you have to inform the employee how you plan to destroy the information. These are common questions that I frequently ask. I visited a doctor this morning and I said: where is this going? Why do you need it? And when are you no longer going to have it? These are questions that, quite frankly, we teach in all the cybersecurity material as well; you should be asking them. If someone wants to write down your credit card number because things aren't working today, your question is: what are you going to do with that paper once you've entered the credit card information into the system? I promise you, people forget to destroy it. So under the act, companies are prohibited from selling, which is a big issue that we could talk about for days, leasing, trading, or in any way profiting from someone's biometric information. In fact, you can't disclose the biometric information unless the employee consents to such a disclosure, and if the disclosure is used for a financial transaction, it has to be authorized. The exceptions are almost always law enforcement or a subpoena. In addition, it carves out sort of cybersecurity 101: you must use reasonable measures to protect any biometric information that you collect and store, you have to prohibit unauthorized access, and you have to meet industry standards, which of course is what the NIST cybersecurity standards require. One would hope this is sort of the bellwether moment as it relates to biometric information, but that is an interesting conversation probably for another day. Although it's important to note that Illinois is not alone: the California State Assembly is considering new rules that offer workers greater protection from digital monitoring, under the Workplace Technology Accountability Act. And this is all done in a way to protect workers from the use of technologies that negatively affect privacy, well-being, those types of things.
We already have laws that restrict employers' ability to insist on using always-on apps and other technology. And we of course have laws in some states that protect against mandatory requirements to reveal Facebook passwords, or passwords to accounts that you use privately; there are exceptions to that, as always. And of course, I'm assuming everyone is aware there is currently a federal privacy law being considered. It certainly has its criticisms, but let's all remember it's still in draft stages, so hopefully there will be some changes made to it.
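One way to picture the notice-and-consent obligations Angie lists is as the record an employer would need to be able to produce for each employee. The sketch below is a simplified illustration only; the field names and values are assumptions, and it is neither legal advice nor a statement of what the Illinois act literally requires.

```python
# Hypothetical record-keeping sketch inspired by the notice-and-consent steps
# described above. Field names and values are invented; not a compliance tool.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class BiometricConsentRecord:
    employee_id: str
    data_type: str                  # e.g., "fingerprint" for a timeclock
    purpose: str                    # why the data is collected
    written_notice_given: bool      # employee informed in writing
    written_consent_obtained: bool  # employee consented in writing
    retention_until: date           # how long the data will be stored
    destruction_method: str         # how it will be destroyed
    disclosures: list = field(default_factory=list)  # each should be consented to

    def missing_steps(self) -> list:
        """Return the notice/consent steps that have not been completed."""
        gaps = []
        if not self.written_notice_given:
            gaps.append("written notice")
        if not self.written_consent_obtained:
            gaps.append("written consent")
        return gaps

record = BiometricConsentRecord(
    employee_id="emp-001",
    data_type="fingerprint",
    purpose="timeclock authentication",
    written_notice_given=True,
    written_consent_obtained=False,
    retention_until=date(2026, 12, 31),
    destruction_method="deletion of stored templates at end of retention period",
)

print(record.missing_steps())  # -> ['written consent']
```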
Stephanie Moore:
I'll just add on to what Angie said, too. I think it's important that we remember that once biometric data is manipulated, it's incredibly difficult to fix, right? We only have one face, we only have one set of fingerprints. And if that data is compromised, it's much more difficult to deal with than a stolen credit card or email, or even a Social Security number, which could potentially be changed because of ongoing identity theft. So if our biometric data is stolen, we're permanently vulnerable, forever, to biometric-based identity fraud. So protections for this data are incredibly important, and I think it's a super important conversation to be having and thinking about.
Laurie Burns McRobbie:
Yeah, and we'll hope to pick up on this in another one of these podcasts, because it's certainly worth diving a little bit more into where things stand at the federal level, something that Beth Plale spoke to in our very first episode of this series: the need for federal action in this area, because this is so universal. So turning to teaching, which you both do, I'd like to ask you both to talk about how you measure the impact of student discussions and their experiences involving business ethics, especially in the context of technology and certainly AI and the internet in general. That is, do these experiences change how students think about the technologies, and do they change how the discipline is taught?
Stephanie Moore:
Yeah, I would say absolutely. The students in our classrooms are involved at every step of this process, in creating and implementing, but also, on the other end, feeling the effects of companies who are using AI in hiring or elsewhere in the process. Our courses cover a broad umbrella of business-related topics and how those intersect with ethics, but each topic shares a core element, in that we're teaching students how to approach issues and think critically. So here, the AI isn't ethical or unethical; we're talking about how we develop and use AI and other technologies in a more ethical way. The courses help students develop their own critical and ethical decision-making framework and make ethical decisions in their use of those technologies.
Laurie Burns McRobbie:
Angie, what do you see going on?
Angie Raymond:
So my course is a sort of hybrid survey course where we talk about law, ethics, and governance, and I do that at multiple levels. We spend a lot of time, especially as the class kicks off, with students just becoming aware of how much technology is in their lives, and beginning to talk through that. Quite honestly, I'm oftentimes surprised that students are surprised at how much whatever entity, pick one, knows about them. This is no longer a generation that is really embracing Facebook, necessarily, which we could talk about as a good or bad thing, but that's the easy place to start. We can also talk about something as simple as the technology systems in educational environments, or the parking they do on a daily basis; all of those things are actually gathering a lot of information about them, and they frequently don't realize it. I think that's the key to a lot of what we do: we're forgetting that there are a lot of people who either didn't grow up with the technology, and so have no reason to know, or who did grow up with the technology, but it came at a very early age and was super easy, so they never had to think critically about it. They never had to really dig in and think, "Hold on, what is this that I'm actually sharing this information with? Who is it?" Then, for my class, we pretty quickly move on from those early days: they get to identify a type of technology and dig in on it. We do what I call an audit; I probably should stop using that word, because it's come to mean something slightly different. What they do is we talk about power, decision making, those types of things, where we ask: "Wait, who made that rule? Why is it the rule? Should you be able to challenge that rule? Who is this entity that has your data?" It turns out it's not the Bloomington city council, it's a third-party provider of the app, and Bloomington doesn't even see the data in a lot of instances. So we try to really think critically about the data, the technology itself, where it is, who's making the decisions, and the power behind the decision making. And it's interesting. I'll give you my favorite project, and I won't tell you the student's name, because that would be way inappropriate. One of my favorite projects was a student who wanted to improve athletic performance and really hadn't thought about what that meant: how much information you have to provide to be able to run faster or hit something harder, how much medically driven health data is getting fed into that app. The student's position on this changed substantially once they realized how much you're actually sharing. And then they were pleasantly surprised when, in the second part of the project, they went and found the company's policy and looked at it, and it turns out the company has an excellent privacy policy. Assuming they actually live up to the policy, which we can't verify in the class, we came out in an okay place. But the scary part was that the student wasn't aware that heart rate, how long you sleep, all of those things impact athletic performance, and the app was clearly gathering all of that information.
Laurie Burns McRobbie:
What a great learning experience for that student. And I hope you're both seeing at the end of those classes that students have gotten that, and hopefully are going out into the world a bit better equipped to deal with these things. So we're coming to the end of our time together, and I'd like to ask both of you to reflect on where you see us going from an ethical standpoint moving forward. Stephanie, do you want to start?
Stephanie Moore:
Sure. Research indicates that in many fields, the best results are going to come from humans supported by intelligent machines. Humans are able to see the big picture, they have interpersonal skills, and they can and should continue to interrogate the systems to make sure they're operating in an ethical way. So AI has the opportunity to improve efficiency, diversity, and consistency, but the most ethical use still does seem to be in tandem with some human oversight.
Angie Raymond:
Sure. One of my favorite examples is always in the medical field, right? Technology is incredibly good at picking things up, oftentimes much earlier than humans can. Technology is also very good at picking up patterns that we would often never notice, or maybe aren't given enough time with the human being to notice. So machines trained on the digital records and outcomes of millions of previous patients can produce a diagnosis for a sick patient, along with recommendations for treatment and perhaps further tests. One of the big pushes right now in technology and healthcare, and I'm hoping Beth spoke about this as well, is that there's simply not enough good data to be used to make good diagnoses. And so there's a push at the national level now to have data sharing of medical information at a level that should not destroy our privacy, but should nonetheless lead to some incredible patterns. We're working on a couple of projects here at IU, so here's the plug, and hoping to work on more. For example, one of the things that we're pretty confident in, and there's been recent research on, is that there are exposures that occur to us that don't rise to the level of a reportable exposure. Lead, for example, has to be at a certain level or it doesn't count as an exposure. Well, what if over time we have a lot of different exposures, none of them so significant that they rise to the level of concern, but all of them sort of pervasive? What if it turns out that these 37, maybe minor, exposures to particular chemicals make it more likely for you to develop breast cancer, prostate cancer, fill in the blank, a lot of different diagnoses? Unfortunately, we can't do that analysis yet. This is what technology is going to allow us to do: it's going to allow us to crunch big data in ways that we never will be able to do as human beings. That's something that I'm incredibly proud of, and we do it right here at IU, and we'll continue to do stuff like that, where we're trying to use data for good. I know that's become sort of a trite phrase, so I don't use it lightly, but I think in the healthcare field it's an incredibly important conversation to have.
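The "many small exposures" idea can be illustrated with simple arithmetic. The chemical names, thresholds, and measurements below are entirely made up and do not describe any actual IU project or study; the sketch only shows the shape of the question that large-scale data sharing would let researchers test.

```python
# Arithmetic illustration only, with invented numbers: no single exposure crosses
# its individual threshold, yet the combined burden may still be worth studying.

# Assumed per-chemical reporting thresholds (hypothetical units).
thresholds = {"chem_a": 5.0, "chem_b": 3.0, "chem_c": 10.0}

# Assumed measurements for one person over time (hypothetical).
measurements = {"chem_a": 4.2, "chem_b": 2.7, "chem_c": 8.9}

# None of these is individually reportable...
individually_reportable = {c: v >= thresholds[c] for c, v in measurements.items()}
print(individually_reportable)  # -> all False

# ...but expressed as fractions of each threshold, the cumulative burden is high.
combined_burden = sum(v / thresholds[c] for c, v in measurements.items())
print(round(combined_burden, 2))  # -> 2.63, well above any single "1.0" threshold

# Whether a combined score like this predicts any outcome is exactly the kind of
# question that crunching shared data at scale would let researchers test.
```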
Laurie Burns McRobbie:
Oh, indeed, indeed. And it certainly is something that affects all of us, and it speaks as well to the points you've both been making about the need for the right kinds of protections and the need for transparency in how these processes are put together and how third-party tools are being used in the workplace and in every other setting that affects our lives. Well, I want to thank both of you so much for this discussion. We've been talking about whose business is it anyway, as we think about the use of AI in the workplace and the importance of using tools in equitable ways to create equity in the world we all live in. So thank you both.
Guests:
Thank you for having us.
OUTRO MUSIC
Laurie Burns McRobbie:
This podcast is brought to you by the Center of Excellence for Women and Technology on the IU Bloomington campus. Production support is provided by Film, Television, and Digital Production student Lily Schairbaum and the IU Media School; communications and administrative support is provided by the Center; and original music for this series was composed by IU Jacobs School of Music student Alex Tedrow. I'm Laurie Burns McRobbie. Thanks for listening.