Episode 5: The Doctor is in…the AI?
Discussion with Hannah Zeavin and Elizabeth Kaziunas on the use of AI for telehealth and teletherapy – its history, its current use, and the ethical issues that arise from technology-mediated healthcare and healthcare data.
Hannah Zeavin
Assistant Professor of Informatics in the Luddy School, where she focuses on the history of human sciences, the history of technology, feminist STS, and media theory.
Elizabeth Kaziunas
Assistant Professor of Informatics in the Luddy School, where she investigates the social impacts of artificial intelligence in healthcare.
INTRO MUSIC 1
Laurie Burns McRobbie:
Welcome to Creating Equity in an AI-Enabled World: conversations about the ethical issues raised by the intersections of artificial intelligence technologies and us. I'm Laurie Burns McRobbie, University Fellow in the Center of Excellence for Women and Technology. Each episode of this podcast series will engage members of the IU community in discussion about how we should think about AI in the real world, how it affects all of us, and, more importantly, how we can use these technologies to create a more equitable world.
INTRO MUSIC 2
Laurie Burns McRobbie:
The use of technology to support mental and physical health has been around for a long time, and it's come a long way. But the ethical issues around the use of automated processes to treat patients are very much still with us. Today, we are acutely aware of the surge in demand for healthcare services of all kinds, especially mental health services, and the pandemic has led to an increase in the use of telehealth technologies. The ethical issues are enormous, particularly when you consider how vulnerable people may be when seeking treatment. And we may not yet have the appropriate regulatory and legal structures to address the use of patient data, particularly when you consider things like an app on an iPhone; and there's so much more. Here today to talk about the state of these automated tools for addressing mental and physical health, and the ethical and equity issues arising from their use, are two professors from the Luddy School of Informatics, Computing, and Engineering. Hannah Zeavin is an assistant professor of informatics in the Luddy School, where she focuses on the history of human sciences, the history of technology, and media theory. Her book, The Distance Cure: A History of Teletherapy, came out last year from MIT Press. Elizabeth Kaziunas is an assistant professor in the Department of Informatics at the Luddy School. Her research contributes to the fields of human-computer interaction and health informatics by examining the social and organizational contexts of emerging health information systems and the lived experiences of health datafication. Both Hannah and Liz are new to Bloomington this fall, and I am delighted to welcome you both to Bloomington and to IU.
Hannah Zeavin:
Thank you so much for having us.
Liz Kaziunas:
Yeah, excited to be here.
Laurie Burns McRobbie:
Let's start by talking about your work. If you can, each describe your work in the areas of teletherapy and health informatics and what you see as the most challenging ethical issues. Hannah, do you want to get started?
Hannah Zeavin:
Sure, thanks so much. My work in teletherapy has largely centered on redescribing the history of clinical psychology via its shadow form, that of teletherapy. In my book, The Distance Cure: A History of Teletherapy, I argue that rather than this being a new concern in the age of the iPhone, teletherapy is actually older than therapy itself, and along the way it has carried an accompanying, democratizing promise: the idea that if we can just reach patients wherever they are, whenever they need care, we'll be able to extend what has historically been a very limited form of care, that of mental health care. The opposite has often held. So one of the stories I try to tell in my book is how that democratizing promise has gone unfulfilled, and instead many of these innovations, apps for the iPhone, sure, but also innovations in automating mental health care, have actually harmed the very patients that they seek to capture and, quote unquote, help.
Laurie Burns McRobbie:
Liz?
Liz Kaziunas:
Yes, so I come from the field of human-computer interaction, which involves a lot of thinking about design and the lived experience of people. My concern has been the healthcare setting: understanding how people use health technologies to manage their health conditions and to think about wellness, but also some of the challenges that brings with it. And when you ask about the ethical considerations with automation, and this move toward technology in the behavioral healthcare space that Hannah mentioned, one of the things I think my work speaks to is this issue around data: what counts as health data within this space? Especially in behavioral health, you're seeing a move beyond just what we might think of as HIPAA-regulated healthcare data that you see in an electronic medical record system, something that would be available to your doctor or other healthcare professionals, toward data from people's everyday lives: the way they use their phone, the way they might interact with social media. Even things like facial expressions and one's gait could now be considered health data, and could even give insights into behavioral health. So this is an emerging space in terms of the design possibilities of using data for behavioral health interventions. But it's also, I think, a sort of Pandora's box that we're opening, and we really need to think about the impact of that data, how it's used, who defines it, and where it comes from. And I think that connects a lot to Hannah's wonderful research, thinking about the field of mental health itself, how it was defined, and the histories and legacies that we have inherited as designers and technologists today.
Laurie Burns McRobbie:
Yeah, absolutely. Hannah, as you say, teletherapy has been around for a long time. And when we were talking earlier, before this podcast, we talked about Eliza, which is a program that emerged in the 1960s, I think, that mimics therapeutic conversations between a doctor and a patient needing psychiatric treatment of some kind. It was written originally to demonstrate the superficiality of human-computer interaction, but it kind of caught on, and I bring it up because I played with Eliza on a mainframe computer in the 1980s. And to me it was kind of a game; I think to a lot of people it was, but to a lot of other people it was very real. It started to take on, I think, some of the attributes that we see now in AI-enabled, well, Siri, Alexa, et cetera, that have these human qualities. And in fact, I think there was some reason to think that it might have been one of the earliest programs to be able to pass the Turing test. Whether it actually did or not, I don't know. But Eliza's still out there, right? Can you say more about where we are right now with these tools?
Hannah Zeavin:
Sure, thank you. Yeah, so Eliza, exactly right. It was this early 1966 MIT experiment that was meant to do something completely different. And when it debuted on MIT's famous time-sharing computer system, those who played with it (and I think it is a game, like you say), in addition to experiencing a kind of fun and pleasure, wanting to be alone with it, really took on the anthropomorphizing quality of a quote-unquote woman named Eliza, much like Siri and Alexa, our feminized bots today, and really took the therapeutic premise seriously. And what's really amusing about Eliza now (I have my students play with her in my teaching) is that she's understood to be kind of clunky and not so great at responding, but she's basically not experienced that differently from the common mental health chatbots in this space now, like Woebot or Wysa. The major difference is that kind of lack of friction. But going back to the ethical concerns, which I think arise right out of this question: much of the mental health data that's being collected now is not just going to Joseph Weizenbaum at MIT to prove a point, but instead is being used in a whole host of ways having very little to do with mental health. So earlier this past year, there was an entire crisis when a popular text-messaging mental health service for teenagers was found to be selling its data to be used to build better corporate customer service support for companies like Uber and Lyft. So that very same teenager who's turning to an automated service, for many reasons, is now also being exploited in this kind of huge leap, and at a scale that was not possible in Weizenbaum's time.
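The Eliza-style conversation Hannah describes is driven not by any understanding of the patient, but by simple keyword matching and pronoun reflection, which is why such a program can still read as a therapist. Here is a minimal illustrative sketch in Python; the patterns and canned responses below are invented for demonstration, not Weizenbaum's original script, which used a more elaborate keyword-ranking system:

```python
import random
import re

# Pronoun reflection: "my exams" -> "your exams" (simplified, illustrative)
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# A few invented keyword rules in the spirit of Weizenbaum's script;
# {0} is filled with the reflected text captured after the keyword.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)",   ["Why do you say you are {0}?",
                      "Did you come to me because you are {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "Can you elaborate on that?"]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo sounds like a reply."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(utterance: str) -> str:
    """Return a canned, keyword-driven response; no understanding involved."""
    text = utterance.lower().strip().strip(".!?")
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            return random.choice(responses).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

print(respond("I feel anxious about my exams"))
# e.g. "Why do you feel anxious about your exams?"
```

A handful of rules like these, looped over user input, is the entire mechanism; the therapeutic weight users attach to the exchange comes from them, not the program.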
Laurie Burns McRobbie:
Major, major issues. Liz, where are you on this from your disciplinary perspective? What else would you say, as you focus in on data and health informatics?
Liz Kaziunas:
Well, this rise of using natural language processing for chatbots, and the development of these digital mental health interventions, is something very prevalent within the HCI and design world and the computer science world. I think there's a lot of excitement around this idea, especially around the hopeful vision of providing therapeutic services to people who don't have access to an actual doctor in their community. And that's something that is widespread across the United States, as well as a global concern. So while the story of Eliza is absolutely amusing to think about, that history and how it's gotten reinterpreted, as an ethnographer and someone who studies lived experience, I'm so curious about why people have grabbed on to some of these chatbots, as shallow as they are in terms of providing care. And I think so much of it is because of the real absence of behavioral health care within one's own community, within one's own social world. In the absence of an understanding doctor or health professional that you can turn to and potentially gain help from for depression or anxiety or things like this, for many people, maybe the only option they feel they have at this time is turning to some of these technologies. And for me, as a scholar of HCI, that is something I think we don't grapple with enough. We just look toward these often hopeful visions of injecting technology without considering the broader social context. And just because people are using them, grasping onto them, doesn't mean they're necessarily a good thing for our healthcare system, or the kind of health we want for our children or parents or ourselves. That broader question of what kind of care they are actually enabling is something I think my field needs to reckon with, as well as the broader healthcare community. The legacies of things like Eliza haven't been questioned and really interrogated in our field; we sort of accept them and build upon them. But I think there's some deep reflection that needs to happen, and people are starting to grapple with some of those questions. So in that way, I think there's some movement toward asking: are these the type of technologies that we want to be investing our resources in, and our time and energies, as designers, as computer scientists, as design researchers?
Hannah Zeavin:
Yeah, and I think you also raise this question: in the absence of human-to-human care, is computational care the best thing on offer? And why do people turn to it? Because there's plenty of evidence that shows that people aren't only going to run and grab the only interventions offered to them. And I think that has to do with, going back to Eliza and your experience of it, right, it's a game. The kind of gamification of mental health care, whether it's for reaching your steps or for getting five minutes of meditation and collecting a streak, has really entered the field. And all of these different corporations that make bots, drawing on this history of Eliza, know that, and they present them as both gamified and cutified. One of them uses an avatar of an adorable penguin, for instance, or a Replika that you can make in your own image or the image of someone else, combining, like, 1990s Sims with Eliza from 1966. And what these interventions offer in terms of efficacy is very secondary to the idea that someone wants to use them again and again. In fact, where there are efficacy findings, they're very poor. But most of the apps that are available over the counter, say without being referred by a psychiatrist, don't post their efficacy findings and don't have any regulation, basically whatsoever. And so I think that, put together with the data that they're drawing on, which is not just your concern, Liz, but I think should be all of our concerns, is really a damning portrait of that kind of day-to-day, quote-unquote accessible AI mental health tool.
Laurie Burns McRobbie:
And, with respect to this, Liz, you can probably speak to telemedicine more generally, which falls into other healthcare or medical specialties outside of psychiatry or counseling, where there are privacy standards and there is some regulation and so forth. Do you have some of the same concerns? Are you seeing the same third-party products seeping into the telehealth space more generally?
Liz Kaziunas:
Yeah, absolutely. I think this is a trend that we've been seeing for a long time, even just from the time I started my PhD until now. In the last decade there's been a move by Silicon Valley and large tech companies to get involved in the healthcare space. You see this from companies like Google and Meta: a real interest in partnering with hospitals and clinics to access patient data, to be able to use that data with machine learning algorithms to extract what they hope is useful information, impacting everything from the types of cancer treatments one might apply in particular healthcare settings to the behavioral healthcare space. And that movement of corporate interest into healthcare data is something that is growing, and people are often not sure quite what to do with it on the technology side. As someone in the field of HCI, I think there's a lot of interest, especially among those who are involved on the technical end of machine learning research, in partnering with tech companies; there's an issue there of access to data and the computational power to be able to do cutting-edge research. But I think we really do have to question the ethics of that as well, those alliances between academic researchers, whether they be in a university healthcare system or in a computer science department, and industry partners in this space. I mean, in healthcare, obviously, there are HIPAA regulations, and a lot of these companies, like Amazon and Google, are adhering to HIPAA. But HIPAA is limited, and I'm not a policy expert, but in my understanding it is limited in accounting for all the different types of data that are being collected and used. We mentioned some of these digital apps, like the chatbots that Hannah studies. But there's a wide range of data on social media that can be collected and used for making insights into people's behavioral health. Now, I should pause and say some of that is contested: what insights are actually being made from that data has been questioned by many scholars and researchers, and if those predictions and uses of data are accurate, what do they actually tell us about people's behavioral health status? I think those are a larger set of issues as well. But that broader interest in data collection, the harvesting of personal data across so many aspects of people's lives, extends to school settings, right? It extends to the workplace, and to wellness programs that might be using all this data being collected that isn't technically protected by HIPAA as healthcare data in the very narrow sense we have defined it in this country, but that certainly is being used as a proxy for making insights about people's health and predictions about their health. So I think a really large and pressing issue is defining what counts as health data, and maybe acknowledging that this is messier and more complicated than the legislation that is in place, which was designed before the advent of much of our modern machine learning technology. We haven't really come to terms, I think, with mapping policy to our current state of computational capabilities.
And so, considering who has access to data, where it's coming from, and where it's being housed is all playing a part in making those decisions, and considering industry and how it's involved in healthcare now is absolutely a part of that larger equation.
Laurie Burns McRobbie:
Yeah, well, and the fact that we have essentially a for-profit healthcare system in this country just fuels that, and the technology is moving very quickly, so it's hard to get out in front of it and anticipate some of this. One of the other pieces here, of course, and we saw this especially during the pandemic, was a rise in, well, let's call it remote therapy; that is to say, you're talking to a person, but they're on Zoom. In many respects you can see this as a good thing, because it really did expand access for people who didn't have it, assuming the infrastructure was there for them to be able to do that. But it gets to the fact that at the end of the day, there are people, there are therapists, who have to be either directly or indirectly involved in all this. And it leads me to ask a two-pronged question. One prong has to do with the way in which professionals in these fields are trained and how they operate, because there are standards and there is a regulatory structure. I'm thinking here of, I guess it was something like a hundred years ago, the Flexner Report, which was instrumental in creating the medical training system that we have in the United States today: it's standardized across medical schools, you have, you know, funding for residents' education, the model of residency education. And there doesn't seem to have been, aside from psychiatry, which falls under the medical profession, something comparable in the more general psychology and psychological counseling realm. Hannah, I wonder if you can speak to whether that would be a really useful thing to do, because at the end of the day it could address some of these questions that have been sitting out there unaddressed.
Hannah Zeavin:
Thank you so much. I think there's a lot here. The first is that, you know, I had just finished my book when the pandemic started, and was interested and invested in seeing what would happen when remote therapy stopped being the shadow form and started being the dominant, if not only, mode of care on offer. And I assumed that there would be some jump in access, because there would be a kind of added flexibility. What we saw instead, and there was a very persuasive report done by Time magazine, was that there was only maybe a four or five percent growth in access, because there are only so many therapists in this country, as you point out, and they already were overworked; they had no time in their schedules. And what's more, they stopped having cancellations, because no one was getting called into a lunch meeting, no one had to go pick up a sick kid; everyone who is of the therapist class was home, right? And the history of mental health care in this country is one that is deeply classed, because it's remained very expensive, for very concrete reasons having to do with policy things we could think about: insurance has almost never had anything approaching true parity with healthcare. Not that our insurance system is viable in the long term, it seems we all agree on this, but nonetheless, it's worse for mental health care. And that means that interventions that are quick and easy to offer have won the day over psychodynamic talk therapy, which used to be the prevailing model until really 1980. So of course psychopharmacological drugs, cognitive behavioral therapies, anything that's quick and easy to use, including technology; that's where the emphasis has been. In order to truly change the model of mental healthcare in this country, you would need to massively invest in education. And it's not quite the same, as you're pointing out, as medical education, where there is an MD and then there are standards with regard to residency, for the simple fact that we have clinical psychologists, we have marriage and family therapists, we have social workers, we have PsyDs, and on and on. It's a more diverse field that way in terms of credentialing, but also because there's no parity between states. Each state licenses its own counselors whichever way it does it; that's fifty different ways to do it in the United States. Very few of them allow reciprocal licensing, as the term of art goes, so that if you're in California, you can teletreat people, say, in Indiana, which would make an actual difference in access. We almost got this done in the 1940s, after World War Two, when we understood already that the US was in a mental health care shortage and crisis, but we added literally only a thousand psychoanalysts. We need to really radically reevaluate how this care is taught so that our nation can access it right now. As with medical school, it's very expensive to put yourself through counseling school, and there is no national, top-down infrastructure for ensuring its quality or its accessibility.
Laurie Burns McRobbie:
Well, and this is a bit off topic, or maybe it's not, or it's another podcast entirely, but there is still stigma, of course, around seeking mental health care. And that can vary from population to population, and then you layer these kinds of access and expense issues on top of it. And here we are in what really can be considered a mental healthcare crisis. Liz, I'm glad you mentioned how this plays out in school environments, because of course the rise of concerns we have in the K-12 population around mental health matters here too, with child psychiatry being in a sense another subspecialty within an already overtaxed specialty to begin with. So these are really crucial issues. When you think about maybe more of an ideal environment, and you've certainly touched on both, by describing the opposite of the ideal but also some of the things that you think are possible, I'd love to hear you both talk about what you're concerned about as we look ahead, given where we are right now. What do we need to be very cautious about? But also, what are you excited about? What do these technologies bring to us that can address the kinds of care issues that you've both been mentioning?
Liz Kaziunas:
I'll start with concerns. I think one of the real issues that keeps me up at night is this idea of injecting a one-size-fits-all approach to behavioral health across different communities, whether they have access to healthcare services or not. I spent many years in Michigan doing ethnographic work in community behavioral health in a lot of underserved communities, so I got a real close-up picture of the local challenges involved in accessing care, along with the possibilities of things like telehealth, in talking with patients as well as clinicians and local healthcare providers. One of the things that would come up again and again is: I'm glad I have access to a doctor as a patient in Michigan, even if they're in South Carolina or Texas; it's really better than nothing, because sometimes I'm waiting months and months just to see a doctor at all for a pressing health condition. However, the people who spoke about care in terms of really being able to access care that helped change their lives, that gave them an opportunity to feel they were being listened to and to make positive changes, however they defined that, were often those who had access to local doctors and local knowledge and communities of support. People who understood: here is a place not just to get a prescription and get it filled, but a community who understands the issues of food access, of transportation, of feeling safe when you exercise, of all of the really complex social and economic factors that go into our health, especially things like behavioral health. If you're living in a community where people have struggled to get work and jobs for many years because factories have closed, and you are dealing with real moment-to-moment and daily concerns of access to food and safe housing, then dealing with things like depression and anxiety is part of most people's lived experience. So being able to both understand and tap into that, as well as connect people to support systems that really can grapple with and understand those localized challenges, is so important. And it's something that these broader approaches, whether that be gamification, with all the issues that go along with it, or even maybe more clinically inspired approaches toward behavioral health, often just leave absent. And we're not quite sure how to deal with that in the technology world. While that's a really pressing concern, because I think it can actually block the ability to get needed care, it's also potentially a place for hope. I think the more people on a local level who are involved in conceptualizing AI interventions in health care, and especially behavioral health, the better: a variety of expertise, from the lived experience of what it's like to have a behavioral health condition to nurses and community health workers and local doctors. All these voices are currently on the sidelines of both defining the problem and conceptualizing the intervention, and the hoped-for end goal. What does it look like to use AI or machine learning? What types of data are important in this particular community? What kinds of solutions can we envision?
So I think about democratizing that. In the HCI world, we often talk about participatory design, an approach where you involve many stakeholders from across a design space. That would be not just data scientists and engineers, and not just a psychiatrist or a trained clinician, but people who have a stake in that issue, whether it be a patient or an advocacy group; in some communities that might even be part of the faith community, if that's an important part of how people take care of each other there. We need to be including those voices and that expertise, which is often invisible, often silent. If you're not including that knowledge, we're always going to be losing in any attempt to incorporate technology into really complex social situations as well as healthcare situations.
Laurie Burns McRobbie:
Hannah, what would you say are your concerns, which I know we have many of, but also, what excites you about the potential?
Hannah Zeavin:
Yeah, I mean, what I'm concerned about right now has to do with the increased capture of this market by corporate interests. Amazon, for instance, moving further and further into health care is something that makes me nervous, especially because on the mental health side of things they're really reliant on paralinguistic vocal monitoring, which is a fancy term for listening to the signifiers of the voice, something my beloved colleague Beth, who's an ethnographer at Princeton, works a lot on. And this is really reminiscent of what Liz was bringing up earlier, where what counts as data very quickly begins to sound like phrenology, or something from the early twentieth century, and is being built into these kinds of really weak, if not totally bunk, metrics that are really driving our healthcare space now. That makes me nervous, because Amazon has such a platform, such capture and control of the market, that as they start to roll out more and more units of their health care initiative, I can see that the same people Liz is talking about, who aren't able to meet with a doctor in person, will soon be shunted into Amazon Care and other things like it: bad for therapeutic labor, bad for patients, bad for the ecosystem. But what I'm very excited by has to do with these already really beautiful examples of technologists working with artists, working in community, to make small-scale applications, these initial, well-considered AI bots and other kinds of interventions that are helping communities flourish on their own terms, because of exactly this notion of participatory, community design, which means that care is happening in community, by community, for community, with technology. As just one example that comes to mind, Rashaad Newsome is a technologist who works with AI at Stanford. He's also an incredible artist. And in the wake of George Floyd's murder two years ago, he made an app called Being, which is explicitly for dealing with depression and anger in the Black community. He's rolled out different elements of it as it's slowly grown from a chatbot to a yoga space to a meditation space, looking at that kind of care of the whole person. Because I think this is something, too, that psychiatry has really hived off: here, take this, or do this activity, and the rest of you will get better. And we've seen that when we really take the time, like Liz is describing, to know the full person, what are your housing needs, your food insecurities, right, those really contribute to treating depression and anxiety much more effectively. And I think that's really possible in various communities, whether it's Luddy, which has a hugely strong health informatics program, or Stanford, right, where there are technologists who really understand that community has to come first.
Liz Kaziunas:
Yeah, and while this is not in the behavioral healthcare space, I think there's a lot of inspiration to be had from other bottom-up approaches to health technology design. One group of folks that I have been privileged to study and work with are from the type 1 diabetes community. Groups like Nightscout and OpenAPS, an open source artificial pancreas system, are a collective, really, of people who have type 1 diabetes, as well as family members and friends who are concerned about care, who access their own personal health data and create and design tools and interventions that support the needs of that particular community. So there's a lot of personalization, and there is a lot of work being done to create specialized types of interfaces, for example a mobile app for viewing and using your continuous glucose monitor data and getting treatment for those kinds of conditions. So I think it's worth looking to different spaces, both in the technology world and the healthcare world, for examples of really localized approaches, but also for examples of these open source movements in the healthcare space, where patients have very importantly stepped up and said, we are the experts of our own health. These types of design approaches to customizing care provide alternative ways of conceptualizing the way technology gets used and how it gets applied to these complex problems. Again, it's a different healthcare space with different considerations, but I do think it's important to look at those examples, in healthcare and beyond; there are a lot of different settings where people are exploring community-driven change in terms of technology design. So yeah, I take hope and inspiration from that.
Laurie Burns McRobbie:
That's great. And in the end, I think you're both echoing what has been a kind of hallmark of previous episodes of this podcast, which is about the ability of people to control their data. And the requirement to be able to do that is transparency: to understand that the patient, in this case I'm talking about health care, should be the owner of their own health care data. And again, Hannah, your concern about third parties coming in, and I think all the concerns about where data is going, are just really important to talk about in this space. I want to thank both of you so much for this conversation. I've got all kinds of ideas for conversations I want to continue to have with you and with others, and I really appreciate the time you've taken, at a very busy time of year, to be with us today. Thank you both.
Liz Kaziunas:
Thank you so much.
Hannah Zeavin:
Thank you
OUTRO MUSIC
Laurie Burns McRobbie:
This podcast is brought to you by the Center of Excellence for Women and Technology on the IU Bloomington campus. Production support is provided by Film, Television, and Digital Production student Lily Schairbaum and the IU Media School. Communications and administrative support are provided by the Center, and original music for this series was composed by IU Jacobs School of Music student Alex Tedrow. I’m Laurie Burns McRobbie. Thanks for listening.