
Podcast: Building Ethics into an Artificial Intelligence System

Podcast featuring Dr. Linda C. Ashar, J.D., Faculty Member, and
Dr. Wanda Curlee, Program Director, School of Business

Ethics must be intentionally coded into artificial intelligence systems. But who decides on the ethical or moral standards used? How can programmers minimize bias in the system? What regulations are needed to oversee this evolving technology? In this episode, Dr. Linda Ashar talks to APU business professor Dr. Wanda Curlee about the challenges of incorporating ethics into AI. Learn why it’s so important for corporations and any entity using AI to understand the system in its entirety, monitor for unintentional bias, and assess the validity of outcomes.

Listen to the Episode:

Subscribe to Politics in the Workplace

Apple Podcasts | Spotify | Google Podcasts | Stitcher

Read the Transcript

Dr. Linda Ashar: Hello everyone, this is Linda Ashar, your podcast host. In this podcast series, we consider a broad range of topics of timely interest to both employers and employees. And today we are exploring ethical issues relating to artificial intelligence.  

Optimizing logistics, detecting fraud, designing art, conducting research, translating languages, banking and shopping—these are just some of the ways intelligent machine systems are transforming our lives mostly for the better. As these systems become more capable, our world becomes more efficient and consequently richer.

AI has been very much a presence in the workplace for decades, but it also has raised an increasing number of concerns about its implications for changing lifestyles and work prospects in the future. How does it control our lives? What are the responsibilities imposed on human users of AI systems? Are there ethical challenges?

Today, we are privileged to have Dr. Wanda Curlee back with us to talk about these questions. She is an expert in AI applications. Dr. Curlee is the Program Director for Business Administration in the School of Business at American Public University. She has a master’s degree in Technology Management and a Doctorate of Management in Organizational Leadership.

She has been teaching online for over 20 years and has worked in the business sector with AI applications and currently researches AI topics. Dr. Curlee is active with Project Management Institute where she has several certifications and serves on its ethics committee. Wanda, welcome, and thank you for being here today.

Dr. Wanda Curlee: Linda, thank you so much for having me. This should be an exciting topic to explore, and I look forward to it.

Dr. Linda Ashar: Well, I agree that it’s a great topic. Before we get into the details of the ethical focus of today’s discussion, we probably should explain what we mean by artificial intelligence as a baseline, especially for the workplace. Would you briefly explain AI in the workplace for us with a bit of historical perspective?

Dr. Wanda Curlee: Sure. AI, or artificial intelligence, has been around for over 50 years. Most people don’t realize that, but it’s been around for a while. It’s just recently, and when I say recently I mean probably in the last 15 years, that it has started to make inroads into business.

So what do we mean by artificial intelligence? Artificial intelligence is machine learning. It is ones and zeros, folks. It is coded. It is nothing magical. It is nothing that just happens. Somebody has to sit down and code AI.

We have AI in robots, we have AI in healthcare, we have AI in project management. If a Fortune 500 company doesn’t have AI, they’re not going to be a Fortune 500 company for very long because it’s needed. AI is very good at looking at things and seeing the patterns and coming up with suggestions based on those patterns.

So it’s everywhere in business, and in fact it’s in our own lives as well. When you think of Alexa, Google, or Siri, those are all AI systems. In fact, your laptop or your desktop has AI on it; you just don’t see it as AI. So it’s spreading throughout our lives.


Is AI “The Terminator?” Is AI Data from “Star Trek?” No, we’re not there yet. I don’t know that we’ll ever get there, at least not in my lifetime, but I would hope that through an ethical approach to AI we never get to something like “The Terminator.”

Dr. Linda Ashar: Well, AI’s changes in how we work and live, and its continuing changes to workplace practice, have raised ethical questions about its use. In many ways, this is just as much a new frontier for ethics and risk assessment as it is for emerging technology. So Wanda, which ethical issues and conversations keep AI experts up at night?

Dr. Wanda Curlee: Well, when AI was first being built, I mean at the practical level where a teenager who is very good at coding could do AI, people didn’t think about ethics. They didn’t think about what goes in and what comes out, and bias was put in there. So one of the things we have to learn about AI and ethics is the bias that’s encoded within the AI. And bias can be good or it can be bad; it just depends on what the application is for.

For example, let’s take facial recognition. Suppose you encode a zip code into the AI, and that zip code has a certain population profile; say it’s coded as a place where a lot of the drug runners are, and those drug runners are Hispanic. The system is going to key on those areas and maybe get the facial recognition wrong, because it’s now saying, “I need to look in this zip code,” instead of also looking in an all-white neighborhood, an all-African American neighborhood, or a mixed neighborhood.

So we have to be careful about how we handle bias and how we get rid of it. I’ll discuss some of that a little later on, but that’s one of the things that keeps many people awake at night.

Another one is called the black box problem. We enter data into the AI, because data is what AI works from, and we don’t know how it arrives at the answers it spits out on the other end. It goes through a black box, and we just have to trust it. Well, should we trust it? Are we giving it data that is biased? Are we giving it data that is completely wrong, which it then takes and treats as correct?

So we have to understand what that algorithm is doing in the black box. My suggestion is that if you’re in business and you don’t know what your AI is doing in that middle part, turn it off, because you are setting yourself up for failure and possibly for lawsuits. And your customers may no longer want to come to you, because you’re running something without knowing what it’s doing.
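
To make the point about understanding what a model is doing more concrete: one common, lightweight check is permutation importance, which shuffles each input column in turn and measures how much the model’s accuracy drops. Below is a minimal sketch in Python, assuming a generic scikit-learn-style tabular classifier and invented toy data, not any system discussed in the episode.

```python
# Minimal sketch: probing which inputs an otherwise opaque model relies on.
# Toy data and model; illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Three features; only the first two actually influence the label.
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each column in turn; a large accuracy drop means the model leans on it.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["feature_0", "feature_1", "feature_2"], result.importances_mean):
    print(f"{name}: importance {score:.3f}")
```

If a sensitive attribute, or a proxy for one such as a zip code, turned out to carry most of the weight, that is the kind of finding that should trigger the review described here.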

We like to know what our human employees are doing, whether they’re doing something fraudulent or something unethical, and in the same way you can teach ethics to AI. But again, it’s got to be done correctly, because even with ethics you can put in bias.

Take a self-driving car. Nowadays we still have people in them, but let’s say we get to the point where we allow them on the road by themselves. And by the way, there are 18-wheelers out there that don’t have drivers in them, though only in certain areas of Texas, so don’t everybody get alarmed.

But what ethics do you put in there as to whom the car should not hit? If a pedestrian is going to be hit, does it hit the older person or the younger person? Or if there is a driver inside, does it sacrifice the driver instead?

So it’s difficult to work out how you’re going to code those kinds of ethics, and who decides on the ethical or moral standards within the AI. Those are some of the issues keeping some folks up at night, and they should be keeping the C-suite up at night if it truly doesn’t have a system to check the ethical bounds of its AI.

Dr. Linda Ashar: So it seems like an overly simplistic statement, but I’m going to say it anyway: we can’t expect AI to have any ethics of its own, and its ethics are going to be no better than the ethics of the person coding it?

Dr. Wanda Curlee: Correct, unless the person coding it has oversight. If I’m allowed to go and do what I want to do, willy-nilly, I’m going to put my own ethics into it. Well, my own ethics might be great, but they might not be the ethics of the business, because you can have business ethics and then you can have your own ethics.

So we have to make sure we have oversight of those coders and of the IT department, because once the system is coded and turned over to IT, they need to make sure the ethics keeps pace with what the organization wants, and keeps pace as the organization changes, because organizations do change.

Dr. Linda Ashar: So putting on my lawyer’s hat for a minute: if there’s an accident involving that truck you mentioned, and I’m representing the estate and family of the person who was killed, I’m going to want to find out whether simulations were run on deciding who gets killed. Did they run simulations on the system, and on programming it to make the call on whether to run over person A, B, or C?

Dr. Wanda Curlee: Correct. You would want to understand the black box I mentioned before. The first thing you would want to check is: is it a black box? Do they understand what’s going on inside it? If they don’t, shame on them, and as a lawyer, I’m sure you would be all over that.

If they do understand what’s going on in that black box, what ethics were coded into it? Were they the ethical norms you would find in Texas, the norms you would find in the United States generally, or the ethics of that business, assuming the business had ethical standards? And yes, absolutely, they should be running simulations to make sure the system is holding to its ethics.

Dr. Linda Ashar: Well, I’m flashing back to the 1970s and the Pinto gas tank debacle, when Ford ran not a simulation of who would actually get blown up, but a paper calculation: was it cheaper to let people burn, or to recall the cars and fix them before putting them on the market?

All the literature says it was cheaper to put the cars out and pay whatever the damages would be than to fix the cars ahead of time. And the damages were far greater than they projected, I believe. That wasn’t AI, but it’s an analog, I think, to the concept of projecting which decision will be made.

Dr. Wanda Curlee: Absolutely. I totally agree with you. Technology is inherently not good or bad. It’s just technology. Your laptop’s not good or bad; it’s the person behind it. So AI is the same thing. We can make AI be good, we can make AI be bad. We can make AI for military purposes, we can make AI for civilian or commercial purposes. Different ethical bounds would apply to each.

So in your analogy of the Pinto, if we go back to that 18-wheeler and it kills somebody, let’s say it still has a human driver, and let’s say it kills that driver. Was that what it was programmed to do? Did the human driver know that? Was that something the driver was willing to accept?

Yeah, those are all things that we need to understand. And that’s why many, many people suggest that we have an ethics board for AI. If a company doesn’t have an ethics board for AI, it is already behind, because if companies don’t start their own, the government will do it for them.

Dr. Linda Ashar: And perhaps impose requirements that are too onerous for the company, beyond what it should have to do.

Dr. Wanda Curlee: Right. I’m not going to speak for the government, because sometimes we don’t understand why the government does some things. But if companies are acting responsibly with their AI, as they should, protecting our privacy, protecting how our information is used, handling customer satisfaction well, and being ethical, then regulations won’t need to be as onerous as they could be. That’s my humble opinion, anyway.

Dr. Linda Ashar: I’m certainly not someone who would disagree with that, based on my experience in law for over 30 years, and philosophically as well.

This also brings to mind a historical circumstance more recent than the Pinto. This is another big-company example that was well reported at the time, and it’s my understanding it has since been remedied. Amazon had an algorithm that had a bias in detecting people’s gender, I believe, and the bias in the system was that it detected males more accurately.

Amazon eventually discovered this in its AI hiring and recruitment tool, which was preferring male candidates over females. They found that the system had been fed more data and hiring information on male candidates, and when they discovered this, they terminated that system.

Dr. Wanda Curlee: Yeah. But how many people were wrongly hired in the meantime? Something I wanted to bring to everyone’s attention is that there’s actually a journal now called AI and Ethics, which I found fascinating; I didn’t know about it. It’s a peer-reviewed journal, which is good for academics.

But one of the authors there is Dr. Jamie Brandon. One thing he astutely said was that AI algorithms don’t take causation into account, only correlation. So it behooves the company, as Amazon did, to look at why it was hiring only white males, or seemed to be leaning toward white males. And it was because their AI wasn’t coded correctly.

So yeah, it was good that they terminated it, but I guess I would argue why weren’t they doing some testing and piloting to see if that was going on? Why didn’t they understand the black box part of it? They should have been able to pick up that bias. So to me, it should have never gotten off of first base to be a hiring system.
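
The testing and piloting described here can start with something as simple as comparing selection rates across groups. Below is a minimal sketch in Python with invented screening results, not any real company’s data; the 0.8 cutoff is the common “four-fifths rule” heuristic, used purely for illustration.

```python
# Minimal sketch: audit a screening tool's selection rates by group.
# All outcomes below are invented for illustration.
from collections import Counter

# (group, was_selected) pairs, as the screening tool might log them.
outcomes = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", False), ("women", True), ("women", False), ("women", False),
]

selected = Counter(group for group, ok in outcomes if ok)
totals = Counter(group for group, _ in outcomes)
rates = {group: selected[group] / totals[group] for group in totals}

baseline = max(rates.values())
for group, rate in rates.items():
    ratio = rate / baseline
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, ratio to top group {ratio:.2f} -> {flag}")
```

A check like this does not prove or disprove bias on its own, but it is the kind of routine pilot test that can surface a skew before a system is put into production.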

Dr. Linda Ashar: Yeah. And I don’t know all the particulars; this is just how it’s been reported in several articles out there on the internet. It happens that this was reported about Amazon, but I suspect they’re not the only company using such algorithms.

And I only mentioned them because they are the company this has been reported on, and it’s a good thing that they fixed it. But the point about that one is that this was an inherent bias that crept in somehow, maybe not even consciously, because that’s the hideous part of bias: it isn’t always a conscious thing that’s put into systems.

Dr. Wanda Curlee: Oh, absolutely. Another interesting bias I ran across in the AI and Ethics journal, reported by one of its authors, is this: think about using an AI for weddings, concentrating only on the bride.

Here in the United States, we think of a white gown, so if the AI is looking for bridal outfits, it’s going to go and look for white gowns. But if you’re from India, you might be wearing something red or something very bright; that’s not something we would typically do in the States.

So there you have bias again, and it’s not even something that was done maliciously. It’s just based on that person’s culture; they’re used to brides being in white. So we’ve got to be very careful about what we want the AI to do and have a panel to think through the different biases and unethical outcomes that could be in there.

Dr. Linda Ashar: Is there such a thing as a program that can be run to detect bias in a program?

Dr. Wanda Curlee: That’s an interesting question. There are AI systems that detect bias, but not in other AI systems. I would venture to guess that if there’s a need for it, some entrepreneur out there will create such a system, and it may exist already; I’m just not aware of it.

But again, whose bias do you look for? What might be biased in India may not be biased here; what might be biased here in the States may not be biased in Mexico. It gets very difficult to decide what to look for in bias. It’s like the ethics question: what’s ethical and what’s unethical?

Dr. Linda Ashar: Well, maybe it’s not necessarily a question of ethics, although ethics is the motivation. But let’s take the wedding dress. If a program is skewed so that it has a preference for white, couldn’t the program detect that, as opposed to being balanced across the color spectrum?

Dr. Wanda Curlee: I would guess it could. Where you get into the more slippery cases is, going back to facial recognition and zip codes, let’s say you do have a zip code with a lot of people who are criminals. But what kind of criminals are they, and do you really want to target that zip code? Maybe they’re petty thieves; do we really want to attach that label to that zip code?

Or maybe they’re hardened criminals, and maybe we do want to flag that zip code, but maybe we don’t, because you don’t want the people who happen to live there to be the only ones looked at for the hardened crimes, so to speak. We have hardened criminals of every gender, every race, and every creed. So it’s difficult, I think, when we get into the more complex biases.
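
On the simpler question raised just above, whether a program could notice that its own outputs skew heavily toward one category, a very small check can do it. Below is a minimal sketch in Python using invented search results and an arbitrary threshold, just to show the shape of such a check.

```python
# Minimal sketch: flag when outputs concentrate in one category.
# The result list and the 70% threshold are invented for illustration.
from collections import Counter

results = ["white"] * 180 + ["red"] * 12 + ["gold"] * 8  # e.g., retrieved bridal outfits

counts = Counter(results)
total = sum(counts.values())
for color, n in counts.most_common():
    print(f"{color}: {n / total:.0%}")

top_share = counts.most_common(1)[0][1] / total
if top_share > 0.70:
    print("Warning: outputs are heavily concentrated in one category; "
          "check whether the training data reflects only one culture's norms.")
```

As the conversation notes, the harder part is deciding which skews matter and whose norms define them; the code only measures the distribution.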

Dr. Linda Ashar: So maybe the bias starts with what kind of questions we’re asking?

Dr. Wanda Curlee: Could be, are we asking the right question?

Dr. Linda Ashar: Because rather than asking about the zip code, maybe we should be asking, where are those who are committing X type of felony, and that’s going to generate the zip codes where those felonies exist.

Dr. Wanda Curlee: It’s interesting that you should ask that, because I was reading about an AI system used in the criminal justice system, mainly jails and prisons, to try to understand who is going to be a repeat offender.

Now, understand that a disproportionate number of people in prisons are Black; right, wrong or indifferent, that’s what it is. And they found that this AI system was heavily biased against offenders who were African-American. It very rarely said that a white offender would be a repeat offender, but almost 100% of the time it said that an African-American offender would be. Somebody should have picked up on that very quickly.

Dr. Linda Ashar: That brings me to my next question, one that has been circling around in my head during this discussion. We have all read, and I think in a prior discussion you may have even mentioned, that AI tends to learn.

In the example you just gave, is it possible the AI picked up from the original programming, as it was sifting and sorting and because the identifying marks were there, that the majority of a given felony were African-American offenders, and then the system started leaning toward a bias against African-American offenders?

Dr. Wanda Curlee: It could be, or it could have been the data; remember, it learns from the data it was given. Suppose it’s only given data on Black offenders, say 99% of the records, when really it should only be 50%, because 50% of the violent offenders are also white. And I’m just throwing these statistics out; I don’t know that they’re factual numbers.

If it’s only receiving records of African-American offenders and not of the white offenders, even though those exist, then yes, it is learning, but it is learning incorrectly because it has the wrong data. Remember, it learns from the data it receives, and if the data is bad, it’s going to learn incorrectly. That’s why we, as humans, have to look at it to make sure it’s doing the right thing.

We have to make sure that what goes in is correct and that what comes out on the other side is correct. Because even if what goes in is correct, if you don’t understand what’s happening in that black box, it may be putting out the wrong results. They have actually found that with some healthcare systems.
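
The point that a system learns from whatever data it receives can be shown with an intentionally crude example. In this minimal Python sketch, with all numbers invented, a frequency-based predictor handed a skewed sample reproduces that skew in what it learns, even though nothing in the code is explicitly biased.

```python
# Minimal sketch: a naive frequency-based predictor learns whatever base rates
# appear in its training sample. All numbers are invented to show the mechanism.

def learn_rates(records):
    """records: list of (group, reoffended) tuples from the training sample."""
    totals, positives = {}, {}
    for group, reoffended in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(reoffended)
    return {group: positives[group] / totals[group] for group in totals}

# Skewed sample: group A is over-collected and mostly labeled as reoffending;
# group B is barely represented, so its true rate is invisible to the model.
skewed = [("A", True)] * 95 + [("A", False)] * 5 + [("B", True)] * 1 + [("B", False)] * 3

# A more representative sample of the same (hypothetical) population.
balanced = [("A", True)] * 30 + [("A", False)] * 70 + [("B", True)] * 30 + [("B", False)] * 70

print("learned from skewed data:   ", learn_rates(skewed))
print("learned from balanced data: ", learn_rates(balanced))
```

Running it shows the same simple "learner" producing very different conclusions from the two samples, which is exactly the garbage-in, garbage-out risk described above.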

Dr. Linda Ashar: Well, that was where I was coming from. Let’s assume the data wasn’t wrong, but it just happens that there was a large percentage of minorities in the data, and the AI on its own learns an assumption and starts applying that assumption in future calculations.

Dr. Wanda Curlee: Right. And it behooves the criminal justice system to triangulate, just as we all do in academia, to make sure that what we’re researching actually makes sense. They need to look at all the data out there, not just one prison.

That was part of the issue: many of the wardens were only looking at their own prison instead of the whole prison system. The federal government was only looking at federal prisons, the states were only looking at state prisons, and the private operators were only looking at private prisons, instead of all of them collectively looking at all of it.

Now, will that ever happen? I don’t know, but we’ve got to be very careful with that. We run into the same problem with healthcare, because we have insurance companies that don’t want to share data.

Dr. Linda Ashar: Well, let me take a leap into the future because I’m watching the clock here on our time. Human beings are at the top of the food chain because of ingenuity and intelligence. We dominate bigger, faster, stronger animals, because we can create and use tools to control them or overcome them, whether it be cages or weapons or training and conditioning.

So let’s turn that concept to AI. Will AI conceivably one day develop the same advantage over us? Can a rogue AI become self-aware, to the exclusion of human control?

Dr. Wanda Curlee: At this point, no. We have artificial neural networks, but they don’t have self-awareness yet. The closest thing anybody might point to is Sophia, the humanoid robot; I don’t know if you’ve heard of her. She was actually on the Today Show, and she has even been made a citizen of Saudi Arabia, although I’m not sure what that means. But she’s not self-aware at this point; she’s just responding to what she has learned, though she is learning at tremendous speed.

So will it happen in the future? If we can build ethics into these systems, no, it won’t happen; humans will still dominate. And I’d like to step back for just a minute: Deloitte has come up with what it calls the Trustworthy AI framework. It’s drawn as a circle, and at the center of the circle is regulatory compliance, so it assumes that your AI is compliant with regulations; and right now we have very few regulations around AI.

But going around that circle of regulatory compliance, it says that:

  • It has to be fair and impartial. We’ve talked about that quite a bit today.
  • It needs to be robust and reliable.
  • It needs to protect your privacy. We don’t want our information being sold by a rogue AI, especially our healthcare data.
  • It needs to be safe and secure, so it isn’t vulnerable to being hacked.
  • It needs to be responsible and accountable. That applies not only to the AI, but to the people and the company who own it.
  • It needs to be transparent and explainable, so you can explain how the algorithm works and how it’s learning.

Those are all things that, if we put them or some version of them in place within our companies and within the healthcare system, will mean we won’t ever have to worry about a rogue AI system. I shouldn’t say never, never say never, but we would even be able to control a rogue AI system with other AI systems.
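
The dimensions listed above lend themselves to a simple pre-deployment checklist. Below is a minimal sketch in Python of a hypothetical review record built from the transcript’s wording of the framework; the field names and the pass/fail logic are illustrative assumptions, not Deloitte’s actual tooling.

```python
# Hypothetical pre-deployment checklist based on the dimensions named above.
# Field names and logic are illustrative assumptions only.
from dataclasses import dataclass, fields

@dataclass
class TrustworthyAIReview:
    regulatory_compliant: bool
    fair_and_impartial: bool
    robust_and_reliable: bool
    privacy_protected: bool
    safe_and_secure: bool
    responsible_and_accountable: bool
    transparent_and_explainable: bool

    def failures(self):
        return [f.name for f in fields(self) if not getattr(self, f.name)]

review = TrustworthyAIReview(
    regulatory_compliant=True,
    fair_and_impartial=True,
    robust_and_reliable=True,
    privacy_protected=True,
    safe_and_secure=True,
    responsible_and_accountable=True,
    transparent_and_explainable=False,  # e.g., the "black box" has not been explained
)

if review.failures():
    print("Hold deployment; unresolved dimensions:", ", ".join(review.failures()))
else:
    print("All dimensions reviewed; proceed with ongoing monitoring.")
```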

Dr. Linda Ashar: So can an AI system be educated to protect itself from illegal or unethical use?

Dr. Wanda Curlee: At this point, no. But in the future, as we learn more about AI and more about coding it, yes, I think so. Remember, though, where there’s good AI, there’s also bad AI. All you have to do is go out to the dark web and you will see AI systems that are trying to hack into other AI systems, and AI systems that are trying to take Bitcoin away from people.

There are bad AI systems that try to get into a company and paralyze it, and some of them have done quite well. We’ve heard of companies, and even parts of our own government here in the United States, that have been paralyzed by such systems. So we need to protect ourselves from the bad apples as much as we need to watch over the good apples, because we want to make sure the good apples stay ethical.

Dr. Linda Ashar: Do you have any other final closing thoughts on this topic before we wrap up?

Dr. Wanda Curlee: Oh, absolutely. AI, as I mentioned at the beginning, is inherently neither good nor bad. It’s the people coding it.

And as citizens of this world, whether you’re in industry or you just use it on your laptop, or through Alexa or Google, you need to educate yourself, you need to understand it. Do you need to know how to code it, ones and zeros? Heavens, no. I don’t know a one from a zero, but I’ve educated myself on what AI can do: the pros and cons of AI, what’s good about it, what’s bad about it, and what unethical people can do with it.

We all need to understand that so we can protect ourselves. And corporations, especially big corporations, and to some extent the government, need to understand AI. Unfortunately, the United States is not leading in AI; it’s actually China, Germany, and Finland that are leading in the AI industry.

I’m not saying we need to catch up, but we need to understand where we want to go as a country with AI and what regulations we want to put around it. The European Union has already put a lot of regulations around AI. Do we want to do that, or do we want to be a little freer with it? I don’t know; I’m not here to make that judgment call one way or the other, but we need to educate ourselves.

We need to make sure that we tell our lawmakers what we want to happen with AI and be proud of what we’re doing with AI. Silicon Valley is going a long way with AI. So let’s be proud, let’s be educated, and let’s understand what’s going on with AI at corporations.

CEOs, CXOs, CTOs, I challenge you to take on the responsibility of making sure that your AI is ethical, that you understand what’s going on in it, and that you turn it off if you don’t understand what’s going on in that black box.

Dr. Linda Ashar: Wanda, thank you very much. This has been a great discussion, and I have enjoyed every minute of it.

Dr. Wanda Curlee: Thank you.

Dr. Linda Ashar: For many people, it might be scary to think about AI systems surpassing human intelligence, and the ethical issues that come with AI use are complex. The key is to keep these issues in context, analyze the broader societal issues in play, and understand that it’s humans who are in control. And we need to educate ourselves about AI and how to use it.

Society also has to consider where laws are needed to regulate how AI should be used. On balance, AI has brought many improvements to our way of life, and we need to keep learning and stay informed, to be prepared to actively engage with innovations and make the best decisions for our future.

Today, we’ve been exploring ethical issues relating to artificial intelligence in the workplace with Dr. Wanda Curlee, an expert in AI applications. This is Linda Ashar, thanking you for listening to our podcast.

About the Speakers

Dr. Linda C. Ashar is a full-time Associate Professor in the School of Business at American Public University, teaching undergraduate and graduate courses in business, law, and ethics. She obtained her Juris Doctor from the University of Akron School of Law. Her law practice spans more than 30 years and includes employment law and litigation on behalf of employers and employees.

Dr. Wanda Curlee is a Program Director at American Public University. She has over 30 years of consulting and project management experience and has worked at several Fortune 500 companies. Dr. Curlee has a Doctor of Management in Organizational Leadership from the University of Phoenix, an MBA in Technology Management from the University of Phoenix, and an M.A. and a B.A. in Spanish Studies from the University of Kentucky. She has published numerous articles and several books on project management.
