
What Are Compliance Issues with Artificial Intelligence?

Podcast with Dr. Wanda Curlee, Program Director, School of Business and
Dr. William Oliver Hedgepeth, Faculty Member, Transportation and Logistics Management

Artificial intelligence, or AI, has developed rapidly in the last decade. Are laws and regulations keeping up? In this episode, Dr. Wanda Curlee talks to APU professor Dr. Oliver Hedgepeth about compliance issues, safety concerns, and why it’s so important to talk about the potential risks involved with using artificial intelligence systems.

Listen to the Episode:

Subscribe to Leading Forward
Apple Podcasts | Google Podcasts | Spotify

Read the Transcript:

Dr. Wanda Curlee: Welcome to the podcast. I’m your host, Wanda Curlee. Today, we are going to be chatting about compliance issues with artificial intelligence. My guest is Dr. Oliver Hedgepeth, who teaches Logistics, Supply Chain Management and Reverse Logistics courses at American Public University. He has many years of experience writing about, lecturing on, and talking about artificial intelligence as it relates to many industries. Oliver, welcome back, and thank you for joining me.

[Podcast: The Role of Artificial Intelligence During the Pandemic]

Dr. Oliver Hedgepeth: Wanda, thank you very much for setting up the session and talking about the compliance issues around artificial intelligence. They seem to be making more and more headline news today, and it’s a good topic to discuss with our students as well as the general public.

Dr. Wanda Curlee: Absolutely. Absolutely. You have been following the area of artificial intelligence, or machine learning, for many years. Before we get into that area, what made you interested in compliance issues as they relate to artificial intelligence?

Dr. Oliver Hedgepeth: The compliance issues really followed years of just being excited about using artificial intelligence, such as when we started using machine intelligence. It seems like an interesting technology to use to assist how we do our jobs. We see AI technology and machine technology moving boxes in warehouses. We see drone deliveries of packages that are using AI technology. And it sounds like that’s kind of cool to do, and we get excited. We got excited years ago. Go back to the ’50s and ’60s, when something called computers were first invented and started processing a lot of something called data. People realized one day that you could use these computers to predict patterns of what might happen next in a warehouse, in a human, or in a medical treatment.

And so it was like, oh, a computer can do within minutes what it took humans days or weeks or years to do: analyze data and come up with a pattern. So, it’s like, this is cool, let’s just do it. But now we’ve been doing it, and people are starting to raise the question of what AI is and how it is impacting their lives.

Dr. Wanda Curlee: Yep. That is very, very interesting because AI is impacting everybody’s lives, even if they don’t know it. Chatbots, for example, are AI, and people don’t realize that. And you either love them or hate them or fall somewhere in between. So, what are the dangers of artificial intelligence? It seems to me that it’s agnostic, so why would it be dangerous?

Dr. Oliver Hedgepeth: It’s an interesting term: danger. And it’s a good term to use right now, as we try to understand what we call compliance. I happen to know that there are medical facilities and doctors, at Duke Hospital and others, that are using AI to help predict what the surgeon should do next, instead of the surgeon and the nurses analyzing reams of data. The AI system is saying, “Oh, you need to cut here, you need to add this chemical to this person to save their life.”

AI is being used to help protect people in many ways; sepsis is one of them, and code blue is another. If you get a code blue in a hospital, you’re about to die. Code blue goes off and all the doctors and nurses try to save your life; you’re about to die in a few minutes.

AI is being used to predict that you might get a code blue 12 hours from now, and all of a sudden, you never get it. You don’t die of sepsis. They’re learning that AI can help you there. But the thing is, it’s being done without a human really involved 100% of the time, though they do have humans looking at it.
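To make that concrete, here is a minimal, hypothetical sketch of the kind of early-warning logic being described: a risk score computed from a few vital signs, with an alert raised when it crosses a threshold. The vitals, weights, and cutoff below are invented for illustration; real systems are trained on large clinical datasets and keep clinicians in the loop.

```python
# Hypothetical early-warning sketch: combine a few vital signs into a risk
# score and alert the care team hours before a predicted deterioration.
# The vitals, weights, and threshold are invented for illustration only.

def deterioration_risk(heart_rate: float, resp_rate: float, systolic_bp: float) -> float:
    """Toy linear risk score clipped to 0..1 (not a validated clinical model)."""
    score = (0.02 * max(heart_rate - 90, 0)
             + 0.05 * max(resp_rate - 20, 0)
             + 0.03 * max(100 - systolic_bp, 0))
    return min(score, 1.0)

ALERT_THRESHOLD = 0.5  # invented cutoff for raising an early warning

patient_vitals = {"heart_rate": 118, "resp_rate": 26, "systolic_bp": 92}
risk = deterioration_risk(**patient_vitals)

if risk >= ALERT_THRESHOLD:
    print(f"Risk {risk:.2f}: alert the care team, possible code blue within ~12 hours")
else:
    print(f"Risk {risk:.2f}: continue routine monitoring")
```

The point of the sketch is that the alert fires hours ahead of the event, which is exactly why keeping humans in the loop to review what the system recommends matters.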

There are uses of AI that might violate some rules and regulations or laws. We have rules and regulations. We have laws Congress has passed that say, if you drive a car, here’s what you need to do. And when you drive a car, you follow those laws. If you don’t follow them, you get pulled over by a state trooper and get a ticket, or you get killed if you drive in the wrong lane. There are laws and rules. I don’t really see a lot of laws and rules about AI, and I’m not sure whether we need them, but Congress is starting to ask questions. And so I’m starting to ask questions too.

Do we need to rethink how we use AI? It seems like there are risks involved, and maybe some unknowable risks that we just didn’t think about. And there are people using it today. The U.S. Chamber of Commerce is studying this right now. I think the University of Washington law school has lawyers now talking about AI. I’ve never talked to a lawyer or a Congressman about AI and its applications. I’ve talked to people like you and others who use it. And it’s like, “Oh, it’s cool, it helps me do my job better.”

But I think we need to go one step further. It’s like when the first automobile was built. The automobile was cool 80 years ago, but later we needed regulations on how to use it. Maybe it’s time to start thinking a little more about some of the risks that might be involved in using AI. Although it may not be a lot of risk, I think we need to think about it.

Dr. Wanda Curlee: Absolutely. Absolutely. I actually did talk to a lawyer one time about AI, and she indicated that the company, or the leadership in the company, would be held responsible if something happened with the AI. That was interesting, because you can’t hold software responsible for something; it’s programmed. So anyway, you were talking about risks. Risks can be devastating in any industry, every industry has risks, and many industries use AI. So, how do you see industries trying to minimize unknowable risks? Those are the ones that should spark fear in every company leader, because you don’t know what’s coming around the corner. Or is that even possible?

Dr. Oliver Hedgepeth: That’s a very good question. Look at industries like Amazon, with its warehousing and moving of boxes, and FedEx and Walmart. All these companies are starting to invest in AI. I would think they might want to start hiring you or me, college professors who are interested in AI, or the lawyer you talked to, to form a committee. Or hire someone and say, “Okay, here’s a new group. We’ll give them a new office over here and hire two or three people to oversee what’s going on.”

These major companies already have people who look at: How is our trucking? How is our transportation working? How are we putting boxes together? How are we treating people? Are we giving them time off?

Maybe it’s time for all these companies to hire a new group of thinkers who don’t really care about the box moving or what the truck driver is doing, but about the entire process of this technology being used along with the human, too. The human still does things, but this technology is helping them do things.

Maybe it’s time for these companies to start thinking about the risk, although the risk may not be obvious. It’s time to start thinking about it. So, I’m thinking maybe these people, not just the lawyers but regular people working at these companies, should start looking at possible problems, because if you look at the possible problems, you might be able to understand what this AI technology is really all about.

Dr. Wanda Curlee: So, many people are worried about AI as far as their jobs or safety. So what do you see as the issues for AI? Or are there any issues?

Dr. Oliver Hedgepeth: Oh, golly. Gee. That’s a great question. And I bet if you ask 10 people what the issues of AI are, you’ll get at least 10 if not 12 answers back. It depends on whether you’re an employee who just got hired and has to wear an electronic AI device to help you move a box, or you’re wearing a headset that may say, “Do this six seconds from now, or this eight seconds from now, or pay attention to this.” And it’s helping you do your job better.

So, the risk may depend on the age of the person, the experience of the person, and what they see as a problem. We are still humans in all our business; whether we’re making a cheeseburger for somebody or moving a truck full of goods, we’re still humans dealing with humans.

We have ethical issues. We have gender issues. We have racial issues, for goodness’ sake, and they’re all built on human thinking. And one aspect that is a risk I really think we need to worry about, and I’ve seen some of it, is that AI systems are built by us humans. They’re built by you. They’re built by me. Two of us might sit down and type the rules into the algorithms of the AI. It’s just a computer program; you type in the rules. Say, for example, you’re an employer and you want to hire somebody, and you type in some rules about asking questions: Do you have a college degree? Do you have experience? And you may also ask, without thinking about it, what race are you? Are you white? Are you Black? Are you Hispanic? You see it all the time, but maybe we shouldn’t ask that question anymore.

That may be bias that you’ve built into an AI system that interviews you, for example, for employment. So, there are human aspects that go into these AI systems that may need to be rethought. Someone may apply for a job, and AI systems right now are interviewing everybody. You apply to Amazon or Walmart. You go online. You’re not dealing with anybody; you’re dealing with an AI system that’s categorizing you.

If you apply for a job on Indeed, which is a great job market, say the job is: I want to be a writer. Okay, here’s what I do. And all of a sudden, you write down how many years you’ve been writing and it’s like, “Ooh, we don’t need this guy. He’s going to die, because he is too old.” So, there may be a bias built in.

So, some of these issues, I think, need to be thought through a little more carefully. That’s where I think we’re getting to, I’d say, in the next few years. And as college teachers, we need to start thinking about it with our students, because those students are already the workers dealing with this AI. We deal with military students (I teach military students and you do too) who are getting a bachelor’s degree or a master’s degree in supply chain management or logistics or management or economics or accounting. They want to apply for a job, but an AI system is going to interview them before they reach a human. So, there are a lot of these things we need to keep thinking about.
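As an illustration of what is being described here, the following is a minimal, hypothetical sketch of a rule-based screening program, not taken from any real hiring system. The applicant fields, rules, and cutoff are invented for the example.

```python
# Hypothetical sketch of a rule-based resume screener. The rules below are
# invented for illustration; "years_experience > 25 -> reject" looks like a
# neutral business rule, but in practice it acts as a proxy for age and
# quietly encodes the kind of bias discussed above.

from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    has_degree: bool
    years_experience: int

def screen(applicant: Applicant) -> str:
    """Apply the human-written rules exactly as they were typed in."""
    if not applicant.has_degree:
        return "reject: no degree"
    if applicant.years_experience > 25:  # proxy for age: a biased rule
        return "reject: 'overqualified'"
    return "advance to interview"

if __name__ == "__main__":
    for a in (Applicant("Applicant A", True, 30), Applicant("Applicant B", True, 5)):
        print(a.name, "->", screen(a))
```

The bias here lives in one ordinary-looking line a human typed in, which is why the kind of review suggested above has to examine the rules themselves, not just the hiring outcomes.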

Dr. Wanda Curlee: So, Oliver, you’ve talked a lot about bias within AI, and I totally agree. Many people don’t understand that AI is a software program and it has to be coded. But there are also safety issues with AI. We’ve seen where, I think it was Uber, had an autonomous vehicle that actually killed a pedestrian, not on purpose, of course, but it was the decision making within the AI. What do you see as some safety issues within AI?

Dr. Oliver Hedgepeth: Yeah, there are safety issues. The other day, I heard on the news that the police pulled over a car driven by an AI system for some traffic violation. The police pulled up, and the video shows the policeman walking up to the car; then the car takes off, goes another block, and pulls over again. So the police pull up behind it again, the officer looks into the window, and there’s nobody there.

The AI system decided, “I stopped in the wrong spot,” so it pulled over, went a little further, and pulled over again in a safe zone. But you’re right, there are AI autonomous systems that have killed people, and that’s a very important point to think about. I don’t think we are there yet for 100% autonomous vehicles.

But the key thing that I think is really an issue today is what the Navy is doing. The Navy wants to build 143 autonomous artificial intelligence ships that can fire missiles, with no Navy humans on board, by 2045. And they’re building 21 right now to be deployed by 2025. Here it is, 2022. So in three years, there are going to be 21 Navy ships or submarines floating around out there with no humans on board. At least that’s what they’re thinking. There are several congressmen who are questioning: Are we sure we’ve looked at these risks? What if that ship sends a missile to the wrong place? You have to have something there. Plus, do you take all the humans out of the ship? Do you take all the humans out of a truck, for example?

We see trucks now being driven autonomously by an AI system, but the truck driver is sitting there with his or her hands hovering over the steering wheel, like, “I’m ready to grab this thing if it runs off the road.” So, they’re still there; they’re still not sure about it.

We’re getting closer to having totally autonomous systems. But in all my years of studying AI, which go back to 1985, I don’t think we’re there yet to let these systems be 100% AI. The Navy’s pushing this, and I encourage people to look into what’s going on in the Navy, because billions of dollars will be invested. I want to make sure that the AI experts who work for the Navy, or are hired by the Navy, know what they’re talking about and are not just throwing around a bunch of buzzwords or platitudes.

Make sure it’s not some vague plan. Whatever we’re doing, we have to be careful about this human element and the risk of possible danger. How could an autonomous truck or an autonomous ship kill someone accidentally? That’s important to keep thinking about.

Dr. Wanda Curlee: So, Oliver, you brought up some of the dangers. Is artificial intelligence dangerous? Do we really want it out there if it’s that dangerous?

Dr. Oliver Hedgepeth: All technology has danger to it. It’s how it’s being applied or used, and making sure the technology can do what you really want it to do. We are using AI for anti-money laundering, for example. That’s a good thing; there are people out there on TV every day trying to scam people into investing, cheating them, taking their money. Inventory optimization: How do you optimize all the boxes of goods at large companies like Amazon, Walmart, FedEx, or UPS? How do you optimize all the shipments of goods and boxes and where they go? AI is optimizing it.

Energy management: I didn’t realize that our electricity is being managed not by humans but by thinking machines, AI. And predictive maintenance: We have Air Force jets flying all around and Army helicopters flying all around, and they need maintenance. We have a lot of great women and men who know how to maintain a jet, a helicopter, and other weapon systems.

Well, I just realized that predictive maintenance is using AI as well to say, “This bolt’s going to go bad in about two weeks. Let’s pull it off now,” versus waiting for the bolt to go bad and, all of a sudden, the helicopter crashes and kills somebody.

So, there are a lot of complex things going on. I think there is a tolerance for these risks that we have to address. We have to understand the tolerance for the risk of driving a pickup truck, or of using AI to assist us in doing a job. It doesn’t sound like we’re new to AI, but it feels like, after all these 30-some years, we’re still new to it in terms of those risks. People haven’t really talked about the risks, or the regulations and laws that might govern them, as much as we are now. So, I think it’s something we need to think about.
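A minimal sketch, with invented part names, hours, and thresholds, of the kind of predictive-maintenance logic described above: the system estimates how much useful life a part has left and flags it for replacement before the predicted failure, rather than waiting for it to break in flight.

```python
# Hypothetical predictive-maintenance check: estimate a part's remaining
# useful life (RUL) and flag it for replacement before predicted failure.
# Part names, hours, and the wear model are invented for illustration.

MAINTENANCE_WINDOW_HOURS = 14 * 24  # pull anything predicted to fail within ~2 weeks

def remaining_useful_life(rated_life_hours: float,
                          hours_in_service: float,
                          wear_factor: float) -> float:
    """Very simple linear wear model: rated life scaled by the observed wear rate."""
    return max(rated_life_hours / wear_factor - hours_in_service, 0.0)

fleet_parts = [
    {"part": "rotor bolt A7", "rated_life_hours": 5000, "hours_in_service": 4700, "wear_factor": 1.2},
    {"part": "hydraulic pump", "rated_life_hours": 8000, "hours_in_service": 2000, "wear_factor": 1.0},
]

for p in fleet_parts:
    rul = remaining_useful_life(p["rated_life_hours"], p["hours_in_service"], p["wear_factor"])
    action = "pull and replace now" if rul < MAINTENANCE_WINDOW_HOURS else "keep in service"
    print(f"{p['part']}: ~{rul:.0f} hours of estimated life remaining -> {action}")
```

The design point is simply that the threshold is applied to a prediction rather than to an observed failure, which is where the risk-tolerance questions raised above come in.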

Dr. Wanda Curlee: Okay. So there is a professor, Professor Stuart Russell. He’s a computer scientist who has led research on artificial intelligence, and he fears for humanity. In fact, he has actually said artificial intelligence is as dangerous as nuclear weapons. Do you care to comment on that?

Dr. Oliver Hedgepeth: Yes, I’ve read his work as well. Artificial intelligence as dangerous as nuclear weapons. Wow. What a statement. A headline in a newspaper. It would scare everybody to death.

Dr. Wanda Curlee: Absolutely.

Dr. Oliver Hedgepeth: Yeah. I don’t want to walk around with a nuclear weapon in my back pocket. I don’t mind walking around with an AI system in my back pocket. In fact, I have one; it’s my cell phone. It’s a nice AI system. It’s smarter than I am. If I try to type the wrong word, it will correct me. So, I don’t think it’s really as dangerous as a nuclear weapon, but I think the headline is good to use, and I like that he’s still using it. And I will use it, because it brings the attention to people. And then they’ll analyze: What’s wrong with the nuclear weapon?

Well, if someone pushes the button, that’s what’s wrong with it: it kills a lot of people. Will AI kill somebody? Yes. An autonomous vehicle that goes off track somewhere can kill somebody.

But hopefully it won’t kill 100 million people or wipe out a whole city. It might turn all the lights off if it’s managing the energy: all of a sudden on a Friday night, oops, the electricity goes out because the AI system decided it’s time to clean house or something. I don’t think it’s that dangerous, but it’s a good metaphor to start thinking with.

He’s bringing the definition of AI into the conversation. A single definition of artificial intelligence does not exist; there are many definitions of AI. In fact, if you look at how the definitions have changed over the last 15 years, you see that AI is now both a consumer (AI consumes things) and an employee. Think about it. It’s consuming something. It’s consuming data about you, about your purchasing habits, about your medical conditions. But it’s also an employee. It’s going to go out there and buy something for you. Or it might be in the operating room, helping to put something in your body to keep you alive while the doctors and nurses are doing something else.

So, I see that the definition of AI is changing when you use that phrase, AI is as bad as a nuclear weapon, or similar to a nuclear weapon. Leave that as a headline to get the conversation started. But I think we really need to think through what AI is to us today, as we lean into the future, say the next five years. Especially since the Navy is going to have autonomous warships out there. I don’t know how many, but they’re doing it.

And by the way, as a sideline, a footnote: China is already developing autonomous warships with small missiles on them, and that’s very important. They’re protecting their shoreline already. I don’t know how many they have; that’s probably some top-secret thing in the Pentagon, but they’re stating in the newspapers that they’ve already got them and they’re building more.

And other countries could do the same thing, because the technology is there. Look at what’s happening right now in the war between Russia and Ukraine. A lot of things are happening with drones: a drone comes in and drops a missile while being operated from a long distance. All of this is related to AI technology.

The U.S. Chamber of Commerce is trying to think through this. And I think they’re going to issue a report sometime later this year about some of these risks. So, somebody’s thinking about it.

Dr. Wanda Curlee: So, you’ve talked a little bit about how AI is transforming the world, but do companies and the government, including the military, have an obligation to make sure that the AI is ethical? And whose ethics do we use?

Dr. Oliver Hedgepeth: Oh, golly gee, that is a question. Ethics is a very important component of any technology application, because the ethics could be violated accidentally. As I mentioned earlier, you could have an AI system, which is an algorithm, which is a set of rules that you, as a human, put into that system or that it may have developed on its own. But it could have developed something that has a bias built in without your knowledge.

It may have some built-in biases that companies really need to worry about, and I think worry is the key word. They need to rethink how they develop such systems to move a box, or to hire you, or to fire you. So, I really think that companies need to think this through. And it goes back to my point earlier that I really believe these companies need to hire more people, or train some of their employees, to look at this technology. You don’t need a master’s degree or a Ph.D. in computer technology to do this, and the reason I say that is because of the lawyers who are getting involved now, looking at the rules and regulations.

So, we need to rethink these things, and I think companies are starting to do it. They are doing it. It’s important to see companies doing this and doing more of it. Going back to the earlier point about the risks and the dangers that are there, we just want to make sure we understand those dangers and those risks and try to avoid them before they happen.

Dr. Wanda Curlee: Well, we’ve come full circle here. Oliver, thank you very much. This has been an exciting topic on AI and compliance issues, because while I still believe AI is in the toddler stage, it’s growing by leaps and bounds, just as a toddler learns. Do you have any last words you would like to leave for our listeners?

Dr. Oliver Hedgepeth: Well, as AI spreads, we do need to think about whether it should be regulated. I do think that. And we need to think about whether the current rules that govern the use of AI to predict human behavior need to be changed. We need to talk to the public; if the public’s not happy with something in AI, we need to ask them. American Public University, our university, did a study two years ago of about 100 students, mostly military people in their 30s, 40s, and 50s, about how they like AI. Overall they said, “We like AI; it sounds okay to us.” But we need to keep asking those questions. And we need to keep looking at, maybe discovering, what the new definition of AI and machine learning is in legal terms. Maybe not what AI is doing to help us, but a new legal definition of AI that we need to start thinking about. That’s my final word.

[Related: Can Teacherbots Be Programmed with Ethical Behavior?]

Dr. Wanda Curlee: Makes sense to me and thank you to our listeners for joining us. We have some exciting podcasts coming in the area of artificial intelligence. Stay tuned and stay well.

Dr. Wanda Curlee is the Department Chair of the Business Administration program. She has over 30 years of consulting and project management experience and has worked at several Fortune 500 companies. Dr. Curlee has a Doctor of Management in Organizational Leadership from the University of Phoenix, an MBA in Technology Management from the University of Phoenix, and an M.A. and a B.A. in Spanish Studies from the University of Kentucky. She has published numerous articles and several books on project management.
