
Should Humans Limit Advancements of Artificial Intelligence?

Podcast featuring Dr. Gary L. Deel, Ph.D., J.D., Faculty Director, School of Business, and
Dr. William Oliver Hedgepeth, Faculty Member, Transportation and Logistics Management

All artificial intelligence systems have bias because human beings programmed them. In this episode, Dr. Gary Deel talks to APU professor Dr. Oliver Hedgepeth about his 40+ years of experience working with AI systems and smart computers. Learn about the impressive advancements of this technology and its many benefits for humans, as well as its risks and threats. Also learn about the limitations of intelligence and why aiming for superhuman computer systems may not be in the best interest of humans.

Listen to the Episode:

Subscribe to Intellectible
Apple Podcasts | Spotify | Google Podcasts

Read the Transcript:

Dr. Gary Deel: Welcome to the podcast Intellectible. I’m your host, Dr. Gary Deel. Today, we’re talking about the impact of artificial intelligence as it relates to fear and trust. My guest today is Dr. Oliver Hedgepeth, who is a professor of Logistics, Supply Chain Management and Reverse Logistics courses at American Public University. Oliver has many years of experience writing about, lecturing, and talking about improvements to education and work due to crises, disasters, and technology innovations. Oliver, welcome to Intellectible, and thank you for being our guest today.

Dr. Oliver Hedgepeth: Gary. Well, thank you very much. Glad to be here and talking about this topic.

Dr. Gary Deel: Absolutely, it’s a pleasure to have you. So without attempting to start off too broadly, artificial intelligence has seen a lot of elevation in the public spotlight, a lot of evolution over the last few decades, and a lot of attention, concerns, and excitement about the future opportunities that it holds, but also the future risks. So I guess to set the stage for our listeners who may not be all too familiar with the topic, can you define for us what AI or artificial intelligence means in 2021?

Dr. Oliver Hedgepeth: Yes. Thank you very much. And to start with a definition of artificial intelligence, or AI, is really appropriate. The reason is I first ran into this back in the ’60s when I worked for the government, and we were developing smart computer systems, AI systems, and then later in the ’80s, they got more excited in doing that.

There isn’t one definition of artificial intelligence; there are many. That’s the problem with AI today, or one of the problems. And it’s more of a social definition for a lot of people, because everybody has their own different view. There are dictionaries I’ve got that were published back in the 1960s that define artificial intelligence as machine intelligence. You’ve got statements about being able to take data, make a decision, and come up with an answer.

Artificial intelligence is really that simple. It’s where we collect a lot of data, maybe millions of pieces of data, and we jam it into a computer system, which we know what computers are, kind of, and the AI system can outthink us.

Whereas we may give you a book to read, that AI system can read that book in a matter of seconds and identify, “Here are the key parts of that book.” It would take you maybe several days. So an AI system, the definition really is a smart computer, and “smart” is really a wrong word, but as humans, me and you, Gary, we talk to each other. We say who’s smart, who’s not smart, or talk about people who are dumb or slow, maybe they’re slow in deciding things.

But AI is really the ability to process tremendous amounts of data about decisions that are made by humans, or decisions that just need to be made, such as to stop or go. Does the ship need to stop or go? Does the airplane need to stop or go?

AI software is used in automobiles today to drive them by themselves, and it looks around at all the data that’s around them, and it makes a decision to stop or go, or do something else. So I’m glad you asked, because the definition is not one definition. If you look it up on the internet, you’ll come up with about 20 different definitions, specific words. It’s an interesting concept, and I would say just we’ll call it smart machines right now, for lack of a better word. Okay, Gary?

Dr. Gary Deel: Absolutely, and just historically speaking, I want to reference one of the pioneers in thinking about artificial intelligence, which was of course the late Alan Turing. For those who don’t know, Turing was instrumental in cracking the Enigma code of the Nazi military, and is widely believed to have shortened World War II by several years and probably saved millions of lives. As a British mathematician and one of the first computer scientists, Turing built one of the very first computers. And that computer was able to crack the Enigma encryption code, which allowed us to understand what the Nazis were doing and where they were moving their deployed troops and equipment.

And so, Turing famously once said that, “The test for a truly intelligent artificial being or composition of materials would be that if you could have a conversation or input/output with a machine and not be able to tell the difference whether that machine was a machine or human.”

So in other words, if you were text messaging or instant messaging with another party and you couldn’t distinguish whether that party was actually a computer or another human being, if the responses were so intuitive and sophisticated as to make it impossible, from a human standpoint, to determine whether you’re talking to a machine or another person, that would be the threshold. When we could cross that threshold to where it’s indistinguishable, that’s the measure of intelligence. Is that still the metric by which we measure artificial intelligence in the 21st century, or have we moved the bar forward?

Dr. Oliver Hedgepeth: You make me smile. Yes, the Turing Test has been challenged, you might say for the last 20 some years. It depends on who you are. It depends on who you are. If you are a senior citizen in a senior citizen home, and you’re in your 80s and you don’t move around too much, and I speak with authority for that, because my mother was one. And you have a little robot that sits there and holds your hand, and listens to you, and you ask it a question like, “Oh, is that going to be cold today, or hot?” And it says, “It’s going to be cold today.” “Wow,” they said. “Should I go get a cup of coffee?” And they might say, “Well, did you have a cup earlier?” “Yes.” “Well, maybe you can have one more.” Well, you think you’re talking to a really smart person, a machine that seems like it’s a complete human being.

The Turing Test is most interesting. There are other software systems that appear to be like that, like a human. What I really liked was one of the earlier computer chess programs that beat the masters, the human masters. When a chess master goes against an AI system playing chess and it beats you, really, it’s like, “My gosh, that little AI system, that little robot system is smarter than me, and I’m the Grandmaster of Chess on the entire freaking planet?”

Well, one can say the Turing Test might be passed at that point. Now, if you ask that little AI robot system that can beat anybody at chess, “How do we make coffee?” It won’t know what you’re talking about. So it depends on what you ask and where you’re asking it, and the specific nature of your questions. The Turing Test works here on the right-hand side, but then on the left-hand side, it may not work.

Dr. Gary Deel: I think what we’re talking about is the difference between specific intelligence and general intelligence. So to your point earlier, we now have technology, and software programs, and computers that can far outperform humans in specific tasks. You mentioned chess; I would use an even simpler example like a calculator. A calculator today will always be better at simple calculations than the most agile mathematicians that humanity can produce. And now that we have computer chess programs that are better than humans, they will always be better than humans. We hold no hope of ever being able to cross that barrier again in terms of prevailing over our digital counterparts.

But when we talk about general intelligence, exactly to your point, when you try to adapt that same artificial intelligence to anything else, it’s limited. And I think about in my home, I have the Google smart home technology that allows me to tell Google to turn on lights and to lock doors, but it’s limited. I can’t ask Google literally anything, and there are certain points at which Google has to say, “I’m sorry, I didn’t understand your question. I don’t know what you’re asking me. This is the best I can do.”

And so, we haven’t quite reached that sort of ubiquitously general intelligence one would imagine in a conversation with another human being, where you could ask anything and, at the very least, you might get an, “I don’t know, but at least I understood your question.” Whereas the best artificial intelligence we can put to the task today struggles in what engineers would call the corner cases, those difficult scenarios it hasn’t yet been programmed or adapted to learn responses to. How does bias affect the way that artificial intelligence sees its role and its ability to perform in these types of environments?

Dr. Oliver Hedgepeth: Well, before I talk about bias, let me get back to the Turing Test for just a minute. Do we need the Turing Test today? You were talking, I was listening to you, I was very excited, the word you used, and I just wonder if the Turing Test has outlived what it was supposed to be. And I remember a man sat on one side of a table, and there was a wall and questions being asked, and answers were coming, and didn’t know whether there was a human on the other side of that wall or a computer. And when it got to the point where, “Oh, it’s a computer answering all these questions,” well it’s passed the Turing Test, so AI systems are pretty smart.

Given what you’ve talked about, there are specific applications for all of these AI systems, and I just wonder if the Turing Test is just something we should just historically smile at and say, “Yeah, that’s it.” Or do we really need to worry about, or not worry about it, but think about, will there be some robot, as we’ve seen in the movies, and there’s some news reels now, where the robot is sitting next to you and it looks like a human and talks like a human, and seems to carry on a conversation. But we still know it’s not really real, but you think maybe the Turing Test needs to be redefined there, Gary?

Dr. Gary Deel: Well, I think it’s an interesting question. Because intelligence is really open for interpretation in terms of how we define that. In fact, even in the psychological world, we tend to split our definitions when we talk about things like standardized intelligence, that is to say, like an IQ or an intelligence quotient, with other types of intelligence, such as emotional intelligence, the ability to intuitively understand and empathize, and adapt one’s approach to communicating with other people for the purposes of accomplishing goals or cohabitating peacefully, et cetera.

So it’s a really open and abstract concept when we talk about intelligence, but I do want to caveat something I mentioned earlier, because I was describing the Google home software, which to be fair, I think is fairly impressive, given how far we’ve come in so little time. I mean, to go back just two decades and to describe to someone that you could sit in your home and just bark commands at a computer to lock your doors, and turn on your oven, and turn off and on lights and fans, and all kinds of appliances, would seem like something out of a science fiction movie, just going back to say, the 1990s. But here we are and yet, I don’t think Google Home is a representation of the limit of our technological capability today.

I’m thinking about, for example, IBM’s Watson supercomputer, which seems to be, at least in my understanding, more approaching the summit of what we’ve been able to accomplish with AI. Watson, as far as I know, is the undisputed Jeopardy! champion of the world at this point. When we think about general intelligence, I think that’s probably more closely approximating general intelligence because of course, in a game of Jeopardy! you could conceivably be asked about anything: science, math, history, art. And Watson seems to have the intuition built into its coding to be able to pick those answers out of the internet and whatever other databases it’s using, but I’ve never had a conversation with Watson, so I don’t know how comfortably it passes the Turing Test.

Dr. Oliver Hedgepeth: That’s good. Now, you did ask another question, and it’s about bias, and bias is the most interesting concept that’s really surfaced in the last year or two. Around 2019, I started running into some literature, and now I’ve got about 300 pieces of literature published in the last two years about identifying and managing bias in artificial intelligence systems. And as you read all the literature and sit back and think about what’s going on with AI systems, well, there are two things.

The AI systems are biased. I’ll say this right now. Every AI system you can dream of, think of, show me, is biased. Now, I’m not being negative when I say biased. There are positive biases and negative biases. But the logic, the algorithms and the data that your AI system has been trained on, your Google security system, or the AI in a vacuum cleaner, or some other AI system that’s managing your database at Walmart, whatever, or an AI system for police management of traffic. It is biased, because the logic and the rules were written by a human, like you and me.

And we’ve found over the last few years that AI systems aren’t developed the same way all over the world. Humans are designing the rules that say, “If A, then B. If B, then C. If the color green happens, then stop. If the color yellow happens, then turn right.” Whatever it is.

But it’s designed by some human who may be in India with a different set of logic from someone in Montana, who may have a different way of looking at things from somewhere in South Carolina. And if you ask somebody to develop the rules of solving a problem that would later turn into an AI system, a robot system, for example, such as a teacher, let’s say that.

What if we design a little robot system or AI system to teach a nice little course, a little low-level introductory course on English? “Here’s what a verb is. Here’s what a noun is. Here’s what a participle is. Here’s how you use it in a sentence.” It’s developed by a human who has been raised on a different set of rules, maybe a different set of logic.

Now, I’ll give you a good example. I was the director of the Army’s Artificial Intelligence Center for Logistics back in, gosh, I guess ’85 to ’90 at Fort Lee, Virginia. And I had a team of about 30 workers, you might say computer programmers, and I had a lot of money and a lot of computers. General Thurman, Vice Chief of Staff of the Army at the time, set up these seven AI centers, that was one of them, and we hired all these people to program logic to do maintenance on military equipment.

We took 17 feet of rule books on how to maintain an army tank, 17 feet of rule books, and we put all those rules in an AI system. It took, oh my gosh, 20 some people six months to write all these rules in computer code. They’re very simple. “If this happens, turn here. If this happens, turn there. Use this part, use that part. If you smell this, here’s the problem.”

And so, this system was our smart piece of software. It was AI. Well, we had a problem. After a while, we would test it out, and we took it out on the road with the Army experts and mechanics, and the Army was saying, “Here’s what’s wrong with the motor.” However, the AI system was missing some of these things.

We found out that one of my employees, who was Indian, somehow in her use of the English language, when she saw the word “and,” like, “Do this and do that,” she would write the computer code as “Do this.” “And do that.” The “and” was not connected to the first sentence or the second sentence. They weren’t connected. She thought “and” meant to put a period there in the sentence. It was just the strangest thing in the world, and we had to go back and reprogram, because the logic was bad. But that’s just one extreme example.
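To make the stakes of that kind of encoding mistake concrete, here is a minimal, hypothetical sketch in Python. The rule and symptom names are invented, and this is not the Army system’s actual code; it only shows how severing an “and” turns one two-step maintenance rule into a rule that silently drops its second step.

    # Hypothetical sketch (not the actual Army code): one maintenance rule whose
    # action is "drain the coolant AND replace the thermostat." If the "and" is
    # treated as a sentence break, the second action is severed from the rule,
    # and the system silently skips a repair step.

    def repair_steps_correct(symptom):
        # "and" kept as a conjunction: both actions stay attached to the rule.
        rules = {
            "engine_overheats": ["drain the coolant", "replace the thermostat"],
        }
        return rules.get(symptom, [])

    def repair_steps_miscoded(symptom):
        # "and" encoded as a period: only the first action stayed attached.
        rules = {
            "engine_overheats": ["drain the coolant"],
        }
        return rules.get(symptom, [])

    print(repair_steps_correct("engine_overheats"))   # both repair steps
    print(repair_steps_miscoded("engine_overheats"))  # second step missing

Both versions look like reasonable code in isolation; the error only shows up when the recommendations are compared against what the human mechanics expect, which is exactly how the problem surfaced in the field.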

But the humans who develop the code or decide, “Here’s the kind of data I want to put in this AI system,” might be 23 years old or might be 76 years old, and the AI system you develop might be, say, facial recognition. And we’ve seen facial recognition software fail over and over again, because maybe you, as the data input person and the coder of the algorithm, designed it to look at a photograph and see the various parts of a face, maybe a white man’s face. Think about that. A white man’s face.

And then you’re checking all these faces for IDs to make sure this person’s not a criminal or something going through an airport, police screening, and all of a sudden, it gets to someone who’s got a brown face, or a yellow face, or a black face, or whatever color face, a different color face because you trained it on 300 white faces. This really happened, and it just couldn’t work with other faces.

And so, we found that the data input was a problem. It was biased. We’re calling it biased now, and that is one of the many problems that we’re trying to face today. And I’m working with the National Institute of Standards and Technology, NIST, and this year, they are really looking at how we figure out all this bias and fix it as best we can. And they know we’re not going to fix that bias 100%, but we just need to get these systems to be a little more trustworthy.
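One simple way to surface the kind of training-data bias described above is to report a system’s accuracy separately for each group of faces rather than as a single overall number. The sketch below is a minimal, hypothetical illustration; the group labels and counts are invented, and this is not NIST’s methodology. The point is that an aggregate accuracy can look acceptable while an under-represented group is recognized far less reliably.

    # Hypothetical per-group audit of a face-recognition test set. The numbers
    # are invented; the point is that overall accuracy can hide a disparity.
    from collections import defaultdict

    # (group, correctly_recognized) pairs: 300 faces from group_a, 30 from group_b
    results = ([("group_a", True)] * 290 + [("group_a", False)] * 10
               + [("group_b", True)] * 18 + [("group_b", False)] * 12)

    totals, correct = defaultdict(int), defaultdict(int)
    for group, ok in results:
        totals[group] += 1
        correct[group] += ok

    overall = sum(correct.values()) / len(results)
    print(f"overall accuracy: {overall:.1%}")  # about 93%, looks fine in aggregate
    for group in sorted(totals):
        print(f"{group}: {correct[group] / totals[group]:.1%} on {totals[group]} faces")
    # group_a: 96.7% on 300 faces; group_b: 60.0% on 30 faces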

Dr. Gary Deel: I think that’s a great segue into one more piece that I wanted to talk about, and that is on the subject of trust and specific risks when we talk about AI. We’re speaking today with Dr. Oliver Hedgepeth about the impact of artificial intelligence as it relates to fear and trust.

There’s been some debate historically about whether or not humans are even capable of developing superhuman artificial intelligence, because some have argued traditionally that we’re limited by our own intelligence.

That is to say, we can’t create a technological mechanism or machine that is more intelligent than us, because that’s the limit of what we can do. But I think that’s pretty patently untrue when we look at specific intelligence, for example, and we’ve already discussed in our podcast how we’ve developed superhuman intelligence when it comes to specific tasks like mathematics, like chess, and a whole host of things that we use computers for today that far outshine what even the very best the human race has to offer can do.

And so, I want to reference what comes out of that if you accept that, given we have not already, we will probably someday develop a superhuman artificial intelligence, and what that means for us. Dr. Sam Harris, who is a neuroscientist and an author, and a public speaker, has done a TED Talk on this that is very interesting, and I would recommend our listeners take a listen to it. It’s on YouTube, if you haven’t seen it already, but he talks pretty eloquently about the risks here, and I’m curious to know your thoughts.

What Dr. Harris talks about, and one of the examples he uses, is to describe the importance of specifically outlining what it is that we’re asking such an intelligence to do. If we can imagine, and I’m crediting him with this example, because it’s his analogy, but if we could imagine that we created a superhuman intelligence tomorrow and we wanted to assign it a task, something to accomplish for us, he uses this rather comical example of spam email, right? Nobody likes spam email. We don’t want spam email. How do we get rid of spam email?

And so, he says, “Imagine a scenario where we ask our superhuman intelligence that we’ve just created, and given the power to do what it needs to do to solve problems for us.” We say, “Okay, your first mission is to stop spam email. We don’t want spam email, just figure it out, get rid of it. We don’t want it anymore.”

And so, we can imagine a situation where without certain parameters, a machine could look at this problem and say, “Okay, how do I go about stopping spam email?” Well, what causes spam email? Humans create spam email. So if I get rid of all the humans, we don’t have any more spam email problem.

Now, it’s obviously a funny scenario, but there’s a dark humor to it, because we can imagine a machine misunderstanding that obviously, our intention is not for it to wipe out the human race to accomplish this objective. I mean, that should go without saying, but the point is, for a machine, that may not be obvious, right? The understanding that, of course, that is a parameter, a line that should not be crossed, may not be intuitive, right? It may not be implied or understood on the other end.
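Dr. Harris’s spam example is essentially about leaving a constraint out of an objective. As a minimal, hypothetical sketch (the candidate plans and scores below are invented for illustration), an optimizer that ranks plans only by how much spam they eliminate will pick the catastrophic plan, while the same optimizer with an explicit guardrail will not.

    # Hypothetical sketch of the "stop spam email" thought experiment. An
    # optimizer scoring candidate plans only on spam removed picks a
    # catastrophic plan; adding an explicit constraint rules it out.
    candidate_plans = [
        {"name": "improve spam filters",      "spam_removed": 0.95, "harms_humans": False},
        {"name": "block all unknown senders", "spam_removed": 0.99, "harms_humans": False},
        {"name": "eliminate all humans",      "spam_removed": 1.00, "harms_humans": True},
    ]

    def pick_plan_naive(plans):
        # Objective only: maximize spam removed, no guardrails.
        return max(plans, key=lambda p: p["spam_removed"])

    def pick_plan_constrained(plans):
        # Same objective, but unacceptable plans are excluded up front.
        return max((p for p in plans if not p["harms_humans"]),
                   key=lambda p: p["spam_removed"])

    print(pick_plan_naive(candidate_plans)["name"])        # eliminate all humans
    print(pick_plan_constrained(candidate_plans)["name"])  # block all unknown senders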

And so, it is inherently critical that when we do this, we do it correctly, and that we create the conditions that are necessary for that superhuman intelligence to operate safely in a way that it doesn’t blow up in our faces. What are your thoughts on that in terms of the risks moving forward?

Dr. Oliver Hedgepeth: Oh boy, oh boy. I’m glad you mentioned that and that video, and all the folks who think about superhuman computers. I’m going on record as not supporting superhuman computers. I define a superhuman computer as just a computer that’s got more stuff in it, more data in it to analyze more problems, because that’s what a superhuman computer is doing. It’s solving a problem.

Now, the problem might be how to make a cake, or how to cross the street, or how to save your life. It could be a computer that’s going to do surgery on you and split you wide open with a doctor looking on, as it replaces your lung. We do have AI robotic systems that are doing surgery alongside human surgeons.

But I object, I disagree with you on the superhuman computer, because I’m not certain that society wants a superhuman computer. It sounds good, and we can go back to “I, Robot,” the robot days, early science fiction movies, where you see Robby the Robot. It seemed to have total intelligence as a superhuman computer, but I think we’ll have supercomputers. I’m not sure I’d call it superhuman. I’m not sure I would see a robot. As I define superhuman, I would see a superhuman to be one who would tell me how to make a lemon cake, how to change the oil on my Jeep, how to plant a tree in the backyard in the wintertime or the summertime, and how much we should feed the birds tomorrow since it’s 30 below zero, how do we feed them?

A superhuman computer would do all the things that you and I would do, looking at all those scenarios. Will we have a computer that really will be able to do all the different things that you do during the day? Everything that you had to make a decision about since you woke up this morning. Everyone you talked to, all the phone calls you made. All the typing you did on the keyboard. All the thinking you’re doing about, “What do I have for lunch?”, and how to make that sandwich.

I’m not certain I agree that the superhuman computer will ever be here. Well, not in my lifetime, not in your lifetime. If there is one, I don’t know if we really want it, because it has to deal with humans.

As you mentioned, we don’t want to get rid of the humans, and as I see what’s happening in the world, and you mentioned the word “spam,” that’s really good, and look at the political landscape that’s around us here in 2021. There are humans who lie and they lie very effectively, and they sound really like they’re telling the truth. And there seem to be millions of people who might like a lie, listen to a lie. How does a superhuman computer distinguish between me telling them, “This is something,” and it’s a lie, versus “That is something,” and then it’s the truth?

So I think as long as humans are human, the way we are, the way we seem to be defined, it’s going to be very difficult to program that computer. Now, there’s a question. Do you want to program your superhuman computer, as you call it, to lie? Do you want it to lie? Can a computer lie? Is there a reason to lie? Would a lie help save someone’s life? It could.

My summary is really, I don’t think there ever will be a superhuman computer. I think the superhuman computers will be there for solving certain problems, a group of problems, but they won’t solve all the problems. So I think the superhuman computer is still going to be in little isolated areas.

Dr. Gary Deel: That’s perfect. Well, there’s obviously a lot more to this conversation, and I hope we can have you back again to pick it up with part two. But for the meantime, I want to thank you for sharing your expertise and perspectives on these topics, and thanks for joining me today for this episode of Intellectible.

Dr. Oliver Hedgepeth: Well, thank you Gary, for asking really, really hard questions. I really appreciate it, and I’m looking forward to a follow-on in the future.

Dr. Gary Deel: No, it’s been a great discussion, thank you. And thank you to our listeners for joining us. You can learn more about these topics by visiting the various American Public University blogs. Be well, and stay safe, everyone.

Gary Deel

Dr. Gary Deel is a faculty member with the Dr. Wallace E. Boston School of Business. He holds an M.S. in Space Studies, an M.A. in Psychology, an M.Ed. in Higher Education Leadership, an M.A. in Criminal Justice, a J.D. in Law, and a Ph.D. in Hospitality/Business Management. Gary teaches classes in various subjects for the University, the University of Central Florida, the University of Florida, Colorado State University, and others.
