
Podcast: The Past and Future of Artificial Intelligence

Podcast with Dr. Wanda Curlee, Faculty Member, School of Business and
Dr. Oliver Hedgepeth, Faculty Member, Transportation and Logistics Management

Artificial intelligence (AI) is not only in our homes, but also in a variety of industries. In this podcast, Dr. Wanda Curlee talks to Dr. Oliver Hedgepeth about his lengthy career working on artificial intelligence systems, starting in the 1960s. Gain a historical perspective on how AI has evolved over the years and learn about Dr. Hedgepeth’s research assessing the future of AI and robotics.


Learn about the challenges of incorporating human behavior into AI and robotic systems, the ethical and moral issues, and the impact AI will have on jobs. Also learn what industries are using AI, including transportation, manufacturing, restaurants, education, healthcare and energy, as well as how artificial intelligence is being used by people in their homes.

Listen to the Episode:

Subscribe to Innovations in the Workplace
Apple Podcasts | Spotify | Google Podcasts

Read the Transcript

Dr. Wanda Curlee: Welcome to the podcast, Innovations in the Workplace. I’m your host, Wanda Curlee. Today, we are going to be chatting about the history of artificial intelligence.

My guest today is Dr. Oliver Hedgepeth, who is a professor at American Public University. He has many years of experience working with artificial intelligence. Oliver, welcome to Innovations in the Workplace. And thank you for joining me.

Dr. Oliver Hedgepeth: My pleasure, Wanda, and thanks for setting up this podcast on this emerging and reemerging concept. It’s an older concept, but it keeps changing names, and I’m looking forward to discussing how it’s changing.

Dr. Wanda Curlee: You have vast experience in AI. Most people call artificial intelligence AI, because artificial intelligence is a mouthful. When did you first become acquainted with AI and where?

Dr. Oliver Hedgepeth: Well, when I first became acquainted with AI, we didn’t call it that. We called it machine intelligence or smart machines, and that was 1967. I was working with the Defense Intelligence Agency, then later with the Department of Defense.

And during that timeframe of the 1960s and 70s, I was hired by the government as a mathematician, and then immediately sent to a place called IBM, which taught me every computer programming language they had available at the time.

I was part of a team that learned to program in ALGOL and assembler and every kind of weird language, in order to develop software that would take millions of pieces of data and turn them into decision points for humans to make decisions on.

Our problem back then, with what we call AI now, was that we wanted to manage data that was so large, and getting so much larger around the world, and make sense of it. We thought computers could crunch it all together.

We thought AI — what we call AI now — was just going to be nothing but number crunching. And it stayed that way for about 10 years or so. And then later, probably in the 70s, people started saying, “Well, maybe we could put human rules in computer software.”

And instead of just adding all these billions of numbers together to find out what’s going on in the world, what the enemy is doing (I was working with the Defense Department, so it was what the enemy is up to, projecting what might be the next battle or conflict or innovation), we wanted human rules: how humans think about all this data. People started adding human behavior aspects to it back in the ‘70s and ‘80s, and I was working on that at the time.
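What Dr. Hedgepeth describes is the rule-based, or “expert system,” style of early AI: encode an analyst’s rules of thumb in software and apply them to large volumes of data. As a rough illustration only, here is a minimal sketch in Python; the rules, field names, and situation report are hypothetical, invented for this example.

```python
# Minimal sketch of a rule-based "expert system" in the style described above.
# The rules and the situation data are hypothetical, for illustration only.

# Each rule pairs a condition (how a human analyst reads the data)
# with the advice to give when that condition holds.
RULES = [
    (lambda s: s["supply_convoys"] > 20 and s["radio_traffic"] == "high",
     "Possible enemy buildup; recommend increased reconnaissance."),
    (lambda s: s["supply_convoys"] <= 5,
     "Logistics activity low; no action recommended."),
]

def evaluate(situation):
    """Apply every human-written rule to the data and collect the advice."""
    return [advice for condition, advice in RULES if condition(situation)]

# A hypothetical situation report distilled from many raw data points.
report = {"supply_convoys": 32, "radio_traffic": "high"}
for recommendation in evaluate(report):
    print(recommendation)
```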

And about 1985, I became the director of the first Artificial Intelligence Center for Army logistics. General Max Thurman — I’ll speak about him later — developed seven AI centers. He was funded by the Secretary of Defense to develop software.

And I was developing AI software for logistics applications. It was along the same lines, in terms of how to use lots of data and lots of human rules about how to do things: how to manage data that would usually take 20 or 30 people in a room for 20 or 30 days to analyze what’s going on in a complex situation, a crisis that’s evolving such as you see happening in the White House today. All these crises around the world: what do you do?

And they wanted to have rules in computers to help us think about the next step to take. That’s how I got involved in it. Since the 1990s and 2000s, I’ve been involved in AI in terms of research, because I’m a research scientist, writing papers on AI and trying to find out what’s going on with AI and robotics and where it might be going in the future.

Dr. Wanda Curlee: Well, that was quite interesting. Oliver, let me ask you this. You talk about programming human behavior. We all know that each person behaves differently. So what behavior do you try to put into a computer so that it acts like a human as much as possible?

Dr. Oliver Hedgepeth: Well, that’s a nice question. The human behavior aspects that we want to put in computers, or as I say, AI or robotic systems (which today are merging together, so you’re not sure which is which), involve questions that are answered by pure data, accurate data, in terms of projecting where you want to go next. For example, again, if you have a crisis happening, you want to be able to make an observation with the data you’ve got, maybe half data, partial data, as best you can.

And AI software is really good at handling data that’s fuzzy, whereas mathematical models need exact data; they don’t use fuzzy data. And so that’s important, but some of the decisions we make cross ethical boundaries as well.

And we have to be careful about the ethical boundaries of putting rules in so that we don’t have a system that makes a recommendation or a decision that could hurt a human. We don’t want any AI system to hurt a human. We only want them to help humans.

And that’s one of the key things about the rules we put into it. We don’t want them to hurt people.
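To make the “fuzzy data” point concrete: fuzzy approaches replace an exact pass-or-fail cutoff with a degree of membership between 0 and 1, so a system can still reason when its data are partial or noisy. The following is a minimal sketch of that idea only; the category, breakpoints, and readings are made up for illustration.

```python
# Rough sketch of "fuzzy" data handling: a reading belongs to a category
# by degree (0.0 to 1.0) instead of passing or failing an exact cutoff.

def degree_high(value, low=45.0, high=65.0):
    """Degree to which `value` counts as 'high', rising linearly
    from 0 at `low` to 1 at `high`. The breakpoints are hypothetical."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

# Even with noisy or partial readings, we get a graded answer rather
# than the all-or-nothing answer an exact mathematical model requires.
for reading in (40.0, 55.0, 70.0):
    print(reading, "-> degree 'high':", round(degree_high(reading), 2))
```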

Dr. Wanda Curlee: That’s interesting because we could have a whole podcast on ethics and AI. But let’s shift a little bit. I know AI has many names. How many names are you aware of, and why do you think AI has so many names?

Dr. Oliver Hedgepeth: Well, AI — like I mentioned — has been around for a long time. Its first entry, or birthday I guess, was the invention of the electronic computer during World War II, in 1941. The hope was that it would help in that war, which raged for about four years.

And then about 1949, after the war, computer programs were invented, like I mentioned from my time in the ‘60s, to store instructions for repetitive operations and, again, to calculate large quantities of data. And then in the ‘60s, like I mentioned, there was an organization that got a big handle on it: the Advanced Research Projects Agency, ARPA.

ARPA, which later became DARPA, started really looking at how to use these computers. And the first applications had many different names, based on the roles they played.

There are hundreds of different names. A lot of the names were for the application itself, what it was trying to do, whether it was giving advice on attacking something or giving advice on how to grow something.

But there are hundreds of different names, for everything from managing bank investments in the stock market, for example, to identifying a problem with an Army helicopter. I know about that one.

So you’ve got things along those lines. We still call them AI machines, and you’ll see names like that. And there’s visual AI, giving vision help to impaired people.

The names will continue to evolve, and there are many applications behind all of them: names like voice assistant, natural language processing, text recognition, drones.

AI is part of the drones that are working out there, and they’re still intelligent machines. So it’s going to be fun to see what names come next. One name that just came to light in a newspaper I read last week was Flippy, F-L-I-P-P-Y. Flippy makes hamburgers at White Castle.

Dr. Wanda Curlee: Oh, amazing.

Dr. Oliver Hedgepeth: It makes hamburgers at White Castle. And the company that makes Flippy, this little robot, is also going to make and install them at several universities, because students late at night need a hamburger or French fries. And people at White Castle at two in the morning might need a hamburger or French fries, and Flippy is going to be making it.

The human may lose his job, or he may not. He may be cleaning the seats and everything because of this pandemic. A lot of names.

Dr. Wanda Curlee: You talked about robots such as Flippy. Are there any other robots that you think are using artificial intelligence out in the world?

Dr. Oliver Hedgepeth: Yeah. Oh, goodness. There are robots that are driving cars, for example, and robots driving 18-wheelers. I know there are three 18-wheelers out there, big rigs driving across the country; I’ve seen them and the research on them.

Now there’s a human inside, and there’s no robot behind the wheel. The wheel is there, somebody could drive it, and the human’s sitting there, but the robot is a piece of machine intelligence, and it’s doing the driving.

The other robotic systems are in schools. Japan is using robot teachers in K-12, I think in the first through fifth grades.

They really like their robot. Now, the human teacher is there also, saying, “Here’s what you’ve got to do. Here are your letters. You’ve got to do this math, you’ve got to do this coloring.” But the robot is there as an assistant, a coworker to that human teacher, and the kids really like it.

There are robots entering nursing homes. You’ve got elderly people who are alone. And a lot of people in nursing homes are alone.

I’ve dealt with some people in nursing homes. They’re with a lot of people, but they’re really kind of lonely. And a little robot, about three feet tall, I forget his name, walks around, and you can hold his hand; I can see a woman or man holding his hand.

And the robot will sit down on the couch with you, and you can ask the robot, “How’s the weather today?” And the robot will say, “Well, the weather’s nice today. How do you feel today?” And you can just talk on and on and on and have a wonderful, wonderful time.

So there are a lot of helpful robots out there, as well as the ones that will be doing applications. There’s also a robot I read about recently for landing airplanes. Yes. Now, airplanes aren’t being landed by robot systems or machine intelligence right now, but such systems are built, and they’re in the experimental stage.

So the next time you go flying, a human will still land the plane. But there’s a lot of smart software, AI software, machine intelligence in the operation of that plane. And planes will be landing by themselves perfectly, we hope, possibly in the future.

It’s a question for you. Would you be wary? Would you like to be in an airplane landed by a robot?

Dr. Wanda Curlee: That is interesting. I know when I was stationed at Patuxent River, when I was in the Navy, they were actually testing fighter jets landing with software. The pilot was in there, but he was told not to touch the controls. Some of those pilots were my friends, and they just had a very hard time with that.

So I can imagine what these commercial pilots are feeling. I was doing some research on AI before this podcast, and there are some who say AI actually started back when aliens came to Earth, when the Egyptians were here. And so in your opinion — I don’t buy that — but in your opinion, when do you think AI really started and why?

Dr. Oliver Hedgepeth: Oh my goodness. It depends on your definition of AI, I guess. If it’s a machine of some kind that’s calculating numbers and making a decision, you can go back to Babbage’s machine, which was an idea for how to calculate numbers; that was a calculating machine.

That’s kind of an early start, but the concept goes back to science fiction, I call it. I’ve heard that story before. The science fiction stories back in the ‘50s were really great, with robots and aliens coming down, and that’s when I was first introduced to Robby the Robot, from a 1956 movie, Forbidden Planet, I think it was.

And actually I’ve got a picture of myself standing next to Robby, because I was at a place called Alien Technology in California. Alien Technology was a place that was developing radio frequency identification tags for tracking boxes of stuff, replacing barcodes.

But the early Egyptians, yeah, I’ve read those stories. I’ve seen the movies of people analyzing the hieroglyphics. They say these had to be from outer space, and they point at the pyramids and different designs on the planet’s surface that you can only see from outer space, saying they must be a signal to somebody from outer space.

I wouldn’t call it artificial intelligence. If those people were from outer space, I’m sure they had AI and smart software. It’s like the software we’re using now. When you type, your Microsoft Word document is an AI piece of software.

That’s why you get recommendations for changing the words you’re typing or changing the format somehow. The software wants to help you.

That’s a personal story, and it’s kind of fun to talk about. But I think the version of AI that really makes sense started in the ‘40s. You can also go back to the early 1800s with the Luddites. They called them the Luddites, after Captain Ludd, I think, or General Ludd.

And a Luddite was someone dead set against the weaving machines taking women’s jobs away from them, machines manufacturing garments faster than the women who made them by hand in those days.

And so they destroyed all the machines. Any kind of mechanical device is similar to what we’re calling AI and robotics. It’s really all related.

Dr. Wanda Curlee: Interesting. So you’ve talked about robots, you’ve talked a little bit about how we’re going with AI and how it’s in schools in Japan. I foresee it coming to schools here in the United States because of COVID, but we’ll see. But how do you see AI evolving in the future?

Dr. Oliver Hedgepeth: Well, it’s interesting. Whatever we call it, AI, robotics, or machine intelligence, it’s going to evolve to meet our human needs. Humans are asking for things. Not that we’re lazy; we just want help in doing things.

But let me go back. All of these AI and robotic systems also have to follow what I’ll call the three laws of robotics. You remember Isaac Asimov wrote about the three laws of robotics in 1942, when the first smart machines were evolving.

And he said the first law is: a robot may not injure a human being or, through inaction, allow a human being to come to harm. So whatever we develop in the future that’s going to be used around your house, say a vacuum cleaner, can’t hurt you.

A robot vacuum cleaner — or maybe it’s Amazon and their Prime robot along the roadway — driving to your house and getting out and coming to your door, dropping a package off. It can’t hurt you. So it can’t run over you.

And the second law, a robot must obey the orders given it by humans except where such orders would conflict with the first law. So if you tell a robot “Stop,” it’s going to stop, it better stop. If you say, “Shut up,” it’s going to shut up if it’s talking to you.

And we have those robots in the house. If you have one of those little systems where you say, “Siri, play this music for me” or “Siri, how do I get to McDonald’s down the road?” and it gives you the instructions, that’s a robot AI system.

And then the third law: a robot must protect its own existence, as long as that doesn’t conflict with the first and second laws. So the robot’s going to try to save itself. If someone’s trying to hurt it, it’s going to get out of the way.
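The three laws form a strict priority ordering, which is easy to see in code. Here is a toy sketch only; real systems encode safety constraints very differently, and the action fields below are hypothetical.

```python
# Toy sketch of Asimov's three laws as priority-ordered checks.
# The fields are hypothetical; real safety logic is far more involved.

def permitted(harms_human, ordered_by_human, endangers_robot):
    # First law outranks everything: never harm a human.
    if harms_human:
        return False
    # Second law: obey a human's order (it already passed the first law).
    if ordered_by_human:
        return True
    # Third law: self-preservation applies only when the first two are silent.
    return not endangers_robot

# A human says "Stop": harmless, ordered, safe for the robot, so allowed.
print(permitted(harms_human=False, ordered_by_human=True, endangers_robot=False))  # True
# An action that would injure a person is blocked no matter who ordered it.
print(permitted(harms_human=True, ordered_by_human=True, endangers_robot=False))   # False
```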

So those things are important. And I see the future, that crystal ball you might say, of what’s going to happen being driven by the current situation. Here it is 2020, and we’re in a large, worldwide COVID-19 pandemic, with people dying. We’ve got an economic crisis, unemployment is high, and people are changing jobs.

Instead of going to restaurants, people are doing takeout. I am finding evidence, as I just mentioned, that Flippy is doing hamburgers. And it’s safer to have the robot making a hamburger than to have a human making a hamburger.

So what’s going to evolve over the next, let’s say, two to five years is really going to come from the crises unfolding around us. The pandemic says we need robots that give humans what they want, whether it’s food or instructions, in a safe manner.

That’s why I can see the Japanese robots coming to maybe American classrooms, or other online smart software coming as well. I’m not sure what the next five or 10 or 15 years will be like, but I do believe it will be driven by the crises we’re involved in: the pandemic crisis, the economic crisis, and, starting in September of 2020, the education crisis.

And that goes back to using robots or smart software to help, as students learn more online. I’ve been talking to a lot of teachers who teach in a classroom, and they are absolutely frightened by having to teach online; some smart software can help them deliver the lecture. So I see a lot of this kind of AI system evolving over the next two or three years.

Dr. Wanda Curlee: So Oliver, we were talking about various industries. I know that we’re developing vaccines, and Moderna is known for using AI for developing vaccines.

And there’s a lot of data to crunch in the medical area. So I’m sure they’re using AI. I’m sure AI is helping them look at X-rays and CAT scans and those kinds of things to determine if somebody really has COVID and how to help them.

But can you talk about some other industries and what they’re using AI for? I mean, you mentioned the autonomous semis and other things. How else do you see AI being used in various industries?

Dr. Oliver Hedgepeth: There’s an industry we may not really think about a lot. We see robots cooking hamburgers and driving trucks, but developing artificial intelligence can, again, use a lot of energy. It uses a lot of energy, and a lot of people are not thinking about this.

But all these robots and AI systems that have been developed use energy. And there’s a big move right now among the people who develop solar energy versus the energy that comes into your house. A lot of these AI systems may also be tied to solar energy or other ways of creating energy. The data scientists are out there looking at how to do this.

So it’s really getting kind of exciting in that sense, with what we’re having in our homes. I see the home front, again, using AI a lot. We’ve got robots that will clean your house. We’ve got systems that will provide security for you.

You’ve got an automatic security system around your house that knows you are the one who is supposed to be there and that your children are supposed to be in the house. Any other voice, someone who is not supposed to be in the house, can set off an alarm.

There are systems that can be in a house to detect things, like smoke detectors that go off. Maybe a smoke detector could be tied to another indicator in the house that would identify where a fire might be starting and even have it put out; we have sprinkler systems all over buildings.

There’s evidence that people are looking at smart machines for AI safety, AI being used for safety in homes. It’s kind of exciting to see what’s happening out there. Almost everything you can think of that a human wants to do is being turned into an AI or a machine intelligence.

If you look at how machines came about, I remember seeing, in the early AI magazines and AI books, a room with 100 accountants in it, 100 accountants and adding machines. And they were doing all the taxes. They represented a major company; they did taxes for people and companies. 100 accountants.

When the first computers came in from IBM, that room later had one computer, one desk, one person. The other 100 desks were gone. It was a big, empty warehouse.

I see a lot of jobs going away, but I also see jobs as coworkers with AI and robotic systems. I’m not really afraid anymore that robots are going to replace my job as a teacher, or as a worker, or a manufacturer, or a cook. I see what’s going to happen is we are just going to be working more so with robots and you just get used to it. A robot’s going to be around.

I do have robots in my house. I have three robots that clean the floors, for example. And they’re the darnedest thing in the world. And they stay out of my way and they know where I’m at and they back off and they kind of wait for me. So I see a lot of these things happening. Human life will be changing with robots in the next few years.

Dr. Wanda Curlee: I see the robots doing repetitive tasks and smaller tasks, which will allow us to do the more value-added work. I expect that to happen in project management one of these days.

AI is a type of technology, as you’ve already mentioned. It needs to be coded, and it’s coded by people. So let’s get back to the ethics issue. How does ethics play into AI, and is humanity at risk? I think about the Terminator, for example.

Dr. Oliver Hedgepeth: I mean, the Terminator, just going out there and killing people.

Dr. Wanda Curlee: Right. If you look at that movie, it was AI. It was a robot.

Dr. Oliver Hedgepeth: There was that movie, and there were other movies about how computers take over the world. That is a fear. The ethics of artificial intelligence is part of the ethics of technology in general, which robots and other AI share.

I guess you could divide it into the moral behavior of humans as they design, construct, use, and treat AI, and what it’s being used for. AI ethics is really concerned with that moral behavior.

And they call them artificial moral agents, AMAs. We have those out there. Ethics and moral behavior have to be part of it, and I do believe they are.

Now, having said that, I have to go one step outside and disagree with myself. We do have drones in the military that have AI software in them. You see on the news now and then how a drone somewhere overseas delivered a weapon and killed somebody, killed a human.

Well, we don’t want robots to kill us. However, there are going to be weaponized robots. There will be weaponized robots. In my early days in the military, I was working with AI systems that would kill people, yes.

But you wanted to kill the enemy. I come from the DoD, I’m a military-type person, and we want them to do that, but we don’t want the robot to kill our people. So the robot’s got to be able to distinguish between a good and a bad person, you might say, or an American versus the enemy. You have to do that.

We had a similar type of simple coded-numbers machine in the early days of World War II, where you could tell which airplane was the enemy’s and which was the friendly plane, so you wouldn’t shoot down your buddy. But I do see that there are weaponized robots out there, and they will always be out there.

If you’ve got a weapon on a battlefield that won’t hurt one of our soldiers, that must be what you want. You want to have machines that can strike from a distance. We have jets, for example; they’re not robots, they’re flown by humans, but they can fire missiles from miles away, and the enemy can’t see that missile coming.

So I’m sorry to say, there are such conditions. There are robots that can cut up beef and pork, but we would hope the robot recognizes that there’s a human nearby and doesn’t cut them up. There are a lot of those issues that have to be thought through when you develop machines that do things like cut meat or kill people.

And again, I’m not going to harp on it, but that is the negative human or moral side. And even then, there may be decisions you want to make as a military officer, whether the logic says to shoot somebody or just to warn somebody. It’s an ongoing discussion at the DoD level and in other places.

Dr. Wanda Curlee: I can imagine. I read a couple of articles, one was on swarm technology for drones, which was fascinating. The drones actually talk to each other and understand how to swarm.

And I’m not even going to guess what they’re going to try to use it for, but this was done by DARPA. And they had a film of these drones that were in various locations. And then they came together and started swarming together like birds do.

So to me, it’s amazing what we’re doing. And yes, we are going to have AI that does different things. So we’ll see how that goes in the future.

I also saw that the Navy, DARPA again, is looking at autonomous ships. They’re just now testing that. So it’ll be interesting how the military actually changes because of AI. So ethics is not universal. Those of us that teach know that ethics for Eastern cultures is much different than it is for Western cultures. Although there are similarities in everything.

So let’s go back to autonomous vehicles. I read a study about ethics in Eastern and Western cultures, and it was crowdsourced. They asked people from various cultures, “What would you do? What should the machine do when it’s driving with no human in it and it comes upon pedestrians, and it has to kill somebody? It has the choice of either killing a child or killing an older person.”

It was interesting: in the Eastern cultures, they would say, “Kill the child.” In Western cultures, they said to kill the older person.

So there’s that dichotomy of ethics. I’m not saying one is right, or one is wrong, killing is bad, no matter what it is, but it does happen. So do you foresee AI ethics being different in different cultures? Or will there be one ethics inside of the AI machine?

Dr. Oliver Hedgepeth: Well, even if the company that makes these AI robotics systems is the same for every country, whether European, Asian, or American, I do see what you just said in terms of how one culture views life-and-death situations. In that situation, it is hard to fathom.

I do remember a movie in which a car went off the road and a robot dove into the water to save someone. It could save a young girl, or it could save someone who was smart and whose abilities were needed.

Do you save the person who’s smart, an adult who could go on to help create things so humans live better, or the little child, who’s just a child? Who do you save? In similar situations, I can see this discussion going on forever in terms of robots that are put in a spot like that.

Like the truck or car that might kill somebody: I can see they’ll be trying to save as many lives as possible, but there will be a choice where you have to kill one person or the other, the older versus the younger one.

That’s an interesting dilemma that I can see Americans discussing forever. But someone has to make a decision finally, if the robot car is going to be on a highway and it has to make that decision, the decision will have to be made.

Now again, if we have one culture that says “save the older person,” you certainly don’t want that logic to be in another robot in another part of the world, going against your logic. These are just interesting discussions, I’ll call them, that will be out there. Those hard decisions will be made, but they’re no different. That decision is no different than you driving a car.

Then there’s you driving. Say I give you a nice old 1966 Ford Fairlane. Big, heavy car, 3,000 pounds, an antique car. And all it has is a motor and brakes and that’s it. And a heavy steering wheel.

And if you’re driving down the road and you have a choice of killing one person, you’ve got to turn left or right; otherwise, you die. So you’ve got to decide, “I’m going to die, or I’m going to stay alive, but then I’ve got to kill that older man walking down the street who’s probably in his 80s, or that 12-year-old girl walking her dog.”

You have to make that decision today. It’s tough to do. And having to live with that decision is tough. The human has to live with that decision.

The robot car is not a human. It doesn’t live with it, but the person who owns the car and makes those decisions does have to live with it.

And that’s just an interesting dilemma. I’m not saying there’s an answer for it. It’s just, again, some of the human behaviors that we are putting into those cars. The example you gave is a wonderful example, thank you, because that’s a discussion item for class that could continue forever.

Dr. Wanda Curlee: Yeah. It’s interesting how ethics will drive AI differently in different cultures. But if you had a crystal ball — and we’ve talked about job loss — what about job creation? How will AI change the internet or industries five, 10, 15 years from now?

Dr. Oliver Hedgepeth: Well, there are many examples. I expect to see more of an increase in manufacturing. I would not be surprised if, in the future, there are manufacturing plants building a car, for example, or meat processing plants handling hogs and pigs and chickens, with not a human in there.

There’s a manager, she or he is there, walking around, looking at what’s going on, checking the machines. And I can see that manufacturing in the next five, 10, 15 years won’t have humans in it, as far as large manufacturing goes.

There’ll still be small companies, maybe 50 people or fewer, making some unique product for a smaller audience. But I do see humans being replaced. Now that means, oh, they’re going to lose their jobs. And I would imagine manufacturing unions would be very upset at this coming to them, and be really against robots or AI systems invading the workplace.

When you look at the motor companies, the car manufacturers, on television today, you see an advertisement and you see robots. You see two people doing something, but then you see all these robot arms all over. That’s already happened over the last 50 years.

Fifty years ago, there’d be humans all over the place. You look at logistics systems, Amazon, warehouses and how they’re being managed. You see lots of assembly lines with one or two humans moving some boxes, but the pallets of boxes move down the aisles by themselves, robots moving by themselves.

If you’re standing in front of one of those, it will stop. It’s not going to hurt you; it will stop. But there are more machines doing these types of manual labor.

I do expect manual labor to be automated more. That means the jobs we have in the future will involve working with the machines in some way, maybe as a manager, maybe as an overseer, to make sure they’re doing it right.

Or we may be doing different jobs. I can see teaching changing, as I mentioned with Japan. I can see it in teaching online. I’m an online professor, and I can see how a lot of what I do could be automated, and the student would know that a robotic or AI system is answering their question or reviewing some aspect of a problem they did.

Maybe a mathematical formula they had to analyze and answer, and they get a grade on it, a nice grade and a nice statement, printed out.

So I see teachers, even in the online world, being replaced one day. But not replaced; becoming coworkers. Some might be replaced, but most will be working with the robots. So that’s kind of what I think is going to happen in the future. It will be interesting to see.

Dr. Wanda Curlee: Yes, it will be and AI is a fascinating area. And I think it’s going to change our lives radically, but I think it will be for the better, as you’ve mentioned. Oliver, thank you very much for joining me today for this episode of Innovations in the Workplace.

Dr. Oliver Hedgepeth: Thank you for inviting me, and thank you for bringing this topic to the center of attention today. Because, as I said, these crises that are happening, the pandemic, the economic crisis, and this teaching crisis coming up, are all helping drive the use of AI and robotic machines more so than in the past. But thank you again, Wanda. I really appreciate being part of this effort today.

Dr. Wanda Curlee: And thank you to our listeners for joining us. You can learn more about this topic and similar issues in artificial intelligence by reviewing APUS blogs. Stay well, and we’ll talk to you soon.

About the Speakers

Dr. Wanda Curlee is a Program Director at American Public University. She has over 30 years of consulting and project management experience and has worked at several Fortune 500 companies. Dr. Curlee has a Doctor of Management in Organizational Leadership from the University of Phoenix, an MBA in Technology Management from the University of Phoenix, and an M.A. and a B.A. in Spanish Studies from the University of Kentucky. She has published numerous articles and several books on project management.

Dr. Oliver Hedgepeth is a full-time professor at American Public University (APU). He was program director of three academic programs: Reverse Logistics Management, Transportation and Logistics Management, and Government Contracting. He was Chair of the Logistics Department at the University of Alaska Anchorage. Dr. Hedgepeth was the founding Director of the Army’s Artificial Intelligence Center for Logistics from 1985 to 1990 at Fort Lee, Virginia.
