Editor’s Note: Listen to the first episode in this series.
Podcast by Dr. Bjorn Mercer, DMA, Department Chair, Communication and World Languages and
James Lendvay, Faculty Member, School of Arts, Humanities, and Education
AI-generated content: how original is it? What are the legal ramifications? How does AI identify truths? And, from whose perspective? American Public University’s Dr. Bjorn Mercer and James Lendvay discuss these questions—and more—on today’s episode. (Catch up on Part 1 here if you missed it.)
Read the Transcript:
Bjorn Mercer: Hello, my name is Dr. Bjorn Mercer, and today we’re talking to James Lendvay about AI in Education, the Arts, and the Future of Work: Part two. And so, welcome back, James.
James Lendvay: Hi, Bjorn. Thanks for having me again. It’s always great to be here, and always a pleasure.
Bjorn Mercer: Yeah, definitely. This is a great conversation, because AI has been a topic for a long time, but actual AI products are relatively new; as of 2023, there is a flood of new AI products. So, it's very interesting to see what's going on. And so, the first question, which honestly applies to education, the arts, and all of our jobs: is AI generative, or is it just mimicry?
James Lendvay: This is a great question, and something that is probably still largely debated, although I guess it depends on who you ask in the field of development. We are using the language of "generative" with ChatGPT, right? But there are questions as to whether these large language models are just compiling and piecing together words based on predictions (the algorithms there), or whether they're doing something more like thinking, if we even have a good handle on what that means exactly.
We could just assume that it's mimicry, I suppose, for the time being. There are lawsuits already coming down the pike about whether or not it's generating authentic ideas or just piecing together a bunch of other ideas, whether from books or music or wherever, and to what extent that might be plagiarism or even copyright infringement. So, that's going to be a really interesting and important, even metaphysical, question: What is this thing? What is it doing? How is it doing it? And then, of course, how does that apply to work, school, entertainment, the arts, everything?
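Editor's note: Here is a minimal sketch, in Python, of what "piecing together words based on predictions" can mean. This is not how ChatGPT is actually built; real large language models use neural networks trained on enormous datasets. The toy bigram model below only illustrates the basic loop of predicting each next word from the words that came before.

```python
# Toy sketch of predicting the next word from prior text.
# NOT how ChatGPT works internally; this only shows the prediction loop.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word tends to follow which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=8, seed=0):
    """Repeatedly predict the next word from the counts and append it."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g., "the cat sat on the mat and the cat"
```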
Bjorn Mercer: As of 2023, there have been a host of new AI companies out there that have put out products and are doing amazing things. And AI has been talked about for a long time, of course, but I think this is the first time where we’ve really seen it going out to the general public. And it’s interesting to think of, is it generative or mimicry? And I think when I see a lot of the AI art that is created, it’s easier to see how AI is mimicking what came before it.
Obviously, there are millions and millions of images that these AI programs are using to then spit out new versions of art. You ask them to do something, and they spit out something. But you can see how they're all a little connected. You can see where the influence comes from. You can even see, potentially, the models they're using to put into the art. They're not real people, but at some point that image was taken from somewhere.
And so some people are seeing themselves in AI art, and I'm not talking about asking for art with some celebrity in it, just art with somebody in it. And that person did exist somewhere. It really is interesting to think of AI as mimicry. Can you explain a little about the lawsuits, not in depth, but what's happening with that kind of AI mimicry and how it has created some complications with the law?
James Lendvay: Well, I think you used a really interesting word, which is "new," and what does that mean? I'm seeing so many things, and people are so entertained by so many of these little uses of AI right now. I've been seeing a lot of things where they'll take somebody's voice. The first thing I can think of is they took the singer from Metallica and overdubbed the sound of his voice singing the words to another song. And you would hardly know the difference; his voice is just natural, he was born with that voice. And that's not something the program made up. It's not a unique creation in that sense.
There are other ones, too, where somebody will superimpose a face. I've been seeing a lot of this with Arnold Schwarzenegger's face superimposed on somebody's body, and then his voice is also superimposed on there, and people seem to get a big kick out of that. At what point is that a copyright issue? I don't know how Arnold Schwarzenegger feels about that. I was actually watching a really interesting Senate hearing on AI on C-SPAN, and one of the witnesses pointed out that when we talk about financial fraud, for example, there are some heavy consequences for that. So part of the discussion in that hearing was, can we set up laws and rules such that there will be enough punitive consequences that people will rein in the use of this?
And the point he made was really interesting, which was: if we take financial fraud that seriously, why not take this other kind of fraudulent behavior just as seriously and perhaps impose similar sanctions? And then maybe that would sort of direct the way people are using this.
Bjorn Mercer: And those are great examples. It makes me wonder if celebrities, or even people with more of a public persona, will try to trademark themselves. I mean, I think a lot of celebrities have copyrighted and trademarked different aspects of their personas, but what if they just trademarked themselves? So in the future, if those Arnold Schwarzenegger things went out and they're just for fun, who cares? But if people are trying to make money off of it and not asking permission from the estate of Arnold Schwarzenegger, which holds a trademark on him, then the estate could sue, because, yeah, it's Arnold. We all know it's Arnold, and if somebody is potentially making money, that's where a lot of the issues really come in.
And when I think of generative versus mimicry, it does seem like it's only a matter of time before certain aspects of AI become generative, where it is coming up with some original ideas. I think that's a fine line, because even as humans, our original ideas have sometimes already been created a hundred times over by the time we get to them.
James Lendvay: Yeah, there are all these kinds of ideas coming down the pike, and we're picking things out, and maybe we never have completely new, authentic ideas. But I also wanted to back up to something you said earlier when we were talking about copyrighting. One of the concerns from that Senate hearing I was watching was how this can be used to manipulate public opinion in politics, especially when elections come along.
So, it's one thing to mimic or to make some kind of fraudulent bit with Arnold Schwarzenegger, but what if the same thing is being done with a presidential candidate, and now somebody doesn't know the difference and can't tell real from fake? The power to influence the public is so great, and a lot of the senators who were speaking were very concerned about this, and I think rightly so. So whether it's mimicry or generative may not matter as far as that goes; the generative aspect is still going to be important, but the mimicry alone, as far as those things go, is really concerning.
Bjorn Mercer: And I agree. If we can take one thing from this podcast, it's that everybody should watch more C-SPAN. And this actually transitions to the next question: What is the role of AI in education? Is it a tool? Is it something else? I think faculty and administration in education are looking at AI more hesitantly, but students are fully embracing it. So instead of it being a combative relationship, how can it be a tool?
James Lendvay: So, just this week I had a group of essays come in for one of my classes, and we are now running all of our essays through the Turnitin system. The point of it is to check for plagiarism, so it runs an essay through a background check, basically, against hundreds of thousands, maybe millions, of similar papers to see if there's enough matching that it could be considered plagiarism. And it'll spit back a number, say a 25% match. Turnitin has now added a feature where it checks for AI-generated material. There was a student who had a 100% match, and we were talking back and forth about it. I don't want to say too much about it, but one of the questions that's come up, and it's come up with other students as well, is: can I use something like Grammarly as a tool to help me?
It's not writing for me; it's not even mimicking anything. It's just taking what I wrote and making it better—mechanically, and even stylistically, better. And one of the questions was whether Turnitin is identifying Grammarly's corrections or edits as AI. Now, I went in and was looking at some of Turnitin's material. They claim no, it does not pick up changes made by programs like Grammarly, and that it only flags text derived from some kind of generative program. How it does that, I'm not sure. But yes, to your point, students are picking up on this, and why not? Even if it's not a tool, even if it's doing the work for you, what are the ethical constraints that are going to keep students from using something like that? That's something we can build into the curriculum, even as so-called teachable moments, to say, "Hey, if we're talking about ethics, should we be using this, or should you be doing your own work?" And it's kind of snowballed, even just over the last six months, into a lot of questions.
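Editor's note: Turnitin's matching technology is proprietary, so the sketch below is purely illustrative. It shows one simple way a "percent match" score could be computed, by counting how many of an essay's five-word phrases also appear in a comparison document; the example texts are invented.

```python
# Illustrative only: Turnitin's actual matching is proprietary and far more
# sophisticated. This shows one naive notion of a "percent match": the share
# of an essay's 5-word phrases that also appear in a reference document.
def ngrams(text, n=5):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def percent_match(essay, source, n=5):
    essay_grams = ngrams(essay, n)
    if not essay_grams:
        return 0.0
    shared = essay_grams & ngrams(source, n)
    return 100.0 * len(shared) / len(essay_grams)

essay = "Nietzsche argued that traditional morality had lost its grounding for many"
source = "In his later work Nietzsche argued that traditional morality had lost its grounding in shared belief."
print(f"{percent_match(essay, source):.0f}% of the essay's 5-word phrases match the source")
```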
Bjorn Mercer: And I think these questions are valid, and they're conversations that need to happen between students and faculty. When I look at AI, I see it as a tool. So in the best-case scenario, a student would say, "I have a paper coming up, and it's, say, on Nietzsche." I always like saying Nietzsche because it's fun to say. So you go into an AI product and you say, "Can I have an outline?" Nietzsche, outline, hit a few options, and it spits out an outline. That's a great thing to have; it helps you focus your ideas. Now, if you say, "Nietzsche, write a paper for me," and then you take that paper and submit it to me, that is 100% fraud, because you didn't do it; a computer did. But a tool to help organize your thoughts, a tool to help brainstorm? That's pretty wonderful.
Of course, the concern is how many students will try to take the shortcut. Now, with that said, how many students have always tried to take a shortcut? So I think it doesn't create a new problem; it just presents an old problem in a new way, if that makes sense.
James Lendvay: No, it definitely does. This has always been an issue, of course, cheating of one form or another, and this is just a new modality. The real concern here is how easy it is. In seconds, a student can have a paper drawn up for them on their phone, with a really convenient copy function, and just paste it right in. A paper that might normally take many hours, or perhaps days, depending on who you are, is done in seconds. Now, that sounds great. But it also comes at the cost of really undermining what we want to do with education.
A lot of people want to incorporate AI, and as you mentioned before, we can draw lines and say, “Okay, as a tool, that’s fine, but no more than that.” And, so, where are we going to draw that line? And then, how are we going to enforce it? Really tough.
Bjorn Mercer: And it really makes me think of critical thinking. In general, I would say that a lot of people struggle with critical thinking. I'm not saying everyone, but we each have our blind spots in how we think critically about different aspects of our lives. For some it's healthcare, for some it's numbers, for others it's ethics; there are various ways in which we struggle with critical thinking. I worry for the student who tries to take the shortcut and, let's say, is successful. They get through school, they're able to use AI to have these papers essentially written for them, and they've figured out a way to make it their own.
And then you get an adult who has done this. What is their critical thinking? Are they really thinking critically about the information they're consuming and then spitting out? Or are they just blindly saying, "Okay, I have a problem, I'll do it as fast as I can. It looks good enough. Here you go"? To me, that could really put individuals at a competitive disadvantage, and they might not even realize it, if that makes sense.
James Lendvay: No, it does. A disadvantage in a number of ways. If AI is going to do what the promise has been—that it can think, that it could even be superhuman—then we have access to a program that can do our thinking for us. What does critical thinking become? What does that mean? It becomes just a term, something I have a tool to do for me.
And I was thinking about this also in terms of how we have used tools, how these are coming online for other professions, and just the internet in general. Some of the tools we had up front already started doing, I suppose, some of the critical thinking for us. For example, real estate agents used to do a lot of the manual, heavy lifting of finding homes and tailoring searches to what a client is looking for, and now you just type your parameters into Zillow, and it's done.
That’s nice. But, do we really want to offload too much of that work? Because, it’s exercise for our minds ultimately, right? What’s going to happen if we’re not using those capabilities and we delegate all that work to a program? I’m not sure where that leaves us, but that’s, I think, an interesting concern. Something to maybe think about.
Bjorn Mercer: And, so, this leads us to our last question: Are fears well-founded that AI may replace workers?
James Lendvay: Perhaps, but think about some of the technological tools we've already had, like Zillow, or tax prep software, or apps for finding hotels. Do we still need travel agents for that? Even apps like Duolingo can teach you a language. Okay, does that mean, all of a sudden, we don't need language instructors? Maybe, maybe not. I don't know how many people statistically have been put out of work by that. But as we were talking about that, it dawned on me, too, that one thing this kind of technology doesn't seem able to replace is critical thinking with regard to our values.
So, it can take a set of parameters based on what I want and spit out the perfect house for me. But it's not going to help with the personal decision-making, the sort of critical thinking we have to use to say, "This is where I want to go in my life, these are my values, and this is how I'm going to try to get there." Maybe a program could really help with that kind of thing. But I think the world of values and the world of numbers, data, and getting from one place to another in a procedural way are very different. The latter is something that, clearly, this mimicking AI can do, but when we're talking about getting from one place to another in our lives, and the critical thinking we apply in that sense, I'm skeptical as to what it can do, if that makes sense.
Bjorn Mercer: To me, there are certain industries at risk from AI, such as tax prep, certain kinds of accounting, and eventually cars and trucks.
James Lendvay: Right. I was thinking about that as well in terms of mechanics. For a long time now, when you take your car in to get it diagnosed for a problem, what do they do? They plug in the OBD-II scanner. So we already have machines doing that, and we have for a long time. Now, does that really eat away at the opportunity and the need for a mechanic to use critical thinking and troubleshooting? Maybe, to some degree. I guess the difference there is that cars are so sophisticated now that you couldn't really diagnose a problem without the computer, even if you wanted to. How time-consuming would that be? So it's maybe an effect of the complicated technology itself. It's not just coming in and replacing what mechanics normally did; it's something that needs to be done because of the advance of the technology, and it's part of the job now. From a workplace and economic perspective, it's very interesting.
Bjorn Mercer: It is. It also makes me think of healthcare, when they're reading reports or MRIs or X-rays. If you put it through, say, an AI that has a database of everyone who's ever existed and compares your scan to everybody your age, everybody of your ethnicity, no human can match that. And so the computer will do the analysis and spit out what it thinks it sees, and then a human will look at it. To me, that's an absolutely wonderful way of blending AI and computer learning with humans. At the same time, even if you start having little robots that can go around and do the basics of healthcare, just your initial screening, that's not a bad thing either, because in healthcare, especially after COVID, nurses are extremely worn thin. So I don't see AI as dangerous to a lot of those jobs; I see it as hopefully complementary.
James Lendvay: Yeah. It was an interesting point you made about AI that can go through a huge database and, say, compare your X-ray with others to say, okay, here's the diagnosis. It's funny, because that's really what humans are trained to do. You go to medical school and you pump your brain full of these images and all these things that you're eventually going to pull out of your memory, or synthesize somehow, to make determinations when you're doing the job.
And now AI is doing that, but how much more information can some database hold than the average human mind? A whole lot more. So, if it's doing the same thing, but with a much more robust database, it seems like, yeah, obviously, it's going to do a better job.
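Editor's note: Real diagnostic AI relies on neural networks trained on medical images, not a simple lookup table. The hypothetical sketch below, with invented feature values and labels, only illustrates the "compare a new case against a large database of prior cases" idea using a toy nearest-neighbor search.

```python
# Toy illustration of "compare your scan against a database of prior cases."
# Real diagnostic systems use trained neural networks on the images themselves;
# this only shows the comparison idea with made-up feature vectors.
import math

# Hypothetical prior cases: (feature vector, recorded finding)
database = [
    ((0.82, 0.10, 0.05), "normal"),
    ((0.40, 0.75, 0.20), "fracture"),
    ((0.35, 0.20, 0.90), "mass"),
]

def closest_case(features):
    """Return the recorded finding of the most similar prior case."""
    return min(
        database,
        key=lambda case: math.dist(features, case[0]),
    )[1]

new_scan = (0.38, 0.70, 0.25)
print(closest_case(new_scan))  # -> "fracture": flagged for a human to confirm
```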
Bjorn Mercer: I've been watching some documentaries about AI and warfare, where currently, I think, the rule is that the trigger is still pulled by a human, but eventually the trigger will be pulled by AI. And with healthcare and things like that, AI will collect the data and make the recommendations, and most of the time it'll probably be correct, but a human still has to confirm it.
James Lendvay: Something you said about diagnoses really struck me, and for whatever reason it popped into my head from watching the old Star Trek. They had those tricorders, so Dr. McCoy would just wave this thing in front of your body, tell you exactly what your problem was and which medications to take, and it was just a miraculous thing. That was sort of a precursor, at least in terms of the idea of how this would work.
But then I started thinking: even if that gives you a really good, expert opinion, sometimes people are going to say, "Well, I still want a human's opinion on this, even though I get it," because people second-guess authorities all the time. People will second-guess AI as an authority and say, "I'd really like to get some human eyes on this," or, "Can I get a second AI opinion?" I don't know if that kind of thing will happen, but there's always going to be that question of trust in terms of how these things are applied. It'll be interesting to see how that plays out as well.
Bjorn Mercer: I could imagine a time when you do get a second AI opinion, because just as you might want a second doctor's opinion, there will be large databases that probably don't share with each other. It's like what we were talking about with Turnitin and education: Turnitin is a company that doesn't share with other companies, so Turnitin and Grammarly are different, those two databases don't talk to each other, and so you can get different opinions versus what's available on the web. It's interesting; there's so much that could happen, and there are so many companies all fighting for it, which I think is a good thing. I'm an ardent capitalist; there should be competition. Now, I'm also a realist in the sense that there should be healthy, logical regulation that, hopefully, protects people and puts people before profit, but, well, that doesn't always happen.
James Lendvay: No, it does not. Sometimes, also, people have really good intentions, and when you're dealing with something like this, technologies you can't predict, it's very possible that somebody will say, "Look, I want to change the world for the better." As an example, perhaps, Elon Musk wants to start up this TruthGPT thing. It's going to give us all and only the truth about the world. Okay. I'm sure he has some good intentions there. What that's going to look like or unleash? Who knows?
Bjorn Mercer: We've talked about truth before. What is the truth, and from whose perspective? History is not so clean in the sense that an event occurs and the absolute facts are recorded, unbiased. When events occur, facts are recorded, but you don't know who records those facts, and so many aspects of everything in this world are open to interpretation. I would say that just from its premise it's idealistic, which is wonderful, but it's also flawed, because then the truth is how you program it.
James Lendvay: I'm going to come back again to that Senate hearing I was watching. One of the witnesses was the CEO of an AI company whose name is slipping my mind. He kept talking about how we're teaching the AI to do things in a certain way, teaching it to do things in an ethical way, and the senators were kind of … I don't think they really pressed him on that question enough. "Training." What does that mean? How do you "train" it to follow certain rules? Is that something you can actually do? And sure, I don't know any better; maybe that is possible. But it seems very difficult, and not really in line with the idea of just pulling together large pieces of information and assembling them like a puzzle. How do you get it to think that way and say, "Okay, I have these ethical parameters"?
Can we teach it not to spit out obvious falsehoods? Is it really going to be able to do that? It could probably say, look, I've generated what I believe to be a true answer based on a database, so I'm only going to give you this one, and if another one comes across, we're going to filter that one out. So, I'm sure the technology could do that eventually, but again, how it's programmed and how it's used is really going to be what determines that, I think.
Bjorn Mercer: Well, yeah, and not wanting to spit out falsehoods is a great idea, but it also makes me think … I'm just thinking of political hot topics of the day. If somebody asked something about communism, well, a lot of what's said today about communism and mid-19th-century Marxism is perspective. Unless you go back to the original texts and read only what they wrote, as opposed to what happened over the 150 years that followed or what your own political leanings say, I mean, what is true and what is false?
James Lendvay: That brings up a great point, because even if the AI-generated content was being broadcast, it still depends on the audience’s understanding. So, if somebody doesn’t know what communism is and they hear it, they’re going to filter that out however they want. They’re going to put their own spin on it, whether it’s a human generated that truth or a machine. There’s still going to be that other end where people—and maybe this gets us back to critical thinking—where people are still going to have to understand what they’re hearing, whether it’s true or not.
Bjorn Mercer: It really makes me think about critical thinking. It all comes back to critical thinking, and not just when we ask whether AI is generative or mimicry, how it's used in education, or what it means for workers. There are just so many things that go along with it, where AI might give you an answer, and then you're like, "Well, what's the context? What are the primary sources it drew from? Is it taking from primary sources that are, say, from the Middle Ages, and are those primary sources only European? Or is it truly scanning Middle Eastern primary sources? Is it scanning Chinese primary sources?" You have the potential, unintentionally, of having a Eurocentric perspective, and again, there's nothing wrong with that in itself, but if you also think that it's the truth, where are the primary sources?
James Lendvay: And that's a great point too, because if people start to see this generative technology as an absolute authority, then there's not going to be any questioning of it. That could be a problem as well. I think that's why a lot of people are worried that if you see a faked image of Donald Trump saying something, you're going to believe he actually said it. Okay, that's one thing. But if you just happen to Google something, find some AI-generated content, and don't know that it's AI-generated, you're going to take it as authoritative no matter where it came from. And even if you knew it was generated by AI, you could say, "Well, I know that AI pulls from this huge database and library of material and filters it in such a way, so I have a good reason to believe what it's telling me." We're really in the same place we are now with how we deal with and think about expertise, and how we either accept or reject it.
Bjorn Mercer: For my last example, and then we'll need to wrap up, say you're writing a paper about Indian history, about what's called the Indian Rebellion of 1857, which in India is typically called the First War of Independence. From the British perspective, they called it the Indian Rebellion of 1857 because the Indians rebelled against them. And if you look at that, that's colonialism to a T, because the Indians in India were trying to throw off the yoke of the English. Even on Wikipedia, in the English-language version, which I would say is mainly American, why is it labeled the Indian Rebellion of 1857, when it should be the First War of Independence, maybe "slash" Indian Rebellion of 1857? So it's perspective, even right here.
James Lendvay: We're going to get that even now. Take January 6th. Was that a riot? Was it a protest? Was it an uprising? How that's worded is very important to how people perceive it, and how is AI going to choose which words to use?
Bjorn Mercer: Which to me comes down to who's programming the AI and which AI product you're using. Well, a fabulous conversation. James, any final words?
James Lendvay: Well, for me, again, this is affecting all of our lives professionally, particularly in terms of education. This is going to be an ongoing issue, and hopefully we can find a way to integrate AI—as everybody's saying, it's not going anywhere—in a productive and positive way for students, and mitigate any instances or patterns of using AI inappropriately. It's going to have to be a concerted effort on everybody's part to make sure we're getting the most out of these technologies and using them ethically.
Bjorn Mercer: Excellent. Final, final words, and today we’re talking to James Lendvay about AI in education, the arts, and the future of work. Of course, my name is Dr. Bjorn Mercer, and thanks for listening.