
How Generative AI Enhances Lifelong Learning and Growth

Podcast by Dr. Bjorn Mercer, DMA, Department Chair, Communication and World Languages and
Dr. Mitch Colver, Associate Provost, American Public University System

In this insightful discussion, APU’s Dr. Bjorn Mercer and Dr. Mitch Colver explore the transformative role of generative AI in enhancing lifelong learning and critical thinking. Dr. Colver highlights how tools like ChatGPT serve as valuable aids in distributed cognition, helping individuals process information more efficiently and navigate complex topics in both personal and professional contexts.

The conversation touches on practical applications of generative AI, from resolving family debates to aiding in understanding complex texts. Dr. Colver emphasizes the importance of using AI as a resource to augment human ingenuity rather than replace it, underscoring the value of maintaining critical inquiry and intellectual engagement. Together, they advocate for a balanced approach, where AI complements human expertise, fostering deeper learning and creative problem-solving.  

Listen to the Episode:

Subscribe to The Everyday Scholar
Apple Podcasts | Spotify

Read the Transcript:

Bjorn Mercer: Hello, my name is Dr. Bjorn Mercer, and today we’re talking to Dr. Mitch Colver about different ways we can use generative AI. Welcome, Mitch.

Mitchell Colver: Yeah, thanks so much. Good to be here, Bjorn.

Bjorn Mercer: This is an absolutely fascinating topic. Generative AI (GenAI) has taken the world by storm, for lack of a better description, over the last year, several years, but especially over the last year with the release of various platforms that so many people are using. And so, the first thing we’re going to talk about, can you describe generative AI dialogue for lifelong learning?

Mitchell Colver: Yeah. I first got into AI in November and December of 2022, when ChatGPT was first released to the public, and I've been an enthusiast ever since. I was one of those first million users who just couldn't wait to use it.

And as a learning scientist, I'm very interested in not only how people learn, but how people interface with technology in order to better their lives, to work more efficiently in the workplace, and to function more successfully as professionals. And so, really, GenAI has become something to watch closely as it unfolds across humanity, and there's this dynamic interplay between our ability to grapple with it, to absorb it, and really to leverage it toward improved functioning in all domains, not just professional but personal as well. And so, I'm having a great time working with it, particularly trying to understand it from a learning scientist's perspective.

Bjorn Mercer: That's a great introduction. A few things you had put here for lifelong learning: different ways you can interact with generative AI, like asking it about, say, diet, houseplants, or controversial family conversations at the holidays. Let's start with the controversial family conversations at the holidays. What has it recommended you do with that?

Mitchell Colver: Well, I think the real value of GenAI is something that's called "distributed cognition," and I think that most people are familiar with what that is even if they don't use that fancy learning scientist term to describe it. If you have a calendar or a day timer where you put in appointments or reminders to yourself, this is when I need to do this, this is when I need to do that, you're using distributed cognition. You're taking something that normally would live in your head, you're putting it in a book or on a calendar or in a phone, and you're asking this tool or this resource to do something for you that you could do yourself, but it's very nice to distribute that load out and have technology do it for you.

The same goes for a kitchen timer. I joke that Amazon Alexa is just a $400 kitchen timer, because that's how it usually gets used: "Alexa, set an 18-minute timer for this pizza that I have in the oven." But a timer is a distributed cognition technology, where a person realizes they could keep the minutes themselves, they could look at the clock and come back in 18, but it's going to be really easy to just set the timer, and then it will go off and they'll come back.

Distributed cognition is one of the ways that adults learn to function in a multifaceted environment where they have to multitask and ultimately where they need to be successful in order to optimize themselves. They can’t be the only resource that they put into deployment in order to be effective. GPT is identical. Generative AI is identical to that.

For example, at a family barbecue, you might have an uncle who has some conspiracy theories that he read on Facebook or something, and he's spouting them off as if they're gospel. And this is actually what happened on the 4th of July. A family member, a loved one who I appreciate deeply, was engaging in some rhetorical exposition of, "Here's what's going on in the nation, didn't you realize these truths?" And I just pulled out my phone and very quietly typed into ChatGPT, "What about this? Is this true? I'm at a barbecue, a loved one is saying this. I don't think it's true. Fact-check it and then give me some real-time feedback on how to respond diplomatically, to defuse this, to shut down the falsehood, but also to redirect and empower and cultivate constructively." And so, it did. Four seconds later, I mean, this person's paragraph had not even finished, and four seconds later I had not only, "Oh, that's a misconception," but a diplomatic way to engage and to say, "Hey, let's remember these things. Let's remember these truths." And that was all provided by GPT. So, this is distributed cognition.

I can obviously go look and fact-check. I can spend the time reading websites and doing all the things. But I can't do that quickly enough to respond to so-and-so at the barbecue and really help the other loved ones who are present not be misguided. And it went very well. I was able to empower them, to help them see why maybe some of the things they had been reading weren't exactly true, nearest neighbor to the truth with a little bit of spin, and then to help the conversation move in a constructive way rather than turning into butting heads, which of course is what always happens at these functions.

And so this is an example of distributed cognition, where dialoguing with GenAI in real time can really help you get through a sticky situation, and seamlessly. There was no evidence that I had been on GPT; I just looked like I had been texting or something. So, it was tremendously valuable. And I think that in this case, it was a peacemaking tool.
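The real-time prompt Dr. Colver describes can be sketched as a small, reusable template. Everything below (the function name, the wording, the numbered structure) is an illustrative assumption in the spirit of his example, not a prompt he dictates verbatim:

```python
def build_factcheck_prompt(claim: str, setting: str = "a family barbecue") -> str:
    """Assemble a fact-check prompt: verify the claim, then ask for a
    diplomatic, constructive way to respond in the moment."""
    return (
        f'I\'m at {setting}, and a loved one just said: "{claim}"\n'
        "I don't think it's true. Please:\n"
        "1. Fact-check the claim.\n"
        "2. Suggest a diplomatic response that defuses the falsehood.\n"
        "3. Redirect the conversation constructively."
    )

# Hypothetical usage: paste the resulting text into any chat-style AI tool.
prompt = build_factcheck_prompt("Didn't you realize these truths about the nation?")
print(prompt)
```

The point of wrapping the request in a helper is that the diplomatic framing (steps 2 and 3) travels with every fact-check, rather than depending on how the question is phrased in the heat of the moment.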

Bjorn Mercer: I absolutely love that example because I think we've all been to family gatherings where a topic comes up and, for the most part, people will just not engage and try to push it along. But the way GenAI was able to help you dialogue with a family member and then have a conversation rather than ignoring it, I think, is absolutely wonderful, because people have a lot of truths in their lives. And so to be able to talk to someone in a non-judgmental and non-argumentative way, just talking about facts in a way that brings people together, is so important.

Mitchell Colver: Well, I think part of why I have sympathy toward this family member, and why I didn't want to engage in any combative conversation, is that I'm intimately interested in human beings and human potential. At my core, as a learning scientist, really understanding how individuals take new knowledge and integrate it into their lives is an essential part of what I was trained to do and trained to facilitate. And so, I understand that his, what we call epistemology, or his way of knowing, includes reading something on a friend's Facebook post and accepting it as truth.

I don't tend to get my facts from friends' Facebook posts, or necessarily even the news media. I'm fairly skeptical of many things that are in print. Until I can triangulate, and really, as an empiricist, until I can verify that something passes the sniff test, so to speak, I'm very skeptical. And there are many easy ways to do that.

Generative AI is a new way to do that, particularly as these tools get more sophisticated at citing their own research. Copilot is tremendously good at this: you can toggle it over to be more precise and less creative, and it will actually link its sources, very often reputable sources that it's citing in order to get where it's going. ChatGPT, even on GPT-4o, the latest version of ChatGPT that's available, still has trouble with a little bit of hallucination now and again, and it's not great at being proactive about giving you citations.

You see them now and again. It's starting to cite things, which is nice, but it's not as seamless as the Copilot experience. But the reality is that I understand that, in a sea of information, a relative, a loved one, may not have the training, the skills, or the wherewithal to truly come to a greater or deeper sense of reality as it really is, instead of just something that they think might be so, or that they're excited about, or that aligns with their ideology.

Bjorn Mercer: And I like that at the end you said aligns with their ideology because for so many people, the information they consume honestly aligns with their ideology. And I like that for this first topic, you said dialogue for lifelong learning because so many people in their lives don’t know what it’s like to truly learn throughout their entire lives, and they don’t have good information literacy skills where when they consume information, they truly analyze it, they truly are skeptical of what they’re consuming. And whatever their ideology is, if they get it from a certain source, they’ll be like, “You know what? It’s good enough. I think it’s true.”

And in reality, that causes a lot of problems. And it's nothing against them, because they probably weren't taught that way, nor do they see many people actually doing it. Especially when they watch TV networks or the news, what they see is what they do. They're just mimicking behaviors that they see their leaders doing, or different things like that. It's an extraordinarily difficult skill to have. And so, it's really great to see that generative AI is a tool that can help with that, which is absolutely amazing.

Mitchell Colver: I have a terminal degree, a PhD in learning science, and it was toward the end of that doctoral experience that I realized that so much of what that training was about was not adding information on top of my master's degree or my bachelor's degree or my high school experience. In a doctoral training program, a terminal training program, whether it's medical, legal, or philosophical, there is very much an approach to helping those in training learn how to triangulate on reality and to use multiple sources.

So, in my experience, we were taught to: first, read the literature, go outside yourself, look at what other people are doing, and develop a consilience of awareness about what might be true. Then, through your own efforts, collect data in an empirical way, process it, and line it up with the other research. And then, through your own experiences and your own lenses, build some kind of conclusion.

If you go into law, it works the same way. When you get a terminal degree in law, they teach you to do the exact same kind of triangulation. First you read case law. Then you take the evidence that's available to you and combine those. And then you take your own experience and your own sensibilities, and you triangulate on the case that you want to argue.

Doctors work the exact same way in the medical field. First, they refer to medical journals and case histories to understand the situations where other people have already collected and gathered evidence. Then they carry out their own laboratory tests and their own analysis and assessment of the patient. And then they incorporate those two things with their own professional experience and background. And through those three sources, they triangulate on the diagnosis, prognosis, and prescription that they think is appropriate.

And so, having been through a terminal training experience, there's really this realization that unless you've been taught to take that slow, deliberative approach, it's very easy to fall susceptible to your cognitive biases. And of course, the triangulation is all about eliminating those biases. Generative AI, GPT, is this really interesting resource because it takes a rapid research approach. Now, that has strengths and limitations; I think it can cut to the quick, but it can also hallucinate. And so, the reality is that it can very quickly provide you with a synthesis of a great deal of research that would take many hours to process and combine, but you also may be getting nuggets of hallucination that you don't realize aren't so.

And so, ultimately, as a triangulating tool, maybe this is not a replacement for research, but maybe it's a fourth leg: instead of a triangle, now a square, where you say, well, what is GenAI going to say about this topic? Because that might actually give me some direction on where I need to head as I do deep reading and deep research and learn about what needs to happen.

Bjorn Mercer: Absolutely wonderful comments, and that's a perfect transition to the second topic, which is generative AI dialogue for understanding texts, which you were just talking about. And just as you're saying, ideally people shouldn't have to get a doctorate to truly understand how to analyze texts, but there is something about going through a master's and a doctorate, that writing style, that research, that really does open up your mind to looking at information and being critical of it. So how does GenAI really help you understand texts? What's the process of interacting with it?

Mitchell Colver: What's interesting is that, as curious a person as I am and have always been, I'm not much of a reader. I have never been much of a reader. And I come from a family of readers, so I know what they look like. It's very hard for people who know me very well, colleagues and friends, to accept that, almost impossible, because I am well-informed, I speak about a variety of topics, I have a lot of information at my fingertips, and so they just assume that I'm immersed in text all the time. But I'm not found with a book in my hand very often. And if I am, it's often picture books, not just for children, but art books and things that have a lot of visuals.

Having said that, there have been times and places where I've read textbooks cover to cover as casual reading, but they were good textbooks: personality textbooks, social psychology, educational psychology textbooks. They were subjects that I thought were very interesting, and it turns out that the authors were really good writers. And I think it was much later in my life that I started to accept that although I like reading, I'm incredibly intolerant of poor writing, writing that lacks empathy for the reader's knowledge state.

We learn through what's called "situated cognition," which means I'm in a situation and I want to expand my awareness of that situation through some kind of learning and focused attention on the issues at hand for me. What I've realized, as I look at a lot of books, is that the authors want the reader to join them in the author's situation, the author's knowledge state, and the author's zone of proximal development. I can't abide that. I have a tremendous number of books that I pick up and start reading and immediately disengage from, because there's no empathy in the writing.

And so, a good example: I have these beautiful coffee table books on Leonardo da Vinci, Michelangelo, Van Gogh, these great books, and they're awful to read. The imagery is beautiful. The associated text is awful, because the authors will introduce things; they'll say, "Oh, when Van Gogh was living in this little town," and blah, blah, blah, and it's like, "I don't know what little town you're talking about. Why does it matter, if that little town plays no significant part in the plot of where we're headed? Why are you telling me?" Whereas other authors will ease you into it: "a little town that this person was in, and it's in this location, and the reason why that matters," and they warm you up to the idea of why that would matter or why you should care.

So, in terms of decoding texts and understanding their meaning, GPT now becomes tremendously valuable because I don’t have to worry about an author being an awful writer in order for me to access the information because I have GPT in my pocket.

And so I'll give you an example: a liver textbook. I was having some health issues, and it felt liver-related based on what ChatGPT had told me. And so, I got a liver textbook, and I started thumbing through the pages, and it was impossibly hard. I don't have any medical background or whatever, but I'd read a paragraph and say, "Okay, now that's interesting." I wanted to understand the function of the liver. And I'd say, "Okay, but I don't know what this and this and this means," so I would turn to GPT and say, "Okay, teach me with empathy," in the prompt, right? "I just read this paragraph. It said this and this and this. Teach me about the liver with empathy. Use metaphors. Link it into concepts and principles that are part of my educational training." So, I would give it my degrees so that it would understand, and then it would do beautifully at taking something that was incomprehensible and really putting it in context.

It started with this metaphor: "The liver is like a production factory, and it has several pod bay doors. In one bay come the micronutrients from your diet. In another bay come things from the body that it's going to break down and process. And out a third bay, it sends wastes. And so, in the liver, there's all this storage of all these micronutrients, and everything's going this way and that way, but it's taking raw materials, reconfiguring them into new materials that the body needs to function, and then distributing those materials. And as often as there are toxins, it's taking those and breaking them down into component parts and producing wastes that leave the body. So, it has two doors in and two doors out." So that's really exciting.

I mean, because it was teaching me so elegantly, I was able to then ask a very leading question, which was, "Wait a minute. It seems like it would be efficient, when it's breaking down toxins, to get rid of the waste. But if any of the component parts of the toxin are valuable micronutrients, to redeploy those micronutrients as the essential components that the body needs to function." And GPT immediately said, "Yes, you've caught on. That's exactly what the liver does. It can take anything and break it down into raw materials, then reconfigure those raw materials and produce new things, and that's why the liver is so essential to the body."

And this got me hooked on micronutrients, and I'm eating healthier than I ever have, but this was because of that transfer of learning. I wasn't learning deeply about the liver because I wanted to be a pre-med student; that would perhaps be a non-situational type of learning, and it would be very boring. I was learning deeply about the liver because I have one and I want to be healthy. And GPT was giving me all the scaffolding you need as a learner to build what we call the "zone of proximal development," where you can expand your knowledge and really come to understand something that is complicated but comprehensible if you only have the right teacher. And so a liver textbook suddenly becomes transformative, because I have the added aid of generative AI and can prompt it to give me the things I need to build a deep, complicated understanding of a topic that was previously foreign to my background.
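The "teach me with empathy" workflow above has a reusable shape: quote the passage that lost you, then ask for metaphors anchored to your own background. A minimal sketch, where the function name, wording, and example passage are all illustrative assumptions rather than Dr. Colver's exact prompt:

```python
def build_empathy_prompt(passage: str, background: list[str]) -> str:
    """Wrap a dense textbook passage in a 'teach me with empathy' request,
    anchoring the explanation to the reader's own training."""
    degrees = ", ".join(background)
    return (
        f'I just read this paragraph in a textbook:\n"{passage}"\n\n'
        "Teach me this material with empathy. Use metaphors, and link it "
        f"to concepts and principles from my educational training: {degrees}."
    )

# Hypothetical usage with a made-up passage and background.
prompt = build_empathy_prompt(
    "The liver stores micronutrients and breaks down toxins into wastes.",
    ["bachelor's in communication", "PhD in learning science"],
)
print(prompt)
```

Stating your degrees explicitly, as he describes, is what lets the model build on prior knowledge instead of starting from a generic explanation.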

Bjorn Mercer: That is such a great example because not only are you incorporating understanding texts, but you’re demonstrating lifelong learning, which I really like how you are weaving the two topics into that. And just like you, I’m not medical. I always have to ask my wife about anything medical because I don’t get it.

But then to be able to read a text about the liver and incorporate a dialogue with GenAI to help you understand it more deeply is absolutely wonderful, and we can all do that. In any aspect of our lives, whatever we're going through, whatever our interest is, if we're reading a text and not quite getting it, using GenAI as a partner, as an addendum to what we're reading, is a really great way to more deeply understand whatever we're trying to get into.

Mitchell Colver: Yeah. And I think part of it was that I had a true situational need: I needed to be healthy. This is what I wish someone had explained in seventh grade health class. And maybe they did, but they didn't do it in a way that was as dynamic as generative AI, because generative AI really taught me based on my prior knowledge, by expanding things that I already knew or that I already had a handhold on.

The ability to interact with GPT in that way requires us to utilize something called metacognition. And metacognition is just knowing what you know and being able to understand the limitations, like knowing where my knowledge ends and where new knowledge would begin. And then, within that real estate of what I already know, it's about contiguously expanding the footprint of my knowledge by pushing the borders into nearest-neighbor topics.

And I think in seventh grade health, you're just sitting there, and very often the teacher just wants you to learn about the liver, wants you to learn about diet and micronutrients, and they go straight at it directly instead of saying, "Do you know what a factory is? Do you know why raw materials need to be reprocessed? Do you understand that your body has needs, and the only way it gets those needs met is if you eat a rich diet with the right things in it?" There are 34 essential micronutrients, and I'm very, very fastidious about making sure that my diet has foods that check the box for all 34, so that the liver gets what it needs and isn't down there trying to build something without the right little component parts. And I had never had that conception of my body before.

And in fact, when I'm on a hard run these days, it's nice to remind myself. I will actually dialogue with my body and say, "I know it's hard. When I get home, I promise I will give you everything you need to rebuild." And there's this confidence because I know how to make my life healthy. And all of this goes back to my ability to dialogue with generative AI in a constructive, productive way.

Bjorn Mercer: I just love that you have a dialogue with GenAI. Learning about the liver has then allowed you to dialogue with your own body. And all of us need to dialogue with our bodies. It's the one body we have, so we need to do as much as we can to make sure it lasts as long as it can. The last topic we have here is generative AI dialogue for the writing process. Can you explain that?

Mitchell Colver: I was identified as a good writer as early as third grade. My teachers at that age, third, fourth grade, would actually pull my parents aside and say, "This guy knows how to write. There's something going on here. He's a good writer." And I was able to cultivate that, and in college I took a creative writing course. I took it twice, actually. It was an elective; I loved the teacher, and I really wanted to spend a lot of time writing. And the thing is that, in all of the time I have spent writing, I learned that very often the work is reading back over my own writing and improving it, continually editing it, going back to the start and cleaning it up. I might write two paragraphs and then stop, read from the start, clean it up, read from the start, and then start writing again. This process of slowly iterating toward the end of the paper, by constantly going back to reread, to edit, and to build, really is an essential part of the writing process, this reviewing of your own writing.

Well, with GPT, as I've co-written things with GPT, the unusual thing is that when it writes stuff and I go back to read it, I am very aware of its deficiencies, and it's very easy for me to go and edit and correct things. And so, for a while, I was using GPT as a co-author and then citing it: "This was co-authored with GPT." So even if it was my writing, I was saying I got GPT to help.

Recently I've shifted, because I was never really very satisfied with that, and it was because I couldn't own the writing as my own. And there would always be a question in the mind of the reader: "How much of this is him? How much of this is GPT? That's confusing. I don't like that as a product." So, recently, I have learned that the best technique is to literally ask GPT to write something, four paragraphs or whatever, about a particular topic, and then completely replace that text with my own.

And this is truly very exciting because when I am done, it's really my writing, and I'm not talking about replacing its ideas with my rephrasing. That's not what's required. Typically, it will have a paragraph, and it'll start with the first sentence. And I'll say, "That's actually not a good first sentence. I know a better first sentence, and it's not even similar to that one. It's completely different." And so then I'll rewrite that sentence. And then the next sentence: oh, yep, you know what? It doesn't know the things that I know, because very often it's technical writing in my discipline. And so, it doesn't know about this research that I happen to know about, or if it knows, it didn't call it up. It doesn't know about these principles. It doesn't know my favorite authors who make this same point and who make it better. And so, I delete the second sentence of the paragraph, and the third sentence, and the fourth, and I replace them. And now what I have is my own full paragraph.

And in fact, if you were to compare its version to mine, very often there's no association; you wouldn't be able to tell that they started as the same thing. And then I go on to the next paragraph and do the same thing. And then by the end, I have eight or 12 paragraphs. What started as four paragraphs from GPT has now become eight or 12.

As I do this, I'm much more creative. I work so much more quickly than if I were writing on a blank page. And I think it's because it is much easier to write comparatively than it is to write against a blank page, because I can recognize its weaknesses, recognize where it's going wrong and where it's headed in the wrong direction, and then bring it back and say, "No, that's actually completely the wrong way to go. I think it should go this way." And as I go that way, I find that I have a tremendous amount to say. And I think what this is firing on is our competitive nature, and also our ability to recognize that it is the act of comparison that leads us to determine that there's something of value; evaluation itself requires us to compare.

And so, I am actually able to write tremendous amounts of text now using GPT as almost a whipping boy, because now I'm deleting it: "No, that's wrong, and that's wrong, and I wouldn't do that, and that's a bad choice." In the end, all of this energy of editorializing is scapegoated onto the text produced by GPT, and then I have my own writing in hand, and I can own it wholly as my own because it truly is. And I think that that kind of interaction with GPT is the kind of outcome that we wish university students could really get a handle on: that they don't want to offload or delegate their intelligence to a machine. What they need to do is dialogue with the text produced by the machine in a way that makes their own production stronger.
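The workflow Dr. Colver describes (a machine draft that exists only to be criticized and fully replaced) can be caricatured in a few lines. The function name, the toy paragraphs, and the one-draft-paragraph-becomes-two expansion are all illustrative assumptions:

```python
def own_the_draft(machine_draft: list[str], rewrite) -> list[str]:
    """Sketch of the replacement workflow: read the machine draft
    paragraph by paragraph, and replace each one entirely with the
    author's own writing, which may expand into several paragraphs."""
    final: list[str] = []
    for paragraph in machine_draft:
        # rewrite() returns the author's replacement paragraphs;
        # nothing of the machine text survives into the final piece.
        final.extend(rewrite(paragraph))
    return final

# Toy usage: each machine paragraph provokes two of the author's own.
draft = ["GPT paragraph one.", "GPT paragraph two."]
essay = own_the_draft(draft, lambda p: [f"My replacement for: {p}", "My added point."])
print(len(essay))
```

The design choice this models is that the draft is a foil, not a scaffold that gets lightly edited: the output length grows (four paragraphs becoming eight or twelve, in his telling) precisely because replacement invites expansion.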

Bjorn Mercer: That process is, I think, brilliant. It's a way of using GenAI's strengths while still creating your own work yourself. So often, students at any level might look at GenAI and say, "Oh, write me a paragraph," or "Write me an essay. Oh, it's good, I'll just copy and paste it and I'm done," but then they're not going through the hard work of learning, of writing, of improving their writing skills.

That's not the way we learn, by copying and pasting. But by having GenAI write something for you and then going through the process yourself, because at that point you should already know what you're talking about, just like you're saying, you become the expert over GenAI.

And then using that as a guideline or a roadmap for writing is absolutely brilliant. And I think it's a really good way to use GenAI for a great writing process, because so many people look at the blank page and get frozen. People have writing anxiety; they have math anxiety. But when you realize that where you go with your writing is very creative and can be very enjoyable, then I think people are more apt to do it. But when they have that blank page, they freeze up. And so, just like you said, it's a really great way of using GenAI.

Mitchell Colver: I do think that the blank page problem is something that honestly plagues authors and has plagued them for all time. I remember in that creative writing class with that great professor, Myrna Marler, she talked about clearing the throat. The way she would describe it is that your first paragraph is very often awful, and you don't know that. You won't know that until you're done writing the piece. Typically, she said, when she's finished with a piece of writing, she'll go back, read the first paragraph, highlight it, and hit delete, and then see how the piece reads just starting with the second paragraph. She calls that the "clearing the throat" paragraph. That first paragraph is really just you trying to move from a blank page to a page with text on it.

I have experimented with that and very often found it to be true: typically, when I'm clearing my throat on a blank page, the writing is poor, I'm not really saying what I want to say, and when I finally get into the swing of things and hit that first sentence of the second paragraph, I'm finally making my main point. I'm finally saying what I really want to say, and in fact, the first paragraph becomes completely superfluous and can be deleted.

I think this interaction, or interplay, that I have now with GPT as a writing tool is that exact same effect. I'm using GPT to clear my throat, and then I get writing. And so instead of having to suffer through that first paragraph by myself, I'm asking it to be the first paragraph, and then I'll take it from there. Thank you.

And I think that, to your point, it's really critical from a learning perspective that students have that critical inquiry, that they have the interdisciplinary synthesis ability to see how things are coming together, to be skeptical of text produced by a machine, and then ultimately to realize that their own strengths and their faculty in writing are really what should shine and really what should be developing, not just turning something over to a machine.

Of course, the sad thing is that GPT is such a powerful tool that it can either keep you atrophied intellectually or, if you develop yourself intellectually first, take that intellectual development that already exists and supercharge it. And so, there’s this very uncomfortable middle of the road where if you stay on ChatGPT and just keep asking it to do all of your intellectual tasks, you’ll never get through that fuzzy middle of learning the hard things you need to learn so that you can use GPT powerfully.

And I worry about that for our young people, because it’s this notion of: if they constantly have that as a crutch, are they going to intellectually atrophy such that they can never use it to its full capacity, because they don’t have metacognitive skills, because they’re not good at critical inquiry, because they’re not skeptical and don’t have information literacy, or because they’re not good at synthesis in interdisciplinary contexts?

And so the promise of AI is deep and meaningful, but it requires us to have a greater and deeper understanding of the strengths and limitations of that human technology interface.

Bjorn Mercer: And that’s an absolutely wonderful comment because it really makes me think of an interview I saw with the NVIDIA CEO, where they asked him, “What degrees should we get?” And he’s like, “Well…” This is NVIDIA, one of the largest companies in the world right now by valuation. And he said he’s not recommending degrees per se, but to have domain excellence.

What that means is that you’ve done the hard work of learning, and you’ve learned so much about your domain that you then have domain expertise. And so, does that require a degree? Not per se, but it does require so much work, so much work throughout everything. And so, when you dialogue with GenAI about something, you bring your own expertise, and you’re able to interact with GenAI as an expert versus GenAI being the expert. And that’s where I think the relationship could be imbalanced. And for bonus material, Mitch, what were you going to say about composers?

Mitchell Colver: I do think this idea that GenAI could write music is cool in light of this blank page idea: maybe GenAI could generate a little bit of music, and then I’ll take it from there. Great composers, that’s actually how they learned to write music: they would take a melody that someone else had produced either 50 years earlier or 150 years earlier, and then they would write variations and themes on it. And it was through the act of taking someone else’s idea, in other words, not having to work with a blank page, and then reprocessing it as your own idea that really taught them the inner workings of melody, harmony, structure, and counterpoint development, and ultimately made them into greater composers.

Frankly, we talk about this all the time in modern music, where you’ll have one artist copy a riff or a melody line or a harmony from another artist, and there’ll be a lawsuit. And anyone who has studied the history of music, and that’s a course I used to teach, will understand that great composers are quoting each other all of the time. You don’t have a Bach, a Mozart, a Beethoven that at times and in places doesn’t put a finger on the side of their nose and give a little tap to a previous composer’s work, to a previous composer’s idea. And in fact, it’s that “everything is a remix” kind of idea, where we’re taking things and we’re adding to them and we’re deepening them and we’re looking at them with new perspectives, that really makes culture deep and meaningful.

So, I really am of the mind that generative AI is a good resource for dialogue, but it will never replace humans’ ability to have a cultural conversation through music, through writing, through art, and that the meaning of human-generated art will always outlast any meaning that is in generative AI produced art.

Bjorn Mercer: It makes me think of all of the masses from the 14th, 15th, and 16th centuries where they would just take their favorite composer’s tune, put it in the bass line, elongate it, flip it upside down, different things like that. Today, we don’t know those tunes, but I remember learning about that. Or even, I’m sure you remember in the movie Amadeus, where Mozart is like, “Oh, Salieri, I took one of your cute little pieces and I made some variations on it.”

When you’re learning music, yeah, all you do is rip off other people for years and years and years. And I’m not sure where the concept of the artist today has to be a genius and do everything original because I know that’s the message I got when I was first trying to learn how to write music is whatever you do has to be completely original. And it’s like, no, for the first decade, all you’re doing is learning from the masters, and that’s a good thing.

Mitchell Colver: Bach wrote some 300 cantatas, and a cantata is usually eight or 10 pieces of music combined around a theme. And every one of his cantatas was based on an existing hymn, very often a hymn he had not written. So what he was doing was taking a common melody, a common experience of a hymn that the congregations in his area would know well. They would know the tune of the hymn, and he was saying, “How can we take that hymn and re-express it eight different ways, re-express the melody with new counterpoint and new variation?” And so, this allowed him to become this prolific composer, perhaps the greatest that had ever lived. And a huge body of his work is entirely grounded in quoting from existing music that he did not write. And I think that that’s what made him a great composer: he was free and enabled to find validation as an artist by remixing other people’s work.

And it’s a shame that we haven’t kept that tradition up as a form of not only musical expression, but musical training. Take other people’s work and make it your own and see what that looks like as you’ve remixed it. And I think it’s the same thing with the blank page of GPT. Take GPT, what it writes, and then depart from it and do something completely new in your own style. And I think that that would be a better outcome and more nurturing of the human spirit than just saying, “GPT, write this for me because I’m too lazy to sit at the computer.”

Bjorn Mercer: And I love that because, you see the music right there? That’s the big book of Bach chorales. What I’ve been doing is taking a Bach chorale and using those melodies as inspiration to create a new work. And if you know the Bach chorales, and if you listen to the pieces that I’m writing, you’ll hear it. I’m not taking it line by line, but rather phrase by phrase. So the entire Bach chorale can be heard over about five minutes, but in different segments. And I’m sure there’s a way in which I could just sit down and write my own melody. That’s fine, but I also want to connect the music I’m writing to Bach. I want to connect whatever philosophical ideas I have in these pieces that I’m writing to him and the greatness that he had.

Mitchell Colver: Yeah, I do think of Paul Simon. One of his famous songs, “American Tune,” is based on one of the Christian hymns about the Passion of the Christ. He takes it, and many people would know it as “Crown of Piercing Thorns,” but he changes it and expands it and makes it a beautiful little song. A lot of people who listen to it don’t even realize that it’s German, that it’s 400 years old, and that he’s just spruced it up with a little guitar and some vocal intonation. It’s a beautiful example of where great art purely is a remix of something that had been loved for hundreds of years.

Bjorn Mercer: Absolutely wonderful conversation, Mitch, about different ways to use GenAI. Any final words?

Mitchell Colver: I appreciate the conversation. I do think that as people integrate generative AI into their lives, they really need to be watching carefully and mindfully for how often they’re using it as a replacement for their own ingenuity versus using it as an external resource to expand and enhance their own ingenuity. And I worry tremendously when people aren’t treating it with deference for both its strengths and its extreme limitations. Unless you interact with it in a way that really honors both its strengths and its limitations, I believe that it’s going to get us headed in the wrong direction, because whatever else it does, it should be enhancing our ability to function, not replacing our ability to function.

Imagine arriving at a gym where there’s a GPT subscription and a robot is going to lift the weights for you. So, you show up and you say, “Okay, yes, sign me up for that GPT plan,” and then you come and sit on a bench while the robot does all the lifting. Unless we get it into people’s minds that that is truly a possible relationship with generative AI, and that it really is absurd in terms of developing human capacity, I think we are going to be off track and headed for serious trouble.

Bjorn Mercer: Absolutely wonderful final thoughts. So today, we are talking to Dr. Mitch Colver about different ways we can use generative AI. And of course, my name is Dr. Bjorn Mercer. And thanks for listening.

Dr. Bjorn Mercer is a Program Director at American Public University. He holds a bachelor’s degree in music from Missouri State University, a master’s and doctorate in music from the University of Arizona, and an M.B.A. from the University of Phoenix. Dr. Mercer also writes children’s music in his spare time.
