Podcast by Dr. Bjorn Mercer, DMA, Department Chair, Communication and World Languages and
Dr. Jennifer Fisch-Ferguson, Faculty Member, School of Arts, Humanities, and Education
In this episode, Dr. Bjorn Mercer and Dr. Jennifer Fisch-Ferguson explore the evolving landscape of writing in the era of AI, addressing ethical concerns, student development, and the potential biases of generative language models. Dr. Fisch-Ferguson discusses the complexities of writing education and the importance of maintaining academic integrity and personal voice in an increasingly AI-assisted world. They examine how using AI as a supportive tool—rather than a crutch—can foster authentic writing skills and encourage curiosity. With reflections on lifelong learning and adapting to technology, this episode emphasizes the enduring value of critical thinking and creativity in writing.
Listen to the Episode:
Subscribe to The Everyday Scholar
Apple Podcasts | Spotify
Read the Transcript:
Bjorn Mercer: Hello, my name is Dr. Bjorn Mercer, and today we’re talking to Dr. Jennifer Fisch-Ferguson about learning to write in the era of AI. Welcome, Jennifer.
Jennifer Fisch-Ferguson: Thanks, Bjorn. Glad to be here.
Bjorn Mercer: This is a great conversation. Learning to write is hard no matter what age, what time. It is a slog, and it’s something that you can actually get better at slowly, but people have to be willing to learn. And AI, I’d say, complicates that. So, the first question is: what are the ethics of AI and writing?
Jennifer Fisch-Ferguson: There are quite a few. It’s a little more in-depth than people think about. Because as you mentioned, some people find writing a slog, and it’s further complicated by the fact that we are taught many different times over our education how to write differently. So, the snapshot that I like to discuss is when you’re in preschool or kindergarten, your composition, or your writing, is using crayons. You learn color appropriateness – you learn to write within the lines.
When you move up to kindergarten, first grade, you are then given three-letter words that you’re writing, but they also give you directional writing. So, a lot of times, they teach you with an “S”: you start at the top, you end at the bottom, which is one of the first places where bias actually becomes an ethical concern. If a child is neurodivergent, it may be easier for them to create the letter from the bottom up.
So, we start early, even without AI, having these different ethics around how people write. Move up to third or fourth grade, you are creating full sentences. Fifth and sixth grade, you are creating paragraphs. Seventh grade, you create an outline with introduction, body and then conclusion. Eighth, ninth grade is the time to start to insert research. So, then we’re talking about learning to write with citations and proof. High school is a refinement of that idea.
And then in college we say, great, this is wonderful. Here’s a completely new format. Here’s a completely new way to think about your writing. Here’s a completely new way to think about a thesis statement. And then people sit around confused that students aren’t necessarily strong writers when they first enter college. And part of that comes back to disparities in schooling styles and writing styles.
What’s considered good writing, what’s considered professional writing. And working with generative AI doesn’t make it any better whatsoever, because what we now take into account is not just writing styles and how people write – we’re also looking at vocabulary.
We’re looking at connecting words, transitions. We’re looking at depth of information, we’re looking at relevance. We’re looking at how people tie concepts together or refer back to them and refer to sources. So, ethics and AI comes under weird umbrellas such as plagiarism, detecting AI generated content, the role that we ask of students in maintaining their own academic integrity, and then the responsibility of our educational institutions.
And one of the issues that we find is that institutions are having a hard time creating policies because generative AI changes so rapidly. So, you don’t want to spend the man-hours saying, we finally have a policy, and then that next Monday, AI has changed so much that the policy is useless. So, we’re in a very interesting time: ChatGPT launched in November of 2022 and took the world by storm. What it looked like then and what it looks like now in August 2024 is a completely different look and completely different access.
Bjorn Mercer: Excellent, yeah. And I think that is a wonderful overview of it. So, if a person is going to go use an AI platform and they say, “Hey, can you write this for me?” And then they put it out, say on their blog and they say, “it’s by me”, but it was written by the AI platform, is that ethical?
Jennifer Fisch-Ferguson: I would say no. I often tell students that using AI as a drafting tool – so, “help me create an outline,” “here are some ideas I have,” “do they make sense together to work toward a broader topic?” – isn’t much different than working in a small group of students where everybody’s bouncing ideas off of each other, or creating an outline and going to a tutor and having it looked at.
However, when you’re looking at, “here are all my ideas, now you put it into a paragraph or into a blog”, the way it hits me as an author is when I go to book shows, people come up and say, I have this really great idea. I’m going to give you the idea and then you can write it, but then I’m going to take credit because I gave you the idea. So, the work that is actually being done of putting things together in a coherent manner is not the work of the person writing the prompt. It’s the work of the generative AI.
Bjorn Mercer: That example of “go to Copilot”, “go to Gemini”, “write this for me”, and then I just put my name on it. People have been using ghostwriters forever, but when people buy a book by this person or that person, they know that it’s ghostwritten. They know that this author didn’t actually sit down and take a year of crafting sentences and paragraphs and chapters – they paid them! That is an option for that type of writing. But generative AI makes it so easy. And so, if you do that, there are issues that come with that. And I think in college that is a serious issue; like I said, it’d be considered plagiarism. It’d be considered fraud because it’s not your own words, it’s the words of generative AI. Now, how about this? Somebody goes to Gen AI and they say, “Hey, provide to me an outline.” It gives you an outline, and then you write your own words using that outline. How is that?
Jennifer Fisch-Ferguson: That’s no different than brainstorming with your friends. “Hey, I’ve got an idea about this story or this paper, I want you to give me some feedback.” I still do that with colleagues and friends. I’ve got this idea that doesn’t quite make sense.
I mean, it’s perfect in my head, but when I write it down or I say it, there’s something clearly missing. Let’s spitball back and forth. But the author is still doing the primary amount of work. So, AI to create an outline is not a bad thing. Quite frankly, I’ve used it to create an outline.
I’ve discarded a lot of stuff in the outline, but for a basic outline to see how your thoughts might gel together, I don’t think that’s problematic. You are still creating the basic ideas for AI to say, “well, based on the programs that I’ve seen and how I’m written, here’s how I think your ideas should come together.”
The cool thing is, put it in different AI platforms and see what it gives you because based on the inputs that that platform has received through usage, you might come up with slightly different or sometimes even radically different outlines.
Bjorn Mercer: And I love that because when I do any kind of research or brainstorming using GenAI, I’ll put the exact same prompt into Copilot, into Claude, into Gemini, into ChatGPT, and I’ll see what they give me. And they’re different. They’ll give you slightly different information and then to me, it’s on you to then synthesize that and then you are creating your own unique products.
And so, my last example is you wrote, say two paragraphs, you’re rushing, they’re bad, they’re just like a first draft, and then you say, “Hey, Copilot, can you edit this for readability?” Is that a good ethical use of generative AI?
Jennifer Fisch-Ferguson: I would say no. Everybody writes a horrible first draft. There are articles about it – there’s that one essay with the curse word in front of “first drafts” – really terrible first drafts. That’s part of practicing writing. That’s part of the writing process. So, if you can recognize it as bad writing, then going through and correcting it increases your skill level. It teaches you what not to do.
And I’ve come into contact with an AI-written paper and had to sit down with this young man, and his whole thing was, “but I want to sound college educated.” He didn’t have the vocabulary. He did not have a very strong writing background. He came from an area in the inner city that had a horrible time even keeping English teachers, and his whole desire – because he already felt “othered” being in school – was: at least, can I sound articulate?
And so, we had a long conversation, and I mean he admitted it. There were no ifs, ands, or buts; he’s like, “I just didn’t want to feel stupid. I didn’t want to sound stupid.” Which actually led to a really great conversation about, okay, then what are some of the tools that you can start to work with to make your writing better? So, in this one case, yes, he used generative AI. Yes, he admitted it, and we could work with that.
But asking somebody to fix something that you’ve written that you expect to get a grade on is no different than paying somebody to write a paper for you. And quite frankly, it diminishes your capacity for learning and it diminishes your capabilities as a writer.
Because eventually, if it spits back something and you’re like, “oh, my gosh, this is great” – it’s now your crutch. The worst thing about using it as a crutch is after two or three times, you aren’t going to proofread. You’re going to think, “I gave it to AI, it’s done me right so far.” You’re going to stop proofreading and editing, and you’re never going to see the whole product until you get a failing grade or you get a plagiarism report, because you stopped checking, because you trusted that AI would do it better than you, which is never going to be the case, never… It does not have the nuance. It does not have your educational background, your learning experiences. It’s not going to be better.
Bjorn Mercer: And I love that how you said it becomes a crutch because so many people want to take the quick route. And I think your example is perfect because I remember feeling that when I was young. I just want to sound competent, educated, all those different things. And it took me years to realize that the best way to become a better writer is to first of all, read and read intentionally. Just don’t skip through things.
Because if you’re not a good reader – if you’re not consuming information and really figuring out how these authors are putting together their sentences and their paragraphs and their thoughts throughout the chapters – you’re skipping comprehension. Once you do figure that out, then you can start mimicking them. It’s a slog of practice and practice.
But the most important thing is that ability to think critically, to problem-solve and communicate through writing. And that’s not, say, a natural thing, and it takes work, but if students figure that out, that actually is a differentiator. It’s a skill that not everybody has and could be so beneficial for students in their future.
Jennifer Fisch-Ferguson: Yes, absolutely. And in addition to being a critical and intentional reader, the thing that I’ve noticed for years is vocabulary. People just do not have an expansive vocabulary. And where it affected my life is my youngest child has dyslexia. For his first three years of schooling, even with me saying, “I think he has dyslexia, can he be checked?” they were like, “his vocabulary is too good, right?”
He has a mother who’s an English professor and an author and never spoke baby talk. And the one gift that I was going to make a point to give both of my kids was a vocabulary. Let me give them new words, let me give them different ways to articulate themselves, because that’s the one thing I could give them to say, “Hey, this is going to help you express yourself just a little bit better.” But I think it comes down to the same thing: you now have generations of people who don’t read well.
Our society is built around, if you’re going to advance and do things, you have to be able to read. And then as people get older, it becomes an embarrassment, and people say, I’m not going to admit that. And I had worked with a gentleman, maybe 26, 27, who came to the group I was working with. The only reason he came was his pastor had asked him to read a Bible verse, and he is very invested in his spirituality, very invested in his church, and it hurt him that he could not read this Bible verse out loud. And he had to admit to the pastor, “I can’t read.” And so, then he was given tools to help him with that.
But many people kind of amble through and don’t have that ability. So, then it comes back to writing, and how it impacts so many more things. I’m not just talking about writing for its own sake or writing essays, but being able to create lists, being able to gather your thoughts and articulate them well, being able to express yourself correctly.
I think people get frustrated, and we see it in the student discussion boards where you can tell the student’s almost there, they have a really great idea. They don’t have the methods of expression just yet. And that’s where I see a lot of them, if they’re turning to generative AI, it’s like, “I’m almost there. Give me a little boost.”
Bjorn Mercer: And I liked how you talked about that example, because when I think of today where there’s near universal literacy, everybody can read, nearly everybody, I have to say that. But there is not universal adult literacy where people have information literacy, they can evaluate information. If misinformation comes out or disinformation or even propaganda comes out, can you actually sit through that article and read it and recognize that there are inconsistencies within the article? And I think so much information, so much confusion, so much partisan bickering comes about because people just don’t sit down and read. Take the time to evaluate information.
And I’m going to say, that’s no different than 200 years ago when people were largely illiterate and just listened to what people told them and said, “Okay, sure.” Because they’re skimming through everything and not taking the time. And so, if people skim and don’t really read, don’t really go for comprehension, how are they going to be good writers if they’re skipping everything anyway? And so, this actually transitions perfectly to our second topic, biases in large language models (LLMs). Now, we could go on for days in podcasts about biases and language biases and culture, but how do we see this in large language models?
Jennifer Fisch-Ferguson: Talking about skimming, we are going to skim the surface of this one, otherwise we would have a 200-year-long podcast. Where I like to start the discussion in large language models is looking at people specifically in my lens of writing that are English as a second language learners, speakers, writers. Not saying that they didn’t necessarily grow up bilingually so, but when you’re looking at other languages… And I’m going to get a little into the weeds here….
So, if we’re going to look at regional language, the writers that live in the Appalachians are going to write very, very differently than people in Atlanta, Georgia, who are going to be very, very different from the people in San Diego, California. And that’s just regionally. Now, when you want to throw in people who come from other cultures – and again, we have regions there too, so this is a broad stroke – people from Japan are going to write very differently from people in Korea, and North and South Korea differently from each other.
Vietnam, Thailand, the language models are slightly different. So, the writing is going to be different. And periodically you’ll have students, again, a lot of times I think it’s people not feeling confident. Yes, you’re always going to have people in a rush and people who just don’t want to. But some of these instances are people where again, it’s that feeling of I’m going to be ridiculed if I do not have language rules down. And especially in our case, we’re working with college students who want to be in college for the most part and just don’t want to feel inadequate.
But when you start to look at language models, there’s even the disparity when using African-American Vernacular English – sometimes, with how phrases are put together, people will look at it and say, well, that’s not proper English. And there’s a whole bunch of discussion around that. But looking through the lens of AI, I think students sometimes come in and try to make that fix: “is this going to sound more like what my professor wants to see when I have to put something up on a discussion board?”
And that’s the spot where – I know people want to debate ideas, to sound like they know what they’re talking about – but it breaks down what I look at as authentic student voice. I don’t think that discussion boards have to be this paragon of really crisp professional writing. This is you putting your ideas out there and engaging in learning discourse with your fellow students. It’s okay that all the words don’t come out correctly just yet.
Writing is a practice – it’s the one thing that the people I work with are probably sick of hearing from me. It’s a process, it’s a practice. You don’t get better at piano or guitar if you don’t practice, much to the chagrin of my children. It’s a practice thing. You have to do it. You don’t get better at speaking unless you practice. I was born and raised in the United States.
I teach writing. I have a lot of vocabulary. Would you like to know how often I trip over my words every single day? And this is after decades of practicing. So, when we look at some of these large language models, some of the biases come in from the inputs that generative AI has been given. Has it been given enough cultural inputs to actually recognize, oh, here’s this extra layer that needs to inform your writing? Or will it completely strip the student of their voice, of their academic voice, and change everything around?
The student’s happy, like, oh, this looks better. This looks American, but is it worth losing your authentic voice? Is it worth losing your academic voice when you are trying to stand up and say, I’ve researched this, I’ve looked at it. I’ve critically read text, I’ve critically analyzed text, and here is my output, here is my essay. And AI kinda robs a student of that authenticity by not recognizing language models have to come into play.
Bjorn Mercer: It makes me think of… just focusing on the US, because that’s where we were both raised: I’ll say two generations ago or so, for academic language, there was one way to write. And I would say it’s a very Eurocentric, traditional way of writing. If you didn’t conform to that, then you were not, say, accepted. You wouldn’t have passed, you wouldn’t have gotten your degrees – any number of things. Accepting anything else was not part of the process then. Fast-forward to today, there is a little more acceptance of different inputs, different ways of phrasing, different cultural elements that go into academia, which is a good thing.
Jennifer Fisch-Ferguson: So, this is a conversation, not surprisingly, I’ve been having quite a bit lately in various areas of academia. I’ve been writing an African-American literature course for our university, and some of the feedback – and feedback is great, it gives me a way to process things – is that sometimes the students feel a little “othered” in class because the material isn’t familiar to them. And my response to that is, “congratulations. You recognize that.”
The other part is, it’s okay to be a little othered if this is not the culture that you have been part of. This is part of cultural exploration. But the last part I put back to them: why do you think being othered might be a little important? And people don’t like to think about it. People like to be comfortable. Everybody likes to be comfortable. I want to sit in a place where I understand the rules, I understand the references.
And when we ask people to do that in academic settings, definitely in literature and writing settings, sometimes that initial pushback is kind of like, but I like being comfortable. But are you an active learner? Do you want to learn new things? And even talking with other faculty, I was speaking to a friend of mine who teaches philosophy, and I asked, well, why isn’t there any Chinese philosophy? Why isn’t there any Egyptian philosophy?
And his answer to me was, “well, I’m not going to get rid of canon.” And my rebuttal was, “I never asked you to ‘get rid of’. I asked you to add to.” Because how do you get a truly clear picture of what’s happening around you? How do you expect your students to write thoroughly about philosophy in 920 AD if you are not looking globally? You are actually hindering their learning. And that is how I see generative AI with a lot of these things.
It has hindered learning because it has not had all the inputs given to it. It doesn’t have all the cultural aspects. It’s been given a very narrow field. People are adding to it, that is changing. But we’re still looking through a very, very small and focused lens when using generative AI. And then, on top of that, a lot of people use very broad prompts. Many people do not go back and refine their prompts.
Even if you do go back and refine your prompt, the generative AI may decide that you don’t know what you’re talking about and give you an answer that is erroneous. We call them “hallucinations”, which I think is hilarious. So, there’s a lot more to it. For example, I was trying to play around with this concept of, “well, what if I’m going to write in a different language? How can AI help me?”
The long story short was it would’ve taken me less time just to write my own paper. By the time I went in and prompted and revised and prompted and revised, then had words with the AI for giving me false information, I was like, I could have had this written. It wasn’t a help. Now, granted, a lot of people stop with step one: “I’m going to give you a prompt, do your thing.” It’s a tool. It should be looked at as a tool. People really need to understand that it’s a very narrow scoped tool. There’s not enough about language models even included to do the work that you think you want it to do.
Bjorn Mercer: And that’s excellent. I like how you brought in with the course that some people feel “othered” and people being comfortable, and most people want to feel comfortable. But I’ll say that there’s some people that want that comfort no matter what – they never want to feel uncomfortable. But there’s other people where every day, they are “othered”.
There’s a large majority of people that never have to experience that. And so, when they do experience that, they’re uncomfortable, and they sometimes don’t know how to react. And we can actually see that is where culture changes, and people are uncomfortable with culture changes. My own example is I’ve been a classical musician my entire life.
We’ve talked many times about classical music, and I am trying to learn blues, and it’s uncomfortable. Theoretically, I understand blues. All the theory, everything about that. And I’ve even listened to blues. I mean, Howlin’ Wolf, Muddy Waters, all those great ’40s, ’50s, and ’60s artists I absolutely love, but I have not just sat down and played with their recordings.
And so, when I’m learning blues, I feel uncomfortable because it’s not a musical language that I’m proficient at. Theoretically, I know it. And so, I have to go beyond my comfort and really jump into that, if I ever want to play it, for it to be authentic. And I think, as an example, that’s a way of trying to go beyond your comfort to incorporate something new.
Jennifer Fisch-Ferguson: And we’re not saying abject pain is necessary – not unless you’re learning to plank for seven minutes, and that was not great. But I think being averse to discomfort, wanting to stay very stagnant in that comfort, also creates that deficiency in people when you’re working toward something.
One thing with a lot of writers is we join critique groups. Is that comfortable? Oh, no, no. I put out something I thought was well-written. And writing’s personal. People try to say it’s not, it’s personal. It’s very personal. So is creating music, so is speaking another language. But I knew what I meant on the page, and then when I got a critique back, yeah, it stings just a little bit.
But when you want to do something well, you lean into that discomfort just a little bit, maybe after a couple of days of being mad about it and saying, “okay, but what can I learn from this?” And that’s the part that I try to pass on to people about writing is, yeah, you can use generative AI. It’s going to be a tool, but do you gain anything from that?
Does it force you to get uncomfortable, really and truly?
When we look at all these things, it’s okay to be uncomfortable. Play with AI a little bit. And it’s that little discomfort of, well, “what if I use it and it actually is better than I thought it was going to be?” Maybe that makes me feel a little insecure, because this is my wheelhouse, it’s my topic, it’s what I’ve studied. Well, what if AI does just as good a job as me? To which I encourage you: go break it, see how far it will go. See how intricate it gets.
Bjorn Mercer: Absolutely excellent words. And so, the last topic we’re going to talk about is student learning and development. What are your thoughts there?
Jennifer Fisch-Ferguson: So, we’ve really kind of covered that a little bit through our talk, but one of the complaints that still comes in when we reach out to businesses and say, “Hey, how are our students performing?” is about some of these soft skills, which I don’t consider to be soft: critical reading, critical analysis, critical writing.
If you don’t practice writing, you don’t practice the skill set. If you don’t practice reading – or if you skim – you don’t practice the skill set. And critical analysis: how much actual discourse do you get when talking to a machine, versus being in a small group, having a discussion, with people disagreeing?
One of the things I put out in discussion boards all the time is, one, remember we’re in an academic setting. Two, never disagree with the author, you don’t know them, but you can disagree with the topic that’s being discussed.
You can disagree about how it’s being spoken about. You may have other examples. Because along with people wanting to be comfortable, sometimes they feel like having a different opinion is going to start a fight or an argument. And a lot of times people just… they don’t have it in them, not after a long day of work and everything else. And then you come into your school space and somebody wants to be argumentative with you.
But how good is your critical analysis if you can’t see both sides of something? Even if you don’t agree, you can look at both sides and still keep your opinion. Nothing wrong with that, but are you willing to look at both sides? Are you willing to investigate a little bit more? Are you willing, again, to get into that place of being uncomfortable and dig in a little deeper? And sometimes that doesn’t happen, which reduces our analysis skills.
We can even bring it down to food: if you ate the same thing every single day and never changed the seasonings that you put on it, you’re going to say, no, I don’t like hot sauce because it’s too much for me. Have you tried hot sauce? Well, which hot sauce have you tried? Have you tried the ones that have a little bit of sweet to go with the heat? There are a ton out there, and it kind of comes back into our academic spaces too.
It’s okay to disagree about a topic. There are plenty of topics that I can talk with my colleagues about that I’m not going to agree with them. That’s just not what it is. But I am willing to listen to their opinion. I am willing to do some of my own research to dig in. And I think using generative AI breaks that down a lot.
You’re not actually doing the work because then you’re not looking through multiple different sources that might have a different opinion. You’re putting it in one place. Bjorn and I like to play with the platforms. Maybe you put it into three different AI platforms and they have slightly dissenting opinions, but then are you following through with it?
And the worst part about it, I think, because I write creatively, is that generative AI dampens that. When you just say, do the work, and it does the work – are you curious? And if you’re not curious, can you still be creative? So again, it’s like paying somebody else to do your work. You don’t get that engagement. It doesn’t give you those thoughts that creep up when you’re… my grandmother would call it wool-gathering, but daydreaming.
Sometimes you have random weird thoughts that come through, are you still getting those if you let something else do the work for you? And it’s not asking you to be curious, it’s not asking you to be creative, it’s not asking you to further your own investigation of knowledge. And I think that is something students should be aware of. And a lot of times when we talk about using generative AI, it’s linked in with copyright violations or plagiarism. Yes, that is 100% true. But I think explaining to some of our students or having the conversations of, but did you lose your sense of curiosity about your topic?
Bjorn Mercer: I think that’s the perfect wrap up where we’ve talked about ethics and AI, we talked about biases. AI should help you learn. AI should help you develop your skills most importantly. As you said, it should not become a crutch. And so, each person has to come to that realization and hopefully through curiosity, they realize that, oh, there’s this amazing tool out here, latest technology. This technology will probably become so common that we’ll probably stop talking about, oh, generative AI. It’s just going to be part of the fabric of what we use, but it needs to be there to help you learn, and it needs to be there to help you develop your skills. And so absolutely wonderful conversation today, Jennifer. Any final words?
Jennifer Fisch-Ferguson: Stay curious. I think that’s the best part. Definitely being an educator, I’m a lifelong learner. Yeah, I like to engage with new things. I like to be curious about things because we have a lot of amazing things around us, so don’t get stuck in the rut of just doing just because.
Bjorn Mercer: Great final words. And so, thank you, Jennifer, for a great conversation. Today we are talking about learning to write in the era of AI. Of course, my name is Dr. Bjorn Mercer, and thanks for being here.