Note: This article is Part 1 of a two-part series about the use of artificial intelligence (AI) in higher education and the steps that higher education professionals can take to enhance their classrooms with AI.
In recent years, few topics have provoked more heated discussion in academic circles than the use of artificial intelligence (AI) in higher education. Lately, there have been a series of alarmist articles from sources such as The Washington Post and Psychology Today about how AI will end the United States’ love affair with higher education or how AI is the Pandora’s box of higher education. Amid the anxieties and opportunities arising from integrating AI into higher education, I contend that human-to-human interaction will continue to dominate higher education.
However, educators should proactively embrace the latest generation of AI technology to enhance students’ learning experiences. They should also learn how higher education students may use the technology and take appropriate countermeasures before the mass adoption of AI during a multi-year transition.
Forward-looking higher education instructors can take a series of steps inside and outside the classroom to prepare for the widespread implementation of AI that will take place over the next generation. For example, educators should experiment with AI to develop assessments and assignments not easily replicated by AI.
Also, these instructors should be able to identify the type of text that the latest generation of AI software can create as a means of combating academic dishonesty. They can forge mentorship relationships with students inside and outside the classroom and start to think about how to create personalized learning and tutoring experiences before the widespread implementation of AI.
These preparatory steps will give educators a more informed understanding of the pros and cons of AI in higher education. Firsthand knowledge of AI’s capacities and current weaknesses will not only forge deeper relationships between students and teachers, and among students themselves, but will also allow instructors to move beyond the assumptions and biases that color some contemporary critiques of AI. Moreover, AI experimentation at this juncture will lay the groundwork for improved academic scholarship in the future.
In addition, educators, including online instructors, can incorporate in-person discussions, oral presentations, study groups, and team projects to deter academic dishonesty. These teaching strategies will also strengthen student-to-student interactions and faculty-to-student relationships, the heart of the educational experience.
Confronting the AI in Higher Education Critics
The term AI refers to the ability of computers to perform tasks previously completed only by humans. For example, the latest generation of AI software has the potential to draft essays, answer prompt questions, and complete a collection of tasks that mimic the work that students and teachers currently perform.
AI software has been around for more than one generation. Nearly every student and academic has used spelling and grammar-checking software for years, and few are willing to return to a time before these technological innovations.
However, AI’s impact on education has become hotly debated. In recent months, ChatGPT, Google Bard, and other services have allowed students and educators to experiment with text generation powered by AI, and these experiments have sparked fierce debates over the role of AI in higher education.
Genuine fear exists among academics that their students will use AI to cheat, as these services can produce essays that are nearly impossible to distinguish from human writing. The fact that many students are already using AI plays into these fears. For example, Diverse Education noted that a survey revealed that 30% of students used one of the most common AI services last semester.
The critics of AI also raise other legitimate concerns, including that AI exacerbates existing social disparities. According to these critics, AI systems:
- Are biased
- Undermine academic integrity
- May stifle creativity
- Raise privacy concerns
- Are not ready for the classroom
Moreover, critics argue that practical, legal and regulatory issues require resolution before AI’s widespread implementation in higher education. These critics of AI build upon other criticisms of the technology raised by social justice activists.
For example, in the 2020 Netflix documentary “Coded Bias,” activists from the Algorithmic Justice League argued that AI can exacerbate existing social disparities. They also contend that AI provides governments and private companies with tools that can be used for nefarious purposes, such as racial profiling or discriminatory practices in hiring and law enforcement.
The criticism of the latest generation of AI software in higher education should not be dismissed out of hand as AI critics raise several genuine concerns. However, the objections are based on an erroneous assumption that AI’s latest generation will be fused into the current higher education system.
Such a view minimizes AI’s transformative nature and the degree to which AI will change the entire educational landscape over the next generation. In effect, many of the critics assume that teaching and learning methods will remain unaltered.
Critics of AI in education, such as Neil Selwyn in The Future of AI and Education: Some Cautionary Notes, have raised issues ranging from how AI will displace educators to how AI is inherently biased. While there is merit to many of the criticisms of AI, they often mask the real motivation for the complaint: many educators fear change itself.
Schooled and thriving in an educational system created for the Industrial Revolution, many instructors seem to fear that AI will transform the relationship between educators and students and radically alter the educational landscape. This possible transformation creates anxiety. However, change also presents an opportunity for growth and a much-needed reorientation of the educational profession.
Existing Academic Research Does Not Support AI Bans
While some educators have presented a long list of AI criticisms, these criticisms often do not have strong support in existing scholarship. Surprisingly, educators have, in large part, ignored AI until recently. There needs to be more research into how earlier generations of AI – like automated grammar and spell-check software, internet searches, and other overlooked uses of computer-assisted tools – have already transformed the educational landscape over the past generation.
Few educators are looking to ban or discourage the use of spell-checking or internet searches. Yet a 2020 International Journal of Educational Technology in Higher Education article by researchers Olaf Zawacki-Richter, Victoria I. Marín, Melissa Bond, and Franziska Gouverneur, “Systematic review of research on AI applications in higher education – where are the educators?”, notes that despite AI’s transformative potential and long history, educators lack research into many issues related to AI.
Consequently, some current criticism of AI may derive from biases and assumptions, rather than scholarly research. Also, there is little research on the impact of AI software that can generate text like ChatGPT or other services, as these tools are in their infancy. However, the lack of research hasn’t stopped educators from advocating for bans on generative AI.
Inaction Regarding AI Is Not a Viable Option for Educators
While some educators have advocated for bans on AI, others may prefer to put their heads in the sand and avoid any action, waiting for the dust to settle. Rather than being immobilized by the fear of AI or operating based on assumptions, higher education instructors should actively shape the direction of AI technology in a path that maximizes the benefits for as many people as possible and does so in an ethical manner.
To achieve these goals, educators must actively experiment with and study the pros and cons of AI by using the latest generation of AI software. They should embrace a new role in the educational structure and do so in a manner that minimizes potential harm to students while carefully considering the complex ethical and legal issues raised by AI.
Some critics may argue that the use of AI in higher education should wait until there has been systemic research and that the hasty adoption of AI may do more harm than good. These arguments assume that educators have a choice about whether to implement AI.
However, students and other educators are already using AI, and those who opt to do nothing will place themselves, their schools, and their students at a competitive disadvantage. The real debate should be focused on how AI can be used in an ethical manner that benefits the largest number of students.
Bans on student use of AI are likely to be ineffective because, according to Melissa Heikkilä of MIT Technology Review, current methods for detecting AI-generated text are only partially reliable. So despite any prohibitions schools may implement, they can only enforce bans on student AI use by reverting to entirely in-person assessments or proctored exams, options that are unattractive for many educational institutions.
Given the widespread student use of AI, educators must now adapt to a changing educational landscape. Many students and other educators have access to AI tools that can generate answers to many pre-existing assessments and assignments.
Inaction is not a viable choice. While educators may complain about student misuse of AI, student use of this technology demands a creative response from educators, one that may require new assessments for a changed instructional landscape.
Related: ChatGPT: The Pros and Cons of Using AI in the Classroom
AI Is Much More Than Chatbots and Automated Grading
Most of the discussion around using AI in higher education derives from a limited understanding of the power and potential of AI in higher education. Certainly, the field of AI includes chatbots, automated grading and text generation.
However, the AI field also includes tools that allow instructors to create more personalized learning experiences for students. For instance, AI could be utilized to offer individualized tutoring and learning opportunities to enhance student learning beyond the current capabilities of modern educators.
Research published in the International Journal of Environmental Research and Public Health reveals that while the application of personalized education is in its infancy and requires more technological innovation, AI has the potential to enhance educational outcomes for those with learning differences. AI tools could benefit the student populations that most need additional help, such as:
- Students with learning differences
- Non-traditional students
- Adult learners
For example, educators could use AI to develop personalized tutoring tools to help those students who need extra help. AI could also be used to present material in a different format for students with learning differences.
While there has been much discussion of using chatbots and automated grading via AI, these uses assume that AI merely represents a tool educators use within the existing educational structure. This line of reasoning originates from a potentially faulty assumption that the existing educational landscape will remain the same and that the relationship between educators and students will remain fixed.
However, such assumptions reflect a failure to understand AI’s potential to transform education over the next generation. Today’s educational landscape will likely undergo a radical transformation over the next generation due to a collection of factors, including the widespread use of AI and a changing economy.
The critics of AI argue that using AI to grade student work raises ethical and privacy concerns, and AI may reflect bias. However, these criticisms assume that student assessment and assignment will remain static. As Rose Luckin noted in a recent Guardian article, student assessments and assignments will likely change in response to AI as educators cannot use assessments that are quickly completed by AI.
Focusing on grading and automated responses via chatbots merely represents the first stage of a groundbreaking technology. Properly harnessed and paired with a skilled instructor, AI will usher in a hybrid learning environment that blurs the line between the online and in-person learning experience, a potential not yet fully appreciated by many contemporary commentators.
But even the limited use of AI in grading admittedly raises various legal and ethical questions. For example, the submission of student work and the retention of that work raises issues related to privacy, student consent, and the possible stifling of student creativity.
At the same time, students may balk at an educational system in which schoolwork that takes them hours to complete is graded by a machine in seconds, without human intervention, creating a perceived element of unfairness in the educational system.
Moreover, any method of grading via AI is likely to reflect bias. Automated grading tools are trained on benchmarks and rubrics that may not allow sufficient creativity, rewarding students for canned responses while penalizing individuality.
Finally, there are intellectual property issues raised by using any generative AI system trained on material protected by intellectual property law, according to a recent Massachusetts Institute of Technology article by Dylan Walsh.
Related: ChatGPT and Its Use by Non-Traditional College Students
AI Is Unlikely to Truly Replace Human Interaction in Higher Education
This complex maze of ethical and legal questions – including the potential for discrimination in the grading process and the biases that AI programs may reflect – is why AI will not be widely used as a primary grading tool in higher education in the near future.
Similarly, while AI chatbots can provide simple answers to questions, higher education students will likely always demand answers to their pressing questions from a human instructor. For example, I regularly receive emails and calls from my students. Ultimately, no chatbot can replace the reassuring voice and guidance that a skilled human professor provides in a moment of student panic or confusion.