ChatGPT could help students cheat — but it could also revolutionize education
This controversial new form of AI comes with a silver lining.
ChatGPT is a powerful language model developed by OpenAI that has the ability to generate human-like text, making it capable of engaging in natural language conversations. This technology has the potential to revolutionize the way we interact with computers, and it has already begun to be integrated into various industries.
However, the implementation of ChatGPT in the field of higher education in the U.K. poses a number of challenges that must be carefully considered. If ChatGPT is used to grade assignments or exams, there is the possibility that it could be biased against certain groups of students.
For example, ChatGPT may be more likely to give higher grades to students who write in a style that it is more familiar with, potentially leading to unfair grading practices. Additionally, if ChatGPT is used to replace human instructors, it could perpetuate existing inequalities in the education system, such as the under-representation of certain demographics in certain fields of study.
There is also the potential for ChatGPT to be used to cheat on exams or assignments. Since it is able to generate human-like text, ChatGPT could be used to write entire assignments or essays, making it difficult for educators to detect cheating.
For example, ChatGPT (the GPT stands for “generative pre-trained transformer”) could be asked to “write an essay about the challenges that ChatGPT poses to higher education in the U.K.” In fact, the first four paragraphs of this article were written by ChatGPT in response to this exact request.
ChatGPT’s response (and this is your human author writing now) actually amounted to more than four paragraphs, as it went on to articulate its inability to fully replicate the expertise and real-world experience that human teachers bring to the classroom. This particular line of inquiry made me both appreciative of its concern for my job security and somewhat cynical of its Machiavellian designs to win me over.
In my research and teaching, I am involved in developing assessment and feedback processes that enrich the student experience while also equipping them with the skills they need upon graduation.
The truth is, if I were looking at 200 pieces of work submitted by first-year undergraduate students on this topic, I would probably give ChatGPT’s efforts a pass. But far from being worried about the challenges this AI program might pose, I see this instead as an opportunity to improve the way we assess learning in higher education.
The upside of AI
For me, the major challenge that ChatGPT presents is one I should be considering anyway: How can I make my assessments more authentic — meaning, useful and relevant? Authentic assessments are designed to measure students’ knowledge and skills in a way that is particularly tailored to their own lives and future careers.
These assessments often involve tasks or activities that closely mirror the challenges students may encounter in real life, requiring them to apply knowledge and skills in a practical or problem-solving context. Specific examples might include asking a group of engineering students to collaborate on a community issue as part of the Engineers without Borders challenge or inviting environmental science students to curate an art exhibition in a local gallery that explores the local impact of the climate crisis.
While there will always be a need for essays and written assignments — especially in the humanities, where they are essential to help students develop a critical voice — do we really need all students to be writing the same essays and responding to the same questions? Could we instead give them autonomy and agency and, in doing so, help to make their assessments more interesting, inclusive, and ultimately authentic?
As educators, we can even use ChatGPT directly to help us develop such assessments. So, rather than posing the question that generated the start of this article, I could instead present students with ChatGPT’s response alongside some marking instructions and ask them to provide a critique on what grade the automated response deserves and why.
Such an assessment would be much more difficult to plagiarise. It would also invite the students to develop their critical thinking and feedback skills, both of which are essential when they graduate into the workforce, no matter what their profession. Alternatively, ChatGPT could be used to generate scenario-based tasks that require students to analyze and solve problems they may encounter in their future careers.
This feels like a Pandora’s box moment for assessment in higher education. Whether we decide to embrace ChatGPT in our pursuit of authentic assessment or passively acknowledge the ethical dilemmas it might present to academic integrity, there is a real opportunity here. This could help us reflect on how we assess our students and why this might need to change. Or, in the AI’s own words:
ChatGPT could be a useful tool for creating authentic assessments, but it would still be up to the instructor to design and implement the assessment in a way that is meaningful and relevant for their students.
The sophistication and capability of AI technologies are accelerating. Rather than reacting with trepidation, we must find and embrace the positives. Doing so will help us think about how we can specifically tailor the assessment of students and provide better and more creative support for their learning.
This article was originally published on The Conversation by Sam Illingworth at Edinburgh Napier University. Read the original article here.