ChatGPT can’t pass an experiential knowledge exam

Because artificial intelligence software has no real-world life experience to draw from, there need be no worry about its implications for academic assessment.

I see from my LinkedIn network that many academics are discussing what the implications of artificial intelligence could be for assessing their students. ChatGPT has even passed an MBA exam! Reading about this, I was entirely unconcerned. Should I be? My lack of concern stems from how I have been designing courses and setting assignments for nearly 20 years. But rather than assume that my assessments are immune to the misuse of artificial intelligence, I thought I would write up my approach and see whether fellow academics can spot any potential problems. If not, then hopefully sharing my approach will be of use to others.

Before 2006 I always gave my students two options for their final essay on any of my modules. For those more interested in grappling with a diversity of ideas in the literature, I asked them to critique a particular set of ideas related to the course. For those more interested in reflecting on their experiences, either in life or within some of the class exercises, I asked them to critically reflect on that experience in relation to some of the ideas from the course. However, in 2006 a couple of things happened. The way grading standards were evolving meant that we, as academics, were expected to provide a grid for how we would award marks. For one of the essay options, the ability to reflect critically on one’s experience in relation to scholarship was essential, yet it was absent from the other essay type, even though both were assessments for the same module and would need to be marked using the same grid. I knew I had to change something, and an incident soon helped me decide which way to go.

A couple of my students, who were seemingly not very passionate about the subject matter nor vocal in class, submitted essays that were decent but had two telltale signs of something not being right. Firstly, some of the writing was in a style reminiscent of the old-fashioned English sometimes written and spoken by the educated classes in India, yet these students were not from the Indian subcontinent. Secondly, the referencing was done in an academic journal style, using the terms ‘ibid.’ and ‘op. cit.’. I took a wild guess and googled for Indian academic essay-writing services, only to discover that such services were indeed available across a whole range of academic subjects.

What to do?

One of my colleagues joked that perhaps we could subcontract the marking of the essays to the same companies that were being paid to write them. I enjoyed that colleague’s irreverence, and because he has since passed away I don’t mind sharing his joke (such is the stifling bureaucratic approach of so many academic institutions today that I would feel awkward otherwise).

Well, I assume he was joking.

But what I wanted to do was avoid this problem arising in future, for everyone concerned, as there are serious complications for students suspected of subcontracting their own essays. So from then on I focused only on essay questions that required people to interrogate their own experiences – whether from within class exercises or from their own lives.

Fortuitously, a few years later I moved to Cumbria University, where there is a more-than-century-old tradition of what’s called ‘experiential learning’. That means valuing experiences in class and in the community as forms of knowledge that are as meaningful as whatever could be read, remembered and critiqued. Because I respected this heritage, I sought to integrate it into all the courses I designed and had validated in the fields of sustainability and leadership studies. There were therefore always multiple kinds of experiences in class, designed to help students arrive at a different kind of knowing than comes from reading and discussing alone. In addition, we really wanted our students to try to do things in the world as part of their studies. Essay questions therefore became things like the following:

“Choose one instance in your professional or personal life where you experienced something to which the ideas from the module seem relevant, and share your insights on it. How do the scholarship and the class experiences influence how you question or understand that experience, and vice versa?”

“Select one of the experiential exercises from the module, and explore what you learned about two or more of the key learning outcomes of the module, and how the exercise might even be improved in future.”

“Find two people with apparently different approaches or views on one of the key themes of the course and facilitate a discussion between them on that. In your essay, explain your experience of that process, including what you learned from the dialogue that complements or contrasts with your own reading of relevant scholarship from the module. Offer confidentiality to the participants.”

“In a group of two or three, discuss your views on what was most important for you in particular to learn and unlearn from participating in the module. In your essay, present where there is similarity or disagreement between you, and share any insights about that similarity or disagreement. Include reference to key readings from the module.”

Could artificial intelligence software write good essays in response to these questions? If so, would there be telltale signs that could be easily spotted, so that students could then be quizzed in ways that would quickly reveal they didn’t write them? Comments below, please.

Although the current ChatGPT chatter focuses on students, I think that concern can be easily addressed by taking the approach I describe above. It will fortunately mean making experiential learning more central to course design. The longer-term implication concerns academic journals. How many of them publish dull literature reviews that simply summarise papers produced by keyword searches of databases and then subjected to content analysis? Academics who do that form of literature review seem to aspire to be like machines, so it is a small step to replace their efforts with the real thing. Therefore, credible journals, owned and governed by people who are interested in the pursuit of knowledge rather than just status and money, will need to evolve the kinds of articles they accept. Which is great news, no?


2 thoughts on “ChatGPT can’t pass an experiential knowledge exam”

  1. Perhaps it says something about the dominant education systems that, when the most powerful and revolutionary technology ever comes along and offers the ability to navigate and process a significant portion of the corpus of human knowledge in radically more creative and accessible ways, their attention barely moves beyond self-serving concerns for the integrity of assessment – perhaps understandable, as assessment is such a critical component of the dominant consumerisation and commodification of education.

  2. Yes, indeed, you are right. I also designed module assessments that required experiential and situated knowledge. This of course meant that the ‘teaching’ itself had to include experience. Not really a surprise, marking became a pleasure: discovering with interest the things the students had themselves discovered and valued. More of a surprise, their evaluations [compulsory and standardised across the institution] often declared that they had less work to do than in other modules of their courses, but also, in contradiction, that they had read much more and learned much they did not already know. Apparently learning didn’t feel like work. Unfortunately I frequently had to ‘find evidence’ that the experiential learning was sufficiently ‘academic’ to satisfy the quality control. It seems to me ChatGPT is like any other tool – the attitude of the user to the uncertainties of unknowns and the wobbles of identity in learning from experience makes a difference to how it is perceived and then used.
    As you say, evolution will happen. I hope it is in the experiential direction (great news), rather than the nightmare of more algorithms and a monetised industry of exam boards and moderation committees to police so-called education.

