Graduate uses ChatGPT to write a university essay – and it gets a passing grade

A graduate who used a powerful artificial intelligence ‘bot’ to write a university essay successfully hoodwinked a professor – who gave the report a passing grade. 

Pieter Snepvangers used the controversial ChatGPT AI to write an essay as part of an experiment to see if the software could be used by cheaters for their coursework.  

He told the tech to put together a complex 2,000-word piece on social policy – which it did in 20 minutes.

Pieter then asked a lecturer at a top Russell Group university to mark it and give their assessment, and was stunned when the tutor said they’d have given it a score of 53 – a passing 2:2 mark.

The professor branded the essay ‘fishy’ but said it was closer to the work of a ‘waffling, lazy’ student than an AI, admitting: ‘You definitely can’t cheat your way to a first-class degree, but you can cheat your way to a 2:2.’

Graduate Pieter Snepvangers said he was shocked to find that an essay written entirely by an AI could have scored a passing university grade 

Pieter says he was shocked to find a university lecturer admitting students could ‘cheat their way’ to a passing grade.

Since launching three months ago, ChatGPT has caused concern among schools and universities across the world.

The software allows users to ask any question and receive an AI-generated answer in seconds which mimics the style and syntax of a human response.

Students in parts of America have been banned from using the software in schools, and UK universities are ‘scrambling’ to review how they can detect its use.

Writing for student news website The Tab, Pieter said: ‘I found a fairly prestigious Russell Group university and asked one of its lecturers if I could take his final-year social policy assessment to see if ChatGPT could really work.

‘I wanted to know what mark I could get and whether or not he’d spot the essay was written by a bot.

‘Under the premise of being a third-year social policy student completing a 2,000-word essay worth 75 per cent of a unit, I got to work.’

Pieter started off by simply asking the software the essay question and requested 2,000 words with references.

However, the tech – created by OpenAI, the firm Elon Musk co-founded – initially gave back only 365 words, less than a fifth of the requested length.

The graduate decided to take a different approach and asked the bot ten separate questions all relating to the essay question, and eventually managed to get 3,500 words from the AI. 

Pieter came up with the experiment and asked a university lecturer to see if they could tell if the essay was written by an AI

He then took the best paragraphs the software had given and copied them in an order that ‘resembled the structure of an essay’.

He didn’t change or rewrite any of the words, and his essay was complete in 20 minutes.

He said: ‘All in all, 20 minutes to produce an essay which is supposed to demonstrate 12 weeks of learning.

‘Not bad. I nervously sent it off to my lecturer and awaited the verdict.’

Once the essay was marked, Pieter was shocked to find that although the software hadn’t delivered a top-notch grade, it had still achieved a passing 2:2.

When asked whether it was obvious the piece was written by a robot, the lecturer didn’t ‘think it would have been abundantly clear’, but said it was a bit ‘fishy’. 

His feedback continued: ‘Basically this essay isn’t referenced. It is very general. It doesn’t go into detail about anything. It’s not very theoretical or conceptually advanced.

‘This could be a student who has attended classes and has engaged with the topic of the unit. The content of the essay, this could be somebody that’s been in my classes. It wasn’t the most terrible in terms of content.’  

A snippet of the essay written by the controversial AI software – which a university lecturer said was ‘fishy’ but was reminiscent of essays written by ‘waffling, lazy’ students

The only element on which ChatGPT completely failed was in-text referencing. However, the lecturer said that if a student ‘had sneaked some in which seemed plausible’, the essay would be given a mark of 53.

They also said that if Pieter had simply added references from the module’s reading list, he ‘might even have hit the high 50s’.

The lecturer even admitted that out of the essays he had marked so far, a shocking 12 per cent of them showed signs of being written using AI software.

Pieter said: ‘The truth is the software doesn’t give you the answer in one go. You will have to structure its responses in a more coherent order.

‘But I spent ten minutes doing this and got a 53 – it wouldn’t have taken much longer to add a few references from the reading list and bump it to a high 2:2.

‘ChatGPT is only three months old. You wouldn’t bet against it being able to write an essay worthy of a 2:1 in another three months.’

***
Read more at DailyMail.co.uk