Ever since generative AI was introduced, students have made use of it, often uncertain about how, or to what degree, they were allowed to use the results.
Some teachers wondered whether they had received a remarkably good student essay – or one written by AI. Detection tools quickly popped up promising to help with that decision, but their high error rates only added to the confusion.
There are a few reported cases of students getting away with having their whole thesis written by AI, like this one from Switzerland – probably only the tip of the iceberg. On the other hand, some students who worked hard have been wrongly punished for cheating, as in “The software says my student cheated using AI. They say they’re innocent. Who do I believe?” and “Georgia college student used Grammarly, now she is on academic probation”.
So how can universities improve this situation?
AI text generators like ChatGPT are here to stay, but guidelines on acceptable use are needed.
Banning AI tools completely seems unrealistic, and it would also prevent students from learning how to use AI and how to judge its output – depriving them of the benefits AI tools can bring to education.
So students should learn how to use them, but not turn their brains off and let ChatGPT ghost-write their whole text for them.
Writing text yourself is also crucial for learning how to think, research and argue, so dropping essay and thesis writing from the educational agenda entirely is not an option.
Instead, a ruleset for AI use (and abuse) needs to be established and clearly communicated to students – they want, and need, to know what the rules are.
Such rules should be set individually, depending on the subject, education level and other factors. Generally, we believe it is reasonable and realistic to allow students to use AI for inspiration and to reflect on their ideas, while they develop the main line of thought, structure the text and write the majority of the words themselves. In the final write-up phase they could use AI feedback again, for revision and proofreading.
With a policy on acceptable AI use in place, the next question arises:
How can compliance with such rules be checked while avoiding mistrust, false accusations and unfair assessment?
During the writing phase, close supervision can certainly help to follow the student’s progress and reduce the risk of abuse. However, this is a very time-intensive approach, and in many cases prohibitively laborious.
After submission, AI detection tools could be applied to the finished text, but these tools have been shown to err in both directions:
- Some entirely AI-written texts are wrongly classified as human-written, especially if the student used disguising techniques such as prompting for a particular writing style.
- On the other hand, human-written texts are too often classified as AI-written, and the probability is even higher if AI tools were used in a permitted way – for example to revise the final text.
And in any case the problem of missing evidence remains: as the detection tools rely on statistical and heuristic algorithms, they can never provide conclusive proof, so some brazen cheaters get away with it while others are punished for misconduct they never committed.
The research and writing process should be documented to provide evidence-based proof of genuine work!
Some institutions have pushed ahead and now require students to keep a record of their progress, for example through written reports or a research diary. This marks a shift towards paying more attention to the process of writing rather than solely the end product. Data on how the thoughts, research and text evolve is collected as the work happens and documented for later inspection if deemed necessary.
Mentafy is the tool that makes this approach practical with very little overhead, in a manageable and time-efficient manner:
Students can write their thesis or essay just as they normally would, e.g. in Microsoft Word, while Mentafy accompanies them and records in the background, providing feedback during writing and, at the end, a data-based report that certifies their genuine work.
Mentafy reports aggregate metrics on the submission, indicating whether a student’s work should be investigated in detail for misconduct. With limited resources, this allows a data-based decision on which texts to put under scrutiny – and in case of doubt, the detailed data records can be inspected, providing comprehensible and reliable evidence.
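To make the idea of aggregate process metrics more concrete, here is a minimal sketch of how a recorded editing history could be condensed into a few summary indicators. The event format, field names and metrics below are hypothetical illustrations chosen for this example, not Mentafy’s actual data model or scoring method.

```python
from dataclasses import dataclass

# Hypothetical record of one editing action, as a process recorder might log it.
# The event kinds and fields are illustrative only.
@dataclass
class EditEvent:
    timestamp: float  # seconds since the start of the session
    kind: str         # "typed" or "pasted"
    chars: int        # number of characters added by this event

def aggregate_metrics(events: list[EditEvent]) -> dict:
    """Condense an editing history into a few aggregate indicators."""
    typed = sum(e.chars for e in events if e.kind == "typed")
    pasted = sum(e.chars for e in events if e.kind == "pasted")
    total = typed + pasted
    hours = (events[-1].timestamp - events[0].timestamp) / 3600 if events else 0.0
    return {
        "total_chars": total,
        # A high share of pasted text may warrant a closer look at the full record.
        "pasted_share": round(pasted / total, 3) if total else 0.0,
        "active_hours": round(hours, 2),
        "chars_per_hour": round(total / hours, 1) if hours else 0.0,
    }

# Example: mostly typed text over several hours, with one small paste.
history = [
    EditEvent(0, "typed", 1200),
    EditEvent(5400, "pasted", 300),
    EditEvent(14400, "typed", 2500),
]
print(aggregate_metrics(history))
```

A summary like this proves nothing on its own; its role, as described above, is only to help decide for which submissions a closer look at the detailed records is worthwhile.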






