AI-Driven Transparency: Transforming Academic Integrity with Mentafy

Markus Goldbach and Johannes Knabe, the visionaries behind PlagScan, are setting new benchmarks with Mentafy—a groundbreaking solution that detects plagiarism and AI misuse reliably, all while respecting user privacy. Mentafy represents a major leap forward in building fair and future-ready education systems.

The rapid rise of artificial intelligence (AI) is revolutionizing education, presenting both opportunities and challenges. For educational institutions and educators, the task of integrating AI responsibly while preserving academic integrity is more pressing than ever. Educators are navigating a fine line—distinguishing genuine effort from potential misuse, often without clear evidence to guide them.

This is where Mentafy steps in: a solution designed to make writing processes transparent and provide educators with the insights necessary for fair and informed evaluations.

Mentafy’s Origin: A Future-Oriented Solution

Born from years of expertise, Mentafy is a response to the growing limitations of traditional plagiarism detection tools in the era of AI. Founders Markus Goldbach and Johannes Knabe, the minds behind PlagScan, recognized that it was no longer sufficient to evaluate only the final submission.

Mentafy goes further by documenting the entire writing process, offering educators a detailed view of how and when content was created—whether independently written, copied, or influenced by AI.

“Evaluating the final text alone no longer works,” says Goldbach. “We need to bring transparency to the writing process itself to ensure fair assessments and prevent misuse.”

This approach fosters trust, while also delivering actionable insights into students’ academic development that were previously unavailable.

Why Traditional Tools Fall Short and How Mentafy Fills the Gap

Conventional plagiarism detection tools rely on matching text to existing sources, making them ineffective against original-looking AI-generated content. Early AI detectors, too, often fail to reliably identify nuanced AI involvement or provide evidence-backed insights.

Mentafy addresses these shortcomings with a process-oriented approach that goes beyond surface-level checks. Key features include:

  • Writing Transparency: Mentafy tracks which parts of a text were independently written and which were influenced by plagiarism, ghostwriting, or AI. This enables educators to assess authenticity and originality with clear, evidence-based insights.
  • Privacy and Security: Students’ drafts and revisions are safeguarded. Mentafy only analyzes the final submitted text, respecting privacy by excluding intermediate versions or discarded content from its reports.
  • Efficient Project Management: Mentafy provides structured guidance, actionable tips, and intuitive tools to help students refine their writing, responsibly integrate AI tools, and develop lasting academic skills—all while meeting deadlines.

Mentafy’s Vision: Advancing Fairness in Education

Mentafy is more than a practical tool; it represents a paradigm shift in education. By documenting the entire writing process, it fosters transparency and accountability, contributing to a more equitable academic environment.

Looking ahead, Mentafy’s data on writing processes could be used to train virtual tutors, offering precise, adaptive, and affordable support tailored to each learner’s needs.

In an era where AI is transforming education, Mentafy is setting the standard for ethical integration. By promoting fair assessment practices and empowering educators and students alike, it ensures that human creativity and AI can thrive together.

Rethinking Academic Integrity: Our Path to 2025

At Mentafy, 2024 has been an exciting and pivotal year. Our mission is to support learners from the initial idea to the submission of academic work, using a process-based approach that creates transparency regarding plagiarism, ghostwriting (contract cheating), and AI use in writing.

As a provider of a software-as-a-service solution to promote academic integrity, we assisted nearly 100 educational institutions and their students in creating and evaluating academic work throughout 2024. We look forward to taking the necessary steps in the coming year to help even more schools and universities achieve their goals.


What We Achieved in 2024

Over the past year, we have refined our writing project support and significantly improved our algorithm for analyzing the writing process. This allows us to create even more precise evidence-based reports that reveal which parts of a text were independently crafted and which originated from external sources.

One of the key insights we gained is that many educational institutions face challenges in integrating such process-based writing support from the start of a project. Instead, they often rely on “post-hoc” approaches, such as submitting completed documents to plagiarism detection software or using AI detection tools. (To learn why this falls short of addressing the problem, read here.)

A heavy-handed approach could involve full surveillance—using keyloggers or other proctoring tools to track and monitor every interaction with a text document. At Mentafy, we consciously take a different path: No surveillance, no keyloggers!
Our software analyzes differences between automatically saved text versions for unusual patterns, but during the evaluation, only the final submitted text is reviewed. Out of respect for privacy and data protection, early drafts and deleted content are not included in the reports.
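To illustrate what analyzing differences between saved versions can mean in practice, here is a minimal sketch, not Mentafy’s actual algorithm: a standard diff between two consecutive autosaved versions can surface insertions too large to have been typed between saves. The character-level comparison and the fixed threshold are assumptions of this sketch.

```python
import difflib

def flag_large_insertions(prev_version, next_version, threshold=200):
    """Flag text blocks that appear between two autosaved versions in one
    piece and are implausibly large for normal typing between saves.

    A hypothetical heuristic for illustration only: the character-level
    diff and the fixed threshold are assumptions of this sketch.
    """
    matcher = difflib.SequenceMatcher(None, prev_version, next_version,
                                      autojunk=False)
    flagged = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        # "insert" and "replace" opcodes describe text new in next_version.
        if op in ("insert", "replace") and (j2 - j1) >= threshold:
            flagged.append(next_version[j1:j2])
    return flagged
```

A block flagged this way is not proof of misconduct by itself; it is merely a pattern worth investigating, for example against cited sources.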


What’s Coming in 2025

To better address the challenges of integrating our tools into existing workflows, we’ve decided to expand our offerings. In 2025, Mentafy will introduce a solution for post-hoc analyses. This will allow us to extract valuable insights from metadata available after final submission, such as that stored in the Word .docx format or in version histories from Google Drive or Microsoft OneDrive.

While this approach doesn’t achieve the same level of granularity as a live process analysis, it still provides reliable evidence of original work and uncovers serious violations—even when the writing process wasn’t monitored from the beginning.
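To illustrate the kind of metadata such a post-hoc analysis can start from, the following sketch uses only the Python standard library to read the core-properties part of a .docx file (a .docx is a ZIP archive). It shows the raw signal available in the file format, not Mentafy’s actual implementation.

```python
import zipfile
import xml.etree.ElementTree as ET

# XML namespaces used by the OOXML core-properties part (docProps/core.xml).
NS = {
    "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties",
    "dc": "http://purl.org/dc/elements/1.1/",
    "dcterms": "http://purl.org/dc/terms/",
}

def extract_core_properties(docx_file):
    """Read authorship metadata from a .docx file (path or file object).

    docProps/core.xml records the creator, the last modifier,
    creation/modification timestamps, and a revision counter.
    """
    with zipfile.ZipFile(docx_file) as zf:
        root = ET.fromstring(zf.read("docProps/core.xml"))

    def field(tag):
        element = root.find(tag, NS)
        return element.text if element is not None else None

    return {
        "creator": field("dc:creator"),
        "last_modified_by": field("cp:lastModifiedBy"),
        "created": field("dcterms:created"),
        "modified": field("dcterms:modified"),
        "revision": field("cp:revision"),
    }
```

A revision counter of 1 on a lengthy thesis, or a creation timestamp minutes before the deadline, would be examples of patterns worth a closer look.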


Our Promise for 2025

We aim to make it easier for educational institutions to foster academic integrity without additional effort or an atmosphere of control. While reducing the workload for educators, Mentafy empowers learners to confidently and transparently certify their originality.

Thank you for trusting Mentafy. Together, we’re setting new standards for academic integrity and creating an environment where teaching and learning thrive in mutual respect.

Here’s to a successful 2025 with integrity!

Enhancing Academic Integrity Beyond Google Docs’ or MS Word’s Version History

Educators today face the considerable challenge of dealing with generative AI. Alongside all the opportunities it offers as a tool to promote education, it also brings the threat of students taking the ‘shortcut’ instead of learning how to research, think, and write themselves. A clever method that is gaining popularity is to have students write their papers in Google Docs/Drive or Microsoft Word/OneDrive and then use the version history to gauge the originality of their work. While this method provides insights into document changes over time, it lacks the depth and precision needed to ensure comprehensive academic integrity. And ultimately, the version history is quite easily tampered with if a student already knows what the teacher expects it to look like. That’s where Mentafy steps in, offering a streamlined, automated solution that goes beyond basic version tracking to analyze writing patterns, identify irregularities, and provide educators with a concise, actionable report.

Beyond Version History: What Mentafy Brings to the Table

With Mentafy, teachers no longer have to sift through endless document versions manually. Instead, our SaaS platform applies a thorough algorithmic analysis, highlighting areas of concern without requiring educators to dive deep into individual student timelines (see the version scanner for links to OneDrive and Google Drive, or the .docx analysis if you have a .docx document at hand). This way, every student is treated equitably, and only behavior that might suggest academic misconduct is flagged. Educators receive a standardized, objective report, ensuring a fair and efficient approach for all students.

Unlike Google Docs’ version history, which logs every keystroke, Mentafy focuses solely on misconduct-relevant data. We analyze and document the final version of a submitted document, allowing instructors to see only the results of our analysis, not every single addition or deletion. This ensures privacy and keeps the focus on what truly matters for academic integrity.

Fine-Grained Insights that Matter

Mentafy offers a more nuanced, granular view of the writing process than Google Docs can provide. While version histories may get purged or become overwhelming to navigate, Mentafy retains critical details in a concise format, allowing educators to detect patterns that signify whether a text is genuinely original, copied, or typed from an external source.

Additionally, Mentafy gives students the chance to proactively address any issues or misunderstandings by allowing them to comment on their work during the writing process. This open line of communication reduces the likelihood of later misinterpretations, and students are empowered to clarify their intentions, creating an environment of transparency and trust.

Streamlined, Objective Integrity Checks for All

Mentafy’s approach eliminates the need for teachers to single out individual students for additional scrutiny. Every submission undergoes the same thorough review, ensuring consistency and fairness in the process. Instructors receive insights on irregular writing patterns that may indicate potential issues, saving time and ensuring integrity is maintained across the board.

Our goal is to alleviate the manual burden on educators while providing an effective, privacy-respecting solution that upholds academic standards. With Mentafy, educators have a reliable partner in academic integrity, enhancing their ability to nurture honest scholarship without compromising student privacy.

In a landscape where ensuring originality is becoming more complex, Mentafy offers the best of both worlds—privacy and precision. Let Mentafy handle the process so you can focus on what matters most: teaching and guiding your students toward success.

 

AI in Education: A Student Caught Between Opportunity and Accusation

Digital transformation is reshaping global education, with AI becoming a pivotal tool in academic life. Yet, as AI gains prominence, new challenges emerge: one student faced serious misconduct accusations despite responsible AI use. Her story highlights the critical need for clear, transparent AI guidelines to support fair and future-ready academic environments.

As digital transformation accelerates, artificial intelligence (AI) is playing an ever-greater role in academia. Educators are exploring how to integrate AI effectively into teaching, while students face the challenge of using these tools responsibly without risking their academic integrity. A recent incident at a university in Brandenburg (Germany) highlights the complexities involved, illustrating the need for a balanced approach that supports both educators and students. Crafting thoughtful, clear guidelines is essential to build a fair and future-ready educational system that harnesses the power of AI without compromising academic values.

A Case in Point: When AI Transforms from Aid to Burden

As education adapts to the digital age, AI has become a vital resource for students, especially those navigating language barriers or trying to improve their academic writing and comprehension skills. However, using AI responsibly isn’t without its challenges. In a recent case at a university in Brandenburg, one student faced accusations of academic misconduct despite legitimate AI use, resulting in serious setbacks: missed deadlines, unexpected financial costs, and uncertainty about her academic future.

This case sheds light on an important question: How can students use AI as an effective learning tool without jeopardizing their academic standing? And for educators, the question is equally pressing: How should they respond to the evolving role of AI in academic work?

Guidance, not Surveillance: A New Role for Educators

Educators face the challenge of promoting responsible AI use while ensuring students’ work retains originality and reflects their own efforts. As AI tools become more integrated into learning, distinguishing between genuine support and excessive assistance is becoming increasingly challenging. Rather than viewing AI solely as a threat to integrity, educators have an opportunity to harness its potential to enhance learning. AI can empower students and give educators new ways to foster critical skills through personalized methods, adaptive learning, and creative teaching solutions. By focusing on how AI can positively impact the learning process, educators can foster a culture of trust and transparency.

Mentafy, for example, is a platform designed to support students in every step of their writing process, promoting a structured and transparent approach that shows how and when AI tools are used. This transparency allows educators to make fair assessments while giving students confidence in their capabilities. Mentafy’s approach helps educational institutions create a learning environment rooted in integrity, guiding students in developing essential academic skills for future success.

Building a Future-Ready Education System through Collaboration and Clarity 

The experience of the student in Brandenburg highlights the urgent need for updated guidelines on AI use in academia. Without clear directives, students are vulnerable to accusations that lack a foundation. Educational institutions must move beyond rigid approaches and implement flexible policies that foster responsible AI use. Platforms like Mentafy offer valuable support in achieving this goal. By documenting each stage of the writing process, Mentafy enables educators to assess students’ independent work while upholding academic standards. This approach also promotes essential skills such as critical thinking, research, and self-discipline.

Educators should be seen not as enforcers but as guides, working alongside students to shape a digital learning landscape. Clear policies, open communication, and the mindful use of tools like Mentafy are critical to creating a fair, transparent, and future-ready education system that benefits both educators and students.

Why a ChatGPT Watermarking Tool Will Not See the Light of Day and Would Not Solve the Problem of AI Ghostwriting Anyway

The use of generative AI in education and beyond remains controversial, particularly regarding authenticity and academic integrity. Some universities already have policies requiring the disclosure of all AI use, but without a control mechanism, one can only hope for the students’ honesty.
AI detectors sometimes do not work reliably (AI Detection Tools: When You Turn It In It’s Too Late!). Generative AI market leader OpenAI initially developed its own tool for analyzing the origin of text, but soon discontinued it due to a lack of accuracy (New AI classifier for indicating AI-written text).

It is therefore not surprising that attempts have been made to give AI-generated texts a “watermark” for easy recognition and classification. The language model introduces a small bias in word selection (technically, “token selection”), i.e. it favors certain words to embed a statistically specific, recognizable pattern in the text (A Watermark for Large Language Models). As a result, a text with a “watermark” will differ slightly from one without: because the words are not selected entirely freely, the text generally becomes somewhat more monotonous and slightly poorer in quality.
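To make the idea concrete, here is a toy sketch of the detection side of such a scheme: a keyed hash splits the vocabulary into “green” and “red” halves, and a watermarked text is one whose green-token fraction lies statistically above the 50% expected by chance. The whitespace tokenization and the hash rule are simplifications for illustration, not the actual method from the paper.

```python
import hashlib
import math

def is_green(token, key="secret"):
    # Toy partition: a keyed hash assigns roughly half of all possible
    # tokens to the "green" list. A watermarking model would bias its
    # sampling toward green tokens; a detector only needs the key.
    digest = hashlib.sha256((key + token).encode("utf-8")).digest()
    return digest[0] % 2 == 0

def watermark_z_score(text, key="secret"):
    # Under the null hypothesis (unwatermarked text), each token is green
    # with probability 0.5, so the green count is Binomial(n, 0.5).
    tokens = text.split()  # simplification: real schemes use model tokens
    n = len(tokens)
    if n == 0:
        return 0.0
    greens = sum(is_green(t, key) for t in tokens)
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)
```

A z-score well above 2 suggests the text carries the watermark; any rewording or translation pass replaces tokens and pushes the score back toward zero.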

OpenAI recently reported that this process works with a high recognition rate (Understanding the source of what we see and hear online). However, the same article also explains why this procedure will not be used on a regular basis. There are three ways to change the watermark pattern and thus prevent classification:

  • Translate the text into another language and back again using translation software (a simple trick that fraudsters have already used successfully with Copy&Paste plagiarism).
  • Have the text reworded by another language model.
  • The language model itself can be ‘tricked’ during generation by instructing it via a prompt to insert specific words or characters between every word and then removing them afterwards with a simple ‘search & replace’. Bypassing the watermarking process is therefore “trivial for malicious actors”.

Apart from that, a unilateral introduction by OpenAI would probably be seen as a competitive disadvantage compared to other language models such as Google’s Gemini or Anthropic’s Claude. After all, texts outside the academic context could then be classified as AI-generated without their creators’ knowledge, which might be interpreted to their disadvantage.
Furthermore, a watermark cannot differentiate between types of use: for example, whether a text was developed and written by the authors themselves, with AI only providing the linguistic finishing touches at the end, or whether it was generated entirely by AI. This could disadvantage non-native speakers if it led to the assumption that there was no personal contribution at all.

Therefore, documenting the research and writing process still seems to be a much more reliable way of transparently and fairly assessing the extent to which the author contributed to the creation of the text.

AI-Generated Text: Why Teachers Can’t Always Detect AI Misuse

In today’s AI era, a new technology has revolutionized how students write their assignments: AI writing assistants. This rapid advance of technology created a significant challenge for educators: properly evaluating their students’ work. Before the rise of AI, educators were mostly concerned with whether students were plagiarizing or using ghostwriters; now they must also determine whether a student wrote their text independently or with AI assistance, which is no easy task.

The old problems were difficult to solve, and realistically, they were never fully resolved. This new development presents the same challenge once again. How can teachers determine if a student wrote on their own? Are their suspicions justified? How can they prove their suspicions? Do AI detection tools work? 

This article underscores the difficulty teachers face and how Mentafy can be their solution in verifying their suspicions and shedding light on the authenticity of students’ work. 

The Difficulty of Detecting AI-Generated Text

AI writing tools are getting better every day, and so are the students, who are learning how to use them and get the best results from them. Here are some reasons why it is becoming increasingly difficult for teachers to identify AI-generated text.

Sophistication of AI Writing: AI writing tools like ChatGPT have become incredibly adept at mimicking human writing styles. They can generate nuanced, contextually relevant content nearly indistinguishable from human-written text. This makes it extremely difficult for teachers to detect AI use based mainly on intuition or stylistic cues.

Varying Prompts and Outputs: AI tools can generate different outputs from the same prompt or slightly varied prompts, producing a unique text each time and leaving teachers with the question: “Who actually wrote this?” This variability means that even if a teacher suspects a student used AI, the lack of a consistent pattern in AI-generated content makes it hard to confirm the suspicion.

Real-World Evidence: Numerous articles, such as this one, document cases where students used AI to write their papers or homework and passed successfully, with their supervisors none the wiser. These cases underline how effective AI tools are at producing high-quality, seemingly authentic academic work.

Bias in Detection Tools: Existing AI detection tools often exhibit biases, particularly against non-native English speakers. Research shows that these tools disproportionately flag content from students whose first language isn’t English, leading to unfair disadvantages and potential misjudgments.

The Limitations of “Just Suspicion”

While teachers may suspect AI use based on inconsistencies in writing style or a lack of critical thinking, they rarely know how to prove it, and without concrete proof these suspicions remain just that: suspicions.

Traditional plagiarism checkers are not designed to identify AI-generated content, leaving a significant gap in detection capabilities.

Other platforms that claim to detect AI give wrong answers in many cases, especially for texts that mix human and AI writing. Several studies and articles have tried these tools on different texts: human-written, AI-written, and 50/50 mixtures. The results are more confusing than helpful, leaving teachers unsure whom to trust and what their next steps should be.

Furthermore, what one teacher might consider suspicious, another might deem acceptable, so there is no standardization in assessments. These subjective judgments lead to inconsistent evaluations and potentially unfair penalties.

Additionally, the situations teachers and students find themselves in mostly focus on punishment rather than fostering genuine learning and critical thinking. Accusing students of using AI without solid evidence can create an environment of distrust and tension, undermining the educational mission.

Mentafy: Documentation for Decisive Insights

Importance should be given to evaluating the process of writing an academic paper, rather than judging it solely by the verdict of a plagiarism or AI-detection tool. To make this possible, the research and writing process must be documented to provide evidence-based proof of students’ work.

Mentafy offers a solution that goes beyond mere suspicion by providing comprehensive documentation of the writing process. Students can write their thesis or essay just as they normally would, e.g. in Microsoft Word, while Mentafy runs in the background, recording the process, giving them feedback during writing, and finally producing a data-based report that certifies their genuine work.

Key features include:

Final Report: Mentafy tracks all changes made throughout the writing process, allowing teachers to see how a student’s work evolved. This transparency helps in assessing the student’s engagement and understanding of the material. You can also generate interim reports to gain insights during the project and support your students when necessary.

Writing Recorder: We have developed a writing-pattern detection that flags suspicious patterns in student work, prompting further investigation alongside the documented writing process. Based on the writing pattern, we differentiate whether a text comes from the mind of the student or was simply copied from another source, e.g. generative AI. Combined with clear citation guidance, we thereby help students proactively cite such passages correctly.

Research Recorder: This feature helps students stay organized and conduct accurate research. It records how and when external sources were used, ensuring proper citation practices and reducing the risk of plagiarism. By documenting the research process, Mentafy provides valuable context for the final submission. Teachers eventually get supervisor-friendly data about the research process, to gain insights for feedback and evaluation.

Conclusion

It is important to equip educators with the right tools they need to navigate the complexities of AI-generated content. By offering detailed documentation and insightful analysis, Mentafy helps teachers move beyond suspicion, ensuring a fair and educational approach to maintaining academic integrity. In an era where AI tools can produce high-quality academic work indistinguishable from student-written texts, Mentafy stands out as a critical resource for educators committed to fostering genuine learning and critical thinking skills.

AI Is Here to Stay. Where Does Education Go?

In a world increasingly shaped by digital technologies, education is facing a paradigm shift. Artificial intelligence (AI), especially generative AI, is not only changing the education sector, but also opening up new horizons for individual learning and teaching methods. As an innovative company specializing in the development of software for the responsible and transparent use of AI, Mentafy is at the forefront of this transformation. In this blog post, we highlight the most significant opportunities and risks that AI brings to the education of the future and outline actions that should be taken to ensure that AI has a positive impact on our education and, ultimately, our society.

Opportunities through AI in education

1. Personalization of learning

AI has the potential to revolutionize learning by creating individual learning paths tailored to the specific needs and abilities of each student. Adaptive learning systems continuously analyze learning progress and adjust teaching methods and materials accordingly. This not only promotes more effective learning, but also motivates students as they can learn at their own pace and according to their own interests.

2. Accessibility and inclusion

One of the greatest strengths of AI is its ability to break down barriers and make education accessible to all. AI-powered platforms can provide learning materials in different formats, be it text, audio or video, facilitating access for students with different needs and abilities. In addition, these technologies enable the global sharing of knowledge by making learning resources available in regions that previously had limited access to quality education.

3. Efficiency and support for teachers

AI can automate a variety of administrative tasks that normally take a lot of time, such as grading exams or managing student data. This gives teachers more time to focus on the individual support and educational development of their students. AI can also support teachers in the creation of teaching materials and provide valuable insights into effective teaching strategies by analyzing learning data.

Risks posed by AI in education

1. Data protection and security

The collection and analysis of learning data by AI systems comes with a significant risk of misuse and inadequate security of personal information. Protecting student privacy must be a top priority to ensure trust in these technologies.

2. Dependency and loss of human capabilities

Over-reliance on AI could lead to both teachers and students neglecting their critical thinking skills and creativity. There is a risk of dehumanizing the educational process if interpersonal interaction is reduced in favor of automated systems.

3. Inequality and barriers to access

Access to AI-supported educational resources is often unevenly distributed, which could exacerbate existing educational inequalities. Regions with limited financial and infrastructural resources may struggle to keep pace with technological development.

Measures for a positive impact of AI on education and society

1. Promotion of digital skills and analog basics

To ensure the effective and critical use of AI in education, schools and educational institutions should implement digital literacy programs for teachers and students. This will ensure that all stakeholders can reap the benefits of the technology and understand its risks, while basic non-digital skills are preserved at the same time.

2. Creation of ethical guidelines and regulations

Clear ethical guidelines and regulations for the use of AI in education are essential to ensure data protection and prevent misuse. Transparency in data use and compliance with strict data protection standards should form the basis of any AI application in education.

3. Investment in research and infrastructure

Governments and educational institutions should invest in research to develop fair and inclusive AI systems and provide the necessary infrastructure to enable access to these technologies for all learners. These investments are crucial to ensure that no one is excluded from the benefits of AI.

Conclusion

Artificial intelligence offers immense opportunities to transform and improve education. Through personalized learning, increased accessibility and support for teachers, AI can create a more inclusive and effective learning environment. At the same time, however, we need to be mindful of the risks and take proactive measures to ensure the responsible use of these technologies. At Mentafy, we firmly believe that with the right balance and careful implementation of measures, AI can have a positive impact on education and society. Let’s shape the education of the future together – transparently, responsibly and inclusively.

Why is the use of a writing assistant to record text creation advisable?

Ever since generative AI was introduced, students have made use of it, often uncertain about how, or to what extent, they were allowed to use the results.
Some teachers wondered whether they had received a remarkably good student essay, or one written by AI. Tools quickly popped up that tried to help with the decision, but their high error rate increased the confusion rather than resolving it.
There are a few reported cases of students getting away with having their whole thesis written by AI, like this one from Switzerland, and these are probably only the tip of the iceberg. On the other hand, some students who worked hard were punished for cheating; see, for example, “The software says my student cheated using AI. They say they’re innocent. Who do I believe?” and “Georgia college student used Grammarly, now she is on academic probation”.
So how can Universities improve this situation?

Banning AI tools completely seems unrealistic and would also deprive students of the opportunity to acquire competence in using AI and judging its output, cutting them off from the benefits AI tools can bring to education.
So students should learn how to use them, but without turning their brains off and letting ChatGPT ghost-write their whole text for them.
Writing text yourself is also crucial for learning how to think, research, and argue, so removing essay and thesis writing from the educational agenda entirely is not an option.
Instead, a ruleset for AI use (and abuse) needs to be established and clearly communicated to the students; they want and need to know what the rules are.
Such rules should be set individually, depending on the subject, education level, and other factors. Generally, we believe it is reasonable and realistic for students to be allowed to use AI for inspiration and to reflect on their ideas, while they develop the main line of thought and the text structure and write the majority of the words themselves. In the final write-up phase they could use AI feedback again, for revising and proofreading.
With a policy on acceptable AI use in place, the next question arises: how can adherence to it be verified?

During the writing phase, close supervision can certainly help to follow the student’s progress and reduce the risk of abuse. However, this is a very time-intensive approach, and in many cases prohibitively laborious.
After submission, AI detection tools could be applied to the finished text, but these tools have been shown to err in both directions:

  1. Some entirely AI-written texts get wrongly classified as human-written, especially if the student used disguising techniques such as prompts with instructions on writing style.
  2. On the other hand, human-written texts are too often classified as AI-written – and even more so if AI tools were used in a permitted way, for example to revise the final text.

And in any case, the problem of missing evidence remains: because the detection tools apply statistical and heuristic algorithms, they can never provide conclusive proof – so some brazen cheaters get away, while others are punished for misconduct they never committed.
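The consequences of these two error modes can be made concrete with a back-of-the-envelope Bayes’ rule calculation. The numbers below are assumed purely for illustration, not measured figures for any particular detector:

```python
def flagged_precision(prevalence, sensitivity, specificity):
    """Probability that a flagged text really is AI-written (Bayes' rule).

    prevalence:  share of submissions that are AI-written
    sensitivity: share of AI-written texts the detector flags
    specificity: share of human-written texts it correctly passes
    """
    true_flags = prevalence * sensitivity
    false_flags = (1 - prevalence) * (1 - specificity)
    return true_flags / (true_flags + false_flags)

# Assumed scenario: 10% of submissions are AI-written; the detector
# catches 95% of them but also flags 10% of genuinely human texts.
p = flagged_precision(prevalence=0.10, sensitivity=0.95, specificity=0.90)
print(round(p, 3))  # → 0.514
```

Under these assumptions, roughly half of all flagged texts would actually be human-written – which is why a flag on its own can never count as proof of misconduct.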

Some institutions have pushed ahead and introduced requirements for students to keep a record of their progress by writing reports or a research diary. This marks a shift towards paying more attention to the process of writing rather than solely the end product. Data on the evolution of the ideas, research and text is captured as it happens and documented for later inspection if deemed necessary.
Mentafy makes that approach practical with very little overhead, in a manageable and time-efficient manner:
Students can write their thesis or essay just as they normally would, e.g. in Microsoft Word, while Mentafy runs in the background, recording the process, giving them feedback during writing, and finally producing a data-based report that certifies their genuine work.
Mentafy reports aggregate metrics on the submission, indicating whether the work should be investigated in detail for misconduct. With limited resources, this allows a data-based decision on which texts to put under scrutiny – and in case of doubt, the detailed data records can be inspected, providing comprehensible and reliable evidence.
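To illustrate what such a process-based screening signal could look like, here is a hypothetical sketch – the metric names and threshold below are invented for illustration and are not Mentafy’s actual algorithm:

```python
from dataclasses import dataclass

@dataclass
class WritingSession:
    typed_chars: int   # characters entered via keystrokes
    pasted_chars: int  # characters inserted via paste events

def paste_ratio(sessions):
    """Share of the text that arrived via paste rather than typing."""
    typed = sum(s.typed_chars for s in sessions)
    pasted = sum(s.pasted_chars for s in sessions)
    total = typed + pasted
    return pasted / total if total else 0.0

def needs_review(sessions, threshold=0.5):
    """Flag a submission for closer inspection when most content was pasted."""
    return paste_ratio(sessions) > threshold

sessions = [WritingSession(800, 200), WritingSession(500, 1500)]
print(needs_review(sessions))  # → True (1700 of 3000 chars were pasted)
```

A single aggregate like this would never be decisive on its own; its role is merely to triage which submissions merit a look at the full, timestamped record.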

Mentafy report statistics example chart

Join Mentafy at LEARNTEC 2024: Revolutionizing Academic Integrity with AI

We’re excited to announce that Mentafy will be at LEARNTEC 2024 from June 4th to June 6th in Hall 2, Booth J26! Discover how our cutting-edge technology is transforming academic writing by helping students use AI safely and effectively.

Showcasing Our First Product: The Academic Paper Assistant

Our innovative platform assists students in creating academic papers while ensuring integrity and originality. Key features include:

  • Writing Pattern Algorithm: This powerful tool detects potential misconduct and protects students from unintentional plagiarism, ghostwriting and, in particular, unauthorized AI usage, ensuring that student work is authentic.
  • Research Protocol: We help students streamline their research and document it properly, so that it seamlessly becomes part of their writing diary.
  • Writing Project Management: Students get a helping hand in following the necessary steps toward a great paper. Teachers can customize these steps according to their own guidelines.

Why Visit Us?

At our booth, you can:

  • See Live Demonstrations: Experience our academic paper assistant in action and learn how it safeguards academic integrity.
  • Meet Our Team: Connect with experts passionate about advancing education through technology.
  • Get Exclusive Insights: Learn about upcoming features and enhancements.

Join us at LEARNTEC 2024, Hall 2, Booth J26, to explore how Mentafy is revolutionizing academic support with AI. Let’s shape the future of education together!

For more information, visit www.mentafy.com.

AI Detection Tools: When You Turn It In It’s Too Late!

In the AI era, tools like ChatGPT, Gemini or Bing Chat are reshaping how students learn and create. While these advancements empower students, they also raise persistent concerns about academic integrity. Plagiarism checkers have become staples in educational institutions; now their successors, AI detection tools, promise to differentiate between human-created work and AI-generated content. Their effectiveness is hotly debated in the scientific community, and as AI technology advances and its text output becomes ever more human-like, the reliability of these misconduct detection tools is increasingly under scrutiny.

The Limitations of AI Detection Tools

While traditional plagiarism checkers excel at finding copied content, they struggle with a new challenge: identifying AI-generated text. These tools typically scan for matching phrases, sentences, or paragraphs against a vast database of existing content. However, AI can generate entirely unique text, blurring the lines between AI and human-written text, making detection fraught with challenges:

  • Accuracy Issues: AI detectors often struggle with false positives (flagging original work as AI-generated) and false negatives (missing instances of AI-generated text). Research indicates that even the best-performing AI detection tools only manage about a 50% success rate in accurately identifying AI-generated text. This inconsistency raises concerns about their reliability when used as the sole measure for detecting AI-generated content in academic settings.
  • Bias Concerns: Beyond the issue of effectiveness, there’s a pressing concern regarding the fairness of AI detection tools. There is growing evidence that AI detection tools can exhibit biases, particularly against non-native English speakers, whose syntax or phrasing might differ from the norm. These biases can lead to unfair penalizations for students who are already disadvantaged by language barriers.
  • Transparency and Methodology: Many AI detection systems operate as ‘black boxes’ with proprietary algorithms that are not open to scrutiny. The algorithms are statistical and therefore do not provide compelling evidence in case of doubt. As a result, educators and students are left without a clear understanding of why certain texts were flagged, which undermines trust in these tools and makes it difficult to appeal or rectify decisions based on their output.

Why AI Detection Results Should Not Be Taken For Granted

Relying on AI detection tools for academic evaluations poses significant risks:

  • Overemphasis on Final Output: These tools assess the final submission without any insight into the writing process. They do not consider the research, drafts, or the evolution of the submitted piece. This approach might discourage learning and exploration, focusing instead on punitive measures for final outputs.
  • Inhibiting Educational Growth: If students are only taught to avoid detection, they may miss out on learning how to conduct research ethically, how to cite properly, and how to engage critically with sources. Education should foster these skills rather than just policing the end product.
  • False Friend: Overreliance on these tools can give institutions a false sense of security, believing that they are effectively combating misconduct and upholding standards. This might lead to complacency, ignoring the need for more comprehensive education on academic integrity and the use of AI.

The Role of AI Detectors in Education

Relying solely on AI detection tools is insufficient to address the challenges of unauthorized AI content in educational settings. A more comprehensive approach that prioritizes the development of critical thinking and digital literacy skills is essential. While AI detectors can be a valuable tool, a broader educational framework is needed. This framework should empower students to make informed and ethical decisions regarding AI use within their academic work.

Education technology experts suggest leveraging AI detectors as educational tools rather than punitive measures. This approach can help students understand the nuances of AI-generated content and the importance of academic integrity, thereby enhancing their learning experience.

Or do we need post hoc AI-Detection Tools at all? – Introducing Mentafy

  • From Reactive to Proactive: As we consider the need for a more holistic approach to academic integrity, it’s clear that new solutions are necessary. This is where Mentafy comes into play—a platform designed not just to detect but to educate and integrate throughout the academic process.
  • Beyond the Final Product: Mentafy offers a proactive approach by embedding itself within the student’s academic journey, providing feedback and guidance. This is not about catching students after the fact; it is about guiding them from the start, ensuring they understand and embody the principles of integrity throughout their work.
  • Empowering Both Students and Educators: By documenting and analyzing the entire research and writing process, Mentafy offers a unique insight into student learning behaviors, promoting a culture of honesty and creativity. It helps educators understand not just the ‘what’ of student submissions, but the ‘how’ and ‘why,’ fostering a deeper, more meaningful engagement with academic work.

Moving forward

As AI continues to permeate educational settings, the focus must not be on detecting AI usage alone, but rather on understanding and guiding it. Education must evolve towards an integrated approach that prioritizes the learning process and personal development over simple outcome assessments.

Mentafy empowers educators and institutions to break free from the limitations of current AI detection tools. By embracing this platform, they can foster environments where students learn to use AI responsibly and transparently. This shift is essential in nurturing well-rounded, ethically-minded scholars in the digital age.