Last week, we shared a behind-the-scenes story that surprised even us: people were using Mentafy – built for academic integrity – to check love letters for AI use.
Are you curious, too? Run a free check for AI usage.
Now that Valentine’s week is over, we can see: the pattern didn’t just repeat in 2026. It scaled [6].
What we saw in 2026 (in plain terms)
- Once again, roughly 1 in 10 submissions looked like personal communication, not academic work (based on internal classification).
- Our love-letter filter identified nearly 2,000 love-letter-type uploads around Valentine’s week – roughly 5x more than last year.
- And the most telling signal: a meaningful share of these love letters showed strong AI patterns – consistent with what we saw previously (internal aggregate signal; not a “proof” label for any individual text).
What made us smile (and think): some people didn’t even upload typed messages. They took photos of handwritten love letters on their phones and uploaded them. That’s a use case worlds away from term papers – but it tells you everything about the emotion behind the question:
“I don’t just want a nice message. I want something that’s real.”
Why this matters beyond romance
We asked our LinkedIn community whether they would care if their partner used AI to write their love letter. While 57% gave a clear “yes”, the remaining 43% at least demanded to know about it. Literally nobody said “no”. And that’s not just a Mentafy anecdote. A representative YouGov survey for GMX/WEB.DE found that 72% of Germans say they wouldn’t want AI to write their love letters or Valentine’s messages [1]. A recent study summarized by the University of Kent also suggests that using AI for socio-relational messages can backfire: even when people disclose it, they may be judged as less caring, less authentic, and less trustworthy [2].
The compliance problem: AI detection alone can’t carry “truth”
If society increasingly asks “Is this real?”, we need tools that can answer responsibly – especially in education, recruiting, and other high-stakes contexts.
The hard truth: post-hoc AI detectors are not reliable enough to be used as proof on their own. OpenAI retired its own AI text classifier due to a “low rate of accuracy” [3]. In German higher education, experts warn that AI detectors can produce misleading probability scores and bias-driven misclassifications – especially across different writing styles and language backgrounds [4].
The rising star: process evidence (version history) and digital forensics
So what scales better than guesswork? Evidence from the writing process. Even simple version history can reveal whether a text was built iteratively or pasted in as one sudden block.
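As a toy illustration (not Mentafy’s actual method, and with entirely made-up snapshot data), the “built iteratively vs. pasted in one block” signal could be read from revision snapshots with a simple heuristic: how much of the final text arrived in the single largest revision step?

```python
from dataclasses import dataclass

@dataclass
class Revision:
    minutes: float    # minutes since the document was created
    char_count: int   # total characters at this snapshot

def largest_jump_share(revisions: list[Revision]) -> float:
    """Fraction of the final text that was added in the single biggest step."""
    if len(revisions) < 2 or revisions[-1].char_count == 0:
        return 0.0
    deltas = [max(b.char_count - a.char_count, 0)
              for a, b in zip(revisions, revisions[1:])]
    return max(deltas) / revisions[-1].char_count

def looks_pasted(revisions: list[Revision], threshold: float = 0.9) -> bool:
    """Heuristic flag: nearly the whole text landed in one revision step."""
    return largest_jump_share(revisions) >= threshold

# Iterative writing: the text grows across many small saves.
iterative = [Revision(0, 0), Revision(10, 400), Revision(25, 900),
             Revision(60, 1600), Revision(90, 2100)]

# One sudden block: an empty document, then nearly everything at once.
pasted = [Revision(0, 0), Revision(2, 2050), Revision(3, 2100)]

print(looks_pasted(iterative))  # False
print(looks_pasted(pasted))     # True
```

In practice a real system would look at far richer signals (typing cadence, edit locality, deletion patterns), but even this one-number sketch shows why process evidence is harder to fake than a single post-hoc score.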
That’s the direction we see growing fast: combining classic checks (plagiarism, AI classification, citation analysis) with process-based evidence (revision history, writing patterns) to support fair decisions and better learning outcomes.
Where Mentafy fits
That’s why the future toolbox looks layered:
- Classic checks (plagiarism, citation analysis, AI classification)
- Plus process evidence (revision patterns, version history, writing development)
That combination answers “Is this real?” far better than any single score.
Want to try it yourself?
If you’re curious – about a love letter, a motivational letter, or a student paper – use our free AI classifier for a quick first signal. And if you’re an educator or institution, Mentafy’s suite goes further by combining multiple methods into one clear, reviewable view.
Valentine’s is over for 2026. The authenticity question isn’t.
Sources
[1] Friemel, C. (2026, February 9). Keine Gefühle für KI: Mehrheit der Deutschen hält sie aus der Liebe raus. GMX Newsroom. https://newsroom.gmx.net/2026/02/09/keine-gefu%CC%88hle-fu%CC%88r-ki-mehrheit-der-deutschen-haelt-sie-aus-der-liebe-raus/
[2] Claessens, S., Veitch, P., & Everett, J. A. C. (2026). Negative perceptions of outsourcing to artificial intelligence. Computers in Human Behavior, 177, 108894. https://doi.org/10.1016/j.chb.2025.108894
[3] Coldewey, D. (2023, July 25). OpenAI scuttles AI-written text detector over “low rate of accuracy”. TechCrunch. https://techcrunch.com/2023/07/25/openai-scuttles-ai-written-text-detector-over-low-rate-of-accuracy/
[4] Gostmann, I., & Hildermeier, L. (2026, February 12). KI prüft KI – und scheitert? Über Bias-Effekte und Verzerrungen in KI-Detektoren. Hochschulforum Digitalisierung. https://hochschulforumdigitalisierung.de/bias-effekte-ki-detektoren/
[5] University of Maryland, Baltimore County, Division of Information Technology. (2025, May 14). Using Google Doc “Versions” to detect student originality vs. AI abuse. https://doit.umbc.edu/news/post/150153/
[6] Mentafy GmbH. (2026). Internal analysis of anonymized user uploads around Valentine’s Day 2025–2026 (Unpublished internal dataset).