Why Cover Letters Are Uniquely Hard to Screen
Cover letters sit at an awkward intersection for AI detection. They are short (usually 200 to 400 words), highly formulaic by convention, and written in a professional register that sounds somewhat stiff even when written by humans. That combination creates real problems for automated screeners.
Most AI detectors are trained primarily on longer texts (essays, articles, academic writing). At 250 words, statistical patterns are thinner and false positive rates climb. A detector that is 92% accurate on 500-word essays may drop to 70% accuracy on a 250-word letter. That is not a flaw in the tool; it is a property of short text.
This does not mean detection is impossible. It means the approach has to be different: less reliance on single-score thresholds, more attention to qualitative tells.
Seven Patterns That Give AI Cover Letters Away
1. The Universal Enthusiasm Opener
AI models are trained on thousands of cover letters where the opening expresses excitement about the role. The result is a gravitational pull toward openers like "I am writing to express my enthusiastic interest in..." or "I was thrilled to see this opportunity at [Company]." Human writers occasionally use these phrases; AI writers use them almost every time because they are statistically dominant in training data.
2. No Concrete Specifics
AI generates fluent prose about "leveraging cross-functional expertise to drive measurable outcomes" but rarely lands on a specific number, project name, or named person. Human candidates write "I reduced ticket resolution time from 4.2 days to 1.8 days in Q3" or "I worked directly with Sarah Chen on the Helios rebrand." AI writes "I have a proven track record of improving operational efficiency." The vagueness is not laziness; it is what happens when there is no real experience to draw on.
3. The Company Research Paragraph
AI models produce a boilerplate "I admire [Company]'s commitment to innovation and customer-centric values" paragraph that appears in roughly the same position in thousands of letters. When a candidate praises a company's culture using language that could apply to any company in the industry, that is a signal.
4. Uniform Sentence Length
Human writers vary sentence length naturally; some thoughts are long, some are punchy one-liners. AI prose tends toward medium-length sentences clustered in a narrow range. Reading a letter aloud and noticing that every sentence takes roughly the same amount of time is a practical test.
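The read-aloud test can also be approximated programmatically. The sketch below, a minimal illustration rather than a validated detector, measures the coefficient of variation of sentence lengths; the sample letter and any threshold you apply to the result are assumptions for demonstration only.

```python
import re
import statistics

def sentence_length_cv(text: str) -> float:
    """Coefficient of variation of sentence lengths in words.

    Lower values mean more uniform sentences. Any cutoff you pick
    (e.g. flagging below ~0.3) is a starting point to calibrate,
    not a validated threshold.
    """
    # Naive split on terminal punctuation followed by whitespace.
    sentences = [s for s in re.split(r"[.!?]+\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    mean = statistics.mean(lengths)
    return statistics.stdev(lengths) / mean if mean else 0.0

# Hypothetical sample with uniformly medium-length sentences.
letter = ("I am writing to express my interest in this role. "
          "I have five years of experience in customer operations. "
          "I believe my skills align well with your team's needs. "
          "I look forward to discussing this opportunity further.")
print(f"CV = {sentence_length_cv(letter):.2f}")  # low CV suggests uniformity
```

Human prose with a mix of long sentences and punchy one-liners tends to score noticeably higher on this metric than the narrow band typical of generated text.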
5. Closing Paragraphs That Sound Identical
"I look forward to the opportunity to discuss how my experience aligns with your needs" is the AI cover letter closing. Human writers close differently: they reference a next step, a timeline, a mutual connection, or simply end more abruptly. AI closings are almost always a gracious three-sentence wind-down.
6. No Personal Voice
A cover letter from a career-changer who spent 10 years teaching middle school before pivoting to UX design should sound different from a recent grad applying to their first job. AI flattens voice to a professional median. If a letter reads as if it could have been written by anyone applying for this category of job, that uniformity is the tell.
7. Keyword Mirroring Without Context
When candidates paste a job description into a prompt, the resulting letter mirrors the job description's language precisely: if the JD says "cross-functional collaboration," the letter says "cross-functional collaboration" in the first paragraph. Human applicants use the concepts but rarely echo the exact phrase three times in 300 words.
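Exact-phrase mirroring is one of the easier tells to check mechanically. The sketch below compares word n-grams between a job description and a letter; the 3-word window, the 2-repeat threshold, and the sample texts are all illustrative assumptions, not tuned values.

```python
import re
from collections import Counter

def mirrored_phrases(jd: str, letter: str, n: int = 3, min_repeats: int = 2):
    """Return n-word phrases from the job description that the letter
    repeats at least min_repeats times. Window size and repeat
    threshold are illustrative defaults."""
    def ngrams(text: str):
        # Lowercase word tokens; hyphenated terms split into parts.
        words = re.findall(r"[a-z']+", text.lower())
        return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

    jd_set = set(ngrams(jd))
    counts = Counter(p for p in ngrams(letter) if p in jd_set)
    return {p: c for p, c in counts.items() if c >= min_repeats}

# Hypothetical JD and letter for demonstration.
jd = "We value cross-functional collaboration and data-driven decision making."
letter = ("My background in cross-functional collaboration is strong. "
          "I led cross-functional collaboration across three teams, and "
          "cross-functional collaboration remains my core strength.")
print(mirrored_phrases(jd, letter))  # {'cross functional collaboration': 3}
```

A non-empty result is a flag, not a verdict: a human applicant may legitimately reuse one key phrase once or twice, which is why the repeat threshold matters.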
Using a Detection Tool on Short Text
Automated detectors work better as a triage signal than a verdict on cover letters. Here is how to interpret results:
- Score above 80%: Strong signal worth a closer manual read. Not a disqualification on its own.
- Score 50-80%: Ambiguous on short text. Treat as a flag to look for the qualitative tells above.
- Score below 50%: Low confidence. Do not act on this range for short professional letters.
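The bands above are simple enough to encode directly in a screening pipeline. This minimal sketch maps a detector score to an action; the band boundaries come from the list above, while the function name and action labels are illustrative.

```python
def triage(score: float) -> str:
    """Map a detector confidence score (0-100) on a short cover letter
    to a screening action, following the bands described above."""
    if score > 80:
        return "manual-read"    # strong signal: flag for a closer human read
    if score >= 50:
        return "check-tells"    # ambiguous: look for the qualitative tells
    return "no-action"          # low confidence: do not act on short text

print(triage(85))  # manual-read
```

Note that even the top band routes to a human read, never to automatic rejection, which matches the policy framework later in this guide.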
Running the full letter through a detector like Airno gives you a confidence score plus a breakdown of which statistical patterns triggered. The "pattern" and "statistical" sub-scores are the most reliable signals for professional short text. The DeBERTa model score becomes more meaningful at 200+ words; below that, weight it less.
One useful practice: paste just the most distinctive paragraph (the one where the candidate describes their specific contribution). If it scores high, the whole letter is likely AI-generated. If it scores low, the person may have used AI to polish structure while writing the substance themselves.
What AI-Polished Versus AI-Written Looks Like
These are not the same thing, and a fair policy should treat them differently.
AI-written: The candidate prompted a model to write the letter from scratch, possibly editing it slightly. The content, structure, and voice are all generated. Specifics are thin or generic.
AI-polished: The candidate wrote the core content, then asked AI to fix grammar, tighten sentences, or improve flow. The specifics, voice, and structure are the candidate's; the surface-level polish is assisted. This is the equivalent of using a spell checker or having a friend proofread.
Detection tools cannot reliably distinguish these two cases, especially on short text. That is why policy matters more than scores.
A Fair Screening Policy Framework
Treating AI use as automatic disqualification is not defensible in 2026. Most professionals use AI tools throughout their work; penalizing cover letter assistance while accepting AI-assisted job descriptions from your own recruiters is inconsistent.
A reasonable framework:
- Define your standard upfront. If you require candidates to write without AI assistance, state it explicitly in the application instructions. Candidates can then choose to comply or self-select out. Ambiguity is unfair to candidates and creates legal exposure for employers.
- Use detection as a triage layer, not a verdict. Flag high-scoring letters for a closer read. Do not reject without human review.
- Weight the substance, not the style. A specific, accurate, well-reasoned letter that scored 75% on an AI detector is a better hire signal than a vague, credential-free letter that scored 20%.
- Follow up with a specific question. Ask the candidate to describe the experience they referenced in the letter in more detail during screening. A human who wrote the letter can; a candidate who submitted AI output usually cannot without contradicting themselves.
The Real Risk Is Not AI Letters
The practical concern for hiring teams is not that candidates used AI. It is that a letter describes skills or experiences the candidate does not actually have. Detection tools do not solve that. A well-crafted human lie scores zero on an AI detector.
The cover letter has always been a weak signal of job performance. AI use makes it weaker still. Teams that rely heavily on cover letters for screening decisions should consider whether skills assessments, portfolio reviews, or structured interviews do more of that work.
Quick Reference: Cover Letter Red Flags
| Pattern | AI Signal Strength | Notes |
|---|---|---|
| Universal enthusiasm opener | Medium | Common in human letters too; look for other signals |
| No concrete numbers or names | High | Strongest single tell |
| Generic company praise paragraph | High | Could describe any company in the sector |
| Uniform sentence length | Medium | Best tested by reading aloud |
| Mirror-image JD keyword use | High | Exact phrases repeated 2+ times |
| No personal voice | Medium | Subjective; calibrate against the role type |
| Boilerplate closing | Medium | Alone, not conclusive |
Bottom Line
AI-written cover letters are identifiable through a combination of automated detection and qualitative review. No single signal is definitive on short text; the case builds from several patterns together. The most reliable verification step remains a brief screening call where the candidate describes specifics from the letter in their own words.
Detection tools are a useful first layer. They are not a substitute for human judgment, and they should never be the sole basis for a hiring decision.
Check a Cover Letter with Airno
Paste the letter text into Airno for an ensemble confidence score, pattern breakdown, and highlighted suspicious spans. Use it as a triage signal alongside the qualitative tells in this guide.
Try Airno Free