Personal Statement · 7 min read
Does ERAS Detect AI Writing? The Truth About Application "Slop"
Published April 24, 2026
Does the ERAS portal automatically run your personal statement through an AI detector and reject it? No. But if you think that means you can paste a prompt into ChatGPT and submit the result, you are setting yourself up to fail. Program directors do not need an algorithm to tell them an essay was written by AI. They have a built-in radar for generic slop.
AAMC policy vs. reality
The technical side: ERAS does not currently run a native, auto-reject AI detector at the point of submission. There is no Turnitin-style score attached to your file when a program opens it.
The policy side is more clear-cut. AAMC guidance states that applicants may use AI for brainstorming, proofreading, and editing, but you must certify that the submitted work is your own and accurately reflects your experiences. Using AI to generate a full essay from scratch, with no substantive input from you, crosses that line.
The real-world risk, though, is not getting caught by a detector. It is submitting an essay that reads exactly like the 4,000 others whose writers pasted the same prompt into the same chatbot this week.
The real detector: program directors’ slop radar
Program directors read thousands of applications in a compressed window each cycle. They spot ChatGPT-shaped prose inside the first paragraph. Three tells give it away.
- The vocabulary tells. “Tapestry,” “delve,” “testament,” “unwavering,” “profoundly,” “intricate.” Any one of these is a yellow flag. Two in the same paragraph is a signed confession. Generic AI reaches for these words because they sound important; actual applicants almost never use them in speech.
- The structural tells. Compound sentences stacked three deep. Rule-of-three lists that trade specificity for rhythm. Vague abstractions where a concrete noun should live (“a challenging patient encounter” instead of “a 62-year-old with heart failure”). These patterns are the signature of next-token prediction trying to sound thoughtful.
- The reflection deficit. Generic AI writes about events, not internal change. It will describe a patient encounter in detail and then close the paragraph with an abstraction — “this experience taught me the importance of empathy” — instead of naming a specific thing you now do differently. A reader looking for the “so what?” hits a wall every time.
How to use AI without sounding like AI
The solution is not to avoid AI. The solution is to stop treating it as a ghostwriter and start treating it as a drafting partner that works from your material.
- Start from raw clinical notes. Feed the tool actual fragments: “free clinic, 90% of patients uninsured, saw a lot of uncontrolled diabetes, watched a patient choose between insulin and rent.” That is the kind of detail a model cannot invent for you. If you start with “write me a personal statement about empathy,” you get slop.
- Use AI for structure, not soul. Models are good at turning a shapeless pile of bullets into a legible paragraph. They are bad at inventing genuine insight you did not already have. Keep the reflection sentences yours.
- Rewrite the output in your own voice. Every AI draft needs a final editing pass where you read it out loud and replace any sentence that sounds like it came from a commencement speech. If you would not say it to a colleague at coffee, cut it.
The bottom line
Stop worrying about beating an invisible detector. Start worrying about whether your essay is actually memorable. The surest way to pass the slop radar is to make sure your structure is sound: a specific hook, a clear arc, real reflection, a concrete conclusion. If you want the scaffolding, start with our full guide on ERAS personal statement structure. For the related question most applicants ask next, see can I use ChatGPT for my ERAS personal statement. For the broader ethical frame, see is it ok to use AI for your residency application.