Shopper Resource
AI & YOUR SHOP REPORT
Why using AI to write your mystery shop narrative is fraud — and what happens when we find it
Tools like ChatGPT, Google Gemini, and similar AI writing assistants are widely available, and we understand the temptation to use them when you’re tired or behind on a deadline. But using AI to write—or substantially rewrite—your shop narrative is not a shortcut. It is fraud.
When you submit a report, you are certifying that it is an accurate, firsthand account of your experience. An AI-generated narrative is neither accurate nor firsthand. It is a fabrication—generated from a prompt, not from observation—and it misrepresents to our clients what actually occurred during your visit.
Our clients make real business decisions based on your reports. They use shop data to evaluate employee performance, identify training needs, adjust service standards, and in some cases take disciplinary action against staff. When a report is AI-generated rather than observed, those decisions are based on fiction.
An employee who did nothing wrong may face consequences based on details an AI invented. A genuine service problem may go undetected because the AI produced a plausible but inaccurate account. This is not a minor policy violation—it is a breach of the trust that clients place in us, and in you.
AI-generated text has recognizable patterns, and our editors are trained to identify them. We also use detection tools as part of our review process. Common indicators include:
We do not flag reports based on a single indicator. But when the pattern is clear, we act on it.
The consequences are serious and immediate. Reports confirmed or strongly suspected to be AI-generated will be rejected. You will not be paid the shop fee or reimbursed for any expenses. Depending on the circumstances, your shopper account may be permanently deactivated.
This applies equally to partial AI use. Using an AI tool to rephrase, expand, or “clean up” a narrative you started is still AI-assisted fraud if the resulting text no longer reflects your own direct observation and expression.
There are legitimate uses of AI tools that do not violate our policies:
The following are not acceptable under any circumstances:
Your report should sound like you—because it should be you. It should reflect the specific things you saw, heard, and experienced during your visit, written in your own words. That is the product our clients are paying for, and it is the only thing we can stand behind.
If you are finding it difficult to write detailed narratives, our Writing Reports resource page covers the techniques and standards we expect. If you have questions about a specific report, contact your scheduler before submitting—not after.