
Does your AI ever generate inaccurate or misleading content (i.e., hallucinate)?

Our team continuously tests and monitors our AI-driven tools, and to date, we’ve had zero reported cases of AI "hallucinations."


We built our AI-powered tools to stay grounded in the content and context users provide and to minimize the risk of AI “hallucinations.”


That said, like all generative AI tools, there’s a small chance the AI may occasionally suggest something that doesn’t fully match the resume or job description.


We’ve put guardrails in place to keep this rare and manageable:

  • Resume Scanner pulls directly from the content in a user’s resume and compares it against best practices and job market expectations. Because its feedback is grounded in the resume’s content, the risk of off-base suggestions is extremely low. If it suggests adding or expanding a skill or section, that suggestion reflects what recruiters typically look for, and we always encourage users to review and tweak it themselves so they stay in control.
  • PracticeAI analyzes spoken responses to provide feedback on clarity, confidence, and job alignment. Because the AI interprets conversational content, it may occasionally offer suggestions that feel slightly off-target. It is designed to be constructive, not prescriptive, and we always encourage users to apply their own judgment when refining their responses.

Updated on: 19/03/2026
