FAQ: AI Features & Product Accuracy

Does your AI ever generate inaccurate or misleading content (i.e., hallucinate)?

Our team continuously tests and monitors our AI-driven tools, and to date, we’ve had zero reported cases of AI hallucinations.

We built our AI-powered tools to stay grounded in the content and context users provide, and to minimize the risk of AI “hallucinations”.

That said, as with all generative AI tools, there is a small chance the AI may occasionally suggest something that doesn’t fully match the resume or job description.

We’ve put guardrails in place to keep this rare and manageable:

  • Resume Scanner pulls directly from the content in users’ resumes and compares it against best practices and job-market expectations. Because its feedback is grounded in the resume itself, the hallucination risk is extremely low. If it suggests adding or expanding a skill or section, the suggestion is based on what recruiters often look for, and we always encourage users to review and revise it themselves so they stay in control.
  • Dynamic Interview analyzes users’ spoken responses and gives feedback on clarity, confidence, and how well each answer aligns with the job. Because it interprets the content of their answers, it may occasionally offer suggestions that feel slightly off. Its feedback is designed to be constructive, not prescriptive, and users are advised to decide for themselves how to adjust their responses.

Updated on: 23/07/2025
