AI hallucination, that is, the generation of factually incorrect or nonsensical outputs, remains a critical limiting factor in deploying language models reliably.