AI hallucinations: Why LLMs make things up (and how to fix it)