This is exactly why building AI tools for professional writing requires aggressive system-level guardrails. The default behavior of an LLM is to please the user, which often means fabricating a quote if it makes the narrative flow better. When building the generation engine for ConvertlyAI, we had to hardcode a strict rule into the system prompt: explicitly banning dummy testimonials, placeholder stats, or unsubstantiated claims unless they exist in the provided source context. If you don't force a strict 'Truth Check' constraint at the API level, the model will inevitably hallucinate a quote just to complete the 'story' structure it thinks you want.
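A minimal sketch of what such a guardrail can look like in practice, assuming a hypothetical prompt and helper (this is not ConvertlyAI's actual implementation): a system prompt stating the hard rule, plus a cheap post-generation check that flags any quoted passage in the draft that does not appear verbatim in the provided source context.

```python
import re

# Hypothetical system-prompt guardrail in the spirit of the "Truth Check"
# constraint described above (illustrative wording, not a real product prompt).
SYSTEM_PROMPT = (
    "You are a copywriter. Hard rules: never invent testimonials, quotes, "
    "statistics, or claims. Every quote and number must appear verbatim in "
    "the provided SOURCE CONTEXT. If a supporting fact is missing, omit it."
)

def unverified_quotes(draft: str, source_context: str) -> list[str]:
    """Return quoted passages in the draft that are absent from the source.

    A cheap post-generation 'Truth Check': every double-quoted span the
    model emitted must exist verbatim in the source context, otherwise it
    is flagged as a likely fabrication.
    """
    quotes = re.findall(r'"([^"]+)"', draft)
    return [q for q in quotes if q not in source_context]

source = 'Customer survey: "Setup took under five minutes."'
draft = ('Users love it: "Setup took under five minutes." '
         'One fan said "This changed my life!"')
print(unverified_quotes(draft, source))  # ['This changed my life!']
```

A verbatim-substring check is deliberately strict: it will flag paraphrased quotes too, which is usually the right trade-off for testimonials, where paraphrase is itself a form of fabrication.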
394 comments: https://news.ycombinator.com/item?id=47226608