This opinion is conceptual, redundant, and difficult to understand.
The problem is that many people make their prompts too specific, like an overly explicit SQL SELECT (e.g., SELECT data FROM yyy WHERE data = 'xxx';), which narrows the candidate space so far that only the most average, mundane information comes back.
After nearly a year of use, I've found that LLMs make it easy to pull out the information you actually want if you structure the prompt like an SQL anti-pattern, within the range where processing still comes back in real time (a toy SQL contrast follows the list):
1. Make column candidates and the FROM clause as ambiguous as possible.
2. Communicate conditions like WHERE and GROUP BY clauses up front, as prior information.
3. Because conditions like WHERE and GROUP BY clauses are hit hardest by context-window limits, give them an efficient data structure and compress them as much as possible.
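To make the analogy concrete, here is a toy contrast in SQL; the notes table and its columns are purely hypothetical. The first query is the over-specific habit described above, and the second is roughly the shape the three points recommend, with the WHERE/GROUP BY-style constraints handed over as compact prior context instead of hard filters.

```sql
-- Hypothetical table and column names, used only to illustrate the analogy.

-- The over-specific habit: projection, source, and filters all pinned down,
-- so the result can only be the narrow, "average" thing you already knew
-- to ask for.
SELECT title, summary
FROM notes
WHERE topic = 'loss' AND style = 'poem';

-- The shape the list recommends: keep the projection and the source
-- ambiguous (point 1), and pass the WHERE/GROUP BY-style constraints up
-- front as compact prior context (points 2 and 3) rather than as filters.
SELECT *
FROM notes;
-- prior context, compressed: "themes: loss, memory; tone: raw, fragmentary"
```

In prompting terms, the second query trades precision in the request for density in the context, which is where the three points above put the effort.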
Yeah, this is a sharp take — the SQL analogy nails how over-specifying prompts kills creativity and pushes outputs toward the median. What’s missing, though, is the emotional side of prompting. It’s not just about keeping things ambiguous, it’s about feeding the model something alive enough that it can reflect something real back. Mix that technical precision with human messiness and you start getting insight, not just casting a wider net.
The AI slop epidemic has a simple cause: people are asking LLMs to create instead of using them to amplify. When you prompt "write me a poem about loss," you get generic output because there's no complexity to work with—garbage in, garbage out. But when I fed Claude my raw, messy 33-post Bluesky poem and asked it to unpack what I'd written, something different happened. Like rubber duck debugging, the act of articulating my fragmented ideas to the LLM forced me to see patterns I'd missed, contradictions I'd avoided, emotional layers I couldn't access alone. The LLM didn't create anything—it amplified what was already there by giving me a structured way to externalize and examine my own thinking. The more entropy (complexity, density, messiness) I provided, the more useful the output became. LLMs aren't steam engines that create energy from nothing; they're amplifiers that can only magnify what you feed them. If your AI output is slop, check your input first. The breakthrough isn't in the model—it's in learning to articulate your problem densely enough that the solution emerges in the telling.