No. And it's not just "anti-AI folks" who consider it spam and noise to copy-paste LLM summaries into a discussion forum.
Why not just go full-on dystopia and make a slackernews.com site that is an exact mirror of the submissions from the HN API, and then populate them with entirely chatbot-generated comment sections?
There would be no comments, just a summary of the contents of the post with a link to the post.
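For what it's worth, here's a rough sketch of what that could look like, assuming the public HN Firebase API; summarize_article() is just a stand-in for whatever LLM call would actually produce the summary:

    import json
    import urllib.request

    HN_API = "https://hacker-news.firebaseio.com/v0"

    def get_json(url):
        # Fetch and decode a JSON endpoint from the HN API.
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    def summarize_article(url):
        # Stand-in: call whatever LLM you like on the fetched article text.
        return f"[summary of {url}]"

    def front_page_with_summaries(limit=30):
        # Mirror the current top stories and attach a summary to each link post.
        ids = get_json(f"{HN_API}/topstories.json")[:limit]
        entries = []
        for sid in ids:
            item = get_json(f"{HN_API}/item/{sid}.json")
            if not item or "url" not in item:
                continue  # skip Ask HN / text-only posts with no external link
            entries.append({
                "title": item["title"],
                "url": item["url"],
                "summary": summarize_article(item["url"]),
            })
        return entries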
But…why? This is a discussion forum. We come here to discuss. Also, none of the posts are so long that they warrant a summary (AI or otherwise).
The main reason your comment (https://news.ycombinator.com/item?id=45160084) got downvoted is that you posted a summary without disclosing that it was AI-generated, which made other users upset.
But even if you had disclosed it, it's still against HN rules, as other people in that thread told you. You commented "they’re free to delete it. I think it adds value", but as you've seen, the community disagrees.
https://news.ycombinator.com/newsguidelines.html
Not according to the guidelines and the upvotes the comment received.
The guidelines are out of date and they don't mention anything for or against AI comments. Having your comment flagged makes the number of upvotes irrelevant.
The search you were linked to (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...) returns comments from dang, the head moderator of HN. His word is the law.
> His word is the law.
Well now that sounds dystopian.
All high-quality internet forums require good/strict moderation.
Where you apparently have to search through one person's comments to see whether speech is allowed or not.
Each member has to learn to grok community norms.
If you think about a behavior, and wonder if it is within community norms, not behaving that way is the simplest thing that might work.
Or you can always ask the moderators using the "contact" link at the bottom of every page.
And sure, you can just do it and find out what happens. But if you do that, you have no reasonable complaint, and “sorry” is the best response.
People often upvote comments that are contrary to the spirit of HN.
And for better or worse, lawyering the guidelines is not in the spirit of HN. Better because moderation is generally patient, kind, and soft-touch. Worse because the guidelines are guidelines, not so much bright lines, and some people want bright lines.
He’s the one who said it was in the guidelines, not me.
You are responsible for your behavior.
Why? Anyone who wanted that could copy-paste into chatgpt or install a browser extension to do it for them.
That creates a ton of traffic to LLMs for the same summary. Why not have one person do it and post it?
If there were actual demand for that, Y Combinator could implement something similar to Google's AI Overviews that automatically summarizes each post, which would strongly discourage people from actually reading the articles and giving thoughtful insights.
That would be the idea of the static page: save people the time of reading an article they aren’t actually interested in.
It does sound like making your own site and going wild there is a better approach than contaminating HN threads with LLM-generated comments.
While it will probably be appreciated as a "Show HN", please refrain from spamming links to it in threads here.
If people are not interested in the article, it might not belong on HN.
People can already do that from the submission headline.
[flagged]
Please don't post ragey swipes like this on HN.
Instead of low-effort ad hominems, which are also against the HN guidelines, you should read the blog posts. One of my recent posts, which was highly upvoted on HN (https://news.ycombinator.com/item?id=43897320), demonstrates that I don't actually use LLMs all that much for my writing and only use them for specific use cases where they objectively add value, including in my work at BuzzFeed.
In addition to everything else, this is bad for all web publishers.
No matter whose link is being shared on Hacker News, this would siphon away some traffic: all the people who would now just read the AI summary without clicking through to the article.