I was going to say "Needs more emojis", then I scrolled down, and phew! Purple tick box emojis! I was worried a human was in the loop for a second.
Thank God for LLMs!
> Thank you for the context! That makes sense - runc is correctly rounding to 10ms (410000) as systemd requires.
You're absolutely right! I shall now commit ritualised suicide based on your feedback! 🎯
Having the AI reply to comments is almost like a direct insult.
The purple tick box is how GitHub formats links to merged PRs and closed issues.
Oh yay. More AI powered slop.
And I do mean slop.
It was fixed already: https://github.com/opencontainers/runc/pull/4751
So the bot goes ahead and spams every project that has already done the work. And unrelated ones.
Whilst also leaking that this is to do with a geolocation project building a cluster on AWS, for Accenture. Probably a government contract.
The hostname points to an insurance company:
"ip-10-7-66-184.prod-eks.newfront.com"
Whether this is a real bug or not, the fact that the entire report is LLM generated AI slop makes my eyes glaze over.
I'm sure it's also a waste of maintainers' time to drop a wall of AI bullet points instead of just sharing the critical information.
Agreed. Use LLMs all you want to do the discovery and proof, but do not use them to replace your voice. I literally can't read it; my brain just shuts off when I see LLM text.
It is a strange phenomenon, though, these walls of text that LLMs output, when you consider that one thing they're really good at is summarization, and that, if they are trained on bug report data, you'd think they would reproduce that style and conciseness.
Is it mainly post-training that causes this behaviour? They seem to do it for everything, as if they are heavily biased towards super verbose output these days. Maybe it is something to do with reasoning models being trained to produce longer output?
It's crazy how they just copied and pasted it.
Maybe it's not even a copy-paste. Just something running in a loop, interacting with LLMs and automatically calling APIs.