I think about it from an entropy POV: how much signal is the code/text transmitting?
You can tell it’s AI-generated for a surprisingly large share of LLM outputs. If it feels like it’s regurgitating what you or your team already know, it’s slop.
This gets tricky, of course. I don’t think objective metrics work (Cyclomatic Complexity, in your case), because information is relative by nature: what’s slop to one person is high-quality code or new information to someone else.
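To make the entropy framing slightly more concrete, here’s a minimal sketch using compression ratio as a crude proxy for information density. The zlib choice and the example strings are my own assumptions for illustration, not an actual slop detector, and it obviously can’t capture the “relative to what the reader already knows” part:

```python
import zlib

def compression_ratio(text: str) -> float:
    """Crude redundancy proxy: highly compressible text carries less
    new information per byte (lower entropy)."""
    raw = text.encode("utf-8")
    if not raw:
        return 0.0
    compressed = zlib.compress(raw, level=9)
    return len(compressed) / len(raw)

# Hypothetical examples: repetitive boilerplate vs. varied prose.
boilerplate = "TODO: handle the error and log it. " * 50
varied = "Each sentence here introduces a distinct idea about caching, retries, and backoff."

print(compression_ratio(boilerplate))  # low ratio -> highly redundant, little new signal
print(compression_ratio(varied))       # higher ratio -> more information per byte
```

Even a toy measure like this shows the asymmetry: padding and restated context compress away, while genuinely new information doesn’t.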