Where do learning signals come from when there is no ground truth in post-training?
A new paper shows how to convert inference-time compute into high-quality supervision for RL training.
Up to 30% relative improvement on a realistic non-verifiable task (HealthBench), with the model's own self-synthesised rubrics!
Paper link: https://arxiv.org/abs/2509.14234
Some interesting tidbits:
- they propose several "judges", each with its own model (weights from different training stages) and a separate "concern": the "generate" part evolves with the model during RL, while the "gather and reconcile" judge is kept at a frozen checkpoint.
- the "gather and reconcile" judge doesn't get the question when analysing the entire rollout set! (I hope I read this correctly "We keep the anchor question-blind to prevent it from acting as just another rollout and to encourage genuine cross-rollout reasoning")
- a 2nd judge "marks" binary yes/no rubrics that the evolving model proposes itself. This could make it harder for the evolving model to "hack the rewards", since they come from essentially three places: the evolving model via rollouts and proposed rubrics, the reconciliation by the frozen policy, and a 3rd-party judge that only binary-scores the rubrics (see the sketch after this list). Very interesting, and actually huge if it works as proposed and scales w/ model size.
- beats maj@x by 14%, which is nice. Interesting that there's a 1% slice (maybe too small to be relevant? no idea) where the final architecture answered correctly even though all the rollouts were wrong. That probably needs more investigation to make sure nothing leaked somewhere.
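To make the reward wiring concrete, here's a minimal sketch of how I read the three signal sources: the evolving policy produces rollouts and self-proposed rubrics, a frozen judge reconciles across rollouts without seeing the question, and a 3rd-party judge marks each rubric yes/no. All function names and the mixing weight are my own hypothetical stand-ins, not the paper's actual API.

```python
# Hedged sketch of the three-source reward, assuming my reading of the paper is right.
# Every callable below is a hypothetical stand-in for an LLM call, not the paper's code.

from typing import Callable, List


def rollout_rewards(
    question: str,
    policy_generate: Callable[[str], List[str]],          # evolving policy: question -> rollouts
    policy_propose_rubrics: Callable[[str], List[str]],   # evolving policy: question -> yes/no rubrics
    frozen_reconcile: Callable[[List[str]], List[float]], # frozen judge: rollouts only (question-blind) -> scores
    rubric_judge: Callable[[str, str], bool],             # 3rd-party judge: (rollout, rubric) -> yes/no
    alpha: float = 0.5,                                   # assumed mixing weight, not from the paper
) -> List[float]:
    """Combine a question-blind reconciliation score with binary rubric marks."""
    rollouts = policy_generate(question)
    rubrics = policy_propose_rubrics(question)

    # Source 1: the frozen judge reconciles across the rollout set WITHOUT seeing the question.
    reconcile_scores = frozen_reconcile(rollouts)

    rewards = []
    for rollout, rec in zip(rollouts, reconcile_scores):
        # Source 2: a separate judge marks each self-proposed rubric yes/no per rollout.
        if rubrics:
            rubric_score = sum(rubric_judge(rollout, r) for r in rubrics) / len(rubrics)
        else:
            rubric_score = 0.0
        # Blend the two judge signals into one scalar reward for the RL update.
        rewards.append(alpha * rec + (1 - alpha) * rubric_score)
    return rewards
```

The point of the sketch is just the separation of concerns: the policy only ever sees rewards that pass through two judges it doesn't control, which is what should make reward hacking harder.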
Personal thoughts:
- the models used are small (4B, 4B, and 8B). We'll see if this scales w/ model size. It should, since GRPO does, but there's still a question of which 3rd-party judge you use. Maybe an "adversarial" one, like in a GAN? Interesting avenues nonetheless.