I'm on really shaky ground here, but I think they are representing "uncertainty" in the QC sense per event, and that they claim that, over the totality of events, this represents reality better than classical stochastic sampling.
But I've been reading the paper on and off today and I don't really understand much of it at all.
https://archive.is/2025.09.25-054751/https://www.ft.com/cont... for the FT article.
https://arxiv.org/pdf/2509.17715 for the paper.
- This is backtesting.
- The mechanism (from a first skim) is to create features using the QC, which are then fed to a classical algorithm.
Can anyone explain why creating features with a QC might be a good idea?
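Not from the paper, but here's a minimal NumPy sketch of the generic "quantum feature map" idea as I understand it: encode each event's raw feature values as rotation angles on a couple of simulated qubits, entangle them, and use measurement expectation values as derived features for an ordinary classical learner. The circuit and all names here are made up for illustration, not the authors' actual method:

    # Sketch of a generic quantum feature map, NOT the paper's circuit:
    # raw features -> rotation angles -> tiny simulated circuit ->
    # Pauli-Z expectation values, used as features for a classical model.
    import numpy as np

    def ry(theta):
        # Single-qubit rotation about the Y axis.
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[c, -s], [s, c]])

    # CNOT with qubit 0 as control, qubit 1 as target
    # (basis order |00>, |01>, |10>, |11>).
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    def quantum_features(x):
        # x: two raw feature values used directly as angles (an
        # assumption; real encodings are more elaborate).
        state = np.zeros(4); state[0] = 1.0          # start in |00>
        state = np.kron(ry(x[0]), ry(x[1])) @ state  # angle encoding
        state = CNOT @ state                          # entangle the qubits
        probs = np.abs(state) ** 2
        # <Z> per qubit: +1 when the qubit measures |0>, -1 for |1>.
        z0 = probs[0] + probs[1] - probs[2] - probs[3]
        z1 = probs[0] - probs[1] + probs[2] - probs[3]
        return np.array([z0, z1])

    # Feed the derived features to any classical learner.
    X_raw = np.random.default_rng(0).uniform(0, np.pi, size=(100, 2))
    X_q = np.array([quantum_features(x) for x in X_raw])

The hand-wavy pitch is that the entangling step gives you nonlinear feature interactions that get expensive to compute classically as qubit counts grow; whether that actually helps a downstream model is exactly the open question.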
You can finally be honest when you tell your girlfriend you may or may not have lost the wedding fund on meme stocks.
Babe, wake up. New hype cycle just dropped