I am trying to get a better understanding of what reasoning could possibly mean. So far, my thinking is that the more we are able to compress knowledge, the stronger the indicator of reasoning. I would like to understand this better; please tell me where my understanding is lacking, or point me toward what I should study further.
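To make the compression framing concrete, here is a minimal sketch of the usual argument that compression and prediction are two sides of the same coin: a model that predicts text better needs fewer bits to encode it (the Shannon code length, which an arithmetic coder would approach). The sample text and the character-level bigram model are just placeholders I made up for illustration, and the model is fit on the same text it encodes, so the numbers only illustrate the prediction-to-bits correspondence, not a fair benchmark.

```python
import gzip
import math
from collections import Counter, defaultdict

# Toy corpus with lots of repetition, purely illustrative.
text = ("the cat sat on the mat. the dog sat on the log. "
        "the cat and the dog sat together on the mat. ") * 20

# Baselines: raw size, and a generic compressor that exploits repetition.
raw_bits = len(text.encode()) * 8
gzip_bits = len(gzip.compress(text.encode())) * 8

# Character-level bigram model: predict each character from the previous one.
# Better prediction => lower cross-entropy => fewer bits needed to encode the text.
counts = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    counts[prev][nxt] += 1

model_bits = 0.0
for prev, nxt in zip(text, text[1:]):
    total = sum(counts[prev].values())
    p = counts[prev][nxt] / total   # probability the model assigns to the next char
    model_bits += -math.log2(p)     # ideal code length for that char under the model

print(f"raw:          {raw_bits} bits")
print(f"gzip:         {gzip_bits} bits")
print(f"bigram model: {model_bits:.0f} bits (arithmetic-coding bound)")
```

The point of the sketch is only that "compresses better" and "predicts better" are formally the same quantity; whether that quantity deserves to be called reasoning is exactly what I am asking about.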
I was wondering if I could get a different way of thinking about reasoning machines as such. Current reasoning models just try to externalize reasoning through chain-of-thought or fine-tuning on reasoning-focused datasets.
They all seem very hacky and not really reasoning. I wanted to see if there are alternative, more fundamental ways to think about reasoning as an end in itself.
Ask an LLM or Google, "what is a reasoning model in the context of large language models?"