I'm not sure I'd appreciate this feature. When I used LLMs I found conversations got more difficult the longer they went on - I'd ask one question, then another, and it'd try its best to answer the latter in the context of the former in a way I didn't intend. I used new chats as an explicit way to escape this. I also created new chats when I felt the LLM was going down the "wrong path" or generally interpreting things obtusely.
I think you are supposed to open a new chat for every new subject/question. Longer chats mean higher token consumption.
I'm in the habit of always opening a new chat because I change subjects quite often, and the LLM tends to randomly reference a previous question with more weight than it should.
Agree - it gets stuck sometimes and it's just time to start over.
That is strange, I thought this had always been the case. Long ago, I had a chat about feature selection and dimension reduction, and I expressed my preference not to resort to PCA because it doesn't actually remove features. Some time later, I had a new session chatting about multicollinearity and what adverse effects it might cause, and among the suggestions for addressing that, it suggested dimension reduction, saying "Although you don't prefer PCA ...".
This is a separate feature called memories. It saves preferences you tell it to remember and applies them in future chats. These snippets can be viewed in your settings.
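For what it's worth, here's a rough guess at the mechanics, with made-up names (not the vendor's actual code): snippets you ask it to remember get stored, then prepended to the system prompt of new chats.

    # Guess at how "memories" might work mechanically; all names are
    # illustrative, not the actual product implementation.
    saved_memories: list[str] = []

    def maybe_save_memory(user_message: str) -> None:
        # Naive trigger: store anything phrased as "remember that ...".
        prefix = "remember that "
        if user_message.lower().startswith(prefix):
            saved_memories.append(user_message[len(prefix):])

    def build_system_prompt(base: str = "You are a helpful assistant.") -> str:
        # New chats start with stored snippets prepended, so preferences carry over.
        if not saved_memories:
            return base
        return base + "\nKnown user preferences:\n- " + "\n- ".join(saved_memories)

    maybe_save_memory("Remember that I prefer not to use PCA for feature selection.")
    print(build_system_prompt())

In practice the snippets are presumably extracted by the model itself rather than by a keyword trigger, but the store-and-inject shape would be the same.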
I wonder how this works? Are they just RAG-ing the chat history or something more clever?
Likely the cheapest solution they could get at this scale, so yes, RAG
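A toy sketch of what RAG over chat history could look like - bag-of-words similarity stands in for a real embedding model, and everything here is illustrative:

    import math
    from collections import Counter

    # Toy stand-ins for past conversations; a real system would chunk and
    # embed actual chat logs with a learned encoder.
    PAST_CHATS = [
        "User prefers not to use PCA because it doesn't actually remove features.",
        "User asked about multicollinearity and its adverse effects.",
        "User asked for a pasta recipe.",
    ]

    def embed(text: str) -> Counter:
        # Bag-of-words "embedding", purely for illustration.
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[w] * b[w] for w in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def retrieve(query: str, k: int = 2) -> list[str]:
        # Return the k past snippets most similar to the new prompt.
        ranked = sorted(PAST_CHATS, key=lambda doc: cosine(embed(query), embed(doc)), reverse=True)
        return ranked[:k]

    new_prompt = "How should I deal with multicollinearity in my regression?"
    augmented = "Relevant past notes:\n" + "\n".join(retrieve(new_prompt)) + "\n\nUser: " + new_prompt
    print(augmented)

Retrieval along these lines would also explain why loosely related past preferences sometimes get surfaced with more weight than they deserve.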
RAG with clever attention mechanisms?
i’ve told it not to change my contractions a thousand times, still can’t remember
Dick Large? Probably had trouble with the diminutive form.
wanker