Yeah, this is one of the reasons I'll choose to start a new conversation with Claude - some old memory from earlier interfering with the current flow. I know it's not quite the same thing, because that's just in the context, but it's kind of like memory. I've had the opposite experience with Claude too. After several times where I'd pose a question and say "see the attached code dump" but forgot to attach the code dump, I told it to "please remember that if I ever mention an attachment and there isn't one, stop processing and demand the attachment", because otherwise it goes off on a tangent trying to be clever. Unlike other memories I've given it, it can't seem to hold onto this one unless I remind it at the start of the conversation, e.g. "remind me what you should do if I mention an attachment but there is none". It will recall it, and after that it will stick to it, until the next clean conversation.
But I like your idea - minds evolve, and part of the problem with LLMs is that they're largely static: no evolution, no upskilling, just waiting for the next model to come out. I understand the risks of learning continuously from user input (especially possible poisoning), but I've always felt this is an area that could improve a lot.