Every LLM I've seen approved of my project to transform into a fox, except for Meta AI, which makes fun of it.
The interesting part isn't whether Meta's AI is right or wrong; it's that when N models say "great idea" and one pushes back, the one pushing back feels like the broken one.
I think other AI agents have been trained to talk like somebody with a high IQ who remembers a lot of context; Meta's AI has been trained to talk like somebody with a moderate-to-low IQ and a sense of humor.
I can get Gemini or ChatGPT, on the other hand, to use words like "ego-syntonic" and talk about folk religion in China, about the mind-body work you'd use in character acting, etc.
Also, if foxwork is a delusion, it has a large element based in reality. It started out as "I felt a presence," and when I needed to explain it I developed a cover story that became real.
https://mastodon.social/@UP8/tagged/foxwork
I even keep KPIs: people keep approaching me, and I have to keep printing more tokens to replace the ones I give away.
Somewhat related: my mother-in-law asked Meta AI a question and got a silly joke as an answer.
It was trained to do that, the same way that ChatGPT was trained to say "That's not funny — it's serious!"
It's a different market position.