openclaw fills the gap left by Siri and was built with tight integration into the Apple ecosystem.
This is how I understand the hype for something like openclaw versus the capabilities Zapier or n8n have provided for years.
I would say the majority of users are in both tech and non-tech roles, who:
(1) have an iPhone with all their contacts and data on iCloud
(2) have a MacBook at work and use macOS daily
(3) use ChatGPT or Claude daily and trust it with their personal data
(4) aren't familiar with Linux or a VPS and don't trust themselves to set one up through the terminal
(5) feel more at ease with "a second macOS that I can debug visually on my monitor at home" than with a remote Linux VPS
You could still rent a Mac mini, but cloud providers will charge you $119 a month for a Mac mini M4 with 16GB of RAM. $599 is unbelievably cheap for a second computer that can do anything your usual MacBook can.
That's an interesting use case.
Yeah, roughly five months of rent covers the purchase price; that speaks for itself.
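The arithmetic behind that comparison, using the two prices quoted above:

```python
monthly_rent = 119    # cloud-hosted Mac mini M4, 16GB RAM (price quoted above)
purchase_price = 599  # buying the same machine outright

# Months of renting after which buying would have been cheaper.
break_even_months = purchase_price / monthly_rent
print(round(break_even_months, 1))  # 5.0
```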
The premise that 'barely any decent size models can run on it' misses the biggest advantage of Apple Silicon: Unified Memory. Where else can you get a machine with 64GB or 128GB of VRAM for running quantized models at this price point? Buying the equivalent VRAM in Nvidia GPUs (like multiple RTX 3090s/4090s) would cost thousands of dollars, draw massive power, and sound like a jet engine. The Mac Mini is dead silent, sips power, and lets you run 70B+ parameter models locally via llama.cpp. It's currently the undisputed king of VRAM-per-dollar for local inference.
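A rough rule of thumb behind the "70B+ parameter models" claim: quantized weights take about params × bits / 8 bytes, so a 70B model at 4-bit fits in 64GB of unified memory, while 8-bit needs the 128GB configuration. A minimal sketch (ignoring KV cache and runtime overhead, which add several more GB in practice):

```python
def quantized_size_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight footprint in GB: params * bits / 8.

    Ignores KV cache and runtime overhead, which add several GB more.
    """
    return params_billion * bits_per_weight / 8

print(round(quantized_size_gb(70, 4), 1))  # 35.0 GB -> fits in 64GB unified memory
print(round(quantized_size_gb(70, 8), 1))  # 70.0 GB -> needs the 128GB configuration
```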
I believe the AMD Ryzen AI Max+ 395 with 128GB of RAM is the "undisputed king of VRAM-per-dollar", but Macs perform slightly better for local inference.
When it was released, unified memory was seen as a drawback, since you can't swap in higher-capacity memory sticks.
Now it's a game changer when it comes to AI; you're right, it delivers the performance for local inference.
I think the main reason people are buying Mac minis is how user friendly they are.
You can expect a software engineer or a DevOps person to run stuff on a VPS, but a slightly less technical person won't ever go there. In contrast, people are familiar with macOS, and that's way less scary to set up.
The added benefit of the Mac mini is that it can also double as a second device one could use for something else.
Yep, it's a well-designed, tiny, affordable, and powerful Mac that appeals even to non-technical folks.
They probably hope to run an LLM locally on the Mac mini, but they don't realize that decent models require much more computational power.
Yup, that's the point.
Most don't realize that until they come to configure openclaw or other agentic frameworks and discover they have to use Anthropic or OpenAI via API (an additional cost).
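That extra API cost is avoidable if the hardware can handle it: llama.cpp's `llama-server` exposes an OpenAI-compatible endpoint, so a framework expecting the OpenAI API can be pointed at a local model instead. A minimal sketch of building such a request (the endpoint URL and model name are assumptions based on llama-server's defaults, not openclaw's actual configuration):

```python
import json
import urllib.request

# llama-server serves an OpenAI-compatible API at this path by default
# (assumption: it was started locally with a GGUF model on port 8080).
LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "local-model") -> urllib.request.Request:
    """Build the JSON request an OpenAI-compatible server expects."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("Summarize my notes")
print(req.full_url)     # http://localhost:8080/v1/chat/completions
print(req.get_method()) # POST
```

Nothing here is openclaw-specific; any client that speaks the OpenAI chat-completions format can swap the base URL the same way.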
The AI techniques that fit on the Mac mini, and are accessible thanks to Apple's HCI advantage, are worth it to people who will pay to solve their niche problems.
True, a good use case
It seems to me they've had enough foresight to prevent their openclaw credentials from being stolen via copy-fail
Can you elaborate? Has it been shown that copy-fail can break through KVM boundaries in Linux?
I understood it to be local-only (which is more likely to affect containers, but I don't think that was demonstrated either).
It’s just so it can access iMessage
That’s the whole reason
AI can even run on a Raspberry Pi
Are you talking about running agents or models?
I’ll let you know when I get mine. Ordered April 1 and not expected until August.
Yeah, it's all back-ordered due to a huge surge in demand
You may also check eBay