My daily driver is an M1 MBP with 64GB of RAM. Using Ollama, LM Studio, or even just mlx-lm in Python, a model like gpt-oss:20b can produce results. It runs at anywhere from 50–80 tokens/sec, so don't expect blazing-fast edits, but it's usable in the sense that you can background it with clear instructions and come back to something that isn't complete trash.