> Windows ML is the built-in AI inferencing runtime optimized for on-device model inference...lets both new and experienced developers build AI-powered apps
This sounds equivalent to Apple's announcement last week about opening up access for any developer to tap into the on-device large language model at the core of Apple Intelligence.[1]

Whichever platform you're on, this is a win-win: developers get to build privacy-focused apps, and consumers get to use them.
[1] https://www.apple.com/newsroom/2025/09/new-apple-intelligenc...
How does Windows ML compare to just using something like Ollama plus an LLM that you download to your device (which seems like it would be much simpler)? What are the privacy implications of using Windows ML with respect to how much of your data it is sending back to Microsoft?
Ollama doesn't support NPUs.
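For context on the "simpler" route mentioned above: once Ollama is running locally, inference is just an HTTP call to its default endpoint on port 11434, with no cloud round-trip. A minimal sketch (assuming Ollama is installed and a model such as `llama3` has already been pulled):

```python
import json
import urllib.request

# Ollama's documented local REST endpoint (default port 11434)
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming /api/generate request for a local Ollama server."""
    payload = json.dumps({
        "model": model,       # e.g. "llama3" — must already be pulled locally
        "prompt": prompt,
        "stream": False,      # return one JSON object instead of a stream
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
```

Sending it is then `urllib.request.urlopen(build_request("llama3", "..."))` and reading the `"response"` field of the returned JSON. The trade-off raised in the thread still applies: this path runs on CPU/GPU via llama.cpp backends, whereas Windows ML is pitched as also targeting NPUs.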