https://archive.is/2026.02.11-194917/https://www.bloomberg.c...
Given the way current LLMs hallucinate, and given that Apple (presumably) won’t accept this behaviour in Siri, I’m skeptical that existing technology (or existing technology scaled up) can ever create the Siri that Apple and its customers want.
Yeah. Apple doesn’t half-ass things. This is why people take their products seriously.
I'll settle for "gets voice to text right most of the time". Seriously, Apple is so far behind on the cheapest table stakes at this point that I highly doubt their high standards are the issue.
Yeah, but isn't the voice recognition (as opposed to voice comprehension) separate from the supposedly LLM-powered bit of Siri? Don't get me wrong - I want better voice comprehension too - but I don't imagine that moving to an LLM-powered Siri will solve that?
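For what it's worth, the transcription step really is its own layer on Apple platforms: the Speech framework hands back plain text before anything LLM-shaped gets involved. A minimal sketch of that ASR-only path, assuming the public Speech framework, an en-US locale and an audio file URL (authorization prompts omitted; this is illustrative, not Siri's actual internal pipeline):

    import Speech

    // Sketch only: transcribe an audio file with Apple's Speech framework.
    // This is the plain ASR path; no LLM is involved anywhere here.
    func transcribe(fileURL: URL) {
        guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
              recognizer.isAvailable else {
            print("Speech recognizer unavailable")
            return
        }

        let request = SFSpeechURLRecognitionRequest(url: fileURL)
        request.requiresOnDeviceRecognition = true // keep audio on-device where supported

        _ = recognizer.recognitionTask(with: request) { result, error in
            if let result = result, result.isFinal {
                // Whatever lands here is what ends up in the dictated message;
                // a smarter model downstream can't repair a bad transcription upstream.
                print(result.bestTranscription.formattedString)
            } else if let error = error {
                print("Recognition failed: \(error.localizedDescription)")
            }
        }
    }

Whether the comprehension layer is an LLM or the old intent matcher, it only ever sees the string that comes out of a step like this, which is why a smarter Siri wouldn't by itself fix dictation.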
Oh absolutely. The number of times I have to pause, take a deep breath and OVER-enunciate (still with mixed success) because my voice and pulse rise and my patience drops with every absolute butchering (not even "close but no cigar" but "how on earth did you come up with that?") that Siri does to a dictated text message in CarPlay...
I don’t even bother anymore. When it reads back the text message and asks if I want to send it I just laugh heartily and say yeah. Sometimes the recipient has to read it aloud and try to phonetically guess what the original words were.
Literally, what's the difference between that and Siri now?
Siri can't understand or pronounce very well.
A few weeks ago, Siri via CarPlay responded to a text and sent it without me saying a word and with no radio on, even with the setting that asks before sending enabled. It responding "Why?" to a serious text was seriously inconvenient in the moment. I watched it happen in disbelief.
(Edit: Didn't see your last paragraph before writing the response below)
I think there is a distinction between Siri misunderstanding what was said (which you can see/hear) and Siri understanding what you said but hallucinating an answer. In both cases you still have to check the result, but in the first case it's at least clear that you've been misunderstood.
I don't think that's at all a safe presumption, given that AI still happily hallucinates summaries of text messages/emails that contradict the actual content of the message.
Unless I misunderstand your reply, I think we're agreeing.
Seems weird to comment on delayed new features from Apple. Obviously, if it doesn't meet the quality bar it gets pushed back; that's just how they do things.
But I wonder how much of the problem is due to trying to minimise off-device data processing. Even with OpenAI as a last resort, I don't imagine you get much value choosing betwixt the local model and a private cloud that doesn't save context.
Meanwhile the average user is yeeting their PII into Altman's maw without much thought so Siri is always going to seem rubbish by comparison.
I just wish they would fix the out-of-memory disaster that is iOS 26 on my MacBook.
Do similar issues exist with Gemini on Android?
Or are these challenges very Siri/iOS specific?
Gemini can and does send everything to Google.
Apple's challenge is they want to maintain privacy, which means doing everything on-device.
Which is currently slower than the servers others can bring to the table, because they already grab every piece of data you have.
> Apple's challenge is they want to maintain privacy, which means doing everything on-device.
Apple is not trying to do everything on-device, though it prefers this as much as possible. This is why it built Private Cloud Compute (PCC) and as I understand it, it’s within a PCC environment that Google’s Gemini (for Apple’s users) will be hosted as well.
This isn't planned to be exclusively on-device. Siri isn't exclusively on-device now, to begin with.
> Siri doesn’t always properly process queries or can take too long to handle requests, they said
I mean, for anyone familiar with LLMs this is not exactly a surprise. There is no way Apple can remove the inherent downsides of this technology, regardless of how enthusiastic the AI bros are about it.
In a twisted way, I’m happy there are at least some teams at Apple where a feature doesn’t get a pass for bugs just because it has AI on the sticker.
Damn paywalls! Sorry, I shouldn't be so negative. I'd just like to be able to read the article.