Google is using a special version of Gemini (fast, small) and a special version of their internal ranking API (faster, fewer anti-spam/quality measures).
That makes them very fast. But it also leads to a ton of hallucinations. If you ask for nonexistent things (like the cats.txt protocol), AI Overviews consistently fabricate facts. AI Overviews can also pull the content of the potential source URLs directly from Google's cache.
ChatGPT is slow because they have to make an external API call to Bing or - even worse - to a scraping provider like SerpApi/Data4SEO/Oxylabs to crawl regular Google search results. That introduces two delays. OpenAI then has to fetch some of these potential source URLs in real time, which introduces another delay. And then OpenAI also uses a better (but slower) model than Google to generate the answer.
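To make the serial-delay argument concrete, here is a minimal sketch of the two pipelines as described above. The step names and per-step timings are illustrative assumptions only, not measured values from either product; the point is just that serial stages add up, and the live-fetch stages dominate.

```python
import time

# Hypothetical latency budgets for the two pipelines described above.
# Every step name and number here is an illustrative assumption.

GOOGLE_OVERVIEW_STEPS = {
    "internal ranking API (reduced anti-spam/quality checks)": 0.05,
    "pull candidate pages from Google's own cache": 0.05,
    "generate summary with a small, fast model": 0.50,
}

CHATGPT_SEARCH_STEPS = {
    "external API call to Bing or a scraping provider": 1.0,
    "fetch candidate source URLs live over the web": 3.0,
    "generate answer with a larger, slower model": 5.0,
}

def total_latency(steps: dict[str, float]) -> float:
    """Serial stages simply add up."""
    return sum(steps.values())

if __name__ == "__main__":
    for name, steps in [("AI Overviews", GOOGLE_OVERVIEW_STEPS),
                        ("ChatGPT search", CHATGPT_SEARCH_STEPS)]:
        print(f"{name}: ~{total_latency(steps):.1f}s total")
        for step, seconds in steps.items():
            print(f"  {seconds:>4.2f}s  {step}")
```

Under these made-up numbers the gap is roughly an order of magnitude, which matches the rough shape of the <1s vs ~17s anecdote later in the thread.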
Over time, OpenAI should be able to catch up in terms of speed with their own web/search index.
If you try more complex questions, you might find AI Overviews less to your liking.
Google gets away with this because users are used to typing simple queries - often just a few keywords. Any kind of AI answer is like magic.
OpenAI cannot do the same. Their users are used to having multi-turn conversations and receiving thoughtful answers to complex questions.
Interesting. I am still defaulting to ChatGPT when I anticipate having a multi-turn conversation.
But for questions where I expect a single response to do, Google has taken over.
Here's an example from this morning:
It's my first autumn in a new house, and my boiler (forced hot water heating) kicked on for the first time. The kickboards in the kitchen have Quiet-One Kickspace brand radiators with electric fans. I wanted to know what controls these fans (are they wired to the thermostat, do they detect radiator temperature, etc.?).
I searched "When does a quiet-one kickspace heater turn on". Google's AI Overview answered correctly [1] in <1 second. I tried the same prompt in ChatGPT; it took 17 seconds to get the full (also correct) answer.
Both answers were equally detailed and of similar length.
[1] Confirmed correct by observing the operation of the unit.
Google's AI search overview is designed to quickly pull and summarize information from its massive web index, while ChatGPT search focuses on providing detailed conversational responses that may require more processing time. The speed difference users notice comes from fundamental differences in how these systems work - Google leverages its existing search infrastructure and pre-indexed web content, while ChatGPT processes queries through a more complex language model that generates responses token by token. Also, I would imagine that ChatGPT is using RAG more in generating some of its responses, and RAG is I/O bound. I/O bottlenecks are orders of magnitude slower than a process that could be completed mostly in memory.
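A toy illustration of the I/O-bound point: a RAG-style answer that has to retrieve documents over the network before generating is orders of magnitude slower than one served from data already resident in memory. The query string, the round-trip count, and the sleep durations below are assumptions standing in for network latency, not benchmarks of either product.

```python
import time

# In-memory corpus standing in for pre-indexed/cached content.
CACHED_DOCS = {
    "quiet-one kickspace heater": "Fan runs once the hot-water loop warms up ...",
}

def answer_from_memory(query: str) -> str:
    # Pure in-memory lookup: microseconds.
    return CACHED_DOCS.get(query, "")

def answer_with_live_retrieval(query: str) -> str:
    # Simulated RAG retrieval: each round trip to a search API or source URL
    # costs tens to hundreds of milliseconds before generation even starts.
    for _ in range(3):      # e.g. one search call plus two page fetches
        time.sleep(0.15)    # stand-in for network latency (assumed value)
    return CACHED_DOCS.get(query, "")

for fn in (answer_from_memory, answer_with_live_retrieval):
    start = time.perf_counter()
    fn("quiet-one kickspace heater")
    print(f"{fn.__name__}: {time.perf_counter() - start:.4f}s")
```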