Is there anything preventing incumbents from developing their own ASIC equivalent of a Google TPU?
Or maybe GPUs are not really that different from TPUs.
To me, one of the secret sauces of AI chips is memory and interconnect bandwidth. On memory, everyone is using the same HBM series, so there's no differentiation. Multi-chassis interconnect is already not the bottleneck relative to compute. So GPUs aren't any worse.
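The "not bottlenecked vs compute" claim can be sanity-checked with a rough roofline-style calculation. A sketch below, where the peak-FLOPS and HBM-bandwidth figures are illustrative assumptions picked for round numbers, not any vendor's actual specs:

```python
# Back-of-envelope roofline check: is a kernel memory-bound or compute-bound?
# Both figures below are illustrative assumptions, not official chip specs.
peak_flops = 1.0e15  # assumed peak throughput: 1 PFLOP/s (dense, low precision)
hbm_bw = 3.0e12      # assumed HBM bandwidth: 3 TB/s

# The "ridge point": arithmetic intensity (FLOPs per byte moved) at which
# compute and memory traffic take equally long.
ridge = peak_flops / hbm_bw

def bottleneck(flops_per_byte):
    """Below the ridge point a kernel is memory-bound; above it, compute-bound."""
    return "memory-bound" if flops_per_byte < ridge else "compute-bound"

print(f"ridge point: {ridge:.0f} FLOPs/byte")
print(bottleneck(50))    # low-intensity kernel (e.g. attention-like, bandwidth-heavy)
print(bottleneck(1000))  # high-intensity kernel (e.g. large dense matmul)
```

With these assumed numbers the ridge point lands around a few hundred FLOPs/byte, so low-intensity kernels hit the HBM wall first regardless of whether the compute units are a GPU's or a TPU's, which is the crux of the "same HBM, no differentiation" argument.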
> developing their own ASIC equivalent of a google TPU?
It'll be the same story as Apple and their Mx chips. Many have tried and none have matched them, even after several generations. And not many companies have deep enough pockets to build a successful chip at scale and do it efficiently.
https://archive.ph/vyG6B
Amateur guesswork!