When I interned at IBM, this was a big deal. IBM was really invested in being "vertically" integrated, "from sand to software" as they would say. One wonders what another run at this concept would look like given advances in semiconductor manufacturing.
I have often wondered if a chip fab that could make 1,000 chips of a given type economically (which is to say, using your custom chip in your system was less expensive than adopting an off-the-shelf chip) would be a thing. The whole 'Tiny Tapeout' thing would be a lot more interesting too.
Jim Keller is working on a small fab design that is intended to make small runs economical: Atomic Semi. It hasn't yielded anything yet, but they only started in 2023.
https://atomicsemi.com/
I worked at a company that was vertically integrated. The fab was across the road from where ASICs were designed. The last technology node, I believe, was 130nm on 6" wafers. To go the next step would have cost something like $1B in 1990s dollars. That's a tall order for an in-house fab; if you're going to spend that kind of money the fab had better be doing something all the time. So either you take on foundry work, or you get rid of the fab and farm out your work to a foundry elsewhere.
As far as I know, the same thing happened up the food chain; the company that did make our next ASICs (IBM, Essex Junction Vermont) has spun off that fab as well. So it goes.
Given that there are only three companies in the world even capable of competing on the sand side, I'd say that window of opportunity is rapidly shrinking.
You can quite easily compete on old nodes, and we will have to when old, paid-off hardware starts dying.
Full vertical integration implies big iron, or something like it. Nobody is going to buy TI 45nm today when 3nm exists and runs circles around it, no matter how good the software is.
Old nodes are fine for a lot of industrial applications, but they don't care about the full stack software as much.
The best seem to be those contracting the sand out and doing the rest. Nvidia, AMD, and all the new AI players.
Intel probably has the best chance at full vertical, but from everything I read it seems to be suffering from the same bloat IBM suffered.
Isn't Intel currently trying to spin out their foundry as a separate business?
Unsure, but it wouldn't surprise me. It's AMD and GlobalFoundries all over again. I guess the difference is that they took billions in CHIPS Act subsidies, so they probably need to pretend to compete for a while.
> your custom chip in your system was less expensive than adopting an off the shelf chip
That's a tall order - off-the-shelf parts are rarely very expensive for the functionality you get (that is, compared to design cost).
And that's not necessary. If you could get a chip that's much more expensive but has specific advantages, you'd already have a business. See FPGAs.
I've always found it amusing that somehow custom silicon makes economic sense in the absolute cheapest products.
You look inside a child's toy? A musical greeting card? A remote control? A $5 multimeter? ASIC. Often in the form of a black epoxy blob 'chip on board'.
You look inside a $30,000 industrial robot arm? No no no, we couldn't possibly afford custom silicon, FPGAs are the only option.
First of all, the epoxy blob may well contain a mostly off-the-shelf SoC, maybe lightly customized (mask-programmed ROM, choice of peripherals). Not a full custom design.
Second, volume! If you're going to make a million of something, the NRE of a relatively low-tech chip isn't so bad.
Also, FPGAs can be reprogrammed in the field if necessary.
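A rough back-of-envelope sketch of that volume argument (every dollar figure below is a made-up assumption, not a real quote):

    # NRE amortization; all numbers are illustrative guesses.
    def per_unit_cost(nre_dollars, unit_cost, volume):
        """Total per-unit cost once one-time NRE is spread over the run."""
        return unit_cost + nre_dollars / volume

    # Cheap mature-node ASIC (greeting-card style chip), assumed numbers.
    asic_toy   = per_unit_cost(nre_dollars=250_000, unit_cost=0.05, volume=5_000_000)
    # Same ASIC economics at industrial-robot volumes.
    asic_robot = per_unit_cost(nre_dollars=250_000, unit_cost=0.05, volume=2_000)
    # Off-the-shelf FPGA: no NRE, but a much higher unit price.
    fpga_robot = per_unit_cost(nre_dollars=0, unit_cost=40.00, volume=2_000)

    print(f"Toy ASIC:   ${asic_toy:8.2f} per unit")    # ~$0.10
    print(f"Robot ASIC: ${asic_robot:8.2f} per unit")  # ~$125.05
    print(f"Robot FPGA: ${fpga_robot:8.2f} per unit")  # $40.00

At toy volumes the NRE nearly vanishes into the unit price; at robot-arm volumes the same NRE dwarfs the cost of simply buying an FPGA.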
It's not the wafer cost. The masks (the "negatives" for the lithography) are the problem. A mask set (you need multiple exposures for one device) for a modern EUV node costs $20-30M. That's the limiting factor. You can't get cheaper than that.
As a sibling comment notes, multi-product wafers are a theoretical answer. However, since you have process corners (manufacturing variation and defects aren't uniformly distributed across the wafer), it is infeasible for anything but the cheapest parts.
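To put that mask-cost point in perspective, here is a crude sketch where every number (mask set price, wafer price, die size, packing factor) is an assumption chosen purely for illustration:

    import math

    # All figures are illustrative assumptions, not quotes from any foundry.
    mask_set_cost  = 25e6                      # assumed EUV-node mask set, USD
    wafer_cost     = 15_000                    # assumed processed 300mm wafer, USD
    die_area_mm2   = 100                       # assumed die size, mm^2
    wafer_area     = math.pi * (300 / 2) ** 2  # 300mm wafer area, mm^2
    dies_per_wafer = int(0.85 * wafer_area / die_area_mm2)  # crude edge-loss factor

    for units in (1_000, 100_000, 10_000_000):
        wafers = math.ceil(units / dies_per_wafer)
        total  = mask_set_cost + wafers * wafer_cost
        print(f"{units:>10,} units -> ${total / units:>12,.2f} per die "
              f"(masks are {mask_set_cost / total:.1%} of the total)")

At prototype volumes the mask set is essentially the whole bill, which is why shuttle runs that share one mask set across many designs are the usual workaround.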
The real next moonshot in the foundry business would be to lower respin costs, i.e., the amount of money it costs when your first fabbed silicon doesn't yield or doesn't validate functionally the way you had expected/planned.
If I were the US government (or any other), I'd focus on that. Subsidize the respin cost to zero in the short term, given certain prerequisites for start-ups, and push an all-out Manhattan Project-style R&D effort to lower the respin cost in the long run.
> lower the respin costs
You might be interested in structured ASICs, which allow for substantial reuse of masks between different products. At the extreme was a via-only definition product where all interconnects were specified with one mask (and the via masks were among the cheapest to make since they are very uniform).
In regular ASIC development, we've had extra unrouted transistors available to wire up in case of a mistake (so hopefully a respin involves just some new metal layers). Techniques like FIB (focused ion beam) edits can be used to test fixes and lower the number of respins, too. I'm not sure how much of this was automated to maximize the chances of being useful.
https://en.wikipedia.org/wiki/Structured_ASIC_platform
I get that for state-of-the-art fabs. Those optimize for long runs on big wafers. My question, though, is whether you can find a solution at a different node that favors cost/turnaround at the expense of not scaling.
For example, could one make a 200nm node with conventional UV masks and a limit of, say, 10 layers? Non-mask lithography options? Or, as in the article, 'sub'-masks where you step a single die image across the wafer?
Multibeam Corporation is making “maskless” lithography tools:
https://multibeamcorp.com/applications/#high-value
Multi-product wafers are a thing, especially on older nodes. They're accessible to hobbyists.
Node size and price per sq mm? The last time I looked at what it would cost to get a partial wafer it was > $25,000 (of which slightly more than half was NRE charges), but I would love to find something "hobbyist accessible"!
My understanding is that a chip design for one fab isn’t portable to another.
How can a hobbyist design just be slapped onto a wafer (alongside others' designs) and call it a day?
Indeed, you develop using a fab-specific PDK. But if you share a PDK (Tower Semi's, for instance), your device can go on shuttle wafers with multiple designs from different companies. This is done for old nodes and for low-volume or R&D parts.
The degree to which automation is now essential is astounding. Every time a human is on the clean room floor you are burning dollars in terms of defects. For a process node at 3nm and beyond, I don't think you could achieve any yield at all if the automation rate were to fall even a few percent.
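For intuition on why a small rise in defect density is so punishing, here is a toy calculation with the classic Poisson yield model, Y = exp(-A * D0), where A is die area and D0 is defect density (the die sizes and densities below are illustrative assumptions):

    import math

    # Poisson yield model Y = exp(-A * D0); all numbers are illustrative.
    for die_area_cm2, label in ((1.0, "~100 mm^2 die"), (6.0, "~600 mm^2 die")):
        for d0 in (0.05, 0.10, 0.20, 0.40):  # defects per cm^2
            y = math.exp(-die_area_cm2 * d0)
            print(f"{label}: defect density {d0:.2f}/cm^2 -> yield {y:6.1%}")

A large die drops from roughly 74% to 9% yield as defect density climbs from 0.05 to 0.40 per cm^2, so even a modest contamination increase can wipe out the economics of a leading-edge part.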
IBM Microelectronics was a fairly significant fab and semiconductor player until the early 2010s.
Remember PowerPC? That was IBM, and it was used everywhere from iMacs to the Xbox 360 to ThinkPads.
Sadly, fabrication became commoditized because of outsourcing to Taiwan and South Korea, which gave unfair advantages to their state-adjacent firms like TSMC and Samsung.
Then it was given away to GlobalFoundries, which ran out of cash trying to take IBM's 7nm process into HVM and gave up on being a leading-edge fab. IBM sued GF over this.
IIRC, the GF 7nm process was rumoured to have the best specs vs. Intel, TSMC and Samsung.
They also tried to do this with a free electron laser as the EUV source.
A Japanese lab is continuing those efforts, at least.
Just a nit, but PowerPCs were not used in Thinkpads except for very limited production runs in the mid-1990s. The problem was that IBM didn't have an OS for the platform. They had AIX, but it didn't make sense on a laptop. The idea was that OS/2 would provide a PC desktop of their own, but it barely shipped for PowerPC before IBM pulled the plug.
However IBM did design and build x86 chips in the 1990s, and these were used in Thinkpads.
What a fascinating read. Thanks for sharing.
+1