[Not as much capering to strike out here. I suspect my editor realized he wasn't actually helping.]
Thursday, April 23, 1998
by Rob Landley
Austin, TX (Apr. 23, 1998) -- On Tuesday, I wrote about how Intel makes its microprocessor chips. Then yesterday, I talked a bit about how Intel makes those chips faster and smaller. And today, let's do as Olivia Newton-John tells us to -- Fools, "Let's get fiscal." We'll consider how Intel's work makes (and costs) money.
We start by asking: How much does it cost Intel to manufacture a microchip?
The answer is both "not much" and "a heck of a lot." Before you can manufacture a chip, you have to design it. This can take years, and it involves highly trained, highly sought-after, highly paid professionals. By the truckload. Conversely, redesigning an existing chip for a higher speed (with shorter wires) is generally just a matter of a few tweaks, mostly a manufacturing issue.
Intel, however, is in the business of creating new generations of chips, like the "386," "486," "Pentium," "Pentium II," and the upcoming "Merced." Creating a new chip basically involves ripping the existing chip apart and paying electrical engineers to redesign large parts of it from scratch. You also have to exhaustively test every new chip design. Anybody who remembers the floating-point bug in the original Pentiums (Intel: Quality you can count on, but not divide by?) should agree that it's a lot cheaper to find all the problems before you ship the chip.
Plus, as I explained yesterday, before you can manufacture chips at a smaller (and thus faster) size, you have to re-tool your manufacturing facility. Often, it's easier to just build a new one and use the old one for making flash RAM or motherboard controller chips -- or making frisbees, or sum'thin'.
But building new manufacturing facilities ain't cheap. Cha-cha-cha-ching! A brand-new, state-of-the-art CPU fabrication facility can literally cost billions of dollars.
So, before you've made a single new chip, you've forked out a truly obscene amount of cash. The up-front expenses here are absolutely staggering.
On the plus side, though, these costs dig a moat around Intel's business. Microchip fabrication isn't something that your two nephews are going to kick off in their garage. Smaller companies such as NexGen, Cyrix, and WinChip tend to license production capacity from IBM to get themselves off the ground. Sometimes they use IBM for entire production runs. Building your own fabrication plant simply requires more start-up capital than your average venture capitalist or even an IPO can provide. Even for Intel, launching a new generation of chips is a major expense.
The good news for Intel is that once it's gotten to the point of actually making its new chips, the whole process suddenly gets extremely cheap. Remember that Intel has after-tax margins of 27.7% off its $25 billion in trailing sales. Incredible! That phrase "economy of scale" comes into play in a very big way here -- Intel may be its truest example.
How does Intel take advantage of these economies of scale?
For starters, the silicon wafers into which chips are etched can hold hundreds or even thousands of microchips, all in neat little rows, like exposures on a roll of film. As long as you're etching the wafer and spraying chemicals at it, you might as well fill the sucker from edge to edge with chips, and that's exactly what Intel does. Intel's drive toward making the chips smaller helps here. Smaller is faster, and smaller is cheaper... which, yes, does contradict the age-old message out of Texas. But in technology, at least this technology, it's very true.
As Intel makes smaller and smaller chips, the actual cost of making them -- judged as "money spent to burn one more wafer" -- can work out to almost pennies per chip. And with each chip sold, you're amortizing (paying off like a mortgage) the cost of the equipment, the years spent developing the design, and the marketing efforts of those guys who show Intel workers dancing around in multi-colored bunny suits.
This leads to an interesting little fact of microchip design. (I'm just full of these, eh?) If you're going to design and manufacture a few hundred chips (the way they used to for old mainframes), they'll cost thousands or even millions of dollars per chip. But if you design exactly the same chip and run off a million of them, the cost per chip drops dramatically. The more you run, you see, the cheaper they get. That's an economy of scale in a big way. And that's one of the reasons why the PC replaced the mainframe.
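To make the economy-of-scale arithmetic concrete, here's a minimal sketch. Every dollar figure is an invented round number for illustration -- these are not Intel's actual costs.

```python
# Toy model: up-front costs get spread over the production run,
# while the incremental cost of one more chip stays tiny.

def cost_per_chip(fixed_costs, marginal_cost, units):
    """Total cost per chip: amortized up-front costs plus
    the small incremental cost of burning one more chip."""
    return fixed_costs / units + marginal_cost

FIXED = 2_000_000_000   # hypothetical: years of design work plus a new fab
MARGINAL = 50           # hypothetical: wafer, chemicals, testing per chip

print(cost_per_chip(FIXED, MARGINAL, 500))         # a mainframe-sized run
print(cost_per_chip(FIXED, MARGINAL, 10_000_000))  # a PC-sized run
```

With these made-up numbers, a 500-chip run costs millions of dollars per chip, while a ten-million-chip run costs a few hundred dollars each -- the same arithmetic that helped the PC displace the mainframe.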
Another thing to keep in mind is the "yield" on Intel's efforts. The "yield" is the percentage of chips out of each batch that actually work. When Intel burns an 8-inch wafer containing a thousand chips, not all of them are going to work. A single dust mote will ruin an entire chip (which is why manufacturing is done in "clean rooms" where the workers wear those "bunny suits"). Air currents can blur the lines too -- have you ever seen a Sunday comics page where one of the colors isn't printed where it should go? It's the same principle. Or maybe the chemicals didn't get distributed evenly (gasp, a bubble!). There are literally dozens of things that can go wrong.
Yield issues in chip manufacturing are huge.
For instance, right now, Advanced Micro Devices (NYSE: AMD) has an outstanding new processor: the K6. It's one of the fastest chips out there, running at 300 megahertz. And K6s are selling like hotcakes. But Advanced Micro can't make enough of them to meet demand because, sadly, its yields are so low. And it can't raise prices too much or people will make do with the 266 megahertz chips from Intel. Unfortunately, for every wafer that Advanced Micro burns, only a small fraction of the chips work. It's enough to make company executives tear their hair out.
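You can see why yield dominates per-chip cost with a little sketch. The wafer cost and die count here are invented for illustration -- real figures aren't public.

```python
# Toy model: the cost of a wafer is roughly fixed, so the cost per
# *working* chip rises as the yield falls.

def cost_per_good_chip(wafer_cost, chips_per_wafer, yield_rate):
    """Spread the wafer's cost over only the chips that actually work."""
    good_chips = chips_per_wafer * yield_rate
    return wafer_cost / good_chips

# Same hypothetical wafer, two yields:
print(cost_per_good_chip(5000, 1000, 0.90))  # a healthy yield
print(cost_per_good_chip(5000, 1000, 0.30))  # a struggling yield
```

Cut the yield to a third and the cost per working chip triples -- which is why low yields can make even a hot-selling part painful to build.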
Getting the yield up is part of the standard shakedown process of any new microchip-fabrication process. You perfect things as you go along. And there are a lot of tricks you can pull to get yield up, all of which Intel has done in the past. Does anyone remember the Pentium 60 and Pentium 90? Actually, those were originally Pentium 66 and Pentium 100 chips that wouldn't run reliably at the faster speed (due to fuzzy wires and misaligned connections). But they ran fine at the slightly slower speed. So Intel developed and marketed the slower chips. It recognized that, in this business, yield matters most.
Obviously, Intel and other chipmakers want to get their yields up to 100%. But there's a threshold at which they say, "Let's go with it -- the yield is good enough for the marketplace, and we can still pump out plenty of working chips per wafer." That's the profitability-versus-quality debate at Intel -- the sort of debate that goes on at every company.
Once Intel pushes ahead, it necessarily has a certain number of reject chips. The decision about what to do with rejects can get pretty complicated as well, since the rejects (which obviously are sold cheaper if they can be made to work in a slower or limited fashion) may compete with the normal chips on price. If there isn't enough demand for your good chips and for your crippled ones, you may make more money by not selling the rejects at all. Then again, selling salvaged rejects is literally free money. Cannibalization or protection? It can be a tough call.
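The cannibalization trade-off above can be sketched as a toy revenue model. The prices, demand figures, and cannibalization rates are all hypothetical; real product-line modeling is far messier than this.

```python
# Toy model: does selling salvaged reject chips add revenue, or does it
# cannibalize too many full-price sales?

def revenue_no_rejects(full_price, full_demand):
    """Revenue if the rejects are simply thrown away."""
    return full_price * full_demand

def revenue_with_rejects(full_price, reject_price, full_demand,
                         reject_sales, cannibalized):
    """Revenue if rejects are sold cheap: 'cannibalized' is the number
    of would-be full-price buyers who trade down to the cheap part."""
    return ((full_demand - cannibalized) * full_price
            + (reject_sales + cannibalized) * reject_price)

# Hypothetical numbers: with light cannibalization, rejects are free money...
print(revenue_no_rejects(300, 100_000))
print(revenue_with_rejects(300, 120, 100_000, 20_000, 5_000))
# ...but with heavy cannibalization, selling them loses money overall.
print(revenue_with_rejects(300, 120, 100_000, 20_000, 60_000))
```

Whether the salvage revenue outweighs the lost full-price sales depends entirely on how many buyers trade down -- which is exactly why it's a tough call.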
So, in answer to our query from across the cocktail party in yesterday's report, that's what Intel does for a living.
These last three reports have been a good deal of fun for me to write. I hope you enjoyed them, but for those fools (small-f!) who didn't -- don't worry, tomorrow I'm going to talk about something totally accessible. In fact, I'm thinking of being entirely sacrilegious and talking about something I read in last month's issue of Forbes: How day-traders sometimes actually do something useful.
Hey, it's rare enough that it deserves comment.
See you tomorrow, Fools!
- Rob Landley
P.S. If you want to read more on Intel and chipmaking, here are two links:
Information on Advanced Micro Devices K6
Information on Intel's Expanded Stock Buyback Plan