Gordon E. Moore

The microprocessor pioneer explores the technology
needed to keep the computer revolution humming

Gordon Moore on:

  • Birth of the Microprocessor

  • Moore's Law

  • Pushing the Technology

  • The Future of the Computer

    In 1965, Gordon E. Moore noted that the number of devices on a chip (and hence the potential power of a computer) was doubling each year--and projected that out 10 years. As astonishing as it seemed, that relentless progression held true. His initial observation is now known as "Moore's Law." As co-founder and chairman of Intel Corp., Moore himself had a lot to do with proving out his prediction. Each leap in power required new technology that shrank the size of circuit lines so that ever more devices could be packed onto a sliver of silicon. In the 25 years since a team headed by Moore produced the first true microprocessor, Intel and its competitors have been able to pull off the stream of technological breakthroughs needed to sustain the computer revolution.

    But how far can semiconductor technology go? As microcircuit transistors shrink from microscopic to nanoscopic dimensions, is Moore's Law about to run out of steam? In this third section of a four-part interview, Moore speaks with Scientific American's West Coast editor W. Wayt Gibbs about the new advances in design and manufacturing that the industry is betting on to provide faster and more complex chips for the next generations of computers.


    SCIENTIFIC AMERICAN: While computers continue to demand more and more memory--at lower and lower prices--many experts believe that we are reaching the limits of optical technology to etch ever-smaller circuits. Some people are worrying about qualitative jumps in the increase of fabrication costs due to having to move beyond optics. Can Moore's Law survive the transition?

    GORDON MOORE: Moving beyond optics is a real challenge. We keep pushing optics further and further. Frankly, we've done it further than I ever would have imagined. There used to be conventional wisdom that a minimum circuit-line width on a microchip of one micron was the limit that we could do optically. Now we can do a quarter micron. The next couple of generations--0.18 microns, probably 0.13--it looks like we can do optically.

    Beyond that, life gets very interesting. We have three equally unattractive alternatives, maybe four. I don't know quite how it's going to go. There's been a lot of effort spent on x-rays. X-rays were said to be the technology of choice at half a micron; now people hopefully predict using them at submicron levels--at 0.13, for instance.

    But it'll probably get tougher--it's the nature of the mask. X-ray photolithography requires one-to-one shadow imaging. In optical lithography, we make the pattern a lot bigger than the device, then project it down. Well, the one-to-one mask problem is extremely severe, particularly for x-rays, because the mask layer has to be thick enough to absorb the x-rays.

    FIRST MICROCOMPUTER, the 4004, made history in 1971. It contained 2,300 transistors etched in circuits 10 microns wide and slogged along at a processing speed of 108 kilohertz.
    What you end up with, if you look at it under a microscope, are tall, skinny features. They are much taller than they are wide, at these dimensions. It is very hard to make the mask perfect enough and then to do the precision alignment. So while a lot of work continues on x-rays, some of us have lost our enthusiasm for that technology.

    Then there's the idea of electron beam writing. This can be used to make the small features. But it tends to be relatively slow. As you go to smaller dimensions, the total distance the beam has to travel to make the pattern keeps going up. The slowness gets emphasized by finer and finer dimensions and more complex structures.

    Now the industry is looking at ways to get around that by using electron beams in shapes other than a pencil beam--to write with squares, rectangles or whatever depending on the feature you're trying to build. Worst case, we will be able to make a layer or two of some very fine structures with an electron beam and then use optics to add on structures that are not so fine. That way you can still make very small transistors where you need them. That doesn't get you as far as you would like to go, but it gets you some of the advantages. So that's kind of a fallback position.
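Moore's point about e-beam slowness can be sketched with a toy scaling model. The inverse-square relationship below is an illustrative simplification (a raster-style beam covering the same chip area with features half as wide has roughly four times as many spots to write), not a figure from the interview:

```python
def relative_write_time(feature_scale):
    # Writing a fixed chip area with features scaled by 'feature_scale'
    # means roughly 1 / feature_scale**2 as many beam positions,
    # so write time grows as the inverse square (illustrative model).
    return 1.0 / feature_scale ** 2

# Halving the feature size quadruples the write time;
# quartering it takes sixteen times as long.
print(relative_write_time(0.5))   # 4.0
print(relative_write_time(0.25))  # 16.0
```

This is why shaped beams help: flashing a whole rectangle at once removes much of that per-spot cost for the coarser features.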

    Another option that we think deserves a very good look is using an intermediate wavelength, between x-rays and the ultraviolet light we use now. This has been given the name of EUV, for extreme ultraviolet. It used to be called soft x-rays, but x-rays have gotten enough of a bad name that it's called ultraviolet now. This is a range of wavelengths on the order of 13 nanometers.

    SA: Versus what wavelength range for x-rays?

    GM: Well, this is really soft x-ray, but x-rays are typically down at more like 13 angstroms, an order of magnitude smaller. Actually around 30 angstroms is where the x-ray work generally is done. Anyhow, at 13 nanometers, 0.013 micron, at that range you can still make mirrors. They're not easy--you have to coat them with something like 81 layers of masking material. And with current materials, reflectivity is only about 70 percent.
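The unit conversions Moore is juggling here can be checked with a quick sketch, working in angstroms (1 micron = 10,000 angstroms) and using the 13-angstrom x-ray figure he cites:

```python
ANGSTROMS_PER_MICRON = 10_000

euv_angstroms = 130   # 13 nanometers, the EUV wavelength Moore quotes
xray_angstroms = 13   # "more like 13 angstroms" for x-ray lithography

print(euv_angstroms / ANGSTROMS_PER_MICRON)  # 0.013 -- EUV in microns
print(euv_angstroms / xray_angstroms)        # 10.0 -- an order of magnitude apart
```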

    This is actually technology we developed for Star Wars [the Reagan-era anti-missile program]. We're thinking that this is potentially a lithography system that will take us as far as the material will let us go, a long ways from where it is now. Intel is actually trying to get an industry consortium together to support the research on this to see whether it really is practical or not. Then there are things like focused ion beams. Again, that has the resolution possibilities but also a lot of problems.

    But if EUV works, we have to go to a completely reflective system, because nothing is transparent in that range. You have to have a reflective mask instead of a transparent mask, which is an absolute change in the technology. You have to have a vacuum system. Everything has to be completely enclosed with inert gases to stabilize the material. You have to have a new resist system, something that will penetrate it enough at that wavelength. So there's a tremendous amount of engineering involved in making this work.

    SA: Is it clear that in principle at least, all these elements exist and will work?

    GM: It's clear that the optical things do. Is there a resist that has the desired characteristics? I don't know, but I suspect there is. People can make x-ray resist, people can make UV resist. It'll take a lot of fooling around, but somebody who really knows their organic chemistry well will come up with something.

    SA: Another roadblock that is sometimes cited is memory speeds, bottlenecks happening outside the processor that prevent it from running at full capacity.

    GM: This is an interesting deal. That used to be the case. The processors were quite a bit faster than memory, and that's what led initially to the complex instructions [used in Intel's CPUs]. You want the computer to do as much as it can with the stuff that's there. Then when semiconductor memory got up to the same order of speed as the processor, that's when the idea of RISC [Reduced Instruction Set Computing] processors came along.

    With RISC, you can go to memory a lot more often and do a lot of simple instructions. Now we're going back to the situation we had before, where the memory is quite a bit slower than the processor. I guess that would swing the scales back toward making complex instructions.

    That's something one has to live with, but what has happened in the meantime is much more dependence on cache memory, which is built into the microprocessor itself. And the cache memory does run in the same range of speeds as the processor. On chip you can fetch data from memory every cycle; off chip you can get there every couple of cycles. And the effectiveness of cache memory is pretty darn good. So that gets around most of the problem.

    SA: Do you expect to see more processor real estate devoted to cache, then?

    GM: That's one alternative. But if you look in our Pentium IIs, what we've done is to jam a lot of cache memory in separate packages right up against the processor. So we have some on chip, and then we have a lot more just off chip. We think at least for now that is a better compromise.

    SA: Does this give you an intermediate speed between completely on-chip memory and separate DRAM [Dynamic Random Access Memory] chips?

    GM: Yes, it works as a level 2 cache--that is, a two-clock-cycle cache. On chip you can still stay with one. But two isn't bad compared to going off to DRAMs, which requires tens of clock cycles.
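Moore's latency ladder (one cycle on chip, two for the level-2 cache, tens of cycles for DRAM) is exactly what the textbook average-memory-access-time formula captures. The cycle counts below are his; the hit and miss rates are hypothetical, chosen only for illustration:

```python
def amat(l1_latency, l1_miss_rate, l2_latency, l2_miss_rate, dram_latency):
    """Average memory access time in clock cycles (standard textbook model)."""
    # Cost of going past L1 = L2 hit time plus the DRAM penalty on L2 misses.
    l2_level = l2_latency + l2_miss_rate * dram_latency
    return l1_latency + l1_miss_rate * l2_level

# Moore's cycle counts; the 5% and 20% miss rates are illustrative guesses.
cycles = amat(l1_latency=1, l1_miss_rate=0.05,
              l2_latency=2, l2_miss_rate=0.2, dram_latency=30)
print(round(cycles, 2))  # 1.4 -- close to the one-cycle ideal despite slow DRAM
```

With decent hit rates, the average access stays near one cycle, which is why "the effectiveness of cache memory is pretty darn good."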

    SA: What about synchronization problems as clock speeds rise and chip and die sizes stay large?
    INTEL'S LATEST, the Pentium II, debuted in 1997. It is crammed with 7.5 million transistors with circuit lines of just 0.25 micron. The chips blaze along at speeds of up to 300 megahertz.

    GM: That's something that requires a lot of attention. This isn't an area that I'm expert in, but our people don't seem to be that concerned about it. In primitive circuit boards, keeping the clock signal consistent across the board was a problem. But there you had pretty significant dimensions. With the chips you can bring the clock in at a lot of different points, so you can keep it pretty well synchronized. It requires good engineering.

    SA: There are some--such as Ivan Sutherland and Robert Sproull at Sun Microsystems--who maintain that once you get into a gigahertz range it's going to be a real engineering headache to try and keep the clock signal synchronized everywhere.

    GM: A lot of things become headaches. Power is at least as big a concern. If you just let these things scale--you make the chips bigger, you make the frequencies higher--then you make the capacitance per unit area higher, since you have scaled everything. In two generations of technology, say from half micron to quarter micron, that's two steps down, to 50 percent of the starting size. When you look at the trends of making bigger chips, with more complexity and jacking up the clock speed, if you don't do anything else, the power goes up something like 40-fold.

    If you start with a 10-watt device and go up 40-fold...the darn thing smokes! It'll keep your lap warm, all right. So that is an area that really requires a lot of attention. And, of course we've handled it to date by lowering the voltage. But you can only go so far on that. So power gets to be a real problem when you get up into these high frequencies.
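Moore's 40-fold figure comes from multiplying several scaling factors against the dynamic-power relation P ∝ C·V²·f. The individual multipliers below are illustrative assumptions chosen to reproduce his number, not values he states in the interview:

```python
def power_multiplier(cap_factor, area_factor, freq_factor):
    # Relative dynamic power with voltage held fixed: P ∝ C * f,
    # where total capacitance C grows with both die area and
    # capacitance per unit area.
    return cap_factor * area_factor * freq_factor

# Two generations (0.5 um -> 0.25 um) with bigger dies and faster clocks.
# These multipliers are hypothetical:
growth = power_multiplier(cap_factor=2.0, area_factor=4.0, freq_factor=5.0)
print(growth)        # 40.0
print(10 * growth)   # 400.0 -- a 10-watt part becomes 400 watts: it smokes
```

Lowering the voltage attacks the V² term the formula hides in the constant here, which is why it has been the main lever to date.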

    SA: Is power a limiting factor?

    GM: You're kind of in a multidimensional box, and that's one of the dimensions you have to worry about. I suspect that clock skew will be another one. These are tough problems that require a lot of attention, but we have a lot of people working on them, too. Exactly when it will end up limiting us is hard to say.

    We clearly have a long way we could go before we get into trouble.

    SA: Would you give me your opinion on some of the technologies that are seen as most likely to help extend the life of the current progression of computer technology? How about phase shift masks: do you already use those in your manufacturing?

    GM: We keep avoiding them. Phase shift masks allow you to go to smaller dimensions with a given wavelength. They get very complicated to make when you go to a kind of random layout like you have on a microprocessor. It's easier to use them on memories. But if we don't have a shorter wavelength, it is the kind of thing we'll have to use to do the 0.13-micron generation with 193-nanometer excimer lasers.

    SA: So it sounds like you think they're going to be used eventually, it's just a matter of time.

    GM: I think it's likely we'll do something like that. We've done things sort of like that all along, although we weren't clever enough to call it phase shift masking. For years if we wanted to print a rectangle--if you just made a rectangular mask, the etched pattern tends to have rounded corners and look like a pillow due to diffraction--so we would just put little spikes around the corners of the rectangle to balance it out so that it printed a square. That's really a phase shift mask.

    SA: How about adding more layers to the chips?

    GM: More layers are something we do now without much concern. Going from one to two was tough, two to three was difficult, but five to six--piece of cake. A technology has come in there that is really amazing. This is the idea of chemical-mechanical polishing of the top surface. The problem used to be that as you went through more layers, the polishing got all screwed up. You'd get mountains and valleys and undercut levels, and things didn't work well. Now between putting down every layer of insulator and metal, we polish either the top of the metal or the top of the insulator flat. So we're always working on a flat surface. And that has really been a breakthrough technology in allowing multilayer structures.

    SA: How exactly do you polish them?

    GM: We have a great big lapping machine with some goo on there--chemical-mechanical, it's called. They use slurries that also react somewhat chemically with the surface. It's not just grinding. But it gives them a very flat surface. The end result is, we put five layers on top of each other and then ask the design engineers, "Would you like another layer of metal?"

    SA: Do you think that trend will continue, that chips will get even taller?

    GM: I think it will, yes. I think that's one of the real levers we have to work with.

    SA: Bigger wafers are coming, right?

    GM: I'm afraid so. Again I was a skeptic there. I convinced myself that we'd never go above the 200-millimeter wafer. The reason was, I argued, that the cost of material was going to become prohibitive. But the people who are going to supply it seem to think they can do it. Now, I haven't been at a silicon crystal growing facility in years. They must've learned something new since I was there.

    SA: Does it require an entirely new crystal growth technique, or does it just involve refinements of what they use to create 200-millimeter wafers?

    GM: It has to require something different, because the crystal hangs by this little seed. And the size of that seed has to be pretty small because you have to squeeze all the imperfections out of this seed before you start expanding. The limit to the size of the crystal you can grow used to be determined by the tensile strength of that seed: how much weight could you hang from it.

    That was why I argued that you couldn't go much bigger. As you increase the diameter of the silicon crystal and keep the weight the same, you have to decrease the length by the square! So an eight-inch-diameter crystal might be about 18 inches, and I could see a 12-inch crystal only a foot long. Then it takes longer to get out to the full width of the cylinder from the pointed top, and longer to get back. You need a thicker saw blade, so you've got to cut thicker wafers. So everything went in the direction of saying you get far fewer wafers out of a 12-inch crystal than an eight-inch one. I thought that would be a real limit.
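The inverse-square relationship Moore describes (a fixed seed-limited weight, so length falls as the square of the diameter) can be sketched directly. Note that applying it to his own numbers gives about eight inches for a 12-inch crystal, so his "a foot long" reads as a round approximation:

```python
def max_crystal_length(diameter, ref_diameter=8.0, ref_length=18.0):
    # Seed tensile strength fixes the weight it can hang; weight scales
    # as diameter**2 * length, so at constant weight the length scales
    # as (ref_diameter / diameter) ** 2.
    return ref_length * (ref_diameter / diameter) ** 2

print(round(max_crystal_length(12.0), 1))  # 8.0 inches, near Moore's "a foot"
```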

    Now somebody must've learned how to go in there and grab the crystal and keep it growing, rather than support all the weight from the seed. That didn't used to be possible. And I don't quite know what they are doing, maybe they're getting away with short crystals. But somehow or other, the people who have to supply the silicon seem to think that 300 millimeters is okay. That being the case, the industry will build to 300-millimeter wafers.

    SA: Will it go to bigger die size, do you think?

    GM: Those are kind of independent variables. We could fit a lot bigger die on the 200-millimeter wafer if we had to. That depends partially on the field of the lithography tool. We don't like to have to stitch fields together. But the economics of that thing are limiting that as much as anything. We sell area, we sell real estate. And we've always sold it for about a billion dollars per acre of silicon; a bit less for DRAM, a bit more for microprocessors. But when I first started out in business, we sold it for about half that. And the problem is, if you let the die get too big, your costs get all out of whack. So, if you're limited in how much a particular market will pay for your product, you've got to limit the area also.
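Moore's "billion dollars per acre" quip translates into familiar per-die numbers. The die size below is a hypothetical example, not a figure from the interview:

```python
MM2_PER_ACRE = 4046.86 * 1_000_000   # one acre is about 4,046.86 square meters

price_per_mm2 = 1e9 / MM2_PER_ACRE   # Moore's ~$1 billion per acre of silicon
die_area_mm2 = 200                   # hypothetical large die

print(round(price_per_mm2, 3))               # ~0.247 dollars per square mm
print(round(price_per_mm2 * die_area_mm2))   # ~49 dollars of "real estate" per die
```

At roughly a quarter of a dollar per square millimeter, letting a die grow by even a few tens of square millimeters visibly moves its cost, which is the squeeze Moore describes.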

    SA: Assuming that the trend will continue for the next 10 years, what do you see happening with all those extra cycles? What are we going to do with that power?

    GM: That becomes an interesting question. Fortunately, the software industry has been able to take advantage of whatever speed and memory we could give them. They've taken more than we've given, in fact. I used to run Windows 3.1 on a 60 megahertz 486, and things worked pretty well. Now I have a 196 megahertz Pentium running Windows 95, and a lot of things take longer than they used to on the slower machine. There's just that much more in software, I guess.

    But one application that I think we're not too far away from is good speech recognition. It's dangerous to predict that, because it's been the application that has been five years away for the last 25 years. But I think that within the 10-year timetable that we're talking about, it ought to be generally available.