Gordon E. Moore

Intel's chairman emeritus
looks to the future


Gordon Moore on:

  • Birth of the Microprocessor

  • Moore's Law

  • Pushing the Technology

  • The Future of the Computer


    From leading a closely knit team that put the first computer on a single chip 25 years ago to co-founding Intel and building it into a semiconductor powerhouse, Gordon E. Moore has been a dominant figure in the development of the modern computer.

    In September 1997, Moore took some time to speak with Scientific American's West Coast editor, W. Wayt Gibbs. In this final section of their interview, Moore gazes into his crystal ball. Here is what he sees coming over the next decade.


    PART IV: THE FUTURE OF THE COMPUTER

    SA: You've said that good computer speech recognition will be realized in the next decade. Let me press you a bit on that, because there are certainly skeptics who would argue that the problem with speech recognition is not so much lack of fast processors and memory but lack of understanding about how to put grammar into a computer and how to parse complex sentences.

    GM: There is a lot of algorithm work going on there, too. I was at Cambridge University for the hundredth anniversary of the electron earlier this year and looked at what they were doing. They gave me a newspaper article to read, and I read the article [to the computer]; the machine took about four times as long to process it as it took me to read it, but it did a pretty good job of recognizing what I read.

    This was a case where it certainly wasn't trained on someone with an American accent, so it was quite significantly speaker-independent. I didn't even read it anything initially to get it started. It uses a lot of context-related information to recognize speech: it calculates which phrase fits in a given kind of sentence.

    So in that respect it was looking at grammar, language structure and everything. And I read it continuous speech, not isolated words. I haven't played with one in quite a while; I think a lot of them you can teach individual words, and you can train them. But this looked to me like a fairly significant advance over the last modern speech-recognition system I had looked at.

    SA: Do you think that computers able to take dictation, like this machine, will by themselves be a significant breakthrough, or will we have to wait for the next step: computers being able to understand speech?

    GM: Well, if the Cambridge approach is the one that happens, these may not be that far apart. That system is recognizing speech in the context of a complete thought. It's recognizing which phrase fits into a particular kind of a sentence structure. And it even selects two or three different choices in places it misses, so that you can go back in and pick out the one you wanted.

    That approach clearly is the one that's going to have the impact. I think, though, that even a machine that would take really good dictation could have some fairly significant use. There are still a lot of people who would like to use computers, but who are intimidated by keyboards. Being able to talk to the machine would help them quite a bit.

    That is the one incremental capability that I can see that I think would have a rather significant impact on the way people use computers and would open up the whole next step towards the day when one can carry on an intelligent conversation with a computer. That may not be in 10 years. But I'll bet that certainly within 50 years and probably within 20 you will be able to have a conversation with your computer.
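
    A minimal sketch of the kind of context scoring Moore describes: a toy bigram language model that picks, among acoustically confusable candidate phrases, the one most likely to fit the surrounding text. The training corpus and the candidates below are invented for illustration; the Cambridge system itself was of course far more sophisticated.

        import math
        from collections import Counter

        # Toy bigram language model: scores how well a candidate phrase
        # fits its context, in the spirit of the system Moore describes.
        corpus = ("the machine read the article aloud and the machine "
                  "recognized the speech in the article").split()

        unigrams = Counter(corpus)
        bigrams = Counter(zip(corpus, corpus[1:]))
        vocab = len(unigrams)

        def bigram_logprob(prev, word):
            # Laplace smoothing: unseen pairs get a small nonzero probability.
            return math.log((bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab))

        def score(words):
            # Log probability of the whole word sequence under the model.
            return sum(bigram_logprob(p, w) for p, w in zip(words, words[1:]))

        # Two transcriptions the acoustic stage might find equally plausible:
        candidates = ["the machine recognized the speech",
                      "the machine wreck a nice beach"]

        best = max(candidates, key=lambda c: score(c.split()))
        print(best)  # context picks "the machine recognized the speech"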

    SA: It's an interesting example, because a lot of linguists working on this problem are not entirely certain how to encode into a computer the common knowledge and the kinds of things you need to make sense of even simple language. So in many respects, it's as much a software problem as it is a hardware problem.

    GM: It is. It is more of a software problem. But having very capable hardware there gives them a lot of opportunities to try other approaches. Playing chess is a software problem, but you still need a mighty powerful computer to do it in a reasonable amount of time.

    SA: The state of software engineering is not as mature as the state of what you do here at Intel, and it doesn't seem to be progressing quite as fast either. Do you think the difficulty of designing very large, very complicated software might be a factor that limits demand for very fast, advanced computers?

    GM: I suppose it is, but the industry seems to muddle along okay. I will admit to not understanding why software is fundamentally different from the kind of hardware we do. One of these processors is also the output of hundreds of very bright engineers, who are working together to see the whole entity at the end, and we have developed techniques so we can predict when things are going to get finished. We still have errata, but nothing like software.

    SA: You rarely miss your ship date by a year, as Microsoft has in the past.

    GM: No, we don't. It used to be that every once in a while we'd pass a threshold where the old techniques didn't work anymore. The last time we did that with processors was the 386 generation. It just took forever to get that up to the point where it was a shippable product, iteration after iteration.

    But then we went through an extensive effort improving our tools, and we have a very large ongoing investment in tools to try to keep them up to what we require for the processor we're working on today. So we've learned how to run projects like that, and we can predict pretty well when the things are going to come out. I don't see why software isn't potentially subject to the same kind of control.

    SA: Dan Hutcheson [president of VLSI Research] has mentioned in conversations that he and others in the industry worry about the design tools--the software you use to design these incredibly complex chips--and about a scaling problem there. As Moore's Law continues its march, the simulation and modeling tools that you use to design and test the circuits are struggling to keep up. Might that create problems?

    GM: We're doing better now than we used to. Now we know how to work, and we're paying a lot of attention to it.

    SA: So for your purposes they are keeping up.

    GM: Yes. We all would like to have more, of course.

    SA: Here's another factor that might conceivably limit the impact of high-performance chips on computing as an industry: computers seem to be becoming more communications tools than calculation tools, so what about bandwidth as a limiting factor?

    GM: Bandwidth is a real problem in general--although not inside a company like Intel, where we've done a pretty good job connecting our computers. Processing power can substitute for bandwidth to a significant extent. For example, you can send tolerable video over ordinary phone lines so long as you have enough processing power to compress and decompress the images.

    So in some respects they're complementary. But I look forward to the day when we all have gigabit pipes coming into our houses. There's obviously a lot of industry work going on to try to make that available. It looks to me like it's going to come from a variety of different directions: some of it will be the cable industry supplying it, and maybe some of this DSL [digital subscriber line] stuff is going to actually come to pass.
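
    Moore's point that processing power can substitute for bandwidth is easy to demonstrate: spending CPU cycles on compression shrinks what has to travel over the wire. A small sketch using Python's standard zlib module; the payload is an invented stand-in for image or video data, and real video codecs are lossy and far more aggressive.

        import zlib

        # Trading CPU for bandwidth: compress before sending, decompress on
        # arrival. Higher levels burn more cycles to save more bytes.
        payload = b"frame-data " * 2000  # invented stand-in for a video chunk

        for level in (1, 6, 9):  # fast ... default ... maximum compression
            compressed = zlib.compress(payload, level)
            ratio = len(payload) / len(compressed)
            print(f"level {level}: {len(compressed)} bytes ({ratio:.0f}x smaller)")

        # The receiver recovers the original exactly (zlib is lossless).
        assert zlib.decompress(zlib.compress(payload, 9)) == payload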

    SA: Yet it's pretty clear that since communications bandwidth is infrastructure-dependent and so costly, we're going to see it grow more slowly than processing power, right?

    GM: A lot of the basic stuff is already out there. The fiber backbone that exists can carry an awful lot of traffic. I used to think that it was principally a switching problem. I didn't realize how much, now that it's all digital, you can do without any real switching.

    I think it's being slowed down most by regulation. If they were driven by competition equivalent to that in our industry, things would go a lot quicker. Of course, such opinions are often a case of not understanding the other guy's problems.

    SA: The growth of processors and memory has over the years enabled a huge number of applications in industry and business that just could not have happened otherwise: controlling equipment and so on. Will further order-of-magnitude increases be applied to solve those kinds of business problems? Or are they more likely to get shunted into the things you suggested, like user interfaces and broadening the appeal and use of computers, rather than solving problems that computers are currently too weak to attack?

    GM: I think it's going to move in a whole bunch of different directions. This industry is at the point now where some variety of specialization becomes increasingly likely.

    You see that already. There are industries that are crying for more computing power--drug design, for example. They want to model how molecules fit together, which requires a lot of computing. If all you want is a word processor and a spreadsheet, you have more than enough power now. We're looking to see what kinds of applications on a typical desktop in a business would benefit from higher performance, and we haven't really identified any very general ones yet. It's easier to identify home applications that require higher power.

    SA: Home applications such as what?

    GM: They typically tend to be the multimedia type of thing. Image processing and getting good video are still much more attractive to the home user, typically, than to the business user. Although that is not necessarily going to be the case forever.

    SA: But there is a point at which your video is the best you can get through whatever size pipe you have connected to the house, and you aren't going to get any more, because compression is limited by mathematical theory.

    GM: Okay, you will always lose something in compression, I guess. But there is still a long way to go before we get that far.
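
    The mathematical limit Gibbs alludes to is Shannon's rate-distortion theory. As a textbook illustration (not something discussed in the interview): for a memoryless Gaussian source with variance \sigma^2, no encoder, however much processing it applies, can use fewer than

        R(D) = \max\left(0, \tfrac{1}{2}\log_2\frac{\sigma^2}{D}\right) \text{ bits per sample,}

    where D is the tolerated mean-squared error. Driving D toward zero pushes the required bit rate toward infinity, so at any finite bandwidth some loss really is unavoidable.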

    SA: You mentioned specialization. By that do you mean specialization in hardware devices?

    GM: Yes. For a lot of business applications, one of the most important things is being able to control them centrally. There's the network computer proposed by Larry Ellison--this tackles the issue of total cost of ownership. We have different views there. Here we think that the network is still the weak part, so you should put a lot of stuff out there on the terminal if the user wants it. Larry would like to have something of his on the server and a bunch of dumb terminals out there. And, at Intel, we obviously have somewhat parochial interests in this matter, too.

    I think we will see some of each. But my personal view is that we will rely very heavily on smart clients, because we have all gotten used to owning our own resources, and I think we will want to hang on to that. There will be some specialized applications where you won't want to give the people on the end any control at all. But most of the cost of business computing in a big company is in the problems associated with controlling what gets into the system. And there is going to be a tremendous interest in some kind of control--being able to load software onto your computer and troubleshoot it from a distance.

    Now, if something goes wrong on my computer, I have to make a phone call, and somebody has to come up here and fiddle with the keyboard. That's ridiculous. The right way to do that is to let them take control of my computer from some central location. That kind of network control is something we can do now. It's just a question of getting it in place throughout the organization.

    SA: Taking specialization out a bit more into the future, quite a number of people in the industry predict that although PCs won't go away, they will probably be supplemented, perhaps by putting some tasks into much more specialized devices. We already each have probably three or four computers scattered around our desks in various forms. But there might be lots more, specialized for particular tasks. This, they say, would make them easier to use, because they would have specialized interfaces, and also more convenient and powerful, because they would be more portable.

    GM: Possibly. The last thing I want is five different interfaces. Although I guess I have that if I have five different programs.

    SA: Right. Let me put that another way. You can imagine having, five years from now, an incredibly powerful machine that is the equivalent of a minisupercomputer today. Or buying a bunch of chips equivalent to today's Pentium or Pentium Pro chips, but performing smaller, simpler tasks with great intelligence. So Moore's Law could push everybody toward buying the latest, greatest high-end chip, but it could also, by making the chips we now have much cheaper, start a whole new market down there. What do you think about that possibility?

    GM: It's not the chip that determines the cost of the machine, it's everything else that goes with it. If the chips were free, you could only shave a few hundred bucks off the price of these things.

    I think it is likely to go in several different directions. If someone can identify special-purpose things that really fill a need, that's fine. In some respects, the WebTV is that kind of device, when all you want is Internet access from your family room. It has a simple interface and is a relatively simple machine that lets you do one task. The general-purpose machine has tremendous advantages and the disadvantage of complexity. It will be interesting to see how some of these things play out. They're awfully hard to predict.

    SA: Complexity seems to have been rising in general-purpose machines. Is the market for lower-power chips embedded in specialized devices a target for you?

    GM: Oh sure. It's a different kind of a market. We sell a lot of embedded control processors, mostly simple ones.

    SA: But those also get more complex with time, presumably.

    GM: You know, not much. Once they get in an application, you don't often need to increase the processing power. Even in auto engine control, you've got plenty of computing power in the 16-bit chips. You get very slow growth, but nothing like the PC market.

    The PC is a terribly complex device. It needs a simpler interface.

    SA: Do you think we're on the right track for getting to a simpler interface?

    GM: I'm not sure. I feel the same way about my TV and hi-fi sets--all these damn remotes. I don't use them often enough to learn how, and I get so frustrated I could throw the thing through the set. My wife gets even more frustrated than I do. But I'm in a different generation; the kids who grew up with this stuff seem to find it much easier. You probably are a lot more relaxed about it than I am.

    SA: One other thing I wanted to get you to prognosticate on is parallel processing. Intel has built supercomputers that use fleets of processors. When we hit the wall at the end of Moore's Law, as eventually we will--whether it is a firm wall or a squishy one--could this be at least a stopgap solution until the next-generation technology is ready?

    GM: Oh, I think it is an ultimate solution, actually. Whatever you can do with one, you can do a lot more with several. And a surprisingly large number of real-world problems can be done on a parallel machine. We've dealt mainly with a research consortium down at Caltech, and they found that the class of things that can be done faster on a parallel machine is larger than the class of things that cannot. All physical modeling and the like splits up fine.

    The problem is that those machines are so expensive. A lot of people would like to have them, but spending millions of dollars on a machine that is the state of the art for only a year or two is not something many research organizations can do.
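
    Moore's remark that physical modeling "splits up fine" reflects how such problems decompose by region: each processor updates its own piece of the simulation and exchanges only boundary values with its neighbors. A minimal sketch of that domain decomposition using Python's standard multiprocessing module; the 1-D heat-diffusion example, grid size and worker count are invented for illustration.

        from multiprocessing import Pool

        def diffuse_slice(args):
            # One worker's share: its slice plus one neighbor value per side.
            left, cells, right = args
            padded = [left] + cells + [right]
            # New value of each cell: average of the cell and its neighbors.
            return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3
                    for i in range(1, len(padded) - 1)]

        def step(rod, workers=4):
            # Split the rod into slices, one per processor.
            n = len(rod) // workers
            chunks = [rod[i * n:(i + 1) * n] for i in range(workers)]
            # Attach boundary neighbors, like the halo exchange on a real
            # parallel machine.
            jobs = []
            for i, c in enumerate(chunks):
                left = chunks[i - 1][-1] if i > 0 else c[0]
                right = chunks[i + 1][0] if i < workers - 1 else c[-1]
                jobs.append((left, c, right))
            with Pool(workers) as pool:
                return [v for part in pool.map(diffuse_slice, jobs) for v in part]

        if __name__ == "__main__":
            rod = [0.0] * 32 + [100.0] * 32  # cold left half, hot right half
            print(step(rod)[30:34])  # heat begins spreading across the middle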

    SA: What about parallel processing on a much smaller scale--instead of massively parallel, just dual- or triple-processing?

    GM: That's no problem; servers are that way now. Workstations are or will be. We'll even see some of that on power users' PCs, I'm sure. Our new processors are rolling out with the capability to do that pretty easily. But it requires special software to take advantage of it.

    SA: Is that something Intel is working on?

    GM: Not the software; we let other people do that. Typically the UNIX systems can exploit dual processors; Windows NT does to some extent. The typical PC operating systems do not.

    SA: Is that because it is very hard to do so, or is this a chicken-and-egg problem?

    GM: It certainly is more difficult and complex, and the operating system gets bigger when you add that capability. But it is a way of getting more power out of the machines. We do a lot of other things to get more power--a typical engineering workstation here has 250 megabytes of DRAM, for example. And anything you can do to squeeze some more performance out is worth doing. In fact, our workstations all work together like a huge parallel computer on big problems at night. Then during the day they revert to single-user systems.
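
    How much a second or third processor actually buys depends on how much of the workload parallelizes; the standard way to reason about it (not something Moore cites here) is Amdahl's law. A quick illustration in Python, with an assumed 90 percent parallel fraction:

        # Amdahl's law: speedup on n processors when a fraction p of the
        # work runs in parallel and the rest stays serial.
        def speedup(p, n):
            return 1 / ((1 - p) + p / n)

        for n in (2, 3, 4):
            print(f"{n} processors: {speedup(0.90, n):.2f}x")
        # 2 processors: 1.82x, 3: 2.50x, 4: 3.08x -- diminishing returns,
        # which is why dual processing is the easy, attractive case.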

    SA: So, if over the next decade the generations between chip designs do stretch out, this would be one way to sell more processors, wouldn't it--put two or three in every machine?

    GM: Could be. Of course, in that case the cost of computers would rise linearly with the number of chips. That is typically not the case when you replace one chip generation with a more powerful successor: they tend to come down the same price curve, just offset in time.
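
    Moore's "same price curve, offset in time" can be written down directly. Assuming for illustration an exponential decline (the functional form is our assumption, not his), generation n launched at time n\Delta sells for

        p_n(t) = p_0 \, e^{-\lambda (t - n\Delta)}, \qquad t \ge n\Delta,

    so every generation enters at the same price point p_0 and traces the same decline. Replacing a chip with its successor therefore restores the top price point rather than adding to it, whereas putting two or three chips in one machine multiplies the silicon cost directly.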

    SA: Good point. Thanks for your time.

