Dr. Dobb's TechNetcast
Linus Torvalds at USENIX, Part 1 (25:00)
Linus Torvalds at USENIX, Part 2 (25:00)
Linus Torvalds at USENIX, Part 3 (25:00)

Linux BOF at USENIX 1999

In his remarks at USENIX 1999, Linus Torvalds comments on the state of Linux, with an emphasis on kernel development, and fields questions from the audience. Taped at the Linux BOF (Birds of a Feather) session at USENIX 1999.

The following is a transcript of Linus' opening remarks at the Linux BOF at USENIX 1999, followed by excerpts from the Q&A session. The video links at the top of this page cover the entire session.

Linus Torvalds: Whenever I talk, I either talk about completely strange, random things -and that happens when I go to places like Berkeley or something, where I need to talk about philosophy to fit into the group- or I talk about the Linux kernel. And when it comes to USENIX, I always talk about the Linux kernel. So [today] I'll start off with just a few short words on the state of the kernel, then [cover some of] what we are working on right now, and then [throw in] a few short sentences about what the bigger plan [is]. Then I'll just finish off by asking you all to come to grips with your gripes with Linux, or with life in general, and we'll just take it from there. If you have no gripes at all with Linux, I'll be really happy and this is really going to be a short talk. So that's the plan.

So what is the current state of Linux? How many of you are actually Linux users? [Most hands in audience raise] Oh, sorry, sorry, sorry [laughter]. How many of you are not Linux users and just came to watch the zoo? Okay, a few of you… We'll beat you up afterwards. The 2.2 kernel has been out for a few months. It's still in the stabilization phase, meaning that there's always a period when patches keep coming out, and we seem to have passed that [phase] to some degree. We've almost reached the slowdown phase where nobody is really interested in it anymore. There aren't any major showstopper problems that warrant a new patch, and probably a month has passed since the last one. I'm readying a new patch set just to fix a few ugly things. I essentially hope that 2.2 will then really be almost dead, in the same sense that 2.0 has been dead for the last year. There will be people who follow the bug reports and fix the worst ones. There will be people who backport drivers, for example, from the development tree into the stable tree, just because drivers are always needed. But 2.2 certainly seems not to get much active development anymore, and we opened up the 2.3 tree about a month ago, and this is where all the development is going. But before I go into that I'll just mention a few basic facts about 2.2.

It had been a long time since 2.0. More than two-and-a-half years. So obviously we just needed a stable kernel, but we also wanted to [work on] infrastructure. For example, 2.2 has the infrastructure [required] for doing SMP really well. Notice that I say that it has "the infrastructure to do SMP really well". That implies that it doesn't actually do SMP really well -because a lot of legacy code still kind of relies on the one kernel lock. You all know the basic way SMP kernels always start off. So, for example, networking and filesystems can avoid the kernel lock, because now we actually have the infrastructure for doing fine-grain stuff, but nobody has written code [for this yet]. So 2.2 in some sense is SMP-ready, but obviously nobody will claim that it scales all that much better than 2.0, although some paths have been cleaned up. Basic process handling, stuff like that, has been cleaned up a lot. So it is more scalable, but certainly not anywhere close to where we want to be.
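The "one kernel lock" pattern Linus describes can be illustrated with a user-space analogy. This is not kernel code: the lock names and subsystem functions below are invented, and pthread mutexes stand in for the kernel's locking primitives. The point is only the structural difference between every path serializing on one lock (the 2.0-style starting point) and each subsystem taking its own lock (the infrastructure 2.2 adds):

```c
/* User-space analogy (NOT actual kernel code) of the big kernel lock
 * versus fine-grained locking.  All names here are invented. */
#include <pthread.h>
#include <assert.h>

static pthread_mutex_t bkl      = PTHREAD_MUTEX_INITIALIZER; /* one lock for everything */
static pthread_mutex_t net_lock = PTHREAD_MUTEX_INITIALIZER; /* networking only */
static pthread_mutex_t fs_lock  = PTHREAD_MUTEX_INITIALIZER; /* filesystem only */

static long net_packets, fs_writes;

/* 2.0-style: every subsystem serializes on the same lock, so a
 * networking interrupt and a filesystem write block each other. */
void net_rx_coarse(void) {
    pthread_mutex_lock(&bkl);
    net_packets++;
    pthread_mutex_unlock(&bkl);
}
void fs_write_coarse(void) {
    pthread_mutex_lock(&bkl);
    fs_writes++;
    pthread_mutex_unlock(&bkl);
}

/* 2.2-style infrastructure: each subsystem can take its own lock,
 * so networking and filesystem work proceed in parallel. */
void net_rx_fine(void) {
    pthread_mutex_lock(&net_lock);
    net_packets++;
    pthread_mutex_unlock(&net_lock);
}
void fs_write_fine(void) {
    pthread_mutex_lock(&fs_lock);
    fs_writes++;
    pthread_mutex_unlock(&fs_lock);
}
```

With the fine-grained variants, two threads hammering networking and the filesystem respectively never contend with each other; with the coarse variants they always do, which is exactly the overhead the 2.3 work is removing.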

Another big change in 2.2 has [involved] making the filesystem infrastructure at least look the way I wanted it to be. All the new caching stuff is in place. The page cache has been improved to the point where it is actually possible to do write-through page caching too, although right now only a few filesystems use that feature. NFS, for example, does write-through page caching. But the other disk-based filesystems still use the old buffer cache to do writes, even though the new page cache is there to handle all the rest of the I/O.

So you get the idea about 2.2. All the basics are in place --but we now need to take full advantage of the work that we've completed over the last two-and-a-half years. And that brings us to what we want to do and what we're working on right now.

There are fairly obvious deficiencies when it comes to the kernel. One of them is simply device support. One thing that comes up fairly often in the PC world right now is USB, and we're working on items like that. FireWire is also being investigated, at a less frantic pace, but it is still being looked at. The other issue is scalability. We've had some interesting troubles with certain large companies. They made us look really bad by selecting exactly the right benchmark for us… [The results were] interesting. But it also got a lot of people motivated to do the things that they knew they had to do anyway. So our long-range plans [to be completed] "in a year" suddenly turned into "next week". Right now, for example, the current 2.3 series already contains the fully threaded networking code, but it's really not turned on yet because a lot of it is still behind the one kernel lock. But it's there, and people are testing the waters by turning off the kernel lock for all the networking code and doing stuff like that. The same is true of filesystems. We actually have patches to turn off the kernel lock for basically all the interesting paths through the filesystem. The patches are so ugly at this point that you really don't want to look at them too much because you'd go blind…

That's [an overview of] what is going on when it comes to the kernel. Obviously a lot of the really interesting stuff is also going on in user mode and that's actually where most of the excitement has been coming from in the last year (with the Oracle and Lotus Domino announcements, for example --but I'm not even going to go into those issues unless I'm asked).

Then there are the non-technical issues. I'll just mention [these in passing] because it's not what I'm [involved in], but obviously things are going on with Linux. Companies are filing with the SEC. That's kind of exciting for those companies, and I wish I had more stock options... But the most interesting story happened a few months ago when Microsoft suddenly woke up and started really doing a PR campaign for Linux. For a few weeks I really could relate to Scott McNealy. I could understand why the guy hates Microsoft. I was like, "oh, yes, Scott is my best buddy", and that was something really new. It's not that I had been unfriendly with Scott, it's just that I was never all that interested in Microsoft. Suddenly I could understand what Scott was all about… There's a lesson there, and I'll call it "the Sun disease", but it's true of others too. You start really hating your competition to the point that, instead of doing the right thing for your customers, you try to screw over your competitors any which way you can… and then you come up with bad licenses for your new programming languages... A completely hypothetical example [laughter and clapping]. I was almost in a situation where I was thinking, "Okay, how can I screw Microsoft?" You start not thinking clearly.

Microsoft essentially paid for a benchmark, and we really sucked at the benchmark compared to NT. Everybody was really stunned, because it was the first time this had ever happened. People just thought that the numbers were basically made up and couldn't be true. It turns out that yes, they did do a few fairly ugly things, and they did select the hardware to really show off NT and really show off just how bad Linux was. But it turns out that most of [the results] were actually true. They just found the benchmark that NT had been optimized for during the last few years and that Linux hadn't been optimized for at all. And that made me feel personally really, really bad, and we spent a few weeks in an extensive PR fix-up campaign trying to educate people about lies, damn lies, statistics, and benchmarks. You probably all know that benchmarks can prove just about anything if you get to select the benchmark. But it's kind of interesting to be in the position of having to explain this to people who actually wanted to be convinced of it. One of our advantages was that the press was very open to trying to understand the Linux viewpoint. I felt bad for a while. Then a guy from Australia sent me the same benchmark and also pointed out how we could fix it. I realized that Microsoft wasn't the enemy at all. They were actually doing a good thing. They were pointing out that, hey, we suck at some things. Now I consider the anti-Linux group at Microsoft to be just a Linux user in disguise. They're trying to find the really bad problems in Linux. They haven't produced much so far, but I expect that we'll get some really good bug reports from them... Bug reports are good. The only problem is that they don't go through the normal channels... If you look at it that way, it's not that bad. You get to read about these bugs in the "Wall Street Journal" instead of getting them in personal e-mails. But eventually you get the information and the bugs can be fixed.
So I'm fairly optimistic.

So what is the plan right now? There were obviously a lot of things we did wrong in the 2.2 series. The most strikingly big mistake was that [the project took] two-and-a-half years. The excuse was that as Linux grew and grew more complex, more [development] time was needed. Developers don't want to be in a code freeze all the time; you want to have a long development cycle. A few things just brought this to a head, and we are now looking at these excuses and trying to find out if they actually make sense. Particularly [the excuse that] as things get more complex and larger, they take longer to mature. Well, that excuse really sucks.

It turns out that what you actually want to do as things get larger and more complex is make many small incremental changes, so that you don't go on for two-and-a-half years until a release goes public. Right now the plan is to actually try to fix the problems we are aware of and that we had planned to fix anyway, fix them as quickly as we can, and make a 2.4 release this year. I don't know if that is actually a workable plan, but that's the plan. The deadline is October. I always miss my deadlines, so maybe we'll have it by Christmas. But the plan is to just fix those things we know we suck at, get the release out, and be really proud of how well we do. And then somebody will find how we suck at something else and we're back to square one --but not quite, because at least we'll have a new stable base to work from. But this new release is not all about fixing problems. There's a lot of excitement in the new stuff. But you don't have to take that into account [in the same way], because people actually want to do the new and exciting stuff and have never lacked the time to do that.

So, I've given you the past, the now and the future and now we move on to the "Linux gripes" session and see where that gets us...


Q&A Session, Excerpts

Items from the Q&A session are reproduced here in no particular order. The entire session is available on video (links at top of page).

What sucks in 2.2 and what do you look forward to fixing?

I compile my kernel on a four-way right now, and I expect that to be an eight-way in the not too distant future. I get a speedup of 370-something percent, and that's cool, but I also see that 20 percent of that is just spent on the kernel lock. Right now, for compiling kernels, basically all of that kernel lock is memory allocation and the filesystem. Those two [uses of the lock] are going away. I refuse to have that kind of locking overhead when it really isn't needed. So that is kind of the suckiness in 2.2 that we'll have to fix. And under other loads the networking layer shows up, but that's already basically fixed. That's a separate issue.
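The relationship between "time spent on the kernel lock" and achievable speedup is roughly Amdahl's law. As a back-of-the-envelope sketch (the function name is invented, and the 20 percent Linus quotes is an observed figure, not exactly the serialized fraction s in the formula), even a small serialized fraction caps the speedup no matter how many CPUs you add:

```c
/* Amdahl's law sketch: if a fraction s of the work is serialized
 * (for instance, behind one kernel lock), then n CPUs can speed
 * things up by at most 1 / (s + (1 - s) / n). */
#include <assert.h>
#include <math.h>

double amdahl_speedup(double s, int n) {
    return 1.0 / (s + (1.0 - s) / (double)n);
}
```

With s = 0 a four-way gives the ideal 4x; with a 20 percent serialized fraction it gives only 2.5x, and no number of CPUs can push it past 5x. That is why removing the lock from the memory-allocation and filesystem paths matters more than adding processors.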

There are other people who have other pet peeves. Device support is certainly one of them. It's inherently hard when you have new devices coming out--new buses coming out on a quarterly basis--so in that sense it's hard not only for Linux, but for everybody else too. That's one of the issues that we'll need to look into very hard for 2.3.

There are also people who want to move Linux into the embedded space. You know about the TiVo set-top boxes. They run Linux, although they don't say so in their commercials. That kind of market is kind of interesting, and there are a few issues we need to sort out. There have been discussions, for example, on how to disable the memory-management code, because it takes too much space and we don't need it in these types of applications.

Linux for Merced

What can I say about Linux for Merced? I can't say very much at all, because I've not signed any of the NDAs, although I have obviously talked with people who have. It's basically not a problem. It's a done deal. I think that Linux has booted in ten different simulations over the last few years. There have been a lot of different projects inside Intel, HP… I know of at least three projects inside Intel doing simulation work. Right now there's a conglomerate of HP, Intel, BEA, and Cygnus who share NDAs and a [common] code base that they're working on. I don't know how open all this is and how much I should say about it… By the time the Merced is released, it will have been done and it will all be open. So right now, for obvious reasons, nobody can look at the source code, including me, because they're keeping it internal. The GPL actually allows that, and I understand why it's done that way. I don't happen to agree, but it's making their problem harder for them, and it's not a big deal for me.

The only hard issues in Merced are still compiler-related, plus the fact that you want to run old Intel binaries. Those are the two issues. The compiler-related issues are basically not our problem so much, and 386 emulation is mostly done in hardware, and a little 32-/64-bit magic has already been handled by the SPARC port, for example. So none of this is new. 64 bits isn't new. The mixed-mode environment isn't new. It's not a big deal. It so happens that apparently the first versions of Merced won't be all that competitive performance-wise, so I don't think it's actually going to be an issue for maybe two years, but we'll certainly have Linux out before that.

USB and Removable Devices

That's actually a fairly complex issue, and maybe I'll go into some of the details. Basically, I did not object to having one device always show up in one place. USB and any other hot-plug facility have a notion of IDs, but it's not complete. So two devices can actually look exactly the same. They are basically identical as far as the kernel is concerned. There are a lot of problems with trying to come up with unique names. It's very obvious that the kernel cannot do the perfect naming system. A good naming system for that kind of hot-plug facility needs to be done in user space. That's something that basically everybody agreed on. You need to have so much history, and you need to have a database of all the IDs you've ever seen in your life -it's not something a kernel should ever try to do in kernel space. So that leaves you with the option of doing it only in user space, which is kind of strange, or having a separate naming scheme that is used internally inside the kernel and that is exported, but that is not meant to actually be used by normal people. So it's kind of the difference between giving access to a raw device and giving access to a filesystem. I had very strong opinions on how the raw device should look. It should have a kind of internal self-consistency, and a lot of people didn't really see what I was driving at. How most people actually use it is then a completely separate issue.

I also found it interesting that while we had this discussion, a lot of people really thought this was about USB --and it's not. It's a pluggable-device-system discussion, and a lot of people tried to make the solution USB-specific, and then kind of completely forgot about issues like moving devices around across different buses and still having them show up under the same name. A user obviously doesn't care which bus a printer or disk drive is connected to. It should actually show up as the same printer or the same disk drive, and that's a really hard problem to solve. It's not something that the kernel should solve.
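A user-space naming scheme of the kind described here can be sketched in a few lines. Everything in this snippet is invented for illustration (the ID strings, the in-memory table, the "printer0"-style names, and the global slot counter used as a suffix); the real point is that the mapping from a bus-level ID to a stable name is policy kept outside the kernel, in a persistent database:

```c
/* Sketch of user-space device naming: map whatever ID a device
 * reports to a stable name, regardless of which bus it shows up on.
 * A real implementation would persist this table to disk and count
 * name suffixes per prefix; this toy version keeps it in memory. */
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define MAX_DEVS 32

struct mapping { char id[64]; char name[32]; };
static struct mapping table[MAX_DEVS];
static int ndevs;

/* Return the stable name for a device ID, assigning one on first sight. */
const char *stable_name(const char *id, const char *prefix) {
    for (int i = 0; i < ndevs; i++)
        if (strcmp(table[i].id, id) == 0)
            return table[i].name;          /* seen before: same name as last time */
    if (ndevs == MAX_DEVS)
        return NULL;                       /* table full */
    snprintf(table[ndevs].name, sizeof table[ndevs].name, "%s%d", prefix, ndevs);
    strncpy(table[ndevs].id, id, sizeof table[ndevs].id - 1);
    return table[ndevs++].name;
}
```

A device that reappears with the same ID, even after being moved to a different bus or port, keeps the name it was first assigned, which is exactly the property the kernel alone cannot provide when two devices report identical IDs.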

Source Control and Regression Testing

We've always had good regression testing, and it's called "a lot of foolish users." It scales really well. That hasn't really been the problem. We have a different model than companies that try to have internal QA. We have internal QA, don't get me wrong, but it tends to be not very controlled. It doesn't tend to be regression testing in any sense, although there are a few suites [available]. The real regression testing is done through pre-patches. Commercial Linux vendors also do their own testing. They just have to, and that's cool, because they get paid to do it. They'll have some grunt doing the boring work, something no self-respecting developer actually ever wants to do, frankly. I apologize to all regression testers…

The question of source control is in the air. Maybe a lot of you know about Larry McVoy's project at BitMover (www.bitmover.com), whatever he calls it, a super-CVS. I've talked to him... I use CVS at work. It [gets the job done] but basically sucks for a lot of things, and I refuse to use it for the kernel. And I think Larry is doing a lot of things right. Just a few months ago [his product] was still at the point where it crashed more often than Windows, so it wasn't really that usable. But it is certainly one of the things that is up in the air; maybe we'll try to use the BitKeeper stuff for source control. It has some really cool features that do not exist anywhere else, and most of them actually do work. But I certainly am not committing to anything at this point. It's more, "let's see how it works in practice." I like some of the features it gives me, and maybe people will be happy with it. But so far source control has obviously been a matter of "release often and do patches."


On devfs

I'm really good at waffling. I kind of like the notion of devfs. A lot of people just love it, and a lot of people just hate it. The argument then becomes that you can always include it or not include it as you want --and that's clearly a strong argument. At the same time, I'm worried that the people who hate it hate it so intensely that if it gets to be a standard feature and a lot of programs depend on it, people are just going to kill me. I think that we will have devfs. I disagree about some of the naming issues, but basically, especially with dynamic devices becoming much more common (hot-plug PCI and stuff like that), the old kind of static device model doesn't really work anymore. So something like devfs is going to happen, and I think it is going to go into 2.3 in the near future --but I said that about 2.1 too.

Are you going to have a more secure kernel?

I've always considered security to be extremely important. That doesn't mean that I always agree with people who think that features bring security. I don't, for example, think that ACLs are all that great an idea, because sure, you're secure, but in practice they're extremely confusing, and it's often simpler to have a really simple security model with obvious flaws -so obvious that you can't easily overlook the problems. We've moved into a more secure model. Instead of having 'root' be all or nothing, right now internally inside Linux there are basically no places that depend on 'rootness' anymore. It's all capabilities. The fact that we don't actually export that interface outside the kernel in any really reasonable way is another thing, but internally we already do depend on capabilities. So we're moving in that direction. My biggest objective is to never, ever over-design, and a lot of the security stuff in particular that I've been given -in my opinion- has been over-designed. Another thing that I hate is false security. So I always refused to apply the patches that make the stack segment non-executable. It stops some of the obvious exploits, but the same exploits actually do still work if you are just more clever about them. It was more of a security-through-obscurity feature. I'm sometimes seen as being very much anti-security, but it hasn't been by design, I hope.
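The shift from "rootness" to capabilities can be modeled in a few lines. The CAP_* names below mirror real Linux capability names, but the struct and the check functions are invented for illustration; this is a toy model of the idea (a privileged operation checks the one bit it needs rather than uid == 0), not the kernel's actual data structures:

```c
/* Toy model of capability checks replacing the all-or-nothing root test. */
#include <assert.h>

enum {
    CAP_NET_BIND_SERVICE = 1 << 0,  /* bind ports below 1024 */
    CAP_SYS_TIME         = 1 << 1,  /* set the system clock */
    CAP_CHOWN            = 1 << 2   /* change file ownership */
};

struct task { int uid; unsigned caps; };

/* Old model: all or nothing -- any privileged operation asks "is this root?" */
int may_bind_low_port_old(const struct task *t) { return t->uid == 0; }

/* Capability model: the check names exactly the privilege it needs.
 * In this toy model the bits are all that matter, so a task can hold
 * one narrow privilege without holding every privilege. */
int may_bind_low_port(const struct task *t) {
    return (t->caps & CAP_NET_BIND_SERVICE) != 0;
}
```

The payoff is that a web server can be granted only CAP_NET_BIND_SERVICE instead of full root, so a compromise of that process no longer implies the ability to, say, set the clock or chown arbitrary files.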

On Dennis Ritchie and Plan 9 and Inferno

We've met, obviously. We haven't really discussed technical issues. When I visited Bell Labs, Dennis and others basically said that they weren't really interested in Unix anymore. They were looking at Plan 9… At the time, Inferno was being designed. I've taken the approach that Linux is all about making the best kind of Unix-like system for the normal user. It's not about being exciting per se, it's about allowing people to do fun things. I've tried to copy freely from everybody, and that includes copying the features I like from Plan 9 -- the proc filesystem and the good parts of their rfork. I've also copied freely from NT, although I can only think of one actual specific instance of this. I'm kind of an OS agnostic in that sense, and we haven't really discussed this issue very much.

Linux User Base and Market Penetration

I used to make up the numbers completely [laughter, clapping], and now I leave the making up of numbers to others. I don't really need to try to make them up, because I think IDG did a reasonable job of trying to look at where Linux was in the server space, and the desktop is fairly obvious. You don't even need to look at it very closely to see who's ahead in that area. Certainly in the server space it seems that about a fifth of the machines are Linux machines. That's exactly the kind of statistics that Unix people hated when the same numbers were quoted about NT. Because even when they were true, the come-back from the Unix people was always, "Hey, our servers are so much larger that if you actually count the number of people connected to them, we win just by being the large gorilla." I just think all the numbers are fairly meaningless. The only thing that is clear is that, yeah, Linux is big enough to be in the top three in the server space almost regardless of how you count. We're also in the top three in the desktop space, but that doesn't mean as much anymore.

What Desktop Do You Use?

I can feel the flames coming... My current desktop is fvwm2, which is like saying, "Okay, I have the horsepower to run something better, because I have machines that most hackers have wet dreams about." But I still just decided that fvwm2 is what I need, and I subscribe to the tenet that small really is beautiful. I've seen some of the Enlightenment demos, and I like the fact that when you move windows around you can kind of see transparent windows moving. When I saw that for the first time, I was drooling. But basically I don't really care about those kinds of desktop issues. I can select my own background color, and that's just about all the control I need personally, frankly, and I refuse to use the really old window managers.

fvwm2 was the first one that made me happy, and after that I stopped bothering to even search for better things. So in that sense I'm not so interested in the desktop as you see it on the screen, because my needs are not that high. But I think that is where most of the really interesting work is going to happen. Not the window manager per se, but the desktop applications, the desktop use. The desktop is interesting in that it's the only use of computers that isn't specialized. Supercomputers are basically always used for one thing. The same is true of the embedded space. So above and below you have specialized use, and the desktop is interesting exactly because the use is so varied, and that makes it much more interesting to me as a kernel person. I get problem reports you wouldn't believe, and that would never ever be a problem on a supercomputer or on an embedded device. So that's why I'm interested in the desktop: because of the kinds of user-space issues that people keep pestering me about. That doesn't mean that I'm interested in the window manager personally.

When I go to Linux conferences, I find that most of the people haven't heard of USENIX, and if they have, they often say it's irrelevant. Linux is as big as all the other Unixes combined now, so why bother? What do you have to answer to that?

I kind of see why. Most [Linux users] are not original Unix people. Most Linux people who may have used Unix were users, not the USENIX kind of people. Or they actually came over from the dark side. So there's a lack of connection between a large portion of the Linux community and USENIX. I remember the first time I went to USENIX… I was really happy that I had done this small Unix clone thing, because it meant that even though it wasn't very well known at the time, I actually got to be in the inside group. But I was outside enough that it was very clear that USENIX, at least at the time, was very hard to enter in that sense. You had to be of the old guard to be regarded as worthy.

So I think there is fault on both sides, and it seems to be something that people are actually working on... [There is talk of] a Linux subgroup inside USENIX. [This will help] Linux people feel more comfortable with USENIX and USENIX [become] more open. A lot of this is just psychology. The fact that Linux is large, and using that as an excuse for not considering USENIX, is irrelevant. That's making excuses, rather than actually looking at why it happened in the first place. It is certainly true that USENIX tends, even now, to be much more "other Unix"-oriented than Linux-oriented, and I think some people want to change that.

Where do you see user desktops going and what applications would you like to see developed to make it a lot easier for the general public to accept Linux?

Well, most of this has been happening... I used to be really worried. I'm not that worried anymore, because a lot of progress has been made. It's obviously a very hard space to enter --not really for technical reasons, but because people get very, very, very attached to their desktops. It's a very personal thing. You like your editor even though it's obviously brain-damaged, right? And for the same reason, you like the other pieces of your desktop even if they're completely brain-damaged. So it's a really hard problem to enter the desktop space, and it's obviously being made much harder by the fact that there are just a million different applications--some of them very specialized--on the desktop. And you really want to handle them all in order to make everybody happy. So the thing that I'm seeing happening right now is that Linux is entering the desktop from the commercial space, where there already are commercial companies who see Linux as their desktop system inside the company, because it does give them a desktop and, at the same time, a good controlled environment. And you get the kind of applications you expect, so you actually get the office suites and stuff. This seems to be fairly strong especially in Germany, partly because of StarOffice, which has been making this Microsoft Office clone. It's not your average Unix program anymore. You're not in Kansas. It actually looks like something from the 1990s.

This is kind of a political question, so you'll have to forgive me. Earlier you mentioned the odd new licensing scheme that Sun has come up with. How worried are you about things that are not obviously in one of the two camps -the classic commercial camp and the open software camp- wolves in sheep's clothing, confusing, derailing, or even polluting the community?

I'm not that worried. I actually think that that is the way we're going to go eventually. I think black-and-white is wrong [clapping]. Let me finish... I think that you need black and white right now to some degree, because it just used to be so commercialized. Obviously the GPL was a political statement against being too commercial. And I think the correct end result, in whatever time frame, is going to be that you're going to have these kinds of middle choices: you'll have the extreme commercial proprietary systems and you'll have the GPLs, but most stuff will be somewhere in between. That's the direction we want to go in. I don't think we're there quite yet.



(c) 1999-2003, Dr. Dobb's TechNetCast