Halloween Document I (Version 1.14)

*note: some links have died, but have been left so for historical reasons.

{ The body of the Halloween Document is an internal strategy memorandum
on Microsoft's possible responses to the Linux/Open Source phenomenon. }

Open Source Software:

License Features: ZP = Zero Price Avenue; RD = Redistributable;
UU = Unlimited Usage; SA = Source Code Available; SM = Source Code Modifiable;
CI = Public "Check-ins" to core codebase; DF = All derivatives must be free.

Software Type                        | ZP | RD | UU | SA | SM | CI | DF
-------------------------------------+----+----+----+----+----+----+----
Commercial                           |    |    |    |    |    |    |
Trial Software                       | X* |  X |    |    |    |    |
Non-Commercial Use                   | X* |  X |    |    |    |    |
Shareware                            | X* |  X |    |    |    |    |
Royalty-free binaries ("Freeware")   |  X |  X |  X |    |    |    |
Royalty-free libraries               |  X |  X |  X |  X |    |    |
Open Source (BSD-Style)              |  X |  X |  X |  X |  X |    |
Open Source (Apache Style)           |  X |  X |  X |  X |  X |  X |
Open Source (Linux/GNU style)        |  X |  X |  X |  X |  X |  X |  X

* Qualified entries: Trial Software's zero price is non-full featured;
Non-Commercial Use's is usage dependent; Shareware's is unenforced licensing.
The broad categories of licensing include:
Commercial software is classic Microsoft bread-and-butter. It must be purchased, may NOT be redistributed, and is typically only available as binaries to end users.
Limited trial software is usually a functionally limited version of a commercial product, freely distributed and intended to drive purchase of the commercial code. Examples include 60-day time-bombed evaluation products.
Shareware products are fully functional and freely redistributable but have a license that mandates eventual purchase by both individuals and corporations. Many internet utilities (like "WinZip") take advantage of shareware as a distribution method.
Non-commercial use software is freely available and redistributable by non-profit making entities. Corporations, etc. must purchase the product. An example of this would be Netscape Navigator.
Royalty-free binaries consist of software which may be freely used and distributed in binary form only. Internet Explorer and NetMeeting binaries fit this model.
Royalty-free libraries are software products whose binaries and source code are freely used and distributed but may NOT be modified by the end customer without violating the license. Examples of this include class libraries, header files, etc.
A small, closed team of developers develops BSD-style open source products & allows free use and redistribution of binaries and code. While users are allowed to modify the code, the development team does NOT typically take "check-ins" from the public.
Apache takes the BSD-style open source model and extends it by allowing check-ins to the core codebase by external parties.
CopyLeft or GPL (General Public License) based software takes the Open Source license one critical step further. Whereas BSD and Apache style software permits users to "fork" the codebase and apply their own license terms to their modified code (e.g. make it commercial), the GPL license requires that all derivative works in turn must also be GPL code. "You are free to hack this code as long as your derivative is also hackable."
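Read from top to bottom, the memo's table is strictly cumulative: each category grants every license feature of the category above it, plus at most one more. As an illustrative sketch only (the dictionary keys and feature strings below are my own labels, not the memo's), that structure can be encoded as a prefix over an ordered feature list:

```python
# Ordered list of the memo's seven "License Features"
FEATURES = [
    "zero price avenue",
    "redistributable",
    "unlimited usage",
    "source code available",
    "source code modifiable",
    "public check-ins to core codebase",
    "all derivatives must be free",
]

# How many features each licensing category grants, per the memo's table.
# (Trial/Non-Commercial/Shareware grant zero price only in a qualified form.)
GRANTS = {
    "commercial": 0,
    "trial software": 2,
    "non-commercial use": 2,
    "shareware": 2,
    "freeware binaries": 3,
    "royalty-free libraries": 4,
    "open source (bsd-style)": 5,
    "open source (apache-style)": 6,
    "open source (linux/gnu-style)": 7,
}

def rights(category):
    """Rights granted by a category: a prefix of the ordered feature list."""
    return FEATURES[:GRANTS[category]]
```

On this encoding the GPL's distinguishing clause is simply the last feature: it appears in the Linux/GNU-style category and in no other.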
{ To us, open-source licensing and the rights it grants to users and third parties are primary, and specific development practice varies ad hoc in a way not especially coupled to our license variations. In this Microsoft taxonomy, on the other hand, the central distinction is who has write access to a privileged central code base.
This reflects a much more centralized view of reality, and a failure of imagination or understanding on the memo author's part. He doesn't fully grok our distributed-development tradition. This is hardly surprising... }
Open Source Software is Significant to Microsoft
This paper focuses on Open Source Software (OSS). OSS is acutely different from the other forms of licensing (in particular "shareware") in two very important respects:
OSS is a concern to Microsoft for several reasons:
A key barrier to entry for OSS in many customer environments has been its perceived lack of quality. OSS advocates contend that the greater code inspection & debugging in OSS software results in higher quality code than commercial software.
Recent case studies (the Internet) provide very dramatic evidence in customer's eyes that commercial quality can be achieved / exceeded by OSS projects. At this time, however, there is no strong evidence of OSS code quality aside from anecdotal reports.
{ These sentences, taken together, are rather contradictory unless the ``recent case studies'' are all ``anecdotal''. But if so, why call them ``very dramatic evidence''? It appears there's a bit of self-protective backing and filling going on in the second sentence. Nevertheless, the first sentence is a huge concession for Microsoft to make (even internally).
In any case, the `anecdotal' claim is false. See Fuzz Revisited: A Re-examination of the Reliability of UNIX Utilities and Services .
Here are three pertinent lines from this paper:
"The failure rate of utilities on the commercial versions of UNIX that we tested . . . ranged from 15-43%." "The failure rate of the utilities on the freely-distributed Linux version of UNIX was second-lowest, at 9%." "The failure rate of the public GNU utilities was the lowest in our study, at only 7%." }

{ TN remarks: Note the clever distinction here (which Eric missed in his analysis). ``customer's eyes'' (in Microsoft's own words) rather than any real code quality. In other words, to Microsoft and the software market in general, a software product has "commercial quality" if it has the ``look and feel'' of commercial software products. A product has commercial quality code if and only if there is a public perception that it is made with commercial quality code. This means that MS will take seriously any product that has an appealing, commercial-looking appearance because MS assumes -- rightly so -- that this is what the typical, uninformed consumer uses as the judgment benchmark for what is ``good code''.

TN is probably right. This didn't occur to me because, like most open-source programmers, I consider programs that crash and screw up a lot to be junk no matter how pretty their interfaces are.... }
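The fuzz-testing methodology behind the failure rates quoted from that study can be sketched in a few lines: feed pseudo-random bytes to a utility's standard input and count runs that end in a crash (death by signal) or a hang. This is a minimal illustration of the idea, not the instrumented harness the paper actually used; the choice of `cat` as a target below is my own, picked because it should survive arbitrary input.

```python
import random
import subprocess

def fuzz_utility(cmd, trials=10, max_len=1024, seed=0):
    """Feed random byte strings to `cmd` via stdin and return the
    fraction of runs that crashed or hung.

    A crash is recorded when the process dies on a signal
    (negative returncode, in subprocess's POSIX convention)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))
        try:
            proc = subprocess.run(cmd, input=data, capture_output=True, timeout=5)
            if proc.returncode < 0:  # killed by a signal, e.g. SIGSEGV
                failures += 1
        except subprocess.TimeoutExpired:
            failures += 1  # hangs count as failures in the fuzz studies
    return failures / trials

# Example: failure rate of `cat` on random input
rate = fuzz_utility(["cat"])
```

The study's 7-43% figures came from applying essentially this loop, at much larger scale, across the standard utility sets of each UNIX variant.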
Another barrier to entry that has been tackled by OSS is project complexity. OSS teams are undertaking projects whose size & complexity had heretofore been the exclusive domain of commercial, economically-organized/motivated development teams. Examples include the Linux Operating System and Xfree86 GUI.
OSS process vitality is directly tied to the Internet to provide distributed development resources on a mammoth scale. Some examples of OSS project size:
Project                     | Lines of Code
----------------------------+--------------
Linux Kernel (x86 only)     | 500,000
Apache Web Server           | 80,000
SendMail                    | 57,000
Xfree86 X-windows server    | 1.5 Million
"K" desktop environment     | 90,000
Full Linux distribution     | ~10 Million
The OSS process is unique in its participants' motivations and the resources that can be brought to bear on problems. OSS, therefore, has some interesting, non-replicable assets which should be thoroughly understood.
{ TN comments: Open source software has roots in the hobbyist and the scientific community and was typified by ad hoc exchange of source code by developers/users. }
Internet Software
The largest case study of OSS is the Internet. Most of the earliest code on the Internet was, and still is, based on OSS, as described in an interview with Tim O'Reilly (http://www.techweb.com/internet/profile/toreilly/interview):
TIM O'REILLY: The biggest message that we started out with was, "open source software works." ... BIND has absolutely dominant market share as the single most mission-critical piece of software on the Internet. Apache is the dominant Web server. SendMail runs probably eighty percent of the mail servers and probably touches every single piece of e-mail on the Internet.
Free Software Foundation / GNU Project
Credit for the first instance of modern, organized OSS is generally given to Richard Stallman of MIT. In late 1983, Stallman announced the GNU Project and subsequently created the Free Software Foundation (FSF) -- http://www.gnu.ai.mit.edu/fsf/fsf.html -- with the goal of creating a free version of the UNIX operating system. The FSF released a series of sources and binaries under the GNU moniker (which recursively stands for "GNU's Not Unix").
The original FSF / GNU initiatives fell short of their original goal of creating a completely OSS Unix. They did, however, contribute several famous and widely disseminated applications and programming tools used today including:
CopyLeft Licensing
FSF/GNU software introduced the "copyleft" licensing scheme that not only made it illegal to hide source code from GNU software but also made it illegal to hide the source from work derived from GNU software. The document that described this license is known as the General Public License (GPL).
Wired magazine has the following summary of this scheme & its intent (http://www.wired.com/wired/5.08/linux.html):
The general public license, or GPL, allows users to sell, copy, and change copylefted programs - which can also be copyrighted - but you must pass along the same freedom to sell or copy your modifications and change them further. You must also make the source code of your modifications freely available.
The second clause -- open source code of derivative works -- has been the most controversial (and, potentially the most successful) aspect of CopyLeft licensing.
Commercial software development processes are hallmarked by organization around economic goals. However, since money is often not the (primary) motivation behind Open Source Software, understanding the nature of the threat posed requires a deep understanding of the process and motivation of Open Source development teams.
{ This is a very important insight, one I wish Microsoft had missed. The real battle isn't NT vs. Linux, or Microsoft vs. Red Hat/Caldera/S.u.S.E. -- it's closed-source development versus open-source. The cathedral versus the bazaar.

This applies in reverse as well, which is why bashing Microsoft qua Microsoft misses the point -- they're a symptom, not the disease itself. I wish more Linux hackers understood this.
On a practical level, this insight means we can expect Microsoft's propaganda machine to be directed against the process and culture of open source, rather than specific competitors. Brace for it... }
Some of the key attributes of Internet-driven OSS teams:
Communication -- Internet Scale
Coordination of an OSS team is extremely dependent on Internet-native forms of collaboration. Typical methods employed run the full gamut of the Internet's collaborative technologies:
OSS projects the size of Linux and Apache are only viable if a large enough community of highly skilled developers can be amassed to attack a problem. Consequently, there is a direct correlation between the size of the project that OSS can tackle and the growth of the Internet.
Common Direction
In addition to the communications medium, another set of factors implicitly coordinate the direction of the team.
Common Goals
Common goals are the equivalent of vision statements which permeate the distributed decision making for the entire development team. A single, clear directive (e.g. "recreate UNIX") is far more efficiently communicated and acted upon by a group than multiple, intangible ones (e.g. "make a good operating system").
Common Precedents
Precedent is potentially the most important factor in explaining the rapid and cohesive growth of massive OSS projects such as the Linux Operating System. Because the entire Linux community has years of shared experience dealing with many other forms of UNIX, they are easily able to discern -- in a non-confrontational manner -- what worked and what didn't.
There weren't arguments about the command syntax to use in the text editor -- everyone already used "vi" and the developers simply parcelled out chunks of the command namespace to develop.
Having historical, 20:20 hindsight provides a strong, implicit structure. In more forward looking organizations, this structure is provided by strong, visionary leadership.
{ At first glance, this just reads like a brown-nose-Bill comment by someone expecting that Gates will read the memo -- you can almost see the author genuflecting before an icon of the Fearless Leader.

More generally, it suggests a serious and potentially exploitable underestimation of the open-source community's ability to enable its own visionary leaders. We didn't get Emacs or Perl or the World Wide Web from ``20:20 hindsight'' -- nor is it correct to view even the relatively conservative Linux kernel design as a backward-looking recreation of past models.
Accordingly, it suggests that Microsoft's response to open source can be wrong-footed by emphasizing innovation in both our actions and the way we represent what we're doing to the rest of the world. }
Common Skillsets
NatBro points out the need for a commonly accepted skillset as a prerequisite for OSS development. This point is closely related to the common-precedents phenomenon. From his email:
A key attribute ... is the common UNIX/gnu/make skillset that OSS taps into and reinforces. I think the whole process wouldn't work if the barrier to entry were much higher than it is ... a modestly skilled UNIX programmer can grow into doing great things with Linux and many OSS products. Put another way -- it's not too hard for a developer in the OSS space to scratch their itch, because things build very similarly to one another, debug similarly, etc.
Whereas precedents identify the end goal, the common skillsets attribute describes the number of people who are versed in the process necessary to reach that end.
The Cathedral and the Bazaar
A very influential paper by an open source software advocate -- Eric Raymond -- was first published in May 1997 (http://www.redhat.com/redhat/cathedral-bazaar/). Raymond's paper was expressly cited by (then) Netscape CTO Eric Hahn as a motivation for their decision to release browser source code.
Raymond dissected his OSS project in order to derive rules-of-thumb which could be exploited by other OSS projects in the future. Some of Raymond's rules include:
Every good work of software starts by scratching a developer's personal itch
This summarizes one of the core motivations of developers in the OSS process -- solving an immediate problem at hand faced by an individual developer -- this has allowed OSS to evolve complex projects without constant feedback from a marketing / support organization.
Good programmers know what to write. Great ones know what to rewrite (and reuse).
Raymond posits that developers are more likely to reuse code in a rigorous open source process than in a more traditional development environment because they are always guaranteed access to the entire source all the time.
Widely available open source reduces search costs for finding a particular code snippet.
``Plan to throw one away; you will, anyhow.''
Quoting Fred Brooks, ``The Mythical Man-Month'', Chapter 11. Because development teams in OSS are often extremely far flung, many major subcomponents in Linux had several initial prototypes followed by the selection and refinement of a single design by Linus.
Treating your users as co-developers is your least-hassle route to rapid code improvement and effective debugging.
Raymond advocates strong documentation and significant developer support for OSS projects in order to maximize their benefits.
Code documentation is cited as an area which commercial developers typically neglect, but whose neglect would be a fatal mistake in OSS.
Release early. Release often. And listen to your customers.
This is a classic play out of the Microsoft handbook. OSS advocates will note, however, that their release-feedback cycle is potentially an order of magnitude faster than commercial software's.
{ This is an interestingly arrogant statement, as if they think I was somehow inspired by the Microsoft way of binary-only releases.

But it suggests something else -- that even though the author intellectually grasps the importance of source code releases, he doesn't truly grok how powerful a lever the early release specifically of source code truly is. Perhaps living within Microsoft's assumptions makes that impossible.
TN comments:
The difference here is, in every release cycle Microsoft always listens to its most ignorant customers. This is the key to dumbing down each release cycle of software for further assaulting the non-PC population. Linux and OS/2 developers, OTOH, tend to listen to their smartest customers. This necessarily limits the initial appeal of the operating system, while enhancing its long-term benefits. Perhaps only a monopolist like Microsoft could get away with selling worse products each generation -- products focused so narrowly on the least-technical member of the consumer base that they necessarily sacrifice technical excellence. Linux and OS/2 tend to appeal to the customer who knows greatness when he or she sees it.

The good that Microsoft does in bringing computers to the non-users is outdone by the curse they bring upon the experienced users, because their monopoly position tends to force everyone toward the lowest-common-denominator, not just the new users.
Note: This means that Microsoft does the ``heavy lifting'' of expanding the overall PC marketplace. The great fear at Microsoft is that somebody will come behind them and make products that not only are more reliable, faster, and more secure, but are also easy to use, fun, and make people more productive. That would mean that Microsoft had merely served as a pioneer and taken all the arrows in the back, while we who have better products become a second wave to homestead on Microsoft's tamed territory. Well, sounds like a good idea to me.
So, we ought to take a page from Microsoft's book and listen to the newbies once in a while. But not so often that we lose our technological superiority over Microsoft.
ESR again. I don't agree with TN's apparent assumption that ease-of-use and technical superiority are necessarily mutually exclusive; with good design it's possible to do both. But given limited resources and poor-to-mediocre design skills, they do tend to get set in opposition with each other. Thus there's enough point to TN's analysis to make it worth reproducing here. }
Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.
This is probably the heart of Raymond's insight into the OSS process. He paraphrased this rule as "debugging is parallelizable". More in-depth analysis follows.
{ Well, he got that right, anyway. }
Once a component framework has been established (e.g. key API's & structures defined), OSS projects such as Linux utilize multiple small teams of individuals independently solving particular problems.
Because the developers are typically hobbyists, the ability to `fund' multiple, competing efforts is not an issue and the OSS process benefits from the ability to pick the best potential implementation out of the many produced.
Note that this is very dependent on:
The core argument advanced by Eric Raymond is that unlike other aspects of software development, code debugging is an activity whose efficiency improves nearly linearly with the number of individuals tasked with the project. There are few or no management or coordination costs associated with debugging a piece of open source code -- this is the key `break' in Brooks's Law for OSS.
Raymond includes Linus Torvald's description of the Linux debugging process:
My original formulation was that every problem ``will be transparent to somebody''. Linus demurred that the person who understands and fixes the problem is not necessarily or even usually the person who first characterizes it. ``Somebody finds the problem,'' he says, ``and somebody else understands it. And I'll go on record as saying that finding it is the bigger challenge.'' But the point is that both things tend to happen quickly.
Put alternately:
``Debugging is parallelizable''. Jeff [Dutky <dutky@wam.umd.edu>] observes that although debugging requires debuggers to communicate with some coordinating developer, it doesn't require significant coordination between debuggers. Thus it doesn't fall prey to the same quadratic complexity and management costs that make adding developers problematic.
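The contrast being drawn here -- quadratic coordination cost for joint development versus roughly linear cost for distributed debugging -- is simple arithmetic. A toy sketch (the team sizes are arbitrary, chosen only for illustration):

```python
def comm_channels(n):
    """Brooks's Law: n developers who must all coordinate with each other
    form n*(n-1)/2 pairwise communication channels (quadratic growth)."""
    return n * (n - 1) // 2

def debug_links(n):
    """Raymond/Dutky's observation: n debuggers each need only a link to a
    coordinating developer, so communication grows linearly."""
    return n

# Growing the team 10x multiplies coordination cost ~100x,
# but debugging links only 10x.
small_team = comm_channels(10)    # 45 channels
large_team = comm_channels(100)   # 4950 channels
```

This is why adding debuggers keeps paying off long after adding core developers has stopped doing so.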
One advantage of parallel debugging is that bugs and their fixes are found / propagated much faster than in traditional processes. For example, when the TearDrop IP attack was first posted to the web, less than 24 hours passed before the Linux community had a working fix available for download.
"Impulse Debugging"
An extension to parallel debugging that I'll add to Raymond's hypothesis is "impulse debugging". In the case of the Linux OS, implicit in the act of installing the OS is the act of installing the debugging/development environment. Consequently, it is highly likely that a user/developer who comes across a bug in another individual's component -- especially if that bug is "shallow" -- can very quickly patch the code and, via internet collaboration technologies, propagate that patch back to the code maintainer.
Put another way, OSS processes have a very low entry barrier to the debugging process due to the common development/debugging methodology derived from the GNU tools.
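The patch-propagation loop described above is traditionally driven by unified diffs: the user diffs their fixed file against the original and mails the result to the maintainer, who applies it with patch(1). A small sketch using Python's standard difflib (the file name and one-line "fix" below are invented purely for illustration):

```python
import difflib

# The component as shipped, and the local fix a user/developer just made
original = ["read_input();\n", "process();\n"]
patched = ["read_input();\n", "validate();\n", "process();\n"]

# Produce a unified diff suitable for sending back to the maintainer
patch = list(difflib.unified_diff(
    original, patched,
    fromfile="driver.c.orig", tofile="driver.c"))

print("".join(patch), end="")
```

Because every installation ships the same diff/patch toolchain, the marginal cost of turning a local fix into a contribution is close to zero -- which is the low entry barrier the paragraph above describes.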
Any large-scale development process will encounter conflicts which must be resolved. Often resolution is an arbitrary decision made in order to keep the project moving. In commercial teams, the corporate hierarchy and performance-review structure resolve such conflicts -- how do OSS teams resolve them?
In the case of Linux, Linus Torvalds is the undisputed `leader' of the project. He's delegated large components (e.g. networking, device drivers, etc.) to several of his trusted "lieutenants" who further de-facto delegate to a handful of "area" owners (e.g. LAN drivers).
Other organizational models are described by Eric Raymond (http://earthspace.net/~esr/writings/homesteading/homesteading-15.html):
Some very large projects discard the `benevolent dictator' model entirely. One way to do this is turn the co-developers into a voting committee (as with Apache). Another is rotating dictatorship, in which control is occasionally passed from one member to another within a circle of senior co-developers (the Perl developers organize themselves this way).
This section provides an overview of some of the key reasons OSS developers seek to contribute to OSS projects.
Solving the Problem at Hand
This is basically a rephrasing of Raymond's first rule of thumb -- "Every good work of software starts by scratching a developer's personal itch".
Many OSS projects -- such as Apache -- started as a small team of developers setting out to solve an immediate problem at hand. Subsequent improvements of the code often stem from individuals applying the code to their own scenarios (e.g. discovering that there is no device driver for a particular NIC, etc.)
Education
The Linux kernel grew out of an educational project at the University of Helsinki. Similarly, many of the components of Linux / GNU system (X windows GUI, shell utilities, clustering, networking, etc.) were extended by individuals at educational institutions.
Ego Gratification
The most ethereal, and perhaps most profound motivation presented by the OSS development community is pure ego gratification.
In "The Cathedral and the Bazaar", Eric S. Raymond cites:
The ``utility function'' Linux hackers are maximizing is not classically economic, but is the intangible of their own ego satisfaction and reputation among other hackers.
And, of course, "you aren't a hacker until someone else calls you hacker"
Homesteading on the Noosphere
A second paper published by Raymond -- "Homesteading on the Noosphere" (http://sagan.earthspace.net/~esr/writings/homesteading/), discusses the difference between economically motivated exchange (e.g. commercial software development for money) and "gift exchange" (e.g. OSS for glory).
"Homesteading" is acquiring property by being the first to `discover' it or by being the most recent to make a significant contribution to it. The "Noosphere" is loosely defined as the "space of all work". Therefore, Raymond posits, the OSS hacker motivation is to lay a claim to the largest area in the body of work. In other words, take credit for the biggest piece of the prize.
{ This is a subtle but significant misreading. It introduces a notion of territorial `size' which is nowhere in my theory. It may be a personal error of the author, but I suspect it reflects Microsoft's competition-obsessed culture. }
From "Homesteading on the Noosphere":
Abundance makes command relationships difficult to sustain and exchange relationships an almost pointless game. In gift cultures, social status is determined not by what you control but by what you give away.
...
For examined in this way, it is quite clear that the society of open-source hackers is in fact a gift culture. Within it, there is no serious shortage of the `survival necessities' -- disk space, network bandwidth, computing power. Software is freely shared. This abundance creates a situation in which the only available measure of competitive success is reputation among one's peers.
More succinctly (http://www.techweb.com/internet/profile/eraymond/interview):
SIMS: So the scarcity that you looked for was the scarcity of attention and reward?
RAYMOND: That's exactly correct.
Altruism
This is a controversial motivation and I'm inclined to believe that at some level, Altruism `degenerates' into a form of the Ego Gratification argument advanced by Raymond.
One smaller motivation which, in part, stems from altruism is Microsoft-bashing.
{ What a very fascinating admission, coming from a Microserf! Of course, he doesn't analyze why this connection exists; that might hit too close to home...}
A key threat in any large development team -- and one that is particularly exacerbated by the process chaos of an internet-scale development team -- is the risk of code forking.
Code forking occurs when, over the normal push-and-pull of a development project, multiple, inconsistent versions of the project's code base evolve.
In the commercial world, for example, the strong, singular management of the Windows NT codebase is considered to be one of its greatest advantages over the `forked' codebase found in commercial UNIX implementations (SCO, Solaris, IRIX, HP-UX, etc.).
Forking in OSS -- BSD Unix
Within OSS space, BSD Unix is the best example of forked code. The original BSD UNIX was an attempt by U-Cal Berkeley to create a royalty-free version of the UNIX operating system for teaching purposes. However, Berkeley put severe restrictions on non-academic uses of the codebase.
{ The author's history of the BSD splits is all wrong. }
In order to create a fully free version of BSD UNIX, an ad hoc (but closed) team of developers created FreeBSD. Other developers at odds with the FreeBSD team for one reason or another splintered the OS to create other variations (OpenBSD, NetBSD, BSDI).
There are two dominant factors which led to the forking of the BSD tree:
{ OK, we've learned something now. This may in fact explain the counterintuitive fact that the projects which open up development the most actually have the least tendency to fork... }
Both of these motivations create a situation where developers may try to force a fork in the code and collect royalties (monetary, or ego) at the expense of the collective BSD society.
(Lack of) Forking in Linux
In contrast to the BSD example, the Linux kernel code base hasn't forked. Some of the reasons why the integrity of the Linux codebase has been maintained include:
Linus Torvalds is a celebrity in the Linux world and his decisions are considered final. By contrast, a similar celebrity leader did NOT exist for the BSD-derived efforts.
Linus is considered by the development team to be a fair, well-reasoned code manager and his reputation within the Linux community is quite strong. However, Linus doesn't get involved in every decision. Often, sub groups resolve their -- often large -- differences amongst themselves and prevent code forking.
In contrast to BSD's closed membership, anyone can contribute to Linux and your "status" -- and therefore ability to `homestead' a bigger piece of Linux -- is based on the size of your previous contributions.
Indirectly this presents a further disincentive to code forking. There is almost no credible mechanism by which the forked, minority code base will be able to maintain the rate of innovation of the primary Linux codebase.
Because derivatives of Linux MUST be available through some free avenue, it lowers the long term economic gain for a minority party with a forked Linux tree.
Ego motivations push OSS developers to plant the biggest stake in the biggest Noosphere. Forking the code base inevitably shrinks the space of accomplishment for any subsequent developers to the new code tree.
What are the core strengths of OSS products that Microsoft needs to be concerned with?
Like our Operating System business, OSS ecosystems have several exponential attributes:
The single biggest constraint faced by any OSS project is finding enough developers interested in contributing their time towards the project. As an enabler, the Internet was absolutely necessary to bring together enough people for an Operating System scale project. More importantly, the growth engine for these projects is the growth in the Internet's reach. Improvements in collaboration technologies directly lubricate the OSS engine.
Put another way, the growth of the Internet will make existing OSS projects bigger and will make OSS projects in "smaller" software categories become viable.
Like commercial software, the most viable single OSS project in many categories will, in the long run, kill competitive OSS projects and `acquire' their IQ assets. For example, Linux is killing BSD Unix and has absorbed most of its core ideas (as well as ideas in the commercial UNIXes). This feature confers huge first-mover advantages on a particular project.
The larger the OSS project, the greater the prestige associated with contributing a large, high quality component to its Noosphere. This phenomenon contributes back to the "winner-take-all" nature of the OSS process in a given segment.
The larger the project, the more development/test/debugging the code receives. The more debugging, the more people who deploy it.
Binaries may die but source code lives forever
One of the most interesting implications of viable OSS ecosystems is long-term credibility.
Long-Term Credibility Defined
Long term credibility exists if there is no way you can be driven out of business in the near term. This forces change in how competitors deal with you.
{ TN comments: Note the terminology used here ``driven out of business''. MS believes that putting other companies out of business is not merely ``collateral damage'' -- a byproduct of selling better stuff -- but rather, a direct business goal. To put this in perspective, economic theory and the typical honest, customer-oriented businessperson will think of business as a stock-car race -- the fastest car with the most skillful driver wins. Microsoft views business as a demolition derby -- you knock out as many competitors as possible, and try to maneuver things so that your competitors wipe each other out and thereby eliminate themselves. In a stock car race there are many finishers and thus many drivers get a paycheck. In a demolition derby there is just one survivor. Can you see why ``Microsoft'' and ``freedom of choice'' are absolutely in two different universes? }
For example, Airbus Industries garnered initial long term credibility from explicit government support. Consequently, when bidding for an airline contract, Boeing would be more likely to accept short-term, non-economic returns when bidding against Lockheed than when bidding against Airbus.
Loosely applied to the vernacular of the software industry, a product/process is long-term credible if FUD tactics cannot be used to combat it.
OSS is Long-Term Credible
OSS systems are considered credible because the source code is available from potentially millions of places and individuals.
{ We are deep inside the Microsoft world-view here. I realize that a typical hacker's reaction to this kind of thinking will be to find it nauseating, but it reflects a kind of instrumental ruthlessness about the uses of negative marketing that we need to learn to cope with.

The really interesting thing about these two statements is that they imply that Microsoft should give up on FUD as an effective tactic against us.
Most of us have been assuming that the DOJ antitrust suit is what's keeping Microsoft from hauling out the FUD guns. But if His Gatesness bought this part of the memo, Microsoft may believe that they need to develop a more substantive response because FUD won't work.
This could be both good and bad news. The good news is that Microsoft would give up attack marketing, a weapon which in the past has been much more powerful than its distinctly inferior technology. The bad news is that, against us, giving it up would actually be better strategy; they wouldn't be wasting energy any more and might actually evolve some effective response. }
The likelihood that Apache will cease to exist is orders of magnitude lower than the likelihood that WordPerfect, for example, will disappear. The disappearance of Apache is not tied to the disappearance of binaries (which are affected by purchasing shifts, etc.) but rather to the disappearance of source code and the knowledge base.
Inversely stated, customers know that Apache will be around 5 years from now -- provided there exists some minimal sustained interest from its user/development community.
One Apache customer, in discussing his rationale for running his e-commerce site on OSS, stated, "because it's open source, I can assign one or two developers to it and maintain it myself indefinitely."
Lack of Code-Forking Compounds Long-Term Credibility
The GPL and its aversion to code forking reassures customers that they aren't riding an evolutionary `dead-end' by subscribing to a particular commercial version of Linux.
The "evolutionary dead-end" is the core of the software FUD argument.
{ Very true -- and there's another glaring omission here. If the author had been really honest, he'd have noted that OSS advocates are well positioned to turn this argument around and beat Microsoft to death with it.

By the author's own admission, OSS is bulletproof on this score. On the other hand, the exploding complexity and schedule slippage of the just-renamed ``Windows 2000'' suggest that it is an evolutionary dead end.
The author didn't go on to point that out. But we should. }
And the amateurs are ``making a progressively more credible argument''. By Microsoft's own admission, we're actually winning.
Maybe there's a message about the underlying products here? }
In particular, larger, more savvy organizations which rely on OSS for business operations (e.g. ISPs) are comforted by the fact that they can potentially fix a work-stopping bug independent of a commercial provider's schedule (for example, UUNET was able to obtain, compile, and apply the teardrop attack patch to their deployed Linux boxes within 24 hours of the first public attack).
Alternatively stated, "developer resources are essentially free in OSS". Because the pool of potential developers is massive, it is economically viable to simultaneously investigate multiple solutions / versions to a problem and choose the best solution in the end.
For example, the Linux TCP/IP stack was probably rewritten 3 times. Assembly code components in particular have been continuously hand tuned and refined.
OSS = `perfect' API evangelization / documentation
OSS's API evangelization / developer education basically consists of providing the developer with the underlying code. Whereas evangelization of APIs in a closed-source model basically defaults to trust, OSS API evangelization lets the developer make up his own mind.
NatBro and Ckindel point out a split in developer capabilities here. Whereas the "enthusiast developer" is comforted by OSS evangelization, novice/intermediate developers -- the bulk of the development community -- prefer the trust model + organizational credibility (e.g. "Microsoft says API X looks this way").
{ Whether it's really true that most developers prefer the `trust' model or not is an extremely interesting question.

Twenty years of experience in the field tells me not; that, in general, developers prefer code even when their non-technical bosses are naive enough to prefer `trust'. Microsoft, obviously, wants to believe that its `organizational credibility' counts -- I detect some wishful thinking here.
On the other hand, they may be right. We in the open-source community can't afford to dismiss that possibility. I think we can meet it by developing high-quality documentation. In this way, `trust' in name authors (or in publishers of good repute such as O'Reilly or Addison-Wesley) can substitute for `trust' in an API-defining organization. }
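The source-as-documentation argument can be made concrete with a toy sketch. The `parse_flags` function below is hypothetical, invented purely for illustration; the point is that its one-line docstring (the "trust model") conceals behavior the source reveals immediately:

```python
def parse_flags(spec):
    """Parse a comma-separated flag list."""
    # The source tells the whole story: entries are whitespace-stripped
    # and empty entries are silently dropped -- details the one-line
    # docstring (the "trust model") never mentions.
    return [f.strip() for f in spec.split(",") if f.strip()]

# Trusting the docstring alone, a developer could not predict either of these:
assert parse_flags("a, ,b") == ["a", "b"]   # empty entries vanish
assert parse_flags("  x  ") == ["x"]        # whitespace is stripped
```

With the source in hand, the developer verifies the corner cases directly instead of trusting (or petitioning) the API's owner -- which is exactly the split NatBro and Ckindel describe: enthusiasts read the code, novices want the docstring to be authoritative.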
Strongly componentized OSS projects are able to release subcomponents as soon as the developer has finished his code. Consequently, OSS projects rev quickly & frequently.
The weaknesses in OSS projects fall into 3 primary buckets:
The biggest roadblock for OSS projects is dealing with exponential growth of management costs as a project is scaled up in terms of rate of innovation and size. This implies a limit to the rate at which an OSS project can innovate.
Starting an OSS project is difficult
From Eric Raymond:
It's fairly clear that one cannot code from the ground up in bazaar style. One can test, debug and improve in bazaar style, but it would be very hard to originate a project in bazaar mode. Linus didn't try it. I didn't either. Your nascent developer community needs to have something runnable and testable to play with.
Raymond's argument can be extended to the difficulty of starting/sustaining a project if there is no clear precedent / goal (or there are too many goals) for the project.
Bazaar Credibility
Obviously, there are far more fragments of source code on the Internet than there are OSS communities. What separates "dead source code" from a thriving bazaar?
One article (http://www.mibsoftware.com/bazdev/0003.htm) provides the following credibility criteria:
"....thinking in terms of a hard minimum number of participants is misleading. Fetchmail and Linux have huge numbers of beta testers *now*, but they obviously both had very few at the beginning.
What both projects did have was a handful of enthusiasts and a plausible promise. The promise was partly technical (this code will be wonderful with a little effort) and sociological (if you join our gang, you'll have as much fun as we're having). So what's necessary for a bazaar to develop is that it be credible that the full-blown bazaar will exist!"
I'll posit that some of the key criteria that must exist for a bazaar to be credible include:
Post-Parity Development
When describing this problem to JimAll, he provided the perfect analogy of "chasing taillights". The easiest way to get coordinated behavior from a large, semi-organized mob is to point them at a known target. Having the taillights provides concreteness to a fuzzy vision. In such situations, having a taillight to follow is a proxy for having strong central leadership.
Of course, once this implicit organizing principle is no longer available (once a project has achieved "parity" with the state-of-the-art), the level of management necessary to push towards new frontiers becomes massive.
{ Nonsense. In the open-source world, all it takes is one person with a good idea.

Part of the point of open source is to lower the energy barriers that retard innovation. We've found by experience that the `massive management' the author extols is one of the worst of these barriers.
In the open-source world, innovators get to try anything, and the only test is whether users will volunteer to experiment with the innovation and like it once they have. The Internet facilitates this process, and the cooperative conventions of the open-source community are specifically designed to promote it.
The third alternative to ``chasing taillights'' or ``strong central leadership'' (and more effective than either) is an evolving creative anarchy, in which there are a thousand leaders and ten thousand followers linked by a web of peer review and subject to rapid-fire reality checks.
Microsoft cannot beat this. I don't think they can even really understand it, not on a gut level. }
This is possibly the single most interesting hurdle to face the Linux community now that they've achieved parity with the state of the art in UNIX in many respects.
{ The Linux community has not merely leapt this hurdle, but utterly demolished it. This fact is at the core of open-source's long-term advantage over closed-source development. }
Un-sexy work
Another interesting thing to observe in the near future of OSS is how well the team is able to tackle the "unsexy" work necessary to bring a commercial grade product to life.
{ Characterizing this kind of work as ``unsexy'' reveals an interesting blind spot. It has been my experience that for almost any kind of work, there will be somebody, somewhere, who thinks it's interesting or fulfilling enough to undertake it.

Take the example of Unicode support above. Who's likely to do the best, most thorough job of implementing Unicode support, of the following three people?
It's likely to be either Ana or Jeff (all else, including skill sets, being equal), because they're scratching their itches. It ain't gonna be Joe.
Now, which development model is more likely to pull Ana or Jeff into the development effort -- closed source, or open?
Easy question. }
In the operating systems space, this includes small, essential functions such as power management, suspend/resume, management infrastructure, UI niceties, deep Unicode support, etc.
For Apache, this may mean novice-administrator functionality such as wizards.
Integrative/Architectural work
Integrative work across modules is the biggest cost encountered by OSS teams. An email memo from Nathan Myhrvold in May 1998 points out that, of all the aspects of software development, integration work is the most subject to Brooks' Law.
Up till now, Linux has greatly benefited from the integration / componentization model pushed by previous UNIXes. Additionally, the organization of Apache was simplified by the relatively simple, fault-tolerant specifications of the HTTP protocol and UNIX server application design.
Future innovations which require changes to the core architecture / integration model are going to be incredibly hard for the OSS team to absorb, because they simultaneously devalue the team's precedents and skill sets.
{ This prediction is of a piece with the author's earlier assertion that open-source development relies critically on design precedents and is unavoidably backward-looking. It's myopic -- apparently things like Python, Beowulf, and Squeak (to name just three of hundreds of innovative projects) don't show on his radar.

We can only hope Microsoft continues to believe this, because it would hinder their response. Much will depend on how they interpret innovations such as (for example) the SMPization of the Linux kernel.
Interestingly, the author contradicts himself on this point.

A former Microserf tells me that `throw one away' is actually pretty close to a defined Microsoft policy, but one designed to leverage marketing rather than fix problems. The project he was involved with involved a web-based front-end to Exchange. The resulting first draft (after 14 months of effort) was completely inferior to already-existing free web e-mail (Yahoo, Hotmail, etc). The official response to that was ``

He adds: Internet Explorer 5, just before one of its beta releases, had about 300K (yes, 300K) outstanding bugs targeted to be fixed before the beta release. Much of this was accomplished by simply removing large chunks of planned (new) functionality and pushing them to a later (1-2 years later) release. }
These are weaknesses intrinsic to OSS's design/feedback methodology.
Iterative Cost
One of the keys to the OSS process is having many more iterations than commercial software (Linux was known to rev its kernel more than once a day!). However, commercial customers tell us they want fewer revs, not more.
{ I wonder how this answer would change if Microsoft revs weren't so expensive?

This is why commercial Linux distributors exist -- to mediate between the rapid-development process and customers who don't want to follow every twist of it. The kernel may rev once a day, but Red Hat only revs once in six months. }
"Non-expert" Feedback
The Linux OS is not developed for end users but rather for other hackers. Similarly, the Apache web server is implicitly targeted at the largest, most savvy site operators, not the departmental intranet server.
The key thread here is that because OSS doesn't have an explicit marketing / customer feedback component, wishlists -- and consequently feature development -- are dominated by the most technically savvy users.
One thing that development groups at MSFT have learned time and time again is that ease of use, UI intuitiveness, etc. must be built into a product from the ground up and cannot be pasted on at a later time.
{ This demands comment -- because it's so right in theory, but so hideously wrong in Microsoft practice. The wrongness implies an exploitable weakness in the implied strategy (for Microsoft) of emphasizing UI.

There are two ways to build in ease of use "from the ground up". One (the Microsoft way) is to design monolithic applications that are defined and dominated by their UIs. This tends to produce ``Windowsitis'' -- rigid, clunky, bug-prone monstrosities that are all glossy surface with a hollow interior.
Programs built this way look user-friendly at first sight, but turn out to be huge time and energy sinks in the longer term. They can only be sustained by carpet-bomb marketing, the main purpose of which is to delude users into believing that (a) bugs are features, or that (b) all bugs are really the stupid user's fault, or that (c) all bugs will be abolished if the user bends over for the next upgrade. This approach is fundamentally broken.
The other way is the Unix/Internet/Web way, which is to separate the engine (which does the work) from the UI (which does the viewing and control). This approach requires that the engine and UI communicate using a well-defined protocol. It's exemplified by browser/server pairs -- the engine specializes in being an engine, and the UI specializes in being a UI.
With this second approach, overall complexity goes down and reliability goes up. Further, the interface is easier to evolve/improve/customize, precisely because it's not tightly coupled to the engine. It's even possible to have multiple interfaces tuned to different audiences.
Finally, this architecture leads naturally to applications that are enterprise-ready -- that can be used or administered remotely from the server. This approach works -- and it's the open-source community's natural way to counter Microsoft.
The key point here is that if Microsoft wants to fight the open-source community on UI, let them -- because we can win that battle, too, fighting it our way. They can write ever-more-elaborate Windows monoliths that spot-weld you to your application-server console. We'll win if we write clean distributed applications that leverage the Internet and the Web and make the UI a pluggable/unpluggable user choice that can evolve.
Note, however, that our win depends on the existence of well-defined protocols (such as HTTP) to communicate between UIs and engines. That's why the stuff later in this memo about ``de-commoditizing protocols'' is so sinister. We need to guard against that. }
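The engine/UI split argued for above can be sketched in a few lines of Python. Everything here is a hypothetical illustration -- `TodoEngine`, its tiny JSON protocol, and the two UI functions are invented for this sketch, standing in for the browser/server pairs and HTTP mentioned in the commentary:

```python
import json

class TodoEngine:
    """The engine does the work; it knows nothing about any UI."""
    def __init__(self):
        self._items = []

    def handle(self, request):
        # The well-defined protocol: JSON-encoded requests and replies,
        # playing the role HTTP plays between a browser and a server.
        msg = json.loads(request)
        if msg["op"] == "add":
            self._items.append(msg["item"])
            return json.dumps({"ok": True})
        if msg["op"] == "list":
            return json.dumps({"ok": True, "items": self._items})
        return json.dumps({"ok": False, "error": "unknown op"})

def terse_ui(engine, command):
    """One possible UI: a terse command line for expert users."""
    op, _, arg = command.partition(" ")
    req = {"op": "add", "item": arg} if op == "add" else {"op": "list"}
    return json.loads(engine.handle(json.dumps(req)))

def friendly_ui(engine):
    """A second UI over the same engine: a chatty report for novices."""
    reply = json.loads(engine.handle(json.dumps({"op": "list"})))
    return "You have %d item(s): %s" % (len(reply["items"]),
                                        ", ".join(reply["items"]))

engine = TodoEngine()
terse_ui(engine, "add write docs")
print(friendly_ui(engine))
```

Because both front ends speak only the protocol, either can evolve, be replaced, or run on a different machine without touching the engine -- which is exactly why the commentary insists the battle turns on keeping protocols like HTTP open.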
The interesting trend to observe here will be the effect that commercial OSS providers (such as RedHat in Linux space, C2Net in Apache space) will have on the feedback cycle.
How can OSS provide the service that consumers expect from software providers?
Support Model
Product support is typically the first issue prospective consumers of OSS packages worry about and is the primary feature that commercial redistributors tout.
However, the vast majority of OSS projects are supported by the developers of the respective components. Scaling this support infrastructure to the level expected in commercial products will be a significant challenge. There are many orders of magnitude difference between users and developers in IIS vs. Apache.
{ The vagueness of this last sentence is telling. Had the author continued, he would have had to acknowledge that Apache is clobbering the crap out of IIS in the marketplace (Apache's share 54% and climbing; IIS's somewhere around 14% and dropping).

This would have led to a choice of unpalatable (for Microsoft) alternatives. It may be that Apache's informal user-support channels and `organizational credibility' actually produce better results than Microsoft's IIS organization can offer. If that's true, then it's hard to see in principle why the same shouldn't be true of other open-source projects.
The alternative -- that Apache is so good that it doesn't need much support or `organizational credibility' -- is even worse. That would mean that all of Microsoft's heavy-duty support and marketing battalions were just a huge malinvestment, like crumbling Stalinist apartment blocks forty years later.
These two possible explanations imply distinct but parallel strategies for open-source advocates. One is to build software that's so good it just doesn't need much support (but we'd do this anyway, and generally have). The other is to do more intensely what we're already doing along the lines of support mailing lists, newsgroups, FAQs, and other informal but extremely effective channels.

A former Microserf adds: As of NT5 (sorry, Win2K :-) MS is going to claim a huge increase in IIS market share. This is because IIS5 is built directly linked with the NT kernel and handles all external TCP traffic (mail, http, etc). MSOffice is also going to communicate through IIS when talking with NT or Exchange, thus allowing them to add all internal LAN traffic to their usage reports. Let's see if we can pop their balloon before they raise it. }
For the short-medium run, this factor alone will relegate OSS products to the top tiers of the user community.
Strategic Futures
A very subtle problem which will affect full-scale consumer adoption of OSS projects is the lack of strategic direction in the OSS development cycle. While incremental improvement of the current bag of features in an OSS product is very credible, future features have no organizational commitment to guarantee their development.
{ No. In the open-source community, new features are driven by the novelty- and territory-seeking behavior of individual hackers. This certainly is not a force to be despised. The Internet and the Web were built this way -- not because of `organizational commitment', but because somebody, somewhere, thought ``Hey -- this would be neat...''.

Perhaps we're fortunate that `organizational credibility' looms so large in the Microsoft world-view. The time and energy they spend worrying about that and believing it's a prerequisite are resources they won't spend doing anything that might be effective against us. }
What does it mean for the Linux community to "sign up" to help build the Corporate Digital Nervous System? How can Linux guarantee backward compatibility with apps written to previous API's? Who do you sue if the next version of Linux breaks some commitment? How does Linux make a strategic alliance with some other entity?
{ Who do you sue if NT 5.0 (excuse me, "Windows 2000") doesn't ship on time? Has anyone ever recovered from Microsoft for any of their backwards-incompatibilities or other screwups?

The question about backward compatibility is pretty ironic, considering that I've never heard of a program that will run under all of Windows 3.1, Windows 95, Windows 98, and NT 4.0 without change.
The author has been overtaken by events here. He should ask Microsoft's buddies at Intel, who bought a minority stake in Red Hat less than two months after this memo was written. }
In the last 2 years, OSS has taken another twist with the emergence of companies that sell OSS software and, more importantly, hire full-time developers to improve the code base. What's the business model that justifies these salaries?
In many cases, the answers to these questions are similar to "why should I submit my protocol/app/API to a standards body?"