Thumbnail oversimplification of processors:

Before Linux

In the 1950s and 60s, mainframe and minicomputer processors took up an entire circuit board. Unix started on these kinds of systems, and few of them remain in use today.

In 1971 Intel engineers created the first microprocessor, the Intel 4004, by being the first to squeeze all the functions of a CPU onto a single silicon "chip". As transistor budgets increased they upgraded the 4004's design into the 8008 and then the 8080, the chip inside coin-operated Space Invaders machines and the MITS Altair. The Altair was widely cloned to form the first family of microcomputers, which contained (and were named after) the S-100 bus, were programmed in BASIC from a startup called Micro-soft, and ran an OS called CP/M from a startup called Digital Research.

One of the Intel engineers left to form his own company, Zilog, which made an 8080 clone called the Z80. But the main alternative to the 8080 came from some ex-Motorola engineers who left to join MOS Technology, the company that did the (much cheaper) 6502 processor, with its own instruction set. Motorola sued the escaped engineers for being better at it than they were; in the end the engineers went back to Motorola and Commodore bought the rest of the company to use these processors in the VIC-20 (the first computer to sell a million units) and its successor the Commodore 64 (even more popular). The 6502 also wound up running the Apple I and Apple II, and the first big home game console (the Atari 2600).

In 1981 the march of Moore's Law drove the computing world to switch to 16 bits, coinciding with the arrival of the IBM PC. The PC was based on Intel's 8086 (actually a variant called the 8088, which ran the same software but fit in cheaper 8-bit motherboards and took roughly twice as many clock cycles to do anything).

The main competitor to the 8086 was Motorola's 32-bit 68000 line of processors (the "68k" for short), used in just about everything except the PC. Just as the 8086 was a sequel to the 8080, the 68000 was Motorola's sequel to the 6800. Motorola jumped straight to 32 bits, which had little advantage back when 64k was considered a lot of memory (and cost hundreds of dollars), but the 68k powered Apple's Macintosh, Commodore's Amiga, Sun's unix workstations, and so on.


ARM

The ARM processor is popular in the embedded space because it has the best ratio of performance to power consumption, meaning the longest battery life and the least heat generated for a given computing task. It's the standard processor of smartphones. The 64-bit version of the architecture (ARMv8) was announced in 2011, with a 2014 ship date for volume silicon.

Processor vs architecture

Although ARM hardware has many different processor designs with varying clock speeds, cache sizes, and integrated peripherals, from a software perspective what matters is ARM architectures, which are the different instruction sets a compiler can produce. The architecture names have a "v" in them and the processor designs don't, so "ARM922T" is a hardware processor design which implements the "ARMv4T" software instruction set.

The basic architectures are numbered: ARMv3, ARMv4, ARMv5, ARMv6, and ARMv7. An ARMv5 capable processor can run ARMv4 binaries, ARMv6 can run ARMv5, and so on. Each new architecture is a superset of the old ones, and the main reason to compile for newer platforms is efficiency: faster speed and better battery life. (I.e. they work about like i386, i486, i586, and i686 do in the x86 world. Recompiling ARMv4 code as ARMv5 code provides about a 25% speedup on the same hardware.)
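
For example, here's a minimal compile-time check of which baseline the compiler is producing code for. This is a sketch assuming GCC or Clang, which predefine __ARM_ARCH as an integer on ARM targets (older GCC versions only define individual macros like __ARM_ARCH_5TE__):

    #include <stdio.h>

    int main(void)
    {
    #if defined(__ARM_ARCH)
        /* Newer compilers report the architecture as a number. */
        printf("Compiled for ARMv%d\n", __ARM_ARCH);
    #elif defined(__arm__)
        printf("ARM target, but this compiler predates __ARM_ARCH\n");
    #else
        printf("Not an ARM target\n");
    #endif
        return 0;
    }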

The oldest architecture this compatibility goes back to is ARMv3 (which introduced 32-bit addressing), but that hardware is now obsolete: not just no longer being sold, but mostly cycled out of the installed base. The oldest architecture still in use is ARMv4, which should run on any ARM hardware still in use today (except ARMv7M, which is ARM in name only: it implements only the Thumb2 instruction set, not traditional ARM instructions).

Architecture extensions

ARM architectures can have several instruction set extensions, indicated by letters after the ARMv# part. Some (such as the letter "J" denoting the "Jazelle" extension, which provides hardware acceleration for running Java bytecode) can safely be ignored if you're not using them, and others are essentially always there in certain architectures (such as the DSP extension signified by the letter "E", which always seems to be present in ARMv5). But some are worth mentioning:

The "Thumb" extension (ARMv4T) adds a smaller instruction set capable of fitting more code in a given amount of memory. Unfortunately thumb instructions often run more slowly, and the instruction set isn't complete enough to implement a kernel, so they supplement rather than replace the conventional ARM instruction set. Note that all ARMv5 and later processors include Thumb support by default, only ARMv4T offers it as an extension. The newer "Thumb2" version fixes most of the deficiencies of the original Thumb instruction set (you _can_ do a kernel in that), and is part of the ARMv7 architecture. The ARMv7M (Mangled? Mutant?) chip supports nothing _but_ Thumb2, abandoning backwards compatability with any other ARM binaries.

The VFP (Vector Floating Point) coprocessor provides hardware floating point acceleration. There are some older hardware floating point options, and some newer ones backwards compatible with VFP, but in general you can treat a chip as either "software floating point" or "VFP".
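
At the source level you can tell the two cases apart with the compiler's predefined macros: GCC defines __SOFTFP__ when it was told to do floating point in software, and __VFP_FP__ when float values use the VFP format. A sketch assuming GCC's ARM macros:

    #include <stdio.h>

    int main(void)
    {
    #if defined(__SOFTFP__)
        printf("Software floating point\n");
    #elif defined(__VFP_FP__)
        printf("VFP-format hardware floating point\n");
    #else
        printf("Some other floating point arrangement\n");
    #endif
        return 0;
    }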

The other detail is "l" vs "b" for little-endian and big-endian. In theory ARM can do both (this is a compiler distinction, not a hardware distinction), but in practice little-endian is almost universally used on ARM, and most boards are wired up to support little-endian only even if the processor itself can theoretically handle both.
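
Endianness is easy to confirm at runtime by storing a known multi-byte value and looking at its first byte in memory (at compile time, GCC predefines __ARMEL__ or __ARMEB__ on ARM targets):

    #include <stdio.h>

    int main(void)
    {
        /* The first byte of a 32-bit value is the low byte on
           little-endian machines and the high byte on big-endian ones. */
        union { unsigned int i; unsigned char c[4]; } u = { 0x01020304 };

        printf("%s-endian\n", u.c[0] == 0x04 ? "little" : "big");
        return 0;
    }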

So for example, "armv4tl" is ARMv4 with Thumb extensions, little-endian. This is the minimum requirement to use EABI, the current binary interface standard for ARM executables. (The older OABI is considered obsolete.)

Application Binary Interface

Linux initially implemented a somewhat ad-hoc ABI for ARM with poor performance and several limitations, and when ARM got around to documenting a new one the main downside was that it was incompatible with the old binaries.

So ARM has two ABIs that can run on the same hardware: the old one is called OABI and the new one is EABI. (This is a bit like the way BSD binaries won't run under Linux even though the hardware's the same, or the long-ago switch from a.out to ELF executable formats.)

The oldest hardware that can run EABI is ARMv4T, so ARMv4 hardware without the Thumb extensions still has to use OABI, which is why you don't see much of that anymore. The kernel, C library, and compiler must all agree which ABI is in use or the binaries won't run. The transition to EABI happened somewhere around 2005, and these days everything is EABI.
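
The incompatibility shows up right down at the system call level: EABI passes the syscall number in register r7 and traps with a plain "svc 0", while OABI encoded the number in the trap instruction itself, so a binary built one way can't talk to a kernel built the other. Here's a sketch of the EABI convention (ARM-only, GCC inline assembly):

    #include <sys/syscall.h>

    /* Exit the process via a raw EABI system call:
       syscall number in r7, first argument in r0, then trap. */
    static void raw_exit(int code)
    {
        register long nr  __asm__("r7") = SYS_exit;
        register long arg __asm__("r0") = code;

        __asm__ volatile("svc 0" : : "r"(nr), "r"(arg));
        __builtin_unreachable(); /* exit doesn't return */
    }

    int main(void)
    {
        raw_exit(42); /* the shell sees exit status 42 */
    }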

Further Reading

In theory the best reference for ARM processors is the website of ARM Limited, but unfortunately they make it somewhat hard to find information about their older products and many of their pages are more concerned with advertising their newest products than giving the requested information. Wikipedia may be less frustrating, and occasionally even accurate.


Motorola 68000

Very popular in the 80's.


Mips

In the 1970s MIPS was a common acronym for "Millions of Instructions Per Second", a measure of processor speed. It also sounded like MITS (Micro Instrumentation and Telemetry Systems), the company that made the first microcomputer (the MITS Altair).

In 1981 a RISC architecture design team at Stanford University led by John L. Hennessy (who went on to become president of the university) designed a "Microprocessor without Interlocked Pipeline Stages", and used that as an excuse to turn the acronym into a registered trademark of MIPS Technologies, Inc.

MIPS used to be neck and neck with ARM, until the company behind MIPS fought a legal battle with Lexra that turned off enough customers to let ARM pull ahead and become the standard processor of smartphones. The MIPS platform has never fully recovered, but retains some niches, most prominently in devices connected to wall current (routers, set top boxes, etc). One advantage MIPS historically had over ARM was the availability of an FPGA version, which allowed easier system-on-chip prototyping. (This was before Linux ran on the Xilinx MicroBlaze.)

The glory days of MIPS were when Silicon Graphics built unix graphics workstations around it. (SGI built some of the earliest 3D accelerator hardware, originally attached to DEC VAX minicomputers. When Voodoo cards brought 3D to PCs, SGI bought half of Cray and tried to go upmarket into supercomputers, hired a new CEO who convinced them to abandon unix and port everything to Windows NT before he left for Microsoft, and then jumped on the Itanic bandwagon. Long story short: they don't do MIPS anymore.) SGI's workstation success contributed to some game consoles, such as the PlayStation 2, being MIPS based.

The company's website talks about their customers.


PowerPC

Apple's original Macintosh computers used Motorola 68000 processors. After Steve Jobs left to create NeXT and Pixar, the AIM alliance (Apple, IBM, and Motorola) took IBM's "POWER" minicomputer processor and scaled it down for use in Macintoshes. This created a "PC version of the POWER architecture", called PowerPC. (This lasted about a decade before Jobs returned, switched new Macs to x86, and did ARM-based phones and tablets.)

Motorola tried to strip PPC down for use in smartphones (creating the 880, a chip implementing a subset of the PPC instruction set), but that ended when they spun off their processor division as a new company (Freescale) and switched their phones to ARM chips from other vendors. IBM tried to strip down its own embedded version (the 440), selecting a different (incompatible) subset of the PowerPC instruction set. QEMU's "bamboo" board emulates this, and userspace 440 code should run on a full PPC (the kernel might not; the MMU implementation is different).

PowerPC is still around in game consoles (Xbox 360, PlayStation 3, Wii) and some supercomputers. These days the Power architecture is maintained by a consortium.


SuperH (Hitachi)


Sparc


x86