From 0b47b7e62b8859cd30142df2555c6737fd6d9408 Mon Sep 17 00:00:00 2001
From: Rob Landley

Toybox should be simple, small, fast, and full featured. In that order.
+It should be possible to get about 80% of the way to each goal
+before they really start to fight.
When these goals need to be balanced off against each other, keeping the code
as simple as it can be to do what it does is the most important (and hardest)
goal. Then keeping it small is slightly more important than making it fast.
Features are the reason we write code in the first place but this has all
been implemented before so if we can't do a better job why bother?
-It should be possible to get 80% of the way to each goal
-before they really start to fight. Here they are in reverse order
-of importance:
-
-These days toybox is the command line of Android, so anything the android
-guys say to do gets at the very least closely listened to.

Toybox should provide the command line utilities of a build
environment capable of recompiling itself under itself from source code.
This minimal build system conceptually consists of 4 parts: toybox,
@@ -34,18 +38,20 @@
Android Open Source Project under the result. Any "circular dependencies"
should be solved by toybox including the missing dependencies itself
(see "Shared Libraries" below). Finally, toybox may provide some "convenience" utilties
+ Toybox may also provide some "convenience" utilties
like top and vi that aren't necessarily used in a build but which turn
the minimal build environment into a minimal development environment
(supporting edit/compile/test cycles in a text console), configure
network infrastructure for communication with other systems (in a build
cluster), and so on.

-The hard part is deciding what NOT to include.
-A project without boundaries will bloat itself
-to death. One of the hardest but most important things a project must
-do is draw a line and say "no, this is somebody else's problem, not
-something we should do."
+And these days toybox is the command line of Android, so anything the android
+guys say to do gets at the very least closely listened to.
+
+The hard part is deciding what NOT to include. A project without boundaries
+will bloat itself to death. One of the hardest but most important things a
+project must do is draw a line and say "no, this is somebody else's problem,
+not something we should do."
Some things are simply outside the scope of the project: even though
posix defines commands for compiling and linking, we're not going to include
a compiler or linker (and support for a potentially infinite number of hardware
@@ -68,7 +74,10 @@
development systems, are a distraction from the 1.0 release.

+Quick smoketest: use the "time" command, and if you haven't got a test
+case that's embarrassing enough to motivate digging, move on.
+
-It's easy to say lots about optimizing for speed (which is why this section
+It's easy to say a lot about optimizing for speed (which is why this section
is so long), but at the same time it's the optimization we care the least about.
The essence of speed is being as efficient as possible, which means doing as
little work as possible. A design that's small and simple gets you 90% of the
@@ -77,16 +86,17 @@
it's worth (and often actually counterproductive). Still, here's some
advice:

First, understand the darn problem you're trying to solve. You'd think
-I wouldn't have to say this, but I do. Trying to find a faster sorting
+I wouldn't have to say this, and yet. Trying to find a faster sorting
algorithm is no substitute for figuring out a way to skip the sorting step
entirely. The fastest way to do anything is not to have to do it at all,
and _all_ optimization boils down to avoiding unnecessary work.

-Speed is easy to measure; there are dozens of profiling tools for Linux
-(although personally I find the "time" command a good starting place).
-Don't waste too much time trying to optimize something you can't measure,
-and there's not much point speeding up things you don't spend much time doing
-anyway.
+Speed is easy to measure; there are dozens of profiling tools for Linux,
+but sticking in calls to "millitime()" out of lib.c and subtracting
+(or doing two clock_gettime() calls and then nanodiff() on them) is
+quick and easy. Don't waste too much time trying to optimize something you
+can't measure, and there's not much point speeding up things you don't spend
+much time doing anyway.

Understand the difference between throughput and latency. Faster
processors improve throughput, but don't always do much for latency.
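For illustration, here's roughly what that quick-and-dirty timing looks like
as a standalone sketch. It calls clock_gettime() directly and subtracts;
toybox's actual millitime() and nanodiff() helpers live in lib/lib.c, and
everything else here (names included) is hypothetical:

    #include <stdio.h>
    #include <time.h>

    // Illustrative only: toybox's lib/lib.c has similar helpers.
    static long long nanotime(void)
    {
      struct timespec ts;

      clock_gettime(CLOCK_MONOTONIC, &ts);
      return ts.tv_sec*1000000000LL + ts.tv_nsec;
    }

    int main(void)
    {
      long long total = 0, start = nanotime();
      int i;

      for (i = 0; i < 1000000; i++) total += i;  // the code being measured

      printf("%lld nanoseconds (sum %lld)\n", nanotime()-start, total);
      return 0;
    }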
@@ -98,6 +108,12 @@
about avoiding system calls or function calls or anything else in the name
of speed unless you are in the middle of a tight loop that you've already
proven isn't running fast enough.)

+The lowest hanging optimization fruit is usually either "don't make
+unnecessary copies of data" or "use a reasonable block size in your
+I/O transactions instead of byte-at-a-time".
+Start by looking for those; most of the rest of this advice is just explaining
+why they're bad.

"Locality of reference" is generally nice, in all sorts of contexts.
It's obvious that waiting for disk access is 1000x slower than doing stuff in
RAM (and making the disk seek is 10x slower than sequential reads/writes),
@@ -147,7 +163,7 @@
memory killer to free up pages by killing processes (the alternative is the
entire OS freezing solid). Modern operating systems seldom run out of
memory gracefully.

-Also, it's better to be simple than clever. Many people think that mmap()
+It's usually better to be simple than clever. Many people think that mmap()
is faster than read() because it avoids a copy, but twiddling with the memory
management is itself slow, and can cause unnecessary CPU cache flushes. And
if a read faults in dozens of pages sequentially, but your mmap iterates
@@ -160,7 +176,7 @@
try to speed things up, and measure again to confirm it actually _did_ speed
things up rather than made them worse. (And understanding what's really going
on underneath is a big help to making it happen faster.)

-In general, being simple is better than being clever. Optimization
+Another reason to be simple rather than clever is that optimization
strategies change with time. For example, decades ago precalculating a table
of results (for things like isdigit() or cosine(int degrees)) was clearly
faster because processors were so slow. Then processors got faster and grew
@@ -169,54 +185,108 @@
the table lookup (because the calculation fit in L1 cache but the lookup
had to go out to DRAM). Then cache sizes got bigger (the Pentium M has
2 megabytes of L2 cache) and the table fit in cache, so the table became
fast again... Predicting how changes in hardware will affect your algorithm
-is difficult, and using ten year old optimization advice can produce
-laughably bad results. But being simple and efficient is always going to
-give at least a reasonable result.
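To make that concrete, here's the isdigit() flavor of the tradeoff (an
illustrative sketch, not code from toybox):

    // Both report whether c is a decimal digit. Which one is faster has
    // flip-flopped with hardware generations; the computed version is
    // simple and was never embarrassingly slow.
    static const char digit_table[256] = {
      ['0'] = 1, ['1'] = 1, ['2'] = 1, ['3'] = 1, ['4'] = 1,
      ['5'] = 1, ['6'] = 1, ['7'] = 1, ['8'] = 1, ['9'] = 1,
    };

    int is_digit_lookup(unsigned char c) { return digit_table[c]; }
    int is_digit_compute(unsigned char c) { return c >= '0' && c <= '9'; }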
+Even at the design level, a lot of simple algorithms scale terribly but
+perform fine with small data sets. When small datasets are the common case,
+"better" versions that trade higher throughput for worse latency can
+consistently perform worse.
+
+So if you think you're only ever going to feed the algorithm small data sets,
+maybe just do the simple thing and wait for somebody to complain. For example,
+you probably don't need to sort and binary search the contents of
+/etc/passwd, because even 50k users is still a reasonably manageable data
+set for a readline/strcmp loop, and that's the userbase of a fairly major
+university.
+
+Instead, commands like "ls" call bufgetpwuid() out of lib/lib.c,
+which keeps a linked list of recently seen items, avoiding reparsing entirely
+and trusting locality of reference to bring up the same dozen or so entries
+for "ls -l /dev" or similar. The pathological failure mode of "simple
+linked list" is to perform exactly as badly as constantly rescanning a
+huge /etc/passwd, so this simple optimization shouldn't ever make performance
+worse (modulo possible memory exhaustion and thus swap thrashing).
+
+On the other hand, toybox's multiplexer does sort and binary
+search its command list to minimize the latency of each command startup,
+because the sort is a compile-time cost done once per build, and the whole
+of command startup is a "hot path" that should do as little work as possible:
+EVERY command has to go through it every time before performing any other
+function, so tiny gains are worthwhile. (These decisions aren't perfect, the
+point is to show that thought went into them.)
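The idea behind bufgetpwuid() looks something like this sketch (simplified
and hypothetical; the real function in lib/lib.c differs in detail):

    #include <pwd.h>
    #include <stdlib.h>
    #include <string.h>

    struct pwcache {
      struct pwcache *next;
      uid_t uid;
      char name[];  // C99 flexible array member holds the copied name
    };

    static struct pwcache *pwtop;

    // Return the username for a uid, caching results in a linked list so
    // "ls -l" doesn't reparse /etc/passwd for every directory entry.
    char *cached_username(uid_t uid)
    {
      struct pwcache *entry;
      struct passwd *pw;

      // Hot path: locality of reference means the same dozen entries
      // usually show up here.
      for (entry = pwtop; entry; entry = entry->next)
        if (entry->uid == uid) return entry->name;

      if (!(pw = getpwuid(uid))) return 0;  // unknown uid

      if (!(entry = malloc(sizeof(*entry)+strlen(pw->pw_name)+1)))
        return pw->pw_name;  // out of memory: hand back libc's static copy
      entry->uid = uid;
      strcpy(entry->name, pw->pw_name);
      entry->next = pwtop;
      pwtop = entry;

      return entry->name;
    }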
The famous quote from Ken Thompson, "When in doubt, use brute force",
applies to toybox. Do the simple thing first, do as little of it as possible,
and make sure it's right. You can always speed it up later.
+Quick smoketest: build toybox with and without the command (or the change),
+and maybe run "nm --size-sort" on files in generated/unstripped.
+(See "make bloatcheck" below for toybox's built-in nm size diff-er.)
+Again, being simple gives you most of this. An algorithm that does less work
is generally smaller. Understand the problem, treat size as a cost, and
get a good bang for the byte.
-Understand the difference between binary size, heap size, and stack size.
-Your binary is the executable file on disk, your heap is where malloc() memory
-lives, and your stack is where local variables (and function call return
-addresses) live. Optimizing for binary size is generally good: executing
-fewer instructions makes your program run faster (and fits more of it in
-cache). On embedded systems, binary size is especially precious because
-flash is expensive (and its successor, MRAM, even more so). Small stack size
What "size" means depends on context: there are at least a half dozen +different metrics in two broad categories: space used on disk/flash/ROM, +and space used in memory at runtime.
+
+Your executable file has at least
+four main segments (text = executable code, rodata = read only data,
+data = writeable variables initialized to a value other than zero,
+bss = writeable data initialized to zero).
+Text and rodata are shared between multiple instances of the program running
+simultaneously, the others aren't. Only text, rodata, and data take up
+space in the binary; bss, stack, and heap only matter at runtime. You can
+view toybox's symbols with "nm generated/unstripped/toybox", where the
+T/R/D/B letter tells you which segment the symbol lives in. (Lowercase
+means it's local/static.)
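A tiny standalone example (hypothetical, not from the toybox tree) of where
things land; compile it and run "nm" on the result to see the letters:

    int counter = 42;               // D: data (writeable, nonzero initial value)
    int scratch[4096];              // B: bss (zero initialized; old gcc says C
                                    // for "common" unless given -fno-common)
    const char banner[] = "hello";  // R: rodata (read only, shareable)
    static int quiet;               // b: lowercase because it's static

    int main(void)                  // T: text (executable code, shareable)
    {
      char local[256];              // stack: not in the binary at all

      local[0] = quiet;             // use everything so nothing is discarded
      return counter + scratch[0] + banner[0] + local[0];
    }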
+
+Then at runtime there's
+heap size (where malloc() memory lives) and stack size (where local
+variables and function call arguments and return addresses live). And
+on 32 bit systems mmap() can have a constrained amount of virtual memory
+(usually a couple gigabytes; the limits on 64 bit systems are generally big
+enough it doesn't come up).
+
+Optimizing for binary size is generally good: less code is less to go
+wrong, and executing fewer instructions makes your program run faster (and
+fits more of it in cache). On embedded systems, binary size is especially
+precious because flash is expensive and code may need binary auditing for
+security. Small stack size
is important for nommu systems because they have to preallocate their stack
and can't make it bigger via page fault. And everybody likes a small heap.
Measure the right things. Especially with modern optimizers, expecting
something to be smaller is no guarantee it will be after the compiler's done
-with it. Binary size isn't the most accurate indicator of the impact of a
-given change, because lots of things get combined and rounded during
-compilation and linking. Matt Mackall's bloat-o-meter is a python script
-which compares two versions of a program, and shows size changes in each
-symbol (using the "nm" command behind the scenes). To use this, run
-"make baseline" to build a baseline version to compare against, and
-then "make bloatometer" to compare that baseline version against the current
-code.
+with it. While total binary size is the final result, it isn't always the most
+accurate indicator of the impact of a given change, because lots of things
+get combined and rounded during compilation and linking (and things like
+ASAN disable optimization). Toybox has scripts/bloatcheck to compare two
+versions of a program and show size changes in each symbol (using
+"nm --size-sort"). You can "make baseline" to build a baseline version to
+compare against, and then apply your changes and "make bloatcheck" to compare
+against the saved baseline version.
+
+Avoid special cases. Whenever you see similar chunks of code in more than
one place, it might be possible to combine them and have the users call shared
-code. (This is the most commonly cited trick, which doesn't make it easy. If
-seeing two lines of code do the same thing makes you slightly uncomfortable,
-you've got the right mindset.)
-
-Some specific advice: Using a char in place of an int when doing math
-produces significantly larger code on some platforms (notably arm),
-because each time the compiler has to emit code to convert it to int, do the
-math, and convert it back. Bitfields have this problem on most platforms.
-Because of this, using char to index a for() loop is probably not a net win,
-although using char (or a bitfield) to store a value in a structure that's
-repeated hundreds of times can be a good tradeoff of binary size for heap
-space.
+code (perhaps out of lib/*.c). This is the most commonly cited trick, which
+doesn't make it easy to work out HOW to share. If seeing two lines of code do
+the same thing makes you slightly uncomfortable, you've got the right mindset,
+but "reuse" requires the "re" to have benefit, and infrastructure in search
+of a user will generally bit-rot before it finds one.
+
+There are a lot of potential microoptimizations (on some architectures
+using char instead of int as a loop index is noticeably slower, on some
+architectures C bitfields are surprisingly inefficient, & is often faster
+than % in a tight loop, conditional assignment avoids branch prediction
+failures...) but they're generally not worth doing unless you're trying to
+speed up the middle of a tight inner loop chewing through a large amount
+of data (such as a compression algorithm). For data pumps, sane blocking
+and fewer system calls (buffer some input/output and do a big read/write
+instead of a bunch of small ones) is usually the big win. But be careful
+about caching stuff: the two persistently hard problems in computer
+science are naming things, cache coherency, and off by one errors.
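A few of those, sketched out (illustrative code, not from the toybox source):

    #include <unistd.h>

    // Masking with & wraps an index the same way % does when the size is
    // a power of two, without a divide in the inner loop.
    unsigned ring_advance(unsigned i)
    {
      return (i+1) & 4095;  // same result as (i+1) % 4096
    }

    // Conditional assignment: compilers can usually turn ?: into a
    // conditional move, avoiding a hard-to-predict branch.
    int imax(int a, int b)
    {
      return a > b ? a : b;
    }

    // Sane blocking for a data pump: one read()/write() pair per 4k block
    // instead of per byte. (Short-write handling trimmed for brevity:
    // real code loops on write.)
    void pump(int from, int to)
    {
      char buf[4096];
      ssize_t len;

      while ((len = read(from, buf, sizeof(buf))) > 0) write(to, buf, len);
    }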
I don't do windows.
-We depend on C99 and posix-2008 libc features such as the openat() family of
-functions.

+Standards
+
+Toybox is implemented with reference to c99, Posix 2008, LP64, LSB 4.1,
+the Linux man pages, various IETF RFCs, the linux kernel source's
+Documentation directory, utf8 and unicode, and our terminal control outputs
+ANSI escape sequences.
+
+Toybox gets tested with gcc and llvm on glibc, musl-libc, and bionic, plus
+occasional FreeBSD and MacOS builds for subsets of the commands.
+
+For the build environment and runtime environment, toybox depends on
+posix-2008 libc features such as the openat() family of functions.
We also root around in the linux /proc directory a lot (no other way to
implement "ps" at the moment), and assume certain "modern" linux kernel
-behavior such as large environment sizes (linux commit b6a2fea39318, went
-into 2.6.22 released July 2007, expanding the 128k limit to 2 gigabytes.
-But it was then
+behavior (for example linux 2.6.22
+expanded the 128k process environment size limit to 2 gigabytes, then it was
trimmed back down to 10 megabytes, and when I asked for a way to query
the actual value from the kernel if it was going to keep changing
-like that, Linus declined).
-In theory this shouldn't prevent us from working on
-older kernels or other implementations (ala BSD), but we don't police their
-corner cases.
+like that Linus declined).
+We make an effort to support older kernels
+and other implementations (primarily MacOS and BSD) but we don't always
+police their corner cases very closely.
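For instance, the openat() family mentioned above threads a directory
filehandle through instead of relying on the current working directory
(standalone sketch with hypothetical names, not toybox code):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    // fstatat() is to stat() what openat() is to open(): "name" is
    // resolved relative to dirfd, not the current working directory.
    int show_size(int dirfd, char *name)
    {
      struct stat st;

      if (fstatat(dirfd, name, &st, AT_SYMLINK_NOFOLLOW)) return -1;
      printf("%s: %lld bytes\n", name, (long long)st.st_size);
      return 0;
    }

    int main(void)
    {
      int dirfd = open("/etc", O_RDONLY|O_DIRECTORY);

      if (dirfd == -1) return 1;
      show_size(dirfd, "passwd");
      close(dirfd);
      return 0;
    }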
+
+Why not just use the newest version of each standard?
+
+Partly to support older systems:
+you can't fix a bug in the old system if you can't build in the old
+environment.
+
+Partly because toybox's maintainer has his own corollary to Moore's law:
+50% of what you know about programming the hardware is obsolete every 18
+months, but the advantage of C & Unix is that it's usually the same 50%
+cycling out over and over.
+
+But mostly because the updates haven't added anything we care about.
+Posix-2008 switched some things to larger (64 bit) data types and added the
+openat() family of functions (which take a directory filehandle instead of
+using the Current Working Directory),
+but the 2013 and 2018 releases of posix were basically typo fixes: still
+release 7, still SUSv4. (An eventual release 8 might be interesting but
+it's not out yet.) We use C99 instead of C11 or newer because the new stuff
+was mostly about threading (atomic variables and such), and except for using
+// style single line comments we're more or less writing C89 code anyway.
+The main other new thing of interest in C99 was explicit width data
+types (uint32_t and friends), which LP64 handles for us.
+
+We're ignoring new versions of the Linux Foundation's standards (LSB, FHS)
+entirely, for the same reason Debian is: they're not good at maintaining
+standards. The Linux Foundation acquired the Free Standards Group
+the same way X acquired Y.
+
+We refer to current versions of man7.org because it's
+not easily versioned (the website updates regularly) and because
+Michael Kerrisk does a good job maintaining it so far. That said, we
+try to "provide new" in our commands but "depend on old" in our build scripts.
+(For example, we didn't start using "wait -n" until it had been in bash for 7
+years, and even then people depending on Centos' 10 year support horizon
+complained.)
+
+Using newer vs older RFCs, and upgrading between versions, is a per-case
+judgement call.
+
+How strictly do you adhere to these standards?
+
+...ish? The man pages have a lot of stuff that's not in posix,
+and there's no "init" or "mount" in posix, you can't implement "ps"
+without relying on non-posix APIs....
+
+When the options a command offers visibly contradict posix, we try to have
+a "deviations from posix" section at the top of the source listing the
+differences.
+
+The build needs bash (not a pure-posix sh), and building on MacOS requires
+"gsed" (because Mac's sed is terrible), but toybox is explicitly self-hosting
+and failure to build under the tool versions we provide is a bug.
+
+Within the code, everything in main.c and lib/*.c has to build
+on every supported Linux version, compiler, and library, plus BSD and MacOS.
+We mostly try to keep #if/else staircases for portability issues to
+lib/portability.[ch]. No other lib/*.c file should need them.
+
+Portability of individual commands varies: we sometimes program directly
+against linux kernel APIs (unavoidable when accessing /proc and /sys),
+and individual commands are allowed to #include <linux/*.h> headers.
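The shape of that arrangement, as a hypothetical sketch (toybox's real
lib/portability.[ch] differs in detail):

    // One function name for command code; the #if staircase stays here.
    #if defined(__APPLE__) || defined(__FreeBSD__)
    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/sysctl.h>
    #include <time.h>

    // BSD/MacOS: derive uptime from the kernel's boot time.
    long portable_uptime(void)
    {
      struct timeval boot;
      size_t len = sizeof(boot);
      int mib[2] = {CTL_KERN, KERN_BOOTTIME};

      if (sysctl(mib, 2, &boot, &len, 0, 0)) return -1;
      return time(0)-boot.tv_sec;
    }
    #else
    #include <sys/sysinfo.h>

    // Linux: sysinfo() reports uptime directly.
    long portable_uptime(void)
    {
      struct sysinfo si;

      if (sysinfo(&si)) return -1;
      return si.uptime;
    }
    #endif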
Note that Windows doesn't work like this, and I don't care, but if you're curious here are the insane legacy reasons why this is broken on Windows.
+The main squishy bit in LP64 is that "long long" was defined as
+"at least" 64 bits instead of "exactly" 64 bits, and the standards body
+that issued it collapsed in the wake of the proprietary unix wars (all
+those lawsuits between AT&T/BSDI/Novell/Caldera/SCO), so it is not
+available to issue an official correction. Then again a processor
+with 128-bit general purpose registers wouldn't be commercially viable
+until 2053 (because 2005+32*1.5), and the S-curve of Moore's Law is slowly
+bending back down as atomic limits and exponential cost increases produce
+increasing drag.... (The original Moore's Law curve would give a high end
+2022 workstation around 8 terabytes of RAM, available retail. Most don't
+even come with that much disk space.) At worst we don't need to care for
+decades, the S-curve means probably not in our lifetimes, and atomic limits
+may mean "never". I'm ok treating "long long" as exactly 64 bits.
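One cheap way (illustrative, not from the toybox source) to pin those
assumptions down at compile time, using a C99-compatible trick instead of
C11's _Static_assert:

    // A negative array size is a compile error, so the build breaks
    // loudly anywhere these stop being true.
    typedef char assert_ll_is_64bit[sizeof(long long) == 8 ? 1 : -1];
    typedef char assert_lp64_long[sizeof(long) == 8 ? 1 : -1];   // LP64 only
    typedef char assert_lp64_ptr[sizeof(void *) == 8 ? 1 : -1];  // LP64 only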
+
+On platforms like x86, variables of type char default to signed. On
+platforms like arm, char defaults to unsigned. This difference can lead to
+subtle portability bugs, so we feed the compiler -funsigned-char to get
+the same behavior everywhere.
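The classic symptom looks like this (standalone illustration, not toybox
code):

    #include <stdio.h>

    int main(void)
    {
      char c = 0xFF;  // 255 as unsigned char, -1 as signed char

      printf("%d\n", c);  // prints 255 or -1 depending on the platform
      if (c == EOF) printf("0xFF is indistinguishable from EOF here\n");
      return 0;
    }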
@@ -444,8 +617,7 @@
work.

(This is why we use an external https wrapper program, because depending
on openssl or similar to be linked in would change the behavior of toybox.)
Toybox is licensed 0BSD, which is a public domain equivalent license
approved by SPDX. This works like other BSD licenses except that it doesn't
@@ -464,8 +636,7 @@
most BSD or Apache licensed code without changing our license terms.
license, such as the xz decompressor or libtommath and libtomcrypt.

The real coding style holy wars are over things that don't matter
(whitespace, indentation, curly bracket placement...) and thus have no
@@ -513,6 +684,18 @@
that's easier to search for perhaps?
(In C "char *a, b;" and "char* a, b;" mean the same thing: "a" is a pointer but "b" is not. Spacing it the second way is not how C works.) +We wrap lines at 80 columns. Part of the reason for this I (toybox's +founder Rob) have mediocre eyesight (so tend to increase the font size in +terminal windows and web browsers), and program in a lot of coffee shops +on laptops with a smallish sceen. I'm aware this exasperates Linus torvalds +(with his 8-character tab intents where just being in a function eats 8 chars +and 4 more levels eats half of an 80 column terminal),but you've +gotta break somewhere and even Linus admits there isn't another obvious +place to do so. (80 columns came from punched cards, which came +from civil war era dollar bill sorting boxes IBM founder Herman Hollerith +bought secondhand when bidding to run the 1890 census. "Totally arbitrary" +plus "100 yeas old" = standard.)
+If statements with a single line body go on the same line if the result
+fits in 80 columns, on a second line if it doesn't. We usually only use
+curly brackets if we need to, either because the body is multiple lines or

-- 
2.39.2