Mercurial > hg > kdocs: view master.idx @ 74:5b3c02758561
Title and author for OLS 2007 volume 2.
author: Rob Landley <rob@landley.net>
date: Tue, 16 Oct 2007 22:24:42 -0500
parents: 0c10e3aad7d2
children: baf5a8bc72a0
<html> <title>Linux Kernel Documentation</title> <body> <h2>Linux Kernel Documentation Index</h2> <p>This page collects and organizes documentation about the Linux kernel, taken from many different sources. What is the kernel, how do you build it, how do you use it, how do you change it...</p> <p>This is a work in progress, and probably always will be. Please let us know on the <a href=http://vger.kernel.org/vger-lists.html#linux-doc>linux-doc</a> mailing list (on vger.kernel.org) about any documentation you'd like added to this index, and feel free to ask about any topics that aren't covered here yet. This index is maintained by Rob Landley <rob@landley.net>, and tracked in <a href=http://landley.net/hg/kdocs>this mercurial repository</a>. The canonical location for the page is <a href=http://kernel.org/doc>here</a>.</p> <hr> <put_index_here> <hr> <span id="Sources of documentation"> <p>These are various upstream sources of documentation, many of which are linked into the <a href=http://kernel.org/doc>linux kernel documentation index</a>.</p> <ul> <li><a href=Documentation>Text files in the kernel's Documentation directory.</a></li> <li><a href=htmldocs>Output of kernel's "make htmldocs".</a></li> <li><a href=menuconfig>Menuconfig help</a></li> <li><a href=readme>Linux kernel README files</a></li> <li><a href=xmlman>html version of man-pages package</a></li> <li><a href=http://lwn.net/Kernel/Index/>Linux Weekly News kernel articles</a></li> <li>Linux Device Drivers book (<a href=http://lwn.net/Kernel/LDD3/>third edition</a>) (<a href=http://www.xml.com/ldd/chapter/book/>second edition</a>)</li> <li><a href=ols>Ottawa Linux Symposium papers</a></li> <li><a href=http://www.linuxjournal.com/xstatic/magazine/archives>Linux Journal archives</a></li> <li><a href=http://www.ibm.com/developerworks/views/linux/library.jsp>IBM Developerworks Linux Library</a> (also <a href=http://www.ibm.com/developerworks/linux/library/l-linux-kernel/>here</a>) </li> <li><a 
href=http://www.tux.org/lkml/>Linux Kernel Mailing List FAQ</a></li> <li><a href=http://kernelplanet.org>Kernel Planet (blog aggregator)</a></li> <li><a href=video.html>Selected videos of interest</a></li> <li><a href=local>Some locally produced docs</a></li> </ul> <span id="Standards"> <ul> <li><a href=http://www.opengroup.org/onlinepubs/009695399/>Single Unix Specification v3</a> (Also known as Open Group Base Specifications issue 6, and closely overlapping with Posix. See especially <a href=http://www.opengroup.org/onlinepubs/009695399/idx/xsh.html>system interfaces</a>)</li> <li><a href=http://www.open-std.org/jtc1/sc22/wg14/www/standards>ISO/IEC 9899</a>, the "C99" standard, defining the C programming language.</li> <li><a href=http://www.linux-foundation.org/spec/refspecs/>Linux Foundation's specs page</a> (ELF, Dwarf, ABI...)</li> </ul> </span id="Standards"> <span id="Translations"> <ul> <li><a href=http://tlktp.sourceforge.net/>Linux Kernel Translation Project</a></li> <li><a href=http://kernelnewbies.org/RegionalNewbies>Kernel Newbies regional pages</a></li> <li><a href=http://www.linux.or.jp/JF/index.html>Japanese</a></li> <li><a href=http://zh-kernel.org/docs>Chinese</a></li> </ul> </span id="Translations"> </span id="Sources of documentation"> <span id="Building from source"> <span id="User interface"> <span id="Configuring"> </span> <span id="building"> <span id="Building out of tree"> </span> </span> <span id="Installing"> </span> <span id="running"> </span> <span id="debugging"> <span id="QEMU"> </span> </span> <span id="cross compiling"> <span id="Cross compiling vs native compiling"> <p>By default, Linux builds for the same architecture the host system is running. This is called "native compiling". An x86 system building an x86 kernel, x86-64 building x86-64, or powerpc building powerpc are all examples of native compiling.</p> <p>Building different binaries than the host runs is called cross compiling. 
<a href=http://landley.net/writing/docs/cross-compiling.html>Cross compiling is hard</a>. The build system for the Linux kernel supports cross compiling via a two-step process: 1) Specify a different architecture (ARCH) during the configure, make, and install stages. 2) Supply a cross compiler (CROSS_COMPILE) which can output the correct kind of binary code. An example cross compile command line (building the "arm" architecture) looks like:</p> <blockquote> <pre>make ARCH=arm menuconfig
make ARCH=arm CROSS_COMPILE=armv5l-</pre> </blockquote> <p>To specify a different architecture than the host, either define the "ARCH" environment variable or else add "ARCH=xxx" to the make command line for each of the make config, make, and make install stages. The acceptable values for ARCH are the names of the directories in the "arch" subdirectory of the Linux kernel source code; see <a href="#Architectures">Architectures</a> for details. All stages of the build must use the same ARCH value, and building a second architecture in the same source directory requires "make distclean". (Just "make clean" isn't sufficient, things like the include/asm symlink need to be removed and recreated.)</p> <p>To specify a cross compiler prefix, define the CROSS_COMPILE environment variable (or add CROSS_COMPILE= to each make command line). Native compiler tools, which output code aimed at the environment they're running in, usually have a simple name ("gcc", "ld", "strip"). Cross compilers usually add a prefix to the name of each tool, indicating the target they produce code for. To tell the Linux kernel build to use a cross compiler named "armv4l-gcc" (and corresponding "armv4l-ld" and "armv4l-strip") specify "CROSS_COMPILE=armv4l-". (Prefixes ending in a dash are common, and forgetting the trailing dash in CROSS_COMPILE is a common mistake. 
Don't forget to add the cross compiler tools to your $PATH.)</p> </span> <span id="User Mode Linux"> </span> </span> </span> <span id="Infrastructure"> <span id="kconfig"> </span> <span id="kbuild"> </span> <span id="build and link (tmppiggy)"> </span> </span> </span> <span id="Installing and using the kernel"> <span id="Installing"> <span id="Kernel image"> </span> <span id="Bootloader"> </span> </span> <span id="A working Linux root filesystem"> <span id="Finding and mounting /"> <span id="initramfs, switch_root vs pivot_root, /dev/console"> </span> </span> <span id="Running programs"> <span id="init program and PID 1"> <span id="What does daemonizing really mean?"> </span> </span> <span id="Executable formats"> <p>The Linux kernel runs programs in response to the <a href=xmlman/man3/exec.html>exec</a> syscall, which is called on a file. This file must have the executable bit set, and must be on a filesystem that implements mmap() and isn't mounted with the "noexec" option. The kernel understands several different <a href="#executable_file_formats">executable file formats</a>, the most common of which are shell scripts and ELF binaries.</p> <span id="Shell scripts"> <p>If the first two bytes of an executable file are the characters "#!", the file is treated as a script file. The kernel parses the first line of the file (up to the first newline), and the first argument (following the #!, after any optional spaces or tabs) is used as the absolute path to the script's interpreter, which must itself be an executable file. Anything else on that first line is passed to the interpreter as a single additional argument, embedded whitespace and all. 
The interpreter's next argument is the name of the script file, followed by the arguments given on the command line.</p> <p>To see this behavior in action, run the following:</p> <blockquote> <pre>echo "#!/bin/echo hello" > temp
chmod +x temp
./temp one two three</pre> </blockquote> <p>The result should be:</p> <blockquote>hello ./temp one two three</blockquote> <p>This is how shell scripts, perl, python, and other scripting languages work. Even C code can be run as a script by installing the <a href=http://en.wikipedia.org/wiki/Tiny_C_Compiler>tinycc</a> package, adding "#!/usr/bin/tcc -run" to the start of the .c file, and setting the executable bit on the .c file.</p> </span> <span id="ELF"> <span id="Shared libraries"> </span> </span> </span> <span id="C library"> <p>Most userspace programs access operating system functionality through a C library, usually installed at "/lib/libc.so.*". The C library wraps system calls, and provides implementations of various standard functions.</p> <p>Because almost all other programming languages are implemented in C (including python, perl, php, java, javascript, ruby, flash, and just about everything else), programs written in other languages also make use of the C library to access operating system services.</p> <p>The most common C library implementations for Linux are <a href=http://www.linuxfromscratch.org/lfs/view/6.2/chapter06/glibc.html>glibc</a> and <a href=http://uClibc.org>uClibc</a>. Both are full-featured implementations capable of supporting a full-featured desktop Linux distribution.</p> <p>The main advantage of glibc is that it's the standard implementation used by the largest desktop and server distributions, and has more features than any other implementation. The main advantage of uClibc is that it's much smaller and simpler than glibc while still implementing almost all the same functionality. 
For comparison, a "hello world" program statically linked against glibc is half a megabyte when stripped, while the same program statically linked against uClibc strips down to 7k.</p> <p>Other commonly used special-purpose C library implementations include <a href=http://en.wikipedia.org/wiki/Klibc>klibc</a> and <a href=http://www.sourceware.org/newlib/>newlib</a>.</p> <span id="Exporting kernel headers"> <p>Building a C library from source code requires a special set of Linux kernel header files, which describe the API of the specific version of the Linux kernel the C library will interface with. However, the header files in the kernel source code are designed to build the kernel and contain a lot of internal information that would only confuse userspace. These kernel headers must be "exported", filtering them for use by user space.</p> <p>Modern Linux kernels (based on 2.6.19.1 and newer) export kernel headers via the "make headers_install" command. See <a href=local/headers_install.txt>exporting kernel headers for use by userspace</a> for more information.</p> </span> </span> <span id="Dynamic loader"> </span> </span> <span id="FHS directories"> <p>FHS spec</p> <a href="pending/hotplug.txt">populating /dev from sysfs</a>. </span> </span> </span> <span id="Reading the source code"> <span id="Source code layout"> <span id="Following the boot process"> </span> <span id="Major subsystems"> </span> <span id="Architectures"> </span> </span> <span id="Concept vs implementation"> <p>Often the first implementation of a concept gets replaced. Journaling != reiserfs, virtualization != xen, devfs gave way to udev... Don't let your excitement for the concept blind you to the possibility of alternate implementations.</p> </span> <span id="Concepts"> <span id="rbtree"> </span> <span id="rcu"> <p>RCU stands for "Read Copy Update". 
The technique is a lockless way to manage data structures (such as linked lists or trees) on SMP systems, using a specific sequence of reads and updates, plus a garbage collection step, to avoid the need for locks in both the read and the update paths.</p> <p>RCU was invented by Paul McKenney, who maintains an excellent page of <a href=http://www.rdrop.com/users/paulmck/RCU/>RCU documentation</a>. The Linux kernel also contains some <a href=Documentation/RCU>additional RCU Documentation</a>.</p> </span> </span> </span> <span id="Kernel infrastructure"> <span id="Process Scheduler"> <span id="History of the Linux Process Scheduler"> <p>The original Linux process scheduler was a simple design based on a goodness() function that recalculated the priority of every task at every context switch, to find the next task to switch to. This served almost unchanged through the 2.4 series, but didn't scale to large numbers of processes, nor to SMP. By 2001 there were calls for change (such as <a href=ols/2001/elss.pdf>this OLS paper</a>), and the issue <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20020107_149.html#1>came to a head</a> in December 2001.</p> <p>In January 2002, Ingo Molnar <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20020114_150.html#4>introduced the "O(1)" process scheduler</a> for the 2.5 kernel series, a design based on separate "active" and "expired" arrays, one per processor. As the name implied, this found the next task to switch to in constant time no matter how many processes the system was running.</p> <p>Other developers (<a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20020513_166.html#4>such as Con Colivas</a>) started working on it, and began a period of extensive scheduler development. 
The early history of Linux O(1) scheduler development was covered by the website Kernel Traffic.</p> <p>During 2002 this work included <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20020121_151.html#8>preemption</a>, <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20020121_151.html#9>User Mode Linux support</a>, <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20020211_153.html#2>new drops</a>, <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20020211_153.html#7>runtime tuning</a>, <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20020304_156.html#6>NUMA support</a>, <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20020429_164.html#4>cpu affinity</a>, <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20020617_171.html#4>scheduler hints</a>, <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20020701_173.html#1>64-bit support</a>, <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20020715_175.html#5>backports to the 2.4 kernel</a>, <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20020715_175.html#4>SCHED_IDLE</a>, discussion of <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20020729_177.html#1>gang scheduling</a>, <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20021014_188.html#4>more NUMA</a>, and <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20021118_192.html#9>even more NUMA</a>. 
By the end of 2002, the O(1) scheduler was becoming the standard <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20021223_197.html#1>even in the 2.4 series</a>.</p> <p>2003 saw support added for <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20030124_202.html#14>hyperthreading as a NUMA variant</a>, <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20030330_211.html#3>interactivity bugfix</a>, <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20030616_219.html#4>starvation and affinity bugfixes</a>, <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20030616_219.html#8>more NUMA improvements</a>, <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20030811_227.html#2>interactivity improvements</a>, <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20030811_227.html#8>even more NUMA improvements</a>, a proposal for <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20031026_237.html#7>Variable Scheduling Timeouts</a> (the first rumblings of what would later come to be called "dynamic ticks"), <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20031201_243.html#10>more on hyperthreading</a>...</p> <p>In 2004 there was work on <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20040120_248.html#2>load balancing and priority handling</a>, and <a href=http://mirell.org/kernel-traffic/kernel-traffic/kt20040212_252.html#5>still more work on hyperthreading</a>...</p> <p>In 2004 developers proposed several extensive changes to the O(1) scheduler. Linux Weekly News wrote about Nick Piggin's <a href=http://lwn.net/Articles/80911/>domain-based scheduler</a> and Con Colivas' <a href=http://lwn.net/Articles/87729/>staircase scheduler</a>. The follow-up article <a href=http://lwn.net/Articles/96554/>Scheduler tweaks get serious</a> covers both. 
Nick's scheduling domains were merged into the 2.6 series.</p> <p>Linux Weekly News also wrote about other scheduler work:</p> <ul> <li><a href=http://lwn.net/Articles/83633/>Filtered wakeups</a></li> <li><a href=http://lwn.net/Articles/105366/>When should a process be migrated</a></li> <li><a href=http://lwn.net/Articles/109458/>Pluggable and realtime schedulers</a></li> <li><a href=http://lwn.net/Articles/120797/>Low latency for audio applications</a></li> <li><a href=http://lwn.net/Articles/176635/>Solving starvation problems in the scheduler</a></li> <li><a href=http://lwn.net/Articles/186438/>SMPnice</a></li> </ul> <p>In 2007, Con Colivas proposed a new scheduler, <a href=http://lwn.net/Articles/224865/>The Rotating Staircase Deadline Scheduler</a>, which <a href=http://lwn.net/Articles/226054/>hit a snag</a>. Ingo Molnar came up with a new scheduler, which he named the <a href=http://lwn.net/Articles/230501/>Completely Fair Scheduler</a>, described in the LWN writeups <a href=http://lwn.net/Articles/230574/>Schedulers: the plot thickens</a>, <a href=http://lwn.net/Articles/231672/>this week in the scheduling discussion</a>, and <a href=http://lwn.net/Articles/240474/>CFS group scheduling</a>.</p> <p>The CFS scheduler was merged into 2.6.23.</p> </span> <span id="fork, exec"> </span> <span id="sleep"> </span> </span> <span id="Timers"> <span id="Interrupt handling"> </span> </span> <span id="memory management"> <ul> <li><a href="gorman">Understanding the Linux Virtual Memory Manager</a>, by Mel Gorman.</li> <li><a href=http://lwn.net/Articles/250967/>What every programmer should know about memory</a> by Ulrich Drepper.</li> <li>Ars Technica RAM guide, parts <a href=http://arstechnica.com/paedia/r/ram_guide/ram_guide.part1-1.html>one</a>, <a href=http://arstechnica.com/paedia/r/ram_guide/ram_guide.part2-1.html>two</a>, <a href=http://arstechnica.com/paedia/r/ram_guide/ram_guide.part3-1.html>three</a></li> </ul> <span id="mmap, DMA"> </span> </span> <span id="vfs"> 
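<p>A quick way to poke at the VFS from userspace (a minimal sketch; the exact output depends on which filesystems your kernel was configured with):</p>

```shell
# Filesystem types the running kernel has registered. The "nodev"
# marker flags synthetic and ram-backed filesystems that need no
# backing block device (proc, sysfs, tmpfs, and friends).
cat /proc/filesystems

# Currently mounted instances, with their types and mount options.
cat /proc/mounts
```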
<span id="Pipes, files, and ttys"> <p>A pipe can be read from or written to, transmitting a sequence of bytes in order.</p> <p>A file can do what a pipe can, and adds the ability to seek to a location, query the current location, and query the length of the file (all of which are an integer number of bytes from the beginning of the file).</p> <p>A tty can do what a pipe can, and adds a speed (in bits per second) and cursor location (X and Y, with the upper left corner at 0,0). Oh, and you can make it go beep.</p> <p>Note that you can't call lseek() on a tty and you can't call termios (man 3 termios) functions on a file. Each can be treated as a pipe.</p> </span> <span id="Filesystems"> <span id="Types of filesystems (see /proc/filesystems)"> <span id="Block backed"> ols/2001/jffs2.pdf </span> <span id="Ram backed"> <span id="ramfs"> </span> <span id="tmpfs"> </span> </span> <span id="Synthetic"> <span id="proc"> </span> <span id="sys"> </span> <span id="internal (pipefs)"> </span> <span id="usbfs"> http://www.linux-usb.org/USB-guide/x173.html http://www.linux-usb.org/USB-guide/c607.html http://www.linuxjournal.com/comment/reply/7466 </span> <span id="devpts"> </span> <span id="rootfs"> </span> <span id="devfs (obsolete)"> <p>Devfs was the first attempt to do a dynamic /dev directory which could change in response to hotpluggable hardware, by doing the seemingly obvious thing of creating a kernel filesystem to mount on /dev which would adjust itself as the kernel detected changes in the available hardware.</p> <p>Devfs was an interesting learning experience, but turned out to be the wrong approach, and was replaced by sysfs and udev. Devfs was removed in kernel version 2.6.18. 
See <a href=local/hotplug-history.html>the history of hotplug</a> for details.</p> </span> </span> <span id="Network"> <span id="nfs"> </span> <span id="smb/cifs"> </span> <span id="FUSE"> </span> </span> </span> <span id="Filesystem drivers"> <span id="Using"> </span> <span id="Writing"> </span> </span> </span> </span> <span id="Drivers"> <span id="Filesystem"> </span> <span id="Block (block layer, scsi layer)"> <span id="SCSI layer"> <ul> <li><a href="Documentation/scsi">Documentation/scsi</a> scsi.txt scsi_mid_low_api.txt scsi-generic.txt scsi_eh.txt</li> <li><a href="http://sg.torque.net/sg/p/sg_v3_ho.html">SCSI Generic (sg) HOWTO</a></li> <li><a href="xmlman/man4/sd.html">man 4 sd</a></li> <li><a href="http://www.t10.org/scsi-3.htm">SCSI standards</a></li> </ul> </span> </span> <span id="Character"> <span id="serial"> </span> <span id="keyboard"> </span> <span id="tty"> <span id="pty"> </span> </span> <span id="audio"> </span> <span id="null"> </span> <span id="random/urandom"> </span> <span id="zero"> </span> </span> <span id="DRI"> </span> <span id="Network"> </span> </span> <span id="Hotplug"> http://kernel.org/ols/2001/hotplug.pdf local/hotplug-history.html </span> <span id="Input core"> </span> <span id="Network"> <pre>
physical
plip
serial/slip/ppp
ethernet
routing
ipv4
ipv6
ols/2001/mipl.pdf
</pre> </span> <span id="Modules"> <span id="Exported symbols"> <p>EXPORT_SYMBOL() vs EXPORT_SYMBOL_GPL()</p> <p>List of exported symbols.</p> </span> </span> <span id="Busses"> </span> <span id="Security"> <span id="Traditional Unix security model"> Users, groups, files (rwx), signals. 
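<p>The pieces of the traditional model can be demonstrated in a few shell commands (a minimal sketch; the filename is arbitrary):</p>

```shell
# Permission bits: read/write/execute for the file's user (owner),
# its group, and everyone else, set here as octal 640 = rw- r-- ---.
touch demo.txt
chmod 640 demo.txt
ls -l demo.txt    # first column shows -rw-r-----

# Signals: the traditional way to control processes. Start a
# background process, then terminate it with SIGTERM.
sleep 100 &
kill -TERM $!
```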
</span> <span id="Bolt-on paranoia"> <p>The traditional Unix security model is too simple to satisfy the certification requirements of large corporate and governmental organizations, so several add-on security models have been implemented to increase complexity.</p> <span id="Posix capabilities"> http://www.gentoo.org/proj/en/hardened/capabilities.xml </span> <span id="SELinux"> </span> </span> </span> <span id="API (how userspace talks to the kernel)"> <span id="Syscalls"> </span> <span id="ioctls"> </span> <span id="executable file formats"> <span id="a.out"> </span> <span id="elf"> <span id="css, bss, etc."> </span> </span> <span id="scripts"> </span> <span id="flat"> </span> <span id="misc"> </span> </span> <span id="Device nodes"> </span> <span id="Pipes (new pipe infrastructure)"> </span> <span id="Synthetic filesystems (as API)"> </span> </span> </span> <span id="Hardware"> <span id="Architectures"> <pre>
alpha
arm
avr32
blackfin
cris
frv
h8300
i386
ia64
m32r
m68k
m68knommu
mips
parisc
powerpc ols/2001/iseries.pdf
ppc
s390
sh
sh64
sparc
sparc64
um
v850
x86_64
xtensa
include/asm-generic
uml
</pre> </span> <span id="DMA, IRQ, MMU (mmap), IOMMU, port I/O"> </span> <span id="Busses"> <span id="PCI, USB"> http://www.linux-usb.org/USB-guide/book1.html Documentation/usb </span> </span> </span> <span id="Following Linux development"> <span id="Distributions"> </span> <span id="Releases"> <span id="Source control"> </span> </span> <span id="community"> <pre>
CATB
http://vger.kernel.org/vger-lists.html
http://www.tux.org/lkml/
lwn, kernel traffic, kernelplanet.
http://www.kernel.org/faq
http://www.kernel.org/kdist/rss.xml
git/mercurial
Documentation/{CodingStyle,SubmitChecklist}
The four layer (developer, maintainer, subsystem, linus) model.
Politics
Stable API nonsense
Why reiser4 not in.
</pre> </span id="community"> <span id="Submitting Patches"> </span> </span> <span id="Glossary"> </span> </body> </html>