Control Images

These are squashfs images that plug into Aboriginal Linux to drive an automated software build. These images are target-independent and self-contained: they include source code and build control scripts, and upload the result to the host via FTP through the virtual network.

Development discussion goes on the Aboriginal Linux mailing list.

How to use

Each system image tarball includes a "./native-build.sh" script which expects a control image as its argument. The results are uploaded through the virtual network into an "upload" subdirectory under the system image directory.

This attaches the control image as the third virtual hard drive provided to QEMU (the first is the root filesystem, the second is a sparse 2 gigabyte ext3 filesystem created by native-build.sh and mounted on /home to provide writeable space for the build). The Aboriginal Linux init script (sbin/init.sh) will mount this /dev/hdc on /mnt and check for an executable "/mnt/init". If it finds one, it runs that instead of launching a shell prompt. (It pauses for three seconds so the user can press a key to get a shell prompt anyway.) When the script exits, the system image shuts down, causing QEMU to exit.
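
Under the hood, this amounts to a QEMU invocation along the following lines (an illustrative i686 sketch; the real launch scripts compute target-specific flags, and the filenames here are examples):

qemu-system-i386 -nographic -kernel zImage-i686 \
  -hda image-i686.ext2 -hdb hdb.img -hdc ../static-tools.hdc \
  -append "root=/dev/hda rw init=/sbin/init.sh console=ttyS0"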

The /mnt/init script can use busybox ftpput to upload its results to the host, using the FTP_SERVER and FTP_PORT environment variables. (By default, native-build.sh runs the busybox FTP server on the host, writing into an "upload" subdirectory.)
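
For instance, a minimal /mnt/init along these lines would skip the build entirely and just upload a file (filenames are made up; note that busybox ftpput takes the remote name before the local one):

#!/bin/sh
# Do the real work in the writeable /home space.
cd /home
echo "hello from the target" > results.txt
# Upload to the FTP server the launch script runs on the host.
ftpput -P "$FTP_PORT" "$FTP_SERVER" results.txt results.txt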

Example usage:

cd system-image-armv5l
./native-build.sh ../static-tools.hdc
ls -l upload

Can I use this on real hardware?

Sure, grab a root-filesystem tarball, chroot into it (probably via sbin/init.sh to set up the mount points properly), mount the squashfs on /mnt, and run /mnt/init.

You might want to export values for "CPUS", "HOST", "FTP_SERVER", and "FTP_PORT", and maybe even set up distcc (Aboriginal Linux's dev-environment.sh shows an example distcc setup).
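
Putting that together, something like this should work (a sketch assuming an armv5l tarball and a reachable FTP server; adjust names and addresses to your setup):

tar xvjf root-filesystem-armv5l.tar.bz2
cd root-filesystem-armv5l
mount -o loop ../static-tools.hdc mnt
export CPUS=2 HOST=armv5l FTP_SERVER=192.168.1.1 FTP_PORT=21
chroot . /mnt/init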

Creating build images

Download the source code and run the top-level build script with no arguments to list the available build images (from the "images" directory). Run it with "all" as its argument to compile everything.
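
Assuming the top-level script is called build.sh (a guessed name; use whatever the source actually ships), usage looks like:

./build.sh                # list available images
./build.sh static-tools   # build one image
./build.sh all            # build everything into the build directory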

This build infrastructure is largely copied from Aboriginal Linux.

Designing your own build control images

The native-build.sh script attaches the control image as the third virtual hard drive provided to QEMU, hence the ".hdc" extension. The init script then mounts this on /mnt and runs /mnt/init if it exists, instead of starting a command shell. From there, the script can do anything it needs to.

The build-time environment provides the "CPUS", "HOST", "FTP_SERVER", and "FTP_PORT" variables described above.

Creating your own build control image means creating a squashfs with what you want to appear on /mnt. It must contain at least an executable "init" file.
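
As a sketch, a trivial control image can be assembled by hand with mksquashfs (all names here are arbitrary):

mkdir -p hello
cat > hello/init << 'EOF'
#!/bin/sh
echo "hello from the control image"
EOF
chmod +x hello/init
mksquashfs hello hello.hdc -noappend -all-root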

The basic control image build infrastructure

The "common" directory contains common code available to each build script. The top level "" file pulls in the common definitions from "common/", calls the "images/$NAME/" for the image in question, and then makes a squashfs of the result. Control images are created by running "./ imagename", or "./ all" to iterate through all available images/*/ files. The output should show up in the "build" directory.

The source code for control images lives under the "images" directory, each in its own subdirectory. Each one provides a build script under "images/$NAME" which populates a "$TOP/build/$NAME" directory; that directory should contain an executable file called "init" plus any other files the init program requires.

You can write your own image build script, which can put anything it wants into the resulting image. (It doesn't have to perform a native build; it can launch a server or something. Up to you.)
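
A hand-rolled image build script might look something like this (a sketch following the layout described above, with guessed filenames):

# Hypothetical images/hello build script: populate $TOP/build/hello.
# The top-level script turns that directory into a squashfs afterward.
mkdir -p "$TOP/build/hello"
cp images/hello/init "$TOP/build/hello/init"
chmod +x "$TOP/build/hello/init"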

The bootstrap infrastructure

The other option is to use the "bootstrap" infrastructure by linking an image's build script to the common bootstrap script. This copies the common bootstrap code into the resulting image, calls the image's own setup script to add additional package source, and copies additional files (i.e. build scripts) from the "images/$NAME/mnt" directory. This can use the same "download" function as Aboriginal Linux (there's a copy under the "include" directory).
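
The wiring might look something like this (every filename below is a guess at the layout the text describes, not a verified one):

ln -s ../../common/bootstrap.sh images/mybuild/build.sh
mkdir -p images/mybuild/mnt
echo busybox > images/mybuild/mnt/package-list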

A bootstrap image builds packages and installs them into the system image, first copying the system to a writeable /home/chroot if necessary using setup-chroot. Note that the control image is --bind mounted into the new chroot, and thus remains read-only. The bootstrap script in /mnt iterates through /mnt/package-list and calls the per-package build script on each name listed, in order. (It keeps a list of the packages it has built in /usr/src/manifest, so it can skip rebuilding successfully installed packages if interactively stopped and restarted.)
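
In shell terms, that loop amounts to something like the following (a sketch; "build-one-package.sh" is a hypothetical stand-in for the actual per-package script):

#!/bin/sh
# Build each listed package once, recording successes in the manifest.
for PACKAGE in $(cat /mnt/package-list)
do
  grep -qx "$PACKAGE" /usr/src/manifest 2>/dev/null && continue
  /mnt/build-one-package.sh "$PACKAGE" || exit 1
  echo "$PACKAGE" >> /usr/src/manifest
done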

The per-package build script calls build scripts out of /mnt/build, which can end in different extensions indicating how to set up the writeable build directory.

When the per-package builds finish, /mnt/init uploads a tarball of the resulting chroot filesystem. The intended use is to convert the minimal native development environment into a distribution environment for use with an existing distribution's package manager and source repository.