changeset 124:41ca6e4a8b6d default tip

Add Ottawa Linux Symposium 2011 and 2012 index pages.
author Rob Landley <>
date Fri, 26 Jul 2013 15:23:43 -0500
parents afcc37151224
files ols/2011/index.html ols/2012/index.html ols/index.html
diffstat 3 files changed, 447 insertions(+), 2 deletions(-)
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/ols/2011/index.html	Fri Jul 26 15:23:43 2013 -0500
@@ -0,0 +1,218 @@
+<title>Ottawa Linux Symposium (OLS) papers for 2011</title>
+<p>Ottawa Linux Symposium (OLS) Papers for 2011:</p>
+<hr><h2><a href="ols2011-gadre.pdf">X-XEN : Huge Page Support in Xen</a> - A.&nbsp;Gadre, K.&nbsp;Kabra, A.&nbsp;Vasani, K.&nbsp;Darak</h2>
+<p>Huge pages are memory pages of size 2MB (x86-PAE and x86_64). The
+number of page walks required to translate a virtual address to a
+physical 2MB page is smaller than the number required to translate a
+virtual address to a physical 4kB page. Also, the number of TLB entries
+needed per 2MB chunk of memory is reduced by a factor of 512 compared
+to 4kB pages. In this way huge pages improve the performance of
+applications that perform memory-intensive operations. In the context
+of virtualization, i.e. the Xen hypervisor, we propose a design and
+implementation to support huge pages for paravirtualized guests.
+<p>Our design reserves 2MB pages (MFNs) from the domain's committed memory
+according to a configuration specified before the domain boots. The rest
+of the memory continues to be used as 4kB pages. Thus the availability of
+huge pages is guaranteed, and actual physical huge pages can be provided
+to the paravirtualized domain. This improves the performance of
+applications hosted on the guest operating system that require huge
+page support. The design solves the problem of finding an available 2MB
+chunk in the guest's (virtualized) physical address space as well as in
+Xen's physical address space, where one might otherwise be unavailable
+due to fragmentation.
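<p>The factor-of-512 claim above follows directly from the two page sizes; a minimal sketch of the arithmetic (constants are the x86-64 sizes cited in the abstract):</p>

```python
# Page sizes cited in the abstract (x86-64 / x86-PAE).
SMALL_PAGE = 4 * 1024          # 4 kB base page
HUGE_PAGE = 2 * 1024 * 1024    # 2 MB huge page

# Covering one 2 MB region takes HUGE_PAGE // SMALL_PAGE base-page
# TLB entries, but only a single huge-page entry.
factor = HUGE_PAGE // SMALL_PAGE
print(f"TLB entries saved per 2 MB chunk: {factor}x")  # 512x
```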
+<hr><h2><a href="ols2011-lim.pdf">NPTL Optimization for Lightweight Embedded Devices</a> - Geunsik Lim, Hyun-Jin Choi, Sang-Bum Suh</h2>
+<p>One of the main changes in the current Linux kernel is that the thread model has moved from LinuxThreads to NPTL for scalability and high performance. Each user-space thread is mapped to one kernel thread (a 1:1 model), which makes thread creation and termination fast. Threads within a single process are managed and scheduled directly by the kernel to take advantage of multiprocessor hardware: in a multi-processor system, each thread can run simultaneously on a different CPU. In addition, blocked system services do not delay other threads; even if one thread calls a blocking system call, the other threads are not blocked.</p>
+<p>NPTL on Linux 2.6 dramatically improved server and desktop workloads compared to Linux 2.4. However, embedded systems such as DTVs and mobile phones are extremely limited in physical CPU and memory resources, and NPTL lacks some features that are effective and suitable for embedded environments: for example, control of a thread's stack size, enforced/arbitrary thread priority manipulation in a non-preemptive kernel, and thread naming to make each thread's essential role clear.</p>
+<p>This paper describes a lightweight NPTL (Native POSIX Threads Library) optimized to run effectively on embedded systems.
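<p>Two of the embedded concerns listed above, per-thread stack size and thread naming, have direct user-visible analogues; a minimal illustration using Python's threading module, which wraps NPTL threads on Linux (the thread name is illustrative):</p>

```python
import threading

# Request a smaller per-thread stack before creating threads --
# on an NPTL system this maps down to pthread_attr_setstacksize().
threading.stack_size(512 * 1024)  # 512 kB instead of the multi-MB default

results = {}

def worker():
    # Naming a thread documents its essential role, one of the
    # embedded-friendly features the paper argues for.
    results[threading.current_thread().name] = True

t = threading.Thread(target=worker, name="sensor-poll")
t.start()
t.join()
print(results)  # {'sensor-poll': True}
```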
+<hr><h2><a href="ols2011-suzaki.pdf">Analysis of Disk Access Patterns on File Systems for Content Addressable Storage</a> - Kuniyasu Suzaki, Kengo Iijima, Toshiki Yagi, Cyrille Artho</h2>
+<p>CAS (Content Addressable Storage) is a virtual disk with deduplication, which merges same-content chunks and reduces the consumption of physical storage. The performance of CAS depends on the allocation strategy of the individual file system and its access patterns (size, frequency, and locality of reference), since the effect of merging depends on the size of the chunk (access unit) used in deduplication.
+We propose a method to evaluate the affinity between a file system and CAS, which compares the degree of deduplication achieved by storing many same-content files throughout a file system. The results show the affinity and semantic gap between the file systems (ext3, ext4, XFS, JFS, ReiserFS (the bootable file systems), NILFS, btrfs, FAT32 and NTFS) and CAS.</p>
+<p>We also measured disk accesses through the five bootable file systems at installation (Ubuntu 10.10) and at boot time, and found a variety of access patterns, even though the same contents were installed. The results indicate that the five file systems scatter data from a macroscopic view, but keep block contiguity for data from a microscopic view.
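<p>The evaluation idea — store identical content and measure how well fixed-size chunks deduplicate — can be sketched in a few lines (the chunk size and the in-memory "image" are illustrative; a real CAS hashes disk-image chunks, so a file system that scatters or pads the same files would score lower):</p>

```python
import hashlib

CHUNK = 4096  # illustrative CAS chunk (access unit) size

def dedup_ratio(image: bytes) -> float:
    """Fraction of physical space saved by merging same-content chunks."""
    chunks = [image[i:i + CHUNK] for i in range(0, len(image), CHUNK)]
    unique = {hashlib.sha256(c).digest() for c in chunks}
    return 1 - len(unique) / len(chunks)

# Ten identical chunk-aligned "files" dedup almost entirely:
# 10 chunks collapse to 1, saving 90% of the space.
image = bytes(CHUNK) * 10
print(f"space saved: {dedup_ratio(image):.0%}")
```

Note that savings depend on alignment: the same bytes shifted off a chunk boundary hash differently, which is exactly the file-system/CAS affinity the paper measures.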
+<hr><h2><a href="ols2011-lissy1.pdf">Verifications around the Linux kernel</a> - A.&nbsp;Lissy, S.&nbsp;Lauri&egrave;re, P.&nbsp;Martineau</h2>
+<p>Ensuring software safety has always been necessary, whether you are
+designing an on-board aircraft computer or a next-gen mobile phone, even
+if the purpose of the verification differs in the two cases. We survey
+the current state of the art of verification work around the Linux
+kernel, and by extension also present what has been done on other
+kernels. We conclude with future needs that must be addressed, and
+possible avenues of improvement.
+<hr><h2><a href="ols2011-lissy2.pdf">Faults in Patched Kernel</a> - A.&nbsp;Lissy, S.&nbsp;Lauri&egrave;re, P.&nbsp;Martineau</h2>
+<p>Tools such as Coccinelle, Sparse, and Undertaker have been designed to
+detect faults in the Linux kernel, and studies of their results over the
+vanilla tree have been published. We are interested in a specific point:
+since Linux distributions patch the kernel (as they do other software),
+and since those patches might target less common use cases, the result
+may be a lower quality-assurance level and fewer bugs found. So we ask
+ourselves: is there any difference between upstream and distribution
+kernels from a faults point of view? We present an existing tool,
+Undertaker, and detail a methodology for reliably counting bugs in
+patched and non-patched kernel source code, applied to vanilla and
+distribution kernels (Debian, Mandriva, openSUSE). We show that the
+difference is negligible, and in favor of patched kernels.
+<hr><h2><a href="ols2011-mitake.pdf">Towards Co-existing of Linux and Real-Time OSes</a> - H.&nbsp;Mitake, T-H.&nbsp;Lin, H.&nbsp;Shimada, Y.&nbsp;Kinebuchi, N.&nbsp;Li, T.&nbsp;Nakajima</h2>
+<p>The capability of real-time resource management in the Linux kernel is
+dramatically improving due to the effective contribution of the real-time Linux
+community. However, to develop commercial products cost-effectively,
+it must be possible to re-use existing real-time applications from
+other real-time OSes whose OS API differs significantly from the POSIX
+interface. A virtual machine monitor that executes multiple operating
+systems simultaneously is a promising solution, but existing virtual
+machine monitors such as Xen and KVM are hard to use for embedded
+systems due to their complexity and throughput-oriented designs.
+In this paper, we introduce a lightweight processor abstraction layer
+named SPUMONE. SPUMONE provides virtual CPUs (vCPUs) for respective guest OSes,
+and schedules them according to their priorities. In a typical case,
+SPUMONE schedules Linux with a low priority and an RTOS with a high priority. The
+important features of SPUMONE are the exploitation of an interrupt
+prioritizing mechanism and a vCPU migration mechanism that
+improves real-time capabilities in order to make the virtualization layer
+more suitable for embedded systems. We also discuss why the
+traditional virtual machine monitor design is not appropriate for
+embedded systems, and how the features of SPUMONE allow us to design
+modern complex embedded systems with less effort.
+<hr><h2><a href="ols2011-vasavada.pdf"> Comparing different approaches for Incremental Checkpointing: The Showdown </a> - M.&nbsp;Vasavada, F.&nbsp;Mueller, P.&nbsp;Hargrove, E.&nbsp;Roman</h2>
+<p>The rapid increase in the number of cores and nodes in high
+performance computing (HPC) has made petascale computing a reality
+with exascale on the horizon. Harnessing such computational power
+presents a challenge as system reliability deteriorates with the
+increase of building components of a given single-unit
+reliability. Today's high-end HPC installations require applications
+to perform checkpointing if they want to run at scale so that failures
+during runs over hours or days can be dealt with by restarting from
+the last checkpoint. Yet, such checkpointing results in high overheads
+due to often simultaneous writes of all nodes to the parallel file
+system (PFS), which reduces the productivity of such systems in terms of
+throughput computing. Recent work on checkpoint/restart (C/R) has shown
+that incremental C/R techniques can reduce the amount of data written
+at checkpoints and thus the overall C/R overhead and impact on the PFS.</p>
+<p>The contributions of this work are twofold. First, it presents the
+design and implementation of two memory management schemes that enable
+incremental checkpointing. We describe unique approaches to
+incremental checkpointing that do not require kernel patching in one
+case and only require minimal kernel extensions in the other
+case. The work is carried out within the latest Berkeley Labs
+Checkpoint Restart (BLCR) as part of an upcoming release. Second, we
+evaluate the two schemes in terms of their system overhead for
+single-node microbenchmarks and multi-node cluster workloads. In
+short, this work is the final showdown between page write bit (WB) protection
+and dirty bit (DB) page tracking as hardware means to support incremental
+checkpointing.
+Our results show savings of the DB approach over the WB approach in almost
+all the tests. Further, DB has the potential for a significant reduction
+in kernel activity, which is of utmost relevance for proactive fault
+tolerance, where an imminent fault can be circumvented if a DB-based live
+migration moves a process away from hardware that is about to fail.
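<p>The difference between a full and an incremental checkpoint can be sketched at user level by hashing page-sized chunks and recording only those that changed — a toy stand-in for the WB/DB hardware mechanisms the paper compares (names and the page size are illustrative):</p>

```python
import hashlib

PAGE = 4096

def incremental_checkpoint(memory: bytes, last_digests: dict) -> list:
    """Return indices of pages that changed since the previous checkpoint."""
    dirty = []
    for off in range(0, len(memory), PAGE):
        digest = hashlib.sha256(memory[off:off + PAGE]).digest()
        if last_digests.get(off) != digest:   # page modified (or first run)
            dirty.append(off // PAGE)
            last_digests[off] = digest
    return dirty

mem = bytearray(PAGE * 8)
seen = {}
print(incremental_checkpoint(bytes(mem), seen))  # first checkpoint: all 8 pages
mem[PAGE * 3] = 1                                # dirty exactly one page
print(incremental_checkpoint(bytes(mem), seen))  # [3]
```

Only the second, incremental pass writes a small fraction of memory, which is the reduction in parallel-file-system traffic the abstract describes.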
+<hr><h2><a href="ols2011-clavis.pdf">User-level scheduling on NUMA multicore systems under Linux</a> - Sergey Blagodurov, Alexandra Fedorova</h2>
+<p>The problem of scheduling on multicore systems remains one of the hottest and most challenging topics in systems research. The introduction of non-uniform memory access (NUMA) multicore architectures further complicates this problem, as on NUMA systems the scheduler needs to consider not only the placement of threads on cores, but also the placement of memory. Hardware performance counters and hardware-supported instruction sampling, available on major CPU models, can help tackle the scheduling problem as they provide a wide variety of potentially useful information characterizing system behavior. The challenge, however, is to determine what information from counters is most useful for scheduling and how to properly obtain it at user level. </p>
+<p>In this paper we provide a brief overview of user-level scheduling techniques in Linux, discuss the types of hardware counter information that are most useful for scheduling, and demonstrate how this information can be used in an online user-level scheduler. The Clavis scheduler, created as a result of this research, is released as an open source project.
+<hr><h2><a href="ols2011-vallee.pdf">Management of Virtual Large-scale High-performance Computing Systems</a> - Geoffroy Vall&eacute;e, Thomas Naughton, Stephen L.&nbsp;Scott</h2>
+<p>Linux is widely used on high-performance computing (HPC) systems,
+from commodity clusters to Cray supercomputers (which run the Cray Linux
+Environment). These platforms primarily differ in their system configuration:
+some only use SSH to access compute nodes, whereas others employ full resource
+management systems (e.g., Torque and ALPS on Cray XT systems).
+Furthermore, the latest improvements in system-level virtualization techniques, 
+such as hardware support, virtual machine migration for system resilience
+purposes, and reduction of virtualization overheads, enable the usage of 
+virtual machines on HPC platforms.</p>
+<p>Currently, tools for the management of virtual machines in the
+context of HPC systems are still quite basic, and often tightly coupled to
+the target platform.
+In this document, we present a new system tool for the management of virtual
+machines in the context of large-scale HPC systems, including a run-time
+system and the support for all major virtualization solutions.
+The proposed solution is based on two key aspects.
+First, Virtual System Environments (VSE), introduced in a previous
+study, provide a flexible method to define the software environment that will
+be used within virtual machines. Secondly, we propose a new system run-time for
+the management and deployment of VSEs on HPC systems, which supports a wide
+range of system configurations. For instance, this generic run-time can
+interact with resource managers such as Torque for the management of virtual
+machines.</p>
+<p>Finally, the proposed solution provides appropriate abstractions to enable
+use with a variety of virtualization solutions on different Linux HPC
+platforms, to include Xen, KVM and the HPC oriented Palacios.</p>
+<hr><h2><a href="ols2011-grekhov.pdf">The Easy-Portable Method of Illegal Memory Access Errors Detection for Embedded Computing Systems</a> - Ekaterina Gorelkina, Alexey Gerenkov, Sergey Grekhov</h2>
+<p>Nowadays, applications on embedded systems are becoming more and more
+complex and require more effective debugging facilities, particularly for
+detecting memory access errors. Existing tools usually depend strongly on
+the processor architecture, which makes them difficult to use given the
+large variety of CPU types. In this paper an easily portable solution to
+the problem of detecting heap memory overflow errors is suggested. The
+proposed technique substitutes the standard allocation functions to create
+additional memory regions (so-called red zones) for detecting overflows,
+and intercepts the page-faulting mechanism to track memory accesses. Tests
+have shown that this approach detects illegal memory accesses in the heap
+with sufficient precision. Besides, it has a small processor-dependent
+part, which makes the method easily portable to embedded systems with
+their large variety of processor types.
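<p>The red-zone idea — pad each allocation with guard bytes and check them when the block is released — can be sketched portably (the page-fault interception described in the paper is replaced here by an explicit check on free; all names are illustrative):</p>

```python
REDZONE = b"\xde\xad\xbe\xef" * 4  # guard pattern placed after each allocation

heap = {}

def rz_alloc(handle, size):
    # Substitute allocator: reserve the payload plus a trailing red zone.
    heap[handle] = bytearray(size) + bytearray(REDZONE)
    return heap[handle]

def rz_free(handle):
    # On free, a damaged red zone reveals a past heap overflow.
    buf = heap.pop(handle)
    return bytes(buf[-len(REDZONE):]) == REDZONE  # True if intact

buf = rz_alloc("a", 32)
buf[35] = 0          # off-by-N write past the 32-byte payload
print(rz_free("a"))  # overflow detected -> False
```

A real implementation would place the red zones on separately protected pages so the overflow faults at the moment of the write rather than at free time.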
+<hr><h2><a href="ols2011-giraldeau.pdf">Recovering System Metrics from Kernel Trace</a> - F.&nbsp;Giraldeau, J.&nbsp;Desfossez, D.&nbsp;Goulet, M. Dagenais, M.&nbsp;Desnoyers</h2>
+<p>Important Linux kernel subsystems are statically instrumented with tracepoints, which enables the gathering of detailed information about a running system, such as process scheduling, system calls and memory management. Each time a tracepoint is encountered, an event is generated and can be recorded to disk for offline analysis. Kernel tracing provides system-wide instrumentation that has low performance impact, suitable for tracing online systems in order to debug hard-to-reproduce errors or analyze the performance.</p>
+<p>Despite these benefits, a kernel trace may be difficult to analyze due to the large number of events. Moreover, trace events expose low-level behavior of the kernel that requires deep understanding of kernel internals to analyze. In many cases, the meaning of an event may depend on previous events. To get valuable information from a kernel trace, fast and reliable analysis tools are required.</p>
+<p>In this paper, we present the trace analyses required to provide familiar and meaningful metrics to system administrators and software developers, including CPU, disk, file and network usage. We present an open source prototype implementation that performs these analyses with the LTTng tracer. It leverages kernel traces for performance optimization and debugging.
+<hr><h2><a href="ols2011-masters.pdf">State of the kernel</a> - John C.&nbsp;Masters</h2>
+<p>Slides from the talk follow.
+<hr><h2><a href="ols2011-riker.pdf">Android Development</a> - Tim&nbsp;Riker</h2>
+<p>Slides from the talk follow.
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/ols/2012/index.html	Fri Jul 26 15:23:43 2013 -0500
@@ -0,0 +1,219 @@
+<title>Ottawa Linux Symposium (OLS) papers for 2012</title>
+<p>Ottawa Linux Symposium (OLS) Papers for 2012:</p>
+<hr><h2><a href="ols2012-komu.pdf">Sockets and Beyond: Assessing the Source Code of Network Applications</a> - M.&nbsp;Komu, S.&nbsp;Varjonen, A.&nbsp;Gurtov, S.&nbsp;Tarkoma</h2>
+<p>Network applications are typically developed with frameworks that hide
+the details of low-level networking. The motivation is to allow
+developers to focus on application-specific logic rather than
+low-level mechanics of networking, such as name resolution, reliability,
+asynchronous processing and quality of service. In this article, we
+characterize statistically how open-source applications use the
+Sockets API and identify a number of requirements for network applications based on our
+analysis. The analysis considers five fundamental questions: naming
+with end-host identifiers, name resolution, multiple end-host
+identifiers, multiple transport protocols and
+security. We discuss the significance of these findings for
+network application frameworks and their development. As two of our
+key contributions, we present generic solutions for a problem with OpenSSL
+initialization in C-based applications and for a multihoming issue with
+UDP in all four of the analyzed frameworks.
+<hr><h2><a href="ols2012-lim.pdf">Load-Balancing for Improving User Responsiveness on Multicore Embedded Systems</a> - Geunsik Lim, Changwoo Min, YoungIk Eom</h2>
+<p>Most commercial embedded devices have been deployed with a single-processor architecture. The code size and complexity of applications running on embedded devices are rapidly increasing due to the emergence of application business models such as the Google Play Store and Apple App Store. As a result, high-performance multicore CPUs have become a major trend in the embedded market as well as in the personal computer market. </p>
+<p>Due to this trend, many device manufacturers have been able to adopt more attractive user interfaces and high-performance applications for better user experiences on the multicore systems.</p>
+<p>In this paper, we describe how to improve the real-time performance by reducing the user waiting time on multicore systems that use a partitioned per-CPU run queue scheduling technique. Rather than focusing on naive load-balancing scheme for equally balanced CPU usage, our approach tries to minimize the cost of task migration by considering the importance level of running tasks and to optimize per-CPU utilization on multicore embedded systems.</p>
+<p>Consequently, our approach improves the real-time characteristics such as cache efficiency, user responsiveness, and latency. Experimental results under heavy background stress show that our approach reduces the average scheduling latency of an urgent task by 2.3 times.
+<hr><h2><a href="ols2012-mansoor.pdf">Experiences with Power Management Enabling on the Intel Medfield Phone</a> -  R.&nbsp;Muralidhar, H.&nbsp;Seshadri, V.&nbsp;Bhimarao, V.&nbsp;Rudramuni, I.&nbsp;Mansoor, S.&nbsp;Thomas, B.&nbsp;K.&nbsp;Veera, Y.&nbsp;Singh, S.&nbsp;Ramachandra</h2>
+<p>Medfield is Intel's first smartphone SOC platform, built on a 32 nm process; the platform implements several key innovations in hardware and software to accomplish aggressive power management. It has multiple logical and physical power partitions that enable software/firmware to selectively control power to functional components, and to the entire platform as well, with very low latencies. </p>
+<p>This paper describes the architecture, implementation and key experiences from enabling power management on the Intel Medfield phone platform. We describe how the standard Linux and Android power management architectures integrate with the capabilities provided by the platform to deliver aggressive power management. We also present some of the key lessons from our power management experiences that we believe will be useful to other Linux/Android-based platforms. 
+<hr><h2><a href="ols2012-adepoutovitch.pdf">File Systems: More Cooperations - Less Integration.</a> - A.&nbsp;Depoutovitch, A.&nbsp;Warkentin</h2>
+<p>Conventionally, file systems manage storage space available to user programs and provide it through the file interface. 
+Information about the physical location of used and unused space is hidden from users. This makes file system free space unavailable to other storage-stack kernel components, for performance reasons or to avoid layering violations. This forces file system architects to integrate additional functionality, like snapshotting and volume management, inside the file system, increasing its complexity. </p>
+<p>We propose a simple and easy-to-implement file system interface that allows different software components to efficiently share free storage space with a file system at a block level. We demonstrate the benefits of the new interface by optimizing an existing volume manager to store snapshot data in the file system free space, instead of requiring the space to be reserved in advance, which would make it unavailable for other uses.
+<hr><h2><a href="ols2012-warkentin.pdf">&ldquo;Now if we could get a solution to the home directory dotfile hell!&rdquo;</a> - A.&nbsp;Warkentin</h2>
+<p>Unix environments have traditionally consisted of
+multi-user and diverse multi-computer configurations, backed by
+expensive network-attached storage.
+The recent growth and proliferation of desktop- and single machine-
+centric GUI environments, however, has made it very difficult to share
+a network-mounted home directory
+across multiple machines. This is particularly noticeable in the
+context of concurrent graphical logins or logins
+into systems with a different installed software base. The typical
+offenders are the &ldquo;modern&rdquo; bits of software such as
+desktop environments (e.g.&nbsp;GNOME), services (dbus, PulseAudio), and
+applications (Firefox),
+which all abuse dotfiles.</p>
+<p>Frequent changes to configuration
+formats prevent the same set of configuration files from being easily used
+across even close versions of the same software. And whereas dotfiles
+historically contained read-once configuration,
+they are now misused for runtime lock files and writeable configuration databases,
+with no effort to guarantee correctness across concurrent accesses and
+differently-versioned components. Running such software concurrently, across different
+machines with a network mounted home directory, results in corruption, data loss, misbehavior
+and deadlock, as the majority of configuration is system-, machine- and installation- specific,
+rather than user-specific.</p>
+<p>This paper explores a simpler alternative to rewriting all
+existing broken software, namely, implementing separate host-specific profiles via
+filesystem redirection of dotfile accesses. Several approaches are
+discussed and the presented solution, the
+Host Profile File System, although Linux-centric, can be easily
+adapted to other similar environments, such as OS X and Solaris.</p>
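<p>The redirection idea — point dotfile accesses at a per-host profile directory instead of the shared home — can be sketched as pure path rewriting (the paths and profile layout here are illustrative, not the Host Profile File System's actual scheme):</p>

```python
import os

def redirect_dotfile(path: str, home: str, host: str) -> str:
    """Rewrite $HOME/.foo to a host-private profile directory."""
    rel = os.path.relpath(path, home)
    # Redirect dotfiles/dot-directories inside $HOME, but not paths
    # outside it (relpath for those begins with "..").
    if rel.startswith(".") and not rel.startswith(".."):
        return os.path.join(home, ".hostprofiles", host, rel)
    return path  # ordinary user data stays shared across machines

print(redirect_dotfile("/home/al/.config/app.ini", "/home/al", "boxA"))
# /home/al/.hostprofiles/boxA/.config/app.ini
```

A file-system layer doing this transparently gives each machine its own lock files and configuration databases while keeping real documents shared.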
+<hr><h2><a href="ols2012-subramanian.pdf">Improving RAID1 Synchronization Performance Using File System Metadata</a> - H.&nbsp;Subramanian, A.&nbsp;Warkentin, A.&nbsp;Depoutovitch</h2>
+<p>  Linux MD software RAID1 is used ubiquitously by end users,
+  corporations and as a core technology component of other software
+  products and solutions, such as the VMware vSphere
+  Appliance(vSA). MD RAID1 mode provides data persistence and
+  availability in face of hard drive failures by maintaining two or
+  more copies (mirrors) of the same data. vSA makes data available
+  even in the event of a failure of other hardware and software
+  components, e.g.&nbsp;storage adapter, network, or the entire
+  vSphere
+  server. For recovery from a failure, MD has a mechanism for change
+  tracking and mirror synchronization.
+  However, data synchronization can consume a significant amount of
+  time and resources. In the worst case scenario, when one of the
+  mirrors has to be replaced with a new one, it may take up to a few
+  days to synchronize the data on a large multi-terabyte disk volume.
+  During this time, the MD RAID1 volume and contained user data are
+  vulnerable to failures and MD operates below optimal performance.
+  Because disk sizes continue to grow at a much faster pace compared
+  to disk speeds, this problem is only going to become worse in the
+  near future.
+  This paper presents a solution for improving the synchronization of
+  MD RAID1 volumes by leveraging information already tracked by file
+  systems about disk utilization. We describe and compare three
+  different implementations that tap into the file system and assist
+  the MD RAID1 synchronization algorithm to avoid copying unused
+  data. With a real-life average disk utilization of 43%, our approach
+  substantially reduces the synchronization time of a typical MD RAID1
+  volume compared to the existing synchronization mechanism.
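<p>The core optimization — consult the file system's allocation information and skip unused blocks during mirror resync — can be sketched as follows (the block list and allocation bitmap are illustrative; the paper's implementations tap real file system metadata):</p>

```python
def resync(source, target, allocated):
    """Copy only the blocks the file system marks as in use."""
    copied = 0
    for i, in_use in enumerate(allocated):
        if in_use:                 # unused blocks hold no user data,
            target[i] = source[i]  # so skipping them is safe
            copied += 1
    return copied

blocks = 10
source = [f"data{i}" for i in range(blocks)]
target = [None] * blocks
allocated = [i % 2 == 0 for i in range(blocks)]  # 50% utilization

print(resync(source, target, allocated))  # copies 5 of 10 blocks
```

At the 43% average utilization quoted above, more than half of a naive full resync is avoidable work.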
+<hr><h2><a href="ols2012-verma.pdf">Out of band Systems Management in enterprise Computing Environment</a> - D.&nbsp;Verma, S.&nbsp;Gowda, A.&nbsp;Vellimalai, S.&nbsp;Prabhakar</h2>
+<p>Out of band systems management provides an innovative mechanism to keep the digital ecosystem inside data centers in shape even when the parent system goes down. This is an upcoming trend where monitoring and safeguarding of servers is offloaded to another embedded system which is most likely an embedded Linux implementation. </p>
+<p>In today's context, where virtualized servers/workloads are the most prevalent compute nodes inside a data center, it is important to evaluate  systems management and associated challenges in that perspective. This paper explains how to leverage Out Of Band systems management infrastructure in virtualized environment. </p>
+<hr><h2><a href="ols2012-thiell.pdf">ClusterShell, a scalable execution framework for parallel tasks</a> - S.&nbsp;Thiell, A. Degr&eacute;mont, H.&nbsp;Doreau, A.&nbsp;Cedeyn</h2>
+<p>Cluster-wide administrative tasks and other distributed jobs are often
+executed by administrators using locally developed tools and do not rely on a
+solid, common and efficient execution framework. This document covers this
+subject by giving an overview of ClusterShell, an open source Python
+middleware framework developed to improve the administration of HPC Linux
+clusters or server farms.</p>
+<p>ClusterShell provides an event-driven library interface that eases the
+management of parallel system tasks, such as copying files, executing shell
+commands and gathering results. By default, remote shell commands rely on SSH,
+a standard and secure network protocol. Based on a scalable, distributed
+execution model using asynchronous and non-blocking I/O, the library has shown
+very good performance on petaflop systems. Furthermore, by providing efficient
+support for node sets and more particularly node group bindings, the library
+and its associated tools can ease cluster installations and daily tasks
+performed by administrators.</p>
+<p>In addition to the library interface, this document addresses resiliency and
+topology changes in homogeneous or heterogeneous environments. It also focuses
+on scalability challenges encountered during software development and
+on the lessons learned to achieve maximum performance from a Python software
+engineering point of view. </p>
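<p>The fan-out model — run one command across a node set and gather per-node results — can be approximated with a thread pool. This is a generic sketch, not ClusterShell's actual API, and the remote SSH step is replaced by a local echo so the example runs anywhere:</p>

```python
from concurrent.futures import ThreadPoolExecutor
import subprocess

def run_on_nodes(command, nodes):
    """Toy ClusterShell-style fan-out: run `command` on each node in parallel."""
    def run(node):
        # A real framework would execute: ssh <node> <command>.
        # Here we echo locally so the sketch is self-contained.
        out = subprocess.run(["echo", f"{node}: {command}"],
                             capture_output=True, text=True)
        return node, out.stdout.strip()

    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        return dict(pool.map(run, nodes))

results = run_on_nodes("uname -r", [f"node{i}" for i in range(1, 5)])
print(results)
```

ClusterShell itself goes further with event-driven non-blocking I/O, which is what lets it scale to petaflop-class clusters rather than one thread per node.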
+<hr><h2><a href="ols2012-salve.pdf">DEXT3: Block Level Inline Deduplication for EXT3 File System</a> - A.&nbsp;More, Z.&nbsp;Shaikh, V.&nbsp;Salve</h2>
+<p>Deduplication is basically an intelligent storage and compression technique that avoids saving redundant data to disk. Solid State Disk (SSD) media have gained popularity these days owing to their low power demands, resistance to natural shocks and vibrations, and high-quality random access performance. However, these media come with limitations such as high cost, small capacity and a limited erase-write cycle lifespan. Inline deduplication helps alleviate these problems by avoiding redundant writes to the disk and making efficient use of disk space. In this paper, a block-level inline deduplication layer for the EXT3 file system, named the DEXT3 layer, is proposed. This layer identifies the possibility of writing redundant data to the disk by maintaining an in-core metadata structure of the previously written data. The metadata structure is made persistent to the disk, ensuring that the deduplication process does not crumble owing to a system shutdown or reboot. The DEXT3 layer also takes care of the modification and the deletion of a file whose blocks are referred to by other files, which otherwise would have created data loss issues for the referring files.
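<p>The layer's bookkeeping — an in-core map from block digest to disk block, plus reference counts so that deleting one file does not free blocks another file still uses — can be sketched as follows (class and method names are hypothetical):</p>

```python
import hashlib

class DedupLayer:
    def __init__(self):
        self.by_digest = {}  # digest -> block number (in-core metadata)
        self.refcount = {}   # block number -> how many files use it
        self.disk = []       # simulated block store

    def write_block(self, data: bytes) -> int:
        digest = hashlib.sha256(data).digest()
        if digest in self.by_digest:      # redundant write: reuse the block
            blk = self.by_digest[digest]
        else:                             # new content: really write it
            blk = len(self.disk)
            self.disk.append(data)
            self.by_digest[digest] = blk
            self.refcount[blk] = 0
        self.refcount[blk] += 1
        return blk

    def release_block(self, blk: int):
        # Called on file deletion; the block survives while still referenced.
        self.refcount[blk] -= 1

layer = DedupLayer()
a = layer.write_block(b"shared")
b = layer.write_block(b"shared")  # deduplicated: same physical block
layer.release_block(a)            # first file deleted...
print(b, len(layer.disk), layer.refcount[b])  # ...block kept for the second
```

Persisting `by_digest` and `refcount` to disk is what lets the real layer survive a shutdown or reboot, as the abstract notes.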
+<hr><h2><a href="ols2012-chang.pdf">ARMvisor: System Virtualization for ARM</a> - J-H.&nbsp;Ding, C-J.&nbsp;Lin, P-H.&nbsp;Chang, C-H.&nbsp;Tsang, W-C.&nbsp;Hsu, Y-C.&nbsp;Chung</h2>
+<p>In recent years, system virtualization technology has gradually shifted its focus from data centers to embedded systems for enhancing security, simplifying the process of application porting as well as increasing system robustness and reliability. In traditional servers, which are mostly based on x86 or PowerPC processors, Kernel-based Virtual Machine (KVM) is a commonly adopted virtual machine monitor.
+However, there are no such KVM implementations available for the ARM architecture which dominates modern embedded systems. In order to understand the challenges of system virtualization for embedded systems, we have implemented a hypervisor, called ARMvisor, which is based on KVM for the ARM architecture.</p>
+<p>In a typical hypervisor, there are three major components: CPU virtualization, memory virtualization, and I/O virtualization. For CPU virtualization, ARMvisor uses traditional &ldquo;trap and emulate&rdquo; to deal with sensitive instructions. Since there is no hardware support for virtualization in the ARM architecture V6 and earlier, we have to patch the guest OS to force critical instructions to trap. For memory virtualization, the functionality of the MMU, which translates a guest virtual address to a host physical address, is emulated. In ARMvisor, the shadow page table is dynamically allocated to avoid the inefficiency and inflexibility of static allocation for the guest OSes. In addition, ARMvisor uses R-Map to take care of protecting the memory space of the guest OS. For I/O virtualization, ARMvisor relies on QEMU to emulate I/O devices. We have implemented all three components of ARMvisor in KVM on the ARM-based Linux kernel. At this time, we can successfully run a guest Ubuntu system on an Ubuntu host OS with ARMvisor on the ARM-based TI BeagleBoard.
+<hr><h2><a href="ols2012-lissy.pdf">Clustering the Kernel</a> - A.&nbsp;Lissy, J.&nbsp;Parpaillon, P.&nbsp;Martineau</h2>
+<p>Model-checking techniques are limited in the number of states
+that can be handled, even with new optimizations to increase capacity.
+To be able to apply these techniques to a very large code base such as the
+Linux kernel, we propose to slice the problem into parts that are manageable for
+model-checking. A first step toward this goal is to study the
+current topology of internal dependencies in the kernel.
+<hr><h2><a href="ols2012-zeldovich.pdf">Non-scalable locks are dangerous</a> - Silas Boyd-Wickizer, M. Frans Kaashoek, Robert Morris, and Nickolai Zeldovich</h2>
+  <p>Several operating systems rely on non-scalable spin locks for serialization.
+  For example, the Linux kernel uses ticket spin locks, even though scalable
+  locks have better theoretical properties.
+  Using Linux on a 48-core machine, this paper
+  shows that non-scalable locks can cause dramatic collapse in the
+  performance of real workloads, even for very short critical sections.
+  The nature and sudden onset of
+  collapse are explained with a new Markov-based performance model.  Replacing
+  the offending non-scalable spin locks with scalable spin locks avoids the
+  collapse and requires modest changes to source code.</p>
+<hr><h2><a href="ols2012-brahmaroutu.pdf">Fine Grained Linux I/O Subsystem Enhancements to Harness Solid State Storage</a> - S.&nbsp;Brahmaroutu, R.&nbsp;Patel, H.&nbsp;Rajagopalan, S.&nbsp;Vidyadhara, A.&nbsp;Vellimalai</h2>
+<p>Enterprise Solid State Storage (SSS) devices are a high-performing class of devices targeted at business-critical applications that can benefit from fast-access storage. While it is exciting to see the improving affordability and applicability of the technology, enterprise software and Operating Systems (OS) have not undergone the pertinent design modifications to reap the benefits offered by SSS. This paper investigates the I/O submission path to identify the critical system components that significantly impact SSS performance. Specifically, our analysis focuses on the Linux I/O schedulers on the submission side of the I/O. We demonstrate that the Deadline scheduler offers the best performance under random I/O-intensive workloads for SATA SSS. Further, we establish that no I/O scheduler, including Deadline, is optimal for PCIe SSS, and we quantify the possible performance improvements with a new design that leverages device-level I/O ordering intelligence and other I/O stack enhancements.</p>
+<hr><h2><a href="ols2012-wang.pdf">Optimizing eCryptfs for better performance and security</a> - Li Wang, Y.&nbsp;Wen, J.&nbsp;Kong, X.&nbsp;Yi</h2>
+<p>This paper describes the improvements we have made to eCryptfs, a POSIX-compliant
+enterprise-class stacked cryptographic filesystem for Linux. The major improvements are as follows.
+First, for stacked filesystems, by default the Linux VFS framework maintains a page cache for each level of filesystem in the stack, which means that the same file data may be cached multiple times. In some situations this multiple caching is unneeded and wasteful, which motivates us to perform redundant cache elimination, ideally reducing memory consumption by half and avoiding unnecessary memory copies between page caches. The benefits are verified by experiments, and the approach is applicable to other stacked filesystems. Second, as a filesystem that highlights security, we equip eCryptfs with HMAC verification, which enables eCryptfs to detect unauthorized data modification and unexpected data corruption; our experiments demonstrate that the decrease in throughput is modest. Furthermore, two minor optimizations are introduced. One is a thread pool, working in a pipeline manner to perform encryption and write-down, to fully exploit parallelism, with notable performance improvements. The other is a simple but effective write optimization. In addition, we discuss ongoing and future work on eCryptfs.</p>
+<hr><h2><a href="ols2012-messier.pdf">Android SDK under Linux</a> - Jean-Francois Messier</h2>
+<p>This is a tutorial about installing the various components required to
+have an actual Android development station under Linux. The commands are
+simple ones and are written to be as independent as possible of your
+flavour of Linux. All commands and other scripts are in a set of files
+that will be available on-line. Some processes that would usually
+require user interaction have been scripted to run unattended using
+pre-downloaded files. The entire set of files (a couple of gigs) can be
+copied after the tutorial for those with a portable USB key or hard disk.</p>
--- a/ols/index.html	Fri Jul 26 15:21:42 2013 -0500
+++ b/ols/index.html	Fri Jul 26 15:23:43 2013 -0500
@@ -4,14 +4,22 @@
 Linux Symposium.  The original volumes are available from
 <a href=>the OLS website</a>.</p>
+<h1><a href=2012>OLS 2012 individual papers</a></h1>
+<p>The proceedings are also available as <a href=../mirror/ols2012.pdf>one big
+PDF volume</a>.</p>
+<h1><a href=2011>OLS 2011 individual papers</a></h1>
+<p>The proceedings are also available as <a href=../mirror/ols2011.pdf>one big
+PDF volume</a>.</p>
 <h1><a href=2010>OLS 2010 individual papers</a></h1>
 <p>The proceedings are also available as <a href=../mirror/ols2010.pdf>one
 big PDF volume</a> from the
 <a href=>OLS 2010 website</a>.</p>
 <h1><a href=2009>OLS 2009 individual papers</a></h1>
 <p>The proceedings are also available as <a href=../mirror/ols2009.pdf>one