changeset 119:ece470aaa43b

The OLS 2010 index.
author Rob Landley <rob@landley.net>
date Tue, 03 May 2011 12:17:45 -0500
parents 715e68af377a
children 9a8b11877d11
files ols/2010/index.html
diffstat 1 files changed, 734 insertions(+), 0 deletions(-)
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/ols/2010/index.html	Tue May 03 12:17:45 2011 -0500
@@ -0,0 +1,734 @@
+<html>
+<title>Ottawa Linux Symposium (OLS) papers for 2010</title>
+<body>
+
+<p>Ottawa Linux Symposium (OLS) Papers for 2010:</p>
+
+<p><b>Note: OLS never published final proceedings for 2010 on their website, just "draft proceedings" which are missing several papers.</b></p>
+
+<hr>
+
+<h2><a href="ols2010-pages-9-18.pdf">Boosting up Embedded Linux device: experience on Linux-based Smartphone</a> - Kunhoon Baik, Saena Kim, Suchang Woo, Jinhee choi</h2>
+
+<p>Modern smartphones have extensive capabilities and
+connectivity, comparable to those of personal computers (PCs).
+As the number of smartphone features increases, smartphone boot
+time also increases, since all features must be initialized during
+boot. Many fast-boot techniques have focused on optimizing the
+booting sequence. However, it is difficult to obtain a quick boot
+time (under 5 seconds) using these techniques, and many parts of
+the software platform require additional optimization. An intuitive
+way to obtain instant boot times, while avoiding these issues, is
+to boot directly from hibernation. We apply hibernation-based
+techniques to a Linux-based smartphone, and thereby overcome two
+major obstacles: long loading times for the snapshot image and
+maintenance costs related to hardware changes.</p>
+
+<p>We propose two mechanisms, based on hibernation, to obtain
+outstanding reductions in boot time. First, we minimize the size of
+the snapshot image via page reclamation, which reduces its load
+time. The snapshot is split into two major segments: the essential
+snapshot image and the supplementary snapshot image. The essential
+snapshot image is a minimally-sized image used to run the Linux
+kernel and the idle screen, and the supplementary snapshot image
+contains the remainder, which can be restored on demand. Second, we
+add additional device information to the essential snapshot image,
+which is used when the device is reactivated upon booting up. As a
+result, our mechanism omits some time-consuming jobs related to
+device re-initialization and software state recovery. In addition
+to quick boot times, our solution is low maintenance: whereas
+snapshot boot is typically implemented in the bootloader, our
+solution is implemented in the kernel and so can reuse the kernel
+infrastructure. Therefore, little effort is required even when the
+target hardware changes. We prototyped our quick-boot solution on
+an S5PC110-based smartphone. The results of our experiments
+indicate that this quick-boot solution obtains a dramatic gain in
+performance in a practical manner.</p>
+
+<hr>
+<h2><a href="ols2010-pages-19-32.pdf">Implementing an advanced access control model on Linux</a> - Kumar, Krunbacher, Banks</h2>
+
+<p>Traditional UNIX-like operating systems use a very
+simple mechanism for determining which processes get
+access to which files, which is mainly based on the file
+mode permission bits. Beyond that, modern UNIX-like
+operating systems also implement access control models
+based on Access Control Lists (ACLs), the most common being POSIX ACLs.</p>
+
+<p>The ACL model implemented by the various versions of Windows is
+more powerful and complex than POSIX ACLs, and differs in several
+aspects. These differences create interoperability problems on both
+sides; in mixed-platform environments, this is perceived as a
+significant disadvantage for the UNIX side.</p>
+
+<p>To address this issue, several UNIXes including Solaris and AIX
+started to support additional ACL models based on version 4 of the
+Network File System (NFSv4) protocol specification. Apart from
+vendor-specific extensions on a limited number of file systems,
+Linux has so far lacked this support.</p>
+
+<p>This paper discusses the rationale for and challenges involved
+in implementing a new ACL model for Linux which is designed to be
+compliant with the POSIX standard and compatible with POSIX ACLs,
+NFSv4 ACLs, and Windows ACLs. The authors' goal with this new model
+is to make Linux the better UNIX in modern, mixed-platform
+computing environments.</p>
+
+<hr>
+
+<h2><a href="ols2010-pages-33-34.pdf">Consistently Codifying Your Code: Taking Software Development to the Next Level</a> - Keith Bergelt</h2>
+
+<p><b>[NOTE: this paper has no contents, just an abstract]</b></p>
+
+<p>Over the years, sophisticated systems have been put in place to
+monitor software development and ensure that code integrity is
+maintained. Software developers expect to regularly use a revision
+control system such as Git, CVS, or SVN as part of their
+endeavors.</p>
+<p>One step that has historically been missing as a routine
+part of the development process is the codification of
+invention. Software developers continuously innovate.
+Due to a number of factors, these new innovations unfortunately have often failed to be published in a way
+that facilitates the ongoing protection of individual and
+community rights to these inventions.</p>
+<p>In order to improve the documentation of invention and
+lessen the ability of companies and patent trolls to leverage intellectual property against open source companies,
+as a community we must begin to capture invention regularly and in real time.</p>
+<p>Keith Bergelt, CEO of Open Invention Network, a company formed by IBM, NEC, Novell, Philips, Red Hat
+and Sony to enable and defend Linux, will share his insights into ways that companies can capture and codify
+invention at the time of development, ensuring that innovation is documented and leveraged in a manner so
+that the entire open source community will benefit.
+</p>
+
+<hr>
+<h2><a href="ols2010-pages-35-40.pdf">Developing Out-of-Tree Drivers alongside In-Kernel Drivers</a> - Jesse Brandenburg</h2>
+
+<p>Getting your driver released into the kernel with a GPL
+license is promoted as the holy grail of Linux hardware
+enabling, and I agree. That said, producing a quality GPL driver for use in the entire Linux ecosystem
+is not a task for the faint of heart. Releasing an Ethernet driver through kernel.org is one delivery method,
+but many users still want a driver that will support the
+newest hardware on older kernels.</p>
+
+<p>To meet our users' requirements for more than just hardware support in the latest kernel.org kernel, we in Intel's
+LAN Access Division (LAD) developed a set of coping
+strategies, processes, code, tools, and testing methods
+that are worth sharing. These lessons help us reuse
+code, maintain quality, and maximize our testing resources in order to get the best quality product in the
+shortest amount of time to the most customers. While
+not the most popular topic with core kernel developers,
+out-of-tree drivers are a necessary business solution for
+hardware vendors with many users. Our Open Source
+drivers generally work with all kernel releases 2.4 and
+later, and I'll explain many of the details about how we
+get there.</p>
+
+<hr>
+<h2><a href=ols2010-pages-41-42.pdf>Open Source Governance: An Approach</a> - Art Cannon</h2>
+
+<p><b>[Note: this "paper" has no abstract or contents.]</b></p>
+
+<hr>
+<h2><a href=ols2010-pages-43-44.pdf>Database on Linux in a virtualized environments over NFS</a> - Bikash Roy Choudhury</h2>
+
+<p><b>[Note: this "paper" has no abstract or contents.]</b></p>
+
+<hr>
+<h2><a href=ols2010-pages-45-56.pdf>KVM for ARM</a> - Christoffer Dall, Jason Nieh</h2>
+
+<p>As ARM CPUs grow in performance and ubiquity
+across phones, netbooks, and embedded computers, providing virtualization support for ARM-based devices
+is increasingly important.  We present KVM/ARM, a
+KVM-based virtualization solution for ARM-based devices that can run virtual machines with nearly unmodified operating systems.</p>
+
+<p>Because ARM is not virtualizable, KVM/ARM uses lightweight paravirtualization, a
+script-based method to automatically modify the source
+code of an operating system kernel to allow it to run
+in a virtual machine. Lightweight paravirtualization is
+architecture specific, but operating system independent.
+It is minimally intrusive, completely automated, and requires no knowledge or understanding of the guest operating system kernel code.</p>
+
+<p>By leveraging KVM, which is
+an intrinsic part of the Linux kernel, KVM/ARM's code
+base can always be kept in line with new kernel releases
+without additional maintenance costs, and can be easily
+included in most Linux distributions. We have implemented a KVM/ARM prototype based on the Linux kernel used in Google Android, and demonstrated its ability
+to successfully run nearly unmodified Linux guest operating systems.</p>
+
+<hr>
+<h2><a href=ols2010-pages-57-62.pdf>UBI with Logging</a> - Brijesh Singh, Rohit Vijay Dongre</h2>
+
+<p>Flash memory is widely adopted as a novel non-volatile storage
+medium because of its characteristics: fast access speed, shock
+resistance, and low power consumption. UBI (Unsorted Block Images)
+uses mechanisms like wear leveling and bad block management to
+overcome flash limitations such as "erase before write". This
+simplifies file systems like UBIFS, which depend on UBI for flash
+management. However, the UBI design causes mount time to scale
+linearly with flash size.</p>
+
+<p>With increasing flash sizes, it is very important to ensure that
+UBI mount time is not a linear function of flash size. This paper
+presents the design of UBIL: a UBI layer with logging. UBIL is
+designed to solve UBI issues, namely mount-time scalability and
+efficient user data mapping. UBIL achieves more than 50% mount-time
+reduction for 1GB NAND flash. With optimizations, we expect attach
+time to be reduced by up to 70%. UBIL introduces no degradation in
+read-write performance; a more elaborate comparison of the results
+and merits of UBIL with respect to UBI is outlined in the
+conclusion of the paper.</p>
+
+<hr>
+
+<h2><a href=ols2010-pages-63-74.pdf>Looking Inside Memory:
+Tooling for tracing memory reference patterns</a> -
+Ankita Garg, Balbir Singh, Vaidyanathan Srinivasan</h2>
+
+<p>Memory is a critical resource that is non-renewable and is
+time-consuming to regenerate by reclaim. While there are several
+tools available to understand the amount of memory utilized by an
+application, there is presently little infrastructure to capture
+the physical memory reference pattern of an application on a live
+system. This knowledge would enable software developers and
+hardware designers to understand not only the amount of memory
+used, but also the way the references are laid out across RAM. The
+temporal and spatial reference patterns can provide new insights
+into benchmark characteristics, enabling memory-related
+optimizations. Additional tools could be developed on top to
+extract useful data from the reference information: for example, a
+tool to understand the working set size of an application and how
+it varies with time. The data could also be used to optimize an
+application for NUMA systems. Kernel developers could use the data
+to check fragmentation and generic data placement issues.</p>
+
+<p>In this paper, we introduce a memory reference instrumentation
+infrastructure in the Linux kernel that is built as a kernel
+module, on top of the trace framework. It works by collecting
+memory reference samples from page table entries at regular
+intervals. The data obtained is then post-processed to plot various
+graphs for visualization. We provide information on the design and
+implementation of this instrumentation, along with the challenges
+faced by such a generic memory instrumentation infrastructure. We
+will demonstrate additional tools built on this infrastructure to
+obtain interesting data collected from several benchmarks. The
+target audience includes people interested in kernel-based
+instrumentation, application developers, and performance-tuning
+enthusiasts.</p>
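+
+<p>As a rough illustration of the underlying idea (this is not the
+paper's trace-based module), mainline Linux already lets user space
+harvest reference bits at intervals via /proc/PID/clear_refs and the
+"Referenced:" fields of /proc/PID/smaps; a minimal sketch:</p>
+
+<pre>
+/* Not the paper's module: a coarse approximation of periodic reference
+ * sampling using /proc/PID/clear_refs and /proc/PID/smaps. */
+#include <stdio.h>
+#include <stdlib.h>
+
+/* Clear the referenced bits for every page of the target process; after
+ * it runs for an interval, the "Referenced:" fields in /proc/PID/smaps
+ * show how much of each mapping was actually touched in that interval. */
+static int clear_refs(int pid)
+{
+    char path[64];
+    FILE *f;
+
+    snprintf(path, sizeof(path), "/proc/%d/clear_refs", pid);
+    f = fopen(path, "w");
+    if (!f)
+        return -1;
+    fputs("1\n", f);   /* "1" clears the soft reference bits of all pages */
+    return fclose(f);
+}
+
+int main(int argc, char **argv)
+{
+    return (argc > 1 && clear_refs(atoi(argv[1])) == 0) ? 0 : 1;
+}
+</pre>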
+
+<hr>
+
+<h2><a href=ols2010-pages-75-82.pdf>Dynamic Binary Instrumentation Framework for CE Devices</a> - Alexey Gerenkov, Sergey Grekhov, Jaehoon Jeong.</h2>
+
+<p>Developers use various methods and approaches to find bugs and
+performance bottlenecks in their programs. One effective and widely
+used approach is application profiling by dynamic instrumentation.
+There are many tools based on dynamic instrumentation, each with
+its own benefits and limitations, which often forces developers to
+use several of them for profiling. For example, to use the
+Kprobe-based SystemTap tool, developers need to write
+instrumentation scripts in a special language; to use the Dyninst
+profiling library, developers need to write instrumenting programs
+in C++. Thus each tool implements its own profiling technology.
+Additionally, the various profiling tools produce output data in
+their own, mutually incompatible formats. These two problems
+significantly increase the complexity of debugging.</p>
+
+<p>In this paper we describe a unique dynamic binary
+instrumentation engine concept used in our monitoring tool, the
+System-Wide Analyzer of Performance (SWAP). This tool has a
+modular, open architecture and an API that allow various tools to
+be integrated into a powerful instrumentation and analysis
+framework for developers. Dyninst- and Kprobe-based instrumentation
+engines are integrated into the SWAP framework and used in a
+similar way. SWAP's modular structure can easily be extended with
+other instrumentation and analysis methods. SWAP also has several
+levels of API: an instrumentation API, a connection API, a control
+API, a user interface API, and a monitoring language framework API.
+This multilevel API architecture allows developers to re-use SWAP
+functionality and embed it into their own solutions. All of the
+above advantages significantly simplify the debugging and profiling
+process for embedded software.</p>
+
+<hr>
+
+<h2><a href=ols2010-pages-83-90.pdf>Prediction of Optimal Readahead Parameter in Linux by Using Monitoring Tool</a> - Ekaterina Gorelkina, Sergey Grekhov, Jaehoon Jeong, Mikhail Levin</h2>
+
+<p>Frequently, application developers face hidden performance
+problems caused by operating system internal behavior. To overcome
+such problems, the Linux operating system has many parameters that
+can be tuned with user-defined values to accelerate system
+performance. However, in the common case, finding the best system
+parameters requires a very time-consuming investigation of Linux
+internals, which is usually impossible given the tight time
+constraints of the development process.</p>
+
+<p>This paper describes a method that allows any application
+developer to find the optimal value of one Linux OS parameter: the
+maximal read-ahead window size. Tuning this parameter to its
+optimal value improves application performance in a short time,
+without requiring knowledge of Linux kernel internals or a lengthy
+experimental search for the best system parameters.</p>
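+
+<p>The abstract does not name the exact kernel knob; as an
+illustration only, one common control for the maximal read-ahead
+window on Linux 2.6 is the per-device read_ahead_kb sysfs
+attribute, so applying a predicted optimum might look like the
+following sketch (the device name and value are assumptions):</p>
+
+<pre>
+#include <stdio.h>
+
+int main(void)
+{
+    /* Hypothetical example: apply a predicted optimum of 512 KB to sda. */
+    FILE *f = fopen("/sys/block/sda/queue/read_ahead_kb", "w");
+
+    if (!f) {
+        perror("read_ahead_kb");
+        return 1;
+    }
+    fprintf(f, "%d\n", 512);
+    return fclose(f) == 0 ? 0 : 1;
+}
+</pre>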
+
+<p>Our method predicts the optimal maximal read-ahead window size
+for Linux by using a monitoring tool for the Linux kernel. The
+usage scenario is simple: the optimal value for the maximal
+read-ahead window size is obtained from a single run of the
+application. Our experiments on Linux 2.6 show that the method
+detects an optimal read-ahead window size for various real embedded
+applications with adequate accuracy, and the optimization effect
+can range from a few percent to a few dozen percent compared to the
+default case. The largest observed improvement, for embedded
+application start-up time, was 59% compared to the default
+case.</p>
+
+<p>Given these results, the method proposed in this paper is well
+suited for wide and simple use in optimizing embedded applications
+to increase their quality and effectiveness.</p>
+
+<hr>
+
+<h2><a href=ols2010-pages-91-100.pdf>Unprivileged login daemons in Linux</a> - Serge Hallyn, Jonathan T. Beard</h2>
+
+<p>Login daemons require the ability to switch to the userid
+of any user who may legitimately log in. Linux provides
+neither a fine-grained setuid privilege which can be targeted at a particular userid, nor the ability for one privileged task to grant another task the setuid privilege. A
+login service must therefore always run with the ability
+to switch to any userid.</p>
+
+<p>Plan 9 is a distributed operating system designed at Bell
+Labs to be a next generation improvement over Unix.
+While it is most famous for its central design principle - everything is a file - it is also known for simpler
+userid handling. It provides the ability to pass a setuid
+capability - a token which may be used by a task owned
+by one userid to switch to a particular new userid only
+once - through the /dev/caphash and /dev/capuse files.
+Ashwin Ganti has previously implemented these files in
+Linux. His p9auth device driver was available for a time
+as a staging driver. We have modified the concepts explored in his initial driver to better match Linux userid
+and group semantics. We provide sample code for a
+p9auth server and a fully unprivileged login daemon.
+We also present a biased view of the pros and cons of
+the p9auth filesystem.</p>
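+
+<p>A minimal sketch of the flow described above, assuming only the
+/dev/caphash and /dev/capuse semantics given in the abstract (this
+is not the paper's sample code, and the token format mentioned in
+the comments follows Plan 9 convention rather than anything stated
+here):</p>
+
+<pre>
+/* Sketch only (not the paper's sample code).  The token format in the
+ * comments follows Plan 9's "olduser@newuser@randomkey" convention and
+ * is an assumption, not something specified in the abstract. */
+#include <fcntl.h>
+#include <string.h>
+#include <unistd.h>
+
+/* Privileged helper: deposit the *hash* of a one-shot capability. */
+int grant_capability(const char *token_hash)
+{
+    int fd = open("/dev/caphash", O_WRONLY);
+
+    if (fd < 0)
+        return -1;
+    if (write(fd, token_hash, strlen(token_hash)) < 0) {
+        close(fd);
+        return -1;
+    }
+    return close(fd);
+}
+
+/* Unprivileged login daemon: redeem the matching plaintext token once.
+ * On success the driver switches this task to the target userid, so the
+ * daemon never needs to hold a blanket setuid privilege itself. */
+int use_capability(const char *token)
+{
+    int fd = open("/dev/capuse", O_WRONLY);
+
+    if (fd < 0)
+        return -1;
+    if (write(fd, token, strlen(token)) < 0) {
+        close(fd);
+        return -1;
+    }
+    return close(fd);
+}
+</pre>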
+
+<hr>
+
+<h2><a href=ols2010-pages-101-108.pdf>Twin-Linux: Running independent Linux Kernels simultaneously on separate cores of a multicore system</a> - Adhiraj Joshi, Swapnil Pimpale, Mandar Naik, Swapnil Rathi, Kiran Pawar</h2>
+
+<p>There are three classes of common consumer and enterprise
+computing: Server, Interactive, and Real-Time. These are
+characterized respectively by the need for the highest throughput,
+sustained responsiveness, and hard real-time guarantees. These are
+contradictory requirements, hence it is not possible to implement
+one operating system that achieves all of these goals. Most
+operating systems are designed toward serving only one of these
+classes and try to do justice to the other two classes to a
+reasonable extent.</p>
+
+<p>We demonstrate a technique to overcome this limitation when a
+single hardware box is required to serve more than one of these
+computing classes. We propose to run different copies of the kernel
+simultaneously on different cores of a multi-core system and
+provide synchronization between the kernels using IPIs
+(Inter-Processor Interrupts) and common memory. Our solution
+enables users to run multiple operating systems, each one best
+suited to its class of computing. For example, a quad-core system
+can be configured with two cores dedicated to server-class
+computing (database processing), one core for UI applications, and
+the remaining core for real-time applications.</p>
+
+<p>This idea has been used in the past, primarily on non-x86
+processors and custom-designed hardware. Our proposal opens this
+idea to off-the-shelf hardware. We present Twin-Linux, an
+implementation of this scenario for two processing units on an
+Intel Core 2 Duo system. The idea finds applications in filers,
+intelligent switches, and graphics processing engines, where
+different types of functions are performed in a pipelined
+manner.</p>
+
+<hr>
+
+<h2><a href=ols2010-pages-109-120.pdf>VirtFS - A virtualization aware File System pass-through</a> - Venkateswararao Jujjuri, Eric Van Hensbergen, Anthony Liguori, Badari Pulavarty.</h2>
+
+<p>This paper describes the design and implementation of a
+paravirtualized file system interface for Linux in the KVM
+environment. Today's solutions for sharing host files with the
+guest through generic network file systems like NFS and CIFS suffer
+from major performance and feature deficiencies, as these protocols
+are not designed or optimized for virtualization. To address the
+needs of the virtualization paradigm, in this paper we introduce a
+new paravirtualized file system called VirtFS. This new file system
+is currently under development and is being built using the QEMU,
+KVM, and VirtIO technologies and the 9P2000.L protocol.</p>
+
+<hr>
+
+<h2><a href=ols2010-pages-121-132.pdf>Taking Linux Filesystems to the Space Age: Space Maps in Ext4</a> - Saurabh Kadekodi, Shweta Jain</h2>
+
+<p>With ever-increasing filesystem sizes, there is a constant need
+for faster filesystem access. A vital requirement for achieving
+this is efficient filesystem metadata management.</p>
+
+<p>The bitmap technique currently used to manage free space in
+Ext4 faces scalability challenges owing to this exponential
+increase. This has led us to re-examine the available choices and
+explore a radically different design for managing free space called
+Space Maps.</p>
+
+<p>This paper describes the design and implementation of
+space maps in Ext4. The paper also highlights the limitations of bitmaps and does a comparative study of how
+space maps fare against them. In space maps, free space
+is represented by extent based red-black trees and logs.
+The design of space maps makes the free space information of the filesystem extremely compact allowing it to
+be stored in main memory at all times. This significantly
+reduces the long, random seeks on the disk that were
+required for updating the metadata. Likewise, analogous on-disk structures and their interaction with the
+in-memory space maps ensure that filesystem integrity
+is maintained. Since seeks are the bottleneck as far as
+filesystem performance is concerned, their extensive reduction leads to faster filesystem operations. Apart from
+the allocation/deallocation improvements, the log based
+design of Space Maps helps reduce fragmentation at the
+filesystem level itself. Space Maps uplift the performance of the filesystem and keep the metadata management in tune with the highly scalable Ext4.</p>
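+
+<p>Purely as an illustration of the data structure named above (not
+the authors' code), a free extent tracked in a red-black tree,
+using the kernel's generic rbtree primitives, might look like
+this:</p>
+
+<pre>
+/* Illustration only, not the authors' code: a free extent tracked in a
+ * red-black tree, using the kernel's generic rbtree primitives. */
+#include <linux/rbtree.h>
+#include <linux/types.h>
+
+struct free_extent {
+    u64 start;            /* first free block of the extent */
+    u64 len;              /* number of contiguous free blocks */
+    struct rb_node node;  /* linkage in the per-group space map */
+};
+
+/* Find the free extent containing 'goal', or the nearest one after it,
+ * in O(log n) of the number of free extents instead of a bitmap scan. */
+static struct free_extent *find_extent(struct rb_root *root, u64 goal)
+{
+    struct rb_node *n = root->rb_node;
+    struct free_extent *after = NULL;
+
+    while (n) {
+        struct free_extent *e = rb_entry(n, struct free_extent, node);
+
+        if (goal < e->start) {
+            after = e;               /* candidate; look for a closer one */
+            n = n->rb_left;
+        } else if (goal >= e->start + e->len) {
+            n = n->rb_right;
+        } else {
+            return e;                /* goal falls inside this extent */
+        }
+    }
+    return after;
+}
+</pre>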
+
+<hr>
+
+<h2><a href=ols2010-pages-133-146.pdf>Mobile Simplified Security Framework</a> - Dmitry Kasatkin</h2>
+
+<p>The Linux kernel already has several security frameworks, such
+as SELinux, AppArmor, Tomoyo, and Smack. After some study we found
+that they are not very suitable for mobile consumer devices such as
+mobile phones. They either require overly complicated
+administration or do not really provide a security API that
+applications providing services can use to verify the credentials
+of their clients and then decide whether a particular client can
+access the provided service or not.</p>
+
+<p>In this paper we present a new platform security framework
+developed by the Maemo security team specifically for mobile
+devices. The key subsystem of the Mobile Simplified Security
+Framework is the Access Control framework, which is used to bind
+privileges (resource tokens) to an application when the application
+is starting. Using a special API, different entities are able to
+verify possession of those resource tokens and allow or disallow
+access to protected resources. If an application requires access to
+protected resources, a Manifest file with the credential request
+must be included in the package providing the application. The
+Manifest file is also used to declare new credentials which are
+provided by an application coming from the package.</p>
+
+<hr>
+
+<h2><a href=ols2010-pages-147-152.pdf>Automating Virtual Machine Network Profiles</a> - Vivek Kashyap, Arnd Bergmann, Stefan Berger, Gerhard Stenzel, Jens Osterkamp</h2>
+
+<p>With the explosion in the use of virtual machines in
+data-center/cloud environments, there is a corresponding
+requirement to automate the associated network management and
+administration. The virtual machines share the system's limited
+number of network adapters, but may run workloads with contending
+network requirements. Furthermore, these workloads may be run on
+behalf of customers desiring complete isolation of their network
+traffic. The enforcement of network traffic isolation through
+access controls (filters) and VLANs on the host adds run-time and
+administrative overhead. This is further exacerbated when virtual
+machines are migrated to another physical system, as the
+corresponding network profiles must be re-enforced on the target
+system. The physical switches must also be reprogrammed.</p>
+
+<p>This paper describes the Linux enhancements in the kernel, in
+libvirt, and in layer-2 networking that enable offloading the
+switching function to external physical switches while retaining
+control in the Linux host. The layer-2 network filters and QoS
+profiles are automatically migrated with the virtual machine onto
+the target system and imposed on the physical switch port without
+administrative intervention.</p>
+
+<p>We discuss the proposed IEEE standard (802.1Qbg) and
+its implementation on Linux for automated migration of
+port profiles when a VM is migrated from one system to another.</p>
+
+<hr>
+<h2><a href=ols2010-pages-153-154.pdf>Coverage and Profiling for Real-time tiny Kernels</a> - Sital Prasad Kedia</h2>
+
+<p><b>[Note: this "paper" has no abstract or contents.]</b></p>
+
+<hr>
+<h2><a href=ols2010-pages-155-158.pdf>The advantages of a Kernel Sub-Maintainer</a> - Jeff Kirsher</h2>
+
+<p>Last year, approximately 9,500 patches were submitted to the
+Linux kernel networking sub-system. Of these 9,500 patches, roughly
+8% were against the in-kernel Intel(R) wired LAN drivers. In
+addition, over the last 2 years, the number of in-kernel Intel(R)
+wired LAN drivers went from 3 drivers (e100, e1000 & ixgb) to 8
+drivers (e100, e1000, e1000e, igb, igbvf, ixgb, ixgbe & ixgbevf).
+With the increase in Intel(R) wired LAN kernel drivers and the
+large number of kernel patches, supporting and maintaining the
+in-kernel drivers presented several challenges.</p>
+
+<p>To address the issues in maintaining and supporting the Intel(R)
+wired LAN in-kernel drivers, we needed a sub-maintainer to deal
+with all of these challenges. I will explain the obstacles we
+overcame, the advantages we found in having a sub-maintainer, and
+the processes we use to assist us in our daily routine.</p>
+
+<hr>
+<h2><a href=ols2010-pages-159-172.pdf>Linux-CR: Transparent Application Checkpoint-Restart in Linux</a> - Oren Laadan, Serge Hallyn</h2>
+
+<p>Application checkpoint-restart is the ability to save the
+state of a running application so that it can later resume
+its execution from the time of the checkpoint. Application
+checkpoint-restart provides many useful benefits including
+fault recovery, advanced resource sharing, dynamic
+load balancing and improved service availability.
+For several years the Linux kernel has been gaining the
+necessary groundwork for such functionality, and now
+support for kernel based transparent checkpoint-restart
+is also maturing. In this paper we present the
+implementation of Linux checkpoint-restart, which aims for
+inclusion in Linux mainline. We explain the usage model
+and describe the user interfaces and some key kernel interfaces.
+Finally, we present preliminary performance
+results of the implementation.</p>
+
+<hr>
+<h2><a href=ols2010-pages-173-174.pdf>Optimizing processes on multicore for speed and latency</a> - Christoph H. Lameter</h2>
+
+<p><b>[Note: this "paper" has no contents.]</b></p>
+
+<p>High Performance Computing is coming to every file server and
+every desktop machine these days. The processing power of the
+average server will grow significantly with the introduction of
+Westmere (24 threads, dual socket), Nehalem EX for quad-socket (64
+threads) and 8-socket machines (128 threads), which will make
+entirely new applications possible. In the financial market it is
+then possible to run a complete trading system setup on a single
+machine, as demonstrated by Intel running a NYSE simulation at the
+IDF conference. However, the same issues already show up, with a
+smaller effect, even on run-of-the-mill dual quad-core systems.</p>
+
+<p>Intel Nehalem processors support NUMA, a technology so far only
+known from large supercomputers. The NUMA effects are small on
+today's dual quad-core file servers, but as the number of
+processors rises, the distance from processors to memory will also
+increase and put more demands on the operating system and
+application software to obtain memory that a processor can reach in
+an efficient manner. Temporal locality issues dominate even within
+a core, because the most effective storage is in the L1 cache,
+which is local to one execution context but unreachable from
+another. It is vital that techniques originally developed for HPC
+are used to exploit the full potential that today's hardware
+provides. We will discuss Westmere, Nehalem EX, temporal and
+spatial locality management techniques, managing CPU caches,
+hyperthreading, latency, and performance in Linux.</p>
+
+<hr><h2><a href=ols2010-pages-175-176.pdf>Open Source issues? Avoiding the iop(a)ds ahead.</a> - Christoph H. Lameter</h2>
+
+<p><b>[Note: this "paper" has no contents.]</b></p>
+
+<p>Have you ever wondered why Linux does not make progress in
+certain areas? Why have we not conquered the desktop yet? Why are
+many "commercial" applications not available for Linux? Why is
+Apple dancing circles around us with iPhones, iPods, and iPads?</p>
+
+<p>All these things require funding, a certain frame of mind that
+focuses on the end user, and autonomy on the part of the developer
+of new applications. Open source developers are frequently caught
+in an evil web: pressure from employers to work on proprietary
+ideas, which drains most of the time available for productive work;
+the need to maintain relationships with (like-minded) open source
+developers working on the same projects, which results in a closed
+mind toward end users; and the inability to start something new and
+creative with somewhat controllable effort and to benefit
+monetarily from it so that further efforts can be made.</p>
+
+<p>It seems that the iPod world has found a solution to these
+issues and enabled developers to create useful apps in a minimal
+time frame, benefit from them, and grow the usefulness of their
+software. Sadly, this world is controlled by a commercial entity
+and source code is not available.</p>
+
+<p>The general public's impression is that open source contribution
+is something like a heroic effort: a renunciation of the riches
+that could be had and the taking of a vow of poverty. The motives
+of commercial entities can lie in collaboration, but from what we
+have seen, their contributions are mainly driven by the commercial
+benefits derived from open source contribution. The motivation is
+not to provide a well-designed, easy-to-use application to the end
+user.</p>
+
+<p>The author thinks that we need to change the development process
+so that the design and creation of easily usable end-user
+applications is rewarded, while at the same time it remains
+possible for others to use and modify the code created.</p>
+
+<hr>
+<h2><a href=ols2010-pages-177-188.pdf>Deploying Preemptible Linux in the Latest Camcorder</a> - Geunsik Lim</h2>
+
+<p>Currently, the Linux kernel is well equipped to compete with
+soft-realtime operating systems, and Linux has become an operating
+system of choice. We adapted an optimized Linux kernel to the
+camcorder's system architecture, which is equipped with an ARM
+Cortex-A8, and implemented an open-source based tool-chain, audio
+zoom calculation, realtime HDMI I2C communication, and a userspace
+realtime thread program. Samsung introduced its new S-Series of
+full HD digital camcorders at the Consumer Electronics Show (CES)
+this year. This product is the world's first commercially available
+camcorder with built-in Wi-Fi and DLNA connectivity.</p>
+
+<p>This paper describes our troubleshooting, cross-compiler issues,
+technical experiences, and best practices in reducing latency in
+Linux and applications when developing an embedded product like a
+camcorder. The discussion focuses on how a commercial platform can
+exploit the realtime extensions available in the Linux kernel, but
+it is also relevant to any software developer concerned with
+finding a suitable tradeoff between throughput and responsiveness
+for embedded systems. Furthermore, many methods implemented to
+further improve system performance will be presented as well.</p>
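+
+<p>As an illustration of the kind of userspace realtime thread the
+abstract mentions (generic POSIX API only, not Samsung's code; the
+SCHED_FIFO priority value is an arbitrary assumption):</p>
+
+<pre>
+/* Illustration only: create one userspace realtime thread.
+ * Build with -lpthread; needs root or CAP_SYS_NICE for SCHED_FIFO. */
+#include <pthread.h>
+#include <sched.h>
+#include <stdio.h>
+
+static void *rt_worker(void *arg)
+{
+    /* time-critical work, e.g. periodic audio or I2C servicing */
+    return NULL;
+}
+
+int main(void)
+{
+    pthread_t tid;
+    pthread_attr_t attr;
+    struct sched_param sp = { .sched_priority = 80 };  /* assumed priority */
+
+    pthread_attr_init(&attr);
+    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
+    pthread_attr_setschedparam(&attr, &sp);
+    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
+
+    if (pthread_create(&tid, &attr, rt_worker, NULL) != 0) {
+        perror("pthread_create");
+        return 1;
+    }
+    return pthread_join(tid, NULL);
+}
+</pre>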
+
+<hr><h2><a href=ols2010-pages-189-196.pdf>User Space Storage system stack Modules with File Level Control</a> - S. Narayan, R. K. Mehta, J.A. Chandy</h2>
+
+<p>Filesystem in Userspace (FUSE) is a typical solution for
+simplifying the writing of a new file system. It exports all file
+system calls to user space, giving the programmer the ability to
+implement the actual file system code in user space, at the cost of
+a small overhead due to context switching and memory copies between
+the kernel and user space. FUSE, however, only allows writing
+non-stackable file systems. The other alternative for simplifying
+file system code is the File System Translator (FiST), a tool that
+can be used to develop stackable file systems using template code.
+FiST is limited to kernel space and requires learning a slightly
+simplified file system language that describes the operation of the
+stackable file system. In this work, we combine FUSE with FiST and
+present a stackable FUSE module which allows users to write
+stackable file systems in user space. To limit the overhead of
+context switching, we provide this module in combination with our
+previously developed ATTEST framework, which provides ways to
+filter files so that only those with specific extended attributes
+are exported to the user-space daemon. Further, these attributes
+can also be exported to user space, where multiple functions can
+behave as stackable modules with dynamic ordering. Another
+advantage of such a design is that it allows non-admin users to
+implement and mount stackable file systems, for example, on their
+respective home directories. In our experiments, we observe that
+having stackable modules in user space has an overhead of around
+26% for writes and around 39% for reads when compared to standard
+stackable file systems.</p>
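+
+<p>Not from the paper: a minimal self-contained FUSE file system
+(FUSE 2.x API) showing how FUSE routes calls such as getattr,
+readdir, and read to an ordinary user-space process; stackable
+FiST-generated modules of the kind described above sit on exactly
+this kind of interface. Build with:
+gcc hello.c `pkg-config fuse --cflags --libs` -o hello</p>
+
+<pre>
+/* Illustration only: a minimal FUSE file system exposing one read-only
+ * file, /hello.  Every operation below runs in a user-space process. */
+#define FUSE_USE_VERSION 26
+#include <fuse.h>
+#include <errno.h>
+#include <string.h>
+#include <sys/stat.h>
+
+static const char *msg = "hello from user space\n";
+
+static int hello_getattr(const char *path, struct stat *st)
+{
+    memset(st, 0, sizeof(*st));
+    if (strcmp(path, "/") == 0) {
+        st->st_mode = S_IFDIR | 0755;
+        st->st_nlink = 2;
+    } else if (strcmp(path, "/hello") == 0) {
+        st->st_mode = S_IFREG | 0444;
+        st->st_nlink = 1;
+        st->st_size = strlen(msg);
+    } else {
+        return -ENOENT;
+    }
+    return 0;
+}
+
+static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t fill,
+                         off_t off, struct fuse_file_info *fi)
+{
+    if (strcmp(path, "/") != 0)
+        return -ENOENT;
+    fill(buf, ".", NULL, 0);
+    fill(buf, "..", NULL, 0);
+    fill(buf, "hello", NULL, 0);
+    return 0;
+}
+
+static int hello_read(const char *path, char *buf, size_t size, off_t off,
+                      struct fuse_file_info *fi)
+{
+    size_t len = strlen(msg);
+
+    if (strcmp(path, "/hello") != 0)
+        return -ENOENT;
+    if (off >= (off_t)len)
+        return 0;
+    if (off + size > len)
+        size = len - off;
+    memcpy(buf, msg + off, size);
+    return (int)size;
+}
+
+static struct fuse_operations hello_ops = {
+    .getattr = hello_getattr,
+    .readdir = hello_readdir,
+    .read    = hello_read,
+};
+
+int main(int argc, char *argv[])
+{
+    return fuse_main(argc, argv, &hello_ops, NULL);
+}
+</pre>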
+
+<hr><h2><a href=ols2010-pages-197-206.pdf>expect-lite</a> - Craig Miller</h2>
+
+<p>Developers can always use a tool that will save money and keep
+the boss happy. Automation boosts efficiency while skipping
+drudgery. But how can we implement automation for the rest of us
+without slowing down the real work of developing software?
+Introducing expect-lite. Written in expect, it is designed to
+directly map an interactive terminal session into an automation
+script. It is as easy as cutting and pasting text from a terminal
+window into a script and adding '>' and '<' characters to the
+beginning of each line, with advanced features to take you further.
+No knowledge of expect is required!</p>
+
+<p>In this paper, you’ll get an introduction to expect-lite,
+including applications where complex testing environments can be solved with just a few lines of expect-lite
+code. Although expect-lite is targeted at the software
+verification testing environment, its use is not limited to
+this environment, and it has been used world-wide for
+several years in router configuration, Macintosh application development, and FPGA development. expect-lite
+can be found at: http://expect-lite.sf.net/</p>
+
+<hr><h2><a href=ols2010-pages-207-216.pdf>Impediments to institutional adoption of Free/Open Source Software</a> - Peter St. Onge</h2>
+
+<p>Free and Open Source Software (FOSS) has a number of
+characteristics that make it highly desirable in institutional
+settings: prevention of lock-in, cross-platform availability and
+version consistency across platforms, internationalization support,
+breadth and depth of choices, availability of updates, and cost.
+Despite these manifold advantages, institutional uptake of FOSS has
+been limited. This paper discusses some of the key factors limiting
+adoption and presents some suggestions on how these barriers can be
+overcome.</p>
+
+<hr><h2><a href=ols2010-pages-217-224.pdf>Linux kernel support to exploit phase change memory</a> - Youngwoo Park</h2>
+
+<p>Recently, phase change memory (PRAM) has been developed as a
+next-generation memory technology. Because PRAM can be accessed at
+word granularity using the DRAM memory interface and offers higher
+density than DRAM, PRAM is expected to become an alternative main
+memory device. Moreover, it can be used as additional storage
+because of its non-volatility.</p>
+
+<p>However, PRAM has several problems. First, the access latency of
+PRAM is still not comparable to DRAM; it is several times slower.
+Second, PRAM can endure only hundreds of millions of writes per
+cell. Therefore, if PRAM is not managed properly, it has a negative
+impact on system performance and consistency.</p>
+
+<p>In order to solve these problems, we consider Linux kernel-level
+support to exploit PRAM in the memory and storage system. We use
+PRAM together with a small DRAM, and both PRAM and DRAM are mapped
+into a single physical memory address space in Linux. The physical
+memory pages used by processes are then selectively allocated based
+on their access characteristics. Frequently updated hot segment
+pages are stored in DRAM; PRAM is used for read-only and
+infrequently updated pages. Consequently, we minimize the
+performance degradation caused by PRAM while reducing main memory
+energy consumption by 50%. In addition, the non-volatile
+characteristic of PRAM is used to support the file system. We
+propose virtual storage, a block device interface that shares the
+non-volatile memory pages of PRAM as a storage alternative. By
+using 256MB of PRAM for virtual storage, we can decrease disk
+access time by more than 40%.</p>
+
+<hr><h2><a href=ols2010-pages-225-230.pdf>The Virtual Contiguous Memory Manager</a> - Zach Pfeffer</h2>
+
+<p>An input/output memory management unit (IOMMU)
+maps device addresses to physical addresses. It also insulates the system from spurious or malicious device addresses and allows fine-grained mapping attribute control. The Linux kernel core does not contain a generic
+API to handle IOMMU mapped memory; device driver
+writers must implement device specific code to interoperate with the Linux kernel core. As the number
+of IOMMUs increases, coordinating the many address
+spaces mapped by all discrete IOMMUs becomes difficult without in-kernel support.</p>
+
+<p>To address this complexity the Qualcomm Innovation
+Center (QuIC) created the Virtual Contiguous Memory Manager (VCMM) API. The VCMM API enables
+device independent IOMMU control, VMM interoperation and non-IOMMU enabled device interoperation
+by treating devices with or without IOMMUs and all
+CPUs with or without MMUs, their mapping contexts
+and their mappings using common abstractions. Physical hardware is given a generic device type and mapping
+contexts are abstracted into Virtual Contiguous Memory
+(VCM) regions. Users "reserve" memory from VCMs
+and "back" their reservations with physical memory. We
+have implemented the VCMM to manage the IOMMUs
+of an upcoming ARM based SoC. The implementation
+will be posted to the Code Aurora Foundation’s site.</p>
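+
+<p>The VCMM code itself is not quoted here; the following
+header-style sketch only mirrors the "reserve"/"back" vocabulary
+used above, and every name in it is a hypothetical stand-in rather
+than the posted API:</p>
+
+<pre>
+/* Hypothetical sketch: these names only mirror the "reserve"/"back"
+ * vocabulary of the abstract; they are stand-ins, not the posted API. */
+struct vcm;        /* one Virtual Contiguous Memory region (mapping context) */
+struct vcm_res;    /* a reservation of address space within a VCM */
+struct vcm_phys;   /* physical memory used to back a reservation */
+
+struct vcm      *vcm_create(unsigned long start, unsigned long len);
+struct vcm_res  *vcm_reserve(struct vcm *vcm, unsigned long len, unsigned attrs);
+struct vcm_phys *vcm_phys_alloc(unsigned long len, unsigned attrs);
+int              vcm_back(struct vcm_res *res, struct vcm_phys *phys);
+int              vcm_activate(struct vcm *vcm);  /* program the device's IOMMU */
+
+/* Typical driver flow under this abstraction:
+ *   vcm  = vcm_create(0, 256UL << 20);    context covering 256MB of device VA
+ *   res  = vcm_reserve(vcm, 1UL << 20, 0);  carve out a 1MB contiguous range
+ *   phys = vcm_phys_alloc(1UL << 20, 0);    gather (possibly scattered) pages
+ *   vcm_back(res, phys);                    map them into the reservation
+ *   vcm_activate(vcm);                      the device now sees 1MB contiguous
+ */
+</pre>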
+
+<hr><h2><a href=ols2010-pages-231-244.pdf>Transactional system calls on Linux</a> - Donald Porter</h2>
+
+<p>Have you ever had to manually back out an unsuccessful software install? Has a machine ever crashed on you
+while adding a user, leaving the group, password and
+shadow files inconsistent? Have you struggled to eliminate time-of-check-to-time-of-use (TOCTTOU) race
+conditions from an application? All of these problems
+have a single underlying cause: programmers cannot
+group multiple system calls into a single, consistent operation. If users (and kernel developers) had this power,
+there are a variety of innovative services they could
+build and problems they could eliminate. This paper describes system transactions and a variety of applications
+based on system transactions. We add system calls to
+begin, end, and abort a transaction. A system call that
+executes within a transaction is isolated from the rest of
+the system. The effects of a system transaction are undone if the transaction fails.</p>
+
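+<p>A sketch of how the begin/end/abort calls described above might
+be used; the wrapper names and the helper below are illustrative,
+not the paper's exact interface:</p>
+
+<pre>
+/* Sketch only: wrapper names and the helper are illustrative, not the
+ * paper's exact interface for beginning/committing/aborting a transaction. */
+#include <fcntl.h>
+#include <string.h>
+#include <unistd.h>
+
+int xbegin(void);   /* start a system transaction, isolated from other tasks */
+int xend(void);     /* commit: buffered effects become visible atomically */
+int xabort(void);   /* roll back every system call made since xbegin() */
+
+static int append_line(const char *path, const char *line)
+{
+    int fd = open(path, O_WRONLY | O_APPEND);
+
+    if (fd < 0)
+        return -1;
+    if (write(fd, line, strlen(line)) < 0) {
+        close(fd);
+        return -1;
+    }
+    return close(fd);
+}
+
+/* Add a user atomically: either passwd, group, and shadow are all updated,
+ * or - on failure or a crash part-way through - none of them are. */
+int add_user(void)
+{
+    if (xbegin() < 0)
+        return -1;
+    if (append_line("/etc/passwd", "alice:x:1001:1001::/home/alice:/bin/sh\n") ||
+        append_line("/etc/group",  "alice:x:1001:\n") ||
+        append_line("/etc/shadow", "alice:!:15000:0:99999:7:::\n")) {
+        xabort();
+        return -1;
+    }
+    return xend();
+}
+</pre>
+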
+<p>This paper describes a research project that developed
+transactional semantics for 152 Linux system calls and
+abstractions including signals, process creation, files,
+and pipes. The paper also describes the practical challenges and trade-offs in implementing transactions in
+Linux. The code changes needed to support transactions are substantial, but so are the benefits. With no
+modifications to dpkg itself, we were able to wrap an
+installation of OpenSSH in a system transaction. The
+operating system rolls back failed installations automatically, preventing applications from observing inconsistent files during the installation, and preserving unrelated, concurrent updates to the file system. Overheads
+for using transactions in an application like software installation range from 10-70%.</p>
+
+<hr><h2><a href=ols2010-pages-245-254.pdf>CPU bandwidth control for CFS</a> - P. Turner, B. B. Rao, N. Rao</h2>
+
+<p>Over the past few years there has been an increasing focus on
+the development of features for resource management within the
+Linux kernel. The addition of the fair group scheduler has enabled
+the provisioning of proportional CPU time through the specification
+of group weights. Since the scheduler is inherently work-conserving
+in nature, a task or a group can consume excess CPU share in an
+otherwise idle system. There are many scenarios where this extra
+CPU share can cause unacceptable utilization or latency. CPU
+bandwidth provisioning or limiting approaches this problem by
+providing an explicit upper bound on usage in addition to the lower
+bound already provided by shares.</p>
+
+<p>There are many enterprise scenarios where this functionality is
+useful, in particular pay-per-use environments and latency
+provisioning within non-homogeneous environments.</p>
+
+<p>This paper details the requirements behind this feature, the
+challenges involved in incorporating it into CFS (the Completely
+Fair Scheduler), and the future development road map for this
+feature.</p>
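+
+<p>For a concrete sense of what such an upper bound looks like, the
+cgroup interface for CFS bandwidth control that was eventually
+merged into mainline exposes a per-group quota and period; a sketch
+(the mount point and group name are assumptions about the local
+setup):</p>
+
+<pre>
+/* Sketch: cap a group at half of one CPU with a quota/period pair.  The
+ * cgroup mount point and the "batch" group are assumptions about the local
+ * setup; the file names match the interface later merged into mainline. */
+#include <stdio.h>
+
+static int write_val(const char *path, const char *val)
+{
+    FILE *f = fopen(path, "w");
+
+    if (!f)
+        return -1;
+    fprintf(f, "%s\n", val);
+    return fclose(f);
+}
+
+int main(void)
+{
+    /* period = 100ms, quota = 50ms: the group may consume at most 0.5 CPU */
+    if (write_val("/sys/fs/cgroup/cpu/batch/cpu.cfs_period_us", "100000") ||
+        write_val("/sys/fs/cgroup/cpu/batch/cpu.cfs_quota_us", "50000")) {
+        perror("cpu bandwidth setup");
+        return 1;
+    }
+    return 0;
+}
+</pre>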
+
+
+<hr><h2><a href=ols2010-pages-255-262.pdf>Page/slab cache control in a virtualized environment</a> - Balbir Singh</h2>
+
+<p>The Linux page and slab caches are among the most useful
+subsystems in the Linux kernel. Any attempt to limit their usage
+has been discouraged and frowned upon in the past. However,
+virtualization is changing the role of the kernel running on the
+system, specifically when the kernel is running as a guest.
+Assumptions about using all available memory as cache, and the
+associated optimizations, will need to be revisited in an
+environment where resources are not fully owned by one guest
+OS.</p>
+
+<p>In this paper, we discuss some of the pain points of the page
+cache in a virtualized environment, such as double caching of data
+in both the host and the guest and its impact on memory
+utilization. We look at the current page cache behavior of Linux
+running as a guest, and when multiple instances of guest operating
+systems are running. We look at current practices and propose new
+solutions to the double-caching problem in the kernel.</p>
+
+<hr><h2><a href=ols2010-pages-263-264.pdf>The Benefits of More Procrastination in the Kernel</a> - Geoff T Smith</h2>
+
+<p><b>[Note: this "paper" has no contents.]</b></p>
+