ACM Books
Abstract
Smarter Than Their Machines: Oral Histories of the Pioneers of Interactive Computing is based on oral histories archived at the Charles Babbage Institute, University of Minnesota. Included are the oral histories of key pioneers of the computer industry, selected by John, whose work led to interactive computing, such as Richard Bloch, Gene Amdahl, Herbert W. Robinson, Sam Wyly, J.C.R. Licklider, Ivan Sutherland, Larry Roberts, Robert Kahn, Marvin Minsky, Michael Dertouzos, and Joseph Traub, as well as his own. John has woven them together with introductions that are, in essence, a personal walk down the computer industry road. John had the unique advantage of having been part of, or witness to, much of the history contained in these oral histories, beginning as a co-op student at Arthur D. Little, Inc., in the 1950s. Eventually, he would become a pioneer in his own right by creating the computer industry's first successful software products company (Cullinane Corporation). An added benefit of reading these oral histories is that they contain important messages for today's leaders at all levels, including that government, industry, and academia can accomplish great things when working together effectively. This is how the computer industry was created, which then led to the Internet, both totally unanticipated just 75 years ago.
Abstract
The wireless medium is a shared resource. If nearby devices transmit at the
same time, their signals interfere, resulting in a collision. In traditional
networks, collisions cause the loss of the transmitted information. For this
reason, wireless networks have been designed with the assumption that
interference is intrinsically harmful and must be avoided.
This book, a revised version of the author's award-winning Ph.D.
dissertation, takes an alternate approach: Instead of viewing interference
as an inherently counterproductive phenomenon that should be avoided, we
design practical systems that transform interference into a harmless, and
even a beneficial phenomenon. To achieve this goal, we consider how wireless
signals interact when they interfere, and use this understanding in our
system designs. Specifically, when interference occurs, the signals get
mixed on the wireless medium. By understanding the parameters of this
mixing, we can invert the mixing and decode the interfered packets; thus,
making interference harmless. Furthermore, we can control this mixing
process to create strategic interference that allows decodability at a
particular receiver of interest, but prevent decodability at unintended
receivers and adversaries. Hence, we can transform interference into a
beneficial phenomenon that provides security.
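As a rough illustration of the mixing idea (a minimal sketch under a simplified model: two observations with flat fading and assumed-known channel coefficients, not the decoders the book actually develops), two collided symbols can be recovered by inverting a 2x2 linear system:

import numpy as np

# Toy model: two symbols x1, x2 collide on the air; with two observations
# and an assumed-known channel/mixing matrix H, the superposition can be
# inverted. This is only an illustration of "inverting the mixing".

rng = np.random.default_rng(0)

# Hypothetical transmitted symbols (QPSK-like, for illustration only)
x = np.array([1 + 1j, -1 + 1j]) / np.sqrt(2)

# Assumed-known 2x2 mixing matrix (one row per observation)
H = np.array([[0.9 + 0.1j, 0.4 - 0.3j],
              [0.2 + 0.5j, 1.1 - 0.2j]])

# Each observation is a linear combination of both symbols, plus noise
noise = 0.01 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
y = H @ x + noise

# "Inverting the mixing": solve the linear system to recover the symbols
x_hat = np.linalg.solve(H, y)
print(np.round(x_hat, 3))   # close to the transmitted symbols

The sketch only conveys why a known mixing makes collided symbols decodable; the book's actual designs must estimate such parameters from the received signal and cope with far less idealized channel conditions.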
Building on this approach, we make four main contributions: We present the
first WiFi receiver that can successfully reconstruct the transmitted
information in the presence of packet collisions. Next, we introduce a WiFi
receiver design that can decode in the presence of high-power
cross-technology interference from devices like baby monitors, cordless
phones, microwave ovens, or even unknown technologies. We then show how we
can harness interference to improve security. In particular, we develop the
first system that secures an insecure medical implant without any
modification to the implant itself. Finally, we present a solution that
establishes secure connections between any two WiFi devices, without having
users enter passwords or use pre-shared secret keys.
Trust Extension as a Mechanism for Secure Code Execution on Commodity Computers - Bryan Jeffrey Parno
Abstract
As society rushes to digitize sensitive information and services, it is imperative to adopt adequate security protections. However, such protections fundamentally conflict with the benefits we expect from commodity computers. In other words, consumers and businesses value commodity computers because they provide good performance and an abundance of features at relatively low costs. Meanwhile, attempts to build secure systems from the ground up typically abandon such goals, and hence are seldom
adopted.
In this book, I argue that we can resolve the tension between security and features by leveraging the trust a user has in one device to enable her to securely use another commodity device or service, without sacrificing the performance and features expected of commodity systems. At a high level, we support this premise by developing techniques to allow a user to employ a small, trusted, portable device to securely learn what code is executing on her local computer. Rather than entrusting her data to the mountain of buggy code likely running on her computer, we construct an on-demand secure execution environment which can perform security-sensitive tasks and handle private data in complete isolation from all other software (and most hardware) on the system. Meanwhile, non-security-sensitive software retains the same abundance of features and performance it enjoys today.
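As a loose sketch of the "learn what code is executing" step (hypothetical names, with a shared-key MAC standing in for the hardware-rooted mechanisms the book builds on; this is not the book's actual protocol), a trusted verifier could check an authenticated measurement of the loaded code against a whitelist of known-good hashes:

import hashlib
import hmac

# Toy attestation sketch: the computer reports a measurement (hash) of the
# code it loaded, authenticated with a key shared with the user's trusted
# device. Names, key handling, and the whitelist are illustrative only.

KNOWN_GOOD = {
    # hypothetical whitelist: measurement -> description
    hashlib.sha256(b"trusted-kernel-v1").hexdigest(): "trusted kernel v1",
}

def make_report(code_bytes: bytes, key: bytes) -> tuple[str, str]:
    """Computer side: measure the code and authenticate the measurement."""
    measurement = hashlib.sha256(code_bytes).hexdigest()
    tag = hmac.new(key, measurement.encode(), hashlib.sha256).hexdigest()
    return measurement, tag

def verify_report(measurement: str, tag: str, key: bytes) -> bool:
    """Trusted-device side: check authenticity, then check the whitelist."""
    expected = hmac.new(key, measurement.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag) and measurement in KNOWN_GOOD

key = b"shared-secret-for-illustration-only"
m, t = make_report(b"trusted-kernel-v1", key)
print(verify_report(m, t, key))   # True: the reported code is known-good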
Having established an environment for secure code execution on an individual computer, we then show how to extend trust in this environment to network elements in a secure and efficient manner. This allows us to reexamine the design of network protocols and defenses, since we can now execute code on endhosts and trust the results within the network. Lastly, we extend the user's trust one more step to encompass computations performed on a remote host (e.g., in the cloud). We design, analyze, and prove secure a protocol that allows a user to outsource arbitrary computations to commodity computers run by an untrusted remote party (or parties) who may subject the computers to both software and hardware attacks. Our protocol guarantees that the user can both verify that the results returned are indeed the correct results of the specified computations on the inputs provided, and protect the secrecy of both the inputs and outputs of the computations. These guarantees are provided in a non-interactive, asymptotically optimal (with respect to CPU and bandwidth) manner.
Thus, extending a user's trust, via software, hardware, and cryptographic techniques, allows us to provide strong security protections for both local and remote computations on sensitive data, while still preserving the performance and features of commodity computers.
Abstract
Parallelism is the key to achieving high performance in computing. However, writing efficient and scalable parallel programs is notoriously difficult, and often requires significant expertise. To address this challenge, it is crucial to provide programmers with high-level tools to enable them to develop solutions easily, and at the same time emphasize the theoretical and practical aspects of algorithm design to allow the solutions developed to run efficiently under many different settings. This thesis addresses this challenge using a three-pronged approach consisting of the design of shared-memory programming techniques, frameworks, and algorithms for important problems in computing. The thesis provides evidence that with appropriate programming techniques, frameworks, and algorithms, shared-memory programs can be simple, fast, and scalable, both in theory and in practice. The results developed in this thesis serve to ease the transition into the multicore era.
The first part of this thesis introduces tools and techniques for deterministic parallel programming, including means for encapsulating nondeterminism via powerful commutative building blocks, as well as a novel framework for executing sequential iterative loops in parallel, which lead to deterministic parallel algorithms that are efficient both in theory and in practice.
The second part of this thesis introduces Ligra, the first high-level shared memory framework for parallel graph traversal algorithms. The framework allows programmers to express graph traversal algorithms using very short and concise code, delivers performance competitive with that of highly-optimized code, and is up to orders of magnitude faster than existing systems designed for distributed memory. This part of the thesis also introduces Ligra+, which extends Ligra with graph compression techniques to reduce space usage and improve parallel performance at the same time, and is also the first graph processing system to support in-memory graph compression.
The third and fourth parts of this thesis bridge the gap between theory and practice in parallel algorithm design by introducing the first algorithms for a variety of important problems on graphs and strings that are efficient both in theory and in practice. For example, the thesis develops the first linear-work and polylogarithmic-depth algorithms for suffix tree construction and graph connectivity that are also practical, as well as a work-efficient, polylogarithmic-depth, and cache-efficient shared-memory algorithm for triangle computations that achieves a 2–5x speedup over the best existing algorithms on 40 cores.
This is a revised version of the thesis that won the 2015 ACM Doctoral Dissertation Award.
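To give a flavor of the frontier-centric programming style that Ligra supports (a simplified, sequential Python sketch of a generic edgeMap-style abstraction, not Ligra's actual C++ interface), breadth-first search can be written as repeatedly applying an update function to the edges leaving the current frontier:

# Minimal sketch of frontier-based BFS in the spirit of Ligra's edgeMap
# abstraction. Ligra itself is a C++ framework; this toy only illustrates
# the programming style, not its API or performance.

def edge_map(graph, frontier, update):
    """Apply update(src, dst) to every edge out of the frontier;
    return the set of destinations for which the update succeeded."""
    next_frontier = set()
    for u in frontier:
        for v in graph[u]:
            if update(u, v):
                next_frontier.add(v)
    return next_frontier

def bfs(graph, source):
    parent = {source: source}
    def update(u, v):
        if v not in parent:       # visit each vertex only once
            parent[v] = u
            return True
        return False
    frontier = {source}
    while frontier:
        frontier = edge_map(graph, frontier, update)
    return parent

graph = {0: [1, 2], 1: [3], 2: [3], 3: []}   # toy adjacency lists
print(bfs(graph, 0))

In Ligra itself, the per-edge updates run in parallel and the framework automatically switches between sparse and dense frontier representations, which is a key source of its performance.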
Abstract
The idea of this book grew out of a symposium that was held at Stony Brook in September 2012 in celebration of David S. Warren's fundamental contributions to Computer Science and the area of Logic Programming in particular. Logic Programming (LP) is at the nexus of Knowledge Representation, Artificial Intelligence, Mathematical Logic, Databases, and Programming Languages. It is fascinating and intellectually stimulating due to the fundamental interplay among theory, systems, and applications brought about by logic.
Logic programs are more declarative in the sense that they strive to be logical specifications of «what» to do rather than «how» to do it, and thus they are high-level and easier to understand and maintain. Yet, without being given an actual algorithm, LP systems implement the logical specifications automatically. Several books cover the basics of LP but focus mostly on the Prolog language with its incomplete control strategy and non-logical features. At the same time, there is generally a lack of accessible yet comprehensive collections of articles covering the key aspects of declarative LP. These aspects include, among others, well-founded vs. stable model semantics for negation, constraints, object-oriented LP, updates, probabilistic LP, and evaluation methods, including top-down vs. bottom-up, and tabling.
For systems, the situation is even less satisfactory, lacking accessible literature that can help train the new crop of developers, practitioners, and researchers. There are a few guides on Warren's Abstract Machine (WAM), which underlies most implementations of Prolog, but very little exists on what is needed for constructing a state-of-the-art declarative LP inference engine. Contrast this with the literature on, say, Compilers, where one can first study a book on the general principles and algorithms and then dive into the particulars of a specific compiler. Such resources greatly facilitate the ability to start making meaningful contributions quickly. There is also a dearth of articles about systems that support truly declarative languages, especially those that tie into first-order logic, mathematical programming, and constraint solving.
LP helps solve challenging problems in a wide range of application areas, but in-depth analysis of their connection with LP language abstractions and LP implementation methods is lacking. Also rare are surveys of challenging application areas of LP, such as Bioinformatics, Natural Language Processing, Verification, and Planning.
The goal of this book is to help fill the previously mentioned void in the LP literature. It offers a number of overviews on key aspects of LP that are suitable for researchers and practitioners as well as graduate students. The chapters that follow cover the theory, systems, and applications of LP.
Abstract
The Fourier transform is one of the most fundamental tools for computing the frequency representation of signals. It plays a central role in signal processing, communications, audio and video compression, medical imaging, genomics, astronomy, as well as many other areas. Because of its widespread use, fast algorithms for computing the Fourier transform can benefit a large number of applications. The fastest algorithm for computing the Fourier transform is the Fast Fourier Transform (FFT), which runs in near-linear time, making it an indispensable tool for many applications. However, today, the runtime of the FFT algorithm is no longer fast enough, especially for big data problems where each dataset can be a few terabytes. Hence, faster algorithms that run in sublinear time, i.e., do not even sample all the data points, have become necessary. This book addresses the above problem by developing the Sparse Fourier Transform algorithms and building practical systems that use these algorithms to solve key problems in six different applications: wireless networks; mobile systems; computer graphics; medical imaging; biochemistry; and digital circuits. This is a revised version of the thesis that won the 2016 ACM Doctoral Dissertation Award.
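For orientation, the transform in question is the standard discrete Fourier transform of a length-n signal x (the notation below is generic, not taken from the book):

\hat{x}_f = \sum_{t=0}^{n-1} x_t \, e^{-2\pi i f t / n}, \qquad f = 0, 1, \ldots, n-1

The FFT computes all n coefficients in O(n log n) time; the Sparse Fourier Transform algorithms developed in the book instead exploit the assumption that only k, with k much smaller than n, coefficients are significant, and recover those coefficients in time sublinear in n.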
Computational Prediction of Protein Complexes from Protein Interaction Networks - Sriganesh Srihari
Abstract
Complexes of physically interacting proteins constitute fundamental functional units that drive almost all biological processes within cells. A faithful reconstruction of the entire set of protein complexes (the «complexosome») is therefore important not only to understand the composition of complexes but also the higher level functional organization within cells. Advances over the last several years, particularly through the use of high-throughput proteomics techniques, have made it possible to map substantial fractions of protein interactions (the «interactomes») from model organisms including Arabidopsis thaliana (a flowering plant), Caenorhabditis elegans (a nematode), Drosophila melanogaster (fruit fly), and Saccharomyces cerevisiae (budding yeast). These interaction datasets have enabled systematic inquiry into the identification and study of protein complexes from organisms. Computational methods have played a significant role in this context, by contributing accurate, efficient, and exhaustive ways to analyze the enormous amounts of data. These methods have helped to compensate for some of the limitations in experimental datasets including the presence of biological and technical noise and the relative paucity of credible interactions. In this book, we systematically walk through computational methods devised to date (approximately between 2000 and 2016) for identifying protein complexes from the network of protein interactions (the protein-protein interaction (PPI) network). We present a detailed taxonomy of these methods, and comprehensively evaluate them for protein complex identification across a variety of scenarios including the absence of many true interactions and the presence of false-positive interactions (noise) in PPI networks. Based on this evaluation, we highlight challenges faced by the methods, for instance in identifying sparse, sub-, or small complexes and in discerning overlapping complexes, and reveal how a combination of strategies is necessary to accurately reconstruct the entire complexosome.
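As a simplified, purely illustrative sketch of one family of such methods (a toy density-based heuristic with made-up data and thresholds, not any specific algorithm evaluated in the book), a candidate complex can be grown greedily around a seed protein for as long as the induced subgraph remains dense:

# Toy sketch of density-based complex identification on a PPI network.
# The greedy strategy, threshold, and data are illustrative only.

def density(nodes, adj):
    """Edge density of the subgraph induced by `nodes` (1.0 = clique)."""
    n = len(nodes)
    if n < 2:
        return 1.0
    edges = sum(1 for u in nodes for v in adj[u] if v in nodes) // 2
    return 2 * edges / (n * (n - 1))

def grow_complex(seed, adj, min_density=0.7):
    """Greedily add neighbors while the candidate stays dense enough."""
    cluster = {seed}
    changed = True
    while changed:
        changed = False
        neighbors = {v for u in cluster for v in adj[u]} - cluster
        for v in neighbors:
            if density(cluster | {v}, adj) >= min_density:
                cluster.add(v)
                changed = True
    return cluster

# Hypothetical PPI adjacency lists (proteins as strings)
ppi = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"}, "D": {"C"}}
print(grow_complex("A", ppi))   # {'A', 'B', 'C'}; D is attached too sparsely

The methods surveyed in the book are far more sophisticated, combining density-based ideas with other structural and biological information to cope with noisy and incomplete PPI networks.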
Abstract
The Handbook of Multimodal-Multisensor Interfaces provides the first authoritative resource on what has become the dominant paradigm for new computer interfaces— user input involving new media (speech, multi-touch, gestures, writing) embedded in multimodal-multisensor interfaces. These interfaces support smart phones, wearables, in-vehicle and robotic applications, and many other areas that are now highly competitive commercially. This edited collection is written by international experts and pioneers in the field. It provides a textbook, reference, and technology roadmap for professionals working in this and related areas. This first volume of the handbook presents relevant theory and neuroscience foundations for guiding the development of high-performance systems. Additional chapters discuss approaches to user modeling and interface designs that support user choice, that synergistically combine modalities with sensors, and that blend multimodal input and output. This volume also highlights an in-depth look at the most common multimodal-multisensor combinations—for example, touch and pen input, haptic and non-speech audio output, and speech-centric systems that co-process either gestures, pen input, gaze, or visible lip movements. A common theme throughout these chapters is supporting mobility and individual differences among users. These handbook chapters provide walk-through examples of system design and processing, information on tools and practical resources for developing and evaluating new systems, and terminology and tutorial support for mastering this emerging field. In the final section of this volume, experts exchange views on a timely and controversial challenge topic, and how they believe multimodal-multisensor interfaces should be designed in the future to most effectively advance human performance.
Abstract
This organizational history relates the role of the National Science Foundation (NSF) in the development of modern computing. Drawing upon new and existing oral histories, extensive use of NSF documents, and the experience of two of the authors as senior managers, this book describes how NSF’s programmatic activities originated and evolved to become the primary source of funding for fundamental research in computing and information technologies. The book traces how NSF's support has provided facilities and education for computing usage by all scientific disciplines, aided in institution and professional community building, supported fundamental research in computer science and allied disciplines, and led the efforts to broaden participation in computing by all segments of society. Today, the research and infrastructure facilitated by NSF computing programs are significant economic drivers of American society and industry. For example, NSF supported work that led to the first widely-used web browser, Netscape; sponsored the creation of algorithms at the core of the Google search engine; facilitated the growth of the public Internet; and funded research on the scientific basis for countless other applications and technologies. NSF has advanced the development of human capital and ideas for future advances in computing and its applications. This account is the first comprehensive coverage of NSF's role in the extraordinary growth and expansion of modern computing and its use. It will appeal to historians of computing, policy makers and leaders in government and academia, and individuals interested in the history and development of computing and the NSF.