Abstract

With the rapid development of cloud computing, enterprises and individuals can outsource their sensitive data to a cloud server, where they enjoy high-quality data storage and computing services in a ubiquitous manner. This is known as the outsourcing computation paradigm. Recently, the problem of securely outsourcing various expensive computations or storage tasks has attracted considerable attention in the academic community. In this book, we focus on the latest technologies and applications of secure outsourcing computation. Specifically, we introduce the state-of-the-art research on securely outsourcing specific functions such as scientific computations, basic cryptographic operations, and verifiable large databases with updates. The constructions for these specific functions use various design tricks and thus result in very efficient protocols for real-world applications.

Outsourcing computation is currently an active research topic, so this book will be beneficial to academic researchers in the fields of cloud computing and big data security.
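
To give a flavor of how a client can check outsourced work without redoing it, here is a minimal Python sketch (ours, not taken from the book) of Freivalds' algorithm, a classic probabilistic check that an untrusted server's claimed matrix product C = A × B is correct, at O(n²) cost per round instead of the O(n³) cost of recomputation:

```python
import random

def freivalds_check(A, B, C, rounds=20):
    """Probabilistically verify that C == A x B for n-by-n matrices
    given as lists of lists. Each round costs O(n^2); a wrong C
    survives all rounds with probability at most 2**-rounds."""
    n = len(C)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]    # B r
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]  # A (B r)
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]    # C r
        if ABr != Cr:
            return False  # the server's answer is certainly wrong
    return True
```

The client outsources the O(n³) multiplication and pays only O(n²) per verification round; this asymmetry between computing and verifying is the general pattern behind many verifiable outsourcing protocols.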

Abstract

The new field of cryptographic currencies and consensus ledgers, commonly referred to as blockchains, is receiving increasing interest from a wide range of communities. These communities are very diverse and, among others, include technical enthusiasts, activist groups, researchers from various disciplines, start-ups, large enterprises, public authorities, banks, financial regulators, businesspeople, investors, and also criminals. The scientific community adapted relatively slowly to this emerging and fast-moving field. This was one reason that, for quite a while, the only available resources were the Bitcoin source code, blog and forum posts, mailing lists, and other online publications. The original Bitcoin paper that initiated the hype was itself published online without any prior peer review. Following the publication spirit of the Bitcoin paper, much of the innovation in this field has repeatedly come from the community itself, in the form of online publications and conversations, rather than through established peer-reviewed scientific publishing. On the one hand, this spirit of fast free-software development, combined with the business aspects of cryptographic currencies and the interests of today's time-to-market-focused industry, has produced a flood of publications, whitepapers, and prototypes. On the other hand, it has led to deficits in systematization and a gap between practice and the theoretical understanding of this new field. This book aims to further close this gap and presents a well-structured overview of this broad field from a technical viewpoint. The archetype for modern cryptographic currencies and consensus ledgers is Bitcoin and its underlying Nakamoto consensus. Therefore, we describe the inner workings of this protocol in great detail and discuss its relations to other derived systems.
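
As a hint at those inner workings: the core of Nakamoto consensus is a hash-based proof-of-work puzzle in which miners search for a nonce that makes the double SHA-256 of the block header fall below a target. A toy Python sketch of the idea (ours; real Bitcoin hashes an 80-byte serialized header against a 256-bit target encoded in the header's "bits" field):

```python
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Search for a nonce such that SHA-256(SHA-256(header || nonce))
    has at least `difficulty_bits` leading zero bits. Expected work
    doubles with every extra difficulty bit."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(
            hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # valid proof of work found
        nonce += 1

# Example with a low difficulty so the search finishes quickly
print(mine(b"example block header", difficulty_bits=16))
```

Verifying a solution takes one hash evaluation, while finding one takes exponentially many on average; this asymmetry is what lets every participant cheaply check the chain with the most accumulated work.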

Abstract

Digital forensic science, or digital forensics, is the application of scientific tools and methods to identify, collect, and analyze digital (data) artifacts in support of legal proceedings. From a more technical perspective, it is the process of reconstructing the relevant sequence of events that has led to the currently observable state of a target IT system or of (digital) artifacts. Over the last three decades, the importance of digital evidence has grown in lockstep with the rapid societal adoption of information technology, which has resulted in the continuous accumulation of data at an exponential rate. Simultaneously, there has been rapid growth in network connectivity and in the complexity of IT systems, leading to more complex behavior that needs to be investigated. The goal of this book is to provide a systematic technical overview of digital forensic techniques, primarily from the point of view of computer science. This allows us to put the field in the broader perspective of a host of related areas, gain better insight into the computational challenges facing forensics, and draw inspiration for addressing them. This is needed because some of the challenges faced by digital forensics, such as cloud computing, require qualitatively different approaches; the sheer volume of data to be examined also requires new means of processing it.
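
As a minimal illustration of this "reconstruction of events" view (our sketch, not a tool described in the book), one of the simplest forensic artifacts is a timeline built from filesystem timestamps:

```python
from datetime import datetime, timezone
from pathlib import Path

def file_timeline(root):
    """Toy forensic timeline: files under `root` ordered by last
    modification time. Real investigations combine many more sources
    (access/creation times, event logs, filesystem journals)."""
    events = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            mtime = datetime.fromtimestamp(path.stat().st_mtime, tz=timezone.utc)
            events.append((mtime, str(path)))
    return sorted(events)

# Print the chronology of changes under the current directory
for when, path in file_timeline("."):
    print(when.isoformat(), path)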

Abstract

Privacy Risk Analysis fills a gap in the existing literature by providing an introduction to the basic notions, requirements, and main steps of conducting a privacy risk analysis. The deployment of new information technologies can lead to significant privacy risks, and a privacy impact assessment should be conducted before designing a product or system that processes personal data. However, while existing privacy impact assessment frameworks and guidelines provide a good deal of detail on organizational aspects (including budget allocation, resource allocation, stakeholder consultation, etc.), they are much vaguer on the technical part, in particular on the actual risk assessment task. For privacy impact assessments to live up to their promise and really play a decisive role in enhancing privacy protection, they should be more precise with regard to these technical aspects. This book is an excellent resource for anyone developing or currently running a risk analysis, as it defines the notions of personal data, stakeholders, risk sources, feared events, and privacy harms, while showing how these notions are used in the risk analysis process. It includes a running smart-grid example to illustrate all the notions discussed in the book.
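
To make the "actual risk assessment task" concrete: risk levels are commonly derived by combining the likelihood of a feared event with the severity of the resulting harm on small ordinal scales. A toy Python sketch (ours; the four-level scale mirrors common privacy impact assessment practice, not necessarily the book's exact methodology):

```python
# Ordinal scales often used in privacy impact assessments
LIKELIHOOD = {"negligible": 1, "limited": 2, "significant": 3, "maximum": 4}
SEVERITY = {"negligible": 1, "limited": 2, "significant": 3, "maximum": 4}

def risk_level(likelihood: str, severity: str) -> int:
    """Coarse 1-16 risk score for one feared event, computed as
    likelihood x severity of the associated privacy harm."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

# Example: a data breach judged 'significant' likelihood, 'maximum' severity
print(risk_level("significant", "maximum"))  # 12 -> treat as high priority
```

Scores like these are then ranked so that mitigation effort is directed at the feared events posing the highest combined risk.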

Abstract

The current social and economic context increasingly demands open data to improve scientific research and decision making. However, when published data refer to individual respondents, disclosure risk limitation techniques must be implemented to anonymize the data and guarantee by design the fundamental right to privacy of the subjects the data refer to. Disclosure risk limitation has a long record in the statistical and computer science research communities, which have developed a variety of privacy-preserving solutions for data releases. This Synthesis Lecture provides a comprehensive overview of the fundamentals of privacy in data releases, focusing on the computer science perspective. Specifically, we detail the privacy models, anonymization methods, and utility and risk metrics that have been proposed in the literature so far. In addition, as a more advanced topic, we identify and discuss in detail the connections between several privacy models (i.e., how the privacy guarantees they offer can be accumulated to achieve more robust protection, and when such guarantees are equivalent or complementary); we also explore the links between anonymization methods and privacy models (how anonymization methods can be used to enforce privacy models and thereby offer ex ante privacy guarantees). These latter topics are relevant to researchers and advanced practitioners, who will gain a deeper understanding of the available data anonymization solutions and the privacy guarantees they can offer.
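
As a concrete example of a privacy model that anonymization methods aim to enforce, consider k-anonymity: every released record must share its quasi-identifier values with at least k − 1 other records. A small Python sketch (ours; the record layout and attribute names are invented for illustration):

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the k for which the table is k-anonymous: the size of
    the smallest group of records that share the same values on all
    quasi-identifier attributes."""
    groups = Counter(
        tuple(r[attr] for attr in quasi_identifiers) for r in records
    )
    return min(groups.values())

records = [
    {"zip": "47677", "age": 29, "disease": "heart"},
    {"zip": "47677", "age": 29, "disease": "flu"},
    {"zip": "47602", "age": 22, "disease": "flu"},
]
# The last record is unique on (zip, age), so the table is only 1-anonymous
print(k_anonymity(records, ["zip", "age"]))  # 1
```

An anonymization method such as generalization (e.g., coarsening zip codes or age ranges) would merge small groups until the desired k is reached, thereby offering the ex ante guarantee the privacy model promises.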