
model for the system, often in the form of a state space model, as the basis for diagnosis and prognosis. Different from these existing books, this book focuses on the concept of random effects and its applications in system diagnosis and prognosis. The impetus for this book arose from the current digital revolution. In this digital age, the essential feature of a modern engineering system is that a large amount of data is collected in real time from multiple similar units/machines during their operation. This feature poses significant intellectual opportunities and challenges. As for opportunities, since we have observations from a potentially very large number of similar units, we can compare their operations, share information, and extract common knowledge to enable accurate and tailored prediction and control at the individual level. As for challenges, because the data are collected in the field rather than in a controlled environment, they contain significant variation and heterogeneity due to the large differences in working/usage conditions across units. This requires analytics approaches that are not only general (so that common information can be learned and shared), but also flexible (so that the behavior of an individual unit can be captured and controlled). Random effects modeling approaches directly address these opportunities and challenges.

      The book contains two main parts. The first part covers general statistical concepts and theory useful for describing and modeling variation, fixed effects, and random effects for both univariate and multivariate data, which provides the necessary background for the second part of the book. The second part covers advanced statistical methods for variation source diagnosis and system failure prognosis based on the random effects modeling approach. An appendix summarizing the basic results in linear spaces and matrix theory is also included at the end of the book for the sake of completeness.

      This book is intended for students, engineers, and researchers who are interested in using modern statistical methods for variation modeling, analysis, and prediction in industrial systems. It can be used as a textbook for a graduate-level or advanced undergraduate-level course on industrial data analytics and/or quality and reliability engineering. We also include “Bibliographic Notes” at the end of each chapter that highlight relevant additional reading materials for interested readers. These bibliographic notes are not intended to provide a complete review of each topic, and we apologize for any relevant literature that they omit.

      Much of the material in this book comes from the authors’ recent research work on variation modeling and analysis, variation source diagnosis, and system condition and failure prognosis for manufacturing systems and beyond. We hope this book can stimulate new research and serve as a reference for researchers in this area.

      Shiyu Zhou
      Madison, Wisconsin, USA

      Yong Chen
      Iowa City, Iowa, USA

      We would like to thank the many people whose collaborations with us led up to the writing of this book. In particular, we would like to thank Jianjun Shi, our Ph.D. advisor at the University of Michigan (now at Georgia Tech), for his continuous advice and encouragement. We are grateful to our colleagues Daniel Apley, Darek Ceglarek, Yu Ding, Jionghua Jin, Dharmaraj Veeramani, and Yilu Zhang for their collaboration with us on related research topics. Grateful thanks also go to Raed Kontar, Junbo Son, and Chao Wang, who helped with the book, including writing computational code to create some of the illustrations and designing the exercise problems. Many students, including Akash Deep, Salman Jahani, Jaesung Lee, and Congfang Huang, read parts of the manuscript and helped with the exercise problems. We thank the National Science Foundation for supporting our research work related to this book.

      Finally, a very special note of appreciation is extended to our families who have provided continuous support over the past years.

      We follow the notational conventions listed below throughout the book.



Covariance matrix (variance–covariance matrix): cov(⋅), Σ (e.g., cov(Y), Σ_Y)
Cumulative distribution function: F(y), F(y; θ), F_Y(y)
Defined as: :=
Density function: f(y), f(y; θ), f_Y(y)
Estimated/predicted value: hat accent ^
Expectation operation: E(⋅) (e.g., E(Y) for a random variable or random vector Y)
Identity matrix: I; I_n denotes the n × n identity matrix
Indicator function: I(⋅) (e.g., I(y > a))
Likelihood function: L(⋅) (e.g., L(θ|y))
Matrix: boldface uppercase letter (e.g., X, Y)
Matrix or vector transpose: superscript T (e.g., X^T, Y^T)
Mean of a random variable (vector): μ (boldface μ for a random vector), e.g., μ_Y
Model parameter: lowercase Greek letter (e.g., θ, λ)
Negative log likelihood function: l(⋅), with l(θ|y) = −ln L(θ|y)
Normal distribution function: N(⋅, ⋅) (e.g., N(μ, σ²), N(μ, Σ))
Parameter space: uppercase script Greek letters (e.g., Θ, Ω)
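
As a minimal illustration of how these conventions combine, suppose Y is an n-dimensional random vector with Y ~ N(μ, Σ) and parameter θ = (μ, Σ); then the density, likelihood, and negative log likelihood notation reads

$$
f(\mathbf{y};\boldsymbol{\theta}) = (2\pi)^{-n/2}\,\lvert\boldsymbol{\Sigma}\rvert^{-1/2}\exp\!\left\{-\tfrac{1}{2}(\mathbf{y}-\boldsymbol{\mu})^{T}\boldsymbol{\Sigma}^{-1}(\mathbf{y}-\boldsymbol{\mu})\right\},
$$

$$
L(\boldsymbol{\theta}\mid\mathbf{y}) = f(\mathbf{y};\boldsymbol{\theta}), \qquad
l(\boldsymbol{\theta}\mid\mathbf{y}) = -\ln L(\boldsymbol{\theta}\mid\mathbf{y})
= \tfrac{n}{2}\ln(2\pi) + \tfrac{1}{2}\ln\lvert\boldsymbol{\Sigma}\rvert + \tfrac{1}{2}(\mathbf{y}-\boldsymbol{\mu})^{T}\boldsymbol{\Sigma}^{-1}(\mathbf{y}-\boldsymbol{\mu}).
$$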