The posterior mean given in (3.28) can be understood as a weighted average of the prior mean μ₀ and the sample mean x̄, which is the MLE of μ. When the sample size n is very large, the weight for x̄ is close to one, the weight for μ₀ is close to zero, and the posterior mean is very close to the MLE, the sample mean. On the other hand, when n is very small, the posterior mean is very close to the prior mean μ₀. Similarly, if the prior variance σ₀² is very large, the prior distribution has a flat shape and the posterior mean is close to the MLE. Note that because the mode of a normal distribution is equal to its mean, the MAP of μ is exactly μₙ. Consequently, when n is very large, or when the prior is flat, the MAP is close to the MLE.
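To make this weighting concrete, here is a minimal Python sketch (all numeric values are illustrative, not from the text), assuming (3.28) takes the standard conjugate-normal form μₙ = (nσ₀²x̄ + σ²μ₀)/(nσ₀² + σ²):

# Weight on the sample mean grows with n; the posterior mean moves from the
# prior mean toward the MLE (the sample mean). All values are hypothetical.
mu0, sigma0_sq = 50.0, 4.0   # prior mean and prior variance
sigma_sq = 9.0               # known data variance
xbar = 62.0                  # observed sample mean

for n in (1, 10, 100, 10000):
    w = n * sigma0_sq / (n * sigma0_sq + sigma_sq)  # weight on xbar
    mu_n = w * xbar + (1.0 - w) * mu0               # posterior mean, eq. (3.28)
    print(f"n = {n:5d}   weight on xbar = {w:.4f}   posterior mean = {mu_n:.3f}")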
Equation (3.29) shows the relationship between the posterior variance and the prior variance. The relationship is easier to understand if we consider the inverse of the variance, which is called the precision. A high (low) precision corresponds to a low (high) variance. Equation (3.29) says that the posterior precision is equal to the prior precision plus a precision contribution proportional to n, where each observation adds a contribution of 1/σ², the precision of a single observation.
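Assuming (3.29) takes the standard conjugate-normal form, the relationship written in terms of precisions is

1/σₙ² = 1/σ₀² + n/σ²,

that is, the prior precision 1/σ₀² plus n copies of the per-observation precision 1/σ².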
Figure 3.3 Posterior distribution of the mean with various sample sizes
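The qualitative behavior in Figure 3.3 can be reproduced with a short sketch. The following Python code uses numpy, scipy, and matplotlib; all numeric values are hypothetical rather than those behind the actual figure.

import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

# Sketch of Figure 3.3: the posterior of the mean concentrates around the
# sample mean as the sample size n grows. All values below are hypothetical.
mu0, sigma0_sq = 0.0, 1.0      # prior N(0, 1)
sigma_sq, xbar = 1.0, 0.8      # known data variance and sample mean

grid = np.linspace(-2.0, 2.0, 400)
for n in (0, 1, 10, 100):      # n = 0 corresponds to the prior itself
    var_n = 1.0 / (1.0 / sigma0_sq + n / sigma_sq)          # eq. (3.29)
    mu_n = var_n * (mu0 / sigma0_sq + n * xbar / sigma_sq)  # eq. (3.28)
    plt.plot(grid, norm.pdf(grid, mu_n, np.sqrt(var_n)), label=f"n = {n}")
plt.xlabel("μ")
plt.ylabel("posterior density")
plt.legend()
plt.show()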
When the data follow a p-dimensional multivariate normal distribution with unknown mean μ and known covariance matrix Σ, the posterior distribution based on a random sample of independent observations D = {x₁, x₂, …, xₙ} is given by

p(μ | D) ∝ g(μ) ∏ᵢ₌₁ⁿ f(xᵢ | μ),

where f(xᵢ | μ) denotes the density of the Nₚ(μ, Σ) distribution and g(μ) is the density of the conjugate prior distribution Nₚ(μ₀, Σ₀). Similar to the univariate case, the posterior distribution of μ can be obtained as

μ | D ∼ Nₚ(μₙ, Σₙ),

where

μₙ = Σ₀(Σ₀ + (1/n)Σ)⁻¹x̄ + (1/n)Σ(Σ₀ + (1/n)Σ)⁻¹μ₀,   (3.30)

Σₙ = (Σ₀⁻¹ + nΣ⁻¹)⁻¹,   (3.31)
where x̄ is the sample mean of the data, which is the MLE of μ. It is easy to see the similarity between the results for the univariate data in (3.28) and (3.29) and the results for the multivariate data in (3.30) and (3.31). The MAP of μ is exactly μₙ. Similar to the univariate case, when n is large, or when the prior distribution is flat, the MAP is close to the MLE.
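The update in (3.30) and (3.31) is a few lines of linear algebra. A minimal numpy sketch follows; the function name and signature are my own, not from the text.

import numpy as np

def normal_posterior(xbar, n, Sigma, mu0, Sigma0):
    # Posterior of a multivariate normal mean with known covariance Sigma.
    # Sigma_n from (3.31): posterior precision = prior precision + n * data precision.
    # mu_n is computed in precision-weighted form, which is algebraically
    # identical to the matrix-weighted average in (3.30).
    Sigma_inv = np.linalg.inv(Sigma)
    Sigma0_inv = np.linalg.inv(Sigma0)
    Sigma_n = np.linalg.inv(Sigma0_inv + n * Sigma_inv)
    mu_n = Sigma_n @ (Sigma0_inv @ mu0 + n * Sigma_inv @ xbar)
    return mu_n, Sigma_n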
One advantage of Bayesian inference is that prior knowledge can be incorporated naturally. Suppose, for example, a randomly sampled product turns out to be defective. An MLE of the defective rate based on this single observation would be equal to 1, implying that all products are defective. By contrast, a Bayesian approach with a reasonable prior gives a much less extreme conclusion; for instance, with a uniform Beta(1, 1) prior on the defective rate, the posterior mean after observing one defective product is 2/3 rather than 1. In addition, Bayesian inference can be performed in a sequential manner very naturally. To see this, we can write the posterior distribution of μ with the contribution from the last data point xₙ separated out as

p(μ | D) ∝ [g(μ) ∏ᵢ₌₁ⁿ⁻¹ f(xᵢ | μ)] f(xₙ | μ).   (3.32)
Equation (3.32) can be viewed as the posterior distribution given a single observation xₙ, with the term in the square brackets treated as the prior. Note that the term in the square brackets is just the posterior distribution (up to a normalization constant) after observing the first n − 1 data points. Equation (3.32) thus says that we can treat the posterior based on the first n − 1 observations as the prior and update it with the next observation using Bayes' theorem. This process can be repeated for each new observation. The sequential update of the posterior under the Bayesian framework is very useful when observations are collected sequentially over time.
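A short Python sketch (with hypothetical values) verifies that updating the univariate posterior one observation at a time reproduces the batch result from (3.28) and (3.29):

import numpy as np

# Sequential Bayesian updating: treat yesterday's posterior as today's prior.
# Univariate normal data with known variance; all values are hypothetical.
rng = np.random.default_rng(0)
sigma_sq = 4.0                    # known data variance
mu_prior, var_prior = 0.0, 100.0  # initial prior N(0, 100)
data = rng.normal(3.0, np.sqrt(sigma_sq), size=50)

for x in data:
    # One-observation update: posterior precision = prior precision + 1/sigma^2.
    var_post = 1.0 / (1.0 / var_prior + 1.0 / sigma_sq)
    mu_post = var_post * (mu_prior / var_prior + x / sigma_sq)
    mu_prior, var_prior = mu_post, var_post  # posterior becomes the next prior

# Batch posterior from all 50 points at once, eqs. (3.28)-(3.29):
n, xbar = len(data), data.mean()
var_batch = 1.0 / (1.0 / 100.0 + n / sigma_sq)
mu_batch = var_batch * (0.0 / 100.0 + n * xbar / sigma_sq)
print(np.isclose(mu_post, mu_batch), np.isclose(var_post, var_batch))  # True True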
Example 3.3: For the side_temp_defect data set from a hot rolling process, suppose the true covariance matrix of the side temperatures measured at locations 2, 40, and 78 of Stand 5 is known and given by
We use the nominal mean temperatures given in Example 3.2 as the mean μ₀ of the prior distribution, and a diagonal matrix with variance equal to 100 for each temperature variable as its covariance matrix:

Σ₀ = 100 I₃,

where I₃ is the 3 × 3 identity matrix.
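The posterior then follows from (3.30) and (3.31). A sketch of the computation is below; the covariance matrix, nominal means, sample mean, and sample size are placeholders, not the values from the text or Example 3.2.

import numpy as np

# Placeholder inputs: the true Sigma and the nominal means from Example 3.2
# are not reproduced in this excerpt.
Sigma = np.array([[100.0,  50.0,  30.0],
                  [ 50.0, 120.0,  40.0],
                  [ 30.0,  40.0, 110.0]])   # hypothetical known covariance
mu0 = np.array([1926.0, 1851.0, 1872.0])    # hypothetical nominal mean temperatures
Sigma0 = 100.0 * np.eye(3)                  # prior covariance: diag(100, 100, 100)

xbar = np.array([1930.0, 1848.0, 1875.0])   # hypothetical sample mean of side temps
n = 139                                      # hypothetical sample size

Sigma0_inv, Sigma_inv = np.linalg.inv(Sigma0), np.linalg.inv(Sigma)
Sigma_n = np.linalg.inv(Sigma0_inv + n * Sigma_inv)              # eq. (3.31)
mu_n = Sigma_n @ (Sigma0_inv @ mu0 + n * Sigma_inv @ xbar)       # eq. (3.30)
print("posterior mean:", mu_n)
print("posterior covariance:\n", Sigma_n)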