Many methods of trend analysis are possible. The Kalman Filter and Box-Jenkins prediction techniques use past parameter estimates to predict future parameters. Many other techniques use intuitive reasoning to form predictions.
Intuitive reasoning techniques are often used by stock brokers and meteorologists to form predictions. These techniques rely on the experience and intuition of an "expert" in the field of study. Since on-line prediction requires that all of the prediction be done by the computer, this method is of little use in predicting faults of a servo system. If expert systems (i.e., neural nets) are developed, along with an adequate increase in computer resources, then this technique could become practical. However, it is not feasible with the computer resources available at this time.
The Kalman Filter uses the state transition matrix to predict future values of the time-series. Due to the difficulty in determining the state transition matrix for a time-varying system15, the Box-Jenkins technique is used in this research.
The Box-Jenkins filter achieves prediction of future trends by using the estimated system parameters to project to the next time interval. By repeating this projection, the prediction lead can be enlarged indefinitely at the expense of prediction accuracy.
The Box-Jenkins prediction technique requires an estimation routine to measure the system parameters. In this experiment, the same LSM estimation procedure used to measure the parameters of the control plant is used to measure the system which produces the time series.
The model used for the Box-Jenkins filter can be AutoRegressive (AR(n)), Moving Average (MA(q)), or AutoRegressive-Moving Average (ARMA(n,q)), where n and q are the numbers of historical data points used by the model. The models compared in this research are the AR(p/skip) and the ARMA(1,1). In the prediction routine, p is the time-span covered by the estimation, and skip is the number of data points skipped between inputs to the filter (n = p/skip).
In the AR(p/skip) model, the skip factor is used to reduce the data processing load on the computer. The success of the prediction depends on the time-span covered by the AR(p/skip) process inputs, while the confidence intervals depend mostly on the order of the process (p/skip), the time-series variance, and the prediction lead. Using a skip factor increases the time-span without increasing the number of points in the input vector, so a greater distance can be projected by spreading out the interval of the input data without increasing the processing requirements. Since a larger skip factor leads to a larger variance of the time series, the confidence intervals widen when the skip factor is increased; if the drift is extremely slow, however, the spreading of the confidence interval is negligible. Different skip factors with the same number of points (p/skip) are demonstrated in section 4.3.
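The effect of the skip factor on the input vector can be sketched as follows. This is an illustrative Python fragment, not the thesis code; the names (`ar_input_vector`, `history`) are assumptions.

```python
# Illustrative sketch (not the thesis code): building the AR(p/skip)
# input vector from a measurement history with a skip factor.
def ar_input_vector(history, i, p, skip):
    """Return [1, Y(i-skip), Y(i-2*skip), ..., Y(i-n*skip)], n = p/skip."""
    n = p // skip
    return [1.0] + [history[i - k * skip] for k in range(1, n + 1)]

# With p = 6 and skip = 2, only three lagged samples enter the filter,
# yet they span six sampling intervals of history.
y = [float(t) for t in range(20)]
print(ar_input_vector(y, i=10, p=6, skip=2))  # [1.0, 8.0, 6.0, 4.0]
```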
In the ARMA(1,1) model, the skip factor is used to spread out the time-span of the process. In this case, the model is made up of an AR(1) and a MA(1) model. Therefore, the skip factor is simply the time-lag of the regressive input. In the case of any first order model (AR(1), MA(1), or ARMA(1,1)), the skip factor is most often used to set the time-span of sensitivity. For example, a time-series with a periodic trend would be best modeled by an ARMA(1,1) if the skip factor were set to the period of the trend. The success of the prediction is dependent on the correlation of the skip factor and the period of the time series.
The goal of model selection is to choose the model which will best predict the trend of the time-series with a minimum computation requirement.
Pankratz21 lists the characteristics of a good model, of which parsimony is the most relevant here.
In this research, the form of the time series is assumed unknown. For example, no seasonal or periodic trends are expected. Therefore, selection of a parsimonious model is not possible. Reducing the number of coefficients in the model will reduce the likelihood of reliable fault prediction. To demonstrate this point, the AR(n) model is compared to an ARMA(1,1) model.
The AR(n) model has been chosen to maximize the flexibility of the model while not adding the computational complexity of a mixed model.
The ARMA(1,1) model is parsimonious in that only two coefficients are used, one for each sub-process (AR and MA).
The ARMA(1,1) is selected to represent a parsimonious model because it contains both model forms. Its weakness is that not enough historical data enters the model to make an accurate prediction without a priori knowledge of the failure pattern.
The AR(n) model is developed by considering the time series to be the output data of an unstable regressive system that always has an input of 1. Thus the parameters of the unstable system are estimated by using the same procedure as described for the LSM parameter estimation routine. Figure 10 shows the configuration of the estimation/prediction procedure applied to an unstable system.
Figure 10, Configuration of Prediction Filter
With parameter estimates for the unstable system, the estimates are used to project the model output to the next time interval by Ym(i) = A*X(i), where A and X are defined as in the parameter estimation procedure. In this case, A is a p/skip-by-1 matrix and
X = [1 Y(i-1*skip) Y(i-2*skip) ... Y(i-n*skip)]
for the estimation stage, or
X = [1 Ym(i-1*skip) Ym(i-2*skip) ... Ym(i-n*skip)]
for the prediction stage of the prediction routine.
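The two stages can be sketched together in Python. This is a hedged illustration of the projection step Ym(i) = A*X(i), in which predictions are fed back as inputs to extend the lead; all names are illustrative, not from the original code.

```python
# Sketch (not the thesis code) of iterated AR projection: the parameter
# estimates A project the model output one interval ahead, and each
# prediction is appended so it can serve as a lagged input for the next.
def project(A, history, skip, lead):
    """Iterate Ym(i) = A*X(i), with X = [1 Ym(i-skip) Ym(i-2*skip) ...]."""
    n = len(A) - 1                      # number of lagged inputs (p/skip)
    buf = list(history)                 # measured series, most recent last
    for _ in range(lead):
        x = [1.0] + [buf[-k * skip] for k in range(1, n + 1)]
        buf.append(sum(a * xi for a, xi in zip(A, x)))
    return buf[-lead:]                  # the lead predicted values

# A degenerate first-order example: A = [0, 1] simply holds the last value.
print(project([0.0, 1.0], [1.0, 2.0, 3.0], skip=1, lead=2))  # [3.0, 3.0]
```

Each repetition of the loop enlarges the prediction lead by one interval, at the cost of compounding any model error, as noted above.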
The ARMA(1,1) model is developed by considering the time series to be the output data of an unstable regressive moving average system that always has an input of 1. Thus the parameters of the unstable system are estimated in two stages. The first estimation stage estimates the regressive parameters by using the same procedure as described for the LSM parameter estimation routine. The second stage computes the moving average parameters by using an LSM input vector made up of output differences rather than regressive inputs. The outputs of the two estimated models are then combined to form the ARMA model. Figure 11 shows the configuration of the estimation/prediction procedure applied to an unstable system.
Figure 11, Configuration of Prediction Filter
With parameter estimates for the unstable system, the estimates are used to project the model output to the next time interval by
Ym(i) = mY*(1-A2) + A2*Y(i-skip) - Th2*At(i-skip),
where
mY is the average of the time-series, calculated recursively,
A2 is the Y(i-skip) regressive parameter estimate,
Th2 is the Y(i-skip)-Y(i-2*skip) moving average parameter estimate, and
At is the random shock or error from the previous estimate.
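A minimal sketch of this one-step projection, assuming mY, A2, Th2, and the previous shock are available from the estimation stage (all values illustrative):

```python
# Sketch of the ARMA(1,1) projection above; all numbers are illustrative.
def arma_predict(mY, A2, Th2, y_prev, at_prev):
    """Ym(i) = mY*(1-A2) + A2*Y(i-skip) - Th2*At(i-skip)."""
    return mY * (1.0 - A2) + A2 * y_prev - Th2 * at_prev

# With A2 = 0.9 the prediction pulls Y(i-skip) toward the series mean mY,
# corrected by the previous shock:
ym = arma_predict(mY=10.0, A2=0.9, Th2=0.4, y_prev=12.0, at_prev=0.5)
# 10*0.1 + 0.9*12 - 0.4*0.5 = 11.6
```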
A confidence analysis of the model-based predictions is necessary to determine the certainty and meaning of the prediction. In a fault prediction system, false declaration of a fault would destroy the credibility of the built-in test system. Therefore, a fault should only be declared if its certainty is greater than an acceptable limit (95% assumed here).
In order to estimate or predict a failure, a priori knowledge of the servo's required second-order system model and parameter tolerances is needed. With minimum and maximum acceptable values for each parameter, the parameter predictions can be compared to the set thresholds. If the prediction minus the confidence interval exceeds the maximum threshold, then a fault prediction should be issued. Likewise, if the prediction plus the confidence interval is less than the minimum threshold, a fault prediction should be issued.
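The declaration rule reduces to a simple comparison. The following sketch uses illustrative threshold values, not the servo tolerances of this research:

```python
# Sketch of the fault-declaration rule described above: a fault is
# predicted only when the entire confidence band clears a threshold.
def fault_predicted(pred, conf, p_min, p_max):
    """Declare a fault only if the prediction is past a threshold by
    more than the confidence interval (95% certainty assumed)."""
    return (pred - conf > p_max) or (pred + conf < p_min)

# Illustrative thresholds: a parameter band of [0.5, 1.2].
print(fault_predicted(pred=1.30, conf=0.05, p_min=0.5, p_max=1.2))  # True
print(fault_predicted(pred=1.22, conf=0.05, p_min=0.5, p_max=1.2))  # False
```

The second case shows the conservatism of the rule: the prediction exceeds the maximum threshold, but the confidence band still straddles it, so no fault is declared.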
The confidence interval is derived from the variance of the error during the estimation stage, and from the value of the parameter estimates during the prediction stage. During the prediction stage, the error probability accumulates for each increment of the prediction lead.
To implement confidence estimates, the shock coefficients are needed to measure the variance of the model output error. The shock coefficients are determined by equating like powers in an expansion of the system model as a series of shocks with weights ψ. For example,
Y(i+L) = mY + ψ(L)*a(t) + ψ(L+1)*a(t-1) + ψ(L+2)*a(t-2) + ...
is equated to
Ym(i+L) = prediction model equation,
where ψ is the weight given to each random shock, and a is the random shock of the input noise.
For the ARMA case, the shock coefficients derived from the parameter estimates are
ψ(0) = 1, and
ψ(i) = A(2)^(i-1)*(A(2)-Th(2)), for i = 1...L,
where L is the prediction lead.
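These coefficients are direct to compute. A hedged Python sketch, with A2 and Th2 standing in for the estimates A(2) and Th(2):

```python
# Sketch of the ARMA(1,1) shock coefficients: psi(0) = 1 and
# psi(i) = A(2)^(i-1)*(A(2)-Th(2)) for i = 1..L. Names are illustrative.
def arma_shock_weights(A2, Th2, L):
    return [1.0] + [A2 ** (i - 1) * (A2 - Th2) for i in range(1, L + 1)]

w = arma_shock_weights(A2=0.5, Th2=0.2, L=3)
# w is approximately [1.0, 0.3, 0.15, 0.075]
```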
For the AR(n) case, the shock coefficients are computed from the parameter estimates iteratively. The coefficients are efficiently calculated by:
if L>n, jmax=n; else, jmax=L; end    % limit the shock vector to the order of the AR(n)
shk(1)=1;                            % psi(0) = 1
for j=2:jmax,                        % calculate shock vector
    shk(j)=0;
    for k=2:j, shk(j)=shk(j)+A(k)*shk(j-k+1); end
end
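The same recursion can be written in Python for clarity. Here `A[0]` is taken as the constant-input term (which carries no shock) and `A[1..n]` as the AR coefficient estimates; this indexing is an assumption for illustration, not a transcription of the thesis code.

```python
# Sketch of the AR(n) shock recursion: psi(0) = 1 and
# psi(j) = sum over k of A[k]*psi(j-k), taken over the available lags.
def ar_shock_weights(A, L):
    n = len(A) - 1                 # order of the AR(n) model
    psi = [1.0]                    # psi(0) = 1
    for j in range(1, L):
        psi.append(sum(A[k] * psi[j - k] for k in range(1, min(j, n) + 1)))
    return psi                     # psi(0) ... psi(L-1)

print(ar_shock_weights([0.0, 0.5, 0.25], 4))  # [1.0, 0.5, 0.5, 0.375]
```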
The variance of the model error is given by the expected value of the error squared. In random shock form21, the model is
Y(i+L) = mY + ψ(0)*a(t+L) + ψ(1)*a(t+L-1) + ... + ψ(L-1)*a(t+1) + ψ(L)*a(t) + ψ(L+1)*a(t-1) + ...
Therefore, the model error is given by
et(L) = ψ(0)*a(t+L) + ψ(1)*a(t+L-1) + ... + ψ(L-1)*a(t+1).
From the model error, the error variance is
S2e(L) = S2a*(1 + ψ(1)^2 + ψ(2)^2 + ... + ψ(L-1)^2).
Assuming the random shocks are normally distributed, the confidence interval is given by21
conf(i) = t(100-s/2, m) * Se(L),
where t(100-s/2, m) is the Student's t distribution:
t = 1.96, for a 95% confidence interval, or
t = 1.28, for an 80% confidence interval.
By using N > 50 to determine the forgetting factor in the estimation, and to determine the required number of estimates needed prior to entering a prediction stage, the value t(100-s/2, ∞) = 1.96 is appropriate12.
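Putting the variance formula and the t value together, the interval computation can be sketched as follows, assuming the ψ weights and the shock variance S2a are available from the earlier stages (names illustrative):

```python
import math

# Sketch combining the results above: the lead-L error variance
# S2e(L) = S2a*(1 + psi(1)^2 + ... + psi(L-1)^2), scaled by the t value
# (1.96 for a 95% confidence interval).
def confidence_interval(psi, S2a, L, t=1.96):
    S2e = S2a * sum(p * p for p in psi[:L])    # psi[0] = 1
    return t * math.sqrt(S2e)

# With psi = [1, 0.5] and shock variance 0.04, the lead-2 interval is
# 1.96*sqrt(0.04*(1 + 0.25)), roughly 0.438.
print(confidence_interval([1.0, 0.5], 0.04, 2))
```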
Note that this confidence interval is the same as the confidence interval for the estimation stage, with the lead L set to 1.