Example of Field to Predicted Data
We now discuss, using some specific examples, several important issues regarding the field operating data and reliability prediction models. These issues are particularly relevant when dealing with reliability prediction, test, and life data.
For example, in the late 1980s, a study was conducted that compared predicted and field MTBFs in an attempt to quantify the uncertainty associated with such reliability predictions. This study was a "snapshot" in which both predicted and field system MTBF data were analyzed.
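A snapshot comparison of this kind is naturally summarized by the ratio of predicted to field MTBF for each system. Because ratios scatter multiplicatively, the geometric mean and a multiplicative spread describe the data better than an arithmetic mean. The sketch below illustrates the arithmetic only; the MTBF values are invented placeholders, not figures from the RADC study.

```python
import math
import statistics

def ratio_spread(predicted_mtbf, field_mtbf):
    """Summarize predicted-to-field MTBF ratios across systems.

    Returns the geometric mean ratio and a multiplicative spread
    (one log-standard-deviation factor) -- the natural way to
    describe scatter in ratio data.
    """
    log_ratios = [math.log(p / f) for p, f in zip(predicted_mtbf, field_mtbf)]
    geo_mean = math.exp(statistics.mean(log_ratios))
    spread = math.exp(statistics.stdev(log_ratios))
    return geo_mean, spread

# Hypothetical snapshot data in hours (NOT from the RADC study).
predicted = [1200.0, 800.0, 2500.0, 600.0, 1500.0]
field = [900.0, 1100.0, 1000.0, 450.0, 700.0]

gm, sd = ratio_spread(predicted, field)
print(f"geometric mean predicted/field ratio: {gm:.2f}")
print(f"multiplicative spread (one sigma factor): {sd:.2f}")
```

A geometric mean well above 1 with a large spread factor would indicate predictions that are both optimistic on average and highly variable system to system, which is the kind of conclusion the snapshot study was after.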
Because of the fragmented nature of the part and environmental data used in this study, and because it was often necessary to interpolate or extrapolate from the available data when developing new models, the statistical confidence intervals associated with the overall (combined) model results are greatly compromised. In addition to the variability introduced in developing the models, there is human variability in the prediction and judgment assumptions made, including decisions about whether to include or exclude particular field failures and how failures are defined. As a result, the validity of the confidence interval assumptions, and therefore of the resulting confidence levels, can be seriously questioned.
The original data used to develop the confidence intervals was based on approximately 200 reliability predictions performed during the 1970s and 1980s and documented in a study sponsored by Rome Air Development Center (RADC) entitled "Reliability and Maintainability Operational Parameter Translation II," RADC-TR-89-299. It should also be remembered that the predictions performed on these 200 systems were developed a number of years ago, by a wide range of individuals, under many different assumptions. In addition, operating modes and other factors at that time may have been very different from what they are today, which makes combining such data sets especially problematic.
The field MTBFs used in the study introduce still more variability, with a wide range of operating hours, failure counts and maintenance policies for each system. The study results could therefore well be different if reconstructed today using a statistical analysis approach such as the one presented in this report. The study serves only to provide a notion of the variability possible across a wide range of systems, companies, individuals and field maintenance policies. The results could be much better if, say, a single experienced reliability engineer had applied a standard prediction tool over a long period of time, and field failure counting practices had been comparably consistent. But such information is not available.
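The effect of failure counting policy on a field MTBF is easy to demonstrate. In the sketch below, the operating hours and event records are invented for illustration; the point is that two analysts applying different, equally defensible counting policies to the same field data arrive at materially different MTBFs.

```python
# Hypothetical field data: total fleet operating hours and the observed
# maintenance events. Whether "induced" or "no fault found" events are
# chargeable failures is a policy judgment that differs between programs.

TOTAL_OPERATING_HOURS = 50_000.0

# Each record: (cause, chargeable under a strict counting policy?)
events = [
    ("part failure", True),
    ("part failure", True),
    ("induced/mishandling", False),  # excluded by a strict policy
    ("no fault found", False),       # excluded by a strict policy
    ("part failure", True),
]

def field_mtbf(hours, events, count_all):
    """MTBF = operating hours / counted failures, under a chosen policy."""
    failures = sum(1 for _, chargeable in events if count_all or chargeable)
    return hours / failures

mtbf_all = field_mtbf(TOTAL_OPERATING_HOURS, events, count_all=True)
mtbf_strict = field_mtbf(TOTAL_OPERATING_HOURS, events, count_all=False)
print(f"all events counted:     {mtbf_all:.0f} h")    # 10000 h
print(f"strict chargeable only: {mtbf_strict:.0f} h") # 16667 h
```

Here the same fleet data yields field MTBFs differing by two thirds, purely from the counting policy, which is exactly the human variability the study could not control for.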
Part failure models in MIL-HDBK-217, Telcordia, PRISM® and other reliability prediction techniques are based on part data from numerous sources, environments and time epochs. Complete models are never developed under a single study contract, and the failure data do not come from a single source. For example, all MIL-HDBK-217 environmental factors were developed under study efforts separate from the one in which the part failure models were developed. Statistical studies for combining these data were never performed, so incompatibilities in the data sets were never identified.
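To see how separately developed factors combine in such a prediction, consider a minimal series-system sketch in the style of the MIL-HDBK-217 part stress method. The base failure rates and pi factors below are made-up placeholders, not handbook values, and a single environmental factor is applied uniformly as a simplification; the structure shown (a per-part rate of the form base rate times quality and environmental factors, summed over all parts) is what makes any incompatibility between the separately derived factors propagate directly into the system result.

```python
# Hypothetical environmental factor, applied uniformly for simplicity.
PI_E = 4.0

# part type: (quantity, base failure rate in failures per 1e6 h, pi_Q)
# All values are illustrative placeholders, not MIL-HDBK-217 entries.
parts = {
    "resistor":   (120, 0.002, 1.0),
    "capacitor":  (60,  0.005, 1.0),
    "ic_digital": (15,  0.050, 2.0),
    "connector":  (4,   0.100, 1.5),
}

# Series assumption: any part failure is a system failure, so rates add.
lam_system = sum(n * lam_b * pi_q * PI_E for n, lam_b, pi_q in parts.values())
mtbf_hours = 1e6 / lam_system  # convert failures per 1e6 h to hours
print(f"predicted system failure rate: {lam_system:.2f} per 1e6 h")
print(f"predicted MTBF: {mtbf_hours:.0f} h")
```

Because every part rate is multiplied by factors that were derived in separate studies, an error or incompatibility in any one factor scales the entire system prediction, with no statistical basis for bounding the combined result.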
In addition, adding vendor and field failure rate data to the combination results in a mixed prediction that may or may not represent the "new" design. Outside data sources usually come from units or components that were developed previously; these can be similar to the new design but may use different technologies and, hence, have an indeterminable correlation to the "new" design.