There are a number of different approaches to the reliability prediction of electronic systems and equipment. Each approach has unique advantages and disadvantages.
The estimation of product reliability requires knowledge of the components, the design, the manufacturing processes and the expected operating conditions. Empirical prediction techniques, based on modeling past experience and data, give good reliability estimates for similar or modified products but may not predict well for new products using new technologies or operating in new environmental conditions. Deterministic physics-of-failure techniques may predict wearout or end-of-life reliability accurately, but they are often difficult to use and do not predict failures in the other domains. Field operational data on the same or similar products provide the best estimate of a product's reliability, but such data are difficult and expensive to collect or obtain.
The methodologies available for predicting reliability are summarized in Table 1. Check marks indicate the periods of time over which each method is effective. The relative rank ordering of effectiveness is determined by experience data and by the number of periods for which the methodology is appropriate.
Obtaining a specific number should never be the sole purpose of a reliability prediction; rather, identifying and controlling the factors affecting reliability should be considered even more important than the predicted number itself.
Test or Field Data Based Predictions
Reliability predictions for modified or off-the-shelf products often make use of existing equipment (or assembly) designs or designs adapted to a particular application. Table 2 summarizes the data needed for reliability analyses based on test or field data.
Table 1. Reliability Prediction Methodologies

Test or Field Data: In-house test or operational data are used to estimate the reliability of the product based on failures and time.

System Reliability Assessment: Consolidated assessment technique that combines predictions, process grading, operational profiles, software and test data using Bayesian techniques.

Similar Item Data: Based on empirical reliability field failure data from similar products operating in similar environments. Uses generic data from other organizations.

Operational Translation: Translates a reliability prediction based on an empirical model to an estimated field reliability value. Implicitly accounts for some factors affecting field reliability that are not explicitly accounted for in the empirical model.

Empirical Prediction: Typically relies on observed failure data to quantify part-level empirical model variables. The premise is that valid failure rate data are available.

Physics of Failure: Models each failure mechanism for each component individually. Component reliability is determined by combining the probability density functions associated with the individual failure mechanisms.
Table 2. Use of Existing Reliability Data

Data sources: Product Field Data, Product Test Data, Piece Part Data

Data elements required:
- Data collection time period
- Number of operating hours per product
- Total number of part hours
- Total number of observed maintenance actions
- Number of "no defect found" maintenance actions
- Number of induced maintenance actions
- Number of "hard failure" maintenance actions
- Number of observed failures
- Number of relevant failures
- Number of nonrelevant failures
Assuming an exponential time-to-failure distribution (a reasonable assumption for electronic components), predicting the product's reliability is simply a matter of determining the operating hours and the types of failures expected. The failure rate of the product can be determined from the following equation:
Failure Rate = Number of Failures / Operating Time
The advantage of predicting from field and test data is that the reliability results can be accurately determined, including the associated uncertainty of the estimate. The disadvantage is the difficulty of obtaining and assessing accurate field and test data.
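This calculation can be sketched as follows; the failure count and part hours below are hypothetical illustration values, not data from the source.

```python
# Point-estimate failure rate from field data, assuming an exponential
# time-to-failure distribution (constant failure rate).
def failure_rate(relevant_failures: int, total_part_hours: float) -> float:
    """Failures per hour; counts only relevant failures (induced and
    'no defect found' maintenance actions are excluded)."""
    if total_part_hours <= 0:
        raise ValueError("total part hours must be positive")
    return relevant_failures / total_part_hours

# Hypothetical field data: 14 relevant failures over 2,000,000 part hours.
lam = failure_rate(14, 2_000_000)
mtbf = 1 / lam  # mean time between failures, in hours
print(f"failure rate = {lam:.2e} failures/hour, MTBF = {mtbf:,.0f} hours")
```

Note that the distinction between relevant and nonrelevant failures in Table 2 matters here: only relevant failures enter the numerator.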
System Reliability Assessment Prediction
The System Reliability Assessment Predictive Modeling approach takes place in two successive stages, as shown in Figure 1. First, the system pre-build model is developed using a consolidated reliability assessment method. This method combines process grading factors with the operating profile and the initial reliability prediction. This forms the best estimate of product reliability. The second step consolidates the best estimate with system test and process data using Bayesian statistical techniques. This is the underlying methodology utilized in the 217Plus™ (Reference 1) and the FIDES (Reference 2) reliability assessment methods.
*Note: 217Plus™ was previously known as PRISM when it was owned by the RAC. This document has since been updated, and all references to PRISM have been changed to 217Plus™.
Figure 1. System Reliability Assessment Modeling Approach (system pre-build and post-build phases)
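The Bayesian consolidation step can be illustrated with a simple conjugate update, in which the pre-build estimate acts as a prior that is weighted against observed test data. This is a generic sketch of the idea only; it is not the actual 217Plus™ or FIDES algorithm, and all numbers are hypothetical.

```python
# Illustrative Bayesian update of a predicted failure rate with test data,
# using a gamma-Poisson conjugate prior. Generic sketch only; not the
# actual 217Plus(TM) or FIDES consolidation algorithm.
def bayes_update(prior_rate: float, prior_equiv_hours: float,
                 test_failures: int, test_hours: float) -> float:
    """Posterior mean failure rate (failures/hour).

    The prior prediction is treated as a gamma distribution with shape
    a0 = prior_rate * prior_equiv_hours (equivalent prior failures) and
    rate b0 = prior_equiv_hours (equivalent hours of prior evidence).
    """
    a0 = prior_rate * prior_equiv_hours
    b0 = prior_equiv_hours
    return (a0 + test_failures) / (b0 + test_hours)

# Hypothetical numbers: a prediction of 10 failures per million hours,
# weighted as 50,000 equivalent hours, updated with a system test that
# produced 2 failures in 100,000 hours.
post = bayes_update(10e-6, 50_000, 2, 100_000)
print(f"posterior failure rate = {post:.2e} failures/hour")
```

The more test hours accumulated, the more the posterior is dominated by the observed data rather than the pre-build prediction.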
Similar Item/Circuit Prediction
This method starts with the collection of past experience data on similar products. The data is evaluated for form, fit and function compatibility with the new product. If the new product is an item that is undergoing a minor enhancement, the collected data will provide a good basis for comparison to the new product. Small differences in operating environment or conditions can be accounted for by using translation methods based on previous experiences. If the product does not have a direct similar item, then lower level similar circuits can be compared. In this case, data for components or circuits is collected and a product reliability value is calculated. The general expression for product reliability calculated from its constituent components using the similar item method is:
Rp = R1 × R2 × ... × Rn

where:
Rp = Product reliability
R1, R2, ..., Rn = Reliabilities of the constituent components
The advantage of the similar item prediction method is that it is the quickest way to estimate a new product's reliability, and it is applicable when there is limited design information, e.g., very early in the design phase. The disadvantage is that the new product may actually be substantially different from the similar item, resulting in incorrect or inaccurate predictions.
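The series product above is straightforward to compute; the component reliabilities below are hypothetical values standing in for similar-item data.

```python
import math

# Product reliability as the product of component (or circuit)
# reliabilities, per the similar item method's series assumption.
def product_reliability(component_reliabilities):
    return math.prod(component_reliabilities)

# Hypothetical circuit-level reliabilities taken from similar-item data.
r = product_reliability([0.99, 0.995, 0.98])
print(f"product reliability = {r:.4f}")
```

Because the terms multiply, the product reliability is always lower than that of its least reliable constituent.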
Prediction by Operational Translation
It has long been known that failure rate prediction models derived from empirical data yield estimates that deviate from the actually observed failure rates. Field (operational) reliability differs from inherent or predicted reliability because empirical models assess only inherent component reliability, whereas the reliability of systems in field operation includes all failure causes: induced failures, problems resulting from inadequate design, system integration problems, manufacturing defects, etc. Since the intent is to assess total system reliability, including all factors that can affect it, a translation may be necessary to convert the empirically predicted failure rate to an expected field failure rate. Specific techniques and models for determining the translation factors may be found in Appendix A of the RIAC publication "Reliability Toolkit: Commercial Practices Edition" (Reference 3) or in RADC-TR-89-299, "Operational Parameter Translation" (Reference 4).
The advantages of this technique are the ease of use and application of environmental factors for harsh conditions. The disadvantage is the lack of up-to-date empirical data and the limited number of translation scenarios.
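The translation itself reduces to scaling the predicted rate by an environment- and application-specific factor. The factor value below is purely illustrative; actual factors must be taken from sources such as the RIAC Reliability Toolkit (Reference 3) or RADC-TR-89-299 (Reference 4).

```python
# Operational translation: scale an empirically predicted failure rate
# to an expected field failure rate. The factor of 2.5 below is an
# illustrative placeholder, not a published translation factor.
def translate_to_field(predicted_rate: float, translation_factor: float) -> float:
    return predicted_rate * translation_factor

field_rate = translate_to_field(5e-6, 2.5)
print(f"expected field failure rate = {field_rate:.2e} failures/hour")
```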
Empirical Model Prediction Techniques
Empirical models are those that have been developed from historical reliability databases. The data can come either from fielded applications or from laboratory tests. Because of the manner in which these models are developed, their relevance is a function of the data used to derive them. Therefore, reliability predictions will vary with the specific empirical prediction methodology used, because the empirical data on which each is based were collected from different sources and environments. The methodology, the source of the models and the type of data utilized are shown in Table 3.
* For 217Plus™ this is not simply a choice between two different methods, but rather a continuum between the two extremes, determined by the number of program default values selected rather than unique values inserted.
Part Count Prediction
The parts count method is generally used to analyze electronic circuits in the early design phase, when the number and type of parts in each class (such as capacitor, resistor, transistor, microcircuit, etc.) are known and the overall design complexity is likely to change during later phases of design/development. The method starts with a listing of the part types and their expected quantities. Reliability data are then taken from source books or software programs such as MIL-HDBK-217 (Reference 6), 217Plus™ (Reference 1) and TELCORDIA SR-332 (Reference 7). Failure rates, part quantities and adjustment factors are multiplied, and the results for each part type are summed to determine the product reliability. (If 217Plus™ is used, accepting all default values is equivalent to the "Parts Count" method.) This method assumes that the times-to-failure of the parts are exponentially distributed. The general expression for a product failure rate using this method is:
λproduct = Σ (i = 1 to n) Ni (λG πA)i

where:
λproduct = Total failure rate (failures per unit time)
λG = Generic failure rate for the ith generic part
πA = Adjustment factor for the ith generic part (quality factor, temperature factor, environmental factor)
Ni = Quantity of the ith generic part
n = Number of different generic part categories
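The summation above can be sketched directly; the parts list, generic rates and adjustment factors below are hypothetical illustration values, not handbook data.

```python
# Parts count prediction: sum over generic part categories of
# quantity x (generic failure rate x adjustment factor).
def parts_count_failure_rate(parts):
    """parts: iterable of (quantity, generic_rate, adjustment_factor)
    tuples; returns the total failure rate in the rate's units."""
    return sum(n * lam_g * pi_a for n, lam_g, pi_a in parts)

# Hypothetical parts list with illustrative rates (failures per 10^6
# hours) and combined adjustment factors (quality x temp x environment).
bom = [
    (120, 0.002, 1.5),   # resistors
    (80,  0.004, 2.0),   # capacitors
    (10,  0.050, 1.2),   # microcircuits
]
lam_product = parts_count_failure_rate(bom)
print(f"product failure rate = {lam_product:.3f} failures per 10^6 hours")
```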
Detailed Stress Prediction
The part stress analysis method is used in the detailed design phase when individual part level information and design stress data are available. The method requires the use of defined models that include electrical and mechanical stress factors, environmental factors, duty cycles, etc. Each of these factors must be known, or be capable of being estimated, so that the effects of those stresses on the part failure rates can be evaluated. Table 4 shows several major factors which influence device reliability.
Table 4. Major Influence Factors on Device Reliability
Switches & Relays: number of activations
As an example, a stress-temperature failure rate plot is shown in Figure 2. As can be seen from the plot, the failure rate increases as the temperature goes up, or as the applied stress (voltage) increases.
Figure 2. Trimmer Ceramic Capacitor Failure Rates/Stress Plot from MIL-HDBK-217
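The joint effect of temperature and voltage stress can be illustrated with an Arrhenius temperature factor combined with an inverse-power-law stress factor. The activation energy, exponent and reference temperature below are generic placeholders, not actual MIL-HDBK-217 model parameters.

```python
import math

# Illustrative stress model: Arrhenius temperature factor times an
# inverse-power-law voltage-stress factor. Constants are generic
# placeholders, not MIL-HDBK-217 parameters.
K_BOLTZMANN = 8.617e-5  # Boltzmann constant, eV/K

def stress_factor(temp_c: float, stress_ratio: float,
                  ea_ev: float = 0.4, n: float = 5.0,
                  ref_temp_c: float = 25.0) -> float:
    """Multiplier on a base failure rate for a given operating
    temperature (deg C) and applied/rated voltage stress ratio."""
    t = temp_c + 273.15
    t_ref = ref_temp_c + 273.15
    pi_t = math.exp((ea_ev / K_BOLTZMANN) * (1 / t_ref - 1 / t))
    pi_s = stress_ratio ** n
    return pi_t * pi_s

# Failure rate rises with both temperature and applied voltage stress:
low = stress_factor(40.0, 0.3)
high = stress_factor(85.0, 0.6)
print(f"factor at 40C/30% stress = {low:.3g}, at 85C/60% stress = {high:.3g}")
```

This reproduces the qualitative behavior of Figure 2: the predicted failure rate climbs with temperature at any fixed stress ratio, and with stress ratio at any fixed temperature.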
The advantage of empirical prediction is its ease of use, as models for the various components already exist in the literature. The disadvantage is that the underlying database may be outdated, resulting in inaccurate estimates for new-technology components.
Physics-of-Failure Prediction

The objective of any physics-of-failure analysis is to determine or predict when a specific end-of-life failure mechanism will occur for an individual component in a specific application. A physics-of-failure prediction looks at each individual failure mechanism, such as electromigration, solder joint cracking, die bond adhesion, etc., to estimate the probability of component wearout within the useful life of the product. This analysis requires detailed knowledge of all material characteristics, geometries, and environmental conditions. Specific models for each failure mechanism are available from a variety of reference books (Reference 8).
The advantage of the physics-of-failure approach is that accurate predictions using known failure mechanisms can be performed to determine the wearout function. The disadvantage is that this method requires access to component manufacturers' material, process, and design data. In addition, the actual calculations and analysis are complicated activities requiring knowledge of materials, processes, and failure mechanisms.
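As a concrete example of a mechanism-level model, electromigration life is commonly described by Black's equation, MTTF = A · J^(-n) · exp(Ea / kT). The constants below (A, n, Ea) are technology dependent; the values used here are illustrative placeholders, not data for any real process.

```python
import math

# Black's equation for electromigration median time to failure:
#   MTTF = A * J^(-n) * exp(Ea / (k * T))
# A, n and Ea are technology-dependent; the defaults below are
# illustrative placeholders only.
K_BOLTZMANN = 8.617e-5  # Boltzmann constant, eV/K

def black_mttf(current_density: float, temp_k: float,
               a: float = 1e3, n: float = 2.0, ea_ev: float = 0.7) -> float:
    """Median time to failure from electromigration, in the units
    implied by the (placeholder) constant A."""
    return a * current_density ** (-n) * math.exp(ea_ev / (K_BOLTZMANN * temp_k))

# Raising junction temperature from 358 K to 398 K shortens the
# predicted electromigration life:
print(black_mttf(1.0e6, 358.0), black_mttf(1.0e6, 398.0))
```

A full physics-of-failure assessment combines such per-mechanism results, as noted in Table 1, rather than relying on any single model.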
Comparison of Popular Prediction Tools

A comparison of some of the more popular electronic reliability prediction tools is shown in Table 5.
In general, a reliability prediction cannot be linked to a specific confidence interval, as might be done with a demonstration test or when measuring failure rates from field returns (References 9, 10, and 11). The primary reasons for this inability to define a confidence interval are:
Reliability prediction models, including 217Plus™, FIDES Guide and MIL-HDBK-217, are typically based on part data gathered from a variety of sources. Complete models are not usually developed from a single data source.
In some cases, while it might be possible to calculate a confidence interval for some basic part failure rate, it is practically impossible to predict the confidence interval for all of the modifying parameters, even when they are based upon well known and widely used physical acceleration laws, e.g., Arrhenius or the Inverse Power Law.
In addition to the variability associated with developing the models, there is also human variability involved in making prediction assumptions, analyzing the data, counting of field failures, and even in the failure definitions themselves.
Thus, because of the fragmented nature of the part and environmental data and the fact that it is usually necessary to interpolate or extrapolate from available data when developing new models, no statistical confidence intervals should be associated with the overall model results for any given prediction.
References

217Plus™, "System Reliability Assessment Software Tool," Reliability Information Analysis Center (RIAC), 1999.
FIDES Guide 2004, Issue A, Reliability Methodology for Electronic Systems, September 2004.
Reliability Toolkit: Commercial Practices Edition, Reliability Information Analysis Center (RIAC), 1995.
RADC-TR-89-299, "Reliability and Maintainability Operational Parameter Translation," 1989.
IEC 62380 TR Ed. 1.0 "Reliability Data Handbook - A universal model for reliability prediction of Electronic components, PCBs and equipment", August 2004.
SR-332, Issue 1 "Reliability Prediction Procedure for Electronic Equipment," TELCORDIA, May 2001.
Pecht M., "The Reliability Physics Approach to Failure Prediction Modeling," Quality and Reliability Engineering International, Vol. 6, 1990.
About the Author
* Note: The following information about the author(s) is the same as in the original document and may no longer be correct.
Norman B. Fuqua is a Senior Engineer with Alion Science and Technology. He has 44 years of varied experience in the field of dependability, reliability, and maintainability and has applied these principles to a variety of military, space, and commercial programs. At Alion Science and Technology, and its predecessor IIT Research Institute (IITRI), he has been responsible for reliability and maintainability training and for the planning and implementation of various dependability, reliability, and maintainability study programs.
Mr. Fuqua developed unique distance learning Web-based and Windows™-based computer-aided reliability training courses. He is the developer and lead instructor for the Reliability Analysis Center's (RIAC) popular Electronic Design Reliability Training Course. This three-day course has been presented over 200 times to some 7,000 students in the US, England, Denmark, Norway, Sweden, Finland, Germany, Israel, Canada, Australia, Brazil, and India. Audiences have included space, military, industrial, and commercial clients.
He was also the lead developer and instructor for a two-day Dependability Training Course for an Automotive Supplier and a three-day Robust Circuit Design Training Course. These courses enable mechanical and electronic design engineers and reliability engineers to utilize advanced software-based tools in producing designs that exhibit minimum sensitivity to both internal and external variations. Mr. Fuqua holds a Bachelor of Science degree in Electrical Engineering from the University of Illinois, Urbana Illinois, is a Registered Professional Engineer (Quality Engineer) in California (retired), a Senior Member of the IEEE and the IEEE Group on Reliability, and an American Society for Quality (ASQ) Certified Reliability Engineer (CRE).
He is a former Member of the Editorial Board, "Electrical and Electronics Series," for Marcel Dekker Inc., and the Education and Training Editor for the "SAE Communications in Reliability, Maintainability and Supportability Journal." He is also a former Member of the EOS/ESD Association, and Chairman of three different EOS/ESD Association Standards Committees.
He is the author of a number of technical papers, twenty RIAC publications and a reliability college textbook published by Marcel Dekker Inc.