Statistics - A Reliability Engineer's Tool, Not Reliability Engineering
Consider the following three attempts at statistical humor:
Three statisticians go deer hunting with bows and arrows. They spot a big buck and take aim. One shoots and his arrow flies off three meters to the right. The second shoots and his arrow flies off three meters to the left. The third statistician jumps up and down yelling, "We got him! We got him!"
Statistics are like a bikini. What they reveal is suggestive, but what they conceal is vital. (Attributed to Aaron Levenstein.)
One day there was a fire in a wastebasket in the dean's office, and in rushed a physicist, a chemist, and a statistician. The physicist immediately starts to work on how much energy would have to be removed from the fire to stop the combustion. The chemist works on which reagent would have to be added to the fire to prevent oxidation. While they are doing this, the statistician is setting fires to all the other wastebaskets in the office. "What are you doing?" they demanded. "Well, to solve the problem, obviously you need a large sample size," the statistician replies.
You may or may not find these amusing. Whatever they may lack in humor, however, they do serve to illustrate an important truth about statistics: without a thorough understanding of the problem, of the underlying assumptions of the statistical analysis, and of the limitations of statistics, a person can easily be misled.
Statistics is one of the many tools used in reliability engineering. It is essentially the system of measurement for reliability, and, as has been said, without measurement there is no science. Unfortunately, statistics is often seen as equivalent to, even the essence of, reliability. Nothing could be further from the truth. Reliability is first and foremost an engineering discipline.
The inherent reliability of a product (hardware, software, or service) is determined by its design and the way in which that design is implemented. For hardware, the implementation takes the form of manufacturing processes and, in some cases, installation. Improving the inherent reliability characteristics of a product after design is complete is at best expensive and usually problematic. So it is in designing the product and the processes for implementing the design that we must address reliability.
Design is the province of engineering, which includes systems engineers, safety engineers, logistics engineers, industrial engineers, and reliability engineers. The process of designing for reliability is straightforward. It consists of the following steps:
- Develop good design requirements from the customer's needs.
- Allocate those requirements to the lower levels of design.
- Identify and analyze failure mechanisms.
- Redesign to eliminate, reduce the occurrence of, or reduce the effects of failures.
- Verify the effectiveness of design changes.
- Validate the level of reliability achieved in design.
- Ensure that design reliability is not compromised during manufacture and installation.
In each of these steps, statistics can help us in many ways: to understand the distribution of times to failure, to evaluate the probability of occurrence and thereby prioritize failures, and to measure the level of reliability achieved. It is through hard engineering, however, that reliability is designed into the product, not through statistical analysis.
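As a small illustration of the first of these uses, the sketch below fits the simplest time-to-failure model, an exponential (constant failure rate) distribution, to a handful of test times. The data, the choice of model, and the 100-hour mission time are all assumptions invented for the example, not anything prescribed by the text.

```python
import math
from statistics import mean

# Hypothetical times to failure (hours) from a life test -- illustrative only.
times_to_failure = [120.0, 340.0, 95.0, 410.0, 230.0, 180.0, 510.0, 275.0]

# Under an assumed exponential (constant failure rate) model, the
# maximum-likelihood estimate of MTBF is simply the sample mean.
mtbf_hat = mean(times_to_failure)

# Estimated probability of surviving a 100-hour mission: R(t) = exp(-t / MTBF).
mission_time = 100.0
reliability_hat = math.exp(-mission_time / mtbf_hat)

print(f"Estimated MTBF: {mtbf_hat:.1f} hours")
print(f"Estimated R({mission_time:.0f} h): {reliability_hat:.3f}")
```

The point of the sketch is the caveat the jokes make: the arithmetic is trivial, but the conclusion is only as good as the assumption that the failure rate really is constant.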
The most notable example of how statistics sometimes becomes the end rather than the means is prediction. Regardless of one's personal view of predictions, quantitative assessments of reliability prior to actual operation by the customer are needed for a variety of reasons. These include determining whether reliability is improving, whether requirements have been met, and which pieces of a product, and how many of each, should be bought as spares. During design, however, the focus should be on identifying design shortcomings, finding design solutions to address those shortcomings, and verifying the effectiveness of the selected solution.
Many methods for making predictions are available to the reliability engineer. Some cannot be used until reasonable amounts of test data are available. Some provide only relative accuracy and are best used for comparing design alternatives. Still others provide not only a point estimate of reliability but also confidence intervals. The method used at any point in time should be determined by the type and amount of information available, not by an attachment to one method or another.
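As a sketch of the last kind of method, the following computes both a point estimate of MTBF and a classical two-sided chi-square confidence interval from a time-terminated test, assuming an exponential model. The test time, failure count, and confidence level are hypothetical, and scipy is used here only as one convenient source of chi-square quantiles.

```python
# Point estimate and two-sided confidence interval for MTBF from a
# time-terminated (Type I) test, under an assumed exponential model.
# All numbers are hypothetical; requires scipy for chi-square quantiles.
from scipy.stats import chi2

total_test_time = 5000.0   # total unit-hours accumulated (hypothetical)
failures = 4               # number of failures observed (hypothetical)
confidence = 0.90          # two-sided confidence level
alpha = 1.0 - confidence

# Point estimate of MTBF.
mtbf_point = total_test_time / failures

# Classical chi-square interval for a time-terminated test with r failures:
#   lower = 2T / chi2(1 - alpha/2; 2r + 2)
#   upper = 2T / chi2(alpha/2;     2r)
mtbf_lower = 2 * total_test_time / chi2.ppf(1 - alpha / 2, 2 * failures + 2)
mtbf_upper = 2 * total_test_time / chi2.ppf(alpha / 2, 2 * failures)

print(f"MTBF point estimate: {mtbf_point:.0f} h")
print(f"{confidence:.0%} interval: ({mtbf_lower:.0f} h, {mtbf_upper:.0f} h)")
```

Note how wide the interval is with only four failures: the point estimate alone, reported without it, would conceal exactly the kind of vital information the bikini quip warns about.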