Creating Robust Designs

Summary

Engineering products, from concrete structures to electronic circuits, are designed to perform a function by selecting component part parameters that will permit successful operation of the product in an expected use environment. Variations in the part parameters or in the operating environment will usually degrade the desired performance. Figure 1 shows that variation in the strength of a part and variation in the stress it sees can produce an area of overlap in which the stress exceeds the strength, resulting in failure. Since variations cannot be avoided, a number of countermeasures have been devised to assure satisfactory operation of a product when conditions deviate from nominal. Although some particular countermeasures claim the title "robust design" (indeed, the term is a registered trademark of the American Supplier Institute), all methods of dealing with variations can help to produce robust designs in the generic meaning of the term.

In this START sheet, we will briefly describe the most common countermeasures to variability: safety factors / derating, worst case circuit analysis, the "six sigma" design philosophy, process control based on the statistical design of experiments, and some contributions of Genichi Taguchi.



Safety Factors / Derating

These terms refer to limiting the nominal stresses on all parts to levels below their specified maximums. The use of safety factors in structures is common (e.g., a column meant to support a five-ton load might be designed for ten tons). Similar policies in electronics engineering (e.g., operating a power transistor rated at 25 watts at no more than 20 watts) are called derating. The effect is to shift the stress curve in Figure 1 to the left, reducing the area of stress-strength overlap. Critical parameters differ from part to part (e.g., wattage for a power transistor vs. voltage for a capacitor), and some parts should not be derated (e.g., aluminum electrolytic capacitors). Derating guidelines are tabulated in the Reliability Toolkit: Commercial Practices Edition, available from RIAC, and also in MIL-STD-975K, Notice 2, NASA Standard Parts Derating; the Rome Laboratory Technical Report RL-TR-92-11, Advanced Technology Component Derating; and the Naval Sea Systems Command TE000-AB-GTP-010, Parts Derating Requirements and Application Manual. Structural safety factors are recommended in civil and mechanical engineering handbooks and prescribed in building codes.
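
As a simple illustration of applying a derating policy, the sketch below (in Python) checks an applied stress against a derated limit. The 0.8 derating factor and the part values are hypothetical examples chosen to match the transistor illustration above, not values taken from the cited guidelines.

# Minimal derating check (hypothetical values; consult the cited derating
# guidelines for actual factors, which vary by part type and stress).

def derated_limit(rated_value, derating_factor):
    """Maximum allowed applied stress after derating."""
    return rated_value * derating_factor

def check_derating(applied, rated, factor):
    """Return (is_within_limit, derated_limit) for an applied stress."""
    limit = derated_limit(rated, factor)
    return applied <= limit, limit

# A transistor rated at 25 W, derated by an assumed factor of 0.8,
# may be operated at no more than 20 W.
ok, limit = check_derating(applied=20.0, rated=25.0, factor=0.8)
print(f"Derated limit: {limit:.1f} W; applied 20.0 W -> {'OK' if ok else 'over-stressed'}")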

Figure 1. Overlap of the stress and strength distributions.



Worst Case Analysis

Worst Case Analysis and the electronics-specific Worst Case Circuit Analysis (WCCA) consider the impact of expected variations in part parameters on desired performance. For example, a WCCA could determine whether or not the frequency of a radar transmitter would remain within specification if the parameters of the parts used were at unfavorable "off-nominal" values. The most conservative worst case analysis calculates product performance with all parts at their worst values and causing errors in the same direction. This is known as extreme value analysis and is the easiest approach. Other approaches are root-sum-squared and Monte Carlo analysis, which consider the statistical distributions of the variables, recognizing that random variations of different parts are rarely all at extreme values in the same direction and that one variation can offset another. These more realistic approaches are more difficult to perform, but they are important when the penalties of designing for the extreme value are too severe to be practical. In any event, parts are selected so that their expected variations do not preclude acceptable product operation, as determined by the method used. WCCA is discussed in the Reliability Toolkit: Commercial Practices Edition. More detailed treatment may be found in the RIAC publication Worst Case Circuit Analysis Application Guidelines. Finite element analysis (FEA) is a computer technique invented for analyzing stresses in mechanical and structural assemblies (and now also widely used in electronic stress analysis).
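
For illustration, the sketch below contrasts extreme value analysis with a Monte Carlo analysis for a hypothetical voltage divider built from +/-5% resistors; the circuit, tolerances, and uniform distributions are assumptions chosen only to show the mechanics of the two approaches, not a prescribed WCCA procedure.

# Worst case analysis of a hypothetical voltage divider:
# Vout = Vin * R2 / (R1 + R2), with R1 and R2 subject to +/-5% tolerance.
import itertools
import random

VIN = 10.0                        # supply voltage (assumed)
R1_NOM, R2_NOM = 1000.0, 1000.0   # nominal resistances in ohms (assumed)
TOL = 0.05                        # +/-5% part tolerance (assumed)

def vout(r1, r2):
    return VIN * r2 / (r1 + r2)

# Extreme value analysis: every part at a tolerance limit, all combinations.
extremes = [vout(R1_NOM * (1 + s1 * TOL), R2_NOM * (1 + s2 * TOL))
            for s1, s2 in itertools.product((-1, 1), repeat=2)]
print(f"Extreme value: {min(extremes):.3f} V to {max(extremes):.3f} V")

# Monte Carlo analysis: parts drawn from assumed uniform distributions within
# tolerance; variations rarely all line up in the worst direction at once.
random.seed(1)
samples = sorted(vout(random.uniform(R1_NOM * (1 - TOL), R1_NOM * (1 + TOL)),
                      random.uniform(R2_NOM * (1 - TOL), R2_NOM * (1 + TOL)))
                 for _ in range(10000))
lo, hi = samples[int(0.01 * len(samples))], samples[int(0.99 * len(samples))]
print(f"Monte Carlo (1st to 99th percentile): {lo:.3f} V to {hi:.3f} V")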



"Six Sigma" Design

The impact of variation in a product can be determined by comparing the distribution of the parameter of interest with the specified limits to that parameter. One measure of variation is the population standard deviation (sigma), which is estimated from samples using the formula:

s = \sqrt{ \sum_{i=1}^{n} (x_i - \bar{x})^2 / (n - 1) }     (Equation 1)

Using the standard deviation, one can then determine the proportion of the product which will fall between the upper and lower specified limits, and thus be considered acceptable. For example, if a parameter is distributed normally, 68.3% of the product will have a parameter value within plus or minus one standard deviation of the mean value of the parameter, 95.5% will measure within plus or minus two, and 99.7% will be within plus or minus three sigmas of the mean. Figure 2 illustrates this.

Figure 2. Proportions of a normal distribution within one, two, and three standard deviations of the mean.
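
These percentages follow directly from the normal cumulative distribution function; a minimal sketch that reproduces them, using only the standard library:

# Fraction of a normal population within +/- k standard deviations of the mean.
from math import erf, sqrt

def coverage(k):
    """P(|X - mu| <= k * sigma) for a normally distributed X."""
    return erf(k / sqrt(2))

for k in (1, 2, 3):
    print(f"within +/-{k} sigma: {coverage(k) * 100:.1f}%")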

Comparing the specification limits to the variation in the product yields a measure of robustness. One such measure is Process Capability, calculated by Equation 2. A Process Capability of 1.0 means that 99.7% of the product will be "in-spec." Anything lower is generally considered inadequate, and quality-oriented companies aim for higher values.

Process Capability = (USL - LSL) / (6\sigma)     (Equation 2)

where USL and LSL are the upper and lower specification limits and \sigma is the standard deviation.

One shortcoming of the Process Capability measure is that it presumes the mean of the parameter of interest in the product will be its target value, as illustrated in Figure 3. However, "real world" distributions are more likely to resemble Figure 4, where the product mean value is displaced from the target. For this reason, a measure called Process Performance, Equation 3, is often preferred.

Figure 3. Distribution centered on the target value.


Figure 4. Distribution with its mean displaced from the target value.


Process Performance = \min[ (USL - \mu) / (3\sigma), (\mu - LSL) / (3\sigma) ]     (Equation 3)

where \mu is the actual (possibly off-target) mean of the parameter.

The "Six Sigma" program formulated by Motorola aims for such low variability in the product that six sigmas will fit between the specification limits (i.e., a Process Capability of 2.0), which, presuming the mean of the product is 1.5 sigmas off target (i.e., a Process Performance of 1.5), translates to 3.4 items per million out of specified limits. By way of comparison, the average business process is a "four sigma" process which translates to 6,200 items per million "out of spec." Achieving a "six sigma" process requires the control of critical process parameters, which can be identified by the statistical design of experiments, the next topic.



Statistical Design of Experiments

Design of Experiments (DOE) is a systematic approach to determining the optimum settings of process parameters. Parameters deemed to be important (usually by an ad hoc team of experts) are varied and the results observed. The simplest form is an experiment in which a high and a low value are picked for each parameter and every possible combination is used. For example, a team trying to improve a wave-solder process might suggest varying the temperature, length of exposure, and lead/tin ratio of the solder to determine ways to reduce solder defects. A high and a low value are selected for each parameter. Every possible combination of high and low values is tested and its effect on solder defects measured. Adding, for illustration, a test at nominal conditions (i.e., those used before the experiment), the results of such an experiment might be as shown in Table 1.

Table 1

Test    Temp    Length    L/T Ratio    Defect Rate
  1      N        N          N            1.5
  2      L        L          L            3.2
  3      H        L          L            3.1
  4      L        H          L            1.9
  5      H        H          L            2.4
  6      L        L          H            1.6
  7      H        L          H            1.6
  8      L        H          H            3.3
  9      H        H          H            2.2

Table 1 shows that the nominal settings also seem to be the optimal settings. However, we have not discussed other factors that may affect solder defects. These may be factors not under the control of the process owner, and before he accepts the nominal conditions as best, he should examine the robustness of all the settings under varying outside factors. For example, he may get boards of different sizes to solder. Do his test results on one board size hold for other sizes? Does the number of layers in the board make a difference? In our next topic, we shall discuss some methods championed by Genichi Taguchi which answer such questions. It should be noted that there are various other means for considering robustness, such as randomizing the order of the experiment and repeating it several times to average out the impact of unknown factors not tested.

It should also be noted that on-off factors, like the presence or absence of a flux in the solder, can be handled by calling the presence of the flux a high setting and the absence a low setting (or vice versa). There are also designs using more than two settings for some or all factors, and fractional factorial test plans that provide more economical testing at the cost of not observing all possible interactions of the factors.
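
As an illustration of how such a two-level full factorial experiment can be generated and analyzed, the sketch below enumerates the eight high/low combinations and estimates each factor's main effect from the Table 1 defect rates. The difference-of-averages analysis is one common way to summarize such an experiment, not necessarily the method used to produce Table 1.

# Two-level full factorial design for the wave-solder example, with a simple
# main-effect estimate (average defect rate at H minus average at L).
from itertools import product

factors = ["Temp", "Length", "L/T Ratio"]

# Defect rates from Table 1 for tests 2-9, keyed by (Temp, Length, L/T Ratio).
results = {
    ("L", "L", "L"): 3.2, ("H", "L", "L"): 3.1,
    ("L", "H", "L"): 1.9, ("H", "H", "L"): 2.4,
    ("L", "L", "H"): 1.6, ("H", "L", "H"): 1.6,
    ("L", "H", "H"): 3.3, ("H", "H", "H"): 2.2,
}

# Full factorial: every possible combination of high and low settings.
design = list(product("LH", repeat=len(factors)))

for i, factor in enumerate(factors):
    high = [results[run] for run in design if run[i] == "H"]
    low = [results[run] for run in design if run[i] == "L"]
    effect = sum(high) / len(high) - sum(low) / len(low)
    print(f"Main effect of {factor}: {effect:+.2f} defects")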

A beginning text on DOE is Understanding Industrial Designed Experiments, by Schmidt and Launsby, published in 1989 by Air Academy Press, Colorado Springs.



Taguchi Innovations

Genichi Taguchi is a noted champion of reducing variation through DOE. Some of his innovations are the testing of "noise" arrays and the use of "signal to noise" ratios to determine optimum settings for robustness.

For example, a Taguchi approach to the experiment described in Table 1 might have extended it to include an "inner array" of the controllable factors tested and an "outer array" of uncontrollable ("noise") factors such as board size and number of layers. Assuming a high and a low value were determined for each of these noise factors, and including the results of Table 1 as their nominal setting, we might obtain the results shown in Table 2.

Table 2

                                         Uncontrolled Variations
        Controllable Variations          Size:    N     L     H     L     H
                                         Layers:  N     L     L     H     H
Test    Temp    Length    L/T Ratio      Defect Rates
  1      N        N          N           1.5   2.9   1.9   2.4   2.6
  2      L        L          L           3.2   9.0   1.8   1.6   7.8
  3      H        L          L           3.1   2.6   2.0   2.3   4.8
  4      L        H          L           1.9   2.4   1.6   1.5   2.9
  5      H        H          L           2.4   2.2   1.5   1.7   1.9
  6      L        L          H           1.6   2.4   1.6   1.5   2.9
  7      H        L          H           1.6   1.9   1.7   1.7   1.8
  8      L        H          H           3.3   3.3   1.6   1.6   3.3
  9      H        H          H           2.2   2.6   1.8   1.6   1.9

The optimum solution shown in Table 1 does not appear best in Table 2. The settings of the controllable factors for other tests (e.g., 5, 6, and 7) give better results across the spectrum of the uncontrolled variations (i.e., more robustness).

Another Taguchi technique is to measure the experiment results in terms which consider both the measured values and their variations. These are called "signal to noise" ratios and stem from another Taguchi invention, called "loss functions." There are loss functions for "smaller is better" (e.g., defects), "nominal is better" (e.g., dimensions of a mechanical part), and "larger is better" (e.g., tensile strength). Each assumes that loss increases with the square of the distance a parameter is from its target value. For the example we have been using, the signal to noise ratio based on the "smaller is better" loss function is:

S/N = -10 \log_{10}\left[ \frac{1}{n} \sum_{i=1}^{n} y_i^2 \right]

where y_1, ..., y_n are the n observed values (here, defect rates) for a given combination of settings.

Combining the experimental results shown in Table 2 into signal to noise ratios yields Table 3, which indicates that the settings of test number 7 create the most robust design.
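
For illustration, the sketch below computes a smaller-is-better signal to noise ratio for each row of Table 2, assuming the textbook form of the ratio given above (other variants of the ratio are also used in practice); it likewise identifies test 7 as the most robust setting.

# Smaller-is-better signal-to-noise ratio, S/N = -10 * log10((1/n) * sum(y_i^2)),
# computed for each test in Table 2; a higher S/N indicates a more robust setting.
from math import log10

def sn_smaller_is_better(values):
    return -10 * log10(sum(y * y for y in values) / len(values))

# Defect rates across the five noise conditions, from Table 2.
table2 = {
    1: [1.5, 2.9, 1.9, 2.4, 2.6],
    2: [3.2, 9.0, 1.8, 1.6, 7.8],
    3: [3.1, 2.6, 2.0, 2.3, 4.8],
    4: [1.9, 2.4, 1.6, 1.5, 2.9],
    5: [2.4, 2.2, 1.5, 1.7, 1.9],
    6: [1.6, 2.4, 1.6, 1.5, 2.9],
    7: [1.6, 1.9, 1.7, 1.7, 1.8],
    8: [3.3, 3.3, 1.6, 1.6, 3.3],
    9: [2.2, 2.6, 1.8, 1.6, 1.9],
}

ratios = {test: sn_smaller_is_better(y) for test, y in table2.items()}
for test, sn in ratios.items():
    print(f"Test {test}: S/N = {sn:6.2f} dB")
print(f"Most robust settings: test {max(ratios, key=ratios.get)}")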

Not all statisticians endorse Taguchi's procedures, but they are widely used. Taguchi is affiliated with the American Supplier Institute (ASI), Allen Park, MI, which has registered the term "Taguchi Methods" as a trademark. Taguchi's book, Introduction to Quality Engineering: Designing Quality into Products and Processes, is available from ASI.

Table 3

Test    Temp    Length    L/T Ratio     S/N
  1      N        N          N         -3.54
  2      L        L          L         -6.70
  3      H        L          L         -4.71
  4      L        H          L         -3.61
  5      H        H          L         -2.45
  6      L        L          H         -2.83
  7      H        L          H         -2.41
  8      L        H          H         -4.18
  9      H        H          H         -3.05

Other references are:

  1. Taguchi Techniques for Quality Engineering, by P.J. Ross, 1988, McGraw-Hill.
  2. Quality Engineering Using Robust Design, by M.S. Phadke, 1989, Prentice Hall.
  3. Taguchi Methods: A Hands-on Approach, by G.S. Peace, 1993, Addison-Wesley.



About the Author

* Note: The following information about the author(s) is the same as what appeared in the original document and may no longer be accurate.

Anthony Coppola is a Scientific Advisor to the Reliability Analysis Center operated by IIT Research Institute. He is the editor of the RAC Journal, an instructor in Reliability Engineering training courses, and the author of the "TQM Toolkit." Before joining IITRI, he spent 36 years developing reliability and maintainability engineering techniques at the Air Force Rome Laboratory, formerly known as the Rome Air Development Center. His last assignment at Rome Laboratory was as the Commander's Special Assistant for Total Quality Management.

Mr. Coppola holds a Bachelor's degree in Physics and a Master's in Engineering Administration, both from Syracuse University. He also completed the Industrial College of the Armed Forces correspondence program in National Security Management and the Air War College Seminar Program. He has been a guest instructor for the Air Force Institute of Technology, the Air Force Academy, and George Washington University. He is a Fellow of the IEEE and a recipient of the IEEE Centennial Medal. He also holds Air Force medals for Outstanding Civilian Career Performance and Meritorious Civilian Service. He was the General Chairman of the 1990 Annual Reliability and Maintainability Symposium.