The Reliability Analytics Toolkit L10 to MTBF Conversion tool provides a quick and easy way to convert a quoted L_{10%} life to an average failure rate (or MTBF), provided that an educated guess can be made regarding the Weibull shape parameter (β).
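The conversion can be sketched as follows, assuming a Weibull life distribution. The function name and example numbers are illustrative, not taken from the toolkit itself:

```python
import math

def l10_to_mtbf(l10, beta):
    """Convert a quoted L10 life to MTBF, assuming a Weibull distribution.

    L10 is the life at which 10% of the population has failed, so the
    Weibull reliability R(L10) = 0.90 determines the scale parameter eta:
        eta = L10 / (-ln 0.90)**(1/beta)
    The mean of a Weibull distribution is eta * Gamma(1 + 1/beta).
    """
    eta = l10 / (-math.log(0.90)) ** (1.0 / beta)
    return eta * math.gamma(1.0 + 1.0 / beta)

# Illustrative example: a bearing with L10 = 10,000 hours and beta = 1.5
mtbf = l10_to_mtbf(10_000, 1.5)
failure_rate = 1.0 / mtbf  # average failures per hour
```

For β = 1 the Weibull reduces to the exponential distribution and the formula collapses to MTBF = L10 / (−ln 0.90), which is a useful sanity check.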

# Tag Archives: failure rate

# Bathtub Curve

Figure 1 shows a typical time versus failure rate curve for equipment. This is the well-known “bathtub curve,” which, over the years, has become widely accepted by the reliability community.

It has proven to be particularly appropriate for electronic equipment and systems. Note that it displays the three failure rate patterns: a decreasing failure rate (DFR), a constant failure rate (CFR), and an increasing failure rate (IFR).
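The three regions can be sketched as a superposition of Weibull hazard rates, one for each pattern; the shape and scale parameters below are illustrative assumptions, not values from the text:

```python
def weibull_hazard(t, beta, eta):
    """Weibull hazard rate h(t) = (beta/eta) * (t/eta)**(beta - 1)."""
    return (beta / eta) * (t / eta) ** (beta - 1)

def bathtub_hazard(t):
    """Illustrative bathtub curve: sum of DFR, CFR, and IFR components."""
    infant = weibull_hazard(t, 0.5, 200.0)    # beta < 1: decreasing (DFR)
    useful = weibull_hazard(t, 1.0, 1000.0)   # beta = 1: constant (CFR)
    wearout = weibull_hazard(t, 4.0, 800.0)   # beta > 1: increasing (IFR)
    return infant + useful + wearout
```

Evaluating `bathtub_hazard` over time shows the characteristic shape: the infant-mortality term dominates early, the constant term governs the flat middle, and the wearout term takes over at the end.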

# Failure Modeling

Failure modeling is a key to reliability engineering. Validated failure rate models are essential to the development of prediction techniques, allocation procedures, design and analysis methodologies, test and demonstration procedures, control procedures, etc. In other words, they supply all of the elements needed as inputs for sound decisions to ensure that an item can be designed and manufactured so that it will perform satisfactorily and economically over its useful life.

Inputs to failure rate models are operational field data, test data, engineering judgment, and physical failure information. These inputs are used by the reliability engineer to construct and validate statistical failure rate models (usually having one of the distributional forms described previously) and to estimate their parameters.
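As a minimal sketch of estimating a model parameter from such data, assuming the constant-failure-rate (exponential) model (the function and data below are illustrative):

```python
def exponential_mle(failure_times, censored_times=()):
    """Maximum-likelihood estimate of a constant failure rate (lambda).

    For the exponential model, lambda_hat = (number of failures) /
    (total accumulated operating time); censored units (still running
    at the end of observation) contribute time but no failures.
    """
    failures = len(failure_times)
    total_time = sum(failure_times) + sum(censored_times)
    return failures / total_time

# Illustrative field data: three failures plus one unit censored at 400 h
rate = exponential_mle([100.0, 200.0, 300.0], [400.0])  # failures per hour
```

The reciprocal of this estimate is the MTBF under the constant-failure-rate assumption.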

# Normal Distribution

There are two principal applications of the normal (or Gaussian) distribution to reliability. One application deals with the analysis of items which exhibit failure due to wear, such as mechanical devices. Frequently the wearout failure distribution is sufficiently close to normal that the use of this distribution for predicting or assessing reliability is valid.

The other application deals with the analysis of manufactured items and their ability to meet specifications. No two parts made to the same specification are exactly alike. The variability of parts leads to a variability in systems composed of those parts. The design must take this part variability into account, otherwise the system may not meet the specification requirement due to the combined effect of part variability. Another aspect of this application is in quality control procedures.
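One common way a design accounts for combined part variability is a statistical (root-sum-square) tolerance stack-up rather than worst-case addition; the part spreads below are assumed purely for illustration:

```python
import math

# Statistical vs. worst-case tolerance stack-up for a chain of parts.
# Part standard deviations are assumed values, e.g. component spreads in ohms.
part_sigmas = [0.02, 0.05, 0.03]

worst_case = sum(3 * s for s in part_sigmas)           # sum of 3-sigma limits
rss = 3 * math.sqrt(sum(s ** 2 for s in part_sigmas))  # 3-sigma of the sum
```

Because independent variations partially cancel, the RSS stack is tighter than the worst-case sum, which is why the statistical approach usually yields a less conservative but still realistic design margin.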

The basis for the use of the normal distribution in this application is the central limit theorem, which states that the sum of a large number of independent, identically distributed random variables, each with finite mean and variance, is approximately normally distributed. Thus, the manufacturing variations in the values of electronic component parts, for example, are considered normally distributed.
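A quick simulation illustrates the theorem: the sum of twelve independent uniform random variables already has a mean and standard deviation close to the theoretical normal values (12 × 1/2 = 6 and √(12 × 1/12) = 1):

```python
import random
import statistics

random.seed(1)

# Sum of 12 independent uniform(0, 1) variables: mean 6, variance 12 * (1/12) = 1
sums = [sum(random.random() for _ in range(12)) for _ in range(10_000)]

sample_mean = statistics.mean(sums)    # close to 6.0
sample_stdev = statistics.stdev(sums)  # close to 1.0
```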

The failure density function for the normal distribution is

$$f(t) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left[-\frac{(t-\mu)^2}{2\sigma^2}\right] \qquad \text{(Equ. 1)}$$

where

μ = the population mean

σ = the population standard deviation, which is the square root of the variance.
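Equ. 1 can be sketched directly in code, together with the corresponding reliability function R(t) = 1 − F(t), evaluated here via the error function (the function names are illustrative):

```python
import math

def normal_pdf(t, mu, sigma):
    """Failure density f(t) for the normal distribution (Equ. 1)."""
    return math.exp(-((t - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def normal_reliability(t, mu, sigma):
    """R(t) = 1 - F(t), using the error function for the normal CDF."""
    return 0.5 * (1.0 - math.erf((t - mu) / (sigma * math.sqrt(2.0))))
```

At the mean life (t = μ) the reliability is exactly 0.5, since the normal distribution is symmetric about its mean.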

# Reliability Theory

Most modern engineering disciplines are based on applied mathematics. An engineer or scientist observes a particular event and formulates a hypothesis (or conceptual model) which describes a relationship between the observed facts and the event being studied. In the physical sciences, conceptual models are, for the most part, mathematical in nature. *Mathematical models represent an efficient, shorthand method of describing an event and the more significant factors which may cause, or affect, the occurrence of the event.* Such models are useful to engineers since they provide the theoretical foundation for the development of an engineering discipline and a set of engineering design principles which can be applied to cause or prevent the occurrence of an event.