Bayesian Failure Modes and Effects Analysis: Case Study for the 1986 Challenger Failure

DOI: 10.17577/IJERTV4IS050848

Kouroush Jenab and Teresa K. Kelley

College of Aeronautics

Embry-Riddle Aeronautical University, Daytona Beach, FL, USA

Sam Khoury

College of Business, Athens State University, Athens, AL, USA

Abstract This paper provides background on Failure Modes and Effects Analysis (FMEA) and how it can be a useful tool for development programs, then uses the Challenger failure as an illustrative example. As several scholars and studies discussed in this paper have noted, FMEA tends to be fairly subjective; that subjectivity, together with the use of an integrated team of experts, is also one of FMEA's strengths. This paper suggests using averages to aggregate team scores, as well as adding a "Does it make sense?" step to the overall FMEA approach. Through the Challenger example, it can be seen that some failure modes may receive similar scores even though they have very different severity, occurrence, and detection ratings, which can mask the true risk of a failure mode during the prioritization step; with the additional step suggested here, such similarly scored failure modes can be re-examined.

Keywords: Failure analysis, FMEA, Bayes' theorem

  1. INTRODUCTION

    This paper discusses the failure analysis tool Failure Modes and Effects Analysis (FMEA) and its application through a case study of the 1986 Space Transportation System (STS) Challenger failure. Section 3 of this document details FMEA and its application in other case studies, and identifies how it will be applied to the Challenger Solid Rocket Booster (SRB) aft field joint failure observed in 1986. Section 4 illustrates the application of FMEA to the Challenger failure.

    The STS was developed by NASA as a means to transport astronauts and cargo into low earth orbit. At launch, the space shuttle is attached to an external fuel tank and two SRBs [1]. While the external fuel tank is not reused, the shuttle itself and the solid rocket boosters are reused. The reuse of the SRBs is an important factor in the failure of the Challenger.

    The SRB comprises four segments when shipped to the Kennedy Space Center, where it is assembled using pins at the tang and clevis interface and sealed with O-rings. When the rocket ignites, heat and pressure build up, which helps to seal the joint. This final integration makes up the field joint [1].

    1. SRB Field Joint

      The SRB field joint is the joint on the Space Shuttle SRB where two fuel segments of the rocket booster are joined together. The upper rim of the bottom fuel segment is a U-shaped groove, known as a clevis, while the bottom of the upper fuel segment slides into the clevis. The two are joined together using steel pins. The inner portion of the clevis is fitted with two O-rings that provide a seal [2]. The O-rings seal the interface, providing the closed system needed for ignition and propulsion.

    2. The 1986 Challenger Failure

      The Challenger launched on January 28, 1986, but met a catastrophic demise approximately 73 seconds into flight. Several factors were involved in the failure. The morning of the launch was the coldest recorded for a Shuttle launch. While the cold temperature was a contributing factor to the field joint's failure, several other factors also contributed; if cold weather had been the only factor, all of the field joints would have been affected equally, and one could assume that other field joints would also have failed. Chapter 4 of [3] notes that during integration of the SRB, the segments were found to be out of round. While they were not outside the tolerance of the procedures, and the procedures were followed to integrate the booster, the out-of-round condition made the joining portions of the segments wider than nominal. This stretched and compressed the O-ring more than normal. Given the cold morning on launch day, the material properties of the O-ring would have been stressed: at colder temperatures, the material did not have the same flexible sealing properties as at warmer temperatures. Upon launch, the SRB encounters dynamic stresses that compress the O-ring [3]. Given the colder temperatures, the O-ring likely did not return to its normal shape after this compression, which would have left gaps between the two booster segments and an unsealed interface. Without a seal at the joint, fuel leaked out of the booster, which was seen in video of the launch. Reference [3] goes on to conclude that the aft field joint design was "unacceptably sensitive to a number of factors. These factors were the effects of temperature, physical dimensions, the character of materials, the effects of reusability, processing and the reaction of the joint to dynamic loading" [3].

      A flaw in the design and material properties of the O-ring caused a catastrophic failure in the STS Challenger. One may ask how such a flaw could be overlooked during design and development. Reference [4] states it best in the article "Efficient Analysis for FMEA":

      Much earlier in the design process, however, there must have been a point, as there is in all engineering projects, where a commitment had to be made to certain actions before detailed evidence was available on the probable performance of every single component on the shuttle. At that point the O-ring was only one of many thousands of similar components of broadly similar importance, and did not therefore justify the level of detailed analysis it attracted once its critical nature was revealed during subsequent development and service.

      Reference [4] also discusses the fact that, prior to the failure, the analyses conducted on the components did not raise any questions or concerns. The authors highlight that it is not possible to complete a detailed, in-depth analysis of every component of a system, but that completing simpler analyses using older techniques that complement the newer analysis techniques would have given a better idea of component performance [4]. Ultimately, the factors discussed in this section contributed to the failure of the aft field joint. More analysis of the O-ring, better tolerances, improved procedures, and a better understanding of material properties could have made a difference, but at what cost? As pointed out, a full, detailed analysis of every component did not seem feasible, but would FMEA have helped identify increased risk in certain components? This paper provides insight into FMEA and applies it to the aft field joint failure.

  2. LITERATURE REVIEW

    Reference [5] introduced a computerized approach for FMEA. Typically, FMEA was performed manually, but [5] introduced a matrix form that would allow engineers to input information and then allow the computer to locate intersections between the elements and failure effects. This new format would reduce cost and provide faster, more accurate assessments [5].

    Reference [6] presented techniques for automating and standardizing FMEA in order to gain wider use of the tool. Reference [6] believed that automation and standardization were key to obtaining meaningful and useful results.

    Reference [4] presented their assessment of the use of minimal knowledge in mechanical systems reliability assessment. In their case, they studied the field joint of the Space Shuttle rocket, among other failures. They claim that failures can be overlooked given the breadth-first evaluation that was the norm at the time [4].

    Reference [7] provides insight into FMEA from its use at Davart Plastics. According to her article, at that time (1990) FMEA was mainly used in large manufacturing companies, but thanks to new software packages, FMEA was becoming more available to small- and medium-sized companies. The use of FMEA enabled Davart Plastics to raise their quality standards and reduce waste [7].

    In 1998, [8] presented the concept that continuous improvement has to be knowledge based and cannot rely only on computer technology. Reference [8] called this approach Integrated FMEA (IFMEA).

    Reference [9] identified that, with the newly initiated QS 9000 standards requiring the use of FMEA, companies were not fully utilizing the tool. As pointed out in [9], most companies completed FMEAs only as a requirement to meet, rather than understanding their usefulness in the design and development of systems. Once companies fully understand the tool and how to use it, [9] claimed, they would see a decrease in cost and resources.

    Reference [10] presented a new approach for evaluating the risk priorities of failure modes in FMEA for production. In 2002, [11] presented an approach for FMEA using fuzzy logic, claiming that their approach would increase the validity of FMEA results. They also discussed the interdependencies between the various factors, as well as between the failure modes themselves [11].

    In 2003, [12] suggested a new method of FMEA based on life cost. They too struggled with the subjectivity of the scoring approach to FMEA and presented a method for using life cost of the risk to better evaluate risks of failure. This method also allowed them to suggest design alternatives based on this life cycle cost.

    Also in 2003, the health care field was seeing more pressure for implementing some sort of proactive risk assessment process. Reference [13] presented information on FMEA as an option and discussed why FMEA was the unofficial standard to be used, even though the Joint Commission on Accreditation of Healthcare Organizations had not identified any one specific standard for implementation.

    Reference [14] presented a new insight for FMEA where the customer's point of view, rather than the engineer's, is used for scoring severity. Up to this point, the engineer and the engineering team determined severity and the Risk Priority Number (RPN). Using the Kano model, [14] presented an enhanced method for FMEA, claiming that managers would gain new insight into possible failures from the customer's point of view for a product that has not yet been used or fielded.

    Reference [15] continued to show that FMEA is a very subjective and qualitative approach to assessing failure. They presented an approach, again, using fuzzy logic to assess failure modes. They claimed that their approach resolved limitations seen in a more traditional FMEA approach [15].

    Reference [16] presented the concept that once a product is past design and is fielded, FMEA alone may not be appropriate. FMEA is an iterative process and should be reassessed at regular intervals. They found that FMEA alone is difficult when trying to find the root cause of actual failures in the field, and that Failure Analysis should be conducted in conjunction with FMEA [16].

    Probabilistic model checking support for FMEA was introduced by [17]. They proposed a method that made use of probabilistic fault injection and probabilistic model checking, which would allow safety engineers to identify whether or not a failure mode could occur with a higher probability than the hazard [17].

    Reference [18] provided insight into the application of FMEA for software reliability. For years, FMEA had proved successful for hardware applications. The authors in this case provide information on the applicability of FMEA for software through a case study of pressure valves [18]. Reference [19] presented a Bayesian approach to prioritizing failures using FMEA.

    In 2009, [20] provided insight into the medical community and how FMEA can be used in hospital risk management and failure identification. Reference [20] points out that FMEA began in the aerospace industry in the 1960s, but that it started to infiltrate the medical industry in the early 1990s. Also, [20] identifies the value of FMEA for many applications within the medical field to identify failures prior to them happening.

    Reference [21] presented a comparison of FMEA with Fault Tree Analysis (FTA) and Advanced Failure Modes and Effects Analysis (AFMEA). These three analysis tools are popular methods for analysis and the authors compare the three and provide insight into opportunities to blend FTA and FMEA in situations where one or the other method cannot solve the problems seen, which helped them develop new ideas about reliability [21].

    Reference [22] presents the idea of using Dempster-Shafer Theory as a way to aggregate the rankings and scores used in FMEA. The FMEA process is an activity that allows teams or groups to categorize and prioritize risk. Reference [22] further points out that, given group dynamics, it can be hard at times to get an unbiased aggregation, but that Dempster-Shafer Theory can be used to obtain these unbiased values.

    Reference [23] provides insight into the use of FMEA for complex dam systems. They describe that even though FMEA may seem time consuming and costly to conduct, when used properly it can provide insight into complex systems such as a dam and identify areas of concern or root causes for possible failures, which provides useful information for managing possible catastrophic risks and their mitigation plans. The information provided at the end of the FMEA can be used early to take action and optimize the efficiency of the risk mitigation process, as shown in their work for the Cerro do Lobo tailings dam [23].

    Reference [24] agrees that FMEA is a systematic technique for assessing and analyzing processes in order to optimize them and prevent failures from happening, but the authors also identify the areas in which FMEA can go wrong and fail, causing unnecessary cost and wasted time. Some of the common pitfalls include no management backing of the effort, taking on too big an FMEA project, scoring criteria not being developed ahead of time, and scoring not being customized. They go on to identify tips for conducting a successful FMEA that can save the program time and money in the long run [24].

    Therefore, this study is aimed at showing the applicability and usefulness of FMEA on complex systems through the study of FMEA and the 1986 Challenger failure. The following sections will describe FMEA and provide an example of how to use FMEA.

  3. USING FMEA WITH CONFLICT RESOLUTION

    FMEA was first introduced in the 1940s by the U.S. military and was further developed and applied in the aerospace and automotive industries prior to being widely accepted by other industries in the 1980s and 1990s. FMEA is a logical process for identifying all the possible failures within a system or process in order to mitigate those failures prior to fielding and operation. The failure modes are prioritized, and the team starts with the highest priority failures. The effects analysis portion of the process is used to determine and understand what effects any particular failure may have on the end item, the customer, or stakeholders. FMEA should begin during development and design and continue as a process improvement method throughout the life cycle [25].

    Generally speaking, FMEA is a team-based effort. During design development, a team of experts from varying fields should be assembled and the scope of the FMEA identified. The team should determine the functions of the system or process, starting at the system level and decomposing to the subsystems and components. The team then works together, in a brainstorming fashion, to identify all the ways that each function could fail. The team next determines all of the consequences of those failures and decides as a group how severe each effect is, giving each failure mode an effect Severity rating between 1 and 10, with 1 being low consequence and 10 being catastrophic. The team should also determine possible root causes for each of the failure modes and rate these with a probability of Occurrence from 1 (unlikely) to 10 (inevitable). Then the team can identify areas of control already established for the program that could prevent these failures from happening. These controls are given a Detection rating from 1 to 10, where 1 means the control will definitely detect the failure and 10 means it definitely will not. At this point, the team can calculate the Risk Priority Number (RPN) for each failure mode by multiplying the Severity rating by the Occurrence rating by the Detection rating. Criticality can also be calculated by multiplying the Severity by the Occurrence, which helps in prioritizing which failures should be addressed first. The team can then develop recommendations for new controls, revised controls, changes to design or parts selection, or changes to process in order to reduce the severity or occurrence of the failure. These recommendations can be made to program management, and as steps are established and completed to mitigate the risk of failure, the FMEA should be updated [25].
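    To make the arithmetic concrete, the following minimal sketch (in Python; the failure modes and ratings are hypothetical, invented only for illustration) computes the RPN and criticality for a handful of failure modes on the 1-10 scales described above.

    # Classic FMEA scoring on 1-10 scales (hypothetical failure modes and ratings).
    failure_modes = {
        # name: (Severity, Occurrence, Detection)
        "O-ring fails to seal joint": (10, 4, 7),
        "Field joint pin shears": (9, 2, 3),
    }

    for name, (s, o, d) in failure_modes.items():
        rpn = s * o * d          # Risk Priority Number = S x O x D
        criticality = s * o      # Criticality = S x O, used to order the work
        print(f"{name}: RPN = {rpn}, criticality = {criticality}")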

    There are several industry and government standards for developing an FMEA. One such standard is Military Standard (MIL-STD) 1629A. It follows the main process delineated above, but instead of using the numbers 1-10 for criticality, it uses Roman numerals from I to IV, with I being catastrophic and IV being minor. The occurrence of the failure is also classified differently, with bands of probability between 0 and 1, and is based on the analyst's judgment [26].

    One thing that becomes clear with FMEA is that, because different standards guide the methodology, a program needs to use the standard dictated by its contract, be it a military application requiring a MIL-STD or an industry standard such as ISO 9000. There are also varying ways to rate and score the severity, occurrence, and detection ratings, as well as different ways to calculate the RPN. One can also see how subjective this process can be, considering it is team based. Just as risk management typically relies on working groups that identify, score, and mitigate risks, the FMEA process uses a team of experts to identify all failure modes for a system or process, score them, and recommend mitigation. While a team of experts is essential to an FMEA, because they bring a synergy that would not otherwise produce the quality necessary for such a task, the subjectivity of the group can also skew the scoring. For instance, suppose the group includes a highly regarded electrical engineer with a reputation for high-quality, very detailed designs, who happens to be very persuasive and a natural leader, as well as a systems or test engineer who understands the functionality of the whole system but is not very knowledgeable in electrical engineering and tends not to lead the group. The group may not weigh the systems or test engineer's suggested scores in the same light as those of the highly regarded electrical engineer. This paradigm can skew the group's overall agreement on scores, which ultimately affects how the RPNs are calculated. Therefore, a slight difference in scoring may lead to a failure mode being categorized with a lower (or higher) priority, and this increases risk for the program.

    Many experts have discussed the subjectivity of FMEA and the problems with aggregating scores from the group, including the use of fuzzy logic, averages, and, as [22] described in 2012, Dempster-Shafer Theory, as discussed in the Literature Review section of this paper. Many researchers have studied groupthink and group decision making, and many have postulated that Bayes' Theorem can be used to account for expert opinions within the group. Reference [27] uses a Bayesian estimation procedure for determining the priorities of the Analytic Hierarchy Process. Reference [28] likewise describes the use of Bayesian modeling to understand a group's varied decisions: a simple aggregation only works when all participants have the same opinion. This is the same position many experts have taken with FMEA; a simple aggregation does not work when trying to score the various failure modes. Bayes' Theorem provides the ability to account for an expert's opinion, as well as that expert's opinion after evidence, such as discussion with the group, has been made available. Reference [29] also discusses applying Bayes' Theorem to group decisions, as follows:

    Bayes' Theorem: P(A|B) = P(B|A) P(A) / P(B)

    Bayesian inference can also be used similarly. In this equation, an expert has a prior opinion about something; then new evidence is presented, and there is a posterior opinion in which the expert may adjust his or her probability based on that evidence. This is very similar to how group decision making works. But instead of treating the group's discussion of a particular failure as the evidence and applying Bayes' Theorem, groups typically vote or average their individual scores, which takes away from the individual: the individual expert's score gets lost in the aggregation. Using Bayesian inference, one can account for the individual expert's score as well as the group:

    P(H|E) = P(E|H) P(H) / P(E), where H is the individual's hypothesis (or opinion score), E is the score after new evidence, and P(E|H) is the likelihood factor [28]. To apply this to improve the FMEA approach, we propose that each individual expert in the group provide a severity score for each failure mode based on prior experience and expertise; this would be P(H). The group then discusses the failure mode at length, with each member providing rationale for his or her severity score. After the discussion, each expert reassesses their score, which may go up or down; this would be P(E). The likelihood factor P(E|H) is the agreed-upon Occurrence score of the FMEA. These scores can then be aggregated using the mean, since each expert's opinion has already been taken into consideration through Bayes' Theorem. Finally, to find the RPN, this probability is multiplied by the Detection score.
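    A minimal sketch of this proposed aggregation follows (in Python; the function name and data layout are our own illustration, not part of any FMEA standard): each expert's pre-discussion severity score is treated as P(H), the post-discussion score as P(E), and the group-agreed Occurrence score as the likelihood P(E|H).

    # Proposed Bayesian conflict-resolution scoring (illustrative sketch).
    def bayesian_rpn(priors, posteriors, occurrence, detection):
        # priors:     each expert's severity score before discussion, P(H)
        # posteriors: each expert's severity score after discussion, P(E)
        # occurrence: group-agreed Occurrence score, used as the likelihood P(E|H)
        # detection:  group-agreed Detection rating, D
        # Bayes' Theorem per expert: P(H|E) = P(E|H) * P(H) / P(E)
        updated = [occurrence * h / e for h, e in zip(priors, posteriors)]
        group_score = sum(updated) / len(updated)   # mean retains every expert's opinion
        return group_score * detection              # RPN = P(H|E) x D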

    This modified approach will be applied to the SRB aft field joint failure in Section 4. The following rating schema will be used for the illustrative example:

      • Severity (S): 0.1 (little damage), 0.5 (hardware damage), 1.0 (catastrophic damage, loss of mission and life)

      • Occurrence (O): 0.1 (highly unlikely), 0.5 (moderately likely), 1.0 (highly likely)

      • Detection (D): 1 (high confidence of detection), 5 (moderate confidence of detection), 10 (no confidence of detection)

    Risk Priority Number (RPN): calculated by multiplying the aggregated group probability by the Detection rating (P(H|E) x D)
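    As a worked illustration of this schema, consider five hypothetical experts scoring a single failure mode with the bayesian_rpn sketch above (every number here is invented for demonstration):

    priors     = [0.8, 1.0, 0.5, 0.9, 0.7]   # severity before discussion, P(H)
    posteriors = [0.9, 1.0, 0.8, 0.9, 0.8]   # severity after discussion, P(E)
    # With an agreed Occurrence of 0.5 and a Detection rating of 7:
    rpn = bayesian_rpn(priors, posteriors, occurrence=0.5, detection=7)
    # The per-expert updates average to about 0.439, so rpn is roughly 3.07.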

  4. ILLUSTRATIVE EXAMPLE

    Table 1 is an FMEA produced by applying the modified FMEA process suggested in Section 3 to the aft field joint of the SRB for the STS. This FMEA is not meant to be exhaustive and was applied only to some of the major components of the SRB, of which the aft field joint is one. The example FMEA layout in Table 1 is based on the FMEA discussion and example provided in The Quality Toolbox, 2nd edition [25]. Also, this FMEA was conducted by the individual authors instead of a team of experts; therefore, it is limited to the authors' knowledge and understanding of propulsion and rocket components gained through research and experience. One of the major themes of the FMEA approach is to use a team of experts in order to expand on all possible failure modes. This is an illustrative example of how to apply the process using Bayesian inference. As such, this FMEA example does not include the action items and management steps for mitigating risks.

    Let us assume that there are five team members for this example. We will also make assumptions about their scores since, as previously mentioned, the authors developed this example individually.

    TABLE I. FMEA ILLUSTRATION WITH CONFLICT RESOLUTION

    In this example, one can see each person's individual score prior to any outside evidence being presented, based simply on each person's past experience. Then, after group discussion that could sway individual opinions in either direction, the individual scores are provided again. In the example, one can see that some scored the severity higher based on the new evidence and some were swayed to score it lower. This is the posterior probability. Then, using Bayes' Theorem with the Occurrence score as the likelihood factor against which all individual scores are assessed, and taking the average of those calculations, a group score is identified for each failure mode. This group score is multiplied by the detection rating to provide the RPN. In this way, each expert's assessment is taken into account. Other methods include weighting the experts or averaging all the scores with no weighting, but both of these methods lack something. In the case of weighting, all experts should be treated as equal; there is no sound way to weight one over another, since they come from different fields and varying levels of experience, and weighting the experts can cause conflict within the group, so an accurate assessment cannot be attained. By simply averaging the scores, conflict can still arise because one member may feel more passionately about their opinion than another. Also, voting on scores can cause conflict within the group because some people may not feel they were heard during discussion. By applying Bayes' Theorem to the process, the subjectivity of each expert is retained while being assessed against a group likelihood factor in a mathematical way.

  5. CONCLUSION

FMEA is a powerful tool for providing managers detailed information early in development. This approach allows an integrated team to delve deep into a potential design in order to identify areas of concern, which helps managers make important decisions on program resources, identify areas for trades, and solidify design decisions, not to mention potentially saving the program from failures, even catastrophic ones. This paper provided insight into the history of FMEA and the different ways in which it has been applied over the years, including areas where various researchers have improved upon the tool. While many have identified that the subjectivity of FMEA can be problematic among groups, especially when it comes to scoring, and that aggregating scores is difficult using fuzzy logic or averages, the subjectivity of the approach is necessary; otherwise, a team of experts would not be assembled to conduct the FMEA together. This subjectivity just needs to be accounted for with logical approaches.

This paper has provided a new understanding of the importance of FMEA in the development and fielding of a system. This tool can provide great insight for a program if used correctly. The literature review revealed multitudes of studies on the subjectivity of FMEA and how to aggregate scores. Subjectivity is one of the keys to FMEA, and team dynamics need to be understood to keep one expert from running away with the scores, as discussed above. By using Bayes' Theorem, this subjectivity can be retained and applied through mathematical processes.

REFERENCES

  1. W. H. Starbuck and F. J. Milliken, Challenger: fine-tuning the odds until something breaks, Journal of Management Studies, vol. 25(4), pp. 319-340, 1988.

  2. W. Harwood, Challenger Remembered: The Shuttle Challenger's Final Voyage, n.d.

  3. The Rogers Commission, Chapter 4: The Cause of the Accident. Retrieved February 20, 2015, from NASA: http://science.ksc.nasa.gov/shuttle/missions/51-l/docs/rogers-commission/Chapter-4.txt

  4. S. Bednarz and D. Marriott, Efficient analysis for FMEA (Space Shuttle reliability), Proceedings, Annual Reliability and Maintainability Symposium, pp. 416-421, 1988.

  5. J. M. Legg, Computerized approach for matrix-form FMEA, IEEE Transactions on Reliability, vol. R-27(4), pp. 254-257, 1978.

  6. H. B. Dussault, Automated FMEA status and failure, Annual Reliability and Maintainability Symposium, pp. 1-5, 1984.

  7. J. Webber, FMEA: Quality assurance methodology. Industrial Management & Data Systems, vol. 90(7), pp. 21-23, 1990.

  8. Z. Bluvband and E. Zilberberg, Knowledge base approach to integrated FMEA. Quality Congress. ASQ's Annual Quality Congress Proceedings. American Society for Quality, p. 535, 1998.

  9. R. A. Harpster, How to take the out of FMEAs. Measuring Business Excellence, vol. 3(3), pp. 20-24, 1999.

  10. F. Franceschini and M. Galetto, A new approach for evaluation of risk priorities of failure modes in FMEA, International Journal of Production Research, vol. 39(13), pp. 2991-3002, 2001.

  11. K. Xu, L. C. Tang, M. Xie, S. L. Ho, and M. L. Zhu, Fuzzy assessment of FMEA for engine systems, Reliability Engineering & System Safety, vol. 75(1), pp. 17-29, 2002.

  12. S. J. Rhee, and K. Ishii, Using cost based FMEA to enhance reliability and serviceability, Advanced Engineering Informatics, vol. 17(3), pp. 179-188, 2003.

  13. J. Kusler-Jensen, and A. Weinfurter, FMEA: An idea whose time has come, SSM, vol. 9(3), pp. 30, 2003.

  14. A. Shahin, Integration of FMEA and the kano model: An exploratory examination, International Journal of Quality & Reliability Management, vol. 21(7), pp. 731-746, 2004.

  15. R. K. Sharma, D. Kumar, and P. Kumar, Systematic failure mode effect analysis (FMEA) using fuzzy linguistic modelling, International Journal of Quality & Reliability Management, vol. 22(9), pp. 986-1004, 2005.

  16. G. Cassanelli, G. Mura, F. Fantini, M. Vanzi, B. Plano, Failure analysis-assisted FMEA, Microelectronics Reliability, vol. 46(9), pp. 1795-1799. 2006.

  17. L. Grunske, R. Colvin, and K. Winter, Probabilistic model-checking support for FMEA, The Fourth International Conference on the Quantitative Evaluation of Systems, pp. 119-128, 2007.

  18. C. S. Putcha, P. Kalia, F. Pizzano, G. Hoskins, C. Newton, and K. J. Kamdar, A case study on FMEA applications to system reliability studies, International Journal of Reliability, Quality, and Safety Engineering, vol. 15(2), pp. 159-166, 2008.

  19. Z. Yang, S. Bonsall, and J. Wang, Fuzzy rule-based Bayesian reasoning approach for prioritization of failures in FMEA, IEEE Transactions on Reliability, vol. 57(3), pp. 517-528, 2008.

  20. M. L. Chiozza, and C. Ponzetti, FMEA: A model for reducing medical errors, Clinica Chimica Acta, vol. 404(1), pp. 75-78, 2009.

  21. S. Yu, Q. Yang, J. Liu, and M. Pan, A comparison of FMEA, AFMEA and FTA. The Proceedings of 2011 9th International Conference on Reliability, Maintainability and Safety, pp. 954-960, 2011.

  22. N. S. Kulkarni and A. R. Johnson, Dempster-Shafer Theory approach to FMEA, IIE Annual Conference Proceedings, January 2012.

  23. R. Santos, J. Serra, and L. Caldeira, FMEA of a tailings dam. Georisk: Assessment and Management of Risk for Engineered Systems and Geohazards, vol. 6(2), pp. 89-104, 2012.

  24. M. Johnson, and J. R. Silverman, FMEA on FMEA, 2013 Proceedings Annual Reliability and Maintainability Symposium (RAMS).

  25. N. R. Tague, Failure Modes and Effects Analysis (FMEA). In N. R. Tague, The Quality Toolbox, 2nd Edition (pp. 236-240). ASQ Quality Press, February 2004. Retrieved from http://asq.org/learn-about-quality/process-analysis-tool/overview/fmea.html

  26. Department of Defense. Military Standard 1629A procedures for performing a failure mode, effects and criticality analysis, Lakehurst, NJ, USA: Department of Defense, Engineering Specifications and Standards Department, November 1980.

  27. P. Gargallo, J. M. Moreno-Jiménez, and M. Salvador, AHP-group decision making: A Bayesian approach based on mixtures for group pattern identification, Group Decision and Negotiation, vol. 16(6), p. 485, 2007.

  28. F. Dietrich, Bayesian group belief, Social Choice and Welfare, vol. 35(4), pp. 595-626, 2010.

  29. R. L. Keeney and R. Nau, A theorem for Bayesian group decisions, Journal of Risk and Uncertainty, vol. 43(1), pp. 1-17, 2011.
