This is the seventh in a series of blog postings in which we present a top-ten list of common errors encountered in the context of medical projects. Of course, such a ranking depends on personal observations and individual experience – and is therefore inevitably subjective. Please feel invited to share your own perspective in the comments section!
Risk analyses are an integral part of the development of safety-critical devices and are thus present in every medical project. The ISO 14971 standard is of particular relevance in this context. Among other things, it contains requirements regarding the risk analysis process (including risk control) and the related documentation. While the standard itself maintains a high level of abstraction, it provides concrete hints in its informative annexes. For example, Annex C contains a list of specific questions that can help identify safety issues, and Annex G refers the reader to established methods such as preliminary hazard analysis (PHA), fault tree analysis (FTA), and failure mode and effects analysis (FMEA).
Although the common methods and goals of risk analysis are defined in a rather clear fashion, and despite all the secondary literature on this field, many a risk analysis is performed in an inappropriate manner. In what follows, I will describe four typical fallacies that I have encountered again and again:
- Fallacy 1: The risk analysis only considers risks. This is a fundamental mistake, because what the law really wants us to do is weigh the risks against the benefits – see, for instance, the first sentence of Annex I of the Medical Device Directive (93/42/EEC). In the end, it all comes down to a simple question: Will the world be better off with or without the medical device in question?
- Fallacy 2: The risk analysis is not based on established terms. Many standards contain distinctive definitions for relevant terms in order to reduce the probability of misunderstandings. This also holds true for ISO 14971 – it explicitly defines terms to be used in the discussion of risks (e.g., harm, hazard, residual risk, and severity). Consequently, we should stick to these when reasoning about risks! (For instance, the headers of FMEA tables should reflect the official terms, not employ some undefined fantasy language.)
- Fallacy 3: The risk analysis mixes up different dimensions of probability. Many methods, especially FMEAs, make use of probabilistic reasoning. This is not a problem in itself, but formal methods can easily create a false sense of validity. The most obvious example: if the input (namely, estimates of failure probabilities) is off the mark, the results won't mean much either – no matter how sophisticated the intermediate calculations look. There are worse pitfalls, however, and one of them is particularly nasty: the careless jumbling of incompatible units of probability. After all, 0.5% per patient is neither five times as likely as 0.1% per year nor half as likely as 1% per use!
- Fallacy 4: The risk analysis mixes up technical and domain-specific knowledge. The most classic example for this is when software engineers are asked to participate in an FMEA session, but then expected to deliver information about possible physical harm resulting from a given failure mode.
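To make the pitfall from Fallacy 3 concrete, here is a minimal sketch of converting a per-use probability into a per-year probability. The usage frequency (200 uses per year) and the independence of failures across uses are purely illustrative assumptions – in a real risk analysis, both would have to be justified for the device at hand.

```python
# Sketch: why probabilities with different units cannot be compared directly.
# Assumptions (hypothetical, for illustration only): the device is used
# 200 times per year, and failures are independent across uses.

def per_use_to_per_year(p_use: float, uses_per_year: int) -> float:
    """Probability of at least one failure within a year of independent uses."""
    return 1.0 - (1.0 - p_use) ** uses_per_year

p_use = 0.01  # "1% per use" from the example above
p_year = per_use_to_per_year(p_use, uses_per_year=200)
print(f"1% per use corresponds to {p_year:.1%} per year under these assumptions")
```

Under these (made-up) assumptions, "1% per use" compounds to well over 80% per year – which is why comparing raw percentages across different units is meaningless until everything has been converted to a common reference.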
Coming up next: #3, Late Safety. See you there!
- Top Ten Errors in Medical Projects – #5: V for Vendetta
- Top Ten Errors in Medical Projects – #6: As Low As Reasonably Practicable
- Top Ten Errors in Medical Projects – #7: The 100% Confusion
- Top Ten Errors in Medical Projects – #8: Eager Beaver / Lazy Bum
- Top Ten Errors in Medical Projects – #9: Process versus Project
- Top Ten Errors in Medical Projects – #10: Clinical Something!