I just returned from a workshop sponsored by the FDA (together with the NHLBI and NSF) on Computer Methods for Cardiovascular Devices.  It was an excellent workshop, giving an audience of regulatory, academic, and industrial stakeholders a chance to catch up on the state of the art and current trends and, more generally, to exchange ideas on using computational methods to support regulatory filings for medical devices.  A general theme emerged for me during the workshop that I’d like to discuss in this article:

We are not providing the FDA with adequate validation of our computational models!

For years now I’ve been helping companies demonstrate the safety and effectiveness of their products.  I’ve written many FEA reports that have been reviewed and accepted by the agency, including cases where we argued to forgo expensive and time-consuming durability testing in favor of computational results supporting safety claims.  In my experience, the FDA has been very open to this approach, provided there was an adequate demonstration of the validity of the FEA models.

From what I heard from reviewers at the workshop, however, the typical submission of FEA results does not include adequate validation.  Perhaps companies don’t know how or what to provide to validate their FEA models; perhaps they are reluctant to share testing or data that the FDA has not specifically asked for; or perhaps they have unreasonable expectations about how much physical testing computational models can replace.  Whatever the reason, it is clear to me that if we want to leverage FEA to streamline the development and approval process, then we need to take a proactive role in demonstrating how well our models describe our products.

It is far less expensive and time-consuming to perform carefully designed bench tests to validate computational results than it is to run long-term durability tests on our devices and hope they pass.  It is also far less risky from a product development perspective.  It was clear at the workshop that the FDA understands this, and that they too are motivated to see a better balance between physical testing and computational modeling in a submission.

In my more than ten years of experience in the field, I have yet to encounter a device, specified test, or loading scenario that I could not analyze in Abaqus and achieve excellent agreement between experiment and simulation.  Many times the effort to match experiment and analysis reveals critical insight into the mechanics of the product, or nuances of the loading conditions, that leads to important improvements.  With advanced contact, strong nonlinear capabilities, and the extensibility of user subroutines, Abaqus provides a platform to model almost any physical scenario, giving the engineer and product designer a more than ample toolkit for validating any device.


Still, as open and receptive as the FDA may be, they are not in a position to advise us on how best to perform the appropriate validation.
As engineers we need to establish the validity of our computational models, and we need to do so BEFORE we submit results to the FDA.  In fact, we need to begin this effort early in the development process, before we start making decisions based on our computational data.  Otherwise, how can we expect the FDA to accept that our results emerged from a rigorous engineering methodology?

How much and what type of validation is necessary in any given case depends on how a model will be used.  Conversely, the confidence we have in a computational model depends on how extensively it has been applied and shown to agree with reality.  The development process offers numerous opportunities to establish the validity and range of applicability of our computational models.  Radial force testing of different stent designs, for example, provides an excellent opportunity to confirm our models’ ability to predict reality.
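As a minimal sketch of what such a quantitative comparison might look like, the snippet below computes two simple agreement metrics between a measured and a simulated radial force curve sampled at the same diameters.  The function name, the metrics chosen, and the numbers are purely illustrative assumptions on my part, not values or acceptance criteria from any standard or regulatory guidance:

```python
import numpy as np

def validation_metrics(measured, simulated):
    """Compare measured vs. simulated force curves sampled at the same
    diameters.  Returns the range-normalized RMS error and the relative
    error in peak force.  (Illustrative metrics only.)"""
    measured = np.asarray(measured, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    rmse = np.sqrt(np.mean((simulated - measured) ** 2))
    nrmse = rmse / (measured.max() - measured.min())      # normalized RMS error
    peak_err = abs(simulated.max() - measured.max()) / measured.max()
    return nrmse, peak_err

# Hypothetical bench data: radial force (N) at successive crimp diameters
measured  = [0.0, 1.2, 2.6, 4.1, 5.9, 8.0]
simulated = [0.0, 1.1, 2.7, 4.3, 5.7, 7.8]

nrmse, peak_err = validation_metrics(measured, simulated)
print(f"normalized RMSE: {nrmse:.3f}, peak-force error: {peak_err:.3f}")
```

Reporting a handful of such metrics across several designs, along with the raw curves, makes the claimed agreement between model and bench test concrete rather than anecdotal.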

In summary, the time is right to advance the use of computational models for demonstrating the safety of our products.  But we need to be proactive and use models that are well grounded in experimental data.  How far we can leverage these results with the FDA will depend on how good a job we do of convincing the agency that our models represent actual experience.