[R]eliability

A Random Walk In the Medical Device Space

Gaussian Process Regression for FEA Designed Experiments - Building the Basics in R

A Google search for ‘Gaussian Process Regression’ returns some intimidating material for a non-statistician. After filtering away the obscure stuff I’ll never understand and digging around in the code that makes GPR happen, I’m proud to say I feel I’ve gotten my arms around the basics of GPR. The only way to confirm whether that’s true is to try to explain it - so that’s what I plan to do in this post.
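For a flavor of the machinery under the hood, here is a minimal sketch of the core GPR computation: a squared-exponential kernel and the posterior mean at new points. The inputs, length-scale, and noise level below are illustrative, not taken from the post.

```r
# Minimal GPR sketch: squared-exponential kernel with illustrative
# hyperparameters, then the posterior predictive mean at new inputs
sq_exp_kernel <- function(x1, x2, l = 1, sigma_f = 1) {
  sigma_f^2 * exp(-outer(x1, x2, function(a, b) (a - b)^2) / (2 * l^2))
}

x_train <- c(0.9, 1.8, 3.1, 4.4)      # observed inputs (made up)
y_train <- sin(x_train)               # observed responses (made up)
x_new   <- seq(0, 5, length.out = 50) # prediction grid
sigma_n <- 0.1                        # assumed observation noise

K   <- sq_exp_kernel(x_train, x_train) + sigma_n^2 * diag(length(x_train))
K_s <- sq_exp_kernel(x_new, x_train)

# posterior mean: K_* (K + sigma_n^2 I)^{-1} y
post_mean <- K_s %*% solve(K, y_train)
```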

Power Analysis for a DV Test - Frequentist and Bayesian Estimation in R

When testing is costly or resource intensive, it’s not uncommon for management to ask an engineer “what are the chances that we pass?”. The answer depends on the team’s collective domain knowledge about the product and the requirement, but also on how the question is interpreted. It will also be sensitive to sample size considerations. In this post, I will show how to conduct an analysis to inform the response from both a Frequentist and Bayesian perspective.
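As a teaser for the kind of analysis I mean, here is a minimal simulation-based sketch. The sample size, spec limit, and assumed population values are hypothetical, not from the post; the pass criterion is simply that every unit exceeds the requirement.

```r
# Rough sketch: estimate the probability of passing a test where all n units
# must exceed a spec limit, given assumed population parameters (made up)
set.seed(17)
n_sims    <- 10000
n_units   <- 12    # planned sample size (hypothetical)
spec      <- 10    # requirement (hypothetical)
true_mean <- 11.5  # engineering judgment about the population
true_sd   <- 0.8

passes <- replicate(n_sims, {
  test_data <- rnorm(n_units, mean = true_mean, sd = true_sd)
  all(test_data > spec)  # pass criterion: every unit meets the spec
})
mean(passes)  # estimated probability of passing
```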

Learning from Failure - Nitinol Fracture Mechanics in R

Despite our best efforts, nitinol implants fracture and fail. Sometimes we want them to fail (on the bench, to learn). Other times they fail unexpectedly and we need to find out why. When the failure is a fractured nitinol structural element, high magnification imaging of the fracture surface via optical microscopy and SEM is essential. A trained engineer can quickly identify the nature of the fracture (fatigue or overload) and spot obvious special causes such as witness marks or foreign material transfer.

A Real World Use Case for a Bayesian Reliability Model - How to Incorporate FEA into Risk Estimates

Frequentist statistical methods, despite their flaws, are generally serviceable for a large suite of practical problems faced by engineers during product development of medical devices. But even in domains where simple models usually do the trick, there remain instances where a Bayesian approach is the best (and perhaps only logical) way to tackle a problem. In the rest of this post, I will lay out a technical use-case and associated modeling workflow that is based on a real business problem encountered in the medical device industry.

Durability Testing of Stents Using Sensitivity-Based Methods in R

The current industry protocol for durability testing of vascular stents and frames involves testing many implants simultaneously at a range of different stimulus magnitudes (typically strain or stress). The test levels are spread out like a grid across the stimulus range of interest. Each implant is tested to failure or run-out at its specified level and a model is fit to the data using methods similar to those described in Meeker and Escobar.
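To make the idea concrete, here is a toy sketch of one way such data might be fit: run-outs treated as right-censored observations and a lognormal cycles-to-failure model via the survival package. The data, strain levels, and distribution choice are my own assumptions, not the protocol's.

```r
# Illustrative only: fit a fatigue-life model to failure / run-out data,
# with run-outs treated as right-censored observations
library(survival)

dat <- data.frame(
  strain = c(0.40, 0.40, 0.55, 0.55, 0.70, 0.70, 0.85, 0.85),   # % strain (made up)
  cycles = c(4e8, 4e8, 2.1e7, 8.9e6, 1.3e6, 9.0e5, 3.1e5, 2.2e5),
  failed = c(0, 0, 1, 1, 1, 1, 1, 1)                            # 0 = run-out
)

# lognormal cycles-to-failure as a function of strain (one of several model
# forms discussed in Meeker and Escobar)
fit <- survreg(Surv(cycles, failed) ~ log(strain), data = dat, dist = "lognormal")
summary(fit)
```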

Trying to Trick Linear Regression - Estimating Coefficients for Variables in R

In this post we will try to trick linear regression into thinking that a redundant variable is statistically significant. By redundant, I mean a candidate predictor variable that in reality is just noise (no effect on the outcome) but that we might include in an experiment because we don’t know if it is important or not. The trick is that we can set up the data generating process such that a redundant variable is highly correlated with the response.
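One possible data generating process for this trick (a sketch with made-up numbers, not necessarily the one used in the post): the redundant variable z has no effect on y, but it is constructed to track the true predictor x, so it ends up highly correlated with the response.

```r
# Redundant variable z: pure noise with respect to y, but built to track x
set.seed(42)
n <- 100
x <- rnorm(n)                  # true driver of the response
z <- x + rnorm(n, sd = 0.2)    # redundant variable, correlated with x
y <- 2 * x + rnorm(n, sd = 1)  # y depends on x only

# Regressing y on the redundant variable alone makes z look "significant"
summary(lm(y ~ z))
```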

Could AutoML win in the 'Sliced' Data Science Competition? The answer may shock you!

In this post I’ll be taking a break from my normal explorations in the medical device domain to talk about Sliced. Sliced is a 2-hour data science competition streamed on Twitch and hosted by Meg Risdal and Nick Wan. Four competitors tackle a prediction problem in real time using whatever coding language or tools they prefer, grabbing bonus points along the way for Data Visualization and/or stumbling onto Golden Features (hint: always calculate the air density when training on weather data).
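For the curious, the air density hint boils down to the ideal gas law. A rough dry-air version is sketched below; the function name, inputs, and units are my own assumptions, not part of the Sliced data.

```r
# Dry-air density from the ideal gas law (rough approximation; humidity ignored)
air_density <- function(pressure_hpa, temp_c) {
  r_dry <- 287.05                                     # specific gas constant, J/(kg*K)
  (pressure_hpa * 100) / (r_dry * (temp_c + 273.15))  # kg/m^3
}

air_density(pressure_hpa = 1013, temp_c = 20)  # ~1.2 kg/m^3
```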

Exploring Frequentist and Bayesian Tolerance Intervals in R

Tolerance intervals are used to specify coverage of a population of data. In the frequentist framework, the width of the interval depends on the desired coverage proportion and the specified confidence level. They are widely used in the medical device industry because they can be compared directly against product specifications, allowing the engineer to judge what percentage of parts would meet the spec while accounting for sampling uncertainty.
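As a quick illustration of the frequentist version, here is a minimal sketch of a two-sided normal tolerance interval using Howe's k-factor approximation. The data are simulated, and packages like tolerance provide exact methods.

```r
# Two-sided normal tolerance interval via Howe's k-factor approximation
set.seed(7)
x    <- rnorm(20, mean = 50, sd = 2)  # made-up measurement data
conf <- 0.95                          # confidence level
p    <- 0.90                          # desired population coverage
n    <- length(x)

k <- sqrt((n - 1) * (1 + 1/n) * qnorm((1 + p) / 2)^2 / qchisq(1 - conf, df = n - 1))
c(lower = mean(x) - k * sd(x), upper = mean(x) + k * sd(x))
```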

Tutorial - Design Study in Solidworks with Data Analysis in R

I decided to do something a little bit different with this post and show how R can be used in tandem with a traditional engineering CAD program. Together they form a streamlined, repeatable workflow that I’ve tried to leverage in my own work when it makes sense to do so. Solidworks is a 3D CAD program that is commonly used in industry. One powerful feature that I don’t see used very often is the Design Study module.
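The hand-off between the two tools is simple in principle: export the Design Study results table and read it into R. A sketch of that step is below; the file name and column names are hypothetical.

```r
# Read a results table exported from a Solidworks Design Study and summarize
# (file and column names below are hypothetical)
library(readr)
library(dplyr)

results <- read_csv("design_study_results.csv")

results %>%
  group_by(wall_thickness) %>%                    # a design variable in the study
  summarize(max_stress = max(von_mises_stress))   # a tracked output sensor
```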

Boundary Conditions and Anatomy - Correlated Data and Kernel Density Estimation in R

Measurements taken from patient anatomy are often correlated. For example, larger blood vessels might tend to have less curvature. Additionally, the data are rarely Gaussian, favoring skewed shapes with some very large values and a lower bound of zero. These properties can make simulation and inference hard. In this post I will walk through a workflow for the sort of engineering problem that comes up in my industry.
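As a preview of that workflow, here is a small sketch under my own assumptions: draw correlated Gaussians, transform them to skewed, strictly positive "anatomic" measurements, and estimate their joint density with a 2-D KDE.

```r
# Simulate correlated, skewed measurements and fit a 2-D kernel density estimate
library(MASS)

set.seed(123)
sigma <- matrix(c(1, -0.6, -0.6, 1), nrow = 2)  # negative correlation (made up)
z     <- mvrnorm(n = 500, mu = c(0, 0), Sigma = sigma)

vessel_diam <- exp(0.15 * z[, 1] + log(6))    # mm; skewed, bounded below by zero
curvature   <- exp(0.40 * z[, 2] + log(0.2))  # 1/mm; skewed

dens <- kde2d(vessel_diam, curvature, n = 50) # joint KDE on a 50 x 50 grid
contour(dens, xlab = "Vessel diameter (mm)", ylab = "Curvature (1/mm)")
```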