
Adam Bruehl Of Security Journey On Why the US Government is Getting Serious About Medical Device Cybersecurity


This article was originally posted by David Leichner on Medium.

As devices become more connected, demonstrably secure communications are paramount to patient safety. This includes secure over-the-air/wire communication, authentication, authorization, and an assessment of tolerable risk when the control channel is compromised.

In an era where technology is revolutionizing healthcare, medical devices — from pacemakers to insulin pumps to hospital imaging machines — are becoming increasingly interconnected. While these advancements offer unprecedented benefits, they also expose healthcare systems and patients to new cybersecurity risks. Cyberattacks on medical devices can result in compromised patient safety, data breaches, and even loss of life. Acknowledging the gravity of the issue, the US Government is ramping up its focus on medical device cybersecurity through regulations, initiatives, and collaborations with industry stakeholders. As a part of this series, we had the pleasure of interviewing Adam Bruehl.

Adam is a Senior DevOps Engineer at Security Journey with extensive experience in medical device security for the healthcare industry. He has collaborated with pharmaceutical organizations to ensure compliance with FDA cyber requirements and has a deep understanding of the challenges surrounding IoMT devices.

Thank you so much for joining us in this interview series! Before we dig in, our readers would like to get to know you. Can you tell us a bit about how you grew up?

I grew up in Idaho, where, whether it was hiking, biking, skiing, or soccer, I spent most of my time outside in the Boise foothills. Otherwise, it was likely a fairly uneventful and typical childhood for the area.

 

Is there a particular story that inspired you to pursue a career in this field? We’d love to hear it.

Not specifically. In college, I decided I wanted to pursue a career in Bioinformatics — the intersection between Computer Science and Genetics. In that field, software developers and researchers work together to distill giant sets of biological data down to novel truths.

When I graduated college in 2008, the bottom kind of fell out of the job market and research budgets, which led me to pursue a job in pharma. My first job in FDA land was at a Clinical Research Organization (CRO), where I started as a software developer and later moved into, and moonlit in, other roles. Drug companies hire CROs to run their clinical trials, which can cover any subdiscipline from data management to biostatistics to clinical sites and subject recruitment. Later, I ended up working for several medical device companies, developing everything from SaaS medical devices to imaging platforms, medication dispensers, ultrasounds, and wearable EKGs. Each of them is fascinating in its own way.

On the one hand, it’s a highly rewarding career path. If this drug or device works, we save or improve the quality of someone’s life. On the other hand, it’s exhausting because the stakes are high — our failures can hurt someone.

Early in my career, we were contracted by multiple organizations to work on independently developed Ebola/Marburg vaccines. It was fascinating to be able to understand the nuances between them, how they were developed, and how they performed. Then, a few years later, when watching the news about an outbreak, I heard a familiar name. One of the vaccines made it to the real world. I may have only contributed a small piece to that product, but maybe, just maybe, I helped someone live a bit longer.

 

Are you working on any exciting new projects now? How do you think that will help people?

Today, I’m working in the B2B cybersecurity space for a secure coding training provider, Security Journey, whose goal is to continuously train developers to spot and fix both emerging threats and the classics that just never seem to go away.

This is one of the areas I think I have the most to give back to the medical device world. Not because I’m a security expert, but because I understand both their world and the cloud-connected world they are charging into full speed ahead. In my opinion, effort is needed to bridge the gap. Rarely does a day go by when I don’t take our training and think to myself… Oh, yeah… I’ve seen this in the wild.

 

Ok, thank you. Let’s now move on to our main topic. For the uninitiated, can you explain the nature and scope of cybersecurity threats to modern medical devices? How significant is the risk in comparison to other sectors?

Medical devices have one of the broadest, most nuanced, and (potentially) most dangerous cybersecurity footprints of any industry I’ve worked in.

By definition, a medical device is ‘any device intended for medical purposes’, but I don’t find that definition very helpful. Instead, I prefer to bracket the range with devices people have used. On the low-risk side, this includes everything from Band-Aids to tongue depressors to Bluetooth smart toothbrushes, while the high-risk side covers life-critical systems such as pacemakers and ventilators. Then you must compound this complexity with the breadth of environments in which they operate: some are at home, some are in hospitals, and others are embedded in people’s bodies. Historically, devices are subdivided into three categories depending on the risk their failure poses to the patient’s or user’s well-being, ranging from the low-risk Class I (toothbrush) to the high-risk Class III (pacemaker). This risk classification informs both the regulatory scrutiny a device is under and how rigorous the design, testing, manufacturing, and process controls used to make and market the device need to be.

Device regulations tend to have two origins. The first set was adopted from the lessons learned for producing quality manufactured goods. The second set is far more humbling. They were written in blood.

When it comes to cybersecurity, there are two primary areas of concern. First, device regulations are largely focused on patient risk and device failure, which historically did not include cybersecurity. So, a pacemaker should run like a champion for its entire device lifespan; however, there’s no requirement for minimum safeguards against altering its programming. And historically, this was OK: devices were isolated, and attacks required physical access.

Second, over the last decade or so, the industry has finally jumped into the internet era, and we are seeing some amazing technology come out. In the process, it’s bringing the risk of both the IT and SaaS/Cloud industries with it. And believe me, many cybersecurity best practices, like monthly security updates, are entirely incompatible with many regulations and manufacturer processes.

To put it in perspective: let’s take two patients who both receive an ultrasound of their heart. One requires surgery; the other does not. One Class II device I worked on processed these images. For us, the most dangerous thing we could do was mix up patient images: one patient might undergo a risky and unnecessary surgery, and the other could be denied life-saving treatment. That is just the risk associated with a normal software bug. If a malicious actor found an exploit and wanted to do more, they could.

 

What are your “5 Things Everyone Should Know About Medical Device Cybersecurity?”

1. As devices become more connected, demonstrably secure communications are paramount to patient safety. This includes secure over-the-air/wire communication, authentication, authorization, and an assessment of tolerable risk when the control channel is compromised.

One example that comes to mind is insulin pumps. Over the last decade and a half, several pumps have had documented vulnerabilities that (after much delay) resulted in warnings and recalls. To cherry-pick from an October 2021 notice…

According to the FDA:

Using specialized equipment, an unauthorized person could instruct the pump to either over-deliver insulin to a patient, leading to low blood sugar (hypoglycemia), or stop insulin delivery, leading to high blood sugar and diabetic ketoacidosis, even death.

Personally, I feel the FDA is burying the lede. In all these cases, the pump included a wireless remote that sent control commands to the pump over an insecure communication protocol on a wireless ISM band. Any action the remote could instruct the pump to perform could be mimicked or blocked by an unauthorized user, and the pump is none the wiser.

The arrow of progress is a harsh companion. When this device was first conceived in the late ’90s, wireless control was a novel and amazing feature, and this attack was prohibitively difficult for all but a small community of RF enthusiasts. However, over the intervening decade or two, hobbyist radio hardware progressed with the same velocity as cell phones. Today, this attack can be carried out with approximately $100 worth of hardware purchasable online. To be clear, there are no publicly known attacks using this exploit to harm people. However, devices with this known exploit have been on the market for nearly 20 years.

How does this happen?

In my experience, medical devices are designed to be reliable in the face of normal environmental and operator entropy, but security historically wasn’t a design consideration. And, in the past, why would it be? Many devices were isolated or only wirelessly relayed monitoring information via dedicated medical networks. However, as the industry steps out of this air-gapped sandbox and into the era of the cloud, it must adopt the same security posture we have in the cloud — If someone can mess with it, they will. Trust no one and make it secure by design; you may only get one chance.
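To make “secure by design” concrete, here’s a minimal sketch, in Python, of the kind of safeguard those pump remotes lacked: every command is authenticated with a key shared at pairing time, and a monotonic counter rejects replayed transmissions. The command names and framing are invented for illustration, not taken from any real pump.

```python
import hashlib
import hmac
import json
import os
import struct

# Hypothetical shared secret, provisioned into both remote and pump at pairing time.
SHARED_KEY = os.urandom(32)

def sign_command(counter: int, command: dict) -> bytes:
    """Remote side: prefix a monotonic counter, then append an HMAC-SHA256 tag."""
    payload = struct.pack(">Q", counter) + json.dumps(command, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return payload + tag

class Pump:
    def __init__(self) -> None:
        self.last_counter = -1

    def accept(self, message: bytes):
        """Pump side: drop anything unsigned, tampered with, or replayed."""
        payload, tag = message[:-32], message[-32:]
        expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return None  # forged or corrupted command
        (counter,) = struct.unpack(">Q", payload[:8])
        if counter <= self.last_counter:
            return None  # replayed or reordered command
        self.last_counter = counter
        return json.loads(payload[8:])

pump = Pump()
msg = sign_command(1, {"action": "deliver_bolus", "units": 2.0})
print(pump.accept(msg))  # parsed command: accepted
print(pump.accept(msg))  # same bytes again: rejected as a replay
```

A real device would also need key provisioning, key rotation, and a defined fail-safe state when commands stop arriving, but authentication and replay protection like this are the minimum bar for any wireless control channel.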

2. Between their design, sales, and operational life cycles, it’s not uncommon for devices to have a 20+ year lifecycle during which they remain largely unchanged. Unfortunately, this doesn’t align with the reality of cybersecurity best practices. Without fail, a device built today will outlive several generations of cybersecurity best practices. As such, it is paramount that manufacturers ensure their devices are not only secure by today’s standards but can be secured to tomorrow’s standards.

One communication protocol I am (unfortunately) familiar with is Health Level Seven (HL7). Back in the late 1970s/early 1980s, electronic devices began to proliferate in hospitals, and, rightfully so, device manufacturers felt they should establish standards for how those devices communicate with each other. Enter HL7.

How is HL7 used? If a doctor requests medication for a patient via the EMR/EHR system, that system sends an HL7 message to the pharmacy management system, which drops the order into the pharmacy work queue. When the prescription is filled, the pharmacy sends another HL7 message back saying it is ready for pickup, and may even alert the billing system to add the cost to the patient’s bill. Functionally, HL7 is part of the nervous system that glues the various hospital systems together. It has been used to communicate with a wide variety of systems, from medical records to imaging to prescribing to billing.

Unfortunately, there are two major flaws with this protocol:

  1. It has no concept of security. Period.
  2. It’s still widely used today.

As a design decision (not unreasonable for the time), the protocol focused only on ensuring a message produced by one system could be consumed (with varying effort) by another arbitrary system. The specification covered the layout and content of the messages but provided zero guidance on message verification, authentication, authorization, or secure transmission.
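To make that concrete, here is a sketch of what sending an HL7 v2 order over MLLP (the simple framing typically used to carry HL7 v2 over TCP) looks like. The host, port, and message fields are fabricated for illustration. Note what’s missing: no credentials, no signature, no encryption; anyone who can reach the port can inject an order.

```python
import socket

# A fabricated HL7 v2 pharmacy order (ORM^O01). Fields are pipe-delimited,
# and segments are separated by carriage returns.
hl7_message = "\r".join([
    "MSH|^~\\&|EHR|HOSP|PHARMACY|HOSP|202310010930||ORM^O01|MSG00001|P|2.3",
    "PID|1||123456^^^HOSP||DOE^JANE",
    "ORC|NW|ORDER001",
    "RXO|ASA81^Aspirin 81mg||81|mg",
])

# MLLP framing: <VT> + message + <FS><CR>. That is the entire envelope:
# no authentication, authorization, or encryption anywhere in the exchange.
frame = b"\x0b" + hl7_message.encode("ascii") + b"\x1c\x0d"

with socket.create_connection(("pharmacy.example.internal", 2575)) as sock:
    sock.sendall(frame)
    ack = sock.recv(4096)  # receiver replies with an HL7 ACK, also unauthenticated
```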

From a historical perspective, these shortcomings and decisions make sense. There was no internet (as we know it), and these systems would be connected via dedicated networks and point-to-point connections. Why would we need authorization when it’s a point-to-point cable from one side of the room to another? As such, it wasn’t considered. As with all things, the march of time carried on. These dedicated links were abandoned in favor of modern networks. But HL7 was not.

And therein lies the crux of it. HL7, which was designed for trusted physical networks, moved to untrusted networks potentially shared with public Wi-Fi.

Many steps have been taken to secure HL7 communication. That said, every measure is simply an attempt to isolate an insecure protocol rather than implement a secure-by-design one, and these kinds of workarounds only go so far.
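One common workaround, for instance, is to tunnel the legacy connection through mutually authenticated TLS, roughly as sketched below (hosts and certificate files are hypothetical). The pipe is now encrypted, but the HL7 v2 messages inside remain unsigned and unauthenticated: any system admitted through the tunnel can still say anything to any other.

```python
import socket
import ssl

# The same kind of plaintext MLLP frame as in the previous sketch.
frame = (b"\x0b"
         + b"MSH|^~\\&|EHR|HOSP|PHARMACY|HOSP|202310010930||ORM^O01|MSG00002|P|2.3"
         + b"\x1c\x0d")

# Mutually authenticated TLS: the client verifies the server against a
# hospital CA and presents its own certificate. This secures the pipe,
# not the protocol; the messages inside carry no identity of their own.
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="hospital-ca.pem")
ctx.load_cert_chain(certfile="device.pem", keyfile="device.key")

with socket.create_connection(("pharmacy.example.internal", 2575)) as raw:
    with ctx.wrap_socket(raw, server_hostname="pharmacy.example.internal") as tls:
        tls.sendall(frame)
```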

A colleague of mine recently shared a story about an X-ray machine that was a particular thorn in his side. The device was first commissioned nearly 20 years before our conversation and, to his knowledge, had never received a significant software update. It was built for, and only supported, one specific version of HL7: version 2. As a result, every system it talked to needed to, and always will need to, support HL7 v2. Even if a new downstream system supported a modern secure-by-design protocol, it still had to use HL7.

In theory, when two systems supported more modern protocols, they could use the highest mutually supported protocol. However, a diversity of protocols is not without its own risk — could you imagine an internet with 5 wildly different alternatives to HTTP? It’s absolutely doable but not without its own costs and risks. The few times I saw folks attempt to use something else, without fail, there was always a dedicated translation layer to maintain backward compatibility with HL7.

Decisions made in the ’80s and baked into hardware from the ’90s were directly impacting our ability to secure communications in 2015.

And therein lies the friction:

  • From a clinical perspective, that old X-ray works great and functions as designed. Since there was no clinical need, there was no pressure on the manufacturer to implement modern standards.
  • If there is an update, installing it means the hospital incurs significant downtime and expense to rigorously evaluate whether the machine is working properly. What would motivate them to pay that bill for a simple protocol change?
  • At the time, the FDA could only regulate clinical efficacy, and, even if they found a device insecure, it was not a valid reason to block the device from the market.

Then mix in the fact that the HL7 protocol has existed since before cybersecurity was even a concept, and that hospitals are still running technological dinosaurs, and we’re just asking for an exploit or data leak. As an industry, device manufacturers need to drive cybersecurity modernization with as much effort and consideration as clinical efficacy.

3. While medical devices often lag well behind consumer electronics, they have finally crossed the cybersecurity Rubicon into the cloud-connected smartphone era. With this comes a headache for every IT administrator and something relatively new to the world of medical devices — system updates.

When devices were simple, their processors were simple and typically did only one thing. Were they hackable? Probably, but without a network connection, it would require physically tampering with the hardware. But as we add more and more features to these devices (Wi-Fi, Bluetooth, cloud connectivity, sweet touch screens, and apps), those processors just don’t cut it anymore. This has pushed manufacturers in two new directions. Either the device stays simple but talks to an app on a smartphone that does the heavy lifting, or the device embeds a smartphone-like processor to handle the heavy lifting itself.

In the first case, when something is tethered to a phone or tablet, it is only as secure as that phone or tablet. What apps are on it? Has it been hacked? Does my smartphone safely handle medical data? But at least I know two things. First, Apple and Google both regularly release updates to remediate security issues, and second, they regularly scan for malicious apps and take steps to remediate them. They don’t catch every cybersecurity issue (and never will), but at least they have a regular and consistent process to reliably perform those updates.

In the second case, the device now has a fully featured, internet-connected operating system inside of it. These, for better or worse, are typically moderately customized OSs derived from a handful of mainline OSs. In the wild, I’ve seen flavors of Windows, Linux, and Android inside of devices, including devices in hospitals, clinical settings, and the home.

These devices scare me the most because, if the manufacturer doesn’t provide an easy and regular update path (like my iPhone has), from a cybersecurity perspective, it’s easy to forget they exist. And even if I remember they exist, how do I patch them without the manufacturer’s help?

In an ideal world, every month I could poke a button on my device, it would download an update, reboot, and we would be good to go. But it is not that simple. The regulations, culture, and technology simply are not in place to enable this. The manufacture of medical devices is a heavily regulated, process-oriented world broadly modeled on industrial manufacturing, not software development. Things like monthly over-the-air updates are both paradigm-shattering and expensive.

In the past, with those simple devices, the only time a manufacturer would need to update the code on a device was when a condition was identified that could result in patient harm. In an ideal world, a device is never updated. This makes sense. From a design perspective, if that insulin pump is properly programmed, it will faithfully and reliably dispense medicine until its retirement. If you do choose to update the software, you’re introducing risk. After an update, maybe the device behaves differently, maybe it will become a brick, or maybe there’s a new bug that is even riskier to the patient.

This stance comes from a position of extreme caution regarding patient safety. If I update that code for anything except reducing the risk of harm, I may be creating a new source of harm. I have seen a device bricked after a bad update. However, medical devices are now in a world that requires us to consider not only clinical harm but also harm created by a malicious actor.

When a manufacturer creates a modern networked device, they must commit to and maintain a consistent, reliable, and simple security update channel for the duration of the device’s lifecycle. In the case of embedded operating systems, this cannot require the user to remember to update their system but should integrate an over-the-air update system like modern cell phones do.
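As a sketch of what the device side of such an update channel might verify, here is a minimal example using the third-party cryptography package (the key handling is simplified and the A/B staging step is described only in a comment): the device refuses any firmware image that wasn’t signed by the manufacturer’s key.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# In production, the private key never leaves the manufacturer, and the
# public key is baked into the device at manufacture. Generated here
# only so the demo is self-contained.
signing_key = ed25519.Ed25519PrivateKey.generate()
device_pubkey = signing_key.public_key()

def build_release(image: bytes) -> tuple[bytes, bytes]:
    """Manufacturer side: sign the firmware image."""
    return image, signing_key.sign(image)

def verify_and_stage(image: bytes, signature: bytes) -> bool:
    """Device side: stage nothing the manufacturer did not sign."""
    try:
        device_pubkey.verify(signature, image)
    except InvalidSignature:
        return False
    # A real device would now write the image to the inactive A/B slot,
    # keeping the old image to roll back to if the new one fails to boot.
    return True

image, sig = build_release(b"firmware v2.1")
assert verify_and_stage(image, sig)        # genuine update: accepted
assert not verify_and_stage(b"evil", sig)  # tampered image: rejected
```

Paired with an A/B partition scheme and automatic rollback, this is roughly the machinery that made monthly phone updates boring, and it is the bar networked medical devices need to clear.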

4. Regulations are both a blessing and a curse.

I’ve spent more than 10 years of my career under the FDA’s gaze, and, no matter how frustrating I find the regulations, they are there for a reason. They are either a good idea for manufacturing in general, or they are there because previously, something went wrong, and someone got hurt.

As a result, many of the regulations are similar to, or inspired by, those of industries with similar risks. My personal favorite (but imperfect) analogy when discussing risk and regulation is aviation. If a vendor fails to meet metallurgy standards for a part, the risk can vary greatly. If that bolt is used in a seat-back tray and it fails, it’s annoying but safe (toothbrush). However, if that bolt held the wing on, it’s going to be a bad day (pacemaker). So, naturally, we have regulations about how to choose what kind of bolt we need, qualify vendors, ensure the bolt meets specification when it arrives, be sure it was properly installed, and, because nothing is perfect, figure out where the process went wrong when a bolt has an issue in the field.

It can be exhausting. I have had colleagues spend weeks trying to find the right bolt, and I’ve spent many nights, weekends, and holidays understanding what happened with failed devices.

However, this laser focus on harm and risk is not without consequence. If my success and definition of done are defined as checking this specific set of boxes, well, that’s exactly what I’ll do. I may even forget that unlisted checkboxes like cybersecurity exist.

That HL7 protocol? → Cybersecurity is not a consideration.

And as such, it never gets done.

When it comes to cybersecurity, historically, there is no regulatory line item. There is no measure of success or failure, and as such, it gets left by the wayside. If anyone raises concerns, it’s easy to dismiss them as an unnecessary expense since it’s not on the list.

5. People really do care about patient outcomes.

During my tenure in both Pharma and Devices, one thing was clear: nearly everyone genuinely cared about patient outcomes. When designing a product, people would spend a tremendous amount of time discussing usability for various cohorts of users and spend hours mapping out every way a device could fail and how to mitigate the risk.

Once a product was released to the market, if anything went wrong that could (or did) cause patient harm, it was always all hands on deck until the issues were identified and remediated. Often, entire teams would commit their nights, weekends, and holidays to mitigate patient issues.

With every device I worked on, it was generally true that everyone had used it themselves, had a loved one who used it, or cared about someone who could benefit from it. As such, we all had skin in the game and genuinely cared about our patient outcomes.

Unfortunately, when you are down in the trenches, and every requirement staring you in the face is a functional, clinical, or regulatory requirement, it’s easy to lose sight of complex non-requirements like cybersecurity.

 

Let’s talk about the future. Considering the pace of technological advancements and the growing emphasis on cybersecurity, where do you see the future of medical device security in the next 5–10 years? Are there emerging technologies or methods that hold particular promise in safeguarding patient health and data?

When reflecting on the device teams I’ve been a part of, there appears to be a feedback loop between the regulations and corporate structure/culture.

Why are we structured this way? Because of the regulations.

Why are the regulations that way? Because that’s how the industry structure was when they were written.

As such, I’m encouraged by recent steps by both regulators and the industry to make cybersecurity a top-level line item.

As recently as this year, a device a colleague worked on was rejected by the FDA for an inadequate cybersecurity risk and process assessment. I felt really bad for them, but the FDA was not wrong.

Had this device applied for approval in 2022, it would not have been rejected; the FDA could not reject devices for cybersecurity reasons until March 29, 2023. Now we need to see where things go from here. I hope that rejections like the one above encourage manufacturers to reach out to the rest of the tech sector and internalize its best practices as their own. Ideally, this will slowly change device culture until cybersecurity is simply part of the team’s DNA.

 

You are a person of enormous influence. If you could inspire a movement that would bring the most amount of good to the most amount of people, what would that be? You never know what your idea can trigger. 😊

One thing that has been on my mind recently is the concept of mutual aid. In this model, the community gets together and engages in mutual giving. For example, I have chickens, and at one point, they were laying 6 dozen eggs a week. We can’t eat that many eggs in a week, so they must go or get tossed. Maybe I go to nextdoor.com or parents’ night at school and let everyone know I have eggs for whoever wants or needs them. This provides our community with two opportunities. First, I’m interacting with my neighbors and providing them with the resources I might otherwise have given to a charity. Second, it allows them to give something back. Maybe 3 months later, I’m looking for help with a project, and that same family has some kids who need something to do over the weekend.

Instead of a faceless charity, I gave my excess resources to a local person who could use them. And in return, they can give back as they are able. To me, it’s a fantastic way to build strong local communities. The result is I know my neighbors, they know me, and we’re constantly strengthening those bonds.

 

How can our readers further follow your work online?

I can be found on LinkedIn and GitHub.

 

This was very inspiring and informative. Thank you so much for the time you spent on this interview!

About The Interviewer: David Leichner is a veteran of the Israeli high-tech industry with significant experience in the areas of cyber and security, enterprise software, and communications. At Cybellum, a leading provider of Product Security Lifecycle Management, David is responsible for creating and executing the marketing strategy and managing the global marketing team that forms the foundation for Cybellum’s product and market penetration. Prior to Cybellum, David was CMO at SQream and VP of Sales and Marketing at endpoint protection vendor, Cynet. David is a member of the Board of Trustees of the Jerusalem Technology College. He holds a BA in Information Systems Management and an MBA in International Business from the City University of New York.