Medical device security? Forget hackers, think 'hand-washing'
Simply put, security control considerations were "not really part of some of these early medical devices," said Kevin Fu, associate professor of electrical engineering and computer science at the University of Michigan.
But many of those very medical devices are still in wide use at hospitals across the U.S.
Fu, a longtime researcher in device security, routinely sees potentially dangerous faults in implants and bedside devices, he said Wednesday at Healthcare IT News' Privacy & Security Forum in Boston.
By way of example, he pointed to one local hospital that had "600 Windows XP boxes in deployment." To his astonishment, one hospital staffer told him that many were unpatched.
"If you're using this old software, these old operating systems, you're vulnerable to all that malware – that garden-variety malware – that has been out in the wild for more than 10 years," said Fu.
"This is not rocket science; this is basic hygiene," he said. "This is forgetting to wash your hands before going into the operating room. Here we have medical devices where, if malware gets through the perimeter, there is very little defense."
When it comes to device security, the "media tends to focus on the sensational," he added. "That causes the public – and even some in the (hospital) boardrooms – to misunderstand where the most significant risk is.
"In my opinion, it boils down to much more basic stuff," said Fu. "Hackers do exist. But again, it boils down to something much more basic: 'hand-washing.'"
Indeed, he said, the much bigger risks came from more mundane activities: the "infection vector" of a corrupted USB drive; a vendor applying software updates "and unknowingly infecting machines along the way because they're carrying malware along with them."
But the focus still too often tends to be on the "hacker on the outside," said Fu. "I'm not saying these people don't exist, but it often overshadows the really basic hygiene stuff: The guy you just let in the door because you have a contract with him, and he's spreading software throughout the hospital by accident."
He pointed to another recent headline-grabbing example: an infusion pump with a startling "low-hanging fruit" vulnerability. (One security blogger called it, "literally the least secure IP enabled device I've ever touched in my life.")
"If you actually telnet into the box, you get a root shell prompt," said Fu. "Back when I was at MIT on the network security team, if we were to see a root shell prompt when you telnetted in in the early '90s, that would have been a very critical problem. We would have contacted the owner of that machine and said, 'Uh, no. uh-uh. That's not good.'
"It was very surprising to me that some of these vulnerabilities existed at all," he added. "On the other hand, knowing that the medical device community tends to be slower in maintaining their machines, and the legacy machines hang around much longer than in other industries, that made it somewhat less surprising."
As far as he knew, "nobody was harmed by this device," said Fu. "But what surprised me was it was the first product advisory from the FDA that was entirely due to a cybersecurity vulnerability, as opposed to a cybersecurity threat."
He conceptualized the risk framework in three parts: vulnerabilities, threats and exploits.
"If you have a vulnerability and there's no threat, you're not going to have an exploit – you wouldn't have harm," said Fu. "Similarly, if you have a threat, but no vulnerability, you won't have an exploit."
The problem, he said, is that "it's very difficult to quantify, in any of the metrics clinical facilities are used to with safety, what is the likelihood of a vulnerability combining with a threat to become an exploit."
And these things can change overnight, said Fu: "You're just a small threat away from harm when you have devices that aren't even doing what I would consider basic hand-washing."