Johns Hopkins CISO: Don't overlook the critical importance of foundational infrastructure
Johns Hopkins Chief Information Security Officer Darren Lacey describes the security crisis facing healthcare by envisioning a scenario in another industry.
"Imagine you are incrementally improving your controls in, say, financial management," he says. "And suddenly you wake up and all transactions are now done in bitcoin or some new exotic currency. All those people and processes that worked reasonably well yesterday are now left wanting.
"This is the crisis we face in information security today, especially in enterprise healthcare information security," he explains, "where suddenly we have found ourselves in the crosshairs of the ransomware gangs."
Lacey, one of the top CISOs working in healthcare today, says he is interested in open-source tooling and the rise of memory-safe languages. "In other words, I am increasingly interested in foundational technologies underlying our infrastructure," he says.
We spoke with Lacey recently for a wide-ranging interview about those foundational technologies and others. He offered a frank and detailed perspective on what he is focused on at Johns Hopkins, and on what healthcare IT and security leaders should be thinking about as they manage the cybersecurity of their own IT infrastructure.
Q. As a CISO, you say you're increasingly interested in foundational technologies underlying healthcare's infrastructure. Why is that? And why now?
A. For a long time, people would ask me about the importance of zero-day vulnerabilities, those vulnerabilities that are actively exploited before a patch is available. My typical response was that I spend most of my time worrying about "zero-year" vulnerabilities.
Most adversaries were happy attacking vulnerabilities that were months or years old and that, for any number of often justifiable reasons, had not been patched. Most of us more or less equated vulnerability management – one of the three or four primary missions of enterprise information security – with the orderly testing and deployment of security patches up and down the technology stack.
In an age where most of our applications and tools are built on a lattice of third-party software and open-source dependencies, getting patching right has never been more challenging. Yet even when we are able to maintain a mature vulnerability management program, the past two or three years have demonstrated it may not be enough to address the latest threats.
The rapid deployment of zero-day exploits, and even exploits that have neither been published nor patched – what I call "minus-day" exploits – has turned vulnerability management on its head. For the past 10 years or so, enterprise information security practice primarily involved hardening privileged accounts, deploying multifactor authentication as widely as possible, building a solid incident detection and response capability, and maintaining Patch Tuesday vigilance in the vulnerability management program.
Now you can do all of these things and still easily fall prey to state-actor compromises, or much more likely, financially motivated ransomware attacks.
For those readers outside of information security, imagine you are incrementally improving your controls in, say, financial management, and suddenly you wake up and all transactions are now done in bitcoin or some new exotic currency. All those people and processes that worked reasonably well yesterday are now left wanting.
This is the crisis we face in information security today, especially in enterprise healthcare information security, where suddenly we have found ourselves in the crosshairs of the ransomware gangs.
So far, I have prattled on for a bit, but not even begun to answer your question. Yet understanding the context of our current predicament is perhaps more important than understanding the response that many of us are working through.
Cybersecurity in healthcare has never been more precarious. We are at greater risk, with, it seems, fewer ways to respond effectively. The old saw about security programs being "patch and pray" vastly understates how vulnerable we are to the vicissitudes of our threat environment.
We therefore need a new paradigm, and unfortunately the model du jour, "zero trust," however useful it might be, is not designed to account for the dramatic change in threat. While none of us are clear on a complete response, there are certain pieces that are coming into focus.
Reasonably well-understood but typically second-order controls like attack surface management, continuous adversarial testing, threat intelligence and AI-driven behavioral analysis are coming to the fore.
My personal interests are taking me on a slightly different tack. If you are a philosopher and you find yourself stuck on a resistant problem in, say, ethics, it is often a good idea to retrace your steps back to the foundations of the problems in your field.
That may mean going back and reading Plato, or it may mean rethinking the most primitive concepts in your problem space. Unfortunately, neither Plato nor Aristotle had much to say about cyber, but we can still look at our primitives. And interestingly, our primitives are in flux nowadays, notably in two areas: cryptography, with blockchains and potentially quantum computing, and generative AI, for how we process data.
Add to these the well-known but not fully addressed advances in embedded computing, the Internet of Things, medical devices and control systems, and we see that the foundations of healthcare computing are increasingly shaky.
Our hardware substrate (for example, embedded, cloud servers), core software components (for example, cryptography, integration of software-as-a-service through APIs), and data processing (for example, advanced analytics and AI) have transformed over the past five years.
And here is the kicker: The vast majority of the readers here are not in the hardware, software or security business. Those of us who are payers and providers rely on vendors to tidy up the underlying IT infrastructure so we can deploy and use technologies to meet our respective missions.
Yet it seems to me the scope of the change over the past five years has demonstrated that our ongoing program of outsourcing our technical brains to vendors has foundered, and the current cyber crisis is perhaps the first of several cracks.
While my argument that we on the end-user side should take more responsibility for our technology may seem anodyne, it raises all kinds of questions about what this would look like in practice.
We are unlikely to pull out Copilot and start building our own record systems or design our own chips. Yet can we better evaluate technologies and not just functionality? Conduct comprehensive and continuous testing? Monitor anomalies and fit for purpose?
In cybersecurity, we have no choice. In the medical device space, cyber leaders are working with vendors to develop Software Bills of Materials (SBOMs) to help enterprise end users evaluate and track underlying technologies. The obvious implication here is that cybersecurity teams of the kind I manage must be technically conversant, not just able to read a version number.
If we are, for example, evaluating a large language model, we need to understand enough about the underlying training data and model function to put together a testing program. These are all deep technical issues that require an educated and continuously learning IT workforce.
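For a rough illustration of what "reading an SBOM" can mean in practice, here is a minimal Rust sketch, assuming a CycloneDX-style JSON SBOM and the serde_json crate; the file name and field layout are illustrative, not a description of any Hopkins tooling.

```rust
// Minimal sketch: list components from a CycloneDX-style JSON SBOM.
// Assumes the serde_json crate (add `serde_json = "1"` to Cargo.toml)
// and an SBOM file exported by a vendor or build pipeline.
use std::{env, fs};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Path to the SBOM is passed on the command line, e.g. sbom.json.
    let path = env::args().nth(1).ok_or("usage: sbom-list <sbom.json>")?;
    let sbom: serde_json::Value = serde_json::from_str(&fs::read_to_string(path)?)?;

    // CycloneDX keeps third-party dependencies under the "components" array.
    if let Some(components) = sbom["components"].as_array() {
        for c in components {
            println!(
                "{} {}",
                c["name"].as_str().unwrap_or("unknown"),
                c["version"].as_str().unwrap_or("unversioned")
            );
        }
    }
    Ok(())
}
```

Even a toy like this makes the point: the team consuming the SBOM needs to know what the fields mean and what to do when a listed component shows up in a new advisory.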
The kinds of knowledge and skills we need going forward extend beyond cyber, but for now, I want to return this discussion to the current threat-driven crisis in cyber. Let me emphasize there is no way to predict which specific technology will fall prey to a zero-day.
Yet we can group specific applications into broader categories – such as networking, remote access, websites, databases and so on – and identify configurations and behaviors that each category may exhibit. It is common for larger organizations to use a number of web technologies – some Java, some .NET, some WordPress and so on.
Rather than threat model each separately, it may be a better use of our time to peer under the hood and identify testing and monitoring techniques that can be applied across the category and emphasize those. These common characteristics typically operate lower in the technology stack, at or near the "foundation." Our thinking is we may be able to anticipate zero-days by understanding "normal" configurations and behaviors of underlying technologies.
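As a hypothetical sketch of what category-wide monitoring could look like, the following Rust snippet checks every externally facing web front end against the same baseline expectations, whatever framework sits behind it. It assumes the reqwest crate with its blocking feature enabled; the endpoint names and baseline values are made up.

```rust
// Illustrative sketch only: poll a category of web endpoints and flag
// header drift against a known-good baseline. Assumes reqwest with the
// "blocking" feature; endpoints and expected values are placeholders.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    // One "category": all externally facing web front ends, regardless of
    // whether they run Java, .NET or WordPress behind the scenes.
    let endpoints = ["https://app1.example.org", "https://app2.example.org"];

    for url in endpoints {
        let resp = reqwest::blocking::get(url)?;
        let headers = resp.headers();

        // Category-wide expectations that sit below any one application:
        // the server banner we expect and a header that should always be set.
        let server = headers
            .get("server")
            .and_then(|v| v.to_str().ok())
            .unwrap_or("<none>");
        if server != "nginx" {
            println!("{url}: unexpected Server header: {server}");
        }
        if !headers.contains_key("strict-transport-security") {
            println!("{url}: missing Strict-Transport-Security");
        }
    }
    Ok(())
}
```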
As we focus our attention at a more general and lower level, we will learn new tricks and develop new practices. We are seeing versions of just such a transformation now with emerging cloud security tools that focus on underlying systems in Azure and Amazon Web Services rather than the application itself.
There has also been some success with low-level attention to embedded security, but I would argue we have not yet found the convergent sweet spot.
Q. What is open-source tooling, another interest of yours, and how does it relate to infrastructure?
A. I was working on a simple machine learning tool using a programming language called Rust. It was a fairly simple "hello world" first iteration, and when I watched it compile, I saw it import more than 150 libraries. All of those libraries were open source and hosted on GitHub.
If I had a problem with any of them, I could have gone to GitHub and read the code to figure out the issue. Indeed, reading the code of third-party libraries is a significant part of any developer's and security analyst's time. You would be hard pressed to find any complex application that does not have dozens, if not hundreds, of open-source dependencies – from Linux to Apache to Kubernetes.
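To illustrate, a sketch like the one below, assuming the ndarray crate purely for demonstration, declares a single dependency yet still pulls in a tree of transitive open-source libraries; running "cargo tree" in such a project prints the full graph.

```rust
// A tiny "hello world"-scale example in the spirit of the anecdote above:
// one declared dependency (the ndarray crate, assumed here for
// illustration) still pulls in a tree of transitive open-source libraries,
// all of which can be read on GitHub. `cargo tree` shows the full graph.
use ndarray::array;

fn main() {
    let a = array![1.0, 2.0, 3.0];
    let b = array![4.0, 5.0, 6.0];
    // A single dot product; the interesting part is what the build drags in.
    println!("dot = {}", a.dot(&b));
}
```

The point is not the arithmetic; it is that every one of those transitive libraries is code someone on the team should be able to open and read.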
Cloud infrastructures and tooling, for example, are much more reliant on open source than the prior generation of on-premises technologies. I would argue that without GitHub to hold and organize open-source code, there is no AWS or Docker or most of our current technology stack.
The implications of a technology universe steeped in open source are not well understood (even by me). The one thing we can say for certain is that the most commonly used libraries, such as gcc and OpenSSL, are disproportionately carrying the weight of the world's cybersecurity. We will continue to see attacks on Log4j, an open-source Java logging library, because the tool is embedded in so many application libraries and sub-libraries.
The tech giants have awakened to this and are actively supporting testing and maintenance for these libraries, which have become some of our most critical infrastructure.
Q. What do healthcare CIOs and your fellow CISOs need to know about open-source tooling as it relates to infrastructure challenges today?
A. It is not enough to understand technology at a high level and how it can be applied. We all need to recognize that part of our job is to understand how these technologies are built and how they interoperate.
Twenty-five years ago, you would not have considered hiring a network engineer who did not understand at some level how packets work.
Now I would say the same applies in the application space. It is critical that web servers, JSON, APIs and web requests, along with dozens of other core technologies, be well understood by nearly all of our technology staff and management.
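As a small, standard-library-only illustration of what a web request actually is under the hood, the Rust sketch below opens a TCP connection and sends a plain HTTP/1.1 request by hand; the host is just a placeholder, and real traffic would of course use TLS.

```rust
// A std-library-only sketch of what a web request actually is on the wire:
// a TCP connection and a few lines of text. Understanding this level is
// the application-space analogue of a network engineer understanding packets.
use std::io::{Read, Write};
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // Plain HTTP to a public host (no TLS, purely illustrative).
    let mut stream = TcpStream::connect("example.com:80")?;

    // An HTTP/1.1 request is just header lines separated by CRLF
    // followed by a blank line.
    stream.write_all(
        b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n",
    )?;

    let mut response = String::new();
    stream.read_to_string(&mut response)?;

    // The status line and response headers come back as text as well.
    for line in response.lines().take(10) {
        println!("{line}");
    }
    Ok(())
}
```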
Q. You talk about memory-safe languages – which even the White House is interested in. What are they and why are they safer?
A. One of my primary interests is Rust, which is well known for being a memory-safe systems language. Interestingly, most of the applications we use are already written in memory-safe languages, as nearly all garbage-collected languages are safe in that sense.
And that points to the problem with how many of us talk about "memory safety" in general. It typically means that a program or language is invulnerable to a set of well-known attacks, such as buffer overflows or use-after-free attacks.
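To make those two attack classes concrete, here is a small, hypothetical Rust illustration of what memory safety buys in practice; it is an editorial example, not drawn from the White House memorandum or from any Hopkins code.

```rust
// A small, runnable illustration of what "memory safe" means in practice.
fn main() {
    let buf = vec![1u8, 2, 3, 4];

    // Bounds are always checked: asking for byte 10 yields None rather than
    // silently reading past the allocation (the classic buffer over-read).
    match buf.get(10) {
        Some(b) => println!("byte: {b}"),
        None => println!("index 10 is out of bounds"),
    }

    // Use-after-free-style bugs are rejected before the program ever runs.
    // Uncommenting the two lines below fails to compile (error E0382,
    // "borrow of moved value"), because `buf` no longer owns the buffer.
    // let moved_elsewhere = buf;
    // println!("{}", buf[0]);
}
```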
In practice, though, memory is just one of the components to be safeguarded and, thus, "safe" languages are still vulnerable to all manner of more exotic attacks. The White House memorandum conveniently glossed over much of this complexity, and thus drew a predictable if tiresome negative response from many in the security community.
So rather than focusing on memory safety alone, we should focus on the open-source library problem we just discussed. Security flaws in commonly used libraries are depth charges that can detonate against all kinds of programs, other libraries or embedded technologies.
Those of us in the technology field should demand that these libraries are developed, tested and maintained in the most stringent manner possible. We should therefore want to use the most rigorous technologies and platforms available to ensure we have done all we can to harden our shared infrastructure.
Doing things the hard way, as I am suggesting, flies in the face of most application development, where functionality and velocity are considered the primary virtues. A finicky and difficult language like Rust is a relatively straightforward example of a preferred toolset for technologies in an increasingly hostile world.
Q. What can health IT and security leaders at provider organizations be doing today with memory safe languages?
A. It is possible that technically savvy healthcare organizations will roll out their own generative AI with some help from the vendor community. In such cases, I believe memory safety will be one of about a dozen primary technical security requirements involved in choosing a platform or model.
Other than that, I don't see IT organizations using systems languages much. We use Rust in Hopkins information security, more for its speed than its safety, to build our system monitoring and command-line tools. We also believe it is important that many of our adversarial tools be written to test memory issues at a fairly low level.
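For a flavor of what such command-line monitoring tools can look like, here is a deliberately tiny, Linux-only sketch using nothing but the standard library; the threshold is a made-up illustration, and this is not one of our production tools.

```rust
// Minimal, Linux-only sketch of a small monitoring CLI: count established
// TCP connections by parsing /proc/net/tcp with only the standard library.
use std::fs;

fn main() -> std::io::Result<()> {
    let table = fs::read_to_string("/proc/net/tcp")?;

    // Each data row lists the local address, remote address, then the
    // socket state; state "01" means ESTABLISHED.
    let established = table
        .lines()
        .skip(1) // header row
        .filter(|line| line.split_whitespace().nth(3) == Some("01"))
        .count();

    println!("established TCP connections: {established}");
    if established > 500 {
        println!("warning: connection count above expected baseline");
    }
    Ok(())
}
```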
More generally, memory safety represents one of a whole series of low-level technical considerations for evaluating and securing technology. Our attention to the ingredients of the stew is just as important as the stew itself.
Healthcare IT News is a HIMSS Media publication.