How AI is transforming cybersecurity, on defense and offense

Understanding the threat environment, knowing where controls are deployed and other core competencies become that much more critical when artificial intelligence is involved, says the CISO of Mass General Brigham.
By Bill Siwicki

David Heaney, chief information security officer at Mass General Brigham

Photo: David Heaney

Cybersecurity is complex enough without adding artificial intelligence technology into the mix. But chief information officers, chief information security officers and other IT and security leaders have no choice but to deal with AI as it explodes across the healthcare landscape.

But how?

In the tenth and final installment of this series featuring the AI perspectives of some of health IT's leading innovators, David Heaney, chief information security officer at Mass General Brigham, discusses where he finds himself today: at the intersection of cybersecurity and AI.

Heaney says there are three fundamental questions when it comes to them both:

  • How do we secure our use of AI technologies?

  • How do we use AI to better secure our organization?

  • How do we defend against AI-driven attacks, and how will these change in the future?

He offers answers to those questions – and many others – in this two-part interview, today and tomorrow.

Q. CISOs and CIOs in healthcare obviously must secure everything at a hospital or health system. What differences does artificial intelligence bring when it comes to security?

A. I want to start by taking a moment to share a general perspective that I'm sure you'll hear me repeat throughout our conversation. And that's that I believe these AI technologies have truly incredible potential to change how we care for our patients, how we teach the next generation of caregivers and how we foster new discoveries.

I also believe that on the cybersecurity side, these same technologies really emphasize the importance of getting the basics right. There will certainly be exceptions to this, but I'd argue there's a bigger risk of an attacker using AI to process some huge volume of data, find a corner of our environment we didn't know about and use that to perpetrate their attack than there is of some movie-style, novel AI attack being used to breach an existing set of controls.

What that means to me is that understanding the environment and knowing where your controls are deployed and really being great at the basics just becomes that much more critical.

The other component that goes hand in hand with what I just said, though, is that these technologies will never be as immature as they are right now. So hopefully, folks don't take what I just said to mean that I think these are somehow bad today.

It's that they're only going to improve and are going to do so in ways we can't foresee yet. So, building on all of that, to me, there are really three questions that I consider when it comes to AI and cybersecurity.

The first one is, How do we secure our use of AI technologies? Second, How do we use AI to better secure our organization? And third, How do we defend against AI-driven attacks and how are those going to change in the future?

On the first question, the secure use of AI is one of several components of how we at MGB look at responsible use of AI overall. And it's absolutely critical that organizations have a broad AI governance process in place to cover this.

Responsible use also includes things like privacy, fairness, transparency and a number of other areas. And I'd be remiss if I didn't call out that security really is only one part of that overall responsible use of AI.

Going back to my earlier comment, though, the mechanism for securing all of these technologies starts with things we've been doing for years: the tried-and-true security controls we've already been using to secure our existing tech footprint.

So then, on the topic of using AI to secure the organization, I'd propose two potentially contradictory points. The first is that most, if not all, organizations today have been using AI to secure themselves for years, by leveraging vendors that have built these technologies into their product suites.

But the potential contradiction is that we're also in the early days of being able to use it much more broadly. The reality of these AI security workers, or AI agents, is that today they can provide interesting ways to augment the great work our security teams do, including automating basic tasks. But that functionality is still pretty limited.

Then, on my third question: When I think of AI-driven attacks, I think a lot more about how AI is going to democratize existing attack techniques, so that less skilled attackers are able to use advanced techniques. I also think about how attackers are going to be able to process more data.

They're going to have much greater capability around the analysis that needs to be done before an attack even begins. Those things combined reiterate the importance of consistent control deployment across your organization.

Q. What kinds of AI are cyberattackers using today on the offense? And how are these attacks different from non-AI-based attacks?

A. It's a little cloudy out there. This ties into the perspective I mentioned earlier. But again, my opinion here might be a little unpopular, and it's that, at least at this point, AI-driven attacks are not terribly different from what we've seen already. There are just more of them. They scale up in a way that makes it more difficult to defend everywhere.

And again, my standard caveat that this is as immature as the tech is ever going to be certainly applies here. But really, if you look at any of the standard cybersecurity frameworks out there, they all mostly say the same things, right? And that's why they cross-reference each other so easily.

These frameworks haven't had many drastic changes over the years. So, when I think about how we defend against AI-generated attacks or AI-driven attacks or AI-related attacks, our security teams across the industry generally know what we need to do. The challenge is always in doing it.

What this democratization of attacks and these scale challenges are going to drive is a problem that's that much worse, because the completeness of your defenses needs to be at a different level than it potentially was before. No stone is going to go unturned.

So, when I talk about some of these basics: HHS released a set of cybersecurity performance goals earlier this year. They say things like use multifactor authentication, perform basic cybersecurity training and mitigate known internet-facing vulnerabilities that could be leveraged to get into your environment.

Nobody's going to argue with those ideas. They're tried and true, and they're just as relevant fighting AI attacks as they were a couple of years ago. But as we look forward, there are certainly some changes coming, and I believe the biggest ones are going to be around social engineering.

This is going to be a much greater problem than it is today, even though it's already a problem today. Just as an example, we've all had people in our organizations receive text messages or emails claiming to be from the CEO, saying anything from, Can you reroute this payment to me? to, I'm at a conference and I need $1,000 worth of gift cards.

These are pretty standard attack techniques, and most organizations today can spot them pretty easily. But what happens when the phone call that comes into the service desk uses your CEO's voice? Or, even worse, when it's a video call, a FaceTime or a Zoom, that's actually using her face?

The question becomes, Are our service desks and support teams prepared for, for example, password reset requests that use these technologies to deepfake what the caller looks like? And I can give you a very specific example of the democratization in this area.

So, the other day, my son took a picture of me sitting here in my home office that's going to accompany this interview. And today, as we're having this conversation, he is two rooms over in my house, my 14-year-old son, who's home on summer vacation. He downloaded software from GitHub that allows him to make a deepfake video of me using the photo he took the other day.

All that takes is some free software, a picture, a webcam and a bored kid on summer vacation. Maybe a bored kid on summer vacation who has a gaming computer with the power to do it. But you get the idea. It doesn't take much today.

And then to take that forward, it's really going to be critical for us to fight these sorts of attacks together. And we need to leverage things like the Health ISAC [Health Information Sharing and Analysis Center] to share information about these threats and attacks in real time, because really, as they mature, I don't think there's one silver bullet solution for any of this.

But each organization out there is going to have to find the combination of training, testing and technical controls that's right for it.

Q. IT leaders at hospitals and health systems are already working with vendors to gain the benefits of AI for clinical, financial and operational efficiencies. How can AI be better leveraged within a provider's security team?

A. Leveraging this from your vendor is a great callout, and it's something we do quite a bit at MGB. And again, I'd argue the cybersecurity industry broadly has been doing this for many years, and it's just getting to be that much more mature and that much more capable.

It's really an area where individual organizations, whether they're large-scale providers like Mass General Brigham or the smaller organizations across the country, just can't keep up with the R&D investments, the scale and the other benefits our technology vendors enjoy.

So, it makes it really critical to partner very closely with them, both to understand their offerings and to influence their roadmaps. At some level, and I'm sure there are others who would argue with this, any technology logic, down to a simple if/then decision, is some type of AI.

But as we've seen, there have been recent, robust improvements in true machine learning as well as in generative AI that have either created new options or greatly enhanced what's already in place.

And I put this into a few different categories. Virtually all of the detective controls we deploy have been using AI for years. That could be your endpoint protection tools, your centralized monitoring systems, your identity risk capabilities or scores of other technologies. Those are already AI-enabled, and they perform wonderfully.

And what that AI capability does is it allows the tools to process huge amounts of data effectively. It allows them to automate key tasks, to prioritize alerts, and much more. But the key there is really understanding where the artificial part of the intelligence ends and where the human part of the intelligence needs to begin.

As an example, the human deploying that endpoint protection tool loaded up with AI capabilities needs to understand all of the assets in the environment to make sure the tool is deployed everywhere. That's a human thing: We need to get that control deployed correctly.
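
To make that concrete, here's a minimal sketch of the kind of coverage check Heaney is describing, assuming hypothetical CSV exports from an asset inventory and from an endpoint protection console. The file names and column headers are illustrative, not any particular vendor's format.

    import csv

    def load_hostnames(path, column):
        """Read one column of a CSV export into a normalized set of hostnames."""
        with open(path, newline="") as f:
            return {row[column].strip().lower()
                    for row in csv.DictReader(f)
                    if row.get(column, "").strip()}

    # Hypothetical exports: a CMDB asset inventory and the EDR vendor's deployment report.
    inventory = load_hostnames("asset_inventory.csv", "hostname")
    protected = load_hostnames("edr_deployments.csv", "hostname")

    # The gap is where the human part of the intelligence has work to do:
    # assets with no endpoint protection agent, however smart that agent's AI is.
    unprotected = sorted(inventory - protected)
    unknown = sorted(protected - inventory)  # agents on hosts the inventory doesn't know about

    print(f"{len(unprotected)} inventoried assets lack endpoint protection:")
    for host in unprotected:
        print(f"  {host}")
    print(f"{len(unknown)} protected hosts are missing from the inventory.")

However AI-enabled the agent itself is, this reconciliation is the human-owned step: The check is only as good as the completeness of the inventory feeding it.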

And beyond that, it's really critical, again, to partner with those vendors so you understand, at least at a high level, what the AI is doing, and which behaviors are AI-driven and may not be as explainable as those that are rules-driven.

That way, when you're putting these technologies into your environment, you understand where, frankly, they may be introducing AI risk in order to offset cybersecurity risk. I also think generative AI can provide a great boost to productivity across all components of a digital team.

And these are going to be generative AI tools that come from vendors and third parties. But for example, in the cybersecurity space, you can have your risk analyst use it to summarize a lengthy compliance report they're trying to review or even create initial drafts of their own documentation.

Analysts have been using automation and AI for years to enrich their own data or take automated actions in some situations. Teams can generate tabletop exercise scenarios out of thin air that they can then use to test their capabilities. They get assistance with coding. They have, in theory at least, more efficient and hopefully shorter meetings, and so much more.

In some of these areas, they're not always great, but again, they're only going to get better. I'm super excited about where the future is going to take us there. And then maybe one other use of some of these generative AIs that, again, are all going to come from vendors and third parties, could be, for example, to help prepare for an interview with an industry-leading organization like HIMSS.

I'm certainly not saying I would do that, but I have used various generative tools to create content for presentations. And I've actually found them to be really helpful for what I'll call translating: taking technical concepts I need to communicate to some of our executive leadership, who are smart and engaged and understand the business of healthcare, but who don't come from a technical background.

It's great to be able to prompt it and say, Hey, take this language I'm going to give you and make it understandable to an audience like that. So, it really has broad applicability across all levels of the security team.
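
As a concrete illustration of that translator pattern, here's a minimal sketch using the OpenAI Python client. The choice of library, the model name and the prompt wording are all illustrative assumptions, not a description of the tools MGB actually uses.

    from openai import OpenAI  # assumes the official openai Python package, v1+

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    technical_text = (
        "Our EDR telemetry showed lateral movement via pass-the-hash "
        "from an unmanaged endpoint outside the privileged-access boundary."
    )

    # The "translator" pattern: same facts, executive register.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": (
                "Rewrite the user's text for smart, engaged executives who do not "
                "come from a technical background. Keep the facts; drop the jargon."
            )},
            {"role": "user", "content": technical_text},
        ],
    )

    print(response.choices[0].message.content)

The design point is the system prompt: The facts stay fixed while the register changes, which is exactly the translation Heaney describes.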

Click here to watch a video of this interview that contains bonus insights not found in this story.

To read PART TWO of this interview, click here.

Editor's Note: This is the tenth and final in a series of features on top voices in health IT discussing the use of artificial intelligence. Read the other installments:

Follow Bill's HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.

The HIMSS Healthcare Cybersecurity Forum is scheduled to take place October 31-November 1 in Washington, D.C. Learn more and register.
