IEEE deep dive: What you really need to know about AI and cybersecurity

How is artificial intelligence being used to identify anomalies in sensitive healthcare data to safeguard patient information? What are best practices for enhancing cybersecurity through AI and ML? An expert answers those questions and others.
By Bill Siwicki

Rebecca Herold, IEEE member

Photo: Rebecca Herold

As the healthcare industry increasingly adopts AI, the landscape of cybersecurity threats is changing rapidly. While AI can enhance patient care and streamline operations, it also introduces new vulnerabilities that cybercriminals may exploit.

To help CISOs, CIOs and other healthcare security leaders get their arms around this, we spoke with IEEE Member Rebecca Herold, CEO of Privacy & Security Brainiacs SaaS Services and The Privacy Professor Consultancy. The IEEE is the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity.

We asked Herold to discuss the rising threat of AI-driven cyberattacks targeting hospitals and health systems, how AI is being used to identify anomalies in sensitive healthcare data to safeguard patient information, best practices for healthcare provider organizations to enhance cybersecurity through AI and machine learning, and her best piece of advice for security leaders about the intersection of AI and cybersecurity.

Q. Please describe the landscape of the rising threat of AI-driven cyberattacks targeting hospitals and health systems.

A. Cybercrooks love healthcare data, because they can use it to commit a much wider range of crimes than they can with only basic, more widely collected personal data. Cybercrooks can also sell healthcare data at a much higher price than other types of personal data. And now health data-loving cybercrooks have another type of tool they love almost as much as the data: artificial intelligence.

As generative AI-enabled capabilities become more widely adopted, healthcare leaders and cybersecurity and privacy pros need to understand how these capabilities can impact the security and integrity of their associated healthcare digital ecosystems. Business associates also need to stay on top of AI-driven threats and supporting tools, and to not use such tools with the covered entities' (CEs') data entrusted to them.

AI capabilities present opportunities for providing better healthcare: more ways to identify and then remove or otherwise eradicate cancer and other diseases, quicker diagnoses and prognoses, and many other potential benefits.

Such benefits depend upon the type of AI used and how accurate it is. But AI tools can also be used by those health data-loving cybercrooks to trick victims through new and more effective social engineering – phishing – tactics added to their landscape of attack tools.

AI tools can quite convincingly impersonate the images and voices of healthcare leaders, such as hospital CEOs and medical directors.

For example, AI could impersonate the hospital CEO in a phone call to the health information management department, directing staff to send all patient data to a specific address, website, fax number, etc., for a valid-sounding reason, such as a merger with another health system. This would result in a huge breach, damaging publicity and a multitude of legal violations, including violations of HIPAA and a wide variety of state health data laws.

AI tools can also be used to find many more types of digital vulnerabilities in health systems. Cybercrooks love finding the open digital windows and unlocked digital doors in organizations' networks, and with the tools available they can do this from the other side of the world.

AI has now made it much easier for crooks to find even more such vulnerabilities than ever before, and once found, the vulnerabilities can be easily exploited to load ransomware, steal patient health databases, inject malware into medical devices to cause dysfunction during surgeries, and more.

AI tools can also be used to alter patient health data in ways that could result in physical harm to the associated patients. Cybercrooks can distribute apps and websites that masquerade as valid healthcare software. When adopted by healthcare providers, those apps and websites could do significant harm to a wide range of patients by changing their documented vital signs, medical history, prescriptions and other information.

Q. How is AI being used to identify anomalies in sensitive healthcare data to safeguard patient information?

A. Over the past four years, AI tools have been increasingly used in many different ways within healthcare entities to strengthen the security around patient data. AI tools validated as being accurate are particularly effective when used to analyze complex patterns within huge patient datasets to detect anomalies that could signal potential threats. Here are three ways in which they are being used.

First, intrusion and data breach detection and prevention. AI tools are being used in intrusion detection systems (IDS), intrusion prevention systems (IPS), and PHI breach detection and prevention. Such tools recognize abnormal patterns in network traffic and data flows, in addition to identifying specific types of data within the network that could indicate an intrusion.

Such tools are demonstrating value in particular for real-time threat detection, imminent PHI breach actions, and zero-day threat detection.
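
To make this concrete, here is a minimal sketch of the kind of unsupervised anomaly detection such IDS/IPS tooling might layer over network-flow records, using scikit-learn's IsolationForest. The flow features, sample values and contamination setting are illustrative assumptions, not a description of any specific product Herold mentions.

```python
# Minimal sketch: unsupervised anomaly detection over network-flow records,
# the kind of analysis an AI-assisted IDS/IPS layer might perform.
# Feature names and flow values are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent, bytes_received, duration_sec, distinct_ports]
baseline_flows = np.array([
    [1_200, 3_400, 2.1, 1],
    [  950, 2_800, 1.7, 1],
    [1_500, 4_100, 2.9, 2],
    [1_100, 3_000, 2.0, 1],
])

# Fit on traffic assumed to be normal; contamination is a tunable guess.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline_flows)

# A flow exfiltrating a large volume of data across many ports.
suspect_flow = np.array([[250_000, 1_000, 45.0, 12]])
if model.predict(suspect_flow)[0] == -1:
    print("Anomalous flow flagged for analyst review")
```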

Second, data encryption and privacy. AI-driven encryption systems are in the early stages of use. Based on real-time risk assessment, such systems encrypt patient data when there is an indication that a network intruder may be targeting PHI.

The PHI is then encrypted so that even if the attacker accesses it, it will no longer provide any value to the attacker. AI is also being used to activate homomorphic encryption on health data to ensure that sensitive patient information will not be exposed during processing or analysis, since it eliminates the need to decrypt the data before processing.
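
As an illustration of the first idea – encryption triggered by a real-time risk signal – here is a minimal sketch using the Python `cryptography` package's Fernet recipe. The risk-score function, threshold and record format are hypothetical placeholders, not a real product's API.

```python
# Minimal sketch: encrypt a PHI record when a real-time risk score crosses
# a threshold. The risk score and threshold are invented placeholders.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, managed by a KMS/HSM
fernet = Fernet(key)

def current_risk_score() -> float:
    """Placeholder for a real-time risk assessment feed."""
    return 0.87  # e.g., an intruder appears to be targeting PHI

RISK_THRESHOLD = 0.75
phi_record = b'{"patient_id": "12345", "diagnosis": "..."}'

if current_risk_score() >= RISK_THRESHOLD:
    phi_record = fernet.encrypt(phi_record)
    print("PHI encrypted; plaintext holds no value for an intruder")
```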

And third, anomaly detection in data access patterns. AI is being used to monitor and analyze the types of access, and access patterns, in patient health databases, and to flag unusual activities. This is very useful for user behavior analytics – determining whether appropriate access has or has not occurred – and for supporting breach investigation work. It can also help to prevent unauthorized PHI access, account hijacking and other activities.
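
A minimal sketch of this kind of user behavior analytics follows: it flags a user's daily PHI-access count against that user's own baseline with a simple three-sigma rule. The log values and cutoff are illustrative assumptions; production UBA tools use far richer features.

```python
# Minimal sketch: flag unusual PHI-access behavior against a per-user
# baseline. The access counts and 3-sigma rule are illustrative choices.
from statistics import mean, stdev

# Hypothetical daily record-access counts for one clinician.
baseline_daily_accesses = [22, 18, 25, 20, 23, 19, 21]

mu = mean(baseline_daily_accesses)
sigma = stdev(baseline_daily_accesses)

def is_anomalous(todays_accesses: int, z_cutoff: float = 3.0) -> bool:
    """Flag when today's access count is far outside the user's norm."""
    return abs(todays_accesses - mu) > z_cutoff * sigma

print(is_anomalous(240))  # True: a bulk-access spike worth investigating
```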

There are many other ways in which AI tools are being used. At a high level, these include (a simple risk-scoring sketch follows this list):

  • Cybersecurity risk scoring
  • Automating audits and compliance reviews
  • Detecting fraud
  • Vulnerability and threat identification revealed through behavioral biometrics
  • Natural language processing for patient data monitoring
  • Cybersecurity predictive analytics
  • Patient data identification and data directory updates
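
As a toy illustration of the first item, cybersecurity risk scoring, here is a minimal sketch that combines weighted threat signals into a single score. The signals, weights and escalation threshold are invented for demonstration, not drawn from any tool Herold references.

```python
# Minimal sketch of cybersecurity risk scoring: combine weighted,
# normalized signals into a single 0-100 risk score.
# Signals, weights and the threshold are illustrative assumptions.
signals = {
    "unpatched_critical_cves": 0.9,  # normalized 0-1 severity signals
    "anomalous_phi_access":    0.4,
    "failed_login_spike":      0.2,
}
weights = {
    "unpatched_critical_cves": 50,
    "anomalous_phi_access":    30,
    "failed_login_spike":      20,
}

risk_score = sum(signals[name] * weights[name] for name in signals)
print(f"Risk score: {risk_score:.0f}/100")  # 61/100
if risk_score >= 70:
    print("Escalate to the security operations team")
```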

Q. What are some best practices for healthcare provider organizations to enhance cybersecurity through AI and machine learning?

A. While proprietary large language models and other types of AI tools bring great promise and benefits, they also bring many security and privacy risks within every type of healthcare provider digital ecosystem. Just a few of these high-level risks, in addition to those I described earlier, include:

  • Exposing protected health information
  • Leaking intellectual property information
  • Compromising cybersecurity resulting from leaked IT specifications, administrative settings, etc.
  • Creating additional attack vectors for hackers to exploit to enter the healthcare organization's digital ecosystem
  • Potentially leaking system parameters, access points, etc.
  • Subsequently experiencing commercial losses if LLMs reveal proprietary information such as unreleased products and treatments, new software updates, stock and inventory levels, and pricing plans
  • Violating security and privacy legal requirements

The high-level plan for all healthcare providers to follow to support cybersecurity and privacy when using AI tools includes:

  • Assign responsibility for AI use policies to a person, team or department. Such responsibilities should include input, if not a leadership role, from cybersecurity, privacy and IT managers with deep knowledge of AI, as well as of the organization's business ecosystem.
  • Have executive management announce this responsibility and stress that any use of AI must be in accordance with the AI policies this team will create and approve. Then, the executives should provide strong, visible support for the AI management team so all employees know this is an important issue.
  • Create AI use, security and privacy policies and procedures. These should include procedures for security incidents and privacy breaches involving the CE's organization and its PHI.
  • Provide training for the AI policies and procedures and provide ongoing awareness messages/activities to all workers who will be using AI.
  • Perform regular (at least annual) AI security and privacy risk assessments and ongoing risk management.

  • Document and know all the contracted outsourced/third parties with whom any type of access to the healthcare provider's digital ecosystem is established. This includes all business associates, in addition to any other type of contracted entity.
  • Identify and maintain an inventory of those who are using AI, and ensure they know, understand and follow the AI policies the organization has implemented.

Q. What is the best piece of advice you can offer a CISO, CIO or other security leader about the intersection of AI and cybersecurity?

A. Ultimately, every healthcare organization must establish rules and policies for the use of AI within their organization that cover both the risks and the benefits. Security leaders play a pivotal role in ensuring such rules are created and implemented.

Ideally, there will be one set of policies governing AI within the organization, and it should point to the specific related cybersecurity and privacy policies where applicable. Additional AI-specific policies and procedures will also be necessary, such as those governing the use of PHI for AI-training activities.

Security leaders need to keep in mind, when crafting such policies and making associated recommendations, that AI brings benefits but also inherently brings risks.

With this in mind, here are some considerations for creating AI security and privacy policies and supporting procedures that will help ensure the issues created by the intersection of AI and cybersecurity are appropriately addressed:

  • Use AI tools for beneficial purposes, but first test and ensure they are actually working as the vendor and manufacturer describe, and that the results are accurate. These would be tools such as AI for threat detection and response, breach detection and response, anomaly detection, and automated incident and breach responses, just to name a few.
  • Understand and consider all the likely AI-specific threats within your digital ecosystem.
  • Monitor, on an ongoing basis, the AI tools used by your business associates and other third parties that have access to your healthcare organization's data and/or systems. Discuss concerns with them and respond appropriately, requiring changes to protect your organization's networks, applications and data.
  • Integrate AI controls into your overall security strategy.
  • Stay aware of AI-related incidents, news and other issues that could impact your organization.
  • Comply with current and new legal requirements. This includes HIPAA, but also all other laws applicable to your organization based upon where you are located. Many bills governing a wide range of AI issues have been filed in Congress, as well as in most state legislatures, over the past few years. It is likely that some or many of them will eventually be signed into law.

A final warning: Always test any AI tool that claims to provide benefits (one possible test harness is sketched after this list) to ensure that it:

  • Provides accurate results
  • Will not negatively impact the performance of your network
  • Does not put PHI at risk by exposing or inappropriately sharing PHI
  • Does not violate your organization's legal requirements for patient data
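
For illustration, here is a minimal sketch of such a pre-deployment check, assuming a hypothetical `tool.detect()` interface: it measures accuracy against a labeled test set and scans the tool's output for PHI-like patterns. The interface, the SSN-style regex and the accuracy bar are all assumptions for demonstration, not a standard vendor API.

```python
# Minimal sketch of a pre-deployment check for a vendor AI tool: verify
# accuracy on labeled data and scan outputs for leaked PHI-like strings.
# `tool.detect()`, the test set and the 0.95 bar are hypothetical.
import re

def evaluate_tool(tool, labeled_events, accuracy_bar=0.95):
    correct = 0
    for event, expected_label in labeled_events:
        result = tool.detect(event)  # hypothetical vendor interface
        if result.label == expected_label:
            correct += 1
        # Crude PHI-leak scan: SSN-like patterns in the tool's output.
        if re.search(r"\b\d{3}-\d{2}-\d{4}\b", str(result)):
            raise RuntimeError("Tool output exposed PHI-like data")
    accuracy = correct / len(labeled_events)
    return accuracy >= accuracy_bar, accuracy
```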

Follow Bill's HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.
