What to know before buying AI-based cybersecurity tools
Some artificial intelligence and machine learning proponents present the technologies as if they were manna from heaven, tools capable of replacing humans. And it’s not unusual for the mere mention of the term “artificial intelligence” to evoke images of futuristic machines that can think for themselves.
The truth is simpler than that. Artificial intelligence and machine learning are tools healthcare executives, technical staff and clinicians can use to enhance operations and improve healthcare.
Artificial intelligence is when computers replicate something that humans do – real AI is when the results are as good as or better than the best human results, said Dustin Rigg Hillard, vice president of engineering at Versive, which uses machine learning and artificial intelligence to hunt cyber-adversaries and insider threats.
“We can see the progress from Deep Blue beating Kasparov in 1997 to Watson beating Ken Jennings in 2011 to AlphaGo beating Ke Jie in 2017,” Hillard said. “A challenge for the field today is how to replicate the expertise and skills of humans across many more tasks, and provide that capability at a reasonable cost to businesses.”
Many healthcare cybersecurity executives struggle to fully staff teams with the expertise and skills necessary to protect their organization’s data and patients. AI can help mitigate these risks by automating some of the tasks and expertise required, though assessing the promises of technology and vendors can be difficult and time-consuming.
“Proving the value requires comparing the results of a tool to existing tools or team members, while keeping in mind that evolutionary steps can be valuable while in search of a revolutionary system,” Hillard said. “That can mean testing the capability of malware or intrusion detection to identify new threats, or judging if AI is able to replicate, or accelerate, the capabilities of a hunt team.”
Machine learning is a subset of AI that recognizes patterns in data and predicts outcomes based on past experience. Most AI systems incorporate machine learning to help generate results that replicate human ones.
“One example is that machine learning can be used to predict typical network behaviors based on historical network logs, and these predictions can be used to identify anomalous activity in a network,” Hillard said. “Connecting these anomalous events together across data sources and time into a full threat case begins to automate and replicate the work of expert hunt teams to help surface adversary campaigns.”
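Hillard did not point to a specific algorithm, but the workflow he describes (learn what normal network behavior looks like from historical logs, then surface deviations) can be illustrated with an off-the-shelf anomaly detector. The following is a minimal, hypothetical Python sketch; the feature names and the synthetic data are assumptions made for illustration, not a description of Versive’s product.

```python
# Minimal sketch: flag anomalous network activity with an off-the-shelf detector.
# Assumes historical logs have already been reduced to numeric per-host features
# (bytes_out, session_duration, distinct_ports) -- hypothetical feature names.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "historical" baseline: typical traffic for 1,000 host-hours.
baseline = np.column_stack([
    rng.normal(5_000, 1_000, 1_000),   # bytes_out
    rng.normal(60, 15, 1_000),         # session_duration (seconds)
    rng.poisson(3, 1_000),             # distinct destination ports
])

# Learn what "normal" looks like from past data.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New observations: one ordinary host-hour and one exfiltration-like spike.
new_events = np.array([
    [5_200, 55, 2],        # resembles the baseline
    [900_000, 600, 45],    # large transfer, long session, many ports
])

# predict() returns 1 for inliers and -1 for anomalies worth investigating.
for event, label in zip(new_events, model.predict(new_events)):
    status = "anomalous -- escalate to hunt team" if label == -1 else "normal"
    print(event, status)
```

In practice, individual flags like these would then be correlated across data sources and over time into a single threat case, the hunt-team work Hillard describes automating.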
True machine learning and artificial intelligence are two pieces of a CISO’s arsenal that have become imperative, said Anahi Santiago, chief information security officer at Christiana Care Health System in Delaware.
“Machine learning and artificial intelligence utilize the behaviors of end users and information systems to learn what is normal activity,” Santiago said. “Artificial intelligence can then be used to take action, without human intervention, when activity deviates from the norm.”
For example, a user accesses electronic health records 40 times a day to care for patients. Machine learning learns that this is the norm. When the system detects the user accessing hundreds of records within a short period of time, it can apply artificial intelligence to block the access and send an alert to the information security team.
“That is a very basic example,” Santiago explained. “The technology currently available has evolved over the years and has become a relatively effective layer in the protection from cyber-threats. I use the term ‘relatively’ because the threat landscape and the sophistication of the techniques used by threat actors continue to evolve. Artificial intelligence systems are not a panacea.”
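Stripped to its essentials, the scenario Santiago describes is a baseline-and-threshold check: learn a user’s normal access rate from historical logs, then block and alert when current activity deviates sharply from it. The snippet below is a hypothetical sketch of that logic; the numbers, threshold and response are invented for illustration and do not describe any particular product.

```python
# Hypothetical sketch of the EHR-access example: a user who normally opens
# about 40 records a day suddenly opens hundreds within a short window.
from statistics import mean, stdev

def is_anomalous(historical_daily_counts, recent_count, sigma=4.0):
    """Flag activity far above a user's learned baseline (illustrative logic)."""
    baseline = mean(historical_daily_counts)
    spread = stdev(historical_daily_counts) or 1.0  # avoid a zero threshold
    return recent_count > baseline + sigma * spread

# Learned norm: roughly 40 EHR record accesses per day.
history = [38, 42, 40, 37, 44, 41, 39, 43]

# Hundreds of records pulled within a short window triggers a response.
if is_anomalous(history, recent_count=300):
    print("Block the session and alert the information security team")
```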
But anti-virus, patch management and other point-in-time solutions cannot keep pace with the threat landscape, so tools that employ machine learning and artificial intelligence are necessary to help prevent and protect against unknown threats, she added.
As for cutting through the hype, these tools have matured, and there are ways to conduct due diligence to separate marketing claims from real capabilities, she said.
“Two years ago, I was hesitant to buy in as there were a lot of entrants into this space and the technology, in my opinion, was not tested,” she said. “I no longer have that perspective and feel that there is enough information and enough happy customers to be able to make informed decisions about which tools are right for our environment.”
Healthcare organizations today need to set the stage for bringing in machine learning and artificial intelligence tools tomorrow. There are many actions they can take for a smooth transition to these potentially highly effective tools.
“A critical resource for any effective machine learning or AI solution is sufficient instrumentation and retention of historical data,” Hillard said. “AI is only as powerful as the input data and signals provided, so organizations can prepare by capturing their existing log data and consolidating in a common data repository that enables access to a broad set of tools and analytics.”
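What “consolidating in a common data repository” looks like will vary by organization, but a common first step is normalizing each log source into a shared schema before it lands in the repository. The fragment below is a hypothetical sketch of that idea, with invented field names and formats; a real deployment would feed a SIEM or data lake rather than a local file.

```python
# Hypothetical sketch: normalize two different log sources into one shared
# schema so a broad set of analytics and ML tools can query them uniformly.
import json

def normalize_firewall(line):
    # Assumed firewall format: "2017-09-01T12:00:00Z,10.0.0.5,8.8.8.8,443,ALLOW"
    ts, src, dst, port, action = line.strip().split(",")
    return {"timestamp": ts, "source": "firewall", "src_ip": src,
            "dst_ip": dst, "dst_port": int(port), "action": action}

def normalize_ehr_audit(record):
    # Assumed EHR audit event, already parsed from JSON.
    return {"timestamp": record["accessTime"], "source": "ehr_audit",
            "user": record["userId"], "patient_id": record["patientId"],
            "action": record["event"]}

events = [
    normalize_firewall("2017-09-01T12:00:00Z,10.0.0.5,8.8.8.8,443,ALLOW"),
    normalize_ehr_audit({"accessTime": "2017-09-01T12:01:00Z",
                         "userId": "jdoe", "patientId": "P12345",
                         "event": "CHART_VIEW"}),
]

# In production the sink would be a shared repository; JSON lines stand in here.
with open("consolidated_logs.jsonl", "a") as sink:
    for event in events:
        sink.write(json.dumps(event) + "\n")
```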
It also is important to gain complete visibility by instrumenting all aspects of the network, from the perimeter to endpoints to internal network communication to application and server logs, he added.
“This broad visibility then prevents sophisticated adversaries from finding deep, dark corners of the network to hide in,” he explained. “Healthcare executives should be aware that the significant volume of personally identifiable information that their networks hold makes them incredibly attractive targets for adversaries, both criminal and state-sponsored.”
This prominent place on the hierarchy of targets reinforces the need to create the foundation for AI and machine learning now to ensure that as these solutions are identified, they can be successfully implemented, he said.
The most important foundational step that cybersecurity teams must take before entering into the artificial intelligence space is to fully understand the business, Santiago said.
“Too often we implement security without fully understanding the impact on the business,” she said. “In order to gain credibility and to become a true partner with the business, infosec teams need to be intimate with how the business functions. That includes understanding clinical workflows, knowing the dynamics of how a healthcare organization moves and aligning security controls with the needs of our end users, not in spite of them.”
If infosec teams can achieve that kind of synergy, she added, they will be well positioned to be successful in implementing machine learning and artificial intelligence.
“Frankly,” she concluded, “they will be extremely successful in implementing a risk management framework that can have optimal outcomes for both security and the business.”
Santiago will be speaking at the upcoming HIMSS and Healthcare IT News Healthcare Security Forum in the sessions “CISOs and CIOs: Stronger Together than Apart” and “The State of Healthcare Cybersecurity 2017 and Beyond.”
Twitter: @SiwickiHealthIT
Email the writer: bill.siwicki@himssmedia.com