Do new technologies take ethics out of healthcare?

By Staff Writer

The healthcare industry has, for many years, been highly dependent on technology. But with the introduction of new technologies such as machine learning and AI into healthcare, questions of ethics have come to the fore.

According to a Commonwealth Department of Health and Aged Care report, ethics is necessary where: 

• healthcare services are research oriented, in that clinicians conduct or are directly involved in research using data;

• healthcare services that have a wide spread of disciplines facilitate the development of cross-discipline linkages and collaborations;

• high-technology research and development activities are located close to major healthcare facilities, such as major teaching hospitals;

• networks exist or have been encouraged to develop between healthcare services, research centres and industry partners;

• opportunities for financial rewards are available for innovators in the public sector; and

• venture capital and intellectual property services are readily available, and tax and other public expenditures encourage innovation.

As such, even though these technologies bring huge potential and opportunities, they still need to be closely monitored.

The University of New South Wales Research Ethics and Compliance Support Director Dr Ted Rohr told HITNA that ethical issues arise when healthcare providers access data from medical records for research, for example.

“Ethics is all about deciding whether the use of technology is appropriate and is used for public good. For example, AI has its positives, but it can be misused. So, having an ethical framework allows the proper use of medical databases for research and experiments with patients using devices,” he said.

“Technology doesn’t take the ethics out of healthcare; it brings more ethics into healthcare. It enables devices to be presented safely to patients and prevents data from apps being misused, for example.”

According to Rohr, a key part of new-age ethics is privacy legislation at both the Federal and State Government levels.

“We need national guidelines and a code of conduct for medical bodies to create their own codes. They’re very important in establishing a culture where, whenever technology is used, the user understands the need to safeguard patients first.”

But with healthcare being an industry that requires a human touch, Rohr said it is unlikely that emotional decision making will fall into the hands of AI or robots.

“There was a recent study that matched where humans see themselves against AI. The prediction was that jobs for humans are to oversee decisions made by these smart systems, because the ethical judgements made by humans are still necessary,” he said.

RMIT University Associate Professor Adrian Dyer said “huge risks” around new technologies like AI and machine learning will exist if a code of ethics is not factored in during their development.

“It comes down to how the technology is implemented. Protocols need to be set up when developing such technology, but checks and balances need to come from people,” he said.

“There are guidelines available, but as soon as a system is collecting, managing or manipulating personal information, there are national and state privacy laws that need to be obeyed. But exactly how that’s managed needs to be looked into.”

THE NEED FOR A DIGITAL CODE OF ETHICS

In the move towards this future, the Royal Australian and New Zealand College of Radiologists (RANZCR) has released a draft report for the ethical use of AI and machine learning in medicine.

This draft report, created by its AI Working Group, outlines eight ethical principles to guide the development of professional and practice standards with regards to AI and machine learning.

These eight principles consist of considerations around:

• safety;

• avoidance of bias;

• transparency and explainability;

• privacy and protection of data;

• decision making on diagnosis and treatment;

• liability for decisions made;

• application of human values; and

• governance of machine learning and AI.

RANZCR President Dr Lance Lawler said these principles aim to ensure the protection of patient data, balanced with the application of humanitarian values.

“New technologies such as AI are having a huge impact on healthcare, with enormous implications for both health professionals and patients. They have the ability to help doctors work in a more time-efficient and effective manner and – ultimately – provide even greater treatment for patients,” he said.

Lawler said these guiding principles are necessary as the way radiology adapts to AI has a flow-on effect for patients and other healthcare professionals.

“The agreed principles will, when established, complement existing medical ethical frameworks, but will also provide doctors and healthcare organisations with guidelines regarding the research and deployment of machine learning systems and AI tools in medicine,” he said.

"There is a lot of hype and misinformation around AI; it is important to look beyond that and concentrate on… how we can best use it for the maximum benefit of patients.”

Professor Liz Kenny, a Radiation Oncologist at Royal Brisbane and Women’s Hospital, Metro North Hospital and Health Service, said the ethical principles outlined around AI algorithms and machine learning will help guide industry decision making.

“We’ve never been at greater risk of having machine learning and AI algorithms take us down the wrong path if we don’t get the ethics of them right. Today, because software such as AI and machine learning is not considered a medical device, there are no regulations or ethical requirements around it,” she said.

“So, we’ve got an opportunity to do this right – to do this solidly, thinking it through from patient safety, to the security of the teams caring for them, to the overall healthcare system, etc.

“But today, we’ve got no handle on it and if we continue this way, we’re going to lose trust and the technological potential in front of us.”

ETHICS AND STAKEHOLDER INVOLVEMENT

And for a standardised set of regulations around these technologies, governance and stakeholder involvement are necessary, according to Dyer.

“There are many stakeholders involved in new technologies. Before providing recommendations, researchers need to think about input from a variety of stakeholders. This shifts the burden of ethics across a number of stakeholders,” he said.

“For example, shareholders in a healthcare company may be interested in maximising profit. Their perception of ethics may be more relaxed, as they’re intending to use that data to make a profit. Someone interested in privacy legislation may be interested in protecting patient information.

“So, there’s the general public – knowing what they expect, the wants of the healthcare industry and the way our governments manage that when determining the correct balance of ethics.”

The University of Auckland Health Systems Lecturer Dr Monique Jonas previously said ethics plays an important role in emerging health technologies as it is “crucial in determining which health technologies should be funded, for which patients, upon what terms”.

“The reason is that at least some emerging technologies promise to improve life and to extend life in a way that existing technologies are unable to do. So they carry potential benefits for patients,” she said.

“Decisions about health technologies are inevitably ethical – they involve people’s interests, societal values and distributive justice. The acute importance of the end-point – the decision about whether to fund a given technology – means that the whole lead-in process that informs that end-point must be defensible in ethical terms. But it is not always clear what decisions are most defensible in ethical terms.”

Kenny said governments need to address the ethical concerns arising from technologies like machine learning and AI algorithms by learning from the efforts of other countries.

“There’s currently no principles around the use of AI algorithms and machine learning in Australia. If this flows in a completely unmanaged environment, it will be detrimental. The opportunity is not tomorrow, it’s right now,” she said.

“It needs to start with something as simple as an agreement of terminology. For example, the UK has been very clear, through the House of Lords Select Committee, around a set of descriptors of these technologies. We don’t even have that remotely right here, which is resulting in ethical issues.”

Kenny added that policy must lead the ethical discussion and said the draft report for the ethical use of AI and machine learning calls on “government of the highest level” to help industry get it right.

“There needs to be a set of guiding principles for industry, vendors, researchers, care providers, and government themselves to set us on the right path,” she said.

“With the Human Rights Commission and TGA having set out consultations, there’s serious work that needs to be done. This is an opportunity for government and regulators to come together and see it through.”

GOING INTO THE FUTURE

Rohr said that ethics in technology will continue to be an ongoing discussion given the rate of technological advancements.

“There will need to be constantly evolving regulations in place to consistently ensure that there is no abuse of technology,” he added.

Dyer suggested that conversations about what industry expects from AI and machine learning should begin with the general public, and that this input can then be used by governments to make decisions and set up a legal framework.

“It’s both a bottom-up and top-down approach. Research and industry have to advise government on what fits best, and government needs to act. Which one comes first is interesting to see, and will determine how we navigate this path going forward.”

However, Dyer said there needs to be a greater understanding of the capabilities of AI and machine learning before a set of standards is set out, and that ultimately, technology should only be an enabler of change.

“The technologies are too new at the moment. I would advocate that for now, any time an important decision is made regarding human health, it is always done by a perfectly trained healthcare practitioner,” he said.

“Technology can guide decisions but you need a human who is ethically trained to make that decision.”
