Artificial intelligence has helped the healthcare sector become more innovative over the last few decades. However, this rise has prompted cybersecurity concerns and questions about whether machines can act ethically without human guidance.
In artificial intelligence, machines mimic the thought processes and decision-making capabilities of humans. Among the services artificial intelligence provides to the healthcare sector are the creation of custom care plans, checking in patients, and answering questions quickly. AI reduces healthcare costs and streamlines procedures, automating time-consuming tasks without compromising quality.
A smart assistant
As artificial intelligence and deep learning have become more integrated with the healthcare sector, a growing number of apps have been designed specifically to encourage healthier behaviour in patients without requiring regular visits to the doctor.
In the UK, Addenbrooke's Hospital in Cambridge is using Microsoft's InnerEye system to automate the interpretation of scans for prostate cancer patients. This streamlines prostate cancer treatment, and the hospital has already found significant promise in the technology's ability to process images and produce reports for patients with brain tumours.
DeepMind, a subsidiary of Google, has also been working with Moorfields Eye Hospital NHS Foundation Trust since 2016 to assist clinicians in improving the diagnosis and treatment of critical eye disorders. The system detects sight-threatening eye diseases in seconds and ranks patients in order of treatment priority.
Remote patient care
AI-based solutions have also taken on a supportive role for doctors and nurses, acting as intelligent assistants. Alongside this, telemedicine has proven an efficient method for remotely monitoring patients, leading to the development of blood pressure cuffs and insulin administration systems that patients can use at home. As a result, physicians can monitor their patients remotely, predict health outcomes more easily, and focus their attention on critical patients, streamlining the administrative process.
For people requiring palliative care for conditions such as dementia, heart failure, and osteoporosis, technological applications have been developed to help ease loneliness. Robots have also shown their ability to transform end-of-life care, with some designed specifically to engage in conversation and social exchanges with patients who live alone. This collaboration between humans and machines means patients can remain independent for longer without the constant aid of carers.
Despite unconscious bias training for doctors, bias persists in the healthcare sector and cannot be easily eliminated. Human decisions are prone to intentional or unintentional prejudice in how patients are treated. These biases can lead to people receiving poor treatment, inaccurate diagnoses, or delayed diagnoses. AI has the potential to alleviate this by reducing reliance on humans' subjective assessments of data in many circumstances, since it is explicitly designed to consider only the variables that improve predictive accuracy, which can help reduce documented racial disparities in medical care.
Big data, data privacy, and automation bias
Even as deep learning tools and artificial intelligence have become more sophisticated, cybersecurity and human manipulation of big data remain common concerns for patients.
Big data can be used in healthcare to automate mundane tasks, but it requires storing vast amounts of patient data collected from various sources, such as medical records, wearable devices, and genetic testing. Many patients are understandably concerned about how this information might be used without their knowledge, or misused if it falls into the wrong hands.
An attack on the NHS's big data in May 2017 brought this lack of faith in AI to a head. A ransom worm spread rapidly across several high-profile networks, including the NHS's. As a result, thousands of appointments and operations were cancelled, and patients had to travel further to accident and emergency departments. The WannaCry ransomware attack revealed how the protection of patient information can fall short of what is expected across the healthcare system. The NHS could have prevented the problem by keeping the Windows systems in its hospitals and healthcare facilities patched and up to date before the attack took place.
While AI-based tools are automated, they are ultimately created by humans, and they predict what is most likely to happen using information fed to them by humans. In other words, AI systems can introduce or reflect bias and discrimination in three ways: patterns of health discrimination embedded in the training data, unrepresentative datasets with small sample sizes, and human choices made during the design, development, and deployment of these systems. Clinical decision-making is one area where time-strapped clinicians may turn to artificial intelligence for assistance without scrutinising its potential biases.
Balancing innovation and patient privacy
A balance needs to be struck between innovation in artificial intelligence technology and patient privacy. This can be achieved through governance and monitoring of AI-based tools in healthcare. AI accountability toolkits could be created to help healthcare workers tackle potential risks, algorithmic bias, and opacity. The Artificial Intelligence Initiative in the UK has already set up an algorithmic impact assessment programme with the Ada Lovelace Institute to support developers in auditing their technology at an early stage, increasing patient trust in AI and its governance.
In the UK, AI and big data are expected to transform the prevention, early diagnosis, and treatment of chronic diseases by 2030. For current and future projects to demonstrate patient-centredness, inclusion, and impact, AI solutions must be clearly understood and integrated through a monitored, collaborative effort between clinical specialists and technology experts.