Artificial intelligence in healthcare is the use of complex algorithms and software, in other words artificial intelligence (AI), to emulate human cognition in the analysis, interpretation, and comprehension of complicated medical and healthcare data. Specifically, AI is the ability of computer algorithms to approximate conclusions without direct human input.
What distinguishes AI technology from traditional technologies in
health care is the ability to gain information, process it and give a
well-defined output to the end-user. AI does this through machine learning algorithms and deep learning.
These algorithms can recognize patterns in behavior and create their
own logic. In order to reduce the margin of error, AI algorithms need to
be tested repeatedly. AI algorithms behave differently from humans in two ways: (1) algorithms are literal: once a goal is set, an algorithm cannot adjust itself and understands only what it has been told explicitly; and (2) some deep learning algorithms are black boxes: they can make extremely precise predictions, but offer little explanation of the cause or reasoning behind them.
The primary aim of health-related AI applications is to analyze
relationships between prevention or treatment techniques and patient
outcomes. AI programs have been developed and applied to practices such as diagnosis processes, treatment protocol development, drug development, personalized medicine, and patient monitoring and care. Medical institutions such as the Mayo Clinic, Memorial Sloan Kettering Cancer Center, and the British National Health Service have developed AI algorithms for their departments. Large technology companies such as IBM and Google have also developed AI algorithms for healthcare. Additionally,
hospitals are looking to AI software to support operational initiatives
that increase cost savings, improve patient satisfaction, and satisfy
their staffing and workforce needs. Companies are developing predictive analytics solutions that help healthcare managers improve business operations through increasing utilization, decreasing patient boarding, reducing length of stay and optimizing staffing levels.
History
Research in the 1960s and 1970s produced the first problem-solving program, or expert system, known as Dendral. While it was designed for applications in organic chemistry, it provided the basis for a subsequent system MYCIN, considered one of the most significant early uses of artificial intelligence in medicine. MYCIN and other systems such as INTERNIST-1 and CASNET did not achieve routine use by practitioners, however.
The 1980s and 1990s brought the proliferation of the
microcomputer and new levels of network connectivity. During this time,
there was a recognition by researchers and developers that AI systems in
healthcare must be designed to accommodate the absence of perfect data
and build on the expertise of physicians. Approaches involving fuzzy set theory, Bayesian networks, and artificial neural networks have been applied to intelligent computing systems in healthcare.
Medical and technological advancements occurring over this
half-century period that have enabled the growth of healthcare-related
applications of AI include:
- Improvements in computing power resulting in faster data collection and data processing
- Growth of genomic sequencing databases
- Widespread implementation of electronic health record systems
- Improvements in natural language processing and computer vision, enabling machines to replicate human perceptual processes
- Enhanced precision of robot-assisted surgery
- Improvements in deep learning techniques and data logs in rare diseases
Current research
Various specialties in medicine have shown an increase in research regarding AI.
Radiology
The ability of AI to interpret imaging results may aid clinicians in detecting minute changes in an image that a clinician might otherwise miss. A study at Stanford created an algorithm that could detect pneumonia, at that specific site and in the patients involved, with a better average F1 score (a statistical metric that combines precision and recall) than the radiologists involved in that trial.
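The study's implementation is not reproduced here; as a hedged illustration of how an F1 score is computed from a model's predictions, the following sketch uses scikit-learn on made-up labels.

```python
# Illustrative only: computing an F1 score for binary pneumonia detection.
# The labels below are made-up examples, not data from the Stanford study.
from sklearn.metrics import f1_score

ground_truth = [1, 0, 1, 1, 0, 1, 0, 0]   # 1 = pneumonia present, 0 = absent
model_output = [1, 0, 1, 0, 0, 1, 1, 0]   # model's predicted labels

# F1 is the harmonic mean of precision and recall:
#   F1 = 2 * (precision * recall) / (precision + recall)
print(f1_score(ground_truth, model_output))  # 0.75 on this toy data
```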
Several companies (icometrix, QUIBIM, Robovision, among others) have emerged that offer AI platforms to which images can be uploaded. There are also vendor-neutral systems such as UMC Utrecht's IMAGR AI. These platforms are trainable through deep learning to detect a wide range of specific diseases and disorders. The Radiological Society of North America has included presentations on AI in imaging during its annual meeting. The emergence of AI technology in radiology is perceived as a threat by some specialists, because the technology can outperform specialists on certain statistical metrics in isolated cases.
Imaging
Recent advances have suggested the use of AI to describe and evaluate the outcome of maxillo-facial surgery or the assessment of cleft palate therapy in regard to facial attractiveness or age appearance.
In 2018, a paper published in the journal Annals of Oncology
mentioned that skin cancer could be detected more accurately by an
artificial intelligence system (which used a deep learning convolutional
neural network) than by dermatologists.
On average, human dermatologists accurately detected 86.6% of skin cancers from the images, compared with 95% for the CNN.
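The architecture used in that study is not described here; as a rough sketch only, the following Keras example (with an assumed 224x224 input size and binary benign/malignant labels) shows the general shape of a convolutional classifier for skin-lesion images.

```python
# Minimal sketch of a convolutional image classifier, assuming 224x224 RGB
# dermoscopy images and binary benign/malignant labels. Illustrative only;
# this is not the architecture used in the published study.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability of malignancy
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels))
```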
Psychiatry
In psychiatry, AI applications are still in a phase of proof-of-concept.
Areas where the evidence is widening quickly include chatbots,
conversational agents that imitate human behaviour and which have been
studied for anxiety and depression.
Challenges include the fact that many applications in the field
are developed and proposed by private corporations, such as the
screening for suicidal ideation implemented by Facebook in 2017. Such applications outside the healthcare system raise various professional, ethical and regulatory questions.
Disease Diagnosis
AI has been used in many ways to diagnose diseases efficiently and accurately. Some of the best-known diseases, such as diabetes and cardiovascular disease (CVD), both among the top ten causes of death worldwide, have been the basis of much of the research and testing aimed at achieving an accurate diagnosis. Because of the high mortality rate associated with these diseases, there have been efforts to integrate various methods to help obtain accurate diagnoses.
An article by Jiang et al. (2017) demonstrated that several types of AI techniques have been used for a variety of different diseases, including support vector machines, neural networks, and decision trees. Each of these techniques is described as having a “training goal” so that “classifications agree with the outcomes as much as possible…”.
Two techniques used in the classification of these diseases are artificial neural networks (ANN) and Bayesian networks (BN). A review of papers published between 2008 and 2017 examined which of the two techniques performed better. It concluded that “the early classification of these diseases can be achieved developing machine learning models such as Artificial Neural Network and Bayesian Network.” Alic et al. (2017) also concluded that, of the two, ANN could classify diabetes and CVD more accurately, with a mean accuracy in “both cases (87.29 for diabetes and 89.38 for CVD)”.
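The reviewed studies do not publish their code; purely as an illustration of the kind of comparison described, the sketch below trains a small artificial neural network (scikit-learn's MLPClassifier) and a naive Bayes classifier (used here as a simple stand-in for a Bayesian-network model) on synthetic tabular data and compares their accuracy.

```python
# Illustrative comparison only: a small neural network versus a naive Bayes
# classifier on synthetic tabular data standing in for patient records.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# Synthetic "patients": 10 numeric features, binary disease label
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
bayes = GaussianNB()  # simple stand-in for a Bayesian-network classifier

for name, model in [("ANN", ann), ("Naive Bayes", bayes)]:
    model.fit(X_train, y_train)
    print(name, accuracy_score(y_test, model.predict(X_test)))
```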
Telehealth
The increase of telemedicine has shown the rise of possible AI applications. The ability to monitor patients using AI may allow information to be communicated to physicians if possible disease activity has occurred. A wearable device may allow for constant monitoring of a patient and the ability to notice changes that may be less distinguishable by humans.
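As a hypothetical illustration of how a wearable might flag such changes (not a description of any specific device), the sketch below marks heart-rate readings that deviate strongly from the patient's recent baseline using a rolling z-score.

```python
# Illustrative sketch: flag heart-rate samples far from the recent baseline.
# Window size and threshold are arbitrary examples, not clinically validated.
import numpy as np

def flag_anomalies(heart_rate, window=30, threshold=3.0):
    """Return indices where a reading deviates more than `threshold` standard
    deviations from the mean of the previous `window` readings."""
    hr = np.asarray(heart_rate, dtype=float)
    flagged = []
    for i in range(window, len(hr)):
        baseline = hr[i - window:i]
        mean, std = baseline.mean(), baseline.std()
        if std > 0 and abs(hr[i] - mean) / std > threshold:
            flagged.append(i)
    return flagged

readings = [72, 74, 71, 73, 75] * 10 + [130]   # synthetic data with one spike
print(flag_anomalies(readings, window=30))      # [50]
```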
Electronic health records
Electronic health records (EHR) are crucial to the digitalization and spread of information in the healthcare industry. However, logging all of this data comes
with its own problems like cognitive overload and burnout for users. EHR
developers are now automating much of the process and even starting to
use natural language processing (NLP) tools to improve this process. One
study conducted by the Centerstone research institute found that
predictive modeling of EHR data has achieved 70–72% accuracy in
predicting individualized treatment response at baseline. In other words, an AI tool that scans EHR data can predict the course of a person's disease with reasonable accuracy.
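The Centerstone study's model is not described here; as a generic, hedged sketch of predictive modeling on EHR-derived baseline features (the feature names and data are hypothetical), the example below fits a logistic regression that predicts treatment response.

```python
# Generic sketch of predictive modeling on EHR-derived baseline features.
# Feature names and data are hypothetical, not from the Centerstone study.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

records = pd.DataFrame({
    "age":            [34, 61, 45, 52, 29, 70, 48, 55],
    "num_diagnoses":  [1, 4, 2, 3, 1, 5, 2, 3],
    "baseline_score": [12, 30, 18, 25, 10, 33, 20, 27],
    "responded":      [1, 0, 1, 0, 1, 0, 1, 0],   # treatment response label
})
X = records.drop(columns="responded")
y = records["responded"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))
```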
Drug Interactions
Improvements in natural language processing led to the development of algorithms to identify drug-drug interactions in medical literature.
Drug-drug interactions pose a threat to those taking multiple
medications simultaneously, and the danger increases with the number of
medications being taken.
To address the difficulty of tracking all known or suspected drug-drug
interactions, machine learning algorithms have been created to extract
information on interacting drugs and their possible effects from medical
literature. Efforts were consolidated in 2013 in the DDIExtraction
Challenge, in which a team of researchers at Carlos III University assembled a corpus of literature on drug-drug interactions to form a standardized test for such algorithms.
Competitors were tested on their ability to accurately determine, from
the text, which drugs were shown to interact and what the
characteristics of their interactions were. Researchers continue to use this corpus to standardize the measurement of the effectiveness of their algorithms.
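The challenge systems use far more sophisticated machine learning; purely as a toy illustration of extracting interacting drug pairs from text (the drug list and interaction cues below are hypothetical), the sketch matches two known drug names joined by an interaction phrase.

```python
# Toy illustration of rule-based drug-drug interaction extraction from text.
# The drug list and interaction cues are hypothetical examples, not a real lexicon.
import re

DRUGS = {"warfarin", "aspirin", "ibuprofen", "simvastatin"}
INTERACTION_CUES = r"(increases the effect of|interacts with|potentiates|inhibits)"

def extract_interactions(sentence):
    """Return (drug, cue, drug) triples found in a single sentence."""
    pattern = re.compile(
        r"\b(\w+)\b\s+" + INTERACTION_CUES + r"\s+\b(\w+)\b", re.IGNORECASE)
    triples = []
    for a, cue, b in pattern.findall(sentence):
        if a.lower() in DRUGS and b.lower() in DRUGS:
            triples.append((a.lower(), cue.lower(), b.lower()))
    return triples

print(extract_interactions("Aspirin increases the effect of warfarin in some patients."))
# [('aspirin', 'increases the effect of', 'warfarin')]
```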
Other algorithms identify drug-drug interactions from patterns in
user-generated content, especially electronic health records and/or
adverse event reports. Reporting systems such as the FDA Adverse Event Reporting System (FAERS) and the World Health Organization's VigiBase
allow doctors to submit reports of possible negative reactions to
medications. Deep learning algorithms have been developed to parse these
reports and detect patterns that imply drug-drug interactions.
Creation of New Drugs
DSP-1181, a drug molecule for the treatment of obsessive-compulsive disorder (OCD), was invented using artificial intelligence through the joint efforts of Exscientia (a British start-up) and Sumitomo Dainippon Pharma (a Japanese pharmaceutical firm). The drug's development took a single year, while pharmaceutical companies usually spend about five years on similar projects. DSP-1181 was accepted for a human trial.
Industry
The trend of large health companies merging with other health companies allows for greater accessibility of health data. Greater health data may allow for more implementation of AI algorithms.
A large part of the industry's focus on implementing AI in the healthcare sector is on clinical decision support systems. As the amount of data increases, AI decision support systems become more efficient. Numerous companies are exploring the possibilities of incorporating big data in the healthcare industry.
The following are examples of large companies that have contributed to AI algorithms for use in healthcare.
IBM
IBM's Watson Oncology is in development at Memorial Sloan Kettering Cancer Center and Cleveland Clinic. IBM is also working with CVS Health on AI applications in chronic disease treatment and with Johnson & Johnson on analysis of scientific papers to find new connections for drug development. In May 2017, IBM and Rensselaer Polytechnic Institute
began a joint project entitled Health Empowerment by Analytics,
Learning and Semantics (HEALS), to explore using AI technology to
enhance healthcare.
Microsoft
Microsoft's Hanover project, in partnership with Oregon Health & Science University's Knight Cancer Institute, analyzes medical research to predict the most effective cancer drug treatment options for patients. Other projects include medical image analysis of tumor progression and the development of programmable cells.
Google
Google's DeepMind platform is being used by the UK National Health Service to detect certain health risks through data collected via a mobile app.
A second project with the NHS involves analysis of medical images
collected from NHS patients to develop computer vision algorithms to
detect cancerous tissues.
Tencent
Tencent is working on several medical systems and services. These include:
- AI Medical Innovation System (AIMIS), an AI-powered diagnostic medical imaging service
- WeChat Intelligent Healthcare
- Tencent Doctorwork
Intel
Intel's venture capital arm Intel Capital has invested in the startup Lumiata, which uses AI to identify at-risk patients and develop care options.
Startups
Kheiron Medical developed deep learning software to detect breast cancers in mammograms.
Fractal Analytics
has incubated Qure.ai which focuses on using deep learning and AI to
improve radiology and speed up the analysis of diagnostic x-rays.
Other
Digital consultant apps like Babylon Health's GP at Hand, Ada Health, AliHealth Doctor You, KareXpert and Your.MD use AI to give medical consultation
based on personal medical history and common medical knowledge. Users
report their symptoms into the app, which uses speech recognition to
compare against a database of illnesses. Babylon then offers a
recommended action, taking into account the user's medical history.
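These apps' internal methods are proprietary; purely as a toy sketch of comparing reported symptoms against a database of illnesses (the entries below are hypothetical), the snippet ranks illnesses by how many of their symptoms the user reported.

```python
# Toy sketch of symptom-to-illness matching. The database entries are
# hypothetical examples, not a description of any product's actual logic.
ILLNESS_DB = {
    "common cold":  {"cough", "sore throat", "runny nose", "sneezing"},
    "influenza":    {"fever", "cough", "muscle aches", "fatigue"},
    "migraine":     {"headache", "nausea", "light sensitivity"},
}

def rank_illnesses(reported_symptoms):
    """Rank illnesses by the fraction of their symptoms the user reported."""
    reported = set(reported_symptoms)
    scores = {
        illness: len(symptoms & reported) / len(symptoms)
        for illness, symptoms in ILLNESS_DB.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_illnesses(["fever", "cough", "fatigue"]))
# [('influenza', 0.75), ('common cold', 0.25), ('migraine', 0.0)]
```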
Entrepreneurs in healthcare have been effectively using seven business
model archetypes to take AI solutions
to the marketplace. These archetypes depend on the value generated for
the target user (e.g. patient focus vs. healthcare provider and payer
focus) and value capturing mechanisms (e.g. providing information or
connecting stakeholders).
IFlytek
launched a service robot “Xiao Man”, which integrated artificial
intelligence technology to identify the registered customer and provide
personalized recommendations in medical areas. It also works in the
field of medical imaging. Similar robots are also being made by companies such as UBTECH ("Cruzr") and Softbank Robotics ("Pepper").
Implications
The use of AI is predicted to decrease medical costs, as it is expected to bring more accuracy in diagnosis, better predictions in treatment planning, and more prevention of disease.
Other future uses for AI include brain-computer interfaces (BCI), which are predicted to help those who have trouble moving or speaking or who have a spinal cord injury. BCIs will use AI to help these patients move and communicate by decoding neural activity.
As technology evolves and is implemented in more workplaces, many
fear that their jobs will be replaced by robots or machines. The U.S.
News Staff (2018) writes that in the near future, doctors who utilize AI
will “win out” over the doctors who don't. AI will not replace healthcare workers, but instead allow them more time for bedside care. AI may avert healthcare worker burnout and cognitive overload. Overall,
as Quan-Haase (2018) says, technology “extends to the accomplishment of
societal goals, including higher levels of security, better means of
communication over time and space, improved health care, and increased
autonomy” (p. 43). As healthcare providers adapt to and utilize AI in their practice, they can enhance the care they provide, resulting in better outcomes for patients.
Expanding care to developing nations
With
an increase in the use of AI, more care may become available to those
in developing nations. AI continues to expand in its abilities, and as it becomes able to interpret radiological images, it may allow more people to be diagnosed while requiring fewer doctors, which is significant given the physician shortages in many of these nations. A further goal is for AI to help train practitioners around the world, which could lead to improved treatment and eventually greater global health. Using AI in developing nations that do not have the resources would diminish the need for outsourcing and could improve patient care.
For example, natural language processing and machine learning are being used to guide cancer treatments in places such as Thailand, China, and India. Researchers trained an AI application to use NLP to mine patient records and suggest treatments. The treatment decisions made by the AI application agreed with expert decisions 90% of the time.
Regulation
While
research on the use of AI in healthcare aims to validate its efficacy
in improving patient outcomes before its broader adoption, its use may
nonetheless introduce several new types of risk to patients and
healthcare providers, such as algorithmic bias, Do Not Resuscitate implications, and other machine morality issues. These challenges of the clinical use of AI have brought about a potential need for regulation.
Currently no regulations exist specifically for the use of AI in healthcare. In May 2016, the White House announced its plan to host a series of workshops and formation of the National Science and Technology Council (NSTC) Subcommittee on Machine Learning and Artificial Intelligence.
In October 2016, the group published The National Artificial
Intelligence Research and Development Strategic Plan, outlining its
proposed priorities for Federally-funded AI research and development
(within government and academia). The report notes that a strategic R&D plan for the subfield of health information technology is in its development stages.
The only agency that has expressed concern is the FDA. Bakul Patel, the Associate Center Director for Digital Health of the FDA, was quoted in May 2017 as saying:
“We're trying to get people who have hands-on development experience with a product's full life cycle. We already have some scientists who know artificial intelligence and machine learning, but we want complementary people who can look forward and see how this technology will evolve.”
The joint ITU-WHO Focus Group on Artificial Intelligence for Health has built a platform for the testing and benchmarking of AI applications in the health domain.
As of November 2018, eight use cases are being benchmarked, including
assessing breast cancer risk from histopathological imagery, guiding
anti-venom selection from snake images, and diagnosing skin lesions.