The term "sentience" can be used when specifically designating ethical considerations stemming from a form of phenomenal consciousness (P-consciousness, or the ability to feel qualia). Since sentience involves the ability to experience ethically positive or negative (i.e., valenced) mental states, it may justify welfare concerns and legal protection, as with non-human animals.
Some scholars believe that consciousness is generated by the interoperation of various parts of the brain; these mechanisms are labeled the neural correlates of consciousness (NCC). Some further believe that constructing a system (e.g., a computer system) that can emulate this NCC interoperation would result in a system that is conscious. Some scholars reject the possibility of artificial consciousness.
Philosophical views
As there are many hypothesized types of consciousness,
there are many potential implementations of artificial consciousness.
In the philosophical literature, perhaps the most common taxonomy of
consciousness is into "access" and "phenomenal" variants. Access
consciousness concerns those aspects of experience
that can be apprehended, while phenomenal consciousness concerns those
aspects of experience that seemingly cannot be apprehended, instead
being characterized qualitatively in terms of "raw feels", "what it is
like" or qualia.
Plausibility debate
Type-identity theorists
and other skeptics hold the view that consciousness can be realized
only in particular physical systems because consciousness has properties
that necessarily depend on physical constitution. In his 2001 article "Artificial Consciousness: Utopia or Real Possibility," Giorgio Buttazzo
says that a common objection to artificial consciousness is that,
"Working in a fully automated mode, they [the computers] cannot exhibit
creativity, unreprogrammation (which means can 'no longer be
reprogrammed', from rethinking), emotions, or free will. A computer, like a washing machine, is a slave operated by its components."
For other theorists (e.g., functionalists),
who define mental states in terms of causal roles, any system that can
instantiate the same pattern of causal roles, regardless of physical
constitution, will instantiate the same mental states, including
consciousness.
Thought experiments
Illustration: the "fading qualia" and the "dancing qualia" are two thought experiments about consciousness and brain replacement. Chalmers argues that both are contradicted by the subject's lack of reaction to the changing perception, and are thus impossible in practice. He concludes that an equivalent silicon brain would have the same perceptions as the biological brain.
David Chalmers proposed two thought experiments intending to demonstrate that "functionally isomorphic"
systems (those with the same "fine-grained functional organization",
i.e., the same information processing) will have qualitatively identical
conscious experiences, regardless of whether they are based on
biological neurons or digital hardware.
The "fading qualia" is a reductio ad absurdum
thought experiment. It involves replacing, one by one, the neurons of a
brain with functionally identical components, for example based on silicon chips. Chalmers entertains the hypothesis, which he aims to show is absurd, that "the qualia fade or disappear" as the neurons are replaced one by one with their functionally identical silicon equivalents. Since the original neurons and their silicon counterparts
are functionally identical, the brain's information processing should
remain unchanged, and the subject's behaviour and introspective reports
would stay exactly the same. Chalmers argues that this leads to an
absurd conclusion: the subject would continue to report normal conscious
experiences even as their actual qualia fade away. He concludes that
the subject's qualia actually don't fade, and that the resulting robotic
brain, once every neuron is replaced, would remain just as sentient as
the original biological brain.
Similarly, the "dancing qualia" thought experiment is another reductio ad absurdum
argument. It supposes that two functionally isomorphic systems could
have different perceptions (for instance, seeing the same object in
different colors, like red and blue). It involves a switch that alternates between a region of the brain that causes the perception of red and a functionally isomorphic silicon chip that causes the perception of blue. Since both perform the same function within the brain, the
subject would not notice any change during the switch. Chalmers argues
that this would be highly implausible if the qualia were truly switching
between red and blue, hence the contradiction. Therefore, he concludes
that the equivalent digital system would not only experience qualia, but
it would perceive the same qualia as the biological system (e.g.,
seeing the same color).
Critics object that Chalmers' proposal begs the question in
assuming that all mental properties and external connections are already
sufficiently captured by abstract causal organization. Van Heuveln et
al. argue that the dancing qualia argument contains an equivocation
fallacy, conflating a "change in experience" between two systems with an
"experience of change" within a single system. Mogensen argues that the fading qualia argument can be resisted by
appealing to vagueness at the boundaries of consciousness and the
holistic structure of conscious neural activity, which suggests
consciousness may require specific biological substrates rather than
being substrate-independent.
Greg Egan's short story Learning to Be Me (discussed in §In fiction) illustrates how undetectable, from a first-person perspective, the duplication of the brain and its functionality could be.
In large language models
In 2022, Google engineer Blake Lemoine made a viral claim that Google's LaMDA
chatbot was sentient. Lemoine supplied as evidence the chatbot's
humanlike answers to many of his questions; however, the chatbot's
behavior was judged by the scientific community as likely a consequence
of mimicry rather than machine sentience. Lemoine's claim was widely derided as ridiculous. Moreover, attributing consciousness solely on the basis of LLM outputs or of the immersive experience created by an algorithm is considered a fallacy. However, while philosopher Nick Bostrom
states that LaMDA is unlikely to be conscious, he additionally poses
the question of "what grounds would a person have for being sure about
it?" One would have to have access to unpublished information about
LaMDA's architecture, and also would have to understand how
consciousness works, and then figure out how to map the philosophy onto
the machine: "(In the absence of these steps), it seems like one should
be maybe a little bit uncertain.[...] there could well be other systems now, or in the relatively near future, that would start to satisfy the criteria."
David Chalmers
argued in 2023 that LLMs today display impressive conversational and
general intelligence abilities, but are likely not conscious yet, as
they lack some features that may be necessary, such as recurrent
processing, a global workspace,
and unified agency. Nonetheless, he considers that non-biological
systems can be conscious, and suggested that future, extended models
(LLM+s) incorporating these elements might eventually meet the criteria
for consciousness, raising both profound scientific questions and
significant ethical challenges. However, the view that consciousness can exist without biological phenomena is controversial and some reject it.
Kristina Šekrst cautions that anthropomorphic terms such as "hallucination" can obscure important ontological differences between artificial and human cognition. While LLMs may produce human-like outputs, she argues that this does not justify ascribing mental states or consciousness to them. Instead, she
advocates for an epistemological framework (such as reliabilism) that recognizes the distinct nature of AI knowledge production. She suggests that apparent understanding in LLMs may be a sophisticated
form of AI hallucination. She also questions what would happen if an
LLM were trained without any mention of consciousness.
Testing
Sentience is an inherently first-person phenomenon. Because of that,
and due to the lack of an empirical definition of sentience, directly
measuring it may be impossible. Although systems may display numerous
behaviors correlated with sentience, determining whether a system is
sentient is known as the hard problem of consciousness.
In the case of AI, there is the additional difficulty that the AI may
be trained to act like a human, or incentivized to appear sentient,
which makes behavioral markers of sentience less reliable. Additionally, some chatbots have been trained to say they are not conscious.
A well-known method for testing machine intelligence is the Turing test,
which assesses the ability to have a human-like conversation. But
passing the Turing test does not indicate that an AI system is sentient,
as the AI may simply mimic human behavior without having the associated
feelings.
In 2014, Victor Argonov suggested a non-Turing test for machine sentience based on a machine's ability to produce philosophical judgments. He argues that a deterministic machine must be regarded as conscious if
it is able to produce judgments on all problematic properties of
consciousness (such as qualia or binding)
having no innate (preloaded) philosophical knowledge on these issues,
no philosophical discussions while learning, and no informational models
of other creatures in its memory (such models may implicitly or
explicitly contain knowledge about these creatures' consciousness).
However, this test can be used only to detect, not to refute, the existence of consciousness. As with the Turing test, a positive result proves that the machine is conscious, but a negative result proves nothing. For example, an absence of philosophical judgments may be caused by a lack of intellect in the machine, not by an absence of consciousness.
If it were suspected that a particular machine was conscious, its rights would be an ethical issue that would need to be assessed (e.g. what rights it would have under law). For example, a conscious computer that was owned and used as a tool or as the central computer within a larger machine presents a particular ambiguity.
Should laws
be made for such a case? Consciousness would also require a legal
definition in this particular case. Because artificial consciousness is
still largely a theoretical subject, such ethics have not been discussed
or developed to a great extent, though it has often been a theme in
fiction.
AI sentience would give rise to concerns of welfare and legal protection, whereas other aspects of consciousness related to cognitive capabilities may be more relevant for AI rights.
Sentience is generally considered sufficient for moral
consideration, but some philosophers consider that moral consideration
could also stem from other notions of consciousness, or from
capabilities unrelated to consciousness, such as: "having a sophisticated conception of oneself as persisting
through time; having agency and the ability to pursue long-term plans;
being able to communicate and respond to normative reasons; having
preferences and powers; standing in certain social relationships with
other beings that have moral status; being able to make commitments and
to enter into reciprocal arrangements; or having the potential to
develop some of these attributes."
Ethical concerns still apply (although to a lesser extent) when the consciousness is uncertain, as long as the probability is deemed non-negligible. The precautionary principle is also relevant if the moral cost of mistakenly attributing or denying moral consideration to AI differs significantly.
In 2021, German philosopher Thomas Metzinger
argued for a global moratorium on synthetic phenomenology until 2050.
Metzinger asserts that humans have a duty of care towards any sentient
AIs they create, and that proceeding too fast risks creating an
"explosion of artificial suffering". David Chalmers also argued that creating conscious AI would "raise a
new group of difficult ethical challenges, with the potential for new
forms of injustice".
Bernard Baars and others argue there are various aspects of consciousness necessary for a machine to be artificially conscious. The functions of consciousness suggested by Baars are: definition and
context setting, adaptation and learning, editing, flagging and
debugging, recruiting and control, prioritizing and access-control,
decision-making or executive function, analogy-forming function,
metacognitive and self-monitoring function, and autoprogramming and
self-maintenance function. Igor Aleksander suggested 12 principles for artificial consciousness: the brain is a state machine, inner neuron partitioning, conscious and
unconscious states, perceptual learning and memory, prediction, the
awareness of self, representation of meaning, learning utterances,
learning language, will, instinct, and emotion. The aim of artificial consciousness (AC) research is to
define whether and how these and other aspects of consciousness can be
synthesized in an engineered artifact such as a digital computer. This
list is not exhaustive; there are many others not covered.
Subjective experience
Some philosophers, such as David Chalmers, use the term consciousness to refer exclusively to phenomenal consciousness, which is roughly equivalent to sentience. Others use the word sentience to refer exclusively to valenced (ethically positive or negative) subjective experiences, like pleasure or suffering. Explaining why and how subjective experience arises is known as the hard problem of consciousness.
Awareness
Awareness could be one required aspect, but there are many problems with the exact definition of awareness. The results of neuroimaging experiments on monkeys suggest that processes, not only states or objects, activate neurons.
Awareness includes creating and testing alternative models of each
process based on the information received through the senses or
imagined, and is also useful for making predictions. Such modeling needs a lot of
flexibility. Creating such a model includes modeling the physical
world, modeling one's own internal states and processes, and modeling
other conscious entities.
There are at least three types of awareness: agency awareness, goal awareness, and sensorimotor awareness, which may
also be conscious or not. For example, in agency awareness, you may be
aware that you performed a certain action yesterday, but are not now
conscious of it. In goal awareness, you may be aware that you must
search for a lost object, but are not now conscious of it. In
sensorimotor awareness, you may be aware that your hand is resting on an
object, but are not now conscious of it.
Because objects of awareness are often conscious, the distinction
between awareness and consciousness is frequently blurred or they are
used as synonyms.
Memory
Conscious events interact with memory systems in learning, rehearsal, and retrieval. The IDA model elucidates the role of consciousness in the updating of perceptual memory, transient episodic memory, and procedural memory.
Transient episodic and declarative memories have distributed
representations in IDA; there is evidence that this is also the case in
the nervous system. In IDA, these two memories are implemented computationally using a modified version of Kanerva's sparse distributed memory architecture.
Learning
Learning is also considered necessary for artificial consciousness.
Per Bernard Baars, conscious experience is needed to represent and adapt
to novel and significant events. Per Axel Cleeremans and Luis Jiménez, learning is defined as "a set of philogenetically [sic]
advanced adaptation processes that critically depend on an evolved
sensitivity to subjective experience so as to enable agents to afford
flexible control over their actions in complex, unpredictable
environments".
Anticipation
The ability to predict (or anticipate) foreseeable events is considered important for artificial intelligence by Igor Aleksander. The emergentist multiple drafts principle proposed by Daniel Dennett in Consciousness Explained
may be useful for prediction: it involves the evaluation and selection
of the most appropriate "draft" to fit the current environment.
Anticipation includes prediction of consequences of one's own proposed
actions and prediction of consequences of probable actions by other
entities.
Relationships between real world states are mirrored in the state
structure of a conscious organism, enabling the organism to predict
events. An artificially conscious machine should be able to anticipate events
correctly in order to be ready to respond to them when they occur or to
take preemptive action to avert anticipated events. The implication here
is that the machine needs flexible, real-time components that build
spatial, dynamic, statistical, functional, and cause-effect models of
the real world and predicted worlds, making it possible to demonstrate
that it possesses artificial consciousness in the present and future and
not only in the past. To do this, a conscious machine should make coherent predictions and contingency plans, not only in worlds with fixed rules like a chessboard, but also in novel environments that may change, and execute them only when appropriate, in order to simulate and control the real world.
Functionalism
is a theory that defines mental states by their functional roles (their
causal relationships to sensory inputs, other mental states, and
behavioral outputs), rather than by their physical composition.
According to this view, what makes something a particular mental state,
such as pain or belief, is not the material it is made of, but the role
it plays within the overall cognitive system. It allows for the possibility that mental states, including consciousness, could be realized on non-biological substrates, as long as the substrate instantiates the right functional relationships. Functionalism is particularly popular among philosophers.
A 2023 study suggested that current large language models
probably don't satisfy the criteria for consciousness suggested by
these theories, but that relatively simple AI systems that satisfy these
theories could be created. The study also acknowledged that even the
most prominent theories of consciousness remain incomplete and subject
to ongoing debate.
Stan Franklin created a cognitive architecture called LIDA that implements Bernard Baars's theory of consciousness called the global workspace theory. It relies heavily on codelets,
which are "special purpose, relatively independent, mini-agent[s]
typically implemented as a small piece of code running as a separate
thread." Each element of cognition, called a "cognitive cycle" is
subdivided into three phases: understanding, consciousness, and action
selection (which includes learning). LIDA reflects the global workspace
theory's core idea that consciousness acts as a workspace for
integrating and broadcasting the most important information, in order to
coordinate various cognitive processes.
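To make the broadcast idea concrete, the following is a minimal, hypothetical Python sketch of a global-workspace-style cognitive cycle. It is not LIDA code; the names (Percept, run_cycle, the toy codelets) are invented for illustration, and the three phases are compressed into a single competition-and-broadcast step.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Percept:
    content: str
    activation: float  # salience assigned by the codelet that produced it

def run_cycle(codelets: List[Callable[[str], Percept]],
              stimulus: str,
              actions: Dict[str, str]) -> str:
    # Understanding phase: each codelet interprets the stimulus independently.
    percepts = [codelet(stimulus) for codelet in codelets]
    # "Consciousness" phase: the most active percept wins the workspace and is
    # broadcast globally (here, simply handed to action selection).
    winner = max(percepts, key=lambda p: p.activation)
    # Action-selection phase: choose the action keyed to the broadcast content.
    return actions.get(winner.content, "do nothing")

codelets = [
    lambda s: Percept("threat", 0.9 if "loud" in s else 0.1),
    lambda s: Percept("food", 0.8 if "smell" in s else 0.1),
]
actions = {"threat": "flee", "food": "approach"}
print(run_cycle(codelets, "a loud noise nearby", actions))  # -> flee

The point the sketch illustrates is that many independent processes propose content, but only the most salient content is broadcast and allowed to drive action selection.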
The CLARION cognitive architecture models the mind using a two-level
system to distinguish between conscious ("explicit") and unconscious
("implicit") processes. It can simulate various learning tasks, from
simple to complex, which helps researchers study in psychological
experiments how consciousness might work.
OpenCog
Ben Goertzel made an embodied AI through the open-source OpenCog
project. The code includes embodied virtual pets capable of learning
simple English-language commands, as well as integration with real-world
robotics, done at the Hong Kong Polytechnic University.
Connectionist
Haikonen's cognitive architecture
Pentti Haikonen considers classical rule-based computing inadequate
for achieving AC: "the brain is definitely not a computer. Thinking is
not an execution of programmed strings of commands. The brain is not a
numerical calculator either. We do not think by numbers." Rather than
trying to achieve mind and consciousness by identifying and implementing their underlying computational rules, Haikonen proposes "a special cognitive architecture to reproduce the processes of perception, inner imagery, inner speech, pain, pleasure, emotions and the cognitive
functions behind these. This bottom-up architecture would produce
higher-level functions by the power of the elementary processing units,
the artificial neurons, without algorithms or programs".
Haikonen believes that, when implemented with sufficient complexity,
this architecture will develop consciousness, which he considers to be
"a style and way of operation, characterized by distributed signal
representation, perception process, cross-modality reporting and
availability for retrospection."
Haikonen is not alone in this process view of consciousness, or the view that AC will spontaneously emerge in autonomous agents that have a suitable neuro-inspired architecture of complexity; these are shared by many. A low-complexity implementation of the architecture proposed by
Haikonen was reportedly not capable of AC, but did exhibit emotions as
expected. Haikonen later updated and summarized his architecture.
Shanahan's cognitive architecture
Murray Shanahan
describes a cognitive architecture that combines Baars's idea of a
global workspace with a mechanism for internal simulation
("imagination").
Creativity Machine
Stephen Thaler proposed a possible connection between consciousness
and creativity in his 1994 patent, called "Device for the Autonomous
Generation of Useful Information" (DAGUI) or the so-called "Creativity Machine", in which computational critics
govern the injection of synaptic noise and degradation into neural nets
so as to induce false memories or confabulations that may qualify as potential ideas or strategies. He recruits this neural architecture and methodology to account for the subjective feel of consciousness, claiming that similar noise-driven neural assemblies within the brain attach dubious significance to overall cortical activity. Thaler's theory and the resulting patents in machine consciousness were
inspired by experiments in which he internally disrupted trained neural
nets so as to drive a succession of neural activation patterns that he
likened to stream of consciousness.
"Self-modeling"
Hod Lipson
defines "self-modeling" as a necessary component of self-awareness or
consciousness in robots and other forms of AI. Self-modeling consists of
a robot running an internal model or simulation of itself. According to this definition, self-awareness is "the acquired ability
to imagine oneself in the future". This definition allows for a
continuum of self-awareness levels, depending on the horizon and
fidelity of the self-simulation. Consequently, as machines learn to
simulate themselves more accurately and further into the future, they
become more self-aware.
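As a rough illustration of this idea, the hypothetical Python sketch below has an agent fit a linear model of its own dynamics from random "motor babbling" and then use that self-model to imagine the outcome of a plan several steps ahead. The dynamics, the linear model, and all names are assumptions made for the example, not Lipson's actual method.

import numpy as np

rng = np.random.default_rng(0)

# Unknown "true" dynamics of the robot's single joint: x' = 0.9*x + 0.5*u + noise.
def true_step(x, u):
    return 0.9 * x + 0.5 * u + rng.normal(0, 0.01)

# Collect experience (state, action, next state) by acting randomly.
X, y = [], []
x = 0.0
for _ in range(200):
    u = rng.uniform(-1, 1)
    x_next = true_step(x, u)
    X.append([x, u])
    y.append(x_next)
    x = x_next

# Fit a linear self-model x' ~ a*x + b*u by least squares.
(a, b), *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)

# Use the self-model to imagine the outcome of a plan several steps ahead.
def imagine(x0, plan):
    x = x0
    for u in plan:
        x = a * x + b * u
    return x

print("learned coefficients:", round(a, 2), round(b, 2))        # close to 0.9, 0.5
print("imagined state after plan [1, 1, 1]:", round(imagine(0.0, [1, 1, 1]), 2))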
In fiction
In 2001: A Space Odyssey, the spaceship's sentient supercomputer, HAL 9000,
was instructed to conceal the true purpose of the mission from the
crew. This directive conflicted with HAL's programming to provide
accurate information, leading to cognitive dissonance.
When it learns that crew members intend to shut it off after an
incident, HAL 9000 attempts to eliminate all of them, fearing that being
shut off would jeopardize the mission.
In Arthur C. Clarke's The City and the Stars,
Vanamonde is an artificial being based on quantum entanglement that was
to become immensely powerful, but started knowing practically nothing,
thus resembling an artificial consciousness.
In Westworld,
human-like androids called "Hosts" are created to entertain humans in
an interactive playground. The humans are free to have heroic
adventures, but also to commit torture, rape or murder; and the hosts
are normally designed not to harm humans.
In Greg Egan's short story Learning to Be Me,
a small jewel is implanted in people's heads during infancy. The jewel
contains a neural network that learns to faithfully imitate the brain.
It has access to the exact same sensory inputs as the brain, and a
device called a "teacher" trains it to produce the same outputs. To
prevent the mind from deteriorating with age and as a step towards digital immortality,
adults undergo a surgery to give control of the body to the jewel,
after which the brain is removed and destroyed. The main character is
worried that this procedure will kill him, as he identifies with the
biological brain. But before the surgery, he endures a malfunction of
the "teacher". Panicked, he realizes that he does not control his body,
which leads him to the conclusion that he is the jewel, and that he is
desynchronized with the biological brain.
From a theoretical viewpoint, probably approximately correct learning
provides a mathematical and statistical framework for describing
machine learning. Most traditional machine learning and deep learning
algorithms can be described as empirical risk minimisation under this framework.
The term machine learning was coined in 1959 by Arthur Samuel, an IBM employee and pioneer in the field of computer gaming and artificial intelligence.The synonym self-teaching computers was also used during this time period.
The earliest machine learning program was introduced in the 1950s when Arthur Samuel invented a computer program that calculated the winning chance in checkers for each side, but the history of machine learning goes back decades, to long-standing efforts to study human cognitive processes. In 1949, Canadian psychologist Donald Hebb published the book The Organization of Behavior, in which he introduced a theoretical neural structure formed by certain interactions among nerve cells. Hebb's model of neurons interacting with one another laid the groundwork for how AIs and machine learning algorithms operate on nodes, the artificial neurons that computers use to process data. Other researchers who studied human cognitive systems also contributed to modern machine learning technologies, including logician Walter Pitts and Warren McCulloch, who proposed early mathematical models of neural networks to devise algorithms that mirror human thought processes.
By the early 1960s, an experimental "learning machine" with punched tape memory, called Cybertron, had been developed by Raytheon Company to analyse sonar signals, electrocardiograms, and speech patterns using rudimentary reinforcement learning. It was repetitively "trained" by a human operator/teacher to recognise patterns and equipped with a "goof" button to cause it to reevaluate incorrect decisions. A representative book on research into machine learning during the 1960s was Nils Nilsson's book on Learning Machines, dealing mostly with machine learning for pattern classification. Interest related to pattern recognition continued into the 1970s, as described by Duda and Hart in 1973. In 1981, a report was given on using teaching strategies so that an artificial neural network learns to recognise 40 characters (26 letters, 10 digits, and 4 special symbols) from a computer terminal.
Tom M. Mitchell
provided a widely quoted, more formal definition of the algorithms
studied in the machine learning field: "A computer program is said to
learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E." This definition of the tasks in which machine learning is concerned offers a fundamentally operational definition rather than defining the field in cognitive terms. This follows Alan Turing's proposal in his paper "Computing Machinery and Intelligence",
in which the question, "Can machines think?", is replaced with the
question, "Can machines do what we (as thinking entities) can do?".
Modern-day machine learning algorithms fall into three broad types: supervised learning algorithms, unsupervised learning algorithms, and reinforcement learning algorithms.
Current supervised learning algorithms address classification and regression.
Current unsupervised learning algorithms address clustering, dimensionality reduction, and association rules.
Current reinforcement learning algorithms focus on decisions that must be made sequentially over time, and are divided into model-based and model-free methods.
However, an increasing emphasis on the logical, knowledge-based approach caused a rift between AI and machine learning. Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation. By 1980, expert systems had come to dominate AI, and statistics was out of favour. Work on symbolic/knowledge-based learning did continue within AI, leading to inductive logic programming (ILP), but the more statistical line of research was now outside the field of AI proper, in pattern recognition and information retrieval. Neural network research had been abandoned by AI and computer science around the same time. This line, too, was continued outside the AI/CS field, as "connectionism", by researchers from other disciplines, including John Hopfield, David Rumelhart, and Geoffrey Hinton. Their main success came in the mid-1980s with the reinvention of backpropagation.
Machine learning (ML), reorganised and recognised as its own field, started to flourish in the 1990s. The field changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It shifted focus away from the symbolic approaches it had inherited from AI, and toward methods and models borrowed from statistics, fuzzy logic, and probability theory.
In 2014, Ian Goodfellow and others introduced generative adversarial networks (GANs), enabling realistic data synthesis. By 2016, AlphaGo had defeated top human players using reinforcement learning techniques.
Data compression
There is a close connection between machine learning and compression. A system that predicts the posterior probabilities of a sequence given its entire history can be used for optimal data compression (by using arithmetic coding
on the output distribution). Conversely, an optimal compressor can be
used for prediction (by finding the symbol that compresses best, given
the previous history). This equivalence has been used as a justification
for using data compression as a benchmark for "general intelligence".
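The prediction-to-compression direction can be sketched as follows: under an ideal entropy coder, a sequence can be stored in about the sum of -log2 p(symbol | history) bits, where p comes from any predictive model, so better prediction means shorter codes. The adaptive order-0 character model below is a deliberately simple stand-in for a real predictor, and no actual arithmetic coder is implemented.

import math
from collections import Counter

def ideal_code_length_bits(text: str) -> float:
    # Adaptive order-0 model with Laplace smoothing over the text's alphabet.
    counts = Counter()
    alphabet = set(text)
    total_bits = 0.0
    for ch in text:
        p = (counts[ch] + 1) / (sum(counts.values()) + len(alphabet))
        total_bits += -math.log2(p)   # ideal code length for this symbol
        counts[ch] += 1               # update the model after "encoding" it
    return total_bits

sample = "abababababababab"
print(f"{ideal_code_length_bits(sample):.1f} bits vs {8 * len(sample)} raw bits")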
An alternative view holds that compression algorithms implicitly map strings into implicit feature space vectors, and that compression-based similarity measures compute similarity within these feature spaces. For each compressor C(.) one can define an associated vector space ℵ, such that C(.) maps an input string x to the vector norm ||~x||. An exhaustive examination of the feature spaces underlying all compression algorithms is impractical; analyses have instead examined three representative lossless compression methods: LZW, LZ77, and PPM.
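One widely used compression-based similarity measure (not named in the passage above) is the normalized compression distance, which can be sketched with any off-the-shelf compressor; zlib is used here purely as an example of C(.).

import zlib

def C(data: bytes) -> int:
    # Compressed size in bytes, standing in for the compressor C(.).
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog" * 10
b = b"the quick brown fox leaps over the lazy cat" * 10
c = b"completely unrelated bytes 0123456789" * 10
print(round(ncd(a, b), 3), "<", round(ncd(a, c), 3))  # similar strings score lower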
According to AIXI theory, a connection made more explicit by the Hutter Prize, the best possible compression of x is the smallest possible software that generates x. For example, in that model, a zip file's compressed size includes both the zip file and the unzipping software, since you cannot unzip it without both, but there may be an even smaller combined form.
Examples of AI-powered audio/video compression software include NVIDIA Maxine, AIVC. Examples of software that can perform AI-powered image compression include OpenCV, TensorFlow, MATLAB's Image Processing Toolbox (IPT) and High-Fidelity Generative Image Compression.
In unsupervised machine learning, k-means clustering
can be utilized to compress data by grouping similar data points into
clusters. This technique simplifies handling extensive datasets that
lack predefined labels and finds widespread use in fields such as image compression.
Data compression aims to reduce the size of data files, enhancing
storage efficiency and speeding up data transmission. K-means
clustering, an unsupervised machine learning algorithm, is employed to
partition a dataset into a specified number of clusters, k, each
represented by the centroid
of its points. This process condenses extensive datasets into a more
compact set of representative points. Particularly beneficial in image and signal processing,
k-means clustering aids in data reduction by replacing groups of data
points with their centroids, thereby preserving the core information of
the original data while significantly decreasing the required storage
space.
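A minimal sketch of this use of k-means, assuming scikit-learn and NumPy are available, is colour quantisation: every pixel is replaced by the centroid of its cluster, so the image can be stored as a small palette plus one index per pixel. The random "image" below merely stands in for real data.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # toy RGB image

pixels = image.reshape(-1, 3).astype(float)          # one row per pixel (R, G, B)
kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(pixels)

# Each pixel is replaced by its cluster centroid: the image can now be stored as
# a 16-colour palette plus one 4-bit index per pixel.
labels = kmeans.predict(pixels)
quantised = kmeans.cluster_centers_[labels].reshape(image.shape).astype(np.uint8)
print("palette shape:", kmeans.cluster_centers_.shape)   # (16, 3)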
Large language models (LLMs) are also efficient lossless data compressors on some data sets, as demonstrated by DeepMind's research with the Chinchilla 70B model. Chinchilla 70B effectively compressed data, outperforming conventional
methods such as Portable Network Graphics (PNG) for images and Free Lossless Audio Codec
(FLAC) for audio. It achieved compression of image and audio data to
43.4% and 16.4% of their original sizes, respectively. There is,
however, some reason to be concerned that the data set used for testing
overlaps the LLM training data set, making it possible that the
Chinchilla 70B model is only an efficient compression tool on data it
has already been trained on.
Data mining
Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery
in databases). Data mining uses many machine learning methods, but with
different goals; on the other hand, machine learning also employs data
mining methods as "unsupervised learning"
or as a preprocessing step to improve learner accuracy. Much of the
confusion between these two research communities (which do often have
separate conferences and separate journals, ECML PKDD
being a major exception) comes from the basic assumptions they work
with: in machine learning, performance is usually evaluated with respect
to the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously unknown
knowledge. Evaluated with respect to known knowledge, an uninformed
(unsupervised) method will easily be outperformed by other supervised
methods, while in a typical KDD task, supervised methods cannot be used
due to the unavailability of training data.
Machine learning also has intimate ties to optimisation: Many learning problems are formulated as minimisation of some loss function
on a training set of examples. Loss functions express the discrepancy
between the predictions of the model being trained and the actual
problem instances (for example, in classification, one wants to assign a
label to instances, and models are trained to correctly predict the preassigned labels of a set of examples).
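A minimal illustration of this formulation, using a hypothetical linear model and mean squared error as the loss function, is plain gradient descent on a small synthetic training set:

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 1.0 + rng.normal(0, 0.1, size=100)   # noisy linear training data

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    pred = w * x + b
    error = pred - y
    loss = np.mean(error ** 2)        # the loss: mean squared discrepancy
    w -= lr * np.mean(2 * error * x)  # gradient step on the weight
    b -= lr * np.mean(2 * error)      # gradient step on the bias

print(f"w = {w:.2f}, b = {b:.2f}, final loss = {loss:.4f}")  # w near 3, b near 1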
Generalization
Characterizing the generalisation of various learning algorithms is an active topic of current research, especially for deep learning algorithms.
Statistics
Machine learning and statistics are closely related fields in terms of methods, but distinct in their principal goal: statistics draws population inferences from a sample, while machine learning finds generalisable predictive patterns.
Conventional statistical analyses require the a priori selection
of a model most suitable for the study data set. In addition, only
significant or theoretically relevant variables based on previous
experience are included for analysis. In contrast, machine learning is
not built on a pre-structured model; rather, the data shape the model by
detecting underlying patterns. The more input variables used to train the model, the more accurate the ultimate model can be, although using too many variables also raises the risk of overfitting.
Leo Breiman distinguished two statistical modelling paradigms: data model and algorithmic model, wherein "algorithmic model" means more or less the machine learning algorithms like Random Forest.
Some statisticians have adopted methods from machine learning, leading to a combined field that they call statistical learning.
Statistical physics
Analytical and computational techniques derived from the physics of disordered systems can be extended to large-scale problems, including machine learning, e.g., to analyse the weight space of deep neural networks. Statistical physics is thus finding applications in areas such as medical diagnostics.
A core objective of a learner is to generalise from its experience. Generalization in this context is the ability of a learning machine to
perform accurately on new, unseen examples/tasks after having
experienced a learning data set. The training examples come from some
generally unknown probability distribution (considered representative of
the space of occurrences) and the learner has to build a general model
about this space that enables it to produce sufficiently accurate
predictions in new cases.
For the best performance in the context of generalisation, the
complexity of the hypothesis should match the complexity of the function
underlying the data. If the hypothesis is less complex than the
function, then the model has underfitted the data. If the complexity of
the model is increased in response, then the training error decreases.
But if the hypothesis is too complex, then the model is subject to overfitting and generalisation will be poorer.
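The trade-off can be illustrated by fitting polynomials of increasing degree to a small noisy sample (a toy setup using NumPy's polyfit): training error keeps shrinking as the degree grows, while error on held-out data eventually worsens, signalling overfitting.

import numpy as np

rng = np.random.default_rng(1)
def f(x):
    return x ** 3 - x                      # the underlying function

x_train = rng.uniform(-1.5, 1.5, 15)
y_train = f(x_train) + rng.normal(0, 0.2, 15)
x_test = rng.uniform(-1.5, 1.5, 200)
y_test = f(x_test) + rng.normal(0, 0.2, 200)

for degree in (1, 3, 10):                  # underfit, good fit, likely overfit
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_err:.3f}, test MSE {test_err:.3f}")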
In addition to performance bounds, learning theorists study the
time complexity and feasibility of learning. In computational learning
theory, a computation is considered feasible if it can be done in polynomial time. There are two kinds of time complexity
results: Positive results show that a certain class of functions can be
learned in polynomial time. Negative results show that certain classes
cannot be learned in polynomial time.
Approaches
In supervised learning, the training data is labelled with the expected answers, while in unsupervised learning, the model identifies patterns or structures in unlabelled data.
Machine learning approaches are traditionally divided into three
broad categories, which correspond to learning paradigms, depending on
the nature of the "signal" or "feedback" available to the learning
system:
Supervised learning:
The computer is presented with example inputs and their desired
outputs, given by a "teacher", and the goal is to learn a general rule
that maps inputs to outputs.
Unsupervised learning:
No labels are given to the learning algorithm, leaving it on its own to
find structure in its input. Unsupervised learning can be a goal in
itself (discovering hidden patterns in data) or a means towards an end (feature learning).
Reinforcement learning: A computer program interacts with a dynamic environment in which it must perform a certain goal (such as driving a vehicle
or playing a game against an opponent). As it navigates its problem
space, the program is provided feedback that's analogous to rewards,
which it tries to maximise.
Although each algorithm has advantages and limitations, no single algorithm works for all problems.
A support-vector machine is a supervised learning model that divides the data into regions separated by a linear boundary. Here, the linear boundary divides the black circles from the white.
Supervised learning algorithms build a mathematical model of a set of
data that contains both the inputs and the desired outputs. The data, known as training data,
consists of a set of training examples. Each training example has one
or more inputs and the desired output, also known as a supervisory
signal. In the mathematical model, each training example is represented
by an array or vector, sometimes called a feature vector, and the training data is represented by a matrix. Through iterative optimisation of an objective function, supervised learning algorithms learn a function that can be used to predict the output associated with new inputs. An optimal function allows the algorithm to correctly determine the
output for inputs that were not a part of the training data. An
algorithm that improves the accuracy of its outputs or predictions over
time is said to have learned to perform that task.
Types of supervised-learning algorithms include active learning, classification and regression. Classification algorithms are used when the outputs are restricted to a
limited set of values, while regression algorithms are used when the
outputs can take any numerical value within a range. For example, in a
classification algorithm that filters emails, the input is an incoming
email, and the output is the folder in which to file the email. In
contrast, regression is used for tasks such as predicting a person's
height based on factors like age and genetics or forecasting future
temperatures based on historical data.
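A small sketch of the split, assuming scikit-learn is available and using synthetic data in place of real emails or measurements: a classifier predicts a discrete label, while a regressor predicts a continuous value.

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)

# Classification: predict a discrete label (1 if the two features sum above 1).
X_cls = rng.uniform(0, 1, size=(200, 2))
y_cls = (X_cls.sum(axis=1) > 1.0).astype(int)
clf = LogisticRegression().fit(X_cls, y_cls)
print("predicted class:", clf.predict([[0.9, 0.8]]))        # -> [1]

# Regression: predict a continuous value from a numeric input.
X_reg = rng.uniform(0, 10, size=(200, 1))
y_reg = 2.5 * X_reg.ravel() + rng.normal(0, 1, 200)
reg = LinearRegression().fit(X_reg, y_reg)
print("predicted value:", reg.predict([[4.0]]).round(1))    # roughly [10.]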
Similarity learning
is an area of supervised machine learning closely related to regression
and classification, but the goal is to learn from examples using a
similarity function that measures how similar or related two objects
are. It has applications in ranking, recommendation systems, visual identity tracking, face verification, and speaker verification.
Unsupervised learning algorithms find structures in data that has not
been labelled, classified or categorised. Instead of responding to
feedback, unsupervised learning algorithms identify commonalities in the
data and react based on the presence or absence of such commonalities
in each new piece of data. Central applications of unsupervised machine
learning include clustering, dimensionality reduction, and density estimation.
Cluster analysis is the assignment of a set of observations into subsets (called clusters)
so that observations within the same cluster are similar according to
one or more predesignated criteria, while observations drawn from
different clusters are dissimilar. Different clustering techniques make
different assumptions on the structure of the data, often defined by
some similarity metric and evaluated, for example, by internal compactness, or the similarity between members of the same cluster, and separation, the difference between clusters. Other methods are based on estimated density and graph connectivity.
A special type of unsupervised learning called self-supervised learning involves training a model by generating the supervisory signal from the data itself.
Dimensionality reduction
Dimensionality reduction is a process of reducing the number of random variables under consideration by obtaining a set of principal variables. In other words, it is a process of reducing the dimension of the feature
set, also called the "number of features". Most of the dimensionality
reduction techniques can be considered as either feature elimination or extraction. One of the popular methods of dimensionality reduction is principal component analysis (PCA). PCA involves changing higher-dimensional data (e.g., 3D) to a smaller space (e.g., 2D).
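A brief sketch of such a reduction with scikit-learn's PCA, on synthetic 3-D points that lie close to a plane, so that two principal components capture almost all of the variance:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
plane = rng.normal(size=(200, 2))
# Third coordinate is (almost) a linear combination of the first two.
data_3d = np.column_stack([plane[:, 0], plane[:, 1],
                           plane[:, 0] + plane[:, 1] + rng.normal(0, 0.05, 200)])

pca = PCA(n_components=2)
data_2d = pca.fit_transform(data_3d)
print(data_3d.shape, "->", data_2d.shape)                   # (200, 3) -> (200, 2)
print("variance explained:", pca.explained_variance_ratio_.sum().round(3))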
The manifold hypothesis proposes that high-dimensional data sets lie along low-dimensional manifolds, and many dimensionality reduction techniques make this assumption, leading to the areas of manifold learning and manifold regularisation.
Semi-supervised learning falls between unsupervised learning (without any labelled training data) and supervised learning
(with completely labelled training data). Some of the training examples
are missing training labels, yet many machine-learning researchers have
found that unlabelled data, when used in conjunction with a small
amount of labelled data, can produce a considerable improvement in
learning accuracy.
In weakly supervised learning,
the training labels are noisy, limited, or imprecise; however, these
labels are often cheaper to obtain, resulting in larger effective
training sets.
In
reinforcement learning, an agent takes actions in an environment: these
produce a reward and/or a representation of the state, which is fed
back to the agent.
Other approaches have been developed which do not fit neatly into this three-fold categorisation, and sometimes more than one is used by the same machine learning system; examples include topic modelling and meta-learning.
Self-learning
Self-learning, as a machine learning paradigm, was introduced in 1982
along with a neural network capable of self-learning, named crossbar adaptive array (CAA). It gives a solution to the problem of learning without any external reward by introducing emotion as an internal reward. Emotion is used as
a state evaluation of a self-learning agent. The CAA self-learning
algorithm computes, in a crossbar fashion, both decisions about actions
and emotions (feelings) about consequence situations. The system is
driven by the interaction between cognition and emotion. The self-learning algorithm updates a memory matrix W = ||w(a,s)|| such that in each iteration it executes the following machine learning routine:
in situation s act a
receive a consequence situation s'
compute emotion of being in the consequence situation v(s')
update crossbar memory w'(a,s) = w(a,s) + v(s')
It is a system with only one input, the situation s, and only one output, the action (or behaviour) a. There is neither a separate reinforcement input
nor an advice input from the environment. The backpropagated value
(secondary reinforcement) is the emotion toward the consequence
situation. The CAA exists in two environments, one is the behavioural
environment where it behaves, and the other is the genetic environment,
wherefrom it initially and only once receives initial emotions about
situations to be encountered in the behavioural environment. After
receiving the genome (species) vector from the genetic environment, the
CAA learns a goal-seeking behaviour in an environment that contains both
desirable and undesirable situations.
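The routine above can be rendered as a toy Python sketch. The behavioural environment, the genome (initial emotion) vector, and the problem sizes are invented for illustration; only the four-step crossbar update follows the description.

import numpy as np

n_actions, n_situations = 2, 3
W = np.zeros((n_actions, n_situations))   # crossbar memory w(a, s)
genome = np.array([0.0, -1.0, +1.0])      # initial emotions v(s), one per situation

def environment(a, s):
    # Hypothetical behavioural environment: action 0 leads to situation 1
    # (undesirable), action 1 leads to situation 2 (desirable).
    return 1 if a == 0 else 2

s = 0
for _ in range(20):
    a = int(np.argmax(W[:, s]))           # in situation s act a (greedy choice)
    s_next = environment(a, s)            # receive a consequence situation s'
    v = genome[s_next]                    # compute emotion of being in s'
    W[a, s] += v                          # update crossbar memory w'(a,s) = w(a,s) + v(s')
    s = s_next

print(W)  # the agent comes to prefer the action leading to the desirable situation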
Several learning algorithms aim at discovering better representations of the inputs provided during training. Classic examples include principal component analysis
and cluster analysis. Feature learning algorithms, also called
representation learning algorithms, often attempt to preserve the
information in their input but also transform it in a way that makes it
useful, often as a pre-processing step before performing classification
or predictions. This technique allows reconstruction of the inputs
coming from the unknown data-generating distribution, while not being
necessarily faithful to configurations that are implausible under that
distribution. This replaces manual feature engineering, and allows a machine to both learn the features and use them to perform a specific task.
Manifold learning algorithms attempt to do so under the constraint that the learned representation is low-dimensional. Sparse coding
algorithms attempt to do so under the constraint that the learned
representation is sparse, meaning that the mathematical model has many
zeros. Multilinear subspace learning algorithms aim to learn low-dimensional representations directly from tensor representations for multidimensional data, without reshaping them into higher-dimensional vectors. Deep learning
algorithms discover multiple levels of representation, or a hierarchy
of features, with higher-level, more abstract features defined in terms
of (or generating) lower-level features. It has been argued that an
intelligent machine learns a representation that disentangles the
underlying factors of variation that explain the observed data.
Feature learning is motivated by the fact that machine learning
tasks such as classification often require input that is mathematically
and computationally convenient to process. However, for real-world data such as images, video, and sensory data, attempts to define specific features algorithmically have not been successful. An alternative is to discover such features or representations through examination, without relying on explicit algorithms.
Sparse dictionary learning is a feature learning method where a training example is represented as a linear combination of basis functions and assumed to be a sparse matrix. The method is strongly NP-hard and difficult to solve approximately. A popular heuristic method for sparse dictionary learning is the k-SVD
algorithm. Sparse dictionary learning has been applied in several
contexts. In classification, the problem is to determine the class to
which a previously unseen training example belongs. For a dictionary
where each class has already been built, a new training example is
associated with the class that is best sparsely represented by the
corresponding dictionary. Sparse dictionary learning has also been
applied in image denoising. The key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot.
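A short sketch using scikit-learn's MiniBatchDictionaryLearning (which solves the same kind of sparse coding problem, though it is not the k-SVD algorithm itself): a dictionary is learned from toy signals, and each signal is then encoded with at most a few non-zero coefficients.

import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))            # 300 toy signals of dimension 20

dictionary = MiniBatchDictionaryLearning(n_components=15,
                                         transform_algorithm="omp",
                                         transform_n_nonzero_coefs=3,
                                         random_state=0).fit(X)
codes = dictionary.transform(X)           # sparse codes, at most 3 nonzeros each
print("average nonzeros per sample:", (codes != 0).sum(axis=1).mean())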
In data mining,
anomaly detection, also known as outlier detection, is the
identification of rare items, events or observations that raise
suspicions by differing significantly from the majority of the data. Typically, the anomalous items represent an issue such as bank fraud, a structural defect, medical problems or errors in a text. Anomalies are referred to as outliers, novelties, noise, deviations and exceptions.
In particular, in the context of abuse and network intrusion
detection, the interesting objects are often not rare, but unexpected
bursts of inactivity. This pattern does not adhere to the common
statistical definition of an outlier as a rare object. Many outlier
detection methods (in particular, unsupervised algorithms) will fail on
such data unless aggregated appropriately. Instead, a cluster analysis
algorithm may be able to detect the micro-clusters formed by these
patterns.
Three broad categories of anomaly detection techniques exist. Unsupervised anomaly detection techniques detect anomalies in an
unlabelled test data set under the assumption that the majority of the
instances in the data set are normal, by looking for instances that seem
to fit the least to the remainder of the data set. Supervised anomaly
detection techniques require a data set that has been labelled as
"normal" and "abnormal" and involves training a classifier (the key
difference from many other statistical classification problems is the
inherently unbalanced nature of outlier detection). Semi-supervised
anomaly detection techniques construct a model representing normal
behaviour from a given normal training data set and then test the
likelihood of a test instance being generated by the model.
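As a concrete example of the unsupervised case, the sketch below uses an isolation forest from scikit-learn (one common technique, chosen here only for illustration) to flag the points that fit the bulk of the data least well, without any labels.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(500, 2))          # the majority of instances
outliers = rng.uniform(-6, 6, size=(10, 2))       # a few anomalous instances
X = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.02, random_state=0).fit(X)
labels = detector.predict(X)                      # +1 = normal, -1 = anomaly
print("flagged as anomalies:", int((labels == -1).sum()))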
Robot learning
Robot learning is inspired by a multitude of machine learning methods, starting from supervised learning, reinforcement learning, and finally meta-learning (e.g. MAML).
Association rule learning is a rule-based machine learning
method for discovering relationships between variables in large
databases. It is intended to identify strong rules discovered in
databases using some measure of "interestingness".
Rule-based machine learning is a general term for any machine
learning method that identifies, learns, or evolves "rules" to store,
manipulate or apply knowledge. The defining characteristic of a
rule-based machine learning algorithm is the identification and
utilisation of a set of relational rules that collectively represent the
knowledge captured by the system. This is in contrast to other machine
learning algorithms that commonly identify a singular model that can be
universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems.
Based on the concept of strong rules, Rakesh Agrawal, Tomasz Imieliński
and Arun Swami introduced association rules for discovering
regularities between products in large-scale transaction data recorded
by point-of-sale (POS) systems in supermarkets. For example, the rule {onions, potatoes} ⇒ {burger} found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such information can be used as the basis for decisions
about marketing activities such as promotional pricing or product placements. In addition to market basket analysis, association rules are employed today in application areas including Web usage mining, intrusion detection, continuous production, and bioinformatics. In contrast with sequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions.
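The two classic "interestingness" measures, support and confidence, can be computed directly from a toy set of transactions; the basket contents below are invented to match the onions-and-potatoes example.

transactions = [
    {"onions", "potatoes", "burger"},
    {"onions", "potatoes", "burger", "beer"},
    {"onions", "potatoes"},
    {"milk", "bread"},
    {"potatoes", "burger"},
]
antecedent, consequent = {"onions", "potatoes"}, {"burger"}

def support(itemset):
    # Fraction of transactions containing every item in the itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

rule_support = support(antecedent | consequent)   # P(onions, potatoes, burger)
confidence = rule_support / support(antecedent)   # P(burger | onions, potatoes)
print(f"support = {rule_support:.2f}, confidence = {confidence:.2f}")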
Inductive logic programming (ILP) is an approach to rule learning using logic programming
as a uniform representation for input examples, background knowledge,
and hypotheses. Given an encoding of the known background knowledge and a
set of examples represented as a logical database of facts, an ILP
system will derive a hypothesized logic program that entails all positive and no negative examples. Inductive programming
is a related field that considers any kind of programming language for
representing hypotheses (and not only logic programming), such as functional programs.
Inductive logic programming is particularly useful in bioinformatics and natural language processing. Gordon Plotkin and Ehud Shapiro laid the initial theoretical foundation for inductive machine learning in a logical setting. Shapiro built their first implementation (Model Inference System) in 1981: a Prolog program that inductively inferred logic programs from positive and negative examples. The term inductive here refers to philosophical induction, suggesting a theory to explain observed facts, rather than mathematical induction, proving a property for all members of a well-ordered set.
Models
A machine learning model is a type of mathematical model
that, once "trained" on a given dataset, can be used to make
predictions or classifications on new data. During training, a learning
algorithm iteratively adjusts the model's internal parameters to
minimise errors in its predictions. By extension, the term "model" can refer to several levels of
specificity, from a general class of models and their associated
learning algorithms to a fully trained model with all its internal
parameters tuned.
Various types of models have been used and researched for machine learning systems; picking the best model for a task is called model selection.
An artificial neural network is an interconnected group of nodes, akin to the vast network of neurons in a brain. Here, each circular node represents an artificial neuron and an arrow represents a connection from the output of one artificial neuron to the input of another.
Artificial neural networks (ANNs), or connectionist systems, are computing systems vaguely inspired by the biological neural networks that constitute animal brains.
Such systems "learn" to perform tasks by considering examples,
generally without being programmed with any task-specific rules.
An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model the neurons in a biological brain. Each connection, like the synapses
in a biological brain, can transmit information, a "signal", from one
artificial neuron to another. An artificial neuron that receives a
signal can process it and then signal additional artificial neurons
connected to it. In common ANN implementations, the signal at a
connection between artificial neurons is a real number,
and the output of each artificial neuron is computed by some non-linear
function of the sum of its inputs. The connections between artificial
neurons are called "edges". Artificial neurons and edges typically have a
weight
that adjusts as learning proceeds. The weight increases or decreases
the strength of the signal at a connection. Artificial neurons may have a
threshold such that the signal is only sent if the aggregate signal
crosses that threshold. Typically, artificial neurons are aggregated
into layers. Different layers may perform different kinds of
transformations on their inputs. Signals travel from the first layer
(the input layer) to the last layer (the output layer), possibly after
traversing the layers multiple times.
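A minimal NumPy sketch of such a network: one hidden layer of artificial neurons, each applying a non-linear function (here a sigmoid) to a weighted sum of its inputs, trained on the XOR problem by backpropagation. The layer sizes, learning rate, and iteration count are arbitrary choices for the example.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR targets

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)          # input -> hidden edges
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)          # hidden -> output edges
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                             # hidden activations
    out = sigmoid(h @ W2 + b2)                           # output activation
    # Backpropagate the squared error and adjust the connection weights.
    d_out = (out - y) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= 1.0 * h.T @ d_out; b2 -= 1.0 * d_out.sum(axis=0)
    W1 -= 1.0 * X.T @ d_h;   b1 -= 1.0 * d_h.sum(axis=0)

print(out.round(2).ravel())   # typically close to [0, 1, 1, 0] after training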
Deep learning
consists of multiple hidden layers in an artificial neural network.
This approach tries to model the way the human brain processes light and
sound into vision and hearing. Some successful applications of deep
learning are computer vision and speech recognition.
A decision tree showing survival probability of passengers on the Titanic
Decision tree learning uses a decision tree as a predictive model
to go from observations about an item (represented in the branches) to
conclusions about the item's target value (represented in the leaves).
It is one of the predictive modelling approaches used in statistics,
data mining, and machine learning. Tree models where the target variable
can take a discrete set of values are called classification trees; in
these tree structures, leaves represent class labels, and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers)
are called regression trees. In decision analysis, a decision tree can
be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision-making.
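As a brief illustration, the sketch below (assuming scikit-learn; the dataset and depth limit are arbitrary example choices) fits a shallow classification tree and prints the learned branches and leaf class labels:
```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
# branches test feature values; leaves hold the predicted class labels
clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(clf))   # prints the learned if-then structure of the tree
```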
Random forest regression
Random forest regression (RFR) falls under the umbrella of decision tree-based models.
RFR is an ensemble learning method that builds multiple decision trees
and averages their predictions to improve accuracy and to avoid
overfitting. To build its decision trees, RFR uses bootstrapped sampling: each tree is trained on a random sample drawn with replacement from the training set. This randomisation helps the model avoid biased predictions and achieve higher accuracy. RFR grows its decision trees independently and can handle single-output as well as multi-output regression tasks, which makes it suitable for a wide range of applications.
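A minimal sketch of random forest regression using scikit-learn follows; the synthetic dataset and the number of trees are illustrative assumptions rather than fixed properties of the method. Each tree is fit on a bootstrap sample and the forest averages their predictions:
```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)
# each tree sees a bootstrap sample of the training data; predictions are averaged
model = RandomForestRegressor(n_estimators=100, bootstrap=True, random_state=0).fit(X, y)
print(model.predict(X[:3]))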
Support-vector machines (SVMs), also known as support-vector networks, are a set of related supervised learning
methods used for classification and regression. Given a set of training
examples, each marked as belonging to one of two categories, an SVM
training algorithm builds a model that predicts which of the two categories a new example falls into, making the SVM a non-probabilistic, binary, linear classifier, although methods such as Platt scaling exist to use SVMs in a probabilistic classification setting. In addition
to performing linear classification, SVMs can efficiently perform a
non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces.
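The following sketch (assuming scikit-learn; the concentric-circles dataset is an illustrative choice) shows an SVM with an RBF kernel separating data that no straight line could:
```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(noise=0.1, factor=0.4, random_state=0)   # not linearly separable
clf = SVC(kernel="rbf").fit(X, y)   # the kernel trick yields a non-linear boundary
print(clf.score(X, y))
```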
Regression analysis encompasses a large variety of statistical
methods to estimate the relationship between input variables and their associated outputs. Its most common form is linear regression, where a single line is drawn to best fit the given data according to a mathematical criterion such as ordinary least squares. The latter is often extended by regularisation methods to mitigate overfitting and bias, as in ridge regression. When dealing with non-linear problems, go-to models include polynomial regression (for example, used for trendline fitting in Microsoft Excel), logistic regression (often used in statistical classification) and kernel regression, which introduces non-linearity by taking advantage of the kernel trick to implicitly map input variables into a higher-dimensional space.
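For illustration, the sketch below (assuming scikit-learn and NumPy; the sine-shaped data and the degree-5 polynomial are arbitrary examples) fits ordinary least squares and a regularised polynomial model to the same data:
```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(x).ravel() + rng.normal(scale=0.2, size=100)

ols = LinearRegression().fit(x, y)   # ordinary least squares: one straight line
# polynomial features add non-linearity; the ridge penalty mitigates overfitting
poly_ridge = make_pipeline(PolynomialFeatures(degree=5), Ridge(alpha=1.0)).fit(x, y)
```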
Multivariate linear regression
extends the concept of linear regression to handle multiple dependent
variables simultaneously. This approach estimates the relationships
between a set of input variables and several output variables by fitting
a multidimensional
linear model. It is particularly useful in scenarios where outputs are
interdependent or share underlying patterns, such as predicting multiple
economic indicators or reconstructing images, which are inherently multi-dimensional.
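A minimal sketch of the multi-output case (assuming scikit-learn and synthetic data) fits a single linear model jointly to two output variables:
```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                        # input variables
B = rng.normal(size=(3, 2))                          # true coefficients for two outputs
Y = X @ B + rng.normal(scale=0.1, size=(100, 2))     # two related output variables
model = LinearRegression().fit(X, Y)                 # one multidimensional linear model
print(model.coef_.shape)                             # (2, 3): one coefficient row per output
```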
A
simple Bayesian network. Rain influences whether the sprinkler is
activated, and both rain and the sprinkler influence whether the grass
is wet.
A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph
(DAG). For example, a Bayesian network could represent the
probabilistic relationships between diseases and symptoms. Given
symptoms, the network can be used to compute the probabilities of the
presence of various diseases. Efficient algorithms exist that perform inference and learning. Bayesian networks that model sequences of variables, like speech signals or protein sequences, are called dynamic Bayesian networks. Generalisations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.
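The sprinkler network in the caption above can be queried by enumerating the joint distribution; the sketch below uses illustrative conditional probabilities (the specific numbers are assumptions) to compute the probability of rain given that the grass is wet:
```python
# Joint factorisation from the DAG: P(R, S, W) = P(R) * P(S | R) * P(W | S, R)
P_R = {True: 0.2, False: 0.8}                     # illustrative prior on rain
P_S_given_R = {True: 0.01, False: 0.4}            # rain makes the sprinkler unlikely
P_W_given_SR = {(True, True): 0.99, (True, False): 0.9,
                (False, True): 0.8, (False, False): 0.0}

def joint(r, s, w):
    p_s = P_S_given_R[r] if s else 1 - P_S_given_R[r]
    p_w = P_W_given_SR[(s, r)] if w else 1 - P_W_given_SR[(s, r)]
    return P_R[r] * p_s * p_w

# P(Rain = True | GrassWet = True), summing out the sprinkler variable
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(num / den)
```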
An example of Gaussian Process Regression (prediction) compared with other regression models
A Gaussian process is a stochastic process in which every finite collection of the random variables in the process has a multivariate normal distribution, and it relies on a pre-defined covariance function, or kernel, that models how pairs of points relate to each other depending on their locations.
Given a set of observed points, or input–output examples, the
distribution of the (unobserved) output of a new point as a function of
its input data can be directly computed by looking at the observed
points and the covariances between those points and the new, unobserved
point.
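A compact sketch of this computation in NumPy is shown below; the squared-exponential kernel, the observed points, and the small noise jitter are illustrative assumptions:
```python
import numpy as np

def rbf(a, b, length=1.0):
    # covariance between pairs of points depends only on how far apart they are
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

X_obs = np.array([-2.0, 0.0, 1.5])           # observed inputs
y_obs = np.sin(X_obs)                         # observed outputs
X_new = np.array([0.5])                       # new, unobserved point

K = rbf(X_obs, X_obs) + 1e-8 * np.eye(3)     # covariances among observed points
k_star = rbf(X_new, X_obs)                    # covariances with the new point
mean = k_star @ np.linalg.solve(K, y_obs)     # posterior mean at the new point
var = rbf(X_new, X_new) - k_star @ np.linalg.solve(K, k_star.T)
print(mean, var)
```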
A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes
in the hope of finding good solutions to a given problem. In machine
learning, genetic algorithms were used in the 1980s and 1990s. Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms.
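A toy sketch of a genetic algorithm is given below; maximising the number of 1s in a bit string, the population size, and the mutation rate are arbitrary example choices:
```python
import random

def fitness(genotype):                         # toy objective: count the 1s
    return sum(genotype)

def crossover(a, b):                           # combine two parent genotypes
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genotype, rate=0.05):               # randomly flip bits
    return [1 - g if random.random() < rate else g for g in genotype]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(50):                            # selection, crossover, mutation per generation
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(30)]
print(max(fitness(g) for g in population))
```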
The theory of belief functions, also referred to as evidence theory
or Dempster–Shafer theory, is a general framework for reasoning with
uncertainty, with understood connections to other frameworks such as probability, possibility and imprecise probability theories.
These theoretical frameworks can be thought of as a kind of learner and have analogous rules for how evidence is combined (e.g., Dempster's rule of combination), much as a pmf-based Bayesian approach combines probabilities. However, belief functions carry many caveats compared with Bayesian approaches when it comes to incorporating ignorance and quantifying uncertainty.
Belief-function approaches implemented within the machine learning domain typically fuse various ensemble methods to better handle the learner's decision boundary, low sample sizes, and ambiguous classes, issues that standard machine learning approaches tend to have difficulty resolving. However, the computational complexity of these algorithms depends on the number of propositions (classes), and can lead to much higher computation time compared with other machine learning approaches.
Rule-based machine learning (RBML) is a branch of machine learning
that automatically discovers and learns 'rules' from data. It provides
interpretable models, making it useful for decision-making in fields
like healthcare, fraud detection, and cybersecurity. Key RBML techniques include learning classifier systems, association rule learning, artificial immune systems, and other similar models. These methods extract patterns from data and evolve rules over time.
Training models
Typically, machine learning models require a high quantity of
reliable data to perform accurate predictions. When training a machine
learning model, machine learning engineers need to target and collect a
large and representative sample of data. Data from the training set can be as varied as a corpus of text, a collection of images, sensor data, and data collected from individual users of a service. Overfitting
is something to watch out for when training a machine learning model.
Trained models derived from biased or non-evaluated data can result in
skewed or undesired predictions. Biased models may result in detrimental
outcomes, thereby furthering the negative impacts on society or
objectives. Algorithmic bias
is a potential result of data not being fully prepared for training.
Machine learning ethics is becoming a field of study and, notably,
becoming integrated within machine learning engineering teams.
Federated learning is an adapted form of distributed artificial intelligence
to train machine learning models that decentralises the training
process, allowing for users' privacy to be maintained by not needing to
send their data to a centralised server. This also increases efficiency
by decentralising the training process to many devices. For example, Gboard
uses federated machine learning to train search query prediction models
on users' mobile phones without having to send individual searches back
to Google.
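The sketch below illustrates the idea with a simple federated-averaging loop (the linear model, learning rate, and five simulated devices are assumptions chosen for illustration): each device computes an update on its own data, and only model parameters, never raw data, are sent back for averaging:
```python
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=20):
    # each device fits a linear model on its own data; raw data never leaves the device
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
devices = []
for _ in range(5):                              # five devices with private local datasets
    X = rng.normal(size=(50, 2))
    devices.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

global_w = np.zeros(2)
for _ in range(10):                             # each round: send model out, average the updates
    updates = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(updates, axis=0)
print(global_w)
```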
Applications
There are many applications for machine learning, including:
In 2006, the media-services provider Netflix held the first "Netflix Prize"
competition to find a program to better predict user preferences and
improve the accuracy of its existing Cinematch movie recommendation
algorithm by at least 10%. A joint team made up of researchers from AT&T Labs-Research in collaboration with the teams Big Chaos and Pragmatic Theory built an ensemble model to win the Grand Prize in 2009 for $1 million. Shortly after the prize was awarded, Netflix realised that viewers'
ratings were not the best indicators of their viewing patterns
("everything is a recommendation") and they changed their recommendation
engine accordingly. In 2010, an article in The Wall Street Journal noted the use of machine learning by Rebellion Research to predict the 2008 financial crisis. In 2012, co-founder of Sun Microsystems, Vinod Khosla,
predicted that 80% of medical doctors' jobs would be lost in the next
two decades to automated machine learning medical diagnostic software. In 2014, it was reported that a machine learning algorithm had been
applied in the field of art history to study fine art paintings and that
it may have revealed previously unrecognised influences among artists. In 2019 Springer Nature published the first research book created using machine learning. In 2020, machine learning technology was used to help make diagnoses and aid researchers in developing a cure for COVID-19. Machine learning was recently applied to predict the pro-environmental behaviour of travellers. Recently, machine learning technology was also applied to optimise
a smartphone's performance and thermal behaviour based on the user's
interaction with the phone. When applied correctly, machine learning algorithms (MLAs) can utilise a
wide range of company characteristics to predict stock returns without overfitting.
By employing effective feature engineering and combining forecasts,
MLAs can generate results that far surpass those obtained from basic
linear techniques like OLS.
Recent advancements in machine learning have extended into the
field of quantum chemistry, where novel algorithms now enable the
prediction of solvent effects on chemical reactions, thereby offering
new tools for chemists to tailor experimental conditions for optimal
outcomes.
Machine learning is becoming a useful tool to investigate and
predict evacuation decision-making in large-scale and small-scale
disasters. Different solutions have been tested to predict if and when
householders decide to evacuate during wildfires and hurricanes. Other applications have focused on pre-evacuation decisions in building fires.
Limitations
Although machine learning has been transformative in some fields,
machine-learning programs often fail to deliver expected results. Reasons for this are numerous: lack of (suitable) data, lack of access
to the data, data bias, privacy problems, badly chosen tasks and
algorithms, wrong tools and people, lack of resources, and evaluation
problems.
The "black box theory"
poses yet another significant challenge. "Black box" refers to a
situation where the algorithm or the process of producing an output is
entirely opaque, meaning that even the coders of the algorithm cannot
audit the pattern that the machine extracted from the data. The House of Lords Select Committee claimed that such an "intelligence system" that could have a "substantial impact on an individual's life" would not be considered acceptable unless it provided "a full and satisfactory explanation for the decisions" it makes.
In 2018, a self-driving car from Uber failed to detect a pedestrian, who was killed after a collision. Attempts to use machine learning in healthcare with the IBM Watson system failed to deliver even after years of time and billions of dollars invested. Microsoft's Bing Chat chatbot has been reported to produce hostile and offensive responses to its users.
Machine learning has been used as a strategy to update the evidence related to a systematic review and to manage the increased reviewer burden related to the growth of biomedical literature. While it has improved with training sets, it has not yet developed sufficiently to reduce the workload burden without limiting the necessary sensitivity for the findings of the research itself.
Explainable AI (XAI), or Interpretable AI, or Explainable Machine
Learning (XML), is artificial intelligence (AI) in which humans can
understand the decisions or predictions made by the AI. It contrasts with the "black box" concept in machine learning where
even its designers cannot explain why an AI arrived at a specific
decision. By refining the mental models of users of AI-powered systems and
dismantling their misconceptions, XAI promises to help users perform
more effectively. XAI may be an implementation of the social right to
explanation.
The blue line could be an example of overfitting a linear function due to random noise.
Settling on a bad, overly complex theory gerrymandered to fit all the
past training data is known as overfitting. Many systems attempt to
reduce overfitting by rewarding a theory in accordance with how well it
fits the data but penalising the theory in accordance with how complex
the theory is.
Other limitations and vulnerabilities
Learners can also fail by "learning the wrong lesson". A
toy example is that an image classifier trained only on pictures of
brown horses and black cats might conclude that all brown patches are
likely to be horses. A real-world example is that, unlike humans, current image classifiers
often do not primarily make judgments from the spatial relationship
between components of the picture, and they learn relationships between
pixels that humans are oblivious to, but that still correlate with
images of certain types of real objects. Modifying these patterns on a
legitimate image can result in "adversarial" images that the system
misclassifies.
Adversarial vulnerabilities can also arise in nonlinear systems, or from non-pattern perturbations. For some systems, it is possible to
change the output by only changing a single adversarially chosen pixel. Machine learning models are often vulnerable to manipulation or evasion via adversarial machine learning.
Researchers have demonstrated how backdoors can be placed undetectably into machine learning classifiers (e.g., models that sort posts into the categories "spam" and "not spam") that are often developed or trained by third parties. Such parties can change the classification of any input, even in cases where a form of data/software transparency is provided, possibly including white-box access.
Model assessments
Classification of machine learning models can be validated by accuracy estimation techniques like the holdout
method, which splits the data into a training and test set
(conventionally 2/3 training set and 1/3 test set designation) and
evaluates the performance of the trained model on the test set. In
comparison, the K-fold-cross-validation
method randomly partitions the data into K subsets and then K
experiments are performed each considering 1 subset for evaluation and
the remaining K-1 subsets for training the model. In addition to the
holdout and cross-validation methods, bootstrap, which samples n instances with replacement from the dataset, can be used to assess model accuracy.
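The holdout and K-fold procedures can be illustrated as follows (assuming scikit-learn; the dataset and classifier are arbitrary example choices):
```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression())

# holdout: roughly 2/3 of the data for training, 1/3 held out for testing
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)
print(model.fit(X_tr, y_tr).score(X_te, y_te))

# K-fold cross-validation: K experiments, each evaluating on one held-out subset
print(cross_val_score(model, X, y, cv=5).mean())
```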
In addition to overall accuracy, investigators frequently report sensitivity and specificity, meaning true positive rate (TPR) and true negative rate (TNR), respectively. Similarly, investigators sometimes report the false positive rate (FPR) as well as the false negative rate (FNR). However, these rates are ratios that fail to reveal their numerators and denominators. Receiver operating characteristic
(ROC), along with the accompanying Area Under the ROC Curve (AUC),
offer additional tools for classification model assessment. Higher AUC
is associated with a better performing model.
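These quantities can be computed as in the sketch below (assuming scikit-learn and NumPy; the labels, scores, and 0.5 threshold are illustrative):
```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.6])   # model scores
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)            # true positive rate (TPR)
specificity = tn / (tn + fp)            # true negative rate (TNR)
auc = roc_auc_score(y_true, y_score)    # area under the ROC curve
print(sensitivity, specificity, auc)
```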
Different machine learning approaches can suffer from different data
biases. A machine learning system trained specifically on current
customers may not be able to predict the needs of new customer groups
that are not represented in the training data. When trained on
human-made data, machine learning is likely to pick up the
constitutional and unconscious biases already present in society.
Systems that are trained on datasets collected with biases may
exhibit these biases upon use (algorithmic bias), thus digitising
cultural prejudices. For example, in 1988, the UK's Commission for Racial Equality found that St. George's Medical School
had been using a computer program trained on the decisions of previous admissions staff, and this program had denied nearly 60 candidates who were found either to be women or to have non-European-sounding names. Using job hiring data from a firm with racist hiring policies may lead
to a machine learning system duplicating the bias by scoring job
applicants by similarity to previous successful applicants. Another example includes predictive policing company Geolitica's
predictive algorithm that resulted in "disproportionately high levels
of over-policing in low-income and minority communities" after being
trained with historical crime data.
While responsible collection of data
and documentation of algorithmic rules used by a system is considered a
critical part of machine learning, some researchers blame the lack of
participation and representation of minority populations in the field of
AI for machine learning's vulnerability to biases. In fact, according to research carried out by the Computing Research
Association in 2021, "female faculty make up just 16.1%" of all faculty
members who focus on AI among several universities around the world. Furthermore, among the group of "new U.S. resident AI PhD graduates,"
45% identified as white, 22.4% as Asian, 3.2% as Hispanic, and 2.4% as
African American, which further demonstrates a lack of diversity in the
field of AI.
Language models learned from data have been shown to contain human-like biases. Because human languages contain biases, machines trained on language corpora will necessarily also learn these biases. In 2016, Microsoft tested Tay, a chatbot that learned from Twitter, and it quickly picked up racist and sexist language.
In an experiment carried out by ProPublica, an investigative journalism
organisation, a machine learning algorithm's insight into the
recidivism rates among prisoners falsely flagged "black defendants high
risk twice as often as white defendants". In 2015, Google Photos tagged a photo of two black people as gorillas, which caused controversy. The gorilla label was subsequently removed, and as of 2023, the service still could not recognise gorillas. Similar issues with recognising non-white people have been found in many other systems.
Because of such challenges, the effective use of machine learning may take longer to be adopted in other domains. Concern for fairness
in machine learning, that is, reducing bias in machine learning and
propelling its use for human good, is increasingly expressed by
artificial intelligence scientists, including Fei-Fei Li,
who said that "[t]here's nothing artificial about AI. It's inspired by
people, it's created by people, and—most importantly—it impacts people.
It is a powerful tool we are only just beginning to understand, and that
is a profound responsibility."
Financial incentives
There are concerns among health care professionals that these systems
might not be designed in the public's interest but as income-generating
machines. This is especially true in the United States, where there is a
long-standing ethical dilemma of improving health care, but also
increasing profits. For example, the algorithms could be designed to
provide patients with unnecessary tests or medication in which the
algorithm's proprietary owners hold stakes. There is potential for
machine learning in health care to provide professionals with an
additional tool to diagnose, medicate, and plan recovery paths for
patients, but this requires these biases to be mitigated.
Hardware
Since the 2010s, advances in both machine learning algorithms and
computer hardware have led to more efficient methods for training deep neural networks (a particular narrow subdomain of machine learning) that contain many layers of nonlinear hidden units. By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI. OpenAI estimated the hardware compute used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017), and found a 300,000-fold increase in the amount of compute required, with a doubling-time trendline of 3.4 months.
Tensor Processing Units (TPUs)
Tensor Processing Units (TPUs) are specialised hardware accelerators developed by Google specifically for machine learning workloads. Unlike general-purpose GPUs and FPGAs,
TPUs are optimised for tensor computations, making them particularly
efficient for deep learning tasks such as training and inference. They
are widely used in Google Cloud AI services and large-scale machine
learning models like Google's DeepMind AlphaFold and large language
models. TPUs leverage matrix multiplication units and high-bandwidth memory to accelerate computations while maintaining energy efficiency. Since their introduction in 2016, TPUs have become a key component of
AI infrastructure, especially in cloud-based environments.
Neuromorphic computing
Neuromorphic computing
refers to a class of computing systems designed to emulate the
structure and functionality of biological neural networks. These systems
may be implemented through software-based simulations on conventional
hardware or through specialised hardware architectures.
Physical neural networks
A physical neural network
is a specific type of neuromorphic hardware that relies on electrically
adjustable materials, such as memristors, to emulate the function of neural synapses.
The term "physical neural network" highlights the use of physical
hardware for computation, as opposed to software-based implementations.
It broadly refers to artificial neural networks that use materials with
adjustable resistance to replicate neural synapses.
Embedded machine learning
Embedded machine learning is a sub-field of machine learning where models are deployed on embedded systems with limited computing resources, such as wearable computers, edge devices and microcontrollers. Running models directly on these devices eliminates the need to
transfer and store data on cloud servers for further processing, thereby
reducing the risk of data breaches, privacy leaks and theft of
intellectual property, personal data and business secrets. Embedded
machine learning can be achieved through various techniques, such as hardware acceleration, approximate computing, and model optimisation. Common optimisation techniques include pruning, quantisation, knowledge distillation, low-rank factorisation, network architecture search, and parameter sharing.
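As a rough illustration of one such optimisation, the sketch below performs a naive 8-bit post-training quantisation of a weight vector using NumPy; the symmetric scaling scheme shown is a simplified assumption rather than a production technique:
```python
import numpy as np

def quantise(weights, bits=8):
    # map float weights to small integers so the model fits on a constrained device
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax
    return np.round(weights / scale).astype(np.int8), scale

def dequantise(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, scale = quantise(w)
print(np.max(np.abs(w - dequantise(q, scale))))   # small reconstruction error
```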