The Advent of Artificial Intelligence and the Technological Singularity
Original link: http://www.iawwai.com/FractalAI.html
It
has been called the Holy Grail of Modern Times. It is a great
scientific discovery waiting to be revealed and also a practical
invention with momentous and far reaching consequences. It is the
emergence of a final understanding of the workings of the brain and the
nature of the human mind.
This in turn is the key to the creation of true Artificial Intelligence (AI) and the instigation of the much talked about Technological Singularity. It is the beginning of an era of extremely rapid and unprecedented scientific advance and technological progress, which will transform the world beyond recognition within the lifetimes of most people alive today.
Humankind
has come a long way on its historic journey of understanding the world
and the Universe. But there is a huge, glaring and very significant gap
in our knowledge waiting to be filled. The people of this world are
still waiting to learn how the brain and mind work. A final
theory of brain and mind will be one of the great scientific discoveries of all
time, if not the greatest. And in early 2014, the scientist
Stephen Hawking, together with some other top minds, declared that 'Success in creating Artificial Intelligence would be the biggest event in human history'.
The
puzzle of how the brain works and the prize of creating AI, are perhaps
some of the hottest topics of contemporary times. Governments are
pouring billions of dollars, euros, renminbi and yen into brain research
and the development of artificial intelligence. And this is matched and
even surpassed by corporate spending in the same areas. Hardly a month
seems to go by without some announcement of another major acquisition of
some AI startup by a tech behemoth such as Google for hundreds of
millions of dollars, or the hiring of some big name AI researcher by a
big tech company. The goal of creating AI and figuring out how the brain
works is perhaps the most important and certainly one of the most
exciting ventures of the age. A lot of resources, talent and attention
are being directed towards this end.
The Coming Breakthrough
But
there seems to be a conceptual blockage. Everyone knows what the great
goal is, but there is little idea as to how to get there. The goal of
creating true AI and working out how the brain works has turned out to
be a fiendishly difficult and profoundly intractable problem. Many
leading researchers, when asked when this final understanding of brain and
mind will come, will quite often give an estimate of 50 to 100 years.
And the same for the creation of true AI. Noam Chomsky, the world's most
cited academic and someone who made some early important contributions
to AI, said in 2013 that a 'theory of what makes us smart is aeons away'. David Deutsch, a respected Oxford physicist and popular science writer, wrote in 2012 that 'No brain on Earth is yet close to knowing what brains do'. And this is a sentiment which is shared by many experts. Yet Deutsch concedes that it is 'plausible that just a single idea stands between us and the breakthrough, but it will have to be one of the best ideas ever'.
A similar idea was expressed by Rodney Brooks, former director of the
prestigious MIT (Massachusetts Institute of Technology) AI lab, who said
towards the end of the 1990s that there may emerge some 'organizational principle, concept or language that could revitalize mind science in the next century'.
In the Fractal Brain Theory this breakthrough 'single idea' and
revitalizing 'organizational principle, concept and language' are about
to be revealed. And they will emerge from the most unusual of
circumstances.
AI emerges from outside of any Academic, Government or Corporate Lab
John
McCarthy (1927-2011) is credited with first using the expression
Artificial Intelligence and made many pioneering contributions to the
field. He said in an interview that there was the intriguing possibility
that someone has already figured out how to create AI but ‘he hasn’t told us yet’.
John Horgan who is a popular science author and staff writer for
Scientific American magazine concluded from his numerous interviews with
specialists in the field that, ‘Some mind
scientists… prophesy the coming of a genius who will see patterns and
solutions that have eluded all his or her predecessors’.
And he quotes Harvard Psychologist Howard Gardner as saying that, ‘We
can’t anticipate the extraordinary mind because it always comes from a
funny place that puts things together in a funny kind of way.’ The
current emergence of a complete theory of brain and mind and its
revealing will make these statements seem uncannily prescient. For the
Fractal Brain Theory, apart from being a series of scientific
breakthroughs and a technological wonder, is also a remarkable life
story and a fascinating journey of scientific inquiry & self
discovery. The circumstances from which this exciting theory emerges
will at first seem most strange, but after a while will make perfect
sense. Because this complete and perhaps even final understanding of the
human brain and mind has come into being from completely outside of any
academic, government or corporate research lab. The story of the
Fractal Brain Theory is the tale of a lone mind, working outside of any
formal context or traditional institution. It has been a self-directed,
self-instructed and self-motivated endeavour. The brain theory has been
formulated in and will emerge from London, but it will appear as a bolt
from the blue, to revolutionize the worlds of
neuroscience and artificial intelligence.
The Science behind the Fractal Brain Theory
Behind
the Fractal Brain Theory are three fundamental, very powerful and
interrelated ideas, which are systematically applied towards the
understanding of the brain and mind. This in turn leads to three major
critical breakthroughs which make up the main body of the theory. The
three fundamental ideas behind the fractal brain theory are Symmetry,
Self Similarity and Recursivity. And the three major breakthroughs
comprise firstly a single unifying language for describing all the
myriad details and facets of the brain as well as the mind. Our second
breakthrough concept is a unifying structure deriving from our unifying
language, which allows us to see how everything related to brain and
mind comes together as a single integrated whole. Our third and most
surprising theoretical breakthrough is the idea that all the various
information processing of the brain and the many operations of the mind,
can be conceptualized as a single underlying unifying process and
captured in a single algorithm. Taken together these properties of the
fractal brain theory are set to revolutionize the worlds of theoretical
systems neuroscience and artificial intelligence. And so we’ll explain
these concepts and breakthroughs more clearly and in more detail.
The Symmetry, Self-Similarity and Recursivity theory of Brain and Mind
This
brain theory that is in the process of being revealed to the world may also
be given the longer title of 'The Symmetry, Self-Similarity and
Recursivity theory of Brain and Mind'. This is quite a mouthful to say,
and so it is a useful and convenient shorthand to refer to the theory as
the 'Fractal Brain Theory'. The word 'fractal' implies Symmetry,
Self-Similarity and Recursivity, so the title 'Fractal Brain Theory' is
an entirely appropriate as well as useful shorthand. We'll go through
each of these foundational concepts in turn in order to give a better
idea of the significance and power of the Fractal Brain Theory.
Symmetry
Symmetry
is such an amazingly powerful idea. In fact if the entire process of
science had to be summed up in a single word, then a good candidate for
this word would be ‘symmetry’. Science can be said to be the process of
discovering the patterns of nature and the Universe. But it is more than
that, because science is also the process of discovering the patterns
behind the patterns. That is, the meta-patterns and unifying patterns,
which show us how all the seemingly separate patterns are really
manifestations of the same underlying pattern. And so we have the same
problem in the brain, where we are confronted with a dizzying and myriad
array of facts and findings with no obvious and apparent way of seeing
any overarching pattern behind it all. So it makes perfect sense that
the idea of symmetry should be applicable. Indeed if symmetry is behind
the very process of science itself, then why should the search for a
scientific understanding of the brain be any other way? And so then the
problem becomes, how to apply this powerful concept towards that goal
and this is not at all obvious. The specific ways that the symmetries
behind the laws of physics are explored in science, don't translate in
any sort of direct or intuitive way to the study of the brain. The
symmetry of mathematical equations or of regular geometric forms, seems
far removed from the organic messiness and irregularities of biology and
brain. And at first glance and superficial inspection, the brain seems
so full of asymmetry and dissymmetry. So one of the problems that the
fractal brain theory solves is how to interpret the brain and mind,
using some of the most cutting edge findings in neuroscience and some
bridging ideas from mathematics, in order to see clearly the underlying
and unifying symmetries behind it all. Physicists believe that there is
an overarching 'supersymmetry' that unifies all the natural laws of the
Universe, though this idea is still in the process of being fully worked
out. By the same token, the Fractal Brain Theory is able to show that
likewise there is an overarching symmetry that is able to explain and
account for all the diverse phenomena of brain and mind. With this
underlying symmetry we are able to reduce all the vast complexities of
the brain and mind to a very elegant and compact description. And so
symmetry forms an important foundation of the brain theory.
Self-Similarity
The
idea of Self Similarity is synonymous with the idea of something being
'fractal', hence the name Fractal Brain Theory. An object that is
self-similar or fractal contains smaller copies of its overall form
within itself, repeated at many smaller scales. A useful way of
looking at self-similarity is to think of it as nested symmetry, where a
pattern repeatedly contains copies of itself within itself. A much used
example of self similarity is that of a tree, where the diverging
pattern of branchings coming off the main trunk is repeated in a similar
way in its branches and even in the veins of its leaves. So a tree can
be described as self similar and fractal. Fractal geometry, which was
discovered in the 1970s, has been called the geometry of nature.
Traditional geometry deals with straight lines, regular triangles,
squares, circles and the like. Fractal geometry seems far better suited
to describing complex natural forms such as mountains, clouds and
snowflakes; as well as organic structures such as plants, animals,
people and even entire cities. It is even suggested by leading
scientists that the entire universe may have fractal structuring. And so
quite appropriately the Fractal Brain theory is the application of the
idea of self-similarity in the context of understanding the natural
phenomenon of brain and mind. It is an approach which has been suggested
and tried before in the past few decades but which came up against
hurdles which at the time seemed insurmountable. And on superficial
inspection and with a limited understanding of the brain, then it is not
at all apparent that the brain can be understood as being fractal. But
with the benefit of recent empirical findings from neuroscience and a
novel way of interpreting the data, then the Fractal Brain Theory is
able to show how indeed the brain and mind can be conceptualised as
being perfectly fractal and completely self-similar. And this sets up a
lot of the conceptual groundwork for the brain theory and gives the
theory its organizing principle.
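To make the idea of nested, repeated structure concrete, here is a minimal illustrative sketch in Python (not part of the theory itself) in which one branching rule is applied at every scale, so that each sub-branch is a smaller copy of the whole:

```python
def branch(depth, length, indent=""):
    """Apply the same two-way branching rule at every scale."""
    if depth == 0:
        return
    print(f"{indent}branch of length {length:.2f}")
    for _ in range(2):                     # each branch sprouts two sub-branches
        branch(depth - 1, length * 0.6, indent + "  ")

branch(depth=4, length=1.0)                # trunk, branches, twigs, leaf veins...
```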
Recursivity
Recursivity
really is a universal process and the process of life itself can be
considered as recursive. The process by which life comes into being,
starting from a fertilized egg, dividing into two, then recursively and
repeatedly dividing into 4, 8, 16 and so on, to give rise to all the
cells in your body, this is an example of a recursive process. And the
process of sexual reproduction, and the diverging and converging lines
of family trees, generation recursively following upon generation is
another example of recursion. Some thinkers even imagine the entire
Universe and everything that happens in it as one big recursive process,
so the idea of recursivity is pretty deep. Recursivity is a key concept
that underlies computer science and the workings of all computers. The
Fractal Brain theory shows that this phenomenon of recursivity is
fundamental for understanding how the brain and mind work.
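As a purely illustrative aside, the doubling pattern of cell division described above is easy to express as a recursive function; the function and numbers below are a toy example, not anything drawn from the theory:

```python
def cells_after(divisions):
    """Cell count after a given number of recursive divisions."""
    if divisions == 0:
        return 1                                # a single fertilized egg
    return 2 * cells_after(divisions - 1)       # every existing cell divides in two

print([cells_after(n) for n in range(6)])       # [1, 2, 4, 8, 16, 32]
```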
Three breakthroughs: A unifying language, unifying structure & unifying process
The
fractal brain theory is the systematic application of the
fundamental principles of symmetry, self-similarity and recursivity
towards the understanding of brain and mind. And this leads to three
major scientific breakthroughs, which we’ll elaborate in turn…
A Single Unifying Language
The
first of our breakthrough concepts has been anticipated. It is a way of
describing not just all the structures and processes of the physical
substrate of the brain but also all the various emergent structures and
processes of mind; using a single unifying language. So for instance the
1996 publication, ‘Fractals of Brain, Fractals of Mind: In search of a Symmetry Bond’,
described the existence of a ‘secret symmetry’, secret in the sense of
being at that point undiscovered, which would allow us to conceptualize
the brain and mind as a single continuum and describe it in the same
language. This is the 'symmetry bond' referred to in the book's title.
Professor of Psychology and commentator on all things AI, Gary Marcus,
described recently in 2014 how useful it would be to gain a unified
description of brain and mind, and how this could potentially
revolutionize the field. With the coming of the Fractal Brain Theory,
the ‘secret symmetry’ is secret no more. We have now exactly this
unifying language for describing all aspects of brain as well as mind.
It is also a descriptive language which is supported by a vast array of
empirical evidence, which suggests that it is not something ad hoc or
arbitrary but rather one which reflects fundamental truths about how the
brain and mind work. Indeed one of the strengths of the fractal brain
theory is that it does take into account and incorporates a vast array
of empirical facts and findings from neuroscience and psychology. It
uses the unifying language to describe in a common format, all this vast
diversity of information. This leads to the second major breakthrough
the brain theory enables.
A Single Unifying Structure
Intuitively
we know that there must be some sort of unity and integrated structure
behind the brain and mind. This is because we know that somehow, all the
various myriad aspects of our brains and minds must work together in a
unified and coordinated way to achieve our goals and objectives. We
know from our experience and introspection that this must be the case,
we have this personal sense of oneness and singular wholeness that gives
us the impression of self and identity. But it has been very
problematic for brain scientists and artificial intelligence researchers
to work out how exactly this is the case physiologically and how this
may be implemented. Neuroscience exists as an ocean of facts and
findings, with no obvious way to fit them all into a unified
understanding. In 1979, Francis Crick of DNA fame, wrote that in
relation to brain science, “what is conspicuously lacking is a broad framework of ideas in which to interpret these various concepts.” 35
years later, this unifying theoretical framework still seems to be
missing. Neuroscientist Henry Markram's much publicized and very well
funded billion euro brain simulation project can be seen as an attempt
to integrate all the knowledge of neuroscience which exists into some
sort of integrated whole. Here the aim is to merely bring all the
neuroscience together in order to program it into a big computer
simulation, but without any theoretical underpinning behind it
whatsoever. A leading artificial intelligence researcher named Ben
Goertzel is attempting to bring together a lot of existing partial
solutions and previous attempts at AI, but is facing an ‘integration
bottleneck’, without any clear way to make all the separate pieces fit
and work together sensibly.
In
contrast what the Fractal Brain Theory introduces is a very elegant way
of arranging all the various aspects of brain and mind, and fitting
them all together into a single top-down hierarchical classification
structure. This partly derives from having a single unifying
language with which to describe everything. By having a common
description for all the separate pieces of the puzzle, this is the
prerequisite for fitting all the pieces together into a single
structure. Furthermore this unified classification structure also
derives from what we know about hierarchical representations and
relationships in the brain as suggested by the actual neurophysiological
substrate and experimental findings. This gives us a very powerfully
integrated and all encompassing overview of brain organization and the
emergent structures of the mind which are grounded in the
neurophysiological substrate. It is an important step to fully
understanding the brain and the creation of true artificial
intelligence. After all, many of the biggest names in AI and theoretical
neuroscience stress the importance of hierarchical representations and
processes. What the fractal brain theory is able to show is that the
entirety of brain and mind may be conceptualized as a single tightly
integrated and all encompassing hierarchical structure.
The
single all encompassing structure of brain and mind in turn leads to
the third, final and most dramatic breakthrough which the fractal brain
theory delivers. Given our all encompassing unifying structure we may
then ask, is it possible to define a single overarching process over
that structure which captures all the separate processes happening
within it? Or put another way, if we can represent the entire brain and
mind as a single integrated data structure, then is it possible to
specify a single algorithm over that data structure, which captures the
functionality of all the partial algorithms of brain and mind? And the
answer is yes.
A Single Unifying Process
This
is the most surprising and perhaps even shocking property of the
fractal brain theory. Because it shows that there exists a stunning
simplicity behind the inscrutable and mysterious functioning of the
brain and mind. The Fractal Brain theory shows how a single unifying
recursive process is able to explain all the component sub-processes of
brain and mind. This has been anticipated to some extent by various
researchers in the mind and brain sciences. For instance Eric Horvitz
the head of AI research at Microsoft, has speculated that there may
exist a ‘deep theory’ of
mind but doesn’t offer any idea of what this might look like. Steve
Grand OBE who is a prominent British AI theorist and inventor, thinks
there may exist a ‘one sentence solution’ behind
how the brain works. And several prominent researchers such as Jeff
Hawkins, Ray Kurzweil and Andrew Ng believe that there may exist a
universal ‘cortical algorithm’ which captures the functionality of all
the various different areas of cerebral cortex which together with the
related underlying wiring comprises about 80% of the human brain.
So
therefore it is already suggested by leading researchers that there may
exist a single algorithm for explaining the workings of most of the
brain. But the Fractal Brain Theory goes a lot further. Because what is
behind the theory is a universal algorithm and unifying process that is
able to span not just the functioning of the cerebral cortex but also
that of all the other major auxiliary brain structures comprising the
hippocampus, striatum, cerebellum, thalamus and emotion centres
including the hypothalamus and amygdala. The fractal brain theory is able to
demonstrate how a single overarching process is able to account for and
explain the purpose and functioning of all these main structures of the
brain. Significant for mainstream ideas about brain functioning is that
the fractal brain theory shows that the cerebral cortex can't really be
understood without considering the other 'auxiliary' brain
structures. Therefore what we are talking about is a single algorithm
behind the functioning of the entire brain, the emergent mind and
intelligence itself.
Almost
unbelievably the theory goes even further than this! For not only does
the theory describe how all the functioning of the brain and mind can be
captured by a single algorithm, but also that this overarching process
extends to the process of how brains and bodies come into being, i.e.
neurogenesis and ontogenesis, and even describes the operation of the
DNA genetic computer guiding this developmental process. Astoundingly
the fractal brain theory is able to show that there is a singular
unified description and process behind the process by which life begins
from a fertilized egg, to give rise to bodies, to give rise to brains,
to give rise to minds and all the things that go on in our minds in our
lifetimes, right back to the purposefully directed central goal of our
lives which involves the process of fertilizing eggs. And so the cycle
begins again. This is a very bold, provocative and dramatic claim that
the fractal brain theory makes. It may seem like a theoretical
impossibility or some wild over-extension of thought and
overinterpretation of things, but it is also a reason why once these
aspects of the theory are fully comprehended, then they become a
powerful reason why the theory will quickly become accepted and gain
adherents. What at first seems fantastic might not seem so strange when
we consider what is indisputable. It is a fact that everything that
happens in our lives, everything that happens in our bodies and brains,
every cell created, every protein manufactured and every random nerve
firing that has ever occurred, and every thought and action that we've
ever had or performed; all of this has emanated from a fertilized egg.
Without this critical first event and tiny singularity in space and
time, everything that follows from it would not have happened. Suppose we
can discover a common underlying symmetry of process which shows how
all the separate emerging processes share a common underlying template;
can use our unifying language to describe all the many separate
phenomena, including cell division (i.e. neurogenesis and ontogenesis) as
well as the DNA operating as an information processor; and can then
describe all the separate processes as recursive and furthermore
link them all up into a single recursive process. If we can do
this, then this great overarching view of things might not seem so
incredible. This great unifying algorithm and overarching recursive
process is the central idea behind the fractal brain theory and the key
to creating true artificial intelligence.
Recursive Self Modification: the secret behind intelligence
The
process of cell division and the functioning of individual nerve cells
seem far removed from the level of introspection, the complex thoughts
that we have, and the intricate behaviours that we have to perform in
our day to day lives. And so it may seem intuitively incongruous that
there may exist a single algorithm and process that can span the entire
gamut of everything that happens in our bodies and in our lives. However
there is a trick which enables the simplest of processes, i.e. cell
division, to give rise to the most complex i.e. our intricate thoughts.
This is recursive self modification. What the fractal brain theory
describes is a recursive process that is able to generate hierarchical
structures. These structures in turn manifest the same process but in an
expanded and augmented form. The unifying recursive process then uses
these augmented forms to further expand itself to create even more
complex and evolved structures, which in turn generate more complex
patterns of operation. And so the initial seed process feeds back on
itself in this recursive way, to generate our bodies, brains and all our
mental representations, thoughts and behaviours. This is really the
trick that makes the fractal brain theory tick, and the key to
understanding the nature of intelligence.
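The following toy sketch is only a loose illustration of this general idea of recursive self-modification, under the assumption that a 'seed' structure is repeatedly fed back into the same growth step; the theory's actual algorithm is not specified here, so none of these names come from it:

```python
def grow(structure, passes):
    """Feed the current structure back into the same growth step repeatedly."""
    if passes == 0:
        return structure
    expanded = [structure, structure]      # the existing structure is used to build a larger one
    return grow(expanded, passes - 1)

print(grow("atom", 3))                     # a nested structure three levels deep
```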
The
much discussed Technological Singularity also describes a recursive
feedback process where one generation of artificial intelligence is
quickly able to design the next augmented and improved next generation
of AI, in a positive feedback cycle to create a so called 'intelligence
explosion’. A very interesting property of the fractal brain theory is
that it describes this same process happening in the microcosm of the
human brain and emergent mind. Likewise, intelligence is made up of a
virtuous positive feedback cycle happening in our heads, but constrained
by our biological limitations and finite lifespans. It will be seen as
entirely appropriate, once this idea is fully accepted, that the key
process that enables true intelligence, i.e. recursive self
modification, is the trick behind creating artificial intelligence,
which in turn enables the trick of recursive self modification to happen
on a grander scale to bring about the Technological Singularity.
Individual artificial intelligences will then be clearly seen as
fractal microcosms of the macrocosmic Technological Singularity that they
give rise to.
The Special Significance of the Fractal Brain Theory
Our
three major theoretical breakthroughs in systems neuroscience and AI,
can be thought of as our three fundamental foundational concepts, i.e.
symmetry, self-similarity and recursivity, taken to the maximum. Our
single unifying language can be thought of as a single underlying
Symmetry behind the entirety of all the diverse aspects of neuroscience
and psychology. Likewise our single unifying structure can be thought of as
a single all encompassing self-similar fractal of brain and mind. And
our single unifying process is the conceptualizing of all the separate
component processes of brain and mind happening in all contexts and
scales as being the expressions of a single seed recursive function.
These are the very powerful and profound properties of the fractal brain
theory, which would suggest that the theory is something quite special
and unique. When it is fully digested and accepted that it is possible
to understand the brain and mind using the fundamental scientific and
mathematical concepts of symmetry, self-similarity and recursivity, in
this complete and comprehensive manner; then the fractal brain theory
itself may come to be seen as something likewise fundamental.
The Fractal Brain Theory, Artificial Intelligence and the Technological Singularity
The
Fractal Brain Theory apart from being a series of scientific
breakthroughs, also addresses some of the biggest questions and hardest
puzzles in the quest to create true artificial intelligence. Naturally a
comprehensive scientific understanding of brains and minds should be
relevant and applicable towards the goal of creating artificial ones.
And this is indeed the case: the fractal brain theory answers some of
the biggest unanswered problems in the field. Also the brain theory has
the very interesting property of incorporating some of the most
recurring as well as some of the most state of the art ideas in mainstream
artificial intelligence research, while at the same time giving us a
clear roadmap for the next steps forward.
The Future of Artificial Intelligence and the Technological Singularity
There
exists a close relationship between the Fractal Brain Theory and the
creation of Artificial Intelligence together with the instigation of the
much anticipated Technological Singularity. It will seem obvious to a
lot of people that a fully scientific understanding of brain and mind
should directly facilitate the quest to create true AI. We'll explore
this relationship in some detail by showing that there exist some deep
connections between our fractal way of looking at brain and mind, and
many existing ideas in AI and computer science. In the same way that the
theory is able to unify and integrate a vast amount of data, facts and
findings from brain and mind sciences; so too with the sciences of the
computational and informational. The brain theory sits at the nexus of
many of the existing approaches to creating artificial intelligence and
also some of the best and most useful ideas in computer science.
This
ability to integrate a lot of diverse ideas from artificial
intelligence into a coherent unified picture, from the very outset
solves a major problem which has been cited as the main reason why there
hasn't been much theoretical progress in the quest to create AI in the past
few decades. Patrick Winston of MIT, an early and prominent researcher,
recently called this the 'mechanistic balkanization of AI',
the state of affairs where the field has divided itself into many
sub-disciplines which study specific mechanisms or very circumscribed
approaches to AI. This is coupled with an inability to take advantage of a
wider cross fertilization of ideas, or to see the necessity of working
together in trying to answer the larger problems. Moreover the ability
of the fractal brain theory to seamlessly bridge the divide between on
the one hand, the engineering fields of artificial intelligence and
computer science; and on the other neuroscience and psychology, solves a
wider and more significant ‘balkanization’. This is the inability and
sometimes reluctance on the part of AI researchers to embrace and take
advantage of facts and findings from the brain and mind sciences. The
powerfully integrated overarching perspective of the fractal brain
theory, provides for us a major advantage over the existing demarcated,
compartmentalized and overly narrow approaches to creating artificial
intelligence.
The
fractal artificial intelligence that derives from the fractal brain
theory will at first seem novel and groundbreaking, but at the same time
there will be a lot of familiarity inherent in its workings. In a sense
all of artificial intelligence and computer science, in one way or
another, is convergent upon the workings of brain and mind. The Fractal
Brain Theory and the new kind of artificial intelligence associated with
it is the fullest expression of this convergence. This self similar
(i.e. fractal), symmetrical and recursive way of looking at the brain
enables a massive unification of brain and mind; and furthermore also an
even wider unifying of neuroscience and psychology with artificial
intelligence, computer science and information theory. From now on we’ll
be using the expressions ‘new kind of artificial intelligence’ and
‘fractal artificial intelligence’ quite interchangeably. Like Stephen
Wolfram’s ‘new kind of science’ which seeks to reframe the laws of
physics and our understanding of the Universe in terms of simple
computational principles, especially modelling physical phenomena using
discrete Cellular Automata, so too in an analogous way we seek to
rationalize existing artificial intelligence techniques and reframe
existing approaches using a more succinct and unifying description. And
when we say ‘fractal artificial intelligence’, we mean the creation of
AI that derives from a view of brain and mind that sees its functioning
and structure as perfectly symmetrical, self similar and recursive.
The
formalising of the Fractal Brain theory in the language of binary trees
and binary combinatorial spaces has a very useful and interesting
property. This relates to the fortunate state of affairs that this
binary formalism is also the same language that underlies computer
science, artificial intelligence and information theory. At first this
might seem like an amazing coincidence or perhaps as some sort of
deliberate contrivance. But it is also a natural consequence of the fact
that the same constraints and issues faced by computer scientists in
their design of computing hardware and intelligent systems, are also
those that biological brains have to deal with. The same advantages of
using binary codes in computers, i.e. signal fidelity, persistence of
memory, processing accuracy, error tolerance and the handling of
‘noise’, are also advantages that may likewise be exploited by nature,
employing the same means, that is, by going digital. Hence our idea of a
binary combinatorial coding, digitized brain, grounded in actual
neurophysiology and anatomy, comes about through the convergence of these
issues of information processing, that both computers and biological
brains have to work with and around. The use of the language of binary
trees and binary combinatorial spaces by the fractal brain theory, apart
from the correspondence with a mass of empirical data, facts and
findings from neuroscience and neuropsychology; also allows for the
complete bringing together of biological brains and minds with the
fields of computer science and artificial intelligence. In doing so it
illuminates, unifies and solves many of the biggest issues in these
technological endeavours. It answers a lot of the hard questions in AI
and even gives us insight into the nature of the so called Technological
Singularity, which concerns all the implications of the advent of 'true',
‘strong’ or ‘general’ artificial intelligence.
First
off, we’ll put these new ideas into the context of what some of the
leading researchers in the field of artificial intelligence and closely
related disciplines have speculated concerning the nature of true AI
when it finally arrives. These views are also often closely related to
what these thinkers believe about the nature of brain and mind. Roughly
speaking we have two camps. On one side we have those thinkers who think
that underlying the brain and mind is some sort of unifying and
ultimately relatively simple answer waiting to be discovered. This
viewpoint goes hand in hand with the notion of some critical
breakthrough, or the elucidation of a set of basic principles, which
will unlock the puzzle of brain and mind and then enable the
creation of true AI and the beginning of the technological singularity. On
the other side we have those thinkers who believe the opposite, that
this won’t be the case.
The
first camp who believe in an underlying simplicity in brain and mind is
depicted on the left side of the diagram above. So for instance Steve
Grand who did some interesting work in evolutionary AI thinks there is a ‘single sentence solution’ for
understanding intelligence and what it’s for. Eric Horvitz the director
of AI research at Microsoft Corporation believes in the existence of a 'deep theory' of
AI, mind and brain waiting to be demonstrated, though he has no idea
what this might look like. Andrew Ng (former director of the Stanford AI
lab and now head of AI research for Baidu corp), Ray Kurzweil (A
successful AI implementer and a godfather of the Technological
Singularity) and Jeff Hawkins (A brain and AI theorist), all believe in
the existence of a single underlying cortical algorithm, the discovery
of which would explain how all the different areas of neocortex
function.
The second camp, depicted in the diagram above right, is
represented by researchers like Ben Goertzel who doesn't believe in the
existence of a critical algorithm that could give rise to true AI. Nils
Nilsson, who was the director of the SRI (Stanford Research Institute) AI Lab
and author of many books on the subject, 'doubts' that an 'overarching theory of AI will ever emerge'. Danny Hillis, who did pioneering work on massively parallel computers, expressed the view that 'Intelligence is not a unitary thing',
but rather a collection of rather ad hoc solutions. Doug Lenat, a
practitioner of so called GOFAI or good old fashioned AI, has
expressed similar sentiments. His own work holds out the hope that AI
or at least 'common sense' for AI will emerge from the collecting
together and human hand coding of a myriad number of facts and pieces of
knowledge. And then there’s Stephen Wolfram (creator of Mathematica and
certified genius), who said that as a young man he believed in the
coming of some critical idea that would give us AI but now he doesn’t
anymore. So this gives us a picture of what it is that some of the big
names in AI and related fields are currently thinking.
The
Fractal Brain Theory will give strong support to the first camp and
demonstrate what a unified theory of brain, mind and AI looks like. In a
sense it gives us something akin to a ‘one sentence solution’, i.e. a
minimal self modifying recursive function that starts initially as a
representational atom. And it shows that a 'deep theory' of brain and mind
naturally derives from a similar process to how all 'deep' theories in
science come about, i.e. through the application of the principles of
symmetry. Also the brain theory shows that not only is there a single
unifying cortical algorithm that is able to account for all information
processing in the neocortex, but also that this underlying algorithm
encompasses in its operation the function of the major auxiliary
structures of cerebellum and extended striatum. It even subsumes the
functioning of the emotion centres and the important process of
reinforcement learning. The brain theory demonstrates not a single
critical breakthrough or idea but really a whole series of closely
inter-related ones but which do distill to a single algorithm and
recursively self-modifying description. It shows that intelligence and
the functioning of brain and mind is indeed a unitary thing, that
underlying all the myriad complexity is simplicity. And that true AI and
common sense emerges not from trying to hard code all the diverse
complexity of mind but rather in the discovery of the relatively simple
underlying process which generates it in the first place. So far from
being a collection of ad hoc solutions, the brain and intelligence
come about through the application of an underlying common algorithm to
different contexts and the problems within those contexts, and this is
why things can seem ad hoc. Also this symmetry, self-similarity and
recursivity brain theory gives us an 'overarching' theory not just of
AI, but also one that seamlessly brings together many of the best and
recurring ideas in AI, with what we know about brain and mind. It acts as
the bridge between different sets of compartmentalized human scientific
and technological activities, i.e. Neuroscience, Psychology, Artificial
Intelligence and Computer Science, thus allowing them to more fully
interact and work together.
The Fractal Brain Theory as the next step in Artificial Intelligence
Next
we'll give some examples of some of the biggest problems in AI and show why
the fractal brain theory is able to solve them. So for instance, an AI approach
called ‘deep learning’ has received a lot of publicity lately. In the
1980s and 1990s this particular way of doing things was called ‘neural
networks’, and it is an approach to AI loosely based on a very
simplified and abstracted model of real neurons. It has caused a stir
recently due to the intense interest shown in deep learning by tech
behemoths such as Google, Facebook and Microsoft. While deep learning
has been demonstrated to be quite effective in simple image and audio
recognition tasks, its limitations are widely recognized. Perhaps the
two biggest names behind deep learning, Geoffrey Hinton and Yann LeCun,
who were recently recruited by Google and Facebook respectively
to much fanfare, have both highlighted certain weaknesses of their
approach. And one of these is the over reliance of deep learning
algorithms on supervised learning where training these neural networks
requires labelled data, i.e. the training data set of images or sounds
by which the neural network ‘learns’, needs to be specifically labelled
beforehand by the human trainer(s). So an image of a cat has to be first
identified as a cat by a human or a speech sound needs to likewise be
labelled as the word it represents by somebody. Most human learning is
unsupervised: we make up our own labels and categories, and the complex
world in which we function doesn’t come neatly labelled or categorized
for us. So this is seen as the next step which deep learning researchers
need to tackle in order to create better AI. Demis Hassabis is another
top AI researcher whose company Deep Mind made the headlines early in
2014 after a $400 million acquisition by Google. He has identified what
he calls ‘conceptual learning’ as the next major problem which needs to
be solved on the road to creating AI. He has identified a major gap in
our understanding of how we form concepts with which to describe and
reason about the world. This is really the same problem as the problem
of unsupervised learning and the puzzle of how to label the world. The
processes of labelling, categorizing and conceptualizing the world
really exist on the same continuum.
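As a hedged illustration of the distinction being drawn here, the snippet below contrasts supervised data, which arrives with externally supplied labels, with unlabelled data that a generic clustering loop (a plain k-means sketch, not any specific deep learning method) must organize into categories of its own making:

```python
# Supervised data arrives with labels attached by a human.
supervised_data = [(0.10, "cat"), (0.15, "cat"), (0.90, "dog")]

# Unsupervised data has no labels; the learner must invent its own categories.
unlabelled = [0.10, 0.15, 0.20, 0.85, 0.90, 0.95]

def kmeans_1d(points, k=2, iters=10):
    """A bare-bones 1-D k-means: group points around k self-chosen centroids."""
    centroids = points[:k]                      # naive initialization
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - centroids[c]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans_1d(unlabelled)
print(centroids)   # two self-invented "categories", roughly [0.15, 0.9]
```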
A
major clue as to how to solve this problem of labelling or
conceptualizing the world is provided by another shortcoming of the deep
learning approach which was recently described by Geoffrey Hinton, who
is in a sense the godfather and leading light of the subfield. In a talk
given in 2013, he highlighted the requirement for some sort of internal
‘generative’ process which is currently mainly missing from deep
learning algorithms, and has wholeheartedly said that the working out of
this generative process is the future of his field and the avenue of
research which will be most fruitful.
This missing internal 'generative
method' is really what most ordinary folks intuitively find lacking in
existing AI and which is considered a hallmark of intelligence and what
it is to be human. This is creativity. The generative aspect that Hinton
finds currently missing from neural networks and AI can be
thought of as this lack of creativity in AI. It is also the key to
solving the puzzle of how to label, categorize and conceptualize. Put
simply, we need to be able to internally generate our labels, concepts
and categories because they are not externally provided for us. And we
need to match what we generate internally with what we sense externally.
But then we run up against the problem of what needs to be labelled,
categorized or conceptualized, or how to give artificial intelligence its
own internal autonomous supervisor.
We
then touch upon another really deep and essential aspect of
intelligence and this is meaning and purpose. Any creature or AI without
a sense of purpose or meaning can hardly be called intelligent. And it
is solving the problem of giving artificial intelligence this internal
sense of meaning and purpose which when coupled with our internal
generative or creative process, enables us to solve the puzzle of
unsupervised learning in AI. This makes intuitive sense when we consider
that it is our internal sense of purpose and meaning which is our
personal supervisor and that which internally guides our learning. There
are already attempts in AI research to address this 'barrier of
meaning', in the form of research in reinforcement learning and utility
functions. What the fractal brain theory provides is a complete account
and understanding of the emotion centres, i.e. centres of utility
registration. These include structures such as the hypothalamus and
amygdala, which modulate structures called the basal ganglia with
neurotransmitters such as dopamine, and which are involved in the so
called 'pathway of addiction'. So the theory shows how these engines of
the mind power the structures of reinforcement to guide and shape our
behaviours and our learning. We give AI a sense of meaning and purpose,
by reverse engineering how this is implemented in real brains.
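The sketch below is a generic reinforcement-learning toy, not the brain theory's own mechanism: an internal reward signal acts as the 'supervisor', gradually raising the estimated utility of the action that pays off more often. The environment, reward probabilities and learning rate are all invented for illustration:

```python
import random

true_reward = {"a": 0.2, "b": 0.8}     # hypothetical environment: 'b' pays off more often
value = {"a": 0.0, "b": 0.0}           # the agent's learned utility estimates
alpha = 0.1                            # learning rate

for step in range(2000):
    if random.random() < 0.1:
        action = random.choice(list(value))      # occasional exploration
    else:
        action = max(value, key=value.get)       # exploit current estimates
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    # prediction-error style update driven by the reward signal
    value[action] += alpha * (reward - value[action])

print(value)   # value["b"] should end up clearly above value["a"]
```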
The
puzzle of giving AI a generative or creative ability is
solved by the fractal brain theory, and the AI deriving from it, in a
very interesting way. The brain theory shows that the entire total
process of brain and mind is one big recursive generative process. It
shows that every operation of the mind and every process of the brain is
captured by a universal underlying symmetry of process, which can be
understood as generative mappings which search binary
combinatorial spaces of arbitrary size and depth. In other words the
process of brain and mind involves the creative exploration of
combinatorial possibilities which are then scored by their match to
external reality but also by signals of reward or punishment sent from
the emotion centres. What the brain theory shows is that to solve the
important problem of how to enable AI to form concepts, labels and
categories, you first need this generative process but also a solution
to the puzzle of giving an AI a sense of meaning and purpose. The
fractal brain theory shows how to do this clearly and explicitly, and so
provides answers for some of the biggest puzzles in AI and deep
learning, as highlighted by the leading researchers in the field. The
fractal AI deriving from the fractal brain theory is really the next
step and future of AI.
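A hedged toy rendering of this 'generate and score' idea is sketched below: candidate binary patterns are generated and scored partly by their match to an external observation and partly by an internal reward term. The particular scoring weights and the reward function are assumptions made purely for illustration:

```python
import random

observation = "101101"                      # what the senses currently report

def internal_reward(pattern):
    """A made-up internal preference signal standing in for the emotion centres."""
    return pattern.count("1") / len(pattern)

def score(pattern):
    match = sum(a == b for a, b in zip(pattern, observation)) / len(observation)
    return 0.8 * match + 0.2 * internal_reward(pattern)   # invented weighting

# Generate candidate binary patterns and keep the best-scoring one.
candidates = ["".join(random.choice("01") for _ in range(len(observation)))
              for _ in range(200)]
best = max(candidates, key=score)
print(best, round(score(best), 2))
```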
The Fractal Brain Theory in relation to existing Artificial Intelligence
We
next relate properties of the Fractal Brain Theory and its expression
in the language of binary trees to existing approaches and techniques in
AI. When we do this, we discover a unifying of a great many ideas
in this field, and in particular the recurring and arguably the most
successful ones. We start by examining the three overall broad
approaches to doing AI and how all of these separate approaches are
subsumed by our new kind of AI.
These
three broad categories of existing artificial intelligence are firstly
what may be described as symbolic or ‘good old fashioned AI’,
abbreviated sometimes as GOFAI. And this approach is exemplified by
expert systems, chess playing programs, most existing natural language
processing systems and also systems like IBM's TV game show Jeopardy
playing champion Watson. The second approach may be called spatial
temporal AI and would include neural networks, deep learning systems,
and cellular automata approaches to AI. And the third and last approach
would be what may be called ‘combinatorial AI’ and this would include
techniques such as genetic algorithms, genetic programming and ‘neural
darwinism’.
Though
none of these three approaches by themselves produce true artificial
intelligence, nonetheless they capture some essential aspects of how
brains and minds work. Obviously at some level brains are processing
things at a symbolic level. And though neural networks and deep learning
implementations are generally not biologically realistic, nonetheless
real brains must in some way work as a network of interacting
components. So the whole neural network approach, while not necessarily
working in exactly the same way that real brains work, nonetheless may
still potentially capture in its functioning aspects of the building
blocks of intelligence. And as for the whole evolutionary and ‘genetic’
approach, it is entirely plausible that evolutionary processes are
happening in the brain and in our minds. This makes intuitive sense from
our introspection and also from observing the evolution of the
behavioural repertoire of babies and infants, where we see skills and
abilities literally evolving right before our eyes and ears on an almost
daily basis. Also the evolutionary algorithm is the most powerful
generator of pattern, form and diversity that we know of. It has created
all the myriad diversity of life on earth. The idea that human
creativity may likewise take advantage of this evolutionary process is a
compelling one.
In
a sense the existing work on symbolic AI, neural networks and genetic
algorithms has been the exploration of partial solutions to the problem
of creating artificial intelligence. It has created systems of utility
and some ability, but nothing that can be called truly intelligent. If
it is the case that all of these approaches do genuinely reflect aspects
of real intelligence and the functioning of real brains and minds,
which we believe is so, as just described; then surely the combining of
all these approaches into a single tightly integrated hybrid will take
us closer to the creation of a true likeness of real intelligence.
Some implementors of AI, for instance Ray Kurzweil, have already gone
down this route to some extent and arguably produced results which are
improvements on anything that may result from a less integrated and
hybridizing approach. So Kurzweil describes in his recent book ‘How to
create a mind’, the details of his approach, which includes an initial
evolutionary step, that creates a hierarchical spatial/temporal
structure and which then is made to incorporate preconfigured symbolic
and linguistic structures derived from psychological research into human
language processing. So Kurzweil’s approach spans the 3 broad
categories of AI we have described. But it only does so in a fragmented
and partially integrated manner. So for instance the initial
evolutionary aspect is discarded once the basic spatial temporal network
is created. After this, Kurzweil's 'mind' functions as a standard
hierarchical spatial temporal Bayesian network.
Instead what we are
proposing is a full and tight integration of the 3 categories of AI in
our 'new kind of AI', whereby the symbolic and the subsymbolic spatial
temporal are seen as a continuum and exist in a single unified
conception. Also we envisage a continual and intrinsic working of the
evolutionary 'genetic' aspect in the functioning of this 'new kind of
AI'. We'll next describe how this can be so.
Fractal Artificial Intelligence and the Unification of the main approaches to AI
First
off, the formalism in which the fractal brain theory is expressed, i.e.
binary trees and binary combinatorial spaces, fits together very neatly
with so called GOFAI, i.e. good old fashioned symbolic and linguistic
AI. All of GOFAI is very well expressed as binary trees. All languages
can be completely analyzed in terms of binary tree structures and all
linguistic constructs can be expressed as binary trees. Also the formal
languages of logic, i.e. propositional and predicate logic are all
binary tree structures. And in a related way, AI created using
specialized languages such as Prolog or Lisp generally involve the
processing of underlying binary tree-like data structures. So in a very
direct way there exists a very immediate connection between the fractal
brain theory and symbolic AI.
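As a minimal illustration of this claim, the sketch below represents a small propositional formula as a nested binary tree of (operator, left, right) nodes and evaluates it recursively; the representation is generic textbook material rather than anything specific to the fractal brain theory:

```python
def evaluate(node, env):
    """Evaluate a formula stored as a binary tree of (op, left, right) tuples."""
    if isinstance(node, str):               # leaf: a propositional variable
        return env[node]
    op, left, right = node                  # internal node: a binary connective
    if op == "and":
        return evaluate(left, env) and evaluate(right, env)
    if op == "or":
        return evaluate(left, env) or evaluate(right, env)
    if op == "implies":
        return (not evaluate(left, env)) or evaluate(right, env)
    raise ValueError("unknown connective: " + op)

formula = ("implies", ("and", "p", "q"), "r")                  # (p AND q) -> r
print(evaluate(formula, {"p": True, "q": True, "r": False}))   # False
```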
In
relation to spatial-temporal AI we likewise are able to subsume these
approaches using our binary tree language. We may represent topographic
maps, and also space and time in general, using binary trees, i.e. through the
binary subdividing of space in a manner related to the quadtree
representations and wavelet transforms used in image processing. We may
likewise represent time, again using binary trees and the recursive
binary sub-divisioning of time in terms of past and future. Importantly
this binary representing of space and time can be implemented by the
neurophysiological substrate and also this way of looking at things is
backed up by a wealth of empirical evidence and data. So we have a ready
made way of dealing with space and time using the fractal brain theory,
and one that remarkably corresponds with how real brains and the real
minds of human beings actually process and deal with the world. So
fractal artificial intelligence is at the outset inherently spatial
temporal.
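A small sketch of this kind of recursive binary subdivision is given below: a position in the unit interval (it could equally be a moment within a span of time) is encoded as a binary string by repeatedly asking 'left half or right half?'. This is a one-dimensional analogue of the quadtree idea and an illustration only, not the theory's exact encoding:

```python
def binary_address(x, depth):
    """Encode a position in [0, 1) as a bit string by repeated halving."""
    lo, hi, bits = 0.0, 1.0, []
    for _ in range(depth):
        mid = (lo + hi) / 2
        if x < mid:
            bits.append("0")     # left (or earlier) half
            hi = mid
        else:
            bits.append("1")     # right (or later) half
            lo = mid
    return "".join(bits)

print(binary_address(0.3, 8))    # '01001100'
```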
As for neural networks, with our binary tree scheme we can
totally subsume this way of doing AI, simply by mapping the nodes of any
neural network to the end nodes of our binary trees. We would then
represent the connections between the neural network ‘neurons’ as tree
walks traversing the overarching binary tree which creates our tree node
neurons. So any sort of neural network or deep learning structure of any
complexity can be perfectly modelled using our binary tree scheme. The
brain theory also describes how real brains are likewise binary tree
derived and how every neuron in real brains may likewise be represented
as binary tree end nodes.
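Purely as an illustration of this mapping, and with names that are assumptions rather than the theory's own, the sketch below addresses each 'neuron' as a leaf of a complete binary tree via a bit string, and represents a connection between two neurons as a walk up to their lowest common ancestor and back down:

```python
def leaf_path(index, depth):
    """Address leaf number `index` of a complete binary tree of given depth."""
    return format(index, "0{}b".format(depth))

def tree_walk(a, b):
    """Describe the walk from leaf `a` up to the common ancestor and down to `b`."""
    shared = 0
    while shared < len(a) and a[shared] == b[shared]:
        shared += 1
    return {"up": len(a) - shared,        # steps up to the lowest common ancestor
            "down": b[shared:]}           # branch choices taken going back down

n1, n2 = leaf_path(5, 4), leaf_path(12, 4)     # two of 16 leaf "neurons"
print(n1, n2, tree_walk(n1, n2))               # 0101 1100 {'up': 4, 'down': '1100'}
```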
This
binary tree and binary tree walk way of implementing neural networks
and deep learning structures may seem initially like a tremendous amount
of hassle and an unnecessary layer of superfluous complexity. But there
are very sound reasons for imagining neural networks in this way, apart
from a desire to see things in a unified and biologically plausible
conception. This has to do with the implementation of our fractal
artificial intelligence on massively parallel supercomputers and a very
efficient and in many ways optimal communication network topology called
the ‘Binary Fat Tree’. We’ll discuss this more a little later on.
Lastly
in our integrating of the broad approaches of existing AI, i.e.
combinatorial, evolutionary or genetic AI, we also find a neat
correspondence with our binary tree and binary combinatorial space way
of looking at things. After all, the process of evolution, which is the
searching and scoring of combinatorial gene space, is perfectly modelled
by our corresponding search and scoring of binary combinatorial space
which the theory describes. In fact any sort of combinatorial code,
genetic, alpha-numeric or otherwise, can be represented by and reduced to
binary combinatorial codes. Also, the evolutionary aspect is an inherent
part of the fractal brain theory, and by the same token also the
artificial intelligence that derives from it.
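To make the correspondence concrete, here is a generic, minimal genetic-algorithm sketch over binary strings; the fitness function (counting 1-bits) is an arbitrary stand-in and the whole example is illustrative rather than anything drawn from the theory:

```python
import random

LENGTH, POP, GENERATIONS = 16, 30, 40

def fitness(bits):
    return bits.count("1")                 # arbitrary stand-in fitness

def mutate(bits, rate=0.05):
    return "".join(random.choice("01") if random.random() < rate else b for b in bits)

def crossover(a, b):
    cut = random.randint(1, LENGTH - 1)
    return a[:cut] + b[cut:]

population = ["".join(random.choice("01") for _ in range(LENGTH)) for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]        # select the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    population = parents + children

print(max(population, key=fitness))        # tends towards the all-ones string
```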
So
we therefore have in our new kind of fractal artificial intelligence
the full integration and complete unification of the 3 broad ways of
doing AI. In the language of binary trees and binary combinatorial
spaces we have the integration of the symbolic and subsymbolic in a
single conception. The puzzle of 'symbol grounding' in current
artificial intelligence thinking, or the problem of reconciling higher
level symbolic and linguistic constructs with lower level spatial
temporal representations, ceases to be a problem once we describe both
these domains in the same conceptual language. The symbolic and
subsymbolic are then seen as existing on the same continuum and as
different levels of the same fractal hierarchy. As for what some AI researchers
consider the 'major' problem of symbol grounding, in the context of
the fractal brain theory and the AI engineered from it: what problem?
The Fractal Brain Theory in relation to some recurring and specific ideas in AI
We
have just related our new kind of AI to the existing broad
approaches of AI. What we do next is to show that there also exists a
tight correspondence between on the one hand the fractal brain theory
together with the fractal AI that derives from it, and on the other hand
many of the specific techniques and algorithms used in existing AI
systems. Though of course there does not necessarily exist a correspondence
with any and every AI technique currently in use, nonetheless there are a
number of recurring and fundamental ideas in artificial intelligence
which are completely incorporated by the fractal brain theory. Arguably
it is these recurring ideas, which are repeatedly found in the
implementation of AI systems over the past few decades, that are also
the ones closest to the workings of real intelligence, and this is
the reason they are successful and keep being used. These recurring
ideas take on slightly different manifestations in different contexts
and may appear as different ideas, but underneath they are really
the same idea, which we'll also discuss.
Divergence, Convergence and Intersection
We’ll
explain the first and perhaps most important of these recurring ideas.
In the most widely read artificial intelligence textbook, Peter Norvig
and Stuart Russell's 'Artificial Intelligence: A Modern Approach', which
by some estimates is used by 95% of all undergraduate and post-graduate
AI courses in the English speaking world, we find this idea repeated in
several chapters and in different guises. In fact it is one of the
ideas introduced in the early chapters on AI and search. This is the
idea of a forward diverging and branching search into possibility space,
from some start point or set of initial conditions, towards some goal
state or answer that we wish to derive from our initial conditions and
start of search. This process is complemented by a backward process,
whereby from the goal state or answer we search backwards and explore
all the states that may lead to the goal. These complementary forward
and backward search processes, forwards from the initial state and
backward from the goal state are both tracked for intersection. That is
the possibility of the forward and backward processes meeting at the
same point. If this happens then the two process connect up and a single
path emerges from the initial state to the goal state and this is the
answer.
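To make this diverge, converge and intersect idea concrete, here is a minimal sketch in Python of bidirectional breadth-first search over a small abstract state graph. The graph, start and goal states are invented purely for illustration and are not part of the theory itself.

from collections import deque

def bidirectional_search(graph, start, goal):
    """Expand forwards from start and backwards from goal; stop at intersection."""
    if start == goal:
        return [start]
    fwd_parents, bwd_parents = {start: None}, {goal: None}   # for path reconstruction
    fwd_frontier, bwd_frontier = deque([start]), deque([goal])

    def expand(frontier, parents, other_parents):
        node = frontier.popleft()
        for nxt in graph.get(node, []):
            if nxt not in parents:
                parents[nxt] = node
                frontier.append(nxt)
                if nxt in other_parents:      # the two searches intersect here
                    return nxt
        return None

    while fwd_frontier and bwd_frontier:
        meet = expand(fwd_frontier, fwd_parents, bwd_parents) \
               or expand(bwd_frontier, bwd_parents, fwd_parents)
        if meet:
            # stitch the forward half and the reversed backward half together
            path, n = [], meet
            while n is not None:
                path.append(n); n = fwd_parents[n]
            path.reverse()
            n = bwd_parents[meet]
            while n is not None:
                path.append(n); n = bwd_parents[n]
            return path
    return None

# A toy undirected graph given as symmetric adjacency lists (purely illustrative);
# a directed graph would need reversed edges for the backward frontier.
graph = {
    'A': ['B', 'C'],
    'B': ['A', 'D'],
    'C': ['A', 'D'],
    'D': ['B', 'C', 'E'],
    'E': ['D'],
}
print(bidirectional_search(graph, 'A', 'E'))   # e.g. ['A', 'B', 'D', 'E']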
This artificial intelligence technique, which in the Russell/Norvig AI textbook is initially described in the early chapters on searching abstract problem spaces, is really fundamental to the working of AI as it currently exists today. So later on in the book it also reappears in the chapters on logic, where it is called forward chaining and backward chaining. Only in this instance, instead of some abstract search space, what is explored is the space of logical propositions, through the laws of derivation by which these propositions are created and transformed. So for instance in logic the problem would be to try to discover a path of derivation from a set of given logical propositions, i.e. axioms, which are taken as true, to some logical proposition which we would like to verify. And similar to our reverse search process, backward chains of mechanized reasoning are created from this proposition, to go with the forward chains deriving from the given initial axioms. If the forward and backward processes meet up, then this shows the new proposition is true given our axioms, i.e. it can be deduced.
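As a rough illustration of these two directions of derivation, the following Python sketch runs forward and backward chaining over a handful of invented Horn-clause style rules and facts. The propositions are hypothetical and chosen only to show the mechanics.

# Rules are (set of premises, conclusion); facts are the given axioms.
RULES = [
    ({"robot_at_door", "door_open"}, "robot_in_room"),
    ({"pressed_button"}, "door_open"),
    ({"robot_in_room", "object_in_room"}, "robot_at_object"),
]
FACTS = {"robot_at_door", "pressed_button", "object_in_room"}

def forward_chain(facts, rules, goal):
    """Keep deriving new propositions from known facts until the goal appears."""
    derived = set(facts)
    changed = True
    while changed and goal not in derived:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return goal in derived

def backward_chain(goal, facts, rules, seen=None):
    """Work backwards from the goal to premises that bottom out in known facts."""
    seen = seen or set()
    if goal in facts:
        return True
    if goal in seen:                      # avoid circular derivations
        return False
    seen = seen | {goal}
    return any(all(backward_chain(p, facts, rules, seen) for p in premises)
               for premises, conclusion in rules if conclusion == goal)

print(forward_chain(FACTS, RULES, "robot_at_object"))   # True
print(backward_chain("robot_at_object", FACTS, RULES))  # True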
A practical application of this logical forward and backward chaining is in robotic navigation, whereby the different places a robot can be are described in a logical ‘situation calculus’, where each situation the robot can be in is encoded as a logical proposition. The robot may need to get somewhere from its current position or ‘situation’. With the goal and current situation coded as logical propositions, a procedure is initiated whereby a process of logical forward chaining from the current state, and one of backward chaining from the goal, searches the possibility space of what the robot can do and how it can move. An intersection of the forward and backward chains then gives the robot the exact sequence of manoeuvres that it has to perform in order to get to the goal.
In
a less obvious way, this recurring idea of a complementary forward and
backward search into combinatorial or possibility space is also again
indirectly articulated in the Norvig/Russell book in the separate
chapters on decision theory and utility functions. This is because in
the process of making a series of consecutive decisions we likewise have
a divergence into possibility space. And there is a backward emanating
process involved in the construction of utility functions as happens in
real brains. This idea can be easily derived from some fundamental ideas
in the field of behavioural psychology, where things, places and
behaviours derive their salience and reward(i.e. utility) significance
through a backward associational process originating in what are called
unconditioned reinforcers. That is, animals and humans are hardwired and born to like certain things, and over the lifespan of the animal or person they come to associate initially neutral stimuli with these hardwired preferences, so that they become conditioned reinforcers and stimuli. Thus they become emotionally significant to us through this associational process and are then seen as salient and as having utility. Importantly these conditioned reinforcers
in turn are able to make other neutral stimuli associated with them
into future conditioned reinforcers. And so the process continues
spreading out, creating a web of salience and reinforcement in our
minds. In fact, some psychologists actually refer to this process, whereby
so called primary or unconditioned reinforcers give rise to secondary,
tertiary and higher order reinforcers, as backward chaining. This is
really the essence of how real utility functions are created in real
brains and minds. And obviously our decisions are shaped by these
backward emanating structures of utility and reinforcement, i.e. we tend
to favour those decisions and courses of behaviour that maximize our
sense of reinforcement and utility. So on a higher level, the forward
process of making decisions and the diverging possibilities this
creates, intersects with the backward process by which we come to learn
about the utility and rewarding or aversive value of things. And these
two complementary forward and backward processes intersect to select
those paths of decision making that lead to reward and avoid aversion.
So this is exactly the forward and backward search, or the logical forward chaining and backward chaining processes, described in the earlier chapters of the Russell/Norvig book, and which constitute one of the most foundational ideas in AI.
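The backward spread of salience from unconditioned reinforcers can be sketched very roughly in Python as follows. The stimuli, the predictive links between them and the discount factor are all invented for illustration, and no claim is made that this is how the brain theory itself formalizes the process.

# Each stimulus points to the things it reliably precedes; value spreads
# backwards from a hardwired (unconditioned) reward, weakening a little per hop.
PREDICTS = {
    "lever_light": ["food"],
    "tone": ["lever_light"],
    "door_click": ["tone"],
}
PRIMARY_REWARD = {"food": 1.0}   # hardwired, unconditioned value
DISCOUNT = 0.8                   # each backward step carries less value

def conditioned_value(stimulus, depth=0, max_depth=10):
    """Value a stimulus acquires via the chain of things it predicts."""
    if stimulus in PRIMARY_REWARD:
        return PRIMARY_REWARD[stimulus]
    if depth >= max_depth:
        return 0.0
    successors = PREDICTS.get(stimulus, [])
    if not successors:
        return 0.0
    return DISCOUNT * max(conditioned_value(s, depth + 1) for s in successors)

for s in ["lever_light", "tone", "door_click"]:
    print(s, round(conditioned_value(s), 3))
# lever_light 0.8, tone 0.64, door_click 0.512 -- salience radiating backwards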
When
we now consider the fractal brain theory and real brains, then we find
this fundamental process of diverge, converge and intersect happening everywhere and at all fractal scales. It’s really the ubiquitous process found in real brains, and is the recurring self similar and symmetrical
process described by the brain theory. It is reflected in real neurons,
where the forward diverging of axons and the backward branching of
dendrites represents the physical manifestation of this process. The
fractal brain theory shows that this process is truly ubiquitous,
happening not just in the structures of the brain but also in relation
to the working of the mind.
Fractal AI & the Bayes rule
Another
recurring method or technique that has been something of a fixture in
the world of artificial intelligence implementations one way or another
is the Bayes rule. From the earliest expert systems to the most current
hierarchical deep learning architectures, either directly or indirectly
we find the Bayes rule or something akin to it at work. Something akin to the Bayes rule is also integral to the workings of real brains, and this is incorporated into the fractal brain theory. From the workings of synapses to the associating of emotional salience with previously neutral stimuli and sensory combinations, we find a Bayes-like mechanism at work. Bayesian analysis used to be called inverse probability. It is the
inverse probability aspect of the Bayes rule which enables us to find
forward probabilities leading towards reward states by tracking
backwards from the occurrence of rewards. This enables us to determine
whether a stimulus is a good predictor of that reward or not. This
inverse probability aspect of the Bayes formula, see below, has to do
with the reversal of the two events involved in a conditional
probability, i.e. ‘A’ and ‘B’. So the expression in the formula P(A|B)
translates into normal language as the probability of A given B has
already occurred. The Bayes formula allows us to discover the
probability of A given B, in terms of its inverse i.e. B given A or
P(B|A) as can be easily understood from the formula.
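For reference, the formula referred to here is the standard Bayes rule, which in LaTeX form reads:

P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}

That is, the forward conditional probability P(A|B) is recovered from its inverse P(B|A), multiplied by the prior probability P(A) and divided by the prior probability P(B).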
This
makes the problem of finding the predictors of reward or aversion far
more tractable, because we are able to consider the tiny subset of
stimuli or sensory combinations that occurred temporally and spatially
adjacent to the registering of reward. We are interested in A (the reward) given B (the predictor), but we don’t know which Bs are actually predictive of A, and it would be very costly to track every single possible B, i.e. every potential predictor of A, which could mean anything or everything in external reality, because we don’t know what these predictors are in the first place. This having to track everything in external reality would be the case if we didn’t have this inverse method of working out conditional probabilities and finding predictors
of rewards or aversion. Without initially having any idea of what is
significant or possibly salient; then, we would have to keep track of
and score every conceivable combination and permutation of sensory
possibility that could be registered by our hypothetical artificial
intelligence, which would involve a huge number of possibilities. Out of
this we would find candidates which may be good predictors of reward or
otherwise. This would be a vastly more computationally expensive way of
doing things compared to our inverse probability i.e. Bayes, backward
tracking way of doing things. Because it is far easier to work with the
inverse i.e. P(B|A) or B given A. That is we work backwards from the
registering of the reward ‘A’ and then go on to track the ‘B’s given
‘A’, i.e. potential predictors of that reward, which would be the events
or stimuli that happened just before or in close spatial proximity to
the registering of the reward. This would consist of a tiny subset of
all the things which might potentially be predictive.
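A toy Python sketch of this backward tracking idea, over an invented stream of stimulus and reward events, shows how only the stimuli present around reward need be scored, with the forward probability P(A|B) then recovered via the Bayes rule. The events and stimulus names are made up for illustration.

# Each event is (stimuli present at this moment, was reward registered?)
events = [
    ({"bell", "light"}, True),
    ({"bell"}, True),
    ({"light"}, False),
    ({"bell", "noise"}, True),
    ({"noise"}, False),
    ({"light", "noise"}, False),
]

n = len(events)
p_A = sum(r for _, r in events) / n                 # base rate of reward
reward_events = [s for s, r in events if r]         # tiny subset to inspect

candidates = set().union(*reward_events)            # only stimuli seen near reward
for b in sorted(candidates):
    p_B = sum(b in s for s, _ in events) / n
    p_B_given_A = sum(b in s for s in reward_events) / len(reward_events)
    p_A_given_B = p_B_given_A * p_A / p_B           # Bayes: invert the conditional
    print(f"P(reward | {b}) = {p_A_given_B:.2f}")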
So
the Bayes rule is involved in learning higher order conditioners and
finding emotionally salient and behaviourally rewarding sensory
combinations. It is also really something that is happening all over the brain and at all scales, which is what we’d expect in the context of a completely symmetrical and self-similar understanding of the brain. The
dynamics of synaptic modification can be interpreted as implementing a
function that is akin to the workings of the Bayes rule. Though this
would not be in a strict mathematically precise way involving the high
precision representation of numerical values, as would be the case for
implementations of the Bayes rule in digital computers. What we’re
talking about is a very rough approximation of what is happening behind
the Bayes idea.
In
a sense what a synapse is representing is a conditional probability, that is, the ability of one neuron B to influence the probability of another neuron A activating; i.e. P(A|B), the probability of A given B. Activation of the postsynaptic receiving neuron would
correspond to P(A) in the formula above and activation of the
presynaptic neuron would correspond to P(B). The higher the synapse strength, or degree of its potentiation, then given neuronal spiking on its axonal side, the greater the probability of activation of the neuron at the dendritic receiving end of the synapse. What would correspond
to the inverse aspect of the Bayes rule in the workings of neurons would
be the retrograde spike from the neuronal cell body, which goes
backwards along the dendrites reaching all the synapses embedded in
them. This would enable us to derive a result roughly corresponding to
the P(B|A) inverse term of the Bayes rule. As long as there is some
trace stored at the synapse which registered the recent activation of B, the pre-synaptic terminal, then whenever the retrograde back spike reaches each synapse, we would have sufficient information from the back spike and this hypothesized ‘trace’ to work out something akin to P(B|A). If we stored this correlating of the back spike with the forward signal, i.e. P(B|A), as a change in synaptic strength, then only those synapses which were activated immediately prior to the post-synaptic neuron activation would be strengthened.
This subset of synapses to consider would correspond to the reduction in sensory combinations to track, in our earlier consideration of how Bayes extracts a relevant subset of emotionally salient sensory combinations from the myriad possibilities we would otherwise have to track.
Once
we have this subset of strengthened synapses to consider, roughly corresponding to P(B|A), this would then be subject to possible synaptic weakening, or LTD (long term depression) of the synapse. This would come about according to the empirically derived rules by which synapses are de-potentiated, which means anti-correlation. In our ongoing discussion of neurons A & B, anti-correlation corresponds to two cases. Firstly, presynaptic neuron B activates but postsynaptic
neuron A doesn’t; and secondly post-synaptic neuron A activates but
presynaptic neuron B doesn’t. Either way the synapse is weakened and the
probability of post-synaptic neuron A activating given activation in
pre-synaptic neuron B i.e. P(A|B) is lessened. This would give us a
future value for P(A|B) which would derive from our initial P(B|A),
which would more accurately reflect a forward predicting probability
which is what we want our synapses to be storing.
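The following Python sketch is a minimal, illustrative rendering of this rough Bayes-like synaptic update, with an eligibility trace standing in for the hypothesized presynaptic ‘trace’ and a flag standing in for the retrograde back spike. All the constants are invented, and no claim is made that real synapses use these particular numbers.

class Synapse:
    def __init__(self, weight=0.5):
        self.weight = weight      # rough stand-in for P(A|B)
        self.trace = 0.0          # evidence that B fired recently

    def presynaptic_spike(self):
        self.trace = 1.0          # B just fired; leave a trace at the synapse

    def step(self, postsynaptic_back_spike, ltp=0.05, ltd=0.02, decay=0.5):
        if postsynaptic_back_spike and self.trace > 0.1:
            self.weight += ltp * (1 - self.weight)   # correlation: LTP
        elif postsynaptic_back_spike or self.trace > 0.1:
            self.weight -= ltd * self.weight         # anti-correlation: LTD
        self.trace *= decay                          # the trace fades with time

syn = Synapse()
for _ in range(20):              # B reliably fires just before A: weight climbs
    syn.presynaptic_spike()
    syn.step(postsynaptic_back_spike=True)
print(round(syn.weight, 3))
for _ in range(20):              # A now fires without B: weight decays again
    syn.step(postsynaptic_back_spike=True)
print(round(syn.weight, 3))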
In
this way of matching the Bayes rule to the working of synapses, we’re
not saying that neurons are doing anything like multiplication or
division of real numbers or fractions to arrive at anything with any
sort of mathematical precision. We’re merely suggesting that through the
operation of dendritic spikes, LTP/LTD long term potentiation and long
term depression of synapses through the rules of correlation and
anti-correlation; we are able to very roughly approximate dependent
probabilities in a way that reflects some of the essential aspects of
what is happening behind the Bayes rule, especially the backward or inverse
aspect.
Of
course with many neurons we could probably achieve more precise
representational aggregates. And with arrays of neurons, and the combined action of a myriad of synapses, we could also do something akin to Bayes involving spatial-temporal representations of
much greater complexity. In fact once we’ve shown how something like
Bayes is occurring at the level of an individual synapse then this
allows us to extrapolate something akin to the Bayes process to the
entire brain and at all levels. We may think of the Bayes process as
symmetrical and self-similar, happening in all regions of the brain and
at all scales. If we’ve already come to think of the entirety of all the
essential informational processing aspects of mind and brain as a
single all encompassing top-down hierarchy then we may likewise think of
the Bayes process as happening all over this all encompassing structure
in every way, combination and permutation conceivable.
Our
much simplified interpretation of the Bayes formula would probably
appall specialists using the Bayes rule in more mathematically precise
implementations of artificial intelligence applications. However what we
are aiming for is an interpretation of Bayes that is so simple that even a neuron and its synapses could roughly approximate it. The main thing for our current considerations is that the Bayes formula has been a recurring idea in the field of artificial intelligence, and this is because it is quite profound and extremely useful. It seems to deliver the results. The data analysis technology firm ‘Autonomy’, which was for a long time the United Kingdom’s most valuable listed tech company, before it was sold to an American buyer in 2011, i.e. Hewlett Packard, and lost its autonomy, is said to have built its
fortune on the Bayes formula. Its wider use generally in the field of
artificial intelligence has been invaluable. And so we think it is most
important to incorporate this Bayes like functioning into any theory of
the brain and the artificial intelligence deriving from it. There is
actually a large body of research in the neuropsychological literature
which shows that something like the Bayes rule is happening all over the
brain and in the functioning of real minds. This simple but profoundly important formula seems to reflect something
fundamental about the nature of intelligence and the workings of
existing artificial intelligence. It is thus an essential feature of the
fractal brain theory and fractal AI. From the fractal brain theory we
may derive products and services that are completely fractalized and
perfectly recursive versions of those sold by the UK company Autonomy.
The Fractal Brain Theory, Search and Google’s Technology
It
has been said by one of the founders of Google, Larry Page, that the
ultimate search engine is artificial intelligence. Google is perhaps the
single company in the world investing the most time, energy and money
in the quest to create AI. Both its founders have made no secret of
their desire for Google to be the corporate entity that brings AI to the world. Towards this end they have brought in and bought in all the top artificial intelligence and neural network talent they can get their hands on. Whether this conglomerate artificial intelligence by committee, with multi-billion dollar backing from one of the world’s most technologically advanced corporations, will deliver the goods, i.e. true,
strong and general AI, that is an open question. What we’ll discuss
here is the relationship between the fractal brain theory and fractal AI
on the one hand, and on the other hand, existing search engine
technology and Google’s declared goals and aspirations.
Firstly
there exists an interesting way in which the Fractal brain theory is able to effectively subsume existing Google search engine technology as a subset. The ideas behind the Fractal brain theory enable us to completely fractalize the way traditional search engines work. So how does today’s conventional search engine technology work?
Most search engines including Google’s work on two levels. They are at
the level of entire webpages and also at the level of individual words
within those web pages. In a simplified nutshell, what the Google search engine is doing is crawling as much of the world wide web as it can, looking at every single web page it finds and breaking it down and
representing it as what it calls a ‘forward list’. A ‘forward list’
corresponding to each web page simply consists of a list of all the
words contained within that webpage. So the number of different words
contained in a web page would correspond with the number of elements in
the forward list corresponding to that page. And so the Google search
engine would create a forward list for every single web page that it
looks at.
The
next step is to create a set of ‘backward lists’ from the total set of
all forward lists. This goes the other way, for every single word
contained in all the forward lists, we construct for it a backward list
which simply consists of a potentially very long list of all the web pages that contain that word. So there is a backward list for every single word, which indexes every web page that uses it. From all the backward lists we are able to work out all the pages that contain a list of separate keywords. Obviously typing in a single keyword would specify a single
backward list. When we type in several keywords then what would happen
is that the google algorithm would take all the backward lists
corresponding to these keywords and find the intersection between them.
Or put another way, it would scan all the backward lists and make an
answer list which would contain all those web pages which were
registered in all of the backward lists specified by the different
keywords. This answer list would be the list of all those web pages
which contained all of the search terms or keywords.
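A minimal Python sketch of this forward list / backward list scheme, with a few invented pages, shows the intersection step at the heart of multi-keyword search.

pages = {
    "page1.html": "fractal brain theory and artificial intelligence",
    "page2.html": "artificial intelligence and search engines",
    "page3.html": "fractal geometry in nature",
}

# Forward lists: one set of words per page
forward_lists = {url: set(text.split()) for url, text in pages.items()}

# Backward lists: for every word, the pages that contain it
backward_lists = {}
for url, words in forward_lists.items():
    for word in words:
        backward_lists.setdefault(word, set()).add(url)

def search(*keywords):
    """Intersect the backward lists of all the keywords."""
    lists = [backward_lists.get(k, set()) for k in keywords]
    return set.intersection(*lists) if lists else set()

print(search("artificial", "intelligence"))   # {'page1.html', 'page2.html'}
print(search("fractal", "intelligence"))      # {'page1.html'}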
To
all the web pages contained in this answer list the google algorithm
would apply an algorithm called ‘PageRank’, which scores them according to how many pages link to each page, with increased weight given to links from important pages which are in turn linked to by other pages ranked in the same way. This is what would be served up
as the results of the search query, with the highest ranking pages
displayed first and in order of their page rank. This is a
simplification of things. In actual operation the google algorithm will
cache frequently entered search terms so that it doesn’t have to keep
repeatedly calculating the intersection between the backward lists of
several search terms. This can be quite computationally expensive. But
in essence this is how the google search works and is what lies behind
the multi-billion dollar revenue juggernaut that is Google corp.
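A minimal sketch of a PageRank style scoring by power iteration, again over an invented link graph and with an assumed damping factor, would look something like this.

links = {           # page -> pages it links to (illustrative only)
    "page1.html": ["page2.html"],
    "page2.html": ["page1.html", "page3.html"],
    "page3.html": ["page1.html"],
}

def pagerank(links, damping=0.85, iterations=50):
    n = len(links)
    rank = {p: 1.0 / n for p in links}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in links}
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks) if outlinks else 0
            for target in outlinks:
                new_rank[target] += damping * share   # pass on a share of one's rank
        rank = new_rank
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda x: -x[1]):
    print(page, round(score, 3))   # highest ranked pages served first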
In
relation to the two level process by which conventional search engines
index the web just described, the fractal brain theory and the new kind
of AI directly deriving from it do something most interesting. In effect we are able to fractalize the process of web indexing and in doing so, create a truly semantic web. Instead of working on merely two rigid levels as the Google search engine does, i.e. in terms of entire web pages and also on the level of individual words, the application of the fractal brain theory to web indexing would give us a recursive multi-level hierarchical indexing scheme. So we would consider on one level the letters contained within words, then the words within sentences, then the sentences within paragraphs and the paragraphs within a web page.
Furthermore
because we use the same formalism and language to describe the
symbolic-linguistic as we do the spatial-temporal, this means we apply the same hierarchical decomposition to images, videos and sound clips contained within web pages. The same addressing scheme can even be used to index relational databases and the like. So in the same way that we are able to decompose all the structures and representations of brain and mind using the fractal brain theory and its accompanying binary formalism, so too we are able to apply the theory systematically to decomposing the entire WWW. In the same way that the linguistic, visual, auditory and abstract representations of mind are expressed in the unifying language, so we may do the same to all the various representations stored in the web. But it would be done in a way that is
hierarchical, multi-level, and fractal.
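A toy Python sketch of this recursive, multi-level decomposition, with an invented page, illustrates how every letter, word, sentence and paragraph can acquire a hierarchical address. It is only a simplified stand-in for the kind of indexing scheme described.

def decompose(text):
    """Return a nested tree: paragraphs -> sentences -> words -> letters."""
    return [
        [
            [list(word) for word in sentence.split()]
            for sentence in paragraph.split(". ") if sentence
        ]
        for paragraph in text.split("\n\n") if paragraph.strip()
    ]

def addresses(tree, prefix=()):
    """Yield (hierarchical address, element) pairs at every level of the tree."""
    yield prefix, tree
    if isinstance(tree, list):
        for i, child in enumerate(tree):
            yield from addresses(child, prefix + (i,))

page = "The brain is fractal. It repeats itself.\n\nSo does the web."
tree = decompose(page)
for addr, element in list(addresses(tree))[:6]:
    print(addr, element)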
But
what would this do for search? Firstly by using our binary language to
represent pictures, video and sound, it would allow for another kind of
search, using not keywords but perhaps image or sound fragments. So
perhaps by entering a picture or drawing into this new kind of search engine, it would then pick the best matches to this reference picture, by going through its hierarchical index of pictures and videos contained
on the web. This new kind of search engine can be made to work like
conventional ones such as google’s, serving up web pages in response to
lists of keywords. The two level forward and backward lists of the
google search engine are actually contained in this new kind of search
engine, as a subset and limiting case.
There
has been much talk over the years of a ‘semantic web’ and incorporating
a sense of meaning into web search. But how would we do this with our fractal brain theory inspired version of search? We would give our AI
search engine meaning in the same way that the fractal brain theory
shows how meaning is implemented in real brains. This would relate to
the functioning of the emotion centres, and how real brains and minds
construct all sorts of representations relating to the world, ourselves
and our needs, which all trace their meaning and their purpose from the
hardwiring of the emotion centres. We propose that this process by which
representations are given meaning in relation to our desires and
aversions is analogous to fractal growth processes such as diffusion
limited aggregates or DLAs, where structures grow out from a point to
form tree like constructs radiating out from the attractor point. These
initial seed attractor points in our brains correspond with our
hardwired unconditioned drives and built in rewards. So in the same way,
a similar sense of meaning can be given to artificial intelligences and
also our search engines built from the fractal brain theory. But
instead of built in drives like thirst, hunger and sex, our search
engines would be given specific keywords or images of specific interest
from which it radiates DLA like structures, capturing related ideas,
images, sounds and concepts. It would be performing something like
Bayesian analysis in relation to all the myriad associations that would
come up, in order to find the most relevant and closely related concepts
or data. This would be something along the lines of a fractalized Google search engine meeting a fractalized Autonomy style indexing scheme. This type of search engine would require a few orders of magnitude more computer power than existing ones, but this is what can be expected to become a reality over the next decade or so.
Hierarchical Representation, Context and the representing of Space & Time
Apart
from an integrated way of looking at existing ways of doing AI, our
binary tree and binary combinatorial space language provides us with very generalized and powerful solutions to some of the major outstanding problems in artificial intelligence as it is practised today. One example of this is the problem of how to represent space and time, and also the problem of context and hierarchical representation and
problem solving. The fractal brain theory describes how all the
structures of the brain together with the emergent structures of mind
may be unified into a single concept and importantly classified in a
single all encompassing hierarchical structure that also includes the
emotion centres. This hierarchical structure is also one of context and
containment, where everything within the structure exists with well
defined top-down, bottom-up, context and containment relationships to
everything else. It really gives us the most general way possible to
think about and formalise hierarchical structures. It also enables us to
define context and containment in a very flexible and recursive way,
and also one which can be directly implemented by the neural substrate
of the brain. This ability to recursively nest contexts within contexts
to arbitrary levels of detail is a very powerful facility which is handled by the polar frontal cortex, which is most highly developed in human beings and which is fully modelled by the brain theory. It is fully recursive thinking and the facility of the polar frontal cortex which define human intelligence and give us our reasoning power.
Furthermore,
in the fractal brain theory we have a way of representing space and
time which actually finds correspondence with how real brains do this as
is suggested by a lot of experimental findings which are explained by
the brain theory. Also it is conceptually neat that our way of
representing space is also the same as our way for representing time,
i.e. using binary trees. This is important because in the workings of
our brains and minds there does seem to be an interchangeability in our
processing and perception of space and time. This would obviously be
facilitated if we had a common representational format for space and
time, i.e. binary trees. Also our spatial and temporal code is
inherently hierarchical, so our generalized solution for the
representation of space and time is also a generalized solution for
hierarchical representation.
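As a rough illustration of the idea of one common binary-tree format for both space and time, the following Python sketch assigns a binary address to a value by recursively halving an interval. The intervals and depth are invented, and this is offered only as an analogy, not as the theory’s actual formalism.

def binary_address(value, low, high, depth=8):
    """Return the binary path (left=0, right=1) locating value in [low, high)."""
    bits = []
    for _ in range(depth):
        mid = (low + high) / 2
        if value < mid:
            bits.append(0); high = mid      # descend into the left half
        else:
            bits.append(1); low = mid       # descend into the right half
    return bits

# The same code addresses a position along a hypothetical 10-metre corridor...
print(binary_address(7.3, 0.0, 10.0))
# ...and a moment within a hypothetical 60-second episode
print(binary_address(42.0, 0.0, 60.0))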
In
relation to existing ideas and motivations in current AI research,
obviously these are all very significant insights. Leading figures in AI
research and theory, such as Ray Kurzweil, Peter Norvig and Jeff
Hawkins, go on and on about the importance of hierarchical representations and also learning and problem solving across these hierarchical structures. They
also talk about the importance of representing time and temporal
processing across these hierarchical structures but then admit that they
do not have a totally satisfactory idea of how to do this. Andrew Ng
said recently in early 2014, that this is one of the things that is very
important to understand and to implement, for the creation of true
artificial intelligence but also where no general solution currently
exists. What the Fractal Brain Theory is really telling us is that the
problem of hierarchical representation, the problem of how to represent
space and time, together with the problem of context; are all tightly
inter-bound with one another so that the most generic and universal
solution to each of them is directly connected to the solution of the
rest. So by framing everything in our hierarchical binary way of looking
at things, we understand that space and time are likewise binary and
hierarchical, as is context. This naturally gives us as a result, the
ability to represent spatial as well as temporal context, and to
conceptualize these spatial/temporal contexts likewise in a hierarchical
and nested way. Many existing ideas in AI relating to context, i.e.
‘frames’, ‘scripts’, ‘cases’ and case based reasoning, also
non-monotonic logic and defeasible reasoning; are all capable of being
fully expressed and subsumed by the concepts, processes and structures
of the fractal brain theory.
Fractal Artificial Intelligence is Scale Free
An
interesting consequence of our completely symmetrical, self similar and
recursive conception of the brain and mind is that we have a way of
thinking about artificial intelligence that encompasses everything from the most complex of brains to the simplest. By looking at the workings of the brain
in a fractal way and showing the correspondence of this perspective
with actual physiology and brain organization, we are able to conceive
the workings of parts of a complex brain at many different sub-levels of
organization as being a reflection of the working of the whole. In a
similar way, we also conceptualize the workings of simple brains as we
would the workings of complex brains. The brain theory is able to span
the simplest brain to the most complex and all stages intermediate or
contained within.
An article that came out around the year 2000, in the technical computer programming magazine Dr. Dobb’s Journal, speculated on the nature of AI by describing the following scenario. If a competent programmer who
lived in the future, at a time when true artificial intelligence was a
commonplace reality and the technology behind it well understood,
discovered some ancient computers in his garden shed dating from around
the time of the article, would he be able to create AI on these
machines? More specifically the question was posed, would it be possible
to create artificial intelligence on a standard personal computer
existing around the year 2000, running an Intel Pentium III processor at
around 1 gigahertz, with around 512 megabytes of memory. Given the Fractal Brain Theory, the answer to this question would be a resounding yes. We could create a mini artificial intelligence that
would contain all the necessary processes and features that would be
contained in a more full blown AI and human mind. The fractal brain
theory goes further to suggest that even a far simpler and more basic set of
computing resources would enable us to create a microscopic likeness of a
more macroscopic AI, that would nonetheless capture the fractal
characteristics of artificial intelligences of far greater scale and
complexity. This would be a reflection of the fractal nature of the
brain theory and the scale free AI that would be created from it. This
also leads to further interesting properties of our new kind of fractal
artificial intelligence.
The
fractal brain theory and this scale free way of looking at the nature
of intelligence would also provide an answer for the excuse given by
various AI researchers for the lack of progress in the field, which is
that we are held back by a lack of computing power. Of all the lame
excuses for why we haven’t been able to achieve much progress over the
past few decades towards the creation of true AI, this one is probably
the lamest. The other lame excuses would include not enough money,
missing maths, or not enough people working on the problem. Once we
understand the fractal or scale free nature of intelligence, then we see that the inability to create AI has had less to do with a lack of computing power and more to do with a lack of theory and insight.
Fractal Artificial Intelligence is self-configuring, auto-scaling & auto-resource allocating
Another
interesting property of the Fractal AI which derives from the fractal
brain theory, is that it is able to grow into the computing resources
i.e. memory and processor power, that are allocated to it. This relates
to the important aspect of the brain theory which sees the process of
neurogenesis, or the process by which real brains come into being through binary cell division, as being continuous with and perfectly reflecting what we would normally understand as the processes of brain and mind. The fractal brain theory conceptualizes the unity of process behind what would normally be considered as very separate processes of the brain, i.e. 1) neurogenesis, 2) spatially connecting up brains, 3) temporal representation and reconstruction, and 4) the evolution of the spatial/temporal representations of the brain. By
coding all these processes in the language of binary trees and binary
combinatorial spaces, the brain theory shows that they are really one
single continuous underlying process.
The
upshot of this is that with the neurogenesis aspect of this master
process, we are literally able to grow our digitized artificial brains
within the memory and CPU resources of the host computers used to run
our fractal AI. This means we start with a seed recursive atom from which the artificial neurogenesis process begins to create the substrate of our digitized brain, which then proceeds to wire up spatially to form spatial representations, which in turn are chained in time. These spatial-temporal representations are then continuously evolved. In this way our fractal AI will expand into the memory space and CPU resources provided for it and scale itself according to the amount of memory available or some inbuilt preset. It would then configure itself to represent salient
stimuli which correlate with its built in drives and rewards, and then
allocate memory and processing time resources towards the representation
and activation of relevant behaviours. In a sense all the different
aspects of our unifying underlying process or algorithm involve the
representing and exploring of binary combinatorial space. This process
of self-configuring and auto-resource allocating involves the scoring
and competing between these combinatorial representations. Thus
processor and memory resources are allocated to those representations
and behaviours which win out in this evolutionary process.
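A toy Python sketch of this grow-into-the-resources idea shows a binary tree ‘brain’ dividing from a single seed node until a rough memory budget is exhausted. The budget rule, class names and per-node cost estimate are invented purely for illustration.

import sys

class Node:
    __slots__ = ("left", "right")
    def __init__(self):
        self.left = None
        self.right = None

def grow_brain(memory_budget_bytes):
    node_cost = sys.getsizeof(Node())           # rough per-node cost
    max_nodes = max(1, memory_budget_bytes // node_cost)
    root = Node()
    frontier = [root]
    count = 1
    while frontier and count + 2 <= max_nodes:  # breadth-first binary divisioning
        node = frontier.pop(0)
        node.left, node.right = Node(), Node()
        frontier += [node.left, node.right]
        count += 2
    return root, count

root, size = grow_brain(memory_budget_bytes=64 * 1024)
print(f"grew a brain of {size} nodes into the given budget")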
This
ability to auto-scale and self-configure relates to our earlier idea of
a scale free conception of how the brain works, i.e. the fractal brain
theory describes the functioning of the simplest brain to the most
complex and all levels in between. This means that whatever the size of
the artificial brain created by our self-sizing and autoconfiguring
algorithm, the way it works will be the same. These digitized artificial
brains of all sizes and all variety of configurations will be
performing and animating exactly the same underlying algorithm and
overarching process.
A New Kind of Whole Brain Emulation with the Fractal Brain Theory
The
fractal brain theory is able to describe real brains from the level of
the binary branching formation of individual neurons and their binary
branching axons and dendrites, up to the level of brain modules and
brain regions; even right up to the level of entire brains and the
emergent thoughts that arise from them. And all with the same binary
tree, binary combinatorial space language, organizational principle and
concept. Therefore with this unifying language which may comprehensively
account for all the important information processing structures of the
brain, and also describe the processes of the brain, we may seek to emulate
entire brains using the Fractal Brain Theory.
Not
only is this new approach suggested by the Fractal Brain Theory grounded in the way that real brains come into being, wire up and represent things spatially and temporally; there are also reasons why this way of doing what is known as ‘whole brain emulation’ or WBE for
short, will produce simulations of the critical aspects of brain
functioning, and by the same token artificial intelligence, which will
run several orders of magnitude faster and more efficiently than other
attempts to simulate the whole brain. Some of these other competing
approaches to whole brain emulation will seek to simulate an incredible
amount of detail relating to the fine scale neurophysiology of neurons
and brain structures. For instance the billion Euro ‘Human brain
project’ led by neuroscientist Henry Markram. These sort of approaches
use vast amounts of supercomputer resources to run their simulations and
typically use up many hours of supercomputer processing time to produce
literally seconds’ worth of simulated whole brain operation. An example is the University of Waterloo’s SPAUN simulation, which was called the ‘world’s largest functioning model of the brain’.
With
our new kind of artificial intelligence we seek to produce whole brain
simulations that run in real time. And due to the scale free, i.e.
fractal nature of our whole brain emulations, we may construct them to
run on smart phones and desktop PCs but also be able to perfectly scale
up our fractal AI to run on the largest and most massively parallel
supercomputers. The new kind of AI derived from the fractal brain theory
will run on whatever computer resources are given to it, above a certain
minimum. The amount of memory resource will determine the size of the
artificial brain we may simulate. The amount of processor power and
efficiency of the communication network will determine the temporal
resolution of the simulation and how fast it runs. Either way, our
fractal AI can be made to run in real time in any circumstance, though
its effectiveness in performing whatever role it has been assigned will
depend on adequate computing resources given to it.
Our
new approach to whole brain emulation relates intimately to some
sentiments concerning brain simulation expressed by the godfather of the
Technological Singularity himself, Ray Kurzweil. To go along with the
massive increase in computing power, i.e. memory and processors, that
will continue over the next few decades, Kurzweil proposes that to
create efficient and fast simulations of entire brains, we may
systematically strip away unnecessary details of brain physiology that
are not directly related to its information processing functions. So an
obvious example would be the brain’s blood vessels and supporting
tissues, these won’t have to be simulated in our whole brain emulations.
But we can carry on this process of stripping down and abstracting away the details of the brain that are superfluous for information processing, to arrive at the simplest bare essence necessary for the creation of
artificial minds. I believe that as we carry on this process of
abstraction to the limit then we arrive at a description of the brain
and mind in terms of binary trees and binary combinatorial spaces.
Things cannot get any more abstract than this. But if our fractal brain
theory is already completely expressed in this language, and also is
able to capture the essential substrates of the brain as well the
emergent representations of mind with this language, then we see
Kurzweil’s insight realize its fullest expression, already fully formed,
in this brain theory. This is an important point, because it means that
the whole brain emulations and artificial intelligence created from the
fractal brain theory will run fast and efficiently, even perhaps
optimally.
In
effect, through the scale free view of what brains are and what minds
do, that the fractal brain theory gives us; we may conceptualize any
brain of any scale or size as a whole brain. The fractal brain theory
shows how any size brain, even a one neuron brain, reflects in
simplified form, the most complex and largest of brains. So in a sense,
from the perspective of the fractal brain theory, even miniature
artificial intelligences running on smart phones, performing voice and
image recognition, natural language processing and data access, will be
mini whole brain emulations. Though of course such an AI will only have a skill set, knowledge and ‘cognitive’ capabilities which are correspondingly tiny compared to fractal artificial intelligences
running on massively parallel supercomputers. And this is what we’ll be
discussing next...
The Fractal Brain Theory implemented on Massively Parallel Supercomputers
When
it comes to implementing the artificial intelligence deriving from the Fractal Brain Theory on massively parallel computers, we discover
that the binary tree data structures that are all pervading in the
theory, fit perfectly with arguably the best way of connecting the many
thousands of processing units in parallel supercomputers. This
communication architecture is called the Binary Fat Tree. In terms of scalability, low blocking rates, data throughput and efficiency it is
generally recognized as the best communication architecture for high
performance, large scale parallel computer designs.
In
the diagram below left we have the Connection Machine 5 (CM5) which was
released in 1991 and which was one of the first commercial parallel
supercomputers to feature the binary fat tree architecture. It took
advantage of the ease of expandability of binary fat tree topologies in
offering customers the option to buy the CM5 one module at a time and
linking them up to progressively increase computing power. In subsequent
years, the binary fat tree architecture fell out of favour, but in 2013 it made a stunning return in the form of the world’s currently fastest supercomputer, China’s Tianhe-2, with a peak speed of around 45 to 55 PFlops, a PetaFlop being 1,000,000,000,000,000 calculations a second. Within the next 6 years or so, China aims for exascale computing, where 1 Exaflop = 1000 PetaFlops. Due to the inherent ability of binary fat tree communication architectures to scale well, there is
no reason to think that these future exascale efforts will not also
employ fat tree topologies.
Below
is a diagram of a fat tree network, actually a schematic of the CM5
communication architecture. We can see from this visual depiction of the
binary fat tree topology why it is so called. It is because the number
of channels involved in linking up the branching nodes gets
progressively ‘fatter’ as we traverse up to the root node. We have a thick, very high bandwidth bundle of ‘wires’ at the top, and these bundles get thinner and lower in bandwidth as we go down towards the end nodes. These end nodes would be the actual microprocessors or computing
modules which do the calculations and execute the code. In this
schematic of the CM5 communication network we can see depicted on the
far right side, nodes which are dedicated to I/O or input/output
functions which would include things like disc storage access and other
interfaces to the outside world.
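The ‘fatness’ of the tree can be illustrated with a small Python sketch that tallies nodes and link bundles level by level in an idealized binary fat tree, where the bundle width doubles at each level going up so that aggregate bandwidth is preserved. The per-link bandwidth figure and leaf count are invented for illustration.

def fat_tree_levels(num_leaves, link_gbps=10):
    levels = []
    nodes = num_leaves
    links_per_node = 1
    level = 0
    while nodes >= 1:
        levels.append((level, nodes, links_per_node,
                       nodes * links_per_node * link_gbps))
        nodes //= 2            # half as many nodes one level up...
        links_per_node *= 2    # ...but each carries twice the link bundle
        level += 1
    return levels

for level, nodes, links, bandwidth in fat_tree_levels(16):
    print(f"level {level}: {nodes:2d} nodes, {links:2d} links each, "
          f"{bandwidth} Gb/s aggregate in the bundles at this level")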
So
the implementation of the fractal brain theory as software running on parallel supercomputers fits hand in glove with binary fat tree architectures, because the brain theory reduces all brain and mind to the language of binary trees and is able to conceptualize all of it as a single integrated top-down binary hierarchy. Apart from this binary way of looking at brains and minds finding a lot of correspondence with empirical results from neuroscience, we may even think of actual brains as also employing a binary fat tree like arrangement in their nerve fibre
callosum connecting up the two hemispheres forms the fattest most
visible bundle of nerve fibres in the entire brain. Next and lower down
the binary hierarchy would be the various longitudinal fasciculi
connecting the anterior and posterior halves of the brain. After this
would come the more local circuits connecting up progressively more
adjacent and smaller volumes of brain mass but with progressively
thinner fibre tracts. It would be an interesting avenue for future brain
research to see how far this line of reasoning extends to real brains.
It
is inevitable that all sorts of computing devices will employ more and
more parallel processors to gain increments in performance. On our smart
phones it is not uncommon to find quad-core processors, with octa-core or 8 processing unit phones coming onto the market in 2014 and more to come in 2015. This is a natural consequence of the coming to an end of
what is known as ‘Moore’s law’ which predicts the exponential increase
in computing power from the progressive miniaturization of electronic
circuits. We are almost at the physical limits as to how far this
process may continue. There are various other ways to achieve
performance gains. The most long term and viable of all of these is
through parallel computing using multiple to myriad numbers of
processors. And the most optimal way to wire up these future parallel
computers is by using binary fat tree topologies which will become
increasingly ubiquitous over time I believe. What is currently a very
high performance network topology used only in the fastest and most
state of the art supercomputers, will in the future be found in the
home, on our smart phones and in future Playstations and Xboxes. I also
believe these devices will be running the AI derived from the Fractal
Brain Theory, i.e. the animated digitized binary brains and minds which
the theory describes.
In
many ways the brain theory is minimal and highly efficient, even
theoretically optimal in some instances. It has natural and inherent
utility, not least in its ability to capture the functionality of some
of the most useful algorithms in AI and computer science, as described
earlier. When we couple this with the efficiency and optimality of
binary fat tree systems, then apart from convenience and unity of
concept, we may also be able to take advantage of hardware level
optimizations when implementing the brain theory on these architectures,
that would not exist with other approaches to doing AI or WBE (Whole
brain emulation). The perfect match between the binary structures of the
brain theory and binary fat tree topologies may allow us to engineer
our communication architectures specifically with the implementation of
the brain theory in mind to maximise performance. I envisage the
eventual hard-coding of the brain theory into custom silicon or ASICs
(Application specific integrated circuits). After the first few
generations of software manifestations of AI created from the brain
theory, the next step would be ASIC hard-coded implementations wired
together using binary fat tree networks. And the next step? Probably
quantum computer implementations of the brain theory at least in part to
create hybrid systems. And how may all these fruits of the fractal
brain theory come about?
The Start Up at the End of Time
The
advent of artificial intelligence and instigation of the Technological
Singularity would be such a dramatic happening that it would signal the
conclusion of the existing era and usher in a whole new epoch. And so it would be like a punctuation mark separating out the great cycles of time. One time cycle or epoch would end and another would begin. The cycles we’re talking about are not business cycles, seasonal cycles or
annual cycles. They’re not even cycles covering centuries or several
millennia. The immediate implications of a final understanding of brain
and mind and the advent of AI, together with transformations associated
with the Technological Singularity, would be so far reaching that they
represent a whole phase shift in planetary evolution comparable to the
dawning of life on earth, or the emergence of multicellular animals from
single celled life forms. I believe that the time cycle which is coming
to an end, or has already recently done so, would at least cover all of
recorded human civilization and span many thousands of years. 26,000
years would be a nice number. But these sorts of specifics are not
necessarily the central concern. The central concerns are the so called ‘grand challenges’ facing humanity. This is a critical time of danger, impending chaos and calamity, but also one of opportunity and great hope. I believe the fractal brain theory and the technology deriving from it will be critical in the unfolding of this great drama. I see this brain theory and ideas related to it as a major world historic revelation. It is the bringing of this theory to the attention of the world which is the next step. A closely related and concurrent process is the creation of artificial intelligence from this theory, and instrumental to this will be the foundation of the start-up at the end
of time.
This
goal of creating AI, this modern quest and Holy Grail of present times
is a very hot topic right now and will increasingly be so as the years
go by. Many eyes are on the prize and are searching for this Golden
Fleece, Philosopher’s Stone and Mythic White Whale. Many of the top
minds, richest corporations, most prestigious academic institutions and
even governments of the world have their sights firmly set on the
related goals of trying to figure out how the brain works, the mystery
of mind and the creation of true artificial intelligence. It is
something very much in the news and getting a lot of attention right
now. The creation of artificial intelligence has always excited people’s
imaginations. We are living in the era when the dream will finally
become reality. Science fiction will become scientific and technological
fact.
Leading
this charge is Google corporation. The multi-billion behemoth has
always had the creation of ‘strong’ AI as one of its aims, as expressed at various times by Google’s founders and also its CEO Eric Schmidt. After
all the ultimate search engine, as Larry Page himself has said, is
artificial intelligence. A giant mind that reads, views and listens to
everything on the web and can give answers on the totality of what it has ‘learned’ on request, together with advertisements juxtaposed to the answers of course. Towards this end they hired a while back ‘the
teacher of AI to the world’, Peter Norvig. They’ve acquired the
godfather of the Technological Singularity himself, Ray Kurzweil, and Geoffrey Hinton, who is a central figure in the world of neural networks (now called deep learning). Recently they’ve also bought in child computer game prodigy, leading neuroscientist and artificial intelligence programmer Demis Hassabis with their purchase of London start-up DeepMind. And likewise Facebook corporation seems to be
also showing a big interest in AI with the setting up of their AI
research division headed by leading neural network researcher Yann LeCun and their investment in AI start-up Vicarious. Other computer corporations such as IBM have recently intensified their efforts in AI research and in finding practical applications for their Watson system
which famously won the TV quiz show Jeopardy a few years ago. And then
there are all the various academic institutions, too numerous to list,
with efforts in AI or brain simulation. It is also worth mentioning the 1
billion euro European Union funded project to simulate the human brain
led by neuroscientist Henry Markram.
There
seems to be in the world today a massive resurgence of effort towards
the goal of understanding the brain and creating AI. Into this context,
and from pretty much out of nowhere the Symmetry, Self-Similarity and
Recursivity Brain Theory enters into the arena. Out of the
disconnection, comes integration; out of the confusion comes clarity;
and out of the darkness light. With the Fractal Brain Theory comes the
integration of brain and mind science with computer science and
artificial intelligence. And also a tremendous unifying of ideas and
conceptions. The start-up at the end of time will translate these
insights and theoretical constructs into prototypes and products. It
will act as the midwife of artificial intelligence and also the womb for
the genesis of the thinking machine. It will seed the creation of a new
industry and be ground zero, the epicentre for the birth of the
Technological Singularity. And where will this start-up at the end of time begin? Naturally it will be founded in London, the city where the brain theory was steadily formulated and came into being over a period of around 25 years. Already a burgeoning tech hub, it is still waiting
for its major corporation to emerge. Potentially the start-up at the end
of time will progress to become a corporation bigger than Google,
Apple, IBM, Microsoft and Facebook combined. When the full implications of the brain theory and the technology associated with it are fully recognized, then why set our sights too low? The scale of wealth
generation from the creation of AI both directly and indirectly is
incalculable. For it is the technology that is able to create
technology, it is the invention which is able to invent. When we think
about how it is intelligence that is the real wealth generator in the
modern economy and prime creator of value then we start to see the real
implications of AI and envisage why all the fantastic speculations
around the coming of the Technological Singularity may not seem so wild
or fanciful. And so it is that these projections for the future of the start-up at the end of time may not seem so exaggerated, but rather measured and perhaps even conservative.
What
will emerge is not just a single corporate entity but rather the
creation of an entire industry which will span and merge many existing
industries. In a sense all the information technologies converge towards
artificial intelligence. In the same way one of the founders of Google
corporation states that the future of search, Google’s primary business
and income stream, is AI. The same can also be said for all the rest of
the information technology businesses of the world. This convergence
will initially take the form of existing products and services
incorporating aspects of AI and various functionalities derived from it.
Later on AI will become central to how all these products and services
work and how they are designed. This will create a huge and dense nexus
of closely related industries, involved in telecommunications,
processor design and manufacture, consumer electronics, games,
entertainment, search, advertising and media. Eventually they will
function as one conglomeration with AI being the unifying glue behind it
all. Out of this crucible of Artificial Intelligence emerges the
Technological Singularity and the alchemical transformation of the world. This will be a few years into the future. And the future starts with a
start-up. The Start-up at the end of time.