
Saturday, December 1, 2018

Brain–computer interface (updated)

From Wikipedia, the free encyclopedia
Diwakar Vaish, an inventor of a brain-controlled wheelchair, during a press ceremony.

A brain–computer interface (BCI), sometimes called a neural-control interface (NCI), mind-machine interface (MMI), direct neural interface (DNI), or brain–machine interface (BMI), is a direct communication pathway between an enhanced or wired brain and an external device. BCI differs from neuromodulation in that it allows for bidirectional information flow. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions.

Research on BCIs began in the 1970s at the University of California, Los Angeles (UCLA) under a grant from the National Science Foundation, followed by a contract from DARPA. The papers published after this research also mark the first appearance of the expression brain–computer interface in scientific literature.

The field of BCI research and development has since focused primarily on neuroprosthetics applications that aim at restoring damaged hearing, sight and movement. Thanks to the remarkable cortical plasticity of the brain, signals from implanted prostheses can, after adaptation, be handled by the brain like natural sensor or effector channels. Following years of animal experimentation, the first neuroprosthetic devices implanted in humans appeared in the mid-1990s.

History

The history of brain–computer interfaces (BCIs) starts with Hans Berger's discovery of the electrical activity of the human brain and the development of electroencephalography (EEG). In 1924 Berger was the first to record human brain activity by means of EEG. Berger was able to identify oscillatory activity, such as Berger's wave or the alpha wave (8–13 Hz), by analyzing EEG traces.
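Berger's identification of the alpha rhythm by inspecting EEG traces corresponds, in modern terms, to measuring spectral power in the 8–13 Hz band. A minimal sketch of that idea (the synthetic one-second signal, sampling rate, and comparison band below are illustrative assumptions, not Berger's data):

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` within [f_lo, f_hi] Hz, estimated with an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].sum()

# Synthetic one-second EEG trace: a 10 Hz "alpha" oscillation plus broadband noise.
fs = 256  # sampling rate in Hz
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(fs)

alpha = band_power(eeg, fs, 8, 13)   # Berger's alpha band
beta = band_power(eeg, fs, 14, 30)   # neighbouring beta band, for comparison
print(alpha > beta)  # the 10 Hz component dominates -> True
```

The same band-power comparison underlies many of the EEG-based control schemes described later in the article.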

Berger's first recording device was very rudimentary. He inserted silver wires under the scalps of his patients. These were later replaced by silver foils attached to the patient's head by rubber bandages. Berger connected these sensors to a Lippmann capillary electrometer, with disappointing results. However, more sophisticated measuring devices, such as the Siemens double-coil recording galvanometer, which displayed electric voltages as small as one ten thousandth of a volt, led to success.

Berger analyzed the interrelation of alterations in his EEG wave diagrams with brain diseases. EEGs opened up completely new possibilities for research into human brain activity.

Although the term had not yet been coined, one of the earliest examples of a working brain-machine interface was the piece Music for Solo Performer (1965) by the American composer Alvin Lucier. The piece makes use of EEG and analog signal processing hardware (filters, amplifiers, and a mixing board) to stimulate acoustic percussion instruments. To perform the piece one must produce alpha waves and thereby "play" the various percussion instruments via loudspeakers which are placed near or directly on the instruments themselves.

UCLA Professor Jacques Vidal coined the term "BCI" and produced the first peer-reviewed publications on this topic. Vidal is widely recognized as the inventor of BCIs in the BCI community, as reflected in numerous peer-reviewed articles reviewing and discussing the field. His 1973 paper stated the "BCI challenge": control of objects using EEG signals. In particular, he pointed to the Contingent Negative Variation (CNV) potential as a challenge for BCI control. The 1977 experiment Vidal described was the first application of BCI after his 1973 BCI challenge: noninvasive EEG (actually Visual Evoked Potential (VEP)) control of a cursor-like graphical object on a computer screen. The demonstration was movement through a maze.

After his early contributions, Vidal was not active in BCI research, nor BCI events such as conferences, for many years. In 2011, however, he gave a lecture in Graz, Austria, supported by the Future BNCI project, presenting the first BCI, which earned a standing ovation. Vidal was joined by his wife, Laryce Vidal, who previously worked with him at UCLA on his first BCI project.

In 1988, a report was given on noninvasive EEG control of a physical object, a robot. The experiment described EEG control of multiple start-stop-restart cycles of robot movement along an arbitrary trajectory defined by a line drawn on the floor. Line-following was the default robot behavior, relying on its own autonomous intelligence and autonomous source of energy.

In 1990, a report was given on a bidirectional adaptive BCI that controlled a computer buzzer by means of an anticipatory brain potential, the Contingent Negative Variation (CNV). The experiment described how an expectation state of the brain, manifested by the CNV, controlled the S2 buzzer in a feedback loop within the S1-S2-CNV paradigm. The resulting cognitive wave, representing expectation learning in the brain, was named the electroexpectogram (EXG). The CNV brain potential was part of the BCI challenge presented by Vidal in his 1973 paper.

In 2015, the BCI Society was officially launched. This non-profit organization is managed by an international board of BCI experts from different sectors (academia, industry, and medicine) with experience in different types of BCIs, such as invasive/non-invasive and control/non-control. The board is elected by the members of the Society, which has several hundred members. Among other responsibilities, the BCI Society organizes international meetings.

Versus neuroprosthetics

Neuroprosthetics is an area of neuroscience concerned with neural prostheses, that is, the use of artificial devices to replace the function of impaired nervous systems or sensory organs. The most widely used neuroprosthetic device is the cochlear implant, which, as of December 2010, had been implanted in approximately 220,000 people worldwide. There are also several neuroprosthetic devices that aim to restore vision, including retinal implants.

The difference between BCIs and neuroprosthetics is mostly in how the terms are used: neuroprosthetics typically connect the nervous system to a device, whereas BCIs usually connect the brain (or nervous system) with a computer system. Practical neuroprosthetics can be linked to any part of the nervous system—for example, peripheral nerves—while the term "BCI" usually designates a narrower class of systems which interface with the central nervous system.
The terms are sometimes, however, used interchangeably. Neuroprosthetics and BCIs seek to achieve the same aims, such as restoring sight, hearing, movement, ability to communicate, and even cognitive function. Both use similar experimental methods and surgical techniques.

Animal BCI research

Several laboratories have managed to record signals from monkey and rat cerebral cortices to operate BCIs to produce movement. Monkeys have navigated computer cursors on screen and commanded robotic arms to perform simple tasks simply by thinking about the task and seeing the visual feedback, but without any motor output. In May 2008 photographs that showed a monkey at the University of Pittsburgh Medical Center operating a robotic arm by thinking were published in a number of well-known science journals and magazines.

Early work

Monkey operating a robotic arm with brain–computer interfacing (Schwartz lab, University of Pittsburgh)

In 1969 the operant conditioning studies of Fetz and colleagues, at the Regional Primate Research Center and Department of Physiology and Biophysics, University of Washington School of Medicine in Seattle, showed for the first time that monkeys could learn to control the deflection of a biofeedback meter arm with neural activity. Similar work in the 1970s established that monkeys could quickly learn to voluntarily control the firing rates of individual and multiple neurons in the primary motor cortex if they were rewarded for generating appropriate patterns of neural activity.

Studies that developed algorithms to reconstruct movements from motor cortex neurons, which control movement, date back to the 1970s. In the 1980s, Apostolos Georgopoulos at Johns Hopkins University found a mathematical relationship between the electrical responses of single motor cortex neurons in rhesus macaque monkeys and the direction in which they moved their arms (based on a cosine function). He also found that dispersed groups of neurons, in different areas of the monkeys' brains, collectively controlled motor commands, but was able to record the firings of neurons in only one area at a time because of the technical limitations imposed by his equipment.
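Georgopoulos's cosine relationship, and the idea that a dispersed group of neurons collectively encodes direction, can be illustrated with an idealized, noiseless population-vector model (the baseline rate, modulation depth, and evenly spaced preferred directions are hypothetical choices for this sketch):

```python
import numpy as np

n_neurons = 50
# Idealized, evenly spaced preferred directions, one per neuron.
preferred = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)

def firing_rate(theta, pref, baseline=20.0, depth=15.0):
    """Cosine tuning: the rate peaks when movement direction matches `pref`."""
    return baseline + depth * np.cos(theta - pref)

def population_vector(rates, preferred, baseline=20.0):
    """Decode direction as the modulation-weighted vector sum of preferred directions."""
    w = rates - baseline  # each cell's modulation around baseline
    x = np.sum(w * np.cos(preferred))
    y = np.sum(w * np.sin(preferred))
    return np.arctan2(y, x) % (2 * np.pi)

true_dir = np.deg2rad(60)
rates = firing_rate(true_dir, preferred)
decoded = np.rad2deg(population_vector(rates, preferred))
print(round(decoded, 1))  # prints 60.0
```

With noise and unevenly distributed preferred directions, the decoded angle is only approximate, which is why later decoders recorded from as many neurons as possible.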

There has been rapid development in BCIs since the mid-1990s. Several groups have been able to capture complex brain motor cortex signals by recording from neural ensembles (groups of neurons) and using these to control external devices.

Prominent research successes

Kennedy and Yang Dan

Phillip Kennedy (who later founded Neural Signals in 1987) and colleagues built the first intracortical brain–computer interface by implanting neurotrophic-cone electrodes into monkeys.

Yang Dan and colleagues' recordings of cat vision using a BCI implanted in the lateral geniculate nucleus (top row: original image; bottom row: recording)

In 1999, researchers led by Yang Dan at the University of California, Berkeley decoded neuronal firings to reproduce images seen by cats. The team used an array of electrodes embedded in the thalamus (which integrates all of the brain’s sensory input) of sharp-eyed cats. Researchers targeted 177 brain cells in the thalamus lateral geniculate nucleus area, which decodes signals from the retina. The cats were shown eight short movies, and their neuron firings were recorded. Using mathematical filters, the researchers decoded the signals to generate movies of what the cats saw and were able to reconstruct recognizable scenes and moving objects. Similar results in humans have since been achieved by researchers in Japan.
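The "mathematical filters" used to decode the thalamic signals were linear reconstruction filters. A toy version of linear stimulus reconstruction, with a made-up random linear encoder standing in for the real LGN responses (all sizes and noise levels here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n_pixels, n_cells, n_frames = 16, 40, 500

# Hypothetical encoding: each cell responds linearly to the stimulus, plus noise.
encoder = rng.standard_normal((n_cells, n_pixels))
stimuli = rng.standard_normal((n_frames, n_pixels))  # "movie" frames
responses = stimuli @ encoder.T + 0.1 * rng.standard_normal((n_frames, n_cells))

# Fit a linear decoder (optimal linear filter) mapping responses back to frames.
decoder, *_ = np.linalg.lstsq(responses, stimuli, rcond=None)

# Reconstruct a held-out frame from its neural response alone.
frame = rng.standard_normal(n_pixels)
resp = frame @ encoder.T
reconstruction = resp @ decoder
err = np.linalg.norm(reconstruction - frame) / np.linalg.norm(frame)
print(err < 0.2)  # the reconstruction closely matches the original frame
```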

Nicolelis

Miguel Nicolelis, a professor at Duke University, in Durham, North Carolina, has been a prominent proponent of using multiple electrodes spread over a greater area of the brain to obtain neuronal signals to drive a BCI.

After conducting initial studies in rats during the 1990s, Nicolelis and his colleagues developed BCIs that decoded brain activity in owl monkeys and used the devices to reproduce monkey movements in robotic arms. Monkeys have advanced reaching and grasping abilities and good hand manipulation skills, making them ideal test subjects for this kind of work.

By 2000 the group succeeded in building a BCI that reproduced owl monkey movements while the monkey operated a joystick or reached for food. The BCI operated in real time and could also control a separate robot remotely over Internet protocol. But the monkeys could not see the arm moving and did not receive any feedback, a so-called open-loop BCI.

Diagram of the BCI developed by Miguel Nicolelis and colleagues for use on rhesus monkeys

Later experiments by Nicolelis using rhesus monkeys succeeded in closing the feedback loop and reproduced monkey reaching and grasping movements in a robot arm. With their deeply cleft and furrowed brains, rhesus monkeys are considered to be better models for human neurophysiology than owl monkeys. The monkeys were trained to reach and grasp objects on a computer screen by manipulating a joystick while corresponding movements by a robot arm were hidden. The monkeys were later shown the robot directly and learned to control it by viewing its movements. The BCI used velocity predictions to control reaching movements and simultaneously predicted handgripping force. In 2011, O'Doherty and colleagues demonstrated a BCI with sensory feedback in rhesus monkeys. The monkey controlled the position of an avatar arm with its brain while receiving sensory feedback through intracortical microstimulation (ICMS) in the arm representation area of the sensory cortex.

Donoghue, Schwartz and Andersen

Other laboratories which have developed BCIs and algorithms that decode neuron signals include those run by John Donoghue at Brown University, Andrew Schwartz at the University of Pittsburgh and Richard Andersen at Caltech. These researchers have been able to produce working BCIs, even using recorded signals from far fewer neurons than did Nicolelis (15–30 neurons versus 50–200 neurons).

Donoghue's group reported training rhesus monkeys to use a BCI to track visual targets on a computer screen (closed-loop BCI) with or without assistance of a joystick. Schwartz's group created a BCI for three-dimensional tracking in virtual reality and also reproduced BCI control in a robotic arm. The same group also created headlines when they demonstrated that a monkey could feed itself pieces of fruit and marshmallows using a robotic arm controlled by the animal's own brain signals.

Andersen's group used recordings of premovement activity from the posterior parietal cortex in their BCI, including signals created when experimental animals anticipated receiving a reward.

Other research

In addition to predicting kinematic and kinetic parameters of limb movements, BCIs that predict electromyographic or electrical activity of the muscles of primates are being developed. Such BCIs could be used to restore mobility in paralyzed limbs by electrically stimulating muscles.

Miguel Nicolelis and colleagues demonstrated that the activity of large neural ensembles can predict arm position. This work made possible creation of BCIs that read arm movement intentions and translate them into movements of artificial actuators. Carmena and colleagues programmed the neural coding in a BCI that allowed a monkey to control reaching and grasping movements by a robotic arm. Lebedev and colleagues argued that brain networks reorganize to create a new representation of the robotic appendage in addition to the representation of the animal's own limbs.
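Decoders of this kind are typically linear models that regress limb position onto recent ensemble activity. A minimal Wiener-filter-style sketch (the simulated trajectory, random tuning matrix, and lag count are all illustrative assumptions, not any lab's actual model):

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_samples, n_lags = 30, 1000, 5

# Hypothetical 2-D arm trajectory (random walk) and position-tuned ensemble rates.
pos = np.cumsum(rng.standard_normal((n_samples, 2)), axis=0)
tuning = rng.standard_normal((n_neurons, 2))
rates = pos @ tuning.T + 0.5 * rng.standard_normal((n_samples, n_neurons))

# Wiener-style decoder: regress position on the last n_lags bins of ensemble activity.
X = np.hstack([rates[n_lags - 1 - k : n_samples - k] for k in range(n_lags)])
Y = pos[n_lags - 1 :]
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

pred = X @ W
corr = np.corrcoef(pred[:, 0], Y[:, 0])[0, 1]
print(corr > 0.9)  # decoded x-position tracks the true trajectory
```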

The biggest impediment to BCI technology at present is the lack of a sensor modality that provides safe, accurate and robust access to brain signals. It is conceivable or even likely, however, that such a sensor will be developed within the next twenty years. The use of such a sensor should greatly expand the range of communication functions that can be provided using a BCI.

Development and implementation of a BCI system is complex and time consuming. In response to this problem, Gerwin Schalk has been developing a general-purpose system for BCI research, called BCI2000. BCI2000 has been in development since 2000 in a project led by the Brain–Computer Interface R&D Program at the Wadsworth Center of the New York State Department of Health in Albany, New York, United States.

A new 'wireless' approach uses light-gated ion channels such as Channelrhodopsin to control the activity of genetically defined subsets of neurons in vivo. In the context of a simple learning task, illumination of transfected cells in the somatosensory cortex influenced the decision making process of freely moving mice.

The use of BMIs has also led to a deeper understanding of neural networks and the central nervous system. Research has shown that, despite the inclination of neuroscientists to believe that neurons have the most effect when working together, single neurons can be conditioned through the use of BMIs to fire in a pattern that allows primates to control motor outputs. The use of BMIs has led to the development of the single neuron insufficiency principle, which states that even with a well-tuned firing rate a single neuron can carry only a limited amount of information, so the highest level of accuracy is achieved by recording the firings of the collective ensemble. Other principles discovered with the use of BMIs include the neuronal multitasking principle, the neuronal mass principle, the neural degeneracy principle, and the plasticity principle.
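The single neuron insufficiency principle can be illustrated with a toy simulation: each simulated neuron carries the same weak signal about a binary intention, and decoding accuracy rises as more neurons are pooled (the effect size, noise level, and trial counts here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

def decode_accuracy(n_neurons, n_trials=2000):
    """Classify a binary intention from noisy rates: each neuron fires slightly
    higher in state 1 than in state 0; decode by thresholding the ensemble mean."""
    state = rng.integers(0, 2, n_trials)
    rates = state[:, None] * 0.3 + rng.standard_normal((n_trials, n_neurons))
    decoded = rates.mean(axis=1) > 0.15  # midpoint between the two state means
    return (decoded == state).mean()

acc1 = decode_accuracy(1)      # a single weakly informative neuron
acc100 = decode_accuracy(100)  # the collective ensemble
print(acc1 < acc100)  # ensembles beat single neurons -> True
```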

BCIs have also been proposed for users without disabilities. A user-centered categorization of BCI approaches by Thorsten O. Zander and Christian Kothe introduces the term passive BCI. Alongside active and reactive BCIs, which are used for directed control, passive BCIs allow for assessing and interpreting changes in the user's state during human–computer interaction (HCI). In a secondary, implicit control loop, the computer system adapts to its user, improving its usability in general.

The BCI Award

The Annual BCI Research Award is awarded in recognition of outstanding and innovative research in the field of Brain-Computer Interfaces. Each year, a renowned research laboratory is asked to judge the submitted projects. The jury consists of world-leading BCI experts recruited by the awarding laboratory. The jury selects twelve nominees, then chooses a first, second, and third-place winner, who receive awards of $3,000, $2,000, and $1,000, respectively. The following list presents the first-place winners of the Annual BCI Research Award:
  • 2010: Cuntai Guan, Kai Keng Ang, Karen Sui Geok Chua and Beng Ti Ang, (A*STAR, Singapore);
Motor imagery-based Brain-Computer Interface robotic rehabilitation for stroke.
  • 2011:
What are the neuro-physiological causes of performance variations in brain-computer interfacing?
  • 2012: Surjo R. Soekadar and Niels Birbaumer, (Applied Neurotechnology Lab, University Hospital Tübingen and Institute of Medical Psychology and Behavioral Neurobiology, Eberhard Karls University, Tübingen, Germany);
Improving Efficacy of Ipsilesional Brain-Computer Interface Training in Neurorehabilitation of Chronic Stroke.
  • 2013: M. C. Dadarlat, J. E. O’Doherty, P. N. Sabes (Department of Physiology, Center for Integrative Neuroscience, San Francisco, CA, US, UC Berkeley-UCSF Bioengineering Graduate Program, University of California, San Francisco, CA, US);
A learning-based approach to artificial sensory feedback: intracortical microstimulation replaces and augments vision.
  • 2014:
Airborne Ultrasonic Tactile Display BCI.
  • 2015: Guy Hotson, David P McMullen, Matthew S. Fifer, Matthew S. Johannes, Kapil D. Katyal, Matthew P. Para, Robert Armiger, William S. Anderson, Nitish V. Thakor, Brock A. Wester, Nathan E. Crone (Johns Hopkins University, USA);
Individual Finger Control of the Modular Prosthetic Limb using High-Density Electrocorticography in a Human Subject.
  • 2016: Gaurav Sharma, Nick Annetta, Dave Friedenberg, Marcie Bockbrader, Ammar Shaikhouni, W. Mysiw, Chad Bouton, Ali Rezai (Battelle Memorial Institute, The Ohio State University, USA);
An Implanted BCI for Real-Time Cortical Control of Functional Wrist and Finger Movements in a Human with Quadriplegia.
  • 2017: S. Aliakbaryhosseinabadi, E. N. Kamavuako, N. Jiang, D. Farina, N. Mrachacz-Kersting (Center for Sensory-Motor Interaction, Department of Health Science and Technology, Aalborg University, Aalborg, Denmark; Department of Systems Design Engineering, Faculty of Engineering, University of Waterloo, Waterloo, Canada; and Imperial College London, London, UK).
Online adaptive brain-computer interface with attention variations.

Human BCI research

Invasive BCIs

Vision

Jens Naumann, a man with acquired blindness, being interviewed about his vision BCI on CBS's The Early Show

Invasive BCI research has targeted repairing damaged sight and providing new functionality for people with paralysis. Invasive BCIs are implanted directly into the grey matter of the brain during neurosurgery. Because they lie in the grey matter, invasive devices produce the highest quality signals of BCI devices but are prone to scar-tissue build-up, causing the signal to become weaker, or even non-existent, as the body reacts to a foreign object in the brain.

In vision science, direct brain implants have been used to treat non-congenital (acquired) blindness. One of the first scientists to produce a working brain interface to restore sight was private researcher William Dobelle.

Dobelle's first prototype was implanted into "Jerry", a man blinded in adulthood, in 1978. A single-array BCI containing 68 electrodes was implanted onto Jerry's visual cortex and succeeded in producing phosphenes, the sensation of seeing light. The system included cameras mounted on glasses to send signals to the implant. Initially, the implant allowed Jerry to see shades of grey in a limited field of vision at a low frame-rate. It also required him to be hooked up to a mainframe computer, but shrinking electronics and faster computers later made his artificial eye more portable and enabled him to perform simple tasks unassisted.

Dummy unit illustrating the design of a BrainGate interface

In 2002, Jens Naumann, also blinded in adulthood, became the first in a series of 16 paying patients to receive Dobelle’s second generation implant, marking one of the earliest commercial uses of BCIs. The second generation device used a more sophisticated implant enabling better mapping of phosphenes into coherent vision. Phosphenes are spread out across the visual field in what researchers call "the starry-night effect". Immediately after his implant, Jens was able to use his imperfectly restored vision to drive an automobile slowly around the parking area of the research institute. Unfortunately, Dobelle died in 2004 before his processes and developments were documented. Subsequently, when Mr. Naumann and the other patients in the program began having problems with their vision, there was no relief and they eventually lost their "sight" again. Naumann wrote about his experience with Dobelle's work in Search for Paradise: A Patient's Account of the Artificial Vision Experiment and has returned to his farm in Southeast Ontario, Canada, to resume his normal activities.

Movement

BCIs focusing on motor neuroprosthetics aim to either restore movement in individuals with paralysis or provide devices to assist them, such as interfaces with computers or robot arms.
Researchers at Emory University in Atlanta, led by Philip Kennedy and Roy Bakay, were the first to install a brain implant in a human that produced signals of high enough quality to simulate movement. Their patient, Johnny Ray (1944–2002), developed locked-in syndrome after a brain-stem stroke in 1997. Ray's implant was installed in 1998 and he lived long enough to start working with it, eventually learning to control a computer cursor; he died in 2002 of a brain aneurysm.

Tetraplegic Matt Nagle became the first person to control an artificial hand using a BCI in 2005 as part of the first nine-month human trial of Cyberkinetics's BrainGate chip-implant. Implanted in Nagle's right precentral gyrus (area of the motor cortex for arm movement), the 96-electrode BrainGate implant allowed Nagle to control a robotic arm by thinking about moving his hand, as well as a computer cursor, lights and TV. One year later, Professor Jonathan Wolpaw received the Altran Foundation for Innovation prize for developing a brain–computer interface with electrodes located on the surface of the skull instead of directly in the brain.

More recently, research teams led by the Braingate group at Brown University and a group led by University of Pittsburgh Medical Center, both in collaborations with the United States Department of Veterans Affairs, have demonstrated further success in direct control of robotic prosthetic limbs with many degrees of freedom using direct connections to arrays of neurons in the motor cortex of patients with tetraplegia.

Partially invasive BCIs

Partially invasive BCI devices are implanted inside the skull but rest outside the brain rather than within the grey matter. They produce better resolution signals than non-invasive BCIs where the bone tissue of the cranium deflects and deforms signals and have a lower risk of forming scar-tissue in the brain than fully invasive BCIs. There has been preclinical demonstration of intracortical BCIs from the stroke perilesional cortex.

Electrocorticography (ECoG) measures the electrical activity of the brain taken from beneath the skull in a similar way to non-invasive electroencephalography, but the electrodes are embedded in a thin plastic pad that is placed above the cortex, beneath the dura mater. ECoG technologies were first trialled in humans in 2004 by Eric Leuthardt and Daniel Moran from Washington University in St Louis. In a later trial, the researchers enabled a teenage boy to play Space Invaders using his ECoG implant. This research indicates that control is rapid, requires minimal training, and may be an ideal tradeoff with regards to signal fidelity and level of invasiveness.

(Note: these electrodes had not been implanted in the patient with the intention of developing a BCI. The patient had been suffering from severe epilepsy and the electrodes were temporarily implanted to help his physicians localize seizure foci; the BCI researchers simply took advantage of this.)

Signals can be either subdural or epidural, but are not taken from within the brain parenchyma itself. ECoG had not been studied extensively until recently because of limited access to subjects. Currently, the only way to acquire the signal for study is through patients who require invasive monitoring for localization and resection of an epileptogenic focus.

ECoG is a very promising intermediate BCI modality: it has higher spatial resolution, a better signal-to-noise ratio, a wider frequency range, and lower training requirements than scalp-recorded EEG, and at the same time lower technical difficulty, lower clinical risk, and probably superior long-term stability compared with intracortical single-neuron recording. This feature profile, together with recent evidence of a high level of control with minimal training requirements, shows potential for real-world application for people with motor disabilities.

Light reactive imaging BCI devices are still in the realm of theory. These would involve implanting a laser inside the skull. The laser would be trained on a single neuron and the neuron's reflectance measured by a separate sensor. When the neuron fires, the laser light pattern and wavelengths it reflects would change slightly. This would allow researchers to monitor single neurons but require less contact with tissue and reduce the risk of scar-tissue build-up.

Non-invasive BCIs

There have also been experiments in humans using non-invasive neuroimaging technologies as interfaces. The substantial majority of published BCI work involves noninvasive EEG-based BCIs. Noninvasive EEG-based technologies and interfaces have been used for a much broader variety of applications. Although EEG-based interfaces are easy to wear and do not require surgery, they have relatively poor spatial resolution and cannot effectively use higher-frequency signals, because the skull dampens signals, dispersing and blurring the electromagnetic waves created by the neurons. EEG-based interfaces also require some time and effort prior to each usage session, whereas non-EEG-based ones, as well as invasive ones, require no training prior to each use. Overall, the best BCI for each user depends on numerous factors.

Non-EEG-based human–computer interface

Pupil-size oscillation
In a 2016 article, researchers described an entirely new communication device and non-EEG-based human–computer interface that requires no visual fixation, or ability to move the eyes at all. The interface is based on covert interest in a chosen letter on a virtual keyboard, i.e. without fixing the eyes on it. Each letter has its own background circle that micro-oscillates in brightness on a different time course. A letter is selected by finding the best fit between the user's unintentional pupil-size oscillation pattern and the brightness oscillation pattern of that letter's circle. Accuracy is further improved when the user mentally rehearses the words 'bright' and 'dark' in synchrony with the brightness transitions of the circle/letter.
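The selection principle — matching the involuntary pupil trace against each letter's known brightness oscillation — can be sketched as a correlation-based classifier (the oscillation frequencies, entrainment strength, and noise level below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
fs, seconds = 60, 4
t = np.arange(fs * seconds) / fs

# Hypothetical setup: each of four letters has a brightness oscillation with its
# own frequency; the pupil partially entrains to the attended letter's pattern.
letters = "ABCD"
patterns = {ch: np.sin(2 * np.pi * f * t) for ch, f in zip(letters, [0.5, 0.8, 1.1, 1.4])}

attended = "C"
pupil = 0.6 * patterns[attended] + 0.5 * rng.standard_normal(len(t))

def select_letter(pupil_trace, patterns):
    """Pick the letter whose brightness pattern best correlates with the pupil trace."""
    scores = {ch: np.corrcoef(pupil_trace, p)[0, 1] for ch, p in patterns.items()}
    return max(scores, key=scores.get)

print(select_letter(pupil, patterns))  # prints C
```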
Functional near-infrared spectroscopy
In 2014 and 2017, a BCI using functional near-infrared spectroscopy for "locked-in" patients with amyotrophic lateral sclerosis (ALS) was able to restore some basic ability of the patients to communicate with other people.

Electroencephalography (EEG)-based brain-computer interfaces

Overview
Recordings of brainwaves produced by an electroencephalogram

Electroencephalography (EEG) is the most studied non-invasive interface, mainly due to its fine temporal resolution, ease of use, portability and low set-up cost. The technology is, however, somewhat susceptible to noise.

In the early days of BCI research, another substantial barrier to using EEG as a brain–computer interface was the extensive training required before users could operate the technology. For example, in experiments beginning in the mid-1990s, Niels Birbaumer at the University of Tübingen in Germany trained severely paralysed people to self-regulate the slow cortical potentials in their EEG to such an extent that these signals could be used as a binary signal to control a computer cursor. (Birbaumer had earlier trained epileptics to prevent impending fits by controlling this low voltage wave.) The experiment saw ten patients trained to move a computer cursor by controlling their brainwaves. The process was slow, requiring more than an hour for patients to write 100 characters with the cursor, while training often took many months. However, the slow cortical potential approach to BCIs has not been used in several years, since other approaches require little or no training, are faster and more accurate, and work for a greater proportion of users.

Another research parameter is the type of oscillatory activity that is measured. Birbaumer's later research with Jonathan Wolpaw at New York State University has focused on developing technology that would allow users to choose the brain signals they found easiest to operate a BCI, including mu and beta rhythms.

A further parameter is the method of feedback used and this is shown in studies of P300 signals. Patterns of P300 waves are generated involuntarily (stimulus-feedback) when people see something they recognize and may allow BCIs to decode categories of thoughts without training patients first. By contrast, the biofeedback methods described above require learning to control brainwaves so the resulting brain activity can be detected.
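Because single-trial EEG is noisy, P300 detection typically rests on averaging stimulus-locked epochs so that the involuntary response near 300 ms emerges from the noise. A schematic simulation of that averaging step (the P300 amplitude, latency, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
fs, epoch_len = 250, 200  # 200 samples = 800 ms epochs at 250 Hz
t = np.arange(epoch_len) / fs

# Hypothetical P300: recognized (target) stimuli evoke a positive bump near 300 ms.
p300 = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

def make_epochs(n, has_p300):
    """Simulate n stimulus-locked epochs, with or without the P300 component."""
    noise = 3.0 * rng.standard_normal((n, epoch_len))
    return noise + (p300 if has_p300 else 0.0)

# Single epochs are buried in noise; averaging time-locked epochs reveals the P300.
target_avg = make_epochs(80, True).mean(axis=0)
nontarget_avg = make_epochs(80, False).mean(axis=0)

window = (t > 0.25) & (t < 0.35)  # the classic P300 latency window
print(target_avg[window].mean() > nontarget_avg[window].mean() + 1.0)  # True
```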

While EEG-based brain–computer interfaces have been pursued extensively by a number of research labs, recent advances by Bin He and his team at the University of Minnesota suggest that an EEG-based brain–computer interface can accomplish tasks close to those of invasive brain–computer interfaces. Using advanced functional neuroimaging, including BOLD functional MRI and EEG source imaging, Bin He and co-workers identified the co-variation and co-localization of electrophysiological and hemodynamic signals induced by motor imagination. Refined by a neuroimaging approach and a training protocol, they demonstrated the ability of a non-invasive EEG-based brain–computer interface to control the flight of a virtual helicopter in 3-dimensional space, based on motor imagination. In June 2013 it was announced that Bin He had developed a technique to enable a remote-control helicopter to be guided through an obstacle course.

In addition to a brain–computer interface based on brain waves, as recorded from scalp EEG electrodes, Bin He and co-workers explored a virtual EEG-signal-based brain–computer interface by first solving the EEG inverse problem and then using the resulting virtual EEG for brain–computer interface tasks. Well-controlled studies suggested the merits of such a source-analysis-based brain–computer interface.

A 2014 study found that severely motor-impaired patients could communicate faster and more reliably with a non-invasive EEG BCI than with any muscle-based communication channel.

Dry active electrode arrays

In the early 1990s Babak Taheri, at the University of California, Davis, demonstrated the first single-channel and multichannel dry active electrode arrays using micro-machining. The single-channel dry EEG electrode construction and results were published in 1994.[63] The arrayed electrode was also demonstrated to perform well compared to silver/silver chloride electrodes. The device consisted of four sensor sites with integrated electronics to reduce noise by impedance matching. The advantages of such electrodes are: (1) no electrolyte used, (2) no skin preparation, (3) significantly reduced sensor size, and (4) compatibility with EEG monitoring systems. The active electrode array is an integrated system comprising an array of capacitive sensors with local integrated circuitry, housed in a package with batteries to power the circuitry. This level of integration was required to achieve the functional performance obtained by the electrode.

The electrode was tested on an electrical test bench and on human subjects in four modalities of EEG activity, namely: (1) spontaneous EEG, (2) sensory event-related potentials, (3) brain stem potentials, and (4) cognitive event-related potentials. The dry electrode compared favorably with standard wet electrodes: it required no skin preparation or gel and delivered a higher signal-to-noise ratio.

In 1999 researchers at Case Western Reserve University, in Cleveland, Ohio, led by Hunter Peckham, used a 64-electrode EEG skullcap to return limited hand movements to quadriplegic Jim Jatich. As Jatich concentrated on simple but opposite concepts like up and down, his beta-rhythm EEG output was analysed using software to identify patterns in the noise. A basic pattern was identified and used to control a switch: above-average activity was interpreted as on, below-average activity as off. As well as enabling Jatich to control a computer cursor, the signals were also used to drive the nerve controllers embedded in his hands, restoring some movement.
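
The above-/below-average switching rule can be sketched as follows. The sampling rate, beta-band edges, and epoch-based baseline are illustrative assumptions rather than details of the Case Western system:

```python
import numpy as np

def band_power(epoch, fs, lo, hi):
    """Spectral power of one epoch in the [lo, hi] Hz band (simple periodogram)."""
    freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    return psd[(freqs >= lo) & (freqs <= hi)].sum()

def threshold_switch(epochs, fs, lo=13.0, hi=30.0):
    """Map each epoch to on/off: beta-band power above the average
    across epochs -> on (True), below -> off (False)."""
    powers = np.array([band_power(e, fs, lo, hi) for e in epochs])
    return powers > powers.mean()
```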

SSVEP mobile EEG BCIs

In 2009, the NCTU Brain-Computer-Interface headband was reported. The researchers who developed it also engineered silicon-based microelectromechanical system (MEMS) dry electrodes designed for application on non-hairy sites of the body. These electrodes were secured to the DAQ board in the headband with snap-on electrode holders. The signal-processing module measured alpha activity, and a Bluetooth-enabled phone assessed the participants' alertness and capacity for cognitive performance. When the subject became drowsy, the phone sent arousing feedback to the operator to rouse them. This research was supported by the National Science Council of Taiwan, National Chiao-Tung University, Taiwan's Ministry of Education, and the U.S. Army Research Laboratory.

In 2011, researchers reported a cellular-based BCI capable of taking EEG data and converting it into a command to make a phone ring. This research was supported in part by Abraxis Bioscience LLP, the U.S. Army Research Laboratory, and the Army Research Office. The technology was a wearable system composed of a four-channel bio-signal acquisition/amplification module, a wireless transmission module, and a Bluetooth-enabled cell phone. The electrodes were placed to pick up steady-state visual evoked potentials (SSVEPs). SSVEPs are electrical responses to flickering visual stimuli with repetition rates over 6 Hz, best recorded over the parietal and occipital scalp regions of the visual cortex. It was reported that with this BCI setup, all study participants were able to initiate the phone call with minimal practice in natural environments.

The scientists claimed that their studies, using a single-channel fast Fourier transform (FFT) and a multichannel canonical correlation analysis (CCA) algorithm, support the capability of mobile BCIs. The CCA algorithm has been applied in other BCI experiments with claimed high accuracy and speed. While the cellular-based BCI technology was developed to initiate a phone call from SSVEPs, the researchers said that it can be translated to other applications, such as picking up sensorimotor mu/beta rhythms to function as a motor-imagery-based BCI.
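
As a sketch of CCA-based SSVEP detection, the snippet below correlates multichannel EEG with sine/cosine reference signals at each candidate flicker frequency and picks the best match. The channel count, harmonics, and candidate frequencies are illustrative assumptions, not the published system's parameters:

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the columns of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def classify_ssvep(eeg, fs, candidate_freqs, n_harmonics=2):
    """Pick the flicker frequency whose sine/cosine reference set best
    matches the EEG (samples x channels)."""
    t = np.arange(eeg.shape[0]) / fs
    scores = {}
    for f in candidate_freqs:
        refs = np.column_stack([
            fn(2 * np.pi * h * f * t)
            for h in range(1, n_harmonics + 1)
            for fn in (np.sin, np.cos)
        ])
        scores[f] = max_canonical_corr(eeg, refs)
    return max(scores, key=scores.get)
```

In a speller or dialer, each selectable target flickers at its own frequency, so identifying the dominant frequency identifies the target the user is gazing at.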

In 2013, comparative tests were performed on Android phone-, tablet-, and computer-based BCIs, analyzing the power spectral density of the resulting EEG SSVEPs. The stated goals of this study, which involved scientists supported in part by the U.S. Army Research Laboratory, were to “increase the practicability, portability, and ubiquity of an SSVEP-based BCI, for daily use.” It was reported that the stimulation frequency on all mediums was accurate, although the cell phone's signal demonstrated some instability. The amplitudes of the SSVEPs for the laptop and tablet were also reported to be larger than those of the cell phone. These two qualitative characterizations were suggested as indicators of the feasibility of using a mobile stimulus BCI.
Limitations

In 2011, researchers stated that continued work should address ease of use, robustness of performance, and reduction of hardware and software costs.

One of the difficulties with EEG readings is their high susceptibility to motion artifacts. In most of the previously described research projects, the participants were asked to sit still, reducing head and eye movements as much as possible, and measurements were taken in a laboratory setting. However, since the stated aim of these initiatives was a mobile device for daily use, the technology had to be tested in motion.

In 2013, researchers tested mobile EEG-based BCI technology, measuring SSVEPs from participants as they walked on a treadmill at varying speeds. This research was supported by the Office of Naval Research, Army Research Office, and the U.S. Army Research Laboratory. Stated results were that, as speed increased, SSVEP detectability using CCA decreased. As independent component analysis (ICA) had been shown to be efficient in separating EEG signals from noise, the scientists applied ICA to the CCA-extracted EEG data. They stated that the CCA data with and without ICA processing were similar. Thus, they concluded that CCA independently demonstrated a robustness to motion artifacts, indicating that it may be a beneficial algorithm to apply to BCIs used in real-world conditions.

Prosthesis and environment control

Non-invasive BCIs have also been applied to enable brain control of prosthetic upper- and lower-extremity devices in people with paralysis. For example, Gert Pfurtscheller of Graz University of Technology and colleagues demonstrated a BCI-controlled functional electrical stimulation system to restore upper-extremity movements in a person with tetraplegia due to spinal cord injury. Between 2012 and 2013, researchers at the University of California, Irvine demonstrated for the first time that it is possible to use BCI technology to restore brain-controlled walking after spinal cord injury. In their study, a person with paraplegia operated a BCI-robotic gait orthosis to regain basic brain-controlled ambulation. In 2009 Alex Blainey, an independent researcher based in the UK, successfully used the Emotiv EPOC to control a 5-axis robot arm. He then went on to build several demonstration mind-controlled wheelchairs and home automation systems that could be operated by people with limited or no motor control, such as those with paraplegia or cerebral palsy.

Research into military use of BCIs funded by DARPA has been ongoing since the 1970s. The current focus of research is user-to-user communication through analysis of neural signals.

DIY and open source BCI

In 2001, The OpenEEG Project was initiated by a group of DIY neuroscientists and engineers. The ModularEEG was the primary device created by the OpenEEG community; it was a 6-channel signal capture board that cost between $200 and $400 to make at home. The OpenEEG Project marked a significant moment in the emergence of DIY brain-computer interfacing.

In 2010, the Frontier Nerds of NYU's ITP program published a thorough tutorial titled How To Hack Toy EEGs, which demonstrated how to build a single-channel at-home EEG from an Arduino and a Mattel Mindflex at low cost. The tutorial inspired many budding DIY BCI enthusiasts and amplified the DIY BCI movement.

In 2013, OpenBCI emerged from a DARPA solicitation and subsequent Kickstarter campaign. They created a high-quality, open-source 8-channel EEG acquisition board, known as the 32bit Board, that retailed for under $500. Two years later they created the first 3D-printed EEG Headset, known as the Ultracortex, as well as a 4-channel EEG acquisition board, known as the Ganglion Board, that retailed for under $100.

In 2015, NeuroTechX was created with the mission of building an international network for neurotechnology.

MEG and MRI

ATR Labs' reconstruction of human vision using fMRI (top row: original image; bottom row: reconstruction from mean of combined readings)

Magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) have both been used successfully as non-invasive BCIs. In a widely reported experiment, fMRI allowed two users being scanned to play Pong in real-time by altering their haemodynamic response or brain blood flow through biofeedback techniques.

fMRI measurements of haemodynamic responses in real time have also been used to control robot arms with a seven-second delay between thought and movement.

In 2008 research developed in the Advanced Telecommunications Research (ATR) Computational Neuroscience Laboratories in Kyoto, Japan, allowed the scientists to reconstruct images directly from the brain and display them on a computer in black and white at a resolution of 10x10 pixels. The article announcing these achievements was the cover story of the journal Neuron of 10 December 2008.

In 2011 researchers from UC Berkeley published a study reporting second-by-second reconstruction of videos watched by the study's subjects, from fMRI data. This was achieved by creating a statistical model relating visual patterns in videos shown to the subjects, to the brain activity caused by watching the videos. This model was then used to look up the 100 one-second video segments, in a database of 18 million seconds of random YouTube videos, whose visual patterns most closely matched the brain activity recorded when subjects watched a new video. These 100 one-second video extracts were then combined into a mashed-up image that resembled the video being watched.

BCI control strategies in neurogaming

Motor imagery
Motor imagery involves imagining the movement of various body parts, resulting in sensorimotor cortex activation that modulates sensorimotor oscillations in the EEG. This can be detected by the BCI to infer a user's intent. Motor imagery typically requires a number of training sessions before acceptable control of the BCI is acquired; these sessions may take a number of hours over several days before users can consistently employ the technique with acceptable precision. Regardless of the duration of training, users cannot operate the control scheme quickly, resulting in a slow pace of gameplay. Advanced machine learning methods have recently been developed to compute subject-specific models for detecting motor imagery performance. The top-performing algorithm on BCI Competition IV dataset 2 for motor imagery is the Filter Bank Common Spatial Pattern, developed by Ang et al. from A*STAR, Singapore.
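
Filter Bank CSP builds on the standard Common Spatial Pattern algorithm, a rough sketch of which is given below. The trial shapes, trace normalization, and filter count are illustrative assumptions; the actual competition pipeline additionally applies a bank of band-pass filters and feature selection:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=1):
    """Spatial filters maximizing the variance ratio between two classes.

    trials_a, trials_b : lists of (channels, samples) arrays, one per trial.
    Returns a (2*n_filters, channels) array of filters taken from both
    ends of the generalized eigenvalue spectrum.
    """
    def mean_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))  # trace-normalize each trial
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Solve ca w = lambda (ca + cb) w; eigenvalues are returned ascending
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    pick = np.concatenate([order[:n_filters], order[-n_filters:]])
    return vecs[:, pick].T
```

Projecting each trial through these filters and taking log-variance of the filtered signals yields the features typically fed to a classifier.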
Bio/neurofeedback for passive BCI designs
Biofeedback is used to monitor a subject's mental relaxation. In some cases, biofeedback monitors not electroencephalography (EEG) but bodily parameters such as electromyography (EMG), galvanic skin resistance (GSR), and heart rate variability (HRV). Many biofeedback systems are used to treat disorders such as attention deficit hyperactivity disorder (ADHD), sleep problems in children, teeth grinding, and chronic pain. EEG biofeedback systems typically monitor four different bands (theta: 4–7 Hz, alpha: 8–12 Hz, SMR: 12–15 Hz, beta: 15–18 Hz) and challenge the subject to control them. Passive BCI uses BCI to enrich human–machine interaction with implicit information on the user's actual state, for example, simulations that detect when users intend to push the brakes during an emergency stop. Game developers using passive BCIs need to acknowledge that, through repetition of game levels, the user's cognitive state will change or adapt. Within the first play of a level the user will react differently from the second play: for example, the user will be less surprised by an event in the game if he/she is expecting it.
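
A minimal sketch of monitoring the four bands named above from a single-channel epoch; the band edges are copied from the text, while the sampling rate and the simple FFT periodogram are assumptions:

```python
import numpy as np

# Band edges as given in the text above
BANDS = {"theta": (4, 7), "alpha": (8, 12), "SMR": (12, 15), "beta": (15, 18)}

def band_powers(epoch, fs):
    """Relative power per band for one single-channel epoch."""
    freqs = np.fft.rfftfreq(epoch.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    total = psd[freqs > 0].sum()  # ignore the DC component
    return {name: psd[(freqs >= lo) & (freqs <= hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}
```

A neurofeedback loop would compute these ratios on each new epoch and reward the subject when the targeted band moves in the desired direction.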
Visual evoked potential (VEP)
A VEP is an electrical potential recorded after a subject is presented with a visual stimulus. There are several types of VEPs.

Steady-state visually evoked potentials (SSVEPs) use potentials generated by exciting the retina with visual stimuli modulated at certain frequencies. SSVEP stimuli are often formed from alternating checkerboard patterns and at times simply use flashing images. The frequency of the phase reversal of the stimulus can be clearly distinguished in the spectrum of an EEG, which makes detection of SSVEP stimuli relatively easy. SSVEP has proved successful within many BCI systems for several reasons: the elicited signal is measurable in as large a population as the transient VEP, and blink and electrocardiographic artefacts do not affect the monitored frequencies. In addition, the SSVEP signal is exceptionally robust; the topographic organization of the primary visual cortex is such that a broad area receives afferents from the central or foveal region of the visual field. SSVEP does have several problems, however. As SSVEPs use flashing stimuli to infer a user's intent, the user must gaze at one of the flashing or iterating symbols in order to interact with the system. It is therefore likely that the symbols could become irritating and uncomfortable during longer play sessions, which can often last more than an hour, making them less than ideal for gameplay.

Another type of VEP used in applications is the P300 potential. The P300 event-related potential is a positive peak in the EEG that occurs roughly 300 ms after the appearance of a target stimulus (a stimulus for which the user is waiting or seeking) or an oddball stimulus. The P300 amplitude decreases as the target and ignored stimuli grow more similar. The P300 is thought to be related to a higher-level attention process or an orienting response. Using P300 as a control scheme has the advantage that the participant only has to attend limited training sessions. The first application to use the P300 model was the P300 matrix, in which a subject would choose a letter from a 6-by-6 grid of letters and numbers. The rows and columns of the grid flashed sequentially, and every time the selected "choice letter" was illuminated the user's P300 was (potentially) elicited. However, the communication process, at approximately 17 characters per minute, was quite slow. The P300 offers a discrete selection rather than a continuous control mechanism. The advantage of P300 use within games is that the player does not have to learn a completely new control system and so only has to undertake short training instances, to learn the gameplay mechanics and basic use of the BCI paradigm.
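
The row/column intersection readout of the matrix speller can be sketched as below. The per-flash P300 classifier producing the scores is assumed to exist and is not shown:

```python
import numpy as np

def p300_select(row_scores, col_scores, grid):
    """Readout for a P300 matrix speller (sketch).

    row_scores, col_scores : (n_repetitions, 6) arrays of P300 classifier
    scores, one score per row/column flash. Scores are averaged over
    repetitions, and the symbol at the intersection of the best-scoring
    row and best-scoring column is returned.
    """
    r = int(np.argmax(row_scores.mean(axis=0)))
    c = int(np.argmax(col_scores.mean(axis=0)))
    return grid[r][c]
```

Averaging over many flash repetitions is what makes the selection reliable despite the weak single-trial P300, and it is also why the classic speller is slow.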

Synthetic telepathy/silent communication

In 2010, DARPA's budget for the fiscal year included $4 million to start up a program called Silent Talk. The goal was to "allow user-to-user communication on the battlefield without the use of vocalized speech through analysis of neural signals". The program had three major goals:
  1. to attempt to identify electroencephalography patterns unique to individual words;
  2. to ensure that those patterns are generalizable across users, in order to prevent extensive device training;
  3. to construct a fieldable pre-prototype that would decode the signal and transmit it over a limited range.

In a $6.3 million Army initiative to invent devices for telepathic communication, Gerwin Schalk, underwritten by a $2.2 million grant, found that ECoG signals can discriminate the vowels and consonants embedded in spoken and imagined words. The results shed light on the distinct mechanisms associated with the production of vowels and consonants, and could provide the basis for brain-based communication using imagined speech.

In 2002 Kevin Warwick had an array of 100 electrodes fired into his nervous system in order to link his nervous system into the Internet to investigate enhancement possibilities. With this in place Warwick successfully carried out a series of experiments. With electrodes also implanted into his wife's nervous system, they conducted the first direct electronic communication experiment between the nervous systems of two humans.

Research into synthetic telepathy using subvocalization is taking place at the University of California, Irvine under lead scientist Mike D'Zmura. The first such communication took place in the 1960s, using EEG to generate Morse code via alpha waves. Using EEG to communicate imagined speech is less accurate than the invasive method of placing an electrode between the skull and the brain. On 27 February 2013 the group of Miguel Nicolelis at Duke University and IINN-ELS successfully connected the brains of two rats with electronic interfaces that allowed them to directly share information, in the first-ever direct brain-to-brain interface.

On 3 September 2014, direct communication between human brains became a possibility over extended distances through Internet transmission of EEG signals.

In March and May 2014, a study conducted by Dipartimento di Psicologia Generale – Università di Padova, EVANLAB – Firenze, the company LiquidWeb s.r.l., and Dipartimento di Ingegneria e Architettura – Università di Trieste reported confirmatory results from analyzing the EEG activity of two human partners separated by approximately 190 km, when one member of the pair received stimulation and the second was connected only mentally with the first.

Cell-culture BCIs

Researchers have built devices to interface with neural cells and entire neural networks in cultures outside animals. As well as furthering research on animal implantable devices, experiments on cultured neural tissue have focused on building problem-solving networks, constructing basic computers and manipulating robotic devices. Research into techniques for stimulating and recording from individual neurons grown on semiconductor chips is sometimes referred to as neuroelectronics or neurochips.

The world's first Neurochip, developed by Caltech researchers Jerome Pine and Michael Maher

Development of the first working neurochip was claimed by a Caltech team led by Jerome Pine and Michael Maher in 1997. The Caltech chip had room for 16 neurons.

In 2003 a team led by Theodore Berger, at the University of Southern California, started work on a neurochip designed to function as an artificial or prosthetic hippocampus. The neurochip was designed to function in rat brains and was intended as a prototype for the eventual development of higher-brain prosthesis. The hippocampus was chosen because it is thought to be the most ordered and structured part of the brain and is the most studied area. Its function is to encode experiences for storage as long-term memories elsewhere in the brain.

In 2004 Thomas DeMarse at the University of Florida used a culture of 25,000 neurons taken from a rat's brain to fly an F-22 fighter jet simulator. After collection, the cortical neurons were cultured in a petri dish and rapidly began to reconnect to form a living neural network. The cells were arranged over a grid of 60 electrodes and used to control the pitch and yaw functions of the simulator. The study's focus was on understanding how the human brain performs and learns computational tasks at a cellular level.

Ethical considerations

Important ethical, legal and societal issues related to brain-computer interfacing are:
  • Conceptual issues (researchers disagree over what is and what is not a brain-computer interface);
  • Obtaining informed consent from people who have difficulty communicating;
  • Risk/benefit analysis;
  • Shared responsibility of BCI teams (e.g. how to ensure that responsible group decisions can be made);
  • The consequences of BCI technology for the quality of life of patients and their families;
  • Side-effects (e.g. neurofeedback of sensorimotor rhythm training is reported to affect sleep quality);
  • Personal responsibility and its possible constraints (e.g. who is responsible for erroneous actions with a neuroprosthesis);
  • Issues concerning personality and personhood and its possible alteration;
  • Blurring of the division between human and machine;
  • Therapeutic applications and their possible exceedance;
  • Questions of research ethics that arise when progressing from animal experimentation to application in human subjects;
  • Mind-reading and privacy;
  • Mind-control;
  • Use of the technology in advanced interrogation techniques by governmental authorities;
  • Selective enhancement and social stratification;
  • Communication to the media.
In their current form, most BCIs are far removed from the ethical issues considered above. They are actually similar to corrective therapies in function. Clausen stated in 2009 that “BCIs pose ethical challenges, but these are conceptually similar to those that bioethicists have addressed for other realms of therapy”. Moreover, he suggests that bioethics is well-prepared to deal with the issues that arise with BCI technologies. Haselager and colleagues pointed out that expectations of BCI efficacy and value play a great role in ethical analysis and the way BCI scientists should approach media. Furthermore, standard protocols can be implemented to ensure ethically sound informed-consent procedures with locked-in patients.

The case of BCIs today has parallels in medicine, as will its evolution. Much as pharmaceutical science began as a way to compensate for impairments and is now used to increase focus and reduce the need for sleep, BCIs will likely transform gradually from therapies to enhancements. Researchers are well aware that sound ethical guidelines, appropriately moderated enthusiasm in media coverage, and education about BCI systems will be of utmost importance for the societal acceptance of this technology. Thus, the BCI community has recently made more effort to create consensus on ethical guidelines for BCI research, development, and dissemination.

Low-cost BCI-based interfaces

Recently a number of companies have scaled back medical-grade EEG technology (and in one case, NeuroSky, rebuilt the technology from the ground up) to create inexpensive BCIs. This technology has been built into toys and gaming devices; some of these toys, such as the Mattel MindFlex (built on NeuroSky technology), have been extremely commercially successful.
  • In 2006 Sony patented a neural interface system allowing radio waves to affect signals in the neural cortex;
  • In 2007 NeuroSky released the first affordable consumer based EEG along with the game NeuroBoy. This was also the first large scale EEG device to use dry sensor technology;
  • In 2008 OCZ Technology developed a device for use in video games relying primarily on electromyography;
  • In 2008 the Final Fantasy developer Square Enix announced that it was partnering with NeuroSky to create a game, Judecca;
  • In 2009 Mattel partnered with NeuroSky to release the Mindflex, a game that used an EEG to steer a ball through an obstacle course; it is by far the best-selling consumer-based EEG to date;
  • In 2009 Uncle Milton Industries partnered with NeuroSky to release the Star Wars Force Trainer, a game designed to create the illusion of possessing the Force;
  • In 2009 Emotiv released the EPOC, a 14 channel EEG device that can read 4 mental states, 13 conscious states, facial expressions, and head movements. The EPOC is the first commercial BCI to use dry sensor technology, which can be dampened with a saline solution for a better connection;
  • In November 2011 Time Magazine selected "necomimi" produced by Neurowear as one of the best inventions of the year. The company announced that it expected to launch a consumer version of the garment, consisting of cat-like ears controlled by a brain-wave reader produced by NeuroSky, in spring 2012;
  • In February 2014 They Shall Walk (a nonprofit organization focused on constructing exoskeletons, dubbed LIFESUITs, for paraplegics and quadriplegics) began a partnership with James W. Shakarji on the development of a wireless BCI;
  • In 2016, a group of hobbyists developed an open-source BCI board that sends neural signals to the audio jack of a smartphone, dropping the cost of entry-level BCI to £20. Basic diagnostic software is available for Android devices, as well as a text entry app for Unity.

Future directions

A consortium of 12 European partners has completed a roadmap to support the European Commission in its funding decisions for the new framework programme Horizon 2020. The project, which was funded by the European Commission, started in November 2013 and ended in April 2015. The roadmap is complete and can be downloaded from the project's webpage. A 2015 publication led by Dr. Clemens Brunner describes some of the analyses and achievements of this project, as well as the emerging Brain-Computer Interface Society. For example, the article reviewed work within this project that further defined BCIs and applications, explored recent trends, discussed ethical issues, and evaluated different directions for new BCIs. As the article notes, the new roadmap generally extends and supports the recommendations from the Future BNCI project managed by Dr. Brendan Allison, which conveys substantial enthusiasm for emerging BCI directions.

In addition, other recent publications have explored the most promising future BCI directions for new groups of disabled users. Some prominent examples are summarized below.

Disorders of consciousness (DOC)

Some persons have a disorder of consciousness (DOC). This state is defined to include persons in a coma, as well as persons in a vegetative state (VS) or minimally conscious state (MCS). New BCI research seeks to help persons with DOC in different ways. A key initial goal is to identify patients who are able to perform basic cognitive tasks, which would lead to a change in their diagnosis: some persons who are diagnosed with DOC may in fact be able to process information and make important life decisions (such as whether to seek therapy, where to live, and their views on end-of-life decisions regarding them). Some persons who are diagnosed with DOC die as a result of end-of-life decisions, which may be made by family members who sincerely feel this is in the patient's best interests. Given the new prospect of allowing these patients to provide their views on such decisions, there would seem to be strong ethical pressure to develop this research direction, to guarantee that DOC patients are given an opportunity to decide whether they want to live.

These and other articles describe new challenges and solutions to use BCI technology to help persons with DOC. One major challenge is that these patients cannot use BCIs based on vision. Hence, new tools rely on auditory and/or vibrotactile stimuli. Patients may wear headphones and/or vibrotactile stimulators placed on the wrists, neck, leg, and/or other locations. Another challenge is that patients may fade in and out of consciousness, and can only communicate at certain times. This may indeed be a cause of mistaken diagnosis. Some patients may only be able to respond to physicians' requests during a few hours per day (which might not be predictable ahead of time) and thus may have been unresponsive during diagnosis. Therefore, new methods rely on tools that are easy to use in field settings, even without expert help, so family members and other persons without any medical or technical background can still use them. This reduces the cost, time, need for expertise, and other burdens with DOC assessment. Automated tools can ask simple questions that patients can easily answer, such as "Is your father named George?" or "Were you born in the USA?" Automated instructions inform patients that they may convey yes or no by (for example) focusing their attention on stimuli on the right vs. left wrist. This focused attention produces reliable changes in EEG patterns that can help determine that the patient is able to communicate. The results could be presented to physicians and therapists, which could lead to a revised diagnosis and therapy. In addition, these patients could then be provided with BCI-based communication tools that could help them convey basic needs, adjust bed position and HVAC (heating, ventilation, and air conditioning), and otherwise empower them to make major life decisions and communicate.

This research effort was supported in part by different EU-funded projects, such as the DECODER project led by Prof. Andrea Kuebler at the University of Wuerzburg. This project contributed to the first BCI system developed for DOC assessment and communication, called mindBEAGLE. This system is designed to help non-expert users work with DOC patients, but is not intended to replace medical staff. An EU-funded project that began in 2015 called ComAlert conducted further research and development to improve DOC prediction, assessment, rehabilitation, and communication, called "PARC" in that project. Another project funded by the National Science Foundation is led by Profs. Dean Krusienski and Chang Nam. This project provides for improved vibrotactile systems, advanced signal analysis, and other improvements for DOC assessment and communication.

Motor recovery

People may lose some of their ability to move due to many causes, such as stroke or injury. Several groups have explored systems and methods for motor recovery that include BCIs. In this approach, a BCI measures motor activity while the patient imagines or attempts movements as directed by a therapist. The BCI may provide two benefits: (1) if the BCI indicates that a patient is not imagining a movement correctly (non-compliance), then the BCI could inform the patient and therapist; and (2) rewarding feedback such as functional stimulation or the movement of a virtual avatar also depends on the patient's correct movement imagery.

So far, BCIs for motor recovery have relied on the EEG to measure the patient's motor imagery. However, studies have also used fMRI to study changes in the brain as persons undergo BCI-based stroke rehabilitation training. Future systems might include fMRI and other measures for real-time control, such as functional near-infrared spectroscopy, probably in tandem with EEG. Non-invasive brain stimulation has also been explored in combination with BCIs for motor recovery.

Like the work with BCIs for DOC, this research direction was funded by different public funding mechanisms within the EU and elsewhere. The VERE project included work on a new system for stroke rehabilitation focused on BCIs and advanced virtual environments designed to provide the patient with immersive feedback to foster recovery. This project, and the RecoveriX project that focused exclusively on a new BCI system for stroke patients, contributed to a hardware and software platform called RecoveriX. This system includes a BCI as well as a functional electrical stimulator and virtual feedback. In September 2016, a training facility called a recoveriX-gym opened in Austria, in which therapists use this system to provide motor rehab therapy to persons with stroke.

Functional brain mapping

Each year, about 400,000 people undergo brain mapping during neurosurgery. This procedure is often required for people with tumors or epilepsy that do not respond to medication. During this procedure, electrodes are placed on the brain to precisely identify the locations of structures and functional areas. Patients may be awake during neurosurgery and asked to perform certain tasks, such as moving fingers or repeating words. This is necessary so that surgeons can remove only the desired tissue while sparing other regions, such as critical movement or language regions. Removing too much brain tissue can cause permanent damage, while removing too little tissue can leave the underlying condition untreated and require additional neurosurgery. Thus, there is a strong need to improve both methods and systems to map the brain as effectively as possible.

In several recent publications, BCI research experts and medical doctors have collaborated to explore new ways to use BCI technology to improve neurosurgical mapping. This work focuses largely on high gamma activity, which is difficult to detect with non-invasive means. Results have led to improved methods for identifying key areas for movement, language, and other functions. A recent article addressing advances in functional brain mapping summarizes the conclusions of a workshop on the topic.

Flexible devices

Flexible electronics are polymers or other flexible materials (e.g. silk, pentacene, PDMS, parylene, polyimide) printed with circuitry; the flexible nature of the organic background materials allows the resulting electronics to bend, and the fabrication techniques used to create these devices resemble those used to create integrated circuits and microelectromechanical systems (MEMS). Flexible electronics were first developed in the 1960s and 1970s, but research interest increased in the mid-2000s.

Neural dust

Neural dust refers to millimeter-sized devices operated as wirelessly powered nerve sensors, proposed in a 2011 paper from the University of California, Berkeley Wireless Research Center that described both the challenges and the outstanding benefits of creating a long-lasting wireless BCI. In one proposed model of the neural dust sensor, the transistor model allowed for a method of separating local field potentials from action potential "spikes", greatly diversifying the wealth of data acquirable from the recordings.

Assisted reproductive technology

Illustration depicting intracytoplasmic sperm injection (ICSI), an example of assisted reproductive technology.

Assisted reproductive technology (ART) comprises medical procedures used primarily to address infertility. It includes procedures such as in vitro fertilization, and may include intracytoplasmic sperm injection (ICSI), cryopreservation of gametes or embryos, and/or the use of fertility medication. When used to address infertility, ART may also be referred to as fertility treatment. ART mainly belongs to the field of reproductive endocrinology and infertility. Some forms of ART are also used in fertile couples for genetic reasons (preimplantation genetic diagnosis). ART may also be used in surrogacy arrangements, although not all surrogacy arrangements involve ART.

Procedures

General

With ART, the process of sexual intercourse is bypassed and fertilization of the oocytes occurs in the laboratory environment (i.e., in vitro fertilization).

In the US, the Centers for Disease Control and Prevention (CDC)—which is required as a result of the 1992 Fertility Clinic Success Rate and Certification Act to publish the annual ART success rates at U.S. fertility clinics—defines ART to include "all fertility treatments in which both eggs and sperm are handled. In general, ART procedures involve surgically removing eggs from a woman's ovaries, combining them with sperm in the laboratory, and returning them to the woman's body or donating them to another woman." According to CDC, "they do not include treatments in which only sperm are handled (i.e., intrauterine—or artificial—insemination) or procedures in which a woman takes medicine only to stimulate egg production without the intention of having eggs retrieved."[1]
In Europe, ART also excludes artificial insemination and includes only procedures where oocytes are handled.

The WHO also defines ART this way.

Fertility medication

Most fertility medications are agents that stimulate the development of follicles in the ovary. Examples are gonadotropins and gonadotropin releasing hormone.

In vitro fertilization

In vitro fertilization is the technique of letting fertilization of the male and female gametes (sperm and egg) occur outside the female body.

Techniques usually used in in vitro fertilization include:
  • Transvaginal ovum retrieval (OVR) is the process whereby a small needle is inserted through the back of the vagina and guided via ultrasound into the ovarian follicles to collect the fluid that contains the eggs;
  • Embryo transfer is the step in the process whereby one or several embryos are placed into the uterus of the female with the intent to establish a pregnancy.
Less commonly used techniques in in vitro fertilization are:
  • Assisted zona hatching (AZH) is performed shortly before the embryo is transferred to the uterus. A small opening is made in the outer layer surrounding the egg in order to help the embryo hatch out and aid in the implantation process of the growing embryo;
  • Intracytoplasmic sperm injection (ICSI);
    Intracytoplasmic sperm injection (ICSI) is beneficial in cases of male factor infertility, where sperm counts are very low or fertilization failed in previous IVF attempt(s). The ICSI procedure involves a single sperm carefully injected into the center of an egg using a microneedle. With ICSI, only one sperm per egg is needed; without it, between 50,000 and 100,000 are required. This method is also sometimes employed when donor sperm is used;
  • Autologous endometrial coculture is a possible treatment for patients who have failed previous IVF attempts or who have poor embryo quality. The patient's fertilized eggs are placed on top of a layer of cells from the patient's own uterine lining, creating a more natural environment for embryo development;
  • In zygote intrafallopian transfer (ZIFT), egg cells are removed from the woman's ovaries and fertilized in the laboratory; the resulting zygote is then placed into the fallopian tube.
  • Cytoplasmic transfer is the technique in which the contents of a fertile egg from a donor are injected into the infertile egg of the patient along with the sperm;
  • Egg donors are resources for women with no eggs due to surgery, chemotherapy, or genetic causes; or with poor egg quality, previously unsuccessful IVF cycles or advanced maternal age. In the egg donor process, eggs are retrieved from a donor's ovaries, fertilized in the laboratory with the sperm from the recipient's partner, and the resulting healthy embryos are returned to the recipient's uterus;
  • Sperm donation may provide the source for the sperm used in IVF procedures where the male partner produces no sperm or has an inheritable disease, or where the woman being treated has no male partner;
  • Preimplantation genetic diagnosis (PGD) involves the use of genetic screening mechanisms such as fluorescent in-situ hybridization (FISH) or comparative genomic hybridization (CGH) to help identify genetically abnormal embryos and improve healthy outcomes;
  • Embryo splitting can be used for twinning to increase the number of available embryos.

Pre-implantation genetic diagnosis

A pre-implantation genetic diagnosis procedure may be conducted on embryos prior to implantation (as a form of embryo profiling), and sometimes even on oocytes prior to fertilization. PGD is considered in a similar fashion to prenatal diagnosis. When used to screen for a specific genetic disease, its main advantage is that it avoids selective pregnancy termination, as the method makes it highly likely that the baby will be free of the disease under consideration. PGD is thus an adjunct to ART procedures, and requires in vitro fertilization to obtain oocytes or embryos for evaluation. Embryos are generally obtained through blastomere or blastocyst biopsy. The latter technique has proved to be less deleterious for the embryo; it is therefore advisable to perform the biopsy around day 5 or 6 of development.

Others

Other assisted reproduction techniques include:

Risks

The majority of IVF-conceived infants do not have birth defects. However, some studies have suggested that assisted reproductive technology is associated with an increased risk of birth defects. Assisted reproductive technology is becoming more available, and early studies suggest that there could be an increased risk of medical complications for both mother and baby. Some of these include low birth weight, placental insufficiency, chromosomal disorders, preterm delivery, gestational diabetes, and pre-eclampsia (Aiken and Brockelsby).

In the largest U.S. study, which used data from a statewide registry of birth defects, 6.2% of IVF-conceived children had major defects, as compared with 4.4% of naturally conceived children matched for maternal age and other factors (odds ratio, 1.3; 95% confidence interval, 1.00 to 1.67). ART carries with it a risk for heterotopic pregnancy (simultaneous intrauterine and extrauterine pregnancy). The main risks are:
Other risk factors are:
Sperm donation is an exception, with a birth defect rate of almost a fifth of that in the general population. This may be explained by the fact that sperm banks accept only donors with high sperm counts.
Current data indicate little or no increased risk for postpartum depression among women who use ART.

Usage of assisted reproductive technology including ovarian stimulation and in vitro fertilization have been associated with an increased overall risk of childhood cancer in the offspring, which may be caused by the same original disease or condition that caused the infertility or subfertility in the mother or father.

That said, a landmark paper by Jacques Balayla et al. determined that infants born after ART have neurodevelopment similar to that of infants born after natural conception.

Usage

The number of assisted reproductive technology procedures performed in the U.S. has more than doubled over the last 10 years, with 140,000 procedures in 2006 resulting in 55,000 births.

In Australia, 3.1% of births are a result of ART.

In case of discontinuation of fertility treatment, the most common reasons have been estimated to be: postponement of treatment (39%), physical and psychological burden (19%, psychological burden 14%, physical burden 6.32%), relational and personal problems (17%, personal reasons 9%, relational problems 9%), treatment rejection (13%) and organizational (12%) and clinic (8%) problems.

Society and culture

Ethics

Some couples find it difficult to stop treatment despite a very poor prognosis, resulting in futile therapies. This may present ART providers with a difficult decision of whether to continue or refuse treatment.

For treatment-specific ethical considerations, see the entries in individual subarticles, e.g. In vitro fertilisation, Surrogacy, and Sperm donation.

Some assisted reproductive technologies can in fact be harmful to both the mother and child, posing psychological and physical health risks that may affect the ongoing use of these treatments. These adverse effects warrant concern, and such treatments should be tightly regulated to ensure that candidates are not only mentally but also physically prepared.

Costs

United States

Many Americans do not have insurance coverage for fertility investigations and treatments. Many states are starting to mandate coverage, and the rate of use is 278% higher in states with complete coverage.

Some health insurance companies cover the diagnosis of infertility but frequently, once a diagnosis is made, will not cover any treatment costs.

2005 approximate treatment/diagnosis costs (United States, costs in US$):
Another way to look at costs is to determine the expected cost of establishing a pregnancy. Thus if a clomiphene treatment has a chance to establish a pregnancy in 8% of cycles and costs $500, the expected cost is $6,250 ($500/.08) to establish a pregnancy, compared to an IVF cycle (cycle fecundity 40%) with a corresponding expected cost of $30,000 ($12,000/.4).
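
The expected-cost arithmetic above (per-cycle cost divided by the per-cycle probability of success) can be sketched as a one-line helper. This is an illustrative calculation only, not a clinical or financial estimate:

```python
def expected_cost(cost_per_cycle: float, success_rate: float) -> float:
    """Expected cost of establishing a pregnancy: the per-cycle cost
    divided by the per-cycle probability of success."""
    return cost_per_cycle / success_rate

# Clomiphene: $500 per cycle, 8% success rate per cycle
print(expected_cost(500, 0.08))     # 6250.0
# IVF: $12,000 per cycle, 40% cycle fecundity
print(expected_cost(12_000, 0.40))  # 30000.0
```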

For the community as a whole, the cost of IVF on average pays for itself 700% over through taxes from the future employment of the person conceived.

United Kingdom

In the United Kingdom, all patients have the right to preliminary testing, provided free of charge by the National Health Service. However, treatment is not widely available on the NHS and there can be long waiting lists. Many patients therefore pay for immediate treatment within the NHS or seek help from private clinics.

In 2013, the National Institute for Health and Care Excellence published new guidelines about who should have access to IVF treatment on the NHS in England and Wales.

The guidelines also say women aged between 40 and 42 should be offered one cycle of IVF on the NHS if all of the following additional criteria are also met: They have never had IVF treatment before, have no evidence of low ovarian reserve (this is when eggs in the ovary are low in number or low in quality) and have been informed of the additional implications of IVF and pregnancy at this age. However, if tests show IVF is the only treatment likely to help them get pregnant, women should be referred for IVF straight away.

This policy is often modified by local Clinical Commissioning Groups, in a fairly blatant breach of the NHS Constitution for England which provides that patients have the right to drugs and treatments that have been recommended by NICE for use in the NHS. For example, the Cheshire, Merseyside and West Lancashire Clinical Commissioning Group insists on additional conditions:
  • The person undergoing treatment must have commenced treatment before her 40th birthday;
  • The person undergoing treatment must have a BMI of between 19 and 29;
  • Neither partner must have any living children, from either the current or previous relationships. This includes adopted as well as biological children;
  • Sub-fertility must not be the direct result of a sterilisation procedure in either partner (this does not include conditions where sterilisation occurs as a result of another medical problem). Couples who have undertaken a reversal of their sterilisation procedure are not eligible for treatment.

Canada

Some treatments are covered by OHIP (public health insurance) in Ontario and others are not. For patients under 40 with bilaterally blocked fallopian tubes, treatment is covered, but patients are still required to pay lab fees (around $3,000–4,000). Coverage varies in other provinces. Most other patients are required to pay for treatments themselves.

Israel

Israel's national health insurance, which is mandatory for all Israeli citizens, covers nearly all fertility treatments. IVF costs are fully subsidized up to the birth of two children for all Israeli women, including single women and lesbian couples. Embryo transfers for purposes of gestational surrogacy are also covered.

Germany

On 27 January 2009, the Federal Constitutional Court ruled that it is constitutional for health insurance companies to bear only 50% of the cost of IVF. On 2 March 2012, the Federal Council approved a draft law from several federal states under which the federal government would provide a subsidy of 25% of the cost. The share of costs borne by the couple would thus drop to just 25%.

Fictional representation

Films and other fiction depicting the emotional struggles of assisted reproductive technology had an upswing in the latter part of the 2000s, although the techniques have been available for decades. The number of people who can relate to it through personal experience in one way or another is ever growing, and the variety of trials and struggles is huge.

For specific examples, refer to the fiction sections in individual subarticles, e.g. surrogacy, sperm donation and fertility clinic.
 
In addition, reproduction and pregnancy in speculative fiction has been present for many decades.

Research and speculative uses

The idea of using future ART techniques, including direct human germline engineering technologies, to select and genetically modify embryos for the purpose of human enhancement has been referred to as designer babies, reprogenetics, and liberal eugenics.

The term "liberal eugenics" was coined by bioethicist Nicholas Agar. Liberal eugenics is aimed at "improving" the genotypes of future generations through screening and genetic modification to eliminate "undesirable" traits. The term "reprogenetics" was coined by Lee M. Silver, a professor of molecular biology at Princeton University, in his 1997 book Remaking Eden.

The philosophical movement associated with these speculative uses is transhumanism. When eugenics is discussed in this context, it is usually in terms of allowing parents to select desirable traits in an unborn child, not the use of genetics to destroy embryos or to prevent the formation of undesirable embryos.

Safety is a major concern when it comes to gene editing and mitochondrial transfer. Since the effects of germline modification can be passed down to multiple generations, experimentation with this treatment raises many questions and concerns about the ethics of conducting such research. If a patient has undergone germline modification treatment, the coming generations, one or two after the initial treatment, will serve as trials to see whether the changes in the germline have been successful. This extended waiting time could have harmful implications, since the effect of the treatment is not known until it has been passed down for a few generations. Problems with the gene editing may not appear until after the child with edited genes is born. If the patient assumes the risk alone, consent may be given for the treatment, but giving consent on behalf of future generations is less justified. On a larger scale, germline modification has the potential to affect the gene pool of the entire human race, negatively or positively. Germline modification is considered a more ethically and morally acceptable treatment when a patient is a carrier for a harmful trait and is treated to improve the genotype and safety of future generations. When used for this purpose, it can fill gaps that other technologies may not be able to address.

The main ethical issue with pure germline modification is that such treatments produce a change that can be passed down to future generations, and therefore any error, known or unknown, will also be passed down and will affect the offspring. New diseases may be introduced accidentally. Since experimentation on the germline occurs directly on embryos, there is major ethical deliberation over experimenting with fertilized eggs and embryos and discarding the flawed ones. The embryo cannot give consent, and some of the treatments have long-lasting and harmful implications. In many countries, editing embryos and germline modification for reproductive use is illegal. As of 2017, the United States restricts the use of germline modification, and the procedure is under heavy regulation by the FDA and NIH. The American National Academy of Sciences and National Academy of Medicine gave qualified support to human genome editing in 2017, provided that answers are first found to safety and efficiency problems, "but only for serious conditions under stringent oversight." Germline modification would be more practical if sampling methods were less destructive and used polar bodies rather than embryos.

Lee Silver has projected a dystopia in which a race of superior humans look down on those without genetic enhancements, though others have counseled against accepting this vision of the future. It has also been suggested that if designer babies were created through genetic engineering, that this could have deleterious effects on the human gene pool. Some futurists claim that it would put the human species on a path to participant evolution. It has also been argued that designer babies may have an important role as counter-acting an argued dysgenic trend.

In 2018, the Nuffield Council on Bioethics issued a report which concluded that under certain circumstances, editing of the DNA of human embryos could be acceptable. The Nuffield Council is a British independent organisation that evaluates ethical questions in medicine and biology.

In November 2018, He Jiankui announced that he had edited the genomes of two human embryos, to attempt to disable the gene for CCR5, which codes for a receptor that HIV uses to enter cells. He said that twin girls, Lulu and Nana, had been born a few weeks earlier. He said that the girls still carried functional copies of CCR5 along with disabled CCR5 (mosaicism) and were still vulnerable to HIV. The work was widely condemned as unethical, dangerous, and premature.

Chaos Makes the Multiverse Unnecessary

Science predicts only the predictable, ignoring most of our chaotic universe.

Scientists look around the universe and see amazing structure. There are objects and processes of fantastic complexity. Every action in our universe follows exact laws of nature that are perfectly expressed in a mathematical language. These laws of nature appear fine-tuned to bring about life, and in particular, intelligent life. What exactly are these laws of nature and how do we find them?

The universe is so structured and orderly that we compare it to the most complicated and exact contraptions of the age. In the 18th and 19th centuries, the universe was compared to a perfectly working clock or watch. Philosophers then discussed the Watchmaker. In the 20th and 21st centuries, the most complicated object is a computer. The universe is compared to a perfectly working supercomputer. Researchers ask how this computer got its programming.

How does one explain all this structure? Why do the laws seem so perfect for producing life and why are they expressed in such exact mathematical language? Is the universe really as structured as it seems?
One answer to some of these questions is Platonism (or its cousin Realism). This is the belief that the laws of nature are objective and have always existed. They possess an exact ideal form that exists in Plato’s realm. These laws are in perfect condition and they have formed the universe that we see around us. Not only do the laws of nature exist in this realm, but they live alongside all perfectly formed mathematics. This is supposed to help explain why the laws are written in the language of mathematics.

Platonism leaves a lot to be desired. The main problem is that Platonism is metaphysics, not science. However, even if we were to accept it as true, many questions remain. Why does this Platonic world have these laws, that bring intelligent life into the universe, rather than other laws? How was this Platonic attic set up? Why does our physical universe follow these ethereal rules? How do scientists and mathematicians get access to Plato’s little treasure chest of exact ideals?

The multiverse is another answer that has recently become quite fashionable. This theory is an attempt to explain why our universe has the life-giving laws that it does. One who believes in a multiverse maintains that our universe is just one of many universes. Each universe has its own set of rules and its own possible structures that come along with those rules. Physicists who push the multiverse theory believe that the laws in each universe are somewhat arbitrary. The reason we see structures fit for life in our universe is that we happen to live in one of very few universes that have such laws. While the multiverse explains some of the structure that we see, there are questions that are left open. Rather than asking why the universe has the structure it does, we can push the question back and ask why the multiverse has the structure it does. Another problem is that while the multiverse would answer some of the questions we posed if it existed, who says it actually exists? Since most believe that we have no contact with possible other universes, the question of the existence of the multiverse is essentially metaphysics.
There is another, more interesting, explanation for the structure of the laws of nature. Rather than saying that the universe is very structured, say that the universe is mostly chaotic and for the most part lacks structure. The reason why we see the structure we do is that scientists act like a sieve and focus only on those phenomena that have structure and are predictable. They do not take into account all phenomena; rather, they select those phenomena they can deal with.

Some people say that science studies all physical phenomena. This is simply not true. Who will win the next presidential election and move into the White House is a physical question about which no hard scientist would venture an absolute prediction. Whether or not a computer will halt for a given input can be seen as a physical question, and yet we learned from Alan Turing that this question cannot be answered. Scientists have classified the general textures and heights of different types of clouds, but, in general, are not at all interested in the exact shape of a cloud. Although the shape is a physical phenomenon, scientists don't even attempt to study it. Science does not study all physical phenomena. Rather, science studies predictable physical phenomena. It is almost a tautology: science predicts predictable phenomena.

Scientists have described the criteria for which phenomena they decide to study: It is called symmetry. Symmetry is the property that despite something changing, there is some part of it that remains the same. When you say that a face has symmetry, you mean that if the left side is reflected and swapped with the right side, it will still look the same. When physicists use the word symmetry they are discussing collections of physical phenomena. A set of phenomena has symmetry if it is the same after some change. The most obvious example is symmetry of location. This means that if one performs the same experiment in two different places, the results should be the same. Symmetry of time means that the outcomes of experiments should not depend on when the experiment took place. And, there are many other types of symmetry.
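
Symmetry of location can be illustrated with a toy computation (our own example, not one from the essay): the outcome of an "experiment", here the Newtonian force between two point masses, is unchanged when the entire setup is translated to a different place.

```python
# Illustrative sketch of "symmetry of location": shifting the whole
# experiment leaves the measured result unchanged.
G = 6.674e-11  # gravitational constant, N*m^2/kg^2

def force(x1, x2, m1=5.0, m2=3.0):
    """Magnitude of Newtonian gravity between masses at 1-D positions x1, x2."""
    return G * m1 * m2 / (x2 - x1) ** 2

shift = 1000.0  # move the entire apparatus 1 km away
print(force(0.0, 2.0))                  # force at the original location...
print(force(0.0 + shift, 2.0 + shift))  # ...equals the force after translation
```

The law depends only on the separation x2 - x1, never on absolute position, which is exactly what the symmetry requires.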

The phenomena that are selected by scientists for study must have many different types of symmetry. When a physicist sees a lot of phenomena, she must first determine if these phenomena have symmetry. She performs experiments in different places and at different times. If she achieves the same results, she then studies them to find the underlying cause. In contrast, if her experiments failed to be symmetric, she would ignore them.

While scientists like Galileo and Newton recognized the symmetry in physical phenomena, the power of symmetry was first truly exploited by Albert Einstein. He postulated that the laws of physics should be the same even if the experimenter is moving close to the speed of light. With this symmetry in mind, he was able to compose the laws of special relativity. Einstein was the first to understand that symmetry was the defining characteristic of physics. Whatever has symmetry will have a law of nature. The rest is not part of science.

A little after Einstein showed the vital importance of symmetry for the scientific endeavor, Emmy Noether proved a powerful theorem that established a connection between symmetry and conservation laws. This is related to the constants of nature, which are central to modern physics. Again, if there is symmetry, then there will be conservation laws and constants. The physicist must be a sieve and study those phenomena that possess symmetry and allow those that do not possess symmetry to slip through her fingers.
There are a few problems with this explanation of the structure found in the universe. For one, the phenomena that we do select, and that have laws of nature, seem to be exactly the phenomena that generate all other phenomena. The laws of particle physics, gravity, and quantum theory all have symmetries and are studied by physicists. All phenomena seem to come from these theories, even those that do not seem to have symmetry. So while it is beyond science to determine who the next president will be, that phenomenon will be determined by sociology, which is determined by psychology, which is determined by neural biology, which depends on chemistry, which depends on particle physics and quantum mechanics. Determining the winner of an election is too complicated for the scientist to deal with, but the results of the election are generated by laws of physics that are part of science.

Despite this failing of our explanation for the structure of the laws of nature, we believe it is the best candidate for being the solution. It is one of the only solutions that does not invoke any metaphysical principle or the existence of a multitude of unseen universes. We do not have to look outside the universe to find a cause for the structure that we find in the universe. Rather, we look at how we are looking at phenomena.

Before we move on, we should point out that our solution has a property in common with the multiverse solution. We postulated that, for the most part, the universe is chaotic and there is not so much structure in it. We, however, focus only on the small amount of structure that there is. Similarly, one who believes in the multiverse believes that most of the multiverse lacks the structure to form intelligent life. It is only in a select few universes that we find complex structure. And we inhabitants of this complex universe are focused on that rare structure. Both solutions are about focusing on the small amount of structure in a chaotic whole.

A Hierarchy of Number Systems

This idea that we only see structure because we are selecting a subset of phenomena is novel and hard to wrap one’s head around. There is an analogous situation in mathematics that is much easier to understand. We will focus on one important example where one can see this selection process very clearly. First we need to take a little tour of several number systems and their properties.

Consider the real numbers. In the beginning of high school, the teacher draws the real number line on the board and says that these are all the numbers one will ever need. Given two real numbers, we know how to add, subtract, multiply, and divide them. They comprise a number system that is used in every aspect of science. The real numbers also have an important property: They are totally ordered. That means that given any two different real numbers, one is less than the other. Just think of the real number line: Given any two different points on the line, one will be to the right of the other. This property is so obvious that it is barely mentioned.

Emmy Noether: Noether's theorem holds that every conservation law in physics is associated with a certain kind of symmetry.
While the real numbers seem like a complete picture, the story does not end there. Already in the 16th century, mathematicians started looking at more complicated number systems. They began working with an “imaginary” number i that has the property that its square is -1. This is in stark contrast to any real number whose square is never negative. They defined an imaginary number as the product of a real number and i. Mathematicians went on to define a complex number that is the sum of a real number and an imaginary number. If r1 and r2 are real numbers, then r1+r2i is a complex number. Since a complex number is built from two real numbers, we usually draw all of them in a two-dimensional plane. The real number line sits in the complex plane. This corresponds to the fact that every real number, r1, can be seen as the complex number r1+0i (that is, itself with zero complex component).

We know how to add, subtract, multiply, and divide complex numbers. However, there is one property that is different about the complex numbers. In contrast to the real numbers, the complex numbers are not totally ordered. Given two complex numbers, say 3 + 7.2i and 6 - 4i, can we tell which one is more and which one is less? There is no obvious answer. (In fact, one can totally order the complex numbers but the ordering will not respect the multiplication of complex numbers.) The fact that the complex numbers are not totally ordered means that we lose structure when we go from the real numbers to the complex numbers.
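Python’s built-in complex type makes the failure of ordering concrete: arithmetic is fully defined, but comparing two complex numbers is simply not allowed, since no ordering would respect the arithmetic. A minimal sketch, using the two numbers from the text:

```python
# Arithmetic on complex numbers is fully defined (Python spells i as j).
a = 3 + 7.2j
b = 6 - 4j
print(a + b, a * b)  # sums and products always exist

# The defining property of i: its square is -1.
assert (1j) ** 2 == -1

# But there is no useful total order: comparison is not defined at all.
try:
    a < b
except TypeError:
    print("complex numbers cannot be compared with <")
```

The language designers made `<` raise an error here precisely because any total order one invents fails to interact sensibly with multiplication.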

The story is not over with the complex numbers. Just as one can construct the complex numbers from pairs of real numbers, so too can one construct the quaternions from pairs of complex numbers. Let c1 = r1 + r2i and c2 = r3 + r4i be complex numbers; then we can construct a quaternion as q = c1 + c2j where j is a special number. It turns out that every quaternion can be written as

r1 + r2i + r3j + r4k,

where i, j, and k are all special numbers similar to the complex imaginary unit (they are defined by i^2 = j^2 = k^2 = ijk = -1). So while the complex numbers are built from two real numbers, the quaternions are built from four real numbers. Every complex number r1 + r2i can be seen as a special type of quaternion: r1 + r2i + 0j + 0k. We can think of the quaternions as a four-dimensional space that has the complex numbers as a two-dimensional subset of it. We humans have a hard time visualizing such higher-dimensional spaces.

The quaternions are a full-fledged number system. They can be added, subtracted, multiplied, and divided with ease. Like the complex numbers, they fail to be totally ordered. But they have even less structure than the complex numbers. While the multiplication of complex numbers is commutative, that is, for all complex numbers c1 and c2 we have that c1c2 = c2c1, this is not true for all quaternions. This means there are quaternions q1 and q2 such that q1q2 differs from q2q1.
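Quaternion non-commutativity is easy to check in code. Below is a minimal Python sketch of the Hamilton product on 4-tuples (r, i, j, k) (the function name qmul is mine, for illustration); it shows that ij = k while ji = -k:

```python
def qmul(a, b):
    """Hamilton product of two quaternions given as (r, i, j, k) tuples."""
    r1, i1, j1, k1 = a
    r2, i2, j2, k2 = b
    return (r1*r2 - i1*i2 - j1*j2 - k1*k2,
            r1*i2 + i1*r2 + j1*k2 - k1*j2,
            r1*j2 - i1*k2 + j1*r2 + k1*i2,
            r1*k2 + i1*j2 - j1*i2 + k1*r2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
print(qmul(i, j))  # (0, 0, 0, 1):  ij = k
print(qmul(j, i))  # (0, 0, 0, -1): ji = -k
```

Swapping the order of the factors flips the sign of the result, so q1q2 and q2q1 really are different quaternions.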
This process of doubling a number system with a new special number is called the “Cayley–Dickson construction,” named after the mathematicians Arthur Cayley and Leonard Eugene Dickson. Given a certain type of number system, one gets another number system that is twice the dimension of the original system. The new system that one develops has less structure (i.e. fewer axioms) than the starting system.

If we apply the Cayley–Dickson construction to the quaternions, we get the number system called the octonions. This is an eight-dimensional number system. That means that each of the octonions can be written with eight real numbers as

r1 + r2i + r3j + r4k + r5l + r6m + r7n + r8p.

Although it is slightly complicated, it is known how to add, subtract, multiply, and divide octonions. Every quaternion can be written as a special type of octonion in which the last four coefficients are zero.

Like the quaternions, the octonions are neither totally ordered nor commutative. However, the octonions also fail to be associative. All the number systems that we have discussed so far possess the associative property: for any three elements a, b, and c, the two ways of multiplying them, a(bc) and (ab)c, are equal. The octonions do not. That is, there exist octonions o1, o2, and o3 such that o1(o2o3) ≠ (o1o2)o3.
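The Cayley–Dickson doubling described below can be sketched directly in code: represent a number in the doubled system as a pair of numbers from the previous system, with conjugation negating the newly added half. This is a minimal Python sketch using one common sign convention (conventions differ in the literature, but all produce the same algebras up to isomorphism); it exhibits a concrete failure of associativity among octonion basis units:

```python
def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def conj(x):
    # Conjugation negates the "new" half added by each doubling.
    return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

def add(x, y):
    return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

def mul(x, y):
    # Cayley-Dickson product: (a, b)(c, d) = (ac - conj(d)b, da + b conj(c))
    if not isinstance(x, tuple):
        return x * y
    a, b = x
    c, d = y
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

def octonion(*r):
    # Pack eight real coefficients into nested pairs, three doublings deep.
    return (((r[0], r[1]), (r[2], r[3])), ((r[4], r[5]), (r[6], r[7])))

e1 = octonion(0, 1, 0, 0, 0, 0, 0, 0)
e2 = octonion(0, 0, 1, 0, 0, 0, 0, 0)
e4 = octonion(0, 0, 0, 0, 1, 0, 0, 0)

# Associativity fails: (e1 e2) e4 and e1 (e2 e4) differ (by a sign).
print(mul(mul(e1, e2), e4) == mul(e1, mul(e2, e4)))  # False
```

The same `mul` reproduces the smaller systems: applied to plain pairs of reals it gives complex multiplication (so (0, 1) squared is (-1, 0)), and applied to pairs of such pairs it gives the non-commutative quaternions.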

We can go on with this doubling and get an even larger, 16-dimensional number system called the sedenions. In order to describe a sedenion, one would have to give 16 real numbers. Octonions are a special type of sedenion: their last eight coefficients are all zero. But researchers steer clear of sedenions because they lose an important property. While one can add, subtract, and multiply sedenions, there is no way to nicely divide them. Most physicists think this is beyond the pale and “just” mathematics. Even mathematicians find sedenions hard to deal with. One can go on to formulate 32-dimensional and 64-dimensional number systems, and so on. But they are usually not discussed because, as of now, they do not have many applications. We will concentrate on the octonions. A summary of all the number systems can be seen in this Venn diagram:
[Venn diagram of the nested number systems]
Let us discuss the applicability of these number systems. The real numbers are used in every aspect of physics. All quantities, measurements, and lengths of physical objects or processes are given as real numbers. Although complex numbers were formulated by mathematicians to help solve equations (i is the solution to the equation x^2 = -1), physicists started using complex numbers to discuss waves in the middle of the 19th century. In the 20th century, complex numbers became fundamental for the study of quantum mechanics. By now, the role of complex numbers is very important in many different branches of physics. The quaternions show up in physics but are not a major player. The octonions, the sedenions, and the larger number systems rarely arise in the physics literature.

The Laws of Mathematics That We Find

The usual view of these number systems is to think that the real numbers are fundamental while the complex, quaternions, and octonions are strange larger sets that keep mathematicians and some physicists busy. The larger number systems seem unimportant and less interesting.

Let us turn this view on its head. Rather than looking at the real numbers as central and the octonions as strange larger number systems, think of the octonions as fundamental and all the other number systems as just special subsets of octonions. The only number system that really exists is the octonions. To paraphrase Leopold Kronecker, “God made the octonions, all else is the work of man.” The octonions contain every number that we will ever need. (And, as we stated earlier, we can do the same trick with the sedenions and even the 64-dimensional number system. We shall fix our ideas with the octonions.)

Let us explore how we can derive all the properties of the number systems that we are familiar with. Although the multiplication in the octonions is not associative, if one wants an associative multiplication, one can look at a special subset of the octonions. (We are using the word “subset” but we need a special type of subset that respects the operations of the number system. Such subsets are called “subgroups,” “subfields,” or “sub-normed-division-algebras.”) So if one selects the subset of all octonions of the form

r1 + r2i + r3j + r4k + 0l + 0m + 0n + 0p,

then the multiplication will be associative (like the quaternions). If one further looks at all the octonions of the form

r1 + r2i + 0j + 0k + 0l + 0m + 0n + 0p,

then the multiplication will be commutative (like the complex numbers). If one further selects all the octonions of the form

r1 + 0i + 0j + 0k + 0l + 0m + 0n + 0p,

then one has a totally ordered number system (like the real numbers). All the axioms that one wants satisfied are found “sitting inside” the octonions.

This is not strange. Whenever we have a structure, we can focus on a subset of special elements that satisfies certain properties. Take, for example, any group. We can go through the elements of the group and pick out those X such that, for all elements Y, we have that XY = YX. This subset is a commutative (abelian) group. That is, it is a fact that in any group there is a subset that is a commutative group. We simply select those parts that satisfy the axiom and ignore (“bracket out”) those that do not. The point we are making is that if a system has a certain structure, special subsets of that system will satisfy more axioms than the starting system.
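This selection can be computed. A small Python sketch (the choice of group and the names are mine, for illustration): take the eight symmetries of a square as permutations of its corners, a non-commutative group, and pick out exactly those elements that commute with everything. The selected subset, called the center, is a commutative subgroup:

```python
from itertools import product

def compose(p, q):
    """Apply q first, then p; a permutation is a tuple where p[i] is the image of i."""
    return tuple(p[q[i]] for i in range(len(p)))

identity = (0, 1, 2, 3)
r = (1, 2, 3, 0)   # quarter-turn rotation of the square's corners
s = (0, 3, 2, 1)   # reflection across a diagonal
group = {identity, r, s}
while True:        # close the set under composition
    new = {compose(p, q) for p, q in product(group, group)} - group
    if not new:
        break
    group |= new

# Select exactly those X with XY = YX for every Y in the group: the center.
center = {x for x in group if all(compose(x, y) == compose(y, x) for y in group)}
print(len(group), sorted(center))  # 8 [(0, 1, 2, 3), (2, 3, 0, 1)]
```

The full group of 8 symmetries is not commutative (rotation then flip differs from flip then rotation), but the selected subset, the identity and the half-turn, commutes with everything, illustrating how bracketing out the badly behaved elements leaves a more structured system.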

This is similar to what we are doing in physics. We do not look at all phenomena. Rather, we pick out those phenomena that satisfy the requirements of symmetry and predictability. In mathematics, we describe the subset with the axiom that describes it. In physics, we describe the selected subset of phenomena with a law of nature.

We can describe the analogy we made with the following diagram:
[Diagram of the analogy: selecting axiom-satisfying subsets in mathematics alongside selecting symmetry-satisfying phenomena in physics]
Notice that the mathematics for a subset chosen to satisfy an axiom is easier than the mathematics for the whole set. This is because mathematicians work with axioms. They prove theorems and make models using axioms. When such axioms are missing, the mathematics gets more complicated or impossible.

Following our analogy, a subset of phenomena is easier to describe with a law of nature stated in mathematics. In contrast, when we look at the larger set of phenomena, it is harder to find that law of nature and the mathematics would be more complicated or impossible.

Working in Tandem and Going Forward

There is an important analogy between physics and mathematics. In both fields, if we do not look at the entirety of a system, but rather look at special subsets of the system, we see more structure. In physics we select certain phenomena (the ones that have a type of symmetry) and ignore the rest. In mathematics we select certain subsets of structures and ignore the rest. These two bracketing operations work hand in hand.

The job of physics is to formulate a function from the collection of observed physical phenomena to mathematical structure:
observed physical phenomena → mathematical structure.

That is, we have to give mathematical structure to the world we observe. As physics advances and we try to understand more and more observed physical phenomena, we need larger and larger classes of mathematics. In terms of this function, if we are to enlarge the input of the function, we need to enlarge the output of the function.

There are many examples of this broadening of physics and mathematics.

When physicists started working with quantum mechanics they realized that the totally ordered real numbers are too restrictive for their needs. They required a number system with fewer axioms. They found the complex numbers.
When Albert Einstein wanted to describe general relativity, he realized that the mathematical structure of Euclidean space with its axiom of flatness (Euclid’s fifth axiom) was too restrictive. He needed curved, non-Euclidean space to describe the spacetime of general relativity.
In quantum mechanics it is known that for some systems, if we first measure X and then Y, we will get different results than first measuring Y and then measuring X. In order to describe this situation mathematically, physicists needed to leave the nice world of commutativity. They required the larger class of structures in which commutativity is not assumed.
When Boltzmann and Gibbs started talking about statistical mechanics, they realized that the laws they were formulating were no longer deterministic. Outcomes of experiments no longer either happen (p(X) = 1) or do not happen (p(X) = 0). Rather, with statistical mechanics one needs probability theory: the chance of a certain outcome of an experiment is a probability p(X) that is an element of the infinite set [0,1] rather than the restrictive finite subset {0,1}.
When scientists started talking about the logic of quantum events, they realized that the usual logic, which is distributive, is too restrictive. They needed to formulate the larger class of logics in which the distributive axiom does not necessarily hold true. This is now called quantum logic.
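The measurement-order example in the list above can be made concrete with matrices, the structures quantum mechanics actually uses for observables. A minimal sketch with two of the Pauli matrices (plain nested lists stand in for a matrix library):

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Pauli matrices sigma_x and sigma_z, which model incompatible measurements.
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

print(matmul(X, Z))  # [[0, -1], [1, 0]]
print(matmul(Z, X))  # [[0, 1], [-1, 0]]
```

Measuring in one order and then the other corresponds to multiplying in one order and then the other, and the two products genuinely differ, which is why the mathematics of quantum mechanics cannot assume commutativity.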

Paul A.M. Dirac understood this loosening of axioms about 85 years ago when he wrote the following:
The steady progress of physics requires for its theoretical formulation a mathematics which gets continually more advanced. This is only natural and to be expected. What however was not expected by the scientific workers of the last century was the particular form that the line of advancement of mathematics would take, namely it was expected that mathematics would get more and more complicated, but would rest on a permanent basis of axioms and definitions, while actually the modern physical developments have required a mathematics that continually shifts its foundation and gets more abstract. Non-Euclidean geometry and noncommutative algebra, which were at one time considered to be purely fictions of the mind and pastimes of logical thinkers, have now been found to be very necessary for the description of general facts of the physical world. It seems likely that this process of increasing abstraction will continue in the future and the advance in physics is to be associated with continual modification and generalisation of the axioms at the base of mathematics rather than with a logical development of any one mathematical scheme on a fixed foundation.1

As physics progresses and we become aware of more and more physical phenomena, larger and larger classes of mathematical structures are needed and we get them by looking at fewer and fewer axioms. Dirac calls these mathematical structures with fewer axioms “increasing abstraction” and “generalisations of the axioms.” There is no doubt that if Dirac lived now, he would talk about the rise of octonions and even the sedenions within the needed number systems.

In order to describe more phenomena, we will need larger and larger classes of mathematical structures and hence fewer and fewer axioms. What is the logical conclusion to this trend? How far can this go? Physics wants to describe more and more phenomena in our universe. Let us say we were interested in describing all phenomena in our universe. What type of mathematics would we need? How many axioms would be needed for mathematical structure to describe all the phenomena? Of course, it is hard to predict, but it is even harder not to speculate. One possible conclusion would be that if we look at the universe in totality and not bracket any subset of phenomena, the mathematics we would need would have no axioms at all. That is, the universe in totality is devoid of structure and needs no axioms to describe it. Total lawlessness! The mathematics is just plain sets without structure. This would finally eliminate all metaphysics when dealing with the laws of nature and mathematical structure. It is only the way we look at the universe that gives us the illusion of structure.

With this view of physics we come to even more profound questions. These are the future projects of science. If the structure that we see is illusory and comes about from the way we look at certain phenomena, then why do we see this illusion? Instead of looking at the laws of nature that are formulated by scientists, we have to look at scientists and the way they pick out (subsets of phenomena and their concomitant) laws of nature. What is it about human beings that renders us so good at being sieves? Rather than looking at the universe, we should look at the way we look at the universe.

I am grateful to Jim Cox, Karen Kletter, Avi Rabinowitz, and Karl Svozil for many helpful conversations.

Noson S. Yanofsky has a Ph.D. in mathematics from The Graduate Center of The City University of New York. He is a professor of computer science at Brooklyn College of The City University of New York. In addition to writing research papers he has co-authored Quantum Computing for Computer Scientists and authored The Outer Limits of Reason: What Science, Mathematics, and Logic Cannot Tell Us. Noson lives in Brooklyn with his wife and four children.

References

1. Dirac, P.A.M. Quantised singularities in the electromagnetic field. Proceedings of the Royal Society of London A 133, 60–72 (1931).

Additional Reading

Dray, T. & Manogue, C.A. The Geometry of the Octonions World Scientific Publishing Company, Singapore (2015).
Eddington, A.S. The Philosophy of Physical Science Cambridge University Press, New York, NY (1939).
van Fraassen, B.C. Laws and Symmetry Oxford University Press, New York, NY (1989).
Greene, B. The Hidden Reality: Parallel Universes and the Deep Laws of the Cosmos Knopf, New York, NY (2011).
Stenger, V.J. The Comprehensible Cosmos: Where Do the Laws of Physics Come From? Prometheus Books, Amherst, NY (2006).
Tegmark, M. Our Mathematical Universe: My Quest for the Ultimate Nature of Reality Knopf, New York, NY (2014).
Yanofsky, N.S. The Outer Limits of Reason: What Science, Mathematics, and Logic Cannot Tell Us MIT Press, Cambridge, MA (2013).
Lead image collage credits: Marina Sun / Shutterstock; Pixabay

This article was originally published in our “The Absurd” issue in June, 2017.
