
Wednesday, July 18, 2018

Neuroprosthetics

From Wikipedia, the free encyclopedia
Neuroprosthetics (also called neural prosthetics) is a discipline related to neuroscience and biomedical engineering concerned with developing neural prostheses. They are sometimes contrasted with a brain–computer interface, which connects the brain to a computer rather than a device meant to replace missing biological functionality.

Neural prostheses are a series of devices that can substitute a motor, sensory or cognitive modality that might have been damaged as a result of an injury or a disease. Cochlear implants provide an example of such devices. These devices substitute the functions performed by the ear drum and stapes while simulating the frequency analysis performed in the cochlea. A microphone on an external unit gathers the sound and processes it; the processed signal is then transferred to an implanted unit that stimulates the auditory nerve through a microelectrode array. Through the replacement or augmentation of damaged senses, these devices intend to improve the quality of life for those with disabilities.

These implantable devices are also commonly used in animal experimentation as a tool to aid neuroscientists in developing a greater understanding of the brain and its functioning. Because the electrical signals picked up by electrodes implanted in the subject's brain are monitored wirelessly, the subject can be studied without the recording device itself affecting the results.

Accurately probing and recording the electrical signals in the brain would help better understand the relationship among a local population of neurons that are responsible for a specific function.

Neural implants are designed to be as small as possible in order to be minimally invasive, particularly in areas surrounding the brain, eyes or cochlea. These implants typically communicate with their prosthetic counterparts wirelessly. Additionally, power is currently received through wireless power transmission through the skin. The tissue surrounding the implant is usually highly sensitive to temperature rise, meaning that power consumption must be minimal in order to prevent tissue damage.[2]

The neuroprosthetic currently undergoing the most widespread use is the cochlear implant, with over 300,000 in use worldwide as of 2012.[3]

History

The first known cochlear implant was created in 1957. Other milestones include the first motor prosthesis for foot drop in hemiplegia in 1961, the first auditory brainstem implant in 1977 and a peripheral nerve bridge implanted into the spinal cord of an adult rat in 1981. In 1988, the lumbar anterior root implant and functional electrical stimulation (FES) facilitated standing and walking, respectively, for a group of paraplegics.[4]

Regarding the development of electrodes implanted in the brain, an early difficulty was reliably locating the electrodes, originally done by inserting the electrodes with needles and breaking off the needles at the desired depth. Recent systems utilize more advanced probes, such as those used in deep brain stimulation to alleviate the symptoms of Parkinson's disease. The problem with either approach is that the brain floats free in the skull while the probe does not, and relatively minor impacts, such as a low speed car accident, are potentially damaging. Some researchers, such as Kensall Wise at the University of Michigan, have proposed tethering 'electrodes to be mounted on the exterior surface of the brain' to the inner surface of the skull. However, even if successful, tethering would not resolve the problem in devices meant to be inserted deep into the brain, such as in the case of deep brain stimulation (DBS).

Visual prosthetics

A visual prosthesis can create a sense of image by electrically stimulating neurons in the visual system. A camera wirelessly transmits images to an implant, which maps the image across an array of electrodes. To form a usable image, the array must effectively stimulate 600-1000 locations; stimulating these optic neurons in the retina thus creates an image. Stimulation can also be applied anywhere along the visual signal's pathway: the optic nerve can be stimulated to create an image, or the visual cortex can be stimulated, although clinical tests have proven most successful for retinal implants.
A visual prosthesis system consists of an external (or implantable) imaging system which acquires and processes the video. Power and data are transmitted to the implant wirelessly by the external unit. The implant uses the received power and data to convert the digital data to an analog output, which is delivered to the nerve via microelectrodes.
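The image-to-electrode mapping step described above can be sketched as a simple average-pooling of a camera frame onto an electrode grid. The 25 x 40 grid (1000 sites, matching the 600-1000 locations cited above) and the helper name are illustrative, not taken from any actual device:

```python
import numpy as np

def map_to_electrodes(frame, grid_shape=(25, 40)):
    """Average-pool a grayscale camera frame down to one intensity
    value per electrode site (here 25 x 40 = 1000 sites)."""
    h, w = frame.shape
    gh, gw = grid_shape
    # Trim so the frame divides evenly into electrode cells.
    frame = frame[:h - h % gh, :w - w % gw]
    cells = frame.reshape(gh, frame.shape[0] // gh,
                          gw, frame.shape[1] // gw)
    return cells.mean(axis=(1, 3))  # one value per electrode

frame = np.random.rand(480, 640)   # stand-in for a camera image
levels = map_to_electrodes(frame)
print(levels.shape)                # (25, 40)
```

Each output value would then set the stimulation strength of one electrode; real systems apply further processing (contrast enhancement, safe charge limits) before stimulation.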

Photoreceptors are the specialized neurons that convert photons into electrical signals. They are part of the retina, a multilayer neural structure about 200 µm thick that lines the back of the eye. The processed signal is sent to the brain through the optic nerve. If any part of this pathway is damaged, blindness can occur.

Blindness can result from damage to the optical pathway (cornea, aqueous humor, crystalline lens, and vitreous), whether through accident or disease. The two most common retinal degenerative diseases that result in blindness secondary to photoreceptor loss are age-related macular degeneration (AMD) and retinitis pigmentosa (RP).

The first clinical trial of a permanently implanted retinal prosthesis was a device with a passive microphotodiode array with 3500 elements.[5] This trial was implemented by Optobionics, Inc., in 2000. In 2002, Second Sight Medical Products, Inc. (Sylmar, CA) began a trial with a prototype epiretinal implant with 16 electrodes. The subjects were six individuals with bare light perception secondary to RP. The subjects demonstrated their ability to distinguish between three common objects (plate, cup, and knife) at levels statistically above chance. An active subretinal device developed by Retina Implant GmbH (Reutlingen, Germany) began clinical trials in 2006. An IC with 1500 microphotodiodes was implanted under the retina. The microphotodiodes modulate current pulses based on the amount of light incident on each photodiode.[6]

The seminal experimental work towards the development of visual prostheses was done by cortical stimulation using a grid of large surface electrodes. In 1968 Giles Brindley implanted an 80-electrode device on the visual cortical surface of a 52-year-old blind woman. As a result of the stimulation the patient was able to see phosphenes in 40 different positions of the visual field.[7] This experiment showed that an implanted electrical stimulator device could restore some degree of vision. Recent efforts in visual cortex prosthetics have evaluated the efficacy of visual cortex stimulation in a non-human primate. In this experiment, after a training and mapping process, the monkey was able to perform the same visual saccade task with both light and electrical stimulation.

The requirements for a high resolution retinal prosthesis should follow from the needs and desires of blind individuals who will benefit from the device. Interactions with these patients indicate that mobility without a cane, face recognition and reading are the main necessary enabling capabilities.[8]

The results and implications of fully functional visual prostheses are exciting, but the challenges are serious. For a good-quality image to be mapped onto the retina, a large number of micro-scale electrodes are needed. Image quality also depends on how much information can be sent over the wireless link, and this large amount of information must be received and processed by the implant without the excessive power dissipation that can damage tissue. The size of the implant is also of great concern; any implant should be minimally invasive.[8]

With this new technology, several scientists, including Karen Moxon at Drexel, John Chapin at SUNY, and Miguel Nicolelis at Duke University, started research on the design of a sophisticated visual prosthesis. Other scientists[who?] have disagreed with the focus of their research, arguing that the basic research and design of the densely populated microscopic wire was not sophisticated enough to proceed.

Auditory prosthetics

Cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs) are the three main categories of auditory prostheses. CI electrode arrays are implanted in the cochlea, ABI electrode arrays stimulate the cochlear nucleus complex in the lower brainstem, and AMIs stimulate auditory neurons in the inferior colliculus. Of these three categories, cochlear implants have been the most successful. Today the Advanced Bionics Corporation, the Cochlear Corporation and the Med-El Corporation are the major commercial providers of cochlear implants.

In contrast to traditional hearing aids that amplify sound and send it through the external ear, cochlear implants acquire and process the sound and convert it into electrical energy for subsequent delivery to the auditory nerve. The microphone of the CI system receives sound from the external environment and sends it to the processor. The processor digitizes the sound and filters it into separate frequency bands, which are sent to the tonotopic region of the cochlea that approximately corresponds to those frequencies.
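The filter-into-bands step can be sketched as a simple bandpass filterbank followed by a crude rectify-and-smooth envelope detector. The band edges and filter orders below are illustrative assumptions, not the proprietary strategy of any commercial processor:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def ci_band_envelopes(audio, fs, edges=(200, 400, 800, 1600, 3200, 6400)):
    """Split audio into frequency bands of the kind a CI processor
    routes to successive (tonotopic) electrode sites, and return the
    envelope of each band."""
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
        band = sosfilt(sos, audio)
        # Rectify then low-pass: a simple envelope detector.
        env_sos = butter(2, 50, btype='lowpass', fs=fs, output='sos')
        envelopes.append(sosfilt(env_sos, np.abs(band)))
    return np.array(envelopes)   # one row per electrode band

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)   # 1 kHz test tone
env = ci_band_envelopes(tone, fs)
# The band containing 1 kHz (index 2, 800-1600 Hz) carries the most energy.
print(np.argmax(env.mean(axis=1)))    # 2
```

In a real implant, each row of envelopes would modulate the current pulses delivered to one electrode along the cochlea.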

In 1957, French researchers A. Djourno and C. Eyries, with the help of D. Kayser, provided the first detailed description of directly stimulating the auditory nerve in a human subject.[9] The individuals described hearing chirping sounds during stimulation. In 1972, the first portable cochlear implant system was implanted in an adult at the House Ear Clinic. The U.S. Food and Drug Administration (FDA) formally approved the marketing of the House-3M cochlear implant in November 1984.[10]

Improved cochlear implant performance depends not only on understanding the physical and biophysical limitations of implant stimulation but also on an understanding of the brain's pattern-processing requirements. Modern signal processing represents the most important speech information while also providing the brain the pattern-recognition information it needs. Pattern recognition in the brain is more effective than algorithmic preprocessing at identifying important features in speech. A combination of engineering, signal processing, biophysics, and cognitive neuroscience was necessary to produce the right balance of technology to maximize the performance of auditory prostheses.[11]

Cochlear implants have also been used to enable spoken-language development in congenitally deaf children, with remarkable success for early implantation (before 2-4 years of age).[12] About 80,000 children have been implanted worldwide.

The concept of combining simultaneous electric-acoustic stimulation (EAS) for the purposes of better hearing was first described by C. von Ilberg and J. Kiefer, of the Universitätsklinik Frankfurt, Germany, in 1999.[13] That same year the first EAS patient was implanted. Since the early 2000s the FDA has been involved in a clinical trial of a device termed the "Hybrid" by Cochlear Corporation. This trial is aimed at examining the usefulness of cochlear implantation in patients with residual low-frequency hearing. The "Hybrid" uses a shorter electrode than the standard cochlear implant; because the electrode is shorter, it stimulates the basal region of the cochlea and hence the high-frequency tonotopic region. In theory these devices would benefit patients with significant low-frequency residual hearing who have lost perception in the speech frequency range and hence have decreased discrimination scores.[14]

Prosthetics for pain relief

The SCS (Spinal Cord Stimulator) device has two main components: an electrode and a generator. The technical goal of SCS for neuropathic pain is to mask the area of a patient's pain with a stimulation induced tingling, known as "paresthesia", because this overlap is necessary (but not sufficient) to achieve pain relief.[15] Paresthesia coverage depends upon which afferent nerves are stimulated. The most easily recruited by a dorsal midline electrode, close to the pial surface of spinal cord, are the large dorsal column afferents, which produce broad paresthesia covering segments caudally.

In ancient times electrogenic fish were used to relieve pain. Healers had developed specific and detailed techniques to exploit the shock-generating qualities of the fish to treat various types of pain, including headache. Because of the awkwardness of using a living shock generator, a fair level of skill was required to deliver the therapy to the target for the proper amount of time (including keeping the fish alive as long as possible). This electroanalgesia was the first deliberate application of electricity. By the nineteenth century, most western physicians were offering their patients electrotherapy delivered by portable generator.[16] In the mid-1960s, however, three things converged to ensure the future of electrical stimulation.
  1. Pacemaker technology, which had its start in 1950, became available.
  2. Melzack and Wall published their gate control theory of pain, which proposed that the transmission of pain could be blocked by stimulation of large afferent fibers.[17]
  3. Pioneering physicians became interested in stimulating the nervous system to relieve patients from pain.
The design options for electrodes include their size, shape, arrangement, number, and assignment of contacts, and how the electrode is implanted. The design options for the pulse generator include the power source, target anatomic placement location, current or voltage source, pulse rate, pulse width, and number of independent channels. Programming options are very numerous (a four-contact electrode offers 50 functional bipolar combinations). Current devices use computerized equipment to find the best options for use. This reprogramming capability compensates for postural changes, electrode migration, changes in pain location, and suboptimal electrode placement.[18]
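The "50 functional bipolar combinations" figure for a four-contact electrode can be checked by enumeration: each contact is an anode (+), a cathode (-), or off, and a functional bipolar setting needs at least one anode and at least one cathode:

```python
from itertools import product

def bipolar_combinations(n_contacts):
    """Count contact assignments with at least one anode (+)
    and at least one cathode (-); remaining contacts are off (0)."""
    count = 0
    for states in product('+-0', repeat=n_contacts):
        if '+' in states and '-' in states:
            count += 1
    return count

print(bipolar_combinations(4))  # 50
```

The same number follows from inclusion-exclusion: 3^4 total states, minus 2^4 with no anode, minus 2^4 with no cathode, plus the 1 all-off state, i.e. 81 - 16 - 16 + 1 = 50.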

Motor prosthetics

Devices that support the function of the autonomic nervous system include the implant for bladder control. In the somatic nervous system, attempts to aid conscious control of movement include functional electrical stimulation and the lumbar anterior root stimulator.

Bladder control implants

Where a spinal cord lesion leads to paraplegia, patients have difficulty emptying their bladders and this can cause infection. From 1969 onwards Brindley developed the sacral anterior root stimulator, with successful human trials from the early 1980s onwards.[19] This device is implanted over the sacral anterior root ganglia of the spinal cord; controlled by an external transmitter, it delivers intermittent stimulation which improves bladder emptying. It also assists in defecation and enables male patients to have a sustained full erection.

The related procedure of sacral nerve stimulation is for the control of incontinence in able-bodied patients.[20]

Motor prosthetics for conscious control of movement

Researchers are currently investigating and building motor neuroprosthetics that will help restore movement and the ability to communicate with the outside world to persons with motor disabilities such as tetraplegia or amyotrophic lateral sclerosis. Research has found that the striatum plays a crucial role in motor sensory learning: in one experiment, firing rates in lab rats' striatum were recorded at higher levels after the animals performed a task repeatedly.

To capture electrical signals from the brain, scientists have developed microelectrode arrays smaller than a square centimeter that can be implanted in the skull to record electrical activity, transducing recorded information through a thin cable. After decades of research in monkeys, neuroscientists have been able to decode neuronal signals into movements. Completing the translation, researchers have built interfaces that allow patients to move computer cursors, and they are beginning to build robotic limbs and exoskeletons that patients can control by thinking about movement.

The technology behind motor neuroprostheses is still in its infancy. Investigators and study participants continue to experiment with different ways of using the prostheses. Having a patient think about clenching a fist, for example, produces a different result than having him or her think about tapping a finger. The filters used in the prostheses are also being fine-tuned, and in the future, doctors hope to create an implant capable of transmitting signals from inside the skull wirelessly, as opposed to through a cable.

Preliminary clinical trials suggest that the devices are safe and that they have the potential to be effective.[citation needed] Some patients have worn the devices for over two years with few, if any, ill effects.[citation needed]

Prior to these advancements, Philip Kennedy (Emory and Georgia Tech) had an operable if somewhat primitive system which allowed an individual with paralysis to spell words by modulating their brain activity. Kennedy's device used two neurotrophic electrodes: the first was implanted in an intact motor cortical region (e.g. finger representation area) and was used to move a cursor among a group of letters. The second was implanted in a different motor region and was used to indicate the selection.[21]

Developments continue in replacing lost arms with cybernetic replacements by using nerves normally connected to the pectoralis muscles. These arms allow a slightly limited range of motion, and reportedly are slated to feature sensors for detecting pressure and temperature.[22]

Dr. Todd Kuiken at Northwestern University and Rehabilitation Institute of Chicago has developed a method called targeted reinnervation for an amputee to control motorized prosthetic devices and to regain sensory feedback.

Sensory/motor prosthetics

In 2002, a multielectrode array of 100 electrodes, which now forms the sensor part of a BrainGate, was implanted directly into the median nerve fibers of scientist Kevin Warwick. The recorded signals were used to control a robot arm developed by Warwick's colleague Peter Kyberd, which was able to mimic the actions of Warwick's own arm.[23] Additionally, a form of sensory feedback was provided via the implant by passing small electrical currents into the nerve. This caused a contraction of the first lumbrical muscle of the hand, and it was this movement that was perceived.[23]

Obstacles

Mathematical modelling

Accurate characterization of the nonlinear input/output (I/O) parameters of the normally functioning tissue to be replaced is paramount to designing a prosthetic that mimics normal biologic synaptic signals.[24][25] Mathematical modeling of these signals is a complex task "because of the nonlinear dynamics inherent in the cellular/molecular mechanisms comprising neurons and their synaptic connections".[26][27][28] The output of nearly all brain neurons is dependent on which post-synaptic inputs are active and in what order the inputs are received (their spatial and temporal properties, respectively).[29]

Once the I/O parameters are modeled mathematically, integrated circuits are designed to mimic the normal biologic signals. For the prosthetic to perform like normal tissue, it must process the input signals, a process known as transformation, in the same way as normal tissue.
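As a deliberately simple illustration of a nonlinear input-to-output transformation (real prosthetic models use far richer kernels, e.g. Volterra series; all parameter values below are generic textbook assumptions), a leaky integrate-and-fire neuron maps an input current to spike times through leaky integration plus a threshold nonlinearity:

```python
import numpy as np

def lif_response(input_current, dt=1e-4, tau=0.02, r=1e7,
                 v_rest=-0.07, v_thresh=-0.05, v_reset=-0.07):
    """Leaky integrate-and-fire neuron: membrane potential v (volts)
    leaks toward v_rest, integrates input current through membrane
    resistance r, and emits a spike when it crosses v_thresh."""
    v = v_rest
    spikes = []
    for i, current in enumerate(input_current):
        # Euler step of the leaky-integration dynamics.
        v += dt / tau * (v_rest - v + r * current)
        if v >= v_thresh:            # the nonlinearity: threshold + reset
            spikes.append(i * dt)
            v = v_reset
    return spikes

# A constant 3 nA input for 1 s drives repetitive firing.
spikes = lif_response(np.full(10000, 3e-9))
print(len(spikes) > 0)  # True
```

The same input delivered at half the amplitude would never cross threshold, illustrating why a linear model cannot capture this I/O relationship.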

Size

Implantable devices must be very small to be implanted directly in the brain, roughly the size of a quarter. One example of a microimplantable electrode array is the Utah array.[30]

Wireless controlling devices can be mounted outside of the skull and should be smaller than a pager.

Power consumption

Power consumption drives battery size, and optimization of the implanted circuits reduces power needs. Implanted devices currently need on-board power sources; once the battery runs out, surgery is needed to replace the unit, so longer battery life means fewer replacement surgeries. One option for recharging implant batteries without surgery or wires, already used in powered toothbrushes, is inductive coupling.[citation needed] Another strategy is to convert electromagnetic energy into electrical energy, as in radio-frequency identification tags.

Biocompatibility

Cognitive prostheses are implanted directly in the brain, so biocompatibility is a very important obstacle to overcome. Materials used in the housing of the device, the electrode material (such as iridium oxide[31]), and the electrode insulation must be chosen for long-term implantation. Applicable standards include ISO 14708-3:2008, Implants for surgery - Active implantable medical devices - Part 3: Implantable neurostimulators.

Crossing the blood–brain barrier can introduce pathogens or other materials that may cause an immune response. The brain has its own immune system that acts differently from the immune system of the rest of the body.

Open questions include how this affects material choice, and whether the brain has unique phages that act differently and may affect materials thought to be biocompatible in other areas of the body.

Data transmission

Wireless transmission is being developed to allow continuous recording of neuronal signals in individuals' daily lives. This will allow physicians and clinicians to capture more data, ensuring that short-term events like epileptic seizures can be recorded, allowing better treatment and characterization of neural disease.

A small, lightweight device that allows constant recording of primate brain neurons has been developed at Stanford University.[32] This technology also enables neuroscientists to study the brain outside the controlled environment of a lab.

Methods of data transmission must be robust and secure. Neurosecurity is a new issue: makers of cognitive implants must prevent unwanted downloading of information or thoughts[citation needed] from the device, and uploading of detrimental data that may interrupt its function.

Correct implantation

Implantation of the device presents many problems. First, the correct presynaptic inputs must be wired to the correct postsynaptic inputs on the device. Secondly, the outputs from the device must be targeted correctly on the desired tissue. Thirdly, the brain must learn how to use the implant. Various studies in brain plasticity suggest that this may be possible through exercises designed with proper motivation.

Technologies involved

Local field potentials

Local field potentials (LFPs) are electrophysiological signals that are related to the sum of all dendritic synaptic activity within a volume of tissue. Recent studies suggest goals and expected value are high-level cognitive functions that can be used for neural cognitive prostheses.[33]

Automated movable electrical probes

One hurdle to overcome is the long-term implantation of electrodes. If the electrodes are moved by physical shock or the brain moves relative to the electrodes, they could end up recording different nerves. Adjustment of the electrodes is necessary to maintain an optimal signal, but individually adjusting multielectrode arrays is a very tedious and time-consuming process; automatically adjusting electrodes would mitigate this problem. Anderson's group is currently collaborating with Yu-Chong Tai's lab and the Burdick lab (all at Caltech) to make such a system, which uses electrolysis-based actuators to independently adjust electrodes in a chronically implanted array.[35]

Image-guided surgical techniques

Image-guided surgery is used to precisely position brain implants.

Spintronics

From Wikipedia, the free encyclopedia
Spintronics (a portmanteau meaning spin transport electronics), also known as spin electronics, is the study of the intrinsic spin of the electron and its associated magnetic moment, in addition to its fundamental electronic charge, in solid-state devices.

Spintronics fundamentally differs from traditional electronics in that, in addition to charge state, electron spins are exploited as a further degree of freedom, with implications in the efficiency of data storage and transfer. Spintronic systems are most often realised in dilute magnetic semiconductors (DMS) and Heusler alloys and are of particular interest in the field of quantum computing.

History

Spintronics emerged from discoveries in the 1980s concerning spin-dependent electron transport phenomena in solid-state devices. These include the observation of spin-polarized electron injection from a ferromagnetic metal into a normal metal by Johnson and Silsbee (1985)[5] and the discovery of giant magnetoresistance independently by Albert Fert et al.[6] and Peter Grünberg et al. (1988). The origins of spintronics can be traced to the ferromagnet/superconductor tunneling experiments pioneered by Meservey and Tedrow and to initial experiments on magnetic tunnel junctions by Julliere in the 1970s.[8] The use of semiconductors for spintronics began with the theoretical proposal of a spin field-effect transistor by Datta and Das in 1990[9] and of electric dipole spin resonance by Rashba in 1960.[10]

Theory

The spin of the electron is an intrinsic angular momentum that is separate from the angular momentum due to its orbital motion. The magnitude of the projection of the electron's spin along an arbitrary axis is ħ/2, implying that the electron acts as a fermion by the spin-statistics theorem. Like orbital angular momentum, the spin has an associated magnetic moment, the magnitude of which is expressed as

μ = (√3/2) (q/m_e) ħ.
In a solid the spins of many electrons can act together to affect the magnetic and electronic properties of a material, for example endowing it with a permanent magnetic moment as in a ferromagnet.
In many materials, electron spins are equally present in both the up and the down state, and no transport properties are dependent on spin. A spintronic device requires generation or manipulation of a spin-polarized population of electrons, resulting in an excess of spin up or spin down electrons. The polarization of any spin dependent property X can be written as
P_X = (X↑ - X↓) / (X↑ + X↓).
A net spin polarization can be achieved either by creating an equilibrium energy split between spin up and spin down (for example, by placing the material in a large magnetic field via the Zeeman effect, or through the exchange energy present in a ferromagnet) or by forcing the system out of equilibrium. The period of time that such a non-equilibrium population can be maintained is known as the spin lifetime, τ.

In a diffusive conductor, a spin diffusion length λ can be defined as the distance over which a non-equilibrium spin population can propagate. Spin lifetimes of conduction electrons in metals are relatively short (typically less than 1 nanosecond). An important research area is devoted to extending this lifetime to technologically relevant timescales.
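The polarization definition above, combined with exponential relaxation over the spin diffusion length, can be put into a short sketch (the population numbers and length scales are illustrative, not measured values):

```python
import numpy as np

def polarization(n_up, n_down):
    """P_X = (X_up - X_down) / (X_up + X_down), as defined above."""
    return (n_up - n_down) / (n_up + n_down)

def decayed_polarization(p0, x, lam):
    """Outside a spin injector, a non-equilibrium polarization relaxes
    over the spin diffusion length lam: P(x) = P0 * exp(-x / lam)."""
    return p0 * np.exp(-x / lam)

p0 = polarization(70, 30)        # 40% spin-up excess at the injector
x = np.linspace(0, 5e-6, 6)      # distance into the conductor (m)
print(decayed_polarization(p0, x, lam=1e-6)[0])   # 0.4 at the injector
```

One diffusion length into the conductor the polarization has fallen by a factor of e, which is why λ (set here to 1 µm) sets the useful length scale of a spintronic device.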

A plot showing a spin up, spin down, and the resulting spin polarized population of electrons. Inside a spin injector, the polarization is constant, while outside the injector, the polarization decays exponentially to zero as the spin up and down populations go to equilibrium.

The mechanisms of decay for a spin polarized population can be broadly classified as spin-flip scattering and spin dephasing. Spin-flip scattering is a process inside a solid that does not conserve spin, and can therefore switch an incoming spin up state into an outgoing spin down state. Spin dephasing is the process wherein a population of electrons with a common spin state becomes less polarized over time due to different rates of electron spin precession. In confined structures, spin dephasing can be suppressed, leading to spin lifetimes of milliseconds in semiconductor quantum dots at low temperatures.
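Spin dephasing can be illustrated numerically: give each spin in an ensemble a slightly different precession frequency and the net transverse polarization decays even though no individual spin flips. The mean frequency and spread below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ensemble of spins, all starting polarized along x, each precessing
# at its own Larmor frequency drawn from a narrow distribution.
n_spins = 10000
omega = rng.normal(loc=1e9, scale=5e7, size=n_spins)   # rad/s

def net_polarization(t):
    """Ensemble-averaged transverse polarization at time t:
    the mean of cos(omega * t) over all spins."""
    return np.cos(omega * t).mean()

print(net_polarization(0.0))              # 1.0 - fully polarized
print(abs(net_polarization(1e-7)) < 0.1)  # True - dephased by ~100 ns
```

Narrowing the frequency spread (smaller scale) lengthens the dephasing time, which is the sense in which confined structures with uniform environments can suppress dephasing.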

Superconductors can enhance central effects in spintronics such as magnetoresistance effects, spin lifetimes and dissipationless spin-currents.[11][12]

The simplest method of generating a spin-polarised current in a metal is to pass the current through a ferromagnetic material. The most common applications of this effect involve giant magnetoresistance (GMR) devices. A typical GMR device consists of at least two layers of ferromagnetic materials separated by a spacer layer. When the two magnetization vectors of the ferromagnetic layers are aligned, the electrical resistance will be lower (so a higher current flows at constant voltage) than if the ferromagnetic layers are anti-aligned. This constitutes a magnetic field sensor.

Two variants of GMR have been applied in devices: (1) current-in-plane (CIP), where the electric current flows parallel to the layers and (2) current-perpendicular-to-plane (CPP), where the electric current flows in a direction perpendicular to the layers.
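The lower resistance of the aligned configuration described above can be reproduced with the standard two-current (Mott) model, in which spin-up and spin-down electrons conduct through independent parallel channels; the resistance values below are arbitrary illustrative numbers:

```python
def gmr_resistances(r_low, r_high):
    """Two-current model of a GMR spin valve: each spin channel sees
    r_low in a layer magnetized parallel to its spin and r_high in an
    anti-parallel layer; the two channels conduct in parallel."""
    # Parallel magnetizations: one all-low channel, one all-high channel.
    r_p = (2 * r_low * 2 * r_high) / (2 * r_low + 2 * r_high)
    # Antiparallel: every channel sees one low and one high layer.
    r_ap = (r_low + r_high) / 2
    return r_p, r_ap

r_p, r_ap = gmr_resistances(1.0, 4.0)
print(r_p < r_ap)            # True: aligned layers conduct better
print((r_ap - r_p) / r_p)    # GMR ratio, (r_high - r_low)^2 / (4 r_low r_high)
```

The existence of a fast all-low-resistance channel in the parallel state is what short-circuits the structure and produces the lower resistance the sensor exploits.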

Other metal-based spintronics devices:
  • Tunnel magnetoresistance (TMR), where CPP transport is achieved by using quantum-mechanical tunneling of electrons through a thin insulator separating ferromagnetic layers.
  • Spin-transfer torque, where a current of spin-polarized electrons is used to control the magnetization direction of ferromagnetic electrodes in the device.
  • Spin-wave logic devices carry information in the phase. Interference and spin-wave scattering can perform logic operations.

Spintronic-logic devices

Non-volatile spin-logic devices that enable scaling are being extensively studied.[13] Spin-transfer-torque-based logic devices that use spins and magnets for information processing have been proposed.[14][15] These devices are part of the ITRS exploratory roadmap. Logic-in-memory applications are already in the development stage.[16][17] A 2017 review article can be found in Materials Today.[18]

Applications

Read heads of magnetic hard drives are based on the GMR or TMR effect.

Motorola developed a first-generation 256 kb magnetoresistive random-access memory (MRAM) based on a single magnetic tunnel junction and a single transistor that has a read/write cycle of under 50 nanoseconds.[19] Everspin has since developed a 4 Mb version.[20] Two second-generation MRAM techniques are in development: thermal-assisted switching (TAS)[21] and spin-transfer torque (STT).[22]

Another design, racetrack memory, encodes information in the direction of magnetization between domain walls of a ferromagnetic wire.

Magnetic sensors can use the GMR effect.[citation needed]

In 2012 persistent spin helices of synchronized electrons were made to persist for more than a nanosecond, a 30-fold increase, longer than the duration of a modern processor clock cycle.[23]

Semiconductor-based spintronic devices

Doped semiconductor materials display dilute ferromagnetism. In recent years, dilute magnetic oxides (DMOs), including ZnO-based and TiO2-based DMOs, have been the subject of numerous experimental and computational investigations.[24][25] Approaches to injecting spin into a semiconductor include using non-oxide ferromagnetic semiconductor sources (like manganese-doped gallium arsenide, GaMnAs),[26] increasing the interface resistance with a tunnel barrier,[27] or using hot-electron injection.[28]

Spin detection in semiconductors has been addressed with multiple techniques:
  • Faraday/Kerr rotation of transmitted/reflected photons[29]
  • Circular polarization analysis of electroluminescence[30]
  • Nonlocal spin valve (adapted from Johnson and Silsbee's work with metals)[31]
  • Ballistic spin filtering[32]
The latter technique was used to overcome the lack of spin-orbit interaction and materials issues to achieve spin transport in silicon.[33]

Because external magnetic fields (and stray fields from magnetic contacts) can cause large Hall effects and magnetoresistance in semiconductors (which mimic spin-valve effects), the only conclusive evidence of spin transport in semiconductors is demonstration of spin precession and dephasing in a magnetic field non-collinear to the injected spin orientation, called the Hanle effect.
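The precession-and-dephasing signature can be sketched numerically: in the steady state, spins injected with a finite lifetime and precessing at the Larmor frequency in a transverse field average down to a Lorentzian curve. A minimal illustration; the g-factor and spin lifetime below are assumed values for illustration, not taken from any particular experiment:

```python
# Hanle depolarization sketch: spins with lifetime tau precess at the
# Larmor frequency omega_L = g * mu_B * B / hbar in a transverse field B,
# so the steady-state spin signal falls off as a Lorentzian:
#   S(B) = S0 / (1 + (omega_L * tau)^2)

MU_B = 9.274e-24   # Bohr magneton, J/T
HBAR = 1.0546e-34  # reduced Planck constant, J*s

def hanle_signal(b_tesla, s0=1.0, g=2.0, tau=1e-9):
    """Steady-state spin signal vs. transverse field (g, tau assumed)."""
    omega_l = g * MU_B * b_tesla / HBAR
    return s0 / (1.0 + (omega_l * tau) ** 2)

# The signal is maximal at zero field and halves when omega_L * tau = 1:
b_half = HBAR / (2.0 * MU_B * 1e-9)
print(hanle_signal(0.0))     # full signal at B = 0
print(hanle_signal(b_half))  # half signal at the characteristic field
```

Fitting the measured half-width of such a curve is what lets experimenters extract the spin lifetime, which is why the Hanle effect serves as the conclusive test described above.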

Applications

Applications of spin-polarized electrical injection, such as semiconductor lasers, have shown threshold current reduction and controllable circularly polarized coherent light output.[34] Future applications may include a spin-based transistor having advantages over MOSFET devices, such as a steeper sub-threshold slope.

Magnetic-tunnel transistor: The magnetic-tunnel transistor with a single base layer[35] has the following terminals:
  • Emitter (FM1): Injects spin-polarized hot electrons into the base.
  • Base (FM2): Spin-dependent scattering takes place in the base. It also serves as a spin filter.
  • Collector (GaAs): A Schottky barrier is formed at the interface. It collects only those electrons that have enough energy to overcome the Schottky barrier, and only when states are available in the semiconductor.
The magnetocurrent (MC) is given as:
MC = (I_C,p − I_C,ap) / I_C,ap
where I_C,p and I_C,ap are the collector currents for parallel and antiparallel magnetizations of the two ferromagnetic layers. The transfer ratio (TR) is:
TR = I_C / I_E
The MTT promises a highly spin-polarized electron source at room temperature.
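As a numeric sketch of these two definitions, the helper functions below compute MC and TR; the current values are hypothetical, chosen only to exercise the formulas:

```python
# Magnetocurrent (MC) and transfer ratio (TR) of a magnetic-tunnel
# transistor, computed from collector and emitter currents.
# All current values below are hypothetical.

def magnetocurrent(i_c_parallel, i_c_antiparallel):
    """MC = (I_C,p - I_C,ap) / I_C,ap -- relative change in collector
    current between parallel and antiparallel magnetizations."""
    return (i_c_parallel - i_c_antiparallel) / i_c_antiparallel

def transfer_ratio(i_collector, i_emitter):
    """TR = I_C / I_E -- fraction of the emitter current that reaches
    the collector over the Schottky barrier."""
    return i_collector / i_emitter

# Hypothetical currents (amperes):
i_c_p, i_c_ap = 3.4e-9, 1.0e-9  # collector current, parallel / antiparallel
i_e = 2.0e-3                    # emitter current

print(f"MC = {magnetocurrent(i_c_p, i_c_ap):.0%}")
print(f"TR = {transfer_ratio(i_c_p, i_e):.1e}")
```

Note the trade-off visible even in toy numbers: a large MC (the spin-valve signal) coexists with a very small TR, since only hot electrons above the barrier are collected.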

Storage media

Antiferromagnetic storage media have been studied as an alternative to ferromagnetic media,[36] since bits can be stored in antiferromagnetic material as well as in ferromagnetic material. Instead of the usual definition 0 -> 'magnetisation upwards', 1 -> 'magnetisation downwards', the states can be, e.g., 0 -> 'vertically alternating spin configuration' and 1 -> 'horizontally alternating spin configuration'.[37]

The main advantages of antiferromagnetic material are:
  • insensitivity to data-damaging perturbations by stray fields, due to zero net external magnetization;[38]
  • no effect on nearby particles, implying that antiferromagnetic device elements would not magnetically disturb their neighboring elements;[38]
  • far shorter switching times (antiferromagnetic resonance frequency is in the THz range compared to GHz ferromagnetic resonance frequency);[39]
  • broad range of commonly available antiferromagnetic materials including insulators, semiconductors, semimetals, metals, and superconductors.[39]
Research is being done into how to read and write information in antiferromagnetic spintronic devices, as their net zero magnetization makes this difficult compared to conventional ferromagnetic spintronics. In modern MRAM, detection and manipulation of ferromagnetic order by magnetic fields has largely been abandoned in favor of more efficient and scalable reading and writing by electrical current. Methods of reading and writing information by current rather than by fields are also being investigated in antiferromagnets, where fields are ineffective in any case. Writing methods currently being investigated in antiferromagnets are spin-transfer torque and spin-orbit torque from the spin Hall effect and the Rashba effect. Reading information in antiferromagnets via magnetoresistance effects such as tunnel magnetoresistance is also being explored.

Electron

From Wikipedia, the free encyclopedia

Electron
Hydrogen atom orbitals at different energy levels. The more opaque areas are where one is most likely to find an electron at any given time.
Composition: Elementary particle[1]
Statistics: Fermionic
Generation: First
Interactions: Gravity, electromagnetic, weak
Symbol: e−, β−
Antiparticle: Positron (also called antielectron)
Theorized: Richard Laming (1838–1851),[2] G. Johnstone Stoney (1874) and others[3][4]
Discovered: J. J. Thomson (1897)[5]
Mass: 9.10938356(11)×10^−31 kg[6]
  5.48579909070(16)×10^−4 u[6]
  [1822.8884845(14)]^−1 u[note 1]
  0.5109989461(31) MeV/c^2[6]
Mean lifetime: stable (> 6.6×10^28 yr[7])
Electric charge: −1 e[note 2]
  −1.6021766208(98)×10^−19 C[6]
  −4.80320451(10)×10^−10 esu
Magnetic moment: −1.00115965218091(26) μB[6]
Spin: 1/2
Weak isospin: LH: −1/2, RH: 0
Weak hypercharge: LH: −1, RH: −2

The electron is a subatomic particle, symbol e− or β−, whose electric charge is negative one elementary charge. Electrons belong to the first generation of the lepton particle family, and are generally thought to be elementary particles because they have no known components or substructure. The electron has a mass that is approximately 1/1836 that of the proton. Quantum mechanical properties of the electron include an intrinsic angular momentum (spin) of a half-integer value, expressed in units of the reduced Planck constant, ħ. As it is a fermion, no two electrons can occupy the same quantum state, in accordance with the Pauli exclusion principle. Like all elementary particles, electrons exhibit properties of both particles and waves: they can collide with other particles and can be diffracted like light. The wave properties of electrons are easier to observe with experiments than those of other particles like neutrons and protons because electrons have a lower mass and hence a longer de Broglie wavelength for a given energy.
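The last point can be checked numerically with the non-relativistic de Broglie relation λ = h/p, p = √(2mE): at the same kinetic energy, the lighter electron has a wavelength roughly √(m_p/m_e) ≈ 43 times longer than a proton's. A rough sketch using approximate constants:

```python
import math

# de Broglie wavelength lambda = h / p, with p = sqrt(2*m*E) for a
# non-relativistic particle of kinetic energy E (approximate constants).

H = 6.626e-34           # Planck constant, J*s
EV = 1.602e-19          # joules per electronvolt
M_ELECTRON = 9.109e-31  # electron mass, kg
M_PROTON = 1.673e-27    # proton mass, kg

def de_broglie_wavelength(mass_kg, energy_ev):
    momentum = math.sqrt(2.0 * mass_kg * energy_ev * EV)
    return H / momentum

# At a kinetic energy of 1 eV the electron's wavelength is on the
# nanometer scale, while the proton's is ~43x shorter.
print(de_broglie_wavelength(M_ELECTRON, 1.0))
print(de_broglie_wavelength(M_PROTON, 1.0))
```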

Electrons play an essential role in numerous physical phenomena, such as electricity, magnetism, chemistry and thermal conductivity, and they also participate in gravitational, electromagnetic and weak interactions.[11] Since an electron has charge, it has a surrounding electric field, and if that electron is moving relative to an observer, it will generate a magnetic field. Electromagnetic fields produced from other sources will affect the motion of an electron according to the Lorentz force law. Electrons radiate or absorb energy in the form of photons when they are accelerated. Laboratory instruments are capable of trapping individual electrons as well as electron plasma by the use of electromagnetic fields. Special telescopes can detect electron plasma in outer space. Electrons are involved in many applications such as electronics, welding, cathode ray tubes, electron microscopes, radiation therapy, lasers, gaseous ionization detectors and particle accelerators.

Interactions involving electrons with other subatomic particles are of interest in fields such as chemistry and nuclear physics. The Coulomb attraction between the positive protons within atomic nuclei and the negative electrons outside them allows the composition of the two, known as atoms. Ionization or differences in the proportions of negative electrons versus positive nuclei changes the binding energy of an atomic system. The exchange or sharing of the electrons between two or more atoms is the main cause of chemical bonding.[12] In 1838, British natural philosopher Richard Laming first hypothesized the concept of an indivisible quantity of electric charge to explain the chemical properties of atoms.[3] Irish physicist George Johnstone Stoney named this charge 'electron' in 1891, and J. J. Thomson and his team of British physicists identified it as a particle in 1897. Electrons can also participate in nuclear reactions, such as nucleosynthesis in stars, where they are known as beta particles. Electrons can be created through beta decay of radioactive isotopes and in high-energy collisions, for instance when cosmic rays enter the atmosphere. The antiparticle of the electron is called the positron; it is identical to the electron except that it carries electrical and other charges of the opposite sign. When an electron collides with a positron, both particles can be annihilated, producing gamma ray photons.

History

Discovery of effect of electric force

The ancient Greeks noticed that amber attracted small objects when rubbed with fur. Along with lightning, this phenomenon is one of humanity's earliest recorded experiences with electricity.[15] In his 1600 treatise De Magnete, the English scientist William Gilbert coined the New Latin term electricus, to refer to this property of attracting small objects after being rubbed.[16] Both electric and electricity are derived from the Latin ēlectrum (also the root of the alloy of the same name), which came from the Greek word for amber, ἤλεκτρον (ēlektron).

Discovery of two kinds of charges

In the early 1700s, Francis Hauksbee and French chemist Charles François du Fay independently discovered what they believed were two kinds of frictional electricity—one generated from rubbing glass, the other from rubbing resin. From this, du Fay theorized that electricity consists of two electrical fluids, vitreous and resinous, that are separated by friction, and that neutralize each other when combined.[17] American scientist Ebenezer Kinnersley later also independently reached the same conclusion.[18]:118 A decade later Benjamin Franklin proposed that electricity was not from different types of electrical fluid, but a single electrical fluid showing an excess (+) or deficit (-). He gave them the modern charge nomenclature of positive and negative respectively.[19] Franklin thought of the charge carrier as being positive, but he did not correctly identify which situation was a surplus of the charge carrier, and which situation was a deficit.[20]

Between 1838 and 1851, British natural philosopher Richard Laming developed the idea that an atom is composed of a core of matter surrounded by subatomic particles that had unit electric charges.[2] Beginning in 1846, German physicist Wilhelm Weber theorized that electricity was composed of positively and negatively charged fluids, and their interaction was governed by the inverse square law. After studying the phenomenon of electrolysis in 1874, Irish physicist George Johnstone Stoney suggested that there existed a "single definite quantity of electricity", the charge of a monovalent ion. He was able to estimate the value of this elementary charge e by means of Faraday's laws of electrolysis.[21] However, Stoney believed these charges were permanently attached to atoms and could not be removed. In 1881, German physicist Hermann von Helmholtz argued that both positive and negative charges were divided into elementary parts, each of which "behaves like atoms of electricity".[3]

Stoney initially coined the term electrolion in 1881. Ten years later, he switched to electron to describe these elementary charges, writing in 1894: "... an estimate was made of the actual amount of this most remarkable fundamental unit of electricity, for which I have since ventured to suggest the name electron". A 1906 proposal to change to electrion failed because Hendrik Lorentz preferred to keep electron.[22][23] The word electron is a combination of the words electric and ion.[24] The suffix -on which is now used to designate other subatomic particles, such as a proton or neutron, is in turn derived from electron.[25][26]

Discovery of free electrons outside matter

A beam of electrons deflected in a circle by a magnetic field[27]

The German physicist Johann Wilhelm Hittorf studied electrical conductivity in rarefied gases: in 1869, he discovered a glow emitted from the cathode that increased in size with decrease in gas pressure. In 1876, the German physicist Eugen Goldstein showed that the rays from this glow cast a shadow, and he dubbed the rays cathode rays.[28] During the 1870s, the English chemist and physicist Sir William Crookes developed the first cathode ray tube to have a high vacuum inside.[29] He then showed that the luminescence rays appearing within the tube carried energy and moved from the cathode to the anode. Furthermore, by applying a magnetic field, he was able to deflect the rays, thereby demonstrating that the beam behaved as though it were negatively charged.[30][31] In 1879, he proposed that these properties could be explained by what he termed 'radiant matter'. He suggested that this was a fourth state of matter, consisting of negatively charged molecules that were being projected with high velocity from the cathode.[32]

The German-born British physicist Arthur Schuster expanded upon Crookes' experiments by placing metal plates parallel to the cathode rays and applying an electric potential between the plates. The field deflected the rays toward the positively charged plate, providing further evidence that the rays carried negative charge. By measuring the amount of deflection for a given level of current, in 1890 Schuster was able to estimate the charge-to-mass ratio of the ray components. However, this produced a value that was more than a thousand times greater than what was expected, so little credence was given to his calculations at the time.[30][33]

In 1892 Hendrik Lorentz suggested that the mass of these particles (electrons) could be a consequence of their electric charge.[34]

In 1896, the British physicist J. J. Thomson, with his colleagues John S. Townsend and H. A. Wilson,[13] performed experiments indicating that cathode rays really were unique particles, rather than waves, atoms or molecules as was believed earlier.[5] Thomson made good estimates of both the charge e and the mass m, finding that cathode ray particles, which he called "corpuscles," had perhaps one thousandth of the mass of the least massive ion known: hydrogen.[5][14] He showed that their charge-to-mass ratio, e/m, was independent of cathode material. He further showed that the negatively charged particles produced by radioactive materials, by heated materials and by illuminated materials were universal.[5][35] The name electron was again proposed for these particles by the Irish physicist George Johnstone Stoney, and the name has since gained universal acceptance.


While studying naturally fluorescing minerals in 1896, the French physicist Henri Becquerel discovered that they emitted radiation without any exposure to an external energy source. These radioactive materials became the subject of much interest by scientists, including the New Zealand physicist Ernest Rutherford who discovered they emitted particles. He designated these particles alpha and beta, on the basis of their ability to penetrate matter.[36] In 1900, Becquerel showed that the beta rays emitted by radium could be deflected by an electric field, and that their mass-to-charge ratio was the same as for cathode rays.[37] This evidence strengthened the view that electrons existed as components of atoms.[38][39]

The electron's charge was more carefully measured by the American physicists Robert Millikan and Harvey Fletcher in their oil-drop experiment of 1909, the results of which were published in 1911. This experiment used an electric field to prevent a charged droplet of oil from falling as a result of gravity. This device could measure the electric charge from as few as 1–150 ions with an error margin of less than 0.3%. Comparable experiments had been done earlier by Thomson's team,[5] using clouds of charged water droplets generated by electrolysis,[13] and in 1911 by Abram Ioffe, who independently obtained the same result as Millikan using charged microparticles of metals, then published his results in 1913.[40] However, oil drops were more stable than water drops because of their slower evaporation rate, and thus more suited to precise experimentation over longer periods of time.[41]

Around the beginning of the twentieth century, it was found that under certain conditions a fast-moving charged particle caused a condensation of supersaturated water vapor along its path. In 1911, Charles Wilson used this principle to devise his cloud chamber so he could photograph the tracks of charged particles, such as fast-moving electrons.[42]

Atomic theory

The Bohr model of the atom, showing states of electron with energy quantized by the number n. An electron dropping to a lower orbit emits a photon equal to the energy difference between the orbits.

By 1914, experiments by physicists Ernest Rutherford, Henry Moseley, James Franck and Gustav Hertz had largely established the structure of an atom as a dense nucleus of positive charge surrounded by lower-mass electrons.[43] In 1913, Danish physicist Niels Bohr postulated that electrons resided in quantized energy states, with their energies determined by the angular momentum of the electron's orbit about the nucleus. The electrons could move between those states, or orbits, by the emission or absorption of photons of specific frequencies. By means of these quantized orbits, he accurately explained the spectral lines of the hydrogen atom.[44] However, Bohr's model failed to account for the relative intensities of the spectral lines and it was unsuccessful in explaining the spectra of more complex atoms.[43]

Chemical bonds between atoms were explained by Gilbert Newton Lewis, who in 1916 proposed that a covalent bond between two atoms is maintained by a pair of electrons shared between them.[45] Later, in 1927, Walter Heitler and Fritz London gave the full explanation of the electron-pair formation and chemical bonding in terms of quantum mechanics.[46] In 1919, the American chemist Irving Langmuir elaborated on the Lewis' static model of the atom and suggested that all electrons were distributed in successive "concentric (nearly) spherical shells, all of equal thickness".[47] In turn, he divided the shells into a number of cells each of which contained one pair of electrons. With this model Langmuir was able to qualitatively explain the chemical properties of all elements in the periodic table,[46] which were known to largely repeat themselves according to the periodic law.[48]

In 1924, Austrian physicist Wolfgang Pauli observed that the shell-like structure of the atom could be explained by a set of four parameters that defined every quantum energy state, as long as each state was occupied by no more than a single electron. This prohibition against more than one electron occupying the same quantum energy state became known as the Pauli exclusion principle.[49] The physical mechanism to explain the fourth parameter, which had two distinct possible values, was provided by the Dutch physicists Samuel Goudsmit and George Uhlenbeck. In 1925, they suggested that an electron, in addition to the angular momentum of its orbit, possesses an intrinsic angular momentum and magnetic dipole moment.[43][50] This is analogous to the rotation of the Earth on its axis as it orbits the Sun. The intrinsic angular momentum became known as spin, and explained the previously mysterious splitting of spectral lines observed with a high-resolution spectrograph; this phenomenon is known as fine structure splitting.[51]

Quantum mechanics

In his 1924 dissertation Recherches sur la théorie des quanta (Research on Quantum Theory), French physicist Louis de Broglie hypothesized that all matter can be represented as a de Broglie wave in the manner of light.[52] That is, under the appropriate conditions, electrons and other matter would show properties of either particles or waves. The corpuscular properties of a particle are demonstrated when it is shown to have a localized position in space along its trajectory at any given moment.[53] The wave-like nature of light is displayed, for example, when a beam of light is passed through parallel slits, thereby creating interference patterns. In 1927, the interference effect was demonstrated for electrons: by George Paget Thomson, who passed a beam of electrons through thin metal foils, and by American physicists Clinton Davisson and Lester Germer, who reflected electrons from a crystal of nickel.[54]
 
In quantum mechanics, the behavior of an electron in an atom is described by an orbital, which is a probability distribution rather than an orbit. In the figure, the shading indicates the relative probability to "find" the electron, having the energy corresponding to the given quantum numbers, at that point.

De Broglie's prediction of a wave nature for electrons led Erwin Schrödinger to postulate a wave equation for electrons moving under the influence of the nucleus in the atom. In 1926, this equation, the Schrödinger equation, successfully described how electron waves propagated.[55] Rather than yielding a solution that determined the location of an electron over time, this wave equation also could be used to predict the probability of finding an electron near a position, especially a position near where the electron was bound in space, for which the electron wave equations did not change in time. This approach led to a second formulation of quantum mechanics (the first by Heisenberg in 1925), and solutions of Schrödinger's equation, like Heisenberg's, provided derivations of the energy states of an electron in a hydrogen atom that were equivalent to those that had been derived first by Bohr in 1913, and that were known to reproduce the hydrogen spectrum.[56] Once spin and the interaction between multiple electrons were describable, quantum mechanics made it possible to predict the configuration of electrons in atoms with atomic numbers greater than hydrogen.[57]

In 1928, building on Wolfgang Pauli's work, Paul Dirac produced a model of the electron, the Dirac equation, consistent with relativity theory, by applying relativistic and symmetry considerations to the Hamiltonian formulation of the quantum mechanics of the electromagnetic field.[58] In order to resolve some problems within his relativistic equation, Dirac developed in 1930 a model of the vacuum as an infinite sea of particles with negative energy, later dubbed the Dirac sea. This led him to predict the existence of a positron, the antimatter counterpart of the electron.[59] This particle was discovered in 1932 by Carl Anderson, who proposed calling standard electrons negatons and using electron as a generic term to describe both the positively and negatively charged variants.

In 1947, Willis Lamb, working in collaboration with graduate student Robert Retherford, found that certain quantum states of the hydrogen atom, which should have the same energy, were shifted in relation to each other; the difference came to be called the Lamb shift. About the same time, Polykarp Kusch, working with Henry M. Foley, discovered that the magnetic moment of the electron is slightly larger than predicted by Dirac's theory. This small difference, later called the anomalous magnetic dipole moment of the electron, was explained by the theory of quantum electrodynamics, developed by Sin-Itiro Tomonaga, Julian Schwinger and Richard Feynman in the late 1940s.[60]

Particle accelerators

With the development of the particle accelerator during the first half of the twentieth century, physicists began to delve deeper into the properties of subatomic particles.[61] The first successful attempt to accelerate electrons using electromagnetic induction was made in 1942 by Donald Kerst. His initial betatron reached energies of 2.3 MeV, while subsequent betatrons achieved 300 MeV. In 1947, synchrotron radiation was discovered with a 70 MeV electron synchrotron at General Electric. This radiation was caused by the acceleration of electrons through a magnetic field as they moved near the speed of light.[62]

With a beam energy of 1.5 GeV, the first high-energy particle collider was ADONE, which began operations in 1968.[63] This device accelerated electrons and positrons in opposite directions, effectively doubling the energy of their collision when compared to striking a static target with an electron.[64] The Large Electron–Positron Collider (LEP) at CERN, which was operational from 1989 to 2000, achieved collision energies of 209 GeV and made important measurements for the Standard Model of particle physics.[65][66]

Confinement of individual electrons

Individual electrons can now be easily confined in ultra-small (L = 20 nm, W = 20 nm) CMOS transistors operated at cryogenic temperatures in the range −269 °C (4 K) to about −258 °C (15 K).[67] The electron wavefunction spreads in a semiconductor lattice and negligibly interacts with the valence band electrons, so it can be treated in the single-particle formalism, by replacing its mass with the effective mass tensor.

Characteristics

Classification

Standard Model of elementary particles. The electron (symbol e) is on the left.

In the Standard Model of particle physics, electrons belong to the group of subatomic particles called leptons, which are believed to be fundamental or elementary particles. Electrons have the lowest mass of any charged lepton (or electrically charged particle of any type) and belong to the first-generation of fundamental particles.[68] The second and third generation contain charged leptons, the muon and the tau, which are identical to the electron in charge, spin and interactions, but are more massive. Leptons differ from the other basic constituent of matter, the quarks, by their lack of strong interaction. All members of the lepton group are fermions, because they all have half-odd integer spin; the electron has spin 1/2.[69]

Fundamental properties

The invariant mass of an electron is approximately 9.109×10^−31 kilograms,[70] or 5.489×10^−4 atomic mass units. On the basis of Einstein's principle of mass–energy equivalence, this mass corresponds to a rest energy of 0.511 MeV. The ratio between the mass of a proton and that of an electron is about 1836.[10][71] Astronomical measurements show that the proton-to-electron mass ratio has held the same value, as is predicted by the Standard Model, for at least half the age of the universe.[72]
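Both quoted numbers can be reproduced directly from E = mc² and the measured masses; a quick sketch using CODATA-style constants:

```python
# Rest energy E = m*c^2 of the electron, and the proton/electron mass
# ratio, computed from standard constants.

C = 2.99792458e8             # speed of light, m/s
EV = 1.602176634e-19         # joules per electronvolt
M_ELECTRON = 9.10938356e-31  # electron mass, kg
M_PROTON = 1.67262190e-27    # proton mass, kg

rest_energy_mev = M_ELECTRON * C**2 / EV / 1e6  # J -> eV -> MeV
mass_ratio = M_PROTON / M_ELECTRON

print(f"rest energy  = {rest_energy_mev:.4f} MeV")  # ~0.5110 MeV
print(f"mass ratio   = {mass_ratio:.2f}")           # ~1836.15
```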

Electrons have an electric charge of −1.602×10^−19 coulombs,[70] which is used as a standard unit of charge for subatomic particles, and is also called the elementary charge. This elementary charge has a relative standard uncertainty of 2.2×10^−8.[70] Within the limits of experimental accuracy, the electron charge is identical to the charge of a proton, but with the opposite sign.[73] As the symbol e is used for the elementary charge, the electron is commonly symbolized by e−, where the minus sign indicates the negative charge. The positron is symbolized by e+ because it has the same properties as the electron but with a positive rather than negative charge.[69][70]

The electron has an intrinsic angular momentum or spin of 1/2.[70] This property is usually stated by referring to the electron as a spin-1/2 particle.[69] For such particles the spin magnitude is (√3/2)ħ, while the result of the measurement of a projection of the spin on any axis can only be ±ħ/2. In addition to spin, the electron has an intrinsic magnetic moment along its spin axis.[70] It is approximately equal to one Bohr magneton,[74][note 4] which is a physical constant equal to 9.27400915(23)×10^−24 joules per tesla.[70] The orientation of the spin with respect to the momentum of the electron defines the property of elementary particles known as helicity.[75]
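The spin and magnetic-moment values follow from the standard formulas |S| = √(s(s+1))ħ and μ_B = eħ/(2mₑ); a small check using CODATA-style constants:

```python
import math

# Spin-1/2 bookkeeping: the magnitude of the spin angular momentum is
# sqrt(s*(s+1))*hbar = (sqrt(3)/2)*hbar for s = 1/2, while a measured
# projection on any axis is +/- hbar/2.  Bohr magneton: mu_B = e*hbar/(2*m_e).

HBAR = 1.054571817e-34         # reduced Planck constant, J*s
E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

s = 0.5
spin_magnitude = math.sqrt(s * (s + 1)) * HBAR  # (sqrt(3)/2) * hbar
projection = HBAR / 2                            # +/- hbar/2 on any axis

mu_bohr = E_CHARGE * HBAR / (2 * M_ELECTRON)
print(f"mu_B = {mu_bohr:.4e} J/T")  # ~9.274e-24 J/T
```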

The electron has no known substructure[1][76] and is assumed to be a point particle with a point charge and no spatial extent.[9] In classical physics, the angular momentum and magnetic moment of an object depend upon its physical dimensions. Hence, the concept of a dimensionless electron possessing these properties contrasts with experimental observations in Penning traps, which point to a finite non-zero radius of the electron. A possible explanation of this paradoxical situation is given below in the "Virtual particles" subsection by taking into consideration the Foldy–Wouthuysen transformation.

The issue of the radius of the electron is a challenging problem of modern theoretical physics. Admitting the hypothesis of a finite radius of the electron is incompatible with the premises of the theory of relativity. On the other hand, a point-like electron (zero radius) generates serious mathematical difficulties due to the self-energy of the electron tending to infinity.[77]

Observation of a single electron in a Penning trap suggests the upper limit of the particle's radius to be 10^−22 meters.[78] An upper bound on the electron radius of 10^−18 meters[79] can be derived using the uncertainty relation in energy.

There is also a physical constant called the "classical electron radius", with the much larger value of 2.8179×10^−15 m, greater than the radius of the proton. However, the terminology comes from a simplistic calculation that ignores the effects of quantum mechanics; in reality, the so-called classical electron radius has little to do with the true fundamental structure of the electron.[80][note 5]

There are elementary particles that spontaneously decay into less massive particles. An example is the muon, with a mean lifetime of 2.2×10^−6 seconds, which decays into an electron, a muon neutrino and an electron antineutrino. The electron, on the other hand, is thought to be stable on theoretical grounds: the electron is the least massive particle with non-zero electric charge, so its decay would violate charge conservation.[81] The experimental lower bound for the electron's mean lifetime is 6.6×10^28 years, at a 90% confidence level.[7][82][83]

Quantum properties

As with all particles, electrons can act as waves. This is called the wave–particle duality and can be demonstrated using the double-slit experiment.

The wave-like nature of the electron allows it to pass through two parallel slits simultaneously, rather than just one slit as would be the case for a classical particle. In quantum mechanics, the wave-like property of one particle can be described mathematically as a complex-valued function, the wave function, commonly denoted by the Greek letter psi (ψ). When the absolute value of this function is squared, it gives the probability that a particle will be observed near a location—a probability density.[84]:162–218
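A minimal numerical illustration of |ψ|² as a probability density, using the familiar particle-in-a-box states (the box length, quantum number, and integration scheme are arbitrary choices for the sketch):

```python
import math

# |psi|^2 as a probability density: for the particle-in-a-box state
# psi_n(x) = sqrt(2/L) * sin(n*pi*x/L), the probability of finding the
# particle in [a, b] is the integral of |psi|^2 over that interval.
# A midpoint Riemann sum illustrates this.

def probability(n, a, b, length=1.0, steps=100_000):
    """Probability of finding the particle in [a, b] for state n."""
    dx = (b - a) / steps
    total = 0.0
    for i in range(steps):
        x = a + (i + 0.5) * dx
        psi = math.sqrt(2.0 / length) * math.sin(n * math.pi * x / length)
        total += psi * psi * dx  # |psi|^2 * dx
    return total

print(probability(1, 0.0, 1.0))  # whole box: ~1.0 (normalization)
print(probability(1, 0.0, 0.5))  # left half: ~0.5 by symmetry
```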
 
Example of an antisymmetric wave function for a quantum state of two identical fermions in a 1-dimensional box. If the particles swap position, the wave function inverts its sign.

Electrons are identical particles because they cannot be distinguished from each other by their intrinsic physical properties. In quantum mechanics, this means that a pair of interacting electrons must be able to swap positions without an observable change to the state of the system. The wave function of fermions, including electrons, is antisymmetric, meaning that it changes sign when two electrons are swapped; that is, ψ(r1, r2) = −ψ(r2, r1), where the variables r1 and r2 correspond to the first and second electrons, respectively. Since the absolute value is not changed by a sign swap, this corresponds to equal probabilities. Bosons, such as the photon, have symmetric wave functions instead.[84]:162–218

In the case of antisymmetry, solutions of the wave equation for interacting electrons result in a zero probability that each pair will occupy the same location or state. This is responsible for the Pauli exclusion principle, which precludes any two electrons from occupying the same quantum state. This principle explains many of the properties of electrons. For example, it causes groups of bound electrons to occupy different orbitals in an atom, rather than all overlapping each other in the same orbit.[84]:162–218
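The sign flip and the exclusion behavior can both be demonstrated with a toy antisymmetrized two-fermion wave function built from one-dimensional box states (the particular states and positions below are illustrative):

```python
import math

# Antisymmetrized two-fermion wave function (Slater-determinant form):
#   psi(r1, r2) = (phi_a(r1)*phi_b(r2) - phi_b(r1)*phi_a(r2)) / sqrt(2)
# Swapping the particles flips the sign, and putting both fermions in
# the same one-particle state makes psi vanish -- the Pauli exclusion
# principle in miniature.

def phi(n, x, length=1.0):
    """1-D particle-in-a-box eigenstate (illustrative basis)."""
    return math.sqrt(2.0 / length) * math.sin(n * math.pi * x / length)

def psi(x1, x2, n_a=1, n_b=2):
    """Antisymmetric two-particle wave function from states n_a, n_b."""
    return (phi(n_a, x1) * phi(n_b, x2)
            - phi(n_b, x1) * phi(n_a, x2)) / math.sqrt(2.0)

x1, x2 = 0.2, 0.7
print(psi(x1, x2), psi(x2, x1))       # equal magnitude, opposite sign
print(psi(x1, x1))                    # zero: same position
print(psi(x1, x2, n_a=1, n_b=1))      # zero: same state (exclusion)
```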

Virtual particles

In a simplified picture, every photon spends some time as a combination of a virtual electron plus its antiparticle, the virtual positron, which rapidly annihilate shortly thereafter.[85] The combination of the energy variation needed to create these particles, and the time during which they exist, fall under the threshold of detectability expressed by the Heisenberg uncertainty relation, ΔE · Δt ≥ ħ. In effect, the energy needed to create these virtual particles, ΔE, can be "borrowed" from the vacuum for a period of time, Δt, so that their product is no more than the reduced Planck constant, ħ ≈ 6.6×10−16 eV·s. Thus, for a virtual electron, Δt is at most 1.3×10−21 s.[86]
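The quoted lifetime follows directly from the uncertainty relation with ΔE set to the electron's rest energy; a quick check (constant values rounded):

```python
hbar_eVs = 6.582e-16    # reduced Planck constant, eV*s (rounded)
m_e_c2 = 0.511e6        # electron rest energy, eV

# maximum lifetime the uncertainty relation allows for a virtual electron:
# dt <= hbar / dE, with dE = m_e * c^2
dt = hbar_eVs / m_e_c2
print(f"{dt:.2e} s")    # ~1.3e-21 s, matching the value quoted above
```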

A schematic depiction of virtual electron–positron pairs appearing at random near an electron (at lower left)

While an electron–positron virtual pair is in existence, the Coulomb force from the ambient electric field surrounding an electron causes a created positron to be attracted to the original electron, while a created electron experiences a repulsion. This causes what is called vacuum polarization. In effect, the vacuum behaves like a medium with a dielectric permittivity greater than unity. Thus the effective charge of an electron is actually smaller than its true value, and the charge decreases with increasing distance from the electron.[87][88] This polarization was confirmed experimentally in 1997 using the Japanese TRISTAN particle accelerator.[89] Virtual particles cause a comparable shielding effect for the mass of the electron.[90]

The interaction with virtual particles also explains the small (about 0.1%) deviation of the intrinsic magnetic moment of the electron from the Bohr magneton (the anomalous magnetic moment).[74][91] The extraordinarily precise agreement of this predicted difference with the experimentally determined value is viewed as one of the great achievements of quantum electrodynamics.[92]

The apparent paradox (mentioned above in the properties subsection) of a point particle electron having intrinsic angular momentum and magnetic moment can be explained by the formation of virtual photons in the electric field generated by the electron. These photons cause the electron to shift about in a jittery fashion (known as zitterbewegung),[93] which results in a net circular motion with precession. This motion produces both the spin and the magnetic moment of the electron.[9][94] In atoms, this creation of virtual photons explains the Lamb shift observed in spectral lines.[87]

Interaction

An electron generates an electric field that exerts an attractive force on a particle with a positive charge, such as the proton, and a repulsive force on a particle with a negative charge. The strength of this force in nonrelativistic approximation is determined by Coulomb's inverse square law.[95]:58–61 When an electron is in motion, it generates a magnetic field.[84]:140 The Ampère-Maxwell law relates the magnetic field to the mass motion of electrons (the current) with respect to an observer. This property of induction supplies the magnetic field that drives an electric motor.[96] The electromagnetic field of an arbitrary moving charged particle is expressed by the Liénard–Wiechert potentials, which are valid even when the particle's speed is close to that of light (relativistic).[95]:429–434
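As a numeric illustration of the inverse square law (the separation chosen here, the Bohr radius, is illustrative and not from the article):

```python
k = 8.988e9       # Coulomb constant, N*m^2/C^2 (rounded)
e = 1.602e-19     # elementary charge, C
a0 = 5.292e-11    # Bohr radius, m; illustrative electron-proton separation

# Coulomb's inverse square law: F = k * q1 * q2 / r^2
F = k * e**2 / a0**2
print(f"{F:.2e} N")   # ~8.2e-8 N of attraction between an electron and a proton
```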
 
A particle with charge q (at left) is moving with velocity v through a magnetic field B that is oriented toward the viewer. For an electron, q is negative so it follows a curved trajectory toward the top.

When an electron is moving through a magnetic field, it is subject to the Lorentz force that acts perpendicularly to the plane defined by the magnetic field and the electron velocity. This centripetal force causes the electron to follow a helical trajectory through the field at a radius called the gyroradius. The acceleration from this curving motion induces the electron to radiate energy in the form of synchrotron radiation.[84]:160[97][note 6] The energy emission in turn causes a recoil of the electron, known as the Abraham–Lorentz–Dirac Force, which creates a friction that slows the electron. This force is caused by a back-reaction of the electron's own field upon itself.[98]
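In the non-relativistic limit the gyroradius is r = mv⊥/(|q|B); a short sketch (the field and speed values are illustrative assumptions, not from the article):

```python
m_e = 9.109e-31   # electron mass, kg
q = 1.602e-19     # elementary charge magnitude, C

def gyroradius(v_perp, B):
    """Non-relativistic gyroradius r = m * v_perp / (|q| * B), in meters."""
    return m_e * v_perp / (q * B)

# e.g. an electron moving at 1e6 m/s across a field of roughly Earth's
# surface strength (~5e-5 T)
r = gyroradius(1e6, 5e-5)
print(f"{r:.3f} m")   # ~0.11 m
```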

Photons mediate electromagnetic interactions between particles in quantum electrodynamics. An isolated electron at a constant velocity cannot emit or absorb a real photon; doing so would violate conservation of energy and momentum. Instead, virtual photons can transfer momentum between two charged particles. This exchange of virtual photons, for example, generates the Coulomb force.[99] Energy emission can occur when a moving electron is deflected by a charged particle, such as a proton. The acceleration of the electron results in the emission of Bremsstrahlung radiation.[100]
 
Here, Bremsstrahlung is produced by an electron e deflected by the electric field of an atomic nucleus. The energy change E2 − E1 determines the frequency f of the emitted photon.

An inelastic collision between a photon (light) and a solitary (free) electron is called Compton scattering. This collision results in a transfer of momentum and energy between the particles, which modifies the wavelength of the photon by an amount called the Compton shift.[note 7] The maximum magnitude of this wavelength shift is h/mec, which is known as the Compton wavelength.[101] For an electron, it has a value of 2.43×10−12 m.[70] When the wavelength of the light is long (for instance, the wavelength of the visible light is 0.4–0.7 μm) the wavelength shift becomes negligible. Such interaction between the light and free electrons is called Thomson scattering or linear Thomson scattering.[102]
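The Compton wavelength and shift quoted above follow from λC = h/mec and Δλ = λC(1 − cos θ); a quick verification with rounded constants:

```python
import math

h = 6.626e-34     # Planck constant, J*s (rounded)
m_e = 9.109e-31   # electron mass, kg
c = 2.998e8       # speed of light, m/s

lambda_C = h / (m_e * c)   # Compton wavelength of the electron

def compton_shift(theta):
    # wavelength increase of a photon scattered through angle theta
    return lambda_C * (1 - math.cos(theta))

print(f"{lambda_C:.3e} m")                 # ~2.43e-12 m, as quoted above
print(f"{compton_shift(math.pi):.3e} m")   # maximum shift of 2*lambda_C at 180 degrees
```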

The relative strength of the electromagnetic interaction between two charged particles, such as an electron and a proton, is given by the fine-structure constant. This value is a dimensionless quantity formed by the ratio of two energies: the electrostatic energy of attraction (or repulsion) at a separation of one Compton wavelength, and the rest energy of the charge. It is given by α ≈ 7.297353×10−3, which is approximately equal to 1/137.[70]
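The value of α can be recovered from its standard definition, α = e²/(4πε₀ħc), using rounded CODATA-style constants:

```python
import math

e = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

# fine-structure constant: alpha = e^2 / (4 * pi * eps0 * hbar * c)
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"{alpha:.6e} ~ 1/{1/alpha:.1f}")   # ~7.297353e-3, i.e. about 1/137
```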

When electrons and positrons collide, they annihilate each other, giving rise to two or more gamma ray photons. If the electron and positron have negligible momentum, a positronium atom can form before annihilation results in two or three gamma ray photons totalling 1.022 MeV.[103][104] On the other hand, a high-energy photon can transform into an electron and a positron by a process called pair production, but only in the presence of a nearby charged particle, such as a nucleus.[105][106]

In the theory of electroweak interaction, the left-handed component of the electron's wavefunction forms a weak isospin doublet with the electron neutrino. This means that during weak interactions, electron neutrinos behave like electrons. Either member of this doublet can undergo a charged current interaction by emitting or absorbing a W boson and be converted into the other member. Charge is conserved during this reaction because the W boson also carries a charge, canceling out any net change during the transmutation. Charged current interactions are responsible for the phenomenon of beta decay in a radioactive atom. Both the electron and electron neutrino can undergo a neutral current interaction via a Z0 exchange, and this is responsible for neutrino-electron elastic scattering.[107]

Atoms and molecules

Probability densities for the first few hydrogen atom orbitals, seen in cross-section. The energy level of a bound electron determines the orbital it occupies, and the color reflects the probability of finding the electron at a given position.

An electron can be bound to the nucleus of an atom by the attractive Coulomb force. A system of one or more electrons bound to a nucleus is called an atom. If the number of electrons is different from the nucleus' electrical charge, such an atom is called an ion. The wave-like behavior of a bound electron is described by a function called an atomic orbital. Each orbital has its own set of quantum numbers such as energy, angular momentum and projection of angular momentum, and only a discrete set of these orbitals exist around the nucleus. According to the Pauli exclusion principle each orbital can be occupied by up to two electrons, which must differ in their spin quantum number.

Electrons can transfer between different orbitals by the emission or absorption of photons with an energy that matches the difference in potential.[108] Other methods of orbital transfer include collisions with particles, such as electrons, and the Auger effect.[109] To escape the atom, the energy of the electron must be increased above its binding energy to the atom. This occurs, for example, with the photoelectric effect, where an incident photon exceeding the atom's ionization energy is absorbed by the electron.[110]
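For hydrogen, the photon energies involved can be estimated from the Bohr-model levels Eₙ = −13.6 eV/n²; a rough illustration (the Lyman-alpha transition chosen here is an example, not from the article):

```python
# Bohr-model energy levels of hydrogen: E_n = -13.6 eV / n^2
def level(n):
    return -13.6 / n**2

# Lyman-alpha transition, n=2 -> n=1: the emitted photon carries the
# energy difference between the two orbitals
dE = level(2) - level(1)       # photon energy, eV
wavelength_nm = 1239.8 / dE    # using h*c ~ 1239.8 eV*nm
print(f"{dE:.1f} eV, {wavelength_nm:.0f} nm")   # 10.2 eV, ~122 nm (ultraviolet)
```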

The orbital angular momentum of electrons is quantized. Because the electron is charged, it produces an orbital magnetic moment that is proportional to the angular momentum. The net magnetic moment of an atom is equal to the vector sum of orbital and spin magnetic moments of all electrons and the nucleus. The magnetic moment of the nucleus is negligible compared with that of the electrons. The magnetic moments of the electrons that occupy the same orbital (so called, paired electrons) cancel each other out.[111]

The chemical bond between atoms occurs as a result of electromagnetic interactions, as described by the laws of quantum mechanics.[112] The strongest bonds are formed by the sharing or transfer of electrons between atoms, allowing the formation of molecules.[12] Within a molecule, electrons move under the influence of several nuclei, and occupy molecular orbitals; much as they can occupy atomic orbitals in isolated atoms.[113] A fundamental factor in these molecular structures is the existence of electron pairs. These are electrons with opposed spins, allowing them to occupy the same molecular orbital without violating the Pauli exclusion principle (much like in atoms). Different molecular orbitals have different spatial distribution of the electron density. For instance, in bonded pairs (i.e. in the pairs that actually bind atoms together) electrons can be found with the maximal probability in a relatively small volume between the nuclei. By contrast, in non-bonded pairs electrons are distributed in a large volume around nuclei.[114]

Conductivity

A lightning discharge consists primarily of a flow of electrons.[115] The electric potential needed for lightning can be generated by a triboelectric effect.[116][117]

If a body has more or fewer electrons than are required to balance the positive charge of the nuclei, then that object has a net electric charge. When there is an excess of electrons, the object is said to be negatively charged. When there are fewer electrons than the number of protons in nuclei, the object is said to be positively charged. When the number of electrons and the number of protons are equal, their charges cancel each other and the object is said to be electrically neutral. A macroscopic body can develop an electric charge through rubbing, by the triboelectric effect.[118]

Independent electrons moving in vacuum are termed free electrons. Electrons in metals also behave as if they were free. In reality the particles that are commonly termed electrons in metals and other solids are quasi-electrons—quasiparticles, which have the same electrical charge, spin, and magnetic moment as real electrons but might have a different mass.[119] When free electrons—both in vacuum and metals—move, they produce a net flow of charge called an electric current, which generates a magnetic field. Likewise a current can be created by a changing magnetic field. These interactions are described mathematically by Maxwell's equations.[120]

At a given temperature, each material has an electrical conductivity that determines the value of electric current when an electric potential is applied. Examples of good conductors include metals such as copper and gold, whereas glass and Teflon are poor conductors. In any dielectric material, the electrons remain bound to their respective atoms and the material behaves as an insulator. Most semiconductors have a variable level of conductivity that lies between the extremes of conduction and insulation.[121] On the other hand, metals have an electronic band structure containing partially filled electronic bands. The presence of such bands allows electrons in metals to behave as if they were free or delocalized electrons. These electrons are not associated with specific atoms, so when an electric field is applied, they are free to move like a gas (called Fermi gas)[122] through the material much like free electrons.

Because of collisions between electrons and atoms, the drift velocity of electrons in a conductor is on the order of millimeters per second. However, the speed at which a change of current at one point in the material causes changes in currents in other parts of the material, the velocity of propagation, is typically about 75% of light speed.[123] This occurs because electrical signals propagate as a wave, with the velocity dependent on the dielectric constant of the material.[124]
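The slow drift velocity follows from v_d = I/(nAq); a sketch for a hypothetical copper wire (the current and cross-section are illustrative assumptions):

```python
# drift velocity v_d = I / (n * A * q) for an illustrative 1 A current
# in a copper wire of 1 mm^2 cross-section
n = 8.5e28        # free-electron density of copper, m^-3 (approximate)
A = 1e-6          # cross-sectional area, m^2
q = 1.602e-19     # elementary charge, C
I = 1.0           # current, A

v_d = I / (n * A * q)
print(f"{v_d * 1000:.3f} mm/s")   # a small fraction of a millimeter per second
```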

Metals make relatively good conductors of heat, primarily because the delocalized electrons are free to transport thermal energy between atoms. However, unlike electrical conductivity, the thermal conductivity of a metal is nearly independent of temperature. This is expressed mathematically by the Wiedemann–Franz law,[122] which states that the ratio of thermal conductivity to the electrical conductivity is proportional to the temperature. The thermal disorder in the metallic lattice increases the electrical resistivity of the material, producing a temperature dependence for electric current.[125]
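The Wiedemann–Franz ratio κ/σ = LT (with L the Lorenz number) lets one estimate a metal's thermal conductivity from its electrical conductivity; a rough check for copper at room temperature (conductivity value assumed, not from the article):

```python
# Wiedemann-Franz law: kappa / sigma = L * T
L = 2.44e-8        # Lorenz number, W*Ohm/K^2
T = 300.0          # room temperature, K
sigma_cu = 5.96e7  # electrical conductivity of copper, S/m (approximate)

kappa = L * T * sigma_cu
print(f"{kappa:.0f} W/(m*K)")   # ~436 W/(m*K), close to copper's measured ~400
```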

When cooled below a point called the critical temperature, materials can undergo a phase transition in which they lose all resistivity to electric current, in a process known as superconductivity. In BCS theory, this behavior is modeled by pairs of electrons entering a quantum state known as a Bose–Einstein condensate. These Cooper pairs have their motion coupled to nearby matter via lattice vibrations called phonons, thereby avoiding the collisions with atoms that normally create electrical resistance.[126] (Cooper pairs have a radius of roughly 100 nm, so they can overlap each other.)[127] However, the mechanism by which higher temperature superconductors operate remains uncertain.

Electrons inside conducting solids, which are quasi-particles themselves, when tightly confined at temperatures close to absolute zero, behave as though they had split into three other quasiparticles: spinons, orbitons and holons.[128][129] The spinon carries the spin and magnetic moment, the orbiton the orbital location, and the holon the electrical charge.

Motion and energy

According to Einstein's theory of special relativity, as an electron's speed approaches the speed of light, from an observer's point of view its relativistic mass increases, thereby making it more and more difficult to accelerate it from within the observer's frame of reference. The speed of an electron can approach, but never reach, the speed of light in a vacuum, c. However, when relativistic electrons—that is, electrons moving at a speed close to c—are injected into a dielectric medium such as water, where the local speed of light is significantly less than c, the electrons temporarily travel faster than light in the medium. As they interact with the medium, they generate a faint light called Cherenkov radiation.[130]
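The Cherenkov condition v > c/n sets an energy threshold that can be computed from the Lorentz factor; a sketch for water (the refractive index value is an assumption, not from the article):

```python
import math

n_water = 1.33     # refractive index of water (approximate)
m_e_c2 = 511.0     # electron rest energy, keV

# Cherenkov light requires v/c > 1/n
beta_min = 1.0 / n_water
gamma_min = 1.0 / math.sqrt(1 - beta_min**2)
KE_min = (gamma_min - 1) * m_e_c2   # kinetic-energy threshold, keV
print(f"beta > {beta_min:.3f}, KE > {KE_min:.0f} keV")   # ~0.752, ~264 keV
```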
 
Lorentz factor as a function of velocity. It starts at value 1 and goes to infinity as v approaches c.

The effects of special relativity are based on a quantity known as the Lorentz factor, defined as γ = 1/√(1 − v²/c²), where v is the speed of the particle. The kinetic energy Ke of an electron moving with velocity v is:

Ke = (γ − 1)mec²,

where me is the mass of electron. For example, the Stanford linear accelerator can accelerate an electron to roughly 51 GeV.[131] Since an electron behaves as a wave, at a given velocity it has a characteristic de Broglie wavelength. This is given by λe = h/p where h is the Planck constant and p is the momentum.[52] For the 51 GeV electron above, the wavelength is about 2.4×10−17 m, small enough to explore structures well below the size of an atomic nucleus.[132]
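For such an ultra-relativistic electron, pc ≈ E, so the quoted wavelength can be checked directly (constants rounded):

```python
h_c = 1239.84e-9    # h*c in eV*m (rounded)
E = 51e9            # electron energy, eV
m_e_c2 = 0.511e6    # electron rest energy, eV

# Lorentz factor and de Broglie wavelength in the ultra-relativistic
# limit, where p*c is approximately E
gamma = E / m_e_c2
wavelength = h_c / E
print(f"gamma ~ {gamma:.0f}, lambda ~ {wavelength:.1e} m")   # ~1e5, ~2.4e-17 m
```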

Formation

Pair production of an electron and positron, caused by the close approach of a photon with an atomic nucleus. The lightning symbol represents an exchange of a virtual photon, thus an electric force acts. The angle between the particles is very small.[133]

The Big Bang theory is the most widely accepted scientific theory to explain the early stages in the evolution of the Universe.[134] For the first millisecond of the Big Bang, the temperatures were over 10 billion kelvins and photons had mean energies over a million electronvolts. These photons were sufficiently energetic that they could react with each other to form pairs of electrons and positrons. Likewise, positron-electron pairs annihilated each other and emitted energetic photons:

γ + γ ↔ e+ + e−
An equilibrium between electrons, positrons and photons was maintained during this phase of the evolution of the Universe. After 15 seconds had passed, however, the temperature of the universe dropped below the threshold where electron-positron formation could occur. Most of the surviving electrons and positrons annihilated each other, releasing gamma radiation that briefly reheated the universe.[135]
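The temperature below which this equilibrium breaks down is set by the electron rest energy; a back-of-the-envelope estimate:

```python
m_e_c2 = 0.511e6   # electron rest energy, eV
k_B = 8.617e-5     # Boltzmann constant, eV/K

# photons can keep creating e+/e- pairs only while the thermal energy
# k_B * T is comparable to the rest energy of the pair members
T_threshold = m_e_c2 / k_B
print(f"{T_threshold:.1e} K")   # ~5.9e9 K, of order the ten billion kelvins quoted above
```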

For reasons that remain uncertain, during the annihilation process there was an excess in the number of particles over antiparticles. Hence, about one electron for every billion electron-positron pairs survived. This excess matched the excess of protons over antiprotons, in a condition known as baryon asymmetry, resulting in a net charge of zero for the universe.[136][137] The surviving protons and neutrons began to participate in reactions with each other—in the process known as nucleosynthesis, forming isotopes of hydrogen and helium, with trace amounts of lithium. This process peaked after about five minutes.[138] Any leftover neutrons underwent negative beta decay with a half-life of about a thousand seconds, releasing a proton and electron in the process,

n → p + e− + ν̄e
For about the next 300,000–400,000 years, the excess electrons remained too energetic to bind with atomic nuclei.[139] What followed is a period known as recombination, when neutral atoms were formed and the expanding universe became transparent to radiation.[140]

Roughly one million years after the big bang, the first generation of stars began to form.[140] Within a star, stellar nucleosynthesis results in the production of positrons from the fusion of atomic nuclei. These antimatter particles immediately annihilate with electrons, releasing gamma rays. The net result is a steady reduction in the number of electrons, and a matching increase in the number of neutrons. However, the process of stellar evolution can result in the synthesis of radioactive isotopes. Selected isotopes can subsequently undergo negative beta decay, emitting an electron and antineutrino from the nucleus.[141] An example is the cobalt-60 (60Co) isotope, which decays to form nickel-60 (60Ni).[142]
 
An extended air shower generated by an energetic cosmic ray striking the Earth's atmosphere

At the end of its lifetime, a star with more than about 20 solar masses can undergo gravitational collapse to form a black hole.[143] According to classical physics, these massive stellar objects exert a gravitational attraction that is strong enough to prevent anything, even electromagnetic radiation, from escaping past the Schwarzschild radius. However, quantum mechanical effects are believed to potentially allow the emission of Hawking radiation at this distance. Electrons (and positrons) are thought to be created at the event horizon of these stellar remnants.

When a pair of virtual particles (such as an electron and positron) is created in the vicinity of the event horizon, random spatial positioning might result in one of them to appear on the exterior; this process is called quantum tunnelling. The gravitational potential of the black hole can then supply the energy that transforms this virtual particle into a real particle, allowing it to radiate away into space.[144] In exchange, the other member of the pair is given negative energy, which results in a net loss of mass-energy by the black hole. The rate of Hawking radiation increases with decreasing mass, eventually causing the black hole to evaporate away until, finally, it explodes.[145]

Cosmic rays are particles traveling through space with high energies. Energies as high as 3.0×1020 eV have been recorded.[146] When these particles collide with nucleons in the Earth's atmosphere, a shower of particles is generated, including pions.[147] More than half of the cosmic radiation observed from the Earth's surface consists of muons. The particle called a muon is a lepton produced in the upper atmosphere by the decay of a pion.

π− → μ− + ν̄μ
A muon, in turn, can decay to form an electron or positron.[148]

μ− → e− + ν̄e + νμ

Observation

Aurorae are mostly caused by energetic electrons precipitating into the atmosphere.[149]

Remote observation of electrons requires detection of their radiated energy. For example, in high-energy environments such as the corona of a star, free electrons form a plasma that radiates energy due to Bremsstrahlung radiation. Electron gas can undergo plasma oscillation, which is waves caused by synchronized variations in electron density, and these produce energy emissions that can be detected by using radio telescopes.[150]

The frequency of a photon is proportional to its energy. As a bound electron transitions between different energy levels of an atom, it absorbs or emits photons at characteristic frequencies. For instance, when atoms are irradiated by a source with a broad spectrum, distinct absorption lines appear in the spectrum of transmitted radiation. Each element or molecule displays a characteristic set of spectral lines, such as the hydrogen spectral series. Spectroscopic measurements of the strength and width of these lines allow the composition and physical properties of a substance to be determined.[151][152]

In laboratory conditions, the interactions of individual electrons can be observed by means of particle detectors, which allow measurement of specific properties such as energy, spin and charge.[110] The development of the Paul trap and Penning trap allows charged particles to be contained within a small region for long durations. This enables precise measurements of the particle properties. For example, in one instance a Penning trap was used to contain a single electron for a period of 10 months.[153] The magnetic moment of the electron was measured to a precision of eleven digits, which, in 1980, was a greater accuracy than for any other physical constant.[154]

The first video images of an electron's energy distribution were captured by a team at Lund University in Sweden, February 2008. The scientists used extremely short flashes of light, called attosecond pulses, which allowed an electron's motion to be observed for the first time.[155][156]

The distribution of the electrons in solid materials can be visualized by angle-resolved photoemission spectroscopy (ARPES). This technique employs the photoelectric effect to measure the reciprocal space—a mathematical representation of periodic structures that is used to infer the original structure. ARPES can be used to determine the direction, speed and scattering of electrons within the material.[157]

Plasma applications

Particle beams

During a NASA wind tunnel test, a model of the Space Shuttle is targeted by a beam of electrons, simulating the effect of ionizing gases during re-entry.[158]

Electron beams are used in welding.[159] They allow energy densities up to 107 W·cm−2 across a narrow focus diameter of 0.1–1.3 mm and usually require no filler material. This welding technique must be performed in a vacuum to prevent the electrons from interacting with the gas before reaching their target, and it can be used to join conductive materials that would otherwise be considered unsuitable for welding.[160][161]

Electron-beam lithography (EBL) is a method of etching semiconductors at resolutions smaller than a micrometer.[162] This technique is limited by high costs, slow performance, the need to operate the beam in the vacuum and the tendency of the electrons to scatter in solids. The last problem limits the resolution to about 10 nm. For this reason, EBL is primarily used for the production of small numbers of specialized integrated circuits.[163]

Electron beam processing is used to irradiate materials in order to change their physical properties or to sterilize medical and food products.[164] Under intense irradiation, electron beams can fluidise or quasi-melt glasses without a significant rise in temperature: for example, intense electron radiation causes a decrease in viscosity of many orders of magnitude and a stepwise decrease in its activation energy.[165]

Linear particle accelerators generate electron beams for treatment of superficial tumors in radiation therapy. Electron therapy can treat such skin lesions as basal-cell carcinomas because an electron beam only penetrates to a limited depth before being absorbed, typically up to 5 cm for electron energies in the range 5–20 MeV. An electron beam can be used to supplement the treatment of areas that have been irradiated by X-rays.[166][167]

Particle accelerators use electric fields to propel electrons and their antiparticles to high energies. These particles emit synchrotron radiation as they pass through magnetic fields. The dependency of the intensity of this radiation upon spin polarizes the electron beam, a process known as the Sokolov–Ternov effect.[note 8] Polarized electron beams can be useful for various experiments. Synchrotron radiation can also cool the electron beams to reduce the momentum spread of the particles. Once the particles have been accelerated to the required energies, electron and positron beams are collided; particle detectors observe the resulting energy emissions, which particle physics studies.

Imaging

Low-energy electron diffraction (LEED) is a method of bombarding a crystalline material with a collimated beam of electrons and then observing the resulting diffraction patterns to determine the structure of the material. The required energy of the electrons is typically in the range 20–200 eV.[169] The reflection high-energy electron diffraction (RHEED) technique uses the reflection of a beam of electrons fired at various low angles to characterize the surface of crystalline materials. The beam energy is typically in the range 8–20 keV and the angle of incidence is 1–4°.[170][171]

The electron microscope directs a focused beam of electrons at a specimen. Some electrons change their properties, such as movement direction, angle, and relative phase and energy as the beam interacts with the material. Microscopists can record these changes in the electron beam to produce atomically resolved images of the material.[172] In blue light, conventional optical microscopes have a diffraction-limited resolution of about 200 nm.[173] By comparison, electron microscopes are limited by the de Broglie wavelength of the electron. This wavelength, for example, is equal to 0.0037 nm for electrons accelerated across a 100,000-volt potential.[174] The Transmission Electron Aberration-Corrected Microscope is capable of sub-0.05 nm resolution, which is more than enough to resolve individual atoms.[175] This capability makes the electron microscope a useful laboratory instrument for high resolution imaging. However, electron microscopes are expensive instruments that are costly to maintain.
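The 0.0037 nm figure can be reproduced from the relativistically corrected de Broglie relation λ = h/√(2meE(1 + E/2mec²)); a quick verification (constants rounded):

```python
import math

h = 6.626e-34     # Planck constant, J*s
m_e = 9.109e-31   # electron mass, kg
c = 2.998e8       # speed of light, m/s
e = 1.602e-19     # elementary charge, C

def electron_wavelength(volts):
    """Relativistically corrected de Broglie wavelength of an electron
    accelerated through the given potential, in meters."""
    E = e * volts  # kinetic energy, J
    p = math.sqrt(2 * m_e * E * (1 + E / (2 * m_e * c**2)))
    return h / p

print(f"{electron_wavelength(100_000) * 1e9:.4f} nm")   # ~0.0037 nm, as quoted above
```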

Two main types of electron microscopes exist: transmission and scanning. Transmission electron microscopes function like overhead projectors, with a beam of electrons passing through a slice of material then being projected by lenses on a photographic slide or a charge-coupled device. Scanning electron microscopes raster a finely focused electron beam, as in a TV set, across the studied sample to produce the image. Magnifications range from 100× to 1,000,000× or higher for both microscope types. The scanning tunneling microscope uses quantum tunneling of electrons from a sharp metal tip into the studied material and can produce atomically resolved images of its surface.

Other applications

In the free-electron laser (FEL), a relativistic electron beam passes through a pair of undulators that contain arrays of dipole magnets whose fields point in alternating directions. The electrons emit synchrotron radiation that coherently interacts with the same electrons to strongly amplify the radiation field at the resonance frequency. FEL can emit a coherent high-brilliance electromagnetic radiation with a wide range of frequencies, from microwaves to soft X-rays. These devices are used in manufacturing, communication, and in medical applications, such as soft tissue surgery.[179]

Electrons are important in cathode ray tubes, which have been extensively used as display devices in laboratory instruments, computer monitors and television sets.[180] In a photomultiplier tube, every photon striking the photocathode initiates an avalanche of electrons that produces a detectable current pulse.[181] Vacuum tubes use the flow of electrons to manipulate electrical signals, and they played a critical role in the development of electronics technology. However, they have been largely supplanted by solid-state devices such as the transistor.
