Sunday, January 11, 2015

Human brain

From Wikipedia, the free encyclopedia

Human brain
Human brain and skull
Cerebral lobes: the frontal lobe (pink), parietal lobe (green) and occipital lobe (blue)
Details
Latin: Cerebrum
Greek: ἐγκέφαλος (enképhalos), μυαλό (myaló)
System: Central nervous system
Artery: Internal carotid arteries, vertebral arteries
Vein: Internal jugular vein, cerebral veins, external veins, basal vein, terminal vein, choroid vein, cerebellar veins
Precursor: Neural tube
Identifiers
TA: A14.1.03.001
FMA: 50801
Anatomical terminology

The human brain has the same general structure as the brains of other mammals, but has a more developed cerebral cortex than any other. Large animals such as whales and elephants have larger brains in absolute terms, but when measured using the encephalization quotient, which compensates for body size, the human brain is almost twice as large as the brain of the bottlenose dolphin, and three times as large as the brain of a chimpanzee. Much of the expansion comes from the cerebral cortex, especially the frontal lobes, which are associated with executive functions such as self-control, planning, reasoning, and abstract thought. The portion of the cerebral cortex devoted to vision, the visual cortex, is also greatly enlarged in humans.
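The encephalization quotient (EQ) comparison above can be made concrete with a small sketch. This assumes Jerison's classic mammalian regression for expected brain mass (expected mass ≈ 0.12 × body mass^(2/3), with masses in grams); the 0.12 constant and the round-number species masses below are illustrative assumptions, not figures taken from this article.

```python
# Encephalization quotient (EQ): a hedged sketch, not from the article.
# Assumes Jerison's mammalian regression E_expected = 0.12 * P**(2/3),
# with brain mass E and body mass P in grams. The constant 0.12 and the
# example masses below are illustrative assumptions.

def encephalization_quotient(brain_g: float, body_g: float) -> float:
    """Ratio of actual brain mass to the mass expected for the body size."""
    expected = 0.12 * body_g ** (2 / 3)
    return brain_g / expected

# Assumed round-number masses: human ~1350 g brain / 65 kg body,
# chimpanzee ~400 g brain / 45 kg body.
human = encephalization_quotient(1350, 65_000)
chimp = encephalization_quotient(400, 45_000)
print(f"human EQ ~ {human:.1f}, chimpanzee EQ ~ {chimp:.1f}")
```

Under these assumed inputs the human EQ comes out several times the chimpanzee's, in line with the body-size-corrected comparison described above.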

The human cerebral cortex is a thick layer of neural tissue that covers most of the brain. This layer is folded in a way that increases the amount of surface that can fit into the volume available. The pattern of folds is similar across individuals, although there are many small variations. The cortex is divided into four "lobes", called the frontal lobe, parietal lobe, temporal lobe, and occipital lobe. (Some classification systems also include a limbic lobe and treat the insular cortex as a lobe.) Within each lobe are numerous cortical areas, each associated with a particular function, including vision, motor control, and language. The left and right sides of the cortex are broadly similar in shape, and most cortical areas are replicated on both sides. Some areas, though, show strong lateralization, particularly areas that are involved in language. In most people, the left hemisphere is "dominant" for language, with the right hemisphere playing only a minor role. There are other functions, such as spatiotemporal reasoning, for which the right hemisphere is usually dominant.

Despite being protected by the thick bones of the skull, suspended in cerebrospinal fluid, and isolated from the bloodstream by the blood–brain barrier, the human brain is susceptible to damage and disease. The most common forms of physical damage are closed head injuries such as a blow to the head, a stroke, or poisoning by a variety of chemicals that can act as neurotoxins. Infection of the brain, though serious, is rare due to the biological barriers that protect it. The human brain is also susceptible to degenerative disorders, such as Parkinson's disease, multiple sclerosis, and Alzheimer's disease. A number of psychiatric conditions, such as schizophrenia and depression, are thought to be associated with brain dysfunctions, although the nature of such brain anomalies is not well understood.

Scientifically, the techniques that are used to study the human brain differ in important ways from those that are used to study the brains of other mammals. On the one hand, invasive techniques such as inserting electrodes into the brain, or disabling parts of the brain in order to examine the effect on behavior, are used with non-human species, but for ethical reasons, are generally not performed with humans. On the other hand, humans are the only subjects who can respond to complex verbal instructions. Thus, it is often possible to use non-invasive techniques such as functional neuroimaging or EEG recording more productively with humans than with non-humans. Furthermore, some of the most important topics, such as language, can hardly be studied at all except in humans. In many cases, human and non-human studies form essential complements to each other. Individual brain cells (except where tissue samples are taken for biopsy for suspected brain tumors) can only be studied in non-humans; complex cognitive tasks can only be studied in humans. Combining the two sources of information to yield a complete functional understanding of the human brain is an ongoing challenge for neuroscience.

Structure

Human brain viewed from below

The adult human brain weighs on average about 1.5 kg (3.3 lb)[1] with a volume of around 1130 cubic centimetres (cm3) in women and 1260 cm3 in men, although there is substantial individual variation.[2] Neurological differences between the sexes have not been shown to correlate in any simple way with IQ or other measures of cognitive performance.[3] The human brain is composed of neurons, glial cells, and blood vessels. According to array tomography, the human brain contains about 86 billion neurons, with a roughly equal number of non-neuronal cells called glia.[4]

The cerebral hemispheres (the cerebrum) form the largest part of the human brain and are situated above other brain structures. They are covered with a cortical layer (the cerebral cortex) which has a convoluted topography.[5] Underneath the cerebrum lies the brainstem, resembling a stalk on which the cerebrum is attached. At the rear of the brain, beneath the cerebrum and behind the brainstem, is the cerebellum, a structure with a horizontally furrowed surface, the cerebellar cortex, that makes it look different from any other brain area. The same structures are present in other mammals, although they vary considerably in relative size. As a rule, the smaller the cerebrum, the less convoluted the cortex. The cortex of a rat or mouse is almost perfectly smooth. The cortex of a dolphin or whale, on the other hand, is more convoluted than the cortex of a human.

The living brain is very soft, having a consistency similar to soft gelatin or soft tofu. Although referred to as grey matter, the live cortex is pinkish-beige in color and slightly off-white in the interior.

General features

Human brain viewed through a mid-line incision

The human brain has many properties that are common to all vertebrate brains, including a basic division into three parts called the forebrain, midbrain, and hindbrain, with interconnected fluid-filled ventricles, and a set of generic vertebrate brain structures including the medulla oblongata and pons of the brainstem, the cerebellum, optic tectum, thalamus, hypothalamus, basal ganglia, olfactory bulb, and many others.

As a mammalian brain, the human brain has special features that are common to all mammalian brains, most notably a six-layered cerebral cortex and a set of structures associated with it, including the hippocampus and amygdala. All vertebrates have a forebrain whose upper surface is covered with a layer of neural tissue called the pallium, but in all except mammals the pallium has a relatively simple three-layered cell structure. In mammals it has a much more complex six-layered cell structure, and is given a different name, the cerebral cortex. The hippocampus and amygdala also originate from the pallium, but are much more complex in mammals than in other vertebrates.

As a primate brain, the human brain has a much larger cerebral cortex, in proportion to body size, than most mammals, and a very highly developed visual system. The shape of the brain within the skull is also altered somewhat as a consequence of the upright position in which primates hold their heads.

As a hominid brain, the human brain is substantially enlarged even in comparison to the brain of a typical monkey. The sequence of evolution from Australopithecus (four million years ago) to Homo sapiens (modern man) was marked by a steady increase in brain size, particularly in the frontal lobes, which are associated with a variety of high-level cognitive functions.

Humans and other primates have some differences in gene sequence, and genes are differentially expressed in many brain regions. The functional differences between the human brain and the brains of other animals also arise from many gene–environment interactions.[6]

Cerebral cortex

Bisection of the head of an adult female, showing the cerebral cortex, with its extensive folding, and the underlying white matter[7]

The dominant feature of the human brain is corticalization. The cerebral cortex in humans is so large that it overshadows every other part of the brain. A few subcortical structures show alterations reflecting this trend. The cerebellum, for example, has a medial zone connected mainly to subcortical motor areas, and a lateral zone connected primarily to the cortex. In humans the lateral zone takes up a much larger fraction of the cerebellum than in most other mammalian species. Corticalization is reflected in function as well as structure. In a rat, surgical removal of the entire cerebral cortex leaves an animal that is still capable of walking around and interacting with the environment.[8] In a human, comparable cerebral cortex damage produces a permanent state of coma. The amount of association cortex, relative to the other two categories of sensory and motor, increases dramatically as one goes from simpler mammals, such as the rat and the cat, to more complex ones, such as the chimpanzee and the human.[9]

The cerebral cortex is essentially a sheet of neural tissue, folded in a way that allows a large surface area to fit within the confines of the skull. When unfolded, each cerebral hemisphere has a total surface area of about 1.3 square feet (0.12 m2).[10] Each cortical ridge is called a gyrus, and each groove or fissure separating one gyrus from another is called a sulcus.
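The quoted unfolded surface-area figure can be checked by straightforward unit conversion; this minimal sketch assumes only the 0.12 m2 per-hemisphere value given above.

```python
# Unit-conversion check of the per-hemisphere area quoted above
# (0.12 m2, stated as about 1.3 square feet). Only the 0.12 m2
# figure comes from the text; the conversion factor is exact.
M2_PER_SQFT = 0.3048 ** 2           # 1 ft = 0.3048 m exactly

hemisphere_m2 = 0.12                # unfolded area of one hemisphere
hemisphere_sqft = hemisphere_m2 / M2_PER_SQFT
total_m2 = 2 * hemisphere_m2        # both hemispheres together

print(f"{hemisphere_sqft:.2f} sq ft per hemisphere, {total_m2:.2f} m2 total")
```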

Cortical divisions

Four lobes

Regions of the lateral surface of the brain, and particularly the lobes of the forebrain:
Beige: frontal lobe
Blue: parietal lobe
Green: occipital lobe
Pink: temporal lobe

The cerebral cortex is nearly symmetrical, with left and right hemispheres that are approximate mirror images of each other. Each hemisphere is conventionally divided into four "lobes", the frontal lobe, parietal lobe, occipital lobe, and temporal lobe. With one exception, this division into lobes does not derive from the structure of the cortex itself: the lobes are named after the bones of the skull that overlie them, the frontal bone, parietal bone, temporal bone, and occipital bone. The borders between lobes lie beneath the sutures that link the skull bones together. The exception is the border between the frontal and parietal lobes, which does not lie beneath the corresponding suture; instead it follows the anatomical boundary of the central sulcus, a deep fold in the brain's structure where the primary somatosensory cortex and primary motor cortex meet.

Because of the arbitrary way most of the borders between lobes are demarcated, they have little functional significance. With the exception of the occipital lobe, a small area that is entirely dedicated to vision, each of the lobes contains a variety of brain areas that have minimal functional relationship. The parietal lobe, for example, contains areas involved in somatosensation, hearing, language, attention, and spatial cognition. In spite of this heterogeneity, the division into lobes is convenient for reference. The main functions of the frontal lobe are to control attention, abstract thinking, behavior, problem solving, physical reactions, and personality.[11] The occipital lobe is the smallest lobe; its main functions are visual reception, visual-spatial processing, movement, and color recognition.[12] The temporal lobe controls auditory and visual memories, language, and some hearing and speech.[11]

Major sulci and gyri

Major gyri and sulci on the lateral surface of the cortex
Lateral surface of the cerebral cortex
Medial surface of the cerebral cortex

Although there are enough variations in the shape and placement of gyri and sulci (cortical folds) to make every brain unique, most human brains show sufficiently consistent patterns of folding that the major folds can be named. Many of the gyri and sulci are named according to their location on the lobes or relative to other major folds of the cortex.

Functional divisions

Researchers who study the functions of the cortex divide it into three functional categories of regions. One consists of the primary sensory areas, which receive signals from the sensory nerves and tracts by way of relay nuclei in the thalamus. Primary sensory areas include the visual area of the occipital lobe, the auditory area in parts of the temporal lobe and insular cortex, and the somatosensory cortex in the parietal lobe. A second category is the primary motor cortex, which sends axons down to motor neurons in the brainstem and spinal cord.[13] This area occupies the rear portion of the frontal lobe, directly in front of the somatosensory area. The third category consists of the remaining parts of the cortex, which are called the association areas. These areas receive input from the sensory areas and lower parts of the brain and are involved in the complex processes of perception, thought, and decision-making.[14]

Cytoarchitecture

Brodmann's classification of areas of the cortex

Different parts of the cerebral cortex are involved in different cognitive and behavioral functions. The differences show up in a number of ways: the effects of localized brain damage, regional activity patterns exposed when the brain is examined using functional imaging techniques, connectivity with subcortical areas, and regional differences in the cellular architecture of the cortex. Neuroscientists describe most of the cortex—the part they call the neocortex—as having six layers, but not all layers are apparent in all areas, and even when a layer is present, its thickness and cellular organization may vary. Scientists have constructed maps of cortical areas on the basis of variations in the appearance of the layers as seen with a microscope. One of the most widely used schemes came from Korbinian Brodmann, who split the cortex into 51 different areas and assigned each a number (many of these Brodmann areas have since been subdivided). For example, Brodmann area 1 is the primary somatosensory cortex, Brodmann area 17 is the primary visual cortex, and Brodmann area 25 is the anterior cingulate cortex.[15]
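The numbering scheme described above amounts to a lookup from area number to cortical region. As an illustration, the three Brodmann areas named in the text can be arranged as a small table; this is a deliberately partial sketch of Brodmann's roughly 51-area scheme.

```python
# A deliberately partial lookup of Brodmann areas: only the three
# areas named in the text are included, out of ~51 numbered areas.
BRODMANN_AREAS = {
    1: "primary somatosensory cortex",
    17: "primary visual cortex",
    25: "anterior cingulate cortex",
}

def area_name(number: int) -> str:
    """Return the cortical area for a Brodmann number, if in this table."""
    return BRODMANN_AREAS.get(number, f"area {number} not in this partial table")

print(area_name(17))   # primary visual cortex
```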

Topography

Topography of the primary motor cortex, showing which body part is controlled by each zone

Many of the brain areas Brodmann defined have their own complex internal structures. In a number of cases, brain areas are organized into "topographic maps", where adjoining bits of the cortex correspond to adjoining parts of the body, or of some more abstract entity. A simple example of this type of correspondence is the primary motor cortex, a strip of tissue running along the anterior edge of the central sulcus, shown in the image to the right. Motor areas innervating each part of the body arise from a distinct zone, with neighboring body parts represented by neighboring zones. Electrical stimulation of the cortex at any point causes a muscle contraction in the represented body part. This "somatotopic" representation is not evenly distributed, however. The head, for example, is represented by a region about three times as large as the zone for the entire back and trunk. The size of any zone correlates with the precision of motor control and sensory discrimination possible. The areas for the lips, fingers, and tongue are particularly large, considering the proportional size of their represented body parts.

In visual areas, the maps are retinotopic—that is, they reflect the topography of the retina, the layer of light-activated neurons lining the back of the eye. In this case too the representation is uneven: the fovea—the area at the center of the visual field—is greatly overrepresented compared to the periphery. The visual circuitry in the human cerebral cortex contains several dozen distinct retinotopic maps, each devoted to analyzing the visual input stream in a particular way. The primary visual cortex (Brodmann area 17), which is the main recipient of direct input from the visual part of the thalamus, contains many neurons that are most easily activated by edges with a particular orientation moving across a particular point in the visual field. Visual areas farther downstream extract features such as color, motion, and shape.

In auditory areas, the primary map is tonotopic. Sounds are parsed according to frequency (i.e., high pitch vs. low pitch) by subcortical auditory areas, and this parsing is reflected by the primary auditory zone of the cortex. As with the visual system, there are a number of tonotopic cortical maps, each devoted to analyzing sound in a particular way.

Within a topographic map there can sometimes be finer levels of spatial structure. In the primary visual cortex, for example, where the main organization is retinotopic and the main responses are to moving edges, cells that respond to different edge-orientations are spatially segregated from one another.

Development

During the first 3 weeks of gestation, the human embryo's ectoderm forms a thickened strip called the neural plate. The neural plate then folds and closes to form the neural tube. This tube flexes as it grows, forming the crescent-shaped cerebral hemispheres at the head, and the cerebellum and pons towards the tail.
Brain of human embryo at 4.5 weeks, showing interior of forebrain 
Brain interior at 5 weeks 
Brain viewed at midline at 3 months 

Function

Cognition

Understanding the mind–body problem – the relationship between the brain and the mind – is a significant challenge both philosophically and scientifically. It is very difficult to imagine how mental activities such as thoughts and emotions could be implemented by physical structures such as neurons and synapses, or by any other type of physical mechanism. This difficulty was expressed by Gottfried Leibniz in an analogy known as Leibniz's Mill:
One is obliged to admit that perception and what depends upon it is inexplicable on mechanical principles, that is, by figures and motions. In imagining that there is a machine whose construction would enable it to think, to sense, and to have perception, one could conceive it enlarged while retaining the same proportions, so that one could enter into it, just like into a windmill. Supposing this, one should, when visiting within it, find only parts pushing one another, and never anything by which to explain a perception.
— Leibniz, Monadology[16]
Incredulity about the possibility of a mechanistic explanation of thought drove René Descartes, and most of humankind along with him, to dualism: the belief that the mind exists independently of the brain.[17]
There has always, however, been a strong argument in the opposite direction. There is clear empirical evidence that physical manipulations of, or injuries to, the brain (for example by drugs or by lesions, respectively) can affect the mind in potent and intimate ways.[18] For example, a person suffering from Alzheimer's disease – a condition that causes physical damage to the brain – also experiences a compromised mind. Similarly, someone who has taken a psychedelic drug may temporarily lose their sense of personal identity (ego death) or experience profound changes to their perception and thought processes. Likewise, a patient with epilepsy who undergoes cortical stimulation mapping with electrical brain stimulation would also, upon stimulation of his or her brain, experience various complex feelings, hallucinations, memory flashbacks, and other complex cognitive, emotional, or behavioral phenomena.[19] Following this line of thinking, a large body of empirical evidence for a close relationship between brain activity and mental activity has led most neuroscientists and contemporary philosophers to be materialists, believing that mental phenomena are ultimately the result of, or reducible to, physical phenomena.[20]

Lateralization

Routing of neural signals from the two eyes to the brain

Each hemisphere of the brain interacts primarily with one half of the body, but for reasons that are unclear, the connections are crossed: the left side of the brain interacts with the right side of the body, and vice versa. Motor connections from the brain to the spinal cord, and sensory connections from the spinal cord to the brain, both cross the midline at the level of the brainstem. Visual input follows a more complex rule: the optic nerves from the two eyes come together at a point called the optic chiasm, and half of the fibers from each nerve split off to join the other. The result is that connections from the left half of the retina, in both eyes, go to the left side of the brain, whereas connections from the right half of the retina go to the right side of the brain. Because each half of the retina receives light coming from the opposite half of the visual field, the functional consequence is that visual input from the left side of the world goes to the right side of the brain, and vice versa. Thus, the right side of the brain receives somatosensory input from the left side of the body, and visual input from the left side of the visual field—an arrangement that presumably is helpful for visuomotor coordination.
The corpus callosum, a nerve bundle connecting the two cerebral hemispheres, with the lateral ventricles directly below

The two cerebral hemispheres are connected by a very large nerve bundle (the largest white matter structure in the brain) called the corpus callosum, which crosses the midline above the level of the thalamus.[21] There are also two much smaller connections, the anterior commissure and hippocampal commissure, as well as many subcortical connections that cross the midline. The corpus callosum is the main avenue of communication between the two hemispheres, though. It connects each point on the cortex to the mirror-image point in the opposite hemisphere, and also connects to functionally related points in different cortical areas.

In most respects, the left and right sides of the brain are symmetrical in terms of function. For example, the counterpart of the left-hemisphere motor area controlling the right hand is the right-hemisphere area controlling the left hand. There are, however, several very important exceptions, involving language and spatial cognition. In most people, the left hemisphere is "dominant" for language: a stroke that damages a key language area in the left hemisphere can leave the victim unable to speak or understand, whereas equivalent damage to the right hemisphere would cause only minor impairment to language skills.

A substantial part of our current understanding of the interactions between the two hemispheres has come from the study of "split-brain patients"—people who underwent surgical transection of the corpus callosum in an attempt to reduce the severity of epileptic seizures. These patients do not show unusual behavior that is immediately obvious, but in some cases can behave almost like two different people in the same body, with the right hand taking an action and then the left hand undoing it. Most of these patients, when briefly shown a picture on the right side of the point of visual fixation, are able to describe it verbally, but when the picture is shown on the left, are unable to describe it, but may be able to give an indication with the left hand of the nature of the object shown.

Language

Locations of two brain areas historically associated with language processing, Broca's area and Wernicke's area, and associated regions of sound processing and speech.
(Associated cortical regions involved in vision, touch sensation, and non-speech movement are also shown.)

The study of how language is represented, processed, and acquired by the brain is neurolinguistics, a large multidisciplinary field drawing from cognitive neuroscience, cognitive linguistics, and psycholinguistics. This field originated from the 19th-century discovery that damage to different parts of the brain appeared to cause different symptoms: physicians noticed that individuals with damage to a portion of the left inferior frontal gyrus now known as Broca's area had difficulty in producing language (expressive aphasia), whereas those with damage to a region in the left superior temporal gyrus, now known as Wernicke's area, had difficulty in understanding it.[22]

Since then, there has been substantial debate over what linguistic processes these and other parts of the brain subserve,[23] and although Broca's and Wernicke's areas have traditionally been associated with language functions, they may also be involved in certain non-speech functions. There is also debate over whether there is a strong one-to-one relationship between brain regions and language functions that emerges during neocortical development.[24] More recently, research on language has increasingly used modern methods, including electrophysiology and functional neuroimaging, to examine how language processing occurs. In the study of natural language, a dedicated language network has been identified that crucially involves Broca's area.[25][26]

Metabolism

A flat oval object is surrounded by blue. The object is largely green-yellow, but contains a dark red patch at one end and a number of blue patches.
PET image of the human brain showing energy consumption

The brain consumes up to twenty percent of the energy used by the human body, more than any other organ.[27] Brain metabolism normally relies upon blood glucose as an energy source, but during times of low glucose (such as fasting, exercise, or limited carbohydrate intake), the brain will use ketone bodies for fuel, reducing its need for glucose. The brain can also utilize lactate during exercise.[28] Long-chain fatty acids cannot cross the blood–brain barrier, but the liver can break these down to produce ketones. However, the medium-chain fatty acids octanoic acid and heptanoic acid can cross the barrier and be used by the brain.[29][30][31] The brain stores glucose in the form of glycogen, albeit in significantly smaller amounts than those found in the liver or skeletal muscle.[32]

Although the human brain represents only 2% of the body weight, it receives 15% of the cardiac output, 20% of total body oxygen consumption, and 25% of total body glucose utilization.[33] The need to limit body weight has led to selection for a reduction of brain size in some species, such as bats, which need to be able to fly.[34] The brain mostly uses glucose for energy, and deprivation of glucose, as can happen in hypoglycemia, can result in loss of consciousness. The energy consumption of the brain does not vary greatly over time, but active regions of the cortex consume somewhat more energy than inactive regions: this fact forms the basis for the functional brain imaging methods PET and fMRI.[35] These are nuclear medicine imaging techniques which produce a three-dimensional image of metabolic activity.
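The shares quoted above imply a simple per-gram comparison: dividing each consumption share by the brain's share of body mass gives how many times the whole-body average rate the brain runs at. A minimal sketch, using only the percentages stated in this section:

```python
# Per-gram metabolic intensity implied by the figures in the text:
# the brain is ~2% of body mass but ~20% of oxygen and ~25% of
# glucose consumption. Dividing each consumption share by the mass
# share gives the brain's rate relative to the body-wide average.

def relative_rate(mass_share: float, consumption_share: float) -> float:
    """Consumption per gram of brain relative to the body-wide average."""
    return consumption_share / mass_share

oxygen_factor = relative_rate(0.02, 0.20)    # ~10x the body average
glucose_factor = relative_rate(0.02, 0.25)   # ~12.5x the body average
print(f"oxygen ~{oxygen_factor:.0f}x, glucose ~{glucose_factor:.1f}x")
```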

Clinical significance

Clinically, death is defined as an absence of brain activity as measured by EEG. Injuries to the brain tend to affect large areas of the organ, sometimes causing major deficits in intelligence, memory, personality, and movement. Head trauma caused, for example, by vehicular or industrial accidents, is a leading cause of death in youth and middle age. In many cases, more damage is caused by resultant edema than by the impact itself. Stroke, caused by the blockage or rupturing of blood vessels in the brain, is another major cause of death from brain damage.

Other problems in the brain can be more accurately classified as diseases. Neurodegenerative diseases, such as Alzheimer's disease, Parkinson's disease, Huntington's disease and motor neuron diseases are caused by the gradual death of individual neurons, leading to diminution in movement control, memory, and cognition. There are five motor neuron diseases, the most common of which is amyotrophic lateral sclerosis (ALS).

Some infectious diseases affecting the brain are caused by viruses and bacteria. Infection of the meninges, the membranes that cover the brain, can lead to meningitis. Bovine spongiform encephalopathy (also known as "mad cow disease") is deadly in cattle and humans and is linked to prions. Kuru is a similar prion-borne degenerative brain disease affecting humans, endemic to tribes in Papua New Guinea. Both are linked to the ingestion of neural tissue, and may explain the tendency in humans and some non-human species to avoid cannibalism. Viral or bacterial causes have been reported in multiple sclerosis, and are established causes of encephalopathy and encephalomyelitis.

Mental disorders, such as clinical depression, schizophrenia, bipolar disorder and post-traumatic stress disorder may involve particular patterns of neuropsychological functioning related to various aspects of mental and somatic function. These disorders may be treated by psychotherapy, psychiatric medication, social intervention and personal recovery work or cognitive behavioural therapy; the underlying issues and associated prognoses vary significantly between individuals.

Many brain disorders are congenital, occurring during development. Tay-Sachs disease, fragile X syndrome, and Down syndrome are all linked to genetic and chromosomal errors. Many other syndromes, such as the intrinsic circadian rhythm disorders, are suspected to be congenital as well. Normal development of the brain can be altered by genetic factors, drug use, nutritional deficiencies, and infectious diseases during pregnancy.

Effects of brain damage

A key source of information about the function of brain regions is the effects of damage to them.[36] In humans, strokes have long provided a "natural laboratory" for studying the effects of brain damage.
Most strokes result from a blood clot lodging in the brain and blocking the local blood supply, causing damage or destruction of nearby brain tissue: the range of possible blockages is very wide, leading to a great diversity of stroke symptoms. Analysis of strokes is limited by the fact that damage often extends across multiple brain regions rather than following clear-cut borders, making it difficult to draw firm conclusions.

Transient ischemic attacks (TIAs) are mini-strokes that can cause sudden dimming or loss of vision (including amaurosis fugax), speech impairment ranging from slurring to dysarthria or aphasia, and mental confusion. Unlike a stroke, the symptoms of a TIA typically resolve within minutes to 24 hours, although brain injury may still occur in a TIA lasting only a few minutes.[37][38] A silent stroke or silent cerebral infarct (SCI) differs from a TIA in that there are no immediately observable symptoms. An SCI may still cause long-lasting neurological dysfunction affecting such areas as mood, personality, and cognition. An SCI often occurs before or after a TIA or major stroke.[39]

Electroencephalography

By placing electrodes on the scalp it is possible to record the summed electrical activity of the cortex, using a methodology known as electroencephalography (EEG).[40] EEG records average neuronal activity from the cerebral cortex and can detect changes in activity over large areas but with low sensitivity for sub-cortical activity. EEG recordings are sensitive enough to detect tiny electrical impulses lasting only a few milliseconds. Most EEG devices have good temporal resolution, but low spatial resolution.

Electrocorticography

Electrodes can also be placed directly on the surface of the brain (usually during surgical procedures that require removal of part of the skull). This technique, called electrocorticography (ECoG), offers finer spatial resolution than electroencephalography, but is very invasive.

Magnetoencephalography

In addition to measuring the electric field directly via electrodes placed over the skull, it is possible to measure the magnetic field that the brain generates using a method known as magnetoencephalography (MEG).[41] This technique also has good temporal resolution like EEG but with much better spatial resolution. The greatest disadvantage of MEG is that, because the magnetic fields generated by neural activity are very subtle, the neural activity must be relatively close to the surface of the brain to detect its magnetic field. MEGs can only detect the magnetic signatures of neurons located in the depths of cortical folds (sulci) that have dendrites oriented in a way that produces a field.

Imaging

Computed tomography of human brain, from base of the skull to top, taken with intravenous contrast medium

Neuroscientists, along with researchers from allied disciplines, study how the human brain works. Such research has expanded considerably in recent decades. The "Decade of the Brain", an initiative of the United States government in the 1990s, is considered to have spurred much of this increase in research.[42] It was followed in 2013 by the BRAIN Initiative.

Information about the structure and function of the human brain comes from a variety of experimental methods. Most information about the cellular components of the brain and how they work comes from studies of animal subjects, using techniques described in the brain article. Some techniques, however, are used mainly in humans, and therefore are described here.

Structural and functional imaging

A scan of the brain using fMRI
fMRI scan of the brain

There are several methods for detecting brain activity changes using three-dimensional imaging of local changes in blood flow. The older methods are SPECT and PET, which depend on injection of radioactive tracers into the bloodstream. A newer method, functional magnetic resonance imaging (fMRI), has considerably better spatial resolution and involves no radioactivity.[43] Using the most powerful magnets currently available, fMRI can localize brain activity changes to regions as small as one cubic millimeter. The downside is that the temporal resolution is poor: when brain activity increases, the blood flow response is delayed by 1–5 seconds and lasts for at least 10 seconds. Thus, fMRI is a very useful tool for learning which brain regions are involved in a given behavior, but gives little information about the temporal dynamics of their responses. A major advantage for fMRI is that, because it is non-invasive, it can readily be used on human subjects.

Another new non-invasive functional imaging method is functional near-infrared spectroscopy.

Evolution

A reconstruction of Homo habilis

In the course of evolution of the Homininae, the human brain has grown in volume from about 600 cm3 in Homo habilis to about 1500 cm3 in Homo sapiens neanderthalensis. Subsequently, there has been a shrinking over the past 28,000 years. The male brain has decreased from 1,500 cm3 to 1,350 cm3 while the female brain has shrunk by the same relative proportion.[44] For comparison, Homo erectus, a relative of humans, had a brain size of 1,100 cm3. However, the diminutive Homo floresiensis, with a brain size of 380 cm3, a third that of its proposed ancestor H. erectus, used fire, hunted, and made stone tools at least as sophisticated as those of H. erectus.[45] In spite of significant changes in social capacity, there has been very little change in brain size from Neanderthals to the present day.[46] "As large as you need and as small as you can" has been said to summarize the opposite evolutionary constraints on human brain size.[47][48] Changes in the size of the human brain during evolution have been reflected in changes in the ASPM and microcephalin genes.[49]

Studies tend to indicate small to moderate correlations (averaging around 0.3 to 0.4) between brain volume and IQ.[50] The most consistent associations are observed within the frontal, temporal, and parietal lobes, the hippocampi, and the cerebellum, but these only account for a relatively small amount of variance in IQ, which itself has only a partial relationship to general intelligence and real-world performance.[51][52][full citation needed] One study indicated that in humans, fertility and intelligence tend to be negatively correlated—that is to say, the more intelligent, as measured by IQ, exhibit a lower total fertility rate than the less intelligent. According to the model, the present rate of decline is predicted to be 1.34 IQ points per decade.[53]

Saturday, January 10, 2015

Interstellar travel

From Wikipedia, the free encyclopedia

A Bussard Ramjet, one of many possible methods that could serve as propulsion for a starship.

Interstellar space travel is manned or unmanned travel between stars. Interstellar travel is much more difficult than interplanetary travel: the distances between the planets in the Solar System are typically measured in standard astronomical units (AU)—whereas the distances between stars are typically hundreds of thousands of AU, and usually expressed in light-years. Because of the vastness of those distances, interstellar travel would require either great speed (some percentage of the speed of light) or huge travel time (lasting from years to millennia).

The required speeds for interstellar travel in a human lifespan are far beyond what current methods of spacecraft propulsion can provide. The energy required to propel a spacecraft to these speeds, regardless of the propulsion system used, is enormous by today's standards of energy production. At these speeds, collisions by the spacecraft with interstellar dust and gas can produce very dangerous effects both to any passengers and the spacecraft itself.

A number of widely differing strategies have been proposed to deal with these problems, ranging from giant arks that would carry entire societies and ecosystems very slowly, to microscopic space probes. Many different propulsion systems have been proposed to give spacecraft the required speeds: these range from different forms of nuclear propulsion, to beamed energy methods that would require megascale engineering projects, to methods based on speculative physics.

For both unmanned and manned interstellar travel, considerable technological and economic challenges would need to be met. Even the most optimistic views about interstellar travel are that it might happen decades in the future; the more common view is that it is a century or more away.

Challenges

Interstellar distances

The basic challenge facing interstellar travel is the immense distances between the stars.
Astronomical distances are measured using different units of length, depending on the scale of the distances involved. Between the planets in the Solar System, they are often measured in astronomical units (AU), defined as the average distance between the Sun and Earth, some 150 million kilometers (93 million miles). Venus, the closest other planet to Earth, is (at closest approach) 0.28 AU away. Neptune, the furthest planet from the Sun, is 29.8 AU away. Voyager 1, the furthest man-made object from Earth, is 129.2 AU away.

The closest known star Proxima Centauri, however, is some 268,332 AU away, or 9000 times further away than even the furthest planet in the Solar System.
Object                            AU         Light time
The Moon                          0.0026     1.3 seconds
Venus (nearest planet)            0.28       2.41 minutes
Neptune (furthest planet)         29.8       4.1 hours
Voyager 1                         129.2      17.9 hours
Proxima Centauri (nearest star)   268,332    4.24 years
Because of this, distances between stars are usually expressed in light-years, defined as the distance that a ray of light travels in a year. Light in a vacuum travels around 300,000 kilometers (186,000 miles) per second, so this is some 9.46 trillion kilometers (5.87 trillion miles) or 63,241 AU. Proxima Centauri is 4.243 light-years away.
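The light-year figures above can be checked with a few lines of arithmetic. This is a sketch for illustration; the constants are the standard defined values.

```python
C_KM_S = 299_792.458               # speed of light in km/s (exact, by definition)
SECONDS_PER_YEAR = 365.25 * 86_400 # Julian year in seconds
AU_KM = 149_597_870.7              # one astronomical unit in km

# Distance light covers in one year, in km and in AU
light_year_km = C_KM_S * SECONDS_PER_YEAR
light_year_au = light_year_km / AU_KM

print(f"1 light-year = {light_year_km:.4e} km = {light_year_au:,.0f} AU")
```

This reproduces the figures quoted above: about 9.46 trillion kilometers, or 63,241 AU.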

Another way of understanding the vastness of interstellar distances is by scaling: one of the closest stars to the sun, Alpha Centauri A (a Sun-like star), can be pictured by scaling down the Earth–Sun distance to one meter (~3.3 ft). On this scale, the distance to Alpha Centauri A would be 271 kilometers (169 miles).
The fastest outward-bound spacecraft yet sent, Voyager 1, has covered 1/600th of a light-year in 30 years and is currently moving at 1/18,000th the speed of light. At this rate, a journey to Proxima Centauri would take 80,000 years.[1]
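As an order-of-magnitude check of the Voyager 1 figure (a sketch, not from the source): dividing the distance to Proxima Centauri by the quoted 1/18,000 of light speed gives roughly the 80,000-year journey time stated above.

```python
C = 1.0                     # speed of light, in ly/yr
voyager_speed = C / 18_000  # Voyager 1's speed as a fraction of c, quoted above
distance_ly = 4.243         # Proxima Centauri, in light-years

years = distance_ly / voyager_speed
print(f"{years:,.0f} years")  # on the order of the quoted 80,000 years
```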

Some combination of great speed and long travel time is required. Propulsion methods based on currently known physical principles would take years to millennia.

Required energy

A significant factor contributing to the difficulty is the energy that must be supplied to obtain a reasonable travel time. A lower bound for the required energy is the kinetic energy K = ½ mv2 where m is the final mass. If deceleration on arrival is desired and cannot be achieved by any means other than the engines of the ship, then the required energy is significantly increased.

The velocity for a manned round trip of a few decades to even the nearest star is several thousand times greater than those of present space vehicles. This means that due to the v2 term in the kinetic energy formula, millions of times as much energy is required. Accelerating one ton to one-tenth of the speed of light requires at least 450 PJ or 4.5 ×1017 J or 125 billion kWh, without factoring in efficiency of the propulsion mechanism. This energy has to be generated on-board from stored fuel, harvested from the interstellar medium, or projected over immense distances.
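The kinetic-energy figure above follows directly from K = ½mv². A minimal check (Newtonian; the relativistic value at 0.1c is only slightly higher):

```python
C = 299_792_458.0   # speed of light, m/s
mass = 1_000.0      # one tonne, in kg
v = 0.1 * C         # one-tenth of light speed

# Lower bound on the energy: the payload's final kinetic energy
kinetic_joules = 0.5 * mass * v**2
kwh = kinetic_joules / 3.6e6   # 1 kWh = 3.6e6 J

print(f"{kinetic_joules:.2e} J = {kwh:.3e} kWh")
```

This reproduces the quoted ~4.5 × 10^17 J (450 PJ), or about 125 billion kWh.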

Manned missions

The mass of any craft capable of carrying humans would inevitably be substantially larger than that necessary for an unmanned interstellar probe. For instance, the first space probe, Sputnik 1, had a payload of 83.6 kg, whereas the first spacecraft carrying a living passenger (the dog Laika), Sputnik 2, had a payload six times that at 508.3 kg. This underestimates the difference in the case of interstellar missions, given the vastly greater travel times involved and the resulting necessity of a closed-cycle life support system. As technology continues to advance, combined with the aggregate risks and support requirements of manned interstellar travel, the first interstellar missions are unlikely to carry life forms.

A manned craft will require more time to reach its top speed as humans have limited tolerance to acceleration.

Interstellar medium

A major issue with traveling at extremely high speeds is that interstellar dust and gas may cause considerable damage to the craft, due to the high relative speeds and large kinetic energies involved.
Various shielding methods to mitigate this problem have been proposed.[2] Larger objects (such as macroscopic dust grains) are far less common, but would be much more destructive. The risks of impacting such objects, and methods of mitigating these risks, have been discussed in the literature, but many unknowns remain.[3]

Travel time

An interstellar ship would face manifold hazards found in interplanetary travel, including vacuum, radiation, weightlessness, and micrometeoroids. Even the minimum multi-year travel times to the nearest stars are beyond current manned space mission design experience.

The habitual illumination energy requirement for each person is estimated to be 12 kilowatts.[4][5] Other long-term energy requirements are still being investigated.[6]

More speculative approaches to interstellar travel offer the possibility of circumventing these difficulties. Special relativity offers the possibility of shortening the travel time through relativistic time dilation: if a starship could reach velocities approaching the speed of light, the journey time as experienced by the traveler would be greatly reduced (see time dilation section). General relativity offers the theoretical possibility that faster-than-light travel could greatly shorten travel times, both for the traveler and those on Earth (see Faster-than-light travel section).

Wait calculation

It has been argued that an interstellar mission that cannot be completed within 50 years should not be started at all. Instead, assuming that a civilization is still on an increasing curve of propulsion system velocity, not yet having reached the limit, the resources should be invested in designing a better propulsion system. This is because a slow spacecraft would probably be passed by another mission sent later with more-advanced propulsion (the incessant obsolescence postulate).[7] On the other hand, Andrew Kennedy has shown that if one calculates the journey time to a given destination as the rate of travel speed derived from growth (even exponential growth) increases, there is a clear minimum in the total time to that destination from now (see wait calculation).[8] Voyages undertaken before the minimum will be overtaken by those who leave at the minimum, whereas those who leave after the minimum will never overtake those who left at the minimum.
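Kennedy's minimum can be illustrated numerically. The sketch below uses entirely hypothetical numbers (a 2% annual growth rate in achievable speed, and today's Voyager-class speed as the baseline); only the shape of the result, a clear minimum in total time, reflects the argument above.

```python
import math

# Hypothetical inputs for illustration only:
D = 4.24          # destination distance, light-years (Proxima Centauri)
V0 = 1 / 18_000   # today's achievable speed, as a fraction of c (Voyager-class)
G = 0.02          # assumed exponential growth rate of achievable speed, per year

def total_time(departure):
    """Years from now until arrival, departing `departure` years from now
    at the best speed available then: v(t) = V0 * exp(G * t)."""
    return departure + D / (V0 * math.exp(G * departure))

# Numerical scan for the best departure year...
best = min(range(0, 2000), key=total_time)

# ...versus the closed-form optimum: depart when remaining journey time
# equals 1/G, i.e. t* = ln(D * G / V0) / G.
t_star = math.log(D * G / V0) / G
print(best, round(t_star, 1), round(total_time(best), 1))
```

Leaving earlier than the minimum means being overtaken; leaving later means arriving after those who left at the minimum, exactly as described above.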

One argument against the stance of delaying a start until reaching fast propulsion system velocity is that the various other non-technical problems that are specific to long-distance travel at considerably higher speed (such as interstellar particle impact, possible dramatic shortening of average human life span during extended space residence, etc.) may remain obstacles that take much longer time to resolve than the propulsion issue alone, assuming that they can even be solved eventually at all. A case can therefore be made for starting a mission without delay, based on the concept of an achievable and dedicated but relatively slow interstellar mission using the current technological state-of-the-art and at relatively low cost, rather than banking on being able to solve all problems associated with a faster mission without having a reliable time frame for achievability of such.

Communications

The round-trip delay time is the minimum time between an observation by the probe and the moment the probe can receive instructions from Earth reacting to the observation. Given that information can travel no faster than the speed of light, for Voyager 1 this is about 36 hours, and near Proxima Centauri it would be about 8 years. Any faster reaction would have to be programmed to be carried out automatically. Of course, in the case of a manned flight the crew can respond immediately to their observations. However, the round-trip delay time makes them not only extremely distant from, but, in terms of communication, also extremely isolated from Earth (analogous to how past long-distance explorers were similarly isolated before the invention of the electrical telegraph).
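Both delay figures follow from the distances given earlier (129.2 AU for Voyager 1, 4.243 ly for Proxima Centauri). A quick check:

```python
LIGHT_S_PER_AU = 499.005   # light travel time over 1 AU, in seconds

# Voyager 1: out and back over 129.2 AU
voyager_au = 129.2
round_trip_hours = 2 * voyager_au * LIGHT_S_PER_AU / 3600

# Proxima Centauri: light covers one light-year per year by definition
proxima_ly = 4.243
round_trip_years = 2 * proxima_ly

print(f"Voyager 1: {round_trip_hours:.1f} h, Proxima: {round_trip_years:.1f} yr")
```

This gives roughly 36 hours and 8.5 years, matching the figures above.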

Interstellar communication is still problematic – even if a probe could reach the nearest star, its ability to communicate back to Earth would be difficult given the extreme distance. See Interstellar communication.

Prime targets for interstellar travel

There are 59 known stellar systems within 20 light years of the Sun, containing 81 visible stars. The following could be considered prime targets for interstellar missions:[9]

Stellar system    Distance (ly)   Remarks
Alpha Centauri    4.3     Closest system. Three stars (G2, K1, M5). Component A is similar to the Sun (a G2 star). Alpha Centauri B has one confirmed planet.[10]
Barnard's Star    6       Small, low-luminosity M5 red dwarf. Second closest to the Solar System.
Sirius            8.7     Large, very bright A1 star with a white dwarf companion.
Epsilon Eridani   10.8    Single K2 star slightly smaller and colder than the Sun. Has two asteroid belts, might have a giant and one much smaller planet,[11] and may possess a Solar-System-type planetary system.
Tau Ceti          11.8    Single G8 star similar to the Sun. High probability of possessing a Solar-System-type planetary system: current evidence shows 5 planets, with potentially two in the habitable zone.
Gliese 581        20.3    Multiple-planet system. The unconfirmed exoplanet Gliese 581 g and the confirmed exoplanet Gliese 581 d are in the star's habitable zone.
Gliese 667C       22      A system with at least six planets. A record-breaking three of these planets are super-Earths lying in the zone around the star where liquid water could exist, making them possible candidates for the presence of life.[12]
Vega              25      At least one planet, and of a suitable age to have evolved primitive life.[13]

Existing and near-term astronomical technology is capable of finding planetary systems around these objects, increasing their potential for exploration.

Proposed methods

Slow, uncrewed probes

Slow interstellar missions based on current and near-future propulsion technologies are associated with trip times ranging from about one hundred years to thousands of years. These missions consist of sending a robotic probe to a nearby star for exploration, similar to interplanetary probes such as those used in the Voyager program. By taking along no crew, the cost and complexity of the mission is significantly reduced, although technology lifetime remains a significant issue alongside obtaining a reasonable speed of travel. Proposed concepts include Project Daedalus, Project Icarus and Project Longshot.

Fast, uncrewed probes

Nanoprobes

Near-lightspeed nanospacecraft might be possible in the near future, built on existing microchip technology with a newly developed nanoscale thruster. Researchers at the University of Michigan are developing thrusters that use nanoparticles as propellant. Their technology is called a "nanoparticle field extraction thruster", or nanoFET. These devices act like small particle accelerators, shooting conductive nanoparticles out into space.[14]

Michio Kaku, a theoretical physicist, has suggested that clouds of "smart dust" be sent to the stars, which may become possible with advances in nanotechnology. Kaku also notes that a large number of nanoprobes would need to be sent, because very small probes are easily deflected by magnetic fields, micrometeorites and other hazards, to ensure that at least one nanoprobe survives the journey and reaches the destination.[15]

Given the light weight of these probes, it would take much less energy to accelerate them. With on-board solar cells, they could continually accelerate using solar power. One can envision a day when a fleet of millions or even billions of these particles swarm to distant stars at nearly the speed of light and relay signals back to Earth through a vast interstellar communication network.

Slow, manned missions

In crewed missions, the duration of a slow interstellar journey presents a major obstacle, and existing concepts deal with this problem in different ways.[16] They can be distinguished by the "state" in which humans are transported aboard the spacecraft.

Generation ships

A generation ship (or world ship) is a type of interstellar ark in which the crew that arrives at the destination is descended from those who started the journey. Generation ships are not currently feasible because of the difficulty of constructing a ship of the enormous required scale and the great biological and sociological problems that life aboard such a ship raises.[17][18][19][20]

Suspended animation

Scientists and writers have postulated various techniques for suspended animation. These include human hibernation and cryonic preservation. Although neither is currently practical, they offer the possibility of sleeper ships in which the passengers lie inert for the long duration of the voyage.[21]

Extended human lifespan

A variant on this possibility is based on the development of substantial human life extension, such as the "Strategies for Engineered Negligible Senescence" proposed by Dr. Aubrey de Grey. If a ship crew had lifespans of some thousands of years, or had artificial bodies, they could traverse interstellar distances without the need to replace the crew in generations. The psychological effects of such an extended period of travel would potentially still pose a problem.

Frozen embryos

A robotic space mission carrying some number of frozen early stage human embryos is another theoretical possibility. This method of space colonization requires, among other things, the development of an artificial uterus, the prior detection of a habitable terrestrial planet, and advances in the field of fully autonomous mobile robots and educational robots that would replace human parents.[22]

Mind uploading

A more speculative method of transporting humans to the stars is mind uploading, also called brain emulation.[23][24] Frank J. Tipler speculates about the colonization of the universe by starships transporting uploaded humans.[25] Hein presents a range of concepts for how such missions could be conducted, using more or less speculative technologies such as self-replicating machines, wormholes, and teleportation.[23][26] One of the major challenges, besides mind uploading itself, is the means for downloading the uploads into physical entities, which can be biological, artificial, or both.

Island hopping through interstellar space

Interstellar space is not completely empty; it contains trillions of icy bodies ranging from small asteroids (Oort cloud) to possible rogue planets. There may be ways to take advantage of these resources for a good part of an interstellar trip, slowly hopping from body to body or setting up waystations along the way.[27]

Fast missions

If a spaceship could average 10 percent of light speed (and decelerate at the destination, for manned missions), this would be enough to reach Proxima Centauri in forty years. Several propulsion concepts are proposed that might be eventually developed to accomplish this (see section below on propulsion methods), but none of them are ready for near-term (few decades) development at acceptable cost.[citation needed]

Time dilation

Assuming one cannot travel faster than light, one might conclude that a human can never make a round trip farther from Earth than 40 light years if the traveler is active between the ages of 20 and 60. A traveler would then never be able to reach more than the very few star systems that exist within 10–20 light years of Earth. This, however, fails to take into account time dilation. Clocks aboard an interstellar ship would run slower than Earth clocks, so if a ship's engines were powerful enough the ship could reach almost anywhere in the galaxy and return to Earth within 40 years ship-time. Upon return, there would be a difference between the time elapsed on the astronaut's ship and the time elapsed on Earth. Consider, for example, a spaceship that travels to a star 32 light-years away: it initially accelerates at a constant 1.03g (i.e. 10.1 m/s2) for 1.32 years (ship time), then stops its engines and coasts for the next 17.3 years (ship time) at a constant speed, then decelerates again for 1.32 ship-years to come to a stop at the destination. After a short visit, the astronaut returns to Earth the same way.
After the full round trip, the clocks on board the ship show that 40 years have passed, but according to those on Earth, the ship comes back 76 years after launch.

From the viewpoint of the astronaut, on-board clocks seem to be running normally. The star ahead seems to be approaching at a speed of 0.87 lightyears per ship-year. The universe would appear contracted along the direction of travel to half the size it had when the ship was at rest; the distance between that star and the Sun would seem to be 16 light years as measured by the astronaut.

At higher speeds, the time onboard will run even slower, so the astronaut could travel to the center of the Milky Way (30 kly from Earth) and back in 40 years ship-time. But the speed according to Earth clocks will always be less than 1 lightyear per Earth year, so, when back home, the astronaut will find that 60 thousand years will have passed on Earth.[citation needed]
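The galactic-centre figures can be sketched with the standard relativistic-rocket relations for constant proper acceleration a: ship time τ and Earth time t per phase satisfy t = (c/a)·sinh(aτ/c), with distance x = (c²/a)(cosh(aτ/c) − 1). The code below is an idealized check (constant 1 g the whole way, no coasting, 30 kly assumed), not a mission design.

```python
import math

C = 1.0    # speed of light, in ly/yr
A = 1.03   # ~1 g expressed in ly/yr^2 (9.8 m/s^2)

def round_trip(distance_ly, accel=A):
    """Accelerate at `accel` for the first half of each leg, decelerate
    for the second half; four identical phases per round trip.
    Returns (ship_years, earth_years)."""
    half = distance_ly / 2.0
    theta = math.acosh(accel * half / C**2 + 1.0)  # rapidity reached per phase
    tau_phase = (C / accel) * theta                # ship time per phase
    t_phase = (C / accel) * math.sinh(theta)       # Earth time per phase
    return 4 * tau_phase, 4 * t_phase

ship, earth = round_trip(30_000)  # galactic centre, ~30 kly from Earth
print(f"ship: {ship:.1f} yr, Earth: {earth:,.0f} yr")
```

This yields roughly 40 ship-years against about 60,000 Earth-years, consistent with the figures above.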

Constant acceleration

This plot shows that a ship capable of 1-gee (10 m/s2, or about 1.0 ly/y2) "felt" or proper acceleration[28] can go far, except for the problem of accelerating the on-board propellant.

Regardless of how it is achieved, if a propulsion system can produce acceleration continuously from departure to destination, then this will be the fastest method of travel. If the propulsion system drives the ship faster and faster for the first half of the journey, then turns around and brakes the craft so that it arrives at the destination at a standstill, this is a constant acceleration journey. If this were performed at nearly 1g, it would have the added advantage of producing artificial "gravity". This is, however, largely unfeasible with current technology because of the difficulty of maintaining acceleration as the ship approaches the speed of light. This is illustrated by the relativistic definition of force, F = dp/dt, of which Newton's second law of motion is the classical form.[29]

From the planetary observer perspective the ship will appear to steadily accelerate but more slowly as it approaches the speed of light. The ship will be close to the speed of light after about a year of accelerating and remain at that speed until it brakes for the end of the journey.

From the ship perspective there will be no top limit on speed – the ship keeps going faster and faster the whole first half. This happens because the ship's time sense slows down – relative to the planetary observer – the more it approaches the speed of light.

The result is an impressively fast journey if you are in the ship.

By transmission

If physical entities could be transmitted as information and reconstructed at a destination, travel at nearly the speed of light would be possible, which for the "travelers" would be instantaneous. However, sending an atom-by-atom description of (say) a human body would be a daunting task. Extracting and sending only a computer brain simulation is a significant part of that problem. "Journey" time would be the light-travel time plus the time needed to encode, send and reconstruct the whole transmission.[30]

Propulsion

Rocket concepts

All rocket concepts are limited by the rocket equation, which sets the characteristic velocity available as a function of exhaust velocity and mass ratio, the ratio of initial (M0, including fuel) to final (M1, fuel depleted) mass.
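The rocket equation's consequences are easy to see numerically. The sketch below uses the non-relativistic Tsiolkovsky form with hypothetical numbers (a 0.1c mission velocity and a 0.03c exhaust velocity, neither taken from the source):

```python
import math

def mass_ratio(delta_v, v_exhaust):
    """Tsiolkovsky rocket equation: M0/M1 = exp(delta_v / v_exhaust).
    Non-relativistic form; adequate for a first estimate."""
    return math.exp(delta_v / v_exhaust)

# Hypothetical example: reach 0.1 c with a 0.03 c exhaust velocity
ratio = mass_ratio(0.10, 0.03)
print(f"initial-to-final mass ratio: {ratio:.1f}")
```

A ratio near 28 means the departing vehicle is overwhelmingly propellant, which is why exhaust velocity dominates interstellar rocket design.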

Very high specific power, the ratio of power output to total vehicle mass, is required to reach interstellar targets within sub-century time-frames.[31] Some heat transfer is inevitable and a tremendous heating load must be adequately handled.

Thus, for interstellar rocket concepts of all technologies, a key engineering problem (seldom explicitly discussed) is limiting the heat transfer from the exhaust stream back into the vehicle.[32]

Nuclear fission powered

Fission-electric
Nuclear-electric or plasma engines, operating for long periods at low thrust and powered by fission reactors, have the potential to reach speeds much greater than chemically powered vehicles or nuclear-thermal rockets. Such vehicles could probably power Solar System exploration with reasonable trip times within the current century. Because of their low thrust, they would be limited to off-planet, deep-space operation. An electrically powered spacecraft driven by a portable power source, such as a nuclear reactor, would produce only small accelerations and would take centuries to reach, for example, 15% of the speed of light, making it unsuitable for interstellar flight within a single human lifetime.[33]
Fission-fragment
Fission-fragment rockets use nuclear fission to create high-speed jets of fission fragments, ejected at speeds of up to 12,000 km/s. With fission, the energy output is approximately 0.1% of the total mass-energy of the reactor fuel, which limits the effective exhaust velocity to about 5% of the speed of light. For maximum velocity, the reaction mass should optimally consist of the fission products themselves, the "ash" of the primary energy source, so that no extra reaction mass need be book-kept in the mass ratio. By contrast, nuclear thermal-propulsion engines such as NERVA produce sufficient thrust but can only achieve relatively low-velocity exhaust jets, so accelerating to the desired speed would require an enormous amount of fuel.
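The ~5%-of-c limit above follows from the energy-conversion fraction: if a fraction ε of the fuel's rest mass-energy becomes exhaust kinetic energy, then (non-relativistically) ½v² = εc², so v/c = √(2ε). A minimal check with fission's ε ≈ 0.1%:

```python
import math

def exhaust_fraction_of_c(mass_energy_fraction):
    """Exhaust speed as a fraction of c, if a fraction eps of the fuel's
    rest mass-energy becomes exhaust kinetic energy:
    (1/2) v^2 = eps * c^2  =>  v / c = sqrt(2 * eps)."""
    return math.sqrt(2 * mass_energy_fraction)

print(exhaust_fraction_of_c(0.001))  # fission, ~0.1% of mass-energy
```

This gives about 0.045c, consistent with the "about 5% of the speed of light" figure quoted above.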
Nuclear pulse
Based on work in the late 1950s to the early 1960s, it has been technically possible to build spaceships with nuclear pulse propulsion engines, i.e. driven by a series of nuclear explosions. This propulsion system offers the prospect of very high specific impulse (space travel's equivalent of fuel economy) and high specific power.[34]
Project Orion team member Freeman Dyson proposed in 1968 an interstellar spacecraft using nuclear pulse propulsion driven by pure deuterium fusion detonations with a very high fuel-burnup fraction. He computed an exhaust velocity of 15,000 km/s and a 100,000-tonne space vehicle able to achieve a 20,000 km/s delta-v, allowing a flight time to Alpha Centauri of 130 years.[35] Later studies indicate that the top cruise velocity that can theoretically be achieved by an Orion starship powered by Teller-Ulam thermonuclear units, assuming no fuel is saved for slowing back down, is about 8% to 10% of the speed of light (0.08-0.1c).[36] An atomic (fission) Orion can achieve perhaps 3%-5% of the speed of light. A nuclear pulse drive starship powered by antimatter-catalyzed nuclear pulse propulsion units would be similarly in the 10% range, and pure matter-antimatter annihilation rockets would be theoretically capable of a velocity between 50% and 80% of the speed of light. In each case, saving fuel for slowing down halves the maximum speed. The concept of using a magnetic sail to decelerate the spacecraft as it approaches its destination has been discussed as an alternative to using propellant; this would allow the ship to travel near the maximum theoretical velocity.[37] Alternative designs utilizing similar principles include Project Longshot, Project Daedalus, and Mini-Mag Orion. The principle of external nuclear pulse propulsion to maximize survivable power has remained common among serious concepts for interstellar flight without external power beaming and for very high-performance interplanetary flight.
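Dyson's 130-year figure can be roughly reconstructed from his 20,000 km/s delta-v if one assumes, as a hypothesis not stated in the source, that half the delta-v is reserved for deceleration at the destination, giving a 10,000 km/s cruise:

```python
LY_KM = 9.4607e12   # kilometers per light-year
YEAR_S = 3.156e7    # seconds per year

delta_v = 20_000.0  # km/s total, from Dyson's 1968 study
cruise = delta_v / 2  # assumption: half the delta-v is kept for braking
distance_ly = 4.37  # Alpha Centauri

years = distance_ly * LY_KM / (cruise * YEAR_S)
print(f"{years:.0f} years")
```

This lands very close to the quoted 130-year flight time.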

In the 1970s the nuclear pulse propulsion concept was further refined by Project Daedalus through the use of externally triggered inertial confinement fusion, in this case producing fusion explosions by compressing fusion fuel pellets with high-powered electron beams. Since then, lasers, ion beams, neutral particle beams and hyper-kinetic projectiles have been suggested to produce nuclear pulses for propulsion purposes.[38]

A current impediment to the development of any nuclear-explosion-powered spacecraft is the 1963 Partial Test Ban Treaty, which includes a prohibition on the detonation of any nuclear devices (even non-weapon based) in outer space. This treaty would therefore need to be renegotiated, although a project on the scale of an interstellar mission using currently foreseeable technology would probably require international cooperation on at least the scale of the International Space Station.

Nuclear fusion rockets

Fusion rocket starships, powered by nuclear fusion reactions, should conceivably be able to reach speeds of the order of 10% of that of light, based on energy considerations alone. In theory, a large number of stages could push a vehicle arbitrarily close to the speed of light.[39] These would "burn" such light element fuels as deuterium, tritium, 3He, 11B, and 7Li. Because fusion yields about 0.3–0.9% of the mass of the nuclear fuel as released energy, it is energetically more favorable than fission, which releases less than 0.1% of the fuel's mass-energy. The maximum exhaust velocities potentially energetically available are correspondingly higher than for fission, typically 4–10% of c. However, the most easily achievable fusion reactions release a large fraction of their energy as high-energy neutrons, which are a significant source of energy loss. Thus, although these concepts seem to offer the best (nearest-term) prospects for travel to the nearest stars within a (long) human lifetime, they still involve massive technological and engineering difficulties, which may turn out to be intractable for decades or centuries.
Early studies include Project Daedalus, performed by the British Interplanetary Society in 1973–1978, and Project Longshot, a student project sponsored by NASA and the US Naval Academy, completed in 1988. Another fairly detailed vehicle system, "Discovery II",[40] designed and optimized for crewed Solar System exploration, based on the D3He reaction but using hydrogen as reaction mass, has been described by a team from NASA's Glenn Research Center. It achieves characteristic velocities of >300 km/s with an acceleration of ~1.7×10^−3 g, a ship initial mass of ~1,700 metric tons, and a payload fraction above 10%. Although these figures are still far short of the requirements for interstellar travel on human timescales, the study seems to represent a reasonable benchmark of what may be approachable within several decades, not impossibly beyond the current state of the art. Based on the concept's 2.2% burnup fraction, it could achieve a pure fusion-product exhaust velocity of ~3,000 km/s.
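As a rough sanity check, an exhaust-velocity figure of this order can be approximated from the burnup fraction alone. The 0.39% mass-to-energy conversion efficiency used below is an assumed textbook value for the D–3He reaction, not a number taken from the Discovery II study:

```python
import math

C = 299_792_458.0          # speed of light, m/s

# Assumed inputs: the D-3He reaction converts roughly 0.39% of the reacting
# fuel's mass to energy, and Discovery II burns ~2.2% of its propellant.
mass_energy_fraction = 0.0039   # assumed D-3He conversion efficiency
burnup_fraction = 0.022         # fraction of propellant that actually fuses

# If all released energy goes into exhaust kinetic energy,
# (1/2) v^2 = f * eps * c^2 (non-relativistic approximation).
v_exhaust = C * math.sqrt(2 * burnup_fraction * mass_energy_fraction)
print(f"implied exhaust velocity: {v_exhaust / 1000:.0f} km/s")
```

This idealized estimate lands in the low thousands of km/s, the same order as the ~3,000 km/s figure quoted above; real designs lose energy to neutrons and inefficiencies.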

Antimatter rockets

An antimatter rocket would have a far higher energy density and specific impulse than any other proposed class of rocket. If energy resources and efficient production methods are found to make antimatter in the quantities required and store it safely, it would be theoretically possible to reach speeds approaching that of light. Then relativistic time dilation would become more noticeable, thus making time pass at a slower rate for the travelers as perceived by an outside observer, reducing the trip time experienced by human travelers.
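The size of the time-dilation effect is easy to quantify. The sketch below uses a constant-velocity approximation (ignoring acceleration and deceleration phases) and the ~4.37 light-year distance to Alpha Centauri purely as an illustration:

```python
import math

C = 299_792_458.0
LY = 9.4607e15                  # one light-year in metres

def trip_times(distance_ly: float, speed_fraction: float):
    """Trip time seen by an outside observer vs. by the travelers
    (constant velocity, acceleration phases ignored)."""
    v = speed_fraction * C
    t_observer = distance_ly * LY / v                  # seconds
    gamma = 1.0 / math.sqrt(1.0 - speed_fraction**2)   # Lorentz factor
    t_traveler = t_observer / gamma                    # proper time
    year = 365.25 * 24 * 3600
    return t_observer / year, t_traveler / year

obs, trav = trip_times(4.37, 0.9)   # Alpha Centauri at 0.9 c
print(f"observer: {obs:.1f} yr, travelers: {trav:.1f} yr")
# → observer: 4.9 yr, travelers: 2.1 yr
```

At 0.9 c the travelers experience less than half the coordinate trip time; the effect grows without bound as the speed approaches c.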

Supposing the production and storage of antimatter should become practical, two further problems would present and need to be solved. First, in the annihilation of antimatter, much of the energy is lost in very penetrating high-energy gamma radiation, and especially also in neutrinos, so that substantially less than mc2 would actually be available if the antimatter were simply allowed to annihilate into radiations thermally. Even so, the energy available for propulsion would probably be substantially higher than the ~1% of mc2 yield of nuclear fusion, the next-best rival candidate.

Second, once again heat transfer from exhaust to vehicle seems likely to deposit enormous wasted energy into the ship, considering the large fraction of the energy that goes into penetrating gamma rays. Even assuming biological shielding were provided to protect the passengers, some of the energy would inevitably heat the vehicle, and may thereby prove limiting. This requires consideration for serious proposals if useful accelerations are to be achieved, because the energies involved (e.g. for 0.1g ship acceleration, approaching 0.3 trillion watts per ton of ship mass) are very large.
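The power figure quoted above can be reproduced with one line of arithmetic, assuming the exhaust leaves at essentially the speed of light (photons or ultra-relativistic annihilation products), in which case thrust is F = P/c:

```python
C = 299_792_458.0
G0 = 9.81                        # standard gravity, m/s^2

# For exhaust leaving at ~c, thrust F = P / c, so sustaining an
# acceleration a on mass m requires jet power P = m * a * c.
mass = 1000.0                    # one metric ton, kg
accel = 0.1 * G0                 # 0.1 g, as in the text
power = mass * accel * C         # watts
print(f"jet power per ton at 0.1 g: {power:.2e} W")
# → ≈ 2.9e11 W, i.e. the ~0.3 trillion watts per ton cited above
```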

Rockets with an external energy source

Rockets deriving their power from external sources, such as a laser, could bypass the ordinary rocket equation, potentially reducing the mass of the ship greatly and allowing much higher travel speeds. Geoffrey A. Landis has proposed an interstellar probe whose ion thruster is powered by an external laser beamed from a base station.[41]

Non-rocket concepts

A problem with all traditional rocket propulsion methods is that the spacecraft must carry its fuel with it, making it very massive, in accordance with the rocket equation. Several concepts attempt to escape this problem:[42]
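The rocket equation referred to here is Tsiolkovsky's: Δv = v_e ln(m_initial / m_final). A minimal sketch, with illustrative numbers chosen for this example rather than taken from any particular design:

```python
import math

C = 299_792_458.0

def mass_ratio(delta_v: float, v_exhaust: float) -> float:
    """Tsiolkovsky rocket equation, solved for the required
    initial-to-final mass ratio: m_i / m_f = exp(delta_v / v_e)."""
    return math.exp(delta_v / v_exhaust)

# Illustrative: reaching 10% of c with a fusion-like exhaust
# velocity of 3% of c.
print(f"mass ratio: {mass_ratio(0.10 * C, 0.03 * C):.0f}")
# → mass ratio: 28
```

The exponential is what makes onboard fuel so punishing: with chemical exhaust velocities (~4.5 km/s) the same Δv would require a mass ratio of e^(0.1c / 4.5 km/s), an astronomically large number.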

Interstellar ramjets

In 1960, Robert W. Bussard proposed the Bussard ramjet, a fusion rocket in which a huge scoop would collect the diffuse hydrogen in interstellar space, "burn" it on the fly using a proton–proton fusion reaction, and expel it out of the back. Later calculations with more accurate estimates suggest that the thrust generated would be less than the drag caused by any conceivable scoop design. Yet the idea is attractive because the fuel would be collected en route (commensurate with the concept of energy harvesting), so the craft could theoretically accelerate to near the speed of light.

Beamed propulsion

This diagram illustrates Robert L. Forward's scheme for slowing down an interstellar light-sail at the destination star system.[43]

A light sail or magnetic sail powered by a massive laser or particle accelerator in the home star system could potentially reach even greater speeds than rocket- or pulse propulsion methods, because it would not need to carry its own reaction mass and therefore would only need to accelerate the craft's payload. Robert L. Forward proposed a means for decelerating an interstellar light sail in the destination star system without requiring a laser array to be present in that system. In this scheme, a smaller secondary sail is deployed to the rear of the spacecraft, whereas the large primary sail is detached from the craft to keep moving forward on its own. Light is reflected from the large primary sail to the secondary sail, which is used to decelerate the secondary sail and the spacecraft payload.[44]

A magnetic sail could also decelerate at its destination without depending on carried fuel or a driving beam in the destination system, by interacting with the plasma found in the solar wind of the destination star and the interstellar medium.[45][46]

The following table lists some example concepts using beamed laser propulsion as proposed by the physicist Robert L. Forward:[47]

Mission / stage: laser power, vehicle mass, acceleration, sail diameter, maximum velocity (% of the speed of light)

1. Flyby - Alpha Centauri, 40 years
  outbound stage: 65 GW, 1 t, 0.036 g, 3.6 km, 11% @ 0.17 ly
2. Rendezvous - Alpha Centauri, 41 years
  outbound stage: 7,200 GW, 785 t, 0.005 g, 100 km, 21% @ 4.29 ly
  deceleration stage: 26,000 GW, 71 t, 0.2 g, 30 km, 21% @ 4.29 ly
3. Manned - Epsilon Eridani, 51 years (including 5 years exploring the star system)
  outbound stage: 75,000,000 GW, 78,500 t, 0.3 g, 1,000 km, 50% @ 0.4 ly
  deceleration stage: 21,500,000 GW, 7,850 t, 0.3 g, 320 km, 50% @ 10.4 ly
  return stage: 710,000 GW, 785 t, 0.3 g, 100 km, 50% @ 10.4 ly
  deceleration stage: 60,000 GW, 785 t, 0.3 g, 100 km, 50% @ 0.4 ly
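The accelerations in these entries can be roughly cross-checked with the light-pressure formula for a reflecting sail, F = 2RP/c. The sketch below assumes a perfectly reflecting sail fully illuminated by the beam, which overestimates slightly:

```python
C = 299_792_458.0
G0 = 9.81

def sail_acceleration(laser_power_w: float, mass_kg: float,
                      reflectivity: float = 1.0) -> float:
    """Acceleration of a laser-pushed sail. A reflected photon transfers
    twice its momentum, so thrust F = 2 * R * P / c."""
    return 2.0 * reflectivity * laser_power_w / (C * mass_kg)

# Flyby mission above: 65 GW onto a 1 t vehicle.
a = sail_acceleration(65e9, 1000.0)
print(f"ideal acceleration: {a / G0:.3f} g")
# → ≈ 0.044 g, versus 0.036 g in the table
```

The ideal figure comes out somewhat above the tabulated 0.036 g, consistent with a sail that is not perfectly reflective or a beam that is not fully intercepted.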

Pre-accelerated fuel

Achieving start-stop interstellar trip times of less than a human lifetime requires mass ratios of between 1,000 and 1,000,000, even for the nearer stars. This could be achieved by multi-staged vehicles on a vast scale.[39] Alternatively, large linear accelerators could propel fuel to fission-propelled space vehicles, avoiding the limitations of the rocket equation.[48]

Speculative methods

Quark matter

Scientist T. Marshall Eubanks thinks that nuggets of condensed quark matter, created during the Big Bang and each with a mass of 10^10 to 10^11 kg, may exist at the centers of some asteroids.[49] If so, these could be an enormous source of energy, as the nuggets could be used to generate huge quantities of antimatter (about a million tonnes per nugget). This would be enough to propel a spacecraft close to the speed of light.[50]
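To put "a million tonnes of antimatter" in perspective, annihilation with an equal mass of ordinary matter converts both masses entirely to energy:

```python
C = 299_792_458.0            # speed of light, m/s

# One million tonnes of antimatter annihilating with an equal mass
# of matter releases E = 2 m c^2 (both masses fully converted).
m_antimatter = 1e6 * 1000.0             # kg
energy = 2.0 * m_antimatter * C**2      # joules
print(f"annihilation energy per nugget: {energy:.1e} J")
# → ≈ 1.8e26 J
```

For scale, that is several hundred thousand times the world's annual primary energy consumption (of order 6×10^20 J), though turning it into directed thrust is a separate problem, as the antimatter-rocket section above notes.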

Hawking radiation rockets

In a black hole starship, a parabolic reflector would reflect Hawking radiation from an artificial black hole. In 2009, Louis Crane and Shawn Westmoreland of Kansas State University published a paper investigating the feasibility of this idea. Their conclusion was that it was on the edge of possibility, but that quantum gravity effects that are presently unknown may make it easier or make it impossible.[51][52]
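The power available from such a black hole can be sketched with the standard photon-only Hawking formula; the 10^9 kg mass below is an illustrative value in the regime such proposals consider, not a number from the Crane–Westmoreland paper:

```python
import math

HBAR = 1.054_571_817e-34    # reduced Planck constant, J*s
C = 299_792_458.0           # speed of light, m/s
G = 6.674_30e-11            # gravitational constant, m^3 kg^-1 s^-2

def hawking_power(mass_kg: float) -> float:
    """Photon-only Hawking radiation power,
    P = hbar c^6 / (15360 pi G^2 M^2).
    Including all emitted particle species raises this estimate."""
    return HBAR * C**6 / (15360 * math.pi * G**2 * mass_kg**2)

print(f"P ≈ {hawking_power(1e9):.1e} W")
# → ≈ 3.6e14 W for a 1e9 kg black hole
```

Note the inverse-square dependence on mass: smaller black holes radiate far more power but also evaporate far sooner, which is the engineering trade-off the paper examines.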

Magnetic monopole rockets

If some grand unification models, for example the 't Hooft–Polyakov model, are correct, it would be possible to construct a photonic engine that uses no antimatter, because a magnetic monopole could hypothetically catalyze the decay of a proton into a positron and a neutral pion:[53][54]
p \rightarrow e^{+} + \pi^0
The π0 decays rapidly into two photons, and the positron annihilates with an electron to give two more photons. As a result, a hydrogen atom is turned into four photons, and only the problem of a mirror remains unresolved.

A magnetic monopole engine could also work on a once-through scheme such as the Bussard ramjet (see above).

At the same time, most of the modern Grand unification theories such as M-theory predict no magnetic monopoles, which casts doubt on this attractive idea.

Faster-than-light travel

Artist's depiction of a hypothetical Wormhole Induction Propelled Spacecraft, based loosely on the 1994 "warp drive" paper of Miguel Alcubierre. Credit: NASA CD-98-76634 by Les Bossinas.

Scientists and authors have postulated a number of ways by which it might be possible to surpass the speed of light. Even the most serious-minded of these are speculative.

It is also debated whether this is possible, in part, because of causality concerns, because in essence travel faster than light is equivalent to going back in time. Proposed mechanisms for faster-than-light travel within the theory of general relativity require the existence of exotic matter.
Alcubierre drive
According to Einstein's equation of general relativity, spacetime is curved:
G_{\mu\nu} = 8\pi G \, T_{\mu\nu}
General relativity may permit the travel of an object faster than light in curved spacetime.[55] One could imagine exploiting the curvature to take a "shortcut" from one point to another. This is one form of the warp drive concept.

In physics, the Alcubierre drive is based on an argument that the curvature could take the form of a wave in which a spaceship might be carried in a "bubble". Space would be collapsing at one end of the bubble and expanding at the other end. The motion of the wave would carry a spaceship from one space point to another in less time than light would take through unwarped space. Nevertheless, the spaceship would not be moving faster than light within the bubble. This concept would require the spaceship to incorporate a region of exotic matter, or "negative mass".
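For reference, the bubble in Alcubierre's original 1994 paper is described by the metric (in units with c = 1):
ds^2 = -dt^2 + \left(dx - v_s f(r_s)\,dt\right)^2 + dy^2 + dz^2
where v_s is the velocity of the bubble's center and f(r_s) is a smooth "top-hat" function equal to 1 inside the bubble and 0 far from it. Inserting this metric into the Einstein equation above yields an energy density that is negative in a region around the bubble wall, which is why the scheme requires exotic matter.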
Artificial gravity control
Scientist Lance Williams thinks that gravity can be controlled artificially through electromagnetic control.[56]
Wormholes
Wormholes are conjectural distortions in spacetime that theorists postulate could connect two arbitrary points in the universe, across an Einstein–Rosen Bridge. It is not known whether wormholes are possible in practice. Although there are solutions to the Einstein equation of general relativity that allow for wormholes, all of the currently known solutions involve some assumption, for example the existence of negative mass, which may be unphysical.[57] However, Cramer et al. argue that such wormholes might have been created in the early universe, stabilized by cosmic string.[58] The general theory of wormholes is discussed by Visser in the book Lorentzian Wormholes.[59]

Designs and studies

Enzmann starship

The Enzmann starship, as detailed by G. Harry Stine in the October 1973 issue of Analog, was a design for a future starship, based on the ideas of Dr. Robert Duncan-Enzmann.[60] The spacecraft itself as proposed used a 12,000,000 ton ball of frozen deuterium to power 12–24 thermonuclear pulse propulsion units.[60] Twice as long as the Empire State Building and assembled in-orbit, the spacecraft was part of a larger project preceded by interstellar probes and telescopic observation of target star systems.[60][61]

Project Hyperion

Project Hyperion is one of the projects of Icarus Interstellar.[62]

NASA research

NASA has been researching interstellar travel since its formation, translating important foreign-language papers and conducting early studies on applying fusion propulsion (in the 1960s) and laser propulsion (in the 1970s) to interstellar travel.

The NASA Breakthrough Propulsion Physics Program (terminated in FY 2003 after a 6-year, $1.2-million study, because "No breakthroughs appear imminent.")[63] identified some breakthroughs that are needed for interstellar travel to be possible.[64]

Geoffrey A. Landis of NASA's Glenn Research Center states that a laser-powered interstellar sail ship could possibly be launched within 50 years, using new methods of space travel. "I think that ultimately we're going to do it, it's just a question of when and who," Landis said in an interview. Rockets are too slow to send humans on interstellar missions. Instead, he envisions interstellar craft with extensive sails, propelled by laser light to about one-tenth the speed of light. It would take such a ship about 43 years to reach Alpha Centauri, if it passed through the system. Slowing down to stop at Alpha Centauri could increase the trip to 100 years,[65] whereas a journey without slowing down raises the issue of making sufficiently accurate and useful observations and measurements during a fly-by.
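The ~43-year figure follows directly from the distance to Alpha Centauri (about 4.37 light-years) and a cruise speed of one-tenth of c:

```python
C = 299_792_458.0
LY = 9.4607e15                  # one light-year in metres
YEAR = 365.25 * 24 * 3600       # one Julian year in seconds

# Flyby travel time = distance / speed, ignoring acceleration phases.
t = 4.37 * LY / (0.1 * C) / YEAR
print(f"flyby travel time at 0.1 c: {t:.1f} years")
# → 43.7 years
```

Decelerating for a rendezvous roughly doubles this, consistent with the ~100-year estimate quoted above.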

100 Year Starship study

The 100 Year Starship (100YSS) is the name of the overall effort that will, over the next century, work toward achieving interstellar travel. The 100 Year Starship study is the name of a one-year project to assess the attributes of, and lay the groundwork for, an organization that can carry forward the 100 Year Starship vision.

Dr. Harold ("Sonny") White[66] from NASA's Johnson Space Center is a member of Icarus Interstellar,[67] the nonprofit foundation whose mission is to realize interstellar flight before the year 2100. At the 2012 meeting of 100YSS, he reported using a laser to try to warp spacetime by 1 part in 10 million with the aim of helping to make interstellar travel possible.[68]

Other designs

Non-profit organisations

A few organisations dedicated to interstellar propulsion research and advocacy exist worldwide. These are still in their infancy, but are already backed by a membership comprising a wide variety of scientists, students and professionals.

Skepticism

The energy requirements make interstellar travel very difficult. It has been reported that at the 2008 Joint Propulsion Conference, multiple experts opined that it was improbable that humans would ever explore beyond the Solar System.[75] Brice N. Cassenti, an associate professor with the Department of Engineering and Science at Rensselaer Polytechnic Institute, stated that at least the total energy output of the entire world in a given year would be required to send a probe to the nearest star.[75]
