
Tuesday, January 27, 2015

Water Vapor Important in Greenhouse Effect, but NOT Anthropogenic Global Warming

The earth's atmosphere contains about 10,000 ppm water vapor but (currently) only 400 ppm CO2, a 25-to-1 ratio. Yet water vapor contributes only about twice as much to the overall greenhouse effect as CO2 does. This suggests that, per unit concentration, CO2 is about 12.5 times stronger a greenhouse gas than water vapor (25/2 = 12.5).

Many people confuse the overall greenhouse effect with our recent amplification of it (which is still quite small), but they are two very different things, and it is important to understand why.

The strength of a greenhouse gas was given by Arrhenius in the late 1800s by the equation:

Radiative Forcing = a * ln(X/X0), where a is a constant that depends on the particular gas, and ln(X/X0) is the natural logarithm of the current gas level divided by the original level.  In the case of CO2, a ≈ 6; with X0 ≈ 300 ppm (about the beginning of the 20th century) and X, the current level, ≈ 400 ppm, the calculation yields about 1.7 watts per square meter, which matches measurements pretty well.
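As a sanity check, the CO2 number is a one-line calculation. Here is a minimal Python sketch using only the values assumed above (a ≈ 6, roughly 300 ppm then and 400 ppm now):

    import math

    def radiative_forcing(a, x, x0):
        # Arrhenius-style forcing in W/m^2: a * ln(X/X0)
        return a * math.log(x / x0)

    # CO2: a ~ 6, ~300 ppm around 1900 rising to ~400 ppm today
    print(radiative_forcing(6.0, 400.0, 300.0))  # ~1.73 W/m^2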

In the case of water vapor, a = 6/12.5 ≈ 0.48.  Now, using a rule of thumb from chemistry, a 10 C rise in air temperature doubles the amount of water vapor the air can hold.  A 1 C rise, roughly the difference between 1900 and 2000, thus works out to about a 7% increase in atmospheric water vapor, since 2^(1/10) ≈ 1.07.

Let's use Arrhenius for water vapor.  A 7% increase takes 10,000 ppm to about 10,700 ppm, so Radiative Forcing = 0.48 * ln(10,700/10,000) ≈ 0.03 watts per square meter, or about 2% of the forcing produced by CO2.
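Putting the two gases side by side, still a rough sketch built entirely on the assumptions above (the 12.5:1 per-unit strength ratio and the doubling-per-10-C rule of thumb):

    import math

    a_co2 = 6.0
    a_h2o = a_co2 / 12.5            # CO2 assumed ~12.5x stronger per unit concentration
    increase = 2 ** (1 / 10) - 1    # ~7% more water vapor per 1 C of warming

    f_co2 = a_co2 * math.log(400 / 300)     # ~1.73 W/m^2
    f_h2o = a_h2o * math.log(1 + increase)  # ~0.03 W/m^2
    print(f_h2o / f_co2)                    # ~0.02, i.e. about 2% of the CO2 forcing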

If my reasoning and calculations are correct, then water vapor is not a significant contributor to AGW, even though it is the major greenhouse absorber in our atmosphere.

Bioengineers develop tool for reprogramming genetic code

By Bjorn Carey
Original link:  http://phys.org/news/2015-01-bioengineers-tool-reprogramming-genetic-code.html

Stanford bioengineers have developed a new tool that allows them to preferentially activate or deactivate genes in living cells. Credit: vitstudio/Shutterstock
Biology relies upon the precise activation of specific genes to work properly. If that sequence gets out of whack, or a gene turns on only partially, the result can be disease.

Standard CRISPR consists of two components: a short RNA that matches a particular spot in the genome, and a protein called Cas9 that snips the DNA in that location. For the purposes of gene editing, scientists can control where the protein snips the genome, insert a new gene into the cut and patch it back together.

Inserting new genes, however, is just one way to influence how the genome is expressed. Another involves telling the cell how much or how little to activate a particular gene, thus controlling how much protein a cell produces from that gene and altering its behavior.
It's this action that Lei Stanley Qi, an assistant professor of bioengineering and of chemical and systems biology at Stanford, and his colleagues aim to manipulate.

Influencing the genome

In the new work, the researchers describe how they have designed the CRISPR molecule to include a second piece of information on the RNA, instructing the molecule to either increase (upregulate) or decrease (downregulate) a target gene's activity, or turn it on/off entirely.

Additionally, they designed it so that it could affect two different genes at once. In a cell, the order or degree in which genes are activated can produce different metabolic products.

"It's like driving a car. You control the wheel to control direction, and the engine to control the speed, and how you balance the two determines how the car moves," Qi said. "We can do the same thing in the cell by up- or downregulating genes, and produce different outcomes."

As a proof of principle, the scientists used the technique to take control of a yeast metabolic pathway, turning genes on and off in various orders to produce four different end products. They then tested it on two mammalian genes that are important in cell mobility, and were able to control the cell's direction and how fast it moved.

Future therapies

The ability to control genes is an attractive approach in designing genetic therapies for complex diseases that involve multiple genes, Qi said, and the new system may overcome several of the challenges of existing experimental therapies.

"Our technique allows us to directly control multiple specific and pathways in the genome without expressing new transgenes or uncontrolled behaviors, such as producing too much of a protein, or doing so in the wrong cells," Qi said. "We could eventually synthesize tens of thousands of RNA molecules to control the genome over a whole organism."

Next, Qi plans to test the technique in mice and refine the delivery method. Currently the scientists use a virus to insert the molecule into a cell, but he would eventually like to simply inject the molecules into an organism's blood.

"That is what is so exciting about working at Stanford, because the School of Medicine's immunology group is just around the corner, and working with them will help us address how to do this without triggering an immune response," said Qi, who is a member of the interdisciplinary Stanford ChEM-H institute. "I'm optimistic because everything about this system comes naturally from , and should be compatible with any organism."

Telescope

From Wikipedia, the free encyclopedia
The 100 inch (2.54 m) Hooker reflecting telescope at Mount Wilson Observatory near Los Angeles, USA.

A telescope is an instrument that aids in the observation of remote objects by collecting electromagnetic radiation (such as visible light). The first known practical telescopes were invented in the Netherlands at the beginning of the 17th century, using glass lenses. They found use in terrestrial applications and astronomy.

Within a few decades, the reflecting telescope was invented, which used mirrors. In the 20th century many new types of telescopes were invented, including radio telescopes in the 1930s and infrared telescopes in the 1960s. The word telescope now refers to a wide range of instruments detecting different regions of the electromagnetic spectrum, and in some cases other types of detectors.

The word "telescope" (from the Greek τῆλε, tele "far" and σκοπεῖν, skopein "to look or see"; τηλεσκόπος, teleskopos "far-seeing") was coined in 1611 by the Greek mathematician Giovanni Demisiani for one of Galileo Galilei's instruments presented at a banquet at the Accademia dei Lincei.[1][2][3] In the Starry Messenger, Galileo had used the term "perspicillum".

History

Modern telescopes typically use CCDs instead of film for recording images. This is the sensor array in the Kepler spacecraft.

The earliest recorded working telescopes were the refracting telescopes that appeared in the Netherlands in 1608. Their development is credited to three individuals: Hans Lippershey and Zacharias Janssen, who were spectacle makers in Middelburg, and Jacob Metius of Alkmaar.[4] Galileo heard about the Dutch telescope in June 1609, built his own within a month,[5] and greatly improved upon the design in the following year.

The idea that the objective, or light-gathering element, could be a mirror instead of a lens was being investigated soon after the invention of the refracting telescope.[6] The potential advantages of using parabolic mirrors—reduction of spherical aberration and no chromatic aberration—led to many proposed designs and several attempts to build reflecting telescopes.[7] In 1668, Isaac Newton built the first practical reflecting telescope, of a design which now bears his name, the Newtonian reflector.

The invention of the achromatic lens in 1733 partially corrected the color aberrations present in a simple lens and enabled the construction of shorter, more functional refracting telescopes. Reflecting telescopes, though not limited by the color problems seen in refractors, were hampered by the fast-tarnishing speculum metal mirrors employed during the 18th and early 19th century, a problem alleviated by the introduction of silver-coated glass mirrors in 1857[8] and aluminized mirrors in 1932.[9] The maximum practical aperture for refracting telescopes is about 1 meter (40 inches), which is why the vast majority of large optical research telescopes built since the turn of the 20th century have been reflectors. The largest reflecting telescopes currently have objectives larger than 10 m (33 feet), and work is underway on several 30–40 m designs.

The 20th century also saw the development of telescopes that worked in a wide range of wavelengths from radio to gamma rays. The first purpose-built radio telescope went into operation in 1937. Since then, a tremendous variety of complex astronomical instruments have been developed.

Types

The name "telescope" covers a wide range of instruments. Most detect electromagnetic radiation, but there are major differences in how astronomers must go about collecting light (electromagnetic radiation) in different frequency bands.

Telescopes may be classified by the wavelengths of light they detect:
Light Comparison

Name | Wavelength | Frequency | Photon Energy
Gamma ray | less than 0.01 nm | more than 30 EHz | 100 keV – 300+ GeV
X-ray | 0.01 nm – 10 nm | 30 PHz – 30 EHz | 120 eV – 120 keV
Ultraviolet | 10 nm – 400 nm | 30 PHz – 790 THz | 3 eV – 124 eV
Visible | 390 nm – 750 nm | 790 THz – 405 THz | 1.7 eV – 3.3 eV
Infrared | 750 nm – 1 mm | 405 THz – 300 GHz | 1.24 meV – 1.7 eV
Microwave | 1 mm – 1 meter | 300 GHz – 300 MHz | 1.24 meV – 1.24 µeV
Radio | 1 mm – km | 300 GHz – 3 Hz | 1.24 meV – 12.4 feV
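The columns above are tied together by c = fλ and E = hf, so each row can be reproduced from its wavelength alone. A minimal Python sketch of the conversion:

    h = 4.135667e-15   # Planck constant, eV*s
    c = 2.997925e8     # speed of light, m/s

    def photon(wavelength_m):
        frequency_hz = c / wavelength_m
        energy_ev = h * frequency_hz
        return frequency_hz, energy_ev

    print(photon(400e-9))   # UV/visible boundary: ~7.5e14 Hz, ~3.1 eV
    print(photon(1e-11))    # gamma/X-ray boundary (0.01 nm): ~3e19 Hz, ~124 keV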
As wavelengths become longer, it becomes easier to use antenna technology to interact with electromagnetic radiation (although it is possible to make very tiny antennas). The near-infrared can be handled much like visible light; however, in the far-infrared and submillimetre range, telescopes can operate more like a radio telescope. For example, the James Clerk Maxwell Telescope observes at wavelengths from 3 μm (0.003 mm) to 2000 μm (2 mm), but uses a parabolic aluminum antenna.[10] On the other hand, the Spitzer Space Telescope, observing from about 3 μm (0.003 mm) to 180 μm (0.18 mm), uses a mirror (reflecting optics). Also using reflecting optics, the Hubble Space Telescope with Wide Field Camera 3 can observe from about 0.2 μm (0.0002 mm) to 1.7 μm (0.0017 mm), from ultraviolet to infrared light.[11]
Another threshold in telescope design, as photon energy increases (shorter wavelengths and higher frequency), is the use of fully reflecting optics rather than glancing-incidence optics. Telescopes such as TRACE and SOHO use special mirrors to reflect extreme ultraviolet, producing higher resolution and brighter images than otherwise possible. A larger aperture does not just mean that more light is collected; it also permits a finer diffraction limit.
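The diffraction limit scales as wavelength over aperture diameter (θ ≈ 1.22 λ/D for a circular aperture, the Rayleigh criterion), which is why aperture matters beyond light-gathering power. A quick illustration:

    import math

    def diffraction_limit_arcsec(wavelength_m, aperture_m):
        # Smallest resolvable angle for a circular aperture (Rayleigh criterion)
        theta_rad = 1.22 * wavelength_m / aperture_m
        return math.degrees(theta_rad) * 3600.0

    print(diffraction_limit_arcsec(500e-9, 2.4))   # 2.4 m mirror, visible light: ~0.05 arcsec
    print(diffraction_limit_arcsec(180e-6, 2.4))   # same mirror, 180 um far-IR: ~19 arcsec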

Telescopes may also be classified by location: ground telescope, space telescope, or flying telescope. They may also be classified by whether they are operated by professional astronomers or amateur astronomers. A vehicle or permanent campus containing one or more telescopes or other instruments is called an observatory.

Optical telescopes


50 cm refracting telescope at Nice Observatory.
 
An optical telescope gathers and focuses light mainly from the visible part of the electromagnetic spectrum (although some work in the infrared and ultraviolet).[12] Optical telescopes increase the apparent angular size of distant objects as well as their apparent brightness. So that the image can be observed, photographed, studied, and sent to a computer, telescopes employ one or more curved optical elements, usually made from glass lenses and/or mirrors, to gather light and other electromagnetic radiation and bring it to a focal point. Optical telescopes are used for astronomy and in many non-astronomical instruments, including theodolites (including transits), spotting scopes, monoculars, binoculars, camera lenses, and spyglasses. There are three main optical types: the refracting telescope, which uses lenses; the reflecting telescope, which uses mirrors; and the catadioptric telescope, which combines lenses and mirrors. Beyond these basic optical types there are many sub-types of varying optical design, classified by the task they perform, such as astrographs, comet seekers, and solar telescopes.

Radio telescopes


The Very Large Array at Socorro, New Mexico, United States.
 
Radio telescopes are directional radio antennas used for radio astronomy. The dishes are sometimes constructed of a conductive wire mesh whose openings are smaller than the wavelength being observed. Multi-element radio telescopes are constructed from pairs or larger groups of these dishes to synthesize large 'virtual' apertures similar in size to the separation between the telescopes; this process is known as aperture synthesis. As of 2005, the record array size is many times the width of the Earth, utilizing space-based Very Long Baseline Interferometry (VLBI) telescopes such as the Japanese HALCA (Highly Advanced Laboratory for Communications and Astronomy) VSOP (VLBI Space Observatory Program) satellite. Aperture synthesis is now also being applied to optical telescopes using optical interferometers (arrays of optical telescopes) and aperture-masking interferometry at single reflecting telescopes. Radio telescopes are also used to collect microwave radiation, which can be observed even when visible light is obstructed or faint, for example from quasars. Some radio telescopes are used by programs such as SETI and the Arecibo Observatory to search for extraterrestrial life.

X-ray telescopes


Einstein Observatory was a space-based focusing optical X-ray telescope from 1978.[13]
 
X-ray telescopes can use X-ray optics, such as Wolter telescopes composed of ring-shaped 'glancing' mirrors made of heavy metals that are able to reflect the rays at angles of just a few degrees. The mirrors are usually a section of a rotated parabola and a hyperbola, or ellipse. In 1952, Hans Wolter outlined three ways a telescope could be built using only this kind of mirror.[14][15] Examples of observatories using this type of telescope are the Einstein Observatory, ROSAT, and the Chandra X-ray Observatory. By 2010, Wolter focusing X-ray telescopes had become possible up to 79 keV.[13]

Gamma-ray telescopes

Higher-energy X-ray and gamma-ray telescopes forgo focusing optics entirely and use coded aperture masks: the pattern of shadows the mask creates can be reconstructed to form an image.

X-ray and gamma-ray telescopes are usually mounted on Earth-orbiting satellites or high-flying balloons, since the Earth's atmosphere is opaque to this part of the electromagnetic spectrum. However, telescopes for high-energy X-rays and gamma-rays do not form an image in the same way as telescopes at visible wavelengths. An example of this type of telescope is the Fermi Gamma-ray Space Telescope.

The detection of very-high-energy gamma rays, with shorter wavelength and higher frequency than regular gamma rays, requires further specialization. An example of this type of observatory is VERITAS. Very-high-energy gamma rays are still photons, like visible light, whereas cosmic rays include particles like electrons, protons, and heavier nuclei.

A discovery in 2012 may allow focusing gamma-ray telescopes.[16] At photon energies greater than 700 keV, the index of refraction starts to increase again.[16]

High-energy particle telescopes

High-energy particle astronomy requires specialized telescopes, since most of the particles involved pass through most metals and glasses.

In these telescopes there is generally no image-forming optical system. Cosmic-ray telescopes usually consist of an array of different detector types spread out over a large area. A neutrino telescope consists of a large mass of water or ice surrounded by an array of sensitive light detectors known as photomultiplier tubes. Energetic neutral atom observatories like the Interstellar Boundary Explorer detect particles traveling at certain energies.

Other types of telescopes


Equatorial-mounted Keplerian telescope

Astronomy is not limited to using electromagnetic radiation. Additional information can be obtained using other media; the detectors used to observe the Universe through these media are analogous to telescopes.

Types of mount

A telescope mount is a mechanical structure which supports a telescope. Telescope mounts are designed to support the mass of the telescope and allow for accurate pointing of the instrument. Many sorts of mounts have been developed over the years, with the majority of effort being put into systems that can track the motion of the stars as the Earth rotates. The two main types of tracking mount are the altazimuth mount and the equatorial mount.

Atmospheric electromagnetic opacity

Since the atmosphere is opaque across most of the electromagnetic spectrum, only a few bands can be observed from the Earth's surface. These bands are the visible through near-infrared and a portion of the radio-wave part of the spectrum. For this reason there are no ground-based X-ray or far-infrared telescopes; these must observe from space. Even if a wavelength is observable from the ground, it can still be advantageous to place a telescope on a satellite to escape the atmospheric turbulence that limits astronomical seeing.

A diagram of the electromagnetic spectrum with the Earth's atmospheric transmittance (or opacity) and the types of telescopes used to image parts of the spectrum.

Telescopic image from different telescope types

Different types of telescope, operating in different wavelength bands, provide different information about the same object. Together they provide a more comprehensive understanding.

A 6′ wide view of the Crab nebula supernova remnant, viewed at different wavelengths of light by various telescopes

By spectrum

Telescopes that operate in the electromagnetic spectrum:
Name | Telescope | Astronomy | Wavelength
Radio | Radio telescope | Radio astronomy (Radar astronomy) | more than 1 mm
Submillimetre | Submillimetre telescopes | Submillimetre astronomy | 0.1 mm – 1 mm
Far Infrared | – | Far-infrared astronomy | 30 µm – 450 µm
Infrared | Infrared telescope | Infrared astronomy | 700 nm – 1 mm
Visible | Visible-spectrum telescopes | Visible-light astronomy | 400 nm – 700 nm
Ultraviolet | Ultraviolet telescopes | Ultraviolet astronomy | 10 nm – 400 nm
X-ray | X-ray telescope | X-ray astronomy | 0.01 nm – 10 nm
Gamma-ray | – | Gamma-ray astronomy | less than 0.01 nm



‘I emailed a message between two brains’

 As internet connections become faster and more of the devices we carry help keep us online, it can sometimes feel like we’re on the verge of spontaneous email communication. I send an email, you receive it, open it, and respond – all in a matter of seconds. Regardless of whether you think near-instant communication is a good thing or not, it’s certainly happening. Not long ago we routinely waited days or weeks for a letter – today even waiting hours for a reply can feel like an eternity.
Perhaps the ultimate way to speed up online communication would be to push towards direct brain-to-brain communication over the web. If brains were directly connected, there would be no more need for pesky typing – we could simply think of an idea and send it instantly to a friend, whether they are in the same room or half the world away. We’re not there yet, of course, but a recent study took a first step in that direction, claiming direct brain-to-brain communication over the internet between people thousands of miles from one another.

The work is simply a proof of concept, as Giulio Ruffini, one of the researchers on the project – and CEO of Starlab, based in Barcelona – is quick to explain. The team did not, as some reported, send words or thoughts or emotions from one brain to another. Instead they did something much simpler.
Technology to detect brainwaves can be used to broadcast simple messages (Thinkstock)
Here’s how it worked. One subject – in this case a man in Kerala, India – was fitted with a brain-computer interface that records brainwaves through the scalp. He was then instructed to imagine moving either his hands or his feet. If he imagined moving his feet, the computer recorded a zero. If he imagined moving his hands, it recorded a one.

This string of zeros and ones was then sent through the internet to a receiver: a man in Strasbourg, France. He was fitted with something called a TMS robot – a robot designed to deliver strong but short electrical pulses to the brain. When the sender thought about moving his hands, the TMS robot zapped the receiver’s brain in a way that made him see light – even though his eyes were closed. The receiver saw no light if the sender thought about moving his feet.

To make the message more meaningful, the researchers came up with a cipher: one string of zeros and ones (or hands and feet) meant “hola” and another meant “ciao”. The receiver – who had also been taught the cipher – could then decode the signal of lights to interpret which word the sender had sent.
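In outline, the whole pipeline is a one-bit channel plus a shared codebook. The sketch below mimics it in Python; the actual bit strings used for “hola” and “ciao” were not published in this article, so the codebook here is hypothetical:

    # Hypothetical codebook: the study's real bit patterns are not given in the article.
    CODEBOOK = {"hola": "01110", "ciao": "10010"}
    DECODE = {bits: word for word, bits in CODEBOOK.items()}

    def send(word):
        # Sender: imagine moving hands (1) or feet (0) for each bit
        return [int(b) for b in CODEBOOK[word]]

    def receive(bits):
        # Receiver: flash of light (1) vs. no flash (0), then decode via the cipher
        return DECODE["".join(str(b) for b in bits)]

    bits = send("hola")
    print(bits, "->", receive(bits))
    # At the reported ~2 bits per minute, these 5 bits take ~2.5 minutes to arrive.
    print(len(bits) / 2, "minutes")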

Deep concentration

This might sound simple, but at each stage there are complications. The sender has to concentrate extremely hard to focus only on imagining moving their hands or feet. Any other activity in the brain can cloud the signal, and make it hard to pick up the message. In fact, the sender had to be trained in how to do this properly.
The whole process isn’t fast, either. The researchers estimated that from brain to brain the transmission speed was about two bits (a zero and a one) per minute. So to get even a simple message from one brain to another would take a while. But when it happened, and it worked, Ruffini says it was exciting.

“I mean, you can look at this experiment in two ways,” he says. “On the one hand it’s quite technical and a very humble proof of concept. On the other hand, this was the first time it was done, so it was a little bit of a historical moment I suppose, and it was pretty exciting. After all the years thinking about it and finding the means to do it, it felt pretty good.”

Just a stunt?

There is actually some debate over whether the experiment really does represent a first. Last year, a team at Harvard hooked up a man’s brain to a rat’s tail, and he was able to make the tail twitch just by thinking. Also last year, a group at the University of Washington was able to create a brain-to-brain interface in which a sender gained some control over a receiver’s motor cortex, allowing him to send messages that caused the receiver’s hand to subconsciously strike a keyboard. Consequently, one scientist told IEEE Spectrum he thought Ruffini’s work was “pretty much a stunt”, and had “all been shown before”. But Ruffini’s experiment is certainly the first in which a brain-to-brain connection was attempted over such a great distance, and the first time the receiver was consciously interpreting the signal.
And Ruffini has bigger dreams. He wants to transmit feelings, sensations, and complete thoughts between brains. “The technology is very limited right now, but some day can be very powerful,” he says. “Some day we will transcend verbal communications.”

There are advantages to doing so, he says. Receiving another’s thoughts directly into your brain might allow people to more effectively put themselves in someone else’s shoes and understand how they feel, which could make the world a better place. “I think most of the world’s problems stem from the fact that we have different viewpoints and we don’t understand how other people see or feel about the world,” he says, “Being able to actually feel what other people are feeling, it would change a lot.” He even talks about applying the method to animals, to understand their world and feelings.
Before they can send fully formed concepts, the next step for the team is to try to transmit something more complicated than a one or a zero. This might involve stimulating the brain at multiple sites, and moving beyond using the perception of light as the signal. “The way we have encoded information in the brain, it’s distributed, there is not a single place where the word ‘hello’ is stored,” says Ruffini. To transmit language directly, he says, the researchers will have to figure out how to stimulate the networked brain in a new way. And if they want to send sensations, they’ll have to figure out how to stimulate those segments of the brain too. What makes the task even harder is the fact that the researchers want to do this stimulation externally, without invasive – but more precise – brain implants.

Of course, with this kind of power comes danger too. Anything sent over the internet can be hacked and tracked. The ability to send messages directly into a person’s brain is, to some, a terrifying concept. “It can potentially be some day used in a negative way – you could try to take control of [somebody’s] motor system,” says Ruffini. But he points out that researchers are a long way from being able to do anything even remotely so sophisticated.

Still, it remains an intriguing thought that one day, many decades from now, you might be digesting emails, messages or even an article like this one directly into your mind.

Evolution of the eye


From Wikipedia, the free encyclopedia


Major stages in the evolution of the eye.

The evolution of the eye has been a subject of significant study, as a distinctive example of a homologous organ present in a wide variety of taxa. Complex, image-forming eyes evolved independently some 50 to 100 times.[1]

Complex eyes appear to have first evolved within a few million years, in the rapid burst of evolution known as the Cambrian explosion. There is no evidence of eyes before the Cambrian, but a wide range of diversity is evident in the Middle Cambrian Burgess Shale and the slightly older Emu Bay Shale.[2] Eyes show a wide range of adaptations to meet the requirements of the organisms which bear them. Eyes vary in their visual acuity, the range of wavelengths they can detect, their sensitivity in low light, their ability to detect motion or resolve objects, and whether they can discriminate colours.

History of research


The human eye, demonstrating the iris

In 1802, philosopher William Paley called the eye a miracle of "design". Charles Darwin himself wrote in his Origin of Species that the evolution of the eye by natural selection at first glance seemed "absurd in the highest possible degree". However, he went on to explain that despite the difficulty in imagining it, this was perfectly feasible:
...if numerous gradations from a simple and imperfect eye to one complex and perfect can be shown to exist, each grade being useful to its possessor, as is certainly the case; if further, the eye ever varies and the variations be inherited, as is likewise certainly the case; and if such variations should be useful to any animal under changing conditions of life, then the difficulty of believing that a perfect and complex eye could be formed by natural selection, though insuperable by our imagination, should not be considered as subversive of the theory.[3]
He suggested a gradation from "an optic nerve merely coated with pigment, and without any other mechanism" to "a moderately high stage of perfection", giving examples of extant intermediate grades of evolution.[3] Darwin's suggestions were soon shown to be correct, and current research is investigating the genetic mechanisms responsible for eye development and evolution.[4]

Modern researchers have continued to build on this foundation. D.E. Nilsson has independently proposed four general stages in the evolution of a vertebrate eye from a patch of photoreceptors.[5] Nilsson and S. Pelger published a classic paper estimating how many generations are needed to evolve a complex eye in vertebrates.[6] Another researcher, G.C. Young, has used fossil evidence to draw evolutionary conclusions, based on the structure of eye orbits and openings in fossilized skulls for blood vessels and nerves to pass through.[7] All of this adds to the growing body of evidence supporting Darwin's theory.

Rate of evolution

The first fossils of eyes that have been found to date are from the lower Cambrian period (about 540 million years ago).[8] This period saw a burst of apparently rapid evolution, dubbed the "Cambrian explosion". One of the many hypotheses for "causes" of this diversification, the "Light Switch" theory of Andrew Parker, holds that the evolution of eyes initiated an arms race that led to a rapid spate of evolution.[9] Earlier than this, organisms may have had use for light sensitivity, but not for fast locomotion and navigation by vision.

It is difficult to estimate the rate of eye evolution because the fossil record, particularly of the Early Cambrian, is poor. The evolution of a circular patch of photoreceptor cells into a fully functional vertebrate eye has been approximated based on rates of mutation, relative advantage to the organism, and natural selection. Based on pessimistic calculations that consistently overestimate the time required for each stage and a generation time of one year, which is common in small animals, it has been proposed that it would take less than 364,000 years for the vertebrate eye to evolve from a patch of photoreceptors.[10][note 1]
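The arithmetic behind that estimate is simple compounding. A sketch using the figures as reported in Nilsson and Pelger's model (a pessimistic 0.005% change per generation, and an overall roughly 80-million-fold accumulation of small changes across all stages; both numbers are theirs, cited from memory, not derived here):

    import math

    total_change = 80_129_540     # overall fold-change accumulated across all stages
    per_generation = 1.00005      # pessimistic 0.005% improvement each generation

    generations = math.log(total_change) / math.log(per_generation)
    print(round(generations))     # ~364,000 generations; ~364,000 years at 1 year each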

One origin or many?

Whether one considers the eye to have evolved once or multiple times depends somewhat on the definition of an eye. Much of the genetic machinery employed in eye development is common to all eyed organisms, which may suggest that their ancestor utilized some form of light-sensitive machinery – even if it lacked a dedicated optical organ. However, even photoreceptor cells may have evolved more than once from molecularly similar chemoreceptors, and photosensitive cells probably existed long before the Cambrian explosion.[11] Higher-level similarities – such as the use of the protein crystallin in the independently derived cephalopod and vertebrate lenses[12] – reflect the co-option of a protein from a more fundamental role to a new function within the eye.[13]

Shared traits common to all light-sensitive organs include the family of photo-receptive proteins called opsins. All seven sub-families of opsin were already present in the last common ancestor of animals. In addition, the genetic toolkit for positioning eyes is common to all animals: the PAX6 gene controls where the eye develops in organisms ranging from octopuses[14] to mice to fruit flies.[15][16][17] These high-level genes are, by implication, much older than many of the structures that they are today seen to control; they must originally have served a different purpose, before being co-opted for a new role in eye development.[13]

Sensory organs probably evolved before the brain did—there is no need for an information-processing organ (brain) before there is information to process.[18]

Stages of eye evolution


The stigma (2) of the euglena hides a light-sensitive spot.

The earliest predecessors of the eye were photoreceptor proteins that sense light, found even in unicellular organisms, called "eyespots". Eyespots can only sense ambient brightness: they can distinguish light from dark, sufficient for photoperiodism and daily synchronization of circadian rhythms. They are insufficient for vision, as they cannot distinguish shapes or determine the direction light is coming from. Eyespots are found in nearly all major animal groups, and are common among unicellular organisms, including euglena. The euglena's eyespot, called a stigma, is located at its anterior end. It is a small splotch of red pigment which shades a collection of light sensitive crystals. Together with the leading flagellum, the eyespot allows the organism to move in response to light, often toward the light to assist in photosynthesis,[19] and to predict day and night, the primary function of circadian rhythms. Visual pigments are located in the brains of more complex organisms, and are thought to have a role in synchronising spawning with lunar cycles. By detecting the subtle changes in night-time illumination, organisms could synchronise the release of sperm and eggs to maximise the probability of fertilisation.

Vision itself relies on a basic biochemistry which is common to all eyes. However, how this biochemical toolkit is used to interpret an organism's environment varies widely: eyes have a wide range of structures and forms, all of which have evolved quite late relative to the underlying proteins and molecules.[19]

At a cellular level, there appear to be two main "designs" of eyes, one possessed by the protostomes (molluscs, annelid worms and arthropods), the other by the deuterostomes (chordates and echinoderms).[19]

The functional unit of the eye is the receptor cell, which contains the opsin proteins and responds to light by initiating a nerve impulse. The light-sensitive opsins are borne on a hairy layer, to maximise the surface area. The nature of these "hairs" differs, with two basic forms underlying photoreceptor structure: microvilli and cilia.[20] In the protostomes, they are microvilli: extensions or protrusions of the cellular membrane. But in the deuterostomes, they are derived from cilia, which are separate structures.[19] The actual derivation may be more complicated, as some microvilli contain traces of cilia, but other observations appear to support a fundamental difference between protostomes and deuterostomes.[19] These considerations centre on the response of the cells to light – some use sodium to cause the electric signal that will form a nerve impulse, and others use potassium; further, protostomes on the whole construct a signal by allowing more sodium to pass through their cell membranes, whereas deuterostomes allow less through.[19]

This suggests that when the two lineages diverged in the Precambrian, they had only very primitive light receptors, which developed into more complex eyes independently.

Early eyes

The basic light-processing unit of eyes is the photoreceptor cell, a specialized cell containing two types of molecules in a membrane: the opsin, a light-sensitive protein, surrounding the chromophore, a pigment that distinguishes colors. Groups of such cells are termed "eyespots", and have evolved independently somewhere between 40 and 65 times. These eyespots permit animals to gain only a very basic sense of the direction and intensity of light, but not enough to discriminate an object from its surroundings.[19]

Developing an optical system that can discriminate the direction of light to within a few degrees is apparently much more difficult, and only six of the thirty-some phyla[note 2] possess such a system. However, these phyla account for 96% of living species.[19]

The planarian has "cup" eyespots that can slightly distinguish light direction.

These complex optical systems started out as the multicellular eyepatch gradually depressed into a cup, which first granted the ability to discriminate brightness in directions, then in finer and finer directions as the pit deepened. While flat eyepatches were ineffective at determining the direction of light, as a beam of light would activate exactly the same patch of photo-sensitive cells regardless of its direction, the "cup" shape of the pit eyes allowed limited directional differentiation by changing which cells the light would hit depending upon its angle. Pit eyes, which had arisen by the Cambrian period, were seen in ancient snails, and are found in some snails and other invertebrates living today, such as planaria. Planaria can slightly differentiate the direction and intensity of light because of their cup-shaped, heavily pigmented retina cells, which shield the light-sensitive cells from exposure in all directions except for the single opening for the light. However, this proto-eye is still much more useful for detecting the absence or presence of light than its direction; this gradually changes as the eye's pit deepens and the number of photoreceptive cells grows, allowing for increasingly precise visual information.[21]

When a photon is absorbed by the chromophore, a chemical reaction causes the photon's energy to be transduced into electrical energy and relayed, in higher animals, to the nervous system. These photoreceptor cells form part of the retina, a thin layer of cells that relays visual information,[22] including the light and day-length information needed by the circadian rhythm system, to the brain. However, some jellyfish, such as Cladonema, have elaborate eyes but no brain. Their eyes transmit a message directly to the muscles without the intermediate processing provided by a brain.[18]

During the Cambrian explosion, the development of the eye accelerated rapidly, with radical improvements in image-processing and detection of light direction.[23]

The primitive nautilus eye functions similarly to a pinhole camera.

After the photosensitive cell region invaginated, there came a point when reducing the width of the light opening became more efficient at increasing visual resolution than continued deepening of the cup.[10] By reducing the size of the opening, organisms achieved true imaging, allowing for fine directional sensing and even some shape-sensing. Eyes of this nature are currently found in the nautilus. Lacking a cornea or lens, they provide poor resolution and dim imaging, but are still, for the purpose of vision, a major improvement over the early eyepatches.[24]
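The trade-off in that paragraph can be made concrete: shrinking the opening reduces geometric blur (the image of a point is at least as wide as the hole) but increases diffraction blur, so a pinhole eye has an optimal aperture. A rough sketch using standard pinhole-camera approximations and a hypothetical 1 cm chamber depth, not figures specific to the nautilus:

    def blur_spot_m(aperture_m, depth_m, wavelength_m=500e-9):
        # Total blur: the hole's own width plus diffraction spreading
        geometric = aperture_m
        diffraction = 2.44 * wavelength_m * depth_m / aperture_m
        return geometric + diffraction

    depth = 0.01  # 1 cm from opening to retina (hypothetical round number)
    for d_mm in (0.05, 0.11, 0.2, 0.5, 1.0):
        print(d_mm, "mm opening ->", round(blur_spot_m(d_mm * 1e-3, depth) * 1e6), "um blur")
    # Blur is minimized near sqrt(2.44 * wavelength * depth) ~ 0.11 mm here; wider
    # openings are brighter but blurrier, narrower ones dimmer and diffraction-limited.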

Overgrowths of transparent cells prevented contamination and parasitic infestation. The chamber contents, now segregated, could slowly specialize into a transparent humour, for optimizations such as colour filtering, higher refractive index, blocking of ultraviolet radiation, or the ability to operate in and out of water. The layer may, in certain classes, be related to the moulting of the organism's shell or skin. An example of this can be observed in onychophorans, where the cuticula of the shell continues to the cornea. The cornea is composed of either one or two cuticular layers depending on how recently the animal has moulted.[25] Along with the lens and two humours, the cornea is responsible for converging light and helping focus it on the retina. The cornea protects the eyeball while at the same time accounting for approximately 2/3 of the eye's total refractive power.[26]

It is likely that a key reason eyes specialize in detecting a specific, narrow range of wavelengths on the electromagnetic spectrum—the visible spectrum—is because the earliest species to develop photosensitivity were aquatic, and only two specific wavelength ranges of electromagnetic radiation, blue and green visible light, can travel through water. This same light-filtering property of water also influenced the photosensitivity of plants.[27][28][29]

Lens formation and diversification


Light from a distant object and a near object being focused by changing the curvature of the lens

In a lensless eye, light from a distant point hits the back of the eye at about the same size as when it entered. Adding a lens directs this incoming light onto a smaller surface area, without reducing the intensity of the stimulus.[6] In an early lobopod with lens-containing simple eyes, the lens's focal length placed the image behind the retina, so while no part of the image could be brought into focus, the intensity of light allowed the organism to see in deeper (and therefore darker) waters.[25] A subsequent increase of the lens's refractive index probably resulted in an in-focus image being formed.[25]

The development of the lens in camera-type eyes probably followed a different trajectory. The transparent cells over a pinhole eye's aperture split into two layers, with liquid in between. The liquid originally served as a circulatory fluid for oxygen, nutrients, wastes, and immune functions, allowing greater total thickness and higher mechanical protection. In addition, multiple interfaces between solids and liquids increase optical power, allowing wider viewing angles and greater imaging resolution. Again, the division of layers may have originated with the shedding of skin; intracellular fluid may infill naturally depending on layer depth.

Note that this optical layout has not been found, nor is it expected to be found. Fossilization rarely preserves soft tissues, and even if it did, the new humour would almost certainly close as the remains desiccated, or as sediment overburden forced the layers together, making the fossilized eye resemble the previous layout.

Compound eye of Antarctic krill

Vertebrate lenses are composed of adapted epithelial cells which have high concentrations of the protein crystallin. These crystallins belong to two major families, the α-crystallins and the βγ-crystallins. Both were categories of proteins originally used for other functions in organisms, but eventually were adapted for the sole purpose of vision in animal eyes.[30] In the embryo, the lens is living tissue, but the cellular machinery is not transparent so must be removed before the organism can see. Removing the machinery means the lens is composed of dead cells, packed with crystallins. These crystallins are special because they have the unique characteristics required for transparency and function in the lens such as tight packing, resistance to crystallization, and extreme longevity, as they must survive for the entirety of the organism’s life.[30] The refractive index gradient which makes the lens useful is caused by the radial shift in crystallin concentration in different parts of the lens, rather than by the specific type of protein: it is not the presence of crystallin, but the relative distribution of it, that renders the lens useful.[31]

It is biologically difficult to maintain a transparent layer of cells. Deposition of transparent, nonliving material eased the need for nutrient supply and waste removal. Trilobites used calcite, a mineral which has not been used by any other organism; in other compound eyes and camera eyes, the material is crystallin. A gap between tissue layers naturally forms a biconvex shape, which is optically and mechanically ideal for substances of normal refractive index. A biconvex lens confers not only optical resolution, but aperture and low-light ability, as resolution is now decoupled from hole size, which slowly increases again, free from the circulatory constraints.

Independently, a transparent layer and a nontransparent layer may split forward from the lens: a separate cornea and iris. (These may happen before or after crystal deposition, or not at all.) Separation of the forward layer again forms a humour, the aqueous humour. This increases refractive power and again eases circulatory problems. Formation of a nontransparent ring allows more blood vessels, more circulation, and larger eye sizes. This flap around the perimeter of the lens also masks optical imperfections, which are more common at lens edges. The need to mask lens imperfections gradually increases with lens curvature and power, overall lens and eye size, and the resolution and aperture needs of the organism, driven by hunting or survival requirements. This type is now functionally identical to the eye of most vertebrates, including humans. Indeed, "the basic pattern of all vertebrate eyes is similar."[32]

Other developments

Color vision

Five classes of visual photopigments are found in vertebrates. All but one of these developed prior to the divergence of cyclostomes and fish.[33] Various adaptations within these five classes give rise to eyes suited to the spectrum encountered. As light travels through water, longer wavelengths, such as reds and yellows, are absorbed more quickly than the shorter wavelengths of the greens and blues. This can create a gradient of light types as the depth of water increases. The visual receptors in fish are more sensitive to the range of light present at their habitat depth. However, this phenomenon does not occur in land environments, creating little variation in pigment sensitivities among terrestrial vertebrates. The homogeneous nature of the pigment sensitivities directly contributes to the significant presence of communication colors.[33] This presents distinct selective advantages, such as better recognition of predators, food, and mates. Indeed, it is thought that simple sensory-neural mechanisms may selectively control general behaviour patterns, such as escape, foraging, and hiding. Many examples of wavelength-specific behaviour patterns have been identified, in two primary groups: less than 450 nm, associated with natural light sources, and greater than 450 nm, associated with reflected light sources.[34] As opsin molecules were subtly fine-tuned to detect different wavelengths of light, at some point color vision developed when photoreceptor cells developed multiple pigments.[22] As a chemical adaptation rather than a mechanical one, this may have occurred at any of the early stages of the eye's evolution, and the capability may have disappeared and reappeared as organisms became predator or prey. Similarly, night and day vision emerged when receptors differentiated into rods and cones, respectively.

Polarization vision

As discussed earlier, the properties of light under water differ from those in air. One example of this is the polarization of light. Polarization is the organization of originally disordered light, from the sun, into linear arrangements. This occurs when light passes through slit-like filters, as well as when it passes into a new medium. Sensitivity to polarized light is especially useful for organisms whose habitats are located more than a few meters under water. In this environment, color vision is less dependable, and therefore a weaker selective factor. While most photoreceptors have the ability to distinguish partially polarized light, terrestrial vertebrates' membranes are oriented perpendicularly, such that they are insensitive to polarized light.[35] However, some fish can discern polarized light, demonstrating that they possess some linear photoreceptors. Like color vision, sensitivity to polarization can aid an organism's ability to differentiate surrounding objects and individuals. Because of the marginal reflective interference of polarized light, it is often used for orientation and navigation, as well as for distinguishing concealed objects, such as disguised prey.[35]

Focusing mechanism

By utilizing the iris sphincter muscle, some species move the lens back and forth; some stretch the lens flatter. Another mechanism regulates focusing chemically and independently of these two, by controlling growth of the eye and maintaining focal length. In addition, the pupil shape can be used to predict the focal system being utilized. A slit pupil can indicate the common multifocal system, while a circular pupil usually specifies a monofocal system. A circular pupil will constrict under bright light, increasing the depth of focus, and will dilate in the dark to admit more light, decreasing the depth of focus.[36] Note that a focusing method is not a requirement. As photographers know, focal errors increase as aperture increases. Thus, countless organisms with small eyes are active in direct sunlight and survive with no focus mechanism at all. As a species grows larger, or transitions to dimmer environments, a means of focusing need only appear gradually.

Location

Prey generally have eyes on the sides of their head so as to have a larger field of view, from which to avoid predators. Predators, however, have eyes in front of their head in order to have better depth perception.[37][38] Flatfish are predators which lie on their side on the bottom, and have eyes placed asymmetrically on the same side of the head. A transitional fossil from the common symmetric position is Amphistium.

Evolutionary baggage

Vertebrates and octopuses developed the camera eye independently. In the vertebrate version the nerve fibers pass in front of the retina, and there is a blind spot where the nerves pass through the retina; in the octopus eye the nerve fibers lie behind the retina, and the blind spot is notably absent.
The eyes of many taxa record their evolutionary history in their contemporary anatomy. The vertebrate eye, for instance, is built "backwards and upside down", requiring "photons of light to travel through the cornea, lens, aqueous fluid, blood vessels, ganglion cells, amacrine cells, horizontal cells, and bipolar cells before they reach the light-sensitive rods and cones that transduce the light signal into neural impulses, which are then sent to the visual cortex at the back of the brain for processing into meaningful patterns."[39] While such a construct has some drawbacks, it also allows the outer retina of the vertebrates to sustain higher metabolic activities as compared to the non-inverted design.[40] It also allowed for the evolution of the choroid layer, including the retinal pigment epithelial (RPE) cells, which play an important role in protecting the photoreceptive cells from photo-oxidative damage.[41][42]

The camera eyes of cephalopods, in contrast, are constructed the "right way out", with the nerves attached to the rear of the retina. This means that they do not have a blind spot. This difference may be accounted for by the origins of eyes; in cephalopods they develop as an invagination of the head surface, whereas in vertebrates they originate as an extension of the brain.
