
Monday, July 7, 2025

Rayleigh sky model

From Wikipedia, the free encyclopedia

The Rayleigh sky model describes the observed polarization pattern of the daytime sky. Within the atmosphere, Rayleigh scattering of light by air molecules, water, dust, and aerosols causes the sky's light to have a defined polarization pattern. The same elastic scattering processes cause the sky to be blue. The polarization is characterized at each wavelength by its degree of polarization and its orientation (the e-vector angle, or polarization angle).

The polarization pattern of the sky depends on the celestial position of the Sun. While all scattered light is polarized to some extent, light is most highly polarized at a scattering angle of 90° from the light source. In most cases the light source is the Sun, but the Moon creates the same pattern as well. The degree of polarization first increases with angular distance from the Sun, peaks at 90°, and then decreases again. Thus, the maximum degree of polarization occurs in a circular band 90° from the Sun. In this band, degrees of polarization near 80% are typically reached.

Degree of polarization in the Rayleigh sky at sunset or sunrise. The zenith is at the center of the graph.

When the Sun is located at the zenith, the band of maximal polarization wraps around the horizon. Light from the sky is polarized horizontally along the horizon. During twilight at either the vernal or autumnal equinox, the band of maximal polarization is defined by the north-zenith-south plane, or meridian. In particular, the polarization is vertical at the horizon in the north and south, where the meridian meets the horizon. The polarization at twilight at an equinox is represented by the figure to the right. The red band represents the circle in the north-zenith-south plane where the sky is highly polarized. The cardinal directions (N, E, S, W) are shown at 12 o'clock, 9 o'clock, 6 o'clock, and 3 o'clock (counter-clockwise around the celestial sphere, since the observer is looking up at the sky).

Note that because the polarization pattern is dependent on the Sun, it changes not only throughout the day but throughout the year. When the sun sets toward the South, in the northern hemisphere's winter, the North-Zenith-South plane is offset, with "effective" North actually located somewhat toward the West. Thus if the sun sets at an azimuth of 255° (15° South of West) the polarization pattern will be at its maximum along the horizon at an azimuth of 345° (15° West of North) and 165° (15° East of South).

During a single day, the pattern rotates with the changing position of the sun. The twilight pattern typically appears about 45 minutes before local sunrise and disappears 45 minutes after local sunset. Once established it is very stable, showing change only in its rotation. It can easily be seen on any given day using polarized sunglasses.

Many animals use the polarization patterns of the sky at twilight and throughout the day as a navigation tool. Because it is determined purely by the position of the Sun, it is easily used as a compass for animal orientation. By orienting themselves with respect to the polarization patterns, animals can locate the Sun and thus determine the cardinal directions.

Theory

Geometry

The geometry representing the Rayleigh sky

The geometry for the sky polarization can be represented by a celestial triangle based on the Sun, zenith, and observed pointing (or the point of scattering). In the model, γ is the angular distance between the observed pointing and the Sun, Θs is the solar zenith distance (90° – solar altitude), Θ is the angular distance between the observed pointing and the zenith (90° – observed altitude), Φ is the angle between the zenith direction and the solar direction at the observed pointing, and ψ is the angle between the solar direction and the observed pointing at the zenith.

Thus, the spherical triangle is defined not only by the three points located at the Sun, zenith, and observed point, but also by the three interior angles and the three angular distances. In an altitude-azimuth grid, the angular distance between the observed pointing and the Sun and the angular distance between the observed pointing and the zenith change, while the angular distance between the Sun and the zenith remains constant at any one point in time.

The angular distances between the observed pointing and the Sun when the sun is setting to the west (top plot) and between the observed pointing and the zenith (bottom plot)

The figure to the left shows the two changing angular distances as mapped onto an altitude-azimuth grid (with altitude on the x-axis and azimuth on the y-axis). The top plot represents the changing angular distance between the observed pointing and the Sun, which is opposite to the interior angle located at the zenith (the scattering angle). When the Sun is located at the zenith this distance is greatest along the horizon at every cardinal direction. It then decreases with rising altitude as the pointing moves closer to the zenith. At twilight the sun is setting in the west, so the distance is greatest when looking directly away from the Sun along the horizon in the east, and lowest along the horizon in the west.

The bottom plot in the figure to the left represents the angular distance from the observed pointing to the zenith, which is opposite to the interior angle located at the Sun. Unlike the distance between the observed pointing and the Sun, this is independent of azimuth, i.e. cardinal direction. It is simply greatest along the horizon at low altitudes and decreases linearly with rising altitude.

The three interior angles of the celestial triangle.

The figure to the right represents the three interior angles. The left one represents the angle at the observed pointing between the zenith direction and the solar direction. This is thus heavily dependent on the changing solar direction as the Sun is perceived as moving across the sky. The middle one represents the angle at the Sun between the zenith direction and the pointing. Again this is heavily dependent on the changing pointing. It is symmetrical between the northern and southern hemispheres. The right one represents the angle at the zenith between the solar direction and the pointing. It thus rotates around the celestial sphere.

Degree of polarization

The Rayleigh sky model predicts the degree of sky polarization as:

δ = δ_max · sin²γ / (1 + cos²γ)

where γ is the scattering angle (the angular distance between the observed pointing and the Sun) and δ_max is the maximum degree of polarization.

The polarization along the horizon.

As a simple example one can map the degree of polarization on the horizon. As seen in the figure to the right, it is high in the North (0° and 360°) and the South (180°). It then resembles a cosine function, decreasing toward the East and West and reaching zero at these cardinal directions.

The degree of polarization is easily understood when mapped onto an altitude-azimuth grid as shown below. As the sun sets due West, the maximum degree of polarization can be seen in the North-Zenith-South plane. Along the horizon, at an altitude of 0°, it is highest in the North and South and lowest in the East and West. As altitude increases toward the zenith (and the plane of maximum polarization), the polarization remains high in the North and South, while in the East and West it increases until it again reaches its maximum at an altitude of 90°, at the zenith, which lies within the plane of maximum polarization.
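A minimal numerical sketch of this mapping, assuming NumPy, a maximum degree of polarization of 80%, and the spherical-law-of-cosines expression for the scattering angle given below in the Polarization angle section; the function names are illustrative:

```python
import numpy as np

def scattering_angle(theta_s, psi_s, theta, psi):
    """Angular distance gamma between the Sun (zenith angle theta_s,
    azimuth psi_s) and the observed point (theta, psi), from the
    spherical law of cosines. All angles in radians."""
    cos_g = (np.cos(theta_s) * np.cos(theta)
             + np.sin(theta_s) * np.sin(theta) * np.cos(psi - psi_s))
    return np.arccos(np.clip(cos_g, -1.0, 1.0))

def degree_of_polarization(gamma, delta_max=0.8):
    """Rayleigh degree of polarization; delta_max ~80% is a realistic
    clear-sky maximum."""
    return delta_max * np.sin(gamma) ** 2 / (1.0 + np.cos(gamma) ** 2)

# Sun setting due west: zenith angle 90 deg, azimuth 270 deg (assumed convention)
theta_s, psi_s = np.radians(90.0), np.radians(270.0)

# Altitude-azimuth grid over the whole sky
alt, az = np.meshgrid(np.linspace(0.0, 90.0, 91), np.linspace(0.0, 360.0, 361))
gamma = scattering_angle(theta_s, psi_s, np.radians(90.0 - alt), np.radians(az))
delta = degree_of_polarization(gamma)

print(delta.max())  # ~0.8, in the north-zenith-south band 90 deg from the Sun
```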

The degree of sky polarization as mapped onto the celestial sphere.
The degree of polarization. Red is high (approximately 80%) and black is low (0%).

Click on the adjacent image to view an animation that represents the degree of polarization as shown on the celestial sphere. Black represents areas where the degree of polarization is zero, whereas red represents areas where the degree of polarization is much larger, approximately 80%, a realistic maximum for the clear Rayleigh sky during the day. The video begins when the sun is slightly above the horizon at an azimuth of 120°. The sky is highly polarized in the effective North-Zenith-South plane. This plane is slightly offset because the sun's azimuth is not due East. The sun moves across the sky with clear circular bands of polarization surrounding it. When the Sun is located at the zenith the polarization is independent of azimuth and decreases with rising altitude (as the pointing approaches the sun). The pattern then continues as the sun approaches the horizon once again for sunset. The video ends with the sun below the horizon.

Polarization angle

The polarization angle. Red is high (approximately 90°) and black is low (-90°).

The scattering plane is the plane through the Sun, the observer, and the point observed (or the scattering point). The scattering angle, γ, is the angular distance between the Sun and the observed point. The equation for the scattering angle is derived by applying the law of cosines to the spherical triangle (refer to the figure above in the geometry section). It is given by:

cos γ = cos θs cos θ + sin θs sin θ cos(ψ − ψs)

In the above equation, ψs and θs are respectively the azimuth and zenith angle of the Sun, and ψ and θ are respectively the azimuth and zenith angle of the observed point.

This equation breaks down at the zenith, where the angular distance between the observed pointing and the zenith, θ, is 0 and the azimuth is undefined. Here the orientation of polarization is defined as the difference in azimuth between the observed pointing and the solar azimuth.

The angle of polarization (or polarization angle) is defined as the relative angle between a vector tangent to the meridian of the observed point and a vector perpendicular to the scattering plane.

The polarization angle shows a regular shift with azimuth. For example, when the sun is setting in the West the polarization angles proceed around the horizon. At this time the degree of polarization is constant in circular bands centered around the Sun. Thus both the degree of polarization and its corresponding angle clearly shift around the horizon. When the Sun is located at the zenith the horizon represents a constant degree of polarization. The corresponding polarization angle still shifts with different directions toward the zenith from different points.

The video to the right represents the polarization angle mapped onto the celestial sphere. It begins with the Sun in a similar position. The angle is zero along the line from the Sun to the zenith and increases as the observed point moves clockwise toward the East. Once the sun rises in the East the angle behaves in a similar fashion until the sun begins to move across the sky. As the sun moves across the sky, the angle is either zero or at its maximum along the line defined by the sun, the zenith, and the anti-sun. It is lower south of this line and higher north of it. When the Sun is at the zenith, the angle is either fully positive or zero. These two values rotate toward the west. The video then repeats in a similar fashion when the sun sets in the West.

Q and U Stokes parameters

The q and u input.

The angle of polarization can be unwrapped into the Q and U Stokes parameters. Q and U are defined as the linearly polarized intensities along the position angles 0° and 45° respectively; −Q and −U are along the position angles 90° and −45°.

If the sun is located on the horizon due west, the band of maximum polarization lies in the North-Zenith-South plane. If the observer faces West and looks at the zenith, the polarization is horizontal to the observer: Q is 1 and U is 0. If the observer is still facing West but looking North instead, the polarization is vertical to the observer: Q is −1 and U remains 0. Along the horizon U is always 0, and Q is always −1 except in the East and West.

The scattering angle (the angle at the zenith between the solar direction and the observer direction) along the horizon is a circle. At twilight, when the sun is setting in the West, it is 180° looking from the East through the West and only 90° looking from the West through the East. The scattering angle at an altitude of 45° is consistent.

The input Stokes parameters q and u are then defined with respect to North in the altitude-azimuth frame. We can easily unwrap q assuming it is in the +altitude direction. From the basic definition we know that +Q corresponds to an angle of 0° and −Q to an angle of 90°. Therefore, Q is calculated from a sine function and U from a cosine function. The angle of polarization is always perpendicular to the scattering plane, so 90° is added to the scattering angle in order to find the polarization angle. From this the Q and U Stokes parameters are determined:

q = δ · sin(2(γ + 90°))

and

u = δ · cos(2(γ + 90°))

where δ is the degree of polarization and γ is the scattering angle.

The scattering angle, derived from the law of cosines, is measured with respect to the Sun. The polarization angle is the angle with respect to the zenith, or positive altitude direction. There is a line of symmetry defined by the Sun and the zenith. It is drawn from the Sun through the zenith to the other side of the celestial sphere, where the "anti-sun" would be. This is also the effective East-Zenith-West plane.
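The sketch below turns that prescription into code. It is a minimal sketch of one reading of the text, not the article's own derivation: it assumes the sine/cosine convention and the added 90° described above, with δ and γ computed as in the earlier sketch; the function name is illustrative.

```python
import numpy as np

def qu_inputs(gamma, delta):
    """Input Stokes parameters q and u, following the text's prescription:
    q from a sine, u from a cosine, with 90 deg added to the scattering
    angle to obtain the polarization angle (perpendicular to the
    scattering plane). gamma in radians, delta the degree of polarization."""
    chi = gamma + np.pi / 2.0   # polarization angle
    q = delta * np.sin(2.0 * chi)
    u = delta * np.cos(2.0 * chi)
    return q, u

# Example: a point 90 deg from the Sun, where polarization peaks (~80%)
q, u = qu_inputs(np.pi / 2.0, 0.8)
print(q, u)
```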

The q input. Red is high (approximately 80%) and black is low (0%). (Click for animation)
The u input. Red is high (approximately 80%) and black is low (0%).

The first image to the right represents the q input mapped onto the celestial sphere. It is symmetric about the line defined by the sun-zenith-anti-sun. At twilight, in the North-Zenith-South plane it is negative because the polarization there is vertical; in the East-Zenith-West plane the polarization is horizontal, and q is positive. In other words, it is positive in the ±altitude direction and negative in the ±azimuth direction. As the sun moves across the sky the q input remains high along the sun-zenith-anti-sun line. It remains zero around a circle based on the sun and the zenith. As it passes the zenith it rotates toward the south and repeats the same pattern until sunset.

The second image to the right represents the u input mapped onto the celestial sphere. The u Stokes parameter changes sign depending on which quadrant it is in. The four quadrants are defined by the line of symmetry, the effective East-Zenith-West plane, and the North-Zenith-South plane. It is not symmetric because it is defined by the angles ±45°. In a sense it makes two circles around the line of symmetry as opposed to only one.

It is easily understood when compared with the q input. Where the q input is halfway between 0° and 90°, the u input is either positive at +45° or negative at −45°. Similarly, if the q input is positive at 90° or negative at 0°, the u input is halfway between +45° and −45°. This can be seen in the non-symmetric circles about the line of symmetry. The u input then follows the same pattern across the sky as the q input.

Neutral points and lines

Areas where the degree of polarization is zero (the skylight is unpolarized) are known as neutral points. Here the Stokes parameters Q and U also equal zero by definition. The degree of polarization therefore increases with increasing distance from the neutral points.

These conditions are met at a few defined locations on the sky. The Arago point is located above the antisolar point, while the Babinet and Brewster points are located above and below the sun respectively. The zenith distance of the Babinet or Arago point increases with increasing solar zenith distance. These neutral points can depart from their regular positions due to interference from dust and other aerosols.

The skylight polarization switches from negative to positive while passing a neutral point parallel to the solar or antisolar meridian. The lines that separate the regions of positive Q and negative Q are called neutral lines.

Depolarization

The Rayleigh sky causes a clearly defined polarization pattern under many different circumstances. The degree of polarization, however, does not always remain consistent and may in fact decrease in different situations. The Rayleigh sky may undergo depolarization due to nearby objects such as clouds and large reflecting surfaces such as the ocean. It may also change depending on the time of day (for instance at twilight or at night).

At night, the polarization of the moonlit sky is very strongly reduced in the presence of urban light pollution: light pollution is mostly unpolarized, and its addition to moonlight results in a decreased polarization signal.

Research shows that the angle of polarization in a clear sky continues underneath clouds if the air beneath the cloud is directly lit by the Sun. The scattering of direct sunlight on those clouds results in the same polarization pattern. In other words, the proportion of the sky that follows the Rayleigh sky model is high for both clear and cloudy skies, and the pattern is also clearly visible in small patches of visible sky. The celestial angle of polarization is unaffected by clouds.

Polarization patterns remain consistent even when the Sun is not present in the sky. Twilight patterns are produced during the time period between the beginning of astronomical twilight (when the Sun is 18° below the horizon) and sunrise, or sunset and the end of astronomical twilight. The duration of astronomical twilight depends on the length of the path taken by the Sun below the horizon. Thus it depends on the time of year as well as the location, but it can last for as long as 1.5 hours.

The polarization pattern caused by twilight remains fairly consistent throughout this time period. This is because the sun is moving below the horizon nearly perpendicular to it, and its azimuth therefore changes very slowly throughout this time period.

At twilight, scattered polarized light originates in the upper atmosphere and then traverses the entire lower atmosphere before reaching the observer. This provides multiple scattering opportunities and causes depolarization. It has been seen that polarization increases by about 10% from the onset of twilight to dawn. Therefore, the pattern remains consistent while the degree changes slightly.

Not only do polarization patterns remain consistent as the sun moves across the sky, but also as the moon moves across the sky at night. The Moon creates the same polarization pattern. Thus it is possible to use the polarization patterns as a tool for navigation at night. The only difference is that the degree of polarization is not quite as strong.

Underlying surface properties can affect the degree of polarization of the daytime sky: as the surface reflectance or the optical thickness increases, the degree of polarization decreases. The Rayleigh sky near the ocean can therefore be highly depolarized.

Lastly, there is a clear wavelength dependence in Rayleigh scattering. Scattering is strongest at short wavelengths, whereas skylight polarization is greatest at middle to long wavelengths. Polarization is initially greatest in the ultraviolet, but as light travels to the Earth's surface and interacts via multiple-path scattering, the maximum shifts to middle and long wavelengths. The angle of polarization shows no variation with wavelength.

Uses

Many animals, typically insects, are sensitive to the polarization of light and can therefore use the polarization patterns of the daytime sky as a tool for navigation. This theory was first proposed by Karl von Frisch when looking at the celestial orientation of honeybees. The natural sky polarization pattern serves as an easily detected compass. From the polarization patterns, these species can orient themselves by determining the exact position of the Sun without the use of direct sunlight. Thus under cloudy skies, or even at night, animals can find their way.

Using polarized light as a compass, however, is no easy task. The animal must be capable of detecting and analyzing polarized light. These species have specialized photoreceptors in their eyes that respond to the orientation and degree of polarization near the zenith. They can extract information on the intensity and orientation of the polarization, then use it visually to orient themselves and to recognize different properties of surfaces.

There is clear evidence that animals can even orient themselves when the Sun is below the horizon at twilight. How well insects might orient themselves using nocturnal polarization patterns is still a topic of study. So far, it is known that nocturnal crickets have wide-field polarization sensors and should be able to use the night-time polarization patterns to orient themselves. It has also been seen that nocturnally migrating birds become disoriented when the polarization pattern at twilight is unclear.

The best example is the halictid bee Megalopta genalis, which inhabits the rainforests of Central America and forages before sunrise and after sunset. This bee leaves its nest approximately 1 hour before sunrise, forages for up to 30 minutes, and accurately returns to its nest before sunrise. It acts similarly just after sunset.

This bee is thus an example of an insect that can perceive polarization patterns throughout astronomical twilight. Its case not only demonstrates that polarization patterns are present during twilight, but also shows that, when light conditions are challenging, the bee orients itself by the polarization patterns of the twilight sky.

It has been suggested that Vikings were able to navigate on the open sea in a similar fashion, using the birefringent crystal Iceland spar, which they called "sunstone", to determine the orientation of the sky's polarization. This would allow the navigator to locate the Sun, even when it was obscured by cloud cover. An actual example of such a "sunstone" was found on a sunken (Tudor) ship dated 1592, in proximity to the ship's navigational equipment.

Non-polarized objects

Both artificial and natural objects in the sky can be very difficult to detect using only the intensity of light. These objects include clouds, satellites, and aircraft. However, the polarization of these objects due to resonant scattering, emission, reflection, or other phenomena can differ from that of the background illumination. Thus they can be more easily detected by using polarization imaging. There is a wide range of remote sensing applications in which polarization is useful for detecting objects that are otherwise difficult to see.

Charge carrier

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Charge_carrier

In solid state physics, a charge carrier is a particle or quasiparticle that is free to move, carrying an electric charge, especially the particles that carry electric charges in electrical conductors. Examples are electrons, ions and holes. In a conducting medium, an electric field can exert force on these free particles, causing a net motion of the particles through the medium; this is what constitutes an electric current. The electron and the proton are the elementary charge carriers, each carrying one elementary charge (e), of the same magnitude and opposite sign.

In conductors

In conducting media, mobile charged particles carry the charge. In many metals, the charge carriers are electrons. One or two of the valence electrons from each atom are able to move about freely within the crystal structure of the metal. The free electrons are referred to as conduction electrons, and the cloud of free electrons is called a Fermi gas. Many metals have electron and hole bands; in some, the majority carriers are holes.

In electrolytes, such as salt water, the charge carriers are ions: atoms or molecules that have gained or lost electrons and so are electrically charged. Atoms that have gained electrons and are negatively charged are called anions; atoms that have lost electrons and are positively charged are called cations. Cations and anions of the dissociated liquid also serve as charge carriers in melted ionic solids (see e.g. the Hall–Héroult process for an example of electrolysis of a melted ionic solid). Proton conductors are electrolytic conductors employing positive hydrogen ions as carriers.

In a plasma, an electrically charged gas which is found in electric arcs through air, neon signs, and the sun and stars, the electrons and cations of ionized gas act as charge carriers.

In a vacuum, free electrons can act as charge carriers. In the electronic component known as the vacuum tube (also called valve), the mobile electron cloud is generated by a heated metal cathode, by a process called thermionic emission. When an electric field is applied strongly enough to draw the electrons into a beam, this may be referred to as a cathode ray, and is the basis of the cathode-ray tube display widely used in televisions and computer monitors until the 2000s.

In semiconductors, which are the materials used to make electronic components like transistors and integrated circuits, two types of charge carrier are possible. In p-type semiconductors, "effective particles" known as electron holes with positive charge move through the crystal lattice, producing an electric current. The "holes" are, in effect, electron vacancies in the valence-band electron population of the semiconductor and are treated as charge carriers because they are mobile, moving from atom site to atom site. In n-type semiconductors, electrons in the conduction band move through the crystal, resulting in an electric current.

In some conductors, such as ionic solutions and plasmas, positive and negative charge carriers coexist, so in these cases an electric current consists of the two types of carrier moving in opposite directions. In other conductors, such as metals, there are only charge carriers of one polarity, so an electric current in them simply consists of charge carriers moving in one direction.

In semiconductors

There are two recognized types of charge carriers in semiconductors. One is electrons, which carry a negative electric charge. In addition, it is convenient to treat the traveling vacancies in the valence band electron population (holes) as a second type of charge carrier, which carry a positive charge equal in magnitude to that of an electron.

Carrier generation and recombination

When an electron meets a hole, they recombine and these free carriers effectively vanish. The energy released can be either thermal, heating up the semiconductor (thermal recombination, one of the sources of waste heat in semiconductors), or released as photons (optical recombination, used in LEDs and semiconductor lasers). Recombination means that an electron which has been excited from the valence band to the conduction band falls back into an empty state in the valence band, known as a hole. The holes are the empty states created in the valence band when an electron gains enough energy to cross the band gap and is excited into the conduction band.

Majority and minority carriers

The more abundant charge carriers are called majority carriers, which are primarily responsible for current transport in a piece of semiconductor. In n-type semiconductors they are electrons, while in p-type semiconductors they are holes. The less abundant charge carriers are called minority carriers; in n-type semiconductors they are holes, while in p-type semiconductors they are electrons. The concentration of holes and electrons in a doped semiconductor is governed by the mass action law.

In an intrinsic semiconductor, which does not contain any impurity, the concentrations of both types of carriers are ideally equal. If an intrinsic semiconductor is doped with a donor impurity then the majority carriers are electrons. If the semiconductor is doped with an acceptor impurity then the majority carriers are holes.
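A rough numerical illustration of the mass action law cited above: in equilibrium, the product of the electron and hole concentrations equals the square of the intrinsic concentration, n·p = n_i². The sketch below computes the minority-carrier concentration in a donor-doped sample; the silicon values are illustrative assumptions.

```python
# Minority-carrier concentration from the mass action law (n * p = n_i^2).
# Illustrative values: n_i for silicon at room temperature ~1e10 cm^-3,
# with a donor doping N_D of 1e16 cm^-3 (assumed fully ionized, so n ~ N_D).
n_i = 1.0e10   # intrinsic carrier concentration, cm^-3 (silicon, ~300 K)
N_D = 1.0e16   # donor concentration, cm^-3

n = N_D              # majority carriers (electrons) in the n-type material
p = n_i**2 / n       # minority carriers (holes) from n * p = n_i^2

print(f"electrons: {n:.1e} cm^-3, holes: {p:.1e} cm^-3")  # holes: 1.0e+04
```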

Minority carriers play an important role in bipolar transistors and solar cells. Their role in field-effect transistors (FETs) is a bit more complex: for example, a MOSFET has p-type and n-type regions. The transistor action involves the majority carriers of the source and drain regions, but these carriers traverse the body of the opposite type, where they are minority carriers. However, the traversing carriers hugely outnumber their opposite type in the transfer region (in fact, the opposite type carriers are removed by an applied electric field that creates an inversion layer), so conventionally the source and drain designation for the carriers is adopted, and FETs are called "majority carrier" devices.

Free carrier concentration

Free carrier concentration is the concentration of free carriers in a doped semiconductor. It is similar to the carrier concentration in a metal and, for the purposes of calculating currents or drift velocities, can be used in the same way. Free carriers are electrons (holes) that have been introduced into the conduction band (valence band) by doping; they therefore do not act as double carriers by leaving behind holes (electrons) in the other band. The free carrier concentration of doped semiconductors shows a characteristic temperature dependence.

In superconductors

Superconductors have zero electrical resistance and are therefore able to carry current indefinitely. This type of conduction is made possible by the formation of Cooper pairs. At present, superconductivity can only be achieved at very low temperatures, for instance by using cryogenic cooling. Achieving superconductivity at room temperature remains challenging and is still a field of ongoing research and experimentation. Creating a superconductor that functions at ambient temperature would constitute an important technological breakthrough, which could potentially contribute to much higher energy efficiency in grid distribution of electricity.

In quantum situations

Under exceptional circumstances, positrons, muons, anti-muons, taus and anti-taus may also serve as charge carriers. This is theoretically possible, yet the very short lifetimes of these charged particles would make such a current very challenging to maintain at the current state of technology. It might be possible to create this type of current artificially, or it might occur in nature during very short lapses of time.

In plasmas

Plasmas consist of ionized gas. Electric charge can cause the formation of electromagnetic fields in plasmas, which can lead to the formation of one or more currents. This phenomenon is exploited in nuclear fusion reactors. It also occurs naturally in the cosmos, in the form of jets, nebula winds, and cosmic filaments that carry charged particles; such cosmic currents are called Birkeland currents. Considered in general, the electrical conductivity of plasmas is a subject of plasma physics.

Biopsychiatry controversy

From Wikipedia, the free encyclopedia

The biopsychiatry controversy is a dispute over which viewpoint should predominate and form a basis of psychiatric theory and practice. The debate is a criticism of a claimed strict biological view of psychiatric thinking. Its critics include disparate groups such as the antipsychiatry movement and some academics.

Overview of opposition to biopsychiatry

Biological psychiatry, or biopsychiatry, aims to investigate the determinants of mental disorders and to devise remedial measures of a primarily somatic nature.

This has been criticized by Alvin Pam for being a "stilted, unidimensional, and mechanistic world-view", so that subsequent "research in psychiatry has been geared toward discovering which aberrant genetic or neurophysiological factors underlie and cause social deviance". According to Pam, the "blame the body" approach, which typically offers medication for mental distress, shifts the focus from disturbed behavior in the family to putative biochemical imbalances.

Research issues

2003 status in biopsychiatric research

Biopsychiatric research has produced reproducible abnormalities of brain structure and function, as well as a strong genetic component for a number of psychiatric disorders (although the latter has been shown to be correlative rather than causative). It has also elucidated some of the mechanisms of action of medications that effectively treat some of these disorders. Still, by researchers' own admission, this research has not progressed to the stage at which clear biomarkers of these disorders can be identified.

Research has shown that serious neurobiological disorders such as schizophrenia reveal reproducible abnormalities of brain structure (such as ventricular enlargement) and function. Compelling evidence exists that disorders including schizophrenia, bipolar disorder, and autism to name a few have a strong genetic component. Still, brain science has not advanced to the point where scientists or clinicians can point to readily discernible pathologic lesions or genetic abnormalities that in and of themselves serve as reliable or predictive biomarkers of a given mental disorder or mental disorders as a group. Ultimately, no gross anatomical lesion such as a tumor may ever be found; rather, mental disorders will likely be proven to represent disorders of intercellular communication; or of disrupted neural circuitry. Research already has elucidated some of the mechanisms of action of medications that are effective for depression, schizophrenia, anxiety, attention deficit, and cognitive disorders such as Alzheimer's disease. These medications clearly exert influence on specific neurotransmitters, naturally occurring brain chemicals that effect, or regulate, communication between neurons in regions of the brain that control mood, complex reasoning, anxiety, and cognition. In 1970, The Nobel Prize was awarded to Julius Axelrod, Ph.D., of the National Institute of Mental Health, for his discovery of how anti-depressant medications regulate the availability of neurotransmitters such as norepinephrine in the synapses, or gaps, between nerve cells.

— American Psychiatric Association, Statement on Diagnosis and Treatment of Mental Disorders

Focus on genetic factors

Researchers have proposed that most common psychiatric and drug abuse disorders can be traced to a small number of dimensions of genetic risk and reports show significant associations between specific genomic regions and psychiatric disorders. However, to date, only a few genetic lesions have been demonstrated to be mechanistically responsible for psychiatric conditions. For example, one reported finding suggests that in persons diagnosed with schizophrenia as well as in their relatives with chronic psychiatric illnesses, the gene that encodes phosphodiesterase 4B (PDE4B) is disrupted by a balanced translocation.

The reason for the relative lack of genetic understanding is that the links between genes and mental states defined as abnormal appear highly complex, involve extensive environmental influences, and can be mediated in numerous different ways, for example by personality, temperament, or life events. Therefore, while twin studies and other research suggest that personality is heritable to some extent, finding the genetic basis for particular personality or temperament traits, and their links to mental health problems, is "at least as hard as the search for genes involved in other complex disorders." Theodore Lidz and Jay Joseph, author of The Gene Illusion, argue that biopsychiatrists use genetic terminology in an unscientific way to reinforce their approach. Joseph maintains that biopsychiatrists disproportionately focus on understanding the genetics of those individuals with mental health problems at the expense of addressing the problems of living in the environments of some extremely abusive families or societies.

Focus on biochemical factors

The chemical imbalance hypothesis states that a chemical imbalance within the brain is the main cause of psychiatric conditions and that these conditions can be improved with medication that corrects this imbalance. On this view, emotions within a "normal" spectrum reflect a proper balance of neurotransmitter function, whereas abnormally extreme emotions severe enough to impact the daily functioning of patients (as seen in schizophrenia) reflect a profound imbalance. The goal of psychiatric intervention, therefore, is to regain, via psychopharmacological approaches, the homeostasis that existed before the onset of the disease.

The scientific community has debated this conceptual framework, although no other demonstrably superior hypothesis has emerged. Recently, the biopsychosocial approach to mental illness has been shown to be the most comprehensive and applicable theory for understanding psychiatric disorders. However, there is still much to be discovered in this area of inquiry. While great strides have been made in understanding certain psychiatric disorders (such as schizophrenia), others (such as major depressive disorder) involve multiple different neurotransmitters interacting in a complex array of systems that are, as yet, not completely understood.

Reductionism

Niall McLaren emphasizes in his books Humanizing Madness and Humanizing Psychiatry that the major problem with psychiatry is that it lacks a unified model of the mind and has become entrapped in a biological reductionist paradigm. The reasons for this biological shift are intuitive, as reductionism has been very effective in other fields of science and medicine. However, despite reductionism's efficacy in explaining the smallest parts of the brain, it does not explain the mind, which is where he contends the majority of psychopathology stems from. An example would be that every aspect of a computer can be understood scientifically down to the last atom; however, this does not reveal the program that drives the hardware. He also argues that the widespread acceptance of the reductionist paradigm leads to a lack of openness to self-criticism, "a smugness that stops the very engine of scientific progress." He has proposed his own natural dualist model of the mind, the biocognitive model, which is rooted in the theories of David Chalmers and Alan Turing and does not fall into the "dualist's trap" of spiritualism.

Economic influences on psychiatric practice

American Psychiatric Association president Steven S. Sharfstein, M.D., has stated that when the profit motive of pharmaceutical companies and human good are aligned, the results are mutually beneficial: "Pharmaceutical companies have developed and brought to market medications that have transformed the lives of millions of psychiatric patients. The proven effectiveness of antidepressant, mood-stabilizing, and antipsychotic medications has helped sensitize the public to the reality of mental illness and taught them that treatment works. In this way, Big Pharma has helped reduce stigma associated with psychiatric treatment and with psychiatrists." However, Sharfstein acknowledged that the goals of individual physicians who deliver direct patient care can differ from those of the pharmaceutical and medical device industry. Conflicts arising from this disparity raise natural concerns, including:

  • a "broken health care system" that allows "many patients [to be] prescribed the wrong drugs or drugs they don't need";
  • "medical education opportunities sponsored by pharmaceutical companies [that] are often biased toward one product or another";
  • "[d]irect marketing to consumers [that] also leads to increased demand for medications and inflates expectations about the benefits of medications";
  • "drug companies [paying] physicians to allow company reps to sit in on patient sessions to learn more about care for patients."

Nevertheless, Sharfstein acknowledged that without pharmaceutical companies developing and producing modern medicines, virtually every medical specialty would have few (if any) treatments for the patients that they care for.

Pharmaceutical industry influences in psychiatry

Studies have shown that promotional marketing by pharmaceutical and other companies has the potential to influence physicians' decision making. Pharmaceutical manufacturers (and other advocates) argue that in today's world physicians simply do not have the time to continually update their knowledge base on the status of the latest research; that by providing educational materials for both physicians and patients, they offer an educational perspective; and that it is up to the individual physician to decide which treatment is best for their patients. Such promotional and educationally-based activities nevertheless became the basis for legal and industry reforms involving physician gifts, industry influence in graduate medical education, physician disclosure of conflicts of interest, and other promotional practices.

In an essay on the effect of advertisements on sales of marketed antidepressants, evidence showed that both patients and physicians can be influenced by media advertisements, and that this influence can increase the frequency with which certain medicines are prescribed over others.

Thermal conduction

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Thermal_conduction

Thermal conduction is the diffusion of thermal energy (heat) within one material or between materials in contact. The higher-temperature object has molecules with more kinetic energy; collisions between molecules distribute this kinetic energy until an object has the same kinetic energy throughout. Thermal conductivity, frequently represented by k, is a property that relates the rate of heat flow per unit area of a material to the temperature gradient across it; essentially, it is a value that accounts for any property of the material that could change the way it conducts heat. Heat spontaneously flows along a temperature gradient (i.e. from a hotter body to a colder body). For example, heat is conducted from the hotplate of an electric stove to the bottom of a saucepan in contact with it. In the absence of an opposing external driving energy source, within a body or between bodies, temperature differences decay over time, thermal equilibrium is approached, and the temperature becomes more uniform.

Every process involving heat transfer takes place by only three methods:

  1. Conduction is heat transfer through stationary matter by physical contact. (The matter is stationary on a macroscopic scale—we know there is thermal motion of the atoms and molecules at any temperature above absolute zero.) Heat transferred between the electric burner of a stove and the bottom of a pan is transferred by conduction.
  2. Convection is the heat transfer by the macroscopic movement of a fluid. This type of transfer takes place in a forced-air furnace and in weather systems, for example.
  3. Heat transfer by radiation occurs when microwaves, infrared radiation, visible light, or another form of electromagnetic radiation is emitted or absorbed. An obvious example is the warming of the Earth by the Sun. A less obvious example is thermal radiation from the human body.

Overview

A region with greater thermal energy (heat) corresponds with greater molecular agitation. Thus when a hot object touches a cooler surface, the highly agitated molecules from the hot object bump the calm molecules of the cooler surface, transferring the microscopic kinetic energy and causing the colder part or object to heat up. Mathematically, thermal conduction works just like diffusion. Thermal conduction increases as the temperature difference increases, as the distance the heat must travel decreases, or as the cross-sectional area increases (a numerical sketch follows the list below):

P = k · A · ΔT / x

Where:

  • P is the thermal conduction or power (the heat transferred per unit time over some distance between the two temperatures),
  • k is the thermal conductivity of the material,
  • A is the cross-sectional area of the object,
  • ΔT is the difference in temperature from one side to the other,
  • x is the distance over which the heat is transferred.
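To make the proportionalities concrete, here is a minimal sketch evaluating P = k·A·ΔT/x for a single glass pane; all the numbers are illustrative assumptions, not values from the article.

```python
# Heat conducted through a window pane, using P = k * A * dT / x.
k = 0.96      # thermal conductivity of glass, W/(m K) (typical value)
A = 1.5       # pane area, m^2
dT = 20.0     # temperature difference across the pane, K
x = 0.004     # pane thickness, m

P = k * A * dT / x
print(f"{P:.0f} W")  # ~7200 W
```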

Conduction is the main mode of heat transfer for solid materials because the strong inter-molecular forces allow the vibrations of particles to be easily transmitted, in comparison to liquids and gases. Liquids have weaker inter-molecular forces and more space between the particles, which makes the vibrations of particles harder to transmit. Gases have even more space, and therefore infrequent particle collisions. This makes liquids and gases poor conductors of heat.

Thermal contact conductance is the study of heat conduction between solid bodies in contact. A temperature drop is often observed at the interface between the two surfaces in contact. This phenomenon is said to be a result of a thermal contact resistance existing between the contacting surfaces. Interfacial thermal resistance is a measure of an interface's resistance to thermal flow. This thermal resistance differs from contact resistance, as it exists even at atomically perfect interfaces. Understanding the thermal resistance at the interface between two materials is of primary significance in the study of their thermal properties. Interfaces often contribute significantly to the observed properties of the materials.

The inter-molecular transfer of energy can be primarily by elastic impact, as in fluids, by free-electron diffusion, as in metals, or by phonon vibration, as in insulators. In insulators, the heat flux is carried almost entirely by phonon vibrations.

Metals (e.g., copper, platinum, gold, etc.) are usually good conductors of thermal energy. This is due to the way that metals bond chemically: metallic bonds (as opposed to covalent or ionic bonds) have free-moving electrons that transfer thermal energy rapidly through the metal. The electron fluid of a conductive metallic solid conducts most of the heat flux through the solid. Phonon flux is still present but carries less of the energy. Electrons also conduct electric current through conductive solids, and the ratio of thermal to electrical conductivity is about the same for most metals (the Wiedemann–Franz law). A good electrical conductor, such as copper, also conducts heat well. Thermoelectricity is caused by the interaction of heat flux and electric current. Heat conduction within a solid is directly analogous to diffusion of particles within a fluid, in the situation where there are no fluid currents.

In gases, heat transfer occurs through collisions of gas molecules with one another. In the absence of convection, which relates to a moving fluid or gas phase, thermal conduction through a gas phase is highly dependent on the composition and pressure of this phase, and in particular on the mean free path of gas molecules relative to the size of the gas gap, as given by the Knudsen number, Kn.

To quantify the ease with which a particular medium conducts, engineers employ the thermal conductivity, also known as the conductivity constant or conduction coefficient, k. Thermal conductivity k is defined as "the quantity of heat, Q, transmitted in time (t) through a thickness (L), in a direction normal to a surface of area (A), due to a temperature difference (ΔT) [...]". Thermal conductivity is a material property that is primarily dependent on the medium's phase, temperature, density, and molecular bonding. Thermal effusivity is a quantity derived from conductivity; it is a measure of a material's ability to exchange thermal energy with its surroundings.

Steady-state conduction

Steady-state conduction is the form of conduction that happens when the temperature difference(s) driving the conduction are constant, so that (after an equilibration time) the spatial distribution of temperatures (the temperature field) in the conducting object does not change any further. Thus, the partial derivatives of temperature with respect to space may be zero or nonzero, but all derivatives of temperature at any point with respect to time are uniformly zero. In steady-state conduction, the amount of heat entering any region of an object is equal to the amount of heat coming out (if this were not so, the temperature would be rising or falling, as thermal energy was trapped in or drained from a region).

For example, a bar may be cold at one end and hot at the other, but after a state of steady-state conduction is reached, the spatial gradient of temperatures along the bar does not change any further, as time proceeds. Instead, the temperature remains constant at any given cross-section of the rod normal to the direction of heat transfer, and this temperature varies linearly in space in the case where there is no heat generation in the rod.

In steady-state conduction, all the laws of direct current electrical conduction can be applied to "heat currents". In such cases, it is possible to take "thermal resistances" as the analog to electrical resistances. In such cases, temperature plays the role of voltage, and heat transferred per unit time (heat power) is the analog of electric current. Steady-state systems can be modeled by networks of such thermal resistances in series and parallel, in exact analogy to electrical networks of resistors. See purely resistive thermal circuits for an example of such a network.

Transient conduction

During any period in which the temperature changes in time at any place within an object, the mode of thermal energy flow is termed transient conduction. Another term is "non-steady-state" conduction, referring to the time-dependence of temperature fields in an object. Non-steady-state situations appear after an imposed change in temperature at a boundary of an object. They may also occur with temperature changes inside an object, as a result of a new source or sink of heat suddenly introduced within the object, causing temperatures near the source or sink to change in time.

When a new perturbation of temperature of this type happens, temperatures within the system change in time toward a new equilibrium with the new conditions, provided that these do not change. After equilibrium, heat flow into the system once again equals the heat flow out, and temperatures at each point inside the system no longer change. Once this happens, transient conduction is ended, although steady-state conduction may continue if heat flow continues.

If changes in external temperature or in internal heat generation are too rapid for the equilibrium of temperatures in space to take place, then the system never reaches a state of unchanging temperature distribution in time, and the system remains in a transient state.

An example of a new source of heat "turning on" within an object, causing transient conduction, is an engine starting in an automobile. In this case, the transient thermal conduction phase for the entire machine is over, and the steady-state phase appears, as soon as the engine reaches steady-state operating temperature. In this state of steady-state equilibrium, temperatures vary greatly from the engine cylinders to other parts of the automobile, but at no point in space within the automobile does temperature increase or decrease. After establishing this state, the transient conduction phase of heat transfer is over.

New external conditions also cause this process: for example, the copper bar in the steady-state conduction example above experiences transient conduction as soon as one end is subjected to a different temperature from the other. Over time, the field of temperatures inside the bar reaches a new steady state, in which a constant temperature gradient along the bar is finally set up, and this gradient then stays constant in time. Typically, such a new steady-state gradient is approached exponentially with time after a new source or sink of heat has been introduced. When the "transient conduction" phase is over, heat flow may continue at high power, so long as temperatures do not change.

An example of transient conduction that does not end with steady-state conduction, but rather no conduction, occurs when a hot copper ball is dropped into oil at a low temperature. Here, the temperature field within the object begins to change as a function of time, as the heat is removed from the metal, and the interest lies in analyzing this spatial change of temperature within the object over time until all gradients disappear entirely (the ball has reached the same temperature as the oil). Mathematically, this condition is also approached exponentially; in theory, it takes infinite time, but in practice, it is over, for all intents and purposes, in a much shorter period. At the end of this process with no heat sink but the internal parts of the ball (which are finite), there is no steady-state heat conduction to reach. Such a state never occurs in this situation, but rather the end of the process is when there is no heat conduction at all.

The analysis of non-steady-state conduction systems is more complex than that of steady-state systems. If the conducting body has a simple shape, then exact analytical mathematical expressions and solutions may be possible (see heat equation for the analytical approach). Most often, however, because of complicated shapes with varying thermal conductivities (i.e., most complex objects, mechanisms or machines in engineering), the application of approximate theories and/or numerical analysis by computer is required. One popular graphical method involves the use of Heisler charts.

Occasionally, transient conduction problems may be considerably simplified if regions of the object being heated or cooled can be identified for which thermal conductivity is very much greater than that for heat paths leading into the region. In this case, the region with high conductivity can often be treated in the lumped capacitance model, as a "lump" of material with a simple thermal capacitance consisting of its aggregate heat capacity. Such regions warm or cool, but show no significant temperature variation across their extent during the process (as compared to the rest of the system), due to their far higher conductance. During transient conduction, therefore, the temperature across such conductive regions changes uniformly in space and as a simple exponential in time. Examples of such systems are those that follow Newton's law of cooling during transient cooling (or the reverse during heating). The equivalent thermal circuit consists of a simple capacitor in series with a resistor. In such cases, the remainder of the system with a high thermal resistance (comparatively low conductivity) plays the role of the resistor in the circuit.
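As a rough sketch of the lumped-capacitance idea, the code below models a small copper ball cooling in oil as a single RC-like exponential, T(t) = T_oil + (T0 − T_oil)·e^(−t/τ) with τ = C/(hA). The convective coefficient and geometry are illustrative assumptions.

```python
import numpy as np

# Lumped-capacitance (Newton's law of cooling) sketch: a small copper ball
# cooling in oil. All values are illustrative assumptions.
T0, T_oil = 400.0, 300.0   # initial ball and oil temperatures, K
h = 100.0                  # convective coefficient at the surface, W/(m^2 K)
r = 0.01                   # ball radius, m
rho, c = 8960.0, 385.0     # density (kg/m^3) and specific heat (J/(kg K)) of copper

A = 4.0 * np.pi * r**2                    # surface area
C = rho * c * (4.0 / 3.0) * np.pi * r**3  # aggregate heat capacity (the "lump")
tau = C / (h * A)                         # time constant R*C, with R = 1/(h*A)

t = np.linspace(0.0, 5.0 * tau, 6)
T = T_oil + (T0 - T_oil) * np.exp(-t / tau)
print(f"tau = {tau:.0f} s")   # ~115 s
print(T)                      # decays exponentially toward T_oil
```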

Relativistic conduction

The theory of relativistic heat conduction is a model that is compatible with the theory of special relativity. For most of the last century, it was recognized that the Fourier equation is in contradiction with the theory of relativity because it admits an infinite speed of propagation of heat signals. For example, according to the Fourier equation, a pulse of heat at the origin would be felt at infinity instantaneously. The speed of information propagation is faster than the speed of light in vacuum, which is physically inadmissible within the framework of relativity.

Quantum conduction

Second sound is a quantum mechanical phenomenon in which heat transfer occurs by wave-like motion, rather than by the more usual mechanism of diffusion; heat takes the place of pressure in normal sound waves. This leads to a very high thermal conductivity. It is known as "second sound" because the wave motion of heat is similar to the propagation of sound in air.

Fourier's law

The law of heat conduction, also known as Fourier's law (compare Fourier's heat equation), states that the rate of heat transfer through a material is proportional to the negative gradient in the temperature and to the area, at right angles to that gradient, through which the heat flows. We can state this law in two equivalent forms: the integral form, in which we look at the amount of energy flowing into or out of a body as a whole, and the differential form, in which we look at the flow rates or fluxes of energy locally.

Newton's law of cooling is a discrete analogue of Fourier's law, while Ohm's law is the electrical analogue of Fourier's law and Fick's laws of diffusion are its chemical analogue.

Differential form

The differential form of Fourier's law of thermal conduction shows that the local heat flux density q is equal to the product of the thermal conductivity k and the negative local temperature gradient −∇T. The heat flux density is the amount of energy that flows through a unit area per unit time:

q = −k ∇T

where (including the SI units):

  • q is the local heat flux density, W/m²,
  • k is the material's conductivity, W/(m·K),
  • ∇T is the temperature gradient, K/m.

The thermal conductivity $k$ is often treated as a constant, though this is not always true. While the thermal conductivity of a material generally varies with temperature, the variation can be small over a significant range of temperatures for some common materials. In anisotropic materials, the thermal conductivity typically varies with orientation; in this case $k$ is represented by a second-order tensor. In non-uniform materials, $k$ varies with spatial location.

For many simple applications, Fourier's law is used in its one-dimensional form, for example, in the x direction:

$$q_x = -k \frac{dT}{dx}.$$

In an isotropic medium, Fourier's law leads to the heat equation with a fundamental solution famously known as the heat kernel.
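For illustration, here is a minimal sketch evaluating that heat kernel; the diffusivity value is assumed purely for demonstration:

```python
import math

def heat_kernel(x, t, alpha):
    """Fundamental solution of the 1-D heat equation:
    Phi(x, t) = exp(-x^2 / (4*alpha*t)) / sqrt(4*pi*alpha*t)."""
    return math.exp(-x**2 / (4.0 * alpha * t)) / math.sqrt(4.0 * math.pi * alpha * t)

alpha = 1.0e-4  # assumed thermal diffusivity, m^2/s
# A unit pulse of heat at the origin spreads into a widening Gaussian:
for t in (1.0, 10.0, 100.0):
    print(f"t = {t:6.1f} s  Phi(0, t) = {heat_kernel(0.0, t, alpha):.3f}")
```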

Integral form

By integrating the differential form over the material's total surface $S$, we arrive at the integral form of Fourier's law:

$$\frac{\partial Q}{\partial t} = -k \oiint_S \nabla T \cdot d\mathbf{S},$$

where (including the SI units):

  • $\frac{\partial Q}{\partial t}$ is the amount of heat transferred per unit time, W,
  • $d\mathbf{S}$ is an oriented surface area element, m².

The above differential equation, when integrated for a homogeneous material of 1-D geometry between two endpoints at constant temperature, gives the heat flow rate as

$$\frac{\Delta Q}{\Delta t} = -k A \frac{\Delta T}{\Delta x},$$

where

  • $\Delta t$ is the time interval during which the amount of heat $\Delta Q$ flows through a cross-section of the material,
  • $A$ is the cross-sectional surface area,
  • $\Delta T$ is the temperature difference between the ends,
  • $\Delta x$ is the distance between the ends.

One can define the (macroscopic) thermal resistance of the 1-D homogeneous material:

$$R = \frac{\Delta x}{k A}.$$

With a simple 1-D steady heat conduction equation

$$\frac{\Delta Q}{\Delta t} = \frac{\Delta T}{R},$$

which is analogous to Ohm's law for a simple electrical resistance, $I = \frac{V}{R}$.
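A minimal numerical sketch of this resistance analogy, with material values assumed for illustration:

```python
# Steady 1-D conduction through a slab, using the Ohm's-law analogy.
# Illustrative values: a 0.10 m concrete wall (k ~ 1.0 W/(m K)), area 10 m^2.
k = 1.0          # thermal conductivity, W/(m K)
A = 10.0         # cross-sectional area, m^2
dx = 0.10        # thickness, m
dT = 20.0        # temperature difference across the slab, K

R = dx / (k * A)     # thermal resistance, K/W
Q = dT / R           # heat flow rate, W (analogue of I = V/R)

print(f"R = {R:.4f} K/W, Q = {Q:.0f} W")
```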

This law forms the basis for the derivation of the heat equation.

Conductance

Writing

$$U = \frac{k}{\Delta x},$$

where U is the conductance, in W/(m²·K).

Fourier's law can also be stated as:

$$\frac{\Delta Q}{\Delta t} = U A\, \Delta T.$$

The reciprocal of conductance is resistance, $R$, given by:

$$R = \frac{1}{U} = \frac{\Delta x}{k}.$$

Resistance is additive when several conducting layers lie between the hot and cool regions, because A and Q are the same for all layers. In a multilayer partition, the total conductance is related to the conductance of its layers by:

$$\frac{1}{U} = \frac{1}{U_1} + \frac{1}{U_2} + \frac{1}{U_3} + \cdots$$

or equivalently

$$R = R_1 + R_2 + R_3 + \cdots.$$

So, when dealing with a multilayer partition, the following formula is usually used:

$$\frac{\Delta Q}{\Delta t} = \frac{A\, \Delta T}{\frac{\Delta x_1}{k_1} + \frac{\Delta x_2}{k_2} + \frac{\Delta x_3}{k_3} + \cdots}.$$
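For example, a sketch applying the multilayer formula to a three-layer wall; the layer data are invented for illustration:

```python
# Series (multilayer) wall: resistances add because the same heat
# flow crosses every layer. Assumed layer data:
# (thickness m, conductivity W/(m K)) for brick, insulation, plasterboard.
layers = [(0.10, 0.6), (0.05, 0.04), (0.012, 0.2)]
A = 15.0    # wall area, m^2
dT = 25.0   # overall temperature difference, K

R_total = sum(dx / (k * A) for dx, k in layers)   # K/W
Q = dT / R_total                                  # W

print(f"R_total = {R_total:.4f} K/W, Q = {Q:.0f} W")
```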

For heat conduction from one fluid to another through a barrier, it is sometimes important to consider the conductance of the thin film of fluid that remains stationary next to the barrier. This thin film of fluid is difficult to quantify because its characteristics depend upon complex conditions of turbulence and viscosity—but when dealing with thin high-conductance barriers it can sometimes be quite significant.

Intensive-property representation

The previous conductance equations, written in terms of extensive properties, can be reformulated in terms of intensive properties. Ideally, the formulae for conductance should produce a quantity with dimensions independent of distance, like Ohm's law for electrical resistance, $R = V/I$, and conductance, $G = I/V$.

From the electrical formula $R = \rho x / A$, where ρ is resistivity, x is length, and A is cross-sectional area, we have $G = kA/x$, where G is conductance, k is conductivity, x is length, and A is cross-sectional area.

For heat,

$$U = \frac{k A}{\Delta x},$$

where U is the conductance.

Fourier's law can also be stated as:

$$\frac{\Delta Q}{\Delta t} = U\, \Delta T,$$

analogous to Ohm's law, $I = V/R$ or $I = VG$.

The reciprocal of conductance is resistance, R, given by:

$$R = \frac{\Delta T}{\Delta Q / \Delta t},$$

analogous to Ohm's law, $R = V/I$.

The rules for combining resistances and conductances (in series and parallel) are the same for both heat flow and electric current.
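Since the combination rules mirror those of electric circuits, they can be written generically; a sketch, with the resistance values chosen only for illustration:

```python
def series(*resistances):
    """Thermal resistances in series add, like electrical resistors."""
    return sum(resistances)

def parallel(*resistances):
    """Parallel heat paths combine by adding conductances (reciprocals)."""
    return 1.0 / sum(1.0 / r for r in resistances)

# Example: two parallel heat paths of 0.5 and 2.0 K/W (say, studs and
# insulation), followed in series by a 0.1 K/W surface film.
print(series(parallel(0.5, 2.0), 0.1))  # -> 0.5 K/W
```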

Cylindrical shells

Conduction through cylindrical shells (e.g. pipes) can be calculated from the internal radius, $r_1$, the external radius, $r_2$, the length, $\ell$, and the temperature difference between the inner and outer wall, $T_2 - T_1$.

The surface area of the cylinder is

$$A_r = 2\pi r \ell.$$

When Fourier's equation is applied:

$$\dot{Q} = -k A_r \frac{dT}{dr} = -2\pi k \ell r \frac{dT}{dr}$$

and rearranged:

$$\dot{Q} \int_{r_1}^{r_2} \frac{1}{r}\, dr = -2\pi k \ell \int_{T_1}^{T_2} dT,$$

then the rate of heat transfer is:

$$\dot{Q} = \frac{2\pi k \ell\, (T_1 - T_2)}{\ln(r_2 / r_1)},$$

the thermal resistance is:

$$R_c = \frac{\ln(r_2 / r_1)}{2\pi k \ell},$$

and

$$\dot{Q} = 2\pi k \ell r_m \frac{T_1 - T_2}{r_2 - r_1},$$

where $r_m = \frac{r_2 - r_1}{\ln(r_2 / r_1)}$. It is important to note that this is the log-mean radius.
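A short sketch of the cylindrical-shell formula; the pipe dimensions and conductivity are assumed for illustration:

```python
import math

def pipe_heat_flow(k, length, r1, r2, T1, T2):
    """Radial heat flow rate (W) through a cylindrical shell:
    Q = 2*pi*k*l*(T1 - T2) / ln(r2/r1)."""
    return 2.0 * math.pi * k * length * (T1 - T2) / math.log(r2 / r1)

# Illustrative values: 1 m of steel pipe (k ~ 45 W/(m K)),
# inner radius 25 mm, outer radius 30 mm, 60 K across the wall.
print(f"{pipe_heat_flow(45.0, 1.0, 0.025, 0.030, 80.0, 20.0):.0f} W")
```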

Spherical

The conduction through a spherical shell with internal radius, $r_1$, and external radius, $r_2$, can be calculated in a similar manner as for a cylindrical shell.

The surface area of the sphere is:

$$A_r = 4\pi r^2.$$

Solving in a similar manner as for a cylindrical shell (see above) produces:

$$\dot{Q} = 4\pi k r_1 r_2 \frac{T_1 - T_2}{r_2 - r_1}.$$
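And the spherical counterpart, again with figures assumed only for illustration:

```python
import math

def sphere_heat_flow(k, r1, r2, T1, T2):
    """Radial heat flow rate (W) through a spherical shell:
    Q = 4*pi*k*r1*r2*(T1 - T2) / (r2 - r1)."""
    return 4.0 * math.pi * k * r1 * r2 * (T1 - T2) / (r2 - r1)

# Illustrative values: 5 cm of insulation (k ~ 0.05 W/(m K))
# around a 0.5 m radius vessel held 40 K above its surroundings.
print(f"{sphere_heat_flow(0.05, 0.50, 0.55, 60.0, 20.0):.1f} W")
```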

Transient thermal conduction

Interface heat transfer

The heat transfer at an interface is considered a transient heat flow. To analyze this problem, the Biot number is important for understanding how the system behaves. The Biot number is determined by:

$$\text{Bi} = \frac{h L}{k},$$

where the heat transfer coefficient $h$, introduced in this formula, is measured in W/(m²·K), and $L$ is a characteristic length of the body (often the volume-to-surface-area ratio). If the system has a Biot number of less than 0.1, the material behaves according to Newtonian cooling, i.e. with negligible temperature gradient within the body. If the Biot number is greater than 0.1, there is a noticeable temperature gradient within the material, and a series solution is required to describe the temperature profile.

In the Newtonian case, the cooling equation is

$$\rho c_p V \frac{dT}{dt} = -h A\, (T - T_f).$$

This leads to the dimensionless form of the temperature profile as a function of time:

$$\frac{T - T_f}{T_i - T_f} = \exp\!\left(-\frac{h A}{\rho c_p V}\, t\right).$$

This equation shows that the temperature decreases exponentially over time, with the rate governed by the properties of the material and the heat transfer coefficient. The heat transfer coefficient, h, represents the transfer of heat at an interface between two materials; its value is different at every interface, and it is an important concept in understanding heat flow at an interface.

The series solution can be analyzed with a nomogram. A nomogram has relative temperature as the y coordinate and the Fourier number, which is calculated by

$$\text{Fo} = \frac{\alpha t}{L^2},$$

as the x coordinate, where $\alpha$ is the thermal diffusivity.

The Biot number increases as the Fourier number decreases. There are five steps to determine a temperature profile in terms of time.

  1. Calculate the Biot number.
  2. Determine which relative depth matters, either x or L.
  3. Convert time to the Fourier number.
  4. Convert to relative temperature with the boundary conditions.
  5. Trace the point fixed by the required Fourier number and relative temperature along the curve for the specified Biot number on the nomogram.
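A sketch of steps 1 and 3 for a hypothetical steel plate (all values are assumed for illustration); steps 4 and 5 still require reading the nomogram by hand:

```python
# Steps 1 and 3 of the procedure above, for an assumed steel plate.
h = 500.0       # heat transfer coefficient of quenching bath, W/(m^2 K)
k = 40.0        # conductivity of steel, W/(m K)
rho = 7850.0    # density, kg/m^3
cp = 470.0      # specific heat, J/(kg K)
L = 0.02        # half-thickness (characteristic length), m
t = 30.0        # elapsed time, s

alpha = k / (rho * cp)     # thermal diffusivity, m^2/s
Bi = h * L / k             # step 1: Biot number
Fo = alpha * t / L**2      # step 3: Fourier number

print(f"Bi = {Bi:.2f}, Fo = {Fo:.2f}")
# Bi > 0.1 here, so the series solution (nomogram) applies rather than lumping.
```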

Applications

Splat cooling

Splat cooling is a method for quenching small droplets of molten materials by rapid contact with a cold surface. The particles undergo a characteristic cooling process, with the heat profile at $t = 0$ (the initial temperature a maximum at $x = 0$, and $T = 0$ at $x = \infty$ and $x = -\infty$) serving as the initial and boundary conditions. Splat cooling rapidly ends in a steady-state temperature, and is similar in form to the Gaussian diffusion equation. The temperature profile, with respect to the position and time of this type of cooling, varies with:

$$T(x, t) - T_i = \frac{T_i\, \Delta X}{2\sqrt{\pi \alpha t}} \exp\!\left(-\frac{x^2}{4 \alpha t}\right).$$

Splat cooling is a fundamental concept that has been adapted for practical use in the form of thermal spraying. The thermal diffusivity coefficient, represented as $\alpha$, can be written as

$$\alpha = \frac{k}{\rho c_p}.$$

This varies according to the material.
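A sketch evaluating this Gaussian profile; the diffusivity, initial temperature, and splat thickness ΔX are assumed for illustration:

```python
import math

def splat_profile(x, t, T_i, dX, alpha):
    """Temperature rise T(x,t) - T_i of the splat-cooling profile above:
    T_i*dX / (2*sqrt(pi*alpha*t)) * exp(-x^2 / (4*alpha*t))."""
    return (T_i * dX / (2.0 * math.sqrt(math.pi * alpha * t))
            * math.exp(-x**2 / (4.0 * alpha * t)))

# Assumed values: aluminium-like diffusivity and a 50-micron splat.
alpha = 9.7e-5   # thermal diffusivity k/(rho*cp), m^2/s
for t in (1e-6, 1e-5, 1e-4):
    dT = splat_profile(0.0, t, 900.0, 50e-6, alpha)
    print(f"t = {t:.0e} s  dT(0) = {dT:.1f} K")
```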

Metal quenching

Metal quenching is a transient heat transfer process described by the time-temperature transformation (TTT) diagram. It is possible to manipulate the cooling process to adjust the phase of a suitable material. For example, appropriate quenching of steel can convert a desirable proportion of its content of austenite to martensite, creating a very hard and strong product. To achieve this, it is necessary to quench at the "nose" (or eutectic) of the TTT diagram. Since materials differ in their Biot numbers, the time it takes for the material to quench, or the Fourier number, varies in practice. In steel, the quenching temperature range is generally from 600 °C to 200 °C. To control the quenching time and to select suitable quenching media, it is necessary to determine the Fourier number from the desired quenching time, the relative temperature drop, and the relevant Biot number. Usually, the correct figures are read from a standard nomogram. By calculating the heat transfer coefficient from this Biot number, one can find a liquid medium suitable for the application.
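Inverting the Biot relation gives the heat transfer coefficient the quenching medium must supply; a sketch with figures assumed only for illustration:

```python
# Inverting Bi = h*L/k to find the heat transfer coefficient a
# quenching medium must provide. All figures are assumed for illustration.
k = 40.0            # conductivity of the steel part, W/(m K)
L = 0.01            # characteristic length, m
Bi_required = 5.0   # Biot number read from the nomogram for the target cooling

h_required = Bi_required * k / L    # W/(m^2 K)
print(f"required h = {h_required:.0f} W/(m^2 K)")
# A table of media (e.g. oil vs. brine vs. agitated water) would then be
# consulted for a liquid whose h meets or exceeds this value.
```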

Zeroth law of thermodynamics

One statement of the so-called zeroth law of thermodynamics is directly focused on the idea of conduction of heat. Bailyn (1994) writes that "the zeroth law may be stated: All diathermal walls are equivalent".

A diathermal wall is a physical connection between two bodies that allows the passage of heat between them. Bailyn is referring to diathermal walls that exclusively connect two bodies, especially conductive walls.

This statement of the "zeroth law" belongs to an idealized theoretical discourse, and actual physical walls may have peculiarities that do not conform to its generality.

For example, the material of the wall must not undergo a phase transition, such as evaporation or fusion, at the temperature at which it must conduct heat. But when only thermal equilibrium is considered and time is not urgent, so that the conductivity of the material does not matter too much, one suitable heat conductor is as good as another. Conversely, another aspect of the zeroth law is that, subject again to suitable restrictions, a given diathermal wall is indifferent to the nature of the heat bath to which it is connected. For example, the glass bulb of a thermometer acts as a diathermal wall whether exposed to a gas or a liquid, provided that they do not corrode or melt it.

These differences are among the defining characteristics of heat transfer. In a sense, they are symmetries of heat transfer.

Instruments

Thermal conductivity analyzer

The thermal conduction property of any gas under standard conditions of pressure and temperature is a fixed quantity. This property of a known reference gas, or of known reference gas mixtures, can therefore be used for certain sensory applications, such as the thermal conductivity analyzer.

In principle, this instrument is based on a Wheatstone bridge containing four filaments whose resistances are matched. Whenever a certain gas is passed over such a network of filaments, their temperature, and hence their resistance, changes because the surrounding gas conducts heat away from them at a different rate, thereby changing the net voltage output of the Wheatstone bridge. This voltage output is then correlated with a database to identify the gas sample.
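An idealized sketch of the bridge arithmetic; the filament resistances and the 2% resistance shift are assumed for illustration, and a real instrument correlates the output against a calibration database:

```python
def bridge_output(V_in, R1, R2, R3, R4):
    """Output of an ideal Wheatstone bridge: the difference between
    the two voltage dividers (R1, R2) and (R3, R4)."""
    return V_in * (R2 / (R1 + R2) - R4 / (R3 + R4))

# Matched filaments -> balanced bridge, zero output:
print(bridge_output(5.0, 100.0, 100.0, 100.0, 100.0))   # 0.0 V
# Sample gas conducts heat away faster, cooling two filaments and
# lowering their resistance by an assumed 2%:
print(bridge_output(5.0, 100.0, 98.0, 98.0, 100.0))     # about -0.05 V
```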

Gas sensor

The principle of thermal conductivity of gases can also be used to measure the concentration of a gas in a binary mixture of gases.

Working: if the same gas is present around all the Wheatstone bridge filaments, then the same temperature is maintained in all the filaments and hence the same resistances are also maintained, resulting in a balanced Wheatstone bridge. However, if a dissimilar gas sample (or gas mixture) is passed over one set of two filaments and the reference gas over the other set of two filaments, then the Wheatstone bridge becomes unbalanced, and the resulting net voltage output of the circuit is correlated with a database to identify the constituents of the sample gas.

Using this technique, many unknown gas samples can be identified by comparing their thermal conductivity with that of a reference gas of known thermal conductivity. The most commonly used reference gas is nitrogen, as the thermal conductivities of most common gases (except hydrogen and helium) are similar to that of nitrogen.
