Monday, August 14, 2023

Sensor

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Sensor

Different types of light sensors

A sensor is a device that produces an output signal for the purpose of sensing a physical phenomenon.

In the broadest definition, a sensor is a device, module, machine, or subsystem that detects events or changes in its environment and sends the information to other electronics, frequently a computer processor.

Sensors are used in everyday objects such as touch-sensitive elevator buttons (tactile sensor) and lamps which dim or brighten by touching the base, and in innumerable applications of which most people are never aware. With advances in micromachinery and easy-to-use microcontroller platforms, the uses of sensors have expanded beyond the traditional fields of temperature, pressure and flow measurement, for example into MARG sensors.

Analog sensors such as potentiometers and force-sensing resistors are still widely used. Their applications include manufacturing and machinery, airplanes and aerospace, cars, medicine, robotics and many other aspects of our day-to-day life. There is a wide range of other sensors that measure chemical and physical properties of materials, including optical sensors for refractive index measurement, vibrational sensors for fluid viscosity measurement, and electro-chemical sensors for monitoring pH of fluids.

A sensor's sensitivity indicates how much its output changes when the input quantity it measures changes. For instance, if the mercury in a thermometer moves 1 cm when the temperature changes by 1 °C, its sensitivity is 1 cm/°C (it is basically the slope dy/dx assuming a linear characteristic). Some sensors can also affect what they measure; for instance, a room temperature thermometer inserted into a hot cup of liquid cools the liquid while the liquid heats the thermometer. Sensors are usually designed to have a small effect on what is measured; making the sensor smaller often improves this and may introduce other advantages.
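
The sensitivity as slope dy/dx can be estimated from calibration data; a minimal sketch in Python, with illustrative thermometer numbers (not from the article):

```python
# Minimal sketch: estimate a sensor's sensitivity (slope of its transfer
# function) from calibration data with an ordinary least-squares fit.
# The thermometer numbers below are illustrative, not from a real device.

def sensitivity(inputs, outputs):
    """Return the slope dy/dx of the best-fit line through the data."""
    n = len(inputs)
    mean_x = sum(inputs) / n
    mean_y = sum(outputs) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(inputs, outputs))
    den = sum((x - mean_x) ** 2 for x in inputs)
    return num / den

# Mercury thermometer example: 1 cm of column travel per °C.
temps_c = [0, 10, 20, 30]            # input: temperature in °C
column_cm = [0.0, 10.0, 20.0, 30.0]  # output: column position in cm
print(sensitivity(temps_c, column_cm))  # 1.0 cm/°C
```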

Technological progress allows more and more sensors to be manufactured on a microscopic scale as microsensors using MEMS technology. In most cases, a microsensor reaches a significantly faster measurement time and higher sensitivity compared with macroscopic approaches. Due to the increasing demand for rapid, affordable and reliable information in today's world, disposable sensors—low-cost and easy‐to‐use devices for short‐term monitoring or single‐shot measurements—have recently gained growing importance. Using this class of sensors, critical analytical information can be obtained by anyone, anywhere and at any time, without the need for recalibration and worrying about contamination.

Classification of measurement errors

An infrared sensor

A good sensor obeys the following rules:

  • it is sensitive to the measured property
  • it is insensitive to any other property likely to be encountered in its application, and
  • it does not influence the measured property.

Most sensors have a linear transfer function. The sensitivity is then defined as the ratio between the output signal and measured property. For example, if a sensor measures temperature and has a voltage output, the sensitivity is constant with the units [V/K]. The sensitivity is the slope of the transfer function. Converting the sensor's electrical output (for example V) to the measured units (for example K) requires dividing the electrical output by the slope (or multiplying by its reciprocal). In addition, an offset is frequently added or subtracted. For example, −40 must be added to the output if 0 V output corresponds to −40 °C input.
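
The conversion can be sketched in a few lines; the sensitivity and offset below are assumptions for a hypothetical temperature sensor whose 0 V output corresponds to −40 °C:

```python
# Sketch of converting a linear sensor's electrical output back to the
# measured quantity. Values are illustrative: a hypothetical temperature
# sensor with sensitivity 0.01 V/°C whose 0 V output means -40 °C.

SLOPE_V_PER_C = 0.01   # sensitivity (V/°C), assumed
OFFSET_C = -40.0       # input that produces 0 V output, assumed

def volts_to_celsius(v):
    """Invert output = slope * (input - offset): divide by the slope,
    then add the offset back."""
    return v / SLOPE_V_PER_C + OFFSET_C

print(volts_to_celsius(0.0))  # -40.0
print(volts_to_celsius(1.0))  # 60.0
```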

For an analog sensor signal to be processed or used in digital equipment, it needs to be converted to a digital signal, using an analog-to-digital converter.

Sensor deviations

Since sensors cannot replicate an ideal transfer function, several types of deviations can occur which limit sensor accuracy:

  • Since the range of the output signal is always limited, the output signal will eventually reach a minimum or maximum when the measured property exceeds the limits. The full scale range defines the maximum and minimum values of the measured property.
  • The sensitivity may in practice differ from the value specified. This is called a sensitivity error. This is an error in the slope of a linear transfer function.
  • If the output signal differs from the correct value by a constant, the sensor has an offset error or bias. This is an error in the y-intercept of a linear transfer function.
  • Nonlinearity is deviation of a sensor's transfer function from a straight line transfer function. Usually, this is defined by the amount the output differs from ideal behavior over the full range of the sensor, often noted as a percentage of the full range.
  • Deviation caused by rapid changes of the measured property over time is a dynamic error. Often, this behavior is described with a Bode plot showing sensitivity error and phase shift as a function of the frequency of a periodic input signal.
  • If the output signal slowly changes independent of the measured property, this is defined as drift. Long term drift over months or years is caused by physical changes in the sensor.
  • Noise is a random deviation of the signal that varies in time.
  • A hysteresis error causes the output value to vary depending on the previous input values. If a sensor's output is different depending on whether a specific input value was reached by increasing vs. decreasing the input, then the sensor has a hysteresis error.
  • If the sensor has a digital output, the output is essentially an approximation of the measured property. This error is also called quantization error.
  • If the signal is monitored digitally, the sampling frequency can cause a dynamic error, or if the input variable or added noise changes periodically at a frequency near a multiple of the sampling rate, aliasing errors may occur.
  • The sensor may to some extent be sensitive to properties other than the property being measured. For example, most sensors are influenced by the temperature of their environment.
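
Several of the deviations above (sensitivity error, offset, noise, and clipping at full scale) can be combined in a toy model of a linear sensor; all numbers are illustrative assumptions:

```python
# Toy model of several deviations from the list above applied to an ideal
# linear sensor (all numbers are illustrative assumptions).
import random

random.seed(0)

IDEAL_SLOPE = 2.0      # ideal sensitivity
GAIN_ERROR = 1.05      # 5% sensitivity error (slope too steep)
OFFSET_ERROR = 0.3     # constant bias added to every reading
NOISE_STD = 0.05       # standard deviation of random noise
OUT_MIN, OUT_MAX = 0.0, 10.0  # limited output range (saturation)

def ideal(x):
    return IDEAL_SLOPE * x

def real(x):
    y = IDEAL_SLOPE * GAIN_ERROR * x      # sensitivity error
    y += OFFSET_ERROR                     # offset error / bias
    y += random.gauss(0, NOISE_STD)       # noise
    return min(max(y, OUT_MIN), OUT_MAX)  # clipping at full-scale limits

for x in [0.0, 2.0, 6.0]:
    print(f"input {x}: ideal {ideal(x):.2f}, real {real(x):.2f}")
```

Near the top of the range the clipping dominates: an input of 6.0 would ideally read 12.0, but the limited output range pins the reading at 10.0.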

All these deviations can be classified as systematic errors or random errors. Systematic errors can sometimes be compensated for by means of some kind of calibration strategy. Noise is a random error that can be reduced by signal processing, such as filtering, usually at the expense of the dynamic behavior of the sensor.

Resolution

The sensor resolution or measurement resolution is the smallest change that can be detected in the quantity being measured. The resolution of a sensor with a digital output is usually the numerical resolution of the digital output. The resolution is related to the precision with which the measurement is made, but they are not the same thing. A sensor's accuracy may be considerably worse than its resolution.

  • For example, the distance resolution is the minimum distance that can be accurately measured by any distance measuring devices. In a time-of-flight camera, the distance resolution is usually equal to the standard deviation (total noise) of the signal expressed in unit of length.
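
For a sensor with an n-bit digital output, the numerical resolution is one least-significant bit, i.e. the full-scale range divided by 2^n; a minimal sketch with illustrative figures:

```python
# The numerical resolution of a sensor with an n-bit digital output over a
# given full-scale range: one least-significant bit (LSB) corresponds to
# range / 2**n. Figures below are illustrative assumptions.

def lsb_size(full_scale_range, bits):
    """Smallest representable change for an ideal n-bit quantizer."""
    return full_scale_range / (2 ** bits)

# A hypothetical 12-bit temperature sensor spanning -40..+85 °C:
print(lsb_size(85 - (-40), 12))  # ~0.0305 °C per count
```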

Chemical sensor

A chemical sensor is a self-contained analytical device that can provide information about the chemical composition of its environment, that is, a liquid or a gas phase. The information is provided in the form of a measurable physical signal that is correlated with the concentration of a certain chemical species (termed the analyte). Two main steps are involved in the functioning of a chemical sensor, namely, recognition and transduction. In the recognition step, analyte molecules interact selectively with receptor molecules or sites included in the structure of the recognition element of the sensor. Consequently, a characteristic physical parameter varies and this variation is reported by means of an integrated transducer that generates the output signal. A chemical sensor based on recognition material of biological nature is a biosensor. However, as synthetic biomimetic materials increasingly substitute for biological recognition materials, a sharp distinction between a biosensor and a standard chemical sensor is superfluous. Typical biomimetic materials used in sensor development are molecularly imprinted polymers and aptamers.

Biosensor

In biomedicine and biotechnology, sensors which detect analytes thanks to a biological component, such as cells, protein, nucleic acid or biomimetic polymers, are called biosensors. A non-biological sensor, even an organic (carbon-chemistry) one, for biological analytes is referred to as a sensor or nanosensor. This terminology applies to both in-vitro and in-vivo applications. The encapsulation of the biological component in biosensors presents a slightly different problem than ordinary sensors; this can be done either by means of a semipermeable barrier, such as a dialysis membrane or a hydrogel, or by a 3D polymer matrix, which either physically constrains the sensing macromolecule or chemically constrains the macromolecule by binding it to the scaffold.

Neuromorphic sensors

Neuromorphic sensors are sensors that physically mimic structures and functions of biological neural entities. One example of this is the event camera.

MOS sensors

Metal–oxide–semiconductor (MOS) technology originates from the MOSFET (MOS field-effect transistor, or MOS transistor) invented by Mohamed M. Atalla and Dawon Kahng in 1959, and demonstrated in 1960. MOSFET sensors (MOS sensors) were later developed, and they have since been widely used to measure physical, chemical, biological and environmental parameters.

Biochemical sensors

A number of MOSFET sensors have been developed, for measuring physical, chemical, biological and environmental parameters. The earliest MOSFET sensors include the open-gate field-effect transistor (OGFET) introduced by Johannessen in 1970, the ion-sensitive field-effect transistor (ISFET) invented by Piet Bergveld in 1970, the adsorption FET (ADFET) patented by P.F. Cox in 1974, and a hydrogen-sensitive MOSFET demonstrated by I. Lundstrom, M.S. Shivaraman, C.S. Svenson and L. Lundkvist in 1975. The ISFET is a special type of MOSFET with a gate at a certain distance, and where the metal gate is replaced by an ion-sensitive membrane, electrolyte solution and reference electrode. The ISFET is widely used in biomedical applications, such as the detection of DNA hybridization, biomarker detection from blood, antibody detection, glucose measurement, pH sensing, and genetic technology.

By the mid-1980s, numerous other MOSFET sensors had been developed, including the gas sensor FET (GASFET), surface accessible FET (SAFET), charge flow transistor (CFT), pressure sensor FET (PRESSFET), chemical field-effect transistor (ChemFET), reference ISFET (REFET), biosensor FET (BioFET), enzyme-modified FET (ENFET) and immunologically modified FET (IMFET).[11] By the early 2000s, BioFET types such as the DNA field-effect transistor (DNAFET), gene-modified FET (GenFET) and cell-potential BioFET (CPFET) had been developed.

Image sensors

MOS technology is the basis for modern image sensors, including the charge-coupled device (CCD) and the CMOS active-pixel sensor (CMOS sensor), used in digital imaging and digital cameras. Willard Boyle and George E. Smith developed the CCD in 1969. While researching the MOS process, they realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. The CCD is a semiconductor circuit that was later used in the first digital video cameras for television broadcasting.

The MOS active-pixel sensor (APS) was developed by Tsutomu Nakamura at Olympus in 1985. The CMOS active-pixel sensor was later developed by Eric Fossum and his team in the early 1990s.

MOS image sensors are widely used in optical mouse technology. The first optical mouse, invented by Richard F. Lyon at Xerox in 1980, used a 5 µm NMOS sensor chip. Since the first commercial optical mouse, the IntelliMouse introduced in 1999, most optical mouse devices use CMOS sensors.

Monitoring sensors

Lidar sensor on iPad Pro

MOS monitoring sensors are used for house monitoring, office and agriculture monitoring, traffic monitoring (including car speed, traffic jams, and traffic accidents), weather monitoring (such as for rain, wind, lightning and storms), defense monitoring, and monitoring temperature, humidity, air pollution, fire, health, security and lighting. MOS gas detector sensors are used to detect carbon monoxide, sulfur dioxide, hydrogen sulfide, ammonia, and other gas substances. Other MOS sensors include intelligent sensors and wireless sensor network (WSN) technology.

Quantum cloning

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Quantum_cloning

Quantum cloning is a process that takes an arbitrary, unknown quantum state and makes an exact copy without altering the original state in any way. Quantum cloning is forbidden by the laws of quantum mechanics as shown by the no-cloning theorem, which states that there is no operation for cloning any arbitrary state perfectly. In Dirac notation, the process of quantum cloning is described by:

U |ψ⟩_A |e⟩_B = |ψ⟩_A |ψ⟩_B

where U is the actual cloning operation, |ψ⟩_A is the state to be cloned, and |e⟩_B is the initial state of the copy.
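
The linearity behind the no-cloning theorem can be illustrated numerically (an illustration, not part of the article): a CNOT gate copies the basis states |0⟩ and |1⟩ onto a blank qubit, but applied to the superposition |+⟩ it produces an entangled Bell state rather than two independent copies:

```python
# Illustration of the no-cloning theorem: a CNOT "copier" duplicates basis
# states but fails on superpositions, because a linear operation cannot
# clone arbitrary states.
import math

def cnot(state):
    """Apply CNOT (control = first qubit) to a 2-qubit amplitude vector
    ordered |00>, |01>, |10>, |11>: it swaps the |10> and |11> amplitudes."""
    a00, a01, a10, a11 = state
    return [a00, a01, a11, a10]

def tensor(q1, q2):
    """Tensor product of two single-qubit states [a, b]."""
    return [q1[0]*q2[0], q1[0]*q2[1], q1[1]*q2[0], q1[1]*q2[1]]

zero = [1.0, 0.0]
one = [0.0, 1.0]
plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]

# Basis states are copied perfectly: |0>|0> -> |0>|0>, |1>|0> -> |1>|1>.
print(cnot(tensor(zero, zero)))  # [1, 0, 0, 0], i.e. |00>
print(cnot(tensor(one, zero)))   # [0, 0, 0, 1], i.e. |11>

# But CNOT applied to |+>|0> yields the entangled Bell state
# (|00> + |11>)/sqrt(2), NOT the product of two copies |+>|+>.
print(cnot(tensor(plus, zero)))  # [0.707..., 0, 0, 0.707...]
print(tensor(plus, plus))        # [0.5, 0.5, 0.5, 0.5]
```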

Though perfect quantum cloning is not possible, it is possible to perform imperfect cloning, where the copies have a non-unit (i.e. non-perfect) fidelity. The possibility of approximate quantum cloning was first addressed by Buzek and Hillery, and theoretical bounds were derived on the fidelity of cloned quantum states.

One of the applications of quantum cloning is to analyse the security of quantum key distribution protocols. Teleportation, nuclear magnetic resonance, quantum amplification, and superior phase conjugation are examples of some methods utilized to realize a quantum cloning machine. Ion trapping techniques have been applied to cloning quantum states of ions.

Types of Quantum Cloning Machines

It may be possible to clone a quantum state to arbitrary accuracy in the presence of closed timelike curves.

Universal Quantum Cloning

Universal quantum cloning (UQC) implies that the quality of the output (cloned state) is not dependent on the input, thus the process is "universal" to any input state. The output state produced is governed by the Hamiltonian of the system.

One of the first cloning machines, a 1 to 2 UQC machine, was proposed in 1996 by Buzek and Hillery. As the name implies, the machine produces two identical copies of a single input qubit with a fidelity of 5/6 when comparing only one output qubit, and global fidelity of 2/3 when comparing both qubits. This idea was expanded to more general cases such as an arbitrary number of inputs and copies, as well as d-dimensional systems.

Multiple experiments have been conducted to realize this type of cloning machine physically by using photon stimulated emission. The concept relies on the property of certain three-level atoms to emit photons of any polarization with equally likely probability. This symmetry ensures the universality of the machine.

Phase Covariant Cloning

When input states are restricted to Bloch vectors corresponding to points on the equator of the Bloch sphere, more information is known about them. The resulting clones are thus state-dependent, having an optimal fidelity of 1/2 + √(1/8) ≈ 0.854. Although this fidelity is only slightly greater than that of the UQCM (≈0.83), phase covariant cloning has the added benefit of being easily implemented through quantum logic gates consisting of the rotational operator and the controlled-NOT (CNOT). Output states are also separable according to the Peres–Horodecki criterion.

The process has been generalized to the 1 → M case and proven optimal. This has also been extended to the qutrit and qudit cases. The first experimental asymmetric quantum cloning machine was realized in 2004 using nuclear magnetic resonance.

Asymmetric Quantum Cloning

The first family of asymmetric quantum cloning machines was proposed by Nicholas Cerf in 1998. A cloning operation is said to be asymmetric if its clones have different qualities and are all independent of the input state. This is a more general case of the symmetric cloning operations discussed above which produce identical clones with the same fidelity. Take the case of a simple 1 → 2 asymmetric cloning machine. There is a natural trade-off in the cloning process in that if one clone's fidelity is fixed to a higher value, the other must decrease in quality and vice versa. The optimal trade-off is bounded by the following inequality:

(1 − Fd)(1 − Fe) ≥ (1/2 − (1 − Fd) − (1 − Fe))²

where Fd and Fe are the state-independent fidelities of the two copies. This type of cloning procedure was proven mathematically to be optimal as derived from the Choi–Jamiolkowski channel-state duality. However, even with this cloning machine, perfect quantum cloning is proven to be unattainable.
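
Under the assumption that the optimal 1 → 2 trade-off takes the Cerf form (1 − Fd)(1 − Fe) ≥ (1/2 − (1 − Fd) − (1 − Fe))², a quick numerical check shows that the symmetric fidelity Fd = Fe = 5/6 of the universal cloner sits exactly on the boundary:

```python
# Numerical check (assuming the Cerf form of the optimal 1 -> 2 trade-off)
# that the symmetric fidelity F_d = F_e = 5/6 of the universal cloning
# machine saturates the bound, while a better symmetric pair violates it.
from fractions import Fraction

def tradeoff_slack(fd, fe):
    """LHS minus RHS of (1-Fd)(1-Fe) >= (1/2 - (1-Fd) - (1-Fe))**2.
    Non-negative slack means the pair of fidelities is attainable."""
    dd, de = 1 - fd, 1 - fe
    return dd * de - (Fraction(1, 2) - dd - de) ** 2

F = Fraction(5, 6)
print(tradeoff_slack(F, F))  # 0: the symmetric cloner saturates the bound

# Raising one copy's fidelity without lowering the other is forbidden:
print(tradeoff_slack(Fraction(9, 10), F) < 0)  # True: Fd=0.9, Fe=5/6 violates it
```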

The trade-off of optimal accuracy between the resulting copies has been studied in quantum circuits, and with regards to theoretical bounds.

Optimal asymmetric cloning machines have been extended to higher dimensions.

Probabilistic Quantum Cloning

In 1998, Duan and Guo proposed a different approach to quantum cloning machines that relies on probability. This machine allows for the perfect copying of quantum states without violation of the No-Cloning and No-Broadcasting Theorems, but at the cost of not being 100% reproducible. The cloning machine is termed "probabilistic" because it performs measurements in addition to a unitary evolution. These measurements are then sorted through to obtain the perfect copies with a certain quantum efficiency (probability). As only orthogonal states can be cloned perfectly, this technique can be used to identify non-orthogonal states. The process is optimal when η = 1/(1 + |⟨Ψ0|Ψ1⟩|), where η is the probability of success for the states Ψ0 and Ψ1.

The process was proven mathematically to clone two pure, non-orthogonal input states using a unitary-reduction process. One implementation of this machine was realized through the use of a "noiseless optical amplifier" with a success rate of about 5%.
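
The Duan–Guo success bound η ≤ 1/(1 + |⟨Ψ0|Ψ1⟩|) can be evaluated directly; a minimal sketch (the example states are illustrative):

```python
# Sketch of the Duan-Guo bound on the success probability of
# probabilistically cloning one of two non-orthogonal pure states:
# eta <= 1 / (1 + |<psi0|psi1>|).
import math

def max_success_probability(psi0, psi1):
    """Upper bound on the cloning success probability for two pure states
    given as complex amplitude lists."""
    overlap = abs(sum(a.conjugate() * b for a, b in zip(psi0, psi1)))
    return 1 / (1 + overlap)

# Orthogonal states can be cloned with certainty:
print(max_success_probability([1, 0], [0, 1]))  # 1.0

# Non-orthogonal states |0> and |+> (overlap 1/sqrt(2)):
s = 1 / math.sqrt(2)
print(max_success_probability([1, 0], [s, s]))  # ~0.586
```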

Applications of Approximate Quantum Cloning

Cloning in Discrete Quantum Systems

The simple basis for approximate quantum cloning exists in the first and second trivial cloning strategies. In first trivial cloning, a measurement of a qubit in a certain basis is made at random and yields two copies of the qubit. This method has a universal fidelity of 2/3.

The second trivial cloning strategy, also called "trivial amplification", is a method in which an original qubit is left unaltered, and another qubit is prepared in a different orthogonal state. When measured, both qubits have the same probability, 1/2, and an overall single-copy fidelity of 3/4.
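
The 2/3 universal fidelity of the first trivial strategy can be recovered numerically: for a qubit drawn uniformly from the Bloch sphere, the probability p of a fixed-basis measurement outcome is uniform on [0, 1], and the expected single-copy fidelity is E[p² + (1 − p)²] = 2/3. A Monte Carlo sketch under that assumption:

```python
# Sketch of the 2/3 fidelity of "measure and prepare" cloning: measure a
# random qubit in a fixed basis and prepare the observed outcome. For a
# qubit drawn uniformly from the Bloch sphere, the outcome probability p is
# uniform on [0, 1], so the expected fidelity is E[p^2 + (1-p)^2] = 2/3.
import random

random.seed(1)

def measure_and_prepare_fidelity(samples=200_000):
    total = 0.0
    for _ in range(samples):
        p = random.random()  # P(outcome |0>) for a random input state
        # With probability p we prepare |0> (fidelity p to the input);
        # with probability 1-p we prepare |1> (fidelity 1-p).
        total += p * p + (1 - p) * (1 - p)
    return total / samples

est = measure_and_prepare_fidelity()
print(round(est, 3))  # ~0.667, i.e. 2/3
```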

Quantum Cloning Attacks

Quantum information is useful in the field of cryptography due to its intrinsic encrypted nature. One such mechanism is quantum key distribution. In this process, Bob receives a quantum state sent by Alice, in which some type of classical information is stored. He then performs a random measurement, and using minimal information provided by Alice, can determine whether or not his measurement was "good". This measurement is then transformed into a key in which private data can be stored and sent without fear of the information being stolen.

One reason this method of cryptography is so secure is that eavesdropping is prevented by the no-cloning theorem. A third party, Eve, can use incoherent attacks in an attempt to observe the information being transferred from Alice to Bob. Due to the no-cloning theorem, Eve is unable to gain any information. However, through quantum cloning, this is no longer entirely true.

Incoherent attacks involve a third party gaining some insight into the information being transmitted between Alice and Bob. These attacks follow two guidelines: 1) third party Eve must act individually and match the states that are being observed, and 2) Eve's measurement of the traveling states occurs after the sifting phase (removing states that are in non-matched bases) but before reconciliation (putting Alice and Bob's strings back together). Due to the secure nature of quantum key distribution, Eve would be unable to decipher the secret key even with as much information as Bob and Alice. These are known as incoherent attacks because a random, repeated attack yields the highest chance of Eve finding the key.

Nuclear Magnetic Resonance

While classical nuclear magnetic resonance is the phenomenon of nuclei emitting electromagnetic radiation at resonant frequencies when exposed to a strong magnetic field and is used heavily in imaging technology, quantum nuclear magnetic resonance is a type of quantum information processing (QIP). The interactions between the nuclei allow for the application of quantum logic gates, such as the CNOT.

One quantum NMR experiment involved passing three qubits through a circuit, after which they are all entangled; the second and third qubit are transformed into clones of the first with a fidelity of 5/6.

Another application allowed for the alteration of the signal-to-noise ratio, a process that increased the signal frequency while decreasing the noise frequency, allowing for a clearer information transfer. This is done through polarization transfer, which allows for a portion of the signal's highly polarized electric spin to be transferred to the target nuclear spin.

The NMR system allows for the application of quantum algorithms such as Shor's factorization algorithm and the Deutsch–Jozsa algorithm.

Stimulated Emission

Stimulated emission is a type of universal quantum cloning machine that functions on a three-level system: one ground state and two degenerate excited states, connected by an orthogonal electromagnetic field. The system is able to emit photons by exciting electrons between the levels. The photons are emitted in varying polarizations due to the random nature of the system, but the probability of each emission type is equal – this is what makes this a universal cloning machine. By integrating quantum logic gates into the stimulated emission system, the system is able to produce cloned states.

Telecloning

Telecloning is the combination of quantum teleportation and quantum cloning. This process uses positive operator-valued measurements, maximally entangled states, and quantum teleportation to create identical copies, locally and in a remote location. Quantum teleportation alone follows a "one-to-one" or "many-to-many" method in which either one or many states are transported from Alice to Bob in a remote location. The teleclone works by first creating local quantum clones of a state, then sending these to a remote location by quantum teleportation.

The benefit of this technology is that it removes errors in transmission that usually result from quantum channel decoherence.

Human variability

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Human_variability
Examples of human phenotypic variability: people with different levels of skin colors, a normal distribution of IQ scores, the tallest recorded man in history, Robert Wadlow, with his father.

Human variability, or human variation, is the range of possible values for any characteristic, physical or mental, of human beings.

Frequently debated areas of variability include cognitive ability, personality, physical appearance (body shape, skin color, etc.) and immunology. Variability is partly heritable and partly acquired (nature vs. nurture debate). As the human species exhibits sexual dimorphism, many traits show significant variation not just between populations but also between the sexes.

Sources of human variability

Identical twins share identical genes. They are often studied to see how environmental factors impact human variability, for example, height difference.

Human variability is attributed to a combination of environmental and genetic sources.

A skin color map of the world from data collected on native populations prior to 1940, based on the von Luschan chromatic scale

Few of the traits characterizing human variability are controlled by simple Mendelian inheritance. Most are polygenic or are determined by a complex combination of genetics and environment.

Many genetic differences (polymorphisms) have little effect on health or reproductive success but help to distinguish one population from another. Such polymorphisms help researchers in the field of population genetics study ancient migrations and relationships between population groups.

Environmental factors

Climate and disease

Other important environmental factors include climate and disease. Climate helps determine which human variations are better adapted to survive with fewer restrictions and hardships. For example, people who live in a climate with a lot of sun exposure tend to have darker skin. Darker skin with more melanin protects folate (folic acid) from being broken down by UV radiation, helping to ensure smooth and successful child development. Conversely, people who live farther from the equator tend to have lighter skin, which allows increased absorption of sunlight so that the body can produce enough vitamin D for survival.

Blackfoot disease is a disease caused by environmental pollution, specifically arsenic contamination of water and food sources, and causes people to develop black, charcoal-like skin on the lower limbs. This is an example of how disease can affect human variation. Another disease that can affect human variation is syphilis, a sexually transmitted disease. Syphilis does not affect outward human variation until the middle stage of the disease, when rashes can appear all over the body.

Nutrition

Phenotypic variation is the product of an individual's genetics and their surrounding environment, which interact with one another. This means that a significant portion of human variability can be influenced by human behavior. Nutrition and diet play a substantial role in determining phenotype because they are arguably the most controllable environmental factors that create epigenetic changes: they can be changed or altered relatively easily, as opposed to other environmental factors like location.

If people are reluctant to change their diets, consuming harmful foods can have chronic negative effects on variability. One such instance occurs when eating certain chemicals through one's diet or consuming carcinogens, which can have adverse effects on individual phenotype. For example, Bisphenol A (BPA) is a known endocrine disruptor that mimics the hormone estradiol and can be found in various plastic products. BPA seeps into food or drinks when the plastic containing it is heated. When these contaminated substances are consumed, especially often and over long periods of time, one's risk of diabetes and cardiovascular disease increases. BPA also has the potential to alter "physiological weight control patterns." Examples such as this demonstrate that preserving a healthy phenotype largely rests on nutritional decision-making skills.

The concept that nutrition and diet affect phenotype extends to what the mother eats during pregnancy, which can have drastic effects on the phenotype of the child. A recent study by researchers at the MRC International Nutrition Group shows that "methylation machinery can be disrupted by nutrient deficiencies and that this can lead to disease" susceptibility in newborn babies. This is because methyl groups can silence certain genes. Deficiencies of various nutrients during pregnancy therefore have the potential to permanently change the epigenetics of the baby.

Genetic factors

Genetic variation in humans may mean any variance in phenotype which results from heritable allele expression, mutations, and epigenetic changes. While human phenotypes may seem diverse, individuals actually differ by only about 1 in every 1,000 base pairs, and this variation is primarily the result of inherited genetic differences. Pure consideration of alleles is often referred to as Mendelian genetics, or more properly classical genetics, and involves the assessment of whether a given trait is dominant or recessive and thus, at what rates it will be inherited. The color of one's eyes was long believed to occur with a pattern of brown-eye dominance, with blue eyes being a recessive characteristic resulting from a past mutation. However, it is now understood that eye color is controlled by various genes, and thus may not follow as distinct a pattern as previously believed. The trait is still the result of variance in genetic sequence between individuals as a result of inheritance from their parents. Common traits which may be linked to genetic patterns are earlobe attachment, hair color, and hair growth patterns.
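
The dominant/recessive bookkeeping of classical genetics can be sketched with a Punnett square; the cross below (a hypothetical single gene with dominant allele A and recessive allele a) reproduces the classic 3:1 phenotype ratio:

```python
# Sketch of classical (Mendelian) inheritance bookkeeping: a Punnett
# square for a single-gene cross, counting genotype and phenotype ratios.
# The gene and alleles here are hypothetical placeholders.
from collections import Counter
from itertools import product

def punnett(parent1, parent2):
    """Count offspring genotypes for one gene ('A' dominant, 'a' recessive),
    pairing each allele of parent1 with each allele of parent2."""
    return Counter(
        "".join(sorted(pair)) for pair in product(parent1, parent2)
    )

cross = punnett("Aa", "Aa")  # two heterozygous parents
print(cross)  # Counter({'Aa': 2, 'AA': 1, 'aa': 1})

# Phenotype ratio: any genotype carrying a dominant 'A' shows the dominant trait.
dominant = sum(n for g, n in cross.items() if "A" in g)
recessive = cross.get("aa", 0)
print(dominant, ":", recessive)  # 3 : 1
```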

In terms of evolution, genetic mutations are the origins of differences in alleles between individuals. However, mutations may also occur within a person's lifetime and be passed down from parent to offspring. In some cases, mutations may result in genetic diseases, such as cystic fibrosis, which is the result of a mutation to the CFTR gene that is recessively inherited from both parents. In other cases, mutations may be harmless or phenotypically unnoticeable. Biological traits can be treated as manifestations of either a single locus or multiple loci, labeling said traits as either monogenic or polygenic, respectively. Concerning polygenic traits, it may be essential to be mindful of inter-genetic interactions, or epistasis. Although epistasis is a significant genetic source of biological variation, only additive interactions are heritable, as other epistatic interactions involve recondite inter-genetic relationships. Epistatic interactions in and of themselves vary further with their dependency on the results of recombination and crossing over.

The ability of genes to be expressed may also be a source of variation between individuals and result in changes to phenotype. This may be the result of epigenetics, which is founded upon an organism's phenotypic plasticity, with such plasticity even being heritable. Epigenetic changes may result from methylation of gene sequences leading to the blocking of expression, or from changes to histone protein structuring as a result of environmental or biological cues. Such alterations influence how genetic material is handled by the cell and to what extent certain DNA sections are expressed, and together they compose the epigenome. The division between what counts as a genetic source of biological variation and what does not becomes increasingly arbitrary as we approach aspects such as epigenetics. Indeed, gene-specific expression and inheritance may be reliant on environmental influences.

Cultural factors

Archaeological findings indicate that 'cultural phases' of humanity with a number of shared characteristics, such as the Middle Stone Age and the Acheulean, lasted substantially longer in some places than in others, 'ending' at times over 100,000 years apart. Such findings highlight significant spatiotemporal variability in, and complexity of, the sociocultural history and evolution of humanity. In some cases cultural factors may be intertwined with genetic and environmental factors.

Measuring variation

Scientific

Measurement of human variation can fall under the purview of several scholarly disciplines, many of which lie at the intersection of biology and statistics. The methods of biostatistics, the application of statistical methods to the analysis of biological data, and bioinformatics, the application of information technologies to the analysis of biological data, are utilized by researchers in these fields to uncover significant patterns of variability. Some fields of scientific research include the following:

Demography is a branch of statistics and sociology concerned with the statistical study of populations, especially humans. A demographic analysis can measure various metrics of a population, most commonly metrics of size and growth, diversity in culture, ethnicity, language, religious belief, political belief, etc. Biodemography is a subfield which specifically integrates biological understanding into demographics analysis.

In the social sciences, social research is conducted and collected data is analyzed under statistical methods. The methodologies of this research can be divided into qualitative and quantitative designs. Some example subdisciplines include:

  • Anthropology, the study of human societies. Comparative research in subfields of anthropology may yield results on human variation with respect to the subfield's topic of interest.
  • Psychology, the study of behavior from a mental perspective. Psychological research relies heavily on experiments and analysis, grouped into quantitative or qualitative research methods.
  • Sociology, the study of behavior from a social perspective. Sociological research can be conducted in either quantitative or qualitative formats, depending on the nature of data collected and the subfield of sociology under which the research falls. Analysis of this data is subject to quantitative or qualitative methods. Computational sociology is also a method of producing useful data for studies of social behavior.

Anthropometry

Anthropometry is the study of the measurements of different parts of the human body. Common measurements include height, weight, organ size (brain, stomach, penis, vagina), and other bodily metrics such as waist–hip ratio. Each measurement can vary significantly between populations; for instance, the average height of males of European descent is 178 cm ± 7 cm and that of females of European descent is 165 cm ± 7 cm, while the average height of Nilotic males of the Dinka people is 181.3 cm.
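Treating the figures above as the mean and standard deviation of an approximately normal height distribution (an assumption for the sake of illustration, not a claim from the text), the proportion of a population falling in a given height range can be sketched with Python's standard-library statistics.NormalDist; the 190 cm cutoff is an arbitrary example.

```python
from statistics import NormalDist

# Illustrative figures from the text: males of European descent,
# mean 178 cm with a standard deviation of roughly 7 cm.
male_height = NormalDist(mu=178, sigma=7)

# Fraction taller than 190 cm under this (assumed Gaussian) model.
p_tall = 1 - male_height.cdf(190)

# Fraction within one standard deviation of the mean (~68%).
p_within_sd = male_height.cdf(185) - male_height.cdf(171)

print(f"P(height > 190 cm)  ~ {p_tall:.3f}")
print(f"P(171 <= h <= 185)  ~ {p_within_sd:.3f}")
```

This is the kind of calculation ergonomists use when sizing equipment for a given percentile of the population.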

Applications of anthropometry include ergonomics, biometrics, and forensics. Knowing the distribution of body measurements enables designers to build better tools for workers. Anthropometry is also used when designing safety equipment such as seat belts. In biometrics, measurements of fingerprints and iris patterns can be used for secure identification purposes.

Measuring genetic variation

Human genomics and population genetics are the study of the human genome and variome, respectively. Studies in these areas may concern the patterns and trends in human DNA. The Human Genome Project and The Human Variome Project are examples of large scale studies of the entire human population to collect data which can be analyzed to understand genomic and genetic variation in individuals, respectively.

  • The Human Genome Project is the largest scientific project in the history of biology. At a cost of $3.8 billion over a period of 13 years, from 1990 to 2003, the project sequenced the approximately 3 billion base pairs and catalogued the 20,000 to 25,000 genes in human DNA. The project made the data available to all scientific researchers and developed analytical tools for processing this information. A particular finding on human variability made possible by the Human Genome Project is that any two individuals share 99.9% of their nucleotide sequences.
  • The Human Variome Project is a similar undertaking with the goal of identification and categorization of the set of human genetic variation, specifically variations which are medically pertinent. This project will also provide a data repository for further research and analysis of disease. The Human Variome Project was launched in 2006 and is being run by an international community of researchers and representatives, including collaborators from the World Health Organization and the United Nations Educational, Scientific, and Cultural Organization.
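Combining the two figures above – roughly 3 billion base pairs and 99.9% sequence identity – gives a back-of-the-envelope estimate of how many positions differ between any two individuals:

```python
genome_length = 3_000_000_000   # approx. base pairs, per the text
shared_fraction = 0.999         # any two individuals share 99.9%

# Expected number of differing positions: ~3 million.
differing_bases = genome_length * (1 - shared_fraction)
print(f"Expected differing positions: {differing_bases:,.0f}")
```

So even near-total similarity at the sequence level still leaves millions of sites at which two genomes can differ.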

Genetic drift

Genetic drift is one mechanism by which variability arises in populations. Unlike natural selection, genetic drift occurs when allele frequencies change randomly over time, not as a result of selective pressure. Over a long history, this can cause significant shifts in the underlying genetic distribution of a population. Genetic drift can be modeled with the Wright–Fisher model. Consider a diploid population of N individuals, and hence 2N gene copies, with two alleles at frequencies p and q = 1 − p. If an allele had frequency p in the previous generation, the probability that the next generation carries k copies of that allele is:

P(k) = C(2N, k) · p^k · q^(2N − k),   where C(2N, k) = (2N)! / (k! (2N − k)!)

Over time, one allele becomes fixed: its frequency reaches 1 while the other allele's frequency reaches 0. The probability that a given allele is eventually fixed equals its current frequency; for two alleles with frequencies p and q, the probability that the p allele is fixed is simply p. The expected number of generations for an allele with frequency p to reach fixation is:

T_fixed = −4 Ne (1 − p) ln(1 − p) / p

where Ne is the effective population size.
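The binomial sampling step of the Wright–Fisher model is straightforward to simulate. Below is a minimal Python sketch using only the standard library; the population size, starting frequency, and random seed are illustrative choices, not values from the text.

```python
import random

def wright_fisher(N, p, seed=1):
    """Simulate Wright-Fisher drift in a diploid population of size N
    (2N gene copies), starting from allele frequency p, until one
    allele is fixed. Returns (final frequency, generations elapsed)."""
    rng = random.Random(seed)
    generations = 0
    while 0 < p < 1:
        # Each of the 2N copies in the next generation is drawn
        # independently with probability p -- a Binomial(2N, p) sample.
        k = sum(rng.random() < p for _ in range(2 * N))
        p = k / (2 * N)
        generations += 1
    return p, generations

freq, gens = wright_fisher(N=50, p=0.5)
print(f"Fixed at frequency {freq} after {gens} generations")
```

Running this repeatedly with different seeds shows the key property described above: starting from p = 0.5, the allele fixes at frequency 1 in roughly half of the runs and is lost in the other half, purely by chance.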

Single-nucleotide polymorphism

Single-nucleotide polymorphisms, or SNPs, are variations of a single nucleotide. SNPs can occur in coding or non-coding regions of genes and on average occur once every 300 nucleotides. SNPs in coding regions can cause synonymous, missense, or nonsense mutations. SNPs have been shown to correlate with drug responses and with risk of diseases such as sickle-cell anemia, Alzheimer's disease, and cystic fibrosis.
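The synonymous/missense/nonsense distinction can be sketched with a toy classifier that translates the reference and variant codons and compares the results. The codon table below is only a small excerpt of the standard genetic code, chosen to cover the example codons; a real tool would use the full 64-entry table.

```python
# Excerpt of the standard genetic code ("*" marks a stop codon).
CODON_TABLE = {
    "GAA": "Glu", "GAG": "Glu",          # glutamate
    "GTA": "Val",                        # valine
    "TAC": "Tyr",                        # tyrosine
    "TAA": "*", "TAG": "*", "TGA": "*",  # stop codons
}

def classify_snp(ref_codon, alt_codon):
    """Classify a coding-region SNP by its effect on the amino acid."""
    ref_aa = CODON_TABLE[ref_codon]
    alt_aa = CODON_TABLE[alt_codon]
    if alt_aa == ref_aa:
        return "synonymous"   # same amino acid encoded
    if alt_aa == "*":
        return "nonsense"     # introduces a premature stop codon
    return "missense"         # substitutes a different amino acid

print(classify_snp("GAA", "GAG"))  # synonymous (Glu -> Glu)
print(classify_snp("GAA", "GTA"))  # missense  (Glu -> Val)
print(classify_snp("TAC", "TAA"))  # nonsense  (Tyr -> stop)
```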

DNA fingerprinting

In DNA profiling, a DNA fingerprint is constructed by extracting a DNA sample from body tissue or fluid. The sample is segmented using restriction enzymes, and each segment is marked with probes and exposed on X-ray film. The segments form patterns of black bars: the DNA fingerprint. DNA fingerprints are used in conjunction with other methods in federal programs such as CODIS (the Combined DNA Index System) to help identify individuals.

Mitochondrial DNA

Mitochondrial DNA is only passed from mother to child. The first human population studies based on mitochondrial DNA were performed by restriction enzyme analyses (RFLPs) and revealed differences between four ethnic groups (Caucasian, Amerindian, African, and Asian). Differences in mtDNA patterns have also been shown within the same ethnic group between communities with different geographic origins.

Alloenzymic variation

Alloenzymic variation identifies protein variants of the same gene arising from amino acid substitutions. After tissue is ground to release the cytoplasm, wicks are used to absorb the resulting extract and are placed in a slit cut into a starch gel. A low current is run across the gel, producing positive and negative ends. Proteins are then separated by charge and size, with the smaller and more highly charged molecules moving more quickly across the gel. This technique underestimates true genetic variability: an amino acid substitution whose product is not charged differently from the original shows no difference in migration, and it is estimated that approximately one third of true genetic variation is not detected this way.

Structural variation

Structural variation can include insertions, deletions, duplications, and mutations in DNA. Within the human population, about 13% of the human genome is defined as structurally variant.

Phenotypic variation

Phenotypic variation accounts for both genetic and epigenetic factors that affect which characteristics are shown. For applications such as organ donation and matching, phenotypic variation in blood type, tissue type, and organ size is considered.

Civic

Measurement of human variation may also be initiated by governmental parties. A government may conduct a census, the systematic recording of an entire population of a region. The data may be used for calculating demographic metrics such as sex, gender, age, education, and employment; this information is utilized for civic, political, economic, industrial, and environmental assessment and planning.

Commercial

Commercial motivation for understanding variation in human populations arises from the competitive advantage of tailoring products and services for a specific target market. A business may undertake some form of market research in order to collect data on customer preference and behavior and implement changes which align with the results.

Social significance and valuation

Both individuals and entire societies and cultures place values on different aspects of human variability; however, values can change as societies and cultures change. Not all people agree on the values or relative rankings, and neither do all societies and cultures. Nonetheless, nearly all human differences have a social value dimension. Examples of variations which may be given different values in different societies include skin color and/or body structure. Race and sex have a strong value difference, while handedness has a much weaker value difference. The values given to different traits among human variability are often influenced by what phenotypes are more prevalent locally. Local valuation may affect social standing, reproductive opportunities, or even survival.

Differences may vary or be distributed in various ways. Some, like height for a given sex, vary in close to a "normal" or Gaussian distribution. Other characteristics (e.g., skin color) vary continuously in a population, but the continuum may be socially divided into a small number of distinct categories. Then, there are some characteristics that vary bimodally (for example, handedness), with fewer people in intermediate categories.

Classification and evaluation of traits

When an inherited difference of body structure or function is severe enough to cause a significant hindrance to certain perceived abilities, it is termed a genetic disease, but even this categorization has fuzzy edges. There are many instances in which the degree of negative value of a human difference depends entirely on the social or physical environment. For example, in a society with a large proportion of deaf people (as on Martha's Vineyard in the 19th century), it was possible to deny that deafness is a disability. Another example of social renegotiation of the value assigned to a difference is reflected in the controversy over management of ambiguous genitalia, especially whether abnormal genital structure has enough negative consequences to warrant surgical correction.

Furthermore, many genetic traits may be advantageous in certain circumstances and disadvantageous in others. Being a heterozygote or carrier of the sickle-cell disease gene confers some protection against malaria, apparently enough to maintain the gene in populations of malarial areas. In a homozygous dose it is a significant disability.

Each trait has its own advantages and disadvantages, but a trait found desirable may not be favorable in terms of biological factors such as reproductive fitness, and traits not highly valued by the majority of people may be biologically favorable. For example, women now tend to have fewer pregnancies on average than before, and net worldwide fertility rates are dropping. This makes multiple births increasingly favorable in terms of offspring count: when the average number of pregnancies and children was higher, multiple births made only a slight relative difference in number of children, but with fewer pregnancies the relative difference can be large. In a hypothetical scenario, couple 1 has ten children and couple 2 has eight, but in both couples the woman undergoes eight pregnancies; this is not a large difference in the ratio of offspring counts. In another scenario, couple 1 has three children and couple 2 has one, but in both couples the woman undergoes one pregnancy (couple 1 having triplets); here the proportional difference in offspring count is much larger. A trait known to greatly increase a woman's chance of multiple births is tallness (presumably the chance is further increased when the woman is very tall relative to both women and men). Yet very tall women are not viewed as a desirable phenotype by the majority of people, and the phenotype of very tall women has not been highly favored in the past. Nevertheless, values placed on traits can change over time.

Such an example is homosexuality. In Ancient Greece, what in present terms would be called homosexuality, primarily between a man and a young boy, was not uncommon and was not outlawed. Later, homosexuality became more widely condemned; attitudes towards it have eased again in modern times.

Acknowledgement and study of human differences does have a wide range of uses, such as tailoring the size and shape of manufactured items. See Ergonomics.

Controversies of sociocultural and personal implications

Possession of above average amounts of some abilities is valued by most societies. Some of the traits that societies try to measure by perception are intellectual aptitude in the form of ability to learn, artistic prowess, strength, endurance, agility, and resilience.

Each individual's distinctive differences, even the negatively valued or stigmatized ones, are usually considered an essential part of self-identity. Membership or status in a social group may depend on having specific values for certain attributes. It is not unusual for people to deliberately try to amplify or exaggerate differences, or to conceal or minimize them, for a variety of reasons. Examples of practices designed to minimize differences include tanning, hair straightening, skin bleaching, plastic surgery, orthodontia, and growth hormone treatment for extreme shortness. Conversely, male-female differences are enhanced and exaggerated in most societies.

In some societies, such as the United States, circumcision is practiced on a majority of males, as is sex reassignment on intersex infants, with substantial emphasis on cultural and religious norms. Circumcision is highly controversial: although it offers health benefits, such as a lower chance of urinary tract infections, STDs, and penile cancer, it is considered a drastic procedure that is not medically mandatory, and many argue the decision should be left until the child is old enough to decide for himself. Similarly, sex reassignment surgery offers psychiatric health benefits to transgender people but is seen as unethical by some Christians, especially when performed on children.

Much controversy surrounds the assigning or distinguishing of some variations, especially since differences between groups in a society, or between societies, are often debated as part of either a person's "essential" nature or a socially constructed attribution. For example, there has long been a debate among sex researchers on whether sexual orientation is due to evolution and biology (the "essentialist" position) or a result of mutually reinforcing social perceptions and behavioral choices (the "constructivist" perspective). The essentialist position emphasizes inclusive fitness as the reason homosexuality has not been eradicated by natural selection: gay or lesbian individuals have not been greatly affected by evolutionary selection because they may help the fitness of their siblings and siblings' children, thus increasing their inclusive fitness and maintaining the trait's evolution. Biological theories for same-gender sexual orientation include genetic influences, neuroanatomical factors, and hormone differences, but research so far has not provided any conclusive results. In contrast, the social constructivist position argues that sexuality is a result of culture and has originated from language or dialogue about sex. Mating choices are the product of cultural values, such as youth and attractiveness, and homosexuality varies greatly between cultures and societies. In this view, complexities such as sexual orientation changing during the course of one's lifespan are accounted for.

Controversy also surrounds the boundaries of "wellness", "wholeness," or "normality." In some cultures, differences in physical appearance, mental ability, and even sex can exclude one from traditions, ceremonies, or other important events, such as religious service. For example, in India, menstruation is not only a taboo subject but also traditionally considered shameful. Depending on beliefs, a woman who is menstruating is not allowed to cook or enter spiritual areas because she is "impure" and "cursed". There has been large-scale renegotiation of the social significance of variations which reduce the ability of a person to do one or more functions in western culture. Laws have been passed to alleviate the reduction of social opportunity available to those with disabilities. The concept of "differently abled" has been pushed by those persuading society to see limited incapacities as a human difference of less negative value.

Ideologies of superiority and inferiority

The extreme exercise of social valuation of human difference is in the definition of "human." Differences between humans can lead to an individual's "nonhuman" status, in the sense of withholding identification, charity, and social participation. Views of these variations can change enormously between cultures over time. For example, nineteenth-century European and American ideas of race and eugenics culminated in the attempts of the Nazi-led German society of the 1930s to deny not just reproduction, but life itself to a variety of people with "differences" attributed in part to biological characteristics. Hitler and Nazi leaders wanted to create a "master race" consisting of only Aryans, or blue-eyed, blonde-haired, and tall individuals, thus discriminating and attempting to exterminate those who didn't fit into this ideal.

Contemporary controversy continues over "what kind of human" is a fetus or child with a significant disability. On one end are people who would argue that Down syndrome is not a disability but a mere "difference," and on the other those who consider it such a calamity as to assume that such a child is better off "not born". For example, in India and China, being female is widely considered such a negatively valued human difference that female infanticide occurs at rates that severely distort the sex ratio.

Common human variations

Human Genetic Variation

  • Sex: Klinefelter syndrome, Turner syndrome, Female, Male
  • Skin Color: Human skin color, Albinism
  • Eye Color: Eye color, Martin scale
  • Hair Color: Human hair color, Hair coloring
  • Hair Quantity: Hair loss, Hirsutism
  • Extra Body Parts: Polydactyly, Supernumerary body part
  • Missing Body Parts: Amelia (birth defect), Amniotic band constriction
  • Recessive Phenotypes: Cleft lip and cleft palate, Earlobe

Physical Disabilities

  • Amputation: Amputation
  • Blindness: Color blindness, Visual impairment
  • Deafness: Tone deafness, Hearing loss
  • Muteness: Muteness, Selective mutism
  • Genetic/Long-term Diseases: Sickle-cell disease, Trisomy 21

Reproductive Abilities

  • Fertility: Infertility, Natural fertility
  • Fecundity: Fecundity selection, Sterility, Birth rate

Other Aspects of Human Physical Appearance

  • Acquired Variability: Tattoo, Plastic surgery
  • Body Weight: Obesity, Anorexia nervosa

Human Development

  • Age: Menopause, Puberty, Childhood
  • Developmental Disorders: Progeroid syndromes, Werner syndrome

Psychological and Personality Traits

  • Temperament: Extraversion and introversion, Big Five personality traits
  • Creative Ability: Dexterity, Creativity
