
Saturday, June 25, 2022

Curve fitting

From Wikipedia, the free encyclopedia

Fitting of a noisy curve by an asymmetrical peak model, with an iterative process (Gauss–Newton algorithm with variable damping factor α).
Top: raw data and model.
Bottom: evolution of the normalised sum of the squares of the errors.
 

Curve fitting is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points, possibly subject to constraints. Curve fitting can involve either interpolation, where an exact fit to the data is required, or smoothing, in which a "smooth" function is constructed that approximately fits the data. A related topic is regression analysis, which focuses more on questions of statistical inference such as how much uncertainty is present in a curve that is fit to data observed with random errors. Fitted curves can be used as an aid for data visualization, to infer values of a function where no data are available, and to summarize the relationships among two or more variables. Extrapolation refers to the use of a fitted curve beyond the range of the observed data, and is subject to a degree of uncertainty since it may reflect the method used to construct the curve as much as it reflects the observed data.

Fitting functions to data points

Most commonly, one fits a function of the form y = f(x).

Fitting lines and polynomial functions to data points

Polynomial curves fitting a sine function
Polynomial curves fitting points generated with a sine function. The black dotted line is the "true" data, the red line is a first degree polynomial, the green line is second degree, the orange line is third degree and the blue line is fourth degree.

The first degree polynomial equation

y = ax + b

is a line with slope a. A line will connect any two points, so a first degree polynomial equation is an exact fit through any two points with distinct x coordinates.

If the order of the equation is increased to a second degree polynomial, the following results:

y = ax² + bx + c

This will exactly fit a simple curve to three points.

If the order of the equation is increased to a third degree polynomial, the following is obtained:

y = ax³ + bx² + cx + d

This will exactly fit four points.

A more general statement would be to say it will exactly fit four constraints. Each constraint can be a point, angle, or curvature (which is the reciprocal of the radius of an osculating circle). Angle and curvature constraints are most often added to the ends of a curve, and in such cases are called end conditions. Identical end conditions are frequently used to ensure a smooth transition between polynomial curves contained within a single spline. Higher-order constraints, such as "the change in the rate of curvature", could also be added. This, for example, would be useful in highway cloverleaf design to understand the rate of change of the forces applied to a car (see jerk), as it follows the cloverleaf, and to set reasonable speed limits, accordingly.

The first degree polynomial equation could also be an exact fit for a single point and an angle while the third degree polynomial equation could also be an exact fit for two points, an angle constraint, and a curvature constraint. Many other combinations of constraints are possible for these and for higher order polynomial equations.

If there are more than n + 1 constraints (n being the degree of the polynomial), the polynomial curve can still be run through those constraints. An exact fit to all constraints is not certain (but might happen, for example, in the case of a first degree polynomial exactly fitting three collinear points). In general, however, some method is then needed to evaluate each approximation. The least squares method is one way to compare the deviations.
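
As a minimal illustration of the least squares approach, the following Python sketch (using NumPy, one of the tools listed under Software below) fits a first degree polynomial to five hypothetical points, more than the two constraints a line can satisfy exactly; the returned residual is the sum of squared vertical deviations.

    import numpy as np

    # Five hypothetical data points: more than the n + 1 = 2 constraints
    # that a first degree polynomial (a line) can satisfy exactly.
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([0.1, 0.9, 2.2, 2.8, 4.1])

    # Least squares chooses the line minimizing the sum of squared
    # vertical deviations; full=True also returns that sum.
    coeffs, residuals, *_ = np.polyfit(x, y, deg=1, full=True)
    print("slope and intercept:", coeffs)
    print("sum of squared deviations:", residuals)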

There are several reasons for preferring an approximate fit when it is possible to simply increase the degree of the polynomial equation and get an exact match:

  • Even if an exact match exists, it does not necessarily follow that it can be readily discovered. Depending on the algorithm used there may be a divergent case, where the exact fit cannot be calculated, or it might take too much computer time to find the solution. This situation might require an approximate solution.
  • The effect of averaging out questionable data points in a sample, rather than distorting the curve to fit them exactly, may be desirable.
  • Runge's phenomenon: high order polynomials can be highly oscillatory. If a curve runs through two points A and B, it would be expected that the curve would run somewhat near the midpoint of A and B as well. This may not happen with high-order polynomial curves; they may even have values that are very large in positive or negative magnitude. With low-order polynomials, the curve is more likely to fall near the midpoint (it is even guaranteed to run exactly through the midpoint with a first degree polynomial). This behaviour is illustrated in the sketch after this list.
  • Low-order polynomials tend to be smooth and high order polynomial curves tend to be "lumpy". To define this more precisely, the maximum number of inflection points possible in a polynomial curve is n − 2, where n is the degree of the polynomial equation. An inflection point is a location on the curve where the curvature changes sign, that is, where the curve transitions from "holding water" to "shedding water". Note that it is only possible that high order polynomials will be lumpy; they could also be smooth, but there is no guarantee of this, unlike with low order polynomial curves. A fifteenth degree polynomial could have, at most, thirteen inflection points, but could also have eleven, or nine, or any odd number down to one. (Polynomials of even degree could have any even number of inflection points from n − 2 down to zero.)
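
The sketch below illustrates Runge's phenomenon under one standard set of assumptions (Runge's function 1/(1 + 25x²) sampled at eleven equally spaced points, an example not taken from this article): the degree-10 interpolant passes through every sample exactly, yet can land far from the true curve at the midpoint between neighbouring samples near the ends of the interval.

    import numpy as np

    # Sample Runge's function 1/(1 + 25x^2) at 11 equally spaced points.
    def f(x):
        return 1.0 / (1.0 + 25.0 * x**2)

    x = np.linspace(-1.0, 1.0, 11)
    y = f(x)

    # A degree-10 polynomial runs exactly through all 11 points...
    p10 = np.polyfit(x, y, deg=10)

    # ...yet oscillates strongly between them near the interval's ends.
    mid = 0.5 * (x[0] + x[1])   # midpoint of the first two sample points
    print("true value at midpoint:     ", f(mid))
    print("degree-10 value at midpoint:", np.polyval(p10, mid))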

The degree of the polynomial curve being higher than needed for an exact fit is undesirable for all the reasons listed previously for high order polynomials, but also leads to a case where there are an infinite number of solutions. For example, a first degree polynomial (a line) constrained by only a single point, instead of the usual two, would give an infinite number of solutions. This brings up the problem of how to compare and choose just one solution, which can be a problem for software and for humans, as well. For this reason, it is usually best to choose as low a degree as possible for an exact match on all constraints, and perhaps an even lower degree, if an approximate fit is acceptable.

Relation between wheat yield and soil salinity

Fitting other functions to data points

Other types of curves, such as trigonometric functions (sine and cosine, for example), may also be used in certain cases.

In spectroscopy, data may be fitted with Gaussian, Lorentzian, Voigt and related functions.
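
As an illustrative sketch of peak fitting of this kind (with invented data, not a real spectrum), a Gaussian peak model can be fitted with SciPy's non-linear least squares routine curve_fit:

    import numpy as np
    from scipy.optimize import curve_fit

    # Gaussian peak model: amplitude a, centre mu, width sigma.
    def gaussian(x, a, mu, sigma):
        return a * np.exp(-((x - mu) ** 2) / (2.0 * sigma**2))

    # Synthetic noisy "spectrum" standing in for measured data.
    rng = np.random.default_rng(0)
    x = np.linspace(400.0, 500.0, 200)
    y = gaussian(x, 1.0, 450.0, 8.0) + rng.normal(0.0, 0.02, x.size)

    # Non-linear least squares; p0 is a rough initial guess.
    popt, pcov = curve_fit(gaussian, x, y, p0=[0.8, 445.0, 10.0])
    print("fitted amplitude, centre, width:", popt)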

In biology, ecology, demography, epidemiology, and many other disciplines, the growth of a population, the spread of infectious disease, etc. can be fitted using the logistic function.

In agriculture, the inverted logistic sigmoid function (S-curve) is used to describe the relation between crop yield and growth factors. The blue figure was made by a sigmoid regression of data measured in farm lands. Initially, at low soil salinity, the crop yield declines slowly with increasing soil salinity; thereafter the decline progresses faster.
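
A minimal sketch of such a sigmoid regression follows; the particular logistic parameterization and the (salinity, relative yield) measurements are assumptions for the example, not the actual data behind the blue figure.

    import numpy as np
    from scipy.optimize import curve_fit

    # Inverted logistic S-curve: yield stays near the maximum ym at low
    # salinity, then declines around the midpoint x50 with steepness k.
    def inverted_sigmoid(x, ym, x50, k):
        return ym / (1.0 + np.exp(k * (x - x50)))

    # Hypothetical field measurements: soil salinity vs. relative yield.
    salinity = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
    rel_yield = np.array([0.98, 0.97, 0.93, 0.80, 0.58, 0.35, 0.20, 0.12])

    popt, _ = curve_fit(inverted_sigmoid, salinity, rel_yield,
                        p0=[1.0, 5.0, 1.0])
    print("fitted ym, x50, k:", popt)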

Algebraic fit versus geometric fit for curves

For algebraic analysis of data, "fitting" usually means trying to find the curve that minimizes the vertical (y-axis) displacement of a point from the curve (e.g., ordinary least squares). However, for graphical and image applications geometric fitting seeks to provide the best visual fit; which usually means trying to minimize the orthogonal distance to the curve (e.g., total least squares), or to otherwise include both axes of displacement of a point from the curve. Geometric fits are not popular because they usually require non-linear and/or iterative calculations, although they have the advantage of a more aesthetic and geometrically accurate result.
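
To make the distinction concrete, the following sketch compares an ordinary (vertical-offset) least-squares line with an orthogonal-distance fit computed by scipy.odr; the data points are hypothetical.

    import numpy as np
    from scipy import odr

    # Straight-line model in scipy.odr's expected form f(beta, x).
    def line(beta, x):
        return beta[0] * x + beta[1]

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([0.2, 1.1, 1.9, 3.2, 3.9])

    # Algebraic fit: minimize vertical (y-axis) deviations only.
    ols_slope, ols_intercept = np.polyfit(x, y, 1)

    # Geometric fit: minimize orthogonal distances to the line,
    # treating both coordinates as subject to error.
    result = odr.ODR(odr.Data(x, y), odr.Model(line), beta0=[1.0, 0.0]).run()

    print("OLS slope, intercept:", ols_slope, ols_intercept)
    print("ODR slope, intercept:", result.beta)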

Fitting plane curves to data points

If a function of the form y = f(x) cannot be postulated, one can still try to fit a plane curve.

Other types of curves, such as conic sections (circular, elliptical, parabolic, and hyperbolic arcs) or trigonometric functions (sine and cosine, for example), may also be used in certain cases. For example, trajectories of objects under the influence of gravity follow a parabolic path when air resistance is ignored; hence, matching trajectory data points to a parabolic curve would make sense. Tides follow sinusoidal patterns, so tidal data points should be matched to a sine wave, or to the sum of two sine waves of different periods if the effects of the Moon and Sun are both considered.

For a parametric curve, it is effective to fit each of its coordinates as a separate function of arc length; assuming that data points can be ordered, the chord distance may be used.
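
A short sketch of this idea, using the cumulative chord distance as the parameter and a SciPy cubic spline for each coordinate (the sample points are illustrative):

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Ordered points along a plane curve (a quarter circle, for illustration).
    t = np.linspace(0.0, np.pi / 2, 8)
    x, y = np.cos(t), np.sin(t)

    # Approximate arc length by cumulative chord distance between points...
    chord = np.concatenate(([0.0], np.cumsum(np.hypot(np.diff(x), np.diff(y)))))

    # ...then fit each coordinate as a separate function of that parameter.
    fx, fy = CubicSpline(chord, x), CubicSpline(chord, y)
    s = 0.5 * chord[-1]   # halfway along the curve
    print("point at mid arc length:", float(fx(s)), float(fy(s)))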

Fitting a circle by geometric fit

Circle fitting with the Coope method, the points describing a circle arc, centre (1 ; 1), radius 4.
 
different models of ellipse fitting
 
Ellipse fitting minimising the algebraic distance (Fitzgibbon method).

Coope approaches the problem of trying to find the best visual fit of a circle to a set of 2D data points. The method elegantly transforms the ordinarily non-linear problem into a linear problem that can be solved without using iterative numerical methods, and is hence much faster than previous techniques.
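
The following Python sketch implements the linearization at the heart of this approach; the noisy arc data mimic the figure above (centre (1, 1), radius 4), with an assumed noise level chosen for the example.

    import numpy as np

    # Coope's linearization: (x - a)^2 + (y - b)^2 = r^2 rearranges to
    #   2a*x + 2b*y + c = x^2 + y^2,   where c = r^2 - a^2 - b^2,
    # which is linear in (a, b, c) and solvable by ordinary least squares.
    def fit_circle(x, y):
        A = np.column_stack([2.0 * x, 2.0 * y, np.ones_like(x)])
        rhs = x**2 + y**2
        (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        return a, b, np.sqrt(c + a**2 + b**2)

    # Noisy points on a circular arc with centre (1, 1) and radius 4.
    rng = np.random.default_rng(1)
    theta = np.linspace(0.0, np.pi / 2, 20)
    x = 1.0 + 4.0 * np.cos(theta) + rng.normal(0.0, 0.05, theta.size)
    y = 1.0 + 4.0 * np.sin(theta) + rng.normal(0.0, 0.05, theta.size)
    print("centre and radius:", fit_circle(x, y))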

Fitting an ellipse by geometric fit

The above technique is extended to general ellipses by adding a non-linear step, resulting in a method that is fast, yet finds visually pleasing ellipses of arbitrary orientation and displacement.

Fitting surfaces

Note that while this discussion was in terms of 2D curves, much of this logic also extends to 3D surfaces, each patch of which is defined by a net of curves in two parametric directions, typically called u and v. A surface may be composed of one or more surface patches in each direction.

Software

Many statistical packages such as R and numerical software such as gnuplot, GNU Scientific Library, MLAB, Maple, MATLAB, TK Solver 6.0, Scilab, Mathematica, GNU Octave, and SciPy include commands for doing curve fitting in a variety of scenarios. There are also programs specifically written to do curve fitting; they can be found in the lists of statistical and numerical-analysis programs as well as in Category:Regression and curve fitting software.

Forensic chemistry

From Wikipedia, the free encyclopedia
 

Forensic chemistry is the application of chemistry and its subfield, forensic toxicology, in a legal setting. A forensic chemist can assist in the identification of unknown materials found at a crime scene. Specialists in this field have a wide array of methods and instruments to help identify unknown substances. These include high-performance liquid chromatography, gas chromatography-mass spectrometry, atomic absorption spectroscopy, Fourier transform infrared spectroscopy, and thin layer chromatography. The range of different methods is important due to the destructive nature of some instruments and the number of possible unknown substances that can be found at a scene. Forensic chemists prefer using nondestructive methods first, to preserve evidence and to determine which destructive methods will produce the best results.

Along with other forensic specialists, forensic chemists commonly testify in court as expert witnesses regarding their findings. Forensic chemists follow a set of standards that have been proposed by various agencies and governing bodies, including the Scientific Working Group on the Analysis of Seized Drugs. In addition to the standard operating procedures proposed by the group, specific agencies have their own standards regarding the quality assurance and quality control of their results and their instruments. To ensure the accuracy of what they are reporting, forensic chemists routinely check and verify that their instruments are working correctly and are still able to detect and measure various quantities of different substances.

Role in investigations

Aftermath of the Oklahoma City bombing.
Chemists were able to identify the explosive ANFO at the scene of the Oklahoma City bombing.

Forensic chemists' analysis can provide leads for investigators, and they can confirm or refute their suspicions. The identification of the various substances found at the scene can tell investigators what to look for during their search. During fire investigations, forensic chemists can determine if an accelerant such as gasoline or kerosene was used; if so, this suggests that the fire was intentionally set. Forensic chemists can also narrow down the suspect list to people who would have access to the substance used in a crime. For example, in explosive investigations, the identification of RDX or C-4 would indicate a military connection as those substances are military grade explosives. On the other hand, the identification of TNT would create a wider suspect list, since it is used by demolition companies as well as in the military. During poisoning investigations, the detection of specific poisons can give detectives an idea of what to look for when they are interviewing potential suspects. For example, an investigation that involves ricin would tell investigators to look for ricin's precursors, the seeds of the castor oil plant.

Forensic chemists also help to confirm or refute investigators' suspicions in drug or alcohol cases. The instruments used by forensic chemists can detect minute quantities, and accurate measurement can be important in crimes such as driving under the influence as there are specific blood alcohol content cutoffs where penalties begin or increase. In suspected overdose cases, the quantity of the drug found in the person's system can confirm or rule out overdose as the cause of death.

History

Early history

A bottle of strychnine extract was once easily obtainable in apothecaries.

Throughout history, a variety of poisons have been used to commit murder, including arsenic, nightshade, hemlock, strychnine, and curare. Until the early 19th century, there were no methods to accurately determine if a particular chemical was present, and poisoners were rarely punished for their crimes. In 1836, one of the first major contributions to forensic chemistry was introduced by British chemist James Marsh. He created the Marsh test for arsenic detection, which was subsequently used successfully in a murder trial. It was also during this time that forensic toxicology began to be recognized as a distinct field. Mathieu Orfila, the "father of toxicology", made great advancements to the field during the early 19th century. A pioneer in the development of forensic microscopy, Orfila contributed to the advancement of this method for the detection of blood and semen. Orfila was also the first chemist to successfully classify different chemicals into categories such as corrosives, narcotics, and astringents.

The next advancement in the detection of poisons came in 1850 when a valid method for detecting vegetable alkaloids in human tissue was created by chemist Jean Stas. Stas's method was quickly adopted and used successfully in court to convict Count Hippolyte Visart de Bocarmé of murdering his brother-in-law by nicotine poisoning. Stas was able to successfully isolate the alkaloid from the organs of the victim. Stas's protocol was subsequently altered to incorporate tests for caffeine, quinine, morphine, strychnine, atropine, and opium.

The wide range of instrumentation for forensic chemical analysis also began to be developed during this time period. The early 19th century saw the invention of the spectroscope by Joseph von Fraunhofer. In 1859, chemist Robert Bunsen and physicist Gustav Kirchhoff expanded on Fraunhofer's invention. Their experiments with spectroscopy showed that specific substances created a unique spectrum when exposed to specific wavelengths of light. Using spectroscopy, the two scientists were able to identify substances based on their spectrum, providing a method of identification for unknown materials. In 1906 botanist Mikhail Tsvet invented column chromatography, an early predecessor to thin layer chromatography, and used it to separate and examine plant pigments, including the chlorophylls. The ability to separate mixtures into their individual components allows forensic chemists to examine the parts of an unknown material against a database of known products. By matching the retention factors for the separated components with known values, materials can be identified.

Modernization

A gas chromatography mass spectrometry instrument that can be used to determine the identity of unknown chemicals.
A GC-MS unit with doors open. The gas chromatograph is on the right and the mass spectrometer is on the left.

Modern forensic chemists rely on numerous instruments to identify unknown materials found at a crime scene. The 20th century saw many advancements in technology that allowed chemists to detect smaller amounts of material more accurately. The first major advancement in this century came during the 1930s with the invention of a spectrometer that could measure the signal produced with infrared (IR) light. Early IR spectrometers used a monochromator and could only measure light absorption in a very narrow wavelength band. It was not until the coupling of an interferometer with an IR spectrometer in 1949 by Peter Fellgett that the complete infrared spectrum could be measured at once. Fellgett also used the Fourier transform, a mathematical method that can break down a signal into its individual frequencies, to make sense of the enormous amount of data received from the complete infrared analysis of a material. Since then, Fourier transform infrared spectroscopy (FTIR) instruments have become critical in the forensic analysis of unknown material because they are nondestructive and extremely quick to use. Spectroscopy was further advanced in 1955 with the invention of the modern atomic absorption (AA) spectrophotometer by Alan Walsh. AA analysis can detect specific elements that make up a sample along with their concentrations, allowing for the easy detection of heavy metals such as arsenic and cadmium.

Advancements in the field of chromatography arrived in 1953 with the invention of the gas chromatograph by Anthony T. James and Archer John Porter Martin, allowing for the separation of volatile liquid mixtures with components which have similar boiling points. Nonvolatile liquid mixtures could be separated with liquid chromatography, but substances with similar retention times could not be resolved until the invention of high-performance liquid chromatography (HPLC) by Csaba Horváth in 1970. Modern HPLC instruments are capable of detecting and resolving substances whose concentrations are as low as parts per trillion.

One of the most important advancements in forensic chemistry came in 1955 with the invention of gas chromatography-mass spectrometry (GC-MS) by Fred McLafferty and Roland Gohlke. The coupling of a gas chromatograph with a mass spectrometer allowed for the identification of a wide range of substances. GC-MS analysis is widely considered the "gold standard" for forensic analysis due to its sensitivity and versatility along with its ability to quantify the amount of substance present. Instrument sensitivity has advanced to the point that minute impurities within compounds can be detected, potentially allowing investigators to trace chemicals to a specific batch and lot from a manufacturer.

Methods

Forensic chemists rely on a multitude of instruments to identify unknown substances found at a scene. Different methods can be used to determine the identity of the same substance, and it is up to the examiner to determine which method will produce the best results. Factors that forensic chemists might consider when performing an examination are the length of time a specific instrument will take to examine a substance and the destructive nature of that instrument. They prefer using nondestructive methods first, to preserve the evidence for further examination. Nondestructive techniques can also be used to narrow down the possibilities, making it more likely that the correct method will be used the first time when a destructive method is used.

Spectroscopy

ATR FTIR spectrum for hexane showing percent transmittance (%T) versus wavenumber (cm⁻¹).

The two main standalone spectroscopy techniques for forensic chemistry are FTIR and AA spectroscopy. FTIR is a nondestructive process that uses infrared light to identify a substance. The attenuated total reflectance (ATR) sampling technique eliminates the need for substances to be prepared before analysis. The combination of nondestructiveness and zero preparation makes ATR FTIR analysis a quick and easy first step in the analysis of unknown substances. To facilitate the positive identification of the substance, FTIR instruments are loaded with databases that can be searched for known spectra that match the unknown's spectrum. FTIR analysis of mixtures, while not impossible, presents specific difficulties due to the cumulative nature of the response. When analyzing an unknown that contains more than one substance, the resulting spectrum is a combination of the individual spectra of each component. While common mixtures have known spectra on file, novel mixtures can be difficult to resolve, making FTIR an unsuitable means of identification in those cases. However, the instrument can be used to determine the general chemical structures present, allowing forensic chemists to determine the best method for analysis with other instruments. For example, a methoxy group will result in a peak between 3,030 and 2,950 wavenumbers (cm⁻¹).

Atomic absorption spectroscopy (AAS) is a destructive technique that is able to determine the elements that make up the analyzed sample. AAS performs this analysis by subjecting the sample to an extremely high heat source, breaking the atomic bonds of the substance and leaving free atoms. Radiation in the form of light is then passed through the sample, forcing the atoms to jump to a higher energy state. Forensic chemists can test for each element by using a corresponding wavelength of light that forces that element's atoms to a higher energy state during the analysis. For this reason, and due to the destructive nature of this method, AAS is generally used as a confirmatory technique after preliminary tests have indicated the presence of a specific element in the sample. The concentration of the element in the sample is proportional to the amount of light absorbed when compared to a blank sample. AAS is useful in cases of suspected heavy metal poisoning such as with arsenic, lead, mercury, and cadmium. The concentration of the substance in the sample can indicate whether heavy metals were the cause of death.

Chromatography

HPLC readout of an Excedrin tablet. Peaks from left to right are acetaminophen, aspirin, and caffeine.

Spectroscopy techniques are useful when the sample being tested is pure, or a very common mixture. When an unknown mixture is being analyzed it must be broken down into its individual parts. Chromatography techniques can be used to break apart mixtures into their components allowing for each part to be analyzed separately.

Thin layer chromatography (TLC) is a quick alternative to more complex chromatography methods. TLC can be used to analyze inks and dyes by extracting the individual components. This can be used to investigate notes or fibers left at the scene, since each company's product is slightly different and those differences can be seen with TLC. The only limiting factor with TLC analysis is the necessity for the components to be soluble in whatever solution is used to carry the components up the analysis plate; this solution is called the mobile phase. The forensic chemist can compare unknowns with known standards by looking at the distance each component travelled. The distance a component travels from the starting point, divided by the distance travelled by the solvent front, is known as the retention factor (Rf). If each Rf value matches a known sample, that is an indication of the unknown's identity.
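
A small illustrative calculation (the plate distances are hypothetical):

    # Retention factor: distance travelled by a component divided by the
    # distance travelled by the mobile-phase solvent front (both measured
    # from the origin spot).
    def retention_factor(component_mm, solvent_front_mm):
        return component_mm / solvent_front_mm

    # Hypothetical TLC plate: solvent front at 80 mm, two extracted dyes.
    print(retention_factor(36.0, 80.0))   # Rf = 0.45
    print(retention_factor(60.0, 80.0))   # Rf = 0.75
    # Matching these Rf values against standards run under the same
    # conditions is an indication of each component's identity.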

High-performance liquid chromatography can be used to extract individual components from a mixture dissolved in a solution. HPLC is used for nonvolatile mixtures that would not be suitable for gas chromatography. This is useful in drug analysis where the pharmaceutical is a combination drug, since the components would separate, or elute, at different times, allowing for the verification of each component. The eluates from the HPLC column are then fed into various detectors that produce a peak on a graph relative to the concentration of each component as it elutes off the column. The most common type of detector is an ultraviolet-visible spectrometer, since the most common items of interest tested with HPLC, pharmaceuticals, have UV absorbance.

Gas chromatography (GC) performs the same function as liquid chromatography, but it is used for volatile mixtures. In forensic chemistry, the most common GC instruments use mass spectrometry as their detector. GC-MS can be used in investigations of arson, poisoning, and explosions to determine exactly what was used. In theory, GC-MS instruments can detect substances whose concentrations are in the femtogram (10⁻¹⁵ g) range. However, in practice, due to signal-to-noise ratios and other limiting factors, such as the age of the individual parts of the instrument, the practical detection limit for GC-MS is in the picogram (10⁻¹² g) range. GC-MS is also capable of quantifying the substances it detects; chemists can use this information to determine the effect the substance would have on an individual. GC-MS instruments need around 1,000 times more of the substance to quantify the amount than they need simply to detect it; the limit of quantification is typically in the nanogram (10⁻⁹ g) range.

Forensic toxicology

Forensic toxicology is the study of the pharmacodynamics, or what a substance does to the body, and pharmacokinetics, or what the body does to the substance. To accurately determine the effect a particular drug has on the human body, forensic toxicologists must be aware of various levels of drug tolerance that an individual can build up as well as the therapeutic index for various pharmaceuticals. Toxicologists are tasked with determining whether any toxin found in a body was the cause of or contributed to an incident, or whether it was at too low a level to have had an effect. While the determination of the specific toxin can be time-consuming due to the number of different substances that can cause injury or death, certain clues can narrow down the possibilities. For example, carbon monoxide poisoning would result in bright red blood while death from hydrogen sulfide poisoning would cause the brain to have a green hue.

Toxicologists are also aware of the different metabolites that a specific drug could break down into inside the body. For example, a toxicologist can confirm that a person took heroin by the presence in a sample of 6-monoacetylmorphine, which only comes from the breakdown of heroin. The constant creation of new drugs, both legal and illicit, forces toxicologists to keep themselves apprised of new research and methods to test for these novel substances. The stream of new formulations means that a negative test result does not necessarily rule out drugs. To avoid detection, illicit drug manufacturers frequently change the chemicals' structure slightly. These compounds are often not detected by routine toxicology tests and can be masked by the presence of a known compound in the same sample. As new compounds are discovered, known spectra are determined and entered into the databases that can be downloaded and used as reference standards. Laboratories also tend to keep in-house databases for the substances they find locally.

Standards

SWGDRUG analysis categories: Category A, Category B, and Category C (described below).

Guidelines have been set up by various governing bodies regarding the standards that are followed by practicing forensic scientists. For forensic chemists, the international Scientific Working Group for the Analysis of Seized Drugs (SWGDRUG) presents recommendations for the quality assurance and quality control of tested materials. In the identification of unknown samples, protocols have been grouped into three categories based on the probability of false positives. Instruments and protocols in category A are considered the best for uniquely identifying an unknown material, followed by categories B and then C. To ensure the accuracy of identifications, SWGDRUG recommends that multiple tests using different instruments be performed on each sample, and that one category A technique and at least one other technique be used. If a category A technique is not available, or the forensic chemist decides not to use one, SWGDRUG recommends that at least three techniques be used, two of which must be from category B. Combination instruments, such as GC-MS, are considered two separate tests as long as the results are compared to known values individually. For example, the GC elution times would be compared to known values along with the MS spectra. If both of those match a known substance, no further tests are needed.

Standards and controls are necessary in the quality control of the various instruments used to test samples. Due to the nature of their work in the legal system, chemists must ensure that their instruments are working accurately. To do this, known controls are tested consecutively with unknown samples. By comparing the readouts of the controls with their known profiles, the instrument can be confirmed to have been working properly at the time the unknowns were tested. Standards are also used to determine the instrument's limit of detection and limit of quantification for various common substances. Calculated quantities must be above the limit of detection to be confirmed as present and above the limit of quantification to be quantified; a value below the relevant limit is not considered reliable.

Testimony

The standardized procedures for testimony by forensic chemists are provided by the various agencies that employ the scientists as well as SWGDRUG. Forensic chemists are ethically bound to present testimony in a neutral manner and to be open to reconsidering their statements if new information is found. Chemists should also limit their testimony to areas they have been qualified in regardless of questions during direct or cross-examination.

Individuals called to testify must be able to relay scientific information and processes in a manner that lay individuals can understand. By being qualified as an expert, chemists are allowed to give their opinions on the evidence as opposed to just stating the facts. This can lead to competing opinions from experts hired by the opposing side. Ethical guidelines for forensic chemists require that testimony be given in an objective manner, regardless of what side the expert is testifying for. Forensic experts that are called to testify are expected to work with the lawyer who issued the summons and to assist in their understanding of the material they will be asking questions about.

Education

Forensic chemistry positions require a bachelor's degree or similar in a natural or physical science, as well as laboratory experience in general, organic, and analytical chemistry. Once in the position, individuals are trained in the protocols that are performed at that specific lab until they can prove they are competent to perform all experiments without supervision. Practicing chemists already in the field are expected to engage in continuing education to maintain their proficiency.

Ultraviolet–visible spectroscopy

From Wikipedia, the free encyclopedia
 
Beckman DU640 UV/Vis spectrophotometer

UV spectroscopy or UV–visible spectrophotometry (UV–Vis or UV/Vis) refers to absorption spectroscopy or reflectance spectroscopy in part of the ultraviolet and the full, adjacent visible regions of the electromagnetic spectrum. Being relatively inexpensive and easily implemented, this methodology is widely used in diverse applied and fundamental applications. The only requirement is that the sample absorb in the UV–Vis region, i.e. be a chromophore. Absorption spectroscopy is complementary to fluorescence spectroscopy. Parameters of interest, besides the wavelength of measurement, are absorbance (A), transmittance (%T), or reflectance (%R), and their change with time.

Optical transitions

Most molecules and ions absorb energy in the ultraviolet or visible range, i.e., they are chromophores. The absorbed photon excites an electron in the chromophore to higher energy molecular orbitals, giving rise to an excited state. For organic chromophores, four possible types of transitions are assumed: π–π*, n–π*, σ–σ*, and n–σ*. Transition metal complexes are often colored (i.e., absorb visible light) owing to the presence of multiple electronic states associated with incompletely filled d orbitals.

Applications

An example of a UV/Vis readout

UV/Vis spectroscopy is routinely used in analytical chemistry for the quantitative determination of diverse analytes or samples, such as transition metal ions, highly conjugated organic compounds, and biological macromolecules. Spectroscopic analysis is commonly carried out in solutions, but solids and gases may also be studied.

  • Organic compounds, especially those with a high degree of conjugation, also absorb light in the UV or visible regions of the electromagnetic spectrum. The solvents for these determinations are often water for water-soluble compounds, or ethanol for organic-soluble compounds. (Organic solvents may have significant UV absorption; not all solvents are suitable for use in UV spectroscopy. Ethanol absorbs very weakly at most wavelengths.) Solvent polarity and pH can affect the absorption spectrum of an organic compound. Tyrosine, for example, increases in absorption maxima and molar extinction coefficient when pH increases from 6 to 13 or when solvent polarity decreases.
  • While charge transfer complexes also give rise to colours, the colours are often too intense to be used for quantitative measurement.

The Beer–Lambert law states that the absorbance of a solution is directly proportional to the concentration of the absorbing species in the solution and the path length. Thus, for a fixed path length, UV/Vis spectroscopy can be used to determine the concentration of the absorber in a solution. It is necessary to know how quickly the absorbance changes with concentration. This can be taken from references (tables of molar extinction coefficients), or more accurately, determined from a calibration curve.
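
As a sketch of the calibration-curve approach, with invented standards: for a fixed path length, absorbance is fitted as a straight line in concentration, and an unknown is read off that line.

    import numpy as np

    # Calibration standards: known concentrations (mol/L) and absorbances.
    conc = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
    absorbance = np.array([0.00, 0.12, 0.25, 0.37, 0.50])

    # Beer-Lambert predicts A = epsilon * c * L, a straight line whose
    # slope (for a fixed path length L) is epsilon * L.
    slope, intercept = np.polyfit(conc, absorbance, 1)

    # Read an unknown sample's concentration off the calibration curve.
    a_unknown = 0.31
    print("estimated concentration:", (a_unknown - intercept) / slope)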

A UV/Vis spectrophotometer may be used as a detector for HPLC. The presence of an analyte gives a response assumed to be proportional to the concentration. For accurate results, the instrument's response to the analyte in the unknown should be compared with the response to a standard; this is very similar to the use of calibration curves. The response (e.g., peak height) for a particular concentration is known as the response factor.

The wavelengths of absorption peaks can be correlated with the types of bonds in a given molecule and are valuable in determining the functional groups within a molecule. The Woodward–Fieser rules, for instance, are a set of empirical observations used to predict λmax, the wavelength of the most intense UV/Vis absorption, for conjugated organic compounds such as dienes and ketones. The spectrum alone is not, however, a specific test for any given sample. The nature of the solvent, the pH of the solution, temperature, high electrolyte concentrations, and the presence of interfering substances can influence the absorption spectrum. Experimental variations such as the slit width (effective bandwidth) of the spectrophotometer will also alter the spectrum. To apply UV/Vis spectroscopy to analysis, these variables must be controlled or accounted for in order to identify the substances present.

The method is most often used in a quantitative way to determine concentrations of an absorbing species in solution, using the Beer–Lambert law:

A = log₁₀(I₀/I) = ε · c · L,

where A is the measured absorbance (in Absorbance Units (AU)), I₀ is the intensity of the incident light at a given wavelength, I is the transmitted intensity, L the path length through the sample, and c the concentration of the absorbing species. For each species and wavelength, ε is a constant known as the molar absorptivity or extinction coefficient. This constant is a fundamental molecular property in a given solvent, at a particular temperature and pressure, and has units of L·mol⁻¹·cm⁻¹.

The absorbance and extinction ε are sometimes defined in terms of the natural logarithm instead of the base-10 logarithm.

The Beer–Lambert law is useful for characterizing many compounds but does not hold as a universal relationship for the concentration and absorption of all substances. A second-order polynomial relationship between absorption and concentration is sometimes encountered for very large, complex molecules such as organic dyes (Xylenol Orange or Neutral Red, for example).

UV–Vis spectroscopy is also used in the semiconductor industry to measure the thickness and optical properties of thin films on a wafer. UV–Vis spectrometers measure the reflectance of light, which can be analyzed via the Forouhi–Bloomer dispersion equations to determine the index of refraction (n) and the extinction coefficient (k) of a given film across the measured spectral range.

Practical considerations

The Beer–Lambert law has implicit assumptions that must be met experimentally for it to apply; otherwise there is a possibility of deviations from the law. For instance, the chemical makeup and physical environment of the sample can alter its extinction coefficient. The chemical and physical conditions of a test sample therefore must match reference measurements for conclusions to be valid. Worldwide, pharmacopoeias such as the American (USP) and European (Ph. Eur.) pharmacopeias demand that spectrophotometers perform according to strict regulatory requirements encompassing factors such as stray light and wavelength accuracy.

Spectral bandwidth

It is important to have a monochromatic source of radiation for the light incident on the sample cell. Monochromaticity is measured as the width of the "triangle" formed by the intensity spike, at one half of the peak intensity. A given spectrometer has a spectral bandwidth that characterizes how monochromatic the incident light is. If this bandwidth is comparable to (or greater than) the width of the absorption line, then the measured extinction coefficient will be in error. In reference measurements, the instrument bandwidth (bandwidth of the incident light) is kept below the width of the spectral lines. When a test material is being measured, the bandwidth of the incident light should also be sufficiently narrow. Reducing the spectral bandwidth reduces the energy passed to the detector and will, therefore, require a longer measurement time to achieve the same signal-to-noise ratio.

Wavelength error

In liquids, the extinction coefficient usually changes slowly with wavelength. A peak of the absorbance curve (a wavelength where the absorbance reaches a maximum) is where the rate of change of absorbance with wavelength is smallest. Measurements are therefore usually made at a peak to minimize errors produced by wavelength errors in the instrument, that is, errors due to having a different extinction coefficient than assumed.

Stray light

Another important major factor is the purity of the light used. The most important factor affecting this is the stray light level of the monochromator.

The detector used is broadband; it responds to all the light that reaches it. If a significant amount of the light passed through the sample contains wavelengths that have much lower extinction coefficients than the nominal one, the instrument will report an incorrectly low absorbance. Any instrument will reach a point where an increase in sample concentration will not result in an increase in the reported absorbance, because the detector is simply responding to the stray light. In practice the concentration of the sample or the optical path length must be adjusted to place the unknown absorbance within a range that is valid for the instrument. Sometimes an empirical calibration function is developed, using known concentrations of the sample, to allow measurements into the region where the instrument is becoming non-linear.

As a rough guide, an instrument with a single monochromator would typically have a stray light level corresponding to about 3 Absorbance Units (AU), which would make measurements above about 2 AU problematic. A more complex instrument with a double monochromator would have a stray light level corresponding to about 6 AU, which would therefore allow measuring a much wider absorbance range.

Deviations from the Beer–Lambert law

At sufficiently high concentrations, the absorption bands will saturate and show absorption flattening. The absorption peak appears to flatten because close to 100% of the light is already being absorbed. The concentration at which this occurs depends on the particular compound being measured. One test that can be used to test for this effect is to vary the path length of the measurement. In the Beer–Lambert law, varying concentration and path length has an equivalent effect—diluting a solution by a factor of 10 has the same effect as shortening the path length by a factor of 10. If cells of different path lengths are available, testing if this relationship holds true is one way to judge if absorption flattening is occurring.

Solutions that are not homogeneous can show deviations from the Beer–Lambert law because of the phenomenon of absorption flattening. This can happen, for instance, where the absorbing substance is located within suspended particles. The deviations will be most noticeable under conditions of low concentration and high absorbance. The last reference describes a way to correct for this deviation.

Some solutions, like copper(II) chloride in water, change visually at a certain concentration because of changed conditions around the coloured ion (the divalent copper ion). For copper(II) chloride this means a shift from blue to green, which would mean that monochromatic measurements would deviate from the Beer–Lambert law.

Measurement uncertainty sources

The above factors contribute to the measurement uncertainty of the results obtained with UV/Vis spectrophotometry. If UV/Vis spectrophotometry is used in quantitative chemical analysis then the results are additionally affected by uncertainty sources arising from the nature of the compounds and/or solutions that are measured. These include spectral interferences caused by absorption band overlap, fading of the color of the absorbing species (caused by decomposition or reaction) and possible composition mismatch between the sample and the calibration solution.

Ultraviolet–visible spectrophotometer

The instrument used in ultraviolet–visible spectroscopy is called a UV/Vis spectrophotometer. It measures the intensity of light after passing through a sample (I), and compares it to the intensity of light before it passes through the sample (I₀). The ratio I/I₀ is called the transmittance, and is usually expressed as a percentage (%T). The absorbance, A, is based on the transmittance:

A = −log₁₀(%T / 100%)

The UV–visible spectrophotometer can also be configured to measure reflectance. In this case, the spectrophotometer measures the intensity of light reflected from a sample (I), and compares it to the intensity of light reflected from a reference material (I₀), such as a white tile. The ratio I/I₀ is called the reflectance, and is usually expressed as a percentage (%R).
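
A small sketch of the absorbance–transmittance relation:

    import math

    # Absorbance from percent transmittance: A = -log10(%T / 100).
    def absorbance_from_percent_T(percent_T):
        return -math.log10(percent_T / 100.0)

    print(absorbance_from_percent_T(100.0))  # 0.0 AU: nothing absorbed
    print(absorbance_from_percent_T(10.0))   # 1.0 AU: 90% absorbed
    print(absorbance_from_percent_T(1.0))    # 2.0 AU: 99% absorbed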

The basic parts of a spectrophotometer are a light source, a holder for the sample, a diffraction grating in a monochromator or a prism to separate the different wavelengths of light, and a detector. The radiation source is often a tungsten filament (300–2500 nm); a deuterium arc lamp, which is continuous over the ultraviolet region (190–400 nm); a xenon arc lamp, which is continuous from 160 to 2,000 nm; or, more recently, light-emitting diodes (LEDs) for the visible wavelengths. The detector is typically a photomultiplier tube, a photodiode, a photodiode array or a charge-coupled device (CCD). Single photodiode detectors and photomultiplier tubes are used with scanning monochromators, which filter the light so that only light of a single wavelength reaches the detector at one time. The scanning monochromator moves the diffraction grating to "step through" each wavelength so that its intensity may be measured as a function of wavelength. Fixed monochromators are used with CCDs and photodiode arrays. As both of these devices consist of many detectors grouped into one- or two-dimensional arrays, they are able to collect light of different wavelengths on different pixels or groups of pixels simultaneously.

Simplified schematic of a double beam UV–visible spectrophotometer

A spectrophotometer can be either single beam or double beam. In a single beam instrument (such as the Spectronic 20), all of the light passes through the sample cell, and the reference intensity I₀ must be measured by removing the sample. This was the earliest design and is still in common use in both teaching and industrial labs.

In a double-beam instrument, the light is split into two beams before it reaches the sample. One beam is used as the reference; the other beam passes through the sample. The reference beam intensity is taken as 100% Transmission (or 0 Absorbance), and the measurement displayed is the ratio of the two beam intensities. Some double-beam instruments have two detectors (photodiodes), and the sample and reference beam are measured at the same time. In other instruments, the two beams pass through a beam chopper, which blocks one beam at a time. The detector alternates between measuring the sample beam and the reference beam in synchronism with the chopper. There may also be one or more dark intervals in the chopper cycle. In this case, the measured beam intensities may be corrected by subtracting the intensity measured in the dark interval before the ratio is taken.

In a single-beam instrument, the cuvette containing only a solvent has to be measured first. Mettler Toledo developed a single beam array spectrophotometer that allows fast and accurate measurements over the UV/Vis range. The light source consists of a xenon flash lamp for the ultraviolet (UV) as well as for the visible (VIS) and near-infrared wavelength regions, covering a spectral range from 190 up to 1100 nm. The lamp flashes are focused on a glass fiber that directs the beam of light onto a cuvette containing the sample solution. The beam passes through the sample and specific wavelengths are absorbed by the sample components. The remaining light is collected after the cuvette by a glass fiber and guided into a spectrograph. The spectrograph consists of a diffraction grating that separates the light into its different wavelengths and a CCD sensor to record the data. The whole spectrum is thus measured simultaneously, allowing for fast recording.

Samples for UV/Vis spectrophotometry are most often liquids, although the absorbance of gases and even of solids can also be measured. Samples are typically placed in a transparent cell, known as a cuvette. Cuvettes are typically rectangular in shape, commonly with an internal width of 1 cm. (This width becomes the path length, L, in the Beer–Lambert law.) Test tubes can also be used as cuvettes in some instruments. The type of sample container used must allow radiation to pass over the spectral region of interest. The most widely applicable cuvettes are made of high quality fused silica or quartz glass because these are transparent throughout the UV, visible and near infrared regions. Glass and plastic cuvettes are also common, although glass and most plastics absorb in the UV, which limits their usefulness to visible wavelengths.

Specialized instruments have also been made. These include attaching spectrophotometers to telescopes to measure the spectra of astronomical features. UV–visible microspectrophotometers consist of a UV–visible microscope integrated with a UV–visible spectrophotometer.

A complete spectrum of the absorption at all wavelengths of interest can often be produced directly by a more sophisticated spectrophotometer. In simpler instruments the absorption is determined one wavelength at a time and then compiled into a spectrum by the operator. By removing the concentration dependence, the extinction coefficient (ε) can be determined as a function of wavelength.

Microspectrophotometry

UV–visible spectroscopy of microscopic samples is done by integrating an optical microscope with UV–visible optics, white light sources, a monochromator, and a sensitive detector such as a charge-coupled device (CCD) or photomultiplier tube (PMT). As only a single optical path is available, these are single beam instruments. Modern instruments are capable of measuring UV–visible spectra in both reflectance and transmission of micron-scale sampling areas. The advantage of such instruments is that they are able to measure microscopic samples but are also able to measure the spectra of larger samples with high spatial resolution. As such, they are used in the forensic laboratory to analyze the dyes and pigments in individual textile fibers, microscopic paint chips and the color of glass fragments. They are also used in materials science and biological research and for determining the energy content of coal and petroleum source rock by measuring the vitrinite reflectance. Microspectrophotometers are used in the semiconductor and micro-optics industries for monitoring the thickness of thin films after they have been deposited. In the semiconductor industry, they are used because the critical dimensions of circuitry are microscopic. A typical test of a semiconductor wafer would entail the acquisition of spectra from many points on a patterned or unpatterned wafer. The thickness of the deposited films may be calculated from the interference pattern of the spectra. In addition, ultraviolet–visible spectrophotometry can be used to determine the thickness, along with the refractive index and extinction coefficient, of thin films. A map of the film thickness across the entire wafer can then be generated and used for quality control purposes.

Additional applications

UV/Vis can be applied to characterize the rate of a chemical reaction. Illustrative is the conversion of the yellow-orange and blue isomers of mercury dithizonate. This method of analysis relies on the fact that absorbance is linearly proportional to concentration. The same approach allows determination of equilibria between chromophores.

From the spectrum of burning gases, it is possible to determine the chemical composition of a fuel, the temperature of the gases, and the air–fuel ratio.
