

Surveying

A surveyor using a total station

A student using a theodolite in the field

Surveying or land surveying is the technique, profession, art, and science of determining the terrestrial two-dimensional or three-dimensional positions of points and the distances and angles between them. A land surveying professional is called a land surveyor. These points are usually on the surface of the Earth, and they are often used to establish maps and boundaries for ownership; to mark locations, such as the designed positions of structural components for construction or the surface location of subsurface features; or for other purposes required by government or civil law, such as property sales.

Surveyors work with elements of geodesy, geometry, trigonometry, regression analysis, physics, engineering, metrology, programming languages, and the law. They use equipment such as total stations, robotic total stations, theodolites, GNSS receivers, retroreflectors, 3D scanners, LiDAR sensors, radios, inclinometers, handheld tablets, optical and digital levels, subsurface locators, drones, GIS, and surveying software.

Surveying has been an element in the development of the human environment since the beginning of recorded history. The planning and execution of most forms of construction require it. It is also used in transport, communications, mapping, and the definition of legal boundaries for land ownership, and is an important tool for research in many other scientific disciplines.

Definition

The International Federation of Surveyors defines the function of surveying as follows:

A surveyor is a professional person with the academic qualifications and technical expertise to conduct one, or more, of the following activities:

  • to determine, measure and represent land, three-dimensional objects, point-fields and trajectories;
  • to assemble and interpret land and geographically related information;
  • to use that information for the planning and efficient administration of the land, the sea and any structures thereon; and
  • to conduct research into the above practices and to develop them.

History

Ancient history

A plumb rule from the book Cassells' Carpentry and Joinery

Surveying has occurred since humans built the first large structures. In ancient Egypt, a rope stretcher would use simple geometry to re-establish boundaries after the annual floods of the Nile River. The almost perfect squareness and north–south orientation of the Great Pyramid of Giza, built c. 2700 BC, affirm the Egyptians' command of surveying. The groma instrument originated in Mesopotamia (early 1st millennium BC). The prehistoric monument at Stonehenge (c. 2500 BC) was set out by early surveyors using peg and rope geometry.

The mathematician Liu Hui described ways of measuring distant objects in his work Haidao Suanjing or The Sea Island Mathematical Manual, published in 263 AD.

The Romans recognized land surveying as a profession. They established the basic measurements by which the Roman Empire was divided and compiled a tax register of conquered lands (c. 300 AD). Roman surveyors were known as Gromatici.

In medieval Europe, beating the bounds maintained the boundaries of a village or parish. This was the practice of gathering a group of residents and walking around the parish or village to establish a communal memory of the boundaries. Young boys were included to ensure the memory lasted as long as possible.

In England, William the Conqueror commissioned the Domesday Book in 1086. It recorded the names of all the landowners, the area of land they owned, the quality of the land, and specific information about the area's contents and inhabitants. It did not include maps showing exact locations.

Modern era

Table of Surveying, 1728 Cyclopaedia

Abel Foullon described a plane table in 1551, but it is thought that the instrument was in use earlier as his description is of a developed instrument.

Gunter's chain was introduced in 1620 by English mathematician Edmund Gunter. It enabled plots of land to be accurately surveyed and plotted for legal and commercial purposes.

Leonard Digges described a theodolite that measured horizontal angles in his book A geometric practice named Pantometria (1571). Joshua Habermel (Erasmus Habermehl) created a theodolite with a compass and tripod in 1576. Jonathan Sisson was the first to incorporate a telescope on a theodolite, in 1725.

In the 18th century, modern techniques and instruments for surveying began to be used. William Gascoigne had invented an instrument that used a telescope with an installed crosshair as a target device in 1640. James Watt developed an optical meter for the measuring of distance in 1771; it measured the parallactic angle from which the distance to a point could be deduced. Jesse Ramsden introduced the first precision theodolite in 1787, an instrument for measuring angles in the horizontal and vertical planes. He created his great theodolite using an accurate dividing engine of his own design, and it represented a great step forward in the instrument's accuracy.

Dutch mathematician Willebrord Snellius (a.k.a. Snel van Royen) introduced the modern systematic use of triangulation. In 1615 he surveyed the distance from Alkmaar to Breda, approximately 72 miles (116 km). He underestimated this distance by 3.5%. The survey was a chain of quadrangles containing 33 triangles in all. Snell showed how planar formulae could be corrected to allow for the curvature of the earth. He also showed how to resect, or calculate, the position of a point inside a triangle using the angles cast between the vertices at the unknown point. These could be measured more accurately than bearings of the vertices, which depended on a compass. His work established the idea of surveying a primary network of control points, and locating subsidiary points inside the primary network later. Between 1733 and 1740, Jacques Cassini and his son César undertook the first triangulation of France. They included a re-surveying of the meridian arc, leading to the publication in 1745 of the first map of France constructed on rigorous principles. By this time triangulation methods were well established for local map-making.

A map of India showing the Great Trigonometrical Survey, produced in 1870

It was only towards the end of the 18th century that detailed triangulation network surveys mapped whole countries. In 1784, a team from General William Roy's Ordnance Survey of Great Britain began the Principal Triangulation of Britain. The first Ramsden theodolite was built for this survey. The survey was finally completed in 1853. The Great Trigonometric Survey of India began in 1801. The Indian survey had an enormous scientific impact. It was responsible for one of the first accurate measurements of a section of an arc of longitude, and for measurements of the geodesic anomaly. It named and mapped Mount Everest and the other Himalayan peaks. Surveying became a professional occupation in high demand at the turn of the 19th century with the onset of the Industrial Revolution. The profession developed more accurate instruments to aid its work. Industrial infrastructure projects used surveyors to lay out canals, roads and rail.

In the US, the Land Ordinance of 1785 created the Public Land Survey System. It formed the basis for dividing the western territories into sections to allow the sale of land. The PLSS divided states into township grids which were further divided into sections and fractions of sections.

Napoleon Bonaparte founded continental Europe's first cadastre in 1808. This gathered data on the number of parcels of land, their value, land usage, and names. This system soon spread around Europe.

A railroad surveying party at Russel's Tank, Arizona in the 1860s

Robert Torrens introduced the Torrens system in South Australia in 1858. Torrens intended to simplify land transactions and provide reliable titles via a centralized register of land. The Torrens system was adopted in several other nations of the English-speaking world. Surveying became increasingly important with the arrival of railroads in the 1800s. Surveying was necessary so that railroads could plan technologically and financially viable routes.

20th century

A German engineer surveying during the First World War, 1918

At the beginning of the century surveyors had improved the older chains and ropes, but still faced the problem of accurately measuring long distances. Dr Trevor Lloyd Wadley developed the Tellurometer during the 1950s; it measures long distances using two microwave transmitter/receivers. During the late 1950s Geodimeter introduced electronic distance measurement (EDM) equipment. EDM units use a multifrequency phase shift of light waves to find a distance. These instruments removed the need for days or weeks of chain measurement by measuring between points kilometers apart in one go.

Advances in electronics allowed miniaturization of EDM. In the 1970s the first instruments combining angle and distance measurement appeared, becoming known as total stations. Manufacturers added more equipment by degrees, bringing improvements in accuracy and speed of measurement. Major advances include tilt compensators, data recorders, and on-board calculation programs.

The first satellite positioning system was the US Navy TRANSIT system. The first successful launch took place in 1960. The system's main purpose was to provide position information to Polaris missile submarines. Surveyors found they could use field receivers to determine the location of a point. Sparse satellite cover and large equipment made observations laborious and inaccurate. The main use was establishing benchmarks in remote locations.

The US Air Force launched the first prototype satellites of the Global Positioning System (GPS) in 1978. GPS used a larger constellation of satellites and improved signal transmission to provide more accuracy. Early GPS observations required several hours of observations by a static receiver to reach survey accuracy requirements. Recent improvements to both satellites and receivers allow Real Time Kinematic (RTK) surveying. RTK surveys get high-accuracy measurements by using a fixed base station and a second roving antenna. The position of the roving antenna can be tracked.

21st century

The theodolite, total station, and RTK GPS survey remain the primary methods in use.

Remote sensing and satellite imagery continue to improve and become cheaper, allowing more commonplace use. Prominent new technologies include three-dimensional (3D) scanning and use of lidar for topographical surveys. UAV technology along with photogrammetric image processing is also appearing.

Equipment

Hardware

Surveying equipment. Clockwise from upper left: optical theodolite, robotic total station, RTK GPS base station, optical level.

The main surveying instruments in use around the world are the theodolite, measuring tape, total station, 3D scanners, GPS/GNSS, level and rod. Most instruments screw onto a tripod when in use. Tape measures are often used for measurement of smaller distances. 3D scanners and various forms of aerial imagery are also used.

The theodolite is an instrument for the measurement of angles. It uses two separate circles, protractors or alidades to measure angles in the horizontal and the vertical plane. A telescope mounted on trunnions is aligned vertically with the target object. The whole upper section rotates for horizontal alignment. The vertical circle measures the angle that the telescope makes against the vertical, known as the zenith angle. The horizontal circle uses an upper and lower plate. When beginning the survey, the surveyor points the instrument in a known direction (bearing), and clamps the lower plate in place. The instrument can then rotate to measure the bearing to other objects. If no bearing is known or direct angle measurement is wanted, the instrument can be set to zero during the initial sight. It will then read the angle between the initial object, the theodolite itself, and the item that the telescope aligns with.

The gyrotheodolite is a form of theodolite that uses a gyroscope to orient itself in the absence of reference marks. It is used in underground applications.

The total station is a development of the theodolite with an electronic distance measurement device (EDM). A total station can be used for leveling when set to the horizontal plane. Since their introduction, total stations have shifted from optical-mechanical to fully electronic devices.

Modern top-of-the-line total stations no longer need a reflector or prism to return the light pulses used for distance measurements. They are fully robotic, and can even e-mail point data to a remote computer and connect to satellite positioning systems, such as the Global Positioning System. Real Time Kinematic GPS systems have significantly increased the speed of surveying; they are now horizontally accurate to within 1 cm ± 1 ppm in real time, while vertical accuracy is about half that, within 2 cm ± 2 ppm.

GPS surveying differs from other GPS uses in the equipment and methods used. Static GPS uses two receivers placed in position for a considerable length of time. The long span of time lets the receiver compare measurements as the satellites orbit. The changes as the satellites orbit also provide the measurement network with well conditioned geometry. This produces an accurate baseline that can be over 20 km long. RTK surveying uses one static antenna and one roving antenna. The static antenna tracks changes in the satellite positions and atmospheric conditions. The surveyor uses the roving antenna to measure the points needed for the survey. The two antennas use a radio link that allows the static antenna to send corrections to the roving antenna. The roving antenna then applies those corrections to the GPS signals it is receiving to calculate its own position. RTK surveying covers smaller distances than static methods. This is because divergent conditions further away from the base reduce accuracy.
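
The correction flow can be illustrated conceptually. The sketch below is a deliberate simplification: real RTK operates on carrier-phase observables rather than finished positions, and all coordinates and names here are invented for the example.

    # Conceptual sketch only: real RTK corrects carrier-phase observables,
    # not finished positions. All coordinates are metres and invented.
    def differential_correction(base_known, base_measured, rover_measured):
        # Error observed at the base: raw GPS position minus surveyed-in position.
        error = [m - k for m, k in zip(base_measured, base_known)]
        # A nearby rover sees nearly the same error, so subtract it.
        return [r - e for r, e in zip(rover_measured, error)]

    base_known = (1000.000, 5000.000, 50.000)     # surveyed-in base position
    base_measured = (1000.850, 5000.620, 51.200)  # base position from raw GPS
    rover_raw = (1203.870, 5127.410, 48.900)      # rover position from raw GPS
    print(differential_correction(base_known, base_measured, rover_raw))
    # about [1203.02, 5126.79, 47.70]: the shared error largely cancels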

Surveying instruments have characteristics that make them suitable for certain uses. Theodolites and levels are often used by constructors rather than surveyors in first world countries. The constructor can perform simple survey tasks using a relatively cheap instrument. Total stations are workhorses for many professional surveyors because they are versatile and reliable in all conditions. The productivity improvements from GPS on large-scale surveys make it popular for major infrastructure or data-gathering projects. One-person robotic-guided total stations allow surveyors to measure without extra workers to aim the telescope or record data. A fast but expensive way to measure large areas is with a helicopter, using a GPS to record the location of the helicopter and a laser scanner to measure the ground. To increase precision, surveyors place beacons on the ground (about 20 km (12 mi) apart). This method reaches precisions between 5 and 40 cm (depending on flight height).

Surveyors use ancillary equipment such as tripods and instrument stands; staves and beacons used for sighting purposes; PPE; vegetation clearing equipment; digging implements for finding survey markers buried over time; hammers for placements of markers in various surfaces and structures; and portable radios for communication over long lines of sight.

Software

Land surveyors, construction professionals, and civil engineers using total stations, GPS receivers, 3D scanners, and other data collectors use land surveying software to increase efficiency, accuracy, and productivity. Such software is a staple of contemporary land surveying.

Typically, much if not all of the drafting, and some of the design, for plans and plats of the surveyed property is done by the surveyor. Nearly everyone working in drafting today (2021) uses CAD software and hardware, both on PCs and, increasingly, in newer-generation field data collectors. Other computer platforms and tools commonly used by surveyors are offered online by the U.S. Federal Government and other governments' survey agencies, such as the National Geodetic Survey and the CORS network, to obtain automated corrections and conversions for collected GPS data and the coordinate systems themselves.

Techniques

A standard Brunton Geo compass, still used commonly today by geographers, geologists and surveyors for field-based measurements

Surveyors determine the position of objects by measuring angles and distances. The factors that can affect the accuracy of their observations are also measured. They then use this data to create vectors, bearings, coordinates, elevations, areas, volumes, plans and maps. Measurements are often split into horizontal and vertical components to simplify calculation. GPS and astronomic measurements also need measurement of a time component.
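
As a minimal sketch of the horizontal component, a bearing and distance observation can be converted into coordinate differences, the standard polar-to-rectangular computation (the numbers below are illustrative):

    import math

    def polar_to_delta(bearing_deg, distance):
        # Bearing is measured in degrees clockwise from north.
        b = math.radians(bearing_deg)
        return distance * math.sin(b), distance * math.cos(b)

    # A 100 m line on a bearing of 30 degrees runs 50 m east and 86.6 m north.
    dE, dN = polar_to_delta(30.0, 100.0)
    print(f"dE = {dE:.3f} m, dN = {dN:.3f} m")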

Distance measurement

Example of modern equipment for surveying (Field-Map technology): a GPS receiver, laser rangefinder and field computer allow surveying as well as cartography (creation of maps in real time) and field data collection.

Before EDM (electronic distance measurement) laser devices, distances were measured using a variety of means. These included chains with links of a known length, such as a Gunter's chain, or measuring tapes made of steel or invar. To measure horizontal distances, these chains or tapes were pulled taut to reduce sagging and slack. The distance had to be adjusted for heat expansion. Attempts to hold the measuring instrument level would also be made. When measuring up a slope, the surveyor might have to "break" (break chain) the measurement: use an increment less than the total length of the chain. Perambulators, or measuring wheels, were used to measure longer distances but not to a high level of accuracy. Tacheometry is the science of measuring distances by measuring the angle between two ends of an object with a known size. It was sometimes used before the invention of EDM where rough ground made chain measurement impractical.
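
The tacheometric relationship is plain trigonometry: a target of known length s that subtends an angle θ at the instrument lies at a distance of roughly s / (2 tan(θ/2)). A small sketch with invented numbers:

    import math

    def tacheometric_distance(target_size, subtended_angle_deg):
        # Distance to a target of known size from the angle it subtends.
        half_angle = math.radians(subtended_angle_deg) / 2.0
        return target_size / (2.0 * math.tan(half_angle))

    # A 2 m staff subtending 0.5 degrees is about 229 m away.
    print(f"{tacheometric_distance(2.0, 0.5):.1f} m")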

Angle measurement

Historically, horizontal angles were measured by using a compass to provide a magnetic bearing or azimuth. Later, more precise scribed discs improved angular resolution. Mounting telescopes with reticles atop the disc allowed more precise sighting (see theodolite). Levels and calibrated circles allowed the measurement of vertical angles. Verniers allowed measurement to a fraction of a degree, such as with a turn-of-the-century transit.

The plane table provided a graphical method of recording and measuring angles, which reduced the amount of mathematics required. In 1829 Francis Ronalds invented a reflecting instrument for recording angles graphically by modifying the octant.

By observing the bearing from every vertex in a figure, a surveyor can measure around the figure. The final observation will be between the two points first observed, except with a 180° difference. This is called a close. If the first and last bearings are different, this shows the error in the survey, called the angular misclose. The surveyor can use this information to prove that the work meets the expected standards.
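
The check itself is simple arithmetic: the interior angles of a closed n-sided figure must total (n − 2) × 180°, and any surplus or deficit is the angular misclose. A minimal sketch, with invented observations:

    def angular_misclose(interior_angles_deg):
        # Measured sum minus the theoretical (n - 2) * 180 degrees.
        n = len(interior_angles_deg)
        return sum(interior_angles_deg) - (n - 2) * 180.0

    # Four measured interior angles of a closed figure (decimal degrees).
    angles = [89.9980, 90.0025, 90.0020, 90.0031]
    print(f"misclose = {angular_misclose(angles) * 3600:.1f} arc-seconds")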

Levelling

Center for Operational Oceanographic Products and Services staff member conducts tide station leveling in support of the US Army Corps of Engineers in Richmond, Maine.

The simplest method for measuring height is with an altimeter, using air pressure to find the height. When more precise measurements are needed, means like precise levelling (also known as differential levelling) are used. In precise levelling, a series of measurements between two points is taken using an instrument and a measuring rod. Differences in height between the measurements are added and subtracted in a series to get the net difference in elevation between the two endpoints. With the Global Positioning System (GPS), elevation can be measured with satellite receivers. Usually, GPS is somewhat less accurate than traditional precise levelling, but may be similar over long distances.

When using an optical level, the endpoint may be out of the effective range of the instrument, or there may be obstructions or large changes of elevation between the endpoints. In these situations, extra setups are needed. Turning refers to moving the level to take an elevation shot from a different location. To "turn" the level, one must first take a reading and record the elevation of the point the rod is located on. While the rod is kept in exactly the same location, the level is moved to a new location where the rod is still visible. A reading is taken from the new location of the level, and the height difference is used to find the new elevation of the level, which is why this method is referred to as differential levelling. This is repeated until the series of measurements is completed. The level must be horizontal to get a valid measurement. Because of this, if the horizontal crosshair of the instrument is lower than the base of the rod, the surveyor will not be able to sight the rod and get a reading. The rod can usually be raised up to 25 feet (7.6 m) high, allowing the level to be set much higher than the base of the rod.
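
The bookkeeping of a level run reduces to repeated addition and subtraction. The sketch below assumes the usual backsight/foresight convention, with invented rod readings:

    def run_levels(start_elevation, shots):
        # shots: (backsight, foresight) rod readings, in metres, per setup.
        elevation = start_elevation
        for backsight, foresight in shots:
            height_of_instrument = elevation + backsight  # sight back to known point
            elevation = height_of_instrument - foresight  # sight forward to new point
        return elevation

    # Two turns carried from a benchmark at 100.000 m.
    print(run_levels(100.000, [(1.500, 0.800), (2.100, 1.250)]))  # 101.550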

Determining position

The primary way of determining one's position on the earth's surface when no known positions are nearby is by astronomic observations. Observations to the sun, moon and stars could all be made using navigational techniques. Once the instrument's position and bearing to a star is determined, the bearing can be transferred to a reference point on the earth. The point can then be used as a base for further observations. Survey-accurate astronomic positions were difficult to observe and calculate and so tended to be a base off which many other measurements were made. Since the advent of the GPS system, astronomic observations are rare as GPS allows adequate positions to be determined over most of the surface of the earth.

Reference networks

A survey using traverse and offset measurements to record the location of the shoreline shown in blue. Black dashed lines are traverse measurements between reference points (black circles). The red lines are offsets measured at right angles to the traverse lines.

Few survey positions are derived from first principles. Instead, most survey points are measured relative to previously measured points. This forms a reference or control network where each point can be used by a surveyor to determine their own position when beginning a new survey.

Survey points are usually marked on the earth's surface by objects ranging from small nails driven into the ground to large beacons that can be seen from long distances. The surveyors can set up their instruments in this position and measure to nearby objects. Sometimes a tall, distinctive feature such as a steeple or radio aerial has its position calculated as a reference point that angles can be measured against.

Triangulation is a method of horizontal location favoured in the days before EDM and GPS measurement. It can determine distances, elevations and directions between distant objects. Since the early days of surveying, this was the primary method of determining accurate positions of objects for topographic maps of large areas. A surveyor first needs to know the horizontal distance between two of the objects, known as the baseline. Then the heights, distances and angular position of other objects can be derived, as long as they are visible from one of the original objects. High-accuracy transits or theodolites were used, and angle measurements were repeated for increased accuracy. See also Triangulation in three dimensions.
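
The underlying computation is the sine rule: given the baseline length and the angles observed at its two ends toward a third point, the distances from each end to that point follow directly. A sketch with illustrative values:

    import math

    def triangulate(baseline, angle_a_deg, angle_b_deg):
        # Angles measured at ends A and B between the baseline and point P.
        a = math.radians(angle_a_deg)
        b = math.radians(angle_b_deg)
        p = math.pi - a - b  # remaining angle at P
        # Sine rule: each side is proportional to the sine of the opposite angle.
        return (baseline * math.sin(b) / math.sin(p),
                baseline * math.sin(a) / math.sin(p))

    # A 500 m baseline with angles of 58.2 and 61.7 degrees at its ends.
    ap, bp = triangulate(500.0, 58.2, 61.7)
    print(f"A-P = {ap:.1f} m, B-P = {bp:.1f} m")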

Offsetting is an alternate method of determining the position of objects, and was often used to measure imprecise features such as riverbanks. The surveyor would mark and measure two known positions on the ground roughly parallel to the feature, and mark out a baseline between them. At regular intervals, a distance was measured at right angles from the first line to the feature. The measurements could then be plotted on a plan or map, and the points at the ends of the offset lines could be joined to show the feature.

Traversing is a common method of surveying smaller areas. The surveyor starts from an old reference mark or known position and places a network of reference marks covering the survey area. They then measure bearings and distances between the reference marks, and to the target features. Most traverses form a loop pattern or link between two prior reference marks so the surveyor can check their measurements.

Datum and coordinate systems

Many surveys do not calculate positions on the surface of the earth, but instead, measure the relative positions of objects. However, often the surveyed items need to be compared to outside data, such as boundary lines or previous survey's objects. The oldest way of describing a position is via latitude and longitude, and often a height above sea level. As the surveying profession grew it created Cartesian coordinate systems to simplify the mathematics for surveys over small parts of the earth. The simplest coordinate systems assume that the earth is flat and measure from an arbitrary point, known as a 'datum' (singular form of data). The coordinate system allows easy calculation of the distances and direction between objects over small areas. Large areas distort due to the earth's curvature. North is often defined as true north at the datum.

For larger regions, it is necessary to model the shape of the earth using an ellipsoid or a geoid. Many countries have created coordinate-grids customized to lessen error in their area of the earth.

Errors and accuracy

A basic tenet of surveying is that no measurement is perfect, and that there will always be a small amount of error. There are three classes of survey errors:

  • Gross errors or blunders: Errors made by the surveyor during the survey. Upsetting the instrument, misaiming a target, or writing down a wrong measurement are all gross errors. A large gross error may reduce the accuracy to an unacceptable level. Therefore, surveyors use redundant measurements and independent checks to detect these errors early in the survey.
  • Systematic: Errors that follow a consistent pattern. Examples include effects of temperature on a chain or EDM measurement, or a poorly adjusted spirit-level causing a tilted instrument or target pole. Systematic errors that have known effects can be compensated or corrected.
  • Random: Random errors are small unavoidable fluctuations. They are caused by imperfections in measuring equipment, eyesight, and conditions. They can be minimized by redundancy of measurement and avoiding unstable conditions. Random errors tend to cancel each other out, but checks must be made to ensure they are not propagating from one measurement to the next.

Surveyors avoid these errors by calibrating their equipment, using consistent methods, and by good design of their reference network. Repeated measurements can be averaged and any outlier measurements discarded. Independent checks like measuring a point from two or more locations or using two different methods are used, and errors can be detected by comparing the results of two or more measurements, thus utilizing redundancy.

Once the surveyor has calculated the level of the errors in the work, the survey is adjusted. This is the process of distributing the error between all measurements. Each observation is weighted according to how much of the total error it is likely to have caused, and part of that error is allocated to it in a proportional way. The most common methods of adjustment are the Bowditch method, also known as the compass rule, and the method of least squares.
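
A minimal sketch of the compass (Bowditch) rule, which distributes the misclose over the legs of a loop traverse in proportion to their lengths (a simplified planar version with invented figures):

    import math

    def bowditch_adjust(legs):
        # legs: (delta_easting, delta_northing) per leg of a loop traverse.
        misclose_e = sum(de for de, dn in legs)  # zero for a perfect loop
        misclose_n = sum(dn for de, dn in legs)
        total = sum(math.hypot(de, dn) for de, dn in legs)
        # Each leg absorbs a share of the misclose proportional to its length.
        return [(de - math.hypot(de, dn) / total * misclose_e,
                 dn - math.hypot(de, dn) / total * misclose_n)
                for de, dn in legs]

    # A loop that fails to close by (0.06, -0.04) metres.
    legs = [(100.00, 0.00), (0.03, 100.00), (-100.00, 0.02), (0.03, -100.06)]
    adjusted = bowditch_adjust(legs)
    print(sum(de for de, dn in adjusted), sum(dn for de, dn in adjusted))  # ~0, ~0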

The surveyor must be able to distinguish between accuracy and precision. In the United States, surveyors and civil engineers use units of feet wherein a survey foot breaks down into tenths and hundredths. Distances in deed descriptions are often expressed using these units (e.g., 125.25 ft). On the subject of accuracy, surveyors are often held to a standard of one one-hundredth of a foot, about 1/8 inch. Calculation and mapping tolerances are much smaller, since near-perfect closures are desired. Though tolerances vary from project to project, in the field and in day-to-day usage precision beyond a hundredth of a foot is often impractical.

Types

Local organisations or regulatory bodies class specializations of surveying in different ways. Broad groups are:

  • As-built survey: a survey that documents the location of recently constructed elements of a construction project. As-built surveys are done for record, completion evaluation, and payment purposes. An as-built survey is also known as a 'works as executed survey'. As-built surveys are often presented in red or redline and laid over existing plans for comparison with design information.
  • Cadastral or boundary surveying: a survey that establishes or re-establishes boundaries of a parcel using a legal description. It involves the setting or restoration of monuments or markers at the corners or along the lines of the parcel. These take the form of iron rods, pipes, or concrete monuments in the ground, or nails set in concrete or asphalt. The ALTA/ACSM Land Title Survey is a standard proposed by the American Land Title Association and the American Congress on Surveying and Mapping. It incorporates elements of the boundary survey, mortgage survey, and topographic survey.
  • Control surveying: Control surveys establish reference points to use as starting positions for future surveys. Most other forms of surveying will contain elements of control surveying.
  • Construction surveying and engineering surveying: topographic, layout, and as-built surveys associated with engineering design. They often need geodetic computations beyond normal civil engineering practice.
  • Deformation survey: a survey to determine if a structure or object is changing shape or moving. First the positions of points on an object are found. A period of time is allowed to pass and the positions are then re-measured and calculated. Then a comparison between the two sets of positions is made.
  • Dimensional control survey: a type of survey conducted in or on a non-level surface. It is common in the oil and gas industry, where old or damaged pipes are replaced on a like-for-like basis; the advantage of a dimensional control survey is that the instrument used to conduct the survey does not need to be level. This is useful in the offshore industry, as not all platforms are fixed and some are subject to movement.
  • Foundation survey: a survey done to collect the positional data on a foundation that has been poured and is cured. This is done to ensure that the foundation was constructed in the location, and at the elevation, authorized in the plot plan, site plan, or subdivision plan.
  • Hydrographic survey: a survey conducted with the purpose of mapping the shoreline and bed of a body of water. Used for navigation, engineering, or resource management purposes.
  • Leveling: either finds the elevation of a given point or establishes a point at a given elevation.
  • LOMA survey: a survey to change the base flood line, removing property from a special flood hazard area (SFHA).
  • Measured survey: a building survey to produce plans of the building. Such a survey may be conducted before renovation works, for commercial purposes, or at the end of the construction process.
  • Mining surveying: Mining surveying includes directing the digging of mine shafts and galleries and the calculation of volume of rock. It uses specialised techniques due to the constraints on survey geometry, such as vertical shafts and narrow passages.
  • Mortgage survey: A mortgage survey or physical survey is a simple survey that delineates land boundaries and building locations. It checks for encroachment, building setback restrictions and shows nearby flood zones. In many places a mortgage survey is a precondition for a mortgage loan.
  • Photographic control survey: A survey that creates reference marks visible from the air to allow aerial photographs to be rectified.
  • Stakeout, layout or setout: an element of many other surveys where the calculated or proposed position of an object is marked on the ground. This can be temporary or permanent. This is an important component of engineering and cadastral surveying.
  • Structural survey: a detailed inspection to report upon the physical condition and structural stability of a building or structure. It highlights any work needed to maintain it in good repair.
  • Subdivision: A boundary survey that splits a property into two or more smaller properties.
  • Topographic survey: a survey that measures the elevation of points on a particular piece of land, and presents them as contour lines on a plot.
  • Existing conditions survey: similar to a topographic survey, but focused on the specific locations of key features and structures as they exist at the time within the surveyed area, rather than primarily on elevation. It is often used alongside architectural drawings and blueprints to locate or place building structures.
  • Underwater survey: a survey of an underwater site, object, or area.

Plane vs. geodetic surveying

Based on whether the curvature of the earth is taken into account, surveying is broadly classified into two types.

Plane surveying assumes the earth is flat; its curvature and spheroidal shape are neglected. In this type of surveying all triangles formed by joining survey lines are considered plane triangles. It is employed for small survey works where errors due to the earth's shape are too small to matter.

In geodetic surveying the curvature of the earth is taken into account while calculating reduced levels, angles, bearings and distances. This type of surveying is usually employed for large survey works. Survey works up to 100 square miles (260 square kilometres) are treated as plane, and beyond that as geodetic. In geodetic surveying the necessary corrections are applied to reduced levels, bearings and other observations.
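
The threshold makes sense when the size of the neglected effect is estimated: the earth's surface falls below a tangent plane by roughly d²/2R over a distance d, with R ≈ 6371 km. A quick sketch of that estimate:

    EARTH_RADIUS_M = 6_371_000  # mean radius of the earth

    def curvature_drop(distance_m):
        # Approximate fall of the earth's surface below a tangent plane.
        return distance_m ** 2 / (2 * EARTH_RADIUS_M)

    for d in (100, 1_000, 10_000):
        print(f"{d:>6} m sight: drop = {curvature_drop(d) * 1000:.1f} mm")
    # 100 m: ~0.8 mm;  1 km: ~78 mm;  10 km: ~7.8 m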

On the basis of the instrument used

  • Chain surveying
  • Compass surveying
  • Plane table surveying
  • Levelling
  • Theodolite surveying
  • Traverse surveying
  • Tacheometric surveying
  • Aerial surveying

Profession

The pundit cartographer Nain Singh Rawat (19th century) received a Royal Geographical Society gold medal in 1876 for his efforts in exploring the Himalayas for the British

An all-female surveying crew in Idaho, 1918

The basic principles of surveying have changed little over the ages, but the tools used by surveyors have evolved. Engineering, especially civil engineering, often needs surveyors.

Surveyors help determine the placement of roads, railways, reservoirs, dams, pipelines, retaining walls, bridges, and buildings. They establish the boundaries of legal descriptions and political divisions. They also provide advice and data for geographical information systems (GIS) that record land features and boundaries.

Surveyors must have a thorough knowledge of algebra, basic calculus, geometry, and trigonometry. They must also know the laws that deal with surveys, real property, and contracts.

Most jurisdictions recognize three different levels of qualification:

  1. Survey assistants or chainmen are usually unskilled workers who help the surveyor. They place target reflectors, find old reference marks, and mark points on the ground. The term 'chainman' derives from past use of measuring chains. An assistant would move the far end of the chain under the surveyor's direction.
  2. Survey technicians often operate survey instruments, run surveys in the field, do survey calculations, or draft plans. A technician usually has no legal authority and cannot certify their work. Not all technicians are qualified, but qualifications at the certificate or diploma level are available.
  3. Licensed, registered, or chartered surveyors usually hold a degree or higher qualification. They are often required to pass further exams to join a professional association or to gain certifying status. Surveyors are responsible for planning and management of surveys. They have to ensure that their surveys, or surveys performed under their supervision, meet the legal standards. Many principals of surveying firms hold this status.

Related professions include cartographers, hydrographers, geodesists, photogrammetrists, and topographers, as well as civil engineers and geomatics engineers.

Licensing

Licensing requirements vary with jurisdiction, but are commonly consistent within national borders. Prospective surveyors usually have to receive a degree in surveying, pass a detailed examination of their knowledge of surveying law and the principles specific to the region in which they wish to practice, and undergo a period of on-the-job training or portfolio building before they are awarded a license to practice. Licensed surveyors usually receive a post-nominal, which varies depending on where they qualified. The system has replaced older apprenticeship systems.

A licensed land surveyor is generally required to sign and seal all plans. The state dictates the format, showing their name and registration number.

In many jurisdictions, surveyors must mark their registration number on survey monuments when setting boundary corners. Monuments take the form of capped iron rods, concrete monuments, or nails with washers.

Surveying institutions

Surveying students with their professor at the Helsinki University of Technology in the late 19th century

Most countries' governments regulate at least some forms of surveying. Their survey agencies establish regulations and standards. Standards control accuracy, surveying credentials, monumentation of boundaries and maintenance of geodetic networks. Many nations devolve this authority to regional entities or states/provinces. Cadastral surveys tend to be the most regulated because of the permanence of the work. Lot boundaries established by cadastral surveys may stand for hundreds of years without modification.

Most jurisdictions also have a form of professional institution representing local surveyors. These institutes often endorse or license potential surveyors, as well as set and enforce ethical standards. The largest institution is the International Federation of Surveyors (abbreviated FIG, from the French Fédération Internationale des Géomètres), which represents the survey industry worldwide.

Building surveying

Most English-speaking countries consider building surveying a distinct profession, with its own professional associations and licensing requirements. A building surveyor can provide technical advice on existing buildings, new buildings, design, and compliance with regulations such as planning and building control. A building surveyor normally acts on behalf of his or her client, ensuring that the client's vested interests remain protected. The Royal Institution of Chartered Surveyors (RICS) is a world-recognised governing body for those working within the built environment.

Cadastral surveying

One of the primary roles of the land surveyor is to determine the boundary of real property on the ground. The surveyor must determine where the adjoining landowners wish to put the boundary. The boundary is established in legal documents and plans prepared by attorneys, engineers, and land surveyors. The surveyor then puts monuments on the corners of the new boundary. They might also find or resurvey the corners of the property monumented by prior surveys.

Cadastral land surveyors are licensed by governments. In the United States, the cadastral survey branch of the Bureau of Land Management (BLM) conducts most cadastral surveys. They consult with the Forest Service, the National Park Service, the Army Corps of Engineers, the Bureau of Indian Affairs, the Fish and Wildlife Service, the Bureau of Reclamation, and others. The BLM was formerly known as the General Land Office (GLO).

In states organized per the Public Land Survey System (PLSS), surveyors must carry out BLM cadastral surveys under that system.

Cadastral surveyors often have to work around changes to the earth that obliterate or damage boundary monuments. When this happens, they must consider evidence that is not recorded on the title deed. This is known as extrinsic evidence.

Noteworthy surveyors

Three of the four U.S. Presidents on Mount Rushmore were land surveyors: George Washington, Thomas Jefferson, and Abraham Lincoln surveyed colonial or frontier territories prior to taking office.

David T. Abercrombie practiced land surveying before starting an outfitter store for excursion goods. The business later became the lifestyle clothing store Abercrombie & Fitch.

Percy Harrison Fawcett was a British surveyor who explored the jungles of South America attempting to find the Lost City of Z. His biography and expeditions were recounted in the book The Lost City of Z, later adapted into a film.

Inō Tadataka produced the first map of Japan using modern surveying techniques starting in 1800, at the age of 55.

DNA microarray


A DNA microarray (also commonly known as a DNA chip or biochip) is a collection of microscopic DNA spots attached to a solid surface. Scientists use DNA microarrays to measure the expression levels of large numbers of genes simultaneously or to genotype multiple regions of a genome. Each DNA spot contains picomoles (10⁻¹² moles) of a specific DNA sequence, known as a probe (or reporter or oligo). A probe can be a short section of a gene or other DNA element that is used to hybridize a cDNA or cRNA (also called anti-sense RNA) sample (called the target) under high-stringency conditions. Probe-target hybridization is usually detected and quantified by detection of fluorophore-, silver-, or chemiluminescence-labeled targets to determine the relative abundance of nucleic acid sequences in the target. The original nucleic acid arrays were macro arrays approximately 9 cm × 12 cm, and the first computerized image-based analysis was published in 1981. The DNA microarray was invented by Patrick O. Brown. Example applications include SNP arrays for polymorphisms in cardiovascular diseases, cancer, pathogens and GWAS analysis, as well as the identification of structural variations and the measurement of gene expression.

Principle

Hybridization of the target to the probe

The core principle behind microarrays is hybridization between two DNA strands, the property of complementary nucleic acid sequences to specifically pair with each other by forming hydrogen bonds between complementary nucleotide base pairs. A high number of complementary base pairs in a nucleotide sequence means tighter non-covalent bonding between the two strands. After washing off non-specific bonding sequences, only strongly paired strands will remain hybridized. Fluorescently labeled target sequences that bind to a probe sequence generate a signal that depends on the hybridization conditions (such as temperature) and on washing after hybridization. The total strength of the signal from a spot (feature) depends upon the amount of target sample binding to the probes present on that spot. Microarrays use relative quantitation, in which the intensity of a feature is compared to the intensity of the same feature under a different condition, and the identity of the feature is known by its position.

The steps required in a microarray experiment

Uses and types

Two Affymetrix chips. A match is shown at bottom left for size comparison.

Many types of arrays exist and the broadest distinction is whether they are spatially arranged on a surface or on coded beads:

  • The traditional solid-phase array is a collection of orderly microscopic "spots", called features, each with thousands of identical and specific probes attached to a solid surface, such as glass, plastic or silicon biochip (commonly known as a genome chip, DNA chip or gene array). Thousands of these features can be placed in known locations on a single DNA microarray.
  • The alternative bead array is a collection of microscopic polystyrene beads, each with a specific probe and a ratio of two or more dyes, which do not interfere with the fluorescent dyes used on the target sequence.

DNA microarrays can be used to detect DNA (as in comparative genomic hybridization), or detect RNA (most commonly as cDNA after reverse transcription) that may or may not be translated into proteins. The process of measuring gene expression via cDNA is called expression analysis or expression profiling.

Applications include:

  • Gene expression profiling: In an mRNA or gene expression profiling experiment, the expression levels of thousands of genes are simultaneously monitored to study the effects of certain treatments, diseases, and developmental stages on gene expression. For example, microarray-based gene expression profiling can be used to identify genes whose expression is changed in response to pathogens or other organisms by comparing gene expression in infected cells or tissues to that in uninfected ones.
  • Comparative genomic hybridization: Assessing genome content in different cells or closely related organisms, as originally described by Patrick Brown, Jonathan Pollack, Ash Alizadeh and colleagues at Stanford.
  • GeneID: Small microarrays to check IDs of organisms in food and feed (like GMOs), mycoplasmas in cell culture, or pathogens for disease detection, mostly combining PCR and microarray technology.
  • Chromatin immunoprecipitation on chip: DNA sequences bound to a particular protein can be isolated by immunoprecipitating that protein (ChIP); these fragments can then be hybridized to a microarray (such as a tiling array), allowing the determination of protein binding site occupancy throughout the genome. Example proteins to immunoprecipitate are histone modifications (H3K27me3, H3K4me2, H3K9me3, etc.), Polycomb-group proteins (PRC2: Suz12, PRC1: YY1) and trithorax-group proteins (Ash1) to study the epigenetic landscape, or RNA polymerase II to study the transcription landscape.
  • DamID: Analogously to ChIP, genomic regions bound by a protein of interest can be isolated and used to probe a microarray to determine binding site occupancy. Unlike ChIP, DamID does not require antibodies but makes use of adenine methylation near the protein's binding sites to selectively amplify those regions, introduced by expressing minute amounts of the protein of interest fused to bacterial DNA adenine methyltransferase.
  • SNP detection: Identifying single nucleotide polymorphisms among alleles within or between populations. Several applications of microarrays make use of SNP detection, including genotyping, forensic analysis, measuring predisposition to disease, identifying drug candidates, evaluating germline mutations in individuals or somatic mutations in cancers, assessing loss of heterozygosity, or genetic linkage analysis.
  • Alternative splicing detection: An exon junction array design uses probes specific to the expected or potential splice sites of predicted exons for a gene. It is of intermediate density, or coverage, between a typical gene expression array (with 1–3 probes per gene) and a genomic tiling array (with hundreds or thousands of probes per gene). It is used to assay the expression of alternative splice forms of a gene. Exon arrays have a different design, employing probes designed to detect each individual exon for known or predicted genes, and can be used for detecting different splicing isoforms.
  • Fusion gene microarray: A fusion gene microarray can detect fusion transcripts, e.g. from cancer specimens. The principle builds on the alternative splicing microarrays: the oligo design strategy enables combined measurements of chimeric transcript junctions with exon-wise measurements of individual fusion partners.
  • Tiling array: Genome tiling arrays consist of overlapping probes designed to densely represent a genomic region of interest, sometimes as large as an entire human chromosome. The purpose is to empirically detect expression of transcripts or alternatively spliced forms which may not have been previously known or predicted.
  • Double-stranded B-DNA microarrays: Right-handed double-stranded B-DNA microarrays can be used to characterize novel drugs and biologicals that can be employed to bind specific regions of immobilized, intact, double-stranded DNA. This approach can be used to inhibit gene expression. The arrays also allow for characterization of the DNA's structure under different environmental conditions.
  • Double-stranded Z-DNA microarrays: Left-handed double-stranded Z-DNA microarrays can be used to identify short sequences of the alternative Z-DNA structure located within longer stretches of right-handed B-DNA genes (e.g., sites of transcriptional enhancement, recombination, RNA editing). The microarrays also allow for characterization of the DNA's structure under different environmental conditions.
  • Multi-stranded DNA microarrays (triplex-DNA microarrays and quadruplex-DNA microarrays): Multi-stranded DNA and RNA microarrays can be used to identify novel drugs that bind to these multi-stranded nucleic acid sequences. This approach can be used to discover new drugs and biologicals that have the ability to inhibit gene expression. These microarrays also allow for characterization of their structure under different environmental conditions.

Specialised arrays tailored to particular crops are becoming increasingly popular in molecular breeding applications. In the future they could be used to screen seedlings at early stages to lower the number of unneeded seedlings tried out in breeding operations.

Fabrication

Microarrays can be manufactured in different ways, depending on the number of probes under examination, costs, customization requirements, and the type of scientific question being asked. Arrays from commercial vendors may have as few as 10 probes or as many as 5 million or more micrometre-scale probes.

Spotted vs. in situ synthesised arrays

Microarrays can be fabricated using a variety of technologies, including printing with fine-pointed pins onto glass slides, photolithography using pre-made masks, photolithography using dynamic micromirror devices, ink-jet printing, or electrochemistry on microelectrode arrays.

In spotted microarrays, the probes are oligonucleotides, cDNA or small fragments of PCR products that correspond to mRNAs. The probes are synthesized prior to deposition on the array surface and are then "spotted" onto glass. A common approach utilizes an array of fine pins or needles controlled by a robotic arm that is dipped into wells containing DNA probes and then deposits each probe at designated locations on the array surface. The resulting "grid" of probes represents the nucleic acid profiles of the prepared probes and is ready to receive complementary cDNA or cRNA "targets" derived from experimental or clinical samples. This technique is used by research scientists around the world to produce "in-house" printed microarrays in their own labs. These arrays may be easily customized for each experiment, because researchers can choose the probes and printing locations on the arrays, synthesize the probes in their own lab (or collaborating facility), and spot the arrays. They can then generate their own labeled samples for hybridization, hybridize the samples to the array, and finally scan the arrays with their own equipment. This provides a relatively low-cost microarray that may be customized for each study, and avoids the costs of purchasing often more expensive commercial arrays that may represent vast numbers of genes that are not of interest to the investigator. Publications exist which indicate in-house spotted microarrays may not provide the same level of sensitivity as commercial oligonucleotide arrays, possibly owing to the small batch sizes and reduced printing efficiencies when compared to the industrial manufacture of oligo arrays.

In oligonucleotide microarrays, the probes are short sequences designed to match parts of the sequence of known or predicted open reading frames. Although oligonucleotide probes are often used in "spotted" microarrays, the term "oligonucleotide array" most often refers to a specific technique of manufacturing. Oligonucleotide arrays are produced by printing short oligonucleotide sequences designed to represent a single gene or family of gene splice-variants by synthesizing this sequence directly onto the array surface instead of depositing intact sequences. Sequences may be longer (60-mer probes such as the Agilent design) or shorter (25-mer probes produced by Affymetrix) depending on the desired purpose; longer probes are more specific to individual target genes, while shorter probes may be spotted in higher density across the array and are cheaper to manufacture. One technique used to produce oligonucleotide arrays is photolithographic synthesis (Affymetrix) on a silica substrate, where light and light-sensitive masking agents are used to "build" a sequence one nucleotide at a time across the entire array. Each applicable probe is selectively "unmasked" prior to bathing the array in a solution of a single nucleotide, then a masking reaction takes place and the next set of probes is unmasked in preparation for a different nucleotide exposure. After many repetitions, the sequences of every probe become fully constructed. More recently, Maskless Array Synthesis from NimbleGen Systems has combined flexibility with large numbers of probes.

Two-channel vs. one-channel detection

Diagram of typical dual-colour microarray experiment

Two-color microarrays or two-channel microarrays are typically hybridized with cDNA prepared from two samples to be compared (e.g. diseased tissue versus healthy tissue), labeled with two different fluorophores. Fluorescent dyes commonly used for cDNA labeling include Cy3, which has a fluorescence emission wavelength of 570 nm (corresponding to the green part of the light spectrum), and Cy5, with a fluorescence emission wavelength of 670 nm (corresponding to the red part of the light spectrum). The two Cy-labeled cDNA samples are mixed and hybridized to a single microarray that is then scanned in a microarray scanner to visualize fluorescence of the two fluorophores after excitation with a laser beam of a defined wavelength. Relative intensities of each fluorophore may then be used in ratio-based analysis to identify up-regulated and down-regulated genes.
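
Such ratio-based analysis is commonly carried out on log2-transformed intensities, so that up- and down-regulation are symmetric about zero. A minimal sketch, with invented intensities and an arbitrary two-fold cutoff:

    import math

    # Invented per-spot intensities; Cy5 = red channel, Cy3 = green channel.
    cy5 = [1200.0, 340.0, 870.0]
    cy3 = [300.0, 650.0, 860.0]

    for spot, (r, g) in enumerate(zip(cy5, cy3)):
        m = math.log2(r / g)
        # Arbitrary cutoff: |log2 ratio| >= 1 corresponds to a two-fold change.
        call = "up" if m >= 1 else "down" if m <= -1 else "unchanged"
        print(f"spot {spot}: log2 ratio = {m:+.2f} ({call})")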

Oligonucleotide microarrays often carry control probes designed to hybridize with RNA spike-ins. The degree of hybridization between the spike-ins and the control probes is used to normalize the hybridization measurements for the target probes. Although absolute levels of gene expression may be determined in the two-color array in rare instances, the relative differences in expression among different spots within a sample and between samples are the preferred method of data analysis for the two-color system. Examples of providers for such microarrays include Agilent with their Dual-Mode platform, Eppendorf with their DualChip platform for colorimetric Silverquant labeling, and TeleChem International with Arrayit.

In single-channel microarrays or one-color microarrays, the arrays provide intensity data for each probe or probe set indicating a relative level of hybridization with the labeled target. However, they do not truly indicate abundance levels of a gene but rather relative abundance when compared to other samples or conditions processed in the same experiment. Each RNA molecule encounters protocol- and batch-specific bias during the amplification, labeling, and hybridization phases of the experiment, making comparisons between genes for the same microarray uninformative. The comparison of two conditions for the same gene requires two separate single-dye hybridizations. Several popular single-channel systems are the Affymetrix "Gene Chip", Illumina "Bead Chip", Agilent single-channel arrays, the Applied Microarrays "CodeLink" arrays, and the Eppendorf "DualChip & Silverquant". One strength of the single-dye system lies in the fact that an aberrant sample cannot affect the raw data derived from other samples, because each array chip is exposed to only one sample (as opposed to a two-color system in which a single low-quality sample may drastically impinge on overall data precision even if the other sample was of high quality). Another benefit is that data are more easily compared to arrays from different experiments, as long as batch effects have been accounted for.

One-channel microarrays may be the only choice in some situations. Suppose many samples need to be compared pairwise: the number of hybridizations required with two-channel arrays quickly becomes unfeasible, unless one sample is used as a common reference. The table below lists the number of arrays each design requires, and a short sketch of the arithmetic follows it.

number of samples | one-channel microarray | two-channel microarray | two-channel microarray (with reference)
1 | 1 | 1 | 1
2 | 2 | 1 | 1
3 | 3 | 3 | 2
4 | 4 | 6 | 3
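
The arithmetic behind the table is simple combinatorics; the sketch below reproduces it (function names are illustrative). An all-pairs two-channel design needs one array per pair of samples, while a reference design hybridizes every remaining sample against one sample chosen as the reference.

```python
from math import comb

def one_channel(n: int) -> int:
    return n                            # one array per sample

def two_channel_all_pairs(n: int) -> int:
    return comb(n, 2) if n > 1 else 1   # one array per pair of samples

def two_channel_with_reference(n: int) -> int:
    return max(1, n - 1)                # each non-reference sample vs the reference

for n in range(1, 5):
    print(n, one_channel(n), two_channel_all_pairs(n), two_channel_with_reference(n))
# Prints the rows of the table above: (1,1,1,1), (2,2,1,1), (3,3,3,2), (4,4,6,3).
```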

A typical protocol

Examples of levels of application of microarrays. Within the organisms, genes are transcribed and spliced to produce mature mRNA transcripts (red). The mRNA is extracted from the organism and reverse transcriptase is used to copy the mRNA into stable ds-cDNA (blue). In microarrays, the ds-cDNA is fragmented and fluorescently labelled (orange). The labelled fragments bind to an ordered array of complementary oligonucleotides, and measurement of fluorescent intensity across the array indicates the abundance of a predetermined set of sequences. These sequences are typically specifically chosen to report on genes of interest within the organism's genome.

This is an example of a DNA microarray experiment, described in detail for a particular case to better explain how such experiments are carried out, with notes on modifications for RNA or other alternative experiments.

  1. The two samples to be compared (pairwise comparison) are grown or acquired; in this example, a treated sample (case) and an untreated sample (control).
  2. The nucleic acid of interest is purified: this can be RNA for expression profiling, DNA for comparative hybridization, or DNA/RNA bound to a particular protein which is immunoprecipitated (ChIP-on-chip) for epigenetic or regulation studies. In this example, total RNA (both nuclear and cytoplasmic) is isolated by guanidinium thiocyanate-phenol-chloroform extraction (e.g. Trizol), which isolates most RNA (whereas column methods have a cut-off of 200 nucleotides) and, if done correctly, yields higher purity.
  3. The purified RNA is analysed for quality (by capillary electrophoresis) and quantity (for example, by using a NanoDrop or NanoPhotometer spectrophotometer). If the material is of acceptable quality and sufficient quantity is present (e.g., >1 μg, although the required amount varies by microarray platform), the experiment can proceed.
  4. The labeled product is generated via reverse transcription, followed by an optional PCR amplification. The RNA is reverse transcribed with either polyT primers (which amplify only mRNA) or random primers (which amplify all RNA, most of which is rRNA). miRNA microarrays ligate an oligonucleotide to the purified small RNA (isolated with a fractionator), which is then reverse transcribed and amplified.
    • The label is added either during the reverse transcription step, or following amplification if it is performed. The sense labeling is dependent on the microarray; e.g. if the label is added with the RT mix, the cDNA is antisense and the microarray probe is sense, except in the case of negative controls.
    • The label is typically fluorescent; only one machine uses radiolabels.
    • The labeling can be direct (rarely used) or indirect (requires a coupling stage). For two-channel arrays, the coupling stage occurs before hybridization, using aminoallyl uridine triphosphate (aminoallyl-UTP, or aaUTP) and NHS amino-reactive dyes (such as cyanine dyes); for single-channel arrays, the coupling stage occurs after hybridization, using biotin and labeled streptavidin. The modified nucleotides (usually in a ratio of 1 aaUTP : 4 TTP (thymidine triphosphate)) are added enzymatically in a low ratio to normal nucleotides, typically resulting in 1 every 60 bases. The aaDNA is then purified with a column (using a phosphate buffer solution, as Tris contains amine groups). The aminoallyl group is an amine group on a long linker attached to the nucleobase, which reacts with a reactive dye.
      • A form of replicate known as a dye flip can be performed to control for dye artifacts in two-channel experiments; for a dye flip, a second slide is used, with the labels swapped (the sample that was labeled with Cy3 in the first slide is labeled with Cy5, and vice versa). In this example, aminoallyl-UTP is present in the reverse-transcribed mixture.
  5. The labeled samples are then mixed with a proprietary hybridization solution, which can consist of SDS, SSC, dextran sulfate, a blocking agent (such as Cot-1 DNA, salmon sperm DNA, calf thymus DNA, PolyA, or PolyT), Denhardt's solution, or formamide.
  6. The mixture is denatured and added to the pinholes of the microarray. The holes are sealed and the microarray hybridized, either in a hybridization oven, where the microarray is mixed by rotation, or in a mixer, where the microarray is mixed by alternating pressure at the pinholes.
  7. After an overnight hybridization, all nonspecific binding is washed off with buffers containing SDS and SSC.
  8. The microarray is dried and scanned by a machine that uses a laser to excite the dye and measures the emission levels with a detector.
  9. The image is gridded with a template, and the intensities of each feature (composed of several pixels) are quantified.
  10. The raw data are normalized; the simplest normalization method is to subtract background intensity and scale so that the total intensities of the features of the two channels are equal, or to use the intensity of a reference gene to calculate the t-value for all of the intensities (a minimal sketch of the simplest approach follows this list). More sophisticated methods include z-ratio, loess and lowess regression, and RMA (robust multichip analysis) for Affymetrix chips (single-channel, silicon chip, in situ synthesized short oligonucleotides).
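
As a minimal sketch of the simplest normalization named in step 10, the snippet below subtracts local background and then scales one channel so the total intensities of the two channels match; the intensity values are invented for illustration.

```python
import numpy as np

# Hypothetical per-spot foreground (fg) and local background (bg) intensities.
fg_cy3 = np.array([500.0, 900.0, 300.0]); bg_cy3 = np.array([50.0, 40.0, 60.0])
fg_cy5 = np.array([800.0, 700.0, 200.0]); bg_cy5 = np.array([70.0, 30.0, 50.0])

cy3 = np.clip(fg_cy3 - bg_cy3, 0.0, None)  # background-subtracted, floored at zero
cy5 = np.clip(fg_cy5 - bg_cy5, 0.0, None)

scale = cy3.sum() / cy5.sum()              # equalize total channel intensities
cy5_norm = cy5 * scale

print(np.log2(cy5_norm / cy3))             # normalized log ratios per spot
```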

Microarrays and bioinformatics

Gene expression values from microarray experiments can be represented as heat maps to visualize the result of data analysis.

The advent of inexpensive microarray experiments created several specific bioinformatics challenges: the multiple levels of replication in experimental design (Experimental design); the number of platforms and independent groups and data format (Standardization); the statistical treatment of the data (Data analysis); mapping each probe to the mRNA transcript that it measures (Annotation); the sheer volume of data and the ability to share it (Data warehousing).

Experimental design

Due to the biological complexity of gene expression, the considerations of experimental design that are discussed in the expression profiling article are of critical importance if statistically and biologically valid conclusions are to be drawn from the data.

There are three main elements to consider when designing a microarray experiment. First, replication of the biological samples is essential for drawing conclusions from the experiment. Second, technical replicates (e.g. two RNA samples obtained from each experimental unit) may help to quantify precision; biological replicates involve independent RNA extractions, whereas technical replicates may be two aliquots of the same extraction. Third, spots of each cDNA clone or oligonucleotide are present as replicates (at least duplicates) on the microarray slide, to provide a measure of technical precision in each hybridization. It is critical that information about sample preparation and handling is documented, in order to help identify the independent units in the experiment and to avoid inflated estimates of statistical significance.

Standardization

Microarray data are difficult to exchange due to the lack of standardization in platform fabrication, assay protocols, and analysis methods. This presents an interoperability problem in bioinformatics. Various grass-roots open-source projects are trying to ease the exchange and analysis of data produced with non-proprietary chips.

For example, the "Minimum Information About a Microarray Experiment" (MIAME) checklist helps define the level of detail that should exist and is being adopted by many journals as a requirement for the submission of papers incorporating microarray results. But MIAME does not describe the format for the information, so, while many formats can support the MIAME requirements, as of 2007 no format permitted verification of complete semantic compliance. The "MicroArray Quality Control (MAQC) Project" is being conducted by the US Food and Drug Administration (FDA) to develop standards and quality control metrics which will eventually allow the use of microarray data in drug discovery, clinical practice, and regulatory decision-making. The MGED Society has developed standards for the representation of gene expression experiment results and relevant annotations.

Data analysis

National Center for Toxicological Research scientist reviews microarray data

Microarray data sets are commonly very large, and analytical precision is influenced by a number of variables. Statistical challenges include taking into account effects of background noise and appropriate normalization of the data. Normalization methods may be suited to specific platforms and, in the case of commercial platforms, the analysis may be proprietary. Algorithms that affect statistical analysis include:

  • Image analysis: gridding, spot recognition of the scanned image (segmentation algorithm), removal or marking of poor-quality and low-intensity features (called flagging).
  • Data processing: background subtraction (based on global or local background), determination of spot intensities and intensity ratios, visualisation of data (e.g. see MA plot), log-transformation of ratios, global or local normalization of intensity ratios, and segmentation into different copy number regions using step detection algorithms.
  • Class discovery analysis: This analytic approach, sometimes called unsupervised classification or knowledge discovery, tries to identify whether microarrays (objects, patients, mice, etc.) or genes cluster together in groups. Identifying naturally existing groups of objects (microarrays or genes) which cluster together can enable the discovery of new groups that were not previously known to exist. During knowledge discovery analysis, various unsupervised classification techniques can be employed with DNA microarray data to identify novel clusters (classes) of arrays. This type of approach is not hypothesis-driven, but rather is based on iterative pattern recognition or statistical learning methods to find an "optimal" number of clusters in the data. Examples of unsupervised analysis methods include self-organizing maps, neural gas, k-means cluster analysis, hierarchical cluster analysis, Genomic Signal Processing based clustering, and model-based cluster analysis. For some of these methods the user also has to define a distance measure between pairs of objects; although the Pearson correlation coefficient is usually employed, several other measures have been proposed and evaluated in the literature (a minimal clustering sketch using this distance follows this list). The input data for class discovery analyses are commonly lists of genes having high informativeness (low noise), selected on the basis of low values of the coefficient of variation or high values of Shannon entropy, for example. The determination of the most likely or optimal number of clusters obtained from an unsupervised analysis is called cluster validity. Some commonly used metrics for cluster validity are the silhouette index, Davies–Bouldin index, Dunn's index, and Hubert's statistic.
  • Class prediction analysis: This approach, called supervised classification, establishes the basis for developing a predictive model into which future unknown test objects can be input in order to predict the most likely class membership of the test objects. Supervised analysis for class prediction involves use of techniques such as linear regression, k-nearest neighbor, learning vector quantization, decision tree analysis, random forests, naive Bayes, logistic regression, kernel regression, artificial neural networks, support vector machines, mixture of experts, and supervised neural gas. In addition, various metaheuristic methods are employed, such as genetic algorithms, covariance matrix self-adaptation, particle swarm optimization, and ant colony optimization. Input data for class prediction are usually based on filtered lists of genes which are predictive of class, determined using classical hypothesis tests (next section), Gini diversity index, or information gain (entropy).
  • Hypothesis-driven statistical analysis: Statistically significant changes in gene expression are commonly identified using t-tests, ANOVA, Bayesian methods, and Mann–Whitney tests, tailored to microarray data sets, which take into account multiple comparisons or cluster analysis. These methods assess statistical power based on the variation present in the data and the number of experimental replicates, and can help minimize Type I and Type II errors in the analyses.
  • Dimensional reduction: Analysts often reduce the number of dimensions (genes) prior to data analysis. This may involve linear approaches such as principal components analysis (PCA), or non-linear manifold learning (distance metric learning) using kernel PCA, diffusion maps, Laplacian eigenmaps, local linear embedding, locality preserving projections, and Sammon's mapping.
  • Network-based methods: Statistical methods that take the underlying structure of gene networks into account, representing either associative or causative interactions or dependencies among gene products. Weighted gene co-expression network analysis is widely used for identifying co-expression modules and intramodular hub genes. Modules may correspond to cell types or pathways. Highly connected intramodular hubs best represent their respective modules.
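
As a minimal class discovery sketch, the snippet below hierarchically clusters simulated gene-expression profiles using one minus the Pearson correlation as the distance measure, as mentioned in the class discovery item above; the data are simulated, and the two planted groups are recovered by cutting the tree into two clusters.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
pattern = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])  # profile across 6 arrays
group1 = pattern + rng.normal(0.0, 0.3, (20, 6))       # 20 genes following the pattern
group2 = -pattern + rng.normal(0.0, 0.3, (20, 6))      # 20 genes, inverse pattern
expr = np.vstack([group1, group2])                     # rows = genes, cols = arrays

dist = pdist(expr, metric="correlation")               # 1 - Pearson correlation
tree = linkage(dist, method="average")                 # average-linkage hierarchy
labels = fcluster(tree, t=2, criterion="maxclust")     # cut into two clusters
print(labels)                                          # first 20 vs last 20 genes
```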

Microarray data may require further processing aimed at reducing the dimensionality of the data to aid comprehension and more focused analysis (a minimal PCA sketch follows below). Other methods permit analysis of data consisting of a low number of biological or technical replicates; for example, the Local Pooled Error (LPE) test pools standard deviations of genes with similar expression levels in an effort to compensate for insufficient replication.
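
As a minimal dimensional-reduction sketch, the snippet below performs PCA via a singular value decomposition on a simulated expression matrix; the data and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
expr = rng.normal(size=(8, 500))           # rows = arrays (samples), cols = genes

centered = expr - expr.mean(axis=0)        # center each gene across samples
u, s, vt = np.linalg.svd(centered, full_matrices=False)

scores = u[:, :2] * s[:2]                  # sample coordinates on the first 2 PCs
explained = (s ** 2) / (s ** 2).sum()      # fraction of variance per component
print(scores.shape, explained[:2])
```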

Annotation

The relation between a probe and the mRNA that it is expected to detect is not trivial. First, some mRNAs may cross-hybridize to probes in the array that are supposed to detect another mRNA. Second, mRNAs may experience amplification bias that is sequence- or molecule-specific. Third, probes that are designed to detect the mRNA of a particular gene may rely on genomic EST information that is incorrectly associated with that gene.

Data warehousing

Microarray data become more useful when they can be compared with other, similar datasets. The sheer volume of data, specialized formats (such as those implementing MIAME), and curation efforts associated with the datasets require specialized databases to store the data. A number of open-source data warehousing solutions, such as InterMine and BioMart, have been created for the specific purpose of integrating diverse biological datasets, and also support analysis.

Alternative technologies

Advances in massively parallel sequencing have led to the development of RNA-Seq technology, which enables a whole-transcriptome shotgun approach to characterizing and quantifying gene expression. Unlike microarrays, which need a reference genome and transcriptome to be available before the microarray itself can be designed, RNA-Seq can also be used for new model organisms whose genomes have not been sequenced yet.

Glossary

  • An array or slide is a collection of features spatially arranged in a two-dimensional grid of rows and columns.
  • Block or subarray: a group of spots, typically made in one print round; several subarrays/blocks form an array.
  • Case/control: an experimental design paradigm especially suited to the two-colour array system, in which a condition chosen as control (such as healthy tissue or state) is compared to an altered condition (such as a diseased tissue or state).
  • Channel: the fluorescence output recorded in the scanner for an individual fluorophore; the excitation wavelength can even be in the ultraviolet.
  • Dye flip or dye swap or fluor reversal: reciprocal labelling of DNA targets with the two dyes to account for dye bias in experiments.
  • Scanner: an instrument used to detect and quantify the intensity of fluorescence of spots on a microarray slide, by selectively exciting fluorophores with a laser and measuring the fluorescence with a filter (optics) photomultiplier system.
  • Spot or feature: a small area on an array slide that contains picomoles of specific DNA samples.
