Saturday, November 12, 2022

Death and the Internet

From Wikipedia, the free encyclopedia

A recent extension to the cultural relationship with death is the increasing number of people who die having created a large amount of digital content, such as social media profiles, that will remain after death. This may result in concern and confusion, because of automated features of dormant accounts (e.g. birthday reminders), uncertainty about whether the deceased preferred profiles to be deleted or left as a memorial, and questions about whether information that may violate the deceased's privacy (such as email or browser history) should be made accessible to family.

Handling this information sensitively is further complicated because it may belong to the service provider rather than the deceased, and many providers do not have clear policies on what happens to the accounts of deceased users. While some sites, including Facebook and Twitter, have policies related to death, others remain dormant until deleted due to inactivity or transferred to family or friends. The Fiduciary Access to Digital Assets Act (FADA) was put in place to make it legally possible to transfer digital possessions.

More broadly, the sharp increase in social media use is affecting cultural practices surrounding death. "Virtual funerals" and other forms of previously physical memorabilia are being introduced into the digital world, complete with public details of a person's life and death.

E-mail

Gmail and Hotmail allow the email accounts of the deceased to be accessed provided certain requirements are met. Yahoo! Mail will not provide access, citing the No Right of Survivorship and Non-Transferability clause in the Yahoo! terms of service. In 2005, Yahoo! was ordered by the Probate Court of Oakland County, Michigan, to release emails of deceased US Marine Justin Ellsworth to his father, John Ellsworth.

By website

Facebook

Policies

In its early days, Facebook deleted the profiles of deceased users, but it no longer does so. In October 2009, the company introduced "memorial pages" in response to multiple user requests related to the 2007 Virginia Tech shooting. After receiving proof of death via a special form, the profile would be converted into a tribute page with minimal personal details, where friends and family members could share their grief.

In February 2015, Facebook allowed users to appoint a friend or family member as a "legacy contact" with the rights to manage their page after death. It also gave Facebook users an option to have their account permanently deleted when they die.

As of January 2019, all three options were active.

Controversies

In 2013, BuzzFeed criticized Facebook for the lack of control over memorialization that resulted in a "Facebook death" prank aimed at locking users out of their own accounts.

In 2017, Reuters reported that a German court rejected a mother's demand to access her deceased daughter's memorialized account, stating that the right to private telecommunications outweighed the right to inheritance. In July 2018, a ruling by Dubai's DIFC Courts clarified that Facebook, Twitter, and other social media accounts should be bequeathed in a legally binding will.

Social media networks have also been criticized for not responding to relatives' requests to alter information on memorialized accounts. Another criticism is that Facebook users often are unaware that their content is ultimately owned not by them, but by Facebook.

Dropbox

Policies

Dropbox determines inactive accounts by looking at sign-ins, file shares, and file activity over the previous 12 months. Once an account is determined inactive, Dropbox deletes the files on the account. To request access to the account of a deceased person, heirs are required to send appropriate documents by physical mail.

Google

Policies

In April 2013, Google announced the creation of the 'Inactive Account Manager', which allows users of Google services to set up a process in which ownership and control of inactive accounts is transferred to a delegated user.

Google also allows users to submit a range of requests regarding accounts belonging to deceased users. Google works with immediate family members and representatives to close online accounts in some cases once a user is known to be deceased, and in certain circumstances may also provide content from a deceased user's account.

Twitter

Policies

Until 2010, Twitter (launched in July 2006) did not have a policy on handling deceased user accounts, and simply deleted timelines of deceased users. In August 2010, Twitter allowed memorialization of accounts upon request from family members, and also provided them with an option of either deleting the account or obtaining a permanent backup of the deceased user's public tweets.

In 2014, Twitter updated its policy to include an option to delete deceased user photographs. This policy was implemented after multiple Twitter trolls sent Zelda Williams, daughter of Robin Williams, photoshopped images of her father.

As of January 2019, the only option that Twitter offered for the accounts of dead people was account deactivation. Previously published content is not removed. To deactivate an account, Twitter requires an immediate family member to present a copy of their ID and the deceased's death certificate. Twitter specified that it does not provide account access to anyone, but does allow people who already have the account login information to continue posting. A prominent example is Roger Ebert's account, maintained by his wife Chaz.

Controversies

In 2012, The Next Web columnist Martin Bryant noted that because Twitter, unlike Facebook, did not emphasize "one account per real person", memorializing accounts presented a difficulty for the service. He also criticized the service for the lack of control over hacking of such accounts and disapproved of the practice of passing dead people's usernames to new owners after a certain period of inactivity.

In 2013, Variety ran a feature about Cory Monteith's Twitter account, which had 1.5 million followers at the moment of his death and gained almost 1 million new followers afterwards. Monteith's fans also launched a #DontDeleteCorysTwitter campaign. As of January 2019, the celebrity's account had 1.64 million followers.

Various media reported awkward incidents related to automatic posting and account hacking.

iTunes

Policies

iCloud and iTunes accounts are "non transferable" since the content is not owned — users only have a licence to access it.

Wikipedia

Users who have made at least several hundred edits or are otherwise known for substantial contributions to Wikipedia can be noted at a central memorial page. Wikipedia user pages are ordinarily fully edit-protected after the user has died, to prevent vandalism.

YouTube

YouTube grants access to accounts of deceased persons under certain conditions. It is one of the data options that one can select to give access to a trusted contact with Google's Inactive Account Manager.

Digital inheritance

Digital inheritance is the process of handing over personal digital assets to human beneficiaries. These digital assets include digital estates and the right to use them. It may include bank accounts, writings, photographs, and social interactions.

There are several services which store account passwords and send them to selected individuals after death. Some of these periodically send the customer an email to confirm that the customer is still alive; after failure to respond to multiple emails, the service provider assumes that the person has died and will thereafter distribute the passwords as arranged. The Data Inheritance function from SecureSafe gives an "activator code" that the customer transfers to a trusted individual, and in the event of death that individual enters the code into SecureSafe's system to gain access to the deceased person's digital inheritance. Legacy Locker and SafeBeyond require two persons to confirm the death, together with the presentation of a death certificate, before any passwords are distributed.
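
A minimal sketch of the check-in logic such services describe, written in Python; the class name, intervals, and thresholds are illustrative assumptions, not any provider's actual implementation:

```python
from datetime import datetime, timedelta, timezone

class DeadMansSwitch:
    """Illustrative check-in logic: after several unanswered confirmation
    prompts, stored credentials are released to the named beneficiary."""

    def __init__(self, prompt_interval_days=90, max_missed_prompts=3):
        self.prompt_interval = timedelta(days=prompt_interval_days)
        self.max_missed = max_missed_prompts
        self.last_response = datetime.now(timezone.utc)

    def record_response(self):
        # The customer answered a confirmation e-mail: reset the clock.
        self.last_response = datetime.now(timezone.utc)

    def should_release(self, now=None):
        # Run periodically by a scheduler; True once enough prompt
        # intervals have passed without any response from the customer.
        now = now or datetime.now(timezone.utc)
        missed = (now - self.last_response) // self.prompt_interval
        return missed >= self.max_missed
```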

Aimed at those concerned with their online privacy, platforms like LifeBank store a customer's Internet account passwords offline while ensuring that a trusted person is given permission to access the stored passwords upon the customer's death.

In July 2018, the Michigan Court of Appeals found that an Evernote document the decedent had typed into his phone shortly before committing suicide was enforceable as a valid will.

Geodesy

From Wikipedia, the free encyclopedia
 
An old geodetic pillar (triangulation pillar) (1855) at Ostend, Belgium

Geodesy (/dʒiˈɒdəsi/ jee-OD-ə-see) is the Earth science of accurately measuring and understanding Earth's figure (geometric shape and size), orientation in space, and gravity. The field also incorporates studies of how these properties change over time and equivalent measurements for other planets (known as planetary geodesy). Geodynamical phenomena, including crustal motion, tides and polar motion, can be studied by designing global and national control networks, applying space geodesy and terrestrial geodetic techniques and relying on datums and coordinate systems. The job title is geodesist or geodetic surveyor.

History

Definition

The word geodesy comes from the Ancient Greek word γεωδαισία geodaisia (literally, "division of Earth").

It is primarily concerned with positioning within the temporally varying gravitational field. Geodesy in the German-speaking world is divided into "higher geodesy" (Erdmessung or höhere Geodäsie), which is concerned with measuring Earth on the global scale, and "practical geodesy" or "engineering geodesy" (Ingenieurgeodäsie), which is concerned with measuring specific parts or regions of Earth, and which includes surveying. Such geodetic operations are also applied to other astronomical bodies in the Solar System. It is also the science of measuring and understanding Earth's geometric shape, orientation in space, and gravitational field.

To a large extent, the shape of Earth is the result of rotation, which causes its equatorial bulge, and the competition of geological processes such as the collision of plates and of volcanism, resisted by Earth's gravitational field. This applies to the solid surface, the liquid surface (dynamic sea surface topography) and Earth's atmosphere. For this reason, the study of Earth's gravitational field is called physical geodesy.

Geoid and reference ellipsoid

The geoid is essentially the figure of Earth abstracted from its topographical features. It is an idealized equilibrium surface of sea water, the mean sea level surface in the absence of currents and air pressure variations, and continued under the continental masses. The geoid, unlike the reference ellipsoid, is irregular and too complicated to serve as the computational surface on which to solve geometrical problems like point positioning. The geometrical separation between the geoid and the reference ellipsoid is called the geoidal undulation. It varies globally within ±110 m, when referred to the GRS 80 ellipsoid.

A reference ellipsoid, customarily chosen to be the same size (volume) as the geoid, is described by its semi-major axis (equatorial radius) a and flattening f. The quantity f = (a − b)/a, where b is the semi-minor axis (polar radius), is a purely geometrical one. The mechanical ellipticity of Earth (dynamical flattening, symbol J2) can be determined to high precision by observation of satellite orbit perturbations. Its relationship with the geometrical flattening is indirect. The relationship depends on the internal density distribution, or, in simplest terms, the degree of central concentration of mass.

The 1980 Geodetic Reference System (GRS 80) posited a 6,378,137 m semi-major axis and a 1:298.257 flattening. This system was adopted at the XVII General Assembly of the International Union of Geodesy and Geophysics (IUGG). It is essentially the basis for geodetic positioning by the Global Positioning System (GPS) and is thus also in widespread use outside the geodetic community. The numerous systems that countries have used to create maps and charts are becoming obsolete as countries increasingly move to global, geocentric reference systems using the GRS 80 reference ellipsoid.
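
As a quick numerical sketch (in Python, using the rounded flattening value quoted above), the GRS 80 semi-minor axis follows directly from a and f:

```python
# GRS 80 parameters as quoted above (flattening rounded to 1:298.257).
a = 6_378_137.0            # semi-major (equatorial) radius, metres
f = 1 / 298.257            # flattening, f = (a - b) / a
b = a * (1 - f)            # semi-minor (polar) radius
print(f"b     = {b:,.3f} m")      # about 6,356,752 m
print(f"a - b = {a - b:,.3f} m")  # about 21,385 m of equatorial bulge
```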

The geoid is "realizable", meaning it can be consistently located on Earth by suitable simple measurements from physical objects like a tide gauge. The geoid can, therefore, be considered a real surface. The reference ellipsoid, however, has many possible instantiations and is not readily realizable, therefore it is an abstract surface. The third primary surface of geodetic interest—the topographic surface of Earth—is a realizable surface.

Coordinate systems in space

The locations of points in three-dimensional space are most conveniently described by three cartesian or rectangular coordinates, X, Y and Z. Since the advent of satellite positioning, such coordinate systems are typically geocentric: the Z-axis is aligned with Earth's (conventional or instantaneous) rotation axis.

Prior to the era of satellite geodesy, the coordinate systems associated with a geodetic datum attempted to be geocentric, but their origins differed from the geocenter by hundreds of meters, due to regional deviations in the direction of the plumbline (vertical). These regional geodetic data, such as ED 50 (European Datum 1950) or NAD 27 (North American Datum 1927) have ellipsoids associated with them that are regional "best fits" to the geoids within their areas of validity, minimizing the deflections of the vertical over these areas.

It is only because GPS satellites orbit about the geocenter, that this point becomes naturally the origin of a coordinate system defined by satellite geodetic means, as the satellite positions in space are themselves computed in such a system.

Geocentric coordinate systems used in geodesy can be divided naturally into two classes:

  1. Inertial reference systems, where the coordinate axes retain their orientation relative to the fixed stars, or equivalently, to the rotation axes of ideal gyroscopes; the X-axis points to the vernal equinox
  2. Co-rotating, also ECEF ("Earth Centred, Earth Fixed"), where the axes are attached to the solid body of Earth. The X-axis lies within the Greenwich observatory's meridian plane.

The coordinate transformation between these two systems is described to good approximation by (apparent) sidereal time, which takes into account variations in Earth's axial rotation (length-of-day variations). A more accurate description also takes polar motion into account, a phenomenon closely monitored by geodesists.
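
A minimal sketch of the first-order version of this transformation, rotating an inertial position vector about the Z-axis by the Greenwich apparent sidereal time angle; polar motion, precession, and nutation are deliberately neglected, and the function name is illustrative:

```python
import numpy as np

def eci_to_ecef(r_eci, gast_rad):
    """Rotate an inertial (ECI) position vector into the co-rotating (ECEF)
    frame by the Greenwich apparent sidereal time angle (radians).
    First-order model: polar motion, precession and nutation are ignored."""
    c, s = np.cos(gast_rad), np.sin(gast_rad)
    rotation_z = np.array([[ c,   s,  0.0],
                           [-s,   c,  0.0],
                           [0.0, 0.0, 1.0]])
    return rotation_z @ np.asarray(r_eci, dtype=float)
```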

Coordinate systems in the plane

A Munich archive with lithography plates of maps of Bavaria

In surveying and mapping, important fields of application of geodesy, two general types of coordinate systems are used in the plane:

  1. Plano-polar, in which points in a plane are defined by a distance s from a specified point along a ray having a specified direction α with respect to a base line or axis;
  2. Rectangular, in which points are defined by distances from two perpendicular axes called x and y. It is geodetic practice—contrary to the mathematical convention—to let the x-axis point to the north and the y-axis to the east.

Rectangular coordinates in the plane can be used intuitively with respect to one's current location, in which case the x-axis will point to the local north. More formally, such coordinates can be obtained from three-dimensional coordinates using the artifice of a map projection. It is impossible to map the curved surface of Earth onto a flat map surface without deformation. The compromise most often chosen—called a conformal projection—preserves angles and length ratios, so that small circles are mapped as small circles and small squares as squares.

An example of such a projection is UTM (Universal Transverse Mercator). Within the map plane, we have rectangular coordinates x and y. In this case, the north direction used for reference is the map north, not the local north. The difference between the two is called meridian convergence.
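
A short sketch of such a projection in practice, assuming the third-party pyproj library is available; the zone (UTM 33N, EPSG:32633) and the test coordinates are arbitrary examples:

```python
# Sketch only: assumes the third-party pyproj package is installed.
from pyproj import Transformer

# WGS 84 geographic coordinates to UTM zone 33N (EPSG:32633), in metres.
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32633", always_xy=True)
lon, lat = 13.4050, 52.5200                 # roughly Berlin
easting, northing = to_utm.transform(lon, lat)
print(f"E = {easting:.1f} m, N = {northing:.1f} m")
```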

It is easy enough to "translate" between polar and rectangular coordinates in the plane: let, as above, direction and distance be α and s respectively; then we have

x = s cos α,   y = s sin α.

The reverse transformation is given by:

s = √(x² + y²),   α = arctan(y / x),

where the quadrant of α is chosen according to the signs of x and y.
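
A small Python sketch of both directions, using the geodetic convention above (x north, y east, azimuth measured clockwise from north) and a quadrant-safe arctangent:

```python
import math

def polar_to_rect(s, alpha_rad):
    # Geodetic convention: x points north, y east, and the azimuth alpha
    # is measured clockwise from north, so x = s*cos(alpha), y = s*sin(alpha).
    return s * math.cos(alpha_rad), s * math.sin(alpha_rad)

def rect_to_polar(x, y):
    s = math.hypot(x, y)
    alpha = math.atan2(y, x) % (2 * math.pi)   # quadrant-safe arctangent
    return s, alpha
```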

Heights

In geodesy, point or terrain heights are "above sea level", an irregular, physically defined surface. Heights come in the following variants:

  1. Orthometric heights
  2. Dynamic heights
  3. Geopotential heights
  4. Normal heights

Each has its advantages and disadvantages. Both orthometric and normal heights are heights in metres above sea level, whereas geopotential numbers are measures of potential energy (unit: m² s⁻²) and not metric. The reference surface is the geoid, an equipotential surface approximating mean sea level. (For normal heights, the reference surface is actually the so-called quasi-geoid, which is separated from the geoid by up to a few metres because of the density assumption in its continuation under the continental masses.)

These heights can be related to the ellipsoidal height (also known as geodetic height), which expresses the height of a point above the reference ellipsoid, by means of the geoid undulation. Satellite positioning receivers typically provide ellipsoidal heights, unless they are fitted with special conversion software based on a model of the geoid.
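
That relation can be written h = H + N, where h is the ellipsoidal height, H the orthometric height, and N the geoid undulation. A one-function sketch, with purely illustrative numbers:

```python
def orthometric_height(h_ellipsoidal, n_geoid_undulation):
    # h = H + N, so H = h - N.
    # h: height above the reference ellipsoid (what a GNSS receiver reports)
    # N: geoid undulation from a geoid model at the same location
    return h_ellipsoidal - n_geoid_undulation

# Illustrative numbers only: 100.0 m ellipsoidal height where the geoid
# lies 46.3 m above the ellipsoid gives about 53.7 m above sea level.
print(orthometric_height(100.0, 46.3))
```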

Geodetic data

Because geodetic point coordinates (and heights) are always obtained in a system that has been constructed itself using real observations, geodesists introduce the concept of a "geodetic datum": a physical realization of a coordinate system used for describing point locations. The realization is the result of choosing conventional coordinate values for one or more datum points.

In the case of height data, it suffices to choose one datum point: the reference benchmark, typically a tide gauge at the shore. Thus we have vertical data like the NAP (Normaal Amsterdams Peil), the North American Vertical Datum 1988 (NAVD 88), the Kronstadt datum, the Trieste datum, and so on.

In the case of plane or spatial coordinates, we typically need several datum points. A regional, ellipsoidal datum like ED 50 can be fixed by prescribing the undulation of the geoid and the deflection of the vertical at one datum point, in this case the Helmert Tower in Potsdam. However, an overdetermined ensemble of datum points can also be used.

Changing the coordinates of a point set referring to one datum so that they refer to another datum is called a datum transformation. In the case of vertical data, this consists of simply adding a constant shift to all height values. In the case of plane or spatial coordinates, datum transformation takes the form of a similarity or Helmert transformation, consisting of a rotation and scaling operation in addition to a simple translation. In the plane, a Helmert transformation has four parameters; in space, seven.
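
A small-angle sketch of the seven-parameter spatial Helmert transformation, X' = T + (1 + s)·R·X; the parameter names are illustrative, and the sign convention for the rotations differs between authorities:

```python
import numpy as np

def helmert7(xyz, tx, ty, tz, rx, ry, rz, scale_ppm):
    """Seven-parameter spatial Helmert transformation, small-angle form:
    X' = T + (1 + s) * R * X, with rotations in radians and the scale in
    parts per million. The rotation sign convention (position-vector vs.
    coordinate-frame) varies between authorities; check before using."""
    s = scale_ppm * 1e-6
    rotation = np.array([[1.0, -rz,  ry],
                         [ rz, 1.0, -rx],
                         [-ry,  rx, 1.0]])
    translation = np.array([tx, ty, tz])
    return translation + (1.0 + s) * rotation @ np.asarray(xyz, dtype=float)
```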

A note on terminology

In the abstract, a coordinate system as used in mathematics and geodesy is called a "coordinate system" in ISO terminology, whereas the International Earth Rotation and Reference Systems Service (IERS) uses the term "reference system". When these coordinates are realized by choosing datum points and fixing a geodetic datum, ISO says "coordinate reference system", while IERS says "reference frame". The ISO term for a datum transformation again is a "coordinate transformation".

Point positioning

Geodetic Control Mark (example of a deep benchmark)

Point positioning is the determination of the coordinates of a point on land, at sea, or in space with respect to a coordinate system. Point position is solved by computation from measurements linking the known positions of terrestrial or extraterrestrial points with the unknown terrestrial position. This may involve transformations between or among astronomical and terrestrial coordinate systems. The known points used for point positioning can be triangulation points of a higher-order network or GPS satellites.

Traditionally, a hierarchy of networks has been built to allow point positioning within a country. Highest in the hierarchy were triangulation networks. These were densified into networks of traverses (polygons), into which local mapping and surveying measurements, usually with measuring tape, corner prism, and the familiar red and white poles, are tied.

Nowadays all but special measurements (e.g., underground or high-precision engineering measurements) are performed with GPS. The higher-order networks are measured with static GPS, using differential measurement to determine vectors between terrestrial points. These vectors are then adjusted in traditional network fashion. A global polyhedron of permanently operating GPS stations under the auspices of the IERS is used to define a single global, geocentric reference frame which serves as the "zero order" global reference to which national measurements are attached.

For mapping surveys, Real Time Kinematic GPS is frequently employed, tying in the unknown points with known terrestrial points close by in real time.

One purpose of point positioning is the provision of known points for mapping measurements, also known as (horizontal and vertical) control. In every country, thousands of such known points exist and are normally documented by national mapping agencies. Surveyors involved in real estate and insurance will use these to tie in their local measurements.

Geodetic problems

In geometric geodesy, two standard problems exist—the first (direct or forward) and the second (inverse or reverse).

First (direct or forward) geodetic problem
Given a point (in terms of its coordinates) and the direction (azimuth) and distance from that point to a second point, determine (the coordinates of) that second point.
Second (inverse or reverse) geodetic problem
Given two points, determine the azimuth and length of the line (straight line, arc or geodesic) that connects them.

In plane geometry (valid for small areas on Earth's surface), the solutions to both problems reduce to simple trigonometry. On a sphere, however, the solution is significantly more complex, because in the inverse problem the azimuths will differ between the two end points of the connecting great circle arc.

On the ellipsoid of revolution, geodesics may be written in terms of elliptic integrals, which are usually evaluated in terms of a series expansion—see, for example, Vincenty's formulae. In the general case, the solution is called the geodesic for the surface considered. The differential equations for the geodesic can be solved numerically.
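
For illustration, a self-contained Python sketch that solves both problems on a sphere rather than the ellipsoid; for ellipsoidal accuracy one would use Vincenty's formulae or a geodesic library, and the mean Earth radius used here is a nominal value:

```python
import math

R_EARTH = 6_371_000.0  # nominal mean Earth radius in metres (spherical model)

def direct(lat1, lon1, azimuth, distance):
    """First (direct) problem on a sphere: start point, azimuth and
    distance in -> coordinates of the second point. Angles in radians."""
    delta = distance / R_EARTH
    lat2 = math.asin(math.sin(lat1) * math.cos(delta) +
                     math.cos(lat1) * math.sin(delta) * math.cos(azimuth))
    lon2 = lon1 + math.atan2(
        math.sin(azimuth) * math.sin(delta) * math.cos(lat1),
        math.cos(delta) - math.sin(lat1) * math.sin(lat2))
    return lat2, lon2

def inverse(lat1, lon1, lat2, lon2):
    """Second (inverse) problem on a sphere: two points in -> forward
    azimuth at the first point and length of the connecting great circle."""
    dlon = lon2 - lon1
    azimuth = math.atan2(
        math.sin(dlon) * math.cos(lat2),
        math.cos(lat1) * math.sin(lat2) -
        math.sin(lat1) * math.cos(lat2) * math.cos(dlon))
    delta = math.acos(math.sin(lat1) * math.sin(lat2) +
                      math.cos(lat1) * math.cos(lat2) * math.cos(dlon))
    return azimuth, delta * R_EARTH
```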

Observational concepts

Here we define some basic observational concepts, like angles and coordinates, used in geodesy (and astronomy as well), mostly from the viewpoint of the local observer.

  • Plumbline or vertical: the direction of local gravity, or the line that results by following it.
  • Zenith: the point on the celestial sphere where the direction of the gravity vector at a point, extended upwards, intersects it. It is more correct to call it a direction rather than a point.
  • Nadir: the opposite point—or rather, direction—where the direction of gravity extended downward intersects the (obscured) celestial sphere.
  • Celestial horizon: a plane perpendicular to a point's gravity vector.
  • Azimuth: the direction angle within the plane of the horizon, typically counted clockwise from the north (in geodesy and astronomy) or the south (in France).
  • Elevation: the angular height of an object above the horizon; alternatively, zenith distance, which equals 90 degrees minus elevation.
  • Local topocentric coordinates: azimuth (direction angle within the plane of the horizon), elevation angle (or zenith angle), distance.
  • North celestial pole: the extension of Earth's (precessing and nutating) instantaneous spin axis extended northward to intersect the celestial sphere. (Similarly for the south celestial pole.)
  • Celestial equator: the (instantaneous) intersection of Earth's equatorial plane with the celestial sphere.
  • Meridian plane: any plane perpendicular to the celestial equator and containing the celestial poles.
  • Local meridian: the plane containing the direction to the zenith and the direction to the celestial pole.

Measurements

The level is used for determining height differences and height reference systems, commonly referenced to mean sea level. The traditional spirit level directly produces these practically most useful heights above sea level; the more economical use of GPS instruments for height determination requires precise knowledge of the figure of the geoid, as GPS only gives heights above the GRS 80 reference ellipsoid. As geoid knowledge accumulates, one may expect the use of GPS heighting to spread.

The theodolite is used to measure horizontal and vertical angles to target points. These angles are referred to the local vertical. The tacheometer additionally determines, electronically or electro-optically, the distance to the target, and is highly automated, even robotic, in its operations. The method of free station position is widely used.

For local detail surveys, tacheometers are commonly employed although the old-fashioned rectangular technique using angle prism and steel tape is still an inexpensive alternative. Real-time kinematic (RTK) GPS techniques are used as well. Data collected are tagged and recorded digitally for entry into a Geographic Information System (GIS) database.

Geodetic GPS receivers produce directly three-dimensional coordinates in a geocentric coordinate frame. Such a frame is, e.g., WGS84, or the frames that are regularly produced and published by the International Earth Rotation and Reference Systems Service (IERS).

GPS receivers have almost completely replaced terrestrial instruments for large-scale base network surveys. For planet-wide geodetic surveys, previously impossible, satellite laser ranging (SLR), lunar laser ranging (LLR), and very-long-baseline interferometry (VLBI) techniques should also be mentioned. All these techniques also serve to monitor irregularities in Earth's rotation as well as plate tectonic motions.

Gravity is measured using gravimeters, of which there are two kinds. First, "absolute gravimeters" are based on measuring the acceleration of free fall (e.g., of a reflecting prism in a vacuum tube). They are used to establish the vertical geospatial control and can be used in the field. Second, "relative gravimeters" are spring-based and are more common. They are used in gravity surveys over large areas for establishing the figure of the geoid over these areas. The most accurate relative gravimeters are called "superconducting" gravimeters, which are sensitive to one-thousandth of one-billionth of Earth-surface gravity. Twenty-some superconducting gravimeters are used worldwide for studying Earth's tides, rotation, interior, and ocean and atmospheric loading, as well as for verifying the Newtonian constant of gravitation.

In the future, gravity and altitude will be measured by relativistic time dilation measured by optical clocks.

Units and measures on the ellipsoid

Geographical latitude and longitude are stated in the units degree, minute of arc, and second of arc. They are angles, not metric measures, and describe the direction of the local normal to the reference ellipsoid of revolution. This is approximately the same as the direction of the plumbline, i.e., local gravity, which is also the normal to the geoid surface. For this reason, astronomical position determination – measuring the direction of the plumbline by astronomical means – works fairly well provided an ellipsoidal model of the figure of Earth is used.

One geographical mile, defined as one minute of arc on the equator, equals 1,855.32571922 m. One nautical mile is one minute of astronomical latitude. The radius of curvature of the ellipsoid varies with latitude, being longest at the pole and shortest at the equator, as is the nautical mile.

A metre was originally defined as the 10-millionth part of the length from the equator to the North Pole along the meridian through Paris (the target was not quite reached in actual implementation, so that it is off by 200 ppm in the current definitions). This means that one kilometre is roughly equal to (1/40,000) × 360 × 60 meridional minutes of arc, which equals 0.54 nautical mile, though this is not exact because the two units are defined on different bases (the international nautical mile is defined as exactly 1,852 m, corresponding to a rounding of 1,000/0.54 m to four digits).
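
A quick arithmetic check of the figures quoted above:

```python
# Rough consistency check of the figures quoted above.
metres_per_arcminute = 40_000_000 / (360 * 60)   # metre's original definition
print(metres_per_arcminute)      # ~1851.85 m per meridional minute of arc
print(1_000 / 1_852)             # 1 km in international nautical miles, ~0.54
```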

Temporal change

In geodesy, temporal change can be studied by a variety of techniques. Points on Earth's surface change their location due to a variety of mechanisms:

  • Continental plate motion, plate tectonics
  • Episodic motion of tectonic origin, especially close to fault lines
  • Periodic effects due to tides and tidal loading
  • Postglacial land uplift due to isostatic adjustment
  • Mass variations due to hydrological changes, including the atmosphere, cryosphere, land hydrology and oceans
  • Sub-daily polar motion
  • Length-of-day variability
  • Earth's center-of-mass (geocenter) variations
  • Anthropogenic movements such as reservoir construction or petroleum or water extraction

The science of studying deformations and motions of Earth's crust and of the solid Earth as a whole is called geodynamics. Often, the study of Earth's irregular rotation is also included in its definition. Geodynamics studies require terrestrial reference frames that are realized by the stations belonging to the Global Geodetic Observing System (GGOS).

Techniques for studying geodynamic phenomena on the global scale include satellite positioning by GPS and other GNSS, very-long-baseline interferometry (VLBI), satellite laser ranging (SLR) and lunar laser ranging (LLR), supplemented regionally by precise levelling and gravimetry.

Authentication

From Wikipedia, the free encyclopedia
 
Authentication (from Greek: αὐθεντικός authentikos, "real, genuine", from αὐθέντης authentes, "author") is the act of proving an assertion, such as the identity of a computer system user. In contrast with identification, the act of indicating a person or thing's identity, authentication is the process of verifying that identity. It might involve validating personal identity documents, verifying the authenticity of a website with a digital certificate, determining the age of an artifact by carbon dating, or ensuring that a product or document is not counterfeit.

Methods

Authentication is relevant to multiple fields. In art, antiques, and anthropology, a common problem is verifying that a given artifact was produced by a certain person or in a certain place or period of history. In computer science, verifying a user's identity is often required to allow access to confidential data or systems.

Authentication can be considered to be of three types:

The first type of authentication is accepting proof of identity given by a credible person who has first-hand evidence that the identity is genuine. When authentication is required of art or physical objects, this proof could be a friend, family member, or colleague attesting to the item's provenance, perhaps by having witnessed the item in its creator's possession. With autographed sports memorabilia, this could involve someone attesting that they witnessed the object being signed. A vendor selling branded items implies authenticity, while they may not have evidence that every step in the supply chain was authenticated. Centralized authority-based trust relationships back most secure internet communication through known public certificate authorities; decentralized peer-based trust, also known as a web of trust, is used for personal services such as email or files (Pretty Good Privacy, GNU Privacy Guard), where trust is established by known individuals signing each other's cryptographic keys at key signing parties, for instance.

The second type of authentication is comparing the attributes of the object itself to what is known about objects of that origin. For example, an art expert might look for similarities in the style of painting, check the location and form of a signature, or compare the object to an old photograph. An archaeologist, on the other hand, might use carbon dating to verify the age of an artifact, do a chemical and spectroscopic analysis of the materials used, or compare the style of construction or decoration to other artifacts of similar origin. The physics of sound and light, and comparison with a known physical environment, can be used to examine the authenticity of audio recordings, photographs, or videos. Documents can be verified as being created on ink or paper readily available at the time of the item's implied creation.

Attribute comparison may be vulnerable to forgery. In general, it relies on the facts that creating a forgery indistinguishable from a genuine artifact requires expert knowledge, that mistakes are easily made, and that the amount of effort required to do so is considerably greater than the amount of profit that can be gained from the forgery.

In art and antiques, certificates are of great importance for authenticating an object of interest and value. Certificates can, however, also be forged, and the authentication of these poses a problem. For instance, the son of Han van Meegeren, the well-known art-forger, forged the work of his father and provided a certificate for its provenance as well; see the article Jacques van Meegeren.

Criminal and civil penalties for fraud, forgery, and counterfeiting can reduce the incentive for falsification, depending on the risk of getting caught.

Currency and other financial instruments commonly use this second type of authentication method. Bills, coins, and cheques incorporate hard-to-duplicate physical features, such as fine printing or engraving, distinctive feel, watermarks, and holographic imagery, which are easy for trained receivers to verify.

The third type of authentication relies on documentation or other external affirmations. In criminal courts, the rules of evidence often require establishing the chain of custody of evidence presented. This can be accomplished through a written evidence log, or by testimony from the police detectives and forensics staff that handled it. Some antiques are accompanied by certificates attesting to their authenticity. Signed sports memorabilia is usually accompanied by a certificate of authenticity. These external records have their own problems of forgery and perjury and are also vulnerable to being separated from the artifact and lost.

In computer science, a user can be given access to secure systems based on user credentials that imply authenticity. A network administrator can give a user a password, or provide the user with a key card or other access devices to allow system access. In this case, authenticity is implied but not guaranteed.

Consumer goods such as pharmaceuticals, perfume, and fashion clothing can use all three forms of authentication to prevent counterfeit goods from taking advantage of a popular brand's reputation (damaging the brand owner's sales and reputation). As mentioned above, having an item for sale in a reputable store implicitly attests to it being genuine, the first type of authentication. The second type of authentication might involve comparing the quality and craftsmanship of an item, such as an expensive handbag, to genuine articles. The third type of authentication could be the presence of a trademark on the item, which is a legally protected marking, or any other identifying feature which aids consumers in the identification of genuine brand-name goods. With software, companies have taken great steps to protect themselves from counterfeiters, including adding holograms, security rings, security threads and color shifting ink.

Authentication factors

The ways in which someone may be authenticated fall into three categories, based on what is known as the factors of authentication: something the user knows, something the user has, and something the user is. Each authentication factor covers a range of elements used to authenticate or verify a person's identity before being granted access, approving a transaction request, signing a document or other work product, granting authority to others, and establishing a chain of authority.

Security research has determined that for a positive authentication, elements from at least two, and preferably all three, factors should be verified. The three factors (classes) and some of the elements of each factor are:

  • Knowledge factors: something the user knows, such as a password, passphrase, or personal identification number (PIN)
  • Ownership factors: something the user has, such as an ID card, security token, or mobile phone
  • Inherence factors: something the user is or does, such as a fingerprint or other biometric identifier

Single-factor authentication

As the weakest level of authentication, only a single component from one of the three categories of factors is used to authenticate an individual's identity. The use of only one factor does not offer much protection from misuse or malicious intrusion. This type of authentication is not recommended for financial or personally relevant transactions that warrant a higher level of security.

Multi-factor authentication

Multi-factor authentication involves two or more authentication factors (something you know, something you have, or something you are). Two-factor authentication is a special case of multi-factor authentication involving exactly two factors.

For example, using a bank card (something the user has) along with a PIN (something the user knows) provides two-factor authentication. Business networks may require users to provide a password (knowledge factor) and a pseudorandom number from a security token (ownership factor). Access to a very-high-security system might require a mantrap screening of height, weight, facial, and fingerprint checks (several inherence factor elements) plus a PIN and a day code (knowledge factor elements), but this is still two-factor authentication.
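
A minimal sketch of such two-factor verification, combining a knowledge factor (a salted password hash) with an ownership factor (an RFC 6238 time-based one-time code); the parameter choices and function names are illustrative, not a production design:

```python
import hashlib, hmac, struct, time

def totp(secret: bytes, t=None, step=30, digits=6):
    # RFC 6238 time-based one-time password (HMAC-SHA1, 30-second steps).
    counter = int((t if t is not None else time.time()) // step)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 10 ** digits:0{digits}d}"

def verify_two_factor(password, stored_hash, salt, otp, token_secret):
    # Knowledge factor: constant-time comparison of a salted password hash.
    knows = hmac.compare_digest(
        hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000),
        stored_hash)
    # Ownership factor: one-time code generated by the user's token or phone.
    has = hmac.compare_digest(otp, totp(token_secret))
    return knows and has
```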

Authentication types

The most common types of authentication for online users differ in the level of security they provide, depending on how they combine factors from one or more of the three categories of authentication factors:

Strong authentication

The U.S. government's National Information Assurance Glossary defines strong authentication as a layered authentication approach relying on two or more authenticators to establish the identity of an originator or receiver of information.

The European Central Bank (ECB) has defined strong authentication as "a procedure based on two or more of the three authentication factors". The factors that are used must be mutually independent, and at least one factor must be "non-reusable and non-replicable" (except in the case of an inherence factor) and must also be incapable of being stolen off the Internet. In the European as well as the US understanding, strong authentication is very similar to multi-factor authentication or 2FA but exceeds it by imposing more rigorous requirements.[2][8]

The Fast IDentity Online (FIDO) Alliance has been striving to establish technical specifications for strong authentication.

Continuous authentication

Conventional computer systems authenticate users only at the initial log-in session, which can be the cause of a critical security flaw. To resolve this problem, systems need continuous user authentication methods that continuously monitor and authenticate users based on some biometric trait(s). A study used behavioural biometrics based on writing styles as a continuous authentication method.

Recent research has shown the possibility of using smartphone sensors and accessories to extract some behavioral attributes such as touch dynamics, keystroke dynamics and gait recognition. These attributes are known as behavioral biometrics and could be used to verify or identify users implicitly and continuously on smartphones. The authentication systems that have been built based on these behavioral biometric traits are known as active or continuous authentication systems.

Digital authentication

The term digital authentication, also known as electronic authentication or e-authentication, refers to a group of processes where the confidence for user identities is established and presented via electronic methods to an information system. The digital authentication process creates technical challenges because of the need to authenticate individuals or entities remotely over a network. The American National Institute of Standards and Technology (NIST) has created a generic model for digital authentication that describes the processes that are used to accomplish secure authentication:

  1. Enrollment – an individual applies to a credential service provider (CSP) to initiate the enrollment process. After successfully proving the applicant's identity, the CSP allows the applicant to become a subscriber.
  2. Authentication – after becoming a subscriber, the user receives an authenticator (e.g., a token) and credentials, such as a user name. The user is then permitted to perform online transactions within an authenticated session with a relying party, where they must provide proof that they possess one or more authenticators.
  3. Life-cycle maintenance – the CSP is charged with the task of maintaining the user's credential over the course of its lifetime, while the subscriber is responsible for maintaining his or her authenticator(s).

The authentication of information can pose special problems with electronic communication, such as vulnerability to man-in-the-middle attacks, whereby a third party taps into the communication stream, and poses as each of the two other communicating parties, in order to intercept information from each. Extra identity factors can be required to authenticate each party's identity.

Product authentication

A security hologram label on an electronics box for authentication

Counterfeit products are often offered to consumers as being authentic. Counterfeit consumer goods, such as electronics, music, apparel, and counterfeit medications, have been sold as being legitimate. Efforts to control the supply chain and educate consumers help ensure that authentic products are sold and used. Even security printing on packages, labels, and nameplates, however, is subject to counterfeiting.

In their anti-counterfeiting technology guide, the EUIPO Observatory on Infringements of Intellectual Property Rights categorizes the main anti-counterfeiting technologies on the market currently into five main categories: electronic, marking, chemical and physical, mechanical, and technologies for digital media.

Products or their packaging can include a variable QR code. A QR code alone is easy to verify but offers a weak level of authentication, as it offers no protection against counterfeits unless scan data is analyzed at the system level to detect anomalies. To increase the security level, the QR code can be combined with a digital watermark or copy detection pattern, which are robust to copy attempts and can be authenticated with a smartphone.

A secure key storage device can be used for authentication in consumer electronics, network authentication, license management, supply chain management, etc. Generally, the device to be authenticated needs some sort of wireless or wired digital connection to either a host system or a network. Nonetheless, the component being authenticated need not be electronic in nature as an authentication chip can be mechanically attached and read through a connector to the host e.g. an authenticated ink tank for use with a printer. For products and services that these secure coprocessors can be applied to, they can offer a solution that can be much more difficult to counterfeit than most other options while at the same time being more easily verified.

Packaging

Packaging and labeling can be engineered to help reduce the risks of counterfeit consumer goods or the theft and resale of products. Some package constructions are more difficult to copy and some have pilfer indicating seals. Counterfeit goods, unauthorized sales (diversion), material substitution and tampering can all be reduced with these anti-counterfeiting technologies. Packages may include authentication seals and use security printing to help indicate that the package and contents are not counterfeit; these too are subject to counterfeiting. Packages also can include anti-theft devices, such as dye-packs, RFID tags, or electronic article surveillance tags that can be activated or detected by devices at exit points and require specialized tools to deactivate. Anti-counterfeiting technologies that can be used with packaging include:

  • Taggant fingerprinting – uniquely coded microscopic materials that are verified from a database
  • Encrypted micro-particles – unpredictably placed markings (numbers, layers and colors) not visible to the human eye
  • Holograms – graphics printed on seals, patches, foils or labels and used at the point of sale for visual verification
  • Micro-printing – second-line authentication often used on currencies
  • Serialized barcodes
  • UV printing – marks only visible under UV light
  • Track and trace systems – use codes to link products to the database tracking system
  • Water indicators – become visible when contacted with water
  • DNA tracking – genes embedded onto labels that can be traced
  • Color-shifting ink or film – visible marks that switch colors or texture when tilted
  • Tamper evident seals and tapes – destructible or graphically verifiable at point of sale
  • 2d barcodes – data codes that can be tracked
  • RFID chips
  • NFC chips

Information content

Literary forgery can involve imitating the style of a famous author. If an original manuscript, typewritten text, or recording is available, then the medium itself (or its packaging – anything from a box to e-mail headers) can help prove or disprove the authenticity of the document. However, text, audio, and video can be copied into new media, possibly leaving only the informational content itself to use in authentication. Various systems have been invented to allow authors to provide a means for readers to reliably authenticate that a given message originated from or was relayed by them. These involve authentication factors such as a shared secret known only to the communicating parties or an electronic signature.

The opposite problem is the detection of plagiarism, where information from a different author is passed off as a person's own work. A common technique for proving plagiarism is the discovery of another copy of the same or very similar text, which has different attribution. In some cases, excessively high quality or a style mismatch may raise suspicion of plagiarism.

Literacy and literature authentication

In literacy, authentication is a reader's process of questioning the veracity of an aspect of literature and then verifying those questions via research. The fundamental question for authentication of literature is: does one believe it? Related to that, an authentication project is therefore a reading and writing activity in which students document the relevant research process. It builds students' critical literacy. The documentation materials for literature go beyond narrative texts and likely include informational texts, primary sources, and multimedia. The process typically involves both internet and hands-on library research. When authenticating historical fiction in particular, readers consider the extent to which the major historical events, as well as the culture portrayed (e.g., the language, clothing, food, gender roles), are believable for the period.

History and state-of-the-art

NSA KAL-55B Tactical Authentication System used by the U.S. military during the Vietnam War (National Cryptologic Museum)

Historically, fingerprints have been used as the most authoritative method of authentication, but court cases in the US and elsewhere have raised fundamental doubts about fingerprint reliability. Outside of the legal system as well, fingerprints are easily spoofable, with British Telecom's top computer security official noting that "few" fingerprint readers have not already been tricked by one spoof or another. Hybrid or two-tiered authentication methods offer a compelling solution, such as private keys encrypted by fingerprint inside of a USB device.

In a computer data context, cryptographic methods have been developed (see digital signature and challenge–response authentication) which are currently not spoofable if and only if the originator's key has not been compromised. That the originator (or anyone other than an attacker) knows (or doesn't know) about a compromise is irrelevant. It is not known whether these cryptographically based authentication methods are provably secure, since unanticipated mathematical developments may make them vulnerable to attack in the future. If that were to occur, it could call into question much of the authentication in the past. In particular, a digitally signed contract may be questioned when a new attack on the cryptography underlying the signature is discovered.
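
A minimal sketch of the challenge–response idea referred to above: the verifier issues a fresh random challenge, and the claimant returns a keyed MAC over it, proving possession of the shared key without revealing it; the names and sizes are illustrative:

```python
import hashlib, hmac, secrets

def issue_challenge() -> bytes:
    # Verifier: a fresh random nonce, never reused, to prevent replay.
    return secrets.token_bytes(32)

def respond(shared_key: bytes, challenge: bytes) -> bytes:
    # Claimant: prove possession of the key without revealing it.
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def verify(shared_key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Usage: the key is agreed out of band; a fresh nonce per attempt
# means an eavesdropped response cannot be replayed later.
key = secrets.token_bytes(32)
nonce = issue_challenge()
assert verify(key, nonce, respond(key, nonce))
```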

Authorization

A military police officer checks a driver's identification card before allowing her to enter a military base.

The process of authorization is distinct from that of authentication. Whereas authentication is the process of verifying that "you are who you say you are", authorization is the process of verifying that "you are permitted to do what you are trying to do". While authorization often happens immediately after authentication (e.g., when logging into a computer system), this does not mean authorization presupposes authentication: an anonymous agent could be authorized to a limited action set.

Access control

One familiar use of authentication and authorization is access control. A computer system that is supposed to be used only by those authorized must attempt to detect and exclude the unauthorized. Access to it is therefore usually controlled by insisting on an authentication procedure to establish with some degree of confidence the identity of the user, granting privileges established for that identity.

Experimental physics

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Experimental_physics   ...