Universe

From Wikipedia, the free encyclopedia

The Hubble Ultra-Deep Field image shows some of the most remote galaxies visible with present technology, each consisting of billions of stars. (Apparent image area about 1/79 that of a full moon)

Age (within Lambda-CDM model): 13.799 ± 0.021 billion years
Diameter: Unknown. Diameter of the observable universe: 8.8×10²⁶ m (28.5 Gpc or 93 Gly)
Mass (ordinary matter): At least 10⁵³ kg
Average density (including the contribution from energy): 9.9×10⁻³⁰ g/cm³
Average temperature: 2.72548 K (−270.4 °C or −454.8 °F)
Main contents: Ordinary (baryonic) matter (4.9%), dark matter (26.8%), dark energy (68.3%)
Shape: Flat with a 0.4% margin of error

The universe (Latin: universus) is all of space and time and their contents, including planets, stars, galaxies, and all other forms of matter and energy. The Big Bang theory is the prevailing cosmological description of the development of the universe. According to this theory, space and time emerged together 13.787±0.020 billion years ago, and the universe has been expanding ever since. While the spatial size of the entire universe is unknown, the cosmic inflation equation indicates that it must have a minimum diameter of 23 trillion light years, and it is possible to measure the size of the observable universe, which is approximately 93 billion light-years in diameter at the present day.

The earliest cosmological models of the universe were developed by ancient Greek and Indian philosophers and were geocentric, placing Earth at the center. Over the centuries, more precise astronomical observations led Nicolaus Copernicus to develop the heliocentric model with the Sun at the center of the Solar System. In developing the law of universal gravitation, Isaac Newton built upon Copernicus's work as well as Johannes Kepler's laws of planetary motion and observations by Tycho Brahe.

Further observational improvements led to the realization that the Sun is one of hundreds of billions of stars in the Milky Way, which is one of a few hundred billion galaxies in the universe. Many of the stars in a galaxy have planets. At the largest scale, galaxies are distributed uniformly and the same in all directions, meaning that the universe has neither an edge nor a center. At smaller scales, galaxies are distributed in clusters and superclusters which form immense filaments and voids in space, creating a vast foam-like structure. Discoveries in the early 20th century have suggested that the universe had a beginning and that space has been expanding since then at an increasing rate.

According to the Big Bang theory, the energy and matter initially present have become less dense as the universe expanded. After an initial accelerated expansion called the inflationary epoch at around 10⁻³² seconds, and the separation of the four known fundamental forces, the universe gradually cooled and continued to expand, allowing the first subatomic particles and simple atoms to form. Dark matter gradually gathered, forming a foam-like structure of filaments and voids under the influence of gravity. Giant clouds of hydrogen and helium were gradually drawn to the places where dark matter was most dense, forming the first galaxies, stars, and everything else seen today.

From studying the movement of galaxies, it has been discovered that the universe contains much more matter than is accounted for by visible objects: stars, galaxies, nebulas and interstellar gas. This unseen matter is known as dark matter ("dark" meaning that there is a wide range of strong indirect evidence that it exists, but it has not yet been detected directly). The ΛCDM model is the most widely accepted model of the universe. It suggests that about 69.2%±1.2% [2015] of the mass and energy in the universe is a cosmological constant (or, in extensions to ΛCDM, other forms of dark energy, such as a scalar field) which is responsible for the current expansion of space, and about 25.8%±1.1% [2015] is dark matter. Ordinary ("baryonic") matter is therefore only 4.84%±0.1% [2015] of the physical universe. Stars, planets, and visible gas clouds form only about 6% of this ordinary matter.

There are many competing hypotheses about the ultimate fate of the universe and about what, if anything, preceded the Big Bang, while some physicists and philosophers refuse to speculate, doubting that information about prior states will ever be accessible. Some physicists have suggested various multiverse hypotheses, in which our universe might be one among many universes that likewise exist.

Definition

Hubble Space Telescope: Ultra Deep Field galaxies to Legacy Field zoom-out (video, 00:50; May 2, 2019)

The physical universe is defined as all of space and time (collectively referred to as spacetime) and their contents. Such contents comprise all of energy in its various forms, including electromagnetic radiation and matter, and therefore planets, moons, stars, galaxies, and the contents of intergalactic space. The universe also includes the physical laws that influence energy and matter, such as conservation laws, classical mechanics, and relativity.

The universe is often defined as "the totality of existence", or everything that exists, everything that has existed, and everything that will exist. In fact, some philosophers and scientists support the inclusion of ideas and abstract concepts—such as mathematics and logic—in the definition of the universe. The word universe may also refer to concepts such as the cosmos, the world, and nature.

Etymology

The word universe derives from the Old French word univers, which in turn derives from the Latin word universum. The Latin word was used by Cicero and later Latin authors in many of the same senses as the modern English word is used.

Synonyms

A term for universe among the ancient Greek philosophers from Pythagoras onwards was τὸ πᾶν (tò pân) 'the all', defined as all matter and all space, and τὸ ὅλον (tò hólon) 'all things', which did not necessarily include the void. Another synonym was ὁ κόσμος (ho kósmos) meaning 'the world, the cosmos'. Synonyms are also found in Latin authors (totum, mundus, natura) and survive in modern languages, e.g., the German words Das All, Weltall, and Natur for universe. The same synonyms are found in English, such as everything (as in the theory of everything), the cosmos (as in cosmology), the world (as in the many-worlds interpretation), and nature (as in natural laws or natural philosophy).

Chronology and the Big Bang

The prevailing model for the evolution of the universe is the Big Bang theory. The Big Bang model states that the earliest state of the universe was an extremely hot and dense one, and that the universe subsequently expanded and cooled. The model is based on general relativity and on simplifying assumptions such as the homogeneity and isotropy of space. A version of the model with a cosmological constant (Lambda) and cold dark matter, known as the Lambda-CDM model, is the simplest model that provides a reasonably good account of various observations about the universe. The Big Bang model accounts for observations such as the correlation of distance and redshift of galaxies, the ratio of the number of hydrogen to helium atoms, and the microwave radiation background.

In this diagram, time passes from left to right, so at any given time, the universe is represented by a disk-shaped "slice" of the diagram

The initial hot, dense state is called the Planck epoch, a brief period extending from time zero to one Planck time unit of approximately 10⁻⁴³ seconds. During the Planck epoch, all types of matter and all types of energy were concentrated into a dense state, and gravity—currently the weakest by far of the four known forces—is believed to have been as strong as the other fundamental forces, and all the forces may have been unified. Since the Planck epoch, space has been expanding to its present scale, with a very short but intense period of cosmic inflation believed to have occurred within the first 10⁻³² seconds. This was a kind of expansion different from those we can see around us today. Objects in space did not physically move; instead the metric that defines space itself changed. Although objects in spacetime cannot move faster than the speed of light, this limitation does not apply to the metric governing spacetime itself. This initial period of inflation is believed to explain why space appears to be very flat, and much larger than the distance light could have traveled since the start of the universe.
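
As a rough illustration (a standard simplification of inflationary models, not a figure quoted in this article), during inflation the scale factor a(t) grows nearly exponentially,

    a(t) ∝ e^(H t),    D(t) = a(t) · Δx,

so the proper distance D(t) between two comoving points with fixed coordinate separation Δx grows rapidly even though neither point moves through space.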

Within the first fraction of a second of the universe's existence, the four fundamental forces had separated. As the universe continued to cool down from its inconceivably hot state, various types of subatomic particles were able to form in short periods of time known as the quark epoch, the hadron epoch, and the lepton epoch. Together, these epochs encompassed less than 10 seconds of time following the Big Bang. These elementary particles associated stably into ever larger combinations, including stable protons and neutrons, which then formed more complex atomic nuclei through nuclear fusion. This process, known as Big Bang nucleosynthesis, only lasted for about 17 minutes and ended about 20 minutes after the Big Bang, so only the fastest and simplest reactions occurred. About 25% of the protons and all the neutrons in the universe, by mass, were converted to helium, with small amounts of deuterium (a form of hydrogen) and traces of lithium. Any other element was only formed in very tiny quantities. The other 75% of the protons remained unaffected, as hydrogen nuclei.
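
The 25% figure follows from a standard back-of-the-envelope estimate (assuming the commonly quoted neutron-to-proton ratio of about 1 to 7 at the time of nucleosynthesis): since each helium-4 nucleus binds two neutrons with two protons, the helium mass fraction is roughly

    Y_p ≈ 2(n/p) / (1 + n/p) ≈ 2 × (1/7) / (1 + 1/7) = 0.25,

so about a quarter of the baryonic mass ends up in helium and the rest remains as hydrogen.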

After nucleosynthesis ended, the universe entered a period known as the photon epoch. During this period, the universe was still far too hot for matter to form neutral atoms, so it contained a hot, dense, foggy plasma of negatively charged electrons, neutral neutrinos and positive nuclei. After about 377,000 years, the universe had cooled enough that electrons and nuclei could form the first stable atoms. This is known as recombination for historical reasons; in fact electrons and nuclei were combining for the first time. Unlike plasma, neutral atoms are transparent to many wavelengths of light, so for the first time the universe also became transparent. The photons released ("decoupled") when these atoms formed can still be seen today; they form the cosmic microwave background (CMB).

As the universe expands, the energy density of electromagnetic radiation decreases more quickly than does that of matter because the energy of a photon decreases with its wavelength. At around 47,000 years, the energy density of matter became larger than that of photons and neutrinos, and began to dominate the large scale behavior of the universe. This marked the end of the radiation-dominated era and the start of the matter-dominated era.
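
In terms of the cosmic scale factor R(t) discussed later in this article (a standard result, stated here for clarity), matter dilutes with volume while radiation additionally loses energy to redshift,

    ρ_matter ∝ R⁻³,    ρ_radiation ∝ R⁻⁴,

which is why radiation, dominant at early times, inevitably gives way to matter as the universe expands.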

In the earliest stages of the universe, tiny fluctuations within the universe's density led to concentrations of dark matter gradually forming. Ordinary matter, attracted to these by gravity, formed large gas clouds and eventually, stars and galaxies, where the dark matter was most dense, and voids where it was least dense. After around 100–300 million years, the first stars formed, known as Population III stars. These were probably very massive, luminous, non-metallic and short-lived. They were responsible for the gradual reionization of the universe between about 200–500 million years and 1 billion years, and also for seeding the universe with elements heavier than helium, through stellar nucleosynthesis. The universe also contains a mysterious energy—possibly a scalar field—called dark energy, the density of which does not change over time. After about 9.8 billion years, the universe had expanded sufficiently so that the density of matter was less than the density of dark energy, marking the beginning of the present dark-energy-dominated era. In this era, the expansion of the universe is accelerating due to dark energy.

Physical properties

Of the four fundamental interactions, gravitation is dominant at astronomical length scales. Gravity's effects are cumulative; by contrast, the effects of positive and negative charges tend to cancel one another, making electromagnetism relatively insignificant on astronomical length scales. The remaining two interactions, the weak and strong nuclear forces, decline very rapidly with distance; their effects are confined mainly to sub-atomic length scales.

The universe appears to have much more matter than antimatter, an asymmetry possibly related to CP violation. This imbalance between matter and antimatter is partially responsible for the existence of all matter today, since matter and antimatter, if equally produced at the Big Bang, would have completely annihilated each other and left only photons as a result of their interaction. The universe also appears to have neither net momentum nor angular momentum, which follows accepted physical laws if the universe is finite. These laws are Gauss's law and the non-divergence of the stress-energy-momentum pseudotensor.

Constituent spatial scales of the observable universe

This diagram shows Earth's location in the universe on increasingly larger scales. The images, labeled along their left edge, increase in size from left to right, then from top to bottom.

Size and regions

Television signals broadcast from Earth will never reach the edges of this image.

According to the general theory of relativity, far regions of space may never interact with ours even in the lifetime of the universe due to the finite speed of light and the ongoing expansion of space. For example, radio messages sent from Earth may never reach some regions of space, even if the universe were to exist forever: space may expand faster than light can traverse it.

The spatial region that can be observed with telescopes is called the observable universe, which depends on the location of the observer. The proper distance—the distance as it would be measured at a specific time, including the present—between Earth and the edge of the observable universe is 46 billion light-years (14 billion parsecs), making the diameter of the observable universe about 93 billion light-years (28 billion parsecs). The distance the light from the edge of the observable universe has travelled is very close to the age of the universe times the speed of light, 13.8 billion light-years (4.2×10⁹ pc), but this does not represent the distance at any given time because the edge of the observable universe and the Earth have since moved further apart. For comparison, the diameter of a typical galaxy is 30,000 light-years (9,198 parsecs), and the typical distance between two neighboring galaxies is 3 million light-years (919.8 kiloparsecs). As an example, the Milky Way is roughly 100,000–180,000 light-years in diameter, and the nearest sister galaxy to the Milky Way, the Andromeda Galaxy, is located roughly 2.5 million light-years away.
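
These figures can be cross-checked with ordinary unit conversions. The short Python sketch below is only an approximate consistency check; the conversion constants are standard values assumed by the sketch, not measurements from this article:

    # Approximate consistency check of the quoted distances.
    LY_IN_M = 9.4607e15       # metres per light-year (standard value)
    LY_PER_PC = 3.2616        # light-years per parsec (standard value)

    radius_ly = 46e9                 # proper radius of the observable universe, in light-years
    diameter_ly = 2 * radius_ly      # about 92-93 billion light-years

    print(f"diameter ≈ {diameter_ly * LY_IN_M:.1e} m")            # ≈ 8.7e26 m
    print(f"radius   ≈ {radius_ly / LY_PER_PC / 1e9:.0f} Gpc")    # ≈ 14 billion parsecs

The outputs land close to the 8.8×10²⁶ m and 14-billion-parsec figures quoted above.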

Because we cannot observe space beyond the edge of the observable universe, it is unknown whether the size of the universe in its totality is finite or infinite. Estimates suggest that the whole universe, if finite, must be more than 250 times larger than the observable universe. Some disputed estimates for the total size of the universe, if finite, reach as high as 10^(10^(10^122)) megaparsecs, as implied by a suggested resolution of the No-Boundary Proposal.

Age and expansion

Astronomers calculate the age of the universe by assuming that the Lambda-CDM model accurately describes the evolution of the Universe from a very uniform, hot, dense primordial state to its present state and measuring the cosmological parameters which constitute the model. This model is well understood theoretically and supported by recent high-precision astronomical observations such as WMAP and Planck. Commonly, the set of observations fitted includes the cosmic microwave background anisotropy, the brightness/redshift relation for Type Ia supernovae, and large-scale galaxy clustering including the baryon acoustic oscillation feature. Other observations, such as the Hubble constant, the abundance of galaxy clusters, weak gravitational lensing and globular cluster ages, are generally consistent with these, providing a check of the model, but are less accurately measured at present. Assuming that the Lambda-CDM model is correct, the measurements of the parameters using a variety of techniques by numerous experiments yield a best value of the age of the universe as of 2015 of 13.799 ± 0.021 billion years.

Astronomers have discovered stars in the Milky Way galaxy that are almost 13.6 billion years old.

Over time, the universe and its contents have evolved; for example, the relative population of quasars and galaxies has changed and space itself has expanded. Due to this expansion, scientists on Earth can observe the light from a galaxy 30 billion light-years away even though that light has traveled for only 13 billion years; the very space between them has expanded. This expansion is consistent with the observation that the light from distant galaxies has been redshifted; the photons emitted have been stretched to longer wavelengths and lower frequency during their journey. Analyses of Type Ia supernovae indicate that the spatial expansion is accelerating.
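
The redshift mentioned here has a simple form in terms of the scale factor R(t) used later in this article (a standard relation, given for clarity):

    1 + z = λ_observed / λ_emitted = R(t_observed) / R(t_emitted),

so a photon received today from an epoch when the universe was half its present scale arrives with its wavelength doubled (z = 1).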

The more matter there is in the universe, the stronger the mutual gravitational pull of the matter. If the universe were too dense then it would re-collapse into a gravitational singularity. However, if the universe contained too little matter then the self-gravity would be too weak for astronomical structures, like galaxies or planets, to form. Since the Big Bang, the universe has expanded monotonically. Perhaps unsurprisingly, our universe has just the right mass-energy density, equivalent to about 5 protons per cubic metre, which has allowed it to expand for the last 13.8 billion years, giving time to form the universe as observed today.

There are dynamical forces acting on the particles in the universe which affect the expansion rate. Before 1998, it was expected that the expansion rate would be decreasing as time went on due to the influence of gravitational interactions in the universe; an additional observable quantity, the deceleration parameter, was therefore defined, which most cosmologists expected to be positive and related to the matter density of the universe. In 1998, the deceleration parameter was measured by two different groups to be negative, approximately −0.55, which technically implies that the second derivative of the cosmic scale factor has been positive in the last 5–6 billion years. This acceleration does not, however, imply that the Hubble parameter is currently increasing; see deceleration parameter for details.
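
In terms of the scale factor R(t), the deceleration parameter is conventionally defined as (standard definition, quoted for clarity)

    q = − (R̈ R) / Ṙ²,

so a measured q ≈ −0.55 < 0 is equivalent to R̈ > 0, i.e. accelerating expansion, even though the Hubble parameter H = Ṙ/R need not be increasing.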

Spacetime

Spacetimes are the arenas in which all physical events take place. The basic elements of spacetimes are events. In any given spacetime, an event is defined as a unique position at a unique time. A spacetime is the union of all events (in the same way that a line is the union of all of its points), formally organized into a manifold.

Matter and energy bend spacetime; curved spacetime, in turn, constrains how matter and energy behave. There is no point in considering one without the other.

The universe appears to be a smooth spacetime continuum consisting of three spatial dimensions and one temporal (time) dimension (an event in the spacetime of the physical universe can therefore be identified by a set of four coordinates: (x, y, z, t) ). On average, space is observed to be very nearly flat (with a curvature close to zero), meaning that Euclidean geometry is empirically true with high accuracy throughout most of the Universe. Spacetime also appears to have a simply connected topology, in analogy with a sphere, at least on the length-scale of the observable universe. However, present observations cannot exclude the possibilities that the universe has more dimensions (which is postulated by theories such as string theory) and that its spacetime may have a multiply connected global topology, in analogy with the cylindrical or toroidal topologies of two-dimensional spaces. The spacetime of the universe is usually interpreted from a Euclidean perspective, with space as consisting of three dimensions, and time as consisting of one dimension, the "fourth dimension". By combining space and time into a single manifold called Minkowski space, physicists have simplified a large number of physical theories, as well as described in a more uniform way the workings of the universe at both the supergalactic and subatomic levels.
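
A concrete way to see how space and time combine into one manifold (a standard expression, in one common sign convention, included for illustration) is the Minkowski interval between two nearby events,

    ds² = −c² dt² + dx² + dy² + dz²,

a quantity all inertial observers agree on even when they disagree about the individual space and time separations.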

Spacetime events are not absolutely defined spatially and temporally but rather are known to be relative to the motion of an observer. Minkowski space approximates the universe without gravity; the pseudo-Riemannian manifolds of general relativity describe spacetime with matter and gravity.

Shape

The three possible options for the shape of the universe

General relativity describes how spacetime is curved and bent by mass and energy (gravity). The topology or geometry of the universe includes both local geometry in the observable universe and global geometry. Cosmologists often work with a given space-like slice of spacetime called the comoving coordinates. The section of spacetime which can be observed is the backward light cone, which delimits the cosmological horizon. The cosmological horizon (also called the particle horizon or the light horizon) is the maximum distance from which particles can have traveled to the observer in the age of the universe. This horizon represents the boundary between the observable and the unobservable regions of the universe. The existence, properties, and significance of a cosmological horizon depend on the particular cosmological model.

An important parameter determining the future evolution of the universe is the density parameter, Omega (Ω), defined as the average matter density of the universe divided by a critical value of that density. This selects one of three possible geometries depending on whether Ω is equal to, less than, or greater than 1. These are called, respectively, the flat, open and closed universes.
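
Explicitly (a standard definition and a rough numerical estimate, not values quoted in this article):

    Ω = ρ / ρ_c,    ρ_c = 3H₀² / (8πG),

and for a Hubble constant of about 68 km/s/Mpc the critical density ρ_c works out to roughly 9×10⁻³⁰ g/cm³, equivalent to about five hydrogen atoms per cubic metre, consistent with the average density and the "5 protons per cubic metre" figure quoted elsewhere in this article.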

Observations, including the Cosmic Background Explorer (COBE), Wilkinson Microwave Anisotropy Probe (WMAP), and Planck maps of the CMB, suggest that the universe is infinite in extent with a finite age, as described by the Friedmann–Lemaître–Robertson–Walker (FLRW) models. These FLRW models thus support inflationary models and the standard model of cosmology, describing a flat, homogeneous universe presently dominated by dark matter and dark energy.

Support of life

The universe may be fine-tuned; the Fine-tuned universe hypothesis is the proposition that the conditions that allow the existence of observable life in the universe can only occur when certain universal fundamental physical constants lie within a very narrow range of values, so that if any of several fundamental constants were only slightly different, the universe would have been unlikely to be conducive to the establishment and development of matter, astronomical structures, elemental diversity, or life as it is understood. The proposition is discussed among philosophers, scientists, theologians, and proponents of creationism.

Composition

The universe is composed almost completely of dark energy, dark matter, and ordinary matter. Other contents are electromagnetic radiation (estimated to constitute from 0.005% to close to 0.01% of the total mass-energy of the universe) and antimatter.

The proportions of all types of matter and energy have changed over the history of the universe. The total amount of electromagnetic radiation generated within the universe has decreased by 1/2 in the past 2 billion years. Today, ordinary matter, which includes atoms, stars, galaxies, and life, accounts for only 4.9% of the contents of the Universe. The present overall density of this type of matter is very low, roughly 4.5×10⁻³¹ grams per cubic centimetre, corresponding to a density of the order of only one proton for every four cubic metres of volume. The nature of both dark energy and dark matter is unknown. Dark matter, a mysterious form of matter that has not yet been identified, accounts for 26.8% of the cosmic contents. Dark energy, which is the energy of empty space and is causing the expansion of the universe to accelerate, accounts for the remaining 68.3% of the contents.
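
The two ways of stating this density agree to within rounding (a quick check using the standard proton mass of about 1.67×10⁻²⁴ g): one proton spread over four cubic metres gives

    1.67×10⁻²⁴ g / (4×10⁶ cm³) ≈ 4×10⁻³¹ g/cm³,

close to the quoted 4.5×10⁻³¹ g/cm³.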

The formation of clusters and large-scale filaments in the cold dark matter model with dark energy. The frames show the evolution of structures in a 43 million parsecs (or 140 million light-years) box from redshift of 30 to the present epoch (upper left z=30 to lower right z=0).
 
A map of the superclusters and voids nearest to Earth

Matter, dark matter, and dark energy are distributed homogeneously throughout the universe over length scales longer than 300 million light-years or so. However, over shorter length-scales, matter tends to clump hierarchically; many atoms are condensed into stars, most stars into galaxies, most galaxies into clusters, superclusters and, finally, large-scale galactic filaments. The observable universe contains as many as 200 billion galaxies and, overall, as many as an estimated 1×10²⁴ stars (more stars than all the grains of sand on planet Earth). Typical galaxies range from dwarfs with as few as ten million (10⁷) stars up to giants with one trillion (10¹²) stars. Between the larger structures are voids, which are typically 10–150 Mpc (33 million–490 million ly) in diameter. The Milky Way is in the Local Group of galaxies, which in turn is in the Laniakea Supercluster. This supercluster spans over 500 million light-years, while the Local Group spans over 10 million light-years. The Universe also has vast regions of relative emptiness; the largest known void measures 1.8 billion ly (550 Mpc) across.

Comparison of the contents of the universe today to 380,000 years after the Big Bang as measured with 5 year WMAP data (from 2008). (Due to rounding errors, the sum of these numbers is not 100%). This reflects the 2008 limits of WMAP's ability to define dark matter and dark energy.

The observable universe is isotropic on scales significantly larger than superclusters, meaning that the statistical properties of the universe are the same in all directions as observed from Earth. The universe is bathed in highly isotropic microwave radiation that corresponds to a thermal equilibrium blackbody spectrum of roughly 2.72548 kelvins. The hypothesis that the large-scale universe is homogeneous and isotropic is known as the cosmological principle. A universe that is both homogeneous and isotropic looks the same from all vantage points and has no center.
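
As an illustration of why this radiation peaks in the microwave band (a standard application of Wien's displacement law, with b ≈ 2.898×10⁻³ m·K assumed):

    λ_peak = b / T ≈ 2.898×10⁻³ m·K / 2.725 K ≈ 1.06 mm,

i.e. a wavelength of about one millimetre, squarely in the microwave region.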

Dark energy

An explanation for why the expansion of the universe is accelerating remains elusive. It is often attributed to "dark energy", an unknown form of energy that is hypothesized to permeate space. On a mass–energy equivalence basis, the density of dark energy (~7×10⁻³⁰ g/cm³) is much less than the density of ordinary matter or dark matter within galaxies. However, in the present dark-energy era, it dominates the mass–energy of the universe because it is uniform across space.

Two proposed forms for dark energy are the cosmological constant, a constant energy density filling space homogeneously, and scalar fields such as quintessence or moduli, dynamic quantities whose energy density can vary in time and space. Contributions from scalar fields that are constant in space are usually also included in the cosmological constant. The cosmological constant can be formulated to be equivalent to vacuum energy. Scalar fields having only a slight amount of spatial inhomogeneity would be difficult to distinguish from a cosmological constant.

Dark matter

Dark matter is a hypothetical kind of matter that is invisible to the entire electromagnetic spectrum, but which accounts for most of the matter in the universe. The existence and properties of dark matter are inferred from its gravitational effects on visible matter, radiation, and the large-scale structure of the universe. Other than neutrinos, a form of hot dark matter, dark matter has not been detected directly, making it one of the greatest mysteries in modern astrophysics. Dark matter neither emits nor absorbs light or any other electromagnetic radiation at any significant level. Dark matter is estimated to constitute 26.8% of the total mass–energy and 84.5% of the total matter in the universe.

Ordinary matter

The remaining 4.9% of the mass–energy of the universe is ordinary matter, that is, atoms, ions, electrons and the objects they form. This matter includes stars, which produce nearly all of the light we see from galaxies, as well as interstellar gas in the interstellar and intergalactic media, planets, and all the objects from everyday life that we can bump into, touch or squeeze. As a matter of fact, the great majority of ordinary matter in the universe is unseen, since visible stars and gas inside galaxies and clusters account for less than 10 per cent of the ordinary matter contribution to the mass-energy density of the universe.

Ordinary matter commonly exists in four states (or phases): solid, liquid, gas, and plasma. However, advances in experimental techniques have revealed other previously theoretical phases, such as Bose–Einstein condensates and fermionic condensates.

Ordinary matter is composed of two types of elementary particles: quarks and leptons. For example, the proton is formed of two up quarks and one down quark; the neutron is formed of two down quarks and one up quark; and the electron is a kind of lepton. An atom consists of an atomic nucleus, made up of protons and neutrons, and electrons that orbit the nucleus. Because most of the mass of an atom is concentrated in its nucleus, which is made up of baryons, astronomers often use the term baryonic matter to describe ordinary matter, although a small fraction of this "baryonic matter" is electrons.

Soon after the Big Bang, primordial protons and neutrons formed from the quark–gluon plasma of the early universe as it cooled below two trillion degrees. A few minutes later, in a process known as Big Bang nucleosynthesis, nuclei formed from the primordial protons and neutrons. This nucleosynthesis formed lighter elements, those with small atomic numbers up to lithium and beryllium, but the abundance of heavier elements dropped off sharply with increasing atomic number. Some boron may have been formed at this time, but the next heavier element, carbon, was not formed in significant amounts. Big Bang nucleosynthesis shut down after about 20 minutes due to the rapid drop in temperature and density of the expanding universe. Subsequent formation of heavier elements resulted from stellar nucleosynthesis and supernova nucleosynthesis.

Particles

Standard model of elementary particles: the 12 fundamental fermions and 4 fundamental bosons. Brown loops indicate which bosons (red) couple to which fermions (purple and green). Columns are three generations of matter (fermions) and one of forces (bosons). In the first three columns, two rows contain quarks and two leptons. The top two rows' columns contain up (u) and down (d) quarks, charm (c) and strange (s) quarks, top (t) and bottom (b) quarks, and photon (γ) and gluon (g), respectively. The bottom two rows' columns contain electron neutrino (νe) and electron (e), muon neutrino (νμ) and muon (μ), tau neutrino (ντ) and tau (τ), and the Z0 and W± carriers of the weak force. Mass, charge, and spin are listed for each particle.
 

Ordinary matter and the forces that act on matter can be described in terms of elementary particles. These particles are sometimes described as being fundamental, since they have an unknown substructure, and it is unknown whether or not they are composed of smaller and even more fundamental particles. Of central importance is the Standard Model, a theory that is concerned with electromagnetic interactions and the weak and strong nuclear interactions. The Standard Model is supported by the experimental confirmation of the existence of particles that compose matter: quarks and leptons, and their corresponding "antimatter" duals, as well as the force particles that mediate interactions: the photon, the W and Z bosons, and the gluon. The Standard Model predicted the existence of the recently discovered Higgs boson, a particle that is a manifestation of a field within the universe that can endow particles with mass. Because of its success in explaining a wide variety of experimental results, the Standard Model is sometimes regarded as a "theory of almost everything". The Standard Model does not, however, accommodate gravity. A true force-particle "theory of everything" has not been attained.

Hadrons

A hadron is a composite particle made of quarks held together by the strong force. Hadrons are categorized into two families: baryons (such as protons and neutrons) made of three quarks, and mesons (such as pions) made of one quark and one antiquark. Of the hadrons, protons are stable, and neutrons bound within atomic nuclei are stable. Other hadrons are unstable under ordinary conditions and are thus insignificant constituents of the modern universe. From approximately 10⁻⁶ seconds after the Big Bang, during a period known as the hadron epoch, the temperature of the universe had fallen sufficiently to allow quarks to bind together into hadrons, and the mass of the universe was dominated by hadrons. Initially, the temperature was high enough to allow the formation of hadron/anti-hadron pairs, which kept matter and antimatter in thermal equilibrium. However, as the temperature of the universe continued to fall, hadron/anti-hadron pairs were no longer produced. Most of the hadrons and anti-hadrons were then eliminated in particle-antiparticle annihilation reactions, leaving a small residual of hadrons by the time the universe was about one second old.

Leptons

A lepton is an elementary, half-integer spin particle that does not undergo strong interactions but is subject to the Pauli exclusion principle; no two leptons of the same species can be in exactly the same state at the same time. Two main classes of leptons exist: charged leptons (also known as the electron-like leptons), and neutral leptons (better known as neutrinos). Electrons are stable and the most common charged lepton in the universe, whereas muons and taus are unstable particles that quickly decay after being produced in high energy collisions, such as those involving cosmic rays or carried out in particle accelerators. Charged leptons can combine with other particles to form various composite particles such as atoms and positronium. The electron governs nearly all of chemistry, as it is found in atoms and is directly tied to all chemical properties. Neutrinos, by contrast, stream throughout the universe but rarely interact with anything, and are consequently rarely observed.

The lepton epoch was the period in the evolution of the early universe in which the leptons dominated the mass of the universe. It started roughly 1 second after the Big Bang, after the majority of hadrons and anti-hadrons annihilated each other at the end of the hadron epoch. During the lepton epoch the temperature of the universe was still high enough to create lepton/anti-lepton pairs, so leptons and anti-leptons were in thermal equilibrium. Approximately 10 seconds after the Big Bang, the temperature of the universe had fallen to the point where lepton/anti-lepton pairs were no longer created. Most leptons and anti-leptons were then eliminated in annihilation reactions, leaving a small residue of leptons. The mass of the universe was then dominated by photons as it entered the following photon epoch.

Photons

A photon is the quantum of light and all other forms of electromagnetic radiation. It is the force carrier for the electromagnetic force, even when static via virtual photons. The effects of this force are easily observable at the microscopic and at the macroscopic level because the photon has zero rest mass; this allows long distance interactions. Like all elementary particles, photons are currently best explained by quantum mechanics and exhibit wave–particle duality, exhibiting properties of waves and of particles.

The photon epoch started after most leptons and anti-leptons were annihilated at the end of the lepton epoch, about 10 seconds after the Big Bang. Atomic nuclei were created in the process of nucleosynthesis which occurred during the first few minutes of the photon epoch. For the remainder of the photon epoch the universe contained a hot dense plasma of nuclei, electrons and photons. About 380,000 years after the Big Bang, the temperature of the Universe fell to the point where nuclei could combine with electrons to create neutral atoms. As a result, photons no longer interacted frequently with matter and the universe became transparent. The highly redshifted photons from this period form the cosmic microwave background. Tiny variations in temperature and density detectable in the CMB were the early "seeds" from which all subsequent structure formation took place.

Cosmological models

Model of the universe based on general relativity

General relativity is the geometric theory of gravitation published by Albert Einstein in 1915 and the current description of gravitation in modern physics. It is the basis of current cosmological models of the universe. General relativity generalizes special relativity and Newton's law of universal gravitation, providing a unified description of gravity as a geometric property of space and time, or spacetime. In particular, the curvature of spacetime is directly related to the energy and momentum of whatever matter and radiation are present. The relation is specified by the Einstein field equations, a system of partial differential equations. In general relativity, the distribution of matter and energy determines the geometry of spacetime, which in turn describes the acceleration of matter. Therefore, solutions of the Einstein field equations describe the evolution of the universe. Combined with measurements of the amount, type, and distribution of matter in the universe, the equations of general relativity describe the evolution of the universe over time.

With the assumption of the cosmological principle that the universe is homogeneous and isotropic everywhere, a specific solution of the field equations that describes the universe is the metric tensor called the Friedmann–Lemaître–Robertson–Walker metric,

    ds² = −c² dt² + R(t)² [ dr²/(1 − k r²) + r² (dθ² + sin²θ dφ²) ],

where (r, θ, φ) correspond to a spherical coordinate system. This metric has only two undetermined parameters. An overall dimensionless length scale factor R describes the size scale of the universe as a function of time; an increase in R is the expansion of the universe. A curvature index k describes the geometry. The index k is defined so that it can take only one of three values: 0, corresponding to flat Euclidean geometry; 1, corresponding to a space of positive curvature; or −1, corresponding to a space of negative curvature. The value of R as a function of time t depends upon k and the cosmological constant Λ. The cosmological constant represents the energy density of the vacuum of space and could be related to dark energy. The equation describing how R varies with time is known as the Friedmann equation after its inventor, Alexander Friedmann.
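
In this notation, the Friedmann equation reads (standard form, quoted for reference)

    (Ṙ/R)² = (8πG/3) ρ − k c²/R² + Λ c²/3,

where ρ is the total matter and radiation density; the three terms on the right correspond to the contributions of matter and radiation, spatial curvature, and the cosmological constant to the expansion rate.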

The solutions for R(t) depend on k and Λ, but some qualitative features of such solutions are general. First and most importantly, the length scale R of the universe can remain constant only if the universe is perfectly isotropic with positive curvature (k=1) and has one precise value of density everywhere, as first noted by Albert Einstein. However, this equilibrium is unstable: because the universe is inhomogeneous on smaller scales, R must change over time. When R changes, all the spatial distances in the universe change in tandem; there is an overall expansion or contraction of space itself. This accounts for the observation that galaxies appear to be flying apart; the space between them is stretching. The stretching of space also accounts for the apparent paradox that two galaxies can be 40 billion light-years apart, although they started from the same point 13.8 billion years ago and never moved faster than the speed of light.

Second, all solutions suggest that there was a gravitational singularity in the past, when R went to zero and matter and energy were infinitely dense. It may seem that this conclusion is uncertain because it is based on the questionable assumptions of perfect homogeneity and isotropy (the cosmological principle) and that only the gravitational interaction is significant. However, the Penrose–Hawking singularity theorems show that a singularity should exist for very general conditions. Hence, according to Einstein's field equations, R grew rapidly from an unimaginably hot, dense state that existed immediately following this singularity (when R had a small, finite value); this is the essence of the Big Bang model of the universe. Understanding the singularity of the Big Bang likely requires a quantum theory of gravity, which has not yet been formulated.

Third, the curvature index k determines the sign of the mean spatial curvature of spacetime averaged over sufficiently large length scales (greater than about a billion light-years). If k=1, the curvature is positive and the universe has a finite volume. A universe with positive curvature is often visualized as a three-dimensional sphere embedded in a four-dimensional space. Conversely, if k is zero or negative, the universe has an infinite volume. It may seem counter-intuitive that an infinite and yet infinitely dense universe could be created in a single instant at the Big Bang when R=0, but exactly that is predicted mathematically when k does not equal 1. By analogy, an infinite plane has zero curvature but infinite area, whereas an infinite cylinder is finite in one direction and a torus is finite in both. A toroidal universe could behave like a normal universe with periodic boundary conditions.

The ultimate fate of the universe is still unknown because it depends critically on the curvature index k and the cosmological constant Λ. If the universe were sufficiently dense, k would equal +1, meaning that its average curvature throughout is positive and the universe will eventually recollapse in a Big Crunch, possibly starting a new universe in a Big Bounce. Conversely, if the universe were insufficiently dense, k would equal 0 or −1 and the universe would expand forever, cooling off and eventually reaching the Big Freeze and the heat death of the universe. Modern data suggests that the rate of expansion of the universe is not decreasing, as originally expected, but increasing; if this continues indefinitely, the universe may eventually reach a Big Rip. Observationally, the universe appears to be flat (k = 0), with an overall density that is very close to the critical value between recollapse and eternal expansion.

Multiverse hypothesis

Some speculative theories have proposed that our universe is but one of a set of disconnected universes, collectively denoted as the multiverse, challenging or enhancing more limited definitions of the universe. Scientific multiverse models are distinct from concepts such as alternate planes of consciousness and simulated reality.

Max Tegmark developed a four-part classification scheme for the different types of multiverses that scientists have suggested in response to various physics problems. An example of such multiverses is the one resulting from the chaotic inflation model of the early universe. Another is the multiverse resulting from the many-worlds interpretation of quantum mechanics. In this interpretation, parallel worlds are generated in a manner similar to quantum superposition and decoherence, with all states of the wave functions being realized in separate worlds. Effectively, in the many-worlds interpretation the multiverse evolves as a universal wavefunction. If the Big Bang that created our multiverse created an ensemble of multiverses, the wave function of the ensemble would be entangled in this sense.

The least controversial, but still highly disputed, category of multiverse in Tegmark's scheme is Level I. The multiverses of this level are composed of distant spacetime events "in our own universe". Tegmark and others have argued that, if space is infinite, or sufficiently large and uniform, identical instances of the history of Earth's entire Hubble volume occur every so often, simply by chance. Tegmark calculated that our nearest so-called doppelgänger is 10^(10^115) metres away from us (a double exponential function larger than a googolplex). However, the arguments used are of a speculative nature. Additionally, it would be impossible to scientifically verify the existence of an identical Hubble volume.

It is possible to conceive of disconnected spacetimes, each existing but unable to interact with one another. An easily visualized metaphor of this concept is a group of separate soap bubbles, in which observers living on one soap bubble cannot interact with those on other soap bubbles, even in principle. According to one common terminology, each "soap bubble" of spacetime is denoted as a universe, whereas our particular spacetime is denoted as the universe, just as we call our moon the Moon. The entire collection of these separate spacetimes is denoted as the multiverse. With this terminology, different universes are not causally connected to each other. In principle, the other unconnected universes may have different dimensionalities and topologies of spacetime, different forms of matter and energy, and different physical laws and physical constants, although such possibilities are purely speculative. Others consider each of several bubbles created as part of chaotic inflation to be separate universes, though in this model these universes all share a causal origin.

Historical conceptions

Historically, there have been many ideas of the cosmos (cosmologies) and its origin (cosmogonies). Theories of an impersonal universe governed by physical laws were first proposed by the Greeks and Indians. Ancient Chinese philosophy encompassed the notion of the universe including both all of space and all of time. Over the centuries, improvements in astronomical observations and theories of motion and gravitation led to ever more accurate descriptions of the universe. The modern era of cosmology began with Albert Einstein's 1915 general theory of relativity, which made it possible to quantitatively predict the origin, evolution, and conclusion of the universe as a whole. Most modern, accepted theories of cosmology are based on general relativity and, more specifically, the predicted Big Bang.

Mythologies

Many cultures have stories describing the origin of the world and universe. Cultures generally regard these stories as having some truth. There are, however, many differing beliefs about how these stories apply among those who believe in a supernatural origin, ranging from a god directly creating the universe as it is now to a god merely setting the "wheels in motion" (for example, via mechanisms such as the Big Bang and evolution).

Ethnologists and anthropologists who study myths have developed various classification schemes for the various themes that appear in creation stories. For example, in one type of story, the world is born from a world egg; such stories include the Finnish epic poem Kalevala, the Chinese story of Pangu or the Indian Brahmanda Purana. In related stories, the universe is created by a single entity emanating or producing something by him- or herself, as in the Tibetan Buddhism concept of Adi-Buddha, the ancient Greek story of Gaia (Mother Earth), the Aztec goddess Coatlicue myth, the ancient Egyptian god Atum story, and the Judeo-Christian Genesis creation narrative in which the Abrahamic God created the universe. In another type of story, the universe is created from the union of male and female deities, as in the Maori story of Rangi and Papa. In other stories, the universe is created by crafting it from pre-existing materials, such as the corpse of a dead god—as from Tiamat in the Babylonian epic Enuma Elish or from the giant Ymir in Norse mythology—or from chaotic materials, as in Izanagi and Izanami in Japanese mythology. In other stories, the universe emanates from fundamental principles, such as Brahman and Prakrti, the creation myth of the Serers, or the yin and yang of the Tao.

Philosophical models

The pre-Socratic Greek philosophers and Indian philosophers developed some of the earliest philosophical concepts of the universe. The earliest Greek philosophers noted that appearances can be deceiving, and sought to understand the underlying reality behind the appearances. In particular, they noted the ability of matter to change forms (e.g., ice to water to steam) and several philosophers proposed that all the physical materials in the world are different forms of a single primordial material, or arche. The first to do so was Thales, who proposed this material to be water. Thales' student, Anaximander, proposed that everything came from the limitless apeiron. Anaximenes proposed the primordial material to be air on account of its perceived attractive and repulsive qualities that cause the arche to condense or dissociate into different forms. Anaxagoras proposed the principle of Nous (Mind), while Heraclitus proposed fire (and spoke of logos). Empedocles proposed the elements to be earth, water, air and fire. His four-element model became very popular. Like Pythagoras, Plato believed that all things were composed of number, with Empedocles' elements taking the form of the Platonic solids. Leucippus and his student Democritus, along with later philosophers, proposed that the universe is composed of indivisible atoms moving through a void (vacuum), although Aristotle did not believe that to be feasible because air, like water, offers resistance to motion. Air will immediately rush in to fill a void, and moreover, without resistance, it would do so indefinitely fast.

Although Heraclitus argued for eternal change, his contemporary Parmenides made the radical suggestion that all change is an illusion, that the true underlying reality is eternally unchanging and of a single nature. Parmenides denoted this reality as τὸ ἕν (The One). Parmenides' idea seemed implausible to many Greeks, but his student Zeno of Elea challenged them with several famous paradoxes. Aristotle responded to these paradoxes by developing the notion of a potential countable infinity, as well as the infinitely divisible continuum. Unlike the eternal and unchanging cycles of time, he believed that the world is bounded by the celestial spheres and that cumulative stellar magnitude is only finitely multiplicative.

The Indian philosopher Kanada, founder of the Vaisheshika school, developed a notion of atomism and proposed that light and heat were varieties of the same substance. In the 5th century AD, the Buddhist atomist philosopher Dignāga proposed atoms to be point-sized, durationless, and made of energy. He denied the existence of substantial matter and proposed that movement consisted of momentary flashes of a stream of energy.

The notion of temporal finitism was inspired by the doctrine of creation shared by the three Abrahamic religions: Judaism, Christianity and Islam. The Christian philosopher John Philoponus presented philosophical arguments against the ancient Greek notion of an infinite past and future. Philoponus' arguments against an infinite past were used by the early Muslim philosopher Al-Kindi (Alkindus), the Jewish philosopher Saadia Gaon (Saadia ben Joseph), and the Muslim theologian Al-Ghazali (Algazel).

Astronomical concepts

3rd century BCE calculations by Aristarchus on the relative sizes of, from left to right, the Sun, Earth, and Moon, from a 10th-century AD Greek copy.

Astronomical models of the universe were proposed soon after astronomy began with the Babylonian astronomers, who viewed the universe as a flat disk floating in the ocean, and this forms the premise for early Greek maps like those of Anaximander and Hecataeus of Miletus.

Later Greek philosophers, observing the motions of the heavenly bodies, were concerned with developing models of the universe based more profoundly on empirical evidence. The first coherent model was proposed by Eudoxus of Cnidos. According to Aristotle's physical interpretation of the model, celestial spheres eternally rotate with uniform motion around a stationary Earth. Normal matter is entirely contained within the terrestrial sphere.

De Mundo (composed before 250 BC or between 350 and 200 BC) states: "Five elements, situated in spheres in five regions, the less being in each case surrounded by the greater—namely, earth surrounded by water, water by air, air by fire, and fire by ether—make up the whole universe".

This model was also refined by Callippus and, after concentric spheres were abandoned, it was brought into nearly perfect agreement with astronomical observations by Ptolemy. The success of such a model is largely due to the mathematical fact that any function (such as the position of a planet) can be decomposed into a set of circular functions (the Fourier modes). Other Greek scientists, such as the Pythagorean philosopher Philolaus, postulated (according to Stobaeus' account) that at the center of the universe was a "central fire" around which the Earth, Sun, Moon and planets revolved in uniform circular motion.
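
The underlying mathematical point can be stated as follows (a standard Fourier-series identity, given for illustration): any sufficiently well-behaved periodic motion z(t) in the plane can be written as a sum of uniform circular motions,

    z(t) = Σₙ cₙ e^(i n ω t),

which is why adding enough epicycles (circles riding on circles) can reproduce essentially any observed planetary path.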

The Greek astronomer Aristarchus of Samos was the first known individual to propose a heliocentric model of the universe. Though the original text has been lost, a reference in Archimedes' book The Sand Reckoner describes Aristarchus's heliocentric model. Archimedes wrote:

You, King Gelon, are aware the universe is the name given by most astronomers to the sphere the center of which is the center of the Earth, while its radius is equal to the straight line between the center of the Sun and the center of the Earth. This is the common account as you have heard from astronomers. But Aristarchus has brought out a book consisting of certain hypotheses, wherein it appears, as a consequence of the assumptions made, that the universe is many times greater than the universe just mentioned. His hypotheses are that the fixed stars and the Sun remain unmoved, that the Earth revolves about the Sun on the circumference of a circle, the Sun lying in the middle of the orbit, and that the sphere of fixed stars, situated about the same center as the Sun, is so great that the circle in which he supposes the Earth to revolve bears such a proportion to the distance of the fixed stars as the center of the sphere bears to its surface.

Aristarchus thus believed the stars to be very far away, and saw this as the reason why stellar parallax had not been observed, that is, the stars had not been observed to move relative to each other as the Earth moved around the Sun. The stars are in fact much farther away than the distance that was generally assumed in ancient times, which is why stellar parallax is only detectable with precision instruments. The geocentric model, consistent with planetary parallax, was assumed to explain the unobservability of the corresponding phenomenon, stellar parallax. The rejection of the heliocentric view was apparently quite strong, as the following passage from Plutarch suggests (On the Apparent Face in the Orb of the Moon):

Cleanthes [a contemporary of Aristarchus and head of the Stoics] thought it was the duty of the Greeks to indict Aristarchus of Samos on the charge of impiety for putting in motion the Hearth of the Universe [i.e. the Earth], ... supposing the heaven to remain at rest and the Earth to revolve in an oblique circle, while it rotates, at the same time, about its own axis.

The only other astronomer from antiquity known by name who supported Aristarchus's heliocentric model was Seleucus of Seleucia, a Hellenistic astronomer who lived a century after Aristarchus. According to Plutarch, Seleucus was the first to prove the heliocentric system through reasoning, but it is not known what arguments he used. Seleucus' arguments for a heliocentric cosmology were probably related to the phenomenon of tides. According to Strabo (1.1.9), Seleucus was the first to state that the tides are due to the attraction of the Moon, and that the height of the tides depends on the Moon's position relative to the Sun. Alternatively, he may have proved heliocentricity by determining the constants of a geometric model for it, and by developing methods to compute planetary positions using this model, much as Nicolaus Copernicus later did in the 16th century. During the Middle Ages, heliocentric models were also proposed by the Indian astronomer Aryabhata, and by the Persian astronomers Albumasar and Al-Sijzi.

Model of the Copernican Universe by Thomas Digges in 1576, with the amendment that the stars are no longer confined to a sphere, but spread uniformly throughout the space surrounding the planets.

The Aristotelian model was accepted in the Western world for roughly two millennia, until Copernicus revived Aristarchus's perspective that the astronomical data could be explained more plausibly if the Earth rotated on its axis and if the Sun were placed at the center of the universe.

In the center rests the Sun. For who would place this lamp of a very beautiful temple in another or better place than this wherefrom it can illuminate everything at the same time?

— Nicolaus Copernicus, in Chapter 10, Book 1 of De Revolutionibus Orbium Coelestium (1543)

As noted by Copernicus himself, the notion that the Earth rotates is very old, dating at least to Philolaus (c. 450 BC), Heraclides Ponticus (c. 350 BC) and Ecphantus the Pythagorean. Roughly a century before Copernicus, the Christian scholar Nicholas of Cusa also proposed that the Earth rotates on its axis in his book, On Learned Ignorance (1440). Al-Sijzi also proposed that the Earth rotates on its axis. Empirical evidence for the Earth's rotation on its axis, using the phenomenon of comets, was given by Tusi (1201–1274) and Ali Qushji (1403–1474).

This cosmology was accepted by Isaac Newton, Christiaan Huygens and later scientists. Edmond Halley (1720) and Jean-Philippe de Chéseaux (1744) noted independently that the assumption of an infinite space filled uniformly with stars would lead to the prediction that the nighttime sky would be as bright as the Sun itself; this became known as Olbers' paradox in the 19th century. Newton believed that an infinite space uniformly filled with matter would cause infinite forces and instabilities causing the matter to be crushed inwards under its own gravity. This instability was clarified in 1902 by the Jeans instability criterion. One solution to these paradoxes is the Charlier Universe, in which the matter is arranged hierarchically (systems of orbiting bodies that are themselves orbiting in a larger system, ad infinitum) in a fractal way such that the universe has a negligibly small overall density; such a cosmological model had also been proposed earlier in 1761 by Johann Heinrich Lambert. A significant astronomical advance of the 18th century was the realization by Thomas Wright, Immanuel Kant and others that some nebulae might be distant systems of stars comparable to the Milky Way.
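A short back-of-the-envelope version of the Halley–Chéseaux argument, added here for clarity and assuming a static, infinite, transparent universe with uniform star density n and luminosity L per star: the flux received from a thin spherical shell of radius r and thickness dr is

dF = n \left(4\pi r^{2}\,dr\right) \frac{L}{4\pi r^{2}} = n L\,dr,

which is independent of r, so summing shells out to infinite distance gives an unbounded, sky-filling brightness. The finite age of the universe and the expansion of space resolve the paradox.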

In 1919, when the Hooker Telescope was completed, the prevailing view still was that the universe consisted entirely of the Milky Way Galaxy. Using the Hooker Telescope, Edwin Hubble identified Cepheid variables in several spiral nebulae and in 1922–1923 proved conclusively that the Andromeda Nebula and Triangulum, among others, were entire galaxies outside our own, thus proving that the universe consists of a multitude of galaxies.
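For context, the method behind this result is standard stellar astronomy rather than a detail stated above: the Cepheid period–luminosity relation gives a Cepheid's absolute magnitude M from its pulsation period, and comparing M with the observed apparent magnitude m yields the distance d (in parsecs) through the distance modulus

m - M = 5 \log_{10} d - 5.

The distances Hubble derived this way were far larger than any plausible extent of the Milky Way, placing the spiral nebulae well outside it.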

The modern era of physical cosmology began in 1917, when Albert Einstein first applied his general theory of relativity to model the structure and dynamics of the universe.

Map of the observable universe with some of the notable astronomical objects known today. The scale of length increases exponentially toward the right. Celestial bodies are shown enlarged in size so that their shapes can be appreciated.
 

Industrial and organizational psychology

Industrial and organizational psychology (I-O psychology), an applied discipline within psychology, is the science of human behavior as it pertains to the workplace. Depending on the country or region of the world, I-O psychology is also known as occupational psychology, organizational psychology, and work and organizational (WO) psychology. Industrial, work and organizational psychology (IWO) psychology is the broader, more global term for the field. As an applied field, the discipline involves both research and practice.

I-O psychologists apply psychological theories and principles to organizations and the individuals within them. I-O psychologists are trained in the scientist–practitioner model. They contribute to an organization's success by improving the recruitment, job performance, motivation, and job satisfaction of employees. This includes the work–nonwork interface, such as the transition into a career, retirement, and work–family conflict and balance. An I-O psychologist conducts research on employee behaviors and attitudes, and how these can be improved through hiring practices, training programs, feedback, and management systems.

I-O psychology is one of the 17 professional specialties recognized by the American Psychological Association (APA). In the United States the profession is represented by Division 14 of the APA and is formally known as the Society for Industrial and Organizational Psychology (SIOP). Similar I-O psychology societies can be found in many countries.

International

I-O psychology is international. It can be found throughout the industrialized world. In North America the term "I-O" psychology is used; in the United Kingdom, the field is known as occupational psychology. Occupational psychology in the UK is one of nine "protected titles" within the "practitioner psychologist" professions. The profession is regulated by the Health and Care Professions Council. In the UK, graduate programs in psychology, including occupational psychology, are accredited by the British Psychological Society.

In Australia, the title organizational psychologist is protected by law and regulated by the Australian Health Practitioner Regulation Agency (AHPRA). Organizational psychology is one of nine areas of specialist endorsement for psychology practice in Australia.

In Europe, someone with a specialist EuroPsy Certificate in Work and Organisational Psychology is a fully qualified psychologist and a specialist in the work psychology field. Industrial and organizational psychologists reaching the EuroPsy standard are recorded in the Register of European Psychologists. I-O psychology is one of the three main psychology specializations in Europe.

In South Africa, industrial psychology is a registration category for the profession of psychologist as regulated by the Health Professions Council of South Africa (HPCSA).

Historical overview

The historical development of I-O psychology was paralleled in the US, the UK, Australia, Germany, the Netherlands, and Eastern European countries such as Romania. The roots of I-O psychology trace back nearly to the beginning of psychology as a science, when Wilhelm Wundt founded one of the first psychological laboratories in 1879 in Leipzig, Germany. In the mid–1880s, Wundt trained two psychologists, Hugo Münsterberg and James McKeen Cattell, who went on to have a major influence on the emergence of I-O psychology. World War I was an impetus for the development of the field simultaneously in the UK and US.

Instead of viewing performance differences as human "errors," Cattell was one of the first to recognize the importance of differences among individuals as a way of better understanding work behavior. Walter Dill Scott, who was a contemporary of Cattell and was elected President of the American Psychological Association (APA) in 1919, was arguably the most prominent I-O psychologist of his time. Scott, along with Walter Van Dyke Bingham, worked at what was then Carnegie Institute of Technology, developing methods for selecting and training sales personnel.

The "industrial" side of I-O psychology originated in research on individual differences, assessment, and the prediction of work performance. Industrial psychology crystallized during World War I. In response to the need to rapidly assign new troops to duty. Scott and Bingham volunteered to help with the testing and placement of more than a million U.S. Army recruits. In 1917, together with other prominent psychologists, they adapted a well-known intelligence test the Stanford–Binet, which was designed for testing one individual at a time, to make it suitable for group testing. The new test was called the Army Alpha. After the War, the growing industrial base in the U.S. was a source of momentum for what was then called "industrial psychology." Private industry set out to emulate the successful testing of Army personnel. Mental ability testing soon became commonplace in the work setting.

The "organizational" side of the field was focused on employee behavior, feelings, and well-being. During World War I, with the U.K. government's interest in worker productivity in munitions factories, Charles Myers studied worker fatigue and well-being. Following the war, Elton Mayo found that rest periods improved morale and reduced turnover in a Philadelphia textile factory. He later joined the ongoing Hawthorne studies, where he became interested in how workers' emotions and informal relationships affected productivity. The results of these studies ushered in the human relations movement.

World War II brought renewed interest in ability testing. The U.S. military needed to accurately place recruits in new technologically advanced jobs. There was also concern with morale and fatigue in war-industry workers. In the 1960s Arthur Kornhauser examined the impact on productivity of hiring mentally unstable workers. Kornhauser also examined the link between industrial working conditions and worker mental health as well as the spillover into a worker's personal life of having an unsatisfying job. Zickar noted that most of Kornhauser's I-O contemporaries favored management and Kornhauser was largely alone in his interest in protecting workers. Vinchur and Koppes (2010) observed that I-O psychologists' interest in job stress is a relatively recent development (p. 22).

The industrial psychology division of the former American Association of Applied Psychology became a division within APA, becoming Division 14 of APA. It was initially called the Industrial and Business Psychology Division. In 1962, the name was changed to the Industrial Psychology Division. In 1973, it was renamed again, this time to the Division of Industrial and Organizational Psychology. In 1982, the unit became more independent of APA, and its name was changed again, this time to the Society for Industrial and Organizational Psychology.

The name change of the division from "industrial psychology" to "industrial and organizational psychology" reflected the shift in the work of industrial psychologists who had originally addressed work behavior from the individual perspective, examining performance and attitudes of individual workers. Their work became broader. Group behavior in the workplace became a worthy subject of study. The emphasis on the "organizational" underlined the fact that when an individual joins an organization (e.g., the organization that hired him or her), he or she will be exposed to a common goal and a common set of operating procedures. In the 1970s in the UK, references to occupational psychology became more common than references to I-O psychology.

According to Bryan and Vinchur, "while organizational psychology increased in popularity through [the 1960s and 1970s], research and practice in the traditional areas of industrial psychology continued, primarily driven by employment legislation and case law". There was a focus on fairness and validity in selection efforts as well as in the job analyses that undergirded selection instruments. For example, I-O psychology showed increased interest in behaviorally anchored rating scales. What critics there were of I-O psychology accused the discipline of being responsive only to the concerns of management.

From the 1980s to 2010s, other changes in I-O psychology took place. Researchers increasingly adopted a multi-level approach, attempting to understand behavioral phenomena from both the level of the organization and the level of the individual worker. There was also an increased interest in the needs and expectations of employees as individuals. For example, an emphasis on organizational justice and the psychological contract took root, as well as the more traditional concerns of selection and training. Methodological innovations (e.g., meta-analyses, structural equation modeling) were adopted. With the passage of the Americans with Disabilities Act in 1990 and parallel legislation elsewhere in the world, I-O psychology saw an increased emphasis on "fairness in personnel decisions." Training research relied increasingly on advances in educational psychology and cognitive science.

Research methods

As described above, I-O psychologists are trained in the scientist–practitioner model. I-O psychologists rely on a variety of methods to conduct organizational research. Study designs employed by I-O psychologists include surveys, experiments, quasi-experiments, and observational studies. I-O psychologists rely on diverse data sources, including human judgments, historical databases, objective measures of work performance (e.g., sales volume), and questionnaires and surveys. Reliable measures with strong evidence for construct validity have been developed to assess a wide variety of job-relevant constructs.

I-O researchers employ quantitative statistical methods. Quantitative methods used in I-O psychology include correlation, multiple regression, and analysis of variance. More advanced statistical methods employed in I-O research include logistic regression, structural equation modeling, and hierarchical linear modeling (HLM; also known as multilevel modeling). I-O researchers have also employed meta-analysis. I-O psychologists also employ psychometric methods including methods associated with classical test theory, generalizability theory, and item response theory (IRT).
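As a concrete sketch of two of these methods, the snippet below computes a zero-order correlation and an ordinary least squares multiple regression on simulated data. It is purely illustrative: the variable names (test_score, interview_rating, performance) and the generating model are hypothetical and not drawn from any study cited here.

import numpy as np

# Simulated (hypothetical) data: two predictors and a job-performance criterion
rng = np.random.default_rng(seed=1)
n = 200
test_score = rng.normal(size=n)                           # e.g., a cognitive ability test
interview_rating = 0.4 * test_score + rng.normal(size=n)  # a correlated second predictor
performance = 0.5 * test_score + 0.3 * interview_rating + rng.normal(size=n)

# Zero-order (validity-style) correlation between one predictor and the criterion
r = np.corrcoef(test_score, performance)[0, 1]

# Multiple regression of performance on both predictors (OLS via least squares)
X = np.column_stack([np.ones(n), test_score, interview_rating])
beta, *_ = np.linalg.lstsq(X, performance, rcond=None)

print(f"r(test, performance) = {r:.2f}")
print(f"OLS coefficients (intercept, test, interview): {np.round(beta, 2)}")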

I-O psychologists have also employed qualitative methods, which largely involve focus groups, interviews, and case studies. I-O psychologists conducting research on organizational culture have employed ethnographic techniques and participant observation. A qualitative technique associated with I-O psychology is Flanagan's critical incident technique. I-O psychologists have also coordinated the use of quantitative and qualitative methods in the same study.

Topics

Job analysis

Job analysis encompasses a number of different methods including, but not limited to, interviews, questionnaires, task analysis, and observation. A job analysis primarily involves the systematic collection of information about a job. A task-oriented job analysis involves an assessment of the duties, tasks, and/or competencies a job requires. By contrast, a worker-oriented job analysis involves an examination of the knowledge, skills, abilities, and other characteristics (KSAOs) required to successfully perform the work. Information obtained from job analyses is used for many purposes, including the creation of job-relevant selection procedures, the development of criteria for performance appraisals, the conducting of performance appraisals, and the development and implementation of training programs.

Personnel recruitment and selection

I-O psychologists typically work with human resource specialists to design (a) recruitment processes and (b) personnel selection systems. Personnel recruitment is the process of identifying qualified candidates in the workforce and getting them to apply for jobs within an organization. Personnel recruitment processes include developing job announcements, placing ads, defining key qualifications for applicants, and screening out unqualified applicants.

Personnel selection is the systematic process of hiring and promoting personnel. Personnel selection systems employ evidence-based practices to determine the most qualified candidates. Personnel selection involves both the newly hired and individuals who can be promoted from within the organization. Common selection tools include ability tests (e.g., cognitive, physical, or psycho-motor), knowledge tests, personality tests, structured interviews, the systematic collection of biographical data, and work samples. I-O psychologists must evaluate evidence regarding the extent to which selection tools predict job performance.

Personnel selection procedures are usually validated, i.e., shown to be job relevant, using one or more of the following types of validity: content validity, construct validity, and/or criterion-related validity. I-O psychologists must adhere to professional standards in personnel selection efforts. SIOP (e.g., the Principles for the Validation and Use of Personnel Selection Procedures) and the APA together with the National Council on Measurement in Education (e.g., the Standards for Educational and Psychological Testing) are sources of those standards. The Equal Employment Opportunity Commission's Uniform Guidelines are also influential in guiding personnel selection decisions.

A meta-analysis of selection methods found that general mental ability was the best overall predictor of job performance and attainment in training.
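To make the meta-analytic idea concrete, the sketch below computes a sample-size-weighted mean validity across a handful of invented primary studies, the so-called bare-bones calculation; full psychometric meta-analyses additionally correct for measurement unreliability and range restriction. All numbers are hypothetical.

import numpy as np

# Hypothetical primary studies: observed validity r_i and sample size N_i
r = np.array([0.45, 0.30, 0.55, 0.38, 0.50])
N = np.array([120, 85, 200, 60, 150])

# Sample-size-weighted mean correlation across studies
r_bar = np.sum(N * r) / np.sum(N)

# Sample-size-weighted variance of the observed correlations
var_obs = np.sum(N * (r - r_bar) ** 2) / np.sum(N)

print(f"weighted mean validity = {r_bar:.3f}")
print(f"observed variance across studies = {var_obs:.4f}")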

Performance appraisal/management

Performance appraisal or performance evaluation is the process in which an individual's or a group's work behaviors and outcomes are assessed against managers' and others' expectations for the job. Performance appraisal is frequently used in promotion and compensation decisions, to help design and validate personnel selection procedures, and for performance management. Performance management is the process of providing performance feedback relative to expectations and information relevant to helping a worker improve his or her performance (e.g., coaching, mentoring). Performance management may also include documenting and tracking performance information for organizational evaluation purposes.

An I-O psychologist would typically use information from the job analysis to determine a job's performance dimensions and then construct a rating scale to describe each level of performance for the job. Often, the I-O psychologist would be responsible for training organizational personnel how to use the performance appraisal instrument, including ways to minimize bias when using the rating scale and how to provide effective performance feedback.

Individual assessment and psychometrics

Individual assessment involves the measurement of individual differences. I-O psychologists perform individual assessments in order to evaluate differences among candidates for employment as well as differences among employees. The constructs measured pertain to job performance. With candidates for employment, individual assessment is often part of the personnel selection process. These assessments can include written tests, aptitude tests, physical tests, psycho-motor tests, personality tests, integrity and reliability tests, work samples, simulations, and assessment centres.

Occupational health and well-being

A more recent focus of the I-O field is the health, safety, and well-being of employees. Topics include occupational stress and workplace mistreatment.

Occupational stress

There are many features of work that can be stressful to employees. Research has identified a number of job stressors (environmental conditions at work) that contribute to strains (adverse behavioral, emotional, physical, and psychological reactions). Occupational stress can have implications for organizational performance because of the emotions job stress evokes. For example, a job stressor such as conflict with a supervisor can precipitate anger that in turn motivates counterproductive workplace behaviors. A number of prominent models of job stress have been developed to explain the job stress process, including the person-environment (P-E) fit model, which was developed by University of Michigan social psychologists, and the demand-control(-support) and effort-reward imbalance models, which were developed by sociologists.

Research has also examined occupational stress in specific occupations, including police, general practitioners, and dentists. Another concern has been the relation of occupational stress to family life. Other I-O researchers have examined gender differences in leadership style and job stress and strain in the context of male- and female-dominated industries, and unemployment-related distress. Occupational stress has also been linked to lack of fit between people and their jobs.

Occupational safety

Accidents and safety in the workplace are important because of the serious injuries and fatalities that are all too common. Research has linked accidents to psychosocial factors in the workplace including overwork that leads to fatigue, workplace violence, and working night shifts. "Stress audits" can help organizations remain compliant with various occupational safety regulations. Psychosocial hazards can contribute to musculoskeletal disorders. A psychosocial factor related to accident risk is safety climate, which refers to employees' perceptions of the extent to which their work organization prioritizes safety. By contrast, psychosocial safety climate refers to management's "policies, practices, and procedures" aimed at protecting workers' psychological health. Research on safety leadership is also relevant to understanding employee safety performance. Research suggests that safety-oriented transformational leadership is associated with a positive safety climate and safe worker practices.

Workplace bullying, aggression and violence

I-O psychologists are concerned with the related topics of workplace bullying, aggression, and violence. For example, I-O research found that exposure to workplace violence elicited ruminative thinking. Ruminative thinking is associated with poor well-being. Research has found that interpersonal aggressive behaviour is associated with worse team performance.

Relation of I-O psychology to occupational health psychology

A new discipline, occupational health psychology (OHP), emerged from both health psychology and I-O psychology as well as occupational medicine. OHP concerns itself with such topic areas as the impact of occupational stressors on mental and physical health, the health impact of involuntary unemployment, violence and bullying in the workplace, psychosocial factors that influence accident risk and safety, work-family balance, and interventions designed to improve/protect worker health. Spector observed that one of the problems facing I-O psychologists in the late 20th century who were interested in the health of working people was resistance within the field to publishing papers on worker health. In the 21st century, more I-O psychologists joined with their OHP colleagues from other disciplines in researching work and health.

Work design

Work design concerns the "content and organisation of one's work tasks, activities, relationships, and responsibilities." Research has demonstrated that work design has important implications for individual employees (e.g., level of engagement, job strain, chance of injury), teams (e.g., how effectively teams co-ordinate their activities), organisations (e.g., productivity, safety, efficiency targets), and society (e.g., whether a nation utilises the skills of its population or promotes effective aging).

I-O psychologists review job tasks, relationships, and an individual's way of thinking about their work to ensure that their roles are meaningful and motivating, thus creating greater productivity and job satisfaction. Deliberate interventions aimed at altering work design are sometimes referred to as work redesign. Such interventions can be initiated by the management of an organization (e.g., job rotation, job enlargement, job enrichment) or by individual workers (e.g., job crafting, role innovation, idiosyncratic deals).

Remuneration and compensation

Compensation includes wages or salary, bonuses, pension/retirement contributions, and employee benefits that can be converted to cash or replace living expenses. I-O psychologists may be asked to conduct a job evaluation for the purpose of determining compensation levels and ranges. I-O psychologists may also serve as expert witnesses in pay discrimination cases, when disparities in pay for similar work are alleged by employees.

Training and training evaluation

Training involves the systematic teaching of skills, concepts, or attitudes that results in improved performance in another environment. Because many people hired for a job are not already versed in all the tasks the job requires, training may be needed to help the individual perform the job effectively. Evidence indicates that training is often effective, and that it succeeds in terms of higher net sales and gross profitability per employee.

Similar to performance management (see above), an I-O psychologist would employ a job analysis in concert with the application of the principles of instructional design to create an effective training program. A training program is likely to include a summative evaluation at its conclusion in order to ensure that trainees have met the training objectives and can perform the target work tasks at an acceptable level. Kirkpatrick describes four levels of criteria by which to evaluate training:

  • Reactions are the extent to which trainees enjoyed the training and found it worthwhile.
  • Learning is the knowledge and skill trainees acquired from the training.
  • Behavior is the change in behavior trainees exhibit on the job after training; for example, did they perform trained tasks more quickly?
  • Results are the effect of the change in knowledge or behavior on the job, for example, was overall productivity increased or costs decreased?

Training programs often include formative evaluations to assess the effect of the training as the training proceeds. Formative evaluations can be used to locate problems in training procedures and help I-O psychologists make corrective adjustments while training is ongoing.

The foundation for training programs is learning. Learning outcomes can be organized into three broad categories: cognitive, skill-based, and affective outcomes. Cognitive training is aimed at instilling declarative knowledge or the knowledge of rules, facts, and principles (e.g., police officer training covers laws and court procedures). Skill-based training aims to impart procedural knowledge (e.g., skills needed to use a special tool) or technical skills (e.g., understanding the workings of a software program). Affective training concerns teaching individuals to develop specific attitudes or beliefs that predispose trainees to behave a certain way (e.g., show commitment to the organization, appreciate diversity).

A needs assessment, an analysis of corporate and individual goals, is often undertaken prior to the development of a training program. In addition, a careful needs analysis is required in order to develop a systematic understanding of where training is needed, what should be taught, and who will be trained. A training needs analysis typically involves a three-step process that includes organizational analysis, task analysis and person analysis.

An organizational analysis is an examination of organizational goals and resources as well as the organizational environment. The results of an organizational analysis help to determine where training should be directed. The analysis identifies the training needs of different departments or subunits. It systematically assesses manager, peer, and technological support for transfer of training. An organizational analysis also takes into account the climate of the organization and its subunits. For example, if a climate for safety is emphasized throughout the organization or in subunits of the organization (e.g., production), then training needs will likely reflect an emphasis on safety. A task analysis uses the results of a job analysis to determine what is needed for successful job performance, contributing to training content. With organizations increasingly trying to identify "core competencies" that are required for all jobs, task analysis can also include an assessment of competencies. A person analysis identifies which individuals within an organization should receive training and what kind of instruction they need. Employee needs can be assessed using a variety of methods that identify weaknesses that training can address.

Motivation in the workplace

Work motivation reflects the energy an individual applies "to initiate work-related behavior, and to determine its form, direction, intensity, and duration." Understanding what motivates an organization's employees is central to I-O psychology. Motivation is generally thought of as a theoretical construct that fuels behavior. An incentive is an anticipated reward that is thought to incline a person to behave a certain way. Motivation varies among individuals. When studying its influence on behavior, motivation must be examined together with ability and environmental influences. Because of motivation's role in influencing workplace behavior and performance, many organizations structure the work environment to encourage productive behaviors and discourage unproductive behaviors.

Motivation involves three psychological processes: arousal, direction, and intensity. Arousal is what initiates action. It is often fueled by a person's need or desire for something that is missing from his or her life, either totally or partially. Direction refers to the path employees take in accomplishing the goals they set for themselves. Intensity is the amount of energy employees put into goal-directed work performance. The level of intensity often reflects the importance and difficulty of the goal. These psychological processes involve four factors. First, motivation serves to direct attention, focusing on particular issues, people, tasks, etc. Second, it serves to stimulate effort. Third, motivation influences persistence. Finally, motivation influences the choice and application of task-related strategies.

Organizational climate

Organizational climate is the perceptions of employees about what is important in an organization, that is, what behaviors are encouraged versus discouraged. It can be assessed in individual employees (climate perceptions) or averaged across groups of employees within a department or organization (organizational climate). Climates are usually focused on specific employee outcomes, or what is called “climate for something”. There are more than a dozen types of climates that have been assessed and studied. Some of the more popular include:

  • Customer service climate: The emphasis placed on providing good service. It has been shown to relate to employee service performance.
  • Diversity climate: The extent to which organizations value differences among employees and expect employees to treat everyone with respect. It has been linked to job satisfaction.
  • Psychosocial safety climate: An emphasis on employees' psychological safety, meaning that people feel free to be themselves and express views without fear of being criticized or ridiculed.
  • Safety climate: Such organizations emphasize safety and have fewer accidents and injuries.

Climate concerns organizational policies and practices that encourage or discourage specific behaviors by employees. Shared perceptions of what the organization emphasizes (organizational climate) are part of organizational culture, but culture concerns far more than shared perceptions, as discussed in the next section.
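As noted above, unit-level climate scores are usually obtained by averaging individual climate perceptions, typically after checking that unit members actually agree with one another. The following is a minimal sketch, using hypothetical ratings on a 5-point scale and the common r_wg agreement index with a uniform (no-agreement) null distribution.

import numpy as np

# Hypothetical safety-climate ratings (1-5 scale) from members of one work unit
ratings = np.array([4, 5, 4, 3, 4, 5, 4, 4])

A = 5                            # number of response options on the scale
sigma2_null = (A**2 - 1) / 12.0  # variance expected if responses were purely random (uniform)
s2 = ratings.var(ddof=1)         # observed within-unit variance

r_wg = 1 - s2 / sigma2_null      # within-group agreement index
unit_climate = ratings.mean()    # unit-level climate score, reported if agreement is adequate

print(f"r_wg = {r_wg:.2f}, unit climate mean = {unit_climate:.2f}")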

Organizational culture

While there is no universal definition of organizational culture, most accounts share the following assumptions:

... that they are related to history and tradition, have some depth, are difficult to grasp and account for, and must be interpreted; that they are collective and shared by members of groups and primarily ideational in character, having to do with values, understandings, beliefs, knowledge, and other intangibles; and that they are holistic and subjective rather than strictly rational and analytical.

Organizational culture has been shown to affect important organizational outcomes such as performance, attraction, recruitment, retention, employee satisfaction, and employee well-being. There are three levels of organizational culture: artifacts, shared values, and basic beliefs and assumptions. Artifacts comprise the physical components of the organization that relay cultural meaning. Shared values are individuals' preferences regarding certain aspects of the organization's culture (e.g., loyalty, customer service). Basic beliefs and assumptions include individuals' impressions about the trustworthiness and supportiveness of an organization, and are often deeply ingrained within the organization's culture.

In addition to an overall culture, organizations also have subcultures. Subcultures can be departmental (e.g. different work units) or defined by geographical distinction. While there is no single "type" of organizational culture, some researchers have developed models to describe different organizational cultures.

Group behavior

Group behavior involves the interactions among individuals in a collective. Most I-O group research is about teams, that is, groups in which people work together to achieve the same task goals. The individuals' opinions, attitudes, and adaptations affect group behavior, with group behavior in turn affecting those opinions, etc. The interactions are thought to fulfill some need satisfaction in an individual who is part of the collective.

Team effectiveness

Organizations often organize work around teams because teams can accomplish a much greater amount of work in a short period of time than an individual working alone. I-O research has examined the harm workplace aggression does to team performance.

Team composition

Team composition, or the configuration of team member knowledge, skills, abilities, and other characteristics, fundamentally influences teamwork. Team composition can be considered in the selection and management of teams to increase the likelihood of team success. To achieve high-quality results, teams built with members having higher skill levels are more likely to be effective than teams built around members having lesser skills; teams that include members with a diversity of skills are also likely to show improved team performance. Team members should also be compatible in terms of personality traits, values, and work styles. There is substantial evidence that personality traits and values can shape the nature of teamwork, and influence team performance.

Team task design

A fundamental question in team task design is whether or not a task is even appropriate for a team. Those tasks that require predominantly independent work are best left to individuals, and team tasks should include those tasks that consist primarily of interdependent work. When a given task is appropriate for a team, task design can play a key role in team effectiveness.

Job characteristic theory identifies core job dimensions that affect motivation, satisfaction, performance, etc. These dimensions include skill variety, task identity, task significance, autonomy and feedback. The dimensions map well to the team environment. Individual contributors who perform team tasks that are challenging, interesting, and engaging are more likely to be motivated to exert greater effort and perform better than team members who are working on tasks that lack those characteristics.

Organizational resources

Organizational support systems affect team effectiveness and provide resources for teams operating in the multi-team environment. During the chartering of new teams, organizational enabling resources are first identified. Examples of enabling resources include facilities, equipment, information, training, and leadership. Team-specific resources (e.g., budgetary resources, human resources) are typically made available. Team-specific human resources represent the individual contributors who are selected to be team members. Intra-team processes (e.g., task design, task assignment) involve these team-specific resources.

Teams also function in dynamic multi-team environments. Teams often must respond to shifting organizational contingencies. Contingencies affecting teams include constraints arising from conditions in which organizational resources are not exclusively earmarked for certain teams. When resources are scarce, they must be shared by multiple teams.

Team rewards

Organizational reward systems drive the strengthening and enhancing of individual team member efforts; such efforts contribute towards reaching team goals. In other words, rewards that are given to individual team members should be contingent upon the performance of the entire team.

Several design elements are needed to enable organizational reward systems to operate successfully. First, for a collective assessment to be appropriate for individual team members, the group's tasks must be highly interdependent. If this is not the case, individual assessment is more appropriate than team assessment. Second, individual-level reward systems and team-level reward systems must be compatible. For example, it would be unfair to reward the entire team for a job well done if only one team member did most of the work. That team member would most likely view teams and teamwork negatively, and would not want to work on a team in the future. Third, an organizational culture must be created such that it supports and rewards employees who believe in the value of teamwork and who maintain a positive attitude towards team-based rewards.

Team goals

Goals potentially motivate team members when goals contain three elements: difficulty, acceptance, and specificity. Under difficult goal conditions, teams with more committed members tend to outperform teams with less committed members. When team members commit to team goals, team effectiveness is a function of how supportive members are with each other. The goals of individual team members and team goals interact. Team and individual goals must be coordinated. Individual goals must be consistent with team goals in order for a team to be effective.

Job satisfaction and commitment

Job satisfaction is often thought to reflect the extent to which a worker likes his or her job, or individual aspects or facets of jobs. It is one of the most heavily researched topics in I-O psychology. Job satisfaction has theoretical and practical utility for the field. It has been linked to important job outcomes including attitudinal variables (e.g., job involvement, organizational commitment), absenteeism, turnover intentions, actual turnover, job performance, and tension. A meta-analysis found job satisfaction to be related to life satisfaction, happiness, positive affect, and the absence of negative affect.

Productive behavior

Productive behavior is defined as employee behavior that contributes positively to the goals and objectives of an organization. When an employee begins a new job, there is a transition period during which he or she may not contribute significantly. To assist with this transition, an employee typically requires job-related training. In financial terms, productive behavior represents the point at which an organization begins to achieve some return on the investment it has made in a new employee. IO psychologists are ordinarily more focused on productive behavior than job or task performance, including in-role and extra-role performance. In-role performance tells managers how well an employee performs the required aspects of the job; extra-role performance includes behaviors not necessarily required by the job but that nonetheless contribute to organizational effectiveness. By taking both in-role and extra-role performance into account, an I-O psychologist is able to assess employees' effectiveness (how well they do what they were hired to do), efficiency (outputs relative to inputs), and productivity (how much they help the organization reach its goals). Three forms of productive behavior that IO psychologists often evaluate include job performance, organizational citizenship behavior (see below), and innovation.

Job performance

Job performance represents behaviors employees engage in while at work which contribute to organizational goals. These behaviors are formally evaluated by an organization as part of an employee's responsibilities. In order to understand and ultimately predict job performance, it is important to be precise when defining the term. Job performance is about behaviors that are within the control of the employee and not about results (effectiveness), the costs involved in achieving results (productivity), the results that can be achieved in a period of time (efficiency), or the value an organization places on a given level of performance, effectiveness, productivity or efficiency (utility).

To model job performance, researchers have attempted to define a set of dimensions that are common to all jobs. Using a common set of dimensions provides a consistent basis for assessing performance and enables the comparison of performance across jobs. Performance is commonly broken into two major categories: in-role (technical aspects of a job) and extra-role (non-technical abilities such as communication skills and being a good team member). While this distinction in behavior has been challenged, it is commonly made by both employees and management.

A model of performance by Campbell breaks performance into in-role and extra-role categories. Campbell labeled job-specific task proficiency and non-job-specific task proficiency as in-role dimensions, while written and oral communication, demonstrating effort, maintaining personal discipline, facilitating peer and team performance, supervision and leadership, and management and administration are labeled as extra-role dimensions. Murphy's model of job performance also broke job performance into in-role and extra-role categories; however, task-oriented behaviors composed the in-role category, and the extra-role category included interpersonally oriented behaviors, down-time behaviors, and destructive and hazardous behaviors.

The measurement of job performance is usually done through pencil-and-paper tests, job skills tests, on-site hands-on tests, off-site hands-on tests, high-fidelity simulations, symbolic simulations, task ratings, and global ratings. These various tools are often used to evaluate performance on specific tasks and overall job performance. Van Dyne and LePine developed a measurement model in which overall job performance was evaluated using Campbell's in-role and extra-role categories. Here, in-role performance was reflected through how well "employees met their performance expectations and performed well at the tasks that made up the employees' job." The extra-role category comprised dimensions concerning how well the employee assists others with their work for the benefit of the group, whether the employee voices new ideas for projects or changes to procedure, and whether the employee attends functions that help the group.

To assess job performance, reliable and valid measures must be established. While there are many sources of error with performance ratings, error can be reduced through rater training and through the use of behaviorally anchored rating scales. Such scales can be used to clearly define the behaviors that constitute poor, average, and superior performance. Additional factors that complicate the measurement of job performance include the instability of job performance over time due to forces such as changing performance criteria, the structure of the job itself, and the restriction of variation in individual performance by organizational forces. These factors include errors in job measurement techniques, acceptance and justification of poor performance, and a lack of importance attached to individual performance.

The determinants of job performance consist of factors having to do with the individual worker as well as environmental factors in the workplace. According to Campbell's Model of The Determinants of Job Performance, job performance is a result of the interaction between declarative knowledge (knowledge of facts or things), procedural knowledge (knowledge of what needs to be done and how to do it), and motivation (reflective of an employee's choices regarding whether to expend effort, the level of effort to expend, and whether to persist with the level of effort chosen). The interplay between these factors show that an employee may, for example, have a low level of declarative knowledge, but may still have a high level of performance if the employee has high levels of procedural knowledge and motivation.

Regardless of the job, three determinants stand out as predictors of performance: (1) general mental ability (especially for jobs higher in complexity); (2) job experience (although there is a law of diminishing returns); and (3) the personality trait of conscientiousness (people who are dependable and achievement-oriented, who plan well). These determinants appear to influence performance largely through the acquisition and usage of job knowledge and the motivation to do well. Further, an expanding area of research in job performance determinants includes emotional intelligence.

Organizational citizenship behavior

Organizational citizenship behaviors (OCBs) are another form of workplace behavior that IO psychologists are involved with. OCBs tend to be beneficial to both the organization and other workers. Dennis Organ (1988) defines OCBs as "individual behavior that is discretionary, not directly or explicitly recognized by the formal reward system, and that in the aggregate promotes the effective functioning of the organization." Behaviors that qualify as OCBs can fall into one of the following five categories: altruism, courtesy, sportsmanship, conscientiousness, and civic virtue. OCBs have also been categorized in other ways, for example, by their intended targets: individuals, supervisors, and the organization as a whole. Other ways of categorizing OCBs include "compulsory OCBs", which are engaged in owing to coercive persuasion or peer pressure rather than out of good will. The extent to which OCBs are voluntary has been the subject of some debate.

Other research suggests that some employees perform OCBs to influence how they are viewed within the organization. While these behaviors are not formally part of the job description, performing them can influence performance appraisals. Researchers have advanced the view that employees engage in OCBs as a form of "impression management," a term coined by Erving Goffman. Goffman defined impression management as "the way in which the individual ... presents himself and his activity to others, the ways in which he guides and controls the impression they form of him, and the kinds of things he may and may not do while sustaining his performance before them." Some researchers have hypothesized that OCBs are not performed out of good will, positive affect, etc., but instead as a way of being noticed by others, including supervisors.

Innovation

Several qualities are generally linked to creative and innovative behaviour by individuals:

  • Task-relevant skills (general mental ability and job specific knowledge). Task specific and subject specific knowledge is most often gained through higher education; however, it may also be gained by mentoring and experience in a given field.
  • Creativity-relevant skills (the ability to concentrate on a problem for long periods of time, to abandon unproductive searches, and to temporarily put aside stubborn problems). The ability to put aside stubborn problems is referred to by Jex and Britt as productive forgetting. Creativity-relevant skills also require the individual contributor to evaluate a problem from multiple vantage points. One must be able to take on the perspective of various users. For example, an operations manager analyzing a reporting issue and developing an innovative solution would consider the perspectives of a salesperson, an assistant, and finance, compensation, and compliance officers.
  • Task motivation (internal desire to perform task and level of enjoyment).

At the organizational level, a study by Damanpour identified four specific characteristics that may predict innovation:

  1. A population with high levels of technical knowledge
  2. The organization's level of specialization
  3. The level an organization communicates externally
  4. Functional differentiation.

Counterproductive work behavior

Counterproductive work behavior (CWB) can be defined as employee behavior that goes against the goals of an organization. These behaviors can be intentional or unintentional and result from a wide range of underlying causes and motivations. Some CWBs have instrumental motivations (e.g., theft). It has been proposed that a person-by-environment interaction can be utilized to explain a variety of counterproductive behaviors. For instance, an employee who sabotages another employee's work may do so because of lax supervision (environment) and underlying psychopathology (person) that work in concert to result in the counterproductive behavior. There is evidence that an emotional response (e.g., anger) to job stress (e.g., unfair treatment) can motivate CWBs.

The forms of counterproductive behavior with the most empirical examination are ineffective job performance, absenteeism, job turnover, and accidents. Less common but potentially more detrimental forms of counterproductive behavior have also been investigated including violence and sexual harassment.

Leadership

Leadership can be defined as a process of influencing others to agree on a shared purpose, and to work towards shared objectives. A distinction should be made between leadership and management. Managers process administrative tasks and organize work environments. Although leaders may be required to undertake managerial duties as well, leaders typically focus on inspiring followers and creating a shared organizational culture and values. Managers deal with complexity, while leaders deal with initiating and adapting to change. Managers undertake the tasks of planning, budgeting, organizing, staffing, controlling and problem solving. In contrast, leaders undertake the tasks of setting a direction or vision, aligning people to shared goals, communicating, and motivating.

Approaches to studying leadership can be broadly classified into three categories: Leader-focused approaches, contingency-focused approaches, and follower-focused approaches.

Leader-focused approaches

Leader-focused approaches look to organizational leaders to determine the characteristics of effective leadership. According to the trait approach, more effective leaders possess certain traits that less effective leaders lack. More recently, this approach is being used to predict leader emergence. The following traits have been identified as those that predict leader emergence when there is no formal leader: high intelligence, a high need for dominance, high self-motivation, and social perceptiveness. Another leader-focused approach is the behavioral approach, which focuses on the behaviors that distinguish effective from ineffective leaders. There are two categories of leadership behaviors: consideration and initiating structure. Behaviors associated with the category of consideration include showing subordinates they are valued and that the leader cares about them. An example of a consideration behavior is showing compassion when problems arise in or out of the office. Behaviors associated with the category of initiating structure include facilitating the task performance of groups. One example of an initiating structure behavior is meeting one-on-one with subordinates to explain expectations and goals. The final leader-focused approach is power and influence. To be most effective, a leader should be able to influence others to behave in ways that are in line with the organization's mission and goals. How influential a leader can be depends on their social power – their potential to influence their subordinates. There are six bases of power: French and Raven's classic five bases of coercive, reward, legitimate, expert, and referent power, plus informational power. A leader can use several different tactics to influence others within an organization. These include rational persuasion, inspirational appeal, consultation, ingratiation, exchange, personal appeal, coalition, legitimating, and pressure.

Contingency-focused approaches

Of the three approaches to leadership, contingency-focused approaches have been the most prevalent over the past 30 years. Contingency-focused theories base a leader's effectiveness on their ability to assess a situation and adapt their behavior accordingly. These theories assume that an effective leader can accurately "read" a situation and skillfully employ a leadership style that meets the needs of the individuals involved and the task at hand. A brief introduction to the most prominent contingency-focused theories follows.

The Fiedler contingency model holds that a leader's effectiveness depends on the interaction between their characteristics and the characteristics of the situation. Path–goal theory asserts that the role of the leader is to help his or her subordinates achieve their goals. To effectively do this, leaders must skillfully select from four different leadership styles to meet the situational factors. The situational factors are a product of the characteristics of subordinates and the characteristics of the environment. The leader–member exchange theory (LMX) focuses on how leader–subordinate relationships develop. Generally speaking, when a subordinate performs well or when there are positive exchanges between a leader and a subordinate, their relationship is strengthened, performance and job satisfaction are enhanced, and the subordinate will feel more commitment to the leader and the organization as a whole. The Vroom–Yetton–Jago model focuses on decision-making with respect to a feasibility set, which is composed of the situational attributes.

In addition to the contingency-focused approaches mentioned, there has been a high degree of interest paid to three novel approaches that have recently emerged. The first is transformational leadership, which posits that there are certain leadership traits that inspire subordinates to perform beyond their capabilities. The second is transactional leadership, which is most concerned with keeping subordinates in-line with deadlines and organizational policy. This type of leader fills more of a managerial role and lacks qualities necessary to inspire subordinates and induce meaningful change. And the third is authentic leadership which is centered around empathy and a leader's values or character. If the leader understands their followers, they can inspire subordinates by cultivating a personal connection and leading them to share in the vision and goals of the team. Although there has been a limited amount of research conducted on these theories, they are sure to receive continued attention as the field of IO psychology matures.

Follower-focused approaches

Follower-focused approaches look at the processes by which leaders motivate followers, and lead teams to achieve shared goals. Understandably, the area of leadership motivation draws heavily from the abundant research literature in the domain of motivation in IO psychology. Because leaders are held responsible for their followers' ability to achieve the organization's goals, their ability to motivate their followers is a critical factor of leadership effectiveness. Similarly, the area of team leadership draws heavily from the research in teams and team effectiveness in IO psychology. Because organizational employees are frequently structured in the form of teams, leaders need to be aware of the potential benefits and pitfalls of working in teams, how teams develop, how to satisfy team members' needs, and ultimately how to bring about team effectiveness and performance.

An emerging area of IO research on team leadership is the leadership of virtual teams, in which team members are geographically distributed across various distances and sometimes even countries. While technological advances have enabled the leadership process to take place in such virtual contexts, they also present new challenges for leaders, such as the need to use technology to build relationships with followers and to influence followers when faced with limited (or no) face-to-face interaction.

Organizational development

IO psychologists are also concerned with organizational change, an effort called organizational development (OD). Tools used to advance organizational development include the survey feedback technique, which involves the periodic assessment (with surveys) of employee attitudes and feelings. The results are conveyed to organizational stakeholders, who may want to take the organization in a particular direction. Another tool is the team building technique. Because many if not most tasks within the organization are completed by small groups and/or teams, team building is important to organizational success. To enhance a team's morale and problem-solving skills, IO psychologists help groups build their self-confidence, group cohesiveness, and working effectiveness.
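As a rough, hypothetical illustration of the survey feedback technique described above, the sketch below aggregates attitude ratings by department so that summary results could be fed back to stakeholders. The sample data, department names, and 1-5 rating scale are assumptions made for the example, not part of any standard instrument.

# A minimal sketch of the survey feedback technique: periodically collected
# attitude ratings are summarized per work unit, and the summaries are reported
# back to organizational stakeholders. All data and names are hypothetical.

from collections import defaultdict
from statistics import mean

# Each response: (department, 1-5 agreement rating for an attitude item).
responses = [
    ("Sales", 4), ("Sales", 2), ("Sales", 5),
    ("Engineering", 3), ("Engineering", 4),
    ("Support", 2), ("Support", 1), ("Support", 3),
]

def summarize(responses):
    """Return the mean rating and response count for each department."""
    by_department = defaultdict(list)
    for department, rating in responses:
        by_department[department].append(rating)
    return {
        department: {"mean": round(mean(ratings), 2), "n": len(ratings)}
        for department, ratings in by_department.items()
    }

for department, summary in summarize(responses).items():
    print(f"{department}: mean rating {summary['mean']} (n={summary['n']})")

In practice, OD practitioners would use validated survey instruments and present the results alongside qualitative feedback, but the cycle of aggregating responses and reporting them back is the core of the technique.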

Relation to organizational behavior and human resource management

IO psychology and organizational behavior researchers have sometimes investigated similar topics. The overlap has led to some confusion regarding how the two disciplines differ. Sometimes there has been confusion within organizations regarding the practical duties of IO psychologists and human resource management specialists.

Training

The minimum requirement for working as an IO psychologist is a master's degree. Normally, this degree requires about two to three years of postgraduate work to complete. Of all the degrees granted in IO psychology each year, approximately two thirds are at the master's level.

A comprehensive list of US and Canadian master's and doctoral programs can be found at the web site of the Society for Industrial and Organizational Psychology (SIOP). Admission into IO psychology PhD programs is highly competitive; many programs accept only a small number of applicants each year.

There are graduate degree programs in IO psychology outside of the US and Canada. The SIOP web site lists some of them.

In Australia, organizational psychologists must be accredited by the Australian Psychological Society (APS). To become an organizational psychologist, one must meet the criteria for a general psychologist's licence: a three-year bachelor's degree in psychology, a fourth-year honours degree or postgraduate diploma in psychology, and two years of full-time supervised practice plus 80 hours of professional development. Other avenues are available, such as a two-year supervised training program after honours (i.e., the 4+2 pathway), or one year of postgraduate coursework and practical placements followed by a one-year supervised training program (i.e., the 5+1 pathway). After this, psychologists can elect to specialize as organizational psychologists.

Competencies

There are many different sets of competencies for different specializations within IO psychology, and IO psychologists are versatile behavioral scientists. For example, an IO psychologist specializing in selection and recruiting should have expertise in finding the best talent for the organization and getting everyone on board, while he or she might not need to know much about executive coaching. Some IO psychologists specialize in specific areas of consulting, whereas others tend to generalize their areas of expertise. There are basic skills and knowledge an individual needs in order to be an effective IO psychologist, including being an independent learner, interpersonal skills (e.g., listening skills), and general consultation skills (e.g., skills and knowledge in the problem area).

Job outlook

U.S. News & World Report lists IO psychology as the third-best science job, with a strong job market in the U.S. In the 2020 SIOP salary survey, the median annual salary was $125,000 for PhD-level IO psychologists and $88,900 for master's-level IO psychologists. The highest-paid PhD IO psychologists were self-employed consultants, who had a median annual income of $167,000. The highest paid in private industry worked in IT ($153,000), retail ($151,000), and healthcare ($147,000). The lowest earners were found in state and local government positions, averaging approximately $100,000, and in academic positions at colleges and universities that do not award doctoral degrees, where median salaries ranged between $80,000 and $94,000.

Ethics

An IO psychologist, whether an academic, a consultant, or an employee, is expected to maintain high ethical standards. The APA's ethical principles apply to IO psychologists; for example, an IO psychologist should only accept projects for which he or she is qualified. With more organizations becoming global, it is important that an IO psychologist working outside his or her home country be aware of the rules, regulations, and cultures of the organizations and countries in which he or she works, while also adhering to the ethical standards set at home.
