Thursday, September 7, 2023

Software agent

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Software_agent

In computer science, a software agent or software AI is a computer program that acts for a user or other program in a relationship of agency, which derives from the Latin agere (to do): an agreement to act on one's behalf. Such "action on behalf of" implies the authority to decide which, if any, action is appropriate. Some agents are colloquially known as bots, from robot. They may be embodied, as when execution is paired with a robot body, or as software such as a chatbot executing on a phone (e.g. Siri) or other computing device. Software agents may be autonomous or work together with other agents or people. Software agents interacting with people (e.g. chatbots, human-robot interaction environments) may possess human-like qualities such as natural language understanding and speech, personality or embody humanoid form (see Asimo).

Related and derived concepts include intelligent agents (in particular exhibiting some aspects of artificial intelligence, such as reasoning), autonomous agents (capable of modifying the methods of achieving their objectives), distributed agents (being executed on physically distinct computers), multi-agent systems (distributed agents that work together to achieve an objective that could not be accomplished by a single agent acting alone), and mobile agents (agents that can relocate their execution onto different processors).

Concepts

The basic attributes of an autonomous software agent are that agents (see the minimal sketch after this list):

  • are not strictly invoked for a task, but activate themselves,
  • may reside in wait status on a host, perceiving context,
  • may transition to run status on a host when starting conditions are met,
  • do not require user interaction,
  • may invoke other tasks, including communication.
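
A minimal Python sketch of this lifecycle, assuming a simple polling loop (the class and callbacks are illustrative, not a standard agent framework):

import time

class MinimalAgent:
    # Hypothetical autonomous agent: it is not invoked per task,
    # but activates itself when its starting condition holds.
    def __init__(self, start_condition, task):
        self.start_condition = start_condition  # callable: context -> bool
        self.task = task                        # callable: context -> None

    def run_forever(self, sense_context, poll_seconds=1.0):
        while True:                       # wait status: reside on the host
            context = sense_context()     # perceive context
            if self.start_condition(context):
                # run status: entered when starting conditions are met,
                # with no user interaction required
                self.task(context)        # may invoke other tasks,
                                          # including communication
            time.sleep(poll_seconds)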
[Figure: Nwana's categorization of software agents]

The term "agent" describes a software abstraction, an idea, or a concept, similar to OOP terms such as methods, functions, and objects. The concept of an agent provides a convenient and powerful way to describe a complex software entity that is capable of acting with a certain degree of autonomy in order to accomplish tasks on behalf of its host. But unlike objects, which are defined in terms of methods and attributes, an agent is defined in terms of its behavior.

Various authors have proposed different definitions of agents; these commonly include concepts such as:

  • persistence (code is not executed on demand but runs continuously and decides for itself when it should perform some activity)
  • autonomy (agents have capabilities of task selection, prioritization, goal-directed behavior, decision-making without human intervention)
  • social ability (agents are able to engage other components through some sort of communication and coordination, they may collaborate on a task)
  • reactivity (agents perceive the context in which they operate and react to it appropriately).

Distinguishing agents from programs

All agents are programs, but not all programs are agents. Contrasting the term with related concepts may help clarify its meaning. Franklin & Graesser (1997) discuss four key notions that distinguish agents from arbitrary programs: reaction to the environment, autonomy, goal-orientation and persistence.

Distinguishing agents from objects

  • Agents are more autonomous than objects.
  • Agents have flexible behavior: reactive, proactive, social.
  • Agents have at least one thread of control but may have more.

Distinguishing agents from expert systems

  • Expert systems are not coupled to their environment.
  • Expert systems are not designed for reactive, proactive behavior.
  • Expert systems do not consider social ability.

Distinguishing intelligent software agents from intelligent agents in AI

  • Intelligent agents (also known as rational agents) are not just computer programs: they may also be machines, human beings, communities of human beings (such as firms), or anything that is capable of goal-directed behavior (Russell & Norvig 2003).

Impact of software agents

Software agents may offer various benefits to their end users by automating complex or repetitive tasks. However, there are organizational and cultural impacts of this technology that need to be considered prior to implementing software agents.

Organizational impact

Work contentment and job satisfaction impact

People like to perform easy tasks that provide a sensation of success, unless the repetition of simple tasks reduces overall output. In general, implementing software agents to handle administrative requirements substantially increases work contentment, as administering one's own work rarely pleases the worker. The effort freed up can go toward a higher degree of engagement in the substantive tasks of individual work. Hence, software agents may provide the basis for implementing self-controlled work, relieved from hierarchical controls and interference. Such conditions may be secured by applying software agents for the required formal support.

Cultural impact

The cultural effects of the implementation of software agents include trust affliction, skills erosion, privacy attrition and social detachment. Some users may not feel entirely comfortable fully delegating important tasks to software applications. Those who start relying solely on intelligent agents may lose important skills, for example, relating to information literacy. In order to act on a user's behalf, a software agent needs to have a complete understanding of a user's profile, including his/her personal preferences. This, in turn, may lead to unpredictable privacy issues. When users start relying on their software agents more, especially for communication activities, they may lose contact with other human users and look at the world with the eyes of their agents. These consequences are what agent researchers and users must consider when dealing with intelligent agent technologies.

History

The concept of an agent can be traced back to Hewitt's Actor Model (Hewitt, 1977) - "A self-contained, interactive and concurrently-executing object, possessing internal state and communication capability."

More formally, software agent systems are a direct evolution of multi-agent systems (MAS). MAS evolved from Distributed Artificial Intelligence (DAI), Distributed Problem Solving (DPS) and Parallel AI (PAI), thus inheriting all characteristics (good and bad) from DAI and AI.

John Sculley's 1987 “Knowledge Navigator” video portrayed an idealized relationship between end-users and agents. Being an ideal first, the field then experienced a series of unsuccessful top-down implementations, instead of a piece-by-piece, bottom-up approach. Since the 1990s the range of agent types has been broad: WWW agents, search engines, etc.

Examples of intelligent software agents

Buyer agents (shopping bots)

Buyer agents travel around a network (e.g. the internet) retrieving information about goods and services. These agents, also known as 'shopping bots', work very efficiently for commodity products such as CDs, books, electronic components, and other one-size-fits-all products. Buyer agents are typically optimized to allow for digital payment services used in e-commerce and traditional businesses.

User agents (personal agents)

User agents, or personal agents, are intelligent agents that take action on your behalf. In this category belong those intelligent agents that already perform, or will shortly perform, the following tasks:

  • Check your e-mail, sort it according to your order of preference, and alert you when important emails arrive (a sketch of a simple triage rule follows this list).
  • Play computer games as your opponent or patrol game areas for you.
  • Assemble customized news reports for you. There are several versions of these, including CNN.
  • Find information for you on the subject of your choice.
  • Fill out forms on the Web automatically for you, storing your information for future reference.
  • Scan Web pages looking for and highlighting text that constitutes the "important" part of the information there.
  • Discuss topics with you, ranging from your deepest fears to sports.
  • Facilitate online job searches by scanning known job boards and sending your résumé to opportunities that meet the desired criteria.
  • Synchronize your profile across heterogeneous social networks.
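
A hedged Python sketch of the e-mail triage task above (the rules, addresses, and scoring are invented for illustration; this is not a real mail API):

from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    subject: str

# Assumed user preferences; a real agent would learn or be configured with these.
PRIORITY_SENDERS = {"boss@example.com"}
KEYWORDS = ("urgent", "deadline")

def priority(msg: Message) -> int:
    # Higher scores mean the agent should surface the message sooner.
    score = 0
    if msg.sender in PRIORITY_SENDERS:
        score += 2
    if any(k in msg.subject.lower() for k in KEYWORDS):
        score += 1
    return score

def triage(inbox: list[Message]) -> list[Message]:
    # Sort by the user's order of preference and alert on important mail.
    ranked = sorted(inbox, key=priority, reverse=True)
    for msg in ranked:
        if priority(msg) >= 2:
            print(f"ALERT: important mail from {msg.sender}: {msg.subject}")
    return ranked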

Monitoring-and-surveillance (predictive) agents

Monitoring and surveillance agents are used to observe and report on equipment, usually computer systems. The agents may keep track of company inventory levels, observe competitors' prices and relay them back to the company, watch for stock manipulation by insider trading and rumors, etc.

Service monitoring

For example, NASA's Jet Propulsion Laboratory has an agent that monitors inventory and planning, schedules equipment orders to keep costs down, and manages food storage facilities. These agents usually monitor complex computer networks and can keep track of the configuration of each computer connected to the network.

A special case of monitoring-and-surveillance agents consists of organizations of agents used to emulate the human decision-making process during tactical operations. The agents monitor the status of assets (ammunition, weapons available, platforms for transport, etc.) and receive goals (missions) from higher-level agents. The agents then pursue the goals with the assets at hand, minimizing expenditure of the assets while maximizing goal attainment. (See Popplewell, "Agents and Applicability".)

Data-mining agents

Data-mining agents use information technology to find trends and patterns in an abundance of information from many different sources. The user can then sort through this information to find what they are seeking.

A data mining agent operates in a data warehouse discovering information. A 'data warehouse' brings together information from many different sources. "Data mining" is the process of looking through the data warehouse to find information that you can use to take action, such as ways to increase sales or keep customers who are considering defecting.

'Classification' is one of the most common types of data mining, which finds patterns in information and categorizes them into different classes. Data mining agents can also detect major shifts in trends or a key indicator and can detect the presence of new information and alert you to it. For example, the agent may detect a decline in the construction industry for an economy; based on this relayed information construction companies will be able to make intelligent decisions regarding the hiring/firing of employees or the purchase/lease of equipment in order to best suit their firm.
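
As a hedged sketch of the trend-shift detection just described (the indicator series, window, and threshold are all invented for illustration):

def detect_sustained_decline(series, window=4, threshold=-0.05):
    # Flag a sustained decline: the average period-over-period change
    # in the trailing window falls below the threshold (e.g. -5%).
    if len(series) < window + 1:
        return False
    pairs = zip(series[-window - 1:-1], series[-window:])
    changes = [(b - a) / a for a, b in pairs]
    return sum(changes) / len(changes) < threshold

# e.g. a quarterly construction-output index (illustrative numbers)
index = [100, 99, 95, 90, 84, 78]
if detect_sustained_decline(index):
    print("Agent alert: sustained decline in the construction index.")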

Networking and communicating agents

Some other examples of current intelligent agents include some spam filters, game bots, and server monitoring tools. Search engine indexing bots also qualify as intelligent agents.

  • User agent - for browsing the World Wide Web
  • Mail transfer agent - for serving e-mail, such as Microsoft Outlook. It communicates with the POP3 mail server without users having to understand POP3 command protocols, and it even has rule sets that filter mail for the user, sparing them the trouble of having to do it themselves.
  • SNMP agent
  • In Unix-style networking servers, httpd is an HTTP daemon that implements the Hypertext Transfer Protocol at the root of the World Wide Web
  • Management agents used to manage telecom devices
  • Crowd simulation for safety planning or 3D computer graphics,
  • Wireless beaconing agent - a simple, single-tasking process for implementing a wireless lock or electronic leash, in conjunction with more complex software agents hosted e.g. on wireless receivers.
  • Use of autonomous agents (deliberately equipped with noise) to optimize coordination in groups online.

Software development agents (aka software bots)

Software bots are becoming important in software engineering.

Security agents

Agents are also used in software security applications to intercept, examine and act on various types of content. Examples include:

  • Data Loss Prevention (DLP) Agents[13] - examine user operations on a computer or network, compare with policies specifying allowed actions, and take appropriate action (e.g. allow, alert, block). The more comprehensive DLP agents can also be used to perform EDR functions.
  • Endpoint Detection and Response (EDR) Agents - monitor all activity on an endpoint computer in order to detect and respond to malicious activities
  • Cloud Access Security Broker (CASB) Agents - similar to DLP Agents, however examining traffic going to cloud applications

Design issues

Issues to consider in the development of agent-based systems include

  • how tasks are scheduled and how synchronization of tasks is achieved
  • how tasks are prioritized by agents
  • how agents can collaborate, or recruit resources,
  • how agents can be re-instantiated in different environments, and how their internal state can be stored,
  • how the environment will be probed and how a change of environment leads to behavioral changes of the agents
  • how messaging and communication can be achieved,
  • what hierarchies of agents are useful (e.g. task execution agents, scheduling agents, resource providers ...).

For software agents to work together efficiently they must share semantics of their data elements. This can be done by having computer systems publish their metadata.

The definition of agent processing can be approached from two interrelated directions:

  • internal state processing and ontologies for representing knowledge
  • interaction protocols – standards for specifying communication of tasks

Agent systems are used to model real-world systems with concurrency or parallel processing.

  • Agent Machinery – Engines of various kinds, which support the varying degrees of intelligence
  • Agent Content – Data employed by the machinery in Reasoning and Learning
  • Agent Access – Methods to enable the machinery to perceive content and perform actions as outcomes of Reasoning
  • Agent Security – Concerns related to distributed computing, augmented by a few special concerns related to agents

The agent uses its access methods to go out into local and remote databases to forage for content. These access methods may include setting up news stream delivery to the agent, or retrieval from bulletin boards, or using a spider to walk the Web. The content that is retrieved in this way is probably already partially filtered – by the selection of the newsfeed or the databases that are searched. The agent next may use its detailed searching or language-processing machinery to extract keywords or signatures from the body of the content that has been received or retrieved. This abstracted content (or event) is then passed to the agent's Reasoning or inferencing machinery in order to decide what to do with the new content. This process combines the event content with the rule-based or knowledge content provided by the user. If this process finds a good hit or match in the new content, the agent may use another piece of its machinery to do a more detailed search on the content. Finally, the agent may decide to take an action based on the new content; for example, to notify the user that an important event has occurred. This action is verified by a security function and then given the authority of the user. The agent makes use of a user-access method to deliver that message to the user. If the user confirms that the event is important by acting quickly on the notification, the agent may also employ its learning machinery to increase its weighting for this kind of event.
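
The cycle just described can be compressed into a short Python sketch; every function here is a placeholder for the corresponding piece of machinery described above, not a real framework API:

def agent_cycle(fetch, extract_event, matches_user_rules, notify,
                user_confirmed, weights):
    # Access: forage local and remote sources for (already partially
    # filtered) content.
    content = fetch()
    # Machinery: extract keywords or signatures, abstracting the content
    # into an event.
    event = extract_event(content)
    # Reasoning: combine the event with the user's rule-based or
    # knowledge content.
    if matches_user_rules(event):
        # Action: e.g. notify the user (assume a security function has
        # verified and authorized this action).
        notify(event)
        # Learning: if the user confirms importance by acting quickly,
        # increase the weighting for this kind of event.
        if user_confirmed(event):
            weights[event["type"]] = weights.get(event["type"], 1.0) * 1.1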

Bots can act on behalf of their creators to do good as well as harm. There are a few ways in which bots can demonstrate that they are designed with the best intentions and are not built to do harm. The first is to have the bot identify itself in the User-Agent HTTP header when communicating with a site. The source IP address must also be validated to establish it as legitimate. Next, the bot must always respect a site's robots.txt file, since it has become the standard across most of the web. And as with respecting robots.txt, bots should shy away from being too aggressive and respect any crawl-delay instructions.
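
A minimal sketch of such a well-behaved bot, using only the Python standard library (the bot name and URLs are placeholders):

import time
import urllib.robotparser
from urllib.request import Request, urlopen

BOT_UA = "ExampleBot/1.0 (+https://example.com/bot-info)"  # identify the bot

rp = urllib.robotparser.RobotFileParser("https://example.com/robots.txt")
rp.read()  # always consult robots.txt first

url = "https://example.com/some/page"
if rp.can_fetch(BOT_UA, url):
    # Honor any crawl-delay directive; default to a polite 1-second pause.
    time.sleep(rp.crawl_delay(BOT_UA) or 1)
    page = urlopen(Request(url, headers={"User-Agent": BOT_UA}))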

Expansion of the universe

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Expansion_of_the_universe

The expansion of the universe is the increase in distance between gravitationally unbound parts of the observable universe with time. It is an intrinsic expansion; the universe does not expand "into" anything and does not require space to exist "outside" it. To any observer in the universe, it appears that all but the nearest galaxies (which are bound by gravity) recede at speeds that are proportional to their distance from the observer, on average. While objects cannot move faster than light, this limitation only applies with respect to local reference frames and does not limit the recession rates of cosmologically distant objects.

Cosmic expansion is a key feature of Big Bang cosmology. It can be modeled mathematically with the Friedmann–Lemaître–Robertson–Walker metric, where it corresponds to an increase in the scale of the spatial part of the universe's spacetime metric (which governs the size and geometry of spacetime). Within this framework, stationary objects separate over time because space is expanding. However, this is not a generally covariant description but rather only a choice of coordinates. Contrary to common misconception, it is equally valid to adopt a description in which space does not expand and objects simply move apart under the influence of their mutual gravity. Although cosmic expansion is often framed as a consequence of general relativity, it is also predicted by Newtonian gravity.

According to inflation theory, during the inflationary epoch about $10^{-32}$ of a second after the Big Bang, the universe suddenly expanded, and its volume increased by a factor of at least $10^{78}$ (an expansion of distance by a factor of at least $10^{26}$ in each of the three dimensions). This would be equivalent to expanding an object 1 nanometer ($10^{-9}$ m, about half the width of a molecule of DNA) in length to one approximately 10.6 light years (about $10^{17}$ m or 62 trillion miles) long. Cosmic expansion subsequently decelerated down to much slower rates, until at around 9.8 billion years after the Big Bang (4 billion years ago) it began to gradually expand more quickly, and is still doing so. Physicists have postulated the existence of dark energy, appearing as a cosmological constant in the simplest gravitational models, as a way to explain this late-time acceleration. According to the simplest extrapolation of the currently favored cosmological model, the Lambda-CDM model, this acceleration becomes more dominant into the future.
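
As a rough check of those figures (simple arithmetic, not from the source):

$10^{-9}\,\mathrm{m} \times 10^{26} = 10^{17}\,\mathrm{m}, \qquad \frac{10^{17}\,\mathrm{m}}{9.46 \times 10^{15}\,\mathrm{m/ly}} \approx 10.6\ \mathrm{light\ years}.$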

History

In 1912, Vesto M. Slipher discovered that light from remote galaxies was redshifted, which was later interpreted as galaxies receding from the Earth. In 1922, Alexander Friedmann used the Einstein field equations to provide theoretical evidence that the universe is expanding.

Swedish astronomer Knut Lundmark was the first person to find observational evidence for expansion in 1924. According to Ian Steer of the NASA/IPAC Extragalactic Database of Galaxy Distances, "Lundmark's extragalactic distance estimates were far more accurate than Hubble's, consistent with an expansion rate (Hubble constant) that was within 1% of the best measurements today."

In 1927, Georges Lemaître independently reached a similar conclusion to Friedmann on a theoretical basis, and also presented observational evidence for a linear relationship between distance to galaxies and their recessional velocity. Edwin Hubble observationally confirmed Lundmark's and Lemaître's findings in 1929. Assuming the cosmological principle, these findings would imply that all galaxies are moving away from each other.

Astronomer Walter Baade recalculated the size of the known universe in the 1940s, doubling the previous calculation made by Hubble in 1929. He announced this finding to considerable astonishment at the 1952 meeting of the International Astronomical Union in Rome. For most of the second half of the 20th century, the value of the Hubble constant was estimated to be between 50 and 90 (km/s)/Mpc.

On 13 January 1994, NASA formally announced the completion of repairs to the main mirror of the Hubble Space Telescope, allowing for sharper images and, consequently, more accurate analyses of its observations. Shortly after the repairs were made, Wendy Freedman's 1994 Key Project analyzed the recession velocity of M100 from the core of the Virgo cluster, offering a Hubble constant measurement of 80±17 (km/s)/Mpc (kilometres per second per megaparsec). Later the same year, Adam Riess et al. used an empirical method of visual-band light curve shapes to more finely estimate the luminosity of Type Ia supernovae. This further minimized the systematic measurement errors of the Hubble constant, to 67±7 (km/s)/Mpc. Riess's measurements of the recession velocity of the nearby Virgo cluster more closely agree with subsequent and independent analyses of Cepheid variable calibrations of Type Ia supernovae, which estimate a Hubble constant of 73±7 (km/s)/Mpc. In 2003, David Spergel's analysis of the cosmic microwave background during the first-year observations of the Wilkinson Microwave Anisotropy Probe (WMAP) satellite further agreed with the estimated expansion rates for local galaxies, 72±5 (km/s)/Mpc.

Structure of cosmic expansion

The universe at the largest scales is observed to be homogeneous (the same everywhere) and isotropic (the same in all directions), consistent with the cosmological principle. These constraints demand that any expansion of the universe accord with Hubble's law, in which objects recede from each observer with velocities proportional to their positions with respect to that observer. That is, recession velocities scale with (observer-centered) positions according to

$\vec{v} = H\,\vec{x},$

where the Hubble rate $H$ quantifies the rate of expansion; $H$ is a function of cosmic time.
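
As an illustrative computation (assuming a present-day Hubble rate of roughly $H_0 \approx 70\ \mathrm{(km/s)/Mpc}$), a galaxy at a distance of 100 Mpc recedes at

$v = H_0 d \approx 70\ \mathrm{(km/s)/Mpc} \times 100\ \mathrm{Mpc} = 7000\ \mathrm{km/s}.$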

Dynamics of cosmic expansion

[Figure: the expansion history depends on the density of the universe. In the figure, $\Omega$ is the ratio of the matter density to the critical density, for a matter-dominated universe; the "acceleration" curve shows the trajectory of the scale factor for a universe with dark energy.]

The expansion of the universe can be understood as a consequence of an initial impulse (possibly due to inflation), which sent the contents of the universe flying apart. The mutual gravitational attraction of the matter and radiation within the universe gradually slows this expansion over time, but expansion nevertheless continues due to momentum left over from the initial impulse. Also, certain exotic relativistic fluids, such as dark energy and inflation, exert gravitational repulsion in the cosmological context, which accelerates the expansion of the universe. A cosmological constant also has this effect.

Mathematically, the expansion of the universe is quantified by the scale factor, $a$, which is proportional to the average separation between objects, such as galaxies. The scale factor is a function of time and is conventionally set to $a = 1$ at the present time. Because the universe is expanding, $a$ is smaller in the past and larger in the future. Extrapolating back in time with certain cosmological models will yield a moment when the scale factor was zero; our current understanding of cosmology sets this time at 13.787 ± 0.020 billion years ago. If the universe continues to expand forever, the scale factor will approach infinity in the future. It is also possible in principle for the universe to stop expanding and begin to contract, which corresponds to the scale factor decreasing in time.

The scale factor is a parameter of the FLRW metric, and its time evolution is governed by the Friedmann equations. The second Friedmann equation,

$\frac{\ddot{a}}{a} = -\frac{4\pi G}{3c^2}\left(\rho + 3p\right) + \frac{\Lambda c^2}{3},$

shows how the contents of the universe influence its expansion rate. Here, $G$ is the gravitational constant, $\rho$ is the energy density within the universe, $p$ is the pressure, $c$ is the speed of light, and $\Lambda$ is the cosmological constant. A positive energy density leads to deceleration of the expansion, $\ddot{a} < 0$, and a positive pressure further decelerates expansion. On the other hand, sufficiently negative pressure, with $p < -\rho/3$, leads to accelerated expansion, and the cosmological constant also accelerates expansion. Nonrelativistic matter is essentially pressureless, with $p = 0$, while a gas of ultrarelativistic particles (such as a photon gas) has positive pressure $p = \rho/3$. Negative-pressure fluids, like dark energy, are not experimentally confirmed, but the existence of dark energy is inferred from astronomical observations.

Distances in the expanding universe

Comoving coordinates

In an expanding universe, it is often useful to study the evolution of structure with the expansion of the universe factored out. This motivates the use of comoving coordinates, which are defined to grow proportionally with the scale factor. If an object is moving only with the Hubble flow of the expanding universe, with no other motion, then it remains stationary in comoving coordinates. The comoving coordinates are the spatial coordinates in the FLRW metric.

Shape of the universe

The universe is a four-dimensional spacetime, but within a universe that obeys the cosmological principle, there is a natural choice of three-dimensional spatial surface. These are the surfaces on which observers who are stationary in comoving coordinates agree on the age of the universe. In a universe governed by special relativity, such surfaces would be hyperboloids, because relativistic time dilation means that rapidly-receding distant observers' clocks are slowed, so that spatial surfaces must bend "into the future" over long distances. However, within general relativity, the shape of these comoving synchronous spatial surfaces is affected by gravity. Current observations are consistent with these spatial surfaces being geometrically flat (so that, for example, the angles of a triangle add up to 180 degrees).

Cosmological horizons

An expanding universe typically has a finite age. Light, and other particles, can only have propagated a finite distance. The comoving distance that such particles can have covered over the age of the universe is known as the particle horizon, and the region of the universe that lies within our particle horizon is known as the observable universe.

If the dark energy that is inferred to dominate the universe today is a cosmological constant, then the particle horizon converges to a finite value in the infinite future. This implies that the amount of the universe that we will ever be able to observe is limited. Many systems exist whose light can never reach us, because there is a cosmic event horizon induced by the repulsive gravity of the dark energy.

When studying the evolution of structure within the universe, a natural scale emerges, known as the Hubble horizon. Cosmological perturbations much larger than the Hubble horizon are not dynamical, because gravitational influences do not have time to propagate across them, while perturbations much smaller than the Hubble horizon are straightforwardly governed by Newtonian gravitational dynamics.

Consequences of cosmic expansion

Velocities and redshifts

An object's peculiar velocity is its velocity with respect to the comoving coordinate grid, i.e., with respect to the average motion of the surrounding material. It is a measure of how a particle's motion deviates from the Hubble flow of the expanding universe. The peculiar velocities of nonrelativistic particles decay as the universe expands, in inverse proportion with the cosmic scale factor. This can be understood as a self-sorting effect. A particle that is moving in some direction gradually overtakes the Hubble flow of cosmic expansion in that direction, asymptotically approaching material with the same velocity as its own.

More generally, the peculiar momenta of both relativistic and nonrelativistic particles decay in inverse proportion with the scale factor. For photons, this leads to the cosmological redshift. While the cosmological redshift is often explained as the stretching of photon wavelengths due to "expansion of space", it is more naturally viewed as a consequence of the Doppler effect.

Temperature

The universe cools as it expands. This follows from the decay of particles' peculiar momenta, as discussed above. It can also be understood as adiabatic cooling. The temperature of ultrarelativistic fluids, often called "radiation" and including the cosmic microwave background, scales inversely with the scale factor (i.e. $T \propto 1/a$). The temperature of nonrelativistic matter drops more sharply, scaling as the inverse square of the scale factor (i.e. $T \propto 1/a^2$).

Density

The contents of the universe dilute as it expands. The number of particles within a comoving volume remains fixed (on average), while the volume expands. For nonrelativistic matter, this implies that the energy density drops as $\rho \propto a^{-3}$, where $a$ is the scale factor.

For ultrarelativistic particles ("radiation"), the energy density drops more sharply, as $\rho \propto a^{-4}$. This is because in addition to the volume dilution of the particle count, the energy of each particle (including the rest mass energy) also drops significantly due to the decay of peculiar momenta.

In general, we can consider a perfect fluid with pressure $p = w\rho$, where $\rho$ is the energy density. The parameter $w$ is the equation of state parameter. The energy density of such a fluid drops as

$\rho \propto a^{-3(1+w)}.$

Nonrelativistic matter has $w = 0$, while radiation has $w = 1/3$. For an exotic fluid with negative pressure, like dark energy, the energy density drops more slowly; if $w = -1$ it remains constant in time. If $w < -1$, corresponding to phantom energy, the energy density grows as the universe expands.
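
That scaling follows from the fluid (continuity) equation in an FLRW background (a standard derivation, sketched here for completeness):

$\dot{\rho} = -3\frac{\dot{a}}{a}\left(\rho + p\right) = -3\frac{\dot{a}}{a}(1+w)\,\rho \;\Longrightarrow\; \rho \propto a^{-3(1+w)}.$

Setting $w = 0$, $1/3$, and $-1$ recovers the matter ($a^{-3}$), radiation ($a^{-4}$), and cosmological-constant (constant) behaviors quoted above.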

Expansion history

[Figure: a graphical representation of the expansion of the universe from the Big Bang to the present day, with the inflationary epoch represented as the dramatic expansion seen on the left. The visualization shows only a section of the universe; the empty space outside the diagram should not be taken to represent empty space outside the universe (which does not necessarily exist).]

Cosmic inflation

Inflation is a period of accelerated expansion hypothesized to have occurred at a time of around $10^{-32}$ seconds. It would have been driven by the inflaton, a field that has a positive-energy false vacuum state. Inflation was originally proposed to explain the absence of exotic relics predicted by grand unified theories, such as magnetic monopoles, because the rapid expansion would have diluted such relics. It was subsequently realized that the accelerated expansion would also solve the horizon problem and the flatness problem. Additionally, quantum fluctuations during inflation would have created initial variations in the density of the universe, which gravity later amplified to yield the observed spectrum of matter density variations.

During inflation, the cosmic scale factor grows exponentially in time. In order to solve the horizon and flatness problems, inflation must have lasted long enough that the scale factor grew by at least a factor of $e^{60}$ (about $10^{26}$).
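
As a check (simple arithmetic): $e^{60} = 10^{60/\ln 10} \approx 10^{26.1}$, matching the quoted factor.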

Radiation epoch

The history of the universe after inflation but before a time of about 1 second is largely unknown. However, the universe is known to have been dominated by ultrarelativistic Standard-Model particles, conventionally called radiation, by the time of neutrino decoupling at about 1 second. During radiation domination, cosmic expansion decelerated, with the scale factor growing proportionally with the square root of the time ($a \propto t^{1/2}$).

Matter epoch

Since radiation redshifts as the universe expands, eventually nonrelativistic matter comes to dominate the energy density of the universe. This transition happened at a time of about 50 thousand years. During the matter-dominated epoch, cosmic expansion also decelerated, with the scale factor growing as the 2/3 power of the time ($a \propto t^{2/3}$). Also, gravitational structure formation is most efficient when nonrelativistic matter dominates, and this epoch is responsible for the formation of galaxies and the large-scale structure of the universe.
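
Both growth laws follow from the first Friedmann equation, $H^2 = (\dot{a}/a)^2 \propto \rho$ (ignoring curvature and $\Lambda$); this is a standard derivation, included here as a sketch:

$\rho \propto a^{-4} \Rightarrow \dot{a} \propto a^{-1} \Rightarrow a \propto t^{1/2}, \qquad \rho \propto a^{-3} \Rightarrow \dot{a} \propto a^{-1/2} \Rightarrow a \propto t^{2/3}.$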

Dark energy

Around 3 billion years ago, at a time of about 11 billion years, dark energy began to dominate the energy density of the universe. This transition came about because dark energy does not dilute as the universe expands, instead maintaining a constant energy density. Similarly to inflation, dark energy drives accelerated expansion, such that the scale factor grows exponentially in time.

Measuring the expansion rate

When an object is receding, its light gets stretched (redshifted). When the object is approaching, its light gets compressed (blueshifted).

The most direct way to measure the expansion rate is to independently measure the recession velocities and the distances of distant objects, such as galaxies. The ratio between these quantities gives the Hubble rate, in accordance with Hubble's law. Typically, the distance is measured using a standard candle, which is an object or event for which the intrinsic brightness is known. The object's distance can then be inferred from the observed apparent brightness. Meanwhile, the recession speed is measured through the redshift. Hubble used this approach for his original measurement of the expansion rate, by measuring the brightness of Cepheid variable stars and the redshifts of their host galaxies. More recently, using Type Ia supernovae, the expansion rate was measured to be H0 = 73.24 ± 1.74 (km/s)/Mpc. This means that for every million parsecs of distance from the observer, objects at that distance are receding at about 73 kilometres per second (160,000 mph).
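
The standard-candle step can be made explicit (a textbook relation, stated here for reference): for a candle of known luminosity $L$ and observed flux $F$, the inverse-square law gives the distance, and Hubble's law then gives the expansion rate:

$F = \frac{L}{4\pi d^2} \;\Rightarrow\; d = \sqrt{\frac{L}{4\pi F}}, \qquad H_0 = \frac{v}{d}.$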

Supernovae are observable at such great distances that the light travel time therefrom can approach the age of the universe. Consequently, they can be used to measure not only the present-day expansion rate but also the expansion history. In work that was awarded the 2011 Nobel Prize in Physics, supernova observations were used to determine that cosmic expansion is accelerating in the present epoch.

Another possibility is to assume a cosmological model, e.g. the Lambda-CDM model, and infer the present-day expansion rate from the sizes of the largest fluctuations seen in the cosmic microwave background. A higher expansion rate would imply a smaller characteristic size of CMB fluctuations, and vice versa. The Planck collaboration measured the expansion rate this way and determined H0 = 67.4 ± 0.5 (km/s)/Mpc. There is a disagreement between this measurement and the supernova-based measurements, known as the Hubble tension.

A third option proposed recently is to use information from gravitational wave events (especially those involving the merger of neutron stars, like GW170817) to measure the expansion rate. Such measurements do not yet have the precision to resolve the Hubble tension.

In principle, the cosmic expansion history can also be measured by studying how redshifts, distances, fluxes, angular positions, and angular sizes of astronomical objects change over the course of the time that they are being observed. These effects are too small to have been detected yet. However, changes in redshift or flux could be observed by the Square Kilometre Array or the Extremely Large Telescope in the mid-2030s.

Conceptual considerations and misconceptions

Measuring distances in expanding space

[Figure: two views of an isometric embedding of part of the visible universe over most of its history, showing how a light ray (red line) can travel an effective distance of 28 billion light years (orange line) in just 13.8 billion years of cosmological time.]

At cosmological scales, the present universe conforms to Euclidean space, which cosmologists describe as geometrically flat, to within experimental error.

Consequently, the rules of Euclidean geometry associated with Euclid's fifth postulate hold in the present universe in 3D space. It is, however, possible that the geometry of past 3D space could have been highly curved. The curvature of space is often modeled using a non-zero Riemann curvature tensor, in the sense of the curvature of Riemannian manifolds. Euclidean "geometrically flat" space has a Riemann curvature tensor of zero.

"Geometrically flat" space has 3 dimensions and is consistent with Euclidean space. However, spacetime on the other hand, is 4 dimensions; it is not flat according to Einsten's general theory of relativity. Einstein's theory postulates that "matter and energy curve spacetime, and there are enough matter and energy lying around to provide for curvature."

In part to accommodate such different geometries, the expansion of the universe is inherently general relativistic. It cannot be modeled with special relativity alone: though such models exist, they are at fundamental odds with the observed interaction between matter and spacetime seen in our universe.

The spacetime diagrams in the figure above show the large-scale geometry of the universe according to the ΛCDM cosmological model. Two of the dimensions of space are omitted, leaving one dimension of space (the dimension that grows as the cone gets larger) and one of time (the dimension that proceeds "up" the cone's surface). The narrow circular end of the diagram corresponds to a cosmological time of 700 million years after the Big Bang, while the wide end is a cosmological time of 18 billion years, where one can see the beginning of the accelerating expansion as a splaying outward of the spacetime, a feature that eventually dominates in this model. The purple grid lines mark off cosmological time at intervals of one billion years from the Big Bang. The cyan grid lines mark off comoving distance at intervals of one billion light years in the present era (less in the past and more in the future). Note that the circular curling of the surface is an artifact of the embedding with no physical significance and is done purely for illustrative purposes; a flat universe does not curl back onto itself. (A similar effect can be seen in the tubular shape of the pseudosphere.)

The brown line on the diagram is the worldline of Earth (or more precisely its location in space, even before it was formed). The yellow line is the worldline of the most distant known quasar. The red line is the path of a light beam emitted by the quasar about 13 billion years ago and reaching Earth at the present day. The orange line shows the present-day distance between the quasar and Earth, about 28 billion light years, which is a larger distance than the age of the universe multiplied by the speed of light, ct.

According to the equivalence principle of general relativity, the rules of special relativity are locally valid in small regions of spacetime that are approximately flat. In particular, light always travels locally at the speed c; in the diagram, this means, according to the convention of constructing spacetime diagrams, that light beams always make an angle of 45° with the local grid lines. It does not follow, however, that light travels a distance ct in a time t, as the red worldline illustrates. While it always moves locally at c, its time in transit (about 13 billion years) is not related to the distance traveled in any simple way, since the universe expands as the light beam traverses space and time. The distance traveled is thus inherently ambiguous because of the changing scale of the universe. Nevertheless, there are two distances that appear to be physically meaningful: the distance between Earth and the quasar when the light was emitted, and the distance between them in the present era (taking a slice of the cone along the dimension defined as the spatial dimension). The former distance is about 4 billion light years, much smaller than ct, whereas the latter distance (shown by the orange line) is about 28 billion light years, much larger than ct. In other words, if space were not expanding today, it would take 28 billion years for light to travel between Earth and the quasar, while if the expansion had stopped at the earlier time, it would have taken only 4 billion years.
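
Both distances are instances of the proper distance in an FLRW universe, obtained by integrating along the light ray's path (a standard formula, included here for reference): light emitted at time $t_e$ and received at the present time $t_0$ lies, today, at a proper distance

$d(t_0) = c \int_{t_e}^{t_0} \frac{dt}{a(t)},$

which exceeds $c\,(t_0 - t_e)$ because $a(t) < 1$ throughout the light's journey.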

The light took much longer than 4 billion years to reach us though it was emitted from only 4 billion light years away. In fact, the light emitted towards Earth was actually moving away from Earth when it was first emitted; the metric distance to Earth increased with cosmological time for the first few billion years of its travel time, also indicating that the expansion of space between Earth and the quasar at the early time was faster than the speed of light. None of this behavior originates from a special property of metric expansion, but rather from local principles of special relativity integrated over a curved surface.

Topology of expanding space

Over time, the space that makes up the universe is expanding. The words 'space' and 'universe', sometimes used interchangeably, have distinct meanings in this context. Here 'space' is a mathematical concept that stands for the three-dimensional manifold into which our respective positions are embedded while 'universe' refers to everything that exists including the matter and energy in space, the extra-dimensions that may be wrapped up in various strings, and the time through which various events take place. The expansion of space is in reference to this 3-D manifold only; that is, the description involves no structures such as extra dimensions or an exterior universe.

The ultimate topology of space is a posteriori – something that in principle must be observed – as there are no constraints that can simply be reasoned out (in other words, there cannot be any a priori constraints) on how the space in which we live is connected, or on whether it wraps around on itself as a compact space. Certain cosmological models, such as Gödel's universe, even permit bizarre worldlines that intersect with themselves. Ultimately, the question of whether we are in something like a "Pac-Man universe" – where traveling far enough in one direction would allow one to simply end up back in the same place, like going all the way around the surface of a balloon (or a planet like the Earth) – is an observational one, constrained as measurable or non-measurable by the universe's global geometry. At present, observations are consistent with the universe being infinite in extent and simply connected, though we are limited in distinguishing between simple and more complicated proposals by cosmological horizons. The universe could be infinite in extent or it could be finite; but the evidence that leads to the inflationary model of the early universe also implies that the "total universe" is much larger than the observable universe, so any edges or exotic geometries or topologies would not be directly observable, as light from such scales has not had time to reach us. For all intents and purposes, it is safe to assume that the universe is infinite in spatial extent, without edge or strange connectedness.

Regardless of the overall shape of the universe, the question of what the universe is expanding into is one that does not require an answer according to the theories that describe the expansion; the way we define space in our universe in no way requires additional exterior space into which it can expand since an expansion of an infinite expanse can happen without changing the infinite extent of the expanse. All that is certain is that the manifold of space in which we live simply has the property that the distances between objects are getting larger as time goes on. This only implies the simple observational consequences associated with the metric expansion explored below. No "outside" or embedding in hyperspace is required for an expansion to occur. The visualizations often seen of the universe growing as a bubble into nothingness are misleading in that respect. There is no reason to believe there is anything "outside" of the expanding universe into which the universe expands.

Even if the overall spatial extent is infinite and thus the universe cannot get any "larger", we still say that space is expanding because, locally, the characteristic distance between objects is increasing. As an infinite space grows, it remains infinite.

Density of universe during expansion

Despite being extremely dense when very young and during part of its early expansion – far denser than is usually required to form a black hole – the universe did not re-collapse into a black hole. This is because commonly used calculations for gravitational collapse are usually based upon objects of relatively constant size, such as stars, and do not apply to rapidly expanding space such as the Big Bang.

Effects of expansion on small scales

The expansion of space is sometimes described as a force that acts to push objects apart. Though this is an accurate description of the effect of the cosmological constant, it is not an accurate picture of the phenomenon of expansion in general.

[Figure: animation of an expanding raisin bread model; as the bread doubles in width (depth and length), the distances between raisins also double.]

In addition to slowing the overall expansion, gravity causes local clumping of matter into stars and galaxies. Once objects are formed and bound by gravity, they "drop out" of the expansion and do not subsequently expand under the influence of the cosmological metric, there being no force compelling them to do so.

There is no difference between the inertial expansion of the universe and the inertial separation of nearby objects in a vacuum; the former is simply a large-scale extrapolation of the latter.

Once objects are bound by gravity, they no longer recede from each other. Thus, the Andromeda galaxy, which is bound to the Milky Way galaxy, is actually falling towards us and is not expanding away. Within the Local Group, gravitational interactions have changed the inertial patterns of objects such that there is no cosmological expansion taking place. Once one goes beyond the Local Group, the inertial expansion is measurable, though systematic gravitational effects imply that larger and larger parts of space will eventually fall out of the "Hubble Flow" and end up as bound, non-expanding objects up to the scales of superclusters of galaxies. We can predict such future events by knowing the precise way the Hubble Flow is changing as well as the masses of the objects to which we are being gravitationally pulled. Currently, the Local Group is being gravitationally pulled towards either the Shapley Supercluster or the "Great Attractor", with which, if dark energy were not acting, we would eventually merge and which we would then no longer see expanding away from us.

A consequence of metric expansion being due to inertial motion is that a uniform local "explosion" of matter into a vacuum can be locally described by the FLRW geometry, the same geometry that describes the expansion of the universe as a whole and was also the basis for the simpler Milne universe, which ignores the effects of gravity. In particular, general relativity predicts that light will move at the speed c with respect to the local motion of the exploding matter, a phenomenon analogous to frame dragging.

The situation changes somewhat with the introduction of dark energy or a cosmological constant. A cosmological constant due to a vacuum energy density has the effect of adding a repulsive force between objects that is proportional (not inversely proportional) to distance. Unlike inertia it actively "pulls" on objects that have clumped together under the influence of gravity, and even on individual atoms. However, this does not cause the objects to grow steadily or to disintegrate; unless they are very weakly bound, they will simply settle into an equilibrium state that is slightly (undetectably) larger than it would otherwise have been. As the universe expands and the matter in it thins, the gravitational attraction decreases (since it is proportional to the density), while the cosmological repulsion increases. Thus, the ultimate fate of the ΛCDM universe is a near vacuum expanding at an ever-increasing rate under the influence of the cosmological constant. However, gravitationally bound objects like the Milky Way do not expand, and the Andromeda galaxy is moving fast enough towards us that it will still merge with the Milky Way in around 3 billion years.

Metric expansion and speed of light

At the end of the early universe's inflationary period, all the matter and energy in the universe was set on an inertial trajectory consistent with the equivalence principle and Einstein's general theory of relativity and this is when the precise and regular form of the universe's expansion had its origin (that is, matter in the universe is separating because it was separating in the past due to the inflaton field).

While special relativity prohibits objects from moving faster than light with respect to a local reference frame where spacetime can be treated as flat and unchanging, it does not apply to situations where spacetime curvature or evolution in time become important. These situations are described by general relativity, which allows the separation between two distant objects to increase faster than the speed of light, although the definition of "distance" here is somewhat different from that used in an inertial frame. The definition of distance used here is the summation or integration of local comoving distances, all done at constant local proper time. For example, galaxies that are farther than the Hubble radius, approximately 4.5 gigaparsecs or 14.7 billion light-years, away from us have a recession speed that is faster than the speed of light. Visibility of these objects depends on the exact expansion history of the universe. Light that is emitted today from galaxies beyond the more-distant cosmological event horizon, about 5 gigaparsecs or 16 billion light-years, will never reach us, although we can still see the light that these galaxies emitted in the past. Because of the high rate of expansion, it is also possible for a distance between two objects to be greater than the value calculated by multiplying the speed of light by the age of the universe. These details are a frequent source of confusion among amateurs and even professional physicists. Due to the non-intuitive nature of the subject and what has been described by some as "careless" choices of wording, certain descriptions of the metric expansion of space and the misconceptions to which such descriptions can lead are an ongoing subject of discussion within the fields of education and communication of scientific concepts.
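
The Hubble radius quoted above is simply $c/H_0$ (an illustrative computation; the exact figure depends on the adopted value of $H_0$):

$d_H = \frac{c}{H_0} \approx \frac{3.0 \times 10^{5}\ \mathrm{km/s}}{67\ \mathrm{(km/s)/Mpc}} \approx 4.5 \times 10^{3}\ \mathrm{Mpc} \approx 14.6\ \mathrm{billion\ light\ years}.$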

Common analogies for cosmic expansion

The expansion of the universe is often illustrated with conceptual models where an expanding object is taken to represent expanding space. Note that these models can be misleading to the extent that they give the false impression that expanding space can carry objects with it. In reality, the expansion of the universe corresponds only to the inertial motion of objects away from one another.

In the "ant on a rubber rope model" one imagines an ant (idealized as pointlike) crawling at a constant speed on a perfectly elastic rope that is constantly stretching. If we stretch the rope in accordance with the ΛCDM scale factor and think of the ant's speed as the speed of light, then this analogy is numerically accurate – the ant's position over time will match the path of the red line on the embedding diagram above.

In the "rubber sheet model" one replaces the rope with a flat two-dimensional rubber sheet that expands uniformly in all directions. The addition of a second spatial dimension raises the possibility of showing local perturbations of the spatial geometry by local curvature in the sheet.

In the "balloon model" the flat sheet is replaced by a spherical balloon that is inflated from an initial size of zero (representing the big bang). A balloon has positive Gaussian curvature while observations suggest that the real universe is spatially flat, but this inconsistency can be eliminated by making the balloon very large so that it is locally flat to within the limits of observation. This analogy is potentially confusing since it wrongly suggests that the big bang took place at the center of the balloon. In fact points off the surface of the balloon have no meaning, even if they were occupied by the balloon at an earlier time.

In the "raisin bread model" one imagines a loaf of raisin bread expanding in the oven. The loaf (space) expands as a whole, but the raisins (gravitationally bound objects) do not expand; they merely grow farther away from each other.
