
Friday, November 17, 2023

System call

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/System_call

A high-level overview of the Linux kernel's system call interface, which handles communication between its various components and user space

In computing, a system call (commonly abbreviated to syscall) is the programmatic way in which a computer program requests a service from the operating system on which it is executed. This may include hardware-related services (for example, accessing a hard disk drive or accessing the device's camera), creation and execution of new processes, and communication with integral kernel services such as process scheduling. System calls provide an essential interface between a process and the operating system.

In most systems, system calls can only be made from userspace processes, while in some systems, such as OS/360 and its successors, privileged system code also issues system calls.

Privileges

The architecture of most modern processors, with the exception of some embedded systems, involves a security model. For example, the rings model specifies multiple privilege levels under which software may be executed: a program is usually limited to its own address space so that it cannot access or modify other running programs or the operating system itself, and is usually prevented from directly manipulating hardware devices (e.g. the frame buffer or network devices).

However, many applications need access to these components, so system calls are made available by the operating system to provide well-defined, safe implementations for such operations. The operating system executes at the highest level of privilege, and allows applications to request services via system calls, which are often initiated via interrupts. An interrupt automatically puts the CPU into some elevated privilege level and then passes control to the kernel, which determines whether the calling program should be granted the requested service. If the service is granted, the kernel executes a specific set of instructions over which the calling program has no direct control, returns the privilege level to that of the calling program, and then returns control to the calling program.

The library as an intermediary

Generally, systems provide a library or API that sits between normal programs and the operating system. On Unix-like systems, that API is usually part of an implementation of the C library (libc), such as glibc, that provides wrapper functions for the system calls, often named the same as the system calls they invoke. On Windows NT, that API is part of the Native API, in the ntdll.dll library; this is an undocumented API used by implementations of the regular Windows API and directly used by some system programs on Windows. The library's wrapper functions expose an ordinary function calling convention (a subroutine call on the assembly level) for using the system call, as well as making the system call more modular. Here, the primary function of the wrapper is to place all the arguments to be passed to the system call in the appropriate processor registers (and maybe on the call stack as well) and to set a unique system call number for the kernel to call. In this way the library, which exists between the OS and the application, increases portability.

The call to the library function itself does not cause a switch to kernel mode and is usually a normal subroutine call (using, for example, a "CALL" assembly instruction in some instruction set architectures (ISAs)). The actual system call does transfer control to the kernel (and is more implementation-dependent and platform-dependent than the library call abstracting it). For example, in Unix-like systems, fork and execve are C library functions that in turn execute instructions that invoke the fork and exec system calls. Making the system call directly in the application code is more complicated and may require embedded assembly code to be used (in C and C++), as well as requiring knowledge of the low-level binary interface for the system call operation, which may be subject to change over time and thus not be part of the application binary interface; the library functions are meant to abstract this away.
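As a hedged illustration (not from the original article), the sketch below contrasts a call through an ordinary glibc wrapper with a system call invoked by number through glibc's generic syscall(2) function. gettid is a convenient example because it gained a dedicated glibc wrapper only in glibc 2.30; before that, invoking it by number was the usual route.

```c
/* A libc wrapper call vs. a raw system call made by number.
 * Linux-specific; error handling omitted. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/syscall.h>   /* SYS_gettid */
#include <unistd.h>        /* getpid, syscall */

int main(void)
{
    pid_t pid = getpid();               /* ordinary libc wrapper */
    long  tid = syscall(SYS_gettid);    /* system call invoked by number */
    printf("pid=%d tid=%ld\n", (int) pid, tid);
    return 0;
}
```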

On exokernel based systems, the library is especially important as an intermediary. On exokernels, libraries shield user applications from the very low level kernel API, and provide abstractions and resource management.

IBM's OS/360, DOS/360 and TSS/360 implement most system calls through a library of assembly language macros,[b] although there are a few services with a call linkage. This reflects their origin at a time when programming in assembly language was more common than high-level language usage. IBM system calls were therefore not directly executable by high-level language programs, but required a callable assembly language wrapper subroutine. Since then, IBM has added many services that can be called from high-level languages in, e.g., z/OS and z/VSE. In more recent releases of MVS/SP and in all later MVS versions, some system call macros generate Program Call (PC) instructions.

Examples and tools

On Unix, Unix-like and other POSIX-compliant operating systems, popular system calls are open, read, write, close, wait, exec, fork, exit, and kill. Many modern operating systems have hundreds of system calls. For example, Linux and OpenBSD each have over 300 different calls, NetBSD has close to 500, FreeBSD has over 500, and Windows has close to 2000, divided between win32k (graphical) and ntdll (core) system calls, while Plan 9 has 51.

Tools such as strace, ftrace and truss allow a process to be executed from start while reporting all the system calls it invokes, or they can attach to an already running process and intercept any system call made by that process, provided the operation does not violate the permissions of the user. This tracing ability is itself usually implemented with system calls such as ptrace, or with system calls on files in procfs.
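As a sketch of how such a tracer can work (this example is not from the original article and is Linux/x86-64-specific), the following program uses ptrace to stop a child process at each system-call boundary and read the call number from the orig_rax register:

```c
/* Minimal syscall tracer in the spirit of strace; Linux/x86-64 only,
 * error handling omitted. Each PTRACE_SYSCALL stop is either a syscall
 * entry or its exit, so every call is reported twice. */
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);   /* ask to be traced */
        execlp("true", "true", (char *) NULL);   /* the program to trace */
        _exit(1);
    }
    int status;
    waitpid(child, &status, 0);                  /* initial stop at exec */
    while (!WIFEXITED(status)) {
        ptrace(PTRACE_SYSCALL, child, NULL, NULL);  /* run to next stop */
        waitpid(child, &status, 0);
        if (WIFSTOPPED(status)) {
            struct user_regs_struct regs;
            ptrace(PTRACE_GETREGS, child, NULL, &regs);
            printf("syscall %llu\n", (unsigned long long) regs.orig_rax);
        }
    }
    return 0;
}
```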

Typical implementations

Implementing system calls requires a transfer of control from user space to kernel space, which involves some sort of architecture-specific feature. A typical way to implement this is to use a software interrupt or trap. Interrupts transfer control to the operating system kernel, so software simply needs to set up some register with the system call number needed, and execute the software interrupt.

This is the only technique provided for many RISC processors, but CISC architectures such as x86 support additional techniques. For example, the x86 instruction set contains the instructions SYSCALL/SYSRET and SYSENTER/SYSEXIT (these two mechanisms were independently created by AMD and Intel, respectively, but in essence they do the same thing). These are "fast" control transfer instructions that are designed to quickly transfer control to the kernel for a system call without the overhead of an interrupt. Linux 2.5 began using this on the x86, where available; formerly it used the INT instruction, where the system call number was placed in the EAX register before interrupt 0x80 was executed.
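To make the register convention concrete, here is a sketch (added for illustration, not from the original article) of invoking the Linux write system call directly with the x86-64 SYSCALL instruction, placing the call number in RAX and the arguments in RDI, RSI, and RDX:

```c
/* Raw x86-64 Linux system call via inline assembly; for illustration
 * only; real programs should use the libc wrapper instead. */
static long raw_write(int fd, const void *buf, unsigned long len)
{
    long ret;
    __asm__ volatile (
        "syscall"                    /* fast control transfer to the kernel */
        : "=a" (ret)                 /* result comes back in RAX */
        : "a" (1L),                  /* __NR_write == 1 on x86-64 */
          "D" ((long) fd),           /* argument 1 in RDI */
          "S" (buf),                 /* argument 2 in RSI */
          "d" (len)                  /* argument 3 in RDX */
        : "rcx", "r11", "memory");   /* the kernel clobbers RCX and R11 */
    return ret;
}

int main(void)
{
    raw_write(1, "hello\n", 6);      /* equivalent to write(1, "hello\n", 6) */
    return 0;
}
```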

An older mechanism is the call gate, originally used in Multics and later, for example, in the call gate on the Intel x86. It allows a program to call a kernel function directly using a safe control transfer mechanism, which the operating system sets up in advance. This approach has been unpopular on x86, presumably due to the requirement of a far call (a call to a procedure located in a different segment than the current code segment), which uses x86 memory segmentation and results in a lack of portability, and due to the existence of the faster instructions mentioned above.

On the IA-64 architecture, the EPC (Enter Privileged Code) instruction is used. The first eight system call arguments are passed in registers, and the rest are passed on the stack.

In the IBM System/360 mainframe family, and its successors, a Supervisor Call instruction (SVC), with the number in the instruction rather than in a register, implements a system call for legacy facilities in most of[c] IBM's own operating systems, and for all system calls in Linux. In later versions of MVS, IBM uses the Program Call (PC) instruction for many newer facilities. In particular, PC is used when the caller might be in Service Request Block (SRB) mode.

The PDP-11 minicomputer used the EMT, TRAP and IOT instructions, which, similar to the IBM System/360 SVC and x86 INT, put the code in the instruction; they generate interrupts to specific addresses, transferring control to the operating system. The VAX 32-bit successor to the PDP-11 series used the CHMK, CHME, and CHMS instructions to make system calls to privileged code at various levels; the code is an argument to the instruction.

Categories of system calls

System calls can be grouped roughly into six major categories:


  1. Process control
  2. File management
    • create file, delete file
    • open, close
    • read, write, reposition
    • get/set file attributes
  3. Device management
    • request device, release device
    • read, write, reposition
    • get/set device attributes
    • logically attach or detach devices
  4. Information maintenance
    • get/set total system information (including time, date, computer name, enterprise etc.)
    • get/set process, file, or device metadata (including author, opener, creation time and date, etc.)
  5. Communication
    • create, delete communication connection
    • send, receive messages
    • transfer status information
    • attach or detach remote devices
  6. Protection
    • get/set file permissions
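As a hedged illustration (not part of the original article), the sketch below maps a few POSIX calls onto several of these categories; the paths used are incidental and error handling is omitted for brevity.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* 2. File management: open, read, close */
    char buf[64];
    int fd = open("/etc/hostname", O_RDONLY);
    ssize_t n = read(fd, buf, sizeof buf);
    close(fd);

    /* 1. Process control: fork, exec, wait */
    pid_t pid = fork();
    if (pid == 0) {
        execlp("echo", "echo", "child ran", (char *) NULL);
        _exit(1);                    /* reached only if exec fails */
    }
    wait(NULL);

    /* 4. Information maintenance: process metadata */
    printf("parent pid %d read %zd bytes\n", (int) getpid(), n);

    /* 6. Protection: set file permissions (hypothetical path) */
    chmod("/tmp/example", 0644);
    return 0;
}
```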

Processor mode and context switching

System calls in most Unix-like systems are processed in kernel mode, which is accomplished by changing the processor execution mode to a more privileged one, but no process context switch is necessary – although a privilege context switch does occur. The hardware sees the world in terms of the execution mode according to the processor status register, and processes are an abstraction provided by the operating system. A system call does not generally require a context switch to another process; instead, it is processed in the context of whichever process invoked it.[13][14]

In a multithreaded process, system calls can be made from multiple threads. The handling of such calls is dependent on the design of the specific operating system kernel and the application runtime environment. The following list shows typical models followed by operating systems:[15][16]

  • Many-to-one model: All system calls from any user thread in a process are handled by a single kernel-level thread. This model has a serious drawback – any blocking system call (like awaiting input from the user) can freeze all the other threads. Also, since only one thread can access the kernel at a time, this model cannot utilize multiple processor cores.
  • One-to-one model: Every user thread gets attached to a distinct kernel-level thread during a system call. This model solves the above problem of blocking system calls. It is found in all major Linux distributions, macOS, iOS, recent Windows and Solaris versions.
  • Many-to-many model: In this model, a pool of user threads is mapped to a pool of kernel threads. All system calls from a user thread pool are handled by the threads in their corresponding kernel thread pool.
  • Hybrid model: This model implements both the many-to-many and one-to-one models, depending upon the choice made by the kernel. This is found in old versions of IRIX, HP-UX and Solaris.

Linux kernel interfaces

From Wikipedia, the free encyclopedia
Linux API, Linux ABI, and in-kernel APIs and ABIs

The Linux kernel provides multiple interfaces to user-space applications that are used for varying purposes and that have varying properties by design. There are two types of application programming interface (API) in the Linux kernel:

  1. the "kernel–user space" API; and
  2. the "kernel internal" API.

Linux API

The Linux API is composed of the System Call Interface of the Linux kernel, the GNU C Library (by GNU), libcgroup, libdrm, libalsa and libevdev (by freedesktop.org).

Linux API vs. POSIX API

The Linux API is the kernel–user space API, which allows programs in user space to access system resources and services of the Linux kernel. It is composed of the System Call Interface of the Linux kernel and the subroutines in the GNU C Library (glibc). The focus of the development of the Linux API has been to provide the usable features of the specifications defined in POSIX in a way which is reasonably compatible, robust and performant, and to provide additional useful features not defined in POSIX, just as the kernel–user space APIs of other systems implementing the POSIX API also provide additional features not defined in POSIX.
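As a concrete illustration (chosen here; it is not named at this point in the article), epoll is one such Linux-specific addition: an event-notification interface in the Linux API with no POSIX equivalent, where a strictly POSIX program would use poll or select instead.

```c
/* epoll: Linux-only system calls (epoll_create1, epoll_ctl, epoll_wait).
 * Error handling omitted for brevity. */
#include <stdio.h>
#include <sys/epoll.h>
#include <unistd.h>

int main(void)
{
    int epfd = epoll_create1(0);               /* create an epoll instance */
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = STDIN_FILENO };
    epoll_ctl(epfd, EPOLL_CTL_ADD, STDIN_FILENO, &ev);  /* watch stdin */

    struct epoll_event ready;
    int n = epoll_wait(epfd, &ready, 1, 5000); /* wait up to 5 s for input */
    printf("%d descriptor(s) ready\n", n);
    close(epfd);
    return 0;
}
```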

The Linux API, by choice, has been kept stable over the decades through a policy of not introducing breaking changes; this stability guarantees the portability of source code. At the same time, Linux kernel developers have historically been conservative and meticulous about introducing new system calls.

Much available free and open-source software is written for the POSIX API. Since so much more development flows into the Linux kernel as compared to the other POSIX-compliant combinations of kernel and C standard library, the Linux kernel and its API have been augmented with additional features. Insofar as these additional features provide a technical advantage, programming for the Linux API is preferred over the POSIX API. Well-known current examples are udev, systemd and Weston. People such as Lennart Poettering openly advocate preferring the Linux API over the POSIX API where it offers advantages.

At FOSDEM 2016, Michael Kerrisk explained some of the perceived issues with the Linux kernel's user-space API, describing it as containing multiple design errors: it is non-extensible, unmaintainable, overly complex, of limited purpose, in violation of standards, and inconsistent. Most of those mistakes cannot be fixed because doing so would break the ABI that the kernel presents to the user space.

System Call Interface of the Linux kernel

System Call Interface is the name for the entirety of all implemented and available system calls in a kernel. Various subsystems, such as the Direct Rendering Manager (DRM), define their own system calls, and the entirety is called the System Call Interface.

Various issues with the organization of the Linux kernel system calls are being publicly discussed. Issues have been pointed out by Andy Lutomirski, Michael Kerrisk and others.

The C standard library


A C standard library is a wrapper around the system calls of the Linux kernel; the combination of the Linux kernel System Call Interface and a C standard library is what builds the Linux API. Some popular implementations of the C standard library are glibc, musl, uClibc, dietlibc, and Android's Bionic.

Additions to POSIX

As in other Unix-like systems, additional capabilities of the Linux kernel exist that are not part of POSIX:

DRM has been paramount for the development and implementation of well-defined and performant free and open-source graphics device drivers, without which no rendering acceleration would be available at all; only the 2D drivers would be available in the X.Org Server. DRM was developed for Linux, and has since been ported to other operating systems as well.

Further libraries

Linux ABI

The Linux API and the Linux ABI

The term Linux ABI refers to a kernel–user space ABI. The application binary interface refers to the compiled binaries, in machine code. Any such ABI is therefore bound to the instruction set. Defining a useful ABI and keeping it stable is less the responsibility of the Linux kernel developers or of the developers of the GNU C Library, and more the task for Linux distributions and independent software vendors (ISVs) who wish to sell and provide support for their proprietary software as binaries only for such a single Linux ABI, as opposed to supporting multiple Linux ABIs.

An ABI has to be defined for every instruction set, such as x86, x86-64, MIPS, ARMv7-A (32-bit), ARMv8-A (64-bit), etc., including the endianness if both endiannesses are supported.

It should be possible to compile the software with different compilers against the definitions specified in the ABI and achieve full binary compatibility. Compilers that are free and open-source software include, e.g., the GNU Compiler Collection and LLVM/Clang.

In-kernel APIs

There are many kernel-internal APIs for all the subsystems to interface with one another. These are kept fairly stable, but there is no guarantee of stability: in case new research or insights make a change seem favorable, an API may be changed, and all the necessary rewriting and testing have to be done by the author.

The Linux kernel is a monolithic kernel, hence device drivers are kernel components. To ease the burden on companies maintaining their (proprietary) device drivers out-of-tree, stable APIs for device drivers have been repeatedly requested. The Linux kernel developers have repeatedly declined to guarantee stable in-kernel APIs for device drivers: such guarantees would have hampered the development of the Linux kernel in the past and would still do so in the future, and, due to the nature of free and open-source software, they are not necessary. Ergo, by choice, the Linux kernel has no stable in-kernel API.

In-kernel ABIs

Since there are no stable in-kernel APIs, there cannot be stable in-kernel ABIs.

Abstraction APIs

OpenGL is indeed an abstraction API that makes it possible to use diverse GPUs from multiple vendors without the need to program for each one specifically.
But the implementation of the OpenGL specification is executed on the CPU in the context of the running operating system. One design goal of Vulkan was to make the "graphics driver", i.e. the implementation of the graphics API, do less.

For several use cases, the Linux API is considered too low-level, and higher-abstraction APIs are used. These, of course, still need to work on top of the low-level Linux APIs. Examples:

Thursday, November 16, 2023

glibc

From Wikipedia, the free encyclopedia
GNU C Library
Original author(s): Roland McGrath
Developer(s): GNU Project, most contributions by Ulrich Drepper
Initial release: 1987; 36 years ago
Stable release: 2.38 / 31 July 2023
Written in: C
Operating system: Unix-like
Type: Runtime library
License: LGPL-2.1-or-later (since 2001); LGPL-2.0-or-later (1992–2001)
Website: www.gnu.org/software/libc/

The GNU C Library, commonly known as glibc, is the GNU Project's implementation of the C standard library. It is a wrapper around the system calls of the Linux kernel for application use. Despite its name, it now also directly supports C++ (and, indirectly, other programming languages). It was started in the 1980s by the Free Software Foundation (FSF) for the GNU operating system.

Released under the GNU Lesser General Public License, glibc is free software. The GNU C Library project provides the core libraries for the GNU system, as well as many systems that use Linux as the kernel. These libraries provide critical APIs including ISO C11, POSIX.1-2008, BSD, OS-specific APIs and more. These APIs include such foundational facilities as open, read, write, malloc, printf, getaddrinfo, dlopen, pthread_create, crypt, login, exit and more.
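As a brief sketch (not from the original article) exercising a few of the facilities named above: malloc for heap allocation, printf from stdio, and pthread_create from the POSIX threads API. Compile with -pthread.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static void *worker(void *arg)
{
    printf("hello from thread %d\n", *(int *) arg);  /* stdio via glibc */
    return NULL;
}

int main(void)
{
    int *id = malloc(sizeof *id);            /* heap allocation */
    *id = 1;
    pthread_t t;
    pthread_create(&t, NULL, worker, id);    /* POSIX threads API */
    pthread_join(t, NULL);
    free(id);
    return 0;
}
```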

History

Ulrich Drepper in 2007, the main author of glibc
The GNU C Library is a wrapper around the system calls of the Linux kernel.
The Linux kernel and GNU C Library together form the Linux API. After compilation, the binaries offer an ABI.

The glibc project was initially written mostly by Roland McGrath, working for the Free Software Foundation (FSF) in the summer of 1987 as a teenager. In February 1988, FSF described glibc as having nearly completed the functionality required by ANSI C. By 1992, it had the ANSI C-1989 and POSIX.1-1990 functions implemented, and work was under way on POSIX.2. In September 1995, Ulrich Drepper made his first contribution to glibc, and by 1997 most commits were made by him. Drepper held the maintainership position for many years, and until 2012 he had accumulated 63% of all commits to the project.

In May 2009 glibc was migrated to a Git repository.

In 2010, a licensing issue was resolved which was caused by the Sun RPC implementation in glibc that was not GPL compatible. It was fixed by re-licensing the Sun RPC components under the BSD license.

In 2014, glibc suffered from an ABI breakage bug on s390.

In July 2017, 30 years after he started glibc, Roland McGrath announced his departure, "declaring myself maintainer emeritus and withdrawing from direct involvement in the project. These past several months, if not the last few years, have proven that you don't need me anymore".

In 2018, maintainer Raymond Nicholson removed a joke about abortion from the glibc source code. It was restored later by Alexandre Oliva after Richard Stallman demanded to have it returned.

In 2021, the copyright assignment requirement to the Free Software Foundation was removed from the project.

Fork and variant

In 1994, the developers of the Linux kernel forked glibc. Their fork, "Linux libc", was maintained separately until around 1998. Because the copyright attribution was insufficient, changes could not be merged back into GNU libc. When the FSF released glibc 2.0 in January 1997, the kernel developers discontinued Linux libc due to glibc 2.0's superior compliance with POSIX standards. glibc 2.0 also had better internationalisation and more in-depth translation, IPv6 capability, 64-bit data access, facilities for multithreaded applications, future version compatibility, and more portable code. The last-used version of Linux libc used the internal name (soname) libc.so.5. Following on from this, glibc 2.x on Linux uses the soname libc.so.6.

In 2009, Debian and a number of derivatives switched from glibc to the variant eglibc. Eglibc was supported by a consortium consisting of Freescale, MIPS, MontaVista and Wind River. It contained changes that made it more suitable for embedded use and added support for architectures that glibc did not support, such as the PowerPC e500. The code of eglibc was merged back into glibc at version 2.20, and eglibc has been discontinued since 2014. The Yocto Project and Debian also moved back to glibc with the release of Debian Jessie.

Steering committee

Starting in 2001, the library's development was overseen by a committee, with Ulrich Drepper kept as the lead contributor and maintainer. The installation of the steering committee was surrounded by public controversy, as Ulrich Drepper openly described it as a failed hostile takeover maneuver by Richard Stallman.

In March 2012, the steering committee voted to disband itself and remove Drepper in favor of a community-driven development process, with Ryan Arnold, Maxim Kuvyrkov, Joseph Myers, Carlos O'Donell, and Alexandre Oliva holding the responsibility of GNU maintainership (but no extra decision-making power).

Functionality

glibc provides the functionality required by the Single UNIX Specification, POSIX (1c, 1d, and 1j) and some of the functionality required by ISO C11, ISO C99, Berkeley Unix (BSD) interfaces, the System V Interface Definition (SVID) and the X/Open Portability Guide (XPG), Issue 4.2, with all extensions common to XSI (X/Open System Interface) compliant systems along with all X/Open UNIX extensions.

In addition, glibc also provides extensions that have been deemed useful or necessary while developing GNU.

Supported hardware and kernels

glibc is used in systems that run many different kernels and different hardware architectures. Its most common use is in systems using the Linux kernel on x86 hardware, however, officially supported hardware includes: ARM, ARC, C-SKY, DEC Alpha, IA-64, Motorola m68k, MicroBlaze, MIPS, Nios II, PA-RISC, PowerPC, RISC-V, s390, SPARC, and x86 (old versions support TILE). It officially supports the Hurd and Linux kernels. Additionally, there are heavily patched versions that run on the kernels of FreeBSD and NetBSD (from which Debian GNU/kFreeBSD and Debian GNU/NetBSD systems are built, respectively), as well as a forked-version of OpenSolaris. It is also used (in an edited form) and named libroot.so in BeOS and Haiku.

Use in small devices

In the past, glibc was criticized as "bloated" and slower than other libraries, e.g. by Linus Torvalds and embedded Linux programmers. For this reason, several alternative C standard libraries have been created which emphasize a smaller footprint. However, many small-device projects use GNU libc over the smaller alternatives because of its application support, standards compliance, and completeness. Examples include Openmoko and Familiar Linux for iPaq handhelds (when using the GPE display software).

Compatibility layers

There are compatibility layers ("shims") to allow programs written for other ecosystems to run on systems offering the glibc interface. These include libhybris, a compatibility layer for Android's Bionic, and Wine, which can be seen as a compatibility layer from Windows APIs to glibc and the other native APIs available on Unix-like systems.

Life history theory

From Wikipedia, the free encyclopedia

Life history theory is an analytical framework designed to study the diversity of life history strategies used by different organisms throughout the world, as well as the causes and results of the variation in their life cycles. It is a theory of biological evolution that seeks to explain aspects of organisms' anatomy and behavior by reference to the way that their life histories—including their reproductive development and behaviors, post-reproductive behaviors, and lifespan (length of time alive)—have been shaped by natural selection. A life history strategy is the "age- and stage-specific patterns" and timing of events that make up an organism's life, such as birth, weaning, maturation, death, etc. These events, notably juvenile development, age of sexual maturity, first reproduction, number of offspring and level of parental investment, senescence and death, depend on the physical and ecological environment of the organism.

The theory was developed in the 1950s and is used to answer questions about topics such as organism size, age of maturation, number of offspring, life span, and many others. In order to study these topics, life history strategies must be identified, and then models are constructed to study their effects. Finally, predictions about the importance and role of the strategies are made, and these predictions are used to understand how evolution affects the ordering and length of life history events in an organism's life, particularly the lifespan and period of reproduction. Life history theory draws on an evolutionary foundation, and studies the effects of natural selection on organisms, both throughout their lifetime and across generations. It also uses measures of evolutionary fitness to determine if organisms are able to maximize or optimize this fitness, by allocating resources to a range of different demands throughout the organism's life. It serves as a method to investigate further the "many layers of complexity of organisms and their worlds".

Organisms have evolved a great variety of life histories, from Pacific salmon, which produce thousands of eggs at one time and then die, to human beings, who produce a few offspring over the course of decades. The theory depends on principles of evolutionary biology and ecology and is widely used in other areas of science.

Brief history of field

Life history theory (LHT) is seen as a branch of evolutionary ecology and is used in a variety of different fields. Beginning in the 1950s, mathematical analysis became an important aspect of research regarding LHT. There are two main focuses that have developed over time: genetic and phenotypic, but there has been a recent movement towards combining these two approaches.

Life cycle

All organisms follow a specific sequence in their development, beginning with gestation and ending with death, which is known as the life cycle. Events in between usually include birth, childhood, maturation, reproduction, and senescence, and together these comprise the life history strategy of that organism.

The major events in this life cycle are usually shaped by the demographic qualities of the organism. Some are more obvious shifts than others, and may be marked by physical changes—for example, teeth erupting in young children. Some events may have little variation between individuals in a species, such as length of gestation, but other events may show a lot of variation between individuals, such as age at first reproduction.

Life cycles can be divided into two major stages: growth and reproduction. These two cannot take place at the same time, so once reproduction has begun, growth usually ends. This shift is important because it can also affect other aspects of an organism's life, such as the organization of its group or its social interactions.

Each species has its own pattern and timing for these events, often known as its ontogeny, and the variety produced by this is what LHT studies. Evolution then works upon these stages to ensure that an organism adapts to its environment. For example, a human, between being born and reaching adulthood, will pass through an assortment of life stages, which include: birth, infancy, weaning, childhood and growth, adolescence, sexual maturation, and reproduction. All of these are defined in a specific biological way, which is not necessarily the same as the way that they are commonly used.

Darwinian fitness

In the context of evolution, fitness is determined by how the organism is represented in the future. Genetically, a fit allele outcompetes its rivals over generations. Often, as a shorthand for natural selection, researchers only assess the number of descendants an organism produces over the course of its life. Then, the main elements are survivorship and reproductive rate. This means that the organism's traits and genes are carried on into the next generation, and are presumed to contribute to evolutionary "success". The process of adaptation contributes to this "success" by impacting rates of survival and reproduction, which in turn establishes an organism's level of Darwinian fitness. In life history theory, evolution works on the life stages of particular species (e.g., length of juvenile period) but is also discussed for a single organism's functional, lifetime adaptation. In both cases, researchers assume adaptation—processes that establish fitness.

Traits

There are seven traits that are traditionally recognized as important in life history theory:

  1. size at birth
  2. growth pattern
  3. age and size at maturity
  4. number, size, and sex ratio of offspring
  5. age- and size-specific reproductive investments
  6. age- and size-specific mortality schedules
  7. length of life

The trait that is seen as the most important for any given organism is the one where a change in that trait creates the most significant difference in that organism's level of fitness. In this sense, an organism's fitness is determined by its changing life history traits. The way in which evolutionary forces act on these life history traits serves to limit the genetic variability and heritability of the life history strategies, although there are still large varieties that exist in the world.

Strategies

Combinations of these life history traits and life events create the life history strategies. As an example, Winemiller and Rose, as cited by Lartillot & Delsuc, propose three types of life history strategies in the fish they study: opportunistic, periodic, and equilibrium. These types of strategies are defined by the body size of the fish, age at maturation, high or low survivorship, and the type of environment they are found in. A fish with a large body size, a late age of maturation, and low survivorship, found in a seasonal environment, would be classified as having a periodic life strategy. The type of behaviors taking place during life events can also define life history strategies. For example, an exploitative life history strategy would be one where an organism benefits by using more resources than others, or by taking these resources from other organisms.

Characteristics

Life history characteristics are traits that affect the life table of an organism, and can be imagined as various investments in growth, reproduction, and survivorship.

The goal of life history theory is to understand the variation in such life history strategies. This knowledge can be used to construct models to predict what kinds of traits will be favoured in different environments. Without constraints, the highest fitness would belong to a Darwinian demon, a hypothetical organism for whom such trade-offs do not exist. The key to life history theory is that there are limited resources available, and focusing on only a few life history characteristics is necessary.

Examples of some major life history characteristics include:

  • Age at first reproductive event
  • Reproductive lifespan and ageing
  • Number and size of offspring

Variations in these characteristics reflect different allocations of an individual's resources (i.e., time, effort, and energy expenditure) to competing life functions. For any given individual, available resources in any particular environment are finite. Time, effort, and energy used for one purpose diminishes the time, effort, and energy available for another.

For example, birds with larger broods are unable to afford more prominent secondary sexual characteristics. Life history characteristics will, in some cases, change according to the population density, since genotypes with the highest fitness at high population densities will not have the highest fitness at low population densities. Other conditions, such as the stability of the environment, will lead to selection for certain life history traits. Experiments by Michael R. Rose and Brian Charlesworth showed that unstable environments select for flies with both shorter lifespans and higher fecundity—in unreliable conditions, it is better for an organism to breed early and abundantly than waste resources promoting its own survival.

Biological tradeoffs also appear to characterize the life histories of viruses, including bacteriophages.

Reproductive value and costs of reproduction

Reproductive value models the tradeoffs between reproduction, growth, and survivorship. An organism's reproductive value (RV) is defined as its expected contribution to the population through both current and future reproduction:

RV = Current Reproduction + Residual Reproductive Value (RRV)

The residual reproductive value represents an organism's future reproduction through its investment in growth and survivorship. The cost of reproduction hypothesis predicts that higher investment in current reproduction hinders growth and survivorship and reduces future reproduction, while investments in growth will pay off with higher fecundity (number of offspring produced) and reproductive episodes in the future. This cost-of-reproduction tradeoff influences major life history characteristics. For example, a 2009 study by J. Creighton, N. Heflin, and M. Belk on burying beetles provided "unconfounded support" for the costs of reproduction. The study found that beetles that had allocated too many resources to current reproduction also had the shortest lifespans. In their lifetimes, they also had the fewest reproductive events and offspring, reflecting how over-investment in current reproduction lowers residual reproductive value.
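One common discrete-age formalization of this decomposition (the notation is added here for illustration and is not from the original article) writes the reproductive value at age $a$ in terms of survivorship $l_x$ and fecundity $m_x$:

\[
RV_a = m_a + \underbrace{\sum_{x=a+1}^{\infty} \frac{l_x}{l_a}\, m_x}_{\text{residual reproductive value}}
\]

Here $m_a$ is current reproduction at age $a$, and each future age class $x$ is discounted by $l_x / l_a$, the probability of surviving from age $a$ to age $x$.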

The related terminal investment hypothesis describes a shift to current reproduction with higher age. At early ages, RRV is typically high, and organisms should invest in growth to increase reproduction at a later age. As organisms age, this investment in growth gradually increases current reproduction. However, when an organism grows old and begins losing physiological function, mortality increases while fecundity decreases. This senescence shifts the reproduction tradeoff towards current reproduction: the effects of aging and higher risk of death make current reproduction more favorable. The burying beetle study also supported the terminal investment hypothesis: the authors found beetles that bred later in life also had increased brood sizes, reflecting greater investment in those reproductive events.

r/K selection theory

The selection pressures that determine the reproductive strategy, and therefore much of the life history, of an organism can be understood in terms of r/K selection theory. The central trade-off to life history theory is the number of offspring vs. the timing of reproduction. Organisms that are r-selected have a high growth rate (r) and tend to produce a high number of offspring with minimal parental care; their lifespans also tend to be shorter. r-selected organisms are suited to life in an unstable environment, because they reproduce early and abundantly and allow for a low survival rate of offspring. K-selected organisms subsist near the carrying capacity of their environment (K), produce a relatively low number of offspring over a longer span of time, and have high parental investment. They are more suited to life in a stable environment in which they can rely on a long lifespan and a low mortality rate that will allow them to reproduce multiple times with a high offspring survival rate.
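The letters r and K come from the logistic model of population growth (a standard result, restated here for context), in which r is the intrinsic rate of increase and K the carrying capacity:

\[
\frac{dN}{dt} = r N \left( 1 - \frac{N}{K} \right)
\]

Selection in the low-density, fast-growth regime ($N \ll K$) favors the r-selected traits listed below, while selection near saturation ($N \approx K$) favors the K-selected traits.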

Some organisms that are very r-selected are semelparous, only reproducing once before they die. Semelparous organisms may be short-lived, like annual crops. However, some semelparous organisms are relatively long-lived, such as the African flowering plant Lobelia telekii, which spends up to several decades growing an inflorescence that blooms only once before the plant dies, or the periodical cicada, which spends 17 years as a larva before emerging as an adult. Organisms with longer lifespans are usually iteroparous, reproducing more than once in a lifetime. However, iteroparous organisms can be more r-selected than K-selected, such as a sparrow, which raises several chicks per year but lives only a few years, as compared to a wandering albatross, which first reproduces at ten years old and breeds every other year during its 40-year lifespan.

r-selected organisms usually:

  • mature rapidly and have an early age of first reproduction
  • have a relatively short lifespan
  • have a large number of offspring at a time, and few reproductive events, or are semelparous
  • have a high mortality rate and a low offspring survival rate
  • have minimal parental care/investment

K-selected organisms usually:

  • mature more slowly and have a later age of first reproduction
  • have a longer lifespan
  • have few offspring at a time and more reproductive events spread out over a longer span of time
  • have a low mortality rate and a high offspring survival rate
  • have high parental investment

Variation

Variation is a major part of what LHT studies, because every organism has its own life history strategy. Differences between strategies can be minimal or great. For example, one organism may have a single offspring while another may have hundreds. Some species may live for only a few hours, and some may live for decades. Some may reproduce dozens of times throughout their lifespan, and others may only reproduce once or twice.

Trade-offs

An essential component of studying life history strategies is identifying the trade-offs that take place for any given organism. Energy use in life history strategies is regulated by thermodynamics and the conservation of energy, and the "inherent scarcity of resources", so not all traits or tasks can be invested in at the same time. Thus, organisms must choose between tasks, such as growth, reproduction, and survival, prioritizing some and not others. For example, there is a trade-off between maximizing body size and maximizing lifespan, and between maximizing offspring size and maximizing offspring number. This is also sometimes seen as a choice between quantity and quality of offspring. These choices are the trade-offs that life history theory studies.

One significant trade-off is between somatic effort (towards growth and maintenance of the body) and reproductive effort (towards producing offspring). Since an organism cannot put energy towards doing these simultaneously, many organisms have a period where energy is put just toward growth, followed by a period where energy is focused on reproduction, creating a separation of the two in the life cycle. Thus, the end of the period of growth marks the beginning of the period of reproduction. Another fundamental trade-off associated with reproduction is between mating effort and parenting effort. If an organism is focused on raising its offspring, it cannot devote that energy to pursuing a mate.

An important trade-off in the dedication of resources to breeding has to do with predation risk: organisms that have to deal with an increased risk of predation often invest less in breeding. This is because it is not worth as much to invest a lot in breeding when the benefit of such investment is uncertain.

These trade-offs, once identified, can then be put into models that estimate their effects on different life history strategies and answer questions about the selection pressures that exist on different life events. Over time, there has been a shift in how these models are constructed. Instead of focusing on one trait and looking at how it changed, scientists are looking at these trade-offs as part of a larger system, with complex inputs and outcomes.

Constraints

The idea of constraints is closely linked to the idea of trade-offs discussed above. Because organisms have a finite amount of energy, the process of trade-offs acts as a natural limit on the organism's adaptations and potential for fitness. This occurs in populations as well. These limits can be physical, developmental, or historical, and they are imposed by the existing traits of the organism.

Optimal life-history strategies

Populations can adapt and thereby achieve an "optimal" life history strategy that allows the highest level of fitness possible (fitness maximization). There are several methods from which to approach the study of optimality, including energetic and demographic. Achieving optimal fitness also encompasses multiple generations, because the optimal use of energy includes both the parents and the offspring. For example, "optimal investment in offspring is where the decrease in total number of offspring is equaled by the increase of the number who survive".

Optimality is important for the study of life history theory because it serves as the basis for many of the models used, which work from the assumption that natural selection, as it works on life history traits, is moving towards the most optimal group of traits and use of energy. This base assumption, that over the course of its life span an organism is aiming for optimal energy use, then allows scientists to test other predictions. However, actually attaining this optimal life history strategy cannot be guaranteed for any organism.

Allocation of resources

An organism's allocation of resources ties into several other important concepts, such as trade-offs and optimality. The best possible allocation of resources is what allows an organism to achieve an optimal life history strategy and obtain the maximum level of fitness, and making the best possible choices about how to allocate energy to various trade-offs contributes to this. Models of resource allocation have been developed and used to study problems such as parental involvement, the length of the learning period for children, and other developmental issues. The allocation of resources also plays a role in variation, because the different resource allocations by different species create the variety of life history strategies.

Capital and income breeding

The division of capital and income breeding focuses on how organisms use resources to finance breeding, and how they time it. In capital breeders, resources collected before breeding are used to pay for it, and they breed once they reach a body-condition threshold, which decreases as the season progresses. Income breeders, on the other hand, breed using resources that are generated concurrently with breeding, and time that using the rate of change in body-condition relative to multiple fixed thresholds. This distinction, though, is not necessarily a dichotomy; instead, it is a spectrum, with pure capital breeding lying on one end, and pure income breeding on the other.

Capital breeding is more often seen in organisms that deal with strong seasonality. This is because when offspring value is low, yet food is abundant, building stores to breed from allows these organisms to achieve higher rates of reproduction than they otherwise would have. In less seasonal environments, income breeding is likely to be favoured because waiting to breed would not have fitness benefits.

Phenotypic plasticity

Phenotypic plasticity focuses on the concept that the same genotype can produce different phenotypes in response to different environments. It affects the levels of genetic variability by serving as a source of variation and integration of fitness traits.

Determinants

Many factors can determine the evolution of an organism's life history, especially the unpredictability of the environment. A very unpredictable environment—one in which resources, hazards, and competitors may fluctuate rapidly—selects for organisms that produce more offspring earlier in their lives, because it is never certain whether they will survive to reproduce again. Mortality rate may be the best indicator of a species' life history: organisms with high mortality rates—the usual result of an unpredictable environment—typically mature earlier than those species with low mortality rates, and give birth to more offspring at a time. A highly unpredictable environment can also lead to plasticity, in which individual organisms can shift along the spectrum of r-selected vs. K-selected life histories to suit the environment.

Human life history

In studying humans, life history theory is used in many ways, including in biology, psychology, economics, anthropology, and other fields. For humans, life history strategies include all the usual factors—trade-offs, constraints, reproductive effort, etc.—but also includes a culture factor that allows them to solve problems through cultural means in addition to through adaptation. Humans also have unique traits that make them stand out from other organisms, such as a large brain, later maturity and age of first reproduction, and a relatively long lifespan, often supported by fathers and older (post-menopausal) relatives. There are a variety of possible explanations for these unique traits. For example, a long juvenile period may have been adapted to support a period of learning the skills needed for successful hunting and foraging. This period of learning may also explain the longer lifespan, as a longer amount of time over which to use those skills makes the period needed to acquire them worth it. Cooperative breeding and the grandmothering hypothesis have been proposed as the reasons that humans continue to live for many years after they are no longer capable of reproducing. The large brain allows for a greater learning capacity, and the ability to engage in new behaviors and create new things. The change in brain size may have been the result of a dietary shift—towards higher quality and difficult to obtain food sources—or may have been driven by the social requirements of group living, which promoted sharing and provisioning. Recent authors, such as Kaplan, argue that both aspects are probably important. Research has also indicated that humans may pursue different reproductive strategies. In investigating life history frameworks for explaining reproductive strategy development, empirical studies have identified issues with a psychometric approach, but tentatively supported predicted links between early stress, accelerated puberty, insecure attachment, unrestricted sociosexuality and relationship dissatisfaction.

Tools used

Perspectives

Life history theory has provided new perspectives in understanding many aspects of human reproductive behavior, such as the relationship between poverty and fertility. A number of statistical predictions have been confirmed by social data and there is a large body of scientific literature from studies in experimental animal models, and naturalistic studies among many organisms.

Criticism

The claim that long periods of helplessness in young would select for more parenting effort in protecting the young, at the same time as high levels of predation would select for less parenting effort, is criticized for assuming that absolute chronology would determine the direction of selection. This criticism argues that the total amount of predation threat faced by the young has the same effective protection-need effect regardless of whether it comes in the form of a long childhood with widely spaced natural enemies or a short childhood with closely spaced natural enemies, as different life speeds are subjectively the same thing for the animals and only outwardly look different. One cited example is that small animals that have more natural enemies would face approximately the same number of threats and need approximately the same amount of protection (at the relative timescale of the animals) as large animals with fewer natural enemies that grow more slowly (e.g. many small carnivores that could not eat even a very young human child could easily eat multiple very young blind meerkats). This criticism also argues that when a carnivore eats a batch stored together, there is no significant difference in the chance of one surviving depending on the number of young stored together, concluding that humans do not stand out from many small animals such as mice in selection for protecting helpless young.

There is criticism of the claim that menopause and somewhat earlier age-related declines in female fertility could co-evolve with a long term dependency on monogamous male providers who preferred fertile females. This criticism argues that the longer the time the child needed parental investment relative to the lifespans of the species, the higher the percentage of children born would still need parental care when the female was no longer fertile or dramatically reduced in her fertility. These critics argue that unless male preference for fertile females and ability to switch to a new female was annulled, any need for a male provider would have selected against menopause to use her fertility to keep the provider male attracted to her, and that the theory of monogamous fathers providing for their families therefore cannot explain why menopause evolved in humans.

One criticism of the notion of a trade-off between mating effort and parenting effort is that, in a species in which it is common to spend much effort on something other than mating (including, but not limited to, parenting), competitors also have less energy and time available for mating, meaning that species-wide reductions in the effort spent on mating do not reduce the ability of an individual to attract other mates. These critics also criticize the dichotomy between parenting effort and mating effort for missing the existence of other efforts that take time from mating, such as survival effort, which would have the same species-wide effects.

There are also criticisms of size and organ trade-offs. These include criticism of the claimed trade-off between body size and longevity, which cites the observation of longer lifespans in larger species, and criticism of the claim that big brains promoted sociality. The latter cites primate studies in which monkeys with large portions of their brains surgically removed remained socially functional although their technical problem-solving lost flexibility, computer simulations of chimpanzee social interaction showing that it requires no complex cognition, and cases of socially functioning humans with microcephalic brain sizes.
