
Saturday, April 3, 2021

Cetacean intelligence

From Wikipedia, the free encyclopedia

Cetacean intelligence is the cognitive ability of the infraorder Cetacea of mammals. This order includes whales, porpoises, and dolphins.

Brain size

Brain size was previously considered a major indicator of the intelligence of an animal. However, many other factors also affect intelligence, and recent discoveries concerning bird intelligence have called into question the influence of brain size. Since most of the brain is used for maintaining bodily functions, greater ratios of brain to body mass may increase the amount of brain mass available for more complex cognitive tasks. Allometric analysis indicates that in general, mammalian brain size scales at approximately the 2/3 or 3/4 exponent of body mass. Comparison of actual brain size with the size expected from allometry provides an encephalization quotient (EQ) that can be used as a more accurate indicator of an animal's intelligence.
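
As a rough illustration of how such a quotient is computed (using Jerison's commonly cited allometric fit, with brain mass E and body mass P both in grams; the constant and the exact exponent vary between studies):

    EQ = E / E_expected,   where E_expected ≈ 0.12 × P^(2/3)

For example, a hypothetical 200 kg bottlenose dolphin with a 1,600 g brain would give EQ ≈ 1600 / (0.12 × 200,000^(2/3)) ≈ 3.9, in the same range as the value quoted in the list below.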

  • Sperm whales (Physeter macrocephalus) have the largest known brain mass of any extant animal, averaging 7.8 kg in mature males.
  • Orcas (Orcinus orca) have the second largest known brain mass of any extant animal (5.4–6.8 kg).
  • Bottlenose dolphins (Tursiops truncatus) have an absolute brain mass of 1,500–1,700 grams. This is slightly greater than that of humans (1,300–1,400 grams) and about four times that of chimpanzees (400 grams).
  • The brain to body mass ratio (not the encephalization quotient) in some members of the odontocete superfamily Delphinoidea (dolphins, porpoises, belugas, and narwhals) is greater than modern humans, and greater than all other mammals (there is debate whether that of the treeshrew might be second in place of humans). In some dolphins, it is less than half that of humans: 0.9% versus 2.1%. This comparison seems more favorable if one excludes the large amount of insulating blubber (15-20% of mass).
  • The encephalization quotient varies widely between species. The La Plata dolphin has an EQ of approximately 1.67; the Ganges river dolphin 1.55; the orca 2.57; the bottlenose dolphin 4.14; and the tucuxi dolphin 4.56. In comparison, elephants have an EQ ranging from 1.13 to 2.36; chimpanzees approximately 2.49; dogs 1.17; cats 1.00; and mice 0.50.
  • The majority of mammals are born with a brain close to 90% of the adult brain weight. Humans are born with 28% of the adult brain weight, chimpanzees with 54%, bottlenose dolphins with 42.5%, and elephants with 35%.

Spindle cells (neurons without extensive branching) have been discovered in the brains of the humpback whale, fin whale, sperm whale, killer whale, bottlenose dolphins, Risso's dolphins, and beluga whales. Humans, great apes, and elephants, species all well known for their high intelligence, are the only others known to have spindle cells. Spindle neurons appear to play a central role in the development of intelligent behavior. Such a discovery may suggest a convergent evolution of these species.

Brain structure

Elephant brains show a complexity similar to that of dolphin brains; they are more convoluted than human brains, and their cortex is thicker than that of cetaceans. It is generally agreed that the growth of the neocortex, both absolutely and relative to the rest of the brain, during human evolution, has been responsible for the evolution of human intelligence, however defined. While a complex neocortex usually indicates high intelligence, there are exceptions. For example, the echidna has a highly developed brain, yet is not widely considered very intelligent, though preliminary investigations suggest that echidnas are capable of more advanced cognitive tasks than were previously assumed.

In 2014, it was shown for the first time that a species of dolphin, the long-finned pilot whale, has more neocortical neurons than any mammal studied to date, including humans. Unlike the brains of terrestrial mammals, dolphin brains contain a paralimbic lobe, which may be used for sensory processing. The dolphin is a voluntary breather, even during sleep, with the result that veterinary anaesthesia of dolphins would result in asphyxiation. All sleeping mammals, including dolphins, experience a stage known as REM sleep. Ridgway reports that EEGs show alternating hemispheric asymmetry in slow waves during sleep, with occasional sleep-like waves from both hemispheres. This result has been interpreted to mean that dolphins sleep with only one hemisphere of their brain at a time, possibly to control their voluntary respiration system or to remain vigilant for predators. This is also given as an explanation for the large size of their brains.

Dolphin brain stem transmission time is faster than that normally found in humans, and is approximately equivalent to the speed in rats. The dolphin's greater dependence on sound processing is evident in the structure of its brain: its neural area devoted to visual imaging is only about one-tenth that of the human brain, while the area devoted to acoustical imaging is about 10 times as large. Sensory experiments suggest a great degree of cross-modal integration in the processing of shapes between echolocative and visual areas of the brain. Unlike the case of the human brain, the cetacean optic chiasm is completely crossed, and there is behavioral evidence for hemispheric dominance for vision.

Brain evolution

The evolution of encephalization in cetaceans is similar to that in primates. Though the general trend in their evolutionary history increased brain mass, body mass, and encephalization quotient, a few lineages actually underwent decephalization, although the selective pressures that caused this are still under debate. Among cetaceans, Odontoceti tend to have higher encephalization quotients than Mysticeti, which is at least partially due to the fact that Mysticeti have much larger body masses without a compensating increase in brain mass. As far as which selective pressures drove the encephalization (or decephalization) of cetacean brains, current research espouses a few main theories. The most promising suggests that cetacean brain size and complexity increased to support complex social relations. It could also have been driven by changes in diet, the emergence of echolocation, or an increase in territorial range.

Problem-solving ability

Some research shows that dolphins, among other animals, understand concepts such as numerical continuity, though not necessarily counting. Dolphins may be able to discriminate between numbers.

Several researchers observing animals' ability to learn set formation tend to rank dolphins at about the level of elephants in intelligence, and show that dolphins do not surpass other highly intelligent animals in problem solving. A 1982 survey of other studies showed that in the learning of "set formation", dolphins rank highly, but not as high as some other animals.

Behavior

Pod characteristics

Dolphin group sizes vary quite dramatically. River dolphins usually congregate in fairly small groups from 6 to 12 in number or, in some species, singly or in pairs. The individuals in these small groups know and recognize one another. Other species such as the oceanic pantropical spotted dolphin, common dolphin and spinner dolphin travel in large groups of hundreds of individuals. It is unknown whether every member of the group is acquainted with every other. However, large packs can act as a single cohesive unit – observations show that if an unexpected disturbance, such as a shark approach, occurs from the flank or from beneath the group, the group moves in near-unison to avoid the threat. This means that the dolphins must be aware not only of their near neighbors but also of other individuals nearby – in a similar manner to which humans perform "audience waves". This is achieved by sight, and possibly also echolocation. One hypothesis proposed by Jerison (1986) is that members of a pod of dolphins are able to share echolocation results with each other to create a better understanding of their surroundings.

Resident orcas living in British Columbia, Canada, and Washington, United States live in extremely stable family groups. The basis of this social structure is the matriline, consisting of a mother and her offspring, who travel with her for life. Male orcas never leave their mothers' pods, while female offspring may branch off to form their own matriline if they have many offspring of their own. Males have a particularly strong bond with their mother, and travel with them their entire lives, which can exceed 50 years.

Relationships in the orca population can be discovered through their vocalizations. Matrilines who share a common ancestor from only a few generations back share mostly the same dialect, comprising a pod. Pods who share some calls indicate a common ancestor from many generations back, and make up a clan. The orcas use these dialects to avoid inbreeding. They mate outside the clan, which is determined by the different vocalizations. There is evidence that other species of dolphins may also have dialects.

In bottlenose dolphin studies by Wells in Sarasota, Florida, and Smolker in Shark Bay, Australia, females of a community are all linked either directly or through a mutual association in an overall social structure known as fission-fusion. Groups of the strongest association are known as "bands", and their composition can remain stable over years. There is some genetic evidence that band members may be related, but these bands are not necessarily limited to a single matrilineal line. There is no evidence that bands compete with each other. In the same research areas, as well as in Moray Firth, Scotland, males form strong associations of two to three individuals, with a coefficient of association between 70 and 100. These groups of males are known as "alliances", and members often display synchronous behaviors such as respiration, jumping, and breaching. Alliance composition is stable on the order of tens of years, and may provide a benefit for the acquisition of females for mating. The complex social strategies of marine mammals such as bottlenose dolphins, "provide interesting parallels" with the social strategies of elephants and chimpanzees.

Complex play

Dolphins are known to engage in complex play behavior, which includes such things as producing stable underwater toroidal air-core vortex rings or "bubble rings". There are two main methods of bubble ring production: rapid puffing of a burst of air into the water and allowing it to rise to the surface, forming a ring; or swimming repeatedly in a circle and then stopping to inject air into the helical vortex currents thus formed. The dolphin will often then examine its creation visually and with sonar. They also appear to enjoy biting the vortex-rings they have created, so that they burst into many separate normal bubbles and then rise quickly to the surface. Certain whales are also known to produce bubble rings or bubble nets for the purpose of foraging. Many dolphin species also play by riding in waves, whether natural waves near the shoreline in a method akin to human "body-surfing", or within the waves induced by the bow of a moving boat in a behavior known as bow riding.

Cross-species cooperation

There have been instances in captivity of various species of dolphin and porpoise helping and interacting across species, including helping beached whales. They have also been known to live alongside resident (fish-eating) orcas for limited amounts of time. Dolphins have also been known to aid human swimmers in need, and in at least one instance a distressed dolphin approached human divers seeking assistance.

Creative behavior

Aside from having exhibited the ability to learn complex tricks, dolphins have also demonstrated the ability to produce creative responses. This was studied by Karen Pryor during the mid-1960s at Sea Life Park in Hawaii, and was published as The Creative Porpoise: Training for Novel Behavior in 1969. The two test subjects were two rough-toothed dolphins (Steno bredanensis), named Malia (a regular show performer at Sea Life Park) and Hou (a research subject at adjacent Oceanic Institute). The experiment tested when and whether the dolphins would identify that they were being rewarded (with fish) for originality in behavior and was very successful. However, since only two dolphins were involved in the experiment, the study is difficult to generalize.

Starting with the dolphin named Malia, the method of the experiment was to choose a particular behavior exhibited by her each day and reward each display of that behavior throughout the day's session. At the start of each new day Malia would present the prior day's behavior, but only when a new behavior was exhibited was a reward given. All behaviors exhibited were, at least for a time, known behaviors of dolphins. After approximately two weeks Malia apparently exhausted "normal" behaviors and began to repeat performances. This was not rewarded.

According to Pryor, the dolphin became almost despondent. However, at the sixteenth session without novel behavior, the researchers were presented with a flip they had never seen before. This was reinforced. As related by Pryor, after the new display: "instead of offering that again she offered a tail swipe we'd never seen; we reinforced that. She began offering us all kinds of behavior that we hadn't seen in such a mad flurry that finally we could hardly choose what to throw fish at".

The second test subject, Hou, took thirty-three sessions to reach the same stage. On each occasion the experiment was stopped when the variability of dolphin behavior became too complex to make further positive reinforcement meaningful.

The same experiment was repeated with humans, and it took the volunteers about the same length of time to figure out what was being asked of them. After an initial period of frustration or anger, the humans realised they were being rewarded for novel behavior. In dolphins this realisation produced excitement and more and more novel behaviors – in humans it mostly just produced relief.

Captive orcas have displayed responses indicating they get bored with activities. For instance, when Paul Spong worked with the orca Skana, he researched her visual skills. However, after performing favorably in the 72 trials per day, Skana suddenly began consistently getting every answer wrong. Spong concluded that a few fish were not enough motivation. He began playing music, which seemed to provide Skana with much more motivation.

At the Institute for Marine Mammal Studies in Mississippi, it has also been observed that the resident dolphins seem to show an awareness of the future. The dolphins are trained to keep their own tank clean by retrieving rubbish and bringing it to a keeper, to be rewarded with a fish. However, one dolphin, named Kelly, has apparently learned a way to get more fish, by hoarding the rubbish under a rock at the bottom of the pool and bringing it up one small piece at a time.

Use of tools

As of 1984, scientists have observed wild bottlenose dolphins in Shark Bay, Western Australia using a basic tool. When searching for food on the sea floor, many of these dolphins were seen tearing off pieces of sponge and wrapping them around their rostra, presumably to prevent abrasions and facilitate digging.

Communication

Whale song refers to the sounds made by whales, which are used for different kinds of communication.

Dolphins emit two distinct kinds of acoustic signals, which are called whistles and clicks:

  • Clicks – quick broadband burst pulses – are used for echolocation, although some lower-frequency broadband vocalizations may serve a non-echolocative purpose such as communication; for example, the pulsed calls of orcas. Pulses in a click train are emitted at intervals of ≈35–50 milliseconds, and in general these inter-click intervals are slightly greater than the round-trip time of sound to the target (a worked example follows this list).
  • Whistles – narrow-band frequency modulated (FM) signals – are used for communicative purposes, such as contact calls, the pod-specific dialects of resident orcas, or the signature whistle of bottlenose dolphins.
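
As a rough worked example of the click timing above: the round-trip time of sound to a target at range d is 2d/c, where c ≈ 1,500 m/s in seawater. For a target about 25 m away (a range chosen purely for illustration), 2 × 25 m / 1,500 m/s ≈ 0.033 s, or about 33 ms, just under the typical 35–50 ms inter-click interval; in effect, the dolphin waits for each echo to return before emitting the next click.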

There is strong evidence that some specific whistles, called signature whistles, are used by dolphins to identify and/or call each other; dolphins have been observed emitting both other specimens' signature whistles, and their own. A unique signature whistle develops quite early in a dolphin's life, and it appears to be created in imitation of the signature whistle of the dolphin's mother. Imitation of the signature whistle seems to occur only among the mother and its young, and among befriended adult males.

Xitco reported the ability of dolphins to eavesdrop passively on the active echolocative inspection of an object by another dolphin. Herman calls this effect the "acoustic flashlight" hypothesis, and may be related to findings by both Herman and Xitco on the comprehension of variations on the pointing gesture, including human pointing, dolphin postural pointing, and human gaze, in the sense of a redirection of another individual's attention, an ability which may require theory of mind.

The environment where dolphins live makes experiments much more expensive and complicated than for many other species; additionally, the fact that cetaceans can emit and hear sounds (which are believed to be their main means of communication) in a range of frequencies much wider than humans can means that sophisticated equipment, which was scarcely available in the past, is needed to record and analyse them. For example, clicks can contain significant energy at frequencies greater than 110 kHz (for comparison, it is unusual for a human to be able to hear sounds above 20 kHz), requiring that equipment have sampling rates of at least 220 kHz; MHz-capable hardware is often used.
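
The 220 kHz figure follows from the Nyquist sampling criterion: to capture a signal containing frequencies up to f_max without aliasing, the sampling rate f_s must satisfy f_s ≥ 2 × f_max, so energy at 110 kHz requires a sampling rate of at least 2 × 110 kHz = 220 kHz.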

In addition to the acoustic communication channel, the visual modality is also significant. The contrasting pigmentation of the body may be used, for example with "flashes" of the hypopigmented ventral area of some species, as can the production of bubble streams during signature whistling. Also, much of the synchronous and cooperative behaviors, as described in the Behavior section of this entry, as well as cooperative foraging methods, likely are managed at least partly by visual means.

Experiments have shown that dolphins can learn human sign language and can use whistles for two-way human–animal communication. Phoenix and Akeakamai, bottlenose dolphins, understood individual words and basic sentences like "touch the frisbee with your tail and then jump over it" (Herman, Richards, & Wolz 1984). Phoenix learned whistles, and Akeakamai learned sign language. Both dolphins understood the significance of the ordering of tasks in a sentence.

A study conducted by Jason Bruck of the University of Chicago showed that bottlenose dolphins can remember whistles of other dolphins they had lived with after 20 years of separation. Each dolphin has a unique whistle that functions like a name, allowing the marine mammals to keep close social bonds. The new research shows that dolphins have the longest memory yet known in any species other than humans.

Self-awareness

Self-awareness, though not well defined scientifically, is believed to be the precursor to more advanced processes like meta-cognitive reasoning (thinking about thinking) that are typical of humans. Scientific research in this field has suggested that bottlenose dolphins, alongside elephants and great apes, possess self-awareness.

The most widely used test for self-awareness in animals is the mirror test, developed by Gordon Gallup in the 1970s, in which a temporary dye is placed on an animal's body, and the animal is then presented with a mirror.

In 1995, Marten and Psarakos used television to test dolphin self-awareness. They showed dolphins real-time footage of themselves, recorded footage, and another dolphin. They concluded that their evidence suggested self-awareness rather than social behavior. While this particular study has not been repeated since then, dolphins have since passed the mirror test. However, some researchers have argued that evidence for self-awareness has not been convincingly demonstrated.

Further reading

  • Dolphin Communication and Cognition: Past, Present, and Future, edited by Denise L. Herzing and Christine M. Johnson, 2015, MIT Press

Memory protection

From Wikipedia, the free encyclopedia

Memory protection is a way to control memory access rights on a computer, and is a part of most modern instruction set architectures and operating systems. The main purpose of memory protection is to prevent a process from accessing memory that has not been allocated to it. This prevents a bug or malware within a process from affecting other processes, or the operating system itself. Protection may encompass all accesses to a specified area of memory, write accesses, or attempts to execute the contents of the area. An attempt to access unauthorized memory results in a hardware fault, e.g., a segmentation fault or storage violation exception, generally causing abnormal termination of the offending process. Memory protection for computer security includes additional techniques such as address space layout randomization and executable space protection.

Methods

Segmentation

Segmentation refers to dividing a computer's memory into segments. A reference to a memory location includes a value that identifies a segment and an offset within that segment.

The x86 architecture has multiple segmentation features, which are helpful for using protected memory on this architecture. On the x86 architecture, the Global Descriptor Table and Local Descriptor Tables can be used to reference segments in the computer's memory. Pointers to memory segments on x86 processors can also be stored in the processor's segment registers. Initially x86 processors had 4 segment registers, CS (code segment), SS (stack segment), DS (data segment) and ES (extra segment); later another two segment registers were added – FS and GS.

Paged virtual memory

In paging the memory address space or segment is divided into equal-sized blocks called pages. Using virtual memory hardware, each page can reside in any location at a suitable boundary of the computer's physical memory, or be flagged as being protected. Virtual memory makes it possible to have a linear virtual memory address space and to use it to access blocks fragmented over physical memory address space.

Most computer architectures which support paging also use pages as the basis for memory protection.

A page table maps virtual memory to physical memory. There may be a single page table, a page table for each process, a page table for each segment, or a hierarchy of page tables, depending on the architecture and the OS. The page tables are usually invisible to the process. Page tables make it easier to allocate additional memory, as each new page can be allocated from anywhere in physical memory.

It is impossible for an unprivileged application to access a page that has not been explicitly allocated to it, because every memory address either points to a page allocated to that application, or generates an interrupt called a page fault. Unallocated pages, and pages allocated to any other application, do not have any addresses from the application point of view.

A page fault may not necessarily indicate an error. Page faults are not only used for memory protection. The operating system may manage the page table in such a way that a reference to a page that has been previously swapped out to disk causes a page fault. The operating system intercepts the page fault, loads the required memory page, and the application continues as if no fault had occurred. This scheme, known as virtual memory, allows in-memory data not currently in use to be moved to disk storage and back in a way which is transparent to applications, to increase overall memory capacity.

On some systems, the page fault mechanism is also used for executable space protection such as W^X.
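
On Unix-like systems, a program that generates machine code at run time has to cooperate with this policy. Below is a minimal sketch, assuming Linux/POSIX mmap and mprotect, a 4 KiB page size, and an x86-64 machine (the code bytes encode "mov eax, 42; ret" and are purely illustrative): the page is first mapped writable but not executable, filled, and only then flipped to executable but not writable.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* x86-64 machine code for: mov eax, 42; ret (illustrative only) */
        unsigned char code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

        /* Step 1: map one page writable but NOT executable (the "W" of W^X). */
        void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        memcpy(buf, code, sizeof code);

        /* Step 2: flip the page to executable but NOT writable (the "X"). */
        if (mprotect(buf, 4096, PROT_READ | PROT_EXEC) != 0) {
            perror("mprotect");
            return 1;
        }

        int (*fn)(void) = (int (*)(void))buf;
        printf("generated code returned %d\n", fn());   /* prints 42 */

        munmap(buf, 4096);
        return 0;
    }

On a system enforcing W^X, requesting a mapping that is both writable and executable at once would instead be refused or would fault.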

Protection keys

A memory protection key (MPK) mechanism divides physical memory into blocks of a particular size (e.g., 4 KiB), each of which has an associated numerical value called a protection key. Each process also has a protection key value associated with it. On a memory access the hardware checks that the current process's protection key matches the value associated with the memory block being accessed; if not, an exception occurs. This mechanism was introduced in the System/360 architecture. It is available on today's System z mainframes and heavily used by System z operating systems and their subsystems.

The System/360 protection keys described above are associated with physical addresses. This is different from the protection key mechanism used by architectures such as the Hewlett-Packard/Intel IA-64 and Hewlett-Packard PA-RISC, which are associated with virtual addresses, and which allow multiple keys per process.

In the Itanium and PA-RISC architectures, translations (TLB entries) have keys (Itanium) or access ids (PA-RISC) associated with them. A running process has several protection key registers (16 for Itanium, 4 for PA-RISC). A translation selected by the virtual address has its key compared to each of the protection key registers. If any of them match (plus other possible checks), the access is permitted. If none match, a fault or exception is generated. The software fault handler can, if desired, check the missing key against a larger list of keys maintained by software; thus, the protection key registers inside the processor may be treated as a software-managed cache of a larger list of keys associated with a process.

PA-RISC has 15–18 bits of key; Itanium mandates at least 18. Keys are usually associated with protection domains, such as libraries, modules, etc.

On x86, the protection keys architecture allows tagging virtual addresses for user pages with any of 16 protection keys. All the pages tagged with the same protection key constitute a protection domain. A new register contains the permissions associated with each protection domain. Load and store operations are checked against both the page table permissions and the protection key permissions associated with the protection domain of the virtual address, and are only allowed if both permissions allow the access. The protection key permissions can be set from user space, allowing applications to directly restrict access to the application data without OS intervention. Since the protection keys are associated with a virtual address, the protection domains are per address space, so processes running in different address spaces can each use all 16 domains.
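
Linux exposes this feature to user space. Below is a minimal sketch, assuming a CPU and kernel with protection-keys (PKU) support and glibc's pkey_alloc/pkey_mprotect/pkey_set wrappers, with error handling abbreviated.

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        size_t len = 4096;
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        /* Allocate a protection key and tag the page with it. */
        int pkey = pkey_alloc(0, 0);
        if (pkey < 0) { perror("pkey_alloc (no PKU support?)"); return 1; }
        pkey_mprotect(p, len, PROT_READ | PROT_WRITE, pkey);

        p[0] = 'x';                        /* allowed: the key still permits writes */

        /* Revoke write access for this key from user space; this updates the
           per-thread PKRU register rather than the page tables. */
        pkey_set(pkey, PKEY_DISABLE_WRITE);

        printf("read back: %c\n", p[0]);   /* reads remain allowed */
        /* p[0] = 'y';  would now fault with a protection-key violation */

        pkey_free(pkey);
        return 0;
    }

The key point is the last step: the permission change is made entirely in user space, without a system call or any modification of the page tables.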

Protection rings

In Multics and systems derived from it, each segment has a protection ring for reading, writing and execution; an attempt to access a segment by a process running at a higher ring number than the segment's ring number causes a fault. There is a mechanism for safely calling procedures that run in a lower ring and returning to the higher ring. There are mechanisms for a routine running with a low ring number to access a parameter with the larger of its own ring and the caller's ring.

Simulated segmentation

Simulation is the use of a monitoring program to interpret the machine code instructions of some computer architectures. Such an instruction set simulator can provide memory protection by using a segmentation-like scheme and validating the target address and length of each instruction in real time before actually executing them. The simulator must calculate the target address and length and compare this against a list of valid address ranges that it holds concerning the thread's environment, such as any dynamic memory blocks acquired since the thread's inception, plus any valid shared static memory slots. The meaning of "valid" may change throughout the thread's life depending upon context. It may sometimes be allowed to alter a static block of storage, and sometimes not, depending upon the current mode of execution, which may or may not depend on a storage key or supervisor state.

It is generally not advisable to use this method of memory protection where adequate facilities exist on a CPU, as it takes valuable processing power from the computer. However, it is generally used for debugging and testing purposes to provide an extra-fine level of granularity to otherwise generic storage violations, and can indicate precisely which instruction is attempting to overwrite a particular section of storage, which may have the same storage key as unprotected storage.

Capability-based addressing

Capability-based addressing is a method of memory protection that is unused in modern commercial computers. In this method, pointers are replaced by protected objects (called capabilities) that can only be created using privileged instructions which may only be executed by the kernel, or some other process authorized to do so. This effectively lets the kernel control which processes may access which objects in memory, with no need to use separate address spaces or context switches. Only a few commercial products used capability-based security: the Plessey System 250, IBM System/38, the Intel iAPX 432 architecture and KeyKOS. Capability approaches are widely used in research systems such as EROS and the Combex DARPA browser. They are used conceptually as the basis for some virtual machines, most notably Smalltalk and Java. Currently, the DARPA-funded CHERI project at the University of Cambridge is working to create a modern capability machine that also supports legacy software.

Dynamic tainting

Dynamic tainting is a technique for protecting programs from illegal memory accesses. When memory is allocated, at runtime, this technique taints both the memory and the corresponding pointer using the same taint mark. Taint marks are then suitably propagated while the program executes and are checked every time a memory address m is accessed through a pointer p; if the taint marks associated with m and p differ, the execution is stopped and the illegal access is reported.
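
As a rough, software-only sketch of the idea (the helper names below are hypothetical; real implementations such as SPARC ADI perform the check in hardware on every access rather than through explicit calls), an allocator can hand out a taint mark with each block, and a checker can compare marks before a pointer is dereferenced:

    #include <stdio.h>
    #include <stdlib.h>

    /* One allocated block and the taint mark it was given. */
    struct block { char *base; size_t len; unsigned taint; };

    static struct block blocks[16];
    static size_t nblocks;
    static unsigned next_taint = 1;

    /* Allocate memory, tainting the block and the returned pointer alike. */
    static void *taint_malloc(size_t len, unsigned *taint_out) {
        char *p = malloc(len);
        blocks[nblocks++] = (struct block){ p, len, next_taint };
        *taint_out = next_taint++;
        return p;
    }

    /* Check an access: the pointer's taint must match the taint of the block
       that owns the target address; otherwise report an illegal access. */
    static void check_access(const void *addr, unsigned ptr_taint) {
        for (size_t i = 0; i < nblocks; i++) {
            const char *b = blocks[i].base;
            if ((const char *)addr >= b && (const char *)addr < b + blocks[i].len) {
                if (blocks[i].taint != ptr_taint) {
                    fprintf(stderr, "illegal access: taint mismatch\n");
                    abort();
                }
                return;                    /* marks match: access allowed */
            }
        }
        fprintf(stderr, "illegal access: unallocated memory\n");
        abort();
    }

    int main(void) {
        unsigned ta, tb;
        char *a = taint_malloc(16, &ta);
        char *b = taint_malloc(16, &tb);

        check_access(a, ta);       /* ok: pointer and memory carry the same mark */
        check_access(b + 4, ta);   /* aborts: memory belongs to b, mark is a's */
        return 0;
    }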

SPARC M7 processors (and higher) implement dynamic tainting in hardware. Oracle markets this feature as Silicon Secured Memory (SSM) (previously branded as Application Data Integrity (ADI)).

The lowRISC CPU design includes dynamic tainting under the name Tagged Memory.

Measures

The protection level of a particular implementation may be measured by how closely it adheres to the principle of minimum privilege.

Memory protection in different operating systems

Different operating systems use different forms of memory protection or separation. Although memory protection was common on most mainframes and many minicomputer systems from the 1960s, true memory separation was not used in home computer operating systems until OS/2 (and RISC OS) was released in 1987. On prior systems, such lack of protection was even used as a form of interprocess communication, by sending a pointer between processes. It is possible for processes to access system memory in the Windows 9x family of operating systems.

Most modern operating systems implement some form of memory protection.

On Unix-like systems, the mprotect system call is used to control memory protection.
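
A minimal sketch of its use, assuming a POSIX system and with error handling abbreviated: allocate a page-aligned page, drop its write permission, and any later write is then reported as a segmentation fault.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/mman.h>

    int main(void) {
        long pagesize = sysconf(_SC_PAGESIZE);
        char *p;

        /* Obtain one page-aligned page, as mprotect requires. */
        if (posix_memalign((void **)&p, pagesize, pagesize) != 0) return 1;

        p[0] = 'a';                               /* writable by default */

        if (mprotect(p, pagesize, PROT_READ) != 0) {   /* drop write permission */
            perror("mprotect");
            return 1;
        }

        printf("still readable: %c\n", p[0]);
        /* p[0] = 'b';  would now raise SIGSEGV (a segmentation fault) */

        mprotect(p, pagesize, PROT_READ | PROT_WRITE); /* restore before freeing */
        free(p);
        return 0;
    }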

Computer multitasking

From Wikipedia, the free encyclopedia

[Screenshot caption] Modern desktop operating systems are capable of handling large numbers of different processes at the same time: Linux Mint shown simultaneously running the Xfce desktop environment, Firefox, a calculator program, the built-in calendar, Vim, GIMP, and VLC media player.

[Screenshot caption] Multitasking capabilities of Microsoft Windows 1.01, released in 1985, here shown running the MS-DOS Executive and Calculator programs.

In computing, multitasking is the concurrent execution of multiple tasks (also known as processes) over a certain period of time. New tasks can interrupt already started ones before they finish, instead of waiting for them to end. As a result, a computer executes segments of multiple tasks in an interleaved manner, while the tasks share common processing resources such as central processing units (CPUs) and main memory. Multitasking automatically interrupts the running program, saving its state (partial results, memory contents and computer register contents) and loading the saved state of another program and transferring control to it. This "context switch" may be initiated at fixed time intervals (pre-emptive multitasking), or the running program may be coded to signal to the supervisory software when it can be interrupted (cooperative multitasking).

Multitasking does not require parallel execution of multiple tasks at exactly the same time; instead, it allows more than one task to advance over a given period of time. Even on multiprocessor computers, multitasking allows many more tasks to be run than there are CPUs.

Multitasking is a common feature of computer operating systems. It allows more efficient use of the computer hardware; where a program is waiting for some external event such as a user input or an input/output transfer with a peripheral to complete, the central processor can still be used with another program. In a time-sharing system, multiple human operators use the same processor as if it was dedicated to their use, while behind the scenes the computer is serving many users by multitasking their individual programs. In multiprogramming systems, a task runs until it must wait for an external event or until the operating system's scheduler forcibly swaps the running task out of the CPU. Real-time systems such as those designed to control industrial robots, require timely processing; a single processor might be shared between calculations of machine movement, communications, and user interface.

Often multitasking operating systems include measures to change the priority of individual tasks, so that important jobs receive more processor time than those considered less significant. Depending on the operating system, a task might be as large as an entire application program, or might be made up of smaller threads that carry out portions of the overall program.

A processor intended for use with multitasking operating systems may include special hardware to securely support multiple tasks, such as memory protection, and protection rings that ensure the supervisory software cannot be damaged or subverted by user-mode program errors.

The term "multitasking" has become an international term, as the same word is used in many other languages such as German, Italian, Dutch, Danish and Norwegian.

Multiprogramming

In the early days of computing, CPU time was expensive, and peripherals were very slow. When the computer ran a program that needed access to a peripheral, the central processing unit (CPU) would have to stop executing program instructions while the peripheral processed the data. This was usually very inefficient.

The first computer using a multiprogramming system was the British Leo III owned by J. Lyons and Co. During batch processing, several different programs were loaded in the computer memory, and the first one began to run. When the first program reached an instruction waiting for a peripheral, the context of this program was stored away, and the second program in memory was given a chance to run. The process continued until all programs finished running.

The use of multiprogramming was enhanced by the arrival of virtual memory and virtual machine technology, which enabled individual programs to make use of memory and operating system resources as if other concurrently running programs were, for all practical purposes, nonexistent.

Multiprogramming gives no guarantee that a program will run in a timely manner. Indeed, the first program may very well run for hours without needing access to a peripheral. As there were no users waiting at an interactive terminal, this was no problem: users handed in a deck of punched cards to an operator, and came back a few hours later for printed results. Multiprogramming greatly reduced wait times when multiple batches were being processed.

Cooperative multitasking

Early multitasking systems used applications that voluntarily ceded time to one another. This approach, which was eventually supported by many computer operating systems, is known today as cooperative multitasking. Although it is now rarely used in larger systems except for specific applications such as CICS or the JES2 subsystem, cooperative multitasking was once the only scheduling scheme employed by Microsoft Windows and Classic Mac OS to enable multiple applications to run simultaneously. Cooperative multitasking is still used today on RISC OS systems.

As a cooperatively multitasked system relies on each process regularly giving up time to other processes on the system, one poorly designed program can consume all of the CPU time for itself, either by performing extensive calculations or by busy waiting; both would cause the whole system to hang. In a server environment, this is a hazard that makes the entire environment unacceptably fragile.

Preemptive multitasking

Preemptive multitasking allows the computer system to more reliably guarantee to each process a regular "slice" of operating time. It also allows the system to deal rapidly with important external events like incoming data, which might require the immediate attention of one or another process. Operating systems were developed to take advantage of these hardware capabilities and run multiple processes preemptively. Preemptive multitasking was implemented in the PDP-6 Monitor and MULTICS in 1964, in OS/360 MFT in 1967, and in Unix in 1969, and was available in some operating systems for computers as small as DEC's PDP-8; it is a core feature of all Unix-like operating systems, such as Linux, Solaris and BSD with its derivatives, as well as modern versions of Windows.

At any specific time, processes can be grouped into two categories: those that are waiting for input or output (called "I/O bound"), and those that are fully utilizing the CPU ("CPU bound"). In primitive systems, the software would often "poll", or "busywait" while waiting for requested input (such as disk, keyboard or network input). During this time, the system was not performing useful work. With the advent of interrupts and preemptive multitasking, I/O bound processes could be "blocked", or put on hold, pending the arrival of the necessary data, allowing other processes to utilize the CPU. As the arrival of the requested data would generate an interrupt, blocked processes could be guaranteed a timely return to execution.

The earliest preemptive multitasking OS available to home users was Sinclair QDOS on the Sinclair QL, released in 1984, but very few people bought the machine. Commodore's Amiga, released the following year, was the first commercially successful home computer to use the technology, and its multimedia abilities make it a clear ancestor of contemporary multitasking personal computers. Microsoft made preemptive multitasking a core feature of their flagship operating system in the early 1990s when developing Windows NT 3.1 and then Windows 95. It was later adopted on the Apple Macintosh by Mac OS X, which, as a Unix-like operating system, uses preemptive multitasking for all native applications.

A similar model is used in Windows 9x and the Windows NT family, where native 32-bit applications are multitasked preemptively. 64-bit editions of Windows, both for the x86-64 and Itanium architectures, no longer support legacy 16-bit applications, and thus provide preemptive multitasking for all supported applications.

Real time

Another reason for multitasking was the design of real-time computing systems, where a number of possibly unrelated external activities need to be controlled by a single processor system. In such systems a hierarchical interrupt system is coupled with process prioritization to ensure that key activities are given a greater share of available process time.

Multithreading

As multitasking greatly improved the throughput of computers, programmers started to implement applications as sets of cooperating processes (e.g., one process gathering input data, one process processing input data, one process writing out results on disk). This, however, required some tools to allow processes to efficiently exchange data.

Threads were born from the idea that the most efficient way for cooperating processes to exchange data would be to share their entire memory space. Thus, threads are effectively processes that run in the same memory context and share other resources with their parent processes, such as open files. Threads are described as lightweight processes because switching between threads does not involve changing the memory context.
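
A minimal sketch of this shared memory context, assuming POSIX threads: both threads see the same counter because they run in one address space, and a mutex supplies the coordination that such sharing makes necessary.

    #include <stdio.h>
    #include <pthread.h>

    static long counter = 0;                        /* shared by all threads */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);              /* serialize access */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;

        /* Both threads run in the same memory context, so both see 'counter'. */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        printf("counter = %ld\n", counter);         /* 200000 with the lock held */
        return 0;
    }

(Built with cc -pthread; without the mutex the two read-modify-write loops could interleave and lose increments, which is exactly the data-exchange hazard the shared memory space introduces.)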

While threads are scheduled preemptively, some operating systems provide a variant to threads, named fibers, that are scheduled cooperatively. On operating systems that do not provide fibers, an application may implement its own fibers using repeated calls to worker functions. Fibers are even more lightweight than threads, and somewhat easier to program with, although they tend to lose some or all of the benefits of threads on machines with multiple processors.
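
One way an application can roll its own fibers on a Unix-like system is the (now somewhat dated) ucontext API; the sketch below assumes getcontext/makecontext/swapcontext are available. The fiber runs until it explicitly yields and is only resumed when the main context chooses to switch back, which is the cooperative scheduling described above.

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, fiber_ctx;
    static char fiber_stack[64 * 1024];

    static void fiber_fn(void) {
        printf("fiber: step 1\n");
        swapcontext(&fiber_ctx, &main_ctx);   /* cooperatively yield back */
        printf("fiber: step 2\n");
        /* returning resumes uc_link, i.e. the main context */
    }

    int main(void) {
        getcontext(&fiber_ctx);
        fiber_ctx.uc_stack.ss_sp   = fiber_stack;
        fiber_ctx.uc_stack.ss_size = sizeof fiber_stack;
        fiber_ctx.uc_link          = &main_ctx;
        makecontext(&fiber_ctx, fiber_fn, 0);

        printf("main: starting fiber\n");
        swapcontext(&main_ctx, &fiber_ctx);
        printf("main: fiber yielded, resuming it\n");
        swapcontext(&main_ctx, &fiber_ctx);
        printf("main: fiber finished\n");
        return 0;
    }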

Some systems directly support multithreading in hardware.

Memory protection

Essential to any multitasking system is the ability to safely and effectively share access to system resources. Access to memory must be strictly managed to ensure that no process can inadvertently or deliberately read or write to memory locations outside the process's address space. This is done for the purpose of general system stability and data integrity, as well as data security.

In general, memory access management is a responsibility of the operating system kernel, in combination with hardware mechanisms that provide supporting functionalities, such as a memory management unit (MMU). If a process attempts to access a memory location outside its memory space, the MMU denies the request and signals the kernel to take appropriate actions; this usually results in forcibly terminating the offending process. Depending on the software and kernel design and the specific error in question, the user may receive an access violation error message such as "segmentation fault".

In a well designed and correctly implemented multitasking system, a given process can never directly access memory that belongs to another process. An exception to this rule is in the case of shared memory; for example, in the System V inter-process communication mechanism the kernel allocates memory to be mutually shared by multiple processes. Such features are often used by database management software such as PostgreSQL.
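
A minimal sketch of the System V variant, assuming a Unix-like system (shmget/shmat, with error handling abbreviated): the parent creates a segment, a forked child writes into it, and the parent reads the child's data through its own mapping of the same memory.

    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        /* Create a private 4 KiB shared memory segment. */
        int id = shmget(IPC_PRIVATE, 4096, IPC_CREAT | 0600);
        char *mem = shmat(id, NULL, 0);        /* attach it to this process */

        if (fork() == 0) {                     /* the child inherits the mapping */
            strcpy(mem, "hello from the child");
            shmdt(mem);
            _exit(0);
        }

        wait(NULL);                            /* let the child finish */
        printf("parent read: %s\n", mem);      /* both processes saw the same memory */

        shmdt(mem);
        shmctl(id, IPC_RMID, NULL);            /* remove the segment */
        return 0;
    }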

Inadequate memory protection mechanisms, either due to flaws in their design or poor implementations, allow for security vulnerabilities that may be potentially exploited by malicious software.

Memory swapping

Use of a swap file or swap partition is a way for the operating system to provide more memory than is physically available by keeping portions of the primary memory in secondary storage. While multitasking and memory swapping are two completely unrelated techniques, they are very often used together, as swapping memory allows more tasks to be loaded at the same time. Typically, a multitasking system allows another process to run when the running process hits a point where it has to wait for some portion of memory to be reloaded from secondary storage.

Programming

Processes that are entirely independent are not much trouble to program in a multitasking environment. Most of the complexity in multitasking systems comes from the need to share computer resources between tasks and to synchronize the operation of co-operating tasks.

Various concurrent computing techniques are used to avoid potential problems caused by multiple tasks attempting to access the same resource.

Bigger systems were sometimes built with a central processor(s) and some number of I/O processors, a kind of asymmetric multiprocessing.

Over the years, multitasking systems have been refined. Modern operating systems generally include detailed mechanisms for prioritizing processes, while symmetric multiprocessing has introduced new complexities and capabilities.

Mycorrhizae and changing climate

From Wikipedia, the free encyclopedia

Mycorrhizae and changing climate refers to the effects of climate change on mycorrhizae, fungi that form an endosymbiotic relationship with a vascular host plant by colonizing its roots. Climate change here means any lasting change in weather or temperature. A good indicator of climate change is global warming, though the two are not synonymous. Temperature plays a very important role in all ecosystems on Earth, especially those with high counts of mycorrhizae in the soil biota.

Mycorrhizae are one of the most widespread symbioses on the planet, as they form a plant–fungal interaction with nearly eighty percent of all terrestrial plants. The resident mycorrhizae benefit from a share of the sugars and carbon produced during photosynthesis, while the plant effectively accesses water and other nutrients, such as nitrogen and phosphorus, crucial to its health. This symbiosis has become so beneficial to terrestrial plants that some depend entirely on the relationship to sustain themselves in their respective environments. The fungi are essential to the planet, as most ecosystems, especially those in the Arctic, are filled with plants that survive with the aid of mycorrhizae. Because of their importance to a productive ecosystem, understanding these fungi and their symbioses is currently an active area of scientific research.

History of mycorrhizae

First wave – Triassic

Mycorrhizae and their related symbioses have been around for hundreds of millions of years, dating as far back as the Triassic Period (200–250 million years ago) and even earlier. While there are still many gaps in the timeline of mycorrhizae, the oldest known forms of the fungal group can be dated back as far as 450 million years ago or older, when the first wave of eukaryotic fungi arose alongside the evolution of early land plants. Some later lineages consisted only of arbuscular mycorrhizae until the early Cretaceous Period (75–140 million years ago), when the clade began to branch drastically into various forms of mycorrhizae, most of which would become specialized to particular niches, environments, climates, and plants. However, these lineages are separate from the lineages from which the other major types of mycorrhizae derived. There are essential mycorrhizae that evolved within other fungal lineages such as Ascomycota (a sister phylum to Basidiomycota, another major group of mycorrhizal fungi), which gave rise to what would eventually become ericoid mycorrhizae or ectomycorrhizae. Some of the derived families are more complex due to specialized or multifunctional roots, which were not present in earlier times before Pangaea. The climates of the environments these groups of mycorrhizae occupied (which developed on rocky surfaces) were arid, not allowing for much diversification of life due to fixed niches. The downside to looking into the history of most fungus–plant symbioses is that fungi typically do not preserve very well, so finding a fungal fossil from more ancient periods is not only difficult but offers only limited information about the fungus and the environment in which it developed.

Second wave – Cretaceous

This diversification in both plants and mycorrhizae brought about their second wave of evolution within the Cretaceous Period, which introduced, alongside arbuscular mycorrhizae, three new types of mycorrhizae: orchid mycorrhizae, ericoid mycorrhizae, and ectomycorrhizae. Across the taxonomic diversity of plants with and without mycorrhizal symbiosis, arbuscular mycorrhizae account for about 71% of species, orchid mycorrhizae (the Orchidaceae) 10%, ectomycorrhizae 2%, and ericoid mycorrhizae 1.4%. The defining feature of this wave of evolution was the consistency of root types (in other words, the similarities shared between root types, though characteristically different for individual families or even species) within the families, which allowed for appropriate symbiosis with the plants of the period. The environments of this period saw a radiation of angiosperms, reflecting a different reproductive strategy than before and providing distinct morphological traits for most varieties of plants, as opposed to prior periods, before the K–Pg extinction event. The climate that allowed for these developments could be described as relatively warm, leading to higher sea levels and shallow inland bodies of water. These areas were occupied mostly by reptiles that fed on animals and insects that fed on plants, showing a more complex ecosystem than was present in the Triassic Period and further pushing evolution in plants and mycorrhizae via ever-present natural selection. There is plenty of plant evidence to support most of these findings; however, the information necessary to form hypotheses regarding the mycorrhizae of the time, as well as other related symbioses, is extremely limited, as fossilization of such organisms is very rare.

Third wave – Paleogene

The third wave of evolutionary diversification began in the Paleogene Period (24–75 million years ago) and is closely linked with changes in climate and soil conditions. The conditions that caused these changes are mostly due to an increase in disturbed niches and environments and the warming of global ecosystems, causing a shift in the mycorrhizal types of plants growing in more complex soils. This wave consists of lineages of plants with root morphologies that are often inconsistent with the previously mentioned families from the second wave; these are referred to as "New Complex Root Clades", due to the complexities that arise in peculiar environments between ectomycorrhizal and nonmycorrhizal plants. While both the second and third waves are linked to climate change, the defining feature of the third wave is the increased variability within families and the complexity of plant–fungus associations. These stretches of diversification were brought about by an initially hot and humid climate that became cooler and drier over time, forcing genetic drift.

These three waves help divide and organize most of the mycorrhizal timeline without getting into specific genera and species. While it is important to mention the distinctions between these fungal types and their differences, it is equally important to recognize the diversification of their counterpart plants as well. A number of notable nonmycorrhizal plants also speciated during the Cretaceous Period: while there was a spread in mycorrhizal plants, there was also a spread in nonmycorrhizal plants. All of this helps paint a clearer picture of the distribution of plants and their symbiotic fungi over the course of Earth's history.

The effect of climate on plants and mycorrhizae

There are various effects that a changing climate can have on the numerous species found within an ecosystem. This includes plants and their symbiotic relations. As it is understood, any particular mycorrhiza is expected to be both present and abundant in any of its respective niches so long as the environment can support its growth. However, sustainable environments are becoming uncommon due to the effects of a warming, changing climate. It is important to note that the relationship between the vascular host plant and mycorrhizae is mutualistic. This means global environmental change first affects the host plant, which in turn impacts the mycorrhizae in a very similar way. Essentially, if the host plant experiences environmental stress, this will be passed along to the mycorrhizae, which could have negative consequences.

Arbuscular mycorrhizae, the most common form of mycorrhizae and widespread "essential components of soil biota in natural and agricultural ecosystems", are used as a benchmark for the impacts of climate change on mycorrhizae in the following sections.

Increasing temperatures and excess CO2

The temperature of the globe is steadily rising, with the majority of the blame placed on anthropogenic production of pollutant gases. The most common gas produced by both artificial and natural means is CO2, and its high collective concentration in the atmosphere traps a large amount of heat. Heat affects fungi differently depending on their genus, species, or strain; while some fungi suffer at certain temperatures, others thrive at them, depending on the environments in which the fungi are most often found. Temperature also plays a vital role in the availability of water and nutrients: organisms in hotter climates may absorb nutrients more readily but are also threatened by denaturation of proteins. If the soil is dried by excessive heat, the hyphae of the mycorrhizae, as well as the plant root hairs, will have far more difficulty obtaining both the water and the nutrients needed to sustain their interactions.

While temperature plays a key role in fungal and plant growth, there is equally as much dependence on the amount of CO2 that is absorbed. The amount of CO2 within the soil differs from the amount in the air; this CO2 is a vital part of many plant processes (such as photosynthesis), and because plant–fungus symbiosis takes place in the roots, mycorrhizae are affected as well. When plants are exposed to higher levels of CO2, they tend to take advantage of it and grow faster. This also increases the allocation of carbon to the plant's roots rather than its shoots, which benefits the symbiotic mycorrhizae: there is an increase in the amount of space the roots can occupy, so the cycle of trade between the plant and the fungi increases, showing potential for further growth and exploitation of the available resources until the feedback becomes neutral. The carbon allocated to the mycorrhizae also allows their hyphae to expand and grow at an increased rate, but the direct benefits to the mycorrhizae alone seem to cease there: "Despite significant effects on root carbohydrate levels, there were generally no significant effects on mycorrhizal colonization." In other words, while the plant may grow larger, the mycorrhizae grow proportionally larger only with the growth of the plant; the mycorrhizae's growth is caused by the growth of the plant, and the opposite cannot be shown to be true, even though these environmental factors affect both partners. CO2 should not be thought of as entirely beneficial either: its main contribution is to photosynthesis, on which the plant relies, and the essential sugars the mycorrhizae require can only be provided by the plant, not extracted directly from the soil. In the long run, the effects of CO2 on the environment are detrimental, as it is a major contributor to the greenhouse-gas problem and to the loss of territory in which plants and their respective mycorrhizae grow.

Mycorrhizae in Arctic Regions

While it may seem like a barren landscape, the Arctic is actually home to huge populations of animals, plants, and fungi. The plants in these regions depend on their relationship with mycorrhizae, and without it, would not fare as well as they do in such harsh conditions. In Arctic regions, nitrogen and water are harder for plants to obtain as the ground is frozen, which makes mycorrhizae crucial to their fitness, health, and growth.

Climate change has been recognized to affect Arctic regions more drastically than non-Arctic regions, a process known as Arctic amplification. There seem to be more positive feedback loops than negative ones occurring in the Arctic as a result, which causes faster warming and further unpredictable change that will affect its ecosystems. Since mycorrhizae tend to do better in cooler temperatures, warming could have a detrimental effect on the overall health of colonies.

Since the soils of these ecosystems offer only sparse, not easily accessed nutrients, it is critical for shrubs and other vascular plants to obtain such nutrients through their symbiosis with mycorrhizae. If these relationships are placed under too much stress, a positive feedback loop could occur, causing a decrease in terrestrial plant and fungal populations as environments become harsher and potentially drier.

Biogeographic movement of plants and mycorrhizae

"Fungi may appear to have limited geographical distributions, but dispersal per se plays no role in determining such distributions." The limitations of animals and plants is different from that of fungi. Fungi tend to grow where there are already plants and probably animals because many of them are symbiotic in nature and the rely on very specific environments in order to grow. Plants on the other hand must rely on separate elements in order to spread, like the wind or other animals, and when seeds are planted the environments must still be sufficient enough to help them grow. Arbuscular mycorrhizae are the best example of this as it is found nearly anywhere where plants are growing in the wild. However, with changing climate comes change in environments. As climates warm or cool, plants tend to "move", that is – they exhibit biogeographic movement. Some habitats no longer remain viable to certain plants but then other previously hostile environments may become more hospitable to the same species. Once again, if a plant occupies an environment where mycorrhizae can grow and form a symbiosis with the plant, it will likely occur with seldom exceptions.

Not all fungi can grow in the same places, though, and distinct types of fungi need to be considered separately. Even fungi with very large areas of dispersal are subject to the same barriers that most species face. Some elevations are too high or too low and limit spore dispersal, favoring spread at a similar elevation over movement to higher or lower ground. Some biomes are too wet or too dry for a plant not only to move to but to grow and survive in, and fungi that occupy one climate may not function as efficiently (if at all) in another, limiting dispersal even more. Other factors also mediate the dispersal of fungi and create boundaries that can cause speciation between fungal communities, such as distance, bodies of water, the strength or direction of wind, and even animal interactions. There are "structural differences, such as mushroom height, spore shape and size of the Buller's drop, that determine dispersal distances." Morphological reproductive traits such as these play a large role in dispersal, and if a barrier isolates a population or eliminates these traits, for example a river, or a lack of soil able to support mycorrhizal interactions because of falling pH from acid rain, essential tactics for germination become useless: the offspring do not survive, and the population cannot grow or move. Vertical transmission of mycorrhizae does not occur, so moving past these barriers requires alternative means of horizontal transmission. Endemism in mycorrhizal fungi results from these limits on how fungal species can spread; such species may be widespread within their respective niches and home ranges but remain confined to them.

While changing climates limit how these fungi spread, they also illustrate essential points. Fungal communities at similar latitudes show a greater degree of phylogenetic similarity to one another, roughly as much as plant communities do. Tracking one species of plant can therefore help narrow down the movement of the mycorrhizae commonly associated with it. Alaskan trees, for example, tend to move north as the climate changes, because tundra regions are becoming more hospitable and allow these trees to grow there. Mycorrhizae follow, but which ones in particular is difficult to measure: vegetation above ground is easier to observe and varies less across a large region, whereas soil contents vary widely over much smaller distances, making it hard to pinpoint the movements of particular fungi that may be competing with one another. These Alaskan trees, however, host obligate endomycorrhizal symbionts in great quantities, so accounting for their movement is easier. Measurements showed varying distributions not only of the ectomycorrhizal fungi in trees, but also of the ericoid mycorrhizae, orchid mycorrhizae, and arbuscular mycorrhizae in shrubs and fruiting plants. Of the measurable ectomycorrhizal species richness and density, it was found that "the colonization of seedlings declines with increased distance from forest edge for both native and invasive tree species across fine spatial scales." Thus, the greatest inhibitor of forest expansion is actually the mycorrhizae that prioritize a host's growth over its establishment (the planting of the seed). The nutrients in the soil cannot sustain the complete growth of a tree within the limits of nutrient absorption that such growth-focused mycorrhizae allow. The mycorrhizae that help a plant's establishment aid the species (and in turn themselves) the most, by maintaining a healthy and balanced intake of nutrients. Species moving away from the equator because of climate change likely benefit most when establishment-promoting mycorrhizae colonize their roots and spread to their offspring.

Effects on environmental health

CO2 is only one of the most common gases that enter the atmosphere and circulate within the natural cycles essential to preserving life; a plethora of other harmful emissions are produced by industrial activity. These gaseous molecules negatively affect the phosphorus cycle, carbon cycle, water cycle, nitrogen cycle, and the many other cycles that keep ecosystems in check. Mycorrhizal fungi are affected most heavily by absorbing unnatural chemicals found in soils near man-made facilities such as factories, which release pollutants that enter ecosystems by many routes; one of the worst is acid rain, which precipitates sulfur and nitrogen oxides into soils and harms or kills the plants in its path. This is only one example of how severely pollution can affect the environment, and there is also evidence that agricultural systems are heavily affected by negative human influences. The advantage of having a mycorrhizal community in an agricultural setting is that the plants survive and obtain nutrients from their environment more easily. These mycorrhizae, the most common being arbuscular mycorrhizae, are directly and indirectly exposed to the same stresses that human activity places on their host plants, in particular the pollutants of the Earth's atmosphere.

The most common industrial air pollutants introduced into the atmosphere include, but are not limited to, SO2, NOx, and O3. These gases all negatively impact mycorrhizal and plant development and growth. The most notable effects on the mycorrhizae include "a reduction in viable mycorrhizae propagules, the colonization of roots, degradation in connections between trees, reduction in the mycorrhizal incidence in trees, and reduction in the enzyme activity of ectomycorrhizal roots." Root growth and mycorrhizal colonization are especially important, because they directly influence how well the plant can take up essential nutrients and therefore affect its survival more than the other adverse effects do. Because changing climates are correlated with the production of air pollutants, these results matter for understanding how not only mycorrhizae but also their symbiotic plant hosts are affected.

Political psychology

From Wikipedia, the free encyclopedia ...