Wednesday, December 6, 2023

Nature

From Wikipedia, the free encyclopedia

https://en.wikipedia.org/wiki/Nature

A winter landscape in Lapland, Finland

Lightning strikes during the eruption of the Galunggung volcano, West Java, in 1982

Life in the abyssal oceans

Nature, in the broadest sense, is the physical world or universe. "Nature" can refer to the phenomena of the physical world, and also to life in general. The study of nature is a large, if not the only, part of science. Although humans are part of nature, human activity is often understood as a separate category from other natural phenomena.

The word nature is borrowed from the Old French nature and is derived from the Latin word natura, or "essential qualities, innate disposition", which in ancient times literally meant "birth". In ancient philosophy, natura is mostly used as the Latin translation of the Greek word physis (φύσις), which originally related to the intrinsic characteristics that plants, animals, and other features of the world develop of their own accord. The concept of nature as a whole, the physical universe, is one of several expansions of the original notion; it began with certain core applications of the word φύσις by pre-Socratic philosophers (though the word had a dynamic dimension then, especially for Heraclitus), and has steadily gained currency ever since.

With the advent of the modern scientific method in the last several centuries, nature came to be seen as a passive reality, organized and moved by divine laws. With the Industrial Revolution, nature increasingly came to be seen as the part of reality deprived of intentional intervention: it was hence held sacred by some traditions (Rousseau, American transcendentalism) or treated as a mere decorum for divine providence or human history (Hegel, Marx). However, a vitalist vision of nature, closer to the pre-Socratic one, was reborn at the same time, especially after Charles Darwin.

Within the various uses of the word today, "nature" often refers to geology and wildlife. Nature can refer to the general realm of living plants and animals, and in some cases to the processes associated with inanimate objects—the way that particular types of things exist and change of their own accord, such as the weather and geology of the Earth. It is often taken to mean the "natural environment" or wilderness—wild animals, rocks, forest, and in general those things that have not been substantially altered by human intervention, or which persist despite human intervention. For example, manufactured objects and human interaction generally are not considered part of nature, unless qualified as, for example, "human nature" or "the whole of nature". This more traditional concept of natural things that can still be found today implies a distinction between the natural and the artificial, with the artificial being understood as that which has been brought into being by a human consciousness or a human mind. Depending on the particular context, the term "natural" might also be distinguished from the unnatural or the supernatural.

Earth

The Blue Marble, which is a famous view of the Earth, taken in 1972 by the crew of Apollo 17

Earth is the only planet known to support life, and its natural features are the subject of many fields of scientific research. Within the Solar System, it is the third closest planet to the Sun; it is the largest terrestrial planet and the fifth largest overall. Its most prominent climatic features are its two large polar regions, two relatively narrow temperate zones, and a wide equatorial tropical to subtropical region. Precipitation varies widely with location, from several metres of water per year to less than a millimetre. About 71 percent of the Earth's surface is covered by salt-water oceans. The remainder consists of continents and islands, with most of the inhabited land in the Northern Hemisphere.

Earth has evolved through geological and biological processes that have left traces of the original conditions. The outer surface is divided into several gradually migrating tectonic plates. The interior remains active, with a thick layer of plastic mantle and an iron-filled core that generates a magnetic field. This iron core is composed of a solid inner phase, and a fluid outer phase. Convective motion in the core generates electric currents through dynamo action, and these, in turn, generate the geomagnetic field.

The atmospheric conditions have been significantly altered from the original conditions by the presence of life-forms, which create an ecological balance that stabilizes the surface conditions. Despite the wide regional variations in climate by latitude and other geographic factors, the long-term average global climate is quite stable during interglacial periods, and variations of a degree or two of average global temperature have historically had major effects on the ecological balance, and on the actual geography of the Earth.

Geology

Geology is the science and study of the solid and liquid matter that constitutes the Earth. The field of geology encompasses the study of the composition, structure, physical properties, dynamics, and history of Earth materials, and the processes by which they are formed, moved, and changed. The field is a major academic discipline, and is also important for mineral and hydrocarbon extraction, knowledge about and mitigation of natural hazards, some Geotechnical engineering fields, and understanding past climates and environments.

Geological evolution

Three types of geological plate tectonic boundaries

The geology of an area evolves through time as rock units are deposited and inserted, and as deformational processes change their shapes and locations.

Rock units are first emplaced either by deposition onto the surface or by intrusion into the overlying rock. Deposition can occur when sediments settle onto the surface of the Earth and later lithify into sedimentary rock, or when volcanic material, such as volcanic ash or lava flows, blankets the surface. Igneous intrusions such as batholiths, laccoliths, dikes, and sills push upwards into the overlying rock and crystallize as they intrude.

After the initial sequence of rocks has been deposited, the rock units can be deformed and/or metamorphosed. Deformation typically occurs as a result of horizontal shortening, horizontal extension, or side-to-side (strike-slip) motion. These structural regimes broadly relate to convergent boundaries, divergent boundaries, and transform boundaries, respectively, between tectonic plates.

Historical perspective

An animation showing the movement of the continents from the separation of Pangaea until the present day

Earth is estimated to have formed 4.54 billion years ago from the solar nebula, along with the Sun and other planets. The Moon formed roughly 20 million years later. Initially molten, the outer layer of the Earth cooled, resulting in the solid crust. Outgassing and volcanic activity produced the primordial atmosphere. Condensing water vapor, most or all of which came from ice delivered by comets, produced the oceans and other water sources. The highly energetic chemistry is believed to have produced a self-replicating molecule around 4 billion years ago.

Plankton inhabit oceans, seas and lakes, and have existed in various forms for at least 2 billion years.

Continents formed, then broke up and re-formed as the surface of Earth reshaped over hundreds of millions of years, occasionally combining to make a supercontinent. Roughly 750 million years ago, the earliest known supercontinent, Rodinia, began to break apart. The continents later recombined to form Pannotia, which broke apart about 540 million years ago, then finally Pangaea, which broke apart about 180 million years ago.

During the Neoproterozoic era, freezing temperatures covered much of the Earth in glaciers and ice sheets. This hypothesis has been termed the "Snowball Earth", and it is of particular interest as it precedes the Cambrian explosion in which multicellular life forms began to proliferate about 530–540 million years ago.

Since the Cambrian explosion there have been five distinctly identifiable mass extinctions. The last mass extinction occurred some 66 million years ago, when a meteorite collision probably triggered the extinction of the non-avian dinosaurs and other large reptiles, but spared small animals such as mammals. Over the past 66 million years, mammalian life diversified.

Several million years ago, a species of small African ape gained the ability to stand upright. The subsequent advent of human life, and the development of agriculture and further civilization allowed humans to affect the Earth more rapidly than any previous life form, affecting both the nature and quantity of other organisms as well as global climate. By comparison, the Great Oxygenation Event, produced by the proliferation of algae during the Siderian period, required about 300 million years to culminate.

The present era is classified as part of a mass extinction event, the Holocene extinction event, the fastest ever to have occurred. Some, such as E. O. Wilson of Harvard University, predict that human destruction of the biosphere could cause the extinction of one-half of all species in the next 100 years. The extent of the current extinction event is still being researched, debated and calculated by biologists.

Atmosphere, climate, and weather

Blue light is scattered more than other wavelengths by the gases in the atmosphere, giving the Earth a blue halo when seen from space.

The Earth's atmosphere is a key factor in sustaining the ecosystem. The thin layer of gases that envelops the Earth is held in place by gravity. Air is mostly nitrogen and oxygen, with water vapor and much smaller amounts of carbon dioxide, argon, and other gases. Atmospheric pressure declines steadily with altitude. The ozone layer plays an important role in reducing the amount of ultraviolet (UV) radiation that reaches the surface. As DNA is readily damaged by UV light, this serves to protect life at the surface. The atmosphere also retains heat during the night, thereby reducing the daily temperature extremes.

Terrestrial weather occurs almost exclusively in the lower part of the atmosphere, and serves as a convective system for redistributing heat. Ocean currents are another important factor in determining climate, particularly the major underwater thermohaline circulation, which distributes heat energy from the equatorial oceans to the polar regions. These currents help to moderate the differences in temperature between winter and summer in the temperate zones. Also, without the redistribution of heat energy by the ocean currents and atmosphere, the tropics would be much hotter, and the polar regions much colder.

Lightning

Weather can have both beneficial and harmful effects. Extremes in weather, such as tornadoes or hurricanes and cyclones, can expend large amounts of energy along their paths, and produce devastation. Surface vegetation has evolved a dependence on the seasonal variation of the weather, and sudden changes lasting only a few years can have a dramatic effect, both on the vegetation and on the animals which depend on its growth for their food.

Climate is a measure of the long-term trends in the weather. Various factors are known to influence the climate, including ocean currents, surface albedo, greenhouse gases, variations in the solar luminosity, and changes to the Earth's orbit. Based on historical and geological records, the Earth is known to have undergone drastic climate changes in the past, including ice ages.

A tornado in central Oklahoma

The climate of a region depends on a number of factors, especially latitude. A latitudinal band of the surface with similar climatic attributes forms a climate region. There are a number of such regions, ranging from the tropical climate at the equator to the polar climate in the northern and southern extremes. Weather is also influenced by the seasons, which result from the Earth's axis being tilted relative to its orbital plane. Thus, at any given time during the summer or winter, one part of the Earth is more directly exposed to the rays of the sun. This exposure alternates as the Earth revolves in its orbit. At any given time, regardless of season, the Northern and Southern Hemispheres experience opposite seasons.

Weather is a chaotic system that is readily modified by small changes to the environment, so accurate weather forecasting is limited to only a few days. Overall, two things are happening worldwide: (1) temperature is increasing on the average; and (2) regional climates have been undergoing noticeable changes.

Water on the Earth

The Iguazu Falls on the border between Brazil and Argentina

Water is a chemical substance composed of hydrogen and oxygen (H2O) and is vital for all known forms of life. In typical usage, water refers only to its liquid form or state, but the substance also has a solid state, ice, and a gaseous state, water vapor or steam. Water covers 71% of the Earth's surface. On Earth, it is found mostly in oceans and other large bodies of water, with 1.6% of water below ground in aquifers and 0.001% in the air as vapor, clouds, and precipitation. Oceans hold 97% of surface water; glaciers and polar ice caps, 2.4%; and other land surface water such as rivers, lakes, and ponds, 0.6%. Additionally, a minute amount of the Earth's water is contained within biological bodies and manufactured products.

Oceans

A view of the Atlantic Ocean from Leblon, Rio de Janeiro

An ocean is a major body of saline water, and a principal component of the hydrosphere. Approximately 71% of the Earth's surface (an area of some 361 million square kilometers) is covered by ocean, a continuous body of water that is customarily divided into several principal oceans and smaller seas. More than half of this area is over 3,000 meters (9,800 feet) deep. Average oceanic salinity is around 35 parts per thousand (ppt) (3.5%), and nearly all seawater has a salinity in the range of 30 to 38 ppt. Though generally recognized as several 'separate' oceans, these waters comprise one global, interconnected body of salt water often referred to as the World Ocean or global ocean. This concept of a global ocean as a continuous body of water with relatively free interchange among its parts is of fundamental importance to oceanography.

The major oceanic divisions are defined in part by the continents, various archipelagos, and other criteria: these divisions are (in descending order of size) the Pacific Ocean, the Atlantic Ocean, the Indian Ocean, the Southern Ocean, and the Arctic Ocean. Smaller regions of the oceans are called seas, gulfs, bays and other names. There are also salt lakes, which are smaller bodies of landlocked saltwater that are not interconnected with the World Ocean. Two notable examples of salt lakes are the Aral Sea and the Great Salt Lake.

Lakes

Lake Mapourika, New Zealand

A lake (from the Latin word lacus) is a terrain feature (or physical feature), a body of liquid on the surface of a world that is localized to the bottom of a basin (another type of landform or terrain feature; that is, it is not global) and moves slowly if it moves at all. On Earth, a body of water is considered a lake when it is inland, not part of the ocean, is larger and deeper than a pond, and is fed by a river. The only world other than Earth known to harbor lakes is Titan, Saturn's largest moon, which has lakes of ethane, most likely mixed with methane. It is not known if Titan's lakes are fed by rivers, though Titan's surface is carved by numerous river beds. Natural lakes on Earth are generally found in mountainous areas, rift zones, and areas with ongoing or recent glaciation. Other lakes are found in endorheic basins or along the courses of mature rivers. In some parts of the world, there are many lakes because of chaotic drainage patterns left over from the last ice age. All lakes are temporary over geologic time scales, as they will slowly fill in with sediments or spill out of the basin containing them.

Ponds

The Westborough Reservoir (Mill Pond) in Westborough, Massachusetts

A pond is a body of standing water, either natural or human-made, that is usually smaller than a lake. A wide variety of human-made bodies of water are classified as ponds, including water gardens designed for aesthetic ornamentation, fish ponds designed for commercial fish breeding, and solar ponds designed to store thermal energy. Ponds and lakes are distinguished from streams by current speed. While currents in streams are easily observed, ponds and lakes possess thermally driven micro-currents and moderate wind-driven currents. These features distinguish a pond from many other aquatic terrain features, such as stream pools and tide pools.

Rivers

The Nile river in Cairo, Egypt's capital city

A river is a natural watercourse, usually freshwater, flowing towards an ocean, a lake, a sea, or another river. In a few cases, a river simply flows into the ground or dries up completely before reaching another body of water. Small rivers may also be called by several other names, including stream, creek, brook, rivulet, and rill; there is no general rule that defines what can be called a river. Many names for small rivers are specific to geographic location; one example is "burn" in Scotland and north-east England. Sometimes a river is said to be larger than a creek, but this is not always the case, owing to vagueness in the language. A river is part of the hydrological cycle. Water within a river is generally collected from precipitation through surface runoff, groundwater recharge, springs, and the release of stored water in natural ice and snowpacks (i.e., from glaciers).

Streams

A rocky stream in Hawaii

A stream is a flowing body of water with a current, confined within a bed and stream banks. In the United States, a stream is classified as a watercourse less than 60 feet (18 metres) wide. Streams are important as conduits in the water cycle, instruments in groundwater recharge, and they serve as corridors for fish and wildlife migration. The biological habitat in the immediate vicinity of a stream is called a riparian zone. Given the status of the ongoing Holocene extinction, streams play an important corridor role in connecting fragmented habitats and thus in conserving biodiversity. The study of streams and waterways in general involves many branches of inter-disciplinary natural science and engineering, including hydrology, fluvial geomorphology, aquatic ecology, fish biology, riparian ecology, and others.

Ecosystems

Loch Lomond in Scotland forms a relatively isolated ecosystem. The fish community of this lake has remained unchanged over a very long period of time.
The lush green Aravalli Mountain Range in Rajasthan, India, a state otherwise known for the Thar Desert
An aerial view of a human ecosystem. Pictured is the city of Chicago.

Ecosystems are composed of a variety of biotic and abiotic components that function in an interrelated way. The structure and composition are determined by various environmental factors that are interrelated. Variations of these factors will initiate dynamic modifications to the ecosystem. Some of the more important components are soil, atmosphere, radiation from the sun, water, and living organisms.

Peñas Blancas, part of the Bosawás Biosphere Reserve. Located northeast of the city of Jinotega in Northeastern Nicaragua

Central to the ecosystem concept is the idea that living organisms interact with every other element in their local environment. Eugene Odum, a founder of ecology, stated: "Any unit that includes all of the organisms (i.e., the "community") in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e., exchange of materials between living and nonliving parts) within the system is an ecosystem." Within the ecosystem, species are connected to and dependent upon one another in the food chain, and exchange energy and matter between themselves as well as with their environment. The human ecosystem concept is based on the human/nature dichotomy and the idea that all species are ecologically dependent on each other, as well as on the abiotic constituents of their biotope.

A smaller unit of size is called a microecosystem. For example, a microecosystem can be a stone and all the life under it. A macroecosystem might involve a whole ecoregion, with its drainage basin.

Wilderness

Old growth European Beech forest in Biogradska Gora National Park, Montenegro

Wilderness is generally defined as areas that have not been significantly modified by human activity. Wilderness areas can be found in preserves, estates, farms, conservation preserves, ranches, national forests, national parks, and even in urban areas along rivers, gulches, or otherwise undeveloped areas. Wilderness areas and protected parks are considered important for the survival of certain species, ecological studies, conservation, and solitude. Some nature writers believe wilderness areas are vital for the human spirit and creativity, and some ecologists consider wilderness areas to be an integral part of the Earth's self-sustaining natural ecosystem (the biosphere). They may also preserve historic genetic traits and provide habitat for wild flora and fauna that may be difficult or impossible to recreate in zoos, arboretums, or laboratories.

Life

Female mallard and ducklings – reproduction is essential for continuing life.

Although there is no universal agreement on the definition of life, scientists generally accept that the biological manifestation of life is characterized by organization, metabolism, growth, adaptation, response to stimuli, and reproduction. Life may also be said to be simply the characteristic state of organisms.

Properties common to terrestrial organisms (plants, animals, fungi, protists, archaea, and bacteria) are that they are cellular, carbon-and-water-based with complex organization, have a metabolism, and possess the capacity to grow, respond to stimuli, and reproduce. An entity with these properties is generally considered life. However, not every definition of life considers all of these properties to be essential. Human-made analogs of life may also be considered to be life.

The biosphere is the part of Earth's outer shell—including land, surface rocks, water, air and the atmosphere—within which life occurs, and which biotic processes in turn alter or transform. From the broadest geophysiological point of view, the biosphere is the global ecological system integrating all living beings and their relationships, including their interaction with the elements of the lithosphere (rocks), hydrosphere (water), and atmosphere (air). The entire Earth contains over 75 billion tons (150 trillion pounds or about 6.8×10¹³ kilograms) of biomass (life), which lives within various environments within the biosphere.

Over nine-tenths of the total biomass on Earth is plant life, on which animal life depends very heavily for its existence. More than 2 million species of plant and animal life have been identified to date, and estimates of the actual number of existing species range from several million to well over 50 million. The number of individual species of life is constantly in some degree of flux, with new species appearing and others ceasing to exist on a continual basis. The total number of species is in rapid decline.

Evolution

An area of the Amazon Rainforest shared between Colombia and Brazil. The tropical rainforests of South America contain the largest diversity of species on Earth.

The origin of life on Earth is not well understood, but it is known to have occurred at least 3.5 billion years ago, during the Hadean or Archean eons, on a primordial Earth that had a substantially different environment than is found at present. These life forms possessed the basic traits of self-replication and inheritable traits. Once life had appeared, the process of evolution by natural selection resulted in the development of ever-more diverse life forms.

Species that were unable to adapt to the changing environment and competition from other life forms became extinct. However, the fossil record retains evidence of many of these older species. Current fossil and DNA evidence shows that all existing species can trace a continual ancestry back to the first primitive life forms.

When basic forms of plant life developed the process of photosynthesis, the sun's energy could be harvested to create conditions which allowed for more complex life forms. The resultant oxygen accumulated in the atmosphere and gave rise to the ozone layer. The incorporation of smaller cells within larger ones resulted in the development of yet more complex cells called eukaryotes. Cells within colonies became increasingly specialized, resulting in true multicellular organisms. With the ozone layer absorbing harmful ultraviolet radiation, life colonized the surface of Earth.

Microbes

A microscopic mite Lorryia formosa

The first forms of life to develop on the Earth were microbes, and they remained the only forms of life until about a billion years ago, when multi-cellular organisms began to appear. Microorganisms are single-celled organisms that are generally microscopic and too small to be seen with the naked eye. They include Bacteria, Fungi, Archaea, and Protista.

These life forms are found in almost every location on the Earth where there is liquid water, including in the Earth's interior. Their reproduction is both rapid and profuse. The combination of a high mutation rate and a horizontal gene transfer ability makes them highly adaptable, and able to survive in new environments, including outer space. They form an essential part of the planetary ecosystem. However, some microorganisms are pathogenic and can pose health risks to other organisms.

Plants and animals

A selection of diverse plant species
A selection of diverse animal species

Originally Aristotle divided all living things between plants, which generally do not move fast enough for humans to notice, and animals. In Linnaeus' system, these became the kingdoms Vegetabilia (later Plantae) and Animalia. Since then, it has become clear that the Plantae as originally defined included several unrelated groups, and the fungi and several groups of algae were removed to new kingdoms. However, these are still often considered plants in many contexts. Bacterial life is sometimes included in flora, and some classifications use the term bacterial flora separately from plant flora.

Among the many ways of classifying plants are by regional floras, which, depending on the purpose of study, can also include fossil flora, remnants of plant life from a previous era. People in many regions and countries take great pride in their individual arrays of characteristic flora, which can vary widely across the globe due to differences in climate and terrain.

Regional floras commonly are divided into categories such as native flora and agricultural and garden flora, the latter of which are intentionally grown and cultivated. Some types of "native flora" were actually introduced centuries ago by people migrating from one region or continent to another, and have become an integral part of the native, or natural, flora of the place to which they were introduced. This is an example of how human interaction with nature can blur the boundary of what is considered nature.

Another category of plant has historically been carved out for weeds. Though the term has fallen into disfavor among botanists as a formal way to categorize "useless" plants, the informal use of the word "weeds" to describe those plants that are deemed worthy of elimination is illustrative of the general tendency of people and societies to seek to alter or shape the course of nature. Similarly, animals are often categorized in ways such as domestic, farm animals, wild animals, pests, etc. according to their relationship to human life.

Animals as a category have several characteristics that generally set them apart from other living things. Animals are eukaryotic and usually multicellular (although see Myxozoa), which separates them from bacteria, archaea, and most protists. They are heterotrophic, generally digesting food in an internal chamber, which separates them from plants and algae. They are also distinguished from plants, algae, and fungi by lacking cell walls.

With a few exceptions—most notably the two phyla consisting of sponges and placozoans—animals have bodies that are differentiated into tissues. These include muscles, which are able to contract and control locomotion, and a nervous system, which sends and processes signals. There is also typically an internal digestive chamber. The eukaryotic cells possessed by all animals are surrounded by a characteristic extracellular matrix composed of collagen and elastic glycoproteins. This may be calcified to form structures like shells, bones, and spicules, a framework upon which cells can move about and be reorganized during development and maturation, and which supports the complex anatomy required for mobility.

Human interrelationship

Despite their natural beauty, the secluded valleys along the Na Pali Coast in Hawaii are heavily modified by introduced invasive species such as She-oak.

Human impact

Although humans comprise only a minuscule proportion of the total living biomass on Earth, the human effect on nature is disproportionately large. Because of the extent of human influence, the boundaries between what humans regard as nature and "made environments" are not clear cut except at the extremes. Even at the extremes, the amount of natural environment that is free of discernible human influence is diminishing at an increasingly rapid pace. A 2020 study published in Nature found that anthropogenic mass (human-made materials) outweighs all living biomass on Earth, with plastic alone exceeding the mass of all land and marine animals combined. And according to a 2021 study published in Frontiers in Forests and Global Change, only about 3% of the planet's terrestrial surface is ecologically and faunally intact, with a low human footprint and healthy populations of native animal species. Philip Cafaro, professor of philosophy at the School of Global Environmental Sustainability at Colorado State University, wrote in 2022 that "the cause of global biodiversity loss is clear: other species are being displaced by a rapidly growing human economy."

The development of technology by the human race has allowed the greater exploitation of natural resources and has helped to alleviate some of the risk from natural hazards. In spite of this progress, however, the fate of human civilization remains closely linked to changes in the environment. There exists a highly complex feedback loop between the use of advanced technology and changes to the environment that is only slowly becoming understood. Human-made threats to the Earth's natural environment include pollution, deforestation, and disasters such as oil spills. Humans have contributed to the extinction of many plants and animals, with roughly 1 million species threatened with extinction within decades. The loss of biodiversity and ecosystem functions over the last half century has reduced the extent to which nature can contribute to human quality of life, and continued declines could pose a major threat to the continued existence of human civilization, unless a rapid course correction is made. The value of natural resources to human society is not reflected in market prices, because natural resources are mostly available free of charge. This distorts the market pricing of natural resources and leads to underinvestment in natural assets. The annual global cost of public subsidies that damage nature is conservatively estimated at $4–6 trillion. Institutional protections of these natural goods, such as the oceans and rainforests, are lacking, and governments have not prevented these economic externalities.

Humans employ nature for both leisure and economic activities. The acquisition of natural resources for industrial use remains a sizable component of the world's economic system. Some activities, such as hunting and fishing, are used for both sustenance and leisure, often by different people. Agriculture was first adopted around the 9th millennium BCE. Ranging from food production to energy, nature influences economic wealth.

Although early humans gathered uncultivated plant materials for food and employed the medicinal properties of vegetation for healing, most modern human use of plants is through agriculture. The clearance of large tracts of land for crop growth has led to a significant reduction in the amount of available forest and wetland, resulting in the loss of habitat for many plant and animal species as well as increased erosion.

Aesthetics and beauty

Aesthetically pleasing flowers

Beauty in nature has historically been a prevalent theme in art and books, filling large sections of libraries and bookstores. That nature has been depicted and celebrated by so much art, photography, poetry, and other literature shows the strength with which many people associate nature and beauty. Why this association exists, and what it consists of, is studied by the branch of philosophy called aesthetics. Beyond certain basic characteristics that many philosophers agree help explain what is seen as beautiful, opinions are virtually endless. Nature and wildness have been important subjects in various eras of world history. An early tradition of landscape art began in China during the Tang Dynasty (618–907). The tradition of representing nature as it is became one of the aims of Chinese painting and was a significant influence in Asian art.

Although natural wonders are celebrated in the Psalms and the Book of Job, wilderness portrayals in art became more prevalent in the 1800s, especially in the works of the Romantic movement. British artists John Constable and J. M. W. Turner turned their attention to capturing the beauty of the natural world in their paintings. Before that, paintings had been primarily of religious scenes or of human beings. William Wordsworth's poetry described the wonder of the natural world, which had formerly been viewed as a threatening place. Increasingly, the valuing of nature became an aspect of Western culture. This artistic movement also coincided with the Transcendentalist movement in the Western world. A common classical idea of beautiful art involves the word mimesis, the imitation of nature. Another idea about beauty in nature is that the perfect is implied through perfect mathematical forms and, more generally, by patterns in nature. As David Rothenberg writes, "The beautiful is the root of science and the goal of art, the highest possibility that humanity can ever hope to see".

Matter and energy

The first few hydrogen atom electron orbitals shown as cross-sections with color-coded probability density

Some fields of science see nature as matter in motion, obeying certain laws of nature which science seeks to understand. For this reason the most fundamental science is generally understood to be "physics"—the name for which is still recognizable as meaning that it is the "study of nature".

Matter is commonly defined as the substance of which physical objects are composed. It constitutes the observable universe. The visible components of the universe are now believed to compose only 4.9 percent of its total mass-energy. The remainder is believed to consist of 26.8 percent cold dark matter and 68.3 percent dark energy. The exact arrangement of these components is still unknown and is under intensive investigation by physicists.

The behaviour of matter and energy throughout the observable universe appears to follow well-defined physical laws. These laws have been employed to produce cosmological models that successfully explain the structure and the evolution of the universe we can observe. The mathematical expressions of the laws of physics employ a set of twenty physical constants that appear to be static across the observable universe. The values of these constants have been carefully measured, but the reason for their specific values remains a mystery.

Beyond Earth

Planets of the Solar System (sizes to scale, distances and illumination not to scale)
NGC 4414 is a spiral galaxy in the constellation Coma Berenices about 56,000 light-years in diameter and approximately 60 million light-years from Earth.

Outer space, also simply called space, refers to the relatively empty regions of the Universe outside the atmospheres of celestial bodies. The term is used to distinguish outer space from airspace (and terrestrial locations). There is no discrete boundary between Earth's atmosphere and space, as the atmosphere gradually attenuates with increasing altitude. Outer space within the Solar System is called interplanetary space, which passes over into interstellar space at what is known as the heliopause.

Outer space is sparsely filled with several dozen types of organic molecules discovered to date by microwave spectroscopy, blackbody radiation left over from the Big Bang and the origin of the universe, and cosmic rays, which include ionized atomic nuclei and various subatomic particles. There is also some gas, plasma and dust, and small meteors. Additionally, there are signs of human life in outer space today, such as material left over from previous crewed and uncrewed launches which are a potential hazard to spacecraft. Some of this debris re-enters the atmosphere periodically.

Although Earth is the only body within the Solar System known to support life, evidence suggests that in the distant past the planet Mars possessed bodies of liquid water on the surface. For a brief period in Mars' history, it may have also been capable of forming life. At present though, most of the water remaining on Mars is frozen. If life exists at all on Mars, it is most likely to be located underground where liquid water can still exist.

Conditions on the other terrestrial planets, Mercury and Venus, appear to be too harsh to support life as we know it. But it has been conjectured that Europa, the fourth-largest moon of Jupiter, may possess a sub-surface ocean of liquid water and could potentially host life.

Astronomers have started to discover extrasolar Earth analogs – planets that lie in the habitable zone of space surrounding a star, and therefore could possibly host life as we know it.

Functionalism (philosophy of mind)

In the philosophy of mind, functionalism is the thesis that each and every mental state (for example, the state of having a belief, of having a desire, or of being in pain) is constituted solely by its functional role, which means its causal relation to other mental states, sensory inputs, and behavioral outputs. Functionalism developed largely as an alternative to the identity theory of mind and behaviorism.

Functionalism posits a theoretical level between physical implementation and behavioral output. It therefore differs from its predecessors, Cartesian dualism (which advocates independent mental and physical substances) and Skinnerian behaviorism and physicalism (which admit only physical substances), because it is concerned only with the effective functions of the brain, through its organization or its "software programs".

Since a mental state is identified by a functional role, it is said to be realized on multiple levels; in other words, it is able to be manifested in various systems, even perhaps computers, so long as the system performs the appropriate functions. While a computer's program performs the functions via computations on inputs to give outputs, implemented via its electronic substrate, a brain performs the functions via its biological operation and stimulus responses.

Multiple realizability

An important part of some arguments for functionalism is the idea of multiple realizability. According to standard functionalist theories, a mental state corresponds to a functional role. It is like a valve; a valve can be made of plastic or metal or other material, as long as it performs the proper function (controlling the flow of a liquid or gas). Similarly, functionalists argue, a mental state can be explained without considering the state of the underlying physical medium (such as the brain) that realizes it; one need only consider higher-level function or functions. Because a mental state is not limited to a particular medium, it can be realized in multiple ways, including, theoretically, with non-biological systems, such as computers. A silicon-based machine could have the same sort of mental life that a human being has, provided that its structure realized the proper functional roles.
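The valve analogy can be made concrete in code. The following Python sketch is illustrative only (the names Valve, PlasticValve, MetalValve, and throttle are invented for this example, not drawn from the literature): a single functional role, controlling flow, is realized by two different "materials", and the code that relies on the role never needs to know which realizer it has.

from typing import Protocol

class Valve(Protocol):
    # Anything that plays the valve role: it can set the rate of a flow.
    def set_flow(self, fraction: float) -> None: ...

class PlasticValve:
    def set_flow(self, fraction: float) -> None:
        print(f"plastic valve set to {fraction:.0%}")

class MetalValve:
    def set_flow(self, fraction: float) -> None:
        print(f"metal valve set to {fraction:.0%}")

def throttle(valve: Valve) -> None:
    # Only the causal role (set_flow) matters here, not the material that realizes it.
    valve.set_flow(0.5)

throttle(PlasticValve())  # both realizers satisfy the same functional role
throttle(MetalValve())

On the functionalist picture, a mental-state term works like the Valve protocol: it specifies a role, and brains, silicon, or other substrates are candidate realizers of that role.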

However, there have been some functionalist theories that combine with the identity theory of mind, which deny multiple realizability. Such Functional Specification Theories (FSTs) (Levin, § 3.4), as they are called, were most notably developed by David Lewis and David Malet Armstrong. According to FSTs, mental states are the particular "realizers" of the functional role, not the functional role itself. The mental state of belief, for example, just is whatever brain or neurological process realizes the appropriate belief function. Thus, unlike standard versions of functionalism (often called Functional State Identity Theories), FSTs do not allow for the multiple realizability of mental states, because the fact that mental states are realized by brain states is essential. What often drives this view is the belief that if we were to encounter an alien race with a cognitive system composed of significantly different material from humans' (e.g., silicon-based) that nevertheless performed the same functions as human mental states (for example, its members tend to yell "Ouch!" when poked with sharp objects), we would say that their type of mental state might be similar to ours but is not the same. For some, this may be a disadvantage to FSTs. Indeed, one of Hilary Putnam's arguments for his version of functionalism relied on the intuition that such alien creatures would have the same mental states as humans do, and that the multiple realizability of standard functionalism makes it a better theory of mind.

Types

Machine-state functionalism

Artistic representation of a Turing machine

The broad position of "functionalism" can be articulated in many different varieties. The first formulation of a functionalist theory of mind was put forth by Hilary Putnam in the 1960s. This formulation, which is now called machine-state functionalism, or just machine functionalism, was inspired by the analogies which Putnam and others noted between the mind and the theoretical "machines" or computers developed by Alan Turing (called Turing machines), which are capable of computing any given algorithm. Putnam himself, by the mid-1970s, had begun questioning this position. The beginning of his opposition to machine-state functionalism can be read about in his Twin Earth thought experiment.

In non-technical terms, a Turing machine is not a physical object, but rather an abstract machine built upon a mathematical model. Typically, a Turing machine has a horizontal tape divided into rectangular cells arranged from left to right. The tape itself is infinite in length, and each cell may contain a symbol. The symbols used for any given "machine" can vary. The machine has a read-write head that scans cells and moves left and right. The action of the machine is determined by the symbol in the cell being scanned and a table of transition rules that serve as the machine's programming. Because of the infinite tape, a traditional Turing machine has an infinite amount of time to compute any particular function or any number of functions. In the example below, each cell is either blank (B) or has a 1 written on it. These are the inputs to the machine. The possible outputs are:

  • Halt: Do nothing.
  • R: move one square to the right.
  • L: move one square to the left.
  • B: erase whatever is on the square.
  • 1: erase whatever is on the square and print a '1'.

An extremely simple example is a Turing machine that writes out the sequence '111' after scanning three blank squares and then stops, as specified by the following machine table:


     State One                  State Two                  State Three
B    write 1; stay in state 1   write 1; stay in state 2   write 1; stay in state 3
1    go right; go to state 2    go right; go to state 3    [halt]

This table states that if the machine is in state one and scans a blank square (B), it will print a 1 and remain in state one. If it is in state one and reads a 1, it will move one square to the right and also go into state two. If it is in state two and reads a B, it will print a 1 and stay in state two. If it is in state two and reads a 1, it will move one square to the right and go into state three. If it is in state three and reads a B, it prints a 1 and remains in state three. Finally, if it is in state three and reads a 1, it will halt.
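To make the table concrete, here is a minimal simulator for this three-state machine, written in Python as an illustration (the dictionary encoding and the names TABLE and run are inventions of this sketch, not part of the original article):

# Transition table: (state, scanned symbol) -> (symbol to write, head move, next state).
# "S" means the head stays in place; a next state of None means halt.
TABLE = {
    (1, "B"): ("1", "S", 1),    # state one, blank: write 1, stay in state one
    (1, "1"): ("1", "R", 2),    # state one, 1: move right, go to state two
    (2, "B"): ("1", "S", 2),    # state two, blank: write 1, stay in state two
    (2, "1"): ("1", "R", 3),    # state two, 1: move right, go to state three
    (3, "B"): ("1", "S", 3),    # state three, blank: write 1, stay in state three
    (3, "1"): ("1", "S", None), # state three, 1: halt
}

def run(max_steps=100):
    # The tape is a dictionary from cell index to symbol; absent cells read as blank.
    tape, head, state = {}, 0, 1
    for _ in range(max_steps):
        symbol = tape.get(head, "B")
        write, move, state = TABLE[(state, symbol)]
        tape[head] = write
        if move == "R":
            head += 1
        elif move == "L":
            head -= 1
        if state is None:  # the machine has halted
            break
    return "".join(tape.get(i, "B") for i in range(min(tape), max(tape) + 1))

print(run())  # prints "111"

Started on blank squares, the machine writes a 1 in each of three cells and halts, leaving '111' on the tape, exactly as the table specifies.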

The essential point to consider here is the nature of the states of the Turing machine. Each state can be defined exclusively in terms of its relations to the other states as well as inputs and outputs. State one, for example, is simply the state in which the machine, if it reads a B, writes a 1 and stays in that state, and in which, if it reads a 1, it moves one square to the right and goes into a different state. This is the functional definition of state one; it is its causal role in the overall system. The details of how it accomplishes what it accomplishes and of its material constitution are completely irrelevant.

The above point is critical to an understanding of machine-state functionalism. Since Turing machines are not required to be physical systems, "anything capable of going through a succession of states in time can be a Turing machine". Because biological organisms “go through a succession of states in time”, any such organisms could also be equivalent to Turing machines.

According to machine-state functionalism, the nature of a mental state is just like the nature of the Turing machine states described above. If one can show the rational functioning and computing skills of these machines to be comparable to the rational functioning and computing skills of human beings, it follows that Turing machine behavior closely resembles that of human beings. Therefore, it is not a particular physical-chemical composition responsible for the particular machine or mental state, it is the programming rules which produce the effects that are responsible. To put it another way, any rational preference is due to the rules being followed, not to the specific material composition of the agent.

Psycho-functionalism

A second form of functionalism is based on the rejection of behaviorist theories in psychology and their replacement with empirical cognitive models of the mind. This view is most closely associated with Jerry Fodor and Zenon Pylyshyn and has been labeled psycho-functionalism.

The fundamental idea of psycho-functionalism is that psychology is an irreducibly complex science and that the terms that we use to describe the entities and properties of the mind in our best psychological theories cannot be redefined in terms of simple behavioral dispositions, and further, that such a redefinition would not be desirable or salient were it achievable. Psychofunctionalists view psychology as employing the same sorts of irreducibly teleological or purposive explanations as the biological sciences. Thus, for example, the function or role of the heart is to pump blood, that of the kidney is to filter it and to maintain certain chemical balances, and so on; this is what matters for the purposes of scientific explanation and taxonomy. There may be an infinite variety of physical realizations for all of the mechanisms, but what is important is only their role in the overall biological theory. In an analogous manner, the role of mental states, such as belief and desire, is determined by the functional or causal role that is designated for them within our best scientific psychological theory. If some mental state which is postulated by folk psychology (e.g. hysteria) is determined not to have any fundamental role in cognitive psychological explanation, then that particular state may be considered not to exist. On the other hand, if it turns out that there are states which theoretical cognitive psychology posits as necessary for explanation of human behavior but which are not foreseen by ordinary folk psychological language, then these entities or states exist.

Analytic functionalism

A third form of functionalism is concerned with the meanings of theoretical terms in general. This view is most closely associated with David Lewis and is often referred to as analytic functionalism or conceptual functionalism. The basic idea of analytic functionalism is that theoretical terms are implicitly defined by the theories in whose formulation they occur and not by intrinsic properties of the phonemes they comprise. In the case of ordinary language terms, such as "belief", "desire", or "hunger", the idea is that such terms get their meanings from our common-sense "folk psychological" theories about them, but that such conceptualizations are not sufficient to withstand the rigor imposed by materialistic theories of reality and causality. Such terms are subject to conceptual analyses which take something like the following form:

Mental state M is the state that is caused by P and causes Q.

For example, the state of pain is caused by sitting on a tack and causes loud cries, and higher order mental states of anger and resentment directed at the careless person who left a tack lying around. These sorts of functional definitions in terms of causal roles are claimed to be analytic and a priori truths about the submental states and the (largely fictitious) propositional attitudes they describe. Hence, its proponents are known as analytic or conceptual functionalists. The essential difference between analytic and psychofunctionalism is that the latter emphasizes the importance of laboratory observation and experimentation in the determination of which mental state terms and concepts are genuine and which functional identifications may be considered to be genuinely contingent and a posteriori identities. The former, on the other hand, claims that such identities are necessary and not subject to empirical scientific investigation.

Homuncular functionalism

Homuncular functionalism was developed largely by Daniel Dennett and has been advocated by William Lycan. It arose in response to the challenges that Ned Block's China Brain (a.k.a. Chinese nation) and John Searle's Chinese room thought experiments presented for the more traditional forms of functionalism (see below under "Criticism"). In attempting to overcome the conceptual difficulties that arose from the idea of a nation full of Chinese people wired together, each person working as a single neuron to produce in the wired-together whole the functional mental states of an individual mind, many functionalists simply bit the bullet, so to speak, and argued that such a Chinese nation would indeed possess all of the qualitative and intentional properties of a mind; i.e. it would become a sort of systemic or collective mind with propositional attitudes and other mental characteristics. Whatever the worth of this latter hypothesis, it was immediately objected that it entailed an unacceptable sort of mind-mind supervenience: the systemic mind which somehow emerged at the higher-level must necessarily supervene on the individual minds of each individual member of the Chinese nation, to stick to Block's formulation. But this would seem to put into serious doubt, if not directly contradict, the fundamental idea of the supervenience thesis: there can be no change in the mental realm without some change in the underlying physical substratum. This can be easily seen if we label the set of mental facts that occur at the higher-level M1 and the set of mental facts that occur at the lower-level M2. Then M1 and M2 both supervene on the physical facts, but a change of M1 to M2 (say) could occur without any change in these facts.

Since mind-mind supervenience seemed to have become acceptable in functionalist circles, it seemed to some that the only way to resolve the puzzle was to postulate the existence of an entire hierarchical series of mind levels (analogous to homunculi) which became less and less sophisticated in terms of functional organization and physical composition all the way down to the level of the physico-mechanical neuron or group of neurons. The homunculi at each level, on this view, have authentic mental properties but become simpler and less intelligent as one works one's way down the hierarchy.

Mechanistic functionalism

Mechanistic functionalism, originally formulated and defended by Gualtiero Piccinini and Carl Gillett independently, augments previous functionalist accounts of mental states by maintaining that any psychological explanation must be rendered in mechanistic terms. That is, instead of mental states receiving a purely functional explanation in terms of their relations to other mental states, like those listed above, functions are seen as playing only one part of the explanation of a given mental state, with the other part played by structures.

A mechanistic explanation involves decomposing a given system, in this case a mental system, into its component physical parts, their activities or functions, and their combined organizational relations. On this account the mind remains a functional system, but one that is understood in mechanistic terms. This account remains a sort of functionalism because functional relations are still essential to mental states, but it is mechanistic because the functional relations are always manifestations of concrete structures—albeit structures understood at a certain level of abstraction. Functions are individuated and explained either in terms of the contributions they make to the given system or in teleological terms. If the functions are understood in teleological terms, then they may be characterized either etiologically or non-etiologically.

Mechanistic functionalism leads functionalism away from the traditional functionalist autonomy of psychology from neuroscience and towards integrating psychology and neuroscience. By providing an applicable framework for merging traditional psychological models with neurological data, mechanistic functionalism may be understood as reconciling the functionalist theory of mind with neurological accounts of how the brain actually works. This is due to the fact that mechanistic explanations of function attempt to provide an account of how functional states (mental states) are physically realized through neurological mechanisms.

Physicalism

There is much confusion about the sort of relationship that is claimed to exist (or not exist) between the general thesis of functionalism and physicalism. It has often been claimed that functionalism somehow "disproves" or falsifies physicalism tout court (i.e. without further explanation or description). On the other hand, most philosophers of mind who are functionalists claim to be physicalists—indeed, some of them, such as David Lewis, have claimed to be strict reductionist-type physicalists.

Functionalism is fundamentally what Ned Block has called a broadly metaphysical thesis as opposed to a narrowly ontological one. That is, functionalism is not so much concerned with what there is as with what it is that characterizes a certain type of mental state, e.g. pain, as the type of state that it is. Previous attempts to answer the mind-body problem have all tried to resolve it by answering both questions: dualism says there are two substances and that mental states are characterized by their immateriality; behaviorism claimed that there was one substance and that mental states were behavioral dispositions; physicalism asserted the existence of just one substance and characterized the mental states as physical states (as in "pain = C-fiber firings").

On this understanding, type physicalism can be seen as incompatible with functionalism, since it claims that what characterizes mental states (e.g. pain) is that they are physical in nature, while functionalism says that what characterizes pain is its functional/causal role and its relationship with yelling "ouch", etc. However, any weaker sort of physicalism which makes the simple ontological claim that everything that exists is made up of physical matter is perfectly compatible with functionalism. Moreover, most functionalists who are physicalists require that the properties that are quantified over in functional definitions be physical properties. Hence, they are physicalists, even though the general thesis of functionalism itself does not commit them to being so.

In the case of David Lewis, there is a distinction between the concepts of "having pain" (a rigid designator true of the same things in all possible worlds) and just "pain" (a non-rigid designator). Pain, for Lewis, stands for something like the definite description "the state with the causal role x". The referent of the description in humans is a type of brain state to be determined by science. The referent among silicon-based life forms is something else. The referent of the description among angels is some immaterial, non-physical state. For Lewis, therefore, local type-physical reductions are possible and compatible with conceptual functionalism. (See also Lewis's mad pain and Martian pain.) There seems to be some confusion between types and tokens that needs to be cleared up in the functionalist analysis.

Criticism

China brain

Ned Block argues against the functionalist proposal of multiple realizability, where hardware implementation is irrelevant because only the functional level is important. The "China brain" or "Chinese nation" thought experiment involves supposing that the entire nation of China systematically organizes itself to operate just like a brain, with each individual acting as a neuron. (The tremendous difference in speed of operation of each unit is not addressed.) According to functionalism, so long as the people are performing the proper functional roles, with the proper causal relations between inputs and outputs, the system will be a real mind, with mental states, consciousness, and so on. However, Block argues, this is patently absurd, so there must be something wrong with the thesis of functionalism since it would allow this to be a legitimate description of a mind.

Some functionalists believe that the China brain would have qualia, but that owing to its sheer size it is impossible for us to imagine China being conscious. Indeed, it may be the case that we are constrained by our theory of mind and will never be able to understand what Chinese-nation consciousness is like. Therefore, if functionalism is true, qualia will either exist across all hardware or will not exist at all but be illusory.

The Chinese room

The Chinese room argument by John Searle is a direct attack on the claim that thought can be represented as a set of functions. The thought experiment asserts that it is possible to mimic intelligent action without any interpretation or understanding through the use of a purely functional system. In short, Searle describes a person who only speaks English who is in a room with only Chinese symbols in baskets and a rule book in English for moving the symbols around. The person is then ordered by people outside of the room to follow the rule book for sending certain symbols out of the room when given certain symbols. Further suppose that the people outside of the room are Chinese speakers and are communicating with the person inside via the Chinese symbols. According to Searle, it would be absurd to claim that the English speaker inside knows Chinese simply based on these syntactic processes. This thought experiment attempts to show that systems which operate merely on syntactic processes (inputs and outputs, based on algorithms) cannot realize any semantics (meaning) or intentionality (aboutness). Thus, Searle attacks the idea that thought can be equated with following a set of syntactic rules; that is, functionalism is an insufficient theory of the mind.

In connection with Block's Chinese nation, many functionalists responded to Searle's thought experiment by suggesting that there was a form of mental activity going on at a higher level than the man in the Chinese room could comprehend (the so-called "system reply"); that is, the system does know Chinese. In response, Searle suggested the man in the room could simply memorize the rules and symbol relations. Again, though he would convincingly mimic communication, he would be aware only of the symbols and rules, not of the meaning behind them.

Inverted spectrum

Another main criticism of functionalism is the inverted spectrum or inverted qualia scenario, proposed as an objection to functionalism most notably by Ned Block. This thought experiment involves supposing that there is a person, call her Jane, who was born with a condition that makes her see the opposite spectrum of light to that normally perceived. Unlike normal people, Jane sees the color violet as yellow, orange as blue, and so forth. So, suppose, for example, that you and Jane are looking at the same orange. While you perceive the fruit as colored orange, Jane sees it as colored blue. However, when asked what color the piece of fruit is, both you and Jane will report "orange". In fact, one can see that all of your behavioral as well as functional relations to colors will be the same. Jane will, for example, properly obey traffic signs just as any other person would, even though this involves color perception. Therefore, the argument goes, since there can be two people who are functionally identical, yet have different mental states (differing in their qualitative or phenomenological aspects), functionalism is not robust enough to explain individual differences in qualia.

David Chalmers tries to show that even though mental content cannot be fully accounted for in functional terms, there is nevertheless a nomological correlation between mental states and functional states in this world. A silicon-based robot, for example, whose functional profile matched our own, would have to be fully conscious. His argument for this claim takes the form of a reductio ad absurdum. He considers gradually replacing a human brain by functionally equivalent circuitry; the general idea is that since it would be very unlikely for a conscious human being to experience a change in its qualia which it utterly fails to notice, mental content and functional profile appear to be inextricably bound together, at least for entities that behave like humans. If the subject's qualia were to change, we would expect the subject to notice, and therefore his functional profile to follow suit. A similar argument is applied to the notion of absent qualia. In this case, Chalmers argues that it would be very unlikely for a subject to experience a fading of his qualia which he fails to notice and respond to. This, coupled with the independent assertion that a conscious being's functional profile could be maintained, irrespective of its experiential state, leads to the conclusion that the subject of these experiments would remain fully conscious. The problem with this argument, however, as Brian G. Crabb (2005) has observed, is that, while changing or fading qualia in a conscious subject might force changes in its functional profile, this tells us nothing about the case of a permanently inverted or unconscious robot. A subject with inverted qualia from birth would have nothing to notice or adjust to. Similarly, an unconscious functional simulacrum of ourselves (a zombie) would have no experiential changes to notice or adjust to. Consequently, Crabb argues, Chalmers' "fading qualia" and "dancing qualia" arguments fail to establish that cases of permanently inverted or absent qualia are nomologically impossible.

A related critique of the inverted spectrum argument is that it assumes that mental states (differing in their qualitative or phenomenological aspects) can be independent of the functional relations in the brain. Thus, it begs the question of functional mental states: its assumption denies the possibility of functionalism itself, without offering any independent justification for doing so. (Functionalism says that mental states are produced by the functional relations in the brain.) This same type of problem—that there is no argument, just an antithetical assumption at their base—can also be said of both the Chinese room and the Chinese nation arguments. Notice, however, that Crabb's response to Chalmers does not commit this fallacy: His point is the more restricted observation that even if inverted or absent qualia turn out to be nomologically impossible, and it is perfectly possible that we might subsequently discover this fact by other means, Chalmers' argument fails to demonstrate that they are impossible.

Twin Earth

The Twin Earth thought experiment, introduced by Hilary Putnam, is responsible for one of the main arguments used against functionalism, although it was originally intended as an argument against semantic internalism. The thought experiment is simple and runs as follows. Imagine a Twin Earth which is identical to Earth in every way but one: water does not have the chemical structure H2O, but rather some other structure, say XYZ. It is critical, however, to note that XYZ on Twin Earth is still called "water" and exhibits all the same macro-level properties that H2O exhibits on Earth (i.e., XYZ is also a clear drinkable liquid that is in lakes, rivers, and so on). Since these worlds are identical in every way except in the underlying chemical structure of water, you and your Twin Earth doppelgänger see exactly the same things, meet exactly the same people, have exactly the same jobs, behave exactly the same way, and so on. In other words, since you share the same inputs, outputs, and relations between other mental states, you are functional duplicates. So, for example, you both believe that water is wet. However, the content of your mental state of believing that water is wet differs from your duplicate's because your belief is of H2O, while your duplicate's is of XYZ. Therefore, so the argument goes, since two people can be functionally identical, yet have different mental states, functionalism cannot sufficiently account for all mental states.

Most defenders of functionalism initially responded to this argument by attempting to maintain a sharp distinction between internal and external content. The internal contents of propositional attitudes, for example, would consist exclusively in those aspects of them which have no relation with the external world and which bear the necessary functional/causal properties that allow for relations with other internal mental states. Since no one has yet been able to formulate a clear basis or justification for the existence of such a distinction in mental contents, however, this idea has generally been abandoned in favor of externalist causal theories of mental contents (also known as informational semantics). Such a position is represented, for example, by Jerry Fodor's account of an "asymmetric causal theory" of mental content. This view simply entails the modification of functionalism to include within its scope a very broad interpretation of inputs and outputs to include the objects that are the causes of mental representations in the external world.

The Twin Earth argument hinges on the assumption that experience with an imitation water would cause a different mental state than experience with natural water. However, since no one would notice the difference between the two waters, this assumption is likely false. Further, this basic assumption is directly antithetical to functionalism, and thereby the Twin Earth argument does not constitute a genuine argument: the assumption entails a flat denial of functionalism itself (which would say that the two waters would not produce different mental states, because the functional relationships would remain unchanged).

Meaning holism

Another common criticism of functionalism is that it implies a radical form of semantic holism. Block and Fodor referred to this as the damn/darn problem. The difference between saying "damn" or "darn" when one smashes one's finger with a hammer can be mentally significant. But since these outputs are, according to functionalism, related to many (if not all) internal mental states, two people who experience the same pain and react with different outputs must share little (perhaps nothing) in common in any of their mental states. But this is counterintuitive; it seems clear that two people share something significant in their mental states of being in pain if they both smash their finger with a hammer, whether or not they utter the same word when they cry out in pain.

One possible solution to this problem is to adopt a moderate (or molecularist) form of holism. But even if this succeeds in the case of pain, in the case of beliefs and meaning it faces the difficulty of formulating a distinction between relevant and non-relevant contents (which can be difficult to do without invoking an analytic–synthetic distinction, as many seek to avoid).

Triviality arguments

According to Ned Block, if functionalism is to avoid the chauvinism of type-physicalism, it becomes overly liberal in "ascribing mental properties to things that do not in fact have them". As an example, he proposes that the economy of Bolivia might be organized such that the economic states, inputs, and outputs would be isomorphic to a person under some bizarre mapping from mental to economic variables.

Hilary Putnam, John Searle, and others have offered further arguments that functionalism is trivial, i.e. that the internal structures functionalism tries to discuss turn out to be present everywhere, so that functionalism either reduces to behaviorism or to complete triviality, and therefore to a form of panpsychism. These arguments typically use the assumption that physics leads to a progression of unique states, and that functionalist realization is present whenever there is a mapping from the proposed set of mental states to physical states of the system. Given that the states of a physical system are always at least slightly distinct, such a mapping will always exist, so any system is a mind. Formulations of functionalism which stipulate absolute requirements on interaction with external objects (external to the functional account, meaning not defined functionally) are reduced to behaviorism instead of absolute triviality, because the input-output behavior is still required.
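
To see the style of construction these triviality arguments rely on, consider the following toy sketch (the state names and sequences are placeholders, not part of any published formulation): if a physical system passes through distinct states at each moment, those states can simply be paired off, one by one, with the successive states of any machine we wish it to "realize".

    # A stand-in physical system that never repeats a state: 0, 1, 2, ...
    physical_trajectory = list(range(12))

    # An arbitrary cycle of "mental" states we want the system to realize.
    mental_run = ["hungry", "pain", "belief"]

    # Because every physical state is distinct, a mapping always exists:
    # just pair the t-th physical state with the t-th machine state.
    mapping = {s: mental_run[t % len(mental_run)]
               for t, s in enumerate(physical_trajectory)}

    # Reading the physical evolution through the mapping reproduces the run.
    realized = [mapping[s] for s in physical_trajectory]
    assert realized[:6] == ["hungry", "pain", "belief",
                            "hungry", "pain", "belief"]

Nothing here depends on what the physical states are; any sufficiently complex system admits such a pairing, which is exactly why critics conclude that unconstrained functional realization is trivial.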

Peter Godfrey-Smith has argued further that such formulations can still be reduced to triviality if they accept a somewhat innocent-seeming additional assumption. The assumption is that adding a transducer layer, that is, an input-output system, to an object should not change whether that object has mental states. The transducer layer is restricted to producing behavior according to a simple mapping, such as a lookup table, from inputs to actions on the system, and from the state of the system to outputs. However, since the system will be in unique states at each moment and at each possible input, such a mapping will always exist, so there will be a transducer layer that will produce whatever physical behavior is desired.

Godfrey-Smith believes that these problems can be addressed using causality, but that it may be necessary to posit a continuum between objects being minds and not being minds rather than an absolute distinction. Furthermore, constraining the mappings seems to require either consideration of the external behavior as in behaviorism, or discussion of the internal structure of the realization as in identity theory; and though multiple realizability does not seem to be lost, the functionalist claim of the autonomy of high-level functional description becomes questionable.

Evolvability

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Evolvability

Evolvability is the capacity of a system for adaptive evolution: the ability of a population of organisms not merely to generate genetic diversity, but to generate adaptive genetic diversity, and thereby to evolve through natural selection.

In order for a biological organism to evolve by natural selection, there must be a certain minimum probability that new, heritable variants are beneficial. Random mutations, unless they occur in DNA sequences with no function, are expected to be mostly detrimental. Beneficial mutations are always rare, but if they are too rare, then adaptation cannot occur. Early failed efforts to evolve computer programs by random mutation and selection showed that evolvability is not a given, but depends on the representation of the program as a data structure, because this determines how changes in the program map to changes in its behavior. Analogously, the evolvability of organisms depends on their genotype–phenotype map. This means that genomes are structured in ways that make beneficial changes more likely. This has been taken as evidence that evolution has created fitter populations of organisms that are better able to evolve.
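
The point about representation can be made concrete with a small simulation. The sketch below (all names and numbers are illustrative assumptions, not taken from the literature) hill-climbs toward a target integer using single-bit mutations under two encodings of the same phenotype: standard binary, where neighbouring integers can differ in many bits at once, and a reflected Gray code, where integers that differ by 1 always differ in exactly one bit. The phenotypes are identical; only the genotype-phenotype map changes, and with it the evolvability.

    import random

    TARGET = 100   # toy optimum: fitness improves as x approaches it
    BITS = 8       # genomes are 8-bit strings encoding integers 0..255

    def fitness(x):
        return -abs(x - TARGET)

    def decode_binary(bits):
        # standard positional binary: adjacent integers can differ in
        # many bits at once ("Hamming cliffs", e.g. 99 vs 100)
        return int("".join(map(str, bits)), 2)

    def decode_gray(bits):
        # reflected Gray code: integers that differ by 1 always differ
        # in exactly one bit, so small phenotypic steps stay reachable
        acc, value = 0, 0
        for g in bits:
            acc ^= g
            value = (value << 1) | acc
        return value

    def final_distance(decode, seed, steps=300):
        # single-bit-flip hill climbing; returns how far from the
        # optimum the search ends up after a fixed mutation budget
        rng = random.Random(seed)
        genome = [rng.randint(0, 1) for _ in range(BITS)]
        for _ in range(steps):
            mutant = genome[:]
            mutant[rng.randrange(BITS)] ^= 1
            if fitness(decode(mutant)) >= fitness(decode(genome)):
                genome = mutant
        return abs(decode(genome) - TARGET)

    for name, decode in [("binary", decode_binary), ("gray", decode_gray)]:
        mean = sum(final_distance(decode, s) for s in range(50)) / 50
        print(f"{name:>6} encoding: mean final distance {mean:.2f}")

Under the Gray code a beneficial single-bit mutation always exists until the optimum is reached, so the walk reliably finishes; under plain binary it can stall at Hamming cliffs (99 and 100, for example, differ in three bits).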

Alternative definitions

Andreas Wagner describes two definitions of evolvability. According to the first definition, a biological system is evolvable:

  • if its properties show heritable genetic variation, and
  • if natural selection can thus change these properties.

According to the second definition, a biological system is evolvable:

  • if it can acquire novel functions through genetic change, functions that help the organism survive and reproduce.

For example, consider an enzyme with multiple alleles in the population. Each allele catalyzes the same reaction, but with a different level of activity. However, even after millions of years of evolution, exploring many sequences with similar function, no mutation might exist that gives this enzyme the ability to catalyze a different reaction. Thus, although the enzyme's activity is evolvable in the first sense, that does not mean that the enzyme's function is evolvable in the second sense. However, every system evolvable in the second sense must also be evolvable in the first.

Pigliucci recognizes three classes of definition, depending on timescale. The first corresponds to Wagner's first, and represents the very short timescales that are described by quantitative genetics. He divides Wagner's second definition into two categories, one representing the intermediate timescales that can be studied using population genetics, and one representing exceedingly rare long-term innovations of form.

Pigliucci's second definition of evolvability includes Altenberg's quantitative concept of evolvability, being not a single number, but the entire upper tail of the fitness distribution of the offspring produced by the population. This quantity was considered a "local" property of the instantaneous state of a population, and its integration over the population's evolutionary trajectory, and over many possible populations, would be necessary to give a more global measure of evolvability.
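
A crude way to estimate such a tail empirically is sketched below. Everything in it is an illustrative assumption rather than Altenberg's formal apparatus: a toy bit-counting fitness function, single-bit-flip mutation, and the current population best as the reference threshold.

    import random

    rng = random.Random(0)

    def fitness(genome):
        return sum(genome)            # toy fitness: the number of 1-bits

    def mutate(genome):
        child = genome[:]
        child[rng.randrange(len(child))] ^= 1
        return child

    # A toy population of 16-bit genomes.
    population = [[rng.randint(0, 1) for _ in range(16)] for _ in range(50)]
    best = max(fitness(g) for g in population)

    # Sample the offspring fitness distribution and weigh its upper tail:
    # the probability that a random offspring beats the current best.
    samples = [fitness(mutate(rng.choice(population))) for _ in range(20000)]
    tail_mass = sum(f > best for f in samples) / len(samples)
    print(f"best fitness {best}; P(offspring beats best) = {tail_mass:.4f}")

As the text above notes, this is a "local" snapshot: the number changes as the population moves, so a global measure would integrate it along evolutionary trajectories and across populations.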

Generating more variation

More heritable phenotypic variation means more evolvability. While mutation is the ultimate source of heritable variation, its permutations and combinations also make a big difference. Sexual reproduction generates more variation (and thereby evolvability) relative to asexual reproduction (see evolution of sexual reproduction). Evolvability is further increased by generating more variation when an organism is stressed, and thus likely to be less well adapted, but less variation when an organism is doing well. The amount of variation generated can be adjusted in many different ways, for example via the mutation rate, via the probability of sexual vs. asexual reproduction, via the probability of outcrossing vs. inbreeding, via dispersal, and via access to previously cryptic variants through the switching of an evolutionary capacitor. A large population size increases the influx of novel mutations in each generation.
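
One minimal way to picture this stress-tuning (the functional form and constants below are invented for illustration, not drawn from any particular organism) is to let the amount of variation generated fall as current adaptation rises:

    def mutation_rate(relative_fitness, base=1e-3, stress_factor=10.0):
        # Stress-induced mutagenesis: a maladapted (stressed) individual
        # generates more variation than a well-adapted one.
        stress = 1.0 - relative_fitness  # 0 = thriving, 1 = maximally stressed
        return base * (1.0 + stress_factor * stress)

    print(mutation_rate(1.0))   # well adapted  -> 0.001 (illustrative units)
    print(mutation_rate(0.2))   # badly adapted -> 0.009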

Enhancement of selection

Rather than creating more phenotypic variation, some mechanisms increase the intensity and effectiveness with which selection acts on existing phenotypic variation. For example:

  • Mating rituals that allow sexual selection on "good genes", and so intensify natural selection.
  • Large effective population size, decreasing the threshold value of the selection coefficient above which selection becomes an important player (see the sketch after this list). This could happen through an increase in the census population size, decreasing genetic drift, through an increase in the recombination rate, decreasing genetic draft, or through changes in the probability distribution of the numbers of offspring.
  • Recombination decreasing the importance of the Hill-Robertson effect, where different genotypes contain different adaptive mutations. Recombination brings the two alleles together, creating a super-genotype in place of two competing lineages.
  • Shorter generation time.
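
The effective-population-size point can be made quantitative with Kimura's standard diffusion approximation for the fixation probability of a new mutant (a textbook formula, applied here to invented numbers): when 4*Ne*s is much greater than 1 selection dominates, and when it is much less than 1 the allele behaves as if neutral, so a larger Ne lowers the selection coefficient at which selection "sees" a mutation.

    import math

    def fixation_probability(s, ne):
        # Kimura's diffusion approximation for a new mutant starting at
        # frequency 1/(2*Ne) in a diploid population of effective size Ne.
        if s == 0:
            return 1 / (2 * ne)
        return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * ne * s))

    s = 1e-4                      # a weakly beneficial mutation
    for ne in (100, 10_000, 1_000_000):
        neutral = 1 / (2 * ne)    # fixation probability of a neutral allele
        print(f"Ne={ne:>9}: P(fix)={fixation_probability(s, ne):.2e} "
              f"vs neutral {neutral:.2e}")

At Ne = 100 the beneficial allele fixes barely more often than a neutral one; at Ne = 1,000,000 it fixes hundreds of times more often, i.e. the same mutation has moved from effectively neutral to strongly selected.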

Robustness and evolvability

The relationship between robustness and evolvability depends on whether recombination can be ignored. Recombination can generally be ignored in asexual populations and for traits affected by single genes.

Without recombination

Robustness in the face of mutation does not increase evolvability in the first sense. In organisms with a high level of robustness, mutations have smaller phenotypic effects than in organisms with a low level of robustness. Thus, robustness reduces the amount of heritable genetic variation on which selection can act. However, robustness may allow exploration of large regions of genotype space, increasing evolvability according to the second sense. Even without genetic diversity, some genotypes have higher evolvability than others, and selection for robustness can increase the "neighborhood richness" of phenotypes that can be accessed from the same starting genotype by mutation. For example, one reason many proteins are less robust to mutation is that they have marginal thermodynamic stability, and most mutations reduce this stability further. Proteins that are more thermostable can tolerate a wider range of mutations and are more evolvable. For polygenic traits, neighborhood richness contributes more to evolvability than does genetic diversity or "spread" across genotype space.

With recombination

Temporary robustness, or canalisation, may lead to the accumulation of significant quantities of cryptic genetic variation. In a new environment or genetic background, this variation may be revealed and sometimes be adaptive.

Factors affecting evolvability via robustness

Different genetic codes have the potential to change robustness and evolvability by changing the effect of single-base mutational changes.

Exploration ahead of time

When mutational robustness exists, many mutants will persist in a cryptic state. Mutations tend to fall into two categories, having either a very bad effect or very little effect: few mutations fall somewhere in between. Sometimes, these mutations will not be completely invisible, but still have rare effects, with very low penetrance. When this happens, natural selection weeds out the very bad mutations, while leaving the others relatively unaffected. While evolution has no "foresight" to know which environment will be encountered in the future, some mutations cause major disruption to a basic biological process, and will never be adaptive in any environment. Screening these out in advance leads to preadapted stocks of cryptic genetic variation.

Another way that phenotypes can be explored, prior to strong genetic commitment, is through learning. An organism that learns gets to "sample" several different phenotypes during its early development, and later sticks to whatever worked best. Later in evolution, the optimal phenotype can be genetically assimilated so it becomes the default behavior rather than a rare behavior. This is known as the Baldwin effect, and it can increase evolvability.

Learning biases phenotypes in a beneficial direction. But an exploratory flattening of the fitness landscape can also increase evolvability even when it has no direction, for example when the flattening is a result of random errors in molecular and/or developmental processes. This increase in evolvability can happen when evolution is faced with crossing a "valley" in an adaptive landscape. This means that two mutations exist that are deleterious by themselves, but beneficial in combination. These combinations can evolve more easily when the landscape is first flattened, and the discovered phenotype is then fixed by genetic assimilation.
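
A toy version of this valley-crossing effect is sketched below (the landscape, the noise model, and all numbers are invented for illustration): a greedy adaptive walk on a two-locus landscape, where Gaussian noise in each fitness evaluation stands in for the random molecular or developmental errors that flatten the landscape.

    import random

    rng = random.Random(1)

    # Two-locus landscape with a valley: each single mutation is
    # deleterious on its own, but the double mutant is the global optimum.
    FITNESS = {(0, 0): 1.0, (0, 1): 0.9, (1, 0): 0.9, (1, 1): 1.2}

    def crossed(noise, steps=200):
        # Greedy adaptive walk with noisy fitness evaluation; returns True
        # if the walk reaches the double mutant within the step budget.
        g = (0, 0)
        for _ in range(steps):
            locus = rng.randrange(2)
            mutant = tuple(b ^ (i == locus) for i, b in enumerate(g))
            perceived_old = FITNESS[g] + rng.gauss(0, noise)
            perceived_new = FITNESS[mutant] + rng.gauss(0, noise)
            if perceived_new >= perceived_old:
                g = mutant
            if g == (1, 1):
                return True
        return False

    for noise in (0.0, 0.05, 0.2):
        rate = sum(crossed(noise) for _ in range(1000)) / 1000
        print(f"noise={noise:.2f}: crossed the valley in {rate:.0%} of runs")

With no noise the walk is pinned at the local peak forever; modest noise lets it occasionally step "downhill" in true fitness, discover the double mutant, and, in a fuller model, have the discovery fixed by genetic assimilation.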

Modularity

If every mutation affected every trait, then a mutation that was an improvement for one trait would be a disadvantage for other traits. This means that almost no mutations would be beneficial overall. But if pleiotropy is restricted to within functional modules, then mutations affect only one trait at a time, and adaptation is much less constrained. In a modular gene network, for example, a gene that induces a limited set of other genes that control a specific trait under selection may evolve more readily than one that also induces other gene pathways controlling traits not under selection. Individual genes also exhibit modularity. A mutation in one cis-regulatory element of a gene's promoter region may allow the expression of the gene to be altered only in specific tissues, developmental stages, or environmental conditions rather than changing gene activity in the entire organism simultaneously.
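
A small simulation makes the contrast vivid (the trait count, effect sizes, and fitness function are illustrative assumptions): near an optimum, compare mutations confined to a single trait "module" against mutations that shift every trait at once.

    import random

    rng = random.Random(3)
    T = 10        # number of traits under selection; optimum is 0 for each
    SIGMA = 0.1   # scale of both initial deviations and mutational effects

    def fitness(traits):
        return -sum(abs(t) for t in traits)

    def beneficial_fraction(modular, trials=20000):
        hits = 0
        for _ in range(trials):
            traits = [rng.gauss(0, SIGMA) for _ in range(T)]
            base = fitness(traits)
            if modular:
                # mutation restricted to one module: one trait changes
                mutant = traits[:]
                mutant[rng.randrange(T)] += rng.gauss(0, SIGMA)
            else:
                # unrestricted pleiotropy: every trait shifts at once
                mutant = [t + rng.gauss(0, SIGMA) for t in traits]
            if fitness(mutant) > base:
                hits += 1
        return hits / trials

    print("modular mutations     beneficial:", beneficial_fraction(True))
    print("pleiotropic mutations beneficial:", beneficial_fraction(False))

The modular mutations come out beneficial roughly a third of the time, while the fully pleiotropic ones almost never do: an improvement in one trait is routinely undone by collateral damage to the other nine.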

Evolution of evolvability

While variation yielding high evolvability could be useful in the long term, in the short term most of that variation is likely to be a disadvantage. For example, naively it would seem that increasing the mutation rate via a mutator allele would increase evolvability. But as an extreme example, if the mutation rate is too high then all individuals will be dead or at least carry a heavy mutation load. Short-term selection for low variation is usually thought to be more powerful than long-term selection for evolvability, making it difficult for natural selection to cause the evolution of evolvability. Other forces of selection also affect the generation of variation; for example, mutation and recombination may in part be byproducts of mechanisms to cope with DNA damage.

When recombination is low, mutator alleles may still sometimes hitchhike on the success of adaptive mutations that they cause. In this case, selection can take place at the level of the lineage. This may explain why mutators are often seen during experimental evolution of microbes. Mutator alleles can also evolve more easily when they only increase mutation rates in nearby DNA sequences, not across the whole genome: this is known as a contingency locus.

The evolution of evolvability is less controversial if it occurs via the evolution of sexual reproduction, or via the tendency of variation-generating mechanisms to become more active when an organism is stressed. The yeast prion [PSI+] may also be an example of the evolution of evolvability through evolutionary capacitance. An evolutionary capacitor is a switch that turns genetic variation on and off. This is very much like hedging bets against the risk that a future environment will be similar or different. Theoretical models also predict the evolution of evolvability via modularity. When the costs of evolvability are sufficiently short-lived, more evolvable lineages may be the most successful in the long term. However, the hypothesis that evolvability is an adaptation is often rejected in favor of alternative hypotheses, e.g. minimization of costs.

Applications

Evolvability phenomena have practical applications. In protein engineering we wish to increase evolvability, while in medicine and agriculture we wish to decrease it. Protein evolvability is defined as the ability of a protein to acquire sequence diversity and conformational flexibility that enable it to evolve toward a new function.

In protein engineering, both rational design and directed evolution approaches aim to create changes rapidly through mutations with large effects. Such mutations, however, commonly destroy enzyme function or at least reduce tolerance to further mutations. Identifying evolvable proteins and manipulating their evolvability is becoming increasingly necessary in order to achieve ever larger functional modifications of enzymes. Proteins are also often studied as part of the basic science of evolvability, because their biophysical properties and chemical functions can be easily changed by a few mutations. More evolvable proteins can tolerate a broader range of amino acid changes, allowing them to evolve toward new functions. The study of evolvability has fundamental importance for understanding the very long-term evolution of protein superfamilies.

Many human diseases are capable of evolution. Viruses, bacteria, fungi and cancers evolve to be resistant to host immune defences, as well as to pharmaceutical drugs. These same problems occur in agriculture with pesticide and herbicide resistance. It is possible that we are facing the end of the effective life of most available antibiotics. Predicting the evolution and evolvability of our pathogens, and devising strategies to slow or circumvent the development of resistance, demands deeper knowledge of the complex forces driving evolution at the molecular level.

A better understanding of evolvability is proposed to be part of an Extended Evolutionary Synthesis.

Computer-aided software engineering

From Wikipedia, the free encyclopedia ...