
Wednesday, September 18, 2019

Accumulated cyclone energy

From Wikipedia, the free encyclopedia
 
Accumulated cyclone energy (ACE) is a measure used by various agencies including the National Oceanic and Atmospheric Administration (NOAA) and the India Meteorological Department to express the activity of individual tropical cyclones and entire tropical cyclone seasons. It uses an approximation of the wind energy used by a tropical system over its lifetime and is calculated every six hours. The ACE of a season is the sum of the ACEs for each storm and takes into account the number, strength, and duration of all the tropical storms in the season. The highest ACE calculated for a single storm is 82, for Hurricane/Typhoon Ioke in 2006.

Calculation

The ACE of a season is calculated by summing the squares of the estimated maximum sustained wind speed of every active tropical storm (wind speed 35 knots [65 km/h, 40 mph] or higher), at six-hour intervals. Since the calculation is sensitive to the starting point of the six-hour intervals, the convention is to use 00:00, 06:00, 12:00, and 18:00 UTC. If a storm happens to cross from one year into the next, its ACE counts toward the previous year. The numbers are usually divided by 10,000 to make them more manageable. One unit of ACE equals 10^4 kn^2, and for use as an index the unit is assumed. Thus:

ACE = 10^-4 Σ v_max^2

where v_max is the estimated sustained wind speed in knots and the sum runs over all six-hour periods in which the system is at tropical-storm strength or above.

Kinetic energy is proportional to the square of velocity, and by adding together the energy per some interval of time, the accumulated energy is found. As the duration of a storm increases, more values are summed and the ACE also increases such that longer-duration storms may accumulate a larger ACE than more-powerful storms of lesser duration. Although ACE is a value proportional to the energy of the system, it is not a direct calculation of energy (the mass of the moved air and therefore the size of the storm would show up in a real energy calculation). 
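
For a concrete illustration of this calculation, here is a minimal Python sketch (not an official NOAA tool); the function name and the sample wind values are hypothetical:

  def ace_index(winds_kt):
      """ACE from six-hourly maximum sustained wind estimates (knots).

      Only periods at tropical-storm strength (35 kt) or higher contribute,
      and the sum of squared winds is scaled by 10^-4, as defined above.
      """
      return sum(v ** 2 for v in winds_kt if v >= 35) * 1e-4

  # Hypothetical storm: 100 kt sustained for two days (8 six-hour periods)
  # contributes 8 * 100^2 * 1e-4 = 8.0 ACE units.
  print(ace_index([100] * 8))  # 8.0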

A related quantity is hurricane destruction potential (HDP), which is ACE but only calculated for the time where the system is a hurricane.

Atlantic basin ACE

Atlantic basin cyclone intensity by accumulated cyclone energy, time series 1850–2014
 
A season's ACE is used by NOAA and others to categorize hurricane seasons into three groups by activity. Measured over the period 1951–2000 for the Atlantic basin, the median annual index was 87.5 and the mean annual index was 93.2. The NOAA categorization system divides seasons into:
  • Above-normal season: An ACE value above 111 (120% of the 1981–2010 median), provided at least two of the following three parameters are also exceeded: number of tropical storms: 12, hurricanes: 6, and major hurricanes: 2.
  • Near-normal season: neither above-normal nor below-normal
  • Below-normal season: An ACE value below 66 (71.4% of the 1981–2010 median), or none of the following three parameters are exceeded: number of tropical storms: 9, hurricanes: 4, and major hurricanes: 1.
According to the NOAA categorization system for the Atlantic, the most recent above-normal season is the 2018 season, the most recent near-normal season is the 2014 season, and the most recent below-normal season is the 2015 season.
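
For illustration only, the categorization above can be sketched in Python; the thresholds are those listed in the criteria, while the function name and the example season are hypothetical:

  def classify_atlantic_season(ace, storms, hurricanes, majors):
      """Rough sketch of the NOAA Atlantic season categorization described above."""
      exceeded_above = sum([storms > 12, hurricanes > 6, majors > 2])
      none_exceeded_below = not any([storms > 9, hurricanes > 4, majors > 1])
      if ace > 111 and exceeded_above >= 2:
          return "above-normal"
      if ace < 66 or none_exceeded_below:
          return "below-normal"
      return "near-normal"

  # Hypothetical season: ACE 130 with 15 tropical storms, 7 hurricanes and
  # 2 major hurricanes exceeds two of the three parameters.
  print(classify_atlantic_season(130, 15, 7, 2))  # above-normal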

Hyperactivity

The term hyperactive is used by Goldenberg et al. (2001) based on a different weighting algorithm, which places more weight on major hurricanes but typically equates to an ACE of about 153 (175% of the 1951–2000 median) or more.

Individual storms in the Atlantic

The highest ever ACE estimated for a single storm in the Atlantic is 73.6, for the San Ciriaco hurricane in 1899. This single storm had an ACE higher than many whole Atlantic storm seasons. Other Atlantic storms with high ACEs include Hurricane Ivan in 2004, with an ACE of 70.4, Hurricane Irma in 2017, with an ACE of 64.9, the Great Charleston Hurricane in 1893, with an ACE of 63.5, Hurricane Isabel in 2003, with an ACE of 63.3, and the 1932 Cuba hurricane, with an ACE of 59.8.

Since 1950, the highest ACE of a tropical storm was Tropical Storm Laura in 1971, which attained an ACE of 8.6. The highest ACE of a Category 1 hurricane was Hurricane Nadine in 2012, which attained an ACE of 26.3. The lowest ACE of a tropical storm was shared by 2000's Tropical Storm Chris and 2017's Tropical Storm Philippe, both of which were tropical storms for only six hours and had an ACE of just 0.1. The lowest ACE of any hurricane belongs to 2005's Hurricane Cindy, which was a hurricane for only six hours, and 2007's Hurricane Lorenzo, which was a hurricane for twelve hours; both had an ACE of just 1.5. The lowest ACE of a major hurricane (Category 3 or higher) was Hurricane Gerda in 1969, with an ACE of 5.3. The only years since 1950 to feature two storms with an ACE index of over 40 points have been 1966, 2003, and 2004, and the only year to feature three such storms is 2017.

The following table shows those storms in the Atlantic basin from 1950–2019 that have attained over 40 points of ACE.

Storm | Year | Peak classification | ACE | Duration
Hurricane Ivan | 2004 | Category 5 hurricane | 70.4 | 23 days
Hurricane Irma | 2017 | Category 5 hurricane | 64.9 | 13 days
Hurricane Isabel | 2003 | Category 5 hurricane | 63.3 | 14 days
Hurricane Donna | 1960 | Category 4 hurricane | 57.6 | 16 days
Hurricane Carrie | 1957 | Category 4 hurricane | 55.8 | 21 days
Hurricane Inez | 1966 | Category 4 hurricane | 54.6 | 21 days
Hurricane Luis | 1995 | Category 4 hurricane | 53.5 | 16 days
Hurricane Allen | 1980 | Category 5 hurricane | 52.3 | 12 days
Hurricane Esther | 1961 | Category 4 hurricane | 52.2 | 18 days
Hurricane Matthew | 2016 | Category 5 hurricane | 50.9 | 12 days
Hurricane Flora | 1963 | Category 4 hurricane | 49.4 | 16 days
Hurricane Edouard | 1996 | Category 4 hurricane | 49.3 | 14 days
Hurricane Beulah | 1967 | Category 5 hurricane | 47.9 | 17 days
Hurricane Dorian | 2019 | Category 5 hurricane | 47.8 | 15 days
Hurricane Dog | 1950 | Category 4 hurricane | 47.5 | 13 days
Hurricane Betsy | 1965 | Category 4 hurricane | 47.0 | 18 days
Hurricane Frances | 2004 | Category 4 hurricane | 45.9 | 15 days
Hurricane Faith | 1966 | Category 3 hurricane | 45.4 | 17 days
Hurricane Maria | 2017 | Category 5 hurricane | 44.8 | 14 days
Hurricane Ginger | 1971 | Category 2 hurricane | 44.2 | 28 days
Hurricane David | 1979 | Category 5 hurricane | 44.0 | 12 days
Hurricane Jose | 2017 | Category 4 hurricane | 43.3 | 17 days
Hurricane Fabian | 2003 | Category 4 hurricane | 43.2 | 14 days
Hurricane Hugo | 1989 | Category 5 hurricane | 42.7 | 12 days
Hurricane Gert | 1999 | Category 4 hurricane | 42.3 | 12 days
Hurricane Igor | 2010 | Category 4 hurricane | 41.9 | 14 days

Atlantic hurricane seasons, 1851–2019

Due to the scarcity and imprecision of early offshore measurements, ACE data for the Atlantic hurricane season is less reliable prior to the modern satellite era, but NOAA has analyzed the best available information dating back to 1851. The 1933 Atlantic hurricane season has the highest ACE on record, with a total of 259. For the current season or the season that just ended, the ACE is preliminary, based on National Hurricane Center bulletins, and may later be revised.

Eastern Pacific ACE

Observed monthly values for the PDO index, 1900–present.
 
Historical East Pacific Seasonal Activity, 1981–2015.

Individual storms in the Eastern Pacific (east of 180°W)

The highest ever ACE estimated for a single storm in the Eastern or Central Pacific, while located east of the International Date Line is 62.8, for Hurricane Fico of 1978. Other Eastern Pacific storms with high ACEs include Hurricane John in 1994, with an ACE of 54.0, Hurricane Kevin in 1991, with an ACE of 52.1, and Hurricane Hector of 2018, with an ACE of 50.5.

The following table shows those storms in the Eastern and Central Pacific basins from 1971–2018 that have attained over 30 points of ACE.

Storm | Year | Peak classification | ACE | Duration
Hurricane Fico | 1978 | Category 4 hurricane | 62.8 | 20 days
Hurricane John † | 1994 | Category 5 hurricane | 54.0 | 19 days
Hurricane Kevin | 1991 | Category 4 hurricane | 52.1 | 17 days
Hurricane Hector † | 2018 | Category 4 hurricane | 50.5 | 13 days
Hurricane Tina | 1992 | Category 4 hurricane | 47.7 | 22 days
Hurricane Trudy | 1990 | Category 4 hurricane | 45.8 | 16 days
Hurricane Lane | 2018 | Category 5 hurricane | 44.2 | 13 days
Hurricane Dora † | 1999 | Category 4 hurricane | 41.4 | 13 days
Hurricane Jimena | 2015 | Category 4 hurricane | 40.0 | 15 days
Hurricane Guillermo | 1997 | Category 5 hurricane | 40.0 | 16 days
Hurricane Norbert | 1984 | Category 4 hurricane | 39.6 | 12 days
Hurricane Norman | 2018 | Category 4 hurricane | 36.6 | 12 days
Hurricane Celeste | 1972 | Category 4 hurricane | 36.3 | 16 days
Hurricane Sergio | 2018 | Category 4 hurricane | 35.5 | 13 days
Hurricane Lester | 2016 | Category 4 hurricane | 35.4 | 14 days
Hurricane Olaf | 2015 | Category 4 hurricane | 34.6 | 12 days
Hurricane Jimena | 1991 | Category 4 hurricane | 34.5 | 12 days
Hurricane Doreen | 1973 | Category 4 hurricane | 34.3 | 16 days
Hurricane Ioke † | 2006 | Category 5 hurricane | 34.2 | 7 days
Hurricane Marie | 1990 | Category 4 hurricane | 33.1 | 14 days
Hurricane Orlene | 1992 | Category 4 hurricane | 32.4 | 12 days
Hurricane Greg | 1993 | Category 4 hurricane | 32.3 | 13 days
Hurricane Hilary | 2011 | Category 4 hurricane | 31.2 | 9 days
† – Indicates that the storm formed in the Eastern/Central Pacific but crossed 180°W at least once; therefore only the ACE and number of days spent in the EPAC/CPAC are included.

Eastern Pacific hurricane seasons, 1971–2019

Accumulated cyclone energy is also used in the eastern and central Pacific Ocean. Data on ACE is considered reliable starting with the 1971 season. The season with the highest ACE since 1971 is the 2018 season. The 1977 season has the lowest ACE. The most recent above-normal season is the 2018 season, the most recent near-normal season is the 2017 season, and the most recent below-normal season is the 2013 season. The 35-year (1971–2005) median is 115 × 10^4 kn^2 (100 in the EPAC zone east of 140°W, 13 in the CPAC zone); the mean is 130 (112 + 18).

Sahara (Climate)

From Wikipedia, the free encyclopedia

The Sahara is the world's largest low-latitude hot desert. It is located in the horse latitudes under the subtropical ridge, a significant belt of semi-permanent subtropical warm-core high pressure where the air from upper levels of the troposphere tends to sink towards the ground. This steady descending airflow causes a warming and a drying effect in the upper troposphere. The sinking air prevents evaporating water from rising, and therefore prevents adiabatic cooling, which makes cloud formation extremely difficult, if not nearly impossible.

The permanent absence of clouds allows unhindered light and thermal radiation. The stability of the atmosphere above the desert prevents any convective overturning, thus making rainfall virtually non-existent. As a consequence, the weather tends to be sunny, dry and stable with a minimal chance of rainfall. Subsiding, diverging, dry air masses associated with subtropical high-pressure systems are extremely unfavorable for the development of convectional showers. The subtropical ridge is the predominant factor that explains the hot desert climate (Köppen climate classification BWh) of this vast region. The descending airflow is the strongest and the most effective over the eastern part of the Great Desert, in the Libyan Desert: this is the sunniest, driest and the most nearly "rainless" place on the planet, rivaling the Atacama Desert in Chile and Peru.

The rainfall inhibition and the dissipation of cloud cover are most accentuated over the eastern section of the Sahara rather than the western. The prevailing air mass lying above the Sahara is the continental tropical (cT) air mass, which is hot and dry. Hot, dry air masses primarily form over the North African desert from the heating of the vast continental land area, and they affect the whole desert during most of the year. Because of this extreme heating process, a thermal low usually forms near the surface and is the strongest and the most developed during the summertime. The Sahara High represents the eastern continental extension of the Azores High, centered over the North Atlantic Ocean. The subsidence of the Sahara High nearly reaches the ground during the coolest part of the year, while it is confined to the upper troposphere during the hottest periods.

The effects of local surface low pressure are extremely limited because upper-level subsidence still continues to block any form of air ascent. Besides being protected against rain-bearing weather systems by the atmospheric circulation itself, the desert is made even drier by its geographical configuration and location. Indeed, the extreme aridity of the Sahara is not explained by the subtropical high pressure alone: the Atlas Mountains of Algeria, Morocco and Tunisia also help to enhance the aridity of the northern part of the desert. These major mountain ranges act as a barrier, causing a strong rain shadow effect on the leeward side by dropping much of the humidity brought by atmospheric disturbances along the polar front which affects the surrounding Mediterranean climates.

The primary source of rain in the Sahara is the Intertropical Convergence Zone, a continuous belt of low-pressure systems near the equator which brings the brief and irregular rainy season to the Sahel and southern Sahara. Rainfall in this giant desert has to overcome the physical and atmospheric barriers that normally prevent the production of precipitation. The harsh climate of the Sahara is characterized by: extremely low, unreliable, highly erratic rainfall; extremely high sunshine duration values; high temperatures year-round; negligible rates of relative humidity; a significant diurnal temperature variation; and extremely high levels of potential evaporation which are the highest recorded worldwide.

Temperature

The sky is usually clear above the desert, and the sunshine duration is extremely high everywhere in the Sahara. Most of the desert has more than 3,600 hours of bright sunshine per year (over 82% of daylight hours), and a wide area in the eastern part has over 4,000 hours of bright sunshine per year (over 91% of daylight hours). The highest values are very close to the theoretical maximum. Values as high as 4,300 hours, or about 98% of daylight hours, are recorded in Upper Egypt (Aswan, Luxor) and in the Nubian Desert (Wadi Halfa). The annual average direct solar irradiation is around 2,800 kWh/(m² year) in the Great Desert. The Sahara has a huge potential for solar energy production.
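
As a rough check of those percentages (a back-of-the-envelope sketch assuming about 4,383 daylight hours per year, i.e. half of the 8,766 hours in an average year):

  daylight_hours = 8766 / 2  # roughly 4,383 hours of daylight per year
  for sunshine in (3600, 4000, 4300):
      print(sunshine, "h ->", round(100 * sunshine / daylight_hours, 1), "%")
  # 3600 h -> 82.1 %, 4000 h -> 91.3 %, 4300 h -> 98.1 %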

Sand dunes in the Sahara
 
The high position of the Sun, the extremely low relative humidity, and the lack of vegetation and rainfall make the Great Desert the hottest large region in the world, and the hottest place on Earth during summer in some spots. The average high temperature exceeds 38 to 40 °C or 100.4 to 104.0 °F during the hottest month nearly everywhere in the desert except at very high altitudes. The world's highest officially recorded average daily high temperature was 47 °C or 116.6 °F in a remote desert town in the Algerian Desert called Bou Bernous, at an elevation of 378 metres (1,240 ft) above sea level, and only Death Valley, California rivals it. Other hot spots in Algeria, such as Adrar, Timimoun, In Salah, Ouallene, Aoulef and Reggane, with elevations between 200 and 400 metres (660 and 1,310 ft) above sea level, get slightly lower summer average highs, around 46 °C or 114.8 °F during the hottest months of the year. In Salah, well known in Algeria for its extreme heat, has average high temperatures of 43.8 °C or 110.8 °F, 46.4 °C or 115.5 °F, 45.5 °C or 113.9 °F and 41.9 °C or 107.4 °F in June, July, August and September respectively. There are even hotter spots in the Sahara, but they are located in extremely remote areas, especially in the Azalai, lying in northern Mali. The major part of the desert experiences around three to five months when the average high strictly exceeds 40 °C or 104 °F; while in the southern central part of the desert, there are up to six or seven months when the average high temperature strictly exceeds 40 °C or 104 °F. Some examples of this are Bilma, Niger and Faya-Largeau, Chad. The annual average daily temperature exceeds 20 °C or 68 °F everywhere and can approach 30 °C or 86 °F in the hottest regions year-round. However, most of the desert has a value in excess of 25 °C or 77 °F.

Sunset in Sahara
 
Sand and ground temperatures are even more extreme. During daytime, the sand temperature is extremely high: it can easily reach 80 °C or 176 °F or more. A sand temperature of 83.5 °C (182.3 °F) has been recorded in Port Sudan. Ground temperatures of 72 °C or 161.6 °F have been recorded in the Adrar of Mauritania and a value of 75 °C (167 °F) has been measured in Borkou, northern Chad.

Due to lack of cloud cover and very low humidity, the desert usually has large diurnal temperature variations between day and night. However, it is a myth that the nights are cold after extremely hot days in the Sahara. The average diurnal temperature range is typically between 13 and 20 °C or 23.4 and 36.0 °F. The lowest values are found along the coastal regions due to high humidity and are often even lower than 10 °C or 18 °F, while the highest values are found in inland desert areas where the humidity is the lowest, mainly in the southern Sahara. Still, it is true that winter nights can be cold, as temperatures can drop to the freezing point and even below, especially in high-elevation areas. The frequency of subfreezing winter nights in the Sahara is strongly influenced by the North Atlantic Oscillation (NAO), with warmer winter temperatures during negative NAO events and cooler winters with more frosts when the NAO is positive. This is because the weaker clockwise flow around the eastern side of the subtropical anticyclone during negative NAO winters, although too dry to produce more than negligible precipitation, significantly reduces the flow of dry, cold air from higher latitudes of Eurasia into the Sahara.

Precipitation

The average annual rainfall ranges from very low in the northern and southern fringes of the desert to nearly non-existent over the central and the eastern part. The thin northern fringe of the desert receives more winter cloudiness and rainfall due to the arrival of low pressure systems over the Mediterranean Sea along the polar front, although very attenuated by the rain shadow effects of the mountains, and the annual average rainfall ranges from 100 millimetres (4 in) to 250 millimetres (10 in). For example, Biskra, Algeria, and Ouarzazate, Morocco, are found in this zone. The southern fringe of the desert along the border with the Sahel receives summer cloudiness and rainfall due to the arrival of the Intertropical Convergence Zone from the south, and the annual average rainfall ranges from 100 millimetres (4 in) to 250 millimetres (10 in). For example, Timbuktu, Mali and Agadez, Niger are found in this zone. The vast central hyper-arid core of the desert is virtually never affected by northerly or southerly atmospheric disturbances and permanently remains under the influence of the strongest anticyclonic weather regime, and the annual average rainfall can drop to less than 1 millimetre (0.04 in). In fact, most of the Sahara receives less than 20 millimetres (0.8 in). Of the 9,000,000 square kilometres (3,500,000 sq mi) of desert land in the Sahara, an area of about 2,800,000 square kilometres (1,100,000 sq mi) (about 31% of the total area) receives an annual average rainfall amount of 10 millimetres (0.4 in) or less, while some 1,500,000 square kilometres (580,000 sq mi) (about 17% of the total area) receives an average of 5 millimetres (0.2 in) or less. The annual average rainfall is virtually zero over a wide area of some 1,000,000 square kilometres (390,000 sq mi) in the eastern Sahara comprising the deserts of Libya, Egypt and Sudan (Tazirbu, Kufra, Dakhla, Kharga, Farafra, Siwa, Asyut, Sohag, Luxor, Aswan, Abu Simbel, Wadi Halfa), where the long-term mean approximates 0.5 millimetres (0.02 in) per year. Rainfall is very unreliable and erratic in the Sahara, as it may vary considerably from year to year. In full contrast to the negligible annual rainfall amounts, the annual rates of potential evaporation are extraordinarily high, roughly ranging from 2,500 millimetres (100 in) per year to more than 6,000 millimetres (240 in) per year in the whole desert. Nowhere else on Earth has air been found as dry and evaporative as in the Sahara region. However, at least two instances of snowfall have been recorded in the Sahara, in February 1979 and December 2016, both in the town of Ain Sefra.

Desertification and prehistoric climate

One theory for the formation of the Sahara is that the monsoon in Northern Africa was weakened because of glaciation during the Quaternary period, starting two or three million years ago. Another theory is that the monsoon was weakened when the ancient Tethys Sea dried up during the Tortonian period, around 7 million years ago.

The climate of the Sahara has undergone enormous variations between wet and dry over the last few hundred thousand years, believed to be caused by long-term changes in the North African climate cycle that alters the path of the North African Monsoon – usually southward. The cycle is driven by a 41,000-year cycle in which the tilt of the Earth changes between 22° and 24.5°. At present (2000 AD), we are in a dry period, but it is expected that the Sahara will become green again in 15,000 years (around 17,000 AD). When the North African monsoon is at its strongest, annual precipitation and subsequent vegetation in the Sahara region increase, resulting in conditions commonly referred to as the "green Sahara". For a relatively weak North African monsoon, the opposite is true, with decreased annual precipitation and less vegetation resulting in a phase of the Sahara climate cycle known as the "desert Sahara".

The idea that changes in insolation (solar heating) caused by long-term changes in the Earth's orbit are a controlling factor for the long-term variations in the strength of monsoon patterns across the globe was first suggested by Rudolf Spitaler in the late nineteenth century. The hypothesis was later formally proposed and tested by the meteorologist John Kutzbach in 1981. Kutzbach's ideas about the impacts of insolation on global monsoonal patterns have become widely accepted today as the underlying driver of long-term monsoonal cycles. Kutzbach never formally named his hypothesis, and as such it is referred to here as the "Orbital Monsoon Hypothesis", as suggested by Ruddiman in 2001.

Sahel region of Mali
 
During the last glacial period, the Sahara was much larger than it is today, extending south beyond its current boundaries. The end of the glacial period brought more rain to the Sahara, from about 8000 BCE to 6000 BCE, perhaps because of low pressure areas over the collapsing ice sheets to the north. Once the ice sheets were gone, the northern Sahara dried out. In the southern Sahara, the drying trend was initially counteracted by the monsoon, which brought rain further north than it does today. By around 4200 BCE, however, the monsoon retreated south to approximately where it is today, leading to the gradual desertification of the Sahara. The Sahara is now as dry as it was about 13,000 years ago.

The Sahara pump theory describes this cycle. During periods of a wet or "Green Sahara", the Sahara becomes a savanna grassland and various flora and fauna become more common. Following inter-pluvial arid periods, the Sahara area then reverts to desert conditions and the flora and fauna are forced to retreat northwards to the Atlas Mountains, southwards into West Africa, or eastwards into the Nile Valley. This separates populations of some of the species in areas with different climates, forcing them to adapt, possibly giving rise to allopatric speciation.

It has also been proposed that humans accelerated the drying-out period from 6,000 to 2,500 BCE through pastoralists overgrazing the available grassland.

Evidence for cycles

The growth of speleothems (which requires rainwater) was detected in Hol-Zakh, Ashalim, Even-Sid, Ma'ale-ha-Meyshar, Ktora Cracks, Negev Tzavoa Cave, and elsewhere, and has allowed tracking of prehistoric rainfall. The Red Sea coastal route was extremely arid before 140 and after 115 kya. Slightly wetter conditions appear at 90–87 kya, but rainfall was still just one tenth of that around 125 kya. In the southern Negev Desert, speleothems did not grow during 185–140 kya (MIS 6), 110–90 kya (MIS 5.4–5.2), or after 85 kya, nor during most of the interglacial period (MIS 5.1), the glacial period and the Holocene. This suggests that the southern Negev was arid to hyper-arid in these periods.

During the Last Glacial Maximum (LGM) the Sahara desert was more extensive than it is now with the extent of the tropical forests being greatly reduced, and the lower temperatures reduced the strength of the Hadley Cell. This is a climate cell which causes rising tropical air of the Inter-Tropical Convergence Zone (ITCZ) to bring rain to the tropics, while dry descending air, at about 20 degrees north, flows back to the equator and brings desert conditions to this region. It is associated with high rates of wind-blown mineral dust, and these dust levels are found as expected in marine cores from the north tropical Atlantic. But around 12,500 BCE the amount of dust in the cores in the Bølling/Allerød phase suddenly plummets and shows a period of much wetter conditions in the Sahara, indicating a Dansgaard-Oeschger (DO) event (a sudden warming followed by a slower cooling of the climate). The moister Saharan conditions had begun about 12,500 BCE, with the extension of the ITCZ northward in the northern hemisphere summer, bringing moist wet conditions and a savanna climate to the Sahara, which (apart from a short dry spell associated with the Younger Dryas) peaked during the Holocene thermal maximum climatic phase at 4000 BCE when mid-latitude temperatures seem to have been between 2 and 3 degrees warmer than in the recent past. Analysis of Nile River deposited sediments in the delta also shows this period had a higher proportion of sediments coming from the Blue Nile, suggesting higher rainfall also in the Ethiopian Highlands. This was caused principally by a stronger monsoonal circulation throughout the sub-tropical regions, affecting India, Arabia and the Sahara. Lake Victoria only recently became the source of the White Nile and dried out almost completely around 15 kya.

The sudden subsequent movement of the ITCZ southwards with a Heinrich event (a sudden cooling followed by a slower warming), linked to changes with the El Niño-Southern Oscillation cycle, led to a rapid drying out of the Saharan and Arabian regions, which quickly became desert. This is linked to a marked decline in the scale of the Nile floods between 2700 and 2100 BCE.

Monday, September 16, 2019

Entropy (computing)

From Wikipedia, the free encyclopedia

In computing, entropy is the randomness collected by an operating system or application for use in cryptography or other uses that require random data. This randomness is often collected from hardware sources, either pre-existing ones such as mouse movements, variance in fan noise, or hard-disk timings, or specially provided randomness generators. A lack of entropy can have a negative impact on performance and security.

Linux kernel

The Linux kernel generates entropy from keyboard timings, mouse movements, and IDE timings and makes the random character data available to other operating system processes through the special files /dev/random and /dev/urandom. This capability was introduced in Linux version 1.3.30.

There are some Linux kernel patches allowing one to use more entropy sources. The audio_entropyd project, which is included in some operating systems such as Fedora, allows audio data to be used as an entropy source. Also available are video_entropyd, which calculates random data from a video source, and entropybroker, which includes these three and can be used to distribute the entropy data to systems not capable of running any of these (e.g. virtual machines). Furthermore, one can use the HAVEGE algorithm through haveged to pool entropy. In some systems, network interrupts can be used as an entropy source as well.

On systems using the Linux kernel, programs needing significant amounts of random data from /dev/urandom cannot co-exist with programs reading little data from /dev/random, as /dev/urandom depletes /dev/random whenever it is being read.
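
For illustration, a minimal Python sketch of reading from these special files on a Linux system; os.urandom is the portable equivalent of reading /dev/urandom:

  import os

  # Non-blocking randomness backed by the kernel CSPRNG (/dev/urandom on Linux).
  key_material = os.urandom(32)

  # Reading /dev/random directly; on older Linux kernels this may block
  # when the kernel's entropy estimate runs low, as described above.
  with open("/dev/random", "rb") as f:
      blocking_bytes = f.read(16)

  print(len(key_material), len(blocking_bytes))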

OpenBSD kernel

OpenBSD has integrated cryptography as one of its main goals and has always worked on increasing its entropy, not only for encryption but also for randomising many parts of the OS, including various internal operations of its kernel. Around 2011, two of the random devices were dropped and linked into a single source, as it could produce hundreds of megabytes per second of high-quality random data on an average system. This made depletion of random data by userland programs impossible on OpenBSD once enough entropy has initially been gathered. This is because OpenBSD uses an arc4random function to maximise the efficiency, or minimise the wastage, of the entropy that the system has gathered.

Hurd kernel

A driver ported from the Linux kernel has been made available for the Hurd kernel.

Solaris

/dev/random and /dev/urandom have been available as Sun packages or patches for Solaris since Solaris 2.6, and have been a standard feature since Solaris 9. As of Solaris 10, administrators can remove existing entropy sources or define new ones via the kernel-level cryptographic framework. 

A 3rd-party kernel module implementing /dev/random is also available for releases dating back to Solaris 2.4.

OS/2

There is a software package for OS/2 that allows software processes to retrieve random data.

Windows

Microsoft Windows releases newer than Windows 95 use CryptoAPI to gather entropy in a similar fashion to Linux kernel's /dev/random.

Windows's CryptoAPI uses the binary registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography\RNG\Seed to store a seeded value from all of its entropy sources.

Because CryptoAPI is closed-source, some free and open source software applications running on the Windows platform use other measures to get randomness. For example, GnuPG, as of version 1.06, uses a variety of sources, such as the number of free bytes in memory, which combined with a random seed generate the randomness it needs.

Programmers using CAPI can get entropy by calling CAPI's CryptGenRandom(), after properly initializing it.

CryptoAPI was deprecated in Windows NT 6.0 and higher. The new API is called Cryptography API: Next Generation (CNG).
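
In practice, most application code does not call CryptGenRandom or the CNG functions directly. For example, Python's os.urandom delegates to the platform CSPRNG (CryptGenRandom on older Windows releases, BCryptGenRandom under CNG on newer ones); a minimal sketch:

  import os
  import secrets

  session_key = os.urandom(32)   # platform CSPRNG (CryptoAPI/CNG on Windows)
  token = secrets.token_hex(16)  # higher-level wrapper over the same source
  print(len(session_key), token)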

Newer versions of Windows are able to use a variety of entropy sources:
  • TPM if available and enabled on motherboard
  • Entropy from UEFI interface (if booted from UEFI)
  • RdRand CPU instruction if available
  • Hardware system clock (RTC)
  • OEM0 ACPI table content

Embedded Systems

Embedded systems have difficulty gathering enough entropy as they are often very simple devices with short boot times, and key generation operations that require sufficient entropy are often one of the first things a system may do. Common entropy sources may not exist on these devices, or will not have been active long enough during boot to ensure sufficient entropy exists. Embedded devices often lack rotating disk drives, human interface devices, and even fans, and the network interface, if any, will not have been active for long enough to provide much entropy. Lacking easy access to entropy, some devices may use hard-coded keys to seed random generators, or seed random generators from easily guessed unique identifiers such as the device's MAC address. A simple study demonstrated the widespread use of weak keys by finding many embedded systems, such as routers, using the same keys. It was thought that the number of weak keys found would have been far higher if simple and often attacker-determinable one-time unique identifiers had not been incorporated into the entropy of some of these systems.

Other systems

There are some software packages that allow one to use a userspace process to gather random characters, exactly as /dev/random does, such as EGD, the Entropy Gathering Daemon.

Hardware-originated entropy

Modern CPUs and hardware often feature integrated generators that can provide high-quality and high-speed entropy to operating systems. On systems based on the Linux kernel, one can read the entropy generated from such a device through /dev/hw_random. However, /dev/hw_random can sometimes be slow, usually around 80 KiB/s.

There are some companies manufacturing entropy generation devices, and some of them are shipped with drivers for Linux.

On Debian, one can install the rng-tools package (apt-get install rng-tools), which supports the true random number generators (TRNGs) found in CPUs supporting the RdRand instruction, in Trusted Platform Modules, and in some Intel, AMD, or VIA chipsets, effectively increasing the entropy collected into /dev/random and potentially improving its cryptographic strength. This is especially useful on headless systems that have no other sources of entropy.

Practical implications

System administrators, especially those supervising Internet servers, have to ensure that the server processes will not halt because of entropy depletion. Entropy on servers utilising the Linux kernel, or any other kernel or userspace process that generates entropy from the console and the storage subsystem, is often less than ideal because of the lack of a mouse and keyboard, thus servers have to generate their entropy from a limited set of resources such as IDE timings. 

The entropy pool size in Linux is viewable through the file /proc/sys/kernel/random/entropy_avail and should generally be at least 2000 bits (out of a maximum of 4096). Entropy changes frequently.
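
For example, a small Python sketch for checking the pool on a Linux system, using the path given above:

  def entropy_available(path="/proc/sys/kernel/random/entropy_avail"):
      """Return the kernel's current entropy estimate in bits."""
      with open(path) as f:
          return int(f.read().strip())

  if entropy_available() < 2000:
      print("Warning: entropy pool is low")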

Administrators responsible for systems that have low or zero entropy should not attempt to use /dev/urandom as a substitute for /dev/random as this may cause SSL/TLS connections to have lower-grade encryption.

Some software systems change their Diffie-Hellman keys often, and this may in some cases help a server to continue functioning normally even with an entropy bottleneck.

On servers with low entropy, a process can appear hung when it is waiting for random characters to appear in /dev/random (on Linux-based systems). For example, there was a known problem in Debian that caused exim4 to hang in some cases because of this.

Security

Entropy sources can be used for keyboard timing attacks.

Entropy can affect the cryptography (TLS/SSL) of a server: If a server fails to use a proper source of randomness, the keys generated by the server will be insecure. In some cases a cracker (malicious attacker) can guess some bits of entropy from the output of a pseudorandom number generator (PRNG), and this happens when not enough entropy is introduced into the PRNG.

Potential sources

Commonly used entropy sources include the mouse, keyboard, and IDE timings, but there are other potential sources. For example, one could collect entropy from the computer's microphone, or by building a sensor to measure the air turbulence inside a disk drive.

For Unix/BSD derivatives there exists a USB based solution that utilizes an ARM Cortex CPU for filtering / securing the bit stream generated by two entropy generator sources in the system.

Biotic stress

From Wikipedia, the free encyclopedia
 
Biotic stress is stress that occurs as a result of damage done to an organism by other living organisms, such as bacteria, viruses, fungi, parasites, beneficial and harmful insects, weeds, and cultivated or native plants. It is different from abiotic stress, which is the negative impact of non-living factors on organisms, such as temperature, sunlight, wind, salinity, flooding and drought. The types of biotic stresses imposed on an organism depend on the climate where it lives as well as the species' ability to resist particular stresses. Biotic stress remains a broadly defined term, and those who study it face many challenges, such as the greater difficulty of controlling biotic stresses in an experimental context compared to abiotic stress.

The damage caused by these various living and nonliving agents can appear very similar. Even with close observation, accurate diagnosis can be difficult. For example, browning of leaves on an oak tree caused by drought stress may appear similar to leaf browning caused by oak wilt, a serious vascular disease caused by a fungus, or the browning caused by anthracnose, a fairly minor leaf disease.

Agriculture

Biotic stressors are a major focus of agricultural research, due to the vast economic losses caused to cash crops. The relationship between biotic stress and plant yield affects economic decisions as well as practical development. The impact of biotic injury on crop yield impacts population dynamics, plant-stressor coevolution, and ecosystem nutrient cycling.

Biotic stress also impacts horticultural plant health and the ecology of natural habitats, and it can cause dramatic changes in the host. Plants are exposed to many stress factors, such as drought, high salinity or pathogens, which reduce the yield of the cultivated plants or affect the quality of the harvested products. Although there are many kinds of biotic stress, the majority of plant diseases are caused by fungi. Arabidopsis thaliana is often used as a model plant to study the responses of plants to different sources of stress.

In history

Biotic stresses have had huge repercussions for humanity; an example of this is the potato blight, an oomycete which caused widespread famine in England, Ireland and Belgium in the 1840s. Another example is grape phylloxera coming from North America in the 19th century, which led to the Great French Wine Blight.

Today

Losses to pests and disease in crop plants continue to pose a significant threat to agriculture and food security. During the latter half of the 20th century, agriculture became increasingly reliant on synthetic chemical pesticides to provide control of pests and diseases, especially within the intensive farming systems common in the developed world. However, in the 21st century, this reliance on chemical control is becoming unsustainable. Pesticides tend to have a limited lifespan due to the emergence of resistance in the target pests, and are increasingly recognised in many cases to have negative impacts on biodiversity, and on the health of agricultural workers and even consumers.

Tomorrow

Due to the implications of climate change, it is suspected that plants will have increased susceptibility to pathogens. Additionally, the elevated threat of abiotic stresses (e.g. drought and heat) is likely to contribute to plant pathogen susceptibility.

Effect on plant growth

Photosynthesis

Many biotic stresses affect photosynthesis, as chewing insects reduce leaf area and virus infections reduce the rate of photosynthesis per leaf area. Vascular-wilt fungi compromise the water transport and photosynthesis by inducing stomatal closure.

Response to stress

Plants have co-evolved with their parasites for several hundred million years. This co-evolutionary process has resulted in the selection of a wide range of plant defences against microbial pathogens and herbivorous pests which act to minimise frequency and impact of attack. These defences include both physical and chemical adaptations, which may either be expressed constitutively, or in many cases, are activated only in response to attack. For example, utilization of high metal ion concentrations derived from the soil allow plants to reduce the harmful effects of biotic stressors (pathogens, herbivores etc.); meanwhile preventing the infliction of severe metal toxicity by way of safeguarding metal ion distribution throughout the plant with protective physiological pathways. Such induced resistance provides a mechanism whereby the costs of defence are avoided until defense is beneficial to the plant. At the same time, successful pests and pathogens have evolved mechanisms to overcome both constitutive and induced resistance in their particular host species. In order to fully understand and manipulate plant biotic stress resistance, we require a detailed knowledge of these interactions at a wide range of scales, from the molecular to the community level.

Inducible defense responses to insect herbivores

In order for a plant to defend itself against biotic stress, it must be able to differentiate between an abiotic and a biotic stress. A plant's response to herbivores starts with the recognition of certain chemicals that are abundant in the saliva of the herbivores. These compounds that trigger a response in plants are known as elicitors or herbivore-associated molecular patterns (HAMPs). HAMPs trigger signalling pathways throughout the plant, initiating its defence mechanism and allowing the plant to minimise damage to other regions. Phloem feeders, like aphids, do not cause a great deal of mechanical damage to plants, but they are still regarded as pests and can seriously harm crop yields. Plants have developed a defence mechanism using the salicylic acid pathway, which is also used in infection stress, when defending themselves against phloem feeders. Plants also perform a more direct attack on an insect's digestive system, using proteinase inhibitors. These proteinase inhibitors prevent protein digestion: once in the digestive system of an insect, they bind tightly and specifically to the active site of protein-hydrolysing enzymes such as trypsin and chymotrypsin. This mechanism is most likely to have evolved in plants when dealing with insect attack.

Plants detect elicitors in the insect's saliva. Once detected, a signal transduction network is activated. The presence of an elicitor causes an influx of Ca2+ ions into the cytosol. This increase in cytosolic concentration activates target proteins such as calmodulin and other binding proteins. Downstream targets, such as phosphorylation and transcriptional activation of stimulus-specific responses, are turned on by Ca2+-dependent protein kinases. In Arabidopsis, overexpression of the IQD1 calmodulin-binding transcriptional regulator leads to inhibition of herbivore activity. The role of calcium ions in this signal transduction network is therefore important.

Calcium ions also play a large role in activating a plant's defensive response. When fatty acid amides are present in insect saliva, mitogen-activated protein kinases (MAPKs) are activated. When activated, these kinases play a role in the jasmonic acid pathway, also referred to as the octadecanoid pathway, which is vital for the activation of defence genes in plants. The production of jasmonic acid, a phytohormone, is a result of this pathway. In an experiment using virus-induced gene silencing of two calcium-dependent protein kinases (CDPKs) in a wild tobacco (Nicotiana attenuata), it was discovered that the longer herbivory continued, the higher the accumulation of jasmonic acid in wild-type plants and in silenced plants; the production of more defence metabolites was also seen, as well as a decrease in the growth rate of the herbivore used, the tobacco hornworm (Manduca sexta). This example demonstrates the importance of MAP kinases in plant defence regulation.

Inducible defense responses to pathogens

Plants are capable of detecting invaders through the recognition of non-self signals despite the lack of a circulatory or immune system like those found in animals. Often a plant's first line of defense against microbes occurs at the plant cell surface and involves the detection of microorganism-associated molecular patterns (MAMPs). MAMPs include nucleic acids common to viruses and endotoxins on bacterial cell membranes which can be detected by specialized pattern-recognition receptors. Another method of detection involves the use of plant immune receptors to detect effector molecules released into plant cells by pathogens. Detection of these signals in infected cells leads to an activation of effector-triggered immunity (ETI), a type of innate immune response.

Both pattern-triggered immunity (PTI) and effector-triggered immunity (ETI) result from the upregulation of multiple defense mechanisms, including defensive chemical signaling compounds. An increase in the production of salicylic acid (SA) has been shown to be induced by pathogenic infection. The increase in SA results in the production of pathogenesis-related (PR) genes, which ultimately increase plant resistance to biotrophic and hemibiotrophic pathogens. Increases in jasmonic acid (JA) synthesis near the sites of pathogen infection have also been described. This physiological response of increased JA production has been implicated in the ubiquitination of jasmonate ZIM-domain (JAZ) proteins, which inhibit JA signaling, leading to their degradation and a subsequent increase in JA-activated defense genes.

Studies regarding the upregulation of defensive chemicals have confirmed the role of SA and JA in pathogen defense. In studies utilizing Arabidopsis mutants carrying the bacterial NahG gene, which inhibits the production and accumulation of SA, the mutants were shown to be more susceptible to pathogens than the wild-type plants. This was thought to result from the inability to produce critical defensive mechanisms, including increased PR gene expression. Other studies conducted by injecting tobacco plants and Arabidopsis with salicylic acid resulted in higher resistance to infection by the alfalfa and tobacco mosaic viruses, indicating a role for SA biosynthesis in reducing viral replication. Additionally, studies performed using Arabidopsis with mutated jasmonic acid biosynthesis pathways have shown JA mutants to be at an increased risk of infection by soil pathogens.

Along with SA and JA, other defensive chemicals have been implicated in plant viral pathogen defenses including abscisic acid (ABA), gibberellic acid (GA), auxin, and peptide hormones. The use of hormones and innate immunity presents parallels between animal and plant defenses, though pattern-triggered immunity is thought to have arisen independently in each.

Cross tolerance with abiotic stress

  • Evidence shows that a plant undergoing multiple stresses, both abiotic and biotic (usually pathogen or herbivore attack), can exhibit a positive effect on plant performance, with reduced susceptibility to biotic stress compared to its response to individual stresses. The interaction leads to crosstalk between the respective hormone signalling pathways, which either induce or antagonize one another, restructuring the gene machinery to increase tolerance and defense reactions.
  • Reactive oxygen species (ROS) are key signalling molecules produced in response to biotic and abiotic stress cross tolerance. ROS are produced in response to biotic stresses during the oxidative burst.
  • Dual stress imposed by ozone (O3) and pathogens affects crop tolerance and leads to altered host-pathogen interactions (Fuhrer, 2003). Alteration in the pathogenic potential of pests due to O3 exposure is of ecological and economic importance.
  • Tolerance to both biotic and abiotic stresses has been achieved. In maize, breeding programmes have led to plants which are tolerant to drought and have additional resistance to the parasitic weed Striga hermonthica.

Remote sensing

The Agricultural Research Service (ARS) and various government agencies and private institutions have provided a great deal of fundamental information relating spectral reflectance and thermal emittance properties of soils and crops to their agronomic and biophysical characteristics. This knowledge has facilitated the development and use of various remote sensing methods for non-destructive monitoring of plant growth and development and for the detection of many environmental stresses that limit plant productivity. Coupled with rapid advances in computing and position-locating technologies, remote sensing from ground-, air-, and space-based platforms is now capable of providing detailed spatial and temporal information on plant response to the local environment that is needed for site-specific agricultural management approaches. This is very important today because increasing pressure on global food productivity due to population growth has created a demand for stress-tolerant crop varieties that has never been greater.

High-context and low-context cultures

From Wikipedia, the free encyclopedia
 
In anthropology, high-context culture and low-context culture are terms describing how explicit the messages exchanged in a culture are, and how important the context is in communication. High- and low-context cultures fall on a continuum that describes how a person communicates with others through their range of communication abilities: utilizing gestures, relations, body language, verbal messages, or non-verbal messages. These concepts were first introduced by the anthropologist Edward T. Hall in his 1976 book Beyond Culture. According to Hall, in a low-context culture, the message will be interpreted through just the words (whether written or spoken) and their explicit meaning. In a high-context culture, messages are also interpreted using tone of voice, gesture, silence or implied meaning, as well as context or situation. There, the receiver is expected to use the situation, messages and cultural norms to understand the message.

High-context cultures often stem from less direct verbal and nonverbal communication, utilizing small communication gestures and reading into these less direct messages with more meaning. Low-context cultures are the opposite; direct verbal communication is needed to properly understand a message being said and doing so relies heavily on explicit verbal skills.

"High" and "low" context cultures typically refer to language groups, nationalities, or regional communities. However, they have also been applied to corporations, professions and other cultural groups, as well as settings such as online and offline communication.

Examples of higher and lower context cultures

Cultural contexts are not absolutely "high" or "low". Instead, a comparison between cultures may find communication differences to a greater or lesser degree. Typically a high-context culture will be relational, collectivist, intuitive, and contemplative. They place a high value on interpersonal relationships and group members are a very close-knit community. Typically a low-context culture will be less close-knit, and so individuals communicating will have fewer relational cues when interpreting messages. Therefore, it is necessary for more explicit information to be included in the message so it is not misinterpreted. Not all individuals in a culture can be defined by cultural stereotypes, and there will be variations within a national culture in different settings. For example, Hall describes how Japanese culture has both low- and high-context situations. However, understanding the broad tendencies of predominant cultures can help inform and educate individuals on how to better facilitate communication between individuals of differing cultural backgrounds. 

Although the concept of high- and low-context cultures is usually applied in the field of analyzing national cultures, it can also be used to describe scientific or corporate cultures, or specific settings such as airports or law courts. A simplified example mentioned by Hall is that scientists working in "hard science" fields (like chemistry and physics) tend to have lower-context cultures: because their knowledge and models have fewer variables, they will typically include less context for each event they describe. In contrast, scientists working with living systems need to include more context because there can be significant variables which impact the research outcomes. 

Croucher’s study examines the assertion that culture influences communication style (high/low context) preference. Data was gathered in India, Ireland, Thailand, and the United States where the results confirm that "high-context nations (India and Thailand) prefer the avoiding and obliging conflict styles more than low-context nations (Ireland and the United States), whereas low-context nations prefer the uncompromising and dominating communication style more than high-context nations."

In addition, Hall identified countries such as Japan, Arabic countries and some Latin American countries as practising high-context culture: “High context communication carries most of its information within physical acts and features such as avoiding eye contact or even the shrug of a shoulder.” On the other hand, he identified countries such as Germany, the United States and Scandinavia as low-context cultures. These countries are quite explicit and elaborate without assuming prior knowledge of each member’s history or background.

Cultures and languages are defined as higher or lower context on a spectrum. For example, it could be argued that Canadian French is higher context than Canadian English, but lower context than Spanish or European French. An individual from Texas (a higher-context culture) may communicate with a few words or a prolonged silence characteristic of Texan English, whereas a New Yorker would be very explicit (as typical of New York City English), although both speak the same language (American English) and are part of a nation (the United States of America) which is lower-context relative to other nations. Hall notes a similar difference between Navajo-speakers and English-speakers in a United States school.

Hall and Hall proposed a "spectrum" of national cultures from "high-context cultures" to "low-context cultures". This has been expanded to further countries by Copeland & Griggs (1985).
Higher-context culture: Afghans, African, Arabic, Brazilians, the Chinese, Filipinos, French Canadians, the French, Greeks, Hawaiian, Hungarians, Indians, Indonesian, Italians, Irish, Japanese, Koreans, Latin Americans, Nepali, Pakistani, Persian, Portuguese, Russians, Southern United States, the Spanish, Thai, Turks, Vietnamese, South Slavic, West Slavic.
Lower-context culture: Australian, Dutch, English Canadians, the English, Finnish, Germans, Israelis, New Zealand, Scandinavia, Switzerland, United States.
Cultural context can also shift and evolve. For instance, a study has argued that both Japan and Finland (high-context cultures) are becoming lower-context with the increased influence of Western European and United States culture.

The overlap and contrast between context cultures

The categories of context cultures are not totally separate. Both often take many aspects of the other's cultural communication abilities and strengths into account. The terms high- and low-context cultures are not classified with strict individual characteristics or boundaries. Instead, many cultures tend to have a mixture or at least some concepts that are shared between them, overlapping the two context cultures.

Ramos suggests that "in low context culture, communication members’ communication must be more explicit. As such, what is said is what is meant, and further analysis of the message is usually unnecessary." This implies that communication is quite direct and detailed because members of the culture are not expected to have knowledge of each other's histories, past experience or background. Because low-context communication concerns more direct messages, the meaning of these messages is more dependent on the words being spoken rather than on the interpretation of more subtle or unspoken cues. 

The Encyclopedia of Diversity and Social Justice states that, "high context defines cultures that are relational and collectivist, and which most highlight interpersonal relationships. Cultures and communication in which context is of great importance to structuring actions is referred to as high context." In such cultures, people are highly perceptive of actions. Furthermore, cultural aspects such as tradition, ceremony, and history are also highly valued. Because of this, many features of cultural behavior in high-context cultures, such as individual roles and expectations, do not need much detailed or thought-out explanation. 

According to Watson, "the influence of cultural variables interplays with other key factors – for example, social identities, those of age, gender, social class and ethnicity; this may include a stronger or weaker influence." A similarity that the two communication styles share is their influence on social characteristics such as age, gender, social class and ethnicity. For example, for someone who is older and more experienced within a society, the need for social cues may be higher or lower depending on the communication style. The same applies for the other characteristics in varied countries.

On the other hand, certain intercultural communication skills are unique to each culture, and the overlaps in communication techniques described above appear chiefly within subgroups, such as social circles or family settings. Many large cultures contain subcultures, which makes communicating with and defining them more complicated than the low-context/high-context scale suggests. The diversity within a main culture shows how the high-low scale shifts with social settings such as school, work, home, and other countries; this variation is what allows the scale to fluctuate even when a large culture is categorized as primarily one or the other.

Miscommunication within culture contexts

Between the two types of culture context, miscommunication can arise from differences in gestures, social cues, and intercultural adjustments; recognizing these differences and learning how to avoid miscommunication can benefit many situations. Since cultures differ, especially from a global standpoint where language also creates a barrier to communication, social interactions specific to one culture normally require a range of communication abilities that another culture may not understand or know about. This matters in many settings, such as the workplace, which often brings together diverse cultures and opportunities for collaboration. Awareness of potential miscommunication between high- and low-context cultures in the workplace or other intercultural settings encourages cohesion within a group through flexibility and the ability to understand one another.

How higher context relates to other cultural metrics

Diversity

Families, subcultures, and in-groups typically favour higher-context communication. Groups that can rely on a common background may not need to use words as explicitly to understand each other. Settings and cultures where people come together from a wider diversity of backgrounds, such as international airports, large cities, or multinational firms, tend to use lower-context communication forms.

Language

Hall links language to culture through the work of Sapir and Whorf on linguistic relativity. A trade language will typically need to explain more of the context explicitly, whereas a dialect can assume a high level of shared context. Because a low-context setting cannot rely on a shared understanding of potentially ambiguous messages, low-context cultures tend to give more information or to be more precise in their language. In contrast, a high-context language such as Japanese or Chinese can use a high number of homophones and still be understood by a listener who knows the context.

Elaborated and restricted codes

The concept of elaborated and restricted codes was introduced by the sociologist Basil Bernstein in his book Class, Codes and Control. An elaborated code indicates that the speaker expresses an idea by phrasing it from an abundant selection of alternatives, without assuming the listener shares significant amounts of common knowledge, which allows the speaker to explain the idea explicitly. In contrast, restricted codes are phrased from more limited alternatives, usually with collapsed and shortened sentences. Restricted codes therefore require listeners to share a great deal of common perspective to understand the implicit meanings and nuances of a conversation.

Restricted codes are commonly used in high-context culture groups, where group members share the same cultural background and can easily understand the implicit meanings "between the lines" without further elaboration. Conversely, in cultural groups with low context, where people share less common knowledge or ‘value individuality above group identification’, detailed elaboration becomes more essential to avoid misunderstanding.

Collectivism and individualism

The concepts of collectivism and individualism have been applied to high- and low-context cultures by Dutch psychologist Geert Hofstede in his Cultural Dimensions Theory. Collectivist societies prioritize the group over the individual, and vice versa for individualist ones. In high-context cultures, language may be used to assist and maintain relationship-building and to focus on process. India and Japan are typically high-context, highly collectivistic cultures, where business is done by building relationships and maintaining respectful communication.

Individualistic cultures promote the development of individual values and independent social groups. Individualism may lead to communicating to all people in a group in the same way, rather than offering hierarchical respect to certain members. Because individualistic cultures may value cultural diversity, a more explicit way of communicating is often required to avoid misunderstanding. Language may be used to achieve goals or exchange information. The USA and Australia are typically low-context, highly individualistic cultures, where transparency and competition in business are prized.

Stability and durability of tradition

High-context cultures tend to be more stable, as their communication is more economical, fast, efficient, and satisfying; these benefits, however, come at the cost of devoting time to preprogramming cultural background, and the high stability can come with a high barrier to development. By contrast, low-context cultures tend to change more rapidly and drastically, allowing extension to happen at a faster rate. Low-context communication may also fail because of information overload, which causes the culture to lose its screening function.

Therefore, higher-context cultures tend to correlate with cultures that also have a strong sense of tradition and history and change little over time. For example, Native Americans in the United States have higher-context cultures with a strong sense of tradition and history, compared with general American culture. Focusing on tradition creates opportunities for higher-context messages between individuals of each new generation, and the high-context culture in turn reinforces this stability, allowing the tradition to be maintained. This contrasts with lower-context cultures, in which the shared experiences on which communication is built can change drastically from one generation to the next, creating communication gaps between parents and children, as in the United States.

Facial expression and gesture

Culture also affects how individuals interpret other people's facial expressions. An experiment performed at the University of Glasgow showed that different cultures have different understandings of the facial expression signals of the six basic emotions, the so-called "universal language of emotion": happiness, surprise, fear, disgust, anger, and sadness. In high-context cultures, facial expressions and gestures take on greater importance in conveying and understanding a message, and the receiver may require more cultural context to understand "basic" displays of emotion.

Marketing and advertising perspective

Cultural differences in advertising and marketing may also be explained through high- and low-context cultures. One study on McDonald's online advertising compared Japan, China, Korea, Hong Kong, Pakistan, Germany, Denmark, Sweden, Norway, Finland, and the United States, and found that in high-context countries, the advertising used more colors, movements, and sounds to give context, while in low-context cultures the advertising focused more on verbal information and linear processes.
