Thursday, November 9, 2023

Substance dependence

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Substance_dependence

Substance dependence, also known as drug dependence, is a biopsychological condition in which an individual's functionality comes to depend on repeated consumption of a psychoactive substance: an adaptive state develops with continued use, producing withdrawal upon cessation and thereby driving re-consumption of the drug. Drug addiction, a distinct concept from substance dependence, is defined as compulsive, out-of-control drug use despite negative consequences. An addictive drug is a drug which is both rewarding and reinforcing. ΔFosB, a gene transcription factor, is now known to be a critical component and common factor in the development of virtually all forms of behavioral and drug addictions, but not of dependence.

The International Classification of Diseases classifies substance dependence as a mental and behavioural disorder. Within the framework of the 4th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), substance dependence was redefined as a drug addiction and could be diagnosed without the occurrence of a withdrawal syndrome. It was described as follows: "When an individual persists in use of alcohol or other drugs despite problems related to use of the substance, substance dependence may be diagnosed. Compulsive and repetitive use may result in tolerance to the effect of the drug and withdrawal symptoms when use is reduced or stopped. This, along with Substance Abuse are considered Substance Use Disorders." In the DSM-5 (released in 2013), substance abuse and substance dependence were merged into the category of substance use disorders, and they no longer exist as individual diagnoses.

Addiction and dependence glossary
  • addiction – a biopsychosocial disorder characterized by persistent use of drugs (including alcohol) despite substantial harm and adverse consequences
  • addictive drug – a psychoactive substance that with repeated use is associated with significantly higher rates of substance use disorders, due in large part to its effect on brain reward systems
  • dependence – an adaptive state associated with a withdrawal syndrome upon cessation of repeated exposure to a stimulus (e.g., drug intake)
  • drug sensitization or reverse tolerance – the escalating effect of a drug resulting from repeated administration at a given dose
  • drug withdrawal – symptoms that occur upon cessation of repeated drug use
  • physical dependence – dependence that involves persistent physical–somatic withdrawal symptoms (e.g., fatigue and delirium tremens)
  • psychological dependence – dependence that involves emotional–motivational withdrawal symptoms (e.g., dysphoria and anhedonia); socially it is often seen as far milder than physical dependence, as something that could be overcome with enough willpower
  • reinforcing stimuli – stimuli that increase the probability of repeating behaviors paired with them
  • rewarding stimuli – stimuli that the brain interprets as intrinsically positive and desirable or as something to approach
  • sensitization – an amplified response to a stimulus resulting from repeated exposure to it
  • substance use disorder – a condition in which the use of substances leads to clinically and functionally significant impairment or distress
  • tolerance – the diminishing effect of a drug resulting from repeated administration at a given dose

Withdrawal

Withdrawal is the body's reaction to abstaining from a substance upon which a person has developed a dependence syndrome. When dependence has developed, cessation of substance use produces an unpleasant state, which promotes continued drug use through negative reinforcement; i.e., the drug is used to escape or avoid re-entering the associated withdrawal state. The withdrawal state may include physical-somatic symptoms (physical dependence), emotional-motivational symptoms (psychological dependence), or both. Chemical and hormonal imbalances, as well as psychological stress, may arise if the substance is not re-introduced.

Infants also experience substance withdrawal, known as neonatal abstinence syndrome (NAS), which can have severe and life-threatening effects. Use of addictive substances such as alcohol by expectant mothers not only causes NAS but also an array of other issues that can affect the infant throughout their lifetime.

Risk factors

Poor mental health is a risk factor for illicit drug dependence or abuse.

Dependence potential

The dependence potential of a drug varies from substance to substance, and from individual to individual. Dose, frequency, pharmacokinetics of a particular substance, route of administration, and time are critical factors for developing a drug dependence.

An article in The Lancet compared the harm and dependence liability of 20 drugs, using a scale from zero to three for physical dependence, psychological dependence, and pleasure to create a mean score for dependence. Selected results can be seen in the chart below.

Drug              Mean   Pleasure   Psychological dependence   Physical dependence
Heroin/Morphine   3.00   3.0        3.0                        3.0
Cocaine           2.39   3.0        2.8                        1.3
Tobacco           2.21   2.3        2.6                        1.8
Barbiturates      2.01   2.0        2.2                        1.8
Alcohol           1.93   2.3        1.9                        1.6
Benzodiazepines   1.83   1.7        2.1                        1.8
Amphetamine       1.67   2.0        1.9                        1.1
Cannabis          1.51   1.9        1.7                        0.8
Ecstasy           1.13   1.5        1.2                        0.7
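
The "Mean" column is simply the average of the three sub-scores. A minimal check of that arithmetic in Python (the values are the rounded sub-scores from the table, so the recomputed means can differ by a few hundredths from the published means, which were presumably computed from unrounded data):

```python
# Recompute the mean dependence score as the average of the three
# sub-scores (pleasure, psychological dependence, physical dependence).
drugs = {
    "Heroin/Morphine": (3.0, 3.0, 3.0),
    "Cocaine":         (3.0, 2.8, 1.3),
    "Tobacco":         (2.3, 2.6, 1.8),
}

for name, scores in drugs.items():
    print(f"{name}: {sum(scores) / len(scores):.2f}")
# Heroin/Morphine: 3.00  Cocaine: 2.37 (table: 2.39)  Tobacco: 2.23 (table: 2.21)
```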

Capture rates

Capture rates give the percentage of users who reported that they had become dependent on their respective drug at some point.

Drug % of users
Cannabis 9%
Alcohol 15.4%
Cocaine 16.7%
Heroin 23.1%
Tobacco 31.9%

Biomolecular mechanisms

Psychological dependence

Two factors have been identified as playing pivotal roles in psychological dependence: the neuropeptide "corticotropin-releasing factor" (CRF) and the gene transcription factor "cAMP response element binding protein" (CREB). The nucleus accumbens (NAcc) is one brain structure that has been implicated in the psychological component of drug dependence. In the NAcc, CREB is activated by cyclic adenosine monophosphate (cAMP) immediately after a high and triggers changes in gene expression that affect proteins such as dynorphin; dynorphin peptides reduce dopamine release into the NAcc by temporarily inhibiting the reward pathway. A sustained activation of CREB thus forces a larger dose to be taken to reach the same effect. In addition, it leaves the user feeling generally depressed and dissatisfied, and unable to find pleasure in previously enjoyable activities, often leading to a return to the drug for another dose.

In addition to CREB, it is hypothesized that stress mechanisms play a role in dependence. Koob and Kreek have hypothesized that during drug use, CRF activates the hypothalamic–pituitary–adrenal axis (HPA axis) and other stress systems in the extended amygdala. This activation influences the dysregulated emotional state associated with psychological dependence. They found that as drug use escalates, so does the presence of CRF in human cerebrospinal fluid. In rat models, the separate use of CRF inhibitors and CRF receptor antagonists both decreased self-administration of the drug of study. Other studies in this review showed dysregulation of other neuropeptides that affect the HPA axis, including enkephalin, an endogenous opioid peptide that regulates pain. It also appears that µ-opioid receptors, which enkephalin acts upon, are influential in the reward system and can regulate the expression of stress hormones.

Increased expression of AMPA receptors in nucleus accumbens MSNs is a potential mechanism of aversion produced by drug withdrawal.

Physical dependence

Upregulation of the cAMP signal transduction pathway in the locus coeruleus by CREB has been implicated as the mechanism responsible for certain aspects of opioid-induced physical dependence. The temporal course of withdrawal correlates with LC firing, and administration of α2 agonists into the locus coeruleus leads to a decrease in LC firing and norepinephrine release during withdrawal. A possible mechanism involves upregulation of NMDA receptors, which is supported by the attenuation of withdrawal by NMDA receptor antagonists. Physical dependence on opioids has been observed to produce an elevation of extracellular glutamate, an increase in the NMDA receptor subunits NR1 and NR2A, phosphorylated CaMKII, and c-fos. Expression of CaMKII and c-fos is attenuated by NMDA receptor antagonists, which is associated with blunted withdrawal in adult rats, but not in neonatal rats. While acute administration of opioids decreases AMPA receptor expression and depresses both NMDA and non-NMDA excitatory postsynaptic potentials in the NAc, withdrawal involves a lowered threshold for LTP and an increase in spontaneous firing in the NAc.

Diagnosis

DSM classification

"Substance dependence", as defined in the DSM-IV, can be diagnosed with physiological dependence, evidence of tolerance or withdrawal, or without physiological dependence. DSM-IV substance dependencies include:

Management

Addiction is a complex but treatable condition. It is characterized by compulsive drug craving, seeking, and use that persists even if the user is aware of severe adverse consequences. For some people, addiction becomes chronic, with periodic relapses even after long periods of abstinence. As a chronic, relapsing disease, addiction may require continued treatments to increase the intervals between relapses and diminish their intensity. While some with substance issues recover and lead fulfilling lives, others require ongoing additional support. The ultimate goal of addiction treatment is to enable an individual to manage their substance misuse; for some this may mean abstinence. Immediate goals are often to reduce substance abuse, improve the patient's ability to function, and minimize the medical and social complications of substance abuse and their addiction; this is called "harm reduction".

Treatments for addiction vary widely according to the types of drugs involved, amount of drugs used, duration of the drug addiction, medical complications and the social needs of the individual. Determining the best type of recovery program for an addicted person depends on a number of factors, including: personality, drugs of choice, concept of spirituality or religion, mental or physical illness, and local availability and affordability of programs.

Many different ideas circulate regarding what is considered a successful outcome in the recovery from addiction. Programs that emphasize controlled drinking exist for alcohol addiction. Opiate replacement therapy has been a medical standard of treatment for opioid addiction for many years.

Treatments and attitudes toward addiction vary widely among different countries. In the US and developing countries, the goal of commissioners of treatment for drug dependence is generally total abstinence from all drugs. Other countries, particularly in Europe, argue the aims of treatment for drug dependence are more complex, with treatment aims including reduction in use to the point that drug use no longer interferes with normal activities such as work and family commitments; shifting the addict away from more dangerous routes of drug administration such as injecting to safer routes such as oral administration; reduction in crime committed by drug addicts; and treatment of other comorbid conditions such as AIDS, hepatitis and mental health disorders. These kinds of outcomes can be achieved without eliminating drug use completely. Drug treatment programs in Europe often report more favorable outcomes than those in the US because the criteria for measuring success are functional rather than abstinence-based. The supporters of programs with total abstinence from drugs as a goal believe that enabling further drug use means prolonged drug use and risks an increase in addiction and complications from addiction.

Residential

Residential drug treatment can be broadly divided into two camps: 12-step programs and therapeutic communities. 12-step programs are a nonclinical, support-group- and spirituality-based approach to treating addiction. Therapy typically involves the use of cognitive-behavioral therapy, an approach that looks at the relationship between thoughts, feelings, and behaviors, addressing the root cause of maladaptive behavior. Cognitive-behavioral therapy treats addiction as a behavior rather than a disease, and therefore as something that can subsequently be unlearned. Cognitive-behavioral therapy programs recognize that, for some individuals, controlled use is a more realistic possibility.

One of many recovery methods is the 12-step recovery program, with prominent examples including Alcoholics Anonymous, Narcotics Anonymous, and Pills Anonymous. They are commonly known and used for a variety of addictions, both for the addicted individual and for that individual's family. Substance-abuse rehabilitation (rehab) centers offer a residential treatment program for some of the more seriously addicted, in order to isolate the patient from drugs and from interactions with other users and dealers. Outpatient clinics usually offer a combination of individual counseling and group counseling. Frequently, a physician or psychiatrist will prescribe medications in order to help patients cope with the side effects of their addiction. Medications can help immensely with anxiety and insomnia, can treat underlying mental disorders (cf. self-medication hypothesis, Khantzian 1997) such as depression, and can help reduce or eliminate withdrawal symptoms when withdrawing from physiologically addictive drugs. Some examples:
  • Benzodiazepines for alcohol detoxification, which prevent delirium tremens and its complications.
  • A slow taper of benzodiazepines, or a taper of phenobarbital (sometimes including another antiepileptic agent such as gabapentin, pregabalin, or valproate), for withdrawal from barbiturates or benzodiazepines.
  • Drugs such as baclofen to reduce cravings and the propensity for relapse among users of any drug; it is especially effective in stimulant users and alcoholics, in whom it is nearly as effective as benzodiazepines in preventing complications.
  • Clonidine, an alpha-agonist, and loperamide for opioid detoxification, for first-time users or those who wish to attempt an abstinence-based recovery (90% of opioid users relapse to active addiction within eight months or are multiple-relapse patients).
  • Replacing an opioid that is interfering with or destructive to a user's life, such as illicitly obtained heroin, Dilaudid, or oxycodone, with an opioid that can be administered legally, reduces or eliminates drug cravings, and does not produce a high, such as methadone or buprenorphine. This opioid replacement therapy is the gold standard for treatment of opioid dependence in developed countries, reducing risk and cost to both user and society more effectively than any other treatment modality for opioid dependence, and it shows the best short-term and long-term gains for the user, with the greatest longevity, least risk of fatality, greatest quality of life, and lowest risk of relapse and of legal issues including arrest and incarceration.

In a survey of treatment providers from three separate institutions (the National Association of Alcoholism and Drug Abuse Counselors, Rational Recovery Systems, and the Society of Psychologists in Addictive Behaviors), the providers' scores on the "Spiritual Belief Scale" (a scale measuring belief in the four spiritual characteristics of AA identified by Ernest Kurtz) were found to explain 41% of the variance in their scores on the "Addiction Belief Scale" (a scale measuring adherence to the disease model or the free-will model of addiction).

Behavioral programming

Behavioral programming is considered critical in helping those with addictions achieve abstinence. From the applied behavior analysis literature and the behavioral psychology literature, several evidence-based intervention programs have emerged: (1) behavioral marital therapy; (2) the community reinforcement approach; (3) cue exposure therapy; and (4) contingency management strategies. The same literature suggests that social skills training adjunctive to inpatient treatment of alcohol dependence is probably efficacious. Community reinforcement has both efficacy and effectiveness data. In addition, behavioral treatments such as community reinforcement and family training (CRAFT) have helped family members get their loved ones into treatment. Motivational intervention has also been shown to be an effective treatment for substance dependence.

Alternative therapies

Alternative therapies, such as acupuncture, are used by some practitioners to alleviate the symptoms of drug addiction. In 1997, the American Medical Association (AMA) adopted, as policy, the following statement after a report on a number of alternative therapies including acupuncture:

There is little evidence to confirm the safety or efficacy of most alternative therapies. Much of the information currently known about these therapies makes it clear that many have not been shown to be efficacious. Well-designed, stringently controlled research should be done to evaluate the efficacy of alternative therapies.

In addition, research on the effects of psilocybin in smokers found that 80% of participants had quit smoking for six months following the treatment, and 60% remained smoke-free five years afterward.

Treatment and issues

Medical professionals need to apply many techniques and approaches to help patients with substance-related disorders. A psychodynamic approach is one of the techniques that psychologists use to address addiction problems. In psychodynamic therapy, psychologists need to understand the conflicts and needs of the addicted person, and also to locate the defects of their ego and defense mechanisms. Using this approach alone has proven ineffective in solving addiction problems. Cognitive and behavioral techniques should be integrated with psychodynamic approaches to achieve effective treatment for substance-related disorders. Cognitive treatment requires psychologists to consider what is happening in the brain of an addicted person: drugs manipulate the dopamine reward center of the brain. From this starting point, cognitive psychologists need to find ways to change the thought processes of the addicted person.

Cognitive approach

There are two routes typically applied in a cognitive approach to substance abuse: tracking the thoughts that pull patients toward addiction, and tracking the thoughts that prevent them from relapsing. Behavioral techniques have the widest application in treating substance-related disorders. Behavioral psychologists can use the techniques of "aversion therapy", based on the findings of Pavlov's classical conditioning: pairing abused substances with unpleasant stimuli or conditions, for example pairing pain, electrical shock, or nausea with alcohol consumption. Medications may also be used in this approach, such as disulfiram to pair unpleasant effects with the thought of alcohol use. Psychologists tend to use an integration of all these approaches to produce reliable and effective treatment. With the advanced clinical use of medications, biological treatment is now considered one of the most efficient interventions that psychologists may use as treatment for those with substance dependence.

Medicinal approach

Another approach is to use medicines that interfere with the functions of the drugs in the brain. Similarly, one can also substitute the misused substance with a weaker, safer version to slowly taper the patient off their dependence, as is the case with Suboxone in the context of opioid dependence. These approaches are aimed at the process of detoxification; medical professionals weigh the consequences of withdrawal symptoms against the risk of the patient staying dependent on these substances. Withdrawal can be a very difficult and painful time for patients, so most treatment programs have steps in place to handle severe withdrawal symptoms, whether through behavioral therapy or other medications. Biological intervention should be combined with behavioral therapy approaches and other non-pharmacological techniques. Group therapies built on anonymity, teamwork, and the sharing of daily-life concerns among people who also have substance dependence issues can have a great impact on outcomes. However, these programs have proved more effective for and influential on persons who have not reached levels of serious dependence.

Vaccines

  • TA-CD is an active vaccine developed by the Xenova Group which is used to negate the effects of cocaine, making it suitable for use in treatment of addiction. It is created by combining norcocaine with inactivated cholera toxin.
  • TA-NIC is a proprietary vaccine in development, similar to TA-CD, that is used to create human anti-nicotine antibodies, which neutralize nicotine in the body so that it is no longer effective.

History

The phenomenon of drug addiction has occurred to some degree throughout recorded history (see "Opium"). Modern agricultural practices, improvements in access to drugs, advancements in biochemistry, and dramatic increases in the recommendation of drug usage by clinical practitioners have exacerbated the problem significantly in the 20th century. Improved means of active biological agent manufacture and the introduction of synthetic compounds, such as fentanyl and methamphetamine, are also factors contributing to drug addiction.

For the entirety of US history, drugs have been used by some members of the population. In the country's early years, most drug use by the settlers was of alcohol or tobacco.

The 19th century saw opium usage in the US become much more common and popular. Morphine was isolated in the early 19th century and came to be prescribed commonly by doctors, both as a painkiller and as an intended cure for opium addiction. At the time, the prevailing medical opinion was that the addiction process occurred in the stomach; it was therefore hypothesized that patients would not become addicted to morphine if it was injected via a hypodermic needle, and that injection might even cure opium addiction. However, many people did become addicted to morphine. In particular, addiction to opium became widespread among soldiers fighting in the Civil War, who very often required painkillers and thus were very often prescribed morphine. Women were also very frequently prescribed opiates, which were advertised as being able to relieve "female troubles".

Many soldiers in the Vietnam War were introduced to heroin and developed a dependency on the substance that persisted even after they returned to the US. Technological advances in travel meant that this increased demand for heroin in the US could now be met. Furthermore, as technology advanced, more drugs were synthesized and discovered, opening up new avenues to substance dependency.

Society and culture

Demographics

Internationally, the U.S. and Eastern Europe contain the countries with the highest substance use disorder prevalence (5-6%). Africa, Asia, and the Middle East contain the countries with the lowest worldwide prevalence (1-2%). Across the globe, those with a higher prevalence of substance dependence tended to be in their twenties, unemployed, and male. The National Survey on Drug Use and Health (NSDUH) reports on substance dependence/abuse rates in various population demographics across the U.S. When surveying populations based on race and ethnicity in those ages 12 and older, it was observed that American Indians/Alaskan Natives had among the highest rates and Asians had among the lowest rates in comparison to other racial/ethnic groups.

Substance Use in Racial/Ethnic Groups
Race/Ethnicity                     Dependence/Abuse Rate
Asian                              4.6%
Black                              7.4%
White                              8.4%
Hispanic                           8.6%
Mixed race                         10.9%
Native Hawaiian/Pacific Islander   11.3%
American Indian/Alaskan Native     14.9%

When surveying populations based on gender in those ages 12 and older, it was observed that males had a higher substance dependence rate than females; however, the difference in rates is not apparent until after age 17.

Substance Use in Different Genders with Respect to Age
Age            Male    Female
12 and older   10.8%   5.8%
12-17          5.3%    5.2%
18 or older    11.4%   5.8%

Alcohol dependence or abuse rates showed no correspondence with education level when populations ages 26 and older were surveyed across varying degrees of education. For illicit drug use, however, there was a correlation: those who had graduated from college had the lowest rates. Furthermore, dependence rates were greater in unemployed populations ages 18 and older and in metropolitan-residing populations ages 12 and older.

Illicit Drug Dependence Demographics (Education, Employment, and Region)
Education level           Rate   Employment status   Rate    Region               Rate
High school               2.5%   Unemployed          15.2%   Large metropolitan   8.6%
Some college, no degree   2.1%   Part-time           9.3%    Small metropolitan   8.4%
College graduate          0.9%   Full-time           9.5%    Non-metropolitan     6.6%

The National Opinion Research Center at the University of Chicago reported an analysis on disparities within admissions for substance abuse treatment in the Appalachian region, which comprises 13 states and 410 counties in the Eastern part of the U.S. While their findings for most demographic categories were similar to the national findings by NSDUH, they had different results for racial/ethnic groups which varied by sub-regions. Overall, Whites were the demographic with the largest admission rate (83%), while Alaskan Native, American Indian, Pacific Islander, and Asian populations had the lowest admissions (1.8%).

Legislation

Depending on the jurisdiction, addictive drugs may be legal, legal only as part of a government sponsored study, illegal to use for any purpose, illegal to sell, or even illegal to merely possess.

Most countries have legislation which brings various drugs and drug-like substances under the control of licensing systems. Typically this legislation covers any or all of the opiates, amphetamines, cannabinoids, cocaine, barbiturates, benzodiazepines, anesthetics, hallucinogens, derivatives, and a variety of more modern synthetic drugs. Unlicensed production, supply, or possession is a criminal offence.

Usually, however, drug classification under such legislation is not related simply to addictiveness. The substances covered often have very different addictive properties. Some are highly prone to cause physical dependency, while others rarely cause any form of compulsive need whatsoever. Also, under legislation specifically about drugs, alcohol and nicotine are not usually included.

Although the legislation may be justifiable on moral or public health grounds, it can make addiction or dependency a much more serious issue for the individual: reliable supplies of a drug become difficult to secure, and the individual becomes vulnerable to both criminal abuse and legal punishment.

It is unclear whether laws against illegal drug use do anything to stem usage and dependency. In jurisdictions where addictive drugs are illegal, they are generally supplied by drug dealers, who are often involved with organized crime. Even though the cost of producing most illegal addictive substances is very low, their illegality combined with the addict's need permits the seller to command a premium price, often hundreds of times the production cost. As a result, addicts sometimes turn to crime to support their habit.

United States

In the United States, drug policy is primarily controlled by the federal government. The Department of Justice's Drug Enforcement Administration (DEA) enforces controlled substances laws and regulations. The Department of Health and Human Services' Food and Drug Administration (FDA) serves to protect and promote public health by controlling the manufacturing, marketing, and distribution of products such as medications.

The United States' approach to substance abuse has shifted over the last decade and is continuing to change. The federal government was minimally involved in the 19th century. In the early 20th century it relied on taxation of drugs, then moved to criminalizing drug abuse in the mid-20th century through legislation and agencies such as the Federal Bureau of Narcotics (FBN), in response to the nation's growing substance abuse problem. The strict punishments for drug offenses highlighted the fact that drug abuse was a multi-faceted problem. The President's Advisory Commission on Narcotics and Drug Abuse of 1963 addressed the need for a medical solution to drug abuse. Nevertheless, the federal government continued to pursue enforcement through agencies such as the DEA and further legislation such as the Controlled Substances Act (CSA), the Comprehensive Crime Control Act of 1984, and the Anti-Drug Abuse Acts.

In the past decade, there have been growing efforts through state and local legislation to shift from criminalizing drug abuse to treating it as a health condition requiring medical intervention. 28 states currently allow the establishment of needle exchanges. Florida, Iowa, Missouri, and Arizona all introduced bills to allow needle exchanges in 2019. These bills have grown in popularity across party lines since needle exchanges were first introduced in Amsterdam in 1983. In addition, AB-186 (Controlled substances: overdose prevention program) was introduced to operate safe injection sites in the City and County of San Francisco. The bill was vetoed on September 30, 2018, by California Governor Jerry Brown. The legality of such sites is still under discussion, so there are no sites of this kind in the United States yet. However, there is growing international evidence for successful safe injection facilities.

Posthuman

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Posthuman

Posthuman or post-human is a concept originating in the fields of science fiction, futurology, contemporary art, and philosophy that means a person or entity that exists in a state beyond being human. The concept aims at addressing a variety of questions, including ethics and justice, language and trans-species communication, social systems, and the intellectual aspirations of interdisciplinarity.

Posthumanism is not to be confused with transhumanism (the biotechnological enhancement of human beings) and narrow definitions of the posthuman as the hoped-for transcendence of materiality. The notion of the posthuman comes up both in posthumanism as well as transhumanism, but it has a special meaning in each tradition. In 2017, Penn State University Press in cooperation with Stefan Lorenz Sorgner and James Hughes established the Journal of Posthuman Studies, in which all aspects of the concept "posthuman" can be analysed.

Posthumanism

In critical theory, the posthuman is a speculative being that represents or seeks to re-conceive the human. It is the object of posthumanist criticism, which critically questions humanism, a branch of humanist philosophy which claims that human nature is a universal state from which the human being emerges; human nature is autonomous, rational, capable of free will, and unified in itself as the apex of existence. Thus, the posthuman position recognizes imperfectability and disunity within oneself, and understands the world through heterogeneous perspectives while seeking to maintain intellectual rigor and dedication to objective observations. Key to this posthuman practice is the ability to fluidly change perspectives and manifest oneself through different identities. The posthuman, for critical theorists of the subject, has an emergent ontology rather than a stable one; in other words, the posthuman is not a singular, defined individual, but rather one who can "become" or embody different identities and understand the world from multiple, heterogeneous perspectives.

Approaches to posthumanism are not homogeneous, and have often been very critical. The term itself is contested, with one of the foremost authors associated with posthumanism, Manuel DeLanda, decrying the term as "very silly." Covering the ideas of, for example, Robert Pepperell's The Posthuman Condition, and Hayles's How We Became Posthuman under a single term is distinctly problematic due to these contradictions.

The posthuman is roughly synonymous with the "cyborg" of A Cyborg Manifesto by Donna Haraway. Haraway's conception of the cyborg is an ironic take on traditional conceptions of the cyborg, inverting the traditional trope of a cyborg whose presence questions the salient line between humans and robots. Haraway's cyborg is in many ways the "beta" version of the posthuman, as her cyborg theory prompted the issue to be taken up in critical theory. Following Haraway, Hayles, whose work grounds much of the critical posthuman discourse, asserts that liberal humanism—which separates the mind from the body and thus portrays the body as a "shell" or vehicle for the mind—becomes increasingly complicated in the late 20th and 21st centuries because information technology puts the human body in question. Hayles maintains that we must be conscious of information technology advancements while understanding information as "disembodied," that is, something which cannot fundamentally replace the human body but can only be incorporated into it and into human life practices.

Post-posthumanism and post-cyborg ethics

The idea of post-posthumanism (post-cyborgism) has recently been introduced. This body of work outlines the after-effects of long-term adaptation to cyborg technologies and their subsequent removal (e.g., what happens after 20 years of constantly wearing computer-mediating eyeglass technologies and subsequently removing them, or after long-term adaptation to virtual worlds followed by a return to "reality"), as well as the associated post-cyborg ethics (e.g., the ethics of forced removal of cyborg technologies by authorities).

Posthuman political and natural rights have been framed on a spectrum with animal rights and human rights. Posthumanism broadens the scope of what it means to be a valued life form and to be treated as such (in contrast to certain life forms being seen as less-than and being taken advantage of or killed off); it “calls for a more inclusive definition of life, and a greater moral-ethical response, and responsibility, to non-human life forms in the age of species blurring and species mixing. … [I]t interrogates the hierarchic ordering—and subsequently exploitation and even eradication—of life forms.”

Transhumanism

Definition

According to transhumanist thinkers, a posthuman is a hypothetical future being "whose basic capacities so radically exceed those of present humans as to be no longer unambiguously human by our current standards." Discussions of the posthuman in this tradition focus primarily on cybernetics, the posthuman as consequent, and the relationship to digital technology; the emphasis is on systems. Steve Nichols published the Posthuman Movement manifesto in 1988; his early evolutionary theory of mind (MVT) allows the development of sentient E1 brains. Transhumanism, by contrast, does not focus on these. Instead, it focuses on the modification of the human species via any kind of emerging science, including genetic engineering, digital technology, and bioengineering. Transhumanism is sometimes criticized for not adequately addressing the scope of posthumanism and its concerns for the evolution of humanism.

Methods

Posthumans could be completely synthetic artificial intelligences, or a symbiosis of human and artificial intelligence, or uploaded consciousnesses, or the result of making many smaller but cumulatively profound technological augmentations to a biological human, i.e. a cyborg. Some examples of the latter are redesigning the human organism using advanced nanotechnology or radical enhancement using some combination of technologies such as genetic engineering, psychopharmacology, life extension therapies, neural interfaces, advanced information management tools, memory enhancing drugs, wearable or implanted computers, and cognitive techniques.

Posthuman future

As used in this article, "posthuman" does not necessarily refer to a conjectured future where humans are extinct or otherwise absent from the Earth. Kevin Warwick says that both humans and posthumans will continue to exist but the latter will predominate in society over the former because of their abilities. Recently, scholars have begun to speculate that posthumanism provides an alternative analysis of apocalyptic cinema and fiction, often casting vampires, werewolves, zombies and greys as potential evolutions of the human form and being.

Many science fiction authors, such as Greg Egan, H. G. Wells, Isaac Asimov, Bruce Sterling, Frederik Pohl, Greg Bear, Charles Stross, Neal Asher, Ken MacLeod, Peter F. Hamilton, Ann Leckie, and authors of the Orion's Arm Universe, have written works set in posthuman futures.

Posthuman God

A variation on the posthuman theme is the notion of a "posthuman god": the idea that posthumans, being no longer confined to the parameters of human nature, might grow physically and mentally so powerful as to appear possibly god-like by present-day human standards. This notion should not be interpreted as being related to the idea portrayed in some science fiction that a sufficiently advanced species may "ascend" to a higher plane of existence; rather, it merely means that some posthuman beings may become so exceedingly intelligent and technologically sophisticated that their behaviour would not be comprehensible to modern humans, purely by reason of humans' limited intelligence and imagination.

Gesture recognition

From Wikipedia, the free encyclopedia
Child's hand location and movement being detected by a gesture recognition algorithm

Gesture recognition is an area of research and development in computer science and language technology concerned with the recognition and interpretation of human gestures. A subdiscipline of computer vision, it employs mathematical algorithms to interpret gestures. Gestures can originate from any bodily motion or state, but commonly originate from the face or hand. One area of the field is emotion recognition derived from facial expressions and hand gestures. Users can make simple gestures to control or interact with devices without physically touching them. Many approaches have been made using cameras and computer vision algorithms to interpret sign language; the identification and recognition of posture, gait, proxemics, and human behaviors are, however, also subjects of gesture recognition techniques. Gesture recognition is a path for computers to begin to better understand and interpret human body language, previously not possible through text or unenhanced graphical (GUI) user interfaces.

Overview

Middleware usually processes gesture recognition, then sends the results to the user.

Gesture recognition has applications in many areas and can be conducted with techniques from computer vision and image processing.

The literature includes ongoing work in the computer vision field on capturing gestures or more general human pose and movements by cameras connected to a computer.

The term "gesture recognition" has been used to refer more narrowly to non-text-input handwriting symbols, such as inking on a graphics tablet, multi-touch gestures, and mouse gesture recognition. This is computer interaction through the drawing of symbols with a pointing device cursor. Pen computing expands digital gesture recognition beyond traditional input devices such as keyboards and mice, and reduces the hardware impact of a system.

Gesture types

In computer interfaces, two types of gestures are distinguished: online gestures, which can be regarded as direct manipulations like scaling and rotating and are processed during the interaction; and offline gestures, which are processed after the interaction is finished, e.g., drawing a circle to activate a context menu.

  • Offline gestures: Those gestures that are processed after the user's interaction with the object. An example is a gesture to activate a menu.
  • Online gestures: Direct manipulation gestures. They are used to scale or rotate a tangible object.

Touchless interface

A touchless user interface (TUI) is an emerging type of technology wherein a device is controlled via body motion and gestures without touching a keyboard, mouse, or screen.

Types of touchless technology

There are several devices utilizing this type of interface, such as smartphones, laptops, game consoles, TVs, and music equipment.

One type of touchless interface uses the Bluetooth connectivity of a smartphone to activate a company's visitor management system. This eliminates having to touch an interface, for convenience or to avoid a potential source of contamination as during the COVID-19 pandemic.

Input devices

The ability to track a person's movements and determine what gestures they may be performing can be achieved through various tools. Kinetic user interfaces (KUIs) are an emerging type of user interface that allows users to interact with computing devices through the motion of objects and bodies. Examples of KUIs include tangible user interfaces and motion-aware systems such as the Wii and Microsoft's Kinect, as well as other interactive projects.

Although there is a large amount of research done in image/video-based gesture recognition, there is some variation in the tools and environments used between implementations.

  • Wired gloves. These can provide input to the computer about the position and rotation of the hands using magnetic or inertial tracking devices. Furthermore, some gloves can detect finger bending with a high degree of accuracy (5-10 degrees), or even provide haptic feedback to the user, which is a simulation of the sense of touch. The first commercially available hand-tracking glove-type device was the DataGlove, which could detect hand position, movement, and finger bending. It uses fiber-optic cables running down the back of the hand: light pulses are created, and when the fingers are bent, light leaks through small cracks and the loss is registered, giving an approximation of the hand pose.
  • Depth-aware cameras. Using specialized cameras such as structured-light or time-of-flight cameras, one can generate a depth map of what is being seen through the camera at short range, and use this data to approximate a 3D representation of the scene. These can be effective for the detection of hand gestures due to their short-range capabilities.
  • Stereo cameras. Using two cameras whose relations to one another are known, a 3D representation can be approximated from the output of the cameras. To obtain the cameras' relations, one can use a positioning reference such as a lexian-stripe or an infrared emitter. In combination with direct motion measurement (6D-Vision), gestures can be detected directly.
  • Gesture-based controllers. These controllers act as an extension of the body so that when gestures are performed, some of their motion can be conveniently captured by the software. An example of emerging gesture-based motion capture is skeletal hand tracking, which is being developed for virtual reality and augmented reality applications. An example of this technology is shown by tracking companies uSens and Gestigon, which allow users to interact with their surroundings without controllers.
  • Wi-Fi sensing
  • Mouse gesture tracking, where the motion of the mouse is correlated to a symbol being drawn by a person's hand; software can study changes in acceleration over time to represent gestures, and can also compensate for human tremor and inadvertent movement (see the sketch after this list).
  • Smart light-emitting cubes, whose sensors can be used to sense hands and fingers as well as other objects nearby, and can be used to process data. Most applications are in music and sound synthesis, but they can be applied to other fields.
  • Single camera. A standard 2D camera can be used for gesture recognition where the resources or environment would not be convenient for other forms of image-based recognition. It was earlier thought that a single camera could not be as effective as stereo or depth-aware cameras, but some companies are challenging this assumption with software-based gesture recognition technology that uses a standard 2D camera to detect robust hand gestures.
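
As a concrete illustration of the symbol-drawing recognition mentioned in the mouse-gesture item above, here is a minimal sketch in the style of template-based stroke recognizers (such as the well-known "$1" recognizer): the pointer trail is resampled to a fixed number of points, normalized for position and scale, and compared against stored templates by total point-to-point distance. The function names and the template format here are assumptions for illustration, not any particular library's API.

```python
import math

def resample(points, n=32):
    """Resample a pointer trail to n roughly evenly spaced points."""
    pts = list(points)
    total = sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))
    step = total / (n - 1)
    out, acc, i = [pts[0]], 0.0, 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= step and d > 0:
            t = (step - acc) / d
            p = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            out.append(p)
            pts.insert(i, p)   # continue measuring from the new point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:        # guard against floating-point shortfall
        out.append(pts[-1])
    return out

def normalize(points):
    """Translate the path to its centroid and scale it to a unit box."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    size = max(max(abs(x) for x, _ in pts), max(abs(y) for _, y in pts)) or 1.0
    return [(x / size, y / size) for x, y in pts]

def recognize(trail, templates):
    """Return the name of the stored template closest to the drawn trail."""
    probe = normalize(resample(trail))
    def dist(name):
        return sum(math.dist(p, q) for p, q in zip(probe, templates[name]))
    return min(templates, key=dist)
```

Templates would be built by running the same resample-and-normalize step over example strokes; a production recognizer would also handle rotation invariance and reject trails whose best match is still poor.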

Algorithms

Some alternative methods of tracking and analyzing gestures, and their respective relationships

Depending on the type of input data, the approach for interpreting a gesture could be done in different ways. However, most of the techniques rely on key pointers represented in a 3D coordinate system. Based on the relative motion of these, the gesture can be detected with high accuracy, depending on the quality of the input and the algorithm's approach.
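
As a minimal sketch of this idea, assume a tracker delivers one dictionary of named 3D key points per frame; a simple swipe detector can then look at the net motion of a wrist key point relative to a torso key point, which makes the check insensitive to the person moving within the camera frame. The joint names and thresholds here are illustrative assumptions, not any particular tracker's API.

```python
# Each frame: {joint_name: (x, y, z)} in metres, as assumed above.
def detect_right_swipe(frames, min_dx=0.30, max_dy=0.10):
    """Detect a horizontal right-swipe of the right wrist across frames."""
    rel = [(f["right_wrist"][0] - f["torso"][0],
            f["right_wrist"][1] - f["torso"][1]) for f in frames]
    dx = rel[-1][0] - rel[0][0]       # net horizontal travel, torso-relative
    dy = abs(rel[-1][1] - rel[0][1])  # vertical drift, which should stay small
    return dx > min_dx and dy < max_dy

frames = [{"right_wrist": (0.1 + 0.05 * i, 0.0, 1.0), "torso": (0.0, 0.0, 1.0)}
          for i in range(10)]
print(detect_right_swipe(frames))  # True: the wrist moved ~0.45 m rightward
```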

In order to interpret movements of the body, one has to classify them according to common properties and the message the movements may express. For example, in sign language, each gesture represents a word or phrase.

Some literature differentiates two approaches to gesture recognition: 3D-model-based and appearance-based. The former makes use of 3D information about key elements of the body parts in order to obtain several important parameters, like palm position or joint angles. Approaches derived from it, such as volumetric models, have proven to be very intensive in terms of computational power and require further technological developments in order to be implemented for real-time analysis. Alternatively, appearance-based systems use images or videos for direct interpretation; such models are easier to process, but they usually lack the generality required for human-computer interaction.

3D model-based algorithms

A real hand (left) is interpreted as a collection of vertices and lines in the 3D mesh version (right), and the software uses their relative position and interaction in order to infer the gesture.

The 3D model approach can use volumetric or skeletal models or even a combination of the two. Volumetric approaches have been heavily used in the computer animation industry and for computer vision purposes. The models are generally created from complicated 3D surfaces, like NURBS or polygon meshes.

The drawback of this method is that it is very computationally intensive, and systems for real-time analysis are still to be developed. For the moment, a more practical approach is to map simple primitive objects to the person's most important body parts (for example, cylinders for the arms and neck, a sphere for the head) and analyze the way these interact with each other. Furthermore, some abstract structures like super-quadrics and generalized cylinders may be even more suitable for approximating the body parts.
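
A minimal sketch of that primitive-mapping idea: represent the head as a sphere and a forearm as a capped cylinder (a line segment plus a radius), then detect an interaction such as "hand near head" from the distance between the primitives. The coordinates and radii below are illustrative assumptions.

```python
import math

def point_segment_dist(p, a, b):
    """Distance from point p to the 3D line segment a-b."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(c * c for c in ab) or 1e-9
    t = max(0.0, min(1.0, sum(ap[i] * ab[i] for i in range(3)) / denom))
    closest = [a[i] + t * ab[i] for i in range(3)]
    return math.dist(p, closest)

head = {"center": (0.0, 1.7, 0.0), "radius": 0.12}                       # sphere
forearm = {"a": (0.3, 1.1, 0.0), "b": (0.05, 1.6, 0.0), "radius": 0.05}  # cylinder

gap = point_segment_dist(head["center"], forearm["a"], forearm["b"])
touching = gap < head["radius"] + forearm["radius"]
print(f"gap {gap:.2f} m, hand-to-head contact: {touching}")  # contact: True
```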

Skeletal-based algorithms

The skeletal version (right) is effectively modeling the hand (left). This has fewer parameters than the volumetric version and it's easier to compute, making it suitable for real-time gesture analysis systems.

Instead of using intensive processing of the 3D models and dealing with a lot of parameters, one can just use a simplified version of joint angle parameters along with segment lengths. This is known as a skeletal representation of the body: a virtual skeleton of the person is computed, and parts of the body are mapped to certain segments. The analysis here is done using the position and orientation of these segments and the relations between them (for example, the angle between joints and their relative positions or orientations).
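
As a small illustration, the joint-angle parameter such algorithms analyze can be computed directly from three tracked joint positions via the dot product; a minimal sketch (the joint coordinates are made-up values):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b, in degrees, between segments b->a and b->c."""
    u = [a[i] - b[i] for i in range(3)]
    v = [c[i] - b[i] for i in range(3)]
    dot = sum(u[i] * v[i] for i in range(3))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    cos = max(-1.0, min(1.0, dot / (nu * nv)))  # clamp for numeric safety
    return math.degrees(math.acos(cos))

# Elbow angle from shoulder, elbow, and wrist positions (metres).
shoulder, elbow, wrist = (0.0, 1.4, 0.0), (0.3, 1.2, 0.0), (0.5, 1.4, 0.1)
print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.0f} degrees")  # ~101
```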

Advantages of using skeletal models:

  • Algorithms are faster because only key parameters are analyzed.
  • Pattern matching against a template database is possible.
  • Using key points allows the detection program to focus on the significant parts of the body.

Appearance-based models

These binary silhouette (left) or contour (right) images represent typical input for appearance-based algorithms. They are compared with different hand templates, and if they match, the corresponding gesture is inferred.

Appearance-based models no longer use a spatial representation of the body, instead deriving their parameters directly from the images or videos using a template database. Some are based on deformable 2D templates of human body parts, particularly the hands. Deformable templates are sets of points on the outline of an object, used as interpolation nodes for the object's outline approximation. One of the simplest interpolation functions is linear, which computes an average shape from point sets, point variability parameters, and external deformation. These template-based models are mostly used for hand tracking, but could also be used for simple gesture classification.
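
A minimal sketch of the linear case just described, under the simplifying assumption that all outlines are already aligned and resampled to the same number of points: the average shape is the pointwise mean of the example outlines, and a new outline is classified by its distance to each class's average shape.

```python
import math

def mean_shape(examples):
    """Pointwise mean of equal-length, pre-aligned outlines."""
    n = len(examples)
    return [(sum(e[i][0] for e in examples) / n,
             sum(e[i][1] for e in examples) / n)
            for i in range(len(examples[0]))]

def shape_distance(a, b):
    """Average point-to-point distance between two outlines."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def classify(outline, class_examples):
    """Nearest-average-shape classification over a template database."""
    means = {name: mean_shape(ex) for name, ex in class_examples.items()}
    return min(means, key=lambda name: shape_distance(outline, means[name]))
```

A fuller deformable-template system would also model the point-variability parameters and external deformations mentioned above rather than relying on the mean shape alone.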

The second approach in gesture detection using appearance-based models uses image sequences as gesture templates. Parameters for this method are either the images themselves, or certain features derived from these. Most of the time, only one (monoscopic) or two (stereoscopic) views are used.

Electromyography-based models

Electromyography (EMG) concerns the study of electrical signals produced by muscles in the body. Through classification of data received from the arm muscles, it is possible to classify the action and thus input the gesture to external software. Consumer EMG devices allow for non-invasive approaches such as an arm or leg band and connect via Bluetooth. Due to this, EMG has an advantage over visual methods since the user does not need to face a camera to give input, enabling more freedom of movement.
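
A minimal sketch of that classification step, assuming a band streams multi-channel samples (four channels here): each window of samples is reduced to one root-mean-square amplitude per channel, and a nearest-centroid classifier maps the resulting feature vector to a gesture. The centroids below are made-up illustrative values; a real system would learn them from labelled windows.

```python
import math

def rms_features(window):
    """Per-channel root-mean-square amplitude of one window of samples."""
    return [math.sqrt(sum(s * s for s in ch) / len(ch))
            for ch in zip(*window)]          # regroup samples into channels

CENTROIDS = {                                # assumed per-channel RMS levels
    "fist": [0.80, 0.75, 0.20, 0.15],
    "open": [0.20, 0.25, 0.70, 0.65],
    "rest": [0.05, 0.05, 0.05, 0.05],
}

def classify_window(window):
    feats = rms_features(window)
    return min(CENTROIDS, key=lambda g: math.dist(feats, CENTROIDS[g]))

window = [(0.7, 0.8, 0.1, 0.2), (0.9, 0.7, 0.3, 0.1)]  # 2 samples x 4 channels
print(classify_window(window))  # fist
```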

Challenges

There are many challenges associated with the accuracy and usefulness of gesture recognition and software designed to implement it. For image-based gesture recognition, there are limitations on the equipment used and image noise. Images or video may not be under consistent lighting, or in the same location. Items in the background or distinct features of the users may make recognition more difficult.

The variety of implementations for image-based gesture recognition may also cause issues with the viability of the technology for general usage. For example, an algorithm calibrated for one camera may not work for a different camera. The amount of background noise also causes tracking and recognition difficulties, especially when occlusions (partial and full) occur. Furthermore, the distance from the camera, and the camera's resolution and quality, also cause variations in recognition accuracy.

In order to capture human gestures with visual sensors, robust computer vision methods are also required, for example for hand tracking and hand posture recognition, or for capturing movements of the head, facial expressions, or gaze direction.

Social acceptability

One significant challenge to the adoption of gesture interfaces on consumer mobile devices such as smartphones and smartwatches stems from the social acceptability implications of gestural input. While gestures can facilitate fast and accurate input on many novel form-factor computers, their adoption and usefulness are often limited by social factors rather than technical ones. To this end, designers of gesture input methods may seek to balance both technical considerations and user willingness to perform gestures in different social contexts. In addition, different device hardware and sensing mechanisms support different kinds of recognizable gestures.

Mobile device

Gesture interfaces on mobile and small form-factor devices are often supported by the presence of motion sensors such as inertial measurement units (IMUs). On these devices, gesture sensing relies on users performing movement-based gestures capable of being recognized by these motion sensors. This can potentially make capturing signals from subtle or low-motion gestures challenging, as they may become difficult to distinguish from natural movements or noise. Through a survey and study of gesture usability, researchers found that gestures that incorporate subtle movement, appear similar to existing technology, look or feel similar to everyday actions, and are enjoyable were more likely to be accepted by users, while gestures that look strange, are uncomfortable to perform, interfere with communication, or involve uncommon movement were more likely to be rejected. The social acceptability of mobile device gestures relies heavily on the naturalness of the gesture and the social context.

On-body and wearable computers

Wearable computers typically differ from traditional mobile devices in that their usage and interaction take place on the user's body. In these contexts, gesture interfaces may become preferred over traditional input methods, as their small size renders touch-screens or keyboards less appealing. Nevertheless, they share many of the same social acceptability obstacles as mobile devices when it comes to gestural interaction. However, the possibility of wearable computers being hidden from sight or integrated into other everyday objects, such as clothing, allows gesture input to mimic common clothing interactions, such as adjusting a shirt collar or rubbing one's front pant pocket. A major consideration for wearable computer interaction is the location of device placement and interaction. A study exploring third-party attitudes towards wearable device interaction, conducted across the United States and South Korea, found differences in the perception of wearable computing use by males and females, in part due to different areas of the body being considered socially sensitive. Another study investigating the social acceptability of on-body projected interfaces found similar results, with both studies labelling the areas around the waist, groin, and upper body (for women) as least acceptable, and the areas around the forearm and wrist as most acceptable.

Public installations

Public installations, such as interactive public displays, allow access to information and display interactive media in public settings such as museums, galleries, and theaters. While touch screens are a frequent form of input for public displays, gesture interfaces provide additional benefits such as improved hygiene, interaction from a distance, and improved discoverability, and may favor performative interaction. An important consideration for gestural interaction with public displays is the high probability or expectation of a spectator audience.

Fatigue

Arm fatigue was a side effect of vertically oriented touch-screen or light-pen use: during periods of prolonged use, users' arms began to feel fatigued and/or uncomfortable. This effect contributed to the decline of touch-screen input despite its initial popularity in the 1980s.

In order to measure the arm-fatigue side effect, researchers developed a technique called Consumed Endurance.
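
The published metric is grounded in arm biomechanics; as a rough, illustrative sketch of the idea (not the published model), one can map arm elevation to a fraction of maximum shoulder strength, convert that to an endurance time with a Rohmert-style curve, and report interaction time as a fraction of that endurance. The strength fraction and the torque proxy below are assumptions for illustration.

```python
import math

def endurance_seconds(pct_mvc):
    """Rohmert-style curve: maximum holding time at a given % of max strength."""
    pct_mvc = max(pct_mvc, 16.0)  # the curve is only meaningful above ~15% MVC
    return 1236.5 / ((pct_mvc - 15.0) ** 0.618) - 72.5

def consumed_endurance(arm_angle_deg, interaction_s, strength_frac=0.25):
    """Fraction of shoulder endurance used while the arm is held raised.

    strength_frac is an assumed ratio of the gravity torque on the
    outstretched arm to maximum shoulder strength; the real metric derives
    this from arm mass and posture rather than a constant."""
    pct_mvc = 100.0 * strength_frac * math.sin(math.radians(arm_angle_deg))
    return interaction_s / endurance_seconds(pct_mvc)

print(f"{consumed_endurance(90, 60):.0%} of endurance after 1 min at shoulder height")
```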

Sustainability and systemic change resistance

From Wikipedia, the free encyclopedia

The environmental sustainability problem has proven difficult to solve. The modern environmental movement has attempted to solve the problem in a large variety of ways. But little progress has been made, as shown by severe ecological footprint overshoot and lack of sufficient progress on the climate change problem. Something within the human system is preventing change to a sustainable mode of behavior. That system trait is systemic change resistance. Change resistance is also known as organizational resistance, barriers to change, or policy resistance.

Overview of resistance to solving the sustainability problem

While environmentalism had long been a minor force in political change, the movement strengthened significantly in the 1970s with the first Earth Day in 1970, in which over 20 million people participated, with the publication of The Limits to Growth in 1972, and with the first United Nations Conference on the Human Environment in Stockholm in 1972. Early expectations that the problem could be solved ran high: 114 of the 132 members of the United Nations attended the Stockholm conference. The conference was widely seen at the time as a harbinger of success:

"Many believe the most important result of the conference was the precedent it set for international cooperation in addressing environmental degradation. The nations attending agreed they shared responsibility for the quality of the environment, particularly the oceans and the atmosphere, and they signed a declaration of principles, after extensive negotiations, concerning their obligations. The conference also approved an environmental fund and an 'action program,' which involved 200 specific recommendations for addressing such problems as global climate change, marine pollution, population growth, the dumping of toxic wastes, and the preservation of biodiversity. A permanent environment unit was established for coordinating these and other international efforts. [This later became] the United Nations Environmental Program [which was] was formally approved by the General Assembly later that same year and its base established in Nairobi, Kenya. This organization not only coordinated action but monitored research, collecting and disseminating information, and it has played an ongoing role in international negotiations about environmental issues.
"The conference in Stockholm accomplished almost everything the preparatory committed had planned. It was widely considered successful, and many observers were almost euphoric about the extent of agreement."

However, despite the work of a worldwide environmental movement, many national environmental protection agencies, the creation of the United Nations Environment Programme, and many international environmental treaties, the sustainability problem continues to grow worse. The latest ecological footprint data show that the world's footprint moved from about 50% undershoot in 1961 to 50% overshoot in 2007, the most recent year for which data are available.

In 1972 the first edition of The Limits to Growth analyzed the environmental sustainability problem using a system dynamics model. The widely influential book predicted that:

"If the present trends in world population, industrialization, pollution, food production, and resource depletion continue unchanged, the limits to growth on this planet will be reached sometime within the next one hundred years. The most probable result will be a rather sudden and uncontrollable decline in both population and industrial capacity some time in the 21st century."

Yet thirty-two years later in 2004 the third edition reported that:

"[The second edition of Limits to Growth] was published in 1992, the year of the global summit on environment and development in Rio de Janeiro. The advent of the summit seemed to prove that global society had decided to deal seriously with the important environmental problems. But we now know that humanity failed to achieve the goals of Rio. The Rio plus 10 conference in Johannesburg in 2002 produced even less; it was almost paralyzed by a variety of ideological and economic disputes, [due to] the efforts of those pursuing their narrow national, corporate, or individual self-interests.
"...humanity has largely squandered the past 30 years."

Change resistance runs so high that the world's two largest greenhouse gas emitters, China and the United States, never took on binding targets under the Kyoto Protocol treaty. In the US, resistance was so strong that in 1997 the Senate voted 95 to zero against the treaty's approach by passing the Byrd–Hagel Resolution, despite the fact that Al Gore was vice president at the time. Not a single senator could be persuaded to support the treaty, which has not been brought back to the floor since.

Due to prolonged change resistance, the climate change problem has escalated to the climate change crisis. Greenhouse gas emissions are rising much faster than IPCC models expected: "The growth rate of [fossil fuel] emissions was 3.5% per year for 2000-2007, an almost four fold increase from 0.9% per year in 1990-1999. … This makes current trends in emissions higher than the worst case IPCC-SRES scenario."
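As a quick arithmetic check of the quoted figures (using only the numbers given in the quote):

```latex
\frac{3.5\%\ \text{per year (2000--2007)}}{0.9\%\ \text{per year (1990--1999)}} \approx 3.9
```

That is, the emissions growth rate did increase almost fourfold, as the quote states.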

The Copenhagen Climate Summit of December 2009 ended in failure. No agreement on binding targets was reached. The Cancun Climate Summit in December 2010 did not break the deadlock. The best it could do was another non-binding agreement:

"Recognizing that climate change represents an urgent and potentially irreversible threat to human societies and the planet, and thus requires to be urgently addressed by all Parties."

This indicates no progress at all since 1992, when the United Nations Framework Convention on Climate Change was created at the Earth Summit in Rio de Janeiro. The 2010 Cancun agreement was functionally equivalent to what the 1992 agreement said:

"The Parties to this Convention... [acknowledge] that the global nature of climate change calls for the widest possible cooperation by all countries and their participation in an effective and appropriate international response.... [thus the parties recognize] that States should enact effective environmental legislation... [to] protect the climate system for the benefit of present and future generations of humankind...."

Negotiations have bogged down so pervasively that: "Climate policy is gridlocked, and there's virtually no chance of a breakthrough." "Climate policy, as it has been understood and practised by many governments of the world under the Kyoto Protocol approach, has failed to produce any discernible real world reductions in emissions of greenhouse gases in fifteen years."

These events suggest that change resistance to solving the sustainability problem is so high the problem is currently unsolvable.

The change resistance and proper coupling subproblems

Understanding change resistance requires seeing it as a distinct and separate part of the sustainability problem. Tanya Markvart's 2009 thesis on Understanding Institutional Change and Resistance to Change Towards Sustainability stated that:

"It has also been demonstrated that ecologically destructive and inequitable institutional systems can be highly resilient and resistant to change, even in the face of social-ecological degradation and/or collapse (e.g., Berkes & Folke, 2002; Allison & Hobbs, 2004; Brown, 2005; Runnalls, 2008; Finley, 2009; Walker et al., 2009)."

The thesis focuses specifically on developing "an interdisciplinary theoretical framework for understanding institutional change and resistance to change towards sustainability."

Jack Harich's 2010 paper on Change Resistance as the Crux of the Environmental Sustainability Problem argues there are two separate problems to solve. A root cause analysis and a system dynamics model were used to explain how:

"...difficult social problems [like sustainability must be decomposed] into two sequential subproblems: (1) How to overcome change resistance and then (2) How to achieve proper coupling. This is the timeless strategy of divide and conquer. By cleaving one big problem into two, the problem becomes an order of magnitude easier to solve, because we can approach the two subproblems differently and much more appropriately. We are no longer unknowingly attempting to solve two very different problems simultaneously."

The paper discussed the two subproblems:

"Change resistance is the tendency for a system to continue its current behavior, despite the application of force to change that behavior.
"Proper coupling occurs when the behavior of one system affects the behavior of other systems in a desirable manner, using the appropriate feedback loops, so the systems work together in harmony in accordance with design objectives. … In the environmental sustainability problem the human system has become improperly coupled to the greater system it lives within: the environment.
"Change resistance versus proper coupling allows a crucial distinction. Society is aware of the proper practices required to live sustainably and the need to do so. But society has a strong aversion to adopting these practices. As a result, problem solvers have created thousands of effective (and often ingenious) proper practices. But they are stymied in their attempts to have them taken up by enough of the system to solve the problem because an 'implicit system goal' is causing insurmountable change resistance. Therefore systemic change resistance is the crux of the problem and must be solved first."

The proper coupling subproblem is what most people consider as "the" problem to solve. It is called decoupling in economic and environmental fields, where the term refers to economic growth without additional environmental degradation. Solving the proper coupling problem is the goal of environmentalism and in particular ecological economics: "Ecological economics is the study of the interactions and co-evolution in time and space of human economies and the ecosystems in which human economies are embedded."

Change resistance is also called barriers to change. Hoffman and Bazerman, in a chapter on "Understanding and overcoming the organizational and psychological barriers to action", concluded that:

"In this chapter, we argue that the change in thinking required of the sustainability agenda will never come to fruition within practical domains unless proper attention is given to the sources of individual and social resistance to such change. The implementation of wise management practices cannot be accomplished without a concurrent set of strategies for surmounting these barriers."

John Sterman, current leader of the system dynamics school of thought, came to the same conclusion:

"The civil rights movement provides a better analogy for the climate challenge. Then, as now, entrenched special interests vigorously opposed change. … Of course, we need more research and technical innovation—money and genius are always in short supply. But there is no purely technical solution for climate change. For public policy to be grounded in the hard-won results of climate science, we must now turn our attention to the dynamics of social and political change."

These findings indicate there are at least two subproblems to be solved: change resistance and proper coupling. Given the human system's long history of unsuccessful attempts to self-correct to a sustainable mode, it appears that high change resistance is preventing proper coupling. This may be expressed as an emerging principle: systemic change resistance is the crux of the sustainability problem and must be solved first, before the human system can be properly coupled to the greater system it lives within, the environment.

Systemic versus individual change resistance

Systemic change resistance differs significantly from individual change resistance. "Systemic means originating from the system in such a manner as to affect the behavior of most or all social agents of certain types, as opposed to originating from individual agents." Individual change resistance originates from individual people and organizations. How the two differ may be seen in this passage:

"The notion of resistance to change is credited to Kurt Lewin. His conceptualization of the phrase, however, is very different from today's usage. [which treats resistance to change as a psychological concept, where resistance or support of change comes from values, habits, mental models, and so on residing within the individual] For Lewin, resistance to change could occur, but that resistance could be anywhere in the system. As Kotter (1995) found, it is possible for the resistance to be sited within the individual, but it is much more likely to be found elsewhere in the system.
"Systems of social roles, with their associated patterns of attitudes, expectations, and behavior norms, share with biological systems the characteristic of homeostasis—i.e., tendencies to resist change, to restore the previous state after a disturbance.
"Lewin had been working on this idea, that the status quo represented an equilibrium between the barriers to change and the forces favoring change, since 1928 as part of his field theory. He believed that some difference in these forces—weakening of the barriers or strengthening of the driving forces—was required to produce the unfreezing that began a change."

If sources of systemic change resistance are present, they are the principal cause of individual change resistance. Given the fundamental attribution error, it is crucial to address systemic change resistance when present rather than to assume that change resistance can be overcome by bargaining, reasoning, inspirational appeals, and so on. This is because:

"A fundamental principle of system dynamics states that the structure of the system gives rise to its behavior. However, people have a strong tendency to attribute the behavior of others to dispositional rather than situational factors, that is, to character and especially character flaws rather than the system in which these people are acting. The tendency to blame the person rather than the system is so strong psychologists call it the 'fundamental attribution error.' "

Peter Senge, a thought leader of systems thinking for the business world, describes the structural source of systemic change resistance as an "implicit system goal":

"In general, balancing loops are more difficult to see than reinforcing loops because it often looks like nothing is happening. There's no dramatic growth of sales and marketing expenditures, or nuclear arms, or lily pads. Instead, the balancing process maintains the status quo, even when all participants want change. The feeling, as Lewis Carroll's Queen of Hearts put it, of needing 'all the running you can do to keep in the same place' is a clue that a balancing loop may exist nearby.
"Leaders who attempt organizational change often find themselves unwittingly caught in balancing processes. To the leaders, it looks as though their efforts are clashing with sudden resistance that seems to come from nowhere. In fact, as my friend found when he tried to reduce burnout, the resistance is a response by the system, trying to maintain an implicit system goal. Until this goal is recognized, the change effort is doomed to failure."

Senge's insight applies to the sustainability problem. Until the "implicit system goal" causing systemic change resistance is found and resolved, change efforts to solve the proper coupling part of the sustainability problem may be, as Senge argues, "doomed to failure".
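To make the balancing-loop idea concrete, here is a minimal simulation sketch (all values hypothetical) of a system defending an implicit goal: an external push for change is absorbed by the loop rather than compounded.

```python
# A minimal illustration (hypothetical values) of Senge's balancing loop:
# the system state keeps drifting back toward an implicit goal, so an
# external change effort produces little lasting movement.

implicit_goal = 100.0   # the goal the system defends (assumed)
state = 100.0           # current system behavior
correction_rate = 0.5   # how strongly the loop corrects deviations (assumed)
change_push = 5.0       # constant external pressure for change (assumed)

for step in range(20):
    gap = implicit_goal - state          # balancing loop senses the deviation
    state += correction_rate * gap       # ...and acts to close it
    state += change_push                 # change agents push in the other direction

# The state settles near implicit_goal + change_push / correction_rate,
# not at the reformers' target: the push is absorbed, not compounded.
print(round(state, 1))  # ≈ 110.0
```

In this sketch the push shifts the equilibrium only by change_push / correction_rate, and removing the push lets the loop restore the status quo, which is Senge's point about why change efforts fail until the implicit goal itself is addressed.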

Focus is on proper coupling

Presently environmentalism is focused on solving the proper coupling subproblem: its solutions attempt to address the direct cause of the sustainability problem's symptoms.

The direct cause of environmental impact is the three factors on the right side of the I=PAT equation, where Impact equals Population times Affluence (consumption per person) times Technology (environmental impact per unit of consumption). It is these three factors that proper coupling solutions seek to reduce.
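As a hedged numerical illustration (all figures hypothetical, chosen only for round numbers), the multiplicative structure of the equation can be written out as:

```latex
I = P \times A \times T
  = (7 \times 10^{9}\ \text{people})
    \times (10{,}000\ \text{\$ per person-year})
    \times (0.5\ \text{kg CO}_2\ \text{per \$})
  = 3.5 \times 10^{13}\ \text{kg CO}_2\ \text{per year}
```

Because the factors multiply, halving T while P and A each grow by 50% still leaves impact 12.5% higher (1.5 × 1.5 × 0.5 = 1.125), which is why reducing only one factor rarely reduces impact.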

The top environmental organization in the world, the United Nations Environment Programme (UNEP), focuses exclusively on proper coupling solutions:

"2010 marked the beginning of a period of new, strategic and transformational direction for UNEP as it began implementing its Medium Term Strategy (MTS) for 2010-2013 across six areas: Climate change; Disasters and conflicts; Ecosystem management; Environmental governance; Harmful substances and hazardous waste; Resource efficiency, Sustainable consumption and production."

The six areas are all direct practices to reduce the three factors of the I=PAT equation.

Al Gore's 2006 documentary film An Inconvenient Truth described the climate change problem and the urgency of solving it. The film concluded with Gore saying:

"Each one of us is a cause of global warming, but each one of us can make choices to change that with the things we buy, the electricity we use, the cars we drive; we can make choices to bring our individual carbon emissions to zero. The solutions are in our hands, we just have to have the determination to make it happen. We have everything that we need to reduce carbon emissions, everything but political will. But in America, the will to act is a renewable resource."

The four solutions Gore mentions are proper coupling practices. There is, however, a hint of acknowledgement that overcoming systemic change resistance is the real challenge, when Gore says "...we just have to have the determination to make it happen. We have everything that we need to reduce carbon emissions, everything but political will."

The twenty-seven solutions that appear during the film's closing credits are mostly proper coupling solutions. The first nine are:

  • Go to www.climatecrisis.net
  • You can reduce your carbon emissions. In fact, you can even reduce your carbon emissions to zero.
  • Buy energy efficient appliances & light bulbs.
  • Change your thermostat (and use clock thermostats) to reduce energy for heating & cooling.
  • Weatherize your house, increase insulation, get an energy audit.
  • Recycle.
  • If you can, buy a hybrid car.
  • When you can, walk or ride a bicycle.
  • Where you can, use light rail & mass transit.

Some solutions are attempts to overcome individual change resistance, such as:

  • Tell your parents not to ruin the world that you will live in.
  • If you are a parent, join with your children to save the world they will live in.
  • Vote for leaders who pledge to solve this crisis.
  • Write to congress. If they don't listen, run for congress.
  • Speak up in your community.

However, none of the twenty-seven solutions deals with overcoming systemic change resistance.

Overcoming systemic change resistance

Efforts here are sparse because environmentalism is currently not oriented toward treating systemic change resistance as a distinct and separate problem to solve.

On how to specifically overcome the change resistance subproblem, Markvart examined two leading theories that seemed to offer insight into change resistance, Panarchy theory and New Institutionalism, and concluded that:

"...neither theory devotes significant attention to understanding the dynamics of resilient and resistant but inefficient and/or unproductive institutional and ecological systems. Overall, more research is required...."

Taking a root cause analysis and system dynamics modeling approach, Harich carefully defined the three characteristics of a root cause and then found a main systemic root cause for both the change resistance and proper coupling subproblems. Several sample solution elements for resolving the root causes were suggested. The point was made that the exact solution policies chosen do not matter nearly as much as finding the correct systemic root causes. Once these are found, how to resolve them is relatively obvious: when a root cause is identified through structural modeling, the high leverage point for resolving it follows easily. Solutions may then push on specific structural points in the social system, which, due to careful modeling, will have fairly predictable effects.

This reaffirms the work of Donella Meadows, as expressed in her classic essay on Leverage Points: Places to Intervene in a System. The final page stated that:

"The higher the leverage point, the more the system will resist changing it."

Here Meadows refers to the leverage point for resolving the proper coupling subproblem rather than the leverage point for overcoming change resistance. This is because the current focus of environmentalism is on proper coupling.

However, if the leverage points associated with the root causes of change resistance exist and can be found, the system will not resist changing them. This is an important principle of social system behavior.

For example, Harich found the main root cause of successful systemic change resistance to be high "deception effectiveness." The source was special interests, particularly large for-profit corporations. The high leverage point was raising "general ability to detect manipulative deception." This can be done with a variety of solution elements, such as "The Truth Test." This effectively increases truth literacy, just as conventional education raises reading and writing literacy. Few citizens resist literacy education because its benefits have become so obvious.

Promotion of corporate social responsibility (CSR) has been used to try to overcome change resistance to solving social problems, including environmental sustainability. This solution strategy has not worked well because it is voluntary and does not resolve root causes. Milton Friedman explained why CSR fails: "The social responsibility of business is to increase its profits." Business cannot be responsible to society; it can only be responsible to its shareholders.

Intelligent environment

From Wikipedia, the free encyclopedia

Intelligent Environments (IE) are spaces with embedded systems and information and communication technologies that create interactive spaces, bringing computation into the physical world and enhancing occupants' experiences. "Intelligent environments are spaces in which computation is seamlessly used to enhance ordinary activity. One of the driving forces behind the emerging interest in highly interactive environments is to make computers not only genuinely user-friendly but also essentially invisible to the user."

IEs describe physical environments in which information and communication technologies and sensor systems disappear as they become embedded into physical objects, infrastructures, and the surroundings in which we live, travel, and work. The goal is to allow computers to take part in activities in which they were never previously involved and to allow people to interact with computers via gesture, voice, movement, and context.

Origins

Conceptual figure of Intelligent Space (by J.-H. Lee and H. Hashimoto)

The idea of an artificial intelligence capable of managing an environment, collecting data, and responding accordingly is older than one might expect. The 1968 novel 2001: A Space Odyssey, written long before the microcomputer revolution, features the fictional character HAL 9000, a computer capable of controlling the different sensors and systems of its environment and using them as extensions of itself. The character Proteus from the 1973 novel Demon Seed likewise portrays an artificial intelligence controlling an embedded environment. When these two novels were released, the idea of a computer controlling the environment around us was not broadly accepted, since both characters played the role of evil machines whose objectives included control over humans.

The term "intelligent environments" was coined by Peter Droege for his 1997 Elsevier publication Intelligent Environments - Spatial Aspects of the Information Revolution (https://www.sciencedirect.com/book/9780444823328/intelligent-environments), a project that commenced in 1986. The 1986 project was his winning entry in the Campus City Kawasaki competition in Japan, which sought to apply the benefits of information technology and advanced telecommunications to an entire city: its societal empowerment, trans-industrial prosperity and, above all, its environmental redemption.

It was not until Mark Weiser introduced ubiquitous computing in 1991 that the scientific community began studying computing outside the typical machine with a keyboard and a screen, as something that could potentially be implemented into anything around us, offering casual access to computing to any user. In 1996 the Hashimoto Laboratory at the University of Tokyo conducted the first research on intelligent spaces. J.-H. Lee and H. Hashimoto designed a room with a homemade three-dimensional tracking sensor and mobile robots, all connected to a network. The idea was for the robots, with the help of vision cameras and computer sets, to support the person in the room with different tasks, making it one of the first intelligent environments.

At first, intelligent spaces were designed with the sole objective of helping people with physical tasks. Robots in the room would help people grasp objects and support people with disabilities in certain jobs. This idea gradually shifted into today's concept of intelligent environments: an environment that supports not only people but also robots. The intelligent space became a platform that extends the sensorial capacity of anything connected to it. If products, whether software or hardware, were designed around these intelligent environments, the effort needed to complete all kinds of tasks could be drastically reduced.

Challenges

Practical implementation of intelligent environments requires solving many challenges. Pervasive computing systems embedded in IE need to be proactive, and to accomplish this it is crucial that systems can track and determine the users' intent. The challenge is finding the action that will help the user rather than hinder them. At present, the algorithms behind intelligent environments are constantly reworked through trial and error in artificial environments; only when a programmer sees a sufficiently accurate level of prediction can the product be commercialized. The required degree of accuracy depends on the task the intelligent environment is meant to accomplish: simple actions that do not substantially affect the user can tolerate more prediction failures than functions that carry more responsibility. Still, there are always actions that cannot be fully predicted by the IE and need some input from the user to be completed. One of the most significant current challenges is determining which actions require user input and how to create algorithms capable of eliminating that input so that the usability of the systems improves.

On the other hand, the pro-activity of such environments has to be handled very carefully. Pervasive computing systems are supposed to be minimally intrusive while still being capable of making decisions that will help users. One way to achieve this is to make those systems capable of modifying their behavior based on the user's state and surroundings. Here again, challenges arise: What data and information does a system need to be context-aware? How frequently should that information be measured and consulted without hurting system performance? The goal is to create an IE capable of reacting quickly and accurately to the needs and inputs of the user, so that sensors need not record information that will not help the algorithms choose the correct response to what is happening. Recognizing important data, and filtering the environment for the appropriate place to obtain it, is a considerable challenge.

It is crucial for pervasive computing systems to find the right level of pro-activity and transparency without annoying the user. Systems can infer the user's need for pro-activity based on their level of expertise in a particular task. Self-tuning can be crucial for accomplishing this goal.
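As a minimal sketch of this idea (all names, thresholds, and the prediction model are illustrative assumptions), a proactive system might act autonomously only when its confidence in the predicted action clears a threshold that adapts to the user's expertise, and otherwise fall back to asking the user:

```python
# Minimal sketch of confidence-thresholded pro-activity (all names,
# thresholds, and the prediction model are illustrative assumptions).

def decide(predicted_action, confidence, user_expertise):
    """Act autonomously only when prediction confidence clears a threshold;
    novice users get more proactive help, experts are interrupted less."""
    # Hypothetical self-tuning: require more confidence for expert users,
    # who are assumed to prefer control over automation.
    threshold = 0.6 + 0.3 * user_expertise   # expertise in [0.0, 1.0]
    if confidence >= threshold:
        return f"execute: {predicted_action}"      # minimally intrusive action
    elif confidence >= threshold - 0.2:
        return f"suggest: {predicted_action}?"     # ask before acting
    else:
        return "wait for explicit user input"      # avoid wrong guesses

print(decide("dim the lights", confidence=0.95, user_expertise=0.2))  # execute
print(decide("dim the lights", confidence=0.70, user_expertise=0.8))  # suggest
```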

As the Intelligent Environments Conference (2007) points out: "Types of Intelligent Environments range from private to public and from fixed to mobile; some are ephemeral while others are permanent; some change type during their lifespan. The realization of Intelligent Environments requires the convergence of different disciplines: Information and Computer Science, Architecture, Material Engineering, Artificial Intelligence, Sociology, and Design. Besides, technical breakthroughs are required in key enabling technology fields, such as microelectronics (e.g., miniaturization, power consumption), communication and networking technologies (e.g., broadband and wireless networks), smart materials (e.g., bio-implants) and intelligent agents (e.g., context awareness)". The correct integration of all of these components is crucial to developing a useful IE.

Applications

Business

One of the main areas that will be significantly affected by the emergence of IE is business relations. The way companies interact with each other and with people will change the most: relationships will become more dynamic, demanding a more flexible approach to business that adapts to a continually changing commercial environment. Such flexibility should also be reflected in companies' employees and work environments. Even today, companies that show significant flexibility in their working environments and with their employees (such as Microsoft or Google) see higher levels of productivity and employee retention.

Another critical issue that companies must take into account in the IE era is how they approach the privacy of their clients. The success of these future companies will depend significantly on whether people feel confident about how their personal information is used. Another essential key to the success of these prospective businesses will be allowing the end user to control how IE systems make decisions. User-friendly configurations should enable users to remain in control of these systems, but providing this control is at the same time one of the biggest challenges for systems engineers.

Leisure Activities

New forms of entertainment have emerged since the creation of IE. There have been several experiments in museums where this technology is used to create a more interactive experience, letting visitors experience history not only with their eyes but with all of their senses. From sounds and lights that adapt to the exhibitions presented, to the incorporation of smells that define unique environments, there are endless opportunities for applying IE to this sort of entertainment.

The same concept can be used not only to improve existing leisure experiences but also to create new ones. Artistic expression has had a significant influence here, with new forms of art emerging that use artificial intelligence and IE. Take, for example, the work of the artist Chris Milk, whose immersive installations make the user not only appreciate a work of art but also be part of it. One of his most important works, "The Treachery of Sanctuary", uses projections of the users' bodies on different screens to explore the creative process through generated digital birds. This type of art requires the user's interaction in order to exist.

Health Care

One of the most important applications of intelligent environments is in the healthcare industry. IE can be used in hospital rooms to monitor the state of patients without the patients even noticing, which results in less disturbance for those who need extraordinary amounts of rest and less effort for nurses, who are no longer required to check on patients regularly. This technology could substantially change the way hospitals and clinics are designed, since nurses could spend their time attending to patients in critical need without leaving other patients unattended, as they remain under the care of intelligent rooms. Such installations would not only monitor patients' health and notify the nurses but could also be programmed to interact with patients for preventive purposes, administering specific drugs or delivering food when needed.
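As a minimal sketch of such passive monitoring (the vital ranges, field names, and alert mechanism are all illustrative assumptions), a room might sample a patient's vitals quietly and alert a nurse only when a reading leaves its safe range:

```python
# Minimal sketch of passive in-room patient monitoring (all vital ranges,
# field names, and the alert mechanism are illustrative assumptions).

SAFE_RANGES = {                   # hypothetical safe ranges per vital sign
    "heart_rate": (50, 110),      # beats per minute
    "spo2": (92, 100),            # blood-oxygen saturation, percent
    "temperature": (36.0, 38.0),  # degrees Celsius
}

def check_vitals(reading):
    """Return the list of vitals outside their safe range for one reading."""
    return [name for name, (low, high) in SAFE_RANGES.items()
            if not low <= reading[name] <= high]

def monitor(reading, notify):
    """Alert the nurse only on out-of-range vitals; otherwise stay silent,
    so patients are not disturbed by routine checks."""
    alerts = check_vitals(reading)
    if alerts:
        notify(f"room 12: out-of-range vitals: {', '.join(alerts)}")

# Example: a feverish reading triggers exactly one quiet notification.
monitor({"heart_rate": 88, "spo2": 97, "temperature": 38.6}, notify=print)
```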

Care of frail, elderly patients could change dramatically in the future with the use of IE. By introducing this technology into their own homes, we could monitor a patient from anywhere without needing to transport them to hospitals or clinics for proper care. This could transform nearly any house into an intelligent nursing-care home, allowing families to save considerable money by dramatically reducing the cost of care.

Emergency Response

Prevention is the best way to fight a possible problem, and there is no better way to prevent than to gather, before a problematic event, the information that helps us know when it will happen. IE would provide the perfect way to gather the data necessary to predict hazards and possible future problems. If we implement IE in our homes, it could notify the fire department, for example, if a fire is about to start without us even noticing, or the police department if suspicious activity is detected near our homes. In the best case, the event would never happen, since the IE would help us diagnose the environment in which it is implemented so that we could attack possible issues long before they occur. This would substantially improve living conditions in cities and have a substantial economic impact, since fewer hazardous events would occur, preventing material losses.

Environmental Monitoring

Intelligent environments will help us monitor different natural environments at much higher precision and granularity than currently used techniques allow. Access to richer and more meaningful data will not only help us watch the environment for possible hazards but will also change the way we understand it, improving current theories and models of environmental processes. Today this technology is being used to study phenomena such as coastal erosion, flooding, and glacier movement. We know very little about why many natural processes that currently affect us occur, and more accurate and precise data will significantly improve the way we attack those issues, so that we not only make humans more environmentally friendly but also improve the health of our planet.

Hydrogen-like atom

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Hydrogen-like_atom ...