
Monday, May 16, 2022

Pain in babies

From Wikipedia, the free encyclopedia

Pain in babies, and whether babies feel pain, has been a large subject of debate within the medical profession for centuries. Prior to the late nineteenth century it was generally considered that babies hurt more easily than adults. It was only in the last quarter of the 20th century that scientific techniques finally established that babies definitely do experience pain – probably more than adults – and developed reliable means of assessing and treating it. As recently as 1999, it was commonly stated that babies could not feel pain until they were a year old, but today it is believed that newborns, and likely even fetuses beyond a certain age, can experience pain.

Effects

There are a number of metabolic and homeostatic changes which result from untreated pain, including an increased requirement for oxygen, accompanied by a reduction in the efficiency of gas exchange in the lungs. This combination can lead to inadequate oxygen supply, resulting in potential hypoxemia. In addition, a rise in stomach acidity accompanies the stress reaction precipitated by pain, and there is a risk of aspirating this into the lungs, further endangering lung integrity and tissue oxygenation. In cases of acute, persistent pain, the metabolism becomes predominantly catabolic, causing reduced efficiency of the immune system and a breakdown of proteins caused by the action of the stress hormones. In combination, healing of damaged or infected tissue may be impaired, and morbidity and mortality increased.

The neuropsychological effect on the bonding between mother and child, on later contact with health professionals, and on personal and social psychological well-being is difficult to quantify. Research suggests that babies exposed to pain in the neonatal period have more difficulty in these areas. Professionals working in the field of neonatal pain have speculated that adolescent aggression and self-destructive behaviour, including suicide, may, in some cases, be attributed to the long-term effects of untreated neonatal pain.

Pathophysiology

The present understanding of pain in babies is largely due to the recognition that the fetal and newborn unmyelinated nerve fibres are capable of relaying information, albeit more slowly than would be the case with myelinated fibres. At birth a baby has developed the neural pathways for nociception and for experiencing pain, but the pain responses are an immature version of those of an adult. There are a number of differences in both nerve structure and in the quality and extent of nerve response which are considered to be pertinent to understanding neonatal pain.

The nerves of young babies respond more readily to noxious stimuli, with a lower threshold to stimulation, than those of adults. A baby's threshold for sensitization is also substantially decreased, whilst the process involves a much larger area of sensitization with each trauma. The neural pathways that descend from the brain to the spinal cord are not well developed in the newborn, resulting in the ability of the central nervous system to inhibit nociception being more limited than in the adult.

There are also indications that the neonate's nervous system may be much more active than that of an adult, in terms of transforming its connections and central nerve pathways in response to stimuli. The ongoing process of neural pathway development, involving both structural and chemical changes of the nervous system, has been shown to be affected by pain events, both in the short term and potentially into adult life.

Diagnosis

Some of the signs of pain in babies are obvious, requiring no special equipment or training. The baby is crying and irritable when awake, develops a disturbed sleep pattern, feeds poorly, and shows a fearful, distrustful reaction towards care-givers.

The classical International Association for the Study of Pain definition of pain, as a subjective, emotional experience that is described in terms of tissue damage, depends on the sufferer being able to self-report pain, which is of little use in diagnosing and treating pain in babies. More significant are non-verbal responses, of which there are two kinds: gross physical movements and physiological response measurements. The former can be assessed by simple direct observation, while the latter requires specific equipment to monitor blood pressure and stress hormone levels.

The cry response is increasingly important, as researchers are now able to differentiate between different kinds of cry: classed as "hungry", "angry", and "fearful or in pain". Interpretation is difficult, however, depending on the sensitivity of the listener, and varies significantly between observers.

Studies have sought additional, visible and easily definable indicators of pain, in particular the high levels of pain detected in babies when hungry compared with pain levels in more developed children. Combinations of crying with facial expressions, posture and movements, aided by physiological measurements, have been tested and found to be reliable indicators. A number of such observational scales have been published and verified. Even with noticeable responses from an infant, the underlying problem may be hidden: because the baby cannot describe the pain, and the signs may be masked by the illness itself, infants are among the most difficult patients to diagnose.

Children's and Infants' Postoperative Pain Scale

The Children's and Infants' Postoperative Pain Scale (ChIPPS) is often used in the assessment of hospitalised babies. The scale requires no special measurements and is therefore applicable across a wide range of circumstances.

Described in 2000, the scale rates five items, each scored 0, 1, or 2, as follows:

Item                  | Score 0         | Score 1   | Score 2
Crying                | None            | Moaning   | Screaming
Facial expression     | Relaxed smiling | Wry mouth | Grimacing
Posture of the trunk  | Neutral         | Variable  | Rear up
Posture of the legs   | Neutral         | Kicking   | Tightened
Motor restlessness    | None            | Moderate  | Restless

Total score indicates how the baby should be managed according to the scale:

  • 0–3: No requirement for treating pain
  • 4–10: Progressively greater need for analgesia.
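
As a reading aid only, the short Python sketch below totals the five item ratings from the table and applies the 0–3 / 4–10 guidance above. It is an illustrative sketch, not a clinical tool, and the example observation values are invented.

```python
# Illustrative ChIPPS scoring sketch (not a clinical tool); item names follow the table above.
CHIPPS_ITEMS = ("crying", "facial expression", "posture of the trunk",
                "posture of the legs", "motor restlessness")

def chipps_score(observations):
    """Sum the five item ratings (each 0, 1 or 2) and map the total to the scale's guidance."""
    total = 0
    for item in CHIPPS_ITEMS:
        rating = observations[item]
        if rating not in (0, 1, 2):
            raise ValueError(f"invalid rating for {item}: {rating}")
        total += rating
    if total <= 3:
        return total, "no requirement for treating pain"
    return total, "progressively greater need for analgesia"

# Invented example: a moaning baby with a wry mouth, kicking legs and moderate restlessness.
example = {"crying": 1, "facial expression": 1, "posture of the trunk": 0,
           "posture of the legs": 1, "motor restlessness": 1}
print(chipps_score(example))  # (4, 'progressively greater need for analgesia')
```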

All observations, both movement and physiological, tend to decrease when pain is persistent, thus rendering the scale unreliable in acute or prolonged cases. In addition, hyperalgesia and allodynia occur more quickly and more extensively in babies than in adults. Day-to-day changes in the response to a specific injury may therefore become unpredictable and variable.

Treatment

Where the baby is to undergo some form of planned procedure, health professionals will take steps to reduce pain to a minimum, though in some circumstances it may not be possible to remove all pain.

In cases of illness, accident, and post-operative pain, a graded sequence of treatment is becoming established as standard practice. Research is making it easier and simpler to provide the necessary care, with a clearer understanding of the risks and possible side effects.

Measures not involving medication

Comforting

Touching, holding, stroking, keeping warm, talking and singing/music are ways in which adults have been comforting babies since the start of human history. This way of managing pain is shared with other primates, where the actions are performed both by female and male adults. Children who are able to verbalise pain report it to be an ineffective strategy and this is assumed to also be true of babies.

While the pain of a procedure may or may not be affected, the fear is visibly reduced. This works to ameliorate the negative effects of fear in health care situations. It is, therefore, considered good practice to involve parents or care-givers directly, having them present and in contact with the baby whenever possible before a minor painful procedure, such as the drawing of blood, or prior to giving a local anaesthetic injection.

Oral stimulation

Breastfeeding, the use of a pacifier and the administration of sugar orally have been proven to reduce the signs of pain in babies. Electroencephalographic changes are reduced by sugar, but the mechanism for this remains unknown; it does not seem to be endorphin mediated. As in comforting, it is unknown whether the physical pain is reduced, or whether it is the level of anxiety which is affected. However, the reduction in pain behaviour is assumed to be accompanied by a reduction in pain-related disorders, both immediate and longer term.

Oral sugar

Sugar taken orally reduces the total crying time but not the duration of the first cry in newborns undergoing a painful procedure (a single lancing of the heel). It does not moderate the effect of pain on heart rate and a recent single study found that sugar did not significantly affect pain-related electrical activity in the brains of newborns one second after the heel lance procedure. Sweet oral liquid moderately reduces the incidence and duration of crying caused by immunization injection in children between one and twelve months of age.

Sensorial Saturation

Sensorial saturation is based on the competition of various gentle stimuli with pain transmission to the central nervous system: the so-called gate control theory of pain (proposed by Patrick David Wall and Ronald Melzack in 1965). Sensorial saturation follows a "3Ts" rule: using touch, taste and talk to distract the baby and antagonize pain. In babies treated with sensorial saturation, a reduction in crying time and pain score was noted with respect to a control group and with respect to groups in which only oral sugar, only sucking, or a combination of the two was used. The "3Ts" stimuli (touch, talk, and taste) given throughout the painful procedure increase the well-known analgesic effect of oral sugar. Sensorial saturation has been included in several international guidelines for analgesia.

Other techniques

Other "old fashioned" techniques are being tested with some success. "Facilitated tucking", swaddling and "kangaroo care" have been shown to reduce the response of babies to painful or distressful circumstances, while a comprehensive technique of nursing, called "developmental care", has been developed for managing pre-term infants.

Measures involving medication

Local anaesthetics

A variety of topical anaesthetic creams have been developed, ranging from single agents with good skin penetration, to eutectic mixtures of agents and technologically modern formulations of lignocaine in microspheres. They are effective in suitable procedures, if applied correctly and given enough time to act. Disadvantages are the slow onset of adequate anaesthesia, inadequate analgesia for larger procedures, and toxicity of absorbed medication.

Local infiltration anaesthesia, the infiltration of anaesthetic agent directly into the skin and subcutaneous tissue where the painful procedure is to be undertaken, may be effectively used to reduce pain after a procedure under general anaesthesia. To reduce the pain of the initial injection, a topical anaesthetic ointment may be applied.

Regional anaesthesia requires the injection of local anaesthetic around the nerve trunks that supply a limb, or into the epidural space surrounding the spinal cord. It is used for pain relief after surgery, but requires special facilities for observation of the baby until the effect has worn off.

Analgesics

As the site of pain in babies is difficult to confirm, analgesics are often advised against until a proper diagnosis has been performed. For all analgesic drugs, the immaturity of the baby's nervous system and metabolic pathways, the different way in which the drugs are distributed, and the reduced ability of the baby to excrete the drugs through the kidneys make the prescription of dosage important. The potentially harmful side effects of analgesic drugs are the same for babies as they are for adults and are both well known and manageable.

There are three forms of analgesia suitable for the treatment of pain in babies: paracetamol (acetaminophen), the nonsteroidal anti-inflammatory drugs, and the opioids. Paracetamol is safe and effective if given in the correct dosage. The same is true of the nonsteroidal anti-inflammatory drugs, such as ibuprofen (aspirin is seldom used). Of the opioids, morphine and fentanyl are most often used in a hospital setting, while codeine is effective for use at home. Clonidine is thought to have potential to reduce pain in newborn babies but it has yet to be tested in clinical trials.

History

Before the late 19th century

Before the late nineteenth century, babies were considered to be more sensitive to pain than adults. Doris Cope quotes paediatric surgeon Felix Würtz of Basel, writing in 1656:

If a new skin in old people be tender, what is it you think in a newborn Babe? Doth a small thing pain you so much on a finger, how painful is it then to a Child, which is tormented all the body over, which hath but a tender new grown flesh?

Late 19th century

In the late nineteenth century and the first half of the twentieth, doctors were taught that babies did not experience pain, and they treated their young patients accordingly. Procedures ranging from needle sticks to tonsillectomies to heart operations were done with no anaesthesia or analgesia, other than muscle relaxation for the surgery. The belief was that in babies the expression of pain was reflexive and, owing to the immaturity of the infant brain, the pain could not really matter.

Cope considers it probable that the belief arose from misinterpretation of discoveries made in the new science of embryology. Dr Paul Flechsig equated the non-myelinisation of much of a baby’s nervous system with an inability to function.

It was generally believed that babies would not remember any pain that they happened to feel, and that lack of conscious memory meant lack of long-term harm. Scientific studies on animals with various brain lesions were interpreted as supporting the idea that the responses seen in babies were merely spinal reflexes. Furthermore, the whole effort of relieving pain was considered futile since it was thought to be impossible to measure the child's pain.

This, coupled with a concern that use of opiates would lead to addiction, and the time and effort needed to provide adequate analgesia to the newborn, contributed to the medical profession's continued practice of not providing pain relief for babies.

Mid-1980s

In the United States, a major change in practice was brought about by events surrounding one operation. Infant Jeffrey Lawson underwent open heart surgery in 1985. His mother, Jill R. Lawson, subsequently discovered that he had been operated on without any anaesthesia, other than a muscle relaxant. She started a vigorous awareness campaign which created such a public, and medical, reaction that by 1987 medical opinion had come full circle.

A number of studies began on the measurement of pain in young children and on ways of reducing the injury response, and publications on the hormonal and metabolic responses of babies to pain stimuli began to appear, confirming that the provision of adequate anaesthesia and analgesia was better medicine on both humanitarian and physiological grounds.

It is now accepted that the neonate responds more extensively to pain than the adult does, and that exposure to severe pain, without adequate treatment, can have long-term consequences. Despite the difficulty of assessing how much pain a baby is experiencing, and the practical problem of prescribing the correct dosage or technique for treatment, modern medicine is firmly committed to improving the quality of pain relief for the very young.

The effective treatment of pain benefits the baby immediately, reduces some medium-term negative consequences, and likely prevents a number of adult psycho-physiological problems.

Wearable technology

From Wikipedia, the free encyclopedia

Wearable technology, wearables, fashion technology, smartwear, tech togs, streetwear tech, skin electronics or fashion electronics are smart electronic devices (electronic devices with micro-controllers) that are worn close to and/or on the surface of the skin, where they detect, analyze, and transmit information such as body signals (e.g. vital signs) and/or ambient data, and which in some cases allow immediate biofeedback to the wearer.

Wearable devices such as activity trackers are an example of the Internet of Things, since "things" such as electronics, software, sensors, and connectivity enable objects to exchange data (including data quality) through the internet with a manufacturer, operator, and/or other connected devices, without requiring human intervention.

Wearable technology has a variety of applications which grows as the field itself expands. It appears prominently in consumer electronics with the popularization of the smartwatch and activity tracker. A popular activity tracker, the Fitbit, is widely used in the fitness industry to track calories and health-related goals, and a popular smartwatch on the market is the Apple Watch. Apart from commercial uses, wearable technology is being incorporated into navigation systems, advanced textiles, and healthcare. As wearable technology is being proposed for use in critical applications, it has to be vetted for its reliability and security properties.

Watch

History

In the 1500s, German inventor Peter Henlein (1485-1542) created small watches that were worn as necklaces. A century later, pocket watches grew in popularity as waistcoats became fashionable for men. Wristwatches were created in the late 1600s but were worn mostly by women as bracelets.

In the late 1800s, the first wearable hearing aids were introduced.

In 1904, aviator Alberto Santos-Dumont pioneered the modern use of the wristwatch.

In the 1970s, calculator watches became available, reaching the peak of their popularity in the 1980s.

From the early 2000s, wearable cameras were being used as part of a growing sousveillance movement. In 2008, Ilya Fridman incorporated a hidden Bluetooth microphone into a pair of earrings.

In 2010, Fitbit released its first step counter. Wearable technology which tracks information such as walking and heart rate is part of the quantified self movement.

World's First Consumer Released Smart Ring, by McLear/NFC Ring, circa 2013

In 2013, McLear, also known as NFC Ring, released the first widely used advanced wearable device. The smart ring could make bitcoin payments, unlock other devices, transfer personally identifying information, and perform other functions. McLear owns the earliest patent, filed in 2012, which covers all smart rings, with Joseph Prencipe as the sole inventor.

In 2013, one of the first widely available smartwatches was the Samsung Galaxy Gear. Apple followed in 2015 with the Apple Watch.

Prototypes

From 1991 to 1997, Rosalind Picard and her students, Steve Mann and Jennifer Healey, at the MIT Media Lab designed, built, and demonstrated data collection and decision making from "Smart Clothes" that monitored continuous physiological data from the wearer. These "smart clothes", "smart underwear", "smart shoes", and smart jewellery collected data that related to affective state and contained or controlled physiological sensors and environmental sensors like cameras and other devices.

At the same time, also at the MIT Media Lab, Thad Starner and Alex "Sandy" Pentland developed augmented reality. In 1997, their smartglass prototype was featured on 60 Minutes, enabling rapid web search and instant messaging. Though the prototype's glasses were nearly as streamlined as modern smartglasses, the processor was a computer worn in a backpack – the most lightweight solution available at the time.

In 2009, Sony Ericsson teamed up with the London College of Fashion for a contest to design digital clothing. The winner was a cocktail dress with Bluetooth technology that made it light up when a call was received.

Zach "Hoeken" Smith of MakerBot fame made keyboard pants during a "Fashion Hacking" workshop at a New York City creative collective.

The Tyndall National Institute in Ireland developed a "remote non-intrusive patient monitoring" platform which was used to evaluate the quality of the data generated by the patient sensors and how the end users might adapt to the technology.

More recently, London-based fashion company CuteCircuit created costumes for singer Katy Perry featuring LED lighting so that the outfits would change color both during stage shows and appearances on the red carpet such as the dress Katy Perry wore in 2010 at the MET Gala in NYC. In 2012, CuteCircuit created the world's first dress to feature Tweets, as worn by singer Nicole Scherzinger.

In 2010, McLear, also known as NFC Ring, developed the first advanced wearables prototype in the world, which was then fundraised on Kickstarter in 2013.

In 2014, graduate students from the Tisch School of the Arts in New York designed a hoodie that sent pre-programmed text messages triggered by gesture movements.

Around the same time, prototypes for digital eyewear with heads up display (HUD) began to appear.

The US military employs headgear with displays for soldiers using a technology called holographic optics.

In 2010, Google started developing prototypes of its optical head-mounted display Google Glass, which went into customer beta in March 2013.

Usage

In the consumer space, sales of smart wristbands (aka activity trackers such as the Jawbone UP and Fitbit Flex) started accelerating in 2013. One in five American adults has a wearable device, according to the 2014 PriceWaterhouseCoopers Wearable Future Report. As of 2009, the decreasing cost of processing power and other components was facilitating widespread adoption and availability.

In professional sports, wearable technology has applications in monitoring and real-time feedback for athletes. Examples of wearable technology in sport include accelerometers, pedometers, and GPS devices, which can be used to measure an athlete's energy expenditure and movement pattern.

In cybersecurity and financial technology, secure wearable devices have captured part of the physical security key market. McLear, also known as NFC Ring, and VivoKey developed products with one-time pass secure access control.

In health informatics, wearable devices have enabled better capturing of human health statistics for data-driven analysis. This has facilitated data-driven machine learning algorithms that analyse the health condition of users.

Modern technologies

The Fitbit, a modern wearable device

On April 16, 2013, Google invited "Glass Explorers" who had pre-ordered its wearable glasses at the 2012 Google I/O conference to pick up their devices. This day marked the official launch of Google Glass, a device intended to deliver rich text and notifications via a heads-up display worn as eyeglasses. The device also had a 5 MP camera and recorded video at 720p. Its various functions were activated via voice command, such as "OK Glass". The company also launched the Google Glass companion app, MyGlass. The first third-party Google Glass App came from the New York Times, which was able to read out articles and news summaries.

However, in early 2015, Google stopped selling the beta "explorer edition" of Glass to the public, after criticism of its design and the $1,500 price tag.

While optical head-mounted display technology remains a niche, two popular types of wearable devices have taken off: smartwatches and activity trackers. In 2012, ABI Research forecast that smartwatch sales would hit 1.2 million units in 2013, helped by the high penetration of smartphones in many world markets, the wide availability and low cost of MEMS sensors, energy-efficient connectivity technologies such as Bluetooth 4.0, and a flourishing app ecosystem.

Crowdfunding-backed start-up Pebble reinvented the smartwatch in 2013, with a campaign running on Kickstarter that raised more than $10m in funding. At the end of 2014, Pebble announced it had sold a million devices. In early 2015, Pebble went back to its crowdfunding roots to raise a further $20m for its next-generation smartwatch, Pebble Time, which started shipping in May 2015.

Crowdfunding-backed start-up McLear invented the smart ring in 2013, with a campaign running on Kickstarter that raised more than $300k in funding. McLear was the first mover in wearables technology in introducing payments, bitcoin payments, advanced secure access control, quantified self data collection, biometric data tracking, and monitoring systems for the elderly.

In March 2014, Motorola unveiled the Moto 360 smartwatch powered by Android Wear, a modified version of the mobile operating system Android designed specifically for smartwatches and other wearables. Finally, following more than a year of speculation, Apple announced its own smartwatch, the Apple Watch, in September 2014.

Wearable technology was a popular topic at the trade show Consumer Electronics Show in 2014, with the event dubbed "The Wearables, Appliances, Cars and Bendable TVs Show" by industry commentators. Among numerous wearable products showcased were smartwatches, activity trackers, smart jewelry, head-mounted optical displays and earbuds. Nevertheless, wearable technologies are still suffering from limited battery capacity.

Another field of application of wearable technology is monitoring systems for assisted living and eldercare. Wearable sensors have a huge potential in generating big data, with a great applicability to biomedicine and ambient assisted living. For this reason, researchers are moving their focus from data collection to the development of intelligent algorithms able to glean valuable information from the collected data, using data mining techniques such as statistical classification and neural networks.

Wearable technology can also collect biometric data such as heart rate (ECG and HRV), brainwave (EEG), and muscle bio-signals (EMG) from the human body to provide valuable information in the field of health care and wellness.
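
As a minimal illustration of how such bio-signals become wellness metrics, the sketch below computes RMSSD, a commonly used heart-rate-variability measure, from a list of beat-to-beat (RR) intervals. The sample intervals are invented; a real device would derive them from the ECG or optical sensor mentioned above.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals, a standard HRV metric."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Invented beat-to-beat intervals in milliseconds, e.g. as streamed from a chest-strap ECG.
sample_rr = [812, 845, 790, 830, 815, 860, 825]
print(f"RMSSD: {rmssd(sample_rr):.1f} ms")
```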

Another increasingly popular wearable technology involves virtual reality. VR headsets have been made by a range of manufacturers for computers, consoles, and mobile devices. Recently Google released its headset, the Google Daydream.

In July 2014, smart technology footwear was introduced in Hyderabad, India. The shoe insoles are connected to a smartphone application that uses Google Maps, and vibrate to tell users when and where to turn to reach their destination.

In addition to commercial applications, wearable technology is being researched and developed for a multitude of uses. The Massachusetts Institute of Technology is one of the many research institutions developing and testing technologies in this field. For example, research is being done to improve haptic technology for its integration into next-generation wearables. Another project focuses on using wearable technology to assist the visually impaired in navigating their surroundings.

As wearable technology continues to grow, it has begun to expand into other fields. The integration of wearables into healthcare has been a focus of research and development for various institutions. Wearables continue to evolve, moving beyond devices and exploring new frontiers such as smart fabrics. Applications involve using a fabric to perform a function such as integrating a QR code into the textile, or performance apparel that increases airflow during exercise.

Wearable technology and health

Samsung Galaxy Watch is designed specifically for sports and health functions, including a step counter and a heart rate monitor.

Wearable technology is often used to monitor a user's health. Given that such a device is in close contact with the user, it can easily collect data. Such monitoring started as early as 1980, when the first wireless ECG was invented. Recent decades have shown rapid growth in research on textile-based, tattoo, patch, and contact-lens form factors.

Wearables can be used to collect data on a user's health including:

  • Heart rate
  • Calories burned
  • Steps walked
  • Blood pressure
  • Release of certain biochemicals
  • Time spent exercising
  • Seizures
  • Physical strain

These functions are often bundled together in a single unit, like an activity tracker or a smartwatch like the Apple Watch Series 2 or Samsung Galaxy Gear Sport. Devices like these are used for physical training and monitoring overall physical health, as well as alerting to serious medical conditions such as seizures (e.g. Empatica Embrace).

Currently other applications within healthcare are being explored, such as:

  • Forecasting changes in mood, stress, and health
  • Measuring blood alcohol content
  • Measuring athletic performance
  • Monitoring how sick the user is
  • Long-term monitoring of patients with heart and circulatory problems that records an electrocardiogram and is self-moistening
  • Health Risk Assessment applications, including measures of frailty and risks of age-dependent diseases
  • Automatic documentation of care activities.

While wearables can collect data in aggregate form, most of them are limited in their ability to analyze or make conclusions based on this data; thus, most are used primarily for general health information. (An exception is seizure-alerting wearables, which continuously analyze the wearer's data and make a decision about calling for help; the data collected can then provide doctors with objective evidence that they may find useful in diagnoses.) Wearables can account for individual differences, although most just collect data and apply one-size-fits-all algorithms.

Today, there is a growing interest in using wearables not only for individual self-tracking, but also within corporate health and wellness programs. Given that wearables create a massive data trail which employers could repurpose for objectives other than health, more and more research has begun to study the dark side of wearables. Asha Peta Thompson founded Intelligent Textiles Limited, which creates woven power banks and circuitry that can be used in e-uniforms for infantry.

Epidermal (skin) Electronics

Epidermal electronics is an emerging field of wearable technology, termed for their properties and behaviors comparable to those of the epidermis, or outermost layer of the skin. These wearables are mounted directly onto the skin to continuously monitor physiological and metabolic processes, both dermal and subdermal. Wireless capability is typically achieved through battery, Bluetooth or NFC, making these devices convenient and portable as a type of wearable technology. Currently, epidermal electronics are being developed in the fields of fitness and medical monitoring.

Current usage of epidermal technology is limited by existing fabrication processes. Its current application relies on various sophisticated fabrication techniques, such as lithography or direct printing on a carrier substrate before attachment to the body. Research into printing epidermal electronics directly onto the skin is so far limited to a single study.

The significance of epidermal electronics involves their mechanical properties, which resemble those of skin. The skin can be modeled as a bilayer, composed of an epidermis having Young's modulus (E) of 2–80 kPa and thickness of 0.3–3 mm and a dermis having E of 140–600 kPa and thickness of 0.05–1.5 mm. Together this bilayer responds plastically to tensile strains ≥ 30%, below which the skin's surface stretches and wrinkles without deforming. Properties of epidermal electronics mirror those of skin to allow them to perform in this same way. Like skin, epidermal electronics are ultrathin (h < 100 μm), low-modulus (E ~ 70 kPa), and lightweight (<10 mg/cm²), enabling them to conform to the skin without applying strain. Conformal contact and proper adhesion enable the device to bend and stretch without delaminating, deforming or failing, thereby eliminating the challenges with conventional, bulky wearables, including measurement artifacts, hysteresis, and motion-induced irritation to the skin. With this inherent ability to take the shape of skin, epidermal electronics can accurately acquire data without altering the natural motion or behavior of skin. The thin, soft, flexible design of epidermal electronics resembles that of temporary tattoos laminated on the skin. Essentially, these devices are "mechanically invisible" to the wearer.

Epidermal electronic devices may adhere to the skin via van der Waals forces or elastomeric substrates. With only van der Waals forces, an epidermal device has the same thermal mass per unit area (150 mJ/cm²·K) as skin when the device's thickness is less than 500 nm. Along with van der Waals forces, the low values of E and thickness are effective in maximizing adhesion because they prevent deformation-induced detachment due to tension or compression. Introducing an elastomeric substrate can improve adhesion but will raise the thermal mass per unit area slightly. Several materials have been studied to produce these skin-like properties, including photolithography-patterned serpentine gold nanofilm and patterned doping of silicon nanomembranes.
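
To make the quoted numbers concrete, the sketch below checks an invented device design against the skin-like targets mentioned above (thickness under 100 μm, modulus on the order of 70 kPa, areal mass under 10 mg/cm²). The thresholds come from the text; the candidate device values are assumptions for illustration.

```python
# Illustrative check of an invented epidermal-device design against the skin-like
# targets quoted above; not a statement about any real device.
TARGET_THICKNESS_UM = 100.0   # ultrathin: h < 100 micrometres
TARGET_MODULUS_KPA = 70.0     # low modulus: E on the order of 70 kPa
TARGET_AREAL_MASS = 10.0      # lightweight: < 10 mg per square centimetre

def is_skin_like(thickness_um, modulus_kpa, areal_mass_mg_cm2):
    """Return True when a design stays within the quoted skin-like ranges."""
    return (thickness_um < TARGET_THICKNESS_UM
            and modulus_kpa <= TARGET_MODULUS_KPA
            and areal_mass_mg_cm2 < TARGET_AREAL_MASS)

print(is_skin_like(thickness_um=30, modulus_kpa=60, areal_mass_mg_cm2=4))    # True
print(is_skin_like(thickness_um=250, modulus_kpa=500, areal_mass_mg_cm2=20)) # False
```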

Entertainment

A fully wearable Walkman music player (W series)

Wearables have expanded into the entertainment space by creating new ways to experience digital media. Virtual reality headsets and augmented reality glasses have come to exemplify wearables in entertainment. The influence of these headsets and glasses was seen mostly in the gaming industry in their early days, but they are now also used in the fields of medicine and education.

Virtual reality headsets such as the Oculus Rift, HTC Vive, and Google Daydream View aim to create a more immersive media experience by either simulating a first-person experience or displaying the media in the user's full field of vision. Television, films, video games, and educational simulators have been developed for these devices to be used by working professionals and consumers. In a 2014 expo, Ed Tang of Avegant presented his "Smart Headphones". These headphones use Virtual Retinal Display to enhance the experience of the Oculus Rift. Some augmented reality devices fall under the category of wearables. Augmented reality glasses are currently in development by several corporations. Snap Inc.'s Spectacles are sunglasses that record video from the user's point of view and pair with a phone to post videos on Snapchat. Microsoft has also delved into this business, releasing augmented reality glasses, HoloLens, in 2017. The device explores using digital holography, or holograms, to give the user a first-hand experience of augmented reality. These wearable headsets are used in many different fields, including the military.

Wearable technology has also expanded from small pieces of technology on the wrist to apparel all over the body. There is a shoe made by the company ShiftWear that uses a smartphone application to periodically change the design displayed on the shoe. The shoe is made from normal fabric but has a display along the midsection and back that shows a design of the wearer's choice. The application was available by 2016, and a prototype of the shoes was created in 2017.

Another example of this can be seen with Atari's headphone speakers. Atari and Audiowear are developing a face cap with built-in speakers. The cap will feature speakers built into the underside of the brim, and will have Bluetooth capabilities. Jabra released earbuds in 2018 that cancel the noise around the user and can toggle a setting called "hearthrough". This setting takes the sound around the user through the microphone and sends it to the user, giving augmented sound during a commute so that users can hear their surroundings while listening to their favorite music. Many other devices can be considered entertainment wearables and need only be devices worn by the user to experience media.

Gaming

The gaming industry has always incorporated new technology. The first technology used for electronic gaming was a controller for Pong. The way users game has continuously evolved through each decade. Currently, the two most common forms of gaming are using a controller for video game consoles or a mouse and keyboard for PC games.

In 2012, virtual reality headsets were reintroduced to the public. VR headsets were first conceptualized in the 1950s and officially created in the 1960s. The creation of the first virtual reality headset can be credited to cinematographer Morton Heilig, who created a device known as the Sensorama in 1962. The Sensorama was a videogame-like device so heavy that it needed to be held up by a suspension device. There have been numerous wearable technologies within the gaming industry, from gloves to foot boards, and the gaming space has its share of offbeat inventions. In 2016, Sony debuted its first portable, connectable virtual reality headset, codenamed Project Morpheus. The device was rebranded for PlayStation in 2018. In early 2019, Microsoft debuted its HoloLens 2, which goes beyond virtual reality into a mixed reality headset. Its main focus is use by working professionals to help with difficult tasks. These headsets are used by educators, scientists, engineers, military personnel, surgeons, and many more. Headsets such as the HoloLens 2 allow the user to see a projected image at multiple angles and interact with the image, giving a hands-on experience that the user otherwise could not get.

Fashion

Fashionable wearables are "designed garments and accessories that combines aesthetics and style with functional technology." Garments are the interface to the exterior, mediated through digital technology, and allow endless possibilities for the dynamic customization of apparel. All clothes have social, psychological and physical functions; with the use of technology, however, these functions can be amplified. Some wearables are called e-textiles: combinations of textiles (fabric) and electronic components that create wearable technology within clothing. They are also known as smart textiles and digital textiles.

Wearables are made from a functionality perspective or from an aesthetic perspective. When made from a functionality perspective, designers and engineers create wearables to provide convenience to the user. Clothing and accessories are used as a tool to provide assistance to the user. Designers and engineers are working together to incorporate technology in the manufacturing of garments in order to provide functionalities that can simplify the lives of users. For example, through smartwatches people have the ability to communicate on the go and track their health. Moreover, smart fabrics interact directly with the user, as they can sense the wearer's movements. This helps to address concerns such as privacy, communication and well-being. Years ago, fashionable wearables were functional but not very aesthetic. As of 2018, wearables are quickly growing to meet fashion standards through the production of garments that are stylish and comfortable. Furthermore, when wearables are made from an aesthetic perspective, designers experiment with their work by using technology and collaborating with engineers. These designers explore the different techniques and methods available for incorporating electronics in their designs. They are not constrained by one set of materials or colors, as these can change in response to the embedded sensors in the apparel. They can decide how their designs adapt and respond to the user.

In 1967, French fashion designer Pierre Cardin, known for his futuristic designs, created a collection of garments entitled "robe electronique" that featured a geometric embroidered pattern with LEDs (light-emitting diodes). Pierre Cardin's unique designs were featured in an episode of the Jetsons animated show, in which one of the main characters demonstrates how her luminous "Pierre Martian" dress works by plugging it into the mains. An exhibition about the work of Pierre Cardin was recently on display at the Brooklyn Museum in New York.

In 1968, the Museum of Contemporary Craft in New York City held an exhibition named Body Covering which presented the infusion of technological wearables with fashion. Some of the projects presented were clothing that changed temperature, and party dresses that light up and produce noises, among others. The designers from this exhibition creatively embedded electronics into the clothes and accessories to create these projects. As of 2018, fashion designers continue to explore this method in the manufacturing of their designs by pushing the limits of fashion and technology.

House of Holland and NFC Ring

McLear, also known as NFC Ring, in partnership with the House of Henry Holland and Visa Europe Collab, showcased an event entitled "Cashless on the Catwalk" at the Collins Music Hall in Islington. Celebrities walking through the event could make purchases for the first time in history from a wearable device using McLear's NFC Rings by tapping the ring on a purchase terminal.

CuteCircuit

CuteCircuit pioneered the concept of interactive and app-controlled fashion with the creation in 2008 of the Galaxy Dress (part of the permanent collection of the Museum of Science and Industry in Chicago, US) and in 2012 of the tshirtOS (now infinitshirt). CuteCircuit fashion designs can interact and change colour, providing the wearer a new way of communicating and expressing their personality and style. CuteCircuit's designs have been worn on the red carpet by celebrities such as Katy Perry and Nicole Scherzinger, and are part of the permanent collections of the Museum of Fine Arts in Boston.

Project Jacquard

Project Jacquard, a Google project led by Ivan Poupyrev, has been combining clothing with technology. Google collaborated with Levi Strauss to create a jacket that has touch-sensitive areas that can control a smartphone. The cuff-links are removable and charge in a USB port.

Intel & Chromat

Intel partnered with the brand Chromat to create a sports bra that responds to changes in the body of the user, as well as a 3D printed carbon fiber dress that changes color based on the user's adrenaline levels. Intel also partnered with Google and TAG Heuer to make a smart watch.

Iris van Herpen

Iris Van Herpen's water dress

Smart fabrics and 3D printing have been incorporated in high fashion by the designer Iris van Herpen. Van Herpen was the first designer to incorporate 3D printing technology of rapid prototyping into the fashion industry. The Belgian company Materialise NV collaborates with her in the printing of her designs.

Manufacturing Process of E-textiles

There are several methods by which companies manufacture e-textiles, from fiber to garment, and insert electronics into the process. One method under development prints stretchable circuits directly into a fabric using conductive ink, which contains metal fragments that make it electrically conductive. Another method uses conductive thread or yarn: a non-conductive fiber (such as polyester, PET) is coated with a conductive material such as gold or silver to produce coated yarns for an e-textile.

Common fabrication techniques for e-textiles include the following traditional methods:

  • Embroidery
  • Sewing
  • Weaving
  • Non-woven
  • Knitting
  • Spinning
  • Braiding
  • Coating
  • Printing
  • Laying

Military

Wearable technology within the military ranges from educational purposes and training exercises to sustainability technology.

The technology used for educational purposes within the military consists mainly of wearables that track a soldier's vitals. Tracking a soldier's heart rate, blood pressure, emotional status, and so on helps the research and development team best support the soldiers. According to chemist Matt Coppock, he has started to enhance a soldier's lethality by collecting different biorecognition receptors; doing so will eliminate emerging environmental threats to the soldiers.

With the emergence of virtual reality it is only natural to start creating training simulations using VR, which better prepare users for whatever situation they are training for. In the military there are combat simulations that soldiers train on. The military uses VR to train its soldiers because it is the most interactive and immersive experience the user can have without being put in a real situation. Recent simulations include a soldier wearing a shock belt during a combat simulation: each time they are shot, the belt releases a certain amount of electricity directly to the user's skin. This is intended to simulate a gunshot wound in the most humane way possible.

There are many sustainability technologies that military personnel wear in the field. One of these is a boot insert, which gauges how soldiers are carrying the weight of their equipment and how daily terrain factors affect mission planning optimization. These sensors not only help the military plan the best timeline but also help keep the soldiers in the best physical and mental health.

Issues and concerns

The FDA drafted guidance for low-risk devices which advises that personal health wearables are general wellness products if they only collect data on weight management, physical fitness, relaxation or stress management, mental acuity, self-esteem, sleep management, or sexual function. This was due to the privacy risks surrounding the devices: as more devices were used and improved, they would soon be able to tell whether a person is showing certain health issues and suggest a course of action. With the rising adoption of these devices, the FDA drafted this guidance to decrease the risk to a patient in case an app does not function properly. The ethics are also debated, because although such devices help track health and promote independence, gaining the information still involves an invasion of privacy. Huge amounts of data have to be transferred, which could raise issues for both the user and the companies if a third party gains access to the data. There was an issue with Google Glass used by surgeons to track the vital signs of a patient, which raised privacy concerns relating to third-party use of non-consented information. Consent is also an issue with wearable technology, because such devices give the ability to record, which is a problem when permission is not asked of the person being recorded.

Compared to smartphones, wearable devices pose several new reliability challenges to device manufacturers and software developers. Limited display area, limited computing power, limited volatile and non-volatile memory, non-conventional shape of the devices, abundance of sensor data, complex communication patterns of the apps, and limited battery size – all these factors can contribute to salient software bugs and failure modes, such as resource starvation or device hangs. Moreover, since many wearable devices are used for health purposes (either monitoring or treatment), their accuracy and robustness issues can give rise to safety concerns. Some tools have been developed to evaluate the reliability and the security properties of these wearable devices. The early results point to a weak spot in wearable software whereby overloading of the devices, such as through high UI activity, can cause failures.

Bilingual education

From Wikipedia, the free encyclopedia
 
Children at school

Bilingual education involves teaching academic content in two languages, a native and a secondary language, with varying amounts of each language used in accordance with the program model. Bilingual education refers to the utilization of two languages as the means of instruction for students, constituting part or all of the school curriculum, as distinct from simply teaching a second language as a subject.

Importance of bilingual education

Children's Bilingual Theater Dr Seuss Day
 
The bilingual French-speaking school Trung Vuong

Bilingual education is viewed by educators as the "pathway to bilingualism", which allows learners to develop proficiency and literacy in both their mother tongue and second language. Competency in two languages is believed to broaden students' opportunities to communicate with people from other communities or to revive another language. Another advantage of bilingual education is "promoting equal education" and becoming "the cure and not the cause of underachievement", as it gives students an opportunity to showcase their knowledge and skills in their first language. When students' first language is valued and used as a resource for learning, it has a positive effect on learners' self-esteem and "identity affirmation".

Not only does bilingual education introduce new linguistics and maintain home languages, but it also promotes cultural and linguistic diversity. This allows for positive intercultural communication, which can lead to a better understanding of cultural and linguistic differences. As Baker and Wright (2017) point out, children in dual language bilingual schools “are likely to be more tolerant, respectful, sensitive and equalized in status. Genuine cross-cultural friendships may develop, and issues of stereotyping and discrimination may be diminished”. The official language policy of International Baccalaureate Organization (2014) also emphasizes the importance of “cultivation of intercultural awareness, international-mindedness, and global citizenship”  in international schools where students speak more than two languages. Other benefits of bilingual education are considered to be improved cognitive performance, "particularly in the performance of complex tasks that are controlled by executive functioning processes and working memory" and such economic advantages as increased job and education opportunities around the world. Bilingual education can also revive native languages in colonised countries.

Bilingual education program models

The following section surveys several different types of bilingual education program models.

Immersion bilingual education

Immersion is a type of bilingual education in which subjects are taught in a student's second language. The students are immersed into a classroom in which the subject is taught entirely in their second language (non-native language). There are different facets of immersion in schools.

Total immersion

Total immersion is a type of bilingual education in which the whole class is only taught in the second language, without any use of the native language.

While not explicitly referred to as a bilingual education program in the United States of America, the Structured English Immersion program, required by the states of California, Arizona, and Massachusetts, is a total immersion program, as the whole class is taught only in the students' second language, English.

Partial immersion

The second type of bilingual education is known as partial immersion, in which the native language is used in the class and about half of the class time is spent learning the second language.

Two-way or dual language immersion

The third type of immersion within schools is called two-way immersion, also known as dual immersion. Dual immersion occurs when half of the students in class natively speak the second language while the other half do not. Dual immersion encourages each group of students to work together in learning each other’s language.

Dual language or two-way immersion education refers to programs that provide grade-level content and literacy instruction to all students through two languages, English and a partner language. These programs are designed to help native and non-native English speakers become bilingual and biliterate. There are four main types of dual language programs, which differ according to how a student would best learn with dual language immersion based on their previous language skills.

The first type are developmental, or maintenance bilingual programs. These programs enroll students who are native speakers of the partner language to learn English. The second type are bilingual immersion programs. These programs enroll both native English speakers and native speakers of the partner language. The third type are foreign language immersion programs. These programs primarily enroll students who speak English as their native language. Finally, the fourth type are heritage language programs. These programs enroll students who are primarily dominant in English, but a close relative (e.g. parent or grandparent) speaks the partner language.

Another form of bilingual education is a type of dual language program that has students study in two different ways: 1) A variety of academic subjects are taught in the students' second language, with specially trained bilingual teachers who can understand students when they ask questions in their native language, but always answer in the second language; and 2) Native language literacy classes improve students' writing and higher-order language skills in their first language. Research has shown that many of the skills learned in the native language can be transferred easily to the second language later. In this type of program, the native language classes do not teach academic subjects. The second-language classes are content-based, rather than grammar-based, so students learn all of their academic subjects in the second language. Dual language is a type of bilingual education where students learn about reading and writing in two languages. In the United States, the majority of programs are English and Spanish but new partner languages have emerged lately such as Japanese, Korean, French, Mandarin, and Arabic. The concept of dual language promotes bilingualism, improved awareness of cultural diversity, and higher levels of academic achievement by means of lessons in two languages.

The 90/10 and 50/50 models

There are two basic models for dual language immersion. The first is the 90/10 model, in which the two-way bilingual immersion program delivers 90% of instruction in grades K–1 in the minority language, which is less supported by the broader society, and 10% in the majority language. This proportion gradually shifts toward the majority language until the curriculum is equally divided between the two languages by 5th grade. The two-way bilingual immersion program is based on the principle of clear curriculum separation of the two languages of instruction. Teachers do not repeat or translate the subject matter in the second language but strengthen concepts taught in one language across the two languages in a spiral curriculum in order to provide cognitive challenge (Thomas & Collier, 1997). The languages of instruction are alternated by theme or content area. This type of immersion is required to develop dual language proficiency, as social language can be mastered in a couple of years, but a higher level of competency is required to read social studies texts or solve mathematics word problems, roughly around 5 to 7 years (Collier, 1987). The goal of gradually increasing the share of the majority language is for instruction to become 50% English and 50% the partner language. The second model is the 50/50 model, in which English and the partner language are used equally throughout the program.
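
As an illustration of the 90/10 ramp described above, the sketch below assumes a simple 10-percentage-point shift per grade from grade 2 onward (the exact schedule varies by program); the 50/50 model keeps an even split throughout.

```python
# Illustrative sketch of the two dual-immersion models; the grade-by-grade ramp
# for the 90/10 model is an assumed linear schedule, not a prescribed one.
def partner_language_share(grade, model="90/10"):
    """Percentage of instruction delivered in the partner (minority) language."""
    if model == "50/50":
        return 50                          # equal split throughout the program
    if grade <= 1:                         # kindergarten and grade 1
        return 90
    return max(50, 90 - 10 * (grade - 1))  # shift 10 points per grade until 50/50 by grade 5

for grade in range(6):
    share = partner_language_share(grade)
    print(f"Grade {grade}: {share}% partner language / {100 - share}% English")
```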

Dual immersion programs in the US

Dual immersion classrooms encourage students' native language development, making an important contribution to heritage language maintenance, and allow language minority students to remain in classrooms with their native English-speaking peers, resulting in linguistic and socio-cultural advantages. As of May 2005, there were 317 dual immersion programs operating in elementary schools in the United States in 10 different languages.

Dual language programs are less common in US schools, although research indicates they are extremely effective in helping students learn English well and aiding the long-term performance of English learners in school. Native English speakers benefit by learning a second language. English language learners (ELLs) are not segregated from their peers. These students are taught in their mother tongue yet still in the typical "American" classroom, for both cognitive and social benefits.

Transitional bilingual education

Transitional bilingual education involves education in a child's native language to ensure that students do not fall behind in content areas such as mathematics, science, and social studies while they are learning English. When the child's English proficiency is deemed satisfactory, they can then transition to an English Only (EO) environment. Research has shown that many of the skills learned in the native language can be transferred easily to the second language later. While the linguistic goal of such programs is to help students transition to mainstream, English-only classrooms, the use of the student's primary language as a vehicle to develop literacy skills and acquire academic knowledge also prevents the degeneration of a child's native language. This program model is often used in U.S. public schools.

English as a second language

This program entails learning English in classes with other students who share the same native language. ESL is a supplementary, comprehensive English language program for students trying to learn the language to better function in American society. People learn English as a second language in countries where English has been assigned communicative status: Singapore, India, Malawi, and 50 other territories use English in their leading institutions, where it plays a second-language role in a multilingual society. ESL is different from EFL (English as a foreign language). ESL is offered at many schools to accommodate culturally diverse students, most often found in urban areas, and helps these students keep up with subjects such as math and science. To teach ESL abroad, a bachelor's degree and an ESL teaching qualification are typically required at minimum.

Late-exit or developmental bilingual education

In this program model, education is in the child's native language for an extended duration, accompanied by education in English. The goal is to develop literacy in the child's native language first and transfer these skills to the second language. This kind of education is ideal for many English-learning students, but in many instances the resources for it are not available.

Effects of mother-tongue instruction

Continuing to foster children's abilities in their mother tongue alongside other languages has proven essential for their personal and educational development, because they retain their cultural identity and gain a deeper understanding of language. Two 2016 studies of mother-tongue instruction, in Ethiopia and Kenya respectively, show that it had positive outcomes for students in both countries. The following list contains multiple benefits that researchers have found from children being educated bilingually.

Empathy

Theory of mind is connected to empathy because it helps us understand the beliefs, desires, and thoughts of others. Researchers studying theory of mind in bilingual and monolingual preschoolers found that bilingual preschoolers performed significantly better on theory-of-mind false belief tasks than their monolingual peers.

Reading

Researchers found that students in a dual-language immersion program in Portland, Oregon, performed better in English reading and writing skills than their peers.

Attention

Many studies have shown that bilingual children tend to have better executive function abilities. These are often measured using tasks that require inhibition and task switching. Bilingual children are typically able to hold their attention for longer without becoming distracted and are better able to switch from one task to another.

School performance and engagement

Researchers Wayne Thomas and Virginia Collier conducted school program evaluation research across 15 states. They found that students in dual-language classroom environments had better outcomes than their peers in English-only classrooms with regard to attendance, behavior, and parent involvement.

By country or region

Naïve cynicism

From Wikipedia, the free encyclopedia

Naïve cynicism is a philosophy of mind, cognitive bias and form of psychological egoism that occurs when people naïvely expect more egocentric bias in others than is actually the case.

[Figure: flow chart of naïve cynicism]

The term was formally proposed by Justin Kruger and Thomas Gilovich and has been studied across a wide range of contexts including: negotiations, group-membership, marriage, economics, government policy and more.

History

Early examples from social psychology (1949)

The idea that 'people naïvely believe they see things objectively and others do not' has been acknowledged for quite some time in the field of social psychology. For example, while studying social cognition, Solomon Asch and Gustav Ichheiser wrote in 1949:

"[W]e tend to resolve our perplexity arising out of the experience that other people see the world differently than we see it ourselves by declaring that those others, in consequence of some basic intellectual and moral defect, are unable to see the things “as they really are” and to react to them “in a normal way.” We thus imply, of course, that things are in fact as we see them and that our ways are the normal ways."

Formal laboratory experimentation (1999)

The formal proposal of naïve cynicism came from Kruger and Gilovich's 1999 study called "'Naive cynicism' in everyday theories of responsibility assessment: On biased assumptions of bias".

Theory

The theory of naïve cynicism can be described as:

  1. I am not biased.
  2. You are biased if you disagree with me.
  3. Your intentions/actions reflect your underlying egocentric biases.

A counter to naïve realism

As with naïve realism, the theory of naïve cynicism hinges on the acceptance of the following three beliefs:

  1. I am not biased.
  2. Reasonable people are not biased.
  3. All others are biased.

Naïve cynicism can be thought of as the counter to naïve realism, which is the belief that an individual perceives the social world objectively while others perceive it subjectively.

It is important to note that naïve cynicism reflects the notion that others have an egocentric bias that motivates them to act in their own self-interest rather than for altruistic reasons.

Both of these theories, however, relate to the extent that adults credit or discredit the beliefs or statements of others.

Relating to psychological egoism

Psychological egoism is the belief that humans are always motivated by self-interest.

In a related critique, Joel Feinberg, in his 1958 paper "Psychological Egoism", draws attention to the infinite regress implied by psychological egoism:

"All men desire only satisfaction."
"Satisfaction of what?"
"Satisfaction of their desires."
"Their desires for what?"
"Their desires for satisfaction."
"Satisfaction of what?"
"Their desires."
"For what?"
"For satisfaction"—etc., ad infinitum.

The circular reasoning in Feinberg's dialogue illustrates how, on this view, other people are assumed to be driven by an endless cycle of personal desire and satisfaction.

Definition

There are several ways to define naïve cynicism such as:

  • The tendency to expect that others' judgments will be motivationally biased, and therefore skewed in the direction of their self-interest.
  • Expecting that others are motivationally biased when determining responsibility for positive and negative outcomes.
  • The propensity to believe that others are prone to committing the fundamental attribution error, false consensus effect or self-enhancement bias.
  • When our assumptions of others' biases exceed their actual biases.

Examples

Historical examples

Cold War

The American reaction to a Russian SALT treaty during the Cold War is one well-known example of naïve cynicism in history. Political leaders negotiating on behalf of the United States discredited the offer simply because it was proposed by the Russian side.

Former U.S. congressman Floyd Spence illustrated naïve cynicism in this quote:

"I have had a philosophy for some time in regard to SALT, and it goes like this: the Russians will not accept a SALT treaty that is not in their best interest, and it seems to me that if it is their best interests, it can‘t be in our best interest."

Marketplace

Consumers exhibit naïve cynicism toward commercial companies and advertisers through suspicion of, and wariness about, the marketing strategies used by such companies.

Politics

The public displays naïve cynicism toward governments and political leaders through distrust.

Decision-making behavior

Naïve cynicism can also be exemplified in various decision-making behaviors.

Other possible real-world examples

  • Overestimating the influence of financial compensation on people’s willingness to give blood.
  • If another person tends to favor himself when interpreting uncertain information, someone exhibiting naïve cynicism would believe the other person is intentionally misleading them for their own advantage.
  • Assuming that group membership has a large influence on beliefs and attitudes.
    • If an individual of one political party makes an interpretation or a statement in favor of his own party and thus in accord with his self-interest, other adults discount his statement (especially if they belong to an opposing party).
      • Likewise, if that individual makes a statement against his own self-interest, adults are more likely to believe him.

Resulting negative outcomes

Naïve cynicism can contribute to several negative outcomes including:

  • Over-thinking the actions of others.
  • Making negative attributions about others' motivations without sufficient cause.
  • Missing opportunities that greater trust might capture.

Reducing

The major strategy shown to attenuate naïve cynicism in individuals is:

  • Viewing the other person as part of one's in-group or acknowledging they are working in cooperation.

As a result of applying this strategy, happier married couples were less likely to exhibit cynical beliefs about each other’s judgments.

Individuals are especially likely to exhibit naïve cynicism when the other person has a vested interest in the judgment at hand. However, if the other person is dispassionate about the judgment, the individual is less likely to engage in naïve cynicism and will instead expect the other person to see things the way they do.

Psychological contexts

Naïve cynicism may play a major role in these psychology-related contexts:

Groups

In one series of classic experiments by Kruger and Gilovich, groups including video game players, darts players and debaters were asked how often they were responsible for good or bad events relative to their partner. Participants apportioned responsibility to themselves roughly evenly for good and bad events, but expected their partners to claim more responsibility for good events than for bad events (an egocentric bias) than the partners actually did.

Marriage

In the same study conducted by Kruger and Gilovich, married couples were also examined and ultimately exhibited the same type of naïve cynicism about their marriage partner as did partners of dart players, video game players and debaters.

Altruism

Naïve cynicism has been exemplified in the context of altruism. Selfless human behavior has been explained in terms of individuals seeking personal advantage rather than acting out of absolute altruism.

For example, naïve cynicism would explain the act of an individual donating money or items to charity as a way of self-enhancement rather than for altruistic purposes.

Dispositionists vs. situationists

Dispositionists are described as individuals who believe people's actions are conditioned by some internal factor, such as beliefs, values, personality traits or abilities, rather than the situation they find themselves in.

Situationists, in contrast, are described as individuals who believe people's actions are conditioned by external factors outside of one's control.

Dispositionists exemplify naïve cynicism while situationists do not. Therefore, situationist attributions are often thought to be more accurate than dispositionist attributions. However, dispositionist attributions are commonly believed to be the dominant schema because situationist attributions tend to be dismissed or attacked.

In a direct quote from Benforado and Hanson's paper titled "Naïve Cynicism: Maintaining False Perceptions in Policy Debates", the situationist and dispositionist are described as:

"...the naïve cynic is a self-aware—even proud—critic.
She speaks what she believes to be the truth, though it may require disparaging her opponents.
She senses that she is delving below the surface of the complex arguments of the situationists; she “sees,” for example, the financial interest, the prejudice, or the distorting zealotry that motivates the situationist.
She “sees” the bias and self-interest in those who would disagree—while maintaining an affirming view of herself as objective and other-regarding.
The naïve cynic, then, is a dispositionist who cynically dispositionalizes the situationist. She protects fundamentally flawed attributions by attacking the sources of potentially more accurate attributions.
The naïve cynic understands (though rarely consciously) that the best defense is a good offense."

Examples of differences

  • Dispositionists might explain bankruptcy as the largely self-inflicted result of personal laziness and/or imprudence.
  • Situationists might explain bankruptcy as frequently caused by more complicated external forces, such as divorce or the medical and other costs of unanticipated illness.

Applied contexts

In addition to purely psychological contexts, naïve cynicism may play a major role in several applied contexts such as:

Negotiations

Naïve cynicism has been studied extensively in several negotiation contexts, such as bargaining tactics, particularly in the sense that too much naïve cynicism can be costly.

Negotiators displaying naïve cynicism rely on distributive and competitive strategies and focus on gaining concessions rather than on problem-solving. This can be detrimental, leading to reduced information exchange, concealment of information, and diminished coordination, cooperation, and quality of the information revealed.

Reducing naïve cynicism in negotiations

The following strategies have been identified as ways to reduce naïve cynicism in the context of negotiations:

Perspective taking

It has been shown that individuals who focused more on the perspectives of their opponents were more successful in experimental laboratory negotiations.

Taking another person's perspective produced better predictions of opponents' goals and biases, though it is noted that many individuals lack the ability to properly change perspectives. Such individuals tend to view their opponents as passive and are prone to ignoring valuable information disseminated during negotiations.

Negotiator role reversal is believed to be an important focus of training for individuals to be less inclined to use naïve cynicism during negotiations. This process involves having each negotiator verbally consider their opponent's perspective prior to making any judgments.

Communication

It has been shown that when communication between opponents in a negotiation is strong, negotiators are more likely to avoid stalemates.

Negotiators who exhibit strong communication skills tend to believe integrity should be reciprocally displayed by both sides and thus regard open communication as a positive aspect in negotiations. Those negotiators high in communication skills also tend to view deadlocks as a negative event and will avoid reaching an impasse in order to reach an agreement.

Feedback

Despite attempts to reduce errors of naïve cynicism in laboratory studies with error-related feedback, errors still persisted even after many trials and strong feedback.

Governmental policy debates

In relation to governmental policy debates, it is hypothesized that naïve cynicism fosters a distrust of other political parties and entities. Naïve cynicism is thought to be a critical factor in why certain legal policies succeed and others fail.

For example, naïve cynicism is thought to be a factor contributing to the existence of detention centers such as Abu Ghraib, Guantanamo Bay, Bagram Air Base and more.

Related biases

Several biases have been argued to be caused at least partially by naïve cynicism.

History of agriculture in Palestine

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/His...