
Friday, August 9, 2019

Intellectual disability

From Wikipedia, the free encyclopedia
 
Intellectual disability
Other names: Intellectual developmental disability (IDD), general learning disability
Children with intellectual disabilities or other developmental conditions can compete in the Special Olympics.
Specialty: Psychiatry, pediatrics
Frequency: 153 million (2015)

Intellectual disability (ID), also known as general learning disability and mental retardation (MR), is a generalized neurodevelopmental disorder characterized by significantly impaired intellectual and adaptive functioning. It is defined by an IQ under 70, in addition to deficits in two or more adaptive behaviors that affect everyday, general living.

Once focused almost entirely on cognition, the definition now includes both a component relating to mental functioning and one relating to an individual's functional skills in their daily environment. As a result of this focus on the person's abilities in practice, a person with an unusually low IQ may still not be considered to have intellectual disability.

Intellectual disability is subdivided into syndromic intellectual disability, in which intellectual deficits associated with other medical and behavioral signs and symptoms are present, and non-syndromic intellectual disability, in which intellectual deficits appear without other abnormalities. Down syndrome and fragile X syndrome are examples of syndromic intellectual disabilities.

Intellectual disability affects about 2–3% of the general population. Seventy-five to ninety percent of the affected people have mild intellectual disability. Non-syndromic, or idiopathic cases account for 30–50% of these cases. About a quarter of cases are caused by a genetic disorder, and about 5% of cases are inherited from a person's parents. Cases of unknown cause affect about 95 million people as of 2013.

Signs and symptoms

A historical image of a person with intellectual disability
 
Intellectual disability (ID) becomes apparent during childhood and involves deficits in mental abilities, social skills, and core activities of daily living (ADLs) when compared to same-aged peers. There often are no physical signs of mild forms of ID, although there may be characteristic physical traits when it is associated with a genetic disorder (e.g., Down syndrome).

The level of impairment ranges in severity for each person. Some of the early signs can include:
  • Delays in reaching, or failure to achieve milestones in motor skills development (sitting, crawling, walking)
  • Slowness learning to talk, or continued difficulties with speech and language skills after starting to talk
  • Difficulty with self-help and self-care skills (e.g., getting dressed, washing, and feeding themselves)
  • Poor planning or problem-solving abilities
  • Behavioral and social problems
  • Failure to grow intellectually, or continued infant-like behavior
  • Problems keeping up in school
  • Failure to adapt or adjust to new situations
  • Difficulty understanding and following social rules
In early childhood, mild ID (IQ 50–69) may not be obvious or identified until children begin school. Even when poor academic performance is recognized, it may take expert assessment to distinguish mild intellectual disability from specific learning disability or emotional/behavioral disorders. People with mild ID are capable of learning reading and mathematics skills to approximately the level of a typical child aged nine to twelve. They can learn self-care and practical skills, such as cooking or using the local mass transit system. As individuals with intellectual disability reach adulthood, many learn to live independently and maintain gainful employment.

Moderate ID (IQ 35–49) is nearly always apparent within the first years of life. Speech delays are particularly common signs of moderate ID. People with moderate intellectual disability need considerable supports in school, at home, and in the community in order to fully participate. While their academic potential is limited, they can learn simple health and safety skills and to participate in simple activities. As adults, they may live with their parents, in a supportive group home, or even semi-independently with significant supportive services to help them, for example, manage their finances. As adults, they may work in a sheltered workshop.

People with severe ID (IQ 20–34) or profound ID (IQ 19 or below) need more intensive support and supervision for their entire lives. They may learn some ADLs, but an intellectual disability is considered severe or profound when individuals are unable to care for themselves independently without ongoing significant assistance from a caregiver throughout adulthood. Individuals with profound ID are completely dependent on others for all ADLs and for maintaining their physical health and safety. They may be able to learn to participate in some of these activities to a limited degree.

Causes

An eight-year-old boy
Down syndrome is the most common genetic cause of intellectual disability.
 
Among children, the cause of intellectual disability is unknown for one-third to one-half of cases. About 5% of cases are inherited from a person's parents. Genetic defects that cause intellectual disability, but are not inherited, can be caused by accidents or mutations in genetic development. Examples of such accidents are the development of an extra chromosome 18 (trisomy 18) and Down syndrome, which is the most common genetic cause. Velocardiofacial syndrome and fetal alcohol spectrum disorders are the two next most common causes. However, there are many other causes.

Diagnosis

According to both the American Association on Intellectual and Developmental Disabilities (Intellectual Disability: Definition, Classification, and Systems of Supports, 11th edition) and the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), three criteria must be met for a diagnosis of intellectual disability: significant limitations in general mental abilities (intellectual functioning), significant limitations in one or more areas of adaptive behavior across multiple environments (as measured by an adaptive behavior rating scale covering, for example, communication, self-help skills, and interpersonal skills), and evidence that the limitations became apparent in childhood or adolescence. In general, people with intellectual disability have an IQ below 70, but clinical discretion may be necessary for individuals who have a somewhat higher IQ but severe impairment in adaptive functioning.

It is formally diagnosed by an assessment of IQ and adaptive behavior. A third condition requiring onset during the developmental period is used to distinguish intellectual disability from other conditions, such as traumatic brain injuries and dementias (including Alzheimer's disease).

Intelligence quotient

The first English-language IQ test, the Stanford–Binet Intelligence Scales, was adapted from a test battery designed for school placement by Alfred Binet in France. Lewis Terman adapted Binet's test and promoted it as a test measuring "general intelligence." Terman's test was the first widely used mental test to report scores in "intelligence quotient" form ("mental age" divided by chronological age, multiplied by 100). Current tests are scored in "deviation IQ" form, with a performance level by a test-taker two standard deviations below the median score for the test-taker's age group defined as IQ 70. Until the most recent revision of diagnostic standards, an IQ of 70 or below was a primary factor for intellectual disability diagnosis, and IQ scores were used to categorize degrees of intellectual disability. 
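As a rough illustration of the two scoring schemes described above, the short Python sketch below (not part of the original article; the raw-score norms are invented, and the 15-point standard deviation is the convention used by most, though not all, modern tests) contrasts Terman's ratio IQ with the modern deviation IQ, showing how a performance two standard deviations below the age-group mean comes out as IQ 70.

    def ratio_iq(mental_age, chronological_age):
        # Terman-style "ratio IQ": mental age divided by chronological age, times 100.
        return 100.0 * mental_age / chronological_age

    def deviation_iq(raw_score, age_group_mean, age_group_sd):
        # Modern "deviation IQ": the age-group mean is set to 100 and, on most tests,
        # one standard deviation is worth 15 points, so -2 SD corresponds to IQ 70.
        return 100.0 + 15.0 * (raw_score - age_group_mean) / age_group_sd

    print(ratio_iq(6, 8))            # a child of 8 performing like a typical 6-year-old: 75.0
    print(deviation_iq(30, 50, 10))  # a raw score 2 SD below an invented age-group mean: 70.0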

Since current diagnosis of intellectual disability is not based on IQ scores alone, but must also take into consideration a person's adaptive functioning, the diagnosis is not made rigidly. It encompasses intellectual scores, adaptive functioning scores from an adaptive behavior rating scale based on descriptions of known abilities provided by someone familiar with the person, and the observations of the assessment examiner, who can find out directly from the person what he or she can understand, communicate, and so on. IQ assessment must be based on a current test. This enables the diagnosis to avoid the pitfall of the Flynn effect, the tendency of population performance on IQ tests, and therefore the tests' norms, to drift over time.

Distinction from other disabilities

Clinically, intellectual disability is a subtype of cognitive deficit or disabilities affecting intellectual abilities, which is a broader concept and includes intellectual deficits that are too mild to properly qualify as intellectual disability, or too specific (as in specific learning disability), or acquired later in life through acquired brain injuries or neurodegenerative diseases like dementia. Cognitive deficits may appear at any age. Developmental disability is any disability that is due to problems with growth and development. This term encompasses many congenital medical conditions that have no mental or intellectual components, although it, too, is sometimes used as a euphemism for intellectual disability.

Limitations in more than one area

Adaptive behavior, or adaptive functioning, refers to the skills needed to live independently (or at the minimally acceptable level for age). To assess adaptive behavior, professionals compare the functional abilities of a child to those of other children of similar age. To measure adaptive behavior, professionals use structured interviews, with which they systematically elicit information about a person's functioning in the community from people who know the person well. There are many adaptive behavior scales, and accurate assessment of the quality of someone's adaptive behavior requires clinical judgment as well. Skills such as communication, self-care, social skills, and practical daily-living skills are particularly important to adaptive behavior.

Management

By most definitions, intellectual disability is more accurately considered a disability rather than a disease. Intellectual disability can be distinguished in many ways from mental illness, such as schizophrenia or depression. Currently, there is no "cure" for an established disability, though with appropriate support and teaching, most individuals can learn to do many things. Some causes, such as congenital hypothyroidism, can be treated if detected early to prevent the development of an intellectual disability.

There are thousands of agencies around the world that provide assistance for people with developmental disabilities. They include state-run, for-profit, and non-profit, privately run agencies. Within one agency there could be departments that include fully staffed residential homes, day rehabilitation programs that approximate schools, workshops wherein people with disabilities can obtain jobs, programs that assist people with developmental disabilities in obtaining jobs in the community, programs that provide support for people with developmental disabilities who have their own apartments, programs that assist them with raising their children, and many more. There are also many agencies and programs for parents of children with developmental disabilities.

Beyond that, there are specific programs that people with developmental disabilities can take part in wherein they learn basic life skills. These "goals" may take a much longer amount of time for them to accomplish, but the ultimate goal is independence. This may be anything from independence in tooth brushing to an independent residence. People with developmental disabilities learn throughout their lives and can obtain many new skills even late in life with the help of their families, caregivers, clinicians and the people who coordinate the efforts of all of these people.

There are four broad areas of intervention that allow for active participation from caregivers, community members, clinicians, and of course, the individual(s) with an intellectual disability. These include psychosocial treatments, behavioral treatments, cognitive-behavioral treatments, and family-oriented strategies. Psychosocial treatments are intended primarily for children before and during the preschool years as this is the optimum time for intervention. This early intervention should include encouragement of exploration, mentoring in basic skills, celebration of developmental advances, guided rehearsal and extension of newly acquired skills, protection from harmful displays of disapproval, teasing, or punishment, and exposure to a rich and responsive language environment. A notable example of a successful intervention is the Carolina Abecedarian Project, which was conducted with more than 100 children from low-socioeconomic-status families from infancy through the preschool years. Results indicated that by age 2, the children who received the intervention had higher test scores than control-group children, and they remained approximately 5 points higher 10 years after the end of the program. By young adulthood, children from the intervention group had better educational attainment, employment opportunities, and fewer behavioral problems than their control-group counterparts.

Core components of behavioral treatments include language and social skills acquisition. Typically, one-to-one training is offered in which a therapist uses a shaping procedure in combination with positive reinforcements to help the child pronounce syllables until words are completed. Sometimes involving pictures and visual aids, therapists aim at improving speech capacity so that short sentences about important daily tasks (e.g. bathroom use, eating, etc.) can be effectively communicated by the child. In a similar fashion, older children benefit from this type of training as they learn to sharpen their social skills such as sharing, taking turns, following instructions, and smiling. At the same time, a movement known as social inclusion attempts to increase valuable interactions between children with an intellectual disability and their non-disabled peers.

Cognitive-behavioral treatments, a combination of the previous two treatment types, involve a strategical-metastrategical learning technique that teaches children math, language, and other basic skills pertaining to memory and learning. The first goal of the training is to teach the child to be a strategical thinker through making cognitive connections and plans. Then, the therapist teaches the child to be metastrategical by teaching them to discriminate among different tasks and determine which plan or strategy suits each task.

Finally, family-oriented strategies delve into empowering the family with the skill set they need to support and encourage their child or children with an intellectual disability. In general, this includes teaching assertiveness skills or behavior management techniques as well as how to ask for help from neighbors, extended family, or day-care staff. As the child ages, parents are then taught how to approach topics such as housing/residential care, employment, and relationships. The ultimate goal for every intervention or technique is to give the child autonomy and a sense of independence using the skills he or she has acquired.

Although there is no specific medication for intellectual disability, many people with developmental disabilities have further medical complications and may be prescribed several medications. For example, autistic children with developmental delay may be prescribed antipsychotics or mood stabilizers to help with their behavior. Use of psychotropic medications such as benzodiazepines in people with intellectual disability requires monitoring and vigilance as side effects occur commonly and are often misdiagnosed as behavioral and psychiatric problems.

Epidemiology

Intellectual disability affects about 2–3% of the general population. 75–90% of the affected people have mild intellectual disability. Non-syndromic or idiopathic ID accounts for 30–50% of cases. About a quarter of cases are caused by a genetic disorder. Cases of unknown cause affect about 95 million people as of 2013. It is more common in males and in low to middle income countries.

History

Intellectual disability has been documented under a variety of names throughout history. Throughout much of human history, society was unkind to those with any type of disability, and people with intellectual disability were commonly viewed as burdens on their families.

Greek and Roman philosophers, who valued reasoning abilities, disparaged people with intellectual disability as barely human. The oldest physiological view of intellectual disability is in the writings of Hippocrates in the late fifth century BCE, who believed that it was caused by an imbalance in the four humors in the brain.

Caliph Al-Walid (r. 705–715) built one of the first care homes for intellectually disabled individuals and built the first hospital which accommodated intellectually disabled individuals as part of its services. In addition, Al-Walid assigned each intellectually disabled individual a caregiver.

Until the Enlightenment in Europe, care and asylum was provided by families and the church (in monasteries and other religious communities), focusing on the provision of basic physical needs such as food, shelter and clothing. Negative stereotypes were prominent in social attitudes of the time.

In the 13th century, England declared people with intellectual disability to be incapable of making decisions or managing their affairs. Guardianships were created to take over their financial affairs.

In the 17th century, Thomas Willis provided the first description of intellectual disability as a disease. He believed that it was caused by structural problems in the brain. According to Willis, the anatomical problems could be either an inborn condition or acquired later in life.

In the 18th and 19th centuries, housing and care moved away from families and towards an asylum model. People were placed by, or removed from, their families (usually in infancy) and housed in large professional institutions, many of which were self-sufficient through the labor of the residents. Some of these institutions provided a very basic level of education (such as differentiation between colors and basic word recognition and numeracy), but most continued to focus solely on the provision of basic needs of food, clothing, and shelter. Conditions in such institutions varied widely, but the support provided was generally non-individualized, with aberrant behavior and low levels of economic productivity regarded as a burden to society. Individuals of higher wealth were often able to afford higher degrees of care such as home care or private asylums. Heavy tranquilization and assembly-line methods of support were the norm, and the medical model of disability prevailed. Services were provided based on the relative ease to the provider, not based on the needs of the individual. A survey taken in 1891 in Cape Town, South Africa shows the distribution between different facilities. Out of 2046 persons surveyed, 1,281 were in private dwellings, 120 in jails, and 645 in asylums, with men representing nearly two-thirds of the number surveyed. In situations of scarcity of accommodation, preference was given to white men and black men (whose insanity threatened white society by disrupting employment relations and the tabooed sexual contact with white women).

In the late 19th century, in response to Charles Darwin's On the Origin of Species, Francis Galton proposed selective breeding of humans to reduce intellectual disability. Early in the 20th century, the eugenics movement became popular throughout the world. This led to forced sterilization and prohibition of marriage in most of the developed world and was later used by Adolf Hitler as a rationale for the mass murder of people with intellectual disability during the Holocaust. Eugenics was later abandoned as an evil violation of human rights, and the practice of forced sterilization and prohibition from marriage was discontinued by most of the developed world by the mid-20th century.
In 1905, Alfred Binet produced the first standardized test for measuring intelligence in children.

Although ancient Roman law had declared people with intellectual disability to be incapable of the deliberate intent to harm that was necessary for a person to commit a crime, during the 1920s, Western society believed they were morally degenerate.

Ignoring the prevailing attitude, Civitans adopted service to people with developmental disabilities as a major organizational emphasis in 1952. Their earliest efforts included workshops for special education teachers and daycamps for children with disabilities, all at a time when such training and programs were almost nonexistent. The segregation of people with developmental disabilities was not widely questioned by academics or policy-makers until the 1969 publication of Wolf Wolfensberger's seminal work "The Origin and Nature of Our Institutional Models", drawing on some of the ideas proposed by SG Howe 100 years earlier. This book posited that society characterizes people with disabilities as deviant, sub-human and burdens of charity, resulting in the adoption of that "deviant" role. Wolfensberger argued that this dehumanization, and the segregated institutions that result from it, ignored the potential productive contributions that all people can make to society. He pushed for a shift in policy and practice that recognized the human needs of those with intellectual disability and provided the same basic human rights as for the rest of the population.

The publication of this book may be regarded as the first move towards the widespread adoption of the social model of disability in regard to these types of disabilities, and was the impetus for the development of government strategies for desegregation. Successful lawsuits against governments and an increasing awareness of human rights and self-advocacy also contributed to this process, resulting in the passing in the U.S. of the Civil Rights of Institutionalized Persons Act in 1980. 

From the 1960s to the present, most states have moved towards the elimination of segregated institutions. Normalization and deinstitutionalization are dominant. Along with the work of Wolfensberger and others including Gunnar and Rosemary Dybwad, a number of scandalous revelations around the horrific conditions within state institutions created public outrage that led to change to a more community-based method of providing services.

By the mid-1970s, most governments had committed to de-institutionalization, and had started preparing for the wholesale movement of people into the general community, in line with the principles of normalization. In most countries, this was essentially complete by the late 1990s, although the debate over whether or not to close institutions persists in some states, including Massachusetts.

In the past, lead poisoning and infectious diseases were significant causes of intellectual disability. Some causes are becoming less common as medical advances, such as vaccination, spread. Other causes are increasing as a proportion of cases, perhaps due to rising maternal age, which is associated with several syndromic forms of intellectual disability.

Along with the changes in terminology, and the downward drift in acceptability of the old terms, institutions of all kinds have had to repeatedly change their names. This affects the names of schools, hospitals, societies, government departments, and academic journals. For example, the Midlands Institute of Mental Subnormality became the British Institute of Mental Handicap and is now the British Institute of Learning Disability. This phenomenon is shared with mental health and motor disabilities, and seen to a lesser degree in sensory disabilities.

Terminology

The terms used for this condition are subject to a process called the euphemism treadmill. This means that whatever term is chosen for this condition, it eventually becomes perceived as an insult. The terms mental retardation and mentally retarded were invented in the middle of the 20th century to replace the previous set of terms, such as "imbecile" and "moron", which had come to be considered offensive. By the end of the 20th century, the newer terms had themselves come to be widely seen as disparaging, politically incorrect, and in need of replacement. The term intellectual disability is now preferred by most advocates and researchers in most English-speaking countries.

The term "mental retardation" was used in the American Psychiatric Association's DSM-IV (1994) and in the World Health Organization's ICD-10 (codes F70–F79). In the next revision, the ICD-11, this term has been replaced by the term "disorders of intellectual development" (codes 6A00–6A04; 6A00.Z for the "unspecified" diagnosis code). The term "intellectual disability (intellectual developmental disorder)" is used in DSM-5 (2013). As of 2013, "intellectual disability (intellectual developmental disorder)" is the term that has come into common use by among educational, psychiatric, and other professionals over the past two decades. Because of its specificity and lack of confusion with other conditions, the term "mental retardation" is still sometimes used in professional medical settings around the world, such as formal scientific research and health insurance paperwork.

Several traditional terms that long predate psychiatry survive in common usage today mainly as forms of abuse; they are often encountered in old documents such as books, academic papers, and census forms. For example, the British census of 1901 has a column heading including the terms imbecile and feeble-minded.

Vaguer expressions like developmentally disabled, special, or challenged have been used instead of the term mentally retarded. The term developmental delay was popular among caretakers and parents of individuals with intellectual disability because delay suggests that a person is slowly reaching his or her full potential, rather than having a lifelong condition.

Usage has changed over the years and differed from country to country. For example, mental retardation in some contexts covers the whole field but previously applied to what is now the mild MR group. Feeble-minded used to mean mild MR in the UK, and once applied in the US to the whole field. "Borderline intellectual functioning" is not currently defined, but the term may be used to apply to people with IQs in the 70s. People with IQs of 70 to 85 used to be eligible for special consideration in the US public education system on grounds of intellectual disability.
  • Cretin is the oldest and comes from a dialectal French word for Christian. The implication was that people with significant intellectual or developmental disabilities were "still human" (or "still Christian") and deserved to be treated with basic human dignity. Individuals with the condition were considered to be incapable of sinning, thus "christ-like" in their disposition. This term has not been used in scientific endeavors since the middle of the 20th century and is generally considered a term of abuse. Although cretin is no longer in use, the term cretinism is still used to refer to the mental and physical disability resulting from untreated congenital hypothyroidism.
  • Amentia has a long history, mostly associated with dementia. The difference between amentia and dementia was originally defined by time of onset. Amentia was the term used to denote an individual who developed deficits in mental functioning early in life, while dementia included individuals who developed mental deficiencies as adults. In his lectures in the 1890s, Theodor Meynert described amentia as a form of sudden-onset confusion (German: Verwirrtheit), often with hallucinations, and the term was long used in psychiatry in this sense. In the 1910s, Emil Kraepelin wrote that "acute confusion (amentia)" is a form of febrile delirium. By 1912, amentia was a classification lumping "idiots, imbeciles, and feeble minded" individuals in a category separate from a dementia classification, in which the onset is later in life. In Russian psychiatry the term "amentia" denotes a form of clouding of consciousness dominated by confusion, true hallucinations, incoherence of thinking and speech, and chaotic movements. In Russia "amentia" (Russian: аменция) is not associated with intellectual disability and means only clouding of consciousness.
  • Idiot indicated the greatest degree of intellectual disability, where the mental age is two years or less, and the person cannot guard himself or herself against common physical dangers. The term was gradually replaced by the term profound mental retardation (which has itself since been replaced by other terms).
  • Imbecile indicated an intellectual disability less extreme than idiocy and not necessarily inherited. It is now usually subdivided into two categories, known as severe intellectual disability and moderate intellectual disability.
  • Moron was defined by the American Association for the Study of the Feeble-minded in 1910, following work by Henry H. Goddard, as the term for an adult with a mental age between eight and twelve; mild intellectual disability is now the term for this condition. Alternative definitions of these terms based on IQ were also used. This group was known in UK law from 1911 to 1959–60 as feeble-minded.
  • Mongolism and Mongoloid idiot were medical terms used to identify someone with Down syndrome, as the doctor who first described the syndrome, John Langdon Down, believed that children with Down syndrome shared facial similarities with Blumenbach's "Mongolian race". The Mongolian People's Republic requested that the medical community cease use of the term as a referent to intellectual disability. Their request was granted in the 1960s, when the World Health Organization agreed that the term should cease being used within the medical community.
  • In the field of special education, educable (or "educable intellectual disability") refers to ID students with IQs of approximately 50–75 who can progress academically to a late elementary level. Trainable (or "trainable intellectual disability") refers to students whose IQs fall below 50 but who are still capable of learning personal hygiene and other living skills in a sheltered setting, such as a group home. In many areas, these terms have been replaced by use of "moderate" and "severe" intellectual disability. While the names change, the meaning stays roughly the same in practice.
  • Retarded comes from the Latin retardare, "to make slow, delay, keep back, or hinder," so mental retardation meant the same as mentally delayed. The term was recorded in 1426 as a "fact or action of making slower in movement or time". The first record of retarded in relation to being mentally slow was in 1895. The term mentally retarded was used to replace terms like idiot, moron, and imbecile because retarded was not then a derogatory term. By the 1960s, however, the term had taken on a partially derogatory meaning as well. The noun retard is particularly seen as pejorative; a BBC survey in 2003 ranked it as the most offensive disability-related word, ahead of terms such as spastic (or its abbreviation spaz) and mong. The terms mentally retarded and mental retardation are still fairly common, but currently the Special Olympics, Best Buddies, and over 100 other organizations are striving to eliminate their use by referring to the word retard and its variants as the "r-word", in an effort to equate it to the word nigger and the associated euphemism "n-word", in everyday conversation. These efforts have resulted in federal legislation, sometimes known as "Rosa's Law", to replace the term mentally retarded with the term intellectual disability in some federal statutes.
    The term mental retardation was a diagnostic term denoting the group of disconnected categories of mental functioning such as idiot, imbecile, and moron derived from early IQ tests, which acquired pejorative connotations in popular discourse. It acquired negative and shameful connotations over the last few decades due to the use of the words retarded and retard as insults. This may have contributed to its replacement with euphemisms such as mentally challenged or intellectually disabled. While developmental disability includes many other disorders, developmental disability and developmental delay (for people under the age of 18) are generally considered more polite terms than mental retardation.

United States

Special Olympics USA team in July 2019
  • In North America, intellectual disability is subsumed into the broader term developmental disability, which also includes epilepsy, autism, cerebral palsy, and other disorders that develop during the developmental period (birth to age 18). Because service provision is tied to the designation "developmental disability", it is used by many parents, direct support professionals, and physicians. In the United States, however, in school-based settings, the more specific term mental retardation or, more recently (and preferably), intellectual disability, is still typically used, and is one of 13 categories of disability under which children may be identified for special education services under Public Law 108-446.
  • The phrase intellectual disability is increasingly being used to describe people with significantly below-average cognitive ability. These terms are sometimes used as a means of separating general intellectual limitations from specific, limited deficits as well as indicating that it is not an emotional or psychological disability. It is not specific to congenital disorders such as Down syndrome.
The American Association on Mental Retardation changed its name to the American Association on Intellectual and Developmental Disabilities (AAIDD) in 2007, and soon thereafter changed the names of its scholarly journals to reflect the term "intellectual disability". In 2010, the AAIDD released its 11th edition of its terminology and classification manual, which also used the term intellectual disability.

United Kingdom

In the UK, mental handicap had become the common medical term, replacing mental subnormality in Scotland and mental deficiency in England and Wales, until Stephen Dorrell, Secretary of State for Health for the United Kingdom from 1995 to 1997, changed the NHS's designation to learning disability. The new term is not yet widely understood, and is often taken to refer to problems affecting schoolwork (the American usage), which are known in the UK as "learning difficulties". British social workers may use "learning difficulty" to refer to both people with intellectual disability and those with conditions such as dyslexia. In education, "learning difficulties" is applied to a wide range of conditions: "specific learning difficulty" may refer to dyslexia, dyscalculia or developmental coordination disorder, while "moderate learning difficulties", "severe learning difficulties" and "profound learning difficulties" refer to more significant impairments.

In England and Wales between 1983 and 2008, the Mental Health Act 1983 defined "mental impairment" and "severe mental impairment" as "a state of arrested or incomplete development of mind which includes significant/severe impairment of intelligence and social functioning and is associated with abnormally aggressive or seriously irresponsible conduct on the part of the person concerned." As behavior was involved, these were not necessarily permanent conditions: they were defined for the purpose of authorizing detention in hospital or guardianship. The term mental impairment was removed from the Act in November 2008, but the grounds for detention remained. However, English statute law uses mental impairment elsewhere in a less well-defined manner—e.g. to allow exemption from taxes—implying that intellectual disability without any behavioral problems is what is meant.

A BBC poll conducted in the United Kingdom concluded that 'retard' was the most offensive disability-related word. On the other hand, when a contestant on Celebrity Big Brother used the phrase "walking like a retard" during a live broadcast, the communications regulator Ofcom did not uphold the resulting complaints from the public and the charity Mencap, saying the phrase "was not used in an offensive context [...] and had been used light-heartedly". It was, however, noted that two previous similar complaints about other shows had been upheld.

Australia

In the past, Australia has used British and American terms interchangeably, including "mental retardation" and "mental handicap". Today, "intellectual disability" is the preferred and more commonly used descriptor.

Society and culture

Severely disabled girl in Bhutan
 
People with intellectual disabilities are often not seen as full citizens of society. Person-centered planning and approaches are seen as methods of addressing the continued labeling and exclusion of socially devalued people, such as people with disabilities, encouraging a focus on the person as someone with capacities and gifts as well as support needs. The self-advocacy movement promotes the right of self-determination and self-direction by people with intellectual disabilities, which means allowing them to make decisions about their own lives.

Until the middle of the 20th century, people with intellectual disabilities were routinely excluded from public education, or educated away from other typically developing children. Compared to peers who were segregated in special schools, students who are mainstreamed or included in regular classrooms report similar levels of stigma and social self-conception, but more ambitious plans for employment. As adults, they may live independently, with family members, or in different types of institutions organized to support people with disabilities. About 8% currently live in an institution or a group home.

In the United States, the average lifetime cost of a person with an intellectual disability amounts to $223,000 per person, in 2003 US dollars, for direct costs such as medical and educational expenses. The indirect costs were estimated at $771,000, due to shorter lifespans and lower than average economic productivity. The total direct and indirect costs, which amount to a little more than a million dollars, are slightly more than the economic costs associated with cerebral palsy, and double those associated with serious vision or hearing impairments. Of the costs, about 14% is due to increased medical expenses (not including what is normally incurred by the typical person), and 10% is due to direct non-medical expenses, such as the excess cost of special education compared to standard schooling. The largest share, 76%, is indirect costs accounting for reduced productivity and shortened lifespans. Some expenses, such as ongoing costs to family caregivers or the extra costs associated with living in a group home, were excluded from this calculation.

Health disparities

People with intellectual disability as a group have higher rates of adverse health conditions such as epilepsy and neurological disorders, gastrointestinal disorders, and behavioral and psychiatric problems compared to people without disabilities. Adults also have a higher prevalence of poor social determinants of health, behavioral risk factors, depression, diabetes, and poor or fair health status than adults without intellectual disability.

In the United Kingdom people with intellectual disability live on average 16 years less than the general population.

Neuropsychology

From Wikipedia, the free encyclopedia

Neuropsychology is the study and characterization of the behavioral modifications that follow a neurological trauma or condition. It is both an experimental and clinical field of psychology that aims to understand how behavior and cognition are influenced by brain functioning and is concerned with the diagnosis and treatment of behavioral and cognitive effects of neurological disorders. Whereas classical neurology focuses on the pathology of the nervous system and classical psychology is largely divorced from it, neuropsychology seeks to discover how the brain correlates with the mind through the study of neurological patients. It thus shares concepts and concerns with neuropsychiatry and with behavioral neurology in general. The term neuropsychology has been applied to lesion studies in humans and animals. It has also been applied in efforts to record electrical activity from individual cells (or groups of cells) in higher primates (including some studies of human patients).

In practice, neuropsychologists tend to work in research settings (universities, laboratories or research institutions), clinical settings (medical hospitals or rehabilitation settings, often involved in assessing or treating patients with neuropsychological problems), or forensic settings or industry (often as clinical-trial consultants where CNS function is a concern).

History

Neuropsychology is a relatively new discipline within the field of psychology. The first textbook defining the field, Fundamentals of Human Neuropsychology, was initially published by Kolb and Whishaw in 1980. However, the history of its development can be traced back to the Third Dynasty in ancient Egypt, perhaps even earlier. There is much debate as to when societies started considering the functions of different organs. For many centuries, the brain was thought useless and was often discarded during burial processes and autopsies. As the field of medicine developed its understanding of human anatomy and physiology, different theories were developed as to why the body functioned the way it did. Many times, bodily functions were approached from a religious point of view and abnormalities were blamed on bad spirits and the gods. The brain has not always been considered the center of the functioning body. It has taken hundreds of years to develop our understanding of the brain and how it affects our behaviors.

Ancient Egypt

In ancient Egypt, writings on medicine date from the time of the priest Imhotep. These texts took a more scientific approach to medicine and disease, describing the brain, trauma, abnormalities, and remedies for the reference of future physicians. Despite this, Egyptians saw the heart, not the brain, as the seat of the soul.

Aristotle

Senses, perception, memory, dreams, action in Aristotle's biology. Impressions are stored in the seat of perception, linked by his Laws of Association (similarity, contrast, and contiguity).
 
Aristotle reinforced this focus on the heart, which originated in Egypt. He believed the heart to be in control of mental processes, and looked on the brain, due to its inert nature, as a mechanism for cooling the heat generated by the heart. He drew his conclusions from the empirical study of animals: he found that their brains were cold to the touch and that touching them did not trigger any movements, whereas the heart was warm and active, accelerating and slowing depending on mood. Such beliefs were upheld by many for years to come, persisting through the Middle Ages and the Renaissance period until they began to falter in the 17th century due to further research. The influence of Aristotle in the development of neuropsychology is evident in language used in the modern day, since we "follow our hearts" and "learn by heart."

Hippocrates

Hippocrates viewed the brain as the seat of the soul. He drew a connection between the brain and behaviors of the body, writing: "The brain exercises the greatest power in the man." Apart from moving the focus from the heart as the "seat of the soul" to the brain, Hippocrates did not go into much detail about its actual functioning. However, by switching the attention of the medical community to the brain, his theory led to more scientific discovery of the organ responsible for our behaviors. For years to come, scientists were inspired to explore the functions of the body and to find concrete explanations for both normal and abnormal behaviors. Scientific discovery led them to believe that there were natural and organically occurring reasons to explain various functions of the body, and it could all be traced back to the brain. Hippocrates introduced the concept of the mind – which was widely seen as a separate function apart from the actual brain organ.

René Descartes

Philosopher René Descartes expanded upon this idea and is most widely known for his work on the mind-body problem. Often Descartes's ideas were looked upon as overly philosophical and lacking in sufficient scientific foundation. Descartes focused much of his anatomical experimentation on the brain, paying special attention to the pineal gland – which he argued was the actual "seat of the soul." Still deeply rooted in a spiritual outlook towards the scientific world, the body was said to be mortal, and the soul immortal. The pineal gland was then thought to be the very place at which the mind would interact with the mortal and machine-like body. At the time, Descartes was convinced the mind had control over the behaviors of the body (controlling the person) – but also that the body could have influence over the mind, a position referred to as dualism. This idea that the mind essentially had control over the body, but the body could resist or even influence other behaviors, was a major turning point in the way many physiologists would look at the brain. The mind was observed to do much more than simply react: it was rational and functioned in organized, thoughtful ways, far more complex than Descartes thought the animal world to be. These ideas, although disregarded by many and cast aside for years, led the medical community to expand its own ideas of the brain and to begin to understand in new ways just how intricate the workings of the brain really were, what effects they had on daily life, and which treatments would be most beneficial to people living with a dysfunctional mind. The mind-body problem, spurred by René Descartes, continues to this day with many philosophical arguments both for and against his ideas. However controversial they were and remain today, the fresh and well-thought-out perspective Descartes presented has had long-lasting effects on the various disciplines of medicine, psychology and much more, especially in putting an emphasis on separating the mind from the body in order to explain observable behaviors.

Thomas Willis

It was in the mid-17th century that another major contributor to the field of neuropsychology emerged. Thomas Willis studied at Oxford University and took a physiological approach to the brain and behavior. It was Willis who coined the words 'hemisphere' and 'lobe' when referring to the brain. He was one of the earliest to use the words 'neurology' and 'psychology'. Rejecting the idea that humans were the only beings capable of rational thought, Willis looked at specialized structures of the brain. He theorized that higher structures accounted for complex functions, whereas lower structures were responsible for functions similar to those seen in other animals, consisting mostly of reactions and automatic responses. He was particularly interested in people who suffered from manic disorders and hysteria. His research constituted some of the first times that psychiatry and neurology came together to study individuals. Through his in-depth study of the brain and behavior, Willis concluded that automated responses such as breathing, heartbeats and other various motor activities were carried out within the lower region of the brain. Although much of his work has been made obsolete, his ideas presented the brain as more complex than previously imagined, and led the way for future pioneers to understand and build upon his theories, especially when it came to looking at disorders and dysfunctions in the brain.

Franz Joseph Gall

Neuroanatomist and physiologist Franz Joseph Gall made major progress in understanding the brain. He theorized that personality was directly related to features and structures within the brain. However, Gall's major contribution within the field of neuroscience is his invention of phrenology. This new discipline looked at the brain as an organ of the mind, where the shape of the skull could ultimately determine one's intelligence and personality. This theory was like many circulating at the time, as many scientists were taking into account physical features of the face and body, head size, anatomical structure, and levels of intelligence; but only Gall looked primarily at the brain. There was much debate over the validity of Gall's claims, however, because he was often found to be wrong in his predictions. He was once sent a cast of René Descartes' skull, and through his method of phrenology claimed the subject must have had a limited capacity for reasoning and higher cognition. As controversial and false as many of Gall's claims were, his contributions to understanding cortical regions of the brain and localized activity continued to advance understanding of the brain, personality, and behavior. His work is considered crucial to having laid a firm foundation in the field of neuropsychology, which would flourish over the next few decades.

Jean-Baptiste Bouillaud

 
Towards the late 19th century, the belief that the size of one's skull could determine one's level of intelligence was discarded as science and medicine moved forward. A physician by the name of Jean-Baptiste Bouillaud expanded upon the ideas of Gall and took a closer look at the idea of distinct cortical regions of the brain each having their own independent function. Bouillaud was specifically interested in speech and wrote many publications on the anterior region of the brain being responsible for carrying out the act of speech, a discovery that had stemmed from the research of Gall. He was also one of the first to use larger samples for research, although it took many years for that method to be accepted. By looking at over a hundred different case studies, Bouillaud came to discover that it was through different areas of the brain that speech is completed and understood. By observing people with brain damage, his theory was made more concrete. Bouillaud, along with many other pioneers of the time, made great advances within the field of neurology, especially when it came to localization of function. There is much debate as to who deserves the most credit for such discoveries, and often people remain unmentioned, but Paul Broca is perhaps one of the most famous and well-known contributors to neuropsychology – often referred to as "the father" of the discipline.

Paul Broca

Inspired by the advances being made in the area of localized function within the brain, Paul Broca committed much of his study to the phenomena of how speech is understood and produced. Through his study, he discovered and built upon the finding that we articulate speech via the left hemisphere. Broca's observations and methods are widely considered to be where neuropsychology really takes form as a recognizable and respected discipline. With the understanding that specific, independent areas of the brain are responsible for the articulation and understanding of speech, the brain was finally being acknowledged as the complex and highly intricate organ that it is. Broca was essentially the first to fully break away from the ideas of phrenology and delve deeper into a more scientific and psychological view of the brain.

Karl Spencer Lashley

Lashley's works and theories that follow are summarized in his book Brain Mechanisms and Intelligence. Lashley's theory of the engram was the driving force for much of his research. An engram was believed to be a part of the brain where a specific memory was stored. He continued to use the training/ablation method that Franz had taught him: he would train a rat to learn a maze and then make systematic lesions, removing sections of cortical tissue, to see whether the rat forgot what it had learned.

Through his research with rats, he learned that forgetting depended on the amount of tissue removed, not on where it was removed from. He called this mass action, and he believed it was a general rule governing how brain tissue would respond, independent of the type of learning. We now know that mass action was a misinterpretation of his empirical results: running a maze requires multiple cortical areas, so removing any one small region alone does not impair the rats much, whereas removing large sections takes out multiple cortical areas at once, affecting functions such as sight, motor coordination, and memory and leaving the animal unable to run the maze properly.

Lashley also proposed that a portion of a functional area could carry out the role of the entire area, even when the rest of the area has been removed. He called this phenomenon equipotentiality. We know now that he was seeing evidence of plasticity in the brain. The brain has the spectacular ability for certain areas to take over the functions of other areas if those areas should fail or be removed, although not to the extent initially argued by Lashley.

Approaches

Experimental neuropsychology is an approach that uses methods from experimental psychology to uncover the relationship between the nervous system and cognitive function. The majority of work involves studying healthy humans in a laboratory setting, although a minority of researchers may conduct animal experiments. Human work in this area often takes advantage of specific features of our nervous system (for example that visual information presented to a specific visual field is preferentially processed by the cortical hemisphere on the opposite side) to make links between neuroanatomy and psychological function.

Clinical neuropsychology is the application of neuropsychological knowledge to the assessment (see neuropsychological test and neuropsychological assessment), management, and rehabilitation of people who have suffered illness or injury (particularly to the brain) which has caused neurocognitive problems. In particular they bring a psychological viewpoint to treatment, to understand how such illness and injury may affect and be affected by psychological factors. They also can offer an opinion as to whether a person is demonstrating difficulties due to brain pathology or as a consequence of an emotional or another (potentially) reversible cause or both. For example, a test might show that both patients X and Y are unable to name items that they have been previously exposed to within the past 20 minutes (indicating possible dementia). If patient Y can name some of them with further prompting (e.g. given a categorical clue such as being told that the item they could not name is a fruit), this allows a more specific diagnosis than simply dementia (Y appears to have the vascular type which is due to brain pathology but is usually at least somewhat reversible). Clinical neuropsychologists often work in hospital settings in an interdisciplinary medical team; others work in private practice and may provide expert input into medico-legal proceedings.

Cognitive neuropsychology is a relatively new development and has emerged as a distillation of the complementary approaches of both experimental and clinical neuropsychology. It seeks to understand the mind and brain by studying people who have suffered brain injury or neurological illness. One model of neuropsychological functioning is known as functional localization. This is based on the principle that if a specific cognitive problem can be found after an injury to a specific area of the brain, it is possible that this part of the brain is in some way involved. However, there may be reason to believe that the link between mental functions and neural regions is not so simple. An alternative model of the link between mind and brain, such as parallel processing, may have more explanatory power for the workings and dysfunction of the human brain. Yet another approach investigates how the pattern of errors produced by brain-damaged individuals can constrain our understanding of mental representations and processes without reference to the underlying neural structure. A more recent but related approach is cognitive neuropsychiatry which seeks to understand the normal function of mind and brain by studying psychiatric or mental illness.

Connectionism is the use of artificial neural networks to model specific cognitive processes using what are considered to be simplified but plausible models of how neurons operate. Once trained to perform a specific cognitive task, these networks are often damaged or 'lesioned' to simulate brain injury or impairment, and the resulting performance is compared with the effects of brain injury in humans.
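As a minimal illustration of this lesioning idea (a toy sketch, not any specific published model), the code below trains a small feed-forward network on a trivial task with NumPy and then zeroes out half of its hidden units to see how performance degrades; all names and numbers are illustrative.

# Train a tiny network on XOR, then "lesion" it by silencing hidden units.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 sigmoid units.
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X, W1, b1, W2, b2):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

# Plain gradient descent on squared error.
lr = 0.5
for _ in range(5000):
    h, out = forward(X, W1, b1, W2, b2)
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

def accuracy(W1, b1, W2, b2):
    _, out = forward(X, W1, b1, W2, b2)
    return float(np.mean((out > 0.5) == (y > 0.5)))

print("Intact network accuracy:", accuracy(W1, b1, W2, b2))

# "Lesion": remove the contribution of hidden units 0-3 by zeroing their
# outgoing weights, then re-measure performance.
lesioned_W2 = W2.copy()
lesioned_W2[:4, :] = 0.0
print("Lesioned network accuracy:", accuracy(W1, b1, lesioned_W2, b2))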

Functional neuroimaging uses specific neuroimaging technologies to take readings from the brain, usually when a person is doing a particular task, in an attempt to understand how the activation of particular brain areas is related to the task. In particular, the growth of methodologies to employ cognitive testing within established functional magnetic resonance imaging (fMRI) techniques to study brain-behavior relations is having a notable influence on neuropsychological research.
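A minimal sketch of the underlying logic, using synthetic data rather than real fMRI output, is to correlate a single voxel's time series with the task design; real analyses add hemodynamic modeling, spatial preprocessing, and statistical correction, so treat the numbers below as purely illustrative.

# Correlate a synthetic voxel time series with a block-design task regressor.
import numpy as np

rng = np.random.default_rng(1)

# Block design sampled every 2 seconds: 10 volumes task "on", 10 "off",
# repeated for 5 cycles (100 volumes total).
block = np.repeat([1.0, 0.0], 10)
design = np.tile(block, 5)

# Synthetic voxel: task-related signal plus noise.
voxel = 0.8 * design + rng.normal(0, 1.0, design.size)

# A large positive correlation suggests the voxel's activity tracks the task.
r = np.corrcoef(design, voxel)[0, 1]
print(f"Task-voxel correlation: r = {r:.2f}")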

In practice these approaches are not mutually exclusive and most neuropsychologists select the best approach or approaches for the task to be completed.

Methods and tools

Standardized neuropsychological tests 
These tasks have been designed so that performance on the task can be linked to specific neurocognitive processes. These tests are typically standardized, meaning that they have been administered to a specific group (or groups) of individuals before being used in individual clinical cases. The data resulting from standardization are known as normative data. After these data have been collected and analyzed, they are used as the comparative standard against which individual performances can be compared. Examples of neuropsychological tests include: the Wechsler Memory Scale (WMS), the Wechsler Adult Intelligence Scale (WAIS), the Boston Naming Test, the Wisconsin Card Sorting Test, the Benton Visual Retention Test, and the Controlled Oral Word Association Test.
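A minimal sketch of how such normative data are used in practice (the raw score, mean, and standard deviation below are hypothetical, not actual test norms) is to convert a raw score into a z-score and percentile relative to the norm group:

# Interpret a raw test score against hypothetical normative data.
from statistics import NormalDist

def interpret_score(raw_score, norm_mean, norm_sd):
    z = (raw_score - norm_mean) / norm_sd          # standardized score
    percentile = NormalDist().cdf(z) * 100         # assuming normal norms
    return z, percentile

# Hypothetical example: a delayed-recall score of 32 against norms with
# mean 45 and SD 8 for the patient's age group.
z, pct = interpret_score(32, norm_mean=45, norm_sd=8)
print(f"z = {z:.2f}, percentile = {pct:.1f}")
# A z-score around -1.6 (~5th percentile) would typically prompt closer
# examination of that cognitive domain.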
Brain scans 
The use of brain scans to investigate the structure or function of the brain is common, either as simply a way of better assessing brain injury with high resolution pictures, or by examining the relative activations of different brain areas. Such technologies may include fMRI (functional magnetic resonance imaging) and positron emission tomography (PET), which yield data related to functioning, as well as MRI (magnetic resonance imaging) and computed axial tomography (CAT or CT), which yield structural data.
Global Brain Project 
Brain models based on mouse and monkey data have been developed using theoretical neuroscience of working memory and attention, mapping brain activity via time constants validated against measurements of neuronal activity in different layers of the brain. These methods also map onto decision states of behavior in simple tasks that involve binary outcomes.
Electrophysiology 
The use of electrophysiological measures of brain activation, which record the electrical or magnetic fields produced by the nervous system. These may include electroencephalography (EEG) or magnetoencephalography (MEG).
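As a simplified illustration of one common EEG analysis step, the sketch below estimates the power spectrum of a synthetic signal to find its dominant frequency; real EEG pipelines involve filtering, artifact rejection, and many channels, so this is only a toy example.

# Estimate the power spectrum of a synthetic "EEG" trace with the FFT.
import numpy as np

fs = 250                                   # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)               # 10 seconds of samples
# Synthetic signal: 10 Hz alpha-like rhythm plus noise.
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

peak = freqs[np.argmax(power[1:]) + 1]     # skip the DC component
print(f"Dominant frequency: {peak:.1f} Hz")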
Experimental tasks 
The use of designed experimental tasks, often controlled by computer and typically measuring reaction time and accuracy on a particular task thought to be related to a specific neurocognitive process. Examples include the Cambridge Neuropsychological Test Automated Battery (CANTAB) and CNS Vital Signs (CNSVS).
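A toy sketch of the basic reaction-time measurement (not how CANTAB or CNSVS are actually implemented) might time a keypress after an unpredictable delay:

# Measure reaction time to a console prompt after a random foreperiod.
import random
import time

def one_trial():
    time.sleep(random.uniform(1.0, 3.0))         # unpredictable foreperiod
    start = time.perf_counter()
    input("Press Enter as fast as you can!")     # the "response"
    return (time.perf_counter() - start) * 1000  # reaction time in ms

rts = [one_trial() for _ in range(3)]
print(f"Mean RT over {len(rts)} trials: {sum(rts) / len(rts):.0f} ms")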
Software products 
Software applications informed by neuropsychology, whose effects have been related to neural activity through brain scans or MRI, are being used to influence behavior design and habit formation. An example of such a product is fooya, a mobile app for children that has been shown in randomized controlled studies to influence dietary preferences among children.

Thursday, August 8, 2019

Cyberterrorism

From Wikipedia, the free encyclopedia
 
Cyberterrorism is the use of the Internet to conduct violent acts that result in, or threaten, loss of life or significant bodily harm, in order to achieve political or ideological gains through threat or intimidation. It is also sometimes considered an act of Internet terrorism in which terrorist activities, including deliberate, large-scale disruption of computer networks, especially of personal computers attached to the Internet, are carried out by means of tools such as computer viruses, computer worms, phishing, and other malicious software, hardware methods, and programming scripts.

Cyberterrorism is a controversial term. Some authors opt for a very narrow definition, relating to deployment by known terrorist organizations of disruption attacks against information systems for the primary purpose of creating alarm, panic, or physical disruption. Other authors prefer a broader definition, which includes cybercrime. Participating in a cyberattack affects the terror threat perception, even if it isn't done with a violent approach. By some definitions, it might be difficult to distinguish which instances of online activities are cyberterrorism or cybercrime.

Cyberterrorism can also be defined as the intentional use of computers, networks, and the public internet to cause destruction and harm for personal objectives. Experienced cyberterrorists, who are highly skilled at hacking, can cause massive damage to government systems, hospital records, and national security programs, which might leave a country, community or organization in turmoil and in fear of further attacks. The objectives of such terrorists may be political or ideological, since this can be considered a form of terror.

There is much concern from government and media sources about potential damage that could be caused by cyberterrorism, and this has prompted efforts by government agencies such as the Federal Bureau of Investigation (FBI) and the Central Intelligence Agency (CIA) to put an end to cyber attacks and cyberterrorism.

There have been several major and minor instances of cyberterrorism. Al-Qaeda utilized the internet to communicate with supporters and even to recruit new members.[5] Estonia, a Baltic country that is constantly evolving in terms of technology, became a battleground for cyberterrorism in April 2007 after disputes regarding the removal of a WWII-era Soviet statue located in Estonia's capital, Tallinn.

Overview

There is debate over the basic definition of the scope of cyberterrorism. These definitions can be narrow, such as the use of the Internet to attack other systems on the Internet in ways that result in violence against persons or property. They can also be broad, including any form of Internet usage by terrorists as well as conventional attacks on information technology infrastructures. There is variation in qualification by motivation, targets, methods, and the centrality of computer use in the act. U.S. government agencies also use varying definitions, and none of these has so far attempted to introduce a standard that is binding outside its sphere of influence.

Depending on context, cyberterrorism may overlap considerably with cybercrime, cyberwar or ordinary terrorism. Eugene Kaspersky, founder of Kaspersky Lab, now feels that "cyberterrorism" is a more accurate term than "cyberwar". He states that "with today's attacks, you are clueless about who did it or when they will strike again. It's not cyber-war, but cyberterrorism." He also equates large-scale cyber weapons, such as the Flame Virus and NetTraveler Virus which his company discovered, to biological weapons, claiming that in an interconnected world, they have the potential to be equally destructive.

If cyberterrorism is treated similarly to traditional terrorism, then it only includes attacks that threaten property or lives, and can be defined as the leveraging of a target's computers and information, particularly via the Internet, to cause physical, real-world harm or severe disruption of infrastructure.
Many academics and researchers who specialize in terrorism studies suggest that cyberterrorism does not exist and is really a matter of hacking or information warfare. They disagree with labeling it as terrorism because of the unlikelihood of the creation of fear, significant physical harm, or death in a population using electronic means, considering current attack and protective technologies.

If death or physical damage that could cause human harm is considered a necessary part of the cyberterrorism definition, then there have been few identifiable incidents of cyberterrorism, although there has been much policy research and public concern. Modern terrorism and political violence are not easily defined, however, and some scholars assert that they are now "unbounded" and not exclusively concerned with physical damage.

There is an old saying that death or loss of property are only the side products of terrorism; the main purpose of such incidents is to create terror in people's minds and harm bystanders. If any incident in cyberspace can create terror, it may be rightly called cyberterrorism. For those affected by such acts, the fears of cyberterrorism are quite real.

As with cybercrime in general, the threshold of knowledge and skill required to perpetrate acts of cyberterror has been steadily diminishing thanks to freely available hacking suites and online courses. Additionally, the physical and virtual worlds are merging at an accelerated rate, making for many more targets of opportunity, as evidenced by notable cyber attacks such as Stuxnet and the 2018 Saudi petrochemical sabotage attempt.

Defining cyberterrorism

Assigning a concrete definition to cyberterrorism can be hard, due to the difficulty of defining the term terrorism itself. Multiple organizations have created their own definitions, most of which are overly broad. There is also controversy concerning overuse of the term, hyperbole in the media and by security vendors trying to sell "solutions".

One way of understanding cyberterrorism involves the idea that terrorists could cause massive loss of life, worldwide economic chaos and environmental damage by hacking into critical infrastructure systems. The nature of cyberterrorism covers conduct involving computer or Internet technology that:
  1. is motivated by a political, religious or ideological cause
  2. is intended to intimidate a government or a section of the public to varying degrees
  3. seriously interferes with infrastructure
The term "cyberterrorism" can be used in a variety of different ways, but there are limits to its use. An attack on an Internet business can be labeled cyberterrorism, however when it is done for economic motivations rather than ideological it is typically regarded as cybercrime. Convention also limits the label "cyberterrorism" to actions by individuals, independent groups, or organizations. Any form of cyberwarfare conducted by governments and states would be regulated and punishable under international law.

The Technolytics Institute defines cyberterrorism as
"[t]he premeditated use of disruptive activities, or the threat thereof, against computers and/or networks, with the intention to cause harm or further social, ideological, religious, political or similar objectives. Or to intimidate any person in furtherance of such objectives."
The term appears first in defense literature, surfacing (as "cyber-terrorism") in reports by the U.S. Army War College as early as 1998.

The National Conference of State Legislatures, an organization of legislators created to help policymakers in the United States with issues such as the economy and homeland security, defines cyberterrorism as:
[T]he use of information technology by terrorist groups and individuals to further their agenda. This can include use of information technology to organize and execute attacks against networks, computer systems and telecommunications infrastructures, or for exchanging information or making threats electronically. Examples are hacking into computer systems, introducing viruses to vulnerable networks, web site defacing, Denial-of-service attacks, or terroristic threats made via electronic communication.
NATO defines cyberterrorism as "[a] cyberattack using or exploiting computer or communication networks to cause sufficient destruction or disruption to generate fear or to intimidate a society into an ideological goal".

The United States National Infrastructure Protection Center defined cyberterrorism as:
A criminal act perpetrated by the use of computers and telecommunications capabilities resulting in violence, destruction, and/or disruption of services to create fear by causing confusion and uncertainty within a given population, with the goal of influencing a government or population to conform to a political, social, or ideological agenda.
The FBI, another United States agency, defines "cyber terrorism" as “premeditated, politically motivated attack against information, computer systems, computer programs, and data which results in violence against non-combatant targets by subnational groups or clandestine agents”.

These definitions tend to share the view of cyberterrorism as politically and/or ideologically inclined. One area of debate is the difference between cyberterrorism and hacktivism. Hacktivism is "the marriage of hacking with political activism". Both actions are politically driven and involve using computers; however, cyberterrorism is primarily used to cause harm. This becomes an issue because acts of violence on the computer can be labeled either cyberterrorism or hacktivism.

Types of cyberterror capability

The following three levels of cyberterror capability are defined by the Monterey group:
  • Simple-Unstructured: The capability to conduct basic hacks against individual systems using tools created by someone else. The organization possesses little target-analysis, command-and-control, or learning capability.
  • Advanced-Structured: The capability to conduct more sophisticated attacks against multiple systems or networks and, possibly, to modify or create basic hacking tools. The organization possesses elementary target-analysis, command-and-control, and learning capability.
  • Complex-Coordinated: The capability for a coordinated attack capable of causing mass disruption against integrated, heterogeneous defenses (including cryptography), the ability to create sophisticated hacking tools, and highly capable target analysis, command and control, and organizational learning capability.

Concerns

Cyberterrorism is becoming more and more prominent on social media today. As the Internet becomes more pervasive in all areas of human endeavor, individuals or groups can use the anonymity afforded by cyberspace to threaten citizens, specific groups (e.g., groups with membership based on ethnicity or belief), communities, and entire countries, without the inherent threat of capture, injury, or death to the attacker that being physically present would bring. Many groups, such as Anonymous, use tools such as denial-of-service attacks to attack and censor groups who oppose them, creating many concerns for freedom and respect for differences of thought.

Many believe that cyberterrorism is an extreme threat to countries' economies, and fear an attack could potentially lead to another Great Depression. Several leaders agree that cyberterrorism ranks as the greatest threat among possible attacks on U.S. territory. Although natural disasters are also considered a top threat and have proven devastating to people and land, there is ultimately little that can be done to prevent such events from happening. Thus, the expectation is to focus more on preventative measures that make Internet attacks impossible to execute.

As the Internet continues to expand, and computer systems continue to be assigned increased responsibility while becoming more complex and interdependent, sabotage or terrorism via the Internet may become a more serious threat and is possibly one of the top 10 events that could "end the human race." People also have much easier access to illegal activity in cyberspace through the part of the Internet known as the Dark Web. The Internet of Things promises to further merge the virtual and physical worlds, which some experts see as a powerful incentive for states to use terrorist proxies in furtherance of objectives.

Dependence on the internet is rapidly increasing on a worldwide scale, creating a platform for international cyber terror plots to be formulated and executed as a direct threat to national security. For terrorists, cyber-based attacks have distinct advantages over physical attacks. They can be conducted remotely, anonymously, and relatively cheaply, and they do not require significant investment in weapons, explosives, or personnel. The effects can be widespread and profound. Incidents of cyberterrorism are likely to increase, conducted through denial-of-service attacks, malware, and other methods that are difficult to envision today. In one example, deaths linked to the Islamic State led to legal action against the online social networks Twitter, Google, and Facebook, which were ultimately sued.

In an article about cyber attacks by Iran and North Korea, The New York Times observes, "The appeal of digital weapons is similar to that of nuclear capability: it is a way for an outgunned, outfinanced nation to even the playing field. 'These countries are pursuing cyberweapons the same way they are pursuing nuclear weapons,' said James A. Lewis, a computer security expert at the Center for Strategic and International Studies in Washington. 'It's primitive; it's not top of the line, but it's good enough and they are committed to getting it.'"

History

Public interest in cyberterrorism began in the late 1990s, when the term was coined by Barry C. Collin.[35] As 2000 approached, the fear and uncertainty about the millennium bug heightened, as did the potential for attacks by cyber terrorists. Although the millennium bug was by no means a terrorist attack or plot against the world or the United States, it did act as a catalyst in sparking the fears of a possibly large-scale devastating cyber-attack. Commentators noted that many of the facts of such incidents seemed to change, often with exaggerated media reports.

The high-profile terrorist attacks in the United States on September 11, 2001 and the ensuing War on Terror by the US led to further media coverage of the potential threats of cyberterrorism in the years following. Mainstream media coverage often discusses the possibility of a large attack making use of computer networks to sabotage critical infrastructures with the aim of putting human lives in jeopardy or causing disruption on a national scale either directly or by disruption of the national economy.

Authors such as Winn Schwartau and John Arquilla are reported to have had considerable financial success selling books which described what were purported to be plausible scenarios of mayhem caused by cyberterrorism. Many critics claim that these books were unrealistic in their assessments of whether the attacks described (such as nuclear meltdowns and chemical plant explosions) were possible. A common thread throughout what critics perceive as cyberterror-hype is that of non-falsifiability; that is, when the predicted disasters fail to occur, it only goes to show how lucky we've been so far, rather than impugning the theory. 

In 2016, for the first time ever, the Department of Justice charged Ardit Ferizi with cyberterrorism. He was accused of hacking into a military website and stealing the names, addresses, and other personal information of government and military personnel and selling it to ISIS.

On the other hand, it is also argued that, despite substantial studies on cyberterrorism, the body of literature is still unable to present a realistic estimate of the actual threat. For instance, in the case of a cyberterrorist attack on a public infrastructure such as a power plant or air traffic control through hacking, there is uncertainty as to its success because data concerning such phenomena are limited.

International attacks and response

Conventions

As of 2016 there have been seventeen conventions and major legal instruments that specifically deal with terrorist activities and can also be applied to cyber terrorism.
  • 1963: Convention on Offences and Certain Other Acts Committed on Board Aircraft
  • 1970: Convention for the Suppression of Unlawful Seizure of Aircraft
  • 1971: Convention for the Suppression of Unlawful Acts Against the Safety of Civil Aviation
  • 1973: Convention on the Prevention and Punishment of Crimes against Internationally Protected Persons
  • 1979: International Convention against the Taking of Hostages
  • 1980: Convention on the Physical Protection of Nuclear Material
  • 1988: Protocol for the Suppression of Unlawful Acts of Violence at Airports Serving International Civil Aviation
  • 1988: Protocol for the Suppression of Unlawful Acts against the Safety of Fixed Platforms Located on the Continental Shelf
  • 1988: Convention for the Suppression of Unlawful Acts against the Safety of Maritime Navigation
  • 1989: Supplementary to the Convention for the Suppression of Unlawful Acts against the Safety of Civil Aviation
  • 1991: Convention on the Marking of Plastic Explosives for the Purpose of Detection
  • 1997: International Convention for the Suppression of Terrorist Bombings
  • 1999: International Convention for the Suppression of the Financing of Terrorism
  • 2005: Protocol to the Convention for the Suppression of Unlawful Acts against the Safety of Maritime Navigation
  • 2005: International Convention for the Suppression of Acts of Nuclear Terrorism
  • 2010: Protocol Supplementary to the Convention for the Suppression of Unlawful Seizure of Aircraft
  • 2010: Convention on the Suppression of Unlawful Acts Relating to International Civil Aviation

Motivations for cyberattacks

There are many different motives for cyberattacks, with the majority being for financial reasons. However, there is increasing evidence that hackers are becoming more politically motivated. Cyberterrorists are aware that governments are reliant on the internet and have exploited this as a result. For example, Mohammad Bin Ahmad As-Sālim's piece '39 Ways to Serve and Participate in Jihad' discusses how an electronic jihad could disrupt the West through targeted hacks of American websites, and other resources seen as anti-Jihad, modernist, or secular in orientation (Denning, 2010; Leyden, 2007).

International institutions

As of 2016 the United Nations has only one agency that specializes in cyberterrorism, the International Telecommunication Union.

U.S. military/protections against cyberterrorism

The US Department of Defense (DoD) charged the United States Strategic Command with the duty of combating cyberterrorism. This is accomplished through the Joint Task Force-Global Network Operations, which is the operational component supporting USSTRATCOM in defense of the DoD's Global Information Grid. This is done by integrating GNO capabilities into the operations of all DoD computers, networks, and systems used by DoD combatant commands, services and agencies.

On November 2, 2006, the Secretary of the Air Force announced the creation of the Air Force's newest MAJCOM, the Air Force Cyber Command, which would be tasked to monitor and defend American interests in cyberspace. The plan was, however, replaced by the creation of Twenty-Fourth Air Force, which became active in August 2009 and would be a component of the planned United States Cyber Command.

On December 22, 2009, the White House named Howard Schmidt as its head of computer security, to coordinate U.S. government, military, and intelligence efforts to repel hackers. He left the position in May 2012. Michael Daniel was appointed to the position of White House Coordinator of Cyber Security the same week and continued in the position during the second term of the Obama administration.

More recently, Obama signed an executive order enabling the US to impose sanctions on individuals or entities suspected of participating in cyber-related acts. These acts were assessed to be possible threats to US national security, financial matters, or foreign policy. U.S. authorities indicted a man over 92 cyberterrorism hacking attacks on computers used by the Department of Defense. A Nebraska-based consortium recorded four million hacking attempts in the course of eight weeks. In 2011 cyberterrorism attacks grew 20%.

Estonia and NATO

The Baltic state of Estonia was the target of a massive denial-of-service attack in April 2007 that ultimately rendered the country offline and shut it out from services dependent on Internet connectivity. The infrastructure of Estonia, including everything from online banking and mobile phone networks to government services and access to health care information, was disabled for a time. The tech-dependent state experienced severe turmoil, and there was a great deal of concern over the nature and intent of the attack.

The cyber attack was a result of an Estonian-Russian dispute over the removal of a bronze statue depicting a World War II-era Soviet soldier from the center of the capital, Tallinn. In the midst of its armed conflict with Russia, Georgia was likewise subject to sustained and coordinated attacks on its electronic infrastructure in August 2008. In both of these cases, circumstantial evidence points to coordinated Russian attacks, but attribution of the attacks is difficult; though both countries blame Moscow for contributing to the cyber attacks, proof establishing legal culpability is lacking.

Estonia joined NATO in 2004, which prompted NATO to carefully monitor its member state's response to the attack. NATO also feared escalation and the possibility of cascading effects beyond Estonia's borders to other NATO members. In 2008, directly as a result of the attacks, NATO opened a new center of excellence on cyberdefense in Tallinn to conduct research and training on cyber warfare.

The chaos resulting from the attacks in Estonia illustrated to the world the dependence countries had on information technology. This dependence then makes countries vulnerable to future cyber attacks and terrorism.

Republic of Korea

According to the 2016 Deloitte Asia-Pacific Defense Outlook, South Korea's 'Cyber Risk Score' was 884 out of 1,000, and South Korea was found to be the most vulnerable country to cyber attacks in the Asia-Pacific region. Despite South Korea's high-speed internet and cutting-edge technology, its cyber security infrastructure is relatively weak. The 2013 South Korea cyberattack significantly damaged the Korean economy. In 2017, a ransomware attack harassed private companies and users, who experienced personal information leakage. Additionally, there have been North Korean cyber attacks that risked the national security of South Korea.

In response to this, the South Korean government's countermeasure is to protect the information security centres of the National Intelligence Service. Currently, 'cyber security' is one of the major goals of the NIS. Since 2013, South Korea has established policies related to national cyber security and has tried to prevent cyber crises through sophisticated investigation of potential threats. Meanwhile, scholars emphasise improving national consciousness towards cyber attacks, as South Korea has already entered the so-called 'hyper-connected society'.

China

The Chinese Defense Ministry confirmed the existence of an online defense unit in May 2011. Composed of about thirty elite internet specialists, the so-called "Cyber Blue Team", or "Blue Army", is officially claimed to be engaged in cyber-defense operations, though there are fears the unit has been used to penetrate secure online systems of foreign governments.

Pakistan

The Pakistani government has also taken steps to curb the menace of cyberterrorism and extremist propaganda. The National Counter Terrorism Authority (NACTA) is working on joint programs with different NGOs and other cyber security organizations in Pakistan to combat this problem. Surf Safe Pakistan is one such example: people in Pakistan can now report extremist and terrorist-related content online through the Surf Safe Pakistan portal, and NACTA provides the Federal Government's leadership for the Surf Safe campaign. In March 2008 an al-Qaeda forum posted a training website with six training modules for learning cyberterrorism techniques.

Ukraine

A series of powerful cyber attacks began 27 June 2017 that swamped websites of Ukrainian organizations, including banks, ministries, newspapers and electricity firms.

Examples

An operation can be carried out by anyone anywhere in the world, as it can be performed thousands of miles away from a target. An attack can cause serious damage to critical infrastructure, which may result in casualties.

Some attacks are conducted in furtherance of political and social objectives, as the following examples illustrate:
  • In 1996, a computer hacker allegedly associated with the White Supremacist movement temporarily disabled a Massachusetts ISP and damaged part of the ISP's record keeping system. The ISP had attempted to stop the hacker from sending out worldwide racist messages under the ISP's name. The hacker signed off with the threat: "you have yet to see true electronic terrorism. This is a promise."
  • In 1998, Spanish protesters bombarded the Institute for Global Communications (IGC) with thousands of bogus e-mail messages. E-mail was tied up and undeliverable to the ISP's users, and support lines were tied up with people who couldn't get their mail. The protestors also spammed IGC staff and member accounts, clogged their Web page with bogus credit card orders, and threatened to employ the same tactics against organizations using IGC services. They demanded that IGC stop hosting the Web site for the Euskal Herria Journal, a New York-based publication supporting Basque independence. Protestors said IGC supported terrorism because a section on the Web pages contained materials on the terrorist group ETA, which claimed responsibility for assassinations of Spanish political and security officials, and attacks on military installations. IGC finally relented and pulled the site because of the "mail bombings."
  • In 1998, ethnic Tamil guerrillas attempted to disrupt Sri Lankan embassies by sending large volumes of e-mail. The embassies received 800 e-mails a day over a two-week period. The messages read "We are the Internet Black Tigers and we're doing this to disrupt your communications." Intelligence authorities characterized it as the first known attack by terrorists against a country's computer systems.
  • During the Kosovo conflict in 1999, NATO computers were blasted with e-mail bombs and hit with denial-of-service attacks by hacktivists protesting the NATO bombings. In addition, businesses, public organizations and academic institutes received highly politicized virus-laden e-mails from a range of Eastern European countries, according to reports. Web defacements were also common. After the Chinese Embassy was accidentally bombed in Belgrade, Chinese hacktivists posted messages such as "We won't stop attacking until the war stops!" on U.S. government Web sites.
  • Since December 1997, the Electronic Disturbance Theater (EDT) has been conducting Web sit-ins against various sites in support of the Mexican Zapatistas. At a designated time, thousands of protestors point their browsers to a target site using software that floods the target with rapid and repeated download requests. EDT's software has also been used by animal rights groups against organizations said to abuse animals. Electrohippies, another group of hacktivists, conducted Web sit-ins against the WTO when they met in Seattle in late 1999. These sit-ins all require mass participation to have much effect, and thus are more suited to use by activists than by terrorists.
  • In 2000, a Japanese investigation revealed that the government was using software developed by computer companies affiliated with Aum Shinrikyo, the doomsday sect responsible for the sarin gas attack on the Tokyo subway system in 1995. "The government found 100 types of software programs used by at least 10 Japanese government agencies, including the Defense Ministry, and more than 80 major Japanese companies, including Nippon Telegraph and Telephone." Following the discovery, the Japanese government suspended use of Aum-developed programs out of concern that Aum-related companies may have compromised security by breaching firewalls, gaining access to sensitive systems or information, allowing invasion by outsiders, planting viruses that could be set off later, or planting malicious code that could cripple computer systems and key data systems.
  • In March 2013, The New York Times reported on a pattern of cyber attacks against U.S. financial institutions believed to be instigated by Iran as well as incidents affecting South Korean financial institutions that originate with the North Korean government.
  • In August 2013, media companies including The New York Times, Twitter and the Huffington Post lost control of some of their websites after hackers supporting the Syrian government breached the Australian Internet company that manages many major site addresses. The Syrian Electronic Army, a hacker group that has previously attacked media organisations that it considers hostile to the regime of Syrian president Bashar al-Assad, claimed credit for the Twitter and Huffington Post hacks in a series of Twitter messages. Electronic records showed that NYTimes.com, the only site with an hours-long outage, redirected visitors to a server controlled by the Syrian group before it went dark.
  • The website of Air Botswana was defaced by a group calling itself the "Pakistan Cyber Army".
  • Pakistani Cyber Army is the name taken by a group of hackers who are known for their defacement of websites, particularly Indian, Chinese, and Israeli companies and governmental organizations, claiming to represent Pakistani nationalist and Islamic interests. The group is thought to have been active since at least 2008, and maintains an active presence on social media, especially Facebook. Its members have claimed responsibility for the hijacking of websites belonging to Acer, BSNL, India's CBI, Central Bank, and the State Government of Kerala.
  • British hacker Kane Gamble, sentenced to 2 years in youth detention, posed as the CIA chief to access highly sensitive information. He also "cyber-terrorized" high-profile U.S. intelligence officials such as then-CIA chief John Brennan and Director of National Intelligence James Clapper. The judge said Gamble engaged in "politically motivated cyber terrorism."

Sabotage

Non-political acts of sabotage have caused financial and other damage. In 2000, disgruntled employee Vitek Boden caused the release of 800,000 litres of untreated sewage into waterways in Maroochy Shire, Australia.

More recently, in May 2007 Estonia was subjected to a mass cyber-attack in the wake of the removal of a Russian World War II war memorial from downtown Tallinn. The attack was a distributed denial-of-service attack in which selected sites were bombarded with traffic to force them offline; nearly all Estonian government ministry networks as well as two major Estonian bank networks were knocked offline; in addition, the political party website of Estonia's Prime Minister Andrus Ansip featured a counterfeit letter of apology from Ansip for removing the memorial statue. Despite speculation that the attack had been coordinated by the Russian government, Estonia's defense minister admitted he had no conclusive evidence linking cyber attacks to Russian authorities. Russia called accusations of its involvement "unfounded", and neither NATO nor European Commission experts were able to find any conclusive proof of official Russian government participation. In January 2008 a man from Estonia was convicted for launching the attacks against the Estonian Reform Party website and fined.

During the Russia-Georgia War, on 5 August 2008, three days before Georgia launched its invasion of South Ossetia, the websites for OSInform News Agency and OSRadio were hacked. The OSinform website at osinform.ru kept its header and logo, but its content was replaced by a feed to the Alania TV website content. Alania TV, a Georgian government-supported television station aimed at audiences in South Ossetia, denied any involvement in the hacking of the websites. Dmitry Medoyev, at the time the South Ossetian envoy to Moscow, claimed that Georgia was attempting to cover up information on events which occurred in the lead-up to the war. One such cyber attack caused the Parliament of Georgia and Georgian Ministry of Foreign Affairs websites to be replaced by images comparing Georgian president Mikheil Saakashvili to Adolf Hitler. Other attacks involved denials of service to numerous Georgian and Azerbaijani websites, such as when Russian hackers allegedly disabled the servers of the Azerbaijani Day.Az news agency.

In June 2019, Russia conceded that it is "possible" its electrical grid is under cyber-attack by the United States. The New York Times reported that American hackers from the United States Cyber Command planted malware potentially capable of disrupting the Russian electrical grid.

Website defacement and denial of service

In October 2007, the website of Ukrainian president Viktor Yushchenko was attacked by hackers. A radical Russian nationalist youth group, the Eurasian Youth Movement, claimed responsibility.

In 1999 hackers attacked NATO computers, flooding them with email and hitting them with a denial-of-service attack. The hackers were protesting against the NATO bombing of the Chinese embassy in Belgrade. Businesses, public organizations, and academic institutions were also bombarded with highly politicized emails containing viruses from other European countries.

In December 2018, Twitter warned of "unusual activity" from China and Saudi Arabia. A bug was detected in November that could have revealed the country code of users' phone numbers. Twitter said the bug could have had ties to "state-sponsored actors".

Human extinction

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Human_ext...