
Sunday, February 11, 2024

Social learning (social pedagogy)

From Wikipedia, the free encyclopedia

Social learning (social pedagogy) is learning that takes place at a wider scale than individual or group learning, up to a societal scale, through social interaction between peers.

Definition

Social learning is defined as learning through the observation of other people's behaviors. It is a process of social change in which people learn from each other in ways that can benefit wider social-ecological systems. Different social contexts allow individuals to pick up new behaviors by observing what people are doing within that environment. Social learning and social pedagogy emphasize the dynamic interaction between people and the environment in the construction of meaning and identity.

The process of learning a new behaviour starts by observing a behaviour, taking the information in and finally adopting that behaviour. Examples of environmental contexts that promote social learning are schools, media, family members and friends.

If learning is to be considered as social, then it must:

  1. demonstrate that a change in understanding has taken place in the individuals involved;
  2. demonstrate that this change goes beyond the individual and becomes situated within wider social units or communities of practice;
  3. occur through social interactions and processes between actors within a social network.

Social pedagogy is a theoretical system that focuses on the development of the child and on how practice and training affect their life skills. This idea is centered on the notion that children are active and competent.

History

18th century

Jean-Jacques Rousseau brought forth the idea that all humans are born good but are ultimately corrupted by society, implying a form of social learning.

19th century

The literature on the topic of social pedagogy tends to identify German educator Karl Mager (1810-1858) as the person who coined the term ‘social pedagogy’ in 1844. Mager and Friedrich Adolph Diesterweg shared the belief that education should go beyond the individual's acquisition of knowledge and focus on the acquisition of culture by society. Ultimately, it should benefit the community itself.

1900s - 1950s

Developmental psychology drew on B.F. Skinner's behaviorism and Sigmund Freud's psychoanalytic theory to explain how humans learn new behaviours.

The founding father of social pedagogy, German philosopher and educator Paul Natorp (1854-1924) published the book Sozialpädagogik: Theorie der Willensbildung auf der Grundlage der Gemeinschaft (Social Pedagogy: The theory of educating the human will into a community asset) in 1899. Natorp argued that in all instances, pedagogy should be social. Teachers should consider the interaction between educational and societal processes.

1950s - 1990s

The field of developmental psychology underwent significant changes during these decades as social learning theories started to gain traction through the research and experiments of psychologists such as Julian Rotter, Albert Bandura and Robert Sears. In 1954, Julian Rotter developed his social learning theory, which linked changes in human behavior with environmental interactions. Its predictive variables were behavior potential, expectancy, reinforcement value and the psychological situation. Bandura conducted his Bobo doll experiment in 1961 and developed his social learning theory in 1977. These contributions to the field of developmental psychology cemented a strong knowledge foundation and allowed researchers to build on and expand our understanding of human behavior.

Theories

Jean-Jacques Rousseau - Natural Man

Jean-Jacques Rousseau (1712 - 1778), with his book Emile, or On Education, introduced his pedagogic theory, in which the child should be brought up in harmony with nature. The child should be introduced to society only during the fourth stage of development, the age of moral self-worth (15 to 18 years of age). That way, the child enters society in an informed and self-reliant manner, with their own judgment. Rousseau's conceptualization of childhood and adolescence is based on his theory that human beings are inherently good but corrupted by a society that denaturalizes them. Rousseau is the precursor of the child-centered approach in education.

Karl Mager - Social Pedagogy

Karl Mager (1810 - 1858) is often identified as the one who coined the term social pedagogy. He held the belief that education should focus not only on the acquisition of knowledge but also on the acquisition of culture through society, and should orient its activities to benefit the community. This also implies that knowledge should not come solely from individuals but also from the larger concept of society.

Paul Natorp - Social Pedagogy

Paul Natorp (1854 - 1924) was a German philosopher and educator. In 1899, he published Sozialpädagogik: Theorie der Willensbildung auf der Grundlage der Gemeinschaft (Social Pedagogy: The theory of educating the human will into a community asset). According to him, education should be social, that is, an interaction between educational and social processes. Natorp believed in the model of the Gemeinschaft (small community) as the way to build universal happiness and achieve true humanity. At the time, philosophers like Jean-Jacques Rousseau, John Locke, Johann Heinrich Pestalozzi and Immanuel Kant were preoccupied with the structure of society and how it may influence human interrelations. Philosophers were not thinking of the child solely as an individual but rather about what he or she can bring to creating human togetherness and societal order.

Natorp's perspective was influenced by Plato's ideas about the relation between the individual and the city-state (polis). The polis is a social and political structure of society that, according to Plato, allows individuals to maximize their potential. It is strictly structured with classes serving others and philosopher kings setting universal laws and truths for all. Furthermore, Plato argued for the need to pursue intellectual virtues rather than personal advancements such as wealth and reputation. Natorp's interpretation of the concept of the polis is that an individual will want to serve his/her community and state after having been educated, as long as the education is social (Sozialpädagogik).

Natorp focused on education for the working class as well as social reform. His view of social pedagogy outlined that education is a social process and social life is an educational process. Social pedagogic practices are a deliberative and rational form of socialization. Individuals become social human beings by being socialized into society. Social pedagogy involves teachers and children sharing the same social spaces.

Herman Nohl - Hermeneutic Perspective

Herman Nohl (1879 - 1960) was a German pedagogue of the first half of the twentieth century. He interpreted reality from a hermeneutical perspective (methodological principles of interpretation) and tried to expose the causes of social inequalities. According to Nohl, the aim of social pedagogy is to foster the wellbeing of students by integrating youth initiatives, programs and efforts into society. Teachers should be advocates for the welfare of their students and contribute to the social transformations this entails. Nohl conceptualized a holistic educative process that takes into account the historical, cultural, personal and social contexts of any given situation.

Robert Sears - Social Learning

Robert Richardson Sears (1908 - 1989) focused his research mostly on the stimulus-response theory. Much of his theoretical effort was expended on understanding the way children come to internalize the values, attitudes, and behaviours of the culture in which they are raised. Just like Albert Bandura, he focused most of his research on aggression, but also on the growth of resistance to temptation and guilt, and the acquisition of culturally-approved sex-role behaviors. Sears wanted to prove the importance of the place of parents in the child's education, concentrating on features of parental behaviour that either facilitated or hampered the process. Such features include both general relationship variables such as parental warmth and permissiveness and specific behaviours such as punishment in the form of love withdrawal and power assertion.

Albert Bandura - Social Learning

Albert Bandura advanced social learning theory by including both the individual and the environment in the process of learning and imitating behaviour. In other words, children and adults learn or change behaviours by imitating behaviours observed in others. Bandura notes that the environment plays an important role, as it provides the stimuli that trigger the learning process. For example, according to Bandura (1978), people learn aggressive behaviour through three sources: family members, the community and mass media. Research shows that parents who prefer aggressive solutions to their problems tend to have children who use aggressive tactics in dealing with other people. Research has also found that communities in which fighting prowess is valued have higher rates of aggressive behaviour. Findings likewise show that watching television can have at least four different effects on people: 1) it teaches aggressive styles of conduct, 2) it alters restraints over aggressive behavior, 3) it desensitizes and habituates people to violence, and 4) it shapes people's image of reality. The environment also allows people to learn through another person's experience. For example, students do not cheat on exams (at least not openly) because they know the consequences, even if they have never experienced those consequences themselves.

However, according to Bandura, the learning process does not stop at the influence of the family, community and media; internal processes (individual thoughts, values, etc.) determine the frequency and intensity with which an individual will imitate and adopt a certain behaviour. Parents play an important role in a child's education for two reasons: first, because of the frequency and intensity of their interactions, and second, because children often admire their parents and take them as role models. Therefore, even if the stimulus is the parents' interaction with their children, a child who did not admire them would not reproduce their behaviour as often. That is the main difference between early social learning theory and Bandura's point of view. This principle is called reciprocal determinism, which means that the developmental process is bidirectional and that the individual has to value his or her environment in order to learn from it. Bandura also states that this process starts at birth; indeed, research shows that infants are more receptive to certain experiences than to others. Bandura further holds that most human behaviours are goal-driven and that we regulate our behaviour by weighing the benefits against the trouble a particular behaviour can bring.

Application in education and pedagogy

Social learning and social pedagogy have proven effective in practical professions such as nursing, where the student can observe a trained professional in a professional/work setting and learn about nursing in all its aspects: interactions, attitudes, co-working skills and the nursing job itself. Students who have taken part in social learning state that they increased their nursing skills, and that this was only possible with a good learning environment, a good mentor, and a student who is assertive enough. In other words, social learning can be achieved with a good mentor, but one also needs to be a good listener. This mentoring experience creates what Albert Bandura called observational learning: students observe a well-trained model/teacher, and their knowledge and understanding increase.

Experiences in the field for student teachers are a good way to show how social pedagogy and social learning contribute to one's education. Indeed, field experiences are part of a student's life in their route to their teaching degree. Field experiences are based on the social learning theory; a student follows a teacher for some time, at first observing the cooperating teacher and taking notes about the teaching act. The second part of the field experience is actual teaching, and receiving feedback from the role model and the students. The student teachers try as much as they can to imitate what they have learned by observing their cooperating teacher.

Since cyberbullying is an issue in schools, social pedagogy can be one way to reduce this trend. The bullied pupil can build a relationship with a particular mentor or role model, which in turn can empower the student to deal with issues such as cyberbullying. This can work for both the victim and the bully, since both may lack confidence and affection. Using social pedagogy instead of punishments and reactive measures is also a way to depart from the traditional model of raising children and teaching, which relies on punishments and rewards.

Parent education is also based on social learning. From birth, children look at their parents and try to model what they do, how they talk, and what they think. Of course, a child's environment is much larger than the familial environment alone, but it is an influential part. A study on parenting and social learning by Dubanoski and Tanabe had parents attend classes that taught them social learning principles to improve their children's behaviour. The classes taught the parents how to record their children's behaviour objectively, and to respond by teaching the correct behaviour rather than punishing the wrong one. A significant number of parents had improved their children's behaviour by the end of the study.

The issue of how long social learning takes is important for the design of learning initiatives, teaching experiences and policy interventions. The process of going beyond individual learning to a broader understanding situated in a community of practice can take some time to develop. A longitudinal case study in Australia looked at an environmental group concerned about land degradation. The whole project was led by a local committee, Wallatin Wildlife and Landcare. They wanted to "encourage social learning among landholders through field visits, focus groups, and deliberative processes to balance innovative 'thinking outside the box' with judicious use of public funds". They found that social learning was documented after approximately fifteen months, but was initially restricted to an increased understanding of the problem without improved knowledge to address it. Further knowledge necessary to address the problem in focus emerged during the third year of the program. This suggests that learning initiatives could take around three years to develop sufficient new knowledge embedded in a community of practice in order to address complex problems.

Social media and technology

Benefits

Social pedagogy is, in fact, the interaction between society and the individual that creates a learning experience. When considering the current development of social pedagogy and social learning, the recent trend in learning in our society is the use of social media and other forms of technology. If well designed within an educational framework, social media can help develop certain essential skills.

Social media can therefore be extremely useful for developing some of the key skills needed in this digital age. For instance, "the main feature of social media is that they empower the end user to access, create, disseminate and share information easily in a user-friendly, open environment". By using social media, the learning experience becomes easier and more accessible to all. Allowing social media in the pedagogical program of young students could help them grow and participate fully in our digital society.

With the growing use of technology and different social platforms in many aspects of our lives, we can use social media at work and at home as well as in schools. Social media now enables teachers to set online group work, based on cases or projects, and students can collect data in the field without any need for direct face-to-face contact with either the teacher or other students.

Disadvantages

Proponents of social media in education point to how much easier communication between individuals becomes. However, others argue that it excludes the vital tacit knowledge that direct, face-to-face interpersonal contact enables, and that social learning is bound up with physical and spatial learning. Social learning includes sharing experiences and working with others; social media facilitates those experiences but makes them less effective by eliminating the physical interaction between individuals. The more time students spend on social sites, the less time they spend socializing in person. Because of the lack of nonverbal cues, like tone and inflection, the use of social media is not an adequate replacement for face-to-face communication. Students who spend a great amount of time on social networking sites are less effective at communicating in person.

With the omnipresence of technology in our lives and easy access to unlimited sources of information, the difference between using technology as a tool and using it as an end in itself needs to be understood.

Data transformation (statistics)

A scatterplot in which the areas of the sovereign states and dependent territories in the world are plotted on the vertical axis against their populations on the horizontal axis. The upper plot uses raw data. In the lower plot, both the area and population data have been transformed using the logarithm function.

In statistics, data transformation is the application of a deterministic mathematical function to each point in a data set—that is, each data point zi is replaced with the transformed value yi = f(zi), where f is a function. Transforms are usually applied so that the data appear to more closely meet the assumptions of a statistical inference procedure that is to be applied, or to improve the interpretability or appearance of graphs.

Nearly always, the function that is used to transform the data is invertible, and generally is continuous. The transformation is usually applied to a collection of comparable measurements. For example, if we are working with data on peoples' incomes in some currency unit, it would be common to transform each person's income value by the logarithm function.
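As a minimal illustrative sketch (the income figures below are made up), the pointwise transform yi = f(zi), with f taken to be the logarithm, looks like this in Python:

    import numpy as np

    # Hypothetical incomes in some currency unit (illustrative values only)
    incomes = np.array([12_000, 25_000, 40_000, 95_000, 1_500_000])

    # Apply the (invertible, continuous) log transform pointwise: y_i = log(z_i)
    log_incomes = np.log(incomes)

    # The transform can be undone with its inverse, exp
    recovered = np.exp(log_incomes)
    print(log_incomes)
    print(recovered)   # matches the original incomes up to floating-point error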

Motivation

Guidance for how data should be transformed, or whether a transformation should be applied at all, should come from the particular statistical analysis to be performed. For example, a simple way to construct an approximate 95% confidence interval for the population mean is to take the sample mean plus or minus two standard error units. However, the constant factor 2 used here is particular to the normal distribution, and is only applicable if the sample mean varies approximately normally. The central limit theorem states that in many situations, the sample mean does vary normally if the sample size is reasonably large. However, if the population is substantially skewed and the sample size is at most moderate, the approximation provided by the central limit theorem can be poor, and the resulting confidence interval will likely have the wrong coverage probability. Thus, when there is evidence of substantial skew in the data, it is common to transform the data to a symmetric distribution before constructing a confidence interval. If desired, the confidence interval can then be transformed back to the original scale using the inverse of the transformation that was applied to the data.
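A rough sketch of that workflow on simulated, skewed (log-normal) data: the interval is built on the log scale using the mean plus or minus two standard errors, then mapped back with the exponential. Note that the back-transformed interval describes the log-scale mean, i.e. the geometric mean on the original scale.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.lognormal(mean=1.0, sigma=1.0, size=40)   # skewed sample (simulated)

    # Work on the log scale, where the sample mean is closer to normal
    z = np.log(x)
    z_mean = z.mean()
    z_se = z.std(ddof=1) / np.sqrt(len(z))

    # Approximate 95% interval on the transformed scale: mean +/- 2 standard errors
    lo, hi = z_mean - 2 * z_se, z_mean + 2 * z_se

    # Transform back to the original scale (an interval for the geometric mean)
    print(np.exp(lo), np.exp(hi))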

Data can also be transformed to make them easier to visualize. For example, suppose we have a scatterplot in which the points are the countries of the world, and the data values being plotted are the land area and population of each country. If the plot is made using untransformed data (e.g. square kilometers for area and the number of people for population), most of the countries would be plotted in a tight cluster of points in the lower left corner of the graph. The few countries with very large areas and/or populations would be spread thinly around most of the graph's area. Simply rescaling units (e.g., to thousand square kilometers, or to millions of people) will not change this. However, following logarithmic transformations of both area and population, the points will be spread more uniformly in the graph.
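A plotting sketch along these lines, using a handful of rough, purely illustrative area/population pairs rather than the real country data:

    import matplotlib.pyplot as plt

    # Rough, illustrative (area km^2, population) pairs spanning several orders of magnitude
    areas =       [2.0,   720,  41_000, 9.8e6, 1.7e7]
    populations = [800, 5.9e6,   8.7e6, 3.3e8, 1.4e8]

    fig, (ax_raw, ax_log) = plt.subplots(1, 2, figsize=(9, 4))

    ax_raw.scatter(populations, areas)          # raw data: most points crowd the lower left
    ax_raw.set_xlabel("population")
    ax_raw.set_ylabel("area (km^2)")

    ax_log.scatter(populations, areas)
    ax_log.set_xscale("log")                    # log-log axes: points spread out more evenly
    ax_log.set_yscale("log")
    ax_log.set_xlabel("population")
    ax_log.set_ylabel("area (km^2)")

    plt.show()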

Another reason for applying data transformation is to improve interpretability, even if no formal statistical analysis or visualization is to be performed. For example, suppose we are comparing cars in terms of their fuel economy. These data are usually presented as "kilometers per liter" or "miles per gallon". However, if the goal is to assess how much additional fuel a person would use in one year when driving one car compared to another, it is more natural to work with the data transformed by applying the reciprocal function, yielding liters per kilometer, or gallons per mile.
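As a small worked example (with made-up fuel-economy figures), the reciprocal turns the comparison into a simple subtraction and multiplication:

    # Hypothetical fuel economies in km per liter
    car_a_km_per_l = 20.0
    car_b_km_per_l = 10.0

    # Reciprocal transform: liters per km
    car_a_l_per_km = 1 / car_a_km_per_l    # 0.05 L/km
    car_b_l_per_km = 1 / car_b_km_per_l    # 0.10 L/km

    # Extra fuel used by car B over a hypothetical 15,000 km year
    annual_km = 15_000
    print((car_b_l_per_km - car_a_l_per_km) * annual_km)   # 750 liters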

In regression

Data transformation may be used as a remedial measure to make data suitable for modeling with linear regression if the original data violates one or more assumptions of linear regression. For example, the simplest linear regression models assume a linear relationship between the expected value of Y (the response variable to be predicted) and each independent variable (when the other independent variables are held fixed). If linearity fails to hold, even approximately, it is sometimes possible to transform either the independent or dependent variables in the regression model to improve the linearity. For example, addition of quadratic functions of the original independent variables may lead to a linear relationship with expected value of Y, resulting in a polynomial regression model, a special case of linear regression.
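A minimal sketch of this idea on simulated data: adding a squared column to the design matrix keeps the model linear in its coefficients while capturing the curved relationship.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 10, size=200)
    y = 2.0 + 0.5 * x + 0.3 * x**2 + rng.normal(0, 1, size=200)   # truly quadratic

    # Design matrix with an added quadratic column: still *linear* in the coefficients
    X = np.column_stack([np.ones_like(x), x, x**2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(beta)   # approximately [2.0, 0.5, 0.3]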

Another assumption of linear regression is homoscedasticity, that is the variance of errors must be the same regardless of the values of predictors. If this assumption is violated (i.e. if the data is heteroscedastic), it may be possible to find a transformation of Y alone, or transformations of both X (the predictor variables) and Y, such that the homoscedasticity assumption (in addition to the linearity assumption) holds true on the transformed variables and linear regression may therefore be applied on these.

Yet another application of data transformation is to address the problem of lack of normality in error terms. Univariate normality is not needed for least squares estimates of the regression parameters to be meaningful (see Gauss–Markov theorem). However, confidence intervals and hypothesis tests will have better statistical properties if the variables exhibit multivariate normality. Transformations that stabilize the variance of error terms (i.e. those that address heteroscedasticity) often also help make the error terms approximately normal.

Examples

Equation: Y = a + bX

Meaning: A unit increase in X is associated with an average of b units increase in Y.

Equation: log(Y) = a + bX

(From exponentiating both sides of the equation: Y = e^a · e^(bX))
Meaning: A unit increase in X is associated with an average increase of b units in log(Y), or equivalently, Y increases on average by a multiplicative factor of e^b. For illustrative purposes, if base-10 logarithm were used instead of the natural logarithm in the above transformation and the same symbols (a and b) were used to denote the regression coefficients, then a unit increase in X would lead to a 10^b times increase in Y on average. If b were 1, this would imply a 10-fold increase in Y for a unit increase in X.

Equation: Y = a + b · log(X)

Meaning: A k-fold increase in X is associated with an average of b · log(k) units increase in Y. For illustrative purposes, if base-10 logarithm were used instead of the natural logarithm in the above transformation and the same symbols (a and b) were used to denote the regression coefficients, then a tenfold increase in X would result in an average increase of b units in Y.

Equation: log(Y) = a + b · log(X)

(From exponentiating both sides of the equation: Y = e^a · X^b)
Meaning: A k-fold increase in X is associated with a k^b multiplicative increase in Y on average. Thus if X doubles, Y changes by a multiplicative factor of 2^b.
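A short simulation can make the last interpretation concrete. The data below are generated so that log(Y) = a + b·log(X) holds with b = 0.7, and the fitted coefficient recovers the multiplicative effect of doubling X:

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.uniform(1, 100, size=500)
    y = 3.0 * x**0.7 * rng.lognormal(0, 0.1, size=500)   # Y = e^a * X^b with b = 0.7, plus noise

    # Fit log(Y) = a + b*log(X) by ordinary least squares
    X = np.column_stack([np.ones_like(x), np.log(x)])
    a_hat, b_hat = np.linalg.lstsq(X, np.log(y), rcond=None)[0]

    print(b_hat)        # close to 0.7
    print(2 ** b_hat)   # doubling X multiplies Y by roughly 2^0.7, about 1.62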

Alternative

Generalized linear models (GLMs) provide a flexible generalization of ordinary linear regression that allows for response variables that have error distribution models other than a normal distribution. GLMs allow the linear model to be related to the response variable via a link function and allow the magnitude of the variance of each measurement to be a function of its predicted value.
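As a hedged sketch of this alternative (using the statsmodels library and simulated count data), a Poisson GLM with a log link models the counts directly instead of transforming the response:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    x = rng.uniform(0, 2, size=300)
    mu = np.exp(0.5 + 1.2 * x)      # mean depends on x through the log link
    y = rng.poisson(mu)             # Poisson counts: variance equals the mean

    X = sm.add_constant(x)
    result = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    print(result.params)            # roughly [0.5, 1.2]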

Common cases

The logarithm transformation and square root transformation are commonly used for positive data, and the multiplicative inverse transformation (reciprocal transformation) can be used for non-zero data. The power transformation is a family of transformations parameterized by a non-negative value λ that includes the logarithm, square root, and multiplicative inverse transformations as special cases. To approach data transformation systematically, it is possible to use statistical estimation techniques to estimate the parameter λ in the power transformation, thereby identifying the transformation that is approximately the most appropriate in a given setting. Since the power transformation family also includes the identity transformation, this approach can also indicate whether it would be best to analyze the data without a transformation. In regression analysis, this approach is known as the Box–Cox transformation.
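For positive data, SciPy's boxcox function estimates λ by maximum likelihood; a brief sketch on simulated right-skewed data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    data = rng.lognormal(mean=0.0, sigma=0.8, size=500)   # positive, right-skewed sample

    # Estimate the power-transformation parameter lambda by maximum likelihood
    transformed, lam = stats.boxcox(data)
    print(lam)                       # near 0 here, i.e. close to a log transform
    print(stats.skew(transformed))   # skewness much closer to 0 than for the raw data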

The reciprocal transformation, some power transformations such as the Yeo–Johnson transformation, and certain other transformations such as applying the inverse hyperbolic sine, can be meaningfully applied to data that include both positive and negative values (the power transformation is invertible over all real numbers if λ is an odd integer). However, when both negative and positive values are observed, it is common to begin by adding a constant to all values, producing a set of non-negative data to which any power transformation can be applied.

A common situation where a data transformation is applied is when a value of interest ranges over several orders of magnitude. Many physical and social phenomena exhibit such behavior — incomes, species populations, galaxy sizes, and rainfall volumes, to name a few. Power transforms, and in particular the logarithm, can often be used to induce symmetry in such data. The logarithm is often favored because it is easy to interpret its result in terms of "fold changes."

The logarithm also has a useful effect on ratios. If we are comparing positive quantities X and Y using the ratio X / Y, then if X < Y, the ratio is in the interval (0,1), whereas if X > Y, the ratio is in the half-line (1,∞), where a ratio of 1 corresponds to equality. In an analysis where X and Y are treated symmetrically, the log-ratio log(X / Y) is zero in the case of equality, and it has the property that if X is K times greater than Y, the log-ratio is equidistant from zero as in the situation where Y is K times greater than X (the log-ratios are log(K) and −log(K) in these two situations).

If values are naturally restricted to be in the range 0 to 1, not including the end-points, then a logit transformation may be appropriate: this yields values in the range (−∞,∞).
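A minimal sketch of the logit and its inverse:

    import numpy as np

    def logit(p):
        # Maps proportions in (0, 1) to the whole real line
        return np.log(p / (1 - p))

    def inv_logit(z):
        # Maps real values back to (0, 1)
        return 1 / (1 + np.exp(-z))

    p = np.array([0.01, 0.25, 0.5, 0.75, 0.99])
    z = logit(p)
    print(z)               # symmetric about 0; extreme proportions are stretched out
    print(inv_logit(z))    # recovers the original proportions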

Transforming to normality

1. It is not always necessary or desirable to transform a data set to resemble a normal distribution. However, if symmetry or normality are desired, they can often be induced through one of the power transformations.

2. A linguistic power function is distributed according to the Zipf–Mandelbrot law. The distribution is extremely spiky and leptokurtic, which is why researchers had to turn their backs on statistics to solve, for example, authorship attribution problems. Nevertheless, the use of Gaussian statistics is perfectly possible by applying data transformation.

3. To assess whether normality has been achieved after transformation, any of the standard normality tests may be used. A graphical approach is usually more informative than a formal statistical test and hence a normal quantile plot is commonly used to assess the fit of a data set to a normal population. Alternatively, rules of thumb based on the sample skewness and kurtosis have also been proposed.

Transforming to a uniform distribution or an arbitrary distribution

If we observe a set of n values X1, ..., Xn with no ties (i.e., there are n distinct values), we can replace Xi with the transformed value Yi = k, where k is defined such that Xi is the kth largest among all the X values. This is called the rank transform,[14] and creates data with a perfect fit to a uniform distribution. This approach has a population analogue.
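A brief sketch of the rank transform using scipy.stats.rankdata (assuming no ties, as in the text; whether ranks count from the smallest or the largest value, the rescaled ranks form an evenly spaced, uniform-looking set):

    import numpy as np
    from scipy import stats

    x = np.array([3.2, -1.0, 7.5, 0.4, 12.9])   # n distinct values

    ranks = stats.rankdata(x)        # rank 1 = smallest value, n = largest
    print(ranks)                     # [3. 1. 4. 2. 5.]

    # Scaling the ranks into (0, 1) gives a perfectly even, uniform-like grid
    print(ranks / (len(x) + 1))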

Using the probability integral transform, if X is any random variable, and F is the cumulative distribution function of X, then as long as F is invertible, the random variable U = F(X) follows a uniform distribution on the unit interval [0,1].

From a uniform distribution, we can transform to any distribution with an invertible cumulative distribution function. If G is an invertible cumulative distribution function, and U is a uniformly distributed random variable, then the random variable G−1(U) has G as its cumulative distribution function.

Putting the two together, if X is any random variable, F is the invertible cumulative distribution function of X, and G is an invertible cumulative distribution function then the random variable G−1(F(X)) has G as its cumulative distribution function.
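A sketch of this composition with SciPy distributions, mapping exponential data to a standard normal purely for illustration (F is the exponential CDF, G the standard normal CDF):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    x = rng.exponential(scale=2.0, size=10_000)   # X with known CDF F

    u = stats.expon(scale=2.0).cdf(x)             # U = F(X), uniform on (0, 1)
    z = stats.norm.ppf(u)                         # G^{-1}(U), standard normal

    print(u.mean(), u.std())     # approximately 0.5 and 1/sqrt(12) ≈ 0.289
    print(z.mean(), z.std())     # approximately 0 and 1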

Variance stabilizing transformations

Many types of statistical data exhibit a "variance-on-mean relationship", meaning that the variability is different for data values with different expected values. As an example, in comparing different populations in the world, the variance of income tends to increase with mean income. If we consider a number of small area units (e.g., counties in the United States) and obtain the mean and variance of incomes within each county, it is common that the counties with higher mean income also have higher variances.

A variance-stabilizing transformation aims to remove a variance-on-mean relationship, so that the variance becomes constant relative to the mean. Examples of variance-stabilizing transformations are the Fisher transformation for the sample correlation coefficient, the square root transformation or Anscombe transform for Poisson data (count data), the Box–Cox transformation for regression analysis, and the arcsine square root transformation or angular transformation for proportions (binomial data). While commonly used for statistical analysis of proportional data, the arcsine square root transformation is not recommended because logistic regression or a logit transformation are more appropriate for binomial or non-binomial proportions, respectively, especially due to decreased type-II error.
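A quick simulation illustrating one of these, the square-root transform for Poisson counts: on the raw scale the variance grows with the mean, while after the square root it settles near a constant (about 1/4 once the mean is not too small):

    import numpy as np

    rng = np.random.default_rng(6)

    for mean in [2, 10, 50, 200]:
        counts = rng.poisson(mean, size=100_000)
        raw_var = counts.var()              # grows with the mean (≈ mean for Poisson)
        sqrt_var = np.sqrt(counts).var()    # approaches 0.25 as the mean grows
        print(mean, round(raw_var, 2), round(sqrt_var, 3))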

Transformations for multivariate data

Univariate functions can be applied point-wise to multivariate data to modify their marginal distributions. It is also possible to modify some attributes of a multivariate distribution using an appropriately constructed transformation. For example, when working with time series and other types of sequential data, it is common to difference the data to improve stationarity. If data generated by a random vector X are observed as vectors Xi of observations with covariance matrix Σ, a linear transformation can be used to decorrelate the data. To do this, the Cholesky decomposition is used to express Σ = A A'. Then the transformed vector Yi = A−1Xi has the identity matrix as its covariance matrix.
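A short sketch of the decorrelation step with NumPy, using simulated correlated data and checking that the transformed vectors have approximately the identity covariance matrix:

    import numpy as np

    rng = np.random.default_rng(7)

    # Simulated correlated data: each row is an observation vector X_i
    sigma = np.array([[4.0, 1.5],
                      [1.5, 2.0]])
    data = rng.multivariate_normal(mean=[0, 0], cov=sigma, size=5_000)

    A = np.linalg.cholesky(np.cov(data, rowvar=False))   # estimate Sigma = A A'
    decorrelated = data @ np.linalg.inv(A).T             # Y_i = A^{-1} X_i for each row

    print(np.cov(decorrelated, rowvar=False).round(2))   # approximately the identity matrix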

Functional specialization (brain)

From Wikipedia, the free encyclopedia

In neuroscience, functional specialization is a theory which suggests that different areas in the brain are specialized for different functions.

Historical origins

1848 edition of American Phrenological Journal published by Fowlers & Wells, New York City

Phrenology, created by Franz Joseph Gall (1758–1828) and Johann Gaspar Spurzheim (1776–1832) and best known for the idea that one's personality could be determined by the variation of bumps on their skull, proposed that different regions in one's brain have different functions and may very well be associated with different behaviours. Gall and Spurzheim were the first to observe the crossing of pyramidal tracts, thus explaining why lesions in one hemisphere are manifested in the opposite side of the body. However, Gall and Spurzheim did not attempt to justify phrenology on anatomical grounds. It has been argued that phrenology was fundamentally a science of race. Gall considered the most compelling argument in favor of phrenology the differences in skull shape found in sub-Saharan Africans and the anecdotal evidence (due to scientific travelers and colonists) of their intellectual inferiority and emotional volatility. In Italy, Luigi Rolando carried out lesion experiments and performed electrical stimulation of the brain, including the Rolandic area.

Phineas Gage's accident

Phineas Gage became one of the first lesion case studies in 1848 when an explosion drove a large iron rod completely through his head, destroying his left frontal lobe. He recovered with no apparent sensory, motor, or gross cognitive deficits, but with behaviour so altered that friends described him as "no longer being Gage," suggesting that the damaged areas are involved in "higher functions" such as personality. However, Gage's mental changes are usually grossly exaggerated in modern presentations.

Subsequent cases (such as Broca's patient Tan) gave further support to the doctrine of specialization.

In the 20th century, in the process of treating epilepsy, Wilder Penfield produced maps of the location of various functions (motor, sensory, memory, vision) in the brain.

Major theories of the brain

Currently, there are two major theories of the brain's cognitive function. The first is the theory of modularity. Stemming from phrenology, this theory supports functional specialization, suggesting the brain has different modules that are domain specific in function. The second theory, distributive processing, proposes that the brain is more interactive and its regions are functionally interconnected rather than specialized. Each orientation plays a role within certain aims, and the two tend to complement each other (see the section "Collaboration" below).

Modularity

The theory of modularity suggests that there are functionally specialized regions in the brain that are domain specific for different cognitive processes. Jerry Fodor expanded the initial notion of phrenology by creating his Modularity of Mind theory, which indicates that distinct neurological regions called modules are defined by their functional roles in cognition. He also traced many of his concepts of modularity back to philosophers like Descartes, who wrote about the mind being composed of "organs" or "psychological faculties". An example of Fodor's concept of modules is seen in cognitive processes such as vision, which has many separate mechanisms for colour, shape and spatial perception.

One of the fundamental beliefs of domain specificity and the theory of modularity suggests that it is a consequence of natural selection and is a feature of our cognitive architecture. Researchers Hirschfeld and Gelman propose that because the human mind has evolved by natural selection, it implies that enhanced functionality would develop if it produced an increase in "fit" behaviour. Research on this evolutionary perspective suggests that domain specificity is involved in the development of cognition because it allows one to pinpoint adaptive problems.

An issue for the modular theory of cognitive neuroscience is that there are cortical anatomical differences from person to person. Although many studies of modularity are undertaken from very specific lesion case studies, the idea is to create a neurological function map that applies to people in general. Extrapolating from lesion studies and other case studies requires adherence to the universality assumption: that there is no qualitative difference between subjects who are neurologically intact. For example, two subjects would fundamentally be the same neurologically before their lesions, yet afterwards show distinctly different cognitive deficits. Subject 1 with a lesion in the "A" region of the brain may show impaired functioning in cognitive ability "X" but not "Y", while subject 2 with a lesion in area "B" demonstrates reduced "Y" ability but "X" is unaffected; results like these allow inferences to be made about brain specialization and localization, a method known as double dissociation.

The difficulty with this theory is that in typical non-lesioned subjects, locations within the brain anatomy are similar but not completely identical. There is a strong defense for this inherent deficit in our ability to generalize when using functional localizing techniques (fMRI, PET, etc.). To account for this problem, the coordinate-based Talairach and Tournoux stereotaxic system is widely used to compare subjects' results to a standard brain using an algorithm. Another solution using coordinates involves comparing brains using sulcal reference points. A slightly newer technique is to use functional landmarks, which combines sulcal and gyral landmarks (the grooves and folds of the cortex) and then finding an area well known for its modularity, such as the fusiform face area. This landmark area then serves to orient the researcher to the neighboring cortex.

Future developments for modular theories of neuropsychology may lie in "modular psychiatry". The concept is that a modular understanding of the brain and advanced neuroimaging techniques will allow for a more empirical diagnosis of mental and emotional disorders. There has been some work toward this extension of the modularity theory with regard to the physical neurological differences in subjects with depression and schizophrenia, for example. Zielasek and Gaebel have set out a list of requirements in the field of neuropsychology in order to move towards neuropsychiatry:

  1. To assemble a complete overview of putative modules of the human mind
  2. To establish module-specific diagnostic tests (specificity, sensitivity, reliability)
  3. To assess how far individual modules, sets of modules or their connections are affected in certain psychopathological situations
  4. To probe novel module-specific therapies like the facial affect recognition training or to retrain access to context information in the case of delusions and hallucinations, in which "hyper-modularity" may play a role 

Research in the study of brain function can also be applied to cognitive behaviour therapy. As therapy becomes increasingly refined, it is important to differentiate cognitive processes in order to discover their relevance to different patient treatments. An example comes specifically from studies on lateral specialization between the left and right cerebral hemispheres of the brain. The functional specialization of these hemispheres offers insight into different forms of cognitive behaviour therapy, one focusing on verbal cognition (the main function of the left hemisphere) and the other emphasizing imagery or spatial cognition (the main function of the right hemisphere). Examples of therapies that involve imagery, requiring right-hemisphere activity, include systematic desensitization and anxiety management training. Both of these techniques rely on the patient's ability to use visual imagery to cope with or replace symptoms such as anxiety. Examples of cognitive behaviour therapies that involve verbal cognition, requiring left-hemisphere activity, include self-instructional training and stress inoculation. Both of these techniques focus on patients' internal self-statements, requiring them to use verbal cognition. When deciding which cognitive therapy to employ, it is important to consider the patient's primary cognitive style. Many individuals tend to prefer visual imagery over verbalization and vice versa. One way of determining which hemisphere a patient favours is by observing their lateral eye movements. Studies suggest that eye gaze reflects the activation of the cerebral hemisphere contralateral to the direction of gaze. Thus, when asked questions that require spatial thinking, individuals tend to move their eyes to the left, whereas when asked questions that require verbal thinking, they usually move their eyes to the right. This information allows one to choose the optimal cognitive behaviour therapeutic technique, thereby enhancing the treatment of many patients.

Areas representing modularity in the brain

Fusiform face area

One of the most well known examples of functional specialization is the fusiform face area (FFA). Justine Sergent was one of the first researchers to bring forth evidence on the functional neuroanatomy of face processing. Using positron emission tomography (PET), Sergent found that there were different patterns of activation in response to two different required tasks, face processing versus object processing. These results can be linked with her studies of brain-damaged patients with lesions in the occipital and temporal lobes, who showed an impairment of face processing but no difficulty recognizing everyday objects, a disorder also known as prosopagnosia. Later research by Nancy Kanwisher using functional magnetic resonance imaging (fMRI) found specifically that a region of the inferior temporal cortex, known as the fusiform gyrus, was significantly more active when subjects viewed, recognized and categorized faces, in comparison to other regions of the brain. Lesion studies also supported this finding, where patients were able to recognize objects but unable to recognize faces. This provided evidence for domain specificity in the visual system, and Kanwisher regards the fusiform face area as a module in the brain, specifically in the extrastriate cortex, that is specialized for face perception.

Visual area V4 and V5

While looking at regional cerebral blood flow (rCBF) using PET, researcher Semir Zeki directly demonstrated functional specialization within the visual cortex, known as visual modularity, first in the monkey and then in the human visual brain. He localized regions involved specifically in the perception of colour and visual motion, as well as of orientation (form). For colour, visual area V4 was located when subjects were shown two identical displays, one multicoloured and the other in shades of grey. This was further supported by lesion studies in which individuals were unable to see colours after damage, a disorder known as achromatopsia. Combining PET and magnetic resonance imaging (MRI), subjects viewing a moving checkerboard pattern versus a stationary checkerboard pattern located visual area V5, which is now considered to be specialized for visual motion (Watson et al., 1993). This area of functional specialization was also supported by studies of lesion patients whose damage caused cerebral motion blindness, a condition now referred to as cerebral akinetopsia.

Frontal lobes

Studies have found the frontal lobes to be involved in the executive functions of the brain, which are higher-level cognitive processes. This control process is involved in the coordination, planning and organizing of actions towards an individual's goals. It contributes to such things as one's behaviour, language and reasoning. More specifically, it was found to be a function of the prefrontal cortex, and evidence suggests that these executive functions control processes such as planning and decision making, error correction, and assisting in overcoming habitual responses. Miller and Cummings used PET and functional magnetic resonance imaging (fMRI) to further support functional specialization of the frontal cortex. They found lateralization of verbal working memory in the left frontal cortex and visuospatial working memory in the right frontal cortex. Lesion studies support these findings, where left frontal lobe patients exhibited problems in controlling executive functions such as creating strategies. The dorsolateral, ventrolateral and anterior cingulate regions within the prefrontal cortex are proposed to work together in different cognitive tasks, which relates to interaction theories. However, there is also evidence suggesting strong individual specializations within this network. For instance, Miller and Cummings found that the dorsolateral prefrontal cortex is specifically involved in the manipulation and monitoring of sensorimotor information within working memory.

Right and left hemispheres

During the 1960s, Roger Sperry conducted a natural experiment on epileptic patients who had previously had their corpora callosa cut. The corpus callosum is the area of the brain dedicated to linking both the right and left hemisphere together. Sperry et al.'s experiment was based on flashing images in the right and left visual fields of his participants. Because the participant's corpus callosum was cut, the information processed by each visual field could not be transmitted to the other hemisphere. In one experiment, Sperry flashed images in the right visual field (RVF), which would subsequently be transmitted to the left hemisphere (LH) of the brain. When asked to repeat what they had previously seen, participants were fully capable of remembering the image flashed. However, when the participants were then asked to draw what they had seen, they were unable to. When Sperry et al. flashed images in the left visual field (LVF), the information processed would be sent to the right hemisphere (RH) of the brain. When asked to repeat what they had previously seen, participants were unable to recall the image flashed, but were very successful in drawing the image. Therefore, Sperry concluded that the left hemisphere of the brain was dedicated to language as the participants could clearly speak the image flashed. On the other hand, Sperry concluded that the right hemisphere of the brain was involved in more creative activities such as drawing.

Parahippocampal place area

Located in the parahippocampal gyrus, the parahippocampal place area (PPA) was named by Nancy Kanwisher and Russell Epstein after an fMRI study showed that the PPA responds optimally to scenes containing a spatial layout, minimally to single objects, and not at all to faces. It was also noted in this experiment that activity in the PPA remains the same when viewing a scene with an empty room or a room filled with meaningful objects. Kanwisher and Epstein proposed "that the PPA represents places by encoding the geometry of the local environment". In addition, Soojin Park and Marvin Chun posited that activation in the PPA is viewpoint specific, and so responds to changes in the angle of the scene. In contrast, another spatial mapping area, the retrosplenial cortex (RSC), is viewpoint invariant, or does not change response levels when views change. This perhaps indicates a complementary arrangement of functionally and anatomically separate visual processing brain areas.

Extrastriate body area

Located in the lateral occipitotemporal cortex, the extrastriate body area (EBA) has been shown in fMRI studies to respond selectively when subjects see human bodies or body parts, implying that it has functional specialization. The EBA does not respond optimally to objects or parts of objects but to human bodies and body parts, a hand for example. In fMRI experiments conducted by Downing et al., participants were asked to look at a series of pictures. The stimuli included objects, parts of objects (for example, just the head of a hammer), figures of the human body in all sorts of positions and levels of detail (including line drawings or stick figures), and body parts (hands or feet) without any body attached. There was significantly more blood flow (and thus activation) in response to human bodies, no matter how detailed, and body parts than to objects or object parts.

Distributive processing

The cognitive theory of distributed processing suggests that brain areas are highly interconnected and process information in a distributed manner.

A remarkable precedent of this orientation is the research of Justo Gonzalo on brain dynamics, in which several phenomena he observed could not be explained by the traditional theory of localization. From the gradation he observed between different syndromes in patients with different cortical lesions, this author proposed in 1952 a functional gradients model, which permits an ordering and an interpretation of multiple phenomena and syndromes. The functional gradients are continuous functions through the cortex describing a distributed specificity, so that, for a given sensory system, the specific gradient, contralateral in character, is maximal in the corresponding projection area and decreases in gradation towards a more "central" zone and beyond, so that the final decline reaches the other primary areas. As a consequence of the crossing and overlapping of the specific gradients, the central zone, where the overlap is greatest, would show an action of mutual integration, rather nonspecific (or multisensory) and bilateral in character due to the corpus callosum. This action would be maximal in the central zone and minimal towards the projection areas. As the author stated (p. 20 of the English translation), "a functional continuity with regional variation is then offered, each point of the cortex acquiring different properties but with certain unity with the rest of the cortex. It is a dynamic conception of quantitative localizations". A very similar gradients scheme was proposed by Elkhonon Goldberg in 1989.

Other researchers who provide evidence to support the theory of distributive processing include Anthony McIntosh and William Uttal, who question and debate localization and modality specialization within the brain. McIntosh's research suggests that human cognition involves interactions between the brain regions responsible for processing sensory information, such as vision and audition, and other mediating areas like the prefrontal cortex. McIntosh explains that modularity is mainly observed in sensory and motor systems; however, beyond these receptors, modularity becomes "fuzzier" and the cross connections between systems increase. He also illustrates that there is an overlapping of functional characteristics between the sensory and motor systems, where these regions are close to one another. These different neural interactions influence each other, and activity changes in one area influence other connected areas. With this, McIntosh suggests that if you only focus on activity in one area, you may miss the changes in other integrative areas. Neural interactions can be measured using analysis of covariance in neuroimaging. McIntosh used this analysis to convey a clear example of the interaction theory of distributive processing. In this study, subjects learned that an auditory stimulus signalled a visual event. McIntosh found activation (an increase in blood flow) in an area of the occipital cortex, a region of the brain involved in visual processing, when the auditory stimulus was presented alone. Correlations between the occipital cortex and different areas of the brain, such as the prefrontal cortex, premotor cortex and superior temporal cortex, showed a pattern of co-variation and functional connectivity.

Uttal focuses on the limits of localizing cognitive processes in the brain. One of his main arguments is that since the late 1990s, research in cognitive neuroscience has forgotten about conventional psychophysical studies based on behavioural observation. He believes that current research focuses on the technological advances of brain imaging techniques such as MRI and PET scans, and thus depends on the assumptions of localization and hypothetical cognitive modules that such imaging techniques are used to pursue. Uttal's major concern involves many controversies over the validity, over-assumptions and strong inferences some of these images are used to support. For instance, there is concern over the proper use of control images in an experiment. Most of the cerebrum is active during cognitive activity, so the amount of increased activity in a region must be greater when compared to a control area. In general, this may produce false or exaggerated findings and may increase the tendency to ignore regions of diminished activity which may be crucial to the particular cognitive process being studied. Moreover, Uttal believes that localization researchers tend to ignore the complexity of the nervous system. Many regions in the brain are physically interconnected in a nonlinear system; hence, Uttal believes that behaviour is produced by a variety of system organizations.

Collaboration

The two theories, modularity and distributive processing, can also be combined. By operating simultaneously, these principles may interact with each other in a collaborative effort to characterize the functioning of the brain. Fodor himself, one of the major contributors to the modularity theory, appears to share this sentiment. He noted that modularity is a matter of degree, and that the brain is modular to the extent that this warrants studying it in terms of its functional specialization. Although there are areas in the brain that are more specialized for cognitive processes than others, the nervous system also integrates and connects the information produced in these regions. In fact, the distributive scheme of functional cortical gradients proposed by J. Gonzalo already tries to join the modular and distributive concepts: regional heterogeneity would be a definitive acquisition (maximum specificity in the projection paths and primary areas), but the rigid separation between projection and association areas would be erased through the continuous gradient functions.

The collaboration between the two theories would not only provide a more unified perception and understanding of the world but also the ability to learn from it.

Human taxonomy

From Wikipedia, the free encyclopedia
 
Homo ("humans")
Temporal range: Piacenzian-Present, 2.865–0 Ma
Scientific classification 
Domain: Eukaryota
Kingdom: Animalia
Phylum: Chordata
Class: Mammalia
Order: Primates
Suborder: Haplorhini
Infraorder: Simiiformes
Family: Hominidae
Subfamily: Homininae
Tribe: Hominini
Genus: Homo (Linnaeus, 1758)
Type species: Homo sapiens (Linnaeus, 1758)
Species: Homo sapiens; other species or subspecies have been suggested

Human taxonomy is the classification of the human species (systematic name Homo sapiens, Latin: "wise man") within zoological taxonomy. The systematic genus, Homo, is designed to include both anatomically modern humans and extinct varieties of archaic humans. Current humans have been designated as subspecies Homo sapiens sapiens, differentiated, according to some, from the direct ancestor, Homo sapiens idaltu (with some other research instead classifying idaltu and current humans as belonging to the same subspecies).

Since the introduction of systematic names in the 18th century, knowledge of human evolution has increased drastically, and a number of intermediate taxa have been proposed in the 20th and early 21st centuries. The most widely accepted taxonomy grouping takes the genus Homo as originating between two and three million years ago, divided into at least two species, archaic Homo erectus and modern Homo sapiens, with about a dozen further suggestions for species without universal recognition.

The genus Homo is placed in the tribe Hominini alongside Pan (chimpanzees). The two genera are estimated to have diverged over an extended time of hybridization, spanning roughly 10 to 6 million years ago, with possible admixture as late as 4 million years ago. A subtribe of uncertain validity, grouping archaic "pre-human" or "para-human" species younger than the Homo-Pan split, is Australopithecina (proposed in 1939).

A proposal by Wood and Richmond (2000) would introduce Hominina as a subtribe alongside Australopithecina, with Homo the only known genus within Hominina. Alternatively, following Cela-Conde and Ayala (2003), the "pre-human" or "proto-human" genera of Australopithecus, Ardipithecus, Praeanthropus, and possibly Sahelanthropus may be placed on equal footing alongside the genus Homo. An even more extreme view rejects the division of Pan and Homo into separate genera, which, based on the Principle of Priority, would imply the reclassification of chimpanzees as Homo paniscus (or similar).

Categorizing humans based on phenotypes is a socially controversial subject. Biologists originally classified races as subspecies, but contemporary anthropologists reject the concept of race as a useful tool for understanding humanity, viewing humanity instead as a complex, interrelated genetic continuum. The taxonomy of the hominins continues to evolve.

History

The taxonomic classification of humans following John Edward Gray (1825).

Human taxonomy on one hand involves the placement of humans within the taxonomy of the hominids (great apes), and on the other the division of archaic and modern humans into species and, if applicable, subspecies. Modern zoological taxonomy was developed by Carl Linnaeus during the 1730s to 1750s. He was the first to develop the idea that groups of people, like other biological entities, could also be given taxonomic classifications. He named the human species Homo sapiens in 1758, as the only member species of the genus Homo, divided into several subspecies corresponding to the great races. The Latin noun homō (genitive hominis) means "human being". The systematic name Hominidae for the family of the great apes was introduced by John Edward Gray (1825). Gray also supplied Hominini as the name of the tribe including both chimpanzees (genus Pan) and humans (genus Homo).

The discovery of the first extinct archaic human species from the fossil record dates to the mid 19th century: Homo neanderthalensis, classified in 1864. Since then, a number of other archaic species have been named, but there is no universal consensus as to their exact number. After the discovery of H. neanderthalensis, which even if "archaic" is recognizable as clearly human, late 19th to early 20th century anthropology for a time was occupied with finding the supposedly "missing link" between Homo and Pan. The "Piltdown Man" hoax of 1912 was the fraudulent presentation of such a transitional species. Since the mid-20th century, knowledge of the development of Hominini has become much more detailed, and taxonomical terminology has been altered a number of times to reflect this.

The introduction of Australopithecus as a third genus, alongside Homo and Pan, in the tribe Hominini is due to Raymond Dart (1925). Australopithecina as a subtribe containing Australopithecus as well as Paranthropus (Broom 1938) is a proposal by Gregory & Hellman (1939). More recently proposed additions to the Australopithecina subtribe include Ardipithecus (1995) and Kenyanthropus (2001). The position of Sahelanthropus (2002) relative to Australopithecina within Hominini is unclear. Cela-Conde and Ayala (2003) propose the recognition of Australopithecus, Ardipithecus, Praeanthropus, and Sahelanthropus (the latter incertae sedis) as separate genera.

Other proposed genera, now mostly considered part of Homo, include: Pithecanthropus (Dubois, 1894), Protanthropus (Haeckel, 1895), Sinanthropus (Black, 1927), Cyphanthropus (Pycraft, 1928), Africanthropus (Dreyer, 1935), Telanthropus (Broom & Anderson 1949), Atlanthropus (Arambourg, 1954), and Tchadanthropus (Coppens, 1965).

The genus Homo has been taken to originate some two million years ago, since the discovery of stone tools in Olduvai Gorge, Tanzania, in the 1960s. Homo habilis (Leakey et al., 1964) would be the first "human" species (member of genus Homo) by definition, its type specimen being the OH 7 fossils. However, the discovery of more fossils of this type has opened up the debate on the delineation of H. habilis from Australopithecus. In particular, the LD 350-1 jawbone fossil, discovered in 2013 and dated to 2.8 Mya, has been argued to be transitional between the two. It is also disputed whether H. habilis was the first hominin to use stone tools, as Australopithecus garhi, dated to c. 2.5 Mya, has been found along with stone tool implements. Fossil KNM-ER 1470 (discovered in 1972, designated Pithecanthropus rudolfensis by Alekseyev 1978) is now seen either as a third early species of Homo (alongside H. habilis and H. erectus) at about 2 million years ago, or alternatively as transitional between Australopithecus and Homo.

Wood and Richmond (2000) proposed that Gray's tribe Hominini ("hominins") be designated as comprising all species after the chimpanzee–human last common ancestor by definition, to the inclusion of Australopithecines and other possible pre-human or para-human species (such as Ardipithecus and Sahelanthropus) not known in Gray's time. In this suggestion, the new subtribe of Hominina was to be designated as including the genus Homo exclusively, so that Hominini would have two subtribes, Australopithecina and Hominina, with the only known genus in Hominina being Homo. Orrorin (2001) has been proposed as a possible ancestor of Hominina but not Australopithecina.

Alternative designations to Hominina have been proposed: Australopithecinae (Gregory & Hellman 1939) and Preanthropinae (Cela-Conde & Altaba 2002).

Species

At least a dozen species of Homo other than Homo sapiens have been proposed, with varying degrees of consensus. Homo erectus is widely recognized as the species directly ancestral to Homo sapiens. Most of the other proposed species are alternatively assigned to either Homo erectus or Homo sapiens as subspecies; this concerns Homo ergaster in particular. One proposal divides Homo erectus into an African and an Asian variety: the African is Homo ergaster, and the Asian is Homo erectus sensu stricto (inclusion of Homo ergaster with Asian Homo erectus gives Homo erectus sensu lato). There appears to be a recent trend, with the availability of ever more difficult-to-classify fossils such as the Dmanisi skulls (2013) or the Homo naledi fossils (2015), to subsume all archaic varieties under Homo erectus.

Comparative table of Homo lineages
Each lineage is listed with its temporal range (kya), habitat, adult height, adult mass, cranial capacity (cm³), fossil record, year of discovery, and year of publication of the name, where known.

H. habilis (membership in Homo uncertain): 2,100–1,500 kya; Tanzania; adult height 110–140 cm (3 ft 7 in – 4 ft 7 in); adult mass 33–55 kg (73–121 lb); cranial capacity 510–660 cm³; many fossils; discovered 1960, named 1964.
H. rudolfensis (membership in Homo uncertain): 1,900 kya; Kenya; cranial capacity 700 cm³; 2 sites; discovered 1972, named 1986.
H. gautengensis (also classified as H. habilis): 1,900–600 kya; South Africa; adult height 100 cm (3 ft 3 in); 3 individuals; discovered 2010, named 2010.
H. erectus: 1,900–140 kya; Africa, Eurasia; adult height 180 cm (5 ft 11 in); adult mass 60 kg (130 lb); cranial capacity 850 (early) – 1,100 (late) cm³; many fossils; discovered 1891, named 1892.
H. ergaster (African H. erectus): 1,800–1,300 kya; East and Southern Africa; cranial capacity 700–850 cm³; many fossils; discovered 1949, named 1975.
H. antecessor: 1,200–800 kya; Western Europe; adult height 175 cm (5 ft 9 in); adult mass 90 kg (200 lb); cranial capacity 1,000 cm³; 2 sites; discovered 1994, named 1997.
H. heidelbergensis (early H. neanderthalensis): 600–300 kya; Europe, Africa; adult height 180 cm (5 ft 11 in); adult mass 90 kg (200 lb); cranial capacity 1,100–1,400 cm³; many fossils; discovered 1907, named 1908.
H. cepranensis (a single fossil, possibly H. heidelbergensis): c. 450 kya; Italy; cranial capacity 1,000 cm³; 1 skull cap; discovered 1994, named 2003.
H. longi: 309–138 kya; Northeast China; cranial capacity 1,420 cm³; 1 individual; discovered 1933, named 2021.
H. rhodesiensis (early H. sapiens): c. 300 kya; Zambia; cranial capacity 1,300 cm³; single or very few fossils; discovered 1921, named 1921.
H. naledi: c. 300 kya; South Africa; adult height 150 cm (4 ft 11 in); adult mass 45 kg (99 lb); cranial capacity 450 cm³; 15 individuals; discovered 2013, named 2015.
H. sapiens (anatomically modern humans): c. 300 kya–present; worldwide; adult height 150–190 cm (4 ft 11 in – 6 ft 3 in); adult mass 50–100 kg (110–220 lb); cranial capacity 950–1,800 cm³; extant; named 1758.
H. neanderthalensis: 240–40 kya; Europe, Western Asia; adult height 170 cm (5 ft 7 in); adult mass 55–70 kg (121–154 lb), heavily built; cranial capacity 1,200–1,900 cm³; many fossils; discovered 1829, named 1864.
H. floresiensis (classification uncertain): 190–50 kya; Indonesia; adult height 100 cm (3 ft 3 in); adult mass 25 kg (55 lb); cranial capacity 400 cm³; 7 individuals; discovered 2003, named 2004.
Nesher Ramla Homo (classification uncertain): 140–120 kya; Palestine; several individuals; discovered 2021.
H. tsaichangensis (possibly H. erectus or Denisova): c. 100 kya; Taiwan; 1 individual; discovered 2008(?), named 2015.
H. luzonensis: c. 67 kya; Philippines; 3 individuals; discovered 2007, named 2019.
Denisova hominin: 40 kya; Siberia; 2 sites; discovered 2000, described 2010.

Subspecies

Homo sapiens subspecies

1737 painting of Carl Linnaeus wearing a traditional Sami costume. Linnaeus is sometimes named as the lectotype of both H. sapiens and H. s. sapiens.

The recognition or nonrecognition of subspecies of Homo sapiens has a complicated history. The rank of subspecies in zoology is introduced for convenience, and not by objective criteria, based on pragmatic consideration of factors such as geographic isolation and sexual selection. The informal taxonomic rank of race is variously considered equivalent or subordinate to the rank of subspecies, and the division of anatomically modern humans (H. sapiens) into subspecies is closely tied to the recognition of major racial groupings based on human genetic variation.

A subspecies cannot be recognized independently: a species will either be recognized as having no subspecies at all or at least two (including any that are extinct). Therefore, the designation of an extant subspecies Homo sapiens sapiens only makes sense if at least one other subspecies is recognized. H. s. sapiens is attributed to "Linnaeus (1758)" by the taxonomic Principle of Coordination. During the 19th to mid-20th century, it was common practice to classify the major divisions of extant H. sapiens as subspecies, following Linnaeus (1758), who had recognized H. s. americanus, H. s. europaeus, H. s. asiaticus and H. s. afer as grouping the native populations of the Americas, West Eurasia, East Asia and Sub-Saharan Africa, respectively. Linnaeus also included H. s. ferus, for the "wild" form which he identified with feral children, and two other "wild" forms for reported specimens now considered very dubious (see cryptozoology), H. s. monstrosus and H. s. troglodytes.

There were variations and additions to the categories of Linnaeus, such as H. s. tasmanianus for the native population of Australia. Bory de St. Vincent in his Essai sur l'Homme (1825) extended Linnaeus's "racial" categories to as many as fifteen: Leiotrichi ("smooth-haired"): japeticus (with subraces), arabicus, iranicus, indicus, sinicus, hyperboreus, neptunianus, australasicus, columbicus, americanus, patagonicus; Oulotrichi ("crisp-haired"): aethiopicus, cafer, hottentotus, melaninus. Similarly, Georges Vacher de Lapouge (1899) also had categories based on race, such as priscus, spelaeus (etc.).

Homo sapiens neanderthalensis was proposed by King (1864) as an alternative to Homo neanderthalensis. There have been "taxonomic wars" over whether Neanderthals were a separate species since their discovery in the 1860s. Pääbo (2014) frames this as a debate that is unresolvable in principle, "since there is no definition of species perfectly describing the case." Louis Lartet (1869) proposed Homo sapiens fossilis based on the Cro-Magnon fossils.

A number of extinct varieties of Homo sapiens were proposed in the 20th century. Many of the original proposals did not use explicit trinomial nomenclature, even though they are still cited as valid synonyms of H. sapiens by Wilson & Reeder (2005). These include: Homo grimaldii (Lapouge, 1906), Homo aurignacensis hauseri (Klaatsch & Hauser, 1910), Notanthropus eurafricanus (Sergi, 1911), Homo fossilis infrasp. proto-aethiopicus (Giuffrida-Ruggeri, 1915), Telanthropus capensis (Broom, 1917), Homo wadjakensis (Dubois, 1921), Homo sapiens cro-magnonensis, Homo sapiens grimaldiensis (Gregory, 1921), Homo drennani (Kleinschmidt, 1931), and Homo galilensis (Joleaud, 1931) = Paleanthropus palestinus (McCown & Keith, 1932). Rightmire (1983) proposed Homo sapiens rhodesiensis.

After World War II, the practice of dividing extant populations of Homo sapiens into subspecies declined. An early authority explicitly avoiding the division of H. sapiens into subspecies was Grzimeks Tierleben, published 1967–1972. A late example of an academic authority proposing that the human racial groups should be considered taxonomical subspecies is John Baker (1974). The trinomial nomenclature Homo sapiens sapiens became popular for "modern humans" in the context of Neanderthals being considered a subspecies of H. sapiens in the second half of the 20th century. Derived from the convention, widespread in the 1980s, of considering two subspecies, H. s. neanderthalensis and H. s. sapiens, the explicit claim that "H. s. sapiens is the only extant human subspecies" appears in the early 1990s.

Since the 2000s, the extinct Homo sapiens idaltu (White et al., 2003) has gained wide recognition as a subspecies of Homo sapiens, but even in this case there is a dissenting view arguing that "the skulls may not be distinctive enough to warrant a new subspecies name". H. s. neanderthalensis and H. s. rhodesiensis continue to be considered separate species by some authorities, but the 2010s discovery of genetic evidence of archaic human admixture with modern humans has reopened the details of taxonomy of archaic humans.

Homo erectus subspecies

Homo erectus, since its introduction in 1892, has been divided into numerous subspecies, many of them formerly considered individual species of Homo. None of these proposed subspecies has achieved universal consensus among paleontologists.

E-patient

From Wikipedia, the free encyclopedia