Face perception

From Wikipedia, the free encyclopedia
An adult male's face with make-up

Facial perception is an individual's understanding and interpretation of the face. Here, perception implies the presence of consciousness and hence excludes automated facial recognition systems. Although facial recognition is found in other species, this article focuses on facial perception in humans.

The perception of facial features is an important part of social cognition. Information gathered from the face helps people understand each other's identity, what they are thinking and feeling, anticipate their actions, recognize their emotions, build connections, and communicate through body language. Developing facial recognition is a necessary building block for complex societal constructs. Being able to perceive identity, mood, age, sex, and race lets people mold the way they interact with one another and understand their immediate surroundings.

Though facial perception is mainly considered to stem from visual intake, studies have shown that even people born blind can learn face perception without vision. Studies have supported the notion of a specialized mechanism for perceiving faces.

Overview

Theories about the processes involved in adult face perception have largely come from two sources: research on normal adult face perception and the study of impairments in face perception caused by brain injury or neurological illness.

Bruce & Young model

Bruce & Young Model of Face Recognition, 1986

One of the most widely accepted theories of face perception argues that understanding faces involves several stages: from basic perceptual manipulations on the sensory information to derive details about the person (such as age, gender or attractiveness), to being able to recall meaningful details such as their name and any relevant past experiences of the individual.

This model, developed by Vicki Bruce and Andrew Young in 1986, argues that face perception involves independent sub-processes working in unison.

  1. A "view centered description" is derived from the perceptual input. Simple physical aspects of the face are used to work out age, gender or basic facial expressions. Most analysis at this stage is on feature-by-feature basis.
  2. This initial information is used to create a structural model of the face, which allows it to be compared to other faces in memory. This explains why the same person from a novel angle can still be recognized (see Thatcher effect).
  3. The structurally encoded representation is transferred to theoretical "face recognition units" that are used with "personal identity nodes" to identify a person through information from semantic memory. Interestingly, the ability to produce someone's name when presented with their face has been shown to be selectively damaged in some cases of brain injury, suggesting that naming may be a separate process from being able to produce other information about a person.
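
Read as a whole, the model describes a feed-forward pipeline. The following Python sketch is purely illustrative (all class and function names are hypothetical, not from Bruce and Young's paper) and only shows how the independent sub-processes above might be chained, including the observation that name retrieval can fail independently:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ViewCenteredDescription:
    """Stage 1: viewpoint-dependent input, analyzed feature by feature."""
    features: dict  # e.g. {"eyes": ..., "mouth": ...} -> age, gender, expression

@dataclass(frozen=True)
class StructuralCode:
    """Stage 2: viewpoint-independent model, comparable across novel angles."""
    geometry: tuple

@dataclass
class PersonIdentityNode:
    """Stage 3: semantic memory about a person; naming is a separate step."""
    semantic_info: dict
    name: Optional[str] = None  # can be selectively lost after brain injury

def encode_structure(view: ViewCenteredDescription) -> StructuralCode:
    # Abstract away viewpoint so the same person is recognized from new angles.
    return StructuralCode(geometry=tuple(sorted(view.features)))

def recognize(code: StructuralCode,
              face_recognition_units: dict) -> Optional[PersonIdentityNode]:
    # A face recognition unit matching the structural code activates the
    # corresponding personal identity node (or nothing, if unfamiliar).
    return face_recognition_units.get(code)
```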

Traumatic brain injury and neurological illness

Following brain damage, faces can appear severely distorted. A wide variety of distortions can occur – features can droop, enlarge, become discolored, or the entire face can appear to shift relative to the head. This condition is known as prosopometamorphopsia (PMO). In half of the reported cases, distortions are restricted to either the left or the right side of the face, and this form of PMO is called hemi-prosopometamorphopsia (hemi-PMO). Hemi-PMO often results from lesions to the splenium, which connects the right and left hemisphere. In the other half of reported cases, features on both sides of the face appear distorted.

Perceiving facial expressions can involve many areas of the brain, and damaging certain parts of the brain can cause specific impairments in one's ability to perceive a face. As stated earlier, research on the impairments caused by brain injury or neurological illness has helped develop our understanding of cognitive processes. The study of prosopagnosia (an impairment in recognizing faces that is usually caused by brain injury) has been particularly helpful in understanding how normal face perception might work. Individuals with prosopagnosia may differ in their abilities to understand faces, and it has been the investigation of these differences which has suggested that several stage theories might be correct.

Brain imaging studies typically show a great deal of activity in an area of the temporal lobe known as the fusiform gyrus, an area also known to cause prosopagnosia when damaged (particularly when damage occurs on both sides). This evidence has led to a particular interest in this area and it is sometimes referred to as the fusiform face area (FFA) for that reason.

It is important to note that while certain areas of the brain respond selectively to faces, facial processing involves many neural networks which include visual and emotional processing systems. For example, prosopagnosia patients demonstrate neuropsychological support for a specialized face perception mechanism as these people (due to brain damage) have deficits in facial perception, but their cognitive perception of objects remains intact. The face inversion effect provides behavioral support of a specialized mechanism as people tend to have greater deficits in task performance when prompted to react to an inverted face than to an inverted object.

Electrophysiological support comes from the finding that the N170 and M170 responses tend to be face-specific. Neuroimaging studies such as PET and fMRI studies have shown support for a specialized facial processing mechanism, as they have identified regions of the fusiform gyrus that have higher activation during face perception tasks than during other visual perception tasks. Novel optical illusions such as the flashed face distortion effect, in which scientific phenomenology outpaces neurological theory, also provide areas for research.

Difficulties in facial emotion processing can also be seen in individuals with traumatic brain injury, in both diffuse axonal injury and focal brain injury.

Early development

Despite numerous studies, there is no widely accepted time-frame in which the average human develops the ability to perceive faces.

Ability to discern faces from other objects

Many studies have found that infants will give preferential attention to faces in their visual field, indicating they can discern faces from other objects.

  • Infants often show particular interest in faces at around three months of age; that preference slowly disappears, re-emerges late during the first year, and slowly declines once more over the next two years of life.
  • While newborns show a preference for faces, this interest can be inconsistent as they grow older (specifically between one and four months of age).
  • Infants turning their heads towards faces or face-like images suggests rudimentary facial processing capacities.
  • The re-emergence of interest in faces at three months is likely influenced by a child's motor abilities.

Ability to detect emotion in the face

Lineart depicting various emotions.
Examples of various emotions

At around seven months of age, infants show the ability to discern faces by emotion. However, whether they have fully developed emotion recognition is unclear. Discerning visual differences in facial expressions is different from understanding the valence of a particular emotion.

  • Seven-month-olds seem capable of associating emotional prosodies with facial expressions. When presented with a happy or angry face, followed by an emotionally neutral word read in a happy or angry tone, their event-related potentials follow different patterns. Happy faces followed by angry vocal tones produce more changes than the other incongruous pairing, while there was no such difference between happy and angry congruous pairings. The greater reaction implies that infants held greater expectations of a happy vocal tone after seeing a happy face than an angry tone following an angry face.
  • By the age of seven months, children are able to recognize an angry or fearful facial expression, perhaps because of the threat-salient nature of the emotion. Despite this ability, newborns are not yet aware of the emotional content encoded within facial expressions.
  • Infants can comprehend facial expressions as social cues representing the feelings of other people before they are a year old. Seven-month-old infants show greater negative central components to angry faces that are looking directly at them than to angry faces looking elsewhere, although the gaze of fearful faces produces no difference. In addition, two event-related potentials in the posterior part of the brain are differently aroused by the two negative expressions tested. These results indicate that infants at this age can partially understand the higher level of threat from anger directed at them; activity in the occipital areas was also observed.
  • Five-month-olds, when presented with an image of a fearful expression and a happy expression, exhibit similar event-related potentials for both. However, when seven-month-olds are given the same treatment, they focus more on the fearful face. This result indicates increased cognitive focus toward fear that reflects the threat-salient nature of the emotion. Seven-month-olds regard happy and sad faces as distinct emotive categories.
  • By seven months, infants are able to use facial expressions to understand others' behavior. Seven-month-olds use facial cues to understand the motives of other people in ambiguous situations, as shown in a study where infants watched the experimenter's face longer if the experimenter took a toy from them and maintained a neutral expression than if the experimenter made a happy expression. Infants' responses to faces vary depending on factors including facial expression and eye gaze direction.
  • Emotions likely play a large role in our social interactions. The perception of a positive or negative emotion on a face affects the way that an individual perceives and processes that face. A face that is perceived to have a negative emotion is processed in a less holistic manner than a face displaying a positive emotion.
  • While seven-month-olds have been found to focus more on fearful faces, a study found that "happy expressions elicit enhanced sympathetic arousal in infants", both when facial expressions were presented subliminally and when infants were consciously aware of the stimulus, suggesting that conscious awareness of a stimulus is not required for an infant's reaction.

Ability to recognize familiar faces

It is unclear when humans develop the ability to recognize familiar faces. Studies have varying results, and may depend on multiple factors (such as continued exposure to particular faces during a certain time period).

  • Early perceptual experience is crucial to the development of adult visual perception, including the ability to identify familiar people and comprehend facial expressions. The capacity to discern between faces, like language, appears to have broad potential in early life that is whittled down to the kinds of faces experienced in early life.
  • The neural substrates of face perception in infants are similar to those of adults, but the limits of child-safe imaging technology currently obscure specific information from subcortical areas like the amygdala, which is active in adult facial perception. Infants also show activity near the fusiform gyrus.
  • Healthy adults likely process faces via a retinotectal (subcortical) pathway.
  • Infants can discern between macaque faces at six months of age, but, without continued exposure, cannot do so at nine months of age. If they were shown photographs of macaques during this three-month period, they were more likely to retain this ability.
  • Faces "convey a wealth of information that we use to guide our social interactions". They also found that the neurological mechanisms responsible for face recognition are present by age five. Children process faces is similar to that of adults, but adults process faces more efficiently. The may be because of advancements in memory and cognitive functioning.
  • Interest in the social world is increased by interaction with the physical environment. One study found that training three-month-old infants to reach for objects with Velcro-covered "sticky mitts" increased the attention they paid to faces compared to moving objects through their hands and to control groups.

Ability to 'mimic' faces

A commonly disputed topic is the age at which we can mimic facial expressions.

  • Infants as young as two days are capable of mimicking an adult, able to note details like mouth and eye shape as well as move their own muscles to produce similar patterns.
  • However, the idea that infants younger than two could mimic facial expressions was disputed by Susan S. Jones, who believed that infants are unaware of the emotional content encoded within facial expressions, and also found they are not able to imitate facial expressions until their second year of life. She also found that mimicry emerged at different ages.

Neuroanatomy

Key areas of the brain

A computer-enhanced fMRI scan of a person who has been asked to look at faces

Facial perception has neuroanatomical correlates in the brain.

The fusiform face area (BA37, Brodmann area 37) is located in the lateral fusiform gyrus. It is thought that this area is involved in holistic processing of faces and it is sensitive to the presence of facial parts as well as the configuration of these parts. The fusiform face area is also necessary for successful face detection and identification. This is supported by fMRI activation and studies on prosopagnosia, which involves lesions in the fusiform face area.

The occipital face area is located in the inferior occipital gyrus. Similar to the fusiform face area, this area is also active during successful face detection and identification, a finding that is supported by fMRI and MEG activation. The occipital face area is involved and necessary in the analysis of facial parts but not in the spacing or configuration of facial parts. This suggests that the occipital face area may be involved in a facial processing step that occurs prior to fusiform face area processing.

The superior temporal sulcus is involved in recognition of facial parts and is not sensitive to the configuration of these parts. It is also thought that this area is involved in gaze perception. The superior temporal sulcus has demonstrated increased activation when attending to gaze direction.

During face perception, major activations occur in the extrastriate areas bilaterally, particularly in the above three areas. Perceiving an inverted human face involves increased activity in the inferior temporal cortex, while perceiving a misaligned face involves increased activity in the occipital cortex. No comparable activation changes were found when perceiving a dog face, suggesting a process specific to human faces. Bilateral activation is generally shown in all of these specialized facial areas. However, some studies show increased activation in one side over the other: for instance, the right fusiform gyrus is more important for facial processing in complex situations.

BOLD fMRI mapping and the fusiform face area

The majority of fMRI studies use blood oxygen level dependent (BOLD) contrast to determine which areas of the brain are activated by various cognitive functions.

One study used BOLD fMRI mapping to identify activation in the brain when subjects viewed both cars and faces. They found that the occipital face area, the fusiform face area, the superior temporal sulcus, the amygdala, and the anterior/inferior cortex of the temporal lobe all played roles in contrasting faces from cars, with initial face perception beginning in the fusiform face area and occipital face areas. This entire region forms a network that acts to distinguish faces. The processing of faces in the brain is known as a "sum of parts" perception.

However, the individual parts of the face must be processed first in order to put all of the pieces together. In early processing, the occipital face area contributes to face perception by recognizing the eyes, nose, and mouth as individual pieces.

Researchers also used BOLD fMRI mapping to determine the patterns of activation in the brain when parts of the face were presented in combination and when they were presented singly. The occipital face area is activated by the visual perception of single features of the face, for example, the nose and mouth, and preferred combination of two-eyes over other combinations. This suggests that the occipital face area recognizes the parts of the face at the early stages of recognition.

On the contrary, the fusiform face area shows no preference for single features, because the fusiform face area is responsible for "holistic/configural" information, meaning that it puts all of the processed pieces of the face together in later processing. This is supported by a study which found that regardless of the orientation of a face, subjects were impacted by the configuration of the individual facial features. Subjects were also impacted by the coding of the relationships between those features. This shows that processing is done by a summation of the parts in later stages of recognition.

The fusiform gyrus and the amygdala

The fusiform gyri are preferentially responsive to faces, whereas the parahippocampal/lingual gyri are responsive to buildings.

While certain areas respond selectively to faces, facial processing involves many neural networks, including visual and emotional processing systems. While looking at faces displaying emotions (especially those with fear facial expressions) compared to neutral faces there is increased activity in the right fusiform gyrus. This increased activity also correlates with increased amygdala activity in the same situations. The emotional processing effects observed in the fusiform gyrus are decreased in patients with amygdala lesions. This demonstrates connections between the amygdala and facial processing areas.

Face familiarity also affects fusiform gyrus and amygdala activation. That multiple regions are activated by similar face components indicates that facial processing is a complex process. Increased brain activation in the precuneus and cuneus often occurs when differentiating two faces is easy (e.g., kin and familiar non-kin faces), suggesting a role for posterior medial substrates in the visual processing of faces with familiar features (faces averaged with that of a sibling).

The object form topology hypothesis posits a topological organization of neural substrates for object and facial processing. However, there is disagreement: the category-specific and process-map models could accommodate most other proposed models for the neural underpinnings of facial processing.

Most neuroanatomical substrates for facial processing are perfused by the middle cerebral artery. Therefore, facial processing has been studied using measurements of mean cerebral blood flow velocity in the middle cerebral arteries bilaterally. During facial recognition tasks, greater changes occur in the right middle cerebral artery than the left. Men are right-lateralized and women left-lateralized during facial processing tasks.

Just as memory and cognitive function separate the abilities of children and adults to recognize faces, the familiarity of a face may also play a role in the perception of faces. Recording event-related potentials in the brain to determine the timing of facial recognition showed that familiar faces are indicated by a stronger N250, a specific wavelength response that plays a role in the visual memory of faces. Similarly, all faces elicit the N170 response in the brain.

The brain conceptually needs only ~50 neurons to encode any human face, with facial features projected on individual axes (neurons) in a 50-dimensional "Face Space".
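
This linear "face space" picture maps naturally onto dimensionality reduction. As a hedged illustration (PCA on random stand-in data, not the method or data of the study above), the sketch below encodes each face as 50 axis coordinates and reconstructs it from them:

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in for a dataset of aligned grayscale face images, one flattened
# 64x64 image per row; real face images would be used in practice.
rng = np.random.default_rng(0)
faces = rng.random((200, 64 * 64))

# Learn 50 axes and project every face onto them ("one neuron per axis").
face_space = PCA(n_components=50).fit(faces)
codes = face_space.transform(faces)            # shape (200, 50)

# Each face is approximately recovered as a linear combination of the axes.
reconstructed = face_space.inverse_transform(codes)
print(codes.shape, reconstructed.shape)        # (200, 50) (200, 4096)
```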

Cognitive neuroscience

A 'greeble' (nonsense figure used in face recognition experiments)

Cognitive neuroscientists Isabel Gauthier and Michael Tarr are two of the major proponents of the view that face recognition involves expert discrimination of similar objects (See the Perceptual Expertise Network). Other scientists, in particular Nancy Kanwisher and her colleagues, argue that face recognition involves processes that are face-specific and that are not recruited by expert discriminations in other object classes (see the domain specificity).

Studies by Gauthier have shown that an area of the brain known as the fusiform gyrus (sometimes called the fusiform face area because it is active during face recognition) is also active when study participants are asked to discriminate between different types of birds and cars, and even when participants become expert at distinguishing computer generated nonsense shapes known as greebles. This suggests that the fusiform gyrus may have a general role in the recognition of similar visual objects.

The activity found by Gauthier when participants viewed non-face objects was not as strong as when participants were viewing faces; however, this could be because we have much more expertise for faces than for most other objects. Furthermore, not all findings of this research have been successfully replicated: for example, other research groups using different study designs have found that the fusiform gyrus is specific to faces and that other nearby regions deal with non-face objects.

However, these findings are difficult to interpret: failures to replicate are null effects and can occur for many different reasons. In contrast, each replication adds a great deal of weight to a particular argument. There are now multiple replications with greebles, with birds and cars, and two unpublished studies with chess experts.

Although expertise sometimes recruits the fusiform face area, a more common finding is that expertise leads to focal category-selectivity in the fusiform gyrus—a pattern similar in terms of antecedent factors and neural specificity to that seen for faces. As such, it remains an open question as to whether face recognition and expert-level object recognition recruit similar neural mechanisms across different subregions of the fusiform or whether the two domains literally share the same neural substrates. At least one study argues that the issue is nonsensical, as multiple measurements of the fusiform face area within an individual often overlap no more with each other than measurements of fusiform face area and expertise-predicated regions.

fMRI studies have asked whether expertise has any specific connection to the fusiform face area in particular, by testing for expertise effects in both the fusiform face area and a nearby but not face-selective region called LOC (Rhodes et al., JOCN 2004; Op de Beeck et al., JN 2006; Moore et al., JN 2006; Yue et al. VR 2006). In all studies, expertise effects are significantly stronger in the LOC than in the fusiform face area, and indeed expertise effects were only borderline significant in the fusiform face area in two of the studies, while the effects were robust and significant in the LOC in all studies.

Therefore, it is still not clear in exactly which situations the fusiform gyrus becomes active, although it is certain that face recognition relies heavily on this area and damage to it can lead to severe face recognition impairment.

Face advantage in memory recall

During face perception, neural networks in the brain link the perceived face to stored memories.

According to the Seminal Model of face perception, there are three stages of face processing:

  • recognition of the face
  • recall of memories and information linked with that face
  • name recall

There are exceptions to this order. For example, names are recalled faster than semantic information in cases of highly familiar stimuli. While the face is a powerful identifier, the voice also helps in recognition.

Research has tested whether faces or voices make it easier to identify individuals and to recall semantic memory and episodic memory. These experiments looked at all three stages of face processing. Using a between-group design, the experiments presented two groups with celebrity and familiar faces or voices and asked the participants to recall information about them. The participants were first asked if the stimulus was familiar. If they answered yes, they were asked for information (semantic memory) and memories (episodic memory) that fit the face or voice presented. These experiments demonstrated the phenomenon of the face advantage and how it persists through follow-up studies.

Recognition-performance issue

After the first experiments on the advantage of faces over voices in memory recall, errors and gaps were found in the methods used.

For one, there was not a clear face advantage for the recognition stage of face processing. Participants showed a familiarity-only response to voices more often than to faces. In other words, voices were recognized readily (about 60–70% of the time), but recalling biographical information from them was much harder. The results were analyzed as remember versus know judgements: many more remember (or familiarity) results occurred with voices, and more know (or memory recall) responses happened with faces. This phenomenon persists in experiments dealing with criminal line-ups in prisons: witnesses are more likely to say that a suspect's voice sounded familiar than his or her face, even though they cannot remember anything about the suspect. This discrepancy is due to a larger amount of guesswork and false alarms that occur with voices.

To give faces a similar ambiguity to that of voices, the face stimuli were blurred in the follow-up experiment. This experiment followed the same procedures as the first, presenting two groups with sets of stimuli made up of half celebrity faces and half unfamiliar faces. The only difference was that the face stimuli were blurred so that detailed features could not be seen. Participants were then asked to say if they recognized the person, if they could recall specific biographical information about them, and finally if they knew the person's name. The results were completely different from those of the original experiment, supporting the view that there were problems in the first experiment's methods. According to the results of the follow-up, the same amount of information and memory could be recalled through voices and faces, dismantling the face advantage. However, these results are flawed and premature because other methodological issues in the experiment still needed to be fixed.

Content of speech

The process of controlling the content of speech extracts has proven to be more difficult than the elimination of non-facial cues in photographs.

Thus the findings of experiments that did not control this factor led to misleading conclusions regarding voice recognition relative to face recognition. For example, one experiment found that 40% of the time participants could pair a celebrity's voice with his or her occupation just by guessing. In order to eliminate these errors, experimenters removed parts of the voice samples that could possibly give clues to the identity of the target, such as catchphrases. Even after controlling the voice samples as well as the face samples (using blurred faces), studies have shown that semantic information is more accessible to retrieve when individuals recognize faces than voices.

Another technique to control the content of the speech extracts is to present the faces and voices of personally familiar individuals, such as the participant's teachers or neighbors, instead of the faces and voices of celebrities. In this way, the same words can be used for the speech extracts: for example, the familiar targets are asked to read exactly the same scripted speech for their voice extracts. The results showed again that semantic information is easier to retrieve when individuals recognize faces than voices.

Frequency-of-exposure issue

Another factor that has to be controlled in order for the results to be reliable is the frequency of exposure.

If we take the example of celebrities, people are exposed to celebrities' faces more often than their voices because of the mass media. Through magazines, newspapers and the Internet, individuals are exposed to celebrities' faces without their voices on an everyday basis rather than to their voices without their faces. Thus, one could argue that the findings of all the experiments done so far were a result of the frequency of exposure to celebrities' faces rather than to their voices.

To overcome this problem, researchers decided to use personally familiar individuals as stimuli instead of celebrities. Personally familiar individuals, such as participants' teachers, are for the most part heard as well as seen. Studies that used this type of control also demonstrated the face advantage: students were able to retrieve semantic information more readily when recognizing their teachers' faces (both normal and blurred) than their voices.

However, researchers over the years have found an even more effective way to control not only the frequency of exposure but also the content of the speech extracts: the associative learning paradigm. Participants are asked to link semantic information as well as names with pre-experimentally unknown voices and faces. In one experiment that used this paradigm, a name and a profession were given together with, accordingly, a voice, a face, or both to three participant groups. The associations described above were repeated four times.

The next step was a cued recall task in which every stimulus that was learned in the previous phase was introduced and participants were asked to tell the profession and the name for every stimulus. Again, the results showed that semantic information can be more accessible to retrieve when individuals are recognizing faces than voices even when the frequency of exposure was controlled.

Extension to episodic memory and explanation for existence

Episodic memory is our ability to remember specific, previously experienced events.

In recognition of faces as it pertains to episodic memory, activation has been shown in the left lateral prefrontal cortex, parietal lobe, and left medial frontal/anterior cingulate cortex. It was also found that left lateralization during episodic memory retrieval in the parietal cortex correlated strongly with success in retrieval. This may be because the links between face recognition and episodic memory are stronger than those between voice and episodic memory. This hypothesis is also supported by the existence of specialized face recognition mechanisms thought to be located in the temporal lobes.

There is also evidence of the existence of two separate neural systems for face recognition: one for familiar faces and another for newly learned faces. One explanation for this link between face recognition and episodic memory is that since face recognition is a major part of human existence, the brain creates a link between the two in order to be better able to communicate with others.

Three-layer model of self-cognition developed by Motoaki Sugiura

Self-face perception

Though many animals have face-perception capabilities, the recognition of one's own face is a phenomenon observed to be unique to only a few species. There is a particular interest in the study of self-face perception because of its relation to the perceptual integration process.

One study found that the perception/recognition of one's own face was unaffected by changing contexts, while the perception/recognition of familiar and unfamiliar faces was adversely affected. Another study that focused on older adults found that they had self-face advantage in configural processing but not featural processing.

In 2014, Motoaki Sugiura developed a conceptual model for self-recognition by breaking it into three categories: the physical, interpersonal, and social selves.

Mirror test

Gordon Gallup Jr. developed a technique in 1970 as an attempt to measure self-awareness. This technique is commonly referred to as the mirror test.

The method involves placing a marker on the subject in a place they cannot see without a mirror (e.g. one's forehead). The marker must be placed inconspicuously enough that the subject does not become aware they have been marked. Once the marker is placed, the subject is given access to a mirror. If the subject investigates the mark (e.g. tries to wipe it off), this indicates that the subject understands they are looking at a reflection of themselves, as opposed to perceiving the mirror as an extension of their environment (e.g., thinking the reflection is another person or animal behind a window).

Though this method is regarded as one of the more effective techniques for measuring self-awareness, it is certainly not perfect. Many factors could affect the outcome. For example, if an animal is biologically blind, like a mole, we cannot assume that it inherently lacks self-awareness. Visual self-recognition is possibly only one of many ways for a living being to be considered cognitively "self-aware".

Gender

A human male and female

Studies using electrophysiological techniques have demonstrated gender-related differences during a face recognition memory task and a facial affect identification task.

In facial perception there was no association with estimated intelligence, suggesting that face recognition performance in women is unrelated to several basic cognitive processes. Gender-related differences may suggest a role for sex hormones. In females, there may be variability in psychological functions related to differences in hormonal levels during different phases of the menstrual cycle.

Data obtained both from healthy individuals and from pathology support asymmetric face processing.

The left inferior frontal cortex and the bilateral occipitotemporal junction may respond equally to all face conditions. Some contend that both the left inferior frontal cortex and the occipitotemporal junction are implicated in facial memory. The right inferior temporal/fusiform gyrus responds selectively to faces but not to non-faces. The right temporal pole is activated during the discrimination of familiar faces and scenes from unfamiliar ones. Right asymmetry in the mid-temporal lobe for faces has also been shown using 133-Xenon measured cerebral blood flow. Other investigators have observed right lateralization for facial recognition in previous electrophysiological and imaging studies.

Asymmetric facial perception implies implementing different hemispheric strategies. The right hemisphere would employ a holistic strategy, and the left an analytic strategy.

A 2007 study, using functional transcranial Doppler spectroscopy, demonstrated that men were right-lateralized for object and facial perception, while women were left-lateralized for facial tasks and showed a right tendency or no lateralization for object perception. This could be taken as evidence for topological organization of these cortical areas in men, suggesting that the region implicated in object perception extends into a much greater area involved in facial perception.

This agrees with the object form topology hypothesis proposed by Ishai. However, the relatedness of object and facial perception was process-based, and appears to be associated with their common holistic processing strategy in the right hemisphere. Moreover, when the same men were presented with facial paradigm requiring analytic processing, the left hemisphere was activated. This agrees with the suggestion made by Gauthier in 2000, that the extrastriate cortex contains areas that are best suited for different computations, and described as the process-map model.

Therefore, the proposed models are not mutually exclusive: facial processing imposes no new constraints on the brain besides those used for other stimuli.

Each stimulus may have been mapped by category into face or non-face, and by process into holistic or analytic. Therefore, a unified category-specific process-mapping system was implemented for either right or left cognitive styles. For facial perception, men likely use a category-specific process-mapping system for right cognitive style, and women use the same for the left.

Ethnicity

Sample of real and edited white and Asian faces, with eyes or mouth enlarged or reduced by 10% or 20% relative to the original, used in a study of the cross-race effect

Differences in own- versus other-race face recognition and perceptual discrimination were first researched in 1914. Humans tend to perceive people of races other than their own as all looking alike:

Other things being equal, individuals of a given race are distinguishable from each other in proportion to our familiarity, to our contact with the race as a whole. Thus, to the uninitiated American all Asiatics look alike, while to the Asiatics, all White men look alike.

This phenomenon, known as the cross-race effect, is also called the own-race effect, other-race effect, own race bias, or interracial face-recognition deficit.

It is difficult to measure the true influence of the cross-race effect.

A 1990 study found that the other-race effect is larger among White subjects than among African-American subjects, whereas a 1979 study found the opposite. D. Stephen Lindsay and colleagues note that results in these studies could be due to an intrinsic difficulty in recognizing the faces presented, an actual difference in the size of the cross-race effect between the two test groups, or some combination of these two factors. Shepherd reviewed studies that found better performance on African-American faces, studies that found better performance on White faces, and studies where no difference was found.

Overall, Shepherd reported a reliable positive correlation between the size of the effect and the amount of interaction subjects had with members of the other race. This correlation reflects the fact that African-American subjects, who performed equally well on faces of both races in Shepherd's study, almost always responded with the highest possible self-rating of amount of interaction with white people, whereas white counterparts displayed a larger other-race effect and reported less other-race interaction. This difference in rating was statistically reliable.

The cross-race effect seems to appear in humans at around six months of age.

Challenging the cross-race effect

Cross-race effects can be changed through interaction with people of other races. Other-race experience is a major influence on the cross-race effect. A series of studies revealed that participants with greater other-race experience were consistently more accurate at discriminating other-race faces than participants with less experience. Many current models of the effect assume that holistic face processing mechanisms are more fully engaged when viewing own-race faces.

The own-race effect appears related to increased ability to extract information about the spatial relationships between different facial features.

A deficit occurs when viewing people of another race because visual information specifying race takes up mental attention at the expense of individuating information. Further research using perceptual tasks could shed light on the specific cognitive processes involved in the other-race effect. The own-race effect likely extends beyond racial membership into in-group favoritism. Categorizing somebody by the university they attend yields similar results to the own-race effect.

Similarly, men tend to recognize fewer female faces than women do, whereas there are no sex differences for male faces.

If made aware of the own-race effect prior to the experiment, test subjects show significantly less, if any, of the own-race effect.

Autism

A child with autism

Autism spectrum disorder is a comprehensive neural developmental disorder that produces social, communicative, and perceptual deficits. Individuals with autism exhibit difficulties with facial identity recognition and recognizing emotional expressions. These deficits are suspected to spring from abnormalities in early and late stages of facial processing.

Speed and methods

People with autism process face and non-face stimuli with the same speed.

In neurotypical individuals, a preference for face processing results in faster processing speed compared to non-face stimuli. These individuals use holistic processing when perceiving faces. In contrast, individuals with autism employ part-based, or bottom-up, processing, focusing on individual features rather than the face as a whole. People with autism direct their gaze primarily to the lower half of the face, specifically the mouth, varying from the eye-trained gaze of neurotypical people. Individuals with autism also do not employ facial prototypes, which are templates stored in memory that allow for easy retrieval.

Additionally, individuals with autism display difficulty with recognition memory, specifically memory that aids in identifying faces. The memory deficit is selective for faces and does not extend to other visual input. These face-memory deficits are possibly products of interference between face-processing regions.

Associated difficulties

Autism often manifests in weakened social ability, due to decreased eye contact, joint attention, interpretation of emotional expression, and communicative skills.

These deficiencies can be seen in infants as young as nine months. Some experts use the term 'face avoidance' to describe how infants who are later diagnosed with autism preferentially attend to non-face objects. Furthermore, some have proposed that the difficulty children with autism have in grasping the emotional content of faces results from a general inattentiveness to facial expression, not from an incapacity to process emotional information in general.

These constraints are thought to cause impaired social engagement. Furthermore, research suggests a link between decreased face-processing abilities in individuals with autism and later deficits in theory of mind. While typically developing individuals are able to relate others' emotional expressions to their actions, individuals with autism do not demonstrate this skill to the same extent.

This causation, however, resembles a chicken-or-the-egg dispute. Some theorize that social impairment leads to perceptual problems; in this perspective, a biological lack of social interest inhibits facial recognition due to under-use.

Neurology

Many of the obstacles that individuals with autism face in terms of facial processing may be derived from abnormalities in the fusiform face area and amygdala.

Typically, the fusiform face area in individuals with autism has reduced volume. This volume reduction has been attributed to deviant amygdala activity that does not flag faces as emotionally salient, and thus decreases activation levels.

Studies are not conclusive as to which brain areas people with autism use instead. One study found that, when looking at faces, people with autism exhibit activity in brain regions normally active when neurotypical individuals perceive objects. Another found that during facial perception, people with autism use different neural systems, with each individual relying on unique neural circuitry.

Compensation mechanisms

As autistic individuals age, scores on behavioral tests assessing the ability to perform face-emotion recognition increase to levels similar to those of controls.

The recognition mechanisms of these individuals are still atypical, though often effective. In terms of face identity-recognition, compensation can include a more pattern-based strategy, first seen in face inversion tasks. Alternatively, older individuals compensate by mimicking others' facial expressions and relying on motor feedback from their facial muscles for face emotion-recognition.

Schizophrenia

Schizophrenia, by William A. Ursprung

Schizophrenia is known to affect attention, perception, memory, learning, processing, reasoning, and problem solving.

Schizophrenia has been linked to impaired face and emotion perception. People with schizophrenia demonstrate worse accuracy and slower response time in face perception tasks in which they are asked to match faces, remember faces, and recognize which emotions are present in a face. People with schizophrenia have more difficulty matching upright faces than they do with inverted faces. A reduction in configural processing, using the distance between features of an item for recognition or identification (e.g. features on a face such as eyes or nose), has also been linked to schizophrenia.

Schizophrenia patients are able to easily identify a "happy" affect but struggle to identify faces as "sad" or "fearful". Impairments in face and emotion perception are linked to impairments in social skills, due to the individual's inability to distinguish facial emotions. People with schizophrenia tend to demonstrate a reduced N170 response, atypical face scanning patterns, and a configural processing dysfunction. The severity of schizophrenia symptoms has been found to correlate with the severity of impairment in face perception.

Individuals with diagnosed schizophrenia and antisocial personality disorder have been found to have even more impairment in face and emotion perception than individuals with just schizophrenia. These individuals struggle to identify anger, surprise, and disgust. There is a link between aggression and emotion perception difficulties for people with this dual diagnosis.

Data from magnetic resonance imaging and functional magnetic resonance imaging has shown that a smaller volume of the fusiform gyrus is linked to greater impairments in face perception.

There is a positive correlation between self-face recognition and other-face recognition difficulties in individuals with schizophrenia. The degree of schizotypy has also been shown to correlate with self-face difficulties, unusual perception difficulties, and other face recognition difficulties. Schizophrenia patients report more feelings of strangeness when looking in a mirror than do normal controls. Hallucinations, somatic concerns, and depression have all been found to be associated with self-face perception difficulties.

Other animals

Neurobiologist Jenny Morton and her team have been able to teach sheep to choose a familiar face over an unfamiliar one when presented with two photographs, showing that sheep can recognize human faces. Archerfish (distant relatives of humans) were able to differentiate between forty-four different human faces, which supports the theory that neither a neocortex nor a history of discerning human faces is needed to do so. Pigeons were found to use the same parts of the brain as humans to distinguish between happy and neutral faces or male and female faces.

Artificial intelligence

Much effort has gone into developing software that can recognize human faces.

This work has occurred in a branch of artificial intelligence known as computer vision, which uses the psychology of face perception to inform software design. Recent breakthroughs use noninvasive functional transcranial Doppler spectroscopy to locate specific responses to facial stimuli. The new system uses input responses, called cortical long-term potentiation, to trigger a target face search in a computerized face database. Such a system provides a brain-machine interface for facial recognition, referred to as cognitive biometrics.
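
As a minimal, hedged sketch of the computer-vision side (standard OpenCV face detection, not the Doppler-based cognitive-biometrics system just described; the input filename is hypothetical):

```python
import cv2

# Load OpenCV's bundled Haar-cascade frontal face detector.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("portrait.jpg")  # hypothetical input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns one (x, y, w, h) rectangle per detected face.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("portrait_faces.jpg", image)
```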

Another application is estimating age from images of faces. Compared with other cognition problems, age estimation from facial images is challenging, mainly because the aging process is influenced by many external factors such as physical condition and lifestyle. The aging process is also slow, making sufficient data difficult to collect.
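
Age estimation is usually framed as regression from a face image to a number of years. A minimal sketch, assuming a pretrained torchvision backbone and a hypothetical labeled batch; real systems train on large face-age datasets:

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained image backbone with its classifier replaced by a single
# regression output (predicted age in years).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)

criterion = nn.L1Loss()  # mean absolute error in years is a common choice
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, ages: torch.Tensor) -> float:
    """One optimization step; `images` is a (N, 3, 224, 224) batch."""
    optimizer.zero_grad()
    predicted = backbone(images).squeeze(1)
    loss = criterion(predicted, ages)
    loss.backward()
    optimizer.step()
    return loss.item()
```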

Nemrodov

In 2016, Dan Nemrodov conducted multivariate analyses of EEG signals that might be involved in identity-related information, applying pattern classification to event-related potential signals both in time and in space. The main targets of the study were:

  1. evaluating whether previously known event-related potential components such as N170 and others are involved in individual face recognition or not
  2. locating temporal landmarks of individual level recognition from event-related potential signals
  3. figuring out the spatial profile of individual face recognition

For the experiment, conventional event-related potential analyses and pattern classification of event-related potential signals were conducted on preprocessed EEG signals.

This and a further study showed the existence of a spatio-temporal profile of the individual face recognition process, and that reconstruction of individual face images was possible by exploiting this profile and the informative features that contribute to the encoding of identity-related information.
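
Time-resolved pattern classification of this kind is commonly implemented by training a classifier at each time point of the epoched EEG. The sketch below is an illustrative reconstruction on synthetic data (all shapes and labels are hypothetical), not Nemrodov's actual pipeline:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Synthetic stand-in: 300 trials x 64 electrodes x 200 time samples,
# with one face-identity label per trial.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((300, 64, 200))
identities = rng.integers(0, 2, size=300)

# Classify identity separately at each time point; above-chance accuracy
# at a given latency marks a temporal landmark of identity information.
accuracy_over_time = []
for t in range(epochs.shape[2]):
    clf = make_pipeline(StandardScaler(), LinearSVC())
    scores = cross_val_score(clf, epochs[:, :, t], identities, cv=5)
    accuracy_over_time.append(scores.mean())
```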

Genetic basis

While many cognitive abilities, such as general intelligence, have a clear genetic basis, evidence for the genetic basis of facial recognition is fairly recent. Current evidence suggests that facial recognition abilities are highly linked to genetic, rather than environmental, bases.

Early research focused on genetic disorders which impair facial recognition abilities, such as Turner syndrome, which results in impaired amygdala functioning. A 2003 study found significantly poorer facial recognition abilities in individuals with Turner syndrome, suggesting that the amygdala impacts face perception.

Evidence for a genetic basis in the general population, however, comes from twin studies in which the facial recognition scores on the Cambridge Face Memory test were twice as similar for monozygotic twins in comparison to dizygotic twins. This finding was supported by studies which found a similar difference in facial recognition scores and those which determined the heritability of facial recognition to be approximately 61%.
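
Twin designs of this kind often estimate heritability with Falconer's formula, which doubles the gap between the monozygotic and dizygotic twin correlations. The correlation values below are purely illustrative, chosen only to show how an estimate near 61% could arise:

$$ h^2 = 2\,(r_{MZ} - r_{DZ}), \qquad \text{e.g. } 2\,(0.70 - 0.395) \approx 0.61 $$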

There was no significant relationship between facial recognition scores and other cognitive abilities, most notably general object recognition. This suggests that facial recognition abilities are heritable, and have a genetic basis independent from other cognitive abilities. Research suggests that more extreme examples of facial recognition abilities, specifically hereditary prosopagnosics, are highly genetically correlated.

For hereditary prosopagnosics, an autosomal dominant model of inheritance has been proposed. Research also correlated the probability of hereditary prosopagnosia with the single nucleotide polymorphisms along the oxytocin receptor gene (OXTR), suggesting that these alleles serve a critical role in normal face perception. Mutation from the wild type allele at these loci has also been found to result in other disorders in which social and facial recognition deficits are common, such as autism spectrum disorder, which may imply that the genetic bases for general facial recognition are complex and polygenic.

This relationship between OXTR and facial recognition is also supported by studies of individuals who do not have hereditary prosopagnosia.

Social perceptions of faces

People make rapid judgements about others based on facial appearance. Some judgements are formed very quickly and accurately, with adults correctly categorising the sex of adult faces with only a 75ms exposure and with near 100% accuracy. The accuracy of some other judgements are less easily confirmed, though there is evidence that perceptions of health made from faces are at least partly accurate, with health judgements reflecting fruit and vegetable intake, body fat and BMI. People also form judgements about others' personalities from their faces, and there is evidence of at least partial accuracy in this domain too.

Valence-dominance model

The valence-dominance model of face perception is a widely cited model suggesting that the social judgements made of faces can be summarised along two dimensions: valence (positive-negative) and dominance (dominant-submissive). A recent large-scale multi-country replication project largely supported this model across different world regions, though it found that a potential third dimension may also be important in some regions. Other research suggests that the valence-dominance model also applies to social perceptions of bodies.

Iron oxide nanoparticle

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Iron_oxide_nanoparticle

Iron oxide nanoparticles are iron oxide particles with diameters between about 1 and 100 nanometers. The two main forms are composed of magnetite (Fe3O4) and its oxidized form maghemite (γ-Fe2O3). They have attracted extensive interest due to their superparamagnetic properties and their potential applications in many fields (although cobalt and nickel are also highly magnetic materials, they are toxic and easily oxidized) including molecular imaging.

Applications of iron oxide nanoparticles include terabit magnetic storage devices, catalysis, sensors, superparamagnetic relaxometry, high-sensitivity biomolecular magnetic resonance imaging, magnetic particle imaging, magnetic fluid hyperthermia, separation of biomolecules, and targeted drug and gene delivery for medical diagnosis and therapeutics. These applications require coating of the nanoparticles by agents such as long-chain fatty acids, alkyl-substituted amines, and diols. They have also been used in formulations for iron supplementation.

Structure

Magnetite has an inverse spinel structure with oxygen forming a face-centered cubic crystal system. In magnetite, all tetrahedral sites are occupied by Fe3+, and octahedral sites are occupied by both Fe3+ and Fe2+. Maghemite differs from magnetite in that all or most of the iron is in the trivalent state (Fe3+) and by the presence of cation vacancies in the octahedral sites. Maghemite has a cubic unit cell in which each cell contains 32 oxygen ions, 21⅓ Fe3+ ions, and 2⅔ vacancies. The cations are distributed randomly over the 8 tetrahedral and 16 octahedral sites.

Magnetic properties

Due to the four unpaired electrons in its 3d shell, an iron atom has a strong magnetic moment. Fe2+ ions also have four unpaired electrons in the 3d shell, and Fe3+ ions have five. Therefore, when crystals are formed from iron atoms or from Fe2+ and Fe3+ ions, they can be in ferromagnetic, antiferromagnetic, or ferrimagnetic states.

In the paramagnetic state, the individual atomic magnetic moments are randomly oriented, and the substance has a zero net magnetic moment if there is no magnetic field. These materials have a relative magnetic permeability greater than one and are attracted to magnetic fields. The magnetic moment drops to zero when the applied field is removed. But in a ferromagnetic material, all the atomic moments are aligned even without an external field. A ferrimagnetic material is similar to a ferromagnet but has two different types of atoms with opposing magnetic moments. The material has a magnetic moment because the opposing moments have different strengths. If they have the same magnitude, the crystal is antiferromagnetic and possesses no net magnetic moment.

When an external magnetic field is applied to a ferromagnetic material, the magnetization (M) increases with the strength of the magnetic field (H) until it approaches saturation. Over some range of fields the magnetization has hysteresis because there is more than one stable magnetic state for each field. Therefore, a remanent magnetization will be present even after removing the external magnetic field.

A single-domain magnetic material (e.g., magnetic nanoparticles) that has no hysteresis loop is said to be superparamagnetic. The ordering of magnetic moments in ferromagnetic, antiferromagnetic, and ferrimagnetic materials decreases with increasing temperature. Ferromagnetic and ferrimagnetic materials become disordered and lose their magnetization beyond the Curie temperature, and antiferromagnetic materials lose their magnetization beyond the Néel temperature. Magnetite is ferrimagnetic at room temperature and has a Curie temperature of 850 K. Maghemite is ferrimagnetic at room temperature, is unstable at high temperatures, and loses its susceptibility with time (its Curie temperature is hard to determine). Both magnetite and maghemite nanoparticles are superparamagnetic at room temperature. This superparamagnetic behavior can be attributed to their size: when the size gets small enough (<10 nm), thermal fluctuations can change the direction of magnetization of the entire crystal. A material with many such crystals behaves like a paramagnet, except that the moments of entire crystals fluctuate instead of those of individual atoms.
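
The size dependence of superparamagnetism can be made concrete with the Néel-Arrhenius relation τ = τ0·exp(KV/kBT) for the time between thermally driven flips of a single-domain particle's moment. The relation itself is standard, but it is not given in the article, and the anisotropy constant and attempt time in the sketch below are assumed, bulk-like values; real nanoparticle anisotropies vary with size and coating.

```python
import math

# Néel-Arrhenius relaxation time of a single-domain particle:
# tau = tau0 * exp(K * V / (kB * T)).  K and tau0 are assumed,
# bulk-like values for magnetite, not values from the article.
kB = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0           # room temperature, K
K = 1.1e4           # effective anisotropy constant, J/m^3 (assumed)
tau0 = 1e-9         # attempt time, s (assumed)

for d_nm in (5, 10, 20, 30):
    V = math.pi / 6 * (d_nm * 1e-9) ** 3    # sphere volume, m^3
    tau = tau0 * math.exp(K * V / (kB * T))
    print(f"d = {d_nm:2d} nm -> relaxation time ~ {tau:.2g} s")
```

Particles whose relaxation time is far below the measurement time behave superparamagnetically on laboratory timescales, while larger particles appear blocked; the exponential volume dependence is why the transition is so sharp.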

Furthermore, the unique superparamagnetic behavior of iron oxide nanoparticles allows them to be manipulated magnetically from a distance. In later sections, such external manipulation is discussed with regard to biomedical applications. Forces are required to manipulate the path of iron oxide particles. A spatially uniform magnetic field can exert a torque on a magnetic particle but cannot cause particle translation; therefore, the magnetic field must have a gradient to cause translational motion. The force on a point-like magnetic dipole moment m due to a magnetic field B is given by:

F = ∇(m·B)
In biological applications, iron oxide nanoparticles translate through some kind of fluid, possibly a bodily fluid, in which case the aforementioned equation can be written in terms of the particle volume V and the difference in magnetic susceptibility between the particle and the surrounding fluid, Δχ:

Fm = VΔχ∇(B²/2μ0)

Based on these equations, the force is greatest in the direction of the largest positive slope of the energy density scalar field B²/2μ0.

Another important consideration is the force acting against the magnetic force. As iron oxide nanoparticles translate toward the magnetic field source, they experience a Stokes drag force in the opposite direction:

Fd = −6πηRv

In this equation, η is the fluid viscosity, R is the hydrodynamic radius of the particle, and v is the velocity of the particle.
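
Setting the magnetic force against the Stokes drag gives a rough terminal velocity for a particle pulled through fluid by a field gradient. The sketch below works through that balance; the radius, susceptibility difference, field, and gradient values are illustrative assumptions, not values from the article.

```python
import math

# Terminal velocity from balancing the magnetic force
# Fm = V * dchi * grad(B^2 / (2*mu0)) against Stokes drag Fd = 6*pi*eta*R*v.
# All material and field numbers here are illustrative assumptions.
mu0 = 4 * math.pi * 1e-7     # vacuum permeability, T*m/A
R = 10e-9                    # hydrodynamic radius, m (assumed)
V = 4 / 3 * math.pi * R**3   # particle volume, m^3
dchi = 1.0                   # particle-fluid susceptibility difference (assumed)
eta = 1e-3                   # viscosity of water, Pa*s
B = 0.5                      # field magnitude, T (assumed)
dBdx = 10.0                  # field gradient, T/m (assumed)

Fm = V * dchi * B * dBdx / mu0       # d/dx(B^2 / (2*mu0)) = B * dB/dx / mu0
v = Fm / (6 * math.pi * eta * R)     # velocity where drag balances Fm
print(f"magnetic force ~ {Fm:.2g} N, terminal velocity ~ {v:.2g} m/s")
```

Even with a strong gradient, the drift of a single 10 nm particle is very slow, which illustrates why a field gradient, and a substantial one, is needed for translational motion.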

Synthesis

The preparation method has a large effect on shape, size distribution, and surface chemistry of the particles. It also determines to a great extent the distribution and type of structural defects or impurities in the particles. All these factors affect magnetic behavior. Recently, many attempts have been made to develop processes and techniques that would yield "monodisperse colloids" consisting of nanoparticles uniform in size and shape.

Coprecipitation

By far the most employed method is coprecipitation. This method can be further divided into two types. In the first, ferrous hydroxide suspensions are partially oxidized with different oxidizing agents. For example, spherical magnetite particles of narrow size distribution with mean diameters between 30 and 100 nm can be obtained from an Fe(II) salt, a base, and a mild oxidant (nitrate ions). The second type consists of ageing stoichiometric mixtures of ferrous and ferric hydroxides in aqueous media, yielding spherical magnetite particles homogeneous in size. In this second type, the following chemical reaction occurs:

2 Fe3+ + Fe2+ + 8 OH− → Fe3O4↓ + 4 H2O

Optimum conditions for this reaction are a pH between 8 and 14, an Fe3+/Fe2+ ratio of 2:1, and a non-oxidizing environment. Being highly susceptible to oxidation, magnetite (Fe3O4) is transformed to maghemite (γ-Fe2O3) in the presence of oxygen:

4 Fe3O4 + O2 → 6 γ-Fe2O3

The size and shape of the nanoparticles can be controlled by adjusting pH, ionic strength, temperature, nature of the salts (perchlorates, chlorides, sulfates, and nitrates), or the Fe(II)/Fe(III) concentration ratio.
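
As a worked example of the stoichiometry above, the following sketch computes reagent quantities for a target magnetite yield at the 2:1 Fe3+/Fe2+ ratio; the one-gram target is an arbitrary assumption.

```python
# Reagent amounts for coprecipitation at the stoichiometric ratio
# 2 Fe3+ + Fe2+ + 8 OH- -> Fe3O4 + 4 H2O.  The 1 g target is arbitrary.
M_FE3O4 = 231.53                 # molar mass of Fe3O4, g/mol

target_g = 1.0                   # desired magnetite mass, g (assumed)
n_product = target_g / M_FE3O4   # mol of Fe3O4
n_fe3 = 2 * n_product            # mol of Fe(III) salt
n_fe2 = 1 * n_product            # mol of Fe(II) salt
n_oh = 8 * n_product             # mol of base
print(f"Fe3+: {n_fe3*1e3:.2f} mmol, Fe2+: {n_fe2*1e3:.2f} mmol, "
      f"OH-: {n_oh*1e3:.2f} mmol")
```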

Microemulsions

A microemulsion is a stable isotropic dispersion of two immiscible liquids, consisting of nanosized domains of one or both liquids in the other, stabilized by an interfacial film of surface-active molecules. Microemulsions may be categorized as oil-in-water (o/w) or water-in-oil (w/o), depending on the dispersed and continuous phases. Water-in-oil microemulsions are more popular for synthesizing many kinds of nanoparticles. The water and oil are mixed with an amphiphilic surfactant, which lowers the surface tension between water and oil and makes the solution transparent. The water nanodroplets act as nanoreactors for synthesizing nanoparticles. The shape of the water pool is spherical, and the size of the nanoparticles depends to a great extent on the size of the water pool. Thus, the size of the spherical nanoparticles can be tailored and tuned by changing the size of the water pool.

High-temperature decomposition of organic precursors

The decomposition of iron precursors in the presence of hot organic surfactants yields samples with good size control, narrow size distribution (5-12 nm), and good crystallinity, and the nanoparticles are easily dispersed. For biomedical applications like magnetic resonance imaging, magnetic cell separation, or magnetorelaxometry, where particle size plays a crucial role, magnetic nanoparticles produced by this method are very useful. Viable iron precursors include Fe(Cup)3, Fe(CO)5, or Fe(acac)3 in organic solvents with surfactant molecules. A combination of xylenes and sodium dodecylbenzenesulfonate as a surfactant is used to create nanoreactors in which well-dispersed iron(II) and iron(III) salts can react.

Biomedical applications

Magnetite and maghemite are preferred in biomedicine because they are biocompatible and potentially non-toxic to humans. Iron oxide is easily degradable and therefore useful for in vivo applications. Results from exposure of a human mesothelium cell line and a murine fibroblast cell line to seven industrially important nanoparticles showed a nanoparticle-specific cytotoxic mechanism for uncoated iron oxide. Solubility was found to strongly influence the cytotoxic response. Labelling cells (e.g., stem cells or dendritic cells) with iron oxide nanoparticles is an interesting new tool for monitoring such labelled cells in real time by magnetic resonance tomography. Some forms of iron oxide nanoparticles have been found to be toxic and to cause transcriptional reprogramming.

Magneto-mechano-chemical synthesis is accompanied by splitting of electron energy levels (SEELs) and electron transfer in a magnetic field from Fe3O4 nanoparticles to doxorubicin. The concentration of paramagnetic centers (free radicals) is increased in the magneto-sensitive complex (MNC). The local combined action of constant magnetic and electromagnetic fields and MNC in the tumor initiates SEELs and free-radical generation, leading to oxidative stress and deregulation of electron and proton transport in the mitochondrion. Magnetic nanotherapy inhibited the synthesis of ATP in the mitochondria of tumor cells and induced tumor cell death more effectively than conventional doxorubicin.

Iron oxide nanoparticles are used in cancer magnetic nanotherapy, which is based on magneto-spin effects in free-radical reactions and on the ability of semiconductor materials to generate oxygen radicals and, furthermore, to control oxidative stress in biological media under inhomogeneous electromagnetic radiation. Magnetic nanotherapy uses an external electromagnetic field to remotely control reactive oxygen species (ROS)- and reactive nitrogen species (RNS)-mediated local toxicity in the tumor during chemotherapy with an antitumor magnetic complex, with fewer side effects in normal tissues. Magnetic complexes with magnetic memory, consisting of iron oxide nanoparticles loaded with an antitumor drug, have additional advantages over conventional antitumor drugs due to their ability to be remotely controlled while being targeted with a constant magnetic field, and their antitumor activity can be further strengthened by moderate inductive hyperthermia (below 40 °C). The combined influence of inhomogeneous constant magnetic and electromagnetic fields during nanotherapy initiates splitting of electron energy levels in the magnetic complex and transfer of unpaired electrons from the iron oxide nanoparticles to the anticancer drug and tumor cells. In particular, the anthracycline antitumor antibiotic doxorubicin, whose native state is diamagnetic, acquires the magnetic properties of a paramagnetic substance. Electromagnetic radiation at the hyperfine splitting frequency can increase the time that radical pairs spend in the triplet state and hence the probability of dissociation, and so the concentration of free radicals. The reactivity of magnetic particles depends on their spin state. Experimental data showed a correlation between the frequency of the electromagnetic radiation and the magnetic properties and number of paramagnetic centres of the complex. It is thus possible to control the kinetics of free-radical reactions with external magnetic fields and to modulate the level of oxidative stress (local toxicity) in a malignant tumor. Cancer cells are particularly vulnerable to oxidative assault, and induction of high levels of oxidative stress locally in tumor tissue has the potential to destroy or arrest the growth of cancer cells, so it can be considered a therapeutic strategy against cancer. Multifunctional magnetic complexes with magnetic memory can combine cancer magnetic nanotherapy, tumor targeting, and medical imaging functionalities in a theranostic approach to personalized cancer medicine.

Yet the use of inhomogeneous stationary magnetic fields to target iron oxide magnetic nanoparticles can result in enhanced tumor growth. Magnetic force transmitted through the nanoparticles to the tumor under an inhomogeneous stationary magnetic field acts as a mechanical stimulus, converting iron-induced generation of reactive oxygen species into the modulation of biochemical signals.

Iron oxide nanoparticles may also be used in magnetic hyperthermia as a cancer treatment. In this method, a ferrofluid containing iron oxide is injected into the tumor and then heated by an alternating high-frequency magnetic field. The resulting temperature distribution may help to destroy cancerous cells inside the tumor.
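
For intuition about the heating step, a crude adiabatic estimate is ΔT ≈ SAR·φ·t/c, where SAR is the particles' specific absorption rate, φ their mass loading in the tissue, and c the tissue's specific heat. This estimate is not from the article, ignores heat loss to surrounding tissue, and uses assumed values throughout.

```python
# Adiabatic upper-bound estimate of the temperature rise in magnetic fluid
# hyperthermia: dT = SAR * phi * t / c.  Ignores conduction and perfusion,
# so it overestimates the real rise.  All values are assumptions.
SAR = 100.0   # specific absorption rate of the particles, W/g (assumed)
phi = 5e-4    # particle loading, g particle per g tissue (assumed)
c = 4.18      # specific heat of water-like tissue, J/(g*K)
t = 600.0     # exposure time, s (10 minutes)

dT = SAR * phi * t / c
print(f"adiabatic temperature rise ~ {dT:.1f} K")
```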

Superparamagnetic iron oxide (SPIO) can also be used as a tracer in sentinel node biopsy instead of a radioisotope.

Visual processing

From Wikipedia, the free encyclopedia

Visual processing is a term that refers to the brain's ability to use and interpret visual information from the world around us. Converting light energy into a meaningful image is a complex process facilitated by numerous brain structures and higher-level cognitive processes. On an anatomical level, light energy first enters the eye through the cornea, where the light is bent. After passing through the cornea, light passes through the pupil and then the lens of the eye, where it is bent to a greater degree and focused upon the retina. The retina is where a group of light-sensing cells, called photoreceptors, are located. There are two types of photoreceptors: rods and cones. Rods are sensitive to dim light, and cones are better able to transduce bright light. Photoreceptors connect to bipolar cells, which induce action potentials in retinal ganglion cells. These retinal ganglion cells form a bundle at the optic disc, which is a part of the optic nerve. The optic nerves from the two eyes meet at the optic chiasm, where nerve fibers from each nasal retina cross, so that the right half of each eye's visual field is represented in the left hemisphere and the left half in the right hemisphere. The optic tract then diverges into two visual pathways, the geniculostriate pathway and the tectopulvinar pathway, which send visual information to the visual cortex of the occipital lobe for higher-level processing (Whishaw and Kolb, 2015).

Top-down and bottom-up representations

The visual system is organized hierarchically, with anatomical areas that have specialized functions in visual processing. Low-level visual processing is concerned with determining different types of contrast among images projected onto the retina, whereas high-level visual processing refers to the cognitive processes that integrate information from a variety of sources into the visual information represented in one's consciousness. Object processing, including tasks such as object recognition and location, is an example of higher-level visual processing. High-level visual processing depends on both top-down and bottom-up processes. Bottom-up processing refers to the visual system's ability to use incoming visual information flowing in a unidirectional path from the retina to higher cortical areas. Top-down processing refers to the use of prior knowledge and context to process visual information and change the information conveyed by neurons, altering the way they are tuned to a stimulus. All areas of the visual pathway except the retina can be influenced by top-down processing. The traditional view is that visual processing follows a feedforward system, in which signals travel one way from the retina to higher cortical areas; however, there is increasing evidence that visual pathways operate bidirectionally, with both feedforward and feedback mechanisms transmitting information to and from lower and higher cortical areas. Various studies have demonstrated that visual processing relies on both feedforward and feedback systems (Jensen et al., 2015; Layher et al., 2014; Lee, 2002). Studies that recorded from early visual neurons in macaque monkeys found evidence that these neurons are sensitive both to features within their receptive fields and to the global context of a scene. Two other monkey studies used electrophysiology to find different frequencies that are associated with feedforward and feedback processing (Orban, 2008; Schendan & Ganis, 2005). Studies with monkeys have also shown that neurons in higher-level visual areas are selective for certain stimuli. One study that used single-unit recordings in macaque monkeys found that neurons in the middle temporal visual area, also known as area MT or V5, were highly selective for both direction and speed (Maunsell & Van Essen, 1983).

Disorders of higher-level visual processing

There are various disorders that are known to cause deficits in higher-level visual processing, including visual object agnosia, prosopagnosia, topographagnosia, alexia, achromatopsia, akinetopsia, Balint syndrome, and astereopsis. These deficits are caused by damage to brain structures implicated in either the ventral or dorsal visual stream (Barton, 2011).

Processing of face and place stimuli

Past models of visual processing have distinguished certain areas of the brain by the specific stimuli that they are most responsive to; for example, the parahippocampal place area (PPA) has been shown to have heightened activation when presented with buildings and place scenes (Epstein & Kanwisher, 1998), whereas the fusiform face area (FFA) responds most strongly to faces and face-like stimuli (Kanwisher et al., 1997).

Parahippocampal Place Area (PPA)

The parahippocampal place area (PPA) is located in the posterior parahippocampal gyrus, which itself is contained in the medial temporal lobe in close proximity to the hippocampus. Its name comes from the increased neural response in the PPA when viewing places, like buildings, houses, and other structures, and when viewing environmental scenes, both indoors and outdoors (Epstein & Kanwisher, 1998). This is not to say that the PPA does not show activation for other visual stimuli; when presented with familiar objects that are neither buildings nor faces, like chairs, there is also some activation within the PPA (Ishai et al., 2000). It does, however, appear that the PPA is associated with the visual processing of buildings and places, as patients who have experienced damage to the parahippocampal area demonstrate topographic disorientation; in other words, they are unable to navigate familiar and unfamiliar surroundings (Habib & Sirigu, 1987). Outside of visual processing, the parahippocampal gyrus is involved in both spatial memory and spatial navigation (Squire & Zola-Morgan, 1991).

Fusiform Face Area (FFA)

The fusiform face area is located within the inferior temporal cortex in the fusiform gyrus. Similar to the PPA, the FFA exhibits higher neural activation when visually processing faces than when processing places or buildings (Kanwisher et al., 1997). However, the fusiform area also shows activation for other stimuli and can be trained to specialize in the visual processing of objects of expertise. Past studies have investigated the activation of the FFA in people with specialized visual training, like bird watchers or car experts, who have developed a visual skill in identifying traits of birds and cars respectively. These experts have been shown to develop FFA activation for their specific domain of visual expertise. Other experiments have studied the ability to develop expertise in the FFA using 'greebles', visual stimuli generated from a few components that can be combined into a series of different configurations, much like how a variety of slightly different facial features can be used to construct a unique face. Participants were trained to distinguish greebles by their differing features, and FFA activation was measured periodically throughout their learning. The results showed that greeble-evoked activation in the FFA increased over time, whereas FFA responses to faces actually decreased with increased greeble training. These results suggest three major findings with regard to the FFA in visual processing: firstly, the FFA does not exclusively process faces; secondly, the FFA demonstrates activation for 'expert' visual tasks and can be trained over time to adapt to new visual stimuli; lastly, the FFA does not maintain constant levels of activation for all stimuli and instead seems to 'share' activation in such a way that the most frequently viewed stimuli receive the greatest activation, as seen in the greebles study (Gauthier et al., 2000).

Development of the FFA and PPA in the brain 

Some research suggests that the development of the FFA and the PPA is due to the specialization of certain visual tasks and their relation to other visual processing patterns in the brain. In particular, existing research shows that FFA activation falls within the area of the brain that processes the central field of vision, whereas PPA activation is located in areas of the brain that handle peripheral vision and vision just outside the direct field of view (Levy et al., 2001). This suggests that the FFA and PPA may have developed their specializations because of the visual tasks common within those fields of view. Because faces are typically processed in the central field of vision, the parts of the brain that process the central field eventually also specialize in more detailed tasks like face recognition. The same concept applies to places: because buildings and locations are often viewed in their entirety either just outside the central field of vision or in the periphery, visual specialization for buildings and locations develops within the areas of the brain handling peripheral vision. As such, commonly seen shapes like houses and buildings become specialized in certain regions of the brain, i.e. the PPA.

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...