
Thursday, November 21, 2024

Designer baby

From Wikipedia, the free encyclopedia

A designer baby is a baby whose genetic makeup has been selected or altered, often to exclude a particular gene or to remove genes associated with disease. The process usually involves analysing a wide range of human embryos to identify genes associated with particular diseases and characteristics, and then selecting embryos with the desired genetic makeup, a process known as preimplantation genetic diagnosis. Screening for single genes is commonly practiced, and a few companies offer polygenic screening. A baby's genetic information can also be altered by directly editing the genome before birth. This is not routinely performed; only one instance is known to have occurred as of 2019, when the Chinese twins Lulu and Nana were edited as embryos, drawing widespread criticism.

Genetic alteration of embryos can be achieved by introducing the desired genetic material into the embryo itself, or into the sperm and/or egg cells of the parents, either by delivering the desired genes directly into the cell or by using gene-editing technology. This process is known as germline engineering, and performing it on embryos that will be brought to term is typically prohibited by law. Editing embryos in this manner means that the genetic changes can be carried down to future generations, and since the technology concerns editing the genes of an unborn baby, it is considered controversial and is subject to ethical debate. While some scientists condone the use of this technology to treat disease, concerns have been raised that this could be translated into using the technology for cosmetic purposes and enhancement of human traits.

Pre-implantation genetic diagnosis

Pre-implantation genetic diagnosis (PGD or PIGD) is a procedure in which embryos are screened prior to implantation. The technique is used alongside in vitro fertilisation (IVF) to obtain embryos for evaluation of the genome – alternatively, ovocytes can be screened prior to fertilisation. The technique was first used in 1989.

PGD is used primarily to select embryos for implantation in the case of possible genetic defects, allowing identification of mutated or disease-related alleles and selection against them. It is especially useful in embryos from parents where one or both carry a heritable disease. PGD can also be used to select for embryos of a certain sex, most commonly when a disease is more strongly associated with one sex than the other (as is the case for X-linked disorders which are more common in males, such as haemophilia). Infants born with traits selected following PGD are sometimes considered to be designer babies.

One application of PGD is the selection of 'saviour siblings', children who are born to provide a transplant (of an organ or group of cells) to a sibling with a usually life-threatening disease. Saviour siblings are conceived through IVF and then screened using PGD to analyze genetic similarity to the child needing a transplant, to reduce the risk of rejection.

Process

Process of pre-implantation genetic diagnosis. In vitro fertilisation involves either incubation of sperm and oocyte together, or injection of sperm directly into the oocyte. PCR - polymerase chain reaction, FISH - fluorescent in situ hybridisation.

Embryos for PGD are obtained from IVF procedures in which the oocyte is artificially fertilised by sperm. Oocytes from the woman are harvested following controlled ovarian hyperstimulation (COH), which involves fertility treatments to induce production of multiple oocytes. After harvesting the oocytes, they are fertilised in vitro, either during incubation with multiple sperm cells in culture, or via intracytoplasmic sperm injection (ICSI), where sperm is directly injected into the oocyte. The resulting embryos are usually cultured for 3–6 days, allowing them to reach the blastomere or blastocyst stage.

Once embryos reach the desired stage of development, cells are biopsied and genetically screened. The screening procedure varies based on the nature of the disorder being investigated.

Polymerase chain reaction (PCR) is a process in which DNA sequences are amplified to produce many more copies of the same segment, allowing screening of large samples and identification of specific genes. The process is often used when screening for monogenic disorders, such as cystic fibrosis.
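
Because each PCR cycle roughly doubles the amount of target DNA, copy number grows exponentially with the number of cycles. As a rough illustration, the following minimal Python sketch assumes an idealised reaction with perfect doubling efficiency (real reactions fall short of this and eventually plateau):

    # Minimal sketch: idealised PCR amplification, assuming every cycle
    # duplicates the target with the given efficiency (hypothetical values).
    def pcr_copies(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
        """Expected copy number after a given number of PCR cycles."""
        return initial_copies * (1.0 + efficiency) ** cycles

    # A single template molecule after 30 ideal cycles:
    print(f"{pcr_copies(1, 30):.2e} copies")  # ~1.07e+09 copies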

Another screening technique, fluorescent in situ hybridisation (FISH) uses fluorescent probes which specifically bind to highly complementary sequences on chromosomes, which can then be identified using fluorescence microscopy. FISH is often used when screening for chromosomal abnormalities such as aneuploidy, making it a useful tool when screening for disorders such as Down syndrome.

Following the screening, embryos with the desired trait (or lacking an undesired trait such as a mutation) are transferred into the mother's uterus, then allowed to develop naturally.

Regulation

PGD regulation is determined by individual countries' governments, with some prohibiting its use entirely, including in Austria, China, and Ireland.

In many countries, including France, Switzerland, Italy and the United Kingdom, PGD is permitted only under very stringent conditions and for medical use. In Italy and Switzerland, PGD is permitted only under certain circumstances, there is no clear set of specifications under which it can be carried out, and selection of embryos based on sex is not permitted. In France and the UK, regulations are much more detailed, with dedicated agencies setting out a framework for PGD: selection based on sex is permitted under certain circumstances, and the genetic disorders for which PGD is permitted are specified by the countries' respective agencies.

In contrast, United States federal law does not regulate PGD, and no dedicated agency specifies a regulatory framework by which healthcare professionals must abide. Elective sex selection is permitted, accounting for around 9% of all PGD cases in the U.S., as is selection for desired conditions such as deafness or dwarfism.

Pre-implantation Genetic Testing

Pre-implantation genetic testing is classified according to the specific analysis conducted:

PGT-M (Preimplantation Genetic Testing for monogenic diseases): used to detect hereditary diseases caused by a mutation or alteration in the DNA sequence of a single gene.

PGT-A (Preimplantation Genetic Testing for aneuploidy): used to diagnose numerical chromosomal abnormalities (aneuploidies).

Human germline engineering

Human germline engineering is a process in which the human genome is edited within a germ cell, such as a sperm cell or oocyte (causing heritable changes), or in the zygote or embryo following fertilization. Germline engineering results in changes in the genome being incorporated into every cell in the body of the offspring (or of the individual following embryonic germline engineering). This process differs from somatic cell engineering, which does not result in heritable changes. Most human germline editing is performed on individual cells and non-viable embryos, which are destroyed at a very early stage of development. In November 2018, however, a Chinese scientist, He Jiankui, announced that he had created the first human germline genetically edited babies.

Genetic engineering relies on knowledge of human genetic information, made possible by research such as the Human Genome Project, which mapped the sequence of the human genome and the positions of its genes. As of 2019, high-throughput sequencing methods allow genome sequencing to be conducted very rapidly, making the technology widely available to researchers.

Germline modification is typically accomplished through techniques which incorporate a new gene into the genome of the embryo or germ cell in a specific location. This can be achieved by introducing the desired DNA directly to the cell for it to be incorporated, or by replacing a gene with one of interest. These techniques can also be used to remove or disrupt unwanted genes, such as ones containing mutated sequences.

Whilst germline engineering has mostly been performed in mammals and other animals, research on human cells in vitro is becoming more common. Most commonly used in human cells are germline gene therapy and the engineered nuclease system CRISPR/Cas9.

Germline gene modification

Gene therapy is the delivery of a nucleic acid (usually DNA or RNA) into a cell as a pharmaceutical agent to treat disease. Most commonly it is carried out using a vector, which transports the nucleic acid (usually DNA encoding a therapeutic gene) into the target cell. A vector can transduce a desired copy of a gene into a specific location to be expressed as required. Alternatively, a transgene can be inserted to deliberately disrupt an unwanted or mutated gene, preventing transcription and translation of the faulty gene products to avoid a disease phenotype.

Gene therapy in patients is typically carried out on somatic cells in order to treat conditions such as some leukaemias and vascular diseases. Human germline gene therapy, in contrast, is restricted to in vitro experiments in some countries, whilst others prohibit it entirely, including Australia, Canada, Germany and Switzerland.

Whilst the National Institutes of Health in the US does not currently allow in utero germline gene transfer clinical trials, in vitro trials are permitted. The NIH guidelines state that further studies are required regarding the safety of gene transfer protocols before in utero research is considered, requiring current studies to provide demonstrable efficacy of the techniques in the laboratory. Research of this sort is currently using non-viable embryos to investigate the efficacy of germline gene therapy in treatment of disorders such as inherited mitochondrial diseases.

Gene transfer to cells is usually by vector delivery. Vectors are typically divided into two classes – viral and non-viral.

Viral vectors

Viruses infect cells by transducing their genetic material into a host cell and using the host's cellular machinery to generate the viral proteins needed for replication and proliferation. By modifying viruses and loading them with the therapeutic DNA or RNA of interest, it is possible to use them as vectors to deliver the desired gene into the cell.

Retroviruses are some of the most commonly used viral vectors, as they not only introduce their genetic material into the host cell, but also copy it into the host's genome. In the context of gene therapy, this allows permanent integration of the gene of interest into the patient's own DNA, providing longer lasting effects.

Viral vectors work efficiently and are mostly safe, but they present some complications, which contributes to the stringency of regulation on gene therapy. Despite partial inactivation of viral vectors in gene therapy research, they can still be immunogenic and elicit an immune response. This can impede viral delivery of the gene of interest, and can cause complications for patients when used clinically, especially those who already have a serious genetic illness. Another difficulty is the possibility that some viruses will randomly integrate their nucleic acids into the genome, which can interrupt gene function and generate new mutations. This is a significant concern for germline gene therapy, due to the potential to generate new mutations in the embryo or offspring.

Non-viral vectors

Non-viral methods of nucleic acid transfection involve injecting a naked DNA plasmid into the cell for incorporation into the genome. This method was initially relatively ineffective, with a low frequency of integration, but efficiency has since greatly improved through methods that enhance delivery of the gene of interest into cells. Furthermore, non-viral vectors are simple to produce on a large scale and are not highly immunogenic.

Some non-viral methods are detailed below:

  • Electroporation is a technique in which high-voltage pulses carry DNA across the membrane into the target cell. The method is believed to work through the formation of temporary pores in the membrane, but electroporation results in a high rate of cell death, which has limited its use. An improved version of this technology, electron-avalanche transfection, uses shorter (microsecond) high-voltage pulses, resulting in more effective DNA integration and less cellular damage.
  • The gene gun is a physical method of DNA transfection in which a DNA plasmid is loaded onto a particle of heavy metal (usually gold), which is then loaded into the 'gun'. The device generates the force needed to penetrate the cell membrane, allowing the DNA to enter whilst the metal particle is retained.
  • Oligonucleotides are used as chemical vectors for gene therapy, often to disrupt mutated DNA sequences and prevent their expression. Disruption can be achieved by introducing small RNA molecules, called siRNA, which signal cellular machinery to cleave the unwanted mRNA sequences, preventing their translation. Another method uses double-stranded oligonucleotides that bind the transcription factors required for transcription of the target gene; by competitively binding these transcription factors, the oligonucleotides prevent the gene's expression.

ZFNs

Zinc-finger nucleases (ZFNs) are enzymes generated by fusing a zinc-finger DNA-binding domain to a DNA-cleavage domain. A zinc-finger array recognises between 9 and 18 bases of sequence, so by mixing and matching these modules researchers can target virtually any sequence they wish to alter, even within complex genomes. A ZFN is a macromolecular complex formed from monomers, each containing a zinc-finger domain and a FokI endonuclease domain. The FokI domains must dimerise to be active, which narrows the target area by requiring two nearby DNA-binding events.

The resulting cleavage event enables most genome-editing technologies to work. After a break is created, the cell seeks to repair it.

  • One repair mechanism is non-homologous end joining (NHEJ), in which the cell trims the two broken DNA ends and seals them back together, often producing a frameshift.
  • An alternative is homology-directed repair, in which the cell fixes the damage using a copy of the sequence as a template. By supplying their own template, researchers can have the system insert a desired sequence instead.

The success of using ZFNs in gene therapy depends on inserting genes into the chromosomal target area without damaging the cell. Custom-designed ZFNs offer one option for gene correction in human cells.

TALENs

Transcription activator-like effector nucleases (TALENs) are another method of targeting DNA at single-nucleotide resolution. TALENs are made by fusing a TAL effector DNA-binding domain to a DNA-cleavage domain. TALENs are "built from arrays of 33-35 amino acid modules…by assembling those arrays…researchers can target any sequence they like". The pair of variable amino acids in each module, referred to as the repeat variable diresidue (RVD), determines which nucleotide the module binds, allowing researchers to engineer a DNA-binding domain specific to their target. The TALEN enzymes are designed to cut out specific parts of the DNA strands and replace the section, which enables edits to be made. TALENs can be used to edit genomes through non-homologous end joining (NHEJ) and homology-directed repair.

CRISPR/Cas9

CRISPR-Cas9. PAM (Protospacer Adjacent Motif) is required for target binding.

The CRISPR/Cas9 system (CRISPR – Clustered Regularly Interspaced Short Palindromic Repeats, Cas9 – CRISPR-associated protein 9) is a genome editing technology based on the bacterial antiviral CRISPR/Cas system. The bacterial system has evolved to recognize viral nucleic acid sequences and cut these sequences upon recognition, damaging infecting viruses. The gene editing technology uses a simplified version of this process, manipulating the components of the bacterial system to allow location-specific gene editing.

The CRISPR/Cas9 system broadly consists of two major components – the Cas9 nuclease and a guide RNA (gRNA). The gRNA contains a Cas-binding sequence and a ~20 nucleotide spacer sequence, which is specific and complementary to the target sequence on the DNA of interest. Editing specificity can therefore be changed by modifying this spacer sequence.
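
To illustrate how the spacer determines targeting, the sketch below scans one strand of a DNA sequence for 20-nucleotide protospacers that lie immediately upstream of an NGG PAM (the motif mentioned in the caption above). This is a simplified, hypothetical helper for illustration only; real guide-design tools also search the reverse strand and score GC content, off-target risk and other factors.

    # Minimal sketch: list candidate 20-nt spacer sequences that sit just
    # upstream of an NGG PAM on one strand (simplified; ignores the reverse
    # strand, off-target filtering, GC content, etc.).
    import re

    def find_spacers(dna, spacer_len=20):
        """Return (position, spacer, PAM) for each NGG PAM with enough
        upstream sequence to provide a full-length spacer."""
        dna = dna.upper()
        hits = []
        for m in re.finditer(r"(?=([ACGT]GG))", dna):  # lookahead so PAMs may overlap
            pam_start = m.start(1)
            if pam_start >= spacer_len:
                spacer = dna[pam_start - spacer_len:pam_start]
                hits.append((pam_start - spacer_len, spacer, m.group(1)))
        return hits

    example = "TTGACCTAGCTAGCTAGGCTAGCTAGCTAACGGTTACGG"  # hypothetical sequence
    for pos, spacer, pam in find_spacers(example):
        print(pos, spacer, pam)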

DNA repair after double-strand break

Upon system delivery to a cell, Cas9 and the gRNA bind, forming a ribonucleoprotein complex. This causes a conformational change in Cas9, allowing it to cleave DNA if the gRNA spacer sequence binds with sufficient homology to a particular sequence in the host genome. When the gRNA binds to the target sequence, Cas will cleave the locus, causing a double-strand break (DSB).

The resulting DSB can be repaired by one of two mechanisms –

  • Non-Homologous End Joining (NHEJ) - an efficient but error-prone mechanism, which often introduces insertions and deletions (indels) at the DSB site. This means it is often used in knockout experiments to disrupt genes and introduce loss of function mutations.
  • Homology Directed Repair (HDR) - a less efficient but high-fidelity process which is used to introduce precise modifications into the target sequence. The process requires adding a DNA repair template including a desired sequence, which the cell's machinery uses to repair the DSB, incorporating the sequence of interest into the genome.

Since NHEJ is more efficient than HDR, most DSBs are repaired via NHEJ, introducing gene knockouts. To increase the frequency of HDR, inhibiting genes associated with NHEJ and performing the edit during particular cell-cycle phases (primarily S and G2) appear effective.
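
As a toy illustration of the difference between the two outcomes, the sketch below classifies an edited read by comparing it with the unedited reference and a supplied HDR template: an exact match to the template suggests a precise HDR edit, while a length change relative to the reference suggests an NHEJ indel. The sequences and the logic are hypothetical and heavily simplified; real outcome calling from sequencing data must handle alignment, sequencing errors and mosaic mixtures.

    # Minimal sketch: naive classification of a single editing outcome,
    # assuming the read can be compared directly to the reference and to
    # the HDR repair template (hypothetical sequences).
    def classify_repair(read, reference, hdr_template):
        if read == hdr_template:
            return "HDR (precise edit)"
        if read == reference:
            return "unedited"
        if len(read) != len(reference):
            return "NHEJ (indel)"
        return "substitution / unclassified"

    reference    = "ACGTACGTACGT"
    hdr_template = "ACGTAAGTACGT"   # single intended base change
    print(classify_repair("ACGTAAGTACGT", reference, hdr_template))  # HDR (precise edit)
    print(classify_repair("ACGTACGACGT", reference, hdr_template))   # NHEJ (indel)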

CRISPR/Cas9 is an effective way of manipulating the genome in vivo in animals as well as in human cells in vitro, but some issues with the efficiency of delivery and editing mean that it is not considered safe for use in viable human embryos or the body's germ cells. As well as the higher efficiency of NHEJ making inadvertent knockouts likely, CRISPR can introduce DSBs to unintended parts of the genome, called off-target effects. These arise due to the spacer sequence of the gRNA conferring sufficient sequence homology to random loci in the genome, which can introduce random mutations throughout. If performed in germline cells, mutations could be introduced to all the cells of a developing embryo.

Developments are under way to prevent these unintended consequences, known as off-target effects. There is a race to develop gene-editing technologies that prevent off-target effects from occurring, including biased off-target detection and anti-CRISPR proteins. In biased off-target detection, several tools predict the locations where off-target effects may take place. Two main models are used: alignment-based models, in which the gRNA sequence is aligned against the genome and off-target locations are predicted from the alignment, and scoring-based models, in which each candidate site for a gRNA is scored for its off-target potential according to the positioning of its mismatches.
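
A crude alignment-based predictor in this spirit simply slides the spacer along the genome, counts mismatches at PAM-adjacent sites, and reports sites below a mismatch threshold. The sketch below is a hypothetical, heavily simplified illustration; real tools weight mismatches by position, allow bulges, search both strands and scale to whole genomes.

    # Minimal sketch of alignment-based off-target prediction: report sites
    # followed by an NGG PAM whose sequence differs from the spacer by at
    # most a few mismatches (hypothetical sequences, one strand only).
    def off_target_sites(spacer, genome, max_mismatches=3):
        spacer, genome = spacer.upper(), genome.upper()
        k = len(spacer)
        hits = []
        for i in range(len(genome) - k - 2):            # leave room for a 3-nt PAM
            site, pam = genome[i:i + k], genome[i + k:i + k + 3]
            if not pam.endswith("GG"):
                continue
            mismatches = sum(a != b for a, b in zip(spacer, site))
            if mismatches <= max_mismatches:
                hits.append((i, site, pam, mismatches))
        return hits

    spacer = "GACGCATAAAGATGAGACGC"
    # Toy "genome": the perfect target plus a one-mismatch off-target site.
    genome = "TTT" + spacer + "TGG" + "AAA" + "GACGCATAACGATGAGACGC" + "AGG" + "TTT"
    for pos, site, pam, mm in off_target_sites(spacer, genome):
        print(pos, site, pam, mm)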

Regulation on CRISPR use

In 2015, the International Summit on Human Gene Editing was held in Washington D.C., hosted by scientists from China, the UK and the U.S. The summit concluded that genome editing of somatic cells using CRISPR and other genome editing tools would be allowed to proceed under FDA regulations, but human germline engineering would not be pursued.

In February 2016, scientists at the Francis Crick Institute in London were given a license permitting them to edit human embryos using CRISPR to investigate early development. Regulations were imposed to prevent the researchers from implanting the embryos and to ensure experiments were stopped and embryos destroyed after seven days.

In November 2018, Chinese scientist He Jiankui announced that he had performed the first germline engineering on viable human embryos, which have since been brought to term. The claims received significant criticism, and Chinese authorities suspended He's research activity. Following the event, scientists and government bodies have called for more stringent regulation of the use of CRISPR technology in embryos, with some calling for a global moratorium on germline genetic engineering. Chinese authorities have announced that stricter controls will be imposed, with Communist Party general secretary Xi Jinping and Premier Li Keqiang calling for new gene-editing legislation to be introduced.

As of January 2020, germline genetic alteration is prohibited by law in 24 countries and by guidelines in 9 others. The Council of Europe's Convention on Human Rights and Biomedicine, also known as the Oviedo Convention, states in Article 13, "Interventions on the human genome": "An intervention seeking to modify the human genome may only be undertaken for preventive, diagnostic or therapeutic purposes and only if its aim is not to introduce any modification in the genome of any descendants". Nonetheless, wide public debate has emerged over whether Article 13 should be revisited and renewed, since it was drafted in 1997 and may be out of date given recent technological advances in genetic engineering.

Lulu and Nana controversy

He Jiankui speaking at the Second International Summit on Human Genome Editing, November 2018

The Lulu and Nana controversy refers to two Chinese twin girls born in November 2018, who had been genetically modified as embryos by the Chinese scientist He Jiankui. The twins are believed to be the first genetically modified babies. The girls' parents had participated in a clinical project run by He, which involved IVF, PGD and genome-editing procedures in an attempt to edit the gene CCR5. CCR5 encodes a protein used by HIV to enter host cells, and by introducing a specific mutation, CCR5 Δ32, into the gene, He claimed the process would confer innate resistance to HIV.

The project run by He recruited couples wanting children in which the man was HIV-positive and the woman uninfected. During the project, He performed IVF with sperm and eggs from the couples and then introduced the CCR5 Δ32 mutation into the genomes of the embryos using CRISPR/Cas9. He then used PGD on the edited embryos, sequencing biopsied cells to identify whether the mutation had been successfully introduced. He reported some mosaicism in the embryos, whereby the mutation had integrated into some cells but not all, suggesting the offspring would not be entirely protected against HIV. He claimed that during the PGD and throughout the pregnancy, fetal DNA was sequenced to check for off-target errors introduced by the CRISPR/Cas9 technology; however, the NIH released a statement announcing that "the possibility of damaging off-target effects has not been satisfactorily explored". The girls were born in early November 2018 and were reported by He to be healthy.

The research was conducted in secret until November 2018, when documents were posted on the Chinese clinical trials registry and MIT Technology Review published a story about the project. Following this, He was interviewed by the Associated Press and presented his work on 27 November at the Second International Summit on Human Genome Editing, held in Hong Kong.

Although the information available about the experiment is relatively limited, the scientist is considered to have violated many ethical, social and moral norms, as well as China's own guidelines and regulations, which prohibited germline genetic modification of human embryos. From a technological point of view, the CRISPR/Cas9 technique is one of the most precise and least expensive methods of gene modification to date, yet a number of limitations still prevent it from being labelled safe and efficient. During the First International Summit on Human Gene Editing in 2015, the participants agreed that a halt must be set on germline genetic alterations in clinical settings unless and until: "(1) the relevant safety and efficacy issues have been resolved, based on appropriate understanding and balancing of risks, potential benefits, and alternatives, and (2) there is broad societal consensus about the appropriateness of the proposed application". During the second summit in 2018, however, the topic was revisited: "Progress over the last three years and the discussions at the current summit, however, suggest that it is time to define a rigorous, responsible translational pathway toward such trials". Suggesting that the ethical and legal aspects should indeed be revisited, George Daley, a representative of the summit's leadership and Dean of Harvard Medical School, described He's experiment as "a wrong turn on the right path".

The experiment was met with widespread criticism and was highly controversial, both globally and within China. Several bioethicists, researchers and medical professionals released statements condemning the research, including Nobel laureate David Baltimore, who deemed the work "irresponsible", and Jennifer Doudna, a biochemist at the University of California, Berkeley, and a pioneer of the CRISPR/Cas9 technology. The director of the NIH, Francis S. Collins, stated that the "medical necessity for inactivation of CCR5 in these infants is utterly unconvincing" and condemned He Jiankui and his research team for 'irresponsible work'. Other scientists, including the Harvard University geneticist George Church, suggested gene editing for disease resistance was "justifiable" but expressed reservations about the conduct of He's work.

DARPA's Safe Genes program has the goal of protecting soldiers against gene-editing warfare tactics. The program draws on ethics experts to better predict and understand current and future gene-editing issues.

The World Health Organization has launched a global registry to track research on human genome editing, after a call to halt all work on genome editing.

The Chinese Academy of Medical Sciences responded to the controversy in The Lancet, condemning He for violating ethical guidelines documented by the government and emphasising that germline engineering should not be performed for reproductive purposes. The academy stated that it would "issue further operational, technical and ethical guidelines as soon as possible" to impose tighter regulation on human embryo editing.

Ethical considerations

Editing embryos and germ cells to generate designer babies is the subject of ethical debate because of the implications of modifying genomic information in a heritable manner. The debate includes arguments over unbalanced gender selection and gamete selection.

Despite regulations set by individual countries' governing bodies, the absence of a standardized regulatory framework leads to frequent discourse in discussion of germline engineering among scientists, ethicists and the general public. Arthur Caplan, the head of the Division of Bioethics at New York University suggests that establishing an international group to set guidelines for the topic would greatly benefit global discussion and proposes instating "religious and ethics and legal leaders" to impose well-informed regulations.

In many countries, editing embryos and germline modification for reproductive use is illegal. As of 2017, the U.S. restricts the use of germline modification and the procedure is heavily regulated by the FDA and NIH. The American National Academy of Sciences and National Academy of Medicine indicated they would provide qualified support for human germline editing "for serious conditions under stringent oversight", should safety and efficiency issues be addressed. In 2019, the World Health Organization called human germline genome editing "irresponsible".

Since genetic modification poses risk to any organism, researchers and medical professionals must give the prospect of germline engineering careful consideration. The main ethical concern is that these types of treatments will produce a change that can be passed down to future generations and therefore any error, known or unknown, will also be passed down and will affect the offspring. Theologian Ronald Green of Dartmouth College has raised concern that this could result in a decrease in genetic diversity and the accidental introduction of new diseases in the future.

When considering support for research into germline engineering, ethicists have often suggested that it can be considered unethical not to consider a technology that could improve the lives of children who would otherwise be born with congenital disorders. Geneticist George Church claims that he does not expect germline engineering to increase societal disadvantage, and recommends lowering costs and improving education about the topic to dispel these views. He emphasizes that allowing germline engineering in children who would otherwise be born with congenital defects could save around 5% of babies from living with potentially avoidable diseases. Jackie Leach Scully, professor of social ethics and bioethics at Newcastle University, acknowledges that the prospect of designer babies could leave those living with diseases and unable to afford the technology feeling marginalized and without medical support. However, Leach Scully also suggests that germline editing gives parents the option "to try and secure what they think is the best start in life" and does not believe it should be ruled out. Similarly, Nick Bostrom, an Oxford philosopher known for his work on the risks of artificial intelligence, proposed that "super-enhanced" individuals could "change the world through their creativity and discoveries, and through innovations that everyone else would use".

Many bioethicists emphasize that germline engineering is usually considered to be in the best interest of a child, and that the associated research should therefore be supported. Dr James Hughes, a bioethicist at Trinity College, Connecticut, suggests that the decision may not differ greatly from other well-accepted parental decisions, such as choosing with whom to have a child and using contraception to decide when a child is conceived. Julian Savulescu, a bioethicist and philosopher at Oxford University, believes parents "should allow selection for non‐disease genes even if this maintains or increases social inequality", coining the term procreative beneficence to describe the idea that the children "expected to have the best life" should be selected. The Nuffield Council on Bioethics said in 2017 that there was "no reason to rule out" changing the DNA of a human embryo if performed in the child's interest, but stressed that this was only provided it did not contribute to societal inequality. Furthermore, in 2018 the Nuffield Council detailed applications that would preserve equality and benefit humanity, such as the elimination of hereditary disorders and adaptation to a warmer climate. Philosopher and Director of Bioethics at the non-profit Invincible Wellbeing David Pearce argues that "the question [of designer babies] comes down to an analysis of risk-reward ratios - and our basic ethical values, themselves shaped by our evolutionary past." According to Pearce, "it's worth recalling that each act of old-fashioned sexual reproduction is itself an untested genetic experiment", often compromising a child's wellbeing and pro-social capacities even if the child grows up in a healthy environment. Pearce thinks that as the technology matures, more people may find it unacceptable to rely on the "genetic roulette of natural selection".

Conversely, several concerns have been raised about the possibility of generating designer babies, especially given the inefficiencies of current technologies. Green stated that although the technology was "unavoidably in our future", he foresaw "serious errors and health problems as unknown genetic side effects in 'edited' children" arising. Furthermore, Green warned against the possibility that "the well-to-do" could more easily access the technologies "..that make them even better off". This concern that germline editing could exacerbate a societal and financial divide is shared among other researchers, with the chair of the Nuffield Bioethics Council, Professor Karen Yeung, stressing that if funding of the procedures "were to exacerbate social injustice, in our view that would not be an ethical approach".

Social and religious concerns also arise over the possibility of editing human embryos. A survey conducted by the Pew Research Center found that only a third of the Americans surveyed who identified as strongly Christian approved of germline editing. Catholic leaders take a middle ground. According to Catholicism, a baby is a gift from God and people are created to be perfect in God's eyes, so altering the genetic makeup of an infant is unnatural. In 1984, Pope John Paul II stated that genetic manipulation aimed at healing disease is acceptable in the Church, saying it "will be considered in principle as desirable provided that it tends to the real promotion of the personal well-being of man, without harming his integrity or worsening his life conditions". However, using designer babies to create a superior race, including cloning humans, is considered unacceptable. The Catholic Church rejects human cloning even if its purpose is to produce organs for therapeutic use. The Vatican has stated that "The fundamental values connected with the techniques of artificial human procreation are two: the life of the human being called into existence and the special nature of the transmission of human life in marriage". According to the Church, such procedures violate the dignity of the individual and are morally illicit.

A 2017 survey conducted by the Mayo Clinic in the Midwestern United States found that most participants opposed the creation of designer babies, with some noting its eugenic undertones. Participants also felt that gene editing may have unintended consequences that manifest later in life for those who undergo it. Some respondents worried that gene editing could reduce the genetic diversity of the population, and others were concerned about the socioeconomic divides designer babies might exacerbate. The authors noted that the results showed a greater need for interaction between the public and the scientific community about the possible implications and the appropriate regulation of gene editing, as it was unclear how much the participants knew about gene editing and its effects before taking the survey.

In Islam, a positive attitude towards genetic engineering is grounded in the general principle that Islam aims at facilitating human life. The negative view stems from the process used to create a designer baby, which often involves the destruction of some embryos. Many Muslims believe that an embryo already has a soul at conception, so the destruction of embryos runs against the teaching of the Qur'an, Hadith and Shari'ah law on the responsibility to protect human life. The procedure may also be viewed as "acting like God/Allah". As for the idea that parents could choose the gender of their child, the Islamic view is that humans have no right to make that choice, and that "gender selection is only up to God".

Since 2020, there have been discussions of American studies that used the CRISPR/Cas9 technique with HDR (homology-directed repair) on embryos that were not implanted. The conclusions were that gene-editing technologies are not yet mature enough for real-world use and that more studies demonstrating safe results over a longer period are needed.

An article in the journal Bioscience Reports argued that health, in genetic terms, is not straightforward, and that once the technology is mature enough for real-world use, operations involving gene editing should be subject to extensive deliberation, with all potential effects assessed on a case-by-case basis to prevent undesired outcomes for the subject or patient.

Social concerns have also been raised, as highlighted by Josephine Quintavalle, director of Comment on Reproductive Ethics at Queen Mary University of London, who states that selecting children's traits is "turning parenthood into an unhealthy model of self-gratification rather than a relationship".

One major worry among scientists, including Marcy Darnovsky at the Center for Genetics and Society in California, is that permitting germline engineering for correction of disease phenotypes is likely to lead to its use for cosmetic purposes and enhancement. Meanwhile, Henry Greely, a bioethicist at Stanford University in California, states that "almost everything you can accomplish by gene editing, you can accomplish by embryo selection", suggesting the risks undertaken by germline engineering may not be necessary. Alongside this, Greely emphasizes that the beliefs that genetic engineering will lead to enhancement are unfounded, and that claims that we will enhance intelligence and personality are far off – "we just don't know enough and are unlikely to for a long time – or maybe for ever".

Philosophy of psychology

From Wikipedia, the free encyclopedia

Epistemology

Some of the issues studied by the philosophy of psychology are epistemological concerns about the methodology of psychological investigation. For example:

  • What constitutes a psychological explanation?
  • What is the most appropriate methodology for psychology: mentalism, behaviorism, or a compromise?
  • Are self-reports a reliable data-gathering method?
  • What conclusions can be drawn from null hypothesis tests?
  • Can first-person experiences (emotions, desires, beliefs, etc.) be measured objectively?

Ontology

Philosophers of psychology also concern themselves with ontological issues, like:

  • Can psychology be theoretically reduced to neuroscience?
  • What are psychological phenomena?
  • What is the relationship between subjectivity and objectivity in psychology?

Relations to other fields

Philosophy of psychology also closely monitors contemporary work conducted in cognitive neuroscience, cognitive psychology, and artificial intelligence, for example questioning whether psychological phenomena can be explained using the methods of neuroscience, evolutionary theory, and computational modeling, respectively. Although these are all closely related fields, some concerns still arise about the appropriateness of importing their methods into psychology. Some such concerns are whether psychology, as the study of individuals as information processing systems (see Donald Broadbent), is autonomous from what happens in the brain (even if psychologists largely agree that the brain in some sense causes behavior (see supervenience)); whether the mind is "hard-wired" enough for evolutionary investigations to be fruitful; and whether computational models can do anything more than offer possible implementations of cognitive theories that tell us nothing about the mind (Fodor & Pylyshyn 1988).

Related to the philosophy of psychology are philosophical and epistemological inquiries about clinical psychiatry and psychopathology. Philosophy of psychiatry is mainly concerned with the role of values in psychiatry: derived from philosophical value theory and phenomenology, values-based practice is aimed at improving and humanizing clinical decision-making in the highly complex environment of mental health care. Philosophy of psychopathology is mainly involved in the epistemological reflection about the implicit philosophical foundations of psychiatric classification and evidence-based psychiatry. Its aim is to unveil the constructive activity underlying the description of mental phenomena.

Main areas

Different schools and systems of psychology represent approaches to psychological problems, which are often based on different philosophies of consciousness.

Functional psychology Functionalism treats the psyche as derived from the activity of external stimuli, depriving it of essential autonomy and denying free will, a stance that later influenced behaviourism. One of the founders of functionalism was William James, who was also close to pragmatism, in which human action is put before questions and doubts about the nature of the world and of man himself.

Psychoanalysis Freud's doctrine, called metapsychology, aimed to give the human self greater freedom from instinctive and irrational desires through a dialogue with a psychologist based on analysis of the unconscious. The psychoanalytic movement later split: part of it treated psychoanalysis as a practice of working with archetypes (analytical psychology), part criticised the social limitations of the unconscious (Freudo-Marxism), and later Lacan's structural psychoanalysis interpreted the unconscious as a language.

Phenomenological psychology Edmund Husserl rejected the physicalism of most of the psychological teachings of his time and came to understand consciousness as the only reality accessible to reliable cognition. His disciple Heidegger added the assertion of the fundamental finitude of man and the threat of a loss of authenticity in the technical world, and thus laid the foundation for existential psychology.

Structuralism W. Wundt, the recognised creator of psychology as a science, described the primordial structures of the psyche that determine perception and behaviour, but faced the problem of the impossibility of direct access to these structures and the vagueness of their description. Half a century later his ideas, combined with Saussure's semiotics, strongly influenced structuralism in the humanities and the post-structuralism and postmodernism that emerged from it, where structures were treated as linguistic invariants.

Wednesday, November 20, 2024

Neurophilosophy

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Neurophilosophy

Neurophilosophy or the philosophy of neuroscience is the interdisciplinary study of neuroscience and philosophy that explores the relevance of neuroscientific studies to the arguments traditionally categorized as philosophy of mind. The philosophy of neuroscience attempts to clarify neuroscientific methods and results using the conceptual rigor and methods of philosophy of science.

Specific issues

Below is a list of specific issues important to philosophy of neuroscience:

  • "The indirectness of studies of mind and brain"
  • "Computational or representational analysis of brain processing"
  • "Relations between psychological and neuroscientific inquiries"
  • Modularity of mind
  • What constitutes adequate explanation in neuroscience?
  • "Location of cognitive function"

Indirectness of studies of the mind and brain

Many of the methods and techniques central to neuroscientific discovery rely on assumptions that can limit the interpretation of the data. Philosophers of neuroscience have discussed such assumptions in the use of functional magnetic resonance imaging (fMRI), dissociation in cognitive neuropsychology, single unit recording, and computational neuroscience. Following are descriptions of many of the current controversies and debates about the methods employed in neuroscience.

fMRI

Many fMRI studies rely heavily on the assumption of localization of function (same as functional specialization).

Localization of function means that many cognitive functions can be localized to specific brain regions. An example of functional localization comes from studies of the motor cortex. There seem to be different groups of cells in the motor cortex responsible for controlling different groups of muscles.

Many philosophers of neuroscience criticize fMRI for relying too heavily on this assumption. Michael Anderson points out that subtraction-method fMRI misses a lot of brain information that is important to the cognitive processes. Subtraction fMRI only shows the differences between the task activation and the control activation, but many of the brain areas activated in the control are obviously important for the task as well.

Rejections of fMRI

Some philosophers entirely reject any notion of localization of function and thus believe fMRI studies to be profoundly misguided. These philosophers maintain that brain processing acts holistically, that large sections of the brain are involved in processing most cognitive tasks (see holism in neurology and the modularity section below). One way to understand their objection to the idea of localization of function is the radio repairman thought experiment. In this thought experiment, a radio repairman opens up a radio and rips out a tube. The radio begins whistling loudly and the radio repairman declares that he must have ripped out the anti-whistling tube. There is no anti-whistling tube in the radio and the radio repairman has confounded function with effect. This criticism was originally targeted at the logic used by neuropsychological brain lesion experiments, but the criticism is still applicable to neuroimaging. These considerations are similar to Van Orden's and Paap's criticism of circularity in neuroimaging logic. According to them, neuroimagers assume that their theory of cognitive component parcellation is correct and that these components divide cleanly into feed-forward modules. These assumptions are necessary to justify their inference of brain localization. The logic is circular if the researcher then uses the appearance of brain region activation as proof of the correctness of their cognitive theories.

Reverse inference

A different problematic methodological assumption within fMRI research is the use of reverse inference. A reverse inference is when the activation of a brain region is used to infer the presence of a given cognitive process. Poldrack points out that the strength of this inference depends critically on the likelihood that a given task employs a given cognitive process and the likelihood of that pattern of brain activation given that cognitive process. In other words, the strength of reverse inference is based upon the selectivity of the task used as well as the selectivity of the brain region activation.
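
Poldrack's point can be restated with Bayes' theorem: the posterior probability of the cognitive process given the activation is only high when the region activates much more often with the process than without it. The numbers in the sketch below are made up purely for illustration.

    # Minimal sketch: Bayes' theorem applied to reverse inference, with
    # hypothetical probabilities.
    def posterior(p_act_given_proc, p_act_given_not_proc, p_proc):
        p_act = p_act_given_proc * p_proc + p_act_given_not_proc * (1 - p_proc)
        return p_act_given_proc * p_proc / p_act

    # A non-selective region (one that activates in many tasks) gives a weak inference:
    print(posterior(0.8, 0.6, 0.5))  # ~0.57
    # A highly selective region supports a much stronger inference:
    print(posterior(0.8, 0.1, 0.5))  # ~0.89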

A 2011 article published in the New York Times has been heavily criticized for misusing reverse inference. In the study, participants were shown pictures of their iPhones and the researchers measured activation of the insula. The researchers took insula activation as evidence of feelings of love and concluded that people loved their iPhones. Critics were quick to point out that the insula is not a very selective piece of cortex, and therefore not amenable to reverse inference.

The neuropsychologist Max Coltheart took the problems with reverse inference a step further and challenged neuroimagers to give one instance in which neuroimaging had informed psychological theory. Coltheart takes the burden of proof to be an instance where the brain imaging data is consistent with one theory but inconsistent with another theory.

Roskies maintains that Coltheart's ultra cognitive position makes his challenge unwinnable. Since Coltheart maintains that the implementation of a cognitive state has no bearing on the function of that cognitive state, then it is impossible to find neuroimaging data that will be able to comment on psychological theories in the way Coltheart demands. Neuroimaging data will always be relegated to the lower level of implementation and be unable to selectively determine one or another cognitive theory.

In a 2006 article, Richard Henson suggests that forward inference can be used to infer dissociation of function at the psychological level. He suggests that such inferences can be made when there are crossover patterns of activation between two task types in two brain regions and no change in activation in a mutual control region.

Pure insertion

One final assumption is the assumption of pure insertion in fMRI. The assumption of pure insertion is the assumption that a single cognitive process can be inserted into another set of cognitive processes without affecting the functioning of the rest. For example, to find the reading comprehension area of the brain, researchers might scan participants while they were presented with a word and while they were presented with a non-word (e.g. "Floob"). If the researchers then infer that the resulting difference in brain pattern represents the regions of the brain involved in reading comprehension, they have assumed that these changes are not reflective of changes in task difficulty or differential recruitment between tasks. The term pure insertion was coined by Donders as a criticism of reaction time methods.
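
The subtraction itself is just a voxel-wise difference between the mean activation in the task and control conditions; pure insertion is the further assumption that this difference reflects only the added process. The sketch below uses synthetic activation values, not real fMRI data or preprocessing.

    # Minimal sketch of a subtraction contrast on toy "activation" data:
    # the word-minus-nonword difference is attributed to reading
    # comprehension only if pure insertion holds.
    import numpy as np

    rng = np.random.default_rng(0)
    n_voxels = 10
    word_trials    = rng.normal(1.0, 0.1, size=(30, n_voxels))   # word condition
    nonword_trials = rng.normal(1.0, 0.1, size=(30, n_voxels))   # "Floob" condition
    word_trials[:, 3] += 0.5    # pretend voxel 3 responds to comprehension

    contrast = word_trials.mean(axis=0) - nonword_trials.mean(axis=0)
    print(int(np.argmax(contrast)))  # voxel 3 shows the largest task-control difference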

Resting-state functional-connectivity MRI

Recently, researchers have begun using a new functional imaging technique called resting-state functional-connectivity MRI. Subjects' brains are scanned while the subject sits idly in the scanner. By looking at the natural fluctuations in the blood-oxygen-level-dependent (BOLD) pattern while the subject is at rest, the researchers can see which brain regions co-vary in activation together. Afterward, they can use the patterns of covariance to construct maps of functionally-linked brain areas.

The name "functional-connectivity" is somewhat misleading since the data only indicates co-variation. Still, this is a powerful method for studying large networks throughout the brain.

Methodological issues

There are a couple of important methodological issues that need to be addressed. Firstly, there are many different possible brain mappings that could be used to define the brain regions for the network. The results could vary significantly depending on the brain region chosen.

Secondly, what mathematical techniques are best to characterize these brain regions?

The brain regions of interest are somewhat constrained by the size of the voxels. Rs-fcMRI uses voxels that are only a few millimeters cubed, so the brain regions will have to be defined on a larger scale. Two of the statistical methods that are commonly applied to network analysis can work on the single voxel spatial scale, but graph theory methods are extremely sensitive to the way nodes are defined.

Brain regions can be divided according to their cellular architecture, according to their connectivity, or according to physiological measures. Alternatively, one could take a "theory-neutral" approach, and randomly divide the cortex into partitions with an arbitrary size.

As mentioned earlier, there are several approaches to network analysis once the brain regions have been defined. Seed-based analysis begins with an a priori defined seed region and finds all of the regions that are functionally connected to that region. Wig et al. caution that the resulting network structure will not give any information concerning the inter-connectivity of the identified regions or the relations of those regions to regions other than the seed region.

Another approach is to use independent component analysis (ICA) to create spatio-temporal component maps, with the components sorted into those that carry information of interest and those caused by noise. Wig et al. again warn that inferring functional brain region communities is difficult under ICA. ICA also has the issue of imposing orthogonality on the data.

Graph theory uses a matrix to characterize covariance between regions, which is then transformed into a network map. The problem with graph-theoretic analysis is that the network mapping is heavily influenced by the a priori definition of brain regions and their connectivity (the nodes and edges). This places the researcher at risk of cherry-picking regions and connections according to their own preconceived theories. However, graph theory analysis is still considered extremely valuable, as it is the only method that gives pair-wise relationships between nodes.
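
A minimal sketch of such a pipeline, with simulated data and an arbitrary threshold, might look as follows (using the networkx library for the graph step); the parcellation and the threshold are exactly the a priori choices discussed above.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(4)
    bold = rng.normal(size=(200, 30))                 # (timepoints, regions), simulated
    corr = np.corrcoef(bold, rowvar=False)

    # Threshold the correlation matrix into a binary adjacency matrix (edges), with
    # the chosen regions as nodes.
    adjacency = (np.abs(corr) > 0.3).astype(int)
    np.fill_diagonal(adjacency, 0)

    graph = nx.from_numpy_array(adjacency)
    print(nx.degree_centrality(graph))                # one measure derived from pair-wise links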

While ICA may have the advantage of being a fairly principled method, it seems that using both methods will be important for better understanding the network connectivity of the brain. Mumford et al. hoped to avoid these issues and use a principled approach that could determine pair-wise relationships, using a statistical technique adopted from the analysis of gene co-expression networks.

Dissociation in cognitive neuropsychology

Cognitive neuropsychology studies brain damaged patients and uses the patterns of selective impairment in order to make inferences on the underlying cognitive structure. Dissociation between cognitive functions is taken to be evidence that these functions are independent. Theorists have identified several key assumptions that are needed to justify these inferences:

  1. Functional modularity – the mind is organized into functionally separate cognitive modules.
  2. Anatomical modularity – the brain is organized into functionally separate modules. This assumption is closely related to the assumption of functional localization. It differs from the assumption of functional modularity because it is possible to have separable cognitive modules that are implemented by diffuse patterns of brain activation.
  3. Universality – The basic organization of functional and anatomical modularity is the same for all normal humans. This assumption is needed if we are to make any claim about functional organization based on dissociation that extrapolates from the instance of a case study to the population.
  4. Transparency / Subtractivity – the mind does not undergo substantial reorganization following brain damage. It is possible to remove one functional module without significantly altering the overall structure of the system. This assumption is necessary in order to justify using brain damaged patients in order to make inferences about the cognitive architecture of healthy people.

There are three principal types of evidence in cognitive neuropsychology: association, single dissociation and double dissociation. Association inferences observe that certain deficits are likely to co-occur; for example, many patients have deficits in both abstract and concrete word comprehension following brain damage. Association studies are considered the weakest form of evidence, because the results could be accounted for by damage to neighboring brain regions rather than damage to a single cognitive system. Single dissociation inferences observe that one cognitive faculty can be spared while another is damaged following brain damage. This pattern indicates that (a) the two tasks employ different cognitive systems, (b) the two tasks use the same system and the damaged task is downstream of the spared task, or (c) the spared task requires fewer cognitive resources than the damaged task. The "gold standard" for cognitive neuropsychology is the double dissociation. Double dissociation occurs when brain damage impairs task A in Patient 1 but spares task B, while brain damage in Patient 2 spares task A but impairs task B. It is assumed that one instance of double dissociation is sufficient to infer separate cognitive modules in the performance of the tasks.

Many theorists criticize cognitive neuropsychology for its dependence on double dissociations. In one widely cited study, Juola and Plunkett used a model connectionist system to demonstrate that double-dissociation behavioral patterns can occur through random lesions of a single module. They created a multilayer connectionist system trained to pronounce words, repeatedly simulated random destruction of nodes and connections in the system, and plotted the resulting performance on a scatter plot. The results showed deficits in irregular noun pronunciation with spared regular verb pronunciation in some cases, and deficits in regular verb pronunciation with spared irregular noun pronunciation in others. These results suggest that a single instance of double dissociation is insufficient to justify inference to multiple systems.
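
The following Python sketch is in the spirit of that kind of simulation rather than a reproduction of the original study: a single small network is trained on artificial items of two types, random subsets of its weights are repeatedly zeroed, and performance on each item type is scored separately. All names and data are invented for the example.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(5)

    # One network, two artificial item types (stand-ins for e.g. regular and irregular
    # items); the label is a simple function of the input so there is something to learn.
    X = rng.integers(0, 2, size=(200, 20)).astype(float)
    y = (X.sum(axis=1) > 10).astype(int)
    item_type = rng.integers(0, 2, size=200)        # illustrative split into two types

    net = MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=0)
    net.fit(X, y)
    intact_weights = [w.copy() for w in net.coefs_]

    # Repeatedly lesion the single trained module by zeroing random weights, then score
    # each item type separately; plotting these pairs gives a scatter of lesion outcomes.
    for trial in range(5):
        lesioned = [w.copy() for w in intact_weights]
        for w in lesioned:
            w[rng.random(w.shape) < 0.3] = 0.0      # knock out roughly 30% of connections
        net.coefs_ = lesioned
        acc0 = net.score(X[item_type == 0], y[item_type == 0])
        acc1 = net.score(X[item_type == 1], y[item_type == 1])
        print(f"lesion {trial}: type-0 accuracy {acc0:.2f}, type-1 accuracy {acc1:.2f}")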

Charter offers a theoretical case in which double dissociation logic can be faulty. If two tasks, task A and task B, use almost all of the same systems but differ by one mutually exclusive module apiece, then the selective lesioning of those two modules would seem to indicate that A and B use different systems. Charter uses the example of someone who is allergic to peanuts but not shrimp and someone who is allergic to shrimp but not peanuts: double dissociation logic leads one to infer that peanuts and shrimp are digested by different systems. John Dunn offers another objection to double dissociation. He claims that it is easy to demonstrate the existence of a true deficit but difficult to show that another function is truly spared. As more data are accumulated, the estimated effect size may converge toward zero, but there will always be some effect greater than zero that the study lacks the statistical power to rule out; therefore, it is impossible to be fully confident that a given double dissociation actually exists.

On a different note, Alfonso Caramazza has given a principled reason for rejecting the use of group studies in cognitive neuropsychology. Studies of brain-damaged patients can take the form either of a single case study, in which an individual's behavior is characterized and used as evidence, or of a group study, in which a group of patients displaying the same deficit have their behavior characterized and averaged. In order to justify grouping a set of patient data together, the researcher must know that the group is homogeneous, that is, that their behavior is equivalent in every theoretically meaningful way. In brain-damaged patients, this can only be established a posteriori by analyzing the behavior patterns of all the individuals in the group. Thus, according to Caramazza, any group study is either the equivalent of a set of single case studies or is theoretically unjustified. Newcombe and Marshall pointed out that there are some cases in which grouping patients is defensible (they use Geschwind's syndrome as an example), and that group studies might still serve as a useful heuristic in cognitive neuropsychological studies.

Single-unit recordings

It is commonly understood in neuroscience that information is encoded in the brain by the firing patterns of neurons. Many of the philosophical questions surrounding the neural code are related to questions about representation and computation that are discussed below. There are other methodological questions, including whether neurons represent information through an average firing rate or whether information is also carried in the temporal dynamics of firing, and similar questions about whether neurons represent information individually or as populations.
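
The rate-versus-timing distinction can be made concrete with a trivial calculation over a single hypothetical spike train; the spike times below are made up for the example.

    import numpy as np

    # Hypothetical spike times (in seconds) from a single neuron over a 2 s trial.
    spike_times = np.array([0.012, 0.101, 0.115, 0.430, 0.437, 0.441, 1.210, 1.800])
    trial_duration = 2.0

    # A rate code only cares about the spike count per unit time...
    mean_rate = len(spike_times) / trial_duration        # spikes per second

    # ...whereas a temporal code also cares about when the spikes occur, e.g. the
    # inter-spike intervals (bursts versus isolated spikes).
    isis = np.diff(spike_times)

    print(f"mean firing rate: {mean_rate:.1f} Hz")
    print("inter-spike intervals (s):", np.round(isis, 3))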

Computational neuroscience

Many of the philosophical controversies surrounding computational neuroscience involve the role of simulation and modeling as explanation. Carl Craver has been especially vocal about such interpretations. Jones and Love wrote an especially critical article targeted at Bayesian behavioral modeling that did not constrain the modeling parameters by psychological or neurological considerations. Eric Winsberg has written about the role of computer modeling and simulation in science generally, but his characterization is applicable to computational neuroscience.

Computation and representation in the brain

The computational theory of mind has been widespread in neuroscience since the cognitive revolution in the 1960s. This section will begin with a historical overview of computational neuroscience and then discuss various competing theories and controversies within the field.

Historical overview

Computational neuroscience began in the 1930s and 1940s with two groups of researchers. The first group consisted of Alan Turing, Alonzo Church and John von Neumann, who were working to develop computing machines and the mathematical underpinnings of computer science. This work culminated in the theoretical development of so-called Turing machines and the Church–Turing thesis, which formalized the mathematics underlying computability theory. The second group consisted of Warren McCulloch and Walter Pitts, who were working to develop the first artificial neural networks. McCulloch and Pitts were the first to hypothesize that neurons could be used to implement a logical calculus that could explain cognition. They used their toy neurons to develop logic gates that could make computations. However, these developments failed to take hold in the psychological sciences and neuroscience until the mid-1950s and 1960s.

Behaviorism had dominated psychology until the 1950s, when new developments in a variety of fields overturned behaviorist theory in favor of a cognitive theory. From the beginning of the cognitive revolution, computational theory played a major role in theoretical developments. Minsky and McCarthy's work in artificial intelligence, Newell and Simon's computer simulations, and Noam Chomsky's importation of information theory into linguistics were all heavily reliant on computational assumptions. By the early 1960s, Hilary Putnam was arguing in favor of machine functionalism, in which the brain instantiated Turing machines. By this point computational theories were firmly fixed in psychology and neuroscience.

By the mid-1980s, a group of researchers began using multilayer feed-forward analog neural networks that could be trained to perform a variety of tasks. The work of researchers like Sejnowski, Rosenberg, Rumelhart, and McClelland was labeled connectionism, and the discipline has continued since then. The connectionist approach was embraced by Paul and Patricia Churchland, who then developed their "state space semantics" using concepts from connectionist theory. Connectionism was also condemned by researchers such as Fodor, Pylyshyn, and Pinker. The tension between the connectionists and the classicists is still being debated today.
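
The McCulloch–Pitts idea of neurons as logic gates can be illustrated with simple threshold units; the rendering below is a schematic sketch rather than their original notation, with weights and thresholds chosen by hand.

    # A McCulloch-Pitts-style unit: binary inputs, fixed weights, a hard threshold.
    def mp_unit(inputs, weights, threshold):
        return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

    # Logic gates built from such units.
    AND = lambda a, b: mp_unit([a, b], [1, 1], 2)
    OR  = lambda a, b: mp_unit([a, b], [1, 1], 1)
    NOT = lambda a:    mp_unit([a],    [-1],   0)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
    print("NOT 0:", NOT(0), "NOT 1:", NOT(1))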

Representation

One of the reasons that computational theories are appealing is that computers have the ability to manipulate representations to give meaningful output. Digital computers use strings of 1s and 0s in order to represent the content. Most cognitive scientists posit that the brain uses some form of representational code that is carried in the firing patterns of neurons. Computational accounts seem to offer an easy way of explaining how human brains carry and manipulate the perceptions, thoughts, feelings, and actions of individuals. While most theorists maintain that representation is an important part of cognition, the exact nature of that representation is highly debated. The two main arguments come from advocates of symbolic representations and advocates of associationist representations.

Symbolic representational accounts have been famously championed by Fodor and Pinker. Symbolic representation means that objects are represented by symbols and are processed through rule-governed manipulations that are sensitive to the constituent structure. The fact that symbolic representation is sensitive to the structure of the representations is a major part of its appeal. Fodor proposed the language of thought hypothesis, in which mental representations are manipulated in the same way that language is syntactically manipulated in order to produce thought. According to Fodor, the language of thought hypothesis explains the systematicity and productivity seen in both language and thought.

Associationist representations are most often described in terms of connectionist systems. In connectionist systems, representations are distributed across all the nodes and connection weights of the system and are thus said to be subsymbolic. A connectionist system is nonetheless capable of implementing a symbolic system. Several important aspects of neural nets suggest that distributed parallel processing provides a better basis for cognitive functions than symbolic processing. Firstly, the inspiration for these systems came from the brain itself, indicating biological relevance. Secondly, these systems are capable of storing content-addressable memory, which is far more efficient than the memory searches of symbolic systems. Thirdly, neural nets are resilient to damage, while even minor damage can disable a symbolic system. Lastly, soft constraints and generalization when processing novel stimuli allow nets to behave more flexibly than symbolic systems.
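
One classic illustration of content-addressable, damage-tolerant distributed memory (not named in the text above, but in the same connectionist spirit) is a Hopfield-style network. The minimal sketch below stores random patterns with a Hebbian rule, damages a fraction of the weights, and still recalls a stored pattern from a corrupted cue.

    import numpy as np

    rng = np.random.default_rng(6)
    n = 100

    # Store two random +/-1 patterns with the Hebbian outer-product rule.
    patterns = rng.choice([-1, 1], size=(2, n))
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)

    # Content-addressable recall: start from a corrupted cue of the first pattern.
    cue = patterns[0].copy()
    cue[rng.choice(n, 20, replace=False)] *= -1          # flip 20 of 100 bits

    # Damage the network by zeroing a fraction of its weights.
    W[rng.random(W.shape) < 0.2] = 0.0

    state = cue.copy()
    for _ in range(10):                                  # synchronous updates, for brevity
        state = np.where(W @ state >= 0, 1, -1)

    print("overlap with stored pattern:",
          float(np.dot(state, patterns[0])) / n)         # close to 1.0 means recovered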

The Churchlands described representation in a connectionist system in terms of state space. The content of the system is represented by an n-dimensional vector, where n is the number of nodes in the system and the direction of the vector is determined by the activation pattern of the nodes. Fodor rejected this method of representation on the grounds that two different connectionist systems could not have the same content. Further mathematical analysis of connectionist systems revealed that systems containing similar content could be mapped graphically to reveal clusters of nodes that were important to representing that content; however, state-space vector comparison was not amenable to this type of analysis. Recently, Nicholas Shea has offered his own account of content within connectionist systems that employs the concepts developed through cluster analysis.
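
The state-space picture can be illustrated schematically: each input corresponds to a point in the network's activation space, and cluster analysis groups inputs whose activation vectors (and, on this account, contents) are similar. The activation vectors below are invented for the example.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(7)

    # Hypothetical hidden-unit activation vectors for 6 inputs in a 10-node network:
    # each row is a point in the network's 10-dimensional state space.
    activations = np.vstack([
        rng.normal(loc=0.0, scale=0.1, size=(3, 10)),    # three inputs with similar content
        rng.normal(loc=1.0, scale=0.1, size=(3, 10)),    # three inputs with different content
    ])

    # Cluster analysis over the state-space points: inputs with similar activation
    # patterns fall into the same cluster.
    tree = linkage(activations, method="average")
    print(fcluster(tree, t=2, criterion="maxclust"))     # e.g. [1 1 1 2 2 2]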

Views on computation

Computationalism, a kind of functionalist philosophy of mind, is committed to the position that the brain is some sort of computer, but what does it mean to be a computer? The definition of computation must be narrow enough to limit the number of objects that can be called computers; for example, it seems problematic for a definition to be so wide that stomachs and weather systems count as performing computations. However, the definition must also be broad enough to let the wide variety of computational systems count as computing; for example, if computation is limited to the syntactic manipulation of symbolic representations, then most connectionist systems would not compute. Rick Grush distinguishes between computation as a tool for simulation and computation as a theoretical stance in cognitive neuroscience. On the former view, anything that can be computationally modeled counts as computing. On the latter view, the brain is literally a computing system, distinct in this regard from systems such as fluid dynamical systems and planetary orbits. The challenge for any computational definition is to keep the two senses distinct.

Alternatively, some theorists choose to accept a narrow or wide definition for theoretical reasons. Pancomputationalism is the position that everything can be said to compute. This view has been criticized by Piccinini on the grounds that such a definition makes computation trivial to the point where it is robbed of its explanatory value.

The simplest definition of computation is that a system can be said to be computing when a computational description can be mapped onto its physical description. This is an extremely broad definition of computation, and it ends up endorsing a form of pancomputationalism. Putnam and Searle, who are often credited with this view, maintain that computation is observer-relative; in other words, if you want to view a system as computing, then you can say that it is computing. Piccinini points out that, on this view, not only is everything computing, but everything is computing in an indefinite number of ways: since it is possible to apply an indefinite number of computational descriptions to a given system, the system ends up computing an indefinite number of tasks.

The most common view of computation is the semantic account of computation. Semantic approaches use a notion of computation similar to that of the mapping approaches, with the added constraint that the system must manipulate representations with semantic content. Note from the earlier discussion of representation that both the Churchlands' connectionist systems and Fodor's symbolic systems use this notion of computation; in fact, Fodor is famously credited with saying "No computation without representation". Computational states can be individuated by an externalist appeal to content in the broad sense (i.e. the object in the external world) or by an internalist appeal to content in the narrow sense (content defined by the properties of the system). In order to fix the content of the representation, it is often necessary to appeal to the information contained within the system. Grush provides a criticism of the semantic account: appealing to the informational content of a system is not enough to demonstrate representation by that system. He uses his coffee cup as an example of a system that contains information, such as the heat conductance of the coffee cup and the time since the coffee was poured, but is too mundane to compute in any robust sense. Semantic computationalists try to escape this criticism by appealing to the evolutionary history of the system; this is called the biosemantic account. Grush uses the example of his feet, saying that on this account his feet would not be computing the amount of food he had eaten, because their structure had not been evolutionarily selected for that purpose. Grush replies to the appeal to biosemantics with a thought experiment: imagine that lightning strikes a swamp somewhere and creates an exact copy of you. According to the biosemantic account, this swamp-you would be incapable of computation because there is no evolutionary history with which to justify assigning representational content. The idea that, of two physically identical structures, one can be said to be computing while the other is not should be disturbing to any physicalist.

There are also syntactic or structural accounts of computation. These accounts do not need to rely on representation; however, it is possible to use both structure and representation as constraints on computational mapping. Oron Shagrir identifies several philosophers of neuroscience who espouse structural accounts. According to him, Fodor and Pylyshyn require some sort of syntactic constraint on their theory of computation, which is consistent with their rejection of connectionist systems on the grounds of systematicity. He also identifies Piccinini as a structuralist, quoting his 2008 paper: "the generation of output strings of digits from input strings of digits in accordance with a general rule that depends on the properties of the strings and (possibly) on the internal state of the system". Though Piccinini undoubtedly espouses structuralist views in that paper, he claims that mechanistic accounts of computation avoid reference to either syntax or representation. It is possible that Piccinini thinks there are differences between syntactic and structural accounts of computation that Shagrir does not respect.

In his view of mechanistic computation, Piccinini asserts that functional mechanisms process vehicles in a manner sensitive to the differences between different portions of the vehicle, and thus can be said to generically compute. He claims that these vehicles are medium-independent, meaning that the mapping function will be the same regardless of the physical implementation. Computing systems can be differentiated based upon the vehicle structure and the mechanistic perspective can account for errors in computation.

Dynamical systems theory presents itself as an alternative to computational explanations of cognition. These theories are staunchly anti-computational and anti-representational. Dynamical systems are defined as systems that change over time in accordance with a mathematical equation. Dynamical systems theory claims that human cognition is a dynamical system, in the same sense that computationalists claim the human mind is a computer. A common objection leveled at dynamical systems theory is that dynamical systems are computable and are therefore a subset of computationalism. Van Gelder is quick to point out that there is a big difference between being a computer and being computable; making the definition of computing wide enough to incorporate dynamical models would effectively embrace pancomputationalism.
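
As a minimal illustration of what a dynamical-systems description looks like, the sketch below integrates a damped oscillator with Euler's method: the state simply evolves in time according to a differential equation, with no symbols being manipulated. The equation and parameters are chosen only for the example.

    # A damped harmonic oscillator: dx/dt = v, dv/dt = -k*x - c*v. The state (x, v)
    # changes continuously in time according to the equation; nothing in this
    # description manipulates symbols or representations.
    k, c, dt = 1.0, 0.2, 0.01
    x, v = 1.0, 0.0
    for step in range(2000):                              # 20 seconds of simulated time
        x, v = x + dt * v, v + dt * (-k * x - c * v)
    print(f"state after 20 s: x = {x:.3f}, v = {v:.3f}")  # decaying toward rest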

Operator (computer programming)

From Wikipedia, the free encyclopedia