
Friday, March 13, 2026

Gene expression

From Wikipedia, the free encyclopedia

Gene expression is the process by which the information contained within a gene is used to produce a functional gene product, such as a protein or a functional RNA molecule. This process involves multiple steps, including the transcription of the gene's sequence into RNA. For protein-coding genes, this RNA is further translated into a chain of amino acids that folds into a protein, while for non-coding genes, the resulting RNA itself serves a functional role in the cell. Gene expression enables cells to utilize the genetic information in genes to carry out a wide range of biological functions. While expression levels can be regulated in response to cellular needs and environmental changes, some genes are expressed continuously with little variation.

Mechanism

Transcription

RNA polymerase moving along a stretch of DNA, leaving behind a newly synthesized strand of RNA.
The process of transcription is carried out by RNA polymerase (RNAP), which uses DNA (black) as a template and produces RNA (blue).

The production of an RNA copy from a DNA strand is called transcription, and is performed by RNA polymerases, which add one ribonucleotide at a time to a growing RNA strand according to the complementarity of the nucleotide bases. The RNA is complementary to the template 3′ → 5′ DNA strand, except that thymines (T) are replaced with uracils (U) in the RNA (and barring occasional copying errors).
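The base-pairing rule can be sketched in a few lines of Python; the function name and example sequence are illustrative, not from the article:

```python
# Illustrative sketch of transcription as base-pairing: the template strand
# is read 3'->5' and each DNA base is paired with its RNA complement
# (A->U, T->A, G->C, C->G), so the RNA matches the coding strand with U for T.
RNA_COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3to5: str) -> str:
    """Return the RNA synthesized 5'->3' from a 3'->5' DNA template."""
    return "".join(RNA_COMPLEMENT[base] for base in template_3to5)

print(transcribe("TACGGT"))  # -> "AUGCCA"
```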

In bacteria, transcription is carried out by a single type of RNA polymerase, which must bind a DNA sequence called a Pribnow box, with the help of the sigma factor protein (σ factor), to start transcription. In eukaryotes, transcription is performed in the nucleus by three types of RNA polymerases, each of which requires a special DNA sequence called the promoter and a set of DNA-binding proteins (transcription factors) to initiate the process (see regulation of transcription below). RNA polymerase I is responsible for transcription of ribosomal RNA (rRNA) genes. RNA polymerase II (Pol II) transcribes all protein-coding genes but also some non-coding RNAs (e.g., snRNAs, snoRNAs or long non-coding RNAs). RNA polymerase III transcribes 5S rRNA, transfer RNA (tRNA) genes, and some small non-coding RNAs (e.g., 7SK). Transcription ends when the polymerase encounters a sequence called the terminator.

mRNA processing

While transcription of prokaryotic protein-coding genes creates messenger RNA (mRNA) that is ready for translation into protein, transcription of eukaryotic genes leaves a primary transcript (pre-RNA), which first has to undergo a series of modifications to become mature RNA. The types and steps involved in maturation vary between coding and non-coding pre-RNAs; for example, even though the precursors of both mRNA and tRNA undergo splicing, the steps and machinery involved are different. The processing of non-coding RNA is described below (non-coding RNA maturation).

The processing of pre-mRNA includes 5′ capping, a set of enzymatic reactions that add 7-methylguanosine (m7G) to the 5′ end of the pre-mRNA and thus protect the RNA from degradation by exonucleases. The m7G cap is then bound by the cap-binding complex heterodimer (CBP20/CBP80), which aids in mRNA export to the cytoplasm and also protects the RNA from decapping.

Another modification is 3′ cleavage and polyadenylation. These occur if a polyadenylation signal sequence (5′-AAUAAA-3′) is present in the pre-mRNA, usually between the protein-coding sequence and the terminator. The pre-mRNA is first cleaved, and then a series of ~200 adenines (A) is added to form the poly(A) tail, which protects the RNA from degradation. The poly(A) tail is bound by multiple poly(A)-binding proteins (PABPs) necessary for mRNA export and translation re-initiation. In the inverse process of deadenylation, poly(A) tails are shortened by the CCR4-Not 3′-5′ exonuclease, which often leads to full transcript decay.

Pre-mRNA is spliced to form mature mRNA.
Illustration of exons and introns in pre-mRNA and the formation of mature mRNA by splicing. The UTRs (in green) are non-coding parts of exons at the ends of the mRNA.

A very important modification of eukaryotic pre-mRNA is RNA splicing. The majority of eukaryotic pre-mRNAs consist of alternating segments called exons and introns. During splicing, an RNA-protein catalytic complex known as the spliceosome carries out two transesterification reactions, which remove an intron and release it in the form of a lariat structure, and then splice the neighbouring exons together. In certain cases, some introns or exons can be either removed or retained in the mature mRNA. This so-called alternative splicing creates a series of different transcripts originating from a single gene. Because these transcripts can potentially be translated into different proteins, splicing extends the complexity of eukaryotic gene expression and the size of a species' proteome.
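Conceptually, splicing removes intron intervals and joins the flanking exons. A minimal sketch in Python, with invented sequences and coordinates, might look like this:

```python
# Illustrative sketch: splicing as removal of introns given their
# half-open [start, end) coordinates within the pre-mRNA.
def splice(pre_mrna: str, introns: list[tuple[int, int]]) -> str:
    """Join exons by cutting out each intron interval."""
    mature, previous_end = [], 0
    for start, end in sorted(introns):
        mature.append(pre_mrna[previous_end:start])  # keep the exon
        previous_end = end                           # skip the intron
    mature.append(pre_mrna[previous_end:])           # final exon
    return "".join(mature)

pre = "AAAGTXXXAGCCC"          # exon1="AAA", intron="GTXXXAG", exon2="CCC"
print(splice(pre, [(3, 10)]))  # -> "AAACCC"
```

Alternative splicing can be mimicked by passing different interval sets for the same pre-mRNA, yielding different mature transcripts.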

Extensive RNA processing may be an evolutionary advantage made possible by the nucleus of eukaryotes. In prokaryotes, transcription and translation happen together, whilst in eukaryotes, the nuclear membrane separates the two processes, giving time for RNA processing to occur.

Non-coding RNA maturation

In most organisms non-coding RNAs (ncRNAs) are transcribed as precursors that undergo further processing. Ribosomal RNAs (rRNAs), for instance, are often transcribed as a pre-rRNA that contains one or more rRNAs. The pre-rRNA is cleaved and modified (by 2′-O-methylation and pseudouridine formation) at specific sites by approximately 150 different small nucleolus-restricted RNAs, called snoRNAs. SnoRNAs associate with proteins, forming snoRNPs. While the snoRNA part base-pairs with the target RNA and thus positions the modification at a precise site, the protein part performs the catalytic reaction. In eukaryotes, in particular, a snoRNP called RNase MRP cleaves the 45S pre-rRNA into the 28S, 5.8S, and 18S rRNAs. The rRNA and RNA processing factors form large aggregates called the nucleolus.

In the case of transfer RNA (tRNA), for example, the 5′ sequence is removed by RNase P, whereas the 3′ end is removed by the tRNase Z enzyme, and the non-templated 3′ CCA tail is added by a nucleotidyl transferase. In the case of microRNA (miRNA), miRNAs are first transcribed as primary transcripts (pri-miRNA) with a cap and poly(A) tail, then processed in the cell nucleus by the enzymes Drosha and Pasha into short, ~70-nucleotide stem-loop structures known as pre-miRNA. After being exported, the pre-miRNA is processed into mature miRNA in the cytoplasm by the endonuclease Dicer, which also initiates the formation of the RNA-induced silencing complex (RISC), which includes the Argonaute protein.

Even snRNAs and snoRNAs themselves undergo a series of modifications before they become part of a functional RNP complex. This is done either in the nucleoplasm or in specialized compartments called Cajal bodies. Their bases are methylated or pseudouridylated by a group of small Cajal body-specific RNAs (scaRNAs), which are structurally similar to snoRNAs.

Translation

For some non-coding RNA, the mature RNA is the final gene product. In the case of messenger RNA (mRNA) the RNA is an information carrier coding for the synthesis of one or more proteins. mRNA carrying a single protein sequence (common in eukaryotes) is monocistronic whilst mRNA carrying multiple protein sequences (common in prokaryotes) is known as polycistronic.

Ribosome translating messenger RNA into a chain of amino acids (a protein).
During translation, a tRNA charged with an amino acid enters the ribosome and aligns with the correct mRNA triplet. The ribosome then adds the amino acid to the growing protein chain.

Every mRNA consists of three parts: a 5′ untranslated region (5′UTR), a protein-coding region or open reading frame (ORF), and a 3′ untranslated region (3′UTR). The coding region carries the information for protein synthesis in the form of the genetic code, read in triplets. Each triplet of nucleotides in the coding region is called a codon and corresponds to a binding site complementary to an anticodon triplet in transfer RNA. Transfer RNAs with the same anticodon sequence always carry an identical type of amino acid. Amino acids are then chained together by the ribosome according to the order of the triplets in the coding region. The ribosome helps transfer RNA bind to messenger RNA, takes the amino acid from each transfer RNA, and joins it to the growing, initially unstructured protein chain. Each mRNA molecule is translated into many protein molecules, on average ~2800 in mammals.
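Reading the ORF as codons can be sketched as follows; the table covers only a handful of codons of the standard genetic code, chosen for illustration:

```python
# Minimal sketch of translation: read the ORF codon by codon and map each
# triplet to an amino acid, stopping at a stop codon. Only a few codons of
# the standard genetic code are included here for brevity.
CODON_TABLE = {
    "AUG": "M",                          # start codon, methionine
    "UUU": "F", "GGU": "G", "AAA": "K",
    "UAA": "*", "UAG": "*", "UGA": "*",  # stop codons
}

def translate(orf: str) -> str:
    protein = []
    for i in range(0, len(orf) - 2, 3):
        amino_acid = CODON_TABLE[orf[i:i + 3]]
        if amino_acid == "*":  # stop codon: release the chain
            break
        protein.append(amino_acid)
    return "".join(protein)

print(translate("AUGUUUAAAUAA"))  # -> "MFK"
```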

In prokaryotes translation generally occurs at the point of transcription (co-transcriptionally), often using a messenger RNA that is still being synthesized. In eukaryotes translation can occur in a variety of regions of the cell depending on where the protein being made is supposed to end up. The major locations are the cytoplasm, for soluble cytoplasmic proteins, and the membrane of the endoplasmic reticulum, for proteins destined for export from the cell or insertion into a cell membrane. Proteins that are supposed to be produced at the endoplasmic reticulum are recognised part-way through the translation process. This is governed by the signal recognition particle, a protein that binds to the ribosome and directs it to the endoplasmic reticulum when it finds a signal peptide on the growing (nascent) amino acid chain.

Regulation

A cat with patches of orange and black fur.
The patchy colours of a tortoiseshell cat are the result of different levels of expression of pigmentation genes in different areas of the skin.

Regulation of gene expression is the control of the amount and timing of appearance of the functional product of a gene. Control of expression is vital to allow a cell to produce the gene products it needs when it needs them; in turn, this gives cells the flexibility to adapt to a variable environment, external signals, damage to the cell, and other stimuli. More generally, gene regulation gives the cell control over all structure and function, and is the basis for cellular differentiation, morphogenesis and the versatility and adaptability of any organism.

Numerous terms are used to describe types of genes depending on how they are regulated; these include:

  • A constitutive gene is a gene that is transcribed continually as opposed to a facultative gene, which is only transcribed when needed.
  • A housekeeping gene is a gene that is required to maintain basic cellular function and so is typically expressed in all cell types of an organism. Examples include actin, GAPDH and ubiquitin. Some housekeeping genes are transcribed at a relatively constant rate and these genes can be used as a reference point in experiments to measure the expression rates of other genes.
  • A facultative gene is a gene only transcribed when needed as opposed to a constitutive gene.
  • An inducible gene is a gene whose expression is either responsive to environmental change or dependent on the position in the cell cycle.

Any step of gene expression may be modulated, from the DNA-RNA transcription step to post-translational modification of a protein. The stability of the final gene product, whether RNA or protein, also contributes to the expression level of the gene: an unstable product results in a low expression level. In general, gene expression is regulated through changes in the number and type of interactions between molecules that collectively influence transcription of DNA and translation of RNA.


Transcriptional

When lactose is present in a prokaryote, it acts as an inducer and inactivates the repressor so that the genes for lactose metabolism can be transcribed.

Regulation of transcription can be broken down into three main routes of influence: genetic (direct interaction of a control factor with the gene), modulation (interaction of a control factor with the transcription machinery) and epigenetic (non-sequence changes in DNA structure that influence transcription).

Ribbon diagram of the lambda repressor dimer bound to DNA.
The lambda repressor transcription factor (green) binds as a dimer to the major groove of its DNA target (red and blue) and blocks initiation of transcription. From PDB: 1LMB.

Direct interaction with DNA is the simplest and the most direct method by which a protein changes transcription levels. Genes often have several protein binding sites around the coding region with the specific function of regulating transcription. There are many classes of regulatory DNA binding sites known as enhancers, insulators and silencers. The mechanisms for regulating transcription are varied, from blocking key binding sites on the DNA for RNA polymerase to acting as an activator and promoting transcription by assisting RNA polymerase binding.

The activity of transcription factors is further modulated by intracellular signals causing protein post-translational modification including phosphorylation, acetylation, or glycosylation. These changes influence a transcription factor's ability to bind, directly or indirectly, to promoter DNA, to recruit RNA polymerase, or to favor elongation of a newly synthesized RNA molecule.

The nuclear membrane in eukaryotes allows further regulation of transcription factors by the duration of their presence in the nucleus, which is regulated by reversible changes in their structure and by binding of other proteins. Environmental stimuli or endocrine signals may cause modification of regulatory proteins eliciting cascades of intracellular signals, which result in regulation of gene expression.

It has become apparent that there is a significant influence of non-DNA-sequence specific effects on transcription. These effects are referred to as epigenetic and involve the higher order structure of DNA, non-sequence specific DNA binding proteins and chemical modification of DNA. In general epigenetic effects alter the accessibility of DNA to proteins and so modulate transcription.

A cartoon representation of the nucleosome structure.
In eukaryotes, DNA is organized in the form of nucleosomes. Note how the DNA (blue and green) is tightly wrapped around the protein core made of the histone octamer (ribbon coils), restricting access to the DNA. From PDB: 1KX5.

In eukaryotes the structure of chromatin, controlled by the histone code, regulates access to DNA with significant impacts on the expression of genes in euchromatin and heterochromatin areas.

Enhancers, transcription factors, mediator complex and DNA loops

Regulation of transcription in mammals. An active enhancer regulatory region is enabled to interact with the promoter region of its target gene by formation of a chromosome loop. This can initiate messenger RNA (mRNA) synthesis by RNA polymerase II (RNAP II) bound to the promoter at the transcription start site of the gene. The loop is stabilized by one architectural protein anchored to the enhancer and one anchored to the promoter and these proteins are joined to form a dimer (red zigzags). Specific regulatory transcription factors bind to DNA sequence motifs on the enhancer. General transcription factors bind to the promoter. When a transcription factor is activated by a signal (here indicated as phosphorylation shown by a small red star on a transcription factor on the enhancer) the enhancer is activated and can now activate its target promoter. The active enhancer is transcribed on each strand of DNA in opposite directions by bound RNAP IIs. Mediator proteins (a complex consisting of about 26 proteins in an interacting structure) communicate regulatory signals from the enhancer DNA-bound transcription factors to the promoter.

Gene expression in mammals is regulated by many cis-regulatory elements, including core promoters and promoter-proximal elements that are located near the transcription start sites of genes, upstream on the DNA (towards the 5′ region of the sense strand). Other important cis-regulatory modules are localized in DNA regions that are distant from the transcription start sites. These include enhancers, silencers, insulators and tethering elements. Enhancers and their associated transcription factors have a leading role in the regulation of gene expression.

Enhancers are genome regions that regulate genes. Enhancers control cell-type-specific gene expression programs, most often by looping across long distances to come into physical proximity with the promoters of their target genes. Multiple enhancers, each often tens or hundreds of thousands of nucleotides away from their target genes, loop to their target gene promoters and coordinate with each other to control gene expression.

The illustration shows an enhancer looping around to come into proximity with the promoter of a target gene. The loop is stabilized by a dimer of a connector protein (e.g. a dimer of CTCF or YY1). One member of the dimer is anchored to its binding motif on the enhancer and the other member is anchored to its binding motif on the promoter (represented by the red zigzags in the illustration). Several cell function-specific transcription factors (among the about 1,600 transcription factors in a human cell) generally bind to specific motifs on an enhancer. A small combination of these enhancer-bound transcription factors, when brought close to a promoter by a DNA loop, governs the transcription level of the target gene. Mediator (a complex usually consisting of about 26 proteins in an interacting structure) communicates regulatory signals from enhancer DNA-bound transcription factors directly to the RNA polymerase II (Pol II) enzyme bound to the promoter.

Enhancers, when active, are generally transcribed from both strands of DNA with RNA polymerases acting in two different directions, producing two eRNAs as illustrated in the figure. An inactive enhancer may be bound by an inactive transcription factor. Phosphorylation of the transcription factor may activate it and that activated transcription factor may then activate the enhancer to which it is bound (see small red star representing phosphorylation of transcription factor bound to enhancer in the illustration). An activated enhancer begins transcription of its RNA before activating transcription of messenger RNA from its target gene.

DNA methylation and demethylation

DNA methylation is the addition of a methyl group to DNA, occurring at cytosine. The image shows a cytosine single-ring base with a methyl group added to the carbon 5 position. In mammals, DNA methylation occurs almost exclusively at a cytosine that is followed by a guanine.

DNA methylation is a widespread mechanism of epigenetic influence on gene expression; it is seen in both bacteria and eukaryotes and has roles in heritable transcription silencing and in transcription regulation. Methylation most often occurs on a cytosine (see Figure). Methylation of cytosine primarily occurs in dinucleotide sequences where a cytosine is followed by a guanine, a CpG site. The number of CpG sites in the human genome is about 28 million. Depending on the type of cell, about 70% of the CpG sites have a methylated cytosine.
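Locating CpG sites in a sequence is a simple scan. The sketch below, with an invented sequence and hypothetical methylation calls, computes the methylated fraction of CpG sites:

```python
# Back-of-the-envelope sketch: find CpG sites (a cytosine immediately
# followed by a guanine) and compute the fraction that is methylated,
# given a set of methylated positions. All data here are illustrative.
def cpg_sites(seq: str) -> list[int]:
    """Positions of cytosines that are immediately followed by guanine."""
    return [i for i in range(len(seq) - 1) if seq[i:i + 2] == "CG"]

seq = "ACGTTCGACGA"
sites = cpg_sites(seq)
print(sites)  # -> [1, 5, 8]

methylated = {1, 8}  # hypothetical methylation calls at CpG positions
fraction = len(methylated & set(sites)) / len(sites)
print(fraction)      # 2 of 3 sites methylated
```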

Methylation of cytosine in DNA has a major role in regulating gene expression. Methylation of CpGs in a promoter region of a gene usually represses gene transcription while methylation of CpGs in the body of a gene increases expression. TET enzymes play a central role in demethylation of methylated cytosines. Demethylation of CpGs in a gene promoter by TET enzyme activity increases transcription of the gene.

Post-transcriptional regulation

In eukaryotes, where export of RNA is required before translation is possible, nuclear export is thought to provide additional control over gene expression. All transport in and out of the nucleus is via the nuclear pore and transport is controlled by a wide range of importin and exportin proteins.

Expression of a gene coding for a protein is only possible if the messenger RNA carrying the code survives long enough to be translated. In a typical cell, an RNA molecule is only stable if specifically protected from degradation. RNA degradation has particular importance in regulation of expression in eukaryotic cells where mRNA has to travel significant distances before being translated. In eukaryotes, RNA is stabilised by certain post-transcriptional modifications, particularly the 5′ cap and polyadenylated tail.

Intentional degradation of mRNA is used not just as a defence mechanism from foreign RNA (normally from viruses) but also as a route of mRNA destabilisation. If an mRNA molecule has a complementary sequence to a small interfering RNA then it is targeted for destruction via the RNA interference pathway.

Three prime untranslated regions and microRNAs

Three prime untranslated regions (3′UTRs) of messenger RNAs (mRNAs) often contain regulatory sequences that post-transcriptionally influence gene expression. Such 3′-UTRs often contain both binding sites for microRNAs (miRNAs) as well as for regulatory proteins. By binding to specific sites within the 3′-UTR, miRNAs can decrease gene expression of various mRNAs by either inhibiting translation or directly causing degradation of the transcript. The 3′-UTR also may have silencer regions that bind repressor proteins that inhibit the expression of a mRNA.

The 3′-UTR often contains microRNA response elements (MREs), the sequences to which miRNAs bind. MREs are prevalent motifs within 3′-UTRs: among all regulatory motifs there (including, for example, silencer regions), MREs make up about half.

As of 2014, the miRBase web site, an archive of miRNA sequences and annotations, listed 28,645 entries across 233 biological species. Of these, 1,881 miRNAs were in annotated human miRNA loci. miRNAs were predicted to have an average of about four hundred target mRNAs (affecting the expression of several hundred genes). Friedman et al. estimate that >45,000 miRNA target sites within human mRNA 3′UTRs are conserved above background levels, and that >60% of human protein-coding genes have been under selective pressure to maintain pairing to miRNAs.

Direct experiments show that a single miRNA can reduce the stability of hundreds of unique mRNAs. Other experiments show that a single miRNA may repress the production of hundreds of proteins, but that this repression often is relatively mild (less than 2-fold).

The effects of miRNA dysregulation of gene expression seem to be important in cancer. For instance, in gastrointestinal cancers, nine miRNAs have been identified as epigenetically altered and effective in down-regulating DNA repair enzymes.

The effects of miRNA dysregulation of gene expression also seem to be important in neuropsychiatric disorders, such as schizophrenia, bipolar disorder, major depression, Parkinson's disease, Alzheimer's disease and autism spectrum disorders.

Translational

A chemical structure of neomycin molecule.
Neomycin is an example of a small molecule that reduces expression of all protein-coding genes, inevitably leading to cell death; it thus acts as an antibiotic.

Direct regulation of translation is less prevalent than control of transcription or mRNA stability but is occasionally used. Inhibition of protein translation is a major target for toxins and antibiotics, so they can kill a cell by overriding its normal gene expression control. Protein synthesis inhibitors include the antibiotic neomycin and the toxin ricin.

Post-translational modifications

Post-translational modifications (PTMs) are covalent modifications to proteins. Like RNA splicing, they help to significantly diversify the proteome. These modifications are usually catalyzed by enzymes. Additionally, processes like covalent additions to amino acid side chain residues can often be reversed by other enzymes. However, some, like the proteolytic cleavage of the protein backbone, are irreversible.

PTMs play many important roles in the cell. For example, phosphorylation is primarily involved in activating and deactivating proteins and in signaling pathways. PTMs are involved in transcriptional regulation: an important function of acetylation and methylation is histone tail modification, which alters how accessible DNA is for transcription. They can also be seen in the immune system, where glycosylation plays a key role. One type of PTM can initiate another type of PTM, as can be seen in how ubiquitination tags proteins for degradation through proteolysis. Proteolysis, other than being involved in breaking down proteins, is also important in activating and deactivating them, and in regulating biological processes such as DNA transcription and cell death.

Measurement

Schematic karyogram of a human, showing an overview of the expression of the human genome using G banding, which is a method that includes Giemsa staining, wherein the lighter staining regions are generally more transcriptionally active, whereas darker regions are more inactive.

Measuring gene expression is an important part of many life sciences, as the ability to quantify the level at which a particular gene is expressed within a cell, tissue or organism can provide a great deal of valuable information.

Similarly, the analysis of the location of protein expression is a powerful tool, and this can be done on an organismal or cellular scale. Investigation of localization is particularly important for the study of development in multicellular organisms and as an indicator of protein function in single cells. Ideally, measurement of expression is done by detecting the final gene product (for many genes, this is the protein); however, it is often easier to detect one of the precursors, typically mRNA and to infer gene-expression levels from these measurements.

mRNA quantification

Levels of mRNA can be quantitatively measured by northern blotting, which provides size and sequence information about the mRNA molecules. A sample of RNA is separated on an agarose gel and hybridized to a radioactively labeled RNA probe that is complementary to the target sequence. The radiolabeled RNA is then detected by an autoradiograph. Because the use of radioactive reagents makes the procedure time-consuming and potentially dangerous, alternative labeling and detection methods, such as digoxigenin and biotin chemistries, have been developed. Perceived disadvantages of northern blotting are that large quantities of RNA are required and that quantification may not be completely accurate, as it involves measuring band strength in an image of a gel. On the other hand, the additional mRNA size information from the northern blot allows the discrimination of alternatively spliced transcripts.

Another approach for measuring mRNA abundance is RT-qPCR. In this technique, reverse transcription is followed by quantitative PCR. Reverse transcription first generates a DNA template from the mRNA; this single-stranded template is called cDNA. The cDNA template is then amplified in the quantitative step, during which the fluorescence emitted by labeled hybridization probes or intercalating dyes changes as the DNA amplification process progresses. With a carefully constructed standard curve, qPCR can produce an absolute measurement of the number of copies of original mRNA, typically in units of copies per nanolitre of homogenized tissue or copies per cell. qPCR is very sensitive (detection of a single mRNA molecule is theoretically possible), but can be expensive depending on the type of reporter used; fluorescently labeled oligonucleotide probes are more expensive than non-specific intercalating fluorescent dyes.
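The standard-curve arithmetic can be sketched as follows, assuming Cq varies linearly with log10 of the input copy number (all values here are illustrative; a slope near -3.32 corresponds to roughly 100% amplification efficiency):

```python
# Sketch of absolute quantification from a qPCR standard curve: Cq values
# from known dilutions are fit to Cq = slope*log10(copies) + intercept,
# then an unknown sample's Cq is inverted to a copy number.
import math

def fit_standard_curve(standards):
    """Least-squares fit of Cq against log10(copy number)."""
    xs = [math.log10(copies) for copies, _ in standards]
    ys = [cq for _, cq in standards]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def copies_from_cq(cq, slope, intercept):
    """Invert the standard curve for an unknown sample."""
    return 10 ** ((cq - intercept) / slope)

# A perfect 10-fold dilution series of known copy numbers (made-up data).
standards = [(1e6, 15.0), (1e5, 18.32), (1e4, 21.64), (1e3, 24.96)]
slope, intercept = fit_standard_curve(standards)
print(round(copies_from_cq(20.0, slope, intercept)))  # copies in the unknown
```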

For expression profiling, or high-throughput analysis of many genes within a sample, quantitative PCR may be performed for hundreds of genes simultaneously in the case of low-density arrays. A second approach is the hybridization microarray. A single array or "chip" may contain probes to determine transcript levels for every known gene in the genome of one or more organisms. Alternatively, "tag-based" technologies like serial analysis of gene expression (SAGE) and RNA-Seq, which can provide a relative measure of the cellular concentration of different mRNAs, can be used. An advantage of tag-based methods is the "open architecture", allowing for the exact measurement of any transcript, with a known or unknown sequence. Next-generation sequencing (NGS) such as RNA-Seq is another approach, producing vast quantities of sequence data that can be matched to a reference genome. Although NGS is comparatively time-consuming, expensive, and resource-intensive, it can identify single-nucleotide polymorphisms, splice-variants, and novel genes, and can also be used to profile expression in organisms for which little or no sequence information is available.
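A common normalization applied to RNA-Seq counts is transcripts per million (TPM), which divides counts by transcript length and rescales so the values sum to one million. A minimal sketch with made-up gene names and numbers:

```python
# Sketch of a common RNA-Seq summary statistic: transcripts per million (TPM).
# Counts are divided by transcript length in kilobases, then scaled so the
# resulting values sum to one million. Gene names and numbers are invented.
def tpm(counts: dict[str, int], lengths_bp: dict[str, int]) -> dict[str, float]:
    rate = {g: counts[g] / (lengths_bp[g] / 1000) for g in counts}
    scale = 1e6 / sum(rate.values())
    return {g: r * scale for g, r in rate.items()}

counts = {"geneA": 300, "geneB": 100}
lengths = {"geneA": 3000, "geneB": 1000}
print(tpm(counts, lengths))  # both length-normalized rates are equal,
                             # so each gene gets 500000.0 TPM
```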

Protein quantification

For genes encoding proteins, the expression level can be directly assessed by a number of methods with some clear analogies to the techniques for mRNA quantification.

One of the most commonly used methods is to perform a Western blot against the protein of interest. This gives information on the size of the protein in addition to its identity. A sample (often cellular lysate) is separated on a polyacrylamide gel, transferred to a membrane and then probed with an antibody to the protein of interest. The antibody can either be conjugated to a fluorophore or to horseradish peroxidase for imaging and/or quantification. The gel-based nature of this assay makes quantification less accurate, but it has the advantage of being able to identify later modifications to the protein, for example proteolysis or ubiquitination, from changes in size.

mRNA-protein correlation

While transcription directly reflects gene expression, the copy number of mRNA molecules does not directly correlate with the number of protein molecules translated from mRNA. Quantification of both protein and mRNA permits a correlation of the two levels. Regulation on each step of gene expression can impact the correlation, as shown for regulation of translation or protein stability. Post-translational factors, such as protein transport in highly polar cells, can influence the measured mRNA-protein correlation as well.
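Such a correlation is typically summarized with a coefficient such as Pearson's r over matched mRNA and protein measurements; a small sketch with invented values:

```python
# Sketch: quantify mRNA-protein agreement with a Pearson correlation over
# matched measurements. The measurement values below are invented.
import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

mrna    = [1.0, 2.0, 3.0, 4.0]  # e.g. log-scale mRNA abundance
protein = [1.1, 1.9, 3.2, 3.8]  # matched protein abundance
print(round(pearson(mrna, protein), 3))
```

A low coefficient on real data would reflect the translational and post-translational regulation discussed above rather than measurement error alone.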

Localization

Visualization of hunchback mRNA in Drosophila embryo.
In situ hybridization of Drosophila embryos at different developmental stages for the mRNA responsible for the expression of hunchback. High intensity of blue color marks places with high amounts of hunchback mRNA.

Analysis of expression is not limited to quantification; localization can also be determined. mRNA can be detected with a suitably labelled complementary mRNA strand and protein can be detected via labelled antibodies. The probed sample is then observed by microscopy to identify where the mRNA or protein is.

A ribbon diagram of green fluorescent protein resembling barrel structure.
The three-dimensional structure of green fluorescent protein. The residues in the centre of the "barrel" are responsible for the production of green light after exposure to higher-energy blue light. From PDB: 1EMA.

By replacing a gene with a new version fused to a green fluorescent protein (GFP) marker or similar tag, expression may be directly quantified in live cells by imaging with a fluorescence microscope. It is very difficult to clone a GFP-fused protein into its native location in the genome without affecting expression levels, so this method often cannot be used to measure endogenous gene expression. It is, however, widely used to measure the expression of a gene artificially introduced into the cell, for example via an expression vector. Note, though, that fusing a target protein to a fluorescent reporter can itself significantly change the protein's behavior, including its cellular localization and expression level.

The enzyme-linked immunosorbent assay works by using antibodies immobilised on a microtiter plate to capture proteins of interest from samples added to the well. Using a detection antibody conjugated to an enzyme or fluorophore the quantity of bound protein can be accurately measured by fluorometric or colourimetric detection. The detection process is very similar to that of a Western blot, but by avoiding the gel steps more accurate quantification can be achieved.

Theorem

The Pythagorean theorem has at least 370 known proofs.

In mathematics and formal logic, a theorem is a statement that has been proven, or can be proven. The proof of a theorem is a logical argument that uses the inference rules of a deductive system to establish that the theorem is a logical consequence of the axioms and previously proved theorems.

In mainstream mathematics, the axioms and the inference rules are commonly left implicit, and, in this case, they are almost always those of Zermelo–Fraenkel set theory with the axiom of choice (ZFC), or of a less powerful theory, such as Peano arithmetic. Generally, an assertion that is explicitly called a theorem is a proved result that is not an immediate consequence of other known theorems. Moreover, many authors qualify as theorems only the most important results, and use the terms lemma, proposition and corollary for less important theorems.

In mathematical logic, the concepts of theorems and proofs have been formalized in order to allow mathematical reasoning about them. In this context, statements become well-formed formulas of some formal language. A theory consists of some basis statements called axioms, and some deducing rules (sometimes included in the axioms). The theorems of the theory are the statements that can be derived from the axioms by using the deducing rules. This formalization led to proof theory, which allows proving general theorems about theorems and proofs. In particular, Gödel's incompleteness theorems show that every consistent theory containing the natural numbers has true statements on natural numbers that are not theorems of the theory (that is they cannot be proved inside the theory).

As the axioms are often abstractions of properties of the physical world, theorems may be considered as expressing some truth, but in contrast to the notion of a scientific law, which is experimental, the justification of the truth of a theorem is purely deductive. A conjecture is a tentative proposition that may evolve to become a theorem if proven true.

Theoremhood and truth

Until the end of the 19th century and the foundational crisis of mathematics, all mathematical theories were built from a few basic properties that were considered as self-evident; for example, the facts that every natural number has a successor, and that there is exactly one line that passes through two given distinct points. These basic properties that were considered as absolutely evident were called postulates or axioms; for example Euclid's postulates. All theorems were proved by using implicitly or explicitly these basic properties, and, because of the evidence of these basic properties, a proved theorem was considered as a definitive truth, unless there was an error in the proof. For example, the sum of the interior angles of a triangle equals 180°, and this was considered as an unquestionable fact.

One aspect of the foundational crisis of mathematics was the discovery of non-Euclidean geometries that do not lead to any contradiction, although, in such geometries, the sum of the angles of a triangle is different from 180°. So, the property "the sum of the angles of a triangle equals 180°" is either true or false, depending on whether Euclid's fifth postulate is assumed or denied. Similarly, the use of "evident" basic properties of sets leads to the contradiction of Russell's paradox. This has been resolved by elaborating the rules that are allowed for manipulating sets.

This crisis has been resolved by revisiting the foundations of mathematics to make them more rigorous. In these new foundations, a theorem is a well-formed formula of a mathematical theory that can be proved from the axioms and inference rules of the theory. So, the above theorem on the sum of the angles of a triangle becomes: Under the axioms and inference rules of Euclidean geometry, the sum of the interior angles of a triangle equals 180°. Similarly, Russell's paradox disappears because, in an axiomatized set theory, the set of all sets cannot be expressed with a well-formed formula. More precisely, if the set of all sets can be expressed with a well-formed formula, this implies that the theory is inconsistent, and every well-formed assertion, as well as its negation, is a theorem.

In this context, the validity of a theorem depends only on the correctness of its proof. It is independent of the truth, or even the significance, of the axioms. This does not mean that the significance of the axioms is uninteresting, but only that the validity of a theorem is independent of the significance of the axioms. This independence may be useful by allowing the use of results of some area of mathematics in apparently unrelated areas.

An important consequence of this way of thinking about mathematics is that it allows defining mathematical theories and theorems as mathematical objects, and proving theorems about them. Examples are Gödel's incompleteness theorems. In particular, there are well-formed assertions that can be proved not to be theorems of the ambient theory, although they can be proved in a wider theory. An example is Goodstein's theorem, which can be stated in Peano arithmetic but is not provable there; it is provable in some more general theories, such as Zermelo–Fraenkel set theory.

Epistemological considerations

Many mathematical theorems are conditional statements, whose proofs deduce conclusions from conditions known as hypotheses or premises. In light of the interpretation of proof as justification of truth, the conclusion is often viewed as a necessary consequence of the hypotheses. Namely, that the conclusion is true in case the hypotheses are true—without any further assumptions. However, the conditional could also be interpreted differently in certain deductive systems, depending on the meanings assigned to the derivation rules and the conditional symbol (e.g., non-classical logic).

Although theorems can be written in a completely symbolic form (e.g., as propositions in propositional calculus), they are often expressed informally in a natural language such as English for better readability. The same is true of proofs, which are often expressed as logically organized and clearly worded informal arguments, intended to convince readers of the truth of the statement of the theorem beyond any doubt, and from which a formal symbolic proof can in principle be constructed.

In addition to the better readability, informal arguments are typically easier to check than purely symbolic ones—indeed, many mathematicians would express a preference for a proof that not only demonstrates the validity of a theorem, but also explains in some way why it is obviously true. In some cases, one might even be able to substantiate a theorem by using a picture as its proof.

Because theorems lie at the core of mathematics, they are also central to its aesthetics. Theorems are often described as being "trivial", or "difficult", or "deep", or even "beautiful". These subjective judgments vary not only from person to person, but also with time and culture: for example, as a proof is obtained, simplified or better understood, a theorem that was once difficult may become trivial. On the other hand, a deep theorem may be stated simply, but its proof may involve surprising and subtle connections between disparate areas of mathematics. Fermat's Last Theorem is a particularly well-known example of such a theorem.

Informal account of theorems

Logically, many theorems are of the form of an indicative conditional: If A, then B. Such a theorem does not assert B — only that B is a necessary consequence of A. In this case, A is called the hypothesis of the theorem ("hypothesis" here means something very different from a conjecture), and B the conclusion of the theorem. The two together (without the proof) are called the proposition or statement of the theorem (e.g. "If A, then B" is the proposition). Alternatively, A and B can also be termed the antecedent and the consequent, respectively. The theorem "If n is an even natural number, then n/2 is a natural number" is a typical example in which the hypothesis is "n is an even natural number", and the conclusion is "n/2 is also a natural number".
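The hypothesis/conclusion structure of the even-number example can be encoded directly. The sketch below (function names are ours) represents A and B as predicates and searches a finite range for a counterexample to "if A, then B"; this is a sanity check of the logical form, not a proof:

```python
def hypothesis(n):
    # A: n is an even natural number
    return isinstance(n, int) and n >= 0 and n % 2 == 0

def conclusion(n):
    # B: n/2 is a natural number (integer division loses nothing)
    return (n // 2) * 2 == n

# The conditional "if A then B" is vacuously true whenever A fails,
# so only values satisfying the hypothesis can be counterexamples.
counterexamples = [n for n in range(1000) if hypothesis(n) and not conclusion(n)]
```

An empty list of counterexamples over 0..999 is consistent with the theorem but does not establish it; the actual proof covers all natural numbers at once.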

In order for a theorem to be proved, it must be in principle expressible as a precise, formal statement. However, theorems are usually expressed in natural language rather than in a completely symbolic form—with the presumption that a formal statement can be derived from the informal one.

It is common in mathematics to choose a number of hypotheses within a given language and declare that the theory consists of all statements provable from these hypotheses. These hypotheses form the foundational basis of the theory and are called axioms or postulates. The field of mathematics known as proof theory studies formal languages, axioms and the structure of proofs.

A planar map with five colors such that no two regions with the same color meet. It can actually be colored in this way with only four colors. The four color theorem states that such colorings are possible for any planar map, but every known proof involves a computational search that is too long to check by hand.

Some theorems are "trivial", in the sense that they follow from definitions, axioms, and other theorems in obvious ways and do not contain any surprising insights. Some, on the other hand, may be called "deep", because their proofs may be long and difficult, involve areas of mathematics superficially distinct from the statement of the theorem itself, or show surprising connections between disparate areas of mathematics. A theorem might be simple to state and yet be deep. An excellent example is Fermat's Last Theorem, and there are many other examples of simple yet deep theorems in number theory and combinatorics, among other areas.

Other theorems have a known proof that cannot easily be written down. The most prominent examples are the four color theorem and the Kepler conjecture. Both of these theorems are only known to be true by reducing them to a computational search that is then verified by a computer program. Initially, many mathematicians did not accept this form of proof, but it has become more widely accepted. The mathematician Doron Zeilberger has even gone so far as to claim that these are possibly the only nontrivial results that mathematicians have ever proved. Many mathematical theorems can be reduced to more straightforward computation, including polynomial identities, trigonometric identities and hypergeometric identities.
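One way such a reduction to computation can be fully rigorous: two polynomials of degree at most d agree everywhere if and only if they agree at d + 1 distinct points, so a finite evaluation genuinely settles a polynomial identity. A minimal sketch (our own toy example, not one from the source):

```python
# Claim: (x + 1)^2 = x^2 + 2x + 1 as a polynomial identity.
def lhs(x):
    return (x + 1) ** 2

def rhs(x):
    return x ** 2 + 2 * x + 1

# Both sides have degree <= 2, so agreement at 3 distinct points
# proves they are the same polynomial (difference has > 2 roots
# only if it is identically zero).
proved = all(lhs(x) == rhs(x) for x in (0, 1, 2))
```

The same point-count argument underlies many computer-assisted verifications of polynomial and hypergeometric identities.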

Relation with scientific theories

Theorems in mathematics and theories in science are fundamentally different in their epistemology. A scientific theory cannot be proved; its key attribute is that it is falsifiable, that is, it makes predictions about the natural world that are testable by experiments. Any disagreement between prediction and experiment demonstrates the incorrectness of the scientific theory, or at least limits its accuracy or domain of validity. Mathematical theorems, on the other hand, are purely abstract formal statements: the proof of a theorem cannot involve experiments or other empirical evidence in the same way such evidence is used to support scientific theories.

The Collatz conjecture: one way to illustrate its complexity is to extend the iteration from the natural numbers to the complex numbers. The result is a fractal, which (in accordance with universality) resembles the Mandelbrot set.

Nonetheless, there is some degree of empiricism and data collection involved in the discovery of mathematical theorems. By establishing a pattern, sometimes with the use of a powerful computer, mathematicians may have an idea of what to prove, and in some cases even a plan for how to set about doing the proof. It is also possible to find a single counter-example and so establish the impossibility of a proof for the proposition as-stated, and possibly suggest restricted forms of the original proposition that might have feasible proofs.

For example, both the Collatz conjecture and the Riemann hypothesis are well-known unsolved problems; they have been extensively studied through empirical checks, but remain unproven. The Collatz conjecture has been verified for start values up to about 2.88 × 10¹⁸. The Riemann hypothesis has been verified to hold for the first 10 trillion non-trivial zeroes of the zeta function. Although most mathematicians can tolerate supposing that the conjecture and the hypothesis are true, neither of these propositions is considered proved.
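At toy scale (nowhere near the record computations above), the Collatz check is a few lines; the function name is ours:

```python
def collatz_reaches_one(n, max_steps=10_000):
    """Iterate the Collatz map from n; True if the orbit hits 1."""
    for _ in range(max_steps):
        if n == 1:
            return True
        n = 3 * n + 1 if n % 2 else n // 2
    return False  # step budget exhausted (never happens below)

# Empirical check for every start value below 10^4.
verified = all(collatz_reaches_one(n) for n in range(1, 10_000))
```

Even a run over all values below 10¹⁸ would remain empirical evidence, not a proof, which is exactly the distinction the text draws.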

Such evidence does not constitute proof. For example, the Mertens conjecture is a statement about natural numbers that is now known to be false, but no explicit counterexample (i.e., a natural number n for which the Mertens function M(n) equals or exceeds the square root of n) is known: all numbers less than 10¹⁴ have the Mertens property, and the smallest number that does not have this property is only known to be less than exp(1.59 × 10⁴⁰), which is approximately 10 to the power 4.3 × 10³⁹. Since the number of particles in the universe is generally considered to be less than 10¹⁰⁰ (a googol), there is no hope of finding an explicit counterexample by exhaustive search.
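The Mertens property is easy to confirm at small scale. The sketch below (illustrative; names are ours) sieves the Möbius function, accumulates M(n), and checks |M(n)| < √n for every n from 2 up to 10,000:

```python
def mobius_sieve(limit):
    """Möbius function mu(1..limit) via a linear sieve."""
    mu = [1] * (limit + 1)
    is_comp = [False] * (limit + 1)
    primes = []
    for i in range(2, limit + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > limit:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0      # p^2 divides i*p
                break
            mu[i * p] = -mu[i]     # one more distinct prime factor
    return mu

LIMIT = 10_000
mu = mobius_sieve(LIMIT)

# Mertens function M(n) = sum of mu(k) for k <= n, as running totals.
M, acc = [0] * (LIMIT + 1), 0
for k in range(1, LIMIT + 1):
    acc += mu[k]
    M[k] = acc

# |M(n)| < sqrt(n) is equivalent to M(n)^2 < n (both sides nonnegative).
holds = all(M[n] * M[n] < n for n in range(2, LIMIT + 1))
```

That the property holds up to 10,000 (and, per the text, up to 10¹⁴) while the conjecture is provably false is the cautionary point: finite evidence can be overwhelming and still wrong.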

The word "theory" also exists in mathematics, to denote a body of mathematical axioms, definitions and theorems, as in, for example, group theory (see mathematical theory). There are also "theorems" in science, particularly physics, and in engineering, but they often have statements and proofs in which physical assumptions and intuition play an important role; the physical axioms on which such "theorems" are based are themselves falsifiable.

Terminology

A number of different terms for mathematical statements exist; these terms indicate the role statements play in a particular subject. The distinction between different terms is sometimes rather arbitrary, and the usage of some terms has evolved over time.

  • An axiom or postulate is a fundamental assumption regarding the object of study, that is accepted without proof. A related concept is that of a definition, which gives the meaning of a word or a phrase in terms of known concepts. Classical geometry distinguishes between axioms, which are general statements; and postulates, which are statements about geometrical objects. Historically, axioms were regarded as "self-evident"; today they are merely assumed to be true.
  • A conjecture is an unproved statement that is believed to be true. Conjectures are usually made in public, and named after their maker (for example, Goldbach's conjecture and Collatz conjecture). The term hypothesis is also used in this sense (for example, Riemann hypothesis), which should not be confused with "hypothesis" as the premise of a proof. Other terms are also used on occasion, for example problem when people are not sure whether the statement should be believed to be true.
    • Sometimes the name of a problem in common use does not match what would be technically most correct. Fermat's Last Theorem was historically called a theorem, although, for centuries, it was only a conjecture. Conversely, the Poincaré conjecture is still generally referred to as a conjecture, despite having been proven in 2002–2003.
  • A theorem is a statement that has been proven to be true based on axioms and other theorems.
  • A proposition is a theorem of lesser importance, or one that is considered so elementary or immediately obvious, that it may be stated without proof. This should not be confused with "proposition" as used in propositional logic. In classical geometry the term "proposition" was used differently: in Euclid's Elements (c. 300 BCE), all theorems and geometric constructions were called "propositions" regardless of their importance.
  • A lemma is an "accessory proposition" - a proposition with little applicability outside its use in a particular proof. Over time a lemma may gain in importance and be considered a theorem, though the term "lemma" is usually kept as part of its name (e.g. Gauss's lemma, Zorn's lemma, and the fundamental lemma).
  • A corollary is a proposition that follows immediately from another theorem or axiom, with little or no required proof. A corollary may also be a restatement of a theorem in a simpler form, or for a special case: for example, the theorem "all internal angles in a rectangle are right angles" has a corollary that "all internal angles in a square are right angles" - a square being a special case of a rectangle.
  • A generalization of a theorem is a theorem with a similar statement but a broader scope, from which the original theorem can be deduced as a special case (a corollary).

Other terms may also be used for historical or customary reasons. A few well-known theorems have even more idiosyncratic names, for example the division algorithm, Euler's formula, and the Banach–Tarski paradox.

Layout

A theorem and its proof are typically laid out as follows:

Theorem (name of the person who proved it, along with year of discovery or publication of the proof)
Statement of theorem (sometimes called the proposition)
Proof
Description of proof
End

The end of the proof may be signaled by the letters Q.E.D. (quod erat demonstrandum) or by one of the tombstone marks, such as "□" or "∎", meaning "end of proof", introduced by Paul Halmos following their use in magazines to mark the end of an article.

The exact style depends on the author or publication. Many publications provide instructions or macros for typesetting in the house style.
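In LaTeX, this layout is commonly produced with the amsthm package; a minimal sketch (the theorem text is illustrative):

```latex
\documentclass{article}
\usepackage{amsthm}
\newtheorem{theorem}{Theorem}

\begin{document}
\begin{theorem}[Pythagoras]
In a right triangle with legs $a$, $b$ and hypotenuse $c$,
$a^2 + b^2 = c^2$.
\end{theorem}
\begin{proof}
% ... body of the proof ...
\end{proof} % amsthm appends the tombstone (\qedsymbol) automatically
\end{document}
```

The optional argument to the theorem environment carries the attribution (name, year) described above, and journals typically override these defaults with house-style macros.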

It is common for a theorem to be preceded by definitions describing the exact meaning of the terms used in the theorem. It is also common for a theorem to be preceded by a number of propositions or lemmas which are then used in the proof. However, lemmas are sometimes embedded in the proof of a theorem, either with nested proofs, or with their proofs presented after the proof of the theorem.

Corollaries to a theorem are either presented between the theorem and the proof, or directly after the proof. Sometimes, corollaries have proofs of their own that explain why they follow from the theorem.

Lore

It has been estimated that over a quarter of a million theorems are proved every year.

The well-known aphorism, "A mathematician is a device for turning coffee into theorems", is probably due to Alfréd Rényi, although it is often attributed to Rényi's colleague Paul Erdős (and Rényi may have been thinking of Erdős), who was famous for the many theorems he produced, the number of his collaborations, and his coffee drinking.

The classification of finite simple groups is regarded by some to be the longest proof of a theorem. It comprises tens of thousands of pages in 500 journal articles by some 100 authors. These papers are together believed to give a complete proof, and several ongoing projects hope to shorten and simplify this proof. Another theorem of this type is the four color theorem, whose computer-generated proof is too long for a human to read.

Theorems in logic

In mathematical logic, a formal theory is a set of sentences within a formal language. A sentence is a well-formed formula with no free variables. A sentence that is a member of a theory is one of its theorems, and the theory is the set of its theorems. Usually a theory is understood to be closed under the relation of logical consequence. Some accounts define a theory to be closed under the semantic consequence relation (⊨), while others define it to be closed under the syntactic consequence, or derivability, relation (⊢).

This diagram shows the syntactic entities that can be constructed from formal languages. The symbols and strings of symbols may be broadly divided into nonsense and well-formed formulas. A formal language can be thought of as identical to the set of its well-formed formulas. The set of well-formed formulas may be broadly divided into theorems and non-theorems.

For a theory to be closed under a derivability relation, it must be associated with a deductive system that specifies how the theorems are derived. The deductive system may be stated explicitly, or it may be clear from the context. The closure of the empty set under the relation of logical consequence yields the set that contains just those sentences that are the theorems of the deductive system.
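A toy version of "the theorems are the closure of the axioms under the deducing rules" can be computed directly. The sketch below (all representations are our own choices) encodes atoms as strings, an implication A → B as the tuple ("->", A, B), and closes a small axiom set under modus ponens as the only rule:

```python
def closure(axioms):
    """Smallest set containing the axioms, closed under modus ponens."""
    theorems = set(axioms)
    changed = True
    while changed:
        changed = False
        for f in list(theorems):
            # modus ponens: from A and ("->", A, B), derive B
            if isinstance(f, tuple) and f[0] == "->" and f[1] in theorems:
                if f[2] not in theorems:
                    theorems.add(f[2])
                    changed = True
    return theorems

axioms = {"p", ("->", "p", "q"), ("->", "q", "r")}
theory = closure(axioms)  # contains "q" and "r" as derived theorems
```

Real deductive systems have axiom schemes and richer rules, but the fixed-point computation here is the same idea as the closure described in the text.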

In the broad sense in which the term is used within logic, a theorem does not have to be true, since the theory that contains it may be unsound relative to a given semantics, or relative to the standard interpretation of the underlying language. A theory that is inconsistent has all sentences as theorems.

The definition of theorems as sentences of a formal language is useful within proof theory, which is a branch of mathematics that studies the structure of formal proofs and the structure of provable formulas. It is also important in model theory, which is concerned with the relationship between formal theories and structures that are able to provide a semantics for them through interpretation.

Although theorems may be uninterpreted sentences, in practice mathematicians are more interested in the meanings of the sentences, i.e. in the propositions they express. What makes formal theorems useful and interesting is that they may be interpreted as true propositions and their derivations may be interpreted as a proof of their truth. A theorem whose interpretation is a true statement about a formal system (as opposed to within a formal system) is called a metatheorem.

Syntax and semantics

The concept of a formal theorem is fundamentally syntactic, in contrast to the notion of a true proposition, which introduces semantics. Different deductive systems can yield other interpretations, depending on the presumptions of the derivation rules (i.e. belief, justification or other modalities). The soundness of a formal system depends on whether or not all of its theorems are also validities. A validity is a formula that is true under any possible interpretation (for example, in classical propositional logic, validities are tautologies). A formal system is considered semantically complete when all of its theorems are also tautologies.
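In classical propositional logic, validity can be checked mechanically by enumerating truth assignments; a small sketch (function names are ours):

```python
from itertools import product

def is_tautology(formula, num_vars):
    """True iff formula evaluates to True under every assignment."""
    return all(formula(*values)
               for values in product([False, True], repeat=num_vars))

# (p -> q) is classically equivalent to (not p or q); their
# biconditional holds under all four assignments.
taut = is_tautology(lambda p, q: (q if p else True) == ((not p) or q), 2)

# p or q fails when both are False, so it is not a validity.
not_taut = is_tautology(lambda p, q: p or q, 2)
```

This brute-force check is exactly what makes soundness and completeness decidable questions for propositional logic, in contrast to richer logics.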


Technogaianism

Technogaianism (a portmanteau word combining "techno-" for technology and "gaian" for Gaia philosophy) is a bright green environmentalist stance of active support for the research, development and use of emerging and future technologies to help restore Earth's environment. Technogaianists argue that developing safe, clean, alternative technology should be an important goal of environmentalists and environmentalism.

Philosophy

Typically, radical environmentalists hold the view that all technology necessarily degrades the environment, and that environmental restoration can therefore occur only with reduced reliance on technology. By contrast, technogaianists argue that technology becomes cleaner and more efficient with time, and need not progress to the detriment of the environment. One example used is hydrogen fuel cells. More directly, they argue that such things as nanotechnology and biotechnology can directly reverse environmental degradation. Molecular nanotechnology, for example, could convert garbage in landfills into useful materials and products, while biotechnology could lead to novel microbes that devour hazardous waste.

While many environmentalists still contend that most technology is detrimental to the environment, technogaianists argue that it has been in humanity's best interest to exploit the environment mercilessly until fairly recently, consistent with current understandings of evolutionary systems: when new factors (such as foreign species or mutant subspecies) are introduced into an ecosystem, they tend to maximize their own resource consumption until either (a) they reach an equilibrium beyond which they cannot continue unmitigated growth, or (b) they become extinct. In these models, it is impossible for such a factor to totally destroy its host environment, though it may precipitate major ecological transformation before its ultimate eradication.

Technogaianists believe humanity has currently reached just such a threshold, and that the only way for human civilization to continue advancing is to accept the tenets of technogaianism, limiting future exploitative exhaustion of natural resources and minimizing further unsustainable development, or else face the widespread, ongoing mass extinction of species. The destructive effects of modern civilization are to be mitigated by technological solutions, such as using nuclear power. Furthermore, technogaianists argue that only science and technology can help humanity be aware of, and possibly develop counter-measures for, risks to civilization, humans and planet Earth such as a possible impact event.

Sociologist James Hughes mentions Walter Truett Anderson, author of To Govern Evolution: Further Adventures of the Political Animal, as an example of a technogaian political philosopher; he argues that technogaianism applied to environmental management is found in reconciliation ecology writings such as Michael Rosenzweig's Win-Win Ecology: How The Earth's Species Can Survive In The Midst of Human Enterprise; and he considers Bruce Sterling's Viridian design movement to be an exemplary technogaian initiative.

The theories of English writer Fraser Clark may be broadly categorized as technogaian.  Clark advocated "balancing the hippie right brain with the techno left brain". The idea of combining technology and ecology was extrapolated at length by a South African eco-anarchist project in the 1990s. The Kagenna Magazine project aimed to combine technology, art, and ecology in an emerging movement that could restore the balance between humans and nature.

George Dvorsky suggests the sentiment of technogaianism is to heal the Earth, use sustainable technology, and create ecologically diverse environments. Dvorsky argues that defensive counter-measures could be designed to counter the harmful effects of asteroid impacts, earthquakes, and volcanic eruptions. Dvorsky also suggests that genetic engineering could be used to reduce the environmental impact humans have on the Earth.

Methods

Environmental monitoring

The Delta II rocket with climate research satellites, CloudSat and CALIPSO, on Launch Pad SLC-2W, VAFB

Technology facilitates the sampling, testing, and monitoring of various environments and ecosystems. NASA uses space-based observations to conduct research on solar activity, sea level rise, the temperature of the atmosphere and the oceans, the state of the ozone layer, air pollution, and changes in sea ice and land ice.

Geoengineering

Climate engineering is a technogaian method that uses two categories of technologies: carbon dioxide removal and solar radiation management. Carbon dioxide removal addresses a cause of climate change by removing one of the greenhouse gases from the atmosphere. Solar radiation management attempts to offset the effects of greenhouse gases by causing the Earth to absorb less solar radiation.

Earthquake engineering is a technogaian method concerned with protecting society and the natural and man-made environment from earthquakes by limiting the seismic risk to acceptable levels. Another example of a technogaian practice is an artificial closed ecological system used to test if and how people could live and work in a closed biosphere, while carrying out scientific experiments. It is in some cases used to explore the possible use of closed biospheres in space colonization, and also allows the study and manipulation of a biosphere without harming Earth's. The most advanced technogaian proposal is the "terraforming" of a planet, moon, or other body by deliberately modifying its atmosphere, temperature, or ecology to be similar to those of Earth in order to make it habitable by humans.

Genetic engineering

S. Matthew Liao, professor of philosophy and bioethics at New York University, claims that the human impact on the environment could be reduced by genetically engineering humans to have a smaller stature, an intolerance to eating meat, and an increased ability to see in the dark, thereby requiring less lighting. Liao argues that human engineering is less risky than geoengineering.

Genetically modified foods have reduced the amount of herbicide and insecticide needed for cultivation. The development of glyphosate-resistant (Roundup Ready) plants has changed the herbicide use profile away from more environmentally persistent herbicides with higher toxicity, such as atrazine, metribuzin and alachlor, and reduced the volume and danger of herbicide runoff.

An environmental benefit of Bt-cotton and maize is reduced use of chemical insecticides. A PG Economics study concluded that global pesticide use was reduced by 286,000 tons in 2006, decreasing the environmental impact of herbicides and pesticides by 15%. A survey of small Indian farms between 2002 and 2008 concluded that Bt cotton adoption had led to higher yields and lower pesticide use. Another study concluded insecticide use on cotton and corn during the years 1996 to 2005 fell by 35,600,000 kilograms (78,500,000 lb) of active ingredient, which is roughly equal to the annual amount applied in the EU. A Bt cotton study in six northern Chinese provinces from 1990 to 2010 concluded that it halved the use of pesticides and doubled the level of ladybirds, lacewings and spiders and extended environmental benefits to neighbouring crops of maize, peanuts and soybeans.

Bright green environmentalism

Bright green environmentalism is an environmental philosophy and movement that emphasizes the use of advanced technology, social innovation, eco-innovation, and sustainable design to address environmental challenges. This approach contrasts with more traditional forms of environmentalism that may advocate for reduced consumption or a return to simpler lifestyles.

Light green and dark green environmentalism are other sub-movements, distinguished respectively by seeing environmentalism as a lifestyle choice (light greens) and by promoting a reduction in human numbers and/or a relinquishment of technology (dark greens).

Origin and evolution of bright green thinking

The term bright green, coined in 2003 by writer Alex Steffen, refers to the fast-growing new wing of environmentalism, distinct from traditional forms. Bright green environmentalism aims to provide prosperity in an ecologically sustainable way through the use of new technologies and improved design.

Proponents promote and advocate for green energy, electric vehicles, efficient manufacturing systems, bio and nanotechnologies, ubiquitous computing, dense urban settlements, closed loop materials cycles and sustainable product designs. One-planet living is a commonly used phrase. Their principal focus is on the idea that through a combination of well-built communities, new technologies and sustainable living practices, the quality of life can actually be improved even while ecological footprints shrink.

Around the middle of the century we'll see global population peak at something like 9 billion people, all of whom will want to live with a reasonable amount of prosperity, and many of whom will want, at the very least, a European lifestyle. They will see escaping poverty as their nonnegotiable right, but to deliver that prosperity at our current levels of efficiency and resource use would destroy the planet many times over. We need to invent a new model of prosperity, one that lets billions have the comfort, security, and opportunities they want at the level of impact the planet can afford. We can't do that without embracing technology and better design.

The term bright green has been used with increased frequency due to the promulgation of these ideas through the Internet and coverage by some traditional media.

Dark greens, light greens and bright greens

Alex Steffen describes contemporary environmentalists as being split into three groups: dark greens, light greens, and bright greens.

Light green

Light greens see protecting the environment first and foremost as a personal responsibility. They fall toward the transformational activist end of the spectrum, but they do not emphasize environmentalism as a distinct political ideology or seek fundamental political reform; instead, they often approach environmentalism as a lifestyle choice. For many, the motto "Green is the new black" sums up this way of thinking. Light green is distinct from the term lite green, which some environmentalists use to describe products or practices they regard as greenwashing: products and practices that claim to achieve more change than they actually do (if any).

Dark green

In contrast, dark greens believe that environmental problems are an inherent part of industrialized, capitalist civilization, and seek radical political, social, and cultural change. Dark greens believe that currently and historically dominant modes of societal organization inevitably lead to consumerism, overconsumption, overproduction, waste, alienation from nature and resource depletion. Dark greens claim this is caused by the emphasis on economic growth present in all existing ideologies, a tendency sometimes referred to as growth mania. The dark green brand of environmentalism is associated with ideas of ecocentrism, deep ecology, degrowth, anti-consumerism, post-materialism, holism, the Gaia hypothesis of James Lovelock, and sometimes support for a reduction in human numbers and/or a relinquishment of technology to reduce humanity's effect on the biosphere.

Dark greens may point to effects such as the Jevons paradox to argue that there are limits to the benefits of technological approaches like those advocated by bright greens.

Contrast between light green and dark green

In The Song of the Earth, Jonathan Bate notes that there are typically significant divisions within environmental theory. He identifies one group as "light Greens" or "environmentalists," who view environmental protection primarily as a personal responsibility. The other group, termed "dark Greens" or "deep ecologists," believes that environmental issues are fundamentally tied to industrialized civilization and advocates radical political change. This distinction can be summarized as "Know Technology" versus "No Technology" (Suresh Frederick in Ecocriticism: Paradigms and Praxis).

Bright green

More recently, bright greens emerged as a group of environmentalists who believe that radical changes are needed in the economic and political operation of society to make it sustainable, but that better designs, new technologies and more widely distributed social innovations are the means to achieve those changes, and that society can neither stop nor protest its way to sustainability. As Ross Robertson writes,

[B]right green environmentalism is less about the problems and limitations we need to overcome than the "tools, models, and ideas" that already exist for overcoming them. It forgoes the bleakness of protest and dissent for the energizing confidence of constructive solutions.

Some have included open source technology as part of this new approach.
