
Sunday, April 22, 2018

Machine learning

From Wikipedia, the free encyclopedia

Machine learning is a field of computer science that uses statistical techniques to give computer systems the ability to "learn" (i.e., progressively improve performance on a specific task) with data, without being explicitly programmed.[1]

The name machine learning was coined in 1959 by Arthur Samuel.[2] Evolved from the study of pattern recognition and computational learning theory in artificial intelligence,[3] machine learning explores the study and construction of algorithms that can learn from and make predictions on data[4] – such algorithms overcome the limitations of strictly static program instructions by making data-driven predictions or decisions,[5]:2 building a model from sample inputs. Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms with good performance is difficult or infeasible; example applications include email filtering, detection of network intruders or malicious insiders working towards a data breach,[6] optical character recognition (OCR),[7] learning to rank, and computer vision.

Machine learning is closely related to (and often overlaps with) computational statistics, which also focuses on prediction-making through the use of computers. It has strong ties to mathematical optimization, which delivers methods, theory and application domains to the field. Machine learning is sometimes conflated with data mining,[8] where the latter subfield focuses more on exploratory data analysis and is known as unsupervised learning.[5]:vii[9] Machine learning can also be unsupervised[10] and be used to learn and establish baseline behavioral profiles for various entities[11] and then used to find meaningful anomalies.

Within the field of data analytics, machine learning is a method used to devise complex models and algorithms that lend themselves to prediction; in commercial use, this is known as predictive analytics. These analytical models allow researchers, data scientists, engineers, and analysts to "produce reliable, repeatable decisions and results" and uncover "hidden insights" through learning from historical relationships and trends in the data.[12]

Effective machine learning is difficult because finding patterns is hard and often not enough training data are available; as a result, machine-learning programs often fail to deliver.[13][14]

Overview

Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the machine learning field: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E."[15] This definition of the tasks in which machine learning is concerned offers a fundamentally operational definition rather than defining the field in cognitive terms. This follows Alan Turing's proposal in his paper "Computing Machinery and Intelligence", in which the question "Can machines think?" is replaced with the question "Can machines do what we (as thinking entities) can do?".[16] In Turing's proposal the various characteristics that could be possessed by a thinking machine and the various implications in constructing one are exposed.

Machine learning tasks


Machine learning tasks are typically classified into two broad categories, depending on whether there is a learning "signal" or "feedback" available to a learning system:
  • Supervised learning: The computer is presented with example inputs and their desired outputs, given by a "teacher", and the goal is to learn a general rule that maps inputs to outputs. As special cases, the input signal can be only partially available, or restricted to special feedback:
    • Semi-supervised learning: the computer is given only an incomplete training signal: a training set with some (often many) of the target outputs missing.
    • Active learning: the computer can only obtain training labels for a limited set of instances (based on a budget), and also has to optimize its choice of objects to acquire labels for. When used interactively, these can be presented to the user for labeling.
    • Reinforcement learning: training data (in form of rewards and punishments) is given only as feedback to the program's actions in a dynamic environment, such as driving a vehicle or playing a game against an opponent.[5]:3
  • Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end (feature learning).
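
To make the contrast concrete, here is a minimal Python sketch (assuming NumPy and scikit-learn are installed; the two-blob data set is invented for illustration) that fits a supervised classifier on labeled points and an unsupervised clustering model on the same points with the labels withheld.

    # Supervised vs. unsupervised learning: a minimal sketch on hypothetical toy data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    rng = np.random.RandomState(0)
    # Two blobs of 2-D points; the second blob is shifted by +3 in both coordinates.
    X = np.vstack([rng.randn(50, 2), rng.randn(50, 2) + 3])
    y = np.array([0] * 50 + [1] * 50)          # labels, available only in the supervised case

    # Supervised: learn a general rule mapping inputs to the given labels.
    clf = LogisticRegression().fit(X, y)
    print("predicted label for (3, 3):", clf.predict([[3.0, 3.0]])[0])

    # Unsupervised: no labels; the algorithm must find structure on its own.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print("cluster assignment for (3, 3):", km.predict([[3.0, 3.0]])[0])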

Machine learning applications


A support vector machine is a classifier that divides its input space into two regions, separated by a linear boundary. Here, it has learned to distinguish black and white circles.

Another categorization of machine learning tasks arises when one considers the desired output of a machine-learned system:[5]:3
  • In classification, inputs are divided into two or more classes, and the learner must produce a model that assigns unseen inputs to one or more (multi-label classification) of these classes. This is typically tackled in a supervised way. Spam filtering is an example of classification, where the inputs are email (or other) messages and the classes are "spam" and "not spam".
  • In regression, also a supervised problem, the outputs are continuous rather than discrete.
  • In clustering, a set of inputs is to be divided into groups. Unlike in classification, the groups are not known beforehand, making this typically an unsupervised task.
  • Density estimation finds the distribution of inputs in some space.
  • Dimensionality reduction simplifies inputs by mapping them into a lower-dimensional space. Topic modeling is a related problem, where a program is given a list of human language documents and is tasked to find out which documents cover similar topics.
Among other categories of machine learning problems, learning to learn learns its own inductive bias based on previous experience. Developmental learning, elaborated for robot learning, generates its own sequences (also called curriculum) of learning situations to cumulatively acquire repertoires of novel skills through autonomous self-exploration and social interaction with human teachers and using guidance mechanisms such as active learning, maturation, motor synergies, and imitation.
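
Two of the output types above that are easy to overlook, regression (a continuous output) and density estimation, can be sketched briefly as follows; the snippet assumes NumPy, scikit-learn, and SciPy are available and uses synthetic data.

    # Regression (continuous output) and density estimation on synthetic data.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from scipy.stats import gaussian_kde

    rng = np.random.RandomState(0)
    X = rng.uniform(0, 10, size=(100, 1))
    y = 2.5 * X[:, 0] + rng.normal(scale=1.0, size=100)   # noisy linear relationship

    # Regression: predict a continuous target value for a new input.
    reg = LinearRegression().fit(X, y)
    print("predicted y at x=4:", reg.predict([[4.0]])[0])

    # Density estimation: model the distribution of the inputs themselves.
    kde = gaussian_kde(X[:, 0])
    print("estimated density at x=4:", kde(4.0)[0])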

History and relationships to other fields

Arthur Samuel, an American pioneer in the field of computer gaming and artificial intelligence, coined the term "Machine Learning" in 1959 while at IBM[17]. As a scientific endeavour, machine learning grew out of the quest for artificial intelligence. Already in the early days of AI as an academic discipline, some researchers were interested in having machines learn from data. They attempted to approach the problem with various symbolic methods, as well as what were then termed "neural networks"; these were mostly perceptrons and other models that were later found to be reinventions of the generalized linear models of statistics.[18] Probabilistic reasoning was also employed, especially in automated medical diagnosis.[19]:488

However, an increasing emphasis on the logical, knowledge-based approach caused a rift between AI and machine learning. Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation.[19]:488 By 1980, expert systems had come to dominate AI, and statistics was out of favor.[20] Work on symbolic/knowledge-based learning did continue within AI, leading to inductive logic programming, but the more statistical line of research was now outside the field of AI proper, in pattern recognition and information retrieval.[19]:708–710; 755 Neural networks research had been abandoned by AI and computer science around the same time. This line, too, was continued outside the AI/CS field, as "connectionism", by researchers from other disciplines including Hopfield, Rumelhart and Hinton. Their main success came in the mid-1980s with the reinvention of backpropagation.[19]:25

Machine learning, reorganized as a separate field, started to flourish in the 1990s. The field changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It shifted focus away from the symbolic approaches it had inherited from AI, and toward methods and models borrowed from statistics and probability theory.[20] It also benefited from the increasing availability of digitized information, and the ability to distribute it via the Internet.

Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases). Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as "unsupervised learning" or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously unknown knowledge. Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by other supervised methods, while in a typical KDD task, supervised methods cannot be used due to the unavailability of training data.

Machine learning also has intimate ties to optimization: many learning problems are formulated as minimization of some loss function on a training set of examples. Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the pre-assigned labels of a set of examples). The difference between the two fields arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples.[21]
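
The following sketch (plain NumPy; the squared-error loss and synthetic linear data are illustrative choices, not anything prescribed by the text) makes the distinction concrete: gradient descent minimizes the loss on the training set, and the loss on held-out samples is then reported separately, since that is the quantity machine learning actually cares about.

    # Minimize a squared-error loss on a training set, then check the loss on unseen data.
    import numpy as np

    rng = np.random.RandomState(0)
    X = rng.randn(200, 3)
    true_w = np.array([1.0, -2.0, 0.5])
    y = X @ true_w + 0.1 * rng.randn(200)

    X_train, y_train = X[:150], y[:150]      # training set
    X_test, y_test = X[150:], y[150:]        # unseen samples

    def loss(w, X, y):
        return np.mean((X @ w - y) ** 2)     # mean squared error

    w = np.zeros(3)
    for _ in range(500):                     # plain gradient descent on the training loss
        grad = 2 * X_train.T @ (X_train @ w - y_train) / len(y_train)
        w -= 0.1 * grad

    print("training loss:", loss(w, X_train, y_train))
    print("loss on unseen samples:", loss(w, X_test, y_test))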

Relation to statistics

Machine learning and statistics are closely related fields. According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics.[22] He also suggested the term data science as a placeholder to call the overall field.[22]
Leo Breiman distinguished two statistical modelling paradigms: the data model and the algorithmic model,[23] where "algorithmic model" refers, roughly, to machine learning algorithms such as random forests.

Some statisticians have adopted methods from machine learning, leading to a combined field that they call statistical learning.[24]

Theory

A core objective of a learner is to generalize from its experience.[25][26] Generalization in this context is the ability of a learning machine to perform accurately on new, unseen examples/tasks after having experienced a learning data set. The training examples come from some generally unknown probability distribution (considered representative of the space of occurrences) and the learner has to build a general model about this space that enables it to produce sufficiently accurate predictions in new cases.

The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common. The bias–variance decomposition is one way to quantify generalization error.

For the best performance in the context of generalization, the complexity of the hypothesis should match the complexity of the function underlying the data. If the hypothesis is less complex than the function, then the model has underfit the data. If the complexity of the model is increased in response, then the training error decreases. But if the hypothesis is too complex, then the model is subject to overfitting and generalization will be poorer.[27]
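
One way to see this trade-off is to fit polynomials of increasing degree to noisy data and compare training and test error; the sketch below does so with NumPy on synthetic data, with the degrees chosen only for illustration. Higher-degree fits typically drive the training error down while the test error eventually rises again.

    # Underfitting vs. overfitting: compare training and test error across model complexity.
    import numpy as np

    rng = np.random.RandomState(0)
    x = rng.uniform(-1, 1, 30)
    y = np.sin(3 * x) + 0.2 * rng.randn(30)       # underlying function plus noise

    x_train, y_train = x[:20], y[:20]
    x_test, y_test = x[20:], y[20:]

    for degree in (1, 4, 10):                     # too simple, about right, too complex
        coeffs = np.polyfit(x_train, y_train, degree)
        train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")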

In addition to performance bounds, computational learning theorists study the time complexity and feasibility of learning. In computational learning theory, a computation is considered feasible if it can be done in polynomial time. There are two kinds of time complexity results. Positive results show that a certain class of functions can be learned in polynomial time. Negative results show that certain classes cannot be learned in polynomial time.

Approaches

Decision tree learning

Decision tree learning uses a decision tree as a predictive model, which maps observations about an item to conclusions about the item's target value.
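
A minimal sketch with scikit-learn (assumed available; the tiny loan data set is invented) shows a tree of if/then splits being learned and printed:

    # Decision tree: learn simple if/then splits that map observations to a target value.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Hypothetical observations: [age, income] -> whether a loan was repaid (1) or not (0).
    X = [[25, 30], [35, 60], [45, 80], [20, 20], [50, 90], [30, 40]]
    y = [0, 1, 1, 0, 1, 0]

    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=["age", "income"]))   # the learned rules
    print("prediction for [40, 70]:", tree.predict([[40, 70]])[0])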

Association rule learning

Association rule learning is a method for discovering interesting relations between variables in large databases.
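
As a toy illustration (pure Python; the market-basket transactions are invented), simple association rules can be found by counting co-occurrences and reporting support and confidence:

    # Toy association rule mining: support and confidence from invented transactions.
    from itertools import combinations

    transactions = [
        {"bread", "milk"},
        {"bread", "butter"},
        {"bread", "milk", "butter"},
        {"milk", "butter"},
        {"bread", "milk", "butter"},
    ]
    n = len(transactions)

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n   # fraction containing the itemset

    items = sorted(set().union(*transactions))
    for a, b in combinations(items, 2):
        s = support({a, b})
        if s >= 0.4:                                   # arbitrary minimum-support threshold
            print(f"{a} -> {b}: support {s:.2f}, confidence {s / support({a}):.2f}")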

Artificial neural networks

An artificial neural network (ANN) learning algorithm, usually called a "neural network" (NN), is a learning algorithm vaguely inspired by biological neural networks. Computations are structured in terms of an interconnected group of artificial neurons, processing information using a connectionist approach to computation. Modern neural networks are non-linear statistical data modeling tools. They are usually used to model complex relationships between inputs and outputs, to find patterns in data, or to capture the statistical structure of an unknown joint probability distribution between observed variables.
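
As a small concrete example (scikit-learn assumed available; XOR is a standard toy problem rather than anything discussed above), a network with one hidden layer can learn a non-linear relationship that a single linear model cannot:

    # A small feed-forward neural network learning the non-linear XOR function.
    from sklearn.neural_network import MLPClassifier

    X = [[0, 0], [0, 1], [1, 0], [1, 1]]
    y = [0, 1, 1, 0]                                  # XOR of the two inputs

    net = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                        solver="lbfgs", max_iter=2000, random_state=0)
    net.fit(X, y)
    print("predictions:", net.predict(X))             # expected [0 1 1 0]; may vary with initialization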

Deep learning

Falling hardware prices and the development of GPUs for personal use in the last few years have contributed to the rise of deep learning, which consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition.[28]

Inductive logic programming

Inductive logic programming (ILP) is an approach to rule learning using logic programming as a uniform representation for input examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program that entails all positive and no negative examples. Inductive programming is a related field that considers any kind of programming languages for representing hypotheses (and not only logic programming), such as functional programs.

Support vector machines

Support vector machines (SVMs) are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other.
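
A minimal two-class SVM sketch with scikit-learn (assumed available; the two blobs of points are synthetic):

    # Support vector machine: learn a separating boundary between two labeled groups.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.RandomState(0)
    X = np.vstack([rng.randn(30, 2) - 2, rng.randn(30, 2) + 2])   # two separated blobs
    y = np.array([0] * 30 + [1] * 30)

    svm = SVC(kernel="linear").fit(X, y)
    print("support vectors per class:", svm.n_support_)
    print("prediction for (2, 2):", svm.predict([[2.0, 2.0]])[0])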

Clustering

Cluster analysis is the assignment of a set of observations into subsets (called clusters) so that observations within the same cluster are similar according to some predesignated criterion or criteria, while observations drawn from different clusters are dissimilar. Different clustering techniques make different assumptions on the structure of the data, often defined by some similarity metric and evaluated for example by internal compactness (similarity between members of the same cluster) and separation between different clusters. Other methods are based on estimated density and graph connectivity. Clustering is a method of unsupervised learning, and a common technique for statistical data analysis.
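
The sketch below (scikit-learn assumed available; the blobs are synthetic) clusters unlabeled points for several choices of k and scores each result with the silhouette coefficient, one standard internal measure of compactness and separation:

    # Cluster analysis: group unlabeled points and evaluate compactness/separation.
    from sklearn.datasets import make_blobs
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=0)

    for k in (2, 3, 4):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        print(f"k={k}: silhouette score {silhouette_score(X, labels):.3f}")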

Bayesian networks

A Bayesian network, belief network or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independencies via a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform inference and learning.
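
The disease/symptom example amounts to a two-node network and can be worked through directly with Bayes' rule; the probabilities in the sketch below are invented purely for illustration.

    # A two-node Bayesian network: Disease -> Symptom, with invented probabilities.
    p_disease = 0.01                      # P(disease)
    p_symptom_given_disease = 0.90        # P(symptom | disease)
    p_symptom_given_healthy = 0.05        # P(symptom | no disease)

    # Inference: probability of disease given that the symptom is observed.
    p_symptom = (p_symptom_given_disease * p_disease
                 + p_symptom_given_healthy * (1 - p_disease))
    p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom
    print("P(disease | symptom) =", round(p_disease_given_symptom, 3))   # about 0.154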

Reinforcement learning

Reinforcement learning is concerned with how an agent ought to take actions in an environment so as to maximize some notion of long-term reward. Reinforcement learning algorithms attempt to find a policy that maps states of the world to the actions the agent ought to take in those states. Reinforcement learning differs from the supervised learning problem in that correct input/output pairs are never presented, nor sub-optimal actions explicitly corrected.
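
A compact way to see the reward-feedback idea is tabular Q-learning on a toy five-state chain (everything below, including the reward structure and hyperparameters, is an invented example rather than any standard benchmark):

    # Tabular Q-learning on a 5-state chain: only the rightmost state gives a reward.
    import random

    n_states, n_actions = 5, 2            # actions: 0 = move left, 1 = move right
    Q = [[0.0, 0.0] for _ in range(n_states)]
    alpha, gamma, epsilon = 0.5, 0.9, 0.1

    random.seed(0)
    for _ in range(500):                   # episodes
        s = 0
        for _ in range(20):                # steps per episode
            if random.random() < epsilon:
                a = random.randrange(n_actions)          # explore
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1        # exploit the current estimate
            s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Q-learning update: move Q(s, a) toward reward plus discounted best future value.
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next

    print("learned policy (0=left, 1=right):", [0 if q[0] > q[1] else 1 for q in Q])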

Representation learning

Several learning algorithms, mostly unsupervised learning algorithms, aim at discovering better representations of the inputs provided during training. Classical examples include principal components analysis and cluster analysis. Representation learning algorithms often attempt to preserve the information in their input but transform it in a way that makes it useful, often as a pre-processing step before performing classification or predictions, allowing reconstruction of the inputs coming from the unknown data generating distribution, while not being necessarily faithful for configurations that are implausible under that distribution.

Manifold learning algorithms attempt to do so under the constraint that the learned representation is low-dimensional. Sparse coding algorithms attempt to do so under the constraint that the learned representation is sparse (has many zeros). Multilinear subspace learning algorithms aim to learn low-dimensional representations directly from tensor representations for multidimensional data, without reshaping them into (high-dimensional) vectors.[29] Deep learning algorithms discover multiple levels of representation, or a hierarchy of features, with higher-level, more abstract features defined in terms of (or generating) lower-level features. It has been argued that an intelligent machine is one that learns a representation that disentangles the underlying factors of variation that explain the observed data.[30]
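
Principal component analysis, mentioned above as a classical example, can be sketched in a few lines (scikit-learn assumed available; the data are synthetic, generated from two underlying factors so that a 2-dimensional representation suffices):

    # Representation learning with PCA: re-express correlated inputs in fewer dimensions.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.RandomState(0)
    latent = rng.randn(200, 2)                                    # 2 underlying factors of variation
    X = latent @ rng.randn(2, 10) + 0.05 * rng.randn(200, 10)     # observed in 10 dimensions

    pca = PCA(n_components=2).fit(X)
    Z = pca.transform(X)                               # the learned low-dimensional representation
    print("explained variance ratio:", pca.explained_variance_ratio_.round(3))
    print("original shape:", X.shape, "-> representation shape:", Z.shape)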

Similarity and metric learning

In this problem, the learning machine is given pairs of examples that are considered similar and pairs of less similar objects. It then needs to learn a similarity function (or a distance metric) that can predict whether new objects are similar. It is sometimes used in recommendation systems.

Sparse dictionary learning

In this method, a datum is represented as a linear combination of basis functions, and the coefficients are assumed to be sparse. Let x be a d-dimensional datum and D a d × n matrix, where each column of D represents a basis function; r is the coefficient vector that represents x using D. Mathematically, sparse dictionary learning means solving x ≈ Dr where r is sparse. Generally speaking, n is assumed to be larger than d to allow the freedom for a sparse representation.

Learning a dictionary along with sparse representations is strongly NP-hard and also difficult to solve approximately.[31] A popular heuristic method for sparse dictionary learning is K-SVD.

Sparse dictionary learning has been applied in several contexts. In classification, the problem is to determine which class a previously unseen datum belongs to. Suppose a dictionary for each class has already been built. Then a new datum is associated with the class whose dictionary gives it the best sparse representation. Sparse dictionary learning has also been applied in image de-noising. The key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot.[32]
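
The relation x ≈ Dr can be illustrated by sparse coding against a fixed random dictionary with orthogonal matching pursuit (NumPy and scikit-learn assumed available); in full sparse dictionary learning the dictionary D itself would also be learned, for example by K-SVD.

    # Sparse coding against a fixed dictionary: recover a sparse r with x ≈ D r.
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.RandomState(0)
    d, n, k = 20, 50, 3                     # signal dimension, dictionary size (n > d), sparsity
    D = rng.randn(d, n)
    D /= np.linalg.norm(D, axis=0)          # unit-norm columns (the basis functions)

    r_true = np.zeros(n)
    r_true[rng.choice(n, k, replace=False)] = rng.randn(k)
    x = D @ r_true                          # the observed datum

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(D, x)
    print("nonzero coefficients found:", np.flatnonzero(omp.coef_))   # should typically match
    print("true nonzero positions:    ", np.flatnonzero(r_true))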

Genetic algorithms

A genetic algorithm (GA) is a search heuristic that mimics the process of natural selection, and uses methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms found some uses in the 1980s and 1990s.[33][34] Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms.[35]
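
A bare-bones genetic algorithm with selection, single-point crossover, and mutation (pure Python; the bit-counting fitness function is just a stand-in objective):

    # A tiny genetic algorithm: evolve bit strings toward all-ones using crossover and mutation.
    import random

    random.seed(0)
    LENGTH, POP, GENERATIONS = 20, 30, 60

    def fitness(bits):                       # stand-in objective: number of 1 bits
        return sum(bits)

    population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[:POP // 2]      # selection: keep the fitter half
        children = []
        while len(children) < POP - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, LENGTH)
            child = a[:cut] + b[cut:]        # single-point crossover
            if random.random() < 0.2:        # mutation: flip one random bit
                i = random.randrange(LENGTH)
                child[i] = 1 - child[i]
            children.append(child)
        population = parents + children

    print("best fitness found:", fitness(max(population, key=fitness)), "of", LENGTH)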

Rule-based machine learning

Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves "rules" to store, manipulate, or apply knowledge. The defining characteristic of a rule-based machine learner is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system. This is in contrast to other machine learners that commonly identify a singular model that can be universally applied to any instance in order to make a prediction.[36] Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems.

Learning classifier systems

Learning classifier systems (LCS) are a family of rule-based machine learning algorithms that combine a discovery component (e.g. typically a genetic algorithm) with a learning component (performing either supervised learning, reinforcement learning, or unsupervised learning). They seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions.[37]

Applications

Machine learning has been applied in a wide range of domains; a few notable examples follow.
In 2006, the online movie company Netflix held the first "Netflix Prize" competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%. A joint team made up of researchers from AT&T Labs-Research in collaboration with the teams Big Chaos and Pragmatic Theory built an ensemble model to win the Grand Prize in 2009 for $1 million.[43] Shortly after the prize was awarded, Netflix realized that viewers' ratings were not the best indicators of their viewing patterns ("everything is a recommendation") and they changed their recommendation engine accordingly.[44]

In 2010, The Wall Street Journal wrote about the firm Rebellion Research and its use of machine learning to predict the financial crisis.[45]

In 2012, Vinod Khosla, co-founder of Sun Microsystems, predicted that 80% of medical doctors' jobs would be lost in the next two decades to automated machine learning medical diagnostic software.[46]

In 2014, it was reported that a machine learning algorithm had been applied in the field of art history to study fine art paintings, and that it may have revealed previously unrecognized influences between artists.[47]

Model assessments

Classification machine learning models can be validated by accuracy estimation techniques such as the holdout method, which splits the data into a training and a test set (conventionally a 2/3 training and 1/3 test designation) and evaluates the performance of the trained model on the test set. In comparison, the k-fold cross-validation method randomly partitions the data into k subsets, where k − 1 subsets are used to train the model and the remaining subset is used to test its predictive ability; this is repeated so that each subset serves once as the test set. In addition to the holdout and cross-validation methods, bootstrap, which samples n instances with replacement from the dataset, can be used to assess model accuracy.[48]
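
The three assessment schemes described above can be sketched with scikit-learn (assumed available) on a synthetic classification problem:

    # Holdout, k-fold cross-validation, and bootstrap assessment of a classifier.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split, cross_val_score
    from sklearn.utils import resample

    X, y = make_classification(n_samples=300, random_state=0)
    model = LogisticRegression(max_iter=1000)

    # Holdout: 2/3 training, 1/3 test.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)
    print("holdout accuracy:", model.fit(X_tr, y_tr).score(X_te, y_te))

    # k-fold cross-validation: each fold takes a turn as the test set.
    print("5-fold CV accuracy:", cross_val_score(model, X, y, cv=5).mean())

    # Bootstrap: sample n instances with replacement, test on the left-out instances.
    idx = resample(np.arange(len(y)), replace=True, random_state=0)
    oob = np.setdiff1d(np.arange(len(y)), idx)
    print("bootstrap (out-of-bag) accuracy:", model.fit(X[idx], y[idx]).score(X[oob], y[oob]))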

In addition to overall accuracy, investigators frequently report sensitivity and specificity meaning True Positive Rate (TPR) and True Negative Rate (TNR) respectively. Similarly, investigators sometimes report the False Positive Rate (FPR) as well as the False Negative Rate (FNR). However, these rates are ratios that fail to reveal their numerators and denominators. The Total Operating Characteristic (TOC) is an effective method to express a model’s diagnostic ability. TOC shows the numerators and denominators of the previously mentioned rates, thus TOC provides more information than the commonly used Receiver operating characteristic (ROC) and ROC’s associated Area Under the Curve (AUC).
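
These rates come directly from the four confusion-matrix counts; the sketch below uses invented counts to show the numerators and denominators explicitly.

    # TPR, TNR, FPR and FNR from raw confusion-matrix counts (invented numbers).
    tp, fn = 80, 20     # actual positives: correctly and incorrectly classified
    tn, fp = 150, 50    # actual negatives: correctly and incorrectly classified

    tpr = tp / (tp + fn)        # sensitivity / True Positive Rate
    tnr = tn / (tn + fp)        # specificity / True Negative Rate
    fpr = fp / (fp + tn)        # False Positive Rate = 1 - TNR
    fnr = fn / (fn + tp)        # False Negative Rate = 1 - TPR
    accuracy = (tp + tn) / (tp + tn + fp + fn)

    print(f"TPR={tpr:.2f} TNR={tnr:.2f} FPR={fpr:.2f} FNR={fnr:.2f} accuracy={accuracy:.2f}")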

Ethics

Machine learning poses a host of ethical questions. Systems which are trained on datasets collected with biases may exhibit these biases upon use (algorithmic bias), thus digitizing cultural prejudices.[49] For example, using job hiring data from a firm with racist hiring policies may lead to a machine learning system duplicating the bias by scoring job applicants against similarity to previous successful applicants.[50][51] Responsible collection of data and documentation of algorithmic rules used by a system thus is a critical part of machine learning.

Because language contains biases, machines trained on language corpora will necessarily also learn bias.[52]

Designer baby

From Wikipedia, the free encyclopedia

A designer baby is a human embryo which has been genetically modified, usually following guidelines set by the parent or scientist, to produce desirable traits. This is done using various methods, such as germline engineering or preimplantation genetic diagnosis (PGD). This technology is the subject of ethical debate, bringing up the concept of genetically modified "superhumans" to replace modern humans.

Preimplantation genetic diagnosis

In medicine and (clinical) genetics, pre-implantation genetic diagnosis (PGD or PIGD), also known as embryo screening, is a procedure performed on embryos prior to implantation, sometimes even on oocytes prior to fertilization. The methods of PGD help to identify and locate genetic defects in early embryos that were conceived through in vitro fertilization (IVF).[1] The biopsy is carried out by removing one or two cells when the embryo is at a specific stage of development. PGD uses the IVF technique to obtain oocytes or embryos for evaluation of the organism's genome.

The PGD procedures allow scientists to identify damaged or mutated genes associated with diseases in the oocytes or embryos by using in-situ hybridization (ISH).[2] The ISH technique labels specific nucleic acid sequences on a gene that can help detect genetic abnormalities.[3]

Conversely, this technique can also help select for desirable traits by avoiding the implantation of embryos carrying genes for serious diseases or disabilities. Examples of desirable traits that could be selected for include increased muscle mass, voice pitch, or high intelligence. Overall, the use of PGD to select for a positive trait is referred to as the creation of a "designer baby".[2]

This is not a new technology: the first PGD babies, and thus also the first designer babies, were created in 1989 and born in 1990.[4]

A 2012 article by Carolyn Abraham in The Globe and Mail stated that "Recent breakthroughs have made it possible to scan every chromosome in a single embryonic cell, to test for genes involved in hundreds of 'conditions,' some of which are clearly life-threatening while others are less dramatic and less certain". There is already a "microchip that can test a remarkable 1,500 genetic traits at once, including heart disease, seasonal affective disorder, obesity, athletic ability, hair and eye color, height, susceptibility to alcohol and nicotine addictions, lactose intolerance and one of several genes linked to intelligence." It is still difficult to get enough DNA for such extensive testing, but the chip designer thinks this technical problem will be solved soon.[5]

Regulation of Preimplantation Genetic Diagnosis

PGD has been used primarily for medical purposes, but as the possibilities of the procedure increase the idea of non-medical uses has become a popular topic of debate. Non-medical motivations could lead to potential problems when trying to make the distinction of when the procedure is needed or desired.

For example, PGD has the ability to select an embryo based on gender preferences (Stankovic). Since selecting a gender is desired rather than medically needed, this could cause much controversy. Additionally, the procedure can be used to create a donor offspring or “savior sibling”, which can assist a pre-existing child for medical purposes.[2] A “savior sibling” is a brother or sister created to donate life-saving tissue to an existing child.[6] There have been arguments against the creation of “savior siblings” because many believe that it will lead humans closer to the creation of designer babies. For example, one critic said, “the new technique is a dangerous first step towards allowing parents to use embryo testing to choose other characteristics of the baby, such as eye colour and sex”.[7]

The artificial selection of traits through the use of PGD has become a widely debated topic and governments have started to regulate this procedure.

Many countries completely prohibit PGD, including Austria, Germany, Ireland, and Switzerland. Other countries restrict PGD to medical use only, including Belgium, France, Greece, Netherlands, Italy, Norway, and the United Kingdom.

In contrast, United States federal law does not regulate PGD. Those who are in favor of PGD believe the government should not be involved in the procedure and that parents should have reproductive choice. The opposing side has argued that PGD will allow embryo selection based on trivial traits, while other critics believe that the procedure could lead to a new form of eugenics.[1]

The regulation of PGD has become an important topic; however, much of the artificial trait selection remains only prospective until technology advances. For example, scientists do not yet know which specific genes are associated with traits like voice pitch or intelligence. Nevertheless, given the current rate of technological advancement, it is believed that the artificial selection of desirable traits will be possible within the next twenty years.[2]

Genetic engineering of human gametes, zygotes, or embryos (a.k.a. germline modification)

The other use for designer babies concerns possible uses of gene therapy techniques to create desired traits of a child, such as disease resistance, sex, hair color and other cosmetic traits, athletic ability, and intelligence.[8]

Understanding of genetics for human traits

Genetics explains the process of parents passing down certain genes to their children. Genes are inherited from both biological parents, and each gene expresses a specific trait. The traits expressed by genes can be something physically seen—such as hair color, eye color, or height—or can be things such as diseases and disorders.[9]

Human genes are found within chromosomes. Humans have 23 pairs of chromosomes, or 46 individual chromosomes: 23 are inherited from the father and 23 from the mother. Together, these chromosomes carry roughly 20,000 genes.[9]

Researchers have already connected the genes in the striped zebrafish which control the colour of the fish to genes in humans that determine skin colour.[10] Many other things could be discovered in coming years, especially with the new possibilities of cloning animals.

Scientists have been able to better understand the genetic traits of humans through projects such as the Human Genome Project. This project was launched around 1990 as an international research effort with the end goal of mapping and understanding every gene in the human body.[11] As a part of the Human Genome Project, researchers have been able to pinpoint the specific locations of about 12,800 genes within different chromosomes.[9]

Germline modification

Germline modification has been around since the 1980s, as there have been successful animal trials dating back to that time.[12] In order for germline modification to be successful, medical professionals must know how to introduce a gene into the patient's cells and the germline so that it will be transferred to subsequent generations and still maintain proper functionality.[13] The way in which genes are integrated into the DNA is what distinguishes germline modification from somatic cell modification.[14] In order to be transferred to subsequent generations, these changes need to be carried through the development of germ cells.[15] Changes in the germline result in permanent and heritable changes to the DNA.[14] While amplification of positive effects could occur, there is also the risk that possible negative effects would be amplified as well.[15] Since the results are generational, it is more complicated to study the long-term effects, and therefore it is not a simple task to determine whether the benefits of germline modification outweigh the harm.[15] Allowing families to design their children and select for desirable traits is another major concern that germline modification presents.[15]

Germline modification can be accomplished through different techniques that focus on modifying the germinal epithelium, germ cells, or the fertilized egg.[14] Most of the techniques involve transporting transgenes, which are then integrated with the DNA of the zygote.[12] After integration, the transgene becomes a stable and functioning portion of the host's genome.[12] One technique involves inserting a specific sequence of cloned DNA into the fertilized egg using microinjection.[14] The sequence is inserted directly into the pronucleus. The second technique uses the transfection process: stem cells obtained from the embryo during the blastocyst stage are modified, combined with naked DNA, and the resulting cell is reinserted into the developing embryo.[14] The third technique uses retroviruses to carry DNA into the embryo.[14]

Feasibility of gene therapy

Gene therapy is the use of DNA as a pharmaceutical agent to treat disease. Gene therapy was first conceptualized in 1972, with the authors urging caution before commencing gene therapy studies in humans.[16] The first FDA-approved gene therapy experiment in the United States occurred in 1990, on a four-year-old girl named Ashanti DeSilva, who was treated for ADA-SCID.[17][18] This is a disease that had left her defenseless against infections spreading throughout her body. Dr. W. French Anderson, who worked for the National Heart, Lung, and Blood Institute, was a major lead on this clinical trial.[19] Since then, over 1,700 clinical trials have been conducted using a number of techniques for gene therapy.[20]

Techniques in gene therapy

The techniques used in gene therapy, which are also referred to as vectors, rely on using a healthy gene to replace a defective gene. The techniques or vectors that have been used to conduct these clinical trials vary; a few of them are basic processing, gene doping, and viral vectors. Viral infections can be life-threatening in patients who are immune-compromised because they cannot mount an effective immune response. Approaches to protection from infection using gene therapy include T cell-based immunotherapy, cell therapy, stem cell-based therapy, genetic vaccines, and other approaches to genetic blockade of infection.[21]

Basic processing can be achieved through replacement of a mutated gene, inactivation of a mutated gene, or introduction of a new gene to help fight a disease caused by mutation. Secondly, gene doping is a gene therapy procedure that modulates the expression of a particular gene. This procedure is mainly used to improve athletic ability for sporting events. It is a genetic form of human enhancement that can also treat muscle-wasting disorders. It is a highly controversial procedure because the results leave nothing unusual in the bloodstream, so athletic officials would be unable to detect anything in a blood or urine test. An example of gene doping would be providing an athlete with erythropoietin (EPO), a hormone that increases the red blood cell count. Lastly, viral vectors mimic the methods of a normal virus in the human body to introduce favorable genes into a human cell. For instance, scientists can change the host's genome by removing the disease-causing genes from a virus and replacing them with genes for the desired trait (“Types of Gene Therapy”).

The aforementioned techniques have been used by scientists, but the most popularized techniques are naked DNA and DNA complexes. Injection of naked DNA is the simplest form of vector delivery. Naked DNA is a histone-free, modified DNA sequence stripped of the proteins that would normally surround it. This form of delivery is sometimes used as a natural compound, but synthetic compounds for gene delivery are increasingly being produced in the United States. The other form, DNA complexes, is used when a compound is combined with a chemical mix in order to produce the desired compound. Other studies currently underway are referred to as hybrid methods because they combine two or more gene therapy techniques, with the aim of making the desired gene more likely to persist through delivery, transfer, and integration.

The manipulation of an organism's genome for a desirable trait is related to the medical procedure of cloning. The process of cloning results in genetically identical organisms, and scientists can use gene therapy vectors to modify DNA to be identical to that of a particular organism. Moreover, the techniques established by the field of gene therapy can potentially be used to create “designer babies”, for example through the use of IVF to assist in creating a genetically designed baby.

Disease control in gene therapy

Gene therapy is being studied for the treatment of a wide variety of acquired and inherited disorders. Retroviruses, adenoviruses, poxviruses, herpesviruses, and others are being engineered to serve as gene therapy vectors and are being administered to patients in a clinical setting.[22] Other genetic disorders that could potentially be addressed in clinical trials include ADA-SCID (the severe combined immune deficiency mentioned earlier), CGD (chronic granulomatous disorder), and haemophilia. These disorders are only a few among the many being discovered. Some of the acquired diseases that can potentially be controlled in a clinical trial with gene therapy are cancer and neurodegenerative diseases such as Parkinson's disease or Huntington's disease.[23]

Ethics and risks

Lee Silver has projected a dystopia in which a race of superior humans look down on those without genetic enhancements, though others have counseled against accepting this vision of the future.[24] It has also been suggested that if designer babies were created through genetic engineering, that this could have deleterious effects on the human gene pool.[25] Some futurists claim that it would put the human species on a path to participant evolution.[24][26] It has also been argued that designer babies may have an important role as counter-acting an argued dysgenic trend.[27]
There are risks associated with genetic modifications to any organism. When focusing on the ethics behind this treatment, medical professionals and clinical ethicists take many factors into consideration. They look at whether or not the goal and outcome of the treatment are supposed to impact an individual and their family lineage or a group of people.[14] The main ethical issue with pure germline modification is that these types of treatments will produce a change that can be passed down to future generations and therefore any error, known or unknown, will also be passed down and will affect the offspring.[13] New diseases may be introduced accidentally.[10][28]

The use of germline modification is justified when it is used to correct genetic problems that cannot be treated with somatic cell therapy, stabilize DNA in a mating that has the potential to be high risk, provide an alternative to the abortion of embryos that are genetically problematic for a family, and increase the incidence of genes that are favorable and desirable.[14] This can ultimately lead to perfected lineages on a genotypic level and possibly a phenotypic level. Ultimately, these issues raise questions about the welfare and identity of individuals that have been genetically modified through the germline.[12]

Safety is a major concern when it comes to gene editing and mitochondrial transfer. Since the effects of germline modification can be passed down to multiple generations, experimentation with this treatment raises many questions and concerns about the ethics of completing this research.[14] If a patient has undergone germline modification treatment, the coming generations, one or two after the initial treatment, will be used as trials to see if the changes in the germline have been successful.[14] This extended waiting time could have harmful implications, since the effect of the treatment is not known until it has been passed down to a few generations. Problems with the gene editing may not appear until after the child with edited genes is born.[29] If the patient assumes the risk alone, consent may be given for the treatment, but it is less justified when it comes to giving consent for future generations.[14] On a larger scale, germline modification has the potential to impact the gene pool of the entire human race in a negative or positive way.[12] Germline modification is considered a more ethically and morally acceptable treatment when a patient is a carrier for a harmful trait and is treated to improve the genotype and safety of future generations.[14] When the treatment is used for this purpose, it can fill gaps that other technologies may not be able to address.[12]

Since experimentation of the germline occurs directly on embryos, there is a major ethical deliberation on experimenting with fertilized eggs and embryos and killing the flawed ones.[14] The embryo cannot give consent and some of the treatments have long-lasting and harmful implications.[14] In many countries, editing embryos and germline modification for reproductive use is illegal.[30] As of 2017, the United States of America restricts the use of germline modification and the procedure is under heavy regulation by the FDA and NIH.[30] The American National Academy of Sciences and National Academy of Medicine gave qualified support to human genome editing in 2017 once answers have been found to safety and efficiency problems "but only for serious conditions under stringent oversight."[31] Germline modification would be more practical if sampling methods were less destructive and used the polar bodies rather than embryos.[14]

Saturday, April 21, 2018

Eternal inflation

From Wikipedia, the free encyclopedia

Eternal inflation is a hypothetical inflationary universe model, which is itself an outgrowth or extension of the Big Bang theory.

According to eternal inflation, the inflationary phase of the universe's expansion lasts forever throughout most of the universe. Because the regions expand exponentially rapidly, most of the volume of the universe at any given time is inflating. Eternal inflation, therefore, produces a hypothetically infinite multiverse, in which only an insignificant fractal volume ends inflation.
Paul Steinhardt, one of the original architects of the inflationary model, introduced the first example of eternal inflation in 1983,[1] and Alexander Vilenkin showed that it is generic.[2]

Alan Guth's 2007 paper, "Eternal inflation and its implications",[3] states that under reasonable assumptions "Although inflation is generically eternal into the future, it is not eternal into the past." Guth detailed what was known about the subject at the time, and demonstrated that eternal inflation was still considered the likely outcome of inflation, more than 20 years after eternal inflation was first introduced by Steinhardt.

Overview

Development of the theory

Inflation, or the inflationary universe theory, was originally developed as a way to overcome the few remaining problems with what was otherwise considered a successful theory of cosmology, the Big Bang model.

In 1979, Alan Guth introduced the inflationary model of the universe to explain why the universe is flat and homogeneous (which refers to the smooth distribution of matter and radiation on a large scale).[4] The basic idea was that the universe underwent a period of rapidly accelerating expansion a few instants after the Big Bang. He offered a mechanism for causing the inflation to begin: false vacuum energy. Guth coined the term "inflation," and was the first to discuss the theory with other scientists worldwide.

Guth's original formulation was problematic, as there was no consistent way to bring an end to the inflationary epoch and end up with the hot, isotropic, homogeneous universe observed today. Although the false vacuum could decay into empty "bubbles" of "true vacuum" that expanded at the speed of light, the empty bubbles could not coalesce to reheat the universe, because they could not keep up with the remaining inflating universe.

In 1982, this "graceful exit problem" was solved independently by Andrei Linde and by Andreas Albrecht and Paul J. Steinhardt[5] who showed how to end inflation without making empty bubbles and, instead, end up with a hot expanding universe. The basic idea was to have a continuous "slow-roll" or slow evolution from false vacuum to true without making any bubbles. The improved model was called "new inflation."

In 1983, Paul Steinhardt was the first to show that this "new inflation" does not have to end everywhere.[1] Instead, it might only end in a finite patch or a hot bubble full of matter and radiation, and that inflation continues in most of the universe while producing hot bubble after hot bubble along the way. Alexander Vilenkin showed that when quantum effects are properly included, this is actually generic to all new inflation models.[2]

Using ideas introduced by Steinhardt and Vilenkin, Andrei Linde published an alternative model of inflation in 1986 which used these ideas to provide a detailed description of what has become known as the Chaotic Inflation theory or eternal inflation.[6]

Quantum fluctuations

New inflation does not produce a perfectly symmetric universe due to quantum fluctuations during inflation. The fluctuations cause the energy and matter density to be different in different points in space.

Quantum fluctuations in the hypothetical inflaton field produce changes in the rate of expansion that are responsible for eternal inflation. Those regions with a higher rate of inflation expand faster and dominate the universe, despite the natural tendency of inflation to end in other regions. This allows inflation to continue forever, to produce future-eternal inflation. As a simplified example, suppose that during inflation, the natural decay rate of the inflaton field is slow compared to the effect of quantum fluctuation. When a mini-universe inflates and "self-reproduces" into, say, twenty causally-disconnected mini-universes of equal size to the original mini-universe, perhaps nine of the new mini-universes will have a larger, rather than smaller, average inflaton field value than the original mini-universe, because they inflated from regions of the original mini-universe where quantum fluctuation pushed the inflaton value up more than the slow inflation decay rate brought the inflaton value down. Originally there was one mini-universe with a given inflaton value; now there are nine mini-universes that have a slightly larger inflaton value. (Of course, there are also eleven mini-universes where the inflaton value is slightly lower than it originally was.) Each mini-universe with the larger inflaton field value restarts a similar round of approximate self-reproduction within itself. (The mini-universes with lower inflaton values may also reproduce, unless their inflaton value is small enough that the region drops out of inflation and ceases self-reproduction.) This process continues indefinitely; nine high-inflaton mini-universes might become 81, then 729... Thus, there is eternal inflation.[7]
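
The branching arithmetic in this toy example can be followed with a short counting sketch; the 20-way split and the 9-out-of-20 fraction are the illustrative numbers from the paragraph above, not physical values.

    # The toy self-reproduction arithmetic from the paragraph above (illustrative numbers only).
    high, total = 1, 1
    for generation in range(1, 4):
        total *= 20               # every mini-universe splits into 20 descendants
        high *= 9                 # 9 of each 20 end up with a larger inflaton value
        print(f"generation {generation}: {high} high-inflaton regions out of {total} total")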

In 1980, quantum fluctuations were suggested by Viatcheslav Mukhanov and Gennady Chibisov[8][9] in the Soviet Union in the context of a model of modified gravity by Alexei Starobinsky[10] to be possible seeds for forming galaxies.

In the context of inflation, quantum fluctuations were first analyzed at the three-week 1982 Nuffield Workshop on the Very Early Universe at Cambridge University.[11] The average strength of the fluctuations was first calculated by four groups working separately over the course of the workshop: Stephen Hawking;[12] Starobinsky;[13] Guth and So-Young Pi;[14] and James M. Bardeen, Paul Steinhardt and Michael Turner.[15]

The early calculations derived at the Nuffield Workshop only focused on the average fluctuations, whose magnitude is too small to affect inflation. However, beginning with the examples presented by Steinhardt[1] and Vilenkin,[2] the same quantum physics was later shown to produce occasional large fluctuations that increase the rate of inflation and keep inflation going eternally.

Further developments

In analyzing the Planck Satellite data from 2013, Anna Ijjas and Paul Steinhardt showed that the simplest textbook inflationary models were eliminated and that the remaining models require exponentially more tuned starting conditions, more parameters to be adjusted, and less inflation. Later Planck observations reported in 2015 confirmed these conclusions.[16][17]

A 2014 paper by Kohli and Haslam called into question the viability of the eternal inflation theory, by analyzing Linde's chaotic inflation theory in which the quantum fluctuations are modeled as Gaussian white noise.[18] They showed that in this popular scenario, eternal inflation in fact cannot be eternal, and the random noise leads to spacetime being filled with singularities. This was demonstrated by showing that solutions to the Einstein field equations diverge in a finite time. Their paper therefore concluded that the theory of eternal inflation based on random quantum fluctuations would not be a viable theory, and the resulting existence of a multiverse is "still very much an open question that will require much deeper investigation".

Inflation, eternal inflation, and the multiverse

In 1983, it was shown that inflation could be eternal, leading to a multiverse in which space is broken up into bubbles or patches whose properties differ from patch to patch spanning all physical possibilities.

Paul Steinhardt, who produced the first example of eternal inflation,[1] eventually became a strong and vocal opponent of the theory. He argued that the multiverse represented a breakdown of the inflationary theory, because, in a multiverse, any outcome is equally possible, so inflation makes no predictions and, hence, is untestable. Consequently, he argued, inflation fails a key condition for a scientific theory.[19][20]

Both Linde and Guth, however, continued to support the inflationary theory and the multiverse. Guth declared:
It's hard to build models of inflation that don't lead to a multiverse. It's not impossible, so I think there's still certainly research that needs to be done. But most models of inflation do lead to a multiverse, and evidence for inflation will be pushing us in the direction of taking the idea of a multiverse seriously.[21]
According to Linde, "It's possible to invent models of inflation that do not allow a multiverse, but it's difficult. Every experiment that brings better credence to inflationary theory brings us much closer to hints that the multiverse is real."[21]

Friday, April 20, 2018

Global Warming Alarmists Caught Doctoring '97-Percent Consensus' Claims

From https://www.forbes.com/sites/jamestaylor/2013/05/30/global-warming-alarmists-caught-doctoring-97-percent-consensus-claims/#277ab972485d



Global warming alarmists and their allies in the liberal media have been caught doctoring the results of a widely cited paper asserting there is a 97-percent scientific consensus regarding human-caused global warming. After taking a closer look at the paper, investigative journalists report the authors’ claims of a 97-percent consensus relied on the authors misclassifying the papers of some of the world’s most prominent global warming skeptics. At the same time, the authors deliberately presented a meaningless survey question so they could twist the responses to fit their own preconceived global warming alarmism.

Global warming alarmist John Cook, founder of the misleadingly named blog site Skeptical Science, published a paper with several other global warming alarmists claiming they reviewed nearly 12,000 abstracts of studies published in the peer-reviewed climate literature. Cook reported that he and his colleagues found that 97 percent of the papers that expressed a position on human-caused global warming “endorsed the consensus position that humans are causing global warming.”


As is the case with other ‘surveys’ alleging an overwhelming scientific consensus on global warming, the question surveyed had absolutely nothing to do with the issues of contention between global warming alarmists and global warming skeptics. The question Cook and his alarmist colleagues surveyed was simply whether humans have caused some global warming. The question is meaningless regarding the global warming debate because most skeptics as well as most alarmists believe humans have caused some global warming. The issue of contention dividing alarmists and skeptics is whether humans are causing global warming of such negative severity as to constitute a crisis demanding concerted action.


Either through idiocy, ignorance, or both, global warming alarmists and the liberal media have been reporting that the Cook study shows a 97 percent consensus that humans are causing a global warming crisis. However, that was clearly not the question surveyed.

Investigative journalists at Popular Technology looked into precisely which papers were classified within Cook’s asserted 97 percent. The investigative journalists found Cook and his colleagues strikingly classified papers by such prominent, vigorous skeptics as Willie Soon, Craig Idso, Nicola Scafetta, Nir Shaviv, Nils-Axel Morner and Alan Carlin as supporting the 97-percent consensus.

Cook and his colleagues, for example, classified a peer-reviewed paper by scientist Craig Idso as explicitly supporting the ‘consensus’ position on global warming “without minimizing” the asserted severity of global warming. When Popular Technology asked Idso whether this was an accurate characterization of his paper, Idso responded, “That is not an accurate representation of my paper. The papers examined how the rise in atmospheric CO2 could be inducing a phase advance in the spring portion of the atmosphere's seasonal CO2 cycle. Other literature had previously claimed a measured advance was due to rising temperatures, but we showed that it was quite likely the rise in atmospheric CO2 itself was responsible for the lion's share of the change. It would be incorrect to claim that our paper was an endorsement of CO2-induced global warming."

When Popular Technology asked physicist Nicola Scafetta whether Cook and his colleagues accurately classified one of his peer-reviewed papers as supporting the ‘consensus’ position, Scafetta similarly criticized the Skeptical Science classification.

“Cook et al. (2013) is based on a straw man argument because it does not correctly define the IPCC AGW theory, which is NOT that human emissions have contributed 50%+ of the global warming since 1900 but that almost 90-100% of the observed global warming was induced by human emission,” Scafetta responded. “What my papers say is that the IPCC [United Nations Intergovernmental Panel on Climate Change] view is erroneous because about 40-70% of the global warming observed from 1900 to 2000 was induced by the sun.”

“What it is observed right now is utter dishonesty by the IPCC advocates. … They are gradually engaging into a metamorphosis process to save face. … And in this way they will get the credit that they do not merit, and continue in defaming critics like me that actually demonstrated such a fact since 2005/2006,” Scafetta added.

Astrophysicist Nir Shaviv similarly objected to Cook and colleagues claiming he explicitly supported the ‘consensus’ position about human-induced global warming. Asked if Cook and colleagues accurately represented his paper, Shaviv responded, “Nope... it is not an accurate representation. The paper shows that if cosmic rays are included in empirical climate sensitivity analyses, then one finds that different time scales consistently give a low climate sensitivity. i.e., it supports the idea that cosmic rays affect the climate and that climate sensitivity is low. This means that part of the 20th century [warming] should be attributed to the increased solar activity and that 21st century warming under a business as usual scenario should be low (about 1°C).”

“I couldn't write these things more explicitly in the paper because of the refereeing, however, you don't have to be a genius to reach these conclusions from the paper," Shaviv added.

To manufacture their misleading asserted consensus, Cook and his colleagues also misclassified various papers as taking “no position” on human-caused global warming. When Cook and his colleagues determined a paper took no position on the issue, they simply pretended, for the purpose of their 97-percent claim, that the paper did not exist.
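
For readers who want to see how much the headline figure depends on that choice of denominator, here is a minimal arithmetic sketch in Python. It is not part of the original critique; the counts are approximations of the figures reported in the Cook et al. (2013) abstract (roughly 11,944 abstracts, about two-thirds rated as taking no position), used only to illustrate the effect of dropping "no position" papers from the denominator.

# Minimal sketch (not from the original article) of the denominator effect.
# Counts are approximations of figures reported in Cook et al. (2013).
total_abstracts = 11_944
no_position = int(total_abstracts * 0.664)        # rated as taking no position
endorse = int(total_abstracts * 0.326)            # rated as endorsing human-caused warming
reject_or_uncertain = total_abstracts - no_position - endorse

# Headline framing: "no position" abstracts are excluded from the denominator.
share_of_position_takers = endorse / (endorse + reject_or_uncertain)

# Alternative framing: share of all abstracts surveyed.
share_of_all_abstracts = endorse / total_abstracts

print(f"Endorsing, among abstracts taking a position: {share_of_position_takers:.1%}")  # ~97%
print(f"Endorsing, among all abstracts surveyed:      {share_of_all_abstracts:.1%}")    # ~33%

Run as written, the first figure lands near 97 percent and the second near one-third, which is the distinction the paragraphs above and below turn on.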

Morner, a sea level scientist, told Popular Technology that Cook classifying one of his papers as “no position” was "Certainly not correct and certainly misleading. The paper is strongly against AGW [anthropogenic global warming], and documents its absence in the sea level observational facts. Also, it invalidates the mode of sea level handling by the IPCC."

Soon, an astrophysicist, similarly objected to Cook classifying his paper as “no position.”
"I am sure that this rating of no position on AGW by CO2 is nowhere accurate nor correct,” said Soon.

“I hope my scientific views and conclusions are clear to anyone that will spend time reading our papers. Cook et al. (2013) is not the study to read if you want to find out about what we say and conclude in our own scientific works,” Soon emphasized.

Viewing the Cook paper in the best possible light, Cook and colleagues can perhaps claim a small amount of wiggle room in their classifications because the explicit wording of the question they analyzed is simply whether humans have caused some global warming. By restricting the survey to such a minimalist question, one largely irrelevant to the global warming debate, and then demanding an explicit, unsolicited refutation of the assertion before classifying a paper as a ‘consensus’ contrarian, Cook and colleagues misleadingly induce people to believe 97 percent of publishing scientists believe in a global warming crisis when that is simply not the case.

Misleading the public about consensus opinion regarding global warming, of course, is precisely what the Cook paper sought to accomplish. This is a tried and true ruse perfected by global warming alarmists. Global warming alarmists use their own biased, subjective judgment to misclassify published papers according to criteria that are largely irrelevant to the central issues in the global warming debate. Then, by carefully parsing the language of their survey questions and their published results, the alarmists encourage the media and fellow global warming alarmists to cite these biased, subjective, totally irrelevant surveys as conclusive evidence for the lie that nearly all scientists believe humans are creating a global warming crisis.

These biased, misleading, and totally irrelevant “surveys” form the best “evidence” global warming alarmists can muster in the global warming debate. And this truly shows how embarrassingly feeble their alarmist theory really is.

Two Degree Temperature Target Has Little Scientific Basis

The two degree temperature target (beyond which, supposedly, we will face an existential climate crisis) is inaccurate, irrelevant, and vague. It appears to be based on the claim that modern humans (Homo sapiens) never existed when the average global temperature was two degrees above the mid-nineteenth-century level, and that, since this would be an "unprecedented" state of affairs, it must therefore lead to catastrophe.

But, first, it is incorrect, as shown below: modern humans evolved before the last interglacial period, the Eemian, which was at times at least a degree warmer than the present, and perhaps more. Second, it is a non sequitur, for two reasons: one, an unprecedented global climate condition does not logically or scientifically predict catastrophe; two, human beings do not live in an average global climate but in a local one. There is little doubt, for example, that Europe and the North Atlantic climate region have undergone temperature swings of two degrees and more during the Holocene, yet the millions who lived there during that period survived and even thrived -- despite possessing little more than Stone Age technology for much of that time, while being subject to famines, droughts, contagions, warfare, and invasions -- conditions which no longer prevail even today, let alone 50 or 100 years from now.

That the claim is vague, and lacking in any scientific specificity, I hope is clear.



Prof. Roger Pielke Jr. on origins of 2 degree temp target: ‘Has little scientific basis’



Via: Roger Pielke Jr.’s The Climate Fix website: https://theclimatefix.wordpress.com/2017/09/18/pielke-on-climate-5/

Do you want to know the origins of the 2 degree temperature target that underpins much of climate policy discussions and action?
  • As is often the case, it is an arbitrary round number that was politically convenient. So it became a sort of scientific truth. However, it has little scientific basis but is a hard political reality.
  • Jaeger and Jaeger (2011) explain that it came from “a marginal remark in an early paper about climate policy”
  • That “marginal remark” appeared in a 1975 working paper by economist William Nordhaus (here in PDF; a second version is from 1977, with the figure shown below). At p. 23: “If there were global temperatures of more than 2 or 3 C. above the current average temperature, this would take the climate outside of the range of observations which have been made over the last several hundred thousand years.”

    [Figure: Nordhaus (1977)]
  • Nordhaus’ claim was sourced to climatologist Hubert Lamb (1972), who in turn calculated long-term variations in temperature based on records kept in Central England.
  • So: The 2 degree temperature target that sits at the center of current climate policy discussions originated in a local, long-term record of temperature variation in England, which was adapted by an economist in a “what if?” exercise.
  • The 2 degree target is today far more politically “real” than its grounding in science or policy. That won’t change, but it is nonetheless a fascinating look at the arbitrariness of policy and at how the way issues are framed shapes which options are deemed relevant and appropriate.
  • As an example, check out this paper just out today in Nature — it argues that we can emit more than we thought and still hit a 1.5 degree temperature target. People will argue about the results, many because of the paper’s perceived political implications. But this argument is only tenuously related to actual energy policies; instead, it is related to how we should think about arguments that might be used to motivate people to think about energy policies and thus demand action, and so on. Tenuous, like I said.
Related Links: 
Flashback Climategate emails: Phil Jones says critical 2-degree C limit was ‘plucked out of thin air’
German Scientists: ‘2°C Target Purely Political’ – Prof. Dr. Christian Schönwiese told German public television: ‘They formulated a 2°C target. It is not from a climate scientist, or a physicist, or a chemist, but from an outside person who simply plucked it out of thin air and said ‘2°C’

Warmist father Hans Joachim Schellnhuber of 2C temperature limit admits it’s ‘a political goal’– Hans Joachim Schellnhuber, a top German climate scientist who helped establish the 2-degree threshold, stressed it was a policy marker: “Two degrees is not a magical limit — it’s clearly a political goal,” says Hans Joachim Schellnhuber, director of the Potsdam Institute for Climate Impact Research (PIK). “The world will not come to an end right away in the event of stronger warming, nor are we definitely saved if warming is not as significant. The reality, of course, is much more complicated.” Schellnhuber ought to know. He is the father of the two-degree target. “Yes, I plead guilty,” he says, smiling. The idea didn’t hurt his career. In fact, it made him Germany’s most influential climatologist. Schellnhuber, a theoretical physicist, became Chancellor Angela Merkel’s chief scientific adviser — a position any researcher would envy.

Out of Africa: When Did Prehistoric Humans Actually Leave—and Where Did They Go?



Scientists have discovered the oldest human fossil ever found outside of Africa in Misliya Cave, Israel. The find means our current timing for human migration—and evolution—could be off by at least 50,000 years.

So when did humans really start exploring the rest of the world?
 
The jawbone fossil was found in Misliya Cave in Israel.
Scientists think our modern human species (Homo sapiens) emerged approximately 200,000 years ago in Africa. In the 1980s, fossil and DNA evidence pointed to the continent as the cradle of humanity.

Where humans went next, however, is still a big mystery.

Out of Africa

The traditional “Out of Africa” model holds that humans first traveled from the continent between 130,000 and 115,000 years ago, toward the Middle East.

The newly discovered fossil is estimated to be between 170,000 and 190,000 years old. Before now, the earliest remains found in Israel were dated between 90,000 and 120,000 years old. This means humans reached the region at least 50,000 years earlier than expected.

Chris Stringer of London’s Natural History Museum, who was not involved in the latest discovery, told the BBC: “The find breaks the long-established 130,000-year-old limit on modern humans outside of Africa.... The new dating hints that there could be even older Homo sapien finds to come from the region of western Asia.”

Moving back the date of that first migration has big consequences for our understanding of human evolution. “The entire narrative of the evolution of Homo sapiens must be pushed back by at least 100,000 to 200,000 years,” study author Israel Hershkovitz from Tel Aviv University explained in a statement. “In other words, if modern humans started traveling out of Africa some 200,000 years ago, it follows that they must have originated in Africa at least 300,000 to 500,000 years ago.”
 
A micro-CT reconstruction of the jawbone fossil found in Misliya Cave in Israel. Scientists believe the jawbone to be the oldest human fossil ever found outside of Africa. The discovery means our current timings for human migration—and evolution—could be off by at least 50,000 years. Gerhard Weber/University of Vienna

What path did early humans take?

The early excursions into Eurasia responsible for the Misliya fossil likely ended in extinction. Scientists had believed a second exodus occurred about 60,000 years ago.

This idea was brought under scrutiny last year, when a team of scientists reviewed human bones from China. The bones were estimated to be up to 120,000 years old. The team argued that multiple dispersals might explain a growing body of evidence finding humans in the wrong place, at the wrong time.

They produced a map (below) describing the human journey from Africa as a series of smaller migrations around the globe.
 
This map shows the early human migration charted by researchers. It reflects the human journey from Africa as a series of smaller migrations around the globe. Katerina Douka/Michelle O'Reilly/Science
The question of when humans left Africa is far from solved. The Misliya jawbone has once again thrown the question of human origins wide open.

Hershkovitz explains: “This finding—that early modern humans were present outside of Africa earlier than commonly believed—completely changes our view on modern human dispersal and the history of modern human evolution.”

Every Black Hole Contains Another Universe – Equations Predict

Original article posted on National Geographic.
 
Like part of a cosmic Russian doll, our universe may be perfectly nested inside a black hole that is itself part of a larger universe. In turn, all the black holes found so far in our universe—from the microscopic to the supermassive—may be ultimate doorways into alternate realities.
According to a mind-bending new theory, a black hole is actually a tunnel between universes—a type of wormhole. The matter the black hole attracts doesn’t collapse into a single point, as has been predicted, but rather gushes out a “white hole” at the other end of the black one, the theory goes.

In a paper published in the journal Physics Letters B, Indiana University physicist Nikodem Poplawski presents new mathematical models of the spiraling motion of matter falling into a black hole. His equations suggest such wormholes are viable alternatives to the “space-time singularities” that Albert Einstein predicted to be at the centers of black holes. According to Einstein’s equations for general relativity, singularities are created whenever matter in a given region gets too dense, as would happen at the ultra-dense heart of a black hole. 
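
As a point of reference that the article does not spell out, "too dense" has a precise classical meaning: general relativity predicts a black hole, and with it a central singularity, once a mass M is compressed inside its Schwarzschild radius,

r_s = \frac{2 G M}{c^2},

which works out to roughly 3 kilometers for the mass of the Sun. Inside that radius, classical general relativity predicts unchecked collapse toward a point of infinite density, which is the singularity Poplawski’s model seeks to replace.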

Einstein’s theory suggests singularities take up no space, are infinitely dense, and are infinitely hot—a concept supported by numerous lines of indirect evidence but still so outlandish that many scientists find it hard to accept. If Poplawski is correct, they may no longer have to. According to the new equations, the matter black holes absorb and seemingly destroy is actually expelled and becomes the building blocks for galaxies, stars, and planets in another reality.

The notion of black holes as wormholes could explain certain mysteries in modern cosmology, Poplawski said. For example, the big bang theory says the universe started as a singularity. But scientists have no satisfying explanation for how such a singularity might have formed in the first place. If our universe was birthed by a white hole instead of a singularity, Poplawski said:
“It would solve this problem of black hole singularities and also the big bang singularity.”

Wormholes might also explain gamma ray bursts, the second most powerful explosions in the universe after the big bang. Gamma ray bursts occur at the fringes of the known universe. They appear to be associated with supernovae, or star explosions, in faraway galaxies, but their exact sources are a mystery.

Poplawski proposes that the bursts may be discharges of matter from alternate universes. The matter, he says, might be escaping into our universe through supermassive black holes—wormholes—at the hearts of those galaxies, though it’s not clear how that would be possible. The wormhole theory may also help explain why certain features of our universe deviate from what theory predicts, according to physicists. 
“It’s kind of a crazy idea, but who knows?” he said. There is at least one way to test Poplawski’s theory: Some of our universe’s black holes rotate, and if our universe was born inside a similarly revolving black hole, then our universe should have inherited the parent object’s rotation. If future experiments reveal that our universe appears to rotate in a preferred direction, it would be indirect evidence supporting his wormhole theory, Poplawski said.
Based on the standard model of physics, after the big bang the curvature of the universe should have increased over time so that now—13.7 billion years later—we should seem to be sitting on the surface of a closed, spherical universe. But observations show the universe appears flat in all directions.
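
For readers who want the textbook form of this "flatness problem" (it is standard cosmology, not something stated in the article), the Friedmann equation can be rearranged to

\Omega(t) - 1 = \frac{k c^2}{a(t)^2 H(t)^2},

where a is the cosmic scale factor, H the Hubble rate, and k the curvature constant. During ordinary decelerating expansion the product aH shrinks, so any small departure of \Omega from 1 grows with time; seeing a nearly flat universe today therefore requires the early universe to have been extraordinarily close to flat, unless something such as inflation drove it toward flatness.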

What’s more, data on light from the very early universe show that everything just after the big bang was a fairly uniform temperature. That would mean that the farthest objects we see on opposite horizons of the universe were once close enough to interact and come to equilibrium, like molecules of gas in a sealed chamber. 

Again, observations don’t match predictions, because the objects farthest from each other in the known universe are so far apart that the time it would take to travel between them at the speed of light exceeds the age of the universe. The theory of cosmic inflation was proposed to resolve puzzles like these: it holds that shortly after the universe was created, it experienced a rapid growth spurt during which space itself expanded at faster-than-light speeds. The expansion stretched the universe from a size smaller than an atom to astronomical proportions in a fraction of a second.

The universe therefore appears flat, because the sphere we’re sitting on is extremely large from our viewpoint—just as the sphere of Earth seems flat to someone standing in a field. Inflation also explains how objects so far away from each other might have once been close enough to interact. But—assuming inflation is real—astronomers have always been at pains to explain what caused it. That’s where the new wormhole theory comes in.

According to Poplawski, some theories of inflation say the event was caused by “exotic matter,” a theoretical substance that differs from normal matter, in part because it is repelled rather than attracted by gravity. Based on his equations, Poplawski thinks such exotic matter might have been created when some of the first massive stars collapsed and became wormholes.
“There may be some relationship between the exotic matter that forms wormholes and the exotic matter that triggered inflation,” he said.
The new model isn’t the first to propose that other universes exist inside black holes. Damien Easson, a theoretical physicist at Arizona State University, has made the speculation in previous studies.
“What is new here is an actual wormhole solution in general relativity that acts as the passage from the exterior black hole to the new interior universe. In our paper, we just speculated that such a solution could exist, but Poplawski has found an actual solution,” said Easson, who was not involved in the new study, referring to Poplawski’s equations. Nevertheless, the idea is still very speculative, Easson said in an email.
“Is the idea possible? Yes. Is the scenario likely? I have no idea. But it is certainly an interesting possibility. Future work in quantum gravity—the study of gravity at the subatomic level—could refine the equations and potentially support or disprove Poplawski’s theory,” Easson said.

Overall, the wormhole theory is interesting, but not a breakthrough in explaining the origins of our universe, said Andreas Albrecht, a physicist at the University of California, Davis, who was also not involved in the new study. By saying our universe was created by a gush of matter from a parent universe, the theory simply shifts the original creation event into an alternate reality. In other words, it doesn’t explain how the parent universe came to be or why it has the properties it has—properties our universe presumably inherited.
“There’re really some pressing problems we’re trying to solve, and it’s not clear that any of this is offering a way forward with that,” he said.
Still, Albrecht doesn’t find the idea of universe-bridging wormholes any stranger than the idea of black hole singularities, and he cautions against dismissing the new theory just because it sounds a little out there.
“Everything people ask in this business is pretty weird,” he said. “You can’t say the less weird [idea] is going to win, because that’s not the way it’s been, by any means.”

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...