
Tuesday, July 29, 2014

New theory says the Universe isn't expanding — it's just getting fat

Conventional thinking says the Universe has been expanding ever since the Big Bang. But theoretical astrophysicist Christof Wetterich says it's not expanding at all. It’s just that the mass of all the particles within it is steadily increasing.
Top Image: T. Piner.
We think the Universe is expanding because the galaxies within it are all moving away from one another. Scientists infer this from the redshift — a Doppler-like shift in the characteristic frequencies at which atoms emit or absorb light. When those frequencies appear shifted toward the red end of the spectrum, it indicates the source is receding from us. Distant galaxies exhibit this redshift, which is why scientists say the Universe is expanding.
But Wetterich, who works out of the University of Heidelberg in Germany, notes that the characteristic light emitted by atoms is also governed by the masses of the atoms’ elementary particles, particularly their electrons.
Writing in Nature News, Jon Cartwright explains:
If an atom were to grow in mass, the photons it emits would become more energetic. Because higher energies correspond to higher frequencies, the emission and absorption frequencies would move towards the blue part of the spectrum. Conversely, if the particles were to become lighter, the frequencies would become redshifted.
Because the speed of light is finite, when we look at distant galaxies we are looking backwards in time — seeing them as they would have been when they emitted the light that we observe. If all masses were once lower, and had been constantly increasing, the colours of old galaxies would look redshifted in comparison to current frequencies, and the amount of redshift would be proportionate to their distances from Earth. Thus, the redshift would make galaxies seem to be receding even if they were not.
Work through the maths in this alternative interpretation of redshift, and all of cosmology looks very different. The Universe still expands rapidly during a short-lived period known as inflation. But prior to inflation, according to Wetterich, the Big Bang no longer contains a 'singularity' where the density of the Universe would be infinite. Instead, the Big Bang stretches out in the past over an essentially infinite period of time. And the current cosmos could be static, or even beginning to contract.
Whoa. That is a radically different picture of the Universe than what we're used to.
Unfortunately, there’s no way for us to test this. Well, at least not yet.
But Wetterich says his theory is useful for thinking about different cosmological models. And indeed, it may offer some fresh insights into the spooky dark energy that's apparently pushing the Universe outwards at an accelerating rate.

Read the entire study — which has not yet been peer reviewed — at arXiv: “A Universe without expansion.” And as Cartwright notes in his article, other physicists don't hate the idea.

To AGW Doubters, Skeptics, "Deniers", and Anyone Interested in the Science Behind Global Warming

Almost all of what follows comes from Wikipedia, but as it agrees with my own knowledge of chemistry and physics from many sources over the years, it makes a good, if sometimes hard-to-follow, summary of the science behind anthropogenic CO2-enhanced global warming. It is theory, however, so how much warming it has caused in the Earth's atmosphere over the last ~150 years, and the climatological consequences of that warming, are left to scientific debate.
___________________________________________________________

We start with Svante Arrhenius, in the late 19th century:

Greenhouse effect

Arrhenius developed a theory to explain the ice ages, and in 1896 he was the first scientist to attempt to calculate how changes in the levels of carbon dioxide in the atmosphere could alter the surface temperature through the greenhouse effect.[8] He was influenced by the work of others, including Joseph Fourier and John Tyndall. Arrhenius used the infrared observations of the moon by Frank Washington Very and Samuel Pierpont Langley at the Allegheny Observatory in Pittsburgh to calculate the absorption of infrared radiation by atmospheric CO2 and water vapour. Using 'Stefan's law' (better known as the Stefan-Boltzmann law), he formulated his greenhouse law. In its original form, Arrhenius' greenhouse law reads as follows:
if the quantity of carbonic acid [CO2] increases in geometric progression, the augmentation of the temperature will increase nearly in arithmetic progression.
The following equivalent formulation of Arrhenius' greenhouse law is still used today:[9]
ΔF = α ln(C/C_0)
Here C is carbon dioxide (CO2) concentration measured in parts per million by volume (ppmv); C_0 denotes a baseline or unperturbed concentration of CO2, and ΔF is the radiative forcing, measured in watts per square meter. The constant alpha (α) has been assigned a value between five and seven.[9]
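As a quick worked example (my own arithmetic, using α = 5.35, a commonly cited modern value that sits within the five-to-seven range given above): a doubling of CO2 gives
ΔF = α ln(C/C_0) = 5.35 × ln 2 ≈ 3.7 W/m²,
which is the radiative forcing figure commonly quoted for doubled CO2.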
Arrhenius at the first Solvay conference on chemistry in 1922 in Brussels.
 
Based on information from his colleague Arvid Högbom, Arrhenius was the first person to predict that emissions of carbon dioxide from the burning of fossil fuels and other combustion processes were large enough to cause global warming. In his calculation Arrhenius included the feedback from changes in water vapor as well as latitudinal effects, but he omitted clouds, convection of heat upward in the atmosphere, and other essential factors. His work is currently seen less as an accurate prediction of global warming than as the first demonstration that it should be taken as a serious possibility.

Arrhenius' absorption values for CO2 and his conclusions met with criticism from Knut Ångström in 1900, who published the first modern infrared spectrum of CO2 with two absorption bands, and published experimental results that seemed to show that absorption of infrared radiation by the gas in the atmosphere was already "saturated" so that adding more could make no difference. Arrhenius replied strongly in 1901 (Annalen der Physik), dismissing the critique altogether. He touched on the subject briefly in a technical book titled Lehrbuch der kosmischen Physik (1903). He later wrote Världarnas utveckling (1906) (German: Das Werden der Welten [1907], English: Worlds in the Making [1908]), directed at a general audience, where he suggested that the human emission of CO2 would be strong enough to prevent the world from entering a new ice age, and that a warmer earth would be needed to feed the rapidly increasing population:
"To a certain extent the temperature of the earth's surface, as we shall presently see, is conditioned by the properties of the atmosphere surrounding it, and particularly by the permeability of the latter for the rays of heat." (p46)
"That the atmospheric envelopes limit the heat losses from the planets had been suggested about 1800 by the great French physicist Fourier. His ideas were further developed afterwards by Pouillet and Tyndall. Their theory has been styled the hot-house theory, because they thought that the atmosphere acted after the manner of the glass panes of hot-houses." (p51)
 
"If the quantity of carbonic acid [CO2] in the air should sink to one-half its present percentage, the temperature would fall by about 4°; a diminution to one-quarter would reduce the temperature by 8°. On the other hand, any doubling of the percentage of carbon dioxide in the air would raise the temperature of the earth's surface by 4°; and if the carbon dioxide were increased fourfold, the temperature would rise by 8°." (p53)
 
"Although the sea, by absorbing carbonic acid, acts as a regulator of huge capacity, which takes up about five-sixths of the produced carbonic acid, we yet recognize that the slight percentage of carbonic acid in the atmosphere may by the advances of industry be changed to a noticeable degree in the course of a few centuries." (p54)
 
"Since, now, warm ages have alternated with glacial periods, even after man appeared on the earth, we have to ask ourselves: Is it probable that we shall in the coming geological ages be visited by a new ice period that will drive us from our temperate countries into the hotter climates of Africa? There does not appear to be much ground for such an apprehension. The enormous combustion of coal by our industrial establishments suffices to increase the percentage of carbon dioxide in the air to a perceptible degree." (p61)
 
"We often hear lamentations that the coal stored up in the earth is wasted by the present generation without any thought of the future, and we are terrified by the awful destruction of life and property which has followed the volcanic eruptions of our days. We may find a kind of consolation in the consideration that here, as in every other case, there is good mixed with the evil. By the influence of the increasing percentage of carbonic acid in the atmosphere, we may hope to enjoy ages with more equable and better climates, especially as regards the colder regions of the earth, ages when the earth will bring forth much more abundant crops than at present, for the benefit of rapidly propagating mankind." (p63)
Arrhenius clearly believed that a warmer world would be a positive change. His ideas remained in circulation, but until about 1960 most scientists doubted that global warming would occur (believing the oceans would absorb CO2 faster than humanity emitted the gas). Most scientists also dismissed the greenhouse effect as implausible as the cause of ice ages, as Milutin Milankovitch had presented a mechanism using orbital changes of the earth (Milankovitch cycles).
Nowadays, the accepted explanation is that orbital forcing sets the timing for ice ages with CO2 acting as an essential amplifying feedback.

Arrhenius estimated that a halving of CO2 would decrease temperatures by 4–5 °C and that a doubling of CO2 would cause a temperature rise of 5–6 °C.[10] In his 1906 publication, Arrhenius adjusted the value downwards to 1.6 °C (including water vapor feedback: 2.1 °C). Recent (2014) estimates from the IPCC say this value (the climate sensitivity) is likely to be between 1.5 and 4.5 °C. Arrhenius expected CO2 levels to rise at a rate given by emissions in his time. Since then, industrial carbon dioxide levels have risen at a much faster rate: Arrhenius expected CO2 doubling to take about 3,000 years; it is now estimated in most scenarios to take about a century.
___________________________________________________________

And now on to the 20th century, and the quantum-mechanical explanation of why certain gases exhibit the greenhouse effect, i.e., molecular vibrations and infrared radiation absorption:

Molecular vibration

Molecular vibration occurs when atoms in a molecule are in periodic motion while the molecule as a whole has constant translational and rotational motion. The frequency of the periodic motion is known as a vibration frequency, and the typical frequencies of molecular vibrations range from less than 10^12 Hz to approximately 10^14 Hz.

In general, a molecule with N atoms has 3N – 6 normal modes of vibration, but a linear molecule has 3N – 5 such modes, as rotation about its molecular axis cannot be observed.[1] A diatomic molecule has one normal mode of vibration. The normal modes of vibration of polyatomic molecules are independent of each other but each normal mode will involve simultaneous vibrations of different parts of the molecule such as different chemical bonds.
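For example, CO2 is a linear molecule with N = 3 atoms, so it has 3(3) − 5 = 4 normal modes: a symmetric stretch, an asymmetric stretch, and a doubly degenerate bend (the bend can occur in either of two perpendicular planes). As noted under "Intensities" below, only vibrations that change the molecular dipole moment absorb infrared light; for CO2 those are the asymmetric stretch and the bend, which is what makes CO2 a greenhouse gas.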

A molecular vibration is excited when the molecule absorbs a quantum of energy, E, corresponding to the vibration's frequency, ν, according to the relation E = hν (where h is Planck's constant). A fundamental vibration is excited when one such quantum of energy is absorbed by the molecule in its ground state. When two quanta are absorbed the first overtone is excited, and so on to higher overtones.
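As a worked example (my own numbers, using the well-known CO2 bending mode near 667 cm^-1, i.e., ν ≈ 2.0 × 10^13 Hz):
E = hν ≈ (6.626 × 10^-34 J·s)(2.0 × 10^13 Hz) ≈ 1.3 × 10^-20 J,
corresponding to a photon wavelength λ = c/ν ≈ 15 μm, squarely in the thermal infrared that the Earth radiates. This is why CO2's vibrations matter for the greenhouse effect.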

To a first approximation, the motion in a normal vibration can be described as a kind of simple harmonic motion. In this approximation, the vibrational energy is a quadratic function (parabola) with respect to the atomic displacements and the first overtone has twice the frequency of the fundamental. In reality, vibrations are anharmonic and the first overtone has a frequency that is slightly lower than twice that of the fundamental. Excitation of the higher overtones involves progressively less and less additional energy and eventually leads to dissociation of the molecule, as the potential energy of the molecule is more like a Morse potential.

The vibrational states of a molecule can be probed in a variety of ways. The most direct way is through infrared spectroscopy, as vibrational transitions typically require an amount of energy that corresponds to the infrared region of the spectrum. Raman spectroscopy, which typically uses visible light, can also be used to measure vibration frequencies directly. The two techniques are complementary and comparison between the two can provide useful structural information such as in the case of the rule of mutual exclusion for centrosymmetric molecules.

Vibrational excitation can occur in conjunction with electronic excitation (vibronic transition), giving vibrational fine structure to electronic transitions, particularly with molecules in the gas state.

In the harmonic approximation the potential energy is a quadratic function of the normal coordinates. Solving the Schrödinger wave equation, the energy states for each normal coordinate are given by
E_n = h\left(n + \tfrac{1}{2}\right)\nu = h\left(n + \tfrac{1}{2}\right)\frac{1}{2\pi}\sqrt{\frac{k}{m}},
where n is a quantum number that can take values of 0, 1, 2 ... In molecular spectroscopy where several types of molecular energy are studied and several quantum numbers are used, this vibrational quantum number is often designated as v.[7][8]

The difference in energy when n (or v) changes by 1 is therefore equal to hν, the product of the Planck constant and the vibration frequency derived using classical mechanics. For a transition from level n to level n + 1 due to absorption of a photon, the frequency of the photon is equal to the classical vibration frequency ν (in the harmonic oscillator approximation).

See quantum harmonic oscillator for graphs of the first 5 wave functions, which allow certain selection rules to be formulated. For example, for a harmonic oscillator transitions are allowed only when the quantum number n changes by one,
Δn = ±1
but this does not apply to an anharmonic oscillator; the observation of overtones is only possible because vibrations are anharmonic. Another consequence of anharmonicity is that transitions such as between states n=2 and n=1 have slightly less energy than transitions between the ground state and first excited state. Such a transition gives rise to a hot band.

Intensities

In an infrared spectrum the intensity of an absorption band is proportional to the derivative of the molecular dipole moment with respect to the normal coordinate.[9] The intensity of Raman bands depends on polarizability.

The commonly illustrated normal modes: symmetrical stretching, asymmetrical stretching, scissoring (bending), rocking, wagging, and twisting.

Americans' Attitudes Toward Muslims And Arabs Are Getting Worse, Poll Finds


WASHINGTON -- Americans were outraged to learn they were being spied on by the National Security Agency, but many support law enforcement profiling of Muslims, according to a poll released Tuesday by the Arab American Institute.

The survey, conducted by Zogby Analytics for the advocacy group, found that 42 percent of Americans believe law enforcement is justified in using profiling tactics against Muslim-Americans and Arab-Americans. The survey also shows American attitudes toward Arab-Americans and Muslim-Americans have turned for the worse since the Arab American Institute first began polling on the subject in 2010. The new poll found favorability toward Arab-Americans at 36 percent, down from 43 percent in 2010. For Muslim-Americans, favorability was just 27 percent, compared with 36 percent in 2010.

Recent news headlines associated with Muslims have focused on the ongoing civil war in Syria; the rise of ISIS, or the Islamic State in Iraq and the Levant, in Iraq; the abduction of Nigerian schoolgirls by the Islamist group Boko Haram; and the 2012 terrorist attack on a U.S. diplomatic mission in Benghazi, Libya.

"The way forward is clear," the pollsters wrote in the survey's executive summary. "Education about and greater exposure to Arab Americans and American Muslims are the keys both to greater understanding of these growing communities of American citizens and to ensuring that their rights are secured."

The poll found a growing number of Americans doubt that Muslim-Americans or Arab-Americans would be able to perform in a government post without their ethnicity or religion affecting their work. Thirty-six percent of respondents felt that Arab-Americans would be influenced by their ethnicity, and 42 percent said Muslim-Americans would be influenced by religion.

Results differed by political party, with the majority of Republicans holding negative views of both Arab-Americans and Muslims. Democrats gave Arab-Americans a 30 percent unfavorable rating and Muslim-Americans a 33 percent unfavorable rating, while Republicans gave Arab-Americans a 54 percent unfavorable rating and Muslim-Americans a 63 percent unfavorable rating.

Similarly, Republicans were more likely to think that Arab-Americans and Muslim-Americans would be unable to hold a role in government without being influenced by ethnicity or religion. Fifty-seven percent of Republicans said they believed Muslim-Americans would be influenced by their religion, while half said the same for Arab-Americans. Almost half of Democrats said they were confident Muslim-Americans and Arab-Americans could do their jobs without influence.

The survey also showed a generational gap in attitudes toward Arab-Americans and Muslim-Americans, with younger respondents showing more favorability toward both groups. Part of that, according to the pollsters, has to do with exposure -- those ages 18 to 29 were likely to know Arab-Americans or Muslim-Americans, while respondents older than 65 were almost evenly split on that question.

Previous polls also have shown Americans holding a cold view of Muslims. A Pew poll this month found that Muslims were perceived as negatively as atheists.

The Arab American Institute survey was conducted online among 1,110 likely U.S. voters from June 27 to June 29, a period of unrest in the Muslim world.

Several Muslim-American groups are dedicated to changing the negative perception of Islam, and have encouraged Muslims to pursue more public engagement, both within the federal government and individual communities.

The Littlest Victims of Anti-Science Rhetoric


Vitamin K is a critical compound in our bodies that allows blood to coagulate. Infants are naturally low in it and are at risk for terrible problems that can be otherwise prevented with a simple shot.
Photo by Shutterstock/Natalia Karpova

After all these years advocating for science, and hammering away at those who deny it, I’m surprised I can still be surprised at how bad anti-science can get.
Yet here we are. Babies across the U.S. are suffering from horrific injuries—including hemorrhages, brain damage, and even strokes (yes, strokes, in babies)—because of parents refusing a vitamin K shot. This vitamin is needed to coagulate blood, and without it internal bleeding can result.
Vitamin K deficiency is rare in adults, but the vitamin crosses the placental barrier only in limited amounts, so newborn babies are generally low in it. That’s why it’s been a routine injection for infants for more than 50 years—while vitamin K deficiency is not as big a risk as other problems, the shot is essentially 100 percent effective, and is quite safe.
Mind you, this is not a vaccine, which contains minuscule doses of killed or severely weakened microbes to prime the immune system. It’s a shot of a critical vitamin.
 
Nevertheless, as my friend Chris Mooney writes in Mother Jones, there is an overlap with the anti-vax and “natural health” community. As an example, as reported by the Centers for Disease Control and Prevention, in the Nashville, Tennessee, area, more than 3 percent of parents who gave birth in hospitals refused the injection overall, but in “natural birth” centers that rate shot up to 28 percent.
My Slate colleague Amanda Marcotte points out that vitamin K levels in breast milk are very low as well, and that’s the preferred technique for baby feeding among those who are also hostile to vaccines. In those cases, getting the shot is even more critical.

But the anti-vax rhetoric has apparently crossed over into simple injections. Chris has examples in his Mother Jones article, and there’s this in an article in the St. Louis Post-Dispatch:
The CDC learned that parents refused the injection for several reasons, including an impression it was unnecessary if they had healthy pregnancies, and a desire to minimize exposure to “toxins.” A 1992 study associated vitamin K with childhood leukemia, but the findings have been debunked by subsequent research.
“We sort of came to the realization that parents were relying on a lot of sources out there that were providing misleading and inaccurate information,” said Dr. Lauren Marcewicz, a pediatrician with the CDC’s Division of Blood Disorders. 
By “sources,” they mean various anti-science websites and alt-med anti-vaxxers like Joe Mercola (who has decidedly odd things to say about the vitamin K shot, which you can read about at Science-Based Medicine). Despite the lack of evidence of harm, some parents are still buying into the nonsense, and it’s babies who are suffering the ghastly consequences.

These include infants with brain damage, children with severe developmental disabilities, and more, because of parents refusing a simple shot for their infants. The irony here is extreme: These are precisely the sorts of things the anti-vaxxers claim they are trying to prevent.
The Centers for Disease Control and Prevention has a great Web page about Vitamin K: what it is, why we need it, and why babies need it even more so. It will answer any questions you have about this necessary vitamin.

If you’re about to have a baby or have had one recently: Congratulations! It’s one of the most amazing things we can do as humans, and I will always remember watching and participating in my daughter’s birth. I would have done anything to make her ready for the world, and for me—for every parent—that includes getting the real facts about health.

George Will stuns Fox panel: ‘Preposterous’ that U.S. can’t shelter child refugees from violence

 By David Edwards
Sunday, July 27, 2014 11:12 EDT


George Will speaks to Fox News
Fox News contributor George Will shocked his fellow panelists on Sunday by asserting that the United States should not deport child refugees who were fleeing violence in Central America.

“We ought to say to these children, ‘Welcome to America, you’re going to go to school, and get a job, and become American,’” Will suggested. “We have 3,141 counties in this country. That would be 20 per county.”

“The idea that we can’t assimilate these 8-year-old criminals with their teddy bears is preposterous,” he added.
At that point, Fox News host Chris Wallace interrupted: “You got to know, we’re going to get tons of email saying, ‘This guy doesn’t understand the border. Why should we be dealing with Central America’s problem? We can’t import the problem, they’ve got to deal with it there, and our border has to mean something.’”
“We can handle this problem is what I’m saying,” Will explained. “We’ve handled what [American poet] Emma Lazarus called the ‘wretched refuse of your teeming shore’ a long time ago, and a lot more people than this.”
Watch the video below from Fox News’ Fox News Sunday, broadcast July 27, 2014.

How Do Intelligent Machines Learn?

How and under what conditions is it possible for an intelligent machine to learn? To address this question, let’s start with a definition of machine learning. The most widely accepted definition comes from Tom M. Mitchell, an American computer scientist and E. Fredkin University Professor at Carnegie Mellon University. Here is his formal definition: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.” For a spam filter, for instance, T is classifying email as spam or not spam, P might be the fraction of messages classified correctly, and E is a corpus of hand-labeled messages. In simple terms, machine learning requires a machine to learn much as humans do, namely from experience, and to continue improving its performance as it gains more experience.

Machine learning is a branch of AI; it utilizes algorithms that improve automatically through experience, and it has been a focus of AI research since the field’s inception. There are numerous computer software programs, known as machine-learning algorithms, that use various computational techniques to predict outcomes of new, unseen experiences. The analysis of these algorithms’ performance is a branch of theoretical computer science known as “computational learning theory.”
What this means in simple terms is that an intelligent machine has in its memory data relating to a finite set of experiences. The machine-learning algorithms (i.e., software) search this data for similarity to a new experience and use a specific algorithm (or combination of algorithms) to guide the machine in predicting an outcome of the new experience. Since the experience data in the machine’s memory is limited, the algorithms are unable to predict outcomes with certainty. Instead they associate a probability with each possible outcome and act in accordance with the highest probability.
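To make this concrete, here is a minimal sketch in Python of predicting the most probable outcome of a new experience from stored experiences. The data points and the distance measure are hypothetical illustrations (a plain nearest-neighbor vote), not any particular algorithm from the book:

from collections import Counter
import math

# Stored "experiences": (feature vector, observed outcome).
# These data points are invented for illustration.
memory = [
    ((1.0, 1.0), "success"),
    ((1.2, 0.9), "success"),
    ((3.0, 3.2), "failure"),
    ((2.9, 3.1), "failure"),
]

def predict(new_experience, k=3):
    # Rank stored experiences by similarity to the new one
    # (here, plain Euclidean distance).
    ranked = sorted(memory, key=lambda m: math.dist(m[0], new_experience))
    # Vote among the k most similar experiences, turn the counts
    # into probabilities, and act on the highest probability.
    votes = Counter(outcome for _, outcome in ranked[:k])
    outcome, count = votes.most_common(1)[0]
    return outcome, count / sum(votes.values())

print(predict((1.1, 1.0)))  # e.g. ('success', 0.666...)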
Optical character recognition is an example of machine learning. In this case the computer recognizes printed characters based on previous examples. As anyone who has ever used an optical character-recognition program knows, however, the programs are far from 100 percent accurate. In my experience the best case is a little more than 95 percent accurate when the text is clear and uses a common font.

There are eleven major machine-learning algorithms and numerous variations of these algorithms. To study and understand each would be a formidable task. Fortunately, though, machine-learning algorithms fall into three major classifications. By understanding these classifications, we can gain significant insight into the science of machine learning. Therefore let us review the three major classifications:
  1. Supervised learning: This class of algorithms infers a function (a way of mapping or relating an input to an output) from training data, which consists of training examples. Each example consists of an input object and a desired output value. Ideally the inferred function (generalized from the training data) allows the algorithm to analyze new data (unseen instances/inputs) and map it to (i.e., predict) a high-probability output.
  2. Unsupervised learning: This class of algorithms seeks to find hidden structures (patterns in data) in a stream of input (unlabeled data). Unlike in supervised learning, the examples presented to the learner are unlabeled, which makes it impossible to assign an error or reward to a potential solution.
  3. Reinforcement learning: Reinforcement learning was inspired by behaviorist psychology. It focuses on which actions an agent (an intelligent machine) should take to maximize a reward (for example, a numerical value associated with utility). In effect the agent receives rewards for good responses and punishments for bad ones. The algorithms for reinforcement learning require the agent to take discrete time steps and calculate the reward as a function of having taken that step. At this point the agent takes another time step and again calculates the reward, which provides feedback to guide the agent’s next action. The agent’s goal is to collect as much reward as possible (a minimal sketch of this loop follows the list).
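As a concrete illustration of item 3, here is a minimal reinforcement-learning sketch in Python. The two actions and their reward probabilities are invented for illustration; the loop shows discrete time steps, reward feedback, and reward maximization:

import random

# Hypothetical environment: each action pays a reward of 1 with a
# fixed, hidden probability the agent must discover by trial.
REWARD_PROB = {"A": 0.3, "B": 0.7}

def reward(action):
    return 1.0 if random.random() < REWARD_PROB[action] else 0.0

value = {"A": 0.0, "B": 0.0}   # the agent's estimated value of each action
count = {"A": 0, "B": 0}       # how often each action has been tried

for step in range(1000):          # discrete time steps
    if random.random() < 0.1:     # explore occasionally ("epsilon-greedy")
        action = random.choice(["A", "B"])
    else:                         # otherwise exploit the best estimate
        action = max(value, key=value.get)
    r = reward(action)            # feedback: reward for the chosen action
    count[action] += 1
    # Update the running average reward for that action.
    value[action] += (r - value[action]) / count[action]

print(value)  # value["B"] should approach 0.7; "A" is tried less often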
In essence machine learning incorporates four essential elements (illustrated together in the sketch after this list):
  1. Representation: The intelligent machine must be able to assimilate data (input) and transform it in a way that makes it useful for a specific algorithm.
  2. Generalization: The intelligent machine must be able to accurately map unseen data to similar data in the learning data set.
  3. Algorithm selection: After generalization the intelligent machine must choose and/or combine algorithms to make a computation (such as a decision or an evaluation).
  4. Feedback: After a computation, the intelligent machine must use feedback (such as a reward or punishment) to improve its ability to perform steps 1 through 3 above.
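Here is a minimal sketch of how the four elements fit together, using a simple online perceptron; the data, features, and learning rate are hypothetical choices, one of many possible:

# 1. Representation: turn a raw input into a numeric feature vector.
def represent(raw):
    return [1.0] + [float(x) for x in raw]   # prepend a constant bias term

weights = [0.0, 0.0, 0.0]   # one weight per feature

# 2. Generalization / 3. Algorithm: a linear decision rule (perceptron)
# that can be applied to inputs it has never seen.
def predict(features):
    s = sum(w * x for w, x in zip(weights, features))
    return 1 if s > 0 else 0

# 4. Feedback: adjust the weights in proportion to the prediction error.
def learn(raw, label, rate=0.1):
    features = represent(raw)
    error = label - predict(features)
    for i, x in enumerate(features):
        weights[i] += rate * error * x

# Hypothetical training data: output 1 only when both inputs are 1.
for _ in range(20):
    for raw, label in [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]:
        learn(raw, label)

print(predict(represent((1, 1))), predict(represent((0, 0))))  # expect: 1 0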
Machine learning is similar to human learning in many respects. The most difficult issue in machine learning is generalization, or what is often referred to as abstraction: the ability to determine the features and structures of an object (i.e., data) relevant to solving the problem. Humans are excellent at abstracting the essence of an object. For example, regardless of the breed or type of dog, whether we see a small, large, multicolor, long-hair, short-hair, large-nose, or short-nose animal, we immediately recognize that the animal is a dog. Most four-year-old children immediately recognize dogs. However, most intelligent agents have a difficult time with generalization and require sophisticated computer programs to enable them to generalize.

Machine learning has come a long way since the 1972 introduction of Pong, the first game developed by Atari Inc. Today’s computer games are incredibly realistic, and the graphics are similar to watching a movie. Few of us can win a chess game on our computer or smartphone unless we set the difficulty level to low. In general machine learning appears to be accelerating, even faster than the field of AI as a whole. We may, however, see a bootstrap effect, in which machine learning results in highly intelligent agents that accelerate the development of artificial general intelligence, but there is more to the human mind than intelligence. One of the most important characteristics of our humanity is our ability to feel human emotions.

This raises an important question. When will computers be capable of feeling human emotions? A new science is emerging to address how to develop and program computers to be capable of simulating and eventually feeling human emotions. This new science is termed “affective computing.”  We will discuss affective computing in a future post.

Source: The Artificial Intelligence Revolution (2014), Louis A. Del Monte
 

Hang in there just a few more years. SLS (and SpaceX's Falcon Heavy) are on their way. To the moon and beyond!

Algorithmic information theory

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Algorithmic_information_theory ...