Source: https://en.wikipedia.org/wiki/Cambrian_explosion
The fossil record as Darwin knew it seemed to suggest that the major metazoan groups appeared in a few million years of the early to mid-Cambrian, and even in the 1980s this still appeared to be the case.[15][16]
However, evidence of Precambrian metazoa is gradually accumulating. If the Ediacaran Kimberella was a mollusc-like protostome (one of the two main groups of coelomates),[20][61] the protostome and deuterostome lineages must have split significantly before 550 million years ago (deuterostomes are the other main group of coelomates).[94] Even if it is not a protostome, it is widely accepted as a bilaterian.[65][94] Since fossils of rather modern-looking Cnidarians (jellyfish-like organisms) have been found in the Doushantuo lagerstätte, the Cnidarian and bilaterian lineages must have diverged well over 580 million years ago.[94]
Trace fossils[59] and predatory borings in Cloudina shells provide further evidence of Ediacaran animals.[95] Some fossils from the Doushantuo formation have been interpreted as embryos and one (Vernanimalcula) as a bilaterian coelomate, although these interpretations are not universally accepted.[48][49][96] Earlier still, predatory pressure has acted on stromatolites and acritarchs since around 1,250 million years ago.[44]
The presence of Precambrian animals somewhat dampens the "bang" of the explosion: not only was the appearance of animals gradual, but their evolutionary radiation ("diversification") may also not have been as rapid as once thought. Indeed, statistical analysis shows that the Cambrian explosion was no faster than any of the other radiations in animals' history.[note 5] However, it does seem that some innovations linked to the explosion – such as resistant armour – only evolved once in the animal lineage; this makes a lengthy Precambrian animal lineage harder to defend.[98] Further, the conventional view that all the phyla arose in the Cambrian is flawed; while the phyla may have diversified in this time period, representatives of the crown-groups of many phyla do not appear until much later in the Phanerozoic.[53] Moreover, the mineralized phyla that form the basis of the fossil record may not be representative of other phyla, since most mineralized phyla originated in a benthic setting. The fossil record is consistent with a Cambrian explosion that was limited to the benthos, with pelagic phyla evolving much later.[53]
Ecological complexity among marine animals increased in the Cambrian, as well as later in the Ordovician.[5] However, recent research has overthrown the once-popular idea that disparity was exceptionally high throughout the Cambrian before subsequently decreasing.[99] In fact, disparity remained relatively low throughout the Cambrian, with modern levels of disparity only attained after the early Ordovician radiation.[5]
The diversity of many Cambrian assemblages is similar to today's,[100][91] and at a high (class/phylum) level, diversity is thought by some to have risen relatively smoothly through the Cambrian, stabilizing somewhat in the Ordovician.[101] This interpretation, however, glosses over the astonishing and fundamental pattern of basal polytomy and phylogenetic telescoping at or near the Cambrian boundary, as seen in most major animal lineages.[102] Thus Harry Blackmore Whittington's questions regarding the abrupt nature of the Cambrian explosion remain, and have yet to be satisfactorily answered.[103]
A Medley of Potpourri is just what it says; various thoughts, opinions, ruminations, and contemplations on a variety of subjects.
Wednesday, December 25, 2013
Epigenetics enigma resolved: First structure of enzyme that removes methylation
Read more at: http://phys.org/news/2013-12-epigenetics-enigma-enzyme-methylation.html#jCp
The finding is important for the field of epigenetics because Tet enzymes chemically modify DNA, changing signposts that tell the cell's machinery "this gene is shut off" into other signs that say "ready for a change."
Tet enzymes' roles have come to light only in the last five years; they are needed for stem cells to maintain their multipotent state, and are involved in early embryonic and brain development and in cancer.
The results, which could help scientists understand how Tet enzymes are regulated and look for drugs that manipulate them, are scheduled for publication in Nature.
Researchers led by Xiaodong Cheng, PhD, determined the structure of a Tet family member from Naegleria gruberi by X-ray crystallography. The structure shows how the enzyme interacts with its target DNA, bending the double helix and flipping out the base that is to be modified.
"This base flipping mechanism is also used by other enzymes that modify and repair DNA, but we can see from the structure that the Tet family enzymes interact with the DNA in a distinct way," Cheng says.
Cheng is professor of biochemistry at Emory University School of Medicine and a Georgia Research Alliance Eminent Scholar. The first author of the paper is research associate Hideharu Hashimoto, PhD. A team led by Yu Zheng, PhD, a senior research scientist at New England Biolabs, contributed to the paper by analyzing the enzymatic activity of Tet using liquid chromatography–mass spectrometry.
Using oxygen, Tet enzymes change 5-methylcytosine into 5-hydroxymethylcytosine and other oxidized forms of methylcytosine. 5-methylcytosine (5-mC) and 5-hydroxymethylcytosine (5-hmC) are both epigenetic modifications of DNA, which change how DNA is regulated without altering the letters of the genetic code itself.
5-mC is generally found on genes that are turned off or on repetitive regions of the genome. 5-mC helps shut off genes that aren't supposed to be turned on (depending on the cell type) and changes in 5-mC's distribution underpin a healthy cell's transformation into a cancer cell.
In contrast to 5-mC, 5-hmC appears to be enriched on active genes, especially in brain cells. Having a Tet enzyme form 5-hmC seems to be a way for cells to erase or at least modify the "off" signal provided by 5-mC, although the functions of 5-hmC are an active topic of investigation, Cheng says.
Alterations of the Tet enzymes have been found in forms of leukemia, so having information on the enzymes' molecular structure could help scientists design drugs that interfere with them.
N. gruberi is a single-celled organism found in soil or fresh water that can take the form of an amoeba or a flagellate; its close relative N. fowleri can cause deadly brain infections. Cheng says his team chose to study the enzyme from Naegleria because it was smaller and simpler and thus easier to crystallize than mammalian forms of the enzyme, yet still resembles mammalian forms in protein sequence.
Mammalian Tet enzymes appear to have an additional regulatory domain that the Naegleria forms do not; understanding how that domain works will be a new puzzle opened up by having the Naegleria structure, Cheng says.
Journal reference: Nature
How Rare Am I? Genographic Project Results Demonstrate Our Extended Family Tree
Posted by Miguel Vilar on December 24, 2013
Most participants of National Geographic’s Genographic Project can recite their haplogroup as readily as their mother’s maiden name. Yet outside consumer genetics, the word haplogroup is still unknown. Your haplogroup, or genetic branch of the human family tree, tells you about your deep ancestry—often thousands of years ago—and shows you the possible paths of migration taken by these ancient ancestors. Your haplogroup also places you within a community of relatives, some distant, with whom you unmistakably share an ancestor way back when.
Haplogroup H1, Genographic’s most common lineage.
Let’s focus here on mitochondrial DNA haplogroup H1, as it is the Genographic Project’s most common maternal lineage result. You inherited your mitochondrial DNA purely from your mother, who inherited it from her mother, and her mother, and so on. Yet, unlike what is often the case with a mother’s maiden name, her maternal haplogroup is passed down intact through the generations. Today, all members of haplogroup H1 are direct descendants of the first H1 woman, who lived thousands of years ago. Most H1 members may know their haplogroup as H1a or H1b2 or H1c1a, etc., yet as a single genetic branch, H1 accounts for 15% of Genographic participants. What’s more, in the past few years, anthropologists have discovered and named an astonishing 200 new branches within haplogroup H1, and that number continues to grow.
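The strictly maternal inheritance described above can be sketched in a few lines of code. This is a toy illustration with made-up names, not Genographic data or a real pedigree: each person simply takes her mother's haplogroup, so following mother links always leads back to the lineage's founding woman.

```python
# Toy model of maternal mtDNA inheritance (hypothetical people, not real data).
class Person:
    def __init__(self, name, mother=None, haplogroup=None):
        self.name = name
        self.mother = mother
        # mtDNA comes only from the mother; a founder carries her own label.
        self.haplogroup = mother.haplogroup if mother else haplogroup

def maternal_line(person):
    """Follow mother links back to the founding ancestor."""
    line = [person.name]
    while person.mother:
        person = person.mother
        line.append(person.name)
    return line

founder = Person("first H1 woman", haplogroup="H1")
grandmother = Person("grandmother", mother=founder)
participant = Person("participant", mother=grandmother)

print(participant.haplogroup)   # every descendant along the maternal line is H1
print(maternal_line(participant))
```

The father never appears in the model at all, which is exactly why a maternal haplogroup, unlike a surname, cannot be lost to a name change along the way.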
The origin of haplogroup H1 also continues to be debated. Most researchers suggest it was born in the Middle East between 10,000 and 15,000 years ago and spread from there to Europe and North Africa. However, ancient DNA studies show that its ancestral haplogroup H first appears in Central Europe just 8,000 years ago. H1’s vast diversity and high concentration in Spain and Portugal suggest it may have existed there during the last Ice Age and spread north after the glaciers melted. Yet others postulate that its young age and high frequency indicate it spread as agriculture took shape in Europe.
Any of these scenarios is possible. As technology improves, as more DNA is extracted and sequenced from ancient bones, and as more people contribute their DNA to the Genographic Project, we will keep learning about H1 and all the other haplogroups. It is because of participants contributing their DNA, their stories, and their hypotheses to science that we can carry forward this exciting work uncovering our deep genetic connections.
Happy Haplogroups!
What does it mean to be conscious?
By Patricia Salber
A study published today (10/31/2013) in the online open-access journal NeuroImage: Clinical further blurs the boundaries of what it means to be conscious. Although the title, Dissociable endogenous and exogenous attention in disorders of consciousness, and the research methodology are almost indecipherable to those of us not inside the beltway of chronic disorders of consciousness (DoC) research, the University of Cambridge translates for us on its website.
Basically, the researchers, led by Dr. Srivas Chennu at the University of Cambridge, were trying to see whether patients diagnosed as either in a vegetative state (VS) or a minimally conscious state (MCS) could pay attention to (count) certain words, called the attended words, when they were embedded in a string of other randomly presented words, called the distracting words. Normal brain wave responses were established by performing the word testing on 8 healthy volunteers. The same testing was then applied to 21 brain-damaged individuals, 9 with a clinical diagnosis of vegetative state and 12 with a diagnosis of minimally conscious state. Most of the patients did not respond to the presentation of words as the normal volunteers did. But one did.
The patient, described as Patient P1, suffered a traumatic brain injury 4 months prior to testing. He was diagnosed as “behaviorally vegetative,” based on a Coma Recovery Scale-Revised (CRS-R) score of 7 (8 or greater = MCS). In addition to being able to consciously attend to the key words, this patient could also follow simple commands to imagine playing tennis.
Dr. Chennu was quoted as saying, “we are progressively building up a fuller picture of the sensory, perceptual and cognitive abilities in patients” with vegetative and minimally conscious states. Yes, this is true. But what does it mean if someone previously diagnosed as vegetative can now be shown to perform this sort of task? Dr. Chennu hopes that this information will spur the development of “future technology to help patients in a vegetative state communicate with the outside world.”
I think this is fascinating research, and it sheds new light on how the brain functions, but it also raises a number of important questions. For example, if I can attend to words, does it change my prognosis? Patient P1 was found to have minimal cortical atrophy. Perhaps he is just slow to transition from a vegetative state to a MCS. If attending to words is associated with a better prognosis, should that make me a candidate for intensive and expensive rehabilitation? If so, who should pay for this? If I have an advance directive that says I don’t want to continue to live in a persistent vegetative state, will this level of awareness mean I am not really vegetative? As more and more resources are poured into care for folks with severe brain damage, does it come at a societal cost?
What trade-offs are we making, and what services are we forgoing, as we spend money developing tools to improve communication in vegetative states?
Of course, no one has the answers to these questions, and I suspect that as researchers like those at Cambridge continue to learn more about the functioning of the severely injured brain, it will become more and more difficult to say clearly what it really means to be “aware.”
Atheists, Work With Us for Peace, Pope Says on Christmas
Filippo Monteforte/Agence France-Presse — Getty Images
By REUTERS
Published: December 25, 2013 at 7:47 AM ET
VATICAN CITY — Pope Francis, celebrating his first Christmas as Roman Catholic leader, on Wednesday called on atheists to unite with believers of all religions and work for "a homemade peace" that can spread across the world.
The leader of the 1.2 billion-member Church wove his first "Urbi et Orbi" (to the city and world) message around the theme of peace.
"Peace is a daily commitment. It is a homemade peace," he said.
He said that people of other religions were also praying for peace, and - departing from his prepared text - he urged atheists to join forces with believers.
"I invite even non-believers to desire peace. (Join us) with your desire, a desire that widens the heart. Let us all unite, either with prayer or with desire, but everyone, for peace," he said, drawing sustained applause from the crowd.
Francis's reaching out to atheists and people of other religions is a marked contrast to the attitude of former Pope Benedict, who sometimes left non-Catholics feeling that he saw them as second-class believers.
He called for "social harmony in South Sudan, where current tensions have already caused numerous victims and are threatening peaceful coexistence in that young state".
Thousands are believed to have died in violence divided along ethnic lines between the Nuer and Dinka tribes in the country, which seceded from Sudan in 2011 after decades of war.
The pontiff also called for dialogue to end the conflicts in Syria, Nigeria, Democratic Republic of Congo and Iraq, and prayed for a "favorable outcome" to the peace process between Israelis and Palestinians.
"Wars shatter and hurt so many lives!" he said, saying their most vulnerable victims were children, elderly, battered women and the sick.
PERSONAL PEACEMAKERS
The thread running through the message was that individuals had a role in promoting peace, either with their neighbor or between nations.
The message of the birth of Jesus in Bethlehem was directed at "every man or woman who keeps watch through the night, who hopes for a better world, who cares for others while humbly seeking to do his or her duty," he said.
"God is peace: let us ask him to help us to be peacemakers each day, in our life, in our families, in our cities and nations, in the whole world," he said.
Pilgrims came from all over the world for Christmas at the Vatican and some said it was because they felt Francis had brought a breath of fresh air to the Church.
"(He) is bringing a new era into the Church, a Church that is focusing much more on the poor and that is more austere, more lively," said Dolores Di Benedetto, who came from the pope's homeland, Argentina, to attend Christmas Eve Mass.
Giacchino Sabello, an Italian, said he wanted to get a first-hand look at the new pope: "I thought it would be very nice to hear the words of this pope close up and to see how the people are overwhelmed by him."
In his speech, Francis asked God to "look upon the many children who are kidnapped, wounded and killed in armed conflicts, and all those who are robbed of their childhood and forced to become soldiers".
He also called for a "dignified life" for migrants, praying tragedies such as one in which hundreds died in a shipwreck off the coast of the Italian island of Lampedusa are never repeated, and made a particular appeal against human trafficking, which he called a "crime against humanity".
(Editing by Pravin Char)
An Ultracold Big Bang: A successful simulation of the evolution of the early universe
Posted on From Quarks to Quasars December 25, 2013 at 9:00 am by Joshua Filmer
In August of 2013, physicists made a major breakthrough in our understanding of the early universe in an experiment that successfully reproduced a pattern resembling the cosmic microwave background radiation. This experiment was conducted at the University of Chicago with the aid of ultracold cesium atoms.
“This is the first time an experiment like this has simulated the evolution of structure in the early universe,” according to physics professor Cheng Chin, one of the authors on this project. The goal of the experiment was to simulate the big bang using ultracold atoms in an effort to understand how the universe evolved at the earliest timescales. Tentatively, their experiment seems a tremendous success.
The cosmic microwave background (CMB) is one of the only things we have left to analyze the early structure of the universe, and this CMB is a kind of window, allowing us to go back in time to that most volatile period in our universe’s history. Ultimately, it allows us to pull a fingerprint of the universe when it was only 380,000 years old. This pervasive radiation has been mapped over the last few decades. The most recent and most detailed mapping of the CMB comes from the Planck Space Observatory and was completed earlier this year.
Chen-Lung Hung, the lead author on the project, described the methodology of the experiment as follows, “…under certain conditions, a cloud of atoms chilled to a billionth of a degree above absolute zero (-459.67 degrees Fahrenheit) in a vacuum chamber displays phenomena similar to those that unfolded following the Big Bang. At this ultracold temperature, atoms get excited collectively. They act as if they are sound waves in air.” That sound wave action can be observed in the CMB.
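As a quick check on the figure quoted above, the standard Kelvin-to-Fahrenheit conversion confirms that a billionth of a kelvin above absolute zero is, to any printable precision, simply absolute zero in Fahrenheit:

```python
def kelvin_to_fahrenheit(kelvin):
    # Standard conversion: F = K * 9/5 - 459.67
    return kelvin * 9.0 / 5.0 - 459.67

# One billionth of a degree above absolute zero, as in the experiment.
print(round(kelvin_to_fahrenheit(1e-9), 2))  # → -459.67
```

The offset of 1.8 billionths of a degree Fahrenheit vanishes at any reasonable rounding, which is why the article can quote -459.67 °F outright.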
The echoing and rippling of spacetime created in the big bang was exaggerated in the period of the universe’s rapid inflation. These ripples reverberated back and forth and interacted with each other, creating the foundation for the complicated patterns we see in the universe today. This phenomenon is known as “Sakharov acoustic oscillations,” after Andrei Sakharov, the physicist who first described them.
The simulated universe consisted of a cloud of 10,000 cesium atoms chilled to a billionth of a degree above absolute zero. This caused the atoms to form an exotic state of matter called a two-dimensional atomic superfluid. The simulated universe measured about 70 microns in diameter, or about the width of a human hair. Even though the universe had a diameter of about 100,000 light-years when it emitted the pattern we recognize today as the CMB, the much smaller simulated universe behaved in exactly the same fashion as a large universe would.
Asimov's 'I, Robot' Soon To Be Reality, No Longer Fiction
(International Business Times By Cameron Fuller) -- Scientists have created what may become the future of prosthetics: a robot “muscle” that can throw an object 50 times its own weight a distance of five times its length in a surprisingly fast 60 milliseconds. While it’s easy to envision what this means for the future (a Hollywood image of robot arms crushing steel bars with ease comes quickly to mind), don’t fear just yet: the new muscle is currently the size of a microchip.
“We’ve created a micro-bimorph dual coil that functions as a powerful torsional muscle, driven thermally or electro-thermally by the phase transition of vanadium dioxide,” said Junqiao Wu, the project’s lead scientist at the U.S. Department of Energy’s Lawrence Berkeley National Labs (Berkeley Labs).
The strength of the new robotic muscle comes from a special property of vanadium dioxide: VO2 changes physical state when heated or cooled. The muscle, coincidentally in the shape of a V, is heated, causing one dimension to contract while the other two dimensions expand, creating a torsion spring. Think catapult, but on a much smaller scale.
Even in its current state, the muscle demonstrates the potential for what may be the future of artificial neuromuscular systems. Wu’s device functions in a way that creates a proximity sensor, which is very similar to the way biological muscles work. These torsion-spring and proximity-sensor features “allow the device to remotely detect a target and respond by reconfiguring itself to a different shape. This simulates living bodies, where neurons sense and deliver stimuli to the muscles and the muscles provide motion,” according to Wu.
The micro-muscle requires a way of heating to actuate. As it stands, Wu thinks “electric current is the better way to go because it allows for the selective heating of individual micro-muscles and the heating and cooling process is much faster.” However, Berkeley Labs is working on a way for heat from the sun to trigger the device.
This announcement comes just three months after Dr. Adrian Koh of the National University of Singapore’s (NUS) Faculty of Engineering announced a similar muscle able to carry 80 times its own weight in September of this year. Both of these devices are at the forefront of more human-like robotics.
Dr. Koh suggests how these micro-muscles will change the game of humanoid robotics. “Our materials mimic those of the human muscle, responding quickly to electrical impulses, instead of slowly for mechanisms driven by hydraulics. Robots move in a jerky manner because of this mechanism. Now, imagine artificial muscles which are pliable, extendable and react in a fraction of a second like those of a human. Robots equipped with such muscles will be able to function in a more human-like manner – and outperform humans in strength.”
Robots like those seen in the big-budget Hollywood film “I, Robot” may no longer be an Asimovian dream, finding reality instead through people like Wu and Dr. Koh.
While in its current state the muscle demonstrates the potential for what may be the future of artificial neuromuscular systems. Wu’s device functions in a way that creates a proximity sensor, which is very similar to the way biological muscles work. This torsion spring and proximity sensor features “allow the device to remotely detect a target and respond by reconfiguring itself to a different shape. This simulates living bodies where neurons sense and deliver stimuli to the muscles and the muscles provide motion,” according to Wu.
The micro-muscle requires a way of heating to actuate. As it stands, Wu thinks “electric current is the better way to go because it allows for the selective heating of individual micro-muscles and the heating and cooling process is much faster.” However, Berkeley Labs is working on a way for heat from the sun to trigger the device.
This announcement comes just three months after Dr. Adrian Koh of the National University of Singapore’s (NUS) Faculty of Engineering announced a similar muscle able to carry 80 times its own weight in September of this year. Both of these devices are at the forefront of more human-like robotics.
Dr. Koh suggests how these micro-muscles will change the game of humanoid robotics. “Our materials mimic those of the human muscle, responding quickly to electrical impulses, instead of slowly for mechanisms driven by hydraulics. Robots move in a jerky manner because of this mechanism. Now, imagine artificial muscles which are pliable, extendable and react in a fraction of a second like those of a human. Robots equipped with such muscles will be able to function in a more human-like manner – and outperform humans in strength.”
Robots like those seen in the big-budget Hollywood film “I, Robot” may no longer be an Asimovian dream, finding reality instead through people like Wu and Dr. Koh.
Tuesday, December 24, 2013
What You Believe About Homosexuality Doesn’t Matter
Today, there are two news stories that have been circulating all over my Facebook and Twitter news feeds. One you are probably aware of, the other maybe not. The two, though, are closely related. The first news story is the indefinite suspension of Duck Dynasty star Phil Robertson due to the comments he made during an interview with GQ magazine. The second news story is about the “defrocking” of Pennsylvania UMC pastor Frank Schaefer after he performed the marriage of his gay son and subsequently refused to submit to church law regarding this action. The link between these two stories is clear: the church’s views (or, in the case of Duck Dynasty, a certain understanding of the Christian faith’s views) regarding homosexuality.
The reaction to both of these stories has been…emphatic, to say the least. The debate over the “rightness or wrongness” of homosexuality has once again been fired up. The appeals to the Biblical passages have been made. The academic rebuttals to the interpretation of those passages have no doubt been referenced. The calls for freedom and tolerance (from both sides) have been shouted…or at least typed out with great gusto. The theological debate (and I am using that term VERY generously here) has been raging all day long, and no doubt will continue to rage in the weeks to come.
But I refuse to engage in it. The way I see it, the time for that debate has long since passed. The stakes are too high now. Current research suggests that teenagers who are gay are about three times more likely to attempt suicide than their heterosexual peers. That puts the percentage of gay teens attempting suicide at around 30 percent. One out of three teens who are gay or bisexual will try to kill themselves. And a lot of times they succeed. In fact, Rev. Schaefer’s son contemplated suicide on a number of occasions in his teens.
The fact of the matter is, it doesn’t matter whether or not you think homosexuality is a sin. Let me say that again. It does not matter if you think homosexuality is a sin, or if you think it is simply another expression of human love. It doesn’t matter. Why doesn’t it matter? Because people are dying. Kids are literally killing themselves because they are so tired of being rejected and dehumanized that they feel their only option left is to end their life. As a Youth Pastor, this makes me physically ill. And as a human, it should make you feel the same way. So, I’m through with the debate.
When faced with the choice between being theologically correct…as if this is even possible…and being morally responsible, I’ll go with morally responsible every time. Dietrich Bonhoeffer was a German pastor and theologian during World War II. He firmly held the theological position of nonviolence. He believed that complete pacifism was theologically correct. And yet, in the midst of the war, he conspired to assassinate Adolf Hitler; to kill a fellow man. Why? Because in light of what he saw happening to the Jews around him by the Nazis, he felt that it would be morally irresponsible not to. Between the assassination of Hitler and nonviolence, he felt the greater sin would be nonviolence.
We are past the time for debate. We no longer have the luxury to consider the original meaning of Paul’s letter to the Corinthian church. We are now faced with the reality that there are lives at stake. So whatever you believe about homosexuality, keep it to yourself. Instead, try telling a gay kid that you love him and you don’t want him to die. Try inviting her into your church and into your home and into your life. Anything other than that simply doesn’t matter.
How effective are renewable energy subsidies? Maybe not as effective as originally thought, finds new study
(Phys.org) —Renewable energy subsidies have been a politically popular program over the past decade. These subsidies have led to explosive growth in wind power installations across the United States, especially in the Midwest and Texas.
But do these subsidies work?
Not as well as one might think, finds a new study from Washington University in St. Louis' Olin Business School.
The "social costs" of carbon dioxide would have to be greater than $42 per ton in order for the environmental benefits of wind power to have out weighed the costs of subsidies, finds Joseph Cullen, PhD, assistant professor economics and expert on environmental regulation and energy markets.
The social cost of carbon is the marginal cost to society of emitting one extra ton of carbon (as carbon dioxide) at any point in time.
The current social cost of carbon estimates, released in November and projected for 2015, range from $12 to $116 per ton of additional carbon dioxide emissions. The prior version, from 2010, had a range between $7 and $81 per ton of carbon dioxide. The estimates are expected to rise in the coming decades.
Cullen's findings are explained in a paper titled "Measuring the Environmental Benefits of Wind-Generated Electricity" in American Economic Journal: Economic Policy.
"Given the lack of a national climate legislation, renewable energy subsidies are likely to be continued to be used as one of the major policy instruments for mitigating carbon dioxide emissions in the near future," Cullen says. "As such, it's imperative that we gain a better understanding of the impact of subsidization on emissions."
Since electricity produced by wind is emission-free, the development of wind power may reduce aggregate pollution by offsetting electricity production from fossil fuel generators. When low marginal cost wind-generated electricity enters the grid, higher marginal cost fossil fuel generators will reduce their output.
However, emission rates vary greatly from generator to generator (coal-fired and natural gas plants, for example). Thus, the quantity of emissions offset by wind power will depend crucially on which generators reduce their output, Cullen says.
Cullen's paper introduces an approach to empirically measure the environmental contribution of wind power resulting from these production offsets.
"By exploiting the quasi-experimental variation in wind power production driven by weather fluctuations, it is possible to identify generator specific production offsets due to wind power," Cullen says.
Importantly, dynamics play a critical role in the estimation procedure, he finds.
"Failing to account for dynamics in generator operations leads to overly optimistic estimates of emission offsets," Cullen says. "Although a static model would indicate that wind has a significant impact on the operation of coal generators, the results from a dynamic model show that wind power only crowds out electricity production fueled by natural gas."
The model was used to estimate wind power offsets for generators on the Texas electricity grid. The results showed that one megawatt-hour of wind power production offsets less than half a ton of carbon dioxide, almost one pound of nitrogen oxides, and no discernible amount of sulfur dioxide.
"As a benchmark for the economic benefits of renewable subsidies, I compared the value of offset emissions to the cost of subsidizing wind farms for a range of possible emission values," Cullen says. "I found that the value of subsidizing wind power is driven primarily by carbon dioxide offsets, but that the social costs of carbon dioxide would have to be greater than $42 per ton in order for the environmental benefits of wind power to have outweighed the costs of subsidies."
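Cullen's breakeven logic can be illustrated with a couple of lines of arithmetic. Note that the subsidy level used below is a hypothetical figure for illustration only, not a number taken from the paper; the offset rate (less than half a ton of CO2 per megawatt-hour) is from the article.

```python
# Back-of-the-envelope breakeven "social cost of carbon" for a wind subsidy.
# The subsidy amount ($/MWh) is an illustrative assumption, not a figure
# from Cullen's paper.

def breakeven_social_cost(subsidy_per_mwh, co2_offset_tons_per_mwh):
    """Dollars per ton of CO2 at which offset emissions just repay the subsidy."""
    return subsidy_per_mwh / co2_offset_tons_per_mwh

# With a hypothetical $21/MWh subsidy and 0.5 tons of CO2 offset per MWh,
# the subsidy only pays off if society values CO2 at $42/ton or more.
print(breakeven_social_cost(21.0, 0.5))  # 42.0
```

The point of the sketch is simply that the breakeven carbon price scales directly with the subsidy and inversely with the per-megawatt-hour offset, which is why a dynamic model showing smaller offsets implies a higher required social cost of carbon.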
More information: Cullen, Joseph. 2013. "Measuring the Environmental Benefits of Wind-Generated Electricity." American Economic Journal: Economic Policy, 5(4): 107-33.
Provided by Washington University in St. Louis
Earth's orbit about the sun is not perfectly circular. Like all planetary orbits, it is an ellipse with at least a little eccentricity; for Earth the value is 0.0167. This means that our planet's distance from the sun ranges from 94,509,460 miles down to 91,402,640 miles. This difference results in an almost seven percent difference in solar energy reaching us between perihelion and aphelion.
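The "almost seven percent" figure follows from the inverse-square law, and you can check it with the distances quoted above (a quick sketch in Python):

```python
# Solar flux scales as 1/r^2, so the ratio of sunlight received at closest
# versus farthest approach is (r_far / r_near)^2.
r_aphelion = 94_509_460    # miles (farthest, from the text)
r_perihelion = 91_402_640  # miles (closest, from the text)

flux_ratio = (r_aphelion / r_perihelion) ** 2
print(f"{(flux_ratio - 1) * 100:.1f}% more sunlight at closest approach")
```

The result comes out to roughly 6.9 percent, matching the figure in the text.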
Oddly, the northern hemisphere summer occurs when the sun is furthest away, and its winter when the sun is closest. The effects of our 23.5 degree axial tilt clearly overwhelm the orbital eccentricity effect, although eccentricity does factor into the Milankovitch cycles, developed by Milutin Milanković.
Many pseudoscientists and other quacks have abused these facts to put forth their own "theories" about the seasons; for example, in his book Your Right to Know, the then-living Master of ECKANKAR, "Sri" Darwin Gross, tells us that Earth's magnetic forces are pulled by the sun's greater distance, causing internal terrestrial heat to well up and "cause" summer. Gross was apparently unaware of the elementary fact that when it is summer in the northern hemisphere it is winter in the southern, and so forth.
I confess that the reason I know this so well is because at that time I was a member of ECKANKAR. This issue probably did more to drive me back to my scientific childhood than anything else (though there were many other factors) and into a career and lifelong devotion to science and reason. I suppose, ironically, I owe an intellectual debt to Darwin Gross (who died not long ago) and ECKANKAR for demonstrating how distressing the irrational life is and, consequently, how rewarding the rational life can be. ECKANKAR, by the way, is still with us, still strong and, yes, profitable and tax-exempt, with tens of thousands of followers. They keep a pretty low profile, but are rather like Scientology in their tactics, from what I've recently read.
The Age of the Universe: Revised
By Joshua Filmer in Quarks to Quasars, http://www.fromquarkstoquasars.com/the-age-of-the-universe-revised/
The Planck space observatory has recently aided scientists by making the most detailed map ever seen of the Cosmic Microwave Background (CMB). This image shows a ‘baby picture’ of the universe and revises the universe’s age, making it a little older than scientists had previously thought.
The CMB is background radiation (pictured above) left over from the early stages of the universe, showing the universe as it was about 380,000 years after the big bang. At that time, the universe was still a dense soup of basic particles such as electrons, photons, and protons – all ‘boiling’ at a temperature of around 2700 degrees Celsius. Here, the protons and electrons started to combine into hydrogen atoms, and this process released the photons. As the universe continued to expand, the light redshifted to the microwave side of the electromagnetic spectrum; today, we can detect those microwaves, which give the universe an equivalent temperature of 2.7 degrees above absolute zero.
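A quick way to see that a 2.7 K background really does sit in the microwave region is Wien's displacement law, which relates a blackbody's temperature to the wavelength at which its radiation peaks (a minimal sketch, using the standard Wien constant):

```python
# Wien's displacement law: lambda_peak = b / T.
WIEN_B = 2.898e-3  # m*K, Wien's displacement constant

def peak_wavelength_m(temperature_k):
    """Peak wavelength (metres) of blackbody radiation at the given temperature."""
    return WIEN_B / temperature_k

# Today's CMB at 2.7 K peaks around a millimetre: squarely in the microwaves.
print(peak_wavelength_m(2.7))
# The ~2700 C plasma of the early universe peaked near a micrometre instead.
print(peak_wavelength_m(2700 + 273.15))
```

The same radiation, stretched by cosmic expansion, has thus shifted its peak by roughly a factor of a thousand in wavelength.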
One of the many benefits of observing the CMB is the ability to see the tiny temperature fluctuations (corresponding to different densities) of the very early universe. These density variations naturally shaped the large-scale structure of today’s stars and galaxies. Thus, understanding the early universe is pivotal to understanding what we see today.
This is where the Planck observatory comes in. Planck was originally designed to map the fluctuations seen in the CMB that originated in the inflationary period shortly after the big bang. In addition to clarifying our current understanding of cosmology, this new map confirms the standard model of cosmology and helps to demonstrate the model’s accuracy. There are also some new, as-yet unexplained features in the map that some scientists believe will require new physics to understand.
Jean-Jacques Dordain, the ESA’s director general, puts it best by saying, “the extraordinary quality of Planck’s portrait of the infant Universe allows us to peel back its layers to the very foundations, revealing that our blueprint of the cosmos is far from complete.”
Of course, after praising the accuracy of the standard model of cosmology, now I’ll turn right around and rebuke it. There are several features in this map that don’t match up with our current models. One such feature is the specific fluctuations seen in the CMB at large angular scales: here, scientists see signals much weaker than previously expected. In addition, the average temperature of the northern hemisphere of the universe differs from that of the southern, contrary to the prediction that the universe should look very similar regardless of the direction in which we look.
Another anomaly is the confirmation of a rather large, asymmetric cold spot seen in the map taken by NASA’s WMAP mission. The cold spot was originally regarded as an artifact of WMAP’s sensors, and thus thought of more or less as an error. Now, with better, more concrete, and more accurate information, the reality of these anomalies is coming home.
As far as the asymmetry and non-uniformity seen in the temperatures are concerned, scientists have a few ideas. It’s possible the light rays seen in the CMB take a more complicated route through the universe than we currently understand, or perhaps the universe is not the same in all directions on a scale larger than we can observe. Either way, Professor Efstathiou of the University of Cambridge says, “Our ultimate goal would be to construct a new model that predicts the anomalies and links them together. But these are early days; so far, we don’t know whether this is possible and what type of new physics might be needed. And that’s exciting.”
Even with the kinks in our models, the Planck map goes a long way toward confirming our expectations – at least revealing that we are on the right track. In addition, the map revises our understanding of what the universe is made of: the ratios between normal matter, dark matter, and dark energy. Here, Planck shows us a universe made of 4.9% normal ‘visible’ matter (in contrast to the 4.5% seen by WMAP), 26.8% dark matter (in contrast to 22.7%), and 68.3% dark energy (in contrast to 72.8%). The Planck measurements also place the age of the universe at 13.81 billion years, in contrast to the 13.7 billion from the WMAP mission.
One of the most exciting things about all of this data is that the revised numbers are within the margins of error of the old numbers – so we’re very much on the right track to understanding the universe at large.
Wondering About The Ultimate Beginnings
I hope that chapter eight (my previous blog) has given you a reasonable feel for what is commonly called “deep” time, that is, the geological and biological evolution of our own planet. Given that we do reside here and did evolve here, that was a pretty good place to start. But the universe as a whole did not begin with our own world and the rest of our solar system; there is an approximately nine billion year gap between those two events. Besides, as already mentioned, events in the very early universe were quite different from those later on, simply because back then the cosmos was smaller, denser, and hotter, and the laws of physics needed to understand it were necessarily different as well.
Imagine yourself on a journey backward in time, back not just before our Earth and solar system, not just before our Milky Way galaxy began to form, but much, much further than that, to a time before the first stars and galaxies began to take form. We are at a point in the universe’s evolution where it can be modeled fairly accurately as a gas – a gas composed almost entirely of hydrogen and some helium, although that is not the critical feature determining its behavior. Although it is a somewhat rough analogy, it can essentially be thought of as a gas in a closed flask, characterized by a specific density, pressure, and temperature. As such, it can be modeled reasonably well by the gas laws you learned in first year college chemistry, if you were fortunate enough to have taken them. What’s that? You never took chemistry in college? No matter; it is quite straightforward. The basic law governing the behavior of gasses is the so-called “Ideal Gas” Law, which, placed in equation form, is PV = nRT, where P is the pressure of the gas, V its volume, and T its temperature. The n is the total number of moles of gas in the universe (a mole being a fixed count of atoms or molecules). Never mind the meaning of R here; it is a proportionality constant, and remains constant in this situation anyway. When the equation is re-written as V ∝ T/P (∝ being the symbol for proportionality), we see that as V, in this case the volume of the universe, decreases, either T must decrease or P must increase. A modification to the equation is needed here, however. I am speaking of ideal gasses, which, in reality, don’t actually exist, but serve as models for real gasses. In fact, with real gasses, increasing pressure also raises their temperature. An example of this is the gasoline vapor / air mixture in the cylinder of a car; as the piston presses down on the mixture, both its pressure and temperature rise – in a diesel engine, this compression alone is enough to ignite the mixture, driving the piston upward and turning the crankshaft.
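If you like to see the ideal gas law in action numerically, here is a minimal sketch (using the SI value of the gas constant R):

```python
# The ideal gas law, PV = nRT, solved for pressure. Units are SI:
# pascals, cubic metres, moles, kelvin.
R = 8.314  # J/(mol*K), ideal gas constant

def pressure(n, T, V):
    """Pressure of n moles of an ideal gas at temperature T in volume V."""
    return n * R * T / V

# Compressing a fixed amount of gas at fixed temperature: halving the
# volume doubles the pressure, consistent with V being proportional to T/P.
p1 = pressure(n=1.0, T=300.0, V=1.0)
p2 = pressure(n=1.0, T=300.0, V=0.5)
print(p2 / p1)  # 2.0
```

A real gas being compressed would also heat up, which is the extra ingredient the ideal law leaves out, and the one that matters for the shrinking (in reverse) universe.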
If the temperature of a gas rises high enough, the kinetic energy of the atoms or molecules composing it is sufficient to strip away their electrons, leading to a state of matter known as a plasma. This temperature is fairly high, in the thousands or tens of thousands of degrees. The sun and other stars are so hot as to be composed of such gaseous plasmas, whose temperature rises well up into the millions of degrees in their centers – enough, along with the very high pressures and densities there, to allow the thermonuclear fusion reactions which power their enormous energy outputs.
If we continue our backwards time journey, at a point between three and four hundred thousand years after the start of the Big Bang we reach the point where the temperature of the universe rises to the plasma temperature and above; only after this point (moving forward in time again) could electrons combine with protons and heavier nuclei to form the first atoms. This is a critical time in the cosmos’ evolution: prior to it, the interaction of electromagnetic radiation with the electrically charged electrons and bare atomic nuclei makes the universe opaque; after it, when stable atoms form, the electromagnetic radiation can stream freely through space. This radiation, called the cosmic background radiation, is a measure of the universe’s temperature. Largely in the visible range at first, it has cooled over the billions of years the universe has been expanding, to the point now where it is almost entirely in the microwave region, a region of much lower energy than visible light, indicating a cosmic temperature of only a few degrees above absolute zero (absolute zero, or 0 K on the Kelvin temperature scale, is the complete absence of all heat). By the way, it was the (accidental) discovery of this microwave radiation in 1964 by Penzias and Wilson which, as much as or more than anything else, clinched the case for the Big Bang theory.
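Since the background radiation’s temperature falls in direct proportion to the universe’s expansion, the two temperatures just mentioned tell us roughly how much space has stretched since the radiation was released. A quick sketch (the ~3000 K figure for the era when atoms first formed is an approximate, assumed value):

```python
# The CMB temperature scales inversely with the expansion of the universe:
# T_then / T_now gives the factor by which space has stretched since.
T_recombination = 3000.0  # K, approximate temperature when atoms first formed
T_today = 2.725           # K, the measured CMB temperature today

expansion_factor = T_recombination / T_today
print(round(expansion_factor))  # roughly 1100
```

So the universe has stretched by a factor of about 1100 since the cosmic background radiation last interacted with matter, which is exactly why visible-range light then reaches us as microwaves now.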
Back to the universe at 300,000–400,000 years after the Big Bang. Another important fact about the universe at this time is that, although I have described it as though it were a gas of uniform, or homogeneous, density throughout, obviously this could not have been the case. Even then there had to be inhomogeneities present; otherwise there would have been nothing for stars and galaxies and galactic clusters and larger scale structures to gravitationally condense around. These inhomogeneities need only be quite small – you would never notice their equivalent in a pot of mashed potatoes however hard you looked – but they had to be there, or else – well, for one thing, just as with the discovery of a stable high energy state of the carbon nucleus, we would not be here to predict them. In fact, they turn out to be so small that it was not until the 1990s that they were finally unequivocally detected by a space-based probe called the Cosmic Background Explorer, or COBE for short. The discovery was of such significance that some regard it as the most important scientific discovery of the century, even to the point of making religious analogies (of the Einsteinian nature) to it.
Three or four hundred thousand years is not much on our cosmic timeline – if you recall it from the last chapter, only around four hours. Actually, this is where the line begins to lose its usefulness, because the next set of interesting events occurs only up to about 20 real minutes after the Big Bang, and reaches back to only a trillionth of a trillionth of a trillionth (10⁻³⁶) of a second after the beginning. Indeed, it is difficult to come up with any type of line that is intuitively useful; if we make 10⁻³⁶ of a second equal to one second, then events happening at a trillionth of a second would be 10²⁴ seconds, or thirty million billion years, later – over two million times longer than the known age of the cosmos! So we are going to have to drop our attempts to make such time intervals intuitively meaningful, and stick with the hard numbers, as difficult as they are to grasp.
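The arithmetic behind that scaling claim is easy to check (a quick sketch; the seconds-per-year figure is approximate):

```python
# Stretch 10^-36 of a second out to one full second, and ask where a
# trillionth of a second (10^-12 s) lands on the stretched timeline.
SCALE = 1.0 / 1e-36          # stretched seconds per real second
SECONDS_PER_YEAR = 3.156e7   # approximate

stretched_seconds = 1e-12 * SCALE              # 10^24 stretched seconds
stretched_years = stretched_seconds / SECONDS_PER_YEAR
print(f"{stretched_years:.1e} years")          # ~3.2e16: thirty million billion
```

Divide that by the universe’s known age of about 1.4 × 10¹⁰ years and you do indeed get a factor of over two million, as claimed above.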
As it is also impossible to describe the events that happen during this period of 10⁻³⁶ of a second to approximately twenty minutes without some basic knowledge of nuclear physics, a digression is necessary before plunging in. Don’t panic, though; it will only be enough for our purposes here, and besides, I’m not enough of an expert in the subject to make it too abstruse.
As you probably already know from your high schooling at least, the atom is composed of a nucleus consisting of one (in the case of the simplest element, hydrogen) or more protons, plus zero or more neutrons, along with one or more electrons which (though you will recall from chapter three that this is not really correct) circle it. Neutral atoms have as many electrons as protons, since the negative charge on the former exactly equals the positive charge on the latter. An atom stripped of one or more electrons is called an ion; high enough temperature or energetic enough radiation has the ability to do this, and as already mentioned, matter in this state is called a plasma.
There is something I must take the time to explain here. You probably didn’t learn much about the atomic nucleus in your schooling, but a fairly obvious question should occur to you about it: given that protons are all positively charged, what holds them together in the nucleus? Before answering that question, another thing you should know about both protons and neutrons, which are collectively known as hadrons, is that they themselves are composed of still smaller entities bearing the strange name of quarks (a pun I can’t resist: there is a type of high-energy quark called the strange quark). You could say, in fact, that instead of describing nuclei as being made of two different types of hadrons, we really should say that they are made of two different kinds, or “flavors”, of quarks, namely up quarks and down quarks.
Quarks have “fractional” electric charges, in that they possess ⅓ of the negative charge of an electron – this is the down quark – or ⅔ of the positive charge of a proton – this is the up quark. Thus, what we call a proton is really a combination of two up quarks with a down quark, and a neutron is composed of one up quark with two down quarks. Add up the charges and you will see they work out, protons having a +1 charge and neutrons having zero charge.
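The charge bookkeeping is simple enough to verify in a few lines (a tiny sketch using exact fractions, in units of the elementary charge):

```python
# Quark charges in units of the elementary charge e.
from fractions import Fraction

UP = Fraction(2, 3)     # up quark: +2/3 e
DOWN = Fraction(-1, 3)  # down quark: -1/3 e

proton = 2 * UP + DOWN    # two up quarks plus one down quark
neutron = UP + 2 * DOWN   # one up quark plus two down quarks

print(proton)   # 1  -> charge +1e
print(neutron)  # 0  -> electrically neutral
```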
Quarks have other interesting properties as well. Individual quarks cannot be isolated from each other and observed; they always exist in combinations of two or three (or maybe more). In fact, their existence was predicted on purely theoretical grounds in the early 1960s by Murray Gell-Mann and George Zweig, and it was not until several years later that quarks were indirectly verified experimentally, through particle scattering experiments.
So the question isn’t what holds protons and neutrons together in the nucleus, but what holds the quarks together. Strangely enough, that question was partially answered several decades earlier (although the answer had to be modified to account for the quark structure of hadrons). Again, there is much more to this answer than needs to be covered here, but yet another brief digression, this time on forces, is enough to cover the basics. Also, as I have alluded to earlier, now is a good time to explain it in more detail.
* * *
There
are four “fundamental” forces in the universe – fundamental in
that any force you encounter consists of one or a combination of
them, working together or against one another. You are actually
already familiar with two of these forces: gravity, which pulls all
massive objects in the universe toward each other, holding you down
on the ground, keeping the moon orbiting Earth, and keeping Earth and
the other planets orbiting the sun; and electromagnetism, which you observe
every time you use a magnet or electrically charged objects – it
is, of course, the force that keeps electrons in their orbits, or
orbitals, around the atomic nucleus. Incidentally, the reason you
are much more aware of gravity than electromagnetism is that the
former is (almost) a universally attractive force, building up as the
mass to generate it accumulates. Electromagnetism, on the other
hand, is both attractive and repulsive, so you only notice it under
the special conditions where an excess of positive or negative
charges occurs, and even then the excess is usually quite small, so
the effect seems relatively weak compared to gravity. In fact,
electromagnetism is some 10^39 times more powerful than
gravity! Also, the reason you come into direct contact with both
forces is that they are infinite in range; both fall off only as the
inverse square of the distance between the two attracting (or
repelling) objects.
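The 10^39 figure can be checked directly. The sketch below uses approximate SI values of the constants and compares the electric and gravitational attraction between a proton and an electron; the exact ratio depends on which pair of particles you pick, but the order of magnitude is the point. Note the separation cancels out, because both forces follow the same inverse-square law:

```python
# Approximate SI values of the physical constants
k_e = 8.9875e9    # Coulomb constant, N m^2 / C^2
G   = 6.6743e-11  # gravitational constant, N m^2 / kg^2
e   = 1.6022e-19  # elementary charge, C
m_p = 1.6726e-27  # proton mass, kg
m_e = 9.1094e-31  # electron mass, kg

# Ratio of electrostatic to gravitational attraction between a
# proton and an electron; the 1/r^2 factors cancel.
ratio = (k_e * e**2) / (G * m_p * m_e)
print(f"{ratio:.2e}")  # ~2.27e+39
```

For two protons the ratio comes out closer to 10^36; either way, gravity is staggeringly feeble by comparison.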
The
remaining two forces are called nuclear forces because their
intensities fall off so rapidly that they act only on the scale of
atomic nuclei; this is the reason we don’t encounter them directly,
but only indirectly through their effects. One of these forces, the
weak nuclear force, is involved in certain kinds of radioactive
decay. I won’t speak more about it here. The other, the strong
nuclear force, which I have mentioned before, is what answers our
question about what holds the quarks, or the protons and neutrons,
together in the atomic nucleus. This force is approximately a
hundred times stronger than the electromagnetic force at the ranges
typical inside nuclei. Again though, its range is so short that it
takes tremendous kinetic energy to overcome the mutual
electromagnetic repulsion between two nuclei and allow them to come
close enough together to fuse via the strong nuclear force; this is
why it takes the incredibly high temperatures in the core of a star,
or in a thermonuclear weapon, or in the very early universe, to
accomplish this kind of nuclear fusion.
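A rough feel for why such extreme conditions are needed: the sketch below estimates the classical energy cost of pushing two protons to within the strong force's roughly one-femtometre range, and the temperature whose thermal energy matches it. This deliberately ignores quantum tunneling, which is why real stellar cores manage fusion at "only" about fifteen million kelvin:

```python
k_e = 8.9875e9    # Coulomb constant, N m^2 / C^2
e   = 1.6022e-19  # elementary charge, C
k_B = 1.3807e-23  # Boltzmann constant, J / K

r = 1.0e-15  # ~1 femtometre: rough range of the strong force, m

# Classical Coulomb barrier between two protons at separation r
barrier = k_e * e**2 / r  # J
T = barrier / k_B         # temperature with comparable thermal energy, K
print(f"{barrier:.2e} J, {T:.1e} K")
```

The answer is on the order of ten billion kelvin, which is why only the hottest, densest environments (stellar cores, thermonuclear weapons, the early universe) get fusion going at all.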
The
reason for the digression to discuss these forces is that, according
to modern theories of nuclear physics, they are all actually
manifestations of a single force, and that at sufficiently high
temperatures and pressures, such as what happens as we get closer and
closer to time zero, they merge together one by one until there is
only a single force. The reason for the digression on quarks is that
prior to a certain time, the temperature of the universe is so high
that they cannot hold together long enough to make stable protons and
neutrons.
* * *
There is one more digression that needs to be made before we talk
about the earliest moments of cosmic evolution. It concerns what
seems to me, a non-physicist, to be the Central Problem standing in
the way of our ever fully understanding those moments.
The problem concerns the two major edifices that twentieth-century
physics erected to understand matter, energy, space, and time. The
first edifice, which
we’ve already met, is the physics of the ultra-tiny, the world of
the atom and smaller, the physics of quantum mechanics. The other
edifice is the physics that describes the universe on the large
scale, from approximately planet sized objects on up: Einstein’s
General Relativity. And the problem is both simple and deep at the
same time: they simply do not look at and model reality in the same
way.
A
good example of this is how they describe gravity. In quantum
mechanics all forces are carried by a type of particle called a
virtual boson (bosons are particles which carry forces; the particles
which compose mass itself are called fermions). For the
electromagnetic force, this boson is the photon; and for the strong
nuclear force, the gluon. For gravity it is a hypothetical particle
dubbed, naturally enough, the graviton. I say hypothetical because
gravity is such a weak force that gravitons have yet to be detected,
although they are well described theoretically; nevertheless,
according to everything we know, they must be there.
According
to general relativity, however, gravity is really not a force at all,
but the result of the Einsteinian curvature of four dimensional
spacetime by massive objects. Another object falls toward a massive
one because it is simply following the straightest possible path, a
geodesic, through this curved spacetime. Although this curvature is enough to hold us
solidly on Earth, it requires a very massive object to detect it.
One way of doing this is by the way it bends light; historically,
General Relativity was regarded as proven by the slight deflection of
star positions during a solar eclipse in 1919. The bending of light
is used to explain a number of other astronomical phenomena as well,
such as gravitational lensing: the splitting of the image of a
distant galaxy into two or more images by the presence of an
intervening object of sufficient mass.
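The 1919 result can be reproduced from general relativity's light-deflection formula, θ = 4GM/(c²b), for a ray grazing the sun's limb (approximate constants below):

```python
import math

G = 6.6743e-11  # gravitational constant, N m^2 / kg^2
M = 1.989e30    # mass of the sun, kg
c = 2.998e8     # speed of light, m / s
R = 6.957e8     # solar radius, m (light just grazing the limb)

theta = 4 * G * M / (c**2 * R)            # deflection angle, radians
arcsec = theta * (180 / math.pi) * 3600   # convert to arcseconds
print(f"{arcsec:.2f} arcseconds")         # ~1.75
```

This is the famous 1.75-arcsecond deflection, twice the value Newtonian physics predicts, which the 1919 eclipse expedition confirmed.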
Another
difference between the two theories is how they regard spacetime
itself. General relativity requires that spacetime be smooth and
relatively flat on all scales. Quantum mechanics however says that
that is impossible. The uncertainty principle, which we have already
met, means that on small enough scales spacetime must be lumpy and
twisted. An analogy might be a woolen blanket which from a distance
looks smooth but up close is revealed to be a tangle of intertwining
fibers. The uncertainty principle also affects
spacetime on small enough scales in another way, by allowing
“virtual” particles to come into existence over short enough time
periods. This happens because of another way of expressing the
uncertainty principle besides the Δx × Δv ≥ ħ/m
form we encountered in chapter three: Δt × ΔE ≥ ħ,
where Δt is an interval of time and ΔE an amount of energy. In this
form the relation permits particles of any given mass-energy (E)
to flicker into existence, so long as they disappear again within a
time of roughly t = ħ/E.
Despite the term virtual (they are not directly detectable), these
particles are not only quite real in their effects, but they are the
heart of what explains the four fundamental forces in quantum
mechanics.
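To put a number on this, the sketch below estimates how long a "borrowed" electron-positron pair may exist under t ≈ ħ/E, taking E to be the rest energy of the pair (approximate constants; this is an order-of-magnitude estimate, not a precise QED calculation):

```python
hbar = 1.0546e-34  # reduced Planck constant, J s
m_e  = 9.1094e-31  # electron mass, kg
c    = 2.998e8     # speed of light, m / s

E = 2 * m_e * c**2  # rest energy of an electron-positron pair, J
t = hbar / E        # allowed lifetime from t ~ hbar / E, s
print(f"{t:.1e} s")
```

The result is on the order of 10^-22 seconds: far too brief to observe the pair directly, yet long enough for such fleeting particles to mediate forces.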
This
conflict between quantum mechanics and general relativity means that
neither theory encompasses a complete and fully correct vision of
reality. This is not normally a problem, however, because the two
theories deal with reality on such different scales that physicists
rarely need both at once. In dealing with the very early universe,
however, the theories clash like elephants charging at full speed, for we
have now delved into a realm of both the extremely small and
the extremely massive, a place that no one has gone before and where
all our curiosity and imagination and brilliance become less and less
able to predict what we will find there. The only thing that is
certain is that we are not in Kansas anymore.
* * *
It
is time to resume our journey back to the beginning of the universe,
or at least as far as our knowledge of physics permits, back towards
T = 0, if indeed there was such a time. We had stopped at T + 20
minutes, and for good reason. In the universe today, only the
centers of stars are hot enough and dense enough to fuse hydrogen
into helium and heavier elements. But there must have been a time,
if the Big Bang is true, when the universe as a whole existed in
those conditions. There was, and T + 20 minutes marks the end of
that time.
Astronomers
observing our current cosmos discover that it is, by mass,
approximately 75% hydrogen and 24% helium, with only traces of
heavier elements. It is impossible to account for more than a tiny
fraction of that helium by stellar nucleosynthesis, however. One of
the triumphs of Big Bang theory was to account for the remaining
helium; the period between T + 3 and T + 20 minutes in our universe
had just the right conditions in terms of temperature and density,
and lasted just the right amount of time, to create it.
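The standard back-of-the-envelope version of that helium calculation runs as follows. It assumes the textbook freeze-out figure of roughly one neutron per seven protons by the time nucleosynthesis begins, and that essentially every surviving neutron ends up bound in helium-4 (this is the usual simplified argument, not a full Big Bang nucleosynthesis computation):

```python
# Toy estimate: ~1 neutron per 7 protons at the start of
# nucleosynthesis, with every neutron ending up in helium-4.
n_per_p = 1 / 7

# Each He-4 nucleus uses 2 neutrons and 2 protons and weighs
# ~4 nucleon masses, so the helium mass fraction is:
Y = 4 * (n_per_p / 2) / (1 + n_per_p)
print(f"{Y:.2f}")  # ~0.25
```

The answer, about 25% helium by mass, matches what astronomers actually measure, which is one of the great successes of Big Bang theory.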
The
earliest periods of the Big Bang are referred to by cosmologists as
epochs. Despite the name, epochs are mostly extremely short periods
of time when the newly born universe was evolving extremely rapidly.
Thus, there is the Planck epoch, the grand unification epoch, the
inflationary epoch, the quark epoch, and so on. These epochs are
defined according to the predominant process(es) or particle(s) which
characterize them. The period of nucleosynthesis we are discussing
is just a part of the photon epoch, the total length of which is from
T + 3 minutes to roughly T + 300,000–400,000 years
(although the nucleosynthesis fraction of this time, if you’ll
recall, only lasts up to T + 20 minutes), a time when most of the
energy of the universe is in the form of photons; as mentioned
before, this epoch ends when stable atoms finally form and the
photons are free to stream through space unhindered as the cosmic
background radiation we detect today.
The
epoch preceding the photon epoch is the so-called lepton epoch, which
takes us back to approximately T + 1 second. Leptons are fermions (a
type of mass-bearing particle, if you’ll remember) that do not feel
the strong nuclear force; the member of this
family we are most familiar with is the electron, although there are
others, such as the electron neutrino, a very low mass particle
involved in certain types of nuclear reactions. There are also
heavier, short-lived versions of both of these particles: the muon
and the tau, heavier analogues of the electron, and their
corresponding neutrinos, the muon neutrino and tau neutrino. In the
lepton epoch leptons dominate the mass of the universe. Excuse me, I
should say leptons and anti-leptons, for we have reached that
period of the universe’s evolution where one of its most
interesting puzzles needs to be addressed: the cosmic asymmetry
between matter and antimatter.
* * *
Antimatter
probably sounds like the stuff of science fiction, especially if you
are a Star Trek fan (this is admittedly where I first heard of
it), but in fact it is very real, and that reality poses a serious
problem. The problem is that every mass carrying particle, or
fermion, has a corresponding antiparticle, which has the same mass
but the opposite electric charge (there are other differences, too).
So every electron, say, has an antielectron – also known as a
positron – every quark has an antiquark, every neutrino an
antineutrino. Part of the problem is that if a particle and its
anti-counterpart should encounter each other, say an electron and a
positron, the result is cataclysmic: the two annihilate in a burst
of high energy photons (photons, like other force-carrying
particles, are their own antiparticles;
there are no such things as anti-photons). No, the real
problem is that, in the first few seconds of the cosmos’ existence,
both fermions and their anti counterparts ought to be produced in
equal numbers, only in the next few seconds to completely annihilate
each other, leaving a universe composed of nothing but high energy
radiation; no matter, no stars or galaxies, and no us. As the
universe today, for good theoretical and observational reasons,
appears to be composed almost entirely of matter, with very little if
any antimatter, there must have been a certain asymmetry between the
number of matter and antimatter particles formed in the early
universe. This asymmetry, favoring the creation of matter over
antimatter, need only be quite small; once all of the antimatter had
been annihilated by an equal quantity of matter, the excess of
matter would have been left to dominate the cosmos we see today.
But what could have caused this asymmetry, however small?
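A toy illustration of the scale involved. The one-in-a-billion excess used below is the commonly quoted order of magnitude (inferred from the observed baryon-to-photon ratio), introduced here as an assumption rather than a figure from the text:

```python
# Hypothetical primordial census with a one-in-a-billion matter excess
antimatter = 1_000_000_000
matter     = 1_000_000_001

pairs = min(matter, antimatter)  # every antiparticle finds a partner
survivors = matter - pairs       # the tiny excess escapes annihilation
photons = 2 * pairs              # each annihilation yields (at least) two photons

print(survivors, photons)  # 1 2000000000
```

One lone survivor per two billion photons of annihilation radiation: a minuscule imbalance, yet it is the reason there is any matter left at all.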
This
is no trivial question because symmetry lies at the heart of much of
the laws of physics, especially the laws that govern sub-atomic
particles and their behavior. Violations of certain kinds of
symmetry, however, are known to occur. Symmetry breaking is, indeed,
crucial to the earliest moments of Big Bang cosmology, particularly
in the evolution of the four fundamental forces. Recall that these
forces merge, one by one, into a single force as we close in on T =
0. So it is not unreasonable to hypothesize that some kind of
symmetry breaking is responsible for the matter excess we see in the
universe today. This is an area of active research and intense
debate among cosmologists.
It
is worthwhile to pause here at T + 1 second and take stock of where
we are and what is happening in our attempt to unravel the earliest
moments of the cosmos. I mentioned at the beginning of this chapter
that as we went deeper and deeper into the past, we would eventually
reach a point where our understanding of the laws of physics begins
to get increasingly shaky, shaky to the degree that we are no longer
certain of the ground beneath our feet. Like fossil hunters digging
into deeper and deeper strata, what we find is less certain, more
speculative, and harder to lay out with the same confidence that has
carried us this far. My sense, reading, and understanding lead
me to believe that we have arrived at this point, or at least are
very close to it. The one event before T + 1 which does seem well
established, the breaking of electroweak (electromagnetic plus weak
forces) symmetry and the ensuing establishment of the weak nuclear
force and electromagnetism as two separate forces, occurs at
approximately T + 10^-12 seconds. At this point all four
fundamental forces have achieved their current form (though not
current strengths), and the quarks in the quark-gluon plasma that
fills the universe acquire their masses via their interaction with a
still hypothetical particle (it is currently being actively searched
for) called the Higgs boson. The subsequent cooling after this point
allows the free quarks to combine into the protons and neutrons and
other hadrons we see today.
* * *
I
think I can say confidently that what happens before T + 10^-12
seconds is entirely the subject of theoretical work. The next
symmetry breaking, between the strong nuclear force and the
electroweak force, is the subject of so-called Grand Unification
Theories, or GUTs, of which there are several varieties. The name is
somewhat misleading, as it still does not account for
gravity. But recalling our earlier discussion of general
relativity and quantum mechanics, we know that a quantum theory of
gravity needs to be formulated and tested before we tread that realm,
and that such a theory is still at so early a stage that one
of its prime candidates, string theory, has yet to be accepted as a
credible theory by many in the scientific community.
Current
estimates of the break between the strong and electroweak forces
place it at about T + 10^-36 seconds, or a trillionth of a
trillionth of a trillionth of a second after the Beginning. And
here, at the risk of understatement, is where things begin to get
interesting, at least if our theoretical models are correct. For
this is where Big Bang cosmology almost fell flat on its face, if I
may be pardoned what is about to be another pun.
Besides
the matter-antimatter asymmetry, two other features of the current
universe need to be explained by events very early in its history:
one is that, on very large scales, its geometry is very flat; the
second is that, on more local scales, it is lumpy and inhomogeneous.
The
local inhomogeneity is the easier of the two to understand. We look
around ourselves and we see a universe today in which the matter is
organized into stars / solar systems, galaxies, clusters of galaxies,
clusters of clusters, and so on. This is due to gravity working over
billions of years, of course. But there must have been primordial
inhomogeneities in the early universe for gravity to work on; if the
Big Bang had produced a perfectly homogeneous distribution of
mass-energy, then we would not be here to observe a universe composed
of non-uniformly distributed hydrogen and helium, bathing in an
equally non-uniform sea of background radiation.
Fortunately
for us, the universe is inhomogeneous, and has been since the
de-coupling of matter and energy around T + 300,000–400,000 years, as
careful studies of the cosmic background radiation (from COBE) have
shown. But where did these inhomogeneities come from? Classical
Big Bang theory at the time could not answer this question.
The
other problem, that of the flatness of universe on large scales, also
stumped classical theory, although it is a little harder to explain.
This is an issue raised by general relativity; more precisely, by the
so-called “field equations” of general relativity, which have a
number of different solutions, under different conditions. These
solutions, among other things, describe the cosmic curvature of
spacetime due to the presence of mass-energy. There are three
possible curvatures, depending on the mass-energy density, measured
by a value called omega or Ω: if Ω is greater than one, then the
mass-energy density yields a universe characterized by positive
spacetime curvature, causing its expansion to eventually stop and
then reverse into a contraction phase (if Ω were much greater than
one, this would already have happened by now), which may result in
another cosmic singularity and big bang;
if Ω is less than one, however, then spacetime is described as
hyperbolic and the expansion will continue forever; if Ω is exactly
equal to one, then spacetime is flat and the expansion will also
continue forever, albeit slower and slower, gradually grinding to a
stop it will never quite reach.
An
exact measurement of Ω today is difficult, but between the
observational data and theoretical considerations, it should be very
close to if not exactly equal to one. The problem this creates is
that any deviation from Ω = 1 in the early universe would be
exponentially magnified by the cosmos’ expansion until today we
should see a Ω vastly greater or smaller than one. As Ω appears
close to or equal to one today, it must have been even more
exquisitely close to one in the early universe. Prior to the
1980s, however, nobody had a convincing reason why that should be the
case. It simply appeared that Ω was another example of the “fine
tuning” problem which we shall return to later.
Human
ingenuity is never to be underestimated, however. In the 1980s the
work of Alan Guth, Andrei Linde, Andreas Albrecht, and Paul
Steinhardt yielded a modified version of Big Bang theory that
included a period of exponential expansion very early in the cosmos’
evolution. They called this extra fast expansion Inflation. The idea
of an ultra-fast, in fact exponential, expansion meant that during
this phase the universe increased in size by many orders of magnitude
(by a factor of at least 10^26) in a fantastically short
period of time, from about T + 10^-36 to T + 10^-32
seconds. The triggering mechanism for this expansion is not known
for sure, but a good candidate appears to be the decoupling of the
strong nuclear force from the electroweak force, especially as they
appear to happen at the same time. It is also a matter of contention
as to what brought inflation to an end, or even whether it ended
everywhere at the same time or broke up into “bubbles” of
ordinary universes formed at different times, of which ours is one.
In fact, inflation could still be going on outside of our own
universe, or perhaps “hyperverse” is the better term, still
creating new universes with perhaps different laws and constants.
Whatever
the physics behind inflation, what initiates it and how it ends, it
neatly solves both the problems of local inhomogeneity and
flatness (and a number of other problems as well). The flatness
problem is solved because whatever the value of Ω before inflation,
the enormous exponential stretching of spacetime brings it
essentially so close to one that it will not diverge significantly
from this value during the subsequent normal cosmic expansion. The
local inhomogeneity problem is also solved, thanks to quantum
mechanics: in the pre-inflation epoch the cosmos is so small that
random inhomogeneities arise simply due to the uncertainty principle,
which says that spacetime and the distribution of mass-energy can
never be perfectly uniform; the effect of inflation is to “freeze”
and enormously expand these inhomogeneities into the seeds of the
stars and galaxies and larger structures we see today.
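The arithmetic behind the flatness fix can be sketched under the standard assumption that the Hubble rate H stays roughly constant during inflation, so that the deviation from flatness, |Ω − 1| = |k|/(aH)², falls as 1/a² while the scale factor a grows:

```python
# Inflationary suppression of any initial deviation from flatness,
# assuming H is ~constant so |Omega - 1| scales as 1/a^2.
Z = 1e26           # growth factor of the scale factor during inflation
dev_before = 1.0   # even an order-one deviation from Omega = 1...
dev_after = dev_before / Z**2
print(f"{dev_after:.0e}")  # ...is crushed to ~1e-52
```

An expansion by 10^26 suppresses the deviation by 10^52, which is why whatever Ω was before inflation, it emerges indistinguishable from one.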
* * *
So.
We find ourselves at the decoupling of the strong nuclear force from
the electroweak force which, if theory is correct, occurred somewhere
between T + 10^-36 and T + 10^-32 seconds. The
next step, going back further, is T + 10^-43 seconds, which marks
the end of the Planck epoch, so named because according to quantum
mechanics the Planck time is approximately the shortest period of
time which can even theoretically be measured, the shortest period
over which one could say that time exists at all. The Planck epoch is also the time
period in which quantum mechanics and general relativity find
themselves in full collision. Somehow, some way, somewhere, gravity
merges with the strong + electroweak force, although no one knows how
with any certainty. We have entered the realm of pure imagination,
where some scientists play with the vibrating strings of string
theory and work long hours trying to turn them into the ultimate
explanation of matter, energy, space, and time, while other
scientists place their time and bets on ideas like loop quantum
gravity and other exotic
hypotheses. As no one has succeeded to the approval of all, we have
also reached the end of our own, personal journey into the past,
arriving if possible at where we began in Chapter eight, when we
tried to imagine what nothing would really be like and realized that
we couldn’t do it no matter how hard we tried. Of course, perhaps
what preceded the Big Bang wasn’t nothing at all. Quite possibly
our universe is part of a greater reality, in which other universes
are also embedded – the multiverse conjecture. There are also a
number of cyclic universe models, such as the Steinhardt-Turok model
in which the universe oscillates between expansion and contraction,
with each Big Bang triggered by a collision of two “branes”
(multi-dimensional membranes) in a higher dimensional spacetime.
Again, this model could predict many, even an infinite number, of
universes.
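The Planck time mentioned above is itself built from the fundamental constants, t_P = √(ħG/c⁵), and is easy to compute (approximate constants):

```python
import math

hbar = 1.0546e-34  # reduced Planck constant, J s
G    = 6.6743e-11  # gravitational constant, N m^2 / kg^2
c    = 2.998e8     # speed of light, m / s

t_planck = math.sqrt(hbar * G / c**5)
print(f"{t_planck:.2e} s")  # ~5.39e-44 s
```

Because it combines quantum mechanics (ħ), gravity (G), and relativity (c) in a single quantity, the Planck time marks the scale at which all three theories must be used at once, precisely the regime where our current physics breaks down.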
Although
any of these models could be true, there is, I think, a philosophical
problem with the whole approach, one ironically not too different
from the concept of a supernatural god(s) being responsible for the
universe. Just as a god needs a greater god to explain it, ad
infinitum, we are potentially postulating an infinite number of
greater or higher dimensional cosmoses to explain our own. To me it
all seems driven by a pathological inability to accept nothing merely
because we are incapable of imagining it. But the limitations of
human imagination prove nothing, except our need to accept them,
however unpleasant. This is a subject we will return to in the last
chapter of the book.