
Wednesday, July 30, 2014

Breakthrough Material Could Cut the Cost of Solar Energy in Half

July 30, 2014, 2:46 PM

What's the Latest?

Renewable energy startup Glint Photonics has created a new material that, when used to cover photovoltaic cells on solar panels, can capture more energy with less infrastructure, greatly reducing the total cost of harvesting the sun's power. By changing its reflectivity in response to heat from concentrated sunlight, the material can gather light that comes in from different angles throughout the day. Currently, directing sunlight onto the most absorbent part of solar cells requires lenses or mirrors supported by concrete and steel "which must be moved precisely as the sun advances across the sky to ensure that concentrated sunlight remains focused on the cells." That added infrastructure quickly runs up the costs for renewable energy plants.

What's the Big Idea?

Here's how Glint's device works, using two essential parts: "The first is an array of thin, inexpensive lenses that concentrate sunlight. The second is a sheet of glass that serves to concentrate that light more—up to 500 times—as light gathered over its surface is concentrated at its edges." As the day goes on, the angle of the incoming light shifts and the material adapts, allowing light in only at the most direct angle, trapping the light beneath the edges of the material until it can be absorbed. The company says its material could produce electricity at four cents per kilowatt-hour, compared to eight cents for the best conventional solar panels.

Read more at Technology Review

Photo credit: Shutterstock

Science Is Not About Certainty: The separation of science and the humanities is relatively new—and detrimental to both

By Carlo Rovelli | Photo: Fabrice Coffrini/AFP
We teach our students: We say that we have some theories about science. Science is about hypothetico-deductive methods; we have observations, we have data, data require organizing into theories. So then we have theories. These theories are suggested or produced from the data somehow, then checked in terms of the data. Then time passes, we have more data, theories evolve, we throw away a theory, and we find another theory that’s better, a better understanding of the data, and so on and so forth.

This is the standard idea of how science works, which implies that science is about empirical content; the true, interesting, relevant content of science is its empirical content. Since theories change, the empirical content is the solid part of what science is.

Now, there’s something disturbing, for me, as a theoretical scientist, in all this. I feel that something is missing. Something of the story is missing. I’ve been asking myself, “What is this thing missing?” I’m not sure I have the answer, but I want to present some ideas on something else that science is.

This is particularly relevant today in science, and particularly in physics, because (if I'm allowed to be polemical) in my field, fundamental theoretical physics, for thirty years we have failed. There hasn't been a major success in theoretical physics in the last few decades after the standard model, somehow. Of course there are ideas. These ideas might turn out to be right. Loop quantum gravity might turn out to be right, or not. String theory might turn out to be right, or not. But we don't know, and for the moment Nature has not said yes, in any sense.
I suspect that this might be in part because of the wrong ideas we have about science, and because methodologically we're doing something wrong, at least in theoretical physics, and perhaps also in other sciences. Let me tell you a story to explain what I mean. The story is an old story about my latest, greatest passion outside theoretical physics: an ancient scientist, or so I say, even if often he's called a philosopher, Anaximander. I'm fascinated by this character, Anaximander. I went into understanding what he did, and to me he's a scientist. He did something that's very typical of science and shows some aspect of what science is. What is the story with Anaximander? It's the following, in brief:

Until Anaximander, all the civilizations of the planet, everybody around the world, thought the structure of the world was the sky over our heads and the earth under our feet. There's an up and a down, heavy things fall from the up to the down, and that's reality. Reality is oriented up and down; Heaven's up and Earth is down. Then comes Anaximander and says, "No, it's something else. The Earth is a finite body that floats in space, without falling, and the sky is not just over our head, it's all around."

How did he get this? Well, obviously, he looked at the sky. You see things going around: the stars, the heavens, the moon, the planets, everything moves around and keeps turning around us. It's sort of reasonable to think that below us is nothing, so it seems simple to come to this conclusion. Except that nobody else came to this conclusion. In centuries and centuries of ancient civilizations, nobody got there. The Chinese didn't get there until the 17th century, when Matteo Ricci and the Jesuits went to China and told them, in spite of centuries of the Imperial Astronomical Institute studying the sky. The Indians learned this only when the Greeks arrived to tell them. In Africa, in America, in Australia, nobody else arrived at this simple realization that the sky is not just over our head, it's also under our feet. Why?

Because obviously it's easy to suggest that the Earth floats in nothing, but then you have to answer the question, Why doesn't it fall? The genius of Anaximander was to answer this question. We know his answer, from Aristotle, from other people. He doesn't answer this question, in fact: He questions this question. He asks, "Why should it fall?" Things fall toward the Earth. Why should the Earth itself fall? In other words, he realizes that the obvious generalization, from every heavy object falling to the Earth itself falling, might be wrong. He proposes an alternative, which is that objects fall toward the Earth, which means that the direction of falling changes around the Earth.

This means that up and down become notions relative to the Earth. Which is rather simple to figure out for us now: We’ve learned this idea. But if you think of the difficulty when we were children of understanding how people in Sydney could live upside-down, clearly this required changing something structural in our basic language in terms of which we understand the world. In other words, “up” and “down” meant something different before and after Anaximander’s revolution.

He understands something about reality essentially by changing something in the conceptual structure we use to grasp reality. In doing so, he isn’t making a theory; he understands something that, in some precise sense, is forever. It’s an uncovered truth, which to a large extent is a negative truth. He frees us from prejudice, a prejudice that was ingrained in our conceptual structure for thinking about space.

Why do I think this is interesting? Because I think this is what happens at every major step, at least in physics; in fact, I think this is what happened at every step in physics, not necessarily major. When I give a thesis to students, most of the time the problem I give for a thesis is not solved. It's not solved because the solution of the question, most of the time, is not in solving the question, it's in questioning the question itself. It's realizing that in the way the problem was formulated there was some implicit prejudice or assumption that should be dropped.

If this is so, then the idea that we have data and theories, and then a rational agent who constructs theories from the data using his rationality, his mind, his intelligence, his conceptual structure, doesn't make any sense, because what's being challenged at every step is not the theory, it's the conceptual structure used in constructing the theory and interpreting the data. In other words, it's not by changing theories that we go ahead but by changing the way we think about the world.
The prototype of this way of thinking, the example that makes it clearer, is Einstein's discovery of special relativity. On the one hand, there was Newtonian mechanics, which was extremely successful with its empirical content. On the other hand, there was Maxwell's theory, with its empirical content, which was extremely successful, too. But there was a contradiction between the two.

If Einstein had gone to school to learn what science is, if he had read Kuhn, and the philosophers explaining what science is, if he was any one of my colleagues today who are looking for a solution of the big problem of physics today, what would he do? He would say, “OK, the empirical content is the strong part of the theory. The idea in classical mechanics that velocity is relative: forget about it. The Maxwell equations: forget about them. Because this is a volatile part of our knowledge. The theories themselves have to be changed, OK? What we keep solid is the data, and we modify the theory so that it makes sense coherently, and coherently with the data.”

That's not at all what Einstein does. Einstein does the contrary. He takes the theories very seriously. He believes the theories. He says, "Look, classical mechanics is so successful that when it says that velocity is relative, we should take it seriously, and we should believe it. And the Maxwell equations are so successful that we should believe the Maxwell equations." He has so much trust in the theory itself, in the qualitative content of the theory (that qualitative content that Kuhn says changes all the time, that we learned not to take too seriously), and he has so much trust in that, that he's ready to do what? To force coherence between the two theories by challenging something completely different, which is something that's in our head, which is how we think about time.

He's changing something in common sense, something about the elementary structure in terms of which we think of the world, on the basis of trust in the past results of physics. This is exactly the opposite of what's done today in physics. If you read Physical Review today, it's all about theories that challenge completely and deeply the content of previous theories: theories in which there's no Lorentz invariance, which are not relativistic, which are not generally covariant, in which quantum mechanics might be wrong…

Every physicist today is immediately ready to say, “OK, all of our past knowledge about the world is wrong. Let’s randomly pick some new idea.” I suspect that this is not a small component of the long-term lack of success of theoretical physics. You understand something new about the world either from new data or from thinking deeply on what we’ve already learned about the world. But thinking means also accepting what we’ve learned, challenging what we think, and knowing that in some of the things we think, there may be something to modify.

What, then, are the aspects of doing science that I think are undervalued and should come up front? First, science is about constructing visions of the world, about rearranging our conceptual structure, about creating new concepts which were not there before, and even more, about changing, challenging, the a priori that we have. It has nothing to do with the assembling of data and the ways of organizing the assembly of data. It has everything to do with the way we think, and with our mental vision of the world. Science is a process in which we keep exploring ways of thinking and keep changing our image of the world, our vision of the world, to find new visions that work a little bit better.

In doing that, what we've learned in the past is our main ingredient, especially the negative things we've learned. If we've learned that the Earth is not flat, there will be no theory in the future in which the Earth is flat. If we have learned that the Earth is not at the center of the universe, that's forever.
We’re not going to go back on this. If you’ve learned that simultaneity is relative, with Einstein, we’re not going back to absolute simultaneity, like many people think. Thus when an experiment measures neutrinos going faster than light, we should be suspicious and, of course, check to see whether there’s something very deep that’s happening. But it’s absurd when everybody jumps and says, “OK, Einstein was wrong,” just because a little anomaly indicates this. It never works like that in science.

The past knowledge is always with us, and it’s our main ingredient for understanding. The theoretical ideas that are based on “Let’s imagine that this may happen, because why not?” are not taking us anywhere.

I seem to be saying two things that contradict each other. On the one hand, we trust our past knowledge, and on the other hand, we are always ready to modify, in depth, part of our conceptual structure of the world. There’s no contradiction between the two; the idea of the contradiction comes from what I see as the deepest misunderstanding about science, which is the idea that science is about certainty.
Science is not about certainty. Science is about finding the most reliable way of thinking at the present level of knowledge. Science is extremely reliable; it’s not certain. In fact, not only is it not certain, but it’s the lack of certainty that grounds it. Scientific ideas are credible not because they are sure but because they’re the ones that have survived all the possible past critiques, and they’re the most credible because they were put on the table for everybody’s criticism.

The very expression “scientifically proven” is a contradiction in terms. There’s nothing that is scientifically proven. The core of science is the deep awareness that we have wrong ideas, we have prejudices. We have ingrained prejudices. In our conceptual structure for grasping reality, there might be something not appropriate, something we may have to revise to understand better. So at any moment we have a vision of reality that is effective, it’s good, it’s the best we have found so far. It’s the most credible we have found so far; it’s mostly correct.

But, at the same time, it’s not taken as certain, and any element of it is a priori open for revision.
Why do we have this continuous …? On the one hand, we have this brain, and it has evolved for millions of years. It has evolved for us, basically, for running across the savannah, for running after and eating deer and trying not to be eaten by lions. We have a brain tuned to meters and hours, which is not particularly well-tuned to think about atoms and galaxies. So we have to overcome that. 

At the same time, I think we have been selected for going out of the forest, perhaps going out of Africa, for being as smart as possible, as animals that escape lions. This continuing effort on our part to change our way of thinking, to readapt, is our nature. We’re not changing our mind outside of nature; it’s our natural history that continues to change us.

If I can make a final comment about this way of thinking about science, or two final comments: One is that science is not about the data. The empirical content of scientific theory is not what’s relevant.
The data serve to suggest the theory, to confirm the theory, to disconfirm the theory, to prove the theory wrong. But these are the tools we use. What interests us is the content of the theory. What interests us is what the theory says about the world. General relativity says spacetime is curved. The data of general relativity are that the Mercury perihelion moves 43 seconds of arc per century with respect to that computed with Newtonian mechanics.

Who cares? Who cares about these details? If that were the content of general relativity, general relativity would be boring. General relativity is interesting not because of its data but because it tells us that as far as we know today, the best way of conceptualizing spacetime is as a curved object. It gives us a better way of grasping reality than Newtonian mechanics, because it tells us that there can be black holes, because it tells us there’s a Big Bang. This is the content of the scientific theory. All living beings on Earth have common ancestors. This is a content of the scientific theory, not the specific data used to check the theory.

So the focus of scientific thinking, I believe, should be on the content of the theory (the past theory, the previous theories) to try to see what they hold concretely and what they suggest to us for changing in our conceptual frame.

The final consideration regards just one comment about this understanding of science, and the long conflict across the centuries between scientific thinking and religious thinking. It is often misunderstood. The question is, Why can't we live happily together and why can’t people pray to their gods and study the universe without this continual clash? This continual clash is a little unavoidable, for the opposite reason from the one often presented. It’s unavoidable not because science pretends to know the answers. It’s the other way around, because scientific thinking is a constant reminder to us that we don’t know the answers. In religious thinking, this is often unacceptable. What’s unacceptable is not a scientist who says, “I know…” but a scientist who says, “I don’t know, and how could you know?” Many religions, or some religions, or some ways of being religious, are based on the idea that there should be a truth that one can hold onto and not question.
This way of thinking is naturally disturbed by a way of thinking based on continual revision, not just of theories but of the core ground of the way in which we think.     
       
So, to sum up, science is not about data; it's not about the empirical content. It's about our vision of the world, about overcoming our own ideas and continually going beyond common sense. Science is a continual challenging of common sense, and the core of science is not certainty, it's continual uncertainty; I would even say, the joy of being aware that in everything we think, there are probably still an enormous number of prejudices and mistakes, and of trying to learn to look a little bit beyond, knowing that there's always a larger point of view to be expected in the future.

We're very far from the final theory of the world, in my field, in physics: extremely far. Every hope of saying, "Well, we're almost there, we've solved all the problems," is nonsense. And we're wrong when we discard the value of theories like quantum mechanics, general relativity (or special relativity, for that matter) and try something else randomly. On the basis of what we know, we should learn something more, and at the same time we should somehow take our vision for what it is: a vision that's the best vision we have, but one we should continually evolve.

If science works, or in part works, in the way I’ve described, this is strongly tied to the kind of physics I do. The way I view the present situation in fundamental physics is that there are different problems: One is the problem of unification, of providing a theory of everything. The more specific problem, which is the problem in which I work, is quantum gravity. It’s a remarkable problem because of general relativity. Gravity is spacetime; that’s what we learned from Einstein. Doing quantum gravity means understanding what quantum spacetime is. And quantum spacetime requires some key change in the way we think about space and time.

Now, with respect to quantum gravity, there are two major research directions today, which are loops, the one in which I work, and strings. These are not just two different sets of equations; they are based on different philosophies of science, in a sense. The one in which I work is very much based on the philosophy I have just described, and that’s what has forced me to think about the philosophy of science.

Why? Because the idea is the following: The best of what we know about spacetime is what we know from general relativity. The best of what we know about mechanics is what we know from quantum mechanics. There seems to be a difficulty in attaching the two pieces of the puzzle together: They don't fit well. But the difficulty might be in the way we face the problem. The best information we have about the world is still contained in these two theories, so let's take quantum mechanics as seriously as possible, believe it as much as possible. Maybe enlarge it a little bit to make it general relativistic, or whatever. And let's take general relativity as seriously as possible. General relativity has peculiar features, specific symmetries, specific characteristics. Let's try to understand them deeply and see whether as they are, or maybe just a little bit enlarged, a little bit adapted, they can fit with quantum mechanics to give us a theory, even if the theory that comes out contradicts something in the way we think.

That's the way quantum gravity (the way of the loops, the way I work, and the way other people work) is being developed. This takes us in one specific direction of research, a set of equations, a way of putting up the theory. String theory has gone in the opposite direction. In a sense, it says, "Well, let's not take general relativity too seriously as an indication of how the universe works."
Even quantum mechanics has been questioned, to some extent. "Let's imagine that quantum mechanics has to be replaced by something different. Let's try to guess something completely new": some big theory out of which, somehow, the same empirical content of general relativity and quantum mechanics comes out in some limit.

I’m distrustful of this huge ambition, because we don’t have the tools to guess this immense theory. String theory is a beautiful theory. It might work, but I suspect it’s not going to work. I suspect it’s not going to work because it’s not sufficiently grounded in everything we know so far about the world, and especially in what I perceive as the main physical content of general relativity. 
String theory is big guesswork. Physics has never been guesswork; it's been a way of unlearning how to think about something and learning how to think a little bit differently by reading the novelty into the details of what we already know. Copernicus didn't have any new data, any major new idea; he just took Ptolemy, the details of Ptolemy, and he read in the details of Ptolemy the fact that the equants, the epicycles, the deferents, were in certain proportions. It was a way to look at the same construction from a slightly different perspective and discover that the Earth is not the center of the universe.

Einstein, as I said, took seriously both Maxwell’s theory and classical mechanics in order to get special relativity. Loop quantum gravity is an attempt to do the same thing: take general relativity seriously, take quantum mechanics seriously, and out of that, bring them together, even if this means a theory where there’s no time, no fundamental time, so that we have to rethink the world without basic time. The theory, on the one hand, is conservative because it’s based on what we know. But it’s totally radical, because it forces us to change something big in our way of thinking.

String theorists think differently. They say, "Well, let's go out to infinity, where somehow the full covariance of general relativity is not there. There we know what time is, we know what space is, because we're at asymptotic distances, at large distances." The theory is wilder, more different, newer, but in my opinion it's more based on the old conceptual structure. It's attached to the old conceptual structure and not attached to the novel content of the theories that have proven empirically successful. That's how my way of reading science coincides with the specifics of the research work that I do: specifically, loop quantum gravity.

Of course, we don't know. I want to be very clear. I think string theory is a great attempt to go ahead, by great people. My only polemical objection to string theory is when I hear (but I hear it less and less now) "Oh, we know the solution already; it's string theory." That's certainly wrong.
What's true is that string theory is a good set of ideas; loop quantum gravity is another good set of ideas. We have to wait and see which one of these theories turns out to work and, ultimately, be empirically confirmed.

This takes me to another point, which is, Should a scientist think about philosophy or not? It’s the fashion today to discard philosophy, to say now that we have science, we don’t need philosophy. I find this attitude naïve, for two reasons. One is historical. Just look back. Heisenberg would have never done quantum mechanics without being full of philosophy. Einstein would have never done relativity without having read all the philosophers and having a head full of philosophy. Galileo would never have done what he did without having a head full of Plato. Newton thought of himself as a philosopher and started by discussing this with Descartes and had strong philosophical ideas.
Even Maxwell, Boltzmann: all the major steps of science in the past were done by people who were very aware of methodological, fundamental, even metaphysical questions being posed. When Heisenberg does quantum mechanics, he is in a completely philosophical frame of mind. He says that in classical mechanics there's something philosophically wrong, there's not enough emphasis on empiricism. It is exactly this philosophical reading that allows him to construct that fantastically new physical theory, quantum mechanics.

The end of this strict dialogue between philosophers and scientists, the divorce between the two, is very recent, dating to the second half of the 20th century. It has worked because in the first half of the 20th century people were so smart. Einstein and Heisenberg and Dirac and company put together relativity and quantum theory and did all the conceptual work. The physics of the second half of the century has been, in a sense, a physics of application of the great ideas of the people of the '30s, of the Einsteins and the Heisenbergs.

When you want to apply these ideas, when you do atomic physics, you need less conceptual thinking. But now we're back to basics, in a sense. When we do quantum gravity, it's not just application. As for the scientists who say "I don't care about philosophy": it's not true that they don't care about philosophy, because they have a philosophy. They're using a philosophy of science. They're applying a methodology. They have a head full of ideas about what philosophy they're using; they're just not aware of them and they take them for granted, as if this were obvious and clear, when it's far from obvious and clear. They're taking a position without knowing that there are many other possibilities around that might work much better and might be more interesting for them.

There is narrow-mindedness, if I may say so, in many of my colleagues who don't want to learn what's being said in the philosophy of science. There is also a narrow-mindedness in a lot of areas of philosophy and the humanities, whose proponents don't want to learn about science, which is even more narrow-minded. Restricting our vision of reality today to just the core content of science or the core content of the humanities is being blind to the complexity of reality, which we can grasp from a number of points of view. The two points of view can teach each other and, I believe, enlarge each other.

This piece has been excerpted from The Universe: Leading Scientists Explore the Origin, Mysteries, and Future of the Cosmos.  Copyright © 2014 by Edge Foundation, Inc. Published by Harper Perennial.

Carlo Rovelli is a theoretical physicist; a professor at Université de la Méditerranée, Marseille; and author of The First Scientist: Anaximander and His Legacy and the textbook, Quantum Gravity, the main introduction to the field since its publication in 2004.

Underwater self-healing polymer mimics mussels

A common acrylic polymer used in biomedical applications and as a substitute for glass has been given the ability to completely self-heal underwater by US researchers. The method, which takes inspiration from the self-healing abilities of adhesive proteins secreted by mussels, could allow for longer lasting biomedical implants.
'Polymer self-healing research is about 10 years old now and many different strategies have been developed,' says Herbert Waite, who conducted the work with colleagues at the University of California, Santa Barbara. 'None, however, address the need for healing in a wet medium – a critical omission as all biomaterials function, and fail, in wet environments.'

The idea of mimicking the biological self-healing ability of mussel adhesive proteins is not new, and previous attempts have involved polymer networks functionalised with catechols – synthetic water-soluble organic molecules that mimic mussel adhesive proteins – and metal-ion mediated bonding.
However, how mussel adhesive proteins self-heal remains poorly understood, which has limited attempts to synthesise catechols that accurately mimic biological self-healing underwater.

Now, Waite and colleagues have discovered a new aspect of catechols after they were simply 'goofing around' in the lab and found a new way to modify the surface of poly(methyl methacrylate), or PMMA, with catechols. This led them to explore the material's properties and discover that hydrogen bonding enables the polymer to self-heal underwater after being damaged. 'Usually, catechols in wet adhesives are associated with covalent or coordination mediated cross-linking. Our results argue that hydrogen bonding can also be critical, especially as an initiator of healing,' he says.

The healing process begins because catechols provide multidentate hydrogen-bonding faces that trigger a network of hydrogen bonds to fix any damage – the interaction is strong enough to resist interference by water but reversible. Acting a bit like dissolvable stitches, hydrogen bonding between the catechols appears to stitch the damaged area, which allows the underlying polymer to fuse back together. After about 20 minutes, the hydrogen bonded catechols mysteriously disappear leaving the original site of damage completely healed. 'We don't know where the hydrogen bonded catechols go,’ Waite says. ‘Possibly back to the surface, dispersed within the bulk polymer, or some other possibility.'

Phillip Messersmith, a biomaterials expert at the University of California, Berkeley, US, says that this is ‘really creative work’. '[This] reveals a new dimension of catechols, which in this case mediate interfacial self-healing through the formation of hydrogen bonds between surfaces, and which are ultimately augmented or replaced by other types of adhesive interactions.'


Origins Of Mysterious World Trade Center Ship Determined

July 30, 2014 | by Stephen Luntz
   
Photo credit: Lower Manhattan Development Corporation. The partial hull of a ship found while excavating the World Trade Center site.
  
A remarkable piece of scientific detective work has revealed the wooden ship found beneath the wreckage of the World Trade Center was built just before, or during, the American War of Independence. Even the location where the wood was grown appears to have been settled.

In 2010, when digging the foundations for the buildings that will replace the twin towers, workers found a 9.75m long oaken partial hull 7m below what is now street level. Hickory in the keel indicated the ship was almost certainly of North American origin, but its age and specific place of construction were initially a mystery.

Isotopic dating isn't precise enough to tell us the age of the wood from which the ship is made, so instead researchers from Columbia University used the tree rings. As they report in probably the most attention-grabbing story ever published in Tree-Ring Research, the rings in timber from different parts of the ship were found to be highly similar.


Image: Lower Manhattan Development Corporation via Columbia University. The rings in the white oak of the ship's hull reveal the seasons in which the timber grew.

Since the width of tree rings depends on the weather that season, trees growing nearby tend to have ring patterns that match each other fairly closely. When the ship's timbers were compared to 21 trees of the same species (white oak, Quercus section Leucobalanus) from the eastern American seaboard, the team, led by Dr Dario Martin-Benito, found exceptionally good matches with those from the Keystone State.
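To make the matching idea concrete, here is a minimal Python sketch of ring-width cross-matching. It is illustrative only: the ring widths below are made up, and the study's actual statistical procedure was more involved; the sketch simply slides an undated series along a dated reference and reports the offset with the strongest correlation.

def pearson(a, b):
    # Pearson correlation between two equal-length ring-width sequences.
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = sum((x - mean_a) ** 2 for x in a) ** 0.5
    sd_b = sum((y - mean_b) ** 2 for y in b) ** 0.5
    return cov / (sd_a * sd_b)

def best_offset(sample, reference, min_overlap=8):
    # Slide the undated sample along the dated reference and return the
    # offset (in rings, i.e. years) where the width patterns agree most strongly.
    best = (None, -1.0)
    for offset in range(len(reference) - min_overlap + 1):
        overlap = min(len(sample), len(reference) - offset)
        if overlap < min_overlap:
            break
        r = pearson(sample[:overlap], reference[offset:offset + overlap])
        if r > best[1]:
            best = (offset, r)
    return best

reference = [1.2, 0.8, 1.5, 0.9, 0.7, 1.1, 1.6, 0.6, 0.9, 1.3, 1.0, 0.8, 1.4, 0.7, 1.2]  # dated chronology
sample = [1.58, 0.62, 0.88, 1.31, 1.02, 0.79, 1.37, 0.71]                                # undated timber

offset, r = best_offset(sample, reference)
print("best match starts", offset, "rings into the reference, r =", round(r, 2))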

“Our results showed the highest agreement between the WTC ship chronology and two chronologies from Philadelphia and eastern Pennsylvania,” the paper reports. The last rings indicate the ship was built from trees felled in 1773, confirming previous theories.

While the ship has potential to provide insight into construction of the day, the authors note "idiosyncratic aspects of the vessel's construction [indicate] that the ship was the product of a small shipyard."

"Philadelphia was one of the most — if not the most — important shipbuilding cities in the U.S. at the time. And they had plenty of wood so it made lots of sense that the wood could come from there," Martin-Benito told LiveScience.

The wood has previously been found to have been infested with Lyrodus pedicellatus, indicating a trip to the Caribbean at some point. This infestation with shipworm may have led to the ship's premature demise, with the hull possibly then being used as part of a land-reclamation effort to bolster Manhattan's defenses against the sea.

Although considered part of the World Trade Center site, the location of the ship was not excavated when the original towers were built.

Read more at http://www.iflscience.com/plants-and-animals/origins-mysterious-world-trade-center-ship-determined

Depleted Uranium Could Turn Carbon Dioxide into Valuable Chemicals

New reactions could convert excessive CO2 into building blocks for materials like nylon


Image: A model of carbon dioxide levels in Earth's lower atmosphere. Credit: NOAA

European scientists have synthesised uranium complexes that take them a step closer to producing commodity chemicals from carbon dioxide.

Widespread fossil fuel depletion and concerns over atmospheric carbon dioxide levels are motivating research to convert this small molecule into value-added chemicals. Organometallic uranium complexes have successfully activated various small molecules before. However, there were no reports of an actinide metal complex that could reductively couple carbon dioxide to give a segment made from two carbon dioxide molecules – an oxalate dianion.
Not only has this now been achieved, but simply changing the alkyl group on the cyclopentadienyl ring of the uranium(III) sandwich complex has a remarkable effect on carbon dioxide activation, enabling selective tuning of the resulting reduction products.

Geoff Cloke’s group at the University of Sussex, UK, and computational collaborators at the University of Toulouse, France, found that a small methyl group gives both bridging oxo and oxalate complexes; intermediate ethyl and isopropyl substituents give bridging carbonate and oxalate species; while bulkier tertiary butyl gives only the bridging carbonate complex. The oxalate formation is particularly important as it involves making a C–C bond directly from carbon dioxide. This is a fundamentally important but seldom reported transformation.
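In outline, and as a schematic written for this summary rather than taken from the paper, the reductive coupling joins two carbon dioxide molecules through that new C–C bond, with the uranium(III) centres supplying the two electrons and then binding the product as a bridging ligand:

2 CO₂ + 2 e⁻ → C₂O₄²⁻ (the oxalate dianion, ⁻O₂C–CO₂⁻)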

Uranium(III) lends itself to small molecule activation for a number of reasons: it is a strong reducing agent with a U(III)/U(IV) redox couple electrode potential of around –2.5 V and, unlike transition metals, it is not constrained by the 18 electron rule and overall has pretty unique reactivity. These characteristics do, however, make handling such extremely air sensitive and reactive compounds challenging. While the chemistry is still far from large-scale production for industrial applications, Fang Dai, a chemical engineer at General Motors, US, points out that it 'provides a solid basis for further exploration of both chemical activation of carbon dioxide and corresponding organo-actinide chemistry'.

Finding an alternative use for depleted uranium – which has almost negligible radioactivity and is in plentiful supply – beyond its typical military applications is certainly desirable. What's more, controlling the selectivity and establishing different mechanisms and key intermediates of reductive activations could lead to reductive coupling of more than one type of small molecule. Cloke raises the 'fantastic' example of creating a dicarboxylic acid uranium derivative by reductively coupling carbon dioxide and ethene: 'Dicarboxylic acids such as adipic acid are used in making nylon, so to make them directly from carbon dioxide would be very attractive. Although making a catalytic system would undoubtedly be challenging, demonstrating this idea is the next, very important, step.'

This article is reproduced with permission from Chemistry World. The article was first published on July 25, 2014.

Jacob Bronowski

From Wikipedia, the free encyclopedia
            
Jacob Bronowski
Born: 18 January 1908, Łódź, Congress Poland, Russian Empire
Died: 22 August 1974 (aged 66), East Hampton, New York, United States
Residence: United Kingdom
Nationality: Polish-English
Fields: Mathematics, operations research, biology, history of science
Institutions: Salk Institute
Alma mater: University of Cambridge
Doctoral advisor: H. F. Baker
Known for: Geometry, The Ascent of Man
Spouse: Rita Coblentz
Children: Lisa Jardine, Judith Bronowski

Jacob Bronowski (18 January 1908 – 22 August 1974) was a Polish-Jewish British mathematician, biologist, historian of science, theatre author, poet and inventor. He is best remembered as the presenter and writer of the 1973 BBC television documentary series, The Ascent of Man, and the accompanying book.

Life and work

Jacob Bronowski was born in Łódź, Congress Poland, Russian Empire, in 1908. His family moved to Germany during the First World War, and then to England in 1920. Although, according to Bronowski, he knew only two English words on arriving in Great Britain,[1] he gained admission to the Central Foundation Boys' School in London and went on to study at the University of Cambridge, where he graduated as senior wrangler.

As a mathematics student at Jesus College, Cambridge, Bronowski co-edited—with William Empson—the literary periodical Experiment, which first appeared in 1928. Bronowski would pursue this sort of dual activity, in both the mathematical and literary worlds, throughout his professional life. He was also a strong chess player, earning a half-blue while at Cambridge and composing numerous chess problems for the British Chess Magazine between 1926 and 1970.[2] He received a Ph.D. in mathematics in 1935, writing a dissertation in algebraic geometry. For a time in the 1930s he lived near Laura Riding and Robert Graves in Majorca. From 1934 to 1942 he taught mathematics at the University College of Hull. Beginning in this period, the British secret service MI5 kept Bronowski under surveillance believing he was a security risk, which is thought to have restricted his access to senior posts in the UK.[3]

During the Second World War Bronowski worked in operations research for the UK's Ministry of Home Security, where he developed mathematical approaches to bombing strategy for RAF Bomber Command. At the end of the war, Bronowski was part of a British team that visited Japan to document the effects of the atomic bombings of Hiroshima and Nagasaki. Following his experiences of the after-effects of the Nagasaki and Hiroshima bombings, he discontinued his work for British military research and turned to biology, as did his friend Leó Szilárd and many other physicists of that time, to better understand the nature of violence. Subsequently, he became Director of Research for the National Coal Board in the UK, and an associate director of the Salk Institute from 1964.

In 1950, Bronowski was given the Taung child's fossilized skull and asked to try, using his statistical skills, to combine a measure of the size of the skull's teeth with their shape in order to discriminate them from the teeth of apes. Work on this turned his interests towards the biology of humanity's intellectual products.

In 1967 Bronowski delivered the six Silliman Memorial Lectures at Yale University and chose as his subject the role of imagination and symbolic language in the progress of scientific knowledge. Transcripts of the lectures were published posthumously in 1978 as The Origins of Knowledge and Imagination and remain in print.

He first became familiar to the British public through appearances on the BBC television version of The Brains Trust in the late 1950s. His ability to answer questions on many varied subjects led to an offhand reference in an episode of Monty Python's Flying Circus where one character states that "He knows everything." Bronowski is best remembered for his thirteen part series The Ascent of Man (1973), a documentary about the history of human beings through scientific endeavour. This project was intended to parallel art historian Kenneth Clark's earlier "personal view" series Civilisation (1969) which had covered cultural history.

During the making of The Ascent of Man, Bronowski was interviewed by the popular British chat show host Michael Parkinson. Parkinson later recounted that Bronowski's description of a visit to Auschwitz—Bronowski had lost many family members during the Nazi era—was one of Parkinson's most memorable interviews.[4]

Personal life

Jacob Bronowski married Rita Coblentz in 1941.[5] The couple had four children, all daughters, the eldest being the British academic Lisa Jardine and another being the filmmaker Judith Bronowski. He died in 1974 of a heart attack in East Hampton, New York,[6] a year after The Ascent of Man was completed, and was buried in the western side of London's Highgate Cemetery, near the entrance.

Books

  • The Poet's Defence (1939)
  • William Blake: A Man Without a Mask (1943)
  • The Common Sense of Science (1951)
  • The Face of Violence (1954)
  • Science and Human Values. New York: Julian Messner, Inc. 1956, 1965. 
  • William Blake: The Penguin Poets Series (1958)
  • The Western Intellectual Tradition, From Leonardo to Hegel (1960) - with Bruce Mazlish
  • Biography of an Atom (1963) - with Millicent Selsam
  • Insight (1964)
  • The Identity of Man. Garden City: The Natural History Press. 1965. 
  • Nature and Knowledge: The Philosophy of Contemporary Science (1969)
  • William Blake and the Age of Revolution (1972)
  • The Ascent of Man (1974)
  • A Sense of the Future (1977)
  • Magic, Science & Civilization (1978)
  • The Origins of Knowledge and Imagination (1978)
  • The Visionary Eye: Essays in the Arts, Literature and Science (1979) - edited by Piero Ariotti and Rita Bronowski.

References

  1. Bronowski, Jacob (1967). The Common Sense of Science. Cambridge, Massachusetts: Harvard University Press. p. 8. ISBN 0-674-14651-4.
  2. Winter, Edward. "Chess Notes". Retrieved 23 March 2008.
  3. Berg, Sanchia (4 April 2011). "MI5 'said Bronowski was a risk'". BBC News.
  4. Bronowski, Jacob (8 February 1974). Dr. Jacob Bronowski. Interview with Michael Parkinson. Parkinson, BBC Television. Retrieved 3 February 2014.
  5. Jardine, Lisa (22 September 2010). Obituary: Rita Bronowski [Coblentz]. The Guardian.
  6. "Milestones, Sep. 2, 1974". Time (n.d., reprint of contemporary item).

Group theory

From Wikipedia, the free encyclopedia
   
In mathematics and abstract algebra, group theory studies the algebraic structures known as groups. The concept of a group is central to abstract algebra: other well-known algebraic structures, such as rings, fields, and vector spaces can all be seen as groups endowed with additional operations and axioms. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra. Linear algebraic groups and Lie groups are two branches of group theory that have experienced advances and have become subject areas in their own right.
Various physical systems, such as crystals and the hydrogen atom, can be modelled by symmetry groups. Thus group theory and the closely related representation theory have many important applications in physics, chemistry, and materials science. Group theory is also central to public key cryptography.

One of the most important mathematical achievements of the 20th century[1] was the collaborative effort, taking up more than 10,000 journal pages and mostly published between 1960 and 1980, that culminated in a complete classification of finite simple groups.

History

 
Group theory has three main historical sources: number theory, the theory of algebraic equations, and geometry. The number-theoretic strand was begun by Leonhard Euler, and developed by Gauss's work on modular arithmetic and additive and multiplicative groups related to quadratic fields. Early results about permutation groups were obtained by Lagrange, Ruffini, and Abel in their quest for general solutions of polynomial equations of high degree. Évariste Galois coined the term "group" and established a connection, now known as Galois theory, between the nascent theory of groups and field theory. In geometry, groups first became important in projective geometry and, later, non-Euclidean geometry. Felix Klein's Erlangen program proclaimed group theory to be the organizing principle of geometry.

Galois, in the 1830s, was the first to employ groups to determine the solvability of polynomial equations. Arthur Cayley and Augustin Louis Cauchy pushed these investigations further by creating the theory of permutation groups. The second historical source for groups stems from geometrical situations. In an attempt to come to grips with possible geometries (such as euclidean, hyperbolic or projective geometry) using group theory, Felix Klein initiated the Erlangen programme. Sophus Lie, in 1884, started using groups (now called Lie groups) attached to analytic problems. Thirdly, groups were, at first implicitly and later explicitly, used in algebraic number theory.

The different scope of these early sources resulted in different notions of groups. The theory of groups was unified starting around 1880. Since then, the impact of group theory has been ever growing, giving rise to the birth of abstract algebra in the early 20th century, representation theory, and many more influential spin-off domains. The classification of finite simple groups is a vast body of work from the mid 20th century, classifying all the finite simple groups.

Main classes of groups

The range of groups being considered has gradually expanded from finite permutation groups and special examples of matrix groups to abstract groups that may be specified through a presentation by generators and relations.

Permutation groups

The first class of groups to undergo a systematic study was permutation groups. Given any set X and a collection G of bijections of X into itself (known as permutations) that is closed under compositions and inverses, G is a group acting on X. If X consists of n elements and G consists of all permutations, G is the symmetric group Sn; in general, any permutation group G is a subgroup of the symmetric group of X. An early construction due to Cayley exhibited any group as a permutation group, acting on itself (X = G) by means of the left regular representation.
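As an added illustration (not part of the article), here is a minimal Python sketch of Cayley's construction: a small abstract group, here the integers mod 4 under addition, realized as a permutation group acting on itself by left multiplication.

elements = [0, 1, 2, 3]

def op(a, b):
    # The group operation: addition modulo 4.
    return (a + b) % 4

def left_regular(g):
    # The permutation of `elements` induced by x -> op(g, x).
    return tuple(op(g, x) for x in elements)

def compose(p, q):
    # Composition of permutations: apply q first, then p.
    return tuple(p[q[i]] for i in range(len(q)))

perms = {g: left_regular(g) for g in elements}

# The map g -> perms[g] turns the group operation into composition of permutations.
for g in elements:
    for h in elements:
        assert perms[op(g, h)] == compose(perms[g], perms[h])

print(perms[1])   # the cycle sending 0->1, 1->2, 2->3, 3->0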

In many cases, the structure of a permutation group can be studied using the properties of its action on the corresponding set. For example, in this way one proves that for n ≥ 5, the alternating group An is simple, i.e. does not admit any proper normal subgroups. This fact plays a key role in the impossibility of solving a general algebraic equation of degree n ≥ 5 in radicals.

Matrix groups

The next important class of groups is given by matrix groups, or linear groups. Here G is a set consisting of invertible matrices of given order n over a field K that is closed under the products and inverses. Such a group acts on the n-dimensional vector space Kn by linear transformations. This action makes matrix groups conceptually similar to permutation groups, and the geometry of the action may be usefully exploited to establish properties of the group G.
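A further added sketch in Python, with an arbitrary choice of group: the four rotations of the plane by multiples of 90 degrees form a matrix group, closed under products, acting on the vector space R^2 by linear transformations.

import itertools

def matmul(A, B):
    # Product of two 2x2 matrices stored as tuples of row tuples.
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def matvec(A, v):
    # Apply the matrix A to the vector v.
    return tuple(sum(A[i][k] * v[k] for k in range(2)) for i in range(2))

I = ((1, 0), (0, 1))
R = ((0, -1), (1, 0))   # rotation by 90 degrees
group = {I, R, matmul(R, R), matmul(R, matmul(R, R))}

# Closure: the product of any two elements is again in the set.
assert all(matmul(A, B) in group for A, B in itertools.product(group, group))

# The group acts on R^2: the orbit of the vector (1, 0) under the action.
print(sorted(matvec(A, (1, 0)) for A in group))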

Transformation groups

Permutation groups and matrix groups are special cases of transformation groups: groups that act on a certain space X preserving its inherent structure. In the case of permutation groups, X is a set; for matrix groups, X is a vector space. The concept of a transformation group is closely related with the concept of a symmetry group: transformation groups frequently consist of all transformations that preserve a certain structure.

The theory of transformation groups forms a bridge connecting group theory with differential geometry. A long line of research, originating with Lie and Klein, considers group actions on manifolds by homeomorphisms or diffeomorphisms. The groups themselves may be discrete or continuous.

Abstract groups

Most groups considered in the first stage of the development of group theory were "concrete", having been realized through numbers, permutations, or matrices. It was not until the late nineteenth century that the idea of an abstract group as a set with operations satisfying a certain system of axioms began to take hold. A typical way of specifying an abstract group is through a presentation by generators and relations,
G = 〈S | R〉.
A significant source of abstract groups is given by the construction of a factor group, or quotient group, G/H, of a group G by a normal subgroup H. Class groups of algebraic number fields were among the earliest examples of factor groups, of much interest in number theory. If a group G is a permutation group on a set X, the factor group G/H is no longer acting on X; but the idea of an abstract group permits one not to worry about this discrepancy.
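A small Python illustration, added here with an arbitrary choice of G and H: the cosets of a normal subgroup of the integers mod 6 form a factor group, and multiplication of cosets does not depend on the chosen representatives.

G = list(range(6))   # integers mod 6 under addition
H = {0, 2, 4}        # a (normal) subgroup

def op(a, b):
    return (a + b) % 6

def coset(g):
    # The coset g + H as a frozen set, so it can live inside another set.
    return frozenset(op(g, h) for h in H)

cosets = {coset(g) for g in G}
print(cosets)        # two cosets: {0, 2, 4} and {1, 3, 5}

# (g + H) + (k + H) = (g + k) + H is well defined: the resulting coset does
# not depend on which representatives are picked from each coset.
for A in cosets:
    for B in cosets:
        assert len({coset(op(a, b)) for a in A for b in B}) == 1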

The change of perspective from concrete to abstract groups makes it natural to consider properties of groups that are independent of a particular realization, or in modern language, invariant under isomorphism, as well as the classes of groups with a given such property: finite groups, periodic groups, simple groups, solvable groups, and so on. Rather than exploring properties of an individual group, one seeks to establish results that apply to a whole class of groups. The new paradigm was of paramount importance for the development of mathematics: it foreshadowed the creation of abstract algebra in the works of Hilbert, Emil Artin, Emmy Noether, and mathematicians of their school.

Topological and algebraic groups

An important elaboration of the concept of a group occurs if G is endowed with additional structure, notably, of a topological space, differentiable manifold, or algebraic variety. If the group operations m (multiplication) and i (inversion),
m : G × G → G, (g, h) ↦ gh,   and   i : G → G, g ↦ g⁻¹,
are compatible with this structure, i.e. are continuous, smooth or regular (in the sense of algebraic geometry) maps, then G becomes a topological group, a Lie group, or an algebraic group.[2]

The presence of extra structure relates these types of groups with other mathematical disciplines and means that more tools are available in their study. Topological groups form a natural domain for abstract harmonic analysis, whereas Lie groups (frequently realized as transformation groups) are the mainstays of differential geometry and unitary representation theory. Certain classification questions that cannot be solved in general can be approached and resolved for special subclasses of groups.
Thus, compact connected Lie groups have been completely classified. There is a fruitful relation between infinite abstract groups and topological groups: whenever a group Γ can be realized as a lattice in a topological group G, the geometry and analysis pertaining to G yield important results about Γ. A comparatively recent trend in the theory of finite groups exploits their connections with compact topological groups (profinite groups): for example, a single p-adic analytic group G has a family of quotients which are finite p-groups of various orders, and properties of G translate into the properties of its finite quotients.

Combinatorial and geometric group theory

Groups can be described in different ways. Finite groups can be described by writing down the group table consisting of all possible multiplications gh. A more compact way of defining a group is by generators and relations, also called the presentation of a group. Given any set F of generators {gi}i∈I, the free group generated by F surjects onto the group G. The kernel of this map is called the subgroup of relations, generated by some subset D. The presentation is usually denoted by 〈F | D〉. For example, the group Z = 〈a | 〉 can be generated by one element a (equal to +1 or −1) and no relations, because n·1 never equals 0 unless n is zero. A string consisting of generator symbols and their inverses is called a word.
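As an added illustration in Python (the convention of writing inverses as capital letters is just for this sketch): free reduction of words in the free group on two generators a and b, cancelling adjacent inverse pairs until none remain.

def reduce_word(word):
    # Cancel adjacent inverse pairs such as 'aA' or 'Bb'; a single stack pass
    # performs the full free reduction.
    stack = []
    for letter in word:
        if stack and stack[-1] == letter.swapcase():
            stack.pop()          # letter cancels the previous one
        else:
            stack.append(letter)
    return "".join(stack)

print(reduce_word("abBAa"))   # 'a'   (a b b^-1 a^-1 a reduces to a)
print(reduce_word("aA"))      # ''    (the empty word, i.e. the identity)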

Combinatorial group theory studies groups from the perspective of generators and relations.[3] It is particularly useful where finiteness assumptions are satisfied, for example finitely generated groups, or finitely presented groups (i.e. in addition the relations are finite). The area makes use of the connection of graphs via their fundamental groups. For example, one can show that every subgroup of a free group is free.

There are several natural questions arising from giving a group by its presentation. The word problem asks whether two words are effectively the same group element. By relating the problem to Turing machines, one can show that there is in general no algorithm solving this task. Another, generally harder, algorithmically insoluble problem is the group isomorphism problem, which asks whether two groups given by different presentations are actually isomorphic. For example the additive group Z of integers can also be presented by
〈x, y | xyxyx = e〉;
it may not be obvious that these groups are isomorphic.[4]
[Figure: The Cayley graph of 〈x, y ∣ 〉, the free group of rank 2.]
Geometric group theory attacks these problems from a geometric viewpoint, either by viewing groups as geometric objects, or by finding suitable geometric objects a group acts on.[5] The first idea is made precise by means of the Cayley graph, whose vertices correspond to group elements and edges correspond to right multiplication in the group. Given two elements, one constructs the word metric given by the length of the minimal path between the elements. A theorem of Milnor and Svarc then says that given a group G acting in a reasonable manner on a metric space X, for example a compact manifold, then G is quasi-isometric (i.e. looks similar from afar) to the space X.
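An added Python sketch of the word metric (the group S3 and its generating set are arbitrary choices made for this example): breadth-first search over the Cayley graph gives each element's distance from the identity, which is its word length in the chosen generators.

from collections import deque

def compose(p, q):
    # Composition of permutations: apply q first, then p.
    return tuple(p[q[i]] for i in range(len(q)))

identity = (0, 1, 2)
gens = [(1, 0, 2), (1, 2, 0)]   # a transposition and a 3-cycle
gens += [tuple(sorted(range(3), key=lambda i: g[i])) for g in gens]   # their inverses

dist = {identity: 0}
queue = deque([identity])
while queue:
    g = queue.popleft()
    for s in gens:
        h = compose(s, g)        # follow one edge of the Cayley graph
        if h not in dist:
            dist[h] = dist[g] + 1
            queue.append(h)

print(len(dist))                 # 6: all elements of S3 are reached
print(max(dist.values()))        # the diameter of this Cayley graph (2)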

Representation of groups

Saying that a group G acts on a set X means that every element of G defines a bijective map on the set X in a way compatible with the group structure. When X has more structure, it is useful to restrict this notion further: a representation of G on a vector space V is a group homomorphism:
ρ : G → GL(V),
where GL(V) consists of the invertible linear transformations of V. In other words, to every group element g is assigned an automorphism ρ(g) such that ρ(g) ∘ ρ(h) = ρ(gh) for any g and h in G.
This definition can be understood in two directions, both of which give rise to whole new domains of mathematics.[6] On the one hand, it may yield new information about the group G: often, the group operation in G is abstractly given, but via ρ, it corresponds to the multiplication of matrices, which is very explicit.[7] On the other hand, given a well-understood group acting on a complicated object, this simplifies the study of the object in question. For example, if G is finite, it is known that V above decomposes into irreducible parts. These parts in turn are much more easily manageable than the whole V (via Schur's lemma).
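As an added, concrete instance in Python (the choice of S3 and of permutation matrices is purely illustrative): sending each permutation of {0, 1, 2} to its 3×3 permutation matrix is a representation, and the homomorphism property ρ(g) ∘ ρ(h) = ρ(gh) can be checked exhaustively.

from itertools import permutations

def perm_matrix(p):
    # Permutation matrix M with M[i][j] = 1 exactly when p maps j to i.
    n = len(p)
    return tuple(tuple(1 if p[j] == i else 0 for j in range(n)) for i in range(n))

def matmul(A, B):
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n))
                 for i in range(n))

def compose(p, q):
    # Composition of permutations: apply q first, then p.
    return tuple(p[q[i]] for i in range(len(q)))

for g in permutations(range(3)):
    for h in permutations(range(3)):
        assert matmul(perm_matrix(g), perm_matrix(h)) == perm_matrix(compose(g, h))
print("rho(g) rho(h) = rho(gh) holds for all g, h in S3")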

Given a group G, representation theory then asks what representations of G exist. There are several settings, and the employed methods and obtained results are rather different in every case: representation theory of finite groups and representations of Lie groups are two main subdomains of the theory. The totality of representations is governed by the group's characters. For example, Fourier polynomials can be interpreted as the characters of U(1), the group of complex numbers of absolute value 1, acting on the L2-space of periodic functions.
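The finite analogue of this picture is easy to verify numerically. In the following sketch (an added illustration, assuming NumPy; the cyclic group Zn stands in for U(1)), the characters χk(x) = exp(2πikx/n) are pairwise orthogonal with respect to the averaged inner product over the group, just as the Fourier basis is on the circle:

    import numpy as np

    n = 8
    x = np.arange(n)
    chars = [np.exp(2j * np.pi * k * x / n) for k in range(n)]   # characters of Zn

    gram = np.array([[np.vdot(a, b) / n for b in chars] for a in chars])
    print(np.allclose(gram, np.eye(n)))      # True: distinct characters are orthogonal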

Connection of groups and symmetry

Given a structured object X of any sort, a symmetry is a mapping of the object onto itself which preserves the structure. This occurs in many cases, for example
  1. If X is a set with no additional structure, a symmetry is a bijective map from the set to itself, giving rise to permutation groups.
  2. If the object X is a set of points in the plane with its metric structure or any other metric space, a symmetry is a bijection of the set to itself which preserves the distance between each pair of points (an isometry). The corresponding group is called the isometry group of X.
  3. If instead angles are preserved, one speaks of conformal maps. Conformal maps give rise to Kleinian groups, for example.
  4. Symmetries are not restricted to geometrical objects, but include algebraic objects as well. For instance, the equation
x² − 3 = 0
has the two solutions +√3 and −√3. In this case, the group that exchanges the two roots is the Galois group belonging to the equation. Every polynomial equation in one variable has a Galois group, that is, a certain permutation group on its roots (see the sketch after this list).
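For the equation above, the nontrivial symmetry sends √3 to −√3. A minimal Python sketch (added here; elements of Q(√3) are stored as pairs of rationals (a, b) meaning a + b√3) checks that this map respects multiplication and is therefore a field automorphism fixing the rational numbers:

    from fractions import Fraction as F

    def mul(u, v):
        a, b = u
        c, d = v
        return (a * c + 3 * b * d, a * d + b * c)   # (a + b*s)(c + d*s) with s^2 = 3

    def sigma(u):
        a, b = u
        return (a, -b)                              # exchange sqrt(3) and -sqrt(3)

    u, v = (F(1), F(2)), (F(3, 2), F(-1))
    print(sigma(mul(u, v)) == mul(sigma(u), sigma(v)))   # True: sigma is multiplicative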
The axioms of a group formalize the essential aspects of symmetry. Symmetries form a group: they are closed because if you take a symmetry of an object, and then apply another symmetry, the result will still be a symmetry. The identity keeping the object fixed is always a symmetry of an object.
Existence of inverses is guaranteed by the possibility of undoing the symmetry, and associativity comes from the fact that symmetries are functions on a space, and composition of functions is associative.
Frucht's theorem says that every group is the symmetry group of some graph. So every abstract group is actually the symmetries of some explicit object.
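A toy example of this point of view, again as an added Python sketch: the symmetries of a small graph can be found by brute force, and they indeed form a group (here of order 2, consisting of the identity and the reversal of a path):

    from itertools import permutations

    vertices = range(4)
    edges = {frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})}   # the path 0-1-2-3

    def is_automorphism(p):
        """A permutation is a symmetry if it sends edges to edges."""
        return {frozenset({p[u], p[v]}) for (u, v) in map(tuple, edges)} == edges

    autos = [p for p in permutations(vertices) if is_automorphism(p)]
    print(autos)   # [(0, 1, 2, 3), (3, 2, 1, 0)] -- the identity and the reversal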

The phrase "preserving the structure" of an object can be made precise by working in a category. Maps preserving the structure are then the morphisms, and the symmetry group is the automorphism group of the object in question.

Applications of group theory

Applications of group theory abound. Almost all structures in abstract algebra are special cases of groups. Rings, for example, can be viewed as abelian groups (corresponding to addition) together with a second operation (corresponding to multiplication). Therefore group theoretic arguments underlie large parts of the theory of those entities.

Galois theory uses groups to describe the symmetries of the roots of a polynomial (or more precisely the automorphisms of the algebras generated by these roots). The fundamental theorem of Galois theory provides a link between algebraic field extensions and group theory. It gives an effective criterion for the solvability of polynomial equations in terms of the solvability of the corresponding Galois group. For example, S5, the symmetric group on 5 elements, is not solvable, which implies that the general quintic equation cannot be solved by radicals in the way equations of lower degree can.
The theory, being one of the historical roots of group theory, is still fruitfully applied to yield new results in areas such as class field theory.
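This criterion can be checked by machine for small degrees. The following sketch uses SymPy's permutation groups (assuming SymPy is installed; the code is an illustration added here, not part of the original text):

    from sympy.combinatorics.named_groups import SymmetricGroup

    print(SymmetricGroup(4).is_solvable)   # True:  quartics are solvable by radicals
    print(SymmetricGroup(5).is_solvable)   # False: the general quintic is not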

Algebraic topology is another domain which prominently associates groups to the objects the theory is interested in. There, groups are used to describe certain invariants of topological spaces. They are called "invariants" because they are defined in such a way that they do not change if the space is subjected to some deformation. For example, the fundamental group "counts" how many paths in the space are essentially different. The Poincaré conjecture, proved in 2002/2003 by Grigori Perelman, is a prominent application of this idea. The influence is not unidirectional, though. For example, algebraic topology makes use of Eilenberg–MacLane spaces, which are spaces with prescribed homotopy groups. Similarly, algebraic K-theory relies in a crucial way on classifying spaces of groups. Finally, the name of the torsion subgroup of an infinite group shows the legacy of topology in group theory.
A torus. Its abelian group structure is induced from the map C → C/(Z + τZ), where τ is a parameter living in the upper half plane.
The cyclic group Z26 underlies Caesar's cipher.

Algebraic geometry and cryptography likewise use group theory in many ways. Abelian varieties have been introduced above. The presence of the group operation yields additional information which makes these varieties particularly accessible. They also often serve as a test for new conjectures.[8] The one-dimensional case, namely elliptic curves, is studied in particular detail. They are both theoretically and practically intriguing.[9] Very large groups of prime order constructed in elliptic-curve cryptography serve for public-key cryptography. Cryptographic methods of this kind benefit from the flexibility of the geometric objects, hence their group structures, together with the complicated structure of these groups, which makes the discrete logarithm very hard to calculate. One of the earliest encryption protocols, Caesar's cipher, may also be interpreted as a (very easy) group operation. In another direction, toric varieties are algebraic varieties acted on by a torus. Toroidal embeddings have recently led to advances in algebraic geometry, in particular resolution of singularities.[10]
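Caesar's cipher makes the group-theoretic content especially transparent: encrypting with key k is just addition of k in Z26, and decrypting is addition of the inverse −k. A minimal sketch (added here) for messages over the alphabet A–Z:

    def caesar(text, key):
        """Shift each letter by 'key' positions: the action of key in Z26."""
        return "".join(chr((ord(c) - ord("A") + key) % 26 + ord("A")) for c in text)

    message = "GROUPTHEORY"
    cipher = caesar(message, 3)
    print(cipher)               # 'JURXSWKHRUB'
    print(caesar(cipher, -3))   # 'GROUPTHEORY' -- -3 is the inverse of 3 in Z26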

Algebraic number theory makes essential use of group theory. For example, Euler's product formula

\sum_{n \geq 1} \frac{1}{n^s} = \prod_{p\ \text{prime}} \frac{1}{1 - p^{-s}}
captures the fact that any integer decomposes in a unique way into primes. The failure of this statement for more general rings gives rise to class groups and regular primes, which feature in Kummer's treatment of Fermat's Last Theorem.
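The identity can also be checked numerically. In the sketch below (added for illustration), both sides are truncated at s = 2 and compared with π²/6, the known value of the sum; the truncations agree to several decimal places:

    import math

    def primes_up_to(n):
        """Simple sieve of Eratosthenes."""
        sieve = [True] * (n + 1)
        sieve[0:2] = [False, False]
        for p in range(2, int(n ** 0.5) + 1):
            if sieve[p]:
                for m in range(p * p, n + 1, p):
                    sieve[m] = False
        return [p for p in range(2, n + 1) if sieve[p]]

    s = 2
    zeta_sum = sum(1 / n ** s for n in range(1, 100000))
    euler_prod = math.prod(1 / (1 - p ** -s) for p in primes_up_to(10000))
    print(zeta_sum, euler_prod, math.pi ** 2 / 6)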
The circle of fifths may be endowed with a cyclic group structure
In physics, groups are important because they describe the symmetries which the laws of physics seem to obey. According to Noether's theorem, every continuous symmetry of a physical system corresponds to a conservation law of the system. Physicists are very interested in group representations, especially of Lie groups, since these representations often point the way to the "possible" physical theories. Examples of the use of groups in physics include the Standard Model, gauge theory, the Lorentz group, and the Poincaré group.
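As one small, concrete check (a numerical sketch added here, assuming NumPy): a Lorentz boost along the x-axis preserves the Minkowski metric η = diag(1, −1, −1, −1), i.e. Lᵀ η L = η, and this invariance is exactly what makes the boosts, together with the rotations, close up into the Lorentz group:

    import numpy as np

    eta = np.diag([1.0, -1.0, -1.0, -1.0])     # Minkowski metric

    def boost_x(beta):
        """Lorentz boost with velocity beta (in units of c) along the x-axis."""
        gamma = 1.0 / np.sqrt(1.0 - beta ** 2)
        L = np.eye(4)
        L[0, 0] = L[1, 1] = gamma
        L[0, 1] = L[1, 0] = -gamma * beta
        return L

    L = boost_x(0.6)
    print(np.allclose(L.T @ eta @ L, eta))     # True: the metric is preserved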
