
Friday, January 9, 2015

On the futility of climate models: ‘simplistic nonsense’

Guest essay by Leo Smith – elevated from a comment left on WUWT on January 6, 2015 at 2:11 am (h/t to dbs)


As an engineer, my first experience of a computer model taught me nearly all I needed to know about models.

I was tasked with designing a high voltage video amplifier to drive a military heads up display featuring a CRT.

Some people suggested I use an acoustic coupler to input my design and optimise it with one of the circuit-modelling programs they had devised. The results were encouraging, so I built it. The circuit itself was a dismal failure.

Investigation revealed the reason instantly: the model parametrised parasitic capacitance as a single fixed value, whereas the reality of semiconductors is that the capacitance varies with applied voltage – an effect made use of in every radio today as the ‘varicap diode’. For small signals this is an acceptable compromise. Over large voltage swings the effect is massively non-linear. The model was simply inadequate.
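
To make the failure concrete, here is a minimal Python sketch (all component values are illustrative assumptions, not those of the original amplifier) comparing a fixed, ‘parametrised’ parasitic capacitance with the standard reverse-biased junction model C(V) = Cj0 / (1 + V/Vbi)^m. Over a small signal the single value is a fair approximation; over the tens-of-volts swing a CRT video stage demands, it is wildly wrong.

```python
# Minimal sketch (illustrative values): a fixed small-signal parasitic
# capacitance versus the voltage-dependent junction capacitance
#   C(V) = Cj0 / (1 + V/Vbi)^m   (standard abrupt-junction model, m = 0.5).

def junction_capacitance(v_reverse, cj0=10e-12, vbi=0.7, m=0.5):
    """Reverse-biased junction capacitance in farads."""
    return cj0 / (1.0 + v_reverse / vbi) ** m

fixed_c = junction_capacitance(1.0)   # single value taken at a 1 V operating point

for v in (1, 10, 50, 100):            # large swing typical of a CRT video stage
    actual_c = junction_capacitance(v)
    print(f"{v:4d} V   model C = {fixed_c*1e12:5.2f} pF   actual C = {actual_c*1e12:5.2f} pF")
```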

Most of engineering is to design things so that small unpredictable effects are swamped by large predictable ones. Any stable design has to work like that. If it doesn’t, it ain’t stable. Or reproducible.

That leads to a direct piece of engineering wisdom: if a system is not dominated by a few major feedback factors, it ain’t stable. And if it has regions of stability, then perturbing it outside those regions will result in gross instability, and the system will be short-lived.

Climate has been, in real terms, amazingly stable. For millions of years. It has maintained an average of about 282 kelvin, plus or minus about 5 degrees, since forever.

So-called ‘climate science’ relies on net positive feedback to create its alarming projections – and that positive feedback has, allegedly, nothing to do with CO2: on the contrary, it is a temperature-change amplifier, pure and simple.

If such a feedback existed, any driver of temperature, from a minor change in the sun’s output to a volcanic eruption, must inevitably trigger massive temperature changes. But it simply never has. Or we wouldn’t be here to spout such nonsense.

With all the simple known factors taken care of, the basic IPCC-style equation boils down to:

∆T = λ·k·log(∆CO2)

where lambda (λ) is the climate sensitivity: it expresses the presupposed propensity of any warming directly attributable to CO2 radiative forcing (k·log(CO2)), and the resulting direct temperature change, to be amplified by some unexplained and unknown feedback factor, which is adjusted to match such late-20th-century warming as was reasonably certain.
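
As a sketch only (the numbers below are assumptions chosen for illustration, not values taken from any report), the equation can be written in code in its ratio form, ∆T = λ·k·ln(C/C₀), with k scaled so that the no-feedback response is roughly 1 °C per doubling of CO2; everything contentious then hides in λ.

```python
import math

# Minimal sketch of the single-equation form discussed above, in ratio form:
#   dT = lambda * k * ln(C / C0)
# k is chosen here (an assumption) so the no-feedback response is about
# 1.1 C per doubling of CO2; lambda is the disputed feedback multiplier.

NO_FEEDBACK_PER_DOUBLING = 1.1            # assumed direct response, degrees C
k = NO_FEEDBACK_PER_DOUBLING / math.log(2)

def delta_t(c_ratio, lam):
    """Temperature change for a concentration ratio C/C0 and multiplier lambda."""
    return lam * k * math.log(c_ratio)

for lam in (1.0, 2.0, 3.0):
    print(f"lambda = {lam:.0f}: doubling CO2 gives dT = {delta_t(2.0, lam):.2f} C")
```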

Everyone argues over the value of lambda. No one is arguing over the actual shape of the equation itself.

And that is the sleight of hand of the IPCC…arguments about climate sensitivity are pure misdirection away from the actuality of what is going on.

Consider an alternative:

∆T = k·log(∆CO2) + f(∆x)

In terms of matching late 20th-century warming this is just as good, and relies merely on introducing another unknown to replace the unknown lambda – this time not as a multiplier of CO2-driven change, but as a completely independent variable.

Philosophically both have one unknown. There is little to choose between them.

Scientifically, the rise and the pause together fit the second model far better.

Worse, consider some possible mechanisms for what x might be…

∆T = k·log(∆CO2) + f(∆T).

Let’s say that f(∆T) is in fact a function whose current value depends on non-linear and time-delayed values of past temperature. So it does indeed represent temperature feedback creating new temperatures!

This is quite close to the IPCC model, but with one important proviso. The overall long term feedback MUST be negative, otherwise temperatures would be massively unstable over geological timescales.

BUT we know that short term fluctuations of quite significant values – ice ages and warm periods – are also in evidence.

Can long-term negative feedback create shorter-term instability? Hell yes! If you have enough terms and some time delay, it’s a piece of piss.
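
A toy demonstration of that claim: the simplest delayed negative feedback, dT/dt = −a·T(t − τ), is stable for small a·τ but breaks into growing oscillations once a·τ exceeds π/2. The sketch below (toy parameters, not a climate model of any kind) integrates it with a crude Euler scheme.

```python
import math

# Toy delay-differential equation:  dT/dt = -a * T(t - tau)
# The long-term feedback is purely negative, yet for a*tau > pi/2 the
# linearised system produces growing oscillations.

def simulate(a, tau, dt=0.01, t_end=60.0, t_init=0.1):
    n_delay = int(round(tau / dt))
    history = [t_init] * (n_delay + 1)    # constant initial history
    temp = t_init
    trace = []
    for _ in range(int(t_end / dt)):
        delayed = history[-(n_delay + 1)] # temperature tau time units ago
        temp += dt * (-a * delayed)       # Euler step
        history.append(temp)
        trace.append(temp)
    return trace

a, tau = 1.0, 2.0                         # a*tau = 2.0 > pi/2: oscillatory growth
trace = simulate(a, tau)
half = len(trace) // 2
print("a*tau =", a * tau, " threshold pi/2 =", round(math.pi / 2, 3))
print("peak |T|, first half :", round(max(abs(x) for x in trace[:half]), 2))
print("peak |T|, second half:", round(max(abs(x) for x in trace[half:]), 2))
```

Dropping a·τ below about 1.57 in the same script gives decaying behaviour: the sign of the long-term feedback does not, on its own, determine the short-term behaviour.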

The climate has all the elements needed: temperature and water. Water vapour (a greenhouse gas: acts to increase temperatures), clouds (reduce daytime temps, increase night-time temps) and ice (a massive albedo modifier: acts to reduce temperatures) are functions of sea and air temperature, and sea and air temperature are a function, via albedo and greenhouse modifiers, of water vapour concentrations. Better yet, the latent heat of ice/water means massive amounts of energy are needed to effect a phase transition at a single temperature. Lots of lovely non-linearity there. Plus huge delays, of decadal or multidecadal length, in ocean current circulations and the melting/freezing of ice sheets and permafrost.

Not to mention continental drift, which adds further water cycle variables into the mix.

Or glaciation that causes falling sea levels, exposing more land to lower the albedo where the earth is NOT frozen, and glaciation that strips water vapour out of the air, reducing cloud albedo in non-glaciated areas.

It’s a massive, non-linear, hugely time-delayed negative feedback system. And that’s just water and ice. Before we toss in volcanic action, meteor strikes, continental drift, solar variability, and Milankovitch cycles…

The miracle of AGW is that all this has been simply tossed aside, or considered some kind of constant, or a multiplier of the only driver in town, CO2.

When all you know is linear systems analysis everything looks like a linear system perturbed by an external driver.

When the only driver you have come up with is CO2, everything looks like CO2.

Engineers who have done control system theory are not so arrogant, and can recognise in the irregular sawtooth of the ice-age temperature record a system that looks remarkably like a nasty, multiple-negative-feedback, time-delayed relaxation oscillator.

Oscillators don’t need external inputs to change, they do that entirely within the feedback that comprises them. Just one electron of thermal noise will start them off.

What examination of the temperature record shows is that glaciation is slow – it takes many, many thousands of years of growing ice before the lowest temperatures are reached – but the warming swings are much faster: we are only 10,000 years out of the last one.
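
That asymmetry is exactly what a relaxation oscillator produces. The toy sketch below (arbitrary units and rates, purely illustrative) lets a state variable build up slowly to an upper threshold and then collapse quickly to a lower one; counting rising versus falling steps exposes the slow-build, fast-release sawtooth.

```python
# Toy relaxation oscillator: slow accumulation to an upper threshold,
# fast collapse to a lower one, repeated indefinitely.  Units are arbitrary.

def relaxation_sawtooth(slow_rate=0.001, fast_rate=0.02,
                        upper=1.0, lower=0.0, steps=20000):
    x, building = 0.0, True
    series = []
    for _ in range(steps):
        if building:
            x += slow_rate
            if x >= upper:
                building = False   # upper threshold crossed: fast phase begins
        else:
            x -= fast_rate
            if x <= lower:
                building = True    # back to the slow accumulation phase
        series.append(x)
    return series

series = relaxation_sawtooth()
rising  = sum(1 for a, b in zip(series, series[1:]) if b > a)
falling = sum(1 for a, b in zip(series, series[1:]) if b < a)
print("slow (rising) steps:", rising, "  fast (falling) steps:", falling)
```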

The point finally is this: to an engineer, climate science as the IPCC has it is simplistic nonsense. There are far, far better models available to explain climate change, based on the complexity of water’s interactions with temperature. Unfortunately they are far too complex for even the biggest of computers to be much use in simulating climate. And they have no political value anyway, since they essentially say ‘Climate changes irrespective of human activity, over hundred-thousand-year major cycles, and within that it is simply unpredictable noise due to many factors, none of which we have any control over.’

UPDATE: An additional and clarifying comment has been posted by Leo Smith on January 6, 2015 at 6:32 pm

Look, this post was elevated (without my being aware of it…) from a blog comment typed in a hurry. I accept the formula isn’t quite what I meant, but you get the general idea, OK?

If I had known it was going to become a post I’d have taken a lot more care over it.

I would not have used k where it might confuse, and I would have spotted that delta log is not the same as log delta.

But the main points stand:

(i) The IPCC ‘formula’ fits the data less well than other equally simple formulae with just as many unknowns.

(ii) The IPCC formula is a linear differential equation.

(iii) There is no reason to doubt that large parts of the radiative/convective thermal cycle/balance of the climate are non-linear.

(iv) There are good historical reasons to suppose that the overall feedback of the climate system is negative, not positive as the IPCC assumes.

(v) Given the number of feedback paths and the lags associated with them, there is more than enough scope in the climate for self-generated chaotic, quasi-periodic fluctuations to arise even without any external inputs beyond a steady sun.

(vi) Given the likely shape of the overall real climate equation, there is no hope of anything like a realistic forecast ever being obtained with the current generation of computer systems and mathematical techniques. Chaos-style equations are amongst the hardest and most intractable problems we have, and indeed there may well be no final answer to climate change beyond a butterfly flapping its wings in Brazil and tipping the climate into a new ice age, or a warm period, depending ;-) (a numerical sketch of this sensitivity follows after point (vii)).

(vii) A point I didn’t make: a chaotic system is never ‘in balance’, and even its average value has little meaning, because it is simply a mathematical oddity – a single point in a range where the system never rests – it merely represents a point between the upper and lower bounds. Worse, in a system with multiple attractors, it may not even be anywhere near where the system orbits for any length of time.
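
A quick way to see the sensitivity described in points (vi) and (vii) is to integrate the classic Lorenz system (used here purely as the textbook example of deterministic chaos, not as a climate model) from two starting points a millionth of a unit apart.

```python
# Two Lorenz-system runs that differ by 1e-6 in the initial x value.
# The separation grows until the two trajectories are effectively unrelated.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

a = (1.0, 1.0, 1.0)
b = (1.000001, 1.0, 1.0)                  # the "butterfly" perturbation
for step in range(3001):
    if step % 1000 == 0:
        sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.01:5.1f}   separation = {sep:.3e}")
    a, b = lorenz_step(a), lorenz_step(b)
```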

In short, my current thinking says:
– there is no such thing as a normal climate, nor does it have a balance that man has disturbed, or could disturb. It is constantly changing and may go anywhere from ice age to seriously warm over extremely long periods of time. It does this all by itself. There need be no external drivers to move it from one attractor to another or to cause it to orbit any given attractor. That climate changes is unarguable; that anything beyond climate itself is causing it is deeply doubtful. That CO2 has a major effect is, on the data, as absurd as claiming that CO2 has no effect at all.

What we are looking at here is very clever misdirection cooked up for economic and political motives: it suited many people’s books to paint CO2 emissions as a scary pollutant, and a chance temporary correlation of rising temperatures and CO2 was combined in a linear way that any third-rate scientist could understand, to present a plausible formula for scary AGW. I have pointed out that other interpretations of the data make for a non-scary scenario and, indeed, post the Pause, actually fit the data better.

Occam’s razor has nothing to say in defence of either.

Popper’s falsifiability is no help, because the one model – the IPCC’s – has been falsified. The other can make no predictions beyond ‘change happens all by itself in ways we cannot hope to predict’, so it cannot be falsified. If you want to test Newton’s laws, the last experiment you would use is throwing an egg at a spike to predict where the bits of eggshell are going to land…

The net result is that climate science isn’t worth spending a plugged nickel on, and we should spend the money on being reasonably ready for moderate climate change in either direction. Some years ago my business partner – ten years my junior – wanted to get key-man insurance in case I died or fell under a bus. ‘How much, for how much cover?’ ‘Well, you are a smoker, and old, so it’s a lot.’ It was enough, in fact, to wipe out the annual profits, and the business, twice over. Curiously he is now dead from prostate cancer, and I have survived testicular cancer and, with luck, a blocked coronary artery. Sometimes you just take the risk, because insuring against it costs more… If we had been really serious about climate change we would be 100% nuclear by now. It is proven, safe technology and, dollar for dollar, has ten times the carbon-reduction impact of renewables. But of course carbon reduction was not the actual game plan. Political control of energy was. It’s so much easier and cheaper to bribe governments than to compete in a free market…

IF – and this is something that should be demonstrable – the dominant feedback terms in the real climate equations are non-linear, multiple, and subject to time delay, THEN we have a complex chaotic system that will be in constant, more or less unpredictable, flux.

And we are pissing in the wind trying to model it with simple linear differential equations and parametrised nonsense.

The whole sleight of hand of the AGW movement has been to convince scientists who do NOT understand non-linear control theory that they didn’t NEED to understand it to model climate, that any fluctuations MUST be ‘caused’ by an externality, and to pick on the most politically and commercially convenient one – CO2 – which resonated with a vastly anti-science and non-commercial sentiment left over from the Cold War ideological battles. AGW is agitprop, not science. AGW flatters all the worst people into thinking they are more important than they are. To a man, every grass-roots green movement has taken the government coin, as have the universities, and they are all dancing to the tune of the piper, who is paid by the unholy aggregation of commercial interest, political power-broking and political marketing.

They bought them all. They couldn’t however buy the climate. Mother Nature is not a whore.

Whether AGW is a deliberate fraud, an honest mistake, or mere sloppy ignorant science is moot. At any given level it is one or the other or any combination.

What it really is, is an emotional narrative, geared to flatter the stupid and pander to their bigotry, in order to make them allies in a process that, if they knew its intentions, they would utterly oppose.

Enormous damage to the environment is justified by environmentalists because the Greater Cause says that windmills and solar panels will Save the Planet. Even when it’s possible to demonstrate that they have almost no effect on emissions at all, and it is deeply doubtful whether those emissions are in any way significant anyway.

Green is utterly anti-nuclear. Yet which – even on their own claims – is less harmful: a few hundred tonnes of long-lived radionuclides encased in glass and dumped a mile underground, or a billion tonnes of CO2?

Apparently the radiation, which hasn’t injured or killed a single person at Fukushima, is far, far more dangerous than the CO2, because Germany would rather burn stinking lignite, having utterly polluted its rivers in strip-mining it, than allow a nuclear power plant to operate inside its borders.

Years ago Roy Harper sang
“You can lead a horse to water, but you cannot make him drink
You can lead a man to slaughter, but you’ll never make him think”

I had a discussion with a gloomy friend today. We agreed the world is a mess because people don’t think: they follow leaders, trends, emotional narratives, received wisdom. Never once do they step back and ask, ‘What really is going on here?’ Another acquaintance, doing management training in the financial arena, chalked up on the whiteboard: “Anyone who prefaces a statement with the words ‘I think’ and then proceeds to regurgitate someone else’s opinions, analysis or received wisdom will fail this course and be summarily ejected.”

And finally, Anthony, I am not sure I wanted that post to become an article. I don’t want to be someone else’s received wisdom. I want the buggers to start thinking for themselves.

If that means studying control theory, systems analysis and chaos mathematics, then do it. And form your own opinions.

“Don’t follow leaders, watch your parking meters”

I say people don’t think. Prove me wrong. Don’t believe what I say, do your own analysis. Stop trusting and start thinking.

I’ll leave you with a final chilling thought. Consider the following statement:

“100% of all media ‘news’ and 90% of what is called ‘science’ and an alarming amount of blog material is not what is the case, or even what people think is the case, but what people for reasons of their own, want you to think is the case”

Finally, if I ever get around to finishing it, for those who ask ‘how can it be possible that so many people are caught up in what you claim to be a grand conspiracy or something of that nature?’, I am in the business of writing a philosophical, psychological and social explanation. It is entitled ‘Convenient Lies’, and it shows that bigotry, prejudice, stupidity and venality are in fact useful techniques for species survival most of the time.

Of course the interesting facet is the ‘Black Swan’ times, when it’s the most dangerous thing in the world.

Following the herd is safer than straying off alone. Unless the herd is approaching the cliff edge and the leaders are more concerned with who is following them than where they are going…

AGW is one of the great dangers facing mankind, not because it’s true, but because it is widely believed, and demonstrably false.

My analysis of convenient lies shows that they are most dangerous in times of deep social and economic change in society, when the old orthodoxies are simply no good.

I feel more scared these days than at any time in the Cold War. Then, one felt that no one would be stupid enough to start World War Three. Today, I no longer have that conviction. Two generations of social engineering aimed at removing all risk, and all need to actually think, from society have led to a generation which is stupid enough, smug enough and feels safe enough to utterly destroy Western civilisation simply because it takes it totally for granted. To them the promotion of the AGW meme is a success story in terms of political and commercial marketing. The fact that they are taking us over a cliff edge into a new dark age is something they simply haven’t considered at all.

They have socially engineered risk and dissent out of society. For profit. Leaving behind a population that cannot think for itself, and has no need to. It’s told to blindly follow the rules.

Control system theory says that that, unlike the climate, is a deeply unstable situation.

Wake up, smell the coffee. AGW is simply another element in a tendency towards political control of everything, and the subjugation of the individual into the mass of society at large. No decision is to be taken by the individual, all is to be taken by centralised bureaucratic structures – such as the IPCC. The question is, is that a functional and effective way to structure society?

My contention is that it’s deeply dangerous. It introduces massive and laggy overall centralised feedback. Worse, it introduces a single point of failure. If central government breaks down or falters, people simply do not know what to do any more. No one has the skill or practice in making localised decisions any more.

The point is to see AGW and the whole greenspin machine as just an aspect of a particular stage in political and societal evolution, and understand it in those terms. Prior to the age of the telegraph and instantaneous communications, government had to be devolved – the lag was too great to pass the decisions back to central authority. Today we think we can, but there is another lag – bureaucratic lag. As well as bureaucratic incompetence.

System theory applied to political systems, gives a really scary prediction. We are on the point of almost total collapse, and we do not have the localised systems in place to replace centralised structures that are utterly dysfunctional. Sooner or later an externality is going to come along that will overwhelm the ability of centralized bureaucracy to deal with it, and it will fail. And nothing else will succeed, because people can no longer think for themselves.

Because they were lazy and let other people do the thinking for them. And paid them huge sums to do it, and accepted the results unquestioningly.

Nanotechnology

From Wikipedia, the free encyclopedia

Nanotechnology ("nanotech") is the manipulation of matter on an atomic, molecular, and supramolecular scale. The earliest, widespread description of nanotechnology[1][2] referred to the particular technological goal of precisely manipulating atoms and molecules for fabrication of macroscale products, also now referred to as molecular nanotechnology. A more generalized description of nanotechnology was subsequently established by the National Nanotechnology Initiative, which defines nanotechnology as the manipulation of matter with at least one dimension sized from 1 to 100 nanometers. This definition reflects the fact that quantum mechanical effects are important at this quantum-realm scale, and so the definition shifted from a particular technological goal to a research category inclusive of all types of research and technologies that deal with the special properties of matter that occur below the given size threshold. It is therefore common to see the plural form "nanotechnologies" as well as "nanoscale technologies" to refer to the broad range of research and applications whose common trait is size. Because of the variety of potential applications (including industrial and military), governments have invested billions of dollars in nanotechnology research. Through its National Nanotechnology Initiative, the USA has invested 3.7 billion dollars. The European Union has invested[when?] 1.2 billion and Japan 750 million dollars.[3]

Nanotechnology as defined by size is naturally very broad, including fields of science as diverse as surface science, organic chemistry, molecular biology, semiconductor physics, microfabrication, etc.[4] The associated research and applications are equally diverse, ranging from extensions of conventional device physics to completely new approaches based upon molecular self-assembly, from developing new materials with dimensions on the nanoscale to direct control of matter on the atomic scale.

Scientists currently debate the future implications of nanotechnology. Nanotechnology may be able to create many new materials and devices with a vast range of applications, such as in medicine, electronics, biomaterials, energy production, and consumer products. On the other hand, nanotechnology raises many of the same issues as any new technology, including concerns about the toxicity and environmental impact of nanomaterials,[5] and their potential effects on global economics, as well as speculation about various doomsday scenarios. These concerns have led to a debate among advocacy groups and governments on whether special regulation of nanotechnology is warranted.

Origins

The concepts that seeded nanotechnology were first discussed in 1959 by renowned physicist Richard Feynman in his talk There's Plenty of Room at the Bottom, in which he described the possibility of synthesis via direct manipulation of atoms. The term "nano-technology" was first used by Norio Taniguchi in 1974, though it was not widely known.

[Image: comparison of nanomaterial sizes.]

Inspired by Feynman's concepts, K. Eric Drexler used the term "nanotechnology" in his 1986 book Engines of Creation: The Coming Era of Nanotechnology, which proposed the idea of a nanoscale "assembler" which would be able to build a copy of itself and of other items of arbitrary complexity with atomic control. Also in 1986, Drexler co-founded The Foresight Institute (with which he is no longer affiliated) to help increase public awareness and understanding of nanotechnology concepts and implications.

Thus, emergence of nanotechnology as a field in the 1980s occurred through convergence of Drexler's theoretical and public work, which developed and popularized a conceptual framework for nanotechnology, and high-visibility experimental advances that drew additional wide-scale attention to the prospects of atomic control of matter. In the 1980s, two major breakthroughs sparked the growth of nanotechnology in the modern era.

First, the invention of the scanning tunneling microscope in 1981, which provided unprecedented visualization of individual atoms and bonds and was successfully used to manipulate individual atoms in 1989. The microscope's developers, Gerd Binnig and Heinrich Rohrer at IBM Zurich Research Laboratory, received the Nobel Prize in Physics in 1986.[6][7] Binnig, Quate and Gerber also invented the analogous atomic force microscope that year.
[Image: Buckminsterfullerene C60, also known as the buckyball, is a representative member of the carbon structures known as fullerenes. Members of the fullerene family are a major subject of research falling under the nanotechnology umbrella.]

Second, Fullerenes were discovered in 1985 by Harry Kroto, Richard Smalley, and Robert Curl, who together won the 1996 Nobel Prize in Chemistry.[8][9] C60 was not initially described as nanotechnology; the term was used regarding subsequent work with related graphene tubes (called carbon nanotubes and sometimes called Bucky tubes) which suggested potential applications for nanoscale electronics and devices.

In the early 2000s, the field garnered increased scientific, political, and commercial attention that led to both controversy and progress. Controversies emerged regarding the definitions and potential implications of nanotechnologies, exemplified by the Royal Society's report on nanotechnology.[10] Challenges were raised regarding the feasibility of applications envisioned by advocates of molecular nanotechnology, which culminated in a public debate between Drexler and Smalley in 2001 and 2003.[11]

Meanwhile, commercialization of products based on advancements in nanoscale technologies began emerging. These products are limited to bulk applications of nanomaterials and do not involve atomic control of matter. Some examples include the Silver Nano platform for using silver nanoparticles as an antibacterial agent, nanoparticle-based transparent sunscreens, and carbon nanotubes for stain-resistant textiles.[12][13]

Governments moved to promote and fund research into nanotechnology, beginning in the U.S. with the National Nanotechnology Initiative, which formalized a size-based definition of nanotechnology and established funding for research on the nanoscale.

By the mid-2000s new and serious scientific attention began to flourish. Projects emerged to produce nanotechnology roadmaps[14][15] which center on atomically precise manipulation of matter and discuss existing and projected capabilities, goals, and applications.

Fundamental concepts

Nanotechnology is the engineering of functional systems at the molecular scale. This covers both current work and concepts that are more advanced. In its original sense, nanotechnology refers to the projected ability to construct items from the bottom up, using techniques and tools being developed today to make complete, high performance products.

One nanometer (nm) is one billionth, or 10−9, of a meter. By comparison, typical carbon-carbon bond lengths, or the spacing between these atoms in a molecule, are in the range 0.12–0.15 nm, and a DNA double-helix has a diameter around 2 nm. On the other hand, the smallest cellular life-forms, the bacteria of the genus Mycoplasma, are around 200 nm in length. By convention, nanotechnology is taken as the scale range 1 to 100 nm following the definition used by the National Nanotechnology Initiative in the US. The lower limit is set by the size of atoms (hydrogen has the smallest atoms, which are approximately a quarter of a nm diameter) since nanotechnology must build its devices from atoms and molecules. The upper limit is more or less arbitrary but is around the size that phenomena not observed in larger structures start to become apparent and can be made use of in the nano device.[16] These new phenomena make nanotechnology distinct from devices which are merely miniaturised versions of an equivalent macroscopic device; such devices are on a larger scale and come under the description of microtechnology.[17]

To put that scale in another context, the comparative size of a nanometer to a meter is the same as that of a marble to the size of the earth.[18] Or another way of putting it: a nanometer is the amount an average man's beard grows in the time it takes him to raise the razor to his face.[18]
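
A quick arithmetic check of that comparison, using round, assumed figures for the marble and the Earth's mean diameter:

```python
# Illustrative check: the ratio of a nanometre to a metre versus a marble to the Earth.

nanometre = 1e-9            # m
metre = 1.0                 # m
marble_diameter = 0.013     # m, an assumed ~1.3 cm marble
earth_diameter = 1.2742e7   # m, approximate mean diameter of the Earth

print("nanometre / metre =", nanometre / metre)
print("marble / Earth    =", marble_diameter / earth_diameter)
```

Both ratios come out at roughly 1 × 10⁻⁹, which is the point of the analogy.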

Two main approaches are used in nanotechnology. In the "bottom-up" approach, materials and devices are built from molecular components which assemble themselves chemically by principles of molecular recognition. In the "top-down" approach, nano-objects are constructed from larger entities without atomic-level control.[19]

Areas of physics such as nanoelectronics, nanomechanics, nanophotonics and nanoionics have evolved during the last few decades to provide a basic scientific foundation of nanotechnology.

Larger to smaller: a materials perspective

[Image: reconstruction on a clean gold(100) surface, as visualized using scanning tunneling microscopy. The positions of the individual atoms composing the surface are visible.]

Several phenomena become pronounced as the size of the system decreases. These include statistical mechanical effects, as well as quantum mechanical effects, for example the “quantum size effect” where the electronic properties of solids are altered with great reductions in particle size. This effect does not come into play by going from macro to micro dimensions. However, quantum effects can become significant when the nanometer size range is reached, typically at distances of 100 nanometers or less, the so-called quantum realm. Additionally, a number of physical (mechanical, electrical, optical, etc.) properties change when compared to macroscopic systems. One example is the increase in surface area to volume ratio, altering the mechanical, thermal and catalytic properties of materials. Diffusion and reactions at the nanoscale, nanostructured materials and nanodevices with fast ion transport are generally referred to as nanoionics. The mechanical properties of nanosystems are of interest in nanomechanics research. The catalytic activity of nanomaterials also opens potential risks in their interaction with biomaterials.
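
The surface-area-to-volume point is easy to quantify: for a sphere the ratio is 3/r, so shrinking a particle from millimetre to nanometre radius multiplies the ratio by a factor of a million. A minimal sketch with illustrative radii:

```python
import math

# Surface-area-to-volume ratio of a sphere; it equals 3 / radius.

def surface_to_volume(radius_m):
    area = 4.0 * math.pi * radius_m ** 2
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return area / volume

for r in (1e-3, 1e-6, 1e-9):    # 1 mm, 1 um, 1 nm radius
    print(f"r = {r:.0e} m  ->  surface/volume = {surface_to_volume(r):.2e} per metre")
```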

Materials reduced to the nanoscale can show different properties compared to what they exhibit on a macroscale, enabling unique applications. For instance, opaque substances can become transparent (copper); stable materials can turn combustible (aluminum); insoluble materials may become soluble (gold). A material such as gold, which is chemically inert at normal scales, can serve as a potent chemical catalyst at nanoscales. Much of the fascination with nanotechnology stems from these quantum and surface phenomena that matter exhibits at the nanoscale.[20]

Simple to complex: a molecular perspective

Modern synthetic chemistry has reached the point where it is possible to prepare small molecules to almost any structure. These methods are used today to manufacture a wide variety of useful chemicals such as pharmaceuticals or commercial polymers. This ability raises the question of extending this kind of control to the next-larger level, seeking methods to assemble these single molecules into supramolecular assemblies consisting of many molecules arranged in a well defined manner.
These approaches utilize the concepts of molecular self-assembly and/or supramolecular chemistry to arrange molecules automatically into some useful conformation through a bottom-up approach. The concept of molecular recognition is especially important: molecules can be designed so that a specific configuration or arrangement is favored due to non-covalent intermolecular forces. The Watson–Crick basepairing rules are a direct result of this, as is the specificity of an enzyme being targeted to a single substrate, or the specific folding of the protein itself. Thus, two or more components can be designed to be complementary and mutually attractive so that they make a more complex and useful whole.

Such bottom-up approaches should be capable of producing devices in parallel and be much cheaper than top-down methods, but could potentially be overwhelmed as the size and complexity of the desired assembly increases. Most useful structures require complex and thermodynamically unlikely arrangements of atoms. Nevertheless, there are many examples of self-assembly based on molecular recognition in biology, most notably Watson–Crick basepairing and enzyme-substrate interactions. The challenge for nanotechnology is whether these principles can be used to engineer new constructs in addition to natural ones.

Molecular nanotechnology: a long-term view

Molecular nanotechnology, sometimes called molecular manufacturing, describes engineered nanosystems (nanoscale machines) operating on the molecular scale. Molecular nanotechnology is especially associated with the molecular assembler, a machine that can produce a desired structure or device atom-by-atom using the principles of mechanosynthesis. Manufacturing in the context of productive nanosystems is not related to, and should be clearly distinguished from, the conventional technologies used to manufacture nanomaterials such as carbon nanotubes and nanoparticles. When the term "nanotechnology" was independently coined and popularized by Eric Drexler (who at the time was unaware of an earlier usage by Norio Taniguchi) it referred to a future manufacturing technology based on molecular machine systems. The premise was that molecular scale biological analogies of traditional machine components demonstrated molecular machines were possible: by the countless examples found in biology, it is known that sophisticated, stochastically optimised biological machines can be produced.

It is hoped that developments in nanotechnology will make possible their construction by some other means, perhaps using biomimetic principles. However, Drexler and other researchers[21] have proposed that advanced nanotechnology, although perhaps initially implemented by biomimetic means, ultimately could be based on mechanical engineering principles, namely, a manufacturing technology based on the mechanical functionality of these components (such as gears, bearings, motors, and structural members) that would enable programmable, positional assembly to atomic specification.[22] The physics and engineering performance of exemplar designs were analyzed in Drexler's book Nanosystems.

In general it is very difficult to assemble devices on the atomic scale, as one has to position atoms on other atoms of comparable size and stickiness. Another view, put forth by Carlo Montemagno,[23] is that future nanosystems will be hybrids of silicon technology and biological molecular machines. Richard Smalley argued that mechanosynthesis is impossible due to the difficulties in mechanically manipulating individual molecules.

This led to an exchange of letters in the ACS publication Chemical & Engineering News in 2003.[24] Though biology clearly demonstrates that molecular machine systems are possible, non-biological molecular machines are today only in their infancy. Leaders in research on non-biological molecular machines are Dr. Alex Zettl and his colleagues at Lawrence Berkeley Laboratories and UC Berkeley. They have constructed at least three distinct molecular devices whose motion is controlled from the desktop with changing voltage: a nanotube nanomotor, a molecular actuator,[25] and a nanoelectromechanical relaxation oscillator.[26] See nanotube nanomotor for more examples.

An experiment indicating that positional molecular assembly is possible was performed by Ho and Lee at Cornell University in 1999. They used a scanning tunneling microscope to move an individual carbon monoxide molecule (CO) to an individual iron atom (Fe) sitting on a flat silver crystal, and chemically bound the CO to the Fe by applying a voltage.

Current research

[Image: graphical representation of a rotaxane, useful as a molecular switch.]
[Image: a DNA tetrahedron,[27] an artificially designed nanostructure of the type made in the field of DNA nanotechnology; each edge of the tetrahedron is a 20 base pair DNA double helix, and each vertex is a three-arm junction.]
[Image: a device that transfers energy from nano-thin layers of quantum wells to nanocrystals above them, causing the nanocrystals to emit visible light.[28]]

Nanomaterials

The nanomaterials field includes subfields which develop or study materials having unique properties arising from their nanoscale dimensions.[29]
  • Interface and colloid science has given rise to many materials which may be useful in nanotechnology, such as carbon nanotubes and other fullerenes, and various nanoparticles and nanorods. Nanomaterials with fast ion transport are related also to nanoionics and nanoelectronics.
  • Nanoscale materials can also be used for bulk applications; most present commercial applications of nanotechnology are of this flavor.
  • Progress has been made in using these materials for medical applications; see Nanomedicine.
  • Nanoscale materials such as nanopillars are sometimes used in solar cells, which combats the cost of traditional silicon solar cells.
  • Development of applications incorporating semiconductor nanoparticles to be used in the next generation of products, such as display technology, lighting, solar cells and biological imaging; see quantum dots.

Bottom-up approaches

These seek to arrange smaller components into more complex assemblies.
  • DNA nanotechnology utilizes the specificity of Watson–Crick basepairing to construct well-defined structures out of DNA and other nucleic acids.
  • Approaches from the field of "classical" chemical synthesis (Inorganic and organic synthesis) also aim at designing molecules with well-defined shape (e.g. bis-peptides[30]).
  • More generally, molecular self-assembly seeks to use concepts of supramolecular chemistry, and molecular recognition in particular, to cause single-molecule components to automatically arrange themselves into some useful conformation.
  • Atomic force microscope tips can be used as a nanoscale "write head" to deposit a chemical upon a surface in a desired pattern in a process called dip pen nanolithography. This technique fits into the larger subfield of nanolithography.

Top-down approaches

These seek to create smaller devices by using larger ones to direct their assembly.

Functional approaches

These seek to develop components of a desired functionality without regard to how they might be assembled.
  • Molecular scale electronics seeks to develop molecules with useful electronic properties. These could then be used as single-molecule components in a nanoelectronic device.[33] For an example see rotaxane.
  • Synthetic chemical methods can also be used to create synthetic molecular motors, such as in a so-called nanocar.

Biomimetic approaches

  • Bionics or biomimicry seeks to apply biological methods and systems found in nature, to the study and design of engineering systems and modern technology. Biomineralization is one example of the systems studied.

Speculative

These subfields seek to anticipate what inventions nanotechnology might yield, or attempt to propose an agenda along which inquiry might progress. These often take a big-picture view of nanotechnology, with more emphasis on its societal implications than the details of how such inventions could actually be created.
  • Molecular nanotechnology is a proposed approach which involves manipulating single molecules in finely controlled, deterministic ways. This is more theoretical than the other subfields, and many of its proposed techniques are beyond current capabilities.
  • Nanorobotics centers on self-sufficient machines of some functionality operating at the nanoscale. There are hopes for applying nanorobots in medicine,[36][37][38] but it may not be easy to do so because of several drawbacks of such devices.[39] Nevertheless, progress on innovative materials and methodologies has been demonstrated, with some patents granted for new nanomanufacturing devices for future commercial applications, which is also progressively helping the development of nanorobots through the use of embedded nanobioelectronics concepts.[40][41]
  • Productive nanosystems are "systems of nanosystems" which will be complex nanosystems that produce atomically precise parts for other nanosystems, not necessarily using novel nanoscale-emergent properties, but well-understood fundamentals of manufacturing. Because of the discrete (i.e. atomic) nature of matter and the possibility of exponential growth, this stage is seen as the basis of another industrial revolution. Mihail Roco, one of the architects of the USA's National Nanotechnology Initiative, has proposed four states of nanotechnology that seem to parallel the technical progress of the Industrial Revolution, progressing from passive nanostructures to active nanodevices to complex nanomachines and ultimately to productive nanosystems.[42]
  • Programmable matter seeks to design materials whose properties can be easily, reversibly and externally controlled through a fusion of information science and materials science.
  • Due to the popularity and media exposure of the term nanotechnology, the words picotechnology and femtotechnology have been coined in analogy to it, although these are only used rarely and informally.

Tools and techniques

[Image: typical AFM setup. A microfabricated cantilever with a sharp tip is deflected by features on a sample surface, much like in a phonograph but on a much smaller scale. A laser beam reflects off the backside of the cantilever into a set of photodetectors, allowing the deflection to be measured and assembled into an image of the surface.]

There are several important modern developments. The atomic force microscope (AFM) and the Scanning Tunneling Microscope (STM) are two early versions of scanning probes that launched nanotechnology. There are other types of scanning probe microscopy. Although conceptually similar to the scanning confocal microscope developed by Marvin Minsky in 1961 and the scanning acoustic microscope (SAM) developed by Calvin Quate and coworkers in the 1970s, newer scanning probe microscopes have much higher resolution, since they are not limited by the wavelength of sound or light.

The tip of a scanning probe can also be used to manipulate nanostructures (a process called positional assembly). Feature-oriented scanning methodology may be a promising way to implement these nanomanipulations in automatic mode.[43][44] However, this is still a slow process because of low scanning velocity of the microscope.

Various techniques of nanolithography, such as optical lithography, X-ray lithography, dip pen nanolithography, electron beam lithography and nanoimprint lithography, were also developed. Lithography is a top-down fabrication technique where a bulk material is reduced in size to a nanoscale pattern.

Another group of nanotechnological techniques include those used for fabrication of nanotubes and nanowires, those used in semiconductor fabrication such as deep ultraviolet lithography, electron beam lithography, focused ion beam machining, nanoimprint lithography, atomic layer deposition, and molecular vapor deposition, and further including molecular self-assembly techniques such as those employing di-block copolymers. The precursors of these techniques preceded the nanotech era, and are extensions in the development of scientific advancements rather than techniques which were devised with the sole purpose of creating nanotechnology and which were results of nanotechnology research.

The top-down approach anticipates nanodevices that must be built piece by piece in stages, much as manufactured items are made. Scanning probe microscopy is an important technique both for characterization and synthesis of nanomaterials. Atomic force microscopes and scanning tunneling microscopes can be used to look at surfaces and to move atoms around. By designing different tips for these microscopes, they can be used for carving out structures on surfaces and to help guide self-assembling structures. By using, for example, feature-oriented scanning approach, atoms or molecules can be moved around on a surface with scanning probe microscopy techniques.[43][44] At present, it is expensive and time-consuming for mass production but very suitable for laboratory experimentation.

In contrast, bottom-up techniques build or grow larger structures atom by atom or molecule by molecule. These techniques include chemical synthesis, self-assembly and positional assembly. Dual polarisation interferometry is one tool suitable for characterisation of self-assembled thin films. Another variation of the bottom-up approach is molecular beam epitaxy, or MBE. Researchers at Bell Telephone Laboratories such as John R. Arthur, Alfred Y. Cho, and Art C. Gossard developed and implemented MBE as a research tool in the late 1960s and 1970s. Samples made by MBE were key to the discovery of the fractional quantum Hall effect for which the 1998 Nobel Prize in Physics was awarded. MBE allows scientists to lay down atomically precise layers of atoms and, in the process, build up complex structures. Important for research on semiconductors, MBE is also widely used to make samples and devices for the newly emerging field of spintronics.

However, new therapeutic products based on responsive nanomaterials, such as the ultradeformable, stress-sensitive Transfersome vesicles, are under development and already approved for human use in some countries.

Applications

One of the major applications of nanotechnology is in the area of nanoelectronics, with MOSFETs being made of small nanowires ~10 nm in length. [Image: simulation of such a nanowire.]
[Image: nanostructures provide this surface with superhydrophobicity, which lets water droplets roll down the inclined plane.]

As of August 21, 2008, the Project on Emerging Nanotechnologies estimates that over 800 manufacturer-identified nanotech products are publicly available, with new ones hitting the market at a pace of 3–4 per week.[13] The project lists all of the products in a publicly accessible online database. Most applications are limited to the use of "first generation" passive nanomaterials which includes titanium dioxide in sunscreen, cosmetics, surface coatings,[45] and some food products; Carbon allotropes used to produce gecko tape; silver in food packaging, clothing, disinfectants and household appliances; zinc oxide in sunscreens and cosmetics, surface coatings, paints and outdoor furniture varnishes; and cerium oxide as a fuel catalyst.[12]

Further applications allow tennis balls to last longer, golf balls to fly straighter, and even bowling balls to become more durable and have a harder surface. Trousers and socks have been infused with nanotechnology so that they will last longer and keep people cool in the summer. Bandages are being infused with silver nanoparticles to heal cuts faster.[46] Video game consoles and personal computers may become cheaper, faster, and contain more memory thanks to nanotechnology.[47] Nanotechnology may have the ability to make existing medical applications cheaper and easier to use in places like the general practitioner's office and at home.[48] Cars are being manufactured with nanomaterials so they may need fewer metals and less fuel to operate in the future.[49]

Scientists are now turning to nanotechnology in an attempt to develop diesel engines with cleaner exhaust fumes. Platinum is currently used as the diesel engine catalyst in these engines. The catalyst is what cleans the exhaust fume particles. First a reduction catalyst is employed to take nitrogen atoms from NOx molecules in order to free oxygen. Next the oxidation catalyst oxidizes the hydrocarbons and carbon monoxide to form carbon dioxide and water.[50] Platinum is used in both the reduction and the oxidation catalysts.[51] Using platinum though, is inefficient in that it is expensive and unsustainable. Danish company InnovationsFonden invested DKK 15 million in a search for new catalyst substitutes using nanotechnology. The goal of the project, launched in the autumn of 2014, is to maximize surface area and minimize the amount of material required. Objects tend to minimize their surface energy; two drops of water, for example, will join to form one drop and decrease surface area. If the catalyst's surface area that is exposed to the exhaust fumes is maximized, efficiency of the catalyst is maximized. The team working on this project aims to create nanoparticles that will not merge together. Every time the surface is optimized, material is saved. Thus, creating these nanoparticles will increase the effectiveness of the resulting diesel engine catalyst—in turn leading to cleaner exhaust fumes—and will decrease cost. If successful, the team hopes to reduce platinum use by 25%.[52]
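
The geometric reasoning behind that goal is straightforward: at a fixed catalyst mass, the total exposed surface scales as 1/r when the metal is divided into spheres of radius r. The sketch below uses assumed, illustrative particle sizes and the bulk density of platinum to show the effect per gram; it is not a description of the project's actual design.

```python
import math

# Total surface area of one gram of platinum divided into spheres of radius r.
# Area per unit volume of a sphere is 3/r, so total area = 3 * V_total / r.

PLATINUM_DENSITY = 21450.0                 # kg/m^3 (bulk value)

def total_surface_area(total_volume_m3, particle_radius_m):
    particle_volume = (4.0 / 3.0) * math.pi * particle_radius_m ** 3
    n_particles = total_volume_m3 / particle_volume
    return n_particles * 4.0 * math.pi * particle_radius_m ** 2

one_gram_volume = 1e-3 / PLATINUM_DENSITY  # m^3 of metal in one gram
for r in (1e-6, 1e-8, 2e-9):               # 1 um, 10 nm, 2 nm particle radius
    area = total_surface_area(one_gram_volume, r)
    print(f"r = {r:.0e} m  ->  total surface = {area:8.2f} m^2 per gram")
```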

Nanotechnology also has a prominent role in the fast-developing field of tissue engineering. When designing scaffolds, researchers attempt to mimic the nanoscale features of a cell's microenvironment to direct its differentiation down a suitable lineage.[53] For example, when creating scaffolds to support the growth of bone, researchers may mimic osteoclast resorption pits.[54]

Researchers have successfully used DNA origami-based nanobots capable of carrying out logic functions to achieve targeted drug delivery in cockroaches. It is said that the computational power of these nanobots can be scaled up to that of a Commodore 64.[55]

Implications

An area of concern is the effect that industrial-scale manufacturing and use of nanomaterials would have on human health and the environment, as suggested by nanotoxicology research. For these reasons, some groups advocate that nanotechnology be regulated by governments. Others counter that overregulation would stifle scientific research and the development of beneficial innovations. Public health research agencies, such as the National Institute for Occupational Safety and Health are actively conducting research on potential health effects stemming from exposures to nanoparticles.[56][57]
Some nanoparticle products may have unintended consequences. Researchers have discovered that bacteriostatic silver nanoparticles used in socks to reduce foot odor are being released in the wash.[58] These particles are then flushed into the waste water stream and may destroy bacteria which are critical components of natural ecosystems, farms, and waste treatment processes.[59]

Public deliberations on risk perception in the US and UK carried out by the Center for Nanotechnology in Society found that participants were more positive about nanotechnologies for energy applications than for health applications, with health applications raising moral and ethical dilemmas such as cost and availability.[60]

Experts, including director of the Woodrow Wilson Center's Project on Emerging Nanotechnologies David Rejeski, have testified[61] that successful commercialization depends on adequate oversight, risk research strategy, and public engagement. Berkeley, California is currently the only city in the United States to regulate nanotechnology;[62] Cambridge, Massachusetts in 2008 considered enacting a similar law,[63] but ultimately rejected it.[64] Relevant for both research on and application of nanotechnologies, the insurability of nanotechnology is contested.[65] Without state regulation of nanotechnology, the availability of private insurance for potential damages is seen as necessary to ensure that burdens are not socialised implicitly.

Health and environmental concerns

Nanofibers are used in several areas and in different products, in everything from aircraft wings to tennis rackets. Inhaling airborne nanoparticles and nanofibers may lead to a number of pulmonary diseases, e.g. fibrosis.[66] Researchers have found that when rats breathed in nanoparticles, the particles settled in the brain and lungs, which led to significant increases in biomarkers for inflammation and stress response[67] and that nanoparticles induce skin aging through oxidative stress in hairless mice.[68][69]
A two-year study at UCLA's School of Public Health found lab mice consuming nano-titanium dioxide showed DNA and chromosome damage to a degree "linked to all the big killers of man, namely cancer, heart disease, neurological disease and aging".[70]

A major study published more recently in Nature Nanotechnology suggests some forms of carbon nanotubes – a poster child for the “nanotechnology revolution” – could be as harmful as asbestos if inhaled in sufficient quantities. Anthony Seaton of the Institute of Occupational Medicine in Edinburgh, Scotland, who contributed to the article on carbon nanotubes said "We know that some of them probably have the potential to cause mesothelioma. So those sorts of materials need to be handled very carefully."[71] In the absence of specific regulation forthcoming from governments, Paull and Lyons (2008) have called for an exclusion of engineered nanoparticles in food.[72] A newspaper article reports that workers in a paint factory developed serious lung disease and nanoparticles were found in their lungs.[73][74][75][76]

Regulation

Calls for tighter regulation of nanotechnology have occurred alongside a growing debate related to the human health and safety risks of nanotechnology.[77] There is significant debate about who is responsible for the regulation of nanotechnology. Although some regulatory agencies currently cover some nanotechnology products and processes (to varying degrees) – by “bolting on” nanotechnology to existing regulations – there are clear gaps in these regimes.[78] Davies (2008) has proposed a regulatory road map describing steps to deal with these shortcomings.[79]
Stakeholders concerned by the lack of a regulatory framework to assess and control risks associated with the release of nanoparticles and nanotubes have drawn parallels with bovine spongiform encephalopathy ("mad cow" disease), thalidomide, genetically modified food,[80] nuclear energy, reproductive technologies, biotechnology, and asbestosis. Dr. Andrew Maynard, chief science advisor to the Woodrow Wilson Center’s Project on Emerging Nanotechnologies, concludes that there is insufficient funding for human health and safety research, and as a result there is currently limited understanding of the human health and safety risks associated with nanotechnology.[81] As a result, some academics have called for stricter application of the precautionary principle, with delayed marketing approval, enhanced labelling and additional safety data development requirements in relation to certain forms of nanotechnology.[82][83]

The Royal Society report[10] identified a risk of nanoparticles or nanotubes being released during disposal, destruction and recycling, and recommended that “manufacturers of products that fall under extended producer responsibility regimes such as end-of-life regulations publish procedures outlining how these materials will be managed to minimize possible human and environmental exposure” (p. xiii). Reflecting the challenges for ensuring responsible life cycle regulation, the Institute for Food and Agricultural Standards has proposed that standards for nanotechnology research and development should be integrated across consumer, worker and environmental standards. They also propose that NGOs and other citizen groups play a meaningful role in the development of these standards.

The Center for Nanotechnology in Society has found that people respond to nanotechnologies differently, depending on application – with participants in public deliberations more positive about nanotechnologies for energy than health applications – suggesting that any public calls for nano regulations may differ by technology sector.[60]

Nanoinnovation

Nanoinnovation is the implementation of nanoscale discoveries and inventions, including new technologies and applications that involve nanoscale structures and processes. Cutting-edge innovations in nanotechnology include 2D materials that are one atom thick, such as graphene (carbon), silicene (silicon) and stanene (tin). Many products we're familiar with are nano-enabled, such as smartphones, large-screen television sets, solar cells, and batteries, to name a few examples. Nanocircuits and nanomaterials are creating a new generation of wearable computers and a wide variety of sensors. Many nanoinnovations borrow ideas from nature (biomimicry), such as a new type of dry adhesive called Geckskin(tm), which recreates the nanostructures of a gecko lizard's footpads. In the field of nanomedicine, virtually all innovations involving viruses are nanoinnovations, since most viruses are nanoscale in size. Dozens of examples of nanoinnovations are included in the 2014 book NanoInnovation: What Every Manager Needs to Know by Michael Tomczyk (Wiley, 2014).
