Friday, August 15, 2014

Nanotechnology

From Wikipedia, the free encyclopedia
 
Nanotechnology (sometimes shortened to "nanotech") is the manipulation of matter on an atomic, molecular, and supramolecular scale. The earliest, widespread description of nanotechnology[1][2] referred to the particular technological goal of precisely manipulating atoms and molecules for fabrication of macroscale products, an approach now referred to as molecular nanotechnology. A more generalized description of nanotechnology was subsequently established by the National Nanotechnology Initiative, which defines nanotechnology as the manipulation of matter with at least one dimension sized from 1 to 100 nanometers. This definition reflects the fact that quantum mechanical effects are important at this quantum-realm scale, and so the definition shifted from a particular technological goal to a research category inclusive of all types of research and technologies that deal with the special properties of matter that occur below the given size threshold. It is therefore common to see the plural form "nanotechnologies" as well as "nanoscale technologies" used to refer to the broad range of research and applications whose common trait is size. Because of the variety of potential applications (including industrial and military), governments have invested billions of dollars in nanotechnology research: through its National Nanotechnology Initiative, the USA has invested 3.7 billion dollars, the European Union 1.2 billion, and Japan 750 million dollars.[3]

Nanotechnology as defined by size is naturally very broad, including fields of science as diverse as surface science, organic chemistry, molecular biology, semiconductor physics, microfabrication, etc.[4] The associated research and applications are equally diverse, ranging from extensions of conventional device physics to completely new approaches based upon molecular self-assembly, from developing new materials with dimensions on the nanoscale to direct control of matter on the atomic scale.

Scientists currently debate the future implications of nanotechnology. Nanotechnology may be able to create many new materials and devices with a vast range of applications, such as in medicine, electronics, biomaterials and energy production. On the other hand, nanotechnology raises many of the same issues as any new technology, including concerns about the toxicity and environmental impact of nanomaterials,[5] and their potential effects on global economics, as well as speculation about various doomsday scenarios. These concerns have led to a debate among advocacy groups and governments on whether special regulation of nanotechnology is warranted.

Origins

The concepts that seeded nanotechnology were first discussed in 1959 by renowned physicist Richard Feynman in his talk There's Plenty of Room at the Bottom, in which he described the possibility of synthesis via direct manipulation of atoms. The term "nano-technology" was first used by Norio Taniguchi in 1974, though it was not widely known.

Comparison of Nanomaterials Sizes

Inspired by Feynman's concepts, K. Eric Drexler independently used the term "nanotechnology" in his 1986 book Engines of Creation: The Coming Era of Nanotechnology, which proposed the idea of a nanoscale "assembler" which would be able to build a copy of itself and of other items of arbitrary complexity with atomic control. Also in 1986, Drexler co-founded The Foresight Institute (with which he is no longer affiliated) to help increase public awareness and understanding of nanotechnology concepts and implications.

Thus, emergence of nanotechnology as a field in the 1980s occurred through convergence of Drexler's theoretical and public work, which developed and popularized a conceptual framework for nanotechnology, and high-visibility experimental advances that drew additional wide-scale attention to the prospects of atomic control of matter.

For example, the invention of the scanning tunneling microscope in 1981 provided unprecedented visualization of individual atoms and bonds, and was successfully used to manipulate individual atoms in 1989. The microscope's developers Gerd Binnig and Heinrich Rohrer at IBM Zurich Research Laboratory received a Nobel Prize in Physics in 1986.[6][7] Binnig, Quate and Gerber also invented the analogous atomic force microscope that year.

Buckminsterfullerene C60, also known as the buckyball, is a representative member of the carbon structures known as fullerenes. Members of the fullerene family are a major subject of research falling under the nanotechnology umbrella.

Fullerenes were discovered in 1985 by Harry Kroto, Richard Smalley, and Robert Curl, who together won the 1996 Nobel Prize in Chemistry.[8][9] C60 was not initially described as nanotechnology; the term was used regarding subsequent work with related graphene tubes (called carbon nanotubes and sometimes called Bucky tubes) which suggested potential applications for nanoscale electronics and devices.

In the early 2000s, the field garnered increased scientific, political, and commercial attention that led to both controversy and progress. Controversies emerged regarding the definitions and potential implications of nanotechnologies, exemplified by the Royal Society's report on nanotechnology.[10]
Challenges were raised regarding the feasibility of applications envisioned by advocates of molecular nanotechnology, which culminated in a public debate between Drexler and Smalley in 2001 and 2003.[11]

Meanwhile, commercialization of products based on advancements in nanoscale technologies began emerging. These products are limited to bulk applications of nanomaterials and do not involve atomic control of matter. Some examples include the Silver Nano platform for using silver nanoparticles as an antibacterial agent, nanoparticle-based transparent sunscreens, and carbon nanotubes for stain-resistant textiles.[12][13]

Governments moved to promote and fund research into nanotechnology, beginning in the U.S. with the National Nanotechnology Initiative, which formalized a size-based definition of nanotechnology and established funding for research on the nanoscale.

By the mid-2000s new and serious scientific attention began to flourish. Projects emerged to produce nanotechnology roadmaps[14][15] which center on atomically precise manipulation of matter and discuss existing and projected capabilities, goals, and applications.

Fundamental concepts

Nanotechnology is the engineering of functional systems at the molecular scale. This covers both current work and concepts that are more advanced. In its original sense, nanotechnology refers to the projected ability to construct items from the bottom up, using techniques and tools being developed today to make complete, high performance products.

One nanometer (nm) is one billionth, or 10⁻⁹, of a meter. By comparison, typical carbon-carbon bond lengths, or the spacing between these atoms in a molecule, are in the range 0.12–0.15 nm, and a DNA double helix has a diameter of around 2 nm. On the other hand, the smallest cellular life-forms, the bacteria of the genus Mycoplasma, are around 200 nm in length. By convention, nanotechnology is taken as the scale range 1 to 100 nm, following the definition used by the National Nanotechnology Initiative in the US. The lower limit is set by the size of atoms (hydrogen has the smallest atoms, approximately a quarter of a nanometer in diameter), since nanotechnology must build its devices from atoms and molecules. The upper limit is more or less arbitrary, but it is around the size at which phenomena not observed in larger structures start to become apparent and can be exploited in a nanodevice.[16] These new phenomena make nanotechnology distinct from devices which are merely miniaturised versions of an equivalent macroscopic device; such devices are on a larger scale and come under the description of microtechnology.[17]

To put that scale in another context, a nanometer is to a meter what a marble is to the size of the Earth.[18] Or another way of putting it: a nanometer is the amount an average man's beard grows in the time it takes him to raise the razor to his face.[18]
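As a quick arithmetic check of that analogy, the two ratios do come out close. The sketch below uses assumed round numbers for the marble and the Earth rather than figures from the cited source:

    # Rough scale check with assumed round numbers (not from the cited source):
    # is a marble to the Earth roughly what a nanometer is to a meter?
    marble_diameter_m = 0.01      # ~1 cm marble
    earth_diameter_m = 1.27e7     # ~12,700 km Earth
    print(marble_diameter_m / earth_diameter_m)  # about 7.9e-10
    print(1e-9 / 1.0)                            # a nanometer relative to a meter: 1e-9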

Two main approaches are used in nanotechnology. In the "bottom-up" approach, materials and devices are built from molecular components which assemble themselves chemically by principles of molecular recognition. In the "top-down" approach, nano-objects are constructed from larger entities without atomic-level control.[19]

Areas of physics such as nanoelectronics, nanomechanics, nanophotonics and nanoionics have evolved during the last few decades to provide a basic scientific foundation of nanotechnology.

Larger to smaller: a materials perspective


Image of reconstruction on a clean Gold(100) surface, as visualized using scanning tunneling microscopy. The positions of the individual atoms composing the surface are visible.

Several phenomena become pronounced as the size of the system decreases. These include statistical mechanical effects, as well as quantum mechanical effects, for example the “quantum size effect” where the electronic properties of solids are altered with great reductions in particle size. This effect does not come into play by going from macro to micro dimensions. However, quantum effects can become significant when the nanometer size range is reached, typically at distances of 100 nanometers or less, the so-called quantum realm. Additionally, a number of physical (mechanical, electrical, optical, etc.) properties change when compared to macroscopic systems. One example is the increase in surface area to volume ratio, which alters the mechanical, thermal and catalytic properties of materials. Diffusion and reactions at the nanoscale, nanostructured materials, and nanodevices with fast ion transport are generally referred to as nanoionics. Mechanical properties of nanosystems are of interest in nanomechanics research. The catalytic activity of nanomaterials also opens potential risks in their interaction with biomaterials.
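To make the surface-area-to-volume point concrete, the sketch below treats an idealized spherical particle, for which the ratio works out to 3/r; the 10 nm and 1 cm radii are arbitrary illustrative values, not figures from the cited literature.

    # Surface-area-to-volume ratio of a sphere: (4*pi*r^2) / ((4/3)*pi*r^3) = 3/r
    def surface_to_volume_ratio(radius_m):
        return 3.0 / radius_m

    nano_particle_r = 10e-9   # 10 nm radius (illustrative value)
    macro_bead_r = 0.01       # 1 cm radius (illustrative value)

    # Shrinking from 1 cm to 10 nm raises the ratio a million-fold, which is why
    # thermal, mechanical and catalytic behavior can change so sharply at the nanoscale.
    print(surface_to_volume_ratio(nano_particle_r) / surface_to_volume_ratio(macro_bead_r))  # ~1e6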

Materials reduced to the nanoscale can show different properties compared to what they exhibit on a macroscale, enabling unique applications. For instance, opaque substances can become transparent (copper); stable materials can turn combustible (aluminum); insoluble materials may become soluble (gold). A material such as gold, which is chemically inert at normal scales, can serve as a potent chemical catalyst at nanoscales. Much of the fascination with nanotechnology stems from these quantum and surface phenomena that matter exhibits at the nanoscale.[20]

Simple to complex: a molecular perspective

Modern synthetic chemistry has reached the point where it is possible to prepare small molecules of almost any structure. These methods are used today to manufacture a wide variety of useful chemicals such as pharmaceuticals or commercial polymers. This ability raises the question of extending this kind of control to the next-larger level, seeking methods to assemble these single molecules into supramolecular assemblies consisting of many molecules arranged in a well-defined manner.

These approaches utilize the concepts of molecular self-assembly and/or supramolecular chemistry so that components automatically arrange themselves into some useful conformation through a bottom-up approach. The concept of molecular recognition is especially important: molecules can be designed so that a specific configuration or arrangement is favored due to non-covalent intermolecular forces. The Watson–Crick basepairing rules are a direct result of this, as is the specificity of an enzyme being targeted to a single substrate, or the specific folding of the protein itself. Thus, two or more components can be designed to be complementary and mutually attractive so that they make a more complex and useful whole.

Such bottom-up approaches should be capable of producing devices in parallel and be much cheaper than top-down methods, but could potentially be overwhelmed as the size and complexity of the desired assembly increases. Most useful structures require complex and thermodynamically unlikely arrangements of atoms. Nevertheless, there are many examples of self-assembly based on molecular recognition in biology, most notably Watson–Crick basepairing and enzyme-substrate interactions. The challenge for nanotechnology is whether these principles can be used to engineer new constructs in addition to natural ones.

Molecular nanotechnology: a long-term view

Molecular nanotechnology, sometimes called molecular manufacturing, describes engineered nanosystems (nanoscale machines) operating on the molecular scale. Molecular nanotechnology is especially associated with the molecular assembler, a machine that can produce a desired structure or device atom-by-atom using the principles of mechanosynthesis. Manufacturing in the context of productive nanosystems is not related to, and should be clearly distinguished from, the conventional technologies used to manufacture nanomaterials such as carbon nanotubes and nanoparticles.
When the term "nanotechnology" was independently coined and popularized by Eric Drexler (who at the time was unaware of an earlier usage by Norio Taniguchi), it referred to a future manufacturing technology based on molecular machine systems. The premise was that molecular-scale biological analogies of traditional machine components demonstrated that molecular machines were possible: the countless examples found in biology show that sophisticated, stochastically optimised biological machines can be produced.

It is hoped that developments in nanotechnology will make possible their construction by some other means, perhaps using biomimetic principles. However, Drexler and other researchers[21] have proposed that advanced nanotechnology, although perhaps initially implemented by biomimetic means, ultimately could be based on mechanical engineering principles, namely, a manufacturing technology based on the mechanical functionality of these components (such as gears, bearings, motors, and structural members) that would enable programmable, positional assembly to atomic specification.[22] The physics and engineering performance of exemplar designs were analyzed in Drexler's book Nanosystems.

In general it is very difficult to assemble devices on the atomic scale, as one has to position atoms on other atoms of comparable size and stickiness. Another view, put forth by Carlo Montemagno,[23] is that future nanosystems will be hybrids of silicon technology and biological molecular machines. Richard Smalley argued that mechanosynthesis is impossible due to the difficulties in mechanically manipulating individual molecules.

This led to an exchange of letters in the ACS publication Chemical & Engineering News in 2003.[24] Though biology clearly demonstrates that molecular machine systems are possible, non-biological molecular machines are today only in their infancy. Leaders in research on non-biological molecular machines are Dr. Alex Zettl and his colleagues at Lawrence Berkeley Laboratories and UC Berkeley. They have constructed at least three distinct molecular devices whose motion is controlled from the desktop by changing voltage: a nanotube nanomotor, a molecular actuator,[25] and a nanoelectromechanical relaxation oscillator.[26] See nanotube nanomotor for more examples.
An experiment indicating that positional molecular assembly is possible was performed by Ho and Lee at Cornell University in 1999. They used a scanning tunneling microscope to move an individual carbon monoxide molecule (CO) to an individual iron atom (Fe) sitting on a flat silver crystal, and chemically bound the CO to the Fe by applying a voltage.

Current research


Graphical representation of a rotaxane, useful as a molecular switch.

This DNA tetrahedron[27] is an artificially designed nanostructure of the type made in the field of DNA nanotechnology. Each edge of the tetrahedron is a 20 base pair DNA double helix, and each vertex is a three-arm junction.

This device transfers energy from nano-thin layers of quantum wells to nanocrystals above them, causing the nanocrystals to emit visible light.[28]

Nanomaterials

The nanomaterials field includes subfields which develop or study materials having unique properties arising from their nanoscale dimensions.[29]
  • Interface and colloid science has given rise to many materials which may be useful in nanotechnology, such as carbon nanotubes and other fullerenes, and various nanoparticles and nanorods. Nanomaterials with fast ion transport are also related to nanoionics and nanoelectronics.
  • Nanoscale materials can also be used for bulk applications; most present commercial applications of nanotechnology are of this flavor.
  • Progress has been made in using these materials for medical applications; see Nanomedicine.
  • Nanoscale materials such as nanopillars are sometimes used in solar cells, which helps reduce the cost of traditional silicon solar cells.
  • Semiconductor nanoparticles are being developed for use in the next generation of products, such as display technology, lighting, solar cells and biological imaging; see quantum dots.

Bottom-up approaches

These seek to arrange smaller components into more complex assemblies.
  • DNA nanotechnology utilizes the specificity of Watson–Crick basepairing to construct well-defined structures out of DNA and other nucleic acids.
  • Approaches from the field of "classical" chemical synthesis (inorganic and organic synthesis) also aim at designing molecules with well-defined shape (e.g. bis-peptides[30]).
  • More generally, molecular self-assembly seeks to use concepts of supramolecular chemistry, and molecular recognition in particular, to cause single-molecule components to automatically arrange themselves into some useful conformation.
  • Atomic force microscope tips can be used as a nanoscale "write head" to deposit a chemical upon a surface in a desired pattern in a process called dip pen nanolithography. This technique fits into the larger subfield of nanolithography.

Top-down approaches

These seek to create smaller devices by using larger ones to direct their assembly.

Functional approaches

These seek to develop components of a desired functionality without regard to how they might be assembled.
  • Molecular scale electronics seeks to develop molecules with useful electronic properties. These could then be used as single-molecule components in a nanoelectronic device.[33] For an example see rotaxane.
  • Synthetic chemical methods can also be used to create synthetic molecular motors, such as in a so-called nanocar.

Biomimetic approaches

  • Bionics or biomimicry seeks to apply biological methods and systems found in nature to the study and design of engineering systems and modern technology. Biomineralization is one example of the systems studied.

Speculative

These subfields seek to anticipate what inventions nanotechnology might yield, or attempt to propose an agenda along which inquiry might progress. These often take a big-picture view of nanotechnology, with more emphasis on its societal implications than the details of how such inventions could actually be created.
  • Molecular nanotechnology is a proposed approach which involves manipulating single molecules in finely controlled, deterministic ways. This is more theoretical than the other subfields, and many of its proposed techniques are beyond current capabilities.
  • Nanorobotics centers on self-sufficient machines of some functionality operating at the nanoscale. There are hopes for applying nanorobots in medicine,[36][37][38] but it may not be easy to do such a thing because of several drawbacks of such devices.[39] Nevertheless, progress on innovative materials and methodologies has been demonstrated, and some patents have been granted for new nanomanufacturing devices aimed at future commercial applications, which also progressively aids the development of nanorobots that use embedded nanobioelectronics concepts.[40][41]
  • Productive nanosystems are "systems of nanosystems" which will be complex nanosystems that produce atomically precise parts for other nanosystems, not necessarily using novel nanoscale-emergent properties, but well-understood fundamentals of manufacturing. Because of the discrete (i.e. atomic) nature of matter and the possibility of exponential growth, this stage is seen as the basis of another industrial revolution. Mihail Roco, one of the architects of the USA's National Nanotechnology Initiative, has proposed four states of nanotechnology that seem to parallel the technical progress of the Industrial Revolution, progressing from passive nanostructures to active nanodevices to complex nanomachines and ultimately to productive nanosystems.[42]
  • Programmable matter seeks to design materials whose properties can be easily, reversibly and externally controlled through a fusion of information science and materials science.
  • Due to the popularity and media exposure of the term nanotechnology, the words picotechnology and femtotechnology have been coined in analogy to it, although these are only used rarely and informally.

Tools and techniques


Typical AFM setup. A microfabricated cantilever with a sharp tip is deflected by features on a sample surface, much like in a phonograph but on a much smaller scale. A laser beam reflects off the backside of the cantilever into a set of photodetectors, allowing the deflection to be measured and assembled into an image of the surface.

There are several important modern developments. The atomic force microscope (AFM) and the scanning tunneling microscope (STM) are two early versions of scanning probes that launched nanotechnology. There are other types of scanning probe microscopy. Although conceptually similar to the scanning confocal microscope developed by Marvin Minsky in 1961 and the scanning acoustic microscope (SAM) developed by Calvin Quate and coworkers in the 1970s, newer scanning probe microscopes have much higher resolution, since they are not limited by the wavelength of sound or light.

The tip of a scanning probe can also be used to manipulate nanostructures (a process called positional assembly). Feature-oriented scanning methodology suggested by Rostislav Lapshin appears to be a promising way to implement these nanomanipulations in automatic mode.[43][44] However, this is still a slow process because of the low scanning velocity of the microscope.

Various techniques of nanolithography such as optical lithography, X-ray lithography, dip pen nanolithography, electron beam lithography and nanoimprint lithography were also developed. Lithography is a top-down fabrication technique where a bulk material is reduced in size to a nanoscale pattern.

Another group of nanotechnological techniques includes those used for fabrication of nanotubes and nanowires, those used in semiconductor fabrication such as deep ultraviolet lithography, electron beam lithography, focused ion beam machining, nanoimprint lithography, atomic layer deposition, and molecular vapor deposition, and further including molecular self-assembly techniques such as those employing di-block copolymers. The precursors of these techniques preceded the nanotech era; they are extensions of earlier scientific advances rather than techniques devised solely for the purpose of creating nanotechnology or resulting from nanotechnology research.

The top-down approach anticipates nanodevices that must be built piece by piece in stages, much as manufactured items are made. Scanning probe microscopy is an important technique both for characterization and synthesis of nanomaterials. Atomic force microscopes and scanning tunneling microscopes can be used to look at surfaces and to move atoms around. By designing different tips for these microscopes, researchers can use them to carve out structures on surfaces and to help guide self-assembling structures. By using, for example, the feature-oriented scanning approach, atoms or molecules can be moved around on a surface with scanning probe microscopy techniques.[43][44] At present, this is expensive and time-consuming for mass production but very suitable for laboratory experimentation.

In contrast, bottom-up techniques build or grow larger structures atom by atom or molecule by molecule. These techniques include chemical synthesis, self-assembly and positional assembly. Dual polarisation interferometry is one tool suitable for characterisation of self-assembled thin films. Another variation of the bottom-up approach is molecular beam epitaxy, or MBE. Researchers at Bell Telephone Laboratories, including John R. Arthur, Alfred Y. Cho, and Art C. Gossard, developed and implemented MBE as a research tool in the late 1960s and 1970s. Samples made by MBE were key to the discovery of the fractional quantum Hall effect, for which the 1998 Nobel Prize in Physics was awarded. MBE allows scientists to lay down atomically precise layers of atoms and, in the process, build up complex structures. Important for research on semiconductors, MBE is also widely used to make samples and devices for the newly emerging field of spintronics.

New therapeutic products based on responsive nanomaterials, such as the ultradeformable, stress-sensitive Transfersome vesicles, are under development and already approved for human use in some countries.

Applications


One of the major applications of nanotechnology is in the area of nanoelectronics, with MOSFETs being made of small nanowires ~10 nm in length. Here is a simulation of such a nanowire.
Nanostructures provide this surface with superhydrophobicity, which lets water droplets roll down the inclined plane.

As of August 21, 2008, the Project on Emerging Nanotechnologies estimates that over 800 manufacturer-identified nanotech products are publicly available, with new ones hitting the market at a pace of 3–4 per week.[13] The project lists all of the products in a publicly accessible online database. Most applications are limited to the use of "first generation" passive nanomaterials, which include titanium dioxide in sunscreen, cosmetics, surface coatings,[45] and some food products; carbon allotropes used to produce gecko tape; silver in food packaging, clothing, disinfectants and household appliances; zinc oxide in sunscreens and cosmetics, surface coatings, paints and outdoor furniture varnishes; and cerium oxide as a fuel catalyst.[12]

Further applications allow tennis balls to last longer, golf balls to fly straighter, and even bowling balls to become more durable and have a harder surface. Trousers and socks have been infused with nanotechnology so that they will last longer and keep people cool in the summer. Bandages are being infused with silver nanoparticles to heal cuts faster.[46] Cars are being manufactured with nanomaterials so they may need fewer metals and less fuel to operate in the future.[47] Video game consoles and personal computers may become cheaper, faster, and contain more memory thanks to nanotechnology.[48] Nanotechnology may have the ability to make existing medical applications cheaper and easier to use in places like the general practitioner's office and at home.[49]

The National Science Foundation (a major funder of nanotechnology research in the United States) funded researcher David Berube to study the field of nanotechnology. His findings are published in the monograph Nano-Hype: The Truth Behind the Nanotechnology Buzz. This study concludes that much of what is sold as “nanotechnology” is in fact a recasting of straightforward materials science, which is leading to a “nanotech industry built solely on selling nanotubes, nanowires, and the like” which will “end up with a few suppliers selling low margin products in huge volumes." Further applications which require actual manipulation or arrangement of nanoscale components await further research. Though technologies branded with the term 'nano' are sometimes little related to and fall far short of the most ambitious and transformative technological goals of the sort in molecular manufacturing proposals, the term still connotes such ideas. According to Berube, there may be a danger that a "nano bubble" will form, or is forming already, from the use of the term by scientists and entrepreneurs to garner funding, regardless of interest in the transformative possibilities of more ambitious and far-sighted work.[50]

Researchers have successfully used DNA origami-based nanobots capable of carrying out logic functions to achieve targeted drug delivery in cockroaches. It is said that the computational power of these nanobots can be scaled up to that of a Commodore 64.[51]

Implications

An area of concern is the effect that industrial-scale manufacturing and use of nanomaterials would have on human health and the environment, as suggested by nanotoxicology research. For these reasons, some groups advocate that nanotechnology be regulated by governments. Others counter that overregulation would stifle scientific research and the development of beneficial innovations. Public health research agencies, such as the National Institute for Occupational Safety and Health are actively conducting research on potential health effects stemming from exposures to nanoparticles.[52][53]

Some nanoparticle products may have unintended consequences. Researchers have discovered that bacteriostatic silver nanoparticles used in socks to reduce foot odor are being released in the wash.[54] These particles are then flushed into the waste water stream and may destroy bacteria which are critical components of natural ecosystems, farms, and waste treatment processes.[55]

Public deliberations on risk perception in the US and UK carried out by the Center for Nanotechnology in Society found that participants were more positive about nanotechnologies for energy applications than for health applications, with health applications raising moral and ethical dilemmas such as cost and availability.[56]

Experts, including director of the Woodrow Wilson Center's Project on Emerging Nanotechnologies David Rejeski, have testified[57] that successful commercialization depends on adequate oversight, risk research strategy, and public engagement. Berkeley, California is currently the only city in the United States to regulate nanotechnology;[58] Cambridge, Massachusetts in 2008 considered enacting a similar law,[59] but ultimately rejected it.[60] Relevant for both research on and application of nanotechnologies, the insurability of nanotechnology is contested.[61] Without state regulation of nanotechnology, the availability of private insurance for potential damages is seen as necessary to ensure that burdens are not socialised implicitly.

Health and environmental concerns

Nanofibers are used in several areas and in different products, in everything from aircraft wings to tennis rackets. Inhaling airborne nanoparticles and nanofibers may lead to a number of pulmonary diseases, e.g. fibrosis.[62] Researchers have found that when rats breathed in nanoparticles, the particles settled in the brain and lungs, which led to significant increases in biomarkers for inflammation and stress response[63] and that nanoparticles induce skin aging through oxidative stress in hairless mice.[64][65]

A two-year study at UCLA's School of Public Health found lab mice consuming nano-titanium dioxide showed DNA and chromosome damage to a degree "linked to all the big killers of man, namely cancer, heart disease, neurological disease and aging".[66]

A major study published more recently in Nature Nanotechnology suggests some forms of carbon nanotubes – a poster child for the “nanotechnology revolution” – could be as harmful as asbestos if inhaled in sufficient quantities. Anthony Seaton of the Institute of Occupational Medicine in Edinburgh, Scotland, who contributed to the article on carbon nanotubes, said "We know that some of them probably have the potential to cause mesothelioma. So those sorts of materials need to be handled very carefully."[67] In the absence of specific regulation forthcoming from governments, Paull and Lyons (2008) have called for an exclusion of engineered nanoparticles in food.[68] A newspaper article reports that workers in a paint factory developed serious lung disease and nanoparticles were found in their lungs.[69][70][71][72]

Regulation

Calls for tighter regulation of nanotechnology have occurred alongside a growing debate related to the human health and safety risks of nanotechnology.[73] There is significant debate about who is responsible for the regulation of nanotechnology. While some regulatory agencies currently cover some nanotechnology products and processes (to varying degrees) by “bolting on” nanotechnology to existing regulations, there are clear gaps in these regimes.[74] Davies (2008) has proposed a regulatory road map describing steps to deal with these shortcomings.[75]

Stakeholders concerned by the lack of a regulatory framework to assess and control risks associated with the release of nanoparticles and nanotubes have drawn parallels with bovine spongiform encephalopathy ("mad cow" disease), thalidomide, genetically modified food,[76] nuclear energy, reproductive technologies, biotechnology, and asbestosis. Dr. Andrew Maynard, chief science advisor to the Woodrow Wilson Center’s Project on Emerging Nanotechnologies, concludes that there is insufficient funding for human health and safety research, and as a result there is currently limited understanding of the human health and safety risks associated with nanotechnology.[77] As a result, some academics have called for stricter application of the precautionary principle, with delayed marketing approval, enhanced labelling and additional safety data development requirements in relation to certain forms of nanotechnology.[78]

The Royal Society report[10] identified a risk of nanoparticles or nanotubes being released during disposal, destruction and recycling, and recommended that “manufacturers of products that fall under extended producer responsibility regimes such as end-of-life regulations publish procedures outlining how these materials will be managed to minimize possible human and environmental exposure” (p. xiii). Reflecting the challenges for ensuring responsible life cycle regulation, the Institute for Food and Agricultural Standards has proposed that standards for nanotechnology research and development should be integrated across consumer, worker and environmental standards. They also propose that NGOs and other citizen groups play a meaningful role in the development of these standards.

The Center for Nanotechnology in Society has found that people respond to nanotechnologies differently, depending on application – with participants in public deliberations more positive about nanotechnologies for energy than health applications – suggesting that any public calls for nano regulations may differ by technology sector.[56]

Black hole

From Wikipedia, the free encyclopedia
 
Simulated view of a black hole (center) in front of the Large Magellanic Cloud. Note the gravitational lensing effect, which produces two enlarged but highly distorted views of the Cloud. Across the top, the Milky Way disk appears distorted into an arc.

A black hole is a region of spacetime from which gravity prevents anything, including light, from escaping.[1] The theory of general relativity predicts that a sufficiently compact mass will deform spacetime to form a black hole.[2] The boundary of the region from which no escape is possible is called the event horizon. Although crossing the event horizon has enormous effect on the fate of the object crossing it, it appears to have no locally detectable features. In many ways a black hole acts like an ideal black body, as it reflects no light.[3][4] Moreover, quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is on the order of billionths of a kelvin for black holes of stellar mass, making it all but impossible to observe.
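The inverse relation between temperature and mass follows from the standard Hawking temperature of a Schwarzschild (non-rotating, uncharged) black hole; the solar-mass figure below is a textbook order-of-magnitude estimate rather than a value from the cited references:

    T_H = \frac{\hbar c^3}{8 \pi G M k_B} \approx 6 \times 10^{-8}\ \mathrm{K} \times \frac{M_\odot}{M}

so a black hole of a few solar masses sits at a few tens of billionths of a kelvin, far colder than the 2.7 K cosmic microwave background.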

Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. The first modern solution of general relativity that would characterize a black hole was found by Karl Schwarzschild in 1916, although its interpretation as a region of space from which nothing can escape was first published by David Finkelstein in 1958. Long considered a mathematical curiosity, it was during the 1960s that theoretical work showed black holes were a generic prediction of general relativity. The discovery of neutron stars sparked interest in gravitationally collapsed compact objects as a possible astrophysical reality.

Black holes of stellar mass are expected to form when very massive stars collapse at the end of their life cycle. After a black hole has formed it can continue to grow by absorbing mass from its surroundings. By absorbing other stars and merging with other black holes, supermassive black holes of millions of solar masses may form. There is general consensus that supermassive black holes exist in the centers of most galaxies.

Despite its invisible interior, the presence of a black hole can be inferred through its interaction with other matter and with electromagnetic radiation such as light. Matter falling onto a black hole can form an accretion disk heated by friction, forming some of the brightest objects in the universe. If there are other stars orbiting a black hole, their orbits can be used to determine its mass and location.
Such observations can be used to exclude possible alternatives (such as neutron stars). In this way, astronomers have identified numerous stellar black hole candidates in binary systems, and established that the core of the Milky Way contains a supermassive black hole of about 4.3 million solar masses.
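As a rough sketch of how an orbit fixes the mass, Kepler's third law gives the enclosed mass from a star's orbital semi-major axis and period. The orbital elements below for the star S2 near the galactic center are approximate literature values used only for illustration:

    import math

    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30     # solar mass, kg
    AU = 1.496e11        # astronomical unit, m
    YEAR = 3.156e7       # year, s

    def central_mass(a_m, period_s):
        # Kepler's third law: M = 4*pi^2 * a^3 / (G * T^2)
        return 4 * math.pi**2 * a_m**3 / (G * period_s**2)

    # Approximate orbit of the star S2 around the galactic center (rough values):
    a = 970 * AU         # semi-major axis ~970 AU
    T = 16.0 * YEAR      # orbital period ~16 years
    print(central_mass(a, T) / M_SUN)  # a few million solar masses, the same order as the figure above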

Multiverse

From Wikipedia, the free encyclopedia
 
The multiverse (or meta-universe) is the hypothetical set of infinite or finite possible universes (including the historical universe we consistently experience) that together comprise everything that exists: the entirety of space, time, matter, and energy as well as the physical laws and constants that describe them. The various universes within the multiverse are sometimes called parallel universes.
The structure of the multiverse, the nature of each universe within it and the relationships among the various constituent universes, depend on the specific multiverse hypothesis considered. Multiple universes have been hypothesized in cosmology, physics, astronomy, religion, philosophy, transpersonal psychology, and fiction, particularly in science fiction and fantasy. In these contexts, parallel universes are also called "alternative universes", "quantum universes", "interpenetrating dimensions", "parallel dimensions", "parallel worlds", "alternative realities", "alternative timelines", and "dimensional planes," among others. The term 'multiverse' was coined in 1895 by the American philosopher and psychologist William James in a different context.[1]

The multiverse hypothesis is a source of disagreement within the physics community. Physicists disagree about whether the multiverse exists, and whether the multiverse is a proper subject of scientific inquiry.[2] Supporters of one of the multiverse hypotheses include Stephen Hawking,[3] Steven Weinberg,[4] Brian Greene,[5] Max Tegmark,[6] Alan Guth,[7] Andrei Linde,[8] Michio Kaku,[9] David Deutsch,[10] Leonard Susskind,[11] Raj Pathria,[12] Sean Carroll and Alex Vilenkin.[13] In contrast, critics such as David Gross,[14] Paul Steinhardt,[15] George Ellis[16][17] and Paul Davies have argued that the multiverse question is philosophical rather than scientific, or even that the multiverse hypothesis is harmful or pseudoscientific.

Multiverse hypotheses in physics

Categories

Max Tegmark and Brian Greene have devised classification schemes that categorize the various theoretical types of multiverse, or types of universe that might theoretically comprise a multiverse ensemble.

Max Tegmark's four levels

Cosmologist Max Tegmark has provided a taxonomy of universes beyond the familiar observable universe. The levels according to Tegmark's classification are arranged such that subsequent levels can be understood to encompass and expand upon previous levels, and they are briefly described below.[18][19]
Level I: Beyond our cosmological horizon
A generic prediction of chaotic inflation is an infinite ergodic universe, which, being infinite, must contain Hubble volumes realizing all initial conditions.

Accordingly, an infinite universe will contain an infinite number of Hubble volumes, all having the same physical laws and physical constants. In regard to configurations such as the distribution of matter, almost all will differ from our Hubble volume. However, because there are infinitely many, far beyond the cosmological horizon, there will eventually be Hubble volumes with similar, and even identical, configurations. Tegmark estimates that an identical volume to ours should be about 10^(10^115) meters away from us.[20] Given infinite space, there would, in fact, be an infinite number of Hubble volumes identical to ours in the universe.[21] This follows directly from the cosmological principle, wherein it is assumed our Hubble volume is not special or unique.
Level II: Universes with different physical constants
"Bubble universes": every disk is a bubble universe (Universe 1 to Universe 6 are different bubbles; they have physical constants that are different from our universe); our universe is just one of the bubbles.

In the chaotic inflation theory, a variant of the cosmic inflation theory, the multiverse as a whole is stretching and will continue doing so forever, but some regions of space stop stretching and form distinct bubbles, like gas pockets in a loaf of rising bread. Such bubbles are embryonic level I multiverses. Linde and Vanchurin calculated the number of these universes to be on the scale of 10^(10^10,000,000).[22]

Different bubbles may experience different spontaneous symmetry breaking resulting in different properties such as different physical constants.[21]

This level also includes John Archibald Wheeler's oscillatory universe theory and Lee Smolin's fecund universes theory.
Level III: Many-worlds interpretation of quantum mechanics
Hugh Everett's many-worlds interpretation (MWI) is one of several mainstream interpretations of quantum mechanics. In brief, one aspect of quantum mechanics is that certain observations cannot be predicted absolutely. Instead, there is a range of possible observations, each with a different probability. According to the MWI, each of these possible observations corresponds to a different universe. Suppose a six-sided die is thrown and that the numeric result of the throw corresponds to a quantum mechanics observable. All six possible ways the die can fall correspond to six different universes.

Tegmark argues that a level III multiverse does not contain more possibilities in the Hubble volume than a level I-II multiverse. In effect, all the different "worlds" created by "splits" in a level III multiverse with the same physical constants can be found in some Hubble volume in a level I multiverse. Tegmark writes that "The only difference between Level I and Level III is where your doppelgängers reside. In Level I they live elsewhere in good old three-dimensional space. In Level III they live on another quantum branch in infinite-dimensional Hilbert space." Similarly, all level II bubble universes with different physical constants can in effect be found as "worlds" created by "splits" at the moment of spontaneous symmetry breaking in a level III multiverse.[21]

Related to the many-worlds idea are Richard Feynman's multiple histories interpretation and H. Dieter Zeh's many-minds interpretation.
Level IV: Ultimate ensemble
The ultimate ensemble or mathematical universe hypothesis is the hypothesis of Tegmark himself.[23] This level considers equally real all universes that can be described by different mathematical structures. Tegmark writes that "abstract mathematics is so general that any Theory Of Everything (TOE) that is definable in purely formal terms (independent of vague human terminology) is also a mathematical structure. For instance, a TOE involving a set of different types of entities (denoted by words, say) and relations between them (denoted by additional words) is nothing but what mathematicians call a set-theoretical model, and one can generally find a formal system that it is a model of." He argues this "implies that any conceivable parallel universe theory can be described at Level IV" and "subsumes all other ensembles, therefore brings closure to the hierarchy of multiverses, and there cannot be say a Level V."[24]

Jürgen Schmidhuber, however, says the "set of mathematical structures" is not even well-defined, and admits only universe representations describable by constructive mathematics, that is, computer programs. He explicitly includes universe representations describable by non-halting programs whose output bits converge after finite time, although the convergence time itself may not be predictable by a halting program, due to Kurt Gödel's limitations.[25][26][27] He also explicitly discusses the more restricted ensemble of quickly computable universes.[28]

Brian Greene's nine types

American theoretical physicist and string theorist Brian Greene discussed nine types of parallel universes:[29]
Quilted
The quilted multiverse works only in an infinite universe. With an infinite amount of space, every possible event will occur an infinite number of times. However, the speed of light prevents us from being aware of these other identical areas.
Inflationary
The inflationary multiverse is composed of various pockets where inflation fields collapse and form new universes.
Brane
The brane multiverse follows from M-theory and states that each universe is a 3-dimensional brane that exists with many others. Particles are bound to their respective branes except for gravity.
Cyclic
The cyclic multiverse has multiple branes (each a universe) that collided, causing Big Bangs. The universes bounce back and pass through time, until they are pulled back together and again collide, destroying the old contents and creating them anew.
Landscape
The landscape multiverse relies on string theory's Calabi–Yau shapes. Quantum fluctuations drop the shapes to a lower energy level, creating a pocket with a different set of laws from the surrounding space.
Quantum
The quantum multiverse creates a new universe when a diversion in events occurs, as in the many-worlds interpretation of quantum mechanics.
Holographic
The holographic multiverse is derived from the theory that the surface area of a space can simulate the volume of the region.
Simulated
The simulated multiverse exists on complex computer systems that simulate entire universes.
Ultimate
The ultimate multiverse contains every mathematically possible universe under different laws of physics.

Cyclic theories

In several theories, there is a series of infinite, self-sustaining cycles (for example, an eternity of Big Bangs and Big Crunches).

M-theory

A multiverse of a somewhat different kind has been envisaged within string theory and its higher-dimensional extension, M-theory.[30] These theories require the presence of 10 or 11 spacetime dimensions respectively. The extra 6 or 7 dimensions may either be compactified on a very small scale, or our universe may simply be localized on a dynamical (3+1)-dimensional object, a D-brane.
This opens up the possibility that there are other branes which could support "other universes".[31][32] This is unlike the universes in the "quantum multiverse", but both concepts can operate at the same time.

Some scenarios postulate that our big bang was created, along with our universe, by the collision of two branes.[31][32]

Black-hole cosmology

A black-hole cosmology is a cosmological model in which the observable universe is the interior of a black hole existing as one of possibly many inside a larger universe.

Anthropic principle

The concept of other universes has been proposed to explain how our Universe appears to be fine-tuned for conscious life as we experience it. If there were a large (possibly infinite) number of universes, each with possibly different physical laws (or different fundamental physical constants), some of these universes, even if very few, would have the combination of laws and fundamental parameters that are suitable for the development of matter, astronomical structures, elemental diversity, stars, and planets that can exist long enough for life to emerge and evolve. The weak anthropic principle could then be applied to conclude that we (as conscious beings) would only exist in one of those few universes that happened to be finely tuned, permitting the existence of life with developed consciousness. Thus, while the probability might be extremely small that any particular universe would have the requisite conditions for life (as we understand life) to emerge and evolve, this does not require intelligent design per the teleological argument as the only explanation for the conditions in the Universe that promote our existence in it.

Search for evidence

Around 2010, scientists such as Stephen M. Feeney analyzed Wilkinson Microwave Anisotropy Probe (WMAP) data and claimed to find preliminary evidence suggesting that our universe collided with other (parallel) universes in the distant past.[33][34][35][36] However, a more thorough analysis of data from the WMAP and from the Planck satellite, which has a resolution 3 times higher than WMAP, failed to find any statistically significant evidence of such a bubble universe collision.[37][38] In addition, there is no evidence of any gravitational pull of other universes on ours.[39][40]

Criticism

Non-scientific claims

In his 2003 New York Times opinion piece, "A Brief History of the Multiverse", author and cosmologist Paul Davies offers a variety of arguments that multiverse theories are non-scientific:[41]
For a start, how is the existence of the other universes to be tested? To be sure, all cosmologists accept that there are some regions of the universe that lie beyond the reach of our telescopes, but somewhere on the slippery slope between that and the idea that there are an infinite number of universes, credibility reaches a limit. As one slips down that slope, more and more must be accepted on faith, and less and less is open to scientific verification. Extreme multiverse explanations are therefore reminiscent of theological discussions. Indeed, invoking an infinity of unseen universes to explain the unusual features of the one we do see is just as ad hoc as invoking an unseen Creator. The multiverse theory may be dressed up in scientific language, but in essence it requires the same leap of faith.
— Paul Davies, A Brief History of the Multiverse
Taking cosmic inflation as a popular case in point, George Ellis, writing in August 2011, provides a balanced criticism not only of the science but, as he suggests, of the scientific philosophy by which multiverse theories are generally substantiated. He, like most cosmologists, accepts Tegmark's level I "domains", even though they lie far beyond the cosmological horizon. Likewise, the multiverse of cosmic inflation is said to exist very far away. It would be so far away, however, that it's very unlikely any evidence of an early interaction will be found. He argues that for many theorists, the lack of empirical testability or falsifiability is not a major concern. “Many physicists who talk about the multiverse, especially advocates of the string landscape, do not care much about parallel universes per se. For them, objections to the multiverse as a concept are unimportant. Their theories live or die based on internal consistency and, one hopes, eventual laboratory testing.” Although he believes there's little hope that will ever be possible, he grants that the theories on which the speculation is based are not without scientific merit. He concludes that multiverse theory is a “productive research program”:[42]
As skeptical as I am, I think the contemplation of the multiverse is an excellent opportunity to reflect on the nature of science and on the ultimate nature of existence: why we are here… In looking at this concept, we need an open mind, though not too open. It is a delicate path to tread. Parallel universes may or may not exist; the case is unproved. We are going to have to live with that uncertainty. Nothing is wrong with scientifically based philosophical speculation, which is what multiverse proposals are. But we should name it for what it is.
— George Ellis, Scientific American, Does the Multiverse Really Exist?

Occam's razor

Proponents and critics disagree about how to apply Occam's razor. Critics argue that to postulate a practically infinite number of unobservable universes just to explain our own seems contrary to Occam's razor.[43] In contrast, proponents argue that, in terms of Kolmogorov complexity, the proposed multiverse is simpler than a single idiosyncratic universe.[21]

For example, multiverse proponent Max Tegmark argues:
[A]n entire ensemble is often much simpler than one of its members. This principle can be stated more formally using the notion of algorithmic information content. The algorithmic information content in a number is, roughly speaking, the length of the shortest computer program that will produce that number as output. For example, consider the set of all integers. Which is simpler, the whole set or just one number? Naively, you might think that a single number is simpler, but the entire set can be generated by quite a trivial computer program, whereas a single number can be hugely long. Therefore, the whole set is actually simpler... (Similarly), the higher-level multiverses are simpler. Going from our universe to the Level I multiverse eliminates the need to specify initial conditions, upgrading to Level II eliminates the need to specify physical constants, and the Level IV multiverse eliminates the need to specify anything at all.... A common feature of all four multiverse levels is that the simplest and arguably most elegant theory involves parallel universes by default. To deny the existence of those universes, one needs to complicate the theory by adding experimentally unsupported processes and ad hoc postulates: finite space, wave function collapse and ontological asymmetry. Our judgment therefore comes down to which we find more wasteful and inelegant: many worlds or many words. Perhaps we will gradually get used to the weird ways of our cosmos and find its strangeness to be part of its charm.[21]
— Max Tegmark, "Parallel universes. Not just a staple of science fiction, other universes are a direct implication of cosmological observations." Scientific American 2003 May;288(5):40–51
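Tegmark's algorithmic-information argument can be illustrated with a toy contrast: a program that enumerates every non-negative integer stays tiny no matter how far it runs, while a program that must output one specific, arbitrary number generally has to spell that number out in its source. The sketch below is an informal illustration of that asymmetry, not a formal Kolmogorov-complexity calculation.

    # A constant-size generator for the entire set of non-negative integers:
    def all_integers():
        n = 0
        while True:
            yield n
            n += 1

    # A program that must output one particular, arbitrary number typically has to
    # carry all of its digits in the source text; most long numbers cannot be
    # compressed into anything shorter than themselves.
    def one_specific_number():
        return 480598234750987230948572309485702398457  # arbitrary illustrative literal

    # In this sense, the description of the whole ensemble ("all integers") is
    # shorter than the description of a typical single member.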
Princeton cosmologist Paul Steinhardt used the 2014 Annual Edge Question to voice his opposition to multiverse theorizing:
A pervasive idea in fundamental physics and cosmology that should be retired: the notion that we live in a multiverse in which the laws of physics and the properties of the cosmos vary randomly from one patch of space to another. According to this view, the laws and properties within our observable universe cannot be explained or predicted because they are set by chance. Different regions of space too distant to ever be observed have different laws and properties, according to this picture. Over the entire multiverse, there are infinitely many distinct patches. Among these patches, in the words of Alan Guth, "anything that can happen will happen—and it will happen infinitely many times". Hence, I refer to this concept as a Theory of Anything. Any observation or combination of observations is consistent with a Theory of Anything. No observation or combination of observations can disprove it. Proponents seem to revel in the fact that the Theory cannot be falsified. The rest of the scientific community should be up in arms since an unfalsifiable idea lies beyond the bounds of normal science. Yet, except for a few voices, there has been surprising complacency and, in some cases, grudging acceptance of a Theory of Anything as a logical possibility. The scientific journals are full of papers treating the Theory of Anything seriously. What is going on?[15]
— Paul Steinhardt, "Theories of Anything", edge.com
Steinhardt claims that multiverse theories have gained currency mostly because too much has been invested in theories that have failed, e.g. inflation or string theory. He tends to see in them an attempt to redefine the values of science to which he objects even more strongly:
A Theory of Anything is useless because it does not rule out any possibility and worthless because it submits to no do-or-die tests. (Many papers discuss potential observable consequences, but these are only possibilities, not certainties, so the Theory is never really put at risk.)[15]
— Paul Steinhardt, "Theories of Anything", edge.com

Multiverse hypotheses in philosophy and logic

Modal realism

Possible worlds are a way of explaining probability, hypothetical statements and the like, and some philosophers such as David Lewis believe that all possible worlds exist, and are just as real as the actual world (a position known as modal realism).[44]

Trans-world identity

A metaphysical issue that crops up in multiverse schemas that posit infinite identical copies of any given universe is the notion that there can be identical objects in different possible worlds. According to the counterpart theory of David Lewis, the objects should be regarded as similar rather than identical.[45][46]

Fictional realism

Fictional realism is the view that because fictions exist, fictional characters exist as well. There are fictional entities, in the same sense in which, setting aside philosophical disputes, there are people, Mondays, numbers and planets.[47][48]

Inequality (mathematics)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Inequality...