
Thursday, September 23, 2021

Synthetic molecular motor

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Synthetic_molecular_motor

Synthetic molecular motors are molecular machines capable of continuous directional rotation under an energy input. Although the term "molecular motor" has traditionally referred to a naturally occurring protein that induces motion (via protein dynamics), some groups also use the term when referring to non-biological, non-peptide synthetic motors. Many chemists are pursuing the synthesis of such molecular motors.

Molecular dynamics simulation of a synthetic molecular rotor composed of three molecules in a nanopore (outer diameter 6.7 nm) at 250 K.

The basic requirements for a synthetic motor are repetitive 360° motion, the consumption of energy and unidirectional rotation. The first two efforts in this direction, the chemically driven motor by Dr. T. Ross Kelly of Boston College with co-workers and the light-driven motor by Ben Feringa and co-workers, were published in 1999 in the same issue of Nature.

As of 2020, the smallest atomically precise molecular machine has a rotor that consists of four atoms.

Chemically driven rotary molecular motors

The prototype of a chemically driven rotary molecular motor by Kelly and co-workers.

An example of a prototype for a synthetic chemically driven rotary molecular motor was reported by Kelly and co-workers in 1999. Their system is made up of a three-bladed triptycene rotor and a helicene, and is capable of performing a unidirectional 120° rotation.

This rotation takes place in five steps. The amine group present on the triptycene moiety is converted to an isocyanate group by condensation with phosgene (a). Thermal or spontaneous rotation around the central bond then brings the isocyanate group in proximity of the hydroxyl group located on the helicene moiety (b), thereby allowing these two groups to react with each other (c). This reaction irreversibly traps the system as a strained cyclic urethane that is higher in energy and thus energetically closer to the rotational energy barrier than the original state. Further rotation of the triptycene moiety therefore requires only a relatively small amount of thermal activation in order to overcome this barrier, thereby releasing the strain (d). Finally, cleavage of the urethane group restores the amine and alcohol functionalities of the molecule (e).
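Viewed abstractly, this five-step sequence behaves like a small state machine in which only one step actually advances the rotor. The sketch below is an illustrative Python rendering of the cycle as described above; it is not a chemical simulation, and the state labels and function name are invented for clarity.

# Illustrative sketch (not a chemical model): the five-step Kelly cycle
# treated as a simple state machine. Labels follow steps (a)-(e) in the text.

CYCLE = [
    ("a", "amine -> isocyanate (condensation with phosgene)"),
    ("b", "thermal rotation brings the isocyanate near the helicene -OH"),
    ("c", "isocyanate + -OH -> strained cyclic urethane (irreversible trap)"),
    ("d", "small thermal activation releases the strain: forward rotation"),
    ("e", "urethane cleavage restores the amine and alcohol groups"),
]

def run_cycle(angle_deg=0):
    """Walk once through the five steps; only step (d) advances the rotor."""
    for label, event in CYCLE:
        if label == "d":
            angle_deg += 120   # unidirectional 120° rotation of the triptycene
        print(f"step ({label}): {event}  [rotor at {angle_deg}°]")
    return angle_deg

run_cycle()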

The result of this sequence of events is a unidirectional 120° rotation of the triptycene moiety with respect to the helicene moiety. Additional forward or backward rotation of the triptycene rotor is inhibited by the helicene moiety, which serves a function similar to that of the pawl of a ratchet. The unidirectionality of the system results from both the asymmetric skew of the helicene moiety and the strain of the cyclic urethane formed in c. This strain can only be lowered by the clockwise rotation of the triptycene rotor in d, as both counterclockwise rotation and the reverse of process d are energetically unfavorable. In this respect the preference for the rotation direction is determined by both the positions of the functional groups and the shape of the helicene, and is thus built into the design of the molecule rather than dictated by external factors.

The motor by Kelly and co-workers is an elegant example of how chemical energy can be used to induce controlled, unidirectional rotational motion, a process resembling the consumption of ATP in organisms to fuel numerous processes. However, it suffers from a serious drawback: the sequence of events that leads to 120° rotation is not repeatable. Kelly and co-workers therefore searched for ways to extend the system so that the sequence could be carried out repeatedly. Their attempts to accomplish this were unsuccessful, and the project has since been abandoned. In 2016 David Leigh's group invented the first autonomous chemically fuelled synthetic molecular motor.

Some other synthetic chemically driven rotary molecular motors, all operating by sequential addition of reagents, have been reported, including one based on the stereoselective ring opening of a racemic biaryl lactone by chiral reagents, which results in a directed 90° rotation of one aryl with respect to the other. Branchaud and co-workers have reported that this approach, followed by an additional ring-closing step, can be used to accomplish a non-repeatable 180° rotation. Feringa and co-workers used this approach in their design of a molecule that can repeatably perform a 360° rotation. The full rotation of this molecular motor takes place in four stages. In stages A and C, rotation of the aryl moiety is restricted, although helix inversion is possible. In stages B and D, the aryl can rotate with respect to the naphthalene, with steric interactions preventing the aryl from passing the naphthalene. The rotary cycle consists of four chemically induced steps, each of which converts one stage into the next. Steps 1 and 3 are asymmetric ring-opening reactions that use a chiral reagent to control the direction of rotation of the aryl. Steps 2 and 4 consist of the deprotection of the phenol, followed by regioselective ring formation.

The chemically driven rotary molecular motor by Feringa and co-workers.

Light-driven rotary molecular motors

Rotary cycle of the light-driven rotary molecular motor by Feringa and co-workers.

In 1999 the laboratory of Prof. Dr. Ben L. Feringa at the University of Groningen, The Netherlands, reported the creation of a unidirectional molecular rotor. Their 360° molecular motor system consists of a bis-helicene connected by an alkene double bond displaying axial chirality and having two stereocenters.

One cycle of unidirectional rotation takes four reaction steps. The first step is a low-temperature endothermic photoisomerization of the trans (P,P) isomer 1 to the cis (M,M) isomer 2, where P stands for a right-handed helix and M for a left-handed helix. In this process, the two axial methyl groups are converted into two less sterically favorable equatorial methyl groups.

By increasing the temperature to 20 °C, these methyl groups convert back exothermically to the (P,P) cis axial positions (3) in a helix inversion. Because the axial isomer is more stable than the equatorial isomer, reverse rotation is blocked. A second photoisomerization converts (P,P) cis 3 into (M,M) trans 4, again with accompanying formation of sterically unfavorable equatorial methyl groups. A thermal isomerization process at 60 °C returns the methyl groups to their axial positions and closes the 360° cycle.

Synthetic molecular motors: fluorene system

A major hurdle to overcome is the long reaction time for complete rotation in these systems, which falls far short of the rotation speeds displayed by motor proteins in biological systems. In the fastest system to date, with a fluorene lower half, the half-life of the thermal helix inversion is 0.005 seconds. This compound is synthesized using the Barton-Kellogg reaction. In this molecule the slowest step in its rotation, the thermally induced helix inversion, is believed to proceed much more quickly because the larger tert-butyl group makes the unstable isomer even less stable than when a methyl group is used: the unstable isomer is destabilized more than the transition state that leads to helix inversion. The different behaviour of the two molecules is illustrated by the fact that the half-life of the compound with a methyl group instead of a tert-butyl group is 3.2 minutes.
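For a rough sense of what these half-lives mean for rotation speed, one can convert them to first-order rate constants (k = ln 2 / t½). The back-of-the-envelope sketch below assumes the two thermal helix inversions dominate the cycle time and treats the photochemical steps as effectively instantaneous; it is only an illustration of the arithmetic, not data from the original papers.

import math

def rate_constant(t_half_s: float) -> float:
    """First-order rate constant from a half-life: k = ln 2 / t_half."""
    return math.log(2) / t_half_s

# Half-lives quoted in the text (fluorene lower half vs. methyl analogue).
for name, t_half in [("tert-butyl / fluorene motor", 0.005),
                     ("methyl analogue", 3.2 * 60)]:
    k = rate_constant(t_half)
    tau = 1.0 / k                 # mean lifetime of the unstable isomer
    est_cycle = 2 * tau           # assume two thermal inversions per 360° turn
    print(f"{name}: k = {k:.3g} 1/s, "
          f"rough maximum rotation ~ {1/est_cycle:.3g} turns/s")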

The Feringa principle has been incorporated into a prototype nanocar. The car has a helicene-derived engine with an oligo(phenylene ethynylene) chassis and four carborane wheels, and is expected to be able to move on a solid surface under scanning tunneling microscopy monitoring, although so far this has not been observed. The motor does not perform with fullerene wheels because they quench the photochemistry of the motor moiety. Feringa motors have also been shown to remain operable when chemically attached to solid surfaces. The ability of certain Feringa systems to act as asymmetric catalysts has also been demonstrated.

In 2016, Feringa was awarded the Nobel Prize in Chemistry for his work on molecular motors.

Nanorobotics

From Wikipedia, the free encyclopedia
 

Nanorobotics is an emerging technology field creating machines or robots whose components are at or near the scale of a nanometer (10⁻⁹ meters). More specifically, nanorobotics (as opposed to microrobotics) refers to the nanotechnology engineering discipline of designing and building nanorobots: devices ranging in size from 0.1 to 10 micrometres and constructed of nanoscale or molecular components. The terms nanobot, nanoid, nanite, nanomachine, and nanomite have also been used to describe such devices, which are currently under research and development.

Nanomachines are largely in the research and development phase, but some primitive molecular machines and nanomotors have been tested. An example is a sensor with a switch approximately 1.5 nanometers across, able to count specific molecules in a chemical sample. The first useful applications of nanomachines may be in nanomedicine. For example, biological machines could be used to identify and destroy cancer cells. Another potential application is the detection of toxic chemicals, and the measurement of their concentrations, in the environment. Rice University has demonstrated a single-molecule car, synthesized chemically, that uses buckminsterfullerenes (buckyballs) as wheels. It is actuated by controlling the environmental temperature and by positioning a scanning tunneling microscope tip.

Another definition is a robot that allows precise interactions with nanoscale objects, or that can manipulate objects with nanoscale resolution. Such devices are more closely related to microscopy or scanning probe microscopy than to the description of nanorobots as molecular machines. Under the microscopy definition, even a large apparatus such as an atomic force microscope can be considered a nanorobotic instrument when configured to perform nanomanipulation. From this viewpoint, macroscale robots or microrobots that can move with nanoscale precision can also be considered nanorobots.

Nanorobotics theory

According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micro-machines (see biological machine). Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the surgeon". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom.

Since nano-robots would be microscopic in size, it would probably be necessary for very large numbers of them to work together to perform microscopic and macroscopic tasks. These nano-robot swarms, both those unable to replicate (as in utility fog) and those able to replicate unconstrained in the natural environment (as in grey goo and synthetic biology), are found in many science fiction stories, such as the Borg nano-probes in Star Trek and The Outer Limits episode "The New Breed". Some proponents of nano-robotics, in reaction to the grey goo scenarios that they earlier helped to propagate, hold the view that nano-robots able to replicate outside of a restricted factory environment do not form a necessary part of a purported productive nanotechnology, and that the process of self-replication, were it ever to be developed, could be made inherently safe. They further assert that their current plans for developing and using molecular manufacturing do not in fact include free-foraging replicators.

A detailed theoretical discussion of nanorobotics, including specific design issues such as sensing, power, communication, navigation, manipulation, locomotion, and onboard computation, has been presented in the medical context of nanomedicine by Robert Freitas. Some of these discussions remain at the level of unbuildable generality and do not approach the level of detailed engineering.

Legal and ethical implications

Open technology

A document proposing nanobiotech development using open design technology methods, as in open-source hardware and open-source software, has been addressed to the United Nations General Assembly. According to the document, in the same way that open source has in recent years accelerated the development of computer systems, a similar approach should benefit society at large and accelerate nanorobotics development. The use of nanobiotechnology should be established as a human heritage for coming generations, and developed as an open technology based on ethical practices for peaceful purposes. Open technology is stated as a fundamental key for such an aim.

Nanorobot race

In the same way that technology research and development drove the space race and nuclear arms race, a race for nanorobots is occurring. There are several reasons to count nanorobots among the emerging technologies: large corporations such as General Electric, Hewlett-Packard, Synopsys, Northrop Grumman and Siemens have recently been working on the research and development of nanorobots; surgeons are getting involved and starting to propose ways to apply nanorobots to common medical procedures; government agencies have granted universities and research institutes more than $2 billion towards research on nanodevices for medicine; and bankers are strategically investing with the intent to acquire, in advance, rights and royalties on future nanorobot commercialisation. Some aspects of nanorobot litigation and related issues linked to monopoly have already arisen. A large number of patents on nanorobots has been granted recently, mostly to patent agents, companies specialized solely in building patent portfolios, and lawyers. After a long series of patents and eventual litigation (see, for example, the invention of radio or the war of the currents), emerging fields of technology tend to become monopolies, normally dominated by large corporations.

Manufacturing approaches

Manufacturing nanomachines assembled from molecular components is a very challenging task. Because of the level of difficulty, many engineers and scientists continue working cooperatively across multidisciplinary approaches to achieve breakthroughs in this area. The following distinct techniques are currently applied towards manufacturing nanorobots:

Biochip

The joint use of nanoelectronics, photolithography, and new biomaterials provides a possible approach to manufacturing nanorobots for common medical uses, such as surgical instrumentation, diagnosis, and drug delivery. This method of manufacturing at the nanotechnology scale has been in use in the electronics industry since 2008. Practical nanorobots built this way would be integrated as nanoelectronic devices, allowing tele-operation and advanced capabilities for medical instrumentation.

Nubots

A nucleic acid robot (nubot) is an organic molecular machine at the nanoscale. DNA structures can provide a means to assemble 2D and 3D nanomechanical devices. DNA-based machines can be activated using small molecules, proteins and other molecules of DNA. Biological circuit gates based on DNA materials have been engineered as molecular machines to allow in vitro drug delivery for targeted health problems. Such material-based systems behave most like smart-biomaterial drug delivery systems, but they do not allow precise in vivo teleoperation of the engineered prototypes.

Surface-bound systems

Several reports have demonstrated the attachment of synthetic molecular motors to surfaces. These primitive nanomachines have been shown to undergo machine-like motions when confined to the surface of a macroscopic material. The surface anchored motors could potentially be used to move and position nanoscale materials on a surface in the manner of a conveyor belt.

Positional nanoassembly

The Nanofactory Collaboration, founded by Robert Freitas and Ralph Merkle in 2000 and involving 23 researchers from 10 organizations and 4 countries, focuses on developing a practical research agenda specifically aimed at positionally controlled diamond mechanosynthesis and a diamondoid nanofactory that would have the capability of building diamondoid medical nanorobots.

Biohybrids

The emerging field of bio-hybrid systems combines biological and synthetic structural elements for biomedical or robotic applications. The constituent elements of bio-nanoelectromechanical systems (BioNEMS) are of nanoscale size, for example DNA, proteins or nanostructured mechanical parts. Thiol-ene e-beam resists allow the direct writing of nanoscale features, followed by the functionalization of the natively reactive resist surface with biomolecules. Other approaches use a biodegradable material attached to magnetic particles that allow the device to be guided around the body.

Bacteria-based

This approach proposes the use of biological microorganisms, such as the bacteria Escherichia coli and Salmonella typhimurium, with the flagellum used for propulsion. Electromagnetic fields normally control the motion of this kind of biologically integrated device. Chemists at the University of Nebraska have created a humidity gauge by fusing a bacterium to a silicon computer chip.

Virus-based

Retroviruses can be retrained to attach to cells and replace DNA. They go through a process called reverse transcription to deliver their genetic payload from a vector; usually the Gag and Pol genes of the virus supply the capsid and delivery machinery. This process is called retroviral gene therapy, and it can re-engineer cellular DNA through the use of viral vectors. The approach has appeared in the form of retroviral, adenoviral, and lentiviral gene delivery systems. These gene therapy vectors have been used in cats to deliver genes into the genetically modified organism, causing it to display the trait.

3D printing

3D printing is the process by which a three-dimensional structure is built through additive manufacturing. Nanoscale 3D printing involves many of the same processes, applied at a much smaller scale. To print a structure in the 5-400 µm range, the precision of the 3D printing machine needs to be greatly improved. A two-step process combining 3D printing with laser-etched plates has been adopted as an improvement. To achieve nanoscale precision, a laser etching machine etches the details needed for each segment of the nanorobot into a plate. The plate is then transferred to the 3D printer, which fills the etched regions with the desired nanoparticle, and the process is repeated until the nanorobot is built from the bottom up. This approach has two main benefits: it increases the overall accuracy of the printing process, and it has the potential to create functional segments of a nanorobot.

The 3D printer uses a liquid photoactive resin, which is hardened at precisely the correct spots by a focused laser beam. The focal point of the laser is guided through the resin by movable mirrors and leaves behind a hardened line of solid polymer just a few hundred nanometers wide. This fine resolution enables the creation of intricately structured sculptures as tiny as a grain of sand, and the process is quick by nanoscale 3D printing standards. Ultra-small features can be made with the 3D micro-fabrication technique used in multiphoton photopolymerisation: a focused laser traces the desired 3D object into a block of gel. Because photoexcitation is nonlinear, the gel is cured to a solid only where the laser was focused, and the remaining gel is then washed away. Feature sizes of under 100 nm are easily produced, as well as complex structures with moving and interlocked parts.
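The reason nonlinear (two-photon) excitation gives such small features is that the polymerization rate scales roughly with the square of the laser intensity, so the effective writing spot is narrower than the focal spot itself. The sketch below illustrates this with a hypothetical Gaussian focal profile; the 200 nm width is an arbitrary placeholder, not a value from the text.

import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a sampled, single-peaked profile."""
    half = y.max() / 2
    above = x[y >= half]
    return above[-1] - above[0]

x = np.linspace(-1000, 1000, 20001)          # position across the focus, nm
sigma = 200.0                                 # hypothetical focal-spot width (nm)
intensity = np.exp(-x**2 / (2 * sigma**2))    # one-photon response ~ I
two_photon = intensity**2                     # two-photon response ~ I^2

print(f"one-photon FWHM : {fwhm(x, intensity):.0f} nm")
print(f"two-photon FWHM : {fwhm(x, two_photon):.0f} nm  (narrower by ~sqrt(2))")

Combined with a curing threshold, only the very peak of the squared profile exceeds the threshold, which is how features well below the optical spot size become practical.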

Potential uses

Nanomedicine

Potential uses for nanorobotics in medicine include early diagnosis and targeted drug-delivery for cancer, biomedical instrumentation, surgery, pharmacokinetics, monitoring of diabetes, and health care.

In such plans, future medical nanotechnology is expected to employ nanorobots injected into the patient to perform work at a cellular level. Such nanorobots intended for use in medicine should be non-replicating, as replication would needlessly increase device complexity, reduce reliability, and interfere with the medical mission.

Nanotechnology provides a wide range of new technologies for developing customized means to optimize the delivery of pharmaceutical drugs. Today, harmful side effects of treatments such as chemotherapy are commonly a result of drug delivery methods that do not pinpoint their intended target cells accurately. Researchers at Harvard and MIT, however, have been able to attach special RNA strands, measuring nearly 10 nm in diameter, to nanoparticles filled with a chemotherapy drug. These RNA strands are attracted to cancer cells; when the nanoparticle encounters a cancer cell, it adheres to it and releases the drug into the cell. This directed method of drug delivery has great potential for treating cancer patients while avoiding the negative effects commonly associated with improper drug delivery. The first demonstration of nanomotors operating in living organisms was carried out in 2014 at the University of California, San Diego. MRI-guided nanocapsules are one potential precursor to nanorobots.

Another useful application of nanorobots is assisting in the repair of tissue cells alongside white blood cells. Recruiting inflammatory cells or white blood cells (which include neutrophil granulocytes, lymphocytes, monocytes, and mast cells) to the affected area is the first response of tissues to injury. Because of their small size, nanorobots could attach themselves to the surface of recruited white cells to squeeze their way out through the walls of blood vessels and arrive at the injury site, where they could assist in the tissue repair process. Certain substances could possibly be used to accelerate the recovery.

The science behind this mechanism is quite complex. Passage of cells across the blood endothelium, a process known as transmigration, is a mechanism involving engagement of cell surface receptors to adhesion molecules, active force exertion and dilation of the vessel walls and physical deformation of the migrating cells. By attaching themselves to migrating inflammatory cells, the robots can in effect "hitch a ride" across the blood vessels, bypassing the need for a complex transmigration mechanism of their own.

As of 2016, in the United States, the Food and Drug Administration (FDA) regulates nanotechnology on the basis of size.

Nanocomposite particles that are remotely controlled by an electromagnetic field have also been developed. This series of nanorobots, now listed in the Guinness World Records, can be used to interact with biological cells. Scientists suggest that the technology could be used for the treatment of cancer.

Cultural references

The Nanites are characters on the TV show Mystery Science Theater 3000. They are self-replicating, bio-engineered organisms that work on the ship and reside in the SOL's computer systems; they made their first appearance in Season 8. Nanites are also used in a number of episodes of the Netflix series "Travelers", first appearing in season 1: they can be programmed and injected into injured people to perform repairs.

Nanites also feature in the 2016 Rise of Iron expansion for Destiny, in which SIVA, a self-replicating nanotechnology, is used as a weapon.

Nanites (more often referred to as nanomachines) are often referenced in Konami's "Metal Gear" series, where they are used to enhance and regulate abilities and bodily functions.

In the Star Trek franchise TV shows, nanites serve as an important plot device. Starting with "Evolution" in the third season of The Next Generation, Borg nanoprobes perform the function of maintaining the Borg cybernetic systems, as well as repairing damage to the organic parts of a Borg. They generate new technology inside a Borg when needed, as well as protecting them from many forms of disease.

Nanites play a role in the video game Deus Ex, being the basis of the nano-augmentation technology which gives augmented people superhuman abilities.

Nanites are also mentioned in the Arc of a Scythe book series by Neal Shusterman and are used to heal all nonfatal injuries, regulate bodily functions, and considerably lessen pain.

Self-replicating machine

From Wikipedia, the free encyclopedia
 
A simple form of machine self-replication

A self-replicating machine is a type of autonomous robot that is capable of reproducing itself autonomously using raw materials found in the environment, thus exhibiting self-replication in a way analogous to that found in nature. The concept of self-replicating machines has been advanced and examined by Homer Jacobson, Edward F. Moore, Freeman Dyson, John von Neumann and, in more recent times, by K. Eric Drexler in his book on nanotechnology, Engines of Creation (coining the term clanking replicator for such machines), and by Robert Freitas and Ralph Merkle in their review Kinematic Self-Replicating Machines, which provided the first comprehensive analysis of the entire replicator design space. The future development of such technology is an integral part of several plans involving the mining of moons and asteroid belts for ore and other materials, the creation of lunar factories, and even the construction of solar power satellites in space. The von Neumann probe is one theoretical example of such a machine. Von Neumann also worked on what he called the universal constructor, a self-replicating machine that would be able to evolve and which he formalized in a cellular automata environment. Notably, von Neumann's self-reproducing automata scheme posited that open-ended evolution requires inherited information to be copied and passed to offspring separately from the self-replicating machine, an insight that preceded the discovery of the structure of the DNA molecule by Watson and Crick and of how it is separately translated and replicated in the cell.

A self-replicating machine is an artificial self-replicating system that relies on conventional large-scale technology and automation. Although suggested more than 70 years ago, no self-replicating machine has yet been built. Certain idiosyncratic terms are occasionally found in the literature. For example, the term clanking replicator was once used by Drexler to distinguish macroscale replicating systems from the microscopic nanorobots or "assemblers" that nanotechnology may make possible, but the term is informal and is rarely used by others in popular or technical discussions. Replicators have also been called "von Neumann machines" after John von Neumann, who first rigorously studied the idea. However, the term "von Neumann machine" is less specific and also refers to a completely unrelated computer architecture that von Neumann proposed, so its use is discouraged where accuracy is important. Von Neumann himself used the term universal constructor to describe such self-replicating machines.

Historians of machine tools, even before the numerical control era, sometimes figuratively said that machine tools were a unique class of machines because they have the ability to "reproduce themselves" by copying all of their parts. Implicit in these discussions is that a human would direct the cutting processes (later planning and programming the machines), and would then assemble the parts. The same is true for RepRaps, which are another class of machines sometimes mentioned in reference to such non-autonomous "self-replication". In contrast, machines that are truly autonomously self-replicating (like biological machines) are the main subject discussed here.

History

The general concept of artificial machines capable of producing copies of themselves dates back at least several hundred years. An early reference is an anecdote regarding the philosopher René Descartes, who suggested to Queen Christina of Sweden that the human body could be regarded as a machine; she responded by pointing to a clock and ordering "see to it that it reproduces offspring." Several other variations on this anecdotal response also exist. Samuel Butler proposed in his 1872 novel Erewhon that machines were already capable of reproducing themselves but it was man who made them do so, and added that "machines which reproduce machinery do not reproduce machines after their own kind". In George Eliot's 1879 book Impressions of Theophrastus Such, a series of essays that she wrote in the character of a fictional scholar named Theophrastus, the essay "Shadows of the Coming Race" speculated about self-replicating machines, with Theophrastus asking "how do I know that they may not be ultimately made to carry, or may not in themselves evolve, conditions of self-supply, self-repair, and reproduction".

In 1802 William Paley formulated the first known teleological argument depicting machines producing other machines, suggesting that the question of who originally made a watch was rendered moot if it were demonstrated that the watch was able to manufacture a copy of itself. Scientific study of self-reproducing machines was anticipated by John Bernal as early as 1929 and by mathematicians such as Stephen Kleene who began developing recursion theory in the 1930s. Much of this latter work was motivated by interest in information processing and algorithms rather than physical implementation of such a system, however. In the course of the 1950s, suggestions of several increasingly simple mechanical systems capable of self-reproduction were made — notably by Lionel Penrose.

Von Neumann's kinematic model

A detailed conceptual proposal for a self-replicating machine was first put forward by mathematician John von Neumann in lectures delivered in 1948 and 1949, when he proposed a kinematic model of self-reproducing automata as a thought experiment. Von Neumann's concept of a physical self-replicating machine was dealt with only abstractly, with the hypothetical machine using a "sea" or stockroom of spare parts as its source of raw materials. The machine had a program stored on a memory tape that directed it to retrieve parts from this "sea" using a manipulator, assemble them into a duplicate of itself, and then copy the contents of its memory tape into the duplicate's empty tape. The machine was envisioned as consisting of as few as eight different types of components: four logic elements that send and receive stimuli, and four mechanical elements used to provide a structural skeleton and mobility. While the model was qualitatively sound, von Neumann was evidently dissatisfied with it because of the difficulty of analyzing it with mathematical rigor. Instead, he went on to develop an even more abstract model self-replicator based on cellular automata. His original kinematic concept remained obscure until it was popularized in a 1955 issue of Scientific American.
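Von Neumann's kinematic and cellular-automaton constructors are far too elaborate to reproduce here, but the bare idea of a pattern that copies itself under fixed local rules can be illustrated with the much simpler parity (XOR) rule, under which every pattern trivially replicates. The sketch below shows only that toy rule; it is not von Neumann's 29-state universal constructor.

import numpy as np

# Parity (XOR) cellular automaton: each cell's next state is the XOR of its
# four orthogonal neighbours. The rule is linear over GF(2), so after 2^k
# steps (with 2^k larger than the pattern) any small seed reappears as four
# displaced copies of itself: the simplest possible "self-replication".

def step(grid: np.ndarray) -> np.ndarray:
    return (np.roll(grid, 1, 0) ^ np.roll(grid, -1, 0) ^
            np.roll(grid, 1, 1) ^ np.roll(grid, -1, 1))

grid = np.zeros((32, 32), dtype=np.uint8)
grid[15:18, 15:18] = [[0, 1, 0],      # an arbitrary 5-cell "machine"
                      [1, 1, 1],
                      [0, 1, 0]]

for _ in range(4):                    # 4 = 2^2 steps, larger than the seed
    grid = step(grid)

print("live cells after 4 steps:", int(grid.sum()))   # 20 = four copies of 5
for row in grid:
    print("".join(".#"[c] for c in row))

The point of the contrast is that this replication is a trivial consequence of a linear rule, whereas von Neumann's construction replicates by explicitly reading and copying a stored description, which is what makes open-ended evolution possible.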

Von Neumann's goal for his self-reproducing automata theory, as specified in his lectures at the University of Illinois in 1949, was to design a machine whose complexity could grow automatically, akin to biological organisms under natural selection. He asked what threshold of complexity must be crossed for machines to be able to evolve. His answer was to design an abstract machine which, when run, would replicate itself. Notably, his design implies that open-ended evolution requires inherited information to be copied and passed to offspring separately from the self-replicating machine, an insight that preceded the discovery of the structure of the DNA molecule by Watson and Crick and of how it is separately translated and replicated in the cell.

Moore's artificial living plants

In 1956 mathematician Edward F. Moore proposed the first known suggestion for a practical real-world self-replicating machine, also published in Scientific American. Moore's "artificial living plants" were proposed as machines able to use air, water and soil as sources of raw materials and to draw their energy from sunlight via a solar battery or a steam engine. He chose the seashore as an initial habitat for such machines, giving them easy access to the chemicals in seawater, and suggested that later generations of the machine could be designed to float freely on the ocean's surface as self-replicating factory barges or to be placed in barren desert terrain that was otherwise useless for industrial purposes. The self-replicators would be "harvested" for their component parts, to be used by humanity in other, non-replicating machines.

Dyson's replicating systems

The next major development of the concept of self-replicating machines was a series of thought experiments proposed by physicist Freeman Dyson in his 1970 Vanuxem Lecture. He proposed three large-scale applications of machine replicators. First was to send a self-replicating system to Saturn's moon Enceladus, which in addition to producing copies of itself would also be programmed to manufacture and launch solar sail-propelled cargo spacecraft. These spacecraft would carry blocks of Enceladean ice to Mars, where they would be used to terraform the planet. His second proposal was a solar-powered factory system designed for a terrestrial desert environment, and his third was an "industrial development kit" based on this replicator that could be sold to developing countries to provide them with as much industrial capacity as desired. When Dyson revised and reprinted his lecture in 1979 he added proposals for a modified version of Moore's seagoing artificial living plants that was designed to distill and store fresh water for human use and the "Astrochicken."

Advanced Automation for Space Missions

An artist's conception of a "self-growing" robotic lunar factory

In 1980, inspired by a 1979 "New Directions Workshop" held at Woods Hole, NASA conducted a joint summer study with the ASEE entitled Advanced Automation for Space Missions to produce a detailed proposal for self-replicating factories that could develop lunar resources without requiring additional launches or human workers on-site. The study was conducted at Santa Clara University and ran from June 23 to August 29, with the final report published in 1982. The proposed system would have been capable of exponentially increasing productive capacity, and the design could be modified to build self-replicating probes to explore the galaxy.

The reference design included small computer-controlled electric carts running on rails inside the factory, mobile "paving machines" that used large parabolic mirrors to focus sunlight on lunar regolith to melt and sinter it into a hard surface suitable for building on, and robotic front-end loaders for strip mining. Raw lunar regolith would be refined by a variety of techniques, primarily hydrofluoric acid leaching. Large transports with a variety of manipulator arms and tools were proposed as the constructors that would put together new factories from parts and assemblies produced by their parent.

Power would be provided by a "canopy" of solar cells supported on pillars. The other machinery would be placed under the canopy.

A "casting robot" would use sculpting tools and templates to make plaster molds. Plaster was selected because the molds are easy to make, can make precise parts with good surface finishes, and the plaster can be easily recycled afterward using an oven to bake the water back out. The robot would then cast most of the parts either from nonconductive molten rock (basalt) or purified metals. A carbon dioxide laser cutting and welding system was also included.

A more speculative, more complex microchip fabricator was specified to produce the computer and electronic systems, but the designers also said that it might prove practical to ship the chips from Earth as if they were "vitamins."

A 2004 study supported by NASA's Institute for Advanced Concepts took this idea further. Some experts are beginning to consider self-replicating machines for asteroid mining.

Much of the design study was concerned with a simple, flexible chemical system for processing the ores and with the difference between the ratio of elements needed by the replicator and the ratios available in lunar regolith. The element that most limited the growth rate was chlorine, which is needed to process regolith for aluminium but is very rare in lunar regolith.
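The bookkeeping behind such a "limiting element" conclusion follows Liebig's law of the minimum: the element whose supply runs out first sets the replication rate. The sketch below illustrates that calculation with hypothetical placeholder numbers; they are not figures from the 1980 study.

# Liebig-style limiting-element check. All quantities are hypothetical
# placeholders chosen only to illustrate the arithmetic: the scarcest
# needed element (here chlorine) caps how many replicas can be built.

needed_kg_per_replica = {"Al": 2000, "Fe": 1500, "Si": 3000, "Cl": 50}
regolith_mass_fraction = {"Al": 0.07, "Fe": 0.05, "Si": 0.21, "Cl": 0.00002}

regolith_processed_kg = 1_000_000   # hypothetical annual throughput

replicas_supported = {
    el: regolith_processed_kg * regolith_mass_fraction[el] / needed_kg_per_replica[el]
    for el in needed_kg_per_replica
}
limiting = min(replicas_supported, key=replicas_supported.get)
print(replicas_supported)
print("growth limited by:", limiting)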

Lackner-Wendt Auxon replicators

In 1995, inspired by Dyson's 1970 suggestion of seeding uninhabited deserts on Earth with self-replicating machines for industrial development, Klaus Lackner and Christopher Wendt developed a more detailed outline for such a system. They proposed a colony of cooperating mobile robots 10–30 cm in size running on a grid of electrified ceramic tracks around stationary manufacturing equipment and fields of solar cells. Their proposal did not include a complete analysis of the system's material requirements, but it described a novel method for extracting the ten most common chemical elements found in raw desert topsoil (Na, Fe, Mg, Si, Ca, Ti, Al, C, O2 and H2) using a high-temperature carbothermic process. The proposal was popularized in Discover magazine, which featured solar-powered desalination equipment used to irrigate the desert in which the system was based. They named their machines "Auxons", from the Greek word auxein, meaning "to grow".

Recent work

NIAC studies on self-replicating systems

In the spirit of the 1980 Advanced Automation for Space Missions study, the NASA Institute for Advanced Concepts began several studies of self-replicating system design in 2002 and 2003, awarding four phase I grants.

Bootstrapping Self-Replicating Factories in Space

In 2012, NASA researchers Metzger, Muscatello, Mueller, and Mantovani argued for a so-called "bootstrapping approach" to starting self-replicating factories in space. They developed this concept on the basis of the In Situ Resource Utilization (ISRU) technologies that NASA has been developing to "live off the land" on the Moon or Mars. Their modeling showed that in just 20 to 40 years this industry could become self-sufficient and then grow to a large size, enabling greater exploration in space as well as providing benefits back to Earth. In 2014, Thomas Kalil of the White House Office of Science and Technology Policy published on the White House blog an interview with Metzger on bootstrapping solar-system civilization through self-replicating space industry. Kalil asked the public to submit ideas for how "the Administration, the private sector, philanthropists, the research community, and storytellers can further these goals." Kalil connected this concept to what former NASA Chief Technologist Mason Peck has dubbed "massless exploration", the ability to make everything in space so that nothing needs to be launched from Earth. Peck has said, "...all the mass we need to explore the solar system is already in space. It's just in the wrong shape." In 2016, Metzger argued that a fully self-replicating industry could be started over several decades by astronauts at a lunar outpost for a total cost (outpost plus starting the industry) of about a third of the space budgets of the International Space Station partner nations, and that this industry would solve Earth's energy and environmental problems in addition to providing massless exploration.
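The qualitative point of such modeling, namely that a small imported seed can eventually dominate the total launched mass once in-space replication compounds, can be illustrated with a toy growth model. All parameters below (seed mass, closure fraction, replication time) are hypothetical placeholders, not values from Metzger et al.

# Toy bootstrapping model: industry mass doubles every replication period,
# and only the fraction not yet produced in situ must be launched from Earth.

seed_mass_t = 10.0        # tonnes of equipment launched initially (hypothetical)
closure = 0.9             # fraction of new machine mass made in situ (hypothetical)
replication_years = 4     # time for the industry to reproduce its own mass (hypothetical)

mass_t = seed_mass_t
launched_t = seed_mass_t
for year in range(0, 41, replication_years):
    print(f"year {year:2d}: in-space industry ~{mass_t:8.0f} t, "
          f"launched from Earth ~{launched_t:6.0f} t")
    new_mass = mass_t                        # one replication per step
    launched_t += new_mass * (1 - closure)   # only the non-closed fraction is imported
    mass_t += new_mass

With these placeholder numbers the in-space industry grows about a thousandfold in 40 years while the cumulative launched mass grows only about a hundredfold, which is the general shape of the argument for bootstrapping.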

New York University artificial DNA tile motifs

In 2011, a team of scientists at New York University created a structure called 'BTX' (bent triple helix) based around three double helix molecules, each made from a short strand of DNA. Treating each group of three double-helices as a code letter, they can (in principle) build up self-replicating structures that encode large quantities of information.

Self-replication of magnetic polymers

In 2001 Jarle Breivik at the University of Oslo created a system of magnetic building blocks which, in response to temperature fluctuations, spontaneously form self-replicating polymers.

Self-replication of neural circuits

In 1968 Zellig Harris wrote that "the metalanguage is in the language," suggesting that self-replication is part of language. In 1977 Niklaus Wirth formalized this proposition by publishing a self-replicating deterministic context-free grammar. Adding probabilities to it, Bertrand du Castel published a self-replicating stochastic grammar in 2015 and presented a mapping of that grammar to neural networks, thereby providing a model for a self-replicating neural circuit.
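A compact way to see how a formal system can contain its own description, in the spirit of a self-replicating grammar, is a program quine: a program whose output is exactly its own source. The standard two-line Python quine below is offered only as an analogy; it is not Wirth's grammar or du Castel's stochastic version.

# A classic quine: running the two code lines prints those two lines exactly.
# (The comment lines are not part of the quine itself.)
s = 's = %r\nprint(s %% s)'
print(s % s)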

Self-replicating spacecraft

The idea of an automated spacecraft capable of constructing copies of itself was first proposed in the scientific literature in 1974 by Michael A. Arbib, but the concept had appeared earlier in science fiction, such as the 1967 novel Berserker by Fred Saberhagen and the 1950 novelette trilogy The Voyage of the Space Beagle by A. E. van Vogt. The first quantitative engineering analysis of a self-replicating spacecraft was published in 1980 by Robert Freitas, in which the non-replicating Project Daedalus design was modified to include all subsystems necessary for self-replication. The design's strategy was to use the probe to deliver a "seed" factory with a mass of about 443 tons to a distant site, have the seed factory replicate many copies of itself there to increase its total manufacturing capacity, and then use the resulting automated industrial complex to construct more probes, each with a single seed factory on board.

Other references

  • A number of patents have been granted for self-replicating machine concepts. U.S. Patent 5,659,477 "Self reproducing fundamental fabricating machines (F-Units)" Inventor: Collins; Charles M. (Burke, Va.) (August 1997), U.S. Patent 5,764,518 " Self reproducing fundamental fabricating machine system" Inventor: Collins; Charles M. (Burke, Va.)(June 1998); and Collins' PCT patent WO 96/20453: "Method and system for self-replicating manufacturing stations" Inventors: Merkle; Ralph C. (Sunnyvale, Calif.), Parker; Eric G. (Wylie, Tex.), Skidmore; George D. (Plano, Tex.) (January 2003).
  • Macroscopic replicators are mentioned briefly in the fourth chapter of K. Eric Drexler's 1986 book Engines of Creation.
  • In 1995, Nick Szabo proposed a challenge to build a macroscale replicator from Lego robot kits and similar basic parts. Szabo wrote that this approach was easier than previous proposals for macroscale replicators, but successfully predicted that even this method would not lead to a macroscale replicator within ten years.
  • In 2004, Robert Freitas and Ralph Merkle published the first comprehensive review of the field of self-replication (from which much of the material in this article is derived, with permission of the authors), in their book Kinematic Self-Replicating Machines, which includes 3000+ literature references. This book included a new molecular assembler design, a primer on the mathematics of replication, and the first comprehensive analysis of the entire replicator design space.

Prospects for implementation

As the use of industrial automation has expanded over time, some factories have begun to approach a semblance of the self-sufficiency suggestive of self-replicating machines. However, such factories are unlikely to achieve "full closure" until the cost and flexibility of automated machinery come close to those of human labour and the local manufacture of spare parts and other components becomes more economical than transporting them from elsewhere. As Samuel Butler pointed out in Erewhon, replication of partially closed universal machine tool factories is already possible. Since safety is a primary goal of any legislative consideration of the regulation of such development, future development efforts may be limited to systems which lack at least one of control, matter, or energy closure. Fully capable machine replicators are most useful for developing resources in dangerous environments which are not easily reached by existing transportation systems (such as outer space).

An artificial replicator can be considered to be a form of artificial life. Depending on its design, it might be subject to evolution over an extended period of time. However, with robust error correction, and the possibility of external intervention, the common science fiction scenario of robotic life run amok will remain extremely unlikely for the foreseeable future.

 

Singularitarianism

From Wikipedia, the free encyclopedia

Singularitarianism is a movement defined by the belief that a technological singularity—the creation of superintelligence—will likely happen in the medium future, and that deliberate action ought to be taken to ensure that the singularity benefits humans.

Singularitarians are distinguished from other futurists who speculate on a technological singularity by their belief that the singularity is not only possible, but desirable if guided prudently. Accordingly, they might sometimes dedicate their lives to acting in ways they believe will contribute to its rapid yet safe realization.

Time magazine describes the worldview of Singularitarians by saying that "even though it sounds like science fiction, it isn't, no more than a weather forecast is science fiction. It's not a fringe idea; it's a serious hypothesis about the future of life on Earth. There's an intellectual gag reflex that kicks in anytime you try to swallow an idea that involves super-intelligent immortal cyborgs, but... while the Singularity appears to be, on the face of it, preposterous, it's an idea that rewards sober, careful evaluation".

Definition

The term "Singularitarian" was originally defined by Extropian thinker Mark Plus (Mark Potts) in 1991 to mean "one who believes the concept of a Singularity". This term has since been redefined to mean "Singularity activist" or "friend of the Singularity"; that is, one who acts so as to bring about the singularity.

Singularitarianism can also be thought of as an orientation or outlook that prefers the enhancement of human intelligence as a specific transhumanist goal, instead of focusing on specific technologies such as AI. There are also definitions that identify a singularitarian as an activist for, or a friend of, the concept of the singularity, that is, one who acts so as to bring a singularity about. Some sources describe it as a moral philosophy that advocates deliberate action to bring about and steer the development of a superintelligence, which will lead to a theoretical future point emerging during a time of accelerated change.

Inventor and futurist Ray Kurzweil, author of the 2005 book The Singularity Is Near: When Humans Transcend Biology, defines a Singularitarian as someone "who understands the Singularity and who has reflected on its implications for his or her own life" and estimates the singularity will occur around 2045.

History

In 1993, mathematician, computer scientist, and science fiction author Vernor Vinge hypothesized that the moment might come when technology will allow "creation of entities with greater than human intelligence" and used the term "the Singularity" to describe this moment. He suggested that the singularity may pose an existential risk for humanity, and that it could happen through one of four means:

  1. The development of computers that are "awake" and superhumanly intelligent.
  2. Large computer networks (and their associated users) may "wake up" as a superhumanly intelligent entity.
  3. Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent.
  4. Biological science may find ways to improve upon the natural human intellect.

Singularitarianism coalesced into a coherent ideology in 2000 when artificial intelligence (AI) researcher Eliezer Yudkowsky wrote The Singularitarian Principles, in which he stated that a Singularitarian believes that the singularity is a secular, non-mystical event which is possible and beneficial to the world, and which is worked towards by its adherents. Yudkowsky's conceptualization of the singularity offered a broad definition developed to be inclusive of various interpretations. Theorists such as Michael Anissimov have argued for a stricter definition, one that refers only to advocacy of the development of posthuman (greater-than-human) intelligence.

In June 2000 Yudkowsky, with the support of Internet entrepreneurs Brian Atkins and Sabine Atkins, founded the Machine Intelligence Research Institute to work towards the creation of self-improving Friendly AI. MIRI's writings argue for the idea that an AI with the ability to improve upon its own design (Seed AI) would rapidly lead to superintelligence. These Singularitarians believe that reaching the singularity swiftly and safely is the best possible way to minimize net existential risk.

Many people believe a technological singularity is possible without adopting Singularitarianism as a moral philosophy. Although the exact numbers are hard to quantify, Singularitarianism remains a small movement, which includes transhumanist philosopher Nick Bostrom. Inventor and futurist Ray Kurzweil, who predicts that the Singularity will occur circa 2045, greatly contributed to popularizing Singularitarianism with his 2005 book The Singularity Is Near: When Humans Transcend Biology.

What, then, is the Singularity? It's a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself. Understanding the Singularity will alter our perspective on the significance of our past and the ramifications for our future. To truly understand it inherently changes one's view of life in general and one's particular life. I regard someone who understands the Singularity and who has reflected on its implications for his or her own life as a "singularitarian."

With the support of NASA, Google and a broad range of technology forecasters and technocapitalists, the Singularity University opened in June 2009 at the NASA Research Park in Silicon Valley with the goal of preparing the next generation of leaders to address the challenges of accelerating change.

In July 2009, many prominent Singularitarians participated in a conference organized by the Association for the Advancement of Artificial Intelligence (AAAI) to discuss the potential impact of robots and computers, and of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to pose a threat or hazard (i.e., cybernetic revolt). They noted that some machines have acquired various forms of semi-autonomy, including the ability to find power sources on their own and to independently choose targets to attack with weapons. They warned that some computer viruses can evade elimination and have achieved "cockroach intelligence". They asserted that self-awareness as depicted in science fiction is probably unlikely, but that there are other potential hazards and pitfalls. Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous function. The President of the AAAI has commissioned a study of this issue.

Reception

There are several objections to Kurzweil's singularitarianism, including criticisms from optimists within the AI field. For instance, Pulitzer Prize-winning author Douglas Hofstadter has argued that Kurzweil's predicted achievement of human-level AI by 2045 is not viable. Even Gordon Moore, who is credited with introducing the Moore's law on which the notion of the singularity is predicated, has maintained that it will never occur. According to some observers, these criticisms do not diminish enthusiasm for the singularity because it has assumed a quasi-religious role as a response to the fear of death, allowing its adherents to enjoy the benefits of religion without its ontological burdens. Science journalist John Horgan provided more insight into this notion when he likened singularitarianism to a religion:

Let's face it. The singularity is a religious rather than a scientific vision. The science-fiction writer Ken MacLeod has dubbed it "the rapture for nerds," an allusion to the end-time, when Jesus whisks the faithful to heaven and leaves us sinners behind. Such yearning for transcendence, whether spiritual or technological, is all too understandable. Both as individuals and as a species, we face deadly serious problems, including terrorism, nuclear proliferation, overpopulation, poverty, famine, environmental degradation, climate change, resource depletion, and AIDS. Engineers and scientists should be helping us face the world's problems and find solutions to them, rather than indulging in escapist, pseudoscientific fantasies like the singularity.

Kurzweil rejects this categorization, stating that his predictions about the singularity are driven by data showing that increases in computational technology have been exponential in the past. He also stresses that critics who challenge his view mistakenly take an intuitive, linear view of technological advancement.

Politics of Europe

From Wikipedia, the free encyclopedia ...