
Friday, June 29, 2018

Molecular scale electronics

From Wikipedia, the free encyclopedia

Molecular scale electronics, also called single-molecule electronics, is a branch of nanotechnology that uses single molecules, or nanoscale collections of single molecules, as electronic components. Because single molecules constitute the smallest stable structures imaginable, this miniaturization is the ultimate goal for shrinking electrical circuits.

The field is often termed simply as "molecular electronics", but this term is also used to refer to the distantly related field of conductive polymers and organic electronics, which uses the properties of molecules to affect the bulk properties of a material. A nomenclature distinction has been suggested so that molecular materials for electronics refers to this latter field of bulk applications, while molecular scale electronics refers to the nanoscale single-molecule applications treated here.[1][2]

Fundamental concepts

Conventional electronics have traditionally been made from bulk materials. Ever since the invention of the integrated circuit in 1958, the performance and complexity of integrated circuits have undergone exponential growth, a trend named Moore's law, as the feature sizes of the embedded components have shrunk accordingly. As the structures shrink, the sensitivity to deviations increases. In a few technology generations, when the minimum feature size reaches 13 nm, the composition of the devices must be controlled to a precision of a few atoms[3] for the devices to work. With bulk methods growing increasingly demanding and costly as they near inherent limits, the idea was born that the components could instead be built up atom by atom in a chemistry lab (bottom up) rather than carved out of bulk material (top down). This is the idea behind molecular electronics, with the ultimate miniaturization being components contained in single molecules.

In single-molecule electronics, the bulk material is replaced by single molecules. Instead of forming structures by removing or applying material after a pattern scaffold, the atoms are put together in a chemistry lab. In this way, billions of billions of copies are made simultaneously (typically more than 10²⁰ molecules at once) while the composition of the molecules is controlled down to the last atom. The molecules used have properties that resemble traditional electronic components such as a wire, transistor or rectifier.

Single-molecule electronics is an emerging field, and entire electronic circuits consisting exclusively of molecular sized compounds are still very far from being realized. However, the unceasing demand for more computing power, along with the inherent limits of lithographic methods as of 2016, make the transition seem unavoidable. Currently, the focus is on discovering molecules with interesting properties and on finding ways to obtain reliable and reproducible contacts between the molecular components and the bulk material of the electrodes.

Theoretical basis

Molecular electronics operates in the quantum realm of distances less than 100 nanometers. Miniaturization down to single molecules brings the scale to a regime where quantum-mechanical effects are important. Whereas in conventional electronic components electrons can be filled in or drawn out more or less like a continuous flow of electric charge, in molecular electronics the transfer of a single electron alters the system significantly. For example, when an electron has been transferred from a source electrode to a molecule, the molecule becomes charged, which makes it far harder for the next electron to transfer (see also Coulomb blockade). The significant amount of energy due to this charging must be accounted for in calculations of the electronic properties of the setup, and it is highly sensitive to the distances to nearby conducting surfaces.

The theory of single-molecule devices is especially interesting since the system under consideration is an open quantum system in nonequilibrium (driven by voltage). In the low-bias-voltage regime, the nonequilibrium nature of the molecular junction can be ignored, and the current-voltage characteristics of the device can be calculated using the equilibrium electronic structure of the system. However, in stronger bias regimes a more sophisticated treatment is required, as there is no longer a variational principle. In the elastic tunneling case (where the passing electron does not exchange energy with the system), the formalism of Rolf Landauer can be used to calculate the transmission through the system as a function of bias voltage, and hence the current. In inelastic tunneling, an elegant formalism based on the non-equilibrium Green's functions of Leo Kadanoff and Gordon Baym (developed independently by Leonid Keldysh) was advanced by Ned Wingreen and Yigal Meir. This Meir-Wingreen formulation has been used to great success in the molecular electronics community to examine the more difficult and interesting cases where the transient electron exchanges energy with the molecular system (for example through electron-phonon coupling or electronic excitations).
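
As a minimal sketch of the elastic Landauer picture described above, the following computes the current through a single molecular level with a Breit-Wigner (Lorentzian) transmission function. All numerical values (level position, electrode couplings, temperature) are illustrative assumptions rather than data for any particular molecule.

```python
import numpy as np

G0 = 7.748e-5   # conductance quantum 2e^2/h, in siemens
kT = 0.025      # thermal energy at room temperature, in eV

def fermi(E, mu):
    """Fermi-Dirac occupation at energy E for chemical potential mu (eV)."""
    return 1.0 / (1.0 + np.exp((E - mu) / kT))

def transmission(E, eps0=0.3, gL=0.01, gR=0.01):
    """Breit-Wigner transmission through one molecular level at eps0 (eV),
    broadened by the couplings gL and gR (eV) to the two electrodes."""
    gamma = (gL + gR) / 2.0
    return gL * gR / ((E - eps0) ** 2 + gamma ** 2)

def current(V, n=4001):
    """Landauer current (A) at bias V: integrate T(E) over the bias window."""
    E = np.linspace(-1.5, 1.5, n)           # energy grid, eV
    muL, muR = +V / 2.0, -V / 2.0           # bias drops symmetrically
    integrand = transmission(E) * (fermi(E, muL) - fermi(E, muR))
    return G0 * np.sum(integrand) * (E[1] - E[0])

for V in (0.1, 0.3, 0.5, 0.8):
    print(f"V = {V:.1f} V  ->  I = {current(V):.3e} A")
```

With these assumed parameters, the current stays small until the bias window reaches the level near 0.3 eV and then rises sharply, echoing the near-binary switching discussed under Transistors below.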

Further, connecting single molecules reliably to a larger scale circuit has proven a great challenge, and constitutes a significant hindrance to commercialization.

Examples

A common feature of molecules used in molecular electronics is that their structures contain many alternating double and single bonds (see also Conjugated system). This is because such patterns delocalize the molecular orbitals, making it possible for electrons to move freely over the conjugated area.

Wires

This animation of a rotating carbon nanotube shows its 3D structure.

The sole purpose of molecular wires is to electrically connect different parts of a molecular electrical circuit. As the assembly of these wires and their connection to a macroscopic circuit is still not mastered, the focus of research in single-molecule electronics is primarily on functionalized molecules: molecular wires are characterized by containing no functional groups and hence are composed of plain repetitions of a conjugated building block. Among these are the carbon nanotubes, which are quite large compared to the other suggestions but have shown very promising electrical properties.

The main problem with the molecular wires is to obtain good electrical contact with the electrodes so that electrons can move freely in and out of the wire.

Transistors

Single-molecule transistors are fundamentally different from those known from bulk electronics. The gate in a conventional (field-effect) transistor determines the conductance between the source and drain electrodes by controlling the density of charge carriers between them, whereas the gate in a single-molecule transistor controls the possibility of a single electron jumping on and off the molecule by modifying the energy of the molecular orbitals. One effect of this difference is that the single-molecule transistor is almost binary: it is either on or off. This contrasts with its bulk counterparts, which have a quadratic response to gate voltage.

It is the quantization of charge into electrons that is responsible for the markedly different behavior compared to bulk electronics. Because of the size of a single molecule, the charging due to a single electron is significant and provides means to turn a transistor on or off (see Coulomb blockade). For this to work, the electronic orbitals on the transistor molecule cannot be too well integrated with the orbitals on the electrodes. If they are, an electron cannot be said to be located on the molecule or the electrodes and the molecule will function as a wire.
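
To make the charging argument concrete, here is a back-of-the-envelope sketch comparing the single-electron charging energy e²/2C with the thermal energy k_BT; the capacitance values are order-of-magnitude assumptions chosen for illustration.

```python
e  = 1.602e-19   # elementary charge, C
kB = 1.381e-23   # Boltzmann constant, J/K

def charging_energy_eV(C):
    """Coulomb charging energy e^2/(2C), converted to electronvolts."""
    return e ** 2 / (2 * C) / e

# Assumed order-of-magnitude capacitances for three island sizes.
for label, C in [("micron-scale island", 1e-15),
                 ("nanoparticle",        1e-18),
                 ("single molecule",     1e-19)]:
    Ec = charging_energy_eV(C)
    T_match = Ec * e / kB      # temperature at which kB*T equals Ec
    print(f"{label:20s} Ec = {Ec*1e3:8.2f} meV  (kB*T matches at ~{T_match:.0f} K)")
```

With these assumed values, only the molecular-scale island has a charging energy far above k_BT at room temperature, which is why Coulomb blockade can govern single-molecule transistors while larger structures require cryogenic cooling (see also the Commercialization section).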

A popular group of molecules that can work as the semiconducting channel material in a molecular transistor is the oligopolyphenylenevinylenes (OPVs), which work by the Coulomb blockade mechanism when placed between the source and drain electrodes in an appropriate way.[4] Fullerenes work by the same mechanism and have also been commonly used.

Semiconducting carbon nanotubes have also been demonstrated to work as channel material but although molecular, these molecules are sufficiently large to behave almost as bulk semiconductors.

The size of the molecules, and the low temperature at which the measurements are conducted, make the quantum-mechanical states well defined. Thus, it is being researched whether these quantum-mechanical properties can be used for more advanced purposes than simple transistors (e.g. spintronics).

Physicists at the University of Arizona, in collaboration with chemists from the University of Madrid, have designed a single-molecule transistor using a ring-shaped molecule similar to benzene. Physicists at Canada's National Institute for Nanotechnology have designed a single-molecule transistor using styrene. Both groups expect (the designs were experimentally unverified as of June 2005) their respective devices to function at room temperature, and to be controlled by a single electron.[5]

Rectifiers (diodes)

Hydrogen can be removed from individual tetraphenylporphyrin (H2TPP) molecules by applying excess voltage to the tip of a scanning tunneling microscope (STM, a); this removal alters the current-voltage (I-V) curves of TPP molecules, measured using the same STM tip, from diode-like (red curve in b) to resistor-like (green curve). Image (c) shows a row of TPP, H2TPP and TPP molecules. While scanning image (d), excess voltage was applied to H2TPP at the black dot, which instantly removed the hydrogen, as shown in the bottom part of (d) and in the re-scan image (e). Such manipulations can be used in single-molecule electronics.[6]

Molecular rectifiers are mimics of their bulk counterparts and have an asymmetric construction so that the molecule can accept electrons at one end but not at the other. The molecules have an electron donor (D) at one end and an electron acceptor (A) at the other. This way, the unstable state D⁺ – A⁻ will be more readily made than D⁻ – A⁺. The result is that an electric current can be drawn through the molecule if the electrons are added through the acceptor end, but less easily if the reverse is attempted.

Methods

One of the biggest problems with measuring single molecules is establishing reproducible electrical contact with only one molecule, and doing so without short-circuiting the electrodes. Because current photolithographic technology is unable to produce electrode gaps small enough to contact both ends of the molecules tested (on the order of nanometers), alternative strategies are applied.

Molecular gaps

One way to produce electrodes with a molecular-sized gap between them is to use break junctions, in which a thin electrode is stretched until it breaks. Another is electromigration, in which a current is led through a thin wire until it melts and the atoms migrate to produce the gap. Furthermore, the reach of conventional photolithography can be enhanced by chemically etching or depositing metal onto the electrodes.

Probably the easiest way to conduct measurements on several molecules is to use the tip of a scanning tunneling microscope (STM) to contact molecules adhered at the other end to a metal substrate.[7]

Anchoring

A popular way to anchor molecules to the electrodes is to make use of sulfur's high chemical affinity for gold. In these setups, the molecules are synthesized so that sulfur atoms are placed strategically to function as crocodile clips connecting the molecules to the gold electrodes. Though useful, the anchoring is non-specific and thus anchors the molecules randomly to all gold surfaces. Further, the contact resistance is highly dependent on the precise atomic geometry around the site of anchoring, which inherently compromises the reproducibility of the connection.

To circumvent the latter issue, experiments have shown that fullerenes could be a good candidate for use instead of sulfur, because the large conjugated π-system can electrically contact many more atoms at once than a single atom of sulfur.[8]

Fullerene nanoelectronics

Classical organic molecules, as used in polymers, are composed of both carbon and hydrogen (and sometimes additional elements such as nitrogen, chlorine or sulfur). They are obtained from petroleum and can often be synthesized in large amounts. Most of these molecules are insulating when their length exceeds a few nanometers. However, naturally occurring carbon is conducting, especially graphite recovered from coal or encountered otherwise. From a theoretical viewpoint, graphite is a semi-metal, a category in between metals and semiconductors. It has a layered structure, each sheet being one atom thick. Between the sheets, the interactions are weak enough to allow easy manual cleavage.

Tailoring the graphite sheet to obtain well-defined nanometer-sized objects remains a challenge. However, by the close of the twentieth century, chemists were exploring methods to fabricate extremely small graphitic objects that could be considered single molecules. After studying the interstellar conditions under which carbon is known to form clusters, Richard Smalley's group (Rice University, Texas) set up an experiment in which graphite was vaporized via laser irradiation. Mass spectrometry revealed that clusters containing specific "magic numbers" of atoms were stable, especially clusters of 60 atoms. Harry Kroto, an English chemist who assisted in the experiment, suggested a possible geometry for these clusters: atoms covalently bound with the exact symmetry of a soccer ball. Named buckminsterfullerenes, buckyballs, or C60, the clusters retained some properties of graphite, such as conductivity. These objects were rapidly envisioned as possible building blocks for molecular electronics.

Problems

Artifacts

When trying to measure the electronic traits of molecules, spurious phenomena can occur that are hard to distinguish from truly molecular behavior.[9] Before they were understood, such artifacts were mistakenly published as features pertaining to the molecules in question.

Applying a voltage drop on the order of volts across a nanometer-sized junction results in a very strong electric field. The field can cause metal atoms to migrate and eventually close the gap with a thin filament, which can be broken again when carrying a current. The two levels of conductance imitate molecular switching between a conductive and an insulating state of a molecule.

Another artifact arises when the electrodes undergo chemical reactions due to the high field strength in the gap. When the voltage bias is reversed, such reactions cause hysteresis in the measurements that can be misinterpreted as being of molecular origin.

A metallic grain between the electrodes can act as a single electron transistor by the mechanism described above, thus resembling the traits of a molecular transistor. This artifact is especially common with nanogaps produced by the electromigration method.

Commercialization

One of the biggest hindrances for single-molecule electronics to be commercially exploited is the lack of methods to connect a molecular sized circuit to bulk electrodes in a way that gives reproducible results. At the current state, the difficulty of connecting single molecules vastly outweighs any possible performance increase that could be gained from such shrinkage. The difficulties grow worse if the molecules are to have a certain spatial orientation and/or have multiple poles to connect.

Also problematic is that some measurements on single molecules are carried out at cryogenic temperatures (near absolute zero), which is very energy-consuming. This is done to reduce signal noise enough to measure the faint currents flowing through single molecules.

History and recent progress

Graphical representation of a rotaxane, useful as a molecular switch.

In their treatment of so-called donor-acceptor complexes in the 1940s, Robert Mulliken and Albert Szent-Györgyi advanced the concept of charge transfer in molecules. They subsequently further refined the study of both charge transfer and energy transfer in molecules. Likewise, a 1974 paper from Mark Ratner and Ari Aviram illustrated a theoretical molecular rectifier.[10] In 1988, Aviram described in detail a theoretical single-molecule field-effect transistor. Further concepts were proposed by Forrest Carter of the Naval Research Laboratory, including single-molecule logic gates. A wide range of ideas were presented, under his aegis, at a conference entitled Molecular Electronic Devices in 1988.[11] These were all theoretical constructs and not concrete devices. The direct measurement of the electronic traits of individual molecules awaited the development of methods for making molecular-scale electrical contacts. This was no easy task. Thus, the first experiment directly measuring the conductance of a single molecule was only reported in 1995, on a single C60 molecule, by C. Joachim and J. K. Gimzewski in their seminal Physical Review Letters paper, and later in 1997 by Mark Reed and co-workers on a few hundred molecules. Since then, this branch of the field has advanced rapidly. Likewise, as it has become possible to measure such properties directly, the theoretical predictions of the early workers have been substantially confirmed.

Recent progress in nanotechnology and nanoscience has facilitated both experimental and theoretical study of molecular electronics. Development of the scanning tunneling microscope (STM) and later the atomic force microscope (AFM) have greatly facilitated manipulating single-molecule electronics. Also, theoretical advances in molecular electronics have facilitated further understanding of non-adiabatic charge transfer events at electrode-electrolyte interfaces.[12][13]

The concept of molecular electronics was first published in 1974 when Aviram and Ratner suggested an organic molecule that could work as a rectifier.[14] Given its huge commercial and fundamental interest, much effort was put into proving its feasibility, and 16 years later, in 1990, the first demonstration of an intrinsic molecular rectifier was realized by Ashwell and coworkers for a thin film of molecules.

The first measurement of the conductance of a single molecule was realised in 1994 by C. Joachim and J. K. Gimzewski and published in 1995 (see the corresponding Phys. Rev. Lett. paper). This was the conclusion of 10 years of research started at IBM TJ Watson, using the scanning tunneling microscope tip apex to switch a single molecule, as already explored by A. Aviram, C. Joachim and M. Pomerantz at the end of the 1980s (see their seminal Chem. Phys. Lett. paper from this period). The trick was to use a UHV scanning tunneling microscope to allow the tip apex to gently touch the top of a single C60 molecule adsorbed on an Au(110) surface. A resistance of 55 MΩ was recorded, along with a linear low-voltage I-V curve. The contact was certified by recording the I-z current-distance characteristic, which allows measurement of the deformation of the C60 cage under contact. This first experiment was followed by the reported result using a mechanical break junction method to connect two gold electrodes to a sulfur-terminated molecular wire, by Mark Reed and James Tour in 1997.[15]

A single-molecule amplifier was implemented by C. Joachim and J. K. Gimzewski at IBM Zurich. This experiment, involving one C60 molecule, demonstrated that one such molecule can provide gain in a circuit via intramolecular quantum interference effects alone.

A collaboration of researchers at Hewlett-Packard (HP) and University of California, Los Angeles (UCLA), led by James Heath, Fraser Stoddart, R. Stanley Williams, and Philip Kuekes, has developed molecular electronics based on rotaxanes and catenanes.

Work is also occurring on the use of single-wall carbon nanotubes as field-effect transistors. Most of this work is being done by International Business Machines (IBM).

Some specific reports of a field-effect transistor based on molecular self-assembled monolayers were shown to be fraudulent in 2002 as part of the Schön scandal.[16]

Until recently entirely theoretical, the Aviram-Ratner model for a unimolecular rectifier has been confirmed unambiguously in experiments by a group led by Geoffrey J. Ashwell at Bangor University, UK.[17][18][19] Many rectifying molecules have so far been identified, and the number and efficiency of these systems are growing rapidly.

Supramolecular electronics is a new field involving electronics at a supramolecular level.

An important issue in molecular electronics is the determination of the resistance of a single molecule (both theoretical and experimental). For example, Bumm, et al. used STM to analyze a single molecular switch in a self-assembled monolayer to determine how conductive such a molecule can be.[20] Another problem faced by this field is the difficulty of performing direct characterization since imaging at the molecular scale is often difficult in many experimental devices.

Nanoelectronics

From Wikipedia, the free encyclopedia

Nanoelectronics refers to the use of nanotechnology in electronic components. The term covers a diverse set of devices and materials, with the common characteristic that they are so small that inter-atomic interactions and quantum mechanical properties need to be studied extensively. Some of the candidates include hybrid molecular/semiconductor electronics, one-dimensional nanotubes/nanowires (e.g. silicon nanowires or carbon nanotubes) and advanced molecular electronics. Recent silicon CMOS technology generations, such as the 22 nanometer node, are already within this regime. Nanoelectronics is sometimes considered a disruptive technology because present candidates are significantly different from traditional transistors.

Fundamental Concepts

In 1965 Gordon Moore observed that silicon transistors were undergoing a continual process of scaling downward, an observation which was later codified as Moore's law. Since his observation, minimum transistor feature sizes have decreased from 10 micrometers to the 28-22 nm range in 2011. The field of nanoelectronics aims to enable the continued realization of this law by using new methods and materials to build electronic devices with feature sizes on the nanoscale.

The volume of an object decreases as the third power of its linear dimensions, but the surface area only decreases as the second power. This somewhat subtle and unavoidable principle has huge ramifications. For example, the power of a drill (or any other machine) is proportional to its volume, while the friction of the drill's bearings and gears is proportional to their surface area. For a normal-sized drill, the power of the device is enough to handily overcome any friction. However, scaling its length down by a factor of 1000, for example, decreases its power by 1000³ (a factor of a billion) while reducing the friction by only 1000² (a factor of only a million). Proportionally it has 1000 times less power per unit friction than the original drill. If the original friction-to-power ratio was, say, 1%, that implies the smaller drill will have 10 times as much friction as power; the drill is useless.
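
A minimal numeric restatement of this scaling argument, using the 1% friction-to-power ratio assumed in the text:

```python
scale = 1000                    # shrink linear dimensions by a factor of 1000
power_factor    = scale ** 3    # power scales with volume: drops a billionfold
friction_factor = scale ** 2    # friction scales with area: drops a millionfold

ratio_growth = power_factor / friction_factor        # friction/power grows 1000x
original_friction_to_power = 0.01                    # assumed 1% for the big drill
new_ratio = original_friction_to_power * ratio_growth
print(f"friction/power grows {ratio_growth:.0f}x -> the small drill has "
      f"{new_ratio:.0f}x as much friction as power")
```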

For this reason, while super-miniature electronic integrated circuits are fully functional, the same technology cannot be used to make working mechanical devices beyond the scales where frictional forces start to exceed the available power. So even though you may see microphotographs of delicately etched silicon gears, such devices are currently little more than curiosities with limited real world applications, for example in moving mirrors and shutters.[1] Surface tension increases in much the same way, thus magnifying the tendency for very small objects to stick together. This could possibly make any kind of "micro factory" impractical: even if robotic arms and hands could be scaled down, anything they pick up will tend to be impossible to put down.

The above being said, molecular evolution has resulted in working cilia, flagella, muscle fibers and rotary motors in aqueous environments, all on the nanoscale. These machines exploit the increased frictional forces found at the micro or nanoscale. Unlike a paddle or a propeller, which depends on normal frictional forces (the frictional forces perpendicular to the surface) to achieve propulsion, cilia develop motion from the exaggerated drag or laminar forces (frictional forces parallel to the surface) present at micro and nano dimensions. To build meaningful "machines" at the nanoscale, the relevant forces need to be considered. We are faced with the development and design of intrinsically pertinent machines rather than simple reproductions of macroscopic ones.

All scaling issues therefore need to be assessed thoroughly when evaluating nanotechnology for practical applications.

Approaches to Nanoelectronics

Nanofabrication

Nanofabrication techniques can produce devices whose operation depends on individual charges: for example, single-electron transistors, which involve transistor operation based on a single electron. Nanoelectromechanical systems also fall under this category. Nanofabrication can be used to construct ultradense parallel arrays of nanowires, as an alternative to synthesizing nanowires individually.

Nanomaterials Electronics

Besides being small and allowing more transistors to be packed into a single chip, the uniform and symmetrical structure of nanowires and/or nanotubes allows a higher electron mobility (faster electron movement in the material), a higher dielectric constant (allowing operation at higher frequencies), and a symmetrical electron/hole characteristic.[4]

Also, nanoparticles can be used as quantum dots.

Molecular Electronics

Single molecule devices are another possibility. These schemes would make heavy use of molecular self-assembly, designing the device components to construct a larger structure or even a complete system on their own. This can be very useful for reconfigurable computing, and may even completely replace present FPGA technology.
Molecular electronics[5] is a new technology which is still in its infancy, but it also brings hope for truly atomic-scale electronic systems in the future. One of the more promising applications of molecular electronics was proposed by IBM researcher Ari Aviram and theoretical chemist Mark Ratner in their 1974 and 1988 papers Molecules for Memory, Logic and Amplification (see Unimolecular rectifier).[6][7]

This is one of many possible ways in which a molecular level diode / transistor might be synthesized by organic chemistry. A model system was proposed with a spiro carbon structure giving a molecular diode about half a nanometre across which could be connected by polythiophene molecular wires. Theoretical calculations showed the design to be sound in principle and there is still hope that such a system can be made to work.

Other Approaches

Nanoionics studies the transport of ions rather than electrons in nanoscale systems.

Nanophotonics studies the behavior of light on the nanoscale, and has the goal of developing devices that take advantage of this behavior.

Nanoelectronic Devices

Current high-technology production processes are based on traditional top down strategies, where nanotechnology has already been introduced silently. The critical length scale of integrated circuits is already at the nanoscale (50 nm and below) regarding the gate length of transistors in CPUs or DRAM devices.

Computers

Simulation result for the formation of an inversion channel (electron density) and attainment of threshold voltage (IV) in a nanowire MOSFET. Note that the threshold voltage for this device lies around 0.45 V.

Nanoelectronics holds the promise of making computer processors more powerful than are possible with conventional semiconductor fabrication techniques. A number of approaches are currently being researched, including new forms of nanolithography, as well as the use of nanomaterials such as nanowires or small molecules in place of traditional CMOS components. Field effect transistors have been made using both semiconducting carbon nanotubes[8] and with heterostructured semiconductor nanowires (SiNWs).[9]

In 1999, the CMOS transistor developed at the Laboratory for Electronics and Information Technology in Grenoble, France, tested the limits of the principles of the MOSFET transistor with a diameter of 18 nm (approximately 70 atoms placed side by side). This was almost one tenth the size of the smallest industrial transistor of the time (130 nm in 2003, 90 nm in 2004, 65 nm in 2005 and 45 nm in 2007). It enabled the theoretical integration of seven billion junctions on a €1 coin. The 1999 CMOS transistor was not simply a research experiment to study how CMOS technology functions, but rather a demonstration of how this technology functions as fabrication approaches the molecular scale. At present it would be impossible to master the coordinated assembly of a large number of these transistors on a circuit, and impossible to create this at an industrial level.[10]

Memory Storage

Electronic memory designs in the past have largely relied on the formation of transistors. However, research into crossbar switch based electronics has offered an alternative using reconfigurable interconnections between vertical and horizontal wiring arrays to create ultra-high-density memories. Two leaders in this area are Nantero, which has developed a carbon nanotube based crossbar memory called Nano-RAM, and Hewlett-Packard, which has proposed the use of memristor material as a future replacement of Flash memory.

An example of such novel devices is based on spintronics. The dependence of the resistance of a material (due to the spin of the electrons) on an external field is called magnetoresistance. This effect can be significantly amplified (GMR - Giant Magneto-Resistance) for nanosized objects, for example when two ferromagnetic layers are separated by a nonmagnetic layer several nanometers thick (e.g. Co-Cu-Co). The GMR effect has led to a strong increase in the data storage density of hard disks and made the gigabyte range possible. The so-called tunneling magnetoresistance (TMR) is very similar to GMR and is based on the spin-dependent tunneling of electrons through adjacent ferromagnetic layers. Both GMR and TMR effects can be used to create a non-volatile main memory for computers, such as the so-called magnetic random access memory (MRAM).

Novel Optoelectronic Devices

In modern communication technology, traditional analog electrical devices are increasingly replaced by optical or optoelectronic devices due to their enormous bandwidth and capacity. Two promising examples are photonic crystals and quantum dots. Photonic crystals are materials with a periodic variation in the refractive index, with a lattice constant that is half the wavelength of the light used. They offer a selectable band gap for the propagation of a certain wavelength; thus they resemble a semiconductor, but for light or photons instead of electrons. Quantum dots are nanoscaled objects which can be used, among many other things, for the construction of lasers. The advantage of a quantum dot laser over the traditional semiconductor laser is that the emitted wavelength depends on the diameter of the dot. Quantum dot lasers are cheaper and offer a higher beam quality than conventional laser diodes.
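
As a rough sketch of why the emission colour tracks the dot diameter, the following uses the simple "particle in a sphere" (Brus-type) estimate; the CdSe-like material parameters are ballpark assumptions, and real dots deviate from this idealized model.

```python
import math

hbar, e, m0 = 1.055e-34, 1.602e-19, 9.11e-31
four_pi_eps0 = 1.113e-10        # 4*pi*eps_0, F/m
Eg = 1.74 * e                   # assumed bulk band gap, J (CdSe-like)
me, mh, eps_r = 0.13 * m0, 0.45 * m0, 10.6   # assumed masses and permittivity

def emission_wavelength_nm(d_nm):
    """Brus-type estimate: band gap + quantum confinement - Coulomb term."""
    R = d_nm * 1e-9 / 2
    confinement = hbar**2 * math.pi**2 / (2 * R**2) * (1/me + 1/mh)
    coulomb = 1.8 * e**2 / (four_pi_eps0 * eps_r * R)
    E = Eg + confinement - coulomb               # photon energy, J
    return 1e9 * 6.626e-34 * 3.0e8 / E           # lambda = hc/E, in nm

for d in (3, 4, 5, 6, 8):
    print(f"diameter {d} nm -> emission near {emission_wavelength_nm(d):.0f} nm")
```

The trend is the point: smaller dots emit bluer light and larger dots redder, which is what allows the laser wavelength to be selected by dot size.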

Displays

The production of displays with low energy consumption might be accomplished using carbon nanotubes (CNT) and/or Silicon nanowires. Such nanostructures are electrically conductive and due to their small diameter of several nanometers, they can be used as field emitters with extremely high efficiency for field emission displays (FED). The principle of operation resembles that of the cathode ray tube, but on a much smaller length scale.

Quantum Computers

Entirely new approaches for computing exploit the laws of quantum mechanics for novel quantum computers, which enable the use of fast quantum algorithms. A quantum computer stores information in quantum bits ("qubits"), which can exist in superpositions of states, allowing several computations to be performed at the same time. This facility may improve the performance of older systems.

Radios

Nanoradios have been developed structured around carbon nanotubes.[11]

Energy Production

Research is ongoing to use nanowires and other nanostructured materials in the hope of creating cheaper and more efficient solar cells than are possible with conventional planar silicon solar cells.[12] It is believed that the development of more efficient solar energy conversion would have a great effect on satisfying global energy needs.

There is also research into energy production for devices that would operate in vivo, called bio-nano generators. A bio-nano generator is a nanoscale electrochemical device, like a fuel cell or galvanic cell, but drawing power from blood glucose in a living body, much the same as how the body generates energy from food. To achieve the effect, an enzyme is used that is capable of stripping glucose of its electrons, freeing them for use in electrical devices. The average person's body could, theoretically, generate 100 watts of electricity (about 2000 food calories per day) using a bio-nano generator.[13] However, this estimate is only true if all food were converted to electricity, and since the human body needs some energy consistently, the power that could actually be generated is likely much lower. The electricity generated by such a device could power devices embedded in the body (such as pacemakers), or sugar-fed nanorobots. Much of the research done on bio-nano generators is still experimental, with Panasonic's Nanotechnology Research Laboratory among those at the forefront.
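
A quick sanity check of the 100-watt figure quoted above, converting a 2000 kcal/day intake entirely to electrical power:

```python
kcal_per_day = 2000                   # typical daily food intake from the text
joules_per_day = kcal_per_day * 4184  # 1 food calorie (kcal) = 4184 J
seconds_per_day = 24 * 60 * 60
print(f"{joules_per_day / seconds_per_day:.0f} W")   # ~97 W, i.e. roughly 100 W
```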

Medical Diagnostics

There is great interest in constructing nanoelectronic devices[14][15][16] that could detect the concentrations of biomolecules in real time for use as medical diagnostics,[17] thus falling into the category of nanomedicine.[18] A parallel line of research seeks to create nanoelectronic devices which could interact with single cells for use in basic biological research.[19] These devices are called nanosensors. Such miniaturization of nanoelectronics towards in vivo proteomic sensing should enable new approaches for health monitoring, surveillance, and defense technology.

DNA computing

From Wikipedia, the free encyclopedia
DNA computing is a branch of computing which uses DNA, biochemistry, and molecular biology hardware, instead of the traditional silicon-based computer technologies. Research and development in this area concerns theory, experiments, and applications of DNA computing. The term "molectronics" has sometimes been used, but this term had already been used for an earlier technology, a then-unsuccessful rival of the first integrated circuits;[1] this term has also been used more generally, for molecular-scale electronic technology.

History

Leonard Adleman, the inventor of DNA computing

This field was initially developed by Leonard Adleman of the University of Southern California, in 1994.[3] Adleman demonstrated a proof-of-concept use of DNA as a form of computation which solved the seven-point Hamiltonian path problem. Since the initial Adleman experiments, advances have been made and various Turing machines have been proven to be constructible.

While the initial interest was in using this novel approach to tackle NP-hard problems, it was soon realized that it may not be best suited for this type of computation, and several proposals have been made to find a "killer application" for the approach. In 1997, computer scientist Mitsunori Ogihara, working with biologist Animesh Ray, suggested that one such application could be the evaluation of Boolean circuits, and described an implementation.

In 2002, researchers from the Weizmann Institute of Science in Rehovot, Israel, unveiled a programmable molecular computing machine composed of enzymes and DNA molecules instead of silicon microchips.[8] On April 28, 2004, Ehud Shapiro, Yaakov Benenson, Binyamin Gil, Uri Ben-Dor, and Rivka Adar at the Weizmann Institute announced in the journal Nature that they had constructed a DNA computer coupled with an input and output module which would theoretically be capable of diagnosing cancerous activity within a cell, and releasing an anti-cancer drug upon diagnosis.[9]

In January 2013, researchers were able to store a JPEG photograph, a set of Shakespearean sonnets, and an audio file of Martin Luther King, Jr.'s speech I Have a Dream on DNA digital data storage.[10]

In March 2013, researchers created a transcriptor (a biological transistor).[11]

In August 2016, researchers used the CRISPR gene-editing system to insert a GIF of a galloping horse and rider into the DNA of living bacteria.[12]

Idea

The organisation and complexity of all living beings is based on a coding system built from the four nucleobases of the DNA molecule. Because of this, DNA is well suited as a medium for data processing.[13] According to various calculations, a DNA computer with one liter of fluid containing six grams of DNA could potentially have a memory capacity of 3072 exabytes. The theoretical maximum data transfer speed would also be enormous due to the massive parallelism of the calculations. Therefore, about 1000 petaFLOPS could be reached, while today's most powerful computers do not go above a few dozen (99 petaFLOPS being the current record).[citation needed]
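
A rough check of the quoted capacity figure, counting two bits per nucleotide; the ~330 g/mol average nucleotide mass is an approximation.

```python
avogadro = 6.022e23
grams, g_per_mol = 6.0, 330.0                 # assumed average nucleotide mass
nucleotides = grams / g_per_mol * avogadro
bits = 2 * nucleotides                        # 4 bases -> 2 bits per nucleotide
exabytes = bits / 8 / 1e18
print(f"~{exabytes:.0f} EB")                  # same order as the quoted 3072 EB
```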

Pros and cons

The slow processing speed of a DNA computer (its response time is measured in minutes, hours or days, rather than milliseconds) is compensated for by its potential to perform a massive number of parallel computations. This allows the system to take a similar amount of time for a complex calculation as for a simple one, because millions or billions of molecules interact with each other simultaneously. However, it is much harder to analyze the answers given by a DNA computer than by a digital one.

Examples/Prototypes

In 1994 Leonard Adleman presented the first prototype of a DNA computer. The TT-100 was a test tube filled with 100 microliters of a DNA solution. With it he managed to solve, for example, an instance of the directed Hamiltonian path problem.[14]

In another experiment, a simple version of the "travelling salesman problem" was solved. For this purpose, different DNA fragments were created, each one of them representing a city that had to be visited. Every one of these fragments is capable of linking with the other fragments created. These DNA fragments were produced and mixed in a test tube. Within seconds, the small fragments form bigger ones, representing the different travel routes. Through a chemical reaction (lasting a few days), the DNA fragments representing the longer routes were eliminated. What remains represents the solution to the problem. However, current technical limitations prevent evaluation of the results, so the experiment is not suitable for application, but it is nevertheless a proof of concept.
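
The generate-and-filter strategy behind these experiments can be mimicked in ordinary software. The sketch below enumerates every candidate route ("mixing the fragments") and then discards the ones whose consecutive legs are not edges of the graph (the analogue of the chemical elimination step). The four-city graph is an invented example, not Adleman's original instance.

```python
from itertools import permutations

edges = {("A", "B"), ("B", "C"), ("C", "D"), ("A", "C"), ("B", "D")}
cities = ["A", "B", "C", "D"]

def is_path(route):
    """Keep a candidate only if every consecutive pair is a real edge --
    the analogue of eliminating DNA strands that failed to ligate."""
    return all((a, b) in edges for a, b in zip(route, route[1:]))

# "Mix the fragments": generate all orderings, i.e. all candidate routes.
candidates = permutations(cities)
# "Filter": retain candidates that visit every city along existing edges.
solutions = [r for r in candidates if is_path(r)]
print(solutions)   # each survivor is a directed Hamiltonian path
```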

Combinatorial problems

The first results on these problems were obtained by Leonard Adleman (NASA JPL).

Tic-tac-toe game

In 2002, J. Macdonald, D. Stefanovic and M. Stojanovic created a DNA computer able to play tic-tac-toe against a human player.[15] The calculator consists of nine bins corresponding to the nine squares of the game. Each bin contains a substrate and various combinations of DNA enzymes. The substrate itself is composed of a DNA strand onto which a fluorescent chemical group was grafted at one end and a repressor group at the other. Fluorescence is only activated if the molecules of the substrate are cut in half. The DNA enzymes simulate logical functions. For example, such a DNA enzyme will unfold only if two specific types of DNA strand are introduced, reproducing the logic function AND.

By default, the computer is supposed to play first in the central square. The human player then has, as a starter, eight different types of DNA strands assigned to the eight boxes that may be played. To indicate that box number i is being ticked, the human player pours into all bins the strands corresponding to input #i. These strands bind to certain DNA enzymes present in the bins, resulting, in one of these bins, in the deformation of a DNA enzyme, which binds to the substrate and cuts it. The corresponding bin becomes fluorescent, indicating which box is being played by the DNA computer. The various DNA enzymes are divided among the bins in such a way as to ensure the victory of the DNA computer against the human player.

Capabilities

DNA computing is a form of parallel computing in that it takes advantage of the many different molecules of DNA to try many different possibilities at once.[16] For certain specialized problems, DNA computers are faster and smaller than any other computer built so far. Furthermore, particular mathematical computations have been demonstrated to work on a DNA computer. As an example, DNA molecules have been utilized to tackle the assignment problem.[17]

Jian-Jun Shu and colleagues built a DNA GPS[18] system and also conducted an experiment showing that magnetic fields can enhance charge transport through DNA[19] (or protein), which may allow organisms to sense magnetic fields.

Aran Nayebi[20] has provided a general implementation of Strassen's matrix multiplication algorithm on a DNA computer, although there are problems with scaling. In addition, Caltech researchers have created a circuit made from 130 unique DNA strands, which is able to calculate the square root of numbers up to 15.[21] Recently, Salehi et al. showed that with a new coding referred to as "fractional coding", chemical reactions in general, and DNA reactions in particular, can compute polynomials. In fractional coding, two DNA molecules are used to represent each variable.[22]

DNA computing does not provide any new capabilities from the standpoint of computability theory, the study of which problems are computationally solvable using different models of computation. For example, if the space required for the solution of a problem grows exponentially with the size of the problem (EXPSPACE problems) on von Neumann machines, it still grows exponentially with the size of the problem on DNA machines. For very large EXPSPACE problems, the amount of DNA required is too large to be practical.

Methods

There are multiple methods for building a computing device based on DNA, each with its own advantages and disadvantages. Most of these build the basic logic gates (AND, OR, NOT) associated with digital logic from a DNA basis. Some of the different bases include DNAzymes, deoxyoligonucleotides, enzymes, and toehold exchange.

DNAzymes

Catalytic DNA (deoxyribozyme or DNAzyme) catalyzes a reaction when interacting with the appropriate input, such as a matching oligonucleotide. These DNAzymes are used to build logic gates analogous to digital logic in silicon; however, DNAzymes are limited to 1-, 2-, and 3-input gates, with no current implementation for evaluating statements in series.

The DNAzyme logic gate changes its structure when it binds to a matching oligonucleotide and the fluorogenic substrate it is bonded to is cleaved free. While other materials can be used, most models use a fluorescence-based substrate because it is very easy to detect, even at the single molecule limit.[23] The amount of fluorescence can then be measured to tell whether or not a reaction took place. The DNAzyme that changes is then “used,” and cannot initiate any more reactions. Because of this, these reactions take place in a device such as a continuous stirred-tank reactor, where old product is removed and new molecules added.

Two commonly used DNAzymes are named E6 and 8-17. These are popular because they allow cleaving of a substrate in any arbitrary location.[24] Stojanovic and MacDonald have used E6 DNAzymes to build the MAYA I[25] and MAYA II[26] machines, respectively; Stojanovic has also demonstrated logic gates using the 8-17 DNAzyme.[27] While these DNAzymes have been demonstrated to be useful for constructing logic gates, they are limited by the need for a metal cofactor to function, such as Zn²⁺ or Mn²⁺, and thus are not useful in vivo.[23][28]

A design called a stem loop, consisting of a single strand of DNA which has a loop at one end, is a dynamic structure that opens and closes when a piece of DNA bonds to the loop part. This effect has been exploited to create several logic gates. These logic gates have been used to create the computers MAYA I and MAYA II, which can play tic-tac-toe to some extent.[29]

Enzymes

Enzyme based DNA computers are usually of the form of a simple Turing machine; there is analogous hardware, in the form of an enzyme, and software, in the form of DNA.[30]

Benenson, Shapiro and colleagues have demonstrated a DNA computer using the FokI enzyme[31] and expanded on their work by going on to show automata that diagnose and react to prostate cancer: under-expression of the genes PPAP2B and GSTP1 and over-expression of PIM1 and HPN.[9] Their automata evaluated the expression of each gene, one gene at a time, and on positive diagnosis released a single-stranded DNA molecule (ssDNA) that is an antisense for MDM2. MDM2 is a repressor of protein 53, which itself is a tumor suppressor.[32] On negative diagnosis, it was decided to release a suppressor of the positive-diagnosis drug instead of doing nothing. A limitation of this implementation is that two separate automata are required, one to administer each drug. The entire process of evaluation until drug release took around an hour to complete. This method also requires transition molecules as well as the FokI enzyme to be present. The requirement for the FokI enzyme limits application in vivo, at least for use in "cells of higher organisms".[33] It should also be pointed out that the 'software' molecules can be reused in this case.

Toehold exchange

DNA computers have also been constructed using the concept of toehold exchange. In this system, an input DNA strand binds to a sticky end, or toehold, on another DNA molecule, which allows it to displace another strand segment from the molecule. This allows the creation of modular logic components such as AND, OR, and NOT gates and signal amplifiers, which can be linked into arbitrarily large computers. This class of DNA computers does not require enzymes or any chemical capability of the DNA.[34]
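
An abstract sketch of how toehold-mediated strand displacement can yield an AND gate: the output strand is released only after input 1 has displaced the first protecting strand (exposing a new toehold) and input 2 has then displaced the second. The strand names are invented for illustration; this models the logic only, not the underlying chemistry or kinetics.

```python
def and_gate(input1_present: bool, input2_present: bool) -> bool:
    # Gate complex: the output strand is hidden under two protecting strands.
    protectors = ["protector-1", "protector-2"]
    if input1_present:
        protectors.remove("protector-1")   # input 1 binds the exposed toehold
    if input2_present and "protector-1" not in protectors:
        protectors.remove("protector-2")   # toehold for input 2 is now exposed
    return not protectors                  # output is free only if both stripped

for a in (False, True):
    for b in (False, True):
        print(a, b, "->", and_gate(a, b))  # True only for (True, True)
```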

Algorithmic self-assembly

DNA arrays that display a representation of the Sierpinski gasket on their surfaces. Image from Rothemund et al., 2004.[35]

DNA nanotechnology has been applied to the related field of DNA computing. DNA tiles can be designed to contain multiple sticky ends with sequences chosen so that they act as Wang tiles. A DX array has been demonstrated whose assembly encodes an XOR operation; this allows the DNA array to implement a cellular automaton which generates a fractal called the Sierpinski gasket. This shows that computation can be incorporated into the assembly of DNA arrays, increasing its scope beyond simple periodic arrays.[35]
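
The computation performed by the XOR-encoding tile assembly can be reproduced as an elementary cellular automaton (rule 90), where each new cell is the XOR of its two neighbours in the row above; the sketch below prints the resulting Sierpinski gasket. The grid size is arbitrary.

```python
width, rows = 33, 16
row = [0] * width
row[width // 2] = 1                      # single seed "tile" in the first row

for _ in range(rows):
    print("".join("#" if c else "." for c in row))
    # Each cell's successor is the XOR of its left and right neighbours.
    row = [row[i - 1] ^ row[(i + 1) % width] for i in range(width)]
```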

Alternative technologies

A partnership between IBM and Caltech was established in 2009, aimed at the production of "DNA chips".[36] A Caltech group is working on the manufacture of these nucleic-acid-based integrated circuits. One of these chips can compute whole square roots.[37] A compiler has been written[38] in Perl.

Thursday, June 28, 2018

Mathematical and theoretical biology

From Wikipedia, the free encyclopedia

Mathematical and theoretical biology is a branch of biology which employs theoretical analysis, mathematical models and abstractions of living organisms to investigate the principles that govern the structure, development and behavior of biological systems, as opposed to experimental biology, which deals with the conduct of experiments to prove and validate scientific theories.[1] The field is sometimes called mathematical biology or biomathematics to stress the mathematical side, or theoretical biology to stress the biological side.[2] Theoretical biology focuses more on the development of theoretical principles for biology, while mathematical biology focuses on the use of mathematical tools to study biological systems, even though the two terms are sometimes interchanged.

Mathematical biology aims at the mathematical representation and modeling of biological processes, using techniques and tools of applied mathematics. It has both theoretical and practical applications in biological, biomedical and biotechnology research. Describing systems in a quantitative manner means their behavior can be better simulated, and hence properties can be predicted that might not be evident to the experimenter. This requires precise mathematical models.

Mathematical biology employs many components of mathematics,[5] and has contributed to the development of new techniques.

History

Early history

Mathematics has been applied to biology since the 19th century.

Fritz Müller described the evolutionary benefits of what is now called Müllerian mimicry in 1879, in an account notable for being the first use of a mathematical argument in evolutionary ecology to show how powerful the effect of natural selection would be (unless one counts Malthus's earlier discussion of the effects of population growth that influenced Charles Darwin: Malthus argued that growth would be "geometric" while resources, the environment's carrying capacity, could only grow arithmetically).[6]

One founding text is considered to be On Growth and Form (1917) by D'Arcy Thompson,[7] and other early pioneers include Ronald Fisher, Hans Leo Przibram, Nicolas Rashevsky and Vito Volterra.[8]

Recent growth

Interest in the field has grown rapidly from the 1960s onwards. Some reasons for this include:
  • The rapid growth of data-rich information sets, due to the genomics revolution, which are difficult to understand without the use of analytical tools
  • Recent development of mathematical tools such as chaos theory to help understand complex, non-linear mechanisms in biology
  • An increase in computing power, which facilitates calculations and simulations not previously possible
  • An increasing interest in in silico experimentation due to ethical considerations, risk, unreliability and other complications involved in human and animal research

Areas of research

Several areas of specialized research in mathematical and theoretical biology,[9][10][11][12][13] as well as external links to related projects at various universities, are concisely presented in the following subsections, together with a large number of validating references from a list of several thousand published authors contributing to this field. Many of the included examples are characterised by highly complex, nonlinear, and supercomplex mechanisms, as it is increasingly recognised that the result of such interactions may only be understood through a combination of mathematical, logical, physical/chemical, molecular and computational models.

Evolutionary biology

Ecology and evolutionary biology have traditionally been the dominant fields of mathematical biology.

Evolutionary biology has been the subject of extensive mathematical theorizing. The traditional approach in this area, which includes complications from genetics, is population genetics. Most population geneticists consider the appearance of new alleles by mutation, the appearance of new genotypes by recombination, and changes in the frequencies of existing alleles and genotypes at a small number of gene loci. When infinitesimal effects at a large number of gene loci are considered, together with the assumption of linkage equilibrium or quasi-linkage equilibrium, one derives quantitative genetics. Ronald Fisher made fundamental advances in statistics, such as analysis of variance, via his work on quantitative genetics. Another important branch of population genetics that led to the extensive development of coalescent theory is phylogenetics. Phylogenetics is an area that deals with the reconstruction and analysis of phylogenetic (evolutionary) trees and networks based on inherited characteristics.[14] Traditional population genetic models deal with alleles and genotypes, and are frequently stochastic.

Many population genetics models assume that population sizes are constant. Variable population sizes, often in the absence of genetic variation, are treated by the field of population dynamics. Work in this area dates back to the 19th century, and even as far as 1798, when Thomas Malthus formulated the first principle of population dynamics, which later became known as the Malthusian growth model. The Lotka–Volterra predator-prey equations are another famous example. Population dynamics overlaps with another active area of research in mathematical biology: mathematical epidemiology, the study of infectious disease affecting populations. Various models of the spread of infections have been proposed and analyzed, and provide important results that may be applied to health policy decisions.
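
As a minimal sketch of the kind of model this field studies, the following integrates the Lotka–Volterra predator-prey equations with a simple forward-Euler step; the rate constants and initial populations are illustrative assumptions.

```python
alpha, beta, delta, gamma = 1.1, 0.4, 0.1, 0.4   # assumed interaction rates
x, y = 10.0, 5.0                                 # initial prey and predators
dt = 0.001                                       # small Euler time step

for step in range(50001):
    if step % 10000 == 0:
        print(f"t={step*dt:5.1f}  prey={x:7.2f}  predators={y:7.2f}")
    dx = alpha * x - beta * x * y       # prey reproduce and get eaten
    dy = delta * x * y - gamma * y      # predators grow by predation, die off
    x, y = x + dx * dt, y + dy * dt
```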

In evolutionary game theory, developed first by John Maynard Smith and George R. Price, selection acts directly on inherited phenotypes, without genetic complications. This approach has been mathematically refined to produce the field of adaptive dynamics.

Computer models and automata theory

A monograph on this topic summarizes an extensive amount of published research in this area up to 1986,[15][16][17] including subsections in the following areas: computer modeling in biology and medicine, arterial system models, neuron models, biochemical and oscillation networks, quantum automata, quantum computers in molecular biology and genetics,[18] cancer modelling,[19] neural nets, genetic networks, abstract categories in relational biology,[20] metabolic-replication systems, category theory[21] applications in biology and medicine,[22] automata theory, cellular automata,[23] tessellation models[24][25] and complete self-reproduction, chaotic systems in organisms, relational biology and organismic theories.[26][27]

Modeling cell and molecular biology

This area has received a boost due to the growing importance of molecular biology.[12]
  • Mechanics of biological tissues[28]
  • Theoretical enzymology and enzyme kinetics
  • Cancer modelling and simulation[29][30]
  • Modelling the movement of interacting cell populations[31]
  • Mathematical modelling of scar tissue formation[32]
  • Mathematical modelling of intracellular dynamics[33][34]
  • Mathematical modelling of the cell cycle[35]
Modelling physiological systems

Molecular set theory

Molecular set theory (MST) is a mathematical formulation of the wide-sense chemical kinetics of biomolecular reactions in terms of sets of molecules and their chemical transformations represented by set-theoretical mappings between molecular sets. It was introduced by Anthony Bartholomay, and its applications were developed in mathematical biology and especially in mathematical medicine.[38] In a more general sense, MST is the theory of molecular categories, defined as categories of molecular sets and their chemical transformations represented as set-theoretical mappings of molecular sets. The theory has also contributed to biostatistics and to the mathematical formulation of clinical biochemistry problems, covering pathological biochemical changes of interest to physiology, clinical biochemistry and medicine.[38][39]

Mathematical methods

A model of a biological system is converted into a system of equations, although the word 'model' is often used synonymously with the system of corresponding equations. The solution of the equations, by either analytical or numerical means, describes how the biological system behaves either over time or at equilibrium. There are many different types of equations and the type of behavior that can occur is dependent on both the model and the equations used. The model often makes assumptions about the system. The equations may also make assumptions about the nature of what may occur.

Simulation of mathematical biology

Recent significant advances in computer performance have accelerated the simulation of mathematical models based on various formulas. The website BioMath Modeler can run simulations and display charts interactively in a browser.

Mathematical biophysics

The earlier stages of mathematical biology were dominated by mathematical biophysics, described as the application of mathematics in biophysics, often involving specific physical/mathematical models of biosystems and their components or compartments.

The following is a list of mathematical descriptions and their assumptions.

Deterministic processes (dynamical systems)

A fixed mapping between an initial state and a final state. Starting from an initial condition and moving forward in time, a deterministic process always generates the same trajectory, and no two trajectories cross in state space.

Stochastic processes (random dynamical systems)

A random mapping between an initial state and a final state, making the state of the system a random variable with a corresponding probability distribution.
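
To make the contrast with the deterministic case concrete, the sketch below simulates a simple birth-death process with Gillespie's stochastic simulation algorithm, where each run produces a different random trajectory; the rates are illustrative assumptions.

```python
import random

birth = 1.0      # constant birth (immigration-like) rate, events per unit time
death = 0.02     # per-capita death rate
n, t, t_end = 10, 0.0, 50.0

while t < t_end and n > 0:
    rate_birth, rate_death = birth, death * n
    total = rate_birth + rate_death
    t += random.expovariate(total)              # waiting time to the next event
    if random.random() < rate_birth / total:    # choose which event fired
        n += 1
    else:
        n -= 1
print(f"population at t={t:.1f}: {n}  (re-run for a different trajectory)")
```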

Spatial modelling

One classic work in this area is Alan Turing's paper on morphogenesis entitled The Chemical Basis of Morphogenesis, published in 1952 in the Philosophical Transactions of the Royal Society.

Organizational biology

Theoretical approaches to biological organization aim to understand the interdependence between the parts of organisms. They emphasize the circularities that these interdependences lead to. Theoretical biologists developed several concepts to formalize this idea.

For example, abstract relational biology (ARB)[45] is concerned with the study of general, relational models of complex biological systems, usually abstracting out specific morphological, or anatomical, structures. Some of the simplest models in ARB are the Metabolic-Replication, or (M,R)-systems, introduced by Robert Rosen in 1957-1958 as abstract, relational models of cellular and organismal organization.[46]

Other approaches include the notion of autopoiesis developed by Maturana and Varela, Kauffman's Work-Constraints cycles, and more recently the notion of closure of constraints.[47]

Algebraic biology

Algebraic biology (also known as symbolic systems biology) applies the algebraic methods of symbolic computation to the study of biological problems, especially in genomics, proteomics, analysis of molecular structures and study of genes.[26][48][49]

Computational neuroscience

Computational neuroscience (also known as theoretical neuroscience or mathematical neuroscience) is the theoretical study of the nervous system.

Model example: the cell cycle

The eukaryotic cell cycle is very complex and is one of the most studied topics, since its misregulation leads to cancers. It is possibly a good example of a mathematical model as it deals with simple calculus but gives valid results. Two research groups [52][53] have produced several models of the cell cycle simulating several organisms. They have recently produced a generic eukaryotic cell cycle model that can represent a particular eukaryote depending on the values of the parameters, demonstrating that the idiosyncrasies of the individual cell cycles are due to different protein concentrations and affinities, while the underlying mechanisms are conserved (Csikasz-Nagy et al., 2006).
By means of a system of ordinary differential equations these models show the change in time (dynamical system) of the protein inside a single typical cell; this type of model is called a deterministic process (whereas a model describing a statistical distribution of protein concentrations in a population of cells is called a stochastic process).

To obtain these equations, an iterative series of steps must be performed: first, the several models and observations are combined to form a consensus diagram, and the appropriate kinetic laws are chosen to write the differential equations, such as rate kinetics for stoichiometric reactions, Michaelis-Menten kinetics for enzyme-substrate reactions and Goldbeter–Koshland kinetics for ultrasensitive transcription factors. Afterwards, the parameters of the equations (rate constants, enzyme efficiency coefficients and Michaelis constants) must be fitted to match observations; when they cannot be fitted, the kinetic equation is revised, and when that is not possible, the wiring diagram is modified. The parameters are fitted and validated using observations of both wild type and mutants, such as protein half-life and cell size.

To fit the parameters, the differential equations must be studied. This can be done either by simulation or by analysis. In a simulation, given a starting vector (list of the values of the variables), the progression of the system is calculated by solving the equations at each time-frame in small increments.
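
A minimal sketch of that simulation procedure: starting from an initial vector of concentrations, the differential equations are stepped forward in small time increments (forward Euler). The two-variable system and its rate constants below are invented for illustration, not taken from any published cell cycle model.

```python
k_syn, k_deg, k_act, k_inh = 1.0, 0.5, 0.8, 0.6   # assumed rate constants
cyclin, inhibitor = 0.1, 1.0          # the starting vector of concentrations
dt = 0.01                             # small time increment

for step in range(3001):
    if step % 600 == 0:
        print(f"t={step*dt:5.2f}  cyclin={cyclin:.3f}  inhibitor={inhibitor:.3f}")
    # Evaluate the right-hand sides of the ODEs at the current state...
    d_cyclin = k_syn - k_deg * cyclin - k_inh * cyclin * inhibitor
    d_inhib  = k_act * cyclin - k_deg * inhibitor
    # ...and advance each variable by rate * dt.
    cyclin    += d_cyclin * dt
    inhibitor += d_inhib * dt
```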


In analysis, the properties of the equations are used to investigate the behavior of the system depending on the values of the parameters and variables. A system of differential equations can be represented as a vector field, where each vector describes the change (in the concentrations of two or more proteins), determining where and how fast the trajectory (simulation) is heading. Vector fields can have several special points: a stable point, called a sink, that attracts in all directions (forcing the concentrations to be at a certain value); an unstable point, either a source or a saddle point, which repels (forcing the concentrations to change away from a certain value); and a limit cycle, a closed trajectory towards which several trajectories spiral (making the concentrations oscillate).

A better representation, which handles the large number of variables and parameters, is a bifurcation diagram using bifurcation theory. The presence of these special steady-state points at certain values of a parameter (e.g. mass) is represented by a point, and once the parameter passes a certain value, a qualitative change occurs, called a bifurcation, in which the nature of the space changes, with profound consequences for the protein concentrations: the cell cycle has phases (partially corresponding to G1 and G2) in which mass, via a stable point, controls cyclin levels, and phases (S and M phases) in which the concentrations change independently. Once the phase has changed at a bifurcation event (a cell cycle checkpoint), the system cannot go back to the previous levels, since at the current mass the vector field is profoundly different and the mass cannot be reversed back through the bifurcation event, making the checkpoint irreversible. In particular, the S and M checkpoints are regulated by means of special bifurcations called a Hopf bifurcation and an infinite period bifurcation.[citation needed]

