
Friday, December 26, 2025

Self-assembly of nanoparticles

Transmission electron microscopy image of an iron oxide nanoparticle. Regularly arranged dots within the dashed border are columns of Fe atoms. Left inset is the corresponding electron diffraction pattern. Scale bar: 10 nm.
Iron oxide nanoparticles can be dispersed in an organic solvent (toluene). Upon its evaporation, they may self-assemble (left and right panels) into micron-sized mesocrystals (center) or multilayers (right). Each dot in the left image is a traditional "atomic" crystal shown in the image above. Scale bars: 100 nm (left), 25 μm (center), 50 nm (right).

Nanoparticles are classified as having at least one of their dimensions in the range of 1–100 nanometers (nm). The small size of nanoparticles allows them to have unique characteristics which may not be attainable at the macro-scale. Self-assembly is the spontaneous organization of smaller subunits to form larger, well-organized patterns. For nanoparticles, this spontaneous assembly is a consequence of interactions between the particles aimed at achieving a thermodynamic equilibrium and reducing the system's free energy. The thermodynamic definition of self-assembly was introduced by Professor Nicholas A. Kotov, who describes self-assembly as a process in which components of the system acquire a non-random spatial distribution with respect to each other and the boundaries of the system. This definition makes it possible to account for mass and energy fluxes taking place in self-assembly processes.

This process occurs at all size scales, as either static or dynamic self-assembly. Static self-assembly utilizes interactions among the nanoparticles to achieve a free-energy minimum. In solution, it is an outcome of the random motion of particles and the affinity of their binding sites for one another. A dynamic system is kept from reaching equilibrium by supplying a continuous, external source of energy that balances attractive and repulsive forces. Magnetic fields, electric fields, ultrasound fields, light fields, and others have all been used as external energy sources to program robot swarms at small scales. Static self-assembly is significantly slower than dynamic self-assembly because it depends on random chemical interactions between particles.

Self-assembly can be directed in two ways. The first is to manipulate the intrinsic properties of the building blocks, which includes changing the directionality of interactions or changing particle shapes. The second is external manipulation, applying and combining several kinds of fields to steer the building blocks into the intended arrangement. To do so correctly, an extremely high level of direction and control is required, and developing a simple, efficient method to organize molecules and molecular clusters into precise, predetermined structures is crucial.

History

In 1959, physicist Richard Feynman gave a talk titled "There's Plenty of Room at the Bottom" to the American Physical Society. He imagined a world in which "we could arrange atoms one by one, just as we want them." This idea set the stage for the bottom-up synthesis approach, in which constituent components interact to form higher-ordered structures in a controllable manner. The study of the self-assembly of nanoparticles began with the recognition that some properties of atoms and molecules enable them to arrange themselves into patterns. There are a variety of applications where the self-assembly of nanoparticles might be useful, for example building sensors or computer chips.

Definition

Self-assembly is defined as a process in which individual units of material associate with themselves spontaneously into a defined and organized structure or larger units with minimal external direction. Self-assembly is recognized as a highly useful technique to achieve outstanding qualities in both organic and inorganic nanostructures.

According to George M. Whitesides, "Self-assembly is the autonomous organization of components into patterns or structures without human intervention." Another definition by Serge Palacin & Renaud Demadrill is "Self-assembly is a spontaneous and reversible process that brings together in a defined geometry randomly moving distinct bodies through selective bonding forces."

Importance

To commemorate the 125th anniversary of Science magazine, 25 pressing questions were posed for scientists to solve, and the only one that relates to chemistry is "How Far Can We Push Chemical Self-Assembly?" Because self-assembly is the only approach for building a wide variety of nanostructures, the need for increasing complexity is growing. To learn from nature and build the nanoworld with noncovalent bonds, more research is needed in this area. Self-assembly of nanomaterials is currently considered broadly for nano-structuring and nano-fabrication because of its simplicity, versatility, and spontaneity. Exploiting the properties of nano-assembly holds promise as a low-cost and high-yield technique for a wide range of scientific and technological applications and is a key research effort in nanotechnology, molecular robotics, and molecular computation. A summary of the benefits of self-assembly in fabrication is listed below:

  • Self-assembly is a scalable and parallel process which can involve large numbers of components in a short timeframe.
  • Can result in structural dimensions across orders of magnitude, from nanoscale to macroscale.
  • Is relatively inexpensive compared to the top-down assembly approach, which often consumes large amounts of finite resources.
  • Natural processes that drive self-assembly tend to be highly reproducible. The existence of life is strongly dependent on the reproducibility of self-assembly.

Challenges

There exist several outstanding challenges in self-assembly, due to a variety of competing factors. Currently self-assembly is difficult to control at large scales, and for it to be widely applied we will need to ensure high degrees of reproducibility at these scales. The fundamental thermodynamic and kinetic mechanisms of self-assembly are poorly understood: the basic principles of atomistic and macroscale processes can be significantly different from those for nanostructures. Concepts such as thermal motion and capillary action set equilibrium timescales and kinetic rates that are not yet well defined for self-assembling systems.

Top-down vs bottom-up synthesis

The top-down approach is the breaking down of a system into small components, while bottom-up is the assembling of sub-systems into a larger system. A bottom-up approach to nano-assembly is a primary research target for nano-fabrication because top-down synthesis is expensive (requiring external work) and is not selective at very small length scales, yet it is currently the primary mode of industrial fabrication. Generally, the maximum resolution of top-down products is much coarser than that of bottom-up products; therefore, an accessible strategy to bridge "bottom-up" and "top-down" is realizable through the principles of self-assembly. By controlling local intermolecular forces to find the lowest-energy configuration, self-assembly can be guided by templates to generate structures similar to those currently fabricated by top-down approaches. This so-called bridging will enable fabrication of materials with the fine resolution of bottom-up methods and the larger range and arbitrary structure of top-down processes. Furthermore, in some cases components are too small for top-down synthesis, so self-assembly principles are required to realize these novel structures.

Classification

Nanostructures can be organized into groups based on their size, function, and structure; this organization is useful to define the potential of the field.

By size

Among the more sophisticated and structurally complex nanostructures currently available are organic macromolecules, wherein their assembly relies on the placement of atoms into molecular or extended structures with atomic-level precision.  It is now known that organic compounds can be conductors, semiconductors, and insulators, thus one of the main opportunities in nanomaterials science is to use organic synthesis and molecular design to make electronically useful structures. Structural motifs in these systems include colloids, small crystals, and aggregates on the order of 1-100 nm.

By function

Nanostructured materials can also be classed according to their functions, for example nanoelectronics and information technology (IT). Lateral dimensions used in information storage are shrinking from the micro- to the nanoscale as fabrication technologies improve. Optical materials are important in the development of miniaturized information storage because light has many advantages for storage and transmission over electronic methods. Quantum dots - most commonly CdSe nanoparticles having diameters of tens of nm, and with protective surface coatings - are notable for their ability to fluoresce over a broad range of the visible spectrum, with the controlling parameter being size.

By structure

Certain structural classes are especially relevant to nanoscience. As the dimensions of structures become smaller, their surface area-to-volume ratio increases. Much like molecules, nanostructures at small enough scales are essentially "all surface". The mechanical properties of materials are strongly influenced by these surface structures. Fracture strength and character, ductility, and various mechanical moduli all depend on the substructure of the materials over a range of scales. The opportunity to redevelop a science of materials that are nanostructured by design is largely open.

Thermodynamics

Self-assembly is an equilibrium process, i.e. the individual and assembled components exist in equilibrium with each other. In addition, the flexibility and the lower free-energy conformation are usually a result of weaker intermolecular forces between the self-assembled moieties, and the driving force is essentially enthalpic in nature.

The thermodynamics of the self-assembly process can be represented by a simple Gibbs free energy equation:

ΔG = ΔH − TΔS

where, if ΔG is negative, self-assembly is a spontaneous process. ΔH is the enthalpy change of the process and is largely determined by the potential energy/intermolecular forces between the assembling entities. ΔS is the change in entropy associated with the formation of the ordered arrangement. In general, the organization is accompanied by a decrease in entropy, and in order for the assembly to be spontaneous the enthalpy term must be negative and in excess of the entropy term. This equation shows that as the value of TΔS approaches the value of ΔH, and above a critical temperature, the self-assembly process becomes progressively less likely to occur and spontaneous self-assembly will not happen.
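
As a rough numerical illustration of the equation above (not part of the original article), the sketch below checks the sign of ΔG over a range of temperatures; the ΔH and ΔS values are hypothetical, chosen only to show the crossover at the critical temperature T_c = ΔH/ΔS.

```python
# Minimal sketch: spontaneity of self-assembly from ΔG = ΔH − TΔS.
# The numerical values are hypothetical and chosen only to illustrate the sign logic.

def gibbs_free_energy(delta_h, delta_s, temperature):
    """Return ΔG (J/mol) for enthalpy change ΔH (J/mol), entropy change
    ΔS (J/(mol·K)) and temperature T (K)."""
    return delta_h - temperature * delta_s

delta_h = -40e3   # J/mol: attractive interactions release enthalpy (negative)
delta_s = -100.0  # J/(mol·K): ordering reduces entropy (negative)

for T in (280, 320, 360, 400, 440):
    dG = gibbs_free_energy(delta_h, delta_s, T)
    print(f"T = {T} K: ΔG = {dG/1e3:+.1f} kJ/mol -> "
          f"{'spontaneous' if dG < 0 else 'not spontaneous'}")

# Above T_c = ΔH/ΔS the entropy penalty wins and assembly is no longer spontaneous.
print(f"critical temperature T_c = {delta_h/delta_s:.0f} K")
```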

Self-assembly is governed by the normal processes of nucleation and growth. Small assemblies are formed because of their increased lifetime, as the attractive interactions between the components lower the Gibbs free energy. As the assembly grows, the Gibbs free energy continues to decrease until the assembly becomes stable enough to last for a long period of time. The necessity of self-assembly being an equilibrium process follows from the organization of the structure, which requires non-ideal arrangements to be formed before the lowest-energy configuration is found.

Kinetics

The ultimate driving force in self-assembly is energy minimization and the corresponding evolution towards equilibrium, but kinetic effects can also play a very strong role. These kinetic effects, such as trapping in metastable states, slow coarsening kinetics, and pathway-dependent assembly, are often viewed as complications to be overcome in, for example, the formation of block copolymers.

Amphiphile self-assembly is an essential bottom-up approach for fabricating advanced functional materials. Self-assembled materials with the desired structures are often obtained through thermodynamic control. However, studies have demonstrated that the selection of kinetic pathways can lead to drastically different self-assembled structures, underlining the significance of kinetic control in self-assembly.

Defects

There are two kinds of defects: equilibrium defects and non-equilibrium defects. Self-assembled structures contain defects. Dislocations caused during the assembly of nanomaterials can greatly affect the final structure, and in general defects are never completely avoidable. Current research on defects is focused on controlling defect density. In most cases, the thermodynamic driving force for self-assembly is provided by weak intermolecular interactions and is usually of the same order of magnitude as the entropy term. In order for a self-assembling system to reach the minimum free-energy configuration, there has to be enough thermal energy to allow the mass transport of the self-assembling molecules. For defect formation, the free energy of single defect formation is given by:

ΔG_d = ΔH_d − TΔS_d

The enthalpy term, ΔH_d, does not necessarily reflect the intermolecular forces between the molecules; it is the energy cost associated with disrupting the pattern and may be thought of as a region where the optimum arrangement does not occur and the reduction of enthalpy associated with ideal self-assembly did not occur. An example of this can be seen in a system of hexagonally packed cylinders where defect regions of lamellar structure exist.

If ΔG_d is negative, there will be a finite number of defects in the system, and the concentration will be given by:

N/N0 = exp(−E_a/(k_B T))

where N is the number of defects in a matrix of N0 self-assembled particles or features and E_a is the activation energy of defect formation. The activation energy, E_a, should not be confused with ΔH_d. The activation energy represents the energy difference between the initial ideally arranged state and a transition state towards the defective structure. At low defect concentrations, defect formation is entropy driven until a critical concentration of defects allows the activation energy term to compensate for entropy. There is usually an equilibrium defect density indicated at the minimum free energy. The activation energy for defect formation increases this equilibrium defect density.
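
A minimal numerical sketch of the Boltzmann-type relation above; the activation energy used (0.3 eV) is a hypothetical value chosen only for illustration.

```python
import math

# Sketch: equilibrium defect fraction N/N0 = exp(-E_a / (k_B * T)).
# E_a below is a hypothetical activation energy for defect formation.

k_B = 1.380649e-23            # Boltzmann constant, J/K
E_a = 0.30 * 1.602176634e-19  # 0.30 eV expressed in joules (hypothetical)

for T in (300, 350, 400):
    fraction = math.exp(-E_a / (k_B * T))
    print(f"T = {T} K: N/N0 ≈ {fraction:.2e}")
# Raising the temperature increases the equilibrium defect fraction.
```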

Particle Interaction

Intermolecular forces govern the particle interactions in self-assembled systems. The forces tend to be intermolecular in type rather than ionic or covalent, because ionic or covalent bonds will "lock" the assembly into non-equilibrium structures. The types of intermolecular forces seen in self-assembly processes include van der Waals forces, hydrogen bonds, and weak polar forces, to name a few. In self-assembly, regular structural arrangements are frequently observed; therefore there must be a balance of attractive and repulsive forces between molecules, otherwise an equilibrium distance will not exist between the particles. The repulsive forces can be electron cloud-electron cloud overlap or electrostatic repulsion.

Design nanoparticle self-assembly structure

Self-assembly of nanoparticles is driven by either maximization of packing density or minimization of the contact area between particles, depending on whether the nanoparticles are hard or soft. Examples of hard nanoparticles are silica particles and fullerenes; soft nanoparticles are often organic nanoparticles, block copolymer micelles, or DNA nanoparticles. The ordered self-assembled structure of nanoparticles is called a superlattice.

Hard nanoparticles

For hard particles, Pauling's rules were useful in understanding the structure of ionic compounds in the early days, and the later entropy-maximization principle favors dense packing in the system. Therefore, finding the densest packing for a given shape is a starting point for predicting the structure of hard-nanoparticle superlattices. For spherical particles, the densest packings are face-centered cubic and hexagonal close-packed, from the Kepler–Hales theorem.
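
The packing fraction of those densest sphere packings can be verified with a few lines of arithmetic; the short sketch below recomputes the FCC value and compares it with the Kepler–Hales bound π/√18 ≈ 0.7405.

```python
import math

# The densest sphere packings (FCC and HCP) fill pi/sqrt(18) ≈ 0.7405 of space.
# For FCC: a cubic unit cell of edge a contains 4 spheres of radius r = a/(2*sqrt(2)),
# the radius at which neighbors along the face diagonal touch.

def fcc_packing_fraction():
    a = 1.0                                   # unit-cell edge (arbitrary units)
    r = a / (2 * math.sqrt(2))                # touching-sphere radius
    sphere_volume = 4 * (4 / 3) * math.pi * r**3
    return sphere_volume / a**3

print(fcc_packing_fraction())                 # 0.74048...
print(math.pi / math.sqrt(18))                # the Kepler–Hales bound, same value
```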

Different particle shapes/polyhedra create diverse, complex packing structures that maximize the entropy of the system. Using computer simulations, four structure categories have been identified for faceted polyhedral nanoparticles according to their long-range and short-range order: liquid crystals, plastic crystals, crystals, and disordered structures.

Soft nanoparticles

Most soft nanoparticles have core–shell structures. The semiflexible surface ligands soften the interaction of the cores and, through uniform coverage, create a more spherical shape than the underlying core. The surface ligands can be chosen from surfactants, polymers, DNA, ions, and so on. Tuning the structure of superlattices can be achieved by varying the amount of surface ligands. Their "soft" behavior results in self-assembly rules different from those for hard particles, for which Pauling's rules no longer apply.

To tailor the superlattice structure of soft nanoparticles, six design rules for spherical nanoparticle superlattices have been established based on the study of metal–DNA nanoparticles:

  1. The thermodynamic product for nanoparticles of equal radius maximizes the number of nearest neighbors to which DNA connections can form.
  2. The kinetic product of two similarly stable lattices can be produced by slowing the rate at which individual DNA linkers dehybridize and rehybridize.
  3. The assembly and packing behavior is determined by the overall hydrodynamic radius of a nanoparticle rather than by the size of the core or the shell.
  4. The hydrodynamic radius ratio (size ratio) of two nanoparticles in a binary system indicates the thermodynamically favored crystal structure.
  5. Systems with the same size ratio and DNA linker ratio will give the same thermodynamic product.
  6. The most stable crystal structure is the one that maximizes all possible DNA sequence-specific hybridization interactions.

Processing

The processes by which nanoparticles self-assemble are widespread and important. Understanding why and how self-assembly occurs is key in reproducing and optimizing results. Typically, nanoparticles will self-assemble for one or both of two reasons: molecular interactions and external direction.

Self-assembly by molecular interactions

Nanoparticles can assemble chemically through covalent or noncovalent interactions of their capping ligands. The terminal functional group(s) on the particle are known as capping ligands. As these ligands tend to be complex and sophisticated, self-assembly can provide a simpler pathway for nanoparticle organization through the synthesis of efficient functional groups. For instance, DNA oligomers have been a key ligand for making nanoparticle building blocks self-assemble via sequence-based specific organization. However, to deliver precise and scalable (programmable) assembly of a desired structure, the ligand molecules must be carefully positioned on the nanoparticle at the building-block (precursor) level, in terms of direction, geometry, morphology, affinity, and so on. The successful design of ligand–building-block units can play an essential role in manufacturing a wide range of new nanosystems, such as nanosensor systems, nanomachines/nanobots, nanocomputers, and many more uncharted systems.

Intermolecular forces

Nanoparticles can self-assemble as a result of their intermolecular forces. As systems look to minimize their free energy, self-assembly is one option for the system to achieve its lowest free energy thermodynamically. Nanoparticles can be programmed to self-assemble by changing the functionality of their side groups, taking advantage of weak and specific intermolecular forces to spontaneously order the particles. These direct interparticle interactions can be typical intermolecular forces such as hydrogen bonding or Van der Waals forces, but can also be internal characteristics, such as hydrophobicity or hydrophilicity. For example, lipophilic nanoparticles have the tendency to self-assemble and form crystals as solvents are evaporated. While these aggregations are based on intermolecular forces, external factors such as temperature and pH also play a role in spontaneous self-assembly.

Hamaker interaction

As nanoparticle interactions take place on a nanoscale, the particle interactions must be scaled similarly. Hamaker interactions take into account the polarization characteristics of a large number of nearby particles and the effects they have on each other. Hamaker interactions sum all of the forces between all particles and the solvent(s) involved in the system. While Hamaker theory generally describes a macroscopic system, the vast number of nanoparticles in a self-assembling system allows the term to be applicable. Hamaker constants for nanoparticles are calculated using Lifshitz theory, and can often be found in literature.

Hamaker constants for nanoparticles in water (A131, in zJ)

  Material      A131 (zJ)
  Fe3O4         22
  γ-Fe2O3       26
  α-Fe2O3       29
  Ag            33
  Au            45
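
As a hedged sketch of how such constants are used (not taken from the article): in a common close-approach approximation, valid when the surface separation D is much smaller than the particle radius R, the van der Waals attraction between two equal spheres is V ≈ −A·R/(12·D). The radius and separations below are hypothetical; the Hamaker constant is the Ag value from the table.

```python
# Sketch: van der Waals attraction between two equal spheres from a Hamaker constant,
# using the close-approach approximation V ≈ -A*R/(12*D), valid for D << R.

k_B, T = 1.380649e-23, 298.0   # thermal energy scale for comparison, J/K and K
A = 33e-21                     # Hamaker constant of Ag in water from the table, J (33 zJ)
R = 10e-9                      # particle radius, 10 nm (hypothetical)

for D in (1e-9, 2e-9, 5e-9):   # surface-to-surface separations, m (hypothetical)
    V = -A * R / (12 * D)
    print(f"D = {D*1e9:.0f} nm: V ≈ {V/(k_B*T):+.1f} kT")
# Attractions of several kT at nanometre separations are what drive aggregation
# or ordered assembly, depending on how they are balanced by repulsive forces.
```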

Externally directed self-assembly

The natural ability of nanoparticles to self-assemble can be replicated in systems that do not intrinsically self-assemble. Directed self-assembly (DSA) attempts to mimic the chemical properties of self-assembling systems, while simultaneously controlling the thermodynamic system to maximize self-assembly.

Electric and magnetic fields

External fields are the most common directors of self-assembly. Electric and magnetic fields allow induced interactions to align the particles. The fields take advantage of the polarizability of the nanoparticle and its functional groups. When these field-induced interactions overcome random Brownian motion, particles join to form chains and then assemble. At more modest field strengths, ordered crystal structures are established due to the induced dipole interactions. Electric and magnetic field direction requires a constant balance between thermal energy and interaction energies.
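
The balance mentioned above can be put into rough numbers. The sketch below is an order-of-magnitude estimate, assuming ideal point dipoles induced in polarizable spheres and comparing the pair attraction at contact with the thermal energy k_B·T; the radius, medium permittivity, polarization factor, and field strengths are all hypothetical.

```python
import math

# Rough estimate: field-induced dipole-dipole attraction between two touching
# polarizable spheres versus thermal energy. Chains are expected roughly when
# the ratio exceeds ~1. All parameter values are hypothetical.

eps0, k_B, T = 8.854e-12, 1.380649e-23, 298.0
eps_m = 78.0        # relative permittivity of the medium (water)
K = 0.5             # Clausius-Mossotti polarization factor (hypothetical)
R = 100e-9          # particle radius, m (hypothetical)

def contact_energy(E_field):
    # Induced dipole p = 4*pi*eps0*eps_m*R^3*K*E; a head-to-tail pair at contact
    # (centre separation 2R) then has |U| = pi*eps0*eps_m*K^2*R^3*E^2.
    return math.pi * eps0 * eps_m * K**2 * R**3 * E_field**2

for E in (3e4, 1e5, 3e5):   # applied field strengths, V/m (hypothetical)
    print(f"E = {E:.0e} V/m: |U|/kT ≈ {contact_energy(E)/(k_B*T):.2f}")
```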

Flow fields

Common ways of incorporating nanoparticle self-assembly with a flow include Langmuir-Blodgett, dip coating, flow coating and spin coating.

Macroscopic viscous flow

Macroscopic viscous flow fields can direct the self-assembly of a random solution of particles into ordered crystals. However, the assembled particles tend to disassemble when the flow is stopped or removed. Shear flows are useful for jammed suspensions or random close packing. As these systems begin in nonequilibrium states, flow fields are useful in that they help the system relax towards ordered equilibrium. Flow fields are also useful when dealing with complex matrices that themselves have rheological behavior. Flow can induce anisotropic viscoelastic stresses, which help to overcome the matrix and cause self-assembly.

Combination of fields

The most effective self-assembly director is a combination of external force fields. If the fields and conditions are optimized, self-assembly can be permanent and complete. When a field combination is used with nanoparticles that are tailored to be intrinsically responsive, the most complete assembly is observed. Combinations of fields allow the benefits of self-assembly, such as scalability and simplicity, to be maintained while being able to control orientation and structure formation. Field combinations possess the greatest potential for future directed self-assembly work.

Nanomaterial Interfaces

Applications of nanotechnology often depend on the lateral assembly and spatial arrangement of nanoparticles at interfaces. Chemical reactions can be induced at solid/liquid interfaces by manipulating the location and orientation of functional groups of nanoparticles. This can be achieved through external stimuli or direct manipulation. Changing the parameters of the external stimuli, such as light and electric fields, has a direct effect on assembled nanostructures. Likewise, direct manipulation takes advantage of photolithography techniques, along with scanning probe microscopy (SPM), and scanning tunneling microscopy (STM), just to name a few.

Solid interfaces

Nanoparticles can self-assemble on solid surfaces after external forces (such as magnetic and electric fields) are applied. Templates made of microstructures, like carbon nanotubes or block polymers, can also be used to assist in self-assembly. They enable directed self-assembly (DSA), in which active sites are embedded to selectively induce nanoparticle deposition. Such templates are objects onto which different particles can be arranged into a structure with a morphology similar to that of the template. Carbon nanotubes (microstructures), single molecules, and block copolymers are common templates. Nanoparticles are often shown to self-assemble within distances of nanometers and micrometers, but block copolymer templates can be used to form well-defined self-assemblies over macroscopic distances. By incorporating active sites onto the surfaces of nanotubes and polymers, the functionalization of these templates can be tailored to favor self-assembly of specified nanoparticles.

Liquid interfaces

Understanding the behavior of nanoparticles at liquid interfaces is essential for integrating them into electronics, optics, sensing, and catalysis devices. Molecular arrangements at liquid/liquid interfaces are uniform; they also provide a defect-correcting platform, and thus liquid/liquid interfaces are ideal for self-assembly. Upon self-assembly, the structural and spatial arrangements can be determined via X-ray diffraction and optical reflectance. The number of nanoparticles involved in self-assembly can be controlled by manipulating the concentration of the electrolyte, which can be in the aqueous or the organic phase. Higher electrolyte concentrations correspond to decreased spacing between the nanoparticles. Pickering and Ramsden worked with oil/water (O/W) interfaces to portray this idea, explaining the concept of Pickering emulsions from experiments with paraffin-water emulsions containing solid particles such as iron oxide and silicon dioxide. They observed that the micron-sized colloids generated a resistant film at the interface between the two immiscible phases, inhibiting the coalescence of the emulsion drops. These Pickering emulsions are formed from the self-assembly of colloidal particles in two-part liquid systems, such as oil-water systems. The desorption energy, which is directly related to the stability of the emulsions, depends on the particle size, on particles interacting with each other, and on particles interacting with the oil and water molecules.

Self-assembly of solid nanoparticles at oil-water interface.

A decrease in total free energy was observed to be a result of the assembly of nanoparticles at an oil/water interface. When moving to the interface, particles reduce the unfavorable contact between the immiscible fluids and decrease the interfacial energy. For microscopic particles, the decrease in total free energy is much larger than the thermal energy, resulting in an effective confinement of large colloids to the interface. Nanoparticles, by contrast, are held at the interface by an energy reduction comparable to the thermal energy, so they are easily displaced from the interface. A constant particle exchange then occurs at the interface at rates dependent on particle size. For the equilibrium state of assembly, the total gain in free energy is smaller for smaller particles; thus, large nanoparticle assemblies are more stable. This size dependence allows nanoparticles to self-assemble at the interface and attain their equilibrium structure, whereas micrometer-sized colloids may be confined in a non-equilibrium state.
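
A minimal sketch of this size dependence, using a standard estimate for the energy needed to detach a spherical particle from a fluid-fluid interface, E = π·R²·γ·(1 − |cos θ|)²; the interfacial tension, contact angle, and particle sizes below are illustrative assumptions.

```python
import math

# Sketch: detachment energy of a sphere of radius R from an oil-water interface,
# E = pi * R^2 * gamma * (1 - |cos(theta)|)^2, compared with the thermal energy.
# gamma, theta and the radii below are illustrative, hypothetical values.

k_B, T = 1.380649e-23, 298.0
gamma = 0.050                 # N/m, typical oil-water interfacial tension
theta = math.radians(30)      # contact angle (hypothetical)

def detachment_energy(R):
    return math.pi * R**2 * gamma * (1 - abs(math.cos(theta)))**2

for R, label in ((2e-9, "2 nm nanoparticle"), (1e-6, "1 um colloid")):
    print(f"{label}: E ≈ {detachment_energy(R)/(k_B*T):.1e} kT")
# A few kT for the nanoparticle (easily displaced by thermal motion) versus
# ~10^5-10^6 kT for the micrometre colloid (effectively trapped at the interface).
```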

Applications

Electronics

Model of a multidimensional array of nanoparticles: each particle can have one of two spins, up or down, and based on the spin direction a nanoparticle can store a 0 or a 1. Nanostructured materials therefore have great potential for future use in electronic devices.

Self-assembly of nanoscale structures from functional nanoparticles has provided a powerful path to developing small and powerful electronic components. Nanoscale objects have always been difficult to manipulate because they cannot be characterized by molecular techniques and they are too small to observe optically. But with advances in science and technology, there are now many instruments for observing nanostructures. Imaging methods span electron, optical, and scanning probe microscopy, including combined electron/scanning probe and near-field optical/scanning probe instruments. Nanostructure characterization tools include advanced optical spectro-microscopy (linear, non-linear, tip-enhanced, and pump-probe) as well as Auger and X-ray photoemission for surface analysis. 2D self-assembly of monodisperse colloidal particles has strong potential for dense magnetic storage media: each colloidal particle can store one bit of information (a 0 or a 1) after a strong magnetic field is applied, although reading it back requires a nanoscale sensor or detector able to address individual particles selectively. The microphase separation of block copolymers shows a great deal of promise as a means of generating regular nanopatterns at surfaces. They may, therefore, find application as a route to novel nanomaterials and nanoelectronic device structures.

Biological applications

Drug delivery

Block copolymers are a well-studied and versatile class of self-assembling materials characterized by chemically distinct polymer blocks that are covalently bonded. This molecular architecture causes block copolymers to spontaneously form nanoscale patterns: the covalent bonds frustrate the natural tendency of each polymer block to remain separate (in general, different polymers do not like to mix), so the material assembles into a nanopattern instead. These copolymers offer the ability to self-assemble into uniform, nanosized micelles and to accumulate in tumors via the enhanced permeability and retention effect. Polymer composition can be chosen to control the micelle size and compatibility with the drug of choice. The challenges of this application are the difficulty of reproducing or controlling the size of the self-assembled nanomicelles, preparing a predictable size distribution, and the stability of the micelles at high drug loadings.

Magnetic drug delivery

Magnetic nanochains are a class of new magnetoresponsive and superparamagnetic nanostructures with highly anisotropic, chain-like shapes, which can be manipulated using a magnetic field and a magnetic field gradient. The magnetic nanochains possess attractive properties that add significant value for many potential uses, including magneto-mechanical actuation-based nanomedicine in low and super-low frequency alternating magnetic fields and magnetic drug delivery.

Cell imaging

Nanoparticles are well suited to biological labeling and sensing because of their brightness and photostability; thus, certain self-assembled nanoparticles can be used as imaging contrast agents in various systems. Combined with polymer cross-linkers, the fluorescence intensity can also be enhanced. Surface modification with functional groups can also lead to selective biological labeling. Self-assembled nanoparticles are also more biocompatible compared to standard drug delivery systems.

Glycogen

https://en.wikipedia.org/wiki/glycogen
Schematic two-dimensional cross-sectional view of glycogen: A core protein of glycogenin is surrounded by branches of glucose units. The entire globular granule may contain around 30,000 glucose units.
A view of the atomic structure of a single branched strand of glucose units in a glycogen molecule.
Glycogen (black granules) in spermatozoa of a flatworm; transmission electron microscopy, scale: 0.3 μm

Glycogen is a multibranched polysaccharide of glucose that serves as a form of energy storage in animals, fungi, and bacteria. It is the main storage form of glucose in the human body.

Glycogen functions as one of three regularly used forms of energy reserves, creatine phosphate being for very short-term, glycogen being for short-term and the triglyceride stores in adipose tissue (i.e., body fat) being for long-term storage. Protein, broken down into amino acids, is seldom used as a main energy source except during starvation and glycolytic crisis (see bioenergetic systems).

In humans, glycogen is made and stored primarily in the cells of the liver and skeletal muscle. In the liver, glycogen can make up 5–6% of the organ's fresh weight: the liver of an adult, weighing 1.5 kg, can store roughly 100–120 grams of glycogen. In skeletal muscle, glycogen is found in a low concentration (1–2% of the muscle mass): the skeletal muscle of an adult weighing 70 kg stores roughly 400 grams of glycogen. Small amounts of glycogen are also found in other tissues and cells, including the kidneys, red blood cells, white blood cells, and glial cells in the brain. The uterus also stores glycogen during pregnancy to nourish the embryo.

The amount of glycogen stored in the body mostly depends on oxidative type 1 fibres, physical training, basal metabolic rate, and eating habits. Different levels of resting muscle glycogen are reached by changing the number of glycogen particles rather than by increasing the size of existing particles, though most glycogen particles at rest are smaller than their theoretical maximum.

Approximately 4 grams of glucose are present in the blood of humans at all times; in fasting individuals, blood glucose is maintained constant at this level at the expense of glycogen stores, primarily from the liver (glycogen in skeletal muscle is mainly used as an immediate source of energy for that muscle rather than being used to maintain physiological blood glucose levels). Glycogen stores in skeletal muscle serve as a form of energy storage for the muscle itself; however, the breakdown of muscle glycogen impedes muscle glucose uptake from the blood, thereby increasing the amount of blood glucose available for use in other tissues. Liver glycogen stores serve as a store of glucose for use throughout the body, particularly the central nervous system. The human brain consumes approximately 60% of blood glucose in fasted, sedentary individuals.

Glycogen is an analogue of starch, a glucose polymer that functions as energy storage in plants. It has a structure similar to amylopectin (a component of starch), but is more extensively branched and compact than starch. Both are white powders in their dry state. Glycogen is found in the form of granules in the cytosol/cytoplasm in many cell types, and plays an important role in the glucose cycle. Glycogen forms an energy reserve that can be quickly mobilized to meet a sudden need for glucose, but one that is less compact than the energy reserves of triglycerides (lipids). As such it is also found as storage reserve in many parasitic protozoa.

Structure

α(1→4)-glycosidic linkages in the glycogen oligomer
α(1→4)-glycosidic and α(1→6)-glycosidic linkages in the glycogen oligomer

Glycogen is a branched biopolymer consisting of linear chains of glucose residues with an average chain length of approximately 8–12 glucose units and 2,000-60,000 residues per one molecule of glycogen.

Like amylopectin, glucose units are linked together linearly by α(1→4) glycosidic bonds from one glucose to the next. Branches are linked to the chains from which they are branching off by α(1→6) glycosidic bonds between the first glucose of the new branch and a glucose on the stem chain.

Each glycogen granule is essentially a ball of glucose trees, with around 12 layers, centered on a glycogenin protein, with three kinds of glucose chains: A, B, and C. There is only one C-chain, attached to the glycogenin. This C-chain is formed by the self-glucosylation of the glycogenin, forming a short primer chain. From the C-chain, B-chains grow out, and from the B-chains further B- and A-chains branch out. The B-chains have on average 2 branch points, while the A-chains are terminal and thus unbranched. On average, each chain has a length of 12 glucose units, tightly constrained to be between 11 and 15. All A-chains reach the spherical surface of the glycogen granule.
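
A back-of-the-envelope sketch of what this branching picture implies for the size of a granule: if the number of chains roughly doubles with each of the ~12 layers (two branch points per inner chain) and each chain carries ~12 residues, the total comes out in the tens of thousands of glucose units, the same order as the figures quoted above. This is a rough counting model, not a structural calculation.

```python
# Rough counting model of a glycogen granule based on the description above:
# ~12 layers, ~2 branch points per inner (B) chain, ~12 residues per chain.

layers = 12
branches_per_chain = 2
residues_per_chain = 12

chains_per_layer = [branches_per_chain**i for i in range(layers)]   # 1, 2, 4, ...
total_chains = sum(chains_per_layer)
total_residues = total_chains * residues_per_chain

print(f"total chains   ≈ {total_chains}")     # 4095
print(f"total residues ≈ {total_residues}")   # ≈ 49,000, the same order of magnitude
                                              # as the residue counts quoted above
```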

Glycogen in muscle, liver, and fat cells is stored in a hydrated form, composed of three or four parts of water per part of glycogen associated with 0.45 millimoles (18 mg) of potassium per gram of glycogen.

Glucose is an osmotically active molecule, and in high concentrations it can have profound effects on osmotic pressure, possibly leading to cell damage or death if stored in the cell without being modified. Glycogen is a non-osmotic molecule, so it can be used to store glucose in the cell without disrupting osmotic pressure.
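
A hedged numerical sketch of this point using the van 't Hoff relation Π = cRT: storing on the order of 100 g of glucose as free molecules in roughly a litre of intracellular water would impose a large osmotic pressure, while the same glucose packaged into glycogen granules contributes a negligible number of osmotically active particles. The water volume and granule size are rough assumptions for illustration.

```python
# Sketch: why glucose is stored as glycogen. Van 't Hoff osmotic pressure Pi = c*R*T
# for ~100 g of glucose dissolved freely in ~1 L of cell water, versus the same glucose
# packaged into glycogen granules of ~30,000 residues each. Figures are illustrative.

R_gas, T = 8.314, 310.0            # J/(mol*K), body temperature in K
water_volume = 1.0e-3              # m^3 (~1 L of intracellular water, assumed)

glucose_mass = 100.0               # g
glucose_molar_mass = 180.16        # g/mol
c_free = (glucose_mass / glucose_molar_mass) / water_volume           # mol/m^3
print(f"free glucose: Pi ≈ {c_free * R_gas * T / 1e3:.0f} kPa")        # ~1,400 kPa (~14 atm)

residues_per_granule = 30_000
c_glycogen = c_free / residues_per_granule
print(f"as glycogen:  Pi ≈ {c_glycogen * R_gas * T:.0f} Pa")           # ~48 Pa, negligible
```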

Functions

Liver

As a meal containing carbohydrates or protein is eaten and digested, blood glucose levels rise, and the pancreas secretes insulin. Blood glucose from the portal vein enters liver cells (hepatocytes). Insulin acts on the hepatocytes to stimulate the action of several enzymes, including glycogen synthase. Glucose molecules are added to the chains of glycogen as long as both insulin and glucose remain plentiful. In this postprandial or "fed" state, the liver takes in more glucose from the blood than it releases.

After a meal has been digested and glucose levels begin to fall, insulin secretion is reduced, and glycogen synthesis stops. When it is needed for energy, glycogen is broken down and converted again to glucose. Glycogen phosphorylase is the primary enzyme of glycogen breakdown. For the next 8–12 hours, glucose derived from liver glycogen is the primary source of blood glucose used by the rest of the body for fuel.

Glucagon, another hormone produced by the pancreas, in many respects serves as a countersignal to insulin. In response to insulin levels being below normal (when blood levels of glucose begin to fall below the normal range), glucagon is secreted in increasing amounts and stimulates both glycogenolysis (the breakdown of glycogen) and gluconeogenesis (the production of glucose from other sources).

Muscle

Metabolism of common monosaccharides

Muscle glycogen appears to function as a reserve of quickly available phosphorylated glucose, in the form of glucose-1-phosphate, for muscle cells. Glycogen contained within skeletal muscle cells is primarily in the form of β particles. Other cells that contain small amounts use it locally as well. As muscle cells lack glucose-6-phosphatase, which is required to pass glucose into the blood, the glycogen they store is available solely for internal use and is not shared with other cells. This is in contrast to liver cells, which, on demand, readily break down their stored glycogen into glucose and send it through the bloodstream as fuel for other organs.

Skeletal muscle needs ATP (which provides energy) for muscle contraction and relaxation, as described by the sliding filament theory. Skeletal muscle relies predominantly on glycogenolysis for the first few minutes as it transitions from rest to activity, as well as throughout high-intensity aerobic activity and all anaerobic activity. During anaerobic activity, such as weightlifting and isometric exercise, the phosphagen system (ATP-PCr) and muscle glycogen are the only substrates used, as they require neither oxygen nor blood flow.

Different bioenergetic systems produce ATP at different speeds, with ATP produced from muscle glycogen being much faster than from fatty acid oxidation. The level of exercise intensity also determines how much of which substrate (fuel) is used for ATP synthesis. Muscle glycogen can supply a much higher rate of substrate for ATP synthesis than blood glucose: during maximum-intensity exercise, muscle glycogen can supply 40 mmol glucose/kg wet weight/minute, whereas blood glucose can supply 4–5 mmol. Because of this high supply rate and quick ATP synthesis, during high-intensity aerobic activity (such as brisk walking, jogging, or running), the higher the exercise intensity, the more the muscle cell produces ATP from muscle glycogen. This reliance on muscle glycogen is not only to provide the muscle with enough ATP during high-intensity exercise, but also to maintain blood glucose homeostasis (that is, to keep the body from becoming hypoglycaemic through the muscles needing to extract far more glucose from the blood than the liver can provide). A deficit of muscle glycogen leads to muscle fatigue known as "hitting the wall" or "the bonk" (see below under glycogen depletion).

Structure type

In 1999, Meléndez et al. claimed that the structure of glycogen is optimal under a particular metabolic constraint model, in which the structure was suggested to be "fractal" in nature. However, research by Besford et al. used small-angle X-ray scattering experiments accompanied by branching-theory models to show that glycogen is a randomly hyperbranched polymer nanoparticle; glycogen is not fractal in nature. This has been subsequently verified by others who have performed Monte Carlo simulations of glycogen particle growth and shown that the molecular density reaches a maximum near the centre of the nanoparticle structure, not at the periphery (contradicting a fractal structure, which would have greater density at the periphery).

History

Glycogen was discovered by Claude Bernard. His experiments showed that the liver contained a substance that could give rise to reducing sugar by the action of a "ferment" in the liver. By 1857, he described the isolation of a substance he called "la matière glycogène", or "sugar-forming substance". Soon after the discovery of glycogen in the liver, M.A. Sanson found that muscular tissue also contains glycogen. The empirical formula for glycogen, (C6H10O5)n, was established by August Kekulé in 1858.

Sanson, M. A. "Note sur la formation physiologique du sucre dans l'économie animale." Comptes rendus des séances de l'Académie des Sciences 44 (1857): 1323–1325.

Metabolism

Synthesis

Glycogen synthesis is, unlike its breakdown, endergonic—it requires the input of energy. Energy for glycogen synthesis comes from uridine triphosphate (UTP), which reacts with glucose-1-phosphate, forming UDP-glucose, in a reaction catalysed by UTP—glucose-1-phosphate uridylyltransferase. Glycogen is synthesized from monomers of UDP-glucose initially by the protein glycogenin, which has two tyrosine anchors for the reducing end of glycogen, since glycogenin is a homodimer. After about eight glucose molecules have been added to a tyrosine residue, the enzyme glycogen synthase progressively lengthens the glycogen chain using UDP-glucose, adding α(1→4)-bonded glucose to the nonreducing end of the glycogen chain.

The glycogen branching enzyme catalyzes the transfer of a terminal fragment of six or seven glucose residues from a nonreducing end to the C-6 hydroxyl group of a glucose residue deeper into the interior of the glycogen molecule. The branching enzyme can act upon only a branch having at least 11 residues, and the enzyme may transfer to the same glucose chain or adjacent glucose chains.

Breakdown

Glycogen is cleaved from the nonreducing ends of the chain by the enzyme glycogen phosphorylase to produce monomers of glucose-1-phosphate:

Action of glycogen phosphorylase on glycogen

In vivo, phosphorolysis proceeds in the direction of glycogen breakdown because the ratio of phosphate to glucose-1-phosphate is usually greater than 100. Glucose-1-phosphate is then converted to glucose 6-phosphate (G6P) by phosphoglucomutase. A special debranching enzyme is needed to remove the α(1→6) branches in branched glycogen and reshape the chain into a linear polymer. The G6P monomers produced have three possible fates:

  • G6P can continue on the glycolysis pathway and be used as fuel.
  • G6P can enter the pentose phosphate pathway via the enzyme glucose-6-phosphate dehydrogenase to produce NADPH and five-carbon sugars.
  • In the liver and kidney, G6P can be dephosphorylated back to glucose by the enzyme glucose-6-phosphatase and released into the blood; this is the final step in the gluconeogenesis pathway.

Clinical relevance

Disorders of glycogen metabolism

The most common disease in which glycogen metabolism becomes abnormal is diabetes, in which, because of abnormal amounts of insulin, liver glycogen can be abnormally accumulated or depleted. Restoration of normal glucose metabolism usually normalizes glycogen metabolism, as well.

In hypoglycemia caused by excessive insulin, liver glycogen levels are high, but the high insulin levels prevent the glycogenolysis necessary to maintain normal blood sugar levels. Glucagon is a common treatment for this type of hypoglycemia.

Various inborn errors of carbohydrate metabolism are caused by deficiencies of enzymes or transport proteins necessary for glycogen synthesis or breakdown. These are collectively referred to as glycogen storage diseases.

Glycogen depletion and endurance exercise

Long-distance athletes, such as marathon runners, cross-country skiers, and cyclists, often experience glycogen depletion, where almost all of the athlete's glycogen stores are depleted after long periods of exertion without sufficient carbohydrate consumption. This phenomenon is referred to as "hitting the wall" in running and "bonking" in cycling.

Glycogen depletion can be forestalled in three possible ways:

  • First, during exercise, carbohydrates with the highest possible rate of conversion to blood glucose (high glycemic index) are ingested continuously. The best possible outcome of this strategy replaces about 35% of glucose consumed at heart rates above about 80% of maximum.
  • Second, through endurance training adaptations and specialized regimens (e.g. fasting, low-intensity endurance training), the body can condition type I muscle fibers to improve both fuel use efficiency and workload capacity to increase the percentage of fatty acids used as fuel, sparing carbohydrate use from all sources.
  • Third, by consuming large quantities of carbohydrates after depleting glycogen stores as a result of exercise or diet, the body can increase storage capacity of intramuscular glycogen stores. This process is known as carbohydrate loading. In general, glycemic index of carbohydrate source does not matter since muscular insulin sensitivity is increased as a result of temporary glycogen depletion.

When athletes ingest both carbohydrate and caffeine following exhaustive exercise, their glycogen stores tend to be replenished more rapidly; however, the minimum dose of caffeine at which there is a clinically significant effect on glycogen repletion has not been established.

Nanomedicine

Glycogen nanoparticles have been investigated as potential drug delivery systems.

Introduction to quantum mechanics

Quantum mechanics is the study of matter and matter's interactions with energy on the scale of atomic and subatomic particles. By contrast, classical physics explains matter and energy only on a scale familiar to human experience, including the behavior of astronomical bodies such as the Moon. Classical physics is still used in much of modern science and technology. However, towards the end of the 19th century, scientists discovered phenomena in both the large (macro) and the small (micro) worlds that classical physics could not explain. The desire to resolve inconsistencies between observed phenomena and classical theory led to a revolution in physics, a shift in the original scientific paradigm: the development of quantum mechanics.

Many aspects of quantum mechanics yield unexpected results, defying expectations and appearing counterintuitive. These aspects can seem paradoxical because they describe behavior quite different from that seen at larger scales. In the words of quantum physicist Richard Feynman, quantum mechanics deals with "nature as She is—absurd". Features of quantum mechanics often defy simple explanations in everyday language. One example of this is the uncertainty principle: precise measurements of position cannot be combined with precise measurements of velocity. Another example is entanglement: a measurement made on one particle (such as an electron that is measured to have spin 'up') will correlate with a measurement on a second particle (an electron will be found to have spin 'down') if the two particles have a shared history. This will apply even if it is impossible for the result of the first measurement to have been transmitted to the second particle before the second measurement takes place.

Quantum mechanics helps people understand chemistry, because it explains how atoms interact with each other and form molecules. Many remarkable phenomena can be explained using quantum mechanics, like superfluidity. For example, if liquid helium cooled to a temperature near absolute zero is placed in a container, it spontaneously flows up and over the rim of its container; this is an effect which cannot be explained by classical physics.

History

James C. Maxwell's unification of the equations governing electricity, magnetism, and light in the late 19th century led to experiments on the interaction of light and matter. Some of these experiments had aspects which could not be explained until quantum mechanics emerged in the early part of the 20th century.

Evidence of quanta from the photoelectric effect

The seeds of the quantum revolution appear in the discovery by J.J. Thomson in 1897 that cathode rays were not continuous but "corpuscles" (electrons). Electrons had been named just six years earlier as part of the emerging theory of atoms. In 1900, Max Planck, unconvinced by the atomic theory, discovered that he needed discrete entities like atoms or electrons to explain black-body radiation.

Black-body radiation intensity vs color and temperature. The rainbow bar represents visible light; 5000 K objects are "white hot" by mixing differing colors of visible light. To the right is the invisible infrared. Classical theory (black curve for 5000 K) fails to predict the colors; the other curves are correctly predicted by quantum theories.

Very hot – red hot or white hot – objects look similar when heated to the same temperature. This look results from a common curve of light intensity at different frequencies (colors), which is called black-body radiation. White hot objects have intensity across many colors in the visible range. The frequencies just below the visible range are infrared light, which also delivers heat. Continuous wave theories of light and matter cannot explain the black-body radiation curve. Planck spread the heat energy among individual "oscillators" of an undefined character but with discrete energy capacity; this model explained black-body radiation.

At the time, electrons, atoms, and discrete oscillators were all exotic ideas to explain exotic phenomena. But in 1905 Albert Einstein proposed that light was also corpuscular, consisting of "energy quanta", in contradiction to the established science of light as a continuous wave, stretching back a hundred years to Thomas Young's work on diffraction.

Einstein's revolutionary proposal started by reanalyzing Planck's black-body theory, arriving at the same conclusions by using the new "energy quanta". Einstein then showed how energy quanta connected to Thomson's electron. In 1902, Philipp Lenard directed light from an arc lamp onto freshly cleaned metal plates housed in an evacuated glass tube. He measured the electric current coming off the metal plate at higher and lower intensities of light and for different metals. Lenard showed that the amount of current – the number of electrons – depended on the intensity of the light, but that the velocity of these electrons did not depend on intensity. This is the photoelectric effect. The continuous wave theories of the time predicted that more light intensity would accelerate the same amount of current to higher velocity, contrary to this experiment. Einstein's energy quanta explained why the current grows with intensity: one electron is ejected for each quantum, so more quanta mean more electrons.

Einstein then predicted that the electron energy would increase in direct proportion to the light frequency above a fixed value that depended upon the metal. Here the idea is that the energy in energy-quanta depends upon the light frequency; the energy transferred to the electron comes in proportion to the light frequency. The type of metal gives a barrier, the fixed value, that the electrons must climb over to exit their atoms, to be emitted from the metal surface and be measured.
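
Einstein's relation can be written as K_max = h·f − φ, where φ is the metal-dependent barrier (work function). The short sketch below evaluates it for a few wavelengths; the work function value (roughly that of sodium) and the wavelengths are illustrative.

```python
# Sketch: Einstein's photoelectric relation, K_max = h*f - phi.
# phi is the metal-dependent "barrier" (work function); the value used here
# (2.3 eV, roughly sodium) and the wavelengths are illustrative.

h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
eV = 1.602176634e-19    # joules per electronvolt
phi = 2.3 * eV          # work function (illustrative)

for wavelength_nm in (650, 500, 400, 300):
    f = c / (wavelength_nm * 1e-9)
    K_max = h * f - phi
    if K_max > 0:
        print(f"{wavelength_nm} nm: electrons emitted, K_max ≈ {K_max/eV:.2f} eV")
    else:
        print(f"{wavelength_nm} nm: below threshold, no emission at any intensity")
```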

Ten years elapsed before Millikan's definitive experiment verified Einstein's prediction. During that time many scientists rejected the revolutionary idea of quanta. But Planck's and Einstein's concept was in the air and soon began to affect other physics and quantum theories.

Quantization of bound electrons in atoms

Experiments with light and matter in the late 1800s uncovered a reproducible but puzzling regularity. When light was shone through purified gases, certain frequencies (colors) did not pass. These dark absorption 'lines' followed a distinctive pattern: the gaps between the lines decreased steadily. By 1889, the Rydberg formula predicted the lines for hydrogen gas using only a constant number and the integers to index the lines. The origin of this regularity was unknown. Solving this mystery would eventually become the first major step toward quantum mechanics.

Throughout the 19th century evidence grew for the atomic nature of matter. With Thomson's discovery of the electron in 1897, scientists began the search for a model of the interior of the atom. Thomson proposed negative electrons swimming in a pool of positive charge. Between 1908 and 1911, Rutherford showed that the positive part was only 1/3000th of the diameter of the atom.

Models of "planetary" electrons orbiting a nuclear "Sun" were proposed, but cannot explain why the electron does not simply fall into the positive charge. In 1913 Niels Bohr and Ernest Rutherford connected the new atom models to the mystery of the Rydberg formula: the orbital radius of the electrons were constrained and the resulting energy differences matched the energy differences in the absorption lines. This meant that absorption and emission of light from atoms was energy quantized: only specific energies that matched the difference in orbital energy would be emitted or absorbed.

Trading one mystery – the regular pattern of the Rydberg formula – for another mystery – constraints on electron orbits – might not seem like a big advance, but the new atom model summarized many other experimental findings. The quantization of the photoelectric effect and now the quantization of the electron orbits set the stage for the final revolution.

Throughout the first and the modern era of quantum mechanics the concept that classical mechanics must be valid macroscopically constrained possible quantum models. This concept was formalized by Bohr in 1923 as the correspondence principle. It requires quantum theory to converge to classical limits. A related concept is Ehrenfest's theorem, which shows that the average values obtained from quantum mechanics (e.g. position and momentum) obey classical laws.

Quantization of spin

Stern–Gerlach experiment: Silver atoms travelling through an inhomogeneous magnetic field, and being deflected up or down depending on their spin; (1) furnace, (2) beam of silver atoms, (3) inhomogeneous magnetic field, (4) classically expected result, (5) observed result

In 1922 Otto Stern and Walther Gerlach demonstrated that the magnetic properties of silver atoms defy classical explanation, work that contributed to Stern's 1943 Nobel Prize in Physics. They fired a beam of silver atoms through a magnetic field. According to classical physics, the atoms should have emerged in a spray, with a continuous range of directions. Instead, the beam separated into two, and only two, diverging streams of atoms. Unlike the other quantum effects known at the time, this striking result involves the state of a single atom. In 1927, Thomas Erwin Phipps and John Bellamy Taylor obtained a similar, but less pronounced, effect using hydrogen atoms in their ground state, thereby eliminating any doubts that may have been caused by the use of silver atoms.

In 1924, Wolfgang Pauli called it "two-valuedness not describable classically" and associated it with electrons in the outermost shell. The experiments led to the formulation of a theory attributing this property to the spin of the electron, proposed in 1925 by Samuel Goudsmit and George Uhlenbeck under the advice of Paul Ehrenfest.

Quantization of matter

In 1924 Louis de Broglie proposed that electrons in an atom are constrained not in "orbits" but as standing waves. In detail his solution did not work, but his hypothesis – that the electron "corpuscle" moves in the atom as a wave – spurred Erwin Schrödinger to develop a wave equation for electrons; when applied to hydrogen the Rydberg formula was accurately reproduced.

Example original electron diffraction photograph from the laboratory of G. P. Thomson, recorded 1925–1927

Max Born's 1924 paper "Zur Quantenmechanik" was the first use of the words "quantum mechanics" in print. His later work included developing quantum collision models; in a footnote to a 1926 paper he proposed the Born rule connecting theoretical models to experiment.

In 1927, at Bell Labs, Clinton Davisson and Lester Germer fired slow-moving electrons at a crystalline nickel target and observed a diffraction pattern, indicating the wave nature of the electron; the theory of the effect was fully explained by Hans Bethe. In a similar experiment, George Paget Thomson and Alexander Reid fired electrons at thin celluloid foils and later metal films, observed rings, and independently discovered the matter-wave nature of the electron.

Further developments

In 1928 Paul Dirac published his relativistic wave equation simultaneously incorporating relativity, predicting anti-matter, and providing a complete theory for the Stern–Gerlach result. These successes launched a new fundamental understanding of our world at small scale: quantum mechanics.

Planck and Einstein started the revolution with quanta that broke down the continuous models of matter and light. Twenty years later "corpuscles" like electrons came to be modeled as continuous waves. This result came to be called wave-particle duality, one iconic idea along with the uncertainty principle that sets quantum mechanics apart from older models of physics.

Quantum radiation, quantum fields

In 1923 Compton demonstrated that the Planck-Einstein energy quanta from light also had momentum; three years later the "energy quanta" got a new name "photon". Despite its role in almost all stages of the quantum revolution, no explicit model for light quanta existed until 1927 when Paul Dirac began work on a quantum theory of radiation that became quantum electrodynamics. Over the following decades this work evolved into quantum field theory, the basis for modern quantum optics and particle physics.

Wave–particle duality

The concept of wave–particle duality says that neither the classical concept of "particle" nor of "wave" can fully describe the behavior of quantum-scale objects, either photons or matter. Wave–particle duality is an example of the principle of complementarity in quantum physics. An elegant example of wave-particle duality is the double-slit experiment.

The diffraction pattern produced when light is shone through one slit (top) and the interference pattern produced by two slits (bottom). Both patterns show oscillations due to the wave nature of light. The double slit pattern is more dramatic.

In the double-slit experiment, as originally performed by Thomas Young in 1803, and then Augustin Fresnel a decade later, a beam of light is directed through two narrow, closely spaced slits, producing an interference pattern of light and dark bands on a screen. The same behavior can be demonstrated in water waves: the double-slit experiment was seen as a demonstration of the wave nature of light.

Variations of the double-slit experiment have been performed using electrons, atoms, and even large molecules, and the same type of interference pattern is seen. Thus it has been demonstrated that all matter possesses wave characteristics.

If the source intensity is turned down, the same interference pattern will slowly build up, one "count" or particle (e.g. photon or electron) at a time. The quantum system acts as a wave when passing through the double slits, but as a particle when it is detected. This is a typical feature of quantum complementarity: a quantum system acts as a wave in an experiment to measure its wave-like properties, and like a particle in an experiment to measure its particle-like properties. The point on the detector screen where any individual particle shows up is the result of a random process. However, the distribution pattern of many individual particles mimics the diffraction pattern produced by waves.
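The build-up of fringes from individual random detections can be shown with a short numerical sketch (illustrative only, not from the article; the wavelength, slit width, slit separation, and screen distance are arbitrary choices). Each detected position is drawn from the two-slit intensity pattern, and the interference pattern emerges only after many counts.

```python
# Illustrative sketch: single "particle" detections sampled from the
# two-slit intensity pattern.  Fringes appear only as counts accumulate.
import numpy as np

rng = np.random.default_rng(0)
lam, d, a, L = 500e-9, 50e-6, 10e-6, 1.0      # wavelength, slit separation, slit width, screen distance (m)

x = np.linspace(-0.05, 0.05, 4001)            # positions on the detection screen (m)
beta = np.pi * a * x / (lam * L)              # single-slit (envelope) phase
intensity = np.cos(np.pi * d * x / (lam * L))**2 * np.sinc(beta / np.pi)**2
prob = intensity / intensity.sum()            # treat the intensity as a probability distribution

for n in (10, 100, 10_000):                   # more detections -> clearer interference fringes
    hits = rng.choice(x, size=n, p=prob)      # each "hit" is one particle-like detection
    counts, _ = np.histogram(hits, bins=50, range=(-0.05, 0.05))
    print(f"{n:>6} counts:", counts.tolist())
```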

Uncertainty principle

Werner Heisenberg at the age of 26. Heisenberg won the Nobel Prize in Physics in 1932 for the work he did in the late 1920s.

Suppose it is desired to measure the position and speed of an object—for example, a car going through a radar speed trap. It can be assumed that the car has a definite position and speed at a particular moment in time. How accurately these values can be measured depends on the quality of the measuring equipment. If the precision of the measuring equipment is improved, it provides a result closer to the true value. It might be assumed that the speed of the car and its position could be operationally defined and measured simultaneously, as precisely as might be desired.

In 1927, Heisenberg proved that this last assumption is not correct. Quantum mechanics shows that certain pairs of physical properties, for example, position and speed, cannot be simultaneously measured, nor defined in operational terms, to arbitrary precision: the more precisely one property is measured, or defined in operational terms, the less precisely can the other be thus treated. This statement is known as the uncertainty principle. The uncertainty principle is not only a statement about the accuracy of our measuring equipment but, more deeply, is about the conceptual nature of the measured quantities—the assumption that the car had simultaneously defined position and speed does not work in quantum mechanics. On a scale of cars and people, these uncertainties are negligible, but when dealing with atoms and electrons they become critical.

Heisenberg gave, as an illustration, the measurement of the position and momentum of an electron using a photon of light. In measuring the electron's position, the higher the frequency of the photon, the more accurate is the measurement of the position of the impact of the photon with the electron, but the greater is the disturbance of the electron. This is because from the impact with the photon, the electron absorbs a random amount of energy, rendering the measurement obtained of its momentum increasingly uncertain, for one is necessarily measuring its post-impact disturbed momentum from the collision products and not its original momentum (momentum which should be simultaneously measured with position). With a photon of lower frequency, the disturbance (and hence uncertainty) in the momentum is less, but so is the accuracy of the measurement of the position of the impact.

At the heart of the uncertainty principle is the fact that for any mathematical analysis in the position and velocity domains, achieving a sharper (more precise) curve in the position domain can only be done at the expense of a more gradual (less precise) curve in the speed domain, and vice versa. More sharpness in the position domain requires contributions from more frequencies in the speed domain to create the narrower curve, and vice versa. It is a fundamental tradeoff inherent in any such related or complementary measurements, but it is only really noticeable at the smallest scales, near the size of elementary particles.

The uncertainty principle shows mathematically that the product of the uncertainty in the position and momentum of a particle (momentum is velocity multiplied by mass) could never be less than a certain value, and that this value is related to the Planck constant.
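In modern notation (a standard form, added here for illustration rather than quoted from the article), the position-momentum uncertainty relation reads

\[ \Delta x \, \Delta p \;\ge\; \frac{\hbar}{2} = \frac{h}{4\pi}, \]

where Δx and Δp are the standard deviations of repeated position and momentum measurements on identically prepared systems.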

Wave function collapse

Wave function collapse means that a measurement has forced or converted a quantum (probabilistic or potential) state into a definite measured value. This phenomenon is only seen in quantum mechanics rather than classical mechanics.

For example, before a photon actually "shows up" on a detection screen it can be described only with a set of probabilities for where it might show up. When it does appear, for instance in the CCD of an electronic camera, the time and space where it interacted with the device are known within very tight limits. However, the photon has disappeared in the process of being captured (measured), and its quantum wave function has disappeared with it. In its place, some macroscopic physical change in the detection screen has appeared, e.g., an exposed spot in a sheet of photographic film, or a change in electric potential in some cell of a CCD.

Eigenstates and eigenvalues

Because of the uncertainty principle, statements about both the position and momentum of particles can assign only a probability that the position or momentum has some numerical value. Therefore, it is necessary to formulate clearly the difference between the state of something indeterminate, such as an electron in a probability cloud, and the state of something having a definite value. When an object can definitely be "pinned-down" in some respect, it is said to possess an eigenstate.

In the Stern–Gerlach experiment discussed above, the quantum model predicts two possible values of spin for the atom relative to the magnetic axis. These two eigenstates are named arbitrarily 'up' and 'down'. The quantum model predicts that these states will be measured with equal probability, but that no intermediate values will be seen. This is what the Stern–Gerlach experiment shows.

The eigenstates of spin about the vertical axis are not simultaneously eigenstates of spin about the horizontal axis, so this atom has an equal probability of being found to have either value of spin about the horizontal axis. As described in the section above, measuring the spin about the horizontal axis can allow an atom that was spin up to become spin down: measuring its spin about the horizontal axis collapses its wave function into one of the eigenstates of this measurement, which means it is no longer in an eigenstate of spin about the vertical axis and so can take either value.
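This bookkeeping can be made concrete with a small numerical sketch (illustrative only, not from the article), using the standard Pauli matrices: a spin-up state along the vertical (z) axis has equal overlap with both eigenstates of spin along the horizontal (x) axis.

```python
# Illustrative sketch: a z-axis spin eigenstate gives 50/50 outcomes
# for a measurement of spin along the x axis (Born rule).
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)   # spin about the horizontal axis
up_z = np.array([1, 0], dtype=complex)                # "spin up" along the vertical axis

eigvals, eigvecs = np.linalg.eigh(sigma_x)            # eigenstates of the horizontal measurement
probs = np.abs(eigvecs.conj().T @ up_z) ** 2          # Born rule: |<eigenstate|state>|^2
for val, p in zip(eigvals, probs):
    print(f"spin {val:+.0f} along x: probability {p:.2f}")   # prints 0.50 and 0.50
```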

The Pauli exclusion principle

Wolfgang Pauli

In 1924, Wolfgang Pauli proposed a new quantum degree of freedom (or quantum number), with two possible values, to resolve inconsistencies between observed molecular spectra and the predictions of quantum mechanics. In particular, the spectrum of atomic hydrogen had a doublet, or pair of lines differing by a small amount, where only one line was expected. Pauli formulated his exclusion principle, stating, "There cannot exist an atom in such a quantum state that two electrons within [it] have the same set of quantum numbers."

A year later, Uhlenbeck and Goudsmit identified Pauli's new degree of freedom with the property called spin whose effects were observed in the Stern–Gerlach experiment.

Dirac wave equation

Paul Dirac (1902–1984)

In 1928, Paul Dirac extended the Pauli equation, which described spinning electrons, to account for special relativity. The result was a theory that dealt properly with events, such as the speed at which an electron orbits the nucleus, occurring at a substantial fraction of the speed of light. By using the simplest electromagnetic interaction, Dirac was able to predict the value of the magnetic moment associated with the electron's spin and found the experimentally observed value, which was too large to be that of a spinning charged sphere governed by classical physics. He was able to solve for the spectral lines of the hydrogen atom and to reproduce from physical first principles Sommerfeld's successful formula for the fine structure of the hydrogen spectrum.
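In standard notation (added here for illustration, not part of the original text), the Dirac equation gives the electron a spin magnetic moment

\[ \boldsymbol{\mu}_s = -g_s \frac{e}{2m_e}\mathbf{S}, \qquad g_s = 2, \]

twice the value expected for a classical rotating charge distribution, in agreement with experiment (later corrected by roughly 0.1% by quantum electrodynamics).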

Dirac's equations sometimes yielded a negative value for energy, for which he proposed a novel solution: he posited the existence of an antielectron and a dynamical vacuum. This led to the many-particle quantum field theory.

Quantum entanglement

In quantum physics, a group of particles can interact or be created together in such a way that the quantum state of each particle of the group cannot be described independently of the state of the others, including when the particles are separated by a large distance. This is known as quantum entanglement.

An early landmark in the study of entanglement was the Einstein–Podolsky–Rosen (EPR) paradox, a thought experiment proposed by Albert Einstein, Boris Podolsky and Nathan Rosen which argues that the description of physical reality provided by quantum mechanics is incomplete. In a 1935 paper titled "Can Quantum-Mechanical Description of Physical Reality be Considered Complete?", they argued for the existence of "elements of reality" that were not part of quantum theory, and speculated that it should be possible to construct a theory containing these hidden variables.

The thought experiment involves a pair of particles prepared in what would later become known as an entangled state. Einstein, Podolsky, and Rosen pointed out that, in this state, if the position of the first particle were measured, the result of measuring the position of the second particle could be predicted. If instead the momentum of the first particle were measured, then the result of measuring the momentum of the second particle could be predicted. They argued that no action taken on the first particle could instantaneously affect the other, since this would involve information being transmitted faster than light, which is forbidden by the theory of relativity. They invoked a principle, later known as the "EPR criterion of reality", positing that: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity." From this, they inferred that the second particle must have a definite value of both position and of momentum prior to either quantity being measured. But quantum mechanics considers these two observables incompatible and thus does not associate simultaneous values for both to any system. Einstein, Podolsky, and Rosen therefore concluded that quantum theory does not provide a complete description of reality. In the same year, Erwin Schrödinger used the word "entanglement" and declared: "I would not call that one but rather the characteristic trait of quantum mechanics."

The Irish physicist John Stewart Bell carried the analysis of quantum entanglement much further. He deduced that if measurements are performed independently on the two separated particles of an entangled pair, then the assumption that the outcomes depend upon hidden variables within each half implies a mathematical constraint on how the outcomes on the two measurements are correlated. This constraint would later be named the Bell inequality. Bell then showed that quantum physics predicts correlations that violate this inequality. Consequently, the only way that hidden variables could explain the predictions of quantum physics is if they are "nonlocal", which is to say that somehow the two particles are able to interact instantaneously no matter how widely they ever become separated. Performing experiments like those that Bell suggested, physicists have found that nature obeys quantum mechanics and violates Bell inequalities. In other words, the results of these experiments are incompatible with any local hidden variable theory.
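The most commonly tested form is the CHSH version of Bell's inequality (stated here in standard notation, not quoted from the article): for measurement settings a, a' on one particle and b, b' on the other, any local hidden-variable model requires

\[ |S| = |E(a,b) - E(a,b') + E(a',b) + E(a',b')| \le 2, \]

whereas quantum mechanics predicts values up to 2√2 ≈ 2.83 for suitably chosen settings, in agreement with experiment.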

Quantum field theory

The idea of quantum field theory began in the late 1920s with British physicist Paul Dirac, when he attempted to quantize the energy of the electromagnetic field; just as in quantum mechanics the energy of an electron in the hydrogen atom was quantized. Quantization is a procedure for constructing a quantum theory starting from a classical theory.

Merriam-Webster defines a field in physics as "a region or space in which a given effect (such as magnetism) exists". Other effects that manifest themselves as fields are gravitation and static electricity. In 2008, physicist Richard Hammond wrote:

Sometimes we distinguish between quantum mechanics (QM) and quantum field theory (QFT). QM refers to a system in which the number of particles is fixed, and the fields (such as the electromagnetic field) are continuous classical entities. QFT ... goes a step further and allows for the creation and annihilation of particles ...

He added, however, that quantum mechanics is often used to refer to "the entire notion of quantum view".

In 1931, Dirac proposed the existence of particles that later became known as antimatter. Dirac shared the Nobel Prize in Physics for 1933 with Schrödinger "for the discovery of new productive forms of atomic theory".

Quantum electrodynamics

Quantum electrodynamics (QED) is the name of the quantum theory of the electromagnetic force. Understanding QED begins with understanding electromagnetism. Electromagnetism can be called "electrodynamics" because it is a dynamic interaction between electrical and magnetic forces. Electromagnetism begins with the electric charge.

Electric charges are the sources of, and create, electric fields. An electric field is a field that exerts a force on any particles that carry electric charges, at any point in space. This includes the electron, proton, and even quarks, among others. As a force is exerted, electric charges move, a current flows, and a magnetic field is produced. The changing magnetic field, in turn, causes electric current (often moving electrons). The physical description of interacting charged particles, electrical currents, electric fields, and magnetic fields is called electromagnetism.

In 1928 Paul Dirac produced a relativistic quantum theory of electromagnetism. This was the progenitor to modern quantum electrodynamics, in that it had essential ingredients of the modern theory. However, the problem of unsolvable infinities developed in this relativistic quantum theory. Years later, renormalization largely solved this problem. Initially viewed as a provisional, suspect procedure by some of its originators, renormalization eventually was embraced as an important and self-consistent tool in QED and other fields of physics. Also, in the late 1940s Feynman diagrams provided a way to make predictions with QED by finding a probability amplitude for each possible way that an interaction could occur. The diagrams showed in particular that the electromagnetic force is the exchange of photons between interacting particles.

The Lamb shift is an example of a quantum electrodynamics prediction that has been experimentally verified. It is an effect whereby the quantum nature of the electromagnetic field makes the energy levels in an atom or ion deviate slightly from what they would otherwise be. As a result, spectral lines may shift or split.

Similarly, within a freely propagating electromagnetic wave, the current can also be just an abstract displacement current, instead of involving charge carriers. In QED, its full description makes essential use of short-lived virtual particles. There, QED again validates an earlier, rather mysterious concept.

Standard Model

The Standard Model of particle physics is the quantum field theory that describes three of the four known fundamental forces (electromagnetic, weak and strong interactions – excluding gravity) in the universe and classifies all known elementary particles. It was developed in stages throughout the latter half of the 20th century, through the work of many scientists worldwide, with the current formulation being finalized in the mid-1970s upon experimental confirmation of the existence of quarks. Since then, proof of the top quark (1995), the tau neutrino (2000), and the Higgs boson (2012) have added further credence to the Standard Model. In addition, the Standard Model has predicted various properties of weak neutral currents and the W and Z bosons with great accuracy.

Although the Standard Model is believed to be theoretically self-consistent and has demonstrated success in providing experimental predictions, it leaves some physical phenomena unexplained and so falls short of being a complete theory of fundamental interactions. For example, it does not fully explain baryon asymmetry, incorporate the full theory of gravitation as described by general relativity, or account for the universe's accelerating expansion as possibly described by dark energy. The model does not contain any viable dark matter particle that possesses all of the required properties deduced from observational cosmology. It also does not incorporate neutrino oscillations and their non-zero masses. Accordingly, it is used as a basis for building more exotic models that incorporate hypothetical particles, extra dimensions, and elaborate symmetries (such as supersymmetry) to explain experimental results at variance with the Standard Model, such as the existence of dark matter and neutrino oscillations.

Interpretations

The physical measurements, equations, and predictions pertinent to quantum mechanics are all consistent and hold a very high level of confirmation. However, the question of what these abstract models say about the underlying nature of the real world has received competing answers. These interpretations are widely varying and sometimes somewhat abstract. For instance, the Copenhagen interpretation states that before a measurement, statements about a particle's properties are completely meaningless, while the many-worlds interpretation describes the existence of a multiverse made up of every possible universe.

Light behaves in some aspects like particles and in other aspects like waves. Matter—the "stuff" of the universe consisting of particles such as electrons and atoms—exhibits wavelike behavior too. Some light sources, such as neon lights, give off only certain specific frequencies of light, a small set of distinct pure colors determined by neon's atomic structure. Quantum mechanics shows that light, along with all other forms of electromagnetic radiation, comes in discrete units, called photons, and predicts its spectral energies (corresponding to pure colors), and the intensities of its light beams. A single photon is a quantum, or smallest observable particle, of the electromagnetic field. A partial photon is never experimentally observed. More broadly, quantum mechanics shows that many properties of objects, such as position, speed, and angular momentum, that appeared continuous in the zoomed-out view of classical mechanics, turn out to be (in the very tiny, zoomed-in scale of quantum mechanics) quantized. Such properties of elementary particles are required to take on one of a set of small, discrete allowable values, and since the gap between these values is also small, the discontinuities are only apparent at very tiny (atomic) scales.

Applications

Everyday applications

The relationship between the frequency of electromagnetic radiation and the energy of each photon is why ultraviolet light can cause sunburn, but visible or infrared light cannot. A photon of ultraviolet light delivers a high amount of energy—enough to contribute to cellular damage such as occurs in a sunburn. A photon of infrared light delivers less energy—only enough to warm one's skin. So, an infrared lamp can warm a large surface, perhaps large enough to keep people comfortable in a cold room, but it cannot give anyone a sunburn.
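The comparison can be made quantitative with a short calculation (the wavelengths are illustrative values, not from the article): the energy of a single photon is E = hc/λ, so a 300 nm ultraviolet photon carries several times the energy of a 1000 nm infrared photon.

```python
# Illustrative sketch: energy per photon, E = h*c/wavelength, in electronvolts.
h = 6.626e-34       # Planck constant (J*s)
c = 2.998e8         # speed of light (m/s)
eV = 1.602e-19      # joules per electronvolt

for name, wavelength in [("ultraviolet (300 nm)", 300e-9), ("infrared (1000 nm)", 1000e-9)]:
    energy = h * c / wavelength / eV
    print(f"{name}: {energy:.2f} eV per photon")
# ~4.1 eV for the UV photon (enough to damage molecules in skin cells)
# vs ~1.2 eV for the IR photon (only enough to warm the skin).
```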

Technological applications

Applications of quantum mechanics include the laser, the transistor, the electron microscope, and magnetic resonance imaging. A special class of quantum mechanical applications is related to macroscopic quantum phenomena such as superfluid helium and superconductors. The study of semiconductors led to the invention of the diode and the transistor, which are indispensable for modern electronics.

In even a simple light switch, quantum tunneling is absolutely vital, as otherwise the electrons in the electric current could not penetrate the potential barrier made up of a layer of oxide. Flash memory chips found in USB drives also use quantum tunneling, to erase their memory cells.
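As a rough, standard estimate (added here for illustration, not taken from the article), the probability of an electron with energy E tunnelling through a rectangular barrier of height V_0 > E and thickness L falls off exponentially,

\[ T \approx e^{-2\kappa L}, \qquad \kappa = \frac{\sqrt{2m(V_0 - E)}}{\hbar}, \]

so the tunnelling current through an oxide layer depends extremely sensitively on the layer's thickness and on the barrier height.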
