Friday, April 27, 2018

Top-down and bottom-up design

From Wikipedia, the free encyclopedia
Top-down and bottom-up are both strategies of information processing and knowledge ordering, used in a variety of fields including software, humanistic and scientific theories (see systemics), and management and organization. In practice, they can be seen as a style of thinking, teaching, or leadership.

A top-down approach (also known as stepwise design and in some cases used as a synonym of decomposition) is essentially the breaking down of a system to gain insight into its compositional sub-systems in a reverse engineering fashion. In a top-down approach, an overview of the system is formulated, specifying, but not detailing, any first-level subsystems. Each subsystem is then refined in yet greater detail, sometimes in many additional subsystem levels, until the entire specification is reduced to base elements. A top-down model is often specified with the assistance of "black boxes", which makes it easier to manipulate. However, black boxes may fail to clarify elementary mechanisms or be detailed enough to realistically validate the model. A top-down approach starts with the big picture and breaks it down from there into smaller segments.[1]

A bottom-up approach is the piecing together of systems to give rise to more complex systems, thus making the original systems sub-systems of the emergent system. Bottom-up processing is a type of information processing based on incoming data from the environment to form a perception. From a cognitive psychology perspective, information enters the eyes in one direction (sensory input, or the "bottom"), and is then turned into an image by the brain that can be interpreted and recognized as a perception (output that is "built up" from processing to final cognition). In a bottom-up approach the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which then in turn are linked, sometimes in many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, by which the beginnings are small but eventually grow in complexity and completeness. However, "organic strategies" may result in a tangle of elements and subsystems, developed in isolation and subject to local optimization as opposed to meeting a global purpose.

Product design and development

During the design and development of new products, designers and engineers rely on both a bottom-up and a top-down approach. The bottom-up approach is used when off-the-shelf or existing components are selected and integrated into the product. An example would include selecting a particular fastener, such as a bolt, and designing the receiving components such that the fastener will fit properly. In a top-down approach, a custom fastener would be designed such that it would fit properly in the receiving components.[2] For perspective, a product with more restrictive requirements (weight, geometry, safety, environment, etc.), such as a space suit, takes a more top-down approach, and almost everything is custom designed. However, when it is more important to minimize cost and increase component availability, such as with manufacturing equipment, a more bottom-up approach would be taken, and as many off-the-shelf components as possible (bolts, gears, bearings, etc.) would be selected. In the latter case, the receiving housings would be designed around the selected components.

Computer science

Software development

In the software development process, the top-down and bottom-up approaches play a key role.

Top-down approaches emphasize planning and a complete understanding of the system. It is inherent that no coding can begin until a sufficient level of detail has been reached in the design of at least some part of the system. Top-down approaches are implemented by attaching stubs in place of modules that have not yet been written. This, however, delays testing of the ultimate functional units of a system until significant design is complete. Bottom-up emphasizes coding and early testing, which can begin as soon as the first module has been specified. This approach, however, runs the risk that modules may be coded without having a clear idea of how they link to other parts of the system, and that such linking may not be as easy as first thought. Re-usability of code is one of the main benefits of the bottom-up approach.[3]
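
As a rough sketch of this workflow (the function names are hypothetical, not taken from any particular project), the top-level routine is written first and the missing modules are temporarily replaced by stubs, so the overall flow can run before the real implementations exist:

    # A minimal, hypothetical illustration of top-down development with stubs.
    def load_records(path):
        # Stub: returns canned data until the real file reader is written.
        return [{"name": "example", "value": 1}]

    def summarize(records):
        # Stub: placeholder until the real aggregation logic is written.
        return {"count": len(records)}

    def format_summary(summary):
        # Stub: placeholder formatting.
        return "count = {}".format(summary["count"])

    def generate_report(path):
        # The top-level design is fixed first; the pieces are refined later.
        records = load_records(path)
        summary = summarize(records)
        return format_summary(summary)

    print(generate_report("records.csv"))  # runs end to end, on stub data

This illustrates the trade-off described above: the program's overall structure can be exercised early, but the ultimate functional units are not genuinely tested until the stubs are replaced.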

Top-down design was promoted in the 1970s by IBM researcher Harlan Mills and by Niklaus Wirth. Mills developed structured programming concepts for practical use and tested them in a 1969 project to automate the New York Times morgue index. The engineering and management success of this project led to the spread of the top-down approach through IBM and the rest of the computer industry. Among other achievements, Niklaus Wirth, the developer of the Pascal programming language, wrote the influential paper Program Development by Stepwise Refinement. Since Niklaus Wirth went on to develop languages such as Modula and Oberon (where one could define a module before knowing about the entire program specification), one can infer that top-down programming was not strictly what he promoted. Top-down methods were favored in software engineering until the late 1980s,[3] and object-oriented programming helped demonstrate that both top-down and bottom-up programming could be utilized.

Modern software design approaches usually combine both top-down and bottom-up approaches. Although an understanding of the complete system is usually considered necessary for good design, leading theoretically to a top-down approach, most software projects attempt to make use of existing code to some degree. Pre-existing modules give designs a bottom-up flavor. Some methodologies also design and code a partially functional system to completion, and this system is then expanded to fulfill all the requirements for the project.

Programming


Building blocks are an example of bottom-up design because the parts are first created and then assembled without regard to how the parts will work in the assembly.

Top-down is a programming style, the mainstay of traditional procedural languages, in which design begins by specifying complex pieces and then dividing them into successively smaller pieces. The technique for writing a program using top-down methods is to write a main procedure that names all the major functions it will need. Later, the programming team looks at the requirements of each of those functions and the process is repeated. These compartmentalized sub-routines eventually will perform actions so simple they can be easily and concisely coded. When all the various sub-routines have been coded, the program is ready for testing. By defining how the application comes together at a high level, lower-level work can be self-contained. By defining how the lower-level abstractions are expected to integrate into higher-level ones, interfaces become clearly defined.

In a bottom-up approach, the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which then in turn are linked, sometimes in many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, by which the beginnings are small, but eventually grow in complexity and completeness. Object-oriented programming (OOP) is a paradigm that uses "objects" to design applications and computer programs. In mechanical engineering, with software programs such as Pro/ENGINEER, Solidworks, and Autodesk Inventor, users can design products as individual pieces rather than as part of the whole and later add those pieces together to form assemblies, much like building with LEGO. Engineers call this piece part design.

In a bottom-up approach, good intuition is necessary to decide the functionality that is to be provided by the module. If a system is to be built from an existing system, this approach is more suitable as it starts from some existing modules.
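
As a small complementary sketch (again with hypothetical names, not tied to any particular project), a bottom-up style writes and tests the low-level pieces first and only afterwards composes them into a higher-level routine:

    # A minimal, hypothetical illustration of bottom-up composition.
    def parse_line(line):
        # Low-level piece: split a "name,value" line into a (name, float) pair.
        name, value = line.strip().split(",")
        return name, float(value)

    def total(pairs):
        # Low-level piece: sum the numeric part of the pairs.
        return sum(value for _, value in pairs)

    # Each piece can be tested in isolation as soon as it is written.
    assert parse_line("widgets,2.5") == ("widgets", 2.5)
    assert total([("a", 1.0), ("b", 2.0)]) == 3.0

    def report(lines):
        # Higher-level routine assembled later from the existing pieces.
        pairs = [parse_line(line) for line in lines]
        return "total = {}".format(total(pairs))

    print(report(["widgets,2.5", "gears,1.5"]))  # total = 4.0

The re-usability noted earlier is visible here: parse_line and total could serve other higher-level routines, but nothing guarantees in advance that they will compose into exactly the system that is ultimately needed.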

Parsing

Parsing is the process of analyzing an input sequence (such as that read from a file or a keyboard) in order to determine its grammatical structure. This method is used in the analysis of both natural languages and computer languages, as in a compiler.

Bottom-up parsing is a strategy for analyzing unknown data relationships that attempts to identify the most fundamental units first, and then to infer higher-order structures from them. Top-down parsers, on the other hand, hypothesize general parse tree structures and then consider whether the known fundamental structures are compatible with the hypothesis. See Top-down parsing and Bottom-up parsing.
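
As a rough illustration of the top-down style (a hand-written recursive-descent parser for a toy grammar of additions such as "1 + 2 + 3"; the grammar and names are made up for this sketch, not drawn from any particular compiler):

    # A minimal recursive-descent (top-down) parser for the toy grammar
    #   expr ::= number ("+" number)*
    # The parser starts from the highest-level rule (expr) and works downward,
    # which is the defining trait of top-down parsing; a bottom-up parser would
    # instead recognize the numbers first and reduce upward toward expr.
    def parse_expr(tokens, pos=0):
        value, pos = parse_number(tokens, pos)
        while pos < len(tokens) and tokens[pos] == "+":
            rhs, pos = parse_number(tokens, pos + 1)
            value += rhs
        return value, pos

    def parse_number(tokens, pos):
        if pos >= len(tokens) or not tokens[pos].isdigit():
            raise SyntaxError("expected a number at position {}".format(pos))
        return int(tokens[pos]), pos + 1

    result, _ = parse_expr("1 + 2 + 3".split())
    print(result)  # 6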

Nanotechnology

Top-down and bottom-up are two approaches for the manufacture of products. These terms were first applied to the field of nanotechnology by the Foresight Institute in 1989 in order to distinguish between molecular manufacturing (to mass-produce large atomically precise objects) and conventional manufacturing (which can mass-produce large objects that are not atomically precise). Bottom-up approaches seek to have smaller (usually molecular) components built up into more complex assemblies, while top-down approaches seek to create nanoscale devices by using larger, externally controlled ones to direct their assembly. Certain valuable nanostructures, such as Silicon nanowires, can be fabricated using either approach, with processing methods selected on the basis of targeted applications.

The top-down approach often uses the traditional workshop or microfabrication methods where externally controlled tools are used to cut, mill, and shape materials into the desired shape and order. Micropatterning techniques, such as photolithography and inkjet printing, belong to this category. Vapor treatment can be regarded as a new top-down secondary approach to engineering nanostructures.[4]

Bottom-up approaches, in contrast, use the chemical properties of single molecules to cause single-molecule components to (a) self-organize or self-assemble into some useful conformation, or (b) rely on positional assembly. These approaches utilize the concepts of molecular self-assembly and/or molecular recognition. See also Supramolecular chemistry. Such bottom-up approaches should, broadly speaking, be able to produce devices in parallel and much cheaper than top-down methods, but could potentially be overwhelmed as the size and complexity of the desired assembly increases.

Neuroscience and psychology


An example of top-down processing: Even though the second letter in each word is ambiguous, top-down processing allows for easy disambiguation based on the context.

These terms are also employed in neuroscience, cognitive neuroscience and cognitive psychology to discuss the flow of information in processing.[5][page needed] Typically sensory input is considered "bottom-up", and higher cognitive processes, which have more information from other sources, are considered "top-down". A bottom-up process is characterized by an absence of higher level direction in sensory processing, whereas a top-down process is characterized by a high level of direction of sensory processing by more cognition, such as goals or targets (Beiderman, 19).[3]

According to college teaching notes written by Charles Ramskov,[who?] Rock, Neiser, and Gregory claim that the top-down approach involves perception that is an active and constructive process.[6][better source needed] Additionally, perception is not directly given by stimulus input, but is the result of the interaction of the stimulus, internal hypotheses, and expectations. According to Theoretical Synthesis, "when a stimulus is presented short and clarity is uncertain that gives a vague stimulus, perception becomes a top-down approach."[7]

Conversely, psychology defines bottom-up processing as an approach wherein there is a progression from the individual elements to the whole. According to Ramskov, one proponent of the bottom-up approach, Gibson, claims that it is a process that includes visual perception, which requires information available from the proximal stimulus produced by the distal stimulus.[8][page needed][better source needed][9] Theoretical Synthesis also claims that bottom-up processing occurs "when a stimulus is presented long and clearly enough."[7]

Cognitively speaking, certain cognitive processes, such as fast reactions or quick visual identification, are considered bottom-up processes because they rely primarily on sensory information, whereas processes such as motor control and directed attention are considered top-down because they are goal-directed. Neurologically speaking, some areas of the brain, such as area V1, mostly have bottom-up connections.[7] Other areas, such as the fusiform gyrus, have inputs from higher brain areas and are considered to have top-down influence.[10][better source needed]

The study of visual attention provides an example. If your attention is drawn to a flower in a field, it may be because the color or shape of the flower are visually salient. The information that caused you to attend to the flower came to you in a bottom-up fashion—your attention was not contingent upon knowledge of the flower; the outside stimulus was sufficient on its own. Contrast this situation with one in which you are looking for a flower. You have a representation of what you are looking for. When you see the object you are looking for, it is salient. This is an example of the use of top-down information.

In cognitive terms, two thinking approaches are distinguished. "Top-down" (or "big chunk") is stereotypically the visionary, or the person who sees the larger picture and overview. Such people focus on the big picture and from that derive the details to support it. "Bottom-up" (or "small chunk") cognition is akin to focusing on the detail primarily, rather than the landscape. The expression "seeing the wood for the trees" references the two styles of cognition.[11]

Management and organization

In the fields of management and organization, the terms "top-down" and "bottom-up" are used to describe how decisions are made and/or how change is implemented.[12]

A "top-down" approach is where an executive decision maker or other top person makes the decisions of how something should be done. This approach is disseminated under their authority to lower levels in the hierarchy, who are, to a greater or lesser extent, bound by them. For example, when wanting to make an improvement in a hospital, a hospital administrator might decide that a major change (such as implementing a new program) is needed, and then the leader uses a planned approach to drive the changes down to the frontline staff (Stewart, Manges, Ward, 2015).[12]

A "bottom-up" approach to changes one that works from the grassroots—from a large number of people working together, causing a decision to arise from their joint involvement. A decision by a number of activists, students, or victims of some incident to take action is a "bottom-up" decision. A bottom-up approach can be thought of as "an incremental change approach that represents an emergent process cultivated and upheld primarily by frontline workers" (Stewart, Manges, Ward, 2015, p. 241).[12]

Positive aspects of top-down approaches include their efficiency and superb overview of higher levels.[12] Also, external effects can be internalized. On the negative side, if reforms are perceived to be imposed 'from above', it can be difficult for lower levels to accept them (e.g. Bresser-Pereira, Maravall, and Przeworski 1993). Evidence suggests this to be true regardless of the content of reforms (e.g. Dubois 2002). A bottom-up approach allows for more experimentation and a better feeling for what is needed at the bottom. Other evidence suggests that there is a third combination approach to change (see Stewart, Manges, Ward, 2015).[12]

Public health

Both top-down and bottom-up approaches exist in public health. There are many examples of top-down programs, often run by governments or large inter-governmental organizations (IGOs); many of these are disease-specific or issue-specific, such as HIV control or smallpox eradication. Examples of bottom-up programs include many small NGOs set up to improve local access to healthcare. However, many programs seek to combine both approaches; for instance, guinea worm eradication, a single-disease international program currently run by the Carter Center, has involved the training of many local volunteers, boosting bottom-up capacity, as have international programs for hygiene, sanitation, and access to primary health-care.

Architecture

Often, the École des Beaux-Arts school of design is said to have primarily promoted top-down design because it taught that an architectural design should begin with a parti, a basic plan drawing of the overall project.[citation needed]

By contrast, the Bauhaus focused on bottom-up design. This method manifested itself in the study of translating small-scale organizational systems to a larger, more architectural scale (as with wood panel carving and furniture design).

Ecology

In ecology, top-down control refers to situations in which a top predator controls the structure or population dynamics of the ecosystem. The classic example is kelp forest ecosystems. In such ecosystems, sea otters are a keystone predator. They prey on urchins, which in turn eat kelp. When otters are removed, urchin populations grow and reduce the kelp forest, creating urchin barrens. In other words, such ecosystems are not controlled by the productivity of the kelp but rather by a top predator.

Bottom-up control in ecosystems refers to ecosystems in which the nutrient supply, productivity, and type of primary producers (plants and phytoplankton) control the ecosystem structure. An example would be how plankton populations are controlled by the availability of nutrients. Plankton populations tend to be higher and more complex in areas where upwelling brings nutrients to the surface.

There are many different examples of these concepts. It is common for populations to be influenced by both types of control.

Nanomedicine

From Wikipedia, the free encyclopedia

Nanomedicine is the medical application of nanotechnology.[1] Nanomedicine ranges from the medical applications of nanomaterials and biological devices, to nanoelectronic biosensors, and even possible future applications of molecular nanotechnology such as biological machines. Current problems for nanomedicine involve understanding the issues related to toxicity and environmental impact of nanoscale materials (materials whose structure is on the scale of nanometers, i.e. billionths of a meter).

Functionalities can be added to nanomaterials by interfacing them with biological molecules or structures. The size of nanomaterials is similar to that of most biological molecules and structures; therefore, nanomaterials can be useful for both in vivo and in vitro biomedical research and applications. Thus far, the integration of nanomaterials with biology has led to the development of diagnostic devices, contrast agents, analytical tools, physical therapy applications, and drug delivery vehicles.

Nanomedicine seeks to deliver a valuable set of research tools and clinically useful devices in the near future.[2][3] The National Nanotechnology Initiative expects new commercial applications in the pharmaceutical industry that may include advanced drug delivery systems, new therapies, and in vivo imaging.[4] Nanomedicine research is receiving funding from the US National Institutes of Health Common Fund program, supporting four nanomedicine development centers.[5]

Nanomedicine sales reached $16 billion in 2015, with a minimum of $3.8 billion in nanotechnology R&D being invested every year. Global funding for emerging nanotechnology increased by 45% per year in recent years, with product sales exceeding $1 trillion in 2013.[6] As the nanomedicine industry continues to grow, it is expected to have a significant impact on the economy.

Drug delivery

Nanoparticles (top), liposomes (middle), and dendrimers (bottom) are some nanomaterials being investigated for use in nanomedicine.

Nanotechnology has provided the possibility of delivering drugs to specific cells using nanoparticles.[7] The overall drug consumption and side-effects may be lowered significantly by depositing the active agent in the morbid region only and in no higher dose than needed. Targeted drug delivery is intended to reduce the side effects of drugs with concomitant decreases in consumption and treatment expenses. Drug delivery focuses on maximizing bioavailability both at specific places in the body and over a period of time. This can potentially be achieved by molecular targeting by nanoengineered devices.[8][9] A benefit of using nanoscale for medical technologies is that smaller devices are less invasive and can possibly be implanted inside the body, plus biochemical reaction times are much shorter. These devices are faster and more sensitive than typical drug delivery.[10] The efficacy of drug delivery through nanomedicine is largely based upon: a) efficient encapsulation of the drugs, b) successful delivery of drug to the targeted region of the body, and c) successful release of the drug.[citation needed]

Drug delivery systems, lipid-[11] or polymer-based nanoparticles,[12] can be designed to improve the pharmacokinetics and biodistribution of the drug.[13][14][15] However, the pharmacokinetics and pharmacodynamics of nanomedicine are highly variable among different patients.[16] When designed to avoid the body's defence mechanisms,[17] nanoparticles have beneficial properties that can be used to improve drug delivery. Complex drug delivery mechanisms are being developed, including the ability to get drugs through cell membranes and into cell cytoplasm. Triggered response is one way for drug molecules to be used more efficiently. Drugs are placed in the body and only activate on encountering a particular signal. For example, a drug with poor solubility will be replaced by a drug delivery system where both hydrophilic and hydrophobic environments exist, improving the solubility.[18] Drug delivery systems may also be able to prevent tissue damage through regulated drug release; reduce drug clearance rates; or lower the volume of distribution and reduce the effect on non-target tissue. However, the biodistribution of these nanoparticles is still imperfect due to the host's complex reactions to nano- and microsized materials[17] and the difficulty in targeting specific organs in the body. Nevertheless, a lot of work is still ongoing to optimize and better understand the potential and limitations of nanoparticulate systems. As research shows that targeting and distribution can be augmented by nanoparticles, understanding the dangers of nanotoxicity becomes an important next step in further understanding of their medical uses.[19]

Nanoparticles are under research for their potential to decrease antibiotic resistance or for various antimicrobial uses.[20][21][22] Nanoparticles might also be used to circumvent multidrug resistance (MDR) mechanisms.[7]

Systems under research

Two forms of nanomedicine that have already been tested in mice and are awaiting human testing will use gold nanoshells to help diagnose and treat cancer,[23] along with liposomes as vaccine adjuvants and drug transport vehicles.[24][25] Drug detoxification is another application for nanomedicine which has shown promising results in rats.[26] Advances in lipid nanotechnology were also instrumental in engineering medical nanodevices and novel drug delivery systems, as well as in developing sensing applications.[27] Other examples can be found in dendrimers and nanoporous materials, and in block co-polymers, which form micelles for drug encapsulation.[12]

Polymeric nanoparticles are a competing technology to lipidic (based mainly on phospholipids) nanoparticles. There is an additional risk of toxicity associated with polymers not widely studied or understood. The major advantages of polymers are stability, lower cost, and predictable characterisation. However, in the patient's body this very stability (slow degradation) is a negative factor. Phospholipids, on the other hand, are membrane lipids (already present in the body and surrounding each cell), have a GRAS (Generally Recognised As Safe) status from the FDA, and are derived from natural sources without any complex chemistry involved. They are not metabolised but rather absorbed by the body, and the degradation products are themselves nutrients (fats or micronutrients).[citation needed]

Proteins and peptides exert multiple biological actions in the human body, and they have been identified as showing great promise for the treatment of various diseases and disorders. These macromolecules are called biopharmaceuticals. Targeted and/or controlled delivery of these biopharmaceuticals using nanomaterials like nanoparticles[28] and dendrimers is an emerging field called nanobiopharmaceutics, and these products are called nanobiopharmaceuticals.[citation needed]

Another highly efficient system for microRNA delivery, for example, is nanoparticles formed by the self-assembly of two different microRNAs deregulated in cancer.[29]

Another vision is based on small electromechanical systems; nanoelectromechanical systems are being investigated for the active release of drugs and for sensors. Some potentially important applications include cancer treatment with iron nanoparticles or gold shells, or early cancer diagnosis.[30] Nanotechnology is also opening up new opportunities in implantable delivery systems, which are often preferable to the use of injectable drugs, because the latter frequently display first-order kinetics (the blood concentration goes up rapidly, but drops exponentially over time). This rapid rise may cause difficulties with toxicity, and drug efficacy can diminish as the drug concentration falls below the targeted range.[citation needed]

Applications

Some nanotechnology-based drugs that are commercially available or in human clinical trials include:
  • Abraxane, approved by the U.S. Food and Drug Administration (FDA) to treat breast cancer,[31] non-small-cell lung cancer (NSCLC)[32] and pancreatic cancer,[33] is nanoparticle albumin-bound paclitaxel.
  • Doxil was originally approved by the FDA for use on HIV-related Kaposi's sarcoma. It is now also being used to treat ovarian cancer and multiple myeloma. The drug is encased in liposomes, which helps to extend the life of the drug that is being distributed. Liposomes are self-assembling, spherical, closed colloidal structures that are composed of lipid bilayers that surround an aqueous space. The liposomes also help to increase functionality and to decrease the damage that the drug does specifically to the heart muscle.[34]
  • Onivyde, liposome encapsulated irinotecan to treat metastatic pancreatic cancer, was approved by FDA in October 2015.[35]
  • C-dots (Cornell dots) are the smallest silica-based nanoparticles, with a size of <10 nm. The particles are infused with organic dye which will light up with fluorescence. A clinical trial has been underway since 2011 to use the C-dots as a diagnostic tool to assist surgeons in identifying the location of tumor cells.[36]
  • An early phase clinical trial using the ‘Minicell’ nanoparticle platform for drug delivery has been conducted on patients with advanced and untreatable cancer. Built from the membranes of mutant bacteria, the minicells were loaded with paclitaxel and coated with cetuximab, antibodies that bind the epidermal growth factor receptor (EGFR), which is often overexpressed in a number of cancers, as a 'homing' device to the tumor cells. The tumor cells recognize the bacteria from which the minicells have been derived, regard it as an invading microorganism and engulf it. Once inside, the payload of anti-cancer drug kills the tumor cells. Measured at 400 nanometers, the minicell is bigger than synthetic particles developed for drug delivery. The researchers indicated that this larger size gives the minicells a better side-effect profile because the minicells will preferentially leak out of the porous blood vessels around the tumor cells and do not reach the liver, digestive system and skin. This Phase 1 clinical trial demonstrated that this treatment is well tolerated by the patients. As a platform technology, the minicell drug delivery system can be used to treat a number of different cancers with different anti-cancer drugs with the benefit of lower doses and fewer side-effects.[37][38]
  • In 2014, a Phase 3 clinical trial for treating inflammation and pain after cataract surgery, and a Phase 2 trial for treating dry eye disease were initiated using nanoparticle loteprednol etabonate.[39] In 2015, the product, KPI-121, was found to produce statistically significant positive results for the post-surgery treatment.[40]

    Cancer

    Nanoparticles have a high surface-area-to-volume ratio. This allows for many functional groups to be attached to a nanoparticle, which can seek out and bind to certain tumor cells.[47] Additionally, the small size of nanoparticles (10 to 100 nanometers) allows them to preferentially accumulate at tumor sites (because tumors lack an effective lymphatic drainage system).[48] Limitations to conventional cancer chemotherapy include drug resistance, lack of selectivity, and lack of solubility.[46] Nanoparticles have the potential to overcome these problems.[41][49]

    In photodynamic therapy, a particle is placed within the body and is illuminated with light from the outside. The light gets absorbed by the particle and, if the particle is metal, energy from the light will heat the particle and surrounding tissue. Light may also be used to produce high energy oxygen molecules which will chemically react with and destroy most organic molecules that are next to them (like tumors). This therapy is appealing for many reasons. It does not leave a "toxic trail" of reactive molecules throughout the body (as chemotherapy does) because it is directed only where the light is shined and the particles exist. Photodynamic therapy has potential as a noninvasive procedure for dealing with diseases, growths and tumors. Kanzius RF therapy is one example of such therapy (nanoparticle hyperthermia).[citation needed] Also, gold nanoparticles have the potential to join numerous therapeutic functions into a single platform, by targeting specific tumor cells, tissues and organs.[50][51]

    Imaging

    In vivo imaging is another area where tools and devices are being developed.[52] With nanoparticle contrast agents, imaging techniques such as ultrasound and MRI achieve a more favorable distribution and improved contrast. In cardiovascular imaging, nanoparticles have potential to aid visualization of blood pooling, ischemia, angiogenesis, atherosclerosis, and focal areas where inflammation is present.[52]

    The small size of nanoparticles endows them with properties that can be very useful in oncology, particularly in imaging.[7] Quantum dots (nanoparticles with quantum confinement properties, such as size-tunable light emission), when used in conjunction with MRI (magnetic resonance imaging), can produce exceptional images of tumor sites. Nanoparticles of cadmium selenide (quantum dots) glow when exposed to ultraviolet light. When injected, they seep into cancer tumors. The surgeon can see the glowing tumor and use it as a guide for more accurate tumor removal. These nanoparticles are much brighter than organic dyes and only need one light source for excitation. This means that the use of fluorescent quantum dots could produce a higher contrast image at a lower cost than today's organic dyes used as contrast media. The downside, however, is that quantum dots are usually made of quite toxic elements, but this concern may be addressed by use of fluorescent dopants.[53]

    Tracking movement can help determine how well drugs are being distributed or how substances are metabolized. It is difficult to track a small group of cells throughout the body, so scientists used to dye the cells. These dyes needed to be excited by light of a certain wavelength in order for them to light up. While different color dyes absorb different frequencies of light, there was a need for as many light sources as cells. A way around this problem is with luminescent tags. These tags are quantum dots attached to proteins that penetrate cell membranes.[53] The dots can be random in size, can be made of bio-inert material, and they demonstrate the nanoscale property that color is size-dependent. As a result, sizes are selected so that the frequency of light used to make a group of quantum dots fluoresce is an even multiple of the frequency required to make another group incandesce. Then both groups can be lit with a single light source. Researchers have also found a way to insert nanoparticles[54] into the affected parts of the body so that those parts will glow, showing tumor growth or shrinkage as well as organ trouble.[55]

    Sensing

    Nanotechnology-on-a-chip is one more dimension of lab-on-a-chip technology. Magnetic nanoparticles, bound to a suitable antibody, are used to label specific molecules, structures or microorganisms. In particular silica nanoparticles are inert from the photophysical point of view and might accumulate a large number of dye(s) within the nanoparticle shell.[28] Gold nanoparticles tagged with short segments of DNA can be used for detection of genetic sequence in a sample. Multicolor optical coding for biological assays has been achieved by embedding different-sized quantum dots into polymeric microbeads. Nanopore technology for analysis of nucleic acids converts strings of nucleotides directly into electronic signatures.[citation needed]

    Sensor test chips containing thousands of nanowires, able to detect proteins and other biomarkers left behind by cancer cells, could enable the detection and diagnosis of cancer in the early stages from a few drops of a patient's blood.[56] Nanotechnology is helping to advance the use of arthroscopes, which are pencil-sized devices that are used in surgeries with lights and cameras so surgeons can do the surgeries with smaller incisions. The smaller the incision, the faster the healing time, which is better for the patients. It is also helping to find a way to make an arthroscope smaller than a strand of hair.[57]

    Research on nanoelectronics-based cancer diagnostics could lead to tests that can be done in pharmacies. The results promise to be highly accurate and the product promises to be inexpensive. They could take a very small amount of blood and detect cancer anywhere in the body in about five minutes, with a sensitivity that is a thousand times better than in a conventional laboratory test. These devices are built with nanowires to detect cancer proteins; each nanowire detector is primed to be sensitive to a different cancer marker.[30] The biggest advantage of the nanowire detectors is that they could test for anywhere from ten to one hundred similar medical conditions without adding cost to the testing device.[58] Nanotechnology has also helped to personalize oncology for the detection, diagnosis, and treatment of cancer. Treatment can now be tailored to each individual’s tumor for better performance, and researchers have found ways to target the specific part of the body that is affected by cancer.[59]

    Blood purification

    Magnetic microparticles are proven research instruments for the separation of cells and proteins from complex media. The technology is available under the name Magnetic-activated cell sorting or Dynabeads, among others. More recently, it was shown in animal models that magnetic nanoparticles can be used for the removal of various noxious compounds including toxins, pathogens, and proteins from whole blood in an extracorporeal circuit similar to dialysis.[60][61] In contrast to dialysis, which works on the principle of the size-related diffusion of solutes and ultrafiltration of fluid across a semi-permeable membrane, the purification with nanoparticles allows specific targeting of substances. Additionally, larger compounds which are commonly not dialyzable can be removed.[citation needed]

    The purification process is based on functionalized iron oxide or carbon coated metal nanoparticles with ferromagnetic or superparamagnetic properties.[62] Binding agents such as proteins,[61] antibodies,[60] antibiotics,[63] or synthetic ligands[64] are covalently linked to the particle surface. These binding agents are able to interact with target species forming an agglomerate. Applying an external magnetic field gradient allows exerting a force on the nanoparticles. Hence the particles can be separated from the bulk fluid, thereby cleaning it from the contaminants.[65][66]

    The small size (< 100 nm) and large surface area of functionalized nanomagnets leads to advantageous properties compared to hemoperfusion, which is a clinically used technique for the purification of blood and is based on surface adsorption. These advantages are high loading and accessibility of the binding agents, high selectivity towards the target compound, fast diffusion, small hydrodynamic resistance, and low dosage.[67]

    This approach offers new therapeutic possibilities for the treatment of systemic infections such as sepsis by directly removing the pathogen. It can also be used to selectively remove cytokines or endotoxins[63] or for the dialysis of compounds which are not accessible by traditional dialysis methods. However, the technology is still in a preclinical phase, and the first clinical trials are not expected before 2017.[68]

    Tissue engineering

    Nanotechnology may be used as part of tissue engineering to help reproduce, repair, or reshape damaged tissue using suitable nanomaterial-based scaffolds and growth factors. Tissue engineering, if successful, may replace conventional treatments like organ transplants or artificial implants. Nanoparticles such as graphene, carbon nanotubes, molybdenum disulfide and tungsten disulfide are being used as reinforcing agents to fabricate mechanically strong biodegradable polymeric nanocomposites for bone tissue engineering applications. The addition of these nanoparticles in the polymer matrix at low concentrations (~0.2 weight %) leads to significant improvements in the compressive and flexural mechanical properties of polymeric nanocomposites.[69][70] Potentially, these nanocomposites may be used as novel, mechanically strong, lightweight composites for bone implants.[citation needed]

    For example, a flesh welder was demonstrated to fuse two pieces of chicken meat into a single piece using a suspension of gold-coated nanoshells activated by an infrared laser. This could be used to weld arteries during surgery.[71] Another example is nanonephrology, the use of nanomedicine on the kidney.

    Medical devices

    Neuro-electronic interfacing is a visionary goal dealing with the construction of nanodevices that will permit computers to be joined and linked to the nervous system. This idea requires the building of a molecular structure that will permit control and detection of nerve impulses by an external computer. A refuelable strategy implies energy is refilled continuously or periodically with external sonic, chemical, tethered, magnetic, or biological electrical sources, while a nonrefuelable strategy implies that all power is drawn from internal energy storage which would stop when all energy is drained. A nanoscale enzymatic biofuel cell for self-powered nanodevices has been developed that uses glucose from biofluids including human blood and watermelons.[72] One limitation to this innovation is that electrical interference, leakage, or overheating from power consumption is possible. The wiring of the structure is extremely difficult because the wires must be positioned precisely in the nervous system. The structures that will provide the interface must also be compatible with the body's immune system.[73]

    Molecular nanotechnology is a speculative subfield of nanotechnology regarding the possibility of engineering molecular assemblers, machines which could re-order matter at a molecular or atomic scale. Nanomedicine would make use of these nanorobots, introduced into the body, to repair or detect damages and infections. Molecular nanotechnology is highly theoretical, seeking to anticipate what inventions nanotechnology might yield and to propose an agenda for future inquiry. The proposed elements of molecular nanotechnology, such as molecular assemblers and nanorobots are far beyond current capabilities.[1][73][74][75] Future advances in nanomedicine could give rise to life extension through the repair of many processes thought to be responsible for aging. K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair machines, including ones operating within cells and utilizing as yet hypothetical molecular machines, in his 1986 book Engines of Creation, with the first technical discussion of medical nanorobots by Robert Freitas appearing in 1999.[1] Raymond Kurzweil, a futurist and transhumanist, stated in his book The Singularity Is Near that he believes that advanced medical nanorobotics could completely remedy the effects of aging by 2030.[76] According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micromachines (see nanotechnology). Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom.[77]

    Thursday, April 26, 2018

    Entropy in thermodynamics and information theory

    From Wikipedia, the free encyclopedia

    There are close parallels between the mathematical expressions for the thermodynamic entropy, usually denoted by S, of a physical system in the statistical thermodynamics established by Ludwig Boltzmann and J. Willard Gibbs in the 1870s, and the information-theoretic entropy, usually expressed as H, of Claude Shannon and Ralph Hartley developed in the 1940s. Shannon commented on the similarity upon publicizing information theory in A Mathematical Theory of Communication.

    This article explores what links there are between the two concepts, and how far they can be regarded as connected.

    Equivalence of form of the defining expressions


    Boltzmann's grave in the Zentralfriedhof, Vienna, with bust and entropy formula.

    The defining expression for entropy in the theory of statistical mechanics established by Ludwig Boltzmann and J. Willard Gibbs in the 1870s, is of the form:
    S = -k_{\text{B}} \sum_i p_i \ln p_i,
    where p_i is the probability of the microstate i taken from an equilibrium ensemble.

    The defining expression for entropy in the theory of information established by Claude E. Shannon in 1948 is of the form:
    H = -\sum_i p_i \log_b p_i,
    where p_i is the probability of the message m_i taken from the message space M, and b is the base of the logarithm used. Common values of b are 2, Euler's number e, and 10, and the unit of entropy is shannon (or bit) for b = 2, nat for b = e, and hartley for b = 10.[1]

    Mathematically H may also be seen as an average information, taken over the message space, because when a certain message occurs with probability pi, the information quantity −log(pi) will be obtained.
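
    For example (a worked case added for illustration), for messages with probabilities p = (1/2, 1/4, 1/4) and b = 2, the individual messages carry −log₂(1/2) = 1 bit and −log₂(1/4) = 2 bits of information, so the average is H = (1/2)(1) + (1/4)(2) + (1/4)(2) = 1.5 bits per message.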

    If all the microstates are equiprobable (a microcanonical ensemble), the statistical thermodynamic entropy reduces to the form, as given by Boltzmann,
    S = k_{\text{B}} \ln W,
    where W is the number of microstates that corresponds to the macroscopic thermodynamic state. Therefore S depends on temperature.

    If all the messages are equiprobable, the information entropy reduces to the Hartley entropy
    H = \log_b |M|,
    where |M| is the cardinality of the message space M.

    The logarithm in the thermodynamic definition is the natural logarithm. It can be shown that the Gibbs entropy formula, with the natural logarithm, reproduces all of the properties of the macroscopic classical thermodynamics of Rudolf Clausius. (See article: Entropy (statistical views)).

    The logarithm can also be taken to the natural base in the case of information entropy. This is equivalent to choosing to measure information in nats instead of the usual bits (or more formally, shannons). In practice, information entropy is almost always calculated using base 2 logarithms, but this distinction amounts to nothing other than a change in units. One nat is about 1.44 bits.
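
    As a small illustrative sketch (ordinary Python with an arbitrary example distribution), the same distribution gives entropies that differ only by the unit conversion described above:

        import math

        # Shannon entropy of a distribution, in an arbitrary logarithm base.
        def entropy(probs, base=2):
            return -sum(p * math.log(p, base) for p in probs if p > 0)

        p = [0.5, 0.25, 0.25]             # example distribution (arbitrary)
        h_bits = entropy(p, base=2)       # in shannons (bits)
        h_nats = entropy(p, base=math.e)  # in nats

        print(h_bits)                     # 1.5
        print(h_nats)                     # ~1.0397
        print(h_bits / h_nats)            # ~1.4427 bits per nat, i.e. 1/ln(2)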

    For a simple compressible system that can only perform volume work, the first law of thermodynamics becomes
    dE = -p\,dV + T\,dS.
    But one can equally well write this equation in terms of what physicists and chemists sometimes call the 'reduced' or dimensionless entropy, σ = S/k, so that
    dE = -p\,dV + k_{\text{B}}T\,d\sigma.
    Just as S is conjugate to T, so σ is conjugate to kBT (the energy that is characteristic of T on a molecular scale).

    Theoretical relationship

    Despite the foregoing, there is a difference between the two quantities. The information entropy H can be calculated for any probability distribution (if the "message" is taken to be that the event i which had probability pi occurred, out of the space of the events possible), while the thermodynamic entropy S refers to thermodynamic probabilities pi specifically. The difference is more theoretical than actual, however, because any probability distribution can be approximated arbitrarily closely by some thermodynamic system.[citation needed]

    Moreover, a direct connection can be made between the two. If the probabilities in question are the thermodynamic probabilities pi: the (reduced) Gibbs entropy σ can then be seen as simply the amount of Shannon information needed to define the detailed microscopic state of the system, given its macroscopic description. Or, in the words of G. N. Lewis writing about chemical entropy in 1930, "Gain in entropy always means loss of information, and nothing more". To be more concrete, in the discrete case using base two logarithms, the reduced Gibbs entropy is equal to the minimum number of yes–no questions needed to be answered in order to fully specify the microstate, given that we know the macrostate.
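
    A small sketch of that counting argument (illustrative code; W is taken to be a power of two so the count is exact): identifying one of W equally likely microstates by repeatedly halving the set of candidates takes log₂(W) yes–no questions, which is the base-two entropy of the uniform distribution over the microstates.

        import math

        # Number of yes/no questions needed to pin down one of W equally
        # likely microstates by repeatedly halving the candidate set.
        def questions_needed(W):
            count = 0
            candidates = W
            while candidates > 1:
                candidates = math.ceil(candidates / 2)  # one question halves the set
                count += 1
            return count

        for W in (2, 8, 1024):
            print(W, questions_needed(W), math.log2(W))
        # 2 1 1.0
        # 8 3 3.0
        # 1024 10 10.0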

    Furthermore, the prescription to find the equilibrium distributions of statistical mechanics—such as the Boltzmann distribution—by maximising the Gibbs entropy subject to appropriate constraints (the Gibbs algorithm) can be seen as something not unique to thermodynamics, but as a principle of general relevance in statistical inference, if it is desired to find a maximally uninformative probability distribution, subject to certain constraints on its averages. (These perspectives are explored further in the article Maximum entropy thermodynamics.)

    The Shannon entropy in information theory is sometimes expressed in units of bits per symbol. The physical entropy may be on a "per quantity" basis (h) which is called "intensive" entropy instead of the usual total entropy which is called "extensive" entropy. The "shannons" of a message (H) are its total "extensive" information entropy and is h times the number of bits in the message.

    A direct and physically real relationship between h and S can be found by assigning a symbol to each microstate that occurs per mole, kilogram, volume, or particle of a homogeneous substance, then calculating the 'h' of these symbols. By theory or by observation, the symbols (microstates) will occur with different probabilities and this will determine h. If there are N moles, kilograms, volumes, or particles of the unit substance, the relationship between h (in bits per unit substance) and physical extensive entropy in nats is:
    S = k_{\text{B}} \ln(2)\, N h
    where ln(2) is the conversion factor from base 2 of Shannon entropy to the natural base e of physical entropy. N h is the amount of information in bits needed to describe the state of a physical system with entropy S. Landauer's principle demonstrates the reality of this by stating that the minimum energy E required (and therefore heat Q generated) by an ideally efficient memory change or logic operation that irreversibly erases or merges N h bits of information will be S times the temperature, which is
    E = Q = T k_{\text{B}} \ln(2)\, N h
    where h is in informational bits and E and Q are in physical Joules. This has been experimentally confirmed.[2]
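
    As a quick numerical sketch (standard constants; the temperature and bit count are arbitrary example values), the bound works out as follows for erasing a single bit at room temperature:

        import math

        k_B = 1.380649e-23   # Boltzmann constant, J/K
        T = 300.0            # room temperature, K (example value)
        bits = 1.0           # N*h: bits of information irreversibly erased (example)

        # Minimum heat dissipated: E = Q = T * k_B * ln(2) * (N*h)
        E = T * k_B * math.log(2) * bits
        print(E)  # ~2.87e-21 J per bit at 300 K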

    Temperature is a measure of the average kinetic energy per particle in an ideal gas (kelvins = 2/3 × joules/k_B), so the J/K units of k_B are fundamentally unitless (joules/joules). k_B is the conversion factor from energy in 3/2 kelvins to joules for an ideal gas. If kinetic energy measurements per particle of an ideal gas were expressed as joules instead of kelvins, k_B in the above equations would be replaced by 3/2. This shows that S is a true statistical measure of microstates that does not have a fundamental physical unit other than the units of information, in this case "nats", which is just a statement of which logarithm base was chosen by convention.

    Information is physical

    Szilard's engine


    N-atom engine schematic

    A physical thought experiment demonstrating how just the possession of information might in principle have thermodynamic consequences was established in 1929 by Leó Szilárd, in a refinement of the famous Maxwell's demon scenario.

    Consider Maxwell's set-up, but with only a single gas particle in a box. If the supernatural demon knows which half of the box the particle is in (equivalent to a single bit of information), it can close a shutter between the two halves of the box, push a piston unopposed into the empty half of the box, and then extract k_{\text{B}}T \ln 2 joules of useful work if the shutter is opened again. The particle can then be left to isothermally expand back to its original equilibrium occupied volume. In just the right circumstances, therefore, the possession of a single bit of Shannon information (a single bit of negentropy in Brillouin's term) really does correspond to a reduction in the entropy of the physical system. The global entropy is not decreased, but information-to-energy conversion is possible.

    The principle has actually been demonstrated using a phase-contrast microscope equipped with a high-speed camera connected to a computer acting as the demon.[3] In this experiment, information-to-energy conversion is performed on a Brownian particle by means of feedback control; that is, by synchronizing the work given to the particle with the information obtained on its position. Computing energy balances for different feedback protocols has confirmed that the Jarzynski equality requires a generalization that accounts for the amount of information involved in the feedback.

    Landauer's principle

    In fact one can generalise: any information that has a physical representation must somehow be embedded in the statistical mechanical degrees of freedom of a physical system.

    Thus, Rolf Landauer argued in 1961, if one were to imagine starting with those degrees of freedom in a thermalised state, there would be a real reduction in thermodynamic entropy if they were then re-set to a known state. This can only be achieved under information-preserving microscopically deterministic dynamics if the uncertainty is somehow dumped somewhere else – i.e. if the entropy of the environment (or the non information-bearing degrees of freedom) is increased by at least an equivalent amount, as required by the Second Law, by gaining an appropriate quantity of heat: specifically kT ln 2 of heat for every 1 bit of randomness erased.

    On the other hand, Landauer argued, there is no thermodynamic objection to a logically reversible operation potentially being achieved in a physically reversible way in the system. It is only logically irreversible operations – for example, the erasing of a bit to a known state, or the merging of two computation paths – which must be accompanied by a corresponding entropy increase. When information is physical, all processing of its representations, i.e. generation, encoding, transmission, decoding and interpretation, are natural processes where entropy increases by consumption of free energy.[4]

    Applied to the Maxwell's demon/Szilard engine scenario, this suggests that it might be possible to "read" the state of the particle into a computing apparatus with no entropy cost; but only if the apparatus has already been SET into a known state, rather than being in a thermalised state of uncertainty. To SET (or RESET) the apparatus into this state will cost all the entropy that can be saved by knowing the state of Szilard's particle.

    Negentropy

    Shannon entropy has been related by physicist Léon Brillouin to a concept sometimes called negentropy. In 1953, Brillouin derived a general equation[5] stating that the changing of an information bit value requires at least kT ln(2) energy. This is the same energy as the work Leo Szilard's engine produces in the idealistic case, which in turn equals the quantity found by Landauer. In his book,[6] he further explored this problem, concluding that any cause of a bit value change (measurement, decision about a yes/no question, erasure, display, etc.) will require the same amount, kT ln(2), of energy. Consequently, acquiring information about a system’s microstates is associated with an entropy production, while erasure yields entropy production only when the bit value is changing. Setting up a bit of information in a sub-system originally in thermal equilibrium results in a local entropy reduction. However, there is no violation of the second law of thermodynamics, according to Brillouin, since a reduction in any local system’s thermodynamic entropy results in an increase in thermodynamic entropy elsewhere. In this way, Brillouin clarified the meaning of negentropy, which was considered controversial because its earlier understanding could yield a Carnot efficiency higher than one. Additionally, the relationship between energy and information formulated by Brillouin has been proposed as a connection between the amount of bits that the brain processes and the energy it consumes.[7]

    In 2009, Mahulikar & Herwig redefined thermodynamic negentropy as the specific entropy deficit of the dynamically ordered sub-system relative to its surroundings.[8] This definition enabled the formulation of the Negentropy Principle, which is mathematically shown to follow from the 2nd Law of Thermodynamics, during order existence.

    Black holes

    Stephen Hawking often speaks of the thermodynamic entropy of black holes in terms of their information content.[9] Do black holes destroy information? It appears that there are deep relations between the entropy of a black hole and information loss.[10] See Black hole thermodynamics and Black hole information paradox.

    Quantum theory

    Hirschman showed,[11] cf. Hirschman uncertainty, that Heisenberg's uncertainty principle can be expressed as a particular lower bound on the sum of the classical distribution entropies of the quantum observable probability distributions of a quantum mechanical state (the square of the wave-function), in coordinate and also momentum space, when expressed in Planck units. The resulting inequalities provide a tighter bound on the uncertainty relations of Heisenberg.

    It is meaningful to assign a "joint entropy", because positions and momenta are quantum conjugate variables and are therefore not jointly observable. Mathematically, they have to be treated as a joint distribution. Note that this joint entropy is not equivalent to the Von Neumann entropy, −Tr ρ ln ρ = −⟨ln ρ⟩. Hirschman's entropy is said to account for the full information content of a mixture of quantum states.[12]

    (Dissatisfaction with the Von Neumann entropy from quantum information points of view has been expressed by Stotland, Pomeransky, Bachmat and Cohen, who have introduced a yet different definition of entropy that reflects the inherent uncertainty of quantum mechanical states. This definition allows distinction between the minimum uncertainty entropy of pure states, and the excess statistical entropy of mixtures.[13])

    The fluctuation theorem

    The fluctuation theorem provides a mathematical justification of the second law of thermodynamics under these principles, and precisely defines the limits of that law's applicability to systems away from thermodynamic equilibrium.
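    For reference, one standard statement of the theorem (the Evans–Searles form for the total dimensionless entropy production Σ_t over a time interval t, quoted as background rather than from this article's references) is

    \frac{P(\Sigma_{t}=A)}{P(\Sigma_{t}=-A)} = e^{A},

    so trajectories that consume entropy are exponentially suppressed relative to those that produce it, and second-law behaviour is recovered on average and in the long-time or large-system limit.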

    Criticism

    There are criticisms of the link between thermodynamic entropy and information entropy.

    The most common criticism is that information entropy cannot be related to thermodynamic entropy because there is no concept of temperature, energy, or the second law in the discipline of information theory.[14][15][16][17][18] This can best be discussed by considering the fundamental equation of thermodynamics:
    dU = Σi Fi dxi
    where the Fi are "generalized forces" and the dxi are "generalized displacements". This is analogous to the mechanical equation dE = F dx, where dE is the change in the kinetic energy of an object displaced by a distance dx under the influence of a force F. For example, for a simple gas, we have:
    dU = T dS − P dV + μ dN
    where the temperature (T), pressure (P), and chemical potential (μ) are generalized forces which, when imbalanced, result in a generalized displacement in entropy (S), volume (−V) and quantity (N) respectively, and the products of the forces and displacements yield the change in the internal energy (dU) of the gas.

    In the mechanical example, it would not be correct to declare that dx is not a geometric displacement merely because it ignores the dynamic relationship between displacement, force, and energy. Displacement, as a concept in geometry, does not require the concepts of energy and force for its definition, and so one might expect that entropy may not require the concepts of energy and temperature for its definition. The situation is not that simple, however. In classical thermodynamics, which studies thermodynamics from a purely empirical, or measurement, point of view, thermodynamic entropy can only be measured by considering energy and temperature. Clausius' statement dS = δQ/T, or, equivalently, when all other effective displacements are zero, dS = dU/T, is the only way to actually measure thermodynamic entropy. It is only with the introduction of statistical mechanics, which views a thermodynamic system as a collection of particles and explains classical thermodynamics in terms of probability distributions, that entropy can be considered separately from temperature and energy. This is expressed in Boltzmann's famous entropy formula S = kB ln(W). Here kB is Boltzmann's constant, and W is the number of equally probable microstates which yield a particular thermodynamic state, or macrostate.

    Boltzmann's equation is presumed to provide a link between thermodynamic entropy S and information entropy H = −Σi pi ln pi = ln(W), where pi = 1/W are the equal probabilities of the microstates. This interpretation has also been criticized. While some say that the equation is merely a unit conversion equation between thermodynamic and information entropy, this is not completely correct.[19] A unit conversion equation will, e.g., change inches to centimeters, and yield two measurements in different units of the same physical quantity (length). Since thermodynamic and information entropy are dimensionally unequal (energy per unit temperature vs. units of information), Boltzmann's equation is more akin to x = ct, where x is the distance travelled by a light beam in time t and c is the speed of light. While we cannot say that length x and time t represent the same physical quantity, we can say that, in the case of a light beam, since c is a universal constant, they provide perfectly accurate measures of each other (for example, the light-year is used as a measure of distance). Likewise, in the case of Boltzmann's equation, while we cannot say that thermodynamic entropy S and information entropy H represent the same physical quantity, we can say that, for a thermodynamic system, since kB is a universal constant, they provide perfectly accurate measures of each other.
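    A small numerical sketch of this "conversion" (assuming nothing beyond the two formulas just given): for any number W of equally probable microstates, S in joules per kelvin and H in bits always differ by the same constant factor kB ln 2, which plays the role of c in x = ct:

    import math

    k_B = 1.380649e-23   # Boltzmann constant, J/K

    def thermodynamic_entropy(W):
        """Boltzmann entropy S = k_B ln(W), in J/K."""
        return k_B * math.log(W)

    def information_entropy_bits(W):
        """Shannon entropy of W equally probable microstates, in bits."""
        return math.log2(W)

    for W in (2, 1024, 10**23):
        S = thermodynamic_entropy(W)
        H = information_entropy_bits(W)
        print(f"W={W:.3g}  S={S:.3e} J/K  H={H:.1f} bits  S/H={S / H:.3e} J/K per bit")
    # S/H is always k_B ln 2 ≈ 9.57e-24 J/K per bit, independent of W.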

    The question then remains whether ln(W) is an information-theoretic quantity. If it is measured in bits, one can say that, given the macrostate, it represents the number of yes/no questions one must ask to determine the microstate, clearly an information-theoretic concept. Objectors point out that such a process is purely conceptual, and has nothing to do with the measurement of entropy. Then again, the whole of statistical mechanics is purely conceptual, serving only to provide an explanation of the "pure" science of thermodynamics.

    Ultimately, the criticism of the link between thermodynamic entropy and information entropy is a matter of terminology, rather than substance. Neither side in the controversy will disagree on the solution to a particular thermodynamic or information-theoretic problem.

    Topics of recent research

    Is information quantized?

    In 1995, Tim Palmer signalled[citation needed] two unwritten assumptions about Shannon's definition of information that may make it inapplicable as such to quantum mechanics:
    • The supposition that there is such a thing as an observable state (for instance the upper face of a die or a coin) before the observation begins
    • The fact that knowing this state does not depend on the order in which observations are made (commutativity)
    Anton Zeilinger's and Caslav Brukner's article[20] synthesized and developed these remarks. The so-called Zeilinger's principle suggests that the quantization observed in QM could be bound to information quantization (one cannot observe less than one bit, and what is not observed is by definition "random"). Nevertheless, these claims remain quite controversial. Detailed discussions of the applicability of Shannon information in quantum mechanics, and an argument that Zeilinger's principle cannot explain quantization, have been published;[21][22][23] they show that Brukner and Zeilinger change, in the middle of the calculation in their article, the numerical values of the probabilities needed to compute the Shannon entropy, so that the calculation makes little sense.

    Extracting work from quantum information in a Szilárd engine

    In 2013, a description was published[24] of a two-atom version of a Szilárd engine using quantum discord to generate work from purely quantum information.[25] Refinements in the lower temperature limit were suggested.[26]

    Algorithmic cooling

    Algorithmic cooling is an algorithmic method for transferring heat (or entropy) from some qubits to others, or out of the system and into the environment, resulting in a cooling effect. This cooling effect may be used to initialize cold (highly pure) qubits for quantum computation and to increase the polarization of certain spins in nuclear magnetic resonance.
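    As a rough classical illustration of the underlying entropy-transfer idea (a Monte-Carlo sketch of a single compression step, not the full reversible quantum protocol or any specific published algorithm): a CNOT between two equally biased bits, followed by conditioning on the target bit reading 0 (i.e. the two bits agreed), boosts the control bit's bias from ε to about 2ε/(1 + ε²), while the target bit carries away the excess randomness. In the actual quantum protocols this conditioning is implemented reversibly (e.g. with controlled swaps) rather than by post-selection.

    import random

    def biased_bit(epsilon):
        """Return 0 with probability (1 + epsilon)/2, else 1 (bias epsilon toward 0)."""
        return 0 if random.random() < (1 + epsilon) / 2 else 1

    def compression_step_bias(epsilon, trials=200_000):
        """Estimate the control bit's bias after a classical CNOT, conditioned on
        the target bit reading 0 (i.e. the two input bits were equal)."""
        kept = []
        for _ in range(trials):
            a = biased_bit(epsilon)    # control bit
            b = biased_bit(epsilon)    # target bit
            b ^= a                     # classical CNOT: target := target XOR control
            if b == 0:                 # the bits agreed, so the control bit is "colder"
                kept.append(a)
        p0 = kept.count(0) / len(kept)
        return 2 * p0 - 1              # bias = P(0) - P(1)

    eps = 0.1
    print("input bias :", eps)
    print("output bias:", round(compression_step_bias(eps), 3))   # ≈ 0.198
    print("prediction :", round(2 * eps / (1 + eps**2), 3))       # 2ε/(1+ε²) ≈ 0.198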

    Operator (computer programming)

    From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...