
Sunday, September 1, 2024

AI effect

From Wikipedia, the free encyclopedia

The AI effect occurs when onlookers discount the behavior of an artificial intelligence program as not "real" intelligence.

The author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." Researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"

Definition

"The AI effect" refers to a phenomenon where either the definition of AI or the concept of intelligence is adjusted to exclude capabilities that AI systems have mastered. This often manifests as tasks that AI can now perform successfully no longer being considered part of AI, or as the notion of intelligence itself being redefined to exclude AI achievements. Edward Geist credits John McCarthy for coining the term "AI effect" to describe this phenomenon.

McCorduck calls it an "odd paradox" that "practical AI successes, computational programs that actually achieved intelligent behavior were soon assimilated into whatever application domain they were found to be useful in, and became silent partners alongside other problem-solving approaches, which left AI researchers to deal only with the 'failures', the tough nuts that couldn't yet be cracked." It is an example of moving the goalposts.

Tesler's Theorem is:

AI is whatever hasn't been done yet.

Douglas Hofstadter quotes this, as do many other commentators.

When problems have not yet been formalised, they can still be characterised by a model of computation that includes human computation. The computational burden of a problem is split between a computer and a human: one part is solved by computer and the other part solved by a human. This formalisation is referred to as a human-assisted Turing machine.
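The division of labor in a human-assisted Turing machine can be sketched in code. This is a minimal illustration, not a formal construction: all names are invented for the example, and the human oracle is stubbed out so that the sketch runs.

```python
# Minimal sketch of a human-assisted computation: the machine solves the
# formalised part of a problem and defers the rest to a human "oracle".
# All names here are illustrative, not from any standard treatment.

def machine_part(n: int) -> int:
    """Formalised part: an arithmetic subproblem the computer handles."""
    return sum(range(n + 1))

def human_oracle(question: str) -> bool:
    """Stands in for a human answering an unformalised question.
    Stubbed out here so the example is runnable."""
    return "yes" in question

def solve(n: int, question: str) -> int:
    # Split the computational burden: the computer does one part,
    # the "human" decides the other.
    base = machine_part(n)
    return base if human_oracle(question) else -base

print(solve(4, "yes or no?"))  # 10
```

The point of the formalism is exactly this split: the overall computation is well defined even though one of its components is not (yet) mechanised.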

AI applications become mainstream

Software and algorithms developed by AI researchers are now integrated into many applications throughout the world without being called AI. This underappreciation has been noted in fields as diverse as computer chess, marketing, agricultural automation and hospitality.

Michael Swaine reports "AI advances are not trumpeted as artificial intelligence so much these days, but are often seen as advances in some other field". "AI has become more important as it has become less conspicuous", Patrick Winston says. "These days, it is hard to find a big system that does not work, in part, because of ideas developed or matured in the AI world."

According to Stottler Henke, "The great practical benefits of AI applications and even the existence of AI in many software products go largely unnoticed by many despite the already widespread use of AI techniques in software. This is the AI effect. Many marketing people don't use the term 'artificial intelligence' even when their company's products rely on some AI techniques. Why not?"

Marvin Minsky writes "This paradox resulted from the fact that whenever an AI research project made a useful new discovery, that product usually quickly spun off to form a new scientific or commercial specialty with its own distinctive name. These changes in name led outsiders to ask, Why do we see so little progress in the central field of artificial intelligence?"

Nick Bostrom observes that "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labelled AI anymore."

The AI effect on decision-making in supply chain risk management is a severely understudied area.

To avoid the AI effect problem, the editors of a special issue of IEEE Software on AI and software engineering recommend not overselling – not hyping – the real achievable results to start with.

The Bulletin of the Atomic Scientists organization views the AI effect as a worldwide strategic military threat. They point out that it obscures the fact that applications of AI had already found their way into both US and Soviet militaries during the Cold War. AI tools to advise humans regarding weapons deployment were developed by both sides and received very limited usage during that time. They believe this constantly shifting failure to recognise AI continues to undermine human recognition of security threats in the present day.

Some experts think that the AI effect will continue, with advances in AI continually producing objections and redefinitions of public expectations. Some also believe that the AI effect will expand to include the dismissal of specialised artificial intelligences.

Legacy of the AI winter

In the early 1990s, during the second "AI winter" many AI researchers found that they could get more funding and sell more software if they avoided the bad name of "artificial intelligence" and instead pretended their work had nothing to do with intelligence at all.

Patty Tascarella wrote in 2006: "Some believe the word 'robotics' actually carries a stigma that hurts a company's chances at funding."

Saving a place for humanity at the top of the chain of being

Michael Kearns suggests that "people subconsciously are trying to preserve for themselves some special role in the universe". By discounting artificial intelligence people can continue to feel unique and special. Kearns argues that the change in perception known as the AI effect can be traced to the mystery being removed from the system: being able to trace the cause of events implies that the system is a form of automation rather than intelligence.

A related effect has been noted in the history of animal cognition and in consciousness studies, where every time a capacity formerly thought of as uniquely human is discovered in animals (e.g. the ability to make tools, or passing the mirror test), the overall importance of that capacity is deprecated.

Herbert A. Simon, when asked about the lack of AI's press coverage at the time, said, "What made AI different was that the very idea of it arouses a real fear and hostility in some human breasts. So you are getting very strong emotional reactions. But that's okay. We'll live with that."

Mueller (1987) proposed comparing AI to human intelligence, coining the standard of Human-Level Machine Intelligence. This standard nonetheless suffers from the AI effect when different humans are used as the benchmark.


Deep Blue defeats Kasparov

When IBM's chess-playing computer Deep Blue succeeded in defeating Garry Kasparov in 1997, public perception of chess playing shifted from a difficult mental task to a routine operation.

The public complained that Deep Blue had only used "brute force methods" and that it wasn't real intelligence. Notably, John McCarthy, an AI pioneer who coined the term "artificial intelligence", was disappointed by Deep Blue. He described it as a mere brute-force machine that did not have any deep understanding of the game. McCarthy also criticized how widespread the AI effect is ("As soon as it works, no one calls it AI anymore"), but in this case did not think that Deep Blue was a good example.

On the other hand, Fred A. Reed writes:

A problem that proponents of AI regularly face is this: When we know how a machine does something "intelligent", it ceases to be regarded as intelligent. If I beat the world's chess champion, I'd be regarded as highly bright.

Self-assembly

Self-assembly of lipids (a), proteins (b), and (c) SDS-cyclodextrin complexes. SDS is a surfactant with a hydrocarbon tail (yellow) and a SO4 head (blue and red), while cyclodextrin is a saccharide ring (green C and red O atoms).
Transmission electron microscopy image of an iron oxide nanoparticle. Regularly arranged dots within the dashed border are columns of Fe atoms. Left inset is the corresponding electron diffraction pattern. Scale bar: 10 nm.
Iron oxide nanoparticles can be dispersed in an organic solvent (toluene). Upon its evaporation, they may self-assemble (left and right panels) into micron-sized mesocrystals (center) or multilayers (right). Each dot in the left image is a traditional "atomic" crystal shown in the image above. Scale bars: 100 nm (left), 25 μm (center), 50 nm (right).
STM image of self-assembled Br4-pyrene molecules on Au(111) surface (top) and its model (bottom; pink spheres are Br atoms).

Self-assembly is a process in which a disordered system of pre-existing components forms an organized structure or pattern as a consequence of specific, local interactions among the components themselves, without external direction. When the constitutive components are molecules, the process is termed molecular self-assembly.

AFM imaging of self-assembly of 2-aminoterephthalic acid molecules on (104)-oriented calcite.

Self-assembly can be classified as either static or dynamic. In static self-assembly, the ordered state forms as a system approaches equilibrium, reducing its free energy. In dynamic self-assembly, by contrast, the ordered pattern requires a continual dissipation of energy to persist; such patterns are not commonly described as "self-assembled" by scientists in the associated disciplines and are better described as "self-organized", although the terms are often used interchangeably.

In chemistry and materials science

The DNA structure at left (schematic shown) will self-assemble into the structure visualized by atomic force microscopy at right.

Self-assembly in the classic sense can be defined as the spontaneous and reversible organization of molecular units into ordered structures by non-covalent interactions. The first property of a self-assembled system that this definition suggests is the spontaneity of the self-assembly process: the interactions responsible for the formation of the self-assembled system act on a strictly local level—in other words, the nanostructure builds itself.

Although self-assembly typically occurs between weakly-interacting species, this organization may be transferred into strongly-bound covalent systems. An example of this may be observed in the self-assembly of polyoxometalates. Evidence suggests that such molecules assemble via a dense-phase type mechanism whereby small oxometalate ions first assemble non-covalently in solution, followed by a condensation reaction that covalently binds the assembled units. This process can be aided by the introduction of templating agents to control the formed species. In such a way, highly organized covalent molecules may be formed in a specific manner.

A self-assembled nano-structure is an object that emerges from the ordering and aggregation of individual nano-scale objects guided by some physical principle.

A particularly counter-intuitive example of a physical principle that can drive self-assembly is entropy maximization. Though entropy is conventionally associated with disorder, under suitable conditions entropy can drive nano-scale objects to self-assemble into target structures in a controllable way.

Another important class of self-assembly is field-directed assembly. An example of this is the phenomenon of electrostatic trapping. In this case an electric field is applied between two metallic nano-electrodes. The particles present in the environment are polarized by the applied electric field. Because of the dipole interaction with the electric field gradient, the particles are attracted to the gap between the electrodes. Generalizations of this type of approach involving other kinds of fields have also been reported, e.g., magnetic fields, capillary interactions for particles trapped at interfaces, and elastic interactions for particles suspended in liquid crystals.
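The attraction toward the gap can be sketched numerically: a polarizable particle in a nonuniform field feels a force proportional to the gradient of the squared field magnitude, pulling it toward the field maximum. The field profile and polarizability value below are invented for the illustration, not taken from any real device.

```python
# Toy 1D model of electrostatic trapping: a polarizable particle in a
# nonuniform field is pulled toward the region of highest field (the gap).
# The field profile and polarizability are illustrative assumptions.

alpha = 1e-30  # particle polarizability (illustrative value)

def field(x: float) -> float:
    """Field magnitude peaking at the electrode gap, placed at x = 0."""
    return 1e6 / (1.0 + x**2)

def force(x: float, h: float = 1e-6) -> float:
    """F = (alpha/2) * d(E^2)/dx, evaluated by a central difference."""
    e2 = lambda y: field(y) ** 2
    return 0.5 * alpha * (e2(x + h) - e2(x - h)) / (2 * h)

# A particle at x > 0 feels a negative force: attraction toward the gap.
print(force(0.5) < 0)   # True
print(force(-0.5) > 0)  # True
```

The sign of the force on either side of the gap is what produces trapping: the particle is driven toward the field maximum regardless of which side it starts on.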

Regardless of the mechanism driving self-assembly, self-assembly approaches to materials synthesis avoid the problem of having to construct materials one building block at a time. Avoiding one-at-a-time approaches is important because the time required to place building blocks into a target structure becomes prohibitively long for structures of macroscopic size.

Once materials of macroscopic size can be self-assembled, they can find use in many applications. For example, nano-structures such as nano-vacuum gaps are used for energy storage and nuclear energy conversion. Self-assembled tunable materials are promising candidates for large-surface-area electrodes in batteries and organic photovoltaic cells, as well as for microfluidic sensors and filters.

Distinctive features

At this point, one may argue that any chemical reaction driving atoms and molecules to assemble into larger structures, such as precipitation, could fall into the category of self-assembly. However, there are at least three distinctive features that make self-assembly a distinct concept.

Order

First, the self-assembled structure must have a higher order than the isolated components, be it a shape or a particular task that the self-assembled entity may perform. This is generally not true in chemical reactions, where an ordered state may proceed towards a disordered state depending on thermodynamic parameters.

Interactions

The second important aspect of self-assembly is the predominant role of weak interactions (e.g. van der Waals forces, capillary forces, hydrogen bonds, or entropic forces) compared to more "traditional" covalent, ionic, or metallic bonds. These weak interactions are important in materials synthesis for two reasons.

First, weak interactions take a prominent place in materials, especially in biological systems. For instance, they determine the physical properties of liquids, the solubility of solids, and the organization of molecules in biological membranes.

Second, in addition to the strength of the interactions, interactions with varying degrees of specificity can control self-assembly. DNA pairing constitutes the highest-specificity interaction that has been used to drive self-assembly. At the other extreme, the least specific interactions are possibly those provided by emergent forces that arise from entropy maximization.
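As a simplified illustration of that specificity, two DNA strands hybridize only when every base meets its Watson-Crick complement. The sketch below deliberately ignores strand orientation and partial or mismatched binding, which matter in real hybridization.

```python
# Sketch of why DNA pairing is a highly specific interaction: two strands
# "bind" here only if every base pairs with its Watson-Crick complement.
# Real hybridization is antiparallel and tolerates partial mismatches;
# this model is intentionally minimal.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def binds(strand_a: str, strand_b: str) -> bool:
    """True if strand_b is the exact base-by-base complement of strand_a."""
    return len(strand_a) == len(strand_b) and all(
        COMPLEMENT[a] == b for a, b in zip(strand_a, strand_b)
    )

print(binds("ATGC", "TACG"))  # True: fully complementary
print(binds("ATGC", "TACC"))  # False: a single mismatch prevents binding
```

Because an n-base strand recognizes exactly one complementary sequence out of 4^n possibilities, designed DNA sequences can address specific partners in a mixture, which is what makes DNA-mediated self-assembly so programmable.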

Building blocks

The third distinctive feature of self-assembly is that the building blocks are not only atoms and molecules, but span a wide range of nano- and mesoscopic structures, with different chemical compositions, functionalities, and shapes. Research into possible three-dimensional shapes of self-assembling micrites examines Platonic solids (regular polyhedra). The term 'micrite' was created by DARPA to refer to sub-millimeter sized microrobots, whose self-organizing abilities may be compared with those of slime mold. Recent examples of novel building blocks include polyhedra and patchy particles. Examples also include microparticles with complex geometries, such as hemispheres, dimers, discs, rods, and multimers, as well as molecules. These nanoscale building blocks can in turn be synthesized through conventional chemical routes or by other self-assembly strategies such as directional entropic forces. More recently, inverse design approaches have appeared, in which one fixes a target self-assembled behavior and determines an appropriate building block that will realize that behavior.

Thermodynamics and kinetics

Self-assembly in microscopic systems usually starts from diffusion, followed by the nucleation of seeds, subsequent growth of the seeds, and ends at Ostwald ripening. The thermodynamic driving free energy can be enthalpic, entropic, or both. In either case, self-assembly proceeds through the formation and breaking of bonds, possibly with non-traditional forms of mediation. The kinetics of the self-assembly process is usually related to diffusion: the adsorption rate often follows a Langmuir adsorption model, which in the diffusion-controlled regime (relatively dilute solution) can be estimated from Fick's laws of diffusion. The desorption rate is determined by the bond strength of the surface molecules/atoms, with a thermal activation energy barrier. The growth rate is set by the competition between these two processes.
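The two competing rates just mentioned can be made concrete with a short numeric sketch. The equilibrium constant, activation energy, and attempt frequency below are illustrative values, not taken from any specific system.

```python
import math

# Sketch of the competing processes above, with illustrative constants:
# Langmuir coverage for adsorption and an Arrhenius-type thermally
# activated desorption rate.

def langmuir_coverage(c: float, K: float) -> float:
    """Equilibrium fractional surface coverage, theta = K*c / (1 + K*c)."""
    return K * c / (1.0 + K * c)

def desorption_rate(T: float, Ea: float, nu: float = 1e13) -> float:
    """Arrhenius rate nu * exp(-Ea / (kB*T)); Ea in joules, T in kelvin."""
    kB = 1.380649e-23  # Boltzmann constant, J/K
    return nu * math.exp(-Ea / (kB * T))

print(round(langmuir_coverage(1.0, 1.0), 3))  # 0.5: half coverage at c = 1/K
# Desorption accelerates with temperature, shifting the growth balance:
print(desorption_rate(300, 1e-19) < desorption_rate(350, 1e-19))  # True
```

The growth rate of the assembly is governed by the balance of these two terms: coverage saturates as concentration rises, while heating exponentially speeds up bond breaking.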

Examples

Important examples of self-assembly in materials science include the formation of molecular crystals, colloids, lipid bilayers, phase-separated polymers, and self-assembled monolayers. The folding of polypeptide chains into proteins and the folding of nucleic acids into their functional forms are examples of self-assembled biological structures. Recently, a three-dimensional macroporous structure was prepared via self-assembly of a diphenylalanine derivative under cryoconditions; the resulting material may find application in regenerative medicine or drug delivery systems. P. Chen et al. demonstrated a microscale self-assembly method using the air-liquid interface established by a Faraday wave as a template. This self-assembly method can be used to generate diverse sets of symmetrical and periodic patterns from microscale materials such as hydrogels, cells, and cell spheroids. Yasuga et al. demonstrated how fluid interfacial energy drives the emergence of three-dimensional periodic structures in micropillar scaffolds. Myllymäki et al. demonstrated the formation of micelles that undergo a change in morphology to fibers and eventually to spheres, all controlled by solvent change.

Properties

Self-assembly extends the scope of chemistry, aiming to synthesize products with order and functionality, extending chemical bonding to weak interactions, and encompassing the self-assembly of nanoscale building blocks at all length scales. In covalent synthesis and polymerization, the scientist links atoms together in any desired conformation, which does not necessarily have to be the energetically most favoured position; self-assembling molecules, on the other hand, adopt a structure at the thermodynamic minimum, finding the best combination of interactions between subunits but not forming covalent bonds between them. In self-assembling structures, the scientist must predict this minimum, not merely place the atoms in the location desired.

Another characteristic common to nearly all self-assembled systems is their thermodynamic stability. For self-assembly to take place without intervention of external forces, the process must lead to a lower Gibbs free energy, thus self-assembled structures are thermodynamically more stable than the single, unassembled components. A direct consequence is the general tendency of self-assembled structures to be relatively free of defects. An example is the formation of two-dimensional superlattices composed of an orderly arrangement of micrometre-sized polymethylmethacrylate (PMMA) spheres, starting from a solution containing the microspheres, in which the solvent is allowed to evaporate slowly in suitable conditions. In this case, the driving force is capillary interaction, which originates from the deformation of the surface of a liquid caused by the presence of floating or submerged particles.
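The free-energy criterion above can be stated as a short worked example: assembly is spontaneous when ΔG = ΔH − TΔS is negative. The enthalpy and entropy values below are purely illustrative.

```python
# Sketch of the thermodynamic criterion above: self-assembly proceeds
# without external intervention only if it lowers the Gibbs free energy,
# dG = dH - T*dS < 0. The numbers used here are purely illustrative.

def delta_g(dH: float, T: float, dS: float) -> float:
    """Gibbs free energy change (J/mol) at temperature T (K)."""
    return dH - T * dS

def is_spontaneous(dH: float, T: float, dS: float) -> bool:
    return delta_g(dH, T, dS) < 0

# Enthalpically driven assembly: bond formation releases heat (dH < 0)
# and more than pays for the entropy lost on ordering (dS < 0).
print(is_spontaneous(dH=-40_000, T=298, dS=-50))  # True:  -40000 + 14900 < 0
# The same process stops being spontaneous at high enough temperature:
print(is_spontaneous(dH=-40_000, T=900, dS=-50))  # False: -40000 + 45000 > 0
```

The second line also explains why many self-assembled structures disassemble on heating: the −TΔS penalty for ordering grows linearly with temperature until it outweighs the enthalpic gain.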

These two properties—weak interactions and thermodynamic stability—can be recalled to rationalise another property often found in self-assembled systems: sensitivity to perturbations exerted by the external environment. Small fluctuations that alter thermodynamic variables can lead to marked changes in the structure and even compromise it, either during or after self-assembly. The weak nature of the interactions is responsible for the flexibility of the architecture and allows rearrangements of the structure in the direction determined by thermodynamics. If fluctuations bring the thermodynamic variables back to the starting condition, the structure is likely to return to its initial configuration. This leads us to identify one more property of self-assembly, generally not observed in materials synthesized by other techniques: reversibility.

Self-assembly is a process which is easily influenced by external parameters. This feature can make synthesis rather complex because of the need to control many free parameters. Yet self-assembly has the advantage that a large variety of shapes and functions on many length scales can be obtained.

The fundamental condition needed for nanoscale building blocks to self-assemble into an ordered structure is the simultaneous presence of long-range repulsive and short-range attractive forces.
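A toy pair potential illustrates this condition: a short-range attractive well (here Lennard-Jones) combined with a long-range repulsive tail (here a screened-Coulomb, Yukawa form), a combination often invoked for cluster-forming colloids. All parameter values are illustrative.

```python
import math

# Toy "short-range attraction + long-range repulsion" pair potential:
# a Lennard-Jones well plus a Yukawa (screened-Coulomb) repulsive tail.
# Parameters are illustrative, not fitted to any real system.

def v_salr(r: float, eps: float = 1.0, sigma: float = 1.0,
           A: float = 0.5, xi: float = 2.0) -> float:
    lj = 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)  # short-range well
    yukawa = A * math.exp(-r / xi) / r                     # long-range repulsion
    return lj + yukawa

# Grid-search the bonding minimum: it sits near the Lennard-Jones well
# (~1.1 sigma), while the repulsive tail keeps the potential positive at
# large separations, opposing unbounded aggregation.
rs = [0.95 + 0.001 * i for i in range(3000)]
r_min = min(rs, key=v_salr)
print(1.0 < r_min < 1.3)  # True: equilibrium bond length near sigma
print(v_salr(3.0) > 0)    # True: repulsion dominates at long range
```

The short-range well sets a preferred bond length for neighboring building blocks, while the long-range repulsion limits runaway growth; the interplay of the two is what selects ordered, finite structures over amorphous aggregates.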

By choosing precursors with suitable physicochemical properties, it is possible to exert a fine control on the formation processes that produce complex structures. Clearly, the most important tool when it comes to designing a synthesis strategy for a material, is the knowledge of the chemistry of the building units. For example, it was demonstrated that it was possible to use diblock copolymers with different block reactivities in order to selectively embed maghemite nanoparticles and generate periodic materials with potential use as waveguides.

In 2008 it was proposed that every self-assembly process presents a co-assembly, which makes the former term a misnomer. This thesis is built on the concept of mutual ordering of the self-assembling system and its environment.

At the macroscopic scale

The most common examples of self-assembly at the macroscopic scale can be seen at interfaces between gases and liquids, where molecules can be confined at the nanoscale in the vertical direction and spread over long distances laterally. Examples of self-assembly at gas-liquid interfaces include breath-figures, self-assembled monolayers, droplet clusters, and Langmuir–Blodgett films, while crystallization of fullerene whiskers is an example of macroscopic self-assembly in between two liquids. Another remarkable example of macroscopic self-assembly is the formation of thin quasicrystals at an air-liquid interface, which can be built up not only by inorganic, but also by organic molecular units. Furthermore, it was reported that Fmoc protected L-DOPA amino acid (Fmoc-DOPA) can present a minimal supramolecular polymer model, displaying a spontaneous structural transition from meta-stable spheres to fibrillar assemblies to gel-like material and finally to single crystals.

Self-assembly processes can also be observed in systems of macroscopic building blocks. These building blocks can be externally propelled or self-propelled. Since the 1950s, scientists have built self-assembly systems exhibiting centimeter-sized components ranging from passive mechanical parts to mobile robots. For systems at this scale, the component design can be precisely controlled. For some systems, the components' interaction preferences are programmable. The self-assembly processes can be easily monitored and analyzed by the components themselves or by external observers.

In April 2014, a 3D printed plastic was combined with a "smart material" that self-assembles in water, resulting in "4D printing".

Consistent concepts of self-organization and self-assembly

People regularly use the terms "self-organization" and "self-assembly" interchangeably. As complex system science becomes more popular, though, there is a greater need to clearly distinguish the two mechanisms in order to understand their significance in physical and biological systems. Both processes explain how collective order develops from "dynamic small-scale interactions". Self-organization is a non-equilibrium process, whereas self-assembly is a spontaneous process that leads toward equilibrium. Self-assembly also requires the components to remain essentially unchanged throughout the process. Besides this thermodynamic difference, there is also a difference in formation: in self-assembly the initial components "encode the global order of the whole", whereas in self-organization this initial encoding is not necessary. Another slight contrast is the minimum number of units needed to produce order: self-organization appears to have a minimum number of units, whereas self-assembly does not. These concepts may have particular application in connection with natural selection, and may eventually form one theory of pattern formation in nature.

Medicinal chemistry


Pharmacophore model of the benzodiazepine binding site on the GABAA receptor

Medicinal or pharmaceutical chemistry is a scientific discipline at the intersection of chemistry and pharmacy involved with designing and developing pharmaceutical drugs. Medicinal chemistry involves the identification, synthesis and development of new chemical entities suitable for therapeutic use. It also includes the study of existing drugs, their biological properties, and their quantitative structure-activity relationships (QSAR).

Medicinal chemistry is a highly interdisciplinary science combining organic chemistry with biochemistry, computational chemistry, pharmacology, molecular biology, statistics, and physical chemistry.

Compounds used as medicines are most often organic compounds, which are often divided into the broad classes of small organic molecules (e.g., atorvastatin, fluticasone, clopidogrel) and "biologics" (infliximab, erythropoietin, insulin glargine), the latter of which are most often medicinal preparations of proteins (natural and recombinant antibodies, hormones etc.). Medicines can also be inorganic and organometallic compounds, commonly referred to as metallodrugs (e.g., platinum, lithium and gallium-based agents such as cisplatin, lithium carbonate and gallium nitrate, respectively). The discipline of Medicinal Inorganic Chemistry investigates the role of metals in medicine (metallotherapeutics), which involves the study and treatment of diseases and health conditions associated with inorganic metals in biological systems. Several metallotherapeutics are approved for the treatment of cancer (e.g., agents containing Pt, Ru, Gd, Ti, Ge, V, and Ga), as antimicrobials (e.g., Ag, Cu, and Ru), for diabetes (e.g., V and Cr), as broad-spectrum antibiotics (e.g., Bi), and for bipolar disorder (e.g., Li). Other areas of study include: metallomics, genomics, proteomics, diagnostic agents (e.g., MRI: Gd, Mn; X-ray: Ba, I) and radiopharmaceuticals (e.g., 99mTc for diagnostics, 186Re for therapeutics).

In particular, medicinal chemistry in its most common practice—focusing on small organic molecules—encompasses synthetic organic chemistry and aspects of natural products and computational chemistry in close combination with chemical biology, enzymology and structural biology, together aiming at the discovery and development of new therapeutic agents. Practically speaking, it involves chemical aspects of identification, and then systematic, thorough synthetic alteration of new chemical entities to make them suitable for therapeutic use. It includes synthetic and computational aspects of the study of existing drugs and agents in development in relation to their bioactivities (biological activities and properties), i.e., understanding their structure–activity relationships (SAR). Pharmaceutical chemistry is focused on quality aspects of medicines and aims to assure fitness for purpose of medicinal products.

At the biological interface, medicinal chemistry combines to form a set of highly interdisciplinary sciences, setting its organic, physical, and computational emphases alongside biological areas such as biochemistry, molecular biology, pharmacognosy and pharmacology, toxicology and veterinary and human medicine; these, with project management, statistics, and pharmaceutical business practices, systematically oversee altering identified chemical agents such that after pharmaceutical formulation, they are safe and efficacious, and therefore suitable for use in treatment of disease.

In the path of drug discovery

Discovery

Discovery is the identification of novel active chemical compounds, often called "hits", which are typically found by assay of compounds for a desired biological activity. Initial hits can come from repurposing existing agents toward new pathologic processes, and from observations of biologic effects of new or existing natural products from bacteria, fungi, plants, etc. In addition, hits also routinely originate from structural observations of small molecule "fragments" bound to therapeutic targets (enzymes, receptors, etc.), where the fragments serve as starting points to develop more chemically complex forms by synthesis. Finally, hits also regularly originate from en-masse testing of chemical compounds against biological targets using biochemical or chemoproteomics assays, where the compounds may be from novel synthetic chemical libraries known to have particular properties (kinase inhibitory activity, diversity or drug-likeness, etc.), or from historic chemical compound collections or libraries created through combinatorial chemistry. While a number of approaches toward the identification and development of hits exist, the most successful techniques are based on chemical and biological intuition developed in team environments through years of rigorous practice aimed solely at discovering new therapeutic agents.

Hit to lead and lead optimization

Further chemistry and analysis are necessary, first to triage out compounds that do not provide series displaying suitable SAR and chemical characteristics associated with long-term development potential, then to improve the remaining hit series with respect to the desired primary activity, as well as secondary activities and physicochemical properties, such that the agent will be useful when administered to real patients. In this regard, chemical modifications can improve the recognition and binding geometries (pharmacophores) of the candidate compounds, and so their affinities for their targets, as well as improving the physicochemical properties of the molecule that underlie necessary pharmacokinetic/pharmacodynamic (PK/PD) and toxicologic profiles (stability toward metabolic degradation; lack of geno-, hepatic, and cardiac toxicities; etc.), such that the chemical compound or biologic is suitable for introduction into animal and human studies.

Process chemistry and development

The final synthetic chemistry stages involve the production of a lead compound in suitable quantity and quality to allow large scale animal testing, and then human clinical trials. This involves the optimization of the synthetic route for bulk industrial production, and discovery of the most suitable drug formulation. The former of these is still the bailiwick of medicinal chemistry, the latter brings in the specialization of formulation science (with its components of physical and polymer chemistry and materials science). The synthetic chemistry specialization in medicinal chemistry aimed at adaptation and optimization of the synthetic route for industrial scale syntheses of hundreds of kilograms or more is termed process synthesis, and involves thorough knowledge of acceptable synthetic practice in the context of large scale reactions (reaction thermodynamics, economics, safety, etc.). Critical at this stage is the transition to more stringent GMP requirements for material sourcing, handling, and chemistry.

Synthetic analysis

The synthetic methodology employed in medicinal chemistry is subject to constraints that do not apply to traditional organic synthesis. Owing to the prospect of scaling the preparation, safety is of paramount importance. The potential toxicity of reagents affects methodology.

Structural analysis

The structures of pharmaceuticals are assessed in many ways, in part as a means to predict efficacy, stability, and accessibility. Lipinski's rule of five focuses on molecular weight, lipophilicity, and the numbers of hydrogen bond donors and acceptors; related guidelines also consider the number of rotatable bonds and polar surface area. Other parameters by which medicinal chemists assess or classify their compounds include synthetic complexity, chirality, flatness, and aromatic ring count.
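A rule-of-five filter over precomputed descriptors can be sketched as follows. The function name is invented for the example, and the descriptor values are approximate illustrations; in practice they would come from a cheminformatics toolkit rather than be typed in by hand.

```python
# Hedged sketch of a Lipinski rule-of-five check on precomputed descriptors.
# Classic criteria: molecular weight <= 500, logP <= 5, hydrogen bond
# donors <= 5, hydrogen bond acceptors <= 10.

def passes_rule_of_five(mw: float, logp: float,
                        donors: int, acceptors: int) -> bool:
    """True if a compound meets all four classic Lipinski criteria."""
    return mw <= 500 and logp <= 5 and donors <= 5 and acceptors <= 10

# Aspirin-like descriptor values (approximate):
print(passes_rule_of_five(mw=180.2, logp=1.2, donors=1, acceptors=4))  # True
# A large, lipophilic compound fails on two counts:
print(passes_rule_of_five(mw=720.0, logp=6.3, donors=2, acceptors=9))  # False
```

Filters like this are used as early triage heuristics for oral bioavailability, not as hard rules; many approved drugs, notably natural products and biologics, violate one or more criteria.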

Structural analysis of lead compounds is often performed through computational methods prior to actual synthesis of the ligand(s), chiefly to save time and money. Once the ligand of interest has been synthesized in the laboratory, analysis is then performed by traditional methods (TLC, NMR, GC/MS, and others).
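As an illustration of this kind of computational pre-screen, the sketch below applies Lipinski's rule of five to precomputed descriptor values. It is a minimal toy, assuming the descriptors (molecular weight, logP, donor and acceptor counts) have already been obtained elsewhere; the function name and the aspirin values are illustrative, not a real cheminformatics workflow.

```python
# Toy pre-screen: flag compounds by Lipinski's rule of five, assuming
# descriptor values have already been computed by other software.
def passes_rule_of_five(mol_weight, logp, h_donors, h_acceptors):
    """Return True when at most one Lipinski criterion is violated."""
    violations = sum([
        mol_weight > 500,   # molecular weight over 500 Da
        logp > 5,           # octanol-water partition coefficient over 5
        h_donors > 5,       # more than 5 hydrogen bond donors
        h_acceptors > 10,   # more than 10 hydrogen bond acceptors
    ])
    return violations <= 1

# Approximate descriptor values for aspirin (illustrative):
print(passes_rule_of_five(180.2, 1.2, 1, 4))  # True
```

In practice such descriptors would come from cheminformatics software rather than being typed by hand; the point here is only the shape of the filter.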

Training

Medicinal chemistry is by nature an interdisciplinary science, and practitioners have a strong background in organic chemistry, which must eventually be coupled with a broad understanding of biological concepts related to cellular drug targets. Medicinal chemists are principally industrial scientists (but see below), working as part of an interdisciplinary team that uses their chemical, and especially synthetic, abilities to design effective therapeutic agents. The training is long and intense, with practitioners often required to attain a 4-year bachelor's degree followed by a 4–6 year Ph.D. in organic chemistry. Most training regimens also include a postdoctoral fellowship of 2 or more years after the Ph.D., making the total length of training range from 10 to 12 years of post-secondary education. However, employment opportunities at the master's level also exist in the pharmaceutical industry, and at both that and the Ph.D. level there are further opportunities for employment in academia and government.

Graduate level programs in medicinal chemistry can be found in traditional medicinal chemistry or pharmaceutical sciences departments, both of which are traditionally associated with schools of pharmacy, and in some chemistry departments. However, the majority of working medicinal chemists have graduate degrees (MS, but especially Ph.D.) in organic chemistry, rather than medicinal chemistry, and the preponderance of positions are in research, where the net is necessarily cast widest, and most broad synthetic activity occurs.

In research on small molecule therapeutics, there is a clear emphasis on training that provides breadth of synthetic experience and "pace" of bench operations (e.g., for individuals with pure synthetic organic and natural products synthesis backgrounds from Ph.D. and postdoctoral positions). In the medicinal chemistry specialty areas associated with the design and synthesis of chemical libraries or the execution of process chemistry aimed at viable commercial syntheses (areas generally with fewer opportunities), training paths are often much more varied (e.g., including focused training in physical organic chemistry, library-related syntheses, etc.).

As such, most entry-level workers in medicinal chemistry, especially in the U.S., do not have formal training in medicinal chemistry but receive the necessary medicinal chemistry and pharmacologic background after employment—at entry into their work in a pharmaceutical company, where the company provides its particular understanding or model of "medichem" training through active involvement in practical synthesis on therapeutic projects. (The same is somewhat true of computational medicinal chemistry specialties, but not to the same degree as in synthetic areas.)

Medical physics

From Wikipedia, the free encyclopedia

Medical physics deals with the application of the concepts and methods of physics to the prevention, diagnosis and treatment of human diseases, with the specific goal of improving human health and well-being. Since 2008, medical physics has been included as a health profession in the International Standard Classification of Occupations of the International Labour Organization.

Although medical physics may sometimes also be referred to as biomedical physics, medical biophysics, applied physics in medicine, physics applications in medical science, radiological physics or hospital radio-physics, a "medical physicist" is specifically a health professional with specialist education and training in the concepts and techniques of applying physics in medicine, competent to practice independently in one or more of the subfields of medical physics. Traditionally, medical physicists are found in the following healthcare specialties: radiation oncology (also known as radiotherapy or radiation therapy), diagnostic and interventional radiology (also known as medical imaging), nuclear medicine, and radiation protection. Medical physics of radiation therapy can involve work such as dosimetry, linac quality assurance, and brachytherapy. Medical physics of diagnostic and interventional radiology involves medical imaging techniques such as magnetic resonance imaging, ultrasound, computed tomography and x-ray. Nuclear medicine includes positron emission tomography and radionuclide therapy. However, one can find medical physicists in many other areas, such as physiological monitoring, audiology, neurology, neurophysiology, and cardiology.

Medical physics departments may be found in institutions such as universities, hospitals, and laboratories. University departments are of two types. The first type is mainly concerned with preparing students for a career as a hospital medical physicist, and its research focuses on improving the practice of the profession. The second type (increasingly called 'biomedical physics') has a much wider scope and may include research in any application of physics to medicine, from the study of biomolecular structure to microscopy and nanomedicine.

Mission statement of medical physicists

In hospital medical physics departments, the mission statement for medical physicists as adopted by the European Federation of Organisations for Medical Physics (EFOMP) is the following:

"Medical Physicists will contribute to maintaining and improving the quality, safety and cost-effectiveness of healthcare services through patient-oriented activities requiring expert action, involvement or advice regarding the specification, selection, acceptance testing, commissioning, quality assurance/control and optimised clinical use of medical devices and regarding patient risks and protection from associated physical agents (e.g., x-rays, electromagnetic fields, laser light, radionuclides) including the prevention of unintended or accidental exposures; all activities will be based on current best evidence or own scientific research when the available evidence is not sufficient. The scope includes risks to volunteers in biomedical research, carers and comforters. The scope often includes risks to workers and public particularly when these impact patient risk"

The term "physical agents" refers to ionising and non-ionising electromagnetic radiations, static electric and magnetic fields, ultrasound, laser light and any other Physical Agent associated with medical e.g., x-rays in computerised tomography (CT), gamma rays/radionuclides in nuclear medicine, magnetic fields and radio-frequencies in magnetic resonance imaging (MRI), ultrasound in ultrasound imaging and Doppler measurements.

This mission includes the following 11 key activities:

  1. Scientific problem solving service: Comprehensive problem solving service involving recognition of less than optimal performance or optimised use of medical devices, identification and elimination of possible causes or misuse, and confirmation that proposed solutions have restored device performance and use to acceptable status. All activities are to be based on current best scientific evidence or own research when the available evidence is not sufficient.
  2. Dosimetry measurements: Measurement of doses received by patients, volunteers in biomedical research, carers, comforters and persons subjected to non-medical imaging exposures (e.g., for legal or employment purposes); selection, calibration and maintenance of dosimetry related instrumentation; independent checking of dose related quantities provided by dose reporting devices (including software devices); measurement of dose related quantities required as inputs to dose reporting or estimating devices (including software). Measurements to be based on current recommended techniques and protocols. Includes dosimetry of all physical agents.
  3. Patient safety/risk management (including volunteers in biomedical research, carers, comforters and persons subjected to non-medical imaging exposures): Surveillance of medical devices and evaluation of clinical protocols to ensure the ongoing protection of patients, volunteers in biomedical research, carers, comforters and persons subjected to non-medical imaging exposures from the deleterious effects of physical agents in accordance with the latest published evidence or own research when the available evidence is not sufficient. Includes the development of risk assessment protocols.
  4. Occupational and public safety/risk management (when there is an impact on medical exposure or own safety). Surveillance of medical devices and evaluation of clinical protocols with respect to protection of workers and public when impacting the exposure of patients, volunteers in biomedical research, carers, comforters and persons subjected to non-medical imaging exposures or responsibility with respect to own safety. Includes the development of risk assessment protocols in conjunction with other experts involved in occupational / public risks.
  5. Clinical medical device management: Specification, selection, acceptance testing, commissioning and quality assurance/ control of medical devices in accordance with the latest published European or International recommendations and the management and supervision of associated programmes. Testing to be based on current recommended techniques and protocols.
  6. Clinical involvement: Carrying out, participating in and supervising everyday radiation protection and quality control procedures to ensure ongoing effective and optimised use of medical radiological devices and including patient specific optimization.
  7. Development of service quality and cost-effectiveness: Leading the introduction of new medical radiological devices into clinical service, the introduction of new medical physics services and participating in the introduction/development of clinical protocols/techniques whilst giving due attention to economic issues.
  8. Expert consultancy: Provision of expert advice to outside clients (e.g., clinics with no in-house medical physics expertise).
  9. Education of healthcare professionals (including medical physics trainees): Contributing to quality healthcare professional education through knowledge transfer activities concerning the technical-scientific knowledge, skills and competences supporting the clinically effective, safe, evidence-based and economical use of medical radiological devices. Participation in the education of medical physics students and organisation of medical physics residency programmes.
  10. Health technology assessment (HTA): Taking responsibility for the physics component of health technology assessments related to medical radiological devices and /or the medical uses of radioactive substances/sources.
  11. Innovation: Developing new or modifying existing devices (including software) and protocols for the solution of hitherto unresolved clinical problems.

Medical biophysics and biomedical physics

Some educational institutions house departments or programs bearing the title "medical biophysics", "biomedical physics", or "applied physics in medicine". Generally, these fall into one of two categories: interdisciplinary departments that house biophysics, radiobiology, and medical physics under a single umbrella; and undergraduate programs that prepare students for further study in medical physics, biophysics, or medicine.

Areas of specialty

The International Organization for Medical Physics (IOMP) recognizes the following main areas of medical physics employment and focus.

Medical imaging physics

Para-sagittal MRI of the head in a patient with benign familial macrocephaly.

Medical imaging physics is also known as diagnostic and interventional radiology physics. Clinical (both "in-house" and "consulting") physicists typically deal with the testing, optimization, and quality assurance of diagnostic radiology modalities such as radiographic X-rays, fluoroscopy, mammography, angiography, and computed tomography, as well as non-ionizing modalities such as ultrasound and MRI. They may also be engaged with radiation protection issues such as dosimetry (for staff and patients). In addition, many imaging physicists are involved with nuclear medicine systems, including single photon emission computed tomography (SPECT) and positron emission tomography (PET). Sometimes, imaging physicists may be engaged in clinical areas for research and teaching purposes, such as investigating quantitative intravascular ultrasound as a method of imaging vascular structures.

Radiation therapeutic physics

Radiation therapeutic physics is also known as radiotherapy physics or radiation oncology physics. The majority of medical physicists currently working in the US, Canada, and some Western countries belong to this group. A radiation therapy physicist typically deals with linear accelerator (linac) systems and kilovoltage x-ray treatment units on a daily basis, as well as other modalities such as TomoTherapy, Gamma Knife, CyberKnife, proton therapy, and brachytherapy. The academic and research side of therapeutic physics may encompass fields such as boron neutron capture therapy, sealed source radiotherapy, terahertz radiation, high-intensity focused ultrasound (including lithotripsy), optical radiation (lasers, ultraviolet, etc.) including photodynamic therapy, as well as nuclear medicine including unsealed source radiotherapy, and photomedicine, which is the use of light to treat and diagnose disease.

Nuclear medicine physics

Nuclear medicine is a branch of medicine that uses radiation to provide information about the functioning of a person's specific organs or to treat disease. The thyroid, bones, heart, liver and many other organs can be easily imaged, and disorders in their function revealed. In some cases radiation sources can be used to treat diseased organs, or tumours. Five Nobel laureates have been intimately involved with the use of radioactive tracers in medicine. Over 10,000 hospitals worldwide use radioisotopes in medicine, and about 90% of the procedures are for diagnosis. The most common radioisotope used in diagnosis is technetium-99m, with some 30 million procedures per year, accounting for 80% of all nuclear medicine procedures worldwide.
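Part of what makes technetium-99m so practical is its half-life of roughly six hours: long enough for imaging, short enough to limit patient dose. The exponential decay behind that trade-off can be sketched in a few lines (the half-life value is the only physical input; the function name is illustrative):

```python
# Fraction of a radionuclide remaining after a given time, using the
# ~6-hour half-life of technetium-99m as the default.
def remaining_fraction(hours, half_life_h=6.0):
    return 0.5 ** (hours / half_life_h)

print(remaining_fraction(6))             # 0.5 after one half-life
print(round(remaining_fraction(24), 4))  # 0.0625 after four half-lives
```

After a day, only about 6% of the original activity remains, which is why the isotope must be generated close to the time of the procedure.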

Health physics

Health physics is also known as radiation safety or radiation protection. Health physics is the applied physics of radiation protection for health and health care purposes. It is the science concerned with the recognition, evaluation, and control of health hazards to permit the safe use and application of ionizing radiation. Health physics professionals promote excellence in the science and practice of radiation protection and safety.

Non-ionizing medical radiation physics

Some aspects of non-ionising radiation physics may be considered under radiation protection or diagnostic imaging physics. Imaging modalities include MRI, optical imaging, and ultrasound. Safety considerations cover these modalities as well as lasers.

Physiological measurement

Physiological measurement techniques are used to monitor and measure various physiological parameters. Many of these techniques are non-invasive and can be used in conjunction with, or as an alternative to, invasive methods. Measurement methods include electrocardiography. Many of these areas may be covered by other specialities, for example medical engineering or vascular science.

Healthcare informatics and computational physics

Other fields closely related to medical physics deal with medical data, information technology, and computer science for medicine.

Areas of research and academic development

ECG trace

Non-clinical physicists may focus on the above areas from an academic and research point of view, but their scope of specialization may also encompass lasers and ultraviolet systems (such as photodynamic therapy), fMRI and other methods for functional imaging, as well as molecular imaging, electrical impedance tomography, diffuse optical imaging, optical coherence tomography, and dual energy X-ray absorptiometry.

Biomedicine

From Wikipedia, the free encyclopedia
 

Biomedicine (also referred to as Western medicine, mainstream medicine or conventional medicine) is a branch of medical science that applies biological and physiological principles to clinical practice. Biomedicine stresses standardized, evidence-based treatment validated through biological research, with treatment administered via formally trained doctors, nurses, and other such licensed practitioners.

Biomedicine can also relate to many other categories in health and biology related fields. It has been the dominant system of medicine in the Western world for more than a century.

It includes many biomedical disciplines and areas of specialty that typically contain the "bio-" prefix such as molecular biology, biochemistry, biotechnology, cell biology, embryology, nanobiotechnology, biological engineering, laboratory medical biology, cytogenetics, genetics, gene therapy, bioinformatics, biostatistics, systems biology, neuroscience, microbiology, virology, immunology, parasitology, physiology, pathology, anatomy, toxicology, and many others that generally concern life sciences as applied to medicine.

Overview

Biomedicine is the cornerstone of modern health care and laboratory diagnostics. It concerns a wide range of scientific and technological approaches: from in vitro diagnostics to in vitro fertilisation, from the molecular mechanisms of cystic fibrosis to the population dynamics of the HIV virus, from the understanding of molecular interactions to the study of carcinogenesis, from a single-nucleotide polymorphism (SNP) to gene therapy.

Biomedicine is based on molecular biology and combines all issues of developing molecular medicine into large-scale structural and functional relationships of the human genome, transcriptome, proteome, physiome and metabolome with the particular point of view of devising new technologies for prediction, diagnosis and therapy.

Biomedicine involves the study of (patho-) physiological processes with methods from biology and physiology. Approaches range from understanding molecular interactions to the study of the consequences at the in vivo level. These processes are studied with the particular point of view of devising new strategies for diagnosis and therapy.

Depending on the severity of the disease, biomedicine pinpoints a problem within a patient and addresses it through medical intervention; the emphasis is on curing disease rather than on improving overall health.

In the social sciences, biomedicine is described somewhat differently. Through an anthropological lens, biomedicine extends beyond the realm of biology and scientific fact: it is a socio-cultural system which collectively represents reality. While biomedicine is traditionally thought to be free of bias owing to its evidence-based practices, Gaines & Davis-Floyd (2004) highlight that biomedicine itself has a cultural basis, because it reflects the norms and values of its creators.

Molecular biology

Molecular biology is the study of the synthesis and regulation of a cell's DNA, RNA, and proteins. It employs techniques including the polymerase chain reaction, gel electrophoresis, and macromolecule blotting to manipulate and analyse DNA.

The polymerase chain reaction is performed by placing a mixture of the template DNA, DNA polymerase, primers, and nucleotide bases into a thermal cycler. The machine heats and cools the mixture through successive temperature steps: heating breaks the hydrogen bonds binding the two DNA strands, and cooling allows nucleotide bases to be added onto each of the separated template strands.
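Under ideal conditions each cycle doubles the number of target copies, so amplification is exponential. The sketch below models this, with an optional per-cycle efficiency factor as an illustrative assumption (real reactions rarely achieve perfect doubling):

```python
# Idealised PCR amplification: each cycle multiplies the copy number by
# (1 + efficiency), i.e. perfect doubling when efficiency is 1.0.
def pcr_copies(initial_templates, cycles, efficiency=1.0):
    return initial_templates * (1 + efficiency) ** cycles

print(int(pcr_copies(1, 30)))       # 1073741824: ~1e9 copies from one template
print(int(pcr_copies(1, 30, 0.9)))  # noticeably fewer at 90% per-cycle efficiency
```

Thirty cycles turning a single template into around a billion copies is what makes PCR sensitive enough to detect minute amounts of DNA.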

Gel electrophoresis is a technique used to compare DNA fragments between two samples. The process begins by preparing an agarose gel; this jelly-like sheet has wells into which the DNA samples are loaded. An electric current is applied so that the DNA, which is negatively charged due to its phosphate groups, migrates toward the positive electrode. Different DNA fragments move at different speeds because some pieces are larger than others. Thus, if two DNA samples show a similar band pattern on the gel, one can tell that the samples match.
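The size dependence of migration can be caricatured numerically: over a useful range, the distance a fragment travels falls roughly linearly with the logarithm of its length. The constants below are purely illustrative, not calibrated to any real gel:

```python
import math

# Simplified model of agarose gel migration: smaller fragments run
# farther; distance decreases with log10 of fragment size
# (constants a and b are illustrative only).
def migration_distance(fragment_bp, a=60.0, b=12.0):
    return a - b * math.log10(fragment_bp)

for size_bp in (100, 1000, 10000):
    print(size_bp, round(migration_distance(size_bp), 1))  # smaller runs farther
```

This inverse log relationship is why a ladder of known fragment sizes run alongside the samples lets the sizes of unknown bands be estimated.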

Macromolecule blotting is a process performed after gel electrophoresis. An alkaline solution is prepared in a container, a sponge is placed into the solution, and an agarose gel is placed on top of the sponge. Next, nitrocellulose paper is placed on top of the agarose gel, and paper towels are added on top of the nitrocellulose paper to apply pressure. The alkaline solution is drawn upwards towards the paper towels; during this process, the DNA denatures in the alkaline solution and is carried upwards onto the nitrocellulose paper. The paper is then placed into a plastic bag filled with a solution of labelled DNA fragments, called the probe, derived from the sample of interest. The probes anneal to complementary DNA among the bands already transferred to the nitrocellulose. Afterwards, the probes are washed off, and the only ones remaining are those that have annealed to complementary DNA on the paper. Next, the paper is exposed to x-ray film; the radioactivity of the probes creates black bands on the film, called an autoradiograph. As a result, only DNA patterns similar to that of the probe appear on the film, allowing similar DNA sequences from multiple samples to be compared. The overall process yields a precise reading of similarities between DNA samples.
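The base-pairing that lets a probe anneal only to complementary bands can be sketched directly; the sequences here are made up for illustration:

```python
# Watson-Crick pairing rule behind probe hybridisation: a probe anneals
# where its reverse complement occurs in the target strand.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq):
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def probe_binds(probe, target):
    """True if the probe's reverse complement appears in the target strand."""
    return reverse_complement(probe) in target

target = "ATGCGTACCTGA"               # hypothetical target strand
print(probe_binds("GGTACG", target))  # True
print(probe_binds("AAAAAA", target))  # False
```

Real hybridisation also depends on temperature, salt concentration, and tolerance for partial mismatches, none of which this exact-match sketch captures.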

Biochemistry

Biochemistry is the science of the chemical processes that take place within living organisms. Living organisms need essential elements to survive, among which are carbon, hydrogen, nitrogen, oxygen, calcium, and phosphorus. These elements make up the four macromolecules that living organisms need to survive: carbohydrates, lipids, proteins, and nucleic acids.

Carbohydrates, made up of carbon, hydrogen, and oxygen, are energy-storing molecules. The simplest carbohydrate is glucose (C6H12O6), which is used in cellular respiration to produce ATP (adenosine triphosphate), the molecule that supplies cells with energy.
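As a small worked example, the molar mass of glucose follows directly from its formula and standard atomic weights (values rounded to three decimals):

```python
# Molar mass of glucose (C6H12O6) from standard atomic weights, in g/mol.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(counts):
    """Molar mass in g/mol from a mapping of element symbol to atom count."""
    return sum(ATOMIC_MASS[el] * n for el, n in counts.items())

glucose = {"C": 6, "H": 12, "O": 6}
print(round(molar_mass(glucose), 2))  # 180.16 g/mol
```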

Proteins are chains of amino acids that function, among other things, in contracting skeletal muscle, as catalysts, as transport molecules, and as storage molecules. Protein catalysts (enzymes) can facilitate biochemical processes by lowering the activation energy of a reaction. Hemoglobins are also proteins, carrying oxygen to an organism's cells.

Lipids, also known as fats, are small molecules built from biochemical subunits of either the ketoacyl or isoprene groups, creating eight distinct categories: fatty acids, glycerolipids, glycerophospholipids, sphingolipids, saccharolipids, and polyketides (derived from condensation of ketoacyl subunits); and sterol lipids and prenol lipids (derived from condensation of isoprene subunits). Their primary purpose is long-term energy storage: due to their unique structure, lipids provide more than twice the energy per gram that carbohydrates do. Lipids can also be used as insulation, contribute to hormone production, and provide structure to cell membranes.

Nucleic acids include DNA, the main genetic information-storing molecule, which is oftentimes found in the cell nucleus and controls the metabolic processes of the cell. DNA consists of two complementary, antiparallel strands made up of varying patterns of nucleotides. RNA is a single-stranded nucleic acid that is transcribed from DNA and is used in translation, the process of making proteins from RNA sequences.
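The DNA to RNA to protein flow just described can be sketched with a tiny, deliberately incomplete codon table (the real genetic code maps all 64 codons; only four are included here for illustration):

```python
# Minimal sketch of transcription and translation. The codon table is
# deliberately partial; the full genetic code covers 64 codons.
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(dna):
    """Coding-strand convention: transcription replaces thymine with uracil."""
    return dna.replace("T", "U")

def translate(rna):
    protein = []
    for i in range(0, len(rna) - 2, 3):      # read codon by codon
        amino = CODON_TABLE.get(rna[i:i + 3], "?")
        if amino == "STOP":                  # stop codon ends translation
            break
        protein.append(amino)
    return "-".join(protein)

mrna = transcribe("ATGTTTGGCTAA")  # -> "AUGUUUGGCUAA"
print(translate(mrna))             # Met-Phe-Gly
```

Real translation also involves ribosome binding, start-codon recognition, and tRNA chemistry; this sketch only shows the codon-reading logic.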

Operator (computer programming)

From Wikipedia, the free encyclopedia