Tuesday, December 10, 2019

Technology life cycle

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Technology_life_cycle
 
The typical life cycle of a manufacturing process or production system, from its initial conception to its culmination as either a technique or procedure of common practice or to its demise. The Y-axis of the diagram shows the business gain to the proprietor of the technology, while the X-axis traces its lifetime.
 
The technology life-cycle (TLC) describes the commercial gain of a product, from the expense of the research and development phase through the financial return during its "vital life". Some technologies, such as steel, paper, or cement manufacturing, have a long lifespan (with minor variations in technology incorporated over time), while in other cases, such as electronic or pharmaceutical products, the lifespan may be quite short.

The TLC associated with a product or technological service is different from product life-cycle (PLC) dealt with in product life-cycle management. The latter is concerned with the life of a product in the marketplace with respect to timing of introduction, marketing measures, and business costs. The technology underlying the product (for example, that of a uniquely flavoured tea) may be quite marginal but the process of creating and managing its life as a branded product will be very different.
The technology life cycle is concerned with the time and cost of developing the technology, the timeline of recovering that cost, and modes of making the technology yield a profit proportionate to the costs and risks involved. The technology may, further, be protected during its cycle with patents and trademarks that seek to lengthen the cycle and maximize the profit from it.

The product of the technology may be a commodity such as polyethylene plastic or a sophisticated product like the integrated circuits used in a smartphone.

The development of a competitive product or process can have a major effect on the lifespan of the technology, making it shorter. Equally, the loss of intellectual property rights through litigation, or the loss of its secret elements (if any) through leakage, also works to reduce a technology's lifespan. Thus, it is apparent that the management of the TLC is an important aspect of technology development.

Most new technologies follow a similar technology maturity lifecycle describing the technological maturity of a product. This is not similar to a product life cycle, but applies to an entire technology, or a generation of a technology.

Technology adoption is the most common phenomenon driving the evolution of industries along the industry lifecycle. Industries begin by expanding new uses of resources and end by exhausting the efficiency of those processes: gains come easily and grow larger at first, then become progressively more difficult to achieve as the technology matures.

The four phases of the technology life-cycle

The TLC may be seen as composed of four phases (a toy numerical sketch of this curve follows the list):
  1. The research and development (R&D) phase (sometimes called the "bleeding edge"), when returns on investment are negative and the prospects of failure are high
  2. The ascent phase, when out-of-pocket costs have been recovered and the technology begins to gather strength by going beyond some Point A on the TLC (sometimes called the "leading edge")
  3. The maturity phase, when gain is high and stable; this region, going into saturation, is marked M
  4. The decline (or decay) phase, after a Point D, of diminishing fortunes and utility for the technology
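
Because the diagram itself is not reproduced here, the following toy sketch in Python (all parameters invented purely for illustration) constructs a gain curve of the shape just described and locates the break-even Point A, the saturation region M, and the onset of decline D numerically:

```python
import math

def gain(t):
    """Toy TLC business-gain curve (arbitrary units): R&D losses that
    taper off, an S-curve ascent, and late erosion by rival technology."""
    rd_loss = -6.0 * math.exp(-t / 2.0)                # bleeding-edge spending
    ascent = 10.0 / (1 + math.exp(-1.2 * (t - 5.0)))   # leading-edge S-curve
    erosion = 7.0 / (1 + math.exp(-1.0 * (t - 14.0)))  # competitors after Point D
    return rd_loss + ascent - erosion

ts = [i / 10 for i in range(201)]                      # 0..20 "years"
gs = [gain(t) for t in ts]
peak = max(gs)

point_a = next(t for t, g in zip(ts, gs) if g > 0)     # break-even: curve crosses X-axis
point_m = ts[gs.index(peak)]                           # saturation region M
point_d = next(t for t, g in zip(ts, gs) if t > point_m and g < 0.95 * peak)

print(f"Point A = {point_a:.1f}, M = {point_m:.1f}, Point D = {point_d:.1f}")
```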

S-curve

The shape of the technology life cycle is often referred to as an S-curve.

Technology perception dynamics

There is usually technology hype at the introduction of any new technology, but only after some time has passed can it be judged mere hype or justified acclaim. Because of the logistic-curve nature of technology adoption, it is difficult to tell in the early stages whether the hype is excessive.

The two errors commonly committed in the early stages of a technology's development (illustrated in the sketch after this list) are:
  • fitting an exponential curve to the first part of the growth curve, and assuming eternal exponential growth
  • fitting a linear curve to the first part of the growth curve, and assuming that take-up of the new technology is disappointing
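
A brief numerical illustration of both errors (a hedged sketch; the logistic parameters are invented): the first few points of an S-curve are consistent with an exponential fit, a linear fit, and a logistic fit alike, yet the three extrapolations diverge wildly.

```python
import math

def logistic(t, cap=1000.0, k=0.8, midpoint=12.0):
    # True adoption follows an S-curve saturating at `cap`
    return cap / (1 + math.exp(-k * (t - midpoint)))

early_t = list(range(6))                     # only the earliest data observed
early_y = [logistic(t) for t in early_t]

# Error 1: fit an exponential and assume eternal exponential growth
growth = early_y[-1] / early_y[-2]           # per-step growth factor
exp_forecast = early_y[-1] * growth ** 15    # extrapolate to t = 20

# Error 2: fit a straight line and conclude take-up is disappointing
slope = (early_y[-1] - early_y[0]) / (early_t[-1] - early_t[0])
lin_forecast = early_y[-1] + slope * 15

print(f"exponential extrapolation at t=20: {exp_forecast:>11,.0f}")
print(f"linear extrapolation at t=20:      {lin_forecast:>11,.0f}")
print(f"actual logistic value at t=20:     {logistic(20.0):>11,.0f}")
```

The exponential fit overshoots the saturation ceiling by orders of magnitude, while the linear fit barely moves; both misread the same early data.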
Rogers' bell curve

Similarly, in the later stages, the opposite mistakes can be made relating to the possibilities of technology maturity and market saturation.

The technology adoption life cycle typically occurs in an S curve, as modelled in diffusion of innovations theory. This is because customers respond to new products in different ways. Diffusion of innovations theory, pioneered by Everett Rogers, posits that people have different levels of readiness for adopting new innovations and that the characteristics of a product affect overall adoption. Rogers classified individuals into five groups: innovators, early adopters, early majority, late majority, and laggards. In terms of the S curve, innovators occupy 2.5%, early adopters 13.5%, early majority 34%, late majority 34%, and laggards 16%.
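
Rogers' five proportions are conventional roundings of slices of a normal distribution of adoption times, cut at one and two standard deviations from the mean. A short standard-library Python sketch recovers the unrounded values:

```python
from statistics import NormalDist

# Rogers' categories as slices of a normal distribution of adoption times,
# with boundaries at -2, -1, 0, and +1 standard deviations from the mean.
cdf = NormalDist().cdf
cuts = [0.0] + [cdf(b) for b in (-2, -1, 0, 1)] + [1.0]
names = ["innovators", "early adopters", "early majority",
         "late majority", "laggards"]
for name, lo, hi in zip(names, cuts, cuts[1:]):
    print(f"{name:>15}: {100 * (hi - lo):.1f}%")  # 2.3, 13.6, 34.1, 34.1, 15.9
```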

The four stages of the technology life cycle are as follows:
  • Innovation stage: This stage represents the birth of a new product, material, or process resulting from R&D activities. In R&D laboratories, new ideas are generated in response to perceived needs and available knowledge. Depending on resource allocation and the degree of change involved, the time taken in the innovation stage, as in the subsequent stages, varies widely.
  • Syndication stage: This stage represents the demonstration and commercialisation of a new technology, such as a product, material, or process, with potential for immediate utilisation. Many innovations are put on hold in R&D laboratories, and only a very small percentage of these are commercialised. Commercialisation of research outcomes depends on technical as well as non-technical, mostly economic, factors.
  • Diffusion stage: This represents the market penetration of a new technology through acceptance of the innovation by potential users of the technology. Supply-side and demand-side factors jointly influence the rate of diffusion.
  • Substitution stage: This last stage represents the decline in the use and eventual extinction of a technology, due to its replacement by another technology. Many technical and non-technical factors influence the rate of substitution, and the time taken in the substitution stage depends on the market dynamics.

Licensing options

Large corporations develop technology for their own benefit and not with the objective of licensing. The tendency to license out technology only appears when there is a threat to the life of the TLC (business gain) as discussed later.

Licensing in the R&D phase

There are always smaller firms (SMEs) that are inadequately placed to finance the development of innovative R&D in the post-research and early technology phases. By sharing incipient technology under certain conditions, substantial risk financing can be obtained from third parties; this is a form of quasi-licensing that takes different formats. Even large corporations may not wish to bear all the costs of development in areas of significant and high risk (e.g. aircraft development) and may seek means of spreading that risk until proof of concept is obtained.

In the case of small and medium firms, entities such as venture capitalists or business angels can enter the scene and help to materialize technologies. Venture capitalists accept both the costs and uncertainties of R&D, and those of market acceptance, in return for high returns when the technology proves itself. Apart from finance, they may provide networking, management, and marketing support. Venture capital connotes financial as well as human capital.

Larger firms may opt for joint R&D or work in a consortium for the early phase of development. Such vehicles are called strategic alliances or strategic partnerships.

With both venture capital funding and strategic (research) alliances, when business gains begin to neutralize development costs (the TLC crosses the X-axis), the ownership of the technology starts to undergo change.

In the case of smaller firms, venture capitalists help clients enter the stock market for obtaining substantially larger funds for development, maturation of technology, product promotion and to meet marketing costs. A major route is through initial public offering (IPO) which invites risk funding by the public for potential high gain. At the same time, the IPOs enable venture capitalists to attempt to recover expenditures already incurred by them through part sale of the stock pre-allotted to them (subsequent to the listing of the stock on the stock exchange). When the IPO is fully subscribed, the assisted enterprise becomes a corporation and can more easily obtain bank loans, etc. if needed.

Strategic alliance partners, allied on research, pursue separate paths of development with the incipient technology of common origin but pool their accomplishments through instruments such as 'cross-licensing'. Generally, contractual provisions among the members of the consortium allow a member to exercise the option of independent pursuit after joint consultation; in which case the optee owns all subsequent development. 

Licensing in the ascent phase

The ascent stage of the technology usually refers to some point above Point A in the TLC diagram, but it actually commences where the R&D portion of the TLC curve inflects (cash flow simply remains negative and unremunerative until Point A). The ascent is the strongest phase of the TLC because it is here that the technology is superior to alternatives and can command premium profit or gain. The slope and duration of the ascent depend on competing technologies entering the domain, although these may not be as successful during that period. Strongly patented technology extends the duration of the phase.

The TLC begins to flatten out (the region shown as M) when equivalent or challenging technologies enter the competitive space and begin to eat away market share.

Until this stage is reached, the technology-owning firm tends to enjoy its profitability exclusively, preferring not to license it. If an overseas opportunity does present itself, the firm would rather set up a controlled subsidiary than license a third party.

Licensing in the maturity phase

The maturity phase of the technology is a period of stable and remunerative income, and its competitive viability can persist over the larger timeframe marked by its 'vital life'. However, there may be a tendency to license out the technology to third parties during this stage, to lower the risk of decline in profitability (or competitiveness) and to expand financial opportunity.

The exercise of this option is, generally, inferior to seeking participatory exploitation: in other words, engagement in a joint venture, typically in a region where the technology would still be in the ascent phase, such as a developing country. In addition to providing financial opportunity, this allows the technology owner a degree of control over its use. Gain flows from the two streams of investment-based and royalty incomes. Further, the vital life of the technology is extended by such a strategy.

Licensing in the decline phase

After reaching a point such as D in the diagram, the earnings from the technology begin to decline rather rapidly. To prolong the life cycle, owners of technology might try to license it out at some point L, when it can still be attractive to firms in other markets; this traces the lengthening path LL'. Further, since the decline is the result of rival technologies rising in this space, licensees may be attracted by the generally lower cost of the older technology (compared with what prevailed during its vital life).

Licenses obtained in this phase are 'straight licenses'. They are free of direct control from the owner of the technology (as would otherwise apply, say, in the case of a joint-venture). Further, there may be fewer restrictions placed on the licensee in the employment of the technology. 

The utility, viability, and thus the cost of straight licenses depend on the estimated 'balance life' of the technology. For instance, should the key patent on the technology have expired, or be about to expire, the residual viability of the technology may be limited, although the balance life may be governed by other criteria such as know-how, which can have a longer life if properly protected.

It is important to note that the licensee has no way of knowing the stage that the prime, and competing, technologies have reached on their TLCs. It would, of course, be evident to competing licensor firms, and to the originator, from the growth, saturation, or decline of the profitability of their operations.

The licensee may, however, be able to approximate the stage by vigorously negotiating with the licensor and competitors to determine costs and licensing terms. A lower cost, or easier terms, may imply a declining technology.

In any case, access to technology in the decline phase is a large risk that the licensee accepts. (In a joint venture this risk is substantially reduced by the licensor sharing it.) Sometimes, financial guarantees from the licensor may work to reduce such risk and can be negotiated.

There are instances when, even though the technology has declined to become a mere technique, it may still embody important knowledge or experience that the licensee firm cannot learn without help from the originator. This is often the form that technical service and technical assistance contracts take (encountered often in developing-country contracts). Alternatively, consulting agencies may fill this role.

Technology development cycle

According to the Encyclopedia of Earth, "In the simplest formulation, innovation can be thought of as being composed of research, development, demonstration, and deployment."

Technology development cycle describes the process of a new technology through the stages of technological maturity:
  1. Research and development
  2. Scientific demonstration
  3. System deployment
  4. Diffusion

Technology readiness level

From Wikipedia, the free encyclopedia
 
NASA Technology Readiness Levels
 
Technology readiness levels (TRLs) are a method for estimating the maturity of technologies during the acquisition phase of a program, developed at NASA during the 1970s. The use of TRLs enables consistent, uniform discussions of technical maturity across different types of technology. A technology's TRL is determined during a Technology Readiness Assessment (TRA) that examines program concepts, technology requirements, and demonstrated technology capabilities. TRLs are based on a scale from 1 to 9, with 9 being the most mature technology. The US Department of Defense has used the scale for procurement since the early 2000s. By 2008 the scale was also in use at the European Space Agency (ESA), as evidenced by its handbook.
 
The European Commission advised EU-funded research and innovation projects to adopt the scale in 2010. TRLs were consequently used in 2014 in the EU Horizon 2020 program. In 2013, the TRL scale was further canonized by the ISO 16290:2013 standard. A comprehensive approach and discussion of TRLs has been published by the European Association of Research and Technology Organisations (EARTO). Extensive criticism of the adoption of TRL scale by the European Union was published in The Innovation Journal, stating that the "concreteness and sophistication of the TRL scale gradually diminished as its usage spread outside its original context (space programs)".

Current NASA usage

The current nine-point NASA scale is:
Level 1 – Basic principles observed and reported
Level 2 – Technology concept and/or application formulated
Level 3 – Analytical and experimental critical function and/or characteristic proof of concept
Level 4 – Component and/or breadboard validation in laboratory environment
Level 5 – Component and/or breadboard validation in relevant environment
Level 6 – System/subsystem model or prototype demonstration in a relevant environment (ground or space)
Level 7 – System prototype demonstration in a space environment
Level 8 – Actual system completed and “flight qualified” through test and demonstration (ground or space)
Level 9 – Actual system “flight proven” through successful mission operations

History

Technology Readiness Levels were originally conceived at NASA in 1974 and formally defined in 1989. The original definition included seven levels, but in the 1990s NASA adopted the current nine-level scale that subsequently gained widespread acceptance.

Original NASA TRL Definitions (1989)
Level 1 – Basic Principles Observed and Reported
Level 2 – Potential Application Validated
Level 3 – Proof-of-Concept Demonstrated, Analytically and/or Experimentally
Level 4 – Component and/or Breadboard Laboratory Validated
Level 5 – Component and/or Breadboard Validated in Simulated or Realspace Environment
Level 6 – System Adequacy Validated in Simulated Environment
Level 7 – System Adequacy Validated in Space
The TRL methodology was originated by Stan Sadin at NASA Headquarters in 1974. At that time, Ray Chase was the JPL Propulsion Division representative on the Jupiter Orbiter design team. At the suggestion of Stan Sadin, Mr Chase used this methodology to assess the technology readiness of the proposed JPL Jupiter Orbiter spacecraft design. Later Mr Chase spent a year at NASA Headquarters helping Mr Sadin institutionalize the TRL methodology. Mr Chase joined ANSER in 1978, where he used the TRL methodology to evaluate the technology readiness of proposed Air Force development programs. He published several articles during the 1980s and 90s on reusable launch vehicles utilizing the TRL methodology. These documented an expanded version of the methodology that included design tools, test facilities, and manufacturing readiness on the Air Force Have Not program. The Have Not program manager, Greg Jenkins, and Ray Chase published the expanded version of the TRL methodology, which included design and manufacturing. Leon McKinney and Mr Chase used the expanded version to assess the technology readiness of the ANSER team's Highly Reusable Space Transportation ("HRST") concept. ANSER also created an adapted version of the TRL methodology for proposed Homeland Security Agency programs.

The United States Air Force adopted the use of Technology Readiness Levels in the 1990s.

In 1995, John C. Mankins, NASA, wrote a paper that discussed NASA's use of TRL, extended the scale, and proposed expanded descriptions for each TRL. In 1999, the United States General Accounting Office produced an influential report that examined the differences in technology transition between the DOD and private industry. It concluded that the DOD takes greater risks and attempts to transition emerging technologies at lesser degrees of maturity than does private industry. The GAO concluded that use of immature technology increased overall program risk. The GAO recommended that the DOD make wider use of Technology Readiness Levels as a means of assessing technology maturity prior to transition. In 2001, the Deputy Under Secretary of Defense for Science and Technology issued a memorandum that endorsed use of TRLs in new major programs. Guidance for assessing technology maturity was incorporated into the Defense Acquisition Guidebook. Subsequently, the DOD developed detailed guidance for using TRLs in the 2003 DOD Technology Readiness Assessment Deskbook. 

Because of their relevance to habitation, 'Habitation Readiness Levels (HRLs)' were formulated by a group of NASA engineers (Jan Connolly, Kathy Daues, Robert Howard, and Larry Toups). They were created to address habitability requirements and design aspects alongside standards already established and widely used by different agencies, including NASA TRLs.

In the European Union

The European Space Agency adopted the TRL scale in the mid-2000s. Its handbook closely follows the NASA definition of TRLs. The universal usage of TRL in EU policy was proposed in the final report of the first High Level Expert Group on Key Enabling Technologies, and it was indeed implemented in the subsequent EU framework programme, Horizon 2020 (H2020), running from 2014 to 2020.[1] This means not only space and weapons programs, but everything from nanotechnology to informatics and communication technology.

The TRLs in Europe are as follows:
  • TRL 1 – Basic principles observed
  • TRL 2 – Technology concept formulated
  • TRL 3 – Experimental proof of concept
  • TRL 4 – Technology validated in lab
  • TRL 5 – Technology validated in relevant environment (industrially relevant environment in the case of key enabling technologies)
  • TRL 6 – Technology demonstrated in relevant environment (industrially relevant environment in the case of key enabling technologies)
  • TRL 7 – System prototype demonstration in operational environment
  • TRL 8 – System complete and qualified
  • TRL 9 – Actual system proven in operational environment (competitive manufacturing in the case of key enabling technologies; or in space)

Assessment tools

TPMM Transition Mechanism
A Technology Readiness Level Calculator was developed by the United States Air Force. This tool is a standard set of questions implemented in Microsoft Excel that produces a graphical display of the TRLs achieved. This tool is intended to provide a snapshot of technology maturity at a given point in time.
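
The following sketch illustrates the gating logic such a checklist tool presumably embodies; the questions and structure here are invented for illustration and are not the Air Force tool's actual content:

```python
# Hypothetical criteria per level (placeholders, not the real questionnaire).
criteria = {
    1: ["Basic principles observed and reported?"],
    2: ["Technology concept or application formulated?"],
    3: ["Analytical/experimental proof of concept completed?"],
    4: ["Component or breadboard validated in laboratory?"],
    5: ["Component or breadboard validated in relevant environment?"],
}

def achieved_trl(answers):
    """Highest TRL for which that level and every lower level are fully met."""
    level = 0
    for trl in sorted(criteria):
        if answers.get(trl) and all(answers[trl]):
            level = trl
        else:
            break
    return level

# Level 4 fails, so levels above it do not count even if satisfied.
answers = {1: [True], 2: [True], 3: [True], 4: [False], 5: [True]}
print(achieved_trl(answers))  # -> 3
```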
 
The Technology Program Management Model was developed by the United States Army. The TPMM is a TRL-gated high-fidelity activity model that provides a flexible management tool to assist Technology Managers in planning, managing, and assessing their technologies for successful technology transition. The model provides a core set of activities including systems engineering and program management tasks that are tailored to the technology development and management goals. This approach is comprehensive, yet it consolidates the complex activities that are relevant to the development and transition of a specific technology program into one integrated model.

Uses

The primary purpose of using technology readiness levels is to help management in making decisions concerning the development and transitioning of technology. It should be viewed as one of several tools that are needed to manage the progress of research and development activity within an organization.

Among the advantages of TRLs:
  • Provides a common understanding of technology status
  • Risk management
  • Used to make decisions concerning technology funding
  • Used to make decisions concerning transition of technology
Some of the characteristics of TRLs that limit their utility:
  • Readiness does not necessarily fit with appropriateness or technology maturity
  • A mature product may possess a greater or lesser degree of readiness for use in a particular system context than one of lower maturity
  • Numerous factors must be considered, including the relevance of the product's operational environment to the system at hand, as well as product-system architectural mismatch
Current TRL models tend to disregard negative and obsolescence factors. There have been suggestions made for incorporating such factors into assessments.

For complex technologies that incorporate various development stages, a more detailed scheme called the Technology Readiness Pathway Matrix has been developed, going from basic units to applications in society. This tool aims to show that the readiness level of a technology is based not on a linear process but on a more complex pathway through its application in society.

Biological engineering (updated)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Biological_engineering

 
Biological engineering, or bioengineering/bio-engineering, is the application of principles of biology and the tools of engineering to create usable, tangible, economically viable products. Biological engineering employs knowledge and expertise from a number of pure and applied sciences, such as mass and heat transfer, kinetics, biocatalysts, biomechanics, bioinformatics, separation and purification processes, bioreactor design, surface science, fluid mechanics, thermodynamics, and polymer science. It is used in the design of medical devices, diagnostic equipment, biocompatible materials, renewable bioenergy, ecological engineering, agricultural engineering, and other areas that improve the living standards of societies. Examples of bioengineering research include bacteria engineered to produce chemicals, new medical imaging technology, portable and rapid disease diagnostic devices, prosthetics, biopharmaceuticals, and tissue-engineered organs. Bioengineering overlaps substantially with biotechnology and the biomedical sciences in a way analogous to how various other forms of engineering and technology relate to various other sciences (for example, aerospace engineering and other space technology to kinetics and astrophysics). 

In general, biological engineers (or biomedical engineers) attempt to either mimic biological systems to create products or modify and control biological systems so that they can replace, augment, sustain, or predict chemical and mechanical processes. Bioengineers can apply their expertise to other applications of engineering and biotechnology, including genetic modification of plants and microorganisms, bioprocess engineering, and biocatalysis. Working with doctors, clinicians and researchers, bioengineers use traditional engineering principles and techniques and apply them to real-world biological and medical problems.

History

Biological engineering is a science-based discipline founded upon the biological sciences in the same way that chemical engineering, electrical engineering, and mechanical engineering can be based upon chemistry, electricity and magnetism, and classical mechanics, respectively.

Before WWII, biological engineering had only just begun to be recognized as a branch of engineering and was a very new concept to people. Post-WWII, the field grew more rapidly, partially due to the term "bioengineering" being coined by British scientist and broadcaster Heinz Wolff in 1954 at the National Institute for Medical Research. Wolff graduated that same year and became the director of the Division of Biological Engineering at the university; this was the first time bioengineering was recognized as its own branch at a university. Electrical engineering is considered the pioneer of this sector because of its work with medical devices and machinery during this period. When engineers and life scientists started working together, they recognized that the engineers didn't know enough about the actual biology behind their work. To resolve this problem, engineers who wanted to get into biological engineering devoted more of their time and studies to the details and processes of fields such as biology, psychology, and medicine. The term biological engineering may also be applied to environmental modifications such as surface soil protection, slope stabilization, watercourse and shoreline protection, windbreaks, vegetation barriers including noise barriers and visual screens, and the ecological enhancement of an area. Because other engineering disciplines also address living organisms, the term biological engineering can be applied more broadly to include agricultural engineering.

The first biological engineering program in the United States was created at the University of California, San Diego in 1966. More recent programs have been launched at MIT and Utah State University. Many old agricultural engineering departments in universities around the world have re-branded themselves as agricultural and biological engineering or agricultural and biosystems engineering, because biological engineering as a whole is a rapidly developing field with fluid categorization. According to Professor Doug Lauffenburger of MIT, biological engineering has a broad base that applies engineering principles to an enormous range of sizes and complexities of systems, from the molecular level (molecular biology, biochemistry, microbiology, pharmacology, protein chemistry, cytology, immunology, neurobiology and neuroscience) to cellular and tissue-based systems (including devices and sensors), to whole macroscopic organisms (plants, animals), and even up to entire ecosystems.

Education

The average length of study is three to five years, and the completed degree is designated a bachelor of engineering (B.S. in engineering). Fundamental courses include thermodynamics, biomechanics, biology, genetic engineering, fluid and mechanical dynamics, kinetics, electronics, and materials properties.

Sub-disciplines

Modeling of the spread of disease using Cellular Automata and Nearest Neighbor Interactions
 
Depending on the institution and the particular definitional boundaries employed, bioengineering comprises a number of major branches, which may overlap.

Organizations

  • The American Institute for Medical and Biological Engineering (AIMBE) is made up of 1,500 members. Its main goal is to educate the public about the value biological engineering has in our world, as well as to invest in research and other programs to advance the field. It gives out awards to those dedicated to innovation in the field and awards for achievement in the field. (AIMBE does not contribute directly to biological engineering; rather, it recognizes those who do and encourages the public to continue that forward movement.)
  • The Institute of Biological Engineering (IBE) is a non-profit organization that runs on donations alone. It aims to encourage the public to learn about and to continue advancements in biological engineering. (Like AIMBE, IBE does not do research directly; it does, however, offer scholarships to students who show promise in the field.)

Tissue engineering

From Wikipedia, the free encyclopedia
 
A simplified overview of the general methods used in regenerative medicine
 
Tissue engineering is the use of a combination of cells, engineering and materials methods, and suitable biochemical and physicochemical factors to improve or replace biological tissues. Tissue engineering involves the use of a tissue scaffold for the formation of new viable tissue for a medical purpose. While it was once categorized as a sub-field of biomaterials, having grown in scope and importance it can be considered a field in its own right.

While most definitions of tissue engineering cover a broad range of applications, in practice the term is closely associated with applications that repair or replace portions of or whole tissues (i.e., bone, cartilage,[1] blood vessels, bladder, skin, muscle, etc.). Often, the tissues involved require certain mechanical and structural properties for proper functioning. The term has also been applied to efforts to perform specific biochemical functions using cells within an artificially-created support system (e.g. an artificial pancreas or a bioartificial liver). The term regenerative medicine is often used synonymously with tissue engineering, although those involved in regenerative medicine place more emphasis on the use of stem cells or progenitor cells to produce tissues.

Overview

Micro-mass cultures of C3H-10T1/2 cells at varied oxygen tensions stained with Alcian blue
 
A commonly applied definition of tissue engineering, as stated by Langer and Vacanti, is "an interdisciplinary field that applies the principles of engineering and life sciences toward the development of biological substitutes that restore, maintain, or improve [Biological tissue] function or a whole organ". Tissue engineering has also been defined as "understanding the principles of tissue growth, and applying this to produce functional replacement tissue for clinical use". A further description goes on to say that an "underlying supposition of tissue engineering is that the employment of natural biology of the system will allow for greater success in developing therapeutic strategies aimed at the replacement, repair, maintenance, or enhancement of tissue function".

Powerful developments in the multidisciplinary field of tissue engineering have yielded a novel set of tissue replacement parts and implementation strategies. Scientific advances in biomaterials, stem cells, growth and differentiation factors, and biomimetic environments have created unique opportunities to fabricate tissues in the laboratory from combinations of engineered extracellular matrices ("scaffolds"), cells, and biologically active molecules. Among the major challenges now facing tissue engineering is the need for more complex functionality, as well as both functional and biomechanical stability and vascularization in laboratory-grown tissues destined for transplantation. The continued success of tissue engineering and the eventual development of true human replacement parts will grow from the convergence of engineering and basic research advances in tissue, matrix, growth factor, stem cell, and developmental biology, as well as materials science and bioinformatics...
In 2003, the NSF published a report entitled "The Emergence of Tissue Engineering as a Research Field", which gives a thorough description of the history of this field.

Examples

Regenerating a human ear using a scaffold

Cells as building blocks

Stained cells in culture
 
Tissue engineering utilizes living cells as engineering materials. Examples include using living fibroblasts in skin replacement or repair, cartilage repaired with living chondrocytes, or other types of cells used in other ways. 

Cells became available as engineering materials when scientists at Geron Corp. discovered how to extend telomeres in 1998, producing immortalized cell lines. Before this, laboratory cultures of healthy, noncancerous mammalian cells would only divide a fixed number of times, up to the Hayflick limit, before dying. 

Extraction

From fluid tissues such as blood, cells are extracted by bulk methods, usually centrifugation or apheresis. From solid tissues, extraction is more difficult. Usually, the tissue is minced and then digested with the enzymes trypsin or collagenase to remove the extracellular matrix (ECM) that holds the cells. After that, the cells are free floating, and extracted using centrifugation or apheresis.

Digestion with trypsin is very dependent on temperature. Higher temperatures digest the matrix faster but create more damage. Collagenase is less temperature dependent, and damages fewer cells, but takes longer and is a more expensive reagent. 

Types of cells

Mouse embryonic stem cells
Cells are often categorized by their source:

Autologous cells are obtained from the same individual to whom they will be reimplanted. Autologous cells have the fewest problems with rejection and pathogen transmission; however, in some cases they might not be available. For example, in genetic diseases, suitable autologous cells are not available. Also, very ill or elderly persons, as well as patients suffering from severe burns, may not have sufficient quantities of autologous cells to establish useful cell lines. Moreover, since this category of cells needs to be harvested from the patient, there are also some concerns related to the necessity of performing such surgical operations, which might lead to donor-site infection or chronic pain. Autologous cells also must be cultured from samples before they can be used: this takes time, so autologous solutions may not be very quick. Recently there has been a trend towards the use of mesenchymal stem cells from bone marrow and fat. These cells can differentiate into a variety of tissue types, including bone, cartilage, fat, and nerve. A large number of cells can be easily and quickly isolated from fat, opening the potential for large numbers of cells to be obtained quickly and easily.

Allogeneic cells come from the body of a donor of the same species. While there are some ethical constraints to the use of human cells for in vitro studies, the employment of dermal fibroblasts from human foreskin has been demonstrated to be immunologically safe and thus a viable choice for tissue engineering of skin. 

Xenogeneic cells are those isolated from individuals of another species. In particular, animal cells have been used quite extensively in experiments aimed at the construction of cardiovascular implants.

Syngeneic or isogenic cells are isolated from genetically identical organisms, such as twins, clones, or highly inbred research animal models.

Primary cells are from an organism.

Secondary cells are from a cell bank.

Stem cells are undifferentiated cells with the ability to divide in culture and give rise to different forms of specialized cells. According to their source, stem cells are divided into "adult" and "embryonic" stem cells, the first class being multipotent and the latter mostly pluripotent; some cells, in the earliest stages of the embryo, are totipotent. While there is still a large ethical debate related to the use of embryonic stem cells, it is thought that another alternative source, induced stem cells, may be useful for the repair of diseased or damaged tissues, or may be used to grow new organs.

Scaffolds

Scaffolds are materials that have been engineered to cause desirable cellular interactions to contribute to the formation of new functional tissues for medical purposes. Cells are often 'seeded' into these structures capable of supporting three-dimensional tissue formation. Scaffolds mimic the extracellular matrix of the native tissue, recapitulating the in vivo milieu and allowing cells to influence their own microenvironments. They usually serve at least one of the following purposes: allow cell attachment and migration, deliver and retain cells and biochemical factors, enable diffusion of vital cell nutrients and expressed products, exert certain mechanical and biological influences to modify the behaviour of the cell phase.

In 2009, an interdisciplinary team led by the thoracic surgeon Thorsten Walles successfully implanted the first bioartificial transplant that provides an innate vascular network for post-transplant graft supply into a patient awaiting tracheal reconstruction.

Carbon nanotubes are among the numerous candidates for tissue engineering scaffolds, since they are biocompatible, resistant to biodegradation, and can be functionalized with biomolecules. However, the possibility of toxicity with non-biodegradable nanomaterials is not fully understood.
 
To achieve the goal of tissue reconstruction, scaffolds must meet some specific requirements. High porosity and adequate pore size are necessary to facilitate cell seeding and diffusion throughout the whole structure of both cells and nutrients. Biodegradability is often an essential factor since scaffolds should preferably be absorbed by the surrounding tissues without the necessity of surgical removal. The rate at which degradation occurs has to coincide as much as possible with the rate of tissue formation: this means that while cells are fabricating their own natural matrix structure around themselves, the scaffold is able to provide structural integrity within the body and eventually it will break down leaving the newly formed tissue which will take over the mechanical load. Injectability is also important for clinical uses. Recent research on organ printing is showing how crucial a good control of the 3D environment is to ensure reproducibility of experiments and offer better results.
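
To make the degradation-matching requirement concrete, here is a toy model (all rates and units invented for illustration, not drawn from any cited study): total load-bearing capacity is the sum of a resorbing scaffold and growing tissue, and a scaffold that degrades much faster than the tissue forms leaves a pronounced dip in capacity.

```python
import math

def capacity(t, k_deg, k_tis=0.5, t_mid=8.0):
    """Total load-bearing capacity: resorbing scaffold plus growing tissue."""
    scaffold = math.exp(-k_deg * t)                    # first-order resorption
    tissue = 1 / (1 + math.exp(-k_tis * (t - t_mid)))  # logistic tissue growth
    return scaffold + tissue

for k_deg, label in [(0.10, "matched"), (0.60, "too fast")]:
    worst = min(capacity(t / 10, k_deg) for t in range(201))
    print(f"{label:>8}: minimum capacity = {worst:.2f}")  # ~0.79 vs ~0.21
```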

Materials

Many different materials (natural and synthetic, biodegradable and permanent) have been investigated. Most of these materials were known in the medical field before the advent of tissue engineering as a research topic, having already been employed, for example, as bioresorbable sutures. Examples of such materials are collagen and some polyesters.

New biomaterials have been engineered to have ideal properties and functional customization: injectability, synthetic manufacture, biocompatibility, non-immunogenicity, transparency, nano-scale fibers, low concentration, resorption rates, etc. PuraMatrix, originating from the MIT labs of Zhang, Rich, Grodzinsky, and Langer is one of these new biomimetic scaffold families which has now been commercialized and is impacting clinical tissue engineering.

A commonly used synthetic material is PLA (polylactic acid). This is a polyester which degrades within the human body to form lactic acid, a naturally occurring chemical which is easily removed from the body. Similar materials are polyglycolic acid (PGA) and polycaprolactone (PCL): their degradation mechanism is similar to that of PLA, but they exhibit, respectively, a faster and a slower rate of degradation compared to PLA. While these materials maintain mechanical strength and structural integrity well, they are hydrophobic in nature. This hydrophobicity inhibits their biocompatibility, which makes them less effective for in vivo use as tissue scaffolding. To address this lack of biocompatibility, much research has been done on combining these hydrophobic materials with hydrophilic, more biocompatible hydrogels. While these hydrogels have superior biocompatibility, they lack the structural integrity of PLA, PCL, and PGA. By combining the two types of materials, researchers are trying to create a synergistic relationship that produces a more biocompatible tissue scaffolding.

Scaffolds may also be constructed from natural materials: in particular, different derivatives of the extracellular matrix have been studied to evaluate their ability to support cell growth. Proteic materials, such as collagen or fibrin, and polysaccharidic materials, like chitosan or glycosaminoglycans (GAGs), have all proved suitable in terms of cell compatibility, but some issues with potential immunogenicity still remain. Among GAGs, hyaluronic acid, possibly in combination with cross-linking agents (e.g. glutaraldehyde or water-soluble carbodiimide), is one of the possible choices as a scaffold material. Functionalized groups on scaffolds may be useful in the delivery of small molecules (drugs) to specific tissues. Another form of scaffold under investigation is decellularised tissue extracts, whereby the remaining cellular remnants and extracellular matrix act as the scaffold. Recently, a range of nanocomposite biomaterials has been fabricated by incorporating nanomaterials within a polymeric matrix to engineer bioactive scaffolds.

A 2009 study by Derda et al. aimed to improve in vivo-like conditions for 3D tissue via "stacking and de-stacking layers of paper impregnated with suspensions of cells in extracellular matrix hydrogel, making it possible to control oxygen and nutrient gradients in 3D, and to analyze molecular and genetic responses". It is possible to manipulate gradients of soluble molecules, and to characterize cells in these complex gradients more effectively than conventional 3D cultures based on hydrogels, cell spheroids, or 3D perfusion reactors. Different thicknesses of paper and types of medium can support a variety of experimental environments. Upon deconstruction, these sheets can be useful in cell-based high-throughput screening and drug discovery.

Synthesis

Tissue engineered vascular graft
 
Tissue engineered heart valve
 
A number of different methods have been described in the literature for preparing porous structures to be employed as tissue engineering scaffolds. Each of these techniques presents its own advantages, but none are free of drawbacks. 

Nanofiber self-assembly

Molecular self-assembly is one of the few methods for creating biomaterials with properties similar in scale and chemistry to that of the natural in vivo extracellular matrix (ECM), a crucial step toward tissue engineering of complex tissues. Moreover, these hydrogel scaffolds have shown superiority in in vivo toxicology and biocompatibility compared to traditional macroscaffolds and animal-derived materials. 

Textile technologies

These techniques include all the approaches that have been successfully employed for the preparation of non-woven meshes of different polymers. In particular, non-woven polyglycolide structures have been tested for tissue engineering applications: such fibrous structures have been found useful to grow different types of cells. The principal drawbacks are related to the difficulties in obtaining high porosity and regular pore size. 

Solvent casting and particulate leaching

Solvent casting and particulate leaching (SCPL) allows for the preparation of structures with regular porosity, but with limited thickness. First, the polymer is dissolved in a suitable organic solvent (e.g. polylactic acid can be dissolved in dichloromethane), then the solution is cast into a mold filled with porogen particles. Such a porogen can be an inorganic salt like sodium chloride, crystals of saccharose, gelatin spheres, or paraffin spheres. The size of the porogen particles affects the size of the scaffold pores, while the polymer-to-porogen ratio is directly correlated with the amount of porosity of the final structure. After the polymer solution has been cast, the solvent is allowed to fully evaporate, then the composite structure in the mold is immersed in a bath of a liquid suitable for dissolving the porogen: water in the case of sodium chloride, saccharose, and gelatin, or an aliphatic solvent like hexane in the case of paraffin. Once the porogen has been fully dissolved, a porous structure is obtained. Other than the small thickness range that can be obtained, another drawback of SCPL lies in its use of organic solvents, which must be fully removed to avoid any possible damage to the cells seeded on the scaffold.
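
As a rough illustration of the polymer-to-porogen relationship, the following back-of-the-envelope sketch (approximate handbook densities; the 9:1 mass ratio is illustrative) converts a salt-to-polymer mass ratio into an expected pore volume fraction, assuming complete leaching:

```python
def scpl_porosity(m_porogen, m_polymer, rho_porogen=2.165, rho_polymer=1.25):
    """Estimate pore volume fraction for a NaCl porogen (~2.165 g/cm3)
    in PLA (~1.25 g/cm3), assuming the salt leaches out completely."""
    v_porogen = m_porogen / rho_porogen
    v_polymer = m_polymer / rho_polymer
    return v_porogen / (v_porogen + v_polymer)

# A 9:1 salt-to-polymer mass ratio, as sometimes used for high porosity
print(f"{scpl_porosity(9.0, 1.0):.0%}")  # about 84% porosity
```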

Gas foaming

To overcome the need to use organic solvents and solid porogens, a technique using gas as a porogen has been developed. First, disc-shaped structures made of the desired polymer are prepared by means of compression molding using a heated mold. The discs are then placed in a chamber where they are exposed to high pressure CO2 for several days. The pressure inside the chamber is gradually restored to atmospheric levels. During this procedure the pores are formed by the carbon dioxide molecules that abandon the polymer, resulting in a sponge-like structure. The main problems resulting from such a technique are caused by the excessive heat used during compression molding (which prohibits the incorporation of any temperature labile material into the polymer matrix) and by the fact that the pores do not form an interconnected structure. 

Emulsification freeze-drying

This technique does not require the use of a solid porogen like SCPL. First, a synthetic polymer is dissolved in a suitable solvent (e.g. polylactic acid in dichloromethane), then water is added to the polymeric solution and the two liquids are mixed in order to obtain an emulsion. Before the two phases can separate, the emulsion is cast into a mold and quickly frozen by immersion in liquid nitrogen. The frozen emulsion is subsequently freeze-dried to remove the dispersed water and the solvent, leaving a solidified, porous polymeric structure. While emulsification and freeze-drying allow for a faster preparation than SCPL (since they do not require a time-consuming leaching step), they still require the use of solvents. Moreover, pore size is relatively small and porosity is often irregular. Freeze-drying by itself is also a commonly employed technique for the fabrication of scaffolds. In particular, it is used to prepare collagen sponges: collagen is dissolved in acidic solutions of acetic acid or hydrochloric acid that are cast into a mold, frozen with liquid nitrogen, and then lyophilized.

Thermally induced phase separation

Similar to the previous technique, the TIPS phase-separation procedure requires the use of a solvent with a low melting point that is easy to sublime. For example, dioxane can be used to dissolve polylactic acid; phase separation is then induced through the addition of a small quantity of water, forming a polymer-rich and a polymer-poor phase. Following cooling below the solvent's melting point and some days of vacuum drying to sublime the solvent, a porous scaffold is obtained. Liquid-liquid phase separation presents the same drawbacks as emulsification/freeze-drying.

Electrospinning

Electrospinning is a highly versatile technique that can be used to produce continuous fibers with diameters from the submicrometer down to the nanometer range. In a typical electrospinning set-up, a solution is fed through a spinneret and a high voltage is applied to the tip. The buildup of electrostatic repulsion within the charged solution causes it to eject a thin fibrous stream. A mounted collector plate or rod with an opposite or grounded charge draws in the continuous fibers, which arrive to form a highly porous network. The primary advantages of this technique are its simplicity and ease of variation. At a laboratory level, a typical electrospinning set-up only requires a high-voltage power supply (up to 30 kV), a syringe, a flat-tip needle, and a conducting collector. For these reasons, electrospinning has become a common method of scaffold manufacture in many labs. By modifying variables such as the distance to the collector, the magnitude of the applied voltage, or the solution flow rate, researchers can dramatically change the overall scaffold architecture.

Historically, research on electrospun fibrous scaffolds dates back to at least the late 1980s, when Simon showed that electrospinning could be used to produce nano- and submicron-scale fibrous scaffolds from polymer solutions specifically intended for use as in vitro cell and tissue substrates. This early use of electrospun lattices for cell culture and tissue engineering showed that various cell types would adhere to and proliferate upon polycarbonate fibers. It was noted that, as opposed to the flattened morphology typically seen in 2D culture, cells grown on the electrospun fibers exhibited the more rounded three-dimensional morphology generally observed of tissues in vivo.

CAD/CAM technologies

Because most of the above techniques are limited in their control of porosity and pore size, computer-assisted design and manufacturing techniques have been introduced to tissue engineering. First, a three-dimensional structure is designed using CAD software. The porosity can be tailored using algorithms within the software. The scaffold is then realized by using ink-jet printing of polymer powders or through fused deposition modeling of a polymer melt.
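
As a toy illustration of tailoring porosity algorithmically (the lattice geometry and dimensions are invented, not taken from any particular CAD package), one can carve a periodic array of spherical pores into a voxel grid and measure the resulting void fraction:

```python
import math

N, period, radius = 60, 15, 6.0  # voxels per side, pore spacing, pore radius

def is_pore(x, y, z):
    """True if the voxel falls inside a spherical pore on a cubic lattice."""
    d = [min(c % period, period - c % period) for c in (x, y, z)]
    return math.sqrt(sum(v * v for v in d)) < radius

void = sum(is_pore(x, y, z) for x in range(N) for y in range(N) for z in range(N))
print(f"porosity: {void / N**3:.0%}")  # about 27% for these settings
```

Increasing the pore radius or decreasing the spacing raises the void fraction, which is essentially what the porosity-tailoring algorithms in CAD software control.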

A 2011 study by El-Ayoubi et al. investigated a "3D-plotting technique to produce (biocompatible and biodegradable) poly-L-lactide macroporous scaffolds with two different pore sizes" via solid free-form fabrication (SFF) with computer-aided design (CAD), to explore therapeutic articular cartilage replacement as an "alternative to conventional tissue repair". The study found that the smaller the pore size, paired with mechanical stress in a bioreactor (to induce in vivo-like conditions), the higher the cell viability and potential therapeutic functionality, decreasing recovery time and increasing transplant effectiveness.

Laser-assisted bioprinting

In a 2012 study, Koch et al. focused on whether laser-assisted bioprinting (LaBP) can be used to build multicellular 3D patterns in a natural matrix, and whether the generated constructs function and form tissue. LaBP arranges small volumes of living cell suspensions in set high-resolution patterns. The investigation was successful: the researchers foresee that "generated tissue constructs might be used for in vivo testing by implanting them into animal models". As of this study, only human skin tissue had been synthesized, though the researchers project that "by integrating further cell types (e.g. melanocytes, Schwann cells, hair follicle cells) into the printed cell construct, the behavior of these cells in a 3D in vitro microenvironment similar to their natural one can be analyzed", which is useful for drug discovery and toxicology studies.

Assembly methods

One of the continuing, persistent problems with tissue engineering is mass transport limitations. Engineered tissues generally lack an initial blood supply, thus making it difficult for any implanted cells to obtain sufficient oxygen and nutrients to survive, or function properly. 

Self-assembly

Self-assembly methods have been shown to be promising methods for tissue engineering. Self-assembly methods have the advantage of allowing tissues to develop their own extracellular matrix, resulting in tissue that better recapitulates biochemical and biomechanical properties of native tissue. Self-assembling engineered articular cartilage was introduced by Jerry Hu and Kyriacos A. Athanasiou in 2006 and applications of the process have resulted in engineered cartilage approaching the strength of native tissue. Self-assembly is a prime technology to get cells grown in a lab to assemble into three-dimensional shapes. To break down tissues into cells, researchers first have to dissolve the extracellular matrix that normally binds them together. Once cells are isolated, they must form the complex structures that make up our natural tissues. 

Liquid-based template assembly

The air-liquid surface established by Faraday waves has been explored as a template to assemble biological entities for bottom-up tissue engineering. This liquid-based template can be dynamically reconfigured in a few seconds, and assembly on the template can be achieved in a scalable and parallel manner. The assembly of microscale hydrogels, cells, neuron-seeded micro-carrier beads, and cell spheroids into various symmetrical and periodic structures has been demonstrated with good cell viability. Formation of a 3D neural network was achieved after 14 days of tissue culture.

Additive manufacturing

It might be possible to print organs, or possibly entire organisms using additive manufacturing techniques. A recent innovative method of construction uses an ink-jet mechanism to print precise layers of cells in a matrix of thermoreversible gel. Endothelial cells, the cells that line blood vessels, have been printed in a set of stacked rings. When incubated, these fused into a tube.

The field of three-dimensional, highly accurate models of biological systems is being pioneered by multiple projects and technologies, including a rapid method for creating tissues and even whole organs that involves a 3D printer able to print scaffolding and cells layer by layer into a working tissue sample or organ. The device was presented in a TED talk by Dr. Anthony Atala, M.D., Director of the Wake Forest Institute for Regenerative Medicine and W.H. Boyce Professor and Chair of the Department of Urology at Wake Forest University, in which a kidney is printed on stage during the seminar and then presented to the audience. It is anticipated that this technology will enable the production of livers in the future for transplantation, and theoretically for toxicology and other biological studies as well.

Recently, multi-photon processing (MPP) was employed to engineer artificial cartilage constructs for in vivo experiments. An ex vivo histological examination showed that certain pore geometries, and the pre-growing of chondrocytes (Cho) prior to implantation, significantly improve the performance of the created 3D scaffolds. The achieved biocompatibility was comparable to that of commercially available collagen membranes. The successful outcome of this study supports the idea that hexagonal-pore-shaped hybrid organic-inorganic microstructured scaffolds, in combination with Cho seeding, may be successfully implemented for cartilage tissue engineering.

Scaffolding

In 2013, using a 3D scaffolding of Matrigel in various configurations, substantial pancreatic organoids were produced in vitro. Clusters of small numbers of cells proliferated to 40,000 cells within one week. The clusters transform into cells that make either digestive enzymes or hormones like insulin, self-organizing into branched pancreatic organoids that resemble the pancreas.

The cells are sensitive to the environment, such as gel stiffness and contact with other cells. Individual cells do not thrive; a minimum of four proximate cells was required for subsequent organoid development. Modifications to the medium composition produced either hollow spheres mainly composed of pancreatic progenitors, or complex organoids that spontaneously undergo pancreatic morphogenesis and differentiation. Maintenance and expansion of pancreatic progenitors require active Notch and FGF signaling, recapitulating in vivo niche signaling interactions.

The organoids were seen as potentially offering mini-organs for drug testing and for spare insulin-producing cells.

Tissue culture

In many cases, the creation of functional tissues and biological structures in vitro requires extensive culturing to promote survival, growth, and the inducement of functionality. In general, the basic requirements of cells must be maintained in culture, including oxygen, pH, humidity, temperature, nutrients, and osmotic pressure.

Tissue-engineered cultures also present additional problems in maintaining culture conditions. In standard cell culture, diffusion is often the sole means of nutrient and metabolite transport. However, as a culture becomes larger and more complex, as is the case with engineered organs and whole tissues, other mechanisms must be employed to maintain the culture, such as the creation of capillary networks within the tissue.
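To see why diffusion alone fails at larger scales, a minimal back-of-the-envelope sketch estimates the depth to which oxygen can penetrate avascular tissue at steady state, using Fick's law with zero-order consumption. The parameter values are order-of-magnitude assumptions, not measured figures:

```python
import math

# Minimal sketch: steady-state oxygen penetration depth into avascular tissue,
# L = sqrt(2 * D * C0 / Q), from Fick's law with zero-order cellular consumption.
# The parameter values below are order-of-magnitude assumptions.
D = 2e-9    # oxygen diffusivity in tissue, m^2/s (assumed)
C0 = 0.2    # oxygen concentration at the culture surface, mol/m^3 (assumed)
Q = 0.01    # volumetric consumption rate by cells, mol/(m^3 * s) (assumed)

depth = math.sqrt(2 * D * C0 / Q)   # metres
print(f"Diffusion-limited depth ~ {depth * 1e6:.0f} micrometres")
```

Estimates of this kind, which come out at a few hundred micrometres, are why engineered constructs any thicker require perfusion or capillary-like networks.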

Bioreactor for cultivation of vascular grafts
 
Another issue with tissue culture is introducing the proper factors or stimuli required to induce functionality. In many cases, simple maintenance culture is not sufficient; growth factors, hormones, specific metabolites or nutrients, and chemical and physical stimuli are sometimes required. For example, certain cells respond to changes in oxygen tension as part of their normal development, such as chondrocytes, which must adapt to low-oxygen conditions (hypoxia) during skeletal development. Others, such as endothelial cells, respond to shear stress from fluid flow, which is encountered in blood vessels. Mechanical stimuli, such as pressure pulses, seem to be beneficial to all kinds of cardiovascular tissue, such as heart valves, blood vessels, and the pericardium.

Bioreactors

A bioreactor in tissue engineering, as opposed to an industrial bioreactor, is a device that attempts to simulate a physiological environment in order to promote cell or tissue growth in vitro. A physiological environment can consist of many different parameters, such as temperature and oxygen or carbon dioxide concentration, but can extend to all kinds of biological, chemical, or mechanical stimuli. Therefore, some systems may include the application of forces or stresses to the tissue, or even electric current, in two- or three-dimensional setups.
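As a minimal illustration of what "simulating a physiological environment" means in practice, the sketch below checks a few such parameters against physiological setpoints. The parameter names, targets, and tolerances are illustrative assumptions, not the specification of any real bioreactor:

```python
from dataclasses import dataclass

# Hypothetical sketch of the setpoint bookkeeping a tissue-engineering
# bioreactor controller might perform; all names and values are illustrative.
@dataclass
class Setpoint:
    target: float
    tolerance: float

    def in_range(self, measured: float) -> bool:
        return abs(measured - self.target) <= self.tolerance

SETPOINTS = {
    "temperature_C": Setpoint(37.0, 0.5),    # physiological temperature
    "pH": Setpoint(7.4, 0.1),                # physiological pH
    "dissolved_O2_pct": Setpoint(20.0, 2.0), # % of air saturation (assumed)
}

def out_of_range(readings: dict) -> list:
    """Return the names of parameters drifting outside their tolerance."""
    return [name for name, sp in SETPOINTS.items()
            if name in readings and not sp.in_range(readings[name])]

# Example: temperature has drifted low, so it is flagged.
print(out_of_range({"temperature_C": 36.2, "pH": 7.42, "dissolved_O2_pct": 21.0}))
```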

In academic and industrial research facilities, it is typical for bioreactors to be developed to replicate the specific physiological environment of the tissue being grown (e.g., flex and fluid shearing for heart tissue growth). Several general-purpose and application-specific bioreactors are also commercially available, and may provide static chemical stimulation or a combination of chemical and mechanical stimulation.

A variety of bioreactors have been designed for 3D cell cultures. There are small plastic cylindrical chambers, as well as glass chambers with regulated internal humidity and moisture, specifically engineered for the purpose of growing cells in three dimensions. The bioreactor uses bioactive synthetic materials, such as polyethylene terephthalate membranes, to surround the spheroid cells in an environment that maintains high levels of nutrients. The chambers are easy to open and close, so that cell spheroids can be removed for testing, yet able to maintain 100% humidity throughout. This humidity is important for achieving maximum cell growth and function. The bioreactor chamber is part of a larger device that rotates to ensure equal cell growth in each direction across three dimensions.

QuinXell Technologies from Singapore has developed a bioreactor known as the TisXell Biaxial Bioreactor, designed specifically for tissue engineering. It is the first bioreactor in the world to have a spherical glass chamber with biaxial rotation, intended to mimic the rotation of the fetus in the womb, which provides a conducive environment for the growth of tissues.

MC2 Biotek has also developed a bioreactor, known as ProtoTissue, that uses gas exchange to maintain high oxygen levels within the cell chamber, improving upon previous bioreactors because the higher oxygen levels help cells grow and undergo normal cellular respiration.

Long fiber generation

In 2013, a group from the University of Tokyo developed cell-laden fibers up to a meter in length and on the order of 100 µm in diameter. These fibers were created using a microfluidic device that forms a double coaxial laminar flow, with each 'layer' of the flow carrying a different component: cells seeded in ECM at the core, a hydrogel sheath around it, and finally a calcium chloride solution on the outside. The seeded cells are cultured within the hydrogel sheath for several days, after which the sheath is removed, leaving viable cell fibers. Various cell types were inserted into the ECM core, including myocytes, endothelial cells, nerve cells, and epithelial cells. The group then showed that these fibers can be woven together to fabricate tissues or organs in a mechanism similar to textile weaving. Fibrous morphologies are advantageous in that they provide an alternative to traditional scaffold design, and many organs (such as muscle) are composed of fibrous cells.
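A rough sketch of the geometry involved: in a coaxial laminar co-flow with roughly matched stream velocities, the core's cross-sectional area fraction approximately equals its flow-rate fraction, so the fiber diameter can be tuned by the ratio of flow rates. The numbers below are illustrative assumptions, not the values used by the Tokyo group:

```python
import math

# Rough sketch: in a coaxial laminar co-flow with roughly matched stream
# velocities, the core occupies an area fraction equal to its flow-rate
# fraction, so d_core ~ d_channel * sqrt(Q_core / Q_total).
# All numbers below are illustrative assumptions.
d_channel_um = 200.0   # outlet channel diameter in micrometres (assumed)
Q_core = 1.0           # core (cell/ECM) flow rate, arbitrary units (assumed)
Q_sheath = 3.0         # hydrogel sheath flow rate, same units (assumed)

d_core = d_channel_um * math.sqrt(Q_core / (Q_core + Q_sheath))
print(f"Estimated core fibre diameter ~ {d_core:.0f} micrometres")
```

With these assumed values the estimate lands at about 100 µm, the scale reported for the fibers.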

Bioartificial organs

An artificial organ is a man-made device that is implanted or integrated into a human to replace a natural organ, for the purpose of restoring a specific function or a group of related functions so the patient may return to normal life as soon as possible. The replaced function does not necessarily have to be life-supporting, but it often is. The ultimate goal of tissue engineering as a discipline is to allow both 'off-the-shelf' bioartificial organs and the regeneration of injured tissue in the body. In order to successfully create bioartificial organs from a patient's stem cells, researchers continue to improve the generation of complex tissues by tissue engineering. For example, much research is aimed at understanding the nanoscale cues present in a cell’s microenvironment.

Biomimetics

Biomimetics is a field that aims to produce materials and systems that replicate those present in nature. In the context of tissue engineering, it is a common approach used by engineers to create materials that are comparable to native tissues in terms of their structure, properties, and biocompatibility. Material properties depend largely on the physical, structural, and chemical characteristics of the material. Consequently, a biomimetic approach to system design is significant for material integration, and a sufficient understanding of biological processes and interactions is necessary. Replication of biological systems and processes may also be used in the synthesis of bio-inspired materials to achieve conditions that produce the desired biological material. Thus, if a material is synthesized with the same structural and chemical characteristics as biological tissue, then ideally the synthesized material will have similar properties. This technique has an extensive history, originating from the idea of using natural phenomena as design inspiration for solutions to human problems. Many modern advancements in technology have been inspired by nature and natural systems, including aircraft, automobiles, architecture, and even industrial systems. Advancements in nanotechnology initiated the application of this technique to micro- and nano-scale problems, including tissue engineering. The technique has been used to develop synthetic bone tissues, vascular technologies, scaffolding materials and integration techniques, and functionalized nanoparticles.

Constructing neural networks in soft material

In 2018, scientists at Brandeis University reported research on soft material embedded with chemical networks that can mimic the smooth and coordinated behavior of neural tissue. This research was funded by the U.S. Army Research Laboratory. The researchers presented an experimental system of neural networks, theoretically modeled as reaction-diffusion systems. Within the networks was an array of patterned reactors, each performing the Belousov-Zhabotinsky (BZ) reaction. These reactors could function at the nanoliter scale.

The researchers state that the inspiration for their project was the movement of the blue ribbon eel. The eel's movements are controlled by electrical impulses determined by a class of neural networks called central pattern generators. Central pattern generators function within the autonomic nervous system to control bodily functions such as respiration, movement, and peristalsis.

The designed qualities of the reactors were the network topology, boundary conditions, initial conditions, reactor volume, coupling strength, and the synaptic polarity of each reactor (whether its behavior is inhibitory or excitatory). A BZ emulsion system within a solid elastomer, polydimethylsiloxane (PDMS), was designed. PDMS permeable to both light and bromine has been reported as a viable means of creating a pacemaker for such neural networks.
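As an illustration of the kind of reaction-diffusion modeling referred to above, the sketch below integrates the classic two-variable Oregonator, a standard reduced model of the BZ reaction, on a one-dimensional chain of diffusively coupled cells. The parameter values are conventional textbook choices, not those of the Brandeis study:

```python
import numpy as np

# Minimal sketch of reaction-diffusion modeling of coupled BZ reactors:
# the two-variable Oregonator on a 1D chain of diffusively coupled cells.
# Parameters are standard illustrative values, not those of the study.
eps, f, q = 0.04, 1.4, 0.002   # Oregonator kinetics parameters (assumed)
Du, Dv = 1.0, 0.6              # diffusion/coupling coefficients (assumed)
n, dt, steps = 100, 1e-4, 50_000

u = 0.01 * np.ones(n)          # activator (HBrO2) concentration
v = 0.01 * np.ones(n)          # oxidized catalyst concentration
u[:5] = 0.5                    # perturb one end to launch a chemical wave

def laplacian(a):
    # 1D Laplacian with no-flux (zero-gradient) boundaries, grid spacing 1
    return np.concatenate(([a[1] - a[0]],
                           a[2:] - 2 * a[1:-1] + a[:-2],
                           [a[-2] - a[-1]]))

for _ in range(steps):
    du = (u - u * u - f * v * (u - q) / (u + q)) / eps + Du * laplacian(u)
    dv = (u - v) + Dv * laplacian(v)
    u += dt * du
    v += dt * dv
    u = np.maximum(u, q / 10)  # guard against the singularity at u = -q

print("activator min/max after simulation:", u.min(), u.max())
```

In models of this kind, excitation waves launched at one reactor propagate through the chain, and the coupling strength and the inhibitory or excitatory character of each node determine the network's collective rhythm, which is the behavior the patterned BZ reactors were built to reproduce.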

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...