
Tuesday, December 10, 2019

Transcranial direct-current stimulation

 
tDCS administration: anodal and cathodal electrodes of 35 cm² are placed on F3 and the right supraorbital region, respectively. A head strap is used for convenience and reproducibility, and a rubber band for reducing resistance.

Transcranial direct current stimulation (tDCS) is a form of neuromodulation that uses constant, low direct current delivered via electrodes on the head. It can be contrasted with cranial electrotherapy stimulation, which generally delivers alternating current in the same way.

It was originally developed to help patients with brain injuries or psychiatric conditions like major depressive disorder. There is increasing evidence for tDCS as a treatment for depression. However, there is mixed evidence about whether tDCS is useful for cognitive enhancement in healthy people. Several reviews have found evidence of small yet significant cognitive improvements. Other reviews found no evidence at all, although one of them has been criticized for overlooking within-subject effects and evidence from multiple-session tDCS trials. There is no good evidence that tDCS is useful for memory deficits in Parkinson's disease and Alzheimer's disease, schizophrenia, non-neuropathic pain, or improving upper limb function after stroke.

Medical use

In 2015, the British National Institute for Health and Care Excellence (NICE) found tDCS to be safe and to appear effective for treating depression, although more and larger studies were needed at that time. Since then, several studies and meta-analyses have been conducted that add to the evidence for tDCS as a safe and effective treatment for depression.

There is also evidence that tDCS is useful in treating neuropathic pain after spinal cord injury and improving activities of daily living assessment after stroke.

Adverse effects and contraindications

People susceptible to seizures, such as people with epilepsy, should not receive tDCS.

As of 2017, for stimulation of up to 60 minutes and up to 4 mA over two weeks, adverse effects included skin irritation, a phosphene at the start of stimulation, nausea, headache, dizziness, and itching under the electrode. Adverse effects of long-term treatment were not known as of 2017. Nausea most commonly occurs when the electrodes are placed above the mastoid for stimulation of the vestibular system. A phosphene is a brief flash of light that can occur if an electrode is placed near the eye.

Studies have been completed to determine the current density at which overt brain damage occurs in rats. It was found that in cathodal stimulation, a current density of 142.9 A/m² delivering a charge density of 52,400 C/m² or higher caused a brain lesion in the rat. This is over two orders of magnitude higher than protocols that were in use as of 2009.
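As a rough numeric check of that margin, here is a short sketch; the 2 mA, 35 cm² comparison protocol is a typical published figure assumed here for illustration, not a value from the rat study:

```python
import math

# Rat lesion threshold reported above (cathodal stimulation).
lesion_current_density = 142.9     # A/m^2
lesion_charge_density = 52400      # C/m^2

# Implied duration at threshold: charge density = current density * time.
threshold_s = lesion_charge_density / lesion_current_density
print(f"threshold duration: {threshold_s:.0f} s (~{threshold_s / 60:.1f} min)")

# Typical human protocol for comparison (assumed values): 2 mA through
# a 35 cm^2 sponge electrode.
protocol_density = 0.002 / 35e-4   # = 0.57 A/m^2
margin = lesion_current_density / protocol_density
print(f"margin: {margin:.0f}x (~{math.log10(margin):.1f} orders of magnitude)")
```

The computed margin is roughly 250-fold, consistent with the "over two orders of magnitude" statement above.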

Mechanism of action

One notable aspect of tDCS is its ability to produce cortical changes that persist after the stimulation has ended; the duration of these after-effects grows with both the length and the intensity of stimulation. The stimulation changes brain function by causing the neurons' resting membrane potential to depolarize or hyperpolarize. When positive stimulation (anodal tDCS) is delivered, the current depolarizes the resting membrane potential, which increases neuronal excitability and allows more spontaneous cell firing. When negative stimulation (cathodal tDCS) is delivered, the current hyperpolarizes the resting membrane potential, decreasing neuronal excitability and spontaneous cell firing.
tDCS has been proposed to promote both long-term potentiation and long-term depression.

Operation

Transcranial direct current stimulation works by sending constant, low direct current through the electrodes. When these electrodes are placed in the region of interest, the current induces intracerebral current flow. This current flow then either increases or decreases the neuronal excitability in the specific area being stimulated based on which type of stimulation is being used. This change of neuronal excitability leads to alteration of brain function, which can be used in various therapies as well as to provide more information about the functioning of the human brain.

Parts

Transcranial direct current stimulation is a relatively simple technique requiring only a few parts: two electrodes and a battery-powered device that delivers constant current. Control software can also be used in experiments that require multiple sessions with differing stimulation types, so that neither the person receiving the stimulation nor the experimenter knows which type is being administered. Each device has an anode (the positively charged electrode) and a cathode (the negatively charged electrode). Current is "conventionally" described as flowing from the positive anode, through the intervening conducting tissue, to the cathode, creating a circuit. Note that in traditional electric circuits constructed from metal wires, current flow is created by the motion of negatively charged electrons, which actually flow from cathode to anode. In biological systems such as the head, however, current is usually carried by ions, which may be positively or negatively charged: positive ions flow towards the cathode, and negative ions flow toward the anode. The device may control the current as well as the duration of stimulation.

Setup

To set up the tDCS device, the electrodes and the skin need to be prepared to ensure a low-resistance connection between them. Careful placement of the electrodes is crucial to successful tDCS. The electrode pads come in various sizes, each with its own benefits: a smaller electrode achieves more focused stimulation of a site, while a larger electrode ensures that the entire region of interest is stimulated. If an electrode is placed incorrectly, a different site, or more sites than intended, may be stimulated, producing invalid results.[19] One of the electrodes is placed over the region of interest, and the other, the reference electrode, is placed elsewhere to complete the circuit, usually on the neck or shoulder on the opposite side of the body from the region of interest. Since the region of interest may be small, it is often useful to locate it before placing the electrode by using a brain imaging technique such as fMRI or PET.

Once the electrodes are placed correctly, the stimulation can be started. Many devices can "ramp up" the current, increasing it gradually until the necessary level is reached, which reduces the stimulation effects felt by the person receiving tDCS. After the stimulation has started, the current continues for the amount of time set on the device and then shuts off automatically (a sketch of such a ramp profile appears at the end of this section).

Recently a new approach has been introduced in which, instead of two large pads, multiple (more than two) smaller gel electrodes are used to target specific cortical structures. This approach is called high-definition tDCS (HD-tDCS). In a pilot study, HD-tDCS produced greater and longer-lasting motor cortex excitability changes than sponge tDCS.
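As a rough sketch of the ramp behaviour described above (the target current, ramp time, and session length below are illustrative assumptions, not values from any particular device):

```python
def current_at(t, target_ma=2.0, ramp_s=30.0, duration_s=1200.0):
    """Current (in mA) at time t (in s) for a session with a linear
    ramp-up, a hold at the target current, and a linear ramp-down."""
    if t < 0 or t > duration_s:
        return 0.0
    if t < ramp_s:                           # gradual ramp-up
        return target_ma * t / ramp_s
    if t > duration_s - ramp_s:              # gradual ramp-down
        return target_ma * (duration_s - t) / ramp_s
    return target_ma                          # hold at target

# Sample the profile once a minute for a 20-minute, 2 mA session.
profile = [round(current_at(60 * m), 2) for m in range(21)]
print(profile)   # 0.0 at the edges, 2.0 during the hold
```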

Types of stimulation

There are three different types of stimulation: anodal, cathodal, and sham. Anodal stimulation is positive (V+) stimulation that increases the neuronal excitability of the area being stimulated. Cathodal (V−) stimulation decreases neuronal excitability and can be used to treat psychiatric disorders caused by hyperactivity of an area of the brain. Sham stimulation is used as a control in experiments: it emits a brief current but then remains off for the remainder of the stimulation time, so the person receiving tDCS does not know that they are not receiving prolonged stimulation. By comparing the results of subjects exposed to sham stimulation with the results of subjects exposed to anodal or cathodal stimulation, researchers can see how much of an effect is caused by the stimulation itself rather than by the placebo effect.
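A minimal sketch of how a sham profile can mimic the start of a real session; the function name and timings are hypothetical:

```python
import random

def sham_current_at(t, target_ma=2.0, ramp_s=30.0):
    """Sham profile: ramp up and straight back down, so the subject
    feels the initial tingling, then hold at zero for the session."""
    if t < ramp_s:
        return target_ma * t / ramp_s
    if t < 2 * ramp_s:
        return target_ma * (2 * ramp_s - t) / ramp_s
    return 0.0

# Double blinding: each session carries an opaque code; only a third
# party's randomization list maps codes to "active" or "sham".
randomization = {"S01": random.choice(["active", "sham"])}
```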

History

The basic design of tDCS, using direct current (DC) to stimulate an area of interest, has existed for over 100 years. A number of rudimentary experiments predating the 19th century tested animal and human electricity; Luigi Galvani and Alessandro Volta were two researchers who used direct current stimulation in their explorations of the source of animal cell electricity. These initial studies brought the technique into the clinical scene: in 1801, Giovanni Aldini (Galvani's nephew) began a study in which he successfully used direct current stimulation to improve the mood of melancholic patients.

Interest in transcranial direct current stimulation rose briefly in the 1960s, when studies by researcher D. J. Albert showed that the stimulation could affect brain function by changing cortical excitability, and that positive and negative stimulation had different effects on that excitability. Research continued, further fueled by knowledge gained from other techniques such as TMS and fMRI.

Comparison to other devices

Transcranial electrical stimulation techniques: while tDCS uses constant current intensity, tRNS and tACS use oscillating current. The vertical axis represents the current intensity in milliamperes (mA); the horizontal axis illustrates the time course.
 
In transcranial magnetic stimulation (TMS), an electromagnetic coil held above the scalp over the region of interest uses rapidly changing magnetic fields to induce small electrical currents in the brain. There are two types of TMS: repetitive TMS and single-pulse TMS. Both are used in research and therapy, but effects lasting longer than the stimulation period are only observed in repetitive TMS. As with tDCS, an increase or decrease in neuronal activity can be achieved, but the method of induction is very different: tDCS uses its two directions of current to produce the different effects, whereas repetitive TMS induces increased neuronal activity with higher-frequency stimulation and decreased activity with lower-frequency stimulation.

Variants related to tDCS include transcranial alternating current stimulation (tACS), transcranial pulsed current stimulation (tPCS), and transcranial random noise stimulation (tRNS), a group of technologies commonly referred to as transcranial electrical stimulation, or TES.

Research

In 2016, a European meta-analysis found Level B evidence (probable efficacy) for fibromyalgia, depression, and craving.

A 2015 review of results from hundreds of tDCS experiments found that there was no statistically conclusive evidence to support any net cognitive effect, positive or negative, of single-session tDCS in healthy populations; that is, no evidence that tDCS is useful for cognitive enhancement. A second study by the same authors found little-to-no statistically reliable impact of tDCS on any neurophysiologic outcome.

A few clinical trials have been conducted on the use of tDCS to ameliorate memory deficits in Parkinson's disease and Alzheimer's disease and in healthy subjects, with mixed results. A 2016 Cochrane review found evidence that tDCS may improve activities of daily living in Parkinson's disease, but the evidence was of very low to moderate quality.

As of 2014, there have been several small randomized clinical trials (RCTs) in major depressive disorder (MDD); most found alleviation of depressive symptoms. There have been only two RCTs in treatment-resistant MDD; both were small, and one found an effect while the other did not. One meta-analysis of the data focused on reduction in symptoms and found an effect compared to sham treatment, but another, focused on relapse, found no effect compared to sham.

Research conducted as of 2013 in schizophrenia has found that while large effect sizes were initially reported for symptom improvement, later and larger studies have found smaller effect sizes (see also the section on use of tDCS in psychiatric disorders below). Studies have mostly concentrated on positive symptoms such as auditory hallucinations; research on negative symptoms is lacking.

Research conducted as of 2012 on the use of tDCS to treat pain found that the studies were of low quality and cannot be used as a basis to recommend tDCS for pain. In chronic pain following spinal cord injury, research is of high quality and has found tDCS to be ineffective.

In stroke, research conducted as of 2014 has found that tDCS is not effective for improving upper limb function. While some reviews have suggested an effect of tDCS on post-stroke aphasia, a 2015 Cochrane review found no improvement from combining tDCS with conventional treatment. Research conducted as of 2013 suggests that tDCS may be effective for improving visual deficits following stroke.

tDCS has also been studied in various psychiatric disorders such as depression, and to reverse cognitive deficits in schizophrenia. Some researchers are investigating potential applications such as the improvement of focus and concentration. tDCS has also been studied in addiction.

tDCS has also been used in neuroscience research, particularly to try to link specific brain regions to specific cognitive tasks or psychological phenomena. The cerebellum has been a focus of research, due to its high concentration of neurons, its location immediately below the skull, and its multiple reciprocal anatomical connections to motor and associative parts of the brain. Most such studies focus on the impact of cerebellar tDCS on motor, cognitive, and affective functions in healthy and patient populations, but some also employ tDCS over the cerebellum to study the functional connectivity of the cerebellum to other areas of the brain.

Regulatory approvals

As of 2015, tDCS has not been approved for any use by the US FDA. An FDA briefing document prepared in 2012 stated that "there is no regulation for therapeutic tDCS". tDCS is a CE approved treatment for Major Depressive Disorder (MDD) in the EU, Australia, and Mexico.

New product development

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/New_product_development
 
In business and engineering, new product development (NPD) covers the complete process of bringing a new product to market. A central aspect of NPD is product design, along with various business considerations. New product development is described broadly as the transformation of a market opportunity into a product available for sale. The product can be tangible (something physical which one can touch) or intangible (like a service, experience, or belief), though sometimes services and other processes are distinguished from "products." NPD requires an understanding of customer needs and wants, the competitive environment, and the nature of the market. Cost, time, and quality are the main variables that drive customer needs. Aiming at these three variables, innovative companies develop continuous practices and strategies to better satisfy customer requirements and to increase their own market share through regular development of new products. There are many uncertainties and challenges which companies must face throughout the process. The use of best practices and the elimination of barriers to communication are the main concerns for the management of the NPD process.

Process structure

The product development process typically consists of several activities that firms employ in the complex process of delivering new products to the market. A process management approach is used to provide a structure. Product development often overlaps substantially with the engineering design process, particularly if the new product being developed involves application of math and/or science. Every new product will pass through a series of stages/phases, including ideation among other aspects of design, as well as manufacturing and market introduction. In highly complex engineered products (e.g. aircraft, automotive, machinery), the NPD process can be likewise complex regarding management of personnel, milestones, and deliverables. Such projects typically use an integrated product team approach. The process for managing large-scale complex engineering products is much slower (often 10-plus years) than that deployed for many types of consumer goods.

The product development process is articulated and broken down in many different ways, many of which often include the following phases/stages:
  1. Fuzzy front-end (FFE) is the set of activities employed before the more formal and well defined requirements specification is completed. Requirements speak to what the product should do or have, at varying degrees of specificity, in order to meet the perceived market or business need.
  2. Product design is the development of both the high-level and detailed-level design of the product: it turns the what of the requirements into the how of this particular product meeting those requirements. This typically has the most overlap with the engineering design process, but can also include industrial design and even purely aesthetic aspects. On the marketing and planning side, this phase ends at the pre-commercialization analysis stage.
  3. Product implementation often refers to later stages of detailed engineering design (e.g. refining mechanical or electrical hardware, or software, or goods or other product forms), as well as test process that may be used to validate that the prototype actually meets all design specifications that were established.
  4. Fuzzy back-end or commercialization phase represents the action steps where production and market launch occur.
The front-end marketing phases have been very well researched, with valuable models proposed. Peter Koen et al. propose a five-step front-end activity called front-end innovation: opportunity identification, opportunity analysis, idea genesis, idea selection, and idea and technology development. The model also includes an engine in the middle of the five front-end stages, representing the management driving the activities, and the possible outside barriers that can influence the process outcome. The front end of innovation is the greatest area of weakness in the NPD process, mainly because the FFE is often chaotic, unpredictable, and unstructured.

Engineering design is the process whereby a technical solution is developed iteratively to solve a given problem. The design stage is very important because most of the product life-cycle costs are committed at this stage. Previous research shows that 70–80% of final product quality and 70% of the product's entire life-cycle cost are determined in the product design phase; the design-manufacturing interface therefore represents the greatest opportunity for cost reduction. Design projects last from a few weeks to three years, with an average of one year.

Design and commercialization phases usually begin collaborating very early. When the concept design is finished, it is sent to the manufacturing plant for prototyping, following a concurrent engineering approach through practices such as QFD and DFM/DFA. The output of design (engineering) is a set of product and process specifications, mostly in the form of drawings; the output of manufacturing is the product ready for sale. Basically, the design team develops drawings with technical specifications representing the future product and sends them to the manufacturing plant to be executed. Solving product/process fit problems is of high priority in design communication, because 90% of the development effort must be scrapped if changes are made after the release to manufacturing.

NPD Process

  1. New Product Strategy – Innovators have clearly defined goals and objectives for the new product.
  2. Idea Generation – Ideas are collected through brainstorming from internal and external sources.
  3. Screening – The number of brainstormed ideas is condensed.
  4. Concept Testing – An idea is structured into a detailed concept.
  5. Business Analysis – The costs and profits of the new product are estimated and checked against company objectives.
  6. Product Development – The product itself is developed.
  7. Market Testing – The marketing mix is tested through a trial run of the product.
  8. Commercialization – The product is introduced to the public.

Models

Conceptual models have been designed in order to facilitate a smooth process.
  • IDEO approach. The concept adopted by IDEO, a design and consulting firm, is one of the most researched processes in regard to new product development; it is a five-step procedure. The steps, in chronological order, are:
  1. Understand and observe the market, the client, the technology, and the limitations of the problem;
  2. Synthesize the information collected in the first step;
  3. Visualise new customers using the product;
  4. Prototype, evaluate, and improve the concept;
  5. Implement design changes, which involve more technologically advanced procedures and therefore require more time.
  • BAH Model. One of the earliest models, which companies still use in the NPD process, is the Booz Allen Hamilton (BAH) Model, published in 1982. It is the best-known model because it underlies the NPD systems put forward later and represents the foundation of the models developed afterwards; significant work has been conducted to propose better models, but these can easily be mapped back to the BAH model. The seven steps of the BAH model are: new product strategy, idea generation, screening and evaluation, business analysis, development, testing, and commercialization.
  • Stage-gate model. A pioneer of NPD research in the consumer goods sector is Robert G. Cooper. Over the last two decades he has conducted significant work in the area of NPD. The Stage-Gate model, developed in the 1980s, was proposed as a new tool for managing new product development processes, mainly applied in the consumer goods industry. The 2010 APQC benchmarking study reveals that 88% of U.S. businesses employ a stage-gate system to manage new products from idea to launch. In return, companies that adopt this system are reported to receive benefits such as improved teamwork, improved success rates, earlier detection of failure, a better launch, and even shorter cycle times, reduced by about 30%. These findings highlight the importance of the stage-gate model in new product development.
  • Lean Start-up approach. Over the last few years, the Lean Startup movement has grown in popularity, challenging many of the assumptions inherent in the stage-gate model.
  • Exploratory product development model. Exploratory product development, which often goes by the acronym ExPD, is an emerging approach to new product development. Consultants Mary Drotar and Kathy Morrissey first introduced ExPD at the 2015 Product Development and Management Association annual meeting and later outlined their approach in the association's magazine Visions. In 2015, their firm Strategy2Market received the trademark on the term "Exploratory PD." Rather than moving through a set of discrete phases, as in the phase-gate process, exploratory product development allows organizations to adapt to a landscape of shifting market circumstances and uncertainty by using a more flexible and adaptable product development process for both hardware and software. Where the traditional phase-gate approach works best in a stable market environment, ExPD is more suitable for product development in markets that are unstable and less predictable. Unstable and unpredictable markets cause uncertainty and risk in product development. Many factors contribute to the outcome of a project, and ExPD works on the assumption that the factors the product team doesn't know enough about, or is unaware of, are the ones that create uncertainty and risk. The primary goal of ExPD is to reduce uncertainty and risk by reducing the unknown. When organizations adapt quickly to the changing environment (market, technology, regulations, globalization, etc.), they reduce uncertainty and risk, which leads to product success. ExPD is described as a two-pronged, integrated systems approach. Drotar and Morrissey state that product development is complex and needs to be managed as a system, integrating essential elements: strategy, portfolio management, organization/teams/culture, metrics, market/customer understanding, and process.

Marketing considerations

There have been a number of approaches proposed for analyzing and responding to the marketing challenges of new product development. Two of these are the front-end process of Peter Koen of the Stevens Institute of Technology and a process known as the fuzzy front end.

Fuzzy Front End

The Fuzzy Front End (FFE) is the messy "getting started" period of new product engineering development processes. It is also referred to as the "Front End of Innovation", or "Idea Management".

It is in the front end where the organization formulates a concept of the product to be developed and decides whether or not to invest resources in the further development of an idea. It is the phase between first consideration of an opportunity and when it is judged ready to enter the structured development process (Kim and Wilemon, 2007; Koen et al., 2001). It includes all activities from the search for new opportunities through the formation of a germ of an idea to the development of a precise concept. The Fuzzy Front End phase ends when an organization approves and begins formal development of the concept.

Although the Fuzzy Front End may not be an expensive part of product development, it can consume 50% of development time (see Chapter 3 of the Smith and Reinertsen reference below), and it is where major commitments are typically made involving time, money, and the product's nature, thus setting the course for the entire project and final end product. Consequently, this phase should be considered as an essential part of development rather than something that happens "before development," and its cycle time should be included in the total development cycle time.

Koen et al. (2001) distinguish five different front-end elements (not necessarily in a particular order):
  1. Opportunity Identification
  2. Opportunity Analysis
  3. Idea Genesis
  4. Idea Selection
  5. Idea and Technology Development
  • The first element is opportunity identification, in which large or incremental business and technological opportunities are identified in a more or less structured way. Using the guidelines established here, resources will eventually be allocated to new projects, which then lead to a structured NPPD (New Product & Process Development) strategy.
  • The second element is opportunity analysis, which translates the identified opportunities into implications for the business- and technology-specific context of the company. Here extensive efforts may be made to align ideas to target customer groups and to conduct market studies and/or technical trials and research.
  • The third element is idea genesis, described as an evolutionary and iterative process progressing from the birth of an idea to its maturation into a tangible form. Idea genesis can occur internally or come from outside inputs, e.g. a supplier offering a new material or technology, or a customer with an unusual request.
  • The fourth element is the idea selection. Its purpose is to choose whether to pursue an idea by analyzing its potential business value.
  • The fifth element is the idea and technology development. During this part of the front-end, the business case is developed based on estimates of the total available market, customer needs, investment requirements, competition analysis and project uncertainty. Some organizations consider this to be the first stage of the NPPD process (i.e., Stage 0).
A universally accepted definition of the Fuzzy Front End, or a dominant framework, has not yet been developed. In a PDMA glossary, it is mentioned that the Fuzzy Front End generally consists of three tasks: strategic planning, idea generation, and pre-technical evaluation. These activities are often chaotic, unpredictable, and unstructured. In comparison, the subsequent new product development process is typically structured, predictable, and formal. The term Fuzzy Front End was first popularized by Smith and Reinertsen (1991). R.G. Cooper (1988) describes the early stages of NPPD as a four-step process in which ideas are generated (I), subjected to a preliminary technical and market assessment (II), and merged into coherent product concepts (III), which are finally judged for their fit with existing product strategies and portfolios (IV).

Other conceptualisations

Other authors have divided predevelopment product development activities differently.

The Phase Zero of the Stage-Gate Model of New Product Development

In the Stage-Gate model of NPD, predevelopment activities are summarised in Phases 0 and 1, with respect to the earlier definition of predevelopment activities:
  1. Preliminary
  2. Technical assessment
  3. Source-of-supply assessment: suppliers and partners or alliances
  4. Market research: market size and segmentation analysis, VoC (voice of the customer) research
  5. Product idea testing
  6. Customer value assessment
  7. Product definition
  8. Business and financial analysis
These activities yield essential information to make a Go/No-Go-to-Development decision. These decisions represent the gates in the Stage-Gate model.
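As a toy illustration of such a gate (the criteria and their pass/fail values below are invented for the example, not drawn from the Stage-Gate literature), a gate can be modelled as an all-criteria-must-pass check:

```python
# Hypothetical gate review: every criterion gathered during the
# predevelopment activities must pass for a Go-to-Development decision.
criteria = {
    "market size validated": True,
    "technical feasibility demonstrated": True,
    "supply sources identified": True,
    "financial analysis meets hurdle rate": False,
}

decision = "Go" if all(criteria.values()) else "No-Go"
open_items = [name for name, ok in criteria.items() if not ok]
print(decision, "- open items:", open_items)
```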

Early Phase of the Innovation Process

A conceptual model of Front-End Process was proposed which includes early phases of the innovation process. This model is structured in three phases and three gates:
  • Phase 1: Environmental screening or opportunity identification stage in which external changes will be analysed and translated into potential business opportunities.
  • Phase 2: Preliminary definition of an idea or concept.
  • Phase 3: Detailed product, project or service definition, and Business planning.
The gates are:
  • Opportunity screening
  • Idea evaluation
  • Go/No-Go for development
The final gate leads to a dedicated new product development project. Many professionals and academics consider that the general features of the Fuzzy Front End (fuzziness, ambiguity, and uncertainty) make it difficult to see the FFE as a structured process, rather than as a set of interdependent activities (e.g. Kim and Wilemon, 2002). However, Husig et al. (2005) [10] argue that the front end need not be fuzzy, but can be handled in a structured manner. In fact, Carbone showed that when front-end success factors are used in an integrated process, product success increases. Peter Koen argues that the FFE for incremental, platform, and radical projects typically involves three separate strategies and processes. The traditional Stage-Gate process was designed for incremental product development, namely for a single product. The FFE for developing a new platform must start with a strategic vision of where the company wants to develop products, which will lead to a family of products. Projects for breakthrough products start with a similar strategic vision but are associated with technologies that require new discoveries.

Activity view on Fuzzy-Front End

Predevelopment is the initial stage in NPD and consists of numerous activities, such as:
  • product strategy formulation and communication
  • opportunity identification and assessment
  • idea generation
  • product definition
  • project planning
  • executive reviews
Economic analysis, benchmarking of competitive products, and modeling and prototyping are also important activities during the front end.

The outcomes of the FFE are:
  • mission statement
  • customer needs
  • details of the selected idea
  • product definition and specifications
  • economic analysis of the product
  • the development schedule
  • project staffing and the budget
  • a business plan aligned with corporate strategy
Incremental, platform, and breakthrough products are characterized as follows:
  • Incremental products are considered to be cost reductions, improvements to existing product lines, additions to existing platforms and repositioning of existing products introduced in markets.
  • Breakthrough products are new to the company or new to the world and offer a 5–10 times or greater improvement in performance combined with a 30–50% or greater reduction in costs.
  • Platform products establish a basic architecture for a next generation product or process and are substantially larger in scope and resources than incremental projects.

Technology life cycle

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Technology_life_cycle
 
The typical life-cycle of a manufacturing process or production system, from its initial conception to its culmination as either a technique or procedure of common practice or its demise. The Y-axis of the diagram shows the business gain to the proprietor of the technology, while the X-axis traces its lifetime.
 
The technology life-cycle (TLC) describes the commercial gain of a product over time, from the expense of the research and development phase to the financial return during its "vital life". Some technologies, such as steel, paper, or cement manufacturing, have a long lifespan (with minor variations in technology incorporated over time), while in other cases, such as electronic or pharmaceutical products, the lifespan may be quite short.

The TLC associated with a product or technological service is different from product life-cycle (PLC) dealt with in product life-cycle management. The latter is concerned with the life of a product in the marketplace with respect to timing of introduction, marketing measures, and business costs. The technology underlying the product (for example, that of a uniquely flavoured tea) may be quite marginal but the process of creating and managing its life as a branded product will be very different.
The technology life cycle is concerned with the time and cost of developing the technology, the timeline of recovering cost, and modes of making the technology yield a profit proportionate to the costs and risks involved. The TLC may, further, be protected during its cycle with patents and trademarks seeking to lengthen the cycle and to maximize the profit from it.

The product of the technology may be a commodity such as polyethylene plastic or a sophisticated product like the integrated circuits used in a smartphone.

The development of a competitive product or process can have a major effect on the lifespan of the technology, making it shorter. Equally, the loss of intellectual property rights through litigation or loss of its secret elements (if any) through leakages also work to reduce a technology's lifespan. Thus, it is apparent that the management of the TLC is an important aspect of technology development.

Most new technologies follow a similar technology maturity lifecycle describing the technological maturity of a product. This is not similar to a product life cycle, but applies to an entire technology, or a generation of a technology.

Technology adoption is the most common phenomenon driving the evolution of industries along the industry lifecycle. Industries first expand through new uses of resources and then exhaust the efficiency of those processes: the resulting gains come easily and grow larger at first, then become progressively harder to achieve as the technology matures.

The four phases of the technology life-cycle

The TLC may be seen as composed of four phases:
  1. The research and development (R&D) phase (sometimes called the "bleeding edge") when incomes from inputs are negative and where the prospects of failure are high
  2. The ascent phase when out-of-pocket costs have been recovered and the technology begins to gather strength by going beyond some Point A on the TLC (sometimes called the "leading edge")
  3. The maturity phase when gain is high and stable, the region, going into saturation, marked by M, and
  4. The decline (or decay phase), after a Point D, of reducing fortunes and utility of the technology.

S-curve

The shape of the technology life cycle is often referred to as an S-curve.
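A minimal sketch of such an S-curve using the logistic function; the parameter values are arbitrary, and the decline phase is deliberately not modelled:

```python
import math

def logistic(t, gain_max=100.0, k=0.5, t_mid=10.0):
    """Cumulative gain over time: slow start (R&D), steep ascent,
    then saturation (maturity). Decline is not captured here."""
    return gain_max / (1.0 + math.exp(-k * (t - t_mid)))

for t in range(0, 25, 4):
    print(t, round(logistic(t), 1))   # rises from ~0.7 to ~99.9
```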

Technology perception dynamics

There is usually technology hype at the introduction of any new technology, but only after some time has passed can it be judged as mere hype or justified true acclaim. Because of the logistic curve nature of technology adoption, it is difficult to see in the early stages whether the hype is excessive.

The two errors commonly committed in the early stages of a technology's development (both reproduced in the sketch after this list) are:
  • fitting an exponential curve to the first part of the growth curve, and assuming eternal exponential growth
  • fitting a linear curve to the first part of the growth curve, and assuming that take-up of the new technology is disappointing
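Both errors can be reproduced numerically by sampling only the early part of a logistic curve; a sketch with illustrative values:

```python
import math

def logistic(t, cap=100.0, k=0.5, t_mid=10.0):
    return cap / (1.0 + math.exp(-k * (t - t_mid)))

early = [logistic(t) for t in range(5)]   # only the first few points

# Error 1: the early segment looks exponential; successive ratios are
# nearly constant, inviting an "eternal exponential growth" forecast.
ratios = [early[i + 1] / early[i] for i in range(4)]
print([round(r, 2) for r in ratios])      # ~[1.64, 1.64, 1.63, 1.62]

# Error 2: the same points also fit a shallow straight line tolerably,
# inviting the conclusion that take-up is disappointing.
increments = [early[i + 1] - early[i] for i in range(4)]
print([round(d, 2) for d in increments])  # small, slowly growing steps

# Either extrapolation misses the true ceiling:
print(round(logistic(20), 1))             # ~99.3, near the cap of 100
```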
Rogers' bell curve

Similarly, in the later stages, the opposite mistakes can be made relating to the possibilities of technology maturity and market saturation.

The technology adoption life cycle typically occurs in an S curve, as modelled in diffusion of innovations theory. This is because customers respond to new products in different ways. Diffusion of innovations theory, pioneered by Everett Rogers, posits that people have different levels of readiness for adopting new innovations and that the characteristics of a product affect overall adoption. Rogers classified individuals into five groups: innovators, early adopters, early majority, late majority, and laggards. In terms of the S curve, innovators occupy 2.5%, early adopters 13.5%, early majority 34%, late majority 34%, and laggards 16%.
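These shares follow from cutting a normal distribution of adoption times at one and two standard deviations from the mean; a quick check (the computed values round approximately to Rogers' figures):

```python
from statistics import NormalDist

z = NormalDist()   # adoption time in standard deviations from the mean
shares = {
    "innovators":     z.cdf(-2),              # earlier than 2 sd  (~2.5%)
    "early adopters": z.cdf(-1) - z.cdf(-2),  # 1 to 2 sd early    (~13.5%)
    "early majority": z.cdf(0) - z.cdf(-1),   # within 1 sd early  (~34%)
    "late majority":  z.cdf(1) - z.cdf(0),    # within 1 sd late   (~34%)
    "laggards":       1 - z.cdf(1),           # later than 1 sd    (~16%)
}
for name, share in shares.items():
    print(f"{name}: {share:.1%}")
```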

The four stages of technology life cycle are as follows:
  • Innovation stage: This stage represents the birth of a new product, material, or process resulting from R&D activities. In R&D laboratories, new ideas are generated in response to perceived needs and knowledge factors. Depending on resource allocation and the element of chance, the time taken in the innovation stage, as well as in the subsequent stages, varies widely.
  • Syndication stage: This stage represents the demonstration and commercialisation of a new technology, such as a product, material, or process, with potential for immediate utilisation. Many innovations are put on hold in R&D laboratories, and only a very small percentage are commercialised. Commercialisation of research outcomes depends on technical as well as non-technical, mostly economic, factors.
  • Diffusion stage: This represents the market penetration of a new technology through acceptance of the innovation by potential users. Supply- and demand-side factors jointly influence the rate of diffusion.
  • Substitution stage: This last stage represents the decline in the use, and eventual extinction, of a technology due to replacement by another technology. Many technical and non-technical factors influence the rate of substitution, and the time taken in this stage depends on the market dynamics.

Licensing options

Large corporations develop technology for their own benefit and not with the objective of licensing. The tendency to license out technology only appears when there is a threat to the life of the TLC (business gain) as discussed later.

Licensing in the R&D phase

There are always smaller firms (SMEs) that are inadequately positioned to finance the development of innovative R&D in the post-research and early technology phases. By sharing incipient technology under certain conditions, substantial risk financing can be obtained from third parties. This is a form of quasi-licensing which takes different formats. Even large corporations may not wish to bear all the costs of development in areas of significant and high risk (e.g. aircraft development) and may seek means of spreading it until proof of concept is obtained.

In the case of small and medium firms, entities such as venture capitalists or business angels, can enter the scene and help to materialize technologies. Venture capitalists accept both the costs and uncertainties of R&D, and that of market acceptance, in reward for high returns when the technology proves itself. Apart from finance, they may provide networking, management and marketing support. Venture capital connotes financial as well as human capital.

Larger firms may opt for Joint R&D or work in a consortium for the early phase of development. Such vehicles are called strategic alliances – strategic partnerships.

With both venture capital funding and strategic (research) alliances, when business gains begin to neutralize development costs (the TLC crosses the X-axis), the ownership of the technology starts to undergo change.

In the case of smaller firms, venture capitalists help clients enter the stock market for obtaining substantially larger funds for development, maturation of technology, product promotion and to meet marketing costs. A major route is through initial public offering (IPO) which invites risk funding by the public for potential high gain. At the same time, the IPOs enable venture capitalists to attempt to recover expenditures already incurred by them through part sale of the stock pre-allotted to them (subsequent to the listing of the stock on the stock exchange). When the IPO is fully subscribed, the assisted enterprise becomes a corporation and can more easily obtain bank loans, etc. if needed.

Strategic alliance partners, allied on research, pursue separate paths of development with the incipient technology of common origin but pool their accomplishments through instruments such as 'cross-licensing'. Generally, contractual provisions among the members of the consortium allow a member to exercise the option of independent pursuit after joint consultation; in which case the optee owns all subsequent development. 

Licensing in the ascent phase

The ascent stage of the technology usually refers to the region above Point A in the TLC diagram, but it actually commences where the R&D portion of the TLC curve inflects (the cash flow simply remains negative and unremunerative until Point A). The ascent is the strongest phase of the TLC because it is here that the technology is superior to alternatives and can command premium profit or gain. The slope and duration of the ascent depend on competing technologies entering the domain, though these may not be as successful during this period. Strongly patented technology extends the duration of the ascent.

The TLC begins to flatten out (the region shown as M) when equivalent or challenging technologies enter the competitive space and begin to eat away market share.

Until this stage is reached, the technology-owning firm tends to enjoy its profitability exclusively, preferring not to license it. If an overseas opportunity does present itself, the firm would rather set up a controlled subsidiary than license a third party.

Licensing in the maturity phase

The maturity phase of the technology is a period of stable and remunerative income, and its competitive viability can persist over the larger timeframe marked by its "vital life". However, there may be a tendency to license out the technology to third parties during this stage, to lower the risk of decline in profitability (or competitiveness) and to expand financial opportunity.

The exercise of this option is generally inferior to seeking participatory exploitation; in other words, engagement in a joint venture, typically in regions where the technology would be in the ascent phase, as in, say, a developing country. In addition to providing financial opportunity, it allows the technology owner a degree of control over its use. Gain flows from two streams: investment-based income and royalty income. Further, such a strategy enhances the vital life of the technology.

Licensing in the decline phase

After reaching a point such as D in the above diagram, the earnings from the technology begin to decline rather rapidly. To prolong the life cycle, owners of technology might try to license it out at some point L, when it can still be attractive to firms in other markets. This then traces the lengthening path LL'. Further, since the decline is the result of rising competing technologies in this space, licensees may be attracted to the generally lower cost of the older technology (compared with what prevailed during its vital life).

Licenses obtained in this phase are 'straight licenses'. They are free of direct control from the owner of the technology (as would otherwise apply, say, in the case of a joint-venture). Further, there may be fewer restrictions placed on the licensee in the employment of the technology. 

The utility, viability, and thus the cost of straight-licenses depends on the estimated 'balance life' of the technology. For instance, should the key patent on the technology have expired, or would expire in a short while, the residual viability of the technology may be limited, although balance life may be governed by other criteria such as knowhow which could have a longer life if properly protected.

It is important to note that the licensee has no way of knowing the stage that the prime, and competing, technologies have reached on their TLCs. It would, of course, be evident to competing licensor firms, and to the originator, from the growth, saturation, or decline of the profitability of their operations.

The licensee may, however, be able to approximate the stage by vigorously negotiating with the licensor and competitors to determine costs and licensing terms. A lower cost, or easier terms, may imply a declining technology.

In any case, access to technology in the decline phase is a large risk that the licensee accepts. (In a joint venture this risk is substantially reduced because the licensor shares it.) Sometimes, financial guarantees from the licensor may work to reduce such risk and can be negotiated.

There are instances when, even though the technology has declined into a commonplace technique, it may still contain important knowledge or experience that the licensee firm cannot acquire without help from the originator. This is often the form that technical service and technical assistance contracts take (encountered often in developing-country contracts). Alternatively, consulting agencies may fill this role.

Technology development cycle

According to the Encyclopedia of Earth, "In the simplest formulation, innovation can be thought of as being composed of research, development, demonstration, and deployment."

Technology development cycle describes the process of a new technology through the stages of technological maturity:
  1. Research and development
  2. Scientific demonstration
  3. System deployment
  4. Diffusion

Technology readiness level

From Wikipedia, the free encyclopedia
 
NASA Technology Readiness Levels
 
Technology readiness levels (TRLs) are a method for estimating the maturity of technologies during the acquisition phase of a program, developed at NASA during the 1970s. The use of TRLs enables consistent, uniform discussions of technical maturity across different types of technology. A technology's TRL is determined during a Technology Readiness Assessment (TRA) that examines program concepts, technology requirements, and demonstrated technology capabilities. TRLs are based on a scale from 1 to 9 with 9 being the most mature technology.  The US Department of Defense has used the scale for procurement since the early 2000s. By 2008 the scale was also in use at the European Space Agency (ESA), as evidenced by their handbook.
 
The European Commission advised EU-funded research and innovation projects to adopt the scale in 2010. TRLs were consequently used in 2014 in the EU Horizon 2020 program. In 2013, the TRL scale was further canonized by the ISO 16290:2013 standard. A comprehensive approach and discussion of TRLs has been published by the European Association of Research and Technology Organisations (EARTO). Extensive criticism of the adoption of TRL scale by the European Union was published in The Innovation Journal, stating that the "concreteness and sophistication of the TRL scale gradually diminished as its usage spread outside its original context (space programs)".

Current NASA usage

The current nine-point NASA scale is:
Level 1 – Basic principles observed and reported
Level 2 – Technology concept and/or application formulated
Level 3 – Analytical and experimental critical function and/or characteristic proof-of concept
Level 4 – Component and/or breadboard validation in laboratory environment
Level 5 – Component and/or breadboard validation in relevant environment
Level 6 – System/subsystem model or prototype demonstration in a relevant environment (ground or space)
Level 7 – System prototype demonstration in a space environment
Level 8 – Actual system completed and “flight qualified” through test and demonstration (ground or space)
Level 9 – Actual system “flight proven” through successful mission operations

History

Technology Readiness Levels were originally conceived at NASA in 1974 and formally defined in 1989. The original definition included seven levels, but in the 1990s NASA adopted the current nine-level scale that subsequently gained widespread acceptance.

Original NASA TRL Definitions (1989)
Level 1 – Basic Principles Observed and Reported
Level 2 – Potential Application Validated
Level 3 – Proof-of-Concept Demonstrated, Analytically and/or Experimentally
Level 4 – Component and/or Breadboard Laboratory Validated
Level 5 – Component and/or Breadboard Validated in Simulated or Real Space Environment
Level 6 – System Adequacy Validated in Simulated Environment
Level 7 – System Adequacy Validated in Space
The TRL methodology was originated by Stan Sadin at NASA Headquarters in 1974. At that time, Ray Chase was the JPL Propulsion Division representative on the Jupiter Orbiter design team. At the suggestion of Stan Sadin, Mr Chase used this methodology to assess the technology readiness of the proposed JPL Jupiter Orbiter spacecraft design. Later Mr Chase spent a year at NASA Headquarters helping Mr Sadin institutionalize the TRL methodology. Mr Chase joined ANSER in 1978, where he used the TRL methodology to evaluate the technology readiness of proposed Air Force development programs. He published several articles during the 1980s and 90s on reusable launch vehicles utilizing the TRL methodology. These documented an expanded version of the methodology that included design tools, test facilities, and manufacturing readiness on the Air Force Have Not program. The Have Not program manager, Greg Jenkins, and Ray Chase published the expanded version of the TRL methodology, which included design and manufacturing. Leon McKinney and Mr Chase used the expanded version to assess the technology readiness of the ANSER team's Highly Reusable Space Transportation ("HRST") concept. ANSER also created an adapted version of the TRL methodology for proposed Homeland Security Agency programs.

The United States Air Force adopted the use of Technology Readiness Levels in the 1990s.

In 1995, John C. Mankins, NASA, wrote a paper that discussed NASA's use of TRL, extended the scale, and proposed expanded descriptions for each TRL. In 1999, the United States General Accounting Office produced an influential report that examined the differences in technology transition between the DOD and private industry. It concluded that the DOD takes greater risks and attempts to transition emerging technologies at lesser degrees of maturity than does private industry. The GAO concluded that use of immature technology increased overall program risk. The GAO recommended that the DOD make wider use of Technology Readiness Levels as a means of assessing technology maturity prior to transition. In 2001, the Deputy Under Secretary of Defense for Science and Technology issued a memorandum that endorsed use of TRLs in new major programs. Guidance for assessing technology maturity was incorporated into the Defense Acquisition Guidebook. Subsequently, the DOD developed detailed guidance for using TRLs in the 2003 DOD Technology Readiness Assessment Deskbook. 

Because of their relevance to habitation, "Habitation Readiness Levels (HRLs)" were created by a group of NASA engineers (Jan Connolly, Kathy Daues, Robert Howard, and Larry Toups). They address habitability requirements and design aspects in correlation with standards already established and widely used by different agencies, including NASA TRLs.

In the European Union

The European Space Agency adopted the TRL scale in the mid-2000s; its handbook closely follows the NASA definition of TRLs. The universal usage of TRL in EU policy was proposed in the final report of the first High Level Expert Group on Key Enabling Technologies, and it was implemented in the subsequent EU framework programme, Horizon 2020, running from 2014 to 2020.[1] This covers not only space and weapons programs, but everything from nanotechnology to informatics and communication technology.

The TRLs in Europe are as follows:
  • TRL 1 – Basic principles observed
  • TRL 2 – Technology concept formulated
  • TRL 3 – Experimental proof of concept
  • TRL 4 – Technology validated in lab
  • TRL 5 – Technology validated in relevant environment (industrially relevant environment in the case of key enabling technologies)
  • TRL 6 – Technology demonstrated in relevant environment (industrially relevant environment in the case of key enabling technologies)
  • TRL 7 – System prototype demonstration in operational environment
  • TRL 8 – System complete and qualified
  • TRL 9 – Actual system proven in operational environment (competitive manufacturing in the case of key enabling technologies; or in space)

Assessment tools

A Technology Readiness Level Calculator was developed by the United States Air Force. This tool is a standard set of questions implemented in Microsoft Excel that produces a graphical display of the TRLs achieved. This tool is intended to provide a snapshot of technology maturity at a given point in time.
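The calculator itself is an Excel questionnaire, but its core logic, awarding the highest level whose questions are all answered affirmatively, can be sketched briefly; the questions and answers below are placeholders, not the tool's actual question set:

```python
# Toy TRL assessment: the TRL achieved is the highest level for which
# every question is answered affirmatively. Placeholder questions only.
questions = {
    1: ["Basic principles observed and reported?"],
    2: ["Technology concept and/or application formulated?"],
    3: ["Analytical/experimental proof of concept demonstrated?"],
    4: ["Component and/or breadboard validated in the laboratory?"],
}
answers = {1: [True], 2: [True], 3: [True], 4: [False]}

trl = 0
for level in sorted(questions):
    if all(answers.get(level, [False])):
        trl = level
    else:
        break
print("TRL achieved:", trl)   # -> 3
```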
 
The Technology Program Management Model (TPMM) was developed by the United States Army. The TPMM is a TRL-gated, high-fidelity activity model that provides a flexible management tool to assist technology managers in planning, managing, and assessing their technologies for successful technology transition. The model provides a core set of activities, including systems engineering and program management tasks, that are tailored to the technology development and management goals. This approach is comprehensive, yet it consolidates the complex activities that are relevant to the development and transition of a specific technology program into one integrated model.

Uses

The primary purpose of using technology readiness levels is to help management in making decisions concerning the development and transitioning of technology. It should be viewed as one of several tools that are needed to manage the progress of research and development activity within an organization.

Among the advantages of TRLs:
  • Provides a common understanding of technology status
  • Risk management
  • Used to make decisions concerning technology funding
  • Used to make decisions concerning transition of technology
Some of the characteristics of TRLs that limit their utility:
  • Readiness does not necessarily fit with appropriateness or technology maturity
  • A mature product may possess a greater or lesser degree of readiness for use in a particular system context than one of lower maturity
  • Numerous factors must be considered, including the relevance of the products' operational environment to the system at hand, as well as the product-system architectural mismatch
Current TRL models tend to disregard negative and obsolescence factors. There have been suggestions made for incorporating such factors into assessments.

For complex technologies that incorporate various development stages, a more detailed scheme called the Technology Readiness Pathway Matrix has been developed, going from basic units to applications in society. This tool aims to show that the readiness level of a technology is based not on a linear process but on a more complex pathway through its application in society.

Biological engineering (updated)

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Biological_engineering

 
Biological engineering, or bioengineering/bio-engineering, is the application of principles of biology and the tools of engineering to create usable, tangible, economically viable products. Biological engineering employs knowledge and expertise from a number of pure and applied sciences, such as mass and heat transfer, kinetics, biocatalysts, biomechanics, bioinformatics, separation and purification processes, bioreactor design, surface science, fluid mechanics, thermodynamics, and polymer science. It is used in the design of medical devices, diagnostic equipment, biocompatible materials, renewable bioenergy, ecological engineering, agricultural engineering, and other areas that improve the living standards of societies. Examples of bioengineering research include bacteria engineered to produce chemicals, new medical imaging technology, portable and rapid disease diagnostic devices, prosthetics, biopharmaceuticals, and tissue-engineered organs. Bioengineering overlaps substantially with biotechnology and the biomedical sciences in a way analogous to how various other forms of engineering and technology relate to various other sciences (for example, aerospace engineering and other space technology to kinetics and astrophysics). 

In general, biological engineers (or biomedical engineers) attempt to either mimic biological systems to create products or modify and control biological systems so that they can replace, augment, sustain, or predict chemical and mechanical processes. Bioengineers can apply their expertise to other applications of engineering and biotechnology, including genetic modification of plants and microorganisms, bioprocess engineering, and biocatalysis. Working with doctors, clinicians and researchers, bioengineers use traditional engineering principles and techniques and apply them to real-world biological and medical problems.

History

Biological engineering is a science-based discipline founded upon the biological sciences in the same way that chemical engineering, electrical engineering, and mechanical engineering can be based upon chemistry, electricity and magnetism, and classical mechanics, respectively.

Before WWII, biological engineering had only just begun to be recognized as a branch of engineering and was a very new concept. Post-WWII, the field grew more rapidly, partly because the term "bioengineering" was coined by British scientist and broadcaster Heinz Wolff in 1954 at the National Institute for Medical Research; Wolff graduated that same year and became the director of the Division of Biological Engineering there, the first time bioengineering was recognized as its own branch. Electrical engineering is considered to have pioneered this sector through its work with medical devices and machinery during this time. When engineers and life scientists started working together, they recognized that the engineers didn't know enough about the actual biology behind their work; to resolve this, engineers who wanted to enter biological engineering devoted more of their time and studies to fields such as biology, psychology, and medicine. The term biological engineering may also be applied to environmental modifications such as surface soil protection, slope stabilization, watercourse and shoreline protection, windbreaks, vegetation barriers including noise barriers and visual screens, and the ecological enhancement of an area. Because other engineering disciplines also address living organisms, the term biological engineering can be applied more broadly to include agricultural engineering.

The first biological engineering program in the United States was created at the University of California, San Diego, in 1966. More recent programs have been launched at MIT and Utah State University. Many older agricultural engineering departments at universities around the world have re-branded themselves as agricultural and biological engineering or agricultural and biosystems engineering, reflecting how rapidly the field is developing and how fluid its categorization remains. According to Professor Doug Lauffenburger of MIT, biological engineering has a broad base that applies engineering principles to an enormous range of sizes and complexities of systems. These systems range from the molecular level (molecular biology, biochemistry, microbiology, pharmacology, protein chemistry, cytology, immunology, neurobiology, and neuroscience) to cellular and tissue-based systems (including devices and sensors), to whole macroscopic organisms (plants, animals), and even up to entire ecosystems.

Education

The average length of study is three to five years, and the completed degree is a bachelor of engineering (B.S. in engineering). Fundamental courses include thermodynamics, biomechanics, biology, genetic engineering, fluid and mechanical dynamics, kinetics, electronics, and materials properties.

Sub-disciplines

Modeling of the spread of disease using Cellular Automata and Nearest Neighbor Interactions
 
Depending on the institution and particular definitional boundaries employed, some major branches of bioengineering may be categorized as (note these may overlap):

Organizations

  • The American Institute for Medical and Biological Engineering (AIMBE) is made up of 1,500 members. Its main goal is to educate the public about the value of biological engineering, as well as to invest in research and other programs to advance the field. It gives awards to those dedicated to innovation, and awards for achievement in the field. (AIMBE does not contribute directly to biological engineering research; rather, it recognizes those who do and encourages continued progress.)
  • The Institute of Biological Engineering (IBE) is a non-profit organization that runs on donations alone. It aims to encourage the public to learn about, and to continue advancements in, biological engineering. (Like AIMBE, the IBE does not conduct research directly; it does, however, offer scholarships to students who show promise in the field.)

Rydberg atom

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Rydberg_atom
Figure 1: Electron orbi...