
Friday, July 15, 2022

Nuclear power in space

From Wikipedia, the free encyclopedia
 
The KIWI A prime nuclear thermal rocket engine
 
Mars Curiosity rover powered by an RTG on Mars. The white RTG with fins is visible on the far side of the rover.

Nuclear power in space is the use of nuclear power in outer space, typically either small fission systems or radioactive decay for electricity or heat. Another use is for scientific observation, as in a Mössbauer spectrometer. The most common type is a radioisotope thermoelectric generator, which has been used on many space probes and on crewed lunar missions. Small fission reactors for Earth observation satellites, such as the TOPAZ nuclear reactor, have also been flown. A radioisotope heater unit is powered by radioactive decay and can keep components from becoming too cold to function, potentially over a span of decades.

The United States tested the SNAP-10A nuclear reactor in space for 43 days in 1965, with the next test of a nuclear reactor power system intended for space use occurring on 13 September 2012 with the Demonstration Using Flattop Fission (DUFF) test of the Kilopower reactor.

After a ground-based test of the experimental 1965 Romashka reactor, which used uranium and direct thermoelectric conversion to electricity, the USSR sent about 40 nuclear-electric satellites into space, mostly powered by the BES-5 reactor. The more powerful TOPAZ-II reactor produced 10 kilowatts of electricity.

Examples of concepts that use nuclear power for space propulsion systems include the nuclear electric rocket (nuclear powered ion thruster(s)), the radioisotope rocket, and radioisotope electric propulsion (REP). One of the more explored concepts is the nuclear thermal rocket, which was ground tested in the NERVA program. Nuclear pulse propulsion was the subject of Project Orion.

Regulation and hazard prevention

After the Outer Space Treaty banned nuclear weapons in space in 1967, states have discussed nuclear power as a sensitive issue at least since 1972. Its potential hazards to Earth's environment, and thus to humans, prompted states to adopt in the U.N. General Assembly the Principles Relevant to the Use of Nuclear Power Sources in Outer Space (1992), which introduce safety principles for launches and for the management of their traffic.

Benefits

Both the Viking 1 and Viking 2 landers used RTGs for power on the surface of Mars. (Viking launch vehicle pictured)

While solar power is much more commonly used, nuclear power can offer advantages in some areas. Solar cells, although efficient, can only supply energy to spacecraft in orbits where the solar flux is sufficiently high, such as low Earth orbit and interplanetary destinations close enough to the Sun. Unlike solar cells, nuclear power systems function independently of sunlight, which is necessary for deep space exploration. Nuclear-based systems can have less mass than solar cells of equivalent power, allowing more compact spacecraft that are easier to orient and direct in space. In the case of crewed spaceflight, nuclear power concepts that can power both life support and propulsion systems may reduce both cost and flight time.


Types

| Name and model | Used on (no. of RTGs per user) | Max electrical output (W) | Max heat output (W) | Radioisotope | Max fuel used (kg) | Mass (kg) | Power/mass (electrical W/kg) |
|---|---|---|---|---|---|---|---|
| MMRTG | MSL/Curiosity rover and Perseverance/Mars 2020 rover | c. 110 | c. 2000 | 238Pu | c. 4 | <45 | 2.4 |
| GPHS-RTG | Cassini (3), New Horizons (1), Galileo (2), Ulysses (1) | 300 | 4400 | 238Pu | 7.8 | 55.9–57.8 | 5.2–5.4 |
| MHW-RTG | LES-8/9, Voyager 1 (3), Voyager 2 (3) | 160 | 2400 | 238Pu | c. 4.5 | 37.7 | 4.2 |
| SNAP-3B | Transit-4A (1) | 2.7 | 52.5 | 238Pu | ? | 2.1 | 1.3 |
| SNAP-9A | Transit 5BN1/2 (1) | 25 | 525 | 238Pu | c. 1 | 12.3 | 2.0 |
| SNAP-19 | Nimbus-3 (2), Pioneer 10 (4), Pioneer 11 (4) | 40.3 | 525 | 238Pu | c. 1 | 13.6 | 2.9 |
| modified SNAP-19 | Viking 1 (2), Viking 2 (2) | 42.7 | 525 | 238Pu | c. 1 | 15.2 | 2.8 |
| SNAP-27 | Apollo 12–17 ALSEP (1) | 73 | 1,480 | 238Pu | 3.8 | 20 | 3.65 |
| Buk (BES-5)** (fission reactor) | US-As (1) | 3000 | 100,000 | highly enriched 235U | 30 | 1000 | 3.0 |
| SNAP-10A*** (fission reactor) | SNAP-10A (1) | 600 | 30,000 | highly enriched 235U | | 431 | 1.4 |
| ASRG**** | prototype design (not launched), Discovery Program | c. 140 (2×70) | c. 500 | 238Pu | 1 | 34 | 4.1 |

Radioisotope systems

SNAP-27 on the Moon

For more than fifty years, radioisotope thermoelectric generators (RTGs) have been the United States' main nuclear power source in space. RTGs offer many benefits: they are relatively safe and maintenance-free, are resilient under harsh conditions, and can operate for decades. RTGs are particularly desirable in parts of space where solar power is not viable. Dozens of RTGs have been deployed to power 25 different US spacecraft, some of which have been operating for more than 20 years. Over 40 radioisotope thermoelectric generators have been used globally (principally by the US and USSR) on space missions.

The advanced Stirling radioisotope generator (ASRG, a model of Stirling radioisotope generator (SRG)) produces roughly four times the electric power of an RTG per unit of nuclear fuel, but flight-ready units based on Stirling technology are not expected until 2028. NASA plans to utilize two ASRGs to explore Titan in the distant future.

Cutaway diagram of the advanced Stirling radioisotope generator.


Radioisotope heater units (RHUs) are also used on spacecraft to warm scientific instruments to the proper temperature so they operate efficiently. A larger model of RHU called the General Purpose Heat Source (GPHS) is used to power RTGs and the ASRG.

Extremely slow-decaying radioisotopes have been proposed for use on interstellar probes with multi-decade lifetimes.

As of 2011, another direction for development was an RTG assisted by subcritical nuclear reactions.

Fission systems

Fission power systems may be used to power a spacecraft's heating or propulsion systems. When spacecraft require more than about 100 kW of power, fission systems are much more cost-effective than RTGs.

In 1965, the US launched a space reactor, the SNAP-10A, which had been developed by Atomics International, then a division of North American Aviation.

Over the past few decades, several fission reactors have been proposed, and the Soviet Union launched 31 BES-5 low power fission reactors in their RORSAT satellites utilizing thermoelectric converters between 1967 and 1988.

In the 1960s and 1970s, the Soviet Union developed TOPAZ reactors, which utilize thermionic converters instead, although the first test flight was not until 1987.

In 1983, NASA and other US government agencies began development of a next-generation space reactor, the SP-100, contracting with General Electric and others. In 1994, the SP-100 program was cancelled, largely for political reasons, with the idea of transitioning to the Russian TOPAZ-II reactor system. Although some TOPAZ-II prototypes were ground-tested, the system was never deployed for US space missions.

In 2008, NASA announced plans to use a small fission power system on the surface of the Moon and Mars, and began testing the "key" technologies needed for it to come to fruition.

Proposed fission power system spacecraft and exploration systems have included SP-100, JIMO nuclear electric propulsion, and Fission Surface Power.

SAFE-30 small experimental reactor

A number of micro nuclear reactor types have been developed, or are in development, for space applications.

Nuclear thermal rocket (NTR) propulsion systems are based on the heating power of a fission reactor, offering a more efficient propulsion system than one powered by chemical reactions. Current research focuses more on nuclear electric systems as the power source for providing thrust to propel spacecraft already in space.

Other space fission reactors for powering space vehicles include the SAFE-400 reactor and the HOMER-15. As of 2020, Roscosmos (the Russian Federal Space Agency) planned to launch a spacecraft using nuclear-powered propulsion systems (developed at the Keldysh Research Center), including a small gas-cooled fission reactor with an output of 1 MWe.

In September 2020, NASA and the Department of Energy (DOE) issued a formal request for proposals for a lunar nuclear power system. Several awards would fund preliminary designs to be completed by the end of 2021; in a second phase, by early 2022, one company would be selected to develop a 10-kilowatt fission power system to be placed on the Moon in 2027.

Artist's conception of the Jupiter Icy Moons Orbiter mission for Prometheus, with the reactor on the right providing power to ion engines and electronics.

Project Prometheus

In 2002, NASA announced an initiative towards developing nuclear systems, which later came to be known as Project Prometheus. A major part of the project was to develop the Stirling Radioisotope Generator and the Multi-Mission Thermoelectric Generator, both types of RTGs. The project also aimed to produce a safe and long-lasting space fission reactor system for spacecraft power and propulsion, replacing the long-used RTGs. Budget constraints effectively halted the project, but Project Prometheus did succeed in testing new systems. Its scientists successfully tested a High Power Electric Propulsion (HiPEP) ion engine, which offered substantial advantages in fuel efficiency, thruster lifetime, and thruster efficiency over other power sources.


Weak and strong sustainability

From Wikipedia, the free encyclopedia

Although related, sustainable development and sustainability are two different concepts. Weak sustainability is an idea within environmental economics which states that "human capital" can substitute for "natural capital". It is based upon the work of Nobel laureate Robert Solow and of John Hartwick. In contrast to weak sustainability, strong sustainability assumes that "human capital" and "natural capital" are complementary, but not interchangeable.

This idea received more political attention as sustainable development discussions evolved in the late 1980s and early 1990s. A key landmark was the Rio Summit in 1992 where the vast majority of nation-states committed themselves to sustainable development. This commitment was demonstrated by the signing of Agenda 21, a global action plan on sustainable development.

Weak sustainability has been defined using concepts like human capital and natural capital. Human (or produced) capital incorporates resources such as infrastructure, labour and knowledge. Natural capital covers the stock of environmental assets such as fossil fuels, biodiversity and other ecosystem structures and functions relevant for ecosystem services. In very weak sustainability, the overall stock of man-made capital and natural capital remains constant over time. Importantly, unconditional substitution between the various kinds of capital is allowed within weak sustainability. This means that natural resources may decline as long as human capital is increased: degradation of the ozone layer, tropical forests or coral reefs is permissible if accompanied by benefits to human capital, such as increased financial profits. If total capital is held constant over time, intergenerational equity, and thus sustainable development, is achieved.

An example of weak sustainability could be mining coal and using it for production of electricity. The natural resource, coal, is replaced by a manufactured good, electricity. The electricity is in turn used to improve domestic quality of life (e.g. cooking, lighting, heating, refrigeration and operating boreholes to supply water in some villages) and for industrial purposes (growing the economy by producing other resources using electrically operated machines).

Case studies of weak sustainability in practice have had both positive and negative results. The concept of weak sustainability still attracts a lot of criticism; some even suggest that the concept of sustainability is redundant. Other approaches are advocated, including "social bequests", which shift attention away from neoclassical theory altogether.

Strong sustainability assumes that economic and environmental capital are complementary, but not interchangeable. Strong sustainability accepts that there are certain functions the environment performs that cannot be duplicated by humans or human-made capital. The ozone layer is one example of an ecosystem service that is crucial for human existence, forms part of natural capital, and is difficult for humans to duplicate.

Unlike weak sustainability, strong sustainability puts the emphasis on ecological scale over economic gains. This implies that nature has a right to exist and that it has been borrowed and should be passed on from one generation to the next still intact in its original form.

An example of strong sustainability could be the manufacturing of office carpet tiles from used car tyres. In this scenario, office carpets and other products are manufactured from used motorcar tyres that would otherwise have been sent to a landfill.

Origins and theory

Capital approach to sustainability and intergenerational equity

To understand the concept of weak sustainability, it is first necessary to explore the capital approach to sustainability. This is key to the idea of intergenerational equity. This implies that a fair distribution of resources and assets between generations exists. Decision makers, both in theory and practice, need a concept that enables assessment in order to decide if intergenerational equity is achieved. The capital approach lends itself to this task. In this context we must distinguish between the different types of capital. Human capital (e.g. skills, knowledge) and natural capital (e.g. minerals, water) tend to be the most frequently cited examples. Within the concept it is believed that the amount of capital a generation has at its disposal is decisive for its development. A development is then called sustainable when it leaves the capital stock at least unchanged.

Sustainable development

The weak sustainability paradigm stems from the 1970s. It began as an extension of the neoclassical theory of economic growth, accounting for non-renewable natural resources as a factor of production. However, it only really came into the mainstream in the 1990s within the context of sustainable development discourse. At its inception, sustainability was interpreted as a requirement to preserve, intact, the environment as we find it today in all its forms. The Brundtland Report, for example, stated that ‘The loss of plant and animal species can greatly limit the options of future generations. The result is that sustainable development requires the conservation of plant and animal species’.

Development of theory

Wilfred Beckerman posits that the absolutist concept of sustainable development given above is morally repugnant. The largest part of the world's population lives in acute poverty. Taking that, as well as acute environmental degradation, into account, he asks how one could justify using up vast resources in an attempt to preserve from extinction species that provide no real benefit to society other than a possible value in the knowledge of their continued existence. He argues that such a task would involve using resources that could instead have been devoted to more pressing world concerns, such as increasing access to clean drinking water or sanitation in the Third World.

Many environmentalists therefore shifted their attention to the idea of "weak" sustainability. This allows some natural resources to decline as long as sufficient compensation is provided by increases in other resources, usually human capital, so that overall human welfare is sustained. This is illustrated in a well-regarded definition by David Pearce, the author of numerous works on sustainability: sustainability implies maintaining the level of human welfare (or well-being) so that it might improve but never declines (or declines no more than temporarily); sustainable development is then development that does not decline over time.

Inter-generational equity assumes that each following generation has at least as much capital at its disposal as the preceding generation. The idea of leaving the capital stock at least unchanged is widely accepted. The question is whether one form of capital may be substituted for another; this is the focus of the debate between "weak" and "strong" sustainability, and of how intergenerational equity is to be achieved.

Since the nineties there has been an ardent debate on the substitutability between natural and human-made capital: "weak sustainability" supporters mainly believe that these are substitutable, while "strong sustainability" followers generally contest the possibility of interchangeability.

Weak sustainability in practice

A prime example of weak sustainability is the Government Pension Fund of Norway. Statoil ASA, a state-owned Norwegian oil company, invested its surplus profits from petroleum into a pension portfolio worth, to date, over $1 trillion. The oil, a form of natural capital, was exported in vast quantities by Norway, and the resulting fund provides long-lasting income for the population in exchange for a finite resource, actually increasing the total capital available to Norway above its original level. This example shows how weak sustainability and substitution can be cleverly applied on a national scale, although its applications on a global scale are recognised as far more restricted. In this application, Hartwick's rule would hold that the pension fund constitutes sufficient capital to offset the depletion of the oil resources.

A less positive case is that of the small Pacific nation of Nauru. A substantial phosphate deposit was found on the island in 1900, and after over 100 years of mining approximately 80% of the island has been rendered uninhabitable. Concurrent with this extraction, Nauru's inhabitants enjoyed a high per capita income over the last few decades of the twentieth century. Money from the mining of phosphate enabled the establishment of a trust fund, which was estimated at as much as $1 billion. However, chiefly as a result of the Asian financial crisis, the trust fund was almost entirely wiped out. This "development" of Nauru followed the logic of weak sustainability and led to almost complete environmental destruction. The case presents a telling argument against weak sustainability, suggesting that a substitution of natural for man-made capital may not be reversible in the long term.

Role of governance and policy recommendations

The implementation of weak sustainability in governance can be viewed theoretically and practically through Hartwick's rule. In resource economics, Hartwick's rule defines the amount of investment in human capital that is needed to offset declining stocks of non-renewable resources. Solow showed that, given a degree of substitutability between human capital and natural capital, one way to design a sustainable consumption program for an economy is to accumulate man-made capital. When this accumulation is sufficiently rapid, the effect of the shrinking exhaustible resource stock is countered by the services from the increased human capital stock. Hartwick's rule is often summarised as "invest resource rents", where "rent" is payment to a factor of production in excess of that needed to keep it in its present use; it requires that a nation invest all rent earned from the exhaustible resources it currently extracts.

Later, Pearce and Atkinson, and subsequently Hamilton, added to Hartwick's rule by setting out a theoretical and empirical measure of net investment in produced and natural capital (and later also human capital) that became known as genuine savings. Genuine savings measures net changes in produced, natural and human capital stocks, valued in monetary terms.

The aim of governance, therefore, should be to keep genuine savings at or above zero. In this sense it is similar to green accounting, which attempts to factor environmental costs into the financial results of operations. A key example is the World Bank, which now regularly publishes a comparative and comprehensive set of genuine savings estimates, called "adjusted savings", for over 150 countries.
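As a rough illustration of the bookkeeping behind genuine savings and the "keep it at or above zero" rule, here is a minimal sketch in Python. The formula is a simplification of the idea described above (net investment across produced, natural and human capital, in monetary terms), and all figures are hypothetical; this is not the World Bank's actual methodology.

```python
# Illustrative sketch of genuine savings, per the description above.
# Simplified formula and hypothetical figures, in billions of dollars.

def genuine_savings(gross_saving, produced_capital_depreciation,
                    education_spending, natural_capital_depletion):
    """Net change in the total capital stock, valued in monetary terms."""
    return (gross_saving
            - produced_capital_depreciation   # wear on man-made capital
            + education_spending              # proxy for human-capital investment
            - natural_capital_depletion)      # e.g. value of extracted resources

gs = genuine_savings(gross_saving=120,
                     produced_capital_depreciation=60,
                     education_spending=25,
                     natural_capital_depletion=70)

# The governance aim stated above: keep genuine savings at or above zero.
print(f"Genuine savings: {gs} (weakly sustainable: {gs >= 0})")  # 15, True
```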

Criticisms of the strong vs. weak sustainability model

Martinez-Alier addresses concerns over the implications of measuring weak sustainability, following results of work conducted by Pearce & Atkinson in the early 1990s. By their measure, most of the Northern, industrialised countries are deemed sustainable, as is the world economy as a whole. This point of view is arguably flawed, since the world would not be sustainable if all countries had the resource intensity and pollution rates of many industrialised countries. Industrialisation does not necessarily equate to sustainability.

According to Pearce and Atkinson's calculations, the Japanese economy is one of the most sustainable in the world, because its savings rate is so high that savings, a trend that persists today, exceed the depreciation of both natural and man-made capital. Critics thus suggest that it is this gross neglect of factors other than savings in measuring sustainability that makes weak sustainability an inappropriate concept.

The integrative sustainability model has the economy completely located within society and society completely located within the environment. In other words, the economy is a subset of society and society is completely dependent upon the environment. This interdependence means that any sustainability-related issue must be considered holistically.

A diagram indicating the relationship between the three pillars of sustainability, suggesting that both economy and society are constrained by environmental limits

Other inadequacies of the paradigm include the difficulties in measuring savings rates and the inherent problems in quantifying the many different attributes and functions of the biophysical world in monetary terms. By including all human and biophysical resources under the same heading of ‘capital’, the depleting of fossil fuels, reduction of biodiversity and so forth, are potentially compatible with sustainability. As Gowdy & O'Hara so aptly put it, "As long as the criterion of weak sustainability is met, with savings outstripping capital depletion, there is no conflict between the destruction of species and ecosystems or the depletion of fossil fuels, and the goal of sustainability."

Opposing weak sustainability, strong sustainability supporters contend that we need "a more small-scale decentralized way of life based upon greater self-reliance, so as to create a social and economic system less destructive towards nature." Strong sustainability does not make allowances for substituting human and human-made capital for the Earth's land, water, and biodiversity. The products created by mankind cannot replace the natural capital found in ecosystems.

Another critical weakness of the concept is related to environmental resilience. According to Van Den Bergh, resilience can be considered as a global, structural stability concept, based on the idea that multiple, locally stable ecosystems can exist. Sustainability can thus be directly related to resilience. With this in mind, weak sustainability can cause extreme sensitivity to either natural disturbances (such as diseases in agriculture with little crop diversity) or economic disturbances (as outlined in the case study of Nauru above). This high level of sensitivity within regional systems in the face of external factors brings to attention an important inadequacy of weak sustainability.

Rejection of both weak and strong models

Some critics have gone one step further, dismissing the entire concept of sustainability. Beckerman's influential work concludes that weak sustainability is "redundant and illogical". He holds that sustainability only makes sense in its "strong" form, but that this "requires subscribing to a morally repugnant and totally impracticable objective." He goes so far as to say that he regrets so much time has been wasted on the entire concept of sustainable development. In response, it could be argued that even weak sustainability measures are better than no measures or action at all.

Others have suggested a better approach to sustainability would be that of "social bequests". This change would "free us from a 'zero-sum' game in which our gain is an automatic loss for future generations". The social bequest approach looks at the problem in a different light by changing to what, rather than how much, we leave to future generations. When the problem is phrased as ‘how much’ this always implies that some amount of a resource should be used and some left. Daniel Bromley uses the example of rainforests to illustrate his argument. If we decide to use 25% of a rainforest and leave the rest, but then the next time we make a decision we start all over again and use 25% of what's left, and so on, eventually there will be no rainforest left. By focusing on bequests of specific rights and opportunities for future generations, we can remove ourselves from the "straightjacket of substitution and marginal tradeoffs of neoclassical theory".

Computer programming

From Wikipedia, the free encyclopedia

Computer programming is the process of performing a particular computation (or more generally, accomplishing a specific computing result), usually by designing and building an executable computer program. Programming involves tasks such as analysis, generating algorithms, profiling algorithms' accuracy and resource consumption, and the implementation of algorithms (usually in a chosen programming language, commonly referred to as coding). The source code of a program is written in one or more languages that are intelligible to programmers, rather than machine code, which is directly executed by the central processing unit. The purpose of programming is to find a sequence of instructions that will automate the performance of a task (which can be as complex as an operating system) on a computer, often for solving a given problem. Proficient programming thus usually requires expertise in several different subjects, including knowledge of the application domain, specialized algorithms, and formal logic.

Tasks accompanying and related to programming include testing, debugging, source code maintenance, implementation of build systems, and management of derived artifacts, such as the machine code of computer programs. These might be considered part of the programming process, but often the term software development is used for this larger process with the term programming, implementation, or coding reserved for the actual writing of code. Software engineering combines engineering techniques with software development practices. Reverse engineering is a related process used by designers, analysts, and programmers to understand and re-create/re-implement.

History

Ada Lovelace, whose notes added to the end of Luigi Menabrea's paper included the first algorithm designed for processing by an Analytical Engine. She is often recognized as history's first computer programmer.
 

Programmable devices have existed for centuries. As early as the 9th century, a programmable music sequencer was invented by the Persian Banu Musa brothers, who described an automated mechanical flute player in the Book of Ingenious Devices. In 1206, the Arab engineer Al-Jazari invented a programmable drum machine where a musical mechanical automaton could be made to play different rhythms and drum patterns, via pegs and cams. In 1801, the Jacquard loom could produce entirely different weaves by changing the "program" – a series of pasteboard cards with holes punched in them.

Code-breaking algorithms have also existed for centuries. In the 9th century, the Arab mathematician Al-Kindi described a cryptographic algorithm for deciphering encrypted code, in A Manuscript on Deciphering Cryptographic Messages. He gave the first description of cryptanalysis by frequency analysis, the earliest code-breaking algorithm.

The first computer program is generally dated to 1843, when mathematician Ada Lovelace published an algorithm to calculate a sequence of Bernoulli numbers, intended to be carried out by Charles Babbage's Analytical Engine.

Data and instructions were once stored on external punched cards, which were kept in order and arranged in program decks.

In the 1880s Herman Hollerith invented the concept of storing data in machine-readable form. Later a control panel (plug board) added to his 1906 Type I Tabulator allowed it to be programmed for different jobs, and by the late 1940s, unit record equipment such as the IBM 602 and IBM 604 were programmed by control panels in a similar way, as were the first electronic computers. However, with the concept of the stored-program computer introduced in 1949, both programs and data were stored and manipulated in the same way in computer memory.

Machine language

Machine code was the language of early programs, written in the instruction set of the particular machine, often in binary notation. Assembly languages were soon developed that let the programmer specify instructions in a text format (e.g., ADD X, TOTAL), with abbreviations for each operation code and meaningful names for specifying addresses. However, because an assembly language is little more than a different notation for a machine language, two machines with different instruction sets also have different assembly languages.

Wired control panel for an IBM 402 Accounting Machine. Wires connect pulse streams from the card reader to counters and other internal logic and ultimately to the printer.

Compiler languages

High-level languages made the process of developing a program simpler and more understandable, and less bound to the underlying hardware. The first compiler-related tool, the A-0 System, was developed in 1952 by Grace Hopper, who also coined the term "compiler". FORTRAN, the first widely used high-level language to have a functional implementation, came out in 1957, and many other languages soon followed, notably COBOL, aimed at commercial data processing, and Lisp, for computer research.

These compiled languages allow the programmer to write programs in terms that are syntactically richer and better at abstracting the code, making it easier to target varying machine instruction sets via compilation declarations and heuristics. Compilers harnessed the power of computers to make programming easier by allowing programmers to specify calculations by entering a formula using infix notation.

Source code entry

Programs were mostly entered using punched cards or paper tape. By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers. Text editors were also developed that allowed changes and corrections to be made much more easily than with punched cards.

Modern programming

Quality requirements

Whatever the approach to development may be, the final program must satisfy some fundamental properties. The following properties are among the most important:

  • Reliability: how often the results of a program are correct. This depends on conceptual correctness of algorithms and minimization of programming mistakes, such as mistakes in resource management (e.g., buffer overflows and race conditions) and logic errors (such as division by zero or off-by-one errors; a short sketch after this list shows the latter).
  • Robustness: how well a program anticipates problems due to errors (not bugs). This includes situations such as incorrect, inappropriate or corrupt data, unavailability of needed resources such as memory, operating system services, and network connections, user error, and unexpected power outages.
  • Usability: the ergonomics of a program: the ease with which a person can use the program for its intended purpose, or in some cases even unanticipated purposes. Such issues can make or break a program's success regardless of its other qualities. This involves a wide range of textual, graphical, and sometimes hardware elements that improve the clarity, intuitiveness, cohesiveness and completeness of a program's user interface.
  • Portability: the range of computer hardware and operating system platforms on which the source code of a program can be compiled/interpreted and run. This depends on differences in the programming facilities provided by the different platforms, including hardware and operating system resources, expected behavior of the hardware and operating system, and availability of platform-specific compilers (and sometimes libraries) for the language of the source code.
  • Maintainability: the ease with which a program can be modified by its present or future developers in order to make improvements or to customize, fix bugs and security holes, or adapt it to new environments. Good practices during initial development make the difference in this regard. This quality may not be directly apparent to the end user but it can significantly affect the fate of a program over the long term.
  • Efficiency/performance: Measure of system resources a program consumes (processor time, memory space, slow devices such as disks, network bandwidth and to some extent even user interaction): the less, the better. This also includes careful management of resources, for example cleaning up temporary files and eliminating memory leaks. This is often discussed under the shadow of a chosen programming language. Although the language certainly affects performance, even slower languages, such as Python, can execute programs instantly from a human perspective. Speed, resource usage, and performance are important for programs that bottleneck the system, but efficient use of programmer time is also important and is related to cost: more hardware may be cheaper.
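To make the off-by-one logic error mentioned under reliability concrete, here is a minimal sketch in Python; both functions are hypothetical illustrations rather than code from any particular program.

```python
# A classic off-by-one error of the kind listed under "reliability" above.

def sum_first_n_buggy(values, n):
    # Bug: range(1, n) visits indices 1..n-1, skipping values[0]
    # and summing only n-1 items.
    total = 0
    for i in range(1, n):
        total += values[i]
    return total

def sum_first_n(values, n):
    # Correct: range(n) visits indices 0..n-1, exactly n items.
    total = 0
    for i in range(n):
        total += values[i]
    return total

data = [10, 20, 30, 40]
print(sum_first_n_buggy(data, 3))  # 50 -- silently wrong
print(sum_first_n(data, 3))        # 60 -- intended result
```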

Readability of source code

In computer programming, readability refers to the ease with which a human reader can comprehend the purpose, control flow, and operation of source code. It affects the aspects of quality above, including portability, usability and most importantly maintainability.

Readability is important because programmers spend the majority of their time reading, trying to understand, reusing and modifying existing source code, rather than writing new source code. Unreadable code often leads to bugs, inefficiencies, and duplicated code. A study found that a few simple readability transformations made code shorter and drastically reduced the time to understand it.

Following a consistent programming style often helps readability. However, readability is more than just programming style: many factors, having little or nothing to do with the ability of the computer to efficiently compile and execute the code, contribute to readability. The presentation aspects of these factors (such as indents, line breaks, color highlighting, and so on) are often handled by the source code editor, but the content aspects reflect the programmer's talent and skills.

Various visual programming languages have also been developed with the intent to resolve readability concerns by adopting non-traditional approaches to code structure and display. Integrated development environments (IDEs) aim to integrate all such help. Techniques like code refactoring can also enhance readability.

Algorithmic complexity

The academic field and the engineering practice of computer programming are both largely concerned with discovering and implementing the most efficient algorithms for a given class of problems. For this purpose, algorithms are classified into orders using so-called Big O notation, which expresses resource use, such as execution time or memory consumption, in terms of the size of an input. Expert programmers are familiar with a variety of well-established algorithms and their respective complexities and use this knowledge to choose algorithms that are best suited to the circumstances.
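As an illustration of how Big O guides algorithm choice, the following Python sketch contrasts a linear scan, which runs in O(n), with a binary search, which runs in O(log n) but requires sorted input. The data and comparison counts are hypothetical; the standard library's bisect module is used only as a convenient building block.

```python
# Choosing between algorithms by complexity class, as described above.
from bisect import bisect_left

def linear_search(items, target):          # O(n): may scan every element
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

def binary_search(sorted_items, target):   # O(log n): halves the range each step
    i = bisect_left(sorted_items, target)
    if i < len(sorted_items) and sorted_items[i] == target:
        return i
    return -1

data = list(range(0, 1_000_000, 2))   # 500,000 sorted even numbers
print(linear_search(data, 777_776))   # ~389,000 comparisons
print(binary_search(data, 777_776))   # ~20 comparisons, same answer
```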

Chess algorithms as an example

"Programming a Computer for Playing Chess" was a 1950 paper that evaluated a "minimax" algorithm that is part of the history of algorithmic complexity; a course on IBM's Deep Blue (chess computer) is part of the computer science curriculum at Stanford University.

Methodologies

The first step in most formal software development processes is requirements analysis, followed by testing to determine value modeling, implementation, and failure elimination (debugging). Many different approaches exist for each of these tasks. One approach popular for requirements analysis is use case analysis. Many programmers use forms of Agile software development, where the various stages of formal software development are integrated into short cycles that take weeks rather than years. There are many approaches to the software development process.

Popular modeling techniques include Object-Oriented Analysis and Design (OOAD) and Model-Driven Architecture (MDA). The Unified Modeling Language (UML) is a notation used for both the OOAD and MDA.

A similar technique used for database design is Entity-Relationship Modeling (ER Modeling).

Implementation techniques include imperative languages (object-oriented or procedural), functional languages, and logic languages.

Measuring language usage

It is very difficult to determine which modern programming languages are most popular. Methods of measuring popularity include counting the number of job advertisements that mention the language, the number of books sold and courses teaching the language (which overestimates the importance of newer languages), and estimates of the number of existing lines of code written in the language (which underestimates the number of users of business languages such as COBOL).

Some languages are very popular for particular kinds of applications, while some languages are regularly used to write many different kinds of applications. For example, COBOL is still strong in corporate data centers, often on large mainframe computers; Fortran in engineering applications; scripting languages in Web development; and C in embedded software. Many applications use a mix of several languages in their construction and use. New languages are generally designed around the syntax of a prior language, with new functionality added (for example, C++ adds object-orientation to C, and Java adds memory management and bytecode to C++, but as a result loses efficiency and the ability for low-level manipulation).

Debugging

The first known actual bug causing a problem in a computer was a moth, trapped inside a Harvard mainframe, recorded in a log book entry dated September 9, 1947. "Bug" was already a common term for a software defect when this insect was found.

Debugging is a very important task in the software development process since having defects in a program can have significant consequences for its users. Some languages are more prone to some kinds of faults because their specification does not require compilers to perform as much checking as other languages. Use of a static code analysis tool can help detect some possible problems. Normally the first step in debugging is to attempt to reproduce the problem. This can be a non-trivial task, for example as with parallel processes or some unusual software bugs. Also, specific user environment and usage history can make it difficult to reproduce the problem.

After the bug is reproduced, the input of the program may need to be simplified to make it easier to debug. For example, when a bug in a compiler makes it crash while parsing some large source file, a simplified test case containing only a few lines of the original file may be sufficient to reproduce the same crash. Trial-and-error and divide-and-conquer are needed: the programmer removes some parts of the original test case and checks whether the problem still occurs. When debugging a problem in a GUI, the programmer can try skipping some user interaction from the original problem description and check whether the remaining actions are sufficient for the bug to appear. Scripting and breakpointing are also part of this process.
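The divide-and-conquer simplification just described can be automated. The sketch below is a simplified, hypothetical version of that loop, where `fails` stands in for whatever check actually reproduces the bug on a given input.

```python
# Shrink a failing input by repeatedly keeping whichever half still fails.

def simplify(lines, fails):
    changed = True
    while changed and len(lines) > 1:
        changed = False
        half = len(lines) // 2
        for part in (lines[:half], lines[half:]):
            if fails(part):        # the smaller input still reproduces the bug
                lines = part
                changed = True
                break
    return lines

# Toy example: the "crash" is triggered by any input containing line 13.
fails = lambda lines: 13 in lines
print(simplify(list(range(100)), fails))  # a much shorter failing case: [13]
```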

Debugging is often done with IDEs. Standalone debuggers like GDB are also used, and these often provide less of a visual environment, usually using a command line. Some text editors such as Emacs allow GDB to be invoked through them, to provide a visual environment.

Programming languages

Different programming languages support different styles of programming (called programming paradigms). The choice of language used is subject to many considerations, such as company policy, suitability to task, availability of third-party packages, or individual preference. Ideally, the programming language best suited for the task at hand will be selected. Trade-offs from this ideal involve finding enough programmers who know the language to build a team, the availability of compilers for that language, and the efficiency with which programs written in a given language execute. Languages form an approximate spectrum from "low-level" to "high-level"; "low-level" languages are typically more machine-oriented and faster to execute, whereas "high-level" languages are more abstract and easier to use but execute less quickly. It is usually easier to code in "high-level" languages than in "low-level" ones. Programming languages are essential for software development. They are the building blocks for all software, from the simplest applications to the most sophisticated ones.

Allen Downey, in his book How To Think Like A Computer Scientist, writes:

The details look different in different languages, but a few basic instructions appear in just about every language:
  • Input: Gather data from the keyboard, a file, or some other device.
  • Output: Display data on the screen or send data to a file or other device.
  • Arithmetic: Perform basic arithmetical operations like addition and multiplication.
  • Conditional Execution: Check for certain conditions and execute the appropriate sequence of statements.
  • Repetition: Perform some action repeatedly, usually with some variation.
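As a minimal sketch, the following hypothetical Python program exercises all five of these basic instructions in a few lines.

```python
# Input, output, arithmetic, conditional execution, and repetition together.

total = 0
line = input("Enter numbers, blank line to stop: ")  # input
while line.strip():                                  # repetition
    value = float(line)
    if value >= 0:                                   # conditional execution
        total += value                               # arithmetic
    else:
        print("Ignoring negative value")             # output
    line = input("Next number: ")
print("Sum of non-negative inputs:", total)          # output
```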

Many computer languages provide a mechanism to call functions provided by shared libraries. Provided the functions in a library follow the appropriate run-time conventions (e.g., method of passing arguments), then these functions may be written in any other language.
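For example, Python's ctypes module can call a function written in C from the C standard library, provided the run-time conventions (argument and return types) are declared. This sketch assumes a Unix-like system where libc can be located by name.

```python
# Cross-language call through a shared library, as described above.
import ctypes, ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare the C run-time convention: size_t strlen(const char *s)
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"shared library"))  # 14, computed by C code, not Python
```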

Programmers

Computer programmers are those who write computer software. Their jobs usually involve not just writing code but also related tasks described above, such as analysis, testing, debugging, documentation, and maintenance.

Although programming has been presented in the media as a somewhat mathematical subject, some research shows that good programmers have strong skills in natural human languages, and that learning to code is similar to learning a foreign language.

Adherence (medicine)

From Wikipedia, the free encyclopedia

In medicine, patient compliance (also adherence, capacitance) describes the degree to which a patient correctly follows medical advice. Most commonly, it refers to medication or drug compliance, but it can also apply to other situations such as medical device use, self care, self-directed exercises, or therapy sessions. Both patient and health-care provider affect compliance, and a positive physician-patient relationship is the most important factor in improving compliance. Access to care also plays a role, with greater wait times to access care contributing to greater absenteeism. The cost of prescription medication plays a major role as well.

Compliance can be confused with concordance, which is the process by which a patient and clinician make decisions together about treatment.

Worldwide, non-compliance is a major obstacle to the effective delivery of health care. 2003 estimates from the World Health Organization indicated that only about 50% of patients with chronic diseases living in developed countries follow treatment recommendations with particularly low rates of adherence to therapies for asthma, diabetes, and hypertension. Major barriers to compliance are thought to include the complexity of modern medication regimens, poor "health literacy" and not understanding treatment benefits, occurrence of undiscussed side effects, poor treatment satisfaction, cost of prescription medicine, and poor communication or lack of trust between a patient and his or her health-care provider. Efforts to improve compliance have been aimed at simplifying medication packaging, providing effective medication reminders, improving patient education, and limiting the number of medications prescribed simultaneously. Studies show a great variation in terms of characteristics and effects of interventions to improve medicine adherence. It is still unclear how adherence can consistently be improved in order to promote clinically important effects.

Terminology


As of 2003, US health care professionals more commonly used the term "adherence" to a regimen rather than "compliance", because it has been thought to reflect better the diverse reasons for patients not following treatment directions in part or in full. Additionally, the term adherence includes the ability of the patient to take medications as prescribed by their physician with regards to the correct drug, dose, route, timing, and frequency. It has been noted that compliance may only refer to passively following orders. The term adherence is often used to imply a collaborative approach to decision-making and treatment between a patient and clinician.

The term concordance has been used in the United Kingdom to involve a patient in the treatment process to improve compliance, and refers to a 2003 NHS initiative. In this context, the patient is informed about their condition and treatment options, involved in the decision as to which course of action to take, and partially responsible for monitoring and reporting back to the team. Informed intentional non-adherence is when the patient, after understanding the risks and benefits, chooses not to take the treatment.

As of 2005, the preferred terminology remained a matter of debate. As of 2007, concordance has been used to refer specifically to patient adherence to a treatment regimen which the physician sets up collaboratively with the patient, to differentiate it from adherence to a physician-only prescribed treatment regimen. Despite the ongoing debate, adherence has been the preferred term for the World Health Organization, The American Pharmacists Association, and the U.S. National Institutes of Health Adherence Research Network. The Medical Subject Headings of the United States National Library of Medicine defines various terms with the words adherence and compliance. Patient Compliance and Medication Adherence are distinguished under the MeSH tree of Treatment Adherence and Compliance.

Adherence factors

An estimated half of those for whom treatment regimens are prescribed do not follow them as directed.

Side effects

Negative side effects of a medicine can influence adherence.

Health literacy

Cost of treatment and poor understanding of its directions, the latter referred to as poor "health literacy", have long been known to be major barriers to treatment adherence. There is robust evidence that education and physical health are correlated, and poor educational attainment is a key factor in the cycle of health inequalities.

Educational qualifications help to determine an individual's position in the labour market, their level of income and therefore their access to resources.

Literacy

In 1999 one fifth of UK adults, nearly seven million people, had problems with basic skills, especially functional literacy and functional numeracy, described as: "The ability to read, write and speak in English, and to use mathematics at a level necessary to function at work and in society in general." This made it impossible for them to effectively take medication, read labels, follow drug regimens, and find out more.

In 2003, 20% of adults in the UK had a long-standing illness or disability, and a national study for the UK Department of Health found that more than one-third of people with poor or very poor health had literacy skills of Entry Level 3 or below.

Low levels of literacy and numeracy were found to be associated with socio-economic deprivation. Adults in more deprived areas, such as the North East of England, performed at a lower level than those in less deprived areas such as the South East. Local authority tenants and those in poor health were particularly likely to lack basic skills.

A 2002 analysis of over 100 UK local education authority areas found educational attainment at 15–16 years of age to be strongly associated with coronary heart disease and subsequent infant mortality.

A study of the relationship of literacy to asthma knowledge revealed that 31% of asthma patients with the reading level of a ten-year-old knew they needed to see the doctor even when they were not having an asthma attack, compared to 90% of those with a high-school-graduate reading level.

Treatment cost

In 2013 the US National Community Pharmacists Association surveyed, over one month, 1,020 Americans above age 40 with an ongoing prescription for a chronic condition, and gave them a grade of C+ on adherence. As of 2009, non-adherence contributed to an estimated cost of $290 billion annually. In 2012, increases in patients' share of medication costs were found to be associated with lower adherence to medication.

The United States is among the countries with the highest prices for prescription drugs, mainly attributed to the government's failure to negotiate lower prices with monopolies in the pharmaceutical industry, especially for brand-name drugs. In order to manage medication costs, many US patients on long-term therapies fail to fill their prescriptions or skip or reduce doses. According to a Kaiser Family Foundation survey in 2015, about three quarters (73%) of the public think drug prices are unreasonable and blame pharmaceutical companies for setting prices so high. In the same report, half of the public reported taking prescription drugs, and a "quarter (25%) of those currently taking prescription medicine report they or a family member have not filled a prescription in the past 12 months due to cost, and 18 percent report cutting pills in half or skipping doses". By comparison, in 2009 only 8% of Canadian adults reported skipping doses or not filling their prescriptions because of cost.

Age

Both young and elderly status have been associated with non-adherence.

The elderly often have multiple health conditions, and around half of all NHS medicines are prescribed for people over retirement age, even though such people represent only about 20% of the UK population. The recent National Service Framework on the care of older people highlighted the importance of taking and effectively managing medicines in this population. However, elderly individuals may face challenges, including multiple medications with frequent dosing and potentially decreased dexterity or cognitive functioning. Patient knowledge is a further documented concern.

In 1999 Cline et al. identified several gaps in knowledge about medication in elderly patients discharged from hospital. Despite receiving written and verbal information, 27% of older people discharged after heart failure were classed as non-adherent within 30 days. Half the patients surveyed could not recall the dose of their medication, and nearly two-thirds did not know what time of day to take it. A 2001 study by Barat et al. evaluated medical knowledge and factors of adherence in a population of 75-year-olds living at home. They found that 40% of elderly patients did not know the purpose of their regimen and only 20% knew the consequences of non-adherence. Comprehension, polypharmacy, living arrangement, multiple doctors, and use of compliance aids were correlated with adherence.

In children with asthma, self-management compliance is critical, and co-morbidities have been noted to affect outcomes; in 2013 it was suggested that electronic monitoring may help adherence.

Social factors of treatment adherence have been studied in children and adolescents with disorders:

  • Young people who felt supported by their family and doctor, and had good motivation, were more likely to comply.
  • Young adults may stop taking prescribed medication in order to fit in with their friends, or because they lack insight into their illness.
  • Those who did not feel their condition to be a threat to their social well-being were eight times more likely to comply than those who perceived it as such a threat.
  • Non-adherence is often encountered among children and young adults; young males are relatively poor at adherence.

Ethnicity

People of different ethnic backgrounds have unique adherence issues through literacy, physiology, culture or poverty. There are few published studies on adherence in medicine taking in ethnic minority communities. Ethnicity and culture influence some health-determining behaviour, such as participation in screening programmes and attendance at follow-up appointments.

Prieto et al. emphasised the influence of ethnic and cultural factors on adherence. They pointed out that groups differ in their attitudes, values and beliefs about health and illness, and that these views could affect adherence, particularly with preventive treatments and medication for asymptomatic conditions. Additionally, some cultures fatalistically attribute their good or poor health to their god(s), and attach less importance to self-care than others.

Measures of adherence may need to be modified for different ethnic or cultural groups. In some cases, it may be advisable to assess patients from a cultural perspective before making decisions about their individual treatment.

Recent studies have shown that black patients and those with non-private insurance are more likely to be labeled as non-adherent. The increased risk is observed even in patients with a controlled A1c, and after controlling for other socioeconomic factors.

Prescription fill rates

Not all patients will fill the prescription at a pharmacy; in a 2010 U.S. study, 20–30% of prescriptions were never filled. Cost is one reason: a US nationwide survey of 1,010 adults in 2001 found that 22% chose not to fill prescriptions because of the price, similar to the 20–30% overall rate of unfilled prescriptions. Other factors include doubting the need for medication or preferring self-care measures other than medication; convenience, side effects and lack of demonstrated benefit also play a part.

Medication Possession Ratio

Prescription medical claims records can be used to estimate medication adherence based on fill rate. Patients are routinely classed as adherent if the amount of medication furnished covers at least 80% of the days on which the patient should be consuming it; this percentage (days' supply of medication divided by the number of days the patient should be consuming the medication) is called the medication possession ratio (MPR). Work from 2013 has suggested that a medication possession ratio of 90% or above may be a better threshold for deeming consumption adherent.

Two forms of MPR can be calculated: fixed and variable. Calculating either is relatively straightforward. The variable MPR (VMPR) is calculated as the number of days' supply divided by the number of elapsed days, including the period covered by the last prescription.

For the fixed MPR (FMPR) the calculation is similar, but the denominator is the number of days in a year, while the numerator is constrained to the number of days' supply prescribed to the patient within that year.
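A minimal sketch of the two calculations just described, in Python, using hypothetical fill data; the year-cap in the fixed form is a simplifying assumption for illustration.

```python
# The two MPR forms defined above, with hypothetical prescription fills.

def variable_mpr(days_supplied, elapsed_days):
    # VMPR = total days' supply / elapsed days, including the last prescription
    return sum(days_supplied) / elapsed_days

def fixed_mpr(days_supplied_within_year):
    # FMPR = days' supply dispensed within the year (capped at the year) / 365
    return min(sum(days_supplied_within_year), 365) / 365

fills = [30, 30, 30, 30]            # four 30-day fills
print(variable_mpr(fills, 150))     # 0.8  -> adherent at the 80% threshold
print(fixed_mpr(fills))             # ~0.33 measured over a full year
```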

For medication in tablet form, it is relatively straightforward to calculate the number of days' supply from a prescription. Some medications are less straightforward, however, because a prescription for a given number of doses may cover a variable number of days: the number of doses to be taken per day can differ between patients, as with preventative corticosteroid inhalers prescribed for asthma, where the number of daily inhalations depends on the severity of the disease.
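
To make the two calculations concrete, here is a minimal sketch in Python, assuming pharmacy fill records are available as (fill date, days' supply) pairs; the data, function names, and year window are illustrative, not a standard implementation:

```python
from datetime import date, timedelta

# Hypothetical fill records for one patient: (fill date, days' supply).
fills = [
    (date(2023, 1, 1), 30),
    (date(2023, 2, 5), 30),
    (date(2023, 3, 20), 30),
]

def variable_mpr(fills):
    """VMPR: total days' supply divided by the days elapsed from the
    first fill through the end of the last prescription's supply."""
    total_supply = sum(days for _, days in fills)
    first_fill = min(d for d, _ in fills)
    last_fill, last_days = max(fills, key=lambda f: f[0])
    elapsed_days = (last_fill - first_fill).days + last_days
    return total_supply / elapsed_days

def fixed_mpr(fills, year_start, year_days=365):
    """FMPR: days' supply dispensed within the year divided by the
    number of days in the year."""
    year_end = year_start + timedelta(days=year_days - 1)
    supply = sum(days for d, days in fills if year_start <= d <= year_end)
    return supply / year_days

vmpr = variable_mpr(fills)                 # 90 / 108, about 0.83
fmpr = fixed_mpr(fills, date(2023, 1, 1))  # 90 / 365, about 0.25
print(f"VMPR = {vmpr:.2f} ({'adherent' if vmpr >= 0.80 else 'non-adherent'} at the 80% threshold)")
print(f"FMPR = {fmpr:.2f}")
```

With only three months of fills, this patient looks adherent under the variable form but not under the fixed form, which is why the choice of denominator matters when MPR is used to label patients.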

Course completion

Once started, patients seldom follow treatment regimens as directed, and seldom complete the course of treatment. In respect of hypertension, 50% of patients completely drop out of care within a year of diagnosis. Persistence with first-line single antihypertensive drugs is extremely low during the first year of treatment. As far as lipid-lowering treatment is concerned, only one third of patients are compliant with at least 90% of their treatment. Intensification of patient care interventions (e.g. electronic reminders, pharmacist-led interventions, healthcare professional education of patients) improves patient adherence rates to lipid-lowering medicines, as well as total cholesterol and LDL-cholesterol levels.

The World Health Organization (WHO) estimated in 2003 that only 50% of people complete long-term therapy for chronic illnesses as prescribed, which puts patient health at risk. For example, in 2002 statin compliance was found to have dropped to between 25% and 40% after two years of treatment, with patients who take statins for what they perceive to be preventative reasons being unusually poor compliers.

A wide variety of packaging approaches have been proposed to help patients complete prescribed treatments. These approaches include formats that increase the ease of remembering the dosage regimen as well as different labels for increasing patient understanding of directions. For example, medications are sometimes packed with reminder systems for the day and/or time of the week to take the medicine. Some evidence shows that reminder packaging may improve clinical outcomes such as blood pressure.

A not-for-profit organisation called the Healthcare Compliance Packaging Council of Europe (HCPC-Europe) was set up by the pharmaceutical industry and the packaging industry together with representatives of European patients' organisations. The mission of HCPC-Europe is to assist and educate the healthcare sector in improving patient compliance through the use of packaging solutions. A variety of packaging solutions have been developed by this collaboration.

World Health Organization Barriers to Adherence

The World Health Organization (WHO) groups barriers to medication adherence into five categories: health care team and system-related factors, social and economic factors, condition-related factors, therapy-related factors, and patient-related factors. Common barriers, by category, include:

  • Health care team and system: poor patient-provider relationship; inadequate access to health services
  • Social and economic: high medication cost; cultural beliefs
  • Condition-related: level of symptom severity; availability of effective treatments
  • Therapy-related: immediacy of beneficial effects; side effects
  • Patient-related: stigma surrounding disease; inadequate knowledge of treatment

Improving adherence rates

Role of health care providers

Health care providers play an important role in improving adherence. Providers can improve patient interactions through motivational interviewing and active listening, and should work with patients to devise a plan that is meaningful for the patient's needs. A relationship offering trust, cooperation, and mutual responsibility can greatly improve the connection between provider and patient, with a positive impact on adherence. The wording health care professionals use when sharing health advice may also affect adherence and health behaviours; however, further research is needed to establish whether positive framing (e.g., the chance of surviving is improved if you go for screening) or negative framing (e.g., the chance of dying is higher if you do not go for screening) is more effective for specific conditions.

Technology

In 2012 it was predicted that as telemedicine technology improves, physicians will have better capabilities to remotely monitor patients in real-time and to communicate recommendations and medication adjustments using personal mobile devices, such as smartphones, rather than waiting until the next office visit.

Medication Event Monitoring Systems, such as smart medicine bottle tops, smart pharmacy vials, or smart blister packages, as used in clinical trials and other applications where exact compliance data are required, work without any patient input: they record the time and date the bottle or vial was accessed, or the medication removed from a blister package. The data can be read via proprietary readers or NFC-enabled devices such as smartphones or tablets. A 2009 study stated that such devices can help improve adherence.
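
As an illustration of how such event data can be used (hypothetical data and function name, not the format of any actual device), the sketch below turns a log of bottle-opening timestamps into a simple adherence estimate for a once-daily regimen:

```python
from datetime import date, datetime

# Hypothetical event log from a smart bottle top: one timestamp per opening.
events = [
    datetime(2023, 5, 1, 8, 2),
    datetime(2023, 5, 2, 8, 15),
    datetime(2023, 5, 4, 9, 30),  # 3 May missed
]

def taking_adherence(events, start, end):
    """Fraction of days in [start, end] with at least one opening,
    assuming a once-daily regimen."""
    days_with_dose = {e.date() for e in events if start <= e.date() <= end}
    total_days = (end - start).days + 1
    return len(days_with_dose) / total_days

print(f"{taking_adherence(events, date(2023, 5, 1), date(2023, 5, 4)):.0%}")  # 75%
```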

The effectiveness of two-way email communication between health care professionals and their patients has not been adequately assessed.

Mobile phones

As of 2019, 5.15 billion people, equating to 67% of the global population, have a mobile device, and this number is growing. Mobile phones have been used in healthcare, fostering the term mHealth, and have also played a role in improving adherence to medication. For example, text messaging has been used to remind patients with chronic conditions such as asthma and hypertension to take their medication. Other examples include the use of smartphones for synchronous and asynchronous Video Observed Therapy (VOT) as a replacement for the resource-intensive standard of Directly Observed Therapy (DOT), recommended by the WHO for tuberculosis management. Other mHealth interventions for improving adherence to medication include smartphone applications, voice recognition in interactive phone calls, and telepharmacy. Some results show that the use of mHealth improves adherence to medication and is cost-effective, though some reviews report mixed results. Studies show that using mHealth to improve adherence to medication is feasible and accepted by patients. mHealth interventions have also been used alongside other telehealth interventions such as wearable wireless pill sensors, smart pillboxes, and smart inhalers.

Forms of medication

Depot injections need to be taken less regularly than other forms of medication, and because a medical professional is involved in their administration, they can increase compliance. Depot preparations are used as an alternative to the oral contraceptive pill and for antipsychotic medication to treat schizophrenia and bipolar disorder.

Coercion

Sometimes drugs are given involuntarily to ensure compliance. This can occur if an individual has been involuntarily committed or is subject to an outpatient commitment order, where failure to take medication will result in detention and involuntary administration of treatment. It can also occur if a patient is not deemed to have the mental capacity to consent to treatment in an informed way.

Health and disease management

A WHO study estimates that only 50% of patients with chronic diseases in developed countries follow treatment recommendations.

Asthma non-compliance (28–70% worldwide) increases the risk of severe asthma attacks that require preventable ER visits and hospitalisations. Compliance issues with asthma can have a variety of causes, including difficult inhaler use, side effects of medications, and the cost of treatment.

Cancer

In the UK, 200,000 new cases of cancer are diagnosed each year. One in three adults in the UK will develop cancer that can be life-threatening, and 120,000 people are killed by their cancer each year, accounting for 25% of all deaths in the UK. However, while 90% of cancer pain can be effectively treated, only 40% of patients adhere to their medicines, due to poor understanding.

A 2016 systematic review found that a large proportion of patients struggle to take their oral antineoplastic medications as prescribed. This presents opportunities and challenges for patient education, reviewing and documenting treatment plans, and patient monitoring, especially with the increase in at-home cancer treatment.

The reasons for non-adherence have been given by patients as follows:

  • The poor quality of information available to them about their treatment
  • A lack of knowledge as to how to raise concerns whilst on medication
  • Concerns about unwanted effects
  • Issues about remembering to take medication

Partridge et al (2002) identified evidence to show that adherence rates in cancer treatment are variable, and sometimes surprisingly poor. The following table is a summary of their findings:

Type of cancer | Measure of non-adherence | Definition of non-adherence | Rate of non-adherence
Haematological malignancies | Serum levels of drug metabolites | Serum levels below expected threshold | 83%
Breast cancer | Self-report | Taking less than 90% of prescribed medicine | 47%
Leukemia or non-Hodgkin's lymphoma | Level of drug metabolite in urine | Level lower than expected | 33%
Leukemia, Hodgkin's disease, non-Hodgkin's | Self-report and parent report | More than one missed dose per month | 35%
Lymphoma, other malignancies | Serum bioassay | Not described |
Hodgkin's disease, acute lymphocytic leukemia (ALL) | Biological markers | Level lower than expected | 50%
ALL | Level of drug metabolite in urine | Level lower than expected | 42%
ALL | Level of drug metabolites in blood | Level lower than expected | 10%
ALL | Level of drug metabolites in blood | Level lower than expected | 2%

Note: a medication event monitoring system is a medication dispenser containing a microchip that records when the container is opened. Adapted from Partridge et al (2002).

In 1998, trials evaluating Tamoxifen as a preventative agent showed dropout rates of around one-third:

  • 36% in the Royal Marsden Tamoxifen Chemoprevention Study of 1998
  • 29% in the National Surgical Adjuvant Breast and Bowel Project of 1998

In March 1999, adherence in the International Breast Cancer Intervention Study, which evaluated the effect of a daily dose of Tamoxifen for five years in at-risk women aged 35–70 years, was:

  • 90% after one year
  • 83% after two years
  • 74% after four years

Diabetes

Patients with diabetes are at high risk of developing coronary heart disease and usually have related conditions that make their treatment regimens even more complex, such as hypertension, obesity and depression which are also characterised by poor rates of adherence.

  • Diabetes non-compliance is 98% in the US and the principal cause of complications related to diabetes, including nerve damage and kidney failure.
  • Among patients with type 2 diabetes, adherence was found in less than one third of those prescribed sulphonylureas and/or metformin; patients taking both drugs achieve only 13% adherence.

Other aspects that drive medication adherence rates are perceived self-efficacy and risk assessment in managing diabetes symptoms, and decision-making surrounding rigorous medication regimens. Perceived control and self-efficacy not only correlate significantly with each other, but also with diabetes-distress psychological symptoms, and have been directly related to better medication adherence outcomes. Various external factors also affect diabetic patients' self-management behaviours, including health-related knowledge and beliefs, problem-solving skills, and self-regulatory skills, all of which shape perceived control over diabetic symptoms.

Additionally, it is crucial to understand the decision-making processes that drive diabetic patients in their choices surrounding the risks of not adhering to their medication. Patient decision aids (PtDAs), sets of tools used to help individuals engage with their clinicians in making decisions about their healthcare options, have been useful in decreasing decisional conflict, improving transfer of diabetes treatment knowledge, and achieving greater risk perception for disease complications, but their efficacy in improving medication adherence has been less substantial. The risk perception and decision-making processes surrounding diabetes medication adherence are therefore multifaceted and complex, with socioeconomic implications as well. For example, immigrant health disparities in diabetic outcomes have been associated with a lower risk perception among foreign-born adults in the United States compared with their native-born counterparts, which leads to fewer of the protective lifestyle and treatment changes crucial for combating diabetes. Variations in patients' perceptions of time may also have severe consequences for adherence, as diabetes medication often requires systematic, routine administration: taking rigorous, costly medication in the present for abstract future benefits can conflict with a patient's preference for immediate over delayed gratification.

Hypertension

  • Hypertension non-compliance (93% in the US, 70% in the UK) is the main cause of uncontrolled hypertension-associated heart attack and stroke.
  • In 1975, only about 50% of patients took at least 80% of their prescribed anti-hypertensive medications.

As a result of poor compliance, 75% of patients with a diagnosis of hypertension do not achieve optimum blood-pressure control.

Mental illness

A 2003 review found that 41–59% of patients prescribed antipsychotics took the medication prescribed to them infrequently or not at all. Sometimes non-adherence is due to lack of insight, but psychotic disorders can be episodic, and antipsychotics are then used prophylactically to reduce the likelihood of relapse rather than to treat symptoms; in some cases individuals will have no further episodes despite not using antipsychotics. A 2006 review investigated the effects of compliance therapy for schizophrenia and found no clear evidence that it was beneficial for people with schizophrenia and related syndromes.

Rheumatoid arthritis

A longitudinal study has shown that adherence with treatment is about 60%. The predictors of adherence were found to be more psychological, communication-related, and logistical in nature than sociodemographic or clinical. The following factors were identified as independent predictors of adherence:

  • the type of treatment prescribed
  • agreement on treatment
  • having received information on treatment adaptation
  • clinician perception of patient trust
