
Thursday, February 12, 2026

Health economics

From Wikipedia, the free encyclopedia
Figure: World health expenditure as a share of global GDP.

Health economics is a branch of economics concerned with issues related to efficiency, effectiveness, value and behavior in the production and consumption of health and healthcare. Health economics is important in determining how to improve health outcomes and lifestyle patterns through interactions between individuals, healthcare providers and clinical settings. Health economists study the functioning of healthcare systems and health-affecting behaviors such as smoking, diabetes, and obesity.

One of the biggest difficulties in health economics is that it does not follow the normal rules of economics. Price and quality are often hidden by the third-party payer system of insurance companies and employers. Additionally, QALYs (quality-adjusted life years), one of the most commonly used measures of treatment benefit, are difficult to measure and rely upon assumptions that are often unreasonable.

A seminal 1963 article by Kenneth Arrow is often credited with giving rise to health economics as a discipline. His theory drew conceptual distinctions between health and other goods. Factors that distinguish health economics from other areas include extensive government intervention, intractable uncertainty in several dimensions, asymmetric information, barriers to entry, externalities and the presence of a third-party agent. In healthcare, the third-party agent is the patient's health insurer, who is financially responsible for the healthcare goods and services consumed by the insured patient.

Externalities arise frequently when considering health and healthcare, notably in the context of the health impacts of infectious disease or opioid abuse. For example, making an effort to avoid catching the common cold affects people other than the decision maker, as does finding sustainable, humane and effective solutions to the opioid epidemic.

Scope

The scope of health economics is neatly encapsulated by Alan Williams' "plumbing diagram", which divides the discipline into eight distinct topics:

  1. What influences health (other than healthcare)
  2. What is health and what is its value
  3. The demand for healthcare
  4. The supply of healthcare
  5. Micro-economic evaluation at treatment level
  6. Market equilibrium
  7. Evaluation at whole system level
  8. Planning, budgeting and monitoring mechanisms

History

In the fourth century BC, the ancient Greek thinker Aristotle discussed the relationship between farmers and doctors in production and exchange. In the 17th century, William Petty, a British classical economist, pointed out that medical and health expenses spent on workers would bring economic benefits.

Presently, contemporary health economics stands as a prominent interdisciplinary field, connecting economic theory with healthcare practice; its diverse sub-disciplines and research domains are evident. The academic roots of this knowledge are commonly traced back to the U.S. tradition.

The American Medical Association (AMA) was created in 1848, having as main goals scientific advancement, creation of standards for medical education, launching a program of medical ethics, and obtaining improved public health. Yet, it was only in 1931 that economic concerns came to the agenda, with the creation of the AMA Bureau of Medical Economics, established to study all economic matters affecting the medical profession.

After the Second World War, amid the rapid improvement of the level of medical research technology, the modernization of diagnosis and treatment means and health facilities and equipment, the aging of the population, the sharp increase of chronic diseases, and the improvement of people's demand for health care and other reasons, medical and health expenses increased significantly. For example, total U.S. health expenditures steadily increased as a share of gross domestic product (GDP), demonstrating the increased importance that society placed on health care relative to other non-health goods and services. Between 1960 and 2013, health spending as a share of GDP increased from 5.0 to 17.4 percent. Over the same period, the average annual growth in nominal national health expenditures was 9.2 percent compared to nominal GDP growth of 6.7 percent.

At the same time, the expenditure on health care in many European countries also increased, accounting for about 4% of GDP in the 1950s and 8% by the end of the 1970s. In terms of growth rate, the proportion of health care expenditure in GNP (gross national product) in many countries increased by 1% in the 1950s, 1.5% in the 1960s, and 2% in the 1970s. This high medical and health expenditure was a heavy economic burden on government, business owners, workers, and families, which required a way to restrain its growth.

In addition, the scale of health service increased, technical equipment became more advanced, and division of labor and specialization saw increases, too. The medical and health service developed into a "healthcare industry" which occupies a considerable amount of capital and labor and occupies an important position in social and economic life. The research on economic problems of the health sector became an important topic of economic research.

Selma Mushkin published "Toward a definition of health economics" in 1958 and, four years later, the paper "Health as an Investment". At that time, health was broadly regarded as a rather consumptive branch of the economy. Mushkin's analysis was the first to frame health investment as having long-term beneficial consequences for the community. Probably the single most famous and cited contribution to the discipline was Kenneth Arrow's "Uncertainty and the welfare economics of medical care", published in 1963.

After the 1960s, research in health economics developed further. The first academic seminar on health economics was held in the United States in 1962, followed by a second in 1968. Also in 1968, the World Health Organization held its first international health economics seminar in Moscow. The convening of these three meetings showed that health economics had entered the academic forum as an independent discipline, marking its official formation.

After the 1970s, health economics entered a period of rapid development and nursing economics gradually emerged. In 1979, Paul Feldstein, an American health economist, first used the principles of economics to discuss the long-term care market, the registered nurse market, and other nursing economy issues, laying the foundation for the emergence of nursing economics.

In 1983, the journal Nursing Economics was founded in the United States; its main research content included nursing market development, nursing cost accounting, policies related to nursing services, and nursing economic management. The journal's publication marked the formal formation of nursing economics. In 1993, the University of Iowa's cost research center conducted a systematic nursing cost study, known simply as the NIC system. The work consisted of establishing a dedicated research institution with full-time researchers, sorting out the content of nursing cost accounting and, finally, identifying 433 items in 6 categories. At the same time, the center adopted computer technology to carry out nursing cost management, including cost assessment, budgeting, and decision making, which played a crucial role in improving the efficiency of nursing management and alleviating the nursing management crisis.

Healthcare demand

The demand for healthcare is a derived demand from the demand for health. Healthcare is demanded as a means for consumers to achieve a larger stock of "health capital". The demand for health is unlike most other goods because individuals allocate resources in order to both consume and produce health.

The above description gives three roles of persons in health economics. The World Health Report (p. 52) states that people take four roles in healthcare:

  1. Contributors
  2. Citizens
  3. Providers
  4. Consumers

Michael Grossman's 1972 model of health production has been extremely influential in this field of study and has several unique elements that make it notable. Grossman's model views each individual as both a producer and a consumer of health. Health is treated as a stock which degrades over time in the absence of "investments" in health, so that health is viewed as a sort of capital. The model acknowledges that health is both a consumption good that yields direct satisfaction and utility, and an investment good, which yields satisfaction to consumers indirectly through fewer sick days. Investment in health is costly as consumers must trade off time and resources devoted to health, such as exercising at a local gym, against other goals. These factors are used to determine the optimal level of health that an individual will demand. The model makes predictions over the effects of changes in prices of healthcare and other goods, labour market outcomes such as employment and wages, and technological changes. These predictions and other predictions from models extending Grossman's 1972 paper form the basis of much of the econometric research conducted by health economists.

In Grossman's model, the optimal level of investment in health occurs where the marginal cost of health capital equals its marginal benefit. With the passing of time, health depreciates at some rate δ. The interest rate faced by the consumer is denoted by r. The marginal cost of health capital is the sum of these variables: r + δ. The marginal benefit of health capital is the rate of return from this capital in both market and non-market sectors. In this model, the optimal health stock can be impacted by factors like age, wages and education. As an example, the depreciation rate δ increases with age, so it becomes more and more costly to attain the same level of health capital or health stock as one ages. Age also decreases the marginal benefit of health stock. The optimal health stock will therefore decrease as one ages.
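
As a rough numerical sketch of this condition, the snippet below solves MB(H) = r + δ for a hypothetical downward-sloping marginal-benefit function; the functional form MB(H) = a * H^(-b) and all parameter values are illustrative assumptions, not part of Grossman's model.

```python
# Minimal sketch of the Grossman condition MB(H*) = r + delta.
# The marginal-benefit form MB(H) = a * H**(-b) is an illustrative
# assumption, not taken from Grossman's paper.

def optimal_health_stock(r, delta, a=1.0, b=2.0):
    """Solve a * H**(-b) = r + delta for the optimal stock H*."""
    marginal_cost = r + delta
    return (a / marginal_cost) ** (1.0 / b)

# Higher depreciation with age lowers the optimal health stock.
for age, delta in [(30, 0.02), (50, 0.05), (70, 0.10)]:
    print(age, round(optimal_health_stock(r=0.03, delta=delta), 2))
```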

Beyond issues of the fundamental, "real" demand for medical care derived from the desire to have good health (and thus influenced by the production function for health) is the important distinction between the "marginal benefit" of medical care (which is always associated with this "real demand" curve based on derived demand), and a separate "effective demand" curve, which summarizes the amount of medical care demanded at particular market prices. Because most medical care is not purchased from providers directly, but is rather obtained at subsidized prices due to insurance, the out-of-pocket prices faced by consumers are typically much lower than the market price. Because the consumer responds to the out-of-pocket price rather than the market price, the "effective demand" curve traces a different relationship between price and quantity than the "marginal benefit" or real demand curve. This distinction is often described under the rubric of "ex-post moral hazard" (which is again distinct from ex-ante moral hazard, which is found in any type of market with insurance).
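
The wedge between the two demand curves can be sketched with a toy example: assume an illustrative linear marginal-benefit curve MB(q) = 100 - q and a coinsurance rate c, so the consumer buys until MB equals the out-of-pocket price c * p rather than the market price p. These numbers are hypothetical, not drawn from the text.

```python
# Toy comparison of "real" vs "effective" demand under insurance.
# Assumes an illustrative marginal-benefit curve MB(q) = 100 - q; the
# consumer buys until MB equals the out-of-pocket price c * p.

def quantity_demanded(market_price, coinsurance):
    out_of_pocket = coinsurance * market_price
    return max(0.0, 100.0 - out_of_pocket)  # invert MB(q) = out-of-pocket price

p = 60.0
print(quantity_demanded(p, coinsurance=1.0))  # uninsured: q = 40 (real demand)
print(quantity_demanded(p, coinsurance=0.2))  # 20% coinsurance: q = 88 (ex-post moral hazard)
```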

Health utility

Figure: Life expectancy vs. healthcare spending in wealthy OECD countries (US average: $10,447 in 2018).

Generally, economists assume that individuals act rationally with the aim of maximizing their lifetime utility, subject to the constraint that they cannot buy more than their resources allow. However, the model becomes more complex because of uncertainty over an individual's lifetime. The issue is therefore split into two parts: (1) how health produces utility, and (2) what affects health (e.g., medical care and lifestyle choices).

Perhaps the most fundamental assumption in consumer demand theory is that a good increases an individual's utility. Health is not really a good in the traditional sense, but health in itself produces happiness. We can think of health as a durable good, much like a car, a house or an education. Every person comes into the world with some inherent "stock" of health, and a healthy baby has a fairly high stock of health. Essentially, every decision an individual takes during their lifetime will affect their stock of health.

Let X be a bundle of other goods, and H a stock of health. With these variables, an individual's utility can be written as Utility = U(X, H). For simplicity, the stock of health is said to produce utility, but technically it is the flow of services created by the stock of health that produces utility. As with other goods, "more is better": an increase in health leads to an increase in utility. It is also typically assumed that the utility derived from X increases with health; for instance, it is more enjoyable to visit the zoo when not experiencing a headache.
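
One way to make the formula concrete is a small sketch with a specific functional form; the Cobb-Douglas shape and the parameter alpha below are assumptions chosen only for illustration.

```python
# Illustrative utility function U(X, H); the Cobb-Douglas form and
# alpha = 0.5 are assumptions for this sketch, not from the text.

def utility(x, h, alpha=0.5):
    """U(X, H) = X**alpha * H**(1 - alpha): more of either raises utility."""
    return (x ** alpha) * (h ** (1 - alpha))

# Better health raises utility directly, and it also raises the payoff
# from the bundle X (the zoo trip is more fun without a headache).
print(round(utility(x=10, h=4), 2))  # 6.32
print(round(utility(x=10, h=9), 2))  # 9.49
```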

Like other durable goods, the stock of health wears out over time; this process can be called aging. When the stock of health drops low enough, a person loses their ability to function. In economic terminology, the stock of health depreciates. Since life expectancy has risen substantially over the past century, this implies that the depreciation rate has decreased during this time. Public health efforts and individual medical care aim to restore the stock of health or to slow its depreciation. A plot of an individual's stock of health over their lifetime would rise steadily during childhood, then gradually decline because of aging, with sudden drops caused by random events such as injury or illness.
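
A toy simulation of such a lifetime trajectory is sketched below; the growth, depreciation, and shock parameters are made-up numbers chosen only to reproduce the general shape described above.

```python
import random

# Toy health-stock trajectory: growth in childhood, depreciation that
# rises with age, and occasional random shocks (illness or injury).
# All parameter values are illustrative assumptions.

random.seed(1)
health = 60.0
path = []
for age in range(85):
    if age < 18:
        health += 2.0                    # childhood: the stock builds up
    else:
        health -= 0.02 * age             # aging: depreciation grows with age
    if random.random() < 0.05:           # occasional illness or injury
        health -= random.uniform(5, 15)
    health = max(health, 0.0)
    path.append((age, round(health, 1)))

print(path[::10])                        # sample the trajectory every 10 years
```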

Beyond "random" health events, many of the things individuals consume or do during their lives affect the speed of aging and the severity and frequency of these drops. Lifestyle choices can markedly improve or worsen an individual's health. The variable X, the bundle of goods and services, can take on numerous characteristics: some add to the stock of health while others noticeably decrease it. Prominent among such lifestyle choices are the decisions to consume alcohol, smoke tobacco, or use drugs, the composition of diet, the amount of exercise, and so on. Not only can X and H work as substitutes for one another in producing utility, but X can also affect H in a production sense. X can then be split into categories depending on its effect on H, for instance "good" types (e.g., moderate exercise), "bad" types (e.g., food high in cholesterol) and "neutral" types (e.g., concerts and books). Neutral goods have no apparent effect on individuals' health.

Some agencies, including the National Institute for Health and Care Excellence (NICE) in the United Kingdom, recommend the use of cost–utility analysis (CUA). This approach compares interventions using the incremental cost-effectiveness ratio (ICER), typically expressed as cost per quality-adjusted life year (QALY) gained.
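
A minimal cost-utility calculation along these lines is sketched below; the cost, QALY, and threshold figures are invented for illustration and are not NICE values.

```python
# Minimal CUA sketch: compare a new treatment with standard care using
# the incremental cost-effectiveness ratio (ICER) in cost per QALY gained.
# All figures are illustrative assumptions.

def icer(cost_new, qaly_new, cost_old, qaly_old):
    return (cost_new - cost_old) / (qaly_new - qaly_old)

ratio = icer(cost_new=45_000, qaly_new=6.0, cost_old=30_000, qaly_old=5.0)
threshold = 30_000  # assumed willingness-to-pay per QALY gained
print(ratio, ratio <= threshold)  # 15000.0 True -> deemed cost-effective at this threshold
```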

Medical economics

Often used synonymously with health economics, medical economics, according to Culyer, is the branch of economics concerned with the application of economic theory to phenomena and problems associated typically with the second and third health market outlined above: physician and institutional service providers. Typically, however, it pertains to cost–benefit analysis of pharmaceutical products and cost-effectiveness of various medical treatments. Medical economics often uses mathematical models to synthesise data from biostatistics and epidemiology for support of medical decision-making, both for individuals and for wider health policy.

Mental health economics

Mental health economics incorporates a vast array of subject matters, ranging from pharmacoeconomics to labor economics and welfare economics. Mental health can be directly related to economics by the potential of affected individuals to contribute as human capital. In 2009 Currie and Stabile published "Mental Health in Childhood and Human Capital", in which they assessed how common childhood mental health problems may alter the human capital accumulation of affected children. Externalities may include the influence that affected individuals have on surrounding human capital, such as at the workplace or in the home. In turn, the economy also affects the individual, particularly in light of globalization. For example, studies in India, where there is an increasingly high occurrence of western outsourcing, have demonstrated a growing hybrid identity in young professionals who face very different sociocultural expectations at the workplace and at home.

Mental health economics presents a unique set of challenges to researchers. Individuals with cognitive disabilities may not be able to communicate their preferences. These factors make it difficult to place a value on the mental health status of an individual, especially in relation to the individual's potential as human capital. Further, employment statistics are often used in mental health economic studies as a means of evaluating individual productivity; however, these statistics do not capture "presenteeism" (when an individual is at work with a lowered productivity level), do not quantify the loss of non-paid working time, and do not capture externalities such as having an affected family member. Also, given variation in global wage rates and in societal values, the statistics used may be contextually and geographically confined, and study results may not be internationally applicable.

Though studies have demonstrated that mental healthcare reduces overall healthcare costs, demonstrates efficacy, and reduces employee absenteeism while improving employee functioning, the availability of comprehensive mental health services is in decline. Petrasek and Rapin (2002) cite the three main reasons for this decline as (1) stigma and privacy concerns, (2) the difficulty of quantifying medical savings and (3) physician incentives to medicate without specialist referral. Evers et al. (2009) have suggested that improvements could be made by promoting more active dissemination of mental health economic analysis, building partnerships between policy-makers and researchers, and employing greater use of knowledge brokers.

Health technology assessment

Economic evaluation, and in particular cost-effectiveness analysis, has become a fundamental part of technology appraisal processes for agencies in a number of countries.

Health spending

Lists of health spending by country:

  • Health spending as a percent of GDP over time
  • Health spending by function (OECD, 2018)
  • Health spending by country: governmental, voluntary and out-of-pocket

According to one study, the average estimated cost-effectiveness threshold (CET) per quality-adjusted life year used for healthcare rationing in 2019 varied between countries from 0.14 times GDP per capita in Ethiopia to 1.47 times GDP per capita in the USA.
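
Since the study expresses thresholds as multiples of GDP per capita, converting them to a money value per QALY is simple arithmetic; the GDP-per-capita figures below are rough placeholders used only to show the calculation.

```python
# CET expressed as a multiple of GDP per capita; the GDP values below
# are rough placeholders, not official statistics.

def cet_per_qaly(gdp_per_capita, multiplier):
    return gdp_per_capita * multiplier

print(cet_per_qaly(gdp_per_capita=850, multiplier=0.14))     # Ethiopia-style: ~119 per QALY
print(cet_per_qaly(gdp_per_capita=65_000, multiplier=1.47))  # US-style: ~95,550 per QALY
```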

Healthcare markets

The five health markets typically analyzed are:

  1. Healthcare financing market
  2. Physician and nurses services market
  3. Institutional services market
  4. Input factors market
  5. Professional education market

Although assumptions of textbook models of economic markets apply reasonably well to healthcare markets, there are important deviations. Many states have created risk pools in which relatively healthy enrollees subsidize the care of the rest. Insurers must cope with adverse selection which occurs when they are unable to fully predict the medical expenses of enrollees; adverse selection can destroy the risk pool. Features of insurance market risk pools, such as group purchases, preferential selection ("cherry-picking"), and preexisting condition exclusions are meant to cope with adverse selection.
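
A stylized toy model of how adverse selection can unravel a community-rated risk pool is sketched below; the enrollee cost figures and the exit rule are illustrative assumptions, not data from any actual pool.

```python
# Toy adverse-selection spiral: the premium equals the pool's average
# expected cost, and enrollees whose expected cost is below the premium
# drop out, leaving an ever sicker (and more expensive) pool.
# Expected costs are illustrative assumptions.

expected_costs = [500, 1_000, 2_000, 4_000, 8_000]  # one entry per enrollee

pool = list(expected_costs)
for year in range(1, 5):
    if not pool:
        break
    premium = sum(pool) / len(pool)                 # community-rated premium
    print(f"year {year}: premium={premium:.0f}, enrollees={len(pool)}")
    pool = [c for c in pool if c >= premium]        # low-cost enrollees exit
```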

Insured patients are naturally less concerned about healthcare costs than they would be if they paid the full price of care. The resulting moral hazard drives up costs, as shown by the RAND Health Insurance Experiment. Insurers use several techniques to limit the costs of moral hazard, including imposing copayments on patients and limiting physician incentives to provide costly care. Insurers often compete on their choice of service offerings, cost-sharing requirements, and limitations on physicians.

Consumers in healthcare markets often lack adequate information about what services they need to buy and which providers offer the best value. Health economists have documented a problem of supplier-induced demand, whereby providers base treatment recommendations on economic rather than medical criteria. Researchers have also documented substantial "practice variations", whereby the treatment offered varies significantly across providers and regions; payers may rely in part on limits on service availability to rein in inducement and practice variations.

Some economists argue that requiring doctors to have a medical license constrains inputs, inhibits innovation, and increases cost to consumers while largely only benefiting the doctors themselves.

Risk sharing

Risk-sharing can reduce risk premiums, for example for research and development of new cures and health care equipment.

Health insurance failure

Health insurance failure can be attributed to market failure or government failure. Underinsurance can arise when the cure for a disease is very expensive, as with cancer, or when new diseases such as HIV/AIDS or COVID-19 spread widely. In such cases private insurers either require a high premium, because the risk factor and costs are high, or may decline to insure people for a particular condition. This leads to a void in the market in which a certain section of the population is unable to afford healthcare. Certain insurance markets, such as those for patients with HIV/AIDS, cancer, or other pre-existing conditions who are searching for new coverage, may be incomplete in the sense that those patients may be unable to afford coverage at any price. In such cases, the government usually intervenes and provides healthcare. For example, during the COVID-19 pandemic, no private insurance company predicted (or could have predicted) that such an outbreak would occur; as a result, state intervention became necessary to treat people. Governments can subsidize those who cannot afford insurance or, in certain situations, those low-cost activities and facilities that non-poor citizens can afford on their own. For example, the largest health insurance scheme in the world, Ayushman Bharat, was launched in India in 2018.

Profit margin in healthcare industry

Government might want to intervene in case of market failure in the healthcare industry. Several healthcare markets have the potential for monopoly control to be exercised. Markets with few hospitals, patent-protected prescription products, and concentrated health insurance markets are a major source of higher costs, especially where the providers are private companies. Limitations on physician-owned hospitals are argued to reduce competition between hospitals.

Sustainable business

From Wikipedia, the free encyclopedia

Sustainable business is an enterprise that aims to minimize negative impacts on the global or local environment, community, and society. Such businesses aim to achieve the triple bottom line of profit, people, and planet by integrating environmental, economic, and social considerations into business decisions. Sustainable businesses often adopt practices that promote environmental protection and long-term economic growth.

A green business is characterized by four pillars: First, the business incorporates environmentally friendly products or services that reduce the demand for harmful products and services, and help conserve natural resources. Second, the business preserves financial capital through responsible and efficient business models. Third, the company focuses on social responsibility by upholding human rights. Finally, it emphasizes cultural sustainability by supporting inclusion and respect for local and global communities.

Terminology

The concept of sustainability refers to the ability to meet present needs without compromising the ability of future generations to meet their own. In business, sustainability means aligning economic growth with environmental protection. It emphasizes practices that minimize ecological damage, such as reducing waste and limiting the greenhouse gas emissions that contribute to rising atmospheric CO2 levels.

The term sustainable business is related to other concepts such as corporate social responsibility (CSR), corporate citizenship, and responsible business. The triple bottom line, introduced by John Elkington in the 1990s, expands on these ideas by evaluating business success across the people, planet, and profit dimensions.

In contrast, short-termism refers to prioritizing short-term financial outcomes at the expense of long-term interests. Short-termism discourages investment in sustainable initiatives, contributing to resource depletion and increased CO2 emissions. Short-termism has become a relevant theme in sustainability discussions, as balancing financial outcomes with environmental and social considerations is a challenge within modern business practices.

Green businesses are sometimes considered possible mediators of economic-environmental relations, even if an individual business has only a minimal effect on lowering atmospheric CO2 levels. The definition of "green jobs" is ambiguous. Still, it is generally agreed that these jobs, the result of green business, should be linked to sustainable energy and contribute to reducing greenhouse gases. These corporations can be seen not only as generators of "green energy" but as producers of new materials that are the product of the technologies these firms developed and deployed.

Environmental Dimension

A major aim of sustainable businesses is to eliminate or decrease the environmental harm caused by the production and consumption of their goods. The impact of such human activities, in terms of the amount of greenhouse gases produced, can be measured in units of carbon dioxide and is referred to as the carbon footprint. The carbon footprint concept is derived from ecological footprint analysis, which examines the ecological capacity required to support the consumption of products.
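
As a minimal sketch of how a footprint of this kind might be tallied, the snippet below multiplies activity quantities by emission factors and sums the result; the activities and factors are illustrative assumptions, not official conversion values.

```python
# Minimal carbon-footprint tally: emissions = activity quantity * emission
# factor, summed in kg of CO2-equivalent. All values are illustrative.

activities = [
    ("electricity_kwh", 120_000, 0.4),  # kWh per year, kg CO2e per kWh
    ("road_freight_km", 50_000, 0.9),   # km per year, kg CO2e per km
    ("natural_gas_m3", 8_000, 2.0),     # cubic metres per year, kg CO2e per m^3
]

footprint_kg = sum(quantity * factor for _, quantity, factor in activities)
print(f"Estimated footprint: {footprint_kg / 1000:.1f} tonnes CO2e per year")
```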

Businesses can adopt a wide range of green initiatives: Tao et al. refer to a variety of "green" business practices including green strategy, green design, green production and green operation. One of the most common examples of a "green" business practice is "going paperless", i.e., sending electronic correspondence instead of paper when possible. On a higher level, examples of sustainable business practices include refurbishing used products (e.g., tuning up lightly used commercial fitness equipment for resale), revising production processes to eliminate waste (such as using a more accurate template to cut out designs), and choosing nontoxic raw materials and processes. For example, Canadian farmers have found that hemp is a sustainable alternative to rapeseed in their traditional crop rotation; hemp grown for fiber or seed requires no pesticides or herbicides. Another example is upcycling clothes or textiles, in which businesses upcycle products to maintain or increase their quality.

Sustainable businesses aim to reduce environmental harm caused by production and consumption. This includes measuring carbon footprints, minimizing greenhouse gas emissions, and adopting practices such as refurbishing or upcycling products, revising production processes to reduce waste, using nontoxic materials, and implementing design-for-the-environment principles.

Examples include using hemp as a crop alternative, adopting paperless practices, and designing products for longer lifespans.

Sustainable business leaders also take into account the life cycle costs for the items they produce. Input costs must be considered regarding regulations, energy use, storage, and disposal. Designing for the environment (DFE) is also an element of sustainable business. This process enables users to consider the potential environmental impacts of a product and the process used to make that product.

The many possibilities for adopting green practices have led to considerable pressure being put upon companies from consumers, employees, government regulators, and other stakeholders. Some companies have resorted to "greenwashing" instead of making meaningful changes, merely marketing their products in ways that suggest green practices. For example, various producers in the bamboo fiber industry have been taken to court for advertising their products as "greener" than they are. In their book Corporate Sustainability in International Comparison, Schaltegger et al. (2014) analyze the current state of corporate sustainability management and corporate social responsibility across eleven countries. Their research is based on an extensive survey focusing on the companies’ intention to pursue sustainability management (i.e. motivation; issues), the integration of sustainability in the organization (i.e. connecting sustainability to the core business; involving corporate functions; using drivers of business cases for sustainability) and the actual implementation of sustainability management measures (i.e. stakeholder management; sustainability management tools and standards; measurements). An effective way for businesses to contribute towards waste reduction is to remanufacture products so that the materials used can have a longer lifespan.

Examples of sustainable companies

The Harvard Business School business historian Geoffrey Jones traces the historical origins of green business back to pioneering start-ups in organic food and wind and solar energy before World War I. Among large corporations, Ford Motor Company occupies an odd role in the story of sustainability. Ironically, founder Henry Ford was a pioneer in the sustainable business realm, experimenting with plant-based fuels during the days of the Model T. Ford Motor Company also shipped the Model A truck in crates that then became the vehicle floorboards at the factory destination. This was a form of upcycling, retaining high quality in a closed-loop industrial cycle. Furthermore, the original auto body was made of a stronger-than-steel hemp composite. Today, of course, Fords aren't made of hemp, nor do they run on the most sensible fuel. Currently, Ford's claim to eco-friendly fame is the use of seat fabric made from 100% post-industrial materials and renewable soy foam seat bases. Ford executives recently appointed the company's first senior vice president of sustainability, environment, and safety engineering. This position is responsible for establishing a long-range sustainability strategy and environmental policy, developing the products and processes necessary to satisfy customers and society as a whole while working toward energy independence. It remains to be seen whether Ford will return to its founder's vision of a petroleum-free automobile, a vehicle powered by the remains of plant matter.

The automobile manufacturer Subaru has also made efforts to tackle sustainability. In 2008, Subaru's assembly plant in Lafayette, Indiana, became the first auto assembly plant to achieve zero-landfill status after implementing sustainable policies. The company successfully implemented a plan that increased refuse recycling to 99.8%. In 2012, the corporation increased its reuse of styrofoam by 9%. And from 2008 to 2012, environmental incidents and accidents were reduced from 18 to 4.

Smaller companies such as Nature's Path, an organic cereal and snack-making business, have also made sustainability gains in the 21st century. CEO Arran Stephens and his associates have ensured that the quickly growing company's products are produced without toxic farm chemicals. Furthermore, employees are encouraged to find ways to reduce consumption. Sustainability is an essential part of corporate discussions. Another example comes from Salt Spring Coffee, a company created in 1996 as a certified organic, fair trade, coffee producer. In recent years they have become carbon neutral, lowering emissions by reducing long-range trucking and using bio-diesel in delivery trucks, upgrading to energy-efficient equipment, and purchasing carbon offsets. The company claims to offer the first carbon-neutral coffee sold in Canada. Salt Spring Coffee was recognized by the David Suzuki Foundation in the 2010 report Doing Business in a New Climate. A third example comes from Korea, where rice husks are used as nontoxic packaging for stereo components and other electronics. The same material is later recycled to make bricks.

Some companies in the textile industry have been moving towards more sustainable business practices. Specifically, the clothing company Patagonia has focused on reducing consumption and waste. The company limits its environmental impact by ensuring only recycled and organic materials, repairing damaged clothes, and by complying with strong environmental protection standards for its entire supply chain.

Some companies in the mining industry, and specifically gold mining, are attempting to move towards more sustainable practices, especially given that the industry is one of the most environmentally destructive. Regarding gold mining, Northwestern University scientists have, in the laboratory, discovered an inexpensive and environmentally sustainable method that uses simple cornstarch, instead of cyanide, to isolate gold from raw materials in a selective manner. Such a method can reduce the amount of cyanide released into the environment during gold extraction from raw ore, with one of the Northwestern University scientists, Sir Fraser Stoddart, stating that "The elimination of cyanide from the gold industry is of the utmost importance environmentally". Additionally, the retail jewelry industry is now trying to be more sustainable, with companies using green energy providers and recycling more, as well as avoiding the use of mined, so-called "virgin", gold by re-finishing existing pieces and re-selling them. Furthermore, customers may opt for Fairtrade Gold, which gives a better deal to small-scale and artisanal miners and is an element of sustainable business. However, not everyone thinks that mining can be sustainable and many believe that much more must be done, noting that mining in general requires greater regional and international legislation and regulation, given the huge impact mining has on the planet and the huge number of products and goods that are made wholly or partly from mined materials.

In the luxury sector, the group Kering developed the "Environmental Profit & Loss account" (EP&L) accounting method in 2012 to track progress toward its sustainability goals, a strategy aligned with the UN Sustainable Development Goals. In 2019, at the request of French President Emmanuel Macron, François-Henri Pinault, Chairman and CEO of Kering, presented the Fashion Pact during the G7 summit, an initiative signed by 32 fashion firms committing to concrete measures to reduce their environmental impact. By 2020, 60 firms had joined the Fashion Pact.

Fair Trade is a form of sustainable business and among the highest forms of CSR (Corporate Social Responsibility). Organizations that participate in Fair Trade typically adhere to the ten principles of the World Fair Trade Organization (WFTO). Moreover, Fair Trade promotes entrepreneurial development among communities in developing countries and it encourages communities to be responsible and accountable for their economic development via market engagement. Fair Trade is a form of marketing with a strong and direct social benefit beyond the economic supply chain.

In Sub-Saharan Africa, Ghanaian micro, small, and medium enterprises (MSMEs) have also engaged in environmentally sustainable business practices despite limited resources. According to a 2023 study, MSMEs across sectors such as plastic recycling, oil marketing, financial services, and consumer goods have adopted strategies that align with environmental stewardship, process efficiency, and sustainability-oriented culture. These include utilizing electronic documentation to minimize paper waste, incorporating renewable energy sources, implementing effective waste disposal systems, and educating staff on sustainability awareness. The study highlights that drivers such as sustainability-focused leadership, eco-preneurship, and resource optimization contribute not only to environmental impact reduction but also to competitive advantage and business longevity in Ghana’s growing informal and formal MSME sectors.

While many studies highlight the long-term benefits of sustainable practices, recent research suggests that firms may also face short-term trade-offs. A 2023 study of European listed firms found that, contrary to the common assumption of a "win-win," stronger sustainability practices were associated with reduced profitability, indicating that sustainability initiatives can temporarily lower financial performance.

Social dimension of sustainability

Sustainability in the social sphere refers to a business's responsibility to maintain the well-being of their employees, customers and community. Organizations may contribute to sustainability by supporting education, encouraging employee volunteering, and making charitable contributions. Organizations that give back to the community, whether through employees volunteering their time or through charitable donations, are often considered socially sustainable.

Socially sustainable practices can improve the quality of life in local communities and strengthen stakeholder relationships. Social sustainability is often linked to environmental justice, emphasizing that social equity and environmental responsibility are related and affect one another. For a business to be sustainable, it must sustain not only the necessary environmental resources, but also social resources, including employees, customers, and its reputation.

Nonprofit organizations are recognizing the importance of environmental sustainability, not only in advocacy but in operational practices. Recent research suggests that adopting an environmental culture can mediate and strengthen the relationship between sustainability efforts and organizational performance outcomes in nonprofits. In a 2023 study, Ramdhony and Rajadurai found that nonprofits embedding environmental sustainability into their organizational values experienced improvements in stakeholder trust, funding outcomes, and long-term resilience. These findings highlight the evolving role of nonprofits not just as beneficiaries of sustainable development goals, but as implementers of green operational practices in the broader ecosystem of sustainable business.

Consumer-producer dynamics

Modern sustainability includes social and environmental factors often overlooked in traditional business models. More consumers are demanding sustainable goods and services, particularly when they perceive that companies neglect their environmental responsibilities. Ecological awareness, sometimes viewed as a personal preference rather than a necessity, is a strategic marketing tool for firms that want to enhance their brand image. However, it is essential that companies substantiate their environmental claims, since greenwashing leads to consumer distrust and long-term reputational damage.

Greenwashing

As sustainability has become a significant consideration in the last decade, companies face greater scrutiny regarding the credibility of their environmental claims. In the United States, the Federal Trade Commission (FTC) enforces the Green Guides, which help businesses avoid misleading advertising through deceptive environmental statements. Greenwashing refers to the act of presenting false, vague, or exaggerated benefits that a company's products, policies, or services offer. Greenwashing includes exaggerated claims about sustainability, unsubstantiated claims about the use of eco-friendly materials, and the promotion of harmful practices as "green".

When companies do not follow such guides, they may be subject to legal consequences and harmed reputations. Sustainable businesses often invest in experienced legal practitioners who can understand and can provide counsel on the FTC Guides and other such frameworks.

A related example is the 2015 Volkswagen emissions case, in which the company marketed diesel vehicles as environmentally friendly while using software that manipulated emissions test results. Subsequent investigations revealed the vehicles were equipped with "defeat devices" to cheat emissions tests, producing nitrogen oxides far above legal limits. This incident led to significant fines, charges, and loss of consumer trust.

The following case study illustrates how consumer awareness and behavior influence the sustainability practices adopted by producers, particularly in the fashion industry.

Case study: consumer attitudes toward sustainable fashion

A qualitative study conducted in Lisbon, Portugal, by Leandro Pereira, Rita Carvalho, Alvaro Dias, Renato Costa, and Nelson Antonio examined how sustainability affects consumer choices in the fashion industry. Based on fifty interviews, the researchers identified two groups: those who actively practice sustainable habits (60%) and those who are aware of sustainability but have not yet taken concrete action (40%). Active consumers reported recycling, purchasing second-hand garments, and supporting sustainable fashion brands. Less engaged consumers cited barriers including price, limited access, and lack of education. The study concluded that although awareness of sustainability is increasing, widespread change requires more affordable and convenient ecological options, as well as improved consumer education.

Organizations

The European Community's Restriction of Hazardous Substances Directive restricts the use of certain hazardous materials in the production of various electronic and electrical products. The Waste Electrical and Electronic Equipment (WEEE) Directive sets collection, recycling, and recovery requirements for electrical goods. The World Business Council for Sustainable Development and the World Resources Institute are two organizations working together to set a standard for reporting on corporate carbon footprints. From October 2013, all quoted companies in the UK have been legally required to report their annual greenhouse gas emissions in their directors' report, under the Companies Act 2006 (Strategic and Directors' Reports) Regulations 2013.

Lester Brown’s Plan B 2.0 and Hunter Lovins’s Natural Capitalism provide information on sustainability initiatives.

Corporate sustainability strategies

Corporate sustainability strategies can aim to take advantage of sustainable revenue opportunities while protecting the value of the business against increasing energy costs, the costs of meeting regulatory requirements, changes in the way customers perceive brands and products, and the volatile price of resources.

Not all eco-strategies can be incorporated into a company's business immediately. The most widely practiced strategies include the following:

  1. Innovation & Technology: This method focuses on a company's ability to change its products and services towards better environmental impacts, for example less waste production.
  2. Collaboration: The formation of networks with similar or partner companies facilitates knowledge sharing and propels innovation.
  3. Process Improvement: Continuous process surveying and improvement are essential to reducing negative impacts. Employee awareness of the company-wide sustainability plan further aids the integration of new and improved processes.
  4. Sustainability Reporting: Periodic reporting of company performance in relation to goals encourages performance monitoring internally and transparency and accountability externally. The goals might then be incorporated into the corporate mission.
  5. Greening the Supply Chain: Sustainable procurement is important for any sustainability strategy as a company's impact on the environment is much bigger than the products that they consume. The B Corporation (certification) model is a good example of one that encourages companies to focus on this.
  6. Choosing the Right Leaders: Having CEOs who are informed about the opportunities from sustainability guides companies in the right steps to becoming eco-friendly. As the world slowly transitions to sustainability, it is important for company leaders to prioritize it and act with a sense of urgency.

Companies should adopt a sound measurement and management system to collect data on their sustainability impacts and dependencies, as well as a regular forum for all stakeholders to discuss sustainability issues. The Sustainability Balanced Scorecard is a performance measurement and management system aiming at balancing financial and non-financial as well as short and long-term measures. It explicitly integrates strategically relevant environmental, social and ethical goals into the overall performance management system and supports strategic sustainability management.

Noteworthy examples of sustainable business practices that are often part of corporate sustainability strategies can include: transitioning to renewable energy sources, implementing effective recycling programs, minimizing waste generation in industrial processes, developing eco-friendly product designs, prioritizing the adoption of sustainable packaging materials, fostering an ethical and responsible supply chain, partnering with charities, encouraging volunteerism, upholding equitable treatment of employees, and prioritizing their overall welfare, among numerous other initiatives.

Examples of sustainable businesses

  • Ford Motor Company: Uses renewable materials and has appointed executives to lead sustainability strategies.
  • Subaru: Achieved zero-landfill status at its Lafayette assembly plant and increased recycling practices.
  • Patagonia: Focuses on recycled materials, product repair, and supply chain environmental standards.
  • Nature's Path & Salt Spring Coffee: Organic and fair trade practices, energy efficiency, and carbon neutrality.
  • Luxury Sector (Kering): Tracks sustainability with the Environmental Profit & Loss account and participates in the Fashion Pact.
  • Gold Mining: Innovative methods using cornstarch instead of cyanide reduce environmental impact.

Implementation in SMEs

Small and medium-sized enterprises face distinctive challenges when adopting corporate sustainability strategies. A 2023 Swiss survey of 514 SMEs reported that 89% had never produced a sustainability report, citing limited internal resources and unfamiliarity with international standards as main barriers. Nevertheless, SMEs that embraced systematic sustainability reporting recorded a 25% self-reported improvement in corporate reputation and an 18% improvement in workplace quality, indicating that even partial adoption can yield competitive advantages.

Standards

Enormous economic and population growth worldwide in the second half of the twentieth century aggravated the factors that threaten health and the environment — including ozone depletion, climate change, resource depletion, fouling of natural resources, and extensive loss of biodiversity and habitat. In the past, the standard approaches to environmental problems generated by business and industry have been regulatory-driven "end-of-the-pipe" remediation efforts. In the 1990s, efforts by governments, NGOs, corporations, and investors began to grow to develop awareness and plans for voluntary standards and investment in sustainability by business.

One critical milestone was the establishment of the ISO 14000 standards, whose development came as a result of the Rio Summit on the Environment held in 1992. ISO 14001 is the cornerstone standard of the ISO 14000 series. It specifies a framework of control for an environmental management system against which an organization can be certified by a third party. Other ISO 14000-series standards are actually guidelines, many of which help organizations achieve registration to ISO 14001. They include the following:

  • ISO 14004 provides guidance on the development and implementation of environmental management systems.
  • ISO 14010 provides general principles of environmental auditing (now superseded by ISO 19011)
  • ISO 14011 provides specific guidance on auditing an environmental management system (now superseded by ISO 19011)
  • ISO 14012 provides guidance on qualification criteria for environmental auditors and lead auditors (now superseded by ISO 19011)
  • ISO 14013/5 provides audit program review and assessment material.
  • ISO 14020+ covers labeling issues
  • ISO 14030+ provides guidance on performance targets and monitoring within an Environmental Management System
  • ISO 14040+ covers life cycle issues

There are now a wide range of sustainability accounting frameworks that organizations use to measure and disclose on their sustainability impacts and dependencies. These have evolved since the 1990s to encompass metrics spanning a wide range of social, environmental, economic and ethical issues.

Circular business models

Early academic, industry, and policy discussions around circularity primarily focused on re-X strategies such as recycling, remanufacturing, reuse, and recovery. However, it became evident that technological capabilities advanced faster than their practical implementation. A successful transition toward a circular economy requires collaboration among stakeholders, and business model innovation is a key driver for integrating circular technologies into organizations.

Circular business models aim to reduce resource consumption, minimize waste, and regenerate natural systems by rethinking production and consumption dynamics. Corporations are increasingly implementing circular strategies to achieve both environmental and financial benefits. Circular business models can be classified into four main strategies. First, narrowing resource loops involves increasing production efficiency by using fewer materials, often by implementing initiatives that optimize manufacturing processes; in the cosmetics industry, for example, upcycling, the repurposing of byproduct waste materials or otherwise useless products, has emerged as a powerful strategy to advance circularity, minimize waste, and conserve resources. Second, slowing resource loops means extending the lifespan of products through repair, reuse, resale, or rental services. Third, closing resource loops focuses on reusing or recycling materials to return them to new production cycles. Finally, regenerating natural systems emphasizes restoring the resources used in production, for example through agricultural practices.

These strategies allow companies to reduce costs, create new value propositions, and improve sustainability, though there are still challenges such as technology, consumer adaptation to ecological practices, and profitability considerations.

Writing and sustainable businesses

Writing about and communicating clean initiatives and ways to reduce emissions inspires other businesses and increases the likelihood that they adopt these practices. Since businesses are able to reach large groups of people, they have a significant role in the advancement of sustainable practices and environmental protection, influencing public policy, their customers, and other businesses. It has been suggested that businesses "take up the rhetorics and literacies necessary to communicate, analyze, organize, and mitigate environmental crises such as climate change". This task is challenging due to climate change's innate scientific complexity, abstract nature, and politically polarized character.

However, there are some ways to increase the effectiveness of this communication. By shedding light on local and more immediate issues, people can be more easily influenced to take action. Additionally, bringing attention to how it will impact humans, specifically human health, can help stress the urgency and severity of the situation. Lastly, suggesting action items and ways one can do their part can help make the climate crisis less daunting.

Certification

Challenges and opportunities

Implementing sustainable business practices may have an effect on profits and a firm's financial "bottom line". However, at a time when environmental awareness is popular, green strategies are likely to be embraced by employees, consumers, and other stakeholders. Many organizations concerned about the environmental impact of their business are taking initiatives to invest in sustainable business practices. In fact, a positive correlation has been reported between environmental performance and economic performance. Businesses trying to implement sustainable business practices need insight into balancing the social equity, economic prosperity and environmental quality elements.

If an organization's current business model is inherently unsustainable, becoming truly sustainable requires a complete makeover of the business model (e.g. from selling cars to offering car sharing and other mobility services). This can present a major challenge due to the differences between the old and the new model and the respective skills, resources and infrastructure needed. A new business model can also offer major opportunities by entering or even creating new markets and reaching new customer groups. The main challenges to the implementation of sustainable business practices by businesses in developing countries include a lack of skilled personnel, technological challenges, socio-economic challenges, organizational challenges and the lack of a proper policy framework. Skilled personnel play a crucial role in quality management, enhanced compliance with international quality standards, and the preventative and operational maintenance attitude necessary to ensure sustainable business. In the absence of a skilled workforce, companies fail to implement a sustainable business model.

Another major challenge to the effective implementation of sustainable business is organizational. Organizational challenges to the implementation of sustainable business activities arise from the difficulties associated with planning, implementing and evaluating sustainable business models. Addressing these organizational challenges needs to begin by analyzing the whole value chain of the business rather than focusing solely on the company's internal operations. Another major challenge is the lack of an appropriate policy framework for sustainable business. Companies often comply with only the lowest economic, social and environmental sustainability standards, when in fact true sustainability is achieved when a business moves beyond compliance toward an integrated strategy and purpose.

Companies leading the way in sustainable business practices can take advantage of sustainable revenue opportunities: according to the Department for Business, Innovation and Skills the UK green economy will grow by 4.9 to 5.5 percent a year by 2015, and the average internal rate of return on energy efficiency investments for large businesses is 48%. A 2013 survey suggests that demand for green products appears to be increasing: 27% of respondents said they are more likely to buy a sustainable product and/or service than 5 years ago. Furthermore, sustainable business practices may attract talent and generate tax breaks.

Digitalization can both challenge and enhance the implementation of sustainable business models (SBMs). A systematic review by Broccardo et al. (2023) found that digital technologies, such as big data, blockchain, the Internet of Things (IoT), and artificial intelligence (AI), can enable businesses to transform traditional business models toward sustainability. These transformations include improving efficiency, promoting resource sharing, facilitating recycling and remanufacturing, and fostering stakeholder collaboration. Such digital tools can create "virtuous circles", in which investments in digital transformation generate operational savings and establish new revenue streams that further support sustainable initiatives. However, the study stresses that digitalization is not automatically beneficial; it requires strategic alignment with social, environmental, and economic goals to avoid unintended trade-offs, such as high energy consumption or unequal access to technology.

Employee Engagement and Organizational Culture in Sustainability

Sustainable business practices are significantly influenced by the degree of employee engagement and the prevailing organizational culture. Organizations that cultivate a culture oriented toward sustainability frequently demonstrate elevated levels of employee involvement in environmentally focused initiatives, including efforts to reduce energy consumption and participation in community programs. Employee engagement mechanisms, such as the establishment of "green teams" or the implementation of internal sustainability competitions, serve to motivate staff to contribute innovative ideas and assume responsibility for advancing environmental and social objectives.

Empirical research indicates that organizations characterized by a culture emphasizing sustainability tend to report improved employee morale, reduced turnover rates, and increased levels of innovation. Employees are more inclined to propose environmentally sustainable solutions when their contributions are recognized and perceived as aligned with organizational objectives. Leadership is considered instrumental in this context, as leaders who model sustainable behaviors, articulate clear sustainability goals, and acknowledge or reward sustainability-related initiatives contribute to the advancement of environmental and social outcomes within the organization.

The integration of sustainability into organizational culture is associated with the potential to reduce an organization’s ecological footprint, strengthen internal cohesion, foster trust among stakeholders, and enhance its reputation as a socially responsible entity.

Just war theory

From Wikipedia, the free encyclopedia
Saint Augustine was the first clear advocate of just-war theory.

The just war theory (Latin: bellum iustum) is a doctrine, also referred to as a tradition, of military ethics that aims to ensure that a war is morally justifiable through a series of criteria, all of which must be met for a war to be considered just. It has been studied by military leaders, theologians, ethicists and policymakers. The criteria are split into two groups: jus ad bellum ("right to go to war") and jus in bello ("right conduct in war"). There have been calls for the inclusion of a third category of just war theory (jus post bellum) dealing with the morality of post-war settlement and reconstruction. The just war theory postulates that war, while terrible (though less so with the right conduct), is not always the worst option, and can be justifiable when justice is an objective of armed conflict. Important responsibilities, undesirable outcomes, or preventable atrocities may justify war.

Opponents of the just war theory may either be inclined to a stricter pacifist standard (proposing that there has never been nor can there ever be a justifiable basis for war) or they may be inclined toward a more permissive nationalist standard (proposing that a war need only serve a nation's interests to be justifiable). In many cases, philosophers state that individuals need not be plagued by a guilty conscience if they are required to fight. A few philosophers ennoble the virtues of the soldier while also declaring their apprehensions about war itself. A few, such as Rousseau, argue for insurrection against oppressive rule.

The historical aspect, or the "just war tradition", deals with the historical body of rules or agreements that have applied in various wars across the ages. The just war tradition also considers the writings of various philosophers and lawyers through history, and examines both their philosophical visions of war's ethical limits and whether their thoughts have contributed to the body of conventions that have evolved to guide war and warfare.

In the twenty-first century there has been significant debate between traditional just war theorists, who largely support the existing law of war and develop arguments to support it, and revisionists who reject many traditional assumptions, although not necessarily advocating a change in the law.

Origins

Ancient Egypt

A 2017 study found that the just war tradition can be traced as far back as Ancient Egypt. Egyptian ethics of war usually centered on three main ideas: the cosmological role of Egypt, the pharaoh as a divine office and executor of the will of the gods, and the superiority of the Egyptian state and population over all other states and peoples. Egyptian political theology held that the pharaoh had exclusive legitimacy in justly initiating a war, usually claimed to carry out the will of the gods. Senusret I, in the Twelfth Dynasty, claimed, "I was nursed to be a conqueror...his [Atum's] son and his protector, he gave me to conquer what he conquered." Later pharaohs also considered their sonship of the god Amun-Re as granting them absolute authority to declare war on the deity's behalf. Pharaohs often visited temples prior to initiating campaigns, where the pharaoh was believed to receive their commands of war from the deities. For example, Kamose claimed that "I went north because I was strong (enough) to attack the Asiatics through the command of Amon, the just of counsels." A stele erected by Thutmose III at the Temple of Amun at Karnak "provides an unequivocal statement of the pharaoh's divine mandate to wage war on his enemies." As the New Kingdom progressed and Egypt heightened its territorial ambitions, the invocation of just war increasingly served to justify these efforts. The universal principle of Maat, signifying order and justice, was central to the Egyptian notion of just war, and in effect placed virtually no limits on what Egypt could take, do, or use to secure the ambitions of the state.

India

The Indian Hindu epic, the Mahabharata, offers the first written discussions of a "just war" (dharma-yuddha or "righteous war"). In it, one of five ruling brothers (Pandavas) asks if the suffering caused by war can ever be justified. A long discussion then ensues between the siblings, establishing criteria like proportionality (chariots cannot attack cavalry, only other chariots; no attacking people in distress), just means (no poisoned or barbed arrows), just cause (no attacking out of rage), and fair treatment of captives and the wounded.

In Sikhism, the term dharamyudh describes a war that is fought for just, righteous or religious reasons, especially in defence of one's own beliefs. Though some core tenets in the Sikh religion are understood to emphasise peace and nonviolence, especially before the 1606 execution of Guru Arjan by Mughal Emperor Jahangir, military force may be justified if all peaceful means to settle a conflict have been exhausted, thus resulting in a dharamyudh.

East Asian

Chinese philosophy produced a massive body of work on warfare, much of it during the Zhou dynasty, especially the Warring States era. War was justified only as a last resort and only by the rightful sovereign; however, questioning the decision of the emperor concerning the necessity of a military action was not permissible. The success of a military campaign was sufficient proof that the campaign had been righteous.

Japan did not develop its own doctrine of just war but, between the 5th and 7th centuries, drew heavily from Chinese philosophy, and especially Confucian views. As part of the Japanese campaign to take northeastern Honshu, Japanese military action was portrayed as an effort to "pacify" the Emishi people, who were likened to "bandits" and "wild-hearted wolf cubs" and accused of invading Japan's frontier lands.

Ancient Greece and Rome

The notion of just war in Europe originated in ancient Greece and was then developed further in the Roman Empire.

It was Aristotle who first introduced the concept and terminology to the Hellenic world, calling war a last resort that required conduct allowing the restoration of peace. Aristotle argues that the cultivation of a military is necessary and good for the purpose of self-defense, not for conquering: "The proper object of practising military training is not in order that men may enslave those who do not deserve slavery, but in order that first they may themselves avoid becoming enslaved to others" (Politics, Book 7).

Stoic philosopher Panaetius considered war inhuman, but he contemplated just war when it was impossible to bring peace and justice by peaceful means. Just war could be waged solely for retribution or defense, in both cases having to be declared officially. He also established the importance of treating the defeated in a civilized way, especially those who surrendered, even after a prolonged conflict.

In ancient Rome, a "just cause" for war might include the necessity of repelling an invasion, or retaliation for pillaging or a breach of treaty. War was always potentially nefas ("wrong, forbidden"), and risked religious pollution and divine disfavor. A "just war" (bellum iustum) thus required a ritualized declaration by the fetial priests. More broadly, conventions of war and treaty-making were part of the ius gentium, the "law of nations", the customary moral obligations regarded as innate and universal to human beings.

Christian views

Christian Just War thinking is often thought to begin with Saint Ambrose, Bishop of Milan, before being developed further by his contemporary Saint Augustine of Hippo. The Just War theory, with some amendments, is still used by Christians today as a guide to whether or not a war can be justified, and how it should be fought. Christians may argue "Sometimes war may be necessary and right, even though it may not be good." In the case of a country that has been invaded by an occupying force, for example, war may be the only way to restore justice. 

Saint Ambrose

Influenced by Roman law, and Cicero in particular, Ambrose believed that war was legitimate only for defensive purposes or the punishment of serious wrongdoing, and rulers were obliged to respect treaties, avoid exploiting enemies and treat the defeated with mercy. Ambrose also seems to have regarded military force as permissible against heretics, or in support of Christian orthodoxy. Nevertheless, he strictly prohibited the Church from direct involvement in violence, insisting that clergy must not take up arms themselves. Similarly, warfare had to be undertaken only to fulfill divine law, not for personal motives, and any war driven by emotional excess, vindictiveness or other disordered intentions fell outside the moral limits he envisioned.

Saint Augustine

Saint Augustine held that Christians should not resort immediately to violence, but that God has given the sword to governments for a good reason (based upon Romans 13:4). In Contra Faustum Manichaeum book 22 sections 69–76, the main source for his just war ideas, Augustine argues that Christians, as part of a government, need not be ashamed of protecting peace and punishing wickedness when they are obliged to do so. Augustine regarded intention as the main determinant of whether a war was just or sinful: "What is here required is not a bodily action, but an inward disposition. The sacred seat of virtue is the heart."

Nonetheless, Augustine asserted that peaceful inaction in the face of a grave wrong that could be rectified only by violence would be a sin. Defense of oneself or the innocent could therefore be a necessity, especially when authorized by a legitimate state authority:

They who have waged war in obedience to the divine command, or in conformity with His laws, have represented in their persons the public justice or the wisdom of government, and in this capacity have put to death wicked men; such persons have by no means violated the commandment, "Thou shalt not kill."

But, say they, the wise man will wage Just Wars. As if he would not all the rather lament the necessity of just wars, if he remembers that he is a man; for if they were not just he would not wage them, and would therefore be delivered from all wars.

No war is undertaken by a good state except on behalf of good faith or for safety.

According to J. Mark Mattox:

In terms of the traditional notion of jus ad bellum [the circumstances under which wars can be justly fought] ... war is a coping mechanism for righteous sovereigns who would ensure that their violent international encounters are minimal, a reflection of the Divine Will to the greatest extent possible, and always justified. In terms of the traditional notion of jus in bello [justice in war, or the moral considerations which ought to constrain the use of violence in war], war is a coping mechanism for righteous combatants who, by divine edict, have no choice but to subject themselves to their political masters and seek to ensure that they execute their war-fighting duty as justly as possible.

To summarize, Augustine explored the relationship between Christian charity and the use of force in greater philosophical depth than Saint Ambrose, though he ultimately affirmed many of the same principles. Augustine did not attempt to craft a systematic doctrine of just war, and his comments on it are scattered across his writing. Even so, the foundations of what later became the classical just war tradition can be clearly identified in his thought.

For Augustine, as for Ambrose, war could also be understood as analogous to a judicial process, in which the political authority uses war to punish those who commit injustice. Indeed, he compared military action to civil litigation seeking restitution or punitive redress. Since God was the ultimate judge, and there were Old Testament precedents for His ordering of wars against Israel's enemies and unbelievers, just war could also become holy war or religious war.

Several core just war principles emerge from Augustine's writing:

Legitimate authority: Only public authorities may wage war; private individuals have no right to initiate armed conflict.

Just cause: Defense of the community, protection of allies, or redress for wrongful acts are just causes for war, though Augustine also allowed for offensive action under certain circumstances, citing Moses’ expulsion of the Amorites after they denied Israel peaceful passage.

Right intention: Proper inner disposition is essential. A ruler or soldier must act with a mindset comparable to that of a Christian judge or executioner—firm yet guided by love and compassion. Actions motivated by revenge, wrath, or greed invalidate any claim to justice in war.

Finally, the ultimate goal of just war must be to establish peace.

Saint Isidore of Seville

Isidore of Seville writes:

Those wars are unjust which are undertaken without cause. For aside from vengeance or to fight off enemies no just war can be waged.

Isidore offers a succinct definition of just war in his Etymologiae, describing it as a conflict “waged by formal declaration, either to recover seized property or to drive off an enemy”. He immediately contrasts this with unjust war, following ideas drawn from Cicero’s De re publica. Although Isidore’s brief, essentially Roman, formulation did not fully engage with Augustine’s more sophisticated thinking, and did not fully incorporate subsequent Christian or post-Roman developments, it nevertheless became, like the writings of Saint Gregory of Tours, an important conduit through which the conception of just war entered the high medieval period, informing the Decretum of Gratian in particular.

Carolingian period

The just war ideas of Saint Augustine of Hippo and other Church Fathers were transmitted via Saint Isidore of Seville, Saint Gregory of Tours and other scholars into the Carolingian period, informing the Christianizing imperial project of Charlemagne and the consequent Carolingian Renaissance. Just/holy war ideas about legitimate authority, just cause, punishment of enemies/unbelievers and the establishment of peace therefore gained traction, though traditional Christian concerns, particularly among clerics, about the sinfulness of killing in war, or of being killed in a state of sin, were reflected in sermons, liturgies and penitential texts. This led to the increasing deployment of military chaplains to enable soldiers to confess their sins, and thereby reconcile themselves with God, before battle. Proto-jus in bello rules to protect non-combatants such as clergy, nuns, widows, orphans and the poor, including their property, also began to appear in ecclesiastical texts, as well as in some capitularies (laws or ordinances) issued by Charlemagne and other rulers, anticipating the later Peace of God and Truce of God movements.

Peace and Truce of God

The medieval Peace of God (Latin: pax dei) was a 10th-century mass movement in Western Europe, instigated by the clergy, that granted immunity from violence to non-combatants.

Starting in the 11th century, the Truce of God (Latin: treuga dei) involved Church rules that successfully limited when and where fighting could occur: Catholic forces (e.g. those of warring barons) could not fight each other on Sundays, Thursdays, holidays, or during the entirety of Lent and Advent, among other times, severely disrupting the conduct of wars. The 1179 Third Council of the Lateran adopted a version of it for the whole church.

Saint Thomas Aquinas

Saint Thomas Aquinas contributed to the development of the just war theory in medieval Europe.

The just war theory by Saint Thomas Aquinas has had a lasting impact on later generations of thinkers and was part of an emerging consensus in medieval Europe on just war. In the 13th century Aquinas reflected in detail on peace and war. Aquinas was a Dominican friar and contemplated the teachings of the Bible on peace and war in combination with ideas from Aristotle, Plato, Socrates, Saint Augustine and other philosophers whose writings are part of the Western canon. Aquinas' views on war drew heavily on the Decretum Gratiani, a book the Italian monk Gratian had compiled with passages from the Bible. After its publication in the 12th century, the Decretum Gratiani had been republished with commentary from Pope Innocent IV and the Dominican friar Raymond of Penafort. Other significant influences on Aquinas' just war theory were Alexander of Hales and Henry of Segusio.

In Summa Theologica Aquinas asserted that it is not always a sin to wage war, and he set out criteria for a just war. According to Aquinas, three requirements must be met. Firstly, the war must be waged upon the command of a rightful sovereign. Secondly, the war needs to be waged for just cause, on account of some wrong the attacked have committed. Thirdly, warriors must have the right intent, namely to promote good and to avoid evil. Aquinas came to the conclusion that a just war could be offensive and that injustice should not be tolerated so as to avoid war. Nevertheless, Aquinas argued that violence must only be used as a last resort. On the battlefield, violence was only justified to the extent it was necessary. Soldiers needed to avoid cruelty and a just war was limited by the conduct of just combatants. Aquinas argued that it was only in the pursuit of justice that the good intention of a moral act could justify negative consequences, including the killing of the innocent during a war.

Renaissance and Christian Humanists

Various Renaissance humanists promoted pacifist views.

  • John Colet famously preached a Lenten sermon before Henry VIII, who was preparing for war, quoting Cicero: "Better an unjust peace rather than the justest war."
  • Erasmus of Rotterdam wrote numerous works on peace which criticized Just War theory as a smokescreen and added extra limitations, notably The Complaint of Peace and the Treatise on War (Dulce bellum inexpertis).

A leading humanist writer after the Reformation was the legal theorist Hugo Grotius, whose De jure belli ac pacis reconsidered Just War and fighting wars justly.

First World War

At the beginning of the First World War, a group of theologians in Germany published a manifesto that sought to justify the actions of the German government. At the British government's request, Randall Davidson, Archbishop of Canterbury, took the lead in collaborating with a large number of other religious leaders, including some with whom he had differed in the past, to write a rebuttal of the Germans' contentions. Both German and British theologians based themselves on the just war theory, each group seeking to prove that it applied to the war waged by its own side.

Contemporary Catholic doctrine

The just war doctrine of the Catholic Church, found in the 1992 Catechism of the Catholic Church in paragraph 2309, lists four strict conditions for "legitimate defense by military force":

  • The damage inflicted by the aggressor on the nation or community of nations must be lasting, grave and certain.
  • All other means of putting an end to it must have been shown to be impractical or ineffective.
  • There must be serious prospects of success.
  • The use of arms must not produce evils and disorders graver than the evil to be eliminated.

The Compendium of the Social Doctrine of the Church elaborates on the just war doctrine in paragraphs 500 to 501, while citing the Charter of the United Nations:

If this responsibility justifies the possession of sufficient means to exercise this right to defense, States still have the obligation to do everything possible "to ensure that the conditions of peace exist, not only within their own territory but throughout the world". It is important to remember that "it is one thing to wage a war of self-defense; it is quite another to seek to impose domination on another nation. The possession of war potential does not justify the use of force for political or military objectives. Nor does the mere fact that war has unfortunately broken out mean that all is fair between the warring parties".

The Charter of the United Nations ... is based on a generalized prohibition of a recourse to force to resolve disputes between States, with the exception of two cases: legitimate defence and measures taken by the Security Council within the area of its responsibilities for maintaining peace. In every case, exercising the right to self-defence must respect "the traditional limits of necessity and proportionality".

Therefore, engaging in a preventive war without clear proof that an attack is imminent cannot fail to raise serious moral and juridical questions. International legitimacy for the use of armed force, on the basis of rigorous assessment and with well-founded motivations, can only be given by the decision of a competent body that identifies specific situations as threats to peace and authorizes an intrusion into the sphere of autonomy usually reserved to a State.

Pope John Paul II in an address to a group of soldiers noted the following:

Peace, as taught by Sacred Scripture and the experience of men itself, is more than just the absence of war. And the Christian is aware that on earth a human society that is completely and always peaceful is, unfortunately, a utopia and that the ideologies which present it as easily attainable only nourish vain hopes. The cause of peace will not go forward by denying the possibility and the obligation to defend it.

Russian Orthodox Church

The War and Peace section in the Basis of the Social Concept of the Russian Orthodox Church is crucial for understanding the Russian Orthodox Church's attitude towards war. The document offers criteria for distinguishing between an aggressive war, which is unacceptable, and a justified war, attributing the highest moral and sacred value of military acts of bravery to a true believer who participates in a justified war. Additionally, the document considers the just war criteria as developed in Western Christianity to be eligible for Russian Orthodoxy; therefore, the justified war theory in Western theology is also applicable to the Russian Orthodox Church.

In the same document, it is stated that wars have accompanied human history since the fall of man and, according to the gospel, will continue to accompany it. While recognizing war as evil, the Russian Orthodox Church does not prohibit its members from participating in hostilities if the security of their neighbours and the restoration of trampled justice are at stake. War is considered necessary but undesirable. It is also stated that the Russian Orthodox Church has had profound respect for soldiers who gave their lives to protect the life and security of their neighbours.

Just war tradition

The just war theory, propounded by the medieval Christian philosopher Thomas Aquinas, was developed further by legal scholars in the context of international law. Cardinal Cajetan, the jurist Francisco de Vitoria, the two Jesuit priests Luis de Molina and Francisco Suárez, as well as the humanist Hugo Grotius and the lawyer Luigi Taparelli were most influential in the formation of a just war tradition. The just war tradition, which was well established by the 19th century, found its practical application in the Hague Peace Conferences (1899 and 1907) and in the founding of the League of Nations in 1920. After the United States Congress declared war on Germany in 1917, Cardinal James Gibbons issued a letter that all Catholics were to support the war because "Our Lord Jesus Christ does not stand for peace at any price... If by Pacifism is meant the teaching that the use of force is never justifiable, then, however well meant, it is mistaken, and it is hurtful to the life of our country."

Armed conflicts such as the Spanish Civil War, World War II and the Cold War were, as a matter of course, judged according to the norms (as established in Aquinas' just war theory) by philosophers such as Jacques Maritain, Elizabeth Anscombe and John Finnis.

The first work dedicated specifically to just war was the 15th-century sermon De bellis justis of Stanisław of Skarbimierz (1360–1431), who justified war by the Kingdom of Poland against the Teutonic Knights. Francisco de Vitoria criticized the conquest of America by the Spanish conquistadors on the basis of just-war theory. With Alberico Gentili and Hugo Grotius, just war theory was replaced by international law theory, codified as a set of rules, which today still encompass the points commonly debated, with some modifications.

Just-war theorists combine a moral abhorrence towards war with a readiness to accept that war may sometimes be necessary. The criteria of the just-war tradition act as an aid in determining whether resorting to arms is morally permissible. Just-war theories aim "to distinguish between justifiable and unjustifiable uses of organized armed forces"; they attempt "to conceive of how the use of arms might be restrained, made more humane, and ultimately directed towards the aim of establishing lasting peace and justice".

The just war tradition addresses the morality of the use of force in two parts: when it is right to resort to armed force (the concern of jus ad bellum) and what is acceptable in using such force (the concern of jus in bello).

In 1869 the Russian military theorist Genrikh Antonovich Leer theorized on the advantages and potential benefits of war.

The Soviet leader Vladimir Lenin defined only three types of just war.

But picture to yourselves a slave-owner who owned 100 slaves warring against a slave-owner who owned 200 slaves for a more "just" distribution of slaves. Clearly, the application of the term "defensive" war, or war "for the defense of the fatherland" in such a case would be historically false, and in practice would be sheer deception of the common people, of philistines, of ignorant people, by the astute slaveowners. Precisely in this way are the present-day imperialist bourgeoisie deceiving the peoples by means of "national ideology" and the term "defense of the fatherland" in the present war between slave-owners for fortifying and strengthening slavery.

The anarcho-capitalist scholar Murray Rothbard (1926–1995) stated that "a just war exists when a people tries to ward off the threat of coercive domination by another people, or to overthrow an already-existing domination. A war is unjust, on the other hand, when a people try to impose domination on another people or try to retain an already-existing coercive rule over them."

Jonathan Riley-Smith writes:

The consensus among Christians on the use of violence has changed radically since the crusades were fought. The just war theory prevailing for most of the last two centuries—that violence is an evil that can, in certain situations, be condoned as the lesser of evils—is relatively young. Although it has inherited some elements (the criteria of legitimate authority, just cause, right intention) from the older war theory that first evolved around AD 400, it has rejected two premises that underpinned all medieval just wars, including crusades: first, that violence could be employed on behalf of Christ's intentions for mankind and could even be directly authorized by him; and second, that it was a morally neutral force that drew whatever ethical coloring it had from the intentions of the perpetrators.

Criteria

The just war theory has two sets of criteria, the first establishing jus ad bellum (the right to go to war), and the second establishing jus in bello (right conduct within war).

Jus ad bellum

In just war theory, jus ad bellum sets out the conditions that must be satisfied before a state acquires the right to go to war.

Competent authority
Only duly constituted public authorities may wage war. "A just war must be initiated by a political authority within a political system that allows distinctions of justice. Dictatorships (e.g. Hitler's regime) or deceptive military actions (e.g. the 1968 US bombing of Cambodia) are typically considered as violations of this criterion. The importance of this condition is key. Plainly, we cannot have a genuine process of judging a just war within a system that represses the process of genuine justice."
Probability of success
According to this principle, there must be good grounds for concluding that the aims of the just war are achievable. It emphasizes that mass violence must not be undertaken if it is unlikely to secure the just cause; the criterion exists to avoid invasion for invasion's sake and is linked to the proportionality criterion. One cannot invade if there is no chance of actually winning; however, wars are fought with imperfect knowledge, so it is enough to make a reasoned case that victory is possible, since there is no way to know this in advance. This criterion moves the conversation from moral and theoretical grounds to practical grounds, and in practice serves to support coalition building and to win the approval of other state actors.
Last resort
The principle of last resort stipulates that all non-violent options must first be exhausted before the use of force can be justified. Diplomatic options, sanctions, and other non-military methods must be attempted or validly ruled out before the engagement of hostilities. Further, in regard to the amount of harm—proportionally—the principle of last resort would support using small intervention forces first and then escalating rather than starting a war with massive force such as carpet bombing or nuclear warfare.
Just cause
The reason for going to war needs to be just and cannot, therefore, be solely for recapturing things taken or punishing people who have done wrong; innocent life must be in imminent danger and intervention must be to protect life. A contemporary view of just cause was expressed in 1993 when the US Catholic Conference said: "Force may be used only to correct a grave, public evil, i.e., aggression or massive violation of the basic human rights of whole populations."

Jus in bello

Once war has begun, just war theory (jus in bello) also directs how combatants are to act:

Distinction
Just war conduct is governed by the principle of distinction. The acts of war should be directed towards enemy combatants, and not towards non-combatants caught in circumstances that they did not create. The prohibited acts include bombing civilian residential areas that include no legitimate military targets, committing acts of terrorism or reprisal against civilians or prisoners of war (POWs), and attacking neutral targets. Moreover, combatants are not permitted to attack enemy combatants who have surrendered, or who have been captured, or who are injured and not presenting an immediate lethal threat, or who are parachuting from disabled aircraft and are not airborne forces, or who are shipwrecked.
Proportionality
Just war conduct is governed by the principle of proportionality. Combatants must make sure that the harm caused to civilians or civilian property is not excessive in relation to the concrete and direct military advantage anticipated by an attack on a legitimate military objective. This principle is meant to discern the correct balance between the restriction imposed by a corrective measure and the severity of the nature of the prohibited act.
Military necessity
Just war conduct is governed by the principle of military necessity. An attack or action must be intended to help in the defeat of the enemy; it must be an attack on a legitimate military objective, and the harm caused to civilians or civilian property must be proportional and not excessive in relation to the concrete and direct military advantage anticipated. Jus in bello allows for military necessity and does not favor a specific justification in allowing for counter-attack recourse.[90] This principle is meant to limit excessive and unnecessary death and destruction.
Fair treatment of prisoners of war
Enemy combatants who surrendered or who are captured no longer pose a threat. It is therefore wrong to torture them or otherwise mistreat them.
No means malum in se
Combatants may not use weapons or other methods of warfare that are considered evil, such as mass rape, forcing enemy combatants to fight against their own side or using weapons whose effects cannot be controlled (e.g., nuclear/biological weapons).

Ending a war: Jus post bellum

In recent years, some theorists, such as Gary Bass, Louis Iasiello and Brian Orend, have proposed a third category within the just war theory. Jus post bellum is described by some scholars as a new "discipline" or as "a new category of international law currently under construction". It concerns justice after a war, including peace treaties, reconstruction, environmental remediation, war crimes trials, and war reparations, and has been added to deal with the fact that some hostile actions may take place outside a traditional battlefield. Jus post bellum governs the justice of war termination and peace agreements, as well as the prosecution of war criminals and of publicly labelled terrorists. The idea has largely been used to help decide what to do with prisoners taken during battle: in a modern context, it is through government labeling and public opinion that jus post bellum is invoked to justify the pursuit of individuals labeled as terrorists for the safety of the state. On this view the actual fault lies with the aggressor, who by aggressing forfeits the right to honorable treatment; the theory has thus been used to justify treating prisoners outside the ordinary bounds of war.

Traditionalists and revisionists

There are two contrasting views of the just war theory with which scholars align: traditionalist and revisionist. The debate between these viewpoints rests on the moral responsibilities of actors in jus in bello.

Traditionalists

In the just war theory as it pertains to jus in bello, traditionalist scholars hold that the two principles, jus ad bellum and jus in bello, are distinct, so that different actors bear moral responsibility for each. The traditional view places accountability for starting the war on leaders, while soldiers are accountable only for actions that break jus in bello.

Revisionists

Revisionist scholars hold that moral responsibility for the conduct of war rests on the individual soldiers who participate in it, even if they follow the rules associated with jus in bello: soldiers who participate in unjust wars are morally responsible for doing so. The revisionist view is framed at the individual level rather than at the level of the collective.

Additionally, the distinctive methodologies associated with the use of nuclear weapons to wage mass war in the modern era have also led some "nonreductive" revisionists to question the relevance of the just war theory itself. Such criticisms are more limited in scope, however, than the generalized objections that have been raised by both realists and pacifists.
