
Sunday, August 3, 2025

Biological naturalism

From Wikipedia, the free encyclopedia

Biological naturalism is a theory about, among other things, the relationship between consciousness and body (i.e., brain), and hence an approach to the mind–body problem. It was first proposed by the philosopher John Searle in 1980 and is defined by two main theses: 1) all mental phenomena, ranging from pains, tickles, and itches to the most abstruse thoughts, are caused by lower-level neurobiological processes in the brain; and 2) mental phenomena are higher-level features of the brain.

This entails that the brain has the right causal powers to produce intentionality. However, Searle's biological naturalism does not entail that brains and only brains can cause consciousness. Searle is careful to point out that while it appears to be the case that certain brain functions are sufficient for producing conscious states, our current state of neurobiological knowledge prevents us from concluding that they are necessary for producing consciousness. In his own words:

"The fact that brain processes cause consciousness does not imply that only brains can be conscious. The brain is a biological machine, and we might build an artificial machine that was conscious; just as the heart is a machine, and we have built artificial hearts. Because we do not know exactly how the brain does it we are not yet in a position to know how to do it artificially." ("Biological Naturalism", 2004)

Overview

John Searle

Searle denies Cartesian dualism, the idea that the mind is a separate kind of substance from the body, as this contradicts our entire understanding of physics, and unlike Descartes, he does not bring God into the problem. Indeed, Searle denies any kind of dualism, the traditional alternative to monism, claiming the distinction is a mistake. He rejects the idea that because the mind is not objectively viewable, it does not fall under the rubric of physics.

Searle believes that consciousness "is a real part of the real world and it cannot be eliminated in favor of, or reduced to, something else" whether that something else is a neurological state of the brain or a computer program. He contends, for example, that the software known as Deep Blue knows nothing about chess. He also believes that consciousness is both a cause of events in the body and a response to events in the body.

On the other hand, Searle doesn't treat consciousness as a ghost in the machine. He treats it, rather, as a state of the brain. The causal interaction of mind and brain can be described thus in naturalistic terms: Events at the micro-level (perhaps at that of individual neurons) cause consciousness. Changes at the macro-level (the whole brain) constitute consciousness. Micro-changes cause and then are impacted by holistic changes, in much the same way that individual football players cause a team (as a whole) to win games, causing the individuals to gain confidence from the knowledge that they are part of a winning team.

He articulates this distinction by pointing out that the common philosophical term 'reducible' is ambiguous. Searle contends that consciousness is "causally reducible" to brain processes without being "ontologically reducible". He hopes that making this distinction will allow him to escape the traditional dilemma between reductive materialism and substance dualism; he affirms the essentially physical nature of the universe by asserting that consciousness is completely caused by and realized in the brain, but also doesn't deny what he takes to be the obvious facts that humans really are conscious, and that conscious states have an essentially first-person nature.

It can be tempting to see the theory as a kind of property dualism, since, in Searle's view, a person's mental properties are categorically different from his or her micro-physical properties. The latter have "third-person ontology" whereas the former have "first-person ontology." Micro-structure is accessible objectively by any number of people, as when several brain surgeons inspect a patient's cerebral hemispheres. But pain or desire or belief are accessible subjectively by the person who has the pain or desire or belief, and no one else has that mode of access. However, Searle holds mental properties to be a species of physical property—ones with first-person ontology. So this sets his view apart from a dualism of physical and non-physical properties. His mental properties are putatively physical. (see "Property dualism" under the "Criticism" section below.)

Criticism

There have been several criticisms of Searle's idea of biological naturalism.

Jerry Fodor suggests that Searle gives us no account at all of exactly why he believes that a biochemistry like, or similar to, that of the human brain is indispensable for intentionality. Fodor thinks that it seems much more plausible to suppose that it is the way in which an organism (or any other system for that matter) is connected to its environment that is indispensable in the explanation of intentionality. It is easier to see "how the fact that my thought is causally connected to a tree might bear on its being a thought about a tree. But it's hard to imagine how the fact that (to put it crudely) my thought is made out of hydrocarbons could matter, except on the unlikely hypothesis that only hydrocarbons can be causally connected to trees in the way that brains are."

John Haugeland takes on the central notion of some set of special "right causal powers" that Searle attributes to the biochemistry of the human brain. He asks us to imagine a concrete situation in which the "right" causal powers are those that our neurons have to reciprocally stimulate one another. In this case, silicon-based alien life forms can be intelligent just in case they have these "right" causal powers; i.e. they possess neurons with synaptic connections that have the power to reciprocally stimulate each other. Then we can take any speaker of the Chinese language and cover his neurons in some sort of wrapper which prevents them from being influenced by neurotransmitters and, hence, from having the right causal powers. At this point, "Searle's demon" (an English-speaking nanobot, perhaps) sees what is happening and intervenes: he sees through the covering and determines which neurons would have been stimulated and which not and proceeds to stimulate the appropriate neurons and shut down the others himself. The experimental subject's behavior is unaffected. He continues to speak perfect Chinese as before the operation but now the causal powers of his neurotransmitters have been replaced by someone who does not understand the Chinese language. The point is generalizable: for any causal powers, it will always be possible to hypothetically replace them with some sort of Searlian demon which will carry out the operations mechanically. His conclusion is that Searle's is necessarily a dualistic view of the nature of causal powers, "not intrinsically connected with the actual powers of physical objects."

Searle himself does not rule out the possibility for alternate arrangements of matter bringing forth consciousness other than biological brains.

Property dualism

Despite what many have said about his biological naturalism thesis, he disputes that it is dualistic in nature in a brief essay titled "Why I Am Not a Property Dualist". Firstly, he rejects the idea that the mental and physical are primary ontological categories, instead claiming that the act of categorisation is simply a way of speaking about our one world, so whether something is mental or physical is a matter of the vocabulary that one employs. He believes that a more useful distinction can be made between the biological and non-biological, in which case consciousness is a biological process. Secondly, he accepts that the mental is ontologically irreducible to the physical for the simple reason that the former has a first-person ontology and the latter a third-person ontology, but he rejects the property dualist notion of "over and above"; in other words, he believes that, causally speaking, consciousness is entirely reducible to and is nothing more than the neurobiology of the brain (again, because both are biological processes).

Thus, for Searle, the dilemma between epiphenomenalism and causal overdetermination that plagues the property dualist simply does not arise because, causally speaking, there is nothing there except the neurobiology of the brain, but because of the different ontologies of the mental and physical, the former is irreducible to the latter:

I say consciousness is a feature of the brain. The property dualist says consciousness is a feature of the brain. This creates the illusion that we are saying the same thing. But we are not, […]. The property dualist means that in addition to all the neurobiological features of the brain, there is an extra, distinct, non-physical feature of the brain, whereas I mean that consciousness is a state the brain can be in, in the way that liquidity and solidity are states that water can be in.

Buddhism and abortion

From Wikipedia, the free encyclopedia

There is no single Buddhist view concerning abortion, although it is generally regarded negatively.

Scriptural views and the monastic code

Inducing or otherwise causing an abortion is regarded as a serious matter in the monastic rules followed by both Theravada and Mahayana monks; monks can be expelled for assisting a woman in procuring an abortion. Traditional sources do not recognize a distinction between early- and late-term abortion, but in Sri Lanka and Thailand the "moral stigma" associated with an abortion grows with the development of the fetus. While traditional sources do not seem to be aware of the possibility of abortion as relevant to the health of the mother, modern Buddhist teachers from many traditions – and abortion laws in many Buddhist countries – recognize a threat to the life or physical health of the mother as an acceptable justification for abortion as a practical matter, though it may still be seen as a deed with negative moral or karmic consequences.

Regional views

Views on abortion vary a great deal between different regions, reflecting the influence of the various Buddhist traditions, as well as the influence of other religious and philosophical traditions and contact with Western thought.

Northern Buddhism

Abortion is generally regarded extremely negatively among ethnic Tibetan Buddhists. Prior to the emergence of the Tibetan diaspora in the 1950s, Tibetans do not seem to have been familiar with abortion for reasons of medical necessity, and, facing little population pressure, saw little reason to engage in what they saw as the destruction of innocent life. Though no systematic information is available, abortion appears to be very rare among exiled Tibetans living in areas where abortion is legal. Tibetan Buddhists believe that a person who has had an abortion should be treated compassionately, and guided to atone for the negative act through appropriate good deeds and religious practices; these acts are aimed at improving the karmic outcome for both the mother and the aborted fetus, but authorities warn that they will not be effective if one has undertaken an abortion while planning to 'negate' it by atoning for it later. The Dalai Lama has said that abortion is "negative," but there are exceptions. He said, "I think abortion should be approved or disapproved according to each circumstance."

Southern Buddhism

Laws and views on abortion vary greatly in Theravada Buddhist nations. Attitudes and laws in Thailand are generally more favourable towards abortion than in Sri Lanka. While abortion is still viewed as negative in Burma (Myanmar), it is allegedly also employed with some frequency to prevent out-of-wedlock births. Regarding attitudes towards abortion in Thailand, Peter Harvey notes:

...abortion is discussed not in the language of rights – to life or choice – but of 'benefit and harm, with the intent of relieving as much human suffering in all its states, stages and situations as circumstances allow', with an emphasis on reducing the circumstances leading women to feel that they need to have an abortion.

In November 2010, the issue of abortion and Buddhism in Thailand was thrust onto the front pages after 2000 fetuses were discovered stored at a temple in Bangkok. At this time, abortion was illegal in the country except in cases of rape or risk to the woman's health. Following the scandal, leading politicians and monks spoke out to reaffirm their opposition to abortion laws. Phramaha Vudhijaya Vajiramedhi was unequivocal: "In [the] Buddhist view, both having an abortion and performing an abortion amount to murder. Those involved in abortions will face distress in both this life and the next because their sins will follow them." Prime Minister Abhisit announced a crackdown on illegal abortion clinics and refused calls to change the law, saying that current laws were "good enough." However, in October 2022, Thailand's Public Health Ministry legalized abortions up to the 20th week of pregnancy – an extension of a previous law which allowed termination of pregnancy within the first 12 weeks. Pro-choice advocates in Thailand and around the world celebrated the new rules as a positive development but noted that more needed to be done to ensure doctors were trained and the public was made aware of their rights to an abortion. Experts note that Thailand's move to expand abortion access comes amid a wave of global expansion of abortion rights in recent years.

Peter Harvey relates attitudes towards abortion in Burma to Melford Spiro's observation that Buddhists in Myanmar recognize a clear distinction between what may be regarded as 'ultimate good' in a religious sense and what is a 'worldly good' or utilitarian act. Despite the prevalence of illegal abortions in Myanmar due to economic difficulty, many Buddhists consider it against their religious beliefs. A 1995 survey on women in Myanmar showed that 99% thought abortion was against their religious beliefs.

East Asia

Buddhists in Japan are said to be more tolerant of abortion than those who live elsewhere. In Japan, women sometimes participate in the Shinto-Buddhist ritual of mizuko kuyō (水子供養, lit. 'fetus memorial service') after an induced abortion or a miscarriage.

Similarly, in Taiwan, women sometimes pray to appease ghosts of aborted fetuses and assuage feelings of guilt due to having an abortion; this type of ritual is called yingling gongyang. The modern practice emerged in the mid-1970s and grew significantly in popularity in the 1980s, particularly following the full legalization of abortion in 1985. It draws both from traditional antecedents dating back to the Han dynasty, and the Japanese practice. These modern practices emerged in the context of demographic change associated with modernization – rising population, urbanization, and decreasing family size – together with changing attitudes towards sexuality, which occurred first in Japan, and then in Taiwan, hence the similar response and Taiwan's taking inspiration from Japan.

Saturday, August 2, 2025

Analysis of algorithms

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Analysis_of_algorithms
[Figure: For looking up a given entry in a given ordered list, both the binary and the linear search algorithm (which ignores ordering) can be used. The analysis of the former and the latter algorithm shows that it takes at most log2(n) and n check steps, respectively, for a list of size n. In the depicted example list of size 33, searching for "Morin, Arthur" takes 5 steps with binary search (shown in cyan) and 28 steps with linear search (magenta).]
[Figure: Graphs of functions commonly used in the analysis of algorithms, showing the number of operations N versus input size n for each function.]

In computer science, the analysis of algorithms is the process of finding the computational complexity of algorithms—the amount of time, storage, or other resources needed to execute them. Usually, this involves determining a function that relates the size of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). An algorithm is said to be efficient when this function's values are small, or grow slowly compared to a growth in the size of the input. Different inputs of the same size may cause the algorithm to have different behavior, so best, worst and average case descriptions might all be of practical interest. When not otherwise specified, the function describing the performance of an algorithm is usually an upper bound, determined from the worst case inputs to the algorithm.

The term "analysis of algorithms" was coined by Donald Knuth. Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem. These estimates provide an insight into reasonable directions of search for efficient algorithms.

In theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input. Big O notation, Big-omega notation and Big-theta notation are used to this end. For instance, binary search is said to run in a number of steps proportional to the logarithm of the size n of the sorted list being searched, or in O(log n), colloquially "in logarithmic time". Usually asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency. However the efficiencies of any two "reasonable" implementations of a given algorithm are related by a constant multiplicative factor called a hidden constant.

Exact (not asymptotic) measures of efficiency can sometimes be computed but they usually require certain assumptions concerning the particular implementation of the algorithm, called a model of computation. A model of computation may be defined in terms of an abstract computer, e.g. Turing machine, and/or by postulating that certain operations are executed in unit time. For example, if the sorted list to which we apply binary search has n elements, and we can guarantee that each lookup of an element in the list can be done in unit time, then at most log2(n) + 1 time units are needed to return an answer.
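To make the unit-cost bound above concrete, here is a minimal Python sketch (an illustration of the idea, not code from the article) that performs binary search over a sorted list and counts its lookups; for a list of n elements the count never exceeds log2(n) + 1.

    import math

    def binary_search(sorted_list, target):
        """Return (index or None, number of lookups performed)."""
        low, high = 0, len(sorted_list) - 1
        lookups = 0
        while low <= high:
            mid = (low + high) // 2
            lookups += 1                      # one unit-time lookup/comparison
            if sorted_list[mid] == target:
                return mid, lookups
            elif sorted_list[mid] < target:
                low = mid + 1
            else:
                high = mid - 1
        return None, lookups

    data = list(range(1, 34))                 # a sorted list of size n = 33
    _, steps = binary_search(data, 33)
    print(steps, "<=", math.floor(math.log2(len(data))) + 1)   # 6 <= 6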

Cost models

Time efficiency estimates depend on what we define to be a step. For the analysis to correspond usefully to the actual run-time, the time required to perform a step must be guaranteed to be bounded above by a constant. One must be careful here; for instance, some analyses count an addition of two numbers as one step. This assumption may not be warranted in certain contexts. For example, if the numbers involved in a computation may be arbitrarily large, the time required by a single addition can no longer be assumed to be constant.

Two cost models are generally used:

  • the uniform cost model, also called unit-cost model (and similar variations), assigns a constant cost to every machine operation, regardless of the size of the numbers involved
  • the logarithmic cost model, also called logarithmic-cost measurement (and similar variations), assigns a cost to every machine operation proportional to the number of bits involved

The latter is more cumbersome to use, so it is only employed when necessary, for example in the analysis of arbitrary-precision arithmetic algorithms, like those used in cryptography.
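As a rough illustration of the difference between the two models (a sketch whose cost functions are chosen for exposition rather than drawn from the article), adding two Python integers costs one step under the uniform model regardless of their size, while under the logarithmic model the charge grows with the number of bits in the operands:

    def add_cost_uniform(a, b):
        # uniform (unit-cost) model: every machine operation costs 1
        return 1

    def add_cost_logarithmic(a, b):
        # logarithmic model: cost proportional to the number of bits involved
        return max(a.bit_length(), b.bit_length(), 1)

    small = (7, 9)
    huge = (2**4096, 3**2048)
    print(add_cost_uniform(*small), add_cost_uniform(*huge))          # 1 1
    print(add_cost_logarithmic(*small), add_cost_logarithmic(*huge))  # 4 4097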

A key point which is often overlooked is that published lower bounds for problems are often given for a model of computation that is more restricted than the set of operations available in practice; therefore, there are algorithms that are faster than what would naively be thought possible.

Run-time analysis

Run-time analysis is a theoretical classification that estimates and anticipates the increase in running time (or run-time or execution time) of an algorithm as its input size (usually denoted as n) increases. Run-time efficiency is a topic of great interest in computer science: A program can take seconds, hours, or even years to finish executing, depending on which algorithm it implements. While software profiling techniques can be used to measure an algorithm's run-time in practice, they cannot provide timing data for all infinitely many possible inputs; the latter can only be achieved by the theoretical methods of run-time analysis.

Shortcomings of empirical metrics

Since algorithms are platform-independent (i.e. a given algorithm can be implemented in an arbitrary programming language on an arbitrary computer running an arbitrary operating system), there are additional significant drawbacks to using an empirical approach to gauge the comparative performance of a given set of algorithms.

Take as an example a program that looks up a specific entry in a sorted list of size n. Suppose this program were implemented on Computer A, a state-of-the-art machine, using a linear search algorithm, and on Computer B, a much slower machine, using a binary search algorithm. Benchmark testing on the two computers running their respective programs might look something like the following:

n (list size)    Computer A run-time (ns)    Computer B run-time (ns)
16               8                           100,000
63               32                          150,000
250              125                         200,000
1,000            500                         250,000

Based on these metrics, it would be easy to jump to the conclusion that Computer A is running an algorithm that is far superior in efficiency to that of Computer B. However, if the size of the input-list is increased to a sufficient number, that conclusion is dramatically demonstrated to be in error:

n (list size)    Computer A run-time (ns)    Computer B run-time (ns)
16               8                           100,000
63               32                          150,000
250              125                         200,000
1,000            500                         250,000
...              ...                         ...
1,000,000        500,000                     500,000
4,000,000        2,000,000                   550,000
16,000,000       8,000,000                   600,000
...              ...                         ...
63,072 × 10^12   31,536 × 10^12 (1 year)     1,375,000 (1.375 milliseconds)

Computer A, running the linear search program, exhibits a linear growth rate. The program's run-time is directly proportional to its input size. Doubling the input size doubles the run-time, quadrupling the input size quadruples the run-time, and so forth. On the other hand, Computer B, running the binary search program, exhibits a logarithmic growth rate. Quadrupling the input size only increases the run-time by a constant amount (in this example, 50,000 ns). Even though Computer A is ostensibly a faster machine, Computer B will inevitably surpass Computer A in run-time because it is running an algorithm with a much slower growth rate.

Orders of growth

Informally, an algorithm can be said to exhibit a growth rate on the order of a mathematical function if beyond a certain input size n, the function f(n) times a positive constant provides an upper bound or limit for the run-time of that algorithm. In other words, for a given input size n greater than some n0 and a constant c, the run-time of that algorithm will never be larger than c × f(n). This concept is frequently expressed using Big O notation. For example, since the run-time of insertion sort grows quadratically as its input size increases, insertion sort can be said to be of order O(n^2).

Big O notation is a convenient way to express the worst-case scenario for a given algorithm, although it can also be used to express the average-case; for example, the worst-case scenario for quicksort is O(n^2), but the average-case run-time is O(n log n).
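As an informal check of the quadratic claim for insertion sort (an illustrative Python sketch, not part of the article), one can count element comparisons on worst-case, reverse-sorted inputs and watch the count track n^2:

    def insertion_sort_comparisons(values):
        """Sort a copy of values and return the number of element comparisons."""
        a = list(values)
        comparisons = 0
        for i in range(1, len(a)):
            key, j = a[i], i - 1
            while j >= 0:
                comparisons += 1
                if a[j] > key:
                    a[j + 1] = a[j]
                    j -= 1
                else:
                    break
            a[j + 1] = key
        return comparisons

    for n in (100, 200, 400):
        c = insertion_sort_comparisons(range(n, 0, -1))   # worst case: reverse order
        print(n, c, c / n**2)                             # ratio settles near 1/2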

Empirical orders of growth

Assuming the run-time follows a power rule, t ≈ kn^a, the coefficient a can be found by taking empirical measurements of run-time {t1, t2} at some problem-size points {n1, n2}, and calculating t2/t1 = (n2/n1)^a, so that a = log(t2/t1)/log(n2/n1). In other words, this measures the slope of the empirical line on the log–log plot of run-time vs. input size, at some size point. If the order of growth indeed follows the power rule (and so the line on the log–log plot is indeed a straight line), the empirical value of a will stay constant at different ranges, and if not, it will change (and the line is a curved line), but it could still serve for comparison of any two given algorithms as to their empirical local orders of growth behaviour. Applied to the above table:

n (list size)    Computer A run-time (ns)    Local order of growth (n^a)    Computer B run-time (ns)    Local order of growth (n^a)
15               7                           -                              100,000                     -
65               32                          1.04                           150,000                     0.28
250              125                         1.01                           200,000                     0.21
1,000            500                         1.00                           250,000                     0.16
...              ...                         ...                            ...                         ...
1,000,000        500,000                     1.00                           500,000                     0.10
4,000,000        2,000,000                   1.00                           550,000                     0.07
16,000,000       8,000,000                   1.00                           600,000                     0.06
...              ...                         ...                            ...                         ...

It is clearly seen that the first algorithm exhibits a linear order of growth indeed following the power rule. The empirical values for the second one are diminishing rapidly, suggesting it follows another rule of growth and in any case has much lower local orders of growth (and improving further still), empirically, than the first one.
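The local slopes in the table can be reproduced with a few lines of Python (a small sketch of the log–log slope computation described above, using the table's own measurements):

    from math import log

    def local_order(n1, t1, n2, t2):
        # slope of the run-time curve on a log-log plot between two measurements
        return log(t2 / t1) / log(n2 / n1)

    # Computer A (linear search) and Computer B (binary search), from the table
    print(round(local_order(15, 7, 65, 32), 2))              # ~1.04
    print(round(local_order(15, 100_000, 65, 150_000), 2))   # ~0.28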

Evaluating run-time complexity

The run-time complexity for the worst-case scenario of a given algorithm can sometimes be evaluated by examining the structure of the algorithm and making some simplifying assumptions. Consider the following pseudocode:

1    get a positive integer n from input
2    if n > 10
3        print "This might take a while..."
4    for i = 1 to n
5        for j = 1 to i
6            print i * j
7    print "Done!"

A given computer will take a discrete amount of time to execute each of the instructions involved with carrying out this algorithm. Say that the actions carried out in step 1 are considered to consume time at most T1, step 2 uses time at most T2, and so forth.

In the algorithm above, steps 1, 2 and 7 will only be run once. For a worst-case evaluation, it should be assumed that step 3 will be run as well. Thus the total amount of time to run steps 1–3 and step 7 is:

T1 + T2 + T3 + T7

The loops in steps 4, 5 and 6 are trickier to evaluate. The outer loop test in step 4 will execute (n + 1) times, which will consume T4(n + 1) time. The inner loop, on the other hand, is governed by the value of j, which iterates from 1 to i. On the first pass through the outer loop, j iterates from 1 to 1: The inner loop makes one pass, so running the inner loop body (step 6) consumes T6 time, and the inner loop test (step 5) consumes 2T5 time. During the next pass through the outer loop, j iterates from 1 to 2: the inner loop makes two passes, so running the inner loop body (step 6) consumes 2T6 time, and the inner loop test (step 5) consumes 3T5 time.

Altogether, the total time required to run the inner loop body can be expressed as an arithmetic progression:

T6 + 2T6 + 3T6 + ... + nT6

which can be factored as

[1 + 2 + 3 + ... + n] T6 = [n(n + 1)/2] T6

The total time required to run the inner loop test can be evaluated similarly:

2T5 + 3T5 + 4T5 + ... + (n + 1)T5

which can be factored as

[2 + 3 + 4 + ... + (n + 1)] T5 = [n(n + 1)/2 + n] T5

Therefore, the total run-time for this algorithm is:

f(n) = T1 + T2 + T3 + T7 + (n + 1)T4 + [n(n + 1)/2] T6 + [n(n + 1)/2 + n] T5

which reduces to

f(n) = [(n^2 + n)/2] T6 + [(n^2 + 3n)/2] T5 + (n + 1)T4 + T1 + T2 + T3 + T7

As a rule-of-thumb, one can assume that the highest-order term in any given function dominates its rate of growth and thus defines its run-time order. In this example, n^2 is the highest-order term, so one can conclude that f(n) = O(n^2). Formally this can be proven as follows:

Prove that f(n) = [(n^2 + n)/2] T6 + [(n^2 + 3n)/2] T5 + (n + 1)T4 + T1 + T2 + T3 + T7 ≤ cn^2 for some constant c and all n ≥ n0.

Let k be a constant greater than or equal to [T1..T7]. Then

f(n) ≤ k(n^2 + n)/2 + k(n^2 + 3n)/2 + k(n + 1) + 4k
     = kn^2 + 3kn + 5k
     ≤ kn^2 + 3kn^2 + 5kn^2    (for n ≥ 1)
     = 9kn^2

Therefore f(n) ≤ 9kn^2 for all n ≥ 1, so f(n) = O(n^2), taking c = 9k and n0 = 1.

A more elegant approach to analyzing this algorithm would be to declare that [T1..T7] are all equal to one unit of time, in a system of units chosen so that one unit is greater than or equal to the actual times for these steps. This would mean that the algorithm's run-time breaks down as follows:

4 + (n + 1) + [n(n + 1)/2 + n] + [n(n + 1)/2] = n^2 + 3n + 5, which is O(n^2).
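As a sanity check of this analysis (an illustrative Python sketch, not from the article), the pseudocode can be translated into a step counter that charges one unit per step and compared against n^2 + 3n + 5:

    def count_steps(n):
        """Count unit-time steps of the pseudocode, worst case (assumes n > 10)."""
        steps = 2                      # step 1 (read n) + step 2 (test n > 10)
        steps += 1                     # step 3 (the "This might take a while..." print)
        steps += n + 1                 # step 4: outer loop test runs n + 1 times
        for i in range(1, n + 1):
            steps += i + 1             # step 5: inner loop test runs i + 1 times
            steps += i                 # step 6: inner loop body runs i times
        steps += 1                     # step 7 (print "Done!")
        return steps

    for n in (11, 100, 1000):
        assert count_steps(n) == n**2 + 3*n + 5
    print("matches n^2 + 3n + 5")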

Growth rate analysis of other resources

The methodology of run-time analysis can also be utilized for predicting other growth rates, such as consumption of memory space. As an example, consider the following pseudocode which manages and reallocates memory usage by a program based on the size of a file which that program manages:

while file is still open:
    let n = size of file
    for every 100,000 kilobytes of increase in file size
        double the amount of memory reserved

In this instance, as the file size n increases, memory will be consumed at an exponential growth rate, which is order O(2^n). This is an extremely rapid and most likely unmanageable growth rate for consumption of memory resources.
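The exponential claim can be illustrated with a small simulation (a sketch that fills in details the pseudocode leaves open: the file is assumed to grow in whole 100,000-kilobyte increments, and the reservation starts at one unit of memory):

    def reserved_memory(file_size_kb, step_kb=100_000, initial_units=1):
        # double the reservation once per full step_kb of growth in file size
        doublings = file_size_kb // step_kb
        return initial_units * 2 ** doublings

    for size in (100_000, 500_000, 1_000_000, 2_000_000):
        print(size, reserved_memory(size))   # memory grows roughly as 2^(n / 100,000)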

Relevance

Algorithm analysis is important in practice because the accidental or unintentional use of an inefficient algorithm can significantly impact system performance. In time-sensitive applications, an algorithm taking too long to run can render its results outdated or useless. An inefficient algorithm can also end up requiring an uneconomical amount of computing power or storage in order to run, again rendering it practically useless.

Constant factors

Analysis of algorithms typically focuses on the asymptotic performance, particularly at the elementary level, but in practical applications constant factors are important, and real-world data is in practice always limited in size. The limit is typically the size of addressable memory, so on 32-bit machines 2^32 = 4 GiB (greater if segmented memory is used) and on 64-bit machines 2^64 = 16 EiB. Thus given a limited size, an order of growth (time or space) can be replaced by a constant factor, and in this sense all practical algorithms are O(1) for a large enough constant, or for small enough data.

This interpretation is primarily useful for functions that grow extremely slowly: (binary) iterated logarithm (log*) is less than 5 for all practical data (2^65536 bits); (binary) log-log (log log n) is less than 6 for virtually all practical data (2^64 bits); and binary log (log n) is less than 64 for virtually all practical data (2^64 bits). An algorithm with non-constant complexity may nonetheless be more efficient than an algorithm with constant complexity on practical data if the overhead of the constant-time algorithm results in a larger constant factor; for example, one may have K > k log log n so long as K/k > 6 and n < 2^(2^6) = 2^64.

For large data linear or quadratic factors cannot be ignored, but for small data an asymptotically inefficient algorithm may be more efficient. This is particularly used in hybrid algorithms, like Timsort, which use an asymptotically efficient algorithm (here merge sort, with time complexity O(n log n)), but switch to an asymptotically inefficient algorithm (here insertion sort, with time complexity O(n^2)) for small data, as the simpler algorithm is faster on small data.
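A minimal hybrid-sort sketch in the spirit of this idea (my own illustration; real Timsort is considerably more elaborate) switches to insertion sort below a small cutoff and otherwise recurses with merge sort:

    CUTOFF = 32   # threshold below which the simpler algorithm tends to win

    def insertion_sort(a):
        for i in range(1, len(a)):
            key, j = a[i], i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a

    def hybrid_sort(a):
        if len(a) <= CUTOFF:                 # small data: O(n^2) insertion sort is faster
            return insertion_sort(list(a))
        mid = len(a) // 2
        left, right = hybrid_sort(a[:mid]), hybrid_sort(a[mid:])
        merged = []                          # merge step of O(n log n) merge sort
        i = j = 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]

    print(hybrid_sort([5, 3, 8, 1, 9, 2]))   # [1, 2, 3, 5, 8, 9]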

Gross National Well-being

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Gross_National_Well-being

Gross National Well-being (GNW), also known as Gross National Wellness, is a socioeconomic development and measurement framework. The GNW Index consists of seven dimensions: economic, environmental, physical, mental, work, social, and political. Most wellness areas include both subjective results (via survey) and objective data.

The GNW Index is also known as the first Gross National Happiness Index, not to be confused with Bhutan's GNH Index. The two econometric frameworks differ in authorship, creation dates, and geographic scope. The GNW / GNH Index is a global development measurement framework published in 2005 by the International Institute of Management in the United States.

History

The term "Gross National Happiness" was first coined by the Bhuntanese King Jigme Singye Wangchuck in 1972. However, no GNH Index existed until 2005.

The GNH philosophy suggested that the ideal purpose of governments is to promote happiness. The philosophy remained difficult to implement due to the subjective nature of happiness, the lack of exact quantitative definition of GNH, and the lack of a practical model to measure the impact of economic policies on the subjective well-being of the citizens.

The GNW Index paper proposed the first GNH Index as a solution to help implement the GNH philosophy. It was designed to transform the first-generation abstract, subjective political mission statement into a second-generation holistic (objective and subjective) implementation concept: by treating happiness as a socioeconomic development metric, the new measure would provide an alternative to the traditional GDP indicator and would integrate subjective and objective indicators into a single socioeconomic development policy framework and measurement system.

In 2006, a policy white paper providing recommendations for implementing the GNW Index metric was published by the International Institute of Management. The paper is widely referenced by academics and policy makers citing the GNW / GNH Index as a potential model for local socioeconomic development and measurement.

Disambiguation

The GNW Index is a secular econometric model that tracks 7 subjective and objective development areas with no religious measurement components. On the other hand, Bhutan's GNH Index is a local development framework and measurement index, published by the Centre for Bhutan Studies in 2012 and based on the 2011 index function designed by Alkire and Foster at Oxford University. Bhutan's GNH Index is customized to the country's Buddhist cultural and spiritual values; it tracks 9 subjective happiness areas, including spiritual measurements such as prayer recitation and other karma indicators. The concepts and issues at the heart of the Bhutanese approach are similar to those of the secular GNH Index.

Survey components

The subjective survey part of the GNW measurement system is structured into seven areas or dimensions. Each area or dimension satisfaction rating is scaled from 0–10: 0 being very dissatisfied, 5 being neutral, and 10 being very satisfied. A minimal scoring sketch follows the list below.

  1. Mental & Emotional Wellbeing Overall Satisfaction (0–10):
    Frequency and levels of positive vs. negative thoughts and feelings over the past year
  2. Physical & Health Wellbeing Overall Satisfaction (0–10):
    Physical safety and health, including risk to life, body and property and the cost and quality of healthcare, if one gets sick
  3. Work & Income Wellbeing Overall Satisfaction (0–10):
    Job and income to support essential living expenses, including shelter, food, transportation, and education. If a head of household, the expenses to support the household/family are included
  4. Social Relations Wellbeing Overall Satisfaction (0–10):
    Relations with the significant other, family, friends, colleagues, neighbors, and community
  5. Economic & Retirement Wellbeing Overall Satisfaction (0–10):
    Disposable (extra) income, which is the remaining money after paying for essential living expenses. This money can be used for leisure activities, retirement savings, investments, or charity.
  6. Political & Government Wellbeing Overall Satisfaction (0–10):
    Political rights, privacy and personal freedom as well as the performance of the government (including the effectiveness and efficiency of socioeconomic development policies)
  7. Living Environment Wellbeing Overall Satisfaction (0–10):
    City/urban planning, utilities, infrastructure, traffic, architecture, landscaping and nature, and pollution (including noise, air, water, and soil)
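As a purely illustrative sketch (the article does not specify how the seven dimension ratings are combined, so the unweighted average below is an assumption, and the dimension names are shorthand for the list above), the survey responses can be represented and summarized like this:

    # Hypothetical respondent data; names are shorthand for the seven dimensions above.
    DIMENSIONS = [
        "mental_emotional", "physical_health", "work_income", "social_relations",
        "economic_retirement", "political_government", "living_environment",
    ]

    def summarize(ratings):
        """Validate seven 0-10 ratings and return a simple unweighted mean.
        Note: the official GNW Index aggregation/weighting is not given in this article."""
        assert set(ratings) == set(DIMENSIONS)
        assert all(0 <= value <= 10 for value in ratings.values())
        return sum(ratings.values()) / len(ratings)

    respondent = dict(zip(DIMENSIONS, [7, 8, 6, 9, 5, 4, 7]))
    print(round(summarize(respondent), 2))   # 6.57 on the 0-10 scale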

The survey also asks four qualitative questions to identify key causes of happiness and unhappiness:

  1. What are the top positive things in your life that make you happy?
  2. What are the top challenges and causes of stress in your life?
  3. What would you advise your government to do to increase your well-being and happiness?
  4. What are the most influential city, state, federal or international projects? How are they impacting your well-being and happiness (positively or negatively)?

Crystallography

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Crystallo...