
Sunday, May 1, 2022

Decolonization of knowledge

From Wikipedia, the free encyclopedia
 
Removal of the statue of Cecil Rhodes from the campus of the University of Cape Town on 9 April 2015. The Rhodes Must Fall movement is said to have been motivated by a desire to decolonize knowledge and education in South Africa.

Decolonization of knowledge (also epistemic or epistemological decolonization) is a concept advanced in decolonial scholarship that critiques the perceived universality of what decolonial scholars refer to as the hegemonic Western knowledge system. It seeks to construct and legitimize other knowledge systems by exploring alternative epistemologies, ontologies and methodologies. It is also an intellectual project that aims to "disinfect" academic activities that are believed to have little connection with the objective pursuit of knowledge and truth. The presumption is that if curricula, theories, and knowledge are colonized, they have been partly shaped by political, economic, social and cultural considerations. The decolonial knowledge perspective covers a wide variety of subjects, including epistemology, the natural sciences, the history of science, and other fundamental categories in social science.

Background

In his 1585 Descripción de Tlaxcala, Diego Muñoz Camargo illustrates the destruction of Mexican codices by Franciscan friars.

Decolonization of knowledge inquires into the historical mechanisms of knowledge production and its colonial and ethnocentric foundations. It has been argued that knowledge, and the standards that determine its validity, have been disproportionately informed by the Western system of thought and its ways of thinking about the universe. According to decolonial theory, the Western knowledge system that developed in Europe during the Renaissance and the Enlightenment was deployed to legitimise Europe's colonial endeavours, eventually becoming a part of colonial rule and the forms of civilization that the colonizers carried with them. The knowledge produced in the Western system has been attributed a universal character and claimed to be superior to other systems of knowledge. Decolonial scholars concur that the Western system of knowledge continues to determine what counts as scientific knowledge and continues to "exclude, marginalise and dehumanise" those with different systems of knowledge, expertise and worldviews. Anibal Quijano stated:

In effect, all of the experiences, histories, resources, and cultural products ended up in one global cultural order revolving around European or Western hegemony. Europe’s hegemony over the new model of global power concentrated all forms of the control of subjectivity, culture, and especially knowledge and the production of knowledge under its hegemony. During that process, the colonizers... repressed as much as possible the colonized forms of knowledge production, the models of the production of meaning, their symbolic universe, the model of expression and of objectification and subjectivity.

In her book Decolonizing Methodologies: Research and Indigenous Peoples, Linda Tuhiwai Smith writes:

Imperialism and colonialism brought complete disorder to colonized peoples, disconnecting them from their histories, their landscapes, their languages, their social relations and their own ways of thinking, feeling and interacting with the world.

According to the decolonial perspective, although colonialism has ended in the legal and political sense, its legacy continues in many "colonial situations" where individuals and groups in historically colonized places are marginalized and exploited. Decolonial scholars refer to this continuing legacy of colonialism as "coloniality" which describes colonialism's perceived legacy of oppression and exploitation in many interrelated domains, including the domain of subjectivity and knowledge.

Origin and development

In community groups and social movements in the Americas, decolonization of knowledge traces its roots back to resistance against colonialism from its very beginning in 1492. Its emergence as an academic concern, however, is a more recent phenomenon. According to Enrique Dussel, the theme of epistemological decolonization originated with a group of Latin American thinkers. Although the notion of decolonization of knowledge has been an academic topic since the 1970s, Walter Mignolo says, it was the ingenious work of the Peruvian sociologist Anibal Quijano that "explicitly linked coloniality of power in the political and economic spheres with the coloniality of knowledge." It developed as "an elaboration of a problematic" that began with a number of critical positions such as postcolonialism, subaltern studies and postmodernism. Enrique Dussel says epistemological decolonization is structured around the notions of coloniality of power and transmodernity, which trace their roots to the thought of José Carlos Mariátegui, Frantz Fanon and Immanuel Wallerstein. According to Sabelo J. Ndlovu-Gatsheni, although the political, economic, cultural and epistemological dimensions of decolonization were and are intricately connected to each other, attainment of political sovereignty was preferred as a "practical strategic logic of struggles against colonialism." As a result, political decolonization in the 20th century failed to attain epistemological decolonization, as it did not inquire deeply into the complex domain of knowledge.

Theoretical perspective

Decolonisation is sometimes viewed as a rejection of the notion of objectivity, which is seen as a legacy of colonial thought. It is sometimes argued that universal conceptions of ideas such as "truth" and "fact" are Western constructs imposed on other cultures. This tradition considers notions of truth and fact to be "local", arguing that what is "discovered" or "expressed" in one place or time may not be applicable in another. The concern of decolonisation of knowledge is that the Western knowledge system has become the norm for global knowledge and that its methodologies are considered the only form of true knowledge. This perceived hegemonic approach towards other knowledge systems has reduced epistemic diversity and established a center of knowledge that eventually suppressed all other forms of knowledge. Boaventura de Sousa Santos says that "throughout the world, not only are there very diverse forms of knowledge of matter, society, life and spirit, but also many and very diverse concepts of what counts as knowledge and criteria that may be used to validate it." This diversity of knowledge systems, however, has not gained much recognition.

According to Lewis Gordon, the formulation of knowledge in its singular form was itself unknown before the emergence of European modernity. Modes of knowledge production and notions of knowledge were so diversified that "knowledges", in his opinion, would be a more appropriate description. According to Walter Mignolo, the modern foundation of knowledge is thus territorial and imperial. This foundation is based on "the socio-historical organization and classification of the world founded on a macro narrative and on a specific concept and principles of knowledge" which finds its roots in European modernity.
He articulates epistemic decolonization as an expansive movement that identifies "geo-political locations of theology, secular philosophy and scientific reason" and simultaneously affirms "the modes and principles of knowledge that have been denied by the rhetoric of Christianization, civilisation, progress, development and market democracy." According to Achille Mbembe, decolonization of knowledge means contesting the hegemonic western epistemology that suppresses anything that is foreseen, conceived and formulated from outside of western epistemology. It has two aspects: a critique of Western knowledge paradigms and the development of new epistemic models. Savo Heleta states that decolonization of knowledge "implies the end of reliance on imposed knowledge, theories and interpretations, and theorizing based on one’s own past and present experiences and interpretation of the world."

Significance

According to Anibal Quijano, epistemological decolonization is necessary for creating new avenues for intercultural communication, interchange of experiences and meanings as the foundation of another rationality which may justifiably claim some universality. Sabelo Gatsheni says epistemological decolonization is crucial in handling the "asymmetrical global intellectual division of labor" in which Europe and North America not only act as teachers of the rest of the world, but also have become the "sites of theory and concept production", which are ultimately "consumed" by the entire human race.

Approaches

Decolonization of knowledge is neither about de-westernization nor about refusing Western science or Western knowledge systems. According to Lewis Gordon, decolonization of knowledge mandates a detachment from the "commitments to notions of an epistemic enemy." It rather emphasizes "the appropriation of any and all sources of knowledge" in order to achieve relative epistemic autonomy and epistemic justice for "previously unacknowledged and/or suppressed knowledge traditions."

Raewyn Connell states:

The colonized and postcolonial world [...] has actually been a major participant in the making of the dominant forms of knowledge in the modern era, which we too easily call ‘Western science’. The problem is not the absence of the majority world, but its epistemological subordination within the mainstream economy of knowledge. This economy has been profoundly shaped by what the Peruvian sociologist Aníbal Quijano (2000) has called the ‘coloniality of power’. In consequence, a wealth of knowledge produced in colonized and postcolonial societies has never been incorporated into the mainstream economy, or is included only in marginal ways.

According to Raewyn Connell, decolonizing knowledge is therefore about recognizing those unincorporated or marginalised forms of knowledge. Firstly, this includes indigenous knowledge, which was dismissed by colonialist ideology. Secondly, it endorses alternative universalisms, i.e., knowledge systems with general and not just local application that have not derived from the Eurocentric knowledge economy. Connell says the best known of these systems is Islamic knowledge. It is not, however, the only alternative universalism: she also suggests the Indian knowledge tradition as an alternative to the current economy of knowledge. Thirdly, it concerns Southern theory, i.e., knowledge frameworks developed during the colonial encounter, which emphasize that the colonized and postcolonial world has been rich in theoretical thinking and that these societies have continually produced concepts, analyses and creative ideas.

According to Achille Mbembe:

The Western archive is singularly complex. It contains within itself the resources of its own refutation. It is neither monolithic nor the exclusive property of the West. Africa and its diaspora decisively contributed to its making and should legitimately make foundational claims on it. Decolonizing knowledge is therefore not simply about de-westernization.

Walter Mignolo theorises his approach to decolonizing knowledge in terms of delinking, which he believes will ultimately lead to a decolonial epistemic shift and eventually foreground "other epistemologies, other principles of knowledge and understanding."

Decolonizing Academia

One of the most crucial aspects of decolonization of knowledge is to rethink the role of academia, which, according to Louis Yako, has become the "biggest enemy of knowledge and the decolonial option." He says Western universities have always served colonial and imperial powers, and that the situation has only worsened in the neoliberal age. According to Yako, the first step toward decolonizing academic knowledge production is to carefully examine "how knowledge is produced, by whom, whose works get canonized and taught in foundational theories and courses, and what types of bibliographies and references are mentioned in every book and published article." Yako says a research work in almost any given field is expected to include the names of certain "elite" European or American scholars in its bibliography or references. These scholars are commonly considered "foundational" in their respective fields. Citations are evaluated based on the "credibility" of the sources, which must be accepted by the "ruling elite, including the elite that rules the universities." For example, citing sources from a university press or an academic journal is deemed more "credible and rigorous" than citing independent authors and experts. There is a hierarchy even when citing publicly available works: citing the New York Times, the Guardian, or the Washington Post is considered "better" than citing a newspaper from Africa or Latin America. Even when such citations are allowed, foreign sources are typically seen as insufficient, requiring "validation" by Western sources. Yako objects to the labeling of new scholars as "Marxist", "Foucauldian", "Hegelian", "Kantian", and so on, which he sees as a "colonial method of validating oneself and research" through these scholars.
According to Yako, despite the fact that scholars such as Marx, Hegel, Foucault, and many others were all inspired by numerous thinkers before them, they are not identified with the names of such intellectuals. He criticizes the academic peer-review process as a system of "gatekeepers" who regulate the production of knowledge in a given field or about a certain region of the world.

Decolonizing research

Neo-colonial research or neo-colonial science, frequently described as helicopter research, parachute science or research, parasitic research, or safari study, is when researchers from wealthier countries go to a developing country, collect information, travel back to their country, analyze the data and samples, and publish the results with little or no involvement of local researchers. A 2003 study by the Hungarian Academy of Sciences found that 70% of articles in a random sample of publications about least-developed countries did not include a local research co-author.

Frequently, during this kind of research, local colleagues might be used to provide logistics but are not engaged for their expertise or given credit for their participation in the research. Scientific publications resulting from parachute science frequently contribute only to the careers of the scientists from rich countries, thus limiting the development of local science capacity (such as funded research centers) and the careers of local scientists. This form of "colonial" science has reverberations of 19th-century scientific practices of treating non-Western participants as "others" in order to advance colonialism—and critics call for the end of these extractivist practices in order to decolonize knowledge.

This kind of research approach reduces the quality of research because international researchers may not ask the right questions or draw connections to local issues. The result of this approach is that local communities are unable to leverage the research to their own advantage. Ultimately, especially for fields dealing with global issues like conservation biology which rely on local communities to implement solutions, neo-colonial science prevents institutionalization of the findings in local communities in order to address issues being studied by scientists.

Evaluation

According to Piet Naudé, decolonization's efforts to create new epistemic models, with validation criteria distinct from those developed in Western science, have not produced reliable outcomes, because the central issue of "credibility criteria" has yet to be resolved. He says decolonization will not succeed unless key concepts such as "problems", "paradigms", and ultimately "science" itself are fundamentally reconceptualized. The present "scholarly decolonial turn" has also been criticised on the ground that it is divorced from the daily struggles of people living in historically colonized places. Robtel Neajai Pailey says that 21st-century epistemic decolonization will fail unless it is connected to and welcoming of the ongoing liberation movements against inequality, racism, austerity, imperialism, autocracy, sexism, xenophobia, environmental damage, militarisation, impunity, corruption, media surveillance, and land theft, because epistemic decolonization "cannot happen in a political vacuum".

Charity (practice)

 
Illustration of charity

The practice of charity is the voluntary giving of help to those in need, as a humanitarian act. There are a number of philosophies about charity, often associated with religion. Effective altruism is the use of evidence and reasoning to determine the most effective ways to help others.

Etymology

The word charity originated in late Old English with the meaning "Christian love of one's fellows", and until at least the beginning of the 20th century this remained the word's primary sense. Aside from this original meaning, charity is etymologically linked to Christianity: the word entered the English language through the Old French charité, which was derived from the Latin caritas, a word commonly used in the Vulgate New Testament to translate the Greek agape (ἀγάπη), a distinct form of love (see the article: Charity (virtue)).

Over time, the meaning of charity has evolved from one of "Christian love" to that of "providing for those in need; generosity and giving", a transition which began with the Old French word charité. Thus, while the older Douay-Rheims and King James versions of the Bible translate instances of agape (such as those that appear in 1 Corinthians 13) as "charity", modern English versions of the Bible typically translate agape as "love".

Practice

A Hindu woman giving alms (painting by Raja Ravi Varma)

Charitable giving is the act of giving money, goods or time to the unfortunate, either directly or by means of a charitable trust or other worthy cause. Charitable giving as a religious act or duty is referred to as almsgiving or alms. The name stems from the most obvious expression of the virtue of charity: giving recipients the means they need to survive. The impoverished, particularly those widowed or orphaned, and the ailing or injured, are generally regarded as the proper recipients of charity. People who cannot support themselves and lack outside means of support sometimes become "beggars", directly soliciting aid from strangers encountered in public.

Some groups regard charity as being distributed towards other members from within their particular group. Although giving to those closely connected to oneself is sometimes called charity—as in the saying "Charity begins at home"—charity normally denotes giving to those not related, with filial piety and like terms used for supporting one's family and friends. Indeed, treating those related to the giver as if they were strangers in need of charity has given rise to the figure of speech "as cold as charity"—providing for one's relatives as if they were strangers, without affection.

Most forms of charity are concerned with providing basic necessities such as food, water, clothing, healthcare and shelter, but other actions may be performed as charity: visiting the imprisoned or the homebound, ransoming captives, educating orphans, even social movements. Donations to causes that benefit the unfortunate indirectly, such as donations to fund cancer research, are also charity.

With regards to religious aspects, the recipient of charity may offer to pray for the benefactor. In medieval Europe, it was customary to feast the poor at the funeral in return for their prayers for the deceased. Institutions may commemorate benefactors by displaying their names, up to naming buildings or even the institution itself after the benefactors. If the recipient makes material return of more than a token value, the transaction is normally not called charity.

In the past century, many charitable organizations have created a "charitable model" in which donors give to conglomerates, which in turn give to recipients. Examples of this include the Make-A-Wish Foundation (John Cena holds the title for most wishes granted by a single individual, with over 450 wishes) and the World Wildlife Fund. Today some charities have modernized and allow people to donate online, through websites such as JustGiving. Originally, charity entailed the benefactor directly giving goods to the receiver. This practice is continued by some individuals, for example "CNN Hero" Sal Dimiceli, and by service organizations such as the Jaycees. With the rise of more social peer-to-peer processes, many charities are moving away from the charitable model and adopting a more direct donor-to-recipient approach. Examples of this include GlobalGiving (direct funding of community development projects in developing countries), DonorsChoose (for US-based projects), Kiva (funding loans administered by microfinance organizations in developing countries) and Zidisha (funding individual microfinance borrowers directly).

Institutions evolved to carry out the labor of assisting the poor, and these institutions, called charities, provide the bulk of charitable giving today in terms of monetary value. These include orphanages, food banks, religious institutes dedicated to care of the poor, hospitals, organizations that visit the homebound and imprisoned, and many others. Such institutions allow those whose time or inclination does not lend itself to directly caring for the poor to enable others to do so, both by providing money for the work and by supporting the workers while they do it. Institutions can also attempt to sort out more effectively the actually needy from those who fraudulently claim charity. Early Christians particularly recommended the care of the unfortunate to the charge of the local bishop.

Studies have examined who gives more to charity. One study conducted in the United States found that, as a percentage of income, charitable giving increased as income decreased. The poorest fifth of Americans, for example, gave away 4.3% of their income, while the wealthiest fifth gave away 2.1%. In absolute terms, this was an average of $453 on an average income of $10,531, compared to $3,326 on an income of $158,388.
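As a quick check, the quoted percentages follow directly from the study's average donation and income figures:

```python
# Averages reported by the study for the poorest and wealthiest
# income quintiles (donation and income in US dollars).
poorest = {"income": 10_531, "donation": 453}
wealthiest = {"income": 158_388, "donation": 3_326}

def share_of_income(group: dict) -> float:
    """Donation as a percentage of income, rounded to one decimal place."""
    return round(group["donation"] / group["income"] * 100, 1)

print(share_of_income(poorest))     # 4.3
print(share_of_income(wealthiest))  # 2.1
```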

Studies have also found that "individuals who are religious are more likely to give money to charitable organizations" and they are also more likely to give more money than those who are not religious. Among those individuals are members of American religious communities, about whom the Institute for Social Policy and Understanding conducted a recent study regarding philanthropic and charitable giving. The study found that American Muslim donation patterns when it comes to charitable giving align mostly with other American faith groups, like Protestant, Catholic, and Jewish communities, but that American Muslims were more likely to donate out of a sense of religious obligation and a belief that those who have ought to give to those who do not. The study also found that most American faith groups prioritize charity towards their own houses of worship when it comes to monetary donations, and then other causes. Muslims and Jews contributed more than other religious groups to civil rights protection organizations, while white Evangelical Christians, followed by Protestants and then Catholics, were the most likely to make charitable contributions to youth and family services.

A study from 2021 found that when prospective donors were asked to choose between two similar donation targets, they were more likely to opt out of donating altogether.

Criticism

A philosophical critique of charity can be found in Oscar Wilde's essay The Soul of Man Under Socialism, where he calls it "a ridiculously inadequate mode of partial restitution . . . usually accompanied by some impertinent attempt on the part of the sentimentalist to tyrannise over [the poor's] private lives", as well as a remedy that prolongs the "disease" of poverty, rather than curing it. Wilde's thoughts are cited with approval by Slavoj Žižek, and the Slovenian thinker adds his description of the effect of charity on the charitable:

When, confronted with the starving child, we are told: "For the price of a couple of cappuccinos, you can save her life!", the true message is: "For the price of a couple of cappuccinos, you can continue in your ignorant and pleasurable life, not only not feeling any guilt, but even feeling good for having participated in the struggle against suffering!"

— Slavoj Žižek (2010). Living in the End Times. Verso. p. 117.

Friedrich Engels, in his 1845 treatise on the condition of the working class in England, points out that charitable giving, whether by governments or individuals, is often seen by the givers as a means to conceal suffering that is unpleasant to see. Engels quotes from a letter to the editor of an English newspaper who complains that

streets are haunted by swarms of beggars, who try to awaken the pity of the passers-by in a most shameless and annoying manner, by exposing their tattered clothing, sickly aspect, and disgusting wounds and deformities. I should think that when one not only pays the poor-rate, but also contributes largely to the charitable institutions, one had done enough to earn a right to be spared such disagreeable and impertinent molestations.

The English bourgeoisie, Engels concludes,

is charitable out of self-interest; it gives nothing outright, but regards its gifts as a business matter, makes a bargain with the poor, saying: "If I spend this much upon benevolent institutions, I thereby purchase the right not to be troubled any further, and you are bound thereby to stay in your dusky holes and not to irritate my tender nerves by exposing your misery. You shall despair as before, but you shall despair unseen, this I require, this I purchase with my subscription of twenty pounds for the infirmary!" It is infamous, this charity of a Christian bourgeois!

The American theologian Reinhold Niebuhr also opined that charity could often act as a substitute for real justice. In his 1932 work Moral Man and Immoral Society he criticized charities funding Black education, writing that such "white philanthropy" failed to make a "frontal attack upon the social injustices" from which Black Americans suffered. He wrote: "We have previously suggested that philanthropy combines genuine pity with the display of power and that the latter element explains why the powerful are more inclined to be generous than to grant social justice."

The philosopher Peter Singer opposes charity on the grounds that the interests of all people should count equally since their geographic location or citizenship status does not affect their obligations towards society.

The Institute of Economic Affairs published a report in 2012 called "Sock Puppets: How the government lobbies itself and why", which criticized the phenomenon of governments funding charities which then lobby the government for changes which the government wanted all along.

Needs-based versus rights-based debate

Increasing awareness of poverty and food insecurity has led to debates among scholars about needs-based versus rights-based approaches. The needs-based approach provides recipients only with what they need, expecting no action in response. Examples of needs-based approaches include charitable giving, philanthropy, and other private investments. A rights-based approach, on the other hand, includes participation from both ends, with recipients actively influencing policies. Politically, a rights-based approach would be illustrated by policies of income redistribution, wage floors, and cash subsidies. Mariana Chilton, in the American Journal of Public Health, suggested that current government policies reflect the needs-based approach. Chilton argued that this leads to a misconception that charity is the cure for basic needs insecurity, and that this misconception drives the government to avoid welfare reform and instead rely on charitable organizations and philanthropists. Amelia Barwise supported Chilton's argument by describing the consequences of philanthropy. Using the example of Michael Bloomberg's donation of $1.8 billion to Johns Hopkins University for student debt, Barwise questioned the most effective use of this money. She identified one motivation for philanthropy as avoiding federal taxes while being recognized for one's generosity and directing one's earnings to organizations one is passionate about. Barwise implied that Bloomberg's actions fit this motivation, since he saved $600 million in federal taxes by donating the money to his alma mater. Furthermore, this non-politicized idea of philanthropy and charitable giving is linked to the government's approach to poverty: Barwise said that Americans have an innate distrust of the government, causing them to favor private and de-politicized actions such as charity.
Her research explores the consequences of philanthropic actions and how the money could be used more effectively. First, Barwise stated that philanthropy allows for tax avoidance, which decreases opportunities for welfare policies that would support all low-income workers. Second, philanthropy can diminish an institution's mission and give more power and influence to the donor.

Acknowledging these consequences of philanthropy and the diminishing of public funding, Mariana Chilton offered solutions through the rights-based approach. Chilton argued that the government should adopt a more rights-based approach to include more people in their policies and significantly improve basic needs insecurity. She called for government accountability, an increase of transparency, an increase of public participation, and the acknowledgement of vulnerability and discrimination caused by current policies. She argued for increased federal legislation that provides social safety nets through entitlement programs, recognizing SNAP as a small example. Chilton concluded with a list of four strategies for a national plan: 1) increase monitoring to assess threats to food insecurity, 2) improve national, state, and local coordination, 3) improve accountability, and 4) utilize public participation to help construct policies.

Philosophies

Charity in Christianity

In medieval Europe during the 12th and 13th centuries, Latin Christendom underwent a charitable revolution. Rich patrons founded many leprosaria and hospitals for the sick and poor. New confraternities and religious orders emerged with the primary mission of engaging in intensive charitable work. Historians debate the causes. Some argue that this movement was spurred by economic and material forces, as well as a burgeoning urban culture. Other scholars argue that developments in spirituality and devotional culture were central. For still other scholars, medieval charity was primarily a way to elevate one's social status and affirm existing hierarchies of power.

Tzedakah in Judaism

Sandstone vestige of a Jewish gravestone depicting a Tzedakah box (pushke). Jewish cemetery in Otwock (Karczew-Anielin), Poland.

In religious Judaism, tzedakah—a Hebrew term literally meaning righteousness but commonly used to signify charity—refers to the religious obligation to do what is right and just. Because it is commanded by the Torah and not voluntary, the practice is not technically an act of charity; such a concept is virtually nonexistent in Jewish tradition. Jews give tzedakah, which can take the form of money, time and resources to the needy, out of "righteousness" and "justice" rather than benevolence, generosity, or charitableness. The Torah requires that 10 percent of a Jew's income be allotted to righteous deeds or causes, regardless of whether the receiving party is rich or poor. However, if one regards Judaism in its wider modern meaning, acts of charity can go far beyond the religious prescriptions of tzedakah and also beyond the wider concept of ethical obligation. See also mitzvot and halukkah.

Zakat and Sadaqah in Islam

In Islam there are two methods of charity: one is called Zakat, the other Sadaqah.

Zakat is one of the five pillars upon which the Muslim religion is based. Muslims are obliged to give 2.5% of their savings as Zakat per Islamic calendar year, provided that the savings exceed a threshold, called Nisab, usually determined by the religious authority.
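The zakat arithmetic described above can be sketched in a few lines; note that the nisab threshold used here is a placeholder value, since the real threshold is set by religious authorities (often pegged to the price of gold or silver) and changes over time.

```python
ZAKAT_RATE = 0.025   # 2.5% of eligible savings
NISAB = 5000.0       # hypothetical threshold, in some currency unit

def zakat_due(savings: float) -> float:
    """Return the zakat owed on savings held for one Islamic year."""
    if savings < NISAB:  # below the threshold: no zakat is due
        return 0.0
    return savings * ZAKAT_RATE

print(zakat_due(4000.0))   # below nisab: nothing due
print(zakat_due(10000.0))  # 2.5% of the full amount
```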

Sadaqah is voluntary charity or contribution. Sadaqah can be given using money, personal items, time or other resources. There is no minimum or maximum requirement for Sadaqah. Even smiling at other people is considered a Sadaqah.

Dāna in Indian religions

The practice of charity is called Dāna or Daana in Hinduism, Buddhism and Jainism. It is the virtue of generosity or giving. Dāna has been defined in traditional texts, state Krishnan and Manoj, as "any action of relinquishing the ownership of what one considered or identified as one's own, and investing the same in a recipient without expecting anything in return". Karna, Mahabali and Harishchandra are legendary figures renowned for their charity.

The earliest known discussion of charity as a virtuous practice, in Indian texts, is in the Rigveda. According to other ancient texts of Hinduism, dāna can take the form of feeding or giving to an individual in distress or need. It can also take the form of philanthropic public projects that empower and help many.

In Buddhism, dāna is one of the perfections (pāramitās), characterized by unattached and unconditional generosity, giving and letting go.

Historical records, such as those by the Persian historian Abū Rayḥān al-Bīrūnī who visited India in early 11th century, suggest dāna has been an ancient and medieval era practice among Indian religions.

Effective altruism

Effective altruism is a philosophy and social movement that uses evidence and reasoning to determine the most effective ways to benefit others. Effective altruism encourages individuals to consider all causes and actions and to act in the way that brings about the greatest positive impact, based upon their values. It is the broad, evidence-based and cause-neutral approach that distinguishes effective altruism from traditional altruism or charity. Effective altruism is part of the larger movement towards evidence-based practices.

While a substantial proportion of effective altruists have focused on the nonprofit sector, the philosophy of effective altruism applies more broadly to prioritizing the scientific projects, companies, and policy initiatives which can be estimated to save lives, help people, or otherwise have the biggest benefit. People associated with the movement include philosopher Peter Singer, Facebook co-founder Dustin Moskovitz, Cari Tuna, Ben Delo, Oxford-based researchers William MacAskill and Toby Ord, professional poker player Liv Boeree, and writer Jacy Reese Anthis.

Saturday, April 30, 2022

Radar signal characteristics

From Wikipedia, the free encyclopedia

A radar system uses a radio-frequency electromagnetic signal reflected from a target to determine information about that target. In any radar system, the signal transmitted and received will exhibit many of the characteristics described below.

The radar signal in the time domain

The diagram below shows the characteristics of the transmitted signal in the time domain. Note that in this and in all the diagrams within this article, the x axis is exaggerated to make the explanation clearer.

Radar Pulse Train

Carrier

The carrier is an RF signal, typically of microwave frequencies, which is usually (but not always) modulated to allow the system to capture the required data. In simple ranging radars, the carrier will be pulse modulated and in continuous wave systems, such as Doppler radar, modulation may not be required. Most systems use pulse modulation, with or without other supplementary modulating signals. Note that with pulse modulation, the carrier is simply switched on and off in sync with the pulses; the modulating waveform does not actually exist in the transmitted signal and the envelope of the pulse waveform is extracted from the demodulated carrier in the receiver. Although obvious when described, this point is often missed when pulse transmissions are first studied, leading to misunderstandings about the nature of the signal.

Pulse width

The pulse width (τ), or pulse duration, of the transmitted signal is the time, typically in microseconds, that each pulse lasts. If the pulse is not a perfect square wave, the time is typically measured between the 50% power levels of the rising and falling edges of the pulse.

The pulse width must be long enough to ensure that the radar emits sufficient energy so that the reflected pulse is detectable by its receiver. The amount of energy that can be delivered to a distant target is the product of two things: the peak output power of the transmitter, and the duration of the transmission. Therefore, pulse width constrains the maximum detection range of a target.

Pulse width also constrains the range discrimination, that is the capacity of the radar to distinguish between two targets that are close together. At any range, with similar azimuth and elevation angles and as viewed by a radar with an unmodulated pulse, the range resolution is approximately equal in distance to half of the pulse duration times the speed of light (approximately 300 meters per microsecond).
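As a quick numerical sketch of these two constraints (the peak-power figure is illustrative, not from the article):

```python
C = 3e8  # speed of light, m/s (approximate)

def pulse_energy_j(peak_power_w: float, pulse_width_s: float) -> float:
    """Energy delivered per pulse: peak power times pulse duration."""
    return peak_power_w * pulse_width_s

def range_resolution_m(pulse_width_s: float) -> float:
    """Range resolution of an unmodulated pulse: c * tau / 2."""
    return C * pulse_width_s / 2

print(pulse_energy_j(1e6, 1e-6))  # 1 MW peak, 1 us pulse: 1 joule
print(range_resolution_m(1e-6))   # 1 us pulse: 150 m resolution
```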

Radar echoes, showing a representation of the carrier

Pulse width also determines the radar's dead zone at close ranges. While the radar transmitter is active, the receiver input is blanked to avoid the amplifiers being swamped (saturated) or, more likely, damaged. A simple calculation reveals that a radar echo will take approximately 10.8 μs to return from a target 1 statute mile away, counting from the leading edge of the transmitter pulse (T0), sometimes known as the transmitter main bang. For convenience, these figures may also be expressed as 1 nautical mile in 12.4 μs or 1 kilometre in 6.7 μs. (For simplicity, all further discussion will use metric figures.) If the radar pulse width is 1 μs, then there can be no detection of targets closer than about 150 m, because the receiver is blanked.
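The timing figures above can be checked with a couple of helpers (using the exact speed of light; the article's values are rounded):

```python
C = 299_792_458.0  # speed of light in m/s

def round_trip_time_us(range_m: float) -> float:
    """Time for a pulse to travel to a target and back, in microseconds."""
    return 2 * range_m / C * 1e6

def blind_range_m(pulse_width_us: float) -> float:
    """Minimum detectable range while the receiver is blanked: c * tau / 2."""
    return C * pulse_width_us * 1e-6 / 2

print(round_trip_time_us(1609.344))  # 1 statute mile: just under 10.8 us
print(round_trip_time_us(1852.0))    # 1 nautical mile: about 12.4 us
print(round_trip_time_us(1000.0))    # 1 km: about 6.7 us
print(blind_range_m(1.0))            # 1 us pulse: about 150 m dead zone
```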

All this means that the designer cannot simply increase the pulse width to get greater range without having an impact on other performance factors. As with everything else in a radar system, compromises have to be made to a radar system's design to provide the optimal performance for its role.

Pulse repetition frequency (PRF)

In order to build up a discernible echo, most radar systems emit pulses continuously and the repetition rate of these pulses is determined by the role of the system. An echo from a target will therefore be 'painted' on the display or integrated within the signal processor every time a new pulse is transmitted, reinforcing the return and making detection easier. The higher the PRF that is used, the more often the target is painted. However, with a higher PRF the range that the radar can "see" is reduced. Radar designers try to use the highest PRF possible commensurate with the other factors that constrain it, as described below.

There are two other facets related to PRF that the designer must weigh very carefully: the beamwidth characteristics of the antenna, and the required periodicity with which the radar must sweep the field of view. A radar with a 1° horizontal beamwidth that sweeps the entire 360° horizon every 2 seconds with a PRF of 1080 Hz will radiate 6 pulses over each 1-degree arc. If the receiver needs at least 12 reflected pulses of similar amplitudes to achieve an acceptable probability of detection, then there are three choices for the designer: double the PRF, halve the sweep speed, or double the beamwidth. In reality, all three choices are used, to varying extents; radar design is all about compromises between conflicting pressures.
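The pulses-per-beamwidth arithmetic in this example can be checked with a small helper (the function and parameter names are illustrative):

```python
def pulses_per_beamwidth(prf_hz: float, beamwidth_deg: float,
                         scan_period_s: float, sector_deg: float = 360.0) -> float:
    """Number of pulses that illuminate a target as the beam sweeps past it."""
    scan_rate_deg_per_s = sector_deg / scan_period_s
    time_on_target_s = beamwidth_deg / scan_rate_deg_per_s
    return prf_hz * time_on_target_s

# 1 degree beam, full 360 degree sweep every 2 s, 1080 Hz PRF -> 6 pulses
print(pulses_per_beamwidth(1080, 1.0, 2.0))
# Doubling the PRF (or the beamwidth, or the scan period) doubles the count
print(pulses_per_beamwidth(2160, 1.0, 2.0))
```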

Staggered PRF

Staggered PRF is a transmission process where the time between interrogations from radar changes slightly, in a patterned and readily-discernible repeating manner. The change of repetition frequency allows the radar, on a pulse-to-pulse basis, to differentiate between returns from its own transmissions and returns from other radar systems with the same PRF and a similar radio frequency. Consider a radar with a constant interval between pulses; target reflections appear at a relatively constant range related to the flight-time of the pulse. In today's very crowded radio spectrum, there may be many other pulses detected by the receiver, either directly from the transmitter or as reflections from elsewhere. Because their apparent "distance" is defined by measuring their time relative to the last pulse transmitted by "our" radar, these "jamming" pulses could appear at any apparent distance. When the PRF of the "jamming" radar is very similar to "our" radar, those apparent distances may be very slow-changing, just like real targets. By using stagger, a radar designer can force the "jamming" to jump around erratically in apparent range, inhibiting integration and reducing or even suppressing its impact on true target detection.

Without staggered PRF, any pulses originating from another radar on the same radio frequency might appear stable in time and could be mistaken for reflections from the radar's own transmission. With staggered PRF the radar's own targets appear stable in range in relation to the transmit pulse, whilst the 'jamming' echoes may move around in apparent range (uncorrelated), causing them to be rejected by the receiver. Staggered PRF is only one of several similar techniques used for this, including jittered PRF (where the pulse timing is varied in a less-predictable manner), pulse-frequency modulation, and several other similar techniques whose principal purpose is to reduce the probability of unintentional synchronicity. These techniques are in widespread use in marine safety and navigation radars, by far the most numerous radars on planet Earth today.
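A toy simulation (not from the article) illustrates why stagger breaks up interference: measured relative to "our" radar's most recent pulse, a second radar's pulses sit at a slowly drifting apparent delay when our PRI is constant, but jump around erratically when the PRI is staggered.

```python
def apparent_delays(own_pris_us, other_pri_us, n=8):
    """Apparent delay of another radar's pulses, relative to our last pulse."""
    own_times, t = [], 0.0
    for i in range(n * len(own_pris_us)):       # build our pulse train
        own_times.append(t)
        t += own_pris_us[i % len(own_pris_us)]
    other_times = [k * other_pri_us for k in range(1, n)]
    delays = []
    for ot in other_times:
        last_own = max(x for x in own_times if x <= ot)
        delays.append(ot - last_own)            # delay since our last pulse
    return delays

# Constant 1000 us PRI vs. a 1003 us interferer: drifts slowly like a target
print(apparent_delays([1000.0], 1003.0))
# Staggered 990/1010 us PRI: the interference jumps around and won't integrate
print(apparent_delays([990.0, 1010.0], 1003.0))
```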

Clutter

Clutter refers to radio frequency (RF) echoes returned from targets which are uninteresting to the radar operators. Such targets include natural objects such as ground, sea, precipitation (such as rain, snow or hail), sand storms, animals (especially birds), atmospheric turbulence, and other atmospheric effects, such as ionosphere reflections, meteor trails, and three-body scatter spike. Clutter may also be returned from man-made objects such as buildings and, intentionally, by radar countermeasures such as chaff.

Some clutter may also be caused by a long radar waveguide between the radar transceiver and the antenna. In a typical plan position indicator (PPI) radar with a rotating antenna, this will usually be seen as a "sun" or "sunburst" in the centre of the display as the receiver responds to echoes from dust particles and misguided RF in the waveguide. Adjusting the timing between when the transmitter sends a pulse and when the receiver stage is enabled will generally reduce the sunburst without affecting the accuracy of the range, since most sunburst is caused by a diffused transmit pulse reflected before it leaves the antenna. Clutter is considered a passive interference source, since it only appears in response to radar signals sent by the radar.

Clutter is detected and neutralized in several ways. Clutter tends to appear static between radar scans; on subsequent scan echoes, desirable targets will appear to move, and all stationary echoes can be eliminated. Sea clutter can be reduced by using horizontal polarization, while rain is reduced with circular polarization (note that meteorological radars wish for the opposite effect, and therefore use linear polarization to detect precipitation). Other methods attempt to increase the signal-to-clutter ratio.

Clutter moves with the wind or is stationary. Two common strategies to improve measurement performance in a clutter environment are:

  • Moving target indication, which integrates successive pulses and
  • Doppler processing, which uses filters to separate clutter from desirable signals.

The most effective clutter reduction technique is pulse-Doppler radar with Look-down/shoot-down capability. Doppler separates clutter from aircraft and spacecraft using a frequency spectrum, so individual signals can be separated from multiple reflectors located in the same volume using velocity differences. This requires a coherent transmitter. Another technique uses a moving target indication that subtracts the receive signal from two successive pulses using phase to reduce signals from slow moving objects. This can be adapted for systems that lack a coherent transmitter, such as time-domain pulse-amplitude radar.
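The two-pulse moving target indication canceller described above can be sketched very simply. Real systems subtract complex (phase) samples from successive pulses; the values below are illustrative real-valued amplitudes, but the principle is the same: stationary clutter repeats pulse to pulse and cancels, while a moving target's changing return survives.

```python
def two_pulse_canceller(pulses):
    """Subtract each received sample from the one received a pulse earlier."""
    return [b - a for a, b in zip(pulses, pulses[1:])]

clutter = [5.0, 5.0, 5.0, 5.0]   # stationary: identical echo every pulse
mover = [5.0, 6.0, 7.0, 8.0]     # moving target: echo changes each pulse

print(two_pulse_canceller(clutter))  # cancels to zero
print(two_pulse_canceller(mover))    # a residue remains and can be detected
```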

Constant False Alarm Rate, a form of Automatic Gain Control (AGC), is a method that relies on clutter returns far outnumbering echoes from targets of interest. The receiver's gain is automatically adjusted to maintain a constant level of overall visible clutter. While this does not help detect targets masked by stronger surrounding clutter, it does help to distinguish strong target sources. In the past, radar AGC was electronically controlled and affected the gain of the entire radar receiver. As radars evolved, AGC became computer-software controlled and affected the gain with greater granularity in specific detection cells.
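As a sketch of how a CFAR detector adapts its threshold to the local clutter level, here is a minimal cell-averaging variant (one common form; the article names CFAR only generally, and the window sizes and scale factor below are illustrative):

```python
def ca_cfar(samples, train=3, guard=1, scale=4.0):
    """Flag cells whose value exceeds scale * mean of nearby training cells."""
    hits = []
    for i in range(len(samples)):
        training = []
        # training cells to the left of the cell under test (skipping guards)
        for j in range(i - guard - train, i - guard):
            if 0 <= j < len(samples):
                training.append(samples[j])
        # training cells to the right (skipping guards)
        for j in range(i + guard + 1, i + guard + 1 + train):
            if 0 <= j < len(samples):
                training.append(samples[j])
        if training and samples[i] > scale * sum(training) / len(training):
            hits.append(i)
    return hits

# A strong target at index 6 embedded in low-level clutter:
echoes = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 9.0, 1.1, 1.0, 0.9, 1.2, 1.0]
print(ca_cfar(echoes))  # only the strong return is flagged
```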

Radar multipath echoes from a target cause ghosts to appear.

Clutter may also originate from multipath echoes from valid targets caused by ground reflection, atmospheric ducting or ionospheric reflection/refraction (e.g., Anomalous propagation). This clutter type is especially bothersome since it appears to move and behave like other normal (point) targets of interest. In a typical scenario, an aircraft echo is reflected from the ground below, appearing to the receiver as an identical target below the correct one. The radar may try to unify the targets, reporting the target at an incorrect height, or eliminating it on the basis of jitter or a physical impossibility. Terrain bounce jamming exploits this response by amplifying the radar signal and directing it downward. These problems can be overcome by incorporating a ground map of the radar's surroundings and eliminating all echoes which appear to originate below ground or above a certain height. Monopulse can be improved by altering the elevation algorithm used at low elevation. In newer air traffic control radar equipment, algorithms are used to identify the false targets by comparing the current pulse returns to those adjacent, as well as calculating return improbabilities.

Sensitivity time control (STC)

STC is used to avoid saturation of the receiver by close-in ground clutter by adjusting the attenuation of the receiver as a function of distance: more attenuation is applied to close-in returns, and the attenuation is reduced as the range increases.

Unambiguous range

Single PRF
Radar Echoes

In simple systems, echoes from targets must be detected and processed before the next transmitter pulse is generated if range ambiguity is to be avoided. Range ambiguity occurs when the time taken for an echo to return from a target is greater than the pulse repetition period (T); if the interval between transmitted pulses is 1000 microseconds and the return time of a pulse from a distant target is 1200 microseconds, the target's apparent range corresponds to a delay of only 200 microseconds. In sum, these 'second echoes' appear on the display to be targets closer than they really are.
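The apparent-range arithmetic in this example is just a modulo operation on the echo delay (a minimal sketch):

```python
def apparent_delay_us(true_delay_us: float, pri_us: float) -> float:
    """A 'second-time-around' echo appears at the true delay modulo the PRI."""
    return true_delay_us % pri_us

# 1200 us true delay with a 1000 us pulse interval looks like a 200 us target
print(apparent_delay_us(1200.0, 1000.0))
```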

Consider the following example: if the radar antenna is located about 15 m above sea level, then the distance to the horizon is quite close (perhaps 15 km). Ground targets further than this range cannot be detected, so the PRF can be quite high; a radar with a PRF of 7.5 kHz will return ambiguous echoes from targets at about 20 km, or over the horizon. If, however, the PRF were doubled to 15 kHz, then the ambiguous range would be reduced to 10 km, and targets beyond this range would only appear on the display after the transmitter has emitted another pulse. A target at 12 km would appear to be 2 km away, although the strength of the echo might be much lower than that from a genuine target at 2 km.

The maximum non-ambiguous range varies inversely with PRF and is given by:

R_max = c / (2 × PRF)

where c is the speed of light. If a longer unambiguous range is required with this simple system, then lower PRFs are required and it was quite common for early search radars to have PRFs as low as a few hundred Hz, giving an unambiguous range out to well in excess of 150 km. However, lower PRFs introduce other problems, including poorer target painting and velocity ambiguity in Pulse-Doppler systems (see below).
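A one-line helper reproduces the figures used in the horizon example above:

```python
C = 3e8  # speed of light, m/s (approximate)

def unambiguous_range_km(prf_hz: float) -> float:
    """R_max = c / (2 * PRF), converted to kilometres."""
    return C / (2 * prf_hz) / 1000.0

print(unambiguous_range_km(7500))   # 7.5 kHz PRF: 20 km
print(unambiguous_range_km(15000))  # 15 kHz PRF: 10 km
print(unambiguous_range_km(300))    # a few hundred Hz: hundreds of km
```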

Multiple PRF

Modern radars, especially air-to-air combat radars in military aircraft, may use PRFs in the tens-to-hundreds of kilohertz and stagger the interval between pulses to allow the correct range to be determined. With this form of staggered PRF, a packet of pulses is transmitted with a fixed interval between each pulse, and then another packet is transmitted with a slightly different interval. Target reflections appear at different ranges for each packet; these differences are accumulated and then simple arithmetical techniques may be applied to determine true range. Such radars may use repetitive patterns of packets, or more adaptable packets that respond to apparent target behaviors. Regardless, radars that employ the technique are universally coherent, with a very stable radio frequency, and the pulse packets may also be used to make measurements of the Doppler shift (a velocity-dependent modification of the apparent radio frequency), especially when the PRFs are in the hundreds-of-kilohertz range. Radars exploiting Doppler effects in this manner typically determine relative velocity first, from the Doppler effect, and then use other techniques to derive target distance.

Maximum Unambiguous Range

At its most simplistic, MUR (Maximum Unambiguous Range) for a Pulse Stagger sequence may be calculated using the TSP (Total Sequence Period). TSP is defined as the total time it takes for the pulsed pattern to repeat; it can be found by adding all the elements in the stagger sequence. The formula is derived from the speed of light and the length of the sequence:

MUR = (c × TSP) / 2

where c is the speed of light, usually in metres per microsecond, and TSP is the addition of all the positions of the stagger sequence, usually in microseconds. However, in a stagger sequence, some intervals may be repeated several times; when this occurs, it is more appropriate to consider TSP as the addition of all the unique intervals in the sequence.
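A small sketch of the MUR arithmetic; following the qualification above, repeated intervals in the sequence are counted only once:

```python
C_M_PER_US = 300.0  # approximate speed of light, metres per microsecond

def mur_km(stagger_intervals_us) -> float:
    """MUR = c * TSP / 2, where TSP sums the unique stagger intervals."""
    tsp_us = sum(set(stagger_intervals_us))
    return C_M_PER_US * tsp_us / 2 / 1000.0

print(mur_km([1000.0]))                # a single 1000 us PRI: 150 km
print(mur_km([990.0, 1010.0, 990.0]))  # repeated 990 us counted once: 300 km
```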

Also, it is worth remembering that there may be vast differences between the MUR and the maximum range (the range beyond which reflections will probably be too weak to be detected), and that the maximum instrumented range may be much shorter than either of these. A civil marine radar, for instance, may have user-selectable maximum instrumented display ranges of 72, or 96 or rarely 120 nautical miles, in accordance with international law, but maximum unambiguous ranges of over 40,000 nautical miles and maximum detection ranges of perhaps 150 nautical miles. When such huge disparities are noted, it reveals that the primary purpose of staggered PRF is to reduce "jamming", rather than to increase unambiguous range capabilities.

The radar signal in the frequency domain

Pure CW radars appear as a single line on a spectrum analyser display. When modulated with other sinusoidal signals, the spectrum differs little from that obtained with standard analogue modulation schemes used in communications systems, such as frequency modulation, and consists of the carrier plus a relatively small number of sidebands. When the radar signal is modulated with a pulse train as shown above, the spectrum becomes much more complicated and far more difficult to visualise.

Basic radar transmission frequency spectrum
 
3D Doppler Radar Spectrum showing a Barker Code of 13

Basic Fourier analysis shows that any repetitive complex signal consists of a number of harmonically related sine waves. The radar pulse train is a form of square wave, the pure form of which consists of the fundamental plus all of the odd harmonics. The exact composition of the pulse train will depend on the pulse width and PRF, but mathematical analysis can be used to calculate all of the frequencies in the spectrum. When the pulse train is used to modulate a radar carrier, the typical spectrum shown on the left will be obtained.

Examination of this spectral response shows that it contains two basic structures: the coarse structure (the peaks or 'lobes' in the diagram on the left) and the fine structure, which contains the individual frequency components as shown below. The envelope of the lobes in the coarse structure is given by the sinc function:

|sin(πfτ) / (πfτ)|

Note that the pulse width (τ) determines the lobe spacing. Smaller pulse widths result in wider lobes and therefore greater bandwidth.

Radar transmission frequency fine spectrum

Examination of the spectral response in finer detail, as shown on the right, shows that the fine structure contains individual lines or spot frequencies. The spacing of these lines is given by 1/T, and since the period of the PRF (T) appears at the bottom of this expression, there will be fewer lines if higher PRFs are used. These facts affect the decisions made by radar designers when considering the trade-offs that need to be made when trying to overcome the ambiguities that affect radar signals.
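The two spectral scales can be sketched numerically: coarse lobes with nulls every 1/τ, and fine lines spaced at the PRF (1/T); the helper names are illustrative:

```python
def lobe_null_spacing_hz(pulse_width_s: float) -> float:
    """Coarse-structure lobe nulls occur at multiples of 1/tau."""
    return 1.0 / pulse_width_s

def lines_per_lobe(pulse_width_s: float, prf_hz: float) -> float:
    """Fine-structure lines are spaced at the PRF, so 1/tau / PRF per lobe."""
    return lobe_null_spacing_hz(pulse_width_s) / prf_hz

print(lobe_null_spacing_hz(1e-6))   # 1 us pulse: lobes on a 1 MHz scale
print(lines_per_lobe(1e-6, 2e3))    # low PRF: hundreds of lines per lobe
print(lines_per_lobe(1e-6, 200e3))  # high PRF: only a handful of lines
```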

Pulse profiling

If the rise and fall times of the modulation pulses are zero, (e.g. the pulse edges are infinitely sharp), then the sidebands will be as shown in the spectral diagrams above. The bandwidth consumed by this transmission can be huge and the total power transmitted is distributed over many hundreds of spectral lines. This is a potential source of interference with any other device and frequency-dependent imperfections in the transmit chain mean that some of this power never arrives at the antenna. In reality of course, it is impossible to achieve such sharp edges, so in practical systems the sidebands contain far fewer lines than a perfect system.

If the bandwidth can be limited to include relatively few sidebands, by rolling off the pulse edges intentionally, an efficient system can be realised with the minimum of potential for interference with nearby equipment. However, the trade-off of this is that slow edges make range resolution poor. Early radars limited the bandwidth through filtration in the transmit chain, e.g. the waveguide, scanner etc., but performance could be sporadic with unwanted signals breaking through at remote frequencies and the edges of the recovered pulse being indeterminate. Further examination of the basic radar spectrum shown above shows that the information in the various lobes of the coarse spectrum is identical to that contained in the main lobe, so limiting the transmit and receive bandwidth to that extent provides significant benefits in terms of efficiency and noise reduction.

Radar transmission frequency spectrum of a trapezoid pulse profile

Recent advances in signal processing techniques have made the use of pulse profiling or shaping more common. By shaping the pulse envelope before it is applied to the transmitting device, say to a cosine law or a trapezoid, the bandwidth can be limited at source, with less reliance on filtering. When this technique is combined with pulse compression, then a good compromise between efficiency, performance and range resolution can be realised. The diagram on the left shows the effect on the spectrum if a trapezoid pulse profile is adopted. It can be seen that the energy in the sidebands is significantly reduced compared to the main lobe and the amplitude of the main lobe is increased.

Radar transmission frequency spectrum of a cosine pulse profile

Similarly, the use of a cosine pulse profile has an even more marked effect, with the amplitude of the sidelobes practically becoming negligible. The main lobe is again increased in amplitude and the sidelobes correspondingly reduced, giving a significant improvement in performance.

There are many other profiles that can be adopted to optimise the performance of the system, but cosine and trapezoid profiles generally provide a good compromise between efficiency and resolution and so tend to be used most frequently.

Unambiguous velocity

Doppler spectrum. Deliberately no units given (but could be dBu and MHz for example).

This is an issue only with a particular type of system; the pulse-Doppler radar, which uses the Doppler effect to resolve velocity from the apparent change in frequency caused by targets that have net radial velocities compared to the radar device. Examination of the spectrum generated by a pulsed transmitter, shown above, reveals that each of the sidebands, (both coarse and fine), will be subject to the Doppler effect, another good reason to limit bandwidth and spectral complexity by pulse profiling.

Consider the positive shift caused by the closing target in the diagram, which has been highly simplified for clarity. It can be seen that as the relative velocity increases, a point will be reached where the spectral lines that constitute the echoes are hidden or aliased by the next sideband of the modulated carrier. Transmission of multiple pulse-packets with different PRF-values, e.g. staggered PRFs, will resolve this ambiguity, since each new PRF value will result in a new sideband position, revealing the velocity to the receiver. The maximum unambiguous target velocity is given by:

V_max = (c × PRF) / (4 × f)

where c is the speed of light and f is the carrier frequency.
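A sketch of the unambiguous-velocity calculation, V_max = c·PRF/(4f): Doppler shifts up to PRF/2 can be resolved, and velocity relates to Doppler shift as v = f_d·c/(2f).

```python
C = 3e8  # speed of light, m/s (approximate)

def unambiguous_velocity_ms(prf_hz: float, carrier_hz: float) -> float:
    """V_max = c * PRF / (4 * f_carrier)."""
    return C * prf_hz / (4 * carrier_hz)

# For a 3 GHz carrier, as in the typical system parameters below:
print(unambiguous_velocity_ms(2e3, 3e9))    # low PRF: 50 m/s
print(unambiguous_velocity_ms(200e3, 3e9))  # high PRF: 5000 m/s
```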

Typical system parameters

Taking all of the above characteristics into account means that certain constraints are placed on the radar designer. For example, a system with a 3 GHz carrier frequency and a pulse width of 1 µs will have a carrier period of approximately 333 ps. Each transmitted pulse will contain about 3000 carrier cycles and the velocity and range ambiguity values for such a system would be:

PRF              Velocity Ambiguity   Range Ambiguity
Low (2 kHz)      50 m/s               75 km
Medium (12 kHz)  300 m/s              12.5 km
High (200 kHz)   5000 m/s             750 m
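As a consistency check, the table rows can be reproduced from the two ambiguity formulas, V_max = c·PRF/(4f) and R_max = c/(2·PRF), for the 3 GHz carrier (the pulse width does not enter either formula):

```python
C, F = 3e8, 3e9  # speed of light (m/s) and the 3 GHz carrier frequency

for name, prf in [("Low", 2e3), ("Medium", 12e3), ("High", 200e3)]:
    v_amb = C * prf / (4 * F)  # unambiguous velocity, m/s
    r_amb = C / (2 * prf)      # unambiguous range, m
    print(f"{name} ({prf/1e3:g} kHz): {v_amb:g} m/s, {r_amb/1e3:g} km")
```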
