
Thursday, October 1, 2020

Welfare dependency

From Wikipedia, the free encyclopedia
 
Welfare dependency is the state in which a person or household is reliant on government welfare benefits for their income for a prolonged period of time, and without which they would not be able to meet the expenses of daily living. The United States Department of Health and Human Services defines welfare dependency as the proportion of all individuals in families which receive more than 50 percent of their total annual income from Temporary Assistance for Needy Families (TANF), food stamps, and/or Supplemental Security Income (SSI) benefits. Typically viewed as a social problem, it has been the subject of major welfare reform efforts since the mid-20th century, primarily focused on trying to make recipients self-sufficient through paid work. While the term "welfare dependency" can be used pejoratively, for the purposes of this article it shall be used to indicate a particular situation of persistent poverty.

Discourses of dependency and the history of a social problem

Terminology

The term "welfare dependency" is itself controversial, often carrying derogatory connotations or insinuations that the recipient is unwilling to work. Historian Michael B. Katz discussed the discourses surrounding poverty in his 1989 book The Undeserving Poor, where he elaborated upon the distinctions Americans make between so-called “deserving” recipients of aid, such as widows, and “undeserving” ones, like single-parent mothers, with the distinction being that the former have fallen upon hard times through no fault of their own whereas the latter are seen as having chosen to live on the public purse. Drawing this dichotomy diverts attention from the structural factors that cause and entrench poverty, such as economic change. Instead of focusing on how to tackle the root causes of poverty, people focus on attacking the supposed poor character of the recipient.

It is important to note that while the term “welfare dependence” in and of itself is politically neutral and merely describes a state of drawing benefits, in conventional usage it has taken on a very negative meaning that blames welfare recipients for social ills and insinuates they are morally deficient. In his 1995 book The War Against the Poor, Columbia University sociology professor Herbert Gans asserted that the label “welfare recipient,” when used to malign a poor person, transforms the individual’s experience of being in poverty into a personal failing while ignoring positive aspects of their character. For example, Gans writes, “That a welfare recipient may be a fine mother becomes irrelevant; the label assumes that she, like all others in her family, is a bad mother, and she is given no chance to prove otherwise.”  In this way, structural factors that cause a person to be reliant on benefit payments for the majority of his or her income are in essence ignored because the problem is seen as situated within the person, not society. To describe a person as welfare dependent can therefore be interpreted as "blaming the victim," depending on context.

The term "welfare-reliant," as used by Edin and Lein (1996), can describe the same concept with potentially fewer negative connotations.

Welfare, long-term reliance, and policy

There is a great deal of overlap between discourses of welfare dependency and the stereotype of the welfare queen, in that long-term welfare recipients are often seen as draining public resources they have done nothing to earn, as well as stereotyped as doing nothing to improve their situation, choosing to draw benefits when there are alternatives available. This contributes to stigmatization of welfare recipients. While the stereotype of a long-term welfare recipient involves not wanting to work, in reality a large proportion of welfare recipients are engaged in some form of paid work but still cannot make ends meet.

Attention was drawn to the issue of long-term reliance on welfare in the Moynihan Report. Assistant Secretary of Labor Daniel Patrick Moynihan argued that in the wake of the 1964 Civil Rights Act, urban Black Americans would still suffer disadvantage and remain entrenched in poverty due to the decay of the family structure. Moynihan wrote, “The steady expansion of welfare programs can be taken as a measure of the steady disintegration of the Negro family structure over the past generation in the United States.” The relatively high proportion of Black families headed by single-parent mothers, along with the high proportion of children born out of wedlock, was seen as a pernicious social problem – one leading to long-term poverty and consequently reliance on welfare benefits for income, as there would be no male breadwinner working while the mother took care of her children.

From 1960 to 1975, both the percentage of families headed by single-parent mothers and reliance on welfare payments increased. At the same time, research began indicating that the majority of people living below the poverty line experienced only short spells of poverty, casting doubt on the notion of an entrenched underclass. For example, a worker who lost their job might be counted as poor for a few months before re-entering full-time employment, and would be much less likely to end up in long-term poverty than a single-parent mother with little formal education, even if both were considered "poor" for statistical purposes.

In 1983, researchers Mary Jo Bane and David T. Ellwood used the Panel Study of Income Dynamics to examine the duration of spells of poverty (defined as continuous periods spent with income under the poverty line), looking specifically at entry and exit. They found that while three in five people who were just beginning a spell of poverty came out of it within three years, only one-quarter of people who had already been poor for three years were able to exit poverty within the next two. The probability that a person will be able to exit poverty thus declines as the spell lengthens. A small but significant group of recipients remained on welfare for much longer, forming the bulk of poverty at any one point in time and requiring the most in government resources. If a cross-sectional sample of poor people in the United States were taken at any one time, about 60% would be in a spell of poverty lasting at least eight years. Interest thus arose in studying the determinants of long-term receipt of welfare. Bane & Ellwood found that only 37% of poor people in their sample became poor as a result of the head of household’s wages decreasing, and their average spell of poverty lasted less than four years. On the other hand, spells of poverty that began when a woman became head of household lasted on average more than five years. Children born into poverty were particularly likely to remain poor.

Reform: the rise of workfare

In the popular imagination, welfare became seen as something that the poor had made into a lifestyle rather than a safety net. The federal government had been urging single-parent mothers with children to take on paid work in an effort to reduce welfare rolls since the introduction of the WIN Program in 1967, but in the 1980s this emphasis became central to welfare policy. Emphasis turned toward personal responsibility and the attainment of self-sufficiency through work.

Conservative views of welfare dependency, coming from the perspective of classical economics, argued that individual behaviors and the policies that reward them lead to the entrenchment of poverty. Lawrence M. Mead's 1986 book Beyond Entitlement: The Social Obligations of Citizenship argued that American welfare was too permissive, giving out benefit payments without demanding anything from poor people in return, particularly not requiring the recipient to work. Mead viewed this as directly linked to the higher incidence of social problems among poor Americans, more as a cause than an effect of poverty:

"[F]ederal programs have special difficulties in setting standards for their recipients. They seem to shield their clients from the threats and rewards that stem from private society – particularly the marketplace – while providing few sanctions of their own. The recipients seldom have to work or otherwise function to earn whatever income, service, or benefit a program gives; meager though it may be, they receive it essentially as an entitlement. Their place in American society is defined by their need and weakness, not their competence. This lack of accountability is among the reasons why nonwork, crime, family breakup, and other problems are much commoner among recipients than Americans generally."

Charles Murray argued that American social policy ignored people's inherent tendency to avoid hard work and behave amorally, and that from the War on Poverty onward the government had given welfare recipients disincentives to work, to marry, and to have children within marriage. His 1984 book Losing Ground was also highly influential in the welfare reforms of the 1990s.

In 1983, Bane & Ellwood found that one-third of single-parent mothers exited poverty through work, indicating that it was possible for employment to form a route out of reliance on welfare even for this particular group. Overall, four in five exits from poverty could be explained by an increase in earnings, according to their data. The idea of combining welfare reform with work programs in order to reduce long-term dependency received bipartisan support during the 1980s, culminating in the signing of the Family Support Act in 1988. This Act aimed to reduce the number of AFDC recipients, enforce child support payments, and establish a welfare-to-work program. One major component was the Job Opportunities and Basic Skills Training (JOBS) program, which provided remedial education and was specifically targeted to teenage mothers and recipients who had been on welfare for six years or more – those populations considered most likely to be welfare dependent. JOBS was to be administered by the states, with federal government matching up to a capped level of funding. A lack of resources, particularly in relation to financing and case management, stymied JOBS. However, in 1990, expansion of the Earned Income Tax Credit (EITC), first enacted in 1975, offered working poor families with children an incentive to remain in work. Also in that year, federal legislation aimed at providing child care to families who would otherwise be dependent on welfare aided single-parent mothers in particular.

Welfare reform during the Clinton presidency placed time limits on benefit receipt, replacing Aid to Families with Dependent Children (AFDC) and the JOBS program with Temporary Assistance for Needy Families (TANF) and requiring that recipients begin to work after two years of receiving these payments. Such measures were intended to decrease welfare dependence: The House Ways and Means Committee stated that the goal of the Personal Responsibility and Work Opportunity Reconciliation Act was to "reduce the length of welfare spells by attacking dependency while simultaneously preserving the function of welfare as a safety net for families experiencing temporary financial problems." This was a direct continuation of the emphasis on personal responsibility that had been prevalent in the 1980s. TANF was administered by individual states, with funding coming from federal block grants. However, resources were not adjusted for inflation, caseload changes, or changes in state spending. Unlike its predecessor AFDC, TANF had as its explicit goal the formation and maintenance of two-parent families and the prevention of out-of-wedlock births, reflecting the discourses that had come to surround long-term welfare receipt.

One shortcoming of workfare-based reform was that it did not take into account the fact that, due to welfare benefits often not paying enough to meet basic needs, a significant proportion of mothers on welfare already worked "off the books" to generate extra income without losing their welfare entitlements. Neither welfare nor work alone could provide enough money for daily expenses; only by combining the two could the recipients provide for themselves and their children. Even though working could make a woman eligible for the Earned Income Tax Credit, the amount was not enough to make up for the rest of her withdrawn welfare benefits. Work also brought with it related costs, such as transportation and child care. Without fundamental changes in the skill profile of the average single-parent mother on welfare to address structural changes in the economy, or a significant increase in pay for low-skilled work, withdrawing welfare benefits and leaving women with only work income meant that many faced a decline in overall income. Sociologists Kathryn Edin and Laura Lein interviewed mothers on welfare in Chicago, Charleston, Boston, and San Antonio, and found that while working mothers generally had more income left over after paying rent and food than welfare mothers did, the former were still worse-off financially because of the costs associated with work. Despite strong support for the idea that work will provide the income and opportunity to help people become self-sufficient, this approach has not alleviated the need for welfare payments in the first place: In 2005, approximately 52% of TANF recipients lived in a family with at least one working adult.

Measuring dependency

The United States Department of Health and Human Services defines ten indicators of welfare dependency:

  • Indicator 1: Degree of Dependence, which can be measured by the percentage of total income from means-tested benefits. If greater than 50%, the recipient of welfare is considered to be dependent on it for the purposes of official statistics.
  • Indicator 2: Receipt of Means-Tested Assistance and Labor Force Attachment, or what percentage of recipients are in families with different degrees of labor force participation.
  • Indicator 3: Rates of Receipt of Means-Tested Assistance, or the percentage of the population receiving TANF, food stamps, and SSI.
  • Indicator 4: Rates of Participation in Means-Tested Assistance Programs, or the percentage of people eligible for welfare benefits who are actually claiming them.
  • Indicator 5: Multiple Program Receipt, or the percentage of recipients who are receiving at least two of TANF, food stamps, or SSI.
  • Indicator 6: Dependence Transitions, which breaks down recipients by demographic characteristics and the level of income that welfare benefits represented for them in previous years.
  • Indicator 7: Program Spell Duration, or for how long recipients draw the three means-tested benefits.
  • Indicator 8: Welfare Spell Duration with No Labor Force Attachment, which measures how long recipients with no one working in their family remain on welfare.
  • Indicator 9: Long Term Receipt, which breaks down spells on TANF by how long a person has been in receipt.
  • Indicator 10: Events Associated with the Beginning and Ending of Program Spells, such as an increase in personal or household income, marriage, children no longer being eligible for a benefit, and/or transfer onto other benefits.

In 2005, the Department estimated that 3.8% of the American population could be considered dependent on welfare, calculated as having more than half of their family’s income come from TANF, food stamps, and/or SSI payments, down from 5.2% in 1996. As 15.3% of the population was in receipt of welfare benefits in 2005, it follows that approximately one-quarter of welfare recipients were considered dependent under the official measure. Measures of welfare dependence are generally assessed alongside statistics for poverty as a whole.
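
The arithmetic behind these official figures is simple enough to check directly. The short Python sketch below is purely illustrative: the family income figures and the function name are invented for the example and are not part of any HHS tooling; only the 2005 population shares quoted above come from the text.

    # Minimal sketch of the Indicator 1 calculation, using hypothetical numbers.
    def is_dependent(total_income, means_tested_income):
        """A family counts as welfare dependent under Indicator 1 when more than
        half of its total annual income comes from TANF, food stamps, and/or SSI."""
        return means_tested_income > 0.5 * total_income

    # Hypothetical family: $14,000 total income, $8,000 of it from means-tested benefits.
    print(is_dependent(14_000, 8_000))   # True (about 57% of income comes from benefits)

    # Population shares for 2005 quoted above (percent of the total population).
    dependent_share = 3.8    # more than half of family income from the three benefits
    recipient_share = 15.3   # received any of the three benefits during the year
    print(round(dependent_share / recipient_share, 2))   # 0.25, i.e. roughly one-quarter of recipients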

Government measures of welfare dependence include welfare benefits associated with work. If such benefits were excluded from calculations, the dependency rate would be lower.

[Figure: Welfare Receipt and Dependence]


Risk factors

Demographic

Welfare dependence in the United States is typically associated with female-headed households with children. Mothers who have never been married are more likely to stay on welfare for long periods of time than their counterparts who have ever been married, including women who became separated or divorced from their partners. In her study using data from the 1984 Survey of Income and Program Participation, Patricia Ruggles found that 40% of never-married mothers remained on welfare for more than two years, and that while the median time spent on welfare for ever-married women was only 8 months, for never-married women it was between 17 and 18 months. Statistics from 2005 show that while only 1% of people living in married-couple families could be classified as welfare-dependent under the government definition, 14% of people in families headed by single mothers were dependent.

Teenage mothers in particular are susceptible to relying on welfare for long periods of time because interrupted schooling, combined with the responsibilities of childrearing, prevents them from gaining employment; there is no significant difference between single and married teenage mothers because their partners are likely to be poor as well. While many young and/or single mothers do seek work, their relatively low skill levels, along with the burden of finding appropriate childcare, hurt their chances of remaining employed.

Black women are more likely than their White counterparts to be lone parents, which partially explains their higher rate of welfare dependency. At the time of the Moynihan Report, approximately one-quarter of Black households were headed by women, compared to about one in ten White households. Ruggles’ data analysis found that, in 1984, the median time on welfare for nonwhite recipients was just under 16 months, while for White recipients it was approximately 8 months. One year earlier, Bane & Ellwood found that the average duration of a new spell of poverty for a Black American was approximately seven years, compared to four years for Whites. In 2005, official statistics stated that 10.2% of Black Americans were welfare dependent, compared to 5.7% of Hispanics and 2.2% of non-Hispanic Whites.

William Julius Wilson, in The Truly Disadvantaged, explained that a shrinking pool of “marriageable” Black men, thanks to increasing unemployment brought about by structural changes in the economy, leads to more Black women remaining unmarried. However, there is no evidence that welfare payments themselves provide an incentive for teenage girls to have children or for Black women to remain unmarried.

There is an association between a parent's welfare dependency and that of her children; a mother's welfare participation increases the likelihood that her daughter, when grown, will also be dependent on welfare. The mechanisms through which this happens may include a lessened sense of stigma about being on welfare, weaker job prospects because the child never observed a parent participating in the labor market, and detailed knowledge of how the welfare system works acquired from a young age. In some cases, the unemployment trap may function as a perverse incentive to remain dependent on welfare payments: because benefits are withdrawn as earnings rise, returning to work would not significantly increase household income, and the associated costs and stressors would outweigh any gains. This trap can be eliminated through the addition of work subsidies.
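
To make the unemployment trap concrete, the sketch below works through purely hypothetical figures; the weekly benefit amount, the benefit-withdrawal rate, and the work-related costs are invented for illustration and are not drawn from any specific program.

    # Hypothetical illustration of the unemployment trap (all figures invented).
    def net_income(gross_earnings, base_benefit=200.0, withdrawal_rate=0.70, work_costs=60.0):
        """Weekly income after benefits are withdrawn against earnings and
        work-related costs (transportation, childcare) are paid."""
        benefit = max(0.0, base_benefit - withdrawal_rate * gross_earnings)
        costs = work_costs if gross_earnings > 0 else 0.0
        return gross_earnings + benefit - costs

    print(net_income(0))     # 200.0 -> staying on benefits
    print(net_income(250))   # 215.0 -> a full week of low-paid work adds only $15

Under these assumed parameters, taking the job raises net income by only $15 a week; a work subsidy such as an earned income credit, which adds to each dollar earned rather than subtracting from it, is what weakens the trap.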

Other factors which entrench welfare dependency, particularly for women, include lack of affordable childcare, low education and skill levels, and unavailability of suitable jobs. Research has found that women who have been incarcerated also have high rates of social welfare receipt, especially if they were incarcerated in state prison rather than in county jail.

Structural economic factors

Kasarda and Ting (1996) argue that poor people become trapped in welfare dependency due to a lack of skills combined with spatial mismatch. Since World War II, American cities have produced a surplus of high-skilled jobs that are beyond the reach of most urban welfare recipients, who do not have the appropriate skills. This is in large part due to fundamental inequalities in the quality of public education, which are themselves traceable to class disparities because school funding relies heavily on local property taxes. Meanwhile, low-skilled jobs have declined within cities, moving out toward more economically advantageous suburban locations. Under the spatial mismatch hypothesis, reducing urban welfare dependence, particularly among Black residents, would rely on giving potential workers access to suitable jobs in affluent suburbs. This would require changes in policies related not only to welfare but also to housing and transportation, to break down barriers to employment.

Without appropriate jobs, it can be argued using rational choice theory that welfare recipients would make the decision to do what is economically advantageous to them, which often means not taking low-paid work that would require expensive childcare and lengthy commutes. This would explain dependence on welfare over work. However, a large proportion of welfare recipients are also in some form of work, which casts doubt on this viewpoint.

The persistence of racism

One perspective argues that structural problems, particularly persistent racism, have concentrated disadvantage among urban Black residents and thus caused their need to rely on long-term welfare payments. Housing policies segregated Black Americans into impoverished neighborhoods and formally blocked avenues to quality education and high-paying employment. Economic growth in the 1980s and 1990s did not alleviate poverty, largely because wages remained stagnant while the availability of low-skilled but decent-paying jobs disappeared from American urban centers. Poverty could be alleviated by better-targeted economic policies as well as concerted efforts to penalize racial discrimination. However, William Julius Wilson, in The Truly Disadvantaged, urges caution in initiating race-based programs as there is evidence they may not benefit the poorest Black people, which would include people who have been on welfare for long periods of time.

Cultural

Oscar Lewis introduced a theory of a culture of poverty in the late 1950s, initially in the context of anthropological studies in Mexico. However, the idea gained currency and influenced the Moynihan Report. This perspective argues that poverty is perpetuated by a value system different from that of mainstream society, influenced by the material deprivation of one's surroundings and the experiences of family and friends. There are both liberal and conservative interpretations of the culture of poverty: the former argues that lack of work and opportunities for mobility have concentrated disadvantage and left people feeling as if they have no way out of their situation; the latter believe that welfare payments and government intervention normalize and incentivize relying on welfare, not working, and having children out of wedlock, and consequently transmit social norms supporting dependency to future generations.

Reducing poverty or reducing dependence?

Reducing poverty and reducing dependence are not equivalent. Cutting the number of individuals receiving welfare payments does not mean that poverty itself has been proportionally reduced, because many people with incomes below the official poverty line may no longer receive the transfer payments to which they would previously have been entitled. For example, in the early 1980s there was a particularly large discrepancy between the official poverty rate and the number of AFDC recipients due to major government cuts in AFDC provision. As a result, many people who previously would have been entitled to welfare benefits no longer received them, so official measures of poverty rose even as dependence fell. While official welfare rolls were halved between 1996 and 2000, many working poor families were still reliant on government aid in the form of unemployment insurance, Medicaid, and assistance with food and childcare.

Changes in the practices surrounding the administration of welfare may obscure continuing problems with poverty, and the surrounding discourse has been slow to change in the face of new evidence. Whereas in the 1980s and much of the 1990s discussions of problems with welfare centered on dependency, the focus in more recent years has come to rest on working poverty. The behavior of this particular group of poor people has changed, but their poverty has not been eliminated. Poverty rates in the United States have risen since the implementation of welfare reform. States that maintain more generous welfare benefits tend to have fewer people living below the poverty line, even when only pre-transfer income is considered.

In the United Kingdom

The Conservative/Liberal Democrat coalition government that took office in May 2010 set out to reduce welfare dependency, primarily relying on workfare and initiatives targeted at specific groups, such as disabled people, who are more likely to spend long periods of time receiving welfare payments. The Department for Work and Pensions has released a report claiming that Disability Living Allowance, the main payment given to people who are severely disabled, "can act as a barrier to work" and causes some recipients to become dependent on it as a source of income rather than looking for a suitable job. Iain Duncan Smith, Secretary of State for Work and Pensions, has argued that the United Kingdom has a culture of welfare dependency and a "broken" welfare system in which a person would be financially better off living on state benefits than taking a job paying less than £15,000 annually. Critics argue that this is an excuse for the government to make large-scale cuts in services, and that it perpetuates the stereotype that people on Incapacity Benefit or Disability Living Allowance are unwilling to work, faking their condition, or otherwise being "scroungers".

The previous Labour government introduced active labour market policies intended to reduce welfare dependency, an example of the Third Way philosophy favored by prime minister Tony Blair. The New Deal programs, targeted towards different groups of long-term unemployed people such as lone parents, young people, disabled people, and musicians, gave the government the ability to stop the benefit payments of people who did not accept reasonable offers of employment.

 

Refusenik

From Wikipedia, the free encyclopedia
 
January 10, 1973. Soviet Jewish refuseniks demonstrate in front of the Ministry of Internal Affairs for the right to emigrate to Israel.
 
A type 2 USSR exit visa. This type of visa was issued to those who received permission to leave the USSR permanently and lost their Soviet citizenship. Many people who wanted to emigrate were unable to receive this kind of exit visa.

Refusenik (Russian: отказник, otkaznik, from "отказ", otkaz "refusal") was an unofficial term for individuals—typically, but not exclusively, Soviet Jews—who were denied permission to emigrate, primarily to Israel, by the authorities of the Soviet Union and other countries of the Eastern bloc. The term refusenik is derived from the "refusal" handed down to a prospective emigrant from the Soviet authorities.

Although the term is most closely associated with Soviet Jews, other groups seeking to emigrate were also refused permission.

A typical basis for denying emigration was real or alleged association with Soviet state secrets. Some individuals were labelled as foreign spies or potential seditionists who purportedly wanted to abuse Israeli aliyah and the Law of Return as a means of fleeing abroad to escape punishment for high treason or sedition.

Applying for an exit visa was noted by the KGB and could impair future career prospects, which were always uncertain for Soviet Jews. As a rule, Soviet dissidents and refuseniks were fired from their workplaces and denied employment in their field of specialization. As a result, they had to find menial work, such as street sweeping, or face imprisonment on charges of social parasitism.

The ban on Jewish emigration to Israel was lifted in 1971, leading to the 1970s Soviet Union aliyah. The coming to power of Mikhail Gorbachev in the mid-1980s, and his policies of glasnost and perestroika, as well as a desire for better relations with the West, led to major changes, and most refuseniks were allowed to emigrate.

Over time, "refusenik" has entered colloquial English for a person who refuses to do something, especially by way of protest.

History of the Jewish refuseniks

A large number of Soviet Jews applied for exit visas to leave the Soviet Union, especially in the period following the 1967 Six-Day War. While some were allowed to leave, many were refused permission to emigrate, either immediately or after their cases had languished for years in the OVIR (ОВиР, "Отдел Виз и Регистрации", "Otdel Viz i Registratsii", English: Office of Visas and Registration), the MVD (Soviet Ministry of Internal Affairs) department responsible for exit visas. In many instances, the reason given for denial was that these persons had been given access, at some point in their careers, to information vital to Soviet national security and could not now be allowed to leave.

During the Cold War, Soviet Jews were thought to be a security liability or possible traitors. To apply for an exit visa, the applicants (and often their entire families) would have to quit their jobs, which in turn would make them vulnerable to charges of social parasitism, a criminal offense.

Many Jews encountered systematic, institutional antisemitism which blocked their opportunities for advancement. Some government sectors were almost entirely off-limits to Jews. In addition, Soviet restrictions on religious education and expression prevented Jews from engaging in Jewish cultural and religious life. While these restrictions led many Jews to seek emigration, requesting an exit visa was itself seen as an act of betrayal by Soviet authorities. Thus, prospective emigrants requested permission to emigrate at great personal risk, knowing that an official refusal would often be accompanied by dismissal from work and other forms of social ostracism and economic pressure.

Hijacking incident

In 1970, a group of sixteen refuseniks (two of whom were non-Jewish), organized by the dissident Eduard Kuznetsov (who had already served a seven-year term in Soviet prisons), plotted to buy all the seats on the local Leningrad-Priozersk flight, a small 12-seat Antonov An-2 (colloquially known as "кукурузник", kukuruznik), under the guise of a trip to a wedding, throw the pilots out before takeoff from an intermediate stop, and fly the aircraft to Sweden, knowing they faced a huge risk of being captured or shot down. One of the participants, Mark Dymshits, was a former military pilot.

On 15 June 1970, after arriving at Smolnoye (later Rzhevka) Airport near Leningrad, the entire group of the "wedding guests" was arrested by the MVD.

The accused were charged with high treason, punishable by death under Article 64 of the Penal Code of the RSFSR. Mark Dymshits and Eduard Kuznetsov were sentenced to death, but after international protests the sentences were commuted on appeal to 15 years of incarceration. The other sentences were: Yosef Mendelevitch and Yuri Fedorov, 15 years; Aleksey Murzhenko, 14 years; Sylva Zalmanson (Kuznetsov's wife and the only woman on trial), 10 years; Arie (Leib) Knokh, 13 years; Anatoli Altmann, 12 years; Boris Penson, 10 years; Israel Zalmanson, 8 years; Wolf Zalmanson (brother of Sylva and Israel), 10 years; Mendel Bodnya, 4 years.

Crackdown on refusenik activism and its growth

Jewish emigration from the USSR, before and after the First Leningrad Trial

The affair was followed by a crackdown on the Jewish and dissident movement throughout the USSR. Activists were arrested, makeshift centers for studying the Hebrew language and Torah were closed, and more trials followed. At the same time, strong international condemnations caused the Soviet authorities to significantly increase the emigration quota. In the years 1960 through 1970, only about 3,000 Soviet Jews had (legally) emigrated from the USSR; after the trial, in the period from 1971 to 1980, 347,100 people received a visa to leave the USSR, 245,951 of whom were Jews.

Refuseniks included Jews who wished to emigrate on religious grounds, Jews seeking to move to Israel out of Zionist aspirations, and relatively secular Jews who wanted to escape continuous state-sponsored antisemitism.

A leading proponent of and spokesman for refusenik rights during the mid-1970s was Natan Sharansky. Sharansky's involvement with the Moscow Helsinki Group helped to establish the struggle for emigration rights within the greater context of the human rights movement in the USSR. His arrest on charges of espionage and treason and his subsequent trial contributed to international support for the refusenik cause.

International pressure

Yuli Edelstein, one of the Soviet Union's most prominent refuseniks, who served as Speaker of the Knesset (Israel's parliament) from 2013 to 2020.

On 18 October 1976, 13 Jewish refuseniks came to the Presidium of the Supreme Soviet to petition for explanations of denials of their right to emigrate from the USSR, as affirmed under the Helsinki Final Act. Failing to receive any answer, they assembled in the reception room of the Presidium on the following day. After a few hours of waiting, they were seized by the police, taken outside of the city limits and beaten. Two of them were kept in police custody.

Over the following week, after an unsuccessful meeting between the activists' leaders and the Soviet Minister of Internal Affairs, General Nikolay Shchelokov, these abuses of the law inspired several demonstrations in the Soviet capital. On Monday, 25 October 1976, 22 activists, including Mark Azbel, Felix Kandel, Alexander Lerner, Ida Nudel, Anatoly Shcharansky, Vladimir Slepak, and Michael Zeleny, were arrested in Moscow on their way to the next demonstration. They were convicted of hooliganism and incarcerated in the Beryozka detention center and other penitentiaries in and around Moscow. An unrelated party, the artist Victor Motko, arrested in Dzerzhinsky Square, was detained along with the protesters in recognition of his prior attempts to emigrate from the USSR. These events were covered by several British and American journalists, including David K. Shipler, Craig R. Whitney, and Christopher S. Wren. The October demonstrations and arrests coincided with the final days of the 1976 United States presidential election. On October 25, U.S. presidential candidate Jimmy Carter expressed his support for the protesters in a telegram sent to Shcharansky and urged the Soviet authorities to release them. (See Léopold Unger, Christian Jelen, Le grand retour, A. Michel, 1977; Феликс Кандель, Зона отдыха, или Пятнадцать суток на размышление, Типография Ольшанский Лтд, Иерусалим, 1979; Феликс Кандель, Врата исхода нашего: Девять страниц истории, Effect Publications, Tel-Aviv, 1980.) On 9 November 1976, a week after Carter won the presidential election, the Soviet authorities released all but two of the previously arrested protesters. Several more were subsequently rearrested and incarcerated or exiled to Siberia.

On 1 June 1978, refuseniks Vladimir and Maria Slepak stood on the eighth-story balcony of their apartment building. By then they had been denied permission to emigrate for over eight years. Vladimir displayed a banner that read "Let us go to our son in Israel". His wife Maria held a banner that read "Visa for my son". Fellow refusenik and Helsinki activist Ida Nudel held a similar display on the balcony of her own apartment. They were all arrested and charged with malicious hooliganism in violation of Article 206.2 of the Penal Code of the Soviet Union. The Moscow Helsinki Group protested their arrests in circulars dated 5 and 15 June of that year. Vladimir Slepak and Ida Nudel were convicted of all charges and served five and four years in Siberian exile, respectively.


Cities of Refuge

From Wikipedia, the free encyclopedia
 
Fleeing to the City of Refuge (Numbers 35:11-28). From Charles Foster, The Story of the Bible, 1884.
 
The Cities of Refuge (Hebrew: ערי המקלט, ‘ārê ha-miqlāṭ) were six Levitical towns in the Kingdom of Israel and the Kingdom of Judah in which the perpetrators of accidental manslaughter could claim the right of asylum. Maimonides, invoking talmudic literature, expands the count of cities of refuge to all 48 Levitical cities. Outside of these cities, blood vengeance against such perpetrators was allowed by law. The Bible names the six cities of refuge as Golan, Ramoth, and Bosor, on the east (left bank) of the Jordan River, and Kedesh, Shechem, and Hebron on the western (right) side.

Biblical regulations

In Numbers

In the Book of Numbers, the laws concerning the cities of refuge state that, once a perpetrator had claimed asylum, he had to be taken from the city and put on trial; if the trial found him innocent of murder, he had to be returned under guard (for his own protection) to the city in which he had claimed asylum. This law code treats blood money as an unacceptable device that would compound the crime, insisting that atonement can only be made by the murderer's blood.

Numbers also states that once the Jewish high priest had died, no harm was allowed to come to the perpetrator, who was then free to leave the city without fear.

In Deuteronomy

In the setting of the Book of Deuteronomy, the Israelites have conquered several kingdoms on the east side of the Jordan river and are about to enter the land of Canaan. At this point, Moses separated three cities of refuge on the east side. Later on, it is prescribed that three cities of refuge be set aside in Canaan once it is conquered, with three additional cities to be set aside 'if the Lord your God enlarges your territory'. Thus, the total number of cities could be as high as nine. Albert Barnes stated that the additional three cities allowed for "the anticipated enlargement of the borders of Israel to the utmost limits promised by God, from the river of Egypt to the Euphrates" (Genesis 15:18), and the King James Version refers in Deuteronomy 19:8 to the enlargement of the coast of the promised land.

While the Book of Numbers describes the perpetrator being put on trial, Deuteronomy merely states that if the perpetrator is guilty of murder, the elders of the town in which the crime was committed should demand the perpetrator's return and hand him over without pity to the avenger of blood to be killed. Deuteronomy does not give any role to the high priest or mention the terms on which the perpetrator could return home, but it does state that roads should be built to the cities of refuge to ease the perpetrator's escape to them.

In Joshua

A chapter in the Book of Joshua also reiterates the regulations for the cities of refuge, adding that when a perpetrator arrived at a city, he had to explain to the city elders what had happened, after which they had to find him a place to live within the city. Modern biblical critics regard the chapter as having been written by the Deuteronomist. Though the Masoretic Text for this chapter includes a role for the death of the high priest, the Septuagint's version of the chapter does not mention it.

Origin and development

In many ancient cultures, the inviolability of deities was considered to extend to their religious sanctuaries and all that resided within, whether criminals, debtors, escaped slaves, priests, ordinary people, or, in some cases, passing cattle; biblical scholars suspect that Israelite culture was originally no different. In general, the area covered by these rights of sanctuary varied from a small area around the altar or other centrepiece to a large area beyond the limits of the town containing the sanctuary (the limits often being marked in some way), depending on the significance of the deity and the importance of the sanctuary; it was considered a greater crime to drag an individual from the sanctuary or to kill them there than it was to defile the sanctuary itself.

Biblical scholars perceive this simple right of asylum at sanctuaries as being presented by the Covenant Code, which textual scholars attribute to the 8th century BC. Biblical scholars also believe that this right was the context underlying the account in the Books of Kings of Joab and Adonijah each fleeing from Solomon to an altar, with their opponents being unwilling to attack them while they remained there; textual scholars regard these passages as being part of the Court History of David, which they date to the 9th century BC, or earlier.

Over time, these general rights of asylum were gradually curtailed, as some sanctuaries had become notorious hotbeds of crime; in Athens, for example, the regulations were changed so that slaves were only permitted to escape to the sanctuary of the temple of Theseus. This is considered by scholars to be the reason that, in Israelite culture, the rights were restricted to just six locations by the time the Priestly Code was compiled—the late 7th century according to textual scholars—and it is thus regarded by biblical scholars as being no coincidence that the three cities of refuge to the west of the Jordan were also important ancient religious sanctuaries; little is known about the cities of refuge to the east of the Jordan (as of 1901), but scholars consider it reasonable to assume that they were once also important sanctuaries.

The Deuteronomic Code is regarded by textual scholars as dating from the reign of Josiah, which postdates the fall of the Kingdom of Israel to the Assyrians; this is considered to be the reason that only three (unnamed) cities of refuge are mentioned in the Deuteronomic Code, with a further three only being added if the Israelite territory was expanded, as by the time of Josiah's reign, the cities east of the Jordan were no longer controlled by the Israelites. The lack of importance given by the Deuteronomic Code to the identity of the cities of refuge is considered by scholars to be an attempt to continue the right of asylum, even though the sanctuaries (apart from the Temple in Jerusalem) had been abolished by Josiah's reforms.

In rabbinic sources

As killers were freed from the city of refuge upon the death of the High Priest, the Mishnah states that the high priest's mother would traditionally supply them with clothing and food, so that they would not wish for the death of her son. The Talmud argues that the death of the high priest formed an atonement, as the death of pious individuals counted as an atonement. Maimonides argued that the death of the high priest was simply an event so upsetting to the Israelites that they dropped all thoughts of vengeance.

The Talmud states that, in accordance with the requirement to build roads to the cities of refuge, the roads to these cities were not only marked by signposts saying "Refuge", but were also 32 ells wide (twice the regulation width) and kept particularly smooth and even, so that fugitives were as unhindered as possible.

The classical rabbinical writers regarded all the cities controlled by the Levites as being cities of refuge, although they considered that asylum could only be claimed against the will of a city's inhabitants if the city was one of the six main cities of refuge. Although the six main cities of refuge were named in the Bible, the Talmudic sources argued that other cities could, over time, be officially substituted for these six to take account of changing political circumstances. The substitute cities of refuge were constrained to be only of moderate size: if they were too small, food could become scarce, forcing the refugee to imperil himself by leaving the city to find sustenance, and if they were too large, it would be too easy for an avenger of blood to hide in the crowds. Nevertheless, the surrounding region was required to be quite populous, so that an attack by the avenger of blood could be more easily repelled. The altar of the Temple in Jerusalem also came to be regarded as a place of sanctuary, but it counted only for the officiating priest, and even then only temporarily, as the priest ultimately had to be taken to a city of refuge; when Jerusalem was under Seleucid control, Demetrius I offered to turn the Temple into an official place of sanctuary, though the offer was turned down.

The rabbinical sources differentiated between four forms of killing, sometimes giving examples:

  • Complete innocence, for which no further action was necessary. This arises when someone is killed while the perpetrator is fulfilling his legal duties; for example, when someone is accidentally killed by a teacher applying corporal punishment.
  • Negligence, which required exile to a city of refuge. This arises when someone is killed as a result of a legal activity that the perpetrator was not required to perform.
  • Severe carelessness, for which exile is insufficient. This arises when someone is accidentally killed as a result of illegal activity by the perpetrator; for example, when a shop owner fails to maintain their property and it collapses and kills a legitimate customer.
  • Murder, which was subject to the death penalty.

According to classical rabbinical authorities, the cities of refuge were not places of protection, but places where atonement was made; Philo explained this principle as being based on the theory that an innocent man would never be chosen by God as the instrument of another man's death, and therefore those claiming refuge at these cities must have committed some sin before they had killed, for which their exile acts as an atonement. Thus, these rabbinical authorities argued that if the perpetrator had died before reaching a city of refuge, their body still had to be taken there, and, if they had died before the high priest had, then their body had to be buried at the city of refuge until the high priest expired; even if the perpetrator lived beyond the death of the high priest, some opinions forbade them from holding political office. Furthermore, since it was to be a place of atonement, the rabbinical authorities required that the perpetrator should always contemplate the fact that they had killed someone and should refuse any honour that the denizens of the city might grant them from time to time, unless the denizens persisted.

Right of asylum

From Wikipedia, the free encyclopedia
 
Asylum seekers by country of origin in 2009.
 
Remains of one of four medieval stone boundary markers for the sanctuary of Saint John of Beverley in the East Riding of Yorkshire.
 
Sanctuary ring on a door of Notre-Dame de Paris (France).
 
Medieval boundary marker at St. Georgenberg, Tyrol.
 
Plaque at St. Mary Magdalene Chapel, Dingli, Malta, indicating that the chapel did not enjoy ecclesiastical immunity.

The right of asylum (sometimes called the right of political asylum; from the Ancient Greek word ἄσυλον) is an ancient juridical concept under which a person persecuted by their own country may be protected by another sovereign authority, such as another country or a church official, who in medieval times could offer sanctuary. This right was recognized by the Egyptians, the Greeks, and the Hebrews, from whom it was adopted into Western tradition. René Descartes fled to the Netherlands, Voltaire to England, and Thomas Hobbes to France, because each state offered protection to persecuted foreigners.

The Egyptians, Greeks, and Hebrews recognized a religious "right of asylum", protecting criminals (or those accused of crime) from legal action to some extent. This principle was later adopted by the established Christian church, and various rules were developed that detailed how to qualify for protection and what degree of protection one would receive.

The Council of Orleans decided in 511, in the presence of Clovis I, that asylum could be granted to anyone who took refuge in a church or on church property, or at the home of a bishop. This protection was extended to murderers, thieves and adulterers alike.

That "Everyone has the right to seek and to enjoy in other countries asylum from persecution" is enshrined in the United Nations Universal Declaration of Human Rights of 1948 and supported by the 1951 Convention Relating to the Status of Refugees and the 1967 Protocol Relating to the Status of Refugees. Under these agreements, a refugee is a person who is outside that person's own country's territory owing to fear of persecution on protected grounds, including race, caste, nationality, religion, political opinions and participation in any particular social group or social activities.

Medieval England

In England, King Æthelberht of Kent proclaimed the first Anglo-Saxon laws on sanctuary in about 600 AD. However, Geoffrey of Monmouth in his Historia Regum Britanniae (c. 1136) says that the legendary pre-Saxon king Dunvallo Molmutius (4th/5th century BC) enacted sanctuary laws among the Molmutine Laws as recorded by Gildas (c. 500–570). The term grith was used by the laws of King Ethelred. By the Norman era that followed 1066, two kinds of sanctuary had evolved: all churches had the lower-level powers and could grant sanctuary within the church proper, but only churches licensed by royal charter had the broader powers that extended sanctuary to a zone around the church. At least twenty-two churches had charters for this broader sanctuary.

Sometimes the criminal had to get to the chapel itself to be protected, or ring a certain bell, hold a certain ring or door-knocker, or sit on a certain chair ("frith-stool"). Some of these items survive at various churches. Elsewhere, sanctuary extended to an area around the church or abbey, sometimes with a radius of as much as a mile and a half. Stone "sanctuary crosses" marked the boundaries of the area; some of these crosses still exist. It could thus become a race between the felon and the medieval law officers to the nearest sanctuary boundary, and serving justice on the fleet of foot could prove a difficult proposition.

Church sanctuaries were regulated by common law. An asylum seeker had to confess his sins, surrender his weapons, and permit supervision by a church or abbey organization with jurisdiction. Seekers then had forty days to decide whether to surrender to secular authorities and stand trial for their alleged crimes, or to confess their guilt, abjure the realm, and go into exile by the shortest route and never return without the king's permission. Those who did return faced execution under the law or excommunication from the Church.

If the suspects chose to confess their guilt and abjure, they did so in a public ceremony, usually at the church gates. They would surrender their possessions to the church, and any landed property to the crown. The coroner, a medieval official, would then choose a port city from which the fugitive should leave England (though the fugitive sometimes had this privilege). The fugitive would set out barefooted and bareheaded, carrying a wooden cross-staff as a symbol of protection under the church. Theoretically they would keep to the main highway, reach the port, and take the first ship out of England. In practice, however, the fugitive could get a safe distance away, abandon the cross-staff, and start a new life. One can safely assume, however, that the victim's friends and relatives knew of this ploy and would do everything in their power to make sure it did not happen, or indeed that the fugitive never reached the intended port of call, instead falling victim to vigilante justice under the pretense that the fugitive had wandered too far off the main highway while trying to "escape."

Knowing the grim options, some fugitives rejected both choices and opted for an escape from the asylum before the forty days were up. Others simply made no choice and did nothing. Since it was illegal for the victim's friends to break into an asylum, the church would deprive the fugitive of food and water until a decision was made.

During the Wars of the Roses, when the Yorkists or Lancastrians would suddenly get the upper hand by winning a battle, some adherents of the losing side might find themselves surrounded by adherents of the other side and not able to get back to their own side. Upon realizing this situation they would rush to sanctuary at the nearest church until it was safe to come out. A prime example is Queen Elizabeth Woodville, consort of Edward IV of England.

In 1470, when the Lancastrians briefly restored Henry VI to the throne, Queen Elizabeth was living in London with several young daughters. She moved with them into Westminster for sanctuary, living there in royal comfort until Edward IV was restored to the throne in 1471; during that time she gave birth to their first son, Edward V. When King Edward IV died in 1483, Elizabeth (who was highly unpopular even with the Yorkists and probably did need protection) took her five daughters and youngest son (Richard, Duke of York) and again moved into sanctuary at Westminster. To be sure she had all the comforts of home, she brought so much furniture and so many chests that the workmen had to knock holes in some of the walls to get everything in fast enough to suit her.

Henry VIII changed the rules of asylum, reducing to a short list the types of crimes for which people were allowed to claim asylum. The medieval system of asylum was finally abolished entirely by James I in 1623.

Modern political asylum

Article 14 of the Universal Declaration of Human Rights states that "Everyone has the right to seek and to enjoy in other countries asylum from persecution." The United Nations 1951 Convention Relating to the Status of Refugees and the 1967 Protocol Relating to the Status of Refugees guide national legislation concerning political asylum. Under these agreements, a refugee is a person who is outside their own country's territory (or place of habitual residence if stateless) owing to fear of persecution on protected grounds. Protected grounds include race, caste, nationality, religion, political opinions, and membership or participation in any particular social group or social activities. Returning true victims of persecution to their persecutor is a violation of the principle of non-refoulement, part of customary international law.

These terms and criteria are accepted as principles and form a fundamental part of the non-refoulement obligation of the 1951 United Nations Convention Relating to the Status of Refugees.

Since the 1990s, victims of sexual persecution (which may include domestic violence, or systematic oppression of a gender or sexual minority) have come to be accepted in some countries as a legitimate category for asylum claims, when claimants can prove that the state is unable or unwilling to provide protection.

Right of asylum by country of refuge

The Dutch government grants asylum to a couple of hundred elderly people from Yugoslavia, Poland, Hungary, and the Baltic states, who had stayed in camps in Austria and West Germany since the end of World War II. (Newsreel, in Dutch)

European Union

Asylum in European Union member states took shape over a half-century through application of the Geneva Convention of 28 July 1951 on the Status of Refugees. Common policies appeared in the 1990s in connection with the Schengen Agreement (which suppressed internal borders), so that asylum seekers unsuccessful in one member state would not reapply in another. The common policy began with the Dublin Convention in 1990 and continued with the implementation of Eurodac and the Dublin Regulation in 2003, and the October 2009 adoption of two proposals by the European Commission.

France

France was the first country to recognize a constitutional right to asylum, enshrined in article 120 of the Constitution of 1793. The modern French right of asylum is recognized by the 1958 Constitution via paragraph 4 of the preamble to the Constitution of 1946, to which the preamble of the 1958 Constitution directly refers. The Constitution of 1946 incorporated parts of the 1793 constitution that had guaranteed the right of asylum to "anyone persecuted because of his action for freedom" who is unable to seek protection in his home country.

In addition to the constitutional right to asylum, the modern French right to asylum (droit d'asile) is enshrined on a legal and regulatory basis in the Code de l'entrée et du séjour des étrangers et du droit d'asile (CESEDA).

France also adheres to international agreements which set out how the right of asylum is to be applied, such as the 1951 United Nations (UN) Convention Relating to the Status of Refugees (ratified in 1952) and its additional 1967 protocol, articles K1 and K2 of the 1992 Maastricht Treaty, and the 1985 Schengen Agreement, which defined EU immigration policy. Finally, the right of asylum is defined by article 18 of the Charter of Fundamental Rights of the European Union.

Criteria on which an asylum application can be rejected include: i) passage via a "safe" third country; ii) safe country of origin (an asylum seeker can be refused asylum if they are a national of a country considered "safe" by the French asylum authority OFPRA); iii) safety threat (serious threat to public order); or iv) fraudulent application (abuse of the asylum procedure for other reasons).

The December 10, 2003, law limited political asylum through two main restrictions:

  • The notion of "internal asylum": the request may be rejected if the foreigner may benefit from political asylum on a portion of the territory of their home country.
  • The OFPRA (Office français de protection des réfugiés et apatrides, the French Office for the Protection of Refugees and Stateless Persons) now maintains a list of supposedly "safe countries" which respect political rights and principles of liberty. If the asylum seeker comes from such a country, the request is processed within 15 days and the applicant receives no social assistance. The applicant may contest the decision, but this does not suspend any deportation order. The first list, enacted in July 2005, included as "safe countries" Benin, Cape Verde, Ghana, Mali, Mauritius, India, Senegal, Mongolia, Georgia, Ukraine, Bosnia, and Croatia. Within six months it reduced the number of applicants from these countries by about 80%. The second list, passed in July 2006, included Tanzania, Madagascar, Niger, Albania, and Macedonia.

While restricted, the right of political asylum has been preserved in France amid various anti-immigration laws. Some people claim that, apart from the purely judicial path, the bureaucratic process is used to slow down and ultimately reject what might be considered valid requests. According to Le Figaro, France granted 7,000 people the status of political refugee in 2006, out of a total of 35,000 requests; in 2005, the OFPRA, which is in charge of examining the legitimacy of such requests, granted fewer than 10,000 out of a total of 50,000 requests.

Numerous exiles from South American dictatorships, particularly from Augusto Pinochet's Chile and the Dirty War in Argentina, were received in the 1970s and 1980s. Since the 2001 invasion of Afghanistan, dozens of homeless Afghan asylum seekers have been sleeping in a park in Paris near the Gare de l'Est train station. Although their requests have not yet been accepted, their presence has been tolerated. However, since the end of 2005, NGOs have noted that the police separate Afghans from other migrants during raids and expel via charter flights those who have just arrived at the Gare de l'Est by train and have not had time to apply for asylum (a May 30, 2005, decree requires them to pay for a translator to help with official formalities).

United Kingdom

In the 19th century, the United Kingdom accorded political asylum to various persecuted people, among whom were many members of the socialist movement (including Karl Marx). Following the 1894 attempted bombing of the Greenwich Royal Observatory and the 1911 Siege of Sidney Street, which took place in the context of anarchist propaganda of the deed, political asylum was restricted.

United States

The United States recognizes the right of asylum of individuals as specified by international and federal law. A specified number of legally defined refugees who apply for refugee status overseas, as well as those applying for asylum after arriving in the U.S., are admitted annually.

Since World War II, more refugees have found homes in the U.S. than in any other nation, and more than two million refugees have arrived in the U.S. since 1980. During much of the 1990s, the United States accepted over 100,000 refugees per year, though this figure decreased to around 50,000 per year in the first decade of the 21st century due to greater security concerns. As for asylum seekers, the latest statistics show that 86,400 persons sought sanctuary in the United States in 2001. Before the September 11 attacks, individual asylum applicants were evaluated in private proceedings at the U.S. Immigration and Naturalization Service (INS).

Despite this, concerns have been raised about the U.S. asylum and refugee determination processes. A recent empirical analysis by three legal scholars described the U.S. asylum process as a game of refugee roulette, meaning that the outcome of asylum determinations depends in large part on the personality of the particular adjudicator to whom an application is randomly assigned, rather than on the merits of the case. The very low number of Iraqi refugees accepted between 2003 and 2007 exemplifies these concerns. The Foreign Policy Association reported that:

"Perhaps the most perplexing component of the Iraq refugee crisis... has been the inability for the U.S. to absorb more Iraqis following the 2003 invasion of the country. To date, the U.S. has granted less than 800 Iraqis refugee status, just 133 in 2007. By contrast, the U.S. granted asylum to more than 100,000 Vietnamese refugees during the Vietnam War."

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...