
Wednesday, January 5, 2022

Enabling Act of 1933

From Wikipedia, the free encyclopedia
 
Enabling Act of 1933
Reichstag (Weimar Republic)

Passed: 23 March 1933
Enacted: 23 March 1933
Signed: 23 March 1933
Signed by: Paul von Hindenburg
Commenced: 23 March 1933
Repealed by: Control Council Law No. 1 - Repealing of Nazi Laws
Status: Repealed

Hitler's Reichstag speech promoting the bill was delivered at the Kroll Opera House, following the Reichstag fire.

The Enabling Act (German: Ermächtigungsgesetz) of 1933, officially titled Gesetz zur Behebung der Not von Volk und Reich ("Law to Remedy the Distress of People and Reich"), was a law that gave the German Cabinet—most importantly, the Chancellor—the power to make and enforce laws without the involvement of the Reichstag and without any need to consult Weimar President Paul von Hindenburg. Critically, the Enabling Act allowed the Chancellor to bypass the system of checks and balances in the government, and the laws enacted under it could explicitly violate individual rights prescribed in the Weimar Constitution.

In January 1933 Nazi Party leader Adolf Hitler convinced President Paul von Hindenburg to appoint him as chancellor, the head of the German government. Four weeks into his chancellorship, the Reichstag building caught fire in the middle of the night. Hitler blamed the incident on the communists and was convinced the arson was part of a larger effort to overthrow the German government, and he persuaded Hindenburg to enact the Reichstag Fire Decree. The decree abolished most civil liberties, including the rights to speak, assemble, and protest, and the right to due process. Using the decree, the Nazis declared a state of emergency and began to arrest, intimidate, and purge their political enemies. Communists and labor union leaders were the first to be arrested and interned in the first Nazi concentration camps. Having cleared the political arena of anyone willing to challenge him, Hitler submitted a proposal to the Reichstag that would immediately grant all legislative powers to the cabinet. This would in effect allow Hitler's government to act without regard to the constitution.

Despite outlawing the communists and repressing other opponents, the passage of the Enabling Act was not guaranteed. Hitler allied with other nationalist and conservative factions, and together they steamrolled the Social Democrats in the German federal election of 5 March 1933. That election, which took place in an atmosphere of extreme voter intimidation on the part of the Nazis, would be the last multiparty election held in a united Germany until 1990, fifty-seven years later. Contrary to popular belief, Adolf Hitler did not command a majority in the Reichstag that voted on the Enabling Act. The majority of Germans did not vote for the Nazi Party: despite the terror and fear fomented by his repression, Hitler's total vote was less than 45%. To get the Enabling Act passed, the Nazis implemented a strategy of coercion, bribery, and manipulation. Hitler removed every remaining political obstacle so that his coalition of conservatives, nationalists, and National Socialists could build the Nazi dictatorship. The Communists had already been repressed and were not allowed to be present or to vote, and some Social Democrats were kept away as well. In the end most of those present voted for the act; only the Social Democrats voted against it.

The act passed in both the Reichstag and Reichsrat on 23 March 1933, and was signed by President Paul von Hindenburg later that day. Unless extended by the Reichstag, the act would expire after four years. With the Enabling Act in force, the chancellor could pass and enforce unconstitutional laws without any objection. The combined effect of the Enabling Act and the Reichstag Fire Decree transformed Hitler's cabinet into a legal dictatorship and laid the groundwork for his totalitarian regime. The Nazis dramatically escalated political repression: armed with the Enabling Act, the party outlawed all rival political activity, and by July the Nazis were the only party legally allowed to participate in politics. From 1933 onward the Reichstag effectively became the rubber-stamp parliament that Hitler had always wanted. The Enabling Act was renewed twice and was rendered null when Nazi Germany fell to the Allies in 1945.

The passing of the Enabling Act is significant in German and world history as it marked the formal transition from the democratic Weimar Republic to the totalitarian Nazi dictatorship. From 1933 onwards Hitler continued to consolidate and centralize power via purges and propaganda. The Hitlerian purges reached their height with the Night of the Long Knives. Once the purges of the Nazi party and German government concluded, Hitler had total control and authority over Germany and began the process of rearmament. Thus began the political and military struggles that ultimately culminated in the Second World War.

Background

After being appointed Chancellor of Germany on 30 January 1933, Hitler asked President von Hindenburg to dissolve the Reichstag. A general election was scheduled for 5 March 1933. A secret meeting was held between Hitler and 20 to 25 industrialists at the official residence of Hermann Göring in the Reichstag President's Palace, aimed at financing the election campaign of the Nazi Party.

The burning of the Reichstag, depicted by the Nazis as the beginning of a communist revolution, resulted in the presidential Reichstag Fire Decree, which among other things suspended freedom of press and habeas corpus rights just five days before the election. Hitler used the decree to have the Communist Party's offices raided and its representatives arrested, effectively eliminating them as a political force.

Although they received five million more votes than in the previous election, the Nazis failed to gain an absolute majority in parliament, and depended on the 8% of seats won by their coalition partner, the German National People's Party, to reach 52% in total.

To free himself from this dependency, Hitler had the cabinet, in its first post-election meeting on 15 March, draw up plans for an Enabling Act which would give the cabinet legislative power for four years. The Nazis devised the Enabling Act to gain complete political power without needing the support of a majority in the Reichstag and without needing to bargain with their coalition partners. Unlike its contemporaries, most famously Joseph Stalin's regime, the Nazi regime never sought to draft a completely new constitution. Technically the Weimar Constitution of 1919 remained in effect even after the Enabling Act, only losing force when Berlin fell to the Soviet Union in 1945 and Germany surrendered.

Preparations and negotiations

The Enabling Act allowed the National Ministry (essentially the cabinet) to enact legislation, including laws deviating from or altering the constitution, without the consent of the Reichstag. Because this law allowed for departures from the constitution, it was itself considered a constitutional amendment. Thus, its passage required the support of two-thirds of those deputies who were present and voting. A quorum of two-thirds of the entire Reichstag was required to be present in order to call up the bill.

The Social Democrats (SPD) and the Communists (KPD) were expected to vote against the Act. The government had already arrested all Communist and some Social Democrat deputies under the Reichstag Fire Decree. The Nazis expected the parties representing the middle class, the Junkers and business interests to vote for the measure, as they had grown weary of the instability of the Weimar Republic and would not dare to resist.

Hitler believed that with the Centre Party members' votes, he would get the necessary two-thirds majority. Hitler negotiated with the Centre Party's chairman, Ludwig Kaas, a Catholic priest, finalizing an agreement by 22 March. Kaas agreed to support the Act in exchange for assurances of the Centre Party's continued existence, the protection of Catholics' civil and religious liberties, religious schools and the retention of civil servants affiliated with the Centre Party. It has also been suggested that some members of the SPD were intimidated by the presence of the Nazi Sturmabteilung (SA) throughout the proceedings.

Some historians, such as Klaus Scholder, have maintained that Hitler also promised to negotiate a Reichskonkordat with the Holy See, a treaty that formalized the position of the Catholic Church in Germany on a national level. Kaas was a close associate of Cardinal Pacelli, then Vatican Secretary of State (and later Pope Pius XII). Pacelli had been pursuing a German concordat as a key policy for some years, but the instability of Weimar governments as well as the enmity of some parties to such a treaty had blocked the project. The day after the Enabling Act vote, Kaas went to Rome in order to, in his own words, "investigate the possibilities for a comprehensive understanding between church and state". However, so far no evidence for a link between the Enabling Act and the Reichskonkordat signed on 20 July 1933 has surfaced.

Text

As with most of the laws passed in the process of Gleichschaltung, the Enabling Act is quite short, especially considering its implications. The full text, in German and English, follows:

Gesetz zur Behebung der Not von Volk und Reich
Law to Remedy the Distress of the People and the Reich

Der Reichstag hat das folgende Gesetz beschlossen, das mit Zustimmung des Reichsrats hiermit verkündet wird, nachdem festgestellt ist, daß die Erfordernisse verfassungsändernder Gesetzgebung erfüllt sind:
The Reichstag has enacted the following law, which is hereby proclaimed with the assent of the Reichsrat, it having been established that the requirements for a constitutional amendment have been fulfilled:

Artikel 1
Reichsgesetze können außer in dem in der Reichsverfassung vorgesehenen Verfahren auch durch die Reichsregierung beschlossen werden. Dies gilt auch für die in den Artikeln 85 Abs. 2 und 87 der Reichsverfassung bezeichneten Gesetze.
Article 1
In addition to the procedure prescribed by the constitution, laws of the Reich may also be enacted by the government of the Reich. This includes the laws referred to by Article 85 Paragraph 2 and Article 87 of the constitution.

Artikel 2
Die von der Reichsregierung beschlossenen Reichsgesetze können von der Reichsverfassung abweichen, soweit sie nicht die Einrichtung des Reichstags und des Reichsrats als solche zum Gegenstand haben. Die Rechte des Reichspräsidenten bleiben unberührt.
Article 2
Laws enacted by the government of the Reich may deviate from the constitution as long as they do not affect the institutions of the Reichstag and the Reichsrat. The rights of the President remain unaffected.

Artikel 3
Die von der Reichsregierung beschlossenen Reichsgesetze werden vom Reichskanzler ausgefertigt und im Reichsgesetzblatt verkündet. Sie treten, soweit sie nichts anderes bestimmen, mit dem auf die Verkündung folgenden Tage in Kraft. Die Artikel 68 bis 77 der Reichsverfassung finden auf die von der Reichsregierung beschlossenen Gesetze keine Anwendung.
Article 3
Laws enacted by the Reich government shall be issued by the Chancellor and announced in the Reich Gazette. They shall take effect on the day following the announcement, unless they prescribe a different date. Articles 68 to 77 of the Constitution do not apply to laws enacted by the Reich government.

Artikel 4
Verträge des Reiches mit fremden Staaten, die sich auf Gegenstände der Reichsgesetzgebung beziehen, bedürfen für die Dauer der Geltung dieser Gesetze nicht der Zustimmung der an der Gesetzgebung beteiligten Körperschaften. Die Reichsregierung erläßt die zur Durchführung dieser Verträge erforderlichen Vorschriften.
Article 4
Treaties of the Reich with foreign states, which relate to matters of Reich legislation, shall for the duration of the validity of these laws not require the consent of the legislative authorities. The Reich government shall enact the regulations necessary to implement these treaties.

Artikel 5
Dieses Gesetz tritt mit dem Tage seiner Verkündung in Kraft. Es tritt mit dem 1. April 1937 außer Kraft; es tritt ferner außer Kraft, wenn die gegenwärtige Reichsregierung durch eine andere abgelöst wird.
Article 5
This law enters into force on the day of its proclamation. It expires on 1 April 1937; it expires furthermore if the present Reich government is replaced by another.

Articles 1 and 4 gave the government the right to draw up the budget and approve treaties without input from the Reichstag.

Act (page 1)
Act (page 2 with signatures)

Passage

Debate within the Centre Party continued until the day of the vote, 23 March 1933, with Kaas advocating voting in favour of the act, referring to an upcoming written guarantee from Hitler, while former Chancellor Heinrich Brüning called for a rejection of the Act. The majority sided with Kaas, and Brüning agreed to maintain party discipline by voting for the Act.

The Reichstag, led by its President, Hermann Göring, changed its rules of procedure to make it easier to pass the bill. Under the Weimar Constitution, a quorum of two-thirds of the entire Reichstag membership was required to be present in order to bring up a constitutional amendment bill. In this case, 432 of the Reichstag's 647 deputies would have normally been required for a quorum. However, Göring reduced the quorum to 378 by not counting the 81 KPD deputies. Despite the virulent rhetoric directed against the Communists, the Nazis did not formally ban the KPD right away. Not only did they fear a violent uprising, but they hoped the KPD's presence on the ballot would siphon off votes from the SPD. However, it was an open secret that the KPD deputies would never be allowed to take their seats; they were thrown in jail as quickly as the police could track them down. Courts began taking the line that since the Communists were responsible for the fire, KPD membership was an act of treason. Thus, for all intents and purposes, the KPD was banned as of 6 March, the day after the election.

Göring also declared that any deputy who was "absent without excuse" was to be considered as present, in order to overcome obstructions. Leaving nothing to chance, the Nazis used the provisions of the Reichstag Fire Decree to detain several SPD deputies. A few others saw the writing on the wall and fled into exile.

Later that day, the Reichstag assembled under intimidating circumstances, with SA men swarming inside and outside the chamber. Hitler's speech, which emphasised the importance of Christianity in German culture, was aimed particularly at appeasing the Centre Party's sensibilities and incorporated Kaas' requested guarantees almost verbatim. Kaas gave a speech, voicing the Centre's support for the bill amid "concerns put aside", while Brüning notably remained silent.

Only SPD chairman Otto Wels spoke against the Act, declaring that the proposed bill could not "destroy ideas which are eternal and indestructible." Kaas had still not received the written constitutional guarantees he had negotiated, but with the assurance it was being "typed up", voting began. Kaas never received the letter.

At this stage, the majority of deputies already supported the bill, and any deputies who might have been reluctant to vote in favour were intimidated by the SA troops surrounding the meeting. In the end, all parties except the SPD voted in favour of the Enabling Act. With the KPD banned and 26 SPD deputies arrested or in hiding, the final tally was 444 in favour of the Enabling Act against 94 (all Social Democrats) opposed. The Reichstag had adopted the Enabling Act with the support of 83% of the deputies. The session took place under such intimidating conditions that even if all SPD deputies had been present, it would have still passed with 78.7% support. The same day in the evening, the Reichsrat also gave its approval, unanimously and without prior discussion. The Act was then signed into law by President Hindenburg.

Voting on the Enabling Act

Party     Deputies   For   Against   Absent
NSDAP          288   288         –        –
SPD            120     –        94       26
KPD             81     –         –       81
Centre          73    72         –        1
DNVP            52    52         –        –
BVP             19    19         –        –
DStP             5     5         –        –
CSVD             4     4         –        –
DVP              2     1         –        1
DBP              2     2         –        –
Landbund         1     1         –        –
Total          647   444        94      109

Consequences

Under the Act, the government had acquired the authority to enact laws without either parliamentary consent or control. These laws could (with certain exceptions) even deviate from the Constitution. The Act effectively eliminated the Reichstag as an active player in German politics. While the Act protected the Reichstag's existence on paper, for all intents and purposes it reduced the chamber to a mere stage for Hitler's speeches. The Reichstag met only sporadically until the end of World War II, held no debates and enacted only a few laws. Within three months of the passage of the Enabling Act, all parties except the Nazi Party were banned or pressured into dissolving themselves, followed on 14 July by a law that made the Nazi Party the only legally permitted party in the country. With this, Hitler had fulfilled what he had promised in earlier campaign speeches: "I set for myself one aim ... to sweep these thirty parties out of Germany!"

During the negotiations between the government and the political parties, it was agreed that the government should inform the Reichstag parties of legislative measures passed under the Enabling Act. For this purpose, a working committee was set up, co-chaired by Hitler and Centre Party chairman Kaas. However, this committee met only three times without any major impact, and rapidly became a dead letter even before all other parties were banned.

Though the Act had formally given legislative powers to the government as a whole, these powers were for all intents and purposes exercised by Hitler himself. After its passage, there were no longer serious deliberations in Cabinet meetings. Its meetings became more and more infrequent after 1934, and it never met in full after 1938.

Due to the great care that Hitler took to give his dictatorship an appearance of legality, the Enabling Act was renewed twice, in 1937 and 1941. However, its renewal was practically assured since all other parties were banned. Voters were presented with a single list of Nazis and Nazi-approved "guest" candidates under far-from-secret conditions. In 1942, the Reichstag passed a law giving Hitler power of life and death over every citizen, effectively extending the provisions of the Enabling Act for the duration of the war.

Ironically, at least two, and possibly three, of the measures Hitler took to consolidate his power in 1934 violated the Enabling Act. In February 1934, the Reichsrat, representing the states, was abolished even though Article 2 of the Enabling Act specifically protected the existence of both the Reichstag and the Reichsrat. It can be argued that the Enabling Act had already been breached two weeks earlier by the Law for the Reconstruction of the Reich, which transferred the states' powers to the Reich and effectively left the Reichsrat impotent, since Article 2 stated that laws passed under the Enabling Act could not affect the institutions of either chamber.

In August, Hindenburg died, and Hitler seized the president's powers for himself in accordance with a law passed the previous day, an action confirmed via a referendum later that month. Article 2 stated that the president's powers were to remain "undisturbed" (or "unaffected", depending on the translation), which has long been interpreted to mean that it forbade Hitler from tampering with the presidency. A 1932 amendment to the constitution made the president of the High Court of Justice, not the chancellor, first in the line of succession to the presidency—and even then on an interim basis pending new elections. However, the Enabling Act provided no remedy for any violations of Article 2, and these actions were never challenged in court.

In the Federal Republic of Germany

Article 9 of the German Constitution, enacted in 1949, allows for social groups to be labeled verfassungsfeindlich ("hostile to the constitution") and to be proscribed by the federal government. Political parties can be labeled enemies to the constitution only by the Bundesverfassungsgericht (Federal Constitutional Court), according to Art. 21 II. The idea behind the concept is the notion that even a majority rule of the people cannot be allowed to install a totalitarian or autocratic regime such as with the Enabling Act of 1933, thereby violating the principles of the German constitution.

Validity

In his book, The Coming of the Third Reich, British historian Richard J. Evans argued that the Enabling Act was legally invalid. He contended that Göring had no right to arbitrarily reduce the quorum required to bring the bill up for a vote. While the Enabling Act only required the support of two-thirds of those present and voting, two-thirds of the entire Reichstag's membership had to be present in order for the legislature to consider a constitutional amendment. According to Evans, while Göring was not required to count the KPD deputies in order to get the Enabling Act passed, he was required to "recognize their existence" by counting them for purposes of the quorum needed to call it up, making his refusal to do so "an illegal act". (Even if the Communists had been present and voting, the session's atmosphere was so intimidating that the Act would have still passed with, at the very least, 68.7% support.) He also argued that the act's passage in the Reichsrat was tainted by the overthrow of the state governments under the Reichstag Fire Decree; as Evans put it, the states were no longer "properly constituted or represented", making the Enabling Act's passage in the Reichsrat "irregular".

Portrayal in films

The 2003 film Hitler: The Rise of Evil contains a scene portraying the passage of the Enabling Act. The portrayal is inaccurate: the provisions of the Reichstag Fire Decree (which, as its name indicates, was in practice a decree, issued by President Hindenburg weeks before the Enabling Act) are merged into the Act. Non-Nazi members of the Reichstag, including Vice-Chancellor von Papen, are shown objecting. In reality the Act met little resistance, with only the centre-left Social Democratic Party voting against passage.

The film also shows Hermann Göring, as presiding officer, beginning to sing the "Deutschlandlied". Nazi representatives stand and immediately join in with Göring, and the members of the other parties then join in as well, everyone performing the Hitler salute. In reality, this never happened.

Data dredging

From Wikipedia, the free encyclopedia
 
An example of a result produced by data dredging, showing a correlation between the number of letters in Scripps National Spelling Bee's winning word and the number of people in the United States killed by venomous spiders.

Data dredging (or data fishing, data snooping, data butchery), also known as significance chasing, significance questing, selective inference, and p-hacking, is the misuse of data analysis to find patterns in data that can be presented as statistically significant, thus dramatically increasing the risk of false positives while understating it. This is done by performing many statistical tests on the data and only reporting those that come back with significant results.

The process of data dredging involves testing multiple hypotheses using a single data set by exhaustively searching—perhaps for combinations of variables that might show a correlation, and perhaps for groups of cases or observations that show differences in their mean or in their breakdown by some other variable.

Conventional tests of statistical significance are based on the probability that a particular result would arise if chance alone were at work, and necessarily accept some risk of mistaken conclusions of a certain type (mistaken rejections of the null hypothesis). This level of risk is called the significance. When large numbers of tests are performed, some produce false results of this type; hence 5% of randomly chosen hypotheses might be (erroneously) reported to be statistically significant at the 5% significance level, 1% might be (erroneously) reported to be statistically significant at the 1% significance level, and so on, by chance alone. When enough hypotheses are tested, it is virtually certain that some will be reported to be statistically significant (even though this is misleading), since almost every data set with any degree of randomness is likely to contain (for example) some spurious correlations. If they are not cautious, researchers using data mining techniques can be easily misled by these results.
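
This can be made concrete with a short simulation. The following is a minimal sketch in Python (assuming only NumPy; the test count, sample size, seed, and critical value are illustrative, not from the source). Every hypothesis is tested on pure noise, so every "significant" result is a false positive:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1,000 "hypotheses": each asks whether a sample of pure noise has a
# mean different from zero. Every null hypothesis is true here, so
# every rejection is a false positive.
n_tests, n_obs = 1000, 50
false_positives = 0
for _ in range(n_tests):
    sample = rng.normal(loc=0.0, scale=1.0, size=n_obs)
    # One-sample t-statistic against the true mean of zero.
    t = sample.mean() / (sample.std(ddof=1) / np.sqrt(n_obs))
    if abs(t) > 2.01:  # approximate two-sided 5% critical value, df = 49
        false_positives += 1

# Roughly 5% of the tests come back "significant" by chance alone;
# reporting only those would be data dredging.
print(f"{false_positives} of {n_tests} null tests were 'significant'")
```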

Data dredging is an example of disregarding the multiple comparisons problem. One form is when subgroups are compared without alerting the reader to the total number of subgroup comparisons examined.

Drawing conclusions from data

The conventional frequentist statistical hypothesis testing procedure is to formulate a research hypothesis, such as "people in higher social classes live longer", then collect relevant data, followed by carrying out a statistical significance test to see how likely such results would be found if chance alone were at work. (The last step is called testing against the null hypothesis.)

A key point in proper statistical analysis is to test a hypothesis with evidence (data) that was not used in constructing the hypothesis. This is critical because every data set contains some patterns due entirely to chance. If the hypothesis is not tested on a different data set from the same statistical population, it is impossible to assess the likelihood that chance alone would produce such patterns. See testing hypotheses suggested by the data.

Here is a simple example. Throwing a coin five times, with a result of 2 heads and 3 tails, might lead one to hypothesize that the coin favors tails by 3/5 to 2/5. If this hypothesis is then tested on the existing data set, it is confirmed, but the confirmation is meaningless. The proper procedure would have been to form in advance a hypothesis of what the tails probability is, and then throw the coin various times to see if the hypothesis is rejected or not. If three tails and two heads are observed, another hypothesis, that the tails probability is 3/5, could be formed, but it could only be tested by a new set of coin tosses. It is important to realize that the statistical significance under the incorrect procedure is completely spurious – significance tests do not protect against data dredging.
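
The coin example can be checked with the exact binomial distribution. The sketch below (standard-library Python; the follow-up sample of 50 tosses is an invented illustration) shows that the original five tosses are entirely unremarkable under a fair coin, so "confirming" the 3/5-tails hypothesis on them is empty, and only fresh tosses can test it:

```python
from math import comb

def prob_at_most_k_heads(k, n, p_heads=0.5):
    """P(k or fewer heads in n flips of a coin with P(heads) = p_heads)."""
    return sum(comb(n, i) * p_heads**i * (1 - p_heads)**(n - i)
               for i in range(k + 1))

# Flawed procedure: the 2-heads-in-5 result that suggested the
# hypothesis is completely ordinary for a fair coin, so re-using it
# as a "test" confirms nothing.
print(prob_at_most_k_heads(2, 5))    # 0.5

# Proper procedure: fix the hypothesis first, then collect new data.
# Even 20 heads in 50 fresh tosses is only weak evidence of a bias.
print(prob_at_most_k_heads(20, 50))  # ~0.10, not significant at the 5% level
```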

Hypothesis suggested by non-representative data

Suppose that a study of a random sample of people includes exactly two people with a birthday of August 7: Mary and John. Someone engaged in data snooping might try to find additional similarities between Mary and John. By going through hundreds or thousands of potential similarities between the two, each having a low probability of being true, an unusual similarity can almost certainly be found. Perhaps John and Mary are the only two people in the study who switched minors three times in college. A hypothesis, biased by data snooping, could then be "People born on August 7 have a much higher chance of switching minors more than twice in college."

The data itself taken out of context might be seen as strongly supporting that correlation, since no one with a different birthday had switched minors three times in college. However, if (as is likely) this is a spurious hypothesis, this result will most likely not be reproducible; any attempt to check if others with an August 7 birthday have a similar rate of changing minors will most likely get contradictory results almost immediately.

Bias

Bias is a systematic error in the analysis. For example, doctors directed HIV patients at high cardiovascular risk to a particular HIV treatment, abacavir, and lower-risk patients to other drugs, preventing a simple assessment of abacavir compared to other treatments. An analysis that did not correct for this bias unfairly penalised abacavir, since its patients were more high-risk, so more of them had heart attacks. This problem can be very severe in, for example, observational studies.

Missing factors, unmeasured confounders, and loss to follow-up can also lead to bias. When papers with significant p-values are selected for publication, negative studies are selected against; this is publication bias. It is also known as "file drawer bias", because results with less significant p-values are left in the file drawer and never published.

Multiple modelling

Another aspect of the conditioning of statistical tests by knowledge of the data arises in multiple modelling, for example when fitting a linear regression to a data set. A crucial step in the process is to decide which covariates to include in a relationship explaining one or more other variables. There are both statistical (see Stepwise regression) and substantive considerations that lead authors to favor some of their models over others, and there is liberal use of statistical tests. However, to discard one or more variables from an explanatory relation on the basis of the data means one cannot validly apply standard statistical procedures to the retained variables in the relation as though nothing had happened. In the nature of the case, the retained variables have had to pass some kind of preliminary test (possibly an imprecise intuitive one) that the discarded variables failed. In 1966, Selvin and Stuart compared the variables retained in the model to the fish that don't fall through the net, in the sense that their effects are bound to be bigger than those of the fish that do fall through. Not only does this alter the performance of all subsequent tests on the retained explanatory model, it may introduce bias and alter the mean square error in estimation.

Examples in meteorology and epidemiology

In meteorology, hypotheses are often formulated using weather data up to the present and tested against future weather data, which ensures that, even subconsciously, future data could not influence the formulation of the hypothesis. Of course, such a discipline necessitates waiting for new data to come in, to show the formulated theory's predictive power versus the null hypothesis. This process ensures that no one can accuse the researcher of hand-tailoring the predictive model to the data on hand, since the upcoming weather is not yet available.

As another example, suppose that observers note that a particular town appears to have a cancer cluster, but lack a firm hypothesis of why this is so. However, they have access to a large amount of demographic data about the town and surrounding area, containing measurements for the area of hundreds or thousands of different variables, mostly uncorrelated. Even if all these variables are independent of the cancer incidence rate, it is highly likely that at least one variable correlates significantly with the cancer rate across the area. While this may suggest a hypothesis, further testing using the same variables but with data from a different location is needed to confirm. Note that a p-value of 0.01 suggests that 1% of the time a result at least that extreme would be obtained by chance; if hundreds or thousands of hypotheses (with mutually relatively uncorrelated independent variables) are tested, then one is likely to obtain a p-value less than 0.01 for many null hypotheses.
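
A rough simulation of this cancer-cluster scenario (a sketch in Python with NumPy; the number of areas, the variable count, and the critical value are illustrative assumptions) shows how a scan over many unrelated variables manufactures "significant" correlations:

```python
import numpy as np

rng = np.random.default_rng(1)

# 30 areas and 1,000 demographic variables, all generated independently
# of the cancer rate, so any apparent correlation is spurious.
n_areas, n_vars = 30, 1000
cancer_rate = rng.normal(size=n_areas)
demographics = rng.normal(size=(n_vars, n_areas))

# Pearson correlation of each variable with the cancer rate.
r = np.array([np.corrcoef(v, cancer_rate)[0, 1] for v in demographics])

# |r| > ~0.46 corresponds to a two-sided p < 0.01 when n = 30.
hits = int(np.sum(np.abs(r) > 0.46))
print(f"{hits} of {n_vars} unrelated variables reach p < 0.01")
print(f"strongest spurious correlation: r = {r[np.abs(r).argmax()]:.2f}")
```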

Remedies

Looking for patterns in data is legitimate. Applying a statistical test of significance, or hypothesis test, to the same data that a pattern emerges from is wrong. One way to construct hypotheses while avoiding data dredging is to conduct randomized out-of-sample tests. The researcher collects a data set, then randomly partitions it into two subsets, A and B. Only one subset—say, subset A—is examined for creating hypotheses. Once a hypothesis is formulated, it must be tested on subset B, which was not used to construct the hypothesis. Only where B also supports such a hypothesis is it reasonable to believe the hypothesis might be valid. (This is a simple type of cross-validation and is often termed training-test or split-half validation.)
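
A minimal sketch of this partitioning scheme (Python with NumPy; the all-noise data set is deliberately chosen so that any pattern found in subset A is spurious and should fail to replicate on subset B):

```python
import numpy as np

rng = np.random.default_rng(2)

# 200 cases by 50 candidate variables of pure noise: any "pattern"
# found here is spurious by construction.
data = rng.normal(size=(200, 50))

# Randomly partition the cases into an exploratory subset A and a
# held-out test subset B.
idx = rng.permutation(len(data))
A, B = data[idx[:100]], data[idx[100:]]

# Exploration on A only: pick the variable with the largest mean.
best = int(np.argmax(A.mean(axis=0)))
print(f"subset A suggests variable {best}: mean {A[:, best].mean():+.2f}")

# Confirmation on B: a real effect should replicate; noise rarely does.
print(f"same variable on subset B:  mean {B[:, best].mean():+.2f}")
```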

Another remedy for data dredging is to record the number of all significance tests conducted during the study and simply divide one's criterion for significance ("alpha") by this number; this is the Bonferroni correction. However, this is a very conservative metric. A family-wise alpha of 0.05, divided in this way by 1,000 to account for 1,000 significance tests, yields a very stringent per-hypothesis alpha of 0.00005. Methods particularly useful in analysis of variance, and in constructing simultaneous confidence bands for regressions involving basis functions are the Scheffé method and, if the researcher has in mind only pairwise comparisons, the Tukey method. The use of Benjamini and Hochberg's false discovery rate is a more sophisticated approach that has become a popular method for control of multiple hypothesis tests.
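
As a sketch of how the two corrections behave (assuming the statsmodels library, whose multipletests helper implements both; the simulated mix of null and "real" p-values is an invented illustration):

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(3)

# 950 p-values from true nulls (uniform on [0, 1]) plus 50 very small
# p-values standing in for genuine effects.
pvals = np.concatenate([rng.uniform(size=950),
                        rng.uniform(high=0.001, size=50)])

for method in ("bonferroni", "fdr_bh"):  # Bonferroni vs Benjamini-Hochberg
    reject, _, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(f"{method}: {int(reject.sum())} rejections, "
          f"{int(reject[950:].sum())} of 50 real effects recovered")
```

The Bonferroni line typically recovers fewer of the real effects than the false-discovery-rate line, which is the conservatism the text describes.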

When neither approach is practical, one can make a clear distinction between data analyses that are confirmatory and analyses that are exploratory. Statistical inference is appropriate only for the former.

Ultimately, the statistical significance of a test and the statistical confidence of a finding are joint properties of data and the method used to examine the data. Thus, if someone says that a certain event has probability of 20% ± 2% 19 times out of 20, this means that if the probability of the event is estimated by the same method used to obtain the 20% estimate, the result is between 18% and 22% with probability 0.95. No claim of statistical significance can be made by only looking, without due regard to the method used to assess the data.

Academic journals increasingly shift to the registered report format, which aims to counteract very serious issues such as data dredging and HARKing, which have made theory-testing research very unreliable: For example, Nature Human Behaviour has adopted the registered report format, as it “shift[s] the emphasis from the results of research to the questions that guide the research and the methods used to answer them”. The European Journal of Personality defines this format as follows: “In a registered report, authors create a study proposal that includes theoretical and empirical background, research questions/hypotheses, and pilot data (if available). Upon submission, this proposal will then be reviewed prior to data collection, and if accepted, the paper resulting from this peer-reviewed procedure will be published, regardless of the study outcomes.”

Methods and results can also be made publicly available, as in the open science approach, making it yet more difficult for data dredging to take place.

Why Most Published Research Findings Are False
"Why Most Published Research Findings Are False" is a 2005 essay written by John Ioannidis, a professor at the Stanford School of Medicine, and published in PLOS Medicine. It is considered foundational to the field of metascience.

In the paper, Ioannidis argued that a large number, if not the majority, of published medical research papers contain results that cannot be replicated. In simple terms, the essay states that scientists use hypothesis testing to determine whether scientific discoveries are significant. "Significance" is formalized in terms of probability and one formalized calculation ("P value") is reported in the scientific literature as a screening mechanism. Ioannidis posited assumptions about the way people perform and report these tests; then he constructed a statistical model which indicates that most published findings are false positive results.

Argument

Suppose that in a given scientific field there is a known baseline probability that a result is true, denoted by $P(T)$. When a study is conducted, the probability that a positive result is obtained is $P(+)$. Given these two factors, we want to compute the conditional probability $P(T \mid +)$, which is known as the positive predictive value (PPV). Bayes' theorem allows us to compute the PPV as:

$$\mathrm{PPV} = P(T \mid +) = \frac{(1-\beta)\,P(T)}{(1-\beta)\,P(T) + \alpha\,(1-P(T))}$$

where $\alpha$ is the type I error rate and $\beta$ is the type II error rate; the statistical power is $1-\beta$. It is customary in most scientific research to desire $\alpha = 0.05$ and $\beta = 0.2$. If we assume $P(T) = 0.1$ for a given scientific field, then we may compute the PPV for different values of $\alpha$ and $\beta$:


α \ β    0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8   0.9
0.01     0.91  0.90  0.89  0.87  0.85  0.82  0.77  0.69  0.53
0.02     0.83  0.82  0.80  0.77  0.74  0.69  0.63  0.53  0.36
0.03     0.77  0.75  0.72  0.69  0.65  0.60  0.53  0.43  0.27
0.04     0.71  0.69  0.66  0.63  0.58  0.53  0.45  0.36  0.22
0.05     0.67  0.64  0.61  0.57  0.53  0.47  0.40  0.31  0.18

However, the simple formula for PPV derived from Bayes' theorem does not account for bias in study design or reporting. Some published findings would not have been presented as research findings if not for researcher bias. Let $u$ be the probability that an analysis was only published due to researcher bias. Then the PPV is given by the more general expression:

$$\mathrm{PPV} = \frac{(1-\beta)\,P(T) + u\,\beta\,P(T)}{(1-\beta)\,P(T) + u\,\beta\,P(T) + \alpha\,(1-P(T)) + u\,(1-\alpha)\,(1-P(T))}$$

The introduction of bias will tend to depress the PPV; in the extreme case of maximal bias ($u = 1$), $\mathrm{PPV} = P(T)$, the baseline probability itself. Even if a study meets the benchmark requirements for $\alpha$ and $\beta$ and is free of bias, there is still a 36% probability that a paper reporting a positive result will be incorrect; if the base probability of a true result is lower, then this will push the PPV lower too. Furthermore, there is strong evidence that the average statistical power of a study in many scientific fields is well below the benchmark level of 0.8.
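
Both expressions can be checked numerically. The sketch below (illustrative Python; the function name and the sample bias values are not from the paper) reproduces the 0.64 benchmark figure and shows bias pulling the PPV down toward the baseline:

```python
def ppv(alpha, beta, p_true, u=0.0):
    """Positive predictive value with researcher bias u;
    u = 0 recovers the simple Bayes-theorem form."""
    true_pos = (1 - beta) * p_true + u * beta * p_true
    false_pos = alpha * (1 - p_true) + u * (1 - alpha) * (1 - p_true)
    return true_pos / (true_pos + false_pos)

# Benchmark design: alpha = 0.05, power = 0.8, baseline P(T) = 0.1.
print(round(ppv(0.05, 0.2, 0.1), 2))         # 0.64 -> 36% of positives are false
print(round(ppv(0.05, 0.2, 0.1, u=0.3), 2))  # moderate bias: ~0.22
print(round(ppv(0.05, 0.2, 0.1, u=1.0), 2))  # maximal bias: 0.1, the baseline
```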

Given the realities of bias, low statistical power, and a small number of true hypotheses, Ioannidis concludes that the majority of studies in a variety of scientific fields are likely to report results that are false.

Corollaries

In addition to the main result, Ioannidis lists six corollaries for factors that can influence the reliability of published research:

  1. The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.
  2. The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.
  3. The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.
  4. The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.
  5. The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.
  6. The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.

Reception and influence

Despite skepticism about extreme statements made in the paper, Ioannidis's broader argument and warnings have been accepted by a large number of researchers. The growth of metascience and the recognition of a scientific replication crisis have bolstered the paper's credibility, and led to calls for methodological reforms in scientific research.

In commentaries and technical responses, statisticians Goodman and Greenland identified several weaknesses in Ioannidis's model. They rejected his dramatic and exaggerated language, in particular the claims that he had "proved" that most research findings' claims are false and that "most research findings are false for most research designs and for most fields" [italics added], yet they agreed with his paper's conclusions and recommendations. Biostatisticians Jager and Leek criticized the model as being based on justifiable but arbitrary assumptions rather than empirical data, and did an investigation of their own which calculated that the false positive rate in biomedical studies was around 14%, not over 50% as Ioannidis asserted. Their paper was published in a 2014 special edition of the journal Biostatistics along with extended, supporting critiques from other statisticians. Leek summarized the key points of agreement as: when talking about the science-wise false discovery rate one has to bring data; there are different frameworks for estimating the science-wise false discovery rate; and "it is pretty unlikely that most published research is false," though that probably varies by one's definition of "most" and "false". Statistician Ulrich Schimmack reinforced the importance of an empirical basis for such models by noting that the reported false discovery rate in some scientific fields is not the actual discovery rate, because non-significant results are rarely reported. Ioannidis's theoretical model fails to account for that, but when a statistical method ("z-curve") to estimate the number of unpublished non-significant results is applied to two examples, the false positive rate is between 8% and 17%, not greater than 50%. Despite these weaknesses there is nonetheless general agreement with the problem and recommendations Ioannidis discusses, yet his tone has been described as "dramatic" and "alarmingly misleading", which runs the risk of making people unnecessarily skeptical or cynical about science.

A lasting impact of this work has been awareness of the underlying drivers of the high false positive rate in clinical medicine and biomedical research, and efforts by journals and scientists to mitigate them. Ioannidis restated these drivers in 2016 as being:

  • Solo, siloed investigator limited to small sample sizes
  • No pre-registration of hypotheses being tested
  • Post-hoc cherry picking of hypotheses with best P values
  • Only requiring P < .05
  • No replication
  • No data sharing

Bayesian epistemology

From Wikipedia, the free encyclopedia
 

Bayesian epistemology is a formal approach to various topics in epistemology that has its roots in Thomas Bayes' work in the field of probability theory. One advantage of its formal method in contrast to traditional epistemology is that its concepts and theorems can be defined with a high degree of precision. It is based on the idea that beliefs can be interpreted as subjective probabilities. As such, they are subject to the laws of probability theory, which act as the norms of rationality. These norms can be divided into static constraints, governing the rationality of beliefs at any moment, and dynamic constraints, governing how rational agents should change their beliefs upon receiving new evidence. The most characteristic Bayesian expression of these principles is found in the form of Dutch books, which illustrate irrationality in agents through a series of bets that lead to a loss for the agent no matter which of the probabilistic events occurs. Bayesians have applied these fundamental principles to various epistemological topics but Bayesianism does not cover all topics of traditional epistemology. The problem of confirmation in the philosophy of science, for example, can be approached through the Bayesian principle of conditionalization by holding that a piece of evidence confirms a theory if it raises the likelihood that this theory is true. Various proposals have been made to define the concept of coherence in terms of probability, usually in the sense that two propositions cohere if the probability of their conjunction is higher than if they were neutrally related to each other. The Bayesian approach has also been fruitful in the field of social epistemology, for example, concerning the problem of testimony or the problem of group belief. Bayesianism still faces various theoretical objections that have not been fully solved.

Relation to traditional epistemology

Traditional epistemology and Bayesian epistemology are both forms of epistemology, but they differ in various respects, for example, concerning their methodology, their interpretation of belief, the role justification or confirmation plays in them and some of their research interests. Traditional epistemology focuses on topics such as the analysis of the nature of knowledge, usually in terms of justified true beliefs, the sources of knowledge, like perception or testimony, the structure of a body of knowledge, for example in the form of foundationalism or coherentism, and the problem of philosophical skepticism or the question of whether knowledge is possible at all. These inquiries are usually based on epistemic intuitions and regard beliefs as either present or absent. Bayesian epistemology, on the other hand, works by formalizing concepts and problems, which are often vague in the traditional approach. It thereby focuses more on mathematical intuitions and promises a higher degree of precision. It sees belief as a continuous phenomenon that comes in various degrees, so-called credences. Some Bayesians have even suggested that the regular notion of belief should be abandoned. But there are also proposals to connect the two, for example, the Lockean thesis, which defines belief as credence above a certain threshold. Justification plays a central role in traditional epistemology while Bayesians have focused on the related notions of confirmation and disconfirmation through evidence. The notion of evidence is important for both approaches but only the traditional approach has been interested in studying the sources of evidence, like perception and memory. Bayesianism, on the other hand, has focused on the role of evidence for rationality: how someone's credence should be adjusted upon receiving new evidence. There is an analogy between the Bayesian norms of rationality in terms of probabilistic laws and the traditional norms of rationality in terms of deductive consistency. Certain traditional problems, like the topic of skepticism about our knowledge of the external world, are difficult to express in Bayesian terms.

Fundamentals

Bayesian epistemology is based only on a few fundamental principles, which can be used to define various other notions and can be applied to many topics in epistemology. At their core, these principles constitute constraints on how we should assign credences to propositions. They determine what an ideally rational agent would believe. The basic principles can be divided into synchronic or static principles, which govern how credences are to be assigned at any moment, and diachronic or dynamic principles, which determine how the agent should change her beliefs upon receiving new evidence. The axioms of probability and the principal principle belong to the static principles while the principle of conditionalization governs the dynamic aspects as a form of probabilistic inference. The most characteristic Bayesian expression of these principles is found in the form of Dutch books, which illustrate irrationality in agents through a series of bets that lead to a loss for the agent no matter which of the probabilistic events occurs. This test for determining irrationality has been referred to as the "pragmatic self-defeat test".

Beliefs, probability and bets

One important difference to traditional epistemology is that Bayesian epistemology focuses not on the notion of simple belief but on the notion of degrees of belief, so-called credences. This approach tries to capture the idea of certainty: we believe in all kinds of claims but we are more certain about some, like that the earth is round, than about others, like that Plato was the author of the First Alcibiades. These degrees come in values between 0 and 1; 0 corresponds to full disbelief, 1 corresponds to full belief and 0.5 corresponds to suspension of belief. According to the Bayesian interpretation of probability, credences stand for subjective probabilities. Following Frank P. Ramsey, they are interpreted in terms of the willingness to bet money on a claim. So having a credence of 0.8 (i.e. 80%) that your favorite soccer team will win the next game would mean being willing to bet up to four dollars for the chance to make one dollar profit. This account draws a tight connection between Bayesian epistemology and decision theory. It might seem that betting behavior is only one special area and as such not suited for defining such a general notion as credences. But, as Ramsey argues, we bet all the time when betting is understood in the widest sense. For example, in going to the train station, we bet on the train being there on time; otherwise we would have stayed at home. It follows from the interpretation of credence in terms of willingness to make bets that it would be irrational to ascribe a credence of 0 or 1 to any proposition except contradictions and tautologies. The reason for this is that ascribing these extreme values would mean that one would be willing to bet anything, including one's life, even if the payoff was minimal. Another negative side-effect of such extreme credences is that they are permanently fixed and cannot be updated upon acquiring new evidence.

This central tenet of Bayesianism, that credences are interpreted as subjective probabilities and are therefore governed by the norms of probability, has been referred to as probabilism. These norms express the nature of the credences of ideally rational agents. They do not put demands on the credence we should have in any single given belief, for example, whether it will rain tomorrow. Instead, they constrain the system of beliefs as a whole. For example, if your credence that it will rain tomorrow is 0.8 then your credence in the opposite proposition, i.e. that it will not rain tomorrow, should be 0.2, not 0.1 or 0.5. According to Stephan Hartmann and Jan Sprenger, the axioms of probability can be expressed through the following two laws: (1) $P(t) = 1$ for any tautology $t$; and (2) for incompatible (mutually exclusive) propositions $A$ and $B$, $P(A \lor B) = P(A) + P(B)$.

Another important Bayesian principle of degrees of beliefs is the principal principle due to David Lewis. It states that our knowledge of objective probabilities should correspond to our subjective probabilities in the form of credences. So if you know that the objective chance of a coin landing heads is 50% then your credence that the coin will land heads should be 0.5.

The axioms of probability together with the principal principle determine the static or synchronic aspect of rationality: what an agent's beliefs should be like when considering only one moment. But rationality also involves a dynamic or diachronic aspect, which comes into play when changing one's credences upon being confronted with new evidence. This aspect is determined by the principle of conditionalization.

Principle of conditionalization

The principle of conditionalization governs how the agent's credence in a hypothesis should change upon receiving new evidence for or against this hypothesis. As such, it expresses the dynamic aspect of how ideal rational agents would behave. It is based on the notion of conditional probability, which is the measure of the probability that one event occurs given that another event has already occurred. The unconditional probability that $A$ will occur is usually expressed as $P(A)$ while the conditional probability that $A$ will occur given that $B$ has already occurred is written as $P(A \mid B)$. For example, the probability of flipping a coin two times and the coin landing heads two times is only 25%. But the conditional probability of this occurring given that the coin has landed heads on the first flip is then 50%. The principle of conditionalization applies this idea to credences: we should change our credence that the coin will land heads two times upon receiving evidence that it has already landed heads on the first flip. The probability assigned to the hypothesis before the event is called prior probability. The probability afterward is called posterior probability. According to the simple principle of conditionalization, this can be expressed in the following way: $P_{\mathrm{new}}(H) = P_{\mathrm{old}}(H \mid E) = \frac{P_{\mathrm{old}}(H \land E)}{P_{\mathrm{old}}(E)}$. So the posterior probability that the hypothesis is true is equal to the conditional prior probability that the hypothesis is true relative to the evidence, which is equal to the prior probability that both the hypothesis and the evidence are true, divided by the prior probability that the evidence is true. The original expression of this principle, referred to as Bayes' theorem, can be directly deduced from this formulation.
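
The two-flip example can be worked through mechanically. The following is an illustrative Python sketch (the outcome encoding and helper names are assumptions of the sketch, not standard notation):

```python
# A world of two fair coin flips; each outcome has prior probability 1/4.
outcomes = {("H", "H"): 0.25, ("H", "T"): 0.25,
            ("T", "H"): 0.25, ("T", "T"): 0.25}

def prob(event, dist):
    """Total probability of the outcomes satisfying `event`."""
    return sum(p for o, p in dist.items() if event(o))

h = lambda o: o == ("H", "H")  # hypothesis H: both flips land heads
e = lambda o: o[0] == "H"      # evidence E: the first flip landed heads

prior = prob(h, outcomes)  # P_old(H) = 0.25
# Conditionalization: P_new(H) = P_old(H and E) / P_old(E)
posterior = prob(lambda o: h(o) and e(o), outcomes) / prob(e, outcomes)
print(prior, posterior)    # 0.25 -> 0.5, matching the text
```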

The simple principle of conditionalization makes the assumption that our credence in the acquired evidence, i.e. its posterior probability, is 1, which is unrealistic. For example, scientists sometimes need to discard previously accepted evidence upon making new discoveries, which would be impossible if the corresponding credence was 1. An alternative form of conditionalization, proposed by Richard Jeffrey, adjusts the formula to take the probability of the evidence into account: $P_{\mathrm{new}}(H) = P_{\mathrm{old}}(H \mid E) \cdot P_{\mathrm{new}}(E) + P_{\mathrm{old}}(H \mid \lnot E) \cdot P_{\mathrm{new}}(\lnot E)$.
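
Jeffrey's rule can be sketched in a few lines (illustrative Python, continuing the coin example above, where P_old(H|E) = 0.5 and P_old(H|not-E) = 0):

```python
def jeffrey_update(p_h_given_e, p_h_given_not_e, p_new_e):
    """Jeffrey conditionalization: the new credence in H when the
    evidence E is itself only believed to degree p_new_e."""
    return p_h_given_e * p_new_e + p_h_given_not_e * (1 - p_new_e)

# If the agent is only 90% sure the first flip really landed heads,
# her credence in "two heads" becomes 0.45 rather than the full 0.5.
print(jeffrey_update(0.5, 0.0, 0.9))  # 0.45
```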

Dutch books

A Dutch book is a series of bets that necessarily results in a loss. An agent is vulnerable to a Dutch book if her credences violate the laws of probability. This can happen in synchronic cases, in which the conflict is between beliefs held at the same time, or in diachronic cases, in which the agent does not respond properly to new evidence. In the simplest synchronic case, only two credences are involved: the credence in a proposition and the credence in its negation. The laws of probability hold that these two credences together should amount to 1, since either the proposition or its negation is true. Agents who violate this law are vulnerable to a synchronic Dutch book. For example, given the proposition that it will rain tomorrow, suppose that an agent's degree of belief that it is true is 0.51 and her degree of belief that it is false is also 0.51. In this case, the agent would be willing to accept two bets at $0.51 for the chance to win $1: one that it will rain and another that it will not rain. The two bets together cost $1.02, resulting in a loss of $0.02, no matter whether it rains or not. The principle behind diachronic Dutch books is the same, but they are more complicated since they involve making bets before and after receiving new evidence and have to take into account that there is a loss in each case no matter how the evidence turns out.
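
The arithmetic of the synchronic example is easy to verify (a minimal Python sketch of the guaranteed loss):

```python
# Incoherent credences: 0.51 in "rain" and 0.51 in "no rain" (sum > 1),
# so the agent accepts each $1-stake bet at a price of $0.51.
price_rain, price_no_rain = 0.51, 0.51
cost = price_rain + price_no_rain  # $1.02 paid up front

for it_rains in (True, False):
    payout = 1.0  # exactly one of the two bets pays out $1
    print(f"rain={it_rains}: net = {payout - cost:+.2f}")  # -0.02 either way
```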

There are different interpretations about what it means that an agent is vulnerable to a Dutch book. On the traditional interpretation, such a vulnerability reveals that the agent is irrational since she would willingly engage in behavior that is not in her best self-interest. One problem with this interpretation is that it assumes logical omniscience as a requirement for rationality, which is problematic especially in complicated diachronic cases. An alternative interpretation uses Dutch books as "a kind of heuristic for determining when one's degrees of belief have the potential to be pragmatically self-defeating". This interpretation is compatible with holding a more realistic view of rationality in the face of human limitations.

Dutch books are closely related to the axioms of probability. The Dutch book theorem holds that only credence assignments that do not follow the axioms of probability are vulnerable to Dutch books. The converse Dutch book theorem states that no credence assignment following these axioms is vulnerable to a Dutch book.

Applications

Confirmation theory

In the philosophy of science, confirmation refers to the relation between a piece of evidence and a hypothesis confirmed by it. Confirmation theory is the study of confirmation and disconfirmation: how scientific hypotheses are supported or refuted by evidence. Bayesian confirmation theory provides a model of confirmation based on the principle of conditionalization. A piece of evidence confirms a theory if the conditional probability of that theory relative to the evidence is higher than the unconditional probability of the theory by itself. Expressed formally: $E$ confirms $H$ if and only if $P(H \mid E) > P(H)$. If the evidence lowers the probability of the hypothesis then it disconfirms it. Scientists are usually not just interested in whether a piece of evidence supports a theory but also in how much support it provides. There are different ways in which this degree can be determined. The simplest version just measures the difference between the conditional probability of the hypothesis relative to the evidence and the unconditional probability of the hypothesis, i.e. the degree of support is $P(H \mid E) - P(H)$. The problem with measuring this degree is that it depends on how certain the theory already is prior to receiving the evidence. So if a scientist is already very certain that a theory is true then one further piece of evidence will not affect her credence much, even if the evidence would be very strong. There are other constraints on how an evidence measure should behave; for example, surprising evidence, i.e. evidence that had a low probability on its own, should provide more support. Scientists are often faced with the problem of having to decide between two competing theories. In such cases, the interest is not so much in absolute confirmation, or how much a new piece of evidence would support this or that theory, but in relative confirmation, i.e. in which theory is supported more by the new evidence.
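
The prior-dependence just described can be seen in a small sketch (illustrative Python using the odds form of Bayes' theorem; the likelihood ratio of 4 is an invented example value):

```python
def posterior(prior, likelihood_ratio):
    """Bayes' theorem in odds form: posterior odds = LR * prior odds."""
    odds = likelihood_ratio * prior / (1 - prior)
    return odds / (1 + odds)

def support(prior, likelihood_ratio):
    """Difference measure of confirmation: P(H|E) - P(H)."""
    return posterior(prior, likelihood_ratio) - prior

# Equally strong evidence (likelihood ratio 4) supports a middling
# hypothesis far more than one the agent is already almost sure of.
print(round(support(0.50, 4), 3))  # 0.3
print(round(support(0.99, 4), 3))  # 0.007
```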

A well-known problem in confirmation theory is Carl Gustav Hempel's raven paradox. Hempel starts by pointing out that seeing a black raven counts as evidence for the hypothesis that all ravens are black while seeing a green apple is usually not taken to be evidence for or against this hypothesis. The paradox consists in the consideration that the hypothesis "all ravens are black" is logically equivalent to the hypothesis "if something is not black, then it is not a raven". So since seeing a green apple counts as evidence for the second hypothesis, it should also count as evidence for the first one. Bayesianism allows that seeing a green apple supports the raven-hypothesis while explaining our initial intuition otherwise. This result is reached if we assume that seeing a green apple provides minimal but still positive support for the raven-hypothesis while spotting a black raven provides significantly more support.

Coherence

Coherence plays a central role in various epistemological theories, for example, in the coherence theory of truth or in the coherence theory of justification. It is often assumed that sets of beliefs are more likely to be true if they are coherent than otherwise. For example, we would be more likely to trust a detective who can connect all the pieces of evidence into a coherent story. But there is no general agreement as to how coherence is to be defined. Bayesianism has been applied to this field by suggesting precise definitions of coherence in terms of probability, which can then be employed to tackle other problems surrounding coherence. One such definition was proposed by Tomoji Shogenji, who suggests that the coherence between two beliefs is equal to the probability of their conjunction divided by the product of the probabilities of each by itself, i.e. $C(A, B) = \frac{P(A \land B)}{P(A) \cdot P(B)}$. Intuitively, this measures how likely it is that the two beliefs are true at the same time, compared to how likely this would be if they were neutrally related to each other. The coherence is high if the two beliefs are relevant to each other. Coherence defined this way is relative to a credence assignment. This means that two propositions may have high coherence for one agent and low coherence for another agent due to differences in the prior probabilities of the agents' credences.
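
Shogenji's measure is straightforward to compute (illustrative Python; the detective-style numbers are invented for the demonstration):

```python
def shogenji_coherence(p_a, p_b, p_a_and_b):
    """Shogenji's measure C(A, B) = P(A and B) / (P(A) * P(B)).
    C > 1: the beliefs cohere; C = 1: neutral; C < 1: they conflict."""
    return p_a_and_b / (p_a * p_b)

# "The suspect was at the scene" (0.3) and "the suspect's fingerprints
# are on the weapon" (0.2) are likely to be true together (0.15):
print(shogenji_coherence(0.3, 0.2, 0.15))  # 2.5 -> coherent
# For independent beliefs, P(A and B) = P(A)P(B) and the score is 1:
print(shogenji_coherence(0.3, 0.2, 0.06))  # 1.0 -> neutral
```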

Social epistemology

Social epistemology studies the relevance of social factors for knowledge. In the field of science, for example, this is relevant since individual scientists often have to place their trust in the discoveries of other scientists in order to progress. The Bayesian approach can be applied to various topics in social epistemology. For example, probabilistic reasoning can be used in the field of testimony to evaluate how reliable a given report is. In this way, it can be formally shown that witness reports that are probabilistically independent of each other provide more support than otherwise. Another topic in social epistemology concerns the question of how to aggregate the beliefs of the individuals within a group to arrive at the belief of the group as a whole. Bayesianism approaches this problem by aggregating the probability assignments of the different individuals.

Objections

Problem of priors

In order to draw probabilistic inferences based on new evidence, it is necessary to already have a prior probability assigned to the proposition in question. But this is not always the case: there are many propositions that the agent has never considered and therefore lacks a credence for. This problem is usually solved by assigning a probability to the proposition in question in order to learn from the new evidence through conditionalization. The problem of priors concerns the question of how this initial assignment should be done. Subjective Bayesians hold that there are no or few constraints, besides probabilistic coherence, that determine how we assign the initial probabilities. The argument for this freedom in choosing the initial credence is that credences will change as we acquire more evidence and will converge on the same value after enough steps no matter where we start. Objective Bayesians, on the other hand, assert that there are various constraints that determine the initial assignment. One important constraint is the principle of indifference. It states that the credences should be distributed equally among all possible outcomes. For example, suppose an agent wants to predict the color of balls drawn from an urn containing only red and black balls, without any information about the ratio of red to black balls. Applied to this situation, the principle of indifference states that the agent should initially assume that the probability of drawing a red ball is 50%. This is due to symmetry considerations: it is the only assignment in which the prior probabilities are invariant to a change in label. While this approach works for some cases, it produces paradoxes in others. Another objection is that one should not assign prior probabilities based on initial ignorance.

Problem of logical omniscience

The norms of rationality according to the standard definitions of Bayesian epistemology assume logical omniscience: the agent has to make sure to exactly follow all the laws of probability for all her credences in order to count as rational. Whoever fails to do so is vulnerable to Dutch books and is therefore irrational. This is an unrealistic standard for human beings, as critics have pointed out.

Problem of old evidence

The problem of old evidence concerns cases in which the agent does not know at the time of acquiring a piece of evidence that it confirms a hypothesis but only learns about this supporting-relation later. Normally, the agent would increase her belief in the hypothesis after discovering this relation. But this is not allowed in Bayesian confirmation theory since conditionalization can only happen upon a change of the probability of the evidential statement, which is not the case. For example, the observation of certain anomalies in the orbit of Mercury is evidence for the theory of general relativity. But this data had been obtained before the theory was formulated, thereby counting as old evidence.

Thailand

From Wikipedia, the free encyclopedia (https://en.wikipedia.org/wiki/Thailand). Thailand, officially the K...