
Monday, March 21, 2022

Standardized test

From Wikipedia, the free encyclopedia
 
Young adults in Poland sit for their Matura exams. The Matura is standardized so that universities can easily compare results from students across the entire country.

A standardized test is a test that is administered and scored in a consistent, or "standard", manner. Standardized tests are designed so that the questions, administration conditions, scoring procedures, and interpretations are predetermined and applied the same way for every test taker.

Any test in which the same test is given in the same manner to all test takers, and graded in the same manner for everyone, is a standardized test. Standardized tests do not need to be high-stakes tests, time-limited tests, or multiple-choice tests. A standardized test may be any type of test: a written test, an oral test, or a practical skills performance test. The questions can be simple or complex. The subject matter among school-age students is frequently academic skills, but a standardized test can be given on nearly any topic, including driving ability, creativity, athleticism, personality, professional ethics, or other attributes.

The opposite of standardized testing is non-standardized testing, in which either significantly different tests are given to different test takers, or the same test is assigned under significantly different conditions (e.g., one group is permitted far less time to complete the test than the next group) or evaluated differently (e.g., the same answer is counted right for one student, but wrong for another student).

Most everyday quizzes and tests taken by students during school meet the definition of a standardized test: everyone in the class takes the same test, at the same time, under the same circumstances, and all of the students are graded by their teacher in the same way. However, the term standardized test is most commonly used to refer to tests that are given to larger groups, such as a test taken by all adults who wish to acquire a license to have a particular kind of job, or by all students of a certain age.

Because everyone gets the same test and the same grading system, standardized tests are often perceived as fairer and more objective than a system in which some students get an easier test and others get a more difficult test. Standardized tests are designed to permit reliable comparison of outcomes across all test takers, because everyone is taking the same test. However, both testing in general and standardized testing in particular are criticized by some people. For example, some people believe that it is unfair to ask all students the same questions if some students' schools did not have the same learning standards.

Definition

Two men perform CPR on a CPR doll
Two men take an authentic, non-written, criterion-referenced standardized test. If they perform cardiopulmonary resuscitation on the mannequin with the correct speed and pressure, they will pass this exam.

The definition of a standardized test has changed somewhat over time. In 1960, standardized tests were defined as those in which the conditions and content were equal for everyone taking the test, regardless of when, where, or by whom the test was given or graded. The purpose of this standardization is to make sure that the scores reliably indicate the abilities or skills being measured, and not other things, such as different instructions about what to do if the test taker does not know the answer to a question. By the beginning of the 21st century, the focus had shifted away from strict sameness of conditions towards equal fairness of conditions. For example, a test taker with a broken wrist might write more slowly because of the injury, and it would be more equitable, and produce a more reliable understanding of the test taker's actual knowledge, if that person were given a few more minutes to write down the answers to a time-limited test. Changing the testing conditions in a way that improves fairness with respect to a permanent or temporary disability, but without undermining the main point of the assessment, is called accommodation. However, if the purpose of the test were to see how quickly the student could write, then giving the test taker extra time would be a modification of the test's content, and the test would no longer be standardized.

Examples of standardized and non-standardized tests
  • History (oral exam). Standardized: Each student is given the same questions, and their answers are scored in the same way. Non-standardized: The teacher goes around the room and asks each student a different question. Some questions are harder than others.
  • Driving (practical skills test). Standardized: Each driving student is asked to do the same things, and they are all evaluated by the same standards. Non-standardized: Some driving students have to drive on a highway, but others only have to drive slowly around the block. One employee takes points off for "bad attitude".
  • Mathematics (written exam). Standardized: Each student is given the same questions, and their answers are scored in the same way. Non-standardized: The teacher gives different questions to different students: an easy test for poor students, another test for most students, and a difficult test for the best students.
  • Music (audition). Standardized: All musicians play the same piece of music. The judges agreed in advance how much factors such as timing, expression, and musicality count for. Non-standardized: Each musician chooses a different piece of music to play. Judges choose the musician they like best. One judge gives extra points to musicians who wear a costume.

History

China

The earliest evidence of standardized testing was in China, during the Han Dynasty, where the imperial examinations covered the Six Arts, which included music, archery, horsemanship, arithmetic, writing, and knowledge of the rituals and ceremonies of both public and private life. These exams were used to select employees for the state bureaucracy.

Later, sections on military strategies, civil law, revenue and taxation, agriculture and geography were added to the testing. In this form, the examinations were institutionalized for more than a millennium.

Today, standardized testing remains widely used, most famously in the Gaokao system.

UK

Standardized testing was introduced into Europe in the early 19th century, modeled on the Chinese mandarin examinations, through the advocacy of British colonial administrators, the most "persistent" of whom was Britain's consul in Guangzhou, China, Thomas Taylor Meadows. Meadows warned of the collapse of the British Empire if standardized testing were not implemented throughout the empire immediately.

Prior to its adoption, standardized testing was not traditionally a part of Western pedagogy. Based on the skeptical and open-ended tradition of debate inherited from Ancient Greece, Western academia favored non-standardized assessments using essays written by students. It is because of this that the first European implementation of standardized testing did not occur in Europe proper, but in British India. Inspired by the Chinese use of standardized testing, in the early 19th century British "company managers hired and promoted employees based on competitive examinations in order to prevent corruption and favoritism." This practice of standardized testing was later adopted in the late 19th century by the British mainland. The parliamentary debates that ensued made many references to the "Chinese mandarin system".

It was from Britain that standardized testing spread, not only throughout the British Commonwealth, but to Europe and then America. Its spread was fueled by the Industrial Revolution. The increase in number of school students during and after the Industrial Revolution, as a result of compulsory education laws, decreased the use of open-ended assessment, which was harder to mass-produce and assess objectively due to its intrinsically subjective nature.

A man sorts small objects into a wooden tray
British soldiers took standardized tests during the Second World War. This new recruit is sorting mechanical parts to test his understanding of machinery. His uniform shows no name, rank, or other sign that might bias the scoring of his work.

Standardized tests such as the War Office Selection Boards were developed for the British Army during the Second World War to choose candidates for officer training and other tasks. The tests looked at soldiers' mental abilities, mechanical skills, ability to work with others, and other qualities. Previous methods had suffered from bias and resulted in choosing the wrong soldiers for officer training.

United States

Standardized testing has been a part of American education since the 19th century, but the widespread reliance on standardized testing in schools in the US is largely a 20th-century phenomenon.

Immigration in the mid-19th century contributed to the growth of standardized tests in the United States. Standardized tests were given to people entering the US as a way of sorting them into social roles and determining social power and status.

The College Entrance Examination Board did not offer standardized testing for university and college admission until 1900. Its first examinations were administered in 1901, in nine subjects. The test was implemented with the idea of creating standardized admissions for elite universities in the northeastern United States. It was originally also meant for top boarding schools, in order to standardize curricula. The original test consisted of essays and was not intended for widespread testing.

During World War I, the Army Alpha and Beta tests were developed to help place new recruits in appropriate assignments based upon their assessed intelligence levels. The first edition of a modern standardized test for IQ, the Stanford–Binet Intelligence Test, appeared in 1916. The College Board then designed the SAT (Scholastic Aptitude Test) in 1926. The first SAT was based on the Army IQ tests, with the goal of determining the test taker's intelligence, problem-solving skills, and critical thinking. In 1959, Everett Lindquist offered the ACT (American College Testing) for the first time. As of 2020, the ACT includes four main sections with multiple-choice questions to test English, mathematics, reading, and science, plus an optional writing section.

Individual states began testing large numbers of children and teenagers through the public school systems in the 1970s. By the 1980s, American schools were assessing nationally. In 2012, 45 states paid an average of $27 per student, and $669 million overall, on large-scale annual academic tests. However, other costs, such as paying teachers to prepare students for the tests and for class time spent administering the tests, significantly exceed the cost of the tests themselves.

The need for the federal government to make meaningful comparisons across a highly de-centralized (locally controlled) public education system has encouraged the use of large-scale standardized testing. The Elementary and Secondary Education Act of 1965 required some standardized testing in public schools. The No Child Left Behind Act of 2001 further tied some types of public school funding to the results of standardized testing.

The goal of No Child Left Behind was to improve the education system in the United States by holding schools and teachers accountable for student achievement, including the educational achievement gap between minority and non-minority children in public schools. An additional factor in the United States education system is the socioeconomic background of the students being tested. According to the National Center for Children in Poverty, 41 percent of children under the age of 18 come from lower-income families. These students require specialized attention to perform well in school and on the standardized tests.

Under these federal laws, the school curriculum was still set by each state, but the federal government required states to assess how well schools and teachers were teaching the state-chosen material with standardized tests. Students' results on large-scale standardized tests were used to allocate funds and other resources to schools, and to close poorly performing schools. The Every Student Succeeds Act replaced the NCLB at the end of 2015. By that point, these large-scale standardized tests had become controversial in the United States because they were high-stakes tests for the school systems and teachers.

Australia

The Australian National Assessment Program – Literacy and Numeracy (NAPLAN) standardized testing was commenced in 2008 by the Australian Curriculum, Assessment and Reporting Authority, an independent authority "responsible for the development of a national curriculum, a national assessment program and a national data collection and reporting program that supports 21st century learning for all Australian students".

The program requires all students in Years 3, 5, 7 and 9 in Australian schools to be assessed using national tests. The subjects covered by these tests are Reading, Writing, Language Conventions (Spelling, Grammar and Punctuation) and Numeracy.

The program provides student-level reports designed to enable parents to see their child's progress over the course of their schooling life, and to help teachers improve individual learning opportunities for their students. Student- and school-level data are also provided to the appropriate school system on the understanding that they can be used to target specific supports and resources to schools that need them most. Teachers and schools use this information, in conjunction with other information, to determine how well their students are performing and to identify any areas of need requiring assistance.

The concept of testing student achievement is not new, although the current Australian approach may be said to have its origins in current educational policy structures in both the US and the UK. There are several key differences between the Australian NAPLAN and the UK and USA strategies. Schools that are found to be under-performing in the Australian context will be offered financial assistance under the current federal government policy.

Colombia

In 1968, the Colombian Institute for the Evaluation of Education (ICFES) was established to regulate higher education. Under it, the existing public evaluation system for authorizing the operation and legal recognition of institutions and university programs was implemented.

Colombia has several standardized tests that assess the level of education in the country. These exams are administered by ICFES.

Students in third, fifth and ninth grade take the "Saber 3°5°9°" exam. This test is currently administered on computers, to both controlled samples and census-wide groups of students.

Upon leaving high school, students take the "Saber 11", which allows them to enter the country's universities. Students studying at home can also take this exam to graduate from high school and obtain their degree certificate and diploma.

Students leaving university must take the "Saber Pro" exam.

Canada

Canada leaves education, and standardized testing as a result, under the jurisdiction of the provinces. Each province has its own province-wide standardized testing regime, ranging from no required standardized tests for students in Ontario to exams worth 50% of final high school grades in Newfoundland and Labrador.

Design and scoring

Design

Most commonly, a major academic test includes both human-scored and computer-scored sections.

A standardized test can be composed of multiple-choice questions, true-false questions, essay questions, authentic assessments, or nearly any other form of assessment. Multiple-choice and true-false items are often chosen for tests that are taken by thousands of people because they can be given and scored inexpensively, quickly, and reliably by using special answer sheets that can be read by a computer, or via computer-adaptive testing. Some standardized tests have short-answer or essay writing components that are assigned a score by independent evaluators who use rubrics (rules or guidelines) and benchmark papers (examples of papers for each possible score) to determine the grade to be given to a response.
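
As a concrete illustration of machine scoring, the sketch below checks a set of multiple-choice responses against a predetermined answer key. The key and the responses are hypothetical, and real scanning systems do far more (sheet imaging, quality checks, scaled scoring), but the core rule is this simple comparison, applied identically to every test taker.

    # Minimal sketch of machine scoring for a multiple-choice standardized test.
    # The answer key and the student responses are hypothetical examples.
    ANSWER_KEY = {1: "B", 2: "D", 3: "A", 4: "C", 5: "B"}

    def score_answer_sheet(responses):
        """Count responses that match the predetermined key; same rule for everyone."""
        return sum(1 for item, correct in ANSWER_KEY.items()
                   if responses.get(item, "").upper() == correct)

    student = {1: "B", 2: "D", 3: "C", 4: "C", 5: "A"}
    print(score_answer_sheet(student))  # prints 3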

Any subject matter

A blank form with many checkboxes
Scoring form for driving tests in the UK. Every person who wants a driver's license takes the same test and gets scored in the same way.

Not all standardized tests involve answering questions. An authentic assessment for athletic skills could take the form of running for a set amount of time or dribbling a ball for a certain distance. Healthcare professionals must pass tests proving that they can perform medical procedures. Candidates for driver's licenses must pass a standardized test showing that they can drive a car. The Canadian Standardized Test of Fitness has been used in medical research, to determine how physically fit the test takers are.

Machine and human scoring

Some standardized testing uses multiple-choice tests, which are relatively inexpensive to score, but any form of assessment can be used.

Since the latter part of the 20th century, large-scale standardized testing has been shaped, in part, by the ease and low cost of grading multiple-choice tests by computer. Most national and international assessments are not fully evaluated by people.

Human scorers are used for items that cannot easily be scored by computer (such as essays). For example, the Graduate Record Examination is a computer-adaptive assessment that requires no scoring by people except for the writing portion.

Human scoring is relatively expensive and often variable, which is why computer scoring is preferred when feasible. For example, some critics say that poorly paid employees will score tests badly. Agreement between scorers can vary between 60 and 85 percent, depending on the test and the scoring session. For large-scale tests in schools, some test-givers pay to have two or more scorers read each paper; if their scores do not agree, then the paper is passed to additional scorers.
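
The multiple-scorer procedure described above can be expressed as a simple adjudication rule. The sketch below is only illustrative: the tolerance and the use of a median are assumptions, not any particular testing program's actual policy.

    # Hypothetical sketch: two independent scores per paper; a third reader is
    # brought in only when the first two disagree beyond a small tolerance.
    from statistics import median

    def adjudicate(score_1, score_2, extra_scorer, tolerance=0):
        if abs(score_1 - score_2) <= tolerance:
            return (score_1 + score_2) / 2
        # Disagreement: route the paper to an additional scorer and take the median.
        return median([score_1, score_2, extra_scorer()])

    print(adjudicate(4, 4, extra_scorer=lambda: 0))  # 4.0, third reader never used
    print(adjudicate(4, 2, extra_scorer=lambda: 3))  # 3, resolved by the third reader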

Though the process is more difficult than grading multiple-choice tests electronically, essays can also be graded by computer. In other instances, essays and other open-ended responses are graded according to a pre-determined assessment rubric by trained graders. For example, at Pearson, all essay graders have four-year university degrees, and a majority are current or former classroom teachers.

Use of rubrics for fairness

Using a rubric is meant to increase fairness when a student's performance is evaluated. In standardized testing, measurement error (a consistent pattern of errors and biases in scoring the test) is easy to determine. In non-standardized assessment, graders have more individual discretion and therefore are more likely to produce unfair results through unconscious bias. When the score depends upon the graders' individual preferences, the result an individual student receives depends upon who grades the test. Standardized tests also remove teacher bias in assessment: research shows that teachers create a kind of self-fulfilling prophecy in their assessment of students, granting higher scores to those they anticipate will achieve and lower grades to those they expect to fail. The table below contrasts standardized and non-standardized grading of the same answers.

Sample scoring for the open-ended history question: What caused World War II?

Grading standards
  Standardized grading: Answers must be marked correct if they mention at least one of the following: Germany's invasion of Poland, Japan's invasion of China, or economic issues.
  Non-standardized grading: No grading standards. Each teacher grades however he or she wants to, considering whatever factors the teacher chooses, such as the answer, the amount of effort, the student's academic background, language ability, or attitude.

Student #1: WWII was caused by Hitler and Germany invading Poland.
  Standardized grading:
    Teacher #1: This answer mentions one of the required items, so it is correct.
    Teacher #2: This answer is correct.
  Non-standardized grading:
    Teacher #1: I feel like this answer is good enough, so I'll mark it correct.
    Teacher #2: This answer is correct, but this good student should be able to do better than that, so I'll only give partial credit.

Student #2: WWII was caused by multiple factors, including the Great Depression and the general economic situation, the rise of national socialism, fascism, and imperialist expansionism, and unresolved resentments related to WWI. The war in Europe began with the German invasion of Poland.
  Standardized grading:
    Teacher #1: This answer mentions one of the required items, so it is correct.
    Teacher #2: This answer is correct.
  Non-standardized grading:
    Teacher #1: I feel like this answer is correct and complete, so I'll give full credit.
    Teacher #2: This answer is correct, so I'll give full points.

Student #3: WWII was caused by the assassination of Archduke Ferdinand.
  Standardized grading:
    Teacher #1: This answer does not mention any of the required items. No points.
    Teacher #2: This answer is wrong. No credit.
  Non-standardized grading:
    Teacher #1: This answer is wrong. No points.
    Teacher #2: This answer is wrong, but this student tried hard and the sentence is grammatically correct, so I'll give one point for effort.
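
The standardized rubric in the table above is mechanical enough to be written as a short rule. The sketch below uses a deliberately simplified keyword check (real rubric scoring is done by trained human readers), intended only to show why the standardized column gives the same result no matter who grades.

    # Simplified sketch of the standardized rubric above: an answer is correct
    # if it mentions at least one required cause. Keyword matching is a crude
    # stand-in for a human scorer reading against the rubric.
    KEYWORDS = ["poland", "china", "economic", "depression"]

    def grade(answer):
        text = answer.lower()
        return any(keyword in text for keyword in KEYWORDS)

    print(grade("WWII was caused by Hitler and Germany invading Poland."))       # True
    print(grade("WWII was caused by the assassination of Archduke Ferdinand."))  # False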

Using scores for comparisons

There are two types of standardized test score interpretations: norm-referenced and criterion-referenced.

  • Norm-referenced score interpretations compare test-takers to a sample of peers. The goal is to rank students as being better or worse than other students. Norm-referenced test score interpretations are associated with traditional education. Students who perform better than others pass the test, and students who perform worse than others fail the test.
  • Criterion-referenced score interpretations compare test-takers to a criterion (a formal definition of content), regardless of the scores of other examinees. These may also be described as standards-based assessments, as they are aligned with the standards-based education reform movement. Criterion-referenced score interpretations are concerned solely with whether or not this particular student's answer is correct and complete. Under criterion-referenced systems, it is possible for all students to pass the test, or for all students to fail the test.

Either of these systems can be used in standardized testing. What is important to standardized testing is whether all students are asked equivalent questions, under equivalent circumstances, and graded equally. In a standardized test, if a given answer is correct for one student, it is correct for all students. Graders do not accept an answer as good enough for one student but reject the same answer as inadequate for another student.

The term normative assessment refers to the process of comparing one test-taker to his or her peers. A norm-referenced test (NRT) is a type of test, assessment, or evaluation which yields an estimate of the position of the tested individual in a predefined population. The estimate is derived from the analysis of test scores and other relevant data from a sample drawn from the population. This type of test identifies whether the test taker performed better or worse than other students taking this test. A criterion-referenced test (CRT) is a style of test which uses test scores to show whether or not test takers performed well on a given task, not how well they performed compared to other test takers. Most tests and quizzes that are written by school teachers are criterion-referenced tests. In this case, the objective is simply to see whether the student can answer the questions correctly. The teacher is not usually trying to compare each student's result against other students.
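
A small sketch can make the contrast concrete: the same raw score can be reported as a percentile rank against a norm group (norm-referenced) or checked against a fixed cut score (criterion-referenced). The scores and the 70-point cut-off below are hypothetical.

    # Hypothetical example contrasting the two score interpretations.
    from bisect import bisect_left

    def percentile_rank(score, norm_group):
        """Norm-referenced: percentage of the norm group scoring below this score."""
        ordered = sorted(norm_group)
        return 100 * bisect_left(ordered, score) / len(ordered)

    def meets_criterion(score, cut_score=70):
        """Criterion-referenced: does the score clear a fixed standard?"""
        return score >= cut_score

    norm_group = [55, 60, 65, 70, 75, 80, 85, 90, 95, 100]
    print(percentile_rank(72, norm_group))  # 40.0 -> better than 40% of the norm group
    print(meets_criterion(72))              # True -> clears the fixed cut score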

This makes standardized tests useful for admissions purposes in higher education, where a school is trying to compare students from across the nation or across the world. Examples of such international benchmark tests include the Trends in International Mathematics and Science Study (TIMSS) and the Progress in International Reading Literacy Study (PIRLS). Performance on these exams has been speculated to vary with how well standards such as the Common Core State Standards (CCSS) align with those of the top-performing countries.

Because the results can be compared across dissimilar schools, the results of a national standardized test can be used to determine what areas need to be improved. Tests that are taken by everyone can help the government determine which schools and which students are struggling the most. With this information, they can implement solutions to fix the issue, allowing students to learn and grow in an academic environment.

Standards

The considerations of validity and reliability typically are viewed as essential elements for determining the quality of any standardized test. However, professional and practitioner associations frequently have placed these concerns within broader contexts when developing standards and making overall judgments about the quality of any standardized test as a whole within a given context.

Evaluation standards

In the field of evaluation, and in particular educational evaluation, the Joint Committee on Standards for Educational Evaluation has published three sets of standards for evaluations. The Personnel Evaluation Standards was published in 1988, The Program Evaluation Standards (2nd edition) was published in 1994, and The Student Evaluation Standards was published in 2003.

Each publication presents and elaborates a set of standards for use in a variety of educational settings. The standards provide guidelines for designing, implementing, assessing and improving the identified form of evaluation. Each of the standards has been placed in one of four fundamental categories to promote educational evaluations that are proper, useful, feasible, and accurate. In these sets of standards, validity and reliability considerations are covered under the accuracy topic. These standards are intended to ensure that student evaluations provide sound, accurate, and credible information about student learning and performance; however, standardized tests offer narrow information on many forms of intelligence, and relying on them harms students because they inaccurately measure a student's potential for success.

Testing standards

In the field of psychometrics, the Standards for Educational and Psychological Testing set standards for validity and reliability, along with errors of measurement and issues related to the accommodation of individuals with disabilities. The third and final major topic covers standards related to testing applications, credentialing, and testing in program evaluation and public policy.

Statistical validity

One of the main advantages of standardized testing is that the results can be empirically documented; therefore, the test scores can be shown to have a relative degree of validity and reliability, as well as results which are generalizable and replicable. This is often contrasted with grades on a school transcript, which are assigned by individual teachers. It may be difficult to account for differences in educational culture across schools, difficulty of a given teacher's curriculum, differences in teaching style, and techniques and biases that affect grading.

Another advantage is aggregation. A well designed standardized test provides an assessment of an individual's mastery of a domain of knowledge or skill which at some level of aggregation will provide useful information. That is, while individual assessments may not be accurate enough for practical purposes, the mean scores of classes, schools, branches of a company, or other groups may well provide useful information because of the reduction of error accomplished by increasing the sample size.
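
The error-reduction argument is the familiar standard-error result. Under the simplifying assumption that individual measurement errors are roughly independent with standard deviation \sigma, the error of a group mean shrinks with the group size n:

    \mathrm{SE}(\bar{x}) = \frac{\sigma}{\sqrt{n}}

So, for example, the average score of a class of 25 students is about five times less noisy than any single student's score, which is why aggregate results can be informative even when individual results are not.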

Test takers

There is criticism from students themselves that tests, while standardized, are unfair to the individual student. Some students are "bad test takers", meaning they become nervous and unfocused during tests. So although the test is standard and should provide fair results, these test takers are at a disadvantage, yet have no other way to prove their knowledge, as no alternative form of testing lets them demonstrate their knowledge and problem-solving skills.

Some students have test anxiety. This applies to standardized tests as well: even students who do not ordinarily experience test anxiety can feel immense pressure to perform when the stakes are so high. High-stakes standardized tests include exams such as the SAT, the PARCC, and the ACT, where doing well is required for grade promotion or college admission.

Annual standardized tests at school

Standardized testing is a very common way of determining a student's past academic achievement and future potential. However, high-stakes tests (whether standardized or non-standardized) can cause anxiety. When teachers or schools are rewarded for better performance on tests, those rewards encourage teachers to "teach to the test" instead of providing a rich and broad curriculum. A 2007 qualitative study by Wayne Au demonstrated that standardized testing narrows the curriculum and encourages teacher-centered instruction instead of student-centered learning.

The validity, quality, and use of tests, particularly the annual standardized tests common in education, continue to be both widely supported and widely criticized. Like the tests themselves, support and criticism of tests come from a variety of sources, such as parents, test takers, instructors, business groups, universities, and governmental watchdogs.

Supporters of large-scale standardized tests in education often provide the following reasons for promoting testing in education:

  • Feedback or diagnosis of test taker's performance
  • Fair and efficient
  • Promotes accountability
  • Prediction and selection
  • Improves performance

Critics of standardized tests in education often provide the following reasons for revising or removing standardized tests in education:

  • Narrows curricular format and encourages teaching to the test.
  • Poor predictive quality.
  • Grade inflation of test scores or grades.
  • Culturally or socioeconomically biased.
  • Psychologically damaging.
  • Poor indicator of intelligence or ability.

Effects on schools

A past standardized test paper using multiple-choice questions, with the answers recorded on a separate form.

Standardized testing is used as a public policy strategy to establish stronger accountability measures for public education. While the National Assessment of Educational Progress (NAEP) has served as an educational barometer for some thirty years by administering standardized tests on a regular basis to random schools throughout the United States, efforts over the last decade at the state and federal levels have mandated annual standardized test administration for all public schools across the country.

The idea behind the standardized testing policy movement is that testing is the first step to improving schools, teaching practice, and educational methods through data collection. Proponents argue that the data generated by the standardized tests act like a report card for the community, demonstrating how well local schools are performing. Critics of the movement, however, point to various discrepancies that result from current state standardized testing practices, including problems with test validity and reliability and false correlations (see Simpson's paradox).

Along with administering and scoring the actual tests, in some cases the teachers are being scored on how well their own students perform on the tests. Teachers are under pressure to continuously raise scores to prove they are worthy of keeping their jobs. This approach has been criticized because there are so many external factors, such as domestic violence, hunger, and homelessness among students, that affect how well students perform.

Performance-based pay is the idea that teachers should be paid more if their students perform well on the tests, and less if they perform poorly. New Jersey Governor Chris Christie proposed educational reform in New Jersey that pressures teachers not only to "teach to the test", but also to have their students perform well at the potential cost of the teachers' salary and job security. The reform called for performance-based pay that depends on students' performance on standardized tests and their educational gains.

Schools that score poorly wind up being slated for closure or downsizing, which puts direct pressure on administrators and can lead to dangerous tactics such as intimidation, cheating, and drilling students on information merely to raise scores.

Uncritical use of standardized test scores to evaluate teacher and school performance is inappropriate, because the students' scores are influenced by three things: what students learn in school, what students learn outside of school, and the students' innate intelligence. The school only has control over one of these three factors. Value-added modeling has been proposed to cope with this criticism by statistically controlling for innate ability and out-of-school contextual factors. In a value-added system of interpreting test scores, analysts estimate an expected score for each student, based on factors such as the student's own previous test scores, primary language, or socioeconomic status. The difference between the student's expected score and actual score is presumed to be due primarily to the teacher's efforts.
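
A minimal sketch of the value-added idea follows, under the strong simplifying assumption that the expected score is predicted from the prior year's score alone with an ordinary least-squares line. Real value-added models control for many more factors, and the data below are invented.

    # Hypothetical value-added sketch: expected score from a least-squares fit
    # on prior scores; the residual is the "value added" attributed to the teacher.
    def fit_line(xs, ys):
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
                 / sum((x - mean_x) ** 2 for x in xs))
        intercept = mean_y - slope * mean_x
        return intercept, slope

    prior  = [55, 60, 70, 80, 90]   # last year's scores
    actual = [58, 66, 72, 85, 88]   # this year's scores
    a, b = fit_line(prior, actual)
    for x, y in zip(prior, actual):
        expected = a + b * x
        print(f"prior={x} actual={y} expected={expected:.1f} value-added={y - expected:+.1f}")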

Affecting what is taught to students

  • Offers guidance to teachers. Standardized tests will allow teachers to see how their students are performing compared to others in the country. This will help them revise their teaching methods if necessary to help their students meet the standards.
  • Allows students to see own progress. Students will be given the opportunity to reflect on their scores and see where their strengths as well as weaknesses are.
  • Provides parents with information about their child. The scores can allow parents to get an idea about how their child is doing academically compared to everyone else of the same age in the nation.

Critics also charge that standardized tests encourage "teaching to the test" at the expense of creativity and in-depth coverage of subjects not on the test. Multiple-choice tests are criticized for failing to assess skills such as writing. Furthermore, students' success is increasingly tied to their teacher's measured performance, making teacher advancement contingent on the teacher's success in raising students' academic performance. Ethical and economic questions arise for teachers when they face clearly underperforming or underskilled students and a standardized test.

Critics contend that overuse and misuse of these tests harms teaching and learning by narrowing the curriculum. According to the group FairTest, when standardized tests are the primary factor in accountability, schools use the tests to narrowly define curriculum and focus instruction. Accountability creates an immense pressure to perform and this can lead to the misuse and misinterpretation of standardized tests.

Critics say that "teaching to the test" disfavors higher-order learning; it transforms what the teachers are allowed to be teaching and heavily limits the amount of other information students learn throughout the years. While it is possible to use a standardized test without letting its contents determine curriculum and instruction, frequently, what is not tested is not taught, and how the subject is tested often becomes a model for how to teach the subject.

Critics also object to the type of material that is typically tested by schools. Although standardized tests for non-academic attributes such as the Torrance Tests of Creative Thinking exist, schools rarely give standardized tests to measure initiative, creativity, imagination, curiosity, good will, ethical reflection, or a host of other valuable dispositions and attributes. Instead, the tests given by schools tend to focus less on moral or character development, and more on individual identifiable academic skills.

In her book Now You See It, Cathy Davidson criticizes standardized tests. She describes young people as "assembly line kids on an assembly line model", meaning that the standardized test is used as part of a one-size-fits-all educational model. She also criticizes the narrowness of the skills being tested and the labeling of children who lack these skills as failures or as students with disabilities. Widespread and organized cheating has also been a growing problem.

Education theorist Bill Ayers has commented on the limitations of the standardized test, writing that "Standardized tests can't measure initiative, creativity, imagination, conceptual thinking, curiosity, effort, irony, judgment, commitment, nuance, good will, ethical reflection, or a host of other valuable dispositions and attributes. What they can measure and count are isolated skills, specific facts and function, content knowledge, the least interesting and least significant aspects of learning." In his book The Shame of the Nation, Jonathan Kozol argues that students submitted to standardized testing are victims of "cognitive decapitation". Kozol comes to this realization after speaking to many children in inner-city schools who have no sense of time, time periods, or historical events. This is especially the case in schools where, due to shortages in funding and strict accountability policies, subjects like the arts, history and geography have been done away with in order to focus on the content of the mandated tests.

There are three metrics by which the best performing countries in the TIMSS (the "A+ countries") are measured: focus, coherence, and rigor. Focus is defined as the number of topics covered in each grade; the idea is that the fewer topics covered in each grade, the more focus can be given to each topic. Coherence is defined as adhering to a sequence of topics that follows the natural progression or logical structure of mathematics. The CCSSM (Common Core State Standards for Mathematics) was compared to both the current state standards and the A+ country standards. With the most topics covered on average, the current state standards had the lowest focus. The Common Core Standards aim to fix this discrepancy by helping educators focus on what students need to learn instead of becoming distracted by extraneous topics. They encourage educational materials to go from covering a vast array of topics in a shallow manner to covering a few topics in much more depth.

Time and money

Standardized tests are a way to measure the education level of students and schools on a broad scale. From Kindergarten to 12th grade, most American students participate in annual standardized tests. The average student takes about 10 of these tests per year (e.g., one or two reading comprehension tests, one or two math tests, a writing test, a science test, etc.). The average amount of testing takes about 2.3% of total class time (equal to about four school days per year).
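
The four-day figure follows directly from the percentage, assuming a school year of roughly 180 instructional days (a typical US figure):

    0.023 \times 180 \text{ days} \approx 4.1 \text{ days}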

Standardized tests are expensive to administer. It has been reported that the United States spends about US$1.7 billion annually on these tests. In 2001, it was also reported that only three companies (Harcourt Educational Measurement, CTB McGraw-Hill and Riverside Publishing) design 96% of the tests taken at the state level.

Educational decisions

Types of tests

  • Standardized, low-stakes: a personality quiz on a website.
  • Standardized, high-stakes: an educational entrance examination that determines university admission.
  • Non-standardized, low-stakes: the teacher asks each student to share something they remember from their homework.
  • Non-standardized, high-stakes: the theater holds an audition to determine who will get a starring role.

Heavy reliance on high-stakes standardized tests for decision-making is often controversial. Critics often propose emphasizing cumulative or even non-numerical measures, such as classroom grades or brief individual assessments (written in prose) from teachers. Supporters argue that test scores provide a clear-cut, objective standard that serves as a valuable check on grade inflation.

The National Academy of Sciences recommends that major educational decisions not be based solely on a single test score. The use of minimum cut-scores for entrance or graduation does not imply a single standard, since test scores are nearly always combined with other minimal criteria such as number of credits, prerequisite courses, attendance, etc. Test scores are often perceived as the "sole criterion" simply because they are the most difficult criterion, or because the fulfillment of other criteria is automatically assumed. One exception to this rule is the GED, which has allowed many people to have their skills recognized even though they did not meet traditional criteria.

Some teachers argue that a single standardized test measures only a student's current knowledge and does not reflect the student's progress since the beginning of the year. The results are created not by individuals who are part of the student's regular instruction, but by professionals who determine what students should know at different ages. In addition, many teachers maintain that they themselves are the best test creators and facilitators, because they are the most aware of students' abilities, capacities, and needs, which would allow them to spend longer on certain subjects or proceed with the regular curriculum as appropriate.

Effects on disadvantaged students

Monty Neill, the director of the National Center for Fair and Open Testing, claims that students who speak English as a second language, who have a disability, or who come from low-income families are disproportionately denied a diploma due to a test score, which is unfair and harmful. In the late 1970s, when graduation tests began in the United States, for example, a lawsuit claimed that many Black students had not had a fair opportunity to learn the material covered on the graduation test, because they had attended schools segregated by law. "The interaction of under-resourced schools and testing most powerfully hits students of color," Neill argues; "they are disproportionately denied diplomas or grade promotion, and the schools they attend are the ones most likely to fare poorly on the tests and face sanctions such as restructuring."

In the journal The Progressive, Barbara Miner explicates the drawbacks of standardized testing by analyzing three different books. As the co-director of the Center for Education at Rice University and a professor of education, Linda M. McNeil in her book Contradictions of School Reform: Educational Costs of Standardized Testing writes "Educational standardization harms teaching and learning and, over the long term, restratifies education by race and class." McNeil believes that test-based education reform places higher standards for students of color. According to Miner, McNeil "shows how test-based reform centralizes power in the hands of the corporate and political elite—a particularly frightening development during this time of increasing corporate and conservative influence over education reform." Such test-based reform has dumbed down learning, especially for students of color.

FairTest says that negative consequences of test misuse include pushing students out of school, driving teachers out of the profession, and undermining student engagement and school climate.

Use of standardized tests in university admissions

Standardized tests are reviewed by universities as part of the application, along with other supporting evidence such as personal statements, GPA, and letters of recommendation. Nathan Kuncel, a scholar of higher education, noticed that in college admission, SAT, ACT, and other standardized tests "help overwhelmed admissions officers divide enormous numbers of applicants into pools for further assessment. High scores don't guarantee admission anywhere, and low scores don't rule it out, but schools take the tests seriously."

Research shows that the tests predict more than just first-year grades and the level of courses a student is likely to take. The longitudinal research conducted by scientists shows that students with high test scores are more likely to take the challenging route through college. Tests also can indicate the outcomes of students beyond college, including faculty evaluations, research accomplishments, degree attainment, performance on comprehensive exams and professional licensure.

Since GPAs differ across schools, and even between two students at the same school, the common measure provided by a test score is more useful.

However, in an April 1995 meta-analysis published in the journal Educational and Psychological Measurement, Todd Morrison and Melanie Morrison examined two dozen validity studies of the test required to get into just about any master's or PhD program in America: the Graduate Record Examination (GRE). These studies encompassed more than 5,000 test takers over the preceding 30 years. The authors found that GRE scores accounted for just 6 percent of the variation in grades in graduate school. The GRE appears to be "virtually useless from a prediction standpoint," the authors wrote. Repeated studies of the Law School Admission Test (LSAT) find the same.
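
To put the 6 percent figure in more familiar terms, the share of variance explained is the square of the correlation coefficient, so

    r = \sqrt{0.06} \approx 0.24

i.e., a correlation of roughly 0.24 between GRE scores and graduate grades in those studies.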

There is debate about whether test scores indicate long-term success in work and life, since many other factors are involved, but fundamental skills such as reading, writing, and math are related to job performance.

Longitudinal research published in 2007 demonstrated that major life accomplishments, such as publishing a novel or patenting technology, are also associated with test scores, even after taking into account educational opportunities. There is even a sizable body of evidence that these skills are related to effective leadership and creative achievements at work. Being able to read texts and make sense of them and having strong quantitative reasoning are crucial in the modern information economy.

Many argue that the skills measured by tests are useful, but only up to a point.

However, a remarkable longitudinal study published in 2008 in the journal Psychological Science examined students who scored in the top 1% at the age of 13. Twenty years later, they were, on average, very highly accomplished, with high incomes, major awards and career accomplishments that would make any parent proud.

Admissions officers rely on a combination of application materials, including letters of recommendation, interviews, student essays, GPA, tests, and personal statements, to evaluate the student comprehensively. However, most of these tools are no guarantee of future success. Problems with traditional interviews and letters of recommendation are so pervasive that many schools are looking for better options.

There is a correlation between test scores and social class, but success on standardized tests and in college is not simply dependent on class. The studies show that "the tests were valid even when controlling for socioeconomic class. Regardless of their family background, students with good test scores and high-school grades do better in college than students with lower scores and weaker transcripts."

Another criticism relating to social class and standardized testing is that only wealthy people receive test preparation and coaching. However, "Researchers have conducted a mix of experimental studies and controlled field studies to test this question. They have generally concluded that the gains due to test prep are more on the order of 5 to 20 points and not the 100 to 200 points claimed by some test prep companies."

More importantly, many people hold the opinion that tests prevent diversity in admissions, since minorities score lower on the tests, on average, than other represented groups. A 2012 study compared schools where admissions tests are optional for applicants with schools that require the tests, and found that "recent research demonstrates that testing-optional schools have been enrolling increasingly diverse student bodies. But the same is true of schools that require testing."

Opponents claim that standardized tests are misused and produce uncritical judgments of intelligence and performance, but supporters argue that these are not failings of standardized tests themselves, but criticisms of poorly designed testing regimes. They argue that testing should and does focus educational resources on the most important aspects of education (imparting a pre-defined set of knowledge and skills), and that other aspects are either less important or should be added to the testing scheme.

Evidence shows that Black and Hispanic students score lower than white and Asian students on average. As a result, standardized math and reading tests such as the SAT have faced escalating attacks from progressives. However, an exhaustive UC faculty senate report, commissioned by University of California President Janet Napolitano and released in 2020, found that the tests are not discriminatory and play an important role in protecting educational quality.

The report suggested that worsening grade inflation, especially at wealthy high schools, makes a standard assessment especially important.

Regarding the UC system's intention to drop standardized tests such as the SAT and ACT from college admissions, subjective and customized measures such as essays and extracurriculars can be easily tailored by those familiar with the process, which is detrimental to students who are not. Admissions without testing may be even more tilted in favor of the well-connected.

In January 2020, the faculty senate at the University of California recommended that the UC system keep standardized tests as admissions requirements. The report says standardized math and reading tests are useful for predicting college performance. Based on data from the students in the UC system, the report concludes that "test scores are currently better predictors of first-year GPA than high school grade point average." The report continues: scores are also good at predicting total college GPA and the possibility a student will graduate. While the "predictive power of test scores has gone up," the report adds, "the predictive power of high school grades has gone down."

Test scores enable UC schools "to select those students from underrepresented groups who are more likely to earn higher grades and to graduate on time." "The original intent of the SAT was to identify students who came from outside relatively privileged circles who might have the potential to succeed in university," the report says. The SAT's maker, the Educational Testing Service (ETS), now claims the SAT is not an "aptitude" test but rather an assessment of "developed abilities".

Testing for students of color, those with disabilities, and those from low-income communities in the United States

Controversy

Standardized testing, and the requirement of such tests for college admissions, is a controversial topic. The reason for the controversy is that these tests can create unequal opportunities for students based on their economic status, race, or even ability status. It is common for students of color, those with disabilities, and those from low-income communities to have lower performance rates. This is most likely due to "generations of exclusionary housing, education, and economic policy". These achievement gaps are not new. In 1991, the gap between the average scores of white students and those of Black students was 0.91 standard deviations; by 2020, the gap had decreased to 0.79 standard deviations.

Cost of taking the tests

Standardized testing can be costly for students, both in prep courses and tutors and in actually taking the tests. The ACT and SAT can cost $55-$70 and $52-$68, respectively. Many students who can afford to do so end up taking the tests multiple times to see the best score they can get, and will submit "super-scores", a score consisting of their best scores from each section. Students from low-income communities cannot always afford to take the test multiple times.

Cost of test prep

Students in low-income communities often do not have the same resources for test prep that their peers from more affluent backgrounds do. This discrepancy in available resources produces a significant difference in the scores of students from different racial backgrounds. An analysis conducted by the Brookings Institution found that 59% of white test takers and 80% of Asian test takers are deemed "college ready" by SAT standards, compared with under 25% of Black test takers and under 33% of Hispanic/Latino test takers. While the College Board reports that socioeconomic factors do not directly affect a student's performance, they can affect it indirectly through access to prep courses and better schooling, experiences that can heavily influence test scores.

Students with disabilities

When it comes to students with disabilities and special needs, these tests are not always an appropriate method of measuring knowledge or readiness. For students with disabilities, it is not always realistic to expect them to sit at a desk for hours at a time and silently take a test. To address this, students with disabilities can receive accommodations, such as extra time to work on the tests.

Metabarcoding

From Wikipedia, the free encyclopedia
 
Differences in the standard methods for DNA barcoding and metabarcoding. While DNA barcoding focuses on a specific species, metabarcoding examines whole communities.

Metabarcoding is the barcoding of DNA/RNA (or eDNA/eRNA) in a manner that allows for the simultaneous identification of many taxa within the same sample. The main difference between barcoding and metabarcoding is that metabarcoding does not focus on one specific organism, but instead aims to determine species composition within a sample.

A barcode consists of a short variable gene region, useful for taxonomic assignment, flanked by highly conserved gene regions that can be used for primer design. This idea of general barcoding originated in 2003 with researchers at the University of Guelph.

The metabarcoding procedure, like general barcoding, proceeds in order through the stages of DNA extraction, PCR amplification, sequencing and data analysis. Different genes are used depending on whether the aim is to barcode a single species or to metabarcode several species; in the latter case, a more universal gene is used. Metabarcoding does not use single-species DNA/RNA as a starting point, but DNA/RNA from several different organisms derived from one environmental or bulk sample.

Environmental DNA

Environmental DNA or eDNA describes the genetic material present in environmental samples such as sediment, water, and air, including whole cells, extracellular DNA and potentially whole organisms. eDNA can be captured from environmental samples and preserved, extracted, amplified, sequenced, and categorized based on its sequence. From this information, detection and classification of species is possible. eDNA may come from skin, mucous, saliva, sperm, secretions, eggs, feces, urine, blood, roots, leaves, fruit, pollen, and rotting bodies of larger organisms, while microorganisms may be obtained in their entirety. eDNA production is dependent on biomass, age and feeding activity of the organism as well as physiology, life history, and space use.

By 2019 methods in eDNA research had been expanded to be able to assess whole communities from a single sample. This process involves metabarcoding, which can be precisely defined as the use of general or universal polymerase chain reaction (PCR) primers on mixed DNA samples from any origin followed by high-throughput next-generation sequencing (NGS) to determine the species composition of the sample. This method has been common in microbiology for years, but, as of 2020, it is only just finding its footing in the assessment of macroorganisms. Ecosystem-wide applications of eDNA metabarcoding have the potential to not only describe communities and biodiversity, but also to detect interactions and functional ecology over large spatial scales, though it may be limited by false readings due to contamination or other errors. Altogether, eDNA metabarcoding increases speed, accuracy, and identification over traditional barcoding and decreases cost, but needs to be standardized and unified, integrating taxonomy and molecular methods for full ecological study.

Applications of environmental DNA metabarcoding in aquatic and terrestrial ecosystems: global ecosystem and biodiversity monitoring with environmental DNA metabarcoding.

eDNA metabarcoding has applications to diversity monitoring across all habitats and taxonomic groups, ancient ecosystem reconstruction, plant-pollinator interactions, diet analysis, invasive species detection, pollution responses, and air quality monitoring. eDNA metabarcoding is a unique method still in development and will likely remain in flux for some time as technology advances and procedures become standardized. However, as metabarcoding is optimized and its use becomes more widespread, it is likely to become an essential tool for ecological monitoring and global conservation study.

Community DNA

Since the inception of high-throughput sequencing (HTS), the use of metabarcoding as a biodiversity detection tool has drawn immense interest. However, there has not always been clarity regarding what source material is used to conduct metabarcoding analyses (e.g., environmental DNA versus community DNA). Without clarity between these two source materials, differences in sampling and in laboratory procedures can affect the subsequent bioinformatics pipelines used for data processing and complicate the interpretation of spatial and temporal biodiversity patterns. It is therefore important to clearly differentiate between the prevailing source materials and their effects on downstream analysis and interpretation, comparing environmental DNA metabarcoding of animals and plants with community DNA metabarcoding.

With community DNA metabarcoding of animals and plants, the targeted groups are most often collected in bulk (e.g., soil, malaise trap or net), and individuals are removed from other sample debris and pooled together prior to bulk DNA extraction. In contrast, macro‐organism eDNA is isolated directly from an environmental material (e.g., soil or water) without prior segregation of individual organisms or plant material from the sample and implicitly assumes that the whole organism is not present in the sample. Of course, community DNA samples may contain DNA from parts of tissues, cells and organelles of other organisms (e.g., gut contents, cutaneous intracellular or extracellular DNA). Likewise, macro‐organism eDNA samples may inadvertently capture whole microscopic nontarget organisms (e.g., protists, bacteria). Thus, the distinction can at least partly break down in practice.

Another important distinction between community DNA and macro-organism eDNA is that sequences generated from community DNA metabarcoding can be taxonomically verified when the specimens are not destroyed in the extraction process. In that case, sequences can also be generated from voucher specimens using Sanger sequencing. As samples for eDNA metabarcoding lack whole organisms, no such in situ comparisons can be made. Taxonomic affinities can therefore only be established by directly comparing the obtained sequences (or bioinformatically generated molecular operational taxonomic units, MOTUs) to taxonomically annotated sequences, such as those in NCBI's GenBank nucleotide database or BOLD, or to self-generated reference databases from Sanger-sequenced DNA. (A molecular operational taxonomic unit (MOTU) is a group identified through the use of clustering algorithms and a predefined percentage sequence similarity, for example 97%.) Then, to at least partially corroborate the resulting list of taxa, comparisons are made with conventional physical, acoustic or visual survey methods conducted at the same time, or with historical records from surveys of the location.
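The 97% MOTU clustering mentioned above can be illustrated with a minimal Python sketch. Everything in it is an illustrative assumption: the read sequences are made up, and difflib's ratio() stands in for a proper alignment-based percent identity; real studies typically rely on dedicated clustering tools such as VSEARCH or the clustering steps bundled with QIIME 2.

# Minimal sketch of greedy MOTU clustering at a 97% similarity threshold.
# difflib's ratio() is only a crude stand-in for alignment-based identity.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough pairwise similarity between two DNA sequences (0..1)."""
    return SequenceMatcher(None, a, b).ratio()

def cluster_motus(sequences: dict[str, str], threshold: float = 0.97) -> list[list[str]]:
    """Each read joins the first cluster whose seed it matches at or above
    the threshold; otherwise it seeds a new MOTU."""
    clusters = []  # each entry: {"seed": sequence, "members": [read ids]}
    for read_id, seq in sequences.items():
        for cl in clusters:
            if similarity(seq, cl["seed"]) >= threshold:
                cl["members"].append(read_id)
                break
        else:
            clusters.append({"seed": seq, "members": [read_id]})
    return [cl["members"] for cl in clusters]

reads = {
    "read1": "ACGTACGTACGTTTGACGTACGTAACGTACGTACGTTTGA",
    "read2": "ACGTACGTACGTTTGACGTACGTAACGTACGTACGTTTGA",  # identical to read1
    "read3": "ACGTACGTACGTTTGACGTACGTAACGTACGTACGTTTGT",  # one mismatch, still >= 97%
    "read4": "TTTTCCCCGGGGAAAATTTTCCCCGGGGAAAATTTTCCCC",  # clearly different
}
for i, members in enumerate(cluster_motus(reads), start=1):
    print(f"MOTU_{i}: {members}")  # reads 1-3 cluster together; read4 forms its own MOTU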

The difference in source material between community DNA and eDNA therefore has distinct ramifications for interpreting the scale of inference in time and space for the biodiversity detected. From community DNA, it is clear that the individual species were found in that time and place. For eDNA, however, the organism that produced the DNA may be upstream of the sampled location, the DNA may have been transported in the faeces of a more mobile predatory species (e.g., birds depositing fish eDNA), or the organism may have been previously present but no longer active in the community, with detection coming from DNA shed years to decades before. The latter means that the scale of inference in both space and time must be considered carefully when inferring the presence of a species in the community based on eDNA.

Metabarcoding stages

Six steps in DNA barcoding and metabarcoding 

There are six stages or steps in DNA barcoding and metabarcoding. The DNA barcoding of animals (and specifically of bats) is used as an example in the diagram at the right and in the discussion immediately below.

First, suitable DNA barcoding regions are chosen to answer a specific research question. The most commonly used DNA barcode region for animals is a segment about 600 base pairs long of the mitochondrial gene cytochrome oxidase I (CO1). This locus provides large sequence variation between species yet a relatively small amount of variation within species. Other barcode regions commonly used for species identification of animals are ribosomal DNA (rDNA) regions such as 16S, 18S and 12S, and mitochondrial regions such as cytochrome b. These markers have advantages and disadvantages and are used for different purposes. Longer barcode regions (at least 600 base pairs long) are often needed for accurate species delimitation, especially to differentiate close relatives. Identifying the producer of organismal remains such as faeces, hair and saliva can serve as a proxy measure to verify the absence or presence of a species in an ecosystem. The DNA in these remains is usually of low quality and quantity, so shorter barcodes of around 100 base pairs are used in these cases. Similarly, DNA remains in dung are often degraded, so short barcodes are needed to identify the prey consumed.
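The contrast between within-species and between-species variation at a barcode locus can be sketched roughly as follows. This is an illustration only: the sequences are short made-up fragments, not real CO1 data, and the uncorrected p-distance assumes the sequences are already aligned and of equal length.

# Minimal sketch of the "barcoding gap" idea: within-species distances at a
# marker such as CO1 should be much smaller than between-species distances.
def p_distance(a: str, b: str) -> float:
    """Uncorrected p-distance between two aligned, equal-length sequences."""
    assert len(a) == len(b), "sequences must be aligned to the same length"
    mismatches = sum(1 for x, y in zip(a, b) if x != y)
    return mismatches / len(a)

species_a_1 = "ATGGCATTCCTACGAATGCACT"
species_a_2 = "ATGGCATTCCTACGAATGCACC"   # one change: low within-species distance
species_b_1 = "ATGACGTTACTTCGTATGAACT"   # many changes: high between-species distance

print("within species A      :", round(p_distance(species_a_1, species_a_2), 3))
print("species A vs species B:", round(p_distance(species_a_1, species_b_1), 3))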

Second, a reference database needs to be built of all DNA barcodes likely to occur in a study. Ideally, these barcodes are generated from vouchered specimens deposited in a publicly accessible place, such as a natural history museum or another research institute. Building such reference databases is currently being done all over the world. Partner organizations collaborate in international projects such as the International Barcode of Life Project (iBOL) and the Consortium for the Barcode of Life (CBOL), which aim to construct a DNA barcode reference that will be the foundation for DNA-based identification of the world's biota. Well-known barcode repositories are NCBI GenBank and the Barcode of Life Data System (BOLD).

Third, the cells containing the DNA of interest must be broken open to expose their DNA. This step, DNA extraction and purification, is performed on the substrate under investigation, and several procedures are available for it. Specific techniques must be chosen to isolate DNA from substrates with partly degraded DNA, for example fossil samples, and from samples containing inhibitors, such as blood, faeces and soil. Extractions in which DNA yield or quality is expected to be low should be carried out in an ancient-DNA facility, using established protocols to avoid contamination with modern DNA. Experiments should always be performed in duplicate and with positive controls included.

Fourth, amplicons have to be generated from the extracted DNA, either from a single specimen or from complex mixtures, using primers based on the DNA barcodes selected in the first step. In the case of metabarcoding, labelled nucleotides (molecular IDs, or MID labels) need to be added to keep track of each amplicon's origin. These labels are needed later in the analyses to trace reads from a bulk data set back to their sample of origin.
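A minimal sketch of how such labels let pooled reads be traced back to their sample of origin is given below. The tag sequences, sample names and reads are hypothetical; in practice demultiplexing is normally handled by the sequencing platform's software or dedicated tools.

# Minimal sketch of demultiplexing: each read is assigned to a sample by the
# MID/index tag at its 5' end, and the tag is then stripped off.
MID_TO_SAMPLE = {
    "ACGT": "sample_pond_1",
    "TGCA": "sample_pond_2",
}

def demultiplex(reads: list[str], mid_length: int = 4) -> dict[str, list[str]]:
    """Group reads by their leading MID tag; unknown tags go to 'unassigned'."""
    by_sample: dict[str, list[str]] = {name: [] for name in MID_TO_SAMPLE.values()}
    by_sample["unassigned"] = []
    for read in reads:
        tag, insert = read[:mid_length], read[mid_length:]
        sample = MID_TO_SAMPLE.get(tag, "unassigned")
        by_sample[sample].append(insert if sample != "unassigned" else read)
    return by_sample

pooled_reads = ["ACGTTTGGCCAAT", "TGCAGGCCTTAAC", "NNNNTTGGCCAAT"]
for sample, seqs in demultiplex(pooled_reads).items():
    print(sample, seqs)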

History of sequencing technology 

Fifth, the appropriate techniques should be chosen for DNA sequencing. The classic Sanger chain-termination method relies on the selective incorporation of chain-elongating inhibitors of DNA polymerase during DNA replication. The resulting fragments, terminated at each of the four bases, are separated by size using electrophoresis and identified by laser detection. The Sanger method can produce only a single read at a time and is therefore suitable for generating DNA barcodes from substrates that contain only a single species. Emerging technologies such as nanopore sequencing have reduced the cost of DNA sequencing from about USD 30,000 per megabase in 2002 to about USD 0.60 in 2016. Modern next-generation sequencing (NGS) technologies can handle thousands to millions of reads in parallel and are therefore suitable for mass identification of a mix of different species present in a substrate, i.e. metabarcoding.

Finally, bioinformatic analyses need to be carried out to match the DNA barcodes obtained with Barcode Index Numbers (BINs) in reference libraries. Each BIN, or BIN cluster, can be identified to species level when it shows high (>97%) concordance with DNA barcodes linked to a species in a reference library; when taxonomic identification to species level is still lacking, it can be assigned to an operational taxonomic unit (OTU), which refers to a group of related species (e.g., a genus, family or higher taxonomic rank); see binning (metagenomics). The results of the bioinformatics pipeline must be pruned, for example by filtering out unreliable singletons, superfluous duplicates, low-quality reads and/or chimeric reads. This is generally done by carrying out serial BLAST searches in combination with automatic filtering and trimming scripts. Standardized thresholds are needed to discriminate between different species and between correct and incorrect identifications.
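The assignment and pruning logic can be sketched roughly as follows. The reference entries, the 97% threshold and the singleton rule shown here are illustrative assumptions; in practice the comparison is done with BLAST or similar tools against databases such as GenBank or BOLD.

# Minimal sketch of the final step: compare a cluster seed against a tiny
# mock reference library, accept a species-level match only above 97%
# identity, and discard clusters represented by a single read (singletons).
from difflib import SequenceMatcher

REFERENCE = {
    "Myotis daubentonii": "ACGTACGTACGTTTGACGTACGTA",
    "Pipistrellus pipistrellus": "TTTTCCCCGGGGAAAATTTTCCCC",
}

def identity(a: str, b: str) -> float:
    """Rough percent identity (0..1); a stand-in for a BLAST-style alignment."""
    return SequenceMatcher(None, a, b).ratio()

def assign(seed: str, n_reads: int, threshold: float = 0.97) -> str:
    """Return a species name, 'unassigned OTU', or discard a singleton."""
    if n_reads < 2:
        return "discarded (singleton)"
    best_name, best_id = max(
        ((name, identity(seed, ref)) for name, ref in REFERENCE.items()),
        key=lambda pair: pair[1],
    )
    return best_name if best_id >= threshold else "unassigned OTU"

print(assign("ACGTACGTACGTTTGACGTACGTA", n_reads=120))  # exact match -> species name
print(assign("GGGGGGGGGGGGGGGGGGGGGGGG", n_reads=50))   # no good match -> unassigned OTU
print(assign("ACGTACGTACGTTTGACGTACGTA", n_reads=1))    # single read -> discarded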

Metabarcoding workflow

Despite the obvious power of the approach, eDNA metabarcoding is affected by precision and accuracy challenges distributed throughout the workflow: in the field, in the laboratory and at the keyboard. As set out in the diagram at the right, following the initial study design (hypothesis/question, targeted taxonomic group, etc.), the current eDNA workflow consists of three components: field, laboratory and bioinformatics. The field component consists of sample collection (e.g., water, sediment, air); samples are preserved or frozen prior to DNA extraction. The laboratory component has four basic steps: (i) DNA is concentrated (if not already done in the field) and purified, (ii) PCR is used to amplify a target gene or region, (iii) unique nucleotide sequences called "indexes" (also referred to as "barcodes") are incorporated using PCR or are ligated (bound) onto different PCR products, creating a "library" in which multiple samples can be pooled together, and (iv) the pooled libraries are sequenced on a high-throughput machine. The final step after laboratory processing is to computationally process the output files from the sequencer using a robust bioinformatics pipeline.

Questions for consideration in the design and implementation phases of an environmental DNA metabarcoding study
 
Decisions involved in a molecular ecology workflow
Samples can be collected from a variety of different environments using appropriate collection techniques. DNA is then prepared and used to answer a variety of ecological questions: metabarcoding is used to answer questions about "who" is present, while the function of communities or individuals can be established using metagenomics, single-cell genomics or metatranscriptomics.

Method and visualisation

Visualization and diversity metrics from environmental sequencing data
a) Alpha diversity displayed as taxonomy bar charts, showing relative abundance of taxa across samples using the Phinch data visualization framework (Bik & Pitch Interactive 2014).
b) Beta diversity patterns illustrated via Principal Coordinate Analyses carried out in QIIME, where each dot represents a sample and colors distinguish different classes of sample. The closer two sample points are in 3D space, the more similar their community assemblages are.
c) GraPhlAn phylogenetic visualization of environmental data, with circular heatmaps and abundance bars used to convey quantitative taxon traits.
d) Edge PCA, a tree‐based diversity metric that identifies specific lineages (green/orange branches) that contribute most to community changes observed in samples distributed across different PCA axes.
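As a rough illustration of the alpha- and beta-diversity summaries mentioned in this caption, the sketch below computes a Shannon index per sample and a Bray-Curtis dissimilarity between two samples from a small made-up read-count table; pipelines such as QIIME compute these routinely before ordination (e.g., PCoA).

# Minimal sketch of two diversity summaries from environmental sequencing data:
# Shannon alpha diversity per sample and Bray-Curtis dissimilarity between
# samples, using an invented sample-by-taxon read-count table.
from math import log

counts = {
    "sample_1": {"Copepoda": 120, "Diatomea": 30, "Rotifera": 10},
    "sample_2": {"Copepoda": 15,  "Diatomea": 90, "Rotifera": 55},
}

def shannon(sample: dict[str, int]) -> float:
    """Shannon index H' = -sum(p_i * ln(p_i)) over taxa with nonzero counts."""
    total = sum(sample.values())
    return -sum((n / total) * log(n / total) for n in sample.values() if n > 0)

def bray_curtis(a: dict[str, int], b: dict[str, int]) -> float:
    """Bray-Curtis dissimilarity: 1 - 2 * (shared abundance) / (total abundance)."""
    taxa = set(a) | set(b)
    shared = sum(min(a.get(t, 0), b.get(t, 0)) for t in taxa)
    return 1 - 2 * shared / (sum(a.values()) + sum(b.values()))

for name, sample in counts.items():
    print(f"{name}: Shannon H' = {shannon(sample):.2f}")
print(f"Bray-Curtis(sample_1, sample_2) = {bray_curtis(counts['sample_1'], counts['sample_2']):.2f}")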

The method requires each collected DNA to be archived with its corresponding "type specimen" (one for each taxon), in addition to the usual collection data. These types are stored in designated institutions (museums, molecular laboratories, universities, zoological gardens, botanical gardens, herbaria, etc.), one for each country; in some cases, the same institution is assigned to hold the types of more than one country, where some nations lack the technology or financial resources to do so.

In this way, the creation of type specimens for genetic barcodes represents a methodology parallel to that of traditional taxonomy.

In a first stage, the region of DNA to be used for the barcode was defined. It had to be short and yield a high percentage of unique sequences. For animals, algae and fungi, a portion of the mitochondrial gene coding for subunit 1 of the cytochrome oxidase enzyme (CO1), a region of around 648 base pairs, has provided high percentages of unique sequences (about 95%).

In the case of plants, the use of CO1 has not been effective, since they have low levels of variability in that region, in addition to the difficulties produced by the frequent effects of polyploidy, introgression and hybridization; the chloroplast genome therefore seems more suitable.

Applications

Pollinator networks

Comparison of pollination networks based on metabarcoding and on visit surveys:
(a,b) plant-pollinator groups; (c,d) plant-pollinator species; (e,f) individual pollinator-plant species (Empis leptempis pandellei).
Apis: Apis mellifera; Bomb.: Bombus sp.; W.bee: wild bees; O.Hym.: other Hymenoptera; O.Dipt.: other Diptera; Emp.: Empididae; Syrph.: Syrphidae; Col.: Coleoptera; Lep.: Lepidoptera; Musc.: Muscidae.
Line thickness highlights the proportion of interactions.

The diagram on the right shows a comparison of pollination networks based on DNA metabarcoding with more traditional networks based on direct observations of insect visits to plants. By detecting numerous additional hidden interactions, metabarcoding data largely alters the properties of the pollination networks compared to visit surveys. Molecular data shows that pollinators are much more generalist than expected from visit surveys. However, pollinator species were composed of relatively specialized individuals and formed functional groups highly specialized upon floral morphs.

As a consequence of ongoing global changes, a dramatic and parallel worldwide decline in pollinators and animal-pollinated plant species has been observed. Understanding the responses of pollination networks to these declines is urgently required to diagnose the risks the ecosystems may incur, as well as to design and evaluate the effectiveness of conservation actions. Early studies on animal pollination dealt with simplified systems, i.e. specific pairwise interactions, or involved small subsets of plant-animal communities. However, the impacts of disturbances occur through highly complex interaction networks, and these complex systems are now a major research focus. Assessing the true networks (determined by ecological processes) from field surveys that are subject to sampling effects remains challenging.

Recent research studies have clearly benefited from network concepts and tools to study the interaction patterns in large species assemblages. They showed that plant-pollinator networks are highly structured, deviating significantly from random associations. Commonly, networks have (1) a low connectance (the realized fraction of all potential links in the community), suggesting a low degree of generalization; (2) a high nestedness (the more-specialist species tend to interact only with proper subsets of the species that the more-generalist species interact with); (3) a cumulative distribution of connectivity (number of links per species, s) that follows a power law or truncated power law, characterized by a few supergeneralists with more links than expected by chance and many specialists; and (4) a modular organization, where a module is a group of plant and pollinator species that exhibits high within-module connectivity and is poorly connected to species of other modules.
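Two of these properties, connectance and the per-species degree (s), can be computed directly from a plant-pollinator incidence matrix, as in the minimal sketch below; the species names and interactions are invented for illustration.

# Minimal sketch: connectance and per-pollinator degree from a small
# made-up plant-pollinator incidence matrix (1 = interaction observed).
plants = ["Trifolium", "Geranium", "Ranunculus"]
pollinators = ["Apis", "Bombus", "Empis", "Syrphus"]

# rows = plants, columns = pollinators
matrix = [
    [1, 1, 0, 1],   # Trifolium
    [0, 1, 1, 0],   # Geranium
    [1, 0, 0, 0],   # Ranunculus
]

links = sum(sum(row) for row in matrix)
possible = len(plants) * len(pollinators)
print(f"connectance = {links}/{possible} = {links / possible:.2f}")

for j, pollinator in enumerate(pollinators):
    degree = sum(row[j] for row in matrix)   # number of links, the "s" above
    print(f"{pollinator}: degree s = {degree}")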

The low level of connectivity and the high proportion of specialists in pollination networks contrast with the view that generalization, rather than specialization, is the norm in networks. Indeed, most plant species are visited by a diverse array of pollinators, which in turn exploit floral resources from a wide range of plant species. A main cause evoked to explain this apparent contradiction is the incomplete sampling of interactions: most network properties are highly sensitive to sampling intensity and network size. Network studies are basically phytocentric, i.e. based on observations of pollinator visits to flowers. This plant-centered approach nevertheless suffers from inherent limitations which may hamper the comprehension of the mechanisms contributing to community assembly and biodiversity patterns. First, direct observations of pollinator visits to certain taxa such as orchids are often scarce, and rare interactions are very difficult to detect in the field in general. Pollinator and plant communities are usually composed of a few abundant species and many rare species that are poorly recorded in visit surveys. These rare species appear as specialists, whereas in fact they could be typical generalists. Because of the positive relationship between interaction frequency (f) and connectivity (s), undersampled interactions may lead to overestimating the degree of specialization in networks. Second, network analyses have mostly operated at the species level. Networks have very rarely been upscaled to functional groups or downscaled to individual-based networks, and most of them have focused on one or two species only. The behavior of individuals or colonies is commonly ignored, although it may influence the structure of the species networks; species recorded as generalists in species networks could therefore comprise cryptic specialized individuals or colonies. Third, flower visitors are by no means always effective pollinators, as they may deposit no conspecific pollen and/or a lot of heterospecific pollen. Animal-centered approaches based on the investigation of pollen loads on visitors and plant stigmas may be more efficient at revealing plant-pollinator interactions.

Disentangling food webs

Arthropod and vertebrate predators in a millet field

(A) Trophic network of arthropod and vertebrate predators; arrows represent biomass flow between predators and prey.
(B) Intraguild interactions among arthropod predators, parasitoids of arthropods, and insectivorous vertebrates.

Metabarcoding offers new opportunities for deciphering trophic linkages between predators and their prey within food webs. Compared to traditional, time-consuming methods, such as microscopic or serological analyses, DNA metabarcoding allows the identification of prey species without prior knowledge of the predator's prey range. In addition, metabarcoding can be used to characterize a large number of species in a single PCR reaction and to analyze several hundred samples simultaneously. Such an approach is increasingly used to explore the functional diversity and structure of food webs in agroecosystems. Like other molecular-based approaches, metabarcoding only gives qualitative results on the presence or absence of prey species in gut or fecal samples. However, this knowledge of the identity of prey consumed by predators of the same species in a given environment provides a "pragmatic and useful surrogate" for truly quantitative information.
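A minimal sketch of how such qualitative detections are summarised is given below: gut-content records (predator sample, predator taxon, prey taxon) are pooled into a presence/absence table and a mean number of prey taxa per sample. All records shown are invented for illustration.

# Minimal sketch: pooling presence/absence gut-content detections into a
# predator-by-prey table and a mean prey-richness per sample.
from collections import defaultdict

# (predator sample, predator taxon, prey taxon detected) - illustrative records
detections = [
    ("carabid_01", "Carabidae",  "Lepidoptera"),
    ("carabid_01", "Carabidae",  "Diptera"),
    ("carabid_02", "Carabidae",  "Hemiptera"),
    ("spider_01",  "Araneae",    "Diptera"),
    ("earwig_01",  "Dermaptera", "Lepidoptera"),
]

prey_by_predator: dict[str, set[str]] = defaultdict(set)
prey_by_sample: dict[str, set[str]] = defaultdict(set)
for sample, predator, prey in detections:
    prey_by_predator[predator].add(prey)
    prey_by_sample[sample].add(prey)

for predator, prey_set in prey_by_predator.items():
    print(f"{predator}: {sorted(prey_set)}")

mean_prey = sum(len(p) for p in prey_by_sample.values()) / len(prey_by_sample)
print(f"mean prey taxa per sample = {mean_prey:.2f}")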

In food web ecology, "who eats whom" is a fundamental issue for gaining a better understanding of the complex trophic interactions existing between pests and their natural enemies within a given ecosystem. The dietary analysis of arthropod and vertebrate predators allows the identification of key predators involved in the natural control of arthropod pests and gives insights into the breadth of their diet (generalist vs. specialist) and intraguild predation.

The diagram on the right summarises results from a 2020 study which used metabarcoding to untangle the functional diversity and structure of the food web associated with millet fields in Senegal. After the identified OTUs were assigned to species, 27 arthropod prey taxa were identified from nine arthropod predators. The mean number of prey taxa detected per sample was highest in carabid beetles, ants and spiders, and lowest in the remaining predators, including anthocorid bugs, pentatomid bugs, and earwigs. Across predatory arthropods, a high diversity of arthropod prey was observed in spiders, carabid beetles, ants, and anthocorid bugs. In contrast, the diversity of prey species identified in earwigs and pentatomid bugs was relatively low. Lepidoptera, Hemiptera, Diptera and Coleoptera were the most common insect prey taxa detected in predatory arthropods.

Conserving functional biodiversity and related ecosystem services, especially by controlling pests using their natural enemies, offers new avenues to tackle challenges for the sustainable intensification of food production systems. Predation of crop pests by generalist predators, including arthropods and vertebrates, is a major component of natural pest control. A particularly important trait of most generalist predators is that they can colonize crops early in the season by first feeding on alternative prey. However, the breadth of the "generalist" diet entails some drawbacks for pest control, such as intra-guild predation. A tuned diagnosis of diet breadth in generalist predators, including predation of non-pest prey, is thus needed to better disentangle food webs (e.g., exploitation competition and apparent competition) and ultimately to identify key drivers of natural pest control in agroecosystems. However, the importance of generalist predators in the food web is generally difficult to assess, due to the ephemeral nature of individual predator–prey interactions. The only conclusive evidence of predation results from direct observation of prey consumption, identification of prey residues within predators’ guts, and analyses of regurgitates or feces.

Marine biosecurity

Metabarcoding eDNA and eRNA in marine biosecurity
Global biodiversity of operational taxonomic units (OTUs) for DNA-only, shared eDNA/eRNA, and RNA-only datasets. Charts show the relative abundance of sequences at highest assigned taxonomic levels.
 
Tunicate colony of Didemnum vexillum
 
Species like these survive passage through unfiltered pumping systems

The spread of non-indigenous species (NIS) represents a significant and increasing risk to ecosystems. In marine systems, NIS that survive transport and adapt to new locations can have significant adverse effects on local biodiversity, including the displacement of native species and shifts in biological communities and associated food webs. Once NIS are established, they are extremely difficult and costly to eradicate, and further regional spread may occur through natural dispersal or via anthropogenic transport pathways. While vessel hull fouling and ships' ballast water are well known as important anthropogenic pathways for the international spread of NIS, comparatively little is known about the potential of regionally transiting vessels to contribute to the secondary spread of marine pests through bilge water translocation.

Recent studies have revealed that the water and associated debris entrained in the bilge spaces of small vessels (<20 m) can act as a vector for the spread of NIS at regional scales. Bilge water is defined as any water retained on a vessel (other than ballast) that is not deliberately pumped on board. It can accumulate on or below the vessel's deck (e.g., under floor panels) through a variety of mechanisms, including wave action, leaks, seepage via the propeller stern glands, and the loading of items such as diving, fishing, aquaculture or scientific equipment. Bilge water may therefore contain seawater as well as living organisms at various life stages, cell debris and contaminants (e.g., oil, dirt, detergent), all of which are usually discharged using automatic bilge pumps or self-drained using duckbill valves. Bilge water pumped from small vessels (manually or automatically) is not usually treated prior to discharge to sea, in contrast with larger vessels, which are required to separate oil and water using filtration systems, centrifugation, or carbon absorption. If propagules remain viable through this process, the discharge of bilge water may result in the spread of NIS.

In 2017, Fletcher et al. used a combination of laboratory and field experiments to investigate the diversity, abundance, and survival of biological material contained in bilge water samples taken from small coastal vessels. Their laboratory experiment showed that ascidian colonies or fragments, and bryozoan larvae, can survive passage through an unfiltered pumping system largely unharmed. They also conducted the first morpho-molecular assessment (using eDNA metabarcoding) of the biosecurity risk posed by bilge water discharges from 30 small vessels (sailboats and motorboats) of various origins and sailing times. Using eDNA metabarcoding, they characterised approximately three times more taxa than with traditional microscopic methods, including detecting five species recognised as non-indigenous in the study region.

To assist in understanding the risks associated with different NIS introduction vectors, traditional microscope-based biodiversity assessments are increasingly being complemented by eDNA metabarcoding. This allows a wide range of taxonomic assemblages, at many life stages, to be identified, and can also enable the detection of NIS that may have been overlooked using traditional methods. Despite the great potential of eDNA metabarcoding tools for broad-scale taxonomic screening, a key challenge for eDNA in the context of environmental monitoring of marine pests, and particularly when monitoring enclosed environments such as some bilge spaces or ballast tanks, is differentiating dead and viable organisms. Extracellular DNA can persist in dark or cold environments for extended periods of time (months to years), so many of the organisms detected using eDNA metabarcoding may not have been viable at the location of sample collection for days or weeks. In contrast, ribonucleic acid (RNA) deteriorates rapidly after cell death, likely providing a more accurate representation of viable communities. Recent metabarcoding studies have explored the use of co-extracted eDNA and eRNA molecules for monitoring benthic sediment samples around marine fish farms and oil drilling sites, and have collectively found slightly stronger correlations between biological and physico-chemical variables along impact gradients when using eRNA. From a marine biosecurity perspective, the detection of living NIS may represent a more serious and immediate threat than detection of NIS based purely on a DNA signal. Environmental RNA may therefore offer a useful method for identifying living organisms in samples.

Miscellaneous

The construction of the genetic barcode library was initially focused on fish and birds, which were followed by butterflies and other invertebrates. In the case of birds, the DNA sample is usually obtained from the chest.

Researchers have already developed specific catalogs for large animal groups, such as bees, birds, mammals and fish. Another use is to analyze the complete zoocenosis of a given geographic area, as in the "Polar Life Bar Code" project, which aims to collect the genetic signatures of all organisms living in the polar regions, at both poles of the Earth. A related application is the barcoding of the entire ichthyofauna of a hydrographic basin, for example the effort begun in the Rio São Francisco basin in northeastern Brazil.

The potential uses of barcodes are very wide, ranging from the discovery of cryptic species (which has already yielded numerous positive results) and the identification of species at any stage of their life, to the secure identification of protected species that are being illegally trafficked.

Potentials and shortcomings

A region of the gene for the cytochrome c oxidase enzyme is used to distinguish species in the Barcode of Life Data Systems database.

Potentials

DNA barcoding has been proposed as a way to distinguish species suitable even for non-specialists to use.

Shortcomings

In general, the shortcomings of DNA barcoding also apply to metabarcoding. One particular drawback for metabarcoding studies is that there is no consensus yet regarding the optimal experimental design and bioinformatics criteria to be applied in eDNA metabarcoding. However, there are ongoing joint efforts, such as the EU COST network DNAqua-Net, to move forward by exchanging experience and knowledge to establish best-practice standards for biomonitoring.

The so-called barcode is a region of mitochondrial DNA within the gene for cytochrome c oxidase. A database, Barcode of Life Data Systems (BOLD), contains DNA barcode sequences from over 190,000 species. However, scientists such as Rob DeSalle have expressed concern that classical taxonomy and DNA barcoding, which they consider a misnomer, need to be reconciled, as they delimit species differently. Genetic introgression mediated by endosymbionts and other vectors can further make barcodes ineffective in the identification of species.

Status of barcode species

In microbiology, genes can move freely even between distantly related bacteria, possibly extending to the whole bacterial domain. As a rule of thumb, microbiologists have assumed that kinds of Bacteria or Archaea with 16S ribosomal RNA gene sequences more similar than 97% to each other need to be checked by DNA-DNA hybridisation to decide if they belong to the same species or not. This concept was narrowed in 2006 to a similarity of 98.7%.

DNA-DNA hybridisation is outdated, and results have sometimes led to misleading conclusions about species, as with the pomarine and great skua. Modern approaches compare sequence similarity using computational methods.

Citation signal

From Wikipedia, the free encyclopedia