
Monday, March 30, 2015

Food irradiation


From Wikipedia, the free encyclopedia


The international Radura logo, used to show a food has been treated with ionizing radiation.

A portable, trailer-mounted food irradiation machine, circa 1968

Food irradiation is the process of exposing foodstuffs to a source of energy capable of stripping electrons from individual atoms (ionizing radiation).[1] This treatment is used to preserve food, reduce the risk of foodborne illness, prevent the spread of invasive pests, and delay or eliminate sprouting or ripening. The radiation can be emitted by a radioactive substance or generated electrically. Irradiated food does not become radioactive. Food irradiation is permitted by over 60 countries, with about 500,000 metric tons of foodstuffs processed annually worldwide.[2] Irradiation is also used for non-food applications, such as medical devices.[3]

Although there have been concerns about the safety of irradiated food, a large body of independent research has confirmed it to be safe.[4][5][6][7][8] One family of chemicals is uniquely formed by irradiation, and it is nontoxic; all other chemicals occur at levels lower than or comparable to those produced when food is heated.[5][9][6][10] Other critics object to irradiation because they confuse it with radioactive contamination or because of negative impressions of the nuclear industry.

The regulations that dictate how food is to be irradiated, as well as the food allowed to be irradiated, vary greatly from country to country. In Austria, Germany, and many other countries of the European Union only dried herbs, spices, and seasonings can be processed with irradiation and only at a specific dose, while in Brazil all foods are allowed at any dose.[11][12][13][14][15]

Uses

Irradiation is used to reduce the pathogens in foods. Depending on the dose, some or all of the microorganisms, bacteria, and viruses present are destroyed, slowed down, or rendered incapable of reproduction. This reduces or eliminates the risk of foodborne illness. Some foods are irradiated at doses sufficient to ensure the product is sterilized and contributes no spoilage or pathogenic microorganisms to the final product.[1]

Irradiation is used to delay the ripening of fruits and the sprouting of vegetables by slowing down enzymatic action in foods. By halting or slowing spoilage and slowing ripening, irradiation prolongs the shelf life of goods. Irradiation cannot return spoiled or over-ripened food to a fresh state: if such food were irradiated, further spoilage would cease and ripening would slow, but the treatment would not destroy existing toxins or repair the food's texture, color, or taste.[16]

Insect pests can be sterilized using relatively low doses of irradiation. Depending on the dose, some or all of the insects present are killed or rendered incapable of reproduction. This stops the spread of invasive species across national boundaries and allows foods to pass quickly through quarantine and avoid spoilage.[17]

Public perception and impact

Irradiation has been approved by the FDA for over 50 years, but the only major growth area for the commercial sale of irradiated foods for human consumption is fruits and vegetables irradiated to kill insects for quarantine purposes. In the early 2000s irradiated meat was common at some US grocery stores, but owing to lack of consumer demand it no longer is. Because consumer demand for irradiated food is low, reducing spoilage between manufacture and purchase and reducing the risk of foodborne illness are currently not sufficient incentives for most manufacturers to supplement their processes with irradiation.[3]

It is widely believed that consumer perception of foods treated with irradiation is more negative than those processed by other means,[18] although some industry studies indicate the number of consumers concerned about the safety of irradiated food has decreased in the last 10 years to levels comparable to those of people concerned about food additives and preservatives.[19] “These irradiated foods are not less safe than others,” Dr. Tarantino said, “and the doses are effective in reducing the level of disease-causing micro-organisms.” "People think the product is radioactive," said Harlan Clemmons, president of Sadex, a food irradiation company based in Sioux City, Iowa.[20]

Some common concerns about food irradiation include the impact of irradiation on food chemistry, as well as the indirect effects of irradiation becoming prevalent in the food handling process. Irradiation reduces the risk of infection and spoilage, does not make food radioactive, and has been shown to be safe, but it does cause chemical reactions that alter the food, and therefore its chemical makeup, nutritional content, and sensory qualities.[3] Some of the potential secondary impacts of irradiation are hypothetical, while others are demonstrated; these include effects of reduced food quality, of the loss of bacteria, and of the irradiation process itself. Because of these concerns and the increased cost of irradiated foods, there is no widespread public demand for the irradiation of foods for human consumption.[3]

Effect of irradiation on food chemistry

The irradiation source supplies energetic particles or waves. As these waves or particles pass through a target material, they collide with other particles. Around the sites of these collisions, chemical bonds are broken, creating short-lived radicals (e.g. the hydroxyl radical, the hydrogen atom, and solvated electrons). These radicals cause further chemical changes by bonding with and/or stripping particles from nearby molecules. When collisions damage DNA or RNA, effective reproduction becomes unlikely; when collisions occur elsewhere in cells, cell division is often suppressed.[1]

Irradiated food does not become radioactive, because the radioactive source never touches the foodstuffs and the energy of the radiation is kept below the threshold for inducing radioactivity. Irradiation does, however, reduce the nutritional content and change the flavor of the food (much like cooking), produce radiolytic products, and increase the number of free radicals in the food.[21]

Irradiation causes a multitude of chemical changes, including the introduction of radiolytic products and free radicals. A few of these products are unique to irradiation, but they are not considered dangerous, and the scale of the chemical changes is not unique. Cooking, smoking, salting, and other less novel techniques alter food so drastically that its original nature is almost unrecognizable and it must be called by a different name. Storage of food also causes dramatic chemical changes, ones that eventually lead to deterioration and spoilage.[22]

Misconceptions

A major concern is that irradiation might cause chemical changes that are harmful to the consumer. Several national expert groups and two international expert groups evaluated the available data and concluded that any food at any dose is wholesome and safe to consume as long as it remains palatable and maintains its technical properties (e.g. feel, texture, or color).[5][6]

Irradiated food does not become radioactive, just as an object exposed to light does not start producing light. Radioactivity is the ability of a substance to emit high-energy particles. When these particles hit a target material they may free other highly energetic particles, but this secondary emission ends shortly after the exposure ends, much as objects stop reflecting light when the source is switched off, and much as warm objects emit heat until they cool down but do not continue to produce their own heat.

It is impossible for food irradiators to induce radioactivity in a product. Irradiators emit non-alpha radiation at precisely known energies (wavelengths), and these emitted particles can never be energetic enough to split the atoms found in food. Without alpha particles, radioactivity can only be induced if an emitted particle with sufficient energy hits another atom and that atom splits into two or more pieces; the resulting atom(s) may then be radioactive. If the particles are not energetic enough, they can never split an atom, no matter how many are emitted from the source. Only in rare materials, such as plutonium and uranium, is the energy released by splitting an atom sufficient to split other atoms; these materials are not found in foods in sufficient quantities, so there can be no chain reaction.[21]

Food quality

Because of the extent of the chemical reactions, some changes to a food's quality after irradiation are inevitable: the nutritional content of food, as well as its sensory qualities (taste, appearance, and texture), is affected by irradiation. Because of this, food advocacy groups consider labeling irradiated food as "raw" to be misleading.[23] However, the degradation of vitamins caused by irradiation is similar to, or even less than, the loss caused by other food preservation processes; chilling, freezing, drying, and heating also result in some vitamin loss.[16]

The changes in quality and nutrition vary greatly from food to food. The changes in the flavor of fatty foods like meats, nuts and oils are sometimes noticeable, while the changes in lean products like fruits and vegetables are less so. Some studies by the irradiation industry show that for some properly treated fruits and vegetables irradiation is seen by consumers to improve the sensory qualities of the product compared to untreated fruits and vegetables.[16]

Radiolytic products and free radicals

The formation of new, previously unknown chemical compounds (unique radiolytic products) via irradiation is a concern. Most of the substances found in irradiated food are also found in food that has been subjected to other food processing treatments, and are therefore not unique. Furthermore, the quantities in which they occur in irradiated food are lower or similar to the quantities formed in heat treatments.[5][9][6][10]
When fatty acids are irradiated, a family of compounds called 2-alkylcyclobutanones (2-ACBs) are produced. These are thought to be unique radiolytic products. Some studies show that these chemicals may be toxic, while others dispute this.[citation needed]

Potentially damaging compounds known as free radicals form when food is irradiated. Most of these are oxidizers (i.e., they accept electrons) and some react very strongly. According to the free-radical theory of aging, excessive amounts of free radicals can lead to cell injury and cell death, which may contribute to many diseases.[24] However, this theory traditionally concerns free radicals generated in the body, not free radicals consumed by the individual, many of which are destroyed in the digestive process.

The radiation doses needed to cause toxic changes are much higher than the doses needed to accomplish the benefits of irradiation. Taking into account the presence of 2-ACBs along with what is known of free radicals, these results lead to the conclusion that there is no significant risk from radiolytic products.[4]

Indirect effects/cumulative impacts of irradiation

The indirect effects and cumulative impacts of irradiation are the concerns and benefits that are not directly related to the chemical changes that occur when food is irradiated, but instead to what would occur if food irradiation became a common process.

When food is irradiated, some nutrition is lost.[16] Therefore, if the majority of food were irradiated at levels high enough to decrease its nutritional content significantly, there could be an increase in nutritional deficiencies in a diet composed entirely of irradiated foods.[25] Furthermore, in at least three studies on cats, the consumption of irradiated food was associated with a loss of tissue in the myelin sheath, leading to reversible paralysis. Researchers suspect that reduced levels of vitamin C and high levels of free radicals may be the cause.[26] This effect is thought to be specific to cats and has not been reproduced in any other animal. To produce these effects, the cats were fed solely on food irradiated at a dose at least five times higher than the maximum allowable dose.[27]

If irradiation were to become common in the food handling process, there would be a reduction in the prevalence of foodborne illness and potentially the eradication of specific pathogens.[28] However, multiple studies suggest that an increased rate of pathogen growth may occur when irradiated food is cross-contaminated with a pathogen, because the competing spoilage organisms are no longer present.[29]

The ability to remove bacterial contamination through post-processing irradiation might reduce the fear of mishandling food, which could cultivate a cavalier attitude toward hygiene and result in contamination by agents other than bacteria. However, similar concerns, that the pasteurization of milk would lead to increased contamination of milk, were prevalent when mandatory pasteurization was introduced, and these fears never materialized after adoption of that law. It is therefore unlikely that irradiation would cause an increase in illness due to non-bacterial contamination.[30]

It may seem reasonable to assume that irradiating food might lead to radiation-tolerant strains, similar to the way strains of bacteria have developed resistance to antibiotics. Bacteria develop resistance to antibiotics when an individual uses antibiotics repeatedly; by contrast, much like in pasteurization plants, products that pass through irradiation plants are processed once, not processed and reprocessed. Cycles of heat treatment have been shown to produce heat-tolerant bacteria, yet no such problems have appeared so far in pasteurization plants. Furthermore, when the irradiation dose is chosen to target a specific species of microbe, it is calibrated to several times the dose required, which ensures that the process destroys all members of the target species.[31] The more irradiation-tolerant members of the target species therefore gain no evolutionary advantage, and without an evolutionary advantage, selection does not occur. As for the possibility that the irradiation process directly produces mutations leading to more virulent, radiation-resistant strains, the European Commission's Scientific Committee on Food found no evidence; on the contrary, irradiation has been found to cause loss of virulence and infectivity, as mutants are usually less competitive and less adapted.[32]
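The dose-targeting logic described above can be sketched with the standard D10-value model, where D10 is the dose that reduces a microbial population tenfold; the D10 value and target reduction below are hypothetical figures chosen only for illustration.

```python
def surviving_fraction(dose_kgy, d10_kgy):
    """Fraction of a microbial population surviving a given dose,
    assuming first-order (log-linear) inactivation kinetics."""
    return 10 ** (-dose_kgy / d10_kgy)

def dose_for_log_reduction(log_cycles, d10_kgy):
    """Dose needed for a given number of decimal (log10) reductions."""
    return log_cycles * d10_kgy

# Hypothetical D10 of 0.3 kGy for a target pathogen:
d10 = 0.3
target = dose_for_log_reduction(5, d10)   # 5-log ("99.999%") reduction -> 1.5 kGy
print(target, surviving_fraction(target, d10))
```

Choosing a treatment dose several times this minimum, as the text describes, pushes the surviving fraction down by further orders of magnitude rather than leaving tolerant survivors to select from.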

Misconceptions

The argument is made that there is a lack of long-term studies, and therefore the safety of irradiated food is not scientifically proven[33] in spite of the fact that hundreds of animal feeding studies of irradiated food, including multigenerational studies, have been performed since 1950.[4] Endpoints investigated have included subchronic and chronic changes in metabolism, histopathology, and function of most systems; reproductive effects; growth; teratogenicity; and mutagenicity. A large number of studies have been performed; meta-studies have supported the safety of irradiated food.[5][6][4][7][8]

The experiments below are cited by food irradiation opponents[weasel words], but their results either could not be verified in later experiments, could not be clearly attributed to the radiation itself, or were attributable to inappropriate experimental design.[16][4]
  • India's National Institute of Nutrition (NIN) found an elevated rate of cells with extra sets of chromosomes (polyploidy) in humans and animals fed recently irradiated wheat (within 12 weeks of treatment). Upon analysis, scientists determined that the techniques used by NIN allowed for too much human error and statistical variation, so the results were unreliable. After multiple studies by independent agencies and scientists, no correlation between polyploidy and irradiation of food could be found.[16]
  • Change in chronaxie in rats[citation needed]

Treatment

Up to the point where the food is processed by irradiation, it is handled in the same way as all other food. To treat foodstuffs, they are exposed to a radiation source for a set period of time to achieve a desired dose. Radiation may be emitted by a radioactive substance or by X-ray and electron-beam accelerators. Special precautions are taken to ensure the foodstuffs never come in contact with the radioactive substances and that personnel and the environment are protected from exposure to radiation.[34] Irradiation treatments are typically classified by dose (high, medium, and low), but are sometimes classified by the effects of the treatment[35] (radappertization, radicidation, and radurization). Food irradiation is sometimes referred to as "cold pasteurization"[36] or "electronic pasteurization"[37] because ionizing radiation does not heat the food to high temperatures, and the effect is similar to that of heat pasteurization. The term "cold pasteurization" is controversial because it may be used to disguise the fact that the food has been irradiated, and because pasteurization and irradiation are fundamentally different processes.

Treatment costs vary as a function of dose and facility usage. A pallet or tote is typically exposed for several minutes to hours, depending on the dose. Low-dose applications such as disinfestation of fruit cost between US$0.01/lb and US$0.08/lb, while higher-dose applications can cost as much as US$0.20/lb.[38]
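As a quick sanity check on the figures above, the per-pound costs translate into per-pallet costs as follows; the pallet weight is a hypothetical number chosen only for illustration.

```python
# Per-pound treatment costs cited in the text (US$/lb).
LOW_DOSE_RANGE = (0.01, 0.08)   # e.g. disinfestation of fruit
HIGH_DOSE_MAX = 0.20            # upper end for high-dose applications

pallet_lbs = 2_000              # hypothetical pallet weight

low = LOW_DOSE_RANGE[0] * pallet_lbs
high = LOW_DOSE_RANGE[1] * pallet_lbs
print(f"Low-dose treatment:  ${low:.2f} to ${high:.2f} per pallet")
print(f"High-dose treatment: up to ${HIGH_DOSE_MAX * pallet_lbs:.2f} per pallet")
```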

Process

Typically, when food is being irradiated, pallets of food are exposed to a source of radiation for a specific time. Dosimeters are embedded in the pallet (at various locations) to determine what dose was achieved.[34] Most irradiated food is processed by gamma irradiation.[39] Special precautions are taken because gamma rays are continuously emitted by the radioactive material. In most designs, to nullify the effects of the radiation, the radioisotope is lowered into a water-filled storage pool, which absorbs the radiation but does not itself become radioactive. This allows pallets of product to be added to and removed from the irradiation chamber and other maintenance to be done.[34] Sometimes movable shields are used to reduce radiation levels in areas of the irradiation chamber instead of submerging the source.[citation needed] For X-ray and electron-beam irradiation these precautions are not necessary, as the source of the radiation can be turned off.[34]

For x-ray, gamma ray and electron irradiation, shielding is required when the foodstuffs are being irradiated. This is done to protect workers and the environment outside of the chamber from radiation exposure. Typically permanent or movable shields are used.[34] In some gamma irradiators the radioactive source is under water at all times, and the hermetically sealed product is lowered into the water. The water acts as the shield in this application.[citation needed] Because of the lower penetration depth of electron irradiation, treatment to entire industrial pallets or totes is not possible.[citation needed]

Dosimetry

The radiation absorbed dose is the amount of energy absorbed per unit weight of the target material. Dose is used because, when the same substance is given the same dose, similar changes are observed in the target material. The SI unit for dose is the gray (Gy, or J/kg). Dosimeters are used to measure dose; they are small components that, when exposed to ionizing radiation, change measurable physical attributes to a degree that can be correlated with the dose received. Measuring dose (dosimetry) involves exposing one or more dosimeters along with the target material.[40][41]
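The definition of the gray, and the low/medium/high dose bands used for legislation in this section, can be captured in a few lines; this is a minimal sketch, with the example energy and mass chosen arbitrarily.

```python
def absorbed_dose_gy(energy_joules, mass_kg):
    """Absorbed dose in grays: energy deposited per unit mass (1 Gy = 1 J/kg)."""
    return energy_joules / mass_kg

def dose_category(dose_gy):
    """Classify a dose into the legislative bands described in the text."""
    kgy = dose_gy / 1000.0
    if kgy <= 1.0:
        return "low"
    if kgy <= 10.0:
        return "medium"
    return "high"

# 5,000 J absorbed by 10 kg of product is 500 Gy (0.5 kGy), a low-dose treatment.
dose = absorbed_dose_gy(5_000, 10)
print(dose, dose_category(dose))  # 500.0 low
```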

For purposes of legislation, doses are divided into low (up to 1 kGy), medium (1 kGy to 10 kGy), and high-dose applications (above 10 kGy).[citation needed] High-dose applications are above those currently permitted in the USA for commercial food items by the FDA and other regulators around the world,[42] though such doses are approved for non-commercial applications, such as sterilizing frozen meat for NASA astronauts (doses of 44 kGy)[43] and food for hospital patients.
Applications By Overall Average Dose

Low dose (up to 1 kGy):
  • Inhibit sprouting: 0.03–0.15 kGy
  • Delay fruit ripening: 0.03–0.15 kGy
  • Stop insect/parasite infestations: 0.07–1.00 kGy

Medium dose (1 kGy to 10 kGy):
  • Delay spoilage of meat: 1.50–3.00 kGy
  • Reduce risk of pathogens in meat: 3.00–7.00 kGy
  • Increase sanitation of spices: 10.00 kGy[44]

High dose (above 10 kGy):
  • Sterilization of packaged meat: 25.00–70.00 kGy
  • Increase juice yield[citation needed]
  • Improve re-hydration[citation needed]

Technology


Efficiency illustration of the different radiation technologies (electron beam, X-ray, gamma rays)

Electron irradiation uses electrons accelerated in an electric field to a velocity close to the speed of light. Electrons have a charge, and therefore do not penetrate the product beyond a few centimeters, depending on product density.
Gamma irradiation involves exposing the target material to highly energetic packets of light (gamma-ray photons). A radioactive material (a radioisotope) is used as the source of the gamma rays.[39] Gamma irradiation is the standard because the deeper penetration of gamma rays enables treating entire industrial pallets or totes (reducing the need for material handling), and it is significantly less expensive than using an X-ray source. Generally cobalt-60 is used as the radioactive source; it is bred from cobalt-59 by neutron irradiation in specially designed nuclear reactors.[39] In limited applications caesium-137, a less costly alternative recovered during the processing of spent nuclear fuel, is used as a source, but insufficient quantities are available for large-scale commercial use. An incident in which water-soluble caesium-137 leaked into a source storage pool, requiring NRC intervention,[45] has led to the near elimination of this radioisotope outside of military applications.

Irradiation by X-ray is similar to irradiation by gamma rays in that less energetic packets of light (X-rays) are used. X-rays are generated by colliding accelerated electrons with a dense material (a process known as bremsstrahlung conversion), and therefore do not necessitate the use of radioactive materials.[46] The ability of X-rays to penetrate the target is similar to that of gamma rays. X-ray machines produce better dose uniformity than gamma irradiation, but they require much more electricity, as only as much as 12% of the input energy is converted into X-rays.[39]
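The roughly 12% conversion efficiency cited above implies a large electricity overhead; the beam-power requirement below is a hypothetical figure used only to show the arithmetic.

```python
conversion_efficiency = 0.12     # fraction of electrical input emitted as X-rays
required_xray_power_kw = 30      # hypothetical X-ray power needed at the product

input_power_kw = required_xray_power_kw / conversion_efficiency
wasted_kw = input_power_kw - required_xray_power_kw
print(f"Electrical input: {input_power_kw:.0f} kW ({wasted_kw:.0f} kW lost as heat)")
```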

Cost

The cost of food irradiation is influenced by dose requirements, the food's tolerance of radiation, handling conditions, i.e., packaging and stacking requirements, construction costs, financing arrangements, and other variables particular to the situation.[47] Irradiation is a capital-intensive technology requiring a substantial initial investment, ranging from $1 million to $5 million. In the case of large research or contract irradiation facilities, major capital costs include a radiation source, hardware (irradiator, totes and conveyors, control systems, and other auxiliary equipment), land (1 to 1.5 acres), radiation shield, and warehouse. Operating costs include salaries (for fixed and variable labor), utilities, maintenance, taxes/insurance, cobalt-60 replenishment, general utilities, and miscellaneous operating costs.[38][48]

Regulations and international standards

The Codex Alimentarius represents the global standard for irradiation of food, in particular under the WTO agreement. Member states are free to convert those standards into national regulations at their discretion;[citation needed] therefore, regulations about irradiation differ from country to country.

The United Nations Food and Agricultural Organization (FAO) has passed a motion to commit member states to implement irradiation technology for their national phytosanitary programs; the General assembly of the International Atomic Energy Agency (IAEA) has urged wider use of the irradiation technology.[citation needed]

Labeling regulations and international standards


The Radura symbol, as required by U.S. Food and Drug Administration regulations to show a food has been treated with ionizing radiation.

The provisions of the Codex Alimentarius are that any "first generation" product must be labeled "irradiated", as must any product derived directly from an irradiated raw material; for ingredients, the provision is that even the last molecule of an irradiated ingredient must be listed with the ingredients, even in cases where the unirradiated ingredient does not appear on the label. The Radura logo is optional, and several countries use a graphical version that differs from the Codex version. The suggested rules for labeling are published in CODEX STAN 1 (2005)[49] and include the usage of the Radura symbol for all products that contain irradiated foods. The Radura symbol is not a designator of quality: the amount of pathogens remaining depends on the original content and on the dose applied, which can vary on a product-by-product basis.

The European Union follows the Codex's provision to label irradiated ingredients down to the last molecule of irradiated foodstuffs. The European Community does not provide for the use of the Radura logo and relies exclusively on labeling by the appropriate phrases in the respective languages of the Member States. The European Union enforces its irradiation labeling laws by requiring its member countries to perform tests on a cross section of food items in the market-place and to report to the European Commission. The results are published annually in the OJ of the European Communities.[50]

The US defines irradiated foods as foods in which the irradiation causes a material change in the food, or a material change in the consequences that may result from the use of the food. Therefore, food that is processed as an ingredient by a restaurant or food processor is exempt from the labeling requirement in the US. This definition is not consistent with the Codex Alimentarius. All irradiated foods must bear a slightly modified[49] Radura symbol at the point of sale and use the term "irradiated", or a derivative thereof, in conjunction with explicit language describing the change in the food or its conditions of use.[51]

Food safety regulations and international standards

In 2003, the Codex Alimentarius removed any upper dose limit for food irradiation as well as clearances for specific foods, declaring that all are safe to irradiate. Countries such as Pakistan and Brazil have adopted the Codex without any reservation or restriction. Other countries, including New Zealand, Australia, Thailand, India, and Mexico, have permitted the irradiation of fresh fruits for fruit fly quarantine purposes, amongst others.[citation needed]

Standards that describe calibration and operation for radiation dosimetry, as well as procedures to relate the measured dose to the effects achieved and to report and document such results, are maintained by the American Society for Testing and Materials (ASTM international) and are also available as ISO/ASTM standards.[52]

All of the rules involved in processing foodstuffs are applied to all foods before they are irradiated.

United States clearances

In the United States, each new food is approved separately with a guideline specifying a maximum dosage; in case of quarantine applications the minimum dose is regulated. Packaging materials containing the food processed by irradiation must also undergo approval. Food irradiation in the United States is primarily regulated by the FDA[53] since it is considered a food additive. The United States Department of Agriculture (USDA) amends these rules for use with meat, poultry, and fresh fruit.[54]

The United States Department of Agriculture (USDA) has approved the use of low-level irradiation as an alternative treatment to pesticides for fruits and vegetables that are considered hosts to a number of insect pests, including fruit flies and seed weevils. Under bilateral agreements that allow less-developed countries to earn income through food exports, those countries may irradiate fruits and vegetables at low doses to kill insects, so that the food can avoid quarantine.

The U.S. Food and Drug Administration (FDA) and the USDA have approved irradiation of the following foods and purposes:
  • Packaged refrigerated or frozen red meat[55] — to control pathogens (E. coli O157:H7 and Salmonella) and to extend shelf life.[56]
  • Packaged poultry — to control pathogens (Salmonella and Campylobacter).[56]
  • Fresh fruits, vegetables and grains — to control insects and inhibit growth, ripening and sprouting.[56]
  • Pork — to control trichinosis.[56]
  • Herbs, spices and vegetable seasonings[57] — to control insects and microorganisms.[56]
  • Dry or dehydrated enzyme preparations — to control insects and microorganisms.[56]
  • White potatoes — to inhibit sprout development.[56]
  • Wheat and wheat flour — to control insects.[56]
  • Loose or bagged fresh iceberg lettuce and spinach[58]

European Union clearances

European law dictates that no foods other than dried aromatic herbs, spices, and vegetable seasonings may be irradiated.[59] However, any Member State is permitted to maintain previous clearances for categories that the EC's Scientific Committee on Food (SCF) had previously approved, or to add clearances granted in other Member States. Presently, Belgium, the Czech Republic, France, Italy, the Netherlands, Poland, and the United Kingdom have adopted such provisions.[60] Before individual items in an approved class can be added to the approved list, studies of the toxicology of each such food, for each of the proposed dose ranges, are required. The law also states that irradiation shall not be used "as a substitute for hygiene or health practices or good manufacturing or agricultural practice". These regulations govern only food irradiation for consumer products, allowing irradiation to be used for patients requiring sterile diets.

Because of the "Single Market" of the EC any food, even if irradiated, must be allowed to be marketed in any other Member State even if a general ban of food irradiation prevails, under the condition that the food has been irradiated legally in the state of origin. Furthermore, imports into the EC are possible from third countries if the irradiation facility had been inspected and approved by the EC and the treatment is legal within the EC or some Member state.[61][62][63][64][65]

Nuclear and employee safety regulations

Irradiation facilities use intense radiation sources that pose a hazard to workers, so interlocks and safeguards are mandated to minimize this risk. There have been radiation-related accidents, deaths, and injuries at such facilities, many of them caused by operators overriding the safety interlocks.[66] In a radiation processing facility, radiation-specific concerns are supervised by special authorities, while "ordinary" occupational safety regulations are handled much as in other businesses.

The safety of irradiation facilities is regulated by the United Nations International Atomic Energy Agency and monitored by the different national Nuclear Regulatory Commissions. The regulators enforce a safety culture that mandates that all incidents that occur are documented and thoroughly analyzed to determine the cause and improvement potential. Such incidents are studied by personnel at multiple facilities, and improvements are mandated to retrofit existing facilities and future design.

In the US the Nuclear Regulatory Commission (NRC) regulates the safety of the processing facility, and the United States Department of Transportation (DOT) regulates the safe transport of the radioactive sources.

Irradiated food supply

There are analytical methods available to detect the usage of irradiation on food items in the marketplace.[67][68][69] These serve as a tool for government authorities to enforce existing labeling standards and to bolster consumer confidence. Phytosanitary irradiation of fruits and vegetables has been increasing globally. In 2010, 18,446 tonnes of fruits and vegetables were irradiated in six countries for export quarantine control: Mexico (56.2%), the United States (31.2%), Thailand (5.18%), Vietnam (4.63%), Australia (2.69%), and India (0.05%). The three products irradiated the most were guava (49.7%), sweet potato (29.3%) and sweet lime (3.27%).[70]
In total, 103,000 tonnes of food products were irradiated in the mainland United States in 2010. The three types of food irradiated the most were spices (77.7%), fruits and vegetables (14.6%), and meat and poultry (7.77%). 17,953 tonnes of irradiated fruits and vegetables were exported to the mainland United States.[70] Mexico, the United States' state of Hawaii, Thailand, Vietnam and India export irradiated produce to the mainland U.S.[70][71][72] Mexico, followed by Hawaii, is the largest exporter of irradiated produce to the mainland U.S.[70]

In total, 7,972 tonnes of food products were irradiated in European Union countries in 2012, mainly in three member states: Belgium (64.7%), the Netherlands (18.5%) and France (7.7%). The three types of food irradiated the most were frog legs (36%), poultry (35%), and dried herbs and spices (15%).[73] The European Union's official site gives information on the regulatory status of food irradiation, the quantities of food irradiated at authorized facilities in European Union member states, and the results of market surveillance in which foods have been tested to determine whether they were irradiated. The Official Journal of the European Union publishes annual reports on food irradiation; the current report covers the period from 1 January 2012 to 31 December 2012 and compiles information from 27 member states.[73]
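As a quick sanity check on the figures above, the per-country tonnages can be reconstructed from the reported 2010 total and the cited percentage shares (a minimal sketch; the shares are rounded values from the survey, so the results are approximate):

```python
# Approximate 2010 phytosanitary irradiation volumes per country,
# reconstructed from the 18,446-tonne total and the percentage
# shares cited in the text above.
total_tonnes = 18446
shares = {
    "Mexico": 0.562,
    "United States": 0.312,
    "Thailand": 0.0518,
    "Vietnam": 0.0463,
    "Australia": 0.0269,
    "India": 0.0005,
}

# Convert each share into an estimated tonnage.
tonnes = {country: round(total_tonnes * s) for country, s in shares.items()}

print(tonnes["Mexico"])  # roughly 10,367 tonnes

# The shares should account for (nearly) the whole total;
# small rounding in the published percentages leaves a gap.
print(round(sum(shares.values()), 4))
```

Note that the shares sum to about 0.9995 rather than exactly 1, which is consistent with the percentages having been rounded in the source.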

Timeline of the history of food irradiation

  • 1895 Wilhelm Conrad Röntgen discovers X-rays (produced as "Bremsstrahlung", German for radiation generated by deceleration)
  • 1896 Antoine Henri Becquerel discovers natural radioactivity; Minck proposes the therapeutic use[74]
  • 1904 Samuel Prescott describes the bactericidal effects of radiation at the Massachusetts Institute of Technology (MIT)[75]
  • 1906 Appleby & Banks: UK patent to use radioactive isotopes to irradiate particulate food in a flowing bed[76]
  • 1918 Gillett: U.S. Patent to use X-rays for the preservation of food[77]
  • 1921 Schwartz describes the elimination of Trichinella from food[78]
  • 1930 Wuest: French patent on food irradiation[79]
  • 1943 MIT becomes active in the field of food preservation for the U.S. Army[80]
  • 1951 U.S. Atomic Energy Commission begins to co-ordinate national research activities
  • 1958 World's first commercial food irradiation (of spices) at Stuttgart, Germany[81]
  • 1970 Establishment of the International Food Irradiation Project (IFIP), headquarters at the Federal Research Centre for Food Preservation, Karlsruhe, Germany
  • 1980 FAO/IAEA/WHO Joint Expert Committee on Food Irradiation recommends the clearance generally up to 10 kGy "overall average dose"[5]
  • 1981/1983 End of IFIP after reaching its goals
  • 1983 Codex Alimentarius General Standard for Irradiated Foods: any food at a maximum "overall average dose" of 10 kGy
  • 1984 International Consultative Group on Food Irradiation (ICGFI) becomes the successor of IFIP
  • 1997 FAO/IAEA/WHO Joint Study Group on High-Dose Irradiation recommends lifting any upper dose limit[6]
  • 1998 The European Union's Scientific Committee on Food (SCF) votes "positive" on eight categories of irradiation applications[82]
  • 1999 The European Union issues Directives 1999/2/EC (framework Directive) and 1999/3/EC (implementing Directive), limiting irradiation to a positive list whose sole entry is one of the eight categories approved by the SCF, but allowing individual states to grant clearances for any food previously approved by the SCF.
  • 2000 Germany leads a veto on a measure to provide a final draft for the positive list.
  • 2003 Codex Alimentarius General Standard for Irradiated Foods: no longer any upper dose limit
  • 2003 The SCF adopts a "revised opinion" that recommends against the cancellation of the upper dose limit.[32]
  • 2004 ICGFI ends
  • 2011 The successor to the SCF, the European Food Safety Authority (EFSA), reexamines the SCF's list and makes further recommendations for inclusion.[83]

Evolutionary developmental biology


Evolutionary developmental biology (the evolution of development; informally, evo-devo) is a field of biology that compares the developmental processes of different organisms to determine the ancestral relationship between them, and to discover how developmental processes evolved. It addresses the origin and evolution of embryonic development; how modifications of development and developmental processes lead to the production of novel features, such as the evolution of feathers;[1] the role of developmental plasticity in evolution; how ecology impacts development and evolutionary change; and the developmental basis of homoplasy and homology.[2]

Although interest in the relationship between ontogeny and phylogeny extends back to the nineteenth century, the contemporary field of evo-devo has gained impetus from the discovery of genes regulating embryonic development in model organisms. General hypotheses remain hard to test because organisms differ so much in shape and form.[3]

Nevertheless, it now appears that, just as evolution tends to create new genes from parts of old genes (molecular economy), evo-devo demonstrates that evolution alters developmental processes to create novel structures from old gene networks (such as bone structures of the jaw deviating into the ossicles of the middle ear), or conserves (molecular economy) a similar program in a host of organisms, such as the eye development genes in molluscs, insects, and vertebrates.[4][5] Initially the major interest was in the evidence of homology in the cellular and molecular mechanisms that regulate body plan and organ development. However, subsequent approaches include developmental changes associated with speciation.[6]

Basic principles

Charles Darwin's theory of evolution builds on three principles: natural selection, heredity, and variation. At the time that Darwin wrote, the principles underlying heredity and variation were poorly understood. In the 1940s, however, biologists incorporated Gregor Mendel's principles of genetics to explain both, resulting in the modern synthesis. It was not until the 1980s and 1990s, however, when more comparative molecular sequence data between different kinds of organisms was amassed and detailed, that an understanding of the molecular basis of the developmental mechanisms began to form.

Currently, it is well understood how genetic mutation occurs.[7] However, developmental mechanisms are not understood sufficiently to explain which kinds of phenotypic variation can arise in each generation from variation at the genetic level. Evolutionary developmental biology studies how the dynamics of development determine the phenotypic variation arising from genetic variation and how that affects phenotypic evolution (especially its direction). At the same time evolutionary developmental biology also studies how development itself evolves.

Thus the origins of evolutionary developmental biology come both from an improvement in molecular biology techniques as applied to development, and from the full appreciation of the limitations of classic neo-Darwinism as applied to phenotypic evolution. Some evo-devo researchers see themselves as extending and enhancing the modern synthesis by incorporating into it findings of molecular genetics and developmental biology.

Evolutionary developmental biology is not yet, as of 2014, a unified discipline, but can be distinguished from earlier approaches to evolutionary theory by its focus on a few crucial ideas. One of these is modularity: as has long been recognized, plant and animal bodies are modular: they are organized into developmentally and anatomically distinct parts. Often these parts are repeated, such as fingers, ribs, and body segments. Evo-devo seeks the genetic and evolutionary basis for the division of the embryo into distinct modules, and for the partly independent development of such modules.

Another central idea recognizes that some gene products function as switches whereas others act as diffusible signals. Genes specify proteins, some of which act as structural components of cells and others as enzymes that regulate various biochemical pathways within an organism. Most biologists working within the modern synthesis assumed that an organism is a straightforward reflection of its component genes. The modification of existing, or evolution of new, biochemical pathways (and, ultimately, the evolution of new species of organisms) depended on specific genetic mutations. In 1961, however, Jacques Monod, Jean-Pierre Changeux and François Jacob discovered within the bacterium Escherichia coli a gene that functioned only when "switched on" by an environmental stimulus.[8] Later, scientists discovered specific genes in animals (including a subgroup of the genes which contain the homeobox DNA motif, called Hox genes) that act as switches for other genes, and could be induced by other gene products, morphogens, that act analogously to the external stimuli in bacteria. These discoveries drew biologists' attention to the fact that genes can be selectively turned on and off, rather than being always active, and that highly disparate organisms (for example, fruit flies and human beings) may use the same genes for embryogenesis (e.g., the genes of the "developmental-genetic toolkit", see below), just regulating them differently.

Similarly, organismal form can be influenced by mutations not only in protein-specifying sequences but also in promoter regions of genes, the DNA sequences to which the products of some genes bind and thereby control the activity of the same or other genes. This finding suggested that the crucial distinction between different species (even different orders or phyla) may be due less to differences in their content of gene products than to differences in the spatial and temporal expression of conserved genes. The implication that large evolutionary changes in body morphology are associated with changes in gene regulation, rather than with the evolution of new genes, suggested that Hox and other "switch" genes may play a major role in evolution, something that contradicts the neo-Darwinian synthesis.
Another focus of evo-devo is developmental plasticity, the basis of the recognition that organismal phenotypes are not uniquely determined by their genotypes. If the generation of phenotypes is conditional, and dependent on external or environmental inputs, evolution can proceed by a "phenotype-first" route,[3][9] with genetic change following, rather than initiating, the formation of morphological and other phenotypic novelties. Mary Jane West-Eberhard argued the case for this in her 2003 book Developmental Plasticity and Evolution.[9]

History

An early version of recapitulation theory, also called the biogenetic law or embryological parallelism, was put forward by Étienne Serres in 1824–26 as what became known as the "Meckel-Serres Law" which attempted to provide a link between comparative embryology and a "pattern of unification" in the organic world. It was supported by Étienne Geoffroy Saint-Hilaire as part of his ideas of idealism, and became a prominent part of his version of Lamarckism leading to disagreements with Georges Cuvier. It was widely supported in the Edinburgh and London schools of higher anatomy around 1830, notably by Robert Edmond Grant, but was opposed by Karl Ernst von Baer's embryology of divergence in which embryonic parallels only applied to early stages where the embryo took a general form, after which more specialised forms diverged from this shared unity in a branching pattern. The anatomist Richard Owen used this to support his idealist concept of species as showing the unrolling of a divine plan from an archetype, and in the 1830s attacked the transmutation of species proposed by Lamarck, Geoffroy and Grant.[10] In the 1850s Owen began to support an evolutionary view that the history of life was the gradual unfolding of a teleological divine plan,[11] in a continuous "ordained becoming", with new species appearing by natural birth.[12]

In On the Origin of Species (1859), Charles Darwin proposed evolution through natural selection, a theory central to modern biology. Darwin recognised the importance of embryonic development in the understanding of evolution, and the way in which von Baer's branching pattern matched his own idea of descent with modification:[13]


Ernst Haeckel (1866), in his endeavour to produce a synthesis of Darwin's theory with Lamarckism and Naturphilosophie, proposed that "ontogeny recapitulates phylogeny," that is, the development of the embryo of every species (ontogeny) fully repeats the evolutionary development of that species (phylogeny), in Geoffroy's linear model rather than Darwin's idea of branching evolution.[13] Haeckel's concept explained, for example, why humans, and indeed all vertebrates, have gill slits and tails early in embryonic development. His theory has since been discredited. However, it served as a backdrop for a renewed interest in the evolution of development after the modern evolutionary synthesis was established (roughly 1936 to 1947).

Stephen Jay Gould referred to this approach to explaining evolution as terminal addition, as if every evolutionary advance were added as a new stage by reducing the duration of the older stages. The idea was based on observations of neoteny.[15] It was extended by the more general idea of heterochrony (changes in the timing of development) as a mechanism for evolutionary change.[16]

D'Arcy Thompson postulated that differential growth rates could produce variations in form in his 1917 book On Growth and Form. He showed the underlying similarities in body plans and how geometric transformations could be used to explain the variations.

Edward B. Lewis discovered homeotic genes, rooting the emerging discipline of evo-devo in molecular genetics. In 2000, a special section of the Proceedings of the National Academy of Sciences (PNAS) was devoted to "evo-devo",[17] and an entire 2005 issue of the Journal of Experimental Zoology Part B: Molecular and Developmental Evolution was devoted to the key evo-devo topics of evolutionary innovation and morphological novelty.[18]

John R. Horner began his project "How to Build a Dinosaur" in 2009, in conjunction with his published book of the same name. Using the principles and theories of evolutionary developmental biology, he took a chick embryo and attempted to alter its development so that it grew features resembling those of a dinosaur.[19] He successfully grew buds of teeth, and is continuing work on growing a tail and on changing the wings into claws. Horner applied evolutionary developmental biology to a chick embryo because an exact replica of a dinosaur cannot be made, no dinosaur DNA having survived; instead he worked with the ancestral developmental framework still present in the chicken's DNA, inherited from its dinosaur ancestors.[20]

The developmental-genetic toolkit

The developmental-genetic toolkit consists of a small fraction of the genes in an organism's genome whose products control its development. These genes are highly conserved among phyla. Differences in the deployment of toolkit genes affect the body plan and the number, identity, and pattern of body parts. The majority of toolkit genes are components of signaling pathways, and encode transcription factors, cell adhesion proteins, cell surface receptor proteins, and secreted morphogens, all of which participate in defining the fate of undifferentiated cells, generating spatial and temporal patterns that in turn form the body plan of the organism.
Among the most important of the toolkit genes are those of the Hox gene cluster, or complex. Hox genes encode transcription factors containing the more broadly distributed homeobox DNA-binding motif, and function in patterning the body axis. Thus, by combinatorially specifying the identity of particular body regions, Hox genes determine where limbs and other body segments will grow in a developing embryo or larva. A paragon of a toolbox gene is Pax6/eyeless, which controls eye formation in all animals. It has been found to produce eyes in mice and Drosophila, even when mouse Pax6/eyeless was expressed in Drosophila.[21]

This means that a large part of the morphological evolution undergone by organisms is a product of variation in the genetic toolkit, either through genes changing their expression pattern or through genes acquiring new functions. A good example of the former is the enlargement of the beak in Darwin's large ground finch (Geospiza magnirostris), in which the gene BMP is responsible for the larger beak of this bird relative to the other finches.[22]

The loss of legs in snakes and other squamates is another good example of genes changing their expression pattern. In this case the gene Distal-less is very under-expressed, or not expressed at all, in the regions where limbs would form in other tetrapods.[23] This same gene determines the spot pattern in butterfly wings,[24] which shows that the toolbox genes can change their function.

Toolbox genes, as well as being highly conserved, also tend to evolve the same function convergently or in parallel. Classic examples are the already mentioned Distal-less gene, which is responsible for appendage formation in both tetrapods and insects, and, at a finer scale, the generation of wing patterns in the butterflies Heliconius erato and Heliconius melpomene. These butterflies are Müllerian mimics whose coloration patterns arose in different evolutionary events but are controlled by the same genes.[25] These observations support Kirschner and Gerhart's theory of facilitated variation, which states that morphological evolutionary novelty is generated by regulatory changes in various members of a large set of conserved mechanisms of development and physiology.[26]

Development and the origin of novelty

Among the more surprising and, perhaps, counterintuitive (from a neo-Darwinian viewpoint) results of recent research in evolutionary developmental biology is that the diversity of body plans and morphology in organisms across many phyla are not necessarily reflected in diversity at the level of the sequences of genes, including those of the developmental genetic toolkit and other genes involved in development. Indeed, as Gerhart and Kirschner have noted, there is an apparent paradox: "where we most expect to find variation, we find conservation, a lack of change".[27]
Even within a species, the occurrence of novel forms within a population does not generally correlate with levels of genetic variation sufficient to account for all morphological diversity. For example, there is significant variation in limb morphologies amongst salamanders and in differences in segment number in centipedes, even when the respective genetic variation is low.

A major question then, for evo-devo studies, is: If the morphological novelty we observe at the level of different clades is not always reflected in the genome, where does it come from? Apart from neo-Darwinian mechanisms such as mutation, translocation and duplication of genes, novelty may also arise by mutation-driven changes in gene regulation. The finding that much biodiversity is not due to differences in genes, but rather to alterations in gene regulation, has introduced an important new element into evolutionary theory.[28][29] Diverse organisms may have highly conserved developmental genes, but highly divergent regulatory mechanisms for these genes. Changes in gene regulation are "second-order" effects of genes, resulting from the interaction and timing of activity of gene networks, as distinct from the functioning of the individual genes in the network.

The discovery of the homeotic Hox gene family in vertebrates in the 1980s allowed researchers in developmental biology to empirically assess the relative roles of gene duplication and gene regulation with respect to their importance in the evolution of morphological diversity. Several biologists, including Sean B. Carroll of the University of Wisconsin–Madison suggest that "changes in the cis-regulatory systems of genes" are more significant than "changes in gene number or protein function".[30] These researchers argue that the combinatorial nature of transcriptional regulation allows a rich substrate for morphological diversity, since variations in the level, pattern, or timing of gene expression may provide more variation for natural selection to act upon than changes in the gene product alone.

Epigenetic alterations of gene regulation or phenotype generation that are subsequently consolidated by changes at the gene level constitute another class of mechanisms for evolutionary innovation. Epigenetic changes include modification of the genetic material due to methylation and other reversible chemical alteration,[31] as well as nonprogrammed remolding of the organism by physical and other environmental effects due to the inherent plasticity of developmental mechanisms.[9] The biologists Stuart A. Newman and Gerd B. Müller have suggested that organisms early in the history of multicellular life were more susceptible to this second category of epigenetic determination than are modern organisms, providing a basis for early macroevolutionary changes.[32]
