Fuel economy in automobiles

From Wikipedia, the free encyclopedia

Fuel consumption monitor from a 2006 Honda Airwave. The displayed fuel economy is 18.1 km/L (5.5 L/100 km; 43 mpg‑US).
 
A 1916 experiment in creating a fuel-saving automobile in the United States. The vehicle weighed only 135 pounds (61.2 kg) and was an adaptation of a small gasoline engine originally designed to power a bicycle.

The fuel economy of an automobile relates distance traveled by a vehicle and the amount of fuel consumed. Consumption can be expressed in terms of volume of fuel to travel a distance, or the distance traveled per unit volume of fuel consumed. Since fuel consumption of vehicles is a significant factor in air pollution, and since importation of motor fuel can be a large part of a nation's foreign trade, many countries impose requirements for fuel economy. Different methods are used to approximate the actual performance of the vehicle. The energy in fuel is required to overcome various losses (wind resistance, tire drag, and others) encountered while propelling the vehicle, and in providing power to vehicle systems such as ignition or air conditioning. Various strategies can be employed to reduce losses at each of the conversions between the chemical energy in the fuel and the kinetic energy of the vehicle. Driver behavior can affect fuel economy; maneuvers such as sudden acceleration and heavy braking waste energy.

Electric cars do not directly burn fuel, and so do not have fuel economy per se, but equivalence measures, such as miles per gallon gasoline equivalent (MPGe), have been created to attempt to compare them.

Units of measure

MPG to L/100 km conversion chart: blue, U.S. gallon; red, imperial gallon.
 
Fuel economy is the relationship between the distance traveled and fuel consumed.

Fuel economy can be expressed in two ways:
Units of fuel per fixed distance
 
Generally expressed as liters per 100 kilometers (L/100 km), used in most European countries, China, South Africa, Australia and New Zealand. British, Irish and Canadian law allow for the use of either liters per 100 kilometers or miles per imperial gallon. The window sticker on new US cars displays the vehicle's fuel consumption in US gallons per 100 miles, in addition to the traditional MPG number.
Units of distance per fixed fuel unit
 
Miles per gallon (mpg) is commonly used in the United States, the United Kingdom, and Canada (alongside L/100 km). Kilometers per liter (km/L) is more commonly used elsewhere in the Americas, Asia, parts of Africa and Oceania. In Arab countries km/20 L, known as kilometers per tanaka (or tanakeh), is used, where a tanaka is a metal container with a volume of twenty liters. Both mpg and km/L are units of distance per fixed fuel amount (a higher value represents more economical fuel consumption), whereas L/100 km is a unit of fuel consumed per fixed distance (a higher value represents higher, i.e. worse, fuel consumption). When the mpg unit is used, it is necessary to identify the type of gallon: the imperial gallon is 4.54609 liters, and the U.S. gallon is 3.785 liters.
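The two conventions are related by a simple reciprocal, scaled by the gallon and mile definitions. A minimal sketch in Python (the constants are the exact legal definitions of the mile and the two gallons):

```python
# Exact unit definitions
KM_PER_MILE = 1.609344
L_PER_US_GALLON = 3.785411784
L_PER_IMP_GALLON = 4.54609

def mpg_to_l_per_100km(mpg, liters_per_gallon=L_PER_US_GALLON):
    """Convert miles per gallon to liters per 100 km (reciprocal relation)."""
    km_per_liter = mpg * KM_PER_MILE / liters_per_gallon
    return 100.0 / km_per_liter

def l_per_100km_to_mpg(l_100km, liters_per_gallon=L_PER_US_GALLON):
    """Inverse conversion; the same reciprocal formula works both ways."""
    return 100.0 * liters_per_gallon / (l_100km * KM_PER_MILE)
```

For example, 43.5 mpg‑US works out to about 5.41 L/100 km; passing `L_PER_IMP_GALLON` instead gives the imperial-gallon figure.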

Fuel economy statistics

While the thermal efficiency (mechanical output to chemical energy in fuel) of petroleum engines has increased since the beginning of the automotive era to a current maximum of about 36.4%, this is not the only factor in fuel economy. The design of the automobile as a whole, and its usage pattern, also affect fuel economy. Published fuel economy figures vary between jurisdictions due to differences in testing protocols.

One of the first studies to determine fuel economy in the United States was the Mobil Economy Run, an event that took place every year from 1936 (except during World War II) to 1968. It was designed to provide real fuel efficiency numbers during a coast-to-coast test on real roads, in regular traffic and weather conditions. The Mobil Oil Corporation sponsored it and the United States Auto Club (USAC) sanctioned and operated the run. In more recent studies, the average fuel economy for new passenger cars in the United States improved from 17 mpg (13.8 L/100 km) in 1978 to more than 22 mpg (10.7 L/100 km) in 1982. The average fuel economy in 2008 for new cars, light trucks and SUVs in the United States was 26.4 mpg‑US (8.9 L/100 km). 2008 model year cars classified as "midsize" by the US EPA ranged from 11 to 46 mpg‑US (21 to 5 L/100 km). However, due to environmental concerns caused by CO2 emissions, new EU regulations are being introduced to reduce the average emissions of cars sold beginning in 2012 to 130 g/km of CO2, equivalent to 4.5 L/100 km (52 mpg‑US, 63 mpg‑imp) for a diesel-fueled car and 5.0 L/100 km (47 mpg‑US, 56 mpg‑imp) for a gasoline (petrol)-fueled car.

The average consumption across the fleet is not immediately affected by the new vehicle fuel economy: for example, Australia's car fleet average in 2004 was 11.5 L/100 km (20.5 mpgUS), compared with the average new car consumption in the same year of 9.3 L/100 km (25.3 mpgUS).

Speed and fuel economy studies

1997 fuel economy statistics for various US models
 
Fuel economy at steady speeds with selected vehicles was studied in 2010. The most recent study indicates greater fuel efficiency at higher speeds than earlier studies. For example, some vehicles achieve better fuel economy at 100 km/h (62 mph) than at 70 km/h (43 mph), although not their best economy: the 1994 Oldsmobile Cutlass Ciera with the LN2 2.2 L engine has its best economy at 90 km/h (56 mph) (8.1 L/100 km (29 mpg‑US)), and gets better economy at 105 km/h (65 mph) than at 72 km/h (45 mph) (9.4 L/100 km (25 mpg‑US) vs. 11 L/100 km (22 mpg‑US)). The proportion of driving on high-speed roadways varies from 4% in Ireland to 41% in the Netherlands.

When the US National Maximum Speed Law's 55 mph (89 km/h) speed limit was mandated, there were complaints that fuel economy could decrease instead of increase. The 1997 Toyota Celica got better fuel-efficiency at 105 km/h (65 mph) than it did at 65 km/h (40 mph) (5.41 L/100 km (43.5 mpg‑US) vs 5.53 L/100 km (42.5 mpg‑US)), although even better at 60 mph (97 km/h) than at 65 mph (105 km/h) (48.4 mpg‑US (4.86 L/100 km) vs 43.5 mpg‑US (5.41 L/100 km)), and its best economy (52.6 mpg‑US (4.47 L/100 km)) at only 25 mph (40 km/h). Other vehicles tested had from 1.4 to 20.2% better fuel-efficiency at 90 km/h (56 mph) vs. 105 km/h (65 mph). Their best economy was reached at speeds of 40 to 90 km/h (25 to 56 mph).

Officials hoped that the 55 mph (89 km/h) limit, combined with a ban on ornamental lighting, no gasoline sales on Sunday, and a 15% cut in gasoline production, would reduce total gas consumption by 200,000 barrels a day, representing a 2.2% drop from annualized 1973 gasoline consumption levels. This was partly based on a belief that cars achieve maximum efficiency between 65 and 80 km/h (40 and 50 mph) and that trucks and buses were most efficient at 55 mph (89 km/h).

In 1998, the U.S. Transportation Research Board footnoted an estimate that the 1974 National Maximum Speed Limit (NMSL) reduced fuel consumption by 0.2 to 1.0 percent. Rural interstates, the roads most visibly affected by the NMSL, accounted for 9.5% of U.S. vehicle miles traveled in 1973, but such free-flowing roads typically provide more fuel-efficient travel than conventional roads.

Differences in testing standards

Identical vehicles can have varying fuel consumption figures listed depending upon the testing methods of the jurisdiction.

Lexus IS 250 – petrol 2.5 L 4GR-FSE V6, 204 hp (153 kW), 6 speed automatic, rear wheel drive.
  • Australia (L/100 km) – 'combined' 9.1, 'urban' 12.7, 'extra-urban' 7.0
  • Canada (L/100 km) – 'combined' 9.6, 'city' 11.1, 'highway' 7.8
  • European Union (L/100 km) – 'combined' 8.9, 'urban' 12.5, 'extra-urban' 6.9
  • United States (L/100 km) – 'combined' 9.8, 'city' 11.2, 'highway' 8.1

Energy considerations

Since the total force opposing the vehicle's motion (at constant speed) multiplied by the distance through which the vehicle travels represents the work that the vehicle's engine must perform, the study of fuel economy (the amount of energy consumed per unit of distance traveled) requires a detailed analysis of the forces that oppose a vehicle's motion. In terms of physics, force is the rate at which the work generated (energy delivered) varies with the distance traveled, or:

force = energy delivered ÷ distance traveled
Note: The amount of work generated by the vehicle's power source (energy delivered by the engine) would be exactly proportional to the amount of fuel energy consumed if the engine's efficiency were the same regardless of power output; this is not necessarily the case, due to the operating characteristics of the internal combustion engine.
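As a back-of-the-envelope illustration of that relation, fuel consumption per unit distance can be read directly as an equivalent force. A hedged sketch (the 34.2 MJ/L energy density and 25% efficiency are typical textbook values, not figures from this article):

```python
def equivalent_force_newtons(l_per_100km, fuel_energy_mj_per_l=34.2,
                             engine_efficiency=0.25):
    """Fuel energy burned per meter of travel equals a force in newtons;
    multiplying by engine efficiency estimates the force at the wheels.
    34.2 MJ/L gasoline energy density and 25% efficiency are assumed
    typical values, not data from the article."""
    liters_per_m = l_per_100km / 100_000.0                 # L per meter
    joules_per_m = liters_per_m * fuel_energy_mj_per_l * 1e6
    return joules_per_m, joules_per_m * engine_efficiency

fuel_force, wheel_force = equivalent_force_newtons(8.0)
# 8 L/100 km of gasoline corresponds to roughly 2.7 kJ of fuel energy per
# meter (about 2700 N equivalent), i.e. roughly 680 N of propulsive force
# at 25% engine efficiency.
```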

For a vehicle whose source of power is a heat engine (an engine that uses heat to perform useful work), the amount of fuel energy that a vehicle consumes per unit of distance (level road) depends upon:
  • The thermodynamic efficiency of the heat engine;
  • The forces of friction within the mechanical system that delivers engine output to the wheels;
  • The forces of friction in the wheels and between the road and the wheels (rolling friction);
  • Other internal forces that the engine works against (electrical generator, air conditioner, water pump, engine fan, etc.);
  • External forces that resist motion (e.g., wind, rain);
  • Non-regenerative braking force (brakes that turn motion energy into heat rather than storing it in a useful form; e.g., electrical energy in hybrid vehicles);
  • Fuel consumed while the engine is on standby and not powering the wheels, i.e., while the vehicle is coasting, braking or idling.
Energy dissipation in city and highway driving for a mid-size gasoline-powered car.
 
Ideally, a car traveling at a constant velocity on level ground in a vacuum with frictionless wheels could travel at any speed without consuming any energy beyond what is needed to get the car up to speed. Less ideally, any vehicle must expend energy on overcoming road load forces, which consist of aerodynamic drag, tire rolling resistance, and inertial energy that is lost when the vehicle is decelerated by friction brakes. With ideal regenerative braking, the inertial energy could be completely recovered, but there are few options for reducing aerodynamic drag or rolling resistance other than optimizing the vehicle's shape and the tire design. Road load energy, or the energy demanded at the wheels, can be calculated by evaluating the vehicle equation of motion over a specific driving cycle. The vehicle power train must then provide this minimum energy in order to move the vehicle, and will lose a large amount of additional energy in the process of converting fuel energy into work and transmitting it to the wheels. Overall, the sources of energy loss in moving a vehicle may be summarized as follows:
  • Engine efficiency (20–30%), which varies with engine type, the mass of the automobile and its load, and engine speed (usually measured in RPM).
  • Aerodynamic drag force, which increases roughly by the square of the car's speed, but note that drag power goes by the cube of the car's speed.
  • Rolling friction.
  • Braking, although regenerative braking captures some of the energy that would otherwise be lost.
  • Losses in the transmission. Manual transmissions can be up to 94% efficient, whereas older automatic transmissions may be as low as 70% efficient. Automatically controlled gearboxes that have the same internals as manual boxes give the same efficiency as a pure manual gearbox, with the bonus of added intelligence in selecting optimal shifting points.
  • Air conditioning. The power required for the engine to turn the compressor decreases the fuel-efficiency, though only when in use. This may be offset by the reduced drag of the vehicle compared with driving with the windows down. The efficiency of AC systems gradually deteriorates due to dirty filters etc.; regular maintenance prevents this. The extra mass of the air conditioning system will cause a slight increase in fuel consumption.
  • Power steering. Older hydraulic power steering systems are powered by a hydraulic pump constantly engaged to the engine. Power assistance required for steering is inversely proportional to the vehicle speed so the constant load on the engine from a hydraulic pump reduces fuel efficiency. More modern designs improve fuel efficiency by only activating the power assistance when needed; this is done by using either direct electrical power steering assistance or an electrically powered hydraulic pump.
  • Cooling. Older cooling systems used a constantly engaged mechanical fan to draw air through the radiator at a rate directly related to the engine speed. This constant load reduces efficiency. More modern systems use electrical fans to draw additional air through the radiator when extra cooling is required.
  • Electrical systems. Headlights, battery charging, active suspension, circulating fans, defrosters, media systems, speakers, and other electronics can also significantly increase fuel consumption, as the energy to power these devices causes increased load on the alternator. Since alternators are commonly only 40–60% efficient, the added load from electronics on the engine can be as high as 3 horsepower (2.2 kW) at any speed including idle. In the FTP 75 cycle test, a 200 watt load on the alternator reduces fuel efficiency by 1.7 MPG. Headlights, for example, consume 110 watts on low and up to 240 watts on high. These electrical loads can cause much of the discrepancy between real world and EPA tests, which only include the electrical loads required to run the engine and basic climate control.
  • Standby. The energy needed to keep the engine running while it is not providing power to the wheels, i.e., when stopped, coasting or braking.
Fuel-efficiency decreases from electrical loads are most pronounced at lower speeds because most electrical loads are constant while engine load increases with speed. So at a lower speed a higher proportion of engine horsepower is used by electrical loads. Hybrid cars see the greatest effect on fuel-efficiency from electrical loads because of this proportional effect. 
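The road-load contributions listed above (aerodynamic drag growing with the square of speed, rolling resistance roughly constant) can be sketched numerically. The vehicle parameters below are illustrative assumptions for a mid-size car, not figures from the article:

```python
AIR_DENSITY = 1.225   # kg/m^3, sea-level air
GRAVITY = 9.81        # m/s^2

def road_load_power_kw(speed_kmh, mass_kg=1500.0, cd=0.30,
                       frontal_area_m2=2.2, c_rr=0.010):
    """Power (kW) to overcome aerodynamic drag and rolling resistance at a
    steady speed on level ground. Mass, drag coefficient, frontal area and
    rolling-resistance coefficient are assumed typical mid-size-car values."""
    v = speed_kmh / 3.6                                          # m/s
    f_aero = 0.5 * AIR_DENSITY * cd * frontal_area_m2 * v ** 2   # grows ~v^2
    f_roll = c_rr * mass_kg * GRAVITY                            # ~constant
    return (f_aero + f_roll) * v / 1000.0                        # drag power ~v^3
```

Doubling speed roughly quadruples the drag force and multiplies the drag power by eight, which is why steady-speed fuel economy falls off sharply at highway speeds.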

Future technologies

Technologies that may improve fuel efficiency, but are not yet on the market, include:
  • HCCI (Homogeneous Charge Compression Ignition) combustion
  • Scuderi engine
  • Compound engines
  • Two-stroke diesel engines
  • High-efficiency gas turbine engines
  • BMW's Turbosteamer – using the heat from the engine to spin a mini turbine to generate power
  • Vehicle electronic control systems that automatically maintain distances between vehicles on motorways/freeways that reduce ripple back braking, and consequent re-acceleration.
  • Time-optimized piston path, to capture energy from hot gases in the cylinders when they are at their highest temperatures
  • Stirling hybrid battery vehicle
Many aftermarket consumer products exist that are purported to increase fuel economy; many of these claims have been discredited. In the United States, the Environmental Protection Agency maintains a list of devices that have been tested by independent laboratories and makes the test results available to the public.

Fuel economy data reliability

The mandatory publication of fuel consumption figures by the manufacturer led some manufacturers to use dubious practices to reach better values in the past. If the test is on a test stand, the vehicle may detect open doors and adapt the engine control. Likewise, when driven according to the test regime, parameters may adapt automatically. Test laboratories use a "golden car" that is tested in each lab to check that every lab produces the same set of measurements for a given drive cycle.

Tire pressures and lubricants have to be as recommended by the manufacturer (higher tire pressures are required on a particular dynamometer type, but this is to compensate for the different rolling resistance of the dynamometer, not to produce an unrealistic load on the vehicle). Normally the quoted figures a manufacturer publishes have to be proved by the relevant authority witnessing vehicle/engine tests. Some jurisdictions independently test emissions of vehicles in service, and as a final measure can force a recall of all of a particular type of vehicle if the customer vehicles do not fulfill manufacturers' claims within reasonable limits. The expense and bad publicity from such a recall encourage manufacturers to publish realistic figures. The US Federal government retests 10–15% of models to make sure that the manufacturer's tests are accurate.

Real-world fuel consumption can vary greatly, as it is affected by many factors that have little to do with the vehicle. Driving conditions (weather, traffic, temperature), driving style (hard braking, jack-rabbit starts, speeding), road conditions (paved vs. gravel, smooth vs. potholed), and factors such as carrying excess weight, roof racks, and fuel quality can all combine to dramatically increase fuel consumption. Expecting consistent performance in the face of so many variables is impossible, as is expecting one set of numbers to encompass every driver and their personal circumstances.

The ratings are meant to provide a comparison, and are not a promise of actual performance.

Concerns over EPA estimates

For many years critics claimed that fuel economy figures estimated by the EPA (U.S. Environmental Protection Agency) were misleading. The primary arguments of the EPA's detractors focused on the lack of real-world testing and the very limited scale (i.e., city or highway).

Partly in response to these criticisms, the EPA changed its fuel economy rating system in 2008 in an attempt to address these concerns more adequately. Instead of testing in just two presumed modes, the testing now covers:
  • Faster speeds and acceleration
  • Air conditioner use
  • Colder outside temperatures
While the new EPA standards may represent an improvement, real-world user data may still be the best way to gather accurate fuel economy information. As such, the EPA has also set up a website (http://www.fueleconomy.gov/mpg/MPG.do?action=browseList) where drivers can enter and track their own real-world fuel economy numbers.

There are also a number of websites that attempt to track and report individual user fuel economy data through real-life driving. Sites or publications such as Consumer Reports, Edmunds.com, Consumer Guide and TrueDelta.com offer this service and claim more accurate numbers than those listed by the EPA.

Fuel economy maximizing behaviors

Governments, various environmentalist organizations, and companies like Toyota and Shell Oil Company have historically urged drivers to maintain adequate air pressure in tires and careful acceleration/deceleration habits. Keeping track of fuel efficiency stimulates fuel economy-maximizing behavior.

A five-year partnership between Michelin and Anglian Water showed that 60,000 liters of fuel can be saved through proper tire pressure maintenance. The tires on the Anglian Water fleet of 4,000 vans and cars now last their full lifetime, showing the impact that tire pressure has on fuel efficiency.

Fuel economy as part of quality management regimes

Environmental management systems such as EMAS, as well as good fleet management, include record keeping of fleet fuel consumption. Quality management uses these figures to steer measures acting on the fleet. This is a way to check whether procurement, driving, and maintenance together have contributed to changes in the fleet's overall consumption.

Fuel economy standards and testing procedures

Australia

From October 2008, all new cars have had to be sold with a sticker on the windscreen showing the fuel consumption and CO2 emissions. Fuel consumption figures are expressed as urban, extra-urban and combined, measured according to ECE Regulations 83 and 101, which are based on the European driving cycle; previously, only the combined number was given.

Australia also uses a star rating system, from one to five stars, that combines greenhouse gases with pollution, rating each from 0 to 10 with ten being best. To get 5 stars a combined score of 16 or better is needed, so a car with a 10 for economy (greenhouse) and a 6 for emissions, or a 6 for economy and a 10 for emissions, or anything in between, would get the highest 5-star rating. The lowest rated car is the Ssangyong Korando with automatic transmission, with one star, while the highest rated was the Toyota Prius hybrid. The Fiat 500, Fiat Punto and Fiat Ritmo as well as the Citroen C3 also received 5 stars. The greenhouse rating depends on the fuel economy and the type of fuel used. A greenhouse rating of 10 requires 60 or fewer grams of CO2 per km, while a rating of zero is more than 440 g/km CO2. The highest greenhouse rating of any 2009 car listed is the Toyota Prius, with 106 g/km CO2 and 4.4 L/100 km (64 mpg‑imp; 53 mpg‑US). Several other cars also received the same greenhouse rating of 8.5. The lowest rated was the Ferrari 575 at 499 g/km CO2 and 21.8 L/100 km (13.0 mpg‑imp; 10.8 mpg‑US). The Bentley also received a zero rating, at 465 g/km CO2. The best fuel economy of any year is the 2004–2005 Honda Insight, at 3.4 L/100 km (83 mpg‑imp; 69 mpg‑US).
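The 5-star rule quoted above (combined greenhouse plus air pollution score of 16 or better out of 20) can be expressed directly. Interpolating the greenhouse score between the stated endpoints (10 at ≤60 g/km CO2, 0 above 440 g/km) is a linear assumption for illustration only; the official scheme uses a published lookup table with half-point steps, so real ratings can differ slightly:

```python
def greenhouse_rating(co2_g_per_km):
    """Approximate greenhouse score on a 0-10 scale.
    Endpoints from the text: 10 at <= 60 g/km, 0 above 440 g/km.
    The linear interpolation (rounded to half points) between them is an
    assumption, not the official lookup table."""
    if co2_g_per_km <= 60:
        return 10.0
    if co2_g_per_km > 440:
        return 0.0
    return round(10.0 * (440 - co2_g_per_km) / (440 - 60) * 2) / 2

def is_five_star(greenhouse, air_pollution):
    """A combined score of 16 or better (each component 0-10) earns 5 stars."""
    return greenhouse + air_pollution >= 16
```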

Canada

Vehicle manufacturers follow a controlled laboratory testing procedure to generate the fuel consumption data that they submit to the Government of Canada. This controlled method of fuel consumption testing, including the use of standardized fuels, test cycles and calculations, is used instead of on-road driving to ensure that all vehicles are tested under identical conditions and that the results are consistent and repeatable.

Selected test vehicles are “run in” for about 6,000 km before testing. The vehicle is then mounted on a chassis dynamometer programmed to take into account the aerodynamic efficiency, weight and rolling resistance of the vehicle. A trained driver runs the vehicle through standardized driving cycles that simulate trips in the city and on the highway. Fuel consumption ratings are derived from the emissions generated during the driving cycles.

The 5-cycle test:
  1. The city test simulates urban driving in stop-and-go traffic with an average speed of 34 km/h and a top speed of 90 km/h. The test runs for approximately 31 minutes and includes 23 stops. The test begins from a cold engine start, which is similar to starting a vehicle after it has been parked overnight during the summer. The final phase of the test repeats the first eight minutes of the cycle but with a hot engine start. This simulates restarting a vehicle after it has been warmed up, driven and then stopped for a short time. Over five minutes of test time are spent idling, to represent waiting at traffic lights. The ambient temperature of the test cell starts at 20 °C and ends at 30 °C.
  2. The highway test simulates a mixture of open highway and rural road driving, with an average speed of 78 km/h and a top speed of 97 km/h. The test runs for approximately 13 minutes and does not include any stops. The test begins from a hot engine start. The ambient temperature of the test cell starts at 20 °C and ends at 30 °C.
  3. In the cold temperature operation test, the same driving cycle is used as in the standard city test, except that the ambient temperature of the test cell is set to −7 °C.
  4. In the air conditioning test, the ambient temperature of the test cell is raised to 35 °C. The vehicle's climate control system is then used to lower the internal cabin temperature. Starting with a warm engine, the test averages 35 km/h and reaches a maximum speed of 88 km/h. Five stops are included, with idling occurring 19% of the time.
  5. The high speed/quick acceleration test averages 78 km/h and reaches a top speed of 129 km/h. Four stops are included and brisk acceleration maximizes at a rate of 13.6 km/h per second. The engine begins warm and air conditioning is not used. The ambient temperature of the test cell is constantly 25 °C.
Tests 1, 3, 4, and 5 are averaged to create the city driving fuel consumption rate.

Tests 2, 4, and 5 are averaged to create the highway driving fuel consumption rate.
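Since consumption is reported in L/100 km (fuel per fixed distance), rates over cycles of comparable weight can be combined with an arithmetic mean. The sketch below assumes equal weighting of the listed tests for simplicity; Natural Resources Canada applies its own published weighting factors:

```python
def mean(rates):
    """Arithmetic mean of consumption rates in L/100 km."""
    return sum(rates) / len(rates)

def city_rating(city, cold, ac, high_speed):
    """City figure: tests 1, 3, 4 and 5 combined (L/100 km),
    assuming equal weights for illustration."""
    return mean([city, cold, ac, high_speed])

def highway_rating(highway, ac, high_speed):
    """Highway figure: tests 2, 4 and 5 combined (L/100 km)."""
    return mean([highway, ac, high_speed])
```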

Europe

Irish fuel economy label.

In the European Union, passenger vehicles are commonly tested using two drive cycles, and corresponding fuel economies are reported as 'urban' and 'extra-urban', in litres per 100 km and (in the UK) in miles per imperial gallon.

The urban economy is measured using the test cycle known as ECE-15, first introduced in 1970 by EC Directive 70/220/EEC and finalized by EEC Directive 90/C81/01 in 1999. It simulates a 4,052 m (2.518 mi) urban trip at an average speed of 18.7 km/h (11.6 mph) and a maximum speed of 50 km/h (31 mph).

The extra-urban driving cycle or EUDC lasts 400 seconds (6 minutes 40 seconds) at an average speed 62.6 km/h (39 mph) and a top speed of 120 km/h (74.6 mph).

EU fuel consumption numbers are often considerably lower than corresponding US EPA test results for the same vehicle. For example, the 2011 Honda CR-Z with a six-speed manual transmission is rated 6.1/4.4 L/100 km in Europe and 7.6/6.4 L/100 km (31/37 mpg‑US) in the United States.

In the European Union, advertising has to show carbon dioxide (CO2) emission and fuel consumption data in a clear way, as described in UK Statutory Instrument 2004 No. 1661. Since September 2005 a color-coded "Green Rating" sticker has been available in the UK, which rates fuel economy by CO2 emissions: A: ≤ 100 g/km, B: 100–120, C: 121–150, D: 151–165, E: 166–185, F: 186–225, and G: 226+. Depending on the type of fuel used, for gasoline A corresponds to about 4.1 L/100 km (69 mpg‑imp; 57 mpg‑US) and G to about 9.5 L/100 km (30 mpg‑imp; 25 mpg‑US). Ireland has a very similar label, but the ranges are slightly different, with A: ≤ 120 g/km, B: 121–140, C: 141–155, D: 156–170, E: 171–190, F: 191–225, and G: 226+.
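The UK band boundaries above map mechanically onto CO2 figures; a small sketch of the lookup:

```python
# Band upper bounds in g/km CO2, matching the UK "Green Rating" sticker bands
UK_BANDS = [(100, "A"), (120, "B"), (150, "C"), (165, "D"),
            (185, "E"), (225, "F")]

def uk_co2_band(g_per_km):
    """Return the UK color-coded rating band for a CO2 emission figure."""
    for upper, band in UK_BANDS:
        if g_per_km <= upper:
            return band
    return "G"  # 226 g/km and above
```

The same function covers the Irish label by swapping in its slightly different upper bounds.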

In the UK, the ASA (Advertising Standards Authority) has claimed that fuel consumption figures are misleading. This is often the case with European vehicles, as the MPG (miles per gallon) figures that can be advertised are often not the same as those achieved in 'real world' driving.

The ASA has said that car manufacturers can use 'cheats' to prepare their vehicles for their compulsory fuel efficiency and emissions tests in a way designed to make them look as 'clean' as possible. This practice is common in petrol and diesel vehicle tests, but hybrid and electric vehicles are not immune, as manufacturers apply these techniques to fuel efficiency tests as well.

Car experts also assert that the official MPG figures given by manufacturers do not represent the true MPG values from real-world driving. Websites have been set up to show the real-world MPG figures, based on crowd-sourced data from real users, vs the official MPG figures.

The major loopholes in the current EU tests allow car manufacturers a number of ‘cheats’ to improve results. Car manufacturers can:
  • Disconnect the alternator, thus no energy is used to recharge the battery;
  • Use special lubricants that are not used in production cars, in order to reduce friction;
  • Turn off all electrical accessories, e.g. air conditioning and radio;
  • Adjust brakes or even disconnect them to reduce friction;
  • Tape up cracks between body panels and windows to reduce air resistance;
  • Remove wing mirrors.
According to the results of a 2014 study by the International Council on Clean Transportation (ICCT), the gap between official and real-world fuel-economy figures in Europe has risen to about 38% in 2013 from 10% in 2001. The analysis found that for private cars, the difference between on-road and official CO2 values rose from around 8% in 2001 to 31% in 2013, and 45% for company cars in 2013. The report is based on data from more than half a million private and company vehicles across Europe. The analysis was prepared by the ICCT together with the Netherlands Organisation for Applied Scientific Research (TNO), and the German Institut für Energie- und Umweltforschung Heidelberg (IFEU).

In the 2018 update of the ICCT data, the difference between the official and real-world figures was again 38%.

Japan

The evaluation criteria used in Japan reflect commonly found driving conditions, as the typical Japanese driver does not drive as fast as drivers in other regions.

10–15 mode

The 10–15 mode driving cycle test is the official fuel economy and emission certification test for new light-duty vehicles in Japan. Fuel economy is expressed in km/L (kilometers per liter) and emissions in g/km. The test is carried out on a dynamometer and consists of 25 segments covering idling, acceleration, steady running and deceleration, simulating typical Japanese urban and/or expressway driving conditions. The running pattern begins with a warm start, lasts for 660 seconds (11 minutes) and runs at speeds up to 70 km/h (43.5 mph). Including the initial 15 mode segment, the cycle covers 6.34 km (3.9 mi) with an average speed of 25.6 km/h (15.9 mph) and a duration of 892 seconds (14.9 minutes).

JC08

A new, more demanding test, called the JC08, was established in December 2006 for Japan's new standard taking effect in 2015, but it is already being used by several car manufacturers for new cars. The JC08 test is significantly longer and more rigorous than the 10–15 mode test. The running pattern of the JC08 stretches out to 1,200 seconds (20 minutes), there are both cold and warm start measurements, and the top speed is 82 km/h (51.0 mph). The economy ratings of the JC08 are lower than those of the 10–15 mode cycle, but they are expected to be closer to real-world figures. The Toyota Prius became the first car to meet Japan's new 2015 Fuel Economy Standards measured under the JC08 test.

New Zealand

Starting on 7 April 2008, all cars of up to 3.5 tonnes GVW sold other than by private sale must carry a fuel economy sticker (if available) showing a rating from one half star to six stars, with the most economical cars having the most stars and the most fuel-hungry the least, along with the fuel economy in L/100 km and the estimated annual fuel cost for driving 14,000 km (at present fuel prices). The stickers must also appear on vehicles leased for more than 4 months. All new cars currently rated range from 6.9 L/100 km (41 mpg‑imp; 34 mpg‑US) to 3.8 L/100 km (74 mpg‑imp; 62 mpg‑US), and received from 4.5 to 5.5 stars respectively.
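The annual fuel cost printed on the New Zealand label follows directly from the L/100 km figure and the 14,000 km assumption; the fuel price in the example is a hypothetical placeholder, not a figure from the article:

```python
def annual_fuel_cost(l_per_100km, price_per_liter, annual_km=14_000):
    """Estimated annual fuel cost as shown on the NZ label: liters burned
    over the assumed 14,000 km, times the pump price (an input, since the
    label uses whatever the present price is)."""
    liters_per_year = l_per_100km * annual_km / 100.0
    return liters_per_year * price_per_liter
```

At 6.9 L/100 km and a hypothetical NZ$2.00/L, this gives 966 L and about NZ$1,932 per year.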

Saudi Arabia

The Kingdom of Saudi Arabia announced new light-duty vehicle fuel economy standards in November 2014, which became effective January 1, 2016 and will be fully phased in by January 1, 2018. A review of the targets will be carried out by December 2018, at which time targets for 2021–2025 will be set.

United States

Motor vehicle fuel economy from 1966 to 2008.

US Energy Tax Act

The Energy Tax Act of 1978 in the US established a gas guzzler tax on the sale of new model year vehicles whose fuel economy fails to meet certain statutory levels. The tax applies only to cars (not trucks) and is collected by the IRS. Its purpose is to discourage the production and purchase of fuel-inefficient vehicles. The tax was phased in over ten years with rates increasing over time. It applies only to manufacturers and importers of vehicles, although presumably some or all of the tax is passed along to automobile consumers in the form of higher prices. Only new vehicles are subject to the tax, so no tax is imposed on used car sales. The tax is graduated to apply a higher tax rate for less-fuel-efficient vehicles. To determine the tax rate, manufacturers test all the vehicles at their laboratories for fuel economy. The US Environmental Protection Agency confirms a portion of those tests at an EPA lab.

In some cases, this tax may apply only to certain variants of a given model; for example, the 2004–2006 Pontiac GTO (captive import version of the Holden Monaro) did incur the tax when ordered with the four-speed automatic transmission, but did not incur the tax when ordered with the six-speed manual transmission.

EPA testing procedure through 2007

The "city" or Urban Dynamometer Driving Schedule (UDDS) used in the EPA Federal Test Procedure
 
The Highway Fuel Economy Driving Cycle (HWFET) used in the EPA Federal Test Procedure
 
Two separate fuel economy tests simulate city driving and highway driving. The "city" driving program, the Urban Dynamometer Driving Schedule (UDDS) or FTP-72, is defined in 40 C.F.R. 86 App I; it starts with a cold engine and makes 23 stops over a period of 31 minutes, for an average speed of 20 mph (32 km/h) and a top speed of 56 mph (90 km/h).

The "highway" program or Highway Fuel Economy Driving Schedule (HWFET) is defined in 40 C.F.R. 600 App I and uses a warmed-up engine and makes no stops, averaging 48 mph (77 km/h) with a top speed of 60 mph (97 km/h) over a 10-mile (16 km) distance. The measurements are then adjusted downward by 10% (city) and 22% (highway) to more accurately reflect real-world results. A weighted average of city (55%) and highway (45%) fuel economies is used to determine the guzzler tax.
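The 55/45 weighting described above is applied to fuel consumption (the reciprocal of MPG) rather than to the MPG figures themselves, so the combined figure is a weighted harmonic mean. A sketch of that calculation, assuming that weighting scheme (the example ratings are hypothetical):

```python
def combined_mpg(city_mpg: float, highway_mpg: float) -> float:
    """Combine city and highway ratings with 55/45 weights.

    The weights are applied to fuel consumption (gallons per mile),
    making this a weighted harmonic mean of the two MPG figures.
    """
    return 1.0 / (0.55 / city_mpg + 0.45 / highway_mpg)

# Hypothetical example: a car rated 20 MPG city, 30 MPG highway
print(round(combined_mpg(20, 30), 1))  # → 23.5
```

Note that the harmonic weighting gives a lower combined figure (23.5) than a plain arithmetic average of the two ratings (24.5) would, because fuel burned, not MPG, is what accumulates over miles driven.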

The procedure has been updated to FTP-75, adding a "hot start" cycle which repeats the "cold start" cycle after a 10-minute pause.

Because EPA figures had almost always indicated better efficiency than drivers achieved in the real world, the EPA modified the method starting with the 2008 model year. Updated estimates are available for vehicles back to the 1985 model year.

EPA testing procedure: 2008 and beyond

2008 Monroney sticker highlights fuel economy.
 
The US EPA altered the testing procedure effective MY2008, adding three new Supplemental Federal Test Procedure (SFTP) tests to capture the influence of higher driving speeds, harder acceleration, colder temperatures and air conditioning use.

SFTP US06 is a high speed/quick acceleration loop that lasts 10 minutes, covers 8 miles (13 km), averages 48 mph (77 km/h) and reaches a top speed of 80 mph (130 km/h). Four stops are included, and brisk acceleration peaks at 8.46 mph (13.62 km/h) per second. The engine begins warm and air conditioning is not used. Ambient temperature varies between 68 °F (20 °C) and 86 °F (30 °C).

SFTP SC03 is the air conditioning test, which raises the ambient temperature to 95 °F (35 °C) and puts the vehicle's climate control system to use. Lasting 9.9 minutes, the 3.6-mile (5.8 km) loop averages 22 mph (35 km/h) and reaches a top speed of 54.8 mph (88.2 km/h). Five stops are included, idling occurs 19 percent of the time, and acceleration reaches 5.1 mph per second. Engine temperatures begin warm.

Lastly, a cold temperature cycle uses the same parameters as the current city loop, except that ambient temperature is set to 20 °F (−7 °C).

EPA tests for fuel economy do not include electrical load tests beyond climate control, which may account for some of the discrepancy between EPA and real-world fuel efficiency. A 200 W electrical load can produce a 0.4 km/L (0.94 mpg) reduction in efficiency on the FTP 75 cycle test.

Electric vehicles and hybrids

2010 Monroney sticker for a plug-in hybrid showing fuel economy in all-electric mode and gas-only mode.
 
Following the efficiency claims made for vehicles such as the Chevrolet Volt and Nissan Leaf, the National Renewable Energy Laboratory recommended using EPA's new vehicle fuel efficiency formula, which gives different values depending on the fuel used. In November 2010 the EPA introduced the first fuel economy ratings on Monroney stickers for plug-in electric vehicles.

For the fuel economy label of the Chevy Volt plug-in hybrid, EPA rated the car separately for all-electric mode, expressed in miles per gallon gasoline equivalent (MPG-e), and for gasoline-only mode, expressed in conventional miles per gallon. EPA also estimated an overall combined city/highway gas-electricity fuel economy rating, likewise expressed in MPG-e. The label includes a table showing fuel economy and electricity consumed for five scenarios: 30 miles (48 km), 45 miles (72 km), 60 miles (97 km) and 75 miles (121 km) driven between full charges, plus a never-charge (gasoline-only) scenario. This information was included to make consumers aware of how much the fuel economy outcome varies with the miles driven between charges. For electric-only mode, the estimated energy consumption in kWh per 100 miles (160 km) is also shown.

2010 Monroney label showing the EPA's combined city/highway fuel economy equivalent for an all-electric car
 
For the fuel economy label of the Nissan Leaf electric car EPA rated the combined fuel economy in terms of miles per gallon gasoline equivalent, with a separate rating for city and highway driving. This fuel economy equivalence is based on the energy consumption estimated in kWh per 100 miles, and also shown in the Monroney label.

In May 2011, the National Highway Traffic Safety Administration (NHTSA) and EPA issued a joint final rule establishing new requirements for a fuel economy and environment label that is mandatory for all new passenger cars and trucks starting with model year 2013, and voluntary for 2012 models. The ruling includes new labels for alternative fuel and alternative propulsion vehicles available in the US market, such as plug-in hybrids, electric vehicles, flexible-fuel vehicles, hydrogen fuel cell vehicles, and natural gas vehicles. The common fuel economy metric adopted to allow the comparison of alternative fuel and advanced technology vehicles with conventional internal combustion engine vehicles is miles per gallon of gasoline equivalent (MPGe). A gallon of gasoline equivalent means the number of kilowatt-hours of electricity, cubic feet of compressed natural gas (CNG), or kilograms of hydrogen that is equal to the energy in a gallon of gasoline.
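For electricity, the MPGe arithmetic is a straightforward unit substitution. A sketch, assuming the commonly cited EPA figure of 33.7 kWh of electricity per US gallon of gasoline (the example consumption value is hypothetical):

```python
# Assumption: EPA equates 33.7 kWh of electricity with the energy
# content of one US gallon of gasoline.
KWH_PER_GALLON = 33.7

def mpge(kwh_per_100mi: float) -> float:
    """Convert electricity consumption (kWh per 100 miles) to MPGe."""
    gallons_equivalent_per_100mi = kwh_per_100mi / KWH_PER_GALLON
    return 100.0 / gallons_equivalent_per_100mi

# Hypothetical EV using 34 kWh per 100 miles
print(round(mpge(34)))  # → 99
```

The same pattern applies to CNG or hydrogen: divide the fuel used per 100 miles by its gasoline-gallon-equivalent quantity, then take the reciprocal times 100.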

The new labels also include, for the first time, an estimate of how much fuel or electricity it takes to drive 100 miles (160 km), giving US consumers fuel consumption per distance traveled, the metric commonly used in many other countries. EPA explained that the objective is to avoid the potentially misleading traditional miles-per-gallon metric, the source of the so-called "MPG illusion". The illusion arises because cost (equivalently, volume of fuel consumed) per unit distance is the reciprocal of the MPG value, and that relationship is non-linear: equal differences in MPG do not correspond to equal differences in fuel consumption, so only ratios of MPG values are directly meaningful. Many consumers are unaware of this and compare MPG values by subtracting them, which can give a misleading picture of relative fuel economy between pairs of vehicles. For instance, an increase from 10 to 20 MPG is a 100% improvement in fuel economy, whereas an increase from 50 to 60 MPG is only a 20% improvement, although both differences are 10 MPG. The EPA explained that the new gallons-per-100-miles metric provides a more accurate measure of fuel efficiency; notably, it is directly analogous to the usual metric measure of fuel economy, liters per 100 kilometers (L/100 km).
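The non-linearity behind the MPG illusion is easy to demonstrate numerically: gallons per 100 miles is the additive metric, while equal MPG differences hide very unequal fuel savings. A small illustration:

```python
def gallons_per_100mi(mpg: float) -> float:
    """Fuel consumed over 100 miles; this is the linear (additive) metric."""
    return 100.0 / mpg

# Two "identical" 10-MPG gains compared on both scales:
for low, high in [(10, 20), (50, 60)]:
    saved = gallons_per_100mi(low) - gallons_per_100mi(high)
    pct = (high - low) / low * 100
    print(f"{low} -> {high} MPG: {pct:.0f}% improvement, "
          f"saves {saved:.2f} gallons per 100 miles")
# 10 -> 20 MPG: 100% improvement, saves 5.00 gallons per 100 miles
# 50 -> 60 MPG: 20% improvement, saves 0.33 gallons per 100 miles
```

Over the same distance driven, the 10-to-20 MPG upgrade saves about fifteen times as much fuel as the 50-to-60 MPG upgrade, even though both are "10 MPG better."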

CAFE standards

Curve of average car mileage for model years between 1978–2014

The Corporate Average Fuel Economy (CAFE) regulations in the United States, first enacted by Congress in 1975, are federal regulations intended to improve the average fuel economy of cars and light trucks (trucks, vans and sport utility vehicles) sold in the US in the wake of the 1973 Arab Oil Embargo. Historically, it is the sales-weighted average fuel economy of a manufacturer's fleet of current model year passenger cars or light trucks, manufactured for sale in the United States. Under Truck CAFE standards 2008–2011 this changes to a "footprint" model where larger trucks are allowed to consume more fuel. The standards were limited to vehicles under a certain weight, but those weight classes were expanded in 2011.

State regulations

The Clean Air Act of 1970 prohibited states from establishing their own air pollution standards. However, the legislation authorized the EPA to grant a waiver to California, allowing the state to set higher standards. The law provides a “piggybacking” provision that allows other states to adopt vehicle emission limits that are the same as California’s. California’s waivers were routinely granted until 2007, when the Bush administration rejected the state’s bid to adopt global warming pollution limits for cars and light trucks. California and 15 other states that were trying to put in place the same emissions standards sued in response. The case was tied up in court until the administration of Barack Obama, which in 2009 reversed the Bush administration’s decision by granting the waiver.

In April 2018, EPA Administrator Scott Pruitt announced that the Trump administration planned to roll back the federal fuel economy standards put in place in 2012 and that it would also seek to curb California’s authority to set its own standards. However, the Trump administration is reportedly also in talks with state officials to develop a compromise that would allow the state and national standards to stay in place.
 

Is the Insect Apocalypse Really Upon Us?


In 1828, a teenager named Charles Darwin opened a letter to his cousin with “I am dying by inches, from not having anybody to talk to about insects.” Almost two centuries on, Darwin would probably be thrilled and horrified: People are abuzz about insects, but their discussions are flecked with words such as apocalypse and Armageddon.

The drumbeats of doom began in late 2017, after a German study showed that the total mass of local flying insects had fallen by 80 percent in three decades. The alarms intensified after The New York Times Magazine published a masterful feature on the decline of insect life late last year. And panic truly set in this month when the researchers Francisco Sánchez-Bayo and Kris Wyckhuys, having reviewed dozens of studies, claimed that “insects as a whole will go down the path of extinction in a few decades.” The Guardian, in covering the duo’s review, wrote that “insects could vanish within a century”—a crisis that Sánchez-Bayo and Wyckhuys believe could lead to a “catastrophic collapse of nature’s ecosystems.”

I spoke with several entomologists about whether these claims are valid, and what I found was complicated. The data on insect declines are too patchy, unrepresentative, and piecemeal to justify some of the more hyperbolic alarms. At the same time, what little information we have tends to point in the same worrying direction. How, then, should we act on that imperfect knowledge? It’s a question that goes beyond the fate of insects: How do we preserve our rapidly changing world when the unknowns are vast and the cost of inaction is potentially high?
 
First, some good news: The claim that insects will all be annihilated within the century is absurd. Almost everyone I spoke with says that it’s not even plausible, let alone probable. “Not going to happen,” says Elsa Youngsteadt from North Carolina State University. “They’re the most diverse group of organisms on the planet. Some of them will make it.” Indeed, insects of some sort are likely to be the last ones standing. Any event sufficiently catastrophic to scour the world of insects would also render it inhospitable to other animal life. “If it happened, humans would no longer be on the planet,” says Corrie Moreau from Cornell University.

The sheer diversity of insects makes them, as a group, resilient—but also impossible to fully comprehend. There are more species of ladybugs than mammals, of ants than birds, of weevils than fish. There are probably more species of parasitic wasps than of any other group of animal. In total, about 1 million insect species have been described, and untold millions await discovery. And having learned of a creature’s existence is very different from actually knowing it: Most of the identified species are still mysterious in their habits, their proclivities, and—crucially for this discussion—their numbers.

Few researchers have kept running tallies on insect populations, aside from a smattering of species that are charismatic (monarch butterflies), commercially important (domesticated honeybees), or medically relevant (some mosquitoes). Society still has a lingering aversion toward creepy crawlies, and entomological research has long been underfunded. Where funds exist, they’ve been disproportionately channeled toward ways of controlling agricultural pests. The basic business of documenting insect diversity has been comparatively neglected, a situation made worse by the decline of taxonomists—species-spotting scientists who, ironically, have undergone their own mass extinction.

When scientists have collected long-term data on insects, they’ve usually done so in a piecemeal way. The 2017 German study, for example, collated data from traps that had been laid in different parts of the country over time, rather than from concerted attempts to systematically sample the same sites. Haphazard though such studies might be, many of them point in the same dispiriting direction. In their review, Sánchez-Bayo and Wyckhuys found 73 studies showing insect declines.

But that’s what they went looking for! They searched a database using the keywords insect and decline, and so wouldn’t have considered research showing stability or increases. The studies they found aren’t representative either: Most were done in Europe and North America, and the majority of insects live in the tropics. This spotty geographical spread makes it hard to know if insects are disappearing from some areas but recovering or surging in others. And without “good baselines for population sizes,” says Jessica Ware from Rutgers University, “when we see declines, it’s hard to know if this is something that happens all the time.”

It’s as if “our global climate dataset only involved 73 weather stations, mostly in Europe and the United States, active over different historical time windows,” explained Alex Wild from the University of Texas at Austin on Twitter. “Imagine that only some of those stations measured temperature. Others, only humidity. Others, only wind direction. Trying to cobble those sparse, disparate points into something resembling a picture of global trends is ambitious, to say the least.”

For those reasons, it’s hard to take the widely quoted numbers from Sánchez-Bayo and Wyckhuys’s review as gospel. They say that 41 percent of insect species are declining and that global numbers are falling by 2.5 percent a year, but “they’re trying to quantify things that we really can’t quantify at this point,” says Michelle Trautwein from the California Academy of Sciences. “I understand the desire to put numbers to these things to facilitate the conversation, but I would say all of those are built on mountains of unknown facts.”

Still, “our approach shouldn’t be to downplay these findings to console ourselves,” Trautwein adds. “I don’t see real danger in overstating the possible severity of insect decline, but there is real danger in underestimating how bad things really are. These studies aren’t perfect, but we’d be wise to heed this warning now instead of waiting for cleaner studies.”

After all, the factors that are probably killing off insects in Europe and North America, such as the transformation of wild spaces into agricultural land, are global problems. “I don’t see how those drivers would have a different outcome in a different area, whether we know the fauna there well or not,” says Jennifer Zaspel from the Milwaukee Public Museum.

Insects, though diverse, are also particularly vulnerable to such changes because many of them are so specialized, says May Berenbaum from the University of Illinois at Urbana-Champaign. “There’s a fly that lives in the gills of a crab on one Caribbean island,” she says. “So what happens if the island goes, or the crab goes? That’s the kind of danger that insects face. Very few of them can opportunistically exploit a broad diversity of habitats and supplies.” (That said, Sánchez-Bayo and Wyckhuys concluded that several once-common generalist species are declining, too.)

The loss of even a small percent of insects might also be disproportionately consequential. They sit at the base of the food web; if they go down, so will many birds, bats, spiders, and other predators. They aerate soils, pollinate plants, and remove dung and cadavers; if they disappear, entire landscapes will change. Given these risks, “do we wait to have definitive evidence that species are disappearing before we do something?” Berenbaum asks.

Doing something is hard, though, because insect declines have so many factors, and most studies struggle to tease them apart. In their review, Sánchez-Bayo and Wyckhuys point the finger at habitat loss above all else, followed by pesticides and other pollutants, introduced species, and climate change, in that order. “If it was one thing, we’d know what to do,” says Moreau from Cornell. Instead, we are stuck trying to tend to 1 million smaller cuts.

At least people are talking about the problem—a recent trend that surprised many of the entomologists I spoke with, who are more used to defending their interests to a creeped-out public. “Since when do people care about insects?” Berenbaum says. “I’m staggered by this!” She hopes that the apocalypse headlines will motivate people to take part in citizen-science projects, such as the BeeSpotter initiative she runs in Illinois. “There’s a huge amount of diversity, but we can divide up the work,” she says.

Youngsteadt of North Carolina State is also confused by the sudden flux of interest, but it has meant a lot of invitations from community groups that want her to talk about the declines. She advises them to plant their gardens with native flowers, which promote a wider diversity of insects than neatly manicured lawns. Many people heed that advice to save beautiful species such as monarchs, “but are shocked by all the bugs that come over,” Moreau says. “They’ll see flies, bees, other caterpillars. They start appreciating the whole realm of insects out there. Going from ‘Ew!’ to ‘I’ve heard they’re in trouble; what can I do?’ is a good thing.”

She and others hope that this newfound attention will finally persuade funding agencies to support the kind of research that has been sorely lacking—systematic, long-term, widespread censuses of all the major insect groups. “Now more than ever, we should be trying to collect baseline data,” Ware says. “That would allow us to see patterns if there really are any, and make better predictions.” Zaspel would also love to see more support for natural-history museums: The specimens pinned within their drawers can provide irreplaceable information about historical populations, but digitizing that information is expensive and laborious.

“We should get serious about figuring out how bad the situation really is,” Trautwein says. “This should be a huge wake-up call, and we should get on the ball instead of quibbling.”

Uranium mining

From Wikipedia, the free encyclopedia

2012 uranium mining, by nation.
 
World Uranium production in 2005.

Uranium mining is the process of extraction of uranium ore from the ground. The worldwide production of uranium in 2015 amounted to 60,496 tonnes. Kazakhstan, Canada, and Australia are the top three producers and together account for 70% of world uranium production. Other important uranium-producing countries, each producing in excess of 1,000 tonnes per year, are Niger, Russia, Namibia, Uzbekistan, China, the United States and Ukraine. Uranium from mining is used almost entirely as fuel for nuclear power plants.

Uranium ores are normally processed by grinding the ore to a uniform particle size and then chemically leaching it to extract the uranium. The milling process commonly yields a dry powder of natural uranium, "yellowcake," which is sold on the uranium market as U3O8.

History

Uranium minerals were noticed by miners for a long time prior to the discovery of uranium in 1789. The uranium mineral pitchblende, also known as uraninite, was reported from the Krušné hory (Ore Mountains), Saxony, as early as 1565. Other early reports of pitchblende date from 1727 in Jáchymov and 1763 in Schwarzwald.

In the early 19th century, uranium ore was recovered as a byproduct of mining in Saxony, Bohemia, and Cornwall. The first deliberate mining of radioactive ores took place in Jáchymov, a silver-mining city in the Czech Republic. Marie Curie used pitchblende ore from Jáchymov to isolate the element radium, a decay product of uranium. Until World War II uranium was mined primarily for its radium content; some carnotite deposits were mined primarily for their vanadium content. Sources for radium, contained in the uranium ore, were sought for use as luminous paint for watch dials and other instruments, as well as for health-related applications, some of which in retrospect were certainly harmful. The byproduct uranium was used mostly as a yellow pigment.

In the United States, the first radium/uranium ore was discovered in 1871 in gold mines near Central City, Colorado. This district produced about 50 tons of high grade ore between 1871 and 1895. However, most American uranium ore before World War II came from vanadium deposits on the Colorado Plateau of Utah and Colorado. 

In Cornwall, the South Terras Mine near St. Stephen opened for uranium production in 1873, and produced about 175 tons of ore before 1900. Other early uranium mining occurred in Autunois in France's Massif Central, Oberpfalz in Bavaria, and Billingen in Sweden. 

The Shinkolobwe deposit in Katanga, Belgian Congo (now Shaba Province, Democratic Republic of the Congo), was discovered in 1913 and exploited by the Union Minière du Haut Katanga. Other important early deposits include Port Radium, near Great Bear Lake, Canada, discovered in 1931, along with Beira Province, Portugal; Tyuya Muyun, Uzbekistan; and Radium Hill, Australia.

Because of the need for the uranium for bomb research during World War II, the Manhattan Project used a variety of sources for the element. The Manhattan Project initially purchased uranium ore from the Belgian Congo, through the Union Minière du Haut Katanga. Later the project contracted with vanadium mining companies in the American Southwest. Purchases were also made from the Eldorado Mining and Refining Limited company in Canada. This company had large stocks of uranium as waste from its radium refining activities.

American uranium ores mined in Colorado were mixed ores of vanadium and uranium, but because of wartime secrecy, the Manhattan Project would publicly admit only to purchasing the vanadium, and did not pay the uranium miners for the uranium content. In a much later lawsuit, many miners were able to reclaim lost profits from the U.S. government. American ores had much lower uranium concentrations than the ore from the Belgian Congo, but they were pursued vigorously to ensure nuclear self-sufficiency. 

Similar efforts were undertaken in the Soviet Union, which did not have native stocks of uranium when it started developing its own atomic weapons program. 

Intensive exploration for uranium started after the end of World War II as a result of the military and civilian demand for uranium. There were three separate periods of uranium exploration or "booms": from 1956 to 1960, 1967 to 1971, and 1976 to 1982.

In the 20th century the United States was the world's largest uranium producer. Grants Uranium District in northwestern New Mexico was the largest United States uranium producer. The Gas Hills Uranium District was the second largest uranium producer. The famous Lucky Mc Mine is located in the Gas Hills near Riverton, Wyoming. Canada has since surpassed the United States as the cumulative largest producer in the world. In 1990, 55% of world production came from underground mines, but this shrank to 33% by 1999. From 2000, new Canadian mines again increased the proportion of underground mining, and with Olympic Dam it is now 37%. In situ leach (ISL, or ISR) mining has been steadily increasing its share of the total, mainly due to Kazakhstan.

Types of uranium deposits

Many different types of uranium deposits have been discovered and mined. Three main types are unconformity-type deposits, paleoplacer deposits, and sandstone-type deposits, also known as roll-front deposits.

Uranium deposits are classified into 15 categories according to their geological setting and the type of rock in which they are found. This geological classification system is determined by the International Atomic Energy Agency (IAEA).

Uranium deposits in sedimentary rock

The Mi Vida uranium mine, near Moab, Utah. Note alternating red and white/green sandstone. This type of uranium deposit is easier and cheaper to mine than the other types because the uranium is found not far from the surface of the crust.
 
Uranium deposits in sedimentary rocks include those in sandstone (in Canada and the western US), Precambrian unconformities (in Canada), phosphate, Precambrian quartz-pebble conglomerate, collapse breccia pipes, and calcrete.

Sandstone uranium deposits are generally of two types. Roll-front type deposits occur at the boundary between the up dip and oxidized part of a sandstone body and the deeper down dip reduced part of a sandstone body. Peneconcordant sandstone uranium deposits, also called Colorado Plateau-type deposits, most often occur within generally oxidized sandstone bodies, often in localized reduced zones, such as in association with carbonized wood in the sandstone.

Precambrian quartz-pebble conglomerate-type uranium deposits occur only in rocks older than two billion years old. The conglomerates also contain pyrite. These deposits have been mined in the Blind River-Elliot Lake district of Ontario, Canada, and from the gold-bearing Witwatersrand conglomerates of South Africa.

Unconformity-type deposits make up about 33% of the World Outside Centrally Planned Economies Areas (WOCA)'s uranium deposits.

Igneous or hydrothermal uranium deposits

Hydrothermal uranium deposits encompass the vein-type uranium ores. Igneous deposits include nepheline syenite intrusives at Ilimaussaq, Greenland; the disseminated uranium deposit at Rossing, Namibia; and uranium-bearing pegmatites. Disseminated deposits are also found in the states of Washington and Alaska in the US.

Breccia uranium deposits

Breccia uranium deposits are found in rocks that have been broken by tectonic fracturing or weathering. They are most common in India, Australia and the United States.
The Olympic Dam mine in South Australia, currently owned by BHP Billiton, exploits the world's largest known uranium deposit.

Exploration

Uranium prospecting is similar to other forms of mineral exploration with the exception of some specialized instruments for detecting the presence of radioactive isotopes. 

The Geiger counter was the original radiation detector, recording the total count rate from all energy levels of radiation. Ionization chambers and Geiger counters were first adapted for field use in the 1930s. The first transportable Geiger–Müller counter (weighing 25 kg) was constructed at the University of British Columbia in 1932. H.V. Ellsworth of the GSC built a lighter, more practical unit in 1934. Subsequent models were the principal instruments used for uranium prospecting for many years, until Geiger counters were replaced by scintillation counters.

The use of airborne detectors to prospect for radioactive minerals was first proposed by G.C. Ridland, a geophysicist working at Port Radium, in 1943. In 1947, the earliest recorded trial of airborne radiation detectors (ionization chambers and Geiger counters) was conducted by Eldorado Mining and Refining Limited (a Canadian Crown corporation since sold to become Cameco Corporation). The first patent for a portable gamma-ray spectrometer was filed by Professors Pringle, Roulston and Brownell of the University of Manitoba in 1949, the same year they tested the first portable scintillation counter on the ground and in the air in northern Saskatchewan.

Airborne gamma-ray spectrometry is now the accepted leading technique for uranium prospecting, with worldwide applications in geological mapping, mineral exploration and environmental monitoring. When used specifically for uranium measurement and prospecting, it must account for a number of factors, such as the distance between the source and the detector and the scattering of radiation through the minerals, the surrounding earth and even the air. In Australia, a Weathering Intensity Index based on Shuttle Radar Topography Mission (SRTM) elevation data and airborne gamma-ray spectrometry images has been developed to help prospectors.

A deposit of uranium, discovered by geophysical techniques, is evaluated and sampled to determine the amounts of uranium materials that are extractable at specified costs from the deposit. Uranium reserves are the amounts of ore that are estimated to be recoverable at stated costs.

Mining techniques

As with other types of hard rock mining there are several methods of extraction. In 2012, the percentage of the mined uranium produced by each mining method was: in-situ leach (44.9 percent), underground mining (26.2 percent), open pit (19.9 percent), and heap leaching (1.7 percent). The remaining 7.3% was derived as a byproduct of mining for other minerals, and miscellaneous recovery.

Open pit

Rössing open pit uranium mine, Namibia
 
In open pit mining, overburden is removed by drilling and blasting to expose the ore body, which is then mined by blasting and excavation using loaders and dump trucks. Workers spend much of their time in enclosed cabins, limiting their exposure to radiation. Water is used extensively to suppress airborne dust levels.

Underground uranium mining

If the uranium is too far below the surface for open pit mining, an underground mine might be used, with tunnels and shafts dug to access and remove uranium ore. Less waste material is removed from underground mines than from open pit mines; however, this type of mining exposes underground workers to the highest levels of radon gas. 

Underground uranium mining is in principle no different from any other hard rock mining and other ores are often mined in association (e.g., copper, gold, silver). Once the ore body has been identified a shaft is sunk in the vicinity of the ore veins, and crosscuts are driven horizontally to the veins at various levels, usually every 100 to 150 meters. Similar tunnels, known as drifts, are driven along the ore veins from the crosscut. To extract the ore, the next step is to drive tunnels, known as raises when driven upwards and winzes when driven downwards, through the deposit from level to level. Raises are subsequently used to develop the stopes where the ore is mined from the veins. 

The stope, which is the workshop of the mine, is the excavation from which the ore is extracted. Two methods of stope mining are commonly used. In the "cut and fill" or open stoping method, the space remaining following removal of ore after blasting is filled with waste rock and cement. In the "shrinkage" method, only sufficient broken ore is removed via the chutes below to allow miners working from the top of the pile to drill and blast the next layer to be broken off, eventually leaving a large hole. Another method, known as room and pillar, is used for thinner, flatter ore bodies. In this method the ore body is first divided into blocks by intersecting drives, removing ore while so doing, and then systematically removing the blocks, leaving enough ore for roof support. 

The health effects of radon exposure in unventilated uranium mines prompted a switch away from underground tunnel mining towards open-cut and in-situ leaching methods, which do not produce the same occupational hazards or mine tailings as conventional mining. 

With regulations requiring high-volume ventilation wherever confined-space uranium mining occurs, occupational exposure and mining deaths can be largely eliminated. The Olympic Dam and Canadian underground mines are ventilated with powerful fans that keep radon levels very low, at practically "safe" levels. Naturally occurring radon in other, non-uranium mines may also need to be controlled by ventilation.

Heap leaching

Heap leaching is an extraction process by which chemicals (usually sulfuric acid) are used to extract the economic element from ore which has been mined and placed in piles on the surface. Heap leaching is generally economically feasible only for oxide ore deposits. Oxidation of sulfide deposits occurs during the geological process called weathering. Therefore, oxide ore deposits are typically found close to the surface. If there are no other economic elements within the ore a mine might choose to extract the uranium using a leaching agent, usually a low molar sulfuric acid. 

If the economic and geological conditions are right, the mining company will level large areas of land with a small gradient, layering it with thick plastic (usually HDPE or LLDPE), sometimes with clay, silt or sand beneath the plastic liner. The extracted ore will typically be run through a crusher and placed in heaps atop the plastic. The leaching agent will then be sprayed on the ore for 30–90 days. As the leaching agent filters through the heap the uranium will break its bonds with the oxide rock and enter the solution. The solution will then filter along the gradient into collecting pools which will then be pumped to on-site plants for further processing. Only some of the uranium (commonly about 70%) is actually extracted. 
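As a rough illustration of the recovery figure above, the uranium recovered from a heap can be estimated from the heap mass, the ore grade, and the recovery fraction. The tonnage and grade below are hypothetical assumptions chosen for illustration; the ~70% recovery is the figure quoted above.

```python
# Back-of-envelope estimate of uranium recovered by heap leaching.
# Heap mass and ore grade are illustrative assumptions; the ~70%
# recovery fraction is the figure quoted in the text.

heap_mass_t = 100_000   # tonnes of crushed ore on the pad (assumed)
ore_grade = 0.0005      # 0.05% uranium by mass, a low grade (assumed)
recovery = 0.70         # fraction of contained uranium leached into solution

contained_u_t = heap_mass_t * ore_grade    # uranium contained in the heap
recovered_u_t = contained_u_t * recovery   # uranium actually recovered

print(f"Contained uranium: {contained_u_t:.1f} t")  # 50.0 t
print(f"Recovered uranium: {recovered_u_t:.1f} t")  # 35.0 t
```

With these assumed numbers, 15 of the 50 tonnes of contained uranium stay locked in the heap, which is why heap leaching only pays off for ore that would be uneconomic to mill conventionally.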

The uranium concentration within the solution is very important for the efficient separation of pure uranium from the acid. Since different heaps yield different concentrations, the solution is pumped to a carefully monitored mixing plant. The properly balanced solution is then pumped into a processing plant, where the uranium is separated from the sulfuric acid. 

Heap leaching is significantly cheaper than traditional milling processes. The lower costs make lower-grade ore economically feasible (given the right type of ore body). Environmental law requires that the surrounding ground water be continually monitored for possible contamination, and monitoring must continue even after the mine shuts down. In the past, mining companies would sometimes go bankrupt, leaving the responsibility for mine reclamation to the public. More recent additions to mining law require that companies set aside money for reclamation before a project begins; this money is held by the public to ensure adherence to environmental standards if the company should ever go bankrupt.

A very similar technique, called in-situ or in-place mining, does not require the ore to be excavated at all.

In-situ leaching

Trial well field for in-situ recovery at Honeymoon, South Australia
 
In-situ leaching (ISL), also known as solution mining, or in-situ recovery (ISR) in North America, involves leaving the ore where it is in the ground, and recovering the minerals from it by dissolving them and pumping the pregnant solution to the surface where the minerals can be recovered. Consequently, there is little surface disturbance and no tailings or waste rock generated. However, the orebody needs to be permeable to the liquids used, and located so that they do not contaminate ground water away from the orebody. 

Uranium ISL uses the native groundwater in the orebody which is fortified with a complexing agent and in most cases an oxidant. It is then pumped through the underground orebody to recover the minerals in it by leaching. Once the pregnant solution is returned to the surface, the uranium is recovered in much the same way as in any other uranium plant (mill). 

In Australian ISL mines (Beverley, Four Mile and Honeymoon Mine) the oxidant used is hydrogen peroxide and the complexing agent sulfuric acid. Kazakh ISL mines generally do not employ an oxidant but use much higher acid concentrations in the circulating solutions. ISL mines in the USA use an alkali leach due to the presence of significant quantities of acid-consuming minerals such as gypsum and limestone in the host aquifers. Any more than a few percent carbonate minerals means that alkali leach must be used in preference to the more efficient acid leach. 

The Australian government has published a best practice guide for in situ leach mining of uranium, which is being revised to take account of international differences.

Recovery from seawater

The uranium concentration of sea water is low, approximately 3.3 parts per billion or 3.3 micrograms per liter of seawater, but the total quantity of this resource is gigantic, and some scientists believe it is practically limitless with respect to worldwide demand: if even a portion of the uranium in seawater could be recovered, it could fuel the world's entire nuclear power generation for a long time. Some anti-nuclear advocates claim this statistic is exaggerated. Although research and development on recovering this low-concentration element with inorganic adsorbents such as titanium oxide compounds was carried out from the 1960s in the United Kingdom, France, Germany, and Japan, the research was halted because of low recovery efficiency. 
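The "gigantic quantity" claim is easy to check with back-of-envelope arithmetic. The sketch below takes the 3.3 µg/L concentration from the text and an assumed round figure of about 1.3 × 10¹⁸ m³ for the volume of the world's oceans:

```python
# Rough estimate of the total uranium dissolved in the oceans.
# Concentration is the 3.3 micrograms/litre figure from the text;
# the ocean volume of ~1.3e18 cubic metres is an assumed round figure.

concentration_ug_per_l = 3.3
ocean_volume_m3 = 1.3e18
litres_per_m3 = 1000.0

total_u_ug = concentration_ug_per_l * ocean_volume_m3 * litres_per_m3
total_u_tonnes = total_u_ug / 1e12   # 1 tonne = 1e12 micrograms

print(f"Total uranium in seawater: ~{total_u_tonnes:.1e} tonnes")  # ~4.3e9 t
```

A few billion tonnes of dissolved uranium, compared with annual reactor demand measured in tens of thousands of tonnes, is what underlies the "practically limitless" characterization, with recovery efficiency and cost being the real obstacles.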

At the Takasaki Radiation Chemistry Research Establishment of the Japan Atomic Energy Research Institute (JAERI Takasaki Research Establishment), research and development has continued culminating in the production of adsorbent by irradiation of polymer fiber. Adsorbents have been synthesized that have a functional group (amidoxime group) that selectively adsorbs heavy metals, and the performance of such adsorbents has been improved. Uranium adsorption capacity of the polymer fiber adsorbent is high, approximately tenfold greater in comparison to the conventional titanium oxide adsorbent.

One method of extracting uranium from seawater uses a uranium-specific nonwoven fabric as an adsorbent. The total amount of uranium recovered from three collection boxes containing 350 kg of fabric was less than 1 kg of yellowcake after 240 days of submersion in the ocean. According to the OECD, uranium may be extracted from seawater using this method for about $300/kg-U. The experiment by Seko et al. was repeated by Tamada et al. in 2006. They found that the cost varied from ¥15,000 to ¥88,000 depending on assumptions, and that "the lowest cost attainable now is ¥25,000 with 4 g-U/kg-adsorbent used in the sea area of Okinawa, with 18 repeated uses." At the May 2008 exchange rate, this was about $240/kg-U.
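The dollar figure quoted above is simply the yen cost converted at the prevailing exchange rate; in May 2008 the rate was roughly ¥104 per US dollar (an approximate figure, since the exact rate varied day to day):

```python
# Convert the reported yen cost per kg of uranium to US dollars.
# The exchange rate is an approximate May 2008 figure (~104 JPY/USD).

cost_jpy_per_kg = 25_000
jpy_per_usd = 104.0   # approximate May 2008 rate (assumption)

cost_usd_per_kg = cost_jpy_per_kg / jpy_per_usd
print(f"~${cost_usd_per_kg:.0f}/kg-U")  # ~$240/kg-U
```

This matches the ~$240/kg-U figure in the text and also shows why such cost estimates are sensitive to currency movements as well as to the adsorbent's capacity and reuse count.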

In 2012, ORNL researchers announced the successful development of a new adsorbent material, dubbed "HiCap", that vastly outperforms the previous best adsorbents. (Adsorbents retain solid or gas molecules, atoms, or ions on their surfaces.) "We have shown that our adsorbents can extract five to seven times more uranium at uptake rates seven times faster than the world's best adsorbents," said Chris Janke, one of the inventors and a member of ORNL's Materials Science and Technology Division. HiCap also effectively removes toxic metals from water, according to results verified by researchers at Pacific Northwest National Laboratory.

Uranium prices

Since 1981, uranium prices and quantities in the US have been reported by the Department of Energy. The import price dropped from US$32.90/lb-U3O8 in 1981 to $12.55 in 1990, and to below US$10/lb-U3O8 in 2000. Prices paid for uranium during the 1970s were higher: US$43/lb-U3O8 is reported as the selling price for Australian uranium in 1978 by the Nuclear Information Centre. Uranium prices reached an all-time low in 2001 at US$7/lb, but in April 2007 the spot price rose to US$113.00/lb, the high point of the uranium bubble of 2007. This was very close to the inflation-adjusted all-time high of 1977.

Following the 2011 Fukushima nuclear disaster, the global uranium sector remained depressed with the uranium price falling more than 50%, declining share values, and reduced profitability of uranium producers since March 2011 and into 2014. As a result, uranium companies worldwide are reducing costs, and limiting operations.

As of July 2014, the price of uranium concentrate remained near a five-year low, having fallen more than 50% from the peak spot price of January 2011 and reflecting the loss of Japanese demand following the 2011 Fukushima nuclear disaster. As a result of continued low prices, in February 2014 mining company Cameco deferred plans to expand production from existing Canadian mines, although it continued work to open a new mine at Cigar Lake. Also in February 2014, Paladin Energy suspended operations at its mine in Malawi, saying that the high-cost operation was losing money at current prices.

Politics of uranium mining

At the beginning of the Cold War, to ensure adequate supplies of uranium for national defense, the United States Congress passed the U.S. Atomic Energy Act of 1946, creating the Atomic Energy Commission (AEC), which had the power to withdraw prospective uranium mining land from public purchase and to manipulate the price of uranium to meet national needs. By setting a high price for uranium ore, the AEC created a uranium "boom" in the early 1950s that attracted many prospectors to the Four Corners region of the country. Moab, Utah became known as the uranium capital of the world after geologist Charles Steen discovered a rich ore deposit there in 1952, even though American ore sources were considerably less potent than those in the Belgian Congo or South Africa.

In the 1950s, methods for extracting dilute uranium and thorium, found in abundance in granite or seawater, were pursued. Scientists speculated that, used in a breeder reactor, these materials could provide a practically limitless source of energy. 

American military requirements declined in the 1960s, and the government completed its uranium procurement program by the end of 1970. Simultaneously, a new market emerged: commercial nuclear power plants. However, in the U.S. this market virtually collapsed by the end of the 1970s as a result of industrial strains caused by the energy crisis, popular opposition, and finally the Three Mile Island nuclear accident in 1979, all of which led to a de facto moratorium on the development of new nuclear reactor power stations. 

In Europe a mixed situation exists. Considerable nuclear power capacity has been developed, notably in Belgium, Finland, France, Germany, Spain, Sweden, Switzerland, and the UK. In many countries development of nuclear power has been stopped or phased out by legal action. In Italy the use of nuclear power was barred by a referendum in 1987, though this is now under revision. Ireland in 2008 also had no plans to change its non-nuclear stance; however, since the East-West Interconnector between Ireland and Britain opened in 2012, Ireland has been partly supplied by British nuclear power.

The years 1976 and 1977 saw uranium mining become a major political issue in Australia, with the Ranger Inquiry (Fox) report opening up a public debate about uranium mining. The Movement Against Uranium Mining group was formed in 1976, and many protests and demonstrations against uranium mining were held. Concerns relate to the health risks and environmental damage from uranium mining. Notable Australian anti-uranium activists have included Kevin Buzzacott, Jacqui Katona, Yvonne Margarula, and Jillian Marsh.

The World Uranium Hearing was held in Salzburg, Austria in September 1992. Anti-nuclear speakers from all continents, including indigenous speakers and scientists, testified to the health and environmental problems of uranium mining and processing, nuclear power, nuclear weapons, nuclear tests, and radioactive waste disposal. People who spoke at the 1992 Hearing include: Thomas Banyacya, Katsumi Furitsu, Manuel Pino and Floyd Red Crow Westerman. They highlighted the threat of radioactive contamination to all peoples, especially indigenous communities and said that their survival requires self-determination and emphasis on spiritual and cultural values. Increased renewable energy commercialization was advocated.

Health risks of uranium mining

Lung cancer deaths

Uranium ore emits radon gas. The health effects of high exposure to radon are a particular problem in the mining of uranium; significant excess lung cancer deaths have been identified in epidemiological studies of uranium miners employed in the 1940s and 1950s.

The first major studies with radon and health occurred in the context of uranium mining, first in the Joachimsthal region of Bohemia and then in the Southwestern United States during the early Cold War. Because radon is a product of the radioactive decay of uranium, underground uranium mines may have high concentrations of radon. Many uranium miners in the Four Corners region contracted lung cancer and other pathologies as a result of high levels of exposure to radon in the mid-1950s. The increased incidence of lung cancer was particularly pronounced among Native American and Mormon miners, because those groups normally have low rates of lung cancer. Safety standards requiring expensive ventilation were not widely implemented or policed during this period.

In studies of uranium miners, workers exposed to radon levels of 50 to 150 picocuries of radon per liter of air (2,000–6,000 Bq/m³) for about 10 years have shown an increased frequency of lung cancer. Statistically significant excesses in lung cancer deaths were present after cumulative exposures of less than 50 WLM. There is, however, unexplained heterogeneity in these results (whose confidence intervals do not always overlap): the size of the radon-related increase in lung cancer risk varied by more than an order of magnitude between studies.
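The two radon units quoted above are related by a fixed conversion: 1 curie is 3.7 × 10¹⁰ becquerels, so 1 pCi = 0.037 Bq, and with 1,000 litres per cubic metre, 1 pCi/L equals exactly 37 Bq/m³. The sketch below reproduces the range in the text (the 2,000–6,000 Bq/m³ figures are rounded versions of the exact values):

```python
# Convert radon concentrations from pCi/L to Bq/m^3.
# 1 Ci = 3.7e10 Bq, so 1 pCi = 0.037 Bq; 1 m^3 = 1000 L,
# hence 1 pCi/L = 37 Bq/m^3 exactly.

def pci_per_l_to_bq_per_m3(pci_per_l):
    return pci_per_l * 37.0

for level in (50, 150):
    print(f"{level} pCi/L = {pci_per_l_to_bq_per_m3(level):.0f} Bq/m^3")
# 50 pCi/L = 1850 Bq/m^3 (rounded to 2000 in the text)
# 150 pCi/L = 5550 Bq/m^3 (rounded to 6000 in the text)
```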

Since that time, ventilation and other measures have been used to reduce radon levels in most affected mines that continue to operate. In recent years, the average annual exposure of uranium miners has fallen to levels similar to the concentrations inhaled in some homes. This has reduced the risk of occupationally induced cancer from radon, although it still remains an issue both for those who are currently employed in affected mines and for those who have been employed in the past. The power to detect any excess risks in miners nowadays is likely to be small, exposures being much smaller than in the early years of mining.

Clean-up efforts

United States

Despite efforts made in cleaning up uranium sites, significant problems stemming from the legacy of uranium development still exist today on the Navajo Nation and in the states of Utah, Colorado, New Mexico, and Arizona. Hundreds of abandoned mines have not been cleaned up and present environmental and health risks in many communities. At the request of the U.S. House Committee on Oversight and Government Reform in October 2007, and in consultation with the Navajo Nation, the Environmental Protection Agency (EPA), along with the Bureau of Indian Affairs (BIA), the Nuclear Regulatory Commission (NRC), the Department of Energy (DOE), and the Indian Health Service (IHS), developed a coordinated Five-Year Plan to address uranium contamination. Similar interagency coordination efforts are beginning in the State of New Mexico as well. In 1978, Congress passed the Uranium Mill Tailings Radiation Control Act (UMTRCA), a measure designed to assist in the cleanup of 22 inactive ore-processing sites throughout the southwest. This also included constructing 19 disposal sites for the tailings, which contain a total of 40 million cubic yards of low-level radioactive material. The Environmental Protection Agency estimates that there are 4000 mines with documented uranium production, and another 15,000 locations with uranium occurrences in 14 western states, most found in the Four Corners area and Wyoming.

The Uranium Mill Tailings Radiation Control Act is a United States environmental law that amended the Atomic Energy Act of 1954 and gave the Environmental Protection Agency the authority to establish health and environmental standards for the stabilization, restoration, and disposal of uranium mill waste. Title 1 of the Act required the EPA to set environmental protection standards consistent with the Resource Conservation and Recovery Act, including groundwater protection limits; the Department of Energy to implement EPA standards and provide perpetual care for some sites; and the Nuclear Regulatory Commission to review cleanups and license sites to states or the DOE for perpetual care. Title 1 established a uranium mill remedial action program jointly funded by the federal government and the state. Title 1 of the Act also designated 22 inactive uranium mill sites for remediation, resulting in the containment of 40 million cubic yards of low-level radioactive material in UMTRCA Title 1 holding cells.
