
Friday, July 1, 2022

Urban resilience

From Wikipedia, the free encyclopedia
 
Tuned mass damper in Taipei 101, one of the world's tallest skyscrapers

Urban resilience has conventionally been defined as the "measurable ability of any urban system, with its inhabitants, to maintain continuity through all shocks and stresses, while positively adapting and transforming towards sustainability". Therefore, a resilient city is one that assesses, plans and acts to prepare for and respond to hazards - natural and human-made, sudden and slow-onset, expected and unexpected. Resilient cities are better positioned to protect and enhance people's lives, secure development gains, foster an investible environment, and drive positive change. Academic discussion of urban resilience has focused primarily on three distinct threats: climate change, natural disasters, and terrorism. Resilience to these threats has been discussed in the context of non-physical as well as physical aspects of urban planning and design. Accordingly, resilience strategies have tended to be conceived of in terms of counter-terrorism, other disasters (earthquakes, wildfires, tsunamis, coastal flooding, solar flares, etc.), and infrastructure adoption of sustainable energy.

More recently, there has been increasing attention to genealogies of urban resilience and to the capability of urban systems to adapt to changing conditions. This branch of resilience theory builds on a notion of cities as highly complex adaptive systems. The implication of this insight is to move urban planning away from conventional approaches based in geometric plans to an approach informed by network science that involves less interference in the functioning of cities. Network science provides a way of linking city size to the forms of networks that are likely to enable cities to function in different ways. It can further provide insights into the potential effectiveness of various urban policies. This requires a better understanding of the types of practices and tools that contribute to building urban resilience. Genealogical approaches explore the evolution of these practices over time, including the values and power relations underpinning them.

Building resilience in cities relies on investment decisions that prioritize spending on activities that offer alternatives, which perform well in different scenarios. Such decisions need to take into account future risks and uncertainties. Because risk can never be fully eliminated, emergency and disaster planning is crucial. Disaster risk management frameworks, for example, offer practical opportunities for enhancing resilience.

More than half of the world's human population has lived in cities since 2007, and urbanization is projected to rise to 80% by 2050. This means that the major resilience challenges of our era, such as poverty reduction, natural hazards and climate change, environmental sustainability, and social inclusion, will be won or lost in cities. The density of people in cities makes them especially vulnerable both to the impacts of acute disasters and to the slow, creeping effects of the changing climate, all of which makes resilience planning critically important. At the same time, growing urbanization over the past century has been associated with a considerable increase in urban sprawl. Resilience efforts address how individuals, communities and businesses not only cope in the face of multiple shocks and stresses, but also exploit opportunities for transformational development.

As one way of addressing disaster risk in urban areas, national and local governments, often supported by international funding agencies, engage in resettlement. This can be preventative, or occur after a disaster. While this reduces people's exposure to hazards, it can also lead to other problems, which can leave people more vulnerable or worse off than they were before. Resettlement needs to be understood as part of long-term sustainable development, not just as a means for disaster risk reduction.

Sustainable Development Goal 11

In September 2015, world leaders adopted the 17 Sustainable Development Goals (SDGs) as part of the 2030 Agenda for Sustainable Development. The goals, which build on and replace the Millennium Development Goals, officially came into force on 1 January 2016 and are expected to be achieved within the next 15 years. While the SDGs are not legally binding, governments are expected to take ownership and establish national frameworks for their achievement. Countries also have the primary responsibility for follow-up and review of progress based on quality, accessible and timely data collection. National reviews will feed into regional reviews, which in turn will inform a review at the global level.

UN-Habitat's City Resilience Profiling Tool (CRPT)

As the UN Agency for Human Settlements, UN-Habitat works to support local governments and their stakeholders in building urban resilience through the City Resilience Profiling Tool (CRPT). When applied, this holistic approach results in local governments that are better able to ensure the wellbeing of citizens, protect development gains and maintain functionality in the face of hazards. The Tool follows various stages, and UN-Habitat supports cities in maximizing the impact of its implementation.

Getting started: Local governments and UN-Habitat connect to evaluate the needs, opportunities and context of the city and assess the feasibility of implementing the tool there. With its local government partners, UN-Habitat considers the stakeholders that need to be involved in implementation, including civil society organizations, national governments and the private sector, among others.

Engagement: By signing an agreement with a UN agency, the local government is better able to work with the necessary stakeholders to plan out risk and build in resilience across the city.

Diagnosis: The CRPT provides a framework for cities to collect the right data, enabling them to evaluate their resilience and identify potential vulnerabilities in the urban system. Diagnosis through data covers all elements of the urban system and considers all potential hazards and stakeholders.

Resilience actions: Understanding of the entire urban system fuels effective action. The main output of the CRPT is a unique Resilience Action Plan (RAP) for each engaged city. The RAP sets out short-, medium- and long-term strategies based on the diagnosis, and actions are prioritised, assigned interdepartmentally, and integrated into existing government policies and plans. The process is iterative: once resilience actions have been implemented, local governments monitor impact through the tool, which recalibrates to identify next steps.

Taking it further: Resilience actions require the buy-in of all stakeholders and, in many cases, additional funding. With a detailed diagnostic, local governments can leverage the support of national governments, donors and other international organizations to work towards sustainable urban development.

This approach is currently being applied in Barcelona (Spain), Asuncion (Paraguay), Maputo (Mozambique), Port Vila (Vanuatu), Bristol (United Kingdom), Lisbon (Portugal), Yakutsk (Russia), and Dakar (Senegal). The biennial publication Trends in Urban Resilience, also produced by UN-Habitat, tracks the most recent efforts to build urban resilience, the actors behind these actions, and a number of case studies.

Medellin Collaboration for Urban Resilience

The Medellin Collaboration for Urban Resilience (MCUR) was launched at the 7th session of the World Urban Forum in Medellín, Colombia, in 2014. A pioneering partnership platform, it gathers the most prominent actors committed to building resilience globally, including UNISDR, the World Bank Group, the Global Facility for Disaster Reduction and Recovery, the Inter-American Development Bank, the Rockefeller Foundation, 100 Resilient Cities, C40, ICLEI and Cities Alliance, and it is chaired by UN-Habitat.

MCUR aims to jointly strengthen the resilience of all cities and human settlements around the world by supporting local, regional and national governments. Its activities include providing knowledge and research, facilitating access to local-level finance, and raising global awareness of urban resilience through policy advocacy and adaptation diplomacy. Its work supports the main international development agendas: the Sustainable Development Goals, the New Urban Agenda, the Paris Agreement on Climate Change and the Sendai Framework for Disaster Risk Reduction.

The Medellin Collaboration conceived a platform to help local governments and other municipal professionals understand the primary utility of the vast array of tools and diagnostics designed to assess, measure, monitor and improve city-level resilience. For example, some tools are intended as rapid assessments to establish a general understanding and baseline of a city's resilience and can be self-deployed, while others are intended as a means to identify and prioritise areas for investment. The Collaboration has produced a guidebook to illustrate how cities are responding to current and future challenges by thinking strategically about design, planning, and management for building resilience. Currently, it is working in a collaborative model in six pilot cities: Accra, Bogotá, Jakarta, Maputo, Mexico City and New York City.

100 Resilient Cities and the City Resilience Index (CRI)

"Urban Resilience is the capacity of individuals, communities, institutions, businesses, and systems within a city to survive, adapt, and grow no matter what kinds of chronic stresses and acute shocks they experience." Rockefeller Foundation, 100 Resilient Cities.

A central program contributing to the achievement of SDG 11 is the Rockefeller Foundation's 100 Resilient Cities. In December 2013, the Rockefeller Foundation launched the 100 Resilient Cities initiative, which is dedicated to promoting urban resilience, defined as "the capacity of individuals, communities, institutions, businesses, and systems within a city to survive, adapt, and grow no matter what kinds of chronic stresses and acute shocks they experience". The related resilience framework is multidimensional, incorporating four core dimensions: leadership and strategy; health and well-being; economy and society; and infrastructure and environment. Each dimension is defined by three individual "drivers" which reflect the actions cities can take to improve their resilience.

While the vagueness of the term "resilience" has enabled innovative multi-disciplinary collaboration, it has also made it difficult to operationalize or to develop generalizable metrics. To overcome this challenge, the professional services firm Arup has helped the Rockefeller Foundation develop the City Resilience Index based on extensive stakeholder consultation across a range of cities globally. The index is intended to serve as a planning and decision-making tool to help guide urban investments toward results that facilitate sustainable urban growth and the well-being of citizens. The hope is that city officials will utilize the tool to identify areas of improvement, systemic weaknesses and opportunities for mitigating risk. Its generalizable format also allows cities to learn from each other.

The index is a holistic articulation of urban resilience premised on the finding that there are 12 universal factors or drivers that contribute to city resilience. What varies is their relative importance. The factors are organized into the four core dimensions of the urban resilience framework:

Leadership and strategy

  • Effective leadership and management
  • Empowered stakeholders
  • Integrated development planning

Health and well-being

  • Minimal human vulnerability
  • Diverse livelihoods and employment
  • Effective safeguards to human health and life

Economy and society

  • Sustainable economy
  • Comprehensive security and rule of law
  • Collective identity and community support

Infrastructure and environment

  • Reduced exposure and fragility
  • Effective provision of critical services
  • Reliable mobility and communications
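
Because the index is essentially a fixed hierarchy of four dimensions and twelve drivers, its structure can be captured in a few lines of code. The sketch below is illustrative only: the driver names are those listed above, but the 0–5 scoring and equal weighting are assumptions, not the CRI's actual indicator methodology.

```python
# Minimal sketch of the index's four-dimension / twelve-driver structure.
# The 0-5 scoring and equal weighting are illustrative assumptions; the real
# CRI relies on Arup's detailed qualitative and quantitative indicators.

CRI_FRAMEWORK = {
    "Leadership and strategy": [
        "Effective leadership and management",
        "Empowered stakeholders",
        "Integrated development planning",
    ],
    "Health and well-being": [
        "Minimal human vulnerability",
        "Diverse livelihoods and employment",
        "Effective safeguards to human health and life",
    ],
    "Economy and society": [
        "Sustainable economy",
        "Comprehensive security and rule of law",
        "Collective identity and community support",
    ],
    "Infrastructure and environment": [
        "Reduced exposure and fragility",
        "Effective provision of critical services",
        "Reliable mobility and communications",
    ],
}

def dimension_scores(driver_scores: dict) -> dict:
    """Average per-driver scores (0-5) within each of the four dimensions."""
    return {
        dim: sum(driver_scores[d] for d in drivers) / len(drivers)
        for dim, drivers in CRI_FRAMEWORK.items()
    }
```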

A total of 100 cities across six continents have signed up for the Rockefeller Foundation's urban resilience challenge. All 100 cities have developed individual City Resilience Strategies with technical support from a Chief Resilience Officer (CRO). The CRO ideally reports directly to the city's chief executive and helps coordinate all the resilience efforts in a single city.

Medellin in Colombia qualified for the urban resilience challenge in 2013. In 2016, it won the Lee Kuan Yew World City Prize.

Digital technology, open data and governance for urban resilience

A core factor enabling progress on all other dimensions of urban resilience is urban governance. Sustainable, resilient and inclusive cities are often the outcome of good governance that encompasses effective leadership, inclusive citizen participation and efficient financing, among other things. To this end, public officials increasingly have access to public data, enabling evidence-based decision making. Open data is also increasingly transforming the way local governments share information with citizens, deliver services and monitor performance. It simultaneously enables increased public access to information and more direct citizen involvement in decision-making.

As part of their resilience strategies, city governments are increasingly relying on digital technology as part of a city's infrastructure and service delivery systems. On the one hand, reliance on technologies and electronic service delivery has made cities more vulnerable to hacking and cyberattacks. At the same time, information technologies have often had a positive transformative impact by supporting innovation and promoting efficiencies in urban infrastructure, thus leading to lower-cost city services. The deployment of new technologies in the initial construction of infrastructure has in some cases even allowed urban economies to leapfrog stages of development. An unintended outcome of the growing digitalization of cities is the emergence of a digital divide, which can exacerbate inequality between well-connected affluent neighborhoods and business districts, on the one hand, and under-serviced and under-connected low-income neighborhoods, on the other. In response, a number of cities have introduced digital inclusion programs to ensure that all citizens have the necessary tools to thrive in an increasingly digitalized world.

Climate change and urban resilience

The urban impacts of climate change vary widely across geographical and developmental scales. A recent study of 616 cities (home to 1.7 billion people, with a combined GDP of US$35 trillion, half of the world's total economic output) found that floods endanger more city residents than any other natural peril, followed by earthquakes and storms. Below, the challenges of heat waves, droughts and flooding are defined and discussed, and resilience-boosting strategies are outlined.

Heat waves and droughts

Heat waves are becoming increasingly prevalent as the global climate changes. The 1980 United States heat wave and drought killed 10,000 people. In 1988 a similar heat wave and drought killed 17,000 American citizens. In August 2003 the UK saw record-breaking summer temperatures, with average temperatures persistently rising above 32 °C. Nearly 3,000 deaths were attributed to the heat wave in the UK during this period, with deaths increasing by 42% in London alone. This heat wave claimed more than 40,000 lives across Europe. Research indicates that by 2040 over 50% of summers will be warmer than 2003, and by 2100 those same summer temperatures will be considered cool. The 2010 northern hemisphere summer heat wave was also disastrous, with nearly 5,000 deaths occurring in Moscow. In addition to deaths, these heat waves also cause other significant problems. Extended periods of heat and drought cause widespread crop losses, spikes in electricity demand, forest fires, air pollution and reduced biodiversity in vital land and marine ecosystems. Agricultural losses from heat and drought might not occur directly within the urban area, but they certainly affect the lives of urban dwellers. Crop supply shortages can lead to spikes in food prices, food scarcity, civic unrest and even starvation in extreme cases. The direct fatalities from these heat waves and droughts are statistically concentrated in urban areas, and this is not just in line with increased population densities, but is due to social factors and the urban heat island effect.

Urban heat islands

Urban heat island (UHI) refers to the presence of an inner-city microclimate in which temperatures are comparatively higher than in the rural surroundings. Recent studies have shown that summer daytime temperatures can reach up to 10 °C hotter in a city centre than in rural areas and 5–6 °C warmer at night. The causes of UHI are no mystery and mostly come down to simple energy balance and geometry. The materials commonly found in urban areas (concrete and asphalt) absorb and store heat energy much more effectively than the surrounding natural environment. The black colouring of asphalt surfaces (roads, parking lots and highways) absorbs significantly more electromagnetic radiation, further encouraging the rapid and effective capture and storage of heat throughout the day. Geometry comes into play as well, as tall buildings provide large surfaces that both absorb and reflect sunlight and its heat energy onto other absorbent surfaces. These tall buildings also block the wind, which limits convective cooling. The sheer size of the buildings also blocks surface heat from naturally radiating back into the cool sky at night. These factors, combined with the heat generated from vehicles, air conditioners and industry, ensure that cities create, absorb and hold heat very effectively.

Social factors for heat vulnerability

The physical causes of heat waves and droughts and the exacerbation of the UHI effect are only part of the equation in terms of fatalities; social factors play a role as well. Statistically, senior citizens represent the majority of heat (and cold) related deaths within urban areas and this is often due to social isolation. In rural areas, seniors are more likely to live with family or in care homes, whereas in cities they are often concentrated in subsidised apartment buildings and in many cases have little to no contact with the outside world. Like other urban dwellers with little or no income, most urban seniors are unlikely to own an air conditioner. This combination of factors leads to thousands of tragic deaths every season, and incidences are increasing each year.

Adapting for heat and drought resilience

Greening, reflecting and whitening urban spaces

Greening urban spaces is among the most frequently mentioned strategies to address heat effects. The idea is to increase the amount of natural cover within the city. This cover can be made up of grasses, bushes, trees, vines, water, rock gardens; any natural material. Covering as much surface as possible with green space both reduces the total quantity of thermally absorbent artificial material and, through shading, reduces the amount of light and heat reaching the concrete and asphalt that cannot be replaced by greenery. Trees are among the most effective greening tools within urban environments because of their coverage/footprint ratio. Trees require a very small physical area for planting, but when mature they provide a much larger coverage area. They absorb solar energy for photosynthesis (improving air quality and mitigating global warming), reducing the amount of energy trapped and held within artificial surfaces, and they also cast much-needed shade on the city and its inhabitants. Shade itself does not lower the ambient air temperature, but it greatly reduces the perceived temperature and improves the comfort of those seeking its refuge. A popular method of reducing UHI is simply increasing the albedo (light reflectiveness) of urban surfaces that cannot be 'greened'. This is done by using reflective paints or materials where appropriate, or white and light-coloured options where reflections would be distracting or dangerous. Glazing can also be added to windows to reduce the amount of heat entering buildings. Green roofs are also a resilience-boosting option, and have synergies with flood resilience strategies as well. However, depaving of excess pavement has been found to be a more effective and cost-efficient approach to greening and flood control.
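
To get a rough sense of scale for the albedo strategy, the sketch below compares the solar heat input absorbed by a dark asphalt surface with that absorbed by a whitened one. The irradiance and albedo figures are assumed values chosen for illustration, and the calculation ignores convection, evaporation and longwave exchange; it only shows how strongly absorbed heat depends on albedo.

```python
# Illustrative comparison of solar heat input for dark vs. reflective surfaces.
# Irradiance and albedo values are assumptions chosen for illustration.

SOLAR_IRRADIANCE = 800.0  # W/m^2, assumed midday irradiance on the surface

def absorbed_flux(albedo: float) -> float:
    """Shortwave power absorbed per square metre of surface."""
    return (1.0 - albedo) * SOLAR_IRRADIANCE

dark_asphalt = absorbed_flux(albedo=0.10)   # fresh asphalt reflects ~10%
reflective = absorbed_flux(albedo=0.60)     # white/reflective coating

reduction = 1.0 - reflective / dark_asphalt
print(f"{dark_asphalt:.0f} W/m^2 vs {reflective:.0f} W/m^2 "
      f"({reduction:.0%} less heat absorbed)")
```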

Social strategies

There are various strategies to increase the resilience of those most vulnerable to urban heat waves. As established, these vulnerable citizens are primarily socially isolated seniors. Other vulnerable groups include young children (especially those facing abject poverty or living in informal housing), people with underlying health problems, the infirm or disabled, and the homeless. Accurate and early prediction of heat waves is of fundamental importance, as it gives the government time to issue extreme heat alerts. Urban areas must prepare and be ready to implement heat-wave emergency response initiatives. Seasonal campaigns aimed at educating the public on the risks associated with heat waves will help prepare the broad community, but in response to impending heat events more direct action is required. Local government must quickly communicate with the groups and institutions that work with heat-vulnerable populations. Cooling centres should be opened in libraries, community centres and government buildings; these centres ensure free access to air conditioning and water. In partnership with government and non-government social services, paramedics, police, firefighters, nurses and volunteers, the groups working with vulnerable populations should carry out regular door-to-door visits during these extreme heat scenarios. These visits should provide risk assessment, advice, bottled water (for areas without potable tap water) and the offer of free transportation to local cooling centres.

Food and water supplies

Heat waves and droughts can wreak massive damage on agricultural areas vital to providing food staples to urban populations. Reservoirs and aquifers quickly dry up under increased demand for drinking, industrial and agricultural water. The result can be food shortages and price spikes and, with increasing frequency, shortages of drinking water, as observed with increasing severity each season in China and throughout much of the developing world. From an agricultural standpoint, farmers can be required to plant more heat- and drought-resistant crops. Agricultural practices can also be streamlined for higher hydrological efficiency. Reservoirs should be expanded, and new reservoirs and water towers should be constructed in areas facing critical shortages. Grander schemes of damming and redirecting rivers should also be considered where possible. For coastal cities, desalination plants provide a possible solution to water shortages. Infrastructure also plays a role in resilience, as in many areas aging pipelines result in leakage and possible contamination of drinking water. In Kenya's major cities, Nairobi and Mombasa, between 40 and 50% of drinking water is lost through leakage. In such cases, replacements and repairs are clearly needed.

Flooding

Flooding, whether from weather events, rising sea levels or infrastructure failures, is a major cause of death, disease and economic losses throughout the world. Climate change and rapidly expanding urban settlements are two factors that are leading to the increasing occurrence and severity of urban flood events, especially in the developing world. Storm surges can affect coastal cities and are caused by low-pressure weather systems, like cyclones and hurricanes. Flash floods and river floods can affect any city within a floodplain or with inadequate drainage infrastructure. These can be caused by large quantities of rain or heavy, rapid snow melt. With all forms of flooding, cities are increasingly vulnerable because of the large quantity of paved and concrete surfaces. These impermeable surfaces cause massive amounts of runoff and can quickly overwhelm the limited infrastructure of storm drains, flood canals and intentional floodplains. Many cities in the developing world simply have no infrastructure to redirect floodwaters whatsoever. Around the world, floods kill thousands of people every year and are responsible for billions of dollars in damages and economic losses. Flooding, much like heat waves and droughts, can also wreak havoc on agricultural areas, quickly destroying large amounts of crops. In cities with poor or absent drainage infrastructure, flooding can also lead to the contamination of drinking water sources (aquifers, wells, inland waterways) with salt water, chemical pollution, and most frequently, viral and bacterial contaminants.

Flood flow in urban environment

The flood flow in urbanised areas constitutes a hazard to the population and infrastructure. Some recent catastrophes include the inundation of Nîmes (France) in 1988 and Vaison-la-Romaine (France) in 1992, the flooding of New Orleans (USA) in 2005, and the flooding of Rockhampton, Bundaberg and Brisbane during the 2010–2011 summer in Queensland (Australia). Flood flows in urban environments have been studied only relatively recently, despite many centuries of flood events. Some researchers have noted the storage effect of urban areas. Several studies have looked into the flow patterns and redistribution in streets during storm events and the implications for flood modelling.

Some research has considered the criteria for safe evacuation of individuals in flooded areas. However, recent field measurements during the 2010–2011 Queensland floods showed that no criterion based solely on flow velocity, water depth or specific momentum can account for the hazards caused by velocity and water-depth fluctuations. Such criteria further ignore the risks associated with large debris entrained by the flow.

Adapting for flood resilience

Urban greening

Replacing as many non-porous surfaces with green space as possible will create more areas for natural ground (and plant-based) absorption of excess water. Gaining popularity are different types of green roofs. Green roofs vary in their intensity, from very thin layers of soil or rockwool supporting a variety of low or no-maintenance mosses or sedum species to large, deep, intensive roof gardens capable of supporting large plants and trees but requiring regular maintenance and more structural support. The deeper the soil, the more rainwater it can absorb and therefore the more potential floodwater it can prevent from reaching the ground. One of the best strategies, if possible, is to simply create enough space for the excess water. This involves planning or expanding areas of parkland in or adjacent to the zone where flooding is most likely to occur. Excess water is diverted into these areas when necessary, as in Cardiff, around the new Millennium Stadium. Floodplain clearance is another greening strategy that fundamentally removes structures and pavement built on floodplains and returns them to their natural habitat which is capable of absorbing massive quantities of water that otherwise would have flooded the built urban area.

Flood-water control

Levees and other flood barriers are indispensable for cities on floodplains or along rivers and coasts. In areas with lower financial and engineering capital, there are cheaper and simpler options for flood barriers. UK engineers are currently conducting field tests of a new technology called the SELOC (Self-Erecting Low-Cost Barrier). The barrier itself lies flat on the ground, and as the water rises, the SELOC floats up, with its top edge rising with the water level. A restraint holds the barrier in the vertical position. This simple, inexpensive flood barrier has great potential for increasing urban resilience to flood events and shows significant promise for developing nations with its low cost and simple, fool-proof design. The creation or expansion of flood canals and/or drainage basins can help direct excess water away from critical areas, and the use of innovative porous paving materials on city streets and car parks allows for the absorption and filtration of excess water.

During the January 2011 flood of the Brisbane River (Australia), some unique field measurements about the peak of the flood showed very substantial sediment fluxes in the Brisbane River flood plain, consistent with the murky appearance of floodwaters. The field deployment in an inundated street of the CBD showed also some unusual features of flood flow in an urban environment linked with some local topographic effects.

Structural resilience

In most developed nations, all new developments are assessed for flood risks. The aim is to ensure flood risk is taken into account at all stages of the planning process to avoid inappropriate development in areas of high risk. When development is required in areas of high risk, structures should be built to flood-resistant standards and living or working areas should be raised well above worst-case flood levels. For existing structures in high-risk areas, funding should be allocated to, for example, raise electrical wiring and sockets so that any water entering the home cannot reach the electrics. Other solutions are to raise these structures to appropriate heights, make them floating, or relocate or rebuild them on higher ground. A house in Mexico Beach, Florida, which survived Hurricane Michael, is an example of a house built to survive tidal surge.

The pre-Incan Uru people of Lake Titicaca in Peru have lived on floating islands made of reeds for hundreds of years. The practice began as an innovative form of protection from competition for land by various groups, and it continues to support the Uru homeland. Homes are built by hand on hand-made islands, all from simple reeds of the totora plant. Similarly, in the southern wetlands of Iraq, the Marsh Arabs (Arab al-Ahwār) have lived for centuries on floating islands and in arched buildings all constructed exclusively from the local qasab reeds. Without any nails, wood or glass, buildings are assembled by hand, as quickly as within a day. Another aspect of these villages, called Al Tahla, is that the built homes can also be disassembled in a day, transported, and reassembled.

Emergency response

As with all disasters, flooding requires a specific set of disaster response plans. Various levels of contingency planning should be established, from basic medical and selective evacuation provisions involving local emergency responders right up to full military disaster relief plans involving air-based evacuations, search and rescue teams and relocation provisions for entire urban populations. Clear lines of responsibility and chains of command must be laid out, and tiered priority response levels should be established to address the immediate needs of the most vulnerable citizens first. For post-flood repair and reconstruction, sufficient emergency funding should be set aside proactively.

Educational programs related to urban resilience

The emergence of urban resilience as an educational topic has experienced an unprecedented level of growth due in large part to a series of natural disasters including the 2004 Indian Ocean earthquake and tsunami, 2005 Hurricane Katrina, the 2011 Tohoku earthquake and tsunami, and Hurricane Sandy in 2012. Two of the more well-recognized programs are Harvard Graduate School of Design's Master's program in Risk and Resilience, and Tulane University's Disaster Resilience Leadership Academy. There are also several workshops available related to the U.S. Federal Emergency Management Agency and the Department of Homeland Security. A list of more than 50 current graduate and undergraduate programs focusing on urban resilience has been compiled by The Resilience Shift.

Endosymbiont

From Wikipedia, the free encyclopedia
 
A representation of the endosymbiotic theory

An endosymbiont or endobiont is any organism that lives within the body or cells of another organism, most often, though not always, in a mutualistic relationship. (The term endosymbiosis is from the Greek: ἔνδον endon "within", σύν syn "together" and βίωσις biosis "living".) Examples are nitrogen-fixing bacteria (called rhizobia), which live in the root nodules of legumes; single-cell algae inside reef-building corals; and bacterial endosymbionts that provide essential nutrients to about 10–15% of insects.

There are two types of symbiont transmission. In horizontal transmission, each new generation acquires free-living symbionts from the environment. An example is the nitrogen-fixing bacteria in certain plant roots. Vertical transmission takes place when the symbiont is transferred directly from parent to offspring. There is also a combination of these types, where symbionts are transferred vertically for some generations before a switch of host occurs and new symbionts are horizontally acquired from the environment. In vertical transmission, the symbionts often have a reduced genome and are no longer able to survive on their own. As a result, the symbiont depends on the host, resulting in a highly intimate co-dependent relationship. For instance, pea aphid symbionts have lost genes for essential molecules, now relying on the host to supply them with nutrients. In return, the symbionts synthesize essential amino acids for the aphid host. Other examples include the Wigglesworthia nutritional symbionts of tsetse flies and the symbionts of sponges. When a symbiont reaches this stage, it begins to resemble a cellular organelle, similar to mitochondria or chloroplasts.

Many instances of endosymbiosis are obligate; that is, either the endosymbiont or the host cannot survive without the other, such as the gutless marine worms of the genus Riftia, which get nutrition from their endosymbiotic bacteria. The most common examples of obligate endosymbioses are mitochondria and chloroplasts. Some human parasites, e.g. Wuchereria bancrofti and Mansonella perstans, thrive in their intermediate insect hosts because of an obligate endosymbiosis with Wolbachia spp. They can both be eliminated from hosts by treatments that target this bacterium. However, not all endosymbioses are obligate and some endosymbioses can be harmful to either of the organisms involved.

Two major types of organelle in eukaryotic cells, mitochondria and plastids such as chloroplasts, are considered to be bacterial endosymbionts. This process is commonly referred to as symbiogenesis.

Symbiogenesis and organelles

An overview of the endosymbiosis theory of eukaryote origin (symbiogenesis).

Symbiogenesis explains the origins of eukaryotes, whose cells contain two major kinds of organelle: mitochondria and chloroplasts. The theory proposes that these organelles evolved from certain types of bacteria that eukaryotic cells engulfed through phagocytosis. These cells and the bacteria trapped inside them entered an endosymbiotic relationship, meaning that the bacteria took up residence and began living exclusively within the eukaryotic cells.

Numerous insect species have endosymbionts at different stages of symbiogenesis. A common theme of symbiogenesis involves the reduction of the genome to only essential genes for the host and symbiont collective genome. A remarkable example of this is the fractionation of the Hodgkinia genome of Magicicada cicadas. Because the cicada life cycle takes years underground, natural selection on endosymbiont populations is relaxed for many bacterial generations. This allows the symbiont genomes to diversify within the host for years with only punctuated periods of selection when the cicadas reproduce. As a result, the ancestral Hodgkinia genome has split into three groups of primary endosymbiont, each encoding only a fraction of the essential genes for the symbiosis. The host now requires all three sub-groups of symbiont, each with degraded genomes lacking most essential genes for bacterial viability.

Bacterial endosymbionts of invertebrates

The best-studied examples of endosymbiosis are known from invertebrates. These symbioses affect organisms with global impact, including Symbiodinium of corals, or Wolbachia of insects. Many insect agricultural pests and human disease vectors have intimate relationships with primary endosymbionts.

Endosymbionts of insects

Diagram of cospeciation, where parasites or endosymbionts speciate or branch alongside their hosts. This process is more common in hosts with primary endosymbionts.

Scientists classify insect endosymbionts in two broad categories, 'Primary' and 'Secondary'. Primary endosymbionts (sometimes referred to as P-endosymbionts) have been associated with their insect hosts for many millions of years (from 10 to several hundred million years in some cases). They form obligate associations (see below), and display cospeciation with their insect hosts. Secondary endosymbionts exhibit a more recently developed association, are sometimes horizontally transferred between hosts, live in the hemolymph of the insects (not specialized bacteriocytes, see below), and are not obligate.

Primary endosymbionts

Among primary endosymbionts of insects, the best-studied are the pea aphid (Acyrthosiphon pisum) and its endosymbiont Buchnera sp. APS, the tsetse fly Glossina morsitans morsitans and its endosymbiont Wigglesworthia glossinidia brevipalpis and the endosymbiotic protists in lower termites. As with endosymbiosis in other insects, the symbiosis is obligate in that neither the bacteria nor the insect is viable without the other. Scientists have been unable to cultivate the bacteria in lab conditions outside of the insect. With special nutritionally-enhanced diets, the insects can survive, but are unhealthy, and at best survive only a few generations.

In some insect groups, these endosymbionts live in specialized insect cells called bacteriocytes (also called mycetocytes), and are maternally-transmitted, i.e. the mother transmits her endosymbionts to her offspring. In some cases, the bacteria are transmitted in the egg, as in Buchnera; in others like Wigglesworthia, they are transmitted via milk to the developing insect embryo. In termites, the endosymbionts reside within the hindguts and are transmitted through trophallaxis among colony members.

The primary endosymbionts are thought to help the host either by providing nutrients that the host cannot obtain itself or by metabolizing insect waste products into safer forms. For example, the putative primary role of Buchnera is to synthesize essential amino acids that the aphid cannot acquire from its natural diet of plant sap. Likewise, the primary role of Wigglesworthia, it is presumed, is to synthesize vitamins that the tsetse fly does not get from the blood that it eats. In lower termites, the endosymbiotic protists play a major role in the digestion of the lignocellulosic materials that constitute the bulk of the termites' diet.

Bacteria benefit from the reduced exposure to predators and competition from other bacterial species, the ample supply of nutrients and relative environmental stability inside the host.

Genome sequencing reveals that obligate bacterial endosymbionts of insects have among the smallest of known bacterial genomes and have lost many genes that are commonly found in closely related bacteria. Several theories have been put forth to explain the loss of genes. It is presumed that some of these genes are not needed in the environment of the host insect cell. A complementary theory suggests that the relatively small numbers of bacteria inside each insect decrease the efficiency of natural selection in 'purging' deleterious mutations and small mutations from the population, resulting in a loss of genes over many millions of years. Research in which a parallel phylogeny of bacteria and insects was inferred supports the belief that the primary endosymbionts are transferred only vertically (i.e., from the mother), and not horizontally (i.e., by escaping the host and entering a new host).
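
The population-genetic argument here, that small within-host populations weaken selection's ability to purge deleterious mutations, can be illustrated with a toy simulation. The sketch below is a bare-bones haploid Wright–Fisher model with arbitrary parameter values, not a model of any particular endosymbiont; it simply shows that a mildly deleterious mutation drifts to fixation far more often when the population is small.

```python
import random

def fixation_probability(pop_size: int, s: float, trials: int = 1000) -> float:
    """Estimate how often a single, mildly deleterious mutant (fitness 1 - s)
    drifts to fixation in a haploid Wright-Fisher population."""
    fixed = 0
    for _ in range(trials):
        mutants = 1
        while 0 < mutants < pop_size:
            # expected mutant frequency after selection
            p = mutants * (1 - s) / (mutants * (1 - s) + (pop_size - mutants))
            # binomial resampling of the next generation
            mutants = sum(random.random() < p for _ in range(pop_size))
        fixed += (mutants == pop_size)
    return fixed / trials

# Small populations (as in bacteriocytes) let deleterious mutations fix by
# chance; in larger populations selection removes them almost every time.
for n in (20, 100, 500):
    print(f"N = {n:4d}: fixation probability ≈ {fixation_probability(n, s=0.01):.3f}")
```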

Attacking obligate bacterial endosymbionts may present a way to control their insect hosts, many of which are pests or carriers of human disease. For example, aphids are crop pests and the tsetse fly carries the organism Trypanosoma brucei that causes African sleeping sickness. Other motivations for their study involve understanding the origins of symbioses in general, as a proxy for understanding e.g. how chloroplasts or mitochondria came to be obligate symbionts of eukaryotes or plants.

Secondary endosymbionts

Pea aphids are commonly infested by parasitic wasps. Their secondary endosymbionts attack the infesting parasitoid wasp larvae promoting the survival of both the aphid host and its endosymbionts.

The pea aphid (Acyrthosiphon pisum) is known to contain at least three secondary endosymbionts, Hamiltonella defensa, Regiella insecticola, and Serratia symbiotica. Hamiltonella defensa defends its aphid host from parasitoid wasps. This defensive symbiosis improves the survival of aphids, which have lost some elements of the insect immune response.

One of the best-understood defensive symbionts is the spiral bacterium Spiroplasma poulsonii. Spiroplasma sp. can be reproductive manipulators, but also defensive symbionts of Drosophila flies. In Drosophila neotestacea, S. poulsonii has spread across North America owing to its ability to defend its fly host against nematode parasites. This defence is mediated by toxins called "ribosome-inactivating proteins" that attack the molecular machinery of invading parasites. These Spiroplasma toxins represent one of the first mechanistically understood examples of a defensive symbiosis between an insect endosymbiont and its host.

Sodalis glossinidius is a secondary endosymbiont of tsetse flies that lives inter- and intracellularly in various host tissues, including the midgut and hemolymph. Phylogenetic studies have not indicated a correlation between evolution of Sodalis and tsetse. Unlike tsetse's primary symbiont Wigglesworthia, though, Sodalis has been cultured in vitro.

Endosymbionts of ants

Bacteriocyte-associated symbionts

The best-studied endosymbionts of ants are bacteria of the genus Blochmannia, the primary endosymbiont of Camponotus ants. In 2018 a new ant-associated symbiont was discovered in Cardiocondyla ants. This symbiont was named Candidatus Westeberhardia Cardiocondylae and it is also believed to be a primary symbiont.

Endosymbionts of marine invertebrates

Extracellular endosymbionts are also represented in all four extant classes of Echinodermata (Crinoidea, Ophiuroidea, Echinoidea, and Holothuroidea). Little is known of the nature of the association (mode of infection, transmission, metabolic requirements, etc.) but phylogenetic analysis indicates that these symbionts belong to the class Alphaproteobacteria, relating them to Rhizobium and Thiobacillus. Other studies indicate that these subcuticular bacteria may be both abundant within their hosts and widely distributed among the Echinoderms in general.

Some marine oligochaeta (e.g., Olavius algarvensis and Inanidrillus spp.) have obligate extracellular endosymbionts that fill the entire body of their host. These marine worms, which lack any digestive or excretory system (no gut, mouth, or nephridia), are nutritionally dependent on their symbiotic chemoautotrophic bacteria.

The sea slug Elysia chlorotica lives in an endosymbiotic relationship with the alga Vaucheria litorea, and the jellyfish Mastigias has a similar relationship with an alga.

Dinoflagellate endosymbionts

Dinoflagellate endosymbionts of the genus Symbiodinium, commonly known as zooxanthellae, are found in corals, mollusks (especially giant clams of the genus Tridacna), sponges, and foraminifera. These endosymbionts drive the formation of coral reefs by capturing sunlight and providing their hosts with energy for carbonate deposition.

Previously thought to be a single species, molecular phylogenetic evidence over the past couple decades has shown there to be great diversity in Symbiodinium. In some cases, there is specificity between host and Symbiodinium clade. More often, however, there is an ecological distribution of Symbiodinium, the symbionts switching between hosts with apparent ease. When reefs become environmentally stressed, this distribution of symbionts is related to the observed pattern of coral bleaching and recovery. Thus, the distribution of Symbiodinium on coral reefs and its role in coral bleaching presents one of the most complex and interesting current problems in reef ecology.

Endosymbionts of phytoplankton

In marine environments, bacterial endosymbionts have more recently been discovered. These endosymbiotic relationships are especially prevalent in oligotrophic or nutrient-poor regions of the ocean such as the North Atlantic. In these oligotrophic waters, cell growth of larger phytoplankton such as diatoms is limited by low nitrate concentrations. Endosymbiotic bacteria fix nitrogen for their diatom hosts and in turn receive organic carbon from photosynthesis. These symbioses play an important role in global carbon cycling in oligotrophic regions.

One known symbiosis between the diatom Hemiaulus spp. and the cyanobacterium Richelia intracellularis has been found in the North Atlantic, Mediterranean, and Pacific Ocean. The Richelia endosymbiont is found within the diatom frustule of Hemiaulus spp. and has a reduced genome, likely having lost genes related to pathways the host now provides. Research by Foster et al. (2011) measured nitrogen fixation by the cyanobacterial symbiont Richelia intracellularis well above intracellular requirements, and found the cyanobacterium was likely fixing excess nitrogen for Hemiaulus host cells. Additionally, both host and symbiont cell growth were much greater than in free-living Richelia intracellularis or symbiont-free Hemiaulus spp. The Hemiaulus-Richelia symbiosis is not obligatory, especially in nitrogen-replete areas.

Richelia intracellularis is also found in Rhizosolenia spp., a diatom found in oligotrophic oceans. Compared to the Hemiaulus host, the endosymbiosis with Rhizosolenia is much more consistent, and Richelia intracellularis is generally found in Rhizosolenia. There are some asymbiotic (occurring without an endosymbiont) Rhizosolenia; however, there appear to be mechanisms limiting the growth of these organisms in low-nutrient conditions. Cell division of the diatom host and cyanobacterial symbiont can be uncoupled, and the mechanisms for passing bacterial symbionts to daughter cells during cell division are still relatively unknown.

Other endosymbioses with nitrogen fixers in open oceans include Calothrix in Chaetoceros spp. and UCYN-A in a prymnesiophyte microalga. The Chaetoceros-Calothrix endosymbiosis is hypothesized to be more recent, as the Calothrix genome is generally intact, whereas other symbionts such as UCYN-A and Richelia have reduced genomes. This reduction in genome size occurs within nitrogen metabolism pathways, indicating that endosymbiont species are generating nitrogen for their hosts and losing the ability to use this nitrogen independently. This reduction in endosymbiont genome size might be a step that occurred in the evolution of organelles (see above).

Endosymbionts of protists

Mixotricha paradoxa is a protozoan that lacks mitochondria. However, spherical bacteria live inside the cell and serve the function of the mitochondria. Mixotricha also has three other species of symbionts that live on the surface of the cell.

Paramecium bursaria, a species of ciliate, has a mutualistic symbiotic relationship with green alga called Zoochlorella. The algae live inside the cell, in the cytoplasm.

Paulinella chromatophora is a freshwater amoeboid which has recently (evolutionarily speaking) taken on a cyanobacterium as an endosymbiont.

Many foraminifera are hosts to several types of algae, such as red algae, diatoms, dinoflagellates and chlorophyta. These endosymbionts can be transmitted vertically to the next generation via asexual reproduction of the host, but because the endosymbionts are larger than the foraminiferal gametes, they need to acquire new algae again after sexual reproduction.

Several species of radiolaria have photosynthetic symbionts. In some species the host will sometimes digest algae to keep their population at a constant level.

Hatena arenicola is a flagellate protist with a complicated feeding apparatus that feeds on other microbes. However, when it engulfs a green alga from the genus Nephroselmis, the feeding apparatus disappears and it becomes photosynthetic. During mitosis the alga is transferred to only one of the two daughter cells, and the cell without the alga needs to start the cycle all over again.

In 1976, biologist Kwang W. Jeon found that a lab strain of Amoeba proteus had been infected by bacteria that lived inside the cytoplasmic vacuoles. This infection killed all the protists except for a few individuals. After the equivalent of 40 host generations, the two organisms gradually became mutually interdependent. Over many years of study, it has been confirmed that a genetic exchange between the prokaryotes and protists had occurred.

Endosymbionts of vertebrates

The spotted salamander (Ambystoma maculatum) lives in a relationship with the alga Oophila amblystomatis, which grows in its egg cases.

Endosymbionts of plants

Chloroplasts are primary endosymbionts of plants that provide energy to the plant by generating sugars.

Of all the plants, Azolla has the most intimate relationship with a symbiont, as its cyanobacterium symbiont Anabaena is passed on directly from one generation to the next.

Endosymbionts of bacteria

It has been observed that some Betaproteobacteria have Gammaproteobacteria endosymbionts.

Virus-host associations

The Human Genome Project found several thousand endogenous retroviruses, endogenous viral elements in the genome that closely resemble and can be derived from retroviruses, organized into 24 families.

Kepler-186f

From Wikipedia, the free encyclopedia
 
Kepler-186f

Size comparison of Kepler-186f (artist's impression) with Earth, along with their projected habitable zones.

Discovery

  • Discovered by: Elisa Quintana
  • Discovery site: Kepler Space Observatory
  • Discovery date: 17 April 2014
  • Detection method: Transit

Orbital characteristics

  • Semi-major axis: 0.432 ± 0.01 AU
  • Eccentricity: 0.04
  • Orbital period: 129.9444 ± 0.0012 d (0.355772 y)
  • Inclination: 89.9°
  • Star: Kepler-186

Physical characteristics

  • Mean radius: 1.17 ± 0.08 REarth
  • Mass: 1.4 MEarth
  • Temperature: Teq 188 K (−85 °C; −121 °F)

Kepler-186f (also known by its Kepler object of interest designation KOI-571.05) is an exoplanet orbiting the red dwarf Kepler-186, about 580 light-years (180 parsecs) from Earth.

It was the first planet with a radius similar to Earth's to be discovered in the habitable zone of another star. NASA's Kepler space telescope detected it using the transit method, along with four additional planets orbiting much closer to the star (all modestly larger than Earth).

Analysis of three years of data was required to find its signal. The results were presented initially at a conference on 19 March 2014 and some details were reported in the media at the time. The public announcement was on 17 April 2014, followed by publication in Science.

Physical characteristics

Mass, radius and temperature

The only physical property directly derivable from the observations (besides orbit) is the size of the planet relative to the central star, which follows from the amount of occultation of stellar light during a transit. This ratio was measured to be 0.021, giving a planetary radius of 1.11±0.14 times that of Earth. The planet is about 11% larger in radius than Earth (between 4.5% smaller and 26.5% larger), giving a volume about 1.37 times that of Earth (between 0.87 and 2.03 times as large).
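
A back-of-the-envelope version of this inference: the transit yields only the planet-to-star radius ratio, so the absolute radius follows from multiplying by the stellar radius. The 0.52 solar-radius value used below is the one quoted in the host-star section; published analyses adopt slightly different stellar parameters, which is why this rough figure lands a little above 1.11 Earth radii while staying within the quoted uncertainty.

```python
# Transit sizing sketch: planet radius = (radius ratio) x (stellar radius).
# The 0.52 solar-radius stellar value is quoted later in this article;
# the discovery paper's stellar parameters differ slightly.

R_SUN_IN_EARTH_RADII = 109.2

radius_ratio = 0.021          # planet/star radius ratio from the transit
stellar_radius_solar = 0.52   # host-star radius in solar radii

planet_radius_earth = radius_ratio * stellar_radius_solar * R_SUN_IN_EARTH_RADII
print(f"≈ {planet_radius_earth:.2f} Earth radii")  # ≈ 1.19, within 1.11 ± 0.14
```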

A very wide range of possible masses can be calculated by combining the radius with densities derived from the possible types of matter from which planets can be made. For example, it could be a rocky terrestrial planet or a lower density ocean planet with a thick atmosphere. A massive hydrogen/helium (H/He) atmosphere is thought to be unlikely in a planet with a radius below 1.5 REarth. Planets with a radius of more than 1.5 times that of Earth tend to accumulate the thick atmospheres which make them less likely to be habitable. Red dwarfs emit a much stronger extreme ultraviolet (XUV) flux when young than later in life. The planet's primordial atmosphere would have been subjected to elevated photoevaporation during that period, which would probably have largely removed any H/He-rich envelope through hydrodynamic mass loss.

Mass estimates range from 0.32 MEarth for a pure water/ice composition to 3.77 MEarth if made up entirely of iron (both implausible extremes). For a body with radius 1.11 REarth, a composition similar to that of Earth (i.e., 1/3 iron, 2/3 silicate rock) yields a mass of 1.44 MEarth, taking into account the higher density due to the higher average pressure compared to Earth. That would make the force of gravity on the surface 17% higher than on Earth.
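
The 17% figure follows directly from how surface gravity scales with mass and radius in Earth units, as the short check below shows.

```python
# Surface gravity relative to Earth scales as (M / M_earth) / (R / R_earth)^2.

mass_earths = 1.44     # Earth-like composition estimate quoted above
radius_earths = 1.11

gravity_rel = mass_earths / radius_earths ** 2
print(f"surface gravity ≈ {gravity_rel:.2f} g")  # ≈ 1.17 g, i.e. ~17% above Earth
```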

The estimated equilibrium temperature for Kepler-186f, which is the surface temperature in the absence of an atmosphere, is about 188 K (−85 °C; −121 °F), somewhat colder than the equilibrium temperature of Mars.
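
For context, this value can be reproduced approximately from the stellar parameters given in the host-star section using the standard zero-atmosphere relation T_eq = T_star · sqrt(R_star / (2a)) · (1 − A)^(1/4). The Bond albedo A is unknown, so the sketch below simply brackets the quoted ~188 K with a few assumed values.

```python
import math

# Zero-atmosphere equilibrium temperature:
#   T_eq = T_star * sqrt(R_star / (2 * a)) * (1 - A) ** 0.25
# Stellar temperature/radius and orbital distance are the values quoted in
# this article; the Bond albedo A is an assumption, hence the range of results.

T_STAR = 3755.0               # K
R_STAR = 0.52 * 6.957e8       # m (0.52 solar radii)
A_ORBIT = 0.432 * 1.496e11    # m (0.432 AU)

def t_eq(albedo: float) -> float:
    return T_STAR * math.sqrt(R_STAR / (2 * A_ORBIT)) * (1 - albedo) ** 0.25

for albedo in (0.0, 0.2, 0.3):
    print(f"A = {albedo:.1f}: T_eq ≈ {t_eq(albedo):.0f} K")
# ≈ 199 K, 188 K and 182 K, bracketing the quoted ~188 K
```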

Host star

The planet orbits Kepler-186, an M-type star which has a total of five known planets. The star has a mass of 0.54 M☉ and a radius of 0.52 R☉. It has a temperature of 3755 K and is about 4 billion years old, about 600 million years younger than the Sun, which is 4.6 billion years old and has a temperature of 5,778 K (5,505 °C; 9,941 °F).

The star's apparent magnitude, or how bright it appears from Earth's perspective, is 14.62. This is too dim to be seen with the naked eye, which under dark skies can generally only see objects brighter than about magnitude 6.5.
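
To put that magnitude in perspective, each five magnitudes corresponds to a factor of 100 in brightness, so the star's shortfall relative to the naked-eye limit is a one-line calculation (the 6.5 limit assumed below is a typical dark-sky value).

```python
# Magnitude difference -> brightness ratio: ratio = 10 ** (0.4 * delta_mag).

apparent_mag = 14.62
naked_eye_limit = 6.5   # assumed typical dark-sky limiting magnitude

flux_ratio = 10 ** (0.4 * (apparent_mag - naked_eye_limit))
print(f"≈ {flux_ratio:.0f} times too faint for the naked eye")  # ≈ 1,770
```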

Orbit

Kepler-186f orbits a star with about 5% of the Sun's luminosity, with an orbital period of 129.9 days and an orbital radius about 0.40 times that of Earth's orbit (compared to 0.39 AU (58 million km; 36 million mi) for Mercury). The habitable zone for this system is estimated conservatively to extend over distances receiving from 88% to 25% of Earth's illumination (from 0.23 to 0.46 AU (34 to 69 million km; 21 to 43 million mi)). Kepler-186f receives about 32%, placing it within the conservative zone but near the outer edge, similar to the position of Mars in our planetary system.
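
The illumination figure follows from an inverse-square relation: flux relative to Earth is (L/L☉)/(d/AU)². Plugging in the rounded 5% luminosity and ~0.43 AU separation quoted in this article gives roughly 27%, the same ballpark as the published ~32% (the difference comes from rounding and the more precise stellar parameters used in the literature).

```python
# Relative insolation: S / S_earth = (L / L_sun) / (d / AU)^2.
# The rounded inputs below come from this article, so the result only
# approximates the published ~32% figure.

luminosity_solar = 0.05
orbital_distance_au = 0.432

insolation_rel = luminosity_solar / orbital_distance_au ** 2
print(f"≈ {insolation_rel:.0%} of Earth's insolation")  # ≈ 27%
```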

Habitability

Artist's concept of a rocky Earth-sized exoplanet in the habitable zone of its host star, possibly compatible with Kepler-186f's known data (NASA/SETI/JPL).
 

Kepler-186f's location within the habitable zone does not ensure it is habitable; this is also dependent on its atmospheric characteristics, which are unknown. However, Kepler-186f is too distant for its atmosphere to be analyzed by existing telescopes (e.g., NESSI) or next-generation instruments such as the James Webb Space Telescope. A simple climate model – in which the planet's inventory of volatiles is restricted to nitrogen, carbon dioxide and water, and clouds are not accounted for – suggests that the planet's surface temperature would be above 273 K (0 °C; 32 °F) if at least 0.5 to 5 bars of CO2 is present in its atmosphere, for assumed N2 partial pressures ranging from 10 bar to zero, respectively.

The star hosts four other planets discovered so far, although Kepler-186 b, c, d, and e (in order of increasing orbital radius), being too close to their star, are considered too hot to have liquid water. The four innermost planets are probably tidally locked, but Kepler-186f is in a higher orbit, where the star's tidal effects are much weaker, so the time could have been insufficient for its spin to slow down significantly. Because of the very slow evolution of red dwarfs, the age of the Kepler-186 system was poorly constrained, although it is likely to be greater than a few billion years. Recent results have placed the age at around 4 billion years. The chance that it is tidally locked is approximately 50%. Since it is closer to its star than Earth is to the Sun, it will probably rotate much more slowly than Earth; its day could be weeks or months long (see Tidal effects on rotation rate, axial tilt and orbit).

Kepler-186f's axial tilt (obliquity) is likely very small, in which case it would not have tilt-induced seasons like Earth's. Its orbit is probably close to circular, so it will also lack eccentricity-induced seasonal changes like those of Mars. However, the axial tilt could be larger (about 23 degrees) if another undetected non-transiting planet orbits between it and Kepler-186e; planetary formation simulations have shown that the presence of at least one additional planet in this region is likely. If such a planet exists, it cannot be much more massive than Earth as it would then cause orbital instabilities.

A review essay in 2015 concluded that Kepler-186f, along with the exoplanets Kepler-442b and Kepler-62f, was likely among the best candidates for being a potentially habitable planet.

In June 2018, studies suggested that Kepler-186f may have seasons and a climate similar to those on Earth.

Follow-up studies

NASA Exoplanet Exploration Program "travel poster" for Kepler-186f

Target of SETI investigation

As part of the SETI Institute's search for extraterrestrial intelligence, the Allen Telescope Array had listened for radio emissions from the Kepler-186 system for about a month as of 17 April 2014. No signals attributable to extraterrestrial technology were found in that interval; to be detectable, however, such transmissions, if radiated in all directions equally rather than beamed towards Earth, would need to be at least 10 times as strong as those from the Arecibo Observatory. Another search, undertaken by the crowdsourcing project SETI-Live, reported inconclusive but promising-looking signs in the radio noise from the Allen Array observations. The better-known SETI@home search does not cover any object in the Kepler field of view, and a follow-up survey using the Green Bank Telescope has not observed Kepler-186f. Given the interstellar distance of 490 light-years (151 pc), any signals arriving at Earth now would have left the planet nearly 500 years ago.

Future technology

At approximately 580 light-years (180 pc) distant, Kepler-186f is too far away, and its star too faint, for current telescopes or the next generation of planned telescopes to determine its mass or whether it has an atmosphere. However, the discovery of Kepler-186f demonstrates conclusively that there are other Earth-sized planets in habitable zones. The Kepler spacecraft focused on a single small region of the sky, but next-generation planet-hunting space telescopes such as TESS and CHEOPS will examine nearby stars throughout the sky. Nearby stars with planets can then be studied by the James Webb Space Telescope and future large ground-based telescopes to analyze atmospheres, determine masses and infer compositions. Additionally, the Square Kilometre Array would significantly improve on radio observations by the Arecibo Observatory and the Green Bank Telescope.

Previous names

As the Kepler telescope's observational campaign proceeded, an initially identified system was entered in the Kepler Input Catalog (KIC) and then, as a candidate host of planets, progressed to a Kepler Object of Interest (KOI). Thus, Kepler-186 started as KIC 8120608 and was then identified as KOI-571. Before its full confirmation, Kepler-186f was referred to in various discussions and publications in 2013 as KOI-571-05, KOI-571.05, or similar nomenclature.

Comparison

The nearest-to-Earth-size planet previously known in a habitable zone was Kepler-62f, with 1.4 Earth radii. Kepler-186f orbits an M-dwarf star, while Kepler-62f orbits a K-type star. A study of atmospheric evolution for Earth-size planets in the habitable zones of G-type stars (a class that includes the Sun but not Kepler-186) suggested that 0.8–1.15 R🜨 is the size range for planets small enough to lose their initial accreted hydrogen envelope but large enough to retain an outgassed secondary atmosphere such as Earth's.

Confirmed small exoplanets in habitable zones found by the Kepler Space Telescope, as of 6 January 2015: Kepler-62e, Kepler-62f, Kepler-186f, Kepler-296e, Kepler-296f, Kepler-438b, Kepler-440b and Kepler-442b.


Coal liquefaction

From Wikipedia, the free encyclopedia

Coal liquefaction is a process of converting coal into liquid hydrocarbons: liquid fuels and petrochemicals. This process is often known as "Coal to X" or "Carbon to X", where X can be many different hydrocarbon-based products. However, the most common process chain is "Coal to Liquid Fuels" (CTL).

Historical background

Coal liquefaction was originally developed at the beginning of the 20th century. The best-known CTL process is Fischer–Tropsch synthesis (FT), named after its inventors Franz Fischer and Hans Tropsch of the Kaiser Wilhelm Institute, who developed it in the 1920s. FT synthesis is the basis for indirect coal liquefaction (ICL) technology. Friedrich Bergius, also a German chemist, invented direct coal liquefaction (DCL) in 1913 as a way to convert lignite into synthetic oil.

Coal liquefaction was an important part of Adolf Hitler's four-year plan of 1936 and became an integral part of German industry during World War II. During the mid-1930s, companies such as IG Farben and Ruhrchemie initiated industrial production of synthetic fuels derived from coal. This led to the construction of twelve DCL plants using hydrogenation and nine ICL plants using Fischer–Tropsch synthesis by the end of World War II. In total, CTL provided 92% of Germany's air fuel and over 50% of its petroleum supply in the 1940s. The DCL and ICL plants effectively complemented each other rather than competed: coal hydrogenation yielded high-quality aviation and motor gasoline, while FT synthesis chiefly produced high-quality diesel, lubrication oil, and waxes, together with smaller amounts of lower-quality motor gasoline. The DCL plants were also more developed, as lignite, the only coal available in many parts of Germany, worked better with hydrogenation than with FT synthesis. After the war, Germany had to abandon its synthetic fuel production, as it was prohibited by the Potsdam Conference in 1945.

South Africa developed its own CTL technology in the 1950s. The South African Coal, Oil and Gas Corporation (Sasol) was founded in 1950 as part of an industrialization process that the South African government considered essential for continued economic development and autonomy. South Africa had no domestic oil reserves, which made the country very vulnerable to disruption of supplies coming from outside, albeit for different reasons at different times. Sasol was a successful way to protect the country's balance of payments against an increasing dependence on foreign oil. For years its principal product was synthetic fuel, and this business enjoyed significant government protection in South Africa during the apartheid years for its contribution to domestic energy security. Although it was generally much more expensive to produce oil from coal than from natural petroleum, the political as well as economic importance of achieving as much independence as possible in this sphere was sufficient to overcome any objections. Early attempts to attract private capital, foreign or domestic, were unsuccessful, and coal liquefaction could start only with state support. CTL continued to play a vital part in South Africa's national economy, providing around 30% of its domestic fuel demand. The democratization of South Africa in the 1990s led Sasol to search for products that could prove more competitive in the global marketplace; as of the new millennium the company was focusing primarily on its petrochemical business, as well as on efforts to convert natural gas into synthetic crude oil (GTL), using its expertise in Fischer–Tropsch synthesis.

CTL technologies have steadily improved since the Second World War. Technical development has resulted in a variety of systems capable of handling a wide array of coal types. However, only a few enterprises based on generating liquid fuels from coal have been undertaken, most of them based on ICL technology; the most successful one has been Sasol in South Africa. CTL also received new interest in the early 2000s as a possible mitigation option for reducing oil dependence, at a time when rising oil prices and concerns over peak oil made planners rethink existing supply chains for liquid fuels.

Methods

Specific liquefaction technologies generally fall into two categories: direct (DCL) and indirect liquefaction (ICL) processes. Direct processes are based on approaches such as carbonization, pyrolysis, and hydrogenation.

Indirect liquefaction processes generally involve gasification of coal to a mixture of carbon monoxide and hydrogen, often known as synthesis gas or simply syngas. Using the Fischer–Tropsch process, the syngas is then converted into liquid hydrocarbons.
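
As a point of reference, the overall Fischer–Tropsch reaction producing straight-chain alkanes can be summarized as follows (textbook stoichiometry, not a description of any particular plant):

\[ (2n+1)\,\mathrm{H_2} + n\,\mathrm{CO} \longrightarrow \mathrm{C}_n\mathrm{H}_{2n+2} + n\,\mathrm{H_2O} \]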

In contrast, direct liquefaction processes convert coal into liquids directly, without relying on intermediate steps, by breaking down the organic structure of coal with the application of a hydrogen-donor solvent, often at high pressures and temperatures. Since liquid hydrocarbons generally have a higher hydrogen-to-carbon molar ratio than coals, either hydrogenation or carbon-rejection processes must be employed in both ICL and DCL technologies.
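
To illustrate the ratio argument with rough figures (illustrative textbook values, not measurements of any specific feedstock), compare the H/C atomic ratio of a bituminous coal with that of a diesel-range alkane:

    # Illustrative H/C atomic ratios showing why hydrogen must be added
    # or carbon rejected when converting coal into liquid fuels.
    h_to_c_coal = 0.8            # roughly typical for bituminous coal (assumed value)
    h_to_c_diesel = 26 / 12      # n-dodecane, C12H26, a diesel-range alkane

    print(f"coal   H/C ≈ {h_to_c_coal:.2f}")
    print(f"diesel H/C ≈ {h_to_c_diesel:.2f}")   # ≈ 2.17, far above coal's ratio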

At industrial scales (i.e. thousands of barrels/day) a coal liquefaction plant typically requires multibillion-dollar capital investments.

Pyrolysis and carbonization processes

A number of carbonization processes exist. The carbonization conversion typically occurs through pyrolysis or destructive distillation. It produces condensable coal tar, oil and water vapor, non-condensable synthetic gas, and a solid residue - char.

One typical example of carbonization is the Karrick process. In this low-temperature carbonization process, coal is heated at 680 °F (360 °C) to 1,380 °F (750 °C) in the absence of air. These temperatures optimize the production of coal tars richer in lighter hydrocarbons than normal coal tar. However, any produced liquids are mostly a by-product and the main product is semi-coke - a solid and smokeless fuel.

The COED Process, developed by FMC Corporation, uses a fluidized bed for processing, in combination with increasing temperature, through four stages of pyrolysis. Heat is transferred by hot gases produced by combustion of part of the produced char. A modification of this process, the COGAS Process, involves the addition of gasification of char. The TOSCOAL Process, an analogue to the TOSCO II oil shale retorting process and Lurgi–Ruhrgas process, which is also used for the shale oil extraction, uses hot recycled solids for the heat transfer.

Liquid yields of pyrolysis and the Karrick process are generally considered too low for practical use for synthetic liquid fuel production. The resulting coal tars and oils from pyrolysis generally require further treatment before they can be usable as motor fuels; they are processed by hydrotreating to remove sulfur and nitrogen species, after which they are finally processed into liquid fuels.

In summary, the economic viability of this technology is questionable.

Hydrogenation processes

One of the main methods of direct conversion of coal to liquids by hydrogenation is the Bergius process, developed by Friedrich Bergius in 1913. In this process, dry coal is mixed with heavy oil recycled from the process. A catalyst is typically added to the mixture. The reaction occurs at between 400 °C (752 °F) and 500 °C (932 °F) and at 20 to 70 MPa hydrogen pressure. The overall reaction can be summarized as follows:

\[ n\,\mathrm{C} + (n+1)\,\mathrm{H_2} \longrightarrow \mathrm{C}_n\mathrm{H}_{2n+2} \]

After World War I several plants based on this technology were built in Germany; these plants were extensively used during World War II to supply Germany with fuel and lubricants. The Kohleoel Process, developed in Germany by Ruhrkohle and VEBA, was used in a demonstration plant with a capacity of 200 tons of lignite per day, built in Bottrop, Germany. This plant operated from 1981 to 1987. In this process, coal is mixed with a recycle solvent and an iron catalyst. After preheating and pressurizing, H2 is added. The process takes place in a tubular reactor at a pressure of 300 bar (30 MPa) and a temperature of 470 °C (880 °F). This process was also explored by Sasol in South Africa.

During the 1970s and 1980s, the Japanese companies Nippon Kokan, Sumitomo Metal Industries, and Mitsubishi Heavy Industries developed the NEDOL process. In this process, coal is mixed with a recycled solvent and a synthetic iron-based catalyst; after preheating, H2 is added. The reaction takes place in a tubular reactor at a temperature between 430 °C (810 °F) and 465 °C (870 °F) and a pressure of 150–200 bar. The produced oil is of low quality and requires intensive upgrading. The H-Coal process, developed by Hydrocarbon Research, Inc., in 1963, mixes pulverized coal with recycled liquids, hydrogen and catalyst in an ebullated-bed reactor. The advantages of this process are that dissolution and oil upgrading take place in a single reactor, that the products have a high H/C ratio, and that reaction times are fast; the main disadvantages are a high gas yield (it is essentially a thermal cracking process), high hydrogen consumption, and the restriction of the product oil to use as boiler fuel because of impurities.

The SRC-I and SRC-II (Solvent Refined Coal) processes were developed by Gulf Oil and implemented as pilot plants in the United States in the 1960s and 1970s.

The Nuclear Utility Services Corporation developed a hydrogenation process that was patented by Wilburn C. Schroeder in 1976. The process involved dried, pulverized coal mixed with roughly 1 wt% molybdenum catalyst. Hydrogenation occurred by use of high-temperature, high-pressure synthesis gas produced in a separate gasifier. The process ultimately yielded a synthetic crude product, naphtha, a limited amount of C3/C4 gas, light-to-medium-weight liquids (C5–C10) suitable for use as fuels, small amounts of NH3, and significant amounts of CO2. Other single-stage hydrogenation processes are the Exxon Donor Solvent Process, the Imhausen High-pressure Process, and the Conoco Zinc Chloride Process.

There are also a number of two-stage direct liquefaction processes; however, after the 1980s only the Catalytic Two-stage Liquefaction Process, modified from the H-Coal Process; the Liquid Solvent Extraction Process by British Coal; and the Brown Coal Liquefaction Process of Japan have been developed.

Shenhua, a Chinese coal mining company, decided in 2002 to build a direct liquefaction plant in Erdos, Inner Mongolia (Erdos CTL), with a capacity of 20 thousand barrels per day (3.2×10³ m³/d) of liquid products, including diesel oil, liquefied petroleum gas (LPG) and naphtha (petroleum ether). First tests were carried out at the end of 2008, and a second, longer test campaign started in October 2009. In 2011, Shenhua Group reported that the direct liquefaction plant had been in continuous and stable operation since November 2010, and that Shenhua had made 800 million yuan ($125.1 million) in earnings before taxes on the project in the first six months of 2011.

Chevron Corporation developed a process invented by Joel W. Rosenthal called the Chevron Coal Liquefaction Process (CCLP). It is unique due to the close coupling of the non-catalytic dissolver and the catalytic hydroprocessing unit. The oil produced had properties unique among coal oils; it was lighter and had far fewer heteroatom impurities. The process was scaled up to the 6-ton-per-day level but was never proven commercially.

Indirect conversion processes

Indirect coal liquefaction (ICL) processes operate in two stages. In the first stage, coal is converted into syngas (a purified mixture of CO and H2 gas). In the second stage, the syngas is converted into light hydrocarbons using one of three main processes: Fischer–Tropsch synthesis, methanol synthesis with subsequent conversion to gasoline or petrochemicals, and methanation. Fischer–Tropsch is the oldest of the ICL processes.
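
In simplified, textbook form (actual gasifiers process coal rather than pure carbon and involve additional side reactions), the first stage can be summarized by steam gasification followed by the water-gas shift reaction, which adjusts the H2/CO ratio of the syngas:

\[ \mathrm{C} + \mathrm{H_2O} \longrightarrow \mathrm{CO} + \mathrm{H_2} \]

\[ \mathrm{CO} + \mathrm{H_2O} \longrightarrow \mathrm{CO_2} + \mathrm{H_2} \]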

In methanol synthesis processes, syngas is converted to methanol, which is subsequently polymerized into alkanes over a zeolite catalyst. This process, known as MTG (for "Methanol To Gasoline"), was developed by Mobil in the early 1970s and is being tested at a demonstration plant by Jincheng Anthracite Mining Group (JAMG) in Shanxi, China. Based on this methanol synthesis, China has also developed a strong coal-to-chemicals industry, with outputs such as olefins, MEG, DME and aromatics.
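
In simplified stoichiometric terms (a generic textbook summary, not the proprietary reaction scheme of any licensed MTG process), the methanol route proceeds via methanol synthesis followed by dehydration to hydrocarbons:

\[ \mathrm{CO} + 2\,\mathrm{H_2} \longrightarrow \mathrm{CH_3OH} \]

\[ n\,\mathrm{CH_3OH} \longrightarrow (\mathrm{CH_2})_n + n\,\mathrm{H_2O} \]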

The methanation reaction converts syngas to substitute natural gas (SNG). The Great Plains Gasification Plant in Beulah, North Dakota, is a coal-to-SNG facility producing 160 million cubic feet per day of SNG; it has been in operation since 1984. Several coal-to-SNG plants are in operation or planned in China, South Korea and India.
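
The overall methanation reaction, in textbook stoichiometry, is:

\[ \mathrm{CO} + 3\,\mathrm{H_2} \longrightarrow \mathrm{CH_4} + \mathrm{H_2O} \]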

In another application of gasification, hydrogen extracted from synthetic gas reacts with nitrogen to form ammonia. Ammonia then reacts with carbon dioxide to produce urea.
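
The corresponding textbook reactions are ammonia synthesis and urea synthesis (shown here as overall stoichiometry; the industrial urea process actually runs through an ammonium carbamate intermediate):

\[ \mathrm{N_2} + 3\,\mathrm{H_2} \longrightarrow 2\,\mathrm{NH_3} \]

\[ 2\,\mathrm{NH_3} + \mathrm{CO_2} \longrightarrow \mathrm{CO(NH_2)_2} + \mathrm{H_2O} \]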

The above instances of commercial plants based on indirect coal liquefaction processes, as well as many others not listed here including those in planning stages and under construction, are tabulated in the Gasification Technologies Council's World Gasification Database.

Environmental considerations

Coal liquefaction processes are typically associated with significant CO2 emissions, both from the gasification process and from the generation of the process heat and electricity needed by the liquefaction reactors, thus releasing greenhouse gases that contribute to anthropogenic global warming. This is especially true if coal liquefaction is conducted without carbon capture and storage technologies. There are, however, technically feasible low-emission configurations of CTL plants.

High water consumption in the water-gas shift reaction or steam methane reforming is another adverse environmental effect.


CO2 emission control at Erdos CTL, an Inner Mongolian plant with a carbon capture and storage demonstration project, involves injecting CO2 into the saline aquifer of Erdos Basin, at a rate of 100,000 tonnes per year. As of late October 2013, an accumulated amount of 154,000 tonnes of CO2 had been injected since 2010, which reached or exceeded the design value.

In the United States, the Renewable Fuel Standard and low-carbon fuel standards such as the one enacted in the State of California reflect an increasing demand for fuels with a low carbon footprint. In addition, U.S. legislation restricts the military's use of alternative liquid fuels to those demonstrated to have life-cycle GHG emissions less than or equal to those of their conventional petroleum-based equivalents, as required by Section 526 of the Energy Independence and Security Act (EISA) of 2007.

Research and development of coal liquefaction

The United States military has an active program to promote alternative fuels use, and utilizing vast domestic U.S. coal reserves to produce fuels through coal liquefaction would have obvious economic and security advantages. But with their higher carbon footprint, fuels from coal liquefaction face the significant challenge of reducing life-cycle GHG emissions to competitive levels, which demands continued research and development of liquefaction technology to increase efficiency and reduce emissions. A number of avenues of research & development will need to be pursued, including:

  • Carbon capture and storage including enhanced oil recovery and developmental CCS methods to offset emissions from both synthesis and utilization of liquid fuels from coal,
  • Coal/biomass/natural gas feedstock blends for coal liquefaction: Utilizing carbon-neutral biomass and hydrogen-rich natural gas as co-feeds in coal liquefaction processes has significant potential for bringing fuel products' life-cycle GHG emissions into competitive ranges,
  • Hydrogen from renewables: the hydrogen demand of coal liquefaction processes might be supplied through renewable energy sources including wind, solar, and biomass, significantly reducing the emissions associated with traditional methods of hydrogen synthesis (such as steam methane reforming or char gasification), and
  • Process improvements such as intensification of the Fischer–Tropsch process, hybrid liquefaction processes, and more efficient air separation technologies needed for production of oxygen (e.g. ceramic membrane-based oxygen separation).

Since 2014, the U.S. Department of Energy and the Department of Defense have been collaborating on supporting new research and development in the area of coal liquefaction to produce military-specification liquid fuels, with an emphasis on jet fuel, which would be both cost-effective and in accordance with EISA Section 526. Projects underway in this area are described under the U.S. Department of Energy National Energy Technology Laboratory's Advanced Fuels Synthesis R&D area in the Coal and Coal-Biomass to Liquids Program.

Every year, the industry recognizes a researcher or developer in coal conversion with the World Carbon To X Award. The 2016 recipient was Mr. Jona Pillay, executive director for gasification and CTL at Jindal Steel & Power Ltd (India). The 2017 recipient was Dr. Yao Min, deputy general manager of Shenhua Ningxia Coal Group (China).

In terms of commercial development, coal conversion is experiencing a strong acceleration. Geographically, most active projects and recently commissioned operations are located in Asia, mainly in China, while U.S. projects have been delayed or canceled due to the development of shale gas and shale oil.

Inequality (mathematics)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Inequality...