Event-related functional magnetic resonance imaging (efMRI) is a magnetic resonance imaging technique used to detect changes in the BOLD (Blood Oxygen Level Dependent) hemodynamic response evoked by the neural activity associated with specific events. Within fMRI methodology,
two approaches are typically employed to present stimuli. One is the block design, in which two or more
different conditions are alternated in order to determine the
differences between them; a control condition may also be included
between the two conditions. By contrast,
event-related designs are not presented in a set sequence; the
presentation order is randomized and the time between stimuli can vary.
efMRI attempts to model the change in fMRI signal in response to neural
events associated with behavioral trials. According to D'Esposito,
"event-related fMRI has the potential to address a number of cognitive psychology questions with a degree of inferential and statistical power not previously available."
Each trial can be composed of an experimentally controlled event (such as
the presentation of a word or picture) or a participant-mediated event
(such as a motor response). Within each trial there are a number of
events, such as the presentation of a stimulus, a delay period, and a
response. If the experiment is properly set up and the different events
are timed correctly, efMRI allows researchers to observe the differences in
neural activity associated with each event.
History
Positron Emission Tomography (PET) was the most frequently used brain mapping
technique before the development of fMRI, and fMRI offers a number of
advantages over PET. According to
D’Esposito, fMRI “does not require an injection of radioisotope into participants and is otherwise noninvasive, has better spatial resolution, and has better temporal resolution."[2] The first MRI studies employed the use of “exogenous paramagnetic tracers to map changes in cerebral blood volume”,[3][4]
which allowed for the assessment of brain activity over several
minutes. This changed with two advances in MRI. First, faster MRI
techniques running on scanners of up to 1.5 Tesla, capable of producing
two-dimensional images, became available by the end of the 1980s. Second, endogenous
contrast mechanisms were discovered: one, by Detre, Koretsky, and colleagues,
based on the net longitudinal magnetization within an organ, and a
“second based on changes in the magnetic susceptibility induced by
changing net tissue deoxyhemoglobin content”,[3]
which was labeled BOLD contrast by Seiji Ogawa. These discoveries
served as inspiration for future brain mapping advancements. This
allowed researchers to develop more complex types of experiments, going
beyond observing the effects of single types of trials. When fMRI was
developed, one of its major limitations was the inability to randomize
trials; event-related fMRI addressed this problem.[2] Cognitive subtraction was also an issue: it tried to correlate
cognitive-behavioral differences between tasks with brain activity by
pairing two tasks that are assumed to be matched perfectly for every
sensory, motor, and cognitive process except the one of interest.[2]
Next, a push to improve the temporal resolution of fMRI studies
led to the development of event-related designs, which, according to
Peterson, were inherited from ERP research in electrophysiology. However, it
was discovered that this averaging approach did not transfer well to the
hemodynamic response, because the responses from successive trials could overlap. As a
result, random jittering of the events was introduced, meaning that
the timing between trials was varied and randomized so that
overlapping activation signals from successive events could be separated.
Hemodynamic response
In order to function, neurons require energy, which is supplied by blood
flow. Although the relationship is not completely understood, the hemodynamic response
has been correlated with neuronal activity: as the activity
level increases, the amount of blood supplied to the active neurons increases. This
response takes several seconds to develop completely; accordingly, fMRI
has limited temporal resolution. The hemodynamic response is the basis for the BOLD (Blood Oxygen Level Dependent) contrast in fMRI.[5]
The hemodynamic response occurs within seconds of the presented
stimuli, but it is essential to space out the events in order to ensure
that the response being measured is from the event that was presented
and not from a prior event. Presenting stimuli in a more rapid sequence
allows experimenters to run more trials and gather more data, but this
is limited by the slow time course of the hemodynamic response, which generally
must be allowed to return to baseline before the presentation of another
stimulus. According to Burock “as the presentation rate increases in the
random event related design, the variance in the signal increases
thereby increasing the transient information and ability to estimate the
underlying hemodynamic response”.[3]
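The slow, stereotyped shape of this response is commonly modeled with a "canonical" double-gamma hemodynamic response function. The following sketch uses widely used default parameter values that are assumptions for illustration, not figures taken from the sources cited here; it shows how such a model peaks several seconds after an event and then slowly returns toward baseline.

```python
# Minimal sketch of a canonical double-gamma HRF; parameter values are common
# defaults assumed for illustration, not taken from the sources cited above.
import numpy as np
from scipy.stats import gamma

def canonical_hrf(t, peak_shape=6.0, undershoot_shape=16.0, undershoot_ratio=1/6.0):
    """Positive peak a few seconds after the event, followed by a small undershoot."""
    response = gamma.pdf(t, peak_shape) - undershoot_ratio * gamma.pdf(t, undershoot_shape)
    return response / response.max()

t = np.arange(0.0, 32.0, 0.5)          # seconds after stimulus onset
hrf = canonical_hrf(t)
print(f"Modeled response peaks about {t[np.argmax(hrf)]:.1f} s after the event")
```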
Rapid event-related efMRI
In a typical efMRI experiment, after every trial the hemodynamic response is allowed
to return to baseline. In rapid event-related fMRI, trials are
randomized and the hemodynamic response function (HRF) is deconvolved afterwards. In order for this to
be possible, every possible combination of trial sequences must be used
and the inter-trial intervals jittered, so that the time between
trials is not always the same.
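To see why jittering matters, consider a minimal simulation. This is a sketch under assumed numbers, not any particular published pipeline: with varied inter-trial intervals, a finite-impulse-response (FIR) design matrix is well conditioned, and ordinary least squares can separate the overlapping responses of a rapid design.

```python
# Sketch of jittered onsets plus FIR deconvolution; all numbers are illustrative.
import numpy as np

rng = np.random.default_rng(42)
n_scans, n_lags = 400, 20                    # volumes at an assumed TR of 1 s
itis = rng.uniform(3.0, 9.0, size=60)        # jittered inter-trial intervals (s)
onsets = np.cumsum(itis).astype(int)
onsets = onsets[onsets < n_scans - n_lags]

# "True" single-event response (roughly HRF-shaped), used only to simulate data.
true_response = np.exp(-0.5 * ((np.arange(n_lags) - 5.5) / 2.0) ** 2)

# FIR design matrix: one column per post-stimulus time point.
X = np.zeros((n_scans, n_lags))
for onset in onsets:
    for lag in range(n_lags):
        X[onset + lag, lag] += 1.0

y = X @ true_response + rng.normal(0, 0.2, n_scans)   # overlapping responses + noise
estimated, *_ = np.linalg.lstsq(X, y, rcond=None)     # deconvolved per-event response
print(np.round(estimated, 2))
```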
Advantages of efMRI
Ability to randomize and mix different types of events, which reduces the influence of one event on another and of an individual's cognitive state, and keeps events from becoming predictable.
Events can be organized into categories after the experiment based on the subject's behavior.
The occurrence of events can be defined by the subject.
Sometimes a blocked design cannot be applied to an event.
Treating stimuli, even when blocked, as separate events can potentially result in a more accurate model.
Chee argues that event related designs provide a number of advantages
in language-related tasks, including the ability to separate correct
and incorrect responses, and show task dependent variations in temporal
response profiles.[6]
Disadvantages of efMRI
More complex design and analysis.
Need to increase the number of trials because the MR signal is small.
Some events are better blocked.
Timing issues with sampling (fix: random jitter; varying the timing of stimulus presentation allows a mean hemodynamic response to be calculated at the end).
Easier to identify artifacts arising from non-physiologic signal fluctuations.[1][6]
Statistical analysis
In fMRI data, it is assumed that there is a linear relationship between
neural stimulation and the BOLD response. The use of general linear models (GLMs)
allows the mean hemodynamic response within participants to be estimated.
Statistical Parametric Mapping is used to produce a design matrix, which
includes all of the different response shapes produced during the
event. For more information on this, see Friston (1997).[7]
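As a rough illustration of the design-matrix idea (a generic GLM sketch rather than the exact SPM procedure; the onset times, amplitudes, and noise level are made-up assumptions), each event type's onsets are convolved with a canonical HRF to form a regressor, and the fitted weights estimate the response amplitude of each event type.

```python
# Generic GLM sketch: two event-type regressors plus a baseline column.
import numpy as np
from scipy.stats import gamma

tr, n_scans = 2.0, 200
t = np.arange(0.0, 30.0, tr)
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0        # canonical double-gamma shape
hrf /= hrf.max()

def regressor(onsets_s):
    """Convolve a stick function of event onsets (in seconds) with the HRF."""
    sticks = np.zeros(n_scans)
    sticks[(np.array(onsets_s) / tr).astype(int)] = 1.0
    return np.convolve(sticks, hrf)[:n_scans]

# Assumed onsets for two event types (e.g. stimulus events vs. response events).
X = np.column_stack([regressor([10, 70, 130, 250, 310]),
                     regressor([30, 100, 180, 280, 350]),
                     np.ones(n_scans)])                # constant (baseline) column

rng = np.random.default_rng(1)
y = X @ np.array([2.0, 0.5, 100.0]) + rng.normal(0, 0.5, n_scans)  # simulated voxel signal
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(betas, 2))    # amplitude estimates for the two event types and baseline
```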
Applications
Visual Priming and Object Recognition
Examining differences between parts of a task
Changes over time
Memory Research - Working Memory using cognitive subtraction
A
new VR system from MIT’s Computer Science and Artificial Intelligence
Laboratory could make it easy for factory workers to telecommute.
(credit: Jason Dorfman, MIT CSAIL)
CSAIL’s “Homunculus Model” system (the classic notion of a small
human sitting inside the brain and controlling the actions of the body)
embeds you in a VR control room with multiple sensor displays, making it
feel like you’re inside the robot’s head. By using gestures, you can
control the robot’s matching movements to perform various tasks.
The system can be connected either via a wired local network or via a
wireless network connection over the Internet. (The team demonstrated
that the system could pilot a robot from hundreds of miles away, testing
it on a hotel’s wireless network in Washington, DC to control Baxter at
MIT.)
According to CSAIL postdoctoral associate Jeffrey Lipton, lead author on an open-access arXiv
paper about the system (presented this week at the IEEE/RSJ
International Conference on Intelligent Robots and Systems (IROS) in
Vancouver), “By teleoperating robots from home, blue-collar workers
would be able to telecommute and benefit from the IT revolution just as
white-collar workers do now.”
Jobs for video-gamers too
The researchers imagine that such a system could even help employ
jobless video-gamers by “game-ifying” manufacturing positions. (Users
with gaming experience had the most ease with the system, the
researchers found in tests.)
Homunculus
Model system. A Baxter robot (left) is outfitted with a stereo camera
rig and various end-effector devices. A virtual control room (user’s
view, center), generated on an Oculus Rift CV1 headset (right), allows
the user to feel like they are inside Baxter’s head while operating it.
Using VR device controllers, including Razer Hydra hand trackers used
for inputs (right), users can interact with controls that appear in the
virtual space — opening and closing the hand grippers to pick up, move,
and retrieve items. A user can plan movements based on the distance
between the arm’s location marker and their hand while looking at the
live display of the arm. (credit: Jeffrey I. Lipton et al./arXiv).
To make these movements possible, the human’s space is mapped into
the virtual space, and the virtual space is then mapped into the robot
space to provide a sense of co-location.
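One way to picture this chain of mappings is as a composition of rigid-body transforms. The sketch below is only a hedged illustration with made-up frame names and offsets, not code from the CSAIL system.

```python
# Hypothetical sketch: a point tracked in the human's space is re-expressed in the
# virtual control-room frame and then in the robot's base frame ("co-location").
import numpy as np

def make_transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Assumed example transforms (identity rotations, offsets in meters).
T_virtual_from_human = make_transform(np.eye(3), np.array([0.0, 0.0, 1.2]))
T_robot_from_virtual = make_transform(np.eye(3), np.array([0.5, 0.0, -0.3]))

def human_to_robot(point_in_human_frame):
    """Map a 3D point from the human's space through virtual space into robot space."""
    p = np.append(point_in_human_frame, 1.0)           # homogeneous coordinates
    return (T_robot_from_virtual @ T_virtual_from_human @ p)[:3]

print(human_to_robot(np.array([0.2, 0.0, 0.9])))       # e.g. a tracked hand position
```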
The team demonstrated the Homunculus Model system using the Baxter
humanoid robot from Rethink Robotics, but the approach could work on
other robot platforms, the researchers said.
In tests involving pick and place, assembly, and manufacturing tasks
(such as “pick an item and stack it for assembly”) comparing the
Homunculus Model system with existing state-of-the-art automated
remote-control, CSAIL’s Homunculus Model system had a 100% success rate
compared with a 66% success rate for state-of-the-art automated systems.
The CSAIL system was also better at grasping objects 95 percent of the
time and 57 percent faster at doing tasks.*
“This contribution represents a major milestone in the effort to
connect the user with the robot’s space in an intuitive, natural, and
effective manner,” says Oussama Khatib, a computer science professor at
Stanford University who was not involved in the paper.
The team plans to eventually focus on making the system more
scalable, with many users and different types of robots that are
compatible with current automation technologies.
* The Homunculus Model system solves a delay problem with
existing systems, which perform 3D reconstruction on a GPU or CPU and thereby introduce delay. Instead, 3D
reconstruction from the stereo HD cameras is done by the human’s
visual cortex, so the user constantly receives visual feedback from the
virtual world with minimal latency (delay). This also avoids user
fatigue and nausea caused by motion sickness (known as simulator
sickness) generated by “unexpected incongruities, such as delays or
relative motions, between proprioception and vision [that] can lead to
the nausea,” the researchers explain in the paper.
MITCSAIL | Operating Robots with Virtual Reality
Abstract of Baxter’s Homunculus: Virtual Reality Spaces for Teleoperation in Manufacturing
Expensive specialized systems have hampered development of
telerobotic systems for manufacturing systems. In this paper we
demonstrate a telerobotic system which can reduce the cost of such
systems by leveraging commercial virtual reality (VR) technology and
integrating it with existing robotics control software. The system runs
on a commercial gaming engine using off-the-shelf VR hardware. This
system can be deployed on multiple network architectures, from a wired
local network to a wireless network connection over the Internet. The
system is based on the homunculus model of mind, wherein we embed the
user in a virtual reality control room. The control room allows for
multiple sensor displays and dynamic mapping between the user and robot,
and does not require the production of duals for the robot or its
environment. The control room is mapped to a space inside the robot to
provide a sense of co-location within the robot. We compared our system
with state of the art automation algorithms for assembly tasks, showing a
100% success rate for our system compared with a 66% success rate for
automated systems. We demonstrate that our system can be used for pick
and place, assembly, and manufacturing tasks.
The misinformation effect happens when a person's recall of episodic memories becomes less accurate because of post-event information.
For example, in a study published in 1994, subjects were initially
shown one of two different series of slides that depicted a college
student at the university bookstore, with different objects of the same
type changed in some slides. One version of the slides would, for
example, show a screwdriver while the other would show a wrench, and the
audio narrative accompanying the slides would only refer to the object
as a "tool". In the second phase, subjects would read a narrative
description of the events in the slides, except this time a specific
tool was named, which would be the incorrect tool half the time.
Finally, in the third phase, subjects had to list five examples of
specific types of objects, such as tools, but were told to only list
examples which they had not seen in the slides. Subjects who had
read an incorrect narrative were far less likely to list the written
object (which they hadn't actually seen) than the control subjects (28%
vs. 43%), and were far more likely to incorrectly list the item which
they had actually seen (33% vs. 26%).
The misinformation effect is a prime example of retroactive interference,
which occurs when information presented later interferes with the
ability to retain previously encoded information. Essentially, the new
information that a person receives works backward in time to distort
memory of the original event.[3] The misinformation effect has been studied since the mid-1970s. Elizabeth Loftus is one of the most influential researchers in the field. It reflects two of the cardinal sins of memory: suggestibility, the influence of others' expectations on our memory; and misattribution,
information attributed to an incorrect source. Research on the
misinformation effect has uncovered concerns about the permanence and
reliability of memory.[4]
Visual display of retroactive memory interference
Basic methods
A recreation of the type of image used by Loftus et al. in their 1978 work. Two versions of the same image, one showing a "stop" sign and the other a "yield" sign.
Loftus,
Miller, and Burns (1978) conducted the original misinformation effect
study. Participants were shown a series of slides, one of which featured
a car stopping in front of a stop sign.
After viewing the slides, participants read a description of what they
saw. Some of the participants were given descriptions that contained
misinformation, which stated that the car stopped at a yield sign. Following the slides and the reading of the description, participants
were tested on what they saw. The results revealed that participants who
were exposed to such misinformation were more likely to report seeing a
yield sign than participants who were not misinformed.[5]
Similar methods continue to be used in misinformation effect
studies. Today, standard methods involve showing subjects an event,
usually in the form of a slideshow or video. The event is followed by a
time delay and introduction of post-event information. Finally,
participants are retested on their memory of the original event.[6]
This original study by Loftus et al. paved the way for multiple
replications of the effect in order to test things like what specific
processes cause the effect to occur in the first place and how individual differences influence susceptibility to the effect.
Neurological causes
Functional magnetic resonance imaging
(fMRI) research from 2010 pointed to certain brain areas which were especially
active when false memories were retrieved. Participants studied photos
during an fMRI scan. Later, they viewed sentences describing the photographs,
some of which contained information conflicting with the photographs,
i.e. misinformation. One day later, participants returned for a surprise
item memory recognition test on the content of the photographs. Results
showed that some participants created false memories, reporting the
verbal misinformation conflicting with the photographs.[7] During the original event phase, increased activity in the left fusiform gyrus
and right temporal/occipital cortex was found, which may have reflected
attention to visual detail; this activity was associated with later accurate memory for
the critical item(s) and thus with resistance to the effects of
later misinformation.[7] Retrieval of true memories was associated with greater reactivation of sensory-specific cortices, for example, the occipital cortex for vision.[7] Electroencephalography
research on this issue also suggests that the retrieval of false
memories is associated with reduced attention and recollection related
processing relative to true memories.[8]
Susceptibility
Not everyone is equally susceptible to the
misinformation effect. Individual traits and qualities can either
increase or decrease one's susceptibility to recalling misinformation.[5] Such traits and qualities include age, working memory capacity, personality traits, and imagery abilities.
Age
Several studies have focused on the influence of the misinformation effect on various age groups.[9] Young children are more susceptible than older children and adults to the misinformation effect.[9] Additionally, elderly adults are more susceptible than younger adults.[9][10]
Working memory capacity
Individuals
with greater working memory capacity are better able to establish a
more coherent image of an original event. Participants performed a dual
task: simultaneously remembering a word list and judging the accuracy of
arithmetic statements. Participants who were more accurate on the dual
task were less susceptible to the misinformation effect. This, in turn, allowed them to reject the misinformation.[5][11]
Personality traits
The Myers-Briggs Type Indicator
is one type of test used to assess participant personalities. Individuals were presented with the same misinformation procedure as
that used in the original Loftus et al. study in 1978 (see
above). The results were evaluated with regard to participants' personality type.
Introvert-intuitive participants were more likely to accept both
accurate and inaccurate postevent information than extrovert-sensate
participants. Therefore, it was speculated that introverts are more
likely to have lower confidence in their memory and are more likely to
accept misinformation.[5][12] Individual personality characteristics, including empathy, absorption and self-monitoring, have also been linked to greater susceptibility.[9]
Imagery abilities
The
misinformation effect has been examined in individuals with varying
imagery abilities. Participants viewed a filmed event followed by
descriptive statements of the events in a traditional three-stage
misinformation paradigm. Participants with higher imagery abilities were
more susceptible to the misinformation effect than those with lower
abilities. The psychologists argued that participants with higher
imagery abilities were more likely to form vivid images of the
misleading information at encoding or at retrieval, therefore increasing
susceptibility.[5][13]
Influential factors
Time
Individuals
may not be actively rehearsing the details of a given event after
encoding. The longer the delay between the presentation of the original
event and post-event information, the more likely it is that individuals
will incorporate misinformation into their final reports.[6]
Furthermore, more time to study the original event leads to lower
susceptibility to the misinformation effect, due to increased rehearsal
time.[6]
Elizabeth Loftus coined the term discrepancy detection principle for
her observation that a person's recollections are more likely to change
if they do not immediately detect the discrepancies between
misinformation and the original event.[9][14] At times people recognize a discrepancy between their memory and what they are being told.[15]
People might recollect, "I thought I saw a stop sign, but the new
information mentions a yield sign, I guess I must be wrong, it was a
yield sign."[15] Although the individual recognizes the information as conflicting with their own memories they still adopt it as true.[9] If these discrepancies are not immediately detected they are more likely to be incorporated into memory.[9]
Source reliability
The
more reliable the source of the post-event information, the more likely
it is that participants will adopt the information into their memory.[6]
For example, Dodd and Bradshaw (1980) used slides of a car accident
for their original event. They then had misinformation delivered to half
of the participants by an unreliable source: a lawyer representing the
driver. The remaining participants were presented with misinformation,
but given no indication of the source. The misinformation was rejected
by those who received information from the unreliable source and adopted
by the other group of subjects.[6]
Discussion and rehearsal
The
question of whether discussion is detrimental to memories also exists
when considering what factors influence the misinformation effect. One
particular study examined the effects of discussion in groups on
recognition. The experimenters used three different conditions:
discussion in groups with a confederate providing misinformation,
discussion in groups with no confederate, and a no-discussion condition.
They found that participants in the confederate condition adopted the
misinformation provided by the confederate. However, there was no
difference between the no-confederate and no-discussion conditions,
suggesting that discussion (without misinformation) is neither harmful nor
beneficial to memory accuracy.[16]
In an additional study, Karns et al. (2009) found that collaborative
pairs showed a smaller misinformation effect than individuals. It
appeared as though collaborative recall allowed witnesses to dismiss
misinformation generated by an inaccurate narrative.[17]
In a 2011 study, Paterson et al. studied "memory conformity", showing
students two different videos of a burglary. It was found that if
witnesses who had watched the two different videos talked with one
another, they would then claim to remember details shown in the video of
the other witness and not their own. They continued to claim the
veracity of this memory, despite warnings of misinformation.[18]
State of mind
Various inhibited states of mind such as drunkenness and hypnosis can increase misinformation effects.[9]
Assefi and Garry (2002) found that participants who believed they had
consumed alcohol showed results of the misinformation effect on recall
tasks.[19] The same was true of participants under the influence of hypnosis.[20]
Other
Most obviously, leading questions and narrative accounts
can change episodic memories and thereby affect witnesses' responses to
questions about the original event. Additionally, witnesses are more
likely to be swayed by misinformation when they are suffering from
alcohol withdrawal[17][21] or sleep deprivation,[17][22] when interviewers are firm as opposed to friendly,[17][23] and when participants experience repeated questioning about the event.[17][24]
Arousal after learning
Arousal
induced after learning reduces source confusion, allowing participants
to better retrieve accurate details and reject misinformation. In a
study of how to reduce the misinformation effect, participants viewed
four short film clips, each followed by a retention test, which for some
participants included misinformation. Afterward, participants viewed
another film clip that was either arousing or neutral. One week later,
the arousal group recognized significantly more details and endorsed
significantly fewer misinformation items than the neutral group.[25]
Anticipation
Educating
participants about the misinformation effect can enable them to resist
its influence. However, if warnings are given after the presentation of
misinformation, they do not aid participants in discriminating between
original and post-event information.[9]
Psychotropic placebos
Research published in 2008 showed that placebos enhanced memory performance.
Participants were given a phoney "cognitive enhancing drug" called R273.
When they participated in a misinformation effect experiment, people
who took R273 were more resistant to the effects of misleading postevent
information.[26] As a result of taking R273, people used stricter source monitoring and attributed their behavior to the placebo and not to themselves.[26]
Implications
Implications of this effect on long-term memories are as follows:
Variability
Some reject the notion that misinformation always causes impairment of original memories.[9] Modified tests can be used to examine the issue of long-term memory impairment.[9] In one example of such a test (1985), participants were shown a burglar with a hammer.[27]
Standard post-event information claimed the weapon was a screwdriver,
and participants were likely to choose the screwdriver rather than the
hammer as correct. In the modified test condition, postevent information
was not limited to one item; instead, participants had the option of the
hammer and another tool (a wrench, for example). In this condition,
participants generally chose the hammer, showing that there was no
memory impairment.[27]
Rich false memories
Rich
false memories are researchers' attempts to plant entire memories of
events which never happened in participants' memories. Examples of such
memories include fabricated stories about participants getting lost in
the supermarket or shopping mall as children. Researchers often rely on
suggestive interviews and the power of suggestion from family members,
known as “familial informant false narrative procedure.”[9] Around 30% of subjects have gone on to produce either partial or complete false memories in these studies.[9]
There is a concern that real memories and experiences may be surfacing
as a result of prodding and interviews. To deal with this concern,
many researchers switched to implausible memory scenarios.[9]
Daily applications
The misinformation effect can be observed in many situations. For example,
after witnessing a crime or accident there may be opportunities for
witnesses to interact and share information. Late-arriving bystanders or
members of the media may ask witnesses to recall the event before law
enforcement or legal representatives have the opportunity to interview
them.[17]
Collaborative recall may lead to a more accurate account of what
happened, as opposed to individual responses that may contain more
untruths after the fact.[17]
In addition, while remembering small details may not seem
important, they can matter tremendously in certain situations. A jury's
perception of a defendant's guilt or innocence could depend on such a
detail. If a witness remembers a moustache or a weapon when there was
none, an innocent person may be wrongly convicted.
(Left)
Schematic of conventional von Neumann computer architecture, where the
memory and computing units are physically separated. To perform a
computational operation and to store the result in the same memory
location, data is shuttled back and forth between the memory and the
processing unit. (Right) An alternative architecture where the
computational operation is performed in the same memory location.
(credit: IBM Research)
IBM Research announced Tuesday (Oct. 24, 2017) that its scientists have developed the first “in-memory computing”
or “computational memory” computer system architecture, which is
expected to yield 200x improvements in computer speed and energy
efficiency — enabling ultra-dense, low-power, massively parallel
computing systems.
Their concept is to use one device (such as phase change memory or PCM*) for both storing and processing information. That design would replace the conventional “von Neumann”
computer architecture, used in standard desktop computers, laptops, and
cellphones, which splits computation and memory into two different
devices. That requires moving data back and forth between memory and the
computing unit, making them slower and less energy-efficient.
The
researchers used PCM devices made from a germanium antimony telluride
alloy, which is stacked and sandwiched between two electrodes. When the
scientists apply a tiny electric current to the material, they heat it,
which alters its state from amorphous (with a disordered atomic
arrangement) to crystalline (with an ordered atomic configuration). The
IBM researchers have used the crystallization dynamics to perform
computation in memory. (credit: IBM Research)
Especially useful in AI applications
The researchers believe this new prototype technology will enable
ultra-dense, low-power, and massively parallel computing systems that
are especially useful for AI applications. The researchers tested the
new architecture using an unsupervised machine-learning algorithm
running on one million phase change memory (PCM) devices, successfully finding temporal correlations in unknown data streams.
“This is an important step forward in our research of the physics of AI, which explores new hardware materials, devices and architectures,” says Evangelos Eleftheriou, PhD, an IBM Fellow and co-author of an open-access paper in the peer-reviewed journal Nature Communications. “As the CMOS
scaling laws break down because of technological limits, a radical
departure from the processor-memory dichotomy is needed to circumvent
the limitations of today’s computers.”
“Memory has so far been viewed as a place where we merely store information,” said Abu Sebastian,
PhD, exploratory memory and cognitive technologies scientist, IBM
Research, and lead author of the paper. “But in this work, we conclusively
show how we can exploit the physics of these memory devices to also
perform a rather high-level computational primitive. The result of the
computation is also stored in the memory devices, and in this sense the
concept is loosely inspired by how the brain computes.” Sebastian also leads a European Research Council funded project on this topic.
* To demonstrate the technology, the authors chose two time-based
examples and compared their results with traditional machine-learning
methods such as k-means clustering:
Simulated Data: one million
binary (0 or 1) random processes organized on a 2D grid based on a 1000 x
1000 pixel, black and white, profile drawing of famed British
mathematician Alan Turing. The IBM scientists then made the pixels blink
on and off with the same rate, but the black pixels turned on and off
in a weakly correlated manner. This means that when a black pixel
blinks, there is a slightly higher probability that another black pixel
will also blink. The random processes were assigned to a million PCM
devices, and a simple learning algorithm was implemented. With each
blink, the PCM array learned, and the PCM devices corresponding to the
correlated processes went to a high conductance state. In this way, the
conductance map of the PCM devices recreates the drawing of Alan Turing.
Real-World Data: actual rainfall data,
collected over a period of six months from 270 weather stations across
the USA in one-hour intervals. If it rained within the hour, the interval was
labelled “1”, and if it didn’t, “0”. Classical k-means clustering and the
in-memory computing approach agreed on the classification of 245 out of
the 270 weather stations. In-memory computing classified 12 stations as
uncorrelated that had been marked correlated by the k-means clustering
approach. Similarly, the in-memory computing approach classified 13
stations as correlated that had been marked uncorrelated by k-means
clustering.
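A loose software analogue of the blinking-pixel example can make the idea concrete. This is only an illustrative sketch under assumed parameters, not IBM's algorithm or a model of PCM hardware: each simulated "device" accumulates more state when its process fires at the same time as many others, so the weakly correlated processes end up with noticeably higher accumulated values.

```python
# Illustrative sketch of correlation detection via an accumulated, conductance-like
# score; the sizes, rates, and update rule are assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(0)
n_proc, n_steps, p = 1000, 2000, 0.05      # processes, time steps, baseline rate

# The first 100 processes are weakly coupled to a shared driver; the rest are independent.
n_corr = 100
common = rng.random(n_steps) < p
x = rng.random((n_proc, n_steps)) < p
x[:n_corr] |= (rng.random((n_corr, n_steps)) < 0.5) & common

conductance = np.zeros(n_proc)
for k in range(n_steps):
    active = x[:, k]
    drive = active.sum() / n_proc          # collective activity at this time step
    conductance[active] += drive           # active "devices" are programmed harder

threshold = conductance.mean() + 2 * conductance.std()
detected = np.flatnonzero(conductance > threshold)
print(f"{detected.size} processes flagged as correlated (about {n_corr} expected)")
```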
Abstract of Temporal correlation detection using computational phase-change memory
Conventional computers based on the von Neumann architecture perform
computation by repeatedly transferring data between their physically
separated processing and memory units. As computation becomes
increasingly data centric and the scalability limits in terms of
performance and power are being reached, alternative computing paradigms
with collocated computation and storage are actively being sought. A
fascinating such approach is that of computational memory where the
physics of nanoscale memory devices are used to perform certain
computational tasks within the memory unit in a non-von Neumann manner.
We present an experimental demonstration using one million phase change
memory devices organized to perform a high-level computational primitive
by exploiting the crystallization dynamics. Its result is imprinted in
the conductance states of the memory devices. The results of using such a
computational memory for processing real-world data sets show that this
co-existence of computation and storage at the nanometer scale could
enable ultra-dense, low-power, and massively-parallel computing systems.
(DJS): Climate change is causing wildfires to increase, right? Wrong again.
Across the grasslands of Asia, the tropical forests
of South America, and the savannas of Africa, shifting livelihoods are
leading to a significant decline in burned area. Using NASA satellites
to detect fires and burn scars from space, researchers have found that
an ongoing transition from nomadic cultures to settled lifestyles and
intensifying agriculture has led to a steep drop in the use of fire for
land clearing and an overall drop in natural and human-caused fires
worldwide.
Globally, the total acreage burned by fires declined 24 percent between 1998 and 2015, according to a new paper published in Science.
Scientists determined that the decline in burned area was greatest in
savannas and grasslands, where fires are essential for maintaining
healthy ecosystems and habitat conservation.
The map above, based on data from the international research team,
shows annual trends in burned area over the study period. Blues
represent areas where the trend was toward less burning, whether natural
or human-caused, while red areas had more burning. The line plot shows
the annual fluctuations in global burned area and the overall downward
trend. The research team, led by Niels Andela of NASA’s Goddard Space
Flight Center, analyzed fire data derived from the Moderate Resolution
Imaging Spectroradiometer (MODIS) instruments on NASA’s Terra and Aqua
satellites. They then compared these data sets with regional and global
trends in agriculture and socio-economic development.
Across Africa, fires collectively burned an area about half the size
of the continental United States every year. In traditional savanna
cultures, people often set fires to keep grazing lands productive and
free of shrubs and trees. But as many of these communities have shifted
to cultivating permanent fields and building more houses, roads, and
villages, the use of fire has declined. As this economic development
continues, the landscape becomes more fragmented and communities then
enact legislation to control fires. This leads the burned area to
decline even more.
By 2015, savanna fires in Africa had declined by 700,000 square
kilometers (270,000 square miles)—an area the size of Texas. “When land
use intensifies on savannas, fire is used less and less as a tool,”
Andela said. “As soon as people invest in houses, crops, and livestock,
they don’t want these fires close by anymore. The way of doing
agriculture changes, the practices change, and fire disappears from the
grassland landscape.”
A slightly different pattern occurs in tropical forests and other
humid regions near the equator. Fire rarely occurs naturally in these
forests; but as humans settle an area, they often use fire to clear land
for cropland and pastures. As more people move into these areas and
increase the investments in agriculture, they set fewer fires and the
burned area declines again.
The changes in savanna, grassland, and tropical forest fire patterns
are so large that they have so far offset some of the increased risk of
fire caused by global warming, said Doug Morton, a forest scientist at
NASA Goddard and a co-author of the study. The impact of a warming and
drying climate is more obvious at higher latitudes, where fire has
increased in Canada and the American West. Regions of China, India,
Brazil, and southern Africa also showed increases in burned area.
“Climate change has increased fire risk in many regions, but
satellite burned area data show that human activity has effectively
counterbalanced that climate risk, especially across the global
tropics,” Morton said. “We’ve seen a substantial global decline over the
satellite record, and the loss of fire has some really important
implications for the Earth system.”
Fewer and smaller fires on the savanna mean that there are more trees
and shrubs instead of open grasslands. This is a significant change in
habitat for the region’s iconic mammals like elephants, rhinoceroses,
and lions. “Humans are interrupting the ancient, natural cycle of
burning and regrowth in these areas,” said senior author Jim Randerson
of the University of California, Irvine. “Fire had been instrumental for
millennia in maintaining healthy savannas, keeping shrubs and trees at
bay and eliminating dead vegetation.”
There are benefits to fewer fires as well. Regions with less fire saw
a decrease in carbon monoxide emissions and an improvement in air
quality during fire season. With less fire, savanna vegetation is
increasing—taking up more carbon dioxide from the atmosphere.
But the decline in burned area from human activity raises some tricky
questions. “For fire-dependent ecosystems like savannas,” Morton said,
“the challenge is to balance the need for frequent burning to maintain
habitat for large mammals and to maintain biodiversity while protecting
people’s property, air quality, and agriculture.”
NASA Earth Observatory images by Joshua Stevens, using GFED4s data courtesy of Doug Morton/NASA GSFC. Story by Kate Ramsayer, NASA’s Goddard Space Flight Center.
A wildfire or wildland fire is a fire in an area of combustible vegetation that occurs in the countryside or rural area. Depending on the type of vegetation where it occurs, a wildfire can also be classified more specifically as a brush fire, bush fire, desert fire, forest fire, grass fire, hill fire, peat fire, vegetation fire, and veld fire.
Fossil charcoal indicates that wildfires began soon after the appearance of terrestrial plants 420 million years ago.[3]
Wildfire’s occurrence throughout the history of terrestrial life
invites conjecture that fire must have had pronounced evolutionary
effects on most ecosystems' flora and fauna.[4]
Earth is an intrinsically flammable planet owing to its cover of
carbon-rich vegetation, seasonally dry climates, atmospheric oxygen, and
widespread lightning and volcanic ignitions.[4]
Wildfires can be characterized in terms of the cause of ignition,
their physical properties, the combustible material present, and the
effect of weather on the fire.[5]
Wildfires can cause damage to property and human life, but they have
many beneficial effects on native vegetation, animals, and ecosystems
that have evolved with fire.[6][7] High-severity wildfire creates complex early seral forest
habitat (also called “snag forest habitat”), which often has higher
species richness and diversity than unburned old forest. Many plant
species depend on the effects of fire for growth and reproduction.[8] However, wildfire in ecosystems where wildfire is uncommon or where
non-native vegetation has encroached may have negative ecological
effects.[5] Wildfire behaviour and severity result from the combination of factors such as available fuels, physical setting, and weather.[9][10][11]
Analyses of historical meteorological data and national fire records in
western North America show the primacy of climate in driving large
regional fires via wet periods that create substantial fuels or drought
and warming that extend conducive fire weather.[12]
Strategies of wildfire prevention, detection, and suppression have varied over the years.[13] One common and inexpensive technique is controlled burning: permitting or even igniting smaller fires to minimize the amount of flammable material available for a potential wildfire.[14][15]
Vegetation may be burned periodically to maintain high species
diversity and frequent burning of surface fuels limits fuel
accumulation.[16][17] Wildland fire use is the cheapest and most ecologically appropriate policy for many forests.[18]
Fuels may also be removed by logging, but fuels treatments and thinning
have no effect on severe fire behavior under extreme weather
conditions.[19]
Wildfire itself is reportedly "the most effective treatment for
reducing a fire's rate of spread, fireline intensity, flame length, and
heat per unit of area" according to Jan van Wagtendonk, a biologist at
the Yosemite Field Station.[20] Building codes in fire-prone areas typically require that structures be built of flame-resistant materials and a defensible space be maintained by clearing flammable materials within a prescribed distance from the structure.[21][22]
The most common direct human causes of wildfire ignition include arson, discarded cigarettes, power-line arcs (as detected by arc mapping), and sparks from equipment.[25][26] Ignition of wildland fires via contact with hot rifle-bullet fragments is also possible under the right conditions.[27] Wildfires can also be started in communities experiencing shifting cultivation, where land is cleared quickly and farmed until the soil loses fertility, and slash and burn clearing.[28] Forested areas cleared by logging encourage the dominance of flammable grasses, and abandoned logging roads overgrown by vegetation may act as fire corridors. Annual grassland fires in southern Vietnam stem in part from the destruction of forested areas by US military herbicides, explosives, and mechanical land-clearing and -burning operations during the Vietnam War.[29]
The most common cause of wildfires varies throughout the world.
In Canada and northwest China, lightning operates as the major source of
ignition. In other parts of the world, human involvement is a major
contributor. In Africa, Central America, Fiji, Mexico, New Zealand,
South America, and Southeast Asia, wildfires can be attributed to human
activities such as agriculture, animal husbandry, and land-conversion burning. In China and in the Mediterranean Basin, human carelessness is a major cause of wildfires.[30][31]
In the United States and Australia, the source of wildfires can be
traced both to lightning strikes and to human activities (such as
machinery sparks, cast-away cigarette butts, or arson).[32][33]Coal seam fires burn in the thousands around the world, such as those in Burning Mountain, New South Wales; Centralia, Pennsylvania; and several coal-sustained fires in China. They can also flare up unexpectedly and ignite nearby flammable material.[34]
Spread
A surface fire in the western desert of Utah, U.S.A.
Charred landscape following a crown fire in the North Cascades, U.S.A.
The spread of wildfires varies based on the flammable material
present, its vertical arrangement and moisture content, and weather
conditions.[35] Fuel arrangement and density are governed in part by topography,
as land shape determines factors such as available sunlight and water
for plant growth. Overall, fire types can be generally characterized by
their fuels as follows:
Ground fires are fed by subterranean roots, duff and other buried organic matter.
This fuel type is especially susceptible to ignition due to spotting.
Ground fires typically burn by smoldering, and can burn slowly for days
to months, such as peat fires in Kalimantan and Eastern Sumatra, Indonesia, which resulted from a riceland creation project that unintentionally drained and dried the peat.[36][37]
Crawling or surface fires are fueled by low-lying vegetation on the forest floor such as leaf and timber litter, debris, grass, and low-lying shrubbery.[38]
This kind of fire often burns at a relatively lower temperature than
crown fires (less than 400 °C (752 °F)) and may spread at a slow rate,
though steep slopes and wind can accelerate the rate of spread.[39]
Ladder fires consume material between low-level vegetation and tree canopies, such as small trees, downed logs, and vines. Kudzu, Old World climbing fern, and other invasive plants that scale trees may also encourage ladder fires.[40]
Crown, canopy, or aerial fires burn suspended material at the canopy level, such as tall trees, vines, and mosses. The ignition of a crown fire, termed crowning,
is dependent on the density of the suspended material, canopy height,
canopy continuity, sufficient surface and ladder fires, vegetation
moisture content, and weather conditions during the blaze.[41] Stand-replacing fires lit by humans can spread into the Amazon rain forest, damaging ecosystems not particularly suited for heat or arid conditions.[42]
A dirt road acted as a fire barrier in South Africa. The effects of the barrier can clearly be seen on the unburnt (left) and burnt (right) sides of the road.
Wildfires occur when all of the necessary elements of a fire triangle come together in a susceptible area: an ignition source is brought into contact with a combustible material, such as vegetation,
that is subjected to sufficient heat and has an adequate supply of
oxygen from the ambient air. A high moisture content usually prevents
ignition and slows propagation, because higher temperatures are required
to evaporate any water within the material and heat the material to its
fire point.[11][43] Dense forests usually provide more shade, resulting in lower ambient
temperatures and greater humidity, and are therefore less susceptible to
wildfires.[44]
Less dense materials such as grasses and leaves are easier to ignite
because they contain less water than denser material such as branches
and trunks.[45] Plants continuously lose water by evapotranspiration, but water loss is usually balanced by water absorbed from the soil, humidity, or rain.[46] When this balance is not maintained, plants dry out and are therefore more flammable, often a consequence of droughts.[47][48]
A wildfire front is the portion sustaining continuous flaming combustion, where unburned material meets active flames, or the smoldering transition between unburned and burned material.[49] As the front approaches, the fire heats both the surrounding air and woody material through convection and thermal radiation. First, wood is dried as water is vaporized at a temperature of 100 °C (212 °F). Next, the pyrolysis
of wood at 230 °C (450 °F) releases flammable gases. Finally, wood can
smoulder at 380 °C (720 °F) or, when heated sufficiently, ignite at
590 °C (1,000 °F).[50][51] Even before the flames of a wildfire arrive at a particular location, heat transfer
from the wildfire front warms the air to 800 °C (1,470 °F), which
pre-heats and dries flammable materials, causing materials to ignite
faster and allowing the fire to spread faster.[45][52] High-temperature and long-duration surface wildfires may encourage flashover or torching: the drying of tree canopies and their subsequent ignition from below.[53]
Wildfires have a rapid forward rate of spread (FROS) when burning through dense, uninterrupted fuels.[54] They can move as fast as 10.8 kilometres per hour (6.7 mph) in forests and 22 kilometres per hour (14 mph) in grasslands.[55] Wildfires can advance tangential to the main front to form a flanking front, or burn in the opposite direction of the main front by backing.[56] They may also spread by jumping or spotting as winds and vertical convection columns carry firebrands (hot wood embers) and other burning materials through the air over roads, rivers, and other barriers that may otherwise act as firebreaks.[57][58]
Torching and fires in tree canopies encourage spotting, and dry ground
fuels that surround a wildfire are especially vulnerable to ignition
from firebrands.[59] Spotting can create spot fires
as hot embers and firebrands ignite fuels downwind from the fire. In
Australian bushfires, spot fires are known to occur as far as 20
kilometres (12 mi) from the fire front.[60]
Especially large wildfires may affect air currents in their immediate vicinities by the stack effect: air rises as it is heated, and large wildfires create powerful updrafts that will draw in new, cooler air from surrounding areas in thermal columns.[61] Great vertical differences in temperature and humidity encourage pyrocumulus clouds, strong winds, and fire whirls with the force of tornadoes at speeds of more than 80 kilometres per hour (50 mph).[62][63][64]
Rapid rates of spread, prolific crowning or spotting, the presence of
fire whirls, and strong convection columns signify extreme conditions.[65]
The heat from a wildfire can cause significant weathering of rocks and boulders: heat can rapidly expand a boulder, and the resulting thermal shock may cause the rock's structure to fail.
Effect of weather
Lightning-sparked wildfires are frequent occurrences during the dry summer season in Nevada.
Heat waves, droughts, cyclical climate changes such as El Niño,
and regional weather patterns such as high-pressure ridges can increase
the risk and alter the behavior of wildfires dramatically.[66][67] Years of precipitation followed by warm periods can encourage more widespread fires and longer fire seasons.[68]
Since the mid-1980s, earlier snowmelt and associated warming has also
been associated with an increase in length and severity of the wildfire
season in the Western United States.[69] Global warming may increase the intensity and frequency of droughts in many areas, creating more intense and frequent wildfires.[5] A 2015 study[70] indicates that the increase in fire risk in California may be attributable to human-induced climate change.[71]
A study of alluvial sediment deposits going back over 8,000 years found
warmer climate periods experienced severe droughts and stand-replacing
fires and concluded climate was such a powerful influence on wildfire
that trying to recreate presettlement forest structure is likely
impossible in a warmer future.[72]
Intensity also increases during daytime hours. Burn rates of
smoldering logs are up to five times greater during the day due to lower
humidity, increased temperatures, and increased wind speeds.[73]
Sunlight warms the ground during the day which creates air currents
that travel uphill. At night the land cools, creating air currents that
travel downhill. Wildfires are fanned by these winds and often follow
the air currents over hills and through valleys.[74] Fires in Europe occur frequently between the hours of 12:00 p.m. and 2:00 p.m.[75] Wildfire suppression operations in the United States revolve around a 24-hour fire day that begins at 10:00 a.m. due to the predictable increase in intensity resulting from the daytime warmth.[76]
Wildfires are common in climates that are sufficiently moist to allow
the growth of vegetation but feature extended dry, hot periods.[8] Such places include the vegetated areas of Australia and Southeast Asia, the veld in southern Africa, the fynbos in the Western Cape of South Africa, the forested areas of the United States and Canada, and the Mediterranean Basin.
High-severity wildfire creates complex early seral forest habitat (also called “snag forest habitat”), which often has higher species richness and diversity than unburned old forest.[6]
Plant and animal species in most types of North American forests
evolved with fire, and many of these species depend on wildfires, and
particularly high-severity fires, to reproduce and grow. Fire helps to
return nutrients from plant matter back to soil, the heat from fire is
necessary to the germination of certain types of seeds, and the snags
(dead trees) and early successional forests created by high-severity
fire create habitat conditions that are beneficial to wildlife.[6]
Early successional forests created by high-severity fire support some
of the highest levels of native biodiversity found in temperate conifer
forests.[7][77] Post-fire logging has no ecological benefits and many negative impacts; the same is often true for post-fire seeding.[18]
Although some ecosystems rely on naturally occurring fires to
regulate growth, some ecosystems suffer from too much fire, such as the chaparral in southern California
and lower elevation deserts in the American Southwest. The increased
fire frequency in these ordinarily fire-dependent areas has upset
natural cycles, damaged native plant communities, and encouraged the
growth of non-native weeds.[78][79][80][81] Invasive species, such as Lygodium microphyllum and Bromus tectorum,
can grow rapidly in areas that were damaged by fires. Because they are
highly flammable, they can increase the future risk of fire, creating a positive feedback loop that increases fire frequency and further alters native vegetation communities.[40][82]
In the Amazon Rainforest, drought, logging, cattle ranching practices, and slash-and-burn
agriculture damage fire-resistant forests and promote the growth of
flammable brush, creating a cycle that encourages more burning.[83] Fires in the rainforest threaten its collection of diverse species and produce large amounts of CO2.[84]
Also, fires in the rainforest, along with drought and human
involvement, could damage or destroy more than half of the Amazon
rainforest by the year 2030.[85] Wildfires generate ash, reduce the availability of organic nutrients,
and cause an increase in water runoff, eroding away other nutrients and
creating flash flood conditions.[35][86] A 2003 wildfire in the North Yorkshire Moors burned off 2.5 square kilometers (600 acres) of heather and the underlying peat
layers. Afterwards, wind erosion stripped the ash and the exposed soil,
revealing archaeological remains dating back to 10,000 BC.[87]
Wildfires can also have an effect on climate change, increasing the
amount of carbon released into the atmosphere and inhibiting vegetation
growth, which affects overall carbon uptake by plants.[88]
In tundra
there is a natural pattern of accumulation of fuel and wildfire which
varies depending on the nature of vegetation and terrain. Research in
Alaska has shown fire-event return intervals (FRIs) that typically vary
from 150 to 200 years, with drier lowland areas burning more frequently
than wetter upland areas.[89]
Plants in wildfire-prone ecosystems often survive through adaptations to their local fire regime.
Such adaptations include physical protection against heat, increased
growth after a fire event, and flammable materials that encourage fire
and may eliminate competition. For example, plants of the genus Eucalyptus contain flammable oils that encourage fire and hard sclerophyll leaves to resist heat and drought, ensuring their dominance over less fire-tolerant species.[90][91]
Dense bark, shedding lower branches, and high water content in external
structures may also protect trees from rising temperatures.[8] Fire-resistant seeds and reserve shoots that sprout after a fire encourage species preservation, as embodied by pioneer species. Smoke, charred wood, and heat can stimulate the germination of seeds in a process called serotiny.[92] Exposure to smoke from burning plants promotes germination in other types of plants by inducing the production of the orange butenolide.[93]
Grasslands in Western Sabah, Malaysian pine forests, and Indonesian Casuarina forests are believed to have resulted from previous periods of fire.[94] Chamise deadwood litter is low in water content and flammable, and the shrub quickly sprouts after a fire.[8] Cape lilies lie dormant until flames brush away the covering, then blossom almost overnight.[95] Sequoia rely on periodic fires to reduce competition, release seeds from their cones, and clear the soil and canopy for new growth.[96] Caribbean Pine in Bahamian pineyards
have adapted to and rely on low-intensity, surface fires for survival
and growth. An optimum fire frequency for growth is every 3 to 10 years.
Too frequent fires favor herbaceous plants, and infrequent fires favor species typical of Bahamian dry forests.[97]
Most of the Earth's weather and air pollution resides in the troposphere,
the part of the atmosphere that extends from the surface of the planet
to a height of about 10 kilometers (6 mi). The vertical lift of a severe
thunderstorm or pyrocumulonimbus can be enhanced in the area of a large wildfire, which can propel smoke, soot, and other particulate matter as high as the lower stratosphere.[98] Previously, prevailing scientific theory held that most particles in the stratosphere came from volcanoes, but smoke and other wildfire emissions have since been detected in the lower stratosphere.[99] Pyrocumulus clouds can reach 6,100 meters (20,000 ft) over wildfires.[100]
Satellite observation of smoke plumes from wildfires revealed that the
plumes could be traced intact for distances exceeding 1,600 kilometers
(1,000 mi).[101] Computer-aided models such as CALPUFF may help predict the size and direction of wildfire-generated smoke plumes by using atmospheric dispersion modeling.[102]
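Operational dispersion models such as CALPUFF are far more sophisticated than a single formula (CALPUFF is a Lagrangian puff model with terrain, chemistry, and time-varying meteorology), but the basic idea can be illustrated with a simple Gaussian plume calculation. The sketch below is an illustrative simplification only: the dispersion coefficients are assumed Briggs-style power laws for roughly neutral conditions and are not CALPUFF's actual parametrization, and the example source values are hypothetical.

    import math

    def gaussian_plume(q, u, x, y, z, h):
        """Steady-state Gaussian plume concentration (g/m^3) at (x, y, z).

        q: emission rate (g/s); u: mean wind speed (m/s);
        x: downwind distance (m); y: crosswind offset (m); z: height (m);
        h: effective plume injection height (m).
        Dispersion coefficients are assumed power laws for roughly
        neutral conditions, purely for illustration.
        """
        sigma_y = 0.08 * x / math.sqrt(1 + 0.0001 * x)
        sigma_z = 0.06 * x / math.sqrt(1 + 0.0015 * x)
        lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
        # Ground reflection: add an "image" source mirrored below the surface.
        vertical = (math.exp(-(z - h) ** 2 / (2 * sigma_z ** 2)) +
                    math.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))
        return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

    # Hypothetical example: ground-level smoke concentration 5 km downwind of
    # a plume injected near 200 m, releasing 1 kg/s of particulates in a
    # 5 m/s wind (result is on the order of a few hundred micrograms per m^3).
    print(gaussian_plume(q=1000.0, u=5.0, x=5000.0, y=0.0, z=0.0, h=200.0))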
Wildfires can affect local atmospheric pollution,[103] and release carbon in the form of carbon dioxide.[104] Wildfire emissions contain fine particulate matter which can cause cardiovascular and respiratory problems.[105] Increased fire byproducts in the troposphere can increase ozone concentration beyond safe levels.[106] Forest fires in Indonesia in 1997 were estimated to have released between 0.81 and 2.57 gigatonnes (0.89 and 2.83 billion short tons) of CO2 into the atmosphere, which is between 13% and 40% of the annual global carbon dioxide emissions from burning fossil fuels.[107][108] Atmospheric models suggest that these concentrations of sooty particles could increase absorption of incoming solar radiation during winter months by as much as 15%.[109]
National map of groundwater and soil moisture in the United States of America. It shows the very low soil moisture associated with the 2011 fire season in Texas.
Smoke trail from a fire seen while looking towards Dargo from Swifts Creek, Victoria, Australia, 11 January 2007
History
In the Welsh Borders, the first evidence of wildfire is rhyniophytoid plant fossils preserved as charcoal, dating to the Silurian period (about 420 million years ago). Smoldering surface fires started to occur sometime before the Early Devonian period 405 million years ago. Low atmospheric oxygen during the Middle and Late Devonian was accompanied by a decrease in charcoal abundance.[110][111] Additional charcoal evidence suggests that fires continued through the Carboniferous period. Later, the overall increase of atmospheric oxygen from 13% in the Late Devonian to 30-31% by the Late Permian was accompanied by a more widespread distribution of wildfires.[112] Later, a decrease in wildfire-related charcoal deposits from the late Permian to the Triassic periods is explained by a decrease in oxygen levels.[113]
Wildfires during the Paleozoic and Mesozoic periods followed
patterns similar to fires that occur in modern times. Surface fires
driven by dry seasons are evident in Devonian and Carboniferous progymnosperm forests. Lepidodendron forests dating to the Carboniferous period have charred peaks, evidence of crown fires. In Jurassic gymnosperm forests, there is evidence of high frequency, light surface fires.[113] The increase of fire activity in the late Tertiary[114] is possibly due to the increase of C4-type grasses. As these grasses shifted to more mesic habitats, their high flammability increased fire frequency, promoting grasslands over woodlands.[115] However, fire-prone habitats may have contributed to the prominence of trees such as those of the genera Eucalyptus, Pinus and Sequoia, which have thick bark to withstand fires and employ serotiny.[116][117]
Human involvement
Aerial view of deliberate wildfires on the Khun Tan Range, Thailand. These fires are lit by local farmers every year in order to promote the growth of a certain mushroom
The human use of fire for agricultural and hunting purposes during the Paleolithic and Mesolithic
ages altered the preexisting landscapes and fire regimes. Woodlands
were gradually replaced by smaller vegetation that facilitated travel,
hunting, seed-gathering and planting.[118] In recorded human history, minor allusions to wildfires were mentioned in the Bible and by classical writers such as Homer.
However, while ancient Hebrew, Greek, and Roman writers were aware of
fires, they were not very interested in the uncultivated lands where
wildfires occurred.[119][120] Wildfires were used in battles throughout human history as early thermal weapons. From the Middle Ages, accounts were written of occupational burning as well as customs and laws that governed the use of fire. In Germany, regular burning was documented in 1290 in the Odenwald and in 1344 in the Black Forest.[121] In 14th-century Sardinia, firebreaks were used for wildfire protection. In Spain during the 1550s, sheep husbandry was discouraged in certain provinces by Philip II due to the harmful effects of fires used in transhumance.[119][120] As early as the 17th century, Native Americans were observed using fire for many purposes including cultivation, signaling, and warfare. Scottish botanist David Douglas
noted the native use of fire for tobacco cultivation, to encourage deer
into smaller areas for hunting purposes, and to improve foraging for
honey and grasshoppers. Charcoal found in sedimentary deposits off the
Pacific coast of Central America suggests that more burning occurred in
the 50 years before the Spanish colonization of the Americas than after the colonization.[122] In the post-World War II Baltic region,
socio-economic changes led to more stringent air quality standards and
bans on fires that eliminated traditional burning practices.[121] In the mid-19th century, explorers from HMS Beagle observed Australian Aborigines using fire for ground clearing, hunting, and regeneration of plant food in a method later named fire-stick farming.[123] Such careful use of fire has been employed for centuries in the lands protected by Kakadu National Park to encourage biodiversity.[124]
Wildfires typically occurred during periods of increased temperature and drought. An increase in fire-related debris flow in alluvial fans of northeastern Yellowstone National Park was linked to the period between AD 1050 and 1200, coinciding with the Medieval Warm Period.[125] However, human influence caused an increase in fire frequency. Dendrochronological fire scar data and charcoal layer data in Finland
suggests that, while many fires occurred during severe drought
conditions, an increase in the number of fires between 850 BC and 1660 AD
can be attributed to human influence.[126]
Charcoal evidence from the Americas suggested a general decrease in
wildfires between 1 AD and 1750 compared to previous years. However, a
period of increased fire frequency between 1750 and 1870 was suggested
by charcoal data from North America and Asia, attributed to human
population growth and influences such as land clearing practices. This
period was followed by an overall decrease in burning in the 20th
century, linked to the expansion of agriculture, increased livestock
grazing, and fire prevention efforts.[127]
A meta-analysis found that 17 times more land burned annually in
California before 1800 compared to recent decades (1,800,000
hectares/year compared to 102,000 hectares/year).[128]
According to a paper published in Science,
the number of natural and human-caused fires decreased by 24.3% between
1998 and 2015. Researchers attribute this to the transition from nomadic to settled lifestyles and the intensification of agriculture, both of which led to a drop in the use of fire for land clearing.
Invasive species moved by humans have in some cases increased the intensity of wildfires, such as Eucalyptus in California and gamba grass in Australia.
Prevention
1985 Smokey Bear poster with part of his admonition, "Only you can prevent forest fires".
Wildfire prevention refers to the preemptive methods aimed at
reducing the risk of fires as well as lessening their severity and spread.[131] Prevention techniques aim to manage air quality, maintain ecological balances, protect resources,[82] and affect future fires.[132]
North American firefighting policies permit naturally caused fires to
burn to maintain their ecological role, so long as the risks of escape
into high-value areas are mitigated.[133]
However, prevention policies must consider the role that humans play in
wildfires, since, for example, 95% of forest fires in Europe are
related to human involvement.[134]
Sources of human-caused fire may include arson, accidental ignition, or
the uncontrolled use of fire in land-clearing and agriculture such as
the slash-and-burn farming in Southeast Asia.[135]
In 1937, U.S. President Franklin D. Roosevelt
initiated a nationwide fire prevention campaign, highlighting the role
of human carelessness in forest fires. Later posters of the program
featured Uncle Sam, characters from the Disney movie Bambi, and the official mascot of the U.S. Forest Service, Smokey Bear.[136]
Reducing human-caused ignitions may be the most effective means of
reducing unwanted wildfire. Alteration of fuels is commonly undertaken
when attempting to affect future fire risk and behavior.[35] Wildfire prevention programs around the world may employ techniques such as wildland fire use and prescribed or controlled burns.[137][138] Wildland fire use refers to any fire of natural causes that is monitored but allowed to burn. Controlled burns are fires ignited by government agencies under less dangerous weather conditions.[139]
Vegetation may be burned periodically to maintain high species diversity, and frequent burning of surface fuels limits fuel accumulation.[16][17] Wildland fire use is the cheapest and most ecologically appropriate policy for many forests.[18] Fuels may also be removed by logging, but fuel treatments and thinning have no effect on severe fire behavior.[19]
Wildfire models are often used to predict and compare the benefits of
different fuel treatments on future wildfire spread, but their accuracy
is low.[35]
Wildfire itself is reportedly "the most effective treatment for
reducing a fire's rate of spread, fireline intensity, flame length, and
heat per unit of area" according to Jan van Wagtendonk, a biologist at
the Yellowstone Field Station.[20]
Building codes in fire-prone areas typically require that structures be built of flame-resistant materials and a defensible space be maintained by clearing flammable materials within a prescribed distance from the structure.[21][22] Communities in the Philippines also maintain fire lines
5 to 10 meters (16 to 33 ft) wide between the forest and their village,
and patrol these lines during summer months or seasons of dry weather.[140] Continued residential development in fire-prone areas and rebuilding structures destroyed by fires has been met with criticism.[141]
The ecological benefits of fire are often overridden by the economic
and safety benefits of protecting structures and human life.[142]
Fast and effective detection is a key factor in wildfire fighting.[143]
Early detection efforts were focused on early response, accurate
results in both daytime and nighttime, and the ability to prioritize
fire danger.[144] Fire lookout towers were used in the United States in the early 20th century and fires were reported using telephones, carrier pigeons, and heliographs.[145] Aerial and land photography using instant cameras was used in the 1950s until infrared scanning
was developed for fire detection in the 1960s. However, information
analysis and delivery was often delayed by limitations in communication
technology. Early satellite-derived fire analyses were hand-drawn on
maps at a remote site and sent via overnight mail to the fire manager. During the Yellowstone fires of 1988, a data station was established in West Yellowstone, permitting the delivery of satellite-based fire information in approximately four hours.[144]
Currently, public hotlines, fire lookouts
in towers, and ground and aerial patrols can be used as a means of
early detection of forest fires. However, accurate human observation may
be limited by operator fatigue,
time of day, time of year, and geographic location. Electronic systems
have gained popularity in recent years as a possible resolution to human
operator error. A government report on a recent trial of three
automated camera fire detection systems in Australia did, however,
conclude "...detection by the camera systems was slower and less
reliable than by a trained human observer". These systems may be semi-
or fully automated and are deployed based on the risk area and degree
of human presence, as suggested by GIS
data analyses. An integrated approach of multiple systems can be used
to merge satellite data, aerial imagery, and personnel position via Global Positioning System (GPS) into a collective whole for near-realtime use by wireless Incident Command Centers.[146][147]
A small, high-risk area that features thick vegetation, a strong
human presence, or is close to a critical urban area can be monitored
using a local sensor network. Detection systems may include wireless sensor networks that act as automated weather systems: detecting temperature, humidity, and smoke.[148][149][150][151] These may be battery-powered, solar-powered, or tree-rechargeable: able to recharge their battery systems using the small electrical currents in plant material.[152]
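As a rough illustration of how such a node's readings might be turned into an alert, the sketch below applies simple thresholds to temperature, humidity, and a smoke reading. The node structure and threshold values are hypothetical assumptions; a deployed network would calibrate thresholds per site and usually require corroboration from neighboring nodes before raising an alarm.

    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        node_id: str
        temperature_c: float   # ambient temperature
        humidity_pct: float    # relative humidity
        smoke_ppm: float       # smoke/particulate proxy from an optical sensor

    def fire_alert(reading: SensorReading,
                   temp_threshold: float = 60.0,
                   humidity_threshold: float = 15.0,
                   smoke_threshold: float = 30.0) -> bool:
        """Flag a node when smoke rises together with heat or very dry air.

        Thresholds are hypothetical placeholders for illustration only.
        """
        hot = reading.temperature_c >= temp_threshold
        dry = reading.humidity_pct <= humidity_threshold
        smoky = reading.smoke_ppm >= smoke_threshold
        return smoky and (hot or dry)

    # Example: a hot, dry, smoky reading from a hypothetical node triggers an alert.
    print(fire_alert(SensorReading("node-17", 72.0, 9.0, 45.0)))  # True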
Larger, medium-risk areas can be monitored by scanning towers that
incorporate fixed cameras and sensors to detect smoke or additional
factors such as the infrared signature of carbon dioxide produced by
fires. Additional capabilities such as night vision, brightness detection, and color change detection may also be incorporated into sensor arrays.[153][154][155]
Wildfires across the Balkans in late July 2007 (MODIS image)
Satellite and aerial monitoring through the use of planes,
helicopters, or UAVs can provide a wider view and may be sufficient to
monitor very large, low-risk areas. These more sophisticated systems
employ GPS and aircraft-mounted infrared or high-resolution visible
cameras to identify and target wildfires.[156][157] Satellite-mounted sensors such as Envisat's Advanced Along Track Scanning Radiometer and European Remote-Sensing Satellite's
Along-Track Scanning Radiometer can measure infrared radiation emitted
by fires, identifying hot spots greater than 39 °C (102 °F).[158][159] The National Oceanic and Atmospheric Administration's Hazard Mapping System combines remote-sensing data from satellite sources such as Geostationary Operational Environmental Satellite (GOES), Moderate-Resolution Imaging Spectroradiometer (MODIS), and Advanced Very High Resolution Radiometer (AVHRR) for detection of fire and smoke plume locations.[160][161]
However, satellite detection is prone to offset errors, anywhere from 2
to 3 kilometers (1 to 2 mi) for MODIS and AVHRR data and up to 12
kilometers (7.5 mi) for GOES data.[162]
Satellites in geostationary orbits may become disabled, and satellites
in polar orbits are often limited by their short window of observation
time. Cloud cover and image resolution may also limit the
effectiveness of satellite imagery.[163]
In 2015, a new fire detection tool came into operation at the U.S. Department of Agriculture (USDA) Forest Service (USFS), which uses data from the Suomi National Polar-orbiting Partnership
(NPP) satellite to detect smaller fires in more detail than previous
space-based products. The high-resolution data is used with a computer
model to predict how a fire will change direction based on weather and
land conditions. The active fire detection product using data from
Suomi NPP's Visible Infrared Imaging Radiometer Suite
(VIIRS) increases the resolution of fire observations to 1,230 feet
(375 meters). Previous NASA satellite data products available since the
early 2000s observed fires at 3,280 foot (1 kilometer) resolution. The
data is one of the intelligence tools used by the USFS and Department of
Interior agencies across the United States to guide resource allocation
and strategic fire management decisions. The enhanced VIIRS fire
product enables detection every 12 hours or less of much smaller fires
and provides more detail and consistent tracking of fire lines during
long duration wildfires – capabilities critical for early warning
systems and support of routine mapping of fire progression. Active fire
locations are available to users within minutes from the satellite
overpass through data processing facilities at the USFS Remote Sensing
Applications Center, which uses technologies developed by the NASA
Goddard Space Flight Center Direct Readout Laboratory in Greenbelt,
Maryland. The model uses data on weather conditions and the land
surrounding an active fire to predict 12–18 hours in advance whether a
blaze will shift direction. The state of Colorado decided to incorporate
the weather-fire model in its firefighting efforts beginning with the
2016 fire season.
In 2014, an international campaign was organized in South
Africa's Kruger National Park to validate fire detection products
including the new VIIRS active fire data. In advance of that campaign,
the Meraka Institute of the Council for Scientific and Industrial
Research in Pretoria, South Africa, an early adopter of the VIIRS 375m
fire product, put it to use during several large wildfires in Kruger.
The demand for timely, high-quality fire information has
increased in recent years. Wildfires in the United States burn an
average of 7 million acres of land each year. For the last 10 years, the
USFS and Department of Interior have spent a combined average of about
$2–4 billion annually on wildfire suppression.
Suppression
A Russian firefighter extinguishing a wildfire
Wildfire suppression depends on the technologies available in the
area in which the wildfire occurs. In less developed nations the
techniques used can be as simple as throwing sand or beating the fire
with sticks or palm fronds.[164] In more advanced nations, the suppression methods vary due to increased technological capacity. Silver iodide can be used to encourage snow fall,[165] while fire retardants and water can be dropped onto fires by unmanned aerial vehicles, planes, and helicopters.[166][167]
Complete fire suppression is no longer an expectation, but most wildfires are extinguished before they grow out of control.
While more than 99% of the 10,000 new wildfires each year are contained,
escaped wildfires under extreme weather conditions are difficult to
suppress without a change in the weather. Wildfires in Canada and the US
burn an average of 54,500 square kilometers (13,000,000 acres) per
year.[168][169]
Fighting wildfires can be deadly. A wildfire's
burning front may also change direction unexpectedly and jump across
fire breaks. Intense heat and smoke can lead to disorientation and loss
of appreciation of the direction of the fire, which can make fires
particularly dangerous. For example, during the 1949 Mann Gulch fire in Montana, USA, thirteen smokejumpers died when they lost their communication links, became disoriented, and were overtaken by the fire.[170] In the Australian February 2009 Victorian bushfires, at least 173 people died and over 2,029 homes and 3,500 structures were lost when they became engulfed by wildfire.[171]
Costs of wildfire suppression
In
California, the U.S. Forest Service spends about $200 million per year
to suppress 98% of wildfires and up to $1 billion to suppress the other
2% of fires that escape initial attack and become large.[172]
Wildland firefighting safety
Wildfire fighters cutting down a tree using a chainsaw
Especially in hot weather conditions, fires present the risk of
heat stress, which can entail feeling heat, fatigue, weakness, vertigo,
headache, or nausea. Heat stress can progress into heat strain, which
entails physiological changes such as increased heart rate and core body
temperature. This can lead to heat-related illnesses, such as heat
rash, cramps, exhaustion or heat stroke.
Various factors can contribute to the risks posed by heat stress,
including strenuous work, personal risk factors such as age and fitness, dehydration, sleep deprivation, and burdensome personal protective equipment. Rest, cool water, and occasional breaks are crucial to mitigating the effects of heat stress.[173]
Smoke, ash, and debris can also pose serious respiratory hazards
to wildland fire fighters. The smoke and dust from wildfires can contain
gases such as carbon monoxide, sulfur dioxide and formaldehyde, as well as particulates such as ash and silica.
To reduce smoke exposure, wildfire fighting crews should, whenever
possible, rotate firefighters through areas of heavy smoke, avoid
downwind firefighting, use equipment rather than people in holding
areas, and minimize mop-up. Camps and command posts should also be
located upwind of wildfires. Protective clothing and equipment can also
help minimize exposure to smoke and ash.[173]
Firefighters are also at risk of cardiac events including strokes
and heart attacks. Firefighters should maintain good physical fitness. Fitness programs, along with medical screening and examination programs that include stress tests, can minimize the risk of cardiac problems during firefighting.[173]
Other injury hazards wildland fire fighters face include slips, trips
and falls, burns, scrapes and cuts from tools and equipment, being
struck by trees, vehicles, or other objects, plant hazards such as
thorns and poison ivy, snake and animal bites, vehicle crashes,
electrocution from power lines or lightning storms, and unstable
building structures.[173]
Fire retardant
Fire
retardants are used to slow wildfires by inhibiting combustion. They
are aqueous solutions of ammonium phosphates and ammonium sulfates, as
well as thickening agents.[175]
The decision to apply retardant depends on the magnitude, location and
intensity of the wildfire. In certain instances, fire retardant may
also be applied as a precautionary fire defense measure.[176]
Typical fire retardants contain the same agents as fertilizers.
Fire retardant may also affect water quality through leaching,
eutrophication, or misapplication. Fire retardant's effects on drinking
water remain inconclusive.[177]
Dilution factors, including water body size, rainfall, and water flow
rates lessen the concentration and potency of fire retardant.[176]
Wildfire debris (ash and sediment) clogs rivers and reservoirs, increasing the risk of floods and erosion that ultimately slow and/or damage water treatment systems.[177][178]
There is continued concern about the effects of fire retardant on land, water, wildlife habitats, and watershed quality, and additional research is needed.
However, on the positive side, fire retardant (specifically its
nitrogen and phosphorus components) has been shown to have a fertilizing
effect on nutrient-deprived soils and thus creates a temporary increase
in vegetation.[176]
Current USDA procedure maintains that the aerial application of fire retardant in the United States must clear waterways by a minimum of 300 feet in order to safeguard against the effects of retardant runoff. Aerial uses of fire retardant are required to avoid application near waterways and endangered species (plant and animal habitats). After any incident of fire retardant misapplication, the U.S. Forest Service requires that the impacts be reported and assessed in order to determine mitigation, remediation, and/or restrictions on future retardant uses in that area.
Modeling
Fire Propagation Model
Wildfire modeling is concerned with numerical simulation of wildfires in order to comprehend and predict fire behavior.[179][180]
Wildfire modeling aims to aid wildfire suppression, increase the safety
of firefighters and the public, and minimize damage. Using computational science,
wildfire modeling involves the statistical analysis of past fire events
to predict spotting risks and front behavior. Various wildfire
propagation models have been proposed in the past, including simple
ellipses and egg- and fan-shaped models. Early attempts to determine
wildfire behavior assumed terrain and vegetation uniformity. However,
the exact behavior of a wildfire's front is dependent on a variety of
factors, including windspeed and slope steepness. Modern growth models
utilize a combination of past ellipsoidal descriptions and Huygens' Principle to simulate fire growth as a continuously expanding polygon.[181][182] Extreme value theory
may also be used to predict the size of large wildfires. However, large
fires that exceed suppression capabilities are often regarded as
statistical outliers in standard analyses, even though fire policies are
more influenced by large wildfires than by small fires.[183]
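As a concrete illustration of the elliptical growth idea, the sketch below treats each vertex of the fire perimeter as the focus of its own spread ellipse oriented with the wind, in the spirit of Huygens-style growth models. It is a deliberately simplified toy under stated assumptions (a crude outward normal per vertex and no envelope reconstruction or re-meshing), not a reproduction of any particular published model.

    import math

    def spread_rate(head_rate, ecc, theta):
        """Spread rate in a direction theta (radians from the wind direction)
        for a fire ellipse anchored at its focus (the ignition point).
        Polar form of an ellipse about a focus: r = head*(1 - e)/(1 - e*cos(theta)).
        """
        return head_rate * (1.0 - ecc) / (1.0 - ecc * math.cos(theta))

    def grow_perimeter(perimeter, wind_dir, head_rate, ecc, dt):
        """One Huygens-style step: each vertex is displaced outward as if it
        were an independent elliptical ignition (simplified; real models
        compute the envelope of the firelets and re-mesh the polygon)."""
        cx = sum(x for x, _ in perimeter) / len(perimeter)
        cy = sum(y for _, y in perimeter) / len(perimeter)
        new = []
        for x, y in perimeter:
            outward = math.atan2(y - cy, x - cx)   # crude outward direction
            theta = outward - wind_dir             # angle relative to the wind
            r = spread_rate(head_rate, ecc, theta) * dt
            new.append((x + r * math.cos(outward), y + r * math.sin(outward)))
        return new

    # Hypothetical example: circular ignition of radius 10 m, wind blowing
    # toward +x, 1 m/min head spread rate, moderately elongated ellipse
    # (ecc = 0.6), advanced by one 30-minute step.
    ring = [(10 * math.cos(a), 10 * math.sin(a))
            for a in (2 * math.pi * i / 36 for i in range(36))]
    ring = grow_perimeter(ring, wind_dir=0.0, head_rate=1.0, ecc=0.6, dt=30.0)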
2009 California Wildfires at NASA/JPL – Pasadena, California
Wildfire risk is the chance that a wildfire will start in or reach a
particular area and the potential loss of human values if it does. Risk
is dependent on variable factors such as human activities, weather
patterns, availability of wildfire fuels, and the availability or lack
of resources to suppress a fire.[184] Wildfires have continually been a threat to human populations. However,
human-induced geographical and climatic changes are exposing
populations more frequently to wildfires and increasing wildfire risk.
It is speculated that the increase in wildfires arises from a century of
wildfire suppression coupled with the rapid expansion of human
developments into fire-prone wildlands.[185] Wildfires are naturally occurring events that aid in promoting forest
health. Global warming and climate change are causing an increase in temperatures and more droughts nationwide, which contribute to an increase in wildfire risk.[186][187]
Airborne hazards
The
most noticeable adverse effect of wildfires is the destruction of
property. However, the release of hazardous chemicals from the burning
of wildland fuels also significantly impacts health in humans.
Wildfire smoke is composed primarily of carbon dioxide and water
vapor. Other common smoke components present in lower concentrations are
carbon monoxide, formaldehyde, acrolein, polyaromatic hydrocarbons, and
benzene.[188]
Small particulates suspended in air, which come in solid form or in liquid droplets, are also present in smoke. Between 80% and 90% of wildfire smoke, by mass, is within the fine particle size class of 2.5 micrometers in diameter or smaller.[189]
Despite carbon dioxide's high concentration in smoke, it poses a
low health risk due to its low toxicity. Rather, carbon monoxide and
fine particulate matter, particularly 2.5 µm in diameter and smaller,
have been identified as the major health threats.[188]
Other chemicals are considered to be significant hazards but are found
in concentrations that are too low to cause detectable health effects.
The degree of wildfire smoke exposure to an individual is
dependent on the length, severity, duration, and proximity of the fire.
People are exposed directly to smoke via the respiratory tract through
inhalation of air pollutants. Indirectly, communities are exposed to
wildfire debris that can contaminate soil and water supplies.
The U.S. Environmental Protection Agency (EPA) developed the Air
Quality Index (AQI), a public resource that provides national air
quality standard concentrations for common air pollutants. The public
can use this index as a tool to determine their exposure to hazardous
air pollutants based on visibility range.[190]
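For a given pollutant, the AQI value is computed by piecewise-linear interpolation between published concentration breakpoints. The sketch below illustrates that calculation for fine particulate matter (PM2.5), the dominant component of wildfire smoke; the breakpoints shown are the pre-2024 EPA 24-hour PM2.5 values and should be checked against current EPA tables before being relied on.

    # Each tuple is (conc_low, conc_high, index_low, index_high) for 24-hour
    # PM2.5 in micrograms per cubic meter (pre-2024 EPA breakpoints).
    PM25_BREAKPOINTS = [
        (0.0, 12.0, 0, 50),
        (12.1, 35.4, 51, 100),
        (35.5, 55.4, 101, 150),
        (55.5, 150.4, 151, 200),
        (150.5, 250.4, 201, 300),
        (250.5, 350.4, 301, 400),
        (350.5, 500.4, 401, 500),
    ]

    def pm25_aqi(conc: float) -> int:
        """Linear interpolation within the bracketing breakpoint interval."""
        for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
            if c_lo <= conc <= c_hi:
                return round((i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo)
        raise ValueError("concentration outside AQI range")

    print(pm25_aqi(35.0))   # ~99: "Moderate", typical of light smoke
    print(pm25_aqi(180.0))  # ~230: "Very Unhealthy", heavy wildfire smoke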
Post-fire risks
After
a wildfire, hazards remain. Residents returning to their homes may be
at risk from falling fire-weakened trees. Humans and pets may also be
harmed by falling into ash pits.
Groups at risk
Firefighters
are at the greatest risk for acute and chronic health effects resulting
from wildfire smoke exposure. Due to firefighters' occupational duties,
they are frequently exposed to hazardous chemicals
in close proximity for longer periods of time. A case study on the
exposure of wildfire smoke among wildland firefighters shows that
firefighters are exposed to significant levels of carbon monoxide and
respiratory irritants above OSHA-permissible
exposure limits (PEL) and ACGIH threshold limit values (TLV); 5–10% are overexposed. The study obtained exposure concentrations for one wildland firefighter over a 10-hour shift spent holding down a fireline. The firefighter was exposed to a wide range of carbon monoxide and respiratory irritant (a combination of particulate matter 3.5 µm and smaller, acrolein, and formaldehyde) levels. Carbon monoxide levels reached up to 160 ppm and the TLV irritant index value reached a high of 10. In contrast, the OSHA PEL for carbon monoxide is 30 ppm, and for the TLV respiratory irritant index the calculated threshold limit value is 1; any value above 1 exceeds exposure limits.[191]
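The respiratory irritant index referred to above is typically an ACGIH-style additive-mixture calculation: each irritant's concentration is divided by its threshold limit value and the ratios are summed, with a combined value above 1 indicating overexposure even when no single agent exceeds its own limit. The sketch below illustrates that arithmetic; the TLVs and shift-average concentrations are illustrative placeholders, not the values used in the cited study.

    # Illustrative placeholder TLVs (units match each measurement below).
    ILLUSTRATIVE_TLVS = {
        "respirable_particulate_mg_m3": 3.0,
        "acrolein_ppm": 0.1,
        "formaldehyde_ppm": 0.3,
    }

    def irritant_index(measured: dict) -> float:
        """Additive-mixture index: sum of concentration / TLV for each agent."""
        return sum(conc / ILLUSTRATIVE_TLVS[agent] for agent, conc in measured.items())

    # Hypothetical shift-average concentrations for one firefighter.
    shift_average = {
        "respirable_particulate_mg_m3": 1.5,
        "acrolein_ppm": 0.04,
        "formaldehyde_ppm": 0.09,
    }
    print(round(irritant_index(shift_average), 2))  # 1.2 -> over the mixture limit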
Residents in communities surrounding wildfires are exposed to
lower concentrations of chemicals, but they are at a greater risk for
indirect exposure through water or soil contamination. Exposure to
residents is greatly dependent on individual susceptibility. Vulnerable
persons such as children (ages 0–4), the elderly (ages 65 and older),
smokers, and pregnant women are at an increased risk due to their
already compromised body systems, even when the exposures are present at
low chemical concentrations and for relatively short exposure periods.[188]
Additionally, there is evidence of an increase in maternal stress, as documented by researchers M.H. O'Donnell and A.M. Behie, which in turn affects birth outcomes. In Australia, studies show that male infants born in severely fire-affected areas had markedly higher average birth weights. This is attributed to the fact that maternal signals directly affect fetal growth patterns.[193][194]
Health effects
Animation of diaphragmatic breathing with the diaphragm shown in green
Inhalation of smoke from a wildfire can be a health hazard. Wildfire
smoke is composed of carbon dioxide, water vapor, particulate matter,
organic chemicals, nitrogen oxides and other compounds. The principal
health concern is the inhalation of particulate matter and carbon
monoxide.[195]
Particulate matter (PM) is a type of air pollution made up of particles of dust and liquid droplets. It is divided into two categories based on the diameter of the particle: coarse particles are between 2.5 micrometers and 10 micrometers in diameter, and fine particles measure 2.5 micrometers or less. Both sizes can be inhaled. Coarse particles
are filtered by the upper airways and can cause eye and sinus irritation
as well as sore throat and coughing. The fine particles are more
problematic because, when inhaled, they can be deposited deep into the
lungs, where they are absorbed into the bloodstream. This is
particularly hazardous to the very young, elderly and those with chronic
conditions such as asthma, chronic obstructive pulmonary disease
(COPD), cystic fibrosis and cardiovascular conditions. The illnesses
most commonly associated with exposure to fine particles from wildfire smoke are
bronchitis, exacerbation of asthma or COPD, and pneumonia. Symptoms of
these complications include wheezing and shortness of breath, and
cardiovascular symptoms include chest pain, rapid heart rate and
fatigue.[196]
Carbon monoxide (CO) is a colorless, odorless gas that can be
found at the highest concentration at close proximity to a smoldering
fire. For this reason, carbon monoxide inhalation is a serious threat to
the health of wildfire firefighters. CO in smoke can be inhaled into
the lungs where it is absorbed into the bloodstream and reduces oxygen
delivery to the body's vital organs. At high concentrations, it can
cause headache, weakness, dizziness, confusion, nausea, disorientation,
visual impairment, coma and even death. However, even at lower
concentrations, such as those found at wildfires, individuals with
cardiovascular disease may experience chest pain and cardiac arrhythmia.[188]
A recent study tracking the number and cause of wildfire firefighter
deaths from 1990 to 2006 found that 21.9% of the deaths occurred from heart
attacks.[197]
Another important and somewhat less obvious health effect of wildfires is the development of psychiatric diseases and disorders. Both adults and
children from countries ranging from the United States and Canada to
Greece and Australia who were directly and indirectly affected by
wildfires were found by researchers to demonstrate several different
mental conditions linked to their experience with the wildfires. These
include post-traumatic stress disorder (PTSD), depression, anxiety, and
phobias.
In a new twist to wildfire health effects, former uranium mining
sites were burned over in the summer of 2012 near North Fork, Idaho.
This prompted concern from area residents and Idaho State Department of
Environmental Quality officials over the potential spread of radiation
in the resultant smoke, since those sites had never been completely cleaned of radioactive residue.[203]
Epidemiology
The
EPA has defined acceptable concentrations of particulate matter in the
air through the National Ambient Air Quality Standards, and monitoring
of ambient air quality has been mandated.[204]
Due to these monitoring programs and the incidence of several large
wildfires near populated areas, epidemiological studies have been
conducted and demonstrate an association between human health effects
and an increase in fine particulate matter due to wildfire smoke.
An increase in PM emitted from the Hayman fire in Colorado in
June 2002 was associated with an increase in respiratory symptoms in
patients with COPD.[205]
Looking at the wildfires in Southern California in October 2003 in a
similar manner, investigators have shown an increase in hospital
admissions due to asthma during peak concentrations of PM.[206]
Children participating in the Children's Health Study were also found
to have an increase in eye and respiratory symptoms, medication use and
physician visits.[207]
Recently, it was demonstrated that mothers who were pregnant during the fires gave birth to babies with a slightly reduced average birth weight compared to those who were not exposed to wildfire during pregnancy, suggesting that pregnant women may also be at greater risk of adverse effects from wildfire.[208] Worldwide, it is estimated that 339,000 people die due to the effects of wildfire smoke each year.