In evolutionary biology, robustness of a biological system (also called biological or genetic robustness) is the persistence of a certain characteristic or trait in a system under perturbations or conditions of uncertainty. Robustness in development is known as canalization. According to the kind of perturbation involved, robustness can be classified as mutational, environmental, recombinational, or behavioral robustness, among others. Robustness is achieved through the combination of many genetic and molecular mechanisms and can evolve by either direct or indirect selection. Several model systems have been developed to experimentally study robustness and its evolutionary consequences.
Classification
Mutational robustness
Mutational
robustness (also called mutation tolerance) describes the extent to
which an organism's phenotype remains constant in spite of mutation. Robustness can be empirically measured for several genomes and individual genes by inducing mutations and measuring what proportion of mutants retain the same phenotype, function or fitness. More generally robustness corresponds to the neutral band in the distribution of fitness effects
of mutation (i.e. the frequencies of different fitnesses of mutants).
Proteins so far investigated have shown a tolerance to mutations of
roughly 66% (i.e. two thirds of mutations are neutral).
Conversely, measured mutational robustnesses of organisms vary widely. For example, >95% of point mutations in C. elegans have no detectable effect and even 90% of single gene knockouts in E. coli are non-lethal. Viruses, however, only tolerate 20-40% of mutations and hence are much more sensitive to mutation.
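In symbols (an illustrative formalisation; the threshold and density below are notational assumptions, not from the original text), mutational robustness is the probability mass of the distribution of fitness effects that falls in the neutral band:

R = \Pr\big(|s| \le \varepsilon\big) = \int_{-\varepsilon}^{+\varepsilon} f(s)\, ds ,

where s is the fitness effect of a random mutation, f(s) its distribution and \varepsilon a small neutrality threshold; the protein figure above corresponds to R \approx 0.66.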
Robustness to stochasticity
Biological processes at the molecular scale are inherently stochastic.
They emerge from a combination of stochastic events that happen given
the physico-chemical properties of molecules. For instance, gene
expression is intrinsically noisy. This means that two cells in exactly
identical regulatory states will exhibit different mRNA contents. At the cell-population level, the log-normal distribution of mRNA content follows directly from applying the Central Limit Theorem to the multi-step nature of gene expression regulation.
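A sketch of that argument, under the simplifying assumption that the final mRNA content is the product of many roughly independent multiplicative steps (promoter activation, transcription, processing, degradation):

X = \prod_{i=1}^{n} X_i \;\Rightarrow\; \ln X = \sum_{i=1}^{n} \ln X_i \;\approx\; \mathcal{N}(\mu,\sigma^2) \text{ for large } n \;\Rightarrow\; X \sim \text{log-normal},

i.e. the Central Limit Theorem acts on the logarithms of the step efficiencies, so the measured content itself is log-normally distributed across the population.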
Environmental robustness
In varying environments, perfect adaptation
to one condition may come at the expense of adaptation to another.
Consequently, the total selection pressure on an organism is the average
selection across all environments weighted by the percentage time spent
in that environment. Variable environments can therefore select for
environmental robustness, where organisms can function across a wide
range of conditions with little change in phenotype or fitness.
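In symbols (an illustrative formalisation, not from the original text), the averaged selection pressure described above is

\bar{s} = \sum_{E} f_E \, s_E ,

where f_E is the fraction of time spent in environment E and s_E the selection coefficient experienced there; environmental robustness corresponds to keeping every s_E acceptable rather than maximising fitness in any single environment.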
Some organisms show adaptations to tolerate large changes in
temperature, water availability, salinity or food availability. Plants,
in particular, are unable to move when the environment changes and so
show a range of mechanisms for achieving environmental robustness.
Similarly, this can be seen in proteins as tolerance to a wide range of solvents, ion concentrations or temperatures.
Genetic, molecular and cellular causes
Genomes mutate by environmental damage and imperfect replication, yet
they display remarkable tolerance. This tolerance arises from robustness at
many different levels.
Organism mutational robustness
There are many mechanisms that provide genome robustness. For example, genetic redundancy reduces the effect of mutations in any one copy of a multi-copy gene. Additionally, the flux through a metabolic pathway
is typically limited by only a few of the steps, meaning that changes
in function of many of the enzymes have little effect on fitness. Similarly metabolic networks have multiple alternate pathways to produce many key metabolites.
Protein mutational robustness
Protein mutation tolerance is the product of two main features: the structure of the genetic code and protein structural robustness. Proteins are resistant to mutations because many sequences can fold into highly similar structural folds.
A protein adopts a limited ensemble of native conformations because
those conformers have lower energy than unfolded and mis-folded states
(ΔG of folding). This is achieved by a distributed, internal network of cooperative interactions (hydrophobic, polar and covalent).
Protein structural robustness results from few single mutations being
sufficiently disruptive to compromise function. Proteins have also
evolved to avoid aggregation as partially folded proteins can combine to form large, repeating, insoluble protein fibrils and masses. There is evidence that proteins show negative design features to reduce the exposure of aggregation-prone beta-sheet motifs in their structures.
Additionally, there is some evidence that the genetic code itself may be optimised such that most point mutations lead to similar amino acids (conservative). Together these factors create a distribution of fitness effects of mutations that contains a high proportion of neutral and nearly-neutral mutations.
Gene expression robustness
During embryonic development,
gene expression must be tightly controlled in time and space in order
to give rise to fully functional organs. Developing organisms must
therefore deal with the random perturbations resulting from gene
expression stochasticity. In bilaterians, robustness of gene expression can be achieved via enhancer
redundancy. This happens when the expression of a gene is under the control of several enhancers encoding the same regulatory logic (i.e. displaying binding sites for the same set of transcription factors). In Drosophila melanogaster, such redundant enhancers are often called shadow enhancers.
Furthermore, in developmental contexts where the timing of gene
expression is important for the phenotypic outcome, diverse mechanisms
exist to ensure proper gene expression in a timely manner. Poised promoters are transcriptionally inactive promoters that display RNA polymerase II binding, ready for rapid induction. In addition, because not all transcription factors can bind their target site in compacted heterochromatin, pioneer transcription factors (such as Zld or FoxA)
are required to open chromatin and allow the binding of other
transcription factors that can rapidly induce gene expression. Open
inactive enhancers are called poised enhancers.
Cell competition is a phenomenon first described in Drosophila where mosaic Minute mutant cells (affecting ribosomal proteins)
in a wild-type background would be eliminated. This phenomenon also
happens in the early mouse embryo where cells expressing high levels of Myc actively kill their neighbors displaying low levels of Myc expression. This results in homogeneously high levels of Myc.
Developmental patterning robustness
Patterning mechanisms such as those described by the French flag model
can be perturbed at many levels (production of the morphogen and stochasticity of its diffusion, production of the receptor, stochasticity of the signaling cascade, etc.). Patterning is therefore inherently noisy. Robustness against this noise and against genetic perturbation is therefore necessary to ensure that cells measure positional information accurately. Studies of the zebrafish neural tube and of antero-posterior patterning have shown that noisy signaling leads
to imperfect cell differentiation that is later corrected by
transdifferentiation, migration or cell death of the misplaced cells.
Additionally, the structure (or topology) of signaling pathways has been demonstrated to play an important role in robustness to genetic perturbations. Self-enhanced degradation has long been cited as an example of robustness in systems biology.
Similarly, robustness of dorsoventral patterning in many species
emerges from the balanced shuttling-degradation mechanisms involved in BMP signaling.
Evolutionary consequences
Since
organisms are constantly exposed to genetic and non-genetic
perturbations, robustness is important to ensure the stability of phenotypes. Also, under mutation-selection balance, mutational robustness can allow cryptic genetic variation
to accumulate in a population. While phenotypically neutral in a
stable environment, these genetic differences can be revealed as trait
differences in an environment-dependent manner (see evolutionary capacitance),
thereby allowing for the expression of a greater number of heritable
phenotypes in populations exposed to a variable environment.
Being robust may even be favoured at the expense of total fitness as an evolutionarily stable strategy (also called survival of the flattest). A high but narrow peak of a fitness landscape
confers high fitness but low robustness as most mutations lead to
massive loss of fitness. High mutation rates may favour populations occupying
lower but broader fitness peaks. More critical biological systems may
also have greater selection for robustness as reductions in function are
more damaging to fitness. Mutational robustness is thought to be one driver for theoretical viral quasispecies formation.
Natural selection can select directly or indirectly for robustness. When mutation rates are high and population sizes are large, populations are predicted to move to more densely connected regions of the neutral network, as less robust variants have fewer surviving mutant descendants.
The conditions under which selection could act to directly increase
mutational robustness in this way are restrictive, and therefore such
selection is thought to be limited to only a few viruses and microbes having large population sizes and high mutation rates. Such emergent robustness has been observed in experimental evolution of cytochrome P450s and β-lactamase. Conversely, mutational robustness may evolve as a byproduct of natural selection for robustness to environmental perturbations.
Mutational robustness has been thought to have a negative impact on evolvability
because it reduces the mutational accessibility of distinct heritable
phenotypes for a single genotype and reduces selective differences
within a genetically diverse population.
Counter-intuitively however, it has been hypothesized that phenotypic
robustness towards mutations may actually increase the pace of heritable
phenotypic adaptation when viewed over longer periods of time.
One hypothesis for how robustness promotes evolvability in
asexual populations is that connected networks of fitness-neutral
genotypes result in mutational robustness which, while reducing
accessibility of new heritable phenotypes over short timescales, over
longer time periods, neutral mutation and genetic drift cause the population to spread out over a larger neutral network in genotype space.
This genetic diversity gives the population mutational access to a
greater number of distinct heritable phenotypes that can be reached from
different points of the neutral network.
However, this mechanism may be limited to phenotypes dependent on a
single genetic locus; for polygenic traits, genetic diversity in asexual
populations does not significantly increase evolvability.
In the case of proteins, robustness promotes evolvability in the form of an excess free energy of folding.
Since most mutations reduce stability, an excess folding free energy
allows toleration of mutations that are beneficial to activity but would
otherwise destabilise the protein.
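One hedged way to make this concrete: writing \Delta G_{\text{fold}} < 0 for the folding free energy of the wild-type protein (more negative meaning more stable) and \Delta\Delta G_i > 0 for the destabilising effect of mutation i, the protein remains folded roughly as long as

\Delta G_{\text{fold}} + \sum_i \Delta\Delta G_i < 0 ,

so an excess stability margin |\Delta G_{\text{fold}}| acts as a buffer that lets destabilising but activity-improving mutations be tolerated.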
In sexual populations, robustness leads to the accumulation of cryptic genetic variation with high evolutionary potential.
Evolvability may be high when robustness is reversible, with evolutionary capacitance allowing a switch between high robustness in most circumstances and low robustness at times of stress.
cmd.exe interacts with the user through a command-line interface. On Windows, this interface is implemented through the Win32 console. cmd.exe
may take advantage of features available to native programs of its own
platform. For example, on OS/2 and Windows, it can use real pipes in command pipelines, allowing both sides of the pipeline to run concurrently. As a result, it is possible to redirect the standard error stream. (COMMAND.COM uses temporary files, and runs the two sides serially, one after the other.)
Multiple commands can be processed in a single command line using the command separator &&.
When using this separator in the Windows cmd.exe, each command must complete successfully for the following commands to execute. For example:
C:\>CommandA && CommandB && CommandC
In the above example, Command B will only execute if Command A completes successfully, and the execution of Command C depends on the successful completion of Command B. To process subsequent commands even if the previous command produces an error, the command separator & should be used. For example:
C:\>CommandA & CommandB & CommandC
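A minimal illustration of the difference, using a hypothetical missing directory (the exact error text varies by Windows version): with &&, the echo never runs because cd fails; with &, it runs regardless.

C:\>cd C:\NoSuchDir && echo this only runs if cd succeeded
The system cannot find the path specified.

C:\>cd C:\NoSuchDir & echo this runs regardless of the cd result
The system cannot find the path specified.
this runs regardless of the cd result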
On Windows XP
or later, the maximum length of the string that can be used at the
command prompt is 8191 characters. On earlier versions, such as Windows 2000 or Windows NT 4.0, the maximum length of the string is 2047 characters. This limit includes the command line, individual environment variables that are inherited by other processes, and all environment variable expansions.
On Windows, cmd.exe is mostly compatible with COMMAND.COM but provides the following extensions over it:
More detailed error messages than the blanket "Bad command or file name" (in the case of malformed commands) of COMMAND.COM. In OS/2, errors are reported in the chosen language of the system, their text being taken from the system message files. The HELP command can then be issued with the error message number to obtain further information.
Supports using of arrow keys to scroll through command history. (Under DOS this function was only available under DR DOS (through HISTORY) and later via an external component called DOSKEY.)
Adds rotating command-line completion for file and folder paths, where the user can cycle through results for the prefix using the Tab ↹, and ⇧ Shift+Tab ↹ for reverse direction.
Treats the caret character (^) as the escape character: the character following it is taken literally. There are special characters in cmd.exe and COMMAND.COM (e.g. "<", ">" and "|") that alter the behavior of the command-line processor; the caret forces it to interpret them literally (see the sketch after this list).
Supports delayed variable expansion with SETLOCAL EnableDelayedExpansion, allowing the values of variables to be read at run time instead of when the script is parsed before execution (Windows 2000 and later), fixing DOS idioms that made control structures hard to use. The extensions can be disabled, providing a stricter compatibility mode.
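A short batch-file sketch of the last two extensions (the *.txt file mask and the COUNT variable are hypothetical): the caret makes ">" literal, and delayed expansion lets a variable changed inside a parenthesised block be read back with !VAR! at run time, whereas %VAR% would have been expanded once when the block was parsed.

@echo off
rem Caret escaping: prints the characters "1 > 2" instead of redirecting
rem the output of "echo 1" into a file named 2.
echo 1 ^> 2

rem Delayed expansion: !COUNT! is re-read on every loop iteration.
setlocal EnableDelayedExpansion
set COUNT=0
for %%F in (*.txt) do (
    set /a COUNT+=1
    echo %%F is file number !COUNT!
)
echo total files: !COUNT!
endlocal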
Internal commands have also been improved (a combined sketch follows this list):
The DELTREE command was merged into the RD command, as part of its /S switch.
The SetLocal and EndLocal commands limit the scope of changes to the environment. Changes made to the command-line environment after a SetLocal command are local to the batch file; the EndLocal command restores the previous settings.
The Call command allows subroutines within a batch file. The Call command in COMMAND.COM only supports calling external batch files.
File name parser extensions to the Set command are comparable with C shell.
An expansion of the For command supports parsing files and arbitrary sets in addition to file names.
The new PushD and PopD commands provide access past navigated paths similar to "forward" and "back" buttons in a web browser or File Explorer.
The conditional IF command can perform case-insensitive
comparisons and numeric equality and inequality comparisons in addition
to case-sensitive string comparisons. (This was available in DR-DOS,
but not in PC DOS or MS-DOS.)
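A combined batch-file sketch of several of the internal-command extensions above (the file list.txt and the variable names are hypothetical, and the snippet assumes command extensions are enabled):

@echo off
setlocal
rem Variable changes after SETLOCAL stay local until ENDLOCAL.
set GREETING=hello

rem CALL can invoke a labelled subroutine in the same batch file.
call :show %GREETING%

rem FOR /F parses a text file line by line.
for /f "delims=" %%L in (list.txt) do echo read: %%L

rem PUSHD changes directory and remembers the old one; POPD returns to it.
pushd %TEMP%
echo now in %CD%
popd

rem IF /I compares strings case-insensitively; GEQ compares numbers.
if /i "%GREETING%"=="HELLO" echo case-insensitive match
if 10 GEQ 2 echo 10 is at least 2
endlocal
goto :eof

:show
echo argument received: %~1
goto :eof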
Science journalism conveys reporting about science to the public. The field typically involves interactions between scientists, journalists and the public.
Origins
Modern
science journalism originated in weather and other natural history
observations, as well as reports of new scientific findings, reported
by almanacs and other news writing in the centuries following the advent
of the printing press. One early example dates back to Digdarshan (meaning "showing the direction"), an educational monthly magazine that began publication in 1818 from Srirampore, Bengal, India. Digdarshan carried articles on different aspects of science, such as plants and the steam boat, and was available in Bengali, Hindi and English. In the U.S., Scientific American, founded in 1845, is another early example. One of the early occasions on which an
article was attributed to a 'scientific correspondent' was "A Gale in
the Bay of Biscay" by William Crookes which appeared in The Times on 18 January 1871, page 7. Thomas Henry Huxley (1825–1895) and John Tyndall (1820–1893) were scientists who were greatly involved in journalism and Peter Chalmers Mitchell (1864–1945) was Scientific Correspondent for The Times from 1918 to 1935. However it was with James Crowther's appointment as the 'scientific correspondent' of The Manchester Guardian by C. P. Scott
in 1928 that science journalism really took shape. Crowther related
that Scott had declared that there was "no such thing" as science
journalism, at which point Crowther replied that he intended to invent
it. Scott was convinced and then employed him.
Aims
Science
values detail, precision, the impersonal, the technical, the lasting,
facts, numbers and being right. Journalism values brevity,
approximation, the personal, the colloquial, the immediate, stories,
words and being right now. There are going to be tensions.
The aim of a science journalist is to render very detailed, specific, and often jargon-laden
information produced by scientists into a form that non-scientists can
understand and appreciate while still communicating the information
accurately. One way science journalism can achieve that is to avoid an information deficit model
of communication, which assumes a top-down, one-way direction of
communicating information that limits an open dialogue between knowledge
holders and the public. One such way of sparking an inclusive dialogue
between science and society that leads to a broader uptake of post-high
school science discoveries is science blogs.
Science journalists face an increasing need to convey factually correct
information through storytelling techniques in order to tap into both
the rational and emotional side of their audiences, the latter of which
to some extent ensures that the information uptake persists.
Science journalists often have training in the scientific
disciplines that they cover. Some have earned a degree in a scientific
field before becoming journalists or exhibited talent in writing about
science subjects. However, good preparation for interviews and even
deceptively simple questions such as "What does this mean to the people
on the street?" can often help a science journalist develop material
that is useful for the intended audience.
Status
With
budget cuts at major newspapers and other media, there are fewer working
science journalists employed by traditional print and broadcast media
than before.
Similarly, there are currently very few journalists in traditional
media outlets that write multiple articles on emerging science, such as
nanotechnology.
In 2011, there were 459 journalists who had written a newspaper
article covering nanotechnology, of whom 7 wrote about the topic more
than 25 times.
In January 2013, just a week after The Daily Climate reported that worldwide coverage of climate change had continued a three-year slide in 2012 and that among the five largest US dailies, the New York Times published the most stories and had the biggest increase in coverage, that newspaper announced that it was dismantling its environmental desk and merging its journalists with other departments.
News coverage on science by traditional media outlets, such as
newspapers, magazines, radio and news broadcasts is being replaced by
online sources. In April 2012, Pulitzer Prizes were awarded for content published by Politico and The Huffington Post (now HuffPost), both online sources, a sign of the platform shift in the media.
Science information continues to be widely available to the
public online. The increase in access to scientific studies and findings
causes science journalism to adapt. "In many countries the public's
main source of information about science and technology is the mass
media."
Science journalists must compete for attention with other stories that
are perceived as more entertaining. Science information cannot always be
sensationalized to capture attention and the sheer amount of available
information can cause important findings to be buried. The general
public does not typically search for science information unless it is
mentioned or discussed in mainstream media first. However, the mass media are the most important or only source of scientific information for people after completing their education.
A common misconception about public interest surrounds science
journalism. Those who choose which news stories are important typically
assume the public is not as interested in news written by a scientist
and would rather receive news stories that are written by general
reporters instead. A study comparing public interest in news stories written by scientists with stories written by reporters concluded that there is no significant difference.
The public was equally interested in news stories written by a reporter
and a scientist. This is a positive finding for science journalism
because it shows it is increasingly relevant and is relied upon by the
public to make informed decisions. "The vast majority of non-specialists
obtain almost all their knowledge about science from journalists, who
serve as the primary gatekeepers for scientific information." Ethical and accurate reporting by science journalists is vital to keeping the public informed.
Science journalism is practiced differently from traditional journalism.
Conventionally, journalism is seen as more ethical if it offers balanced
reporting and includes information from both sides of an issue. Science
journalism has moved to an authoritative type of reporting in which journalists
present information based on peer-reviewed evidence and either ignore the
conflicting side or point out its lack of evidence. Science journalism
continues to adapt to a slow-journalism method that is very time-consuming
but contains higher-quality information from peer-reviewed sources. Science
journalists also practice sustainable journalism, which focuses on solutions
rather than only the problem.
Presenting information from both sides of the issue can confuse readers
on what the actual findings show. Balanced reporting can actually lead
to unbalanced reporting because it gives attention to extreme minority
views in the science community, implying that both sides have an
equal number of supporters. It can give the false impression that an
opposing minority viewpoint is valid.
For example, a 2019 survey of scientists' views on climate change
yielded a 100% consensus that global warming is human-caused. However,
articles like "Climate Change: A Scientist and Skeptic Exchange
Viewpoints," published by Divided We Fall
in 2018, may unintentionally foster doubt in readers, as this
particular scientist "did not say, as the media and the political class
has said, that the science is settled."
The public benefits from an authoritative reporting style in
guiding them to make informed decisions about their lifestyle and
health.
Tracking the remaining experienced science journalists is
becoming increasingly difficult. For example, in Australia, the number
of science journalists has decreased to the point that "you need less than one hand to count them."
Due to the rapidly decreasing number of science journalists,
experiments on ways to improve science journalism are also rare.
However, in one of the few experiments conducted with science
journalists, when the remaining population of science journalists
networked online, they produced more accurate articles than when in
isolation.
New communication environments provide essentially unlimited
information on a large number of issues, which can be obtained anywhere
and with relatively limited effort. The web also offers opportunities
for citizens to connect with others through social media and other
2.0-type tools to make sense of this information.
"After a lot of hand wringing about the newspaper industry about
six years ago, I take a more optimistic view these days," said Cristine
Russell, president of the Council for the Advancement of Science
Writing. "The world is online. Science writers today have the
opportunity to communicate not just with their audience but globally".
Blog-based science reporting is filling in to some degree, but has problems of its own.
One of the main findings is about the controversy surrounding climate change
and how the media affects people's opinions on this topic. Survey and
experimental research have discovered connections between exposure to
cable and talk show radio channels and views on global warming.
However, early content analyses found that U.S. media outlets exaggerated the extent of dispute over whether global warming is actually occurring. A majority of Americans view global warming as a distant issue that will mainly affect future generations and people in other countries.
This is a problem, considering that much of the public gets most of its information from media sources that are opinionated and far less concerned with supplying facts to their viewers. Research has found that
after people finish their education, the media becomes the most
significant, and for many individuals, the sole source of information
regarding science, scientific findings and scientific processes. Many people fail to realize that information about science from online sources is not always credible.
Since the 1980s, coverage of climate science in the mass media has become increasingly politicized.
In the United States, conservatives and liberals understand global
warming differently. Democrats often accept the evidence for global
warming and think that it is caused by humans, while fewer Republicans
believe this. Democrats and liberals have higher and more steady trust
in scientists, while conservative Republicans' trust in scientists has
been declining.
However, in the United Kingdom, mass media do not have nearly the
same impact on people's opinions as in the United States. The British public has a
different attitude towards the environment, which prompted the UK to
ratify the Kyoto Protocol,
which works to reduce carbon dioxide emissions, while the U.S., the
world's largest emitter of carbon dioxide, has not done so.
The content of news stories regarding climate change is affected by journalistic norms including balance, impartiality,
neutrality and objectivity. Balanced reporting, which involves giving
equal time to each opposing side of a debate over an issue, has had a
rather harmful impact on the media coverage of climate science.
In 2015, John Bohannon produced a deliberately bad study to see how a low-quality open access publisher and the media would pick up its findings. He worked with film-maker Peter Onneken, who was making a film about junk science in the diet industry, in which fad diets become headline news despite terrible study design and almost no evidence.
He invented a fake "diet institute" that lacked even a website, used the
pen name "Johannes Bohannon" and fabricated a press release.
Science journalists keep the public informed of scientific
advancements and assess the appropriateness of scientific research.
However, this work comes with a set of criticisms. Science journalists
regularly come under criticism for misleading reporting of scientific
stories. All three groups of scientists, journalists and the public
often criticize science journalism for bias and inaccuracies. However,
with increasing online collaboration between science journalists, there may be potential for removing inaccuracies.
The 2010 book Merchants of Doubt by historians of science Naomi Oreskes and Erik M. Conway argues that in topics like the global warming controversy, tobacco smoking, acid rain, DDT and ozone depletion, contrarian scientists have sought to "keep the controversy alive" in the public arena by demanding that reporters give false balance to the minority side. Very often, such as with climate change,
this leaves the public with the impression that disagreement within the
scientific community is much greater than it actually is. Science is based on experimental evidence and testing, and disputation is a normal activity.
Scholars have criticized science journalists for:
Uncritical reporting
Emphasizing frames of scientific progress and economic prospect
Not presenting a range of expert opinion
Having preferences toward positive messages
Reporting unrealistic timelines and engaging in the production of a "cycle of hype"
Science journalists can be seen as the gatekeepers of scientific
information. Just like traditional journalists, science journalists are
responsible for what truths reach the public.
Scientific information is often costly to access. This is counterproductive to the goals of science journalism. Open science,
a movement for "free availability and usability of scholarly
publications," seeks to counteract the accessibility issues of valuable
scientific information.
Freely accessible scientific journals will decrease the public's
reliance on potentially biased popular media for scientific information.
Many science magazines, along with newspapers like The New York Times and popular science shows like PBS Nova, tailor their content to relatively highly educated audiences.
Many universities and research institutions focus much of their media
outreach efforts on coverage in such outlets. Some government
departments require journalists to gain clearance to interview a
scientist, and require that a press secretary listen in on phone conversations between government funded scientists and journalists.
Many pharmaceutical marketing representatives have come under fire for offering free meals to doctors in order to promote new drugs.
Critics of science journalists have argued that they should disclose
whether industry groups have paid for a journalist's travel or provided free meals or other gifts.
Science journalism finds itself under a critical eye because it combines the necessary tasks of a journalist with the investigative process of a scientist.
Science journalist responsibility
Science journalists offer important contributions to the open science movement by using the Value Judgement Principle (VJP). Science journalists are responsible for "identifying and explaining major value judgments
for members of the public." In other words, science journalists must
make judgments such as what is good and bad (right and wrong). This is a
very significant role because it helps "equip non-specialists to draw
on scientific information and make decisions that accord with their own
values".
While scientific information is often portrayed in quantitative terms
and can be interpreted by experts, the audience must ultimately decide
how to feel about the information.
Most science journalists begin their careers as either a scientist or a
journalist and transition to science communication.
One area in which science journalists seem to support varying sides of
an issue is in risk communication. Science journalists may choose to
highlight the amount of risk that studies have uncovered while others
focus more on the benefits depending on audience and framing. Science
journalism in contemporary risk societies leads to the
institutionalisation of mediated scientific public spheres which
exclusively discuss science and technology related issues.
This also leads to the development of new professional relationships
between scientists and journalists, which are mutually beneficial.
Spamming remains economically viable because advertisers have no
operating costs beyond the management of their mailing lists, servers,
infrastructures, IP ranges, and domain names, and it is difficult to
hold senders accountable for their mass mailings. The costs, such as
lost productivity and fraud, are borne by the public and by Internet service providers, which have added extra capacity to cope with the volume. Spamming has been the subject of legislation in many jurisdictions.
A person who creates spam is called a spammer.
Etymology
The term spam is derived from the 1970 "Spam" sketch of the BBC sketch comedy television series Monty Python's Flying Circus. The sketch, set in a cafe,
has a waitress reading out a menu where every item but one includes the
Spam canned luncheon meat. As the waitress recites the Spam-filled
menu, a chorus of Viking patrons drown out all conversations with a song, repeating "Spam, Spam, Spam, Spam… Lovely Spam! Wonderful Spam!".
In the 1980s the term was adopted to describe certain abusive users who frequented BBSs and MUDs, who would repeat "Spam" a huge number of times to scroll other users' text off the screen. In early chat-room services like PeopleLink and the early days of
Online America (later known as America Online or AOL), they actually
flooded the screen with quotes from the Monty Python sketch.
This was used as a tactic by insiders of a group that wanted to drive
newcomers out of the room so the usual conversation could continue. It
was also used to prevent members of rival groups from chatting—for
instance, Star Wars fans often invaded Star Trek chat rooms, filling the space with blocks of text until the Star Trek fans left.
It later came to be used on Usenet
to mean excessive multiple posting—the repeated posting of the same
message. The unwanted message would appear in many, if not all
newsgroups, just as Spam appeared in all the menu items in the Monty
Python sketch. One of the earliest people to use "spam" in this sense
was Joel Furr.
This use had also become established—to "spam" Usenet was to flood
newsgroups with junk messages. The word was also attributed to the flood
of "Make Money Fast" messages that clogged many newsgroups during the 1990s. In 1998, the New Oxford Dictionary of English,
which had previously only defined "spam" in relation to the trademarked
food product, added a second definition to its entry for "spam":
"Irrelevant or inappropriate messages sent on the Internet to a large
number of newsgroups or users."
There was also an effort to differentiate between types of newsgroup spam. Messages that were crossposted to too many newsgroups at once, as opposed to those that were posted too frequently, were called "velveeta" (after a cheese product), but this term did not persist.
History
Pre-Internet
In the late 19th century, Western Union
allowed telegraphic messages on its network to be sent to multiple
destinations. The first recorded instance of a mass unsolicited
commercial telegram is from May 1864, when some British politicians
received an unsolicited telegram advertising a dentist.
Emergence of email spam
The earliest documented spam (although the term had not yet been coined) was a message advertising the availability of a new model of Digital Equipment Corporation computers sent by Gary Thuerk to 393 recipients on ARPANET on May 3, 1978.
Rather than send a separate message to each person, which was the
standard practice at the time, he had an assistant, Carl Gartley, write a
single mass email. Reaction from the net community was fiercely
negative, but the spam did generate some sales.
Spamming had been practiced as a prank by participants in multi-user dungeon games, to fill their rivals' accounts with unwanted electronic junk.
The first major commercial spam incident started on March 5, 1994, when a husband and wife team of lawyers, Laurence Canter and Martha Siegel, began using bulk Usenet posting to advertise immigration law services. The incident was commonly termed the "Green Card
spam", after the subject line of the postings. Defiant in the face of
widespread condemnation, the attorneys claimed their detractors were
hypocrites or "zealots", claimed they had a free speech
right to send unwanted commercial messages, and labeled their opponents
"anti-commerce radicals". The couple wrote a controversial book
entitled How to Make a Fortune on the Information Superhighway.
An early example of nonprofit fundraising bulk posting via Usenet also occurred in 1994 on behalf of CitiHope, an NGO attempting to raise funds to rescue children at risk during the Bosnian War. However, as it was a violation of their terms of service, the ISP Panix deleted all of the bulk posts from Usenet, only missing three copies.
Within a few years, the focus of spamming (and anti-spam efforts) moved chiefly to email, where it remains today.
By 1999, Khan C. Smith, a well-known hacker at the time, had begun to
commercialize the bulk email industry and rallied thousands into the
business by building more friendly bulk email software and providing
internet access illegally hacked from major ISPs such as Earthlink and
Botnets.
By 2009 the majority of spam sent around the world was in the English language; spammers began using automatic translation services to send spam in other languages.
Email spam, also known as unsolicited bulk email (UBE), or junk mail,
is the practice of sending unwanted email messages, frequently with
commercial content, in large quantities.
Spam in email started to become a problem when the Internet was opened
for commercial use in the mid-1990s. It grew exponentially over the
following years, and by 2007 it constituted about 80% to 85% of all
e-mail, by a conservative estimate.
Pressure to make email spam illegal has resulted in legislation in some
jurisdictions, but less so in others. The efforts taken by governing
bodies, security systems and email service providers seem to be helping
to reduce the volume of email spam. According to "2014 Internet Security
Threat Report, Volume 19" published by Symantec Corporation, spam volume dropped to 66% of all email traffic.
An industry of email address harvesting is dedicated to collecting email addresses and selling compiled databases.
Some of these address-harvesting approaches rely on users not reading
the fine print of agreements, resulting in their agreeing to send
messages indiscriminately to their contacts. This is a common approach
in social networking spam such as that generated by the social networking site Quechup.
Instant messaging spam makes use of instant messaging
systems. Although less prevalent than its e-mail counterpart, according
to a report from Ferris Research, 500 million spam IMs were sent in
2003, twice the level of 2002.
Newsgroup spam is a type of spam where the targets are Usenet
newsgroups. Spamming of Usenet newsgroups actually pre-dates e-mail
spam. Usenet convention defines spamming as excessive multiple posting,
that is, the repeated posting of a message (or substantially similar
messages). The prevalence of Usenet spam led to the development of the Breidbart Index as an objective measure of a message's "spamminess".
Forum spam is the creation of advertising messages on Internet
forums. It is generally done by automated spambots. Most forum spam
consists of links to external sites, with the dual goals of increasing
search engine visibility in highly competitive areas such as weight
loss, pharmaceuticals, gambling, pornography, real estate or loans, and
generating more traffic for these commercial websites. Some of these
links contain code to track the spambot's identity; if a sale goes
through, the spammer behind the spambot earns a commission.
Mobile phone spam is directed at the text messaging service of a mobile phone.
This can be especially irritating to customers not only for the
inconvenience, but also because of the fee they may be charged per text
message received in some markets.
To comply with CAN-SPAM regulations in the US, SMS messages now must
provide options of HELP and STOP, the latter to end communication with
the advertiser via SMS altogether.
Despite the high number of phone users, there has not been as
much phone spam, because there is a charge for sending SMS messages. More recently,
mobile phone spam has also been observed being delivered via browser
push notifications. These can result from allowing websites that are
malicious, or that deliver malicious ads, to send a user notifications.
Facebook and Twitter are not immune to messages containing spam
links. Spammers hack into accounts and send false links under the guise
of a user's trusted contacts such as friends and family.
As for Twitter, spammers gain credibility by following verified
accounts such as that of Lady Gaga; when that account owner follows the
spammer back, it legitimizes the spammer.
Twitter has studied what interest structures allow their users to
receive interesting tweets and avoid spam, despite the site using the
broadcast model, in which all tweets from a user are broadcast to all
followers of the user.
Spammers, out of malicious intent, post either unwanted (or irrelevant)
information or spread misinformation on social media platforms.
Social spam
Spreading
beyond the centrally managed social networking platforms,
user-generated content increasingly appears on business, government, and
nonprofit websites worldwide. Fake accounts and comments planted by
computers programmed to issue social spam can infiltrate these websites.
Blog spam is spamming on weblogs. In 2003, this type of spam took advantage of the open nature of comments in the blogging software Movable Type
by repeatedly placing comments to various blog posts that provided
nothing more than a link to the spammer's commercial web site.
Similar attacks are often performed against wikis and guestbooks, both of which accept user contributions.
Another possible form of spam in blogs is the spamming of a certain tag on websites such as Tumblr.
Spam targeting video sharing sites
In actual video spam, the uploaded video is given a name and description featuring a popular figure or event likely to draw attention, or a certain image within the video is timed to come up as the video's thumbnail image to mislead the viewer. Examples include a still image from a feature film purporting to be a part-by-part upload of a pirated movie (e.g. Big Buck Bunny Full Movie Online - Part 1/10 HD), or a link to a supposed keygen, trainer or ISO file for a video game, or something similar. The actual content of the video ends up being totally unrelated, a Rickroll, offensive, or simply on-screen text of a link to the site being promoted.
In some cases, the link in question may lead to an online survey site, a
password-protected archive file with instructions leading to the
aforementioned survey (though the survey, and the archive file itself,
is worthless and does not contain the file in question at all), or in
extreme cases, malware. Others may upload videos presented in an infomercial-like format selling their product which feature actors and paid testimonials, though the promoted product or service is of dubious quality and would likely not pass the scrutiny of a standards and practices department at a television station or cable network.
VoIP spam is VoIP (Voice over Internet Protocol) spam, usually using SIP (Session Initiation Protocol).
This is nearly identical to telemarketing calls over traditional phone
lines. When the user chooses to receive the spam call, a pre-recorded
spam message or advertisement is usually played back. This is generally
easier for the spammer as VoIP services are cheap and easy to anonymize
over the Internet, and there are many options for sending mass numbers of
calls from a single location. Accounts or IP addresses being used for
VoIP spam can usually be identified by a large number of outgoing calls,
low call completion and short call length.
Academic search
Academic search engines enable researchers to find academic literature and are used to obtain citation data for calculating author-level metrics. Researchers from the University of California, Berkeley and OvGU demonstrated that most (web-based) academic search engines, especially Google Scholar, are not capable of identifying spam attacks.
The researchers manipulated the citation counts of articles, and
managed to make Google Scholar index complete fake articles, some
containing advertising.
Mobile apps
Spamming
in mobile app stores includes (i) apps that were automatically generated
and as a result do not have any specific functionality or a meaningful
description; (ii) multiple instances of the same app being published to
obtain increased visibility in the app market; and (iii) apps that make
excessive use of unrelated keywords to attract users through unintended
searches.
Bluetooth
Bluespam, or the action of sending spam to Bluetooth-enabled devices, is another form of spam that has developed in recent years.
Noncommercial forms
E-mail
and other forms of spamming have been used for purposes other than
advertisements. Many early Usenet spams were religious or political. Serdar Argic, for instance, spammed Usenet with historical revisionist screeds. A number of evangelists have spammed Usenet
and e-mail media with preaching messages. A growing number of criminals
are also using spam to perpetrate various sorts of fraud.
Geographical origins
In 2011 the origins of spam were analyzed by Cisco Systems. They provided a report that shows spam volume originating from countries worldwide.
Hormel Foods Corporation, the maker of SPAM
luncheon meat, does not object to the Internet use of the term
"spamming". However, they did ask that the capitalized word "Spam" be
reserved to refer to their product and trademark.
Cost–benefit analyses
The European Union's Internal Market Commission estimated in 2001 that "junk email" cost Internet users €10 billion per year worldwide.
The California legislature found that spam cost United States
organizations alone more than $13 billion in 2007, including lost
productivity and the additional equipment, software, and manpower needed
to combat the problem.
Spam's direct effects include the consumption of computer and network
resources, and the cost in human time and attention of dismissing
unwanted messages. Large companies who are frequent spam targets utilize numerous techniques to detect and prevent spam.
The cost to providers of search engines
is significant: "The secondary consequence of spamming is that search
engine indexes are inundated with useless pages, increasing the cost of
each processed query".
The costs of spam also include the collateral costs of the struggle
between spammers and the administrators and users of the media
threatened by spamming.
Email spam exemplifies a tragedy of the commons:
spammers use resources (both physical and human), without bearing the
entire cost of those resources. In fact, spammers commonly do not bear
the cost at all. This raises the costs for everyone. In some ways spam is even a potential threat
to the entire email system, as operated in the past. Since email is so
cheap to send, a tiny number of spammers can saturate the Internet with
junk mail. Although only a tiny percentage of their targets are
motivated to purchase their products (or fall victim to their scams),
the low cost may provide a sufficient conversion rate to keep the
spamming alive. Furthermore, even though spam appears not to be
economically viable as a way for a reputable company to do business, it
suffices for professional spammers to convince a tiny proportion of
gullible advertisers that it is viable for those spammers to stay in
business. Finally, new spammers go into business every day, and the low
costs allow a single spammer to do a lot of harm before finally
realizing that the business is not profitable.
Some companies and groups "rank" spammers; spammers who make the news are sometimes referred to by these rankings.
General costs
In all cases listed above, including both commercial and non-commercial, "spam happens" because the spammer's cost-benefit analysis comes out positive when the cost to recipients is excluded as an externality that the spammer can avoid paying.
Cost is the combination of
Overhead: The costs and overhead of electronic spamming include
bandwidth, developing or acquiring an email/wiki/blog spam tool, taking
over or acquiring a host/zombie, etc.
Transaction cost:
The incremental cost of contacting each additional recipient once a
method of spamming is constructed, multiplied by the number of
recipients (see CAPTCHA as a method of increasing transaction costs).
Risks: Chance and severity of legal and/or public reactions, including damages and punitive damages.
Damage: Impact on the community and/or communication channels being spammed (see Newsgroup spam).
Benefit is the total expected profit from spam, which may
include any combination of the commercial and non-commercial reasons
listed above. It is normally linear, based on the incremental benefit of
reaching each additional spam recipient, combined with the conversion rate. The conversion rate for botnet-generated
spam has recently been measured to be around one in 12,000,000 for
pharmaceutical spam and one in 200,000 for infection sites as used by
the Storm botnet.
The authors of the study calculating those conversion rates noted,
"After 26 days, and almost 350 million e-mail messages, only 28 sales
resulted."
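As a rough consistency check of those figures (an illustrative calculation, not from the study itself): 28 sales from roughly 350 million messages gives

\frac{28}{3.5 \times 10^{8}} \approx \frac{1}{1.25 \times 10^{7}} ,

i.e. on the order of the quoted one in 12,000,000. More generally, a campaign pays off for the sender only if N r v > C_{\text{fixed}} + N c, where N is the number of messages, r the conversion rate, v the revenue per conversion and c the near-zero marginal cost per message (the symbols here are illustrative notation); with c \approx 0, even a minuscule r can satisfy the inequality.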
In crime
Spam can be used to spread computer viruses, trojan horses or other malicious software. The objective may be identity theft, or worse (e.g., advance fee fraud).
Some spam attempts to capitalize on human greed, while some attempts to
take advantage of the victims' inexperience with computer technology to
trick them (e.g., phishing).
One of the world's most prolific spammers, Robert Alan Soloway, was arrested by US authorities on May 31, 2007.
Described as one of the top ten spammers in the world, Soloway was
charged with 35 criminal counts, including mail fraud, wire fraud, e-mail fraud, aggravated identity theft, and money laundering. Prosecutors allege that Soloway used millions of "zombie" computers to distribute spam during 2003.
This is the first case in which US prosecutors used identity theft laws
to prosecute a spammer for taking over someone else's Internet domain
name.
In an attempt to assess potential legal and technical strategies
for stopping illegal spam, a study cataloged three months of online spam
data and researched website naming and hosting infrastructures. The
study concluded that: 1) half of all spam programs have their domains
and servers distributed over just eight percent or fewer of the total
available hosting registrars and autonomous systems, with 80 percent of
spam programs overall being distributed over just 20 percent of all
registrars and autonomous systems; 2) of the 76 purchases for which the
researchers received transaction information, there were only 13
distinct banks acting as credit card acquirers and only three banks
provided the payment servicing for 95 percent of the spam-advertised
goods in the study; and, 3) a "financial blacklist" of banking entities
that do business with spammers would dramatically reduce monetization of
unwanted e-mails. Moreover, this blacklist could be updated far more
rapidly than spammers could acquire new banking resources, an asymmetry
favoring anti-spam efforts.
Political issues
An ongoing concern expressed by parties such as the Electronic Frontier Foundation and the American Civil Liberties Union
has to do with so-called "stealth blocking", a term for ISPs employing
aggressive spam blocking without their users' knowledge. These groups'
concern is that ISPs or technicians seeking to reduce spam-related costs
may select tools that (either through error or design) also block
non-spam e-mail from sites seen as "spam-friendly". Few object to the
existence of these tools; it is their use in filtering the mail of users
who are not informed of their use that draws fire.
Even though it is possible in some jurisdictions to treat some spam as unlawful merely by applying existing laws against trespass and conversion, some laws specifically targeting spam have been proposed. In 2004, the United States' CAN-SPAM Act of 2003 came into force, providing ISPs with tools to combat spam. This act allowed Yahoo!
to successfully sue Eric Head who settled the lawsuit for several
thousand U.S. dollars in June 2004. But the law is criticized by many
for not being effective enough. Indeed, the law was supported by some
spammers and organizations that support spamming, and opposed by many in
the anti-spam community.
Earthlink
won a $25 million judgment against one of the most notorious and active
"spammers" Khan C. Smith in 2001 for his role in founding the modern
spam industry, which dealt billions in economic damage and brought
thousands of spammers into the industry. His email efforts were said to make up more than a third of all Internet email being sent from 1999 until 2002.
Sanford Wallace
and Cyber Promotions were the target of a string of lawsuits, many of
which were settled out of court, up through a 1998 Earthlink settlement that put Cyber Promotions out of business. Attorney Laurence Canter was disbarred by the Tennessee Supreme Court in 1997 for sending prodigious amounts of spam advertising his immigration law practice. In 2005, Jason Smathers, a former America Online employee, pleaded guilty to charges of violating the CAN-SPAM Act.
In 2003, he sold a list of approximately 93 million AOL subscriber
e-mail addresses to Sean Dunaway who sold the list to spammers.
In 2007, Robert Soloway
lost a case in a federal court against the operator of a small
Oklahoma-based Internet service provider who accused him of spamming.
U.S. Judge Ralph G. Thompson granted a motion by plaintiff Robert Braver
for a default judgment and permanent injunction against him. The judgment includes a statutory damages award of about $10 million under Oklahoma law.
In June 2007, two men were convicted of eight counts stemming
from sending millions of e-mail spam messages that included hardcore
pornographic images. Jeffrey A. Kilbride, 41, of Venice, California was sentenced to six years in prison, and James R. Schaffer, 41, of Paradise Valley, Arizona, was sentenced to 63 months. In addition, the two were fined $100,000, ordered to pay $77,500 in restitution to AOL, and ordered to forfeit more than $1.1 million, the amount of illegal proceeds from their spamming operation. The charges included conspiracy, fraud, money laundering, and transportation of obscene materials. The trial, which began on June 5, was the first to include charges under the CAN-SPAM Act of 2003, according to a release from the Department of Justice.
The specific law that prosecutors used under the CAN-Spam Act was
designed to crack down on the transmission of pornography in spam.
In 2005, Scott J. Filary and Donald E. Townsend of Tampa, Florida were sued by Florida Attorney General Charlie Crist for violating the Florida Electronic Mail Communications Act. The two spammers were required to pay $50,000 USD to cover the costs of investigation by the state of Florida,
and a $1.1 million penalty if spamming were to continue, the $50,000
was not paid, or the financial statements provided were found to be
inaccurate. The spamming operation was successfully shut down.
Edna Fiedler of Olympia, Washington, on June 25, 2008, pleaded guilty in a Tacoma court and was sentenced to two years imprisonment and five years of supervised release or probation
in an Internet $1 million "Nigerian check scam." She conspired to commit bank, wire and mail fraud against US citizens, specifically using the Internet, and had an accomplice who shipped counterfeit checks and money orders to her from Lagos, Nigeria, the previous November. Fiedler had shipped out $609,000 in fake checks and money orders when arrested and was preparing to send an additional $1.1 million in counterfeit materials. Also, the U.S. Postal Service recently
intercepted counterfeit checks, lottery tickets and eBay overpayment
schemes with a value of $2.1 billion.
In a 2009 opinion, Gordon v. Virtumundo, Inc.,
575 F.3d 1040, the Ninth Circuit assessed the standing requirements
necessary for a private plaintiff to bring a civil cause of action
against spam senders under the CAN-SPAM Act of 2003, as well as the
scope of the CAN-SPAM Act's federal preemption clause.
United Kingdom
In the first successful case of its kind, Nigel Roberts from the Channel Islands won £270 against Media Logistics UK who sent junk e-mails to his personal account.
In January 2007, a Sheriff Court in Scotland awarded Mr. Gordon
Dick £750 (the then maximum sum that could be awarded in a Small Claim
action) plus expenses of £618.66, a total of £1368.66 against Transcom
Internet Services Ltd. for breaching anti-spam laws.
Transcom had been legally represented at earlier hearings, but were not
represented at the proof, so Gordon Dick got his decree by default. It
is the largest amount awarded in compensation in the United Kingdom
since Roberts v Media Logistics case in 2005.
Despite the statutory tort that is created by the Regulations
implementing the EC Directive, few other people have followed their
example. As the Courts engage in active case management, such cases
would probably now be expected to be settled by mediation and payment of
nominal damages.
New Zealand
In
October 2008, an international internet spam operation run from New
Zealand was cited by American authorities as one of the world's largest,
and for a time responsible for up to a third of all unwanted e-mails.
In a statement the US Federal Trade Commission (FTC) named
Christchurch's Lance Atkinson as one of the principals of the operation.
New Zealand's Internal Affairs announced it had lodged a $200,000 claim
in the High Court against Atkinson and his brother Shane Atkinson
and courier Roland Smits, after raids in Christchurch. This marked the
first prosecution since the Unsolicited Electronic Messages Act (UEMA)
was passed in September 2007.
The FTC said it had received more than three million complaints about
spam messages connected to this operation, and estimated that it may be
responsible for sending billions of illegal spam messages. The US
District Court froze the defendants' assets to preserve them for
consumer redress pending trial.
U.S. co-defendant Jody Smith forfeited more than $800,000 and faces up
to five years in prison for charges to which he pleaded guilty.
Bulgaria
While most countries either outlaw or at least ignore spam, Bulgaria is the first and, so far, the only one to legalize it. According to the Bulgarian E-Commerce act
(Чл.5,6) anyone can send spam to mailboxes published as owned by a
company or organization as long as there is a "clear and straight
indication that the message is unsolicited commercial e-mail" ("да
осигури ясното и недвусмислено разпознаване на търговското съобщение
като непоискано") in the message body.
This made lawsuits against Bulgarian ISPs and public e-mail
providers with antispam policies possible, as they are obstructing legal
commerce activity and thus violate Bulgarian antitrust acts. While there
have been no such lawsuits so far, several cases of spam obstruction are
currently awaiting decision in the Bulgarian Antitrust Commission
(Комисия за защита на конкуренцията) and can end with serious fines for
the ISPs in question.
The law contains other dubious provisions — for example, the
creation of a nationwide public electronic register of e-mail addresses
that do not want to receive spam. It is usually abused as the perfect source for e-mail address harvesting, because publishing invalid or incorrect information in such a register is a criminal offense in Bulgaria.