
Wednesday, December 22, 2021

Internet manipulation

From Wikipedia, the free encyclopedia

Internet manipulation refers to the co-optation of digital technology, such as social media algorithms and automated scripts, for commercial, social or political purposes. Such tactics may be employed with the explicit intent to manipulate public opinion, polarise citizens, silence political dissidents, harm corporate or political adversaries, and improve personal or brand reputation. Hackers, hired professionals and private citizens have all been reported to engage in internet manipulation using software – typically Internet bots such as social bots, votebots and clickbots.

Cognitive hacking refers to a cyberattack that aims to change users' perceptions and corresponding behaviors.

Internet manipulation is sometimes also used to describe selective Internet censorship or violations of net neutrality.

Issues

  • Behavior manipulation: Fake news, disinformation, and AI can covertly affect behavior. This is a different issue from affecting cognitive beliefs, as behavioral manipulation can operate outside of awareness, making it harder to detect.
  • High-arousal emotion virality: Content that evokes high-arousal emotions (e.g. awe, anger, anxiety, or content with hidden sexual meaning) has been found to be more viral, as has content that is surprising, interesting, or useful.
  • Simplicity over complexity: Providing and perpetuating simple explanations for complex circumstances may be used for online manipulation. Such explanations are often easier to believe, arrive before any adequate investigation has been made, and achieve higher virality than complex, nuanced explanations and information.
  • Peer influence: Prior collective ratings of web content influence one's own perception of it. In 2015 it was shown that the perceived beauty of a piece of artwork in an online context varies with external influence: confederate ratings, varying in opinion and credibility, were manipulated for participants who were asked to evaluate a piece of artwork. Furthermore, on Reddit, it has been found that content that initially gets a few downvotes often continues going negative, and vice versa for upvotes. This is referred to as "bandwagon/snowball voting" by Reddit users and administrators.
  • Filter bubbles: Echo chambers and filter bubbles may be created by website administrators or moderators locking out people with differing viewpoints, by establishing certain rules, or by the typical member viewpoints of online sub-communities or Internet "tribes".
  • Confirmation bias & manipulated prevalence: Fake news does not need to be read to have an effect; its headlines and sound bites alone can shape perception through sheer quantity and emotional impact. The apparent prevalence of specific points, views, issues, and people can be amplified, stimulated, or simulated. (See also: Mere-exposure effect)
  • Information timeliness and uncorrectability: Clarifications, conspiracy debunking, and exposure of fake news often come late, when the damage is already done, and/or fail to reach the bulk of the audience of the associated misinformation.
  • Psychological targeting: Social media activities and other data can be used to analyze people's personalities and predict their behaviour and preferences. Michal Kosinski developed such a procedure. Models of this kind can be used to tailor media or information to a person's psyche, e.g. via Facebook, and according to reports may have played an integral part in Donald Trump's 2016 win. A minimal sketch of the approach follows this list.
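As a rough illustration of psychological targeting, the sketch below fits a logistic-regression model that predicts a binary personality trait from page "likes", in the spirit of the published research; the dataset, trait, and all numbers here are synthetic placeholders, not Kosinski's actual procedure.

```python
# Hypothetical sketch: predicting a personality trait from page "likes".
# The data is synthetic; real studies used millions of users and
# validated psychometric questionnaires.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 1000, 200

# Binary user x page "like" matrix (1 = user liked the page).
likes = rng.integers(0, 2, size=(n_users, n_pages))

# Synthetic ground truth: a trait loosely correlated with a subset of pages.
weights = rng.normal(0, 1, n_pages)
trait = (likes @ weights + rng.normal(0, 5, n_users)) > 0

X_train, X_test, y_train, y_test = train_test_split(likes, trait, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# A campaign could then rank users by predicted trait probability and
# serve each segment differently tailored messaging.
scores = model.predict_proba(X_test)[:, 1]
```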

Algorithms, echo chambers and polarization

Due to the overabundance of online content, social networking platforms and search engines have leveraged algorithms to tailor and personalize users' feeds based on their individual preferences. However, algorithms also restrict exposure to different viewpoints and content, producing what are commonly referred to as "echo chambers" and "filter bubbles".

With the help of algorithms, filter bubbles influence users' choices and perception of reality by giving the impression that a particular point of view or representation is widely shared. Following the United Kingdom's 2016 referendum on membership of the European Union and the 2016 United States presidential election, this gained attention, as many individuals confessed their surprise at results that seemed very distant from their expectations. The range of pluralism is influenced by the personalized individualization of services and the way it diminishes choice. Five manipulative verbal influences have been found in media texts: self-expression, semantic speech strategies, persuasive strategies, swipe films and information manipulation. The vocabulary toolkit for speech manipulation includes euphemism, mood vocabulary, situational adjectives, slogans, verbal metaphors, etc.

Research on echo chambers by Flaxman, Goel, and Rao, Pariser, and Grömping suggests that the use of social media and search engines tends to increase ideological distance among individuals.

Comparisons between online and offline segregation have indicated that segregation tends to be higher in face-to-face interactions with neighbors, co-workers, or family members, and reviews of existing research have indicated that the available empirical evidence does not support the most pessimistic views about polarization. A study conducted by Bakshy, Messing, and Adamic of Facebook and the University of Michigan, for example, suggested that individuals' own choices drive algorithmic filtering, limiting exposure to a range of content. While algorithms may not be causing polarization, they could amplify it, representing a significant component of the new information landscape.
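As a toy illustration of the feedback loop described above (not any platform's actual ranking system; every parameter is invented), the following sketch shows how a recommender that favors items similar to a user's click history progressively narrows the range of viewpoints shown:

```python
# Toy model of algorithmic filtering narrowing exposure (numbers invented).
# Items carry a "viewpoint" in [-1, 1]; the recommender favors items close
# to the user's click history, and the user clicks agreeable items.
import random

random.seed(1)
user_view = 0.3          # the user's latent leaning
history = [0.0]          # viewpoints of previously clicked items

for step in range(1000):
    candidates = [random.uniform(-1, 1) for _ in range(20)]
    center = sum(history) / len(history)
    # Personalization: rank candidates by closeness to the click history.
    ranked = sorted(candidates, key=lambda v: abs(v - center))
    shown = ranked[:5]
    # The user usually clicks the shown item closest to their own leaning.
    choice = min(shown, key=lambda v: abs(v - user_view))
    if random.random() < 0.9:
        history.append(choice)

# Over time the spread of viewpoints actually seen collapses.
spread = max(history[-100:]) - min(history[-100:])
print(f"viewpoint spread of last 100 clicks: {spread:.2f}")
```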

Research and use by intelligence and military agencies

Some of the leaked JTRIG operation methods/techniques

The Joint Threat Research Intelligence Group (JTRIG), a unit of the Government Communications Headquarters (GCHQ), the British intelligence agency, was revealed as part of the global surveillance disclosures in documents leaked by the former National Security Agency contractor Edward Snowden. Its mission scope includes using "dirty tricks" to "destroy, deny, degrade [and] disrupt" enemies. Core tactics include injecting false material onto the Internet in order to destroy the reputation of targets, and manipulating online discourse and activism, for which methods such as posting material to the Internet and falsely attributing it to someone else, pretending to be a victim of the target individual whose reputation is intended to be destroyed, and posting "negative information" on various forums may be used.

Known as "Effects" operations, the work of JTRIG had become a "major part" of GCHQ's operations by 2010. The unit's online propaganda efforts (named "Online Covert Action") utilize "mass messaging" and the "pushing [of] stories" via the medium of Twitter, Flickr, Facebook and YouTube. Online "false flag" operations are also used by JTRIG against targets. JTRIG have also changed photographs on social media sites, as well as emailing and texting colleagues and neighbours with "unsavory information" about the targeted individual. In June 2015, NSA files published by Glenn Greenwald revealed new details about JTRIG's work at covertly manipulating online communities. The disclosures also revealed the technique of "credential harvesting", in which journalists could be used to disseminate information and identify non-British journalists who, once manipulated, could give information to the intended target of a secret campaign, perhaps providing access during an interview. It is unknown whether the journalists would be aware that they were being manipulated.

Furthermore, Russia is frequently accused of financing "trolls" to post pro-Russian opinions across the Internet. The Internet Research Agency has become known for employing hundreds of Russians to post propaganda online under fake identities in order to create the illusion of massive support. In 2016 Russia was accused of sophisticated propaganda campaigns to spread fake news with the goal of punishing Democrat Hillary Clinton and helping Republican Donald Trump during the 2016 presidential election as well as undermining faith in American democracy.

In a 2017 report, Facebook publicly stated that its site had been exploited by governments to manipulate public opinion in other countries, including during the presidential elections in the US and France. It identified three main components of an information operations campaign: targeted data collection, content creation, and false amplification. These include stealing and exposing information that is not public; spreading stories, false or real, to third parties through fake accounts; and coordinating fake accounts to manipulate political discussion, such as amplifying some voices while repressing others.

In politics

In 2016, Andrés Sepúlveda disclosed that he had manipulated public opinion to rig elections in Latin America. According to him, with a budget of $600,000 he led a team of hackers that stole campaign strategies, manipulated social media to create false waves of enthusiasm and derision, and installed spyware in opposition offices to help Enrique Peña Nieto, a right-of-center candidate, win Mexico's 2012 presidential election.

In the run-up to India's 2014 elections, both the Bharatiya Janata Party (BJP) and the Congress party were accused of hiring "political trolls" to talk favourably about them on blogs and social media.

The Chinese government is also believed to run a so-called "50-cent army" (a reference to how much they are said to be paid) and the "Internet Water Army" to reinforce favourable opinion towards it and the Communist Party of China (CCP) as well as to suppress dissent.

In December 2014, the Ukrainian information ministry was launched to counter Russian propaganda. One of its first tasks was the creation of social media accounts (known collectively as the i-Army) that posed as residents of eastern Ukraine and amassed friends.

In October 2018, Twitter suspended a number of bot accounts that appeared to be spreading pro-Saudi tweets about the disappearance of the Saudi dissident journalist Jamal Khashoggi.

In business and marketing

Trolling and other applications

In April 2009, Internet trolls of 4chan voted Christopher Poole, founder of the site, the world's most influential person of 2008, with 16,794,368 votes in an open Internet poll conducted by Time magazine. The results were questioned even before the poll was completed, as automated voting programs and manual ballot stuffing were used to influence the vote. 4chan's interference with the vote seemed increasingly likely when it was found that reading the first letter of the first 21 candidates in the poll spelled out a phrase containing two 4chan memes: "Marblecake. Also, The Game".

Jokesters and politically oriented hacktivists may share sophisticated knowledge of how to manipulate the Web and social media.

Countermeasures

In Wired it was noted that nation-state rules, such as compulsory registration and threats of punishment, are not adequate measures to combat the problem of online bots.

To guard against prior ratings influencing perception, several websites, such as Reddit, have taken steps like hiding a post's vote count for a specified time.
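A minimal sketch of that countermeasure, with a purely hypothetical threshold: the score of a young post is simply withheld so that early votes cannot anchor later voters.

```python
# Hypothetical sketch: hide a post's score while it is young, so early
# up/downvotes cannot trigger bandwagon voting (the threshold is invented).
from datetime import datetime, timedelta, timezone

HIDE_SCORE_FOR = timedelta(hours=2)

def visible_score(score: int, posted_at: datetime) -> str:
    age = datetime.now(timezone.utc) - posted_at
    if age < HIDE_SCORE_FOR:
        return "score hidden"   # perception can't anchor on early votes
    return str(score)
```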

Other potential measures under discussion include flagging posts as likely satire or false. For instance, in December 2016 Facebook announced that disputed articles would be marked with the help of users and outside fact-checkers. The company seeks ways to identify 'information operations' and fake accounts, and suspended 30,000 accounts before the presidential election in France in a strike against information operations.

Tim Berners-Lee, inventor of the World Wide Web, considers putting a few companies in charge of deciding what is or isn't true a risky proposition, and states that openness can make the web more truthful. As an example he points to Wikipedia which, while not perfect, allows anyone to edit, with the key to its success being not just the technology but also the governance of the site: it has an army of countless volunteers and ways of determining what is or isn't true.

Furthermore, various kinds of software may be used to combat this problem, such as fact-checking software, or voluntary browser extensions that store every website one reads, or use the browsing history, to notify those who have read a fake story once some kind of consensus has been reached on its falsehood.

Furthermore, Daniel Suarez asks society to value critical analytic thinking and suggests education reforms such as the introduction of 'formal logic' as a discipline in schools and training in media literacy and objective evaluation.

Government responses

According to a study of the Oxford Internet Institute, at least 43 countries around the globe have proposed or implemented regulations specifically designed to tackle different aspects of influence campaigns, including fake news, social media abuse, and election interference.

Germany

In Germany, during the period preceding the elections in September 2017, all major political parties except the AfD publicly announced that they would not use social bots in their campaigns. Additionally, they committed to strongly condemning such usage of online bots.

Moves towards regulation of social media have also been made: in early 2017, three German states (Hessen, Bavaria, and Saxony-Anhalt) proposed a law under which social media users could face prosecution if they violate a platform's terms and conditions. For example, the use of a pseudonym on Facebook, or the creation of a fake account, would be punishable by up to one year's imprisonment.

Italy

In early 2018, the Italian communications authority AGCOM published a set of guidelines on its website targeting the elections in March of that year. The six main topics are:

  1. Equal treatment of political subjects
  2. Transparency of political propaganda
  3. Illicit content and activities whose dissemination is forbidden (i.e. polls)
  4. Social media accounts of public administrations
  5. Prohibition of political propaganda on election day and the day before
  6. Recommendations for stronger fact-checking services

France

In November 2018, a law against the manipulation of information was passed in France. The law stipulates that during campaign periods:

  • Digital platforms must disclose the amount paid for ads and the names of their authors. Past a certain traffic threshold, platforms are required to have a representative present in France, and must publish the algorithms used.
  • An interim judge may issue a legal injunction to halt the spread of fake news swiftly. To qualify, 'fake news' must satisfy the following: (a) it must be manifest; (b) it must be disseminated on a massive scale; and (c) it must lead to a disturbance of the peace or compromise the outcome of an election.

Malaysia

In April 2018, the Malaysian parliament passed the Anti-Fake News Act, which defined fake news as 'news, information, data and reports which is or are wholly or partly false.' It applied to citizens and to those working at digital publications, with imprisonment of up to six years possible. However, the law was repealed after heavy criticism in August 2018.

Kenya

In May 2018, President Uhuru Kenyatta signed into law the Computer and Cybercrimes bill, which criminalised cybercrimes including cyberbullying and cyberespionage. If a person "intentionally publishes false, misleading or fictitious data or misinforms with intent that the data shall be considered or acted upon as authentic," they are subject to fines and up to two years' imprisonment.

Research

German chancellor Angela Merkel has tasked the Bundestag with addressing the possibilities of political manipulation by social bots and fake news.

Media manipulation

From Wikipedia, the free encyclopedia

Examples of televised manipulation can be found in news programs that can potentially influence mass audiences. Pictured is the infamous Dziennik (Journal) newscast, which attempted to slander capitalism in then-communist Poland using emotive and loaded language.

Media manipulation is a series of related techniques in which partisans create an image or argument that favours their particular interests. Such tactics may include the use of logical fallacies, manipulation, outright deception (disinformation), rhetorical and propaganda techniques, and often involve the suppression of information or points of view by crowding them out, by inducing other people or groups of people to stop listening to certain arguments, or by simply diverting attention elsewhere. In Propaganda: The Formation of Men's Attitudes, Jacques Ellul writes that public opinion can only express itself through channels which are provided by the mass media of communication – without which there could be no propaganda. It is used within public relations, propaganda, marketing, etc. While the objective for each context is quite different, the broad techniques are often similar.

As illustrated below, many of the more modern mass media manipulation methods are types of distraction, on the assumption that the public has a limited attention span.

Contexts

Activism

Activism is the practice or doctrine that emphasizes direct vigorous action, especially in support of or opposition to one side of a controversial matter. Quite simply, it is starting a movement to affect or change social views. It is frequently started by influential individuals but is carried out collectively through social movements with large masses. These movements can take the form of public rallies, strikes, street marches, and even rants on social media.

A large social movement that changed public opinion over time was the Civil Rights March on Washington of 28 August 1963, where Martin Luther King Jr. delivered his 'I Have a Dream' speech in an attempt to change social views of non-white Americans in the United States. Most of King's actions took the form of non-violent rallies and public speeches, to show the white American population that the movement was peaceful but also wanted change in its community. In 1964, the Civil Rights Act was passed, granting non-white Americans legal equality with all races.

Advertising

"Daisy", a TV commercial for the re-election of U.S. President Lyndon B. Johnson. It aired only once, in September 1964, and is considered both one of the most controversial and one of the most effective political ads in U.S. history.
 

Advertising is the action of attracting public attention to something, especially through paid announcements for products and services. It is typically done by businesses that wish to sell a product by paying media outlets to show their products or services during television breaks, in banners on websites, and in mobile applications.

Advertisements are not placed only by businesses. Non-commercial advertisers spend money on advertising in the hope of raising awareness of a cause or promoting specific ideas; they include interest groups, political parties, government organizations and religious movements. Most of these organizations intend to spread a message or sway public opinion rather than sell products or services. Advertising is found not only on social media but also on billboards, in newspapers and magazines, and even through word of mouth.

Hoaxing

A hoax is something intended to deceive or defraud. When a newspaper or news program reports a fake story, it is known as a hoax. Misleading public stunts, scientific frauds, false bomb threats and business scams are examples of hoaxes. A common aspect of hoaxes is that they are all meant to deceive or lie. For something to become a hoax, the lie must have something more to offer: it must be outrageous and dramatic, but also believable and ingenious. Above all, it must be able to attract attention from the public. Once it has done that, the hoax is in full effect.

An example of a hoax can be found in a video from 2012, paid for by Greenpeace and made by the Yes Men, that went viral. The video, purported to be cell phone footage from a Shell party celebrating the beginning of Arctic drilling operations, shows a drinking fountain designed to look like an oil platform malfunctioning and spraying a dark, oil-like beverage over a woman. This causes a commotion, with employees seen rushing to mop up the mess and security guards attempting to confiscate the footage. The hoax continued with the distribution of fake legal notices from Shell threatening legal action against the activists who supposedly pulled off the stunt, and even a fake website designed to look like Shell's, with copy such as "Birds are like sponges … for oil!"

Propagandizing

Propagandizing is a form of communication that is aimed at influencing the attitude of a community toward some cause or position by presenting only one side of an argument. Propaganda is commonly created by governments, but some forms of mass communication created by other powerful organizations can be considered propaganda as well. As opposed to impartially providing information, propaganda, in its most basic sense, presents information primarily to influence an audience. Propaganda is usually repeated and dispersed over a wide variety of media in order to create the chosen result in audience attitudes. While the term propaganda has justifiably acquired a strongly negative connotation by association with its most manipulative and jingoistic examples (e.g. Nazi propaganda used to justify the Holocaust), propaganda in its original sense was neutral, and could refer to uses that were generally benign or innocuous, such as public health recommendations, signs encouraging citizens to participate in a census or election, or messages encouraging persons to report crimes to the police, among others.

Propaganda uses societal norms and myths that people hear and believe. Because people respond to, understand, and remember simple ideas more readily, simple ideas are what is used to influence people's beliefs, attitudes and values.

Psychological warfare

Psychological warfare is sometimes considered synonymous with propaganda. The principal distinction is that propaganda normally occurs within a nation, whereas psychological warfare normally takes place between nations, often during war or cold war. Various techniques are used to influence a target's values, beliefs, emotions, motives, reasoning, or behavior. Target audiences can be governments, organizations, groups, and individuals.

This tactic has been used in multiple wars throughout history. During World War II, the western Allies, with the exception of the Soviet Union, made extensive use of leaflet drops. During the conflict with Iraq, American and British forces dropped leaflets, many of which told people how to surrender. In the Korean War, both sides used loudspeakers from the front lines. During the 2009 Gaza war, people in Israel received text messages on their cell phones threatening rocket attacks, while Palestinians received phone calls and leaflets warning them that their areas were about to be struck. These phone calls and leaflets were not always accurate.

Public relations

Public relations (PR) is the management of the flow of information between an individual or an organization and the public. Public relations may include an organization or individual gaining exposure to their audiences using topics of public interest and news items that do not require direct payment. PR is generally created by specialized individuals or firms at the behest of already public individuals or organizations, as a way of managing their public profile.

Techniques

Internet manipulation

Astroturfing

Astroturfing is the intent and attempt to create the illusion of support for a particular cause, person, or stance. While mainly associated with the internet, it has also happened in newspapers during political elections. Corporations and political parties try to imitate grassroots movements in order to sway the public into believing something that isn't true.

Clickbait

Clickbait refers to headlines of online news articles that are sensationalized or sometimes completely fake, exploiting people's natural curiosity to get them to click. In some cases clickbait is simply used to generate income: more clicks mean more money from advertisers. But these headlines and articles can also be used to influence a group of people on social media; they are constructed to appeal to a group's pre-existing biases and thus to be shared within filter bubbles.

Propaganda laundering

Propaganda laundering is a method of using a less trusted or less popular platform to publish a story of dubious origin or veracity, so that more established outlets can report on the report rather than on the story itself. This technique insulates the secondary, more established media from having to issue a retraction if the report proves false: secondary outlets generally report that the original outlet is reporting, without verifying or making the claim themselves. The news and entertainment site Buzzfeed.com has been used to originate several such stories via its BuzzFeed News section. The term was coined by Reddit user HexezWork in a discussion of Robert Mueller's investigation into Russian collusion.

Search engine marketing

In search engine marketing, websites use market research from past searches and other sources to increase their visibility in search engine results pages. This allows them to guide search results along the lines they desire, and thereby influence searchers.

Businesses have many tactics for luring customers to their websites and generating revenue, such as banner ads, search engine optimization, and pay-per-click marketing tools. Each serves a different purpose and uses different tools that appeal to different types of users. Banner ads appear on sites and redirect to other, similar sites. Search engine optimization is changing a page so that it appears more reliable or relevant than other similar pages. Pay-per-click involves advertisers buying certain keywords so that searches for those words redirect to a page containing information about, or selling, whatever the word pertains to. By using the internet, users are exposed to these types of advertisements without a clear advertising campaign being visible.

Distraction

Distraction by major events

Commonly known as "smoke screen", this technique consists of making the public focus its attention on a topic that is more convenient for the propagandist. This particular type of media manipulation has been referenced many times in popular culture. Some examples are:

  • The movie Wag the Dog (1997), which illustrates the public being deceitfully distracted from an important topic by the presentation of another whose only merit is being more attractive.
  • In the U.S. TV series House of Cards, when protagonist Frank Underwood finds himself trapped in a media rampage, he addresses the viewer and says: "From the lion's den to a pack of wolves. When you're fresh meat, kill and throw them something fresher".

Politicians distract the public by showing them "shiny object" issues through the use of TV and other media. Sometimes these can be as simple as a politician with a reality show, like the one Sarah Palin had for a short time in 2010, which aired on TLC.

Distracting the public

This is a mere variation of the traditional arguments known in logic as ad hominem and ad populum, applied to countries instead of individuals. The technique consists of refuting arguments by appealing to nationalism or by inspiring fear and hatred of a foreign country, or of foreigners in general. It is potentially important because it gives propagandists the power to discredit any information coming from other countries.

Some examples are:

Q: "What do you think about Khokara's politic on X matter?" A: "I think they've been wrong about everything for the last 20 years or so..."

Q: "Your idea is quite similar to the one proposed in Falala." A: "Are you suggesting Falala is a better country than ours?"

Straw man fallacy

An informal fallacy. The "straw man" consists of appearing to refute the opponent's argument while actually attacking another topic. For it to work properly, the topic that was actually refuted and the one that should have been refuted need to be similar.

Distraction by scapegoat

This is a combination of the straw man fallacy and the ad hominem argument. It is often used to incriminate someone in order to argue the innocence of someone else.

Photo manipulation

Visual media can be transformed through photo manipulation, commonly called "photoshopping", to make a product, person, or idea seem more appealing. This is done by highlighting certain features of the subject and using editing tools to alter and enhance the photo in order to attract and persuade the public.

Video manipulation

Video manipulation is a new variant of media manipulation that targets digital video using a combination of traditional video processing and video editing techniques and auxiliary methods from artificial intelligence like face recognition. In typical video manipulation, the facial structure, body movements, and voice of the subject are replicated in order to create a fabricated recording of the subject. The applications of these methods range from educational videos to videos aimed at (mass) manipulation and propaganda, a straightforward extension of the long-standing possibilities of photo manipulation. This form of computer-generated misinformation has contributed to fake news, and there have been instances when this technology was used during political campaigns.

Compliance professionals

A compliance professional is an expert who utilizes and perfects means of gaining media influence. Though the means of gaining influence are common, the aims vary from political to economic to personal. The label thus applies to diverse groups of people, including propagandists, marketers, pollsters, salespeople and political advocates.

Techniques

Means of influence include, but are not limited to, the methods outlined in Robert Cialdini's Influence: Science and Practice: reciprocation, commitment and consistency, social proof, liking, authority, and scarcity.

Additionally, techniques like framing and less formal means of effective obfuscation, such as the use of logical fallacies, are used to gain compliance.

Computer worm

From Wikipedia, the free encyclopedia

Hex dump of the Blaster worm, showing a message left for Microsoft CEO Bill Gates by the worm's creator
 
Spread of Conficker worm

A computer worm is a standalone malware computer program that replicates itself in order to spread to other computers. It often uses a computer network to spread itself, relying on security failures on the target computer to access it, and then uses the newly infected machine as a host to scan for and infect further computers. This behavior continues: computer worms use recursive methods to copy themselves without host programs and distribute themselves according to a law of exponential growth, controlling and infecting more and more computers in a short time. Worms almost always cause at least some harm to the network, even if only by consuming bandwidth, whereas viruses almost always corrupt or modify files on a targeted computer.
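To make the exponential-growth claim concrete, here is a back-of-the-envelope simulation of a random-scanning worm (every number here is invented): while vulnerable hosts are plentiful, each tick multiplies the infected population, and growth only levels off as targets run out.

```python
# Toy random-scanning worm model (all parameters invented). Each infected
# host probes random addresses; a probe succeeds if it hits a vulnerable,
# not-yet-infected host. Early growth is roughly exponential, then the
# curve saturates as vulnerable hosts are exhausted (a logistic curve).
import random

random.seed(0)
ADDRESS_SPACE = 1_000_000                # size of the scanned address space
vulnerable = set(random.sample(range(ADDRESS_SPACE), 10_000))
infected = set(random.sample(sorted(vulnerable), 10))
SCANS_PER_TICK = 100                     # probes per infected host per tick

for tick in range(1, 16):
    new = set()
    for _ in range(len(infected) * SCANS_PER_TICK):
        target = random.randrange(ADDRESS_SPACE)
        if target in vulnerable and target not in infected:
            new.add(target)
    infected |= new
    print(f"tick {tick:2d}: {len(infected):6d} infected")
```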

Many worms are designed only to spread, and do not attempt to change the systems they pass through. However, as the Morris worm and Mydoom showed, even these "payload-free" worms can cause major disruption by increasing network traffic and other unintended effects.

History

Morris worm source code floppy diskette at the Computer History Museum

The actual term "worm" was first used in John Brunner's 1975 novel, The Shockwave Rider. In the novel, Nichlas Haflinger designs and sets off a data-gathering worm in an act of revenge against the powerful men who run a national electronic information web that induces mass conformity. "You have the biggest-ever worm loose in the net, and it automatically sabotages any attempt to monitor it. There's never been a worm with that tough a head or that long a tail!"

The first computer worm was devised to be antivirus software. Named Reaper, it was created by Ray Tomlinson to replicate itself across the ARPANET and delete the experimental Creeper program. On November 2, 1988, Robert Tappan Morris, a Cornell University computer science graduate student, unleashed what became known as the Morris worm, disrupting many of the computers then on the Internet, guessed at the time to be a tenth of all those connected. During the Morris appeal process, the U.S. Court of Appeals estimated the cost of removing the worm from each installation at between $200 and $53,000; this work prompted the formation of the CERT Coordination Center and the Phage mailing list. Morris himself became the first person tried and convicted under the 1986 Computer Fraud and Abuse Act.

Features

Independence

Computer viruses generally require a host program. The virus writes its own code into the host program. When the program runs, the written virus program is executed first, causing infection and damage. A worm does not need a host program, as it is an independent program or code chunk. Therefore, it is not restricted by the host program, but can run independently and actively carry out attacks.

Exploit attacks

Because a worm is not limited by the host program, worms can take advantage of various operating system vulnerabilities to carry out active attacks. For example, the "Nimda" virus exploits vulnerabilities to attack.

Complexity

Some worms are combined with web page scripts, and are hidden in HTML pages using VBScript, ActiveX and other technologies. When a user accesses a webpage containing a virus, the virus automatically resides in memory and waits to be triggered. There are also some worms that are combined with backdoor programs or Trojan horses, such as "Code Red".

Contagiousness

Worms are more infectious than traditional viruses. They infect not only the local computer, but also all servers and clients on the network connected to that computer. Worms can easily spread through shared folders, e-mails, malicious web pages, and servers with a large number of vulnerabilities in the network.

Harm

Any code designed to do more than spread the worm is typically referred to as the "payload". Typical malicious payloads might delete files on a host system (e.g., the ExploreZip worm), encrypt files in a ransomware attack, or exfiltrate data such as confidential documents or passwords.

Some worms may install a backdoor. This allows the computer to be remotely controlled by the worm author as a "zombie". Networks of such machines are often referred to as botnets and are very commonly used for a range of malicious purposes, including sending spam or performing DoS attacks.

Some special worms attack industrial systems in a targeted manner. Stuxnet was primarily transmitted through LANs and infected thumb drives, as its targets were never connected to untrusted networks like the internet. This virus can destroy the core production-control software used by chemical, power generation and power transmission companies in various countries around the world (in Stuxnet's case, Iran, Indonesia and India were hardest hit); it was used to "issue orders" to other equipment in the factory and to hide those commands from detection. Stuxnet used multiple vulnerabilities and four different zero-day exploits in Windows systems and Siemens SIMATIC WinCC systems to attack the embedded programmable logic controllers of industrial machines. Although these systems operate independently of the network, if an operator inserts a virus-infected drive into a system's USB interface, the virus gains control of the system without any other operational requirements or prompts.

Countermeasures

Worms spread by exploiting vulnerabilities in operating systems. Vendors with security problems supply regular security updates (see "Patch Tuesday"), and if these are installed on a machine, the majority of worms are unable to spread to it. If a vulnerability is disclosed before the security patch is released by the vendor, a zero-day attack is possible.

Users need to be wary of opening unexpected email, and should not run attached files or programs, or visit web sites that are linked to such emails. However, as with the ILOVEYOU worm, and with the increased growth and efficiency of phishing attacks, it remains possible to trick the end-user into running malicious code.

Anti-virus and anti-spyware software are helpful, but must be kept up-to-date with new pattern files at least every few days. The use of a firewall is also recommended.

Users can minimize the threat posed by worms by keeping their computers' operating system and other software up to date, avoiding opening unrecognized or unexpected emails and running firewall and antivirus software.

Mitigation techniques include filtering at the network level, such as access control lists in routers and switches and packet filters, alongside behavioral detection.

Infections can sometimes be detected by their behavior: typically scanning the Internet randomly, looking for vulnerable hosts to infect. In addition, machine learning techniques can be used to detect new worms by analyzing the behavior of the suspected computer.
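A minimal sketch of that behavioral signal, with invented flow records and an invented threshold: random scanning shows up as a single source contacting an unusually large number of distinct destinations per time window.

```python
# Behavioral worm-detection sketch (threshold and data format invented):
# random scanning looks like one source touching many distinct destinations.
from collections import defaultdict

FANOUT_THRESHOLD = 100   # distinct destinations per minute (hypothetical)

def flag_scanners(flows):
    """flows: iterable of (minute, src_ip, dst_ip) records."""
    fanout = defaultdict(set)
    for minute, src, dst in flows:
        fanout[(minute, src)].add(dst)
    # Flag sources whose per-minute fan-out exceeds the threshold.
    return {src for (minute, src), dsts in fanout.items()
            if len(dsts) > FANOUT_THRESHOLD}
```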

Worms with good intent

A helpful worm or anti-worm is a worm designed to do something that its author feels is helpful, though not necessarily with the permission of the executing computer's owner. Beginning with the first research into worms at Xerox PARC, there have been attempts to create useful worms. Those worms allowed John Shoch and Jon Hupp to test the Ethernet principles on their network of Xerox Alto computers. Similarly, the Nachi family of worms tried to download and install patches from Microsoft's website to fix vulnerabilities in the host system by exploiting those same vulnerabilities. In practice, although this may have made these systems more secure, it generated considerable network traffic, rebooted the machine in the course of patching it, and did its work without the consent of the computer's owner or user. Regardless of their payload or their writers' intentions, security experts regard all worms as malware.

One study proposed the first computer worm that operates on the second layer of the OSI model (Data link Layer), utilizing topology information such as Content-addressable memory (CAM) tables and Spanning Tree information stored in switches to propagate and probe for vulnerable nodes until the enterprise network is covered.

Anti-worms have been used to combat the effects of the Code Red, Blaster, and Santy worms. Welchia is an example of a helpful worm. Utilizing the same deficiencies exploited by the Blaster worm, Welchia infected computers and automatically began downloading Microsoft security updates for Windows without the users' consent. Welchia automatically reboots the computers it infects after installing the updates. One of these updates was the patch that fixed the exploit.

Other examples of helpful worms are "Den_Zuko", "Cheeze", "CodeGreen", and "Millenium".

Botnet

From Wikipedia, the free encyclopedia
 
Stacheldraht botnet diagram showing a DDoS attack. (Note this is also an example of a type of client–server model of a botnet.)

A botnet is a number of Internet-connected devices, each of which runs one or more bots. Botnets can be used to perform Distributed Denial-of-Service (DDoS) attacks, steal data, send spam, and allow the attacker to access the device and its connection. The owner can control the botnet using command and control (C&C) software. The word "botnet" is a portmanteau of the words "robot" and "network". The term is usually used with a negative or malicious connotation.

Overview

A botnet is a logical collection of Internet-connected devices, such as computers, smartphones or Internet of things (IoT) devices, whose security has been breached and control ceded to a third party. Each compromised device, known as a "bot", is created when a device is penetrated by software from a malware (malicious software) distribution. The controller of a botnet is able to direct the activities of these compromised computers through communication channels formed by standards-based network protocols, such as IRC and Hypertext Transfer Protocol (HTTP).

Botnets are increasingly rented out by cyber criminals as commodities for a variety of purposes.

Architecture

Botnet architecture has evolved over time in an effort to evade detection and disruption. Traditionally, bot programs are constructed as clients which communicate via existing servers. This allows the bot herder (the controller of the botnet) to perform all control from a remote location, which obfuscates the traffic. Many recent botnets now rely on existing peer-to-peer networks to communicate. These P2P bot programs perform the same actions as the client–server model, but they do not require a central server to communicate.

Client–server model

A network based on the client–server model, where individual clients request services and resources from centralized servers

The first botnets on the Internet used a client–server model to accomplish their tasks. Typically, these botnets operate through Internet Relay Chat networks, domains, or websites. Infected clients access a predetermined location and await incoming commands from the server. The bot herder sends commands to the server, which relays them to the clients. Clients execute the commands and report their results back to the bot herder.

In the case of IRC botnets, infected clients connect to an infected IRC server and join a channel pre-designated for C&C by the bot herder. The bot herder sends commands to the channel via the IRC server. Each client retrieves the commands and executes them. Clients send messages back to the IRC channel with the results of their actions.

Peer-to-peer

A peer-to-peer (P2P) network in which interconnected nodes ("peers") share resources among each other without the use of a centralized administrative system

In response to efforts to detect and decapitate IRC botnets, bot herders have begun deploying malware on peer-to-peer networks. These bots may use digital signatures so that only someone with access to the private key can control the botnet. See e.g. Gameover ZeuS and ZeroAccess botnet.

Newer botnets fully operate over P2P networks. Rather than communicate with a centralized server, P2P bots perform as both a command distribution server and a client which receives commands. This avoids having any single point of failure, which is an issue for centralized botnets.

In order to find other infected machines, a bot discreetly probes random IP addresses until it contacts another infected machine. The contacted bot replies with information such as its software version and its list of known bots. If one bot's version is lower than the other's, it will initiate a file transfer to update. In this way, each bot grows its list of infected machines and keeps itself updated by periodically communicating with all known bots.
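The exchange just described is ordinary gossip-protocol bookkeeping, the same mechanism legitimate peer-to-peer systems use for membership and version reconciliation. A neutral sketch (all names and data structures are hypothetical):

```python
# Gossip-style reconciliation sketch (hypothetical structures): two peers
# compare versions, the older side adopts the newer, and both merge their
# membership lists, exactly as described above.
from dataclasses import dataclass, field

@dataclass
class Peer:
    address: str
    version: int
    known_peers: set = field(default_factory=set)

def reconcile(a: Peer, b: Peer) -> None:
    # The lower-version peer "downloads" the newer version.
    if a.version < b.version:
        a.version = b.version
    elif b.version < a.version:
        b.version = a.version
    # Both sides grow their lists of known peers.
    union = a.known_peers | b.known_peers | {a.address, b.address}
    a.known_peers, b.known_peers = set(union), set(union)
```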

Core components

A botnet's originator (known as a "bot herder" or "bot master") controls the botnet remotely. This is known as the command-and-control (C&C). The program for the operation must communicate via a covert channel to the client on the victim's machine (zombie computer).

Control protocols

IRC is a historically favored means of C&C because of its communication protocol. A bot herder creates an IRC channel for infected clients to join. Messages sent to the channel are broadcast to all channel members. The bot herder may set the channel's topic to command the botnet. E.g. the message :herder!herder@example.com TOPIC #channel DDoS www.victim.com from the bot herder alerts all infected clients belonging to #channel to begin a DDoS attack on the website www.victim.com. An example response :bot1!bot1@compromised.net PRIVMSG #channel I am DDoSing www.victim.com by a bot client alerts the bot herder that it has begun the attack.
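Because such commands travel as plaintext IRC messages like the examples above, a network monitor can look for them. A rough sketch of that idea (the regex and keyword list are invented illustrations, not any real product's rules):

```python
# Sketch of flagging suspicious IRC C&C traffic. The pattern and keywords
# are invented; real detectors model much richer behavior.
import re

# Matches lines like ":herder!h@x TOPIC #channel DDoS www.victim.com"
TOPIC_RE = re.compile(r"^:\S+ TOPIC (#\S+) (.+)$")
SUSPICIOUS = ("ddos", "flood", "scan", "spam")

def flag_irc_line(line: str):
    m = TOPIC_RE.match(line.strip())
    if m and any(word in m.group(2).lower() for word in SUSPICIOUS):
        return f"possible C&C command in {m.group(1)}: {m.group(2)}"
    return None

print(flag_irc_line(":herder!herder@example.com TOPIC #channel DDoS www.victim.com"))
```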

Some botnets implement custom versions of well-known protocols. The implementation differences can be used for detection of botnets. For example, Mega-D features a slightly modified Simple Mail Transfer Protocol (SMTP) implementation for testing spam capability. Bringing down the Mega-D's SMTP server disables the entire pool of bots that rely upon the same SMTP server.

Zombie computer

In computer science, a zombie computer is a computer connected to the Internet that has been compromised by a hacker, computer virus or trojan horse and can be used to perform malicious tasks under remote direction. Botnets of zombie computers are often used to spread e-mail spam and launch denial-of-service attacks (DDoS). Most owners of zombie computers are unaware that their system is being used in this way. Because the owner tends to be unaware, these computers are metaphorically compared to zombies. A coordinated DDoS attack by multiple botnet machines also resembles a zombie horde attack.

The process of stealing computing resources as a result of a system being joined to a "botnet" is sometimes referred to as "scrumping".

Command and control

Botnet command and control (C&C) protocols have been implemented in a number of ways, from traditional IRC approaches to more sophisticated versions.

Telnet

Telnet botnets use a simple C&C botnet protocol in which bots connect to the main command server to host the botnet. Bots are added to the botnet by using a scanning script, which runs on an external server and scans IP ranges for telnet and SSH server default logins. Once a login is found, the scanning server can infect it through SSH with malware, which pings the control server.

IRC

IRC networks use simple, low-bandwidth communication methods, making them widely used to host botnets. They tend to be relatively simple in construction and have been used with moderate success for coordinating DDoS attacks and spam campaigns, while being able to continually switch channels to avoid being taken down. However, in some cases, merely blocking certain keywords has proven effective in stopping IRC-based botnets. The RFC 1459 (IRC) standard is popular with botnets. The first known popular botnet controller script, "MaXiTE Bot", used the IRC XDCC protocol for private control commands.

One problem with using IRC is that each bot client must know the IRC server, port, and channel to be of any use to the botnet. Anti-malware organizations can detect and shut down these servers and channels, effectively halting the botnet attack. If this happens, clients are still infected, but they typically lie dormant since they have no way of receiving instructions. To mitigate this problem, a botnet can consist of several servers or channels. If one of the servers or channels becomes disabled, the botnet simply switches to another. It is still possible to detect and disrupt additional botnet servers or channels by sniffing IRC traffic. A botnet adversary can even potentially gain knowledge of the control scheme and imitate the bot herder by issuing commands correctly.

P2P

Since most botnets using IRC networks and domains can be taken down with time, hackers have moved to P2P botnets with C&C to make the botnet more resilient and resistant to termination.

Some have also used encryption as a way to secure or lock down the botnet from others; most of the time the encryption used is public-key cryptography, which has presented challenges both in implementing it and in breaking it.

Domains

Many large botnets tend to use domains rather than IRC in their construction (see Rustock botnet and Srizbi botnet). They are usually hosted with bulletproof hosting services. This is one of the earliest types of C&C. A zombie computer accesses a specially-designed webpage or domain(s) which serves the list of controlling commands. The advantages of using web pages or domains as C&C is that a large botnet can be effectively controlled and maintained with very simple code that can be readily updated.

Disadvantages of using this method are that it uses a considerable amount of bandwidth at large scale, and domains can be quickly seized by government agencies with little effort. If the domains controlling the botnets are not seized, they are also easy targets to compromise with denial-of-service attacks.

Fast-flux DNS can be used to make it difficult to track down the control servers, which may change from day to day. Control servers may also hop from DNS domain to DNS domain, with domain generation algorithms being used to create new DNS names for controller servers.
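A toy domain generation algorithm (the seed scheme and name construction are invented) shows why this cuts both ways: anyone who recovers the algorithm, defender or bot, can compute the same day's domains in advance, which is how researchers pre-register or sinkhole them.

```python
# Toy domain generation algorithm (invented construction). Defenders who
# recover the algorithm can compute the same daily domains in advance and
# blocklist or sinkhole them.
import hashlib
from datetime import date

def daily_domains(day: date, count: int = 5):
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(seed).hexdigest()
        # Map each hex digit of the digest to a lowercase letter.
        name = "".join(chr(ord('a') + int(c, 16) % 26) for c in digest[:12])
        domains.append(name + ".example")
    return domains

print(daily_domains(date(2021, 12, 22)))
```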

Some botnets use free DNS hosting services such as DynDns.org, No-IP.com, and Afraid.org to point a subdomain towards an IRC server that harbors the bots. While these free DNS services do not themselves host attacks, they provide reference points (often hard-coded into the botnet executable). Removing such services can cripple an entire botnet.

Others

Calling back to large social media sites such as GitHub, Twitter, Reddit and Instagram, to the XMPP open-source instant messaging protocol, or to Tor hidden services are popular ways of avoiding egress filtering when communicating with a C&C server.

Construction

Traditional

This example illustrates how a botnet is created and used for malicious gain.

  1. A hacker purchases or builds a Trojan and/or exploit kit and uses it to start infecting users' computers, whose payload is a malicious application—the bot.
  2. The bot instructs the infected PC to connect to a particular command-and-control (C&C) server. (This allows the botmaster to keep logs of how many bots are active and online.)
  3. The botmaster may then use the bots to gather keystrokes or use form grabbing to steal online credentials and may rent out the botnet as DDoS and/or spam as a service or sell the credentials online for a profit.
  4. Depending on the quality and capability of the bots, the value is increased or decreased.

Newer bots can automatically scan their environment and propagate themselves using vulnerabilities and weak passwords. Generally, the more vulnerabilities a bot can scan and propagate through, the more valuable it becomes to a botnet controller community.

Computers can be co-opted into a botnet when they execute malicious software. This can be accomplished by luring users into making a drive-by download, exploiting web browser vulnerabilities, or by tricking the user into running a Trojan horse program, which may come from an email attachment. This malware will typically install modules that allow the computer to be commanded and controlled by the botnet's operator. After the software is downloaded, it will call home (send a reconnection packet) to the host computer. When the re-connection is made, depending on how it is written, a Trojan may then delete itself or may remain present to update and maintain the modules.

Others

In some cases, a botnet may be temporarily created by volunteer hacktivists, such as with implementations of the Low Orbit Ion Cannon as used by 4chan members during Project Chanology in 2008.

China's Great Cannon allows the modification of legitimate web browsing traffic at internet backbones into China to create a large ephemeral botnet to attack large targets, such as GitHub in 2015.

Common features

  • Most botnets currently feature distributed denial-of-service attacks, in which multiple systems submit as many requests as possible to a single Internet computer or service, overloading it and preventing it from servicing legitimate requests. An example is an attack on a victim's server: the server is bombarded with connection requests by the bots, overloading it.
  • Spyware is software which sends information to its creators about a user's activities – typically passwords, credit card numbers and other information that can be sold on the black market. Compromised machines that are located within a corporate network can be worth more to the bot herder, as they can often gain access to confidential corporate information. Several targeted attacks on large corporations aimed to steal sensitive information, such as the Aurora botnet.
  • E-mail spam consists of e-mail messages disguised as messages from people, but which are either advertising, annoying, or malicious.
  • Click fraud occurs when the user's computer visits websites without the user's awareness to create false web traffic for personal or commercial gain.
  • Ad fraud is often a consequence of malicious bot activity, according to CHEQ's report Ad Fraud 2019: The Economic Cost of Bad Actors on the Internet. Commercial purposes of bots include influencers using them to boost their supposed popularity, and online publishers using bots to increase the number of clicks an ad receives, allowing sites to earn more commission from advertisers.
  • Bitcoin mining has been included as a feature in some more recent botnets in order to generate profits for the operator of the botnet.
  • Self-spreading functionality, in which bots seek out the targeted devices or networks specified in instructions pushed from pre-configured command-and-control (C&C) servers in order to achieve more infections, has also been spotted in several botnets; some botnets use this function to automate their infections.

Market

The botnet controller community features a constant and continuous struggle over who has the most bots, the highest overall bandwidth, and the most "high-quality" infected machines, like university, corporate, and even government machines.

While botnets are often named after the malware that created them, multiple botnets typically use the same malware but are operated by different entities.

Phishing

Botnets can be used for many electronic scams. They can distribute malware, such as viruses, to take control of an ordinary user's computer and software. By taking control of someone's personal computer, an operator gains unlimited access to personal information, including passwords and login information for accounts. This is called phishing: the acquiring of login information to a "victim's" accounts through a link the "victim" clicks on, sent via email or text. A survey by Verizon found that around two-thirds of electronic "espionage" cases come from phishing.

Countermeasures

The geographic dispersal of botnets means that each recruit must be individually identified, corralled, and repaired, which limits the benefits of filtering.

Computer security experts have succeeded in destroying or subverting malware command and control networks, by, among other means, seizing servers or getting them cut off from the Internet, denying access to domains that were due to be used by malware to contact its C&C infrastructure, and, in some cases, breaking into the C&C network itself. In response to this, C&C operators have resorted to using techniques such as overlaying their C&C networks on other existing benign infrastructure such as IRC or Tor, using peer-to-peer networking systems that are not dependent on any fixed servers, and using public key encryption to defeat attempts to break into or spoof the network.

Norton AntiBot was aimed at consumers, but most anti-bot products target enterprises and/or ISPs. Host-based techniques use heuristics to identify bot behavior that has bypassed conventional anti-virus software. Network-based approaches tend to use the techniques described above: shutting down C&C servers, null-routing DNS entries, or completely shutting down IRC servers. BotHunter is software, developed with support from the U.S. Army Research Office, that detects botnet activity within a network by analyzing network traffic and comparing it to patterns characteristic of malicious processes.

Researchers at Sandia National Laboratories are analyzing botnets' behavior by simultaneously running one million Linux kernels—a similar scale to a botnet—as virtual machines on a 4,480-node high-performance computer cluster to emulate a very large network, allowing them to watch how botnets work and experiment with ways to stop them.

Detecting automated bot attacks is becoming more difficult each day as newer and more sophisticated generations of bots are getting launched by attackers. For example, an automated attack can deploy a large bot army and apply brute-force methods with highly accurate username and password lists to hack into accounts. The idea is to overwhelm sites with tens of thousands of requests from different IPs all over the world, but with each bot only submitting a single request every 10 minutes or so, which can result in more than 5 million attempts per day. In these cases, many tools try to leverage volumetric detection, but automated bot attacks now have ways of circumventing triggers of volumetric detection.

One technique for detecting these bot attacks is "signature-based systems", in which the software attempts to detect patterns in the request packet. But attacks are constantly evolving, so this may not be a viable option when patterns cannot be discerned from thousands of requests. There is also the behavioral approach to thwarting bots, which ultimately tries to distinguish bots from humans. By identifying non-human behavior and recognizing known bot behavior, this process can be applied at the user, browser, and network levels.
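The evasion described above is easy to see in miniature. In the sketch below (all thresholds and record formats are invented), a per-IP volumetric limit never fires against a low-and-slow distributed attack, while counting distinct source IPs per targeted account still exposes it.

```python
# Why volumetric (per-IP) detection misses low-and-slow credential attacks
# (all thresholds invented): 10,000 bots each making one attempt every
# 10 minutes stay under any per-IP rate limit, but the number of distinct
# IPs failing against one account still stands out.
from collections import Counter

PER_IP_LIMIT = 30        # failed logins per IP per hour (hypothetical)
PER_ACCOUNT_LIMIT = 50   # distinct failing source IPs per account per hour

def detect(events):
    """events: iterable of (src_ip, account, success) login records."""
    per_ip = Counter()
    per_account_ips = {}
    for ip, account, success in events:
        if not success:
            per_ip[ip] += 1
            per_account_ips.setdefault(account, set()).add(ip)
    noisy_ips = [ip for ip, n in per_ip.items() if n > PER_IP_LIMIT]
    stuffed_accounts = [a for a, ips in per_account_ips.items()
                        if len(ips) > PER_ACCOUNT_LIMIT]
    return noisy_ips, stuffed_accounts
```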

The most capable software-based method of combating bot malware has been to utilize honeypot software to convince the malware that a system is vulnerable. The malicious files are then analyzed using forensic software.

On 15 July 2014, the Subcommittee on Crime and Terrorism of the Committee on the Judiciary, United States Senate, held a hearing on the threats posed by botnets and the public and private efforts to disrupt and dismantle them.

Non-malicious use

Non-malicious botnets, such as those that are part of BOINC, are often used for scientific purposes. For example, Rosetta@home aims to predict protein–protein docking and design new proteins; LHC@home simulates various experiments relating to the Large Hadron Collider; and Einstein@Home searches for signals from spinning neutron stars. These botnets are voluntary: any user can "enlist" their computer in the botnet and later withdraw it when they no longer want it participating.
