
Monday, November 23, 2020

Cyberwarfare

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Cyberwarfare

Cyberwarfare is the use of digital attacks against a nation, causing harm comparable to actual warfare and/or disrupting vital computer systems. There is significant debate among experts regarding the definition of cyberwarfare, and even whether such a thing exists. One view is that the term "cyberwarfare" is a misnomer, since no offensive cyber actions to date could be described as "war". An alternative view is that "cyberwarfare" is a suitable label for cyber attacks which cause physical damage to people and objects in the real world.

While there is debate over how to define and use "cyberwarfare" as a term, many countries including the United States, United Kingdom, Russia, India, Pakistan, China, Israel, Iran, and North Korea have active cyber capabilities for offensive and defensive operations. As states explore the use of cyber operations and combine capabilities, the likelihood of physical confrontation and violence playing out as a result of, or as part of, a cyber operation increases. However, meeting the scale and protracted nature of war is unlikely, so ambiguity remains.

The first instance of kinetic military action used in response to a cyber-attack resulting in the loss of human life was observed on 5 May 2019, when the Israel Defense Forces targeted and destroyed a building associated with an ongoing cyber-attack.

Definition

There is ongoing debate over how cyberwarfare should be defined, and no single definition is widely agreed upon. While the majority of scholars, militaries and governments use definitions which refer to state and state-sponsored actors, other definitions may include non-state actors, such as terrorist groups, companies, political or ideological extremist groups, hacktivists, and transnational criminal organizations, depending on the context of the work.

Examples of definitions proposed by experts in the field are as follows.

'Cyberwarfare' is used in a broad context to denote interstate use of technological force within computer networks in which information is stored, shared or communicated online.

Paulo Shakarian and colleagues put forward the following definition, drawing from various works including Clausewitz's definition of war, "War is the continuation of politics by other means":

"Cyberwarfare is an extension of policy by actions taken in cyberspace by state actors (or by non-state actors with significant state direction or support) that constitute a serious threat to another state's security, or an action of the same nature taken in response to a serious threat to a state's security (actual or perceived)."

Taddeo offers the following definition:

"The warfare grounded on certain uses of ICTs within an offensive or defensive military strategy endorsed by a state and aiming at the immediate disruption or control of the enemys resources, and which is waged within the informational environment, with agents and targets ranging both on the physical and non-physical domains and whose level of violence may vary upon circumstances".

Robinson et al. propose that the intent of the attacker dictates whether an attack is warfare or not, defining cyber warfare as "the use of cyber attacks with a warfare-like intent."

The former US National Coordinator for Security, Infrastructure Protection and Counter-terrorism, Richard A. Clarke, defines cyberwarfare as "actions by a nation-state to penetrate another nation's computers or networks for the purposes of causing damage or disruption." A nation's own cyber-physical infrastructure may also be weaponized and used by an adversary in a cyber conflict, turning that infrastructure into a tactical weapon.

Controversy of term

There is debate on whether the term "cyberwarfare" is accurate. Eugene Kaspersky, founder of Kaspersky Lab, concludes that "cyberterrorism" is a more accurate term than "cyberwar". He states that "with today's attacks, you are clueless about who did it or when they will strike again. It's not cyber-war, but cyberterrorism." Howard Schmidt, former Cyber Security Coordinator of the Obama Administration, said that "there is no cyberwar... I think that is a terrible metaphor and I think that is a terrible concept. There are no winners in that environment."

Some experts take issue with the possible consequences linked to the warfare analogy. Ron Deibert, of Canada's Citizen Lab, has warned of a "militarization of cyberspace", as militaristic responses may not be appropriate. To date, even serious cyber attacks which have disrupted large parts of a nation's electrical grid (230,000 customers, Ukraine, 2015) or affected access to medical care, thus endangering life (NHS, WannaCry, 2017), have not led to military action.

Oxford academic Lucas Kello proposed a new term, "unpeace", to denote highly damaging cyber actions whose non-violent effects do not rise to the level of traditional war. Such actions are neither warlike nor peacelike. Although they are non-violent, and thus not acts of war, their damaging effects on the economy and society may be greater than even some armed attacks. The term is closely related to the concept of the "grey zone" which has come to prominence in recent years, describing actions which fall below the traditional threshold of war.

Cyberwarfare vs. cyber war

The term "cyberwarfare" is distinct from the term "cyber war". "Cyberwarfare" does not imply scale, protraction or violence which are typically associated with the term "war". Cyber warfare includes techniques, tactics and procedures which may be involved in a cyber war. The term war inherently refers to a large scale action, typically over a protracted period of time and may include objectives seeking to utilize violence or the aim to kill. A cyber war could accurately describe a protracted period of back-and-forth cyber attacks (including in combination with traditional military action) between nations. To date, no such action is known to have occurred. Instead, tit-for-tat military-cyber actions are more commonplace. For example, in June 2019 the United States launched a cyber attack against Iranian weapons systems in retaliation to the shooting down of a US drone being in the Strait of Hormuz.

Types of warfare

Cyber warfare can present a multitude of threats towards a nation. At the most basic level, cyber attacks can be used to support traditional warfare, for example, tampering with the operation of air defenses via cyber means in order to facilitate an air attack. Aside from these "hard" threats, cyber warfare can also contribute towards "soft" threats such as espionage and propaganda. Eugene Kaspersky, founder of Kaspersky Lab, equates large-scale cyber weapons, such as Flame and NetTraveler which his company discovered, to biological weapons, claiming that in an interconnected world, they have the potential to be equally destructive.

Espionage

PRISM: a clandestine surveillance program under which the NSA collects user data from companies like Facebook and Google.

Traditional espionage is not an act of war, nor is cyber-espionage, and both are generally assumed to be ongoing between major powers. Despite this assumption, some incidents can cause serious tensions between nations, and are often described as "attacks".

An estimated 25% of all cyber attacks are espionage-based.

Sabotage

Computers and satellites that coordinate other activities are vulnerable components of a system and could lead to the disruption of equipment. Compromise of military systems, such as C4ISTAR components responsible for orders and communications, could lead to their interception or malicious replacement. Power, water, fuel, communications, and transportation infrastructure all may be vulnerable to disruption. According to Clarke, the civilian realm is also at risk, noting that security breaches have already gone beyond stolen credit card numbers, and that potential targets can also include the electric power grid, trains, or the stock market.

In mid-July 2010, security experts discovered a malicious software program called Stuxnet that had infiltrated factory computers and had spread to plants around the world. It is considered "the first attack on critical industrial infrastructure that sits at the foundation of modern economies," notes The New York Times.

Stuxnet, while extremely effective in delaying Iran's nuclear program for the development of nuclear weaponry, came at a high cost. For the first time, it became clear that cyber weapons could be not only defensive but offensive. The great decentralization and scale of cyberspace make it extremely difficult to direct from a policy perspective. Non-state actors can play as large a part in the cyberwar space as state actors, which leads to dangerous, sometimes disastrous, consequences. Small groups of highly skilled malware developers are able to impact global politics and cyber warfare as effectively as large governmental agencies. A major aspect of this ability lies in the willingness of these groups to share their exploits and developments on the web as a form of arms proliferation. This allows lesser hackers to become more proficient in creating the large-scale attacks that once only a small handful were skillful enough to manage. In addition, thriving black markets for these kinds of cyber weapons buy and sell these cyber capabilities to the highest bidder without regard for consequences.

Denial-of-service attack

In computing, a denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is an attempt to make a machine or network resource unavailable to its intended users. Perpetrators of DoS attacks typically target sites or services hosted on high-profile web servers such as banks, credit card payment gateways, and even root nameservers. DoS attacks often leverage internet-connected devices with vulnerable security measures to carry out these large-scale attacks. DoS attacks may not be limited to computer-based methods, as strategic physical attacks against infrastructure can be just as devastating. For example, cutting undersea communication cables may severely cripple some regions and countries with regard to their information warfare ability.
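
A common mitigation for volumetric DoS traffic is rate limiting at the network edge. The sketch below is a minimal, illustrative token-bucket limiter in Python; the class, parameters, and per-IP bookkeeping are hypothetical rather than drawn from any tool mentioned here. Each client may burst briefly, but its sustained request rate is capped and excess requests are dropped.

    import time

    class TokenBucket:
        """Minimal token-bucket rate limiter: tokens refill steadily,
        each request spends one, and an empty bucket means drop."""

        def __init__(self, rate_per_sec, burst):
            self.rate = rate_per_sec       # sustained requests/second allowed
            self.capacity = burst          # maximum burst size
            self.tokens = float(burst)
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            elapsed = now - self.last
            self.last = now
            # Refill in proportion to elapsed time, capped at capacity.
            self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False                   # over the limit: drop the request

    # One bucket per client address: 5 requests/second, bursts of 10.
    buckets = {}

    def handle_request(client_ip):
        bucket = buckets.setdefault(client_ip, TokenBucket(5.0, 10))
        return bucket.allow()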

Electrical power grid

The federal government of the United States admits that the electric power grid is susceptible to cyberwarfare. The United States Department of Homeland Security works with industry to identify vulnerabilities and to help industry enhance the security of control system networks. The federal government is also working to ensure that security is built in as the next generation of "smart grid" networks is developed. In April 2009, reports surfaced that China and Russia had infiltrated the U.S. electrical grid and left behind software programs that could be used to disrupt the system, according to current and former national security officials. The North American Electric Reliability Corporation (NERC) has issued a public notice warning that the electrical grid is not adequately protected from cyber attack. China denies intruding into the U.S. electrical grid. One countermeasure would be to disconnect the power grid from the Internet and run the grid with droop speed control only. Massive power outages caused by a cyber attack could disrupt the economy, distract from a simultaneous military attack, or create a national trauma.
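
Droop speed control, mentioned above as a countermeasure, keeps an islanded grid stable by having each generator adjust its output in proportion to the deviation of grid frequency from nominal, with no network-connected coordinator required. A minimal sketch of that proportional relation, with illustrative values that are assumptions rather than figures from this article:

    def droop_power(p_set_mw, p_rated_mw, droop, f_nom_hz, f_hz):
        """Proportional droop response: when grid frequency sags below
        nominal, the unit raises output (and lowers it when frequency
        rises), sharing load without central coordination."""
        per_unit_response = (f_nom_hz - f_hz) / (droop * f_nom_hz)
        p_mw = p_set_mw + p_rated_mw * per_unit_response
        return max(0.0, min(p_rated_mw, p_mw))  # clamp to machine limits

    # Example: a 100 MW unit at a 60 MW setpoint with 5% droop,
    # responding to the grid sagging from 50 Hz to 49.9 Hz.
    print(droop_power(60.0, 100.0, 0.05, 50.0, 49.9))  # -> 64.0 MW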

Iranian hackers, possibly the Iranian Cyber Army, caused a massive 12-hour power outage in 44 of Turkey's 81 provinces, impacting 40 million people. Istanbul and Ankara were among the places suffering the blackout.

Howard Schmidt, former Cyber-Security Coordinator of the US, commented on those possibilities:

It's possible that hackers have gotten into administrative computer systems of utility companies, but those aren't linked to the equipment controlling the grid, at least not in developed countries. [Schmidt] has never heard that the grid itself has been hacked.

In June 2019, Russia said that its electrical grid had been under cyber-attack by the United States. The New York Times reported that American hackers from the United States Cyber Command had planted malware potentially capable of disrupting the Russian electrical grid.

Propaganda

Cyber propaganda is an effort to control information in whatever form it takes, and influence public opinion. It is a form of psychological warfare, except it uses social media, fake news websites and other digital means. In 2018, Sir Nicholas Carter, Chief of the General Staff of the British Army stated that this kind of attack from actors such as Russia "is a form of system warfare that seeks to de-legitimize the political and social system on which our military strength is based".

Jowell and O'Donnell (2006) state that "propaganda is the deliberate, systematic attempt to shape perceptions, manipulate cognitions, and direct behavior to achieve a response that furthers the desired intent of the propagandist" (p. 7). The internet is a phenomenal means of communication. People can get their message across to a huge audience, and this opens a window for evil. Terrorist organizations can use this medium to brainwash people. It has been suggested that restricted media coverage of terrorist attacks would in turn decrease the number of terrorist attacks that occur afterwards (Cowen 2006).

Economic disruption

In 2017, the WannaCry and Petya (NotPetya) cyber attacks, masquerading as ransomware, caused large-scale disruptions in Ukraine as well as to the U.K.'s National Health Service, pharmaceutical giant Merck, the Maersk shipping company and other organizations around the world. These attacks are also categorized as cybercrimes, specifically financial crime, because they negatively affect a company or group.

Surprise cyber attack

The idea of a "cyber Pearl Harbor" has been debated by scholars, drawing an analogy to the historical act of war. Others have used "cyber 9/11" to draw attention to the nontraditional, asymmetric, or irregular aspect of cyber action against a state.

Motivations

There are a number of reasons nations undertake offensive cyber operations. Sandro Gaycken, a cyber security expert and adviser to NATO, advocates that states take cyber warfare seriously, as offensive cyber operations are viewed by many nations as an attractive activity, in times of war and peace. Such operations offer a large variety of cheap and risk-free options to weaken other countries and strengthen one's own position. Considered from a long-term, geostrategic perspective, cyber offensive operations can cripple whole economies, change political views, agitate conflicts within or among states, reduce military efficiency, equalize the capacities of high-tech nations to that of low-tech nations, and exploit access to critical infrastructures to blackmail them.

Military

In the U.S., General Keith B. Alexander, first head of USCYBERCOM, told the Senate Armed Services Committee that computer network warfare is evolving so rapidly that there is a "mismatch between our technical capabilities to conduct operations and the governing laws and policies. Cyber Command is the newest global combatant and its sole mission is cyberspace, outside the traditional battlefields of land, sea, air and space." It will attempt to find and, when necessary, neutralize cyberattacks and to defend military computer networks.

Alexander sketched out the broad battlefield envisioned for the computer warfare command, listing the kind of targets that his new headquarters could be ordered to attack, including "traditional battlefield prizes – command-and-control systems at military headquarters, air defense networks and weapons systems that require computers to operate."

One cyber warfare scenario, Cyber-ShockWave, which was wargamed on the cabinet level by former administration officials, raised issues ranging from the National Guard to the power grid to the limits of statutory authority.

The distributed nature of internet based attacks means that it is difficult to determine motivation and attacking party, meaning that it is unclear when a specific act should be considered an act of war.

Examples of cyberwarfare driven by political motivations can be found worldwide. In 2008, Russia began a cyber attack on the Georgian government website, which was carried out along with Georgian military operations in South Ossetia. In 2008, Chinese "nationalist hackers" attacked CNN as it reported on Chinese repression in Tibet. Hackers from Armenia and Azerbaijan have actively participated in cyberwarfare as part of the Nagorno-Karabakh conflict, with Azerbaijani hackers targeting Armenian websites and posting Ilham Aliyev's statements.

Jobs in cyberwarfare have become increasingly popular in the military. All four branches of the United States military actively recruit for cyber warfare positions.

Civil

Potential targets in internet sabotage include all aspects of the Internet from the backbones of the web, to the internet service providers, to the varying types of data communication mediums and network equipment. This would include: web servers, enterprise information systems, client server systems, communication links, network equipment, and the desktops and laptops in businesses and homes. Electrical grids, financial networks, and telecommunication systems are also deemed vulnerable, especially due to current trends in computerization and automation.

Hacktivism

Politically motivated hacktivism involves the subversive use of computers and computer networks to promote an agenda, and can potentially extend to attacks, theft and virtual sabotage that could be seen as cyberwarfare – or mistaken for it. Hacktivists use their knowledge and software tools to gain unauthorized access to computer systems they seek to manipulate or damage not for material gain or to cause widespread destruction, but to draw attention to their cause through well-publicized disruptions of select targets. Anonymous and other hacktivist groups are often portrayed in the media as cyber-terrorists, wreaking havoc by hacking websites, posting sensitive information about their victims, and threatening further attacks if their demands are not met. However, hacktivism is more than that. Actors are politically motivated to change the world, through the use of fundamentalism. Groups like Anonymous, however, have divided opinion with their methods.

Income generation

Cyber attacks, including ransomware, can be used to generate income. States can use these techniques to generate significant sources of income that evade sanctions, perhaps while simultaneously harming adversaries (depending on the targets). This tactic was observed in August 2019, when it was revealed that North Korea had generated $2 billion to fund its weapons program, avoiding the blanket of sanctions levied by the United States, the United Nations and the European Union.

Private sector

Computer hacking represents a modern threat in ongoing global conflicts and industrial espionage, and as such is presumed to occur widely. This type of crime is typically underreported, to the extent that incidents are known at all. According to McAfee's George Kurtz, corporations around the world face millions of cyberattacks a day. "Most of these attacks don't gain any media attention or lead to strong political statements by victims." This type of crime is usually financially motivated.

Non-profit research

Not all work on cyberwarfare is for profit or personal gain. Institutes and companies such as the University of Cincinnati and Kaspersky Lab try to raise awareness of the topic by researching and publishing new security threats.

Preparedness

A number of countries conduct exercises to increase preparedness and to explore the strategy, tactics and operations involved in conducting and defending against cyber attacks; this is typically done in the form of war games.

The Cooperative Cyber Defence Centre of Excellence (CCDCOE), part of the North Atlantic Treaty Organization (NATO), has conducted a yearly war game called Locked Shields since 2010, designed to test readiness and improve the skills, strategy, tactics and operational decision making of participating national organizations. Locked Shields 2019 saw 1,200 participants from 30 nations compete in a red team vs. blue team exercise. The war game involved a fictional country, Berylia, which was "experiencing a deteriorating security situation, where a number of hostile events coincide with coordinated cyber attacks against a major civilian internet service provider and maritime surveillance system. The attacks caused severe disruptions in the power generation and distribution, 4G communication systems, maritime surveillance, water purification plant and other critical infrastructure components". The CCDCOE describes the aim of the exercise as being to "maintain the operation of various systems under intense pressure, the strategic part addresses the capability to understand the impact of decisions made at the strategic and policy level." Ultimately, France won Locked Shields 2019.

The European Union conducts cyber war game scenarios with member states and partner nations to improve readiness and skills, and to observe how strategic and tactical decisions may affect the scenario.

As well as war games which serve the broader purpose of exploring options and improving skills, cyber war games are targeted at preparing for specific threats. In 2018 the Sunday Times reported that the UK government was conducting cyber war games which could "blackout Moscow". These types of war games move beyond defensive preparedness, as described above, to preparing offensive capabilities which can be used as deterrence, or for "war".

Cyber activities by nation

Approximately 120 countries have been developing ways to use the Internet as a weapon and target financial markets, government computer systems and utilities.

Asia

China

Foreign Policy magazine puts the size of China's "hacker army" at anywhere from 50,000 to 100,000 individuals.

Diplomatic cables highlight US concerns that China is using access to Microsoft source code and 'harvesting the talents of its private sector' to boost its offensive and defensive capabilities.

The 2018 cyberattack on the Marriott hotel chain that collected personal details of roughly 500 million guests is now known to be part of a Chinese intelligence-gathering effort that also hacked health insurers and the security clearance files of millions more Americans. The hackers are suspected of working on behalf of the Ministry of State Security, the country's Communist-controlled civilian spy agency. "The information is exactly what the Chinese use to root out spies, recruit intelligence agents and build a rich repository of Americans' personal data for future targeting."

A 2008 article in the Culture Mandala: The Bulletin of the Centre for East-West Cultural and Economic Studies by Jason Fritz alleges that the Chinese government from 1995 to 2008 was involved in a number of high-profile cases of espionage, primarily through the use of a "decentralized network of students, business people, scientists, diplomats, and engineers from within the Chinese Diaspora". A defector in Belgium, purportedly an agent, claimed that there were hundreds of spies in industries throughout Europe, and on his defection to Australia the Chinese diplomat Chen Yonglin said there were over 1,000 such agents in that country. In 2007, a Russian executive was sentenced to 11 years for passing information about the rocket and space technology organization to China. Targets in the United States have included "aerospace engineering programs, space shuttle design, C4ISR data, high-performance computers, nuclear weapon design, cruise missile data, semiconductors, integrated circuit design, and details of US arms sales to Taiwan".

While China continues to be held responsible for a string of cyber-attacks on a number of public and private institutions in the United States, India, Russia, Canada, and France, the Chinese government denies any involvement in cyber-spying campaigns. The administration maintains the position that China is not the threat but rather the victim of an increasing number of cyber-attacks. Most reports about China's cyber warfare capabilities have yet to be confirmed by the Chinese government.

According to Fritz, China has expanded its cyber capabilities and military technology by acquiring foreign military technology. Fritz states that the Chinese government uses "new space-based surveillance and intelligence gathering systems, anti-satellite weapons, anti-radar, infrared decoys, and false target generators" to assist in this quest, and that it supports the "informatisation" of its military through "increased education of soldiers in cyber warfare; improving the information network for military training, and has built more virtual laboratories, digital libraries and digital campuses." Through this informatisation, China hopes to prepare its forces to engage in a different kind of warfare, against technically capable adversaries. Many recent news reports link China's technological capabilities to the beginning of a new "cyber cold war".

In response to reports of cyberattacks by China against the United States, Amitai Etzioni of the Institute for Communitarian Policy Studies has suggested that China and the United States agree to a policy of mutually assured restraint with respect to cyberspace. This would involve allowing both states to take the measures they deem necessary for their self-defense while simultaneously agreeing to refrain from taking offensive steps; it would also entail vetting these commitments.

Operation Shady RAT is an ongoing series of cyber attacks starting in mid-2006, reported by the Internet security company McAfee in August 2011. China is widely believed to be the state actor behind these attacks, which hit at least 72 organizations, including governments and defense contractors.

On 14 September 2020, a database showing the personal details of about 2.4 million people around the world was leaked and published. A Chinese company, Zhenhua Data Information Technology Co., Ltd., compiled the database. According to information from the "National Enterprise Credit Information Publicity System", which is run by the State Administration for Market Regulation in China, the shareholders of Zhenhua Data Information Technology Co., Ltd. are two natural persons and one general partnership enterprise whose partners are natural persons. Wang Xuefeng, the chief executive and a shareholder of Zhenhua Data, has publicly boasted that he supports "hybrid warfare" through manipulation of public opinion and "psychological warfare".

India

The Department of Information Technology created the Indian Computer Emergency Response Team (CERT-In) in 2004 to thwart cyber attacks in India. That year, there were 23 reported cyber security breaches. In 2011, there were 13,301. That year, the government created a new subdivision, the National Critical Information Infrastructure Protection Centre (NCIIPC) to thwart attacks against energy, transport, banking, telecom, defense, space and other sensitive areas.

The Executive Director of the Nuclear Power Corporation of India (NPCIL) stated in February 2013 that his company alone was forced to block up to ten targeted attacks a day. CERT-In was left to protect less critical sectors.

A high-profile cyber attack on 12 July 2012 breached the email accounts of about 12,000 people, including those of officials from the Ministry of External Affairs, the Ministry of Home Affairs, the Defence Research and Development Organisation (DRDO), and the Indo-Tibetan Border Police (ITBP). A government-private sector plan overseen by National Security Advisor (NSA) Shivshankar Menon began in October 2012 and intends to boost India's cyber security capabilities, in light of a group of experts' findings that India faces a shortfall of 470,000 such experts despite the country's reputation as an IT and software powerhouse.

In February 2013, Information Technology Secretary J. Satyanarayana stated that the NCIIPC was finalizing policies related to national cyber security that would focus on domestic security solutions, reducing exposure through foreign technology. Other steps include the isolation of various security agencies to ensure that a synchronised attack could not succeed on all fronts and the planned appointment of a National Cyber Security Coordinator. As of that month, there had been no significant economic or physical damage to India related to cyber attacks.

On 26 November 2010, a group calling itself the Indian Cyber Army hacked websites belonging to the Pakistan Army and others belonging to different ministries, including the Ministry of Foreign Affairs, the Ministry of Education, the Ministry of Finance, the Pakistan Computer Bureau, and the Council of Islamic Ideology. The attack was carried out as revenge for the Mumbai terrorist attacks.

On 4 December 2010, a group calling itself the Pakistan Cyber Army hacked the website of India's top investigating agency, the Central Bureau of Investigation (CBI). The National Informatics Centre (NIC) began an inquiry.

In July 2016, Cymmetria researchers discovered and revealed the cyber attack dubbed "Patchwork", which compromised an estimated 2,500 corporate and government agencies using code stolen from GitHub and the dark web. Examples of weapons used are an exploit for the Sandworm vulnerability (CVE-2014-4114), a compiled AutoIt script, and UAC bypass code dubbed UACME. Targets are believed to have been mainly military and political assignments around Southeast Asia and the South China Sea, and the attackers are believed to be of Indian origin, gathering intelligence from influential parties.

The Defence Cyber Agency, the Indian military agency responsible for cyberwarfare, is expected to become operational by November 2019.

Philippines

The Chinese were blamed after the cybersecurity company F-Secure Labs found malware, NanHaiShu, which targeted the Philippines Department of Justice. It sent information from an infected machine to a server with a Chinese IP address. The malware, which is considered particularly sophisticated in nature, was introduced by phishing emails designed to look as if they came from authentic sources. The information sent is believed to relate to the South China Sea legal case.

South Korea

In July 2009, there was a series of coordinated denial-of-service attacks against major government, news media, and financial websites in South Korea and the United States. While many thought the attack was directed by North Korea, one researcher traced the attacks to the United Kingdom. Security researcher Chris Kubecka presented evidence that multiple European Union and United Kingdom companies unwittingly helped attack South Korea due to W32.Dozer infections, malware used in part of the attack. Some of the companies used in the attack were partially owned by several governments, further complicating attribution.

Visualization of 2009 cyber warfare attacks against South Korea

In July 2011, the South Korean company SK Communications was hacked, resulting in the theft of the personal details (including names, phone numbers, home and email addresses and resident registration numbers) of up to 35 million people. A trojaned software update was used to gain access to the SK Communications network. Links exist between this hack and other malicious activity and it is believed to be part of a broader, concerted hacking effort.

With ongoing tensions on the Korean Peninsula, South Korea's defense ministry stated that South Korea was going to improve cyber-defense strategies in hopes of preparing itself against possible cyber attacks. In March 2013, South Korea's major banks – Shinhan Bank, Woori Bank and NongHyup Bank – as well as many broadcasting stations – KBS, YTN and MBC – were hacked and more than 30,000 computers were affected; it was one of the biggest attacks South Korea had faced in years. Although it remains uncertain who was involved in this incident, there were immediate assertions that North Korea was connected, as it had threatened to attack South Korea's government institutions, major national banks and traditional newspapers numerous times – in reaction to the sanctions it received for nuclear testing and to the continuation of Foal Eagle, South Korea's annual joint military exercise with the United States.

North Korea's cyber warfare capabilities raise the alarm for South Korea, as North Korea is increasing its manpower through military academies specializing in hacking. Current figures state that South Korea has only 400 units of specialized personnel, while North Korea has more than 3,000 highly trained hackers; this portrays a huge gap in cyber warfare capabilities and sends a message to South Korea that it has to step up and strengthen its Cyber Warfare Command forces. Therefore, in order to be prepared for future attacks, South Korea and the United States will discuss deterrence plans further at the Security Consultative Meeting (SCM). At the SCM, they plan on developing strategies that focus on accelerating the deployment of ballistic missiles as well as fostering the defense shield program known as the Korean Air and Missile Defense.

Africa

Egypt

In an extension of a bilateral dispute between Ethiopia and Egypt over the Grand Ethiopian Renaissance Dam, Ethiopian government websites were hacked by Egypt-based hackers in June 2020.

Europe

Cyprus

The New York Times published an exposé revealing an extensive three-year phishing campaign aimed at diplomats based in Cyprus. After accessing the state system, the hackers had access to the European Union's entire exchange database. By logging into Coreu, the hackers accessed communications linking all EU states, on both sensitive and less sensitive matters. The event exposed the poor protection of routine exchanges among European Union officials and a coordinated effort by a foreign entity to spy on another country. "After over a decade of experience countering Chinese cyberoperations and extensive technical analysis, there is no doubt this campaign is connected to the Chinese government", said Blake Darche, one of the experts at Area 1 Security, the company that revealed the stolen documents. The Chinese Embassy in the US did not return calls for comment. In 2019, another coordinated effort allowed hackers to gain access to government (gov.cy) emails. Cisco's Talos Security Department revealed that "Sea Turtle" hackers carried out a broad DNS hijacking campaign, hitting 40 different organizations, including in Cyprus.
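
DNS hijacking campaigns such as "Sea Turtle" work by altering a victim's DNS records so that traffic resolves to attacker-controlled servers. The sketch below illustrates one crude defensive monitor, assuming the defender has pinned the addresses each name should resolve to; the domain and addresses shown are hypothetical.

    import socket

    # Hypothetical pinned records: addresses each name is expected to resolve to.
    PINNED = {
        "mail.example.gov.cy": {"203.0.113.10", "203.0.113.11"},
    }

    def resolves_within(name, expected):
        """True if every IPv4 address the name resolves to was pinned;
        an unexpected address can indicate a hijacked DNS record."""
        infos = socket.getaddrinfo(name, None, family=socket.AF_INET)
        resolved = {info[4][0] for info in infos}
        return resolved <= expected

    for name, expected in PINNED.items():
        if not resolves_within(name, expected):
            print("ALERT: %s resolves outside its pinned address set" % name)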

Estonia

In April 2007, Estonia came under cyber attack in the wake of the relocation of the Bronze Soldier of Tallinn. Most of the attacks came from Russia, including from official servers of the Russian authorities. Ministries, banks, and media were targeted. The attack on Estonia, a seemingly small Baltic nation, was so effective because most of the nation's services are run online: Estonia has implemented an e-government, where bank services, political elections and taxes are all done online. The attack hurt Estonia's economy and its people; at least 150 people were injured on the first day due to riots in the streets.

France

In 2013, the French Minister of Defense, Jean-Yves Le Drian, ordered the creation of a cyberarmy, representing France's fourth national army corps (along with the ground, naval and air forces), under the French Ministry of Defense, to protect French and European interests on its soil and abroad. A contract was made with the French firm EADS (Airbus) to identify and secure its main elements susceptible to cyber threats. By 2016 France had thus built the largest cyberarmy in Europe, with a planned 2,600 "cyber-soldiers" and a 440 million euro investment in cybersecurity products for this new army corps. From 2019, an additional 4,400 reservists constitute the heart of this force.

Germany

In 2013, Germany revealed the existence of its 60-person Computer Network Operation unit. The German intelligence agency, the BND, announced it was seeking to hire 130 "hackers" for a new "cyber defence station" unit. In March 2013, BND president Gerhard Schindler announced that his agency had observed up to five attacks a day on government authorities, thought mainly to originate in China. He confirmed the attackers had so far only accessed data, and expressed concern that the stolen information could be used as the basis of future sabotage attacks against arms manufacturers, telecommunications companies and government and military agencies. Shortly after Edward Snowden leaked details of the U.S. National Security Agency's cyber surveillance system, German Interior Minister Hans-Peter Friedrich announced that the BND would be given an additional budget of 100 million euros to increase its cyber surveillance capability from 5% of total internet traffic in Germany to 20% of total traffic, the maximum amount allowed by German law.

Greece

Greek hackers from Anonymous Greece targeted Azerbaijani governmental websites during the 2020 Nagorno-Karabakh conflict between Armenia and Azerbaijan.

Netherlands

In the Netherlands, cyber defense is nationally coordinated by the Nationaal Cyber Security Centrum (NCSC). The Dutch Ministry of Defense laid out a cyber strategy in 2011. The first focus is to improve the cyber defense handled by the Joint IT branch (JIVC). To improve intel operations, the intel community in the Netherlands (including the military intel organization MIVD) has set up the Joint Sigint Cyber Unit (JSCU). The Ministry of Defense has furthermore been setting up an offensive cyber force, called the Defensie Cyber Command (DCC), planned to be operational by the end of 2014.

Russia

Russian, South Ossetian, Georgian and Azerbaijani sites were attacked by hackers during the 2008 South Ossetia War.

American-led cyberattacks against Russia

When Russia was still part of the Soviet Union in 1982, a portion of the Trans-Siberian pipeline within its territory exploded, allegedly due to Trojan horse malware implanted in pirated Canadian software by the Central Intelligence Agency. The malware caused the SCADA system running the pipeline to malfunction. The "Farewell Dossier" provided information on this attack, and noted that compromised computer chips would become part of Soviet military equipment, flawed turbines would be placed in the gas pipeline, and defective plans would disrupt the output of chemical plants and a tractor factory. The attack caused the "most monumental nonnuclear explosion and fire ever seen from space." However, the Soviet Union did not blame the United States for the attack.

In June 2019, the New York Times reported that American hackers from the United States Cyber Command planted malware potentially capable of disrupting the Russian electrical grid.

Russian-led cyberattacks

It has been claimed that Russian security services organized a number of denial-of-service attacks as part of their cyber-warfare against other countries, most notably the 2007 cyberattacks on Estonia and the 2008 cyberattacks on Russia, South Ossetia, Georgia, and Azerbaijan. One identified young Russian hacker said that he was paid by Russian state security services to lead hacking attacks on NATO computers. He was studying computer science at the Department of the Defense of Information, and his tuition was paid for by the FSB.

Sweden

In January 2017, Sweden's armed forces were subjected to a cyber-attack that caused them to shut down a so-called Caxcis IT system used in military exercises.

Ukraine

According to CrowdStrike, from 2014 to 2016 the Russian APT Fancy Bear used Android malware to target the Ukrainian Army's Rocket Forces and Artillery. They distributed an infected version of an Android app whose original purpose was to control targeting data for the D-30 howitzer. The app, used by Ukrainian officers, was loaded with the X-Agent spyware and posted online on military forums. CrowdStrike claimed the attack was successful, with more than 80% of Ukrainian D-30 howitzers destroyed, the highest percentage loss of any artillery pieces in the army (a percentage that had never been previously reported and would mean the loss of nearly the entire arsenal of the biggest artillery piece of the Ukrainian Armed Forces). According to the Ukrainian army, this number is incorrect: losses in artillery weapons "were way below those reported" and "have nothing to do with the stated cause".

In 2014, the Russians were suspected of using a cyber weapon called "Snake", or "Ouroboros", to conduct a cyber attack on Ukraine during a period of political turmoil. The Snake toolkit began spreading into Ukrainian computer systems in 2010. It performed Computer Network Exploitation (CNE), as well as highly sophisticated Computer Network Attacks (CNA).

On 23 December 2015, the BlackEnergy malware was used in a cyberattack on Ukraine's power grid that left more than 200,000 people temporarily without power. A mining company and a large railway operator were also victims of the attack.

United Kingdom

MI6 reportedly infiltrated an Al Qaeda website and replaced the instructions for making a pipe bomb with the recipe for making cupcakes.

In October 2010, Iain Lobban, the director of the Government Communications Headquarters (GCHQ), said the UK faces a "real and credible" threat from cyber attacks by hostile states and criminals, that government systems are targeted 1,000 times each month, that such attacks threaten the UK's economic future, and that some countries are already using cyber assaults to put pressure on other nations.

On 12 November 2013, financial organizations in London conducted cyber war games dubbed "Waking Shark 2" to simulate massive internet-based attacks against banks and other financial organizations. The Waking Shark 2 cyber war games followed a similar exercise on Wall Street.

Middle East

Iran

Iran has been both a victim and a perpetrator of several cyberwarfare operations. Iran is considered an emerging military power in the field.

In September 2010, Iran was attacked by the Stuxnet worm, thought to specifically target its Natanz nuclear enrichment facility. Stuxnet was a 500-kilobyte computer worm that infected at least 14 industrial sites in Iran, including the Natanz uranium-enrichment plant. Although the authors of Stuxnet have not been officially identified, it is believed to have been developed and deployed by the United States and Israel. The worm is said to be the most advanced piece of malware ever discovered, and it significantly raised the profile of cyberwarfare.

Israel

In the 2006 war against Hezbollah, Israel alleges that cyber-warfare was part of the conflict, with Israel Defense Forces (IDF) intelligence estimating that several countries in the Middle East used Russian hackers and scientists to operate on their behalf. As a result, Israel attached growing importance to cyber-tactics and became, along with the U.S., France and a couple of other nations, involved in cyber-war planning. Many international high-tech companies are now locating research and development operations in Israel, where local hires are often veterans of the IDF's elite computer units. Richard A. Clarke adds that "our Israeli friends have learned a thing or two from the programs we have been working on for more than two decades."

In September 2007, Israel carried out an airstrike on Syria dubbed Operation Orchard. U.S. industry and military sources speculated that the Israelis may have used cyberwarfare to allow their planes to pass undetected by radar into Syria.

Following US President Donald Trump's decision to pull out of the Iran nuclear deal in May 2018, cyber warfare units in the United States and Israel monitoring internet traffic out of Iran noted a surge in retaliatory cyber attacks from Iran. Security firms warned that Iranian hackers were sending emails containing malware to diplomats who work in the foreign affairs offices of US allies and employees at telecommunications companies, trying to infiltrate their computer systems.

Saudi Arabia

On 15 August 2012 at 11:08 am local time, the Shamoon virus began destroying over 35,000 computer systems, rendering them inoperable. The virus was used to target the Saudi government, causing destruction to the state-owned national oil company Saudi Aramco. The attackers posted a pastie on PasteBin.com hours before the wiper logic bomb triggered, citing oppression and the Al-Saud regime as the reason behind the attack.

Pastie announcing attack against Saudi Aramco by a group called Cutting Sword of Justice

The attack was well staged, according to Chris Kubecka, a former security advisor to Saudi Aramco after the attack and group leader of security for Aramco Overseas. An unnamed Saudi Aramco employee on the Information Technology team opened a malicious phishing email, allowing initial entry into the computer network around mid-2012.

Shamoon 1 attack timeline against Saudi Aramco

Kubecka also detailed in her Black Hat USA talk that Saudi Aramco had placed the majority of its security budget on the ICS control network, leaving the business network at risk of a major incident, as she put it: "When you realize most of your security budget was spent on ICS & IT gets Pwnd". The virus was noted to behave differently from other malware attacks, due to the destructive nature and the cost of the attack and recovery. US Defense Secretary Leon Panetta called the attack a "Cyber Pearl Harbor".

The attack, known years later as the "biggest hack in history", was intended for cyber warfare. Shamoon can spread from an infected machine to other computers on the network. Once a system is infected, the virus compiles a list of files from specific locations on the system, uploads them to the attacker, and erases them. Finally, the virus overwrites the master boot record of the infected computer, making it unusable. The virus has been used for cyber warfare against the national oil companies Saudi Aramco and Qatar's RasGas.
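
Because Shamoon's final, machine-killing step is overwriting the master boot record, one narrow defensive check is to hash the first disk sector and compare it against a recorded baseline. This is a minimal sketch, assuming a Linux host and root access; the device path and baseline file name are placeholders, not details from the incident.

    import hashlib
    import sys

    DEVICE = "/dev/sda"        # hypothetical disk device; reading it requires root
    BASELINE = "mbr.sha256"    # file holding the known-good digest

    def mbr_digest(device):
        """Hash the first 512 bytes of the disk: the master boot record."""
        with open(device, "rb") as disk:
            return hashlib.sha256(disk.read(512)).hexdigest()

    if __name__ == "__main__":
        if sys.argv[1:] == ["baseline"]:
            with open(BASELINE, "w") as f:
                f.write(mbr_digest(DEVICE))      # record the current MBR as trusted
        else:
            with open(BASELINE) as f:
                expected = f.read().strip()
            if mbr_digest(DEVICE) != expected:
                print("ALERT: master boot record has changed")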

Saudi Aramco announced the attack on its Facebook page and went offline again until a company statement was issued on 25 August 2012. The statement falsely reported that normal business had resumed on 25 August 2012. However, a Middle Eastern journalist leaked photographs taken on 1 September 2012 showing kilometers of petrol trucks unable to be loaded because business systems were still inoperable.

Tanker trucks unable to be loaded with gasoline due to Shamoon attacks

On 29 August 2012, the same attackers behind Shamoon posted another pastie on PasteBin.com, taunting Saudi Aramco with proof that they still retained access to the company network. The post contained the usernames and passwords of security and network equipment and the new password for CEO Khalid Al-Falih. The attackers also referenced a portion of the Shamoon malware as further proof in the pastie.

According to Kubecka, in order to restore operations, Saudi Aramco used its large private fleet of aircraft and available funds to purchase much of the world's hard drives, driving the price up. New hard drives were required as quickly as possible so that oil prices were not affected by speculation. By 1 September 2012, 17 days after the 15 August attack, gasoline resources were dwindling for the public of Saudi Arabia. RasGas was also affected by a different variant, crippling it in a similar manner.

Qatar

In March 2018 American Republican fundraiser Elliott Broidy filed a lawsuit against Qatar, alleging that Qatar's government stole and leaked his emails in order to discredit him because he was viewed "as an impediment to their plan to improve the country's standing in Washington." In May 2018, the lawsuit named Mohammed bin Hamad bin Khalifa Al Thani, brother of the Emir of Qatar, and his associate Ahmed Al-Rumaihi, as allegedly orchestrating Qatar's cyber warfare campaign against Broidy. Further litigation revealed that the same cybercriminals who targeted Broidy had targeted as many as 1,200 other individuals, some of whom are also "well-known enemies of Qatar" such as senior officials of the U.A.E., Egypt, Saudi Arabia, and Bahrain. While these hackers almost always obscured their location, some of their activity was traced to a telecommunication network in Qatar.

United Arab Emirates

The United Arab Emirates has launched several cyber-attacks in the past targeting dissidents. Ahmed Mansoor, an Emirati citizen, was jailed for sharing his thoughts on Facebook and Twitter. He was given the code name Egret under the state-led covert project called Raven, which spied on top political opponents, dissidents, and journalists. Project Raven deployed a secret hacking tool called Karma, to spy without requiring the target to engage with any web links.

North America

United States

Cyberwarfare in the United States is a part of the American military strategy of proactive cyber defence and the use of cyberwarfare as a platform for attack. The new United States military strategy makes explicit that a cyberattack is a casus belli just as a traditional act of war is.

In 2013, cyberwarfare was, for the first time, considered a larger threat than Al Qaeda or terrorism by many U.S. intelligence officials. In 2017, Representative Mike Rogers, chairman of the U.S. House Permanent Select Committee on Intelligence, said, for instance, that "We are in a cyber war in this country, and most Americans don't know it. And we are not necessarily winning. We have got huge challenges when it comes to cybersecurity."

U.S. government security expert Richard A. Clarke, in his book Cyber War (May 2010), defines "cyberwarfare" as "actions by a nation-state to penetrate another nation's computers or networks for the purposes of causing damage or disruption." The Economist describes cyberspace as "the fifth domain of warfare," and William J. Lynn, U.S. Deputy Secretary of Defense, states that "as a doctrinal matter, the Pentagon has formally recognized cyberspace as a new domain in warfare . . . [which] has become just as critical to military operations as land, sea, air, and space."

In 2009, President Barack Obama declared America's digital infrastructure to be a "strategic national asset," and in May 2010 the Pentagon set up its new U.S. Cyber Command (USCYBERCOM), headed by General Keith B. Alexander, director of the National Security Agency (NSA), to defend American military networks and attack other countries' systems. The EU has set up ENISA (European Union Agency for Network and Information Security), which is headed by Prof. Udo Helmbrecht, and there are now further plans to significantly expand ENISA's capabilities. The United Kingdom has also set up a cyber-security and "operations centre" based in Government Communications Headquarters (GCHQ), the British equivalent of the NSA. In the U.S., however, Cyber Command is only set up to protect the military, whereas government and corporate infrastructures are primarily the responsibility of the Department of Homeland Security and private companies respectively.

In February 2010, top American lawmakers warned that the "threat of a crippling attack on telecommunications and computer networks was sharply on the rise." According to The Lipman Report, numerous key sectors of the U.S. economy, along with those of other nations, are currently at risk, including cyber threats to public and private facilities, banking and finance, transportation, manufacturing, medical, education and government, all of which are now dependent on computers for daily operations. In 2009, President Obama stated that "cyber intruders have probed our electrical grids."

The Economist writes that China has plans for "winning informationised wars by the mid-21st century". It notes that other countries are likewise organizing for cyberwar, among them Russia, Israel and North Korea. Iran boasts of having the world's second-largest cyber-army. James Gosler, a government cybersecurity specialist, worries that the U.S. has a severe shortage of computer security specialists, estimating that there are only about 1,000 qualified people in the country today, whereas a force of 20,000 to 30,000 skilled experts is needed. At the July 2010 Black Hat computer security conference, Michael Hayden, former deputy director of national intelligence, challenged thousands of attendees to help devise ways to "reshape the Internet's security architecture", explaining, "You guys made the cyberworld look like the north German plain."

In January 2012, Mike McConnell, the former director of national intelligence at the National Security Agency under President George W. Bush, told the Reuters news agency that the U.S. had already launched attacks on computer networks in other countries. McConnell did not name the country attacked, but according to other sources it may have been Iran. In June 2012 the New York Times reported that President Obama had ordered the cyber attack on Iranian nuclear enrichment facilities.

In August 2010, the U.S. for the first time warned publicly about the Chinese military's use of civilian computer experts in clandestine cyber attacks aimed at American companies and government agencies. The Pentagon also pointed to an alleged China-based computer spying network dubbed GhostNet, revealed in a 2009 research report. The Pentagon stated:

The People's Liberation Army is using "information warfare units" to develop viruses to attack enemy computer systems and networks, and those units include civilian computer professionals. Commander Bob Mehal will monitor the PLA's buildup of its cyberwarfare capabilities and will continue to develop capabilities to counter any potential threat.

The United States Department of Defense sees the use of computers and the Internet to conduct warfare in cyberspace as a threat to national security. The United States Joint Forces Command describes some of its attributes:

Cyberspace technology is emerging as an "instrument of power" in societies, and is becoming more available to a country's opponents, who may use it to attack, degrade, and disrupt communications and the flow of information. With low barriers to entry, coupled with the anonymous nature of activities in cyberspace, the list of potential adversaries is broad. Furthermore, the globe-spanning range of cyberspace and its disregard for national borders will challenge legal systems and complicate a nation's ability to deter threats and respond to contingencies.

In February 2010, the United States Joint Forces Command released a study which included a summary of the threats posed by the internet:

With very little investment, and cloaked in a veil of anonymity, our adversaries will inevitably attempt to harm our national interests. Cyberspace will become a main front in both irregular and traditional conflicts. Enemies in cyberspace will include both states and non-states and will range from the unsophisticated amateur to highly trained professional hackers. Through cyberspace, enemies will target industry, academia, government, as well as the military in the air, land, maritime, and space domains. In much the same way that airpower transformed the battlefield of World War II, cyberspace has fractured the physical barriers that shield a nation from attacks on its commerce and communication. Indeed, adversaries have already taken advantage of computer networks and the power of information technology not only to plan and execute savage acts of terrorism, but also to influence directly the perceptions and will of the U.S. Government and the American population.

On 6 October 2011, it was announced that the command and control data stream of Creech AFB's drone and Predator fleet had been keylogged, resisting all attempts to reverse the exploit, for the previous two weeks. The Air Force issued a statement that the virus had "posed no threat to our operational mission".

On 21 November 2011, it was widely reported in the U.S. media that a hacker had destroyed a water pump at the Curran-Gardner Township Public Water District in Illinois. However, it later turned out that this information was not only false, but had been inappropriately leaked from the Illinois Statewide Terrorism and Intelligence Center.

According to Foreign Policy magazine, the NSA's Tailored Access Operations (TAO) unit "has successfully penetrated Chinese computer and telecommunications systems for almost 15 years, generating some of the best and most reliable intelligence information about what is going on inside the People's Republic of China."

On 24 November 2014, the Sony Pictures Entertainment hack saw the release of confidential data belonging to Sony Pictures Entertainment (SPE).

In June 2015, the United States Office of Personnel Management (OPM) announced that it had been the target of a data breach targeting the records of as many as four million people. Later, FBI Director James Comey put the number at 18 million. The Washington Post has reported that the attack originated in China, citing unnamed government officials.

In 2016, Jeh Johnson, the United States Secretary of Homeland Security, and James Clapper, the U.S. Director of National Intelligence, issued a joint statement accusing Russia of interfering with the 2016 United States presidential election. The New York Times reported that the Obama administration had formally accused Russia of stealing and disclosing Democratic National Committee emails. Under U.S. law (50 U.S.C., Title 50 – War and National Defense, Chapter 15 – National Security, Subchapter III – Accountability for Intelligence Activities) there must be a formal presidential finding prior to authorizing a covert attack. U.S. Vice President Joe Biden said on the American news interview program Meet the Press that the United States would respond. The New York Times noted that Biden's comment "seems to suggest that Mr. Obama is prepared to order – or has already ordered – some kind of covert action". On 29 December the United States imposed the most extensive sanctions against Russia since the Cold War, expelling 35 Russian diplomats from the United States.

The United States has used cyberattacks for tactical advantage in Afghanistan.

In 2014, President Barack Obama ordered an intensification of cyberwarfare against North Korea's missile program, aiming to sabotage test launches in their opening seconds. In 2016, in the final weeks of his presidency, Obama authorized the planting of cyber weapons in Russian infrastructure in response to Moscow's alleged interference in the 2016 presidential election.

In March 2017, WikiLeaks published more than 8,000 documents on the CIA. The confidential documents, codenamed Vault 7 and dated from 2013 to 2016, include details of the CIA's software capabilities, such as the ability to compromise cars, smart TVs, web browsers (including Google Chrome, Microsoft Edge, Mozilla Firefox, and Opera), and the operating systems of most smartphones (including Apple's iOS and Google's Android), as well as other operating systems such as Microsoft Windows, macOS, and Linux.

For a global perspective of countries and other actors engaged in cyber warfare, see the George Washington University-based National Security Archive's CyberWar map.

"Kill switch bill"

On 19 June 2010, United States Senator Joe Lieberman (I-CT) introduced a bill called the "Protecting Cyberspace as a National Asset Act of 2010", which he co-wrote with Senator Susan Collins (R-ME) and Senator Thomas Carper (D-DE). If signed into law, this controversial bill, which the American media dubbed the "Kill switch bill", would grant the president emergency powers over parts of the Internet. However, all three co-authors issued a statement claiming that instead the bill "[narrowed] existing broad presidential authority to take over telecommunications networks".

Cyberpeace

The rise of cyber as a warfighting domain has led to efforts to determine how cyberspace can be used to foster peace. For example, the German civil-society organization FIfF runs a campaign for cyberpeace: for the control of cyberweapons and surveillance technology, and against the militarization of cyberspace and the development and stockpiling of offensive exploits and malware. Measures for cyberpeace include policymakers developing new rules and norms for warfare; individuals and organizations building new tools and secure infrastructures; the promotion of open source; the establishment of cyber security centers; auditing of critical infrastructure cybersecurity; obligations to disclose vulnerabilities; disarmament; defensive security strategies; decentralization; education; and the wide application of relevant tools and infrastructures, encryption and other cyberdefenses.

The topics of cyber peacekeeping and cyber peacemaking have also been studied by researchers, as a way to restore and strengthen peace in the aftermath of both cyber and traditional warfare.

Cyber counterintelligence

Cyber counterintelligence comprises measures to identify, penetrate, or neutralize foreign operations that use cyber means as the primary tradecraft methodology, as well as foreign intelligence service collection efforts that use traditional methods to gauge cyber capabilities and intentions.

  • On 7 April 2009, the Pentagon announced that it had spent more than $100 million in the previous six months responding to and repairing damage from cyber attacks and other computer network problems.
  • On 1 April 2009, U.S. lawmakers pushed for the appointment of a White House cyber security "czar" to dramatically escalate U.S. defenses against cyber attacks, crafting proposals that would empower the government to set and enforce security standards for private industry for the first time.
  • On 9 February 2009, the White House announced that it would conduct a review of the nation's cyber security to ensure that the federal government's cyber security initiatives are appropriately integrated, resourced and coordinated with the United States Congress and the private sector.
  • In the wake of the 2007 cyberwar waged against Estonia, NATO established the Cooperative Cyber Defence Centre of Excellence (CCD CoE) in Tallinn, Estonia, in order to enhance the organization's cyber defence capability. The center was formally established on 14 May 2008, and it received full accreditation by NATO and attained the status of International Military Organization on 28 October 2008. Since Estonia has led international efforts to fight cybercrime, the United States Federal Bureau of Investigation said it would permanently base a computer crime expert in Estonia in 2009 to help fight international threats against computer systems.
  • In 2015, the Department of Defense released an updated cyber strategy memorandum detailing the present and future tactics deployed in the service of defense against cyberwarfare. In this memorandum, three cybermissions are laid out. The first cybermission seeks to arm and maintain existing capabilities in the area of cyberspace, the second cybermission focuses on prevention of cyberwarfare, and the third cybermission includes strategies for retaliation and preemption (as distinguished from prevention).

One of the hardest issues in cyber counterintelligence is the problem of attribution. Unlike in conventional warfare, figuring out who is behind an attack can be very difficult. However, Defense Secretary Leon Panetta has claimed that the United States has the capability to trace attacks back to their sources and hold the attackers "accountable".

Doubts about existence

In October 2011 the Journal of Strategic Studies, a leading journal in that field, published an article by Thomas Rid, "Cyber War Will Not Take Place" which argued that all politically motivated cyber attacks are merely sophisticated versions of sabotage, espionage, or subversion – and that it is unlikely that cyber war will occur in the future.

Legal perspective

Various parties have attempted to come up with international legal frameworks to clarify what is and is not acceptable, but none have yet been widely accepted.

The Tallinn Manual, published in 2013, is an academic, non-binding study on how international law, in particular the jus ad bellum and international humanitarian law, apply to cyber conflicts and cyber warfare. It was written at the invitation of the Tallinn-based NATO Cooperative Cyber Defence Centre of Excellence by an international group of approximately twenty experts between 2009 and 2012.

The Shanghai Cooperation Organisation (members of which include China and Russia) defines cyberwar to include dissemination of information "harmful to the spiritual, moral and cultural spheres of other states". In September 2011, these countries proposed to the UN Secretary General a document called "International code of conduct for information security".

In contrast, the United States' approach focuses on physical and economic damage and injury, placing political concerns under freedom of speech. This difference of opinion has led to reluctance in the West to pursue global cyber arms control agreements. However, American General Keith B. Alexander did endorse talks with Russia over a proposal to limit military attacks in cyberspace. In June 2013, Barack Obama and Vladimir Putin agreed to establish a secure cyberwar hotline providing "a direct secure voice communications line between the US cybersecurity coordinator and the Russian deputy secretary of the security council, should there be a need to directly manage a crisis situation arising from an ICT security incident" (White House quote).

A Ukrainian professor of international law, Alexander Merezhko, has developed a project called the International Convention on Prohibition of Cyberwar in Internet. According to this project, cyberwar is defined as the use of the Internet and related technological means by one state against the political, economic, technological and informational sovereignty and independence of another state. Professor Merezhko's project suggests that the Internet ought to remain free from warfare tactics and be treated as an international landmark. He states that the Internet (cyberspace) is a "common heritage of mankind".

At the February 2017 RSA Conference, Microsoft president Brad Smith suggested global rules, a "Digital Geneva Convention", for cyber attacks that would "ban the nation-state hacking of all the civilian aspects of our economic and political infrastructures". He also stated that an independent organization could investigate and publicly disclose evidence attributing nation-state attacks to specific countries. Furthermore, he said that the technology sector should collectively and neutrally work together to protect Internet users, pledge to remain neutral in conflict and not aid governments in offensive activity, and adopt a coordinated disclosure process for software and hardware vulnerabilities.

AI takeover

From Wikipedia, the free encyclopedia
 
Robots revolt in R.U.R., a 1920 play

An AI takeover is a hypothetical scenario in which artificial intelligence (AI) becomes the dominant form of intelligence on Earth, with computer programs or robots effectively taking control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a superintelligent AI, and the popular notion of a robot uprising. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.

Types

Automation of the economy

The traditional consensus among economists has been that technological progress does not cause long-term unemployment. However, recent innovation in the fields of robotics and artificial intelligence has raised worries that human labor will become obsolete, leaving people in various sectors without jobs to earn a living and leading to an economic crisis. Many small and medium-sized businesses may also be driven out of business if they cannot afford or license the latest robotic and AI technology, and may need to focus on areas or services that cannot easily be replaced in order to remain viable in the face of such technology.

Technologies that may displace workers

Computer-integrated manufacturing

Computer-integrated manufacturing is the manufacturing approach of using computers to control the entire production process. This integration allows individual processes to exchange information with each other and initiate actions. Although manufacturing can be faster and less error-prone by the integration of computers, the main advantage is the ability to create automated manufacturing processes. Computer-integrated manufacturing is used in automotive, aviation, space, and ship building industries.

White-collar machines

The 21st century has seen a variety of skilled tasks partially taken over by machines, including translation, legal research and even low level journalism. Care work, entertainment, and other tasks requiring empathy, previously thought safe from automation, have also begun to be performed by robots.

Autonomous cars

An autonomous car is a vehicle that is capable of sensing its environment and navigating without human input. Many such vehicles are being developed, but as of May 2017 automated cars permitted on public roads were not yet fully autonomous: they all required a human driver at the wheel, ready at a moment's notice to take control of the vehicle. Among the main obstacles to widespread adoption of autonomous vehicles are concerns about the resulting loss of driving-related jobs in the road transport industry. On March 18, 2018, the first pedestrian was killed by an autonomous vehicle, an Uber self-driving car, in Tempe, Arizona.

Eradication

Scientists such as Stephen Hawking are confident that superhuman artificial intelligence is physically possible, stating "there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains".

Scholars like Nick Bostrom debate how far off superhuman intelligence is, and whether it would actually pose a risk to mankind. A superintelligent machine would not necessarily be motivated by the same emotional desire to collect power that often drives human beings. However, a machine could be motivated to take over the world as a rational means toward attaining its ultimate goals; taking over the world would both increase its access to resources, and would help to prevent other agents from stopping the machine's plans. As an oversimplified example, a paperclip maximizer designed solely to create as many paperclips as possible would want to take over the world so that it can use all of the world's resources to create as many paperclips as possible, and, additionally, prevent humans from shutting it down or using those resources on things other than paperclips.

In fiction

AI takeover is a common theme in science fiction. Fictional scenarios typically differ vastly from those hypothesized by researchers in that they involve an active conflict between humans and an AI or robots with anthropomorphic motives who see humans as a threat or otherwise have an active desire to fight humans, as opposed to the researchers' concern of an AI that rapidly exterminates humans as a byproduct of pursuing arbitrary goals. This theme is at least as old as Karel Čapek's R.U.R., which introduced the word robot to the global lexicon in 1921, and can even be glimpsed in Mary Shelley's Frankenstein (published in 1818), as Victor ponders whether, if he grants his monster's request and makes him a wife, they would reproduce and their kind would destroy humanity.

The word "robot" from R.U.R. comes from the Czech word, robota, meaning laborer or serf. The 1920 play was a protest against the rapid growth of technology, featuring manufactured "robots" with increasing capabilities who eventually revolt. HAL 9000 (1968) and the original Terminator (1984) are two iconic examples of hostile AI in pop culture.

Contributing factors

Advantages of superhuman intelligence over humans

Nick Bostrom and others have expressed concern that an AI with the abilities of a competent artificial intelligence researcher would be able to modify its own source code and increase its own intelligence. If its self-reprogramming makes it even better at reprogramming itself, the result could be a recursive intelligence explosion in which it would rapidly leave human intelligence far behind.

Bostrom defines a superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest", and enumerates some advantages a superintelligence would have if it chose to compete against humans:

  • Technology research: A machine with superhuman scientific research abilities would be able to beat the human research community to milestones such as nanotechnology or advanced biotechnology. If the advantage becomes sufficiently large (for example, due to a sudden intelligence explosion), an AI takeover becomes trivial. For example, a superintelligent AI might design self-replicating bots that initially escape detection by diffusing throughout the world at a low concentration. Then, at a prearranged time, the bots multiply into nanofactories that cover every square foot of the Earth, producing nerve gas or deadly target-seeking mini-drones.
  • Strategizing: A superintelligence might be able to simply outwit human opposition.
  • Social manipulation: A superintelligence might be able to recruit human support, or covertly incite a war between humans.
  • Economic productivity: As long as a copy of the AI could produce more economic wealth than the cost of its hardware, individual humans would have an incentive to voluntarily allow the Artificial General Intelligence (AGI) to run a copy of itself on their systems.
  • Hacking: A superintelligence could find new exploits in computers connected to the Internet, and spread copies of itself onto those systems, or might steal money to finance its plans.

Sources of AI advantage

According to Bostrom, a computer program that faithfully emulates a human brain, or that otherwise runs algorithms that are equally powerful as the human brain's algorithms, could still become a "speed superintelligence" if it can think many orders of magnitude faster than a human, due to being made of silicon rather than flesh, or due to optimization focusing on increasing the speed of the AGI. Biological neurons operate at about 200 Hz, whereas a modern microprocessor operates at a speed of about 2,000,000,000 Hz. Human axons carry action potentials at around 120 m/s, whereas computer signals travel near the speed of light.
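
As a rough illustration of the magnitudes cited above, the following Python sketch computes the resulting ratios. It is illustrative arithmetic only; the 200 Hz, 2 GHz, 120 m/s and speed-of-light figures are simply the ones quoted in this paragraph.

    # Back-of-the-envelope ratios for a "speed superintelligence",
    # using the figures quoted above (illustrative arithmetic only).
    import math

    neuron_hz = 200           # approximate firing rate of a biological neuron
    cpu_hz = 2_000_000_000    # clock rate of a modern microprocessor (~2 GHz)
    axon_m_s = 120            # approximate speed of an action potential
    light_m_s = 299_792_458   # speed of light, an upper bound for electronic signals

    print(f"clock-rate ratio:   {cpu_hz / neuron_hz:.0e}")    # ~1e7
    print(f"signal-speed ratio: {light_m_s / axon_m_s:.0e}")  # ~2.5e6
    print(f"orders of magnitude (clock): {math.log10(cpu_hz / neuron_hz):.0f}")

The clock-rate gap alone is about seven orders of magnitude, which is the comparison Bostrom makes explicit later in this document.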

A network of human-level intelligences designed to network together and share complex thoughts and memories seamlessly, able to collectively work as a giant unified team without friction, or consisting of trillions of human-level intelligences, would become a "collective superintelligence".

More broadly, any number of qualitative improvements to a human-level AGI could result in a "quality superintelligence", perhaps resulting in an AGI as far above us in intelligence as humans are above non-human apes. The number of neurons in a human brain is limited by cranial volume and metabolic constraints, while the number of processors in a supercomputer can be indefinitely expanded. An AGI need not be limited by human constraints on working memory, and might therefore be able to intuitively grasp more complex relationships than humans can. An AGI with specialized cognitive support for engineering or computer programming would have an advantage in these fields, compared with humans who evolved no specialized mental modules to specifically deal with those domains. Unlike humans, an AGI can spawn copies of itself and tinker with its copies' source code to attempt to further improve its algorithms.

Possibility of unfriendly AI preceding friendly AI

Is strong AI inherently dangerous?

A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the entire human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.

The sheer complexity of human value systems makes it very difficult to make an AI's motivations human-friendly. Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not with "common sense". Humans tend to rule such scenarios out intuitively, an adaptation shaped by our evolutionary and cultural history; according to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would have such an adaptation.

Odds of conflict

Many scholars, including evolutionary psychologist Steven Pinker, argue that a superintelligent machine is likely to coexist peacefully with humans.

The fear of cybernetic revolt is often based on interpretations of humanity's history, which is rife with incidents of enslavement and genocide. Such fears stem from a belief that competitiveness and aggression are necessary in any intelligent being's goal system. However, such human competitiveness stems from the evolutionary background to our intelligence, where the survival and reproduction of genes in the face of human and non-human competitors was the central goal. According to AI researcher Steve Omohundro, an arbitrary intelligence could have arbitrary goals: there is no particular reason that an artificially intelligent machine (not sharing humanity's evolutionary context) would be hostile—or friendly—unless its creator programs it to be such and it is not inclined or capable of modifying its programming. But the question remains: if AI systems could interact and evolve (evolution in this context meaning self-modification, or selection and reproduction) and needed to compete over resources, would that create goals of self-preservation? An AI's goal of self-preservation could come into conflict with some goals of humans.

Many scholars dispute the likelihood of unanticipated cybernetic revolt as depicted in science fiction such as The Matrix, arguing that it is more likely that any artificial intelligence powerful enough to threaten humanity would probably be programmed not to attack it. Pinker acknowledges the possibility of deliberate "bad actors", but states that in the absence of bad actors, unanticipated accidents are not a significant threat; Pinker argues that a culture of engineering safety will prevent AI researchers from unleashing malign superintelligence by accident. In contrast, Yudkowsky argues that humanity is less likely to be threatened by deliberately aggressive AIs than by AIs which were programmed such that their goals are unintentionally incompatible with human survival or well-being (as in the film I, Robot and in the short story "The Evitable Conflict"). Omohundro suggests that present-day automation systems are not designed for safety and that AIs may blindly optimize narrow utility functions (say, playing chess at all costs), leading them to seek self-preservation and elimination of obstacles, including humans who might turn them off.

Precautions

The AI control problem is the issue of how to build a superintelligent agent that will aid its creators, and avoid inadvertently building a superintelligence that will harm its creators. Some scholars argue that solutions to the control problem might also find applications in existing non-superintelligent AI.

Major approaches to the control problem include alignment, which aims to align AI goal systems with human values, and capability control, which aims to reduce an AI system's capacity to harm humans or gain control. An example of "capability control" is to research whether a superintelligent AI could be successfully confined in an "AI box". According to Bostrom, such capability control proposals are not reliable or sufficient to solve the control problem in the long term, but may potentially act as valuable supplements to alignment efforts.

Warnings

Physicist Stephen Hawking, Microsoft founder Bill Gates and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could develop to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race". Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." In January 2015, Nick Bostrom joined Stephen Hawking, Max Tegmark, Elon Musk, Lord Martin Rees, Jaan Tallinn, and numerous AI researchers, in signing the Future of Life Institute's open letter speaking to the potential risks and benefits associated with artificial intelligence. The signatories

…believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.

Superintelligence

From Wikipedia, the free encyclopedia

https://en.wikipedia.org/wiki/Superintelligence 

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

University of Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". The program Fritz falls short of superintelligence—even though it is much better than humans at chess—because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity of perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible to biological entities. This may give them the opportunity to—either as a single being or as a new species—become much more powerful than humans, and to displace them.

A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.

Feasibility of artificial superintelligence

Progress in machine classification of images: the error rate of AI by year, with the red line showing the error rate of a trained human.

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials. He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI. Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself – a feature called "recursive self-improvement". It would then be even better at improving itself, and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect, and may be able to invent or discover almost anything.

Computer components already greatly surpass human performance in speed. Bostrom writes, "Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz)." Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, "whereas existing electronic processing cores can communicate optically at the speed of light". Thus, the simplest example of a superintelligence may be an emulated human mind run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.

Another advantage of computers is modularity, that is, their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve on human reasoning and decision-making. Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed. Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.

All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.

Feasibility of biological superintelligence

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence. By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process instead is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, NSI-189, MAOIs, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process very rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.
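
The gains quoted above are consistent with simple order statistics. The following Python simulation is a minimal sketch, not taken from the source: it assumes the heritable component of IQ is normally distributed with a standard deviation of roughly 7.5 points (an assumption chosen to match the quoted figures) and estimates the expected gain from keeping the best of n embryos.

    # Illustrative simulation: expected IQ gain from selecting the best
    # of n embryos. Assumes the heritable IQ component ~ Normal(0, 7.5);
    # the 7.5-point standard deviation is an assumption, not a source value.
    import numpy as np

    rng = np.random.default_rng(0)
    SD_GENETIC = 7.5
    TRIALS = 10_000

    for n in (2, 10, 100, 1000):
        # Draw TRIALS batches of n scores; keep the best score in each batch.
        best = rng.normal(0.0, SD_GENETIC, size=(TRIALS, n)).max(axis=1)
        print(f"best of {n:4d}: expected gain ~ {best.mean():.1f} IQ points")

Under these assumptions the simulation yields roughly 4 points for one-in-two selection and roughly 24 points for one-in-1000 selection, in line with the figures quoted above.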

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding those of its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism. A prediction market is sometimes considered an example of a working collective intelligence system consisting of humans only (assuming algorithms are not used to inform decisions).

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain–computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem.

Forecasts

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines "that can carry out most human professions at least as well as a typical human" (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said 'never' for 50% confidence, and the 16.5% who said 'never' for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.

Design considerations

Bostrom expressed concern about what values a superintelligence should be designed to have. He compared several proposals:

  • The coherent extrapolated volition (CEV) proposal is that it should have the values upon which humans would converge.
  • The moral rightness (MR) proposal is that it should value moral rightness.
  • The moral permissibility (MP) proposal is that it should value staying within the bounds of moral permissibility (and otherwise have CEV values).

Bostrom clarifies these terms:

instead of implementing humanity's coherent extrapolated volition, one could try to build an AI with the goal of doing what is morally right, relying on the AI’s superior cognitive capacities to figure out just which actions fit that description. We can call this proposal “moral rightness” (MR) ... MR would also appear to have some disadvantages. It relies on the notion of “morally right,” a notoriously difficult concept, one with which philosophers have grappled since antiquity without yet attaining consensus as to its analysis. Picking an erroneous explication of “moral rightness” could result in outcomes that would be morally very wrong ... The path to endowing an AI with any of these [moral] concepts might involve giving it general linguistic ability (comparable, at least, to that of a normal human adult). Such a general ability to understand natural language could then be used to understand what is meant by “morally right.” If the AI could grasp the meaning, it could search for actions that fit ...

One might try to preserve the basic idea of the MR model while reducing its demandingness by focusing on moral permissibility: the idea being that we could let the AI pursue humanity’s CEV so long as it did not act in ways that are morally impermissible.

Responding to Bostrom, Santos-Lang raised concern that developers may attempt to start with a single kind of superintelligence.

Potential threat to humanity

It has been suggested that if AI systems rapidly become superintelligent, they may take unforeseen actions or out-compete humanity. Researchers have argued that, by way of an "intelligence explosion," a self-improving AI could become so powerful as to be unstoppable by humans.

Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference. Eliezer Yudkowsky illustrates such instrumental convergence as follows: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

This presents the AI control problem: how to build an intelligent agent that will aid its creators while avoiding inadvertently building a superintelligence that will harm its creators. The danger of not designing control right "the first time" is that a superintelligence may be able to seize power over its environment and prevent humans from shutting it down. Potential AI control strategies include "capability control" (limiting an AI's ability to influence the world) and "motivational control" (building an AI whose goals are aligned with human values).

Bill Hibbard advocates for public education about superintelligence and public control over the development of superintelligence.

Noogenesis

From Wikipedia, the free encyclopedia

Noogenesis is the emergence and evolution of intelligence.

Term origin

Noo-, nous (UK: /naʊs/, US: /nuːs/), from the ancient Greek νόος, is a term that currently encompasses the meanings: "mind, intelligence, intellect, reason; wisdom; insight, intuition, thought."

Noogenesis was first mentioned in The Phenomenon of Man, the posthumously published (1955) book by the anthropologist and philosopher Pierre Teilhard de Chardin, in a few places:

"With and within the crisis of reflection, the next term in the series manifests itself. Psychogenesis has led to man. Now it effaces itself, relieved or absorbed by another and a higher function—the engendering and subsequent development of the mind, in one word noogenesis. When for the first time in a living creature instinct perceived itself in its own mirror, the whole world took a pace forward." "There is only one way in which our minds can integrate into a coherent picture of noogenesis these two essential properties of the autonomous centre of all centres, and that is to resume and complement our Principle of Emergence." "The idea is that of noogenesis ascending irreversibly towards Omega through the strictly limited cycle of a geogenesis." "To make room for thought in the world, I have needed to ' interiorise ' matter : to imagine an energetics of the mind; to conceive a noogenesis rising upstream against the flow of entropy; to provide evolution with a direction, a line of advance and critical points..." —"Omega point".

The lack of any clear definition of the term has led to a variety of interpretations reflected in the book, including "the contemporary period of evolution on Earth, signified by transformation of biosphere onto the sphere of intelligence—noosphere" and "evolution run by human mind", etc. The most widespread interpretation is thought to be "the emergence of mind, which follows geogenesis, biogenesis and anthropogenesis, forming a new sphere on Earth—the noosphere".

Recent developments

Noogenesis: the evolution of reaction rates. In unicellular organisms: the rate of movement of ions through the membrane ~10^−10 m/s, of water through the membrane ~10^−6 m/s, of intracellular liquid (cytoplasm) ~2×10^−5 m/s. Inside a multicellular organism: the speed of blood through the vessels ~0.05 m/s, of impulses along nerve fibers ~100 m/s. In populations (humanity): communications by sound (voice and audio) ~300 km/h, by quantum-electronic means ~3×10^8 m/s (the speed of radio-electromagnetic waves, electric current, light, optical and tele-communications).
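
To make the span of these rates concrete, the following Python sketch (illustrative arithmetic only, using the values quoted above) expresses each rate in orders of magnitude relative to ionic transport through a membrane.

    # Orders-of-magnitude comparison of the transport and signal speeds
    # quoted above (illustrative; 300 km/h is converted to m/s).
    import math

    speeds_m_per_s = {
        "ions through a membrane":       1e-10,
        "water through a membrane":      1e-6,
        "intracellular liquid":          2e-5,
        "blood through vessels":         0.05,
        "nerve impulses":                100.0,
        "sound (voice, ~300 km/h)":      300 / 3.6,
        "electromagnetic communication": 3e8,
    }

    base = speeds_m_per_s["ions through a membrane"]
    for name, speed in speeds_m_per_s.items():
        print(f"{name:32s} ~10^{math.log10(speed / base):4.1f} x baseline")

The full span, from ionic transport to electromagnetic communication, covers roughly eighteen orders of magnitude.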

Modern understanding

In the 2005 monograph Noogenesis and Theory of Intellect, Alexei Eryomin proposed a new concept of noogenesis for understanding the evolution of intellectual systems. It introduces the concepts of intellectual systems, information logistics, information speed, intellectual energy and intellectual potential, consolidated into a theory of the intellect that combines, in a single formula, the biophysical parameters of intellectual energy: the amount of information, its acceleration (frequency, speed) and the distance over which it is sent.

The new concept proposes a hypothesis of continuing, predictable, progressive evolution of the species Homo sapiens, drawing an analogy between the human brain, with its enormous number of neural cells firing at the same time, and a similarly functioning human society.

Iteration of the number of components in intellectual systems: A – number of neurons in the brain during individual development (ontogenesis); B – number of people (evolution of populations of humanity); C – number of neurons in the nervous systems of organisms during evolution (phylogenesis).

Emergence and evolution of info-interactions within populations of humanity: A – world human population → 7 billion; B – number of literate persons; C – number of books read (since the beginning of printing); D – number of receivers (radio, TV); E – number of phones, computers, Internet users.

A new understanding of the term "noogenesis" as the evolution of the intellect was proposed by A. Eryomin. His hypothesis, based on recapitulation theory, links the evolution of the human brain to the development of human civilization. The parallel between the number of people living on Earth and the number of neurons in a brain becomes more and more obvious, leading to a view of global intelligence as an analogue of the human brain. All of the people living on this planet have undoubtedly inherited the amazing cultural treasures of the past, whether productive, social or intellectual. We are genetically hardwired to be a sort of "live RAM" of the global intellectual system. Alexei Eryomin suggests that humanity is moving towards a unified, self-contained informational and intellectual system. His research suggests the probability of a superintellect realizing itself as a global intelligence on Earth. We could get closer to understanding the most profound patterns and laws of the Universe if these kinds of research were given enough attention. Also, the resemblance between individual human development and that of the whole human race must be explored further if we are to face some of the threats of the future.

Therefore, generalizing and summarizing:

"Noogenesis—the expansion process in space and development in time (evolution) of intelligent systems (intelligent matter). Noogenesis represents a set of natural, interconnected, characterized by a certain temporal sequence of structural and functional transformations of the entire hierarchy and set of interacting among themselves on the basic structures and processes ranging from the formation and separation of the rational system to the present (the phylogenesis of the nervous systems of organisms; the evolution of humanity as autonomous intelligent systems) or death (in the course of ontogenesis of the human brain)".

Interdisciplinary nature

The term "noogenesis" can be used in a variety of fields i.e. medicine, biophysics, semiotics, mathematics, geology, information technology, psychology, theory of global evolution etc. thus making it a truly cross-disciplinary one. In astrobiology noogenesis concerns the origin of intelligent life and more specifically technological civilizations capable of communicating with humans and or traveling to Earth. The lack of evidence for the existence of such extraterrestrial life creates the Fermi paradox.

Aspects of emergence and evolution of mind

To the parameters of the phenomenon "noo", "intellectus"

The emergence of the human mind is considered to be one of the five fundamental phenomena of emergent evolution. To understand the mind, it is necessary to determine how human thinking differs from that of other thinking beings. Such differences include the ability to generate calculations, to combine dissimilar concepts, to use mental symbols, and to think abstractly. Knowledge of the phenomenon of intelligent systems, the emergence of reason (noogenesis), comes down to the following:

Several published works which do not employ the term "noogenesis" nevertheless address some patterns in the emergence and functioning of human intelligence: a working memory capacity of ≥ 7 items, the ability to predict and prognose, a hierarchical (six-layered neuronal) system of information analysis, consciousness, memory, the properties of generated and consumed information, etc. They also set the limits of several physiological aspects of human intelligence, and of the conception of the emergence of insight.

Aspects of evolution "sapiens"

The historical evolutionary development and emergence of H. sapiens as a species involves such concepts as anthropogenesis, phylogenesis, morphogenesis, cephalization, systemogenesis, and the autonomy of cognition systems.

On the other hand, the development of an individual's intellect deals with the concepts of embryogenesis, ontogenesis, morphogenesis, neurogenesis, I. P. Pavlov's higher nervous activity, and the philosophy of mind. Although morphofunctional maturity is usually reached by the age of 13, the definitive functioning of the brain structures is not complete until about 16–17 years of age.

New manifestations of humanity intelligence

The joint, globally coordinated intelligent activity of people, of mankind as an autonomous system, led in the second half of the 20th century to acts reflecting the unity of humanity, which in some cases reacts as an autonomous system. Examples of such unity are the founding of the UN and its specialized agencies, the victory over smallpox through vaccination, the peaceful use of atomic energy, access to space, bans on nuclear and bacteriological testing, and the arrangement of satellite television. Already in the 21st century come responses to global warming, contractual balancing of hydrocarbon production, the overcoming of economic crises, mega-projects for joint space observation, study of the nanoworld and nuclear research, and the ambitions for the study of the brain and the creation of universal artificial intelligence indicated in national and international strategies. With a new challenge to humanity, the COVID-19 pandemic in a hyperinformational society, the problem has been framed as a choice between "infopandemic or noogenesis?" and "the rise of a global collective intelligence".

The future of intelligence

The fields of bioinformatics, genetic engineering, noopharmacology, cognitive load management, brain stimulation, the efficient use of altered states of consciousness, the use of non-human cognition, information technology (IT), and artificial intelligence (AI) are all believed to be effective methods of intelligence advancement, and may shape the future of intelligence on Earth and in the galaxy.

Issues and further research prospects

The development of the human brain, perception, cognition, memory and neuroplasticity are unsolved problems in neuroscience. Several megaprojects, among them the Blue Brain Project, the Allen Brain Atlas, the Human Connectome Project and Google Brain, are being carried out in an attempt to better our understanding of the brain's functionality, along with the intention of enhancing human cognitive performance in the future with artificial intelligence and informational, communication and cognitive technology. The International Brain Initiative currently integrates national-level brain research initiatives (the American BRAIN Initiative, the European Human Brain Project, the China Brain Project, Japan's Brain/MINDS, the Canadian Brain Research Strategy, the Australian Brain Alliance and the Korea Brain Initiative), with the goal of supporting an interface between countries that enables synergistic interactions among interdisciplinary approaches arising from the latest research in neuroscience and brain-inspired artificial intelligence. According to the Russian national strategy, fundamental scientific research should be aimed at creating universal artificial intelligence.

Evolution of the brain

From Wikipedia, the free encyclopedia

The principles that govern the evolution of brain structure are not well understood. Brain size scales allometrically with body size: small-bodied mammals have relatively large brains compared to their bodies, whereas large mammals (such as whales) have smaller brain-to-body ratios. If brain weight is plotted against body weight for primates, the regression line of the sample points can indicate the relative brain power of a primate species. Lemurs, for example, fall below this line, which means that for a primate of equivalent size we would expect a larger brain. Humans lie well above the line, indicating that humans are more encephalized than lemurs. In fact, humans are more encephalized than all other primates.
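
One common way to quantify a species' position relative to such a regression line is the encephalization quotient (EQ): observed brain weight divided by the brain weight expected for that body weight. The sketch below uses Jerison's classic mammal-wide allometry, E = 0.12 × P^(2/3) with weights in grams, rather than the primate-specific regression described above, and the species weights are rounded illustrative values, not figures from the source.

    # Illustrative encephalization quotients (EQ) using Jerison's classic
    # mammal-wide allometry: expected brain weight E = 0.12 * P**(2/3),
    # with brain weight E and body weight P in grams. The species weights
    # below are rounded, illustrative values, not figures from the source.
    def expected_brain_g(body_g: float) -> float:
        return 0.12 * body_g ** (2 / 3)

    def eq(brain_g: float, body_g: float) -> float:
        # EQ > 1: brain larger than expected for body size; EQ < 1: smaller.
        return brain_g / expected_brain_g(body_g)

    for species, brain_g, body_g in [
        ("human",             1350.0, 65_000.0),
        ("chimpanzee",         400.0, 45_000.0),
        ("ring-tailed lemur",   25.0,  2_200.0),
    ]:
        print(f"{species:18s} EQ ~ {eq(brain_g, body_g):.1f}")

On this mammal-wide scale the human EQ comes out around 7, the chimpanzee around 2.6 and the lemur around 1.2; against a primate-only regression the lemur would fall below 1, as noted above.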

Early history of brain development

One approach to understanding overall brain evolution is to use a paleoarchaeological timeline to trace the necessity for ever-increasing complexity in structures that allow for chemical and electrical signaling. Because brains and other soft tissues do not fossilize as readily as mineralized tissues, scientists often look to other structures as evidence in the fossil record to get an understanding of brain evolution. This, however, leads to a dilemma, as organisms with more complex nervous systems and protective bone or other protective tissues that readily fossilize appear in the fossil record before evidence of chemical and electrical signaling. Recent evidence has shown that the ability to transmit electrical and chemical signals existed even before more complex multicellular lifeforms.

Fossilization of brains and other soft tissue is nevertheless possible, and scientists can infer that the first brain structures appeared at least 521 million years ago, with fossil brain tissue present at sites of exceptional preservation.

Another approach to understanding brain evolution is to look at extant organisms that do not possess complex nervous systems, comparing anatomical features that allow for chemical or electrical messaging. For example, choanoflagellates are organisms that possess various membrane channels that are crucial to electrical signaling. The membrane channels of choanoflagellates are homologous to those found in animal cells, which is supported by the evolutionary connection between early choanoflagellates and the ancestors of animals. Another example of an extant organism with the capacity to transmit electrical signals is the glass sponge, a multicellular organism that is capable of propagating electrical impulses without the presence of a nervous system.

Before the evolutionary development of the brain, nerve nets, the simplest form of nervous system, developed. These nerve nets were a sort of precursor to the more evolutionarily advanced brains. They were first observed in Cnidaria and consist of a number of spread-out neurons that allow the organism to respond to physical contact. They can rudimentarily detect food and other chemicals, but these nerve nets do not allow the organism to detect the source of the stimulus.

Ctenophores also demonstrate this crude precursor to a brain or centralized nervous system; however, they phylogenetically diverged before the phyla Porifera and Cnidaria. There are two current theories on the emergence of nerve nets. One theory is that nerve nets may have developed independently in Ctenophores and Cnidarians. The other theory states that a common ancestor may have developed nerve nets, but that they were lost in Porifera.

A study of brain evolution in mice, chickens, monkeys and apes concluded that more recently evolved species tend to preserve the structures responsible for basic behaviors. A long-term human study comparing the modern human brain to more primitive brains found that the modern human brain contains the primitive hindbrain region, what most neuroscientists call the protoreptilian brain. The purpose of this part of the brain is to sustain fundamental homeostatic functions; the pons and medulla are major structures found there. A new region of the brain developed in mammals about 250 million years after the appearance of the hindbrain. This region is known as the paleomammalian brain, the major parts of which are the hippocampi and amygdalas, often referred to as the limbic system. The limbic system deals with more complex functions, including emotional, sexual and fighting behaviors. Of course, animals that are not vertebrates also have brains, and their brains have undergone separate evolutionary histories.

The brainstem and limbic system are largely based on nuclei, which are essentially balled-up clusters of tightly packed neurons and the axon fibers that connect them to each other, as well as to neurons in other locations. The other two major brain areas (the cerebrum and cerebellum) are based on a cortical architecture. At the outer periphery of the cortex, the neurons are arranged into layers (the number of which varies according to species and function) a few millimeters thick. There are axons that travel between the layers, but the majority of axon mass lies below the neurons themselves. Since cortical neurons and most of their axon fiber tracts do not have to compete for space, cortical structures can scale more easily than nuclear ones. A key feature of cortex is that, because it scales with surface area, more of it can be fit inside a skull by introducing convolutions, in much the same way that a dinner napkin can be stuffed into a glass by wadding it up. The degree of convolution is generally greater in species with more complex behavior, which benefits from the increased surface area.
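
The napkin analogy can be put into numbers with the gyrification index, the ratio of the total cortical surface to the smooth "wrapping" surface of the brain. The figures in this minimal sketch are rounded, assumed values for a human brain, used only for illustration.

    # Illustrative gyrification index (GI): total cortical surface divided
    # by the exposed "wrapping" surface. Both areas are rounded, assumed
    # values for a human brain, used only for illustration.
    total_cortex_cm2 = 2500.0   # assumed total area of the cortical sheet
    exposed_hull_cm2 = 1000.0   # assumed smooth outer surface area

    gi = total_cortex_cm2 / exposed_hull_cm2
    print(f"gyrification index ~ {gi:.1f}")  # ~2.5; a perfectly smooth brain scores 1.0

A more convoluted cortex raises this ratio, which is why species with more complex behavior generally show a higher degree of convolution.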

The cerebellum, or "little brain," is behind the brainstem and below the occipital lobe of the cerebrum in humans. Its purposes include the coordination of fine sensorimotor tasks, and it may be involved in some cognitive functions, such as language. Human cerebellar cortex is finely convoluted, much more so than cerebral cortex. Its interior axon fiber tracts are called the arbor vitae, or Tree of Life.

The area of the brain with the greatest amount of recent evolutionary change is called the neocortex. In reptiles and fish, this area is called the pallium and is smaller and simpler relative to body mass than what is found in mammals. According to research, the cerebrum first developed about 200 million years ago. It is responsible for higher cognitive functions, for example language, thinking, and related forms of information processing. It is also responsible for processing sensory input (together with the thalamus, a part of the limbic system that acts as an information router). Most of its function is subconscious, that is, not available for inspection or intervention by the conscious mind. The neocortex is an elaboration, or outgrowth, of structures in the limbic system, with which it is tightly integrated.

Role of embryology in the evolution of the brain

In addition to studying the fossil record, evolutionary history can be investigated via embryology. An embryo is an unborn or unhatched animal, and evolutionary history can be studied by observing how processes in embryonic development are conserved (or not conserved) across species. Similarities between different species may indicate an evolutionary connection. One way anthropologists study evolutionary connections between species is by observing orthologs. Orthologs are homologous genes in two or more species that are related by linear descent from a common ancestral gene.

Bone morphogenetic protein (BMP), a growth factor that plays a significant role in embryonic neural development, is highly conserved amongst vertebrates, as is sonic hedgehog (SHH), a morphogen that inhibits BMP to allow neural crest development.

Randomizing access and scaling brains up

Some animal phyla have gone through major brain enlargement through evolution (e.g. vertebrates and cephalopods both contain many lineages in which brains have grown through evolution), but most animal groups are composed only of species with extremely small brains. Some scientists argue that this difference is due to vertebrate and cephalopod neurons having evolved ways of communicating that overcome the scalability problem of neural networks, while most animal groups have not. They argue that traditional neural networks fail to improve their function when they scale up because filtering based on previously known probabilities causes self-fulfilling-prophecy-like biases that create false statistical evidence, giving a completely false worldview. Randomized access, they claim, can overcome this problem, allowing brains to be scaled up to more discriminating conditioned reflexes and, at certain thresholds, to new worldview-forming abilities. This is explained by randomization allowing the entire brain to eventually gain access to all information over the course of many shifts, even though instant privileged access is physically impossible. They cite the fact that vertebrate neurons transmit virus-like capsules containing RNA that are sometimes read in the neuron to which they are transmitted and sometimes passed on unread, which creates randomized access, and that cephalopod neurons make different proteins from the same gene, suggesting another mechanism for randomization of concentrated information in neurons, both of which make it evolutionarily worthwhile to scale up brains.

Brain re-arrangement

With the use of in vivo magnetic resonance imaging (MRI) and tissue sampling, different cortical samples from members of each hominoid species were analyzed. In each species, specific areas were either relatively enlarged or shrunken, which can detail neural organization. Differences in the sizes of cortical areas can show specific adaptations, functional specializations, and evolutionary events that changed how the hominoid brain is organized. It was initially predicted that the frontal lobe, a large part of the brain that is generally devoted to behavior and social interaction, accounted for the differences in behavior between hominoids and humans. This theory was discredited by evidence that damage to the frontal lobe in both humans and other hominoids produces atypical social and emotional behavior; this similarity means that the frontal lobe was unlikely to be selected for reorganization. Instead, it is now believed that evolution occurred in other parts of the brain that are strictly associated with certain behaviors. The reorganization that took place is thought to have been more organizational than volumetric: while brain volumes remained relatively the same, the position of surface anatomical landmarks, for example the lunate sulcus, suggests that the brains underwent a neurological reorganization. There is also evidence that the early hominin lineage underwent a quiescent period, which supports the idea of neural reorganization.

Dental fossil records for early humans and hominins show that immature hominins, including australopithecines and members of Homo, have a quiescent period (Bown et al. 1987). A quiescent period is a period in which there are no dental eruptions of adult teeth; during this time the child becomes more accustomed to social structure and the development of culture. This period gives the child an extra advantage over other hominoids, allowing several years to be devoted to developing speech and learning to cooperate within a community. This period is also discussed in relation to encephalization. The discovery that chimpanzees do not have this neutral dental period suggests that the quiescent period occurred very early in hominin evolution. Using models of neurological reorganization, it can be suggested that the cause of this period, dubbed middle childhood, was most likely enhanced foraging ability in varying seasonal environments. Understanding the development of human dentition therefore requires looking at both behavior and biology.

Genetic factors contributing to modern evolution

Bruce Lahn, senior author at the Howard Hughes Medical Institute at the University of Chicago, and colleagues have suggested that specific genes control the size of the human brain. These genes continue to play a role in brain evolution, implying that the brain is continuing to evolve. The study began with the researchers assessing 214 genes that are involved in brain development, obtained from humans, macaques, rats and mice. Lahn and the other researchers noted points in the DNA sequences that caused protein alterations. These DNA changes were then scaled to the evolutionary time over which those changes occurred. The data showed that the genes in the human brain evolved much faster than those of the other species. Once this genomic evidence was acquired, Lahn and his team set out to find the specific gene or genes that allowed for, or even controlled, this rapid evolution. Two genes were found to control the size of the human brain as it develops: Microcephalin and Abnormal Spindle-like Microcephaly (ASPM). The researchers at the University of Chicago determined that, under the pressures of selection, both of these genes showed significant DNA sequence changes. Lahn's earlier studies showed that Microcephalin experienced rapid evolution along the primate lineage that eventually led to the emergence of Homo sapiens; after the emergence of humans, Microcephalin seems to have shown a slower rate of evolution. In contrast, ASPM showed its most rapid evolution later in human evolution, once the divergence between chimpanzees and humans had already occurred.
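
The rate comparison described above amounts to dividing the number of protein-altering changes in each lineage by the evolutionary time over which they accumulated. The short Python sketch below illustrates that arithmetic only; the change counts and lineage times are hypothetical placeholders, not data from Lahn's study.

# Illustration of scaling DNA changes to evolutionary time. All counts
# and times below are made-up placeholders for the arithmetic, not the
# study's data.

lineages = {
    # species: (protein-altering changes, lineage time in millions of years)
    "human":   (24, 6.0),
    "macaque": (18, 25.0),
    "mouse":   (40, 80.0),
}

for species, (changes, myr) in lineages.items():
    rate = changes / myr  # protein-altering changes per million years
    print("%-8s %.2f changes/Myr" % (species, rate))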

Each of the gene sequences went through specific changes that led to the evolution of humans from ancestral relatives. In order to determine these alterations, Lahn and his colleagues used DNA sequences from multiple primates and compared and contrasted those sequences with those of humans. Following this step, the researchers statistically analyzed the key differences between the primate and human DNA to conclude that the differences were due to natural selection. The changes in the DNA sequences of these genes accumulated to bring about the competitive advantage and higher fitness that humans possess relative to other primates. This comparative advantage is coupled with a larger brain size, which ultimately allows the human mind to have a higher level of cognitive awareness.

Evolution of the human brain

One of the prominent ways of tracking the evolution of the human brain is through direct evidence in the form of fossils. The evolutionary history of the human brain shows primarily a gradually bigger brain relative to body size during the evolutionary path from early primates to hominins and finally to Homo sapiens. Because fossilized brain tissue is rare, a more reliable approach is to observe anatomical characteristics of the skull that offer insight into brain characteristics. One such method is to observe the endocranial cast (also referred to as an endocast). Endocasts occur when, during the fossilization process, the brain deteriorates away, leaving a space that is filled by surrounding sedimentary material over time. These casts give an imprint of the lining of the brain cavity, allowing a visualization of what was there. This approach, however, is limited in regard to what information can be gathered. Information gleaned from endocasts is primarily limited to the size of the brain (cranial capacity or endocranial volume), prominent sulci and gyri, and the size of dominant lobes or regions of the brain. While endocasts are extremely helpful in revealing superficial brain anatomy, they cannot reveal brain structure, particularly of deeper brain areas. By determining scaling metrics of cranial capacity as it relates to the total number of neurons present in primates, it is also possible to estimate the number of neurons from fossil evidence.
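
As an illustration of such a scaling estimate, the minimal Python sketch below fits a power law N = a * V**b to two volume/neuron-count pairs quoted later in this section and interpolates a third. The functional form and fitted constants are assumptions made for illustration, not the published primate scaling equations.

import math

# Hedged sketch: estimate neuron count from endocranial volume with an
# assumed power law N = a * V**b, calibrated on two (volume, neurons)
# pairs cited later in this section. Illustrative only.
v1, n1 = 450.0, 32.5e9     # Australopithecus: ~450 cm^3, ~32.5 billion
v2, n2 = 1290.0, 76.0e9    # Homo heidelbergensis: ~1290 cm^3, ~76 billion

b = math.log(n2 / n1) / math.log(v2 / v1)  # fitted exponent (~0.8)
a = n1 / v1 ** b                           # fitted prefactor

def estimated_neurons(volume_cm3):
    """Estimated neuron count for a given endocranial volume."""
    return a * volume_cm3 ** b

# Interpolating for Homo habilis (~600 cm^3) gives ~41 billion neurons,
# close to the ~40 billion estimate quoted below.
print("%.1f billion" % (estimated_neurons(600.0) / 1e9))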

Despite their limitations, endocasts provide a basis for understanding human brain evolution. The trend leading to present-day human brain size indicates a two- to threefold increase in size over the past 3 million years. This can be visualized with current data on hominin evolution, starting with Australopithecus, a group of hominins from which humans are likely descended.

Australopiths lived from 3.85 to 2.95 million years ago with a general cranial capacity near that of the extant chimpanzee, around 300–500 cm3. Considering that the modern human brain averages around 1,352 cm3, this represents a substantial amount of evolved brain mass. Australopiths are estimated to have had a total neuron count of ~30–35 billion.

Progressing along the human ancestral timeline, brain size continued to increase steadily moving into the era of Homo. For example, Homo habilis, which lived 2.4 million to 1.4 million years ago and is argued to be the first Homo species based on a host of characteristics, had a cranial capacity of around 600 cm3. Homo habilis is estimated to have had ~40 billion neurons.

A little closer to the present day, Homo heidelbergensis lived from around 700,000 to 200,000 years ago, had a cranial capacity of around 1290 cm3, and had around 76 billion neurons.

Homo neanderthalensis, living 400,000 to 40,000 years ago, had a cranial capacity comparable to that of modern humans, at around 1500–1600 cm3 on average, with some Neanderthal specimens having even greater cranial capacity. Neanderthals are estimated to have had around 85 billion neurons. The increase in brain size peaked with Neanderthals, possibly due to their larger visual systems.

It is also important to note that measures of brain mass or volume, such as cranial capacity, or even relative brain size (brain mass expressed as a percentage of body mass), are not measures of intelligence, use, or function of regions of the brain. Total neuron count likewise does not indicate a higher ranking in cognitive abilities: elephants have a higher number of total neurons (257 billion) than humans (100 billion). Relative brain size, overall mass, and total number of neurons are only a few of the metrics that help scientists follow the evolutionary trend of an increased brain-to-body ratio through the hominin phylogeny.
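
Collecting the figures from the preceding paragraphs in one place makes the trend easy to check. The short Python sketch below reproduces the two- to threefold volume increase noted earlier; where the text gives a range, a midpoint is assumed for illustration.

# Cranial capacities and neuron estimates quoted above, gathered in one
# table. Midpoints are assumed where the text gives a range.
hominins = [
    # (species, cranial capacity in cm^3, neurons in billions)
    ("Australopithecus",       450, 32.5),
    ("Homo habilis",           600, 40.0),
    ("Homo heidelbergensis",  1290, 76.0),
    ("Homo neanderthalensis", 1550, 85.0),
    ("Homo sapiens",          1352, 100.0),
]

# The two- to threefold increase over ~3 million years noted earlier:
factor = hominins[-1][1] / hominins[0][1]
print("Australopithecus -> H. sapiens volume factor: %.1fx" % factor)  # ~3.0x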

Evolution of the neocortex

In addition to the size of the brain, scientists have observed changes in the folding of the brain, as well as in the thickness of the cortex. The more convoluted the surface of the brain, the greater the surface area of the cortex, which allows for an expansion of the cortex, the most evolutionarily advanced part of the brain. Greater surface area of the brain is linked to higher intelligence, as is a thicker cortex, but there is a trade-off: the thicker the cortex, the more difficult it is for it to fold. In adult humans, a thicker cerebral cortex has been linked to higher intelligence.

The neocortex is the most advanced and most evolutionarily young part of the human brain. It is six layers thick and is only present in mammals. It is especially prominent in humans and is the location of most higher-level functioning and cognitive ability. The six-layered neocortex found in mammals is evolutionarily derived from a three-layer cortex present in all modern reptiles. This three-layer cortex is still conserved in some parts of the human brain, such as the hippocampus, and is believed to have evolved into the neocortex in mammals during the transition between the Triassic and Jurassic periods.

The three layers of this reptilian cortex correlate strongly to the first, fifth and sixth layers of the mammalian neocortex. Across species of mammals, primates have greater neuronal density than rodents of similar brain mass, and this may account for their increased intelligence.
