
Friday, February 28, 2025

Information Age

From Wikipedia, the free encyclopedia
Third Industrial Revolution
1947–present
Location: Worldwide
Key events: invention of the transistor, computer miniaturization, invention of the Internet

A laptop connects to the Internet to display information from Wikipedia; long-distance communication between computer systems is a hallmark of the Information Age

The Information Age is a historical period that began in the mid-20th century. It is characterized by a rapid shift from traditional industries, as established during the Industrial Revolution, to an economy centered on information technology. The onset of the Information Age has been linked to the development of the transistor in 1947, and the optical amplifier in 1957. These technological advances have had a significant impact on the way information is processed and transmitted.

According to the United Nations Public Administration Network, the Information Age was formed by capitalizing on computer miniaturization advances, which led to modernized information systems and internet communications as the driving force of social evolution.

There is ongoing debate concerning whether the Third Industrial Revolution has already ended, and whether the Fourth Industrial Revolution has already begun due to recent breakthroughs in areas such as artificial intelligence and biotechnology. This next transition has been theorized to herald the advent of the Imagination Age, the Internet of Things (IoT), and rapid advancements in machine learning.

History

The digital revolution converted technology from analog format to digital format. By doing this, it became possible to make copies that were identical to the original. In digital communications, for example, repeating hardware was able to amplify the digital signal and pass it on with no loss of information in the signal. Of equal importance to the revolution was the ability to easily move the digital information between media, and to access or distribute it remotely. One turning point of the revolution was the change from analog to digitally recorded music. During the 1980s the digital format of optical compact discs gradually replaced analog formats, such as vinyl records and cassette tapes, as the popular medium of choice.

Previous inventions

Humans have manufactured tools for counting and calculating since ancient times, such as the abacus, astrolabe, equatorium, and mechanical timekeeping devices. More complicated devices started appearing in the 1600s, including the slide rule and mechanical calculators. By the early 1800s, the Industrial Revolution had produced mass-market calculators like the arithmometer and the enabling technology of the punch card. Charles Babbage proposed a mechanical general-purpose computer called the Analytical Engine, but it was never successfully built, and was largely forgotten by the 20th century and unknown to most of the inventors of modern computers.

The Second Industrial Revolution in the last quarter of the 19th century developed useful electrical circuits and the telegraph. In the 1880s, Herman Hollerith developed electromechanical tabulating and calculating devices using punch cards and unit record equipment, which became widespread in business and government.

Meanwhile, various analog computer systems used electrical, mechanical, or hydraulic systems to model problems and calculate answers. These included an 1872 tide-predicting machine, differential analysers, perpetual calendar machines, the Deltar for water management in the Netherlands, network analyzers for electrical systems, and various machines for aiming military guns and bombs. The construction of problem-specific analog computers continued in the late 1940s and beyond, with FERMIAC for neutron transport, Project Cyclone for various military applications, and the Phillips Machine for economic modeling.

Building on the complexity of the Z1 and Z2, German inventor Konrad Zuse used electromechanical systems to complete in 1941 the Z3, the world's first working programmable, fully automatic digital computer. Also during World War II, Allied engineers constructed electromechanical bombes to break German Enigma machine encoding. The base-10 electromechanical Harvard Mark I was completed in 1944, and drew to some degree on Charles Babbage's designs.

1947–1969: Origins

A Pennsylvania state historical marker in Philadelphia cites the creation of ENIAC, the "first all-purpose digital computer", in 1946 as the beginning of the Information Age.

In 1947, the first working transistor, the germanium-based point-contact transistor, was invented by John Bardeen and Walter Houser Brattain while working under William Shockley at Bell Labs. This led the way to more advanced digital computers. From the late 1940s, universities, military, and businesses developed computer systems to digitally replicate and automate previously manually performed mathematical calculations, with the LEO being the first commercially available general-purpose computer.

Digital communication became economical for widespread adoption after the invention of the personal computer in the 1970s. Claude Shannon, a Bell Labs mathematician, is credited with laying the foundations of digitalization in his pioneering 1948 article, A Mathematical Theory of Communication.

In 1948, Bardeen and Brattain patented an insulated-gate transistor (IGFET) with an inversion layer. Their concept forms the basis of CMOS and DRAM technology today. In 1957 at Bell Labs, Frosch and Derick were able to manufacture planar silicon dioxide transistors; later, a team at Bell Labs demonstrated a working MOSFET. The first integrated circuit milestone was achieved by Jack Kilby in 1958.

Other important technological developments included the invention of the monolithic integrated circuit chip by Robert Noyce at Fairchild Semiconductor in 1959, made possible by the planar process developed by Jean Hoerni. In 1963, complementary MOS (CMOS) was developed by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor. The self-aligned gate transistor, which further facilitated mass production, was invented in 1966 by Robert Bower at Hughes Aircraft and independently by Robert Kerwin, Donald Klein and John Sarace at Bell Labs.

In 1962 AT&T deployed the T-carrier for long-haul pulse-code modulation (PCM) digital voice transmission. The T1 format carried 24 pulse-code modulated, time-division multiplexed speech signals each encoded in 64 kbit/s streams, leaving 8 kbit/s of framing information which facilitated the synchronization and demultiplexing at the receiver. Over the subsequent decades the digitisation of voice became the norm for all but the last mile (where analogue continued to be the norm right into the late 1990s).
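As a quick arithmetic check of the T1 format just described, the 24 channels at 64 kbit/s plus the 8 kbit/s of framing add up to the familiar 1.544 Mbit/s T1 line rate. A minimal Python sketch (illustrative only):

```python
# Illustrative check of the T1 line rate described above.
CHANNELS = 24            # PCM voice channels per T1 frame
CHANNEL_RATE_KBPS = 64   # each channel is encoded at 64 kbit/s
FRAMING_KBPS = 8         # framing bits used for synchronization/demultiplexing

payload_kbps = CHANNELS * CHANNEL_RATE_KBPS      # 1536 kbit/s of voice
line_rate_kbps = payload_kbps + FRAMING_KBPS     # 1544 kbit/s total

print(f"T1 payload: {payload_kbps} kbit/s, line rate: {line_rate_kbps} kbit/s")
# -> T1 payload: 1536 kbit/s, line rate: 1544 kbit/s
```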

Following the development of MOS integrated circuit chips in the early 1960s, MOS chips reached higher transistor density and lower manufacturing costs than bipolar integrated circuits by 1964. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip. In 1968, Fairchild engineer Federico Faggin improved MOS technology with his development of the silicon-gate MOS chip, which he later used to develop the Intel 4004, the first single-chip microprocessor. It was released by Intel in 1971, and laid the foundations for the microcomputer revolution that began in the 1970s.

MOS technology also led to the development of semiconductor image sensors suitable for digital cameras. The first such image sensor was the charge-coupled device, developed by Willard S. Boyle and George E. Smith at Bell Labs in 1969, based on MOS capacitor technology.

1969–1989: Invention of the internet, rise of home computers

A visualization of the various routes through a portion of the Internet (created via The Opte Project)

The public was first introduced to the concepts that led to the Internet when a message was sent over the ARPANET in 1969. Packet switched networks such as ARPANET, Mark I, CYCLADES, Merit Network, Tymnet, and Telenet, were developed in the late 1960s and early 1970s using a variety of protocols. The ARPANET in particular led to the development of protocols for internetworking, in which multiple separate networks could be joined into a network of networks.

The Whole Earth movement of the 1960s advocated the use of new technology.

The 1970s saw the introduction of the home computer, time-sharing computers, the video game console, and the first coin-operated video games; the golden age of arcade video games began with Space Invaders. As digital technology proliferated, and the switch from analog to digital record keeping became the new standard in business, a relatively new job description was popularized: the data entry clerk. Culled from the ranks of secretaries and typists from earlier decades, the data entry clerk's job was to convert analog data (customer records, invoices, etc.) into digital data.

In developed nations, computers achieved semi-ubiquity during the 1980s as they made their way into schools, homes, business, and industry. Automated teller machines, industrial robots, CGI in film and television, electronic music, bulletin board systems, and video games all fueled what became the zeitgeist of the 1980s. Millions of people purchased home computers, making household names of early personal computer manufacturers such as Apple, Commodore, and Tandy. To this day the Commodore 64 is often cited as the best-selling computer of all time, having sold 17 million units (by some accounts) between 1982 and 1994.

In 1984, the U.S. Census Bureau began collecting data on computer and Internet use in the United States; their first survey showed that 8.2% of all U.S. households owned a personal computer in 1984, and that households with children under the age of 18 were nearly twice as likely to own one at 15.3% (middle and upper middle class households were the most likely to own one, at 22.9%). By 1989, 15% of all U.S. households owned a computer, and nearly 30% of households with children under the age of 18 owned one. By the late 1980s, many businesses were dependent on computers and digital technology.

Motorola created the first mobile phone, the Motorola DynaTAC, in 1983. However, this device used analog communication; digital cell phones were not sold commercially until 1991, when the 2G network began to open in Finland to accommodate the unexpected demand for cell phones that had become apparent in the late 1980s.

Compute! magazine predicted that CD-ROM would be the centerpiece of the revolution, with multiple household devices reading the discs.

The first true digital camera was created in 1988, and the first models were marketed in December 1989 in Japan and in 1990 in the United States. By the early 2000s, digital cameras had eclipsed traditional film in popularity.

Digital ink and paint was also invented in the late 1980s. Disney's CAPS system (created 1988) was used for a scene in 1989's The Little Mermaid and for all their animation films between 1990's The Rescuers Down Under and 2004's Home on the Range.

1989–2005: Invention of the World Wide Web, mainstreaming of the Internet, Web 1.0

Tim Berners-Lee invented the World Wide Web in 1989. The “Web 1.0 era” ended in 2005, coinciding with the development of further advanced technologies during the start of the 21st century.

The first public digital HDTV broadcast was of the 1990 World Cup that June; it was played in 10 theaters in Spain and Italy. However, HDTV did not become a standard until the mid-2000s outside Japan.

The World Wide Web, which had previously been available only to governments and universities, became publicly accessible in 1991. In 1993 Marc Andreessen and Eric Bina introduced Mosaic, the first web browser capable of displaying inline images and the basis for later browsers such as Netscape Navigator and Internet Explorer. Stanford Federal Credit Union was the first financial institution to offer online internet banking services to all of its members, in October 1994. In 1996 OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. The Internet expanded quickly, and by 1996 it was part of mass culture, with many businesses listing websites in their ads. By 1999, almost every country had a connection, and nearly half of Americans and people in several other countries used the Internet on a regular basis. However, throughout the 1990s "getting online" entailed complicated configuration, and dial-up was the only connection type affordable by individual users; the present-day mass Internet culture was not yet possible.

In 1989, about 15% of all households in the United States owned a personal computer. For households with children, nearly 30% owned a computer in 1989, and in 2000, 65% owned one.

Cell phones became as ubiquitous as computers by the early 2000s, with movie theaters beginning to show ads telling people to silence their phones. They also became much more advanced than phones of the 1990s, most of which only took calls or at most allowed for the playing of simple games.

Text messaging became widely used worldwide in the late 1990s, except in the United States, where it did not become commonplace until the early 2000s.

The digital revolution also became truly global in this period: after revolutionizing society in the developed world in the 1990s, it spread to the masses in the developing world in the 2000s.

By 2000, a majority of U.S. households had at least one personal computer, and a majority had internet access the following year. In 2002, a majority of U.S. survey respondents reported having a mobile phone.

2005–2020: Web 2.0, social media, smartphones, digital TV

In late 2005 the population of the Internet reached 1 billion, and 3 billion people worldwide used cell phones by the end of the decade. HDTV became the standard television broadcasting format in many countries by the end of the decade. In September and December 2006 respectively, Luxembourg and the Netherlands became the first countries to completely transition from analog to digital television. In September 2007, a majority of U.S. survey respondents reported having broadband internet at home. According to estimates from the Nielsen Media Research, approximately 45.7 million U.S. households in 2006 (or approximately 40 percent of approximately 114.4 million) owned a dedicated home video game console, and by 2015, 51 percent of U.S. households owned a dedicated home video game console according to an Entertainment Software Association annual industry report. By 2012, over 2 billion people used the Internet, twice the number using it in 2007. Cloud computing had entered the mainstream by the early 2010s. In January 2013, a majority of U.S. survey respondents reported owning a smartphone. By 2016, half of the world's population was connected and as of 2020, that number has risen to 67%.

Rise in digital technology use

In the late 1980s, less than 1% of the world's technologically stored information was in digital format, while it was 94% in 2007, with more than 99% by 2014.

It is estimated that the world's capacity to store information has increased from 2.6 (optimally compressed) exabytes in 1986, to some 5,000 exabytes in 2014 (5 zettabytes).

Number of cell phone subscribers and internet users
Year Cell phone subscribers (% of world pop.) Internet users (% of world pop.)
1990 12.5 million (0.25%) 2.8 million (0.05%)
2002 1.5 billion (19%) 631 million (11%)
2010 4 billion (68%) 1.8 billion (26.6%)
2020 4.78 billion (62%) 4.54 billion (59%)
2023 6.31 billion (78%) 5.4 billion (67%)
A university computer lab containing many desktop PCs

Overview of early developments

A timeline of major milestones of the Information Age, from the first message sent by the Internet protocol suite to global Internet access

Library expansion and Moore's law

Library expansion was calculated in 1945 by Fremont Rider to double in capacity every 16 years, provided sufficient space was made available. He advocated replacing bulky, decaying printed works with miniaturized microform analog photographs, which could be duplicated on-demand for library patrons and other institutions.

Rider did not foresee, however, the digital technology that would follow decades later to replace analog microform with digital imaging, storage, and transmission media, whereby vast increases in the rapidity of information growth would be made possible through automated, potentially lossless digital technologies. Accordingly, Moore's law, formulated around 1965, predicted that the number of transistors in a dense integrated circuit doubles approximately every two years.
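Rider's 16-year library doubling and Moore's two-year transistor doubling are both instances of the same exponential growth rule, growth = 2^(elapsed time / doubling period). A small illustrative sketch (the time spans chosen are arbitrary examples, not figures from the text):

```python
def growth_factor(years: float, doubling_period_years: float) -> float:
    """Multiplicative growth after `years`, given a fixed doubling period."""
    return 2 ** (years / doubling_period_years)

# Rider (1945): library capacity doubling every 16 years
print(growth_factor(64, 16))   # 16x over 64 years

# Moore's law: transistor count doubling roughly every 2 years
print(growth_factor(20, 2))    # 1024x over two decades
```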

By the early 1980s, along with improvements in computing power, the proliferation of the smaller and less expensive personal computers allowed for immediate access to information and the ability to share and store it. Connectivity between computers within organizations enabled access to greater amounts of information.

Information storage and Kryder's law

Hilbert & López (2011). The World's Technological Capacity to Store, Communicate, and Compute Information. Science, 332(6025), 60–65.

The world's technological capacity to store information grew from 2.6 (optimally compressed) exabytes (EB) in 1986 to 15.8 EB in 1993; over 54.5 EB in 2000; and to 295 (optimally compressed) EB in 2007. This is the informational equivalent to less than one 730-megabyte (MB) CD-ROM per person in 1986 (539 MB per person); roughly four CD-ROM per person in 1993; twelve CD-ROM per person in the year 2000; and almost sixty-one CD-ROM per person in 2007. It is estimated that the world's capacity to store information has reached 5 zettabytes in 2014, the informational equivalent of 4,500 stacks of printed books from the earth to the sun.
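The per-person CD-ROM equivalents above follow from straightforward division. The sketch below reproduces the 2007 figure; the world population value is an assumption (roughly 6.6 billion in 2007), since the text does not state one.

```python
# Rough check of the "almost sixty-one CD-ROMs per person in 2007" figure.
STORAGE_2007_EB = 295          # optimally compressed exabytes (from the text)
WORLD_POP_2007 = 6.6e9         # assumed world population in 2007
CDROM_MB = 730                 # CD-ROM capacity used in the text

bytes_per_person = STORAGE_2007_EB * 1e18 / WORLD_POP_2007
cdroms_per_person = bytes_per_person / (CDROM_MB * 1e6)
print(round(cdroms_per_person))   # ~61
```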

The amount of digital data stored appears to be growing approximately exponentially, reminiscent of Moore's law. Kryder's law, the storage analogue of Moore's law, observes that the amount of storage space available also grows approximately exponentially.

Information transmission

The world's technological capacity to receive information through one-way broadcast networks was 432 exabytes of (optimally compressed) information in 1986; 715 (optimally compressed) exabytes in 1993; 1.2 (optimally compressed) zettabytes in 2000; and 1.9 zettabytes in 2007, the information equivalent of 174 newspapers per person per day.

The world's effective capacity to exchange information through two-way telecommunications networks was 281 petabytes of (optimally compressed) information in 1986; 471 petabytes in 1993; 2.2 (optimally compressed) exabytes in 2000; and 65 (optimally compressed) exabytes in 2007, the information equivalent of six newspapers per person per day. In the 1990s, the spread of the Internet caused a sudden leap in access to and ability to share information in businesses and homes globally. A computer that cost $3000 in 1997 would cost $2000 two years later and $1000 the following year, due to the rapid advancement of technology.

Computation

The world's technological capacity to compute information with human-guided general-purpose computers grew from 3.0 × 10^8 MIPS in 1986, to 4.4 × 10^9 MIPS in 1993; to 2.9 × 10^11 MIPS in 2000; to 6.4 × 10^12 MIPS in 2007. An article featured in the journal Trends in Ecology and Evolution in 2016 reported that:

Digital technology has vastly exceeded the cognitive capacity of any single human being and has done so a decade earlier than predicted. In terms of capacity, there are two measures of importance: the number of operations a system can perform and the amount of information that can be stored. The number of synaptic operations per second in a human brain has been estimated to lie between 10^15 and 10^17. While this number is impressive, even in 2007 humanity's general-purpose computers were capable of performing well over 10^18 instructions per second. Estimates suggest that the storage capacity of an individual human brain is about 10^12 bytes. On a per capita basis, this is matched by current digital storage (5x10^21 bytes per 7.2x10^9 people).
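The per-capita comparison in the quotation can be checked with the numbers it gives: 5 × 10^21 bytes divided among 7.2 × 10^9 people is a few times 10^11 bytes per person, the same order of magnitude as the 10^12-byte estimate for a single brain. A minimal sketch:

```python
# Per-capita digital storage versus the estimated capacity of a human brain,
# using only the figures quoted above.
GLOBAL_STORAGE_BYTES = 5e21
WORLD_POPULATION = 7.2e9
BRAIN_CAPACITY_BYTES = 1e12

per_capita = GLOBAL_STORAGE_BYTES / WORLD_POPULATION
print(f"{per_capita:.1e} bytes per person")                                  # ~6.9e+11 bytes
print(f"ratio to brain estimate: {per_capita / BRAIN_CAPACITY_BYTES:.2f}")   # ~0.69
```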

Genetic information

Genetic code may also be considered part of the information revolution. Now that sequencing has been computerized, the genome can be rendered and manipulated as data. This started with DNA sequencing, invented by Walter Gilbert and Allan Maxam in 1976–1977 and by Frederick Sanger in 1977; grew steadily with the Human Genome Project, initially conceived by Gilbert; and finally reached practical applications of sequencing, such as gene testing, after Myriad Genetics' discovery of the BRCA1 breast cancer gene mutation. Sequence data in GenBank has grown from the 606 genome sequences registered in December 1982 to the 231 million genomes registered in August 2021. An additional 13 trillion incomplete sequences are registered in the Whole Genome Shotgun submission database as of August 2021. The information contained in these registered sequences has doubled every 18 months.

Different stage conceptualizations

During rare times in human history, there have been periods of innovation that have transformed human life. The Neolithic Age, the Scientific Age and the Industrial Age all, ultimately, induced discontinuous and irreversible changes in the economic, social and cultural elements of the daily life of most people. Traditionally, these epochs have taken place over hundreds, or in the case of the Neolithic Revolution, thousands of years, whereas the Information Age swept to all parts of the globe in just a few years, as a result of the rapidly advancing speed of information exchange.

Between 7,000 and 10,000 years ago during the Neolithic period, humans began to domesticate animals, began to farm grains and to replace stone tools with ones made of metal. These innovations allowed nomadic hunter-gatherers to settle down. Villages formed along the Yangtze River in China in 6,500 B.C., the Nile River region of Africa and in Mesopotamia (Iraq) in 6,000 B.C. Cities emerged between 6,000 B.C. and 3,500 B.C. The development of written communication (cuneiform in Sumeria and hieroglyphs in Egypt in 3,500 B.C., writing in Egypt in 2,560 B.C., and in Minoa and China around 1,450 B.C.) enabled ideas to be preserved for extended periods and to spread extensively. In all, Neolithic developments, augmented by writing as an information tool, laid the groundwork for the advent of civilization.

The Scientific Age began in the period between Copernicus's 1543 demonstration that the planets orbit the Sun and Newton's publication of the laws of motion and gravity in Principia in 1687. This age of discovery continued through the 18th century, accelerated by widespread use of the movable type printing press invented by Johannes Gutenberg.

The Industrial Age began in Great Britain in 1760 and continued into the mid-19th century. The invention of machines such as the mechanical textile weaver by Edmund Cartwright, the rotating shaft steam engine by James Watt and the cotton gin by Eli Whitney, along with processes for mass manufacturing, came to serve the needs of a growing global population. The Industrial Age harnessed steam and waterpower to reduce the dependence on animal and human physical labor as the primary means of production. Thus, the core of the Industrial Revolution was the generation and distribution of energy from coal and water to produce steam and, later in the 20th century, electricity.

The Information Age also requires electricity to power the global networks of computers that process and store data. However, what dramatically accelerated the pace of the Information Age's adoption, as compared to previous ages, was the speed by which knowledge could be transferred and pervade the entire human family in a few short decades. This acceleration came about with the adoption of a new form of power. Beginning in 1972, engineers devised ways to harness light to convey data through fiber optic cable. Today, light-based optical networking systems at the heart of telecom networks and the Internet span the globe and carry most of the information traffic to and from users and data storage systems.

Three stages of the Information Age

There are different conceptualizations of the Information Age. Some focus on the evolution of information over the ages, distinguishing between the Primary Information Age and the Secondary Information Age. Information in the Primary Information Age was handled by newspapers, radio and television. The Secondary Information Age was developed by the Internet, satellite televisions and mobile phones. The Tertiary Information Age emerged from the interconnection of the media of the Primary Information Age with the media of the Secondary Information Age, as presently experienced.

Stages of development expressed as Kondratiev waves

Others classify it in terms of the well-established Schumpeterian long waves or Kondratiev waves. Here authors distinguish three different long-term metaparadigms, each with different long waves. The first focused on the transformation of material, including stone, bronze, and iron. The second, often referred to as Industrial Revolution, was dedicated to the transformation of energy, including water, steam, electric, and combustion power. Finally, the most recent metaparadigm aims at transforming information. It started out with the proliferation of communication and stored data and has now entered the age of algorithms, which aims at creating automated processes to convert the existing information into actionable knowledge.

Information in social and economic activities

The main feature of the information revolution is the growing economic, social and technological role of information. Information-related activities did not originate with the Information Revolution. They existed, in one form or the other, in all human societies, and eventually developed into institutions, such as the Platonic Academy, Aristotle's Peripatetic school in the Lyceum, the Musaeum and the Library of Alexandria, or the schools of Babylonian astronomy. The Agricultural Revolution and the Industrial Revolution came about when new informational inputs were produced by individual innovators, or by scientific and technical institutions. During the Information Revolution all these activities are experiencing continuous growth, while other information-oriented activities are emerging.

Information is the central theme of several new sciences, which emerged in the 1940s, including Shannon's (1949) Information Theory and Wiener's (1948) Cybernetics. Wiener stated: "information is information, not matter or energy". This aphorism suggests that information should be considered along with matter and energy as the third constituent part of the Universe; information is carried by matter or by energy. By the 1990s some writers believed that changes implied by the Information Revolution would lead not only to a fiscal crisis for governments but also to the disintegration of all "large structures".

The theory of information revolution

The term information revolution may relate to, or contrast with, such widely used terms as Industrial Revolution and Agricultural Revolution. Note, however, that one may prefer a mentalist to a materialist paradigm. The following fundamental aspects of the theory of information revolution can be given:

  1. The object of economic activities can be conceptualized according to the fundamental distinction between matter, energy, and information. These apply both to the object of each economic activity, as well as within each economic activity or enterprise. For instance, an industry may process matter (e.g. iron) using energy and information (production and process technologies, management, etc.).
  2. Information is a factor of production (along with capital, labor, and land), as well as a product sold in the market, that is, a commodity. As such, it acquires use value and exchange value, and therefore a price.
  3. All products have use value, exchange value, and informational value. The latter can be measured by the information content of the product, in terms of innovation, design, etc.
  4. Industries develop information-generating activities, the so-called Research and Development (R&D) functions.
  5. Enterprises, and society at large, develop the information control and processing functions, in the form of management structures; these are also called "white-collar workers", "bureaucracy", "managerial functions", etc.
  6. Labor can be classified according to the object of labor, into information labor and non-information labor.
  7. Information activities constitute a large, new economic sector, the information sector, along with the traditional primary sector, secondary sector, and tertiary sector, according to the three-sector hypothesis. These should be restated because they are based on the ambiguous definitions made by Colin Clark (1940), who included in the tertiary sector all activities that have not been included in the primary (agriculture, forestry, etc.) and secondary (manufacturing) sectors. The quaternary sector and the quinary sector of the economy attempt to classify these new activities, but their definitions are not based on a clear conceptual scheme, although the latter is considered by some as equivalent to the information sector.
  8. From a strategic point of view, sectors can be defined as information sector, means of production, and means of consumption, thus extending the classical Ricardo-Marx model of the Capitalist mode of production (see Influences on Karl Marx). Marx stressed on many occasions the role of the "intellectual element" in production, but failed to find a place for it in his model.
  9. Innovations are the result of the production of new information, as new products, new methods of production, patents, etc. Diffusion of innovations manifests saturation effects (related term: market saturation), following certain cyclical patterns and creating "economic waves", also referred to as "business cycles". There are various types of waves, such as Kondratiev wave (54 years), Kuznets swing (18 years), Juglar cycle (9 years) and Kitchin (about 4 years, see also Joseph Schumpeter) distinguished by their nature, duration, and, thus, economic impact.
  10. Diffusion of innovations causes structural-sectoral shifts in the economy, which can be smooth or can create crisis and renewal, a process which Joseph Schumpeter called vividly "creative destruction".

From a different perspective, Irving E. Fang (1997) identified six 'Information Revolutions': writing, printing, mass media, entertainment, the 'tool shed' (which we call 'home' now), and the information highway. In this work the term 'information revolution' is used in a narrow sense, to describe trends in communication media.

Measuring and modeling the information revolution

Porat (1976) measured the information sector in the US using the input-output analysis; OECD has included statistics on the information sector in the economic reports of its member countries. Veneris (1984, 1990) explored the theoretical, economic and regional aspects of the informational revolution and developed a systems dynamics simulation computer model.

These works can be seen as following the path originated with the work of Fritz Machlup, who in his 1962 book The Production and Distribution of Knowledge in the United States claimed that the "knowledge industry represented 29% of the US gross national product", which he saw as evidence that the Information Age had begun. He defined knowledge as a commodity and attempted to measure the magnitude of the production and distribution of this commodity within a modern economy. Machlup divided information use into three classes: instrumental, intellectual, and pastime knowledge. He also identified five types of knowledge: practical knowledge; intellectual knowledge, that is, general culture and the satisfying of intellectual curiosity; pastime knowledge, that is, knowledge satisfying non-intellectual curiosity or the desire for light entertainment and emotional stimulation; spiritual or religious knowledge; and unwanted knowledge, accidentally acquired and aimlessly retained.

More recent estimates have reached the following results (a short verification sketch follows the list):

  • the world's technological capacity to receive information through one-way broadcast networks grew at a sustained compound annual growth rate of 7% between 1986 and 2007;
  • the world's technological capacity to store information grew at a sustained compound annual growth rate of 25% between 1986 and 2007;
  • the world's effective capacity to exchange information through two-way telecommunications networks grew at a sustained compound annual growth rate of 30% during the same two decades;
  • the world's technological capacity to compute information with the help of humanly guided general-purpose computers grew at a sustained compound annual growth rate of 61% during the same period.
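These growth rates can be recovered from the endpoint figures given in the earlier sections; for example, the storage and two-way telecommunication rates follow from the 1986 and 2007 values quoted above. A minimal sketch:

```python
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate between two values over `years`."""
    return (end_value / start_value) ** (1 / years) - 1

# Storage: 2.6 EB (1986) -> 295 EB (2007), both optimally compressed
print(f"{cagr(2.6, 295, 2007 - 1986):.1%}")   # ~25%, matching the figure above

# Two-way telecom: 281 PB (1986) -> 65 EB = 65,000 PB (2007)
print(f"{cagr(281, 65_000, 21):.1%}")         # ~30%
```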

Economics

Eventually, information and communication technology (ICT)—i.e. computers, computerized machinery, fiber optics, communication satellites, the Internet, and other ICT tools—became a significant part of the world economy, as the development of optical networking and microcomputers greatly changed many businesses and industries. Nicholas Negroponte captured the essence of these changes in his 1995 book, Being Digital, in which he discusses the similarities and differences between products made of atoms and products made of bits.

Jobs and income distribution

The Information Age has affected the workforce in several ways, such as compelling workers to compete in a global job market. One of the most evident concerns is the replacement of human labor by computers that can do their jobs faster and more effectively, creating a situation in which individuals who perform tasks that can easily be automated are forced to find employment where their labor is not as disposable. This especially creates issues in industrial cities, where solutions typically involve lowering working time, which is often highly resisted. Thus, individuals who lose their jobs may be pressed to move up into more indispensable professions (e.g. engineers, doctors, lawyers, teachers, professors, scientists, executives, journalists, consultants), whose members are able to compete successfully in the world market and receive (relatively) high wages.

Along with automation, jobs traditionally associated with the middle class (e.g. assembly line, data processing, management, and supervision) have also begun to disappear as a result of outsourcing. Unable to compete with those in developing countries, production and service workers in post-industrial (i.e. developed) societies either lose their jobs through outsourcing, accept wage cuts, or settle for low-skill, low-wage service jobs. In the past, the economic fate of individuals would be tied to that of their nation. For example, workers in the United States were once well paid in comparison to those in other countries. With the advent of the Information Age and improvements in communication, this is no longer the case, as workers must now compete in a global job market, whereby wages are less dependent on the success or failure of individual economies.

In effectuating a globalized workforce, the internet has also allowed for increased opportunity in developing countries, making it possible for workers in such places to provide services remotely and therefore compete directly with their counterparts in other nations. This competitive advantage translates into increased opportunities and higher wages.

Automation, productivity, and job gain

The Information Age has affected the workforce in that automation and computerization have resulted in higher productivity coupled with net job loss in manufacturing. In the United States, for example, from January 1972 to August 2010, the number of people employed in manufacturing jobs fell from 17,500,000 to 11,500,000 while manufacturing value rose 270%. Although it initially appeared that job loss in the industrial sector might be partially offset by the rapid growth of jobs in information technology, the recession of March 2001 foreshadowed a sharp drop in the number of jobs in the sector. This pattern of decrease in jobs would continue until 2003, and data has shown that, overall, technology creates more jobs than it destroys even in the short run.

Information-intensive industry

Industry has become more information-intensive while less labor- and capital-intensive. This has important implications for the workforce: workers have become increasingly productive even as the value of their labor decreases. For the system of capitalism itself, the value of labor decreases while the value of capital increases.

In the classical model, investments in human and financial capital are important predictors of the performance of a new venture. However, as demonstrated by Mark Zuckerberg and Facebook, it now seems possible for a group of relatively inexperienced people with limited capital to succeed on a large scale.

Innovations

A visualization of the various routes through a portion of the Internet

The Information Age was enabled by technology developed in the Digital Revolution, which was itself enabled by building on the developments of the Technological Revolution.

Transistors

The onset of the Information Age can be associated with the development of transistor technology. The concept of a field-effect transistor was first theorized by Julius Edgar Lilienfeld in 1925. The first practical transistor was the point-contact transistor, invented by the engineers Walter Houser Brattain and John Bardeen while working for William Shockley at Bell Labs in 1947. This was a breakthrough that laid the foundations for modern technology. Shockley's research team also invented the bipolar junction transistor in 1952. The most widely used type of transistor is the metal–oxide–semiconductor field-effect transistor (MOSFET), invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1960. The complementary MOS (CMOS) fabrication process was developed by Frank Wanlass and Chih-Tang Sah in 1963.

Computers

Before the advent of electronics, mechanical computers, like the Analytical Engine in 1837, were designed to provide routine mathematical calculation and simple decision-making capabilities. Military needs during World War II drove development of the first electronic computers, based on vacuum tubes, including the Z3, the Atanasoff–Berry Computer, Colossus computer, and ENIAC.

The invention of the transistor enabled the era of mainframe computers (1950s–1970s), typified by the IBM 360. These large, room-sized computers provided data calculation and manipulation that was much faster than humanly possible, but were expensive to buy and maintain, so were initially limited to a few scientific institutions, large corporations, and government agencies.

The germanium integrated circuit (IC) was invented by Jack Kilby at Texas Instruments in 1958. The silicon integrated circuit was then invented in 1959 by Robert Noyce at Fairchild Semiconductor, using the planar process developed by Jean Hoerni, who was in turn building on Mohamed Atalla's silicon surface passivation method developed at Bell Labs in 1957. Following the invention of the MOS transistor by Mohamed Atalla and Dawon Kahng at Bell Labs in 1959, the MOS integrated circuit was developed by Fred Heiman and Steven Hofstein at RCA in 1962. The silicon-gate MOS IC was later developed by Federico Faggin at Fairchild Semiconductor in 1968. With the advent of the MOS transistor and the MOS IC, transistor technology rapidly improved, and the ratio of computing power to size increased dramatically, giving direct access to computers to ever smaller groups of people.

The first commercial single-chip microprocessor, the Intel 4004, launched in 1971; it was developed by Federico Faggin, using his silicon-gate MOS IC technology, along with Marcian Hoff, Masatoshi Shima and Stan Mazor.

Along with electronic arcade machines and home video game consoles pioneered by Nolan Bushnell in the 1970s, the development of personal computers like the Commodore PET and Apple II (both in 1977) gave individuals access to the computer. However, data sharing between individual computers was either non-existent or largely manual, at first using punched cards and magnetic tape, and later floppy disks.

Data

The first developments for storing data were initially based on photographs, starting with microphotography in 1851 and then microform in the 1920s, with the ability to store documents on film, making them much more compact. Early information theory and Hamming codes were developed about 1950, but awaited technical innovations in data transmission and storage to be put to full use.

Magnetic-core memory was developed from the research of Frederick W. Viehe in 1947 and An Wang at Harvard University in 1949. With the advent of the MOS transistor, MOS semiconductor memory was developed by John Schmidt at Fairchild Semiconductor in 1964. In 1967, Dawon Kahng and Simon Sze at Bell Labs described how the floating gate of an MOS semiconductor device could be used for the cell of a reprogrammable ROM. Following the invention of flash memory by Fujio Masuoka at Toshiba in 1980, Toshiba commercialized NAND flash memory in 1987.

Copper wire cables transmitting digital data connected computer terminals and peripherals to mainframes, and special message-sharing systems leading to email, were first developed in the 1960s. Independent computer-to-computer networking began with ARPANET in 1969. This expanded to become the Internet (coined in 1974). Access to the Internet improved with the invention of the World Wide Web in 1991. The capacity expansion from dense wave division multiplexing, optical amplification and optical networking in the mid-1990s led to record data transfer rates. By 2018, optical networks routinely delivered 30.4 terabits/s over a fiber optic pair, the data equivalent of 1.2 million simultaneous 4K HD video streams.
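The stream equivalence quoted for 2018 implies a per-stream bandwidth that can be checked with a single division; a small illustrative sketch:

```python
# Implied bandwidth per 4K stream from the 2018 figure above.
FIBER_PAIR_TBPS = 30.4        # terabits per second over one fiber pair
STREAMS = 1_200_000           # simultaneous 4K HD video streams

mbps_per_stream = FIBER_PAIR_TBPS * 1e6 / STREAMS        # Tbit/s -> Mbit/s
print(f"~{mbps_per_stream:.1f} Mbit/s per 4K stream")    # ~25.3 Mbit/s
```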

MOSFET scaling, the rapid miniaturization of MOSFETs at a rate predicted by Moore's law, led to computers becoming smaller and more powerful, to the point where they could be carried. During the 1980s–1990s, laptops were developed as a form of portable computer, and personal digital assistants (PDAs) could be used while standing or walking. Pagers, widely used by the 1980s, were largely replaced by mobile phones beginning in the late 1990s, providing mobile networking features to some computers. Now commonplace, this technology is extended to digital cameras and other wearable devices. Starting in the late 1990s, tablets and then smartphones combined and extended these abilities of computing, mobility, and information sharing. Metal–oxide–semiconductor (MOS) image sensors, which first began appearing in the late 1960s, led to the transition from analog to digital imaging, and from analog to digital cameras, during the 1980s–1990s. The most common image sensors are the charge-coupled device (CCD) sensor and the CMOS (complementary MOS) active-pixel sensor (CMOS sensor).

Electronic paper, which has origins in the 1970s, allows digital information to appear as paper documents.

Personal computers

By 1976, there were several firms racing to introduce the first truly successful commercial personal computers. Three machines, the Apple II, Commodore PET 2001, and TRS-80, were all released in 1977, becoming the most popular by late 1978. Byte magazine later referred to Commodore, Apple, and Tandy as the "1977 Trinity". Also in 1977, Sord Computer Corporation released the Sord M200 Smart Home Computer in Japan.

Apple II

April 1977: Apple II.

Steve Wozniak (known as "Woz"), a regular visitor to Homebrew Computer Club meetings, designed the single-board Apple I computer and first demonstrated it there. With specifications in hand and an order for 100 machines at US$500 each from the Byte Shop, Woz and his friend Steve Jobs founded Apple Computer.

About 200 of the machines sold before the company announced the Apple II as a complete computer. It had color graphics, a full QWERTY keyboard, and internal slots for expansion, which were mounted in a high quality streamlined plastic case. The monitor and I/O devices were sold separately. The original Apple II operating system was only the built-in BASIC interpreter contained in ROM. Apple DOS was added to support the diskette drive; the last version was "Apple DOS 3.3".

Its higher price and lack of floating point BASIC, along with a lack of retail distribution sites, caused it to lag in sales behind the other Trinity machines until 1979, when it surpassed the PET. It was again pushed into 4th place when Atari, Inc. introduced its Atari 8-bit computers.

Despite slow initial sales, the lifetime of the Apple II was about eight years longer than that of other machines, and so it accumulated the highest total sales. By 1985, 2.1 million had been sold, and more than 4 million Apple IIs were shipped by the end of its production in 1993.

Optical networking

Optical communication plays a crucial role in communication networks, providing the transmission backbone for the telecommunications and computer networks that underlie the Internet, the foundation for the Digital Revolution and Information Age.

The two core technologies are the optical fiber and light amplification (the optical amplifier). In 1953, Bram van Heel demonstrated image transmission through bundles of optical fibers with a transparent cladding. The same year, Harold Hopkins and Narinder Singh Kapany at Imperial College succeeded in making image-transmitting bundles with over 10,000 optical fibers, and subsequently achieved image transmission through a 75 cm long bundle which combined several thousand fibers.

Gordon Gould invented the optical amplifier and the laser, and also established the first optical telecommunications company, Optelecom, to design communication systems. The firm was a co-founder of Ciena Corp., the venture that popularized the optical amplifier with the introduction of the first dense wave division multiplexing system. This massive-scale communication technology has emerged as the common basis of all telecommunications networks and, thus, a foundation of the Information Age.

Economy, society, and culture

Manuel Castells captures the significance of the Information Age in The Information Age: Economy, Society and Culture when he writes of our global interdependence and the new relationships between economy, state and society, what he calls "a new society-in-the-making." He cautions that just because humans have dominated the material world, does not mean that the Information Age is the end of history:

"It is in fact, quite the opposite: history is just beginning, if by history we understand the moment when, after millennia of a prehistoric battle with Nature, first to survive, then to conquer it, our species has reached the level of knowledge and social organization that will allow us to live in a predominantly social world. It is the beginning of a new existence, and indeed the beginning of a new age, The Information Age, marked by the autonomy of culture vis-à-vis the material basis of our existence."

Thomas Chatterton Williams wrote about the dangers of anti-intellectualism in the Information Age in a piece for The Atlantic. Although access to information has never been greater, most information is irrelevant or insubstantial. The Information Age's emphasis on speed over expertise contributes to "superficial culture in which even the elite will openly disparage as pointless our main repositories for the very best that has been thought."

Thursday, February 27, 2025

Grid parity

From Wikipedia, the free encyclopedia
Map: grid parity for solar PV systems around the world, showing regions that reached grid parity before 2014, regions that reached it after 2014, regions that reached it only for peak prices, and U.S. states poised to reach grid parity. Source: Deutsche Bank, as of February 2015.

Grid parity (or socket parity) occurs when an alternative energy source can generate power at a levelized cost of electricity (LCOE) that is less than or equal to the price of power from the electricity grid. The term is most commonly used when discussing renewable energy sources, notably solar power and wind power. Grid parity depends on whether the calculation is made from the point of view of a utility or of a retail consumer.

Reaching grid parity is considered to be the point at which an energy source becomes a contender for widespread development without subsidies or government support. It is widely believed that a wholesale shift in generation to these forms of energy will take place when they reach grid parity.

Germany was one of the first countries to reach parity for solar PV in 2011 and 2012 for utility-scale solar and rooftop solar PV, respectively. By January 2014, grid parity for solar PV systems had already been reached in at least nineteen countries.

Wind power reached grid parity in some places in Europe in the mid 2000s, and has continued to reduce in price.

Overview

The price of electricity from the grid is complex. Most power sources in the developed world are generated in industrial scale plants developed by private or public consortia. The company providing the power and the company delivering that power to the customers are often separate entities who enter into a Power Purchase Agreement that sets a fixed rate for all of the power delivered by the plant. On the other end of the wire, the local distribution company (LDC) charges rates that will cover their power purchases from the variety of producers they use.

This relationship is not straightforward; for instance, an LDC may buy large amounts of base load power from a nuclear plant at a low fixed cost and then buy peaking power only as required from natural gas peakers at a much higher cost, perhaps five to six times as much. Depending on their billing policy, this might be billed to the customer at a flat rate combining the two rates the LDC pays, or alternatively based on a time-based pricing policy that tries to more closely match input costs with customer prices.
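As a toy illustration of the flat-rate billing just described, a distributor's blended cost can be computed as a weighted average of its base load and peaking purchase prices. All numbers in the sketch below are hypothetical, chosen only to reflect the roughly five-to-six-fold price gap mentioned above:

```python
# Hypothetical blended-rate calculation for a local distribution company (LDC).
# All prices and shares below are made up for illustration.
BASE_LOAD_PRICE = 0.05     # $/kWh, e.g. a long-term base load contract
PEAKING_PRICE = 0.30       # $/kWh, gas peakers at ~6x the base price
PEAK_SHARE = 0.20          # fraction of energy bought from peakers

blended = (1 - PEAK_SHARE) * BASE_LOAD_PRICE + PEAK_SHARE * PEAKING_PRICE
print(f"blended flat rate: ${blended:.3f}/kWh")   # $0.100/kWh before delivery costs
```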

As a result of these policies, the exact definition of "grid parity" varies not only from location to location, but customer to customer and even hour to hour.

For instance, wind power connects to the grid on the distribution side (as opposed to the customer side). This means it competes with other large forms of industrial-scale power like hydro, nuclear or coal-fired plants, which are generally inexpensive forms of power. Additionally, the generator will be charged by the distribution operator to carry the power to the markets, adding to their levelized costs.

Solar has the advantage of scaling easily from systems as small as a single solar panel placed on the customer's roof. In this case the system has to compete with the post-delivery retail price, which is generally much higher than the wholesale price at the same time.

It is also important to consider changes in grid pricing when determining whether or not a source is at parity. For instance, the introduction of time-of-use pricing and a general increase in power prices in Mexico during 2010 and 2011 suddenly resulted in many forms of renewable energy reaching grid parity. A drop in power prices, as happened in some locations due to the late-2000s recession, can likewise render systems that were formerly at parity no longer economical.

In general terms, fuel prices continue to increase, while renewable energy sources continue to fall in up-front cost. As a result, widespread grid parity for wind and solar was generally predicted for the period between 2015 and 2020.

Solar power

Projection of levelized cost of electricity for solar PV in Europe

Pricing solar

Swanson's law, which states that solar module prices have dropped about 20% for each doubling of installed capacity, defines the "learning rate" of solar photovoltaics.
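Swanson's law can be expressed as a simple learning-curve formula: after n doublings of cumulative installed capacity, module prices fall to (1 − 0.20)^n of their starting value. A minimal sketch, where the starting price and capacity ratio are arbitrary examples rather than figures from the text:

```python
import math

def swanson_price(p0: float, capacity_ratio: float, learning_rate: float = 0.20) -> float:
    """Module price after cumulative capacity grows by `capacity_ratio`,
    assuming the price drops by `learning_rate` per doubling (Swanson's law)."""
    doublings = math.log2(capacity_ratio)
    return p0 * (1 - learning_rate) ** doublings

# Example: starting at a hypothetical $4.00/W, a 100x growth in cumulative
# capacity (~6.6 doublings) implies roughly $0.91/W.
print(f"${swanson_price(4.00, 100):.2f}/W")
```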

Grid parity is most commonly used in the field of solar power, and most specifically when referring to solar photovoltaics (PV). As PV systems do not use fuel and are largely maintenance-free, the levelized cost of electricity (LCOE) is dominated almost entirely by the capital cost of the system. With the assumption that the discount rate will be similar to the inflation rate of grid power, the levelized cost can be calculated by dividing the original capital cost by the total amount of electricity produced over the system's lifetime.

As the LCOE of solar PV is dominated by the capital costs, and the capital costs by the panels, the wholesale prices of PV modules are the main consideration when tracking grid parity. A 2015 study shows price/kWh dropping by 10% per year since 1980, and predicts that solar could contribute 20% of total electricity consumption by 2030, whereas the International Energy Agency predicts 16% by 2050.

The price of electricity from these sources dropped by a factor of about 25 between 1990 and 2010. This rate of price reduction accelerated between late 2009 and mid-2011 due to oversupply; the wholesale cost of solar modules dropped approximately 70%. These pressures have demanded efficiencies throughout the construction chain, so total installed costs have also fallen sharply. Adjusting for inflation, a solar module cost $96 per watt in the mid-1970s. Process improvements and a very large boost in production brought that figure down 99 percent, to 68¢ per watt in February 2016, according to data from Bloomberg New Energy Finance. The downward move in pricing continues. Palo Alto, California signed a wholesale purchase agreement in 2016 that secured solar power for 3.7 cents per kilowatt-hour. And in sunny Qatar, large-scale solar-generated electricity sold in 2020 for just $0.01567 per kWh, cheaper than any form of fossil-based electricity.

The average retail price of solar cells as monitored by the Solarbuzz group fell from $3.50/watt to $2.43/watt over the course of 2011, and a decline to prices below $2.00/watt seems inevitable. Solarbuzz tracks retail prices, which includes a large mark-up over wholesale prices, and systems are commonly installed by firms buying at the wholesale price. For this reason, total installation costs are commonly similar to the retail price of the panels alone. Recent total-systems installation costs are around $2500/kWp in Germany or $3,250 in the UK. As of 2011, the capital cost of PV had fallen well below that of nuclear power and was set to fall further.

Knowing the expected production allows the calculation of the LCOE. Modules are generally warranted for 25 years and suffer only minor degradation during that time, so all that is needed to predict the generation is the local insolation. According to PVWatts, a one-kilowatt system in Matsumoto, Nagano will produce 1,187 kilowatt-hours (kWh) of electricity a year. Over a 25-year lifetime, the system will produce about 29,675 kWh (not accounting for the small effects of system degradation, about 0.25% a year). If this system costs $5,000 to install ($5 per watt), very conservative compared to worldwide prices, the LCOE = 5,000/29,675 ≈ 17 cents per kWh. This is lower than the average Japanese residential rate of ~19.5 cents, which means that, in this simple case which skips the necessary time value of money calculation, PV had reached grid parity for residential users in Japan.
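The Matsumoto calculation above can be reproduced directly. The sketch below uses only the figures quoted in the paragraph (1,187 kWh/year, 25 years, $5,000 installed, ~19.5 ¢/kWh residential rate) and, like the text, ignores degradation and the time value of money:

```python
# Simplified LCOE check for the Matsumoto example above
# (no discounting, no degradation, as in the text).
ANNUAL_OUTPUT_KWH = 1187     # PVWatts estimate for a 1 kW system in Matsumoto
LIFETIME_YEARS = 25
INSTALLED_COST_USD = 5000    # $5/W for a 1 kW system

lifetime_kwh = ANNUAL_OUTPUT_KWH * LIFETIME_YEARS           # 29,675 kWh
lcoe = INSTALLED_COST_USD / lifetime_kwh                    # ~$0.17/kWh
print(f"lifetime output: {lifetime_kwh} kWh, LCOE: {lcoe:.3f} $/kWh")

RESIDENTIAL_RATE = 0.195     # ~19.5 cents/kWh average Japanese residential rate
print("at grid parity" if lcoe <= RESIDENTIAL_RATE else "above grid parity")
```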

Reaching parity

Deciding whether or not PV is at grid parity is more complex than for other sources, due to a side-effect of one of its main advantages. Compared to most sources, like wind turbines or hydro dams, PV can scale successfully to systems as small as one panel or as large as millions. In the case of small systems, they can be installed at the customer's location. In this case the LCOE competes against the retail price of grid power, which includes all upstream additions like transmission fees, taxes, etc. In the example above, grid parity has been reached in Nagano. However, retail prices are generally higher than wholesale prices, so grid parity may not have been reached for the very same system installed on the supply side of the grid.

In order to encompass all of these possibilities, Japan's NEDO defines the grid parity in three phases:

  • 1st phase grid parity: residential grid-connected PV systems
  • 2nd phase grid parity: industrial/transport/commercial sectors
  • 3rd phase grid parity: general power generation

These categories are ranked in terms of the price of power they displace; residential power is more expensive than commercial and wholesale power. Thus, the 1st phase is expected to be reached earlier than the 3rd.

Predictions from the 2006 time-frame expected retail grid parity for solar in the 2016 to 2020 era, but rapid price declines have forced more recent calculations to shorten that time scale dramatically, and suggest that solar has already reached grid parity in a wide variety of locations. The European Photovoltaic Industry Association (EPIA) calculated that PV would reach parity in many European countries by 2020, with costs declining to about half of those of 2010. However, this report was based on the prediction that prices would fall 36 to 51% between 2010 and 2020, a decrease that actually took place during the year the report was authored. The parity line was claimed to have been crossed in Australia in September 2011, and module prices have continued to fall since then.

Stanwell Corporation, an electricity generator owned by the Queensland government, made a loss in 2013 from its 4,000 MW of coal- and gas-fired generation. The company attributed this loss to the expansion of rooftop solar generation, which reduced the price of electricity during the day; on some days the price per MWh (usually A$40–$50) was almost zero. The Australian Government and Bloomberg New Energy Finance forecast that rooftop solar production would rise sixfold between 2014 and 2024.

Rapid uptake

Since the early 2010s, photovoltaics have begun to compete in some places without subsidies. Shi Zhengrong said that, as of 2012, unsubsidised solar power was already competitive with fossil fuels in India, Hawaii, Italy and Spain. As PV system prices declined, it was inevitable that subsidies would end: "Solar power will be able to compete without subsidies against conventional power sources in half the world by 2015." In fact, recent evidence suggests that photovoltaic grid parity has already been reached in parts of the Mediterranean basin, such as Cyprus.

Predictions that a power source becomes self-supporting when parity is reached appear to be coming true. According to many measures, PV is the fastest growing source of power in the world:

For large-scale installations, prices below $1.00/watt are now common. In some locations, PV has reached grid parity, the cost at which it is competitive with coal or gas-fired generation. More generally, it is now evident that, given a carbon price of $50/ton, which would raise the price of coal-fired power by 5c/kWh, solar PV will be cost-competitive in most locations. The declining price of PV has been reflected in rapidly growing installations, totalling about 23 GW in 2011. Although some consolidation is likely in 2012, as firms try to restore profitability, strong growth seems likely to continue for the rest of the decade. Already, by one estimate, total investment in renewables for 2011 exceeded investment in carbon-based electricity generation.
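
As a rough check of the 5c/kWh figure, assuming coal-fired generation emits on the order of 1 kg of CO2 per kWh (an approximate, assumed value):

  # Back-of-the-envelope check (assumed emissions factor, not a sourced figure).
  carbon_price_usd_per_tonne = 50
  coal_co2_tonnes_per_kwh = 0.001                     # assumed ~1 kg CO2 per kWh
  surcharge_usd_per_kwh = carbon_price_usd_per_tonne * coal_co2_tonnes_per_kwh
  print(f"{surcharge_usd_per_kwh * 100:.0f} c/kWh")   # prints 5 c/kWh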

The dramatic price reductions in the PV industry have made a number of other power sources less attractive. Nevertheless, there remained a widespread belief that concentrating solar power (CSP) would become even less expensive than PV, although CSP is suitable only for utility-scale projects and thus has to compete at wholesale prices. One company stated in 2011 that CSP cost $0.12/kWh to produce in Australia, and expected this to drop to $0.06/kWh by 2015 due to improvements in technology and reductions in equipment manufacturing costs. Greentech Media predicted that the LCOE of CSP and PV power would fall to $0.07–$0.12/kWh by 2020 in California.

Wind power

Grid parity also applies to wind power, where it varies according to wind quality and existing distribution infrastructure. ExxonMobil predicted in 2011 that the real cost of wind power would approach parity with natural gas and coal without carbon sequestration, and would be cheaper than natural gas and coal with carbon sequestration, by 2025.

Wind turbines reached grid parity in some areas of Europe in the mid-2000s, and in the US around the same time. Falling prices continue to drive the levelized cost down, and it was suggested that wind had reached general grid parity in Europe in 2010 and would reach the same point in the US around 2016, due to an expected reduction in capital costs of about 12%. Nevertheless, a significant amount of the wind power resource in North America remained above grid parity due to the long transmission distances involved (see also the OpenEI database of costs of electricity by source).

Brain-reading

From Wikipedia, the free encyclopedia

Brain-reading or thought identification uses the responses of multiple voxels in the brain, evoked by a stimulus and detected by fMRI, to decode the original stimulus. Advances in research have made this possible by using human neuroimaging to decode a person's conscious experience based on non-invasive measurements of the individual's brain activity. Brain reading studies differ in the type of decoding employed (i.e. classification, identification or reconstruction), the target (i.e. decoding visual patterns, auditory patterns or cognitive states), and the decoding algorithms used (linear classification, nonlinear classification, direct reconstruction, Bayesian reconstruction, etc.).

In 2024–2025, professor of neuropsychology Barbara Sahakian offered this qualification: "A lot of neuroscientists in the field are very cautious and say we can't talk about reading individuals' minds, and right now that is very true, but we're moving ahead so rapidly, it's not going to be that long before we will be able to tell whether someone's making up a story, or whether someone intended to do a crime with a certain degree of certainty."

Applications

Natural images

Identification of complex natural images is possible using voxels from early visual cortex and the anterior visual areas forward of them (areas V3A, V3B, V4, and the lateral occipital area), together with Bayesian inference. This brain-reading approach uses three components: a structural encoding model that characterizes responses in early visual areas; a semantic encoding model that characterizes responses in anterior visual areas; and a Bayesian prior that describes the distribution of structural and semantic scene statistics.
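
A highly simplified sketch of the Bayesian identification step, assuming the structural and semantic encoding models have already been fitted and used to predict voxel responses for every image in the prior set; the names and the Gaussian-error scoring are illustrative, not the published method:

  # Hypothetical sketch of Bayesian identification over a candidate image set.
  import numpy as np

  def log_score(measured_early, measured_anterior, pred_early, pred_anterior):
      # Gaussian log-likelihood up to a constant: candidates whose predicted
      # voxel responses better match the measurements score higher.
      return (-np.sum((measured_early - pred_early) ** 2)
              - np.sum((measured_anterior - pred_anterior) ** 2))

  def identify(measured_early, measured_anterior, candidates):
      # candidates: list of (image, predicted_early, predicted_anterior) triples,
      # the predictions coming from the structural and semantic encoding models.
      # With a flat prior over the candidate set, the best-scoring candidate is
      # the maximum a posteriori choice.
      scores = [log_score(measured_early, measured_anterior, pe, pa)
                for _, pe, pa in candidates]
      return candidates[int(np.argmax(scores))][0]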

Experimentally, the procedure is for subjects first to view 1,750 black-and-white natural images, which are correlated with voxel activation in their brains. Subjects then view another 120 novel target images, and information from the earlier scans is used to reconstruct them. Natural images used include pictures of a seaside cafe and harbor, performers on a stage, and dense foliage.

In 2008, IBM applied for a patent on extracting mental images of human faces from the human brain. It uses a feedback loop based on brain measurements of the fusiform gyrus, an area of the brain that activates in proportion to the degree of facial recognition.

In 2011, a team led by Shinji Nishimoto used only brain recordings to partially reconstruct what volunteers were seeing. The researchers applied a new model of how information about moving objects is processed in the human brain while volunteers watched clips from several videos. An algorithm searched through thousands of hours of external YouTube video footage (none of the videos were the same as the ones the volunteers watched) to select the clips that were most similar. The authors have uploaded demos comparing the watched and the computer-estimated videos.

In 2017 a face perception study in monkeys reported the reconstruction of human faces by analyzing electrical activity from 205 neurons.

In 2023, image reconstruction from human brain activity obtained via fMRI was reported using Stable Diffusion.

In 2024, a study demonstrated that images imagined in the mind, without visual stimulation, can be reconstructed from fMRI brain signals utilizing machine learning and generative AI technology. Another 2024 study reported the reconstruction of images from EEG.

Lie detector

Brain-reading has been suggested as an alternative to polygraph machines as a form of lie detection. One such alternative is blood-oxygen-level-dependent (BOLD) functional MRI. This technique involves interpreting the local change in the concentration of oxygenated hemoglobin in the brain, although the relationship between this blood flow and neural activity is not yet completely understood. Another technique for finding concealed information is brain fingerprinting, which uses EEG to ascertain whether a person has a specific memory or piece of information by identifying P300 event-related potentials.

A number of concerns have been raised about the accuracy and ethical implications of brain-reading for this purpose. Laboratory studies have found accuracy rates of up to 85%; however, there are concerns about what this means for false-positive results among non-criminal populations: "If the prevalence of 'prevaricators' in the group being examined is low, the test will yield far more false-positive than true-positive results; about one person in five will be incorrectly identified by the test." Ethical problems involved in the use of brain-reading as lie detection include misapplications due to adoption of the technology before its reliability and validity can be properly assessed, misunderstanding of the technology, and privacy concerns due to unprecedented access to individuals' private thoughts. However, it has been noted that polygraph lie detection carries similar concerns about the reliability of the results and violation of privacy.
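
A brief illustration of the base-rate effect behind that warning; the sensitivity, specificity, and prevalence below are assumed values chosen only to show why false positives can outnumber true positives when the prevalence of deception is low:

  # Base-rate illustration with assumed numbers.
  sensitivity = 0.85   # fraction of actual liars correctly flagged (assumed)
  specificity = 0.85   # fraction of truthful subjects correctly cleared (assumed)
  prevalence = 0.05    # assumed fraction of liars in the examined group

  true_positives = prevalence * sensitivity                # 0.0425 of the group
  false_positives = (1 - prevalence) * (1 - specificity)   # 0.1425 of the group
  print(false_positives > true_positives)                  # True: false positives dominate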

Human–machine interfaces

The Emotiv Epoc is one way that users can give commands to devices using only thoughts.

Brain-reading has also been proposed as a method of improving human–machine interfaces, by using EEG to detect relevant brain states of a human. In recent years, there has been a rapid increase in patents for technology involved in reading brainwaves, rising from fewer than 400 in 2009–2012 to 1,600 in 2014. These include proposed ways to control video games via brain waves and "neuro-marketing" to determine someone's thoughts about a new product or advertisement.

Emotiv Systems, an Australian electronics company, has demonstrated a headset that can be trained to recognize a user's thought patterns for different commands. Tan Le demonstrated the headset's ability to manipulate virtual objects on screen, and discussed various future applications for such brain–computer interface devices, from powering wheelchairs to replacing the mouse and keyboard.

Detecting attention

It is possible to track, from fMRI signals, which of two forms of rivalrous binocular illusion a person is subjectively experiencing.

When humans think of an object, such as a screwdriver, many different areas of the brain activate. Marcel Just and his colleague, Tom Mitchell, have used fMRI brain scans to teach a computer to identify the various parts of the brain associated with specific thoughts. This technology also yielded a discovery: similar thoughts in different human brains are surprisingly similar neurologically. To illustrate this, Just and Mitchell used their computer to predict, based on nothing but fMRI data, which of several images a volunteer was thinking about. The computer was 100% accurate, but so far the machine can only distinguish between 10 images.

Detecting thoughts

The category of event which a person freely recalls can be identified from fMRI before they say what they remembered.

On 16 December 2015, a study conducted by Toshimasa Yamazaki at the Kyushu Institute of Technology found that, during a rock-paper-scissors game, a computer was able to determine the choice made by subjects before they moved their hands. An EEG was used to measure activity in Broca's area, detecting the words two seconds before they were uttered.

In 2023, researchers at the University of Texas at Austin trained a non-invasive brain decoder that uses the GPT-1 language model to translate volunteers' brain recordings into text. After lengthy training on each individual volunteer, the decoder usually failed to reconstruct the exact words, but could nevertheless reconstruct meanings close enough that it could, most of the time, identify which point of a given book the subject was listening to.

Detecting language

Statistical analysis of EEG brainwaves has been claimed to allow the recognition of phonemes and, in a 1999 study, of color and visual-shape words at a 60% to 75% level.

On 31 January 2012, Brian Pasley and colleagues at the University of California, Berkeley published a paper in PLoS Biology in which subjects' internal neural processing of auditory information was decoded and reconstructed as sound on a computer by gathering and analyzing electrical signals directly from the subjects' brains. The research team studied the superior temporal gyrus, a region of the brain involved in the higher-order neural processing that makes semantic sense of auditory information. They used a computer model to analyze the parts of the brain that might be involved in neural firing while processing auditory signals. Using this computational model, the scientists were able to identify the brain activity involved in processing auditory information when subjects were presented with recordings of individual words. The model was then used to reconstruct some of the words back into sound based on the subjects' neural processing. The reconstructed sounds were not of good quality, however, and could be recognized only when the audio wave patterns of the reconstructed sound were visually matched with those of the original sound presented to the subjects. Nevertheless, this research marks a step towards more precise identification of neural activity in cognition.

Predicting intentions

Some researchers in 2008 were able to predict, with 60% accuracy, whether a subject was going to push a button with their left or right hand. This is notable, not just because the accuracy is better than chance, but also because the scientists were able to make these predictions up to 10 seconds before the subject acted – well before the subject felt they had decided. This data is even more striking in light of other research suggesting that the decision to move, and possibly the ability to cancel that movement at the last second, may be the results of unconscious processing.

John-Dylan Haynes has also demonstrated that fMRI can be used to identify whether a volunteer is about to add or subtract two numbers in their head.

Predictive processing in the brain

Neural decoding techniques have been used to test theories about the predictive brain, and to investigate how top-down predictions affect brain areas such as the visual cortex. Studies using fMRI decoding techniques have found that predictable sensory events and the expected consequences of our actions are better decoded in visual brain areas, suggesting that prediction 'sharpens' representations in line with expectations.

Virtual environments

It has also been shown that brain-reading can be achieved in a complex virtual environment.

Emotions

Just and Mitchell also claim they are beginning to be able to identify kindness, hypocrisy, and love in the brain.

Security

In 2013, a project led by University of California, Berkeley professor John Chuang published findings on the feasibility of brainwave-based computer authentication as a substitute for passwords. The use of biometrics for computer authentication has improved continually since the 1980s, but this research team was looking for a method faster and less intrusive than today's retina scans, fingerprinting, and voice recognition. The technology chosen was the electroencephalogram (EEG), which measures brainwaves, turning passwords into "pass-thoughts." Using this method, Chuang and his team were able to customize tasks and their authentication thresholds to the point where they could reduce error rates below 1%, significantly better than other recent methods. To better attract users to this new form of security, the team is still researching mental tasks that are enjoyable for the user to perform while having their brainwaves identified. In the future, this method could be as cheap, accessible, and straightforward as thought itself.
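
A minimal sketch of the thresholding idea behind such "pass-thoughts", assuming EEG recordings have already been reduced to feature vectors; the feature extraction, similarity measure, and threshold are all illustrative assumptions, not the project's actual pipeline:

  # Hypothetical "pass-thoughts" check: accept the attempt if the new EEG
  # feature vector is close enough to the user's enrolled template.
  import numpy as np

  def authenticate(sample_features, enrolled_template, threshold=0.5):
      a = sample_features / np.linalg.norm(sample_features)
      b = enrolled_template / np.linalg.norm(enrolled_template)
      distance = 1.0 - float(a @ b)     # cosine distance
      return distance < threshold       # the threshold trades off false accepts/rejects

  rng = np.random.default_rng(1)
  template = rng.standard_normal(64)                  # placeholder enrolled features
  sample = template + 0.1 * rng.standard_normal(64)   # placeholder login attempt
  print(authenticate(sample, template))               # expected True for a close match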

John-Dylan Haynes states that fMRI can also be used to identify recognition in the brain. He provides the example of a criminal being interrogated about whether he recognizes the scene of the crime or murder weapons.

Methods of analysis

Classification

In classification, a pattern of activity across multiple voxels is used to determine the particular class from which the stimulus was drawn. Many studies have classified visual stimuli, but this approach has also been used to classify cognitive states.
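
As an illustration of this approach, a minimal sketch in Python using scikit-learn; the voxel data here are random placeholders, so the cross-validated accuracy is expected to sit near chance:

  # Voxel-pattern classification sketch; random placeholder data, so the
  # cross-validated accuracy should be near chance (about 0.5).
  import numpy as np
  from sklearn.svm import LinearSVC
  from sklearn.model_selection import cross_val_score

  rng = np.random.default_rng(0)
  X = rng.standard_normal((200, 500))   # 200 trials x 500 voxel responses
  y = rng.integers(0, 2, size=200)      # stimulus class label for each trial

  clf = LinearSVC(dual=False)           # a linear classifier, as in many studies
  scores = cross_val_score(clf, X, y, cv=5)
  print(f"cross-validated accuracy: {scores.mean():.2f}")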

Reconstruction

In reconstruction, the aim is to create a literal picture of the image that was presented. Early studies used voxels from early visual cortex areas (V1, V2, and V3) to reconstruct geometric stimuli made up of flickering checkerboard patterns.
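
A hedged sketch of one common variant of this idea, linear reconstruction via regression from voxel responses back to pixel values; the data are synthetic and the model is illustrative, not the specific method used in the early studies:

  # Linear reconstruction sketch on synthetic data: learn a mapping from voxel
  # responses back to 10x10 pixel patterns, then apply it to held-out trials.
  import numpy as np
  from sklearn.linear_model import Ridge

  rng = np.random.default_rng(0)
  n_trials, n_voxels, n_pixels = 300, 400, 10 * 10
  stimuli = rng.integers(0, 2, size=(n_trials, n_pixels)).astype(float)  # checkerboard-like patterns
  weights = 0.1 * rng.standard_normal((n_pixels, n_voxels))
  voxels = stimuli @ weights + 0.05 * rng.standard_normal((n_trials, n_voxels))  # simulated responses

  decoder = Ridge(alpha=1.0).fit(voxels[:250], stimuli[:250])   # train on most trials
  reconstructions = decoder.predict(voxels[250:])               # reconstruct held-out stimuli
  print(reconstructions.shape)                                  # (50, 100): 50 reconstructed 10x10 images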

EEG

EEG has also been used to identify recognition of specific information or memories by the P300 event related potential, which has been dubbed 'brain fingerprinting'.

Accuracy

Brain-reading accuracy is increasing steadily as the quality of the data and the complexity of the decoding algorithms improve. In one recent experiment, it was possible to identify which single image was being seen out of a set of 120. In another, it was possible to correctly identify which of two categories the stimulus came from 90% of the time, and the specific semantic category (out of 23) of the target image 40% of the time.

Limitations

It has been noted that so far brain-reading is limited. "In practice, exact reconstructions are impossible to achieve by any reconstruction algorithm on the basis of brain activity signals acquired by fMRI. This is because all reconstructions will inevitably be limited by inaccuracies in the encoding models and noise in the measured signals. Our results demonstrate that the natural image prior is a powerful (if unconventional) tool for mitigating the effects of these fundamental limitations. A natural image prior with only six million images is sufficient to produce reconstructions that are structurally and semantically similar to a target image."

Ethical issues

With brain scanning technology becoming increasingly accurate, experts predict important debates over how and when it should be used. One potential area of application is criminal law. Haynes states that simply refusing to use brain scans on suspects also prevents the wrongly accused from proving their innocence. US scholars generally believe that involuntary brain reading and involuntary polygraph tests would violate the Fifth Amendment's right against self-incrimination. One perspective is to consider whether brain imaging is like testimony, or instead like DNA, blood, or semen. Paul Root Wolpe, director of the Center for Ethics at Emory University in Atlanta, predicts that this question will be decided by a Supreme Court case.

In other countries outside the United States, thought identification has already been used in criminal law. In 2008 an Indian woman was convicted of murder after an EEG of her brain allegedly revealed that she was familiar with the circumstances surrounding the poisoning of her ex-fiancé. Some neuroscientists and legal scholars doubt the validity of using thought identification as a whole for anything past research on the nature of deception and the brain.

The Economist cautioned people to be "afraid" of the future impact, and some ethicists argue that privacy laws should protect private thoughts. Legal scholar Hank Greely argues that the court systems could benefit from such technology, and neuroethicist Julian Savulescu states that brain data is not fundamentally different from other types of evidence. In Nature, journalist Liam Drew writes about emerging projects to attach brain-reading devices to speech synthesizers or other output devices for the benefit of tetraplegics. Such devices could create concerns of accidentally broadcasting the patient's "inner thoughts" rather than merely conscious speech.

History

MRI scanner that could be used for Thought Identification

Psychologist John-Dylan Haynes made breakthroughs in brain-imaging research in 2006 by using fMRI. This research included new findings on visual object recognition, tracking dynamic mental processes, lie detection, and decoding unconscious processing. The combination of these four discoveries revealed so much information about an individual's thoughts that Haynes termed it "brain reading".

fMRI has allowed research to expand significantly because it can track activity in an individual's brain by measuring the brain's blood flow. It is currently thought to be the best method for measuring brain activity, which is why it has been used in multiple research experiments aimed at improving the understanding of how doctors and psychologists can identify thoughts.

In a 2020 study, AI using implanted electrodes could correctly transcribe a sentence read aloud from a fifty-sentence test set 97% of the time, given 40 minutes of training data per participant.

Future research

Experts are unsure how far thought identification can expand, but Marcel Just believed in 2014 that within three to five years there would be a machine able to read complex thoughts such as 'I hate so-and-so'.

Donald Marks, founder and chief science officer of MMT, is working on playing back individuals' thoughts after they have been recorded.

Researchers at the University of California Berkeley have already been successful in forming, erasing, and reactivating memories in rats. Marks says they are working on applying the same techniques to humans. This discovery could be monumental for war veterans who suffer from PTSD.

Further research is also being done in analyzing brain activity during video games to detect criminals, neuromarketing, and using brain scans in government security checks.

A Captain Science comic panel showing a character using a device to read an alien's brain

The episode "Black Hole" of the American medical drama House, which aired on 15 March 2010, featured an experimental "cognitive imaging" device that supposedly allowed seeing into a patient's subconscious mind. The patient was first put through a six-hour preparation phase while watching video clips, attached to a neuroimaging device resembling electroencephalography or functional near-infrared spectroscopy, to train the neuroimaging classifier. The patient was then put under twilight anesthesia, and the same device was used to try to infer what was going through the patient's mind. The fictional episode somewhat anticipated the study by Nishimoto et al. published the following year, in which fMRI was used instead.

In the movie Dumb and Dumber To, one scene shows a brain reader.

In the Henry Danger episode, "Dream Busters," a machine shows Henry's dream.

Neutron star

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Neutron_star
Central neutron star...