
Friday, February 28, 2025

Shape of the universe

From Wikipedia, the free encyclopedia

Current observational evidence (from WMAP, BOOMERanG, and Planck, for example) implies that the observable universe is spatially flat to within a 0.4% margin of error on the curvature density parameter, with an unknown global topology. It is currently unknown whether the universe is simply connected like Euclidean space or multiply connected like a torus. To date, no compelling evidence has been found suggesting that the topology of the universe is not simply connected, though such a topology has not been ruled out by astronomical observations.

Shape of the observable universe

The universe's structure can be examined from two angles:

  1. Local geometry: This relates to the curvature of the universe, primarily concerning what we can observe.
  2. Global geometry: This pertains to the universe's overall shape and structure.

The observable universe (of a given current observer) is a roughly spherical region extending about 46 billion light-years in all directions from that observer (the observer being on the present-day Earth, unless specified otherwise). It appears older and more redshifted the deeper we look into space. In theory, we could look all the way back to the Big Bang, but in practice we can only see up to the cosmic microwave background (CMB), emitted roughly 370,000 years after the Big Bang, as anything beyond that is opaque. Studies show that the observable universe is isotropic and homogeneous on the largest scales.

If the observable universe encompasses the entire universe, we might determine its structure through observation. However, if the observable universe is smaller, we can only grasp a portion of it, making it impossible to deduce the global geometry through observation. Different mathematical models of the universe's global geometry can be constructed, all consistent with current observations and general relativity. Hence, it is unclear whether the observable universe matches the entire universe or is significantly smaller, though it is generally accepted that the universe is larger than the observable universe.

The universe may be compact in some dimensions and not in others, similar to how a cuboid is longer in one dimension than the others. Scientists test these models by looking for novel implications – phenomena not yet observed but necessary if the model is accurate. For instance, a small closed universe would produce multiple images of the same object in the sky, though not necessarily of the same age. As of 2024, current observational evidence suggests that the observable universe is spatially flat with an unknown global structure.

Curvature of the universe

The curvature is a quantity describing how the geometry of a space differs locally from flat space. The curvature of any locally isotropic space (and hence of a locally isotropic universe) falls into one of the three following cases:

  1. Zero curvature (flat) – a drawn triangle's angles add up to 180° and the Pythagorean theorem holds; such 3-dimensional space is locally modeled by Euclidean space E³.
  2. Positive curvature – a drawn triangle's angles add up to more than 180°; such 3-dimensional space is locally modeled by a region of a 3-sphere S³.
  3. Negative curvature – a drawn triangle's angles add up to less than 180°; such 3-dimensional space is locally modeled by a region of hyperbolic space H³.

Curved geometries are in the domain of non-Euclidean geometry. An example of a positively curved space is the surface of a sphere such as the Earth's. A triangle drawn from the equator to a pole will have at least two angles equal to 90°, which makes the sum of its three angles greater than 180°. An example of a negatively curved surface is the shape of a saddle or mountain pass; a triangle drawn on a saddle surface will have angles summing to less than 180°.
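
The sphere example can be checked numerically with Girard's theorem, which says a spherical triangle's angle sum exceeds 180° by its area divided by R² (in radians). The following is a small sketch (added here, not part of the encyclopedia text), applied to the octant triangle bounded by the equator and two meridians 90° apart:

    import math

    def spherical_angle_sum(area, radius):
        """Girard's theorem: angle sum = 180 degrees + spherical excess,
        where the excess (in radians) equals area / radius**2."""
        return 180.0 + math.degrees(area / radius**2)

    # Octant triangle: equator to pole along two meridians 90 degrees apart;
    # it covers one eighth of the sphere's surface area.
    R = 1.0
    octant_area = 4.0 * math.pi * R**2 / 8.0
    print(spherical_angle_sum(octant_area, R))  # 270.0 degrees: all three angles are 90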

[Figure: The local geometry of the universe is determined by whether the density parameter Ω is greater than, less than, or equal to 1. From top to bottom: a spherical universe with Ω > 1, a hyperbolic universe with Ω < 1, and a flat universe with Ω = 1. These depictions of two-dimensional surfaces are merely easily visualizable analogs of the 3-dimensional structure of (local) space.]
[Figure: Proper-distance spacetime diagram of our flat ΛCDM universe. Particle horizon: green; Hubble radius: blue; event horizon: purple; light cone: orange.]
[Figure: A hyperbolic universe with the same radiation and matter density parameters as ours, but with negative curvature in place of dark energy (ΩΛ → Ωk).]
[Figure: A closed universe without dark energy and with overcritical matter density, which leads to a Big Crunch. Neither the hyperbolic nor the closed example has an event horizon (here the purple curve marks the cosmic antipode).]

General relativity describes how mass and energy bend the curvature of spacetime; this is used to determine what curvature the universe has, via a value called the density parameter, represented by Omega (Ω). The density parameter is the average density of the universe divided by the critical energy density, that is, the mass-energy density needed for a universe to be flat. Put another way (the same classification is sketched in code after the list below):

  • If Ω = 1, the universe is flat.
  • If Ω > 1, there is positive curvature.
  • If Ω < 1, there is negative curvature.
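
As a minimal illustration, here is that sketch (an addition, not part of the encyclopedia text), expressing the trichotomy directly as a function of Ω:

    def classify_geometry(omega, tol=1e-6):
        """Map the density parameter to the sign of spatial curvature."""
        if abs(omega - 1.0) < tol:
            return "flat (zero curvature)"
        if omega > 1.0:
            return "spherical (positive curvature)"
        return "hyperbolic (negative curvature)"

    for omega in (1.0, 1.02, 0.95):
        print(omega, "->", classify_geometry(omega))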

Scientists can experimentally determine Ω, and hence the curvature, in two ways. One is to count all the mass–energy in the universe, take its average density, and divide that average by the critical energy density. Data from the Wilkinson Microwave Anisotropy Probe (WMAP) as well as the Planck spacecraft give values for the three constituents of all the mass–energy in the universe – normal mass (baryonic matter and dark matter), relativistic particles (predominantly photons and neutrinos), and dark energy or the cosmological constant:

Ω_mass = 0.315 ± 0.018
Ω_relativistic = 9.24 × 10⁻⁵
Ω_Λ = 0.6817 ± 0.0018
Ω_total = Ω_mass + Ω_relativistic + Ω_Λ = 1.00 ± 0.02

The critical density itself is measured as ρ_critical = 9.47 × 10⁻²⁷ kg⋅m⁻³. From these values, within experimental error, the universe seems to be spatially flat.
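
The arithmetic behind these figures is easy to reproduce. The sketch below is an added illustration; the Hubble constant H₀ ≈ 71 km/s/Mpc is an assumed WMAP-era value consistent with the quoted ρ_critical, and the critical density follows from ρ_critical = 3H₀²/(8πG):

    import math

    G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
    MPC = 3.086e22    # metres per megaparsec
    H0 = 71e3 / MPC   # assumed WMAP-era Hubble constant, in s^-1

    # Critical density for a flat universe: rho_crit = 3 H0^2 / (8 pi G)
    rho_crit = 3.0 * H0**2 / (8.0 * math.pi * G)
    print(f"rho_crit = {rho_crit:.3g} kg/m^3")  # ~9.5e-27, matching the quoted value

    # Sum of the measured density parameters
    omega_total = 0.315 + 9.24e-5 + 0.6817
    print(f"Omega_total = {omega_total:.4f}")   # ~0.997, i.e. 1.00 within +/-0.02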

Another way to measure Ω is to do so geometrically, by measuring an angle across the observable universe. This can be done by using the CMB and measuring the power spectrum and temperature anisotropy. For instance, one can imagine finding a gas cloud that is not in thermal equilibrium because it is so large that the speed of light cannot propagate the thermal information across it. Knowing this propagation speed, we know the size of the gas cloud; knowing the distance to the gas cloud as well, we have two sides of a triangle and can then determine the angles. Using a method similar to this, the BOOMERanG experiment determined that the sum of the angles is 180° within experimental error, corresponding to Ω_total ≈ 1.00 ± 0.12.

These and other astronomical measurements constrain the spatial curvature to be very close to zero, although they do not constrain its sign. This means that although the local geometries of spacetime are generated by the theory of relativity based on spacetime intervals, we can approximate 3-space by the familiar Euclidean geometry.

The Friedmann–Lemaître–Robertson–Walker (FLRW) model using Friedmann equations is commonly used to model the universe. The FLRW model provides a curvature of the universe based on the mathematics of fluid dynamics, that is, modeling the matter within the universe as a perfect fluid. Although stars and structures of mass can be introduced into an "almost FLRW" model, a strictly FLRW model is used to approximate the local geometry of the observable universe. Another way of saying this is that, if all forms of dark energy are ignored, then the curvature of the universe can be determined by measuring the average density of matter within it, assuming that all matter is evenly distributed (smoothing over the distortions caused by 'dense' objects such as galaxies). This assumption is justified by the observations that, while the universe is "weakly" inhomogeneous and anisotropic (see the large-scale structure of the cosmos), it is on average homogeneous and isotropic when analyzed at a sufficiently large spatial scale.

Global universal structure

Global structure covers the geometry and the topology of the whole universe—both the observable universe and beyond. While the local geometry does not determine the global geometry completely, it does limit the possibilities, particularly for a geometry of constant curvature. The universe is often taken to be a geodesic manifold, free of topological defects; relaxing either of these assumptions complicates the analysis considerably. A global geometry is a local geometry plus a topology. It follows that a topology alone does not give a global geometry: for instance, Euclidean 3-space and hyperbolic 3-space have the same topology but different global geometries.

As stated in the introduction, investigations within the study of the global structure of the universe include:

  • whether the universe is infinite or finite in extent,
  • whether the geometry of the global universe is flat, positively curved, or negatively curved, and,
  • whether the topology is simply connected (for example, like a sphere) or else multiply connected (for example, like a torus).

Infinite or finite

One of the unanswered questions about the universe is whether it is infinite or finite in extent. For intuition, a finite universe has a finite volume that could, in theory, be filled with a finite amount of material, while an infinite universe is unbounded and no numerical volume could possibly fill it. Mathematically, the question of whether the universe is infinite or finite is referred to as boundedness. An infinite universe (unbounded metric space) means that there are points arbitrarily far apart: for any distance d, there are points that are at least a distance d apart. A finite universe is a bounded metric space, where there is some distance d such that all points are within distance d of each other. The smallest such d is called the diameter of the universe, in which case the universe has a well-defined "volume" or "scale".

With or without boundary

Assuming a finite universe, the universe can either have an edge or no edge. Many finite mathematical spaces, e.g., a disc, have an edge or boundary. Spaces that have an edge are difficult to treat, both conceptually and mathematically. Namely, it is difficult to state what would happen at the edge of such a universe. For this reason, spaces that have an edge are typically excluded from consideration.

However, there exist many finite spaces, such as the 3-sphere and 3-torus, that have no edges. Mathematically, these spaces are referred to as being compact without boundary. The term compact means that it is finite in extent ("bounded") and complete. The term "without boundary" means that the space has no edges. Moreover, so that calculus can be applied, the universe is typically assumed to be a differentiable manifold. A mathematical object that possesses all these properties, compact without boundary and differentiable, is termed a closed manifold. The 3-sphere and 3-torus are both closed manifolds.

Observational methods

In the 1990s and early 2000s, empirical methods for determining the global topology using measurements on scales that would show multiple imaging were proposed and applied to cosmological observations.

In the 2000s and 2010s, it was shown that, since the universe is inhomogeneous, as seen in the cosmic web of large-scale structure, acceleration effects measured on local scales in the patterns of the movements of galaxies should, in principle, reveal the global topology of the universe.

Curvature

The curvature of the universe places constraints on the topology. If the spatial geometry is spherical, i.e., possesses positive curvature, the topology is compact. For a flat (zero curvature) or hyperbolic (negative curvature) spatial geometry, the topology can be either compact or infinite. Many textbooks erroneously state that a flat or hyperbolic universe implies an infinite universe; however, the correct statement is that a flat universe that is also simply connected implies an infinite universe. For example, Euclidean space is flat, simply connected, and infinite, but there are tori that are flat, multiply connected, finite, and compact (see flat torus).

In general, local to global theorems in Riemannian geometry relate the local geometry to the global geometry. If the local geometry has constant curvature, the global geometry is very constrained, as described in Thurston geometries.

The latest research shows that even the most powerful future experiments (such as the SKA) will not be able to distinguish between a flat, open, and closed universe if the true value of the cosmological curvature parameter is smaller than 10⁻⁴. If the true value is larger than 10⁻³, we will be able to distinguish between these three models even now.

Final results of the Planck mission, released in 2018, show the cosmological curvature parameter, 1 − Ω = Ω_K = −Kc²/(a²H²), to be 0.0007 ± 0.0019, consistent with a flat universe (positive curvature: K = +1, Ω_K < 0, Ω > 1; negative curvature: K = −1, Ω_K > 0, Ω < 1; zero curvature: K = 0, Ω_K = 0, Ω = 1).
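
For a sense of scale, the relation Ω_K = −Kc²/(a²H²) can be turned into a lower bound on the present curvature radius, R_curv = c/(H₀√|Ω_K|), with the scale factor set to 1 today. The sketch below is an added back-of-the-envelope estimate, assuming a Planck-era H₀ ≈ 67.4 km/s/Mpc and taking the 1σ upper bound on |Ω_K|:

    import math

    C = 2.998e8        # speed of light, m/s
    MPC = 3.086e22     # metres per megaparsec
    H0 = 67.4e3 / MPC  # assumed Planck-era Hubble constant, s^-1

    omega_k, sigma = 0.0007, 0.0019     # Planck 2018: Omega_K = 0.0007 +/- 0.0019
    omega_k_max = abs(omega_k) + sigma  # 1-sigma upper bound on |Omega_K|

    r_curv_m = C / (H0 * math.sqrt(omega_k_max))
    print(f"R_curv > {r_curv_m / MPC / 1e3:.0f} Gpc")  # ~87 Gpc, roughly 20 Hubble radii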

Universe with zero curvature

In a universe with zero curvature, the local geometry is flat. The most familiar such global structure is that of Euclidean space, which is infinite in extent. Flat universes that are finite in extent include the torus and Klein bottle. Moreover, in three dimensions, there are 10 finite closed flat 3-manifolds, of which 6 are orientable and 4 are non-orientable. These are the Bieberbach manifolds. The most familiar is the aforementioned 3-torus universe.

In the absence of dark energy, a flat universe expands forever but at a continually decelerating rate, with expansion asymptotically approaching zero. With dark energy, the expansion rate of the universe initially slows down, due to the effect of gravity, but eventually increases. The ultimate fate of the universe is the same as that of an open universe in the sense that space will continue expanding forever.

A flat universe can have zero total energy.

Universe with positive curvature

A positively curved universe is described by elliptic geometry, and can be thought of as a three-dimensional hypersphere, or some other spherical 3-manifold (such as the Poincaré dodecahedral space), all of which are quotients of the 3-sphere.

Poincaré dodecahedral space is a positively curved space, colloquially described as "soccerball-shaped", as it is the quotient of the 3-sphere by the binary icosahedral group, which is very close to icosahedral symmetry, the symmetry of a soccer ball. This was proposed by Jean-Pierre Luminet and colleagues in 2003 and an optimal orientation on the sky for the model was estimated in 2008.

Universe with negative curvature

A hyperbolic universe, one of a negative spatial curvature, is described by hyperbolic geometry, and can be thought of locally as a three-dimensional analog of an infinitely extended saddle shape. There are a great variety of hyperbolic 3-manifolds, and their classification is not completely understood. Those of finite volume can be understood via the Mostow rigidity theorem. For hyperbolic local geometry, many of the possible three-dimensional spaces are informally called "horn topologies", so called because of the shape of the pseudosphere, a canonical model of hyperbolic geometry. An example is the Picard horn, a negatively curved space, colloquially described as "funnel-shaped".

Curvature: open or closed

When cosmologists speak of the universe as being "open" or "closed", they most commonly are referring to whether the curvature is negative or positive, respectively. These meanings of open and closed are different from the mathematical meaning of open and closed used for sets in topological spaces and for the mathematical meaning of open and closed manifolds, which gives rise to ambiguity and confusion. In mathematics, there are definitions for a closed manifold (i.e., compact without boundary) and open manifold (i.e., one that is not compact and without boundary). A "closed universe" is necessarily a closed manifold. An "open universe" can be either a closed or open manifold. For example, in the Friedmann–Lemaître–Robertson–Walker (FLRW) model, the universe is considered to be without boundaries, in which case "compact universe" could describe a universe that is a closed manifold.

Viewpoint: ‘Only in California could coffee both cause and prevent cancer’

February 28, 2025
geneticliteracyproject.org/2025/02/28/viewpoint-only-in-california-could-coffee-both-cause-and-prevent-cancer/

The following is neither satire nor fiction. California’s insane Proposition 65 list contains a number of so-called carcinogens found in coffee. Yet, multiple epidemiological studies conclude that moderate coffee consumption decreases the chances of developing numerous cancers. Only in California could coffee both promote and reduce cancer.

Welcome to California, the Wild West – geographically and otherwise. Especially when it comes to science.

It’s a weird place indeed, so much so that if a space alien happened to land there (where else?) he/she/it/they/them would be hard-pressed to reconcile the disconnect between the genius of Silicon Valley and the idiocy of Proposition 65. Prop 65, a longtime laughingstock of the state, was approved by California voters in 1986 as the “Safe Drinking Water and Toxic Enforcement Act of 1986.” It is now a parody of itself; it has little to do with safe water and everything to do with cataloging chemicals that may be reproductive toxins or carcinogens, regardless of dose, exposure, and common sense. It has become little more than a boondoggle for unscrupulous lawyers, environmental groups, and the “adhesive sticker industry” – something that will become evident later. The law has gained notoriety as a non-funny joke.

Or I could be wrong. Maybe it is funny after all…

Madness. All of these “deadly” items are carcinogenic, according to Crazyfornia, and must bear Proposition 65 warning stickers. The overarching message? Don’t go to Disneyland. But if you must, don’t bring any spare change, sit on a chair, or drink coffee. And make sure you leave your birdhouse and umbrella stand – items you’d normally never be caught dead without – at home.

[Image: A Proposition 65 warning sticker. Many models are available to suit your sense of style and taste!]

Coffee sure sounds dangerous

Although coffee itself is not listed on the Prop 65 list, the “deadly potion” doesn’t get off scot-free. A bunch of the carcinogenic chemicals found in coffee are on the list. Here are some:

  • Caffeic acid
  • Pyridine
  • Acrylamide
  • Polycyclic aromatic hydrocarbons (PAH)
  • Furan

Why are we not all dead?

To answer this question we have to discuss the difference between risk and hazard (the basis of inclusion on the list). Why should I bother when my esteemed colleague (and fellow long-suffering Yankee fan) Dr. Joe Schwarcz, the head of the McGill University Office for Science and Society, has already done so? [my emphasis]

Any substance that can cause cancer or reproductive problems under some condition is a candidate for being subject to regulation under Proposition 65. The problem is that the law is based on hazard, not risk. Hazard is the innate property of a substance or process to do harm, while risk is a measure of the chance that harm will actually occur.

Dr. Joe Schwarcz, “Should I Be Worried About My Earphones?” (May 2017)

This subtle but critical distinction – risk vs. hazard – is the basis for the massive confusion inflicted upon the general public and why no one knows what to believe. I don’t blame them. For example, the International Agency for Research on Cancer (IARC) uses a hazard-based classification, which results in meritless scares and huge (and also meritless) class action settlements. Any chemical that could possibly cause cancer, regardless of the time and amount of exposure (or whether the animal model in question has any relevance to humans), is regarded as a possible or probable carcinogen. Real life doesn’t count here.

An excellent review of the many faults of a hazard-based approach appeared in a 2016 article in the journal Regulatory Toxicology and Pharmacology: [my emphasis]

Classification schemes for carcinogenicity based solely on hazard-identification such as the IARC monograph process and the UN system adopted in the EU have become outmoded… Categorization in this way places into the same category chemicals and agents with widely differing potencies and modes of action. This is how eating processed meat can fall into the same category as sulfur mustard gas…the unintended downsides of a hazard only approach [include] health scares, unnecessary economic costs, loss of beneficial products, adoption of strategies with greater health costs, and the diversion of public funds into unnecessary research.

Doe et al., Regulatory Toxicology and Pharmacology, Volume 82, December 2016, pages 158–166. https://doi.org/10.1016/j.yrtph.2016.10.014

The annual cost to businesses has soared from $11 million in 2000 to $26 million in 2022. Who benefits? According to an article from the CalChamber advocacy group, it’s little more than a shakedown:

[The] features inherent to Proposition 65 have led to the growth of a multimillion-dollar cottage industry of “citizen enforcers” or “bounty hunters” who enrich themselves by abusing the statute’s warning label requirements as a pretext to file 60-day notices and lawsuits in order to exact settlements from businesses.

Who ends up paying for the settlements? Consumers, of course.

Wake up and smell the crappy science

Like IARC, Proposition 65 is based on hazard, not risk. This is why the list is essentially useless (and probably harmful) in determining whether a particular chemical at a real-world dose will actually be a cancer risk. No wonder the public is scared about the wrong things. No one is worried about coffee, despite trace quantities of carcinogens – chemicals that we are regularly exposed to in other foods. Nor should they be. Numerous epidemiological studies have concluded that people who regularly drink coffee gain multiple benefits. Here are a few (of many) review articles on the health benefits of the magical elixir.

  1. Reduced mortality and chronic disease: Many studies show an association between high coffee consumption and decreased rates of mortality and lower incidences of a number of chronic diseases, for example, type 2 diabetes, Parkinson’s disease, and cardiovascular diseases.
  2. Cancer protection: Protective effects against certain types of cancer, including liver and breast cancer.
  3. Neurodegenerative disease: A lower risk of Parkinson’s and Alzheimer’s.
  4. Cardiovascular health: Moderate coffee consumption is associated with a reduced risk of metabolic syndrome.
  5. Mental health: Coffee consumption has been linked to a lower risk of depression and improved cognitive function.

It’s hard to believe that coffee is bad for you, yet indirectly at least, that’s just what Prop 65 says.

Bottom line

While (perhaps) well-meaning back in 1986, California’s Prop 65 is silly, outmoded, and largely irrelevant (except to those exploiting it for their own sleazy financial gains). This is because, like IARC, the state uses absurd parameters to determine theoretical, not real, cancer potential. How else can you explain that coffee, which is known to reduce certain types of cancers, is chock full of chemicals that appear on the Prop 65 list of carcinogens? Yes, in California coffee does increase and decrease cancer. Make sense? I didn’t think so.

Information Age

From Wikipedia, the free encyclopedia
Also known as: Third Industrial Revolution
Period: 1947–present
Location: Worldwide
Key events: invention of the transistor, computer miniaturization, invention of the Internet

[Figure: A laptop connects to the Internet to display information from Wikipedia; long-distance communication between computer systems is a hallmark of the Information Age]

The Information Age is a historical period that began in the mid-20th century. It is characterized by a rapid shift from traditional industries, as established during the Industrial Revolution, to an economy centered on information technology. The onset of the Information Age has been linked to the development of the transistor in 1947, and the optical amplifier in 1957. These technological advances have had a significant impact on the way information is processed and transmitted.

According to the United Nations Public Administration Network, the Information Age was formed by capitalizing on computer miniaturization advances, which led to modernized information systems and internet communications as the driving force of social evolution.

There is ongoing debate concerning whether the Third Industrial Revolution has already ended and whether the Fourth Industrial Revolution has already begun, due to recent breakthroughs in areas such as artificial intelligence and biotechnology. This next transition has been theorized to herald the advent of the Imagination Age, the Internet of things (IoT), and rapid advancements in machine learning.

History

The digital revolution converted technology from analog format to digital format. By doing this, it became possible to make copies that were identical to the original. In digital communications, for example, repeating hardware was able to amplify the digital signal and pass it on with no loss of information in the signal. Of equal importance to the revolution was the ability to easily move the digital information between media, and to access or distribute it remotely. One turning point of the revolution was the change from analog to digitally recorded music. During the 1980s the digital format of optical compact discs gradually replaced analog formats, such as vinyl records and cassette tapes, as the popular medium of choice.

Previous inventions

Humans have manufactured tools for counting and calculating since ancient times, such as the abacus, astrolabe, equatorium, and mechanical timekeeping devices. More complicated devices started appearing in the 1600s, including the slide rule and mechanical calculators. By the early 1800s, the Industrial Revolution had produced mass-market calculators like the arithmometer and the enabling technology of the punch card. Charles Babbage proposed a mechanical general-purpose computer called the Analytical Engine, but it was never successfully built, and was largely forgotten by the 20th century and unknown to most of the inventors of modern computers.

The Second Industrial Revolution in the last quarter of the 19th century developed useful electrical circuits and the telegraph. In the 1880s, Herman Hollerith developed electromechanical tabulating and calculating devices using punch cards and unit record equipment, which became widespread in business and government.

Meanwhile, various analog computer systems used electrical, mechanical, or hydraulic systems to model problems and calculate answers. These included an 1872 tide-predicting machine, differential analysers, perpetual calendar machines, the Deltar for water management in the Netherlands, network analyzers for electrical systems, and various machines for aiming military guns and bombs. The construction of problem-specific analog computers continued in the late 1940s and beyond, with FERMIAC for neutron transport, Project Cyclone for various military applications, and the Phillips Machine for economic modeling.

Building on the complexity of the Z1 and Z2, German inventor Konrad Zuse used electromechanical systems to complete in 1941 the Z3, the world's first working programmable, fully automatic digital computer. Also during World War II, Allied engineers constructed electromechanical bombes to break German Enigma machine encoding. The base-10 electromechanical Harvard Mark I was completed in 1944, and was to some degree improved with inspiration from Charles Babbage's designs.

1947–1969: Origins

A Pennsylvania state historical marker in Philadelphia cites the creation of ENIAC, the "first all-purpose digital computer", in 1946 as the beginning of the Information Age.

In 1947, the first working transistor, the germanium-based point-contact transistor, was invented by John Bardeen and Walter Houser Brattain while working under William Shockley at Bell Labs. This led the way to more advanced digital computers. From the late 1940s, universities, military, and businesses developed computer systems to digitally replicate and automate previously manually performed mathematical calculations, with the LEO being the first commercially available general-purpose computer.

Digital communication became economical for widespread adoption after the invention of the personal computer in the 1970s. Claude Shannon, a Bell Labs mathematician, is credited for having laid out the foundations of digitalization in his pioneering 1948 article, A Mathematical Theory of Communication.

In 1948, Bardeen and Brattain patented an insulated-gate transistor (IGFET) with an inversion layer. Their concept forms the basis of CMOS and DRAM technology today. In 1957, at Bell Labs, Frosch and Derick were able to manufacture planar silicon dioxide transistors; later, a team at Bell Labs demonstrated a working MOSFET. The first integrated-circuit milestone was achieved by Jack Kilby in 1958.

Other important technological developments included the invention of the monolithic integrated circuit chip by Robert Noyce at Fairchild Semiconductor in 1959, made possible by the planar process developed by Jean Hoerni. In 1963, complementary MOS (CMOS) was developed by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor. The self-aligned gate transistor, which further facilitated mass production, was invented in 1966 by Robert Bower at Hughes Aircraft and independently by Robert Kerwin, Donald Klein and John Sarace at Bell Labs.

In 1962, AT&T deployed the T-carrier for long-haul pulse-code modulation (PCM) digital voice transmission. The T1 format carried 24 pulse-code-modulated, time-division-multiplexed speech signals, each encoded as a 64 kbit/s stream, plus 8 kbit/s of framing information, which facilitated synchronization and demultiplexing at the receiver. Over the subsequent decades the digitisation of voice became the norm for all but the last mile (where analogue continued to be the norm right into the late 1990s).
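
The T1 arithmetic checks out: 24 channels at 64 kbit/s plus 8 kbit/s of framing gives the familiar 1.544 Mbit/s line rate. A quick sketch (added here for illustration):

    CHANNELS = 24      # PCM speech signals per T1 frame
    CHANNEL_KBPS = 64  # kbit/s per channel (8-bit samples at 8 kHz)
    FRAMING_KBPS = 8   # framing overhead for synchronization and demultiplexing

    line_rate = CHANNELS * CHANNEL_KBPS + FRAMING_KBPS
    print(f"T1 line rate: {line_rate} kbit/s")  # 1544 kbit/s = 1.544 Mbit/s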

Following the development of MOS integrated circuit chips in the early 1960s, MOS chips reached higher transistor density and lower manufacturing costs than bipolar integrated circuits by 1964. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip. In 1968, Fairchild engineer Federico Faggin improved MOS technology with his development of the silicon-gate MOS chip, which he later used to develop the Intel 4004, the first single-chip microprocessor. It was released by Intel in 1971, and laid the foundations for the microcomputer revolution that began in the 1970s.

MOS technology also led to the development of semiconductor image sensors suitable for digital cameras. The first such image sensor was the charge-coupled device, developed by Willard S. Boyle and George E. Smith at Bell Labs in 1969, based on MOS capacitor technology.

1969–1989: Invention of the internet, rise of home computers

[Figure: A visualization of the various routes through a portion of the Internet (created via The Opte Project)]

The public was first introduced to the concepts that led to the Internet when a message was sent over the ARPANET in 1969. Packet switched networks such as ARPANET, Mark I, CYCLADES, Merit Network, Tymnet, and Telenet, were developed in the late 1960s and early 1970s using a variety of protocols. The ARPANET in particular led to the development of protocols for internetworking, in which multiple separate networks could be joined into a network of networks.

The Whole Earth movement of the 1960s advocated the use of new technology.

The 1970s saw the introduction of the home computer, time-sharing computers, the video game console, and the first coin-op video games; the golden age of arcade video games began with Space Invaders. As digital technology proliferated, and the switch from analog to digital record keeping became the new standard in business, a relatively new job description was popularized: the data entry clerk. Culled from the ranks of secretaries and typists of earlier decades, the data entry clerk's job was to convert analog data (customer records, invoices, etc.) into digital data.

In developed nations, computers achieved semi-ubiquity during the 1980s as they made their way into schools, homes, business, and industry. Automated teller machines, industrial robots, CGI in film and television, electronic music, bulletin board systems, and video games all fueled what became the zeitgeist of the 1980s. Millions of people purchased home computers, making household names of early personal computer manufacturers such as Apple, Commodore, and Tandy. To this day the Commodore 64 is often cited as the best selling computer of all time, having sold 17 million units (by some accounts) between 1982 and 1994.

In 1984, the U.S. Census Bureau began collecting data on computer and Internet use in the United States; their first survey showed that 8.2% of all U.S. households owned a personal computer in 1984, and that households with children under the age of 18 were nearly twice as likely to own one at 15.3% (middle and upper middle class households were the most likely to own one, at 22.9%). By 1989, 15% of all U.S. households owned a computer, and nearly 30% of households with children under the age of 18 owned one. By the late 1980s, many businesses were dependent on computers and digital technology.

Motorola created the first mobile phone, the Motorola DynaTAC, in 1983. However, this device used analog communication; digital cell phones were not sold commercially until 1991, when the first 2G network opened in Finland to accommodate the unexpected demand for cell phones that became apparent in the late 1980s.

Compute! magazine predicted that CD-ROM would be the centerpiece of the revolution, with multiple household devices reading the discs.

The first true digital camera was created in 1988, and the first ones were marketed in December 1989 in Japan and in 1990 in the United States. By the early 2000s, digital cameras had eclipsed traditional film in popularity.

Digital ink and paint was also invented in the late 1980s. Disney's CAPS system (created 1988) was used for a scene in 1989's The Little Mermaid and for all their animation films between 1990's The Rescuers Down Under and 2004's Home on the Range.

1989–2005: Invention of the World Wide Web, mainstreaming of the Internet, Web 1.0

Tim Berners-Lee invented the World Wide Web in 1989. The “Web 1.0 era” ended in 2005, coinciding with the development of more advanced technologies at the start of the 21st century.

The first public digital HDTV broadcast was of the 1990 World Cup that June; it was played in 10 theaters in Spain and Italy. However, HDTV did not become a standard until the mid-2000s outside Japan.

The World Wide Web, previously available only to governments and universities, became publicly accessible in 1991. In 1993, Marc Andreessen and Eric Bina introduced Mosaic, the first web browser capable of displaying inline images and the basis for later browsers such as Netscape Navigator and Internet Explorer. Stanford Federal Credit Union was the first financial institution to offer online internet banking services to all of its members, in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. The Internet expanded quickly, and by 1996 it was part of mass culture, with many businesses listing websites in their ads. By 1999, almost every country had a connection, and nearly half of Americans and people in several other countries used the Internet on a regular basis. However, throughout the 1990s, "getting online" entailed complicated configuration, and dial-up was the only connection type affordable by individual users; the present-day mass Internet culture was not yet possible.

In 1989, about 15% of all households in the United States owned a personal computer. For households with children, nearly 30% owned a computer in 1989, and in 2000, 65% owned one.

Cell phones became as ubiquitous as computers by the early 2000s, with movie theaters beginning to show ads telling people to silence their phones. They also became much more advanced than phones of the 1990s, most of which only took calls or at most allowed for the playing of simple games.

Text messaging came into widespread use worldwide in the late 1990s, except in the United States, where it didn't become commonplace until the early 2000s.

The digital revolution became truly global in this period as well: after revolutionizing society in the developed world in the 1990s, it spread to the masses in the developing world in the 2000s.

By 2000, a majority of U.S. households had at least one personal computer, and a majority had internet access the following year. In 2002, a majority of U.S. survey respondents reported having a mobile phone.

2005–2020: Web 2.0, social media, smartphones, digital TV

In late 2005 the population of the Internet reached 1 billion, and 3 billion people worldwide used cell phones by the end of the decade. HDTV became the standard television broadcasting format in many countries by the end of the decade. In September and December 2006 respectively, Luxembourg and the Netherlands became the first countries to completely transition from analog to digital television. In September 2007, a majority of U.S. survey respondents reported having broadband internet at home. According to estimates from the Nielsen Media Research, approximately 45.7 million U.S. households in 2006 (or approximately 40 percent of approximately 114.4 million) owned a dedicated home video game console, and by 2015, 51 percent of U.S. households owned a dedicated home video game console according to an Entertainment Software Association annual industry report. By 2012, over 2 billion people used the Internet, twice the number using it in 2007. Cloud computing had entered the mainstream by the early 2010s. In January 2013, a majority of U.S. survey respondents reported owning a smartphone. By 2016, half of the world's population was connected and as of 2020, that number has risen to 67%.

Rise in digital technology use

In the late 1980s, less than 1% of the world's technologically stored information was in digital format, while it was 94% in 2007, with more than 99% by 2014.

It is estimated that the world's capacity to store information has increased from 2.6 (optimally compressed) exabytes in 1986, to some 5,000 exabytes in 2014 (5 zettabytes).

Number of cell phone subscribers and internet users
Year | Cell phone subscribers (% of world pop.) | Internet users (% of world pop.)
1990 | 12.5 million (0.25%) | 2.8 million (0.05%)
2002 | 1.5 billion (19%) | 631 million (11%)
2010 | 4 billion (68%) | 1.8 billion (26.6%)
2020 | 4.78 billion (62%) | 4.54 billion (59%)
2023 | 6.31 billion (78%) | 5.4 billion (67%)

[Figure: A university computer lab containing many desktop PCs]

Overview of early developments

[Figure: A timeline of major milestones of the Information Age, from the first message sent by the Internet protocol suite to global Internet access]

Library expansion and Moore's law

Library expansion was calculated in 1945 by Fremont Rider to double in capacity every 16 years, were sufficient space made available. He advocated replacing bulky, decaying printed works with miniaturized microform analog photographs, which could be duplicated on demand for library patrons and other institutions.

Rider did not foresee, however, the digital technology that would follow decades later to replace analog microform with digital imaging, storage, and transmission media, whereby vast increases in the rapidity of information growth would be made possible through automated, potentially lossless digital technologies. Accordingly, Moore's law, formulated around 1965, would predict that the number of transistors in a dense integrated circuit doubles approximately every two years.
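
Both observations are fixed-doubling-period growth laws of the form size(t) = size₀ · 2^((t − t₀)/T); only the period T differs (16 years for Rider's libraries, about 2 years for Moore's transistors). A small comparison sketch (an addition, using the periods quoted above):

    def growth_factor(years, doubling_period):
        """Multiplicative growth after `years` under a fixed doubling period."""
        return 2.0 ** (years / doubling_period)

    span = 32  # years
    print(f"Library capacity (16-year doubling): x{growth_factor(span, 16):.0f}")  # x4
    print(f"Transistor count (2-year doubling):  x{growth_factor(span, 2):.0f}")   # x65536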

By the early 1980s, along with improvements in computing power, the proliferation of the smaller and less expensive personal computers allowed for immediate access to information and the ability to share and store it. Connectivity between computers within organizations enabled access to greater amounts of information.

Information storage and Kryder's law

Source: Hilbert & López (2011), "The World's Technological Capacity to Store, Communicate, and Compute Information", Science, 332(6025), 60–65.

The world's technological capacity to store information grew from 2.6 (optimally compressed) exabytes (EB) in 1986 to 15.8 EB in 1993; over 54.5 EB in 2000; and to 295 (optimally compressed) EB in 2007. This is the informational equivalent of less than one 730-megabyte (MB) CD-ROM per person in 1986 (539 MB per person); roughly four CD-ROMs per person in 1993; twelve CD-ROMs per person in the year 2000; and almost sixty-one CD-ROMs per person in 2007. It is estimated that the world's capacity to store information reached 5 zettabytes in 2014, the informational equivalent of 4,500 stacks of printed books from the Earth to the Sun.
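
The per-person CD-ROM equivalents follow from dividing total storage by world population. In the sketch below (an added check), the population figures of roughly 4.8 billion (1986) and 6.6 billion (2007) are assumptions back-solved from the quoted 539 MB per person, not values given in the text:

    CDROM_BYTES = 730e6  # one 730-megabyte CD-ROM

    for year, storage_bytes, population in [
        (1986, 2.6e18, 4.8e9),   # assumed world population values
        (2007, 295e18, 6.6e9),
    ]:
        per_capita = storage_bytes / population
        print(f"{year}: {per_capita / 1e6:.0f} MB/person = "
              f"{per_capita / CDROM_BYTES:.1f} CD-ROMs/person")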

The amount of digital data stored appears to be growing approximately exponentially, reminiscent of Moore's law. In the same vein, Kryder's law posits that the amount of available storage space grows approximately exponentially.

Information transmission

The world's technological capacity to receive information through one-way broadcast networks was 432 exabytes of (optimally compressed) information in 1986; 715 (optimally compressed) exabytes in 1993; 1.2 (optimally compressed) zettabytes in 2000; and 1.9 zettabytes in 2007, the information equivalent of 174 newspapers per person per day.

The world's effective capacity to exchange information through two-way telecommunications networks was 281 petabytes of (optimally compressed) information in 1986; 471 petabytes in 1993; 2.2 (optimally compressed) exabytes in 2000; and 65 (optimally compressed) exabytes in 2007, the information equivalent of six newspapers per person per day. In the 1990s, the spread of the Internet caused a sudden leap in access to and ability to share information in businesses and homes globally. A computer that cost $3000 in 1997 would cost $2000 two years later and $1000 the following year, owing to the rapid advancement of technology.

Computation

The world's technological capacity to compute information with human-guided general-purpose computers grew from 3.0 × 10⁸ MIPS in 1986, to 4.4 × 10⁹ MIPS in 1993, to 2.9 × 10¹¹ MIPS in 2000, and to 6.4 × 10¹² MIPS in 2007. An article featured in the journal Trends in Ecology and Evolution in 2016 reported that:

Digital technology has vastly exceeded the cognitive capacity of any single human being and has done so a decade earlier than predicted. In terms of capacity, there are two measures of importance: the number of operations a system can perform and the amount of information that can be stored. The number of synaptic operations per second in a human brain has been estimated to lie between 10^15 and 10^17. While this number is impressive, even in 2007 humanity's general-purpose computers were capable of performing well over 10^18 instructions per second. Estimates suggest that the storage capacity of an individual human brain is about 10^12 bytes. On a per capita basis, this is matched by current digital storage (5x10^21 bytes per 7.2x10^9 people).
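
The parenthetical per-capita figure in the quotation can be verified in one line of arithmetic (an added check):

    digital_storage = 5e21  # bytes of worldwide storage, per the quotation
    population = 7.2e9      # people, per the quotation
    brain_estimate = 1e12   # estimated bytes per human brain, per the quotation

    per_capita = digital_storage / population
    print(f"{per_capita:.2e} bytes per person")                         # ~6.9e11
    print(f"fraction of one brain estimate: {per_capita / brain_estimate:.2f}")  # ~0.69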

Genetic information

The genetic code may also be considered part of the information revolution. Now that sequencing has been computerized, the genome can be rendered and manipulated as data. This began with DNA sequencing, invented by Walter Gilbert and Allan Maxam in 1976–1977 and by Frederick Sanger in 1977; grew steadily with the Human Genome Project, initially conceived by Gilbert; and finally reached practical applications of sequencing, such as gene testing, after the discovery by Myriad Genetics of the BRCA1 breast cancer gene mutation. Sequence data in GenBank has grown from the 606 genome sequences registered in December 1982 to 231 million genomes in August 2021. An additional 13 trillion incomplete sequences were registered in the Whole Genome Shotgun submission database as of August 2021. The information contained in these registered sequences has doubled every 18 months.

Different stage conceptualizations

During rare times in human history, there have been periods of innovation that have transformed human life. The Neolithic Age, the Scientific Age and the Industrial Age all, ultimately, induced discontinuous and irreversible changes in the economic, social and cultural elements of the daily life of most people. Traditionally, these epochs have taken place over hundreds, or in the case of the Neolithic Revolution, thousands of years, whereas the Information Age swept to all parts of the globe in just a few years, as a result of the rapidly advancing speed of information exchange.

Between 7,000 and 10,000 years ago, during the Neolithic period, humans began to domesticate animals, to farm grains, and to replace stone tools with ones made of metal. These innovations allowed nomadic hunter-gatherers to settle down. Villages formed along the Yangtze River in China by 6,500 B.C., in the Nile River region of Africa, and in Mesopotamia (Iraq) by 6,000 B.C. Cities emerged between 6,000 B.C. and 3,500 B.C. The development of written communication (cuneiform in Sumeria and hieroglyphs in Egypt around 3,500 B.C., writing in Egypt by 2,560 B.C., and in Minoa and China around 1,450 B.C.) enabled ideas to be preserved for extended periods and to spread extensively. In all, Neolithic developments, augmented by writing as an information tool, laid the groundwork for the advent of civilization.

The Scientific Age began in the period between Copernicus's 1543 demonstration that the planets orbit the Sun and Newton's publication of the laws of motion and gravity in Principia in 1687. This age of discovery continued through the 18th century, accelerated by widespread use of the movable-type printing press of Johannes Gutenberg.

The Industrial Age began in Great Britain in 1760 and continued into the mid-19th century. The invention of machines such as the mechanical textile loom by Edmund Cartwright, the rotating-shaft steam engine by James Watt, and the cotton gin by Eli Whitney, along with processes for mass manufacturing, came to serve the needs of a growing global population. The Industrial Age harnessed steam and waterpower to reduce the dependence on animal and human physical labor as the primary means of production. Thus, the core of the Industrial Revolution was the generation and distribution of energy from coal and water to produce steam and, later in the 20th century, electricity.

The Information Age also requires electricity to power the global networks of computers that process and store data. However, what dramatically accelerated the Information Age's adoption, compared with previous ages, was the speed at which knowledge could be transferred and could pervade the entire human family in a few short decades. This acceleration came about with the adoption of a new form of power. Beginning in 1972, engineers devised ways to harness light to convey data through fiber optic cable. Today, light-based optical networking systems at the heart of telecom networks and the Internet span the globe and carry most of the information traffic to and from users and data storage systems.

Three stages of the Information Age

There are different conceptualizations of the Information Age. Some focus on the evolution of information over the ages, distinguishing between the Primary Information Age and the Secondary Information Age. Information in the Primary Information Age was handled by newspapers, radio, and television. The Secondary Information Age was developed by the Internet, satellite television, and mobile phones. The Tertiary Information Age emerged from the interconnection of the media of the Primary Information Age with the media of the Secondary Information Age, as experienced presently.

Stages of development expressed as Kondratiev waves

Others classify it in terms of the well-established Schumpeterian long waves or Kondratiev waves. Here authors distinguish three different long-term metaparadigms, each with different long waves. The first focused on the transformation of material, including stone, bronze, and iron. The second, often referred to as the Industrial Revolution, was dedicated to the transformation of energy, including water, steam, electric, and combustion power. Finally, the most recent metaparadigm aims at transforming information. It started out with the proliferation of communication and stored data and has now entered the age of algorithms, which aims at creating automated processes to convert the existing information into actionable knowledge.

Information in social and economic activities

The main feature of the information revolution is the growing economic, social, and technological role of information. Information-related activities did not begin with the Information Revolution. They existed, in one form or another, in all human societies, and eventually developed into institutions, such as the Platonic Academy, Aristotle's Peripatetic school in the Lyceum, the Musaeum and the Library of Alexandria, and the schools of Babylonian astronomy. The Agricultural Revolution and the Industrial Revolution came about when new informational inputs were produced by individual innovators, or by scientific and technical institutions. During the Information Revolution all these activities are experiencing continuous growth, while other information-oriented activities are emerging.

Information is the central theme of several new sciences that emerged in the 1940s, including Shannon's (1949) information theory and Wiener's (1948) cybernetics. Wiener stated: "information is information, not matter or energy". This aphorism suggests that information should be considered, along with matter and energy, as the third constituent part of the Universe; information is carried by matter or by energy. By the 1990s some writers believed that the changes implied by the Information Revolution would lead not only to a fiscal crisis for governments but also to the disintegration of all "large structures".

The theory of information revolution

The term information revolution may relate to, or contrast with, such widely used terms as Industrial Revolution and Agricultural Revolution. Note, however, that one may prefer a mentalist to a materialist paradigm. The following fundamental aspects of the theory of information revolution can be given:

  1. The object of economic activities can be conceptualized according to the fundamental distinction between matter, energy, and information. These apply both to the object of each economic activity, as well as within each economic activity or enterprise. For instance, an industry may process matter (e.g. iron) using energy and information (production and process technologies, management, etc.).
  2. Information is a factor of production (along with capital, labor, and land), as well as a product sold in the market, that is, a commodity. As such, it acquires use value and exchange value, and therefore a price.
  3. All products have use value, exchange value, and informational value. The latter can be measured by the information content of the product, in terms of innovation, design, etc.
  4. Industries develop information-generating activities, the so-called Research and Development (R&D) functions.
  5. Enterprises, and society at large, develop the information control and processing functions, in the form of management structures; these are also called "white-collar workers", "bureaucracy", "managerial functions", etc.
  6. Labor can be classified according to the object of labor, into information labor and non-information labor.
  7. Information activities constitute a large, new economic sector, the information sector, along with the traditional primary sector, secondary sector, and tertiary sector, according to the three-sector hypothesis. These should be restated because they are based on the ambiguous definitions made by Colin Clark (1940), who included in the tertiary sector all activities that had not been included in the primary (agriculture, forestry, etc.) and secondary (manufacturing) sectors. The quaternary sector and the quinary sector of the economy attempt to classify these new activities, but their definitions are not based on a clear conceptual scheme, although the latter is considered by some as equivalent to the information sector.
  8. From a strategic point of view, sectors can be defined as the information sector, means of production, and means of consumption, thus extending the classical Ricardo-Marx model of the capitalist mode of production (see Influences on Karl Marx). Marx stressed on many occasions the role of the "intellectual element" in production, but failed to find a place for it in his model.
  9. Innovations are the result of the production of new information, as new products, new methods of production, patents, etc. Diffusion of innovations manifests saturation effects (related term: market saturation), following certain cyclical patterns and creating "economic waves", also referred to as "business cycles". There are various types of waves, such as the Kondratiev wave (54 years), Kuznets swing (18 years), Juglar cycle (9 years) and Kitchin cycle (about 4 years; see also Joseph Schumpeter), distinguished by their nature, duration, and, thus, economic impact.
  10. Diffusion of innovations causes structural-sectoral shifts in the economy, which can be smooth or can create crisis and renewal, a process which Joseph Schumpeter called vividly "creative destruction".

From a different perspective, Irving E. Fang (1997) identified six 'Information Revolutions': writing, printing, mass media, entertainment, the 'tool shed' (which we call 'home' now), and the information highway. In this work the term 'information revolution' is used in a narrow sense, to describe trends in communication media.

Measuring and modeling the information revolution

Porat (1976) measured the information sector in the US using input-output analysis; the OECD has included statistics on the information sector in the economic reports of its member countries. Veneris (1984, 1990) explored the theoretical, economic, and regional aspects of the informational revolution and developed a systems-dynamics simulation computer model.

These works can be seen as following the path originated by the work of Fritz Machlup, who in his 1962 book "The Production and Distribution of Knowledge in the United States" claimed that the "knowledge industry represented 29% of the US gross national product", which he saw as evidence that the Information Age had begun. He defined knowledge as a commodity and attempted to measure the magnitude of the production and distribution of this commodity within a modern economy. Machlup divided information use into three classes: instrumental, intellectual, and pastime knowledge. He also identified five types of knowledge: practical knowledge; intellectual knowledge, that is, general culture and the satisfying of intellectual curiosity; pastime knowledge, that is, knowledge satisfying non-intellectual curiosity or the desire for light entertainment and emotional stimulation; spiritual or religious knowledge; and unwanted knowledge, accidentally acquired and aimlessly retained.

More recent estimates have reached the following results (reproduced in the sketch after this list):

  • the world's technological capacity to receive information through one-way broadcast networks grew at a sustained compound annual growth rate of 7% between 1986 and 2007;
  • the world's technological capacity to store information grew at a sustained compound annual growth rate of 25% between 1986 and 2007;
  • the world's effective capacity to exchange information through two-way telecommunications networks grew at a sustained compound annual growth rate of 30% during the same two decades;
  • the world's technological capacity to compute information with the help of humanly guided general-purpose computers grew at a sustained compound annual growth rate of 61% during the same period.
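
These rates compound over the 21 years from 1986 to 2007. A minimal Python sketch of the arithmetic (the per-category rates are taken from the list above; the totals are derived, not quoted from the source):

    def total_growth(rate, years):
        """Total multiple implied by a sustained compound annual growth rate."""
        return (1.0 + rate) ** years

    years = 2007 - 1986  # 21 years
    for label, rate in [("broadcast", 0.07), ("storage", 0.25),
                        ("telecom", 0.30), ("computation", 0.61)]:
        print(f"{label:12s}: {total_growth(rate, years):8.0f}x over {years} years")

A sustained 25% annual rate thus implies roughly a hundredfold growth in storage capacity over the period, while the 61% rate for computation implies growth on the order of twenty thousand times.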

Economics

Eventually, information and communication technology (ICT), i.e. computers, computerized machinery, fiber optics, communication satellites, the Internet, and other ICT tools, became a significant part of the world economy, as the development of optical networking and microcomputers greatly changed many businesses and industries. Nicholas Negroponte captured the essence of these changes in his 1995 book, Being Digital, in which he discusses the similarities and differences between products made of atoms and products made of bits.

Jobs and income distribution

The Information Age has affected the workforce in several ways, such as compelling workers to compete in a global job market. One of the most evident concerns is the replacement of human labor by computers that can do the same jobs faster and more effectively, creating a situation in which individuals who perform easily automated tasks are forced to find employment where their labor is not as disposable. This especially creates issues in industrial cities, where the typical solutions, such as lowering working time, are often strongly resisted. Thus, individuals who lose their jobs may be pressed to move up into more indispensable professions (e.g. engineers, doctors, lawyers, teachers, professors, scientists, executives, journalists, consultants), whose members are able to compete successfully in the world market and receive (relatively) high wages.

Along with automation, jobs traditionally associated with the middle class (e.g. assembly line, data processing, management, and supervision) have also begun to disappear as a result of outsourcing. Unable to compete with workers in developing countries, production and service workers in post-industrial (i.e. developed) societies either lose their jobs through outsourcing, accept wage cuts, or settle for low-skill, low-wage service jobs. In the past, the economic fate of individuals was tied to that of their nation. For example, workers in the United States were once well paid in comparison to those in other countries. With the advent of the Information Age and improvements in communication, this is no longer the case, as workers must now compete in a global job market in which wages are less dependent on the success or failure of individual economies.

In effectuating a globalized workforce, the internet has also created increased opportunity in developing countries, making it possible for workers in such places to deliver services online and thus compete directly with their counterparts in other nations. This competitive advantage translates into increased opportunities and higher wages.

Automation, productivity, and job gain

The Information Age has affected the workforce in that automation and computerization have resulted in higher productivity coupled with net job loss in manufacturing. In the United States, for example, from January 1972 to August 2010, the number of people employed in manufacturing fell from 17,500,000 to 11,500,000 while manufacturing value rose 270%. Although it initially appeared that job loss in the industrial sector might be partially offset by the rapid growth of jobs in information technology, the recession of March 2001 foreshadowed a sharp drop in the number of jobs in that sector. This pattern of decline continued until 2003, and data has shown that, overall, technology creates more jobs than it destroys, even in the short run.
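
A back-of-the-envelope reading of those figures, assuming "rose 270%" means output grew to 3.7 times its 1972 level (this reading is an assumption; the source phrasing is ambiguous):

    workers_1972, workers_2010 = 17_500_000, 11_500_000
    output_multiple = 3.7  # assumption: "rose 270%" = 3.7x the 1972 level

    employment_ratio = workers_2010 / workers_1972               # about 0.66
    print(f"employment change: {employment_ratio - 1:.0%}")      # about -34%
    print(f"output per worker: {output_multiple / employment_ratio:.1f}x")

On that reading, output per manufacturing worker rose roughly 5.6-fold over the period, which is the productivity gain the paragraph describes.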

Information-intensive industry

Industry has become more information-intensive and less labor- and capital-intensive. This has important implications for the workforce: workers have become increasingly productive even as the value of their labor declines. For the system of capitalism itself, the value of labor decreases while the value of capital increases.

In the classical model, investments in human and financial capital are important predictors of the performance of a new venture. However, as demonstrated by Mark Zuckerberg and Facebook, it now seems possible for a group of relatively inexperienced people with limited capital to succeed on a large scale.

Innovations

A visualization of the various routes through a portion of the Internet

The Information Age was enabled by technology developed in the Digital Revolution, which itself built on the developments of the Technological Revolution.

Transistors

The onset of the Information Age can be associated with the development of transistor technology. The concept of a field-effect transistor was first theorized by Julius Edgar Lilienfeld in 1925. The first practical transistor was the point-contact transistor, invented by the physicists Walter Houser Brattain and John Bardeen while working for William Shockley at Bell Labs in 1947. This was a breakthrough that laid the foundations for modern technology. Shockley's research team also invented the bipolar junction transistor in 1952. The most widely used type of transistor is the metal–oxide–semiconductor field-effect transistor (MOSFET), invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959 and first demonstrated in 1960. The complementary MOS (CMOS) fabrication process was developed by Frank Wanlass and Chih-Tang Sah in 1963.

Computers

Before the advent of electronics, mechanical computers, like Charles Babbage's Analytical Engine of 1837, were designed to provide routine mathematical calculation and simple decision-making capabilities. Military needs during World War II drove the development of the first programmable computers, from the relay-based Z3 to the vacuum-tube Atanasoff–Berry Computer, Colossus, and ENIAC.

The invention of the transistor enabled the era of mainframe computers (1950s–1970s), typified by the IBM System/360. These large, room-sized computers provided data calculation and manipulation much faster than humanly possible, but were expensive to buy and maintain, so they were initially limited to a few scientific institutions, large corporations, and government agencies.

The germanium integrated circuit (IC) was invented by Jack Kilby at Texas Instruments in 1958. The silicon integrated circuit was then invented in 1959 by Robert Noyce at Fairchild Semiconductor, using the planar process developed by Jean Hoerni, who was in turn building on Mohamed Atalla's silicon surface passivation method developed at Bell Labs in 1957. Following the invention of the MOS transistor by Mohamed Atalla and Dawon Kahng at Bell Labs, the MOS integrated circuit was developed by Fred Heiman and Steven Hofstein at RCA in 1962. The silicon-gate MOS IC was later developed by Federico Faggin at Fairchild Semiconductor in 1968. With the advent of the MOS transistor and the MOS IC, transistor technology rapidly improved, and the ratio of computing power to size increased dramatically, putting direct access to computers within reach of ever smaller groups of people.

The first commercial single-chip microprocessor, the Intel 4004, launched in 1971; it was developed by Federico Faggin, using his silicon-gate MOS IC technology, along with Marcian Hoff, Masatoshi Shima and Stan Mazor.

Along with electronic arcade machines and home video game consoles pioneered by Nolan Bushnell in the 1970s, the development of personal computers like the Commodore PET and Apple II (both in 1977) gave individuals access to the computer. However, data sharing between individual computers was either non-existent or largely manual, at first using punched cards and magnetic tape, and later floppy disks.

Data

Early developments in data storage were based on photographs, starting with microphotography in 1851 and then microform in the 1920s, which could store documents on film, making them much more compact. Early information theory and Hamming codes were developed around 1950, but awaited technical innovations in data transmission and storage to be put to full use.
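
Hamming codes correct any single flipped bit in a block by interleaving parity bits with data bits. Below is a minimal sketch of the textbook Hamming(7,4) construction, a generic illustration not tied to any particular historical system:

    def hamming74_encode(d1, d2, d3, d4):
        """Encode 4 data bits into a 7-bit codeword: p1 p2 d1 p3 d2 d3 d4."""
        p1 = d1 ^ d2 ^ d4   # parity over positions 3, 5, 7
        p2 = d1 ^ d3 ^ d4   # parity over positions 3, 6, 7
        p3 = d2 ^ d3 ^ d4   # parity over positions 5, 6, 7
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_decode(c):
        """Correct up to one flipped bit and return the 4 data bits."""
        c = list(c)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # check positions 1, 3, 5, 7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # check positions 2, 3, 6, 7
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # check positions 4, 5, 6, 7
        syndrome = s1 + 2 * s2 + 4 * s3  # 1-indexed position of the error
        if syndrome:
            c[syndrome - 1] ^= 1         # flip the bad bit back
        return [c[2], c[4], c[5], c[6]]

    codeword = hamming74_encode(1, 0, 1, 1)
    codeword[4] ^= 1                      # simulate a single-bit error
    assert hamming74_decode(codeword) == [1, 0, 1, 1]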

Magnetic-core memory was developed from the research of Frederick W. Viehe in 1947 and An Wang at Harvard University in 1949. With the advent of the MOS transistor, MOS semiconductor memory was developed by John Schmidt at Fairchild Semiconductor in 1964. In 1967, Dawon Kahng and Simon Sze at Bell Labs described how the floating gate of an MOS semiconductor device could be used as the cell of a reprogrammable ROM. Following the invention of flash memory by Fujio Masuoka at Toshiba in 1980, Toshiba commercialized NAND flash memory in 1987.

Copper wire cables transmitting digital data connected computer terminals and peripherals to mainframes, and special message-sharing systems leading to email were first developed in the 1960s. Independent computer-to-computer networking began with ARPANET in 1969 and expanded to become the Internet (a term coined in 1974). Access to the Internet improved with the invention of the World Wide Web in 1991. The capacity expansion from dense wavelength-division multiplexing, optical amplification and optical networking in the mid-1990s led to record data transfer rates. By 2018, optical networks routinely delivered 30.4 terabits/s over a fiber optic pair, the data equivalent of 1.2 million simultaneous 4K HD video streams.
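
The arithmetic behind that equivalence is a single division; the roughly 25 Mbit/s per-stream 4K bitrate is implied by the two quoted figures rather than stated in the source:

    link_capacity = 30.4e12   # 30.4 terabits/s over one fiber pair
    streams = 1.2e6           # 1.2 million simultaneous 4K streams
    print(f"{link_capacity / streams / 1e6:.1f} Mbit/s per stream")  # ~25.3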

MOSFET scaling, the rapid miniaturization of MOSFETs at a rate predicted by Moore's law,[122] led to computers becoming smaller and more powerful, to the point where they could be carried. During the 1980s–1990s, laptops were developed as a form of portable computer, and personal digital assistants (PDAs) could be used while standing or walking. Pagers, widely used by the 1980s, were largely replaced by mobile phones beginning in the late 1990s, which brought mobile networking features to some computers. Now commonplace, this technology has been extended to digital cameras and other wearable devices. Starting in the late 1990s, tablets and then smartphones combined and extended these abilities of computing, mobility, and information sharing. Metal–oxide–semiconductor (MOS) image sensors, which first began appearing in the late 1960s, led to the transition from analog to digital imaging, and from analog to digital cameras, during the 1980s–1990s. The most common image sensors are the charge-coupled device (CCD) sensor and the CMOS (complementary MOS) active-pixel sensor.
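
Moore's law is conventionally stated as a doubling of transistor count roughly every two years; the two-year period in this sketch is that conventional formulation, assumed for illustration:

    def transistors(n0, years, doubling_period=2.0):
        """Transistor count after `years`, given a fixed doubling period."""
        return n0 * 2 ** (years / doubling_period)

    # Projecting forward from the Intel 4004's roughly 2,300 transistors (1971):
    for year in (1981, 1991, 2001):
        print(year, f"{transistors(2300, year - 1971):,.0f}")

Ten years of two-year doublings multiply the count by 32, and thirty years by about 32,000: the exponential that carried computers from room-sized machines to pocket devices.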

Electronic paper, which has origins in the 1970s, allows digital information to appear as paper documents.

Personal computers

By 1976, several firms were racing to introduce the first truly successful commercial personal computer. Three machines, the Apple II, the Commodore PET 2001, and the TRS-80, were all released in 1977, becoming the most popular machines by late 1978. Byte magazine later referred to Commodore, Apple, and Tandy as the "1977 Trinity". Also in 1977, Sord Computer Corporation released the Sord M200 Smart Home Computer in Japan.

Apple II

April 1977: Apple II.

Steve Wozniak (known as "Woz"), a regular visitor to Homebrew Computer Club meetings, designed the single-board Apple I computer and first demonstrated it there. With specifications in hand and an order for 100 machines at US$500 each from the Byte Shop, Woz and his friend Steve Jobs founded Apple Computer.

About 200 of the machines sold before the company announced the Apple II as a complete computer. It had color graphics, a full QWERTY keyboard, and internal slots for expansion, all mounted in a high-quality, streamlined plastic case. The monitor and I/O devices were sold separately. The original Apple II operating system was only the built-in BASIC interpreter contained in ROM. Apple DOS was added to support the diskette drive; the last version was Apple DOS 3.3.

Its higher price and lack of floating-point BASIC, along with a lack of retail distribution sites, caused it to lag in sales behind the other Trinity machines until 1979, when it surpassed the PET. It was then pushed into fourth place when Atari, Inc. introduced its Atari 8-bit computers.

Despite slow initial sales, the Apple II's lifetime was about eight years longer than that of the other machines, and so it accumulated the highest total sales. By 1985, 2.1 million had been sold, and more than 4 million Apple IIs had shipped by the end of production in 1993.

Optical networking

Optical communication plays a crucial role in communication networks: it provides the transmission backbone for the telecommunications and computer networks that underlie the Internet, the foundation of the Digital Revolution and the Information Age.

The two core technologies are the optical fiber and light amplification (the optical amplifier). In 1953, Bram van Heel demonstrated image transmission through bundles of optical fibers with a transparent cladding. The same year, Harold Hopkins and Narinder Singh Kapany at Imperial College succeeded in making image-transmitting bundles with over 10,000 optical fibers, and subsequently achieved image transmission through a 75 cm long bundle which combined several thousand fibers.

Gordon Gould is credited with the invention of the optical amplifier and the laser; he also established the first optical telecommunications company, Optelecom, to design communication systems. The firm was a co-founder of Ciena Corp., the venture that popularized the optical amplifier with the introduction of the first dense wavelength-division multiplexing (DWDM) system. This massive-scale communication technology has emerged as the common basis of all telecommunications networks and, thus, a foundation of the Information Age.

Economy, society, and culture

Manuel Castells captures the significance of the Information Age in The Information Age: Economy, Society and Culture when he writes of our global interdependence and the new relationships between economy, state and society, what he calls "a new society-in-the-making." He cautions that the fact that humans have come to dominate the material world does not mean that the Information Age is the end of history:

"It is in fact, quite the opposite: history is just beginning, if by history we understand the moment when, after millennia of a prehistoric battle with Nature, first to survive, then to conquer it, our species has reached the level of knowledge and social organization that will allow us to live in a predominantly social world. It is the beginning of a new existence, and indeed the beginning of a new age, The Information Age, marked by the autonomy of culture vis-à-vis the material basis of our existence."

Thomas Chatterton Williams wrote about the dangers of anti-intellectualism in the Information Age in a piece for The Atlantic. Although access to information has never been greater, most information is irrelevant or insubstantial. The Information Age's emphasis on speed over expertise contributes to "superficial culture in which even the elite will openly disparage as pointless our main repositories for the very best that has been thought."
