
Sunday, August 21, 2022

Non-coding DNA

From Wikipedia, the free encyclopedia

Non-coding DNA (ncDNA) sequences are components of an organism's DNA that do not encode protein sequences. Some non-coding DNA is transcribed into functional non-coding RNA molecules (e.g. transfer RNA, microRNA, piRNA, ribosomal RNA, and regulatory RNAs). Other functional regions of the non-coding DNA fraction include regulatory sequences that control gene expression; scaffold attachment regions; origins of DNA replication; centromeres; and telomeres. Some regions appear to be mostly nonfunctional, such as introns, pseudogenes, intergenic DNA, and fragments of transposons and viruses. These apparently non-functional regions take up most of the genome of many eukaryotes, and many scientists think that they are junk DNA.

Fraction of non-coding genomic DNA

In bacteria, the coding regions typically take up 88% of the genome. The remaining 12% consists largely of non-coding genes and regulatory sequences, which means that almost all of the bacterial genome has a function. The amount of coding DNA in eukaryotes is usually a much smaller fraction of the genome because eukaryotic genomes contain large amounts of repetitive DNA not found in prokaryotes. The human genome contains somewhere between 1% and 2% coding DNA. (The exact number isn't known because there are disputes over the number of functional coding exons and over the total size of the human genome.) This means that 98-99% of the human genome consists of non-coding DNA, and this includes many functional elements such as non-coding genes and regulatory sequences (see below).
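
As a rough illustration of where the 1-2% figure comes from, the coding fraction can be estimated from the number of protein-coding genes and the average length of a coding sequence. The sketch below uses round, assumed values (about 20,000 protein-coding genes, an average coding sequence of roughly 1,300 bp, and a genome of roughly 3.1 billion bp); it lands near the low end of the quoted range, and counting additional exons, alternative transcripts, or disputed coding genes pushes the estimate higher, which is one reason the exact number is disputed.

    # Rough, illustrative estimate of the coding fraction of the human genome.
    # All numbers are round assumptions, not authoritative values.
    num_coding_genes = 20_000        # approximate number of protein-coding genes
    avg_coding_bp_per_gene = 1_300   # approximate average coding sequence length (bp)
    genome_size_bp = 3_100_000_000   # approximate haploid genome size (bp)

    coding_bp = num_coding_genes * avg_coding_bp_per_gene
    coding_fraction = coding_bp / genome_size_bp
    print(f"coding DNA ≈ {coding_fraction:.1%} of the genome")    # ≈ 0.8%
    print(f"non-coding DNA ≈ {1 - coding_fraction:.1%}")          # ≈ 99.2%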

Genome size in eukaryotes can vary over a wide range, even between closely related species. This puzzling observation was originally known as the C-value Paradox, where "C" refers to the haploid genome size. The paradox was resolved with the discovery that most of the differences were due to the expansion and contraction of repetitive DNA and not the number of genes. Some researchers speculated that this repetitive DNA was mostly junk DNA. The reasons for the changes in genome size are still being worked out and this problem is called the C-value Enigma.

This led to the observation that the number of genes does not seem to correlate with perceived notions of complexity, since gene number is relatively constant across organisms - an issue that's called the G-value Paradox. For example, the genome of the unicellular Polychaos dubium (formerly known as Amoeba dubia) has been reported to contain more than 200 times the amount of DNA in humans (i.e. more than 600 billion base pairs vs a bit more than 3 billion in humans). The pufferfish Takifugu rubripes genome is only about one eighth the size of the human genome, yet seems to have a comparable number of genes. Genes take up about 30% of the pufferfish genome and the coding DNA is about 10%. (Non-coding DNA = 90%.) The reduced size of the pufferfish genome is due to a reduction in the length of introns and less repetitive DNA.

Utricularia gibba, a bladderwort plant, has a very small nuclear genome (100.2 Mb) compared to most plants. It likely evolved from an ancestral genome that was 1,500 Mb in size. The bladderwort genome has roughly the same number of genes as other plants but the total amount of coding DNA comes to about 30% of the genome. (Neither paper gives a precise number but it can be estimated from the number of genes and the average size of a coding region.)

The remainder of the genome (70% non-coding DNA) consists of promoters and regulatory sequences that are shorter than those in other plant species. The genes contain introns but there are fewer of them and they are smaller than the introns in other plant genomes. There are noncoding genes, including many copies of ribosomal RNA genes. The genome also contains telomere sequences and centromeres as expected. Much of the repetitive DNA seen in other eukaryotes has been deleted from the bladderwort genome since that lineage split from those of other plants. About 59% of the bladderwort genome consists of transposon-related sequences but since the genome is so much smaller than other genomes, this represents a considerable reduction in the amount of this DNA. The authors of the original 2013 article note that claims of additional functional elements in the non-coding DNA of animals ('dark matter') don't seem to apply to plant genomes.

According to a New York Times piece, during the evolution of this species, "... genetic junk that didn’t serve a purpose was expunged, and the necessary stuff was kept." That's because the bladderwort genome consists mostly of functional genes and their regulatory systems whereas the human genome is more than 90% junk DNA. One of the leading investigators on the study, Victor Albert of the University at Buffalo, puts it like this,

"The big story is that only 3 percent of the bladderwort's genetic material is so-called 'junk' DNA," Albert said. "Somehow, this plant has purged most of what makes up plant genomes. What that says is that you can have a perfectly good multicellular plant with lots of different cells, organs, tissue types and flowers, and you can do it without the junk. Junk is not needed."

Types of non-coding DNA sequences

Noncoding genes

There are two types of genes: protein coding genes and noncoding genes. Noncoding genes are an important part of non-coding DNA and they include genes for transfer RNA and ribosomal RNA. These genes were discovered in the 1960s. Prokaryotic genomes contain genes for a number of other noncoding RNAs but noncoding RNA genes are much more common in eukaryotes.

Typical classes of noncoding genes in eukaryotes include genes for small nuclear RNAs (snRNAs), small nucleolar RNAs (snoRNAs), microRNAs (miRNAs), short interfering RNAs (siRNAs), PIWI-interacting RNAs (piRNAs), and long noncoding RNAs (lncRNAs). In addition, there are a number of unique RNA genes that produce catalytic RNAs.

Noncoding genes account for only a few percent of prokaryotic genomes but they can represent a vastly higher fraction in eukaryotic genomes. In humans, the noncoding genes take up at least 6% of the genome, largely because there are hundreds of copies of ribosomal RNA genes. Protein-coding genes occupy about 38% of the genome, a fraction that is much higher than the coding fraction because genes contain large introns.

The total number of noncoding genes in the human genome is controversial. Some scientists think that there are only about 5,000 noncoding genes while others believe that there may be more than 100,000 (see the article on Non-coding RNA). The difference is largely due to debate over the number of lncRNA genes.

Promoters and regulatory elements

Promoters are DNA segments near the 5' end of the gene where transcription begins. They are the sites where RNA polymerase binds to initiate RNA synthesis. Every gene has a noncoding promoter.

Regulatory elements are sites that control the transcription of a nearby gene. They are almost always sequences where transcription factors bind to DNA and these transcription factors can either activate transcription (activators) or repress transcription (repressors). Regulatory elements were discovered in the 1960s and their general characteristics were worked out in the 1970s by studying specific transcription factors in bacteria and bacteriophage.

Promoters and regulatory sequences represent an abundant class of noncoding DNA but they mostly consist of a collection of relatively short sequences so they don't take up a very large fraction of the genome. The exact amount of regulatory DNA in mammalian genomes is unclear because it is difficult to distinguish between spurious transcription factor binding sites and those that are functional. The binding characteristics of typical DNA-binding proteins were characterized in the 1970s and the biochemical properties of transcription factors predict that in cells with large genomes the majority of binding sites will be fortuitous and not biologically functional.
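
A simple probability sketch shows why fortuitous sites are expected to be so common in a large genome. Assuming, purely for illustration, a transcription factor that recognizes one specific 10 bp sequence, the expected number of chance matches in a random genome the size of the human genome is already in the thousands; real recognition sequences are shorter or more degenerate, so the true number of spurious matches is far larger.

    # Expected number of chance matches for a specific 10 bp recognition sequence
    # in a random genome; illustrative assumptions only.
    genome_size_bp = 3_100_000_000                   # approximate human genome size
    site_length = 10                                 # assumed length of the recognition sequence
    p_match = 0.25 ** site_length                    # probability of a match at any one position
    expected_sites = 2 * genome_size_bp * p_match    # count both strands
    print(f"expected chance matches ≈ {expected_sites:,.0f}")   # ≈ 5,900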

Many regulatory sequences occur near promoters, usually upstream of the transcription start site of the gene. Some occur within a gene and a few are located downstream of the transcription termination site. In eukaryotes, there are some regulatory sequences that are located at a considerable distance from the promoter region. These distant regulatory sequences are often called enhancers but there is no rigorous definition of enhancer that distinguishes it from other transcription factor binding sites.

Introns

Illustration of an unspliced pre-mRNA precursor, with five introns and six exons (top). After the introns have been removed via splicing, the mature mRNA sequence is ready for translation (bottom).

Introns are the parts of a gene that are transcribed into the precursor RNA sequence, but ultimately removed by RNA splicing during the processing to mature RNA. Introns are found in both types of genes: protein-coding genes and noncoding genes. They are present in prokaryotes but they are much more common in eukaryotic genomes.

Group I and group II introns take up only a small percentage of the genome when they are present. Spliceosomal introns (see Figure) are only found in eukaryotes and they can represent a substantial proportion of the genome. In humans, for example, introns in protein-coding genes cover 37% of the genome. Combining that with about 1% coding sequences means that protein-coding genes occupy about 39% of the human genome. The calculations for noncoding genes are more complicated because there's considerable dispute over the total number of noncoding genes but taking only the well-defined examples means that noncoding genes occupy at least 6% of the genome.

Thus, genes take up 45% of the human genome and most of this is noncoding DNA in introns.
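
The 45% figure follows from simple addition of the approximate fractions given above; the short sketch below just repeats that arithmetic using the percentages quoted in this section (taking the upper end of the 1-2% coding estimate).

    # Adding up the approximate fractions of the human genome quoted above.
    introns_in_coding_genes = 0.37   # introns of protein-coding genes
    coding_sequence = 0.02           # coding sequences (1-2%; upper estimate used here)
    protein_coding_genes = introns_in_coding_genes + coding_sequence
    noncoding_genes = 0.06           # well-defined noncoding genes (lower bound)
    all_genes = protein_coding_genes + noncoding_genes
    print(f"protein-coding genes ≈ {protein_coding_genes:.0%} of the genome")   # ≈ 39%
    print(f"all genes ≈ {all_genes:.0%} of the genome")                          # ≈ 45%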

There are good reasons to believe that most of the intron DNA is junk DNA (see the discussion in the separate Wikipedia article on introns).

Untranslated regions

The standard biochemistry and molecular biology textbooks describe non-coding nucleotides in mRNA located between the 5' end of the gene and the translation initiation codon. These regions are called 5'-untranslated regions or 5'-UTRs. Similar regions called 3'-untranslated regions (3'-UTRs) are found at the end of the gene. The 5'-UTRs and 3'-UTRs are very short in bacteria but they can be several hundred nucleotides in length in eukaryotes. They contain short elements that control the initiation of translation (5'-UTRs) and transcription termination (3'-UTRs) as well as regulatory elements that may control mRNA stability, processing, and targeting to different regions of the cell.

Origins of replication

DNA synthesis begins at specific sites called origins of replication. These are regions of the genome where the DNA replication machinery is assembled and the DNA is unwound to begin DNA synthesis. In most cases, replication proceeds in both directions from the replication origin.

The main features of replication origins are sequences where specific initiation proteins are bound. A typical replication origin covers about 100-200 base pairs of DNA. Prokaryotes have one origin of replication per chromosome or plasmid but there are usually multiple origins in eukaryotic chromosomes. The human genome contains about 100,000 origins of replication representing about 0.3% of the genome.
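
The 0.3% figure is consistent with the numbers just given: 100,000 origins of roughly 100 bp each add up to about 10 million bp out of a genome of roughly 3.1 billion bp. A quick check with these assumed round numbers:

    # Quick check of the fraction of the genome occupied by replication origins.
    num_origins = 100_000
    origin_size_bp = 100                 # low end of the 100-200 bp range
    genome_size_bp = 3_100_000_000
    fraction = num_origins * origin_size_bp / genome_size_bp
    print(f"origins of replication ≈ {fraction:.1%} of the genome")   # ≈ 0.3%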

Centromeres

Centromeres are the sites where spindle fibers attach to newly replicated chromosomes in order to segregate them into daughter cells when the cell divides. Each eukaryotic chromosome has a single functional centromere that's seen as a constricted region in a condensed metaphase chromosome. Centromeric DNA consists of a number of repetitive DNA sequences that often take up a significant fraction of the genome because each centromere can be millions of base pairs in length. In humans, for example, the sequences of all 24 centromeres have been determined and they account for about 6% of the genome. However, it's unlikely that all of this noncoding DNA is essential since there is considerable variation in the total amount of centromeric DNA in different individuals. Centromeres are another example of functional noncoding DNA sequences that have been known for almost half a century and it's likely that they are more abundant than coding DNA.

Telomeres

Telomeres are regions of repetitive DNA at the end of a chromosome, which provide protection from chromosomal deterioration during DNA replication. Recent studies have shown that telomeres function to aid in their own stability. Telomeric repeat-containing RNAs (TERRA) are transcripts derived from telomeres. TERRA has been shown to maintain telomerase activity and lengthen the ends of chromosomes.

Scaffold attachment regions

Both prokaryotic and eukaryotic genomes are organized into large loops of protein-bound DNA. In eukaryotes, the bases of the loops are called scaffold attachment regions (SARs) and they consist of stretches of DNA that bind an RNA/protein complex to stabilize the loop. There are about 100,000 loops in the human genome and each one consists of about 100 bp of DNA. The total amount of DNA devoted to SARs accounts for about 0.3% of the human genome.

Pseudogenes

Pseudogenes are mostly former genes that have become non-functional due to mutation but the term also refers to inactive DNA sequences that are derived from RNAs produced by functional genes (processed pseudogenes). Pseudogenes are only a small fraction of noncoding DNA in prokaryotic genomes because they are eliminated by negative selection. In some eukaryotes, however, pseudogenes can accumulate because selection isn't powerful enough to eliminate them (see Nearly neutral theory of molecular evolution).

The human genome contains about 15,000 pseudogenes derived from protein-coding genes and an unknown number derived from noncoding genes. They may cover a substantial fraction of the genome (~5%) since many of them contain former intron sequences.

Pseudogenes are junk DNA by definition and they evolve at the neutral rate as expected for junk DNA. Some former pseudogenes have secondarily acquired a function and this leads some scientists to speculate that most pseudogenes are not junk because they have a yet-to-be-discovered function.

Repeat sequences, transposons and viral elements

Mobile genetic elements in the cell (left) and how they can be acquired (right)

Transposons and retrotransposons are mobile genetic elements. Retrotransposon repeated sequences, which include long interspersed nuclear elements (LINEs) and short interspersed nuclear elements (SINEs), account for a large proportion of the genomic sequences in many species. Alu sequences, classified as a short interspersed nuclear element, are the most abundant mobile elements in the human genome. Some examples have been found of SINEs exerting transcriptional control of some protein-encoding genes.

Endogenous retrovirus sequences are the product of reverse transcription of retrovirus genomes into the genomes of germ cells. Mutation within these retro-transcribed sequences can inactivate the viral genome.

Over 8% of the human genome is made up of (mostly decayed) endogenous retrovirus sequences, as part of the over 42% fraction that is recognizably derived from retrotransposons, while another 3% can be identified as the remains of DNA transposons. Much of the remaining half of the genome that is currently without an explained origin is thought to have originated from transposable elements that were active so long ago (> 200 million years) that random mutations have rendered them unrecognizable. Genome size variation in at least two kinds of plants is mostly the result of retrotransposon sequences.

Highly repetitive DNA

Highly repetitive DNA consists of short stretches of DNA that are repeated many times in tandem (one after the other). The repeat segments are usually between 2 bp and 10 bp but longer ones are known. Highly repetitive DNA is rare in prokaryotes but common in eukaryotes, especially those with large genomes. It is sometimes called satellite DNA.

Most of the highly repetitive DNA is found in centromeres and telomeres (see above) and most of it is functional although some might be redundant. The other significant fraction resides in short tandem repeats (STRs; also called microsatellites) consisting of short stretches of a simple repeat such as ATC. There are about 350,000 STRs in the human genome and they are scattered throughout the genome with an average length of about 25 repeats.

Variations in the number of STR repeats can cause genetic diseases when they lie within a gene but most of these regions appear to be non-functional junk DNA where the number of repeats can vary considerably from individual to individual. This is why these length differences are used extensively in DNA fingerprinting.
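
DNA fingerprinting exploits exactly this variability: the number of repeats at a handful of STR loci differs enough between unrelated individuals that a small panel of loci is effectively unique. The toy comparison below uses invented locus names and repeat counts purely to illustrate the idea; real forensic panels use standardized loci and more of them.

    # Toy illustration of STR-based comparison; loci and repeat counts are invented.
    # Each value is a pair of allele repeat counts (one from each parent).
    person_a = {"locus1": (12, 14), "locus2": (7, 9), "locus3": (21, 21)}
    person_b = {"locus1": (12, 15), "locus2": (7, 9), "locus3": (18, 22)}

    matches = [locus for locus in person_a if person_a[locus] == person_b[locus]]
    print(f"matching loci: {matches}")                      # only locus2 matches
    print(f"profiles identical: {person_a == person_b}")    # False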

Junk DNA

"Junk DNA" refers broadly to "any DNA sequence that does not play a functional role in development, physiology, or some other organism-level capacity." The term "junk DNA" was used in the 1960s. but it only became widely known in 1972 in a paper by Susumu Ohno. Ohno noted that the mutational load from deleterious mutations placed an upper limit on the number of functional loci that could be expected given a typical mutation rate. He hypothesized that mammalian genomes could not have more than 30,000 loci under selection before the "cost" from the mutational load would cause an inescapable decline in fitness, and eventually extinction. The presence of junk DNA also explained the observation that even closely related species can have widely (orders-of-magnitude) different genome sizes (C-value paradox).

Since the late 1970s it has become apparent that most of the DNA in large genomes finds its origin in the selfish amplification of transposable elements, of which W. Ford Doolittle and Carmen Sapienza in 1980 wrote in the journal Nature: "When a given DNA, or class of DNAs, of unproven phenotypic function can be shown to have evolved a strategy (such as transposition) which ensures its genomic survival, then no other explanation for its existence is necessary." The amount of junk DNA can be expected to depend on the rate of amplification of these elements and the rate at which non-functional DNA is lost. Another source is genome duplication followed by a loss of function due to redundancy. In the same issue of Nature, Leslie Orgel and Francis Crick wrote that junk DNA has "little specificity and conveys little or no selective advantage to the organism".

The term "junk DNA" may provoke a strong reaction and some have recommended using more neutral terminology such as "nonfunctional DNA." Junk DNA is often confused with non-coding DNA but, as documented above, there are substantial fractions of non-coding DNA that have well-defined functions such as regulation, non-coding genes, origins of replication, telomeres, centromeres, and chromatin organizing sites (SARs).

ENCODE Project

The Encyclopedia of DNA Elements (ENCODE) project uncovered, by direct biochemical approaches, that at least 80% of human genomic DNA has biochemical activity such as "transcription, transcription factor association, chromatin structure, and histone modification". Though this was not necessarily unexpected due to previous decades of research discovering many functional non-coding regions, some scientists criticized the conclusion for conflating biochemical activity with biological function. Estimates for the biologically functional fraction of the human genome based on comparative genomics range between 8 and 15%. However, others have argued against relying solely on estimates from comparative genomics due to its limited scope since non-coding DNA has been found to be involved in epigenetic activity and complex networks of genetic interactions and is explored in evolutionary developmental biology. One consistent indication of biological functionality of a genomic region is if the sequence of that genomic region was maintained by purifying selection (or if mutating away the sequence is deleterious to the organism). Under this definition, 90% of the genome is 'junk'. However, some stress that 'junk' is not 'garbage' and the large body of nonfunctional transcripts produced by 'junk DNA' can evolve functional elements de novo.

The meaning of the results has been disputed by other scientists, who argue that neither accessibility of segments of the genome to transcription factors nor their transcription guarantees that those segments have biochemical function and that their transcription is selectively advantageous. After all, non-functional sections of the genome can be transcribed, given that transcription factors typically bind to short sequences that are found (randomly) all over the whole genome.

Furthermore, the much lower estimates of functionality prior to ENCODE were based on genomic conservation estimates across mammalian lineages. Widespread transcription and splicing in the human genome have been discussed as another indicator of genetic function, in addition to genomic conservation, which may miss poorly conserved functional sequences. Furthermore, much of the apparent junk DNA is involved in epigenetic regulation and appears to be necessary for the development of complex organisms. Genetic approaches may miss functional elements that do not manifest physically on the organism; evolutionary approaches have difficulties using accurate multispecies sequence alignments, since genomes of even closely related species vary considerably; and biochemical approaches, though highly reproducible, yield biochemical signatures that do not always automatically signify a function. Kellis et al. noted that 70% of the transcription coverage was less than 1 transcript per cell (and may thus be based on spurious background transcription). On the other hand, they argued that a 12–15% fraction of human DNA may be under functional constraint, and that this may still be an underestimate when lineage-specific constraints are included. Ultimately, genetic, evolutionary, and biochemical approaches can all be used in a complementary way to identify regions that may be functional in human biology and disease. Some critics have argued that functionality can only be assessed in reference to an appropriate null hypothesis. In this case, the null hypothesis would be that these parts of the genome are non-functional and have properties, be it on the basis of conservation or biochemical activity, that would be expected of such regions based on our general understanding of molecular evolution and biochemistry. According to these critics, until a region in question has been shown to have additional features, beyond what is expected of the null hypothesis, it should provisionally be labelled as non-functional.

Genome-wide association studies (GWAS) and non-coding DNA

Genome-wide association studies (GWAS) identify linkages between alleles and observable traits such as phenotypes and diseases. Most of the associations are between single-nucleotide polymorphisms (SNPs) and the trait being examined and most of these SNPs are located in non-functional DNA. The association establishes a linkage that helps map the DNA region responsible for the trait but it doesn't necessarily identify the mutations causing the disease or phenotypic difference.

SNPs that are tightly linked to traits are the ones most likely to identify a causal mutation. (The association is referred to as tight linkage disequilibrium.) About 12% of these polymorphisms are found in coding regions; about 40% are located in introns; and most of the rest are found in intergenic regions, including regulatory sequences.

Virtual reality

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Virtual_reality

Researchers with the European Space Agency in Darmstadt, Germany, equipped with a VR headset and motion controllers, demonstrating how astronauts might use virtual reality in the future to train to extinguish a fire inside a lunar habitat

Virtual reality (VR) is a simulated experience that can be similar to or completely different from the real world. Applications of virtual reality include entertainment (particularly video games), education (such as medical or military training) and business (such as virtual meetings). Other distinct types of VR-style technology include augmented reality and mixed reality, sometimes referred to as extended reality or XR.

Currently, standard virtual reality systems use either virtual reality headsets or multi-projected environments to generate realistic images, sounds and other sensations that simulate a user's physical presence in a virtual environment. A person using virtual reality equipment is able to look around the artificial world, move around in it, and interact with virtual features or items. The effect is commonly created by VR headsets consisting of a head-mounted display with a small screen in front of the eyes, but can also be created through specially designed rooms with multiple large screens. Virtual reality typically incorporates auditory and video feedback, but may also allow other types of sensory and force feedback through haptic technology.

Etymology

"Virtual" has had the meaning of "being something in essence or effect, though not actually or in fact" since the mid-1400s. The term "virtual" has been used in the computer sense of "not physically existing but made to appear by software" since 1959.

In 1938, French avant-garde playwright Antonin Artaud described the illusory nature of characters and objects in the theatre as "la réalité virtuelle" in a collection of essays, Le Théâtre et son double. The English translation of this book, published in 1958 as The Theater and its Double, is the earliest published use of the term "virtual reality". The term "artificial reality", coined by Myron Krueger, has been in use since the 1970s. The term "virtual reality" was first used in a science fiction context in The Judas Mandala, a 1982 novel by Damien Broderick.

Widespread adoption of the term "virtual reality" in the popular media is attributed to Jaron Lanier, who in the late 1980s designed some of the first business-grade virtual reality hardware under his firm VPL Research, and to the 1992 film The Lawnmower Man, which features use of virtual reality systems.

Forms and methods

One method by which virtual reality can be realized is simulation-based virtual reality. Driving simulators, for example, give the driver on board the impression of actually driving an actual vehicle by predicting vehicular motion caused by driver input and feeding back corresponding visual, motion and audio cues to the driver.

With avatar image-based virtual reality, people can join the virtual environment in the form of real video as well as an avatar. One can participate in the 3D distributed virtual environment in the form of either a conventional avatar or real video. Users can select their own type of participation based on the system capability.

In projector-based virtual reality, modeling of the real environment plays a vital role in various virtual reality applications, such as robot navigation, construction modeling, and airplane simulation. Image-based virtual reality systems have been gaining popularity in computer graphics and computer vision communities. In generating realistic models, it is essential to accurately register acquired 3D data; usually, a camera is used for modeling small objects at a short distance.

Desktop-based virtual reality involves displaying a 3D virtual world on a regular desktop display without use of any specialized VR positional tracking equipment. Many modern first-person video games can be used as an example, using various triggers, responsive characters, and other such interactive devices to make the user feel as though they are in a virtual world. A common criticism of this form of immersion is that there is no sense of peripheral vision, limiting the user's ability to know what is happening around them.

An Omni treadmill being used at a VR convention.
 

A head-mounted display (HMD) more fully immerses the user in a virtual world. A virtual reality headset typically includes two small high-resolution OLED or LCD displays that provide separate images for each eye for stereoscopic rendering of a 3D virtual world, a binaural audio system, and real-time positional and rotational head tracking with six degrees of freedom. Options include motion controllers with haptic feedback for physically interacting within the virtual world in an intuitive way with little to no abstraction, and an omnidirectional treadmill that allows the user to perform locomotive motion in any direction for more freedom of physical movement.

Augmented reality (AR) is a type of virtual reality technology that blends what the user sees in their real surroundings with digital content generated by computer software. The additional software-generated images within the virtual scene typically enhance how the real surroundings look in some way. AR systems layer virtual information over a live camera feed into a headset or smartglasses, or through a mobile device, giving the user the ability to view three-dimensional images.

Mixed reality (MR) is the merging of the real world and virtual worlds to produce new environments and visualizations where physical and digital objects co-exist and interact in real time.

A cyberspace is sometimes defined as a networked virtual reality.

Simulated reality is a hypothetical virtual reality as truly immersive as the actual reality, enabling an advanced lifelike experience or even virtual eternity.

History

View-Master, a stereoscopic visual simulator, was introduced in 1939

The exact origins of virtual reality are disputed, partly because of how difficult it has been to formulate a definition for the concept of an alternative existence. The development of perspective in Renaissance Europe created convincing depictions of spaces that did not exist, in what has been referred to as the "multiplying of artificial worlds". Other elements of virtual reality appeared as early as the 1860s. Antonin Artaud took the view that illusion was not distinct from reality, advocating that spectators at a play should suspend disbelief and regard the drama on stage as reality. The first references to the more modern concept of virtual reality came from science fiction.

20th century

Morton Heilig wrote in the 1950s of an "Experience Theatre" that could encompass all the senses in an effective manner, thus drawing the viewer into the onscreen activity. He built a prototype of his vision dubbed the Sensorama in 1962, along with five short films to be displayed in it while engaging multiple senses (sight, sound, smell, and touch). Predating digital computing, the Sensorama was a mechanical device. Heilig also developed what he referred to as the "Telesphere Mask" (patented in 1960). The patent application described the device as "a telescopic television apparatus for individual use...The spectator is given a complete sensation of reality, i.e. moving three dimensional images which may be in colour, with 100% peripheral vision, binaural sound, scents and air breezes."

In 1968, Ivan Sutherland, with the help of his students including Bob Sproull, created what was widely considered to be the first head-mounted display system for use in immersive simulation applications. It was primitive both in terms of user interface and visual realism, and the HMD to be worn by the user was so heavy that it had to be suspended from the ceiling. The graphics comprising the virtual environment were simple wire-frame model rooms. The formidable appearance of the device inspired its name, The Sword of Damocles.

1970–1990

The virtual reality industry mainly provided VR devices for medical, flight simulation, automobile industry design, and military training purposes from 1970 to 1990.

David Em became the first artist to produce navigable virtual worlds at NASA's Jet Propulsion Laboratory (JPL) from 1977 to 1984. The Aspen Movie Map, a crude virtual tour in which users could wander the streets of Aspen in one of the three modes (summer, winter, and polygons), was created at MIT in 1978.

NASA Ames's 1985 VIEW headset

In 1979, Eric Howlett developed the Large Expanse, Extra Perspective (LEEP) optical system. The combined system created a stereoscopic image with a field of view wide enough to create a convincing sense of space. Users of the system were impressed by the sensation of depth (field of view) in the scene and the corresponding realism. The original LEEP system was redesigned for NASA's Ames Research Center in 1985 for their first virtual reality installation, the VIEW (Virtual Interactive Environment Workstation), by Scott Fisher. The LEEP system provides the basis for most of the modern virtual reality headsets.

A VPL Research DataSuit, a full-body outfit with sensors for measuring the movement of arms, legs, and trunk. Developed circa 1989. Displayed at the Nissho Iwai showroom in Tokyo

By the late 1980s, the term "virtual reality" was popularized by Jaron Lanier, one of the modern pioneers of the field. Lanier had founded the company VPL Research in 1985. VPL Research developed several VR devices such as the DataGlove, the EyePhone, and the AudioSphere. VPL licensed the DataGlove technology to Mattel, which used it to make the Power Glove, an early affordable VR device.

Atari, Inc. founded a research lab for virtual reality in 1982, but the lab was closed after two years due to the Atari Shock (video game crash of 1983). However, its former employees, such as Tom Zimmerman, Scott Fisher, Jaron Lanier, Michael Naimark, and Brenda Laurel, continued their research and development on VR-related technologies.

In 1988, the Cyberspace Project at Autodesk was the first to implement VR on a low-cost personal computer. The project leader Eric Gullichsen left in 1990 to found Sense8 Corporation and develop the WorldToolKit virtual reality SDK, which offered the first real-time graphics with texture mapping on a PC and was widely used throughout industry and academia.

1990–2000

The 1990s saw the first widespread commercial releases of consumer headsets. In 1992, for instance, Computer Gaming World predicted "affordable VR by 1994".

In 1991, Sega announced the Sega VR headset for the Mega Drive home console. It used LCD screens in the visor, stereo headphones, and inertial sensors that allowed the system to track and react to the movements of the user's head. In the same year, Virtuality launched and went on to become the first mass-produced, networked, multiplayer VR entertainment system that was released in many countries, including a dedicated VR arcade at Embarcadero Center. Costing up to $73,000 per multi-pod Virtuality system, they featured headsets and exoskeleton gloves that gave one of the first "immersive" VR experiences.

A CAVE system at IDL's Center for Advanced Energy Studies in 2010

That same year, Carolina Cruz-Neira, Daniel J. Sandin and Thomas A. DeFanti from the Electronic Visualization Laboratory created the first cubic immersive room, the Cave automatic virtual environment (CAVE). Developed as Cruz-Neira's PhD thesis, it involved a multi-projected environment, similar to the holodeck, allowing people to see their own bodies in relation to others in the room. Antonio Medina, an MIT graduate and NASA scientist, designed a virtual reality system to "drive" Mars rovers from Earth in apparent real time despite the substantial delay of Mars-Earth-Mars signals.

Virtual Fixtures immersive AR system developed in 1992. Picture features Dr. Louis Rosenberg interacting freely in 3D with overlaid virtual objects called 'fixtures'

In 1992, Nicole Stenger created Angels, the first real-time interactive immersive movie where the interaction was facilitated with a dataglove and high-resolution goggles. That same year, Louis Rosenberg created the virtual fixtures system at the U.S. Air Force's Armstrong Labs using a full upper-body exoskeleton, enabling a physically realistic mixed reality in 3D. The system enabled the overlay of physically real 3D virtual objects registered with a user's direct view of the real world, producing the first true augmented reality experience enabling sight, sound, and touch.

By July 1994, Sega had released the VR-1 motion simulator ride attraction in Joypolis indoor theme parks, as well as the Dennou Senki Net Merc arcade game. Both used an advanced head-mounted display dubbed the "Mega Visor Display" developed in conjunction with Virtuality; it was able to track head movement in a 360-degree stereoscopic 3D environment, and in its Net Merc incarnation was powered by the Sega Model 1 arcade system board. Apple released QuickTime VR, which, despite using the term "VR", was unable to represent virtual reality, and instead displayed 360-degree interactive panoramas.

Nintendo's Virtual Boy console was released in 1995. A group in Seattle created public demonstrations of a "CAVE-like" 270 degree immersive projection room called the Virtual Environment Theater, produced by entrepreneurs Chet Dagit and Bob Jacobson. Forte released the VFX1, a PC-powered virtual reality headset that same year.

In 1999, entrepreneur Philip Rosedale formed Linden Lab with an initial focus on the development of VR hardware. In its earliest form, the company struggled to produce a commercial version of "The Rig", which was realized in prototype form as a clunky steel contraption with several computer monitors that users could wear on their shoulders. The concept was later adapted into the personal computer-based, 3D virtual world program Second Life.

21st century

The 2000s were a period of relative public and investment indifference to commercially available VR technologies.

In 2001, SAS Cube (SAS3) became the first PC-based cubic room, developed by Z-A Production (Maurice Benayoun, David Nahon), Barco, and Clarté. It was installed in Laval, France. The SAS library gave birth to Virtools VRPack. In 2007, Google introduced Street View, a service that shows panoramic views of an increasing number of worldwide positions such as roads, indoor buildings and rural areas. It also features a stereoscopic 3D mode, introduced in 2010.

2010–present

An inside view of the Oculus Rift Crescent Bay prototype headset

In 2010, Palmer Luckey designed the first prototype of the Oculus Rift. This prototype, built on a shell of another virtual reality headset, was only capable of rotational tracking. However, it boasted a 90-degree field of vision that was previously unseen in the consumer market at the time. Distortion issues arising from the lens used to create the field of vision were corrected for by software written by John Carmack for a version of Doom 3. This initial design would later serve as a basis from which the later designs came. In 2012, the Rift was presented for the first time at the E3 video game trade show by Carmack. In 2014, Facebook purchased Oculus VR for what at the time was stated as $2 billion, though it was later revealed that the more accurate figure was $3 billion. This purchase occurred after the first development kits ordered through Oculus' 2012 Kickstarter had shipped in 2013 but before the shipping of their second development kits in 2014. ZeniMax, Carmack's former employer, sued Oculus and Facebook for taking company secrets to Facebook; the verdict was in favour of ZeniMax, and the case was later settled out of court.

HTC Vive headsets worn at Mobile World Congress 2018

In 2013, Valve discovered and freely shared the breakthrough of low-persistence displays which make lag-free and smear-free display of VR content possible. This was adopted by Oculus and was used in all their future headsets. In early 2014, Valve showed off their SteamSight prototype, the precursor to both consumer headsets released in 2016. It shared major features with the consumer headsets including separate 1K displays per eye, low persistence, positional tracking over a large area, and fresnel lenses. HTC and Valve announced the virtual reality headset HTC Vive and controllers in 2015. The set included tracking technology called Lighthouse, which utilized wall-mounted "base stations" for positional tracking using infrared light.

The Project Morpheus (PlayStation VR) headset worn at Gamescom 2015

In 2014, Sony announced Project Morpheus (its code name for the PlayStation VR), a virtual reality headset for the PlayStation 4 video game console. In 2015, Google announced Cardboard, a do-it-yourself stereoscopic viewer: the user places their smartphone in the cardboard holder, which they wear on their head. Michael Naimark was appointed Google's first-ever 'resident artist' in their new VR division. The Kickstarter campaign for Gloveone, a pair of gloves providing motion tracking and haptic feedback, was successfully funded, with over $150,000 in contributions. Also in 2015, Razer unveiled its open source project OSVR.

Smartphone-based budget headset Samsung Gear VR in dismantled state

By 2016, there were at least 230 companies developing VR-related products. Amazon, Apple, Facebook, Google, Microsoft, Sony and Samsung all had dedicated AR and VR groups. Dynamic binaural audio was common to most headsets released that year. However, haptic interfaces were not well developed, and most hardware packages incorporated button-operated handsets for touch-based interactivity. Visually, displays were still of low enough resolution and frame rate that images were identifiable as virtual.

In 2016, HTC shipped its first units of the HTC Vive SteamVR headset. This marked the first major commercial release of sensor-based tracking, allowing for free movement of users within a defined space. A patent filed by Sony in 2017 showed they were developing a similar location tracking technology to the Vive for PlayStation VR, with the potential for the development of a wireless headset.

In 2019, Oculus released the Oculus Rift S and a standalone headset, the Oculus Quest. These headsets utilized inside-out tracking, in contrast to the external outside-in tracking seen in previous generations of headsets.

Later in 2019, Valve released the Valve Index. Notable features include a 130° field of view, off-ear headphones for immersion and comfort, open-handed controllers which allow for individual finger tracking, front facing cameras, and a front expansion slot meant for extensibility.

In 2020, Oculus released the Oculus Quest 2. Some new features include a sharper screen, reduced price, and increased performance. Facebook now requires users to log in with a Facebook account in order to use the new headset. In 2021 the Oculus Quest 2 accounted for 80% of all VR headsets sold.

Robinson R22 Virtual Reality Training Device developed by VRM Switzerland

In 2021, EASA approved the first Virtual Reality (VR) based Flight Simulation Training Device. The device, for rotorcraft pilots, enhances safety by opening up the possibility of practicing risky maneuvers in a virtual environment. This addresses a key risk area in rotorcraft operations, where statistics show that around 20% of accidents occur during training flights.

Future forecast

Since 2017, major strides in the integration of Virtual Reality and Cognitive Behavioral Therapy have been made, focusing on how to tailor the experience to suit each individual patient.

With the COVID-19 restrictions in 2020, VR experienced an enormous rise in interest. According to Grand View Research, the global VR market will grow to 62.1 billion dollars by 2027.

Now in the post-pandemic era, augmented reality and virtual technologies have created a new avenue that may influence the future of occupational safety training and rehabilitation.

Technology

Software

The Virtual Reality Modelling Language (VRML), first introduced in 1994, was intended for the development of "virtual worlds" without dependency on headsets. The Web3D consortium was subsequently founded in 1997 for the development of industry standards for web-based 3D graphics. The consortium subsequently developed X3D from the VRML framework as an archival, open-source standard for web-based distribution of VR content. WebVR is an experimental JavaScript application programming interface (API) that provides support for various virtual reality devices, such as the HTC Vive, Oculus Rift, Google Cardboard or OSVR, in a web browser.

Hardware

Paramount for the sensation of immersion in virtual reality are a high frame rate (at least 95 fps) and low latency.

Modern virtual reality headset displays are based on technology developed for smartphones, including gyroscopes and motion sensors for tracking head, body, and hand positions; small HD screens for stereoscopic displays; and small, lightweight and fast computer processors. These components led to relative affordability for independent VR developers, and led to the 2012 Oculus Rift Kickstarter offering the first independently developed VR headset.

Independent production of VR images and video has increased alongside the development of affordable omnidirectional cameras, also known as 360-degree cameras or VR cameras, that have the ability to record 360-degree interactive photography, although at relatively low resolutions or in highly compressed formats for online streaming of 360-degree video. In contrast, photogrammetry is increasingly used to combine several high-resolution photographs for the creation of detailed 3D objects and environments in VR applications.

To create a feeling of immersion, special output devices are needed to display virtual worlds. Well-known formats include head-mounted displays or the CAVE. In order to convey a spatial impression, two images are generated and displayed from different perspectives (stereo projection). There are different technologies available to bring the respective image to the right eye. A distinction is made between active (e.g. shutter glasses) and passive technologies (e.g. polarizing filters or Infitec).

In order to improve the feeling of immersion, wearable multi-string devices can provide haptic feedback for complex geometries in virtual reality. These strings offer fine control of each finger joint to simulate the haptics involved in touching these virtual geometries.

Special input devices are required for interaction with the virtual world. These include the 3D mouse, the wired glove, motion controllers, and optical tracking sensors. Controllers typically use optical tracking systems (primarily infrared cameras) for location and navigation, so that the user can move freely without wiring. Some input devices provide the user with force feedback to the hands or other parts of the body, so that the user can orient themselves in the three-dimensional world through haptics and sensor technology and carry out realistic simulations. This gives the viewer a sense of direction in the artificial landscape. Additional haptic feedback can be obtained from omnidirectional treadmills (with which walking in virtual space is controlled by real walking movements) and from vibration gloves and suits.

Virtual reality cameras can be used to create VR photography using 360-degree panorama videos. 360-degree camera shots can be mixed with virtual elements to merge reality and fiction through special effects. VR cameras are available in various formats, with varying numbers of lenses installed in the camera.

Visual immersion experience

Display resolution

Minimum angle of resolution (MAR) refers to the smallest angular separation at which a viewer can distinguish two display pixels as separate; below it, neighbouring pixels merge into one. It is often measured in arc-seconds, and the physical pixel spacing corresponding to a given MAR depends on the viewing distance. For the general public, the MAR is about 30-65 arc-seconds, which defines the spatial resolution when combined with the viewing distance. Given viewing distances of 1 m and 2 m respectively, regular viewers will not be able to perceive two pixels as separate if they are less than 0.29 mm apart at 1 m and less than 0.58 mm apart at 2 m.
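
The 0.29 mm and 0.58 mm figures follow from basic trigonometry: the physical pixel pitch corresponding to a given angular resolution is the viewing distance multiplied by the tangent of the angle. A short check, assuming a MAR of 60 arc-seconds (within the 30-65 arc-second range quoted above):

    import math

    # Physical pixel separation corresponding to a given angular resolution (MAR).
    mar_arcseconds = 60                        # assumed MAR, within the 30-65 arc-second range
    mar_radians = math.radians(mar_arcseconds / 3600)

    for distance_m in (1.0, 2.0):
        pitch_mm = math.tan(mar_radians) * distance_m * 1000
        print(f"at {distance_m} m: pixels closer than {pitch_mm:.2f} mm look merged")
    # at 1.0 m: ~0.29 mm; at 2.0 m: ~0.58 mm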

Image latency and display refresh frequency

Most small-size displays have a refresh rate of 60 Hz, which adds about 15 ms of additional latency. This is reduced to less than 7 ms if the refresh rate is increased to 120 Hz, 240 Hz, or higher. Participants generally feel that the experience is more immersive with higher refresh rates as a result. However, higher refresh rates require a more powerful graphics processing unit.
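
One simple contributor to this latency is the frame period itself, the time between successive display refreshes; the total motion-to-photon latency of a real system also includes tracking, rendering, and transmission delays, so the figures below are only the display's share and depend on how latency is measured.

    # Frame period (one refresh interval) for common display refresh rates.
    for refresh_hz in (60, 120, 240):
        frame_period_ms = 1000 / refresh_hz
        print(f"{refresh_hz} Hz -> frame period ≈ {frame_period_ms:.1f} ms")
    # 60 Hz ≈ 16.7 ms, 120 Hz ≈ 8.3 ms, 240 Hz ≈ 4.2 ms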

Diagram of a participant's theoretical field of view (yellow area)

Relationship between display and field of view

We need to consider our field of view (FOV) in addition to image quality. Each eye has a horizontal FOV of about 120 degrees and a vertical FOV of some 135 degrees. Stereoscopic vision is limited to about 120 degrees, where the right and left visual fields overlap. Generally speaking, we have a FOV of 200 degrees x 135 degrees with two eyes. However, much of this is peripheral vision, which varies from one person to another, so we conservatively take an average of 160 degrees. Therefore, if we keep our eyes stationary, a regular participant will have a stereoscopic FOV of at least 160 degrees x 135 degrees, or about 1/6 of the full 360-degree FOV. We can quantify the abstract concept of immersion with an immersive index, defined as the ratio of the display's viewing area to 1/6 of the 360-degree FOV.

In theory, the immersive index is therefore the display's viewing area divided by 1/6 of the 360-degree field of view, i.e. immersive index = (display horizontal FOV × display vertical FOV) / (160° × 135°).

In practice, considering that the curved display cannot be made into a spherical shape, it is approximated by a cylinder instead.
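
Using the definition above, the immersive index can be estimated for any headset from its horizontal and vertical fields of view. The sketch below assumes, purely for illustration, a headset with a 110° x 110° field of view; it ignores the cylindrical approximation mentioned above.

    # Immersive index as defined above: display FOV area relative to 160° x 135°.
    reference_fov = 160 * 135            # ~1/6 of the 360-degree field of view (deg^2)

    def immersive_index(h_fov_deg, v_fov_deg):
        return (h_fov_deg * v_fov_deg) / reference_fov

    print(f"immersive index ≈ {immersive_index(110, 110):.2f}")   # ≈ 0.56 for an assumed 110° x 110° headset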

Applications

Apollo 11 astronaut Buzz Aldrin previewing the Destination: Mars VR experience at the Kennedy Space Center Visitor Complex in 2016

Virtual reality is most commonly used in entertainment applications such as video games, 3D cinema, dark rides and social virtual worlds. Consumer virtual reality headsets were first released by video game companies in the early-mid 1990s. Beginning in the 2010s, next-generation commercial tethered headsets were released by Oculus (Rift), HTC (Vive) and Sony (PlayStation VR), setting off a new wave of application development. 3D cinema has been used for sporting events, pornography, fine art, music videos and short films. Since 2015, roller coasters and theme parks have incorporated virtual reality to match visual effects with haptic feedback.

In social sciences and psychology, virtual reality offers a cost-effective tool to study and replicate interactions in a controlled environment. It can also be used as a form of therapeutic intervention; for instance, virtual reality exposure therapy (VRET) is a form of exposure therapy used to treat anxiety disorders such as post-traumatic stress disorder (PTSD) and phobias.

Virtual reality programs are being used in rehabilitation processes with elderly individuals who have been diagnosed with Alzheimer's disease. This gives these elderly patients the opportunity to simulate real experiences that they would not otherwise be able to have due to their current state. Seventeen recent randomized controlled trials have shown that virtual reality applications are effective in treating cognitive deficits associated with neurological diagnoses. Loss of mobility in elderly patients can lead to a sense of loneliness and depression; virtual reality can help make aging in place possible by offering a lifeline to an outside world that these patients cannot easily navigate. Virtual reality also allows exposure therapy to take place in a safe environment.

In medicine, simulated VR surgical environments were first developed in the 1990s. Under the supervision of experts, VR can provide effective and repeatable training at a low cost, allowing trainees to recognize and amend errors as they occur.

Virtual reality has been used in physical rehabilitation since the 2000s. Despite numerous studies conducted, good quality evidence of its efficacy compared to other rehabilitation methods without sophisticated and expensive equipment is lacking for the treatment of Parkinson's disease. A 2018 review on the effectiveness of mirror therapy by virtual reality and robotics for any type of pathology concluded in a similar way. Another study was conducted that showed the potential for VR to promote mimicry and revealed the difference between neurotypical and autism spectrum disorder individuals in their response to a two-dimensional avatar.

Immersive virtual reality technology with myoelectric and motion-tracking control may represent a possible therapy option for treatment-resistant phantom limb pain. Pain scale measurements were taken into account, and an interactive 3-D kitchen environment was developed based on the principles of mirror therapy to allow for control of virtual hands while wearing a motion-tracked VR headset. A systematic search in PubMed and Embase was performed, and the results were pooled in two meta-analyses. The meta-analyses showed a significant result in favor of virtual reality therapy for balance.

In the fast-paced and globalised business world, meetings in VR are used to create an environment in which interactions with other people (e.g. colleagues, customers, partners) can feel more natural than a phone call or video chat. In customisable meeting rooms, all parties can join using a VR headset and interact as if they are in the same physical room. Presentations, videos or 3D models (of e.g. products or prototypes) can be uploaded and interacted with. Compared to traditional text-based computer-mediated communication, avatar-based interactions in a 3D virtual environment lead to higher levels of consensus, satisfaction, and cohesion among group members.

U.S. Navy medic demonstrating a VR parachute simulator at the Naval Survival Training Institute in 2006

VR can simulate real workspaces for workplace occupational safety and health purposes, educational purposes, and training purposes. It can be used to provide learners with a virtual environment where they can develop their skills without the real-world consequences of failing. It has been used and studied in primary education, anatomy teaching, military, astronaut training, flight simulators, miner training, medical education, architectural design, driver training and bridge inspection. Immersive VR engineering systems enable engineers to see virtual prototypes prior to the availability of any physical prototypes. Supplementing training with virtual training environments has been claimed to offer avenues of realism in military and healthcare training while minimizing cost. It also has been claimed to reduce military training costs by minimizing the amounts of ammunition expended during training periods. VR can also be used for the healthcare training and education for medical practitioners.

In the engineering field, VR has proved very useful for both engineering educators and students. Once prohibitively expensive for educational use, it has become much more accessible due to lowered overall costs and has proven to be a very useful tool in educating future engineers. The most significant element lies in the ability of students to interact with 3-D models that accurately respond based on real-world possibilities. This added educational tool provides many students with the immersion needed to grasp complex topics and apply them. As noted, future architects and engineers benefit greatly from being able to form an understanding of spatial relationships and to provide solutions based on real-world future applications.

The first fine art virtual world was created in the 1970s. As the technology developed, more artistic programs were produced throughout the 1990s, including feature films. When commercially available technology became more widespread, VR festivals began to emerge in the mid-2010s. The first uses of VR in museum settings began in the 1990s, seeing a significant increase in the mid-2010s. Additionally, museums have begun making some of their content virtual reality accessible.

Virtual reality's growing market presents an opportunity and an alternative channel for digital marketing. It is also seen as a new platform for e-commerce, particularly in the bid to challenge traditional "brick and mortar" retailers. However, a 2018 study revealed that the majority of goods are still purchased in physical stores.

In the case of education, virtual reality has been shown to be capable of promoting higher-order thinking, promoting the interest and commitment of students, aiding the acquisition of knowledge, and fostering mental habits and understanding that are generally useful within an academic context.

A case has also been made for including virtual reality technology in public libraries. This would give library users access to cutting-edge technology and unique educational experiences, such as virtual, interactive copies of rare texts and artifacts, or tours of famous landmarks and archeological digs (as with the Virtual Ganjali Khan Project).

Starting in the early 2020s, virtual reality has also been discussed as a technological setting that may support people's grieving process, based on digital recreations of deceased individuals. In 2021, this practice received substantial media attention following a South Korean TV documentary, which invited a grieving mother to interact with a virtual replica of her deceased daughter. Subsequently, scientists have summarized several potential implications of such endeavours, including the potential to facilitate adaptive mourning, but also many ethical challenges.

Growing interest in the metaverse has resulted in organizational efforts to incorporate the many diverse applications of virtual reality into ecosystems like VIVERSE, reportedly offering connectivity between platforms for a wide range of uses.

Concerts

On October 24, 2021, Billie Eilish performed on Oculus Venues. Pop group Imagine Dragons followed with a performance on June 15, 2022.

Concerns and challenges

Health and safety

There are many health and safety considerations of virtual reality. A number of unwanted symptoms have been caused by prolonged use of virtual reality, and these may have slowed proliferation of the technology. Most virtual reality systems come with consumer warnings, including: seizures; developmental issues in children; trip-and-fall and collision warnings; discomfort; repetitive stress injury; and interference with medical devices. Some users may experience twitches, seizures or blackouts while using VR headsets, even if they do not have a history of epilepsy and have never had blackouts or seizures before. One in 4,000 people, or 0.025%, may experience these symptoms. Motion sickness, eyestrain, headaches, and discomfort are the most prevalent short-term adverse effects. In addition, because of the heavy weight of virtual reality headsets, discomfort may be more likely among children, who are therefore advised against using VR headsets. Other problems may occur in physical interactions with one's environment: while wearing VR headsets, people quickly lose awareness of their real-world surroundings and may injure themselves by tripping over or colliding with real-world objects.

VR headsets may regularly cause eye fatigue, as do all screen-based technologies, because people tend to blink less when watching screens, causing their eyes to become more dried out. There have been some concerns about VR headsets contributing to myopia, but although VR headsets sit close to the eyes, they may not necessarily contribute to nearsightedness if the focal length of the displayed image is sufficiently far away.

Virtual reality sickness (also known as cybersickness) occurs when a person's exposure to a virtual environment causes symptoms that are similar to motion sickness symptoms. Women are significantly more affected than men by headset-induced symptoms, at rates of around 77% and 33% respectively. The most common symptoms are general discomfort, headache, stomach awareness, nausea, vomiting, pallor, sweating, fatigue, drowsiness, disorientation, and apathy. For example, Nintendo's Virtual Boy received much criticism for its negative physical effects, including "dizziness, nausea, and headaches". These motion sickness symptoms are caused by a disconnect between what is being seen and what the rest of the body perceives. When the vestibular system, the body's internal balancing system, does not experience the motion that it expects from visual input through the eyes, the user may experience VR sickness. This can also happen if the VR system does not have a high enough frame rate, or if there is a lag between the body's movement and the onscreen visual reaction to it. Because approximately 25–40% of people experience some kind of VR sickness when using VR machines, companies are actively looking for ways to reduce VR sickness.
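The role of frame rate and motion-to-photon latency in this mismatch can be illustrated with a short sketch. The following Python snippet is purely illustrative and is not taken from any VR runtime or cited source; the 90 Hz refresh rate and 20 ms latency figures are commonly quoted comfort targets and are used here only as assumptions.

    # Illustrative sketch only; the thresholds are assumed comfort targets, not standards.
    def vr_comfort_issues(frame_rate_hz, motion_to_photon_ms,
                          min_frame_rate_hz=90.0, max_latency_ms=20.0):
        """List reasons a VR session might provoke cybersickness symptoms."""
        issues = []
        if frame_rate_hz < min_frame_rate_hz:
            issues.append(f"frame rate {frame_rate_hz:.0f} Hz is below the "
                          f"{min_frame_rate_hz:.0f} Hz target")
        if motion_to_photon_ms > max_latency_ms:
            issues.append(f"motion-to-photon latency {motion_to_photon_ms:.0f} ms "
                          f"exceeds the {max_latency_ms:.0f} ms target")
        return issues

    # A hypothetical system rendering at 72 Hz with 28 ms of latency flags both issues.
    print(vr_comfort_issues(72, 28))

The point of the sketch is simply that both a low update rate and a long delay between head movement and the corresponding image update independently widen the gap between visual and vestibular signals.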

In January 2022, The Wall Street Journal found that VR usage could lead to physical injuries, including leg, hand, arm and shoulder injuries. VR usage has also been tied to incidents resulting in neck injuries and death.

Children in virtual reality

The relationship between virtual reality and its underage users is controversial and largely unexplored. Meanwhile, children are becoming increasingly aware of VR: the share of children in the USA who had never heard of it dropped by roughly half between Autumn 2016 (40%) and Spring 2017 (19%).

Valeriy Kondruk, CEO of VR travel platform Ascape, says the app downloads in March 2020 increased by 60% compared to December 2019 and doubled in comparison with January 2020. According to Kondruk, normally, the busiest month for VR companies is December, which is associated with winter holidays and people spending more time at home.

In early 2016, virtual reality headsets became commercially available with offers from, for example, Facebook (Oculus), HTC and Valve (Vive), Microsoft (HoloLens), and Sony (Morpheus). At the time, and to this day, these brands have had different age instructions for users (e.g. 12+ or 14+), indicating a completely self-regulatory policy.

Studies show that young children may respond cognitively and behaviorally to immersive VR in ways that differ from adults. VR places users directly into the media content, potentially making the experience very vivid and real for children. For example, children aged 6–18 reported higher levels of presence and "realness" of a virtual environment than adults aged 19–65.

Studies of VR consumer behavior and of its effects on children, as well as a code of ethical conduct for underage users, are especially needed, given the availability of VR porn and violent content. Related research on violence in video games suggests that exposure to media violence may affect attitudes, behavior, and even self-concept. Self-concept is a key indicator of core attitudes and coping abilities, particularly in adolescents. Early studies comparing observing with participating in violent VR games suggest that physiological arousal and aggressive thoughts, but not hostile feelings, are higher for participants than for observers of the virtual reality game.

For children, experiencing VR may further involve holding the idea of the virtual world in mind while simultaneously experiencing the physical world. Excessive use of immersive technology with very salient sensory features may compromise children's ability to maintain the rules of the physical world, particularly when wearing a VR headset that blocks out the location of objects in the physical world. Immersive VR can provide users with multisensory experiences that replicate reality or create scenarios that are impossible or dangerous in the physical world. Observations of 10 children experiencing VR for the first time suggested that children aged 8–12 were more confident exploring VR content when it was set in a familiar situation; for example, the children enjoyed playing in the kitchen context of Job Simulator, and enjoyed breaking rules by engaging in activities they are not allowed to do in reality, such as setting things on fire.

Privacy

The persistent tracking required by all VR systems makes the technology particularly useful for, and vulnerable to, mass surveillance. The expansion of VR will increase the potential and reduce the costs for information gathering of personal actions, movements and responses. Data from eye tracking sensors, which are projected to become a standard feature in virtual reality headsets, may indirectly reveal information about a user's ethnicity, personality traits, fears, emotions, interests, skills, and physical and mental health condition.

Conceptual and philosophical concerns

In addition, there are conceptual and philosophical considerations and implications associated with the use of virtual reality. What the phrase "virtual reality" means or refers to can be ambiguous. Mychilo S. Cline argued in 2005 that through virtual reality, techniques will be developed to influence human behavior, interpersonal communication, and cognition.

Virtual reality in fiction

Behavioral modernity

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Beh...