
Sunday, August 7, 2022

Visual perception

From Wikipedia, the free encyclopedia

Visual perception is the ability to interpret the surrounding environment through photopic vision (daytime vision), color vision, scotopic vision (night vision), and mesopic vision (twilight vision), using light in the visible spectrum reflected by objects in the environment. This is different from visual acuity, which refers to how clearly a person sees (for example "20/20 vision"). A person can have problems with visual perceptual processing even if they have 20/20 vision.

The resulting perception is also known as vision, sight, or eyesight (adjectives visual, optical, and ocular, respectively). The various physiological components involved in vision are referred to collectively as the visual system, and are the focus of much research in linguistics, psychology, cognitive science, neuroscience, and molecular biology, collectively referred to as vision science.

Visual system

In humans and a number of other mammals, light enters the eye through the cornea and is focused by the lens onto the retina, a light-sensitive membrane at the back of the eye. The retina serves as a transducer for the conversion of light into neuronal signals. This transduction is achieved by specialized photoreceptive cells of the retina, known as rods and cones, which detect the photons of light and respond by producing neural impulses. These signals are transmitted by the optic nerve from the retina to central ganglia in the brain, such as the lateral geniculate nucleus, which relays the information to the visual cortex. Signals from the retina also travel directly to the superior colliculus.

The lateral geniculate nucleus sends signals to the primary visual cortex, also called the striate cortex. The extrastriate cortex, also called the visual association cortex, is a set of cortical structures that receive information from the striate cortex as well as from each other. Recent descriptions of the visual association cortex describe a division into two functional pathways, a ventral and a dorsal pathway. This conjecture is known as the two-streams hypothesis.

The human visual system is generally believed to be sensitive to visible light in the range of wavelengths between 370 and 730 nanometers (0.00000037 to 0.00000073 meters) of the electromagnetic spectrum. However, some research suggests that humans can perceive light in wavelengths down to 340 nanometers (UV-A), especially the young. Under optimal conditions these limits of human perception can extend to 310 nm (UV) to 1100 nm (NIR).

Study

The major problem in visual perception is that what people see is not simply a translation of retinal stimuli (i.e., the image on the retina). Thus people interested in perception have long struggled to explain what visual processing does to create what is actually seen.

Early studies

The visual dorsal stream (green) and ventral stream (purple) are shown. Much of the human cerebral cortex is involved in vision.

There were two major ancient Greek schools of thought, each providing a primitive explanation of how vision works.

The first was the "emission theory" of vision which maintained that vision occurs when rays emanate from the eyes and are intercepted by visual objects. If an object was seen directly it was by 'means of rays' coming out of the eyes and again falling on the object. A refracted image was, however, seen by 'means of rays' as well, which came out of the eyes, traversed through the air, and after refraction, fell on the visible object which was sighted as the result of the movement of the rays from the eye. This theory was championed by scholars who were followers of Euclid's Optics and Ptolemy's Optics.

The second school advocated the so-called 'intromission' approach which sees vision as coming from something entering the eyes representative of the object. With its main propagator Aristotle (De Sensu), and his followers, this theory seems to have some contact with modern theories of what vision really is, but it remained only a speculation lacking any experimental foundation. (In eighteenth-century England, Isaac Newton, John Locke, and others, carried the intromission theory of vision forward by insisting that vision involved a process in which rays—composed of actual corporeal matter—emanated from seen objects and entered the seer's mind/sensorium through the eye's aperture.)

Both schools of thought relied upon the principle that "like is only known by like", and thus upon the notion that the eye was composed of some "internal fire" which interacted with the "external fire" of visible light and made vision possible. Plato makes this assertion in his dialogue Timaeus (45b and 46b), as does Empedocles (as reported by Aristotle in his De Sensu, DK frag. B17).

Leonardo da Vinci: The eye has a central line and everything that reaches the eye through this central line can be seen distinctly.

Alhazen (965 – c. 1040) carried out many investigations and experiments on visual perception, extended the work of Ptolemy on binocular vision, and commented on the anatomical works of Galen. He was the first person to explain that vision occurs when light bounces on an object and then is directed to one's eyes.

Leonardo da Vinci (1452–1519) is believed to be the first to recognize the special optical qualities of the eye. He wrote "The function of the human eye ... was described by a large number of authors in a certain way. But I found it to be completely different." His main experimental finding was that there is only distinct and clear vision at the line of sight—the optical line that ends at the fovea. Although he did not use these words literally, he is in effect the father of the modern distinction between foveal and peripheral vision.

Isaac Newton (1642–1726/27) was the first to discover through experimentation, by isolating individual colors of the spectrum of light passing through a prism, that the visually perceived color of objects appeared due to the character of light the objects reflected, and that these divided colors could not be changed into any other color, which was contrary to scientific expectation of the day.

Unconscious inference

Hermann von Helmholtz is often credited with the first modern study of visual perception. Helmholtz examined the human eye and concluded that it was incapable of producing a high quality image. Insufficient information seemed to make vision impossible. He therefore concluded that vision could only be the result of some form of "unconscious inference", coining that term in 1867. He proposed the brain was making assumptions and conclusions from incomplete data, based on previous experiences.

Inference requires prior experience of the world.

Examples of well-known assumptions, based on visual experience, are:

  • light comes from above
  • objects are normally not viewed from below
  • faces are seen (and recognized) upright.
  • closer objects can block the view of more distant objects, but not vice versa
  • figures (i.e., foreground objects) tend to have convex borders

The study of visual illusions (cases when the inference process goes wrong) has yielded much insight into what sort of assumptions the visual system makes.

Another type of the unconscious inference hypothesis (based on probabilities) has recently been revived in so-called Bayesian studies of visual perception. Proponents of this approach consider that the visual system performs some form of Bayesian inference to derive a perception from sensory data. However, it is not clear how proponents of this view derive, in principle, the relevant probabilities required by the Bayesian equation. Models based on this idea have been used to describe various visual perceptual functions, such as the perception of motion, the perception of depth, and figure-ground perception. The "wholly empirical theory of perception" is a related and newer approach that rationalizes visual perception without explicitly invoking Bayesian formalisms.
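
As a rough illustration of the Bayesian idea (not a reproduction of any specific model from the literature), the sketch below combines a Gaussian prior over a scene property—here, the depth of a surface—with a Gaussian likelihood from a noisy sensory measurement. All numbers are made up for the example.

```python
# Hypothetical example: infer the depth of a surface (in meters) from one
# noisy sensory measurement, combined with a prior learned from experience.
prior_mean, prior_var = 2.0, 1.0    # prior belief: surfaces tend to be ~2 m away
measurement, meas_var = 3.0, 0.5    # noisy sensory estimate of depth

# For a Gaussian prior and Gaussian likelihood, the posterior is Gaussian,
# with a precision-weighted mean (standard conjugate-Gaussian result).
posterior_var = 1.0 / (1.0 / prior_var + 1.0 / meas_var)
posterior_mean = posterior_var * (prior_mean / prior_var + measurement / meas_var)

print(f"posterior depth estimate: {posterior_mean:.2f} m (variance {posterior_var:.2f})")
# The percept (posterior mean ~2.67 m) lies between prior and measurement,
# pulled toward whichever source is more reliable (lower variance).
```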

Gestalt theory

Gestalt psychologists working primarily in the 1930s and 1940s raised many of the research questions that are studied by vision scientists today.

The Gestalt Laws of Organization have guided the study of how people perceive visual components as organized patterns or wholes, instead of many different parts. "Gestalt" is a German word that partially translates to "configuration or pattern" along with "whole or emergent structure". According to this theory, there are eight main factors that determine how the visual system automatically groups elements into patterns: Proximity, Similarity, Closure, Symmetry, Common Fate (i.e. common motion), Continuity as well as Good Gestalt (pattern that is regular, simple, and orderly) and Past Experience.

Analysis of eye movement

Eye movement first 2 seconds (Yarbus, 1967)

During the 1960s, technical development permitted the continuous registration of eye movement during reading, in picture viewing, and later, in visual problem solving, and when headset-cameras became available, also during driving.

The picture to the right shows what may happen during the first two seconds of visual inspection. While the background is out of focus, representing the peripheral vision, the first eye movement goes to the boots of the man (just because they are very near the starting fixation and have a reasonable contrast). Eye movements serve the function of attentional selection, i.e., to select a fraction of all visual inputs for deeper processing by the brain.

The following fixations jump from face to face. They might even permit comparisons between faces.

It may be concluded that the icon face is a very attractive search icon within the peripheral field of vision. The foveal vision adds detailed information to the peripheral first impression.

It can also be noted that there are different types of eye movements: fixational eye movements (microsaccades, ocular drift, and tremor), vergence movements, saccadic movements, and pursuit movements. Fixations are comparatively static points where the eye rests. However, the eye is never completely still; gaze position drifts. These drifts are in turn corrected by microsaccades, very small fixational eye movements. Vergence movements involve the cooperation of both eyes to allow an image to fall on the same area of both retinas, resulting in a single focused image. Saccadic movements are rapid jumps of the eye from one position to another, used to quickly scan a scene or image. Lastly, pursuit movements are smooth eye movements used to follow objects in motion.

Face and object recognition

There is considerable evidence that face and object recognition are accomplished by distinct systems. For example, prosopagnosic patients show deficits in face, but not object processing, while object agnosic patients (most notably, patient C.K.) show deficits in object processing with spared face processing. Behaviorally, it has been shown that faces, but not objects, are subject to inversion effects, leading to the claim that faces are "special". Further, face and object processing recruit distinct neural systems. Notably, some have argued that the apparent specialization of the human brain for face processing does not reflect true domain specificity, but rather a more general process of expert-level discrimination within a given class of stimulus, though this latter claim is the subject of substantial debate. Using fMRI and electrophysiology Doris Tsao and colleagues described brain regions and a mechanism for face recognition in macaque monkeys.

The inferotemporal (IT) cortex plays a key role in recognizing and differentiating objects. A study at MIT showed that subregions of the IT cortex handle different objects. By selectively shutting off neural activity in many small areas of the cortex, the experimenters made the animal alternately unable to distinguish between particular pairs of objects. This shows that the IT cortex is divided into regions that respond to different, particular visual features. In a similar way, certain patches and regions of the cortex are more involved in face recognition than in the recognition of other objects.

Some studies suggest that, rather than the uniform global image, particular features and regions of interest of an object are the key elements when the brain needs to recognize an object in an image. In this way, human vision is vulnerable to small, localized changes to the image, such as disrupting the object's edges, modifying its texture, or any small change in a crucial region of the image.

Studies of people whose sight has been restored after a long blindness reveal that they cannot necessarily recognize objects and faces (as opposed to color, motion, and simple geometric shapes). Some hypothesize that being blind during childhood prevents some part of the visual system necessary for these higher-level tasks from developing properly. The general belief that a critical period lasts until age 5 or 6 was challenged by a 2007 study that found that older patients could improve these abilities with years of exposure.

Cognitive and computational approaches

In the 1970s, David Marr developed a multi-level theory of vision, which analyzed the process of vision at different levels of abstraction. In order to focus on the understanding of specific problems in vision, he identified three levels of analysis: the computational, algorithmic and implementational levels. Many vision scientists, including Tomaso Poggio, have embraced these levels of analysis and employed them to further characterize vision from a computational perspective.

The computational level addresses, at a high level of abstraction, the problems that the visual system must overcome. The algorithmic level attempts to identify the strategy that may be used to solve these problems. Finally, the implementational level attempts to explain how solutions to these problems are realized in neural circuitry.

Marr suggested that it is possible to investigate vision at any of these levels independently. Marr described vision as proceeding from a two-dimensional visual array (on the retina) to a three-dimensional description of the world as output. His stages of vision include:

  • A 2D or primal sketch of the scene, based on feature extraction of fundamental components of the scene, including edges, regions, etc. Note the similarity in concept to a pencil sketch drawn quickly by an artist as an impression (a toy edge-extraction sketch follows this list).
  • A 2½D sketch of the scene, where textures are acknowledged, etc. Note the similarity in concept to the stage in drawing where an artist highlights or shades areas of a scene, to provide depth.
  • A 3D model, where the scene is visualized in a continuous, three-dimensional map.
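
The following is a minimal, illustrative sketch of the kind of feature extraction associated with Marr's primal sketch: a simple gradient-based edge map computed with NumPy. It is not Marr's algorithm, just a toy example of pulling edge-like structure out of a 2D intensity array.

```python
import numpy as np

def primal_sketch_edges(image, threshold=0.25):
    """Toy 'primal sketch': mark pixels where the intensity gradient is strong.

    image: 2D NumPy array of intensities in [0, 1].
    Returns a boolean edge map (True where an edge-like change occurs).
    """
    gy, gx = np.gradient(image.astype(float))   # finite-difference gradients
    magnitude = np.hypot(gx, gy)                # gradient magnitude per pixel
    return magnitude > threshold

# Example: a dark square on a bright background yields edges along its border.
img = np.ones((8, 8))
img[2:6, 2:6] = 0.0
print(primal_sketch_edges(img).astype(int))
```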

Marr's 2½D sketch assumes that a depth map is constructed, and that this map is the basis of 3D shape perception. However, both stereoscopic and pictorial perception, as well as monocular viewing, make clear that the perception of 3D shape precedes, and does not rely on, the perception of the depth of points. It is not clear how a preliminary depth map could, in principle, be constructed, nor how this would address the question of figure-ground organization, or grouping. The role of perceptual organizing constraints, overlooked by Marr, in the production of 3D shape percepts from binocularly viewed 3D objects has been demonstrated empirically for the case of 3D wire objects. For a more detailed discussion, see Pizlo (2008).

A more recent, alternative, framework proposes that vision is composed instead of the following three stages: encoding, selection, and decoding. Encoding is to sample and represent visual inputs (e.g., to represent visual inputs as neural activities in the retina). Selection, or attentional selection, is to select a tiny fraction of input information for further processing, e.g., by shifting gaze to an object or visual location to better process the visual signals at that location. Decoding is to infer or recognize the selected input signals, e.g., to recognize the object at the center of gaze as somebody's face. In this framework, attentional selection starts at the primary visual cortex along the visual pathway, and the attentional constraints impose a dichotomy between the central and peripheral visual fields for visual recognition or decoding.

Transduction

Transduction is the process through which energy from environmental stimuli is converted to neural activity. The retina contains three different cell layers: the photoreceptor layer, the bipolar cell layer, and the ganglion cell layer. The photoreceptor layer, where transduction occurs, is farthest from the lens. It contains photoreceptors with different sensitivities called rods and cones. The cones are responsible for color perception and are of three distinct types, labelled red, green, and blue. Rods are responsible for the perception of objects in low light. Photoreceptors contain within them a special chemical called a photopigment, which is embedded in the membrane of the lamellae; a single human rod contains approximately 10 million of them. The photopigment molecules consist of two parts: an opsin (a protein) and retinal (a lipid). There are three specific photopigments (each with its own wavelength sensitivity) that respond across the spectrum of visible light. When the appropriate wavelengths (those that the specific photopigment is sensitive to) hit the photoreceptor, the photopigment splits into two, which sends a signal to the bipolar cell layer, which in turn sends a signal to the ganglion cells, the axons of which form the optic nerve and transmit the information to the brain. If a particular cone type is missing or abnormal, due to a genetic anomaly, a color vision deficiency, sometimes called color blindness, will occur.

Opponent process

Transduction involves chemical messages sent from the photoreceptors to the bipolar cells to the ganglion cells. Several photoreceptors may send their information to one ganglion cell. There are two types of color-opponent ganglion cells: red/green and yellow/blue. These neurons fire constantly, even when not stimulated. The brain interprets different colors (and, with a lot of information, an image) when the rate of firing of these neurons alters. Red light stimulates the red cone, which in turn stimulates the red/green ganglion cell. Likewise, green light stimulates the green cone, which stimulates the green/red ganglion cell, and blue light stimulates the blue cone, which stimulates the blue/yellow ganglion cell. The rate of firing of a ganglion cell is increased when it is signaled by one cone and decreased (inhibited) when it is signaled by the other cone. The first color in the name of the ganglion cell is the color that excites it and the second is the color that inhibits it; for example, a red cone would excite the red/green ganglion cell and the green cone would inhibit it. This is an opponent process. If the rate of firing of a red/green ganglion cell is increased, the brain knows that the light was red; if the rate is decreased, the brain knows that the color of the light was green.
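
As a schematic illustration of the opponent coding described above (a simplified textbook-style model, not a physiological simulation), the sketch below converts hypothetical L ("red"), M ("green"), and S ("blue") cone responses into red/green and blue/yellow opponent signals around an assumed baseline firing rate. All values are invented for the example.

```python
# Simplified opponent-process sketch: the red/green channel rises above baseline
# for "red" input and falls below it for "green"; similarly for blue/yellow.
# Cone responses and the baseline firing rate are made-up numbers.

def opponent_channels(L, M, S, baseline=50.0):
    red_green = L - M                  # excited by the L ("red") cone, inhibited by M ("green")
    blue_yellow = S - (L + M) / 2.0    # excited by S ("blue"), inhibited by "yellow" (L+M)
    return {
        "red_green_rate": baseline + red_green,
        "blue_yellow_rate": baseline + blue_yellow,
    }

print(opponent_channels(L=0.9, M=0.2, S=0.1))   # reddish light: red/green rate rises above baseline
print(opponent_channels(L=0.2, M=0.9, S=0.1))   # greenish light: red/green rate falls below baseline
```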

Artificial visual perception

Theories and observations of visual perception have been the main source of inspiration for computer vision (also called machine vision, or computational vision). Special hardware structures and software algorithms provide machines with the capability to interpret the images coming from a camera or a sensor.

For instance, the 2022 Toyota 86 uses the Subaru EyeSight system for driver-assist technology.

Overclocking

From Wikipedia, the free encyclopedia

Overclocking BIOS setup on an ABIT NF7-S motherboard with an AMD Athlon XP processor. Front side bus (FSB) frequency (external clock) has been increased from 133 MHz to 148 MHz, and the CPU clock multiplier factor has been changed from 13.5 to 16.5. This corresponds to an overclocking of the FSB by 11.3% and of the CPU by 36%.

In computing, overclocking is the practice of increasing the clock rate of a computer to exceed that certified by the manufacturer. Commonly, operating voltage is also increased to maintain a component's operational stability at accelerated speeds. Semiconductor devices operated at higher frequencies and voltages increase power consumption and heat. An overclocked device may be unreliable or fail completely if the additional heat load is not removed or power delivery components cannot meet increased power demands. Many device warranties state that overclocking or over-specification voids any warranty; however, an increasing number of manufacturers will allow overclocking as long as it is performed (relatively) safely.

Overview

The purpose of overclocking is to increase the operating speed of a given component. Normally, on modern systems, the target of overclocking is increasing the performance of a major chip or subsystem, such as the main processor or graphics controller, but other components, such as system memory (RAM) or system buses (generally on the motherboard), are commonly involved. The trade-offs are an increase in power consumption (heat), fan noise (cooling), and shortened lifespan for the targeted components. Most components are designed with a margin of safety to deal with operating conditions outside of a manufacturer's control, such as ambient temperature and fluctuations in operating voltage. Overclocking techniques in general aim to trade away this safety margin by setting the device to run at the higher end of the margin, with the understanding that temperature and voltage must be more strictly monitored and controlled by the user. For example, operating temperature would need to be more strictly controlled with increased cooling, as the part will be less tolerant of increased temperatures at the higher speeds. The base operating voltage may also be increased to compensate for unexpected voltage drops and to strengthen signalling and timing signals, as low-voltage excursions are more likely to cause malfunctions at higher operating speeds.

While most modern devices are fairly tolerant of overclocking, all devices have finite limits. Generally, for any given voltage, most parts will have a maximum "stable" speed at which they still operate correctly. Past this speed, the device starts giving incorrect results, which can cause malfunctions and sporadic behavior in any system depending on it. While in a PC context the usual result is a system crash, more subtle errors can go undetected, which over a long enough time can give unpleasant surprises such as data corruption (incorrectly calculated results or, worse, data written to storage incorrectly) or the system failing only during certain specific tasks (general usage such as internet browsing and word processing appears fine, but any application wanting advanced graphics crashes the system).

At this point, an increase in operating voltage of a part may allow more headroom for further increases in clock speed, but the increased voltage can also significantly increase heat output, as well as shorten the lifespan further. At some point, there will be a limit imposed by the ability to supply the device with sufficient power, the user's ability to cool the part, and the device's own maximum voltage tolerance before it achieves destructive failure. Overzealous use of voltage or inadequate cooling can rapidly degrade a device's performance to the point of failure, or in extreme cases outright destroy it.

The speed gained by overclocking depends largely upon the applications and workloads being run on the system, and what components are being overclocked by the user; benchmarks for different purposes are published.

Underclocking

Conversely, the primary goal of underclocking is to reduce power consumption and the resultant heat generation of a device, with the trade-offs being lower clock speeds and reductions in performance. Reducing the cooling requirements needed to keep hardware at a given operational temperature has knock-on benefits such as allowing fewer or slower fans for quieter operation and, in mobile devices, longer battery life per charge. Some manufacturers underclock components of battery-powered equipment to improve battery life, or implement systems that detect when a device is operating under battery power and reduce clock frequency.

Underclocking and undervolting may be attempted on a desktop system to have it operate silently (such as for a home entertainment center) while potentially offering higher performance than current low-voltage processor offerings. This would use a "standard-voltage" part run at lower voltages (while attempting to keep desktop clock speeds) to meet an acceptable performance/noise target for the build. This was also attractive because using a "standard voltage" processor in a "low voltage" application avoided paying the traditional price premium for an officially certified low-voltage version. However, again like overclocking, there is no guarantee of success, and the builder's time researching given system/processor combinations, and especially the time and tedium of performing many iterations of stability testing, need to be considered. The usefulness of underclocking (again like overclocking) is determined by the processor offerings, prices, and availability at the specific time of the build. Underclocking is also sometimes used when troubleshooting.

Enthusiast culture

Overclocking has become more accessible with motherboard makers offering overclocking as a marketing feature on their mainstream product lines. However, the practice is embraced more by enthusiasts than professional users, as overclocking carries a risk of reduced reliability, accuracy and damage to data and equipment. Additionally, most manufacturer warranties and service agreements do not cover overclocked components nor any incidental damages caused by their use. While overclocking can still be an option for increasing personal computing capacity, and thus workflow productivity for professional users, the importance of stability testing components thoroughly before employing them into a production environment cannot be overstated.

Overclocking offers several draws for overclocking enthusiasts. Overclocking allows testing of components at speeds not currently offered by the manufacturer, or at speeds only officially offered on specialized, higher-priced versions of the product. A general trend in the computing industry is that new technologies tend to debut in the high-end market first, then later trickle down to the performance and mainstream market. If the high-end part only differs by an increased clock speed, an enthusiast can attempt to overclock a mainstream part to simulate the high-end offering. This can give insight on how over-the-horizon technologies will perform before they are officially available on the mainstream market, which can be especially helpful for other users considering if they should plan ahead to purchase or upgrade to the new feature when it is officially released.

Some hobbyists enjoy building, tuning, and "Hot-Rodding" their systems in competitive benchmarking competitions, competing with other like-minded users for high scores in standardized computer benchmark suites. Others will purchase a low-cost model of a component in a given product line, and attempt to overclock that part to match a more expensive model's stock performance. Another approach is overclocking older components to attempt to keep pace with increasing system requirements and extend the useful service life of the older part or at least delay a purchase of new hardware solely for performance reasons. Another rationale for overclocking older equipment is even if overclocking stresses equipment to the point of failure earlier, little is lost as it is already depreciated, and would have needed to be replaced in any case.

Components

Technically, any component that uses a timer (or clock) to synchronize its internal operations can be overclocked. Most efforts for computer components, however, focus on specific components such as processors (CPUs), video cards, motherboard chipsets, and RAM. Most modern processors derive their effective operating speeds by multiplying a base clock (processor bus speed) by an internal multiplier within the processor (the CPU multiplier) to attain their final speed.

Computer processors generally are overclocked by manipulating the CPU multiplier if that option is available, but the processor and other components can also be overclocked by increasing the base speed of the bus clock. Some systems allow additional tuning of other clocks (such as a system clock) that influence the bus clock speed which, again, is multiplied by the processor to allow for finer adjustments of the final processor speed.
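
As a simple illustration of the base-clock × multiplier arithmetic described above (the numbers are made-up examples, not settings for any particular CPU):

```python
def cpu_frequency_mhz(base_clock_mhz, multiplier):
    """Effective CPU frequency = base (bus) clock x internal multiplier."""
    return base_clock_mhz * multiplier

# Stock configuration: 200 MHz base clock x 14 multiplier = 2800 MHz.
print(cpu_frequency_mhz(200, 14))   # 2800

# Overclock via the multiplier (base clock and other buses unchanged):
print(cpu_frequency_mhz(200, 16))   # 3200

# Overclock via the base clock (note: this also raises everything derived from it):
print(cpu_frequency_mhz(220, 14))   # 3080
```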

Most OEM systems do not expose to the user the adjustments needed to change processor clock speed or voltage in the BIOS of the OEM's motherboard, which precludes overclocking (for warranty and support reasons). The same processor installed on a different motherboard offering adjustments will allow the user to change them.

Any given component will ultimately stop operating reliably past a certain clock speed. Components will generally show some sort of malfunctioning behavior or other indication of compromised stability that alerts the user that a given speed is not stable, but there is always a possibility that a component will permanently fail without warning, even if voltages are kept within some pre-determined safe values. The maximum speed is determined by overclocking to the point of first instability, then accepting the last stable slower setting. Components are only guaranteed to operate correctly up to their rated values; beyond that different samples may have different overclocking potential. The end-point of a given overclock is determined by parameters such as available CPU multipliers, bus dividers, voltages; the user's ability to manage thermal loads, cooling techniques; and several other factors of the individual devices themselves such as semiconductor clock and thermal tolerances, interaction with other components and the rest of the system.
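
The "raise the clock until instability appears, then back off to the last stable setting" procedure described above can be sketched in pseudocode-style Python. The functions apply_clock() and passes_stress_test() are hypothetical placeholders; real tuning is done through BIOS/UEFI settings or vendor utilities, not through an API like this.

```python
def find_max_stable_clock(clocks_mhz, apply_clock, passes_stress_test):
    """Step through candidate clock speeds (ascending) and return the last one
    that passes a stability test. Both callbacks are hypothetical hooks."""
    last_stable = None
    for clock in clocks_mhz:
        apply_clock(clock)                # e.g. change the multiplier/base clock in firmware
        if passes_stress_test(clock):     # e.g. hours of error-checked stress testing
            last_stable = clock
        else:
            break                         # first instability: keep the previous, stable setting
    return last_stable

# Illustrative run with dummy callbacks (a real run would take hours per step).
candidates = [3400, 3500, 3600, 3700, 3800]   # MHz
print(find_max_stable_clock(candidates,
                            apply_clock=lambda mhz: None,
                            passes_stress_test=lambda mhz: mhz <= 3650))   # pretend 3650 MHz is the limit
```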

Considerations

There are several things to be considered when overclocking. First is to ensure that the component is supplied with adequate power at a voltage sufficient to operate at the new clock rate. Supplying the power with improper settings or applying excessive voltage can permanently damage a component.

In a professional production environment, overclocking is only likely to be used where the increase in speed justifies the cost of the expert support required, the possibly reduced reliability, the consequent effect on maintenance contracts and warranties, and the higher power consumption. If faster speed is required it is often cheaper when all costs are considered to buy faster hardware.

Cooling

High quality heat sinks are often made of copper.

All electronic circuits produce heat generated by the movement of electric current. As clock frequencies in digital circuits and the applied voltage increase, the heat generated by components running at the higher performance levels also increases. The relationship between clock frequency and thermal design power (TDP) is linear. However, there is a limit to the maximum frequency, which is called a "wall". To overcome this issue, overclockers raise the chip voltage to increase the overclocking potential. Voltage increases power consumption and consequently heat generation significantly (proportionally to the square of the voltage in a linear circuit, for example); this requires more cooling to avoid damaging the hardware by overheating. In addition, some digital circuits slow down at high temperatures due to changes in MOSFET device characteristics. Conversely, the overclocker may decide to decrease the chip voltage while overclocking (a process known as undervolting), to reduce heat emissions while performance remains optimal.
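
To put rough numbers on the relationship described above, the sketch below uses the common dynamic-power approximation for CMOS logic, P ≈ C·V²·f (a simplification that ignores static leakage). The scaling factors are made-up examples.

```python
def relative_dynamic_power(freq_scale, voltage_scale):
    """Dynamic CMOS power scales roughly as P ~ C * V^2 * f, so the relative
    change is (voltage_scale ** 2) * freq_scale. Leakage is ignored here."""
    return (voltage_scale ** 2) * freq_scale

# 20% higher clock at the same voltage: ~20% more dynamic power.
print(relative_dynamic_power(1.20, 1.00))   # 1.20

# 20% higher clock plus a 10% voltage bump to keep it stable: ~45% more power.
print(relative_dynamic_power(1.20, 1.10))   # ~1.45
```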

Stock cooling systems are designed for the amount of power produced during non-overclocked use; overclocked circuits can require more cooling, such as by powerful fans, larger heat sinks, heat pipes and water cooling. Mass, shape, and material all influence the ability of a heatsink to dissipate heat. Efficient heatsinks are often made entirely of copper, which has high thermal conductivity, but is expensive. Aluminium is more widely used; it has good thermal characteristics, though not as good as copper, and is significantly cheaper. Cheaper materials such as steel do not have good thermal characteristics. Heat pipes can be used to improve conductivity. Many heatsinks combine two or more materials to achieve a balance between performance and cost.

Interior of a water-cooled computer, showing CPU water block, tubing, and pump

Water cooling carries waste heat to a radiator. Thermoelectric cooling devices which actually refrigerate using the Peltier effect can help with high thermal design power (TDP) processors made by Intel and AMD in the early twenty-first century. Thermoelectric cooling devices create temperature differences between two plates by running an electric current through the plates. This method of cooling is highly effective, but itself generates significant heat elsewhere which must be carried away, often by a convection-based heatsink or a water cooling system.

Liquid nitrogen may be used for cooling an overclocked system, when an extreme measure of cooling is needed.

Other cooling methods are forced convection and phase-change cooling, which is used in refrigerators and can be adapted for computer use. Liquid nitrogen, liquid helium, and dry ice are used as coolants in extreme cases, such as record-setting attempts or one-off experiments, rather than for cooling an everyday system. In June 2006, IBM and Georgia Institute of Technology jointly announced a new record in silicon-based chip clock rate (the rate a transistor can be switched at, not the CPU clock rate) above 500 GHz, which was done by cooling the chip to 4.5 K (−268.6 °C; −451.6 °F) using liquid helium. The CPU frequency world record, set in November 2012, stands at 8.794 GHz as of January 2022. These extreme methods are generally impractical in the long term, as they require refilling reservoirs of vaporizing coolant, and condensation can form on chilled components. Moreover, silicon-based junction gate field-effect transistors (JFETs) will degrade below temperatures of roughly 100 K (−173 °C; −280 °F) and eventually cease to function or "freeze out" at 40 K (−233 °C; −388 °F) since the silicon ceases to be semiconducting, so using extremely cold coolants may cause devices to fail.

Submersion cooling, used by the Cray-2 supercomputer, involves sinking part of the computer system directly into a chilled liquid that is thermally conductive but has low electrical conductivity. The advantage of this technique is that no condensation can form on components. A good submersion liquid is Fluorinert made by 3M, which is expensive. Another option is mineral oil, but impurities such as those in water might cause it to conduct electricity.

Amateur overclocking enthusiasts have used a mixture of dry ice and a solvent with a low freezing point, such as acetone or isopropyl alcohol. This cooling bath, often used in laboratories, achieves a temperature of −78 °C. However, this practice is discouraged due to its safety risks; the solvents are flammable and volatile, and dry ice can cause frostbite (through contact with exposed skin) and suffocation (due to the large volume of carbon dioxide generated when it sublimes).

Stability and functional correctness

As an overclocked component operates outside of the manufacturer's recommended operating conditions, it may function incorrectly, leading to system instability. Another risk is silent data corruption by undetected errors. Such failures might never be correctly diagnosed and may instead be incorrectly attributed to software bugs in applications, device drivers, or the operating system. Overclocked use may permanently damage components enough to cause them to misbehave (even under normal operating conditions) without becoming totally unusable.

A large-scale 2011 field study of hardware faults causing a system crash for consumer PCs and laptops showed a four to 20 times increase (depending on CPU manufacturer) in system crashes due to CPU failure for overclocked computers over an eight-month period.

In general, overclockers claim that testing can ensure that an overclocked system is stable and functioning correctly. Although software tools are available for testing hardware stability, it is generally impossible for any private individual to thoroughly test the functionality of a processor. Achieving good fault coverage requires immense engineering effort; even with all of the resources dedicated to validation by manufacturers, faulty components and even design faults are not always detected.

A particular "stress test" can verify only the functionality of the specific instruction sequence used in combination with the data and may not detect faults in those operations. For example, an arithmetic operation may produce the correct result but incorrect flags; if the flags are not checked, the error will go undetected.

To further complicate matters, in process technologies such as silicon on insulator (SOI), devices display hysteresis—a circuit's performance is affected by the events of the past, so without carefully targeted tests it is possible for a particular sequence of state changes to work at overclocked rates in one situation but not another even if the voltage and temperature are the same. Often, an overclocked system which passes stress tests experiences instabilities in other programs.

In overclocking circles, "stress tests" or "torture tests" are used to check for correct operation of a component. These workloads are selected as they put a very high load on the component of interest (e.g. a graphically intensive application for testing video cards, or different math-intensive applications for testing general CPUs). Popular stress tests include Prime95, Everest, Superpi, OCCT, AIDA64, Linpack (via the LinX and IntelBurnTest GUIs), SiSoftware Sandra, BOINC, Intel Thermal Analysis Tool and Memtest86. The hope is that any functional-correctness issues with the overclocked component will manifest themselves during these tests, and if no errors are detected during the test, then the component is deemed "stable". Since fault coverage is important in stability testing, the tests are often run for long periods of time, hours or even days. An overclocked computer is sometimes described using the number of hours and the stability program used, such as "prime 12 hours stable".
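
A minimal sketch of the idea behind such torture tests (not the algorithm used by Prime95 or any of the tools listed above): repeatedly run a computation whose answer is independently known and flag any mismatch as a sign of instability. The workload and duration here are trivial placeholders; real tools use far heavier, cache- and FPU-punishing workloads and run for hours.

```python
import time

def torture_test(duration_s=2.0):
    """Repeatedly compute something with a known reference result and report
    the first mismatch, if any."""
    deadline = time.time() + duration_s
    iterations = 0
    while time.time() < deadline:
        # Known identity: the sum of 1..n equals n*(n+1)/2.
        n = 100_000
        computed = sum(range(1, n + 1))
        expected = n * (n + 1) // 2
        if computed != expected:
            return f"FAILED after {iterations} iterations (possible instability)"
        iterations += 1
    return f"stable for {iterations} iterations"

print(torture_test())
```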

Factors allowing overclocking

Overclockability arises in part due to the economics of the manufacturing processes of CPUs and other components. In many cases components are manufactured by the same process, and tested after manufacture to determine their actual maximum ratings. Components are then marked with a rating chosen by the market needs of the semiconductor manufacturer. If manufacturing yield is high, more higher-rated components than required may be produced, and the manufacturer may mark and sell higher-performing components as lower-rated for marketing reasons. In some cases, the true maximum rating of the component may exceed even the highest-rated component sold. Many devices sold with a lower rating may behave in all ways as higher-rated ones, while in the worst case operation at the higher rating may be more problematic.

Notably, higher clock rates always mean greater waste heat generation, since the transistors in a semiconductor device switch—and dump charge to ground—more often per second. In some cases, this means that the chief drawback of the overclocked part is far more heat dissipated than the maximums published by the manufacturer. Pentium architect Bob Colwell calls overclocking an "uncontrolled experiment in better-than-worst-case system operation".

Measuring effects of overclocking

Benchmarks are used to evaluate performance, and they can become a kind of "sport" in which users compete for the highest scores. As discussed above, stability and functional correctness may be compromised when overclocking, and meaningful benchmark results depend on the correct execution of the benchmark. Because of this, benchmark scores may be qualified with stability and correctness notes (e.g. an overclocker may report a score, noting that the benchmark only runs to completion 1 in 5 times, or that signs of incorrect execution such as display corruption are visible while running the benchmark). A widely used test of stability is Prime95, which has built-in error checking that fails if the computer is unstable.

Using only the benchmark scores, it may be difficult to judge the difference overclocking makes to the overall performance of a computer. For example, some benchmarks test only one aspect of the system, such as memory bandwidth, without taking into consideration how higher clock rates in this aspect will improve the system performance as a whole. Apart from demanding applications such as video encoding, high-demand databases and scientific computing, memory bandwidth is typically not a bottleneck, so a great increase in memory bandwidth may be unnoticeable to a user depending on the applications used. Other benchmarks, such as 3DMark, attempt to replicate game conditions.

Manufacturer and vendor overclocking

Commercial system builders or component resellers sometimes overclock to sell items at higher profit margins. The seller makes more money by overclocking lower-priced components which are found to operate correctly and selling equipment at prices appropriate for higher-rated components. While the equipment will normally operate correctly, this practice may be considered fraudulent if the buyer is unaware of it.

Overclocking is sometimes offered as a legitimate service or feature for consumers, in which a manufacturer or retailer tests the overclocking capability of processors, memory, video cards, and other hardware products. Several video card manufactures now offer factory-overclocked versions of their graphics accelerators, complete with a warranty, usually at a price intermediate between that of the standard product and a non-overclocked product of higher performance.

It is speculated that manufacturers implement overclocking prevention mechanisms such as CPU multiplier locking to prevent users from buying lower-priced items and overclocking them. These measures are sometimes marketed as a consumer protection benefit, but are often criticized by buyers.

Many motherboards are sold, and advertised, with extensive facilities for overclocking implemented in hardware and controlled by BIOS settings.

CPU multiplier locking

CPU multiplier locking is the process of permanently setting a CPU's clock multiplier. AMD CPUs are unlocked in early editions of a model and locked in later editions, but nearly all Intel CPUs are locked and recent models are very resistant to unlocking to prevent overclocking by users. AMD ships unlocked CPUs with their Opteron, FX, Ryzen and Black Series line-up, while Intel uses the monikers of "Extreme Edition" and "K-Series." Intel usually has one or two Extreme Edition CPUs on the market as well as X series and K series CPUs analogous to AMD's Black Edition. AMD has the majority of their desktop range in a Black Edition.

Users usually unlock CPUs to allow overclocking, but sometimes also to allow underclocking in order to maintain front-side bus speed compatibility (on older CPUs) with certain motherboards. Unlocking generally invalidates the manufacturer's warranty, and mistakes can cripple or destroy a CPU. Locking a chip's clock multiplier does not necessarily prevent users from overclocking, as the speed of the front-side bus or PCI multiplier (on newer CPUs) may still be changed to provide a performance increase. AMD Athlon and Athlon XP CPUs are generally unlocked by connecting bridges (jumper-like points) on the top of the CPU with conductive paint or pencil lead. Other CPU models may require different procedures.

Increasing front-side bus or northbridge/PCI clocks can overclock locked CPUs, but this throws many system frequencies out of sync, since the RAM and PCI frequencies are modified as well.

One of the easiest ways to unlock older AMD Athlon XP CPUs was called the pin mod method, because it was possible to unlock the CPU without permanently modifying bridges. A user could simply put one wire (or a few more for a new multiplier/Vcore) into the socket to unlock the CPU. More recently, with its Skylake architecture, Intel intended to block base clock (BCLK) overclocking of locked processors, to prevent consumers from purchasing cheaper components and overclocking them to previously unseen heights (since the CPU's BCLK was no longer tied to the PCI buses). However, due to a bug, 6th-generation "Skylake" (LGA1151) processors could be overclocked past the intended 102.7 MHz limit, although certain features would then stop working; the limit was later enforced through BIOS updates. All other unlocked processors for LGA1151 and LGA1151v2 (including 7th, 8th, and 9th generation) and BGA1440 allow BCLK overclocking (as long as the OEM allows it), while locked 7th-, 8th-, and 9th-generation processors cannot go past 102.7 MHz. 10th-generation parts, however, could reach 103 MHz on the BCLK.

Advantages

  • Higher performance in games, en-/decoding, video editing and system tasks at no additional direct monetary expense, but with increased electrical consumption and thermal output.
  • System optimization: Some systems have "bottlenecks", where small overclocking of one component can help realize the full potential of another component to a greater percentage than when just the limiting hardware itself is overclocked. For instance, many motherboards with AMD Athlon 64 processors limit the clock rate of four units of RAM to 333 MHz. However, the memory performance is computed by dividing the processor clock rate (which is a base number times a CPU multiplier, for instance 1.8 GHz is most likely 9×200 MHz) by a fixed integer such that, at a stock clock rate, the RAM would run at a clock rate near 333 MHz. By manipulating elements of how the processor clock rate is set (usually adjusting the multiplier), it is often possible to overclock the processor a small amount, around 5–10%, and gain a small increase in RAM clock rate and/or a reduction in RAM latency timings (see the worked sketch after this list).
  • It can be cheaper to purchase a lower performance component and overclock it to the clock rate of a more expensive component.
  • Extending life on older equipment (through underclocking/undervolting).
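
A worked sketch of the RAM-divider arithmetic mentioned in the system-optimization point above. The divider value of 6 and the 2.0 GHz stock clock are assumptions chosen purely so the numbers come out near 333 MHz; they are not taken from any real memory-controller table.

```python
def ram_clock_mhz(base_clock_mhz, cpu_multiplier, memory_divider):
    """RAM clock derived by dividing the CPU clock by a fixed integer divider.
    The divider of 6 used below is an illustrative assumption."""
    cpu_clock = base_clock_mhz * cpu_multiplier
    return cpu_clock / memory_divider

# Stock: 200 MHz x 10 = 2000 MHz CPU; with an assumed divider of 6 the RAM runs at ~333 MHz.
print(ram_clock_mhz(200, 10, 6))   # 333.3

# Raising the base clock ~5% to 210 MHz lifts both the CPU (2100 MHz) and the RAM (350 MHz),
# a small gain if the memory modules can tolerate it.
print(ram_clock_mhz(210, 10, 6))   # 350.0
```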

Disadvantages

General

  • Higher clock rates and voltages increase power consumption, also increasing electricity cost and heat production. The additional heat increases the ambient air temperature within the system case, which may affect other components. The hot air blown out of the case heats the room it's in.
  • Fan noise: High-performance fans running at maximum speed, used for the required degree of cooling of an overclocked machine, can be noisy, some producing 50 dB or more of noise. When maximum cooling is not required, in any equipment, fan speeds can be reduced below the maximum: fan noise has been found to be roughly proportional to the fifth power of fan speed; halving speed reduces noise by about 15 dB (see the sketch after this list). Fan noise can be reduced by design improvements, e.g. with aerodynamically optimized blades for smoother airflow, reducing noise to around 20 dB at approximately 1 metre, or larger fans rotating more slowly, which produce less noise than smaller, faster fans with the same airflow. Acoustical insulation inside the case, e.g. acoustic foam, can reduce noise. Additional cooling methods which do not use fans can be used, such as liquid and phase-change cooling.
  • An overclocked computer may become unreliable. For example, Microsoft Windows may appear to work with no problems, but when it is re-installed or upgraded, error messages such as a "file copy error" may be received during Windows Setup. Because installing Windows is very memory-intensive, decoding errors may occur when files are extracted from the Windows XP CD-ROM.
  • The lifespan of semiconductor components may be reduced by increased voltages and heat.
  • Warranties may be voided by overclocking.
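
A small sketch of the fifth-power noise relationship mentioned in the fan-noise point above: if acoustic power scales roughly as the fifth power of fan speed, the change in sound level in decibels is 10·log10((n2/n1)^5) = 50·log10(n2/n1).

```python
import math

def fan_noise_change_db(speed_ratio):
    """Approximate change in noise level (dB) for a given fan-speed ratio,
    assuming noise power ~ speed^5 (a rough empirical rule of thumb)."""
    return 50.0 * math.log10(speed_ratio)

print(fan_noise_change_db(0.5))   # halving speed: about -15 dB
print(fan_noise_change_db(1.5))   # 50% faster: about +8.8 dB
```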

Risks of overclocking

  • Increasing the operation frequency of a component will usually increase its thermal output in a linear fashion, while an increase in voltage usually causes thermal power to increase quadratically. Excessive voltages or improper cooling may cause chip temperatures to rise to dangerous levels, causing the chip to be damaged or destroyed.
  • Exotic cooling methods used to facilitate overclocking such as water cooling are more likely to cause damage if they malfunction. Sub-ambient cooling methods such as phase-change cooling or liquid nitrogen will cause water condensation, which will cause electrical damage unless controlled; some methods include using kneaded erasers or shop towels to catch the condensation.

Limitations

Overclocking a component can only be of noticeable benefit if the component is on the critical path for a process, i.e., if it is a bottleneck. If disk access or the speed of an Internet connection limits the speed of a process, a 20% increase in processor speed is unlikely to be noticed; however, there are some scenarios where increasing the clock speed of a processor actually allows an SSD to be read and written to faster. Overclocking a CPU will not noticeably benefit a game when a graphics card's performance is the "bottleneck" of the game.

Graphics cards

The BFG GeForce 6800GSOC ships with higher memory and clock rates than the standard 6800GS.

Graphics cards can also be overclocked. There are utilities to achieve this, such as EVGA's Precision, RivaTuner, AMD Overdrive (on AMD cards only), MSI Afterburner, Zotac Firestorm, and the PEG Link Mode on Asus motherboards. Overclocking a GPU will often yield a marked increase in performance in synthetic benchmarks, usually reflected in game performance. It is sometimes possible to see that a graphics card is being pushed beyond its limits before any permanent damage is done by observing on-screen artifacts or unexpected system crashes. It is common to run into one of those problems when overclocking graphics cards; both symptoms at the same time usually mean that the card is severely pushed beyond its heat, clock rate, and/or voltage limits. However, if they are seen when the card is not overclocked, they indicate a faulty card. After a reboot, video settings are reset to the standard values stored in the graphics card firmware, and the maximum clock rate of that specific card can then be deduced.

Some overclockers apply a potentiometer to the graphics card to manually adjust the voltage (which usually invalidates the warranty). This allows for finer adjustments, as overclocking software for graphics cards can only go so far. Excessive voltage increases may damage or destroy components on the graphics card or the entire graphics card itself (practically speaking).

RAM

Alternatives

Flashing and unlocking can be used to improve the performance of a video card without technically overclocking, but they are much riskier than overclocking just through software.

Flashing refers to using the firmware of a different card with the same (or sometimes similar) core and compatible firmware, effectively making it a higher-model card; it can be difficult and may be irreversible. Sometimes standalone software to modify the firmware files can be found, e.g. NiBiTor (the GeForce 6/7 series are well regarded in this aspect), without using firmware for a better-model video card. For example, video cards with 3D accelerators (most, as of 2011) have two voltage and clock rate settings, one for 2D and one for 3D, but were designed to operate with three voltage stages, the third being somewhere between the other two, serving as a fallback when the card overheats or as a middle stage when going from 2D to 3D operation mode. Therefore, it can be wise to set this middle stage prior to "serious" overclocking, specifically because of this fallback ability: the card can drop down to this clock rate, losing a few (or sometimes a few dozen, depending on the setting) percent of its performance and cooling down, without dropping out of 3D mode (and afterwards return to the desired high-performance clock and voltage settings).

Some cards have abilities not directly connected with overclocking. For example, Nvidia's GeForce 6600GT (AGP flavor) has a temperature monitor used internally by the card, invisible to the user if standard firmware is used. Modifying the firmware can display a 'Temperature' tab.

Unlocking refers to enabling extra pipelines or pixel shaders. The 6800LE, the 6800GS and 6800 (AGP models only) were some of the first cards to benefit from unlocking. While these models have either 8 or 12 pipes enabled, they share the same 16x6 GPU core as a 6800GT or Ultra, but pipelines and shaders beyond those specified are disabled; the GPU may be fully functional, or may have been found to have faults which do not affect operation at the lower specification. GPUs found to be fully functional can be unlocked successfully, although it is not possible to be sure that there are undiscovered faults; in the worst case the card may become permanently unusable.

History

Overclocked processors first became commercially available in 1983, when AMD sold an overclocked version of the Intel 8088 CPU. In 1984, some consumers were overclocking IBM's version of the Intel 80286 CPU by replacing the clock crystal. Xeon W-3175X is the only Xeon with a multiplier unlocked for overclocking.

Geosynchronous orbit

From Wikipedia, the free encyclopedia
 
Animation (not to scale) showing geosynchronous satellite orbiting the Earth.

A geosynchronous orbit (sometimes abbreviated GSO) is an Earth-centered orbit with an orbital period that matches Earth's rotation on its axis, 23 hours, 56 minutes, and 4 seconds (one sidereal day). The synchronization of rotation and orbital period means that, for an observer on Earth's surface, an object in geosynchronous orbit returns to exactly the same position in the sky after a period of one sidereal day. Over the course of a day, the object's position in the sky may remain still or trace out a path, typically in a figure-8 form, whose precise characteristics depend on the orbit's inclination and eccentricity. A circular geosynchronous orbit has a constant altitude of 35,786 km (22,236 mi).

A special case of geosynchronous orbit is the geostationary orbit, which is a circular geosynchronous orbit in Earth's equatorial plane with both inclination and eccentricity equal to 0. A satellite in a geostationary orbit remains in the same position in the sky to observers on the surface.

Communications satellites are often given geostationary or close to geostationary orbits so that the satellite antennas that communicate with them do not have to move, but can be pointed permanently at the fixed location in the sky where the satellite appears.

History

The geosynchronous orbit was popularised by the science fiction author Arthur C. Clarke, and is thus sometimes called the Clarke Orbit.

In 1929, Herman Potočnik described both geosynchronous orbits in general and the special case of the geostationary Earth orbit in particular as useful orbits for space stations. The first appearance of a geosynchronous orbit in popular literature was in October 1942, in the first Venus Equilateral story by George O. Smith, but Smith did not go into details. British science fiction author Arthur C. Clarke popularised and expanded the concept in a 1945 paper entitled Extra-Terrestrial Relays – Can Rocket Stations Give Worldwide Radio Coverage?, published in Wireless World magazine. Clarke acknowledged the connection in his introduction to The Complete Venus Equilateral. The orbit, which Clarke first described as useful for broadcast and relay communications satellites, is sometimes called the Clarke Orbit. Similarly, the collection of artificial satellites in this orbit is known as the Clarke Belt.

Syncom 2: The first functional geosynchronous satellite

In technical terminology, geosynchronous orbits are often referred to as geostationary if they are roughly over the equator, but the two terms are used somewhat interchangeably. Specifically, geosynchronous Earth orbit (GEO) may be a synonym for geosynchronous equatorial orbit or for geostationary Earth orbit.

The first geosynchronous satellite was designed by Harold Rosen while he was working at Hughes Aircraft in 1959. Inspired by Sputnik 1, he wanted to use a geostationary (geosynchronous equatorial) satellite to globalise communications. Telecommunications between the US and Europe were then possible for just 136 people at a time, and relied on high-frequency radios and an undersea cable.

Conventional wisdom at the time was that it would require too much rocket power to place a satellite in a geosynchronous orbit and it would not survive long enough to justify the expense, so early efforts were put towards constellations of satellites in low or medium Earth orbit. The first of these were the passive Echo balloon satellites in 1960, followed by Telstar 1 in 1962. Although these projects had difficulties with signal strength and tracking that could be solved through geosynchronous satellites, the concept was seen as impractical, so Hughes often withheld funds and support.

By 1961, Rosen and his team had produced a cylindrical prototype with a diameter of 76 centimetres (30 in) and a height of 38 centimetres (15 in), weighing 11.3 kilograms (25 lb); it was light and small enough to be placed into orbit by then-available rocketry, was spin-stabilised and used dipole antennas producing a pancake-shaped waveform. In August 1961, they were contracted to begin building the working satellite. They lost Syncom 1 to an electronics failure, but Syncom 2 was successfully placed into a geosynchronous orbit in 1963. Although its inclined orbit still required moving antennas, it was able to relay TV transmissions and allowed US President John F. Kennedy to phone Nigerian prime minister Abubakar Tafawa Balewa from a ship on August 23, 1963.

Today there are hundreds of geosynchronous satellites providing remote sensing, navigation and communications.

Most populated land locations on the planet now have terrestrial communications facilities (microwave, fiber-optic), which often have latency and bandwidth advantages; as of 2018, telephone access covered 96% of the population and internet access 90%. Nevertheless, some rural and remote areas in developed countries still rely on satellite communications.

Types

Geostationary orbit

The geostationary satellite (green) always remains above the same marked spot on the equator (brown).

A geostationary equatorial orbit (GEO) is a circular geosynchronous orbit in the plane of the Earth's equator with a radius of approximately 42,164 km (26,199 mi) (measured from the center of the Earth). A satellite in such an orbit is at an altitude of approximately 35,786 km (22,236 mi) above mean sea level. It maintains the same position relative to the Earth's surface. If one could see a satellite in geostationary orbit, it would appear to hover at the same point in the sky, i.e., not exhibit diurnal motion, while the Sun, Moon, and stars would traverse the skies behind it. Such orbits are useful for telecommunications satellites.

A perfectly stable geostationary orbit is an ideal that can only be approximated. In practice the satellite drifts out of this orbit because of perturbations such as the solar wind, radiation pressure, variations in the Earth's gravitational field, and the gravitational effect of the Moon and Sun, and thrusters are used to maintain the orbit in a process known as station-keeping.

Eventually, without the use of thrusters, the orbit will become inclined, oscillating between 0° and 15° every 55 years. At the end of the satellite's lifetime, when fuel approaches depletion, satellite operators may decide to omit these expensive manoeuvres to correct inclination and control only eccentricity. This prolongs the lifetime of the satellite, as it consumes less fuel over time, but the satellite can then only be used by ground antennas capable of following its N-S movement.

Geostationary satellites will also tend to drift around one of two stable longitudes of 75° and 255° without station keeping.

Elliptical and inclined geosynchronous orbits

A quasi-zenith satellite orbit

Many objects in geosynchronous orbits have eccentric and/or inclined orbits. Eccentricity makes the orbit elliptical and causes it to appear to oscillate E-W in the sky from the viewpoint of a ground station, while inclination tilts the orbit relative to the equator and makes it appear to oscillate N-S. These effects combine to form an analemma (figure-8).

Satellites in elliptical/eccentric orbits must be tracked by steerable ground stations.

Tundra orbit

The Tundra orbit is an eccentric Russian geosynchronous orbit which allows the satellite to spend most of its time dwelling over one high-latitude location. It uses an inclination of 63.4°, a frozen orbit, which reduces the need for station-keeping. At least two satellites are needed to provide continuous coverage over an area. It was used by Sirius XM Satellite Radio to improve signal strength in the northern US and Canada.

Quasi-zenith orbit

The Quasi-Zenith Satellite System (QZSS) is a four-satellite system that operates in a geosynchronous orbit at an inclination of 42° and an eccentricity of 0.075. Each satellite dwells over Japan, allowing signals to reach receivers in urban canyons, and then passes quickly over Australia.

Launch

An example of a transition from geostationary transfer orbit (GTO) to geosynchronous orbit (GSO), showing EchoStar XVII and the Earth.

Geosynchronous satellites are launched to the east into a prograde orbit that matches the rotation rate of the equator. The smallest inclination that a satellite can be launched into is that of the launch site's latitude, so launching the satellite from close to the equator limits the amount of inclination change needed later. Additionally, launching from close to the equator allows the speed of the Earth's rotation to give the satellite a boost. A launch site should have water or deserts to the east, so any failed rockets do not fall on a populated area.
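
For a sense of that boost, here is a minimal sketch (Python; the listed launch-site latitudes and Earth constants are illustrative assumptions) of the eastward surface speed contributed by Earth's rotation, which falls off with the cosine of latitude.

```python
import math

R_EQUATOR = 6_378.137        # km, Earth's equatorial radius (assumed value)
SIDEREAL_DAY = 86_164.0905   # s, one rotation relative to the stars

def rotation_boost(lat_deg):
    """Eastward surface speed in m/s due to Earth's rotation at a given latitude."""
    v_equator = 2 * math.pi * R_EQUATOR * 1000 / SIDEREAL_DAY   # ~465 m/s at the equator
    return v_equator * math.cos(math.radians(lat_deg))

for site, lat in [("equatorial site", 0.0),
                  ("Cape Canaveral (~28.5 deg N)", 28.5),
                  ("Baikonur (~46 deg N)", 45.9)]:
    print(f"{site}: {rotation_boost(lat):.0f} m/s")
```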

Most launch vehicles place geosynchronous satellites directly into a geosynchronous transfer orbit (GTO), an elliptical orbit with an apogee at GSO height and a low perigee. On-board satellite propulsion is then used to raise the perigee, circularise and reach GSO.
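
To make the propulsion step concrete, the sketch below is a minimal vis-viva estimate (Python; the 250 km GTO perigee altitude and the value used for Earth's gravitational parameter are illustrative assumptions, not figures from the article) of the single apogee burn needed to circularise a transfer orbit at GSO height.

```python
import math

MU = 398_600.4418                  # km^3/s^2, Earth's gravitational parameter (assumed)
R_GEO = 42_164.0                   # km, radius of a circular geosynchronous orbit
R_PERIGEE = 6_378.137 + 250.0      # km, illustrative GTO perigee at 250 km altitude

def visviva(r, a):
    """Orbital speed in km/s at radius r on an orbit with semi-major axis a."""
    return math.sqrt(MU * (2.0 / r - 1.0 / a))

a_gto = (R_PERIGEE + R_GEO) / 2.0      # semi-major axis of the transfer ellipse
v_apogee_gto = visviva(R_GEO, a_gto)   # speed at the GTO apogee
v_circular = visviva(R_GEO, R_GEO)     # speed of the circular GSO

# Apogee circularisation burn: roughly 1.5 km/s for this geometry.
print(f"delta-v at apogee: {(v_circular - v_apogee_gto) * 1000:.0f} m/s")
```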

Once in a viable geostationary orbit, spacecraft can change their longitudinal position by adjusting their semi-major axis such that the new period is shorter or longer than a sidereal day, in order to effect an apparent "drift" eastward or westward, respectively. Once at the desired longitude, the spacecraft's period is restored to geosynchronous.
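
As a rough illustration of that manoeuvre (Python; the 20 km semi-major-axis offset is an arbitrary assumed value), a small change Δa shifts the period by roughly ΔT/T ≈ (3/2)·Δa/a, which an observer on the ground sees as a slow longitudinal drift of about 360° × ΔT/T per day.

```python
import math

MU = 398_600.4418    # km^3/s^2, Earth's gravitational parameter (assumed)
A_GEO = 42_164.0     # km, nominal geosynchronous semi-major axis

def period(a):
    """Orbital period in seconds for a semi-major axis a in km."""
    return 2 * math.pi * math.sqrt(a ** 3 / MU)

delta_a = -20.0                    # km; slightly lowering the orbit (illustrative)
t_nominal = period(A_GEO)
t_offset = period(A_GEO + delta_a)

# A shorter period lets the satellite outrun Earth's rotation, so it drifts eastward.
drift_deg_per_day = 360.0 * (t_nominal - t_offset) / t_nominal
print(f"apparent drift: {drift_deg_per_day:+.3f} deg/day")   # ~ +0.26 deg/day (eastward)
```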

Proposed orbits

Statite proposal

A statite is a hypothetical satellite that uses radiation pressure from the sun against a solar sail to modify its orbit.

It would hold its location over the dark side of the Earth at a latitude of approximately 30 degrees. From an Earth-based viewer's perspective it would return to the same spot in the sky every 24 hours, making it functionally similar to a geosynchronous orbit.

Space elevator

A further form of geosynchronous orbit is provided by the theoretical space elevator. When one end is attached to the ground, points on the elevator below geostationary altitude complete an orbit in one sidereal day, a longer period than gravity alone would allow at that altitude, so they must be held up by tension in the structure.

Retired satellites

A computer-generated image of space debris. Two debris fields are shown: around geosynchronous space and low Earth orbit.

Geosynchronous satellites require some station-keeping to maintain their position, and once they run out of thruster fuel and are no longer useful they are moved into a higher graveyard orbit. It is not feasible to deorbit geosynchronous satellites, as doing so would take far more fuel than slightly raising the orbit, and atmospheric drag is negligible, giving them orbital lifetimes of thousands of years.
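
A back-of-the-envelope comparison using the vis-viva equation (Python; the 300 km graveyard raise and the 200 km re-entry perigee are assumed, illustrative figures) suggests why disposal raises the orbit rather than lowering it: dropping the perigee into the atmosphere costs over a hundred times more delta-v than nudging the whole orbit a few hundred kilometres up.

```python
import math

MU = 398_600.4418        # km^3/s^2, Earth's gravitational parameter (assumed)
R_EARTH = 6_378.137      # km, Earth's equatorial radius (assumed)
R_GEO = 42_164.0         # km, circular geosynchronous orbit radius

def visviva(r, a):
    """Orbital speed in km/s at radius r on an orbit with semi-major axis a."""
    return math.sqrt(MU * (2.0 / r - 1.0 / a))

# Option 1: raise into a graveyard orbit ~300 km above GEO (two small Hohmann burns).
r_grave = R_GEO + 300.0
a_transfer = (R_GEO + r_grave) / 2.0
dv_raise = (visviva(R_GEO, a_transfer) - visviva(R_GEO, R_GEO)) \
         + (visviva(r_grave, r_grave) - visviva(r_grave, a_transfer))

# Option 2: deorbit by lowering the perigee to ~200 km altitude so drag finishes the job.
a_reentry = (R_GEO + R_EARTH + 200.0) / 2.0
dv_deorbit = visviva(R_GEO, R_GEO) - visviva(R_GEO, a_reentry)

print(f"raise to graveyard orbit:   {dv_raise * 1000:7.1f} m/s")    # ~11 m/s
print(f"lower perigee for re-entry: {dv_deorbit * 1000:7.1f} m/s")  # ~1,480 m/s
```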

The retirement process is becoming increasingly regulated and satellites must have a 90% chance of moving over 200 km above the geostationary belt at end of life.

Space debris

Space debris in geosynchronous orbits typically has a lower collision speed than at LEO since most GSO satellites orbit in the same plane, altitude and speed; however, the presence of satellites in eccentric orbits allows for collisions at up to 4 km/s. Although a collision is comparatively unlikely, GSO satellites have a limited ability to avoid any debris.

Debris less than 10 cm in diameter cannot be seen from the Earth, making it difficult to assess its prevalence.

Despite efforts to reduce risk, spacecraft collisions have occurred. The European Space Agency telecom satellite Olympus-1 was struck by a meteoroid on August 11, 1993 and eventually moved to a graveyard orbit, and in 2006 the Russian Express-AM11 communications satellite was struck by an unknown object and rendered inoperable, although its engineers had enough contact time with the satellite to send it into a graveyard orbit. In 2017 both AMC-9 and Telkom-1 broke apart from an unknown cause.

Properties

The orbit of a geosynchronous satellite at an inclination, from the perspective of an off-Earth observer (ECI) and of an observer rotating with the Earth at its spin rate (ECEF).

A geosynchronous orbit has the following properties:

Period

All geosynchronous orbits have an orbital period equal to exactly one sidereal day. This means that the satellite will return to the same point above the Earth's surface every (sidereal) day, regardless of other orbital properties. This orbital period, T, is directly related to the semi-major axis of the orbit through the formula:

\( T = 2\pi\sqrt{\dfrac{a^{3}}{\mu}} \)

where:

a is the length of the orbit's semi-major axis
μ is the standard gravitational parameter of the central body
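
As a quick check of this formula, the following sketch (Python, assuming the standard values μ ≈ 398,600.4 km³/s² for Earth's gravitational parameter and 6,378 km for its equatorial radius) inverts it for a period of one sidereal day and reproduces the roughly 42,164 km radius and 35,786 km altitude quoted earlier.

```python
import math

MU_EARTH = 398_600.4418      # km^3/s^2, Earth's standard gravitational parameter (assumed value)
R_EQUATOR = 6_378.137        # km, Earth's equatorial radius (assumed value)
SIDEREAL_DAY = 86_164.0905   # s, one sidereal day (23 h 56 min 4 s)

# Invert T = 2*pi*sqrt(a^3 / mu) for the semi-major axis a.
a = (MU_EARTH * (SIDEREAL_DAY / (2 * math.pi)) ** 2) ** (1 / 3)

print(f"semi-major axis:        {a:,.0f} km")              # ~42,164 km
print(f"altitude above surface: {a - R_EQUATOR:,.0f} km")  # ~35,786 km
```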

Inclination

A geosynchronous orbit can have any inclination.

Satellites commonly have an inclination of zero, ensuring that the orbit remains over the equator at all times, making it stationary with respect to latitude from the point of view of a ground observer (and in the ECEF reference frame).

Another popular inclination is 63.4°, used for Tundra orbits, which ensures that the orbit's argument of perigee does not change over time.

Ground track

In the special case of a geostationary orbit, the ground track of a satellite is a single point on the equator. In the general case of a geosynchronous orbit with a non-zero inclination or eccentricity, the ground track is a more or less distorted figure-eight, returning to the same places once per sidereal day.
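
A minimal sketch of that figure-eight (Python; the 20° inclination and the assumption that the satellite starts at its ascending node, taken as the reference longitude, are illustrative) samples a circular, inclined geosynchronous orbit over one sidereal day and converts each point to subsatellite latitude and longitude.

```python
import math

SIDEREAL_DAY = 86_164.0905          # s
INCLINATION = math.radians(20.0)    # illustrative orbital inclination

def subsatellite_point(t):
    """Latitude and longitude (degrees) of the subsatellite point at time t seconds,
    for a circular geosynchronous orbit starting at its ascending node at t = 0."""
    u = 2 * math.pi * t / SIDEREAL_DAY                      # angle travelled from the node
    lat = math.asin(math.sin(INCLINATION) * math.sin(u))
    lon_inertial = math.atan2(math.cos(INCLINATION) * math.sin(u), math.cos(u))
    lon = lon_inertial - 2 * math.pi * t / SIDEREAL_DAY     # subtract Earth's rotation
    lon = (lon + math.pi) % (2 * math.pi) - math.pi         # wrap to [-pi, pi)
    return math.degrees(lat), math.degrees(lon)

# Over one sidereal day the latitude swings +/- 20 deg while the longitude
# oscillates a degree or two either side of its nominal value: a figure-eight.
for hour in range(0, 24, 3):
    lat, lon = subsatellite_point(hour * 3600)
    print(f"t = {hour:2d} h   lat = {lat:+6.2f} deg   lon = {lon:+5.2f} deg")
```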

Delayed-choice quantum eraser

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Delayed-choice_quantum_eraser A delayed-cho...