Software


Software consists of computer programs that instruct the execution of a computer. Software also includes design documents and specifications.

The history of software is closely tied to the development of digital computers in the mid-20th century. Early programs were written in the machine language specific to the hardware. The introduction of high-level programming languages in 1958 allowed for more human-readable instructions, making software development easier and more portable across different computer architectures. Software in a programming language is run through a compiler or interpreter to execute on the architecture's hardware. Over time, software has become complex, owing to developments in networking, operating systems, and databases.

Software can generally be categorized into two main types:

  1. operating systems, which manage hardware resources and provide services for applications
  2. application software, which performs specific tasks for users

The rise of cloud computing has introduced the new software delivery model Software as a Service (SaaS). In SaaS, applications are hosted by a provider and accessed over the Internet.

The process of developing software involves several stages. The stages include software design, programming, testing, release, and maintenance. Software quality assurance and security are critical aspects of software development, as bugs and security vulnerabilities can lead to system failures and security breaches. Additionally, legal issues such as software licenses and intellectual property rights play a significant role in the distribution of software products.

History

The integrated circuit was an essential invention for producing modern software systems.

The first use of the word software is credited to mathematician John Wilder Tukey in 1958. The first programmable computers, which appeared at the end of the 1940s, were programmed in machine language. Machine language is difficult to debug and not portable across different computers. Initially, hardware resources were more expensive than human resources. As programs became complex, programmer productivity became the bottleneck. The introduction of high-level programming languages in 1958 hid the details of the hardware and expressed the underlying algorithms in the code. Early languages include Fortran, Lisp, and COBOL.

Types

A diagram showing how the user interacts with application software on a typical desktop computer. The application software layer interfaces with the operating system, which in turn communicates with the hardware. The arrows indicate information flow.

There are two main types of software:

  • Operating systems are "the layer of software that manages a computer's resources for its users and their applications". There are three main purposes that an operating system fulfills:
    • Allocating resources between different applications, deciding when they will receive central processing unit (CPU) time or space in memory.
    • Providing an interface that abstracts the details of accessing hardware (like physical memory) to make things easier for programmers.
    • Offering common services, such as an interface for accessing network and disk devices. This enables an application to be run on different hardware without needing to be rewritten.
  • Application software runs on top of the operating system and uses the computer's resources to perform a task. There are many different types of application software because the range of tasks that can be performed with modern computers is so large. Applications account for most software and require the environment provided by an operating system, and often other applications, in order to function.
Comparison of on-premise hardware and software, infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS)

Software can also be categorized by how it is deployed. Traditional applications are purchased with a perpetual license for a specific version of the software, downloaded, and run on hardware belonging to the purchaser. The rise of the Internet and cloud computing enabled a new model, software as a service (SaaS), in which the provider hosts the software (usually built on top of rented infrastructure or platforms) and provides the use of the software to customers, often in exchange for a subscription fee. By 2023, SaaS products—which are usually delivered via a web application—had become the primary method by which companies deliver applications.

Software development and maintenance

Diagram for a traditional software development life cycle from 1988. The numbers represent the typical cost of each phase.

Software companies aim to deliver a high-quality product on time and under budget. A challenge is that software development effort estimation is often inaccurate. Software development begins by conceiving the project, evaluating its feasibility, analyzing the business requirements, and making a software design. Most software projects speed up their development by reusing or incorporating existing software, either in the form of commercial off-the-shelf (COTS) or open-source software. Software quality assurance is typically a combination of manual code review by other engineers and automated software testing. Due to time constraints, testing cannot cover all aspects of the software's intended functionality, so developers often focus on the most critical functionality. Formal methods are used in some safety-critical systems to prove the correctness of code, while user acceptance testing helps to ensure that the product meets customer expectations. There are a variety of software development methodologies, which vary from completing all steps in order to concurrent and iterative models. Software development is driven by requirements taken from prospective users, as opposed to maintenance, which is driven by events such as a change request.

Frequently, software is released in an incomplete state when the development team runs out of time or funding. Despite testing and quality assurance, virtually all software contains bugs where the system does not work as intended. Post-release software maintenance is necessary to remediate these bugs when they are found and keep the software working as the environment changes over time. New features are often added after the release. Over time, the level of maintenance becomes increasingly restricted before being cut off entirely when the product is withdrawn from the market. As software ages, it becomes known as legacy software and can remain in use for decades, even if there is no one left who knows how to fix it. Over the lifetime of the product, software maintenance is estimated to comprise 75 percent or more of the total development cost.

Completing a software project involves various forms of expertise, not just in software programming but also in testing, documentation writing, project management, graphic design, user experience, user support, marketing, and fundraising.

Quality and security

Software quality is defined as meeting the stated requirements as well as customer expectations. Quality is an overarching term that can refer to a code's correct and efficient behavior, its reusability and portability, or the ease of modification. It is usually more cost-effective to build quality into the product from the beginning rather than try to add it later in the development process. Higher quality code will reduce lifetime cost to both suppliers and customers as it is more reliable and easier to maintain. Software failures in safety-critical systems can have very serious consequences, including death. By some estimates, the cost of poor quality software can be as high as 20 to 40 percent of sales. Despite developers' goal of delivering a product that works entirely as intended, virtually all software contains bugs.

The rise of the Internet also greatly increased the need for computer security as it enabled malicious actors to conduct cyberattacks remotely. If a bug creates a security risk, it is called a vulnerability. Software patches are often released to fix identified vulnerabilities, but those that remain unknown (zero days) as well as those that have not been patched are still liable for exploitation. Vulnerabilities vary in their ability to be exploited by malicious actors, and the actual risk is dependent on the nature of the vulnerability as well as the value of the surrounding system. Although some vulnerabilities can only be used for denial of service attacks that compromise a system's availability, others allow the attacker to inject and run their own code (called malware), without the user being aware of it. To thwart cyberattacks, all software in the system must be designed to withstand and recover from external attack. Despite efforts to ensure security, a significant fraction of computers are infected with malware.

Encoding and execution

Programming languages

The source code for a computer program in C. The gray lines are comments that explain the program to humans. When compiled and run, it will give the output "Hello, world!".
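The program itself is the canonical example; it is reproduced here as a sketch, with the comments playing the role of the gray lines mentioned in the caption:

    /* This program prints a greeting to standard output. */
    #include <stdio.h>

    int main(void) {
        printf("Hello, world!\n");  /* write the message */
        return 0;                   /* report success to the operating system */
    }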

Programming languages are the format in which software is written. Since the 1950s, thousands of different programming languages have been invented; some have been in use for decades, while others have fallen into disuse. Some definitions classify machine code—the exact instructions directly implemented by the hardware—and assembly language—a more human-readable alternative to machine code whose statements can be translated one-to-one into machine code—as programming languages. Programs written in the high-level programming languages used to create software share a few main characteristics: knowledge of machine code is not necessary to write them, they can be ported to other computer systems, and they are more concise and human-readable than machine code. They must be both human-readable and capable of being translated into unambiguous instructions for computer hardware.

Compilation, interpretation, and execution

The invention of high-level programming languages was simultaneous with the compilers needed to translate them automatically into machine code. Most programs do not contain all the resources needed to run them and rely on external libraries. Part of the compiler's function is to link these files in such a way that the program can be executed by the hardware. Once compiled, the program can be saved as an object file and the loader (part of the operating system) can take this saved file and execute it as a process on the computer hardware. Some programming languages use an interpreter instead of a compiler. An interpreter converts the program into machine code at run time, which typically makes interpreted programs 10 to 100 times slower than compiled ones.

Liability

Software is often released with the knowledge that it is incomplete or contains bugs. Purchasers knowingly buy it in this state, which has led to a legal regime where liability for software products is significantly curtailed compared to other products.

Licenses

Blender, a free software program

Source code is protected by copyright law that vests the owner with the exclusive right to copy the code. The underlying ideas or algorithms are not protected by copyright law, but are often treated as a trade secret and concealed by such methods as non-disclosure agreements. Software copyright has been recognized since the mid-1970s and is vested in the company that makes the software, not the employees or contractors who wrote it. The use of most software is governed by an agreement (software license) between the copyright holder and the user. Proprietary software is usually sold under a restrictive license that limits copying and reuse (often enforced with tools such as digital rights management (DRM)). Open-source licenses, in contrast, allow free use and redistribution of software with few conditions. Most open-source licenses used for software require that modifications be released under the same license, which can create complications when open-source software is reused in proprietary projects.

Patents

Patents give an inventor an exclusive, time-limited license for a novel product or process. Ideas about what software could accomplish are not protected by law and concrete implementations are instead covered by copyright law. In some countries, a requirement for the claimed invention to have an effect on the physical world may also be part of the requirements for a software patent to be held valid. Software patents have been historically controversial. Before the 1998 case State Street Bank & Trust Co. v. Signature Financial Group, Inc., software patents were generally not recognized in the United States. In that case, the Court of Appeals for the Federal Circuit decided that business processes could be patented. Patent applications are complex and costly, and lawsuits involving patents can drive up the cost of products. Unlike copyrights, patents generally only apply in the jurisdiction where they were issued.

Impact

Computer-generated simulations are one of the advances enabled by software.
Engineer Capers Jones writes that "computers and software are making profound changes to every aspect of human life: education, work, warfare, entertainment, medicine, law, and everything else". Software has become ubiquitous in everyday life in developed countries. In many cases, software augments the functionality of existing technologies such as household appliances and elevators. Software also spawned entirely new technologies such as the Internet, video games, mobile phones, and GPS. New methods of communication, including email, forums, blogs, microblogging, wikis, and social media, were enabled by the Internet. Massive amounts of knowledge exceeding any paper-based library are now available with a quick web search. Most creative professionals have switched to software-based tools such as computer-aided design, 3D modeling, digital image editing, and computer animation. Almost every complex device is controlled by software.

Gamma correction

The effect of gamma correction on an image: the original image was taken to varying powers, showing that powers larger than 1 make the shadows darker, while powers smaller than 1 make dark regions lighter. The powers shown are for illustration and are not the image's actual encoded gamma.

Gamma correction or gamma is a nonlinear operation used to encode and decode luminance or tristimulus values in video or still image systems. Gamma correction is, in the simplest cases, defined by the following power-law expression:

    V_out = A * V_in^γ

where the non-negative real input value V_in is raised to the power γ and multiplied by the constant A to get the output value V_out. In the common case of A = 1, inputs and outputs are typically in the range 0–1.

A gamma value γ < 1 is sometimes called an encoding gamma, and the process of encoding with this compressive power-law nonlinearity is called gamma compression; conversely, a gamma value γ > 1 is called a decoding gamma, and the application of the expansive power-law nonlinearity is called gamma expansion.
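As a concrete illustration, here is a minimal sketch in C of the encode/decode pair under the simple power-law model above, assuming A = 1, values normalized to 0–1, and the decoding gamma passed as a parameter (the function names are illustrative):

    #include <math.h>

    /* Gamma compression (encoding): exponent 1/gamma < 1 for gamma > 1. */
    double gamma_encode(double linear, double gamma) {
        return pow(linear, 1.0 / gamma);  /* gamma = 2.2 gives exponent ~0.45 */
    }

    /* Gamma expansion (decoding): the inverse power. */
    double gamma_decode(double encoded, double gamma) {
        return pow(encoded, gamma);
    }

Encoding followed by decoding with the same gamma returns the original value, which is the round trip a display system relies on.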

Explanation

Gamma encoding of images is used to optimize the usage of bits when encoding an image, or bandwidth used to transport an image, by taking advantage of the non-linear manner in which humans perceive light and color. The human perception of brightness (lightness), under common illumination conditions (neither pitch black nor blindingly bright), follows an approximate power function (which has no relation to the gamma function), with greater sensitivity to relative differences between darker tones than between lighter tones, consistent with the Stevens power law for brightness perception. If images are not gamma-encoded, they allocate too many bits or too much bandwidth to highlights that humans cannot differentiate, and too few bits or too little bandwidth to shadow values that humans are sensitive to and would require more bits/bandwidth to maintain the same visual quality. Gamma encoding of floating-point images is not required (and may be counterproductive), because the floating-point format already provides a piecewise linear approximation of a logarithmic curve.

Although gamma encoding was developed originally to compensate for the brightness characteristics of cathode-ray tube (CRT) displays, that is not its main purpose or advantage in modern systems. In CRT displays, the light intensity varies nonlinearly with the electron-gun voltage. Altering the input signal by gamma compression can cancel this nonlinearity, such that the output picture has the intended luminance. However, the gamma characteristics of the display device do not play a factor in the gamma encoding of images and video. They need gamma encoding to maximize the visual quality of the signal, regardless of the gamma characteristics of the display device. The similarity of CRT physics to the inverse of gamma encoding needed for video transmission was a combination of coincidence and engineering, which simplified the electronics in early television sets.

Photographic film has a much greater ability to record fine differences in shade than can be reproduced on photographic paper. Similarly, most video screens are not capable of displaying the range of brightnesses (dynamic range) that can be captured by typical electronic cameras. For this reason, considerable artistic effort is invested in choosing the reduced form in which the original image should be presented. The gamma correction, or contrast selection, is part of the photographic repertoire used to adjust the reproduced image.

Analogously, digital cameras record light using electronic sensors that usually respond linearly. In the process of rendering linear raw data to conventional RGB data (e.g. for storage into JPEG image format), color space transformations and rendering transformations will be performed. In particular, almost all standard RGB color spaces and file formats use a non-linear encoding (a gamma compression) of the intended intensities of the primary colors of the photographic reproduction. In addition, the intended reproduction is almost always nonlinearly related to the measured scene intensities, via a tone reproduction nonlinearity.

Generalized gamma

The concept of gamma can be applied to any nonlinear relationship. For the power-law relationship V_out = V_in^γ, the curve on a log–log plot is a straight line, with slope everywhere equal to gamma:

    γ = d log(V_out) / d log(V_in)
That is, gamma can be visualized as the slope of the input–output curve when plotted on logarithmic axes. For a power-law curve, this slope is constant, but the idea can be extended to any type of curve, in which case gamma (strictly speaking, "point gamma") is defined as the slope of the curve in any particular region.

Film photography

Characteristic curve of a photographic film. The slope of its linear section is called the gamma of the film.

When a photographic film is exposed to light, the result of the exposure can be represented on a graph showing log of exposure on the horizontal axis, and density, or negative log of transmittance, on the vertical axis. For a given film formulation and processing method, this curve is its characteristic or Hurter–Driffield curve. Since both axes use logarithmic units, the slope of the linear section of the curve is called the gamma of the film. Negative film typically has a gamma less than 1; positive film (slide film, reversal film) typically has a gamma with absolute value greater than 1.

Standard gammas

Analog TV

Output to CRT-based television receivers and monitors does not usually require further gamma correction. The standard video signals that are transmitted or stored in image files incorporate gamma compression matching the gamma expansion of the CRT (although it is not the exact inverse). For television signals, gamma values are fixed and defined by the analog video standards. CCIR Systems M and N, associated with NTSC color, use gamma 2.2; systems B/G, H, I, D/K, K1 and L, associated with PAL or SECAM color, use gamma 2.8.

Computer displays

In most computer display systems, images are encoded with a gamma of about 0.45 and decoded with the reciprocal gamma of 2.2. A notable exception, until the release of Mac OS X 10.6 (Snow Leopard) in September 2009, was the Macintosh, which encoded with a gamma of 0.55 and decoded with a gamma of 1.8. In any case, binary data in still image files (such as JPEG) are explicitly encoded (that is, they carry gamma-encoded values, not linear intensities), as are motion picture files (such as MPEG). The system can optionally further manage both cases, through color management, if a better match to the output device gamma is required.

Plot of the sRGB standard gamma-expansion nonlinearity in red, and its local gamma value (slope in log–log space) in blue. The local gamma rises from 1 to about 2.2.

The sRGB color space standard used with most cameras, PCs, and printers does not use a simple power-law nonlinearity as above, but has a decoding gamma value near 2.2 over much of its range, as shown in the accompanying plot. Below a compressed value of 0.04045, or a linear intensity of 0.00313, the curve is linear (encoded value proportional to intensity), so γ = 1. The dashed black curve behind the red curve is a standard γ = 2.2 power-law curve, for comparison.
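The piecewise sRGB decoding function can be written out directly; here is a sketch in C using the constants quoted above (the 0.04045 threshold and the 2.4 exponent):

    #include <math.h>

    /* Decode an sRGB-encoded value in 0-1 to linear intensity in 0-1. */
    double srgb_to_linear(double c) {
        if (c <= 0.04045)
            return c / 12.92;                  /* linear segment near zero */
        return pow((c + 0.055) / 1.055, 2.4);  /* power-law segment */
    }

Although the exponent is 2.4, the offset and linear segment make the overall curve track a plain gamma-2.2 power law, which is why 2.2 is the value usually quoted for sRGB.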

Gamma correction in computers is used, for example, to display a gamma = 1.8 Apple picture correctly on a gamma = 2.2 PC monitor by changing the image gamma. Another usage is equalizing of the individual color-channel gammas to correct for monitor discrepancies.

Gamma meta information

Some picture formats allow an image's intended gamma (of transformations between encoded image samples and light output) to be stored as metadata, facilitating automatic gamma correction. The PNG specification includes the gAMA chunk for this purpose, and with formats such as JPEG and TIFF, the Exif Gamma tag can be used. Some formats can specify an ICC profile, which includes a transfer function.

These features have historically caused problems, especially on the web. For HTML and CSS colors and JPG or GIF images without attached color profile metadata, popular browsers passed numerical color values to the display without color management, resulting in substantially different appearance between devices; however, those same browsers sent images with gamma explicitly set in metadata through color management, and also applied a default gamma to PNG images whose metadata was omitted. This made it impossible for PNG images to simultaneously match HTML or untagged JPG colors on every device. The situation has since improved, as most major browsers now respect the gamma setting (or its absence).

Power law for video display

A gamma characteristic is a power-law relationship that approximates the relationship between the encoded luma in a television system and the actual desired image luminance.

With this nonlinear relationship, equal steps in encoded luminance correspond roughly to subjectively equal steps in brightness. Ebner and Fairchild used an exponent of 0.43 to convert linear intensity into lightness (luma) for neutrals; the reciprocal, approximately 2.33 (quite close to the 2.2 figure cited for a typical display subsystem), was found to provide approximately optimal perceptual encoding of grays.

The following illustration shows the difference between a scale with linearly-increasing encoded luminance signal (linear gamma-compressed luma input) and a scale with linearly-increasing intensity scale (linear luminance output).

[Figure: two gray scales stepping from 0.0 to 1.0 in increments of 0.1, one with linearly increasing encoded value VS and the other with linearly increasing light intensity I.]

On most displays (those with gamma of about 2.2), one can observe that the linear-intensity scale has a large jump in perceived brightness between the intensity values 0.0 and 0.1, while the steps at the higher end of the scale are hardly perceptible. The gamma-encoded scale, which has a nonlinearly-increasing intensity, will show much more even steps in perceived brightness.

A CRT, for example, converts a video signal to light in a nonlinear way, because the electron gun's intensity (brightness) as a function of applied video voltage is nonlinear. The light intensity I is related to the source voltage Vs according to

    I = Vs^γ

where γ is the Greek letter gamma. For a CRT, the gamma that relates brightness to voltage is usually in the range 2.35 to 2.55; video look-up tables in computers usually adjust the system gamma to the range 1.8 to 2.2, which is in the region that makes a uniform encoding difference give approximately uniform perceptual brightness difference, as illustrated in the diagram at the top of this section.

For simplicity, consider the example of a monochrome CRT. In this case, when a video signal of 0.5 (representing a mid-gray) is fed to the display, the intensity or brightness is about 0.22 (a dark gray, about 22% of the intensity of white). Pure black (0.0) and pure white (1.0) are the only shades that are unaffected by gamma.

To compensate for this effect, the inverse transfer function (gamma correction) is sometimes applied to the video signal so that the end-to-end response is linear. In other words, the transmitted signal is deliberately distorted so that, after it has been distorted again by the display device, the viewer sees the correct brightness. The inverse of the function above is

    Vc = Vs^(1/γ)

where Vc is the corrected voltage and Vs is the source voltage, for example, from an image sensor that converts photocharge linearly to a voltage. In our CRT example, 1/γ is 1/2.2 ≈ 0.45.
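A sketch of this correction step in C, using the monochrome CRT example above (γ = 2.2, so an uncorrected signal of 0.5 would display at about 0.22):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double gamma = 2.2;
        double vs = 0.5;                   /* linear signal from the sensor */
        double vc = pow(vs, 1.0 / gamma);  /* corrected voltage, ~0.73 */
        double shown = pow(vc, gamma);     /* CRT response restores ~0.5 */
        printf("corrected = %.3f, displayed = %.3f\n", vc, shown);
        return 0;
    }

The deliberate pre-distortion and the display's own nonlinearity cancel, giving the linear end-to-end response described above.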

A color CRT receives three video signals (red, green, and blue) and in general each color has its own value of gamma, denoted γR, γG or γB. However, in simple display systems, a single value of γ is used for all three colors.

Other display devices have different values of gamma: for example, a Game Boy Advance display has a gamma between 3 and 4, depending on lighting conditions. In LCDs, such as those on laptop computers, the relation between the signal voltage Vs and the intensity I is very nonlinear and cannot be described with a single gamma value. However, such displays apply a correction to the signal voltage in order to approximate a standard γ = 2.5 behavior. In NTSC television recording, γ = 2.2.

The power-law function, or its inverse, has a slope of infinity at zero. This leads to problems in converting to and from a gamma colorspace. For this reason, most formally defined colorspaces, such as sRGB, define a straight-line segment near zero and elsewhere raise x + K (where K is a constant) to a power, so the curve has continuous slope. This straight line does not represent what the CRT does, but it does make the rest of the curve more closely match the effect of ambient light on the CRT. In such expressions the exponent is not the gamma; for instance, the sRGB function uses a power of 2.4, but more closely resembles a power-law function with an exponent of 2.2 and no linear portion.

Methods to perform display gamma correction in computing

Up to four elements can be manipulated in order to achieve gamma encoding to correct the image to be shown on a typical 2.2- or 1.8-gamma computer display:

  • The pixel's intensity values in a given image file; that is, the binary pixel values are stored in the file in such a way that they represent light intensity via gamma-compressed values rather than a linear encoding. This is done systematically with digital video files (such as those in a DVD movie), in order to minimize the gamma-decoding step during playback and maximize image quality for the given storage. Similarly, pixel values in standard image file formats are usually gamma-compensated, either for sRGB gamma (or an equivalent approximation of typical legacy monitor gammas) or according to some gamma specified by metadata such as an ICC profile. If the encoding gamma does not match the reproduction system's gamma, further correction may be done, either on display or by creating a modified image file with a different profile.
  • The rendering software writes gamma-encoded pixel binary values directly to the video memory (when highcolor/truecolor modes are used) or to the CLUT hardware registers (when indexed color modes are used) of the display adapter. These drive digital-to-analog converters (DACs) that output proportional voltages to the display. For example, when using 24-bit RGB color (8 bits per channel), writing a value of 128 (the rounded midpoint of the 0–255 byte range) to video memory outputs a proportional ≈ 0.5 voltage to the display, which is shown darker than half intensity due to the monitor's behavior. Alternatively, to achieve ≈ 50% intensity, the rendering software can apply a gamma-encoded look-up table and write a value near 187 instead of 128 (see the sketch after this list).
  • Modern display adapters have dedicated calibrating CLUTs, which can be loaded once with the appropriate gamma-correction look-up table in order to modify the encoded signals digitally before the DACs that output voltages to the monitor. Setting up these tables to be correct is called hardware calibration.
  • Some modern monitors allow the user to manipulate their gamma behavior (as if it were merely another brightness/contrast-like setting), encoding the input signals themselves before they are displayed on screen. This is also a hardware calibration technique, but it is performed on the analog electrical signals rather than by remapping digital values, as in the previous cases.
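To make the 128-versus-187 example concrete, here is a sketch in C of building the kind of gamma-correction look-up table that a display adapter's calibration CLUT could be loaded with (the display gamma of 2.2 is an assumed typical value):

    #include <math.h>

    /* Build a 256-entry gamma-correction look-up table. */
    void build_gamma_lut(unsigned char lut[256], double display_gamma) {
        for (int i = 0; i < 256; i++) {
            double linear = i / 255.0;
            double corrected = pow(linear, 1.0 / display_gamma);
            lut[i] = (unsigned char)(corrected * 255.0 + 0.5);
        }
    }

With display_gamma = 2.2, lut[128] comes out to 186, in line with the value near 187 quoted above, so a mid-range linear value is encoded high enough to survive the monitor's darkening response.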

In a correctly calibrated system, each component will have a specified gamma for its input and/or output encodings. Stages may change the gamma to correct for different requirements, and finally the output device will do gamma decoding or correction as needed, to get to a linear intensity domain. All the encoding and correction methods can be arbitrarily superimposed, without mutual knowledge of this fact among the different elements; if done incorrectly, these conversions can lead to highly distorted results, but if done correctly as dictated by standards and conventions will lead to a properly functioning system.

In a typical system, for example from camera through JPEG file to display, the role of gamma correction involves several cooperating parts. The camera encodes its rendered image into the JPEG file using one of the standard gamma values, such as 2.2, for storage and transmission. The display computer may use a color management engine to convert to a different color space (such as an older Macintosh's γ = 1.8 color space) before putting pixel values into its video memory. The monitor may do its own gamma correction to match the CRT gamma to that used by the video system. Coordinating the components via standard interfaces with default standard gamma values makes it possible to get such a system properly configured.

Simple monitor tests

Gamma correction test image. Only valid at browser zoom = 100%

This procedure is useful for making a monitor display images approximately correctly, on systems in which profiles are not used (for example, the Firefox browser prior to version 3.0 and many others) or in systems that assume untagged source images are in the sRGB colorspace.

In the test pattern, the intensity of each solid color bar is intended to be the average of the intensities in the surrounding striped dither; therefore, ideally, the solid areas and the dithers should appear equally bright in a system properly adjusted to the indicated gamma.

Normally a graphics card has contrast and brightness control and a transmissive LCD monitor has contrast, brightness, and backlight control. Graphics card and monitor contrast and brightness have an influence on effective gamma, and should not be changed after gamma correction is completed.

The top two bars of the test image help to set correct contrast and brightness values. There are eight three-digit numbers in each bar. A good monitor with proper calibration shows the six numbers on the right in both bars; a cheap monitor shows only four.

Given a desired display-system gamma, if the observer sees the same brightness in the checkered part and in the homogeneous part of every colored area, then the gamma correction is approximately correct. In many cases the gamma correction values for the primary colors are slightly different.

Setting the color temperature or white point is the next step in monitor adjustment.

Before gamma correction, the desired gamma and color temperature should be set using the monitor controls. Using the controls for gamma, contrast, and brightness, the gamma correction on an LCD can only be done for one specific vertical viewing angle (which implies one specific horizontal line on the monitor) at one specific brightness and contrast level. An ICC profile allows one to adjust the monitor for several brightness levels. The quality (and price) of the monitor determines how much deviation from this operating point still gives a satisfactory gamma correction. Twisted nematic (TN) displays with 6-bit color depth per primary color have the lowest quality; in-plane switching (IPS) displays, typically with 8-bit color depth, are better. Good monitors have 10-bit color depth, hardware color management, and support for hardware calibration with a tristimulus colorimeter. Often a 6-bit-plus-FRC panel is sold as 8-bit, and an 8-bit-plus-FRC panel is sold as 10-bit; frame rate control (FRC) is no true replacement for more bits. The 24-bit and 32-bit color depth formats have 8 bits per primary color.

With Microsoft Windows 7 and above, the user can set the gamma correction through the display color calibration tool dccw.exe or other programs. These programs create an ICC profile file and load it as the default, which makes color management easy. Increase the gamma slider in dccw until the last colored area (often the green one) has the same brightness in its checkered and homogeneous parts, then use the color balance or per-color gamma correction sliders to adjust the other two colors. Some old graphics card drivers do not load the color look-up table correctly after waking from standby or hibernation and show the wrong gamma; in that case, update the graphics card driver.

On some operating systems running the X Window System, one can set the gamma correction factor (applied to the existing gamma value) by issuing the command xgamma -gamma 0.9 to set the factor to 0.9, or xgamma alone to query the current value (the default is 1.0). In macOS, the gamma and other related screen calibrations are made through the System Preferences.

Scaling and blending

Generally, operations on pixel values should be performed in "linear light" (gamma 1). Eric Brasseur discusses the issue at length and provides test images. They serve to point out a widespread problem: Many programs perform scaling in a color space with gamma, instead of a physically correct linear space. The test images are constructed so as to have a drastically different appearance when downsampled incorrectly. Jonas Berlin has created a "your scaling software sucks/rules" image based on this principle.
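A sketch in C of the difference, averaging two sRGB-encoded values the wrong way (directly, in gamma space) and the physically correct way (decode to linear light, average, re-encode); the two helpers follow the standard sRGB formulas discussed earlier:

    #include <math.h>

    double srgb_to_linear(double c) {
        return (c <= 0.04045) ? c / 12.92
                              : pow((c + 0.055) / 1.055, 2.4);
    }

    double linear_to_srgb(double l) {
        return (l <= 0.0031308) ? l * 12.92
                                : 1.055 * pow(l, 1.0 / 2.4) - 0.055;
    }

    /* Average two encoded pixel values in linear light. */
    double average_linear(double a, double b) {
        return linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2.0);
    }

For inputs of 0.0 and 1.0, average_linear returns about 0.735, while gamma-space averaging gives 0.5, a visibly darker and physically wrong result, which is exactly the defect the test images expose.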

In addition to scaling, the problem also applies to other forms of downsampling (scaling down), such as chroma subsampling in JPEG's gamma-enabled Y′CbCr. WebP solves this problem by calculating the chroma averages in linear space then converting back to a gamma-enabled space; an iterative solution is used for larger images. The same sharp YUV (formerly smart YUV) code is used in sjpeg and optionally in AVIF. Kornelski provides a simpler approximation by luma-based weighted average. Alpha compositing, color gradients, and 3D rendering are also affected by this issue.

Paradoxically, when upsampling (scaling up) an image, the result processed in a "wrong" (non-physical) gamma color space is often more aesthetically pleasing. This is because resampling filters with negative lobes like Mitchell–Netravali and Lanczos create ringing artifacts linearly even though human perception is non-linear and better approximated by gamma. (Emulating "stepping back," which motivates downsampling in linear light (gamma=1), does not apply when upsampling.) A related method of reducing the visibility of ringing artifacts consists of using a sigmoidal light transfer function as pioneered by ImageMagick and GIMP's LoHalo filter and adapted to video upsampling by madVR, AviSynth and Mpv.

Terminology

The term intensity refers strictly to the amount of light that is emitted per unit of time and per unit of surface, in units of lux. Note, however, that in many fields of science this quantity is called luminous exitance, as opposed to luminous intensity, which is a different quantity. These distinctions, however, are largely irrelevant to gamma compression, which is applicable to any sort of normalized linear intensity-like scale.

"Luminance" can mean several things even within the context of video and imaging:

  • luminance is the photometric brightness of an object (in units of cd/m2), taking into account the wavelength-dependent sensitivity of the human eye (the photopic curve);
  • relative luminance is the luminance relative to a white level, used in a color-space encoding;
  • luma is the encoded video brightness signal, i.e., similar to the signal voltage VS.

One contrasts relative luminance in the sense of color (no gamma compression) with luma in the sense of video (with gamma compression), and denotes relative luminance by Y and luma by Y′, the prime symbol (′) denoting gamma compression. Note that luma is not calculated directly from luminance; it is the (somewhat arbitrary) weighted sum of gamma-compressed RGB components.
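For example, here is a sketch in C of the Rec. 601 weighting commonly used for standard-definition video (other standards, such as Rec. 709, use different coefficients):

    /* Luma (Y') as a weighted sum of gamma-compressed R'G'B' components,
       using Rec. 601 coefficients. Inputs are already gamma-encoded. */
    double luma_rec601(double r_prime, double g_prime, double b_prime) {
        return 0.299 * r_prime + 0.587 * g_prime + 0.114 * b_prime;
    }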

Likewise, brightness is sometimes applied to various measures, including light levels, though it more properly applies to a subjective visual attribute.

Gamma correction is a type of power law function whose exponent is the Greek letter gamma (γ). It should not be confused with the mathematical Gamma function. The lower case gamma, γ, is a parameter of the former; the upper case letter, Γ, is the name of (and symbol used for) the latter (as in Γ(x)). To use the word "function" in conjunction with gamma correction, one may avoid confusion by saying "generalized power law function".

Without context, a value labeled gamma might be either the encoding or the decoding value. Care must be taken to interpret whether a stated value is the one to be applied as compensation or the one to be compensated for by applying its inverse. In common parlance, the decoding value (such as 2.2) is often quoted as if it were the encoding value, rather than its inverse (1/2.2 in this case), which is the value that must actually be applied to encode gamma.

Image editing

An image that has been digitally edited to include additional rose blooms

Image editing encompasses the processes of altering images, whether they are digital photographs, traditional photo-chemical photographs, or illustrations. Traditional analog image editing is known as photo retouching, using tools such as an airbrush to modify photographs or edit illustrations with any traditional art medium. Graphic software programs, which can be broadly grouped into vector graphics editors, raster graphics editors, and 3D modelers, are the primary tools with which a user may manipulate, enhance, and transform images. Many image editing programs are also used to render or create computer art from scratch. The term "image editing" usually refers only to the editing of 2D images, not 3D ones.

Basics of image editing

Raster images are stored on a computer in the form of a grid of picture elements, or pixels. These pixels contain the image's color and brightness information. Image editors can change the pixels to enhance the image in many ways. The pixels can be changed as a group or individually by the sophisticated algorithms within the image editors. This article mostly refers to bitmap graphics editors, which are often used to alter photographs and other raster graphics. However, vector graphics software, such as Adobe Illustrator, CorelDRAW, Xara Designer Pro or Inkscape, is used to create and modify vector images, which are stored as descriptions of lines, Bézier curves, and text instead of pixels. It is easier to rasterize a vector image than to vectorize a raster image; how to go about vectorizing a raster image is the focus of much research in the field of computer vision. Vector images can be modified more easily because they contain descriptions of the shapes for easy rearrangement. They are also scalable, being rasterizable at any resolution.

Automatic image enhancement

Camera or computer image editing programs often offer basic automatic image enhancement features that correct color hue and brightness imbalances, as well as other image editing features such as red-eye removal, sharpness adjustment, zoom, and automatic cropping. These are called automatic because they generally happen without user interaction or are offered with one click of a button or by selecting an option from a menu. Additionally, some automatic editing features combine several editing actions with little or no user interaction.

Super-resolution imaging

There is promising research on using deep convolutional networks to perform super-resolution. In particular, work has demonstrated transforming a 20× microscope image of pollen grains into an image resembling a 1500× scanning electron microscope image. While this technique can increase the apparent information content of an image, there is no guarantee that the upscaled features exist in the original image, and deep convolutional upscalers should not be used in analytical applications with ambiguous inputs. These methods can hallucinate image features, which can make them unsafe for medical use.

Digital data compression

Many image file formats use data compression to reduce file size and save storage space. Digital compression of images may take place in the camera, or can be done on the computer with the image editor. When images are stored in JPEG format, compression has already taken place. Both cameras and computer programs allow the user to set the level of compression.

Some compression algorithms, such as those used in PNG file format, are lossless, which means no information is lost when the file is saved. By contrast, the more popular JPEG file format uses a lossy compression algorithm (based on discrete cosine transform coding) by which the greater the compression, the more information is lost, ultimately reducing image quality or detail that can not be restored. JPEG uses knowledge of the way the human visual system perceives color to make this loss of detail less noticeable.

Image editor features

Listed below are some of the most used capabilities of the better graphics manipulation programs. The list is by no means all-inclusive. There are a myriad of choices associated with the application of most of these features.

Selection

One of the prerequisites for many of the applications mentioned below is a method of selecting part(s) of an image, thus applying a change selectively without affecting the entire picture. Most graphics programs have several means of accomplishing this, such as:

  • a marquee tool for selecting rectangular or other regular polygon-shaped regions,
  • a lasso tool for freehand selection of a region,
  • a magic wand tool that selects objects or regions in the image defined by proximity of color or luminance,
  • vector-based pen tools,

as well as more advanced facilities such as edge detection, masking, alpha compositing, and color and channel-based extraction. The border of a selected area in an image is often animated with the marching ants effect to help the user to distinguish the selection border from the image background.

Layers

Leonardo da Vinci's Vitruvian Man overlaid with Goethe's Color Wheel using a screen layer in Adobe Photoshop. Screen layers can be helpful in graphic design and in creating multiple exposures in photography.
Leonardo da Vinci's Vitruvian Man overlaid with a soft light layer of Moses Harris's Color Wheel and a soft light layer of Ignaz Schiffermüller's Color Wheel. Soft light layers have a darker, more translucent look than screen layers.

Another feature common to many graphics applications is that of Layers, which are analogous to sheets of transparent acetate (each containing separate elements that make up a combined picture), stacked on top of each other, each capable of being individually positioned, altered, and blended with the layers below, without affecting any of the elements on the other layers. This is a fundamental workflow that has become the norm for the majority of programs on the market today, and enables maximum flexibility for the user while maintaining non-destructive editing principles and ease of use.

Image size alteration

Image editors can resize images in a process often called image scaling, making them larger or smaller. High-resolution cameras can produce large images, which are often reduced in size for Internet use. Image editor programs use a mathematical process called resampling to calculate new pixel values whose spacing is larger or smaller than the original pixel values. Images for Internet use are kept small, say 640 × 480 pixels, which equals about 0.3 megapixels.
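As a minimal illustration of resampling, here is a sketch in C of the simplest method, nearest-neighbor (real editors use better filters such as bilinear or Lanczos; the bare grayscale buffer representation is an assumption for illustration):

    /* Resize a grayscale image with nearest-neighbor resampling.
       src is src_w x src_h, dst is dst_w x dst_h, both row-major. */
    void resize_nearest(const unsigned char *src, int src_w, int src_h,
                        unsigned char *dst, int dst_w, int dst_h) {
        for (int y = 0; y < dst_h; y++) {
            int sy = y * src_h / dst_h;       /* nearest source row */
            for (int x = 0; x < dst_w; x++) {
                int sx = x * src_w / dst_w;   /* nearest source column */
                dst[y * dst_w + x] = src[sy * src_w + sx];
            }
        }
    }

Each destination pixel simply copies the closest source pixel; smoother filters instead compute a weighted average of several neighboring source pixels.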

Cropping an image

Digital editors are used to crop images. Cropping creates a new image by selecting a desired rectangular portion from the image being cropped. The unwanted part of the image is discarded. Image cropping does not reduce the resolution of the area cropped. Best results are obtained when the original image has a high resolution. A primary reason for cropping is to improve the image composition in the new image.

Uncropped image from camera
Lily cropped from larger image

Cutting out a part of an image from the background

Using a selection tool, the outline of the figure or element in the picture is traced/selected, and then the background is removed. Depending on how intricate the edge is, this may be more or less difficult to do cleanly. For example, individual hairs can require a lot of work. Hence the use of the "green screen" technique (chroma key), which allows the background to be removed easily.

Histogram

Image editors have provisions to create an image histogram of the image being edited. The histogram plots the number of pixels in the image (vertical axis) with a particular brightness value (horizontal axis). Algorithms in the digital editor allow the user to visually adjust the brightness value of each pixel and to dynamically display the results as adjustments are made. Improvements in picture brightness and contrast can thus be obtained.

Sunflower image
Histogram of Sunflower image
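The underlying computation is straightforward; here is a sketch in C for an 8-bit grayscale image (bin i counts the pixels whose brightness value is i):

    /* Compute a 256-bin brightness histogram of an 8-bit grayscale image. */
    void histogram(const unsigned char *pixels, int n, unsigned counts[256]) {
        for (int i = 0; i < 256; i++)
            counts[i] = 0;
        for (int i = 0; i < n; i++)
            counts[pixels[i]]++;  /* horizontal axis: value; vertical: count */
    }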

Noise reduction

Image editors may feature a number of algorithms which can add or remove noise in an image. Some JPEG artifacts can be removed; dust and scratches can be removed and an image can be de-speckled. Noise reduction merely estimates the state of the scene without the noise and is not a substitute for obtaining a "cleaner" image. Excessive noise reduction leads to a loss of detail, and its application is hence subject to a trade-off between the undesirability of the noise itself and that of the reduction artifacts.

Noise tends to invade images when pictures are taken in low light settings. A new picture can be given an 'antiqued' effect by adding uniform monochrome noise.

Removal of unwanted elements

Most image editors can be used to remove unwanted branches, etc., using a "clone" tool. Removing these distracting elements draws focus to the subject, improving overall composition.

Notice the branch in the original image
The eye is drawn to the center of the globe.

Selective color change

Some image editors have color swapping abilities to selectively change the color of specific items in an image, given that the selected items are within a specific color range.

Selective color change

Image orientation

Image orientation (from left to right): original, 30° CCW rotation, and flipped.

Image editors are capable of altering an image to be rotated in any direction and to any degree. Mirror images can be created and images can be horizontally flipped or vertically flopped. A small rotation of several degrees is often enough to level the horizon, correct verticality (of a building, for example), or both. Rotated images usually require cropping afterwards, in order to remove the resulting gaps at the image edges.

Perspective control and distortion

Perspective control: original (left), perspective distortion removed (right).

Some image editors allow the user to distort (or "transform") the shape of an image. While this might also be useful for special effects, it is the preferred method of correcting the typical perspective distortion that results from photographs being taken at an oblique angle to a rectilinear subject. Care is needed while performing this task, as the image is reprocessed using interpolation of adjacent pixels, which may reduce overall image definition. The effect mimics the use of a perspective control lens, which achieves a similar correction in-camera without loss of definition.

Lens correction

Photo manipulation packages have functions to correct images for various lens distortions, including pincushion, fisheye, and barrel distortions. The corrections are in most cases subtle, but can improve the appearance of some photographs.

Enhancing images

In computer graphics, the enhancement of an image is the process of improving the quality of a digitally stored image by manipulating the image with software. It is quite easy, for example, to make an image lighter or darker, or to increase or decrease contrast. Advanced photo enhancement software also supports many filters for altering images in various ways. Programs specialized for image enhancement are sometimes called image editors.

Sharpening and softening images

Graphics programs can be used to both sharpen and blur images in a number of ways, such as unsharp masking or deconvolution. Portraits often appear more pleasing when selectively softened (particularly the skin and the background) to better make the subject stand out. This can be achieved with a camera by using a large aperture, or in the image editor by making a selection and then blurring it. Edge enhancement is an extremely common technique used to make images appear sharper, although purists frown on the result as appearing unnatural.

Image sharpening: original (top), image sharpened (bottom).

Another form of image sharpening involves increasing local contrast. This is done by finding the average color of the pixels around each pixel within a specified radius, and then contrasting that pixel against the average color. This effect makes the image seem clearer, seemingly adding detail. An example of this effect can be seen to the right. It is widely used in the printing and photographic industries for increasing local contrast and sharpening images.
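This is essentially unsharp masking; here is a sketch in C of the per-pixel step, assuming the neighborhood average has already been computed:

    /* Sharpen one channel value by pushing it away from the local average.
       'amount' controls the strength (e.g. 0.5); the clamp keeps the
       result in the 0-1 range. */
    double sharpen_pixel(double value, double local_average, double amount) {
        double result = value + amount * (value - local_average);
        if (result < 0.0) result = 0.0;
        if (result > 1.0) result = 1.0;
        return result;
    }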

Selecting and merging of images

Photomontage of 16 photos which have been digitally manipulated in Photoshop to give the impression that it is a real landscape

Many graphics applications are capable of merging one or more individual images into a single file. The orientation and placement of each image can be controlled.

Selecting a raster image that is not rectangular requires separating the edges from the background, also known as silhouetting. This is the digital analog of cutting out the image from a physical picture. Clipping paths may be used to add silhouetted images to vector graphics or page layout files that retain vector data. Alpha compositing allows for soft, translucent edges when selecting images. There are a number of ways to silhouette an image with soft edges, including selecting the image or its background by sampling similar colors, selecting the edges by raster tracing, or converting a clipping path to a raster selection. Once the image is selected, it may be copied and pasted into another section of the same file, or into a separate file. The selection may also be saved in what is known as an alpha channel.

A popular way to create a composite image is to use transparent layers. The background image is used as the bottom layer, and the image with parts to be added is placed in a layer above it. Using an image layer mask, all but the parts to be merged are hidden from the layer, giving the impression that these parts have been added to the background layer. Performing a merge in this manner preserves all of the pixel data on both layers, making future changes to the merged image easier.
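The per-pixel blend behind such a mask is ordinary alpha compositing; here is a sketch in C, where mask is the layer mask value (1 fully shows the upper layer, 0 fully hides it):

    /* Composite one channel of an upper layer over a background,
       weighted by a layer mask value in the 0-1 range. */
    double composite(double upper, double background, double mask) {
        return mask * upper + (1.0 - mask) * background;
    }

For physically plausible results this blend, like scaling, is best performed on linear-light values rather than gamma-encoded ones, as discussed in the gamma correction section above.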

Slicing of images

A more recent tool in digital image editing software is the image slicer. Parts of images for graphical user interfaces or web pages are easily sliced, labeled and saved separately from whole images so the parts can be handled individually by the display medium. This is useful to allow dynamic swapping via interactivity or animating parts of an image in the final presentation.

Special effects

An example of some special effects that can be added to a picture

Image editors usually have a list of special effects that can create unusual results. Images may be skewed and distorted in various ways. Scores of special effects can be applied to an image which include various forms of distortion, artistic effects, geometric transforms and texture effects,[9] or combinations thereof.

A complex effect in the first image from the right


Using custom Curves settings in image editors such as Photoshop, one can mimic the "pseudo-solarisation" effect, better known in photographic circles as the Sabattier effect.

A pseudo-solarised color image

Clone Stamp Tool

The clone stamp tool selects and samples an area of the picture and then uses these pixels to paint over any marks. It acts like a brush, so its size can be changed, allowing cloning from just one pixel wide to hundreds. Its opacity can be changed to produce a subtle clone effect, and there is a choice between aligned and non-aligned sampling of the source area. In Photoshop this tool is called Clone Stamp, but it may also be called a rubber stamp tool.



Image after processing with the clone stamp tool

Change color depth

An example of converting an image from color to grayscale

It is possible, using the software, to change the color depth of images. Common color depths are 2, 4, 16, 256, 65,536 and 16.7 million colors. The JPEG and PNG image formats are capable of storing 16.7 million colors (equal to 256 luminance values per color channel). In addition, grayscale images of 8 bits or less can be created, usually via conversion and down-sampling from a full-color image. Grayscale conversion is useful for reducing the file size dramatically when the original photographic print was monochrome, but a color tint has been introduced due to aging effects.

Contrast change and brightening

An example of contrast correction. Left side of the image is untouched.

Image editors have provisions to simultaneously change the contrast of images and brighten or darken the image. Underexposed images can often be improved by using this feature. Recent advances have allowed more intelligent exposure correction whereby only pixels below a particular luminosity threshold are brightened, thereby brightening underexposed shadows without affecting the rest of the image. The exact transformation that is applied to each color channel can vary from editor to editor. GIMP applies the following formula:

    if (brightness < 0.0)
        value = value * (1.0 + brightness);
    else
        value = value + (1.0 - value) * brightness;

    value = (value - 0.5) * tan((contrast + 1.0) * PI / 4.0) + 0.5;

where value is the input color value in the 0..1 range and brightness and contrast are in the −1..1 range.
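Wrapped into a self-contained C function for clarity (the function name is illustrative, and the final clamp is an assumption rather than part of the formula as quoted):

    #include <math.h>

    #define PI 3.14159265358979323846

    /* Apply GIMP-style brightness and contrast (both in -1..1) to a
       channel value in 0..1, following the formula above. */
    double brightness_contrast(double value, double brightness, double contrast) {
        if (brightness < 0.0)
            value = value * (1.0 + brightness);          /* darken toward 0 */
        else
            value = value + (1.0 - value) * brightness;  /* brighten toward 1 */
        value = (value - 0.5) * tan((contrast + 1.0) * PI / 4.0) + 0.5;
        if (value < 0.0) value = 0.0;  /* contrast can push values out of range */
        if (value > 1.0) value = 1.0;
        return value;
    }

With brightness = 0 and contrast = 0 the function is the identity, since tan(PI/4) = 1.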

Gamma correction

In addition to the capability of changing the images' brightness and/or contrast in a non-linear fashion, most current image editors provide an opportunity to manipulate the images' gamma value.

Gamma correction is particularly useful for bringing details out of shadows that would otherwise be hard to see on most computer monitors. In some image editing software this is called "curves", usually a tool found in the color menu, with no reference to "gamma" anywhere in the program or its documentation. Strictly speaking, the curves tool usually does more than simple gamma correction, since one can construct complex curves with multiple inflection points, but when no dedicated gamma correction tool is provided, it can achieve the same effect.

Color adjustments

An example of color adjustment using a raster graphics editor

The color of images can be altered in a variety of ways. Colors can be faded in and out, and tones can be changed using curves or other tools. The color balance can be improved, which is important if the picture was shot indoors with daylight film, or shot on a camera with the white balance incorrectly set. Special effects, like sepia tone and grayscale, can be added to an image. In addition, more complicated procedures, such as the mixing of color channels, are possible using more advanced graphics editors.

The red-eye effect, which occurs when flash photos are taken when the pupil is too widely open (so that light from the flash that passes into the eye through the pupil reflects off the fundus at the back of the eyeball), can also be eliminated at this stage.

Dynamic blending

Before and after example of the advanced dynamic blending technique created by Elia Locardi

Advanced Dynamic Blending is a concept introduced by photographer Elia Locardi in his blog Blame The Monkey to describe the photographic process of capturing multiple bracketed exposures of a land or cityscape over a specific span of time in a changing natural or artificial lighting environment. Once captured, the exposure brackets are manually blended together into a single High Dynamic Range image using post-processing software. Dynamic Blending images serve to display a consolidated moment. This means that while the final image may be a blend of a span of time, it visually appears to represent a single instant.

Printing

Control printed image by changing pixels-per-inch

Controlling the print size and quality of digital images requires an understanding of the pixels-per-inch (ppi) variable that is stored in the image file and sometimes used to control the size of the printed image. Within Adobe Photoshop's Image Size dialog, the image editor allows the user to manipulate both the pixel dimensions and the size of the image on the printed document. These parameters work together to produce a printed image of the desired size and quality. Pixels per inch of the image, pixels per inch of the computer monitor, and dots per inch on the printed document are related, but in use are very different. The Image Size dialog can be used as an image calculator of sorts. For example, a 1600 × 1200 image with a resolution of 200 ppi will produce a printed image of 8 × 6 inches. The same image at 400 ppi will produce a printed image of 4 × 3 inches. Change the resolution to 800 ppi, and the same image now prints out at 2 × 1.5 inches. All three printed images contain the same data (1600 × 1200 pixels), but the pixels are closer together on the smaller prints, so the smaller images will potentially look sharp when the larger ones do not. The quality of the image will also depend on the capability of the printer.
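The arithmetic is simply pixel dimensions divided by ppi; here is a short sketch in C reproducing the figures above:

    #include <stdio.h>

    int main(void) {
        int px_w = 1600, px_h = 1200;
        int ppi_values[] = {200, 400, 800};
        for (int i = 0; i < 3; i++) {
            double w = (double)px_w / ppi_values[i];
            double h = (double)px_h / ppi_values[i];
            printf("%d ppi -> %.1f x %.1f inches\n", ppi_values[i], w, h);
        }
        return 0;  /* prints 8.0 x 6.0, 4.0 x 3.0, and 2.0 x 1.5 */
    }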
