
Monday, October 3, 2022

Computer graphics

From Wikipedia, the free encyclopedia

A Blender 2.45 screenshot displaying the 3D test model Suzanne

Computer graphics deals with generating images with the aid of computers. Today, computer graphics is a core technology in digital photography, film, video games, cell phone and computer displays, and many specialized applications. A great deal of specialized hardware and software has been developed, with the displays of most devices being driven by computer graphics hardware. It is a vast and recently developed area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG, or typically in the context of film as computer generated imagery (CGI). The non-artistic aspects of computer graphics are the subject of computer science research.

Some topics in computer graphics include user interface design, sprite graphics, rendering, ray tracing, geometry processing, computer animation, vector graphics, 3D modeling, shaders, GPU design, implicit surfaces, visualization, scientific computing, image processing, computational photography, scientific visualization, computational geometry and computer vision, among others. The overall methodology depends heavily on the underlying sciences of geometry, optics, physics, and perception.

Simulated flight over Trenta valley in the Julian Alps

Computer graphics is responsible for displaying art and image data effectively and meaningfully to the consumer. It is also used for processing image data received from the physical world, such as photo and video content. Computer graphics development has had a significant impact on many types of media and has revolutionized animation, movies, advertising, and video games in general.

Overview

The term computer graphics has been used in a broad sense to describe "almost everything on computers that is not text or sound". Typically, the term computer graphics refers to several different things:

  • the representation and manipulation of image data by a computer
  • the various technologies used to create and manipulate images
  • methods for digitally synthesizing and manipulating visual content (the study of computer graphics)

Today, computer graphics is widespread. Such imagery is found on television, in newspapers, in weather reports, and in a variety of medical investigations and surgical procedures. A well-constructed graph can present complex statistics in a form that is easier to understand and interpret. In the media, "such graphs are used to illustrate papers, reports, theses", and other presentation material.

Many tools have been developed to visualize data. Computer-generated imagery can be categorized into several different types: two dimensional (2D), three dimensional (3D), and animated graphics. As technology has improved, 3D computer graphics have become more common, but 2D computer graphics are still widely used. Computer graphics has emerged as a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Over the past decade, other specialized fields have been developed like information visualization, and scientific visualization more concerned with "the visualization of three dimensional phenomena (architectural, meteorological, medical, biological, etc.), where the emphasis is on realistic renderings of volumes, surfaces, illumination sources, and so forth, perhaps with a dynamic (time) component".

History

The precursor sciences to the development of modern computer graphics were the advances in electrical engineering, electronics, and television that took place during the first half of the twentieth century. Screens had been able to display art since the Lumière brothers' use of mattes to create special effects for the earliest films dating from 1895, but such displays were limited and not interactive. The first cathode ray tube, the Braun tube, was invented in 1897 – it in turn would permit the oscilloscope and the military control panel – the more direct precursors of the field, as they provided the first two-dimensional electronic displays that responded to programmatic or user input. Nevertheless, computer graphics remained relatively unknown as a discipline until the 1950s and the post-World War II period – during which time the discipline emerged from a combination of both pure university and laboratory academic research into more advanced computers and the United States military's further development of technologies like radar, advanced aviation, and rocketry developed during the war. New kinds of displays were needed to process the wealth of information resulting from such projects, leading to the development of computer graphics as a discipline.

1950s

SAGE Sector Control Room.

Early projects like the Whirlwind and SAGE Projects introduced the CRT as a viable display and interaction interface and introduced the light pen as an input device. Douglas T. Ross of the Whirlwind SAGE system performed a personal experiment in which he wrote a small program that captured the movement of his finger and displayed its vector (his traced name) on a display scope. One of the first interactive video games to feature recognizable, interactive graphics – Tennis for Two – was created for an oscilloscope by William Higinbotham to entertain visitors in 1958 at Brookhaven National Laboratory and simulated a tennis match. In 1959, Douglas T. Ross innovated again while working at MIT on transforming mathematical statements into computer-generated 3D machine tool vectors by taking the opportunity to create a display scope image of a Disney cartoon character.

Electronics pioneer Hewlett-Packard went public in 1957 after incorporating the decade prior, and established strong ties with Stanford University through its founders, who were alumni. This began the decades-long transformation of the southern San Francisco Bay Area into the world's leading computer technology hub – now known as Silicon Valley. The field of computer graphics developed with the emergence of computer graphics hardware.

Further advances in computing led to greater advancements in interactive computer graphics. In 1959, the TX-2 computer was developed at MIT's Lincoln Laboratory. The TX-2 integrated a number of new man-machine interfaces. A light pen could be used to draw sketches on the computer using Ivan Sutherland's revolutionary Sketchpad software. Using a light pen, Sketchpad allowed one to draw simple shapes on the computer screen, save them and even recall them later. The light pen itself had a small photoelectric cell in its tip. This cell emitted an electronic pulse whenever it was placed in front of a computer screen and the screen's electron gun fired directly at it. By simply timing the electronic pulse with the current location of the electron gun, it was easy to pinpoint exactly where the pen was on the screen at any given moment. Once that was determined, the computer could then draw a cursor at that location.

Sutherland seemed to find the perfect solution for many of the graphics problems he faced. Even today, many standards of computer graphics interfaces got their start with this early Sketchpad program. One example of this is in drawing constraints. If one wants to draw a square, for example, they do not have to worry about drawing four lines perfectly to form the edges of the box. One can simply specify that they want to draw a box, and then specify the location and size of the box. The software will then construct a perfect box, with the right dimensions and at the right location. Another example is that Sutherland's software modeled objects – not just a picture of objects. In other words, with a model of a car, one could change the size of the tires without affecting the rest of the car. It could stretch the body of a car without deforming the tires.
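
As a rough illustration of the timing idea described above – a hypothetical Python sketch, not the actual TX-2 or Sketchpad code – a point-plotting display loop can simply note which point it was drawing at the instant the pen's photocell fired; the names draw_point and pen_sees_beam are illustrative stand-ins:

# Hypothetical sketch of light-pen hit detection on a point-plotting display.
# draw_point and pen_sees_beam are illustrative stand-ins, not real interfaces.
def locate_pen(display_list, draw_point, pen_sees_beam):
    """Redraw every point; report the one being drawn when the pen pulses."""
    for (x, y) in display_list:
        draw_point(x, y)          # energize the CRT beam at (x, y)
        if pen_sees_beam():       # photocell fires while (x, y) is lit
            return (x, y)         # so the pen must be over this point
    return None                   # no pulse during this refresh pass

# Toy usage: pretend the pen is held over the point (3, 4).
points = [(1, 2), (3, 4), (5, 6)]
drawn = []
hit = locate_pen(points,
                 draw_point=lambda x, y: drawn.append((x, y)),
                 pen_sees_beam=lambda: drawn[-1] == (3, 4))
print(hit)  # (3, 4)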

1960s

The phrase "computer graphics" has been credited to William Fetter, a graphic designer for Boeing in 1960. Fetter in turn attributed it to Verne Hudson, also at Boeing.

In 1961, another student at MIT, Steve Russell, created another important title in the history of video games, Spacewar! Written for the DEC PDP-1, Spacewar was an instant success and copies started flowing to other PDP-1 owners and eventually DEC got a copy. The engineers at DEC used it as a diagnostic program on every new PDP-1 before shipping it. The sales force picked up on this quickly enough and when installing new units, would run the "world's first video game" for their new customers. (Higinbotham's Tennis for Two had beaten Spacewar by almost three years, but it was almost unknown outside of a research or academic setting.)

At around the same time (1961–1962) at the University of Cambridge, Elizabeth Waldram wrote code to display radio-astronomy maps on a cathode ray tube.

E. E. Zajac, a scientist at Bell Telephone Laboratory (BTL), created a film called "Simulation of a two-giro gravity attitude control system" in 1963. In this computer-generated film, Zajac showed how the attitude of a satellite could be altered as it orbits the Earth. He created the animation on an IBM 7090 mainframe computer. Also at BTL, Ken Knowlton, Frank Sinden, Ruth A. Weiss and Michael Noll started working in the computer graphics field. Sinden created a film called Force, Mass and Motion illustrating Newton's laws of motion in operation. Around the same time, other scientists were creating computer graphics to illustrate their research. At Lawrence Radiation Laboratory, Nelson Max created the films Flow of a Viscous Fluid and Propagation of Shock Waves in a Solid Form. Boeing Aircraft created a film called Vibration of an Aircraft.

Also sometime in the early 1960s, automobiles would provide a boost through the early work of Pierre Bézier at Renault, who used Paul de Casteljau's curves – now called Bézier curves after Bézier's work in the field – to develop 3D modeling techniques for Renault car bodies. These curves would form the foundation for much curve-modeling work in the field, as curves – unlike polygons – are mathematically complex entities to draw and model well.
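
To give a sense of why such curves are convenient to compute – a minimal Python sketch of de Casteljau's algorithm, not Renault's actual implementation – a point on a Bézier curve can be found by repeated linear interpolation of its control points:

# Minimal sketch of de Casteljau's algorithm for evaluating a Bezier curve.
# Control points are (x, y) tuples; t runs from 0.0 to 1.0 along the curve.
def lerp(p, q, t):
    """Linear interpolation between points p and q."""
    return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))

def bezier_point(control_points, t):
    """Repeatedly interpolate adjacent points until one point remains."""
    pts = list(control_points)
    while len(pts) > 1:
        pts = [lerp(pts[i], pts[i + 1], t) for i in range(len(pts) - 1)]
    return pts[0]

# Sample a cubic Bezier (four control points) at a few parameter values.
cubic = [(0, 0), (1, 2), (3, 2), (4, 0)]
curve = [bezier_point(cubic, t / 10) for t in range(11)]
print(curve[0], curve[5], curve[10])  # start, middle, end of the curve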

Pong arcade version

It was not long before major corporations started taking an interest in computer graphics. TRW, Lockheed-Georgia, General Electric and Sperry Rand are among the many companies that were getting started in computer graphics by the mid-1960s. IBM was quick to respond to this interest by releasing the IBM 2250 graphics terminal, the first commercially available graphics computer. Ralph Baer, a supervising engineer at Sanders Associates, came up with a home video game in 1966 that was later licensed to Magnavox and called the Odyssey. While very simplistic, and requiring fairly inexpensive electronic parts, it allowed the player to move points of light around on a screen. It was the first consumer computer graphics product. David C. Evans was director of engineering at Bendix Corporation's computer division from 1953 to 1962, after which he worked for the next five years as a visiting professor at Berkeley. There he continued his interest in computers and how they interfaced with people. In 1966, the University of Utah recruited Evans to form a computer science program, and computer graphics quickly became his primary interest. This new department would become the world's primary research center for computer graphics through the 1970s.

Also, in 1966, Ivan Sutherland continued to innovate at MIT when he invented the first computer-controlled head-mounted display (HMD). It displayed two separate wireframe images, one for each eye. This allowed the viewer to see the computer scene in stereoscopic 3D. The heavy hardware required for supporting the display and tracker was called the Sword of Damocles because of the potential danger if it were to fall upon the wearer. After receiving his Ph.D. from MIT, Sutherland became Director of Information Processing at ARPA (Advanced Research Projects Agency), and later became a professor at Harvard. In 1967 Sutherland was recruited by Evans to join the computer science program at the University of Utah – a development which would turn that department into one of the most important research centers in graphics for nearly a decade thereafter, eventually producing some of the most important pioneers in the field. There Sutherland perfected his HMD; twenty years later, NASA would re-discover his techniques in their virtual reality research. At Utah, Sutherland and Evans were highly sought after consultants by large companies, but they were frustrated at the lack of graphics hardware available at the time, so they started formulating a plan to start their own company.

In 1968, Dave Evans and Ivan Sutherland founded the first computer graphics hardware company, Evans & Sutherland. While Sutherland originally wanted the company to be located in Cambridge, Massachusetts, Salt Lake City was instead chosen due to its proximity to the professors' research group at the University of Utah.

Also in 1968 Arthur Appel described the first ray casting algorithm, the first of a class of ray tracing-based rendering algorithms that have since become fundamental in achieving photorealism in graphics by modeling the paths that rays of light take from a light source, to surfaces in a scene, and into the camera.

In 1969, the ACM initiated a Special Interest Group on Graphics (SIGGRAPH), which organizes conferences, graphics standards, and publications within the field of computer graphics. By 1973, the first annual SIGGRAPH conference was held, which has become one of the focuses of the organization. SIGGRAPH has grown in size and importance as the field of computer graphics has expanded over time.

1970s

The Utah teapot by Martin Newell and its static renders became emblematic of CGI development during the 1970s.

Subsequently, a number of breakthroughs in the field – particularly important early breakthroughs in the transformation of graphics from utilitarian to realistic – occurred at the University of Utah in the 1970s, which had hired Ivan Sutherland. He was paired with David C. Evans to teach an advanced computer graphics class, which contributed a great deal of founding research to the field and taught several students who would grow to found several of the industry's most important companies – namely Pixar, Silicon Graphics, and Adobe Systems. Tom Stockham led the image processing group at UU which worked closely with the computer graphics lab.

One of these students was Edwin Catmull. Catmull had just come from The Boeing Company and had been working on his degree in physics. Growing up on Disney, Catmull loved animation yet quickly discovered that he did not have the talent for drawing. Now Catmull (along with many others) saw computers as the natural progression of animation and they wanted to be part of the revolution. The first computer animation that Catmull saw was his own. He created an animation of his hand opening and closing. He also pioneered texture mapping to paint textures on three-dimensional models in 1974, now considered one of the fundamental techniques in 3D modeling. It became one of his goals to produce a feature-length motion picture using computer graphics – a goal he would achieve two decades later after his founding role in Pixar. In the same class, Fred Parke created an animation of his wife's face. The two animations were included in the 1976 feature film Futureworld.

As the UU computer graphics laboratory was attracting people from all over, John Warnock was another of those early pioneers; he later founded Adobe Systems and created a revolution in the publishing world with his PostScript page description language. Adobe would go on to create the industry-standard photo editing software in Adobe Photoshop and a prominent movie industry special effects program in Adobe After Effects.

James Clark was also there; he later founded Silicon Graphics, a maker of advanced rendering systems that would dominate the field of high-end graphics until the early 1990s.

A major advance in 3D computer graphics was created at UU by these early pioneers – hidden surface determination. In order to draw a representation of a 3D object on the screen, the computer must determine which surfaces are "behind" the object from the viewer's perspective, and thus should be "hidden" when the computer creates (or renders) the image. The 3D Core Graphics System (or Core) was the first graphical standard to be developed. A group of 25 experts of the ACM Special Interest Group SIGGRAPH developed this "conceptual framework". The specifications were published in 1977, and it became a foundation for many future developments in the field.

Also in the 1970s, Henri Gouraud, Jim Blinn and Bui Tuong Phong contributed to the foundations of shading in CGI via the development of the Gouraud shading and Blinn–Phong shading models, allowing graphics to move beyond a "flat" look to a look more accurately portraying depth. Jim Blinn also innovated further in 1978 by introducing bump mapping, a technique for simulating uneven surfaces, and the predecessor to many more advanced kinds of mapping used today.
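
As a hedged illustration of what these shading models compute – a simplified single-light, grayscale Python sketch, not the original formulations – the brightness of a surface point combines an ambient term, a Lambertian diffuse term, and a Blinn–Phong specular term built from a "halfway" vector:

# Simplified single-light Blinn-Phong shading sketch (scalar intensity only).
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blinn_phong(normal, to_light, to_viewer,
                ambient=0.1, diffuse=0.7, specular=0.4, shininess=32):
    """Return a brightness value for one surface point under one light."""
    n = normalize(normal)
    l = normalize(to_light)
    v = normalize(to_viewer)
    h = normalize(tuple(li + vi for li, vi in zip(l, v)))  # halfway vector
    diff = max(dot(n, l), 0.0)               # Lambertian diffuse term
    spec = max(dot(n, h), 0.0) ** shininess  # Blinn-Phong highlight term
    return ambient + diffuse * diff + specular * spec

# A surface facing the viewer, lit from above and in front.
print(blinn_phong(normal=(0, 0, 1), to_light=(0, 1, 1), to_viewer=(0, 0, 1)))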

The modern videogame arcade as it is known today was birthed in the 1970s, with the first arcade games using real-time 2D sprite graphics. Pong in 1972 was one of the first hit arcade cabinet games. Speed Race in 1974 featured sprites moving along a vertically scrolling road. Gun Fight in 1975 featured human-looking animated characters, while Space Invaders in 1978 featured a large number of animated figures on screen; both used a specialized barrel shifter circuit made from discrete chips to help their Intel 8080 microprocessor animate their framebuffer graphics.

1980s

Donkey Kong was one of the video games that helped to popularize computer graphics to a mass audience in the 1980s.

The 1980s began to see the modernization and commercialization of computer graphics. As the home computer proliferated, a subject which had previously been an academics-only discipline was adopted by a much larger audience, and the number of computer graphics developers increased significantly.

In the early 1980s, metal–oxide–semiconductor (MOS) very-large-scale integration (VLSI) technology led to the availability of 16-bit central processing unit (CPU) microprocessors and the first graphics processing unit (GPU) chips, which began to revolutionize computer graphics, enabling high-resolution graphics for computer graphics terminals as well as personal computer (PC) systems. NEC's µPD7220 was the first GPU, fabricated on a fully integrated NMOS VLSI chip. It supported up to 1024x1024 resolution, and laid the foundations for the emerging PC graphics market. It was used in a number of graphics cards, and was licensed for clones such as the Intel 82720, the first of Intel's graphics processing units. MOS memory also became cheaper in the early 1980s, enabling the development of affordable framebuffer memory, notably video RAM (VRAM) introduced by Texas Instruments (TI) in the mid-1980s. In 1984, Hitachi released the ARTC HD63484, the first complementary MOS (CMOS) GPU. It was capable of displaying high-resolution in color mode and up to 4K resolution in monochrome mode, and it was used in a number of graphics cards and terminals during the late 1980s. In 1986, TI introduced the TMS34010, the first fully programmable MOS graphics processor.

Computer graphics terminals during this decade became increasingly intelligent, semi-standalone and standalone workstations. Graphics and application processing were increasingly migrated to the intelligence in the workstation, rather than continuing to rely on central mainframe and mini-computers. Typical of the early move to high-resolution computer graphics intelligent workstations for the computer-aided engineering market were the Orca 1000, 2000 and 3000 workstations, developed by Orcatech of Ottawa, a spin-off from Bell-Northern Research, and led by David Pearson, an early workstation pioneer. The Orca 3000 was based on the 16-bit Motorola 68000 microprocessor and AMD bit-slice processors, and had Unix as its operating system. It was targeted squarely at the sophisticated end of the design engineering sector. Artists and graphic designers began to see the personal computer, particularly the Commodore Amiga and Macintosh, as a serious design tool, one that could save time and draw more accurately than other methods. The Macintosh remains a highly popular tool for computer graphics among graphic design studios and businesses. Modern computers, dating from the 1980s, often use graphical user interfaces (GUI) to present data and information with symbols, icons and pictures, rather than text. Graphics are one of the five key elements of multimedia technology.

In the field of realistic rendering, Japan's Osaka University developed the LINKS-1 Computer Graphics System, a supercomputer that used up to 257 Zilog Z8001 microprocessors, in 1982, for the purpose of rendering realistic 3D computer graphics. According to the Information Processing Society of Japan: "The core of 3D image rendering is calculating the luminance of each pixel making up a rendered surface from the given viewpoint, light source, and object position. The LINKS-1 system was developed to realize an image rendering methodology in which each pixel could be parallel processed independently using ray tracing. By developing a new software methodology specifically for high-speed image rendering, LINKS-1 was able to rapidly render highly realistic images. It was used to create the world's first 3D planetarium-like video of the entire heavens that was made completely with computer graphics. The video was presented at the Fujitsu pavilion at the 1985 International Exposition in Tsukuba." The LINKS-1 was the world's most powerful computer, as of 1984. Also in the field of realistic rendering, the general rendering equation of David Immel and James Kajiya was developed in 1986 – an important step towards implementing global illumination, which is necessary to pursue photorealism in computer graphics.

The continuing popularity of Star Wars and other science fiction franchises was relevant to cinematic CGI at this time, as Lucasfilm and Industrial Light & Magic became known as the "go-to" house for many other studios for top-notch computer graphics in film. Important advances in chroma keying ("bluescreening", etc.) were made for the later films of the original trilogy. Two other pieces of video would also outlast the era as historically relevant: Dire Straits' iconic, near-fully-CGI video for their song "Money for Nothing" in 1985, which popularized CGI among music fans of that era, and a scene from Young Sherlock Holmes the same year featuring the first fully CGI character in a feature movie (an animated stained-glass knight). In 1988, the first shaders – small programs designed specifically to do shading as a separate algorithm – were developed by Pixar, which had already spun off from Industrial Light & Magic as a separate entity – though the public would not see the results of such technological progress until the next decade. In the late 1980s, Silicon Graphics (SGI) computers were used to create some of the first fully computer-generated short films at Pixar, and Silicon Graphics machines were considered a high-water mark for the field during the decade.

The 1980s is also called the golden era of videogames; millions-selling systems from Atari, Nintendo and Sega, among other companies, exposed computer graphics for the first time to a new, young, and impressionable audience – as did MS-DOS-based personal computers, Apple IIs, Macs, and Amigas, all of which also allowed users to program their own games if skilled enough. For the arcades, advances were made in commercial, real-time 3D graphics. In 1988, the first dedicated real-time 3D graphics boards were introduced for arcades, with the Namco System 21 and Taito Air System. On the professional side, Evans & Sutherland and SGI developed 3D raster graphics hardware that directly influenced the later single-chip graphics processing unit (GPU), a technology where a separate and very powerful chip is used in parallel processing with a CPU to optimize graphics.

The decade also saw computer graphics applied to many additional professional markets, including location-based entertainment and education with the E&S Digistar, vehicle design, vehicle simulation, and chemistry.

1990s

Quarxs, series poster, Maurice Benayoun, François Schuiten, 1992

The 1990s' overwhelming note was the emergence of 3D modeling on a mass scale and an impressive rise in the quality of CGI generally. Home computers became able to take on rendering tasks that previously had been limited to workstations costing thousands of dollars; as 3D modelers became available for home systems, the popularity of Silicon Graphics workstations declined and powerful Microsoft Windows and Apple Macintosh machines running Autodesk products like 3D Studio or other home rendering software ascended in importance. By the end of the decade, the GPU would begin its rise to the prominence it still enjoys today.

The field began to see the first rendered graphics that could truly pass as photorealistic to the untrained eye (though they could not yet fool a trained CGI artist), and 3D graphics became far more popular in gaming, multimedia, and animation. At the end of the 1980s and the beginning of the 1990s, the very first computer graphics TV series were created in France: La Vie des bêtes by studio Mac Guff Ligne (1988), Les Fables Géométriques (1989–1991) by studio Fantôme, and Quarxs, the first HDTV computer graphics series, by Maurice Benayoun and François Schuiten (studio Z-A production, 1990–1993).

In film, Pixar began its serious commercial rise in this era under Edwin Catmull, with its first major film release, in 1995 – Toy Story – a critical and commercial success of nine-figure magnitude. The studio that invented the programmable shader would go on to have many animated hits, and its work on prerendered video animation is still considered an industry leader and research trail breaker.

In video games, in 1992, Virtua Racing, running on the Sega Model 1 arcade system board, laid the foundations for fully 3D racing games and popularized real-time 3D polygonal graphics among a wider audience in the video game industry. The Sega Model 2 in 1993 and Sega Model 3 in 1996 subsequently pushed the boundaries of commercial, real-time 3D graphics. Back on the PC, Wolfenstein 3D, Doom and Quake, three of the first massively popular 3D first-person shooter games, were released by id Software to critical and popular acclaim during this decade using a rendering engine innovated primarily by John Carmack. The Sony PlayStation, Sega Saturn, and Nintendo 64, among other consoles, sold in the millions and popularized 3D graphics for home gamers. Certain late-1990s first-generation 3D titles became seen as influential in popularizing 3D graphics among console users, such as platform games Super Mario 64 and The Legend of Zelda: Ocarina of Time, and early 3D fighting games like Virtua Fighter, Battle Arena Toshinden, and Tekken.

Technology and algorithms for rendering continued to improve greatly. In 1996, Krishnamurthy and Levoy invented normal mapping – an improvement on Jim Blinn's bump mapping. 1999 saw Nvidia release the seminal GeForce 256, the first home video card billed as a graphics processing unit or GPU, which in its own words contained "integrated transform, lighting, triangle setup/clipping, and rendering engines". By the end of the decade, computers adopted common frameworks for graphics processing such as DirectX and OpenGL. Since then, computer graphics have only become more detailed and realistic, due to more powerful graphics hardware and 3D modeling software. AMD also became a leading developer of graphics boards in this decade, creating a "duopoly" in the field which exists to this day.

2000s

A screenshot from the videogame Killing Floor, built in Unreal Engine 2. Personal computers and console video games took a great graphical leap forward in the 2000s, becoming able to display, in real time, graphics that had previously only been possible pre-rendered and/or on business-level hardware.

CGI became ubiquitous in earnest during this era. Video games and CGI cinema had spread the reach of computer graphics to the mainstream by the late 1990s and continued to do so at an accelerated pace in the 2000s. CGI was also adopted en masse for television advertisements in the late 1990s and 2000s, and so became familiar to a massive audience.

The continued rise and increasing sophistication of the graphics processing unit were crucial to this decade, and 3D rendering capabilities became a standard feature as 3D-graphics GPUs came to be considered a necessity for desktop computer makers to offer. The Nvidia GeForce line of graphics cards dominated the market in the early decade with occasional significant competing presence from ATI. As the decade progressed, even low-end machines usually contained a 3D-capable GPU of some kind as Nvidia and AMD both introduced low-priced chipsets and continued to dominate the market. Shaders, which had been introduced in the 1980s to perform specialized processing on the GPU, would by the end of the decade become supported on most consumer hardware, speeding up graphics considerably and allowing for greatly improved texture and shading in computer graphics via the widespread adoption of normal mapping, bump mapping, and a variety of other techniques allowing the simulation of a great amount of detail.

Computer graphics used in films and video games gradually began to be realistic to the point of entering the uncanny valley. CGI movies proliferated, with traditional animated cartoon films like Ice Age and Madagascar as well as numerous Pixar offerings like Finding Nemo dominating the box office in this field. Final Fantasy: The Spirits Within, released in 2001, was the first fully computer-generated feature film to use photorealistic CGI characters and be fully made with motion capture. The film was not a box-office success, however. Some commentators have suggested this may be partly because the lead CGI characters had facial features which fell into the "uncanny valley". Other animated films like The Polar Express drew attention at this time as well. Star Wars also resurfaced with its prequel trilogy and the effects continued to set a bar for CGI in film.

In videogames, the Sony PlayStation 2 and 3, the Microsoft Xbox line of consoles, and offerings from Nintendo such as the GameCube maintained a large following, as did the Windows PC. Marquee CGI-heavy titles like the Grand Theft Auto, Assassin's Creed, Final Fantasy, BioShock, Kingdom Hearts, and Mirror's Edge series, and dozens of others, continued to approach photorealism, grow the video game industry, and impress, until that industry's revenues became comparable to those of movies. Microsoft made a decision to expose DirectX more easily to the independent developer world with the XNA program, but it was not a success. DirectX itself remained a commercial success, however. OpenGL continued to mature as well, and it and DirectX improved greatly; the second-generation shader languages HLSL and GLSL began to be popular in this decade.

In scientific computing, the GPGPU technique to pass large amounts of data bidirectionally between a GPU and CPU was invented, speeding up analysis on many kinds of bioinformatics and molecular biology experiments. The technique has also been used for Bitcoin mining and has applications in computer vision.

2010s

A diamond plate texture rendered close-up using physically based rendering principles – increasingly an active area of research for computer graphics in the 2010s.

In the 2010s, CGI has been nearly ubiquitous in video, pre-rendered graphics are nearly scientifically photorealistic, and real-time graphics on a suitably high-end system may simulate photorealism to the untrained eye.

Texture mapping has matured into a multistage process with many layers; generally, it is not uncommon to implement texture mapping, bump mapping or normal mapping, isosurfaces, lighting maps including specular highlights and reflection techniques, and shadow volumes in one rendering engine using shaders, which are maturing considerably. Shaders are now very nearly a necessity for advanced work in the field, providing considerable complexity in manipulating pixels, vertices, and textures on a per-element basis, and countless possible effects. The shader languages HLSL and GLSL are active fields of research and development. Physically based rendering or PBR, which implements many maps and performs advanced calculation to simulate real optic light flow, is an active research area as well, along with advanced areas like ambient occlusion, subsurface scattering, Rayleigh scattering, photon mapping, and many others. Experiments into the processing power required to provide graphics in real time at ultra-high-resolution modes like 4K Ultra HD are beginning, though beyond reach of all but the highest-end hardware.

In cinema, most animated movies are CGI now; a great many animated CGI films are made per year, but few, if any, attempt photorealism due to continuing fears of the uncanny valley. Most are 3D cartoons.

In videogames, the Microsoft Xbox One, Sony PlayStation 4, and Nintendo Switch currently dominate the home space and are all capable of highly advanced 3D graphics; the Windows PC is still one of the most active gaming platforms as well.

Image types

Two-dimensional

Raster graphic sprites (left) and masks (right)

2D computer graphics are the computer-based generation of digital images – mostly from two-dimensional models, such as digital images, and by techniques specific to them.

2D computer graphics are mainly used in applications that were originally developed upon traditional printing and drawing technologies such as typography. In those applications, the two-dimensional image is not just a representation of a real-world object, but an independent artifact with added semantic value; two-dimensional models are therefore preferred because they give more direct control of the image than 3D computer graphics, whose approach is more akin to photography than to typography.

Pixel art

A large form of digital art, pixel art is created through the use of raster graphics software, where images are edited on the pixel level. Graphics in most old (or relatively limited) computer and video games, graphing calculator games, and many mobile phone games are mostly pixel art.

Sprite graphics

A sprite is a two-dimensional image or animation that is integrated into a larger scene. Initially covering just graphical objects handled separately from the memory bitmap of a video display, the term now includes various manners of graphical overlays.

Originally, sprites were a method of integrating unrelated bitmaps so that they appeared to be part of the normal bitmap on a screen, such as creating an animated character that can be moved on a screen without altering the data defining the overall screen. Such sprites can be created by either electronic circuitry or software. In circuitry, a hardware sprite is a hardware construct that employs custom DMA channels to integrate visual elements with the main screen, in that it superimposes two discrete video sources. Software can simulate this through specialized rendering methods.

Vector graphics

Example showing effect of vector graphics versus raster (bitmap) graphics

Vector graphics formats are complementary to raster graphics. Raster graphics is the representation of images as an array of pixels and is typically used for the representation of photographic images. Vector graphics consists in encoding information about shapes and colors that comprise the image, which can allow for more flexibility in rendering. There are instances when working with vector tools and formats is best practice, and instances when working with raster tools and formats is best practice. There are times when both formats come together. An understanding of the advantages and limitations of each technology and the relationship between them is most likely to result in efficient and effective use of tools.

Three-dimensional

3D graphics, compared to 2D graphics, are graphics that use a three-dimensional representation of geometric data, which is stored in the computer for the purposes of performing calculations and rendering. This includes images that may be for later display or for real-time viewing.

Despite these differences, 3D computer graphics rely on similar algorithms as 2D computer graphics do, in the wire-frame model and in the raster graphics of the final rendered display. In computer graphics software, the distinction between 2D and 3D is occasionally blurred; 2D applications may use 3D techniques to achieve effects such as lighting, and primarily 3D applications may use 2D rendering techniques.

3D computer graphics are often referred to as 3D models. Apart from the rendered graphic, the model is contained within the graphical data file. However, there are differences: a 3D model is the mathematical representation of any three-dimensional object, and a model is not technically a graphic until it is visually displayed. Thanks to 3D printing, 3D models are not confined to virtual space. 3D rendering is how a model can be displayed; a model can also be used in non-graphical computer simulations and calculations.

Computer animation

Example of Computer animation produced using Motion capture
 

Computer animation is the art of creating moving images via the use of computers. It is a subfield of computer graphics and animation. Increasingly it is created by means of 3D computer graphics, though 2D computer graphics are still widely used for stylistic, low bandwidth, and faster real-time rendering needs. Sometimes the target of the animation is the computer itself, but sometimes the target is another medium, such as film. It is also referred to as CGI (Computer-generated imagery or computer-generated imaging), especially when used in films.

Virtual entities may contain and be controlled by assorted attributes, such as transform values (location, orientation, and scale) stored in an object's transformation matrix. Animation is the change of an attribute over time. Multiple methods of achieving animation exist; the rudimentary form is based on the creation and editing of keyframes, each storing a value at a given time, per attribute to be animated. The 2D/3D graphics software will change with each keyframe, creating an editable curve of a value mapped over time, which results in animation. Other methods of animation include procedural and expression-based techniques: the former consolidates related elements of animated entities into sets of attributes, useful for creating particle effects and crowd simulations; the latter allows an evaluated result returned from a user-defined logical expression, coupled with mathematics, to automate animation in a predictable way (convenient for controlling bone behavior beyond what a hierarchy offers in skeletal system set up).
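
As a hedged sketch of the keyframe idea described above – a minimal linear interpolator in Python, whereas real packages use editable spline curves – an attribute's value at any time can be computed from the surrounding keyframes:

# Minimal keyframe animation sketch: linearly interpolate an attribute
# between sorted (time, value) keyframes.
def sample(keyframes, t):
    """Return the attribute value at time t."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            blend = (t - t0) / (t1 - t0)
            return v0 + blend * (v1 - v0)

# Animate an object's x-position: slide right over one second, then hold.
x_keys = [(0.0, 0.0), (1.0, 10.0), (2.0, 10.0)]
print([round(sample(x_keys, f / 4), 2) for f in range(9)])  # t = 0.0 .. 2.0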

To create the illusion of movement, an image is displayed on the computer screen then quickly replaced by a new image that is similar to the previous image, but shifted slightly. This technique is identical to the illusion of movement in television and motion pictures.

Concepts and principles

Images are typically created by devices such as cameras, mirrors, lenses, telescopes, microscopes, etc.

Digital images include both vector images and raster images, but raster images are more commonly used.

Pixel

In the enlarged portion of the image individual pixels are rendered as squares and can be easily seen.

In digital imaging, a pixel (or picture element) is a single point in a raster image. Pixels are placed on a regular 2-dimensional grid, and are often represented using dots or squares. Each pixel is a sample of an original image, where more samples typically provide a more accurate representation of the original. The intensity of each pixel is variable; in color systems, each pixel typically has three components such as red, green, and blue.
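
As a small illustration (plain Python lists, no imaging library), a raster image can be held as a grid of RGB samples, with each pixel addressed by its row and column:

# A tiny raster image as a grid of (red, green, blue) samples, 0-255 each.
WIDTH, HEIGHT = 4, 3
image = [[(0, 0, 0) for _x in range(WIDTH)] for _y in range(HEIGHT)]

image[1][2] = (255, 0, 0)   # set the pixel at row 1, column 2 to pure red

for row in image:           # print the grid, one row of pixels per line
    print(row)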

Graphics are visual presentations on a surface, such as a computer screen. Examples are photographs, drawings, graphic designs, maps, engineering drawings, or other images. Graphics often combine text and illustration. Graphic design may consist of the deliberate selection, creation, or arrangement of typography alone, as in a brochure, flier, poster, web site, or book without any other element. Clarity or effective communication may be the objective, association with other cultural elements may be sought, or merely, the creation of a distinctive style.

Primitives

Primitives are basic units which a graphics system may combine to create more complex images or models. Examples would be sprites and character maps in 2D video games, geometric primitives in CAD, or polygons or triangles in 3D rendering. Primitives may be supported in hardware for efficient rendering, or they may be the building blocks provided by a graphics application.

Rendering

Rendering is the generation of a 2D image from a 3D model by means of computer programs. A scene file contains objects in a strictly defined language or data structure; it would contain geometry, viewpoint, texture, lighting, and shading information as a description of the virtual scene. The data contained in the scene file is then passed to a rendering program to be processed and output to a digital image or raster graphics image file. The rendering program is usually built into the computer graphics software, though others are available as plug-ins or entirely separate programs. The term "rendering" may be by analogy with an "artist's rendering" of a scene. Although the technical details of rendering methods vary, the general challenges to overcome in producing a 2D image from a 3D representation stored in a scene file are outlined as the graphics pipeline along a rendering device, such as a GPU. A GPU is a device able to assist the CPU in calculations. If a scene is to look relatively realistic and predictable under virtual lighting, the rendering software should solve the rendering equation. The rendering equation does not account for all lighting phenomena, but is a general lighting model for computer-generated imagery. 'Rendering' is also used to describe the process of calculating effects in a video editing file to produce final video output.
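
For reference, the rendering equation mentioned above is commonly written (in one standard hemispherical form; notation varies between sources) as

L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i

where L_o is the radiance leaving point x in direction \omega_o, L_e is the emitted radiance, f_r is the surface's bidirectional reflectance distribution function (BRDF), L_i is the incoming radiance, and the integral runs over the hemisphere \Omega above the surface normal n.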

3D projection
3D projection is a method of mapping three dimensional points to a two dimensional plane. As most current methods for displaying graphical data are based on planar two dimensional media, the use of this type of projection is widespread. This method is used in most real-time 3D applications and typically uses rasterization to produce the final image.
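
A minimal perspective-projection sketch in Python (a pinhole model with the camera at the origin looking along +z; real pipelines add view transforms, clipping, and a viewport mapping) shows the key step of dividing by depth:

# Minimal pinhole perspective projection: camera at the origin, looking
# along +z, with focal length f.
def project(point, f=1.0):
    """Map a 3D point (x, y, z) with z > 0 to 2D image-plane coordinates."""
    x, y, z = point
    if z <= 0:
        return None             # behind the camera; a real renderer would clip
    return (f * x / z, f * y / z)

near_corner = (1.0, 1.0, 2.0)
far_corner = (1.0, 1.0, 4.0)
print(project(near_corner))  # (0.5, 0.5)   - nearer, so it projects larger
print(project(far_corner))   # (0.25, 0.25) - farther, so it projects smaller
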
Ray tracing
Ray tracing is a technique from the family of image order algorithms for generating an image by tracing the path of light through pixels in an image plane. The technique is capable of producing a high degree of photorealism; usually higher than that of typical scanline rendering methods, but at a greater computational cost.
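
The core operation is intersecting a ray with scene geometry; the Python sketch below (a bare ray–sphere test with no lighting, shadows, or recursion) illustrates the idea of shooting one ray per pixel:

# Bare ray-sphere intersection: the basic building block of a ray tracer.
import math

def ray_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                       # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearer of the two roots
    return t if t > 0 else None           # ignore hits behind the ray origin

# Shoot one ray per pixel of a tiny 5x5 image at a sphere in front of the camera.
for py in range(5):
    row = ""
    for px in range(5):
        d = ((px - 2) * 0.2, (py - 2) * 0.2, 1.0)   # this pixel's view direction
        row += "#" if ray_sphere((0, 0, 0), d, (0, 0, 3), 1.0) else "."
    print(row)
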
Shading
Example of shading
Shading refers to depicting depth in 3D models or illustrations by varying levels of darkness. It is a process used in drawing for depicting levels of darkness on paper by applying media more densely or with a darker shade for darker areas, and less densely or with a lighter shade for lighter areas. There are various techniques of shading including cross hatching where perpendicular lines of varying closeness are drawn in a grid pattern to shade an area. The closer the lines are together, the darker the area appears. Likewise, the farther apart the lines are, the lighter the area appears. The term has been recently generalized to mean that shaders are applied.
Texture mapping
Texture mapping is a method for adding detail, surface texture, or colour to a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Dr Edwin Catmull in 1974. A texture map is applied (mapped) to the surface of a shape, or polygon. This process is akin to applying patterned paper to a plain white box. Multitexturing is the use of more than one texture at a time on a polygon. Procedural textures (created by adjusting parameters of an underlying algorithm that produces an output texture) and bitmap textures (created in an image editing application or imported from a digital camera) are, generally speaking, common methods of implementing texture definition on 3D models in computer graphics software. Intended placement of textures onto a model's surface often requires a technique known as UV mapping (arbitrary, manual layout of texture coordinates) for polygon surfaces, while non-uniform rational B-spline (NURBS) surfaces have their own intrinsic parameterization used as texture coordinates. Texture mapping as a discipline also encompasses techniques for creating normal maps and bump maps that correspond to a texture to simulate height and specular maps to help simulate shine and light reflections, as well as environment mapping to simulate mirror-like reflectivity, also called gloss.
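
As a hedged sketch of what "mapping" means in practice (nearest-neighbour sampling only, in plain Python; real renderers add the filtering and mipmapping discussed under anti-aliasing below), a texel is looked up from normalized (u, v) coordinates:

# Nearest-neighbour texture lookup: map (u, v) in [0, 1] to a texel.
def sample_texture(texture, u, v):
    """texture is a list of rows of texels; (u, v) are normalized coordinates."""
    height = len(texture)
    width = len(texture[0])
    x = min(int(u * width), width - 1)    # column index
    y = min(int(v * height), height - 1)  # row index
    return texture[y][x]

# A 4x4 black-and-white checkerboard texture.
checker = [[(255, 255, 255) if (x + y) % 2 == 0 else (0, 0, 0)
            for x in range(4)] for y in range(4)]
print(sample_texture(checker, 0.1, 0.1))  # near the top-left texel (white)
print(sample_texture(checker, 0.9, 0.1))  # near the top-right texel (black)
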
Anti-aliasing
Rendering resolution-independent entities (such as 3D models) for viewing on a raster (pixel-based) device such as a liquid-crystal display or CRT television inevitably causes aliasing artifacts mostly along geometric edges and the boundaries of texture details; these artifacts are informally called "jaggies". Anti-aliasing methods rectify such problems, resulting in imagery more pleasing to the viewer, but can be somewhat computationally expensive. Various anti-aliasing algorithms (such as supersampling) are able to be employed, then customized for the most efficient rendering performance versus quality of the resultant imagery; a graphics artist should consider this trade-off if anti-aliasing methods are to be used. A pre-anti-aliased bitmap texture being displayed on a screen (or screen location) at a resolution different than the resolution of the texture itself (such as a textured model in the distance from the virtual camera) will exhibit aliasing artifacts, while any procedurally defined texture will always show aliasing artifacts as they are resolution-independent; techniques such as mipmapping and texture filtering help to solve texture-related aliasing problems.
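
A minimal sketch of supersampling, one of the approaches named above (plain Python, with a toy coverage function standing in for any shading computation): shade several sub-pixel samples per pixel and average them, trading extra computation for smoother edges.

# Minimal supersampling sketch: average several sub-pixel samples per pixel.
def coverage(x, y):
    """Toy scene: 1.0 on one side of a diagonal edge, 0.0 on the other."""
    return 1.0 if y > x else 0.0

def render_pixel(px, py, samples_per_axis=4):
    """Average a grid of sub-pixel samples inside pixel (px, py)."""
    n = samples_per_axis
    total = 0.0
    for sy in range(n):
        for sx in range(n):
            # sample positions spread evenly inside the pixel footprint
            total += coverage(px + (sx + 0.5) / n, py + (sy + 0.5) / n)
    return total / (n * n)

# Pixels straddling the edge get intermediate values instead of a hard step.
print([round(render_pixel(x, 2), 2) for x in range(5)])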

Volume rendering

Volume rendered CT scan of a forearm with different colour schemes for muscle, fat, bone, and blood

Volume rendering is a technique used to display a 2D projection of a 3D discretely sampled data set. A typical 3D data set is a group of 2D slice images acquired by a CT or MRI scanner.

Usually these are acquired in a regular pattern (e.g., one slice every millimeter) and usually have a regular number of image pixels in a regular pattern. This is an example of a regular volumetric grid, with each volume element, or voxel, represented by a single value that is obtained by sampling the immediate area surrounding the voxel.
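
One simple way to turn such a voxel grid into a 2D image is a maximum intensity projection, sketched below in Python as a toy example (real volume renderers typically accumulate colour and opacity along each viewing ray instead):

# Toy maximum intensity projection (MIP): for each (x, y), keep the brightest
# voxel along the z axis. volume[z][y][x] holds one scalar sample per voxel.
def mip(volume):
    depth, height, width = len(volume), len(volume[0]), len(volume[0][0])
    return [[max(volume[z][y][x] for z in range(depth))
             for x in range(width)]
            for y in range(height)]

# A 3x3x3 volume that is dark except for one bright voxel in the middle slice.
volume = [[[0] * 3 for _ in range(3)] for _ in range(3)]
volume[1][1][1] = 200

for row in mip(volume):
    print(row)  # the bright voxel appears at the centre of the 2D projection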

3D modeling

3D modeling is the process of developing a mathematical, wireframe representation of any three-dimensional object, called a "3D model", via specialized software. Models may be created automatically or manually; the manual modeling process of preparing geometric data for 3D computer graphics is similar to plastic arts such as sculpting. 3D models may be created using multiple approaches: use of NURBS to generate accurate and smooth surface patches, polygonal mesh modeling (manipulation of faceted geometry), or polygonal mesh subdivision (advanced tessellation of polygons, resulting in smooth surfaces similar to NURBS models). A 3D model can be displayed as a two-dimensional image through a process called 3D rendering, used in a computer simulation of physical phenomena, or animated directly for other purposes. The model can also be physically created using 3D printing devices.
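
As a small illustration of the polygonal-mesh representation mentioned above (a minimal Python example, not any particular package's file format), a model can be stored as shared vertex positions plus faces that index into them:

# A polygonal mesh as shared vertex positions plus index-based faces.
vertices = [            # (x, y, z) positions of a tetrahedron
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (0.0, 0.0, 1.0),
]
faces = [               # each triangle lists three vertex indices
    (0, 1, 2),
    (0, 1, 3),
    (0, 2, 3),
    (1, 2, 3),
]

# A renderer or 3D printer consumes this kind of data; here we just report
# the element counts and the mesh's axis-aligned bounding box.
print(len(vertices), "vertices,", len(faces), "faces")
print("bounding box:",
      tuple(min(v[i] for v in vertices) for i in range(3)),
      tuple(max(v[i] for v in vertices) for i in range(3)))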

Aerogel

From Wikipedia, the free encyclopedia
 
A block of silica aerogel in a hand.

Aerogels are a class of synthetic porous ultralight material derived from a gel, in which the liquid component for the gel has been replaced with a gas, without significant collapse of the gel structure. The result is a solid with extremely low density and extremely low thermal conductivity. Aerogels can be made from a variety of chemical compounds. Silica aerogels feel like fragile expanded polystyrene to the touch, while some polymer-based aerogels feel like rigid foams.

The first documented example of an aerogel was created by Samuel Stephens Kistler in 1931, as a result of a bet with Charles Learned over who could replace the liquid in "jellies" with gas without causing shrinkage.

Aerogels are produced by extracting the liquid component of a gel through supercritical drying or freeze-drying. This allows the liquid to be slowly dried off without causing the solid matrix in the gel to collapse from capillary action, as would happen with conventional evaporation. The first aerogels were produced from silica gels. Kistler's later work involved aerogels based on alumina, chromia and tin dioxide. Carbon aerogels were first developed in the late 1980s.

Properties

A flower resting on a piece of silica aerogel, which is suspended over a flame from a Bunsen burner. Aerogels have excellent insulating properties, and the flower is protected from the heat of the flame.

Despite the name, aerogels are solid, rigid, and dry materials that do not resemble a gel in their physical properties: the name comes from the fact that they are made from gels. Pressing softly on an aerogel typically does not leave even a minor mark; pressing more firmly will leave a permanent depression. Pressing extremely firmly will cause a catastrophic breakdown in the sparse structure, causing it to shatter like glass (a property known as friability), although more modern variations do not suffer from this. Despite the fact that it is prone to shattering, it is very strong structurally. Its impressive load-bearing abilities are due to the dendritic microstructure, in which spherical particles of average size 2–5 nm are fused together into clusters. These clusters form a three-dimensional highly porous structure of almost fractal chains, with pores just under 100 nm. The average size and density of the pores can be controlled during the manufacturing process.

Aerogel is a material that is 99.8% air. Aerogels have a porous solid network that contains air pockets, with the air pockets taking up the majority of space within the material. The dearth of solid material allows aerogel to be almost weightless.

Aerogels are good thermal insulators because they almost nullify two of the three methods of heat transfer – conduction (they are mostly composed of insulating gas) and convection (the microstructure prevents net gas movement). They are good conductive insulators because they are composed almost entirely of gases, which are very poor heat conductors. (Silica aerogel is an especially good insulator because silica is also a poor conductor of heat; a metallic or carbon aerogel, on the other hand, would be less effective.) They are good convective inhibitors because air cannot circulate through the lattice. Aerogels are poor radiative insulators because infrared radiation (which transfers heat) passes through them.

Owing to its hygroscopic nature, aerogel feels dry and acts as a strong desiccant. People handling aerogel for extended periods should wear gloves to prevent the appearance of dry brittle spots on their skin.

The slight colour it does have is due to Rayleigh scattering of the shorter wavelengths of visible light by the nano-sized dendritic structure. This causes it to appear smoky blue against dark backgrounds and yellowish against bright backgrounds.

Aerogels by themselves are hydrophilic, and if they absorb moisture they usually suffer a structural change, such as contraction, and deteriorate, but degradation can be prevented by making them hydrophobic, via a chemical treatment. Aerogels with hydrophobic interiors are less susceptible to degradation than aerogels with only an outer hydrophobic layer, especially if a crack penetrates the surface.

Knudsen effect

Aerogels may have a thermal conductivity smaller than that of the gas they contain. This is caused by the Knudsen effect, a reduction of thermal conductivity in gases when the size of the cavity encompassing the gas becomes comparable to the mean free path. Effectively, the cavity restricts the movement of the gas particles, decreasing the thermal conductivity in addition to eliminating convection. For example, thermal conductivity of air is about 25 mW·m−1·K−1 at STP and in a large container, but decreases to about 5 mW·m−1·K−1 in a pore 30 nanometers in diameter.
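
One common way to express this effect (a standard approximate model, stated here as an assumption rather than taken from this article's sources) relates the effective gas conductivity to the Knudsen number Kn, the ratio of the gas molecules' mean free path \lambda to the pore size d:

\mathrm{Kn} = \frac{\lambda}{d}, \qquad k_{\mathrm{gas}} \approx \frac{k_0}{1 + 2\beta\,\mathrm{Kn}}

where k_0 is the conductivity of the unconfined gas and \beta is a gas-dependent constant of order unity. As the pore size d shrinks toward \lambda (tens of nanometres for air at atmospheric pressure), Kn grows and the gas conductivity falls well below k_0, consistent with the drop from about 25 to 5 mW·m−1·K−1 quoted above.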

Structure

Aerogel structure results from a sol-gel polymerization, which is when monomers (simple molecules) react with other monomers to form a sol, a substance that consists of bonded, cross-linked macromolecules with deposits of liquid solution among them. When the material is critically heated, the liquid evaporates and the bonded, cross-linked macromolecule frame is left behind. The result of the polymerization and critical heating is the creation of a material that has a porous strong structure classified as aerogel. Variations in synthesis can alter the surface area and pore size of the aerogel. The smaller the pore size, the more susceptible the aerogel is to fracture.

Waterproofing

Aerogel contains particles that are 2–5 nm in diameter. After the process of creating aerogel, it will contain a large amount of hydroxyl groups on the surface. The hydroxyl groups can cause a strong reaction when the aerogel is placed in water, causing it to catastrophically dissolve in the water. One way to waterproof the hydrophilic aerogel is by soaking the aerogel with some chemical base that will replace the surface hydroxyl groups (–OH) with non-polar groups (–OR), a process which is most effective when R is an aliphatic group.

Porosity of aerogel

There are several ways to determine the porosity of aerogel: the three main methods are gas adsorption, mercury porosimetry, and the scattering method. In gas adsorption, nitrogen at its boiling point is adsorbed into the aerogel sample. The gas being adsorbed is dependent on the size of the pores within the sample and on the partial pressure of the gas relative to its saturation pressure. The volume of the gas adsorbed is measured by using the Brunauer, Emmett and Teller (BET) formula, which gives the specific surface area of the sample. At high partial pressures in the adsorption/desorption measurement, the Kelvin equation gives the pore size distribution of the sample. In mercury porosimetry, the mercury is forced into the aerogel porous system to determine the pores' size, but this method is highly inefficient since the solid frame of aerogel will collapse from the high compressive force. The scattering method involves the angle-dependent deflection of radiation within the aerogel sample; the scattering can come from the solid particles or from the pores. The radiation goes into the material and determines the fractal geometry of the aerogel pore network. The best radiation wavelengths to use are X-rays and neutrons. Aerogel is also an open porous network: the difference between an open porous network and a closed porous network is that in the open network, gases can enter and leave the substance without any limitation, while a closed porous network traps the gases within the material forcing them to stay within the pores. The high porosity and surface area of silica aerogels allow them to be used in a variety of environmental filtration applications.
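
For reference, the Kelvin equation mentioned above is commonly written (in its usual capillary-condensation form; this is a standard textbook statement rather than one drawn from this article's sources) as

\ln\!\left(\frac{p}{p_0}\right) = -\,\frac{2\gamma V_m \cos\theta}{r_m R T}

where p/p_0 is the relative pressure, \gamma the surface tension and V_m the molar volume of the liquid adsorbate, \theta the contact angle, r_m the radius of the meniscus (related to the pore radius), R the gas constant, and T the temperature; smaller pores fill at lower relative pressures, which is what lets the adsorption isotherm be converted into a pore size distribution.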

Materials

A 2.5 kg brick is supported by a piece of aerogel with a mass of 2 g.

Silica aerogel

Silica aerogels are the most common type of aerogel, and the primary type in use or study. They are silica-based and can be derived from silica gel or by a modified Stöber process. Nicknames include frozen smoke, solid smoke, solid air, solid cloud, and blue smoke, owing to its translucent nature and the way light scatters in the material. The lowest-density silica nanofoam weighs 1,000 g/m3, which is the evacuated version of the record-aerogel of 1,900 g/m3. The density of air is 1,200 g/m3 (at 20 °C and 1 atm). As of 2013, aerographene had a lower density at 160 g/m3, or 13% the density of air at room temperature.

The silica solidifies into three-dimensional, intertwined clusters that make up only 3% of the volume. Conduction through the solid is therefore very low. The remaining 97% of the volume is composed of air in extremely small nanopores. The air has little room to move, inhibiting both convection and gas-phase conduction.

Silica aerogel also has a high optical transmission of ~99% and a low refractive index of ~1.05. It is very robust with respect to high-power input beams in the continuous-wave regime and does not show any boiling or melting phenomena. This property permits the study of high-intensity nonlinear waves in the presence of disorder, in regimes typically inaccessible to liquid materials, making it a promising material for nonlinear optics.

This aerogel has remarkable thermal insulative properties, having an extremely low thermal conductivity: from 0.03 W·m−1·K−1 at atmospheric pressure down to 0.004 W·m−1·K−1 in modest vacuum, which corresponds to R-values of 14 to 105 (US customary) or 3.0 to 22.2 (metric) for 3.5 in (89 mm) thickness. For comparison, typical wall insulation is 13 (US customary) or 2.7 (metric) for the same thickness. Its melting point is 1,473 K (1,200 °C; 2,192 °F). Even lower conductivities have been reported for experimentally produced monolithic samples in the literature, reaching 0.009 W·m−1·K−1 at 1 atm.
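As a rough check of the metric figures above (a back-of-the-envelope sketch, not taken from the cited sources), the thermal resistance of a slab is its thickness divided by its conductivity:

\[
R = \frac{L}{k}, \qquad
R_{\text{ambient}} \approx \frac{0.089\ \text{m}}{0.03\ \text{W·m}^{-1}\text{K}^{-1}} \approx 3.0\ \text{m}^2\text{K/W}, \qquad
R_{\text{vacuum}} \approx \frac{0.089\ \text{m}}{0.004\ \text{W·m}^{-1}\text{K}^{-1}} \approx 22\ \text{m}^2\text{K/W},
\]

consistent with the metric R-values quoted above.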

Until 2011, silica aerogel held 15 entries in Guinness World Records for material properties, including best insulator and lowest-density solid, though it was ousted from the latter title by the even lighter materials aerographite in 2012 and then aerographene in 2013.

Carbon

Carbon aerogels are composed of particles with sizes in the nanometer range, covalently bonded together. They have very high porosity (over 50%, with pore diameters under 100 nm) and surface areas ranging between 400 and 1,000 m2/g. They are often manufactured as composite paper: non-woven paper made of carbon fibers, impregnated with resorcinol–formaldehyde aerogel, and pyrolyzed. Depending on the density, carbon aerogels may be electrically conductive, making composite aerogel paper useful for electrodes in capacitors or deionization electrodes. Owing to their extremely high surface area, carbon aerogels are used to create supercapacitors, with values ranging up to thousands of farads based on a capacitance density of 104 F/g and 77 F/cm3. Carbon aerogels are also extremely "black" in the infrared spectrum, reflecting only 0.3% of radiation between 250 nm and 14.3 µm, making them efficient for solar energy collectors.
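As an illustrative order-of-magnitude check of the "thousands of farads" figure (the 20 g electrode mass below is a hypothetical example, not from the article), using the quoted capacitance density:

\[
C \approx 104\ \tfrac{\text{F}}{\text{g}} \times 20\ \text{g} \approx 2{,}000\ \text{F}.
\]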

The term "aerogel" to describe airy masses of carbon nanotubes produced through certain chemical vapor deposition techniques is incorrect. Such materials can be spun into fibers with strength greater than Kevlar, and unique electrical properties. These materials are not aerogels, however, since they do not have a monolithic internal structure and do not have the regular pore structure characteristic of aerogels.

Metal oxide

Metal oxide aerogels are used as catalysts in various chemical reactions/transformations or as precursors for other materials.

Aerogels made with aluminium oxide are known as alumina aerogels. These aerogels are used as catalysts, especially when "doped" with a metal other than aluminium. Nickel–alumina aerogel is the most common combination. Alumina aerogels are also being considered by NASA for capturing hypervelocity particles; a formulation doped with gadolinium and terbium could fluoresce at the particle impact site, with the amount of fluorescence dependent on impact energy.

One of the most notable differences between silica aerogels and metal oxide aerogels is that metal oxide aerogels are often variously colored.

Aerogel | Color
Silica, alumina, titania, zirconia | Clear with Rayleigh scattering blue, or white
Iron oxide | Rust red or yellow, opaque
Chromia | Deep green or deep blue, opaque
Vanadia | Olive green, opaque
Neodymium oxide | Purple, transparent
Samaria | Yellow, transparent
Holmia, erbia | Pink, transparent

Other

Organic polymers can be used to create aerogels. SEAgel is made of agar. AeroZero film is made of polyimide. Cellulose from plants can be used to create a flexible aerogel.

GraPhage13 is the first graphene-based aerogel assembled using graphene oxide and the M13 bacteriophage.

Chalcogel is an aerogel made of chalcogens (the column of elements on the periodic table beginning with oxygen) such as sulfur, selenium and other elements. Metals less expensive than platinum have been used in its creation.

Aerogels made of cadmium selenide quantum dots in a porous 3-D network have been developed for use in the semiconductor industry.

Aerogel performance may be augmented for a specific application by the addition of dopants, reinforcing structures and hybridizing compounds. For example, Spaceloft is a composite of aerogel with some kind of fibrous batting.

Applications

Aerogels are used for a variety of applications:

  • Thermal insulation: with fibre-reinforced silica aerogel insulation boards, insulation thickness can be reduced by about 50% compared with conventional materials. This makes silica aerogel boards well suited to the retrofit of historic buildings or to applications in dense urban areas. As other examples, aerogel has been added in granular form to skylights for this purpose. Georgia Institute of Technology's 2007 Solar Decathlon House project used an aerogel as an insulator in the semi-transparent roof.
  • A chemical adsorber for cleaning up spills. Silica aerogels may be used for filtration; they have a high surface area and porosity and are ultrahydrophobic. They may be used for the removal of heavy metals, which could be applied to wastewater treatment.
  • A catalyst or a catalyst carrier.
  • Silica aerogels can be used in imaging devices, optics, and light guides.
  • Thickening agents in some paints and cosmetics.
  • As components in energy absorbers.
  • Laser targets for the United States National Ignition Facility (NIF).
  • A material used in impedance matchers for transducers, speakers and range finders.
  • According to Hindawi's Journal of Nanomaterials, aerogels are used for more flexible materials such as clothing and blankets: "Commercial manufacture of aerogel 'blankets' began around the year 2000, combining silica aerogel and fibrous reinforcement that turns the brittle aerogel into a durable, flexible material. The mechanical and thermal properties of the product may be varied based upon the choice of reinforcing fibers, the aerogel matrix and opacification additives included in the composite."
  • Silica aerogel has been used to capture cosmic dust, also known as space dust. NASA used an aerogel to trap space dust particles aboard the Stardust spacecraft. These aerogel dust collectors have very low mass. The particles vaporize on impact with solids and pass through gases, but can be trapped in aerogels. NASA also used aerogel for thermal insulation for the Mars rovers.
  • The US Navy evaluated use of aerogels in undergarments as passive thermal protection for divers. Similarly, aerogels have been used by NASA for insulating space suits.
  • In particle physics as radiators in Cherenkov effect detectors, such as the ACC system of the Belle detector, used in the Belle experiment at KEKB. The suitability of aerogels is determined by their low index of refraction, filling the gap between gases and liquids, and their transparency and solid state, making them easier to use than cryogenic liquids or compressed gases.
  • Resorcinol–formaldehyde aerogels (polymers chemically similar to phenol formaldehyde resins) are used as precursors for the manufacture of carbon aerogels, or when an organic insulator with a large surface area is desired.
  • Metal–aerogel nanocomposites can be prepared by impregnating the hydrogel with a solution containing ions of a transition metal and irradiating the result with gamma rays, which precipitates nanoparticles of the metal. Such composites can be used as catalysts, sensors, electromagnetic shielding, and in waste disposal. A prospective use of platinum-on-carbon catalysts is in fuel cells.
  • As a drug delivery system owing to its biocompatibility. Due to its high surface area and porous structure, drugs can be adsorbed from supercritical CO2. The release rate of the drugs can be tailored by varying the properties of the aerogel.
  • Carbon aerogels are used in the construction of small electrochemical double layer supercapacitors. Due to the high surface area of the aerogel, these capacitors can be 1/2000th to 1/5000th the size of similarly rated electrolytic capacitors. According to Hindawi's Journal of Nanomaterials, "Aerogel supercapacitors can have a very low impedance compared to normal supercapacitors and can absorb or produce very high peak currents. At present, such capacitors are polarity-sensitive and need to be wired in series if a working voltage of greater than about 2.75 V is needed."
  • Dunlop Sport uses aerogel in some of its racquets for sports such as tennis.
  • In water purification, chalcogels have shown promise in absorbing the heavy metal pollutants mercury, lead, and cadmium from water. Aerogels may be used to separate oil from water, which could for example be used to respond to oil spills. Aerogels may be used to disinfect water, killing bacteria.
  • Aerogel can introduce disorder into superfluid helium-3.
  • In aircraft de-icing, a new proposal uses a carbon nanotube aerogel. A thin filament is spun on a winder to create a 10 micron-thick film. The amount of material needed to cover the wings of a jumbo jet weighs 80 grams (2.8 oz). Aerogel heaters could be left on continuously at low power, to prevent ice from forming.
  • Thermal insulation transmission tunnel of the Chevrolet Corvette (C7).
  • CamelBak uses aerogel as insulation in a thermal sport bottle.
  • 45 North uses aerogel as palm insulation in its Sturmfist 5 cycling gloves.
  • Silica aerogels may be used for sound insulation, such as on windows or for construction purposes.

Production

Silica aerogels are typically synthesized using a sol-gel process. The first step is the creation of a colloidal suspension of solid particles known as a "sol". The precursors are a liquid alcohol, such as ethanol, mixed with a silicon alkoxide such as tetramethoxysilane (TMOS), tetraethoxysilane (TEOS), or polyethoxydisiloxane (PEDS) (earlier work used sodium silicates). The silica solution is mixed with a catalyst and allowed to gel during a hydrolysis reaction, which forms particles of silicon dioxide. The oxide suspension then undergoes condensation reactions that create metal oxide bridges (either M–O–M, "oxo" bridges, or M–OH–M, "ol" bridges) linking the dispersed colloidal particles. These reactions generally have moderately slow rates, so either acidic or basic catalysts are used to improve the processing speed. Basic catalysts tend to produce more transparent aerogels, minimize shrinkage during the drying process, and strengthen the gel to prevent pore collapse during drying.

Finally, during the drying process, the liquid surrounding the silica network is carefully removed and replaced with air while keeping the aerogel intact. Gels in which the liquid is allowed to evaporate at a natural rate are known as xerogels. As the liquid evaporates, forces caused by surface tension at the liquid–solid interfaces are enough to destroy the fragile gel network. As a result, xerogels cannot achieve the high porosities of aerogels; they peak at lower porosities and exhibit large amounts of shrinkage after drying. To avoid collapse of the fibers during slow solvent evaporation and to reduce surface tension at the liquid–solid interfaces, aerogels can instead be formed by lyophilization (freeze-drying). The concentration of the fibers and the freezing temperature affect properties of the final aerogel such as its porosity.

In 1931, to develop the first aerogels, Kistler used a process known as supercritical drying, which avoids a direct phase change. By increasing the temperature and pressure he forced the liquid into a supercritical fluid state, in which dropping the pressure could instantly gasify and remove the liquid inside the aerogel, avoiding damage to the delicate three-dimensional network. While this can be done with ethanol, the high temperatures and pressures lead to dangerous processing conditions. A safer, lower-temperature and lower-pressure method involves a solvent exchange. This is typically done by exchanging the initial aqueous pore liquid for a CO2-miscible liquid such as ethanol or acetone, then for liquid carbon dioxide, and then bringing the carbon dioxide above its critical point. A variant of this process involves the direct injection of supercritical carbon dioxide into the pressure vessel containing the aerogel. The end result of either process exchanges the initial liquid in the gel for carbon dioxide without allowing the gel structure to collapse or lose volume.
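For orientation (standard reference values, not drawn from this article's sources), the critical point of carbon dioxide that must be exceeded in the final step is approximately

\[
T_c \approx 31\ ^\circ\text{C}, \qquad P_c \approx 7.4\ \text{MPa} \;(\approx 73\ \text{atm}),
\]

which is why CO2 supercritical drying can be carried out at much milder temperatures than supercritical ethanol drying (ethanol's critical temperature is roughly 241 °C).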

Resorcinol–formaldehyde aerogel (RF aerogel) is made in a way similar to the production of silica aerogel. A carbon aerogel can then be made from this resorcinol–formaldehyde aerogel by pyrolysis in an inert gas atmosphere, leaving a matrix of carbon. The resulting carbon aerogel may be used to produce solid shapes, powders, or composite paper. Additives have been successful in enhancing certain properties of the aerogel for specific applications. Aerogel composites have been made using a variety of continuous and discontinuous reinforcements; high-aspect-ratio fibers such as fiberglass have been used to reinforce aerogel composites with significantly improved mechanical properties.

Safety

Silica-based aerogels are not known to be carcinogenic or toxic. However, they are a mechanical irritant to the eyes, skin, respiratory tract, and digestive system. They can also induce dryness of the skin, eyes, and mucous membranes. It is therefore recommended that protective gear, including respiratory protection, gloves, and eye goggles, be worn whenever handling or processing bare aerogels, particularly when dust or fine fragments may be produced.

Friday, September 30, 2022

Chiral drugs

From Wikipedia, the free encyclopedia

Chemical compounds that come as mirror-image pairs are referred to by chemists as chiral, or handed, molecules. Each twin is called an enantiomer. Drugs that exhibit handedness are referred to as chiral drugs. Chiral drugs that are an equimolar (1:1) mixture of enantiomers are called racemic drugs, and these are devoid of optical rotation. The most commonly encountered stereogenic unit that confers chirality on drug molecules is the stereogenic center. A stereogenic center can arise from tetrahedral tetracoordinate atoms (C, N, P) or pyramidal tricoordinate atoms (N, S). The word chiral describes the three-dimensional architecture of the molecule and does not reveal its stereochemical composition. Hence "chiral drug" does not say whether the drug is racemic (a racemic drug), a single enantiomer (a chiral-specific drug), or some other combination of stereoisomers. To resolve this issue, Joseph Gal introduced a new term, unichiral. Unichiral indicates that the stereochemical composition of a chiral drug is homogeneous, consisting of a single enantiomer.

Many medicinal agents important to life are combinations of mirror-image twins. Despite the close resemblance of such twins, the differences in their biological properties can be profound. In other words, the component enantiomers of a racemic chiral drug may differ widely in their pharmacokinetic and pharmacodynamic profiles. The tragedy of thalidomide illustrates the potential for extreme consequences resulting from the administration of a racemate that exhibits multiple effects attributable to the individual enantiomers. With advances in chiral technology and increased awareness of the three-dimensional consequences of drug action and disposition, the specialized field of "chiral pharmacology" emerged. Simultaneously, the chirality nomenclature system also evolved. A brief overview of the history and terminology/descriptors of chirality is given below; a detailed chirality timeline is not the focus of this article.

Chirality: history overview

Louis Pasteur - pioneering stereochemist

Chirality can be traced back to 1812, when the physicist Jean-Baptiste Biot discovered a phenomenon called "optical activity". Louis Pasteur, a famous student of Biot's, made a series of observations that led him to suggest that the optical activity of some substances is caused by their molecular asymmetry, which produces nonsuperimposable mirror images. In 1848, Pasteur grew two different kinds of crystals from the racemic sodium ammonium salt of tartaric acid and became the first person to separate enantiomeric crystals by hand. In doing so, Pasteur laid the foundations of stereochemistry and chirality.

In 1874, Jacobus Henricus van 't Hoff introduced the idea of the asymmetric carbon atom, proposing that all optically active carbon compounds contain such an atom. In the same year, Joseph Achille Le Bel used only symmetry arguments and considered the asymmetry of the molecule as a whole rather than the asymmetry of each carbon atom. Le Bel's idea can therefore be seen as a general theory of stereoisomerism, while van 't Hoff's is a special case restricted to tetrahedral carbon.

Scientists soon began to investigate what chirality meant for living things. In 1903, Cushny was the first to show that the enantiomers of a chiral molecule can have different biological effects. Lord Kelvin was the first to use the word "chiral", in 1904.

Chirality: terminology/descriptors

This section gives an overview of the evolving chirality nomenclature system commonly employed to distinguish the enantiomers of a chiral drug. Initially, enantiomers were distinguished by their ability to rotate the plane of plane-polarized light. The enantiomer that rotates plane-polarized light to the right is named "dextrorotatory", abbreviated "dextro" or "d", and its counterpart is "levorotatory", abbreviated "levo" or "l". A racemic mixture is denoted "(±)", "rac", or "dl". The d/l system of naming based on optical rotation is now falling into disuse.

Later, the Fischer convention was introduced to specify the configuration of a stereogenic center, using the symbols D and L. Capital letters are used to differentiate this system from the "d"/"l" notation (optical descriptors) described earlier. In this system, enantiomers are named with reference to D- and L-glyceraldehyde, which is taken as the standard for comparison. The structure of the chiral molecule is drawn in the Fischer projection formula: if the hydroxyl group attached to the highest-numbered chiral carbon is on the right-hand side, the compound belongs to the D-series; if it is on the left-hand side, it belongs to the L-series. This nomenclature system has also become obsolete, although the D-/L- system is still employed to designate the configuration of amino acids and sugars. In general, the D/L system has been superseded by the Cahn-Ingold-Prelog (CIP) rules for describing the configuration of a stereogenic/chiral center.

In the CIP or R/S convention (the sequence rule), the configuration, i.e., the spatial arrangement of the ligands/substituents around a chiral center, is labeled either "R" or "S". This convention is now in use almost worldwide and has become part of the IUPAC (International Union of Pure and Applied Chemistry) rules of nomenclature. The approach is: identify the chiral center; label the four atoms directly attached to the stereogenic center in question; assign priorities according to the sequence rule (from 1 to 4); rotate the molecule until the lowest-priority (number 4) substituent points away from the viewer; then trace a curve from substituent 1 to 2 to 3. If the curve is clockwise, the chiral center has the R absolute configuration, "R" (Latin rectus, right). If the curve is counterclockwise, the chiral center has the S absolute configuration, "S" (Latin sinister, left). Refer to the figure, the Cahn-Ingold-Prelog rule.

The Cahn-Ingold-Prelog rule
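The clockwise/counterclockwise test described above can be expressed geometrically as the sign of a scalar triple product. The following is a minimal illustrative sketch (not from the article); it assumes the four substituents have already been ranked by CIP priority, which is the genuinely hard part of the rule, and the toy coordinates are invented for the example.

```python
import numpy as np

def rs_label(p1, p2, p3, p4):
    """Return 'R' or 'S' for a tetrahedral stereocenter, given the 3-D
    positions of its four substituents ordered from highest (p1) to
    lowest (p4) CIP priority. The CIP priority ranking itself (atomic
    numbers, duplicated atoms, etc.) is assumed to be done beforehand."""
    p1, p2, p3, p4 = (np.asarray(p, dtype=float) for p in (p1, p2, p3, p4))
    # Signed volume spanned by substituents 1-3 as seen from substituent 4;
    # its sign tells whether 1 -> 2 -> 3 runs clockwise when the
    # lowest-priority group points away from the viewer.
    signed_volume = np.dot(np.cross(p1 - p4, p2 - p4), p3 - p4)
    return "R" if signed_volume < 0 else "S"

# Toy coordinates: stereocenter at the origin, priorities 1-3 spread
# above it, priority 4 pointing straight down (away from the viewer).
print(rs_label((1.0, 0.0, 0.5),
               (-0.5, -0.866, 0.5),
               (-0.5, 0.866, 0.5),
               (0.0, 0.0, -1.0)))   # prints "R"
```

Swapping the positions of priorities 2 and 3 in this toy example flips the sign of the triple product and yields "S", mirroring the hand-drawn curve test.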

An overview of the nomenclature system is presented in the table below.


Chirality descriptor (used as prefix) | Description | Comments
(+)- / dextro- / d- | Optical rotation sign; does not reflect the configuration | Obsolete term / falling into disuse
(-)- / levo- / l- | Optical rotation sign; does not reflect the configuration | Obsolete term / falling into disuse
(±)- / rac- / dl- | Racemate or racemic mixture: an equimolar (1:1) mixture of enantiomers; corresponds to an enantiomeric excess of 0% | Obsolete term / falling into disuse
D- | Relative configuration with respect to D-glyceraldehyde; referred to as the Fischer convention | –
L- | Relative configuration with respect to L-glyceraldehyde; referred to as the Fischer convention | –
R- | Latin rectus, right; absolute configuration as per the Cahn-Ingold-Prelog (sequence) rule | –
S- | Latin sinister, left; absolute configuration as per the Cahn-Ingold-Prelog (sequence) rule | –

Racemic drugs

For many years, scientists in drug development were blind to the three-dimensional consequences of stereochemistry, chiefly due to the lack of technology for enantioselective investigations. Besides the thalidomide tragedy, another event that raised the importance of stereochemical issues in drug research and development was the publication in 1984 of a manuscript entitled "Stereochemistry, a basis of sophisticated nonsense in pharmacokinetics and clinical pharmacology" by Ariëns. This article, and the series of articles that followed, criticized the practice of conducting pharmacokinetic and pharmacodynamic studies on racemic drugs while ignoring the separate contributions of the individual enantiomers. These papers served to crystallize some of the important issues surrounding racemic drugs and stimulated much discussion in industry, government, and academia.

Chiral pharmacology

As a result of these criticisms and the renewed awareness of the three-dimensional effects of drug action, fueled by the rapid growth of chiral technology, the new area of "stereo-pharmacology" emerged. A more specific term is "chiral pharmacology", a phrase popularized by John Caldwell. The field has grown into a specialized discipline concerned with the three-dimensional aspects of drug action and disposition. This approach essentially views each member of the chiral twins as a separate chemical species. To express the pharmacological activities of the chiral twins, two technical terms have been coined: eutomer and distomer. The member of the pair with the greater physiological activity is referred to as the eutomer, and the one with lesser activity as the distomer. This designation necessarily refers to a single activity being studied: the eutomer for one effect may well be the distomer for another. The ratio of eutomer to distomer activity is called the eudysmic ratio.
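As an illustrative, hypothetical example of the eudysmic ratio (the numbers below are invented for clarity, not drawn from the article): if the eutomer of a drug has a half-maximal effective concentration of 1 nM for a given effect and the distomer has 100 nM for the same effect, then

\[
\text{eudysmic ratio} = \frac{\text{potency of eutomer}}{\text{potency of distomer}}
= \frac{\mathrm{EC_{50}}(\text{distomer})}{\mathrm{EC_{50}}(\text{eutomer})}
= \frac{100\ \text{nM}}{1\ \text{nM}} = 100 ,
\]

since potency varies inversely with the effective concentration.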

Bio-environment and chiral twins

The behavior of the chiral twins depends mainly on the nature of the environment (achiral or chiral) in which they are present. An achiral environment does not differentiate the molecular twins, whereas a chiral environment distinguishes the left-handed version from the right-handed one. The human body, a classic bio-environment, is inherently handed, as it is filled with chiral discriminators such as amino acids, enzymes, carbohydrates, lipids, and nucleic acids. Hence, when a racemic therapeutic is exposed to a biological system, the component enantiomers are acted upon stereoselectively. For drugs, chiral discrimination can take place in either the pharmacokinetic or the pharmacodynamic phase.

Chiral discrimination

Easson and Stedman (1933) advanced a drug-receptor interaction model to account for the differential pharmacodynamic activity between enantiomeric pairs. In this model, the more active enantiomer (the eutomer) takes part in a minimum of three simultaneous intermolecular interactions with the receptor surface (a good fit, Figure A), whereas the less active enantiomer (the distomer) interacts at two sites only (a bad fit, Figure B) [refer to the image for the Easson-Stedman model]. Thus the "fit" of the individual enantiomers to the receptor site differs, as does the energy of interaction. This is a simplistic model, but it is used to explain the biological discrimination between enantiomeric pairs.

Easson-Stedman model

In reality, the drug-receptor interaction is not that simple, but this view of such a complex phenomenon has provided major insights into the mechanism of action of drugs.

Pharmacodynamic considerations

Racemic drugs are not drug combinations in the accepted sense of two or more co-formulated therapeutic agents, but combinations of isomeric substances whose pharmacological activity may reside predominantly in one specific enantiomeric form. In the case of stereoselectivity in action, only one of the components of the racemic mixture is truly active. The other isomer, the distomer, should be regarded as an impurity or "isomeric ballast" (a term coined by Ariëns) that does not contribute to the intended effects. In contrast to the pharmacokinetic properties of an enantiomeric pair, differences in pharmacodynamic activity tend to be more obvious. There is a wide spectrum of possible distomer actions, many of which have been confirmed experimentally. Selected examples of distomer actions (viz. equipotent, less active, inactive, antagonistic, chiral inversion) are presented in the table below.


Chiral drug | Stereogenic center(s) | Therapeutic action | Eutomer | Distomer | Distomer action
Promethazine | 1 | Antihistaminic | (R)-/(S)- | – | Equipotent
Salbutamol | 1 | Bronchodilator | (R)- | (S)- | Less active; no serious side-effects
Propranolol | 1 | Antihypertensive | (S)- | (R)- | Inactive; half placebo
Indacrinone | 1 | Diuretic | (R)- | (S)- | Antagonizes side effect of the eutomer
Propoxyphene | 2 | Analgesic (dexpropoxyphene); antitussive (levopropoxyphene) | (2R,3S)-; (2S,3R)- | (2S,3R)-; (2R,3S)- | Independent therapeutic value
Ibuprofen | 1 | Anti-inflammatory | (S)- | (R)- | Chiral inversion (unidirectional, (R)- to (S)-)

Drug toxicity

Since there are frequently large pharmacokinetic and pharmacodynamic differences between the enantiomers of a chiral drug, it is not surprising that enantiomers may exhibit stereoselective toxicity. The toxicity can reside in the pharmacologically active enantiomer (eutomer) or in the inactive one (distomer). Toxicological differences between enantiomers have been demonstrated. The following are examples of chiral drugs whose toxic or undesirable side-effects reside almost entirely in the distomer; these would seem to be clear-cut cases for a chiral switch.

Penicillamine

Penicillamine enantiomers

Penicillamine is a chiral drug with one chiral center and exists as a pair of enantiomers. (S)-penicillamine is the eutomer with the desired antiarthritic activity while the (R)-penicillamine is extremely toxic.

Ketamine

Ketamine enantiomers

Ketamine is a widely used anaesthetic agent. It is a chiral molecule that is administered as a racemate. Studies show that (S)-(+)-ketamine is the active anaesthetic and the undesired side-effects (hallucination and agitation) reside in the distomer, (R)-(-)-ketamine.

Dopa

Dopa enantiomers

The initial use of racemic dopa for the treatment of Parkinson's disease resulted in a number of adverse effects, viz. nausea, vomiting, anorexia, involuntary movements, and granulocytopenia. The use of L-dopa [the (S)-enantiomer] reduced the required dose and the adverse effects; granulocytopenia was not observed with the single enantiomer.

Ethambutol

Ethambutol enantiomers

The antitubercular agent ethambutol contains two constitutionally symmetrical stereogenic centers and exists in three stereoisomeric forms: an enantiomeric pair, (S,S)- and (R,R)-ethambutol, and an achiral stereoisomer, the meso form, which holds a diastereomeric relationship with the optically active stereoisomers. The activity of the drug resides in the (S,S)-enantiomer, which is 500- and 12-fold more potent than (R,R)-ethambutol and the meso form, respectively. The drug was initially introduced for clinical use as the racemate and was changed to the (S,S)-enantiomer as a result of optic neuritis leading to blindness. This toxicity is related to both dose and duration of treatment, and all three stereoisomers are almost equipotent with respect to it. Hence the use of the (S,S)-enantiomer greatly improved the risk/benefit profile.

Thalidomide

Thalidomide enantiomers

Thalidomide is a classic example highlighting the alleged role of chirality in drug toxicity. Thalidomide was a racemic therapeutic prescribed to pregnant women to control nausea and vomiting. The drug was withdrawn from the world market when it became evident that its use in pregnancy causes phocomelia (a clinical condition in which babies are born with deformed hands and limbs). In the late 1970s, studies indicated that the (R)-enantiomer is an effective sedative, while the (S)-enantiomer harbors the teratogenic effect and causes fetal abnormalities. Later studies established that under biological conditions (R)-thalidomide, the "good" partner, undergoes in vivo metabolic inversion to (S)-thalidomide, the "evil" partner, and vice versa; it is a bidirectional chiral inversion. Hence the argument that the thalidomide tragedy could have been avoided by using a single enantiomer is moot.

The salient features are presented in the table below.

Chiral drugs: Adverse effects residing in the distomer
Chiral drug | Chiral centers | Eutomer; activity | Distomer; activity (clinical effect)
Penicillamine | 1 | (S)-; antiarthritic | (R)-; mutagen
Ketamine | 1 | (S)-; anesthetic | (R)-; hallucinogen
Dopa | 1 | (S)-; antiparkinsonian | (R)-; granulocytopenia
Ethambutol | 2 | (S,S)-; tuberculostatic | (R,R)- and meso forms; blindness
Thalidomide | 1 | (R)-; sedative | (S)-; teratogenic

Unichiral drugs

Unichiral indicates a configurationally homogeneous substance (i.e., one made up of chiral molecules of one and the same configuration). Other commonly used synonyms are enantiopure drugs and enantiomerically pure drugs; monochiral drugs has also been suggested as a synonym. Professors Eliel, Wilen, and Gal expressed deep concern over the misuse of the term "homochiral" in articles to denote enantiomerically pure drugs, which is incorrect. Homochiral means objects or molecules of the same handedness, and hence should be used only when comparing two or more objects of like chirality, for instance the left hands of different individuals, or (R)-naproxen and (R)-ibuprofen.

Globally, drug companies and regulatory agencies are inclined towards the development of unichiral drugs as a consequence of the increased understanding of the differing biological properties of the individual enantiomers of racemic therapeutics. Most of these unichiral drugs are the result of the chiral-switch approach. The table below lists selected unichiral drugs used in drug therapy.

Unichiral drugs employed in drug therapy
Unichiral drug | Drug class / type of medication | Therapeutic area
Esomeprazole | Proton pump inhibitor | Gastroenterology
S-pantoprazole | Proton pump inhibitor | Gastroenterology
Dexrabeprazole | Proton pump inhibitor | Gastroenterology
Levosalbutamol | Bronchodilator | Pulmonology
Levocetirizine | Antihistaminic | Pulmonology
Levofloxacin | Antibacterial | Infectious diseases
S-penicillamine | Antiarthritic (rheumatoid arthritis) | Rheumatology / pain / inflammation
S-etodolac | NSAID | Rheumatology / pain / inflammation
Dexketoprofen | NSAID | Rheumatology / pain / inflammation
S-ketamine | Anesthetic | Anesthesiology
Levobupivacaine | Local anesthetic | Anesthesiology
Levothyroxine | Anti-hypothyroidism | Endocrinology
Levodopa | Anti-Parkinson | Neuropsychiatry
S-amlodipine | Antianginal / antihypertensive | Cardiology
S-metoprolol | Antihypertensive | Cardiology
S-atenolol | Antihypertensive | Cardiology

A company may choose to develop a racemic drug rather than a single enantiomer by providing adequate reasoning. The rationale for pursuing a racemic drug may include: separation of the enantiomers is expensive; the eutomer racemizes in solution (e.g., oxazepam); the activities of the enantiomeric pair are different but complementary; the distomer is inactive but separation is exorbitant; the distomer has insignificant or low toxicity and the drug has a high therapeutic index; the pharmacological activities of the two enantiomers are mutually beneficial; or the development of a single enantiomer would take a large amount of time for a drug of emergency need, e.g., for cancer or AIDS.

Chiral purity

Chiral purity is a measure of the purity of a chiral drug. Synonyms include enantiomeric excess, enantiomer purity, enantiomeric purity, and optical purity. Optical purity is an obsolete term, since today most chiral purity measurements are made using chromatographic techniques rather than methods based on optical principles. The enantiomeric excess expresses the extent (in %) to which a chiral substance contains one enantiomer over the other; for a racemic drug the enantiomeric excess is 0%. A number of chiral analysis tools, such as polarimetry, NMR spectroscopy with chiral shift reagents, chiral GC (gas chromatography), chiral HPLC (high-performance liquid chromatography), and other chiral chromatographic techniques, are employed to evaluate chiral purity. Assessing the purity of a unichiral (enantiopure) drug is of great importance from a drug safety and efficacy perspective.
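The standard definition of enantiomeric excess, with a small worked example (the 95:5 mixture is illustrative, not from the article):

\[
ee = \frac{\lvert [R] - [S] \rvert}{[R] + [S]} \times 100\%,
\qquad
\text{e.g. a } 95{:}5 \text{ mixture gives } ee = \frac{95 - 5}{95 + 5}\times 100\% = 90\%,
\]

and a racemic (50:50) drug gives ee = 0%, consistent with the statement above.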

Lie point symmetry

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Lie_point_symmetry     ...