
Tuesday, July 1, 2025

Genetic representation

From Wikipedia, the free encyclopedia

In computer programming, genetic representation is a way of representing solutions/individuals in evolutionary computation methods. The term encompasses both the concrete data structures and data types used to realize the genetic material of the candidate solutions in the form of a genome, and the relationships between search space and problem space. In the simplest case, the search space corresponds to the problem space (direct representation). The choice of problem representation is tied to the choice of genetic operators, both of which have a decisive effect on the efficiency of the optimization. Genetic representation can encode appearance, behavior, and physical qualities of individuals. Differences in genetic representation are among the major criteria drawing a line between known classes of evolutionary computation.

Terminology is often analogous with natural genetics. The block of computer memory that represents one candidate solution is called an individual. The data in that block is called a chromosome. Each chromosome consists of genes. The possible values of a particular gene are called alleles. A programmer may represent all the individuals of a population using binary encoding, permutation encoding, tree encoding, or any one of several other representations.
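As a minimal sketch, assuming a simple binary encoding (all names below are illustrative), the terminology maps onto code as follows:

```python
import random

# Minimal sketch of the terminology under a binary encoding:
# an individual holds a chromosome (a list of genes), and the
# alleles of each gene are the values 0 and 1.
CHROMOSOME_LENGTH = 8
ALLELES = (0, 1)  # possible values of a gene

def random_individual():
    """Create one individual: a chromosome of randomly chosen alleles."""
    return [random.choice(ALLELES) for _ in range(CHROMOSOME_LENGTH)]

population = [random_individual() for _ in range(10)]
print(population[0])  # e.g. [1, 0, 1, 1, 0, 0, 1, 0]
```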

Genetic algorithms (GAs) typically use linear representations; these are often, but not always, binary. Holland's original description of GAs used arrays of bits. Arrays of other types and structures can be used in essentially the same way. The main property that makes these genetic representations convenient is that their parts are easily aligned due to their fixed size, which facilitates simple crossover operations. Depending on the application, variable-length representations have also been successfully used and tested in evolutionary algorithms (EAs) in general and genetic algorithms in particular, although crossover is more complex to implement in this case.
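The alignment property can be seen in classic one-point crossover, sketched below for fixed-length bit arrays (a standard textbook operator; the code is illustrative):

```python
import random

def one_point_crossover(parent_a, parent_b):
    """One-point crossover: works because both chromosomes have the same
    fixed length, so their parts align position by position."""
    assert len(parent_a) == len(parent_b)
    point = random.randrange(1, len(parent_a))  # cut point between genes
    child_a = parent_a[:point] + parent_b[point:]
    child_b = parent_b[:point] + parent_a[point:]
    return child_a, child_b

a = [0, 0, 0, 0, 0, 0]
b = [1, 1, 1, 1, 1, 1]
print(one_point_crossover(a, b))  # e.g. ([0, 0, 1, 1, 1, 1], [1, 1, 0, 0, 0, 0])
```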

Evolution strategies use linear real-valued representations, e.g., an array of real values. They mostly use Gaussian mutation and blending/averaging crossover.
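A minimal sketch of these two operators on a real-valued genome (parameter values are illustrative):

```python
import random

def gaussian_mutation(genome, sigma=0.1):
    """Add zero-mean Gaussian noise to every real-valued gene."""
    return [g + random.gauss(0.0, sigma) for g in genome]

def intermediate_recombination(parent_a, parent_b):
    """Blending/averaging crossover: each child gene is the parents' mean."""
    return [(a + b) / 2.0 for a, b in zip(parent_a, parent_b)]

x = [1.0, 2.0, 3.0]
y = [3.0, 2.0, 1.0]
print(intermediate_recombination(x, y))  # [2.0, 2.0, 2.0]
print(gaussian_mutation(x))              # e.g. [1.07, 1.94, 3.02]
```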

Genetic programming (GP) pioneered tree-like representations and developed genetic operators suitable for such representations. Tree-like representations are used in GP to represent and evolve functional programs with desired properties.
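A tree-shaped genome can be illustrated with nested tuples encoding a small arithmetic program; the sketch below (names illustrative) evaluates the tree for a given input, which is how fitness would typically be measured in GP:

```python
import operator

# A tiny tree-shaped genome: nested tuples encode the program (x * x) + 1.
OPS = {'+': operator.add, '*': operator.mul}

def evaluate(node, x):
    """Recursively evaluate an expression tree for a given input x."""
    if node == 'x':
        return x
    if isinstance(node, (int, float)):
        return node
    op, left, right = node
    return OPS[op](evaluate(left, x), evaluate(right, x))

genome = ('+', ('*', 'x', 'x'), 1)
print(evaluate(genome, 3))  # 10
```

Genetic operators for such genomes work on subtrees, e.g., swapping a randomly chosen subtree between two parents (subtree crossover).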

Human-based genetic algorithm (HBGA) offers a way to avoid solving hard representation problems by outsourcing all genetic operators to outside agents, in this case, humans. The algorithm has no need for knowledge of a particular fixed genetic representation as long as there are enough external agents capable of handling those representations, allowing for free-form and evolving genetic representations.

Common genetic representations

Distinction between search space and problem space

In analogy to biology, EAs distinguish between problem space (corresponding to phenotypes) and search space (corresponding to genotypes). The problem space contains concrete solutions to the problem being addressed, while the search space contains the encoded solutions. The mapping from search space to problem space is called genotype-phenotype mapping. The genetic operators are applied to elements of the search space, and for evaluation, elements of the search space are mapped to elements of the problem space via the genotype-phenotype mapping.
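This division of labor can be sketched as follows: genetic operators act on a bit-string genotype, while the fitness function sees only the decoded phenotype. The mapping below, decoding a bit string to a real value in an interval, is one illustrative choice among many:

```python
def decode(genotype, lower=-5.0, upper=5.0):
    """Genotype-phenotype mapping: interpret a bit list as an unsigned
    integer and rescale it to a real value in [lower, upper]."""
    as_int = int(''.join(map(str, genotype)), 2)
    max_int = 2 ** len(genotype) - 1
    return lower + (upper - lower) * as_int / max_int

def fitness(phenotype):
    """Evaluation happens in the problem space, on the phenotype."""
    return -(phenotype ** 2)  # maximize: best at phenotype == 0

genotype = [1, 0, 0, 0, 0, 0, 0, 0]   # element of the search space
print(fitness(decode(genotype)))      # evaluated in the problem space
```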

Relationships between search space and problem space

The importance of an appropriate choice of search space for the success of an EA application was recognized early on. The following requirements can be placed on a suitable search space and thus on a suitable genotype-phenotype mapping:

Completeness

All possible admissible solutions must be contained in the search space.

Redundancy

When more possible genotypes exist than phenotypes, the genetic representation of the EA is called redundant. In nature, this is termed a degenerate genetic code. In the case of a redundant representation, neutral mutations are possible. These are mutations that change the genotype but do not affect the phenotype. Thus, depending on the use of the genetic operators, there may be phenotypically unchanged offspring, which can lead to unnecessary fitness evaluations, among other things. Since evaluation in real-world applications usually accounts for the lion's share of the computation time, this can slow down the optimization process. In addition, redundancy can cause the population to have higher genotypic diversity than phenotypic diversity, which can also hinder evolutionary progress.
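A deliberately redundant mapping makes the idea concrete: in the sketch below, sixteen genotypes collapse onto four phenotypes, so some mutations change the genotype without changing the phenotype:

```python
def phenotype(bits):
    """A deliberately redundant genotype-phenotype mapping:
    16 possible genotypes are collapsed onto only 4 phenotypes."""
    return int(''.join(map(str, bits)), 2) % 4

a = [0, 0, 1, 0]  # decodes to 2 -> phenotype 2
b = [0, 1, 1, 0]  # decodes to 6 -> phenotype 2
print(phenotype(a), phenotype(b))  # flipping the second bit was neutral
```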

In biology, the Neutral Theory of Molecular Evolution states that this effect plays a dominant role in natural evolution. This has motivated researchers in the EA community to examine whether neutral mutations can improve EA functioning by giving populations that have converged to a local optimum a way to escape that local optimum through genetic drift. The question remains controversial, and there are no conclusive results on neutrality in EAs. On the other hand, there are other proven measures for handling premature convergence.

Locality

The locality of a genetic representation is the degree to which distances in the search space are preserved in the problem space after genotype-phenotype mapping. That is, a representation has high locality exactly when neighbors in the search space are also neighbors in the problem space. In order for successful schemata not to be destroyed by the genotype-phenotype mapping after a minor mutation, the locality of a representation must be high.
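Plain binary coding of integers illustrates low locality: every one-bit mutation is an equally small step in the search space, yet the resulting phenotypic steps differ by two orders of magnitude (Gray coding is a standard remedy for this, mentioned here as an aside):

```python
def binary_decode(bits):
    return int(''.join(map(str, bits)), 2)

def one_bit_neighbors(bits):
    """All genotypes at Hamming distance 1 from the given one."""
    for i in range(len(bits)):
        flipped = list(bits)
        flipped[i] ^= 1
        yield flipped

genotype = [0, 1, 0, 1, 1, 0, 1, 0]  # decodes to 90
for neighbor in one_bit_neighbors(genotype):
    # phenotypic distances to 90 range from 1 up to 128
    print(binary_decode(neighbor))
```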

Scaling

In genotype-phenotype mapping, the elements of the genotype can be scaled (weighted) differently. The simplest case is uniform scaling: all elements of the genotype are equally weighted in the phenotype. A common scaling is exponential. If integers are binary coded, the individual digits of the resulting binary number have exponentially different weights in representing the phenotype.

Example: The number 90 is written in binary (i.e., in base two) as 1011010. If one of the leading digits is changed in this binary notation, the effect on the encoded number is significantly greater than for any change at the trailing digits (the selection pressure thus acts exponentially more strongly on the leading digits).

For this reason, exponential scaling has the effect of randomly fixing the "rear" positions in the genotype before the population gets close enough to the optimum to adjust for these subtleties.
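The example can be checked directly in code (the bit positions chosen are those of the seven-digit binary number above):

```python
value = 0b1011010                  # 90, the number from the example
front_flipped = value ^ (1 << 6)   # flip the leading (most significant) digit
rear_flipped = value ^ (1 << 0)    # flip the trailing (least significant) digit
print(front_flipped)  # 26: a change of 64
print(rear_flipped)   # 91: a change of 1
```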

Hybridization and repair in genotype-phenotype mapping

When mapping the genotype to the phenotype being evaluated, domain-specific knowledge can be used to improve the phenotype and/or ensure that constraints are met. This is a commonly used method to improve EA performance in terms of runtime and solution quality. It is illustrated below by two of the three examples.

Examples

Example of a direct representation

An obvious and commonly used encoding for the traveling salesman problem and related tasks is to number the cities to be visited consecutively and store them as integers in the chromosome. The genetic operators must be suitably adapted so that they only change the order of the cities (genes) and do not cause deletions or duplications. Thus, the gene order corresponds to the city order and there is a simple one-to-one mapping.
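A sketch of such a permutation representation follows; the swap mutation reorders genes without creating deletions or duplications, and crossover would likewise need an order-preserving operator such as the standard order crossover (the code is illustrative):

```python
import random

cities = list(range(10))  # city indices 0..9; a chromosome is a tour

def random_tour():
    tour = cities.copy()
    random.shuffle(tour)
    return tour

def swap_mutation(tour):
    """Reorder genes without deletions or duplications: swap two cities."""
    i, j = random.sample(range(len(tour)), 2)
    mutated = tour.copy()
    mutated[i], mutated[j] = mutated[j], mutated[i]
    return mutated

tour = random_tour()
print(tour)
print(swap_mutation(tour))  # still visits every city exactly once
```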

Example of a complex genotype-phenotype mapping

In a scheduling task with heterogeneous and partially alternative resources to be assigned to a set of subtasks, the genome must contain all the information necessary for the individual scheduling operations, or it must be possible to derive this information from it. In addition to the order of the subtasks to be executed, this includes information about the resource selection. A phenotype then consists of a list of subtasks with their start times and assigned resources. In order to create it, as many allocation matrices are needed as the maximum number of resources that can be allocated to a single subtask. In the simplest case this is one resource, e.g., one machine that can perform the subtask. An allocation matrix is a two-dimensional matrix, with one dimension being the available time units and the other the resources to be allocated. Empty matrix cells indicate availability, while an entry indicates the number of the assigned subtask. The creation of allocation matrices ensures, firstly, that there are no inadmissible multiple allocations. Secondly, the start times of the subtasks as well as the assigned resources can be read from them.

A common constraint when scheduling resources to subtasks is that a resource can only be allocated once per time unit and that the reservation must be for a contiguous period of time. To achieve the earliest possible completion, which is a common optimization goal rather than a constraint, a simple heuristic can be used: allocate the required resource for the desired time period as early as possible, avoiding duplicate reservations. The advantage of this simple procedure is twofold: it enforces the constraint and supports the optimization goal.

If the scheduling problem is modified to the scheduling of workflows instead of independent subtasks, at least some of the work steps of a workflow have to be executed in a given order. If the previously described scheduling heuristic now determines that the predecessor of a work step is not yet completed at the step's intended start time, the following repair mechanism can help: postpone the scheduling of this work step until all its predecessors are finished. Since the genotype remains unchanged and the repair is performed only at the phenotype level, this is also called phenotypic repair. A sketch of the decoding heuristic and this repair follows.
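The sketch below condenses the earliest-slot heuristic and the phenotypic repair into a few lines (one allocation matrix, all names hypothetical simplifications of the scheme described above):

```python
# One allocation matrix: rows = resources, columns = time units.
TIME_UNITS, RESOURCES = 20, 3
allocation = [[None] * TIME_UNITS for _ in range(RESOURCES)]

def earliest_start(resource, duration, not_before=0):
    """Find the earliest contiguous free slot on a resource."""
    for start in range(not_before, TIME_UNITS - duration + 1):
        if all(allocation[resource][t] is None
               for t in range(start, start + duration)):
            return start
    raise ValueError("no free slot")

def schedule(task_id, resource, duration, predecessor_end=0):
    """Allocate as early as possible; 'predecessor_end' implements the
    phenotypic repair: a step never starts before its predecessors finish."""
    start = earliest_start(resource, duration, not_before=predecessor_end)
    for t in range(start, start + duration):
        allocation[resource][t] = task_id
    return start, start + duration

# Genotype: task order and resource choice. Phenotype: start times and
# resources read off the allocation matrix.
end_a = schedule("A", resource=0, duration=4)[1]
print(schedule("B", resource=0, duration=3, predecessor_end=end_a))  # (4, 7)
```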

Example of a heuristic-based genotype-phenotype mapping

The following layout planning task is intended to illustrate a different use of a heuristic in genotype-phenotype mapping: On a rectangular surface different geometric types of objects are to be arranged in such a way that as little area as possible remains unused. The objects can be rotated, must not overlap after placement, and must be positioned completely on the surface. A related application would be scrap minimization when cutting parts from a steel plate or fabric sheet.

The coordinates of the centers of the objects and a rotation angle, reduced to the possible isomorphisms of the geometry of the objects, can be considered as the variables to be determined. If these are set directly by an EA, there will probably be many overlaps. To avoid this, only the angle and the coordinate along one side of the rectangle are determined by the EA. Each object is rotated, positioned on the edge of that side, and shifted if necessary so that it lies inside the rectangle. It is then moved parallel to the other side until it touches another object or reaches the opposite end of the rectangle. In this way, overlaps are avoided and the unused area is reduced per placement, though not in general, which is left to the optimization.
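A much simplified sketch of this placement heuristic, for axis-aligned rectangles without rotation (the EA would supply only the x coordinate; all names are illustrative):

```python
SURFACE_W, SURFACE_H = 100, 100
placed = []  # tuples (x, y, w, h)

def overlaps(x, y, w, h):
    """Axis-aligned rectangle intersection test against all placed parts."""
    return any(x < px + pw and px < x + w and y < py + ph and py < y + h
               for px, py, pw, ph in placed)

def drop(x, w, h):
    """Slide a part from the top edge toward y = 0 until it touches the
    floor or another part; overlaps are avoided by construction."""
    x = min(max(x, 0), SURFACE_W - w)  # shift the part inside the surface
    y = SURFACE_H - h
    while y > 0 and not overlaps(x, y - 1, w, h):
        y -= 1
    placed.append((x, y, w, h))
    return x, y

print(drop(10, 30, 20))  # lands on the floor: (10, 0)
print(drop(15, 20, 10))  # lands on top of the first part: (15, 20)
```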

Semiconductor device

From Wikipedia, the free encyclopedia
Outlines of some packaged semiconductor devices

A semiconductor device is an electronic component that relies on the electronic properties of a semiconductor material (primarily silicon, germanium, and gallium arsenide, as well as organic semiconductors) for its function. Its conductivity lies between that of conductors and insulators. Semiconductor devices have replaced vacuum tubes in most applications. They conduct electric current in the solid state, rather than as free electrons across a vacuum (typically liberated by thermionic emission) or as free electrons and ions through an ionized gas.

Semiconductor devices are manufactured both as single discrete devices and as integrated circuit (IC) chips, which consist of two or more devices—which can number from the hundreds to the billions—manufactured and interconnected on a single semiconductor wafer (also called a substrate).

Semiconductor materials are useful because their behavior can be easily manipulated by the deliberate addition of impurities, known as doping. Semiconductor conductivity can be controlled by the introduction of an electric or magnetic field, by exposure to light or heat, or by the mechanical deformation of a doped monocrystalline silicon grid; thus, semiconductors can make excellent sensors. Current conduction in a semiconductor occurs due to mobile or "free" electrons and electron holes, collectively known as charge carriers. Doping a semiconductor with a small proportion of an atomic impurity, such as phosphorus or boron, greatly increases the number of free electrons or holes within the semiconductor. When a doped semiconductor contains excess holes, it is called a p-type semiconductor (p for positive electric charge); when it contains excess free electrons, it is called an n-type semiconductor (n for negative electric charge). A majority of mobile charge carriers have negative charges. The manufacture of semiconductors controls precisely the location and concentration of p- and n-type dopants. The connection of n-type and p-type semiconductors forms p–n junctions.

The most common semiconductor device in the world is the MOSFET (metal–oxide–semiconductor field-effect transistor), also called the MOS transistor. As of 2013, billions of MOS transistors were manufactured every day. The number of semiconductor devices made per year has been growing by 9.1% on average since 1978, and shipments in 2018 were predicted to exceed 1 trillion for the first time, meaning that well over 7 trillion have been made to date.

Main types

Diode

A semiconductor diode is a device typically made from a single p–n junction. At the junction of a p-type and an n-type semiconductor there forms a depletion region, where current conduction is inhibited by the lack of mobile charge carriers. When the device is forward biased (connected with the p-side at a higher electric potential than the n-side), this depletion region shrinks, allowing significant conduction. Conversely, only a very small current flows when the diode is reverse biased (connected with the n-side at a lower electric potential than the p-side, which expands the depletion region).
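This asymmetry is quantified by the ideal-diode (Shockley) equation, a standard textbook model; the sketch below uses illustrative parameter values:

```python
import math

# Ideal-diode (Shockley) equation: I = I_s * (exp(V / V_T) - 1).
I_S = 1e-12    # saturation current in amperes (illustrative value)
V_T = 0.02585  # thermal voltage at room temperature, in volts

def diode_current(v):
    return I_S * (math.exp(v / V_T) - 1)

print(diode_current(0.6))   # forward bias: ~1e-2 A, significant conduction
print(diode_current(-0.6))  # reverse bias: ~ -1e-12 A, essentially blocked
```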

Exposing a semiconductor to light can generate electron–hole pairs, which increases the number of free carriers and thereby the conductivity. Diodes optimized to take advantage of this phenomenon are known as photodiodes. Compound semiconductor diodes can also produce light, as in light-emitting diodes and laser diodes.

Transistor

Bipolar junction transistor

An n–p–n bipolar junction transistor structure

Bipolar junction transistors (BJTs) are formed from two p–n junctions, in either n–p–n or p–n–p configuration. The middle, or base, region between the junctions is typically very narrow. The other regions, and their associated terminals, are known as the emitter and the collector. A small current injected through the junction between the base and the emitter changes the properties of the base-collector junction so that it can conduct current even though it is reverse biased. This creates a much larger current between the collector and emitter, controlled by the base-emitter current.

Field-effect transistor

Another type of transistor, the field-effect transistor (FET), operates on the principle that semiconductor conductivity can be increased or decreased by the presence of an electric field. An electric field can increase the number of free electrons and holes in a semiconductor, thereby changing its conductivity. The field may be applied by a reverse-biased p–n junction, forming a junction field-effect transistor (JFET), or by an electrode insulated from the bulk material by an oxide layer, forming a metal–oxide–semiconductor field-effect transistor (MOSFET).

Metal-oxide-semiconductor

Operation of a MOSFET and its Id–Vg curve. At first, when no gate voltage is applied, there are no inversion electrons in the channel and the device is off. As the gate voltage increases, the inversion electron density in the channel increases, the current increases, and the device turns on.

The metal-oxide-semiconductor FET (MOSFET, or MOS transistor), a solid-state device, is by far the most widely used semiconductor device today. It accounts for at least 99.9% of all transistors, and an estimated 13 sextillion MOSFETs were manufactured between 1960 and 2018.

The gate electrode is charged to produce an electric field that controls the conductivity of a "channel" between two terminals, called the source and drain. Depending on the type of carrier in the channel, the device may be an n-channel (for electrons) or a p-channel (for holes) MOSFET. Although the MOSFET is named in part for its "metal" gate, in modern devices polysilicon is typically used instead.
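The gate's control of the channel is often summarized by the long-channel ("square-law") textbook model sketched below; the parameter values are illustrative, not from this article:

```python
# Simple long-channel model of an n-channel MOSFET's transfer behavior.
K = 2e-3    # transconductance parameter, A/V^2 (illustrative)
V_TH = 0.7  # threshold voltage, V (illustrative)

def drain_current(v_gs, v_ds=1.5):
    if v_gs <= V_TH:
        return 0.0  # no inversion channel: the device is off
    if v_ds < v_gs - V_TH:
        # triode (linear) region
        return K * ((v_gs - V_TH) * v_ds - v_ds ** 2 / 2)
    # saturation region
    return 0.5 * K * (v_gs - V_TH) ** 2

for v_gs in (0.5, 1.0, 1.5, 2.0):
    print(v_gs, drain_current(v_gs))  # current rises once v_gs exceeds V_TH
```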

Other types

Many other semiconductor devices exist; they are commonly grouped by their number of terminals into two-terminal, three-terminal, and four-terminal devices.

Materials

By far, silicon (Si) is the most widely used material in semiconductor devices. Its combination of low raw material cost, relatively simple processing, and a useful temperature range makes it currently the best compromise among the various competing materials. Silicon used in semiconductor device manufacturing is currently fabricated into boules that are large enough in diameter to allow the production of 300 mm (12 in.) wafers.

Germanium (Ge) was a widely used early semiconductor material but its thermal sensitivity makes it less useful than silicon. Today, germanium is often alloyed with silicon for use in very-high-speed SiGe devices; IBM is a major producer of such devices.

Gallium arsenide (GaAs) is also widely used in high-speed devices, but so far it has been difficult to form large-diameter boules of this material, limiting the wafer diameter to sizes significantly smaller than silicon wafers and thus making mass production of GaAs devices significantly more expensive than silicon.

Gallium nitride (GaN) is gaining popularity in high-power applications including power ICs, light-emitting diodes (LEDs), and RF components due to its high strength and thermal conductivity. Compared to silicon, GaN's band gap is more than three times wider at 3.4 eV, and it conducts electrons 1,000 times more efficiently.

Other less common materials are also in use or under investigation.

Silicon carbide (SiC) is also gaining popularity in power ICs, has found some application as the raw material for blue LEDs, and is being investigated for use in semiconductor devices that must withstand very high operating temperatures and environments with significant levels of ionizing radiation. IMPATT diodes have also been fabricated from SiC.

Various indium compounds (indium arsenide, indium antimonide, and indium phosphide) are also being used in LEDs and solid-state laser diodes. Selenium sulfide is being studied in the manufacture of photovoltaic solar cells.

The most common use for organic semiconductors is organic light-emitting diodes.

Applications

All transistor types can be used as the building blocks of logic gates, which are fundamental in the design of digital circuits. In digital circuits like microprocessors, transistors act as on-off switches; in the MOSFET, for instance, the voltage applied to the gate determines whether the switch is on or off.

Transistors used for analog circuits do not act as on-off switches; rather, they respond to a continuous range of inputs with a continuous range of outputs. Common analog circuits include amplifiers and oscillators.

Circuits that interface or translate between digital circuits and analog circuits are known as mixed-signal circuits.

Power semiconductor devices are discrete devices or integrated circuits intended for high-current or high-voltage applications. Power integrated circuits combine IC technology with power semiconductor technology; these are sometimes referred to as "smart" power devices. Several companies specialize in manufacturing power semiconductors.

Component identifiers

The part numbers of semiconductor devices are often manufacturer specific. Nevertheless, there have been attempts at creating standards for type codes, and a subset of devices follow those. For discrete devices, for example, there are three standards: JEDEC JESD370B in the United States, Pro Electron in Europe, and Japanese Industrial Standards (JIS).

Fabrication

A semiconductor device manufacturing facility at HP Labs

Semiconductor device fabrication is the process used to manufacture semiconductor devices, typically integrated circuits (ICs) such as microprocessors, microcontrollers, and memories (such as RAM and flash memory). It is a multiple-step photolithographic and physico-chemical process (with steps such as thermal oxidation, thin-film deposition, ion implantation, and etching) during which electronic circuits are gradually created on a wafer, typically made of pure single-crystal semiconducting material. Silicon is almost always used, but various compound semiconductors are used for specialized applications. This article focuses on the manufacture of integrated circuits; however, steps such as etching and photolithography can also be used to manufacture other devices such as LCD and OLED displays.

The fabrication process is performed in highly specialized semiconductor fabrication plants, also called foundries or "fabs", with the central part being the "clean room". In more advanced semiconductor devices, such as modern 14/10/7 nm nodes, fabrication can take up to 15 weeks, with 11–13 weeks being the industry average. Production in advanced fabrication facilities is completely automated, with automated material handling systems taking care of the transport of wafers from machine to machine.

A wafer often carries several integrated circuits, which are called dies, as they are pieces diced from a single wafer. Individual dies are separated from a finished wafer in a process called die singulation, also called wafer dicing. The dies can then undergo further assembly and packaging.

Within fabrication plants, the wafers are transported inside special sealed plastic boxes called FOUPs. FOUPs in many fabs contain an internal nitrogen atmosphere, which helps prevent the copper used for wiring in modern semiconductors from oxidizing on the wafers. The insides of the processing equipment and FOUPs are kept cleaner than the surrounding air in the cleanroom. This internal atmosphere is known as a mini-environment and helps improve yield, the proportion of working devices on a wafer. The mini-environment is maintained within an EFEM (equipment front end module), which allows a machine to receive FOUPs and introduces wafers from the FOUPs into the machine. Additionally, many machines handle wafers in clean nitrogen or vacuum environments to reduce contamination and improve process control. Fabrication plants need large amounts of liquid nitrogen to maintain the atmosphere inside production machinery and FOUPs, which are constantly purged with nitrogen. There can also be an air curtain or a mesh between the FOUP and the EFEM, which helps reduce the amount of humidity that enters the FOUP and improves yield.

Companies that manufacture machines used in the industrial semiconductor fabrication process include ASML, Applied Materials, Tokyo Electron and Lam Research.

History of development

Cat's-whisker detector

Semiconductors had been used in the electronics field for some time before the invention of the transistor. Around the turn of the 20th century they were quite common as detectors in radios, used in a device called a "cat's whisker" developed by Jagadish Chandra Bose and others. These detectors were somewhat troublesome, however, requiring the operator to move a small tungsten filament (the whisker) around the surface of a galena (lead sulfide) or carborundum (silicon carbide) crystal until it suddenly started working. Then, over a period of a few hours or days, the cat's whisker would slowly stop working and the process would have to be repeated. At the time their operation was completely mysterious. After the introduction of the more reliable, amplifying vacuum-tube-based radios, the cat's whisker systems quickly disappeared. The "cat's whisker" is a primitive example of a special type of diode still popular today, called a Schottky diode.

Metal rectifier

Another early type of semiconductor device is the metal rectifier in which the semiconductor is copper oxide or selenium. Westinghouse Electric (1886) was a major manufacturer of these rectifiers.

World War II

During World War II, radar research quickly pushed radar receivers to operate at ever higher frequencies, around 4,000 MHz, at which traditional tube-based radio receivers no longer worked well. The introduction of the cavity magnetron from Britain to the United States in 1940 during the Tizard Mission resulted in a pressing need for a practical high-frequency amplifier.

On a whim, Russell Ohl of Bell Laboratories decided to try a cat's whisker. By this point, they had not been in use for a number of years, and no one at the labs had one. After hunting one down at a used radio store in Manhattan, he found that it worked much better than tube-based systems.

Ohl investigated why the cat's whisker functioned so well. He spent most of 1939 trying to grow purer versions of the crystals. He soon found that with higher-quality crystals their finicky behavior went away, but so did their ability to operate as a radio detector. One day he found that one of his purest crystals nevertheless worked well, and that it had a clearly visible crack near the middle. However, as he moved about the room trying to test it, the detector would mysteriously work, and then stop again. After some study he found that the behavior was controlled by the light in the room – more light caused more conductance in the crystal. He invited several other people to see this crystal, and Walter Brattain immediately realized there was some sort of junction at the crack.

Further research cleared up the remaining mystery. The crystal had cracked because either side contained very slightly different amounts of the impurities Ohl could not remove – about 0.2%. One side of the crystal had impurities that added extra electrons (the carriers of electric current), making it a "conductor". The other had impurities that wanted to bind to these electrons, making it (what he called) an "insulator". Because the two parts of the crystal were in contact with each other, the electrons could be pushed out of the conductive side, which had extra electrons (soon to be known as the emitter), and replaced by new ones being provided (from a battery, for instance), where they would flow into the insulating portion and be collected by the whisker filament (named the collector). However, when the voltage was reversed, the electrons being pushed into the collector would quickly fill up the "holes" (the electron-needy impurities), and conduction would stop almost instantly. This junction of the two crystals (or parts of one crystal) created a solid-state diode, and the concept soon became known as semiconduction. The mechanism of action when the diode is off has to do with the separation of charge carriers around the junction, in what is called a "depletion region".

Development of the diode

Armed with the knowledge of how these new diodes worked, a vigorous effort began to learn how to build them on demand. Teams at Purdue University, Bell Labs, MIT, and the University of Chicago all joined forces to build better crystals. Within a year germanium production had been perfected to the point where military-grade diodes were being used in most radar sets.

Development of the transistor

After the war, William Shockley decided to attempt the building of a triode-like semiconductor device. He secured funding and lab space, and went to work on the problem with Brattain and John Bardeen.

The key to the development of the transistor was the further understanding of the process of electron mobility in a semiconductor. It was realized that if there were some way to control the flow of the electrons from the emitter to the collector of this newly discovered diode, an amplifier could be built. For instance, if contacts are placed on both sides of a single type of crystal, current will not flow between them through the crystal. However, if a third contact could then "inject" electrons or holes into the material, the current would flow.

Actually doing this appeared to be very difficult. If the crystal were of any reasonable size, the number of electrons (or holes) required to be injected would have to be very large, making it less than useful as an amplifier because it would require a large injection current to start with. That said, the whole idea of the crystal diode was that the crystal itself could provide the electrons over a very small distance, the depletion region. The key appeared to be to place the input and output contacts very close together on the surface of the crystal on either side of this region.

Brattain started working on building such a device, and tantalizing hints of amplification continued to appear as the team worked on the problem. Sometimes the system would work but then stop working unexpectedly. In one instance a non-working system started working when placed in water. Ohl and Brattain eventually developed a new branch of quantum mechanics, which became known as surface physics, to account for the behavior. The electrons in any one piece of the crystal would migrate about due to nearby charges. Electrons in the emitters, or the "holes" in the collectors, would cluster at the surface of the crystal where they could find their opposite charge "floating around" in the air (or water). Yet they could be pushed away from the surface with the application of a small amount of charge from any other location on the crystal. Instead of needing a large supply of injected electrons, a very small number in the right place on the crystal would accomplish the same thing.

Their understanding solved the problem of needing a very small control area to some degree. Instead of needing two separate semiconductors connected by a common, but tiny, region, a single larger surface would serve. The electron-emitting and collecting leads would both be placed very close together on the top, with the control lead placed on the base of the crystal. When current flowed through this "base" lead, the electrons or holes would be pushed out, across the block of the semiconductor, and collect on the far surface. As long as the emitter and collector were very close together, this should allow enough electrons or holes between them to allow conduction to start.

First transistor

A stylized replica of the first transistor

The Bell team made many attempts to build such a system with various tools but generally failed. Setups where the contacts were close enough were invariably as fragile as the original cat's whisker detectors had been, and would work briefly, if at all. Eventually, they had a practical breakthrough. A piece of gold foil was glued to the edge of a plastic wedge, and then the foil was sliced with a razor at the tip of the triangle. The result was two very closely spaced contacts of gold. When the wedge was pushed down onto the surface of a crystal and voltage was applied to the other side (on the base of the crystal), current started to flow from one contact to the other as the base voltage pushed the electrons away from the base towards the other side near the contacts. The point-contact transistor had been invented.

While the device had been constructed a week earlier, Brattain's notes describe the first demonstration to higher-ups at Bell Labs on the afternoon of 23 December 1947, often given as the birthdate of the transistor. What is now known as the "p–n–p point-contact germanium transistor" operated as a speech amplifier with a power gain of 18 in that trial. John Bardeen, Walter Houser Brattain, and William Bradford Shockley were awarded the 1956 Nobel Prize in Physics for their work.

Etymology of "transistor"

Bell Telephone Laboratories needed a generic name for their new invention: "Semiconductor Triode", "Solid Triode", "Surface States Triode" [sic], "Crystal Triode" and "Iotatron" were all considered, but "transistor", coined by John R. Pierce, won an internal ballot. The rationale for the name is described in the following extract from the company's Technical Memoranda (May 28, 1948) calling for votes:

Transistor. This is an abbreviated combination of the words "transconductance" or "transfer", and "varistor". The device logically belongs in the varistor family, and has the transconductance or transfer impedance of a device having gain, so that this combination is descriptive.

Improvements in transistor design

Shockley was upset about the device being credited to Brattain and Bardeen, who he felt had built it "behind his back" to take the glory. Matters became worse when Bell Labs lawyers found that some of Shockley's own writings on the transistor were close enough to those of an earlier 1925 patent by Julius Edgar Lilienfeld that they thought it best that his name be left off the patent application.

Shockley was incensed, and decided to demonstrate who was the real brains of the operation. A few months later he invented an entirely new, considerably more robust type of transistor with a layer or "sandwich" structure: the bipolar junction transistor, which was used for the vast majority of all transistors into the 1960s.

With the fragility problems solved, the remaining problem was purity. Making germanium of the required purity was proving to be a serious problem and limited the yield of transistors that actually worked from a given batch of material. Germanium's sensitivity to temperature also limited its usefulness. Scientists theorized that silicon would be easier to fabricate, but few investigated this possibility. Former Bell Labs scientist Gordon K. Teal was the first to develop a working silicon transistor at the nascent Texas Instruments, giving it a technological edge. From the late 1950s, most transistors were silicon-based. Within a few years transistor-based products, most notably easily portable radios, were appearing on the market. "Zone melting", a technique using a band of molten material moving through the crystal, further increased crystal purity.

Metal-oxide semiconductor

In the 1950s, Mohamed Atalla investigated the surface properties of silicon semiconductors at Bell Labs, where he proposed a new method of semiconductor device fabrication: coating a silicon wafer with an insulating layer of silicon oxide so that electricity could reliably penetrate to the conducting silicon below, overcoming the surface states that prevented electricity from reaching the semiconducting layer. This is known as surface passivation, a method that became critical to the semiconductor industry as it made possible the mass production of silicon integrated circuits (ICs). Building on his surface passivation method, he developed the metal oxide semiconductor (MOS) process, which he proposed could be used to build the first working silicon field-effect transistor (FET). This led to the invention of the MOSFET (MOS field-effect transistor) by Mohamed Atalla and Dawon Kahng in 1959. With its scalability, much lower power consumption, and higher density than bipolar junction transistors, the MOSFET became the most common type of transistor in computers, electronics, and communications technology such as smartphones. The US Patent and Trademark Office calls the MOSFET a "groundbreaking invention that transformed life and culture around the world".

CMOS (complementary MOS) was invented by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor in 1963. The first report of a floating-gate MOSFET was made by Dawon Kahng and Simon Sze in 1967. FinFET (fin field-effect transistor), a type of 3D multi-gate MOSFET, was developed by Digh Hisamoto and his team of researchers at Hitachi Central Research Laboratory in 1989.

Biosemiotics

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Biosemiotics

Biosemiotics (from the Greek βίος bios, "life" and σημειωτικός sēmeiōtikos, "observant of signs") is a field of semiotics (especially neurosemiotics) and biology that studies prelinguistic meaning-making, biological interpretation processes, the production of signs and codes, and communication processes in the biological realm.

Biosemiotics integrates the findings of biology and semiotics and proposes a paradigmatic shift in the scientific view of life, in which semiosis (sign process, including meaning and interpretation) is one of its immanent and intrinsic features. The term biosemiotic was first used by Friedrich S. Rothschild in 1962, but Thomas Sebeok, Thure von Uexküll, Jesper Hoffmeyer and many others have implemented the term and field. The field is generally divided between theoretical and applied biosemiotics.

Insights from biosemiotics have also been adopted in the humanities and social sciences, including human–animal studies, human–plant studies and cybersemiotics.

Definition

Biosemiotics is the study of meaning making processes in the living realm, or, to elaborate, a study of

  • signification, communication and habit formation of living processes
  • semiosis (creating and changing sign relations) in living nature
  • the biological basis of all signs and sign interpretation
  • interpretative processes, codes and cognition in organisms

Main branches

According to the basic types of semiosis under study, biosemiotics can be divided into

  • vegetative semiotics (also endosemiotics, or phytosemiotics), the study of semiosis at the cellular and molecular level (including the translation processes related to genome and the organic form or phenotype); vegetative semiosis occurs in all organisms at their cellular and tissue level; vegetative semiotics includes prokaryote semiotics, sign-mediated interactions in bacteria communities such as quorum sensing and quorum quenching.
  • zoosemiotics or animal semiotics, the study of animal forms of knowing; animal semiosis occurs in organisms with a neuromuscular system; it also includes anthroposemiotics, the study of semiotic behavior in humans.

According to the dominant aspect of semiosis under study, the following labels have been used: biopragmatics, biosemantics, and biosyntactics.

History

Apart from Charles Sanders Peirce (1839–1914) and Charles W. Morris (1903–1979), early pioneers of biosemiotics were Jakob von Uexküll (1864–1944), Heini Hediger (1908–1992), Giorgio Prodi (1928–1987), Marcel Florkin (1900–1979) and Friedrich S. Rothschild (1899–1995); the founding fathers of the contemporary interdiscipline were Thomas Sebeok (1920–2001) and Thure von Uexküll (1908–2004).

In the 1980s a circle of mathematicians active in Theoretical Biology, René Thom (Institut des Hautes Etudes Scientifiques), Yannick Kergosien (Dalhousie University and Institut des Hautes Etudes Scientifiques), and Robert Rosen (Dalhousie University, also a former member of the Buffalo group with Howard H. Pattee), explored the relations between Semiotics and Biology using such headings as "Nature Semiotics", "Semiophysics", or "Anticipatory Systems" and taking a modeling approach.

The contemporary period (as initiated by the Copenhagen–Tartu school) includes biologists Jesper Hoffmeyer, Kalevi Kull, Claus Emmeche, Terrence Deacon, semioticians Martin Krampen, Paul Cobley, philosophers Donald Favareau, John Deely, John Collier, and complex systems scientists Howard H. Pattee, Michael Conrad, Luis M. Rocha, Cliff Joslyn and León Croizat.

In 2001, an annual international conference for biosemiotic research known as the Gatherings in Biosemiotics was inaugurated, and has taken place every year since.

In 2004, a group of biosemioticians – Marcello Barbieri, Claus Emmeche, Jesper Hoffmeyer, Kalevi Kull, and Anton Markoš – decided to establish an international journal of biosemiotics. Under their editorship, the Journal of Biosemiotics was launched by Nova Science Publishers in 2005 (two issues published), and with the same five co-editors Biosemiotics was launched by Springer in 2008. The book series Biosemiotics (Springer), edited by Claus Emmeche, Donald Favareau, Kalevi Kull, and Alexei Sharov, began in 2007 and 27 volumes have been published in the series by 2024.

The International Society for Biosemiotic Studies was established in 2005 by Donald Favareau and the five editors listed above. A collective programmatic paper on the basic theses of biosemiotics appeared in 2009, and in 2010 an 800-page textbook and anthology, Essential Readings in Biosemiotics, was published, with bibliographies and commentary by Donald Favareau.

One of the roots of biosemiotics is medical semiotics. In 2016, Springer published Biosemiotic Medicine: Healing in the World of Meaning, edited by Farzad Goli as part of Studies in Neuroscience, Consciousness and Spirituality.

In the humanities

Since the work of Jakob von Uexküll and Martin Heidegger, several scholars in the humanities have engaged with or appropriated ideas from biosemiotics in their own projects; conversely, biosemioticians have critically engaged with or reformulated humanistic theories using ideas from biosemiotics and complexity theory. For instance, Andreas Weber has reformulated some of Hans Jonas's ideas using concepts from biosemiotics, and biosemiotics has been used to interpret the poetry of John Burnside.

Since 2021, the American philosopher Jason Josephson Storm has drawn on biosemiotics and empirical research on animal communication to propose hylosemiotics, a theory of ontology and communication that Storm believes could allow the humanities to move beyond the linguistic turn.

John Deely's work also represents an engagement between humanistic and biosemiotic approaches. Deely was trained as a historian and not a biologist but discussed biosemiotics and zoosemiotics extensively in his introductory works on semiotics and clarified terms that are relevant for biosemiotics. Although his idea of physiosemiotics was criticized by practicing biosemioticians, Paul Cobley, Donald Favareau, and Kalevi Kull wrote that "the debates on this conceptual point between Deely and the biosemiotics community were always civil and marked by a mutual admiration for the contributions of the other towards the advancement of our understanding of sign relations."

Cisgender

From Wikipedia, the free encyclopedia
 
The word cisgender (often shortened to cis; sometimes cissexual) describes a person whose gender identity corresponds to their sex assigned at birth, i.e., someone who is not transgender. The prefix cis- is Latin and means on this side of. The term cisgender was coined in 1994 as an antonym to transgender, and entered into dictionaries starting in 2015 as a result of changes in social discourse about gender.

Related concepts are cisnormativity (the presumption that cisgender identity is preferred or normal) and cissexism (bias or prejudice favoring cisgender people).

Etymology

The term cisgender has its origin in the Latin-derived prefix cis-, meaning 'on this side of', which is the opposite of trans-, meaning 'across from' or 'on the other side of'. This usage can be seen in the cis–trans distinction in chemistry, the cis and trans sides of the Golgi apparatus in cellular biology, the ancient Roman term Cisalpine Gaul (i.e. 'Gaul on this side of the Alps'), and Cisjordan (as distinguished from Transjordan). In cisgender, cis- describes the alignment of gender identity with assigned sex.

History and usage of the term

Coinage

German

Marquis Bey states that "proto-cisgender discourse" arose in German in 1914, when Ernst Burchard introduced the cis/trans distinction to sexology by contrasting "cisvestitismus, or a type of inclination to wear gender-conforming clothing, [...] with transvestitismus, or cross-dressing." German sexologist Volkmar Sigusch used the term cissexual (zissexuell in German) in his two-part 1991 article "Die Transsexuellen und unser nosomorpher Blick" ("Transsexuals and our nosomorphic view"); in 1998, he said he had coined the term there.

English

The term cisgender was coined in English in 1994 in a Usenet newsgroup about transgender topics as Dana Defosse, then a graduate student, sought a way to refer to non-transgender people that avoided marginalizing transgender people or implying that transgender people were an other. John Hollister used it that same year. In 1995, Carl Buijs used it, apparently coining it independently.

Academic use

Medical academics have used the term and recognized its importance in transgender studies since the 1990s. After the terms cisgender and cissexual were used in a 2006 article in the Journal of Lesbian Studies and in Julia Serano's 2007 book Whipping Girl, the former gained further popularity among English-speaking activists and scholars. Cisgender was added to the Oxford English Dictionary in 2015, defined as "designating a person whose sense of personal identity corresponds to the sex and gender assigned to him or her at birth (in contrast with transgender)". Perspectives on History states that since this inclusion, the term has increasingly become common usage.

Social media

In February 2014, Facebook began offering "custom" gender options, allowing users to identify with one or more gender-related terms from a selected list, including cis, cisgender, and others.

Definitions

Sociologists Kristen Schilt and Laurel Westbrook define cisgender as a label for "individuals who have a match between the gender they were assigned at birth, their bodies, and their personal identity". A number of derivatives of the terms cisgender and cissexual include cis male for "male assigned male at birth", cis female for "female assigned female at birth", analogously cis man and cis woman, and cissexism and cissexual assumption or cisnormativity (akin to heteronormativity). Eli R. Green wrote in 2006, "cisgendered is used [instead of the more popular gender normative] to refer to people who do not identify with a gender diverse experience, without enforcing existence of a normative gender expression".

Others have similarly argued that using terms such as man or woman to mean cis man or cis woman reinforced cisnormativity, and that instead using the prefix cis similarly to trans would counteract the cisnormative connotations within language.

Julia Serano has defined cissexual as "people who are not transsexual and who have only ever experienced their mental and physical sexes as being aligned", while cisgender is a slightly narrower term for those who do not identify as transgender (a larger cultural category than the more clinical transsexual). For Jessica Cadwallader, cissexual is "a way of drawing attention to the unmarked norm, against which trans is identified, in which a person feels that their gender identity matches their body/sex".

Serano also uses the related term cissexism, "which is the belief that transsexuals' identified genders are inferior to, or less authentic than, those of cissexuals". In 2010, the term cisgender privilege appeared in academic literature, defined as the "set of unearned advantages that individuals who identify as the gender they were assigned at birth accrue solely due to having a cisgender identity".

Critiques

While intended to be a positive descriptor distinguishing trans from non-trans identity, the term has met with criticism in more recent years.

From feminism and gender studies

Krista Scott-Dixon wrote in 2009 that she preferred "the term non-trans to other options such as cissexual/cisgendered", saying non-trans is clearer to average people.

Women's and gender studies scholar Mimi Marinucci writes that some consider the 'cisgender–transgender' binary distinction to be as dangerous or self-defeating as the masculine–feminine gender binary because it lumps people who identify as lesbian, gay, or bisexual (LGB) together (over-simplistically, in her view) with a heteronormative class of people in an opposition with transgender people; she says that characterizing LGB individuals together with heterosexual, non-trans people may problematically suggest that LGB individuals, unlike transgender individuals, "experience no mismatch between their own gender identity and gender expression and cultural expectations regarding gender identity and expression".

Gender studies professor Chris Freeman criticizes the term, describing it as "clunky, unhelpful and maybe even regressive" and saying it "creates – or re-creates – a gender binary".

From intersex organizations

Intersex people are born with atypical physical sex characteristics that can complicate initial sex assignment and lead to involuntary or coercive medical treatment. The term cisgender "can get confusing" in relation to people with intersex conditions, although some intersex people use the term, according to the Inter/Act project of Interact Advocates for Intersex Youth. Hida Viloria of Intersex Campaign for Equality notes that, as a person born with an intersex body who has a non-binary sense of gender identity that "matches" their body, they are both cisgender and gender non-conforming, presumably opposites according to cisgender's definition, and that this evidences the term's basis in a binary sex model that does not account for intersex people's existence. Viloria also critiques the fact that the term sex assigned at birth is used in one of cisgender's definitions without noting that, in most of the world, babies are assigned male or female regardless of intersex status. Doing so, Viloria argues, obfuscates the birth of intersex babies and frames gender identity within a binary male/female sex model that fails to account both for the existence of natally congruent gender non-conforming gender identities and for gender-based discrimination against intersex people based on natal sex characteristics rather than on gender identity or expression, such as "normalizing" infant genital surgeries.

From Elon Musk

In June 2023, Elon Musk, owner of the social network Twitter (now X), stated that use of the words "cis" and "cisgender" on the platform as targeted harassment would constitute a violation of its hateful conduct policy, as he considered them to be slurs. The change came following an interaction between Musk and a gender-critical commentator, who alleged that pro-trans advocates were using forms of the word (such as "cissy", a variant of the pejorative sissy) to insult him following a post in which he rejected the term. Musk has since described cisgender as being "heterophobic" and a "heterosexual slur". It came amid the loosening of other rules protecting LGBT users under his ownership, including the removal of rules prohibiting deadnaming.

Responses to critiques

After the Oxford English Dictionary added cisgender as a word in 2015, The Advocate wrote that "even among LGBT people, the word is hotly debated"; transgender veteran Brynn Tannehill argued that it was "often used in a negative way" by trans people to express "a certain level of contempt" for people they think should not partake in discussions on trans issues. Transgender scholar K.J. Rawson, by contrast, stated that "cis" was "not meant to be dismissive, but rather descriptive", and was no different than using the word "straight" to describe people who are heterosexual. Rawson explained that people who are straight "don't typically experience their heterosexuality as an identity, many don't identify as heterosexual—they don't need to, because culture has already done that for them", and that "similarly, cisgender people don't generally identify as cisgender because societal expectations already presume that they are."

In a 2023 essay, Defosse said she did not intend the word as an insult. She says she does not believe the word cisgender caused problems, and that "it only revealed them."
