Tuesday, November 8, 2022

Information policy

From Wikipedia, the free encyclopedia

Information policy is the set of all public laws, regulations and policies that encourage, discourage, or regulate the creation, use, storage, access, and communication and dissemination of information. It thus encompasses any other decision-making practice with society-wide constitutive effects that involves the flow of information and how it is processed.

There are several fundamental issues that comprise information policy. Most prominent are public policy issues concerned with the use of information for democratization and commercialization of social life. These include, inter alia, issues of the digital environment, such as the digital divide, intellectual property, economic regulation, freedom of expression, confidentiality or privacy of information, information security, access management, and the regulation of how public information is disseminated. Certain categories of information are of particular importance for information policy, including news information, health information, and census information.

Information policy is the central problem for information societies. As nations make the transition from industrialism to post-industrialism, information issues become increasingly critical. According to sociologist Daniel Bell, "what counts now is not raw muscle power or energy but information" (Daniel Bell, The Coming of Post-Industrial Society, 1973, p. 37). While all societies have been to some extent based on information, information societies are almost wholly dependent on computerized information. As Marc Uri Porat, the first researcher to use the term "information policy", wrote: "The foundation of the information economy, our new central fact, is the computer. Its ability to manipulate and process information represents a profound departure from our modest human abilities". The computer's combination with telecommunications, he continued, posed "the policy problems of the future". (Marc Uri Porat, The Information Economy, 1976, p. 205.)

Overview

Information policy became a prominent field of study during the latter half of the 20th century as the shift from an industrial to an information society transpired. It has since evolved from being seen as relatively unimportant to having a much more overarching strategic significance, since it establishes the conditions “under which all other decision making, public discourse, and political activity occur.” Growing awareness of the importance of information policy has sparked interest among various groups in studying and analyzing its scope. The most common audience for information policy analysis includes undergraduate and graduate students, scholars, policymakers, policy analysts, and those members of the public who have taken an interest in understanding the effects of the laws and regulations involving information.

Although information policy generally has a broad definition and encapsulates a multitude of components, its scope and impact can vary depending on the context. For example, in the context of an information lifecycle, information policy refers to the laws and policies that deal with the stages information goes through, beginning with its creation, through its collection, organization, and dissemination, and finally to its destruction. In the context of public administration, on the other hand, information policy is the means by which government employees, institutions, and information systems adapt to a rapidly changing environment and use information for decision-making (e.g., Andersen and Dawes, 1991; also see Bozeman and Bretschneider, 1986, and Stevens and McGowan, 1985). These two contexts illustrate the varying scopes of the phrase “information policy.”

Information policy is, in fact, a combination of several disciplines, including information science, economics, law, and public policy. Its scope may therefore differ depending on which discipline is analyzing or using it: the information sciences may be more concerned with technical advances and their impact on information policy, while from a legal perspective issues such as privacy rights and intellectual property may be of greatest focus.

History

Information policy first became visible around the mid-1900s. The beginning of the evolution from an industrial society to an information society sparked several other transformations: common industrial technologies began to be replaced by informational meta-technologies, organizations changed their forms, several new architectures of knowledge developed, and, most importantly, the information economy replaced the industrial and agricultural economies.

By the 1970s, the concept of a national information policy had been created to protect the data and information used in creating public policy. The earliest adopters included the United States, Australia, and several European countries, all of which recognized the need for more standardized governance of information.

Elizabeth Orna provided a brief history of the development of ideas surrounding national and organizational information policies, from the establishment of the United Kingdom Ministry of Information in the First World War to the present day.

In the 20th century, information policy evolved further safeguards to cope with the privacy problems posed by databases. In the US, the federal Privacy Act gives individuals the right to inspect and correct personal information held in federal data files.

Types and importance

Information policy can be separated into two categories: in a narrow, short-term sense it focuses exclusively on information science, while in a broader sense it relates to many different subjects over a much larger time period, dating back, for example, to Roman civilization, the Bill of Rights, or the Constitution.

The most obvious reason for the need for information policy is the legal issues associated with the advancement of technology. More precisely, the digitization of cultural content has driven the cost of copying to nearly zero and increased the illegal exchange of files, whether online (via file-sharing websites or P2P technologies) or offline (copying of hard disks). As a result, there are many grey areas between what users can and cannot do, and this creates the need for some form of regulation; it has led, for example, to the proposal of the Stop Online Piracy Act (SOPA). Information policy marks the boundaries needed to evaluate issues dealing with the creation, processing, exchange, access, and use of information. An information policy serves several purposes:

1. avoiding risks (financial losses from incomplete and uncoordinated exploitation of information, wasted time, failures of innovation, and loss of reputation);

2. securing positive benefits, including negotiation and openness among those responsible for different aspects of information management;

3. productive use of IT in supporting staff in their use of information;

4. the ability to initiate change to take advantage of changing environments.

Issues

Several issues surround organizational information policies: the interaction between human beings and technology in using information; whether a top-down or middle-up-down process is the best way to implement information policy in an organization; and the tendency of information flow to be shaped by an organization's culture, which adds complexity. Orna also discusses the problem of valuing information: the value of information depends on the user and cannot be measured by price. Information is an asset, or intellectual capital, that becomes valuable when it is used in productive ways.

Convergence

Convergence essentially combines all forms of media, telecommunications, broadcasting, and computing by the use of a single technology: digital computers. It integrates diverse technological systems in the hopes of improving performance of similar tasks. Convergence is thought to be the result of the need for expansion into new markets due to competition and technological advances that have created a threat of new entrants into various segments of the value chain. As a result, previously disparate technologies interact with one another synergistically to deliver information in new and unique ways and allow for inventive solutions to be developed.

Nearly every innovative trend in the social media industry involves adding data or layers of connectivity: social networking sites have begun interacting with e-mail functionality, search engines have begun integrating Internet searches with Facebook data, and Twitter, along with various other social media platforms, has started to play a prominent role in the emergency management framework (mitigation, preparedness, response, and recovery), among others.

In 2012 a prominent issue arose that deals with the convergence of social media with copyright infringement monitoring systems. The growing interest in this topic can be largely attributed to the recent anti-piracy bills: the Stop Online Piracy Act and the PROTECT IP Act. Various officials from all over the world have expressed an interest in forcing social networks to install and utilize monitoring systems to determine if users are illegally obtaining copyrighted material. For example, if implemented, these filters could prevent the illegal sharing of music over social networking platforms. The convergence of search engines and social networks could make this process even easier. Search engines such as Google, Yahoo, and Bing have begun to merge with social media platforms to link Internet searches to users' social networking profiles on sites such as Facebook. This poses an even greater threat to users, since their Internet searches can be monitored via their social networks.

The issue of converging social networks with piracy monitoring systems becomes controversial when it comes to protecting personal data and abiding by privacy laws. In order for a synergy such as this one to take place, regulatory convergence would need to be considered. Regulatory convergence is the merging of previously disparate industry-based laws and regulations into a single legal and regulatory framework.

Internet governance

Internet governance has both narrow and broad definitions, making it a complex concept to understand. When most people think of Internet governance, they think of the regulation of the content and conduct that are communicated and acted on through the Internet. Although this is certainly a broad component of Internet governance, there are also narrower elements of the definition that are often overlooked. Internet governance also encompasses the regulation of Internet infrastructure and of the processes, systems, and institutions that regulate the fundamental systems determining the capabilities of the Internet.

Architecture is the foundation of the Internet. The fundamental goal of the Internet architecture is to create a network of networks by interconnecting various computer network systems globally. Protocols such as TCP/IP, along with other network protocols, serve as the rules and conventions by which computers can communicate with each other. Thus, TCP/IP is often viewed as the most important institution of Internet governance: it serves as the backbone of network connectivity.

Organizations such as the Internet Corporation for Assigned Names and Numbers (ICANN) coordinate the various systems within the Internet on a global level to help preserve its operational stability. For example, coordinating IP addresses and managing the Domain Name System (DNS) ensures that computers and devices can correctly connect to the Internet and communicate effectively worldwide. If crucial elements of the Internet such as TCP/IP and DNS were governed by disparate principles, the Internet would no longer exist as it does today: networks, computers, and peripherals would not be able to communicate or enjoy the same accessibility if these foundational elements varied.

Government roles

As with any policy, there must be an agent to govern and regulate it. In the broad sense of information policy, the government has several roles and responsibilities, including providing accurate information, producing and maintaining information that meets the specific needs of the public, protecting the privacy and confidentiality of personal and sensitive information, and making informed decisions about which information should be disseminated and how to distribute it effectively. Although the government plays an active role in information policy, analysis of information policy should include not only the formal decision-making processes of government entities but also the formal and informal decisions of both the private and public sectors of governance.

Security vs freedom of information

A persistent debate concerning the government's role in information policy is the tension between security and freedom of information. Legislation such as the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism (USA PATRIOT or USAPA) Act of 2001 is an example of security taking precedence over civil liberties. The USAPA affected several surveillance and privacy laws, including:

  • Wire Tapping (Title III), which requires probable cause for the real-time interception of voice and data communication.
  • The Electronic Communications Privacy Act (ECPA), which regulates government access to email and other electronic communications.
  • The Foreign Intelligence Surveillance Act (FISA), which authorizes the government to carry out electronic surveillance against any person, including Americans.

The USAPA was passed in October 2001, not long after 9/11 and without much contention from Congress. Civil liberties advocates argue that the changes made to the standing surveillance laws were made abruptly and without consideration of basic rights outlined in the U.S. Constitution, specifically Fourth Amendment protections against unreasonable search and seizure.

Research methods and tools

Rowlands (1996) identified five broad methodological strands that serve as current tools for the study of information policy:

Classification: This tool demonstrates the wide range of issues and subjects in information policy and helps researchers understand the breadth of the subject. The published materials are reasonably well documented and described, making it useful for initial literature reviews and research.

Identification of policy issues and options: This tool relies on inputs such as interviews and questionnaires targeting policy makers and other stakeholders. It is commonly used in studies of working policy makers in government or industry (Moore and Steele, 1991; McIntosh, 1990; Rowlands, 1997).

Reductionism: The reductionist approach controls factors to reduce ambiguity, constraining data collection, analysis, and interpretation within the framework of a specific discipline. It helps researchers see how a specific factor relates to the overall environment.

Forecasting and scenario-building: The most commonly used model is the STEEP framework. It helps reduce uncertainty about the topic under study.

Process-oriented research and case studies: These provide detailed contextual analyses of particular events, letting researchers experience the policy process in near-real situations and study its outcomes.

The future

Regarding its future, information policy should remain flexible, adapting to different circumstances as the ability to access, store, and share information grows. Galvin suggests that information policy might include setting a boundary to the uncertainty in this field. As information policy becomes a larger and more important topic, it will also become increasingly subject to governmental regulation concerning the future of technology. It will continue to draw on information science, communications, library science, and technology studies.

Information policies will bring further advantages at both the national and organizational levels, such as getting the best from the development of Web 2.0, drawing attention to the social and socio-technical aspects of systems, securing the preservation of digital content, producing information products, respecting all users, and making time for thinking respectable.

To achieve this, it will be important to focus not only at a domestic level but also internationally. Simply making domestic agencies cooperate internationally (and vice versa), though, will not be very successful. A single nation can take the lead in establishing communication-based relationships, specifically regarding the Internet. These relations will need to be established slowly and consistently in order to truly unify any kind of information policy and decision-making. If information policy can be established and guided on a semi-national level, the degree of communication and cooperation throughout the world will increase dramatically. As information policy continues to shape many aspects of society, these international relations will become vital (Harpham, 2011).

Information policy is playing a greater role in the economy, leading to the production of goods and services as well as their sale directly to consumers (UCLA, 2009). The cost structure of information differs from that of a tangible good: the initial cost of the first unit is large and fixed, but marginal costs thereafter are relatively low (MacInnes, 2011). The rise of information services parallels that of manufacturing several years ago (UCLA, 2009). The digitization of information allows businesses to make better-justified business decisions (MacInnes, 2011).

Objectivity (philosophy)

From Wikipedia, the free encyclopedia

In philosophy, objectivity is the concept of truth independent from individual subjectivity (bias caused by one's perception, emotions, or imagination). A proposition is considered to have objective truth when its truth conditions are met without bias caused by the mind of a sentient being. Scientific objectivity refers to the ability to judge without partiality or external influence. Objectivity in the moral framework calls for moral codes to be assessed based on the well-being of the people in the society that follow it. Moral objectivity also calls for moral codes to be compared to one another through a set of universal facts and not through subjectivity.

Objectivity of knowledge

Plato considered geometry a condition of idealism concerned with universal truth. In Republic, Socrates opposes the sophist Thrasymachus's relativistic account of justice, arguing that justice is mathematical in its conceptual structure and that ethics is therefore a precise and objective enterprise with impartial standards for truth and correctness, like geometry. The rigorous mathematical treatment Plato gave to moral concepts set the tone for the Western tradition of moral objectivism that came after him. His contrast between objectivity and opinion became the basis for philosophies intent on resolving the questions of reality, truth, and existence. He saw opinions as belonging to the shifting sphere of sensibilities, as opposed to a fixed, eternal and knowable incorporeality. Where Plato distinguished between how we know things and their ontological status, subjectivism such as George Berkeley's depends on perception. In Platonic terms, a criticism of subjectivism is that it is difficult to distinguish between knowledge, opinions, and subjective knowledge.

Platonic idealism is a form of metaphysical objectivism, holding that the ideas exist independently from the individual. Berkeley's empirical idealism, on the other hand, holds that things only exist as they are perceived. Both approaches boast an attempt at objectivity. Plato's definition of objectivity can be found in his epistemology, which is based on mathematics, and his metaphysics, where knowledge of the ontological status of objects and ideas is resistant to change.

In opposition to philosopher René Descartes' method of personal deduction, natural philosopher Isaac Newton applied the relatively objective scientific method to look for evidence before forming a hypothesis. Partially in response to Kant's rationalism, logician Gottlob Frege applied objectivity to his epistemological and metaphysical philosophies. If reality exists independently of consciousness, then it would logically include a plurality of indescribable forms. Objectivity requires a definition of truth formed by propositions with truth value. An attempt to form an objective construct incorporates ontological commitments to the reality of objects.

The importance of perception in evaluating and understanding objective reality is debated in the observer effect of quantum mechanics. Direct or naïve realists rely on perception as key in observing objective reality, while instrumentalists hold that observations are useful in predicting objective reality. The concepts that encompass these ideas are important in the philosophy of science. Philosophies of mind explore whether objectivity relies on perceptual constancy.

Objectivity in ethics

Ethical subjectivism

The term "ethical subjectivism" covers two distinct theories in ethics. According to cognitive versions of ethical subjectivism, the truth of moral statements depends upon people's values, attitudes, feelings, or beliefs. Some forms of cognitivist ethical subjectivism can be counted as forms of realism, others are forms of anti-realism. David Hume is a foundational figure for cognitive ethical subjectivism. On a standard interpretation of his theory, a trait of character counts as a moral virtue when it evokes a sentiment of approbation in a sympathetic, informed, and rational human observer. Similarly, Roderick Firth's ideal observer theory held that right acts are those that an impartial, rational observer would approve of. William James, another ethical subjectivist, held that an end is good (to or for a person) just in the case it is desired by that person (see also ethical egoism). According to non-cognitive versions of ethical subjectivism, such as emotivism, prescriptivism, and expressivism, ethical statements cannot be true or false, at all: rather, they are expressions of personal feelings or commands. For example, on A. J. Ayer's emotivism, the statement, "Murder is wrong" is equivalent in meaning to the emotive, "Murder, Boo!"

Ethical objectivism

According to the ethical objectivist, the truth or falsehood of typical moral judgments does not depend upon the beliefs or feelings of any person or group of persons. This view holds that moral propositions are analogous to propositions about chemistry, biology, or history, inasmuch as they are true despite what anyone believes, hopes, wishes, or feels. When they fail to describe this mind-independent moral reality, they are false, no matter what anyone believes, hopes, wishes, or feels.

There are many versions of ethical objectivism, including various religious views of morality, Platonistic intuitionism, Kantianism, utilitarianism, and certain forms of ethical egoism and contractualism. Note that Platonists define ethical objectivism in an even more narrow way, so that it requires the existence of intrinsic value. Consequently, they reject the idea that contractualists or egoists could be ethical objectivists. Objectivism, in turn, places primacy on the origin of the frame of reference—and, as such, considers any arbitrary frame of reference ultimately a form of ethical subjectivism by a transitive property, even when the frame incidentally coincides with reality and can be used for measurements.

Moral objectivism and relativism

Moral objectivism is the view that what is right or wrong does not depend on what anyone thinks is right or wrong. Moral objectivism depends on how the moral code affects the well-being of the people of the society, and it allows moral codes to be compared to each other through a set of universal facts rather than the mores of a society. Nicholas Rescher defines mores as customs within every society (e.g., what women can wear) and states that moral codes cannot be compared to one's personal moral compass. An example is the categorical imperative of Immanuel Kant: "Act only according to that maxim [i.e., rule] whereby you can at the same time will that it become a universal law." John Stuart Mill was a consequentialist thinker and therefore proposed utilitarianism, which asserts that in any situation the right thing to do is whatever is likely to produce the most happiness overall. Moral relativism is the view that a moral code is relative to an agent in their specific moral context; the rules within moral codes are equal to each other and are deemed "right" or "wrong" only within their specific moral codes. Relativism is opposed to universalism because there is no single moral code for every agent to follow, and it differs from nihilism because it validates every moral code that exists, whereas nihilism does not. On relativism, the Russian philosopher and writer Fyodor Dostoevsky coined the phrase "If God doesn't exist, everything is permissible", his view of the consequences of rejecting theism as a basis for ethics. American anthropologist Ruth Benedict argued that there is no single objective morality and that morality varies with culture.

Objectivity in History

History as a discipline has wrestled with notions of objectivity from its very beginning. While its object of study is commonly thought to be the past, the only things historians have to work with are different versions of stories based on individual perceptions of reality and memory.

Several streams of history developed to devise ways of solving this dilemma. Historians like Leopold von Ranke (19th century) advocated the use of extensive evidence, especially archived physical paper documents, to recover the bygone past, claiming that, as opposed to people's memories, objects remain stable in what they say about the era they witnessed, and therefore represent a better insight into objective reality. In the 20th century, the Annales school emphasized the importance of shifting focus away from the perspectives of influential men (usually politicians, around whose actions narratives of the past were shaped) and putting it on the voices of ordinary people. Postcolonial streams of history challenge the colonial-postcolonial dichotomy and critique Eurocentric academic practices, such as the demand that historians from colonized regions anchor their local narratives to events happening in the territories of their colonizers in order to earn credibility. All of the streams described above try to uncover whose voice is more or less truth-bearing and how historians can stitch together versions of it to best explain what "actually happened."

The anthropologist Michel-Rolph Trouillot developed the concepts of historicity 1 and 2 to explain the difference between the materiality of socio-historical processes (H1) and the narratives that are told about the materiality of socio-historical processes (H2). This distinction hints that H1 would be understood as the factual reality that elapses and is captured with the concept of "objective truth", and that H2 is the collection of subjectivities that humanity has stitched together to grasp the past. Debates about positivism, relativism, and postmodernism are relevant to evaluating these concepts' importance and the distinction between them.

Ethical considerations

In his book "Silencing the past", Trouillot wrote about the power dynamics at play in history-making, outlining four possible moments in which historical silences can be created: (1) making of sources (who gets to know how to write, or to have possessions that are later examined as historical evidence), (2) making of archives (what documents are deemed important to save and which are not, how to classify materials, and how to order them within physical or digital archives), (3) making of narratives (which accounts of history are consulted, which voices are given credibility), and (4) the making of history (the retrospective construction of what The Past is).

Because history (official, public, familial, personal) informs current perceptions and how we make sense of the present, whose voice gets to be included in it (and how) has direct consequences for material socio-historical processes. Labeling current historical narratives as "objective", and thereby treating them as impartial depictions of the totality of events that unfolded in the past, risks sealing off historical understanding. Acknowledging that history is never objective and always incomplete offers a meaningful opportunity to support social justice efforts. Under that notion, voices that have been silenced are placed on an equal footing with the grand and popular narratives of the world, appreciated for their unique insight into reality through their subjective lens.

Quantum-cascade laser

From Wikipedia, the free encyclopedia

Quantum-cascade lasers (QCLs) are semiconductor lasers that emit in the mid- to far-infrared portion of the electromagnetic spectrum and were first demonstrated by Jérôme Faist, Federico Capasso, Deborah Sivco, Carlo Sirtori, Albert Hutchinson, and Alfred Cho at Bell Laboratories in 1994.

Unlike typical interband semiconductor lasers that emit electromagnetic radiation through the recombination of electron–hole pairs across the material band gap, QCLs are unipolar, and laser emission is achieved through the use of intersubband transitions in a repeated stack of semiconductor multiple quantum well heterostructures, an idea first proposed in the article "Possibility of amplification of electromagnetic waves in a semiconductor with a superlattice" by R. F. Kazarinov and R. A. Suris in 1971.

Intersubband vs. interband transitions

Interband transitions in conventional semiconductor lasers emit a single photon.

Within a bulk semiconductor crystal, electrons may occupy states in one of two continuous energy bands: the valence band, which is heavily populated with low-energy electrons, and the conduction band, which is sparsely populated with high-energy electrons. The two bands are separated by an energy band gap in which there are no permitted states available for electrons to occupy. Conventional semiconductor laser diodes generate light when a high-energy electron in the conduction band recombines with a hole in the valence band, emitting a single photon. The energy of the photon, and hence the emission wavelength of a laser diode, is therefore determined by the band gap of the material system used.
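As a back-of-envelope illustration of that last point, the sketch below (Python) converts a band gap $E_g$ into an emission wavelength via $\lambda = hc/E_g$; the band-gap values are common textbook figures used here as assumptions, not data from this article.

# Interband emission wavelength from a band gap: lambda = h*c / E_g.
# Band-gap values below are illustrative textbook figures (assumed).
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def emission_wavelength_um(e_gap_ev):
    """Return the emission wavelength in micrometres for a band gap in eV."""
    return H * C / (e_gap_ev * EV) * 1e6

for name, gap_ev in [("GaAs", 1.42), ("InP", 1.34)]:
    print(f"{name}: Eg = {gap_ev} eV -> lambda ~ {emission_wavelength_um(gap_ev):.2f} um")

Running the same arithmetic backwards, a mid-infrared wavelength of 5 μm would require a band gap of only about 0.25 eV, which is why conventional interband diodes struggle in this range.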

A QCL however does not use bulk semiconductor materials in its optically active region. Instead, it consists of a periodic series of thin layers of varying material composition forming a superlattice. The superlattice introduces a varying electric potential across the length of the device, meaning that there is a varying probability of electrons occupying different positions over the length of the device. This is referred to as one-dimensional multiple quantum well confinement and leads to the splitting of the band of permitted energies into a number of discrete electronic subbands. By suitable design of the layer thicknesses it is possible to engineer a population inversion between two subbands in the system which is required in order to achieve laser emission. Because the position of the energy levels in the system is primarily determined by the layer thicknesses and not the material, it is possible to tune the emission wavelength of QCLs over a wide range in the same material system.
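To see how layer thickness alone can tune the transition, here is a toy calculation (Python) using the textbook infinite-square-well formula $E_n = n^2 h^2 / (8 m^* L^2)$. The effective mass and well widths are illustrative assumptions, and real QCL designs solve the Schrödinger equation for the full coupled-well potential rather than a single isolated well.

# Toy model: infinite-well subband energies E_n = n^2 h^2 / (8 m* L^2).
# Shows that the subband spacing (hence the emission wavelength) is set
# by the layer thickness L, not by the band gap. Values are illustrative.
H = 6.626e-34    # Planck constant, J*s
M0 = 9.109e-31   # free-electron mass, kg
EV = 1.602e-19   # joules per electronvolt

def subband_energy_mev(n, well_nm, m_eff=0.043):
    """E_n in meV for an infinite well of width well_nm (m_eff in units of m0)."""
    L = well_nm * 1e-9
    return n**2 * H**2 / (8 * m_eff * M0 * L**2) / EV * 1e3

for well_nm in (5.0, 7.0, 10.0):
    e21 = subband_energy_mev(2, well_nm) - subband_energy_mev(1, well_nm)
    print(f"L = {well_nm:4.1f} nm -> E2 - E1 = {e21:5.0f} meV "
          f"(lambda ~ {1239.84 / e21:.1f} um)")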

In quantum cascade structures, electrons undergo intersubband transitions and photons are emitted. The electrons tunnel to the next period of the structure and the process repeats.

Additionally, in semiconductor laser diodes, electrons and holes are annihilated after recombining across the band gap and can play no further part in photon generation. However, in a unipolar QCL, once an electron has undergone an intersubband transition and emitted a photon in one period of the superlattice, it can tunnel into the next period of the structure where another photon can be emitted. This process of a single electron causing the emission of multiple photons as it traverses through the QCL structure gives rise to the name cascade and makes a quantum efficiency of greater than unity possible which leads to higher output powers than semiconductor laser diodes.
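The cascade arithmetic behind this "quantum efficiency greater than unity" claim is simple; the following sketch (Python, with made-up drive current, photon energy, and period count) gives an idealized upper bound that ignores all losses.

# Cascade effect: ideally, one electron emits one photon in each of the
# N periods it traverses, so photon flux = (I/q) * n_periods.
# All parameter values below are illustrative assumptions.
Q = 1.602e-19  # elementary charge, C

def ideal_optical_power_mw(current_a, photon_ev, n_periods):
    """Loss-free upper bound on optical power, in mW."""
    photons_per_s = (current_a / Q) * n_periods
    return photons_per_s * (photon_ev * Q) * 1e3

# 0.5 A drive, 0.25 eV photons (~5 um emission), 30 cascade periods:
print(f"~{ideal_optical_power_mw(0.5, 0.25, 30):.0f} mW upper bound")

Under the same idealization, an interband diode would top out at one photon per electron, a factor of n_periods lower.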

Operating principles

Rate equations

Subband populations are determined by the intersubband scattering rates and the injection/extraction current.

QCLs are typically based upon a three-level system. Assuming the formation of the wavefunctions is a fast process compared to the scattering between states, the time-independent solutions to the Schrödinger equation may be applied and the system can be modelled using rate equations. Each subband contains a number of electrons $n_i$ (where $i$ is the subband index) which scatter between levels with a lifetime $\tau_{if}$ (reciprocal of the average intersubband scattering rate $W_{if}$), where $i$ and $f$ are the initial and final subband indices. Assuming that no other subbands are populated, the rate equations for the three-level laser are given by:

$$\frac{dn_3}{dt} = I_{in} + n_2 W_{23} - n_3 W_{31} - n_3 W_{32}$$
$$\frac{dn_2}{dt} = n_3 W_{32} + n_1 W_{12} - n_2 W_{21} - n_2 W_{23}$$
$$\frac{dn_1}{dt} = n_3 W_{31} + n_2 W_{21} - n_1 W_{12} - I_{out}$$

In the steady state, the time derivatives are equal to zero and $I_{in} = I_{out} = I$. The general rate equation for electrons in subband $i$ of an $N$-level system is therefore:

$$\frac{dn_i}{dt} = I_{in}\,\delta_{iN} + \sum_{j \neq i} n_j W_{ji} - n_i \sum_{j \neq i} W_{ij} - I_{out}\,\delta_{i1}$$

Under the assumption that absorption processes can be ignored (i.e. $n_1 W_{12} = n_2 W_{23} = 0$, valid at low temperatures), the middle rate equation gives

$$n_3 W_{32} = n_2 W_{21}$$

Therefore, if $W_{32} < W_{21}$ (i.e. $\tau_{32} > \tau_{21}$) then $n_3 > n_2$ and a population inversion will exist. The population ratio is defined as

$$\frac{n_3}{n_2} = \frac{W_{21}}{W_{32}} = \frac{\tau_{32}}{\tau_{21}}$$

If all $N$ steady-state rate equations are summed, the right-hand side becomes zero, meaning that the system is underdetermined, and it is possible only to find the relative population of each subband. If the total sheet density of carriers in the system, $N_{2D}$, is also known, then the absolute population of carriers in each subband may be determined using:

$$\sum_{i=1}^{N} n_i = N_{2D}$$

As an approximation, it can be assumed that all the carriers in the system are supplied by doping. If the dopant species has a negligible ionisation energy then $N_{2D}$ is approximately equal to the doping density.
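A minimal numerical check of the steady-state algebra above, in Python with arbitrary illustrative lifetimes (the absorption terms are dropped, as in the low-temperature assumption); absolute populations would additionally require the sheet-density constraint.

# Steady-state populations of the three-level rate equations above,
# ignoring absorption (n1*W12 = n2*W23 = 0). Lifetimes are illustrative.
tau_32, tau_31, tau_21 = 2.0e-12, 5.0e-12, 0.3e-12  # seconds (assumed)
W32, W31, W21 = 1 / tau_32, 1 / tau_31, 1 / tau_21
I = 1.0e27  # injection flux (arbitrary units)

n3 = I / (W31 + W32)   # from dn3/dt = I - n3*(W31 + W32) = 0
n2 = n3 * W32 / W21    # from dn2/dt = n3*W32 - n2*W21 = 0

print(f"n3/n2 = {n3/n2:.2f} (expected tau_32/tau_21 = {tau_32/tau_21:.2f})")
print("population inversion" if n3 > n2 else "no population inversion")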

Electron wave functions are repeated in each period of a three quantum well QCL active region. The upper laser level is shown in bold.

Active region designs

The scattering rates are tailored by suitable design of the layer thicknesses in the superlattice which determine the electron wave functions of the subbands. The scattering rate between two subbands is heavily dependent upon the overlap of the wave functions and energy spacing between the subbands. The figure shows the wave functions in a three quantum well (3QW) QCL active region and injector.

In order to decrease $W_{32}$, the overlap of the upper and lower laser levels is reduced. This is often achieved by designing the layer thicknesses such that the upper laser level is mostly localised in the left-hand well of the 3QW active region, while the lower laser level wave function is made to reside mostly in the central and right-hand wells. This is known as a diagonal transition. A vertical transition is one in which the upper laser level is localised mainly in the central and right-hand wells. This increases the overlap and hence $W_{32}$, which reduces the population inversion but increases the strength of the radiative transition and therefore the gain.

In order to increase $W_{21}$, the lower laser level and the ground level wave functions are designed to have a good overlap, and to increase $W_{21}$ further, the energy spacing between the two subbands is designed to equal the longitudinal optical (LO) phonon energy (~36 meV in GaAs), so that resonant LO phonon-electron scattering can quickly depopulate the lower laser level.
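As a rough check on what "spacing equal to the LO phonon energy" implies for layer dimensions, the infinite-well toy model from earlier can be inverted (Python; the GaAs effective mass is a standard textbook value, and the single-well picture is an assumption since real active regions use coupled wells).

# Back-of-envelope: single infinite well whose E2 - E1 = 3*h^2/(8 m* L^2)
# matches the GaAs LO phonon energy (~36 meV). Order-of-magnitude only.
from math import sqrt

H = 6.626e-34    # Planck constant, J*s
M0 = 9.109e-31   # free-electron mass, kg
EV = 1.602e-19   # joules per electronvolt

def well_width_nm(delta_e_mev, m_eff=0.067):
    """Well width whose E2 - E1 spacing equals delta_e_mev (GaAs m* = 0.067 m0)."""
    de_j = delta_e_mev * 1e-3 * EV
    return sqrt(3 * H**2 / (8 * m_eff * M0 * de_j)) * 1e9

print(f"L ~ {well_width_nm(36.0):.0f} nm")  # ~22 nm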

Material systems

The first QCL was fabricated in the GaInAs/AlInAs material system lattice-matched to an InP substrate. This particular material system has a conduction band offset (quantum well depth) of 520 meV. These InP-based devices have reached very high levels of performance across the mid-infrared spectral range, achieving high-power, above-room-temperature, continuous-wave emission.

In 1998 GaAs/AlGaAs QCLs were demonstrated by Sirtori et al. proving that the QC concept is not restricted to one material system. This material system has a varying quantum well depth depending on the aluminium fraction in the barriers. Although GaAs-based QCLs have not matched the performance levels of InP-based QCLs in the mid-infrared, they have proven to be very successful in the terahertz region of the spectrum.

The short wavelength limit of QCLs is determined by the depth of the quantum well and recently QCLs have been developed in material systems with very deep quantum wells in order to achieve short wavelength emission. The InGaAs/AlAsSb material system has quantum wells 1.6 eV deep and has been used to fabricate QCLs emitting at 3.05 μm. InAs/AlSb QCLs have quantum wells 2.1 eV deep and electroluminescence at wavelengths as short as 2.5 μm has been observed.

The InAs/AlSb couple is the most recent QCL material family compared to alloys grown on InP and GaAs substrates. The main advantage of the InAs/AlSb material system is the small effective electron mass in its quantum wells, which favors high intersubband gain. This benefit is best exploited in long-wavelength QCLs, where the lasing transition levels are close to the bottom of the conduction band and the effect of nonparabolicity is weak. InAs-based QCLs have demonstrated room-temperature (RT) continuous-wave (CW) operation at long wavelengths with low pulsed threshold current densities, and low threshold current densities have also been achieved in InAs-based QCLs emitting in other spectral regions, including devices grown directly on InAs. The thresholds obtained are lower than those of the best reported InP-based QCLs to date without facet treatment.

QCLs may also allow laser operation in materials traditionally considered to have poor optical properties. Indirect bandgap materials such as silicon have minimum electron and hole energies at different momentum values. For interband optical transitions, carriers change momentum through a slow, intermediate scattering process, dramatically reducing the optical emission intensity. Intersubband optical transitions, however, are independent of the relative momentum of conduction band and valence band minima and theoretical proposals for Si/SiGe quantum cascade emitters have been made. Intersubband electroluminescence from non-polar SiGe heterostructures has been observed for mid-infrared and far-infrared wavelengths, both in the valence and conduction band.

Emission wavelengths

QCLs currently cover the wavelength range from 2.63 μm to 250 μm (extending to 355 μm with the application of a magnetic field).

Optical waveguides

End view of QC facet with ridge waveguide. Darker gray: InP, lighter gray: QC layers, black: dielectric, gold: Au coating. Ridge ~10 μm wide.

End view of QC facet with buried heterostructure waveguide. Darker gray: InP, lighter gray: QC layers, black: dielectric. Heterostructure ~10 μm wide.

The first step in processing quantum cascade gain material to make a useful light-emitting device is to confine the gain medium in an optical waveguide. This makes it possible to direct the emitted light into a collimated beam, and allows a laser resonator to be built such that light can be coupled back into the gain medium.

Two types of optical waveguides are in common use. A ridge waveguide is created by etching parallel trenches in the quantum cascade gain material to leave an isolated stripe of QC material, typically ~10 μm wide and several mm long. A dielectric material is typically deposited in the trenches to guide injected current into the ridge, and the entire ridge is then usually coated with gold to provide electrical contact and to help remove heat from the ridge when it is producing light. Light is emitted from the cleaved ends of the waveguide, with an active area that is typically only a few micrometres across.

The second waveguide type is a buried heterostructure. Here, the QC material is also etched to produce an isolated ridge, but then new semiconductor material is grown over the ridge. The change in index of refraction between the QC material and the overgrown material is sufficient to create a waveguide. Dielectric material is also deposited on the overgrown material around the QC ridge to guide the injected current into the QC gain medium. Buried heterostructure waveguides are efficient at removing heat from the QC active area when light is being produced.

Laser types

Although the quantum cascade gain medium can be used to produce incoherent light in a superluminescent configuration, it is most commonly used in combination with an optical cavity to form a laser.

Fabry–Perot lasers

This is the simplest of the quantum cascade lasers. An optical waveguide is first fabricated out of the quantum cascade material to form the gain medium. The ends of the crystalline semiconductor device are then cleaved to form two parallel mirrors on either end of the waveguide, thus forming a Fabry–Pérot resonator. The residual reflectivity on the cleaved facets from the semiconductor-to-air interface is sufficient to create a resonator. Fabry–Pérot quantum cascade lasers are capable of producing high powers, but are typically multi-mode at higher operating currents. The wavelength can be changed chiefly by changing the temperature of the QC device.
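The "residual reflectivity is sufficient" claim is easy to quantify with the normal-incidence Fresnel formula; a sketch in Python follows, where the effective index and cavity length are assumed, typical-order values rather than figures from this article.

# Cleaved-facet reflectivity R = ((n-1)/(n+1))^2 and Fabry-Perot mode
# spacing c/(2 n L). Index and cavity length are illustrative assumptions.
C = 2.998e8  # speed of light, m/s

def facet_reflectivity(n):
    """Normal-incidence Fresnel reflectivity of a semiconductor-air facet."""
    return ((n - 1) / (n + 1)) ** 2

def mode_spacing_ghz(n, length_mm):
    """Longitudinal mode spacing in GHz for a cavity of length length_mm."""
    return C / (2 * n * length_mm * 1e-3) / 1e9

n_eff = 3.2  # assumed effective index, typical of InP-based material
print(f"R ~ {facet_reflectivity(n_eff):.2f}")                    # ~0.27
print(f"mode spacing ~ {mode_spacing_ghz(n_eff, 3.0):.0f} GHz")  # 3 mm cavity

A reflectivity of roughly 0.27 per facet is indeed enough feedback for a high-gain medium to lase without any coating.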

Distributed feedback lasers

A distributed feedback (DFB) quantum cascade laser is similar to a Fabry–Pérot laser, except for a distributed Bragg reflector (DBR) built on top of the waveguide to prevent it from emitting at other than the desired wavelength. This forces single mode operation of the laser, even at higher operating currents. DFB lasers can be tuned chiefly by changing the temperature, although an interesting variant on tuning can be obtained by pulsing a DFB laser. In this mode, the wavelength of the laser is rapidly "chirped" during the course of the pulse, allowing rapid scanning of a spectral region.
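The grating geometry implied here follows from the first-order Bragg condition $\lambda_B = 2 n_{eff} \Lambda$; a one-line sketch (Python, with an assumed effective index and ignoring dispersion):

# First-order DFB Bragg condition: lambda_B = 2 * n_eff * grating_period.
# n_eff is an assumed typical value; real designs account for dispersion.
def grating_period_um(lambda_um, n_eff=3.2):
    """Grating period that places the Bragg wavelength at lambda_um."""
    return lambda_um / (2 * n_eff)

print(f"Period for 5 um emission: {grating_period_um(5.0):.2f} um")  # ~0.78 um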

External cavity lasers

Schematic of QC device in external cavity with frequency selective optical feedback provided by diffraction grating in Littrow configuration.

In an external cavity (EC) quantum cascade laser, the quantum cascade device serves as the laser gain medium. One, or both, of the waveguide facets has an anti-reflection coating that defeats the optical cavity action of the cleaved facets. Mirrors are then arranged in a configuration external to the QC device to create the optical cavity.

If a frequency-selective element is included in the external cavity, it is possible to reduce the laser emission to a single wavelength, and even tune the radiation. For example, diffraction gratings have been used to create a tunable laser that can tune over 15% of its center wavelength.

Extended tuning devices

Several methods exist to extend the tuning range of quantum cascade lasers using only monolithically integrated elements. Integrated heaters can extend the tuning range at a fixed operating temperature to 0.7% of the central wavelength, and superstructure gratings operating through the Vernier effect can extend it to 4% of the central wavelength, compared with <0.1% for a standard DFB device.
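Translating these fractional tuning ranges into absolute numbers for a hypothetical 5 μm centre wavelength (Python; the centre wavelength is an assumption, while the fractions are the ones quoted in this section and the external-cavity section above):

# Fractional tuning ranges quoted in the text, expressed in nanometres
# for an assumed 5 um (5000 nm) centre wavelength.
CENTER_NM = 5000.0  # assumed centre wavelength

for method, fraction in [
    ("standard DFB (<0.1%)",          0.001),
    ("integrated heater (0.7%)",      0.007),
    ("Vernier superstructure (4%)",   0.04),
    ("external cavity grating (15%)", 0.15),
]:
    print(f"{method:32s} ~ {fraction * CENTER_NM:5.0f} nm")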

Growth

The alternating layers of the two different semiconductors which form the quantum heterostructure may be grown on to a substrate using a variety of methods such as molecular beam epitaxy (MBE) or metalorganic vapour phase epitaxy (MOVPE), also known as metalorganic chemical vapor deposition (MOCVD).

Applications

Fabry-Perot (FP) quantum cascade lasers were first commercialized in 1998, distributed feedback (DFB) devices were first commercialized in 2004, and broadly tunable external cavity quantum cascade lasers were first commercialized in 2006. The high optical power output, tuning range, and room-temperature operation make QCLs useful for spectroscopic applications such as remote sensing of environmental gases and pollutants in the atmosphere, and for security applications. They may eventually be used for vehicular cruise control in conditions of poor visibility, collision avoidance radar, industrial process control, and medical diagnostics such as breath analyzers. QCLs are also used to study plasma chemistry.

When used in multiple-laser systems, intrapulse QCL spectroscopy offers broadband spectral coverage that can potentially be used to identify and quantify complex heavy molecules such as those in toxic chemicals, explosives, and drugs.

In fiction

The video game Star Citizen imagines external-cavity quantum cascade lasers as high-power weapons.

Climate change and poverty

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Climate_change_and_poverty ...