
Sunday, October 29, 2023

Digital imaging

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Digital_imaging

Digital imaging or digital image acquisition is the creation of a digital representation of the visual characteristics of an object, such as a physical scene or the interior structure of an object. The term is often assumed to imply or include the processing, compression, storage, printing and display of such images. A key advantage of a digital image, versus an analog image such as a film photograph, is the ability to digitally propagate copies of the original subject indefinitely without any loss of image quality.

Digital imaging can be classified by the type of electromagnetic radiation or other waves whose variable attenuation, as they pass through or reflect off objects, conveys the information that constitutes the image. In all classes of digital imaging, the information is converted by image sensors into digital signals that are processed by a computer and output as a visible-light image. For example, the medium of visible light allows digital photography (including digital videography) with various kinds of digital cameras (including digital video cameras). X-rays allow digital X-ray imaging (digital radiography, fluoroscopy, and CT), and gamma rays allow digital gamma ray imaging (digital scintigraphy, SPECT, and PET). Sound allows ultrasonography (such as medical ultrasonography) and sonar, and radio waves allow radar. Digital imaging lends itself well to image analysis by software, as well as to image editing (including image manipulation).

History

Before digital imaging, the first photograph ever produced, View from the Window at Le Gras, was taken in 1826 by the Frenchman Joseph Nicéphore Niépce. When Niépce was 28, he and his brother Claude discussed the possibility of reproducing images with light. His work on these experiments began in 1816, although at the time he was more interested in developing an engine for boats. The brothers pursued that project for some time, and Claude moved to England to promote the invention, which left Joseph free to concentrate on photography. In 1826 he finally produced his first photograph, a view through his window, which required eight hours or more of exposure to light.

The first digital image was produced in 1920, by the Bartlane cable picture transmission system. The British inventors Harry G. Bartholomew and Maynard D. McFarlane developed this method. The process consisted of "a series of negatives on zinc plates that were exposed for varying lengths of time, thus producing varying densities". The Bartlane system generated a punched data card or tape at both its transmitter and its receiver, which was then recreated as an image.

In 1957, Russell A. Kirsch produced a device that generated digital data that could be stored in a computer; this used a drum scanner and photomultiplier tube.

Digital imaging was developed in the 1960s and 1970s, largely to avoid the operational weaknesses of film cameras, for scientific and military missions including the KH-11 program. As digital technology became cheaper in later decades, it replaced the old film methods for many purposes.

In the early 1960s, while developing compact, lightweight, portable equipment for the onboard nondestructive testing of naval aircraft, Frederick G. Weighart and James F. McNulty (a U.S. radio engineer) at Automation Industries, Inc., then in El Segundo, California, co-invented the first apparatus to generate a digital image in real time, a fluoroscopic digital radiograph. Square-wave signals were detected on the fluorescent screen of a fluoroscope to create the image.

Digital image sensors

The charge-coupled device was invented by Willard S. Boyle and George E. Smith at Bell Labs in 1969. While researching MOS technology, they realized that an electric charge was the analog of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. The CCD is a semiconductor circuit that was later used in the first digital video cameras for television broadcasting.

Early CCD sensors suffered from shutter lag. This was largely resolved with the invention of the pinned photodiode (PPD). It was invented by Nobukazu Teranishi, Hiromitsu Shiraki and Yasuo Ishihara at NEC in 1980. It was a photodetector structure with low lag, low noise, high quantum efficiency and low dark current. In 1987, the PPD began to be incorporated into most CCD devices, becoming a fixture in consumer electronic video cameras and then digital still cameras. Since then, the PPD has been used in nearly all CCD sensors and then CMOS sensors.

The NMOS active-pixel sensor (APS) was invented by Olympus in Japan during the mid-1980s. This was enabled by advances in MOS semiconductor device fabrication, with MOSFET scaling reaching smaller micron and then sub-micron levels. The NMOS APS was fabricated by Tsutomu Nakamura's team at Olympus in 1985. The CMOS active-pixel sensor (CMOS sensor) was later developed by Eric Fossum's team at the NASA Jet Propulsion Laboratory in 1993. By 2007, sales of CMOS sensors had surpassed CCD sensors.

Digital image compression

An important development in digital image compression technology was the discrete cosine transform (DCT). DCT compression is used in JPEG, which was introduced by the Joint Photographic Experts Group in 1992. JPEG compresses images down to much smaller file sizes, and has become the most widely used image file format on the Internet.
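
To make the idea behind DCT-based compression concrete, here is a rough Python sketch (not the real JPEG pipeline, which also involves chroma subsampling, zig-zag ordering and entropy coding): it applies a 2D DCT to a random 8×8 block, drops the smallest coefficients, and inverts the transform. The block contents and the fraction of coefficients kept are arbitrary choices for illustration.

# Toy sketch of DCT-based block compression (illustrative only, not real JPEG).
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)  # stand-in for an 8x8 image block

coeffs = dctn(block, norm="ortho")            # forward 2D discrete cosine transform
threshold = np.quantile(np.abs(coeffs), 0.75)
coeffs[np.abs(coeffs) < threshold] = 0        # crude "quantization": drop the smallest coefficients

reconstructed = idctn(coeffs, norm="ortho")   # inverse 2D DCT
error = np.abs(block - reconstructed).mean()
print(f"kept {np.count_nonzero(coeffs)} of 64 coefficients, mean abs error = {error:.1f}")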

Digital cameras

These different scanning ideas were the basis of the first digital camera designs. Early cameras took a long time to capture an image and were poorly suited for consumer purposes. It was not until the adoption of the CCD (charge-coupled device) that the digital camera really took off. The CCD became part of the imaging systems used in telescopes and in the first black-and-white digital cameras of the 1980s. Color was eventually added to the CCD and is a standard feature of cameras today.

Changing environment

Great strides have been made in the field of digital imaging. Negatives and exposure are foreign concepts to many, and the first digital image in 1920 led eventually to cheaper equipment, increasingly powerful yet simple software, and the growth of the Internet.

The constant advancement and production of physical equipment and hardware related to digital imaging has affected the environment surrounding the field. From cameras and webcams to printers and scanners, the hardware is becoming sleeker, thinner, faster, and cheaper. As the cost of equipment decreases, the market for new enthusiasts widens, allowing more consumers to experience the thrill of creating their own images.

Everyday personal laptops, family desktops, and company computers are able to handle photographic software. Our computers are more powerful machines with increasing capacities for running programs of any kind—especially digital imaging software. And that software is quickly becoming both smarter and simpler. Although functions on today's programs reach the level of precise editing and even rendering 3-D images, user interfaces are designed to be friendly to advanced users as well as first-time fans.

The Internet allows editing, viewing, and sharing digital photos and graphics. A quick browse around the web can easily turn up graphic artwork from budding artists, news photos from around the world, corporate images of new products and services, and much more. The Internet has clearly proven itself a catalyst in fostering the growth of digital imaging.

Online photo sharing of images changes the way we understand photography and photographers. Online sites such as Flickr, Shutterfly, and Instagram give billions the capability to share their photography, whether they are amateurs or professionals. Photography has gone from being a luxury medium of communication and sharing to more of a fleeting moment in time. Subjects have also changed. Pictures used to be primarily taken of people and family. Now, we take them of anything. We can document our day and share it with everyone with the touch of our fingers.

In 1826 Niépce became the first to produce a photograph that used light to reproduce an image, and photography has advanced drastically since then. Everyone is now a photographer in their own way, whereas during the 1800s and early 1900s lasting photographs were an expense highly valued by consumers and producers alike. According to a magazine article on five ways the digital camera changed us: "The impact on professional photographers has been dramatic. Once upon a time a photographer wouldn't dare waste a shot unless they were virtually certain it would work." The use of digital imaging (photography) has changed the way we interact with our environment. Part of the world is now experienced through visual images and lasting memories, and photography has become a new form of communication with friends, family and loved ones around the world that does not require face-to-face interaction. Through photography it is easy to see people you have never met and to feel their presence without them being around; Instagram, for example, is a form of social media where anyone can shoot, edit, and share photos of whatever they want with friends and family. Facebook, Snapchat, Vine and Twitter also let people express themselves with few or no words and capture every moment that matters to them. Lasting memories that were once hard to capture are now easy to record, because anyone can take pictures and edit them on a phone or laptop. Photography has become a new way to communicate, and its use continues to grow rapidly, reshaping the world around us.

A study by Basey, Maines, Francis, and Melbourne found that drawings used in class have a significant negative effect on lower-order content in students' lab reports, on their perspectives of labs, on excitement, and on the time efficiency of learning, while documentation-style learning has no significant effects on students in these areas. They also found that students were more motivated and excited to learn when using digital imaging.

Field advancements

In the field of education:

  • As digital projectors, screens, and graphics find their way into the classroom, teachers and students alike are benefitting from the increased convenience and communication they provide, although their theft can be a common problem in schools. In addition, acquiring a basic digital imaging education is becoming increasingly important for young professionals. Reed, a design production expert from Western Washington University, stressed the importance of using "digital concepts to familiarize students with the exciting and rewarding technologies found in one of the major industries of the 21st century".

In the field of medical imaging:

  • Medical imaging, a branch of digital imaging that seeks to assist in the diagnosis and treatment of diseases, is growing at a rapid rate. A recent study by the American Academy of Pediatrics suggests that proper imaging of children who may have appendicitis may reduce the number of appendectomies needed. Further advancements include amazingly detailed and accurate imaging of the brain, lungs, tendons, and other parts of the body – images that can be used by health professionals to better serve patients.
  • According to Vidar, as more countries take on this new way of capturing an image, it has been found that image digitalization in medicine has been increasingly beneficial for both patients and medical staff. Positive ramifications of going paperless and heading towards digitization include the overall reduction of cost in medical care, as well as increased global, real-time accessibility of these images. (http://www.vidar.com/film/images/stories/PDFs/newsroom/Digital%20Transition%20White%20Paper%20hi-res%20GFIN.pdf)
  • There is a standard called Digital Imaging and Communications in Medicine (DICOM) that is changing the medical world as we know it. DICOM is not only a system for taking high-quality images of the aforementioned internal organs, but is also helpful in processing those images. It is a universal standard that incorporates image processing, sharing, and analysis for the convenience of patient comfort and understanding. This service is all-encompassing and is becoming a necessity (a small reading sketch follows below this list).
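
As a minimal sketch of working with DICOM data, assuming the third-party pydicom package (plus NumPy) is installed and that a file exists at the placeholder path shown, one might read an image and its metadata like this:

# Minimal sketch of reading a DICOM file with the pydicom package (file path is a placeholder).
import pydicom

ds = pydicom.dcmread("study/slice_001.dcm")           # parse the DICOM data set

print("Modality:   ", ds.get("Modality", "unknown"))   # e.g. CT, MR, US
print("Patient ID: ", ds.get("PatientID", "unknown"))
print("Image size: ", ds.get("Rows"), "x", ds.get("Columns"))

pixels = ds.pixel_array                                # image data as a NumPy array
print("Pixel value range:", pixels.min(), "-", pixels.max())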

In the field of technology, digital image processing has become more useful than analog image processing when considering the modern technological advancement.

  • Image sharpening and restoration
    • Image sharpening and restoration is the process of taking images captured by a modern camera and improving them, or manipulating them to achieve a desired result. It includes zooming, blurring, sharpening, grayscale-to-color conversion, image recovery, and image recognition (see the sketch after this list).
  • Facial recognition
    • Face recognition is a computer technique that determines the positions and sizes of human faces in arbitrary digital images. It detects facial features and ignores everything else, such as buildings, trees, and bodies.
  • Remote sensing
    • Remote sensing is the small- or large-scale acquisition of information about an object or phenomenon using recording or real-time sensing devices that are not in physical or close contact with the object. In practice, remote sensing is the stand-off collection of data about a particular object or location using a variety of devices.
  • Pattern recognition
    • Pattern recognition is a field of study that builds on image processing. In pattern recognition, image processing is used to identify elements in images, and machine learning is then used to train a system on variations in the pattern. Pattern recognition is used in computer-aided diagnosis, handwriting recognition, image identification, and many other applications.
  • Color processing
    • Color processing covers the processing of color images and the different color spaces that are used. It also involves the study of transmitting, storing, and encoding color images.
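
A minimal sketch of the first item above – sharpening, blurring, zooming and grayscale conversion – using the Pillow library; the input file name is a placeholder and the specific filters are arbitrary illustrations rather than a recommended pipeline.

# Basic image manipulation sketch using Pillow (illustrative choices only).
from PIL import Image, ImageFilter

img = Image.open("example.jpg")                       # placeholder file name

sharpened = img.filter(ImageFilter.SHARPEN)           # simple sharpening kernel
blurred = img.filter(ImageFilter.GaussianBlur(2))     # Gaussian blur, radius 2 px
grayscale = img.convert("L")                          # color-to-grayscale translation
zoomed = img.resize((img.width * 2, img.height * 2))  # 2x "zoom" by resampling

for name, out in [("sharpened", sharpened), ("blurred", blurred),
                  ("grayscale", grayscale), ("zoomed", zoomed)]:
    out.save(f"example_{name}.png")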

Augmented reality

Digital Imaging for Augmented Reality (DIAR) is a comprehensive field within the broader context of Augmented Reality (AR) technologies. It involves the creation, manipulation, and interpretation of digital images for use in augmented reality environments. DIAR plays a significant role in enhancing the user experience, providing realistic overlays of digital information onto the real world, thereby bridging the gap between the physical and the virtual realms.

DIAR is employed in numerous sectors including entertainment, education, healthcare, military, and retail. In entertainment, DIAR is used to create immersive gaming experiences and interactive movies. In education, it provides a more engaging learning environment, while in healthcare, it assists in complex surgical procedures. The military uses DIAR for training purposes and battlefield visualization. In retail, customers can virtually try on clothes or visualize furniture in their home before making a purchase.

With continuous advancements in technology, the future of DIAR is expected to witness more realistic overlays, improved 3D object modeling, and seamless integration with the Internet of Things (IoT). The incorporation of haptic feedback in DIAR systems could further enhance the user experience by adding a sense of touch to the visual overlays. Additionally, advancements in artificial intelligence and machine learning are expected to further improve the context-appropriateness and realism of the overlaid digital images.

Theoretical application

Although theories are quickly becoming realities in today's technological society, the range of possibilities for digital imaging is wide open. One major application that is still in the works is that of child safety and protection. How can we use digital imaging to better protect our kids? Kodak's program, Kids Identification Digital Software (KIDS) may answer that question. The beginnings include a digital imaging kit to be used to compile student identification photos, which would be useful during medical emergencies and crimes. More powerful and advanced versions of applications such as these are still developing, with increased features constantly being tested and added.

But parents and schools aren't the only ones who see benefits in databases such as these. Criminal investigation offices, such as police precincts, state crime labs, and even federal bureaus have realized the importance of digital imaging in analyzing fingerprints and evidence, making arrests, and maintaining safe communities. As the field of digital imaging evolves, so does our ability to protect the public.

Digital imaging can be closely related to the social presence theory, especially when referring to the social media aspect of images captured by our phones. There are many definitions of social presence theory, but two that clearly define it are "the degree to which people are perceived as real" (Gunawardena, 1995) and "the ability to project themselves socially and emotionally as real people" (Garrison, 2000). Digital imaging allows people to manifest their social lives through images, giving a sense of their presence to the virtual world. The presence of those images acts as an extension of oneself to others, giving a digital representation of what one is doing and who one is with. Digital imaging, in the sense of cameras on phones, helps facilitate this effect of presence with friends on social media. Alexander (2012) states, "presence and representation is deeply engraved into our reflections on images... this is, of course, an altered presence... nobody confuses an image with the represented reality. But we allow ourselves to be taken in by that representation, and only that 'representation' is able to show the liveliness of the absentee in a believable way." Therefore, digital imaging allows us to be represented in a way that reflects our social presence.

Photography is a medium used to capture specific moments visually. Through photography our culture has been given the chance to send information (such as appearance) with little or no distortion. The Media Richness Theory provides a framework for describing a medium's ability to communicate information without loss or distortion. This theory has provided the chance to understand human behavior in communication technologies. The article written by Daft and Lengel (1984,1986) states the following:

Communication media fall along a continuum of richness. The richness of a medium comprises four aspects: the availability of instant feedback, which allows questions to be asked and answered; the use of multiple cues, such as physical presence, vocal inflection, body gestures, words, numbers and graphic symbols; the use of natural language, which can be used to convey an understanding of a broad set of concepts and ideas; and the personal focus of the medium (pp. 83).

The more accurately a medium can communicate appearance, social cues, and other such characteristics, the richer it becomes. Photography has become a natural part of how we communicate. For example, most phones can send pictures in text messages. Apps such as Snapchat and Vine have become increasingly popular for communicating. Sites like Instagram and Facebook have also allowed users to reach a deeper level of richness because of their ability to reproduce information (Sheer, V. C. (2011). Teenagers' use of MSN features, discussion topics, and online friendship development: the impact of media richness and communication control. Communication Quarterly, 59(1)).

Methods

A digital photograph may be created directly from a physical scene by a camera or similar device. Alternatively, a digital image may be obtained from another image in an analog medium, such as photographs, photographic film, or printed paper, by an image scanner or similar device. Many technical images—such as those acquired with tomographic equipment, side-scan sonar, or radio telescopes—are actually obtained by complex processing of non-image data. Weather radar maps as seen on television news are a commonplace example. The digitalization of analog real-world data is known as digitizing, and involves sampling (discretization) and quantization. Projectional imaging of digital radiography can be done by X-ray detectors that directly convert the image to digital format. Alternatively, phosphor plate radiography is where the image is first taken on a photostimulable phosphor (PSP) plate which is subsequently scanned by a mechanism called photostimulated luminescence.
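
As a toy illustration of the sampling and quantization steps just mentioned (not the pipeline of any particular device), the following sketch samples a continuous signal at a fixed rate and quantizes each sample to 8 bits; the signal, sample rate and bit depth are arbitrary choices.

# Toy digitization: sampling (discretization in time) + quantization (discretization in value).
import numpy as np

def digitize(signal_func, duration_s=1.0, sample_rate_hz=100, bits=8):
    t = np.arange(0, duration_s, 1.0 / sample_rate_hz)   # sampling instants
    samples = signal_func(t)                              # "analog" values at those instants
    lo, hi = samples.min(), samples.max()
    levels = 2 ** bits
    codes = np.round((samples - lo) / (hi - lo) * (levels - 1)).astype(np.uint8)
    return t, codes                                       # quantized 8-bit codes

t, codes = digitize(lambda t: np.sin(2 * np.pi * 5 * t))  # a 5 Hz sine as the stand-in signal
print(codes[:10])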

Finally, a digital image can also be computed from a geometric model or mathematical formula. In this case, the name image synthesis is more appropriate, and it is more often known as rendering.

Digital image authentication is an issue for the providers and producers of digital images such as health care organizations, law enforcement agencies, and insurance companies. There are methods emerging in forensic photography to analyze a digital image and determine if it has been altered.

Digital imaging previously depended on chemical and mechanical processes; now all of these processes have been converted to electronic ones. A few things need to happen for digital imaging to occur. First, light energy is converted to electrical energy; think of a grid with millions of little solar cells. Each cell generates an electrical charge that depends on the light falling on it. The charges from each of these "solar cells" are transported and communicated to the firmware, which interprets them; the firmware is what understands and translates color and other light qualities. Pixels are produced next: their varying intensities create different colors, forming a picture or image. Finally, the firmware records the information for later retrieval and reproduction.
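
The following sketch mimics that last step in miniature: it treats a small 2D array of made-up "charge" readings as the sensor grid and scales them into 8-bit pixel values, loosely analogous to what a camera's firmware does.

# Toy "sensor readout": scale raw charge readings into 8-bit grayscale pixel values.
import numpy as np

rng = np.random.default_rng(1)
charges = rng.random((4, 6)) * 3.3        # fake per-cell charge readings (arbitrary units)

# Normalize to the 0-255 range and round to integers, as an 8-bit grayscale image.
pixels = np.round((charges - charges.min()) /
                  (charges.max() - charges.min()) * 255).astype(np.uint8)
print(pixels)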

Advantages

There are several benefits of digital imaging. First, the process enables easy access to photographs and word documents. Google is at the forefront of this 'revolution,' with its mission to digitize the world's books. Such digitization will make the books searchable, thus making participating libraries, such as Stanford University and the University of California Berkeley, accessible worldwide. Digital imaging also benefits the medical world because it "allows the electronic transmission of images to third-party providers, referring dentists, consultants, and insurance carriers via a modem". The process "is also environmentally friendly since it does not require chemical processing". Digital imaging is also frequently used to help document and record historical, scientific and personal life events.

Benefits also exist regarding photographs. Digital imaging will reduce the need for physical contact with original images. Furthermore, digital imaging creates the possibility of reconstructing the visual contents of partially damaged photographs, thus eliminating the potential that the original would be modified or destroyed. In addition, photographers will be "freed from being 'chained' to the darkroom," will have more time to shoot and will be able to cover assignments more effectively. Digital imaging 'means' that "photographers no longer have to rush their film to the office, so they can stay on location longer while still meeting deadlines".

Another advantage of digital photography is that it has been extended to camera phones. We can take cameras with us wherever we go and send photos to others instantly. It is easy for people to use, and it also helps in the process of self-identification for the younger generation.

Criticisms

Critics of digital imaging cite several negative consequences. An increased "flexibility in getting better quality images to the readers" will tempt editors, photographers and journalists to manipulate photographs. In addition, "staff photographers will no longer be photojournalists, but camera operators... as editors have the power to decide what they want 'shot'". Legal constraints, including copyright, pose another concern: will copyright infringement occur as documents are digitized and copying becomes easier?

State cartel theory

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/State_cartel_theory

State cartel theory
is a new concept in the field of international relations theory (IR) and belongs to the group of institutionalist approaches. Up to now the theory has mainly been specified with regard to the European Union (EU), but it could be made much more general. Hence state cartel theory would treat all international governmental organizations (IGOs) as cartels made up of states.

Terminology

The term cartel in state cartel theory means – put briefly – an alliance of rivals. It is used in a neutral, strictly analytical way, not as a disparagement. The terminology has predominantly been adopted from the old historical cartel theory of pre-World War II Europe, but the terms have been checked and, where necessary, adjusted in meaning so that they can accommodate political and governmental functions as cartel functions of the combined states.

Methodical base and scientific background

State cartel theory is a hybrid design made up of two or more theories, assembled in an adequate way.

The method of theory creation consists of three steps:

  1. The starting material of state cartel theory is the intellectual corpus of a broad existing theory of international relations. For instance, the following theories might be adaptable: realism, the neofunctionalist Europe-science, or even a Marxist theory of imperialism. Their statements on the relationships between the industrialized nation states are called into question, as these are thought to be ideologically biased, and they are therefore marked for revision and change.
  2. The losses and vacancies are then refilled by another theory, the classical cartel theory of economic enterprises. This theory, developed mainly in Germany, was authoritative in Europe until the end of World War II and was pushed aside globally by American anti-trust policy by the 1960s. The classical cartel theory comprised an elaborate organizational theory of the cartel institution. Its knowledge of the relationships among the cartelized enterprises, and between them and the common cartel institutions, is now applied. Hence, the classical economic cartel theory serves as a tool kit for repairing the ideological deformations and corruptions of the existing theories of IR.
  3. In a third step, the results of this transfer are rechecked against the available facts of international relations and stated more precisely and with greater differentiation.

The final outcome is a theory which – like the cartel theory of economic enterprises – is based on the utilitarian image of man. Thus, state cartel theory is strictly determined by socio-economic factors. Since this approach guards against ideological influences, it is not connected – neither openly nor in a hidden or subtle manner – with the interests of any existing great power.

The philosophical precondition of this knowledge transfer from cartel theory is the century-old insight that there are striking analogies between combinations of states and combinations of economic enterprises (i.e. the cartels that were formerly legal and numerous in Europe).[1] These analogies hold both institutionally and functionally.

History of the state cartel conception

The conception of international relations as potential cartel phenomena has a long tradition:

  • John Atkinson Hobson, a left-liberal British economist, suggested between 1902 and 1938 that imperial antagonisms could be pacified in a system of 'inter-imperialism' if the great powers would "learn the art of combination" ('combination' or 'combine' were at that time used to designate cartels).[2]
  • Karl Kautsky, the leading theoretician of social democracy before World War I, hoped from 1912 onward that the great powers – beginning with the British Empire and the German Reich – would unite into a 'state cartel', giving them organization and reconciliation within an ultra-imperialism – an idea that was an illusion at that time.[3]
  • Early commentators on European unification described the organizational system of the Schuman Plan in 1950 as 'cartel-like'; the Corriere della Sera, a respected Italian newspaper, understood the aim of the proposal as building a cartello anticartello, i.e., a cartel of states to eliminate the private cartels in the coal and steel sectors.[4]

The cartel concept for closer forms of inter-state cooperation was counteracted by a range of actors: Leninism, American anti-trust policy and European federalists (e.g., Jean Monnet). The conception was first denounced, then ignored, and by the 1960s increasingly forgotten.

Central conclusions and basic instruments

The breakthrough phase: The historical origin of a wider political cartelization is identified in the crisis of the capitalist system after World War II.[5] The breakdown of the anarchic – or imperialistic – world system in 1945 marks the beginning of the comprehensive inter-state cartelization of the western world. The extreme material, political and human losses and sacrifices led the nations – or more precisely, their ruling classes – to the conclusion that war and protectionism should no longer be used as weapons against each other, in order to assure the survival of the free western world rather than in the service of mankind. This culminated in the General Agreement on Tariffs and Trade coming into force on 1 January 1948.

The cartel relationship: The analysis of relations among the combined states is a basic instrument of state cartel theory. The aim is to identify the extent of their cooperation and common interests on the one hand, and the amount of their competition, the degree of divergence of their interests, on the other. This runs essentially contrary to the existing claims of the conventional theories of IR, e.g. the argument of interstate 'friendship' (the idealist or functionalist position) or of a human drive for power (the realist position). In this way, the basic relationship between cooperation-seeking capitalist states can be recognized as a quite rational, but also difficult, friendship-rivalry – a relationship of both cooperation and antagonism.[5] The Franco-German friendship can be seen as a paradigm of this, and many examples of its ambivalence have been cited.[6]

Hegemony analysis: The supremacy of the greater states – like that of the bigger enterprises – leads to disproportionate assertiveness and thus to privileges for these actors, obtained by persuasion or by force. On the other hand, integration brings about dependencies binding all participants. So there is a structure of symmetric and asymmetric connections, which can be found both inside and outside the respective state cartels. The analysis of these complex forms of international relations is at the focus of state cartel theory and can lead to a global analysis of state cartels and state-cartel effects. One aspect not considered is the "disrupter", an actor that – often in opposition to a real or perceived grievance – can bring force to bear at little relative cost, as in the attacks on the World Trade Center, where two aircraft used as weapons brought the power of a new form of terror sharply to the world's attention.

Institutions and theory of ideology

The organizational structure: In enterprise cartels the members' assembly was always the historically first and main institution of the combination. All further institutions had serving functions (secretary, bodies for market regulation, arbitration board) and were the result of the will and the needs of the members. This structure can be found alike in the combinations of states: the council of ministers or delegates is the members' assembly of the participating states (e.g. the Council of the European Union), it has a secretariat, there are operative commissions (e.g. the European Commission), and there can be an arbitration board/court (e.g. the European Court).[7] Additional institutions could be developed – in enterprise as in state cartels – according to needs.

On the democratic character of the European Union: The European Parliament is – according to state cartel theory – a less important, not really indispensable, multifunctional communitarian institution of the EU:[8] Its most obvious function is the orchestration of a European democracy; here, the democratic aspirations of the parties and citizens of the member states are served symbolically – meaning they often lead nowhere. Another function is the provision and application of additional EU expertise by the representatives of the individual member states, providing a further channel through which national interests are imported into the communitarian system. Finally, the European Parliament is able to influence EU legislation slightly through its rights of participation: it can genuinely shape and improve decisions which would otherwise be made exclusively by the powerful council, often according to the notorious principle of the lowest common denominator. A significant increase in the rights of the parliament would challenge the system and pose the question: cartel or federal state? A process like this, which could really override the cartel logic, could only develop with the support, or at the instigation, of a strong dominant group of member states.

Theory of ideology: While the national-imperialist ideologies of the pre-1945 era have been abolished, international institutions (state cartels) today spread an ideology of interstate cooperation: "Since war and protectionism should fall away as means of policy, a different style of contact becomes necessary between the partner states. […] The nationalism of former days is repressed by an ideology of international understanding and friendship. The European Spirit is evoked particularly in the context of the European Union. The commandments of international understanding and European communitarianship are the lubricant in the mechanics of the bargaining process in the state cartel. As ideologies they often make the relations look much better than they really are, but as appeals or instructions they can be eminently valuable. […] The origin of the communitarian ideology in its pure occurrence is the central institutions of the EU, its commission and its parliament."[5]

Functions and results of integration

The functional typology: The enterprise cartels of former times framed markets according to their interests; state cartels frame policies. While the aims of the standardization activities may differ, the methods and instruments of private and state cartels are often similar and always analogically comparable. Thereby the functional typology of the classical enterprise cartels is applicable also to interstate regulative communities. This typification by the purpose of the cartel can be demonstrated using the example of the European Union:[9]

  • the European common agricultural market has instruments similar to a – normally forbidden – production cartel typically controlling prices and outputs.
  • the miscellaneous market regulations of EU, but also its health care and environmental standards, can be seen to correspond to standardization cartels, partially also to conditions cartels.
  • the settlements on maximum prices for cell phone calls within Europe constitute a supranationally decreed calculations cartel.

The cartel gain: Cooperation within international institutions normally provides the participating states with substantial benefits. "The cartel gain of the EU consists of the various gains in prosperity which result from economic integration and which now make the member states adhere as if glued together. Any far-reaching disintegration, attempting a return to national autarchy, would invariably lead to an economic crisis for which the Great Depression [in Europe] of 1929/33 would have been just a slight forerunner."[10] Transnational corporations and export-oriented national enterprises, plus their employees and suppliers, constitute a social power that would hinder a breaking apart of the community. On the other hand, cooperation in the state cartel is complicated by distributional conflicts.

Tendencies toward crises: According to state cartel theory, inter-state organizations typically develop severe problems and crises. The European Union is seen to be in a permanent crisis.[11] The causes are thought to lie in clashes of increasingly unbridgeable interests between the participating nations, or simply in greed. The EU – as a particularly advanced cartel combine – increasingly runs up against a systemic barrier to its development; that is, it could only be upgraded effectively by a change-over of power, a federal revolution, in which the cartel form would be overcome and a federal state – with its considerable potential for rationalization – erected.[12]

Compatibility with other theories of international relations

State cartel theory suggests:

  • an intense rivalry among the developed industrialized states for socio-economic reasons,
  • a partial (not complete) resolvability of these contradictions within the framework of international institutions or – in other words – by the cartel method,
  • the power of the nation states as the crucial force within international political relations.

Thereby state cartel theory stands partially in accordance with, and partially in opposition to, other theories:

  • to the neofunctionalist Europe-science and the communitarian method of Jean Monnet: The belief in the resolvability of the inner-European divergences of interests – in the feasibility of an efficient and harmonious Europe – is criticized by state cartel theory as naïvely idealistic. On the other hand, both integration theories agree on the importance they attach to institution-building in communities of states.
  • to Leninist imperialism theory: The allegation of antagonistic rivalry between the developed capitalist states is held to be wrong, certainly since World War II: these states can definitely cooperate enduringly and abstain from open violence in their relationships. But state cartel theory and imperialism theory agree in the belief that societal interests are caused by socio-economic factors, and thus ultimately depend on the economy.
  • to theories of International Relations with a pro-American bias: In the hegemony analysis of state cartel theory, it is always the view of the most powerful nations (i.e., globally, the USA) that is most important. Whitewashing America as a 'good strong power', as done in realism (by Morgenthau: the USA was not in consistent pursuit of hegemony), or the methodical deferral of the power aspect, as in the mainstream of both regime theory and the global governance approach, would be contrary to state cartel theory.

CMB cold spot

From Wikipedia, the free encyclopedia
[Image: the circled area is the cold spot in Planck's CMB map. Black lines mark the constellations (the cold spot lies in Eridanus) and the blue circle marks the celestial equator. Generated with Celestia.]
[Image: the circled area is the cold spot in the WMAP map.]

The CMB Cold Spot or WMAP Cold Spot is a region of the sky seen in microwaves that has been found to be unusually large and cold relative to the expected properties of the cosmic microwave background radiation (CMBR). The "Cold Spot" is approximately 70 µK (0.00007 K) colder than the average CMB temperature (approximately 2.7 K), whereas the root mean square of typical temperature variations is only 18 µK. At some points, the "cold spot" is 140 µK colder than the average CMB temperature.

The radius of the "cold spot" subtends about 5°; it is centered at the galactic coordinates l^II = 207.8°, b^II = −56.3° (equatorial: α = 03h 15m 05s, δ = −19° 35′ 02″). It is, therefore, in the Southern Celestial Hemisphere, in the direction of the constellation Eridanus.

Typically, the largest fluctuations of the primordial CMB temperature occur on angular scales of about 1°. Thus a cold region as large as the "cold spot" appears very unlikely, given generally accepted theoretical models. Various alternative explanations exist, including a so-called Eridanus Supervoid or Great Void that may exist between us and the primordial CMB (foreground voids can cause cold spots against the CMB background). Such a void would affect the observed CMB via the integrated Sachs–Wolfe effect, and would be one of the largest structures in the observable universe. This would be an extremely large region of the universe, roughly 150 to 300 Mpc or 500 million to one billion light-years across and 6 to 10 billion light-years away, containing a density of matter much smaller than the average density at that redshift.

Discovery and significance

[Image: the CMB Cold Spot as also observed by the Planck satellite, at similar significance. Generated with Celestia.]

In the first year of data recorded by the Wilkinson Microwave Anisotropy Probe (WMAP), a region of sky in the constellation Eridanus was found to be cooler than the surrounding area. Subsequently, using the data gathered by WMAP over 3 years, the statistical significance of such a large, cool region was estimated. The probability of finding a deviation at least as high in Gaussian simulations was found to be 1.85%. Thus it appears unlikely, but not impossible, that the cold spot was generated by the standard mechanism of quantum fluctuations during cosmological inflation, which in most inflationary models gives rise to Gaussian statistics. The cold spot may also, as suggested in the references above, be a signal of non-Gaussian primordial fluctuations.

Some authors called into question the statistical significance of this cold spot.

In 2013, the CMB Cold Spot was also observed by the Planck satellite at a similar significance level, ruling out the possibility that it was caused by a systematic error of the WMAP satellite.

Possible causes other than primordial temperature fluctuation

The large 'cold spot' forms part of what has been called an 'axis of evil' (so-called because it was unexpected to see a structure like this).

Supervoid

[Image: the mean ISW imprint of 50 supervoids on the cosmic microwave background; color scale from −20 to +20 µK.]

One possible explanation of the cold spot is a huge void between us and the primordial CMB. A region cooler than surrounding sightlines can be observed if a large void is present, as such a void would cause an increased cancellation between the "late-time" integrated Sachs–Wolfe effect and the "ordinary" Sachs–Wolfe effect. This effect would be much smaller if dark energy were not stretching the void as photons went through it.

Rudnick et al. found a dip in NVSS galaxy number counts in the direction of the Cold Spot, suggesting the presence of a large void. Since then, some additional works have cast doubt on the "supervoid" explanation. The correlation between the NVSS dip and the Cold Spot was found to be marginal using a more conservative statistical analysis. Also, a direct survey for galaxies in several one-degree-square fields within the Cold Spot found no evidence for a supervoid. However, the supervoid explanation has not been ruled out entirely; it remains intriguing, since supervoids do seem capable of affecting the CMB measurably.

A 2015 study showed the presence of a supervoid 1.8 billion light-years in diameter, centered 3 billion light-years from our galaxy in the direction of the Cold Spot and likely associated with it. This would make it the largest void detected, and one of the largest structures known. Later measurements of the Sachs–Wolfe effect also point to its likely existence.

Although large voids are known in the universe, a void would have to be exceptionally vast to explain the cold spot, perhaps 1,000 times larger in volume than expected typical voids. It would be 6 billion–10 billion light-years away and nearly one billion light-years across, and would be perhaps even more improbable to occur in the large-scale structure than the WMAP cold spot would be in the primordial CMB.

A 2017 study reported surveys showing no evidence that associated voids in the line of sight could have caused the CMB Cold Spot and concluded that it may instead have a primordial origin.

One important step in confirming or ruling out the late-time integrated Sachs–Wolfe effect is measuring the mass profile of galaxies in the area, since the ISW effect is affected by galaxy bias, which in turn depends on the mass profiles and types of the galaxies.

In December 2021, the Dark Energy Survey (DES), analyzing their data, put forward more evidence for the correlation between the Eridanus supervoid and the CMB cold spot.

Cosmic texture

In late 2007, Cruz et al. argued that the Cold Spot could be due to a cosmic texture, a remnant of a phase transition in the early Universe.

Parallel universe

A controversial claim by Laura Mersini-Houghton is that it could be the imprint of another universe beyond our own, caused by quantum entanglement between universes before they were separated by cosmic inflation. Laura Mersini-Houghton said, "Standard cosmology cannot explain such a giant cosmic hole" and hypothesized that the WMAP cold spot is "… the unmistakable imprint of another universe beyond the edge of our own." If true, this would provide the first empirical evidence for a parallel universe (though theoretical models of parallel universes existed previously). It would also support string theory. The team claims that there are testable consequences for its theory. If the parallel-universe theory is true, there will be a similar void in the celestial sphere's opposite hemisphere (which New Scientist reported to be in the Southern celestial hemisphere; the results of the New Mexico array study reported it as being in the Northern).

Other researchers have modeled the cold spot as potentially the result of cosmological bubble collisions, again before inflation.

A sophisticated computational analysis (using Kolmogorov complexity) has derived evidence for a north and a south cold spot in the satellite data: "...among the high randomness regions is the southern non-Gaussian anomaly, the Cold Spot, with a stratification expected for the voids. Existence of its counterpart, a Northern Cold Spot with almost identical randomness properties among other low-temperature regions is revealed."

These predictions and others were made prior to the measurements (see Laura Mersini). However, apart from the Southern Cold Spot, the various statistical methods generally fail to confirm each other regarding a Northern Cold Spot. The 'K-map' used to detect the Northern Cold Spot was noted to have twice the measure of randomness of the standard model, a difference speculated to be caused by randomness introduced by unaccounted-for voids.

Sensitivity to finding method

The cold spot is mainly anomalous because it stands out compared to the relatively hot ring around it; it is not unusual if one only considers the size and coldness of the spot itself. More technically, its detection and significance depends on using a compensated filter like a Mexican hat wavelet to find it.
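
As a loose one-dimensional analogue of such a compensated filter (real CMB analyses work on the sphere, and the temperature profile below is invented), the following sketch convolves a Mexican-hat kernel with a cold dip surrounded by a warm ring:

# 1D toy analogue of detection with a compensated (zero-mean) Mexican hat filter.
import numpy as np

def mexican_hat(n, a):
    # Ricker / "Mexican hat" kernel, explicitly re-centered so it sums to zero (compensated).
    t = np.arange(n) - (n - 1) / 2.0
    k = (1 - (t / a) ** 2) * np.exp(-t**2 / (2 * a**2))
    return k - k.mean()

# Made-up 1D temperature profile (µK): a cold central dip surrounded by a slightly hot ring.
x = np.linspace(-30, 30, 601)
profile = -70.0 * np.exp(-(x / 5.0) ** 2) + 20.0 * np.exp(-((np.abs(x) - 10.0) / 3.0) ** 2)

kernel = mexican_hat(401, a=50)               # scale chosen to roughly match the dip width
response = np.convolve(profile, kernel, mode="same")

i = np.abs(response).argmax()
print(f"strongest filter response at x = {x[i]:.1f} (dip centre is at x = 0.0)")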

Superparamagnetism

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Superparamagnetism

Superparamagnetism
is a form of magnetism which appears in small ferromagnetic or ferrimagnetic nanoparticles. In sufficiently small nanoparticles, magnetization can randomly flip direction under the influence of temperature. The typical time between two flips is called the Néel relaxation time. In the absence of an external magnetic field, when the time used to measure the magnetization of the nanoparticles is much longer than the Néel relaxation time, their magnetization appears to be on average zero; they are said to be in the superparamagnetic state. In this state, an external magnetic field is able to magnetize the nanoparticles, similarly to a paramagnet. However, their magnetic susceptibility is much larger than that of paramagnets.

The Néel relaxation in the absence of magnetic field

Normally, any ferromagnetic or ferrimagnetic material undergoes a transition to a paramagnetic state above its Curie temperature. Superparamagnetism is different from this standard transition since it occurs below the Curie temperature of the material.

Superparamagnetism occurs in nanoparticles which are single-domain, i.e. composed of a single magnetic domain. This is possible when their diameter is below 3–50 nm, depending on the material. In this condition, the magnetization of the nanoparticle can be treated as a single giant magnetic moment, the sum of all the individual magnetic moments carried by the atoms of the nanoparticle. Those working in the field of superparamagnetism call this the "macro-spin approximation".

Because of the nanoparticle's magnetic anisotropy, the magnetic moment usually has only two stable orientations, antiparallel to each other and separated by an energy barrier. The stable orientations define the nanoparticle's so-called "easy axis". At finite temperature, there is a finite probability for the magnetization to flip and reverse its direction. The mean time between two flips is called the Néel relaxation time τ_N and is given by the following Néel–Arrhenius equation:

τ_N = τ_0 exp(KV / (k_B T)),

where:

  • τ_N is thus the average length of time that it takes for the nanoparticle's magnetization to randomly flip as a result of thermal fluctuations.
  • τ_0 is a length of time, characteristic of the material, called the attempt time or attempt period (its reciprocal is called the attempt frequency); its typical value is between 10⁻⁹ and 10⁻¹⁰ seconds.
  • K is the nanoparticle’s magnetic anisotropy energy density and V its volume. KV is therefore the energy barrier associated with the magnetization moving from its initial easy axis direction, through a “hard plane”, to the other easy axis direction.
  • kB is the Boltzmann constant.
  • T is the temperature.

This length of time can be anywhere from a few nanoseconds to years or much longer. In particular, it can be seen that the Néel relaxation time is an exponential function of the grain volume, which explains why the flipping probability becomes rapidly negligible for bulk materials or large nanoparticles.
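
To make the exponential volume dependence concrete, the following sketch evaluates the Néel–Arrhenius expression for a few particle diameters; the anisotropy constant and attempt time are rough, illustrative values rather than data for any specific material.

# Néel relaxation time tau_N = tau_0 * exp(K*V / (k_B*T)) for a few particle sizes.
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
tau_0 = 1e-9            # attempt time, s (typical order of magnitude)
K = 1e4                 # anisotropy energy density, J/m^3 (illustrative value)
T = 300.0               # temperature, K

for d_nm in (5, 10, 20, 40):
    V = (4.0 / 3.0) * math.pi * (d_nm * 1e-9 / 2) ** 3   # particle volume, m^3
    tau_N = tau_0 * math.exp(K * V / (k_B * T))
    print(f"d = {d_nm:2d} nm  ->  tau_N ≈ {tau_N:.3g} s")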

Blocking temperature

Let us imagine that the magnetization of a single superparamagnetic nanoparticle is measured, and let us define τ_m as the measurement time. If τ_m ≫ τ_N, the nanoparticle magnetization will flip several times during the measurement, and the measured magnetization will average to zero. If τ_m ≪ τ_N, the magnetization will not flip during the measurement, so the measured magnetization will be the instantaneous magnetization at the beginning of the measurement. In the former case the nanoparticle appears to be in the superparamagnetic state, whereas in the latter case it appears to be "blocked" in its initial state.

The state of the nanoparticle (superparamagnetic or blocked) therefore depends on the measurement time. A transition between superparamagnetism and the blocked state occurs when τ_m = τ_N. In several experiments, the measurement time is kept constant but the temperature is varied, so the transition between superparamagnetism and the blocked state is seen as a function of temperature. The temperature for which τ_m = τ_N is called the blocking temperature:

T_B = KV / (k_B ln(τ_m / τ_0))

For typical laboratory measurements, the value of the logarithm in the previous equation is in the order of 20–25.

Equivalently, blocking temperature is the temperature below which a material shows slow relaxation of magnetization.
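
A quick numerical check of the blocking-temperature expression, reusing the illustrative parameters of the previous sketch and a typical laboratory measurement time of 100 s (so that ln(τ_m/τ_0) ≈ 25):

# Blocking temperature T_B = K*V / (k_B * ln(tau_m / tau_0)) for an illustrative particle.
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
tau_0 = 1e-9            # attempt time, s
tau_m = 100.0           # measurement time, s (typical magnetometry time scale)
K = 1e4                 # anisotropy energy density, J/m^3 (illustrative)
d = 20e-9               # particle diameter, m

V = (4.0 / 3.0) * math.pi * (d / 2) ** 3
T_B = K * V / (k_B * math.log(tau_m / tau_0))
print(f"ln(tau_m/tau_0) = {math.log(tau_m / tau_0):.1f},  T_B ≈ {T_B:.1f} K")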

Effect of a magnetic field

[Image: the Langevin function (red line), compared with another function (blue line).]

When an external magnetic field H is applied to an assembly of superparamagnetic nanoparticles, their magnetic moments tend to align along the applied field, leading to a net magnetization. The magnetization curve of the assembly, i.e. the magnetization as a function of the applied field, is a reversible S-shaped increasing function. This function is quite complicated in general, but simplifies in some special cases:

  1. If all the particles are identical (same energy barrier and same magnetic moment), their easy axes are all oriented parallel to the applied field, and the temperature is low enough (T_B < T ≲ KV/(10 k_B)), then the magnetization of the assembly is

     M(H) = n μ tanh(μ₀ μ H / (k_B T)).

  2. If all the particles are identical and the temperature is high enough (T ≳ KV/k_B), then, irrespective of the orientations of the easy axes:

     M(H) = n μ L(μ₀ μ H / (k_B T))

In the above equations:

  • n is the density of nanoparticles in the sample
  • μ₀ is the magnetic permeability of vacuum
  • μ is the magnetic moment of a nanoparticle
  • L(x) = 1/tanh(x) − 1/x is the Langevin function

The initial slope of the M(H) function is the magnetic susceptibility of the sample χ. For the two cases above, respectively:

χ = n μ₀ μ² / (k_B T)   and   χ = n μ₀ μ² / (3 k_B T)

The latter susceptibility is also valid for all temperatures if the easy axes of the nanoparticles are randomly oriented.

It can be seen from these equations that large nanoparticles have a larger µ and so a larger susceptibility. This explains why superparamagnetic nanoparticles have a much larger susceptibility than standard paramagnets: they behave exactly as a paramagnet with a huge magnetic moment.
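
As a sketch of the high-temperature case above, the following snippet evaluates the Langevin magnetization curve for an assembly of identical nanoparticles and compares its low-field slope with χ = n μ₀ μ² / (3 k_B T); the particle moment and number density are invented round numbers.

# Langevin magnetization of an assembly of superparamagnetic nanoparticles (illustrative values).
import numpy as np

k_B = 1.380649e-23          # Boltzmann constant, J/K
mu_0 = 4e-7 * np.pi         # vacuum permeability, T*m/A
mu = 1e5 * 9.274e-24        # particle moment: ~1e5 Bohr magnetons (made-up "giant" moment), J/T
n = 1e22                    # nanoparticle number density, 1/m^3 (illustrative)
T = 300.0                   # temperature, K

def langevin(x):
    return 1.0 / np.tanh(x) - 1.0 / x

H = np.linspace(1.0, 2e4, 200)                  # applied field, A/m (avoid the x = 0 singularity)
x = mu_0 * mu * H / (k_B * T)
M = n * mu * langevin(x)                        # magnetization, A/m

chi_numeric = M[0] / H[0]                       # low-field slope taken from the curve
chi_formula = n * mu_0 * mu**2 / (3 * k_B * T)  # analytic small-field susceptibility
print(f"chi (numeric) ≈ {chi_numeric:.3e},  chi (formula) ≈ {chi_formula:.3e}")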

Time dependence of the magnetization

There is no time dependence of the magnetization when the nanoparticles are either completely blocked (τ_m ≪ τ_N) or completely superparamagnetic (τ_m ≫ τ_N). There is, however, a narrow window around τ_m ≈ τ_N where the measurement time and the relaxation time have comparable magnitude. In this case, a frequency dependence of the susceptibility can be observed. For a randomly oriented sample, the complex susceptibility is:

χ(ω) = (χ_sp + iωτ χ_b) / (1 + iωτ)

where

  • ω is the frequency of the applied field
  • χ_sp is the susceptibility in the superparamagnetic state
  • χ_b is the susceptibility in the blocked state
  • τ is the relaxation time of the assembly

From this frequency-dependent susceptibility, the time dependence of the magnetization for low fields can be derived:

M(t) = χ_sp H + (χ_b − χ_sp) H exp(−t/τ)

Measurements

A superparamagnetic system can be measured with AC susceptibility measurements, where an applied magnetic field varies in time, and the magnetic response of the system is measured. A superparamagnetic system will show a characteristic frequency dependence: When the frequency is much higher than 1/τN, there will be a different magnetic response than when the frequency is much lower than 1/τN, since in the latter case, but not the former, the ferromagnetic clusters will have time to respond to the field by flipping their magnetization. The precise dependence can be calculated from the Néel–Arrhenius equation, assuming that the neighboring clusters behave independently of one another (if clusters interact, their behavior becomes more complicated). It is also possible to perform magneto-optical AC susceptibility measurements with magneto-optically active superparamagnetic materials such as iron oxide nanoparticles in the visible wavelength range.
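
The following sketch evaluates a Debye-type complex susceptibility of the form given above over a range of drive frequencies, showing the crossover near ω ≈ 1/τ that an AC susceptibility measurement looks for; χ_sp, χ_b and τ are placeholder values.

# Frequency dependence of the complex AC susceptibility (Debye-type form, illustrative values).
import numpy as np

chi_sp = 1.0      # susceptibility in the superparamagnetic (low-frequency) limit
chi_b = 0.1       # susceptibility in the blocked (high-frequency) limit
tau = 1e-3        # relaxation time of the assembly, s

freqs_hz = np.logspace(0, 6, 7)            # drive frequencies from 1 Hz to 1 MHz
omega = 2 * np.pi * freqs_hz
chi = (chi_sp + 1j * omega * tau * chi_b) / (1 + 1j * omega * tau)

# Note: the sign of the imaginary (dissipative) part depends on the chosen time convention.
for f, c in zip(freqs_hz, chi):
    print(f"f = {f:>9.0f} Hz   Re(chi) = {c.real:6.3f}   Im(chi) = {c.imag:+7.3f}")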

Effect on hard drives

Superparamagnetism sets a limit on the storage density of hard disk drives due to the minimum size of particles that can be used. This limit on areal-density is known as the superparamagnetic limit.

  • Older hard disk technology uses longitudinal recording. It has an estimated limit of 100 to 200 Gbit/in².
  • Current hard disk technology uses perpendicular recording. As of July 2020, drives with densities of approximately 1 Tbit/in² are available commercially. This is at the limit for conventional magnetic recording that was predicted in 1999.
  • Future hard disk technologies currently in development include: heat-assisted magnetic recording (HAMR) and microwave-assisted magnetic recording (MAMR), which use materials that are stable at much smaller sizes. They require localized heating or microwave excitation before the magnetic orientation of a bit can be changed. Bit-patterned recording (BPR) avoids the use of fine-grained media and is another possibility. In addition, magnetic recording technologies based on topological distortions of the magnetization, known as skyrmions, have been proposed.

Applications

General applications

Biomedical applications
