Thursday, August 4, 2022

Energy harvesting

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Energy_harvesting

Energy harvesting (EH, also known as power harvesting or energy scavenging or ambient power) is the process by which energy is derived from external sources (e.g., solar power, thermal energy, wind energy, salinity gradients, and kinetic energy, also known as ambient energy), captured, and stored for small, wireless autonomous devices, like those used in wearable electronics and wireless sensor networks.

Energy harvesters usually provide a very small amount of power for low-energy electronics. While the input fuel for large-scale power generation costs resources (oil, coal, etc.), the energy source for energy harvesters is present as ambient background. For example, temperature gradients exist from the operation of a combustion engine, and in urban areas there is a large amount of electromagnetic energy in the environment from radio and television broadcasting.

One of the earliest applications of ambient power collected from ambient electromagnetic radiation (EMR) is the crystal radio.

The principles of energy harvesting from ambient EMR can be demonstrated with basic components.

Operation

Energy harvesting devices converting ambient energy into electrical energy have attracted much interest in both the military and commercial sectors. Some systems convert motion, such as that of ocean waves, into electricity to be used by oceanographic monitoring sensors for autonomous operation. Future applications may include high power output devices (or arrays of such devices) deployed at remote locations to serve as reliable power stations for large systems. Another application is in wearable electronics, where energy harvesting devices can power or recharge cellphones, mobile computers, radio communication equipment, etc. All of these devices must be sufficiently robust to endure long-term exposure to hostile environments and have a broad range of dynamic sensitivity to exploit the entire spectrum of wave motions.

Accumulating energy

Energy can also be harvested to power small autonomous sensors such as those developed using MEMS technology. These systems are often very small and require little power, but their applications are limited by the reliance on battery power. Scavenging energy from ambient vibrations, wind, heat or light could enable smart sensors to be functional indefinitely.

Typical power densities available from energy harvesting devices are highly dependent upon the specific application (which affects the generator's size) and the design of the harvesting generator itself. In general, for motion-powered devices, typical values are a few μW/cm3 for human-body-powered applications and hundreds of μW/cm3 for generators powered by machinery. Most energy scavenging devices for wearable electronics generate very little power.

Storage of power

In general, energy can be stored in a capacitor, supercapacitor, or battery. Capacitors are used when the application needs to deliver large energy spikes. Batteries leak less energy and are therefore used when the device needs to provide a steady flow of energy; these characteristics depend on the type of battery used. Common battery types for this purpose are lead-acid and lithium-ion, although older chemistries such as nickel-metal hydride are still widely used. Compared to batteries, supercapacitors tolerate a virtually unlimited number of charge-discharge cycles, enabling essentially maintenance-free operation in IoT and wireless sensor devices.

Use of the power

Current interest in low-power energy harvesting is for independent sensor networks. In these applications, an energy harvesting scheme stores power in a capacitor, then boosts and regulates it into a second storage capacitor or battery that supplies a microprocessor or data transmitter. The power is usually consumed by a sensor application, and the data are stored or transmitted, often wirelessly.
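
As a rough illustration of how such a scheme is sized, the sketch below (Python, with assumed example numbers that do not come from the article) balances the average harvested power against the energy cost of periodic measure-and-transmit bursts and then sizes the buffer capacitor.

    # Back-of-envelope energy budget for a harvester-powered sensor node.
    # Illustrative sketch only; all numbers below are assumed, not from the article.

    HARVEST_POWER_W = 100e-6      # assumed average harvested power: 100 uW
    SLEEP_POWER_W   = 5e-6        # assumed sleep-mode draw of the node
    ACTIVE_POWER_W  = 30e-3       # assumed draw while sampling and transmitting
    ACTIVE_TIME_S   = 0.05        # assumed length of one measure-and-send burst

    # Energy per burst, and the net power left over for bursts after sleep draw.
    burst_energy_j = ACTIVE_POWER_W * ACTIVE_TIME_S
    net_power_w    = HARVEST_POWER_W - SLEEP_POWER_W

    # Shortest sustainable interval between bursts (energy-neutral operation).
    min_interval_s = burst_energy_j / net_power_w
    print(f"One burst costs {burst_energy_j*1e6:.0f} uJ; "
          f"sustainable every {min_interval_s:.1f} s")

    # Buffer capacitor sized so one burst drops the rail from 3.3 V to 3.0 V:
    # E = 1/2 * C * (V1^2 - V2^2)  =>  C = 2E / (V1^2 - V2^2)
    cap_f = 2 * burst_energy_j / (3.3**2 - 3.0**2)
    print(f"Buffer capacitor needed: about {cap_f*1e6:.0f} uF")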

Motivation

The history of energy harvesting dates back to the windmill and the waterwheel. People have searched for ways to store the energy from heat and vibrations for many decades. One driving force behind the search for new energy harvesting devices is the desire to power sensor networks and mobile devices without batteries. Energy harvesting is also motivated by a desire to address the issue of climate change and global warming.

Energy sources

There are many small-scale energy sources that generally cannot be scaled up to industrial size with output comparable to industrial-scale solar, wind, or wave power:

  • Some wristwatches are powered by kinetic energy (these are called automatic watches); in this case, movement of the arm winds the mainspring. A newer design introduced by Seiko ("Kinetic") instead uses the movement of a magnet within an electromagnetic generator to power the quartz movement. The motion provides a rate of change of magnetic flux, which induces an emf in the coils in accordance with Faraday's law (see the sketch after this list).
  • Photovoltaics is a method of generating electrical power by converting solar radiation (both indoors and outdoors) into direct current electricity using semiconductors that exhibit the photovoltaic effect. Photovoltaic power generation employs solar panels composed of a number of cells containing a photovoltaic material. Note that photovoltaics have been scaled up to industrial size and that large solar farms exist.
  • Thermoelectric generators (TEGs) consist of junctions of two dissimilar materials placed across a thermal gradient. Large voltage outputs are possible by connecting many junctions electrically in series and thermally in parallel. Typical performance is 100–300 μV/K per junction. These can be used to capture power at the milliwatt scale from industrial equipment, structures, and even the human body. They are typically coupled with heat sinks to improve the temperature gradient.
  • Micro wind turbines are used to harvest the wind energy readily available in the environment as kinetic energy, to power low-power electronic devices such as wireless sensor nodes. When air flows across the blades of the turbine, a net pressure difference develops between the air above and below the blades. This produces a lift force that rotates the blades. As with photovoltaics, wind farms have been constructed on an industrial scale and are used to generate substantial amounts of electrical energy.
  • Piezoelectric crystals or fibers generate a small voltage whenever they are mechanically deformed. Vibration from engines can stimulate piezoelectric materials, as can the heel of a shoe, or the pushing of a button.
  • Special antennas can collect energy from stray radio waves. This can also be done with a rectenna and, theoretically, at even higher-frequency EM radiation with a nantenna.
  • Power from keys pressed during use of a portable electronic device or remote controller, using magnet and coil or piezoelectric energy converters, may be used to help power the device.
  • Vibration energy harvesting based on electromagnetic induction, which in its simplest form uses a magnet and a copper coil to generate a usable current.
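
For a feel for the magnitudes involved in the watch and vibration harvesters above, the sketch below (Python, with assumed illustrative values) estimates the peak emf induced in a coil by an oscillating magnet using Faraday's law, emf = -N dΦ/dt.

    # Peak emf of a simple moving-magnet vibration harvester via Faraday's law.
    # All values are assumed for illustration; real harvesters vary widely.
    import math

    N_TURNS      = 1000      # assumed number of coil turns
    PEAK_FLUX_WB = 5e-6      # assumed peak flux through the coil per turn (Wb)
    FREQ_HZ      = 50.0      # assumed vibration frequency

    # If the flux varies as phi(t) = PEAK_FLUX_WB * sin(2*pi*f*t), then
    # emf(t) = -N * dphi/dt, whose peak value is N * PEAK_FLUX_WB * 2*pi*f.
    peak_emf_v = N_TURNS * PEAK_FLUX_WB * 2 * math.pi * FREQ_HZ
    print(f"Peak open-circuit emf: about {peak_emf_v:.2f} V")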

Ambient-radiation sources

A possible source of energy comes from ubiquitous radio transmitters. Historically, either a large collection area or close proximity to the radiating wireless energy source is needed to get useful power levels from this source. The nantenna is one proposed development which would overcome this limitation by making use of the abundant natural radiation (such as solar radiation).

One idea is to deliberately broadcast RF energy to power and collect information from remote devices. This is now commonplace in passive radio-frequency identification (RFID) systems, but safety considerations and the US Federal Communications Commission (and equivalent bodies worldwide) limit the maximum power that can be transmitted this way for civilian use. This method has been used to power individual nodes in a wireless sensor network.
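
To see why proximity or a large collection area matters, the sketch below (Python) applies the standard Friis free-space equation to a deliberate RF power source; the transmitter power, antenna gains, and frequency are assumed for illustration only.

    # Received RF power versus distance from the Friis free-space equation:
    #   P_r = P_t * G_t * G_r * (lambda / (4 * pi * d))^2
    # Transmitter power, antenna gains, and frequency below are assumed values.
    import math

    P_TX_W  = 4.0        # assumed transmitter power
    G_TX    = 1.0        # assumed transmit antenna gain (linear)
    G_RX    = 1.64       # assumed receive antenna gain (roughly a dipole)
    FREQ_HZ = 915e6      # assumed ISM-band frequency

    wavelength_m = 3e8 / FREQ_HZ

    for d_m in (1, 3, 10, 30):
        p_rx_w = P_TX_W * G_TX * G_RX * (wavelength_m / (4 * math.pi * d_m)) ** 2
        print(f"{d_m:>3} m: about {p_rx_w * 1e6:8.1f} uW received")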

Fluid flow

Airflow can be harvested by various turbine and non-turbine generator technologies. Towered wind turbines and airborne wind energy systems (AWES) mine the flow of air. There are multiple companies in this space; one example is Zephyr Energy Corporation's patented Windbeam micro generator, which captures energy from airflow to recharge batteries and power electronic devices. The Windbeam's design allows it to operate silently in wind speeds as low as 2 mph. The generator consists of a lightweight beam suspended by durable, long-lasting springs within an outer frame. The beam oscillates rapidly when exposed to airflow due to the effects of multiple fluid flow phenomena. A linear alternator assembly converts the oscillating beam motion into usable electrical energy. The lack of bearings and gears eliminates frictional inefficiencies and noise. The generator can operate in low-light environments unsuitable for solar panels (e.g., HVAC ducts) and is inexpensive due to low-cost components and simple construction. The scalable technology can be optimized to satisfy the energy requirements and design constraints of a given application.

The flow of blood can also be used to power devices. For instance, the pacemaker developed at the University of Bern uses blood flow to wind up a spring, which in turn drives an electrical micro-generator.

Water energy harvesting with high energy conversion efficiency and high power density was achieved by the design of generators with transistor-like architecture.

Photovoltaic

Photovoltaic (PV) energy harvesting wireless technology offers significant advantages over wired or solely battery-powered sensor solutions: a virtually inexhaustible source of power with little or no adverse environmental effect. Indoor PV harvesting solutions have to date been powered by specially tuned amorphous silicon (aSi), a technology most familiar from solar calculators. In recent years, new PV technologies have come to the forefront in energy harvesting, such as dye-sensitized solar cells (DSSC). The dyes absorb light much as chlorophyll does in plants. Electrons released on impact escape into the TiO2 layer and from there diffuse through the electrolyte; because the dye can be tuned to the visible spectrum, much higher power can be produced. At 200 lux, a DSSC can provide over 10 μW per cm2.
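
The quoted figure translates directly into the cell area needed for a given load; the short sketch below (Python) works through that arithmetic for an assumed sensor-node power budget.

    # Indoor-PV area needed for a small load, using the ~10 uW/cm^2 at 200 lux
    # figure quoted above. The load budget is assumed for illustration.

    DSSC_POWER_DENSITY_W_PER_CM2 = 10e-6   # from the text: >10 uW/cm^2 at 200 lux
    NODE_AVERAGE_LOAD_W          = 50e-6   # assumed average sensor-node draw

    area_cm2 = NODE_AVERAGE_LOAD_W / DSSC_POWER_DENSITY_W_PER_CM2
    print(f"About {area_cm2:.0f} cm^2 of DSSC covers a "
          f"{NODE_AVERAGE_LOAD_W*1e6:.0f} uW load at 200 lux")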

Picture of a batteryless and wireless wall switch

Piezoelectric

The piezoelectric effect converts mechanical strain into electric current or voltage. This strain can come from many different sources: human motion, low-frequency seismic vibrations, and acoustic noise are everyday examples. Except in rare instances, the piezoelectric effect operates in AC, requiring time-varying mechanical input, ideally at mechanical resonance, to be efficient.

Most piezoelectric electricity sources produce power on the order of milliwatts, too small for system application, but enough for hand-held devices such as some commercially available self-winding wristwatches. One proposal is that they are used for micro-scale devices, such as in a device harvesting micro-hydraulic energy. In this device, the flow of pressurized hydraulic fluid drives a reciprocating piston supported by three piezoelectric elements which convert the pressure fluctuations into an alternating current.
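
For a sense of scale of such sources, the sketch below (Python) estimates the charge and energy from a single press on a piezoelectric element using the standard relation Q = d33·F; the force, d33 coefficient, and capacitance are assumed values chosen only for illustration.

    # Energy from one press on a piezoelectric element (illustrative sketch).
    # Q = d33 * F for a force applied along the poling axis; E ~ Q^2 / (2*C).

    D33_C_PER_N = 400e-12    # assumed d33 of a PZT ceramic (~400 pC/N)
    FORCE_N     = 20.0       # assumed force of a firm button press
    CAP_F       = 20e-9      # assumed capacitance of the element

    charge_c  = D33_C_PER_N * FORCE_N          # generated charge
    voltage_v = charge_c / CAP_F               # open-circuit voltage
    energy_j  = charge_c ** 2 / (2 * CAP_F)    # energy stored on the element

    print(f"Charge: {charge_c*1e9:.1f} nC, open-circuit voltage: {voltage_v:.1f} V, "
          f"energy: {energy_j*1e9:.1f} nJ")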

As piezo energy harvesting has been investigated only since the late 1990s, it remains an emerging technology. Nevertheless, some interesting improvements were made with the self-powered electronic switch at the INSA school of engineering, implemented by the spin-off Arveni. In 2006, a proof of concept of a battery-less wireless doorbell push button was created, and a more recent product showed that a classical wireless wall switch can be powered by a piezo harvester. Other industrial applications appeared between 2000 and 2005, for example to harvest energy from vibration and supply sensors, or to harvest energy from shock.

Piezoelectric systems can convert motion from the human body into electrical power. DARPA has funded efforts to harness energy from leg and arm motion, shoe impacts, and blood pressure for low-level power to implantable or wearable sensors. Nanobrushes are another example of a piezoelectric energy harvester; they can be integrated into clothing. Multiple other nanostructures have been exploited to build energy-harvesting devices; for example, a single-crystal PMN-PT nanobelt was fabricated and assembled into a piezoelectric energy harvester in 2016. Because energy harvesting sources attached to the body inevitably affect it, careful design is needed to minimise user discomfort. The Vibration Energy Scavenging Project is another project set up to try to scavenge electrical energy from environmental vibrations and movements. A microbelt can be used to gather electricity from respiration. In addition, since the vibration of human motion comes in three directions, a single piezoelectric-cantilever-based omni-directional energy harvester has been created using 1:2 internal resonance. Finally, a millimeter-scale piezoelectric energy harvester has also already been created.

Piezo elements are being embedded in walkways to recover the "people energy" of footsteps. They can also be embedded in shoes to recover "walking energy". Researchers at MIT developed the first micro-scale piezoelectric energy harvester using thin-film PZT in 2005. Arman Hajati and Sang-Gook Kim invented an ultra-wide-bandwidth micro-scale piezoelectric energy harvesting device by exploiting the nonlinear stiffness of a doubly clamped microelectromechanical systems (MEMS) resonator. The stretching strain in a doubly clamped beam shows a nonlinear stiffness, which provides a passive feedback and results in amplitude-stiffened Duffing-mode resonance. Typically, piezoelectric cantilevers are adopted for the above-mentioned energy harvesting systems. One drawback is that a piezoelectric cantilever has a gradient strain distribution, i.e., the piezoelectric transducer is not fully utilized. To address this issue, triangle-shaped and L-shaped cantilevers have been proposed for uniform strain distribution.

In 2018, Soochow University researchers reported hybridizing a triboelectric nanogenerator and a silicon solar cell by sharing a mutual electrode. This device can collect solar energy or convert the mechanical energy of falling raindrops into electricity.

UK telecom company Orange UK created an energy harvesting T-shirt and boots. Other companies have also done the same.

Energy from smart roads and piezoelectricity

Tetragonal unit cell of lead titanate
 
A piezoelectric disk generates a voltage when deformed (change in shape is greatly exaggerated)

Brothers Pierre and Jacques Curie discovered the piezoelectric effect in 1880. The piezoelectric effect converts mechanical strain into voltage or electric current and generates electric energy from motion, weight, vibration, and temperature changes, as shown in the figure.

Considering the piezoelectric effect in thin-film lead zirconate titanate (PZT), microelectromechanical systems (MEMS) power-generating devices have been developed. During recent improvements in piezoelectric technology, Aqsa Abbasi differentiated two modes of vibration converters and re-designed them to resonate at specific frequencies from an external vibration energy source, thereby creating electrical energy via the piezoelectric effect using an electromechanically damped mass. Beam-structured electrostatic devices are more difficult to fabricate than comparable PZT MEMS devices, because general silicon processing involves many more mask steps that do not require a PZT film. Piezoelectric sensors and actuators have a cantilever beam structure that consists of a membrane, a bottom electrode, a piezoelectric film, and a top electrode. Roughly three to five mask steps are required for patterning each layer, yet the induced voltage is very low. Pyroelectric crystals have a unique polar axis along which spontaneous polarization exists; these are the crystals of classes 6mm, 4mm, mm2, 6, 4, 3m, 3, 2, and m. The special polar axis, crystallophysical axis X3, coincides with the axes L6, L4, L3, and L2 of the crystals, or lies in the unique plane P (class "m"). Under external effects (electric fields, mechanical stresses), the electric centers of positive and negative charges of an elementary cell are displaced from their equilibrium positions, i.e., the spontaneous polarization of the crystal changes; the piezoelectric effect in pyroelectric crystals arises from exactly this change in spontaneous polarization. The displacement produces a change ΔPi in the polarization components along all three axes. Supposing, to a first approximation, that ΔPi is proportional to the mechanical stresses causing it, ΔPi = dikl·Tkl, where Tkl represents the mechanical stress and dikl the piezoelectric moduli.

PZT thin films have attracted attention for applications such as force sensors, accelerometers, gyroscope actuators, tunable optics, micro-pumps, ferroelectric RAM, display systems, and smart roads. When energy sources are limited, energy harvesting plays an important role. Smart roads have the potential to play an important role in power generation: embedding piezoelectric material in the road can convert the pressure exerted by moving vehicles into voltage and current.

Smart transportation intelligent system

Piezoelectric sensors are most useful in smart-road technologies that can be used to create systems that are intelligent and improve productivity in the long run. Imagine highways that alert motorists of a traffic jam before it forms, bridges that report when they are at risk of collapse, or an electric grid that fixes itself when blackouts hit. For many decades, scientists and experts have argued that the best way to fight congestion is intelligent transportation systems, such as roadside sensors to measure traffic and synchronized traffic lights to control the flow of vehicles. But the spread of these technologies has been limited by cost. There are also some other smart-technology shovel-ready projects which could be deployed fairly quickly, but most of the technologies are still at the development stage and might not be practically available for five years or more.

Pyroelectric

The pyroelectric effect converts a temperature change into electric current or voltage. It is analogous to the piezoelectric effect, which is another type of ferroelectric behavior. Pyroelectricity requires time-varying inputs and suffers from small power outputs in energy harvesting applications due to its low operating frequencies. However, one key advantage of pyroelectrics over thermoelectrics is that many pyroelectric materials are stable up to 1200 °C or higher, enabling energy harvesting from high temperature sources and thus increasing thermodynamic efficiency.

One way to directly convert waste heat into electricity is by executing the Olsen cycle on pyroelectric materials. The Olsen cycle consists of two isothermal and two isoelectric field processes in the electric displacement-electric field (D-E) diagram. The principle of the Olsen cycle is to charge a capacitor via cooling under low electric field and to discharge it under heating at higher electric field. Several pyroelectric converters have been developed to implement the Olsen cycle using conduction, convection, or radiation. It has also been established theoretically that pyroelectric conversion based on heat regeneration using an oscillating working fluid and the Olsen cycle can reach Carnot efficiency between a hot and a cold thermal reservoir. Moreover, recent studies have established polyvinylidene fluoride trifluoroethylene [P(VDF-TrFE)] polymers and lead lanthanum zirconate titanate (PLZT) ceramics as promising pyroelectric materials to use in energy converters due to their large energy densities generated at low temperatures. Additionally, a pyroelectric scavenging device that does not require time-varying inputs was recently introduced. The energy-harvesting device uses the edge-depolarizing electric field of a heated pyroelectric to convert heat energy into mechanical energy instead of drawing electric current off two plates attached to the crystal-faces.
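
As a simple way to picture the cycle's output, the sketch below (Python) computes the work per cycle for an idealized rectangular Olsen cycle in the D-E plane, W = ∮ E dD; the field and displacement values are assumed purely for illustration and are not measured material data.

    # Energy density of an idealized, rectangular Olsen cycle: W = closed integral of E dD.
    # For a rectangle bounded by fields (E_low, E_high) and displacements (D_hot, D_cold),
    # the enclosed area is (E_high - E_low) * (D_cold - D_hot). All values are assumed.

    E_LOW_V_PER_M   = 2e6     # assumed low electric field (2 MV/m)
    E_HIGH_V_PER_M  = 6e6     # assumed high electric field (6 MV/m)
    D_COLD_C_PER_M2 = 0.05    # assumed electric displacement at the cold temperature
    D_HOT_C_PER_M2  = 0.03    # assumed electric displacement at the hot temperature

    energy_density = (E_HIGH_V_PER_M - E_LOW_V_PER_M) * (D_COLD_C_PER_M2 - D_HOT_C_PER_M2)
    print(f"Idealized work per cycle: about {energy_density/1000:.0f} kJ per m^3 of material")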

Thermoelectrics

Seebeck effect in a thermopile made from iron and copper wires
 

In 1821, Thomas Johann Seebeck discovered that a thermal gradient formed between two dissimilar conductors produces a voltage. At the heart of the thermoelectric effect is the fact that a temperature gradient in a conducting material results in heat flow; this results in the diffusion of charge carriers. The flow of charge carriers between the hot and cold regions in turn creates a voltage difference. In 1834, Jean Charles Athanase Peltier discovered that running an electric current through the junction of two dissimilar conductors could, depending on the direction of the current, cause it to act as a heater or cooler. The heat absorbed or produced is proportional to the current, and the proportionality constant is known as the Peltier coefficient. Today, due to knowledge of the Seebeck and Peltier effects, thermoelectric materials can be used as heaters, coolers and generators (TEGs).

Ideal thermoelectric materials have a high Seebeck coefficient, high electrical conductivity, and low thermal conductivity. Low thermal conductivity is necessary to maintain a high thermal gradient at the junction. Standard thermoelectric modules manufactured today consist of P- and N-doped bismuth-telluride semiconductors sandwiched between two metallized ceramic plates. The ceramic plates add rigidity and electrical insulation to the system. The semiconductors are connected electrically in series and thermally in parallel.

Miniature thermocouples have been developed that convert body heat into electricity and generate 40 μW at 3 V with a 5-degree temperature gradient, while on the other end of the scale, large thermocouples are used in nuclear RTG batteries.
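
A quick consistency check on such figures can be made with the per-junction Seebeck values quoted earlier (100-300 μV/K); the sketch below (Python) estimates how many junctions in series such a module would need, under the simplifying assumption that the quoted 3 V is roughly an open-circuit voltage.

    # Rough junction count for a body-heat thermopile producing ~3 V from a 5 K gradient.
    # Simplifying assumption: the quoted 3 V is treated as an open-circuit voltage, and
    # the per-junction Seebeck range is taken from earlier in the article (100-300 uV/K).

    TARGET_VOLTAGE_V = 3.0
    DELTA_T_K        = 5.0

    for seebeck_uv_per_k in (100, 200, 300):
        volts_per_junction = seebeck_uv_per_k * 1e-6 * DELTA_T_K
        junctions = TARGET_VOLTAGE_V / volts_per_junction
        print(f"{seebeck_uv_per_k} uV/K per junction -> about {junctions:,.0f} junctions in series")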

Practical examples are the finger-heartratemeter by the Holst Centre and the thermogenerators by the Fraunhofer-Gesellschaft.

Advantages to thermoelectrics:

  1. No moving parts allow continuous operation for many years.
  2. Thermoelectrics contain no materials that must be replenished.
  3. Heating and cooling can be reversed.

One downside to thermoelectric energy conversion is low efficiency (currently less than 10%). The development of materials that are able to operate in higher temperature gradients, and that can conduct electricity well without also conducting heat (something that was until recently thought impossible), will result in increased efficiency.

Future work in thermoelectrics could be to convert wasted heat, such as in automobile engine combustion, into electricity.

Electrostatic (capacitive)

This type of harvesting is based on the changing capacitance of vibration-dependent capacitors. Vibrations separate the plates of a charged variable capacitor, and mechanical energy is converted into electrical energy. Electrostatic energy harvesters need a polarization source to work and to convert mechanical energy from vibrations into electricity. The polarization source should be on the order of some hundreds of volts; this greatly complicates the power management circuit. Another solution consists in using electrets, which are electrically charged dielectrics able to keep the polarization on the capacitor for years. It is possible to adapt structures from classical electrostatic induction generators, which also extract energy from variable capacitances, for this purpose. The resulting devices are self-biasing, and can directly charge batteries, or can produce exponentially growing voltages on storage capacitors, from which energy can be periodically extracted by DC/DC converters.
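
The basic energy gain can be seen from the constant-charge relation E = Q^2 / (2C): when vibration pulls the plates apart and the capacitance drops, the stored electrical energy rises at the expense of mechanical work. A minimal sketch (Python, with assumed values):

    # Energy gained in one constant-charge cycle of an electrostatic harvester.
    # The capacitor is charged at maximum capacitance, then vibration separates the
    # plates, lowering C while the charge Q is held fixed. All values are assumed.

    BIAS_VOLTAGE_V = 200.0     # assumed polarization source (electret or external bias)
    C_MAX_F        = 200e-12   # assumed capacitance with the plates close together
    C_MIN_F        = 50e-12    # assumed capacitance with the plates pulled apart

    charge_c  = C_MAX_F * BIAS_VOLTAGE_V        # charge placed on the plates
    e_start_j = 0.5 * charge_c**2 / C_MAX_F     # energy right after charging
    e_end_j   = 0.5 * charge_c**2 / C_MIN_F     # energy after the plates separate
    gain_j    = e_end_j - e_start_j             # mechanical energy converted to electrical

    print(f"Energy converted per cycle: about {gain_j*1e6:.1f} uJ")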

Magnetic induction

Magnetic induction refers to the production of an electromotive force (i.e., voltage) in a changing magnetic field. This changing magnetic field can be created by motion, either rotation (i.e. Wiegand effect and Wiegand sensors) or linear movement (i.e. vibration).

Magnets wobbling on a cantilever are sensitive to even small vibrations and generate microcurrents by moving relative to conductors due to Faraday's law of induction. By developing a miniature device of this kind in 2007, a team from the University of Southampton made possible the planting of such a device in environments that preclude having any electrical connection to the outside world. Sensors in inaccessible places can now generate their own power and transmit data to outside receivers.

One of the major limitations of the magnetic vibration energy harvester developed at the University of Southampton is the size of the generator, in this case approximately one cubic centimeter, which is much too large to integrate into today's mobile technologies. The complete generator, including circuitry, is a relatively large 4 cm by 4 cm by 1 cm, nearly the same size as some mobile devices such as the iPod Nano. Further reductions in the dimensions are possible through the integration of new and more flexible materials as the cantilever beam component. In 2012, a group at Northwestern University developed a vibration-powered generator out of polymer in the form of a spring. This device was able to target the same frequencies as the University of Southampton group's silicon-based device, but with one third the size of the beam component.

A new approach to magnetic induction based energy harvesting has also been proposed by using ferrofluids. The journal article, "Electromagnetic ferrofluid-based energy harvester", discusses the use of ferrofluids to harvest low frequency vibrational energy at 2.2 Hz with a power output of ~80 mW per g.

Quite recently, changes in domain wall pattern under applied stress have been proposed as a method to harvest energy using magnetic induction. In this study, the authors showed that applied stress can change the domain pattern in microwires. Ambient vibrations can cause stress in microwires, which can induce a change in domain pattern and hence change the induction. Power of the order of μW/cm2 has been reported.

Commercially successful vibration energy harvesters based on magnetic induction are still relatively few in number. Examples include products developed by Swedish company ReVibe Energy, a technology spin-out from Saab Group. Another example is the products developed from the early University of Southampton prototypes by Perpetuum. These have to be sufficiently large to generate the power required by wireless sensor nodes (WSN) but in M2M applications this is not normally an issue. These harvesters are now being supplied in large volumes to power WSNs made by companies such as GE and Emerson and also for train bearing monitoring systems made by Perpetuum. Overhead powerline sensors can use magnetic induction to harvest energy directly from the conductor they are monitoring.

Blood sugar

Another way of energy harvesting is through the oxidation of blood sugars. These energy harvesters are called biobatteries. They could be used to power implanted electronic devices (e.g., pacemakers, implanted biosensors for diabetics, implanted active RFID devices, etc.). At present, the Minteer Group of Saint Louis University has created enzymes that could be used to generate power from blood sugars. However, the enzymes would still need to be replaced after a few years. In 2012, a pacemaker was powered by implantable biofuel cells at Clarkson University under the leadership of Dr. Evgeny Katz.

Tree-based

Tree metabolic energy harvesting is a type of bio-energy harvesting. Voltree has developed a method for harvesting energy from trees. These energy harvesters are being used to power remote sensors and mesh networks as the basis for a long term deployment system to monitor forest fires and weather in the forest. According to Voltree's website, the useful life of such a device should be limited only by the lifetime of the tree to which it is attached. A small test network was recently deployed in a US National Park forest.

Other sources of energy from trees include capturing the physical movement of the tree in a generator. Theoretical analysis of this source of energy shows some promise in powering small electronic devices. A practical device based on this theory has been built and successfully powered a sensor node for a year.

Metamaterial

A metamaterial-based device wirelessly converts a 900 MHz microwave signal to 7.3 volts of direct current (greater than that of a USB device). The device can be tuned to harvest other signals, including Wi-Fi signals, satellite signals, or even sound signals. The experimental device used a series of five fiberglass and copper conductors. Conversion efficiency reached 37 percent. When traditional antennas are close to each other in space, they interfere with each other. Since received RF power falls off rapidly with distance, however, the amount of power available is very small. While the claim of 7.3 volts is impressive, the measurement is for an open circuit. Since the power is so low, there can be almost no current when any load is attached.

Atmospheric pressure changes

The pressure of the atmosphere changes naturally over time from temperature changes and weather patterns. Devices with a sealed chamber can use these pressure differences to extract energy. This has been used to provide power for mechanical clocks such as the Atmos clock.

Ocean energy

A relatively new concept is generating energy from the oceans. Large masses of water on the planet carry great amounts of energy. Energy can be generated from tidal streams, ocean waves, differences in salinity, and differences in temperature. As of 2018, efforts are underway to harvest energy this way. The United States Navy was recently able to generate electricity using the temperature differences present in the ocean.

One method to use the temperature difference across different levels of the thermocline in the ocean is a thermal energy harvester equipped with a material that changes phase in different temperature regions. This is typically a polymer-based material that can handle reversible heat treatments. When the material changes phase, the energy differential is converted into mechanical energy. The materials used need to be able to change phase, from liquid to solid, depending on the position of the thermocline underwater. These phase change materials within thermal energy harvesting units would be an ideal way to recharge or power an unmanned underwater vehicle (UUV), since they rely on the warm and cold water already present in large bodies of water, minimizing the need for standard battery recharging. Capturing this energy would allow for longer-term missions, since the need to collect the vehicle or have it return for charging can be eliminated. This is also a very environmentally friendly method of powering underwater vehicles. There are no emissions from utilizing a phase change fluid, and it will likely have a longer lifespan than a standard battery.

Future directions

Electroactive polymers (EAPs) have been proposed for harvesting energy. These polymers have large strain, high elastic energy density, and high energy conversion efficiency. The total weight of systems based on EAPs is proposed to be significantly lower than those based on piezoelectric materials.

Nanogenerators, such as the one made by Georgia Tech, could provide a new way for powering devices without batteries. As of 2008, it only generates some dozen nanowatts, which is too low for any practical application.

Noise has been the subject of a proposal by the NiPS Laboratory in Italy to harvest wide-spectrum, low-amplitude vibrations via a nonlinear dynamical mechanism that can improve harvester efficiency by up to a factor of 4 compared to traditional linear harvesters.

Combinations of different types of energy harvesters can further reduce dependence on batteries, particularly in environments where the available ambient energy types change periodically. This type of complementary balanced energy harvesting has the potential to increase reliability of wireless sensor systems for structural health monitoring.

Sunday, July 31, 2022

Vera C. Rubin Observatory

From Wikipedia, the free encyclopedia
 
Vera C. Rubin Observatory
Rendering of completed LSST
Alternative names: LSST
Named after: Vera Rubin
Location(s): Elqui Province, Coquimbo Region, Chile
Coordinates: 30°14′40.7″S 70°44′57.9″W
Organization: Large Synoptic Survey Telescope Corporation
Altitude: 2,663 m (8,737 ft), top of pier
Wavelength: 320–1060 nm
Built: 2015–2021
First light: Expected in 2023
Telescope style: Three-mirror anastigmat, Paul-Baker / Mersenne-Schmidt wide-angle
Diameter: 8.417 m (27.6 ft) physical; 8.360 m (27.4 ft) optical; 5.116 m (16.8 ft) inner
Secondary diameter: 3.420 m (1.800 m inner)
Tertiary diameter: 5.016 m (1.100 m inner)
Angular resolution: 0.7″ median seeing limit; 0.2″ pixel size
Collecting area: 35 square meters (376.7 sq ft)
Focal length: 10.31 m (f/1.23) overall; 9.9175 m (f/1.186) primary
Mounting: Altazimuth mount
Website: www.vro.org/, https://www.lsst.org/

Artist's conception of the LSST inside its dome. The LSST will carry out a deep, ten-year imaging survey in six broad optical bands over the main survey area of 18,000 square degrees.

The Vera C. Rubin Observatory, previously referred to as the Large Synoptic Survey Telescope (LSST), is an astronomical observatory currently under construction in Chile. Its main task will be carrying out a synoptic astronomical survey, the Legacy Survey of Space and Time (LSST). The word synoptic is derived from the Greek words σύν (syn "together") and ὄψις (opsis "view"), and describes observations that give a broad view of a subject at a particular time. The observatory is located on the El Peñón peak of Cerro Pachón, a 2,682-meter-high mountain in Coquimbo Region, in northern Chile, alongside the existing Gemini South and Southern Astrophysical Research Telescopes. The LSST Base Facility is located about 100 kilometres (62 mi) away by road, in the town of La Serena. The observatory is named for Vera Rubin, an American astronomer who pioneered discoveries about galaxy rotation rates.

The Rubin Observatory will house the Simonyi Survey Telescope, a wide-field reflecting telescope with an 8.4-meter primary mirror that will photograph the entire available sky every few nights. The telescope uses a novel three-mirror design, a variant of three-mirror anastigmat, which allows a compact telescope to deliver sharp images over a very wide 3.5-degree diameter field of view. Images will be recorded by a 3.2-gigapixel CCD imaging camera, the largest digital camera ever constructed.

The LSST was proposed in 2001, and construction of the mirror began (with private funds) in 2007. LSST then became the top-ranked large ground-based project in the 2010 Astrophysics Decadal Survey, and the project officially began construction 1 August 2014 when the National Science Foundation (NSF) authorized the FY2014 portion ($27.5 million) of its construction budget. Funding comes from the NSF, the United States Department of Energy, and private funding raised by the dedicated international non-profit organization, the LSST Corporation. Operations are under the management of the Association of Universities for Research in Astronomy (AURA).

Site construction began on 14 April 2015 with the ceremonial laying of the first stone. First light for the engineering camera is expected in July 2023, while system first light is expected in February 2024 and full survey operations are aimed to begin in October 2024, due to COVID-related schedule delays. LSST data is scheduled to become fully public after two years.

Name

In June 2019, the renaming of the Large Synoptic Survey Telescope (LSST) to the Vera C. Rubin Observatory was initiated by Rep. Eddie Bernice Johnson and Jenniffer González-Colón. The renaming was enacted into law on December 20, 2019. The official renaming was announced at the 2020 American Astronomical Society winter meeting. The observatory is named after Vera C. Rubin. The name honors Rubin and her colleagues' legacy to probe the nature of dark matter by mapping and cataloging billions of galaxies through space and time.

The telescope will be named the Simonyi Survey Telescope, to acknowledge the private donors Charles and Lisa Simonyi.

History

The L1 lens for the LSST, 2018

The LSST is the successor to a long tradition of sky surveys. These started as visually compiled catalogs in the 18th century, such as the Messier catalog. This was replaced by photographic surveys, starting with the 1885 Harvard Plate Collection, the National Geographic Society – Palomar Observatory Sky Survey, and others. By about 2000, the first digital surveys, such as the Sloan Digital Sky Survey (SDSS), began to replace the photographic plates of the earlier surveys.

LSST evolved from the earlier concept of the Dark Matter Telescope, mentioned as early as 1996. The fifth decadal report, Astronomy and Astrophysics in the New Millennium, was released in 2001, and recommended the "Large-Aperture Synoptic Survey Telescope" as a major initiative. Even at this early stage the basic design and objectives were set:

The Large-aperture Synoptic Survey Telescope (LSST) is a 6.5-m-class optical telescope designed to survey the visible sky every week down to a much fainter level than that reached by existing surveys. It will catalog 90 percent of the near-Earth objects larger than 300 m and assess the threat they pose to life on Earth. It will find some 10,000 primitive objects in the Kuiper Belt, which contains a fossil record of the formation of the solar system. It will also contribute to the study of the structure of the universe by observing thousands of supernovae, both nearby and at large redshift, and by measuring the distribution of dark matter through gravitational lensing. All the data will be available through the National Virtual Observatory... providing access for astronomers and the public to very deep images of the changing night sky.

Early development was funded by a number of small grants, with major contributions in January 2008 by software billionaires Charles and Lisa Simonyi and Bill Gates of $20 million and $10 million, respectively. $7.5 million was included in the U.S. President's FY2013 NSF budget request. The Department of Energy is funding construction of the digital camera component by the SLAC National Accelerator Laboratory, as part of its mission to understand dark energy.

In the 2010 decadal survey, LSST was ranked as the highest-priority ground-based instrument.

NSF funding for the rest of construction was authorized as of 1 August 2014. The lead organizations are:

As of May 2022, the project critical path was the camera installation, integration and testing.

In May 2018, Congress surprisingly appropriated much more funding than the telescope had asked for, in hopes of speeding up construction and operation. Telescope management was thankful but unsure this would help, since at the late stage of construction they were not cash-limited.

Overview

The Simonyi Survey Telescope design is unique among large telescopes (8 m-class primary mirrors) in having a very wide field of view: 3.5 degrees in diameter, or 9.6 square degrees. For comparison, both the Sun and the Moon, as seen from Earth, are 0.5 degrees across, or 0.2 square degrees. Combined with its large aperture (and thus light-collecting ability), this will give it a spectacularly large etendue of 319 m2∙degree2. This is more than three times the etendue of the largest-view existing telescopes, the Subaru Telescope with its Hyper Suprime Camera and Pan-STARRS, and more than an order of magnitude better than most large telescopes.

Optics

The LSST primary/tertiary mirror successfully cast, August 2008.
 
Optics of the LSST Telescope.

The Simonyi Survey Telescope is the latest in a long line of improvements giving telescopes larger fields of view. The earliest reflecting telescopes used spherical mirrors, which although easy to fabricate and test, suffer from spherical aberration; a very long focal length was needed to reduce spherical aberration to a tolerable level. Making the primary mirror parabolic removes spherical aberration on-axis, but the field of view is then limited by off-axis coma. Such a parabolic primary, with either a prime or Cassegrain focus, was the most common optical design up through the Hale telescope in 1949. After that, telescopes used mostly the Ritchey–Chrétien design, using two hyperbolic mirrors to remove both spherical aberration and coma, leaving only astigmatism, and giving a wider useful field of view. Most large telescopes since the Hale use this design—the Hubble and Keck telescopes are Ritchey–Chrétien, for example. LSST will use a three-mirror anastigmat to cancel astigmatism by employing three non-spherical mirrors. The result is sharp images over a very wide field of view, but at the expense of light-gathering power due to the large tertiary mirror.

The telescope's primary mirror (M1) is 8.4 meters (28 ft) in diameter, the secondary mirror (M2) is 3.4 meters (11.2 ft) in diameter, and the tertiary mirror (M3), inside the ring-like primary, is 5.0 meters (16 ft) in diameter. The secondary mirror is expected to be the largest convex mirror in any operating telescope, until surpassed by the ELT's 4.2 m secondary in about 2024. The second and third mirrors reduce the primary mirror's light-collecting area to 35 square meters (376.7 sq ft), equivalent to a 6.68-meter-diameter (21.9 ft) telescope. Multiplying this by the field of view produces an étendue of 336 m2∙degree2; the actual figure is reduced by vignetting.

The primary and tertiary mirrors (M1 and M3) are designed as a single piece of glass, the "M1M3 monolith". Placing the two mirrors in the same location minimizes the overall length of the telescope, making it easier to reorient quickly. Making them out of the same piece of glass results in a stiffer structure than two separate mirrors, contributing to rapid settling after motion.

The optics includes three corrector lenses to reduce aberrations. These lenses, and the telescope's filters, are built into the camera assembly. The first lens at 1.55 m diameter is the largest lens ever built, and the third lens forms the vacuum window in front of the focal plane.

Unlike many telescopes, the Rubin Observatory makes no attempt to compensate for dispersion in the atmosphere. Such correction, which requires re-adjusting an additional element in the optical train, would be very difficult in the 5 seconds allowed between pointings, plus is a technical challenge due to the extremely short focal length. As a result, shorter wavelength bands away from the zenith will have somewhat reduced image quality.

Camera

Life-size model of the LSST focal plane array. The array's diameter is 64 cm, and will provide over 3 gigapixels per image. The image of the Moon (30 arcminutes) is present to show the scale of the field of view. The model is held by Suzanne Jacoby, the Rubin Observatory communications director.

A 3.2-gigapixel prime focus digital camera will take a 15-second exposure every 20 seconds. Repointing such a large telescope (including settling time) within 5 seconds requires an exceptionally short and stiff structure. This in turn implies a very small f-number, which requires very precise focusing of the camera.

The 15-second exposures are a compromise to allow spotting both faint and moving sources. Longer exposures would reduce the overhead of camera readout and telescope re-positioning, allowing deeper imaging, but then fast moving objects such as near-Earth objects would move significantly during an exposure. Each spot on the sky is imaged with two consecutive 15 second exposures, to efficiently reject cosmic ray hits on the CCDs.

The camera focal plane is flat and 64 cm in diameter. The main imaging is performed by a mosaic of 189 CCD detectors, each with 16 megapixels. They are grouped into a 5×5 grid of "rafts", where the central 21 rafts contain 3×3 imaging sensors, while the four corner rafts contain only three CCDs each, for guiding and focus control. The CCDs provide better than 0.2 arcsecond sampling, and will be cooled to approximately −100 °C (173 K) to help reduce noise.
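
The quoted pixel count follows from that raft layout; a quick check (Python), treating each CCD as a nominal 16 megapixels as stated above (the camera's quoted 3.2 gigapixels is slightly higher because each CCD carries somewhat more than 16 megapixels):

    # Pixel-count check for the focal plane described above:
    # 21 science rafts of 3x3 CCDs, each CCD a nominal 16 megapixels.

    SCIENCE_RAFTS      = 21
    CCDS_PER_RAFT      = 3 * 3
    MEGAPIXELS_PER_CCD = 16

    science_ccds = SCIENCE_RAFTS * CCDS_PER_RAFT            # 189 CCDs
    gigapixels   = science_ccds * MEGAPIXELS_PER_CCD / 1000
    print(f"{science_ccds} science CCDs, about {gigapixels:.1f} gigapixels")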

The camera includes a filter located between the second and third lenses, and an automatic filter-changing mechanism. Although the camera has six filters (ugrizy) covering 330 to 1080 nm wavelengths, the camera's position between the secondary and tertiary mirrors limits the size of its filter changer. It can only hold five filters at a time, so each day one of the six must be chosen to be omitted for the following night.

Image data processing

Scan of Flammarion engraving taken with LSST in September 2020.

Allowing for maintenance, bad weather and other contingencies, the camera is expected to take over 200,000 pictures (1.28 petabytes uncompressed) per year, far more than can be reviewed by humans. Managing and effectively analyzing the enormous output of the telescope is expected to be the most technically difficult part of the project. In 2010, the initial computer requirements were estimated at 100 teraflops of computing power and 15 petabytes of storage, rising as the project collects data. By 2018, estimates had risen to 250 teraflops and 100 petabytes of storage.
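
The quoted yearly volume is consistent with simple arithmetic on the camera size, assuming roughly 2 bytes per raw pixel (an assumption made here for illustration):

    # Yearly raw-data volume check: ~3.2 gigapixels per image, an assumed 2 bytes per
    # pixel (16-bit samples), and the ~200,000 images per year quoted above.

    GIGAPIXELS_PER_IMAGE = 3.2
    BYTES_PER_PIXEL      = 2          # assumed
    IMAGES_PER_YEAR      = 200_000

    image_gb = GIGAPIXELS_PER_IMAGE * BYTES_PER_PIXEL     # ~6.4 GB per image
    year_pb  = image_gb * IMAGES_PER_YEAR / 1e6           # GB -> PB
    print(f"About {image_gb:.1f} GB per image, {year_pb:.2f} PB per year uncompressed")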

Once images are taken, they are processed according to three different timescales: prompt (within 60 seconds), daily, and annual.

The prompt products are alerts, issued within 60 seconds of observation, about objects that have changed brightness or position relative to archived images of that sky position. Transferring, processing, and differencing such large images within 60 seconds (previous methods took hours, on smaller images) is a significant software engineering problem by itself. Approximately 10 million alerts will be generated per night. Each alert will include the following:

  • Alert and database ID: IDs uniquely identifying this alert
  • The photometric, astrometric, and shape characterization of the detected source
  • 30×30 pixel (on average) cut-outs of the template and difference images (in FITS format)
  • The time series (up to a year) of all previous detections of this source
  • Various summary statistics ("features") computed of the time series

There is no proprietary period associated with alerts: they are available to the public immediately, since the goal is to quickly transmit nearly everything LSST knows about any given event, enabling downstream classification and decision making. LSST will generate an unprecedented rate of alerts, hundreds per second when the telescope is operating. Most observers will be interested in only a tiny fraction of these events, so the alerts will be fed to "event brokers" which forward subsets to interested parties. LSST will provide a simple broker and provide the full alert stream to external event brokers. The Zwicky Transient Facility will serve as a prototype of the LSST system, generating 1 million alerts per night.
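
As an illustration of how a downstream consumer might use the stream, the sketch below (Python) filters alerts on brightness change before forwarding them; the dictionary field names are hypothetical stand-ins, not the actual LSST alert schema.

    # Minimal sketch of an alert-broker-style filter. The alert dictionary layout
    # (field names, units) is a hypothetical stand-in, not the real LSST schema.

    def is_interesting(alert, min_brightening_mag=1.0):
        """Keep alerts whose source brightened by at least min_brightening_mag."""
        previous = alert.get("previous_magnitude")
        current = alert.get("current_magnitude")
        if previous is None or current is None:
            return False
        return (previous - current) >= min_brightening_mag  # smaller magnitude = brighter

    # Example: forward only the interesting subset of a batch of alerts.
    incoming = [
        {"alert_id": 1, "previous_magnitude": 21.0, "current_magnitude": 19.5},
        {"alert_id": 2, "previous_magnitude": 20.0, "current_magnitude": 19.9},
    ]
    forwarded = [a for a in incoming if is_interesting(a)]
    print([a["alert_id"] for a in forwarded])   # -> [1]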

Daily products, released within 24 hours of observation, comprise the images from that night, and the source catalogs derived from difference images. This includes orbital parameters for Solar System objects. Images will be available in two forms: Raw Snaps, or data straight from the camera, and Single Visit Images, which have been processed and include instrumental signature removal (ISR), background estimation, source detection, deblending and measurements, point spread function estimation, and astrometric and photometric calibration.

Annual release data products will be made available once a year, by re-processing the entire science data set to date. These include:

  • Calibrated images
  • Measurements of positions, fluxes, and shapes
  • Variability information
  • A compact description of light curves
  • A uniform reprocessing of the difference-imaging-based prompt data products
  • A catalog of roughly 6 million Solar System objects, with their orbits
  • A catalog of approximately 37 billion sky objects (20 billion galaxies and 17 billion stars), each with more than 200 attributes

The annual release will be computed partially by NCSA, and partially by IN2P3 in France.

LSST is reserving 10% of its computing power and disk space for user generated data products. These will be produced by running custom algorithms over the LSST data set for specialized purposes, using Application Program Interfaces (APIs) to access the data and store the results. This avoids the need to download, then upload, huge quantities of data by allowing users to use the LSST storage and computation capacity directly. It also allows academic groups to have different release policies than LSST as a whole.

An early version of the LSST image data processing software is being used by the Subaru telescope's Hyper Suprime-Cam instrument, a wide-field survey instrument with a sensitivity similar to LSST but one fifth the field of view: 1.8 square degrees versus the 9.6 square degrees of LSST.

Scientific goals

Comparison of primary mirrors of several optical telescopes. (The LSST, with its very large central hole, is near the center of the diagram).

LSST will cover about 18,000 deg2 of the southern sky with 6 filters in its main survey, with about 825 visits to each spot. The 5σ (SNR greater than 5) magnitude limits are expected to be r<24.5 in single images, and r<27.8 in the full stacked data.
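
The stacked limit is roughly consistent with the single-image limit plus the gain from co-adding many visits; under the idealized assumption of background-limited imaging, depth improves by about 2.5·log10(√N) magnitudes for N stacked exposures, as the sketch below (Python) shows.

    # Rough consistency check of the stacked depth, assuming ideal background-limited
    # co-addition: stacking N images improves the limiting magnitude by 2.5*log10(sqrt(N)).
    import math

    SINGLE_IMAGE_LIMIT_R = 24.5
    VISITS = 825     # approximate visits per field over the survey

    gain_mag = 2.5 * math.log10(math.sqrt(VISITS))
    print(f"Stacking gain: {gain_mag:.2f} mag -> roughly r < {SINGLE_IMAGE_LIMIT_R + gain_mag:.1f}, "
          f"close to the quoted r < 27.8")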

The main survey will use about 90% of the observing time. The remaining 10% will be used to obtain improved coverage for specific goals and regions. This includes very deep (r ∼ 26) observations, very short revisit times (roughly one minute), observations of "special" regions such as the Ecliptic, Galactic plane, and the Large and Small Magellanic Clouds, and areas covered in detail by multi-wavelength surveys such as COSMOS and the Chandra Deep Field South. Combined, these special programs will increase the total area to about 25,000 deg2.

Particular scientific goals of the LSST include:

Because of its wide field of view and high sensitivity, LSST is expected to be among the best prospects for detecting optical counterparts to gravitational wave events detected by LIGO and other observatories.

It is also hoped that the vast volume of data produced will lead to additional serendipitous discoveries.

NASA has been tasked by the US Congress with detecting and cataloging 90% of the NEO population of size 140 meters or greater. LSST, by itself, is estimated to be capable of detecting 62% of such objects, and according to the National Academy of Sciences, extending its survey from ten years to twelve would be the most cost-effective way of finishing the task.

Rubin Observatory has a program of Education and Public Outreach (EPO). Rubin Observatory EPO will serve four main categories of users: the general public, formal educators, citizen science principal investigators, and content developers at informal science education facilities. Rubin Observatory will partner with Zooniverse for a number of their citizen science projects.

Comparison with other sky surveys

Top-end assembly lowered by 500-ton crane

There have been many other optical sky surveys, some still on-going. For comparison, here are some of the main currently used optical surveys, with differences noted:

  • Photographic sky surveys, such as the National Geographic Society – Palomar Observatory Sky Survey and its digitized version, the Digitized Sky Survey. This technology is obsolete, with much less depth, and in general taken from locations with less than excellent views. However, these archives are still used since they span a rather large time interval—more than 100 years in some cases—and cover the entire sky. The plate scans reached a limit of R~18 and B~19.5 over 90% of the sky, and about one magnitude fainter over 50% of the sky.
  • The Sloan Digital Sky Survey (SDSS) (2000–2009) surveyed 14,555 square degrees of the northern hemisphere sky with a 2.5 meter telescope. It continues to the present day as a spectrographic survey. Its limiting photometric magnitude ranged from 20.5 to 22.2, depending on the filter.
  • Pan-STARRS (2010–present) is an ongoing sky survey using two wide-field 1.8 m Ritchey–Chrétien telescopes located at Haleakala in Hawaii. Until LSST begins operation, it will remain the best detector of near-Earth objects. Its coverage, 30,000 square degrees, is comparable to what LSST will cover. The single-image depth in the PS1 survey was between magnitudes 20.9 and 22.0, depending on the filter.
  • The DESI Legacy Imaging Surveys (2013–present) look at 14,000 square degrees of the northern and southern sky with the Bok 2.3-m telescope, the 4-meter Mayall telescope, and the 4-meter Victor M. Blanco Telescope. The Legacy Surveys make use of the Mayall z-band Legacy Survey, the Beijing-Arizona Sky Survey, and the Dark Energy Survey. The Legacy Surveys avoided the Milky Way, since they were primarily concerned with distant galaxies. The area of DES (5,000 square degrees) is entirely contained within the anticipated survey area of LSST in the southern sky. Its exposures typically reach magnitude 23-24.
  • Gaia is an ongoing space-based survey of the entire sky since 2014, whose primary goal is extremely precise astrometry of roughly two billion stars, quasars, galaxies, and Solar System objects. Its collecting area of 0.7 m2 does not allow observation of objects as faint as those in other surveys, but the location of each object observed is known with far greater precision. While not taking exposures in the traditional sense, it detects objects down to about magnitude 21.
  • The Zwicky Transient Facility (2018–present) is a similar, rapid, wide-field survey to detect transient events. The telescope has an even larger field of view (47 square degrees; 5× the field), but a significantly smaller aperture (1.22 m; 1/30 the area). It is being used to develop and test the LSST automated alert software. Its exposures typically reach magnitude 20-21.
  • The Space Surveillance Telescope (2011–present) is a similar rapid wide-field survey telescope used primarily for military applications, with secondary civil applications including space debris and NEO detection and cataloguing.

Construction progress

Construction progress of the LSST observatory building at Cerro Pachón as of September 2019

The Cerro Pachón site was selected in 2006. The main factors were the number of clear nights per year, seasonal weather patterns, and the quality of images as seen through the local atmosphere (seeing). The site also needed to have an existing observatory infrastructure, to minimize costs of construction, and access to fiber optic links, to accommodate the 30 terabytes of data LSST will produce each night.

As of February 2018, construction was well underway. The shell of the summit building is complete, and 2018 saw the installation of major equipment, including HVAC, the dome, mirror coating chamber, and the telescope mount assembly. It also saw the expansion of the AURA base facility in La Serena and the summit dormitory shared with other telescopes on the mountain.

By February 2018, the camera and telescope shared the critical path. The main risk was deemed to be whether sufficient time was allotted for system integration.

As of 2017 the project remained within budget, although the budget contingency was tight.

In March 2020, work on the summit facility, and the main camera at SLAC, was suspended due to the COVID-19 pandemic, though work on software continued. During this time, the commissioning camera arrived at the base facility and is being tested there. It will be moved to the summit when it is safe to do so.

Mirrors

The primary mirror, the most critical and time-consuming part of a large telescope's construction, was made over a 7-year period by the University of Arizona's Steward Observatory Mirror Lab. Construction of the mold began in November 2007, mirror casting was begun in March 2008, and the mirror blank was declared "perfect" at the beginning of September 2008. In January 2011, both M1 and M3 figures had completed generation and fine grinding, and polishing had begun on M3.

The mirror was formally accepted on 13 February 2015, then placed in the mirror transport box and stored in an airplane hangar. In October 2018, it was moved back to the mirror lab and integrated with the mirror support cell. It went through additional testing in January/February 2019, then was returned to its shipping crate. In March 2019, it was sent by truck to Houston, was placed on a ship for delivery to Chile, and arrived on the summit in May. There it will be re-united with the mirror support cell and coated.

The coating chamber used to coat the mirrors after their arrival itself reached the summit in November 2018.

The secondary mirror was manufactured by Corning of ultra low expansion glass and coarse-ground to within 40 μm of the desired shape. In November 2009, the blank was shipped to Harvard University for storage until funding to complete it was available. On 21 October 2014, the secondary mirror blank was delivered from Harvard to Exelis (now a subsidiary of Harris Corporation) for fine grinding. The completed mirror was delivered to Chile on 7 December 2018, and was coated in July 2019.

Building

Cutaway rendering of the telescope, dome, and support building.

Site excavation began in earnest on 8 March 2011, and the site had been leveled by the end of 2011. Also during that time, the design progressed, with significant improvements to the mirror support system, stray-light baffles, wind screen, and calibration screen.

In 2015, a large amount of broken rock and clay was found under the site of the support building adjacent to the telescope. This caused a 6-week construction delay while it was dug out and the space filled with concrete. This did not affect the telescope proper or its dome, whose much more important foundations were examined more thoroughly during site planning.

The building was declared substantially complete in March 2018. The dome was expected to be complete in August 2018, but a picture from May 2019 showed it still incomplete. The (still incomplete) Rubin Observatory dome first rotated under its own power in November 2019.

Telescope mount assembly

Telescope Mount Assembly of the 8.4-meter Simonyi Survey Telescope at Vera C. Rubin Observatory, under construction atop Cerro Pachón in Chile.

The telescope mount, and the pier on which it sits, are substantial engineering projects in their own right. The main technical problem is that the telescope must slew 3.5 degrees to the adjacent field and settle within four seconds. This requires a very stiff pier and telescope mount, with very high speed slew and acceleration (10°/sec and 10°/sec2, respectively). The basic design is conventional: an altitude over azimuth mount made of steel, with hydrostatic bearings on both axes, mounted on a pier which is isolated from the dome foundations. However, the LSST pier is unusually large (16 m diameter) and robust (1.25 m thick walls), and mounted directly to virgin bedrock, where care was taken during site excavation to avoid using explosives that would crack it. Other unusual design features are linear motors on the main axes and a recessed floor on the mount. This allows the telescope to extend slightly below the azimuth bearings, giving it a very low center of gravity.
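
The slew requirement above can be sanity-checked with an idealized triangular (accelerate-then-decelerate) motion profile; this is a minimal sketch, not the observatory's actual control model, using only the 3.5° move, 10°/s rate limit and 10°/s² acceleration limit quoted in the paragraph.

  import math

  SLEW_DEG = 3.5     # move to the adjacent field, degrees
  MAX_RATE = 10.0    # slew rate limit, deg/s
  MAX_ACCEL = 10.0   # acceleration limit, deg/s^2
  BUDGET_S = 4.0     # slew-and-settle requirement, seconds

  # Peak rate reached by a triangular profile covering SLEW_DEG
  peak_rate = math.sqrt(MAX_ACCEL * SLEW_DEG)   # about 5.9 deg/s, under the 10 deg/s limit

  if peak_rate <= MAX_RATE:
      move_time = 2 * math.sqrt(SLEW_DEG / MAX_ACCEL)        # about 1.2 s
  else:
      # Trapezoidal profile: accelerate, cruise at MAX_RATE, decelerate
      move_time = MAX_RATE / MAX_ACCEL + SLEW_DEG / MAX_RATE

  print(f"Idealized move time:     {move_time:.2f} s")
  print(f"Remaining settling time: {BUDGET_S - move_time:.2f} s")

Even this idealized move uses well under half the four-second budget; the remainder is margin for damping out vibration, which is why the stiffness of the pier and mount, rather than raw motor speed, dominates the design.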

The contract for the Telescope Mount Assembly was signed in August 2014. It passed its acceptance tests in 2018 and arrived at the construction site in September 2019.

Camera construction

In August 2015, the LSST Camera project, which is separately funded by the U.S. Department of Energy, passed its "critical decision 3" design review, with the review committee recommending DoE formally approve start of construction. On August 31, the approval was given, and construction began at SLAC. As of September 2017, construction of the camera was 72% complete, with sufficient funding in place (including contingencies) to finish the project. By September 2018, the cryostat was complete, the lenses ground, and 12 of the 21 needed rafts of CCD sensors had been delivered. As of September 2020, the entire focal plane was complete and undergoing testing. By October 2021, the last of the six filters needed by the camera had been finished and delivered. By November 2021, the entire camera had been cooled down to its required operating temperature, so final testing could begin.
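
For context on the raft count, a short sketch follows. It assumes each raft is a 3 × 3 mosaic of 4096 × 4096-pixel CCDs, figures not stated in this section; with the 21 science rafts mentioned above, that yields the camera's often-quoted total of roughly 3.2 gigapixels.

  RAFTS = 21                  # science rafts quoted above
  CCDS_PER_RAFT = 3 * 3       # assumed 3x3 mosaic of sensors per raft
  PIXELS_PER_CCD = 4096 ** 2  # assumed 4k x 4k sensors

  total_pixels = RAFTS * CCDS_PER_RAFT * PIXELS_PER_CCD
  print(f"Approximate focal-plane pixel count: {total_pixels / 1e9:.1f} gigapixels")  # ~3.2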

Before the final camera is installed, a smaller and simpler version (the Commissioning Camera, or ComCam) will be used "to perform early telescope alignment and commissioning tasks, complete engineering first light, and possibly produce early usable science data".

Data transport

The data must be transported from the camera to facilities at the summit, then to the base facilities, and finally to the LSST Data Facility at the National Center for Supercomputing Applications (NCSA) in the United States. This transfer must be very fast (100 Gbit/s or better) and reliable, since NCSA is where the data will be processed into scientific data products, including real-time alerts of transient events. The data travel over multiple fiber optic cables from the base facility in La Serena to Santiago, then via two redundant routes to Miami, where they connect to existing high-speed infrastructure. These two redundant links were activated in March 2018 by the AmLight consortium.
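
As a rough illustration of why a link of this class is needed, the sketch below estimates how long the roughly 30 terabytes produced each night (quoted earlier) would take to move at various line rates; protocol overhead and link sharing are ignored, so these are best-case figures.

  NIGHTLY_DATA_TB = 30.0   # approximate nightly data volume, terabytes
  BITS_PER_TB = 8e12       # 1 TB (decimal) = 8e12 bits

  for rate_gbps in (1, 10, 40, 100):
      seconds = NIGHTLY_DATA_TB * BITS_PER_TB / (rate_gbps * 1e9)
      print(f"{rate_gbps:>3} Gbit/s -> {seconds / 3600:5.1f} hours")

At 100 Gbit/s the whole night's data could in principle be moved in well under an hour, leaving headroom for the real-time alert stream; at 1 Gbit/s it could not be moved even within a full day.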

Since the data transfer crosses international borders, many different groups are involved. These include the Association of Universities for Research in Astronomy (AURA, Chile and the USA), REUNA (Chile), Florida International University (USA), AmLightExP (USA), RNP (Brazil), and University of Illinois at Urbana–Champaign NCSA (USA), all of which participate in the LSST Network Engineering Team (NET). This collaboration designs and delivers end-to-end network performance across multiple network domains and providers.

Possible impact of satellite constellations

A 2020 study by the European Southern Observatory estimated that 30% to 50% of the exposures taken around twilight with the Rubin Observatory would be severely affected by satellite constellations. Survey telescopes have a large field of view and study short-lived phenomena such as supernovae and asteroids, so mitigation methods that work on other telescopes may be less effective. The images would be affected especially during twilight (50%) and at the beginning and end of the night (30%). For bright trails, the complete exposure could be ruined by a combination of saturation, crosstalk (far-away pixels gaining signal due to the nature of CCD electronics), and ghosting (internal reflections within the telescope and camera) caused by the satellite trail, affecting an area of the sky significantly larger than the satellite path itself. For fainter trails, only about a quarter of the image would be lost. A previous study by the Rubin Observatory found an impact of 40% at twilight, with only nights in the middle of winter unaffected.

Possible approaches to this problem include reducing the number or brightness of the satellites, upgrading the telescope's CCD camera system, or both. Observations of Starlink satellites showed a decrease in trail brightness for darkened satellites, but the decrease is not enough to mitigate the effect on wide-field surveys like the one conducted by the Rubin Observatory. SpaceX is therefore introducing a sunshade on newer satellites to keep the portions of the satellite visible from the ground out of direct sunlight. The objective is to keep the satellites below 7th magnitude, to avoid saturating the detectors; this limits the problem to the trail of the satellite rather than the whole image.
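
The 7th-magnitude target is easier to interpret using the standard astronomical relation that a difference of Δm magnitudes corresponds to a flux ratio of 10^(0.4·Δm). The sketch below compares a magnitude-7 satellite with a magnitude-24 source, roughly the depth of the deep survey exposures mentioned earlier; the faint-source magnitude is chosen only for illustration.

  def flux_ratio(mag_bright: float, mag_faint: float) -> float:
      """How many times brighter the first source is than the second."""
      return 10 ** (0.4 * (mag_faint - mag_bright))

  SATELLITE_MAG = 7.0      # SpaceX brightness target quoted above
  FAINT_SOURCE_MAG = 24.0  # illustrative faint-source magnitude

  ratio = flux_ratio(SATELLITE_MAG, FAINT_SOURCE_MAG)
  print(f"A mag-7 satellite is about {ratio:.1e} times brighter than a mag-24 source.")

Even a darkened satellite is therefore millions of times brighter than the faint sources being surveyed; keeping it below 7th magnitude only prevents the detector from saturating, so the damage stays confined to the trail itself rather than spreading across the frame.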

Gallery

Saturday, July 30, 2022

Psychosynthesis

From Wikipedia, the free encyclopedia

Psychosynthesis is an approach to psychology that expands the boundaries of the field by identifying a deeper center of identity, which is the postulate of the Self. It considers each individual unique in terms of purpose in life, and places value on the exploration of human potential. The approach combines spiritual development with psychological healing by including the life journey of an individual or their unique path to self-realization.

The integrative framework of psychosynthesis is based on Sigmund Freud's theory of the unconscious and addresses psychological distress and intra-psychic and interpersonal conflicts.

Development

Psychosynthesis was developed by the Italian psychiatrist Roberto Assagioli, who was a student of Freud and Bleuler. He compared psychosynthesis with the prevailing thinking of the day, contrasting it, for example, with existential psychology, but unlike the latter he considered loneliness not to be "either ultimate or essential".

Assagioli asserted that "the direct experience of the self, of pure self-awareness...—is true." Spiritual goals of "self-realization" and the "interindividual psychosynthesis"—of "social integration...the harmonious integration of the individual into ever larger groups up to the 'one humanity'"—were central to Assagioli's theory. Psychosynthesis was not intended to be a school of thought or an exclusive method. However, many conferences and publications had it as a central theme, and centres were formed in Italy and the United States in the 1960s.

Psychosynthesis departed from the empirical foundations of psychology because it studied a person as a personality and a soul, but Assagioli continued to insist that it was scientific. He developed therapeutic methods beyond those in psychoanalysis. Although the unconscious is an important part of his theory, Assagioli was careful to maintain a balance with rational, conscious therapeutical work.

Assagioli was not the first to use the term "psychosynthesis". The earliest use was by James Jackson Putnam, who used it as the name of his electroconvulsive therapy. The term was also used by C. G. Jung and A. R. Orage, who were both more aligned to Assagioli's use of the term than Putnam's use. C. G. Jung, in comparing his goals to those of Sigmund Freud, wrote, "If there is a 'psychoanalysis' there must also be a 'psychosynthesis which creates future events according to the same laws'." A. R. Orage, who was the publisher of the influential journal, The New Age, used the term as well, but hyphenated it (psycho-synthesis). Orage formed an early psychology study group (which included Maurice Nicoll who later studied with Carl Jung) and concluded that what humanity needed was not psychoanalysis, but psycho-synthesis. The term was also used by Bezzoli. Freud, however, was opposed to what he saw as the directive element in Jung's approach to psychosynthesis, and Freud argued for a spontaneous synthesis on the patient's part: "As we analyse...the great unity which we call his ego fits into itself all the instinctual impulses which before had been split off and held apart from it. The psycho-synthesis is thus achieved in analytic treatment without our intervention, automatically and inevitably."

Origins

In 1909, C.G. Jung wrote to Sigmund Freud of "a very pleasant and perhaps valuable acquaintance, our first Italian, a Dr. Assagioli from the psychiatric clinic in Florence". Later however, this same Roberto Assagioli (1888 – 1974) wrote a doctoral dissertation, "La Psicosintesi," in which he began to move away from Freud's psychoanalysis toward what he called psychosynthesis:

A beginning of my conception of psychosynthesis was contained in my doctoral thesis on Psychoanalysis (1910), in which I pointed out what I considered to be some of the limitations of Freud's views.

In developing psychosynthesis, Assagioli agreed with Freud that healing childhood trauma and developing a healthy ego were necessary aims of psychotherapy, but Assagioli believed that human growth could not be limited to this alone. A student of philosophical and spiritual traditions of both East and West, Assagioli sought to address human growth as it proceeded beyond the norm of the well-functioning ego; he wished to support the fruition of human potential—what Abraham Maslow later termed self-actualization—into the spiritual or transpersonal dimensions of human experience as well.

Assagioli envisioned an approach to the human being that could address both the process of personal growth—of personality integration and self-actualization—as well as transpersonal development—that dimension glimpsed for example in peak experiences (Maslow) of inspired creativity, spiritual insight, and unitive states of consciousness. Psychosynthesis recognizes the process of self-realization, of contact and response with one's deepest callings and directions in life, which can involve either or both personal and transpersonal development.

Psychosynthesis is therefore one of the earliest forerunners of both humanistic psychology and transpersonal psychology, even preceding Jung's break with Freud by several years. Assagioli's conception has an affinity with existential-humanistic psychology and other approaches that attempt to understand the nature of the healthy personality, personal responsibility, and choice, and the actualization of the personal self. Similarly, his conception is related to the field of transpersonal psychology (with its focus on higher states of consciousness), spirituality, and human experience beyond the individual self. Assagioli served on the board of editors for both the Journal of Humanistic Psychology and the Journal of Transpersonal Psychology.

Assagioli presents two major theoretical models in his seminal book, Psychosynthesis, models that have remained fundamental to psychosynthesis theory and practice:

  1. A diagram and description of the human person
  2. A stage theory of the process of psychosynthesis (see below).

Aims

In Psychosomatic Medicine and Bio-psychosynthesis, Assagioli states that the principal aims and tasks of psychosynthesis are:

  1. the elimination of the conflicts and obstacles, conscious and unconscious, that block [the complete and harmonious development of the human personality]
  2. the use of active techniques to stimulate the psychic functions still weak and immature.

In his major book, Psychosynthesis: A Collection of Basic Writings (1965), Assagioli writes of three aims of psychosynthesis:

Let us examine whether and how it is possible to solve this central problem of human life, to heal this fundamental infirmity of man. Let us see how he may free himself from this enslavement and achieve an harmonious inner integration, true Self-realization, and right relationships with others. (p. 21)

Model of the person

Psychosynthesis Egg Diagram
 
1: Lower Unconscious
2: Middle Unconscious
3: Higher Unconscious
4: Field of Consciousness
5: Conscious Self or "I"
6: Higher Self
7: Collective Unconscious

At the core of psychosynthesis theory is the Egg Diagram, which maps the human psyche into different distinct and interconnected levels.

Lower unconscious

For Assagioli, 'the lower unconscious, which contains one's personal psychological past in the form of repressed complexes, long-forgotten memories and dreams and imaginations', stood at the base of the diagram of the mind.

The lower unconscious is that realm of the person to which is relegated the experiences of shame, fear, pain, despair, and rage associated with primal wounding suffered in life. One way to think of the lower unconscious is that it is a particular bandwidth of one's experiential range that has been broken away from consciousness. It comprises that range of experience related to the threat of personal annihilation, of destruction of self, of nonbeing, and more generally, of the painful side of the human condition. As long as this range of experience remains unconscious, the person will have a limited ability to be empathic with self or others in the more painful aspects of human life.

At the same time, 'the lower unconscious merely represents the most primitive part of ourselves...It is not bad, it is just earlier'. Indeed, 'the "lower" side has many attractions and great vitality', and – as with Freud's id, or Jung's shadow – the conscious goal must be to 'achieve a creative tension' with the lower unconscious.

Middle unconscious

The middle unconscious is a sector of the person whose contents, although unconscious, nevertheless support normal conscious functioning in an ongoing way (thus it is illustrated as most immediate to "I"). It is the capacity to form patterns of skills, behaviors, feelings, attitudes, and abilities that can function without conscious attention, thereby forming the infrastructure of one's conscious life.

The function of the middle unconscious can be seen in all spheres of human development, from learning to walk and talk, to acquiring languages, to mastering a trade or profession, to developing social roles. Anticipating today's neuroscience, Assagioli even referred to "developing new neuromuscular patterns". All such elaborate syntheses of thought, feeling, and behavior are built upon learnings and abilities that must eventually operate unconsciously.

For Assagioli, 'Human healing and growth that involves work with either the middle or the lower unconscious is known as personal psychosynthesis'.

Higher unconscious

Assagioli termed 'the sphere of aesthetic experience, creative inspiration, and higher states of consciousness...the higher unconscious '. The higher unconscious (or superconscious) denotes "our higher potentialities which seek to express themselves, but which we often repel and repress" (Assagioli). As with the lower unconscious, this area is by definition not available to consciousness, so its existence is inferred from moments in which contents from that level affect consciousness. Contact with the higher unconscious can be seen in those moments, termed peak experiences by Maslow, which are often difficult to put into words, experiences in which one senses deeper meaning in life, a profound serenity and peace, a universality within the particulars of existence, or perhaps a unity between oneself and the cosmos. This level of the unconscious represents an area of the personality that contains the "heights" overarching the "depths" of the lower unconscious. As long as this range of experience remains unconscious – in what Desoille termed '"repression of the sublime"' – the person will have a limited ability to be empathic with self or other in the more sublime aspects of human life.

The higher unconscious thus represents 'an autonomous realm, from where we receive our higher intuitions and inspirations – altruistic love and will, humanitarian action, artistic and scientific inspiration, philosophic and spiritual insight, and the drive towards purpose and meaning in life'. It may be compared to Freud's superego, seen as 'the higher, moral, supra-personal side of human nature...a higher nature in man', incorporating 'Religion, morality, and a social sense – the chief elements in the higher side of man...putting science and art to one side'.

Subpersonalities

Subpersonalities based in the personal unconscious form a central strand in psychosynthesis thinking. 'One of the first people to have started really making use of subpersonalities for therapy and personal growth was Roberto Assagioli', and psychosynthesis holds that 'subpersonalities exist at various levels of organization, complexity, and refinement' throughout the mind. A five-fold process of recognition, acceptance, co-ordination, integration, and synthesis 'leads to the discovery of the Transpersonal Self, and the realization that that is the final truth of the person, not the subpersonalities'.

Some subpersonalities may be seen 'as psychological contents striving to emulate an archetype...degraded expressions of the archetypes of higher qualities'. Others will resist the process of integration and will 'take the line that it is difficult being alive, and it is far easier – and safer – to stay in an undifferentiated state'.

"I"

Psychosynthesis Star Diagram
formulated by Roberto Assagioli

"I" is the direct "reflection" or "projection" of Self (Assagioli) and the essential being of the person, distinct but not separate from all contents of experience. "I" possesses the two functions of consciousness, or awareness, and will, whose field of operation is represented by the concentric circle around "I" in the oval diagram – Personal Will.

Psychosynthesis suggests that "we can experience the will as having four stages. The first stage could be described as 'having no will'", and might perhaps be linked with the hegemony of the lower unconscious. "The next stage of the will is understanding that 'will exists'. We might still feel that we cannot actually do it, but we know...it is possible". "Once we have developed our will, at least to some degree, we pass to the next stage which is called 'having a will'", and thereafter "in psychosynthesis we call the fourth and final stage of the evolution of the will in the individual 'being will'" – which then "relates to the 'I' or self...draws energy from the transpersonal self".

The "I" is placed at the center of the field of awareness and will in order to indicate that "I" is the one who has consciousness and will. It is "I" who is aware of the psyche-soma contents as they pass in and out of awareness; the contents come and go, while "I" may remain present to each experience as it arises. But "I" is dynamic as well as receptive: "I" has the ability to affect the contents of awareness and can even affect awareness itself, by choosing to focus awareness (as in many types of meditation), expand it, or contract it.

Since "I" is distinct from any and all contents and structures of experience, "I" can be thought of as not a "self" at all but as "noself". That is, "I" is never the object of experience. "I" is who can experience, for example, the ego disintegrating and reforming, who can encounter emptiness and fullness, who can experience utter isolation or cosmic unity, who can engage any and all arising experiences. "I" is not any particular experience but the experiencer, not object but subject, and thus cannot be seen or grasped as an object of consciousness. This "noself" view of "I" can be seen in Assagioli's discussion of "I" as a reflection of Self: "The reflection appears to be self-existent but has, in reality, no autonomous substantiality. It is, in other words, not a new and different light but a projection of its luminous source". The next section describes this "luminous source", Self.

Self

Pervading all the areas mapped by the oval diagram, distinct but not separate from all of them, is Self (which has also been called Higher Self or Transpersonal Self). The concept of Self points towards a source of wisdom and guidance within the person, a source which can operate quite beyond the control of the conscious personality. Since Self pervades all levels, an ongoing lived relationship with Self—Self-realization—may lead anywhere on the diagram as one's direction unfolds (this is one reason for not illustrating Self at the top of the diagram, a representation that tends to give the impression that Self-realization leads only into the higher unconscious). Relating to Self may lead for example to engagement with addictions and compulsions, to the heights of creative and religious experience, to the mysteries of unitive experience, to issues of meaning and mortality, to grappling with early childhood wounding, to discerning a sense of purpose and meaning in life.

The relationship of "I" and Self is paradoxical. Assagioli was clear that "I" and Self were from one point of view, one. He wrote, "There are not really two selves, two independent and separate entities. The Self is one". Such a nondual unity is a fundamental aspect of this level of experience. But Assagioli also understood that there could be a meaningful relationship between the person and Self as well:

Accounts of religious experiences often speak of a "call" from God, or a "pull" from some Higher Power; this sometimes starts a "dialogue" between the man [or woman] and this "higher Source"...

Assagioli did not of course limit this relationship and dialogue to those dramatic experiences of "call" seen in the lives of great men and women throughout history. Rather, the potential for a conscious relationship with Self exists for every person at all times and may be assumed to be implicit in every moment of every day and in every phase of life, even when one does not recognize this. Whether within one's private inner world of feelings, thoughts, and dreams, or within one's relationships with other people and the natural world, a meaningful ongoing relationship with Self may be lived.

Stages

Writing about the model of the person presented above, Assagioli states that it is a "structural, static, almost 'anatomical' representation of our inner constitution, while it leaves out its dynamic aspect, which is the most important and essential one". Thus he follows this model immediately with a stage theory outlining the process of psychosynthesis. This scheme can be called the "stages of psychosynthesis", and is presented here.

It is important to note that although the linear progression of the following stages makes logical sense, they may not in fact be experienced in this sequence; they are not a ladder up which one climbs, but aspects of a single process. Further, one never outgrows these stages; any stage can be present at any moment throughout the process of psychosynthesis, Assagioli acknowledging 'persisting traits belonging to preceding psychological ages' and the perennial possibility of 'retrogression to primitive stages'.

The stages of Psychosynthesis may be tabulated as follows:

  1. Thorough knowledge of one's personality.
  2. Control of its various elements.
  3. Realization of one's true Self—the discovery or creation of a unifying center.
  4. Psychosynthesis: the formation or reconstruction of the personality around a new center.

Methods

Psychosynthesis was regarded by Assagioli as an orientation and a general approach to the whole human being, existing apart from any of its particular concrete applications. This approach allows a wide variety of techniques and methods to be used within the psychosynthesis context. 'Dialogue, Gestalt techniques, dream work, guided imagery, affirmations, and meditation are all powerful tools for integration', but 'the attitude and presence of the guide are of far greater importance than the particular methods used'. Sand tray, art therapy, journaling, drama therapy, and body work; cognitive-behavioral techniques; and object relations, self psychology, and family systems approaches may all be used in different contexts, from individual and group psychotherapy to meditation and self-help groups. Psychosynthesis offers an overall view which can help orient oneself within the vast array of different modalities available today, and it can be applied either for therapy or for self-actualization.

Recently, two psychosynthesis techniques were shown to help student sojourners in their acculturation process. First, the self-identification exercise eased anxiety, an aspect of culture shock. Second, the subpersonality model aided students in integrating a new social identity. In another recent study, the subpersonality model was shown to be an effective intervention for aiding creative expression, helping people connect to different levels of their unconscious creativity. Most recently, psychosynthesis psychotherapy has been reported to support personal and spiritual growth in self-identified atheists.

One broad classification of the techniques used involves the following headings:

  • Analytical: 'to help identify blocks and enable the exploration of the unconscious'. Psychosynthesis stresses 'the importance of using obstacles as steps to growth' – 'blessing the obstacle...blocks are our helpers'.
  • Mastery: 'the eight psychological functions need to be gradually retrained to produce permanent positive change'.
  • Transformation: 'the refashioning of the personality around a new centre'.
  • Grounding: 'into the concrete terms of daily life'.
  • Relational: 'to cultivate qualities such as love, openness and empathy'.

Psychosynthesis allows practitioners the recognition and validation of an extensive range of human experience: the vicissitudes of developmental difficulties and early trauma; the struggle with compulsions, addictions, and the trance of daily life; the confrontation with existential identity, choice, and responsibility; levels of creativity, peak performance, and spiritual experience; and the search for meaning and direction in life. None of these important spheres of human existence need be reduced to the other, and each can find its right place in the whole. This means that no matter what type of experience is engaged, and no matter what phase of growth is negotiated, the complexity and uniqueness of the person may be respected—a fundamental principle in any application of psychosynthesis.

Criticism

In the December 1974 issue of Psychology Today, Assagioli was interviewed by Sam Keen and was asked to comment on the limits of psychosynthesis. He answered paradoxically: "The limit of psychosynthesis is that it has no limits. It is too extensive, too comprehensive. Its weakness is that it accepts too much. It sees too many sides at the same time and that is a drawback."

Psychosynthesis "has always been on the fringes of the 'official' therapy world" and it "is only recently that the concepts and methods of psychoanalysis and group analysis have been introduced into the training and practice of psychosynthesis psychotherapy".

As a result, the movement has been at times exposed to the dangers of fossilisation and cultism, so that on occasion, having "started out reflecting the high-minded spiritual philosophy of its founder, [it] became more and more authoritarian, more and more strident in its conviction that psychosynthesis was the One Truth".

A more technical danger is that premature concern with the transpersonal may hamper dealing with personal psychosynthesis: for example, "evoking serenity ... might produce a false sense of well-being and security". Practitioners have noted how "inability to ... integrate the superconscious contact with everyday experience easily leads to inflation", and have spoken of "an 'Icarus complex', the tendency whereby spiritual ambition fails to take personality limitations into account and causes all sorts of psychological difficulties".

Fictional analogies

Stephen Potter's "Lifemanship Psycho-Synthesis Clinic", where you may "find the psycho-synthesist lying relaxed on the couch while the patient will be encouraged to walk up and down" would seem a genuine case of "parallel evolution", since its clear targets, as "the natural antagonists...of the lifeplay, are the psychoanalysts".

Operator (computer programming)

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Operator_(computer_programmin...