
Thursday, July 7, 2022

Multiple trace theory

From Wikipedia, the free encyclopedia

Multiple trace theory is a memory consolidation model advanced as an alternative to strength theory. It posits that each time some information is presented to a person, it is neurally encoded in a unique memory trace composed of a combination of its attributes. Support for this theory came in the 1960s from empirical findings that people could remember specific attributes about an object without remembering the object itself. The mode in which the information is presented and subsequently encoded can be flexibly incorporated into the model. A memory trace is distinct from all others resembling it due to differences in some aspects of the item's attributes, and all memory traces incorporated since birth are combined into a multiple-trace representation in the brain. In memory research, a mathematical formulation of this theory can successfully explain empirical phenomena observed in recognition and recall tasks.

Attributes

The attributes an item possesses form its trace and can fall into many categories. When an item is committed to memory, information from each of these attributional categories is encoded into the item's trace. There may be a kind of semantic categorization at play, whereby an individual trace is incorporated into overarching concepts of an object. For example, when a person sees a pigeon, a trace is added to the "pigeon" cluster of traces within his or her mind. This new "pigeon" trace, while distinguishable and divisible from other instances of pigeons that the person may have seen within his or her life, serves to support the more general and overarching concept of a pigeon.

Physical

Physical attributes of an item encode information about physical properties of a presented item. For a word, this could include color, font, spelling, and size, while for a picture, the equivalent aspects could be shapes and colors of objects. It has been shown experimentally that people who are unable to recall an individual word can sometimes recall the first or last letter or even rhyming words, all aspects encoded in the physical orthography of a word's trace. Even when an item is not presented visually, when encoded, it may have some physical aspects based on a visual representation of the item.

Contextual

Contextual attributes are a broad class of attributes that define the internal and external features that are simultaneous with presentation of the item. Internal context is a sense of the internal network that a trace evokes. This may range from aspects of an individual's mood to other semantic associations the presentation of the word evokes. External context, on the other hand, encodes information about the spatial and temporal aspects of the situation in which the information is presented. This may reflect time of day or weather, for example. Spatial attributes can refer both to the physical environment and to an imagined environment. The method of loci, a mnemonic strategy incorporating imagined spatial positions, involves assigning relative spatial positions to different memorized items and then "walking through" these assigned positions to remember the items.

Modal

Modality attributes carry information about the method by which an item was presented. The most frequent modalities in an experimental setting are auditory and visual. In practice, any sensory modality can be used.

Classifying

These attributes refer to the categorization of items presented. Items that fit into the same categories will have the same class attributes. For example, if the item "touchdown" were presented, it would evoke the overarching concept of "football" or perhaps, more generally, "sports", and it would likely share class attributes with "endzone" and other elements that fit into the same concept. A single item may fit into different concepts at the time it is presented depending on other attributes of the item, like context. For example, the word "star" might fall into the class of astronomy after visiting a space museum or a class with words like "celebrity" or "famous" after seeing a movie.

Mathematical formulation

The mathematical formulation of traces allows for a model of memory as an ever-growing matrix that continuously receives and incorporates information in the form of vectors of attributes. Multiple trace theory states that every item ever encoded, from birth to death, will exist in this matrix as multiple traces. This is done by giving every possible attribute some numerical value to classify it as it is encoded, so each encoded memory will have a unique set of numerical attributes.

Matrix definition of traces

Assigning numerical values to all possible attributes makes it convenient to construct a column vector representation of each encoded item. This vector representation can also be fed into computational models of the brain like neural networks, which take vectorial "memories" as inputs and simulate their biological encoding through neurons.

Formally, one can denote an encoded memory by numerical assignments to all of its possible attributes. If two items are perceived to have the same color or experienced in the same context, the numbers denoting their color and contextual attributes, respectively, will be relatively close. Suppose we encode a total of L attributes anytime we see an object. Then, when a memory is encoded, it can be written as m1 with L total numerical entries in a column vector:

$$ m_1 = \begin{bmatrix} m_1(1) \\ m_1(2) \\ \vdots \\ m_1(L) \end{bmatrix}. $$

A subset of the L attributes will be devoted to contextual attributes, a subset to physical attributes, and so on. One underlying assumption of multiple trace theory is that, when we construct multiple memories, we organize the attributes in the same order. Thus, we can similarly define vectors m2, m3, ..., mn to account for n total encoded memories. Multiple trace theory states that these memories come together in our brain to form a memory matrix from the simple concatenation of the individual memories:

$$ M = \begin{bmatrix} m_1 & m_2 & \cdots & m_n \end{bmatrix} = \begin{bmatrix} m_1(1) & m_2(1) & \cdots & m_n(1) \\ m_1(2) & m_2(2) & \cdots & m_n(2) \\ \vdots & \vdots & \ddots & \vdots \\ m_1(L) & m_2(L) & \cdots & m_n(L) \end{bmatrix}. $$

For L total attributes and n total memories, M will have L rows and n columns. Note that, although the n traces are combined into a large memory matrix, each trace is individually accessible as a column in this matrix.
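
To make the construction concrete, here is a minimal sketch in Python (NumPy) of traces as column vectors concatenated into a memory matrix; the attribute count and all attribute values are made-up illustrations, not parameters fixed by the theory.

```python
import numpy as np

def encode_trace(attributes):
    """Encode one item as a column vector of numerical attribute values."""
    return np.asarray(attributes, dtype=float).reshape(-1, 1)

# Three hypothetical traces with L = 5 attributes, listed in the same fixed order.
m1 = encode_trace([0.2, 1.0, 0.5, 3.0, 0.7])
m2 = encode_trace([0.3, 1.1, 0.4, 2.9, 0.6])   # similar item -> similar numbers
m3 = encode_trace([2.0, 0.1, 1.5, 0.2, 0.9])

# The memory matrix is the simple concatenation of the individual traces:
# L rows (attributes) by n columns (memories).
M = np.hstack([m1, m2, m3])
print(M.shape)        # (5, 3)
print(M[:, 0])        # each trace remains individually accessible as a column
```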

In this formulation, the n different memories are more or less independent of each other. However, items presented together in some setting will become tangentially associated through the similarity of their context vectors. If multiple items are associated with each other and intentionally encoded in that manner, say an item a and an item b, then the memory for these two can be constructed as a single trace, with each item contributing k attributes as follows:

$$ m_{ab} = \begin{bmatrix} a(1) \\ \vdots \\ a(k) \\ b(1) \\ \vdots \\ b(k) \end{bmatrix}. $$

Context as a stochastic vector

When items are learned one after another, it is tempting to say that they are learned in the same temporal context. However, in reality, there are subtle variations in context. Hence, contextual attributes are often considered to be changing over time, as modeled by a stochastic process. Considering a vector of only r total context attributes ti that represents the context of memory mi, the context of the next-encoded memory is given by ti+1:

$$ t_{i+1}(j) = t_i(j) + \varepsilon(j), $$

so,

$$ t_{i+1}(j) = t_1(j) + \sum_{l=1}^{i} \varepsilon_l(j). $$

Here, ε(j) is a random number sampled from a Gaussian distribution.
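
A minimal sketch of this contextual drift as a Gaussian random walk, assuming r = 3 context attributes and an illustrative noise scale (both arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
r, n_memories, noise_scale = 3, 100, 0.1   # illustrative values

t = np.zeros(r)                 # context of the first encoded memory
contexts = [t.copy()]
for _ in range(n_memories - 1):
    t = t + rng.normal(0.0, noise_scale, size=r)   # t_{i+1}(j) = t_i(j) + eps(j)
    contexts.append(t.copy())

# Contexts encoded close together in time remain similar;
# contexts far apart in time have typically drifted much further.
print(np.linalg.norm(contexts[1] - contexts[0]))
print(np.linalg.norm(contexts[-1] - contexts[0]))
```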

Summed similarity

As explained in the subsequent section, the hallmark of multiple trace theory is an ability to compare some probe item to the pre-existing matrix of encoded memories. This simulates the memory search process, whereby we can determine whether we have ever seen the probe before as in recognition tasks or whether the probe gives rise to another previously encoded memory as in cued recall.

First, the probe p is encoded as an attribute vector. Continuing with the preceding example of the memory matrix M, the probe will have L entries:

$$ p = \begin{bmatrix} p(1) \\ p(2) \\ \vdots \\ p(L) \end{bmatrix}. $$

This p is then compared one by one to all pre-existing memories (traces) in M by determining the Euclidean distance between p and each mi:

$$ d(p, m_i) = \sqrt{\sum_{j=1}^{L} \big( p(j) - m_i(j) \big)^2 }. $$

Due to the stochastic nature of context, it is almost never the case in multiple trace theory that a probe item exactly matches an encoded memory. Still, high similarity between p and mi is indicated by a small Euclidean distance. Hence, another operation must be performed on the distance, one that yields very low similarity for a great distance and very high similarity for a small distance. A linear operation does not eliminate low-similarity items harshly enough. Intuitively, an exponential decay model seems most suitable:

$$ \text{similarity}(p, m_i) = e^{-\tau \, d(p, m_i)}, $$

where τ is a decay parameter that can be experimentally assigned. We can then define similarity to the entire memory matrix by a summed similarity SS(p, M) between the probe p and the memory matrix M:

$$ SS(p, M) = \sum_{i=1}^{n} \text{similarity}(p, m_i) = \sum_{i=1}^{n} e^{-\tau \, d(p, m_i)}. $$

If the probe item is very similar to even one of the encoded memories, SS receives a large boost. For example, given m1 as a probe item, we will get a near-0 distance (not exactly 0, owing to contextual drift) for i = 1, which will add nearly the maximal boost possible to SS. To differentiate from background similarity (there will always be some low similarity to context or to a few attributes, for example), SS is often compared to some arbitrary criterion. If it is higher than the criterion, then the probe is considered among those encoded. The criterion can be varied based on the nature of the task and the desire to prevent false alarms. Thus, multiple trace theory predicts that, given some cue, the brain can compare the summed similarity generated by that cue to a criterion to answer questions like "has this cue been experienced before?" (recognition) or "what memory does this cue elicit?" (cued recall), which are the applications of summed similarity described below.
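
A minimal sketch of summed similarity and the criterion test, reusing the memory matrix M from the earlier sketch; the decay parameter and criterion are illustrative stand-ins for values that would be fit to data:

```python
import numpy as np

def summed_similarity(p, M, tau=1.0):
    """SS(p, M): sum of exponentially decayed Euclidean distances."""
    dists = np.linalg.norm(M - p.reshape(-1, 1), axis=0)   # d(p, m_i) per column
    return np.sum(np.exp(-tau * dists))

def recognize(p, M, tau=1.0, criterion=0.5):
    """Report the probe as 'seen before' if SS passes the criterion."""
    return summed_similarity(p, M, tau) > criterion
```

Probing with a vector close to a stored column drives one distance near 0, contributing a term near 1 to SS; a novel probe far from every column contributes only small terms.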

Applications to memory phenomena

Recognition

Multiple trace theory fits well into the conceptual framework for recognition. Recognition requires an individual to determine whether or not they have seen an item before. For example, facial recognition is determining whether one has seen a face before. When asked this for a successfully encoded item (something that has indeed been seen before), recognition should occur with high probability. In the mathematical framework of this theory, we can model recognition of an individual probe item p by summed similarity with a criterion. We translate the test item into an attribute vector, as done for the encoded memories, and compare it to every trace ever encountered. If summed similarity passes the criterion, we say we have seen the item before. Summed similarity is expected to be very low if the item has never been seen, but relatively higher if it has, due to the similarity of the probe's attributes to some memory in the memory matrix.

This can be applied both to individual item recognition and associative recognition for two or more items together.

Cued recall

The theory can also account for cued recall. Here, some cue is given that is meant to elicit an item out of memory. For example, a factual question like "Who was the first President of the United States?" is a cue to elicit the answer of "George Washington". In the "ab" framework described above, we can take all attributes present in a cue and consider these the a item in an encoded association as we try to recall the b portion of the mab memory. In this example, attributes like "first", "President", and "United States" will be combined to form the a vector, which will have already been formulated into the mab memory whose b values encode "George Washington". Given a, there are two popular models for how we can successfully recall b:

1) We can go through and determine the similarity (not summed similarity; see above for the distinction) of the a attributes to every item in memory, then pick whichever memory has the highest similarity for the a. Whatever b-type attributes are linked to that memory give what we recall. The mab memory gives the best chance of recall, since its a elements will have high similarity to the cue a. Still, since recall does not always occur, we can say that the similarity must pass a criterion for recall to occur at all. This is similar to how the IBM machine Watson operates. Here, the similarity compares only the a-type attributes of a to mab.

2) We can use a probabilistic choice rule to determine the probability of recalling an item as proportional to its similarity. This is akin to throwing a dart at a dartboard, with bigger areas represented by larger similarities to the cue item. Mathematically speaking, given the cue a, the probability of recalling the desired memory mab is:

$$ P(\text{recall } m_{ab} \mid a) = \frac{\text{similarity}(a, m_{ab})}{\sum_{i=1}^{n} \text{similarity}(a, m_i) + \text{err}}. $$

In computing both similarity and summed similarity, we only consider relations among a-type attributes. We add the error term because without it, the probability of recalling any memory in M will be 1, but there are certainly times when recall does not occur at all.
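
A minimal sketch of this probabilistic choice rule, assuming (as an illustrative convention, not something fixed by the theory) that the first k rows of each trace hold the a-type attributes, and using an arbitrary error term:

```python
import numpy as np

def recall_probabilities(a_cue, M, k, tau=1.0, err=0.1):
    """P(recall m_i | a) over all traces, comparing a-type attributes only."""
    A = M[:k, :]                                     # a-type rows of each trace
    dists = np.linalg.norm(A - a_cue.reshape(-1, 1), axis=0)
    sims = np.exp(-tau * dists)                      # similarity(a, m_i)
    return sims / (np.sum(sims) + err)               # err keeps the total below 1

# The trace whose a-attributes best match the cue gets the largest probability;
# the leftover mass (due to err) is the chance that recall fails entirely.
```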

Other common results explained

Phenomena in memory associated with repetition, word frequency, recency, forgetting, and contiguity, among others, can be easily explained in the realm of multiple trace theory. Memory is known to improve with repeated exposure to items. For example, hearing a word several times in a list will improve recognition and recall of that word later on. This is because each repeated exposure simply adds another trace of the item to the ever-growing memory matrix, so the summed similarity for this item at test will be larger and thus more likely to pass the criterion.

In recognition, very common words are harder to recognize as part of a memorized list, when tested, than rare words. This is known as the word frequency effect and can be explained by multiple trace theory as well. For common words, summed similarity will be relatively high whether the word was seen in the list or not, because the word has likely been encountered and encoded in the memory matrix many times throughout life. Thus, the brain typically selects a higher criterion in determining whether common words are part of a list, making them harder to successfully select. Rarer words, however, are typically encountered less throughout life, so their presence in the memory matrix is limited. Hence, low overall summed similarity will lead to a more lax criterion. If the word was present in the list, high context similarity at the time of the test, along with other attribute similarity, will boost summed similarity enough to pass the criterion, and the rare word is thus recognized successfully.

Recency in the serial position effect can be explained because more recent memories encoded will share a temporal context most similar to the present context, as the stochastic nature of time will not have had as pronounced an effect. Thus, context similarity will be high for recently encoded items, so overall similarity will be relatively higher for these items as well. The stochastic contextual drift is also thought to account for forgetting because the context in which a memory was encoded is lost over time, so summed similarity for an item only presented in that context will decrease over time.
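
A minimal sketch of forgetting through contextual drift: a stored trace matches a later probe on all item attributes, but the test context keeps drifting after encoding, so similarity (and hence summed similarity) decays with time. All values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
tau, noise_scale = 1.0, 0.2                  # illustrative parameters

item = np.array([1.0, 0.5, 2.0])             # item attributes match the probe exactly
trace = np.concatenate([item, np.zeros(3)])  # encoded when context was t = 0

context = np.zeros(3)
for step in range(1, 201):
    context += rng.normal(0.0, noise_scale, 3)   # context drifts after encoding
    if step in (1, 50, 200):
        probe = np.concatenate([item, context])
        sim = np.exp(-tau * np.linalg.norm(probe - trace))
        print(step, round(sim, 4))               # similarity typically falls: forgetting
```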

Finally, empirical data have shown a contiguity effect, whereby items that are presented together temporally, even though they may not be encoded as a single memory as in the "ab" paradigm described above, are more likely to be remembered together. This can be considered a result of low contextual drift between items remembered together, so the contextual similarity between two items presented together is high.

Shortcomings

One of the biggest shortcomings of multiple trace theory is the requirement of some item with which to compare the memory matrix when determining successful encoding. As mentioned above, this works quite well in recognition and cued recall, but there is a glaring inability to incorporate free recall into the model. Free recall requires an individual to freely remember some list of items. Although the very act of asking to recall may act as a cue that can then elicit cued recall techniques, it is unlikely that the cue is unique enough to reach a summed similarity criterion or to otherwise achieve a high probability of recall.

Another major issue lies in translating the model to biological relevance. It is hard to imagine that the brain has unlimited capacity to keep track of such a large matrix of memories and continue expanding it with every item with which it has ever been presented. Furthermore, searching through this matrix is an exhaustive process that would not be relevant on biological time scales.

Wednesday, July 6, 2022

Ground source heat pump

From Wikipedia, the free encyclopedia

A heat pump in combination with heat and cold storage

A ground source heat pump (also geothermal heat pump) is a heating/cooling system for buildings that uses a type of heat pump to transfer heat to or from the ground, taking advantage of the relative constancy of temperatures of the earth through the seasons. Ground source heat pumps (GSHPs) – or geothermal heat pumps (GHP) as they are commonly termed in North America – are among the most energy-efficient technologies for providing HVAC and water heating, using far less energy than can be achieved by burning a fuel in a boiler/furnace or by use of resistive electric heaters.

Efficiency is given as a coefficient of performance (COP), which is typically in the range 3–6, meaning that the devices provide 3–6 units of heat for each unit of electricity used. Setup costs are higher than for other heating systems, due to the requirement to install ground loops over large areas or to drill boreholes, and for this reason air source heat pumps are often used instead.
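
As a worked illustration of what a COP of 3–6 means for running cost (the 10 kWh heat demand and $0.15/kWh electricity price below are assumptions for the example, not figures from the article):

```python
# Electricity needed to deliver a fixed amount of heat at different COPs.
delivered_heat_kwh = 10.0          # assumed heat demand
electricity_price = 0.15           # assumed $/kWh

for cop in (3.0, 6.0):
    electricity_kwh = delivered_heat_kwh / cop
    print(f"COP {cop}: {electricity_kwh:.2f} kWh in, ${electricity_kwh * electricity_price:.2f}")

# A resistance heater (COP ~ 1) would need the full 10 kWh for the same heat.
```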

History

The heat pump was described by Lord Kelvin in 1853 and developed by Peter Ritter von Rittinger in 1855. Heinrich Zoelly had patented the idea of using it to draw heat from the ground in 1912.

After experimenting with a freezer, Robert C. Webber built the first direct exchange ground source heat pump in the late 1940s, though sources disagree as to the exact timeline of his invention. The first successful commercial project was installed in the Commonwealth Building (Portland, Oregon) in 1948, and has been designated a National Historic Mechanical Engineering Landmark by ASME. Professor Carl Nielsen of Ohio State University built the first residential open loop version in his home in 1948.

The technology became popular in Sweden in the 1970s as a result of the 1973 oil crisis, and has been growing slowly in worldwide acceptance since then. Open loop systems dominated the market until the development of polybutylene pipe in 1979 made closed loop systems economically viable.

As of 2004, there were over a million units installed worldwide, providing 12 GW of thermal capacity with a growth rate of 10% per year. About 80,000 units are installed in the US each year (as of 2011) and 27,000 in Sweden (as of 2004). In Finland, a geothermal heat pump was the most common heating system choice for new detached houses between 2006 and 2011, with a market share exceeding 40%.

Arrangement

Internal arrangement

Liquid-to-water heat pump

The heat pump, which is the central unit that becomes the heating and cooling plant for the building, comes in two main variants:

Liquid-to-water heat pumps (also called water-to-water) are hydronic systems that carry heating or cooling through the building through pipes to conventional radiators, underfloor heating, baseboard radiators and hot water tanks. These heat pumps are also preferred for pool heating. Heat pumps typically only heat water to about 55 °C (131 °F) efficiently, whereas boilers typically operate at 65–95 °C (149–203 °F). The size of radiators designed for the higher temperatures achieved by boilers may be too small for use with heat pumps, requiring replacement with larger radiators when retrofitting a home from boiler to heat pump. When used for cooling, the temperature of the circulating water must normally be kept above the dew point to ensure that atmospheric humidity does not condense on the radiator.

Liquid-to-air heat pumps (also called water-to-air) output forced air, and are most commonly used to replace legacy forced air furnaces and central air conditioning systems. There are variations that allow for split systems, high-velocity systems, and ductless systems. Heat pumps cannot achieve as high a fluid temperature as a conventional furnace, so they require a higher volume flow rate of air to compensate. When retrofitting a residence, the existing ductwork may have to be enlarged to reduce the noise from the higher air flow.

Ground heat exchanger

A horizontal slinky loop prior to being covered with soil.

Ground source heat pumps employ a ground heat exchanger in contact with the ground or groundwater to extract or dissipate heat. Incorrect design can result in the system freezing after a number of years or in very inefficient performance; thus, accurate system design is critical to a successful system.

Pipework for the ground loop is typically made of high-density polyethylene pipe and contains a mixture of water and anti-freeze (propylene glycol, denatured alcohol or methanol). Monopropylene glycol has the least damaging potential when it might leak into the ground, and is, therefore, the only allowed anti-freeze in ground sources in an increasing number of European countries.

Horizontal

A horizontal closed loop field is composed of pipes that are arrayed in a plane in the ground. A long trench, deeper than the frost line, is dug, and U-shaped or slinky coils are spread out inside it. Shallow 3–8-foot (0.91–2.44 m) horizontal heat exchangers experience seasonal temperature cycles due to solar gains and transmission losses to ambient air at ground level. These temperature cycles lag behind the seasons because of thermal inertia, so the heat exchanger will harvest heat deposited by the sun several months earlier, while in late winter and spring it is still weighed down by cold accumulated over the winter. Systems in wet ground or in water are generally more efficient than drier ground loops, since water conducts and stores heat better than the solids in sand or soil. If the ground is naturally dry, soaker hoses may be buried with the ground loop to keep it wet.

Vertical
Drilling of a borehole for residential heating

A vertical system consists of a number of boreholes some 50 to 400 feet (15–122 m) deep, fitted with U-shaped pipes through which a heat-carrying fluid is circulated that absorbs heat from (or discharges heat to) the ground. Boreholes are spaced at least 5–6 m apart, and the depth depends on ground and building characteristics. Alternatively, pipes may be integrated with the foundation piles used to support the building. Vertical systems rely on migration of heat from surrounding geology, unless they are recharged during the summer and at other times when surplus heat is available. Vertical systems are typically used where there is insufficient available land for a horizontal system.

Pipe pairs in the hole are joined with a U-shaped cross connector at the bottom of the hole, or the loop comprises two small-diameter high-density polyethylene (HDPE) tubes thermally fused to form a U-shaped bend at the bottom. The space between the wall of the borehole and the U-shaped tubes is usually grouted completely with grouting material or, in some cases, partially filled with groundwater. For illustration, a detached house needing 10 kW (3 ton) of heating capacity might need three boreholes 80 to 110 m (260 to 360 ft) deep.
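
The sizing example can be sanity-checked with a rough rule-of-thumb calculation; the specific heat extraction rate per metre of borehole below is an assumed figure (real values depend strongly on local geology and come from site analysis such as a thermal response test):

```python
# Rough borehole-length estimate from an assumed specific extraction rate.
heating_load_w = 10_000       # 10 kW design load, as in the example above
extraction_w_per_m = 35       # assumed W per metre of borehole

total_length_m = heating_load_w / extraction_w_per_m
boreholes = 3
print(f"{total_length_m:.0f} m total, ~{total_length_m / boreholes:.0f} m per borehole")
# ~286 m total, ~95 m each -- consistent with three boreholes of 80-110 m.
```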

Radial or directional drilling

As an alternative to trenching, loops may be laid by mini horizontal directional drilling (mini-HDD). This technique can lay piping under yards, driveways, gardens or other structures without disturbing them, at a cost between those of trenching and vertical drilling. This system also differs from horizontal and vertical drilling in that the loops are installed from one central chamber, further reducing the ground space needed. Radial drilling is often installed retroactively (after the property has been built) due to the compact equipment used and the ability to bore beneath existing constructions.

Open loop

In an open-loop system (also called a groundwater heat pump), the secondary loop pumps natural water from a well or body of water into a heat exchanger inside the heat pump. Since the water chemistry is not controlled, the appliance may need to be protected from corrosion by using different metals in the heat exchanger and pump. Limescale may foul the system over time and require periodic acid cleaning. This is much more of a problem with cooling systems than heating systems. A standing column well system is a specialized type of open-loop system where water is drawn from the bottom of a deep rock well, passed through a heat pump, and returned to the top of the well. A growing number of jurisdictions have outlawed open-loop systems that drain to the surface because these may drain aquifers or contaminate wells. This forces the use of more environmentally sound injection wells or a closed-loop system.

Pond
12-ton pond loop system being sunk to the bottom of a pond

A closed pond loop consists of coils of pipe similar to a slinky loop attached to a frame and located at the bottom of an appropriately sized pond or water source. Artificial ponds are used as heat storage (up to 90% efficient) in some central solar heating plants, which later extract the heat (similar to ground storage) via a large heat pump to supply district heating.

Direct exchange (DX)

The direct exchange geothermal heat pump (DX) is the oldest type of geothermal heat pump technology, in which the refrigerant itself is passed through the ground loop. Developed during the 1980s, this approach faced issues with the refrigerant and oil management system, especially after the ban of CFC refrigerants in 1989, and DX systems are now infrequently used.

Installation

Because of the technical knowledge and equipment needed to design and size the system properly (and install the piping if heat fusion is required), a GSHP system installation requires a professional's services. Several installers have published real-time views of system performance in an online community of recent residential installations. The International Ground Source Heat Pump Association (IGSHPA), the Geothermal Exchange Organization (GEO), the Canadian GeoExchange Coalition and the Ground Source Heat Pump Association maintain listings of qualified installers in the US, Canada and the UK. Furthermore, detailed analysis of soil thermal conductivity for horizontal systems and formation thermal conductivity for vertical systems will generally result in more accurately designed systems with a higher efficiency.

Thermal performance

Cooling performance is typically expressed in units of BTU/hr/watt as the energy efficiency ratio (EER), while heating performance is typically reduced to dimensionless units as the coefficient of performance (COP). The conversion factor is 3.41 BTU/hr/watt. Since a heat pump moves three to five times more heat energy than the electric energy it consumes, the total energy output is much greater than the electrical input. This results in net thermal efficiencies greater than 300%, compared with 100% for radiant electric heat. Traditional combustion furnaces and electric heaters can never exceed 100% efficiency. Ground source heat pumps can reduce energy consumption – and corresponding air pollution emissions – by up to 72% compared to electric resistance heating with standard air-conditioning equipment.
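
A minimal sketch of the EER/COP conversion stated above (EER in BTU/hr per watt equals the dimensionless COP times 3.41):

```python
# Convert between EER (BTU/hr/W, cooling) and COP (dimensionless, heating).
BTU_PER_HR_PER_WATT = 3.41

def eer_from_cop(cop: float) -> float:
    return cop * BTU_PER_HR_PER_WATT

def cop_from_eer(eer: float) -> float:
    return eer / BTU_PER_HR_PER_WATT

print(eer_from_cop(4.0))    # 13.64 BTU/hr/W
print(cop_from_eer(14.1))   # ~4.1 -- the ISO 13256-1 closed-loop cooling floor
```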

Efficient compressors, variable speed compressors and larger heat exchangers all contribute to heat pump efficiency. Residential ground source heat pumps on the market today have standard COPs ranging from 2.4 to 5.0 and EERs ranging from 10.6 to 30. To qualify for an Energy Star label, heat pumps must meet certain minimum COP and EER ratings which depend on the ground heat exchanger type. For closed-loop systems, the ISO 13256-1 heating COP must be 3.3 or greater and the cooling EER must be 14.1 or greater.

Standards ARI 210 and 240 define Seasonal Energy Efficiency Ratio (SEER) and Heating Seasonal Performance Factors (HSPF) to account for the impact of seasonal variations on air source heat pumps. These numbers are normally not applicable and should not be compared to ground source heat pump ratings. However, Natural Resources Canada has adapted this approach to calculate typical seasonally adjusted HSPFs for ground-source heat pumps in Canada. The NRC HSPFs ranged from 8.7 to 12.8 BTU/hr/watt (2.6 to 3.8 in nondimensional factors, or 255% to 375% seasonal average electricity utilization efficiency) for the most populated regions of Canada.

For the sake of comparing heat pump appliances to each other, independently from other system components, a few standard test conditions have been established by the Air-Conditioning and Refrigeration Institute (ARI) and, more recently, by the International Organization for Standardization. Standard ARI 330 ratings were intended for closed-loop ground-source heat pumps, and assume secondary loop water temperatures of 25 °C (77 °F) for air conditioning and 0 °C (32 °F) for heating. These temperatures are typical of installations in the northern US. Standard ARI 325 ratings were intended for open-loop ground-source heat pumps, and include two sets of ratings for groundwater temperatures of 10 °C (50 °F) and 21 °C (70 °F). ARI 325 budgets more electricity for water pumping than ARI 330. Neither of these standards attempts to account for seasonal variations. Standard ARI 870 ratings are intended for direct exchange ground-source heat pumps. ASHRAE transitioned to ISO 13256-1 in 2001, which replaces ARI 320, 325 and 330. The new ISO standard produces slightly higher ratings because it no longer budgets any electricity for water pumps.

Soil without artificial heat addition or subtraction, and at depths of several metres or more, remains at a relatively constant temperature year round. This temperature equates roughly to the average annual air temperature of the chosen location, usually 7–12 °C (45–54 °F) at a depth of 6 metres (20 ft) in the northern US. Because this temperature remains more constant than the air temperature throughout the seasons, ground source heat pumps perform with far greater efficiency during extreme air temperatures than air conditioners and air-source heat pumps.

Analysis of heat transfer

A challenge in predicting the thermal response of a ground heat exchanger (GHE) is the diversity of the time and space scales involved. Four space scales and eight time scales are involved in the heat transfer of GHEs. The first space scale having practical importance is the diameter of the borehole (~ 0.1 m) and the associated time is on the order of 1 hr, during which the effect of the heat capacity of the backfilling material is significant. The second important space dimension is the half distance between two adjacent boreholes, which is on the order of several meters. The corresponding time is on the order of a month, during which the thermal interaction between adjacent boreholes is important. The largest space scale can be tens of meters or more, such as the half-length of a borehole and the horizontal scale of a GHE cluster. The time scale involved is as long as the lifetime of a GHE (decades).

The short-term hourly temperature response of the ground is vital for analyzing the energy of ground-source heat pump systems and for their optimum control and operation. By contrast, the long-term response determines the overall feasibility of a system from the standpoint of the life cycle. Addressing the complete spectrum of time scales requires vast computational resources.

The main questions that engineers may ask in the early stages of designing a GHE are (a) what the heat transfer rate of a GHE is as a function of time, given a particular temperature difference between the circulating fluid and the ground, and (b) what the temperature difference is as a function of time, given a required heat exchange rate. In the language of heat transfer, the two questions can be expressed as

$$ q_l = \frac{T_f - T_0}{R(t)} \qquad \Longleftrightarrow \qquad T_f - T_0 = q_l \, R(t), $$

where Tf is the average temperature of the circulating fluid, T0 is the effective, undisturbed temperature of the ground, ql is the heat transfer rate of the GHE per unit time per unit length (W/m), and R is the total thermal resistance (m·K/W). R(t) is often an unknown variable that needs to be determined by heat transfer analysis. Despite R(t) being a function of time, analytical models invariably decompose it into a time-independent part and a time-dependent part to simplify the analysis.

Various models for the time-independent and time-dependent parts of R can be found in the references. Further, a thermal response test is often performed to make a deterministic analysis of ground thermal conductivity in order to optimize the loopfield size, especially for larger commercial sites (e.g., with more than 10 wells).
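
A minimal sketch of the relation Tf − T0 = ql · R(t), splitting R(t) into a time-independent borehole resistance and a time-dependent ground resistance taken here from the classic infinite line-source model; all parameter values below are illustrative assumptions, not design figures:

```python
import numpy as np
from scipy.special import exp1  # exponential integral E1

# Illustrative ground and borehole parameters (assumptions for the sketch).
k = 2.0        # ground thermal conductivity, W/(m*K)
alpha = 1e-6   # ground thermal diffusivity, m^2/s
r_b = 0.055    # borehole radius, m
R_b = 0.1      # borehole thermal resistance, m*K/W (time-independent part)
q_l = 30.0     # heat extraction per unit length, W/m

def fluid_temperature(T0, t_seconds):
    """T_f = T_0 - q_l * (R_b + R_ground(t)), infinite line-source ground term."""
    R_ground = exp1(r_b**2 / (4 * alpha * t_seconds)) / (4 * np.pi * k)
    return T0 - q_l * (R_b + R_ground)   # heat extraction cools the fluid below T0

for days in (1, 30, 3650):
    print(days, round(fluid_temperature(10.0, days * 86400), 2))
# The fluid temperature keeps falling (slowly, logarithmically) over decades.
```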

Seasonal thermal storage

A heat pump in combination with heat and cold storage

The efficiency of ground source heat pumps can be greatly improved by using seasonal thermal energy storage and interseasonal heat transfer. Heat captured and stored in thermal banks in the summer can be retrieved efficiently in the winter. Heat storage efficiency increases with scale, so this advantage is most significant in commercial or district heating systems.

Geosolar combisystems have been used to heat and cool a greenhouse using an aquifer for thermal storage. In summer, the greenhouse is cooled with cold ground water. This heats the water in the aquifer, which can become a warm source for heating in winter. Cold and heat storage with heat pumps can also be combined with water/humidity regulation. These principles are used to provide renewable heat and renewable cooling to all kinds of buildings.

The efficiency of existing small heat pump installations can also be improved by adding large, cheap, water-filled solar collectors. These may be integrated into a parking lot that is due to be overhauled, or into walls or roof constructions by installing one-inch PE pipes into the outer layer.

Environmental impact

The US Environmental Protection Agency (EPA) has called ground source heat pumps the most energy-efficient, environmentally clean, and cost-effective space conditioning systems available. Heat pumps offer significant emission reductions potential, particularly where they are used for both heating and cooling and where the electricity is produced from renewable resources.

GSHPs have unsurpassed thermal efficiencies and produce zero emissions locally, but their electricity supply includes components with high greenhouse gas emissions unless the owner has opted for a 100% renewable energy supply. Their environmental impact, therefore, depends on the characteristics of the electricity supply and the available alternatives.

Annual greenhouse gas (GHG) savings from using a ground source heat pump instead of a high-efficiency furnace in a detached residence (assuming no specific supply of renewable energy)

Country   Electricity CO2 emissions intensity   GHG savings vs natural gas   vs heating oil   vs electric heating
Canada    223 ton/GWh                           2.7 ton/yr                   5.3 ton/yr       3.4 ton/yr
Russia    351 ton/GWh                           1.8 ton/yr                   4.4 ton/yr       5.4 ton/yr
US        676 ton/GWh                           −0.5 ton/yr                  2.2 ton/yr       10.3 ton/yr
China     839 ton/GWh                           −1.6 ton/yr                  1.0 ton/yr       12.8 ton/yr

The GHG emissions savings from a heat pump over a conventional furnace can be calculated based on the following formula (the factor of 1000 converts kg to tons of CO2 and the factor of 3600 converts GJ to GWh):

$$ \text{GHG savings} \;\left(\tfrac{\text{ton CO}_2}{\text{yr}}\right) = HL \left( \frac{FI}{1000 \cdot AFUE} - \frac{EI}{3600 \cdot COP} \right) $$

where:
  • HL = seasonal heat load ≈ 80 GJ/yr for a modern detached house in the northern US
  • FI = emissions intensity of fuel = 50 kg(CO2)/GJ for natural gas, 73 for heating oil, 0 for 100% renewable energy such as wind, hydro, photovoltaic or solar thermal
  • AFUE = furnace efficiency ≈ 95% for a modern condensing furnace
  • COP = heat pump coefficient of performance ≈ 3.2 seasonally adjusted for northern US heat pump
  • EI = emissions intensity of electricity ≈ 200–800 ton(CO2)/GWh, depending on the region's mix of electric power plants (Coal vs Natural Gas vs Nuclear, Hydro, Wind & Solar)
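
A minimal sketch of this calculation in Python, plugging in the values from the list above and the table's Canadian electricity intensity; small rounding differences from the table are expected:

```python
# GHG savings (ton CO2/yr) of a ground source heat pump vs a fuel furnace.
def ghg_savings_ton_per_yr(HL, FI, AFUE, EI, COP):
    furnace = HL * FI / (1000 * AFUE)     # furnace emissions; kg -> ton via 1000
    heat_pump = HL * EI / (3600 * COP)    # heat pump emissions; 1 GWh = 3600 GJ
    return furnace - heat_pump

# Canada (EI = 223 ton/GWh) vs a natural gas furnace (FI = 50 kg/GJ):
print(round(ghg_savings_ton_per_yr(80, 50, 0.95, 223, 3.2), 1))   # ~2.7 ton/yr
```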

Ground-source heat pumps always produce fewer greenhouse gases than air conditioners, oil furnaces, and electric heating, but natural gas furnaces may be competitive depending on the greenhouse gas intensity of the local electricity supply. In countries like Canada and Russia, with low-emission electricity infrastructure, a residential heat pump may save 5 tons of carbon dioxide per year relative to an oil furnace, or about as much as taking an average passenger car off the road. But in cities like Beijing or Pittsburgh that are highly reliant on coal for electricity production, a heat pump may result in 1 or 2 tons more carbon dioxide emissions than a natural gas furnace. For areas not served by utility natural gas infrastructure, however, no better alternative exists.

The fluids used in closed loops may be designed to be biodegradable and non-toxic, but the refrigerant used in the heat pump cabinet and in direct exchange loops was, until recently, chlorodifluoromethane, which is an ozone-depleting substance. Although harmless while contained, leaks and improper end-of-life disposal contribute to enlarging the ozone hole. For new construction, this refrigerant is being phased out in favor of the ozone-friendly but potent greenhouse gas R410A. The EcoCute water heater is an air-source heat pump that uses carbon dioxide as its working fluid instead of chlorofluorocarbons. Open-loop systems (i.e. those that draw ground water as opposed to closed-loop systems using a borehole heat exchanger) need to be balanced by reinjecting the spent water. This prevents aquifer depletion and the contamination of soil or surface water with brine or other compounds from underground.

Before drilling, the underground geology needs to be understood, and drillers need to be prepared to seal the borehole, including preventing penetration of water between strata. An unfortunate example is a geothermal heating project in Staufen im Breisgau, Germany, which appears to have caused considerable damage to historical buildings there. In 2008, the city centre was reported to have risen 12 cm, after initially sinking a few millimetres. The boring tapped a naturally pressurized aquifer, and via the borehole this water entered a layer of anhydrite, which expands when wet as it forms gypsum. The swelling will stop when the anhydrite is fully reacted, and reconstruction of the city centre "is not expedient until the uplift ceases." By 2010, sealing of the borehole had still not been accomplished, and some sections of town had risen by 30 cm.

Economics

Ground source heat pumps are characterized by high capital costs and low operational costs compared to other HVAC systems. Their overall economic benefit depends primarily on the relative costs of electricity and fuels, which are highly variable over time and across the world. Based on recent prices, ground-source heat pumps currently have lower operational costs than any other conventional heating source almost everywhere in the world. Natural gas is the only fuel with competitive operational costs, and only in a handful of countries where it is exceptionally cheap, or where electricity is exceptionally expensive. In general, a homeowner may save anywhere from 20% to 60% annually on utilities by switching from an ordinary system to a ground-source system.

Capital costs and system lifespan have received much less study until recently, and the return on investment is highly variable. The most recent data from an analysis of 2011–2012 incentive payments in the state of Maryland showed an average cost of residential systems of $1.90 per watt, or about $26,700 for a typical (4 ton/14 kW) home system. An older study found the total installed cost for a system with 10 kW (3 ton) thermal capacity for a detached rural residence in the US averaged $8000–$9000 in 1995 US dollars. More recent studies found an average cost of $14,000 in 2008 US dollars for the same size system. The US Department of Energy estimates a price of $7500 on its website, last updated in 2008. One source in Canada placed prices in the range of $30,000–$34,000 Canadian dollars. The rapid escalation in system price has been accompanied by rapid improvements in efficiency and reliability. Capital costs are known to benefit from economies of scale, particularly for open-loop systems, so they are more cost-effective for larger commercial buildings and harsher climates. The initial cost can be two to five times that of a conventional heating system in most residential applications, new construction or existing. In retrofits, the cost of installation is affected by the size of the living area, the home's age, insulation characteristics, the geology of the area, and the location of the property. Proper duct system design and mechanical air exchange should be considered in the initial system cost.

Payback period for installing a ground source heat pump in a detached residence

Country   Replacing natural gas   Replacing heating oil   Replacing electric heating
Canada    13 years                3 years                 6 years
US        12 years                5 years                 4 years
Germany   net loss                8 years                 2 years

Notes:
  • Highly variable with energy prices.
  • Government subsidies not included.
  • Climate differences not evaluated.

Capital costs may be offset by government subsidies; for example, Ontario offered $7000 for residential systems installed in the 2009 fiscal year. Some electric companies offer special rates to customers who install a ground-source heat pump for heating or cooling their building. Where electrical plants have larger loads during summer months and idle capacity in the winter, this increases electrical sales during the winter months. Heat pumps also lower the load peak during the summer due to the increased efficiency of heat pumps, thereby avoiding the costly construction of new power plants. For the same reasons, other utility companies have started to pay for the installation of ground-source heat pumps at customer residences. They lease the systems to their customers for a monthly fee, at a net overall saving to the customer.

The lifespan of the system is longer than that of conventional heating and cooling systems. Good data on system lifespan are not yet available because the technology is too recent, but many early systems are still operational today after 25–30 years with routine maintenance. Most loop fields have warranties for 25 to 50 years and are expected to last at least 50 to 200 years. Since ground-source heat pumps use electricity for heating the house, the higher investment above conventional oil, propane or electric systems may be returned in energy savings in 2–10 years for residential systems in the US. Compared to natural gas systems, the payback period can be much longer or non-existent. The payback period for larger commercial systems in the US is 1–5 years, even when compared to natural gas. Additionally, because geothermal heat pumps usually have no outdoor compressors or cooling towers, the risk of vandalism is reduced or eliminated, potentially extending a system's lifespan.
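
A minimal sketch of the simple payback arithmetic behind such figures; the capital premium and annual savings below are assumptions for illustration, not quoted costs:

```python
# Simple (undiscounted) payback: extra capital cost / annual utility savings.
capital_premium = 12_000.0    # assumed extra cost over a conventional system, $
annual_savings = 1_500.0      # assumed yearly utility savings, $

print(f"Simple payback: {capital_premium / annual_savings:.1f} years")   # 8.0
```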

Ground source heat pumps are recognized as one of the most efficient heating and cooling systems on the market. They are often the second-most cost-effective solution in extreme climates (after co-generation), despite reductions in thermal efficiency due to ground temperature. (The ground source is warmer in climates that need strong air conditioning, and cooler in climates that need strong heating.) The financial viability of these systems depends on the adequate sizing of ground heat exchangers (GHEs), which generally contribute the most to the overall capital costs of GSHP systems.

Commercial system maintenance costs in the US have historically been between $0.11 and $0.22 per m2 per year in 1996 dollars, much less than the average $0.54 per m2 per year for conventional HVAC systems.

Governments that promote renewable energy will likely offer incentives for the consumer (residential) or industrial markets. For example, in the United States, incentives are offered at both the state and federal levels of government. In the United Kingdom, the Renewable Heat Incentive provides a financial incentive for the generation of renewable heat, based on metered readings, on an annual basis for 20 years for commercial buildings. The domestic Renewable Heat Incentive is due to be introduced in Spring 2014 for seven years and will be based on deemed heat.
