
Sunday, September 22, 2024

Dead reckoning

From Wikipedia, the free encyclopedia

The navigator plots their 9 a.m. position, indicated by the triangle, and, using their course and speed, estimates their own position at 9:30 and 10 a.m.

In navigation, dead reckoning is the process of calculating the current position of a moving object by using a previously determined position, or fix, and incorporating estimates of speed, heading (or direction or course), and elapsed time. The corresponding term in biology, to describe the processes by which animals update their estimates of position or heading, is path integration.

Advances in navigational aids that give accurate information on position, in particular satellite navigation using the Global Positioning System, have made simple dead reckoning by humans obsolete for most purposes. However, inertial navigation systems, which provide very accurate directional information, use dead reckoning and are very widely applied.

Etymology

Contrary to myth, the term "dead reckoning" was not originally used to abbreviate "deduced reckoning", nor is it a misspelling of the term "ded reckoning". The use of "ded" or "deduced reckoning" is not known to have appeared earlier than 1931, much later in history than "dead reckoning", which appeared as early as 1613 in the Oxford English Dictionary. The original intention of "dead" in the term is generally assumed to mean using a stationary object that is "dead in the water" as a basis for calculations. Additionally, at the time of the first appearance of "dead reckoning", "ded" was a common spelling of "dead", which may have led to later confusion about the origin of the term.

By analogy with their navigational use, the words dead reckoning are also used to mean the process of estimating the value of any variable quantity by using an earlier value and adding whatever changes have occurred in the meantime. Often, this usage implies that the changes are not known accurately. The earlier value and the changes may be measured or calculated quantities.

Errors

Drift is an error that can arise in dead reckoning when the speed of the medium is not accounted for. A is the last known position (fix), B is the position calculated by dead reckoning, and C is the true position after the time interval. The vector from A to B is the expected path of the plane based on the initial heading (HDG) and true airspeed (TAS). The vector from B to C is the wind velocity (W/V), and the third vector, from A to C, is the actual track (TR) and ground speed (GS). The drift angle is marked in red.

While dead reckoning can give the best available information on the present position with little math or analysis, it is subject to significant errors of approximation. For precise positional information, both speed and direction must be accurately known at all times during travel. Most notably, dead reckoning does not account for directional drift during travel through a fluid medium. These errors tend to compound themselves over greater distances, making dead reckoning a difficult method of navigation for longer journeys.

For example, if displacement is measured by the number of rotations of a wheel, any discrepancy between the actual and assumed traveled distance per rotation, due perhaps to slippage or surface irregularities, will be a source of error. As each estimate of position is relative to the previous one, errors are cumulative, or compounding, over time.
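
As a rough illustration of how such errors compound, the following sketch dead-reckons a position by counting wheel rotations and compares it with the true track; the wheel circumference, slippage and heading are assumed example values, not from the article.

```python
import math

# Illustrative sketch: dead reckoning by counting wheel rotations, showing how
# a small per-rotation distance error compounds.  All numbers are assumptions.

WHEEL_CIRCUMFERENCE = 2.0      # metres per rotation, assumed by the navigator
ACTUAL_PER_ROTATION = 1.96     # metres actually covered (2% slippage)
HEADING_DEG = 45.0             # constant heading, degrees from north

x_est = y_est = 0.0            # dead-reckoned position
x_true = y_true = 0.0          # true position

for rotation in range(1, 5001):
    dx = math.sin(math.radians(HEADING_DEG))
    dy = math.cos(math.radians(HEADING_DEG))
    x_est += WHEEL_CIRCUMFERENCE * dx
    y_est += WHEEL_CIRCUMFERENCE * dy
    x_true += ACTUAL_PER_ROTATION * dx
    y_true += ACTUAL_PER_ROTATION * dy

error = math.hypot(x_est - x_true, y_est - y_true)
print(f"After {rotation} rotations the position error is {error:.1f} m")
# The error grows in proportion to the distance travelled; obtaining a fix
# part way through the journey would reset it.
```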

The accuracy of dead reckoning can be increased significantly by using other, more reliable methods to get a new fix part way through the journey. For example, if one was navigating on land in poor visibility, then dead reckoning could be used to get close enough to the known position of a landmark to be able to see it, before walking to the landmark itself—giving a precisely known starting point—and then setting off again.

Localization of mobile sensor nodes

Localizing a static sensor node is straightforward: attaching a Global Positioning System (GPS) receiver is usually sufficient. A mobile sensor node, whose geographical location changes continuously with time, is harder to localize. Mobile sensor nodes are typically deployed for data collection within a particular domain, for example a node attached to an animal in a grazing field or to a soldier on a battlefield. In such scenarios a GPS device for every sensor node is often not affordable, because of the cost, size and battery drain of constrained sensor nodes. Instead, a limited number of reference nodes (with GPS) is deployed in the field. These nodes continuously broadcast their locations, and other nodes in proximity receive these broadcasts and calculate their own positions using a mathematical technique such as trilateration, which requires at least three known reference locations.

Several localization algorithms based on the Sequential Monte Carlo (SMC) method have been proposed in the literature. Sometimes a node receives only two known locations and therefore cannot be localized directly. To overcome this problem, a dead reckoning technique is used: the sensor node reuses its previously calculated location at later time intervals. For example, if at time instant 1 node A calculates its position as loca_1 with the help of three known reference locations, then at time instant 2 it uses loca_1 together with two reference locations received from two reference nodes. This not only localizes a node in less time but also localizes it in positions where it is difficult to receive three reference locations.
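
A minimal sketch of the idea is below. The coordinates, ranges and node names are assumptions made up for the example: three references are used at the first time instant, and at the second only two references are heard, so the previously computed position is reused as the third.

```python
import numpy as np

# Minimal 2-D trilateration sketch (illustrative only).  Given three reference
# points and measured distances to them, solve for position by linearising the
# range equations and using least squares.

def trilaterate(refs, dists):
    """refs: list of (x, y) reference positions; dists: measured ranges."""
    (x1, y1), d1 = refs[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(refs[1:], dists[1:]):
        # Subtracting the first range equation from the others removes the
        # quadratic terms and leaves a linear system A @ [x, y] = b.
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

# Time instant 1: three GPS-equipped reference nodes are heard.
refs_t1 = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
dists_t1 = [70.7, 70.7, 70.7]            # assumed ranges to each reference
loca_1 = trilaterate(refs_t1, dists_t1)
print("position at t1:", loca_1)          # ~ (50, 50)

# Time instant 2: only two references are heard, so the previously computed
# position loca_1 is reused as a third "reference" (dead reckoning).
refs_t2 = [(100.0, 0.0), (100.0, 100.0), tuple(loca_1)]
dists_t2 = [64.0, 64.0, 10.0]             # assumed ranges, incl. distance moved
loca_2 = trilaterate(refs_t2, dists_t2)
print("position at t2:", loca_2)           # ~ (60, 50)
```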

Animal navigation

In studies of animal navigation, dead reckoning is more commonly (though not exclusively) known as path integration. Animals use it to estimate their current location based on their movements from their last known location. Animals such as ants, rodents, and geese have been shown to track their locations continuously relative to a starting point and to return to it, an important skill for foragers with a fixed home.

Vehicular navigation

Marine

Dead reckoning navigation tools in coastal navigation

In marine navigation a "dead" reckoning plot generally does not take into account the effect of currents or wind. Aboard ship a dead reckoning plot is considered important in evaluating position information and planning the movement of the vessel.

Dead reckoning begins with a known position, or fix, which is then advanced, mathematically or directly on the chart, by means of recorded heading, speed, and time. Speed can be determined by many methods. Before modern instrumentation, it was determined aboard ship using a chip log. More modern methods include pit log referencing engine speed (e.g. in rpm) against a table of total displacement (for ships) or referencing one's indicated airspeed fed by the pressure from a pitot tube. This measurement is converted to an equivalent airspeed based upon known atmospheric conditions and measured errors in the indicated airspeed system. A naval vessel uses a device called a pit sword (rodmeter), which uses two sensors on a metal rod to measure the electromagnetic variance caused by the ship moving through water. This change is then converted to ship's speed. Distance is determined by multiplying the speed and the time. This initial position can then be adjusted resulting in an estimated position by taking into account the current (known as set and drift in marine navigation). If there is no positional information available, a new dead reckoning plot may start from an estimated position. In this case subsequent dead reckoning positions will have taken into account estimated set and drift.
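
A small sketch of advancing a fix by recorded course, speed and time, and then applying set and drift to obtain an estimated position. The courses, speeds and the one-hour interval are assumed example values.

```python
import math

# Illustrative marine dead reckoning: advance a fix along the recorded course
# at the recorded speed, then adjust for the current ("set and drift").
# Positions are in nautical miles east/north of the fix; values are assumed.

def advance(position, course_deg, speed_kn, hours):
    """Advance a position along a course at a given speed for a given time."""
    east, north = position
    distance = speed_kn * hours                      # Distance = Speed x Time
    east += distance * math.sin(math.radians(course_deg))
    north += distance * math.cos(math.radians(course_deg))
    return east, north

fix = (0.0, 0.0)                                     # the 0900 fix
dr_position = advance(fix, course_deg=90.0, speed_kn=10.0, hours=1.0)

# Apply the current: set 180 degrees (flowing due south) at 2 knots for the hour.
estimated_position = advance(dr_position, course_deg=180.0, speed_kn=2.0, hours=1.0)

print("DR position:        ", dr_position)           # (10.0, 0.0)
print("Estimated position: ", estimated_position)    # (10.0, -2.0)
```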

Dead reckoning positions are calculated at predetermined intervals, and are maintained between fixes. The duration of the interval varies. Factors including one's speed made good and the nature of heading and other course changes, and the navigator's judgment determine when dead reckoning positions are calculated.

Before the 18th-century development of the marine chronometer by John Harrison and the lunar distance method, dead reckoning was the primary method of determining longitude available to mariners such as Christopher Columbus and John Cabot on their trans-Atlantic voyages. Tools such as the traverse board were developed to enable even illiterate crew members to collect the data needed for dead reckoning. Polynesian navigation, however, uses different wayfinding techniques.

Air

British P10 Magnetic Compass with dead reckoning navigation tools

On 14 June 1919, John Alcock and Arthur Brown took off from Lester's Field in St. John's, Newfoundland, in a Vickers Vimy. They navigated across the Atlantic Ocean by dead reckoning and landed in County Galway, Ireland, at 8:40 a.m. on 15 June, completing the first non-stop transatlantic flight.

On 21 May 1927 Charles Lindbergh landed in Paris, France after a successful non-stop flight from the United States in the single-engined Spirit of St. Louis. As the aircraft was equipped with very basic instruments, Lindbergh used dead reckoning to navigate.

Dead reckoning in the air is similar to dead reckoning on the sea, but slightly more complicated. Aircraft performance is affected by the density of the air the aircraft moves through, as well as by wind, weight, and power settings.

The basic formula for DR is Distance = Speed x Time. An aircraft flying at 250 knots airspeed for 2 hours has flown 500 nautical miles through the air. The wind triangle is used to calculate the effects of wind on heading and airspeed to obtain a magnetic heading to steer and the speed over the ground (groundspeed). Printed tables, formulae, or an E6B flight computer are used to calculate the effects of air density on aircraft rate of climb, rate of fuel burn, and airspeed.
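
A rough sketch of the wind-triangle arithmetic is below. It works in true heading rather than magnetic (variation would be applied separately), and the airspeed and wind values are assumed example numbers rather than anything from the article.

```python
import math

# Rough wind-triangle sketch (illustrative, not an E6B replacement).
# Given the desired true course, true airspeed and wind (direction the wind
# blows FROM, and speed), compute the wind correction angle, the true heading
# to steer, and the resulting groundspeed.

def wind_triangle(true_course, tas, wind_from, wind_speed):
    rel = math.radians(wind_from - true_course)
    wca = math.asin(wind_speed * math.sin(rel) / tas)   # wind correction angle
    heading = (true_course + math.degrees(wca)) % 360
    groundspeed = tas * math.cos(wca) - wind_speed * math.cos(rel)
    return heading, groundspeed

heading, gs = wind_triangle(true_course=90.0, tas=250.0,
                            wind_from=0.0, wind_speed=30.0)
print(f"Steer {heading:.1f} deg true, groundspeed {gs:.0f} kn")
# With a 30 kn wind from the north while tracking east, the aircraft crabs a
# few degrees towards the wind and loses only a little groundspeed.
```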

A course line is drawn on the aeronautical chart along with estimated positions at fixed intervals (say, every half hour). Visual observations of ground features are used to obtain fixes. By comparing the fix and the estimated position, corrections are made to the aircraft's heading and groundspeed.

Dead reckoning is on the curriculum for VFR (visual flight rules, or basic level) pilots worldwide. It is taught regardless of whether the aircraft has navigation aids such as GPS, ADF and VOR, and it is an ICAO requirement. Many flying training schools will prevent a student from using electronic aids until they have mastered dead reckoning.

Inertial navigation systems (INSes), which are nearly universal on more advanced aircraft, use dead reckoning internally. The INS provides reliable navigation capability under virtually any conditions, without the need for external navigation references, although it is still prone to slight errors.

Automotive

Dead reckoning is today implemented in some high-end automotive navigation systems in order to overcome the limitations of GPS/GNSS technology alone. Satellite microwave signals are unavailable in parking garages and tunnels, and often severely degraded in urban canyons and near trees due to blocked lines of sight to the satellites or multipath propagation. In a dead-reckoning navigation system, the car is equipped with sensors that know the wheel circumference and record wheel rotations and steering direction. These sensors are often already present in cars for other purposes (anti-lock braking system, electronic stability control) and can be read by the navigation system from the controller-area network bus. The navigation system then uses a Kalman filter to integrate the always-available sensor data with the accurate but occasionally unavailable position information from the satellite data into a combined position fix.
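
The following one-dimensional sketch shows the general idea rather than any particular vendor's implementation: odometry advances the position estimate every step, and a (scalar) Kalman update corrects it whenever a satellite fix is available, for example outside a tunnel. All noise figures and measurements are assumed.

```python
# Minimal 1-D sketch of fusing wheel-odometry dead reckoning with intermittent
# GPS fixes using a scalar Kalman filter.  All values below are assumptions.

ODO_VAR = 0.5 ** 2        # variance added per step by odometry drift (m^2)
GPS_VAR = 5.0 ** 2        # variance of a GPS position fix (m^2)

x, p = 0.0, 1.0           # state estimate (m) and its variance

def predict(x, p, odo_step):
    """Dead-reckoning step: advance by the odometry-measured displacement."""
    return x + odo_step, p + ODO_VAR

def update(x, p, gps_pos):
    """Fuse a GPS fix with the dead-reckoned estimate."""
    k = p / (p + GPS_VAR)            # Kalman gain
    return x + k * (gps_pos - x), (1 - k) * p

odometry = [10.0] * 10               # ten 10 m steps from wheel rotations
gps_fixes = {0: 9.0, 5: 52.0}        # fixes only at steps 0 and 5 (tunnel gap)

for i, step in enumerate(odometry):
    x, p = predict(x, p, step)
    if i in gps_fixes:
        x, p = update(x, p, gps_fixes[i])
    print(f"step {i}: position ~ {x:6.1f} m, std ~ {p ** 0.5:.1f} m")
```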

Autonomous navigation in robotics

Dead reckoning is utilized in some robotic applications. It is usually used to reduce the need for sensing technology, such as ultrasonic sensors, GPS, or placement of some linear and rotary encoders, in an autonomous robot, thus greatly reducing cost and complexity at the expense of performance and repeatability. The proper utilization of dead reckoning in this sense would be to supply a known percentage of electrical power or hydraulic pressure to the robot's drive motors over a given amount of time from a general starting point. Dead reckoning is not totally accurate, which can lead to errors in distance estimates ranging from a few millimeters (in CNC machining) to kilometers (in UAVs), based upon the duration of the run, the speed of the robot, the length of the run, and several other factors.

Pedestrian dead reckoning

With the increased sensor offering in smartphones, built-in accelerometers can be used as a pedometer and built-in magnetometer as a compass heading provider. Pedestrian dead reckoning (PDR) can be used to supplement other navigation methods in a similar way to automotive navigation, or to extend navigation into areas where other navigation systems are unavailable.

In a simple implementation, the user holds their phone in front of them and each step causes position to move forward a fixed distance in the direction measured by the compass. Accuracy is limited by the sensor precision, magnetic disturbances inside structures, and unknown variables such as carrying position and stride length. Another challenge is differentiating walking from running, and recognizing movements like bicycling, climbing stairs, or riding an elevator.
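
A deliberately simplified sketch of such an implementation is below; the fixed stride length and the step and heading data are made-up assumptions.

```python
import math

# Simple pedestrian dead reckoning: each detected step moves the position a
# fixed stride length in the direction reported by the compass.

STRIDE_M = 0.7                      # assumed stride length in metres

def pdr_update(position, heading_deg, stride=STRIDE_M):
    x, y = position
    x += stride * math.sin(math.radians(heading_deg))   # east component
    y += stride * math.cos(math.radians(heading_deg))   # north component
    return x, y

position = (0.0, 0.0)
compass_headings = [0, 0, 0, 90, 90, 90]   # walk north 3 steps, then east 3
for heading in compass_headings:
    position = pdr_update(position, heading)

print(f"estimated position: east {position[0]:.1f} m, north {position[1]:.1f} m")
# Errors in stride length and magnetic disturbances accumulate with every step,
# which is why PDR is usually fused with other positioning sources.
```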

Before phone-based systems existed, many custom PDR systems were built. While a pedometer can only measure linear distance traveled, PDR systems have an embedded magnetometer for heading measurement. Custom PDR systems can take many forms, including special boots, belts, and watches, where the variability of carrying position has been minimized to make better use of the magnetometer heading. True dead reckoning is fairly complicated, as it is important not only to minimize basic drift, but also to handle different carrying scenarios and movements, as well as hardware differences across phone models.

Directional dead reckoning

The south-pointing chariot was an ancient Chinese device consisting of a two-wheeled horse-drawn vehicle which carried a pointer that was intended always to aim to the south, no matter how the chariot turned. The chariot pre-dated the navigational use of the magnetic compass, and could not detect which direction was south. Instead it used a kind of directional dead reckoning: at the start of a journey, the pointer was aimed southward by hand, using local knowledge or astronomical observations, e.g. of the Pole Star. Then, as it traveled, a mechanism possibly containing differential gears used the different rotational speeds of the two wheels to turn the pointer relative to the body of the chariot by the angle of turns made (subject to available mechanical accuracy), keeping the pointer aiming in its original direction, to the south. Errors, as always with dead reckoning, would accumulate as distance traveled increased.
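
In modern terms the chariot performs differential odometry. The sketch below illustrates the arithmetic with an assumed wheel track and made-up wheel distances; the original device did the equivalent mechanically with gears.

```python
import math

# Directional dead reckoning, south-pointing-chariot style: the difference in
# distance rolled by the two wheels gives the angle the vehicle has turned,
# and the pointer is rotated back by the same angle.  Dimensions are assumed.

WHEEL_TRACK = 1.5          # distance between the two wheels, metres

pointer_offset = 0.0       # pointer angle relative to the chariot body, radians
for left_dist, right_dist in [(3.0, 3.0), (2.0, 3.0), (3.0, 2.0), (1.0, 2.5)]:
    turn = (right_dist - left_dist) / WHEEL_TRACK   # chariot turn angle, radians
    pointer_offset -= turn                          # gear train counter-rotates

print(f"pointer is held {math.degrees(pointer_offset):+.1f} deg "
      "relative to the body, cancelling the chariot's net turn")
```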

For networked games

Networked games and simulation tools routinely use dead reckoning to predict where an actor should be right now, using its last known kinematic state (position, velocity, acceleration, orientation, and angular velocity). This is needed primarily because it is impractical to send network updates at the rate at which most games run, e.g. 60 Hz. The basic solution starts by projecting into the future using linear physics:

P_t = P_0 + V_0 T_t + ½ A_0 T_t²

where P_0, V_0 and A_0 are the last known position, velocity and acceleration, and T_t is the time elapsed since that state was received.

This formula is used to move the object until a new update is received over the network. At that point, the problem is that there are now two kinematic states: the currently estimated position and the just received, actual position. Resolving these two states in a believable way can be quite complex. One approach is to create a curve (e.g. cubic Bézier splines, centripetal Catmull–Rom splines, and Hermite curves) between the two states while still projecting into the future. Another technique is to use projective velocity blending, which is the blending of two projections (last known and current) where the current projection uses a blending between the last known and current velocity over a set time.

The first equation calculates a blended velocity:

V_b = V_0 + (V'_0 − V_0) T̂

given the client-side velocity at the time of the last server update, V_0, and the last known server-side velocity, V'_0. This essentially blends from the client-side velocity towards the server-side velocity for a smooth transition. Note that T̂ should go from zero (at the time of the server update) to one (at the time at which the next update should be arriving). A late server update is unproblematic as long as T̂ remains at one.

Next, two positions are calculated. Firstly, the blended velocity V_b and the last known server-side acceleration A'_0 are used to project a position from the client-side start position P_0:

P_t = P_0 + V_b T_t + ½ A'_0 T_t²

where T_t is the time which has passed since the last server update. Secondly, the same equation is used with the last known server-side parameters to project a position from the last known server-side position P'_0 and velocity V'_0:

P'_t = P'_0 + V'_0 T_t + ½ A'_0 T_t²

Finally, the new position to display on the client is the result of interpolating from the projected position based on client information, P_t, towards the projected position based on the last known server information, P'_t:

Pos = P_t + (P'_t − P_t) T̂

The resulting movement smoothly resolves the discrepancy between client-side and server-side information, even if this server-side information arrives infrequently or inconsistently. It is also free of the oscillations that spline-based interpolation may suffer from.
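
A compact sketch of projective velocity blending using the notation above; the concrete positions, velocities, update interval and frame timing are assumed example values.

```python
import numpy as np

# Projective velocity blending as described above.  Unprimed values (p0, v0)
# are the client-side state at the time of the last server update; values with
# an "s" suffix stand for the primed (last known server-side) state.

def projective_velocity_blend(p0, v0, p0s, v0s, a0s, t_since_update, t_hat):
    """Return the position to display; t_hat goes 0 -> 1 over the update interval."""
    t = t_since_update
    v_b = v0 + (v0s - v0) * t_hat                    # blended velocity
    p_t = p0 + v_b * t + 0.5 * a0s * t * t           # projection from client state
    p_ts = p0s + v0s * t + 0.5 * a0s * t * t         # projection from server state
    return p_t + (p_ts - p_t) * t_hat                # blend the two projections

p0  = np.array([0.0, 0.0])      # client position at last server update
v0  = np.array([10.0, 0.0])     # client velocity at last server update
p0s = np.array([0.5, 0.2])      # last known server position
v0s = np.array([9.0, 1.0])      # last known server velocity
a0s = np.array([0.0, 0.0])      # last known server acceleration

update_interval = 0.2           # seconds between server updates (assumed)
for frame in range(5):
    t = frame * 0.05            # client frames between two server updates
    t_hat = min(t / update_interval, 1.0)
    pos = projective_velocity_blend(p0, v0, p0s, v0s, a0s, t, t_hat)
    print(f"t={t:.2f}s  display position {pos.round(3)}")
```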

Computer science

In computer science, dead reckoning refers to navigating an array data structure using indexes. Since every array element has the same size, the memory address of any element can be computed directly from its index together with the address of any other element in the array.

Given the following array:

A B C D E

knowing the memory address where the array starts, it is easy to compute the memory address of D, which is at index 3:

address(D) = address(A) + 3 × element_size

Likewise, knowing D's memory address, it is easy to compute the memory address of B:

address(B) = address(D) − 2 × element_size

This property is particularly important for performance when used in conjunction with arrays of structures because data can be directly accessed, without going through a pointer dereference.
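
The same arithmetic can be demonstrated with NumPy, which exposes the base address and element size of an array; the array contents below are arbitrary stand-ins for A–E.

```python
import numpy as np

# "Dead reckoning" within an array: with a known base address and a fixed
# element size, the address of any element follows directly from its index,
# with no pointer chasing.

arr = np.array([10, 20, 30, 40, 50], dtype=np.int32)   # stands in for A B C D E
base = arr.ctypes.data           # memory address of the first element ("A")
size = arr.itemsize              # 4 bytes per int32 element

addr_D = base + 3 * size         # address(D) = address(A) + 3 * element_size
addr_B = addr_D - 2 * size       # address(B) = address(D) - 2 * element_size

print(f"A at {base:#x}, D at {addr_D:#x}, B at {addr_B:#x}")
# Verify against NumPy's own per-element addresses.
assert addr_D == arr[3:].ctypes.data
assert addr_B == arr[1:].ctypes.data
```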

Saturday, September 21, 2024

Fermi problem

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Fermi_problem

A Fermi problem (or Fermi quiz, Fermi question, Fermi estimate), also known as an order-of-magnitude problem (or order-of-magnitude estimate, order estimation), is an estimation problem in physics or engineering education, designed to teach dimensional analysis or approximation of extreme scientific calculations. Fermi problems are usually back-of-the-envelope calculations. The estimation technique is named after physicist Enrico Fermi, who was known for his ability to make good approximate calculations with little or no actual data. Fermi problems typically involve making justified guesses about quantities and their variance or lower and upper bounds. In some cases, order-of-magnitude estimates can also be derived using dimensional analysis.

Historical background

An example is Enrico Fermi's estimate of the strength of the atomic bomb that detonated at the Trinity test, based on the distance traveled by pieces of paper he dropped from his hand during the blast. Fermi's estimate of 10 kilotons of TNT was well within an order of magnitude of the now-accepted value of 21 kilotons.

Examples

Fermi questions are often extreme in nature, and cannot usually be solved using common mathematical or scientific information.

Example questions given by the official Fermi Competition:

"If the mass of one teaspoon of water could be converted entirely into energy in the form of heat, what volume of water, initially at room temperature, could it bring to a boil? (litres)."

"How much does the Thames River heat up in going over the Fanshawe Dam? (Celsius degrees)."

"What is the mass of all the automobiles scrapped in North America this month? (kilograms)."

Possibly the most famous Fermi Question is the Drake equation, which seeks to estimate the number of intelligent civilizations in the galaxy. The basic question of why, if there were a significant number of such civilizations, human civilization has never encountered any others is called the Fermi paradox.

Advantages and scope

Scientists often look for Fermi estimates of the answer to a problem before turning to more sophisticated methods to calculate a precise answer. This provides a useful check on the results. While the estimate is almost certainly incorrect, it is also a simple calculation that allows for easy error checking, and for finding faulty assumptions if the figure produced is far beyond what we might reasonably expect. By contrast, precise calculations can be extremely complex but with the expectation that the answer they produce is correct. The far larger number of factors and operations involved can obscure a very significant error, either in the mathematical process or in the assumptions the equation is based on, but the result may still be assumed to be right because it has been derived from a precise formula that is expected to yield good results. Without a reasonable frame of reference to work from, it is seldom clear if a result is acceptably precise or is many orders of magnitude (tens or hundreds of times) too big or too small. The Fermi estimation gives a quick, simple way to obtain this frame of reference for what might reasonably be expected to be the answer.

As long as the initial assumptions in the estimate are reasonable quantities, the result obtained will give an answer within the same scale as the correct result, and if not gives a base for understanding why this is the case. For example, suppose a person was asked to determine the number of piano tuners in Chicago. If their initial estimate told them there should be a hundred or so, but the precise answer tells them there are many thousands, then they know they need to find out why there is this divergence from the expected result. First looking for errors, then for factors the estimation did not take account of – does Chicago have a number of music schools or other places with a disproportionately high ratio of pianos to people? Whether close or very far from the observed results, the context the estimation provides gives useful information both about the process of calculation and the assumptions that have been used to look at problems.
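
For concreteness, here is one version of the piano-tuner estimate written out as a calculation; every input is an assumed round number, which is exactly the point of a Fermi estimate.

```python
# Back-of-the-envelope version of the classic piano-tuner estimate.
# All inputs are assumed round figures, not measured data.

population            = 9_000_000   # people in the Chicago metropolitan area
people_per_household  = 2
households_with_piano = 1 / 20      # one household in twenty owns a piano
tunings_per_year      = 1           # each piano tuned about once a year
tunings_per_day       = 4           # pianos one tuner can service in a day
working_days_per_year = 250

pianos = population / people_per_household * households_with_piano
tunings_needed = pianos * tunings_per_year
tuners = tunings_needed / (tunings_per_day * working_days_per_year)
print(f"~{pianos:,.0f} pianos -> ~{tuners:.0f} piano tuners")   # ~225 tuners
```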

Fermi estimates are also useful in approaching problems where the optimal choice of calculation method depends on the expected size of the answer. For instance, a Fermi estimate might indicate whether the internal stresses of a structure are low enough that it can be accurately described by linear elasticity; or if the estimate already bears significant relationship in scale relative to some other value, for example, if a structure will be over-engineered to withstand loads several times greater than the estimate.

Although Fermi calculations are often not accurate, as there may be many problems with their assumptions, this sort of analysis does inform one what to look for to get a better answer. For the above example, one might try to find a better estimate of the number of pianos tuned by a piano tuner in a typical day, or look up an accurate number for the population of Chicago. It also gives a rough estimate that may be good enough for some purposes: if a person wants to start a store in Chicago that sells piano tuning equipment, and calculates that they need 10,000 potential customers to stay in business, they can reasonably assume that the above estimate is far enough below 10,000 that they should consider a different business plan (and, with a little more work, they could compute a rough upper bound on the number of piano tuners by considering the most extreme reasonable values that could appear in each of their assumptions).

Explanation

Fermi estimates generally work because the estimations of the individual terms are often close to correct, and overestimates and underestimates help cancel each other out. That is, if there is no consistent bias, a Fermi calculation that involves the multiplication of several estimated factors (such as the number of piano tuners in Chicago) will probably be more accurate than might be first supposed.

In detail, multiplying estimates corresponds to adding their logarithms; thus one obtains a sort of Wiener process or random walk on the logarithmic scale, which diffuses as √n (in the number of terms n). In discrete terms, the number of overestimates minus underestimates will have a binomial distribution. In continuous terms, if one makes a Fermi estimate of n steps, each with standard deviation σ units on the log scale from the actual value, then the overall estimate will have standard deviation σ√n, since the standard deviation of a sum scales as √n in the number of summands.

For instance, if one makes a 9-step Fermi estimate, at each step overestimating or underestimating the correct number by a factor of 2 (i.e. with a standard deviation of a factor of 2 on the log scale), then after 9 steps the standard error will have grown by a factor of √9 = 3, i.e. to a factor of 2³ = 8. Thus one will expect to be within 1/8 to 8 times the correct value – within an order of magnitude, and much less than the worst case of erring by a factor of 2⁹ = 512 (about 2.71 orders of magnitude). If one has a shorter chain or estimates more accurately, the overall estimate will be correspondingly better.
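
A quick Monte Carlo check of this argument, assuming each of the nine factors is independently off by exactly a factor of 2, up or down with equal probability:

```python
import math
import random

# Simulate nine multiplicative factors, each off by a factor of 2 up or down,
# and see how far the product typically lands from the true value.

def one_estimate(steps=9):
    log_error = sum(random.choice([-1, 1]) for _ in range(steps))  # in log2 units
    return 2.0 ** log_error

trials = [one_estimate() for _ in range(100_000)]
within_8x = sum(1 / 8 <= e <= 8 for e in trials) / len(trials)
rms_log10 = math.sqrt(sum(math.log10(e) ** 2 for e in trials) / len(trials))
print(f"fraction within a factor of 8: {within_8x:.2%}")
print(f"r.m.s. error: {rms_log10:.2f} orders of magnitude (worst case 2.71)")
```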

Light-second

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Light-second

The light-second is a unit of length useful in astronomy, telecommunications and relativistic physics. It is defined as the distance that light travels in free space in one second, and is equal to exactly 299792458 m (approximately 983571055 ft or 186282 mi).

Just as the second forms the basis for other units of time, the light-second can form the basis for other units of length, ranging from the light-nanosecond (299.8 mm or just under one international foot) to the light-minute, light-hour and light-day, which are sometimes used in popular science publications. The more commonly used light-year is also currently defined to be equal to precisely 31557600 light-seconds, since the definition of a year is based on a Julian year (not the Gregorian year) of exactly 365.25 d, each of exactly 86400 SI seconds.

Use in telecommunications

Communications signals on Earth rarely travel at precisely the speed of light in free space. Distances in fractions of a light-second are useful for planning telecommunications networks.

  • One light-nanosecond is almost 300 millimetres (299.8 mm, 5 mm less than one foot), which limits the speed of data transfer between different parts of a computer.
  • One light-microsecond is about 300 metres.
  • The mean distance, over land, between opposite sides of the Earth is 66.8 light-milliseconds.
  • Communications satellites are typically 1.337 light-milliseconds (low Earth orbit) to 119.4 light-milliseconds (geostationary orbit) from the surface of the Earth. Hence there will always be a delay of at least a quarter of a second in a communication via geostationary satellite (119.4 ms times 2); this delay is just perceptible in a transoceanic telephone conversation routed by satellite. The reply is delayed by a further quarter of a second, which is clearly noticeable during interviews or discussions on TV when sent over satellite, as the rough calculation after this list illustrates.
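
The rough calculation referred to above, using the light-millisecond figures from the list (processing and routing delays are ignored):

```python
# Latency implied by the light-millisecond distances above.

LIGHT_MS_LEO = 1.337      # low Earth orbit altitude in light-milliseconds
LIGHT_MS_GEO = 119.4      # geostationary altitude in light-milliseconds

def round_trip_ms(light_ms_up):
    one_way = 2 * light_ms_up     # ground -> satellite -> ground
    return 2 * one_way            # and back again for the reply

print(f"LEO question-and-reply delay: at least {round_trip_ms(LIGHT_MS_LEO):.1f} ms")
print(f"GEO question-and-reply delay: at least {round_trip_ms(LIGHT_MS_GEO):.1f} ms")  # ~478 ms
```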

Use in astronomy

The yellow shell indicating one light-day distance from the Sun compares in size with the positions of Voyager 1 and Pioneer 10 (red and green arrows respectively). It is larger than the heliosphere's termination shock (blue shell) but smaller than Comet Hale-Bopp's orbit (faint orange ellipse below).
The faint yellow sphere centred on the Sun has a radius of one light-minute. For comparison, sizes of Rigel (the blue star in the top left) and Aldebaran (the red star in the top right) are shown to scale. The large yellow ellipse represents Mercury's orbit.

The light-second is a convenient unit for measuring distances in the inner Solar System, since it corresponds very closely to the radiometric data used to determine them. (The match is not exact for an Earth-based observer because of a very small correction for the effects of relativity.) The value of the astronomical unit (roughly the distance between Earth and the Sun) in light-seconds is a fundamental measurement for the calculation of modern ephemerides (tables of planetary positions). It is usually quoted as "light-time for unit distance" in tables of astronomical constants, and its currently accepted value is 499.004786385(20) s.

  • The mean diameter of Earth is about 0.0425 light-seconds.
  • The average distance between Earth and the Moon (the lunar distance) is about 1.282 light-seconds.
  • The diameter of the Sun is about 4.643 light-seconds.
  • The average distance between Earth and the Sun (the astronomical unit) is 499.0 light-seconds.

Multiples of the light-second can be defined, although apart from the light-year, they are more used in popular science publications than in research works. For example:

  • A light-minute is 60 light-seconds, and so the average distance between Earth and the Sun is 8.317 light-minutes.
  • The average distance between Pluto and the Sun (34.72 AU) is 4.81 light-hours.
  • Humanity's most distant artificial object, Voyager 1, has an interstellar velocity of 3.57 AU per year, or 29.7 light-minutes per year. As of 2023 the probe, launched in 1977, is over 22 light-hours from Earth and the Sun, and is expected to reach a distance of one light-day around November 2026 – February 2027.

Speed of electricity

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Speed_of_electricity

The word electricity refers generally to the movement of electrons, or other charge carriers, through a conductor in the presence of a potential difference or an electric field. The speed of this flow has multiple meanings. In everyday electrical and electronic devices, the signals travel as electromagnetic waves typically at 50%–99% of the speed of light in vacuum. The electrons themselves move much more slowly. See drift velocity and electron mobility.

Electromagnetic waves

The speed at which energy or signals travel down a cable is actually the speed of the electromagnetic wave traveling along (guided by) the cable; that is, a cable is a form of waveguide. The propagation of the wave is affected by the interaction with the material(s) in and surrounding the cable, caused by the presence of electric charge carriers (interacting with the electric field component) and magnetic dipoles (interacting with the magnetic field component).

These interactions are typically described using mean field theory by the permeability and the permittivity of the materials involved. The energy/signal usually flows overwhelmingly outside the electric conductor of a cable. The purpose of the conductor is thus not to conduct energy, but to guide the energy-carrying wave.

Velocity of electromagnetic waves in good dielectrics

The velocity of electromagnetic waves in a low-loss dielectric is given by

v = c / √(ε_r μ_r)

where

  • c = 299,792,458 m/s is the speed of light in free space,
  • ε_r is the relative permittivity of the dielectric,
  • μ_r is the relative permeability of the dielectric (approximately 1 for most dielectrics).

Velocity of electromagnetic waves in good conductors

The velocity of transverse electromagnetic (TEM) mode waves in a good conductor is given by

v = √(2ω / (μσ)) = √(4πf / (μσ))

where

  • f = frequency,
  • ω = angular frequency = 2πf,
  • σ_Cu = conductivity of annealed copper = 5.96×10⁷ S/m,
  • σ_r = conductivity of the material relative to the conductivity of copper (for hard-drawn copper, σ_r may be as low as 0.97),
  • σ = σ_r σ_Cu,

and the permeability μ is defined as above in § Velocity of electromagnetic waves in good dielectrics.

This velocity is the speed with which electromagnetic waves penetrate into the conductor and is not the drift velocity of the conduction electrons. In copper at 60 Hz, v ≈ 3.2 m/s. As a consequence of Snell's law and this extremely low speed, electromagnetic waves always enter good conductors in a direction that is within a milliradian of normal to the surface, regardless of the angle of incidence.
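
A numerical check of the quoted figure, using the formula above with the permeability of free space (copper is essentially non-magnetic) and the conductivity of annealed copper:

```python
import math

# Speed at which a 60 Hz electromagnetic wave penetrates into copper,
# using v = sqrt(2*omega/(mu*sigma)).

MU_0  = 4 * math.pi * 1e-7     # permeability of free space, H/m (mu_r ~ 1 for copper)
SIGMA = 5.96e7                 # conductivity of annealed copper, S/m
f     = 60.0                   # frequency, Hz
omega = 2 * math.pi * f

v = math.sqrt(2 * omega / (MU_0 * SIGMA))
print(f"penetration velocity in copper at {f:.0f} Hz: {v:.1f} m/s")   # ~3.2 m/s
```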

Electromagnetic waves in circuits

In the theoretical investigation of electric circuits, the velocity of propagation of the electromagnetic field through space is usually not considered; the field is assumed, as a precondition, to be present throughout space. The magnetic component of the field is considered to be in phase with the current, and the electric component is considered to be in phase with the voltage. The electric field starts at the conductor, and propagates through space at the velocity of light, which depends on the material it is traveling through.

The electromagnetic fields do not move through space. It is the electromagnetic energy that moves. The corresponding fields simply grow and decline in a region of space in response to the flow of energy. At any point in space, the electric field corresponds not to the condition of the electric energy flow at that moment, but to that of the flow at a moment earlier. The latency is determined by the time required for the field to propagate from the conductor to the point under consideration. In other words, the greater the distance from the conductor, the more the electric field lags.

Since the velocity of propagation is very high – about 300,000 kilometers per second – the wave of an alternating or oscillating current, even of high frequency, is of considerable length. At 60 cycles per second, the wavelength is 5,000 kilometers, and even at 100,000 hertz, the wavelength is 3 kilometers. This is a very large distance compared to those typically used in field measurement and application.

The important part of the electric field of a conductor extends to the return conductor, which usually is only a few feet distant. At greater distance, the aggregate field can be approximated by the differential field between conductor and return conductor, which tend to cancel. Hence, the intensity of the electric field is usually inappreciable at a distance which is still small compared to the wavelength.

Within the range in which an appreciable field exists, this field is practically in phase with the flow of energy in the conductor. That is, the velocity of propagation has no appreciable effect unless the return conductor is very distant, or entirely absent, or the frequency is so high that the distance to the return conductor is an appreciable portion of the wavelength.

Charge carrier drift

The drift velocity deals with the average velocity of a particle, such as an electron, due to an electric field. In general, an electron will propagate randomly in a conductor at the Fermi velocity. Free electrons in a conductor follow a random path. Without the presence of an electric field, the electrons have no net velocity.

When a DC voltage is applied, the electron drift velocity increases in proportion to the strength of the electric field. The drift velocity in a 2 mm diameter copper wire carrying 1 ampere of current is approximately 8 cm per hour. AC voltages cause no net movement; the electrons oscillate back and forth in response to the alternating electric field, over a distance of a few micrometers – see example calculation.
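
An order-of-magnitude check of that figure, using the standard relation v = I / (n q A) with a commonly quoted free-electron density for copper (an assumed round value):

```python
import math

# Drift velocity in a 2 mm diameter copper wire carrying 1 A: v = I / (n * q * A).

I = 1.0                      # current, amperes
n = 8.5e28                   # free electrons per cubic metre in copper (assumed)
q = 1.602e-19                # elementary charge, coulombs
d = 2e-3                     # wire diameter, metres
A = math.pi * (d / 2) ** 2   # cross-sectional area, m^2

v = I / (n * q * A)
print(f"drift velocity: {v * 1e6:.0f} um/s  (~{v * 3600 * 100:.0f} cm per hour)")
```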

Giant oil and gas fields

From Wikipedia, the free encyclopedia
https://en.wikipedia.org/wiki/Giant_oil_and_gas_fields
...