Saturday, June 1, 2019

Engineering

From Wikipedia, the free encyclopedia

The InSight lander with solar panels deployed in a cleanroom
 
The steam engine, a major driver in the Industrial Revolution, underscores the importance of engineering in modern history. This beam engine is on display in the Technical University of Madrid.
 
Engineering is the use of scientific principles to design and build machines, structures, and other items, including bridges, roads, vehicles, and buildings. The discipline of engineering encompasses a broad range of more specialized fields of engineering, each with a more specific emphasis on particular areas of applied mathematics, applied science, and types of application.

The term engineering is derived from the Latin ingenium, meaning "cleverness", and ingeniare, meaning "to contrive, devise".

Definition

The American Engineers' Council for Professional Development (ECPD, the predecessor of ABET) has defined "engineering" as:
The creative application of scientific principles to design or develop structures, machines, apparatus, or manufacturing processes, or works utilizing them singly or in combination; or to construct or operate the same with full cognizance of their design; or to forecast their behavior under specific operating conditions; all as respects an intended function, economics of operation and safety to life and property.

History

Relief map of the Citadel of Lille, designed in 1668 by Vauban, the foremost military engineer of his age.
 
Engineering has existed since ancient times, when humans devised inventions such as the wedge, lever, wheel and pulley.

The term engineering is derived from the word engineer, which itself dates back to 1390, when an engine'er (literally, one who builds or operates a siege engine) referred to "a constructor of military engines." In this context, now obsolete, an "engine" referred to a military machine, i.e., a mechanical contraption used in war (for example, a catapult). Notable examples of the obsolete usage which have survived to the present day are military engineering corps, e.g., the U.S. Army Corps of Engineers.

The word "engine" itself is of even older origin, ultimately deriving from the Latin ingenium (c. 1250), meaning "innate quality, especially mental power, hence a clever invention."

Later, as the design of civilian structures, such as bridges and buildings, matured as a technical discipline, the term civil engineering entered the lexicon as a way to distinguish between those specializing in the construction of such non-military projects and those involved in the discipline of military engineering.

Ancient era

The Ancient Romans built aqueducts to bring a steady supply of clean and fresh water to cities and towns in the empire.
 
The pyramids in Egypt, the Acropolis and the Parthenon in Greece, the Roman aqueducts, Via Appia and the Colosseum, Teotihuacán, the Brihadeeswarar Temple of Thanjavur, among many others, stand as a testament to the ingenuity and skill of ancient civil and military engineers. Other monuments, no longer standing, such as the Hanging Gardens of Babylon and the Pharos of Alexandria, were important engineering achievements of their time and were considered among the Seven Wonders of the Ancient World.

The earliest civil engineer known by name is Imhotep. As one of the officials of the Pharaoh, Djosèr, he probably designed and supervised the construction of the Pyramid of Djoser (the Step Pyramid) at Saqqara in Egypt around 2630–2611 BC. Ancient Greece developed machines in both civilian and military domains. The Antikythera mechanism, the first known mechanical computer, and the mechanical inventions of Archimedes are examples of early mechanical engineering. Some of Archimedes' inventions as well as the Antikythera mechanism required sophisticated knowledge of differential gearing or epicyclic gearing, two key principles in machine theory that helped design the gear trains of the Industrial Revolution, and are still widely used today in diverse fields such as robotics and automotive engineering.

Ancient Chinese, Greek, Roman and Hungarian armies employed military machines and inventions such as artillery, which was developed by the Greeks around the 4th century BC, the trireme, the ballista, and the catapult. In the Middle Ages, the trebuchet was developed.

Renaissance era

A water-powered mine hoist used for raising ore, ca. 1556
 
Before the development of modern engineering, mathematics was used by artisans and craftsmen, such as millwrights, clockmakers, instrument makers and surveyors. Aside from these professions, universities were not believed to have had much practical significance to technology.

A standard reference for the state of mechanical arts during the Renaissance is given in the mining engineering treatise De re metallica (1556), which also contains sections on geology, mining and chemistry. De re metallica was the standard chemistry reference for the next 180 years.

Modern era

The application of the steam engine allowed coke to be substituted for charcoal in iron making, lowering the cost of iron, which provided engineers with a new material for building bridges. This bridge was made of cast iron, which was soon displaced by less brittle wrought iron as a structural material.
 
The science of classical mechanics, sometimes called Newtonian mechanics, formed the scientific basis of much of modern engineering. With the rise of engineering as a profession in the 18th century, the term became more narrowly applied to fields in which mathematics and science were applied to these ends. Similarly, in addition to military and civil engineering, the fields then known as the mechanic arts became incorporated into engineering. 

Canal building was an important engineering work during the early phases of the Industrial Revolution.

John Smeaton was the first self-proclaimed civil engineer and is often regarded as the "father" of civil engineering. He was an English civil engineer responsible for the design of bridges, canals, harbours, and lighthouses. He was also a capable mechanical engineer and an eminent physicist. Using a model water wheel, Smeaton conducted experiments for seven years, determining ways to increase efficiency. Smeaton introduced iron axles and gears to water wheels. Smeaton also made mechanical improvements to the Newcomen steam engine. Smeaton designed the third Eddystone Lighthouse (1755–59), where he pioneered the use of 'hydraulic lime' (a form of mortar which will set under water) and developed a technique involving dovetailed blocks of granite in the building of the lighthouse. He is important in the history, rediscovery, and development of modern cement, because he identified the compositional requirements needed to obtain "hydraulicity" in lime; work which led ultimately to the invention of Portland cement.

Applied science led to the development of the steam engine. The sequence of events began with the invention of the barometer and the measurement of atmospheric pressure by Evangelista Torricelli in 1643, followed by Otto von Guericke's demonstration of the force of atmospheric pressure using the Magdeburg hemispheres in 1656, and by the laboratory experiments of Denis Papin, who built experimental model steam engines and demonstrated the use of a piston, which he published in 1707. Edward Somerset, 2nd Marquess of Worcester, published a book of 100 inventions containing a method for raising waters similar to a coffee percolator. Samuel Morland, a mathematician and inventor who worked on pumps, left notes at the Vauxhall Ordnance Office on a steam pump design that Thomas Savery read. In 1698 Savery built a steam pump called "The Miner's Friend." It employed both vacuum and pressure. Iron merchant Thomas Newcomen, who built the first commercial piston steam engine in 1712, was not known to have any scientific training.

The application of steam-powered cast iron blowing cylinders for providing pressurized air for blast furnaces led to a large increase in iron production in the late 18th century. The higher furnace temperatures made possible with steam-powered blast allowed for the use of more lime in blast furnaces, which enabled the transition from charcoal to coke. These innovations lowered the cost of iron, making horse railways and iron bridges practical. The puddling process, patented by Henry Cort in 1784, produced large-scale quantities of wrought iron. Hot blast, patented by James Beaumont Neilson in 1828, greatly lowered the amount of fuel needed to smelt iron. With the development of the high-pressure steam engine, the power-to-weight ratio of steam engines made practical steamboats and locomotives possible. New steel-making processes, such as the Bessemer process and the open hearth furnace, ushered in an era of heavy engineering in the late 19th century.

One of the most famous engineers of the mid 19th century was Isambard Kingdom Brunel, who built railroads, dockyards and steamships.

Offshore platform, Gulf of Mexico
 
The Industrial Revolution created a demand for machinery with metal parts, which led to the development of several machine tools. Boring cast iron cylinders with precision was not possible until John Wilkinson invented his boring machine, which is considered the first machine tool. Other machine tools included the screw-cutting lathe, milling machine, turret lathe and the metal planer. Precision machining techniques were developed in the first half of the 19th century. These included the use of jigs to guide the machining tool over the work and fixtures to hold the work in the proper position. Machine tools and machining techniques capable of producing interchangeable parts led to large-scale factory production by the late 19th century.

The United States census of 1850 listed the occupation of "engineer" for the first time with a count of 2,000. There were fewer than 50 engineering graduates in the U.S. before 1865. In 1870 there were a dozen U.S. mechanical engineering graduates, with that number increasing to 43 per year in 1875. In 1890, there were 6,000 engineers in civil, mining, mechanical, and electrical engineering.

There was no chair of applied mechanism and applied mechanics at Cambridge until 1875, and no chair of engineering at Oxford until 1907. Germany established technical universities earlier.

The foundations of electrical engineering in the 1800s included the experiments of Alessandro Volta, Michael Faraday, Georg Ohm and others, and the invention of the electric telegraph in 1816 and the electric motor in 1872. The theoretical work of James Clerk Maxwell and Heinrich Hertz in the late 19th century gave rise to the field of electronics. The later inventions of the vacuum tube and the transistor further accelerated the development of electronics to such an extent that electrical and electronics engineers currently outnumber their colleagues of any other engineering specialty.

Chemical engineering developed in the late nineteenth century. Industrial scale manufacturing demanded new materials and new processes, and by 1880 the need for large scale production of chemicals was such that a new industry was created, dedicated to the development and large scale manufacturing of chemicals in new industrial plants. The role of the chemical engineer was the design of these chemical plants and processes.

The solar furnace at Odeillo in the Pyrénées-Orientales in France can reach temperatures up to 3,500 °C (6,330 °F)
 
Aeronautical engineering deals with the design of aircraft, while aerospace engineering is a more modern term that expands the reach of the discipline to include spacecraft design. Its origins can be traced back to the aviation pioneers around the start of the 20th century, although the work of Sir George Cayley has recently been dated as being from the last decade of the 18th century. Early knowledge of aeronautical engineering was largely empirical, with some concepts and skills imported from other branches of engineering.

The first PhD in engineering (technically, applied science and engineering) awarded in the United States went to Josiah Willard Gibbs at Yale University in 1863; it was also the second PhD awarded in science in the U.S.

Only a decade after the successful flights by the Wright brothers, there was extensive development of aeronautical engineering through development of military aircraft that were used in World War I. Meanwhile, research to provide fundamental background science continued by combining theoretical physics with experiments.

Main branches of engineering

Engineering is a broad discipline which is often broken down into several sub-disciplines. Although an engineer will usually be trained in a specific discipline, he or she may become multi-disciplined through experience. Engineering is often characterized as having four main branches: chemical engineering, civil engineering, electrical engineering, and mechanical engineering.

Chemical engineering

Chemical engineering is the application of physics, chemistry, biology, and engineering principles in order to carry out chemical processes on a commercial scale, such as the manufacture of commodity chemicals, specialty chemicals, petroleum refining, microfabrication, fermentation, and biomolecule production.

Civil engineering

Civil engineering is the design and construction of public and private works, such as infrastructure (airports, roads, railways, water supply, and treatment etc.), bridges, tunnels, dams, and buildings. Civil engineering is traditionally broken into a number of sub-disciplines, including structural engineering, environmental engineering, and surveying. It is traditionally considered to be separate from military engineering.

Electrical engineering

Electrical engineering is the design, study, and manufacture of various electrical and electronic systems, such as broadcast engineering, electrical circuits, generators, motors, electromagnetic/electromechanical devices, electronic devices, electronic circuits, optical fibers, optoelectronic devices, computer systems, telecommunications, instrumentation, controls, and electronics.

Mechanical engineering

Mechanical engineering is the design and manufacture of physical or mechanical systems, such as power and energy systems, aerospace/aircraft products, weapon systems, transportation products, engines, compressors, powertrains, kinematic chains, vacuum technology, vibration isolation equipment, manufacturing, and mechatronics.

Interdisciplinary engineering

Interdisciplinary engineering draws from more than one of the principal branches of the practice. Historically, naval engineering and mining engineering were major branches. Other engineering fields are manufacturing engineering, acoustical engineering, corrosion engineering, instrumentation and control, aerospace, automotive, computer, electronic, information engineering, petroleum, environmental, systems, audio, software, architectural, agricultural, biosystems, biomedical, geological, textile, industrial, materials, and nuclear engineering. These and other branches of engineering are represented in the 36 licensed member institutions of the UK Engineering Council.

New specialties sometimes combine with the traditional fields and form new branches – for example, Earth systems engineering and management involves a wide range of subject areas including engineering studies, environmental science, engineering ethics and philosophy of engineering.

Practice

One who practices engineering is called an engineer, and those licensed to do so may have more formal designations such as Professional Engineer, Chartered Engineer, Incorporated Engineer, Ingenieur, European Engineer, or Designated Engineering Representative.

Methodology

Design of a turbine requires collaboration of engineers from many fields, as the system involves mechanical, electromagnetic and chemical processes. The blades, rotor and stator as well as the steam cycle all need to be carefully designed and optimized.
 
In the engineering design process, engineers apply mathematics and sciences such as physics to find novel solutions to problems or to improve existing solutions. More than ever, engineers are now required to have a proficient knowledge of relevant sciences for their design projects. As a result, many engineers continue to learn new material throughout their career. 

If multiple solutions exist, engineers weigh each design choice based on its merit and choose the solution that best matches the requirements. The crucial and unique task of the engineer is to identify, understand, and interpret the constraints on a design in order to yield a successful result. It is generally insufficient to build a technically successful product; it must also meet further requirements.

Constraints may include available resources, physical, imaginative or technical limitations, flexibility for future modifications and additions, and other factors, such as requirements for cost, safety, marketability, productivity, and serviceability. By understanding the constraints, engineers derive specifications for the limits within which a viable object or system may be produced and operated.
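A minimal Python sketch of this kind of trade-off weighting, using hypothetical criteria, weights, and scores, might look like the following:

    # Weighted scoring of competing designs (criteria, weights and
    # scores are hypothetical placeholders, not a standard method).
    criteria = {"cost": 0.4, "safety": 0.3, "serviceability": 0.2, "marketability": 0.1}

    designs = {
        "design_a": {"cost": 7, "safety": 9, "serviceability": 6, "marketability": 5},
        "design_b": {"cost": 9, "safety": 6, "serviceability": 7, "marketability": 8},
    }

    def weighted_score(scores, weights):
        # Sum of each criterion score multiplied by its weight.
        return sum(weights[c] * scores[c] for c in weights)

    best = max(designs, key=lambda d: weighted_score(designs[d], criteria))
    print(best)

Here the weights encode the relative importance of each constraint; changing them changes which design "best matches the requirements".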

Problem solving

A drawing for a booster engine for steam locomotives. Engineering is applied to design, with emphasis on function and the utilization of mathematics and science.
 
Engineers use their knowledge of science, mathematics, logic, economics, and appropriate experience or tacit knowledge to find suitable solutions to a problem. Creating an appropriate mathematical model of a problem often allows them to analyze it (sometimes definitively), and to test potential solutions. 

Usually, multiple reasonable solutions exist, so engineers must evaluate the different design choices on their merits and choose the solution that best meets their requirements. Genrich Altshuller, after gathering statistics on a large number of patents, suggested that compromises are at the heart of "low-level" engineering designs, while at a higher level the best design is one which eliminates the core contradiction causing the problem. 

Engineers typically attempt to predict how well their designs will perform to their specifications prior to full-scale production. They use, among other things: prototypes, scale models, simulations, destructive tests, nondestructive tests, and stress tests. Testing ensures that products will perform as expected.

Engineers take on the responsibility of producing designs that will perform as well as expected and will not cause unintended harm to the public at large. Engineers typically include a factor of safety in their designs to reduce the risk of unexpected failure. 
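In its simplest form, a factor of safety is the ratio of a design's capacity to the load it is expected to carry. A minimal sketch, with illustrative numbers:

    # Simplest form of a factor-of-safety check: capacity divided by
    # expected demand, compared against a chosen design margin.
    yield_strength_mpa = 250.0   # material capacity (illustrative value)
    working_stress_mpa = 100.0   # expected in-service stress (illustrative value)
    required_fos = 2.0           # margin chosen by the engineer

    fos = yield_strength_mpa / working_stress_mpa
    print(f"factor of safety = {fos:.2f}", "acceptable" if fos >= required_fos else "redesign")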

The study of failed products is known as forensic engineering and can help the product designer in evaluating his or her design in the light of real conditions. The discipline is of greatest value after disasters, such as bridge collapses, when careful analysis is needed to establish the cause or causes of the failure.

Computer use

A computer simulation of high velocity air flow around a Space Shuttle orbiter during re-entry. Solutions to the flow require modelling of the combined effects of fluid flow and the heat equations.
 
As with all modern scientific and technological endeavors, computers and software play an increasingly important role. As well as the typical business application software, there are a number of computer-aided applications (computer-aided technologies) specifically for engineering. Computers can be used to generate models of fundamental physical processes, which can be solved using numerical methods.
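For example, heat conduction in a bar can be modeled by the one-dimensional heat equation and solved with a simple explicit finite-difference scheme; the sketch below is a minimal illustration with arbitrary parameters, not a production solver:

    # Explicit finite differences for the 1D heat equation u_t = a * u_xx.
    # Parameters are illustrative; dt is chosen so a*dt/dx**2 <= 0.5,
    # the stability limit for this scheme.
    a, dx, dt = 1.0, 0.1, 0.004
    n_cells, n_steps = 21, 100

    u = [0.0] * n_cells
    u[n_cells // 2] = 100.0          # initial hot spot in the middle

    r = a * dt / dx ** 2
    for _ in range(n_steps):
        u = [u[i] if i in (0, n_cells - 1)   # fixed boundary temperatures
             else u[i] + r * (u[i + 1] - 2 * u[i] + u[i - 1])
             for i in range(n_cells)]

    print(["%.1f" % v for v in u])   # heat has diffused outward from the centre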

Graphic representation of a minute fraction of the WWW, demonstrating hyperlinks
 
One of the most widely used design tools in the profession is computer-aided design (CAD) software. It enables engineers to create 3D models, 2D drawings, and schematics of their designs. CAD together with digital mockup (DMU) and CAE software such as finite element method analysis or analytic element method allows engineers to create models of designs that can be analyzed without having to make expensive and time-consuming physical prototypes. 

These allow products and components to be checked for flaws; to assess fit and assembly; to study ergonomics; and to analyze static and dynamic characteristics of systems such as stresses, temperatures, electromagnetic emissions, electrical currents and voltages, digital logic levels, fluid flows, and kinematics. Access to and distribution of all this information is generally organized with the use of product data management software.

There are also many tools to support specific engineering tasks such as computer-aided manufacturing (CAM) software to generate CNC machining instructions; manufacturing process management software for production engineering; EDA for printed circuit board (PCB) and circuit schematics for electronic engineers; MRO applications for maintenance management; and Architecture, engineering and construction (AEC) software for civil engineering.

In recent years the use of computer software to aid the development of goods has collectively come to be known as product lifecycle management (PLM).

Social context

Robotic Kismet can produce a range of facial expressions.
 
The engineering profession engages in a wide range of activities, from large collaborations at the societal level to smaller individual projects. Almost all engineering projects are beholden to some sort of financing agency: a company, a set of investors, or a government. The few types of engineering that are minimally constrained by such issues are pro bono engineering and open-design engineering.

By its very nature engineering has interconnections with society, culture and human behavior. Every product or construction used by modern society is influenced by engineering. The results of engineering activity influence changes to the environment, society and economies, and its application brings with it a responsibility for public safety.

Engineering projects can be subject to controversy. Examples from different engineering disciplines include the development of nuclear weapons, the Three Gorges Dam, the design and use of sport utility vehicles and the extraction of oil. In response, some western engineering companies have enacted serious corporate and social responsibility policies. 

Engineering is a key driver of innovation and human development. Sub-Saharan Africa, in particular, has a very small engineering capacity which results in many African nations being unable to develop crucial infrastructure without outside aid. The attainment of many of the Millennium Development Goals requires the achievement of sufficient engineering capacity to develop infrastructure and sustainable technological development.

Radar, GPS, lidar, ... are all combined to provide proper navigation and obstacle avoidance (vehicle developed for 2007 DARPA Urban Challenge)
 
All overseas development and relief NGOs make considerable use of engineers to apply solutions in disaster and development scenarios, and a number of charitable organizations aim to use engineering directly for the good of mankind.

Engineering companies in many established economies are facing significant challenges with regard to the number of professional engineers being trained, compared with the number retiring. This problem is very prominent in the UK, where engineering has a poor image and low status. There are many negative economic and political issues that this can cause, as well as ethical issues. It is widely agreed that the engineering profession faces an "image crisis", rather than being fundamentally an unattractive career. Much work is needed to avoid huge problems in the UK and other western economies.

Code of ethics

Many engineering societies have established codes of practice and codes of ethics to guide members and inform the public at large. The National Society of Professional Engineers code of ethics states:
Engineering is an important and learned profession. As members of this profession, engineers are expected to exhibit the highest standards of honesty and integrity. Engineering has a direct and vital impact on the quality of life for all people. Accordingly, the services provided by engineers require honesty, impartiality, fairness, and equity, and must be dedicated to the protection of the public health, safety, and welfare. Engineers must perform under a standard of professional behavior that requires adherence to the highest principles of ethical conduct.
In Canada, many engineers wear the Iron Ring as a symbol and reminder of the obligations and ethics associated with their profession.

Relationships with other disciplines

Science

Scientists study the world as it is; engineers create the world that has never been.
Engineers, scientists and technicians at work on target positioner inside National Ignition Facility (NIF) target chamber
 
There exists an overlap between the sciences and engineering practice; in engineering, one applies science. Both areas of endeavor rely on accurate observation of materials and phenomena. Both use mathematics and classification criteria to analyze and communicate observations.

Scientists may also have to complete engineering tasks, such as designing experimental apparatus or building prototypes. Conversely, in the process of developing technology engineers sometimes find themselves exploring new phenomena, thus becoming, for the moment, scientists or more precisely "engineering scientists".

The International Space Station is used to conduct science experiments in outer space.
 
In the book What Engineers Know and How They Know It, Walter Vincenti asserts that engineering research has a character different from that of scientific research. First, it often deals with areas in which the basic physics or chemistry are well understood, but the problems themselves are too complex to solve in an exact manner. 

There is a "real and important" difference between engineering and physics, similar to the difference between any science and its associated technology. Physics is an exploratory science that seeks knowledge of principles, while engineering uses that knowledge for practical applications. The former expresses an understanding as a mathematical principle, while the latter measures the variables involved and creates technology. For technology, physics is an auxiliary; in a way, technology is applied physics. Though physics and engineering are interrelated, a physicist is not thereby trained to do an engineer's job and would typically require additional, relevant training. Physicists and engineers engage in different lines of work, although PhD physicists who specialize in engineering physics or applied physics may hold titles such as technology officer, R&D engineer, or systems engineer.

An example of Vincenti's first point is the use of numerical approximations to the Navier–Stokes equations to describe aerodynamic flow over an aircraft, or the use of Miner's rule to calculate fatigue damage. Second, engineering research employs many semi-empirical methods that are foreign to pure scientific research, one example being the method of parameter variation.

As stated by Fung et al. in the revision to the classic engineering text Foundations of Solid Mechanics:
Engineering is quite different from science. Scientists try to understand nature. Engineers try to make things that do not exist in nature. Engineers stress innovation and invention. To embody an invention the engineer must put his idea in concrete terms, and design something that people can use. That something can be a complex system, device, a gadget, a material, a method, a computing program, an innovative experiment, a new solution to a problem, or an improvement on what already exists. Since a design has to be realistic and functional, it must have its geometry, dimensions, and characteristics data defined. In the past engineers working on new designs found that they did not have all the required information to make design decisions. Most often, they were limited by insufficient scientific knowledge. Thus they studied mathematics, physics, chemistry, biology and mechanics. Often they had to add to the sciences relevant to their profession. Thus engineering sciences were born.
Although engineering solutions make use of scientific principles, engineers must also take into account safety, efficiency, economy, reliability, and constructability or ease of fabrication, as well as environmental, ethical and legal considerations such as patent infringement or liability in the case of failure of the solution.

Medicine and biology

A 3 tesla clinical MRI scanner.
 
The study of the human body, albeit from different directions and for different purposes, is an important common link between medicine and some engineering disciplines. Medicine aims to sustain, repair, enhance and even replace functions of the human body, if necessary, through the use of technology.

Genetically engineered mice expressing green fluorescent protein, which glows green under blue light. The central mouse is wild-type.
 
Modern medicine can replace several of the body's functions through the use of artificial organs and can significantly alter the function of the human body through artificial devices such as, for example, brain implants and pacemakers. The fields of bionics and medical bionics are dedicated to the study of synthetic implants pertaining to natural systems. 

Conversely, some engineering disciplines view the human body as a biological machine worth studying and are dedicated to emulating many of its functions by replacing biology with technology. This has led to fields such as artificial intelligence, neural networks, fuzzy logic, and robotics. There are also substantial interdisciplinary interactions between engineering and medicine.

Both fields provide solutions to real world problems. This often requires moving forward before phenomena are completely understood in a more rigorous scientific sense; therefore experimentation and empirical knowledge are an integral part of both.

Medicine, in part, studies the function of the human body. The human body, as a biological machine, has many functions that can be modeled using engineering methods.

The heart, for example, functions much like a pump; the skeleton is like a linked structure with levers; and the brain produces electrical signals. These similarities, as well as the increasing importance and application of engineering principles in medicine, led to the development of the field of biomedical engineering, which uses concepts developed in both disciplines. 

Newly emerging branches of science, such as systems biology, are adapting analytical tools traditionally used for engineering, such as systems modeling and computational analysis, to the description of biological systems.

Art

Leonardo da Vinci, seen here in a self-portrait, has been described as the epitome of the artist/engineer. He is also known for his studies on human anatomy and physiology.
 
There are connections between engineering and art, for example, architecture, landscape architecture and industrial design (even to the extent that these disciplines may sometimes be included in a university's Faculty of Engineering).

The Art Institute of Chicago, for instance, held an exhibition about the art of NASA's aerospace design. Robert Maillart's bridge design is perceived by some to have been deliberately artistic. At the University of South Florida, an engineering professor, through a grant with the National Science Foundation, has developed a course that connects art and engineering.

Among famous historical figures, Leonardo da Vinci is a well-known Renaissance artist and engineer, and a prime example of the nexus between art and engineering.

Business

Business engineering deals with the relationship between professional engineering, IT systems, business administration and change management. Engineering management, or "management engineering", is a specialized field of management concerned with engineering practice or the engineering industry sector. The demand for management-focused engineers (or, from the opposite perspective, managers with an understanding of engineering) has resulted in the development of specialized engineering management degrees that develop the knowledge and skills needed for these roles. During an engineering management course, students will develop industrial engineering skills, knowledge, and expertise, alongside knowledge of business administration, management techniques, and strategic thinking.

Engineers specializing in change management must have in-depth knowledge of the application of industrial and organizational psychology principles and methods. Professional engineers often train as certified management consultants in the very specialized field of management consulting applied to engineering practice or the engineering sector. This work often deals with large scale complex business transformation or business process management initiatives in aerospace and defence, automotive, oil and gas, machinery, pharmaceutical, food and beverage, electrical & electronics, power distribution & generation, utilities and transportation systems.

This combination of technical engineering practice, management consulting practice, industry sector knowledge, and change management expertise enables professional engineers who are also qualified as management consultants to lead major business transformation initiatives. These initiatives are typically sponsored by C-level executives.

Other fields

In political science, the term engineering has been borrowed for the study of the subjects of social engineering and political engineering, which deal with forming political and social structures using engineering methodology coupled with political science principles. Financial engineering has similarly borrowed the term.

Celestial navigation

From Wikipedia, the free encyclopedia

A diagram of a nautical sextant, a tool used in celestial navigation
 
Celestial navigation, also known as astronavigation, is the ancient and modern practice of position fixing that enables a navigator to determine their position without relying solely on estimated calculations, or dead reckoning. Celestial navigation uses "sights", or angular measurements taken between a celestial body (e.g. the Sun, the Moon, a planet, or a star) and the visible horizon. The Sun is most commonly used, but navigators can also use the Moon, a planet, Polaris, or one of 57 other navigational stars whose coordinates are tabulated in the nautical almanac and air almanacs.

Celestial navigation is the use of angular measurements (sights) between celestial bodies and the visible horizon to locate one's position in the world, on land as well as at sea. At a given time, any celestial body is located directly over one point on the Earth's surface. The latitude and longitude of that point is known as the celestial body's geographic position (GP), the location of which can be determined from tables in the nautical or air almanac for that year. 

The measured angle between the celestial body and the visible horizon is directly related to the distance between the celestial body's GP and the observer's position. After some computations, referred to as sight reduction, this measurement is used to plot a line of position (LOP) on a navigational chart or plotting work sheet, the observer's position being somewhere on that line. (The LOP is actually a short segment of a very large circle on Earth that surrounds the GP of the observed celestial body. An observer located anywhere on the circumference of this circle on Earth, measuring the angle of the same celestial body above the horizon at that instant of time, would observe that body to be at the same angle above the horizon.) Sights on two celestial bodies give two such lines on the chart, intersecting at the observer's position (actually, the two circles would result in two points of intersection arising from sights on two stars described above, but one can be discarded since it will be far from the estimated position—see the figure at example below). Most navigators will use sights of three to five stars, if available, since that will result in only one common intersection and minimizes the chance of error. That premise is the basis for the most commonly used method of celestial navigation, referred to as the 'altitude-intercept method'. 
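The relationship between the sight and the circle's size is simple: the observer's distance from the GP is the zenith distance (90 degrees minus the observed altitude), and one arcminute of that angle corresponds to about one nautical mile. A minimal sketch of this arithmetic:

    # Radius of the circle of position around a body's geographic
    # position (GP): 90 deg minus the observed altitude, converted to
    # arcminutes, which correspond one-for-one to nautical miles.
    def distance_to_gp_nm(observed_altitude_deg):
        zenith_distance_deg = 90.0 - observed_altitude_deg
        return zenith_distance_deg * 60.0

    print(distance_to_gp_nm(40.0))   # a 40-degree sight: ~3000 NM from the GP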

There are several other methods of celestial navigation that will also provide position-finding using sextant observations, such as the noon sight, and the more archaic lunar distance method. Joshua Slocum used the lunar distance method during the first recorded single-handed circumnavigation of the world. Unlike the altitude-intercept method, the noon sight and lunar distance methods do not require accurate knowledge of time. The altitude-intercept method of celestial navigation requires that the observer know exact Greenwich Mean Time (GMT) at the moment of his observation of the celestial body, to the second—since for every four seconds that the time source (commonly a chronometer or, in aircraft, an accurate "hack watch") is in error, the position will be off by approximately one nautical mile.
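The four-seconds-per-mile rule follows from the Earth's rotation rate: 360 degrees in 24 hours is 0.25 arcminutes of longitude per second, and one arcminute is about one nautical mile at the equator. A small sketch:

    import math

    # Position error caused by a clock error, from the Earth's rotation
    # rate of 0.25 arcminutes of longitude per second of time; the
    # east-west error shrinks with the cosine of latitude.
    def position_error_nm(clock_error_s, latitude_deg=0.0):
        arcmin_per_second = 360.0 * 60.0 / 86400.0   # = 0.25
        return clock_error_s * arcmin_per_second * math.cos(math.radians(latitude_deg))

    print(position_error_nm(4.0))   # ~1.0 NM for a 4-second error at the equator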

Example

Annotated diagram of Sun and Moon lines of position

An example illustrating the concept behind the intercept method for determining one's position is shown to the right. (Two other common methods for determining one's position using celestial navigation are the longitude by chronometer and ex-meridian methods.) In the adjacent image, the two circles on the map represent lines of position for the Sun and Moon at 1200 GMT on October 29, 2005. At this time, a navigator on a ship at sea measured the Moon to be 56 degrees above the horizon using a sextant. Ten minutes later, the Sun was observed to be 40 degrees above the horizon. Lines of position were then calculated and plotted for each of these observations. Since both the Sun and Moon were observed at their respective angles from the same location, the navigator would have to be located at one of the two locations where the circles cross. 

In this case the navigator is either located on the Atlantic Ocean, about 350 nautical miles (650 km) west of Madeira, or in South America, about 90 nautical miles (170 km) southwest of Asunción, Paraguay. In most cases, determining which of the two intersections is the correct one is obvious to the observer because they are often thousands of miles apart. As it is unlikely that the ship is sailing across South America, the position in the Atlantic is the correct one. Note that the lines of position in the figure are distorted because of the map's projection; they would be circular if plotted on a globe.

An observer at the Gran Chaco point would see the Moon at the left of the Sun, and an observer in the Madeira point would see the Moon at the right of the Sun.

Angular measurement

Using a marine sextant to measure the altitude of the sun above the horizon
 
Accurate angle measurement evolved over the years. One simple method is to hold the hand above the horizon with one's arm stretched out. The width of the little finger is an angle just over 1.5 degrees elevation at extended arm's length and can be used to estimate the elevation of the sun from the horizon plane and therefore estimate the time until sunset. The need for more accurate measurements led to the development of a number of increasingly accurate instruments, including the kamal, astrolabe, octant and sextant. The sextant and octant are most accurate because they measure angles from the horizon, eliminating errors caused by the placement of an instrument's pointers, and because their dual mirror system cancels relative motions of the instrument, showing a steady view of the object and horizon.
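The finger-width estimate can be turned into rough arithmetic: the Sun's apparent motion is about 15 degrees per hour, so if it is assumed to descend straight toward the horizon (roughly true in the tropics; the descent is slower and slanted at high latitudes), each 1.5-degree finger width corresponds to about six minutes. A minimal sketch under that assumption:

    # Rough time-to-sunset estimate from finger widths above the horizon,
    # assuming the Sun descends vertically at its full 15 deg/hour rate
    # (an approximation that is best near the equator).
    FINGER_WIDTH_DEG = 1.5
    SUN_DEG_PER_MINUTE = 15.0 / 60.0

    def minutes_until_sunset(finger_widths):
        return finger_widths * FINGER_WIDTH_DEG / SUN_DEG_PER_MINUTE

    print(minutes_until_sunset(4))   # four finger widths: about 24 minutes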

Navigators measure distance on the globe in degrees, arcminutes and arcseconds. A nautical mile is defined as 1852 meters, but is also (not accidentally) one minute of angle along a meridian on the Earth. Sextants can be read accurately to within 0.2 arcminutes, so the observer's position can be determined within (theoretically) 0.2 nautical miles, about 400 yards (370 m). Most ocean navigators, shooting from a moving platform, can achieve a practical accuracy of 1.5 nautical miles (2.8 km), enough to navigate safely when out of sight of land.

Practical navigation

Practical celestial navigation usually requires a marine chronometer to measure time, a sextant to measure the angles, an almanac giving schedules of the coordinates of celestial objects, a set of sight reduction tables to help perform the height and azimuth computations, and a chart of the region. 

Two ship's officers "shoot" the morning Sun's altitude with sextants
 
With sight reduction tables, the only calculations required are addition and subtraction. Small handheld computers, laptops and even scientific calculators enable modern navigators to "reduce" sextant sights in minutes, by automating all the calculation and/or data lookup steps. Most people can master simpler celestial navigation procedures after a day or two of instruction and practice, even using manual calculation methods.

Modern practical navigators usually use celestial navigation in combination with satellite navigation to correct a dead reckoning track, that is, a course estimated from a vessel's position, course and speed. Using multiple methods helps the navigator detect errors, and simplifies procedures. When used this way, a navigator will from time to time measure the sun's altitude with a sextant, then compare that with a precalculated altitude based on the exact time and estimated position of the observation. On the chart, one will use the straight edge of a plotter to mark each position line. If the position line indicates a location more than a few miles from the estimated position, more observations can be taken to restart the dead-reckoning track.

In the event of equipment or electrical failure, taking sun lines a few times a day and advancing them by dead reckoning allows a vessel to get a crude running fix sufficient to return to port. One can also use the Moon, a planet, Polaris, or one of 57 other navigational stars to track celestial positioning.

Latitude

Latitude was measured in the past either by measuring the altitude of the Sun at noon (the "noon sight"), or by measuring the altitudes of any other celestial body when crossing the meridian (reaching its maximum altitude when due north or south), and frequently by measuring the altitude of Polaris, the north star (assuming it is sufficiently visible above the horizon, which it is not in the Southern Hemisphere). Polaris always stays within 1 degree of the celestial north pole. If a navigator measures the angle to Polaris and finds it to be 10 degrees from the horizon, then he is about 10 degrees north of the equator. This approximate latitude is then corrected using simple tables or almanac corrections to determine a latitude theoretically accurate to within a fraction of a mile. Angles are measured from the horizon because locating the point directly overhead, the zenith, is not normally possible. When haze obscures the horizon, navigators use artificial horizons, which are horizontal mirrors or pans of reflective fluid, historically especially mercury. In the latter case, the angle between the reflected image in the mirror and the actual image of the object in the sky is exactly twice the required altitude.
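Numerically, the Polaris method is little more than reading the altitude and applying a small correction; a minimal sketch, with the correction value left as a hypothetical placeholder standing in for the almanac tables:

    # Latitude from a Polaris sight: the observed altitude approximates
    # latitude to within about a degree; an almanac correction (here a
    # hypothetical placeholder) refines it.
    def latitude_from_polaris(observed_altitude_deg, almanac_correction_deg=0.0):
        return observed_altitude_deg + almanac_correction_deg

    print(latitude_from_polaris(10.0))        # about 10 degrees north
    print(latitude_from_polaris(10.0, -0.7))  # with a hypothetical correction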

Longitude

Longitude can be measured in the same way. If the angle to Polaris can be accurately measured, a similar measurement to a star near the eastern or western horizons will provide the longitude. The problem is that the Earth turns 15 degrees per hour, making such measurements dependent on time. A measurement taken a few minutes before or after the same measurement the day before creates serious navigation errors. Before good chronometers were available, longitude measurements were based on the transit of the Moon, or the positions of the moons of Jupiter. For the most part, these were too difficult to be used by anyone except professional astronomers. The invention of the modern chronometer by John Harrison in 1761 vastly simplified longitudinal calculation. 

The longitude problem took centuries to solve and was dependent on the construction of a non-pendulum clock (as pendulum clocks cannot function accurately on a tilting ship, or indeed a moving vehicle of any kind). Two useful methods evolved during the 18th century and are still practised today: lunar distance, which does not involve the use of a chronometer, and use of an accurate timepiece or chronometer. 

Presently, lay-person calculations of longitude can be made by noting the exact local time (leaving out any reference for Daylight Saving Time) when the sun is at its highest point in the sky. The calculation of noon can be made more easily and accurately with a small, exactly vertical rod driven into level ground—take the time reading when the shadow is pointing due north (in the northern hemisphere). Then take your local time reading and subtract it from GMT (Greenwich Mean Time) or the time in London, England. For example, a noon reading (1200 hours) near central Canada or the US would occur at approximately 6 pm (1800 hours) in London. The six-hour differential is one quarter of a 24-hour day, or 90 degrees of a 360-degree circle (the Earth). The calculation can also be made by taking the number of hours (use decimals for fractions of an hour) multiplied by 15, the number of degrees in one hour. Either way, it can be demonstrated that much of central North America is at or near 90 degrees west longitude. Eastern longitudes can be determined by adding the local time to GMT, with similar calculations.
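The arithmetic worked through above reduces to a one-line conversion, sketched here for the western-longitude case:

    # Longitude from the GMT time of local apparent noon: the Earth
    # turns 15 degrees per hour, so each hour of difference from noon
    # GMT is 15 degrees of longitude (positive here meaning west).
    def longitude_from_noon(local_noon_gmt_hours):
        return (local_noon_gmt_hours - 12.0) * 15.0

    print(longitude_from_noon(18.0))   # local noon at 1800 GMT: 90 degrees west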

Lunar distance

The older method, called "lunar distances", was refined in the 18th century and employed with decreasing regularity at sea through the middle of the 19th century. It is only used today by sextant hobbyists and historians, but the method is theoretically sound, and can be used when a timepiece is not available or its accuracy is suspect during a long sea voyage. The navigator precisely measures the angle between the moon and the sun, or between the moon and one of several stars near the ecliptic. The observed angle must be corrected for the effects of refraction and parallax, like any celestial sight. To make this correction the navigator would measure the altitudes of the moon and sun (or star) at about the same time as the lunar distance angle. Only rough values for the altitudes were required. Then a calculation with logarithms or graphical tables requiring ten to fifteen minutes' work would convert the observed angle to a geocentric lunar distance. The navigator would compare the corrected angle against those listed in the almanac for every three hours of Greenwich time, and interpolate between those values to get the actual Greenwich time aboard ship. Knowing Greenwich time and comparing against local time from a common altitude sight, the navigator can work out his longitude.
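The final step, recovering Greenwich time by interpolating between the three-hourly almanac entries, is simple in principle; the sketch below uses hypothetical tabulated angles:

    # Greenwich time by linear interpolation between almanac lunar
    # distances tabulated every three hours (the tabulated angles here
    # are hypothetical placeholders).
    def gmt_from_lunar_distance(observed_deg, t0_h, d0_deg, t1_h, d1_deg):
        fraction = (observed_deg - d0_deg) / (d1_deg - d0_deg)
        return t0_h + fraction * (t1_h - t0_h)

    # If the almanac listed 41.0 deg at 1500 and 42.5 deg at 1800, an
    # observed, corrected distance of 41.5 deg implies 1600 Greenwich time.
    print(gmt_from_lunar_distance(41.5, 15.0, 41.0, 18.0, 42.5))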

Use of time

The considerably more popular method was (and still is) to use an accurate timepiece to directly measure the time of a sextant sight. The need for accurate navigation led to the development of progressively more accurate chronometers in the 18th century. Today, time is measured with a chronometer, a quartz watch, a shortwave radio time signal broadcast from an atomic clock, or the time displayed on a GPS. A quartz wristwatch normally keeps time within a half-second per day. If it is worn constantly, keeping it near body heat, its rate of drift can be measured with the radio and, by compensating for this drift, a navigator can keep time to better than a second per month. Traditionally, a navigator checked his chronometer from his sextant, at a geographic marker surveyed by a professional astronomer. This is now a rare skill, and most harbourmasters cannot locate their harbour's marker. 
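Compensating for a measured drift rate is straightforward arithmetic, sketched below with illustrative numbers:

    # Correcting a watch reading for a measured drift rate: a watch that
    # gains 0.4 s/day, read 30 days after it was last rated, is 12 s fast.
    def corrected_time_s(displayed_time_s, drift_s_per_day, days_since_rated):
        return displayed_time_s - drift_s_per_day * days_since_rated

    print(corrected_time_s(3600.0, 0.4, 30))   # 3588.0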

Traditionally, three chronometers were kept in gimbals in a dry room near the centre of the ship. They were used to set a hack watch for the actual sight, so that no chronometers were ever exposed to the wind and salt water on deck. Winding and comparing the chronometers was a crucial duty of the navigator. Even today, it is still logged daily in the ship's deck log and reported to the Captain before eight bells on the forenoon watch (shipboard noon). Navigators also set the ship's clocks and calendar.

Modern celestial navigation

The celestial line of position concept was discovered in 1837 by Thomas Hubbard Sumner when, after one observation, he computed and plotted his longitude at more than one trial latitude in his vicinity – and noticed that the positions lay along a line. Using this method with two bodies, navigators were finally able to cross two position lines and obtain their position – in effect determining both latitude and longitude. Later in the 19th century came the development of the modern (Marcq St. Hilaire) intercept method; with this method the body height and azimuth are calculated for a convenient trial position, and compared with the observed height. The difference in arcminutes is the nautical mile "intercept" distance that the position line needs to be shifted toward or away from the direction of the body's subpoint. (The intercept method uses the concept illustrated in the example section above.) Two other methods of reducing sights are the longitude by chronometer and the ex-meridian method.
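A minimal sketch of the intercept computation, using the standard spherical-trigonometry altitude formula for a trial position (all inputs are illustrative):

    import math

    # Computed altitude Hc for a trial position, from
    #   sin(Hc) = sin(lat) * sin(dec) + cos(lat) * cos(dec) * cos(LHA)
    # where dec is the body's declination and LHA its local hour angle.
    def computed_altitude_deg(lat_deg, dec_deg, lha_deg):
        lat, dec, lha = map(math.radians, (lat_deg, dec_deg, lha_deg))
        sin_hc = (math.sin(lat) * math.sin(dec)
                  + math.cos(lat) * math.cos(dec) * math.cos(lha))
        return math.degrees(math.asin(sin_hc))

    # Intercept in nautical miles: observed minus computed altitude, in
    # arcminutes; positive means shift the position line toward the body.
    def intercept_nm(observed_alt_deg, lat_deg, dec_deg, lha_deg):
        return (observed_alt_deg - computed_altitude_deg(lat_deg, dec_deg, lha_deg)) * 60.0

    print(intercept_nm(54.9, 45.0, 20.0, 30.0))   # a few NM toward the body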

While celestial navigation is becoming increasingly redundant with the advent of inexpensive and highly accurate satellite navigation receivers (GPS), it was used extensively in aviation until the 1960s, and in marine navigation until quite recently. However, since a prudent mariner never relies on any single means of fixing his position, many national maritime authorities still require deck officers to show knowledge of celestial navigation in examinations, primarily as a backup for electronic/satellite navigation. One of the most common current uses of celestial navigation aboard large merchant vessels is for compass calibration and error checking at sea when no terrestrial references are available. 

The U.S. Air Force and U.S. Navy continued instructing military aviators on celestial navigation use until 1997, because
  • celestial navigation can be used independently of ground aids
  • celestial navigation has global coverage
  • celestial navigation cannot be jammed (although it can be obscured by clouds)
  • celestial navigation does not give off any signals that could be detected by an enemy 
The United States Naval Academy announced that it was discontinuing its course on celestial navigation (considered to be one of its most demanding non-engineering courses) from the formal curriculum in the spring of 1998. In October 2015, citing concerns about the reliability of GPS systems in the face of potential hostile hacking, the USNA reinstated instruction in celestial navigation in the 2015–16 academic year.

At another federal service academy, the US Merchant Marine Academy, there was no break in instruction in celestial navigation as it is required to pass the US Coast Guard License Exam to enter the Merchant Marine. It is also taught at Harvard, most recently as Astronomy 2.

Celestial navigation continues to be used by private yachtsmen, and particularly by long-distance cruising yachts around the world. For small cruising boat crews, celestial navigation is generally considered an essential skill when venturing beyond visual range of land. Although GPS (Global Positioning System) technology is reliable, offshore yachtsmen use celestial navigation as either a primary navigational tool or as a backup.

Celestial navigation was used in commercial aviation up until the early part of the jet age; early Boeing 747s had a "sextant port" in the roof of the cockpit. It was only phased out in the 1960s with the advent of inertial navigation and doppler navigation systems, and today's satellite-based systems which can locate the aircraft's position accurate to a 3-meter sphere with several updates per second.

A variation on terrestrial celestial navigation was used to help orient the Apollo spacecraft en route to and from the Moon. To this day, space missions such as the Mars Exploration Rover use star trackers to determine the attitude of the spacecraft.

As early as the mid-1960s, advanced electronic and computer systems had evolved enabling navigators to obtain automated celestial sight fixes. These systems were used aboard both ships and US Air Force aircraft, and were highly accurate, able to lock onto up to 11 stars (even in daytime) and resolve the craft's position to less than 300 feet (91 m). The SR-71 high-speed reconnaissance aircraft was one example of an aircraft that used a combination of automated celestial and inertial navigation. These rare systems were expensive, however, and the few that remain in use today are regarded as backups to more reliable satellite positioning systems.

Intercontinental ballistic missiles use celestial navigation to check and correct their course (initially set using internal gyroscopes) while flying outside the Earth's atmosphere. The immunity to jamming signals is the main driver behind this seemingly archaic technique.

X-ray pulsar-based navigation and timing (XNAV) is an experimental navigation technique whereby the periodic X-ray signals emitted from pulsars are used to determine the location of a vehicle, such as a spacecraft in deep space. A vehicle using XNAV would compare received X-ray signals with a database of known pulsar frequencies and locations. Similar to GPS, this comparison would allow the vehicle to triangulate its position accurately (±5 km). The advantage of using X-ray signals over radio waves is that X-ray telescopes can be made smaller and lighter. On 9 November 2016 the Chinese Academy of Sciences launched an experimental pulsar navigation satellite called XPNAV 1. SEXTANT (Station Explorer for X-ray Timing and Navigation Technology) is a NASA-funded project developed at the Goddard Space Flight Center that is testing XNAV on-orbit on board the International Space Station in connection with the NICER project, launched on 3 June 2017 on the SpaceX CRS-11 ISS resupply mission.

Celestial navigation trainer

Celestial navigation trainers for aircraft crews combine a simple flight simulator with a planetarium.

An early example is the Link Celestial Navigation Trainer, used in the Second World War. Housed in a 45 feet (14 m) high building, it featured a cockpit accommodating a whole bomber crew (pilot, navigator and bombardier). The cockpit offered a full array of instruments which the pilot used to fly the simulated aeroplane. Fixed to a dome above the cockpit was an arrangement of lights, some collimated, simulating constellations from which the navigator determined the plane's position. The dome's movement simulated the changing positions of the stars with the passage of time and the movement of the plane around the earth. The navigator also received simulated radio signals from various positions on the ground. Below the cockpit moved "terrain plates" – large, movable aerial photographs of the land below – which gave the crew the impression of flight and enabled the bomber to practise lining up bombing targets. A team of operators sat at a control booth on the ground below the machine, from which they could simulate weather conditions such as wind or cloud. This team also tracked the aeroplane's position by moving a "crab" (a marker) on a paper map.
The Link Celestial Navigation Trainer was developed in response to a request made by the Royal Air Force (RAF) in 1939. The RAF ordered 60 of these machines, and the first one was built in 1941. The RAF used only a few of these, leasing the rest back to the US, where eventually hundreds were in use.

Theoretical astronomy

From Wikipedia, the free encyclopedia

Theoretical astronomy is the use of the analytical models of physics and chemistry to describe astronomical objects and astronomical phenomena. 
 
Ptolemy's Almagest, although a brilliant treatise on theoretical astronomy combined with a practical handbook for computation, nevertheless includes many compromises to reconcile discordant observations. Theoretical astronomy is usually assumed to have begun with Johannes Kepler (1571–1630) and Kepler's laws. It is co-equal with observation.

The general history of astronomy deals with the history of the descriptive and theoretical astronomy of the Solar System, from the late sixteenth century to the end of the nineteenth century. The major categories of works on the history of modern astronomy include general histories, national and institutional histories, instrumentation, descriptive astronomy, theoretical astronomy, positional astronomy, and astrophysics.

Astronomy was early to adopt computational techniques to model stellar and galactic formation and celestial mechanics. From the point of view of theoretical astronomy, not only must the mathematical expression be reasonably accurate, but it should preferably exist in a form which is amenable to further mathematical analysis when used in specific problems. Most of theoretical astronomy uses the Newtonian theory of gravitation, considering that the effects of general relativity are weak for most celestial objects. Theoretical astronomy cannot (and does not try to) predict the position, size and temperature of every star in the heavens; by and large it has concentrated upon analyzing the apparently complex but periodic motions of celestial objects.
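The computational side mentioned above can be illustrated with the simplest possible celestial-mechanics model: one body orbiting a central mass under Newtonian gravity, advanced with a semi-implicit Euler step (units and starting values are illustrative, with GM = 1 and an initially circular orbit):

    # Two-body problem under Newtonian gravity, integrated with a
    # semi-implicit Euler step; GM = 1 and r = 1 give a circular orbit.
    GM = 1.0
    x, y = 1.0, 0.0       # position
    vx, vy = 0.0, 1.0     # circular-orbit speed for r = 1
    dt = 0.001

    for _ in range(10_000):
        r3 = (x * x + y * y) ** 1.5
        vx -= GM * x / r3 * dt   # velocity update from gravitational acceleration
        vy -= GM * y / r3 * dt
        x += vx * dt             # position update using the new velocity
        y += vy * dt

    print(x, y)   # stays near the unit circle, as expected for this orbit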

Integrating astronomy and physics

"Contrary to the belief generally held by laboratory physicists, astronomy has contributed to the growth of our understanding of physics." Physics has helped in the elucidation of astronomical phenomena, and astronomy has helped in the elucidation of physical phenomena:
  1. Discovery of the law of gravitation came from the information provided by the motion of the Moon and the planets;
  2. Viability of nuclear fusion, as demonstrated in the Sun and stars, has yet to be reproduced on earth in a controlled form.
Integrating astronomy with physics involves pairing physical interactions with the astronomical phenomena they explain:
  Electromagnetism (observation using the electromagnetic spectrum):
    black body radiation – stellar radiation
    synchrotron radiation – radio and X-ray sources
    inverse-Compton scattering – astronomical X-ray sources
    acceleration of charged particles – pulsars and cosmic rays
    absorption/scattering – interstellar dust
  Strong and weak interaction: nucleosynthesis in stars, cosmic rays, supernovae, the primeval universe
  Gravity: motion of planets, satellites and binary stars, stellar structure and evolution, N-body motions in clusters of stars and galaxies, black holes, and the expanding universe.

The aim of astronomy is to understand the laboratory physics and chemistry behind cosmic events, so as to enrich our understanding both of the cosmos and of these sciences.

Integrating astronomy and chemistry

Astrochemistry, the overlap of the disciplines of astronomy and chemistry, is the study of the abundance and reactions of chemical elements and molecules in space, and their interaction with radiation. The formation, atomic and chemical composition, evolution and fate of molecular gas clouds, is of special interest because it is from these clouds that solar systems form.

Infrared astronomy, for example, has revealed that the interstellar medium contains a suite of complex gas-phase carbon compounds called polycyclic aromatic hydrocarbons, often abbreviated PAHs or PACs. These molecules, composed primarily of fused rings of carbon (either neutral or in an ionized state), are said to be the most common class of carbon compound in the galaxy. They are also the most common class of carbon molecule in meteorites and in cometary and asteroidal dust (cosmic dust). These compounds, as well as the amino acids, nucleobases, and many other compounds in meteorites, carry deuterium (2H) and isotopes of carbon, nitrogen, and oxygen that are very rare on earth, attesting to their extraterrestrial origin. The PAHs are thought to form in hot circumstellar environments (around dying carbon-rich red giant stars).

The sparseness of interstellar and interplanetary space results in some unusual chemistry, since symmetry-forbidden reactions cannot occur except on the longest of timescales. For this reason, molecules and molecular ions which are unstable on earth can be highly abundant in space, for example the H3+ ion. Astrochemistry overlaps with astrophysics and nuclear physics in characterizing the nuclear reactions which occur in stars, their consequences for stellar evolution, and stellar 'generations'.

Indeed, the nuclear reactions in stars produce nearly every naturally occurring chemical element. As the stellar 'generations' advance, the mass of the newly formed elements increases. A first-generation star uses elemental hydrogen (H) as a fuel source and produces helium (He). Hydrogen is the most abundant element, and it is the basic building block for all other elements, as its nucleus has only one proton. Gravitational pull toward the center of a star creates massive amounts of heat and pressure, which cause nuclear fusion. Through this process of merging nuclear mass, heavier elements are formed. Lithium, carbon, nitrogen and oxygen are examples of elements that form in stellar fusion. After many stellar generations, very heavy elements (e.g. iron and lead) are formed.

Tools of theoretical astronomy

Theoretical astronomers use a wide variety of tools which include analytical models (for example, polytropes to approximate the behaviors of a star) and computational numerical simulations. Each has some advantages. Analytical models of a process are generally better for giving insight into the heart of what is going on. Numerical models can reveal the existence of phenomena and effects that would otherwise not be seen.
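
As a concrete example of the analytical-versus-numerical distinction, the sketch below numerically integrates the Lane-Emden equation for a polytrope of index n = 1, a hypothetical choice made because that case has the closed-form solution θ(ξ) = sin ξ / ξ to check against; it assumes SciPy is available.

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 1  # polytropic index; n = 1 has a known analytic solution

def lane_emden(xi, y):
    theta, dtheta = y
    # Lane-Emden equation: theta'' = -theta^n - (2/xi) * theta'
    return [dtheta, -max(theta, 0.0) ** n - 2.0 * dtheta / xi]

# Start slightly off xi = 0 to avoid the coordinate singularity, using the
# series expansion theta ~ 1 - xi^2/6 near the centre.
xi0 = 1e-6
sol = solve_ivp(lane_emden, [xi0, 5.0], [1.0 - xi0**2 / 6.0, -xi0 / 3.0],
                dense_output=True, rtol=1e-10, atol=1e-12)

xi = 3.0
print(f"theta({xi}) numerical = {sol.sol(xi)[0]:.6f}, "
      f"analytic = {np.sin(xi) / xi:.6f}")  # the two should agree closely
```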

Astronomy theorists endeavor to create theoretical models and figure out the observational consequences of those models. This helps observers look for data that can refute a model or help in choosing between several alternate or conflicting models. 

Theorists also try to generate or modify models to take into account new data. Consistent with the general scientific approach, in the case of an inconsistency, the general tendency is to try to make minimal modifications to the model to fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model.

Topics of theoretical astronomy

Topics studied by theoretical astronomers include
  1. stellar dynamics and evolution;
  2. galaxy formation;
  3. large-scale structure of matter in the Universe;
  4. origin of cosmic rays;
  5. general relativity and physical cosmology, including string cosmology and astroparticle physics.
Astrophysical relativity serves as a tool to gauge the properties of large-scale structures in which gravitation plays a significant role in the physical phenomena investigated, and as the basis for black hole (astro)physics and the study of gravitational waves.

Astronomical models

Some widely accepted and studied theories and models in astronomy, now included in the Lambda-CDM model, are the Big Bang, cosmic inflation, dark matter, and fundamental theories of physics.
A few examples of this process: 

  Physical process       | Experimental tool            | Theoretical model        | Explains/predicts
  Gravitation            | Radio telescopes             | Self-gravitating system  | Emergence of a star system
  Nuclear fusion         | Spectroscopy                 | Stellar evolution        | How the stars shine and how metals formed
  The Big Bang           | Hubble Space Telescope, COBE | Expanding universe       | Age of the Universe
  Quantum fluctuations   |                              | Cosmic inflation         | Flatness problem
  Gravitational collapse | X-ray astronomy              | General relativity       | Black holes at the center of Andromeda Galaxy
  CNO cycle in stars     |                              |                          |


Leading topics in theoretical astronomy

Dark matter and dark energy are the current leading topics in astronomy, as their discovery and the controversy surrounding them originated in the study of galaxies.

Theoretical astrophysics

Of the topics approached with the tools of theoretical physics, particular consideration is often given to stellar photospheres, stellar atmospheres, the solar atmosphere, planetary atmospheres, gaseous nebulae, nonstationary stars, and the interstellar medium. Special attention is given to the internal structure of stars.

Weak equivalence principle

The observation of a neutrino burst within 3 h of the associated optical burst from Supernova 1987A in the Large Magellanic Cloud (LMC) gave theoretical astrophysicists an opportunity to test whether neutrinos and photons follow the same trajectories in the gravitational field of the galaxy.
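
To see why this tests the weak equivalence principle, note that both signals accumulate a Shapiro delay crossing the Galaxy's gravitational potential. A rough order-of-magnitude sketch, modelling the Galaxy as a point mass with illustrative (assumed) mass and geometry:

```python
import math

G = 6.674e-11              # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8                # speed of light [m/s]
M_gal = 1.0e11 * 1.989e30  # enclosed Galactic mass [kg] (illustrative)
kpc = 3.086e19             # kiloparsec [m]
d = 50 * kpc               # distance to the LMC
b = 8 * kpc                # impact parameter ~ Sun-Galactic-centre distance

# Point-mass Shapiro delay: delta_t ~ (2GM/c^3) * ln(d/b)
delay = (2 * G * M_gal / c**3) * math.log(d / b)
print(f"Shapiro delay ~ {delay / 86400:.0f} days")
# Arrival of the two bursts within ~3 h of each other therefore limits any
# fractional difference between the photon and neutrino delays to a few
# parts in 10^3.
```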

Thermodynamics for stationary black holes

A general form of the first law of thermodynamics for stationary black holes can be derived from the microcanonical functional integral for the gravitational field. The boundary data,
  1. the gravitational field described as a microcanonical system in a spatially finite region, and
  2. the density of states, expressed formally as a functional integral over Lorentzian metrics and as a functional of the geometrical boundary data that are fixed in the corresponding action,
are the thermodynamical extensive variables, including the energy and angular momentum of the system. For the simpler case of nonrelativistic mechanics, as is often observed in astrophysical phenomena associated with a black hole event horizon, the density of states can be expressed as a real-time functional integral and subsequently used to deduce Feynman's imaginary-time functional integral for the canonical partition function.

Theoretical astrochemistry

Reaction equations and large reaction networks are an important tool in theoretical astrochemistry, especially as applied to the gas-grain chemistry of the interstellar medium. Theoretical astrochemistry offers the prospect of being able to place constraints on the inventory of organics for exogenous delivery to the early Earth.
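
A minimal sketch of such a rate-equation network, applied to the cosmic-ray-driven H3+ chemistry of a dense cloud. The network, rate coefficients, and densities below are illustrative placeholders rather than a production astrochemical model, and SciPy is assumed.

```python
from scipy.integrate import solve_ivp

zeta = 1.3e-17       # cosmic-ray ionization rate of H2 [s^-1]
k1 = 2.1e-9          # H2+ + H2 -> H3+ + H   [cm^3 s^-1]
k2 = 6.8e-8          # H3+ + e- -> products  [cm^3 s^-1]
n_H2 = 1.0e4         # fixed H2 density [cm^-3]
n_e = 1.0e-7 * n_H2  # assumed constant electron density [cm^-3]

def network(t, y):
    n_H2p, n_H3p = y
    form = zeta * n_H2           # H2 + cosmic ray -> H2+ + e-
    convert = k1 * n_H2 * n_H2p  # H2+ + H2 -> H3+ + H
    destroy = k2 * n_e * n_H3p   # H3+ + e- -> neutral products
    return [form - convert, convert - destroy]

# Stiff system: the H2+ and H3+ timescales differ by many orders of magnitude.
sol = solve_ivp(network, [0.0, 1.0e13], [0.0, 0.0], method="BDF")
print(f"steady-state n(H3+) ~ {sol.y[1, -1]:.2e} cm^-3")
# Analytic check: zeta * n_H2 / (k2 * n_e) ~ 1.9e-3 cm^-3
```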

Interstellar organics

"An important goal for theoretical astrochemistry is to elucidate which organics are of true interstellar origin, and to identify possible interstellar precursors and reaction pathways for those molecules which are the result of aqueous alterations." One of the ways this goal can be achieved is through the study of carbonaceous material as found in some meteorites. Carbonaceous chondrites (such as C1 and C2) include organic compounds such as amines and amides; alcohols, aldehydes, and ketones; aliphatic and aromatic hydrocarbons; sulfonic and phosphonic acids; amino, hydroxycarboxylic, and carboxylic acids; purines and pyrimidines; and kerogen-type material. The organic inventories of primitive meteorites display large and variable enrichments in deuterium, carbon-13 (13C), and nitrogen-15 (15N), which is indicative of their retention of an interstellar heritage.

Chemistry in cometary comae

The chemical composition of comets should reflect both the conditions in the outer solar nebula, some 4.5 × 10⁹ years ago, and the nature of the natal interstellar cloud from which the Solar System was formed. While comets retain a strong signature of their ultimate interstellar origins, significant processing must have occurred in the protosolar nebula. Early models of coma chemistry showed that reactions can occur rapidly in the inner coma, where the most important reactions are proton transfer reactions. Such reactions can potentially cycle deuterium between the different coma molecules, altering the initial D/H ratios released from the nuclear ice and necessitating the construction of accurate models of cometary deuterium chemistry, so that gas-phase coma observations can be safely extrapolated to give nuclear D/H ratios.
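
A toy illustration of that last point, treating the observable D/H of a single coma species as relaxing from the value released by the nuclear ice toward a gas-phase equilibrium value via proton-transfer cycling; all rates and ratios are made-up placeholders, and SciPy is assumed.

```python
from scipy.integrate import solve_ivp

dh_ice = 3.0e-4  # D/H released from the nuclear ice (illustrative)
dh_eq = 1.5e-4   # equilibrium D/H of inner-coma exchange (illustrative)
k_ex = 1.0e-4    # effective proton-transfer cycling rate [s^-1] (illustrative)

# First-order relaxation of the observable ratio toward equilibrium.
sol = solve_ivp(lambda t, y: [-k_ex * (y[0] - dh_eq)], [0.0, 3.0e4], [dh_ice])
print(f"observed D/H after {sol.t[-1]:.0f} s: {sol.y[0, -1]:.2e}")
# Inverting coma observations to a nuclear D/H requires modelling exactly
# this kind of processing, as noted above.
```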

Theoretical chemical astronomy

While the lines of conceptual understanding between theoretical astrochemistry and theoretical chemical astronomy often become blurred so that the goals and tools are the same, there are subtle differences between the two sciences. Theoretical chemistry as applied to astronomy seeks to find new ways to observe chemicals in celestial objects, for example. This often leads to theoretical astrochemistry having to seek new ways to describe or explain those same observations.

Astronomical spectroscopy

The new era of chemical astronomy had to await the clear enunciation of the chemical principles of spectroscopy and the applicable theory.

Chemistry of dust condensation

Supernova radioactivity dominates supernova light curves, and it also dominates the chemistry of dust condensation. Dust is usually either carbon or oxides, depending on which is more abundant, but Compton electrons dissociate the CO molecule in about one month. The new chemical astronomy of supernova solids depends on the supernova radioactivity:
  1. The radiogenesis of 44Ca from 44Ti decay after carbon condensation establishes the grains' supernova source,
  2. The grains' opacity suffices to shift emission lines blueward after 500 d and to emit significant infrared luminosity,
  3. Parallel kinetic rates determine trace isotopes in meteoritic supernova graphites,
  4. The chemistry is kinetic rather than governed by thermal equilibrium, and
  5. Is made possible by radioactive dissociation of the CO trap for carbon.

Theoretical physical astronomy

Like theoretical chemical astronomy, the lines of conceptual understanding between theoretical astrophysics and theoretical physical astronomy are often blurred, but, again, there are subtle differences between these two sciences. Theoretical physics as applied to astronomy seeks to find new ways to observe physical phenomena in celestial objects and what to look for, for example. This often leads to theoretical astrophysics having to seek new ways to describe or explain those same observations, ideally converging to improve our understanding of the local environment of Earth and of the physical Universe.

Weak interaction and nuclear double beta decay

Nuclear matrix elements of the relevant operators, extracted from data and from shell-model and other theoretical approximations for both the two-neutrino and neutrinoless modes of decay, are used to explain the weak interaction and nuclear structure aspects of nuclear double beta decay.

Neutron-rich isotopes

New neutron-rich isotopes, 34Ne, 37Na, and 43Si, have been produced unambiguously for the first time, and convincing evidence for the particle instability of three others, 33Ne, 36Na, and 39Mg, has been obtained. These experimental findings are compared with recent theoretical predictions.

Theory of astronomical time keeping

Until recently, all the time units that appear natural to us were derived from astronomical phenomena:
  1. Earth's orbit around the Sun => the year, and the seasons,
  2. the Moon's orbit around the Earth => the month,
  3. Earth's rotation and the succession of brightness and darkness => the day (and night).
High precision turns out to be problematic:
  1. Ambiguities arise in the exact definition of a rotation or revolution,
  2. Some astronomical processes are uneven and irregular, such as the noncommensurability of year, month, and day,
  3. A multitude of time scales and calendars has arisen to address the first two problems.
Some of these time scales are sidereal time, solar time, and universal time.

Atomic time

Historical accuracy of atomic clocks from NIST.
 
From the Système international d'unités (SI) comes the second, defined as the duration of 9 192 631 770 cycles of a particular hyperfine structure transition in the ground state of caesium-133 (133Cs). For practical use a device is required that realizes the SI second (s), such as an atomic clock. But not all such clocks agree. The weighted mean of many clocks distributed over the whole Earth defines the Temps Atomique International, i.e., International Atomic Time (TAI). From the general theory of relativity, the time measured depends on the altitude on earth and the spatial velocity of the clock, so TAI refers to a location at sea level that rotates with the Earth.
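
A toy sketch of the "weighted mean of many clocks" idea, with weights taken inversely proportional to each clock's estimated variance; the actual BIPM procedure behind TAI is considerably more elaborate (it also steers the scale against primary frequency standards).

```python
import numpy as np

# Hypothetical readings of five clocks: offsets from a common reference [ns]
# and each clock's estimated instability expressed as a variance [ns^2].
offsets = np.array([12.0, 15.0, 9.0, 14.0, 11.0])
variances = np.array([1.0, 4.0, 9.0, 2.0, 1.0])

weights = 1.0 / variances  # more stable clocks count for more
ensemble = np.sum(weights * offsets) / np.sum(weights)
print(f"ensemble time offset = {ensemble:.2f} ns")
```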

Ephemeris time

Since the Earth's rotation is irregular, any time scale derived from it, such as Greenwich Mean Time, led to recurring problems in predicting the ephemerides for the positions of the Moon, Sun, planets and their natural satellites. In 1976 the International Astronomical Union (IAU) resolved that the theoretical basis for ephemeris time (ET) was wholly non-relativistic, and therefore, beginning in 1984, ephemeris time would be replaced by two further time scales with allowance for relativistic corrections. Their names, assigned in 1979, emphasized their dynamical nature or origin: Barycentric Dynamical Time (TDB) and Terrestrial Dynamical Time (TDT). Both were defined for continuity with ET and were based on what had become the standard SI second, which in turn had been derived from the measured second of ET.

During the period 1991–2006, the TDB and TDT time scales were both redefined and replaced, owing to difficulties or inconsistencies in their original definitions. The current fundamental relativistic time scales are Geocentric Coordinate Time (TCG) and Barycentric Coordinate Time (TCB). Both of these have rates that are based on the SI second in respective reference frames (and hypothetically outside the relevant gravity well), but due to relativistic effects, their rates would appear slightly faster when observed at the Earth's surface, and therefore diverge from local Earth-based time scales using the SI second at the Earth's surface.

The currently defined IAU time scales also include Terrestrial Time (TT) (replacing TDT, and now defined as a re-scaling of TCG, chosen to give TT a rate that matches the SI second when observed at the Earth's surface), and a redefined Barycentric Dynamical Time (TDB), a re-scaling of TCB to give TDB a rate that matches the SI second at the Earth's surface.
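
The TT/TCG re-scaling can be made concrete with the defining constant from IAU Resolution B1.9 (2000), L_G = 6.969290134 × 10⁻¹⁰, which fixes d(TT)/d(TCG) = 1 − L_G; a short check of the resulting divergence:

```python
L_G = 6.969290134e-10  # IAU defining constant: d(TT)/d(TCG) = 1 - L_G
seconds_per_year = 365.25 * 86400

drift = L_G * seconds_per_year
print(f"TCG gains about {drift * 1e3:.1f} ms per year relative to TT")
```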

Extraterrestrial time-keeping

Stellar dynamical time scale

For a star, the dynamical time scale is defined as the time that would be taken for a test particle released at the surface to fall under the star's potential to the centre point, if pressure forces were negligible. In other words, the dynamical time scale measures the amount of time it would take a certain star to collapse in the absence of any internal pressure. By appropriate manipulation of the equations of stellar structure this can be found to be

$t_{\mathrm{dyn}} = \sqrt{\frac{R^3}{2GM}} = \frac{R}{v}$

where R is the radius of the star, G is the gravitational constant, M is the mass of the star and v is the escape velocity. As an example, the Sun's dynamical time scale is approximately 1133 seconds. Note that the actual time it would take a star like the Sun to collapse is greater because internal pressure is present.
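
A quick numerical check of the formula above for the Sun, using standard solar values:

```python
import math

G = 6.674e-11     # gravitational constant [m^3 kg^-1 s^-2]
M_sun = 1.989e30  # solar mass [kg]
R_sun = 6.957e8   # solar radius [m]

t_dyn = math.sqrt(R_sun**3 / (2 * G * M_sun))
v_esc = math.sqrt(2 * G * M_sun / R_sun)

print(f"t_dyn = {t_dyn:.0f} s")              # ~1130 s, near the quoted 1133 s
print(f"R / v_esc = {R_sun / v_esc:.0f} s")  # identical by construction
```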

The 'fundamental' oscillatory mode of a star will be at approximately the dynamical time scale. Oscillations at this frequency are seen in Cepheid variables.

Theory of astronomical navigation

On earth

The basic characteristics of applied astronomical navigation are
  1. usable in all areas of sailing around the earth,
  2. applicable autonomously (does not depend on other persons or states) and passively (does not emit energy),
  3. usable only given optical visibility (of the horizon and celestial bodies), i.e., depending on the state of cloudiness,
  4. measurement precision: a sextant reads to 0.1′, while altitude and position are determined to between 1.5′ and 3.0′ (see the sketch after this list),
  5. determining a position takes a couple of minutes with the most modern equipment and ≤ 30 min with classical equipment.
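
As a concrete illustration of the computation behind item 4, here is a minimal sketch of the computed-altitude step of the intercept (Marcq St. Hilaire) method; the assumed position and the body's coordinates are made up.

```python
import math

def computed_altitude(lat_deg, dec_deg, lha_deg):
    """Hc [deg] from sin Hc = sin(lat)sin(dec) + cos(lat)cos(dec)cos(LHA)."""
    lat, dec, lha = map(math.radians, (lat_deg, dec_deg, lha_deg))
    sin_hc = (math.sin(lat) * math.sin(dec)
              + math.cos(lat) * math.cos(dec) * math.cos(lha))
    return math.degrees(math.asin(sin_hc))

# Assumed position 45 N; body at declination 20 N, local hour angle 30 deg.
print(f"Hc = {computed_altitude(45.0, 20.0, 30.0):.2f} deg")
# The intercept is the difference between the corrected sextant altitude Ho
# and Hc; 1' of arc corresponds to 1 nautical mile of displacement.
```
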
The superiority of satellite navigation systems to astronomical navigation is currently undeniable, especially with the development and use of GPS/NAVSTAR. This global satellite system
  1. enables automated three-dimensional positioning at any moment,
  2. automatically determines position continuously (every second or even more often),
  3. determines position independent of weather conditions (visibility and cloudiness),
  4. determines position in real time to a few meters (two carrier frequencies) or 100 m (modest commercial receivers), which is two to three orders of magnitude better than by astronomical observation,
  5. is simple to use even without expert knowledge,
  6. is relatively cheap, comparable to equipment for astronomical navigation, and
  7. allows incorporation into integrated and automated systems of control and ship steering.
As a result, the use of astronomical or celestial navigation is disappearing from the surface and from beneath or above the surface of the earth.
Geodetic astronomy is the application of astronomical methods to the networks and technical projects of geodesy.
Astronomical algorithms are the algorithms used to calculate ephemerides, calendars, and positions (as in celestial navigation or satellite navigation).
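
One classic example is the integer conversion from a Gregorian calendar date to the Julian Day Number (Fliegel & Van Flandern, 1968), the day count underlying most ephemeris computations; a minimal sketch:

```python
def julian_day_number(year: int, month: int, day: int) -> int:
    """Julian Day Number at noon of the given Gregorian calendar date."""
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return (day + (153 * m + 2) // 5 + 365 * y
            + y // 4 - y // 100 + y // 400 - 32045)

print(julian_day_number(2000, 1, 1))  # 2451545, the J2000.0 epoch
```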

Many astronomical and navigational computations use the figure of the Earth as a reference surface representing the earth.

The International Earth Rotation and Reference Systems Service (IERS), formerly the International Earth Rotation Service, is the body responsible for maintaining global time and reference frame standards, notably through its Earth Orientation Parameter (EOP) and International Celestial Reference System (ICRS) groups.

Deep space

The Deep Space Network, or DSN, is an international network of large antennas and communication facilities that supports interplanetary spacecraft missions, and radio and radar astronomy observations for the exploration of the solar system and the universe. The network also supports selected Earth-orbiting missions. DSN is part of the NASA Jet Propulsion Laboratory (JPL).

Aboard an exploratory vehicle

An observer becomes a deep space explorer upon escaping Earth's orbit. While the Deep Space Network maintains communication and enables data download from an exploratory vessel, any local probing performed by sensors or active systems aboard usually requires astronomical navigation, since the enclosing network of satellites that ensures accurate positioning near Earth is absent.

Political psychology

From Wikipedia, the free encyclopedia ...