
Monday, September 7, 2020

Unix time

From Wikipedia, the free encyclopedia
 
Unix time passed 1000000000 seconds on 2001-09-09T01:46:40Z. It was celebrated in Copenhagen, Denmark at a party held by DKUUG (at 03:46:40 local time).
 
Unix time (also known as Epoch time, POSIX time, seconds since the Epoch, or UNIX Epoch time) is a system for describing a point in time. It is the number of seconds that have elapsed since the Unix epoch, minus leap seconds; the Unix epoch is 00:00:00 UTC on 1 January 1970. Leap seconds are ignored, with a leap second having the same Unix time as the second before it, and every day is treated as if it contains exactly 86400 seconds. Due to this treatment, Unix time is not a true representation of UTC. 

Unix time is widely used in operating systems and file formats. In Unix-like operating systems, date is a command which will print or set the current time; by default, it prints or sets the time in the system time zone, but with the -u flag, it prints or sets the time in UTC and, with the TZ environment variable set to refer to a particular time zone, prints or sets the time in that time zone.
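
As a minimal illustration in C (standard library and POSIX gmtime_r only), the following sketch prints the current Unix time number alongside its UTC rendering, roughly what date -u displays:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t now = time(NULL);              /* seconds since the Unix epoch */
        struct tm utc;
        char buf[32];

        gmtime_r(&now, &utc);                 /* broken-down UTC, like date -u */
        strftime(buf, sizeof buf, "%Y-%m-%dT%H:%M:%SZ", &utc);
        printf("%lld = %s\n", (long long)now, buf);
        return 0;
    }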

Definition

Two layers of encoding make up Unix time. The first layer encodes a point in time as a scalar real number which represents the number of seconds that have passed since 00:00:00 UTC Thursday, 1 January 1970. The second layer encodes that number as a sequence of bits or decimal digits.

As is standard with UTC, this article labels days using the Gregorian calendar, and counts times within each day in hours, minutes, and seconds. Some of the examples also show International Atomic Time (TAI), another time scheme which uses the same seconds and is displayed in the same format as UTC, but in which every day is exactly 86400 seconds long, gradually losing synchronization with the Earth's rotation at a rate of roughly one second per year.

Encoding time as a number

Unix time is a single signed number that increments every second, which makes it easier for computers to store and manipulate than conventional date systems. Interpreter programs can then convert it to a human-readable format.

The Unix epoch is the time 00:00:00 UTC on 1 January 1970. There is a problem with this definition, in that UTC did not exist in its current form until 1972; this issue is discussed below. For brevity, the remainder of this section uses ISO 8601 date and time format, in which the Unix epoch is 1970-01-01T00:00:00Z.

The Unix time number is zero at the Unix epoch, and increases by exactly 86400 per day since the epoch. Thus 2004-09-16T00:00:00Z, 12677 days after the epoch, is represented by the Unix time number 12677 × 86400 = 1095292800. This can be extended backwards from the epoch too, using negative numbers; thus 1957-10-04T00:00:00Z, 4472 days before the epoch, is represented by the Unix time number −4472 × 86400 = −386380800. This applies within days as well; the time number at any given time of a day is the number of seconds that have passed since the midnight starting that day, added to the time number of that midnight.
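
A minimal C sketch of this arithmetic, using the two example dates from the paragraph above:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* 2004-09-16T00:00:00Z is 12677 days after the epoch */
        int64_t t1 = 12677LL * 86400;    /* 1095292800 */
        /* 1957-10-04T00:00:00Z is 4472 days before the epoch */
        int64_t t2 = -4472LL * 86400;    /* -386380800 */
        printf("%lld %lld\n", (long long)t1, (long long)t2);
        return 0;
    }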

Because Unix time is based on an epoch, and because of a common misunderstanding that the Unix epoch is the only epoch (often called "the Epoch"), Unix time is sometimes referred to as Epoch time.

Leap seconds

The above scheme means that on a normal UTC day, which has a duration of 86400 seconds, the Unix time number changes in a continuous manner across midnight. For example, at the end of the day used in the examples above, the time representations progress as follows:

Unix time across midnight into 17 September 2004 (no leap second)
TAI (17 September 2004) UTC (16 to 17 September 2004) Unix time
2004-09-17T00:00:30.75 2004-09-16T23:59:58.75 1095379198.75
2004-09-17T00:00:31.00 2004-09-16T23:59:59.00 1095379199.00
2004-09-17T00:00:31.25 2004-09-16T23:59:59.25 1095379199.25
2004-09-17T00:00:31.50 2004-09-16T23:59:59.50 1095379199.50
2004-09-17T00:00:31.75 2004-09-16T23:59:59.75 1095379199.75
2004-09-17T00:00:32.00 2004-09-17T00:00:00.00 1095379200.00
2004-09-17T00:00:32.25 2004-09-17T00:00:00.25 1095379200.25
2004-09-17T00:00:32.50 2004-09-17T00:00:00.50 1095379200.50
2004-09-17T00:00:32.75 2004-09-17T00:00:00.75 1095379200.75
2004-09-17T00:00:33.00 2004-09-17T00:00:01.00 1095379201.00
2004-09-17T00:00:33.25 2004-09-17T00:00:01.25 1095379201.25

When a leap second occurs, the UTC day is not exactly 86400 seconds long and the Unix time number (which always increases by exactly 86400 each day) experiences a discontinuity. Leap seconds may be positive or negative. No negative leap second has ever been declared, but if one were to be, then at the end of a day with a negative leap second, the Unix time number would jump up by 1 to the start of the next day. During a positive leap second at the end of a day, which occurs about every year and a half on average, the Unix time number increases continuously into the next day during the leap second and then at the end of the leap second jumps back by 1 (returning to the start of the next day). For example, this is what happened on strictly conforming POSIX.1 systems at the end of 1998:

Unix time across midnight into 1 January 1999 (positive leap second)
TAI (1 January 1999) UTC (31 December 1998 to 1 January 1999) Unix time
1999-01-01T00:00:29.75 1998-12-31T23:59:58.75 915148798.75
1999-01-01T00:00:30.00 1998-12-31T23:59:59.00 915148799.00
1999-01-01T00:00:30.25 1998-12-31T23:59:59.25 915148799.25
1999-01-01T00:00:30.50 1998-12-31T23:59:59.50 915148799.50
1999-01-01T00:00:30.75 1998-12-31T23:59:59.75 915148799.75
1999-01-01T00:00:31.00 1998-12-31T23:59:60.00 915148800.00
1999-01-01T00:00:31.25 1998-12-31T23:59:60.25 915148800.25
1999-01-01T00:00:31.50 1998-12-31T23:59:60.50 915148800.50
1999-01-01T00:00:31.75 1998-12-31T23:59:60.75 915148800.75
1999-01-01T00:00:32.00 1999-01-01T00:00:00.00 915148800.00
1999-01-01T00:00:32.25 1999-01-01T00:00:00.25 915148800.25
1999-01-01T00:00:32.50 1999-01-01T00:00:00.50 915148800.50
1999-01-01T00:00:32.75 1999-01-01T00:00:00.75 915148800.75
1999-01-01T00:00:33.00 1999-01-01T00:00:01.00 915148801.00
1999-01-01T00:00:33.25 1999-01-01T00:00:01.25 915148801.25

Unix time numbers are repeated in the second immediately following a positive leap second. The Unix time number 1483228800 is thus ambiguous: it can refer either to the start of the leap second (2016-12-31 23:59:60) or the end of it, one second later (2017-01-01 00:00:00). In the theoretical case when a negative leap second occurs, no ambiguity is caused, but instead there is a range of Unix time numbers that do not refer to any point in UTC time at all.

A Unix clock is often implemented with a different type of positive leap second handling associated with the Network Time Protocol (NTP). This yields a system that does not conform to the POSIX standard. See the section below concerning NTP for details.

When dealing with periods that do not encompass a UTC leap second, the difference between two Unix time numbers is equal to the duration in seconds of the period between the corresponding points in time. This is a common computational technique. However, where leap seconds occur, such calculations give the wrong answer. In applications where this level of accuracy is required, it is necessary to consult a table of leap seconds when dealing with Unix times, and it is often preferable to use a different time encoding that does not suffer from this problem.

A Unix time number is easily converted back into a UTC time by dividing it by 86400: the quotient is the number of days since the epoch, and the remainder is the number of seconds since midnight UTC on that day. If given a Unix time number that is ambiguous due to a positive leap second, this algorithm interprets it as the time just after midnight. It never generates a time that is during a leap second. If given a Unix time number that is invalid due to a negative leap second, it generates an equally invalid UTC time. If these conditions are significant, it is necessary to consult a table of leap seconds to detect them.
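
A minimal sketch of this decoding in C, assuming 64-bit integers; note the adjustment needed because C's integer division truncates toward zero rather than flooring, which matters for negative (pre-1970) times:

    #include <stdio.h>
    #include <stdint.h>

    /* Split a Unix time number into days since the epoch and
       seconds since midnight UTC on that day. */
    static void decode(int64_t t, int64_t *days, int64_t *secs) {
        *days = t / 86400;
        *secs = t % 86400;
        if (*secs < 0) {        /* adjust truncated division to floor */
            *secs += 86400;
            *days -= 1;
        }
    }

    int main(void) {
        int64_t d, s;
        decode(1095292800, &d, &s);   /* 12677 days, 0 seconds */
        printf("%lld days + %lld s\n", (long long)d, (long long)s);
        return 0;
    }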

Non-synchronous Network Time Protocol-based variant

Commonly a Mills-style Unix clock is implemented with leap second handling not synchronous with the change of the Unix time number. The time number initially decreases where a leap should have occurred, and then it leaps to the correct time 1 second after the leap. This makes implementation easier, and is described by Mills' paper. This is what happens across a positive leap second:

Non-synchronous Mills-style Unix clock
across midnight into 1 January 1999 (positive leap second)
TAI (1 January 1999) UTC (31 December 1998 to 1 January 1999) State Unix clock
1999-01-01T00:00:29.75 1998-12-31T23:59:58.75 TIME_INS 915148798.75
1999-01-01T00:00:30.00 1998-12-31T23:59:59.00 TIME_INS 915148799.00
1999-01-01T00:00:30.25 1998-12-31T23:59:59.25 TIME_INS 915148799.25
1999-01-01T00:00:30.50 1998-12-31T23:59:59.50 TIME_INS 915148799.50
1999-01-01T00:00:30.75 1998-12-31T23:59:59.75 TIME_INS 915148799.75
1999-01-01T00:00:31.00 1998-12-31T23:59:60.00 TIME_INS 915148800.00
1999-01-01T00:00:31.25 1998-12-31T23:59:60.25 TIME_OOP 915148799.25
1999-01-01T00:00:31.50 1998-12-31T23:59:60.50 TIME_OOP 915148799.50
1999-01-01T00:00:31.75 1998-12-31T23:59:60.75 TIME_OOP 915148799.75
1999-01-01T00:00:32.00 1999-01-01T00:00:00.00 TIME_OOP 915148800.00
1999-01-01T00:00:32.25 1999-01-01T00:00:00.25 TIME_WAIT 915148800.25
1999-01-01T00:00:32.50 1999-01-01T00:00:00.50 TIME_WAIT 915148800.50
1999-01-01T00:00:32.75 1999-01-01T00:00:00.75 TIME_WAIT 915148800.75
1999-01-01T00:00:33.00 1999-01-01T00:00:01.00 TIME_WAIT 915148801.00
1999-01-01T00:00:33.25 1999-01-01T00:00:01.25 TIME_WAIT 915148801.25

This can be decoded properly by paying attention to the leap second state variable, which unambiguously indicates whether the leap has been performed yet. The state variable change is synchronous with the leap. 

A similar situation arises with a negative leap second, where the second that is skipped is slightly too late. Very briefly the system shows a nominally impossible time number, but this can be detected by the TIME_DEL state and corrected. 

In this type of system the Unix time number violates POSIX around both types of leap second. Collecting the leap second state variable along with the time number allows for unambiguous decoding, so the correct POSIX time number can be generated if desired, or the full UTC time can be stored in a more suitable format. 

The decoding logic required to cope with this style of Unix clock would also correctly decode a hypothetical POSIX-conforming clock using the same interface. This would be achieved by indicating the TIME_INS state during the entirety of an inserted leap second, then indicating TIME_WAIT during the entirety of the following second while repeating the seconds count. This requires synchronous leap second handling. This is probably the best way to express UTC time in Unix clock form, via a Unix interface, when the underlying clock is fundamentally untroubled by leap seconds.
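
On Linux, for example, the kernel exposes this leap second state machine through the adjtimex() system call, which reports the TIME_INS, TIME_OOP, TIME_WAIT, and TIME_DEL states named above. A minimal sketch (Linux-specific, not POSIX):

    #include <stdio.h>
    #include <sys/timex.h>   /* Linux: adjtimex() and the TIME_* states */

    int main(void) {
        struct timex tx = {0};        /* modes == 0: read-only query */
        int state = adjtimex(&tx);
        switch (state) {
        case TIME_INS:  puts("leap second will be inserted");   break;
        case TIME_OOP:  puts("leap second in progress");        break;
        case TIME_WAIT: puts("leap second has just occurred");  break;
        case TIME_DEL:  puts("leap second will be deleted");    break;
        case TIME_OK:   puts("no leap second pending");         break;
        default:        puts("clock unsynchronised or error");  break;
        }
        return 0;
    }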

TAI-based variant

Another, much rarer, non-conforming variant of Unix time keeping involves encoding TAI rather than UTC; some Linux systems are configured this way. Because TAI has no leap seconds, and every TAI day is exactly 86400 seconds long, this encoding is actually a pure linear count of seconds elapsed since 1970-01-01T00:00:00 TAI. This makes time interval arithmetic much easier. Time values from these systems do not suffer the ambiguity that strictly conforming POSIX systems or NTP-driven systems have. 

In these systems it is necessary to consult a table of leap seconds to correctly convert between UTC and the pseudo-Unix-time representation. This resembles the manner in which time zone tables must be consulted to convert to and from civil time; the IANA time zone database includes leap second information, and the sample code available from the same source uses that information to convert between TAI-based time stamps and local time. Conversion also runs into definitional problems prior to the 1972 commencement of the current form of UTC (see section UTC basis below).
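
A minimal sketch of such a table-driven conversion, with a hard-coded three-entry excerpt standing in for the table; real code must load the complete, current leap second list (for example from the IANA tz database) and handle pre-1972 times separately:

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Each entry: the Unix time at which a new cumulative
       TAI-UTC offset took effect. */
    struct leap { int64_t unix_time; int tai_minus_utc; };
    static const struct leap table[] = {
        {   63072000, 10 },   /* 1972-01-01, start of current UTC */
        { 1435708800, 36 },   /* 2015-07-01 */
        { 1483228800, 37 },   /* 2017-01-01 */
    };

    /* Convert a UTC-based Unix time number to a TAI-based count
       of seconds since 1970-01-01T00:00:00 TAI. */
    static int64_t unix_to_tai(int64_t t) {
        int offset = 0;
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
            if (t >= table[i].unix_time)
                offset = table[i].tai_minus_utc;
        return t + offset;
    }

    int main(void) {
        printf("%lld\n", (long long)unix_to_tai(1483228800));  /* +37 s */
        return 0;
    }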

This TAI-based system, despite its superficial resemblance, is not Unix time. It encodes times with values that differ by several seconds from the POSIX time values. A version of this system was proposed for inclusion in ISO C's time.h, but only the UTC part was accepted in 2011. A tai_clock does, however, exist in C++20.

Representing the number

A Unix time number can be represented in any form capable of representing numbers. In some applications the number is simply represented textually as a string of decimal digits, raising only trivial additional problems. However, certain binary representations of Unix times are particularly significant. 

The Unix time_t data type that represents a point in time is, on many platforms, a signed integer, traditionally of 32 bits (but see below), directly encoding the Unix time number as described in the preceding section. Being 32 bits means that it covers a range of about 136 years in total. The minimum representable date is Friday 1901-12-13, and the maximum representable date is Tuesday 2038-01-19. One second after 03:14:07 UTC on 2038-01-19, this representation will overflow. This milestone is anticipated with a mixture of amusement and dread; see the year 2038 problem.
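
A minimal sketch of these limits, assuming a platform whose native time_t is 64-bit so the 32-bit boundary values can still be converted:

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        char buf[32];
        struct tm utc;

        /* Largest and smallest instants a signed 32-bit time_t can hold. */
        time_t max32 = (time_t)INT32_MAX;      /*  2147483647 */
        time_t min32 = (time_t)INT32_MIN;      /* -2147483648 */

        gmtime_r(&max32, &utc);
        strftime(buf, sizeof buf, "%Y-%m-%dT%H:%M:%SZ", &utc);
        printf("max: %s\n", buf);              /* 2038-01-19T03:14:07Z */

        gmtime_r(&min32, &utc);
        strftime(buf, sizeof buf, "%Y-%m-%dT%H:%M:%SZ", &utc);
        printf("min: %s\n", buf);              /* 1901-12-13T20:45:52Z */
        return 0;
    }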

In some newer operating systems, time_t has been widened to 64 bits. This expands the times representable by approximately 293 billion years in both directions, which is over twenty times the present age of the universe per direction. 

There was originally some controversy over whether the Unix time_t should be signed or unsigned. If unsigned, its range in the future would be doubled, postponing the 32-bit overflow (by 68 years). However, it would then be incapable of representing times prior to the epoch. The consensus is for time_t to be signed, and this is the usual practice. The software development platform for version 6 of the QNX operating system has an unsigned 32-bit time_t, though older releases used a signed type. 

The POSIX and Open Group Unix specifications include the C standard library, which includes the time types and functions defined in the <time.h> header file. The ISO C standard states that time_t must be an arithmetic type, but does not mandate any specific type or encoding for it. POSIX requires time_t to be an integer type, but does not mandate that it be signed or unsigned.

Unix has no tradition of directly representing non-integer Unix time numbers as binary fractions. Instead, times with sub-second precision are represented using composite data types that consist of two integers, the first being a time_t (the integral part of the Unix time), and the second being the fractional part of the time number in millionths (in struct timeval) or billionths (in struct timespec). These structures provide a decimal-based fixed-point data format, which is useful for some applications, and trivial to convert for others.
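
A minimal sketch reading the current time with nanosecond resolution through the POSIX struct timespec interface:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec ts;

        /* POSIX: current time as a time_t plus nanoseconds. */
        clock_gettime(CLOCK_REALTIME, &ts);
        printf("%lld.%09ld\n", (long long)ts.tv_sec, ts.tv_nsec);
        return 0;
    }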

UTC basis

The present form of UTC, with leap seconds, is defined only starting from 1 January 1972. Prior to that, since 1 January 1961 there was an older form of UTC in which not only were there occasional time steps, which were by non-integer numbers of seconds, but also the UTC second was slightly longer than the SI second, and periodically changed to continuously approximate the Earth's rotation. Prior to 1961 there was no UTC, and prior to 1958 there was no widespread atomic timekeeping; in these eras, some approximation of GMT (based directly on the Earth's rotation) was used instead of an atomic timescale.

The precise definition of Unix time as an encoding of UTC is only uncontroversial when applied to the present form of UTC. The Unix epoch predating the start of this form of UTC does not affect its use in this era: the number of days from 1 January 1970 (the Unix epoch) to 1 January 1972 (the start of UTC) is not in question, and the number of days is all that is significant to Unix time. 

The meaning of Unix time values below +63072000 (i.e., prior to 1 January 1972) is not precisely defined. The basis of such Unix times is best understood to be an unspecified approximation of UTC. Computers of that era rarely had clocks set sufficiently accurately to provide meaningful sub-second timestamps in any case. Unix time is not a suitable way to represent times prior to 1972 in applications requiring sub-second precision; such applications must, at least, define which form of UT or GMT they use. 

As of 2009, the possibility of ending the use of leap seconds in civil time is being considered. A likely means to execute this change is to define a new time scale, called International Time, that initially matches UTC but thereafter has no leap seconds, thus remaining at a constant offset from TAI. If this happens, it is likely that Unix time will be prospectively defined in terms of this new time scale, instead of UTC. Uncertainty about whether this will occur makes prospective Unix time no less predictable than it already is: if UTC were simply to have no further leap seconds the result would be the same.

History

The earliest versions of Unix time had a 32-bit integer incrementing at a rate of 60 Hz, which was the rate of the system clock on the hardware of the early Unix systems. The value 60 Hz still appears in some software interfaces as a result. The epoch also differed from the current value. The first edition Unix Programmer's Manual dated 3 November 1971 defines the Unix time as "the time since 00:00:00, 1 January 1971, measured in sixtieths of a second".

The User Manual also commented that "the chronologically-minded user will note that 2**32 sixtieths of a second is only about 2.5 years". Because of this limited range, the epoch was redefined more than once, before the rate was changed to 1 Hz and the epoch was set to its present value of 1 January 1970 00:00:00 UTC. This yielded a range of about 136 years, half of it before 1970 and half of it afterwards. 

As indicated by the definition quoted above, the Unix time scale was originally intended to be a simple linear representation of time elapsed since an epoch. However, there was no consideration of the details of time scales, and it was implicitly assumed that there was a simple linear time scale already available and agreed upon. The first edition manual's definition does not even specify which time zone is used. Several later problems, including the complexity of the present definition, result from Unix time having been defined gradually by usage rather than fully defined from the outset.

When POSIX.1 was written, the question arose of how to precisely define time_t in the face of leap seconds. The POSIX committee considered whether Unix time should remain, as intended, a linear count of seconds since the epoch, at the expense of complexity in conversions with civil time, or a representation of civil time, at the expense of inconsistency around leap seconds. Computer clocks of the era were not sufficiently precisely set to form a precedent one way or the other.


The POSIX committee was swayed by arguments against complexity in the library functions, and firmly defined the Unix time in a simple manner in terms of the elements of UTC time. This definition was so simple that it did not even encompass the entire leap year rule of the Gregorian calendar, and would make 2100 a leap year.

The 2001 edition of POSIX.1 rectified the faulty leap year rule in the definition of Unix time, but retained the essential definition of Unix time as an encoding of UTC rather than a linear time scale. Since the mid-1990s, computer clocks have been routinely set with sufficient precision for this to matter, and they have most commonly been set using the UTC-based definition of Unix time. This has resulted in considerable complexity in Unix implementations, and in the Network Time Protocol, to execute steps in the Unix time number whenever leap seconds occur.

Notable events in Unix time

Unix enthusiasts have a history of holding "time_t parties" (pronounced "time tea parties") to celebrate significant values of the Unix time number. These are directly analogous to the new year celebrations that occur at the change of year in many calendars. As the use of Unix time has spread, so has the practice of celebrating its milestones. Usually it is time values that are round numbers in decimal that are celebrated, following the Unix convention of viewing time_t values in decimal. Among some groups round binary numbers are also celebrated, such as +2^30 (1073741824), which occurred at 13:37:04 UTC on Saturday, 10 January 2004.

The events that these celebrate are typically described as "N seconds since the Unix epoch", but this is inaccurate; as discussed above, due to the handling of leap seconds in Unix time the number of seconds elapsed since the Unix epoch is slightly greater than the Unix time number for times later than the epoch.
  • At 18:36:57 UTC on Wednesday, 17 October 1973, the first appearance of the date in ISO 8601 format (1973-10-17) within the digits of Unix time (119731017) took place.
  • At 01:46:40 UTC on Sunday, 9 September 2001, the Unix billennium (Unix time number 1000000000) was celebrated. The name billennium is a portmanteau of billion and millennium. Some programs which stored timestamps using a text representation encountered sorting errors: in a text sort, times after the turnover, which start with the digit 1, erroneously sorted before earlier times starting with the digit 9. Affected programs included the popular Usenet reader KNode and e-mail client KMail, part of the KDE desktop environment. Such bugs were generally cosmetic in nature and quickly fixed once problems became apparent. The problem also affected many Filtrix document-format filters provided with Linux versions of WordPerfect; a patch was created by the user community to solve this problem, since Corel no longer sold or supported that version of the program.
  • At 23:31:30 UTC on Friday, 13 February 2009, the decimal representation of Unix time reached 1234567890 seconds. Google celebrated this with a Google doodle. Parties and other celebrations were held around the world, among various technical subcultures, to celebrate the 1234567890th second.
  • At 03:33:20 UTC on Wednesday, 18 May 2033, the Unix time value will equal 2000000000 seconds.
  • At 06:28:16 UTC on Thursday, 7 February 2036, Network Time Protocol will loop over to the next epoch, as the 32-bit time stamp value used in NTP (unsigned, but based on 1 January 1900) will overflow. This date is close to the following date because the 136-year range of a 32-bit integer number of seconds is close to twice the 70-year offset between the two epochs.
  • At 03:14:08 UTC on Tuesday, 19 January 2038, 32-bit versions of the Unix time stamp will cease to work, as it will overflow the largest value that can be held in a signed 32-bit number (7FFFFFFF₁₆ or 2147483647). Before this moment, software using 32-bit time stamps will need to adopt a new convention for time stamps, and file formats using 32-bit time stamps will need to be changed to support larger time stamps or a different epoch. If unchanged, the next second will be incorrectly interpreted as 20:45:52 Friday 13 December 1901 UTC. This is referred to as the Year 2038 problem.
  • At 05:20:00 UTC on Saturday, 24 January 2065, the Unix time value will equal 3000000000 seconds.
  • At 06:28:15 UTC on Sunday, 7 February 2106, the Unix time will reach FFFFFFFF₁₆ or 4294967295 seconds which, for systems that hold the time on 32-bit unsigned integers, is the maximum attainable. For some of these systems, the next second will be incorrectly interpreted as 00:00:00 Thursday 1 January 1970 UTC. Other systems may experience an overflow error with unpredictable outcomes.
  • At 15:30:08 UTC on Sunday, 4 December 292277026596, 64-bit versions of the Unix time stamp will cease to work, as the value will overflow the largest value that can be held in a signed 64-bit number. This is nearly 22 times the estimated current age of the universe, which is 1.37×10^10 years (13.7 billion).

In literature and calendrics

Vernor Vinge's novel A Deepness in the Sky describes a spacefaring trading civilization thousands of years in the future that still uses the Unix epoch. The "programmer-archaeologist" responsible for finding and maintaining usable code in mature computer systems first believes that the epoch refers to the time when man first walked on the Moon, but then realizes that it is "the 0-second of one of Humankind's first computer operating systems".

Universal Time

From Wikipedia, the free encyclopedia
 
Universal Time (UT) is a time standard based on Earth's rotation. There are several versions of Universal Time, which differ by up to a few seconds. The most commonly used are Coordinated Universal Time (UTC) and UT1 (see § Versions). All of these versions of UT, except for UTC, are based on Earth's rotation relative to distant celestial objects (stars and quasars), but with a scaling factor and other adjustments to make them closer to solar time. UTC is based on International Atomic Time, with leap seconds added to keep it within 0.9 second of UT1.

Universal Time and standard time

Prior to the introduction of standard time, each municipality throughout the clock-using world set its official clock, if it had one, according to the local position of the Sun (see solar time). This served adequately until the introduction of rail travel in Britain, which made it possible to travel fast enough over long distances to require continuous re-setting of timepieces as a train progressed in its daily run through several towns. Greenwich Mean Time, the mean solar time on the Prime Meridian at Greenwich, England, was established to solve this problem: all clocks in Britain were set to this time regardless of local solar noon. Chronometers or telegraphy were used to synchronize these clocks.

Standard time zones of the world since 2016. The number at the bottom of each zone specifies the number of hours to add to UTC to convert it to the local time.
 
Standard time was originally proposed by Scottish-Canadian Sir Sandford Fleming at a meeting of the Canadian Institute in Toronto on 8 February 1879. He suggested that standard time zones could be used locally, but they were subordinate to his single world time, which he called Cosmic Time. The proposal divided the world into twenty-four time zones, each one covering 15 degrees of longitude. All clocks within each zone would be set to the same time as the others but would differ by one hour from those in the neighboring zones. The local time at the Royal Observatory in Greenwich was announced as the recommended base reference for world time on 22 October 1884 at the end of the International Meridian Conference. This location was chosen because by 1884 two-thirds of all nautical charts and maps already used it as their prime meridian. The conference did not adopt Fleming's time zones because they were outside the purpose for which it was called, which was to choose a basis for universal time (as well as a prime meridian).

During the period between 1848 and 1972, all of the major countries adopted time zones based on the Greenwich meridian.

In 1935, the term Universal Time was recommended by the International Astronomical Union as a more precise term than Greenwich Mean Time, because GMT could refer to either an astronomical day starting at noon or a civil day starting at midnight. In some countries, the term Greenwich Mean Time persists in common usage to this day in reference to civil timekeeping.

Measurement

Based on the rotation of the Earth, time can be measured by observing celestial bodies crossing the meridian every day. Astronomers found that it was more accurate to establish time by observing stars as they crossed a meridian rather than by observing the position of the Sun in the sky. Nowadays, UT in relation to International Atomic Time (TAI) is determined by Very Long Baseline Interferometry (VLBI) observations of distant quasars, a method which can determine UT1 to within 15 microseconds or better.

An 1853 "Universal Dial Plate" showing the relative times of "all nations" before the adoption of universal time
 
The rotation of the Earth and UT are monitored by the International Earth Rotation and Reference Systems Service (IERS). The International Astronomical Union also is involved in setting standards, but the final arbiter of broadcast standards is the International Telecommunication Union or ITU.

The rotation of the Earth is somewhat irregular and also is very gradually slowing due to tidal acceleration. Furthermore, the length of the second was determined from observations of the Moon between 1750 and 1890. All of these factors cause the modern mean solar day, on the average, to be slightly longer than the nominal 86,400 SI seconds, the traditional number of seconds per day. As UT is thus slightly irregular in its rate, astronomers introduced Ephemeris Time, which has since been replaced by Terrestrial Time (TT). Because Universal Time is determined by the Earth's rotation, which drifts away from more precise atomic-frequency standards, an adjustment (called a leap second) to this atomic time is needed since (as of 2019) 'broadcast time' remains broadly synchronised with solar time. Thus, the civil broadcast standard for time and frequency usually follows International Atomic Time closely, but occasionally steps (or "leaps") in order to prevent it from drifting too far from mean solar time.

Barycentric Dynamical Time (TDB), a form of atomic time, is now used in the construction of the ephemerides of the planets and other solar system objects, for two main reasons. First, these ephemerides are tied to optical and radar observations of planetary motion, and the TDB time scale is fitted so that Newton's laws of motion, with corrections for general relativity, are followed. Next, the time scales based on Earth's rotation are not uniform and therefore, are not suitable for predicting the motion of bodies in our solar system.

Versions

There are several versions of Universal Time:
  • UT0 is Universal Time determined at an observatory by observing the diurnal motion of stars or extragalactic radio sources, and also from ranging observations of the Moon and artificial Earth satellites. The location of the observatory is considered to have fixed coordinates in a terrestrial reference frame (such as the International Terrestrial Reference Frame) but the position of the rotational axis of the Earth wanders over the surface of the Earth; this is known as polar motion. UT0 does not contain any correction for polar motion. The difference between UT0 and UT1 is on the order of a few tens of milliseconds. The designation UT0 is no longer in common use.
  • UT1 is the principal form of Universal Time. While conceptually it is mean solar time at 0° longitude, precise measurements of the Sun are difficult. Hence, it is computed from observations of distant quasars using long baseline interferometry, laser ranging of the Moon and artificial satellites, as well as the determination of GPS satellite orbits. UT1 is the same everywhere on Earth, and is proportional to the rotation angle of the Earth with respect to distant quasars, specifically, the International Celestial Reference Frame (ICRF), neglecting some small adjustments. The observations allow the determination of a measure of the Earth's angle with respect to the ICRF, called the Earth Rotation Angle (ERA, which serves as a modern replacement for Greenwich Mean Sidereal Time). UT1 is required to follow the relationship
ERA = 2π × (0.7790572732640 + 1.00273781191135448 × Tu) radians,
where Tu = (Julian UT1 date − 2451545.0). A short numerical sketch of this relationship follows the list below.
  • UT1R is a smoothed version of UT1, filtering out periodic variations due to tides. It includes 62 smoothing terms, with periods ranging from 5.6 days to 18.6 years.
  • UT2 is a smoothed version of UT1, filtering out periodic seasonal variations. It is mostly of historic interest and rarely used anymore. It is defined by
UT2 = UT1 + 0.022 sin(2πt) − 0.012 cos(2πt) − 0.006 sin(4πt) + 0.007 cos(4πt) seconds,
where t is the time as a fraction of the Besselian year.
  • UTC (Coordinated Universal Time) is an atomic timescale that approximates UT1. It is the international standard on which civil time is based. It ticks SI seconds, in step with TAI. It usually has 86,400 SI seconds per day but is kept within 0.9 seconds of UT1 by the introduction of occasional intercalary leap seconds. As of 2016, these leaps have always been positive (the days which contained a leap second were 86,401 seconds long). Whenever a level of accuracy better than one second is not required, UTC can be used as an approximation of UT1. The difference between UT1 and UTC is known as DUT1.
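
The UT1 item above quotes the defining relationship for the Earth Rotation Angle. A minimal numerical sketch in C; double precision only, so this is an illustration of the formula rather than a precision implementation (accuracy degrades for dates far from J2000.0):

    #include <stdio.h>
    #include <math.h>

    static const double TWO_PI = 6.28318530717958647692;

    /* Earth Rotation Angle, in radians reduced to [0, 2*pi), for a
       given Julian UT1 date, per the relationship quoted above. */
    static double era(double julian_ut1_date) {
        double tu = julian_ut1_date - 2451545.0;      /* days since J2000.0 */
        double turns = 0.7790572732640 + 1.00273781191135448 * tu;
        return TWO_PI * (turns - floor(turns));       /* fractional turn only */
    }

    int main(void) {
        /* At J2000.0 (JD 2451545.0), ERA = 2*pi*0.7790572732640,
           about 4.8949612 rad. */
        printf("%.10f\n", era(2451545.0));
        return 0;
    }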

Adoption in various countries

The table shows the dates of adoption of time zones based on the Greenwich meridian, including half-hour zones. 


Apart from Nepal Standard Time (UTC+05:45), the Chatham Standard Time Zone (UTC+12:45) used in New Zealand's Chatham Islands and the officially unsanctioned Central Western Time Zone (UTC+8:45) used in Eucla, Western Australia and surrounding areas, all time zones in use are defined by an offset from UTC that is a multiple of half an hour, and in most cases a multiple of an hour.

Coordinated Universal Time

From Wikipedia, the free encyclopedia
 
 
World map of current time zones
 
Coordinated Universal Time (or UTC) is the primary time standard by which the world regulates clocks and time. It is within about 1 second of mean solar time at 0° longitude, and is not adjusted for daylight saving time. It is effectively a successor to Greenwich Mean Time (GMT).


The coordination of time and frequency transmissions around the world began on 1 January 1960. UTC was first officially adopted as CCIR Recommendation 374, Standard-Frequency and Time-Signal Emissions, in 1963, but the official abbreviation of UTC and the official English name of Coordinated Universal Time (along with the French equivalent) were not adopted until 1967.


The system has been adjusted several times, including a brief period where the time-coordination radio signals broadcast both UTC and "Stepped Atomic Time (SAT)" before a new UTC was adopted in 1970 and implemented in 1972. This change also adopted leap seconds to simplify future adjustments. This CCIR Recommendation 460 "stated that (a) carrier frequencies and time intervals should be maintained constant and should correspond to the definition of the SI second; (b) step adjustments, when necessary, should be exactly 1 s to maintain approximate agreement with Universal Time (UT); and (c) standard signals should contain information on the difference between UTC and UT."

A number of proposals have been made to replace UTC with a new system that would eliminate leap seconds. A decision whether to remove them altogether has been deferred until 2023.

The current version of UTC is defined by International Telecommunications Union Recommendation (ITU-R TF.460-6), Standard-frequency and time-signal emissions, and is based on International Atomic Time (TAI) with leap seconds added at irregular intervals to compensate for the slowing of the Earth's rotation. Leap seconds are inserted as necessary to keep UTC within 0.9 second of the UT1 variant of universal time. See the "Current number of leap seconds" section for the number of leap seconds inserted to date.

Etymology

The official abbreviation for Coordinated Universal Time is UTC. This abbreviation arose from a desire by the International Telecommunication Union and the International Astronomical Union to use the same abbreviation in all languages. English speakers originally proposed CUT (for "coordinated universal time"), while French speakers proposed TUC (for "temps universel coordonné"). The compromise that emerged was UTC, which conforms to the pattern for the abbreviations of the variants of Universal Time (UT0, UT1, UT2, UT1R, etc.).

Uses

Time zones around the world are expressed using positive or negative offsets from UTC, as in the list of time zones by UTC offset.

The westernmost time zone uses UTC−12, being twelve hours behind UTC; the easternmost time zone uses UTC+14, being fourteen hours ahead of UTC. In 1995, the island nation of Kiribati moved those of its atolls in the Line Islands from UTC−10 to UTC+14 so that Kiribati would all be on the same day. 


UTC is used in many Internet and World Wide Web standards. The Network Time Protocol (NTP), designed to synchronise the clocks of computers over the Internet, transmits time information from the UTC system. If only millisecond precision is needed, clients can obtain the current UTC from a number of official internet UTC servers. For sub-microsecond precision, clients can obtain the time from satellite signals.

UTC is also the time standard used in aviation, e.g. for flight plans and air traffic control. Weather forecasts and maps all use UTC to avoid confusion about time zones and daylight saving time. The International Space Station also uses UTC as a time standard. 

Amateur radio operators often schedule their radio contacts in UTC, because transmissions on some frequencies can be picked up in many time zones.

Mechanism

UTC divides time into days, hours, minutes and seconds. Days are conventionally identified using the Gregorian calendar, but Julian day numbers can also be used. Each day contains 24 hours and each hour contains 60 minutes. The number of seconds in a minute is usually 60, but with an occasional leap second, it may be 61 or 59 instead. Thus, in the UTC time scale, the second and all smaller time units (millisecond, microsecond, etc.) are of constant duration, but the minute and all larger time units (hour, day, week, etc.) are of variable duration. Decisions to introduce a leap second are announced at least six months in advance in "Bulletin C" produced by the International Earth Rotation and Reference Systems Service. The leap seconds cannot be predicted far in advance due to the unpredictable rate of rotation of the Earth.

Nearly all UTC days contain exactly 86,400 SI seconds with exactly 60 seconds in each minute. UTC is within about one second of mean solar time at 0° longitude, so that, because the mean solar day is slightly longer than 86,400 SI seconds, occasionally the last minute of a UTC day is adjusted to have 61 seconds. The extra second is called a leap second. It accounts for the grand total of the extra length (about 2 milliseconds each) of all the mean solar days since the previous leap second. The last minute of a UTC day is permitted to contain 59 seconds to cover the remote possibility of the Earth rotating faster, but that has not yet been necessary. The irregular day lengths mean that fractional Julian days do not work properly with UTC. 

Since 1972, UTC is calculated by subtracting the accumulated leap seconds from International Atomic Time (TAI), which is a coordinate time scale tracking notional proper time on the rotating surface of the Earth (the geoid). In order to maintain a close approximation to UT1, UTC occasionally has discontinuities where it changes from one linear function of TAI to another. These discontinuities take the form of leap seconds implemented by a UTC day of irregular length. Discontinuities in UTC have occurred only at the end of June or December, although there is provision for them to happen at the end of March and September as well, as a second preference. The International Earth Rotation and Reference Systems Service (IERS) tracks and publishes the difference between UTC and Universal Time, DUT1 = UT1 − UTC, and introduces discontinuities into UTC to keep DUT1 in the interval (−0.9 s, +0.9 s).

As with TAI, UTC is only known with the highest precision in retrospect. Users who require an approximation in real time must obtain it from a time laboratory, which disseminates an approximation using techniques such as GPS or radio time signals. Such approximations are designated UTC(k), where k is an abbreviation for the time laboratory. The time of events may be provisionally recorded against one of these approximations; later corrections may be applied using the International Bureau of Weights and Measures (BIPM) monthly publication of tables of differences between canonical TAI/UTC and TAI(k)/UTC(k) as estimated in real time by participating laboratories.

Because of time dilation, a standard clock not on the geoid, or in rapid motion, will not maintain synchronicity with UTC. Therefore, telemetry from clocks with a known relation to the geoid is used to provide UTC when required, on locations such as those of spacecraft. 

It is not possible to compute the exact time interval elapsed between two UTC timestamps without consulting a table that shows how many leap seconds occurred during that interval. By extension, it is not possible to compute the precise duration of a time interval that ends in the future and may encompass an unknown number of leap seconds (for example, the number of TAI seconds between "now" and 2099-12-31 23:59:59). Therefore, many scientific applications that require precise measurement of long (multi-year) intervals use TAI instead. TAI is also commonly used by systems that cannot handle leap seconds. GPS time always remains exactly 19 seconds behind TAI (neither system is affected by the leap seconds introduced in UTC).
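
A minimal sketch of such a leap-second-aware interval calculation, with a toy two-entry table standing in for the full published list (a real implementation must load the complete IERS leap second table):

    #include <stdio.h>
    #include <stdint.h>

    /* Cumulative positive leap seconds at a given Unix time.
       Toy table covering only the two most recent leap seconds. */
    static int leaps_at(int64_t t) {
        if (t >= 1483228800) return 27;   /* after 2017-01-01 */
        if (t >= 1435708800) return 26;   /* after 2015-07-01 */
        return 25;                        /* valid only back to 2012-07-01 */
    }

    /* Elapsed SI seconds between two Unix time numbers, adding back
       the leap seconds that the Unix encoding dropped in between. */
    static int64_t elapsed_si(int64_t t0, int64_t t1) {
        return (t1 - t0) + (leaps_at(t1) - leaps_at(t0));
    }

    int main(void) {
        /* Across the end-2016 leap second, two SI seconds elapsed
           between 2016-12-31T23:59:59Z and 2017-01-01T00:00:00Z. */
        printf("%lld\n", (long long)elapsed_si(1483228799, 1483228800));
        return 0;
    }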

Time zones

Time zones are usually defined as differing from UTC by an integer number of hours, although the laws of each jurisdiction would have to be consulted if sub-second accuracy was required. Several jurisdictions have established time zones that differ by an odd integer number of half-hours or quarter-hours from UT1 or UTC.

Current civil time in a particular time zone can be determined by adding or subtracting the number of hours and minutes specified by the UTC offset, which ranges from UTC−12:00 in the west to UTC+14:00 in the east (see List of UTC time offsets). 

The time zone using UTC is sometimes denoted UTC±00:00 or by the letter Z—a reference to the equivalent nautical time zone (GMT), which has been denoted by a Z since about 1950. Time zones were identified by successive letters of the alphabet and the Greenwich time zone was marked by a Z as it was the point of origin. The letter also refers to the "zone description" of zero hours, which has been used since 1920. Since the NATO phonetic alphabet word for Z is "Zulu", UTC is sometimes known as "Zulu time". This is especially true in aviation, where "Zulu" is the universal standard. This ensures that all pilots, regardless of location, are using the same 24-hour clock, thus avoiding confusion when flying between time zones. See the list of military time zones for letters used in addition to Z in qualifying time zones other than Greenwich. 

On electronic devices which only allow the time zone to be configured using maps or city names, UTC can be selected indirectly by selecting cities such as Accra in Ghana or Reykjavík in Iceland as they are always on UTC and do not currently use Daylight Saving Time.

Daylight saving time

UTC does not change with a change of seasons, but local time or civil time may change if a time zone jurisdiction observes daylight saving time (summer time). For example, local time on the east coast of the United States is five hours behind UTC during winter, but four hours behind while daylight saving is observed there.

History

The Scottish-Canadian engineer Sir Sandford Fleming promoted worldwide standard time zones, a prime meridian, and the use of the 24-hour clock as key elements in communicating the accurate time. He referred to the resulting system as Cosmic Time. At the 1884 International Meridian Conference held in Washington, D.C., the local mean solar time at the Royal Observatory, Greenwich in England was chosen to define the Universal day, counted from 0 hours at mean midnight. This agreed with civil Greenwich Mean Time (GMT), used on the island of Great Britain since 1847. In contrast, astronomical GMT began at mean noon, 12 hours after mean midnight of the same date, until 1 January 1925, whereas nautical GMT began at mean noon, 12 hours before mean midnight of the same date, at least until 1805 in the Royal Navy; the nautical convention persisted much later elsewhere because it was mentioned at the 1884 conference. In 1884, the Greenwich meridian was used for two-thirds of all charts and maps as their prime meridian. In 1928, the term Universal Time (UT) was introduced by the International Astronomical Union to refer to GMT, with the day starting at midnight. Until the 1950s, broadcast time signals were based on UT, and hence on the rotation of the Earth.

In 1955, the caesium atomic clock was invented. This provided a form of timekeeping that was both more stable and more convenient than astronomical observations. In 1956, the U.S. National Bureau of Standards and U.S. Naval Observatory started to develop atomic frequency time scales; by 1959, these time scales were used in generating the WWV time signals, named for the shortwave radio station that broadcasts them. In 1960, the U.S. Naval Observatory, the Royal Greenwich Observatory, and the UK National Physical Laboratory coordinated their radio broadcasts so that time steps and frequency changes were coordinated, and the resulting time scale was informally referred to as "Coordinated Universal Time".

In a controversial decision, the frequency of the signals was initially set to match the rate of UT, but then kept at the same frequency by the use of atomic clocks and deliberately allowed to drift away from UT. When the divergence grew significantly, the signal was phase shifted (stepped) by 20 ms to bring it back into agreement with UT. Twenty-nine such steps were used before 1960.

In 1958, data was published linking the frequency for the caesium transition, newly established, with the ephemeris second. The ephemeris second is a unit in the system of time that, when used as the independent variable in the laws of motion that govern the movement of the planets and moons in the solar system, enables the laws of motion to accurately predict the observed positions of solar system bodies. Within the limits of observable accuracy, ephemeris seconds are of constant length, as are atomic seconds. This publication allowed a value to be chosen for the length of the atomic second that would accord with the celestial laws of motion.

In 1961, the Bureau International de l'Heure began coordinating the UTC process internationally (but the name Coordinated Universal Time was not formally adopted by the International Astronomical Union until 1967).  From then on, there were time steps every few months, and frequency changes at the end of each year. The jumps increased in size to 0.1 second. This UTC was intended to permit a very close approximation to UT2.

In 1967, the SI second was redefined in terms of the frequency supplied by a caesium atomic clock. The length of second so defined was practically equal to the second of ephemeris time. This was the frequency that had been provisionally used in TAI since 1958. It was soon recognised that having two types of second with different lengths, namely the UTC second and the SI second used in TAI, was a bad idea. It was thought better for time signals to maintain a consistent frequency, and that this frequency should match the SI second. Thus it would be necessary to rely on time steps alone to maintain the approximation of UT. This was tried experimentally in a service known as "Stepped Atomic Time" (SAT), which ticked at the same rate as TAI and used jumps of 0.2 second to stay synchronised with UT2.

There was also dissatisfaction with the frequent jumps in UTC (and SAT). In 1968, Louis Essen, the inventor of the caesium atomic clock, and G. M. R. Winkler both independently proposed that steps should be of 1 second only. This system was eventually approved, along with the idea of maintaining the UTC second equal to the TAI second. At the end of 1971, there was a final irregular jump of exactly 0.107758 TAI seconds, making the total of all the small time steps and frequency shifts in UTC or TAI during 1958–1971 exactly ten seconds, so that 1 January 1972 00:00:00 UTC was 1 January 1972 00:00:10 TAI exactly, and a whole number of seconds thereafter. At the same time, the tick rate of UTC was changed to exactly match TAI. UTC also started to track UT1 rather than UT2. Some time signals started to broadcast the DUT1 correction (UT1 − UTC) for applications requiring a closer approximation of UT1 than UTC now provided.

Current number of leap seconds

The first leap second occurred on 30 June 1972. Since then, leap seconds have occurred on average about once every 19 months, always on 30 June or 31 December. As of July 2019, there have been 27 leap seconds in total, all positive, putting UTC 37 seconds behind TAI.

Rationale

Graph showing the difference DUT1 between UT1 and UTC (in seconds). Vertical segments correspond to leap seconds.
 
Earth's rotational speed is very slowly decreasing because of tidal deceleration; this increases the length of the mean solar day. The length of the SI second was calibrated on the basis of the second of ephemeris time and can now be seen to have a relationship with the mean solar day observed between 1750 and 1892, analysed by Simon Newcomb. As a result, the SI second is close to 1/86400 of a mean solar day in the mid‑19th century. In earlier centuries, the mean solar day was shorter than 86,400 SI seconds, and in more recent centuries it is longer than 86,400 seconds. Near the end of the 20th century, the length of the mean solar day (also known simply as "length of day" or "LOD") was approximately 86,400.0013 s. For this reason, UT is now "slower" than TAI by the difference (or "excess" LOD) of 1.3 ms/day. 

The excess of the LOD over the nominal 86,400 s accumulates over time, causing the UTC day, initially synchronised with the mean sun, to become desynchronised and run ahead of it. Near the end of the 20th century, with the LOD at 1.3 ms above the nominal value, UTC ran faster than UT by 1.3 ms per day, getting a second ahead roughly every 800 days. Thus, leap seconds were inserted at approximately this interval, retarding UTC to keep it synchronised in the long term. The actual rotational period varies due to unpredictable factors such as tectonic motion and has to be observed, rather than computed.

Just as adding a leap day every four years does not mean the year is getting longer by one day every four years, the insertion of a leap second every 800 days does not indicate that the mean solar day is getting longer by a second every 800 days. It will take about 50,000 years for a mean solar day to lengthen by one second (at a rate of 2 ms/cy, where cy means century). This rate fluctuates within the range of 1.7–2.3 ms/cy. While the rate due to tidal friction alone is about 2.3 ms/cy, the uplift of Canada and Scandinavia by several metres since the last Ice Age has temporarily reduced this to 1.7 ms/cy over the last 2,700 years. The correct reason for leap seconds, then, is not the current difference between actual and nominal LOD, but rather the accumulation of this difference over a period of time: Near the end of the 20th century, this difference was about 1/800 of a second per day; therefore, after about 800 days, it accumulated to 1 second (and a leap second was then added). 


In the graph of DUT1 above, the excess of LOD above the nominal 86,400 s corresponds to the downward slope of the graph between vertical segments. (The slope became shallower in the 2000s, because of a slight acceleration of Earth's crust temporarily shortening the day.) Vertical position on the graph corresponds to the accumulation of this difference over time, and the vertical segments correspond to leap seconds introduced to match this accumulated difference. Leap seconds are timed to keep DUT1 within the vertical range depicted by this graph. The frequency of leap seconds therefore corresponds to the slope of the diagonal graph segments, and thus to the excess LOD.

Future

As the Earth's rotation continues to slow, positive leap seconds will be required more frequently. The long-term rate of change of LOD is approximately +1.7 ms per century. At the end of the 21st century, LOD will be roughly 86,400.004 s, requiring leap seconds every 250 days. Over several centuries, the frequency of leap seconds will become problematic.

Some time in the 22nd century, two leap seconds will be required every year. The current use of only the leap second opportunities in June and December will be insufficient to maintain a difference of less than 1 second, and it might be decided to introduce leap seconds in March and September. In the 25th century, four leap seconds are projected to be required every year, so the current quarterly options would be insufficient.

In April 2001, Rob Seaman of the National Optical Astronomy Observatory proposed that leap seconds be allowed to be added monthly rather than twice yearly.

There is a proposal to redefine UTC and abolish leap seconds, so that sundials would very slowly get further out of sync with civil time. The resulting gradual shift of the sun's movements relative to civil time is analogous to the shift of seasons relative to the yearly calendar that results from the calendar year not precisely matching the tropical year length. This would be a practical change in civil timekeeping, but would take effect slowly over several centuries. UTC (and TAI) would be more and more ahead of UT; it would coincide with local mean time along a meridian drifting slowly eastward (reaching Paris and beyond). Thus, the time system would lose its fixed connection to the geographic coordinates based on the IERS meridian. Assuming that there are no major events affecting civilisation over the coming centuries, the difference between UTC and UT could reach 0.5 hour after the year 2600 and 6.5 hours around 4600.

ITU‑R Study Group 7 and Working Party 7A were unable to reach consensus on whether to advance the proposal to the 2012 Radiocommunications Assembly; the chairman of Study Group 7 elected to advance the question to the 2012 Radiocommunications Assembly (20 January 2012), but consideration of the proposal was postponed by the ITU until the World Radio Conference in 2015. This conference, in turn, considered the question, but no permanent decision was reached; it only chose to engage in further study with the goal of reconsideration in 2023.

Sunday, September 6, 2020

IERS Reference Meridian

From Wikipedia, the free encyclopedia
 
 
Nations that touch the Equator (red) and the Prime Meridian (blue)
 
The IERS Reference Meridian (IRM), also called the International Reference Meridian, is the prime meridian (0° longitude) maintained by the International Earth Rotation and Reference Systems Service (IERS). It passes about 5.3 arcseconds east of George Biddell Airy's 1851 transit circle or 102 metres (335 ft) at the latitude of the Royal Observatory, Greenwich.  It is also the reference meridian of the Global Positioning System (GPS) operated by the United States Department of Defense, and of WGS84 and its two formal versions, the ideal International Terrestrial Reference System (ITRS) and its realization, the International Terrestrial Reference Frame (ITRF).

Location

The reason for the 5.3 arcsecond offset between the IERS Reference Meridian and the Airy transit circle is that the observations with the transit circle were based on the local vertical, while the IERS Reference is a geodetic longitude, that is, the plane of the meridian contains the center of mass of the Earth.

The International Hydrographic Organization adopted an early version of the IRM in 1983 for all nautical charts. The IRM was adopted for air navigation by the International Civil Aviation Organization on 3 March 1989. Tectonic plates slowly move over the surface of Earth, so most countries have adopted for their maps an IRM version fixed relative to their own tectonic plate as it existed at the beginning of a specific year. Examples include the North American Datum 1983 (NAD83), the European Terrestrial Reference Frame 1989 (ETRF89), and the Geocentric Datum of Australia 1994 (GDA94). Versions fixed to a tectonic plate differ from the global version by at most a few centimetres. 

However, the IRM is not fixed to any point on Earth. Instead, all points on the European portion of the Eurasian plate, including the Royal Observatory, are slowly moving northeast about 2.5 cm per year relative to it. The IRM is the weighted average (in the least squares sense) of the reference meridians of the hundreds of ground stations contributing to the IERS network. The network includes GPS stations, Satellite Laser Ranging (SLR) stations, Lunar Laser Ranging (LLR) stations, and the highly accurate Very Long Baseline Interferometry (VLBI) stations. All stations' coordinates are adjusted annually to remove net rotation relative to the major tectonic plates. If Earth had only two hemispherical plates moving relative to each other around any axis which intersects their centres or their junction, then the longitudes (around any other rotation axis) of any two diametrically opposite stations would have to move in opposite directions by the same amount. The 180th meridian is opposite the IERS Reference Meridian and forms a great circle with it, dividing the Earth into the Western Hemisphere and the Eastern Hemisphere.

Universal Time is notionally based on the WGS84 meridian. Because of changes in the rate of Earth's rotation, standard international time UTC can differ from the mean observed solar time at noon on the prime meridian by up to 0.9 second. Leap seconds are inserted periodically to keep UTC close to Earth's angular position relative to the Sun.

List of places

Starting at the North Pole and heading south to the South Pole, the IERS Reference Meridian passes through 8 countries.

Introduction to entropy

From Wikipedia, the free encyclopedia https://en.wikipedia.org/wiki/Introduct...