NavList:
A Community Devoted to the Preservation and Practice of Celestial Navigation and Other Methods of Traditional Wayfinding
Re: Watches as chronometers
From: Geoffrey Kolbe
Date: 2013 Jun 01, 08:16 +0100
Bill B wrote, replying to the remark that "Navigation or not, he has to compare the watch's time to some standard":
And what "standard" should that be? GMT? UT1? Atomic Time? GPS Time?
I would argue that the most useful "standard" to which a chronometer used for celestial navigation should be compared is 'Earth Time', or the observed mean solar day. (This is often called UT1, but these days the definition of UT1 is not actually based on observations of the sun, so it is not directly linked to the mean solar day.) As we all know, the rotational period of the earth is not constant, and is gradually slowing down in a way which is not predictable in the short term. As a result, there is no way to accurately predict how 'Earth Time' compares with other standards based on atomic clocks. The only way to determine Earth Time is to derive it from observations of the sun, moon and planets as compared to those predicted in the ephemerides for a given time.
UTC (broadcast time, which is derived from atomic clocks) is currently forced to keep pace with this 'Earth Time' (or more specifically, UT1) by the intercalation of leap seconds as required, usually once every year or so. This means we can use UTC to extract the positions of the planets, moon and sun from the ephemerides, so long as we are prepared to accept an accuracy of half a second or so.
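[Editor's note: the correction from UTC to UT1 described above can be sketched in a few lines. The DUT1 value below is purely illustrative, not a real published figure; in practice DUT1 (UT1 minus UTC) is published by the IERS and stays within ±0.9 seconds while leap seconds are applied.]

```python
# Sketch: correcting a UTC timestamp to UT1 using a published DUT1
# offset (UT1 - UTC). The offset value used here is illustrative only.
from datetime import datetime, timedelta

def utc_to_ut1(utc: datetime, dut1_seconds: float) -> datetime:
    """Apply the DUT1 offset (seconds) to a UTC timestamp to get UT1."""
    return utc + timedelta(seconds=dut1_seconds)

obs = datetime(2013, 6, 1, 8, 16, 0)   # a UTC observation time
dut1 = 0.05                            # example DUT1 in seconds (not a real value)
print(utc_to_ut1(obs, dut1))
```

For celestial navigation at the half-second level mentioned above, this correction can simply be skipped and UTC used directly.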
Where you are rating a chronometer over periods that are short compared with a year, say a few weeks, and a leap second has been inserted in the interim, then of course that leap second should be taken into account. But where you are looking at the drift of your chronometer against UTC over a period as long as ten years, I would argue that whether leap seconds have been inserted in the intervening period is of no real concern. Moreover, rating a chronometer over a period as long as ten years does not necessarily give a rate that is accurate for keeping track of 'Earth Time' as currently paced out by the rotation of the earth, since the rotation will have slowed significantly over that period.
However, where you are rating a chronometer by its timekeeping over periods comparable to one year, I would argue that how you deal with the intercalation of leap seconds is not trivial and needs careful thought.
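[Editor's note: to put a rough number on why this is non-trivial at the one-year timescale, the arithmetic below (mine, not from the original post) shows the rate bias introduced by a single uncorrected leap second over a year's rating interval.]

```python
# Sketch: one uncorrected leap second, spread over a 365-day rating
# interval, biases the derived rate by about 2.7 ms per day -- small,
# but not obviously negligible for a very stable timekeeper.
bias_per_day = 1.0 / 365.0          # seconds per day
print(f"{bias_per_day * 1000:.2f} ms/day")   # prints 2.74 ms/day
```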
Geoffrey Kolbe