NavList:
A Community Devoted to the Preservation and Practice of Celestial Navigation and Other Methods of Traditional Wayfinding
Re: Watches as chronometers
From: Bill B
Date: 2013 Jun 10, 15:21 -0400
On 6/9/2013 2:34 AM, Geoffrey Kolbe wrote:

> You are correct, my questions were rhetorical. What I was trying to do
> was to make you think a little about what you had said, and about what
> you had proposed was a mistake by Gary in not accounting for a leap
> second. You have obviously gone away and researched "time" and
> the complexity of all its various standards, so I account that a partial
> success.

Before we begin again, let me compliment Gary on the success of his experiment. I was in no way attempting to diminish his astonishing results. For the record, I did not propose that Gary made a mistake in not accounting for a leap second; I was simply curious and followed up. If I have in any way offended you, Gary, I apologize. That was not my intent. And now back to the program already in progress.

Thank you, Socrates. Your "obviously" is a bit off the mark. Near the end of carefully crafting the response that wound up in the bit bucket, the scales lifted from my eyes, and vagaries in semi-jargon usage became clear to me. I had been doing experiments since Alex's and my adventure in St. Joe, MI in March 2012 (during the bizarre warmup), where all our intercepts were off in the same direction by approx. 10 arc minutes. Alex was using his pocket sextant and sharing my Astra IIIB. Alex made a case for my RC clock being off by a minute. My take was highly abnormal dip/refraction. Those discussions are in the archives.

The experiments included recording the NIST computer clock with a video camera at 30 FPS to check the camera's rate, then videotaping watches, GPS units, and my RC clock next to the computer screen to determine their drift. I also placed my RC clock in a steel box in my basement for 10 days. The clock was surrounded by lead, copper, brass, and gunpowder so it could not receive a signal. This was to rate its drift over a period longer than 24 hours. This is also a matter of public record.
The upcoming 2012 June 30 leap second also prompted serious study. It was simply a matter of dusting off the cobwebs and taking adequate time to think it through after my first hasty reply to you. I mention this not to be argumentative, but rather to save you the pain and expense of rotator-cuff surgery resulting from patting yourself on the back.

> But you obviously did not grasp my point about accounting for
> the insertion of leap seconds in UTC when rating a clock against UTC.

I did grasp your point, and stated in my last post that a uniform time scale is necessary to rate a chronometer. It is the *only* theoretically correct method. Perhaps that statement got lost between the lines.

I have gone back and read the entire thread(s) which Greg Rudzinski initiated on 2009 May 6 as "How Many Chronometers." Here he suggested the use of three quartz watches. The thread name later became "Watches as Chronometers" when Gary picked up the mantle, purchased three $17 quartz watches, and started his experiment 2009 September 15. Much discussion was had on the effect of temperature during the exchanges in both threads. After his initial 11? day rating and some backtracking after refrigerating the watches, he was off and running and rated the array over a 99-day period. He did state he was using UTC time ticks from WWV. I was somehow under the impression he had backed out any changes in DUT1 to adjust back to a uniform time scale. I am apparently in error on that count. My bad.

In theory, using UTC was incorrect no matter what the duration of the rating period. In practice, if we accept as a given that the human ability to resolve time without aids is 0.2 s, then 0.2 s spread over a 99-day rating period would be at least 0.0020 s per day unless random errors canceled each other out. Multiplied over 3+ years, that makes Gary's results even more remarkable. For the sake of discussion, let's assume Gary possesses superhuman abilities (I often believe he does).
Assume he successfully synced all three watches to UTC within 0.001 s and placed them in a climate-controlled vault. Let's further assume UT1 changed by 0.1 s during the 99-day rating. That would contribute roughly 0.0010 s of error per day. Tallied, that is a possible 0.0030 s per day, or a 4.053-second error over a 1351-day stretch. If I was impressed before, I'm in awe now!

> There are no leap second adjustments in TAI.

I do understand. If I have even hinted to the contrary, it was a typo, an acid flashback, or my evil twin contacted you off list.

> Gary was rating his clocks
> against UTC, which is literally "broadcast time" as he was comparing his
> clocks against the WWV radio signal.

Understood now. Please see above.

> As for the leap seconds which were inserted during the rating period,
> see Gary Lapook's posting of 31st May. "I did not allow for the one leap
> second inserted during the test period on June 30, 2012."

Here interpretation again causes confusion. Do *not* confuse "testing" with "rating." Gary is clear that he rated his array for 99 days and then used that rate to predict UTC, comparing his predictions to WWV UTC. I recollect no mention of Gary changing the rate past the 99-day rating. Once he established a starting point, he had the option of resetting the watches to account for the 2012 June 30 leap second (ill advised, IMHO) or factoring the leap second into his predictions. It's that simple.

> Gary accounted for this by adding the leap second to the calculated
> error. The main point of my posting was to suggest that when rating
> clocks over a three year time period then simply adding the number of
> leap seconds inserted during that period to the calculated error was too
> simplistic. I argued that this approach would be correct for a short
> period of, say a few months.
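The error budget above is simple arithmetic, so here is a minimal sketch of it, mirroring the rounding used in the text (per-day rates kept to 0.1 ms). The figures — 0.2 s of human timing resolution, 0.1 s of assumed UT1 change, the 99-day rating, and the 1351-day stretch — all come from the discussion; the function name `per_day_rate` is mine:

```python
def per_day_rate(total_error_s, rating_days):
    """Spread a fixed error evenly over the rating period, rounded to 0.1 ms."""
    return round(total_error_s / rating_days, 4)

# Assumed inputs from the discussion above.
sync_rate = per_day_rate(0.2, 99)   # human timing resolution -> 0.0020 s/day
ut1_rate = per_day_rate(0.1, 99)    # assumed UT1 change      -> 0.0010 s/day
total_rate = sync_rate + ut1_rate   # -> 0.0030 s/day

# Accumulated over the 1351-day prediction stretch:
accumulated = total_rate * 1351
print(round(accumulated, 3))  # -> 4.053
```

Note that this treats both error sources as worst-case and additive; if the random errors partially cancel, the real accumulated error would be smaller.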
> I argued that over a period of, say, ten
> years, then the saw-tooth discontinuities in UTC caused by the insertion
> of a number of leap seconds would average out and you need not account
> for leap seconds at all.

We are in agreement here, mostly. I need to think through your saw-tooth discontinuities proposal. It creates cognitive dissonance for me at the moment.

> But where your rating period (three years in
> this case) is approximately the same as the saw-tooth waveform period in
> UTC (approximately one year) then how you account for the insertion of a
> leap second is non-trivial. You did not address this point at all.

There is no point to address. Gary was not technically "rating" for three years, unless you wish to consider comparing predicted to actual as rating. He formally rated for 99 days. Rating without a uniform time scale is simply not a viable option over a 3-year period. Period!

Finally, Barbie whined, "Math is hard." As TAI and UTC are joined at the hip, adjusting for the UTC change over a period of time is not rocket science. Back out any leap seconds and voila, an approximation of a uniform time scale. Go one step further and use UT1 and you can get to within about 0.1 s. Closer yet with UT2. Yes? No?

"Time flies like an arrow. Fruit flies like a banana." --Groucho Marx. Sadly, Zeno's Arrow cannot move at all.

Bill B
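The "back out any leap seconds" step is mechanical enough to sketch. Below is a minimal illustration, not anyone's actual procedure: it computes the elapsed time between two UTC instants on a uniform (TAI-like) scale by adding one second for each leap second inserted inside the interval. The partial leap-second table covers only the years under discussion (the authoritative list is published by the IERS), and the function name is mine:

```python
from datetime import datetime, timedelta

# Instants at which a positive leap second took effect (start of the day
# after insertion, UTC). Partial table for the period discussed; the full
# list is published by the IERS in Bulletin C.
LEAP_SECOND_DATES = [
    datetime(2009, 1, 1),
    datetime(2012, 7, 1),
    datetime(2015, 7, 1),
    datetime(2017, 1, 1),
]

def uniform_elapsed_seconds(start_utc, end_utc):
    """Elapsed time on a uniform (TAI-like) scale between two UTC instants.

    Naive UTC subtraction misses inserted leap seconds, so add one second
    for every leap second that took effect inside the interval.
    """
    naive = (end_utc - start_utc).total_seconds()
    leaps = sum(1 for d in LEAP_SECOND_DATES if start_utc < d <= end_utc)
    return naive + leaps

# A 1351-day stretch starting 2009 September 15 straddles the 2012 June 30
# leap second, so the uniform interval is one second longer than naive UTC.
start = datetime(2009, 9, 15)
end = start + timedelta(days=1351)
print(uniform_elapsed_seconds(start, end) - 1351 * 86400.0)  # -> 1.0
```

Going the further step mentioned above, to UT1 or UT2, would additionally require applying the broadcast DUT1 correction at each endpoint.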