NavList:
A Community Devoted to the Preservation and Practice of Celestial Navigation and Other Methods of Traditional Wayfinding
From: Lars Bergman
Date: 2022 Aug 8, 14:07 -0700
As the azimuth of the Moon is very near the star's azimuth, we can approximate cos(azimuth difference) as unity. Then the difference between the true (geocentric) altitudes of the Moon and the star equals the true distance. The difference between the measured altitudes is only 0.2' from the measured distance, so if there are errors in the altitudes they must be in the same direction: either both are too low or both are too high. This lessens the impact of altitude errors; it seems that a 5' error in the altitudes gives only 0.1' or 0.2' of error in the true distance, owing to the slight changes in the Moon's parallax and the star's refraction. In any case, the altitude corrections have to be calculated with the highest possible accuracy.
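A quick numerical sketch of that approximation, in Python: the spherical law of cosines gives the true distance from the two true altitudes and the azimuth difference, and with the azimuth difference set to zero it collapses to the plain altitude difference. The altitudes used are the true altitudes worked out below; the 2° azimuth difference in the last line is only illustrative.

```python
import math

def lunar_distance(h_moon, h_star, dz):
    """Great-circle distance (degrees) between two bodies from their
    true altitudes (degrees) and azimuth difference dz (degrees)."""
    h1, h2, z = map(math.radians, (h_moon, h_star, dz))
    cos_d = (math.sin(h1) * math.sin(h2)
             + math.cos(h1) * math.cos(h2) * math.cos(z))
    return math.degrees(math.acos(cos_d))

# True altitudes from the calculation below: Moon 41°23.7', star 10°13.9'
h_moon = 41 + 23.7 / 60
h_star = 10 + 13.9 / 60

d_exact  = lunar_distance(h_moon, h_star, 0.0)   # same azimuth for both bodies
d_approx = abs(h_moon - h_star)                  # the cos(dZ) ≈ 1 shortcut
d_dz2    = lunar_distance(h_moon, h_star, 2.0)   # with a 2° azimuth difference

print(f"distance (dZ = 0):  {d_exact:.4f}°")
print(f"altitude difference: {d_approx:.4f}°")   # identical when dZ = 0
print(f"distance (dZ = 2°): {d_dz2:.4f}°")
```

Since cos(dZ) ≤ 1, the full formula can only give a distance greater than or equal to the altitude difference; the dZ = 2° line shows how fast that grows.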
I get for the Moon: augmented semidiameter +14.9', refraction -1.1', parallax +41.0', oblateness -0.1'; true altitude 41°23.7'.
And for the star: refraction -5.1' (using 15°C and 1010 hPa); true altitude 10°13.9'.
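For what it's worth, these corrections can be cross-checked numerically. The sketch below uses Bennett's refraction formula with the 15°C/1010 hPa scaling. The sextant altitudes (star 10°19.0', Moon lower limb 40°29.0') and the horizontal parallax of 54.1' are my back-calculations from Lars's figures, not values taken from the almanac, and the order of corrections is one common convention, not necessarily the one Lars used.

```python
import math

def refraction_arcmin(h_app, temp_c=15.0, pres_hpa=1010.0):
    """Bennett's refraction formula (arcminutes) for apparent altitude
    h_app in degrees, scaled for temperature and pressure."""
    r = 1.0 / math.tan(math.radians(h_app + 7.31 / (h_app + 4.4)))
    return r * (pres_hpa / 1010.0) * (283.0 / (273.0 + temp_c))

# --- Star (sextant altitude 10°19.0' is inferred from the results) ---
h_star_app = 10 + 19.0 / 60
r_star = refraction_arcmin(h_star_app)          # about 5.1'
h_star_true = h_star_app - r_star / 60

# --- Moon (lower-limb sextant altitude 40°29.0' likewise inferred) ---
sd, oblateness = 14.9, -0.1                     # arcminutes, from the post
hp = 54.1                                       # assumed horizontal parallax, arcmin
h_moon_app = 40 + 29.0 / 60 + sd / 60           # apparent altitude of Moon's centre
r_moon = refraction_arcmin(h_moon_app)          # about 1.1'
h_refr = h_moon_app - r_moon / 60               # refracted altitude
p = hp * math.cos(math.radians(h_refr))         # parallax in altitude, about 41.0'
h_moon_true = h_refr + (p + oblateness) / 60

print(f"star: refraction {r_star:.1f}', true altitude {h_star_true:.4f}°")
print(f"moon: refraction {r_moon:.1f}', parallax {p:.1f}', "
      f"true altitude {h_moon_true:.4f}°")
```

With those assumed inputs the corrections land on the same values to 0.1', and the true altitudes agree with 41°23.7' and 10°13.9'.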
The true distance then becomes 31°9.8', and I get UT 3h38m37s. An error of, say, ±0.2' in the distance thus calculated gives a time error of ±33s. I have not checked whether an azimuth difference of 2° between the Moon and the star makes any significant difference compared with the unity approximation; to do that you need to solve an "ordinary" lunar. To be on the safe side, let's double the uncertainty to, say, ±1m.
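The time sensitivity follows from how fast the lunar distance changes. The rate below is simply back-derived from the ±0.2' ↔ ±33s figures above (about 22' per hour here; the Moon gains very roughly 30' per hour on the stars, depending on geometry), so this only illustrates the scaling; a real solution would take the rate from the ephemeris for the date.

```python
def time_error_seconds(dist_err_arcmin, rate_arcmin_per_hour):
    """Time uncertainty (seconds) caused by an error in the cleared
    distance, given the rate at which the lunar distance changes."""
    return dist_err_arcmin / rate_arcmin_per_hour * 3600.0

# Rate implied by the figures above (0.2' of distance per 33 s of time);
# not computed from an ephemeris.
rate = 0.2 * 3600.0 / 33.0                      # arcminutes per hour

print(f"±0.2' -> ±{time_error_seconds(0.2, rate):.0f} s")
print(f"±0.4' -> ±{time_error_seconds(0.4, rate):.0f} s")  # doubled distance error
```

Doubling the distance error doubles the time error, which is why doubling the stated uncertainty to ±1m is a comfortable margin.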
Lars