NavList:
A Community Devoted to the Preservation and Practice of Celestial Navigation and Other Methods of Traditional Wayfinding
Re: camera sextant?
From: Marcel Tschudin
Date: 2010 Jul 7, 17:47 +0300
I'm thankful for George's critical review, even if we do not always agree. You will find my comments inserted into the corresponding parts of George's last posting.

> ...
> before refraction became a serious problem. Even better, though it would
> probably call for a second observer, would be to calibrate by taking
> simultaneous photos and sextant observations, in which case refraction
> wouldn't need to be predicted, and the job could be done right down to
> zero.

Could it be that you are thinking here of a calibration using a multiple-exposure photo, as was discussed for the midnight fun picture, using a tripod? If so, then an exact timing of each exposure should be sufficient. This procedure would, however, have the important disadvantage that the calibration would be done in different parts of the photo and not along the "reference line" (the line in the photo which is intended to be used for angular measurements and which therefore needs to be calibrated).

> Yes, but the difficulty is the small size of the yardstick being used, the
> diameter of the Sun. I would liken it to measuring up a room for a fitted
> carpet, by stepping a penny across the floor. It depends, crucially, on
> that Sun diameter. Not just the scatter in measuring it, which shows up as
> scatter in Marcel's calculations of scale-factor in his "cal fig" plot. But
> also in systematic error, if there happens to be any effect of
> over-exposure on the apparent size of the Sun disc, the camera's equivalent
> to "irradiation". How confident can Marcel be that any such effect is
> negligible? What evidence can he offer to base that on?

I agree that, depending on the lens and the picture size, the sun image can be small, possibly even too small. Its size is, however, a very accurate reference scale. Your example might give a wrong impression: the penny is not used to measure the length of the room. You place the penny at selected places across the room to measure the scale in moa per pixel.
One does this at as many places as necessary to reasonably approximate the measured values with a polynomial. But you are right, it doesn't necessarily require the sun. In the sheet "ReadMe" under "Practical Considerations" it is therefore recommended: "The calibration photos have to be done of a reference object of known horizontal dimension and having crisp edges." Also in the sheet "Cal_Data": "Left and right limb refer to the borders of the reference object, which could be the sun." It can indeed also be a yardstick or any other item of known size at a known distance.

Regarding your above comment on the systematic errors: you are right to mention them, but I am not aware of ever having suggested that they would be negligible. Maybe in some cases they really are; it just hasn't been investigated in more detail. It is, however, exactly for this reason that the sheet "ReadMe" under "Practical Considerations" recommends: "The same software and the same technique should be used for the calibration measurements and for measuring an actual observation." The reason for this recommendation is the following: when Greg and I measured the sun's diameter in his 200mm-lens photos, we noticed systematic differences of around 1 to 2 pixels (out of 325 px). One reason for this may have been the different programs we used to measure the pixels, and another the different techniques we adopted for measuring the sun's diameter. Different brightness/contrast settings DO change the diameter in the photo. My guess is that the different programs have different default brightness and contrast settings when showing a picture. Maybe our different techniques also contributed: Greg measures the diameter by cropping the picture at the selected tangent points, whereas I measure it using the tip of an arrow cursor.
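To make the penny-stepping idea concrete, here is a minimal sketch in Python. The limb positions and the assumed sun diameter of 31.5 moa are invented for illustration; none of these numbers come from Greg's photos. The sketch derives the scale (moa per pixel) at several positions along the reference line and approximates it with a second-order polynomial, as the SAMT "calibration function" does:

```python
import numpy as np

# Assumed angular diameter of the sun at the time of the photos (moa).
SUN_DIAMETER_MOA = 31.5

# Hypothetical (left_limb_px, right_limb_px) pairs, i.e. the measured
# pixel positions of the sun's limbs at several places along the
# reference line -- illustrative values only.
limbs = [(100, 425), (900, 1222), (1700, 2018), (2500, 2812), (3300, 3604)]

# Pixel position of each sun image (centre between the limbs) and the
# local scale in moa per pixel derived from the known diameter.
centers = np.array([(l + r) / 2.0 for l, r in limbs])
scales = np.array([SUN_DIAMETER_MOA / (r - l) for l, r in limbs])

# Approximate the measured scales with a second-order polynomial in
# pixel position -- the "calibration function".
calibration = np.poly1d(np.polyfit(centers, scales, 2))

# Scale near the middle of the photo, in moa per pixel.
scale_mid = calibration(2000)
```

The polynomial then gives the local scale at any pixel position along the reference line, not only where a sun image happened to fall.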
Doing the calibration measurements the same way as the actual measurements therefore helps to cancel out at least a certain amount of such systematic errors.

> | Regarding again the conversion function: After some additional
> | thoughts I gained the impression that the transformation of the second
> | order polynomial calibration function into the conversion function is
> | likely to result in the conversion function also to be a second order
> | polynomial. I'm however not in a position to prove this
> | mathematically.
>
> If I understand his meaning of "calibration function" and "conversion
> function" correctly, such proof will be impossible, because it just isn't
> so. One is the slope of the other.

Sorry, George, here I don't agree with you. The "calibration function" corresponds to the measured scales (moa/pixel) as a function of pixel POSITIONS along the reference line. The "conversion function" results from a transformation of the "calibration function", consisting in integrating the "calibration function" over different pixel RANGES around a selected pixel position, in this particular case around the pixel position in the middle of the photo. The "conversion function" is only valid for this selected pixel position. The resulting "conversion function" therefore represents spherical angles as a function of pixel RANGES. Looking at a data point of this "conversion function", which provides an angle for a given pixel range: if we divide this angle by the range, we obtain from the "calibration function" the mean value of the scale within this pixel range around the selected pixel position. If the "calibration function" is now approximated with a second-order polynomial, which is about the best we can do for the example shown, then the "calibration function" has one extremum. The mathematical proof would likely consist in showing that if the original function has one extremum, the transformed function can likewise have at most one extremum.
> ===================
>
> Now let me go back to an earlier posting today, from Marcel, in which he
> asked-
>
> "Could the reason for this confusion be that in one case we have pixel
> POSITIONS and in the other pixel RANGES and as a consequence of this
> also the meaning of the origin (0,0)?"
>
> Yes, that's part of it. The three of us have approached this problem from
> somewhat different directions and some confusion has resulted. I have tried
> to define the terms I have used but may not have succeeded. Let me try
> again, from the start, and see if we can agree.
>
> If we take axial symmetry for granted, then it seems simplest to define
> everything in terms of that central axis, and a radial line passing through
> the centre of the array. The incoming angle A, is measured from that axis,
> and the corresponding pixel count Px (in the x direction), is measured from
> the centre of the array, positive or negative along the x axis. If, in some
> implementation, pixels are instead counted from one edge of the array, a
> suitable offset is to be subtracted from that count. That relationship, Px
> = f(A) defines everything we need to know about the distortion of the
> system. That seems to be what Marcel refers to as the "conversion
> function", and I'll go along with that name.

The distortion of the lens is approximated with the "calibration function" (moa/pixel as a function of pixel position). The "conversion function" is "only" a particular transformation of the "calibration function". Your "suitable offset" only makes sense for the "calibration function". For the "conversion function", where the angles are represented as a function of pixel RANGES, a pixel offset doesn't really seem to make sense.

> That is the function that has to be antisymmetric about the zero point, so
> that f(-A) = -f(A) : because of which, if it's a polynomial in A, it can
> not contain any constant term or any terms in even powers of A.
> So it can have only terms in A, A cubed, A to the 5th power, and so on.
> Another possibility is a tan function, which is also antisymmetric, passing
> through the origin at (0,0).

The "calibration function" shows, in the provided example, that the lens distortion *tends* to be SYMMETRICAL (not antisymmetrical); it doesn't have to be exactly symmetrical, however; it depends on the lens and on the selected "reference line" (the line in the photo which is calibrated and used for measurements, preferably near the centre of the photo). All the measurements done so far on different lenses showed that the "calibration function" was best approximated with a second-order polynomial. Depending on the type of distortion, the extremum of this quadratic function doesn't have to be in the centre; it may thus really require a linear term as well. The "calibration function" is expected to approximate the distortions that actually exist in the lens; it is not supposed to represent some ideal features of an optical system.

I hope these comments also help to answer all the further reflections you made on this subject. Your conclusion that the lens distortion should be antisymmetric does not apply to the lenses I have come across so far. There may, however, exist lenses to which it does apply, whose distortion is actually best approximated with a third-order polynomial. The present version of the SAMT tool couldn't handle such lenses, because the approximation of the distortions with the "calibration function" is at the moment limited to a second-order polynomial. Replacing it with a third-order one would overcome this present limitation and would thus also allow calibrating lenses which are distorted in the way you think they are.

====

It might be useful to come back to the error estimate which I made in my last posting, in reply to George's question on the possible errors of the data points in Greg's graphs.
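Incidentally, George's remark that one function is the slope of the other can be checked numerically: the slope of an antisymmetric (odd) polynomial Px = f(A) is a symmetric (even) function of A, so an antisymmetric Px = f(A) and a roughly symmetrical calibration function need not contradict each other. A small sketch, with coefficients invented purely for illustration:

```python
import numpy as np

# George's antisymmetric form Px = f(A): only odd powers of A.
# Coefficients are invented for illustration, not from any real lens.
f = np.poly1d([0.5, 0.0, -2.0, 0.0, 1000.0, 0.0])  # 0.5*A^5 - 2*A^3 + 1000*A

slope = f.deriv()  # dPx/dA, the scale-like quantity

A = 0.3
assert abs(f(-A) + f(A)) < 1e-9          # f is antisymmetric (odd)
assert abs(slope(-A) - slope(A)) < 1e-9  # its slope is symmetric (even)
```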
Going through this in detail actually shows how the error propagation is treated in the SAMT tool. When I performed the estimate for the last posting, I actually fell into the same trap of taking a pixel range for a pixel position; the software, however, treats it correctly. Assuming that a pixel POSITION is measured to +/- 1 pixel, the error for the RANGE between sun position and horizon is +/- 1.4 pixels. Looking at the mean values obtained for the different lenses, this corresponds to about:

50mm lens: +/- 0.5 moa
100mm lens: +/- 0.3 moa

Further contributions are (as explained in my last posting):

H.E./Dip: +/- 0.25 moa
Timing: +/- 0.3 moa

(Errors from refraction were thought to be negligible since Greg aimed at taking the photos under "normal" atmospheric conditions.) The estimated errors of the data points, which George asked for, are therefore about:

50mm lens: +/- 0.6 moa
100mm lens: +/- 0.5 moa

The +/- 0.8 moa which Greg mentioned refers to measured heights when using the conversion formula. For this, the errors of the conversion formulae themselves have to be added as well, which are:

50mm lens: +/- 0.5 moa
100mm lens: +/- 0.2 moa

Putting it all together results in the following expected errors for measurements when using the corresponding "conversion formula":

50mm lens: +/- 0.8 moa
100mm lens: +/- 0.5 moa

So much for the moment.

Marcel
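P.S. For anyone who wants to check the arithmetic: the figures above are consistent with combining independent contributions as root-sum-squares. In Python:

```python
import math


def rss(*errors):
    """Root-sum-square combination of independent error contributions."""
    return math.sqrt(sum(e * e for e in errors))


# Range error from two +/- 1 px position measurements: ~1.4 pixels.
range_px_error = rss(1.0, 1.0)

# Error of a data point, per lens (pixel, H.E./Dip and timing
# contributions in moa, as listed above):
point_50mm = rss(0.5, 0.25, 0.3)    # ~0.6 moa
point_100mm = rss(0.3, 0.25, 0.3)   # ~0.5 moa

# Adding the error of the conversion formula itself:
total_50mm = rss(point_50mm, 0.5)   # ~0.8 moa
total_100mm = rss(point_100mm, 0.2) # ~0.5 moa
```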