    Re: Fitting a line to sights
    From: Bruce J. Pennino
    Date: 2014 Jan 27, 09:38 -0500
    
    Frank makes many good points, which I can somewhat summarize based on my taking data with many different systems, with many associates, over many years. First, you need a very good reason for throwing out any data... a sneeze, every light on the property "blinked", etc., makes it easy to throw out some or all of the data. Each person or device has a normal error or scatter band. This "band" describes the limits (accuracy and precision) of your final total measurement. If a measured data point has two or three times the normal scatter (maybe the standard deviation), it's probably OK to throw the point out. However, for CN, where maybe you have 3 to 7 points, data analysis becomes a little more tricky until you accept that the overall accuracy of an experienced person on a good day with a steady platform is no better than 2 or 3 nm (maybe a smidgeon better on a perfect day) when you know your height of eye accurately. The dip table is only good to plus or minus 5 to 10% of the tabulated dip. The greatest relative error is at heights of eye below 10-15 ft, but fortunately dip is small there.
     
    I typically take three or five points. I've plotted the data and averaged the data. For my personal scatter, I really can't tell whether averaging or plotting is better. I like plotting (a bias of mine) because I can see if one point is beyond my normal scatter. Beyond that, averaging of five points is sufficient, and maybe you only need three. Maybe if I went to the same location 20 times, on 20 perfect days, taking the same star or body, plotted the data and averaged the data, I might conclude that one method or the other is clearly better. Everyone develops a preference depending on the circumstances. As my skills have improved, I'm shocked when I take two points, average them, and I'm within 2 nm of my actual location. Seems like "magic" to me.

    Bruce
     
    ----- Original Message -----
    From: Frank Reed
    Sent: Sunday, January 26, 2014 4:28 PM
    Subject: [NavList] Fitting a line to sights


    Greg (L) brought up David Burch's article on plotting sights against a line as a method of detecting outliers. While we're building spreadsheets, we should talk about the rationale underlying a system like this. David does a good job of this in his article, which Greg kindly attached to a recent message here: http://fer3.com/arc/m2.aspx/Need-help-writing-slopefit-excel-sheet-GregLicfi-jan-2014-g26692. But it's worth going over some points.

    Here's the scenario: we've taken a bunch of sights in a row of a single body when it is well away from the meridian (conditions to be specified) during a relatively short period of time, say, less than twenty minutes. How can we get the most value out of those sights? First things first: the whole point of this process is to process these multiple sights in a way that "averages" them or "nets" them statistically. We could do this by a simple arithmetic average (just add up the altitudes and divide by N and add up the sight times and divide by N) or by some visual plotting method or by some more sophisticated mathematical technique. And indeed, if possible, we should use all the sights to generate a fix and an error ellipse. But the WHOLE POINT of taking multiple sights is to eliminate noise from the data by "some form" of averaging. It's important to keep this in mind. We WANT all of our sights. The goal of this process is not to somehow magically separate the wheat from the chaff. It's all wheat!
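
    As a concrete (if minimal) sketch, here's the arithmetic average in Python, with invented times and altitudes standing in for a real run of sights. Note that averaging times and altitudes separately like this quietly assumes the altitude is changing nearly linearly over the run:

    # A simple arithmetic average of a run of sights: average the altitudes,
    # average the times. The sights below are invented for illustration.

    from datetime import datetime, timedelta

    # (time of sight, sextant altitude in degrees) -- hypothetical values
    sights = [
        (datetime(2014, 1, 26, 21, 10, 5), 34 + 12.4 / 60),
        (datetime(2014, 1, 26, 21, 11, 42), 34 + 27.9 / 60),
        (datetime(2014, 1, 26, 21, 13, 18), 34 + 43.1 / 60),
        (datetime(2014, 1, 26, 21, 14, 55), 34 + 58.8 / 60),
    ]

    n = len(sights)
    mean_alt = sum(alt for _, alt in sights) / n

    # Average the times as offsets from the first sight.
    t0 = sights[0][0]
    mean_offset = sum((t - t0).total_seconds() for t, _ in sights) / n
    mean_time = t0 + timedelta(seconds=mean_offset)

    print(f"averaged sight: {mean_alt:.4f} deg at {mean_time}")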

    The most important math to remember when taking multiple sights is the "square root of N" rule. This is a basic statistical property of random noise in a set of observed data. If you average (in some appropriate way) N observations, the error from random noise will be reduced in proportion to the square root of N. In other words, if my typical error (in the 1 s.d. sense) is around 1' of arc for each altitude sight, then if I average 100 such sights, I can expect to reduce random error by a factor of ten; the typical error of the averaged sight would be 0.1' of arc. It's important to emphasize right at the top that this applies to random error only. If there's a systematic error (a "fixed" error, like an erroneous index correction), it will still be there after any such averaging process.
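
    For anyone who wants to see the "square root of N" rule at work, here's a quick Python simulation with purely random noise -- the 1.0' per-sight error is an assumed figure, and again, a systematic error would pass through untouched:

    # Quick check of the "square root of N" rule using simulated random noise.
    # Assumes purely random (gaussian) errors of 1.0' per sight; a systematic
    # error (index error, bad dip) would survive this averaging untouched.

    import random
    import statistics

    random.seed(1)
    sigma = 1.0      # assumed per-sight random error, arcminutes
    trials = 5000

    for n in (1, 4, 16, 100):
        # Error of the average of n sights, repeated over many trials.
        avg_errors = [statistics.fmean(random.gauss(0.0, sigma) for _ in range(n))
                      for _ in range(trials)]
        print(f"N={n:3d}  s.d. of averaged sight ~ {statistics.stdev(avg_errors):.3f}'"
              f"  (predicted: {sigma / n ** 0.5:.3f}')")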

    With the "square root of N" rule in hand, we can immediately see the value of taking a few extra sights. If you take 4 sights of a single body, you will typically improve the accuracy of the result by a factor of TWO (compared to a single sight with the same observer, instrument, and conditions). That's certainly worth doing if you have the time. But suppose you want to improve by another factor of two. You would then need to take 16 separate sights. So there are diminishing returns here (naturally: graph 1/sqrt(N) for N=1 to 25 to see this). For a single human observer, it's probably not possible to get more than a dozen sights in a row before fatigue sets in. But if you have an automated system or multiple observers with comparable skills, then the more the merrier! Bigger N is better.
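
    A couple of lines of Python will print that 1/sqrt(N) curve if you'd rather see numbers than a graph:

    # The 1/sqrt(N) curve: how much the random error shrinks as N grows.
    for n in range(1, 26):
        print(f"N={n:2d}  random error reduced to {1 / n ** 0.5:.2f} of a single sight")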

    The process of taking sights with a sextant is "sloppier" than many other sorts of measurements. If you're distracted by a little pang of seasickness, or if a gust of wind makes your eye tear up at just the wrong moment, or if you happen to take your sight just as an intervening swell slightly obscures the horizon, you can go from normal random error to cases of much larger random error. In other words, the probability of "outliers" is considerably higher with sextant sights than in pure statistical models. A statistical model would often assume that errors are "gaussian" or "normally distributed". A normal distribution is an excellent starting point for analyzing observation errors, but the "tails" of the distribution, representing the probability of large errors, are much too thin. There's a mathematical name, really a shorthand, for the "fatter tails" or higher probability of large errors that we see in sextant observations (and in many other phenomena, by the way). It's called "kurtosis". But all it means is that you're likely to see occasional errors that are really larger than they "should be". And this is where things get interesting. Those large errors have a disproportionate effect when averaging sights, especially small numbers of sights. So if we can identify them, then we MAY be able to improve the results of our averaging process.
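
    Here's a toy Python model of those fat tails, just to put a number on the kurtosis. The contamination rate (10%) and the size of the "bad" errors (5.0') are assumptions picked for illustration, not measured values:

    # Toy model of fat-tailed sextant errors: mostly ordinary 1.0' noise with
    # an occasional much larger error mixed in. The 10% rate and 5.0' width
    # of the "bad" sights are assumptions chosen to show the effect.

    import random
    import statistics

    random.seed(2)

    def sight_error():
        # One sight in ten comes from a much wider error distribution.
        if random.random() < 0.10:
            return random.gauss(0.0, 5.0)
        return random.gauss(0.0, 1.0)

    errors = [sight_error() for _ in range(100_000)]
    m = statistics.fmean(errors)
    s = statistics.stdev(errors)

    # Excess kurtosis: 0 for a true normal distribution, positive for fat tails.
    kurt = statistics.fmean(((e - m) / s) ** 4 for e in errors) - 3.0
    big = sum(abs(e) > 3.0 for e in errors) / len(errors)

    print(f"excess kurtosis ~ {kurt:.1f}")
    print(f"errors beyond 3.0': {big:.1%} (a pure 1' gaussian gives about 0.3%)")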

    Navigators with years of experience will often tell you that there is an "art" to what they do. And in support of this, they will often describe their "artistic" methods for handling various scenarios. These are tricks and techniques that they have invented over the years that they believe go beyond mere science and mathematics. Very rarely there's some actual "art" here, but in my experience these tricks usually turn out to be bad habits, popular "lore", and ritualized behaviors which "feel" effective, but they're often just more noise added onto the noise. Methods adopted by moderately experienced navigators for identifying outlying observations often depend on hunches and instincts --navigators' voodoo. But it's all junk UNLESS you start with one basic principle: any method for identifying outliers must be developed and written down in advance and then followed with rigorous discipline. The temptation to throw out observations based on instinct is just too strong. And that's not art. It's a bad habit. It is something to be resisted, not trusted.

    Well, ok, there's an exception. Everyone who has ever attempted to take sights with a sextant does, in fact, throw out some observations. They're the ones you don't even bother writing down. If you get the sight lined up almost perfectly... and then sneeze, you probably wouldn't bother recording the altitude and time. This is not a rigorous, disciplined rule for tossing bad observations, but it is normal. So the sights that have to be analyzed are the ones that pass that minimum hurdle --the sights that have been recorded. Once you have them on a list, you need to step back and view them with detachment. They're all equals at that point, and you have to avoid the tendency to pick out favorites and friends among the numbers. When navigators do pick favorites, it often happens that they toss out the wrong sights. Again, we WANT all the sights that we take, unless they're really way out there. It's that "square root of N" rule that saves the day. The more sights available, the greater the improvement in the average.

    In David's article, he describes how to plot sights and look for outliers based on distance from a line that passes through most of them. He points out that we should not just do a linear regression, plotting the line that best "fits" the points. A simple regression line or even an "eyeball" estimated line can be useful when we know nothing about our position, and it's an acceptable way to detect a really large error (like a 10' error when your normal random error is 1'). But as David points out, we can do better by drawing a line with the right slope based on our approximate position. There are lots of ways to do this, but the simplest is to clear two of the sights, one at the beginning of the sight run and one toward the end. You calculate the altitudes at those extremes, plot them, and then just run a straight line through them. This doesn't work near the meridian, and depending on just what you plot, it may not work at low altitudes, but normally this is just fine. And notice that you can treat the sum of dip and index correction as an unknown (a systematic error) if you like. This is part of what's accomplished by "sliding" the line up or down on the graph paper. Also notice that this process is nearly the same as clearing each sight separately and calculating each intercept. The distance between the line and each plotted observation *IS* the intercept. This holds true because altitudes change nearly linearly (steadily increasing or decreasing with time) when objects are a good distance away from the meridian and the time interval isn't too long.
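
    A rough Python sketch of that construction, with placeholder numbers (the two computed altitudes would really come from clearing sights at your approximate position):

    # Two computed altitudes (Hc) at the ends of the run define the properly
    # sloped line; the distance of each observed altitude (Ho) from that line
    # is, in effect, its intercept. All numbers here are placeholders.

    # (seconds from start of run, observed altitude in degrees)
    observations = [(0, 34.206), (95, 34.472), (190, 34.710), (280, 34.968)]

    t1, hc1 = 0, 34.190      # computed altitude at the start of the run
    t2, hc2 = 280, 34.955    # computed altitude at the end of the run

    slope = (hc2 - hc1) / (t2 - t1)    # degrees per second

    for t, ho in observations:
        line_alt = hc1 + slope * (t - t1)     # altitude on the sloped line
        residual = (ho - line_alt) * 60.0     # distance from the line, arcminutes
        print(f"t={t:3d}s  Ho={ho:.3f}  residual (= intercept) {residual:+.1f}'")

    # If every residual shares roughly the same offset, that looks like a
    # systematic error (index correction, dip) -- the "sliding" of the line.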

    What rigorous, disciplined procedure can we adopt to remove a small number of outliers from a set of sextant observations using a plot like this? First, we need a "standard deviation". We can calculate this quantity for any set of observations if we have enough of them and, more importantly for this sort of analysis, we can treat it as a known property of good observations even if our current set of observations is small in number. This latter aspect is often missing from mathematical treatments like, for example, the algorithms for calculating a fix and an error ellipse published by HMNAO (and included in part in every copy of the Nautical Almanac in the past 25+ years). Once we have our normal "standard deviation", a reasonable criterion for eliminating outliers is to drop any observations that are more than 3.0 standard deviations away from the best line through the points (properly sloped, as above). In a true normal distribution, there would be almost no observations this far out: roughly 99.7% of observations would fall inside the 3 s.d. limit. But in sextant observations, you may find as many as 10% of observations fall outside that range. Suppose, for example, that my s.d. for observations is about 1.0' by my best estimate from actual practice (with one specific instrument). If I plot my observations and the line that they should be on, I should find that just about two-thirds of the observations are within a band +/-1' away from that plotted line. That's the 1 s.d. band. If you don't have an estimate in advance of this "1 s.d." limit, and you have enough observations (a dozen or more), you can actually eyeball these bands just by the condition that a solid majority --about two-thirds of points-- should be inside that limit. Now to eliminate outliers, triple that limit. That is, go three times further away from the original plotted line on each side. If you have any observations outside those limits, then you have reasonable cause to throw them out. Those are outliers, and they are the result of the natural "kurtosis" in the process of making sextant observations. They are the noise on top of the noise.
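
    In code the rule is nothing more than a comparison against a limit fixed in advance. A minimal Python sketch, assuming a known per-sight standard deviation of 1.0' and hypothetical residuals (distances from the sloped line):

    # The rejection rule as code: fix the limit in advance, then apply it
    # mechanically. The 1.0' per-sight s.d. is an assumption; the residuals
    # are hypothetical distances from the sloped line, in arcminutes.

    SIGMA = 1.0            # assumed per-sight standard deviation, arcminutes
    LIMIT = 3.0 * SIGMA    # rejection threshold, decided before looking at data

    residuals = [+0.4, -0.8, +0.2, +4.6, -0.5]

    kept = [r for r in residuals if abs(r) <= LIMIT]
    rejected = [r for r in residuals if abs(r) > LIMIT]

    print(f"kept {len(kept)} sights, rejected {len(rejected)}: {rejected}")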

    After we've tossed out an outlier or two, or more likely with a rigorous standard, determined that there are no observations that deserve to be considered outliers, we can process the remaining observations. Plain averaging of sights is the brute force approach to analyzing a series of multiple sights. At the other extreme, we can do a complete work-up on each sight, plot an LOP for each, and we can generate a fix and an error ellipse that steadily, and surprisingly rapidly, homes in on our actual position. This is what I have called a "rapid-fire" fix. We can still implement a scheme for tossing outliers in a system like this, but it becomes a little more complicated since we're using each sight as it comes in.
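
    And the brute-force version, tying the rejection and the averaging together (again with hypothetical numbers):

    # Brute-force pipeline: drop anything beyond the pre-set limit, then
    # average what's left. SIGMA and the sights are hypothetical, as above.

    SIGMA = 1.0
    LIMIT = 3.0 * SIGMA

    # (seconds from start of run, observed altitude in degrees, residual in ')
    sights = [(0, 34.206, +0.4), (95, 34.472, -0.8),
              (190, 34.710, +4.6), (280, 34.968, -0.5)]

    kept = [(t, ho) for t, ho, r in sights if abs(r) <= LIMIT]

    mean_t = sum(t for t, _ in kept) / len(kept)
    mean_ho = sum(ho for _, ho in kept) / len(kept)
    print(f"averaged sight from {len(kept)} observations: "
          f"Ho = {mean_ho:.3f} deg at t = {mean_t:.0f}s into the run")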

    I should add that there are more sophisticated methods of handling outliers, and two of the most famous were invented by American mathematicians directly connected with 19th century celestial navigation, Benjamin Peirce and William Chauvenet. I don't see any reason to apply their specific rules to sextant observations, but there are other statistical methods for addressing outliers. The graphic rule described above is only one example. There is also a considerable historical literature claiming that one should NEVER attempt to eliminate outliers from a data set. This latter point of view apparently led to a debate in the 19th century pitting continental European mathematicians against those crazy American cowboys, Peirce and Chauvenet, and their ilk. I don't know where British mathematicians sided in this story. It remains the case that many fields of modern science, including observational astronomy, employ standard procedures for rejecting outliers. The process itself is not "voodoo". But remember, it only works if it's done with objective rigor and discipline. Make a rule and stick to it. :)

    -FER

