NavList:
A Community Devoted to the Preservation and Practice of Celestial Navigation and Other Methods of Traditional Wayfinding
From: Murray Buckman
Date: 2024 Feb 5, 08:39 -0800
Bill said:
"I assumed, therefore, that the image was a composite of at least two photographs..."
Oh yes - absolutely. In my post I had a typo (I should read carefully before hitting the button). My first problem with the image is the stars in front of the distant land.
The image is little more than a botched job of two unrelated images.
That said, most photographers taking Milky Way images today do use composites. Attached is one of mine (at the other end of the Milky Way), which comprises 16 images all taken within the space of a few minutes. 8 are of the foreground and near distance, exposed to gather enough light to pull out the foreground and provide visual interest. Then 8 are taken of the dark sky. Because of the apparent movement of the celestial objects, each sky image is an exposure of just 15 seconds. A very wide-angle lens is used (these were taken on a crop-sensor body with a 14mm lens). Even so, those 8 images, when layered, display the stars as obvious streaks of light - more obvious the further they lie from the celestial poles. So the sky images are then manually aligned to get the best match. Photo-processing software can do this automatically, but never to my full satisfaction, so a manual process of rotating and stretching the digital image is what I do. This is a time-consuming process.
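For anyone curious about the numbers behind those 15-second sky exposures, here is a rough sketch. Two assumptions of mine, not from the post: the common "rule of 500" for untrailed stars, and a 1.5x (APS-C) crop factor for the unspecified crop-sensor body.

```python
# Rough numbers behind short sky exposures for a 14mm lens on a crop body.
# Assumptions: "rule of 500" rule of thumb, 1.5x crop factor.

SIDEREAL_DAY_S = 86164.1                # Earth's rotation period in seconds
SIDEREAL_RATE = 360.0 / SIDEREAL_DAY_S  # ~0.0042 deg of sky motion per second

def max_untrailed_exposure_s(focal_mm, crop=1.5):
    """Rule-of-500 estimate of the longest exposure before stars streak."""
    return 500.0 / (focal_mm * crop)

def drift_between_frames_deg(seconds_apart):
    """How far the sky rotates about the celestial pole between frames."""
    return SIDEREAL_RATE * seconds_apart

print(round(max_untrailed_exposure_s(14), 1))  # ~23.8 s, so 15 s stays safe
print(round(drift_between_frames_deg(15), 3))  # ~0.063 deg between frames
```

The second figure is the small rotation that accumulates between successive frames, which is why the sky frames need that rotate-and-stretch alignment before they can be layered.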
The same is then done with the lower images, but without the rotation and stretching (nothing moved between shots). Then the two halves are joined together with considerable "digital touch-up", with the image zoomed in a great deal, to carefully achieve a join that the eye does not see - except when the eye belongs to another photographer.
Over time I have established other adjustments to pull out the colors in a way that works for me, but that is entirely subjective.
The key benefit of all this stacking of images is that it goes a long way towards eliminating the digital noise in a low light image, which we would usually see as grain in the image.
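As a toy illustration of that noise benefit (a simulation, not the actual processing): averaging N frames of random sensor noise cuts the grain by roughly the square root of N, so 8 frames reduce it by nearly a factor of 3.

```python
import random
import statistics

# Toy model: average 8 simulated noisy frames of a flat grey scene
# and watch the random noise fall by roughly sqrt(8) ~ 2.8x.
random.seed(0)
N_FRAMES = 8          # the post stacks 8 sky exposures
PIXELS = 10_000
SIGNAL = 50.0         # "true" brightness of every pixel
NOISE_SIGMA = 10.0    # per-frame sensor noise

frames = [[SIGNAL + random.gauss(0, NOISE_SIGMA) for _ in range(PIXELS)]
          for _ in range(N_FRAMES)]

# stack by averaging pixel-wise across the frames
stacked = [sum(px) / N_FRAMES for px in zip(*frames)]

print(round(statistics.stdev(frames[0]), 1))  # ~10: grain in a single frame
print(round(statistics.stdev(stacked), 1))    # ~3.5: grain after stacking
```

Real stacking software works on aligned image data rather than flat test frames, but the statistics are the same: the signal adds coherently while the random noise averages away.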
So to your point, it is possible to take an image of a moving object in low light, such as a boat, by taking several short (and underexposed) images, then layering them and adjusting the exposure digitally. The composite boat - and nothing else - must then be cut out and digitally implanted into one of those images. However, in this case I think all we are seeing is a boat photographed in good light and then digitally reduced in exposure via the photo-processing software.
The image Frank has shared is a symptom of the automation of images possible through widely available digital tools, where the user's purpose is probably some quick, cheap, but appealing result for advertising or social media. Totally unrelated to our navigation discussions, but the rise of AI-generated images - many of which do not make sense when looked at for more than a second or two - is further reducing the ability of photographers to make a living.
Frank noted that it is difficult to identify the navigation stars in such rich images. I agree. With my own images I routinely look for the navigation stars, but can really only identify them because I know where they are relative to the various "blobs" (my non-technical term) within the Milky Way. Cheating (via Stellarium) is my go-to method otherwise. Because I grew up in the Southern Hemisphere I still have an easier job with that view of the Milky Way. With Crux up high, if Canopus and Sirius are above the horizon they are easily found, and you go from there. Of course, as celestial navigators we are used to identifying them in a half-light when little else is visible. Even when viewed in the middle of the night, especially with some light pollution around, it is much easier to pick them out than within images such as these.
Murray