Hi-res DSO imaging with live-stacked "subs"?

ngc1977
Posts: 4
Joined: Fri Dec 23, 2022 12:29 pm

Hi-res DSO imaging with live-stacked "subs"?

#1

Post by ngc1977 »

I'm very interested in seeing if SharpCap can mimic the process described in this CN thread:

https://www.cloudynights.com/topic/9211 ... -inch-sct/

The main advantages I see in this method are increased signal/noise per sub and reduced total file size/integration time.

Would calibrated (darks, flats) live-stacked raw FITS files used as individual subs be suitable for typical DSO stacking and processing, assuming that no stretch is applied anywhere in the capture/live-stacking?
admin
Site Admin
Posts: 13688
Joined: Sat Feb 11, 2017 3:52 pm
Location: Vale of the White Horse, UK

Re: Hi-res DSO imaging with live-stacked "subs"?

#2

Post by admin »

Hi,

a very interesting post on CN and a very good image.

My analysis would be that there are two significant parts to the approach:

1) An accurate guiding system based on the 10s images rather than using a separate guide camera.

SharpCap doesn't have any equivalent functionality to that (for example SharpCap's feature tracking is designed to stop things drifting out of view, not to keep them pixel-perfect steady). However, I don't know if this new approach would have any significant benefit over a more traditional approach using PHD2, a guide camera and potentially multi-star guiding.

2) Alignment and stacking of the 10s frames with each other along with a quality selection to keep only the best 50%.

The 'fourier' alignment method does sound (as Borodog points out in the CN thread) very close to the phase alignment algorithms used in SharpCap's solar system live stacking, which find the most accurate offset between two frames of the same target based on all the image data rather than just star positions. This can be applied either to the whole image or sub-tiles.

Fourier/phase alignment can give alignment results accurate at the sub-pixel level, but multi-star alignment as used in SharpCap's deep sky live stacking can do that too (using many stars helps bring down the error due to uncertainty in the position of any single star). Fourier/phase alignment can also only estimate a linear offset between frames, so it cannot help with image derotation. Even so, it's a powerful technique and I have plans to use it in more places in SharpCap.
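For anyone curious what phase alignment looks like in practice, here is a minimal numpy sketch of the basic idea - the inverse FFT of the normalised cross-power spectrum of two frames peaks at the translation between them. This is purely illustrative and not SharpCap's actual implementation (which will be more sophisticated, e.g. sub-pixel peak interpolation and tiling):

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the whole-pixel translation (dy, dx) of frame b relative
    to frame a via phase correlation: the inverse FFT of the normalised
    cross-power spectrum peaks at the shift between the two frames."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    R = np.conj(A) * B
    R /= np.abs(R) + 1e-12            # keep only the phase information
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts past the halfway point back to negative values
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

A real implementation would also interpolate around the correlation peak to recover the sub-pixel part of the offset; the sketch above only returns whole-pixel shifts.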

One thing to note is that since the individual frames are being aligned with each other, it's essential to apply dark frame correction (and hot/cold pixel removal) to the individual 10s frames before they are aligned and added to the mini stack - trying to do dark correction or hot pixel removal after that first stacking phase will be less effective due to shifts between the individual frames. Flat frame correction should also be applied at that stage, but is probably less critical (particularly for correcting large scale vignetting rather than dust shadows).
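As an illustration of that ordering (calibrate each individual sub first, align and stack afterwards), here is a small numpy sketch. The function names and the simple neighbour-median hot pixel repair are my own invention for illustration, not SharpCap's internals:

```python
import numpy as np

def calibrate_sub(frame, master_dark, master_flat):
    """Calibrate a single short sub BEFORE any alignment or stacking:
    subtract the master dark, then divide by the mean-normalised flat."""
    flat_norm = master_flat / master_flat.mean()
    return (frame - master_dark) / flat_norm

def remove_hot_pixels(img, sigma=5.0):
    """Replace pixels sitting far above the median of their four direct
    neighbours - a crude stand-in for proper cosmetic correction."""
    neigh = np.stack([np.roll(img, 1, 0), np.roll(img, -1, 0),
                      np.roll(img, 1, 1), np.roll(img, -1, 1)])
    med = np.median(neigh, axis=0)
    hot = img > med + sigma * img.std()
    out = img.copy()
    out[hot] = med[hot]          # hot pixels take the neighbour median
    return out
```

Doing this per sub means each hot pixel is repaired while it is still at a fixed sensor position; once frames have been shifted and stacked, the defect is smeared across several output pixels and much harder to remove.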

What could you do that is roughly equivalent in SharpCap? Maybe something like this...

* 10s exposures, gain set as desired, dark, flat and hot/cold pixel correction enabled
* Live stacking (deep sky sort) turned on
* Check that a good selection of stars (15+) is found by live stacking, to give robust alignment
* Live stacking auto save and reset turned on and set for 10 minutes (600s, 60 frames)
* Live stacking FWHM filtering turned on and set aggressively to reject about 50% of frames
* Run guiding alongside this (optional - if your mount tracks well it is probably not needed in my opinion, since drift between frames is handled in software and drift within a 10s frame should be minimal)

The saved stacks that you would get every 10 minutes would each contain roughly 30 of the 10s frames - with the best ones kept and the poor ones rejected by the FWHM filtering, in theory.
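The 'keep the best 50%' selection could be sketched like this - an adaptive cut that sorts frames by measured FWHM and keeps a fixed fraction, rather than applying a fixed threshold. The function and its inputs are hypothetical, for illustration only:

```python
import numpy as np

def select_best(frames, fwhms, keep_fraction=0.5):
    """Adaptive quality gate: sort frames by measured FWHM (smaller =
    sharper) and keep the best fraction, so the cut automatically tracks
    changes in the seeing instead of relying on a fixed threshold."""
    n_keep = max(1, int(round(len(frames) * keep_fraction)))
    order = np.argsort(fwhms)        # indices from sharpest to softest
    return [frames[i] for i in order[:n_keep]]
```

The practical difference is that a fixed threshold needs manual re-tuning as conditions change, while a fixed fraction always yields about the same number of frames per 10 minute block.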

How does the SharpCap approach differ?

* Measurement of average star FWHM for each frame - is it as good at splitting bad frames from good ones as the methods described in the CN thread?
* The FWHM threshold is fixed in SharpCap and would need to be adjusted as seeing conditions change, to keep roughly 30 frames used in each 10 minute block
* Using multi-star alignment - how accurate is it compared to phase alignment in these circumstances for getting the best alignment of each sub frame?

You could probably also do something in planetary live stacking, but that would depend on there being enough 'galaxy' signal in the individual frames for the alignment routines to pick it up above the noise level - I think some people have (inevitably) tried planetary live stacking for deep sky targets...

Further discussion welcome ;)

cheers,

Robin
ngc1977
Posts: 4
Joined: Fri Dec 23, 2022 12:29 pm

Re: Hi-res DSO imaging with live-stacked "subs"?

#3

Post by ngc1977 »

Your suggested process is roughly what I tried last night. I didn’t have darks available at the remote observatory, and yes the stack-subs were a mess without them (no surprise). I’ll be building calibration frames tonight for this effort. If I get a decent result I’ll be sure to post it. First test will be on the very bright Cat’s Eye core.
timh
Posts: 534
Joined: Mon Aug 26, 2019 5:50 pm

Re: Hi-res DSO imaging with live-stacked "subs"?

#4

Post by timh »

Interested in this because it is what I have been trying with SharpCap also. I'd say that SC already supports doing that. I was inspired by Robin's lecture and the realization that under my B7 skies, sky noise was always going to vastly exceed read noise, so it really is feasible to stack lots of short subs - which you can already pre-select fairly well using the SC FWHM and Brightness tools - delivering perfectly acceptable SNR and DR after a thousand or so subs, at least on the brighter objects. In fact, doing that with SC you don't even need to do a lot of post-processing: take the autostacks direct from SC and then process with BlurXTerminator. The results can be remarkable, with resolution in the arcsec range - some examples in a recent thread here similar to the CN one.

cf viewtopic.php?t=7711


I don't see why conventional PHD2 guiding can't be used with 10s subs - it's what I do anyway.

There are just a couple of wrinkles (that I am aware of) with SC, around hot pixel detection and the brightness and FWHM estimates not being entirely orthogonal to each other.

Tim