Thanks for the kind words - I am also learning. The description below was used for this https://www.astrobin.com/393472/?nc=user. The processes described below are how I go about capturing DSO data for later processing.
Single FITS capture
Having established the camera settings being used (gain, exposure and offset for CMOS, or exposure for CCD), I take a test frame and inspect it in FITS Liberator. If certain criteria are met, I proceed to capture a set of light frames. I am looking at the Min and Max values in the Image Statistics part of the display. I need to see Min > 0 (meaning I am not hitting the left-hand side of the histogram and losing faint data) and Max < 65535 (not hitting the right-hand side of the histogram, i.e. not over-exposing).
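The Min/Max criteria can be sketched as a simple check once a frame is loaded as a NumPy array (FITS Liberator does this interactively; the function name here is my own illustration, not part of any tool):

```python
import numpy as np

def frame_ok(frame, bit_max=65535):
    """Pass only if no pixel clips either end of the 16-bit range.

    Min > 0       -> faint data is not cut off at the left of the histogram
    Max < bit_max -> no pixel is saturated at the right of the histogram
    """
    return frame.min() > 0 and frame.max() < bit_max

# Simulated 16-bit test frame: background around 1000 ADU, well inside range
rng = np.random.default_rng(0)
test_frame = rng.normal(1000, 50, (100, 100)).astype(np.uint16)
print(frame_ok(test_frame))
```

A frame with even a single zero or saturated pixel fails the strict version of the check; in practice I relax it slightly, as noted below.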
I then apply a stretch (usually arcsinh(x)) and hope to see the histogram pulled away from the left and 'fattening'. This indicates decent data has been captured. Compare the histogram from above with that below. The histogram shown is a classic shape for a DSO - rising steeply and then tailing off. I don't get too hung up if I have, say, up to 10 over-exposed pixels (see the pixel count in the SharpCap histogram), as that could be 10 out of 20 million.
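As a minimal sketch of what the arcsinh(x) stretch does to faint data, here it is in NumPy; the scale factor is an arbitrary choice of mine, not a FITS Liberator setting:

```python
import numpy as np

def arcsinh_stretch(frame, scale=500.0):
    """Non-linear stretch: lifts faint values, compresses bright ones.

    Dividing by arcsinh(scale) keeps the output in [0, 1].
    """
    x = frame.astype(np.float64) / 65535.0      # normalise 16-bit data
    return np.arcsinh(x * scale) / np.arcsinh(scale)

# Faint detail at 500 ADU sits at under 1% of the linear range...
linear = 500 / 65535
# ...but the stretch pulls it well away from the left of the histogram
stretched = arcsinh_stretch(np.array([500.0]))[0]
print(f"{linear:.3f} -> {stretched:.3f}")
```

This is why the histogram 'fattens': values bunched against the left edge get spread out, while the bright end is compressed rather than clipped.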
A check of less than 5 minutes to ensure good data is being collected is, I find, time well spent.
Flat frame capture
To capture flat frames I use a home-made flat generator. This consists of 2 pieces of plasticard, an electro-luminescent (EL) panel, a battery and 2 plastic rods which allow the panel to be hung over the end of the scope - all parts off eBay. The one below is for a 66mm refractor - cost ~£20. I have one for each telescope - 81mm refractor, Skywatcher 127 MAK and Celestron C8. The C8 frame cost £40 for a 250x250mm EL panel.
My thinking is that data capture and processing are 2 completely distinct processes, so I do no processing at the capture laptop. With a DSLR I would use Nebulosity, and I also use Linux (Debian & Raspberry Pi). For this reason I work to the 'lowest common denominator', i.e. I use a consistent process which can be applied across applications and operating systems. This means that flats capture is in my hands rather than using the in-built facility in SharpCap. My view is: why risk 'trashing' data at the point of capture?
To capture flats, I hang the EL panel over the scope, switch on and capture frames. I leave the gain & offset as for the lights and reduce the exposure until the histogram peak is just left of centre - at around 40%. This usually means exposures in the range 50-150ms depending on optics, filters etc.
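The 'just left of centre' target can be expressed as a simple check on the flat's median level; the 40% figure comes from my routine above, the helper name and tolerance are my own:

```python
import numpy as np

def flat_exposure_ok(flat, target=0.40, tolerance=0.05, full_well=65535):
    """Check a flat's median ADU sits near the target histogram position.

    The median is a robust proxy for the histogram peak of an evenly
    illuminated EL-panel flat.
    """
    fraction = np.median(flat) / full_well
    return abs(fraction - target) <= tolerance

# A flat whose median lands at ~40% of full scale passes the check
good_flat = np.full((50, 50), int(0.40 * 65535), dtype=np.uint16)
print(flat_exposure_ok(good_flat))
```

If the check fails high, shorten the exposure and try again; if it fails low, lengthen it.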
To save the question being asked, this is my workflow for calibrating, registering and stacking the frames. I use SiriL, https://www.siril.org/. I follow this guide from the SiriL manual: https://free-astro.org/siril_doc-en/#Pr ... t_and_bias. I take lights, darks, bias and flat frames.
Capture steps are:
- lights with defined gain, offset, exposure
- darks with above settings, aim for temperature match to lights
- bias, change exposure to minimum, other settings the same
- flats, set exposure to obtain 40% histogram, other settings the same as for lights
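The relationships between the four frame types can be summarised in a few lines; the numbers are placeholders for illustration, not recommendations:

```python
# Placeholder light-frame settings - use whatever your test frames established
lights = {"gain": 200, "offset": 30, "exposure_s": 120.0}

# Darks: identical settings, matched to the lights' sensor temperature
darks = dict(lights)

# Bias: same gain/offset, exposure dropped to the camera's minimum
bias = {**lights, "exposure_s": 0.000032}

# Flats: same gain/offset, exposure shortened until the histogram peak
# sits at around 40% - typically 50-150 ms with an EL panel
flats = {**lights, "exposure_s": 0.1}

print(darks == lights)  # True: darks mirror the lights exactly
```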
Processing steps are:
- stack bias frames, no normalisation
- load flats and apply master bias to the flats
- stack pre-processed flats, 'multiplicative' normalisation
- stack darks, no normalisation
- apply master dark and processed master flat to lights, debayering if applicable at this step
- stack using 'additive with scaling' normalisation
- colour calibration if applicable
[Note: I select Winsorized Sigma Clipping for the rejection criteria when stacking].
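The processing steps above boil down to fairly simple frame arithmetic. This NumPy sketch mirrors the sequence on synthetic data (SiriL's actual implementation, including its normalisation options and debayering, is far more involved); the Winsorized clipping here is a simplified single-pass version of what SiriL iterates:

```python
import numpy as np

def winsorized_sigma_clip_mean(stack, sigma=3.0):
    """Simplified Winsorized sigma clipping along the stack axis.

    Pixels beyond sigma standard deviations of the per-pixel median are
    pulled in to the clip boundary before averaging, which suppresses
    outliers such as satellite trails and cosmic-ray hits.
    """
    med = np.median(stack, axis=0)
    std = np.std(stack, axis=0)
    lo, hi = med - sigma * std, med + sigma * std
    return np.clip(stack, lo, hi).mean(axis=0)

# Tiny synthetic session: 8 frames of each type, 16x16 pixels
rng = np.random.default_rng(1)
shape = (8, 16, 16)

biases = rng.normal(100, 2, shape)
master_bias = biases.mean(axis=0)            # stack bias, no normalisation

flats_raw = rng.normal(30100, 100, shape)    # flat signal + bias level
pp_flats = flats_raw - master_bias           # apply master bias to the flats
master_flat = pp_flats.mean(axis=0)          # stack pre-processed flats
master_flat /= np.mean(master_flat)          # crude stand-in for 'multiplicative'

darks = rng.normal(120, 2, shape)
master_dark = darks.mean(axis=0)             # stack darks, no normalisation

lights = rng.normal(1120, 10, shape)         # object signal + dark signal
calibrated = (lights - master_dark) / master_flat  # apply dark and flat
result = winsorized_sigma_clip_mean(calibrated)    # stack with rejection
print(result.shape)  # (16, 16)
```

The calibrated frames come out at roughly the object signal (here ~1000 ADU) with the dark level subtracted and the flat's vignetting divided out, which is exactly what the master-frame steps are for.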
Following the above has led to consistently satisfactory results for me over the last few months. The histogram is my friend.
As a working process, after a capture session I backup all new data to a 4TB drive on my network. I then copy the data from the 4TB drive to an SSD in my workstation for subsequent processing. That makes 3 copies of the capture data before I go anywhere near the pixel-mangling software.
Hope this helps.