Exposure's Effect On Color Quality

Somewhere to share your expertise in using SharpCap
timh
Posts: 218
Joined: Mon Aug 26, 2019 5:50 pm

Re: Exposure's Effect On Color Quality

Post by timh »

Hi Brian,

Thank you. That is a generous offer. I'd be interested in having a look to see how you have approached it and how you have incorporated the information on targets - without pushing the reads over the top, of course... PM on its way.

Tim
zerolatitude
Posts: 51
Joined: Mon Mar 01, 2021 5:24 am

Re: Exposure's Effect On Color Quality

Post by zerolatitude »

Interesting discussion.

There seem to be two parts to that CN article. Part 1 is calculating the sub length, with the associated factor of 4.38, and Part 2 is about the effects of quantization on extremely low signals.

My math is a bit rusty, but some of Part 1 doesn't seem to make sense. In equation 9, the constant of 4.38 comes from equations 6 and 4. This assumes that you are using only 2 subs. It should really be generalized to n subs: in equation 6, substitute n instead of 2. With 2 subs, the constant for p=0.05 is 4.38, but with 30 subs it is 9.4 or thereabouts. This matters because, as per equation 7, the individual subs can be far noisier if you are taking more of them. Which, of course, we knew already - so no disconnect here, apart from the actual constant used.
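If I've followed the algebra correctly, the generalized constant falls out of requiring that a stack of n subs be at most a fraction p noisier than a single continuous exposure of the same total length. A quick sketch (my own reconstruction, not the article's notation, so treat with caution):

```python
def noise_constant(n, p=0.05):
    """Constant C such that the sky signal per sub (in electrons) should
    exceed C * read_noise**2 for a stack of n subs to be no more than a
    fraction p noisier than one exposure of the same total length.

    With sky signal s per sub, the stack variance is n*s + n*RN^2 versus
    n*s + RN^2 for the single long exposure. Setting the noise ratio
    equal to (1+p) and solving for s = C*RN^2 gives the expression below.
    """
    k = (1.0 + p) ** 2
    return (n - k) / (n * (k - 1.0))

print(round(noise_constant(2), 2))       # 4.38 - the article's 2-sub constant
print(round(noise_constant(30), 1))      # 9.4 for 30 subs, as noted above
print(round(1.0 / (1.05 ** 2 - 1), 2))   # 9.76 - the limit as n -> infinity
```

The fact that it reproduces both 4.38 (n=2) and ~9.4 (n=30) suggests this is the right generalization, and it converges to a finite limit of about 9.76 for large stacks.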

Now Part 2. Intuitively, it makes sense that you need a minimum ADU level for the faintest signals to register. The paper takes 15 over the sub period as that is where the Poisson distribution starts to approximate a normal distribution.
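One way to see where a threshold like 15 comes from: the skewness of a Poisson distribution is 1/sqrt(mean), and the normal approximation is only reasonable once that skewness is small. A small illustration (the "15" is a rule of thumb, not a hard cutoff):

```python
import math

def poisson_skewness(mean):
    """Skewness of a Poisson distribution with the given mean, 1/sqrt(mean).
    The smaller it is, the better the normal approximation the article
    relies on; it is merely illustrative of why ~15 counts is a common
    rule-of-thumb threshold."""
    return 1.0 / math.sqrt(mean)

for mean in (1, 5, 15, 50):
    print(mean, round(poisson_skewness(mean), 3))  # 1.0, 0.447, 0.258, 0.141
```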

But what additional info does it really give us? We already know that the "optimal" sub period given by SC is not a maximization, but an optimization for noise, mount quality, etc. We also know that we will get better SNR and detail if we go beyond that, but with diminishing returns.

Another question is how much additional detail can be pulled out from the background noise with this additional exposure? Would it not depend on how different the detail is from the background?

This needs some more thinking through, but meanwhile suggesting two hypotheses for discussion:

1. Equation 14 in the article can provide a sort of upper limit to the exposure, beyond which you will not get any detectable signal. We can then choose a number between the SC optimization and this maximization depending upon preference, light pollution levels, mount quality etc.

2. The most detectable faint signals beyond the SC number would be those that stand out from the background in some way. Could this be why the additional detail came out in blue while the rest of the spectrum seemed pretty much the same?
oopfan
Posts: 1002
Joined: Sat Jul 08, 2017 2:37 pm
Location: New York
Contact:

Re: Exposure's Effect On Color Quality

Post by oopfan »

Yes, this is an interesting discussion, but unfortunately clear skies are hard to come by. I need data, lots of it, but M51 requires too much integration time with my paltry 71mm aperture. Instead, I'm switching over to a brighter DSO, like globular cluster M92 in Hercules, as seen here:

https://www.astrobin.com/99988/0/

Mr. McGill did a fine job of accurately reproducing star color. I have photometric data for several of the brighter stars that confirms it. Therefore I am confident that the faint stars are blue. This will be a good way to test my theory that SNR per sub affects color quality. Also, the cluster's high surface brightness means I can capture several different datasets in a single session. Another bonus is its high declination of +43 degrees. I can achieve round stars by just tracking. Also, it is a perfect candidate to test my auto-guiding when it comes online shortly. I'll post when I have more to share.

Brian
oopfan
Posts: 1002
Joined: Sat Jul 08, 2017 2:37 pm
Location: New York
Contact:

Re: Exposure's Effect On Color Quality

Post by oopfan »

Here is my plan. Three sessions: SNR per sub 0.5, 1.0, and 1.5. Each session is 150 minutes long plus or minus a couple minutes. There is plenty of excess data to make other tests:
Project M92 - LRGB tests.jpg
I am reverting to LRGB filters instead of Wratten #12, red, and green. I have seen this "muddy brown" phenomenon with LRGB too, but I am switching back just to make sure.

I am expecting that M92's faint blue stars will turn muddy brown when SNR per sub is 0.5.

If you are wondering why each exposure is different, it is due to my camera's quantum efficiency curve. I "color balanced" my filters according to Al Kelly:
http://kellysky.net/White%20Balancing%2 ... ilters.pdf
He calls it "white balancing" but I try to avoid using the word "white" now.
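The idea behind the differing exposures can be sketched as scaling each filter's exposure inversely with the camera's relative response (QE times filter transmission) so every channel collects a similar signal per sub. The response numbers below are made up for illustration - they are not the Atik's actual curve, and this is not Brian's calculator:

```python
# Hypothetical relative responses (QE x filter transmission), L = 1.0.
# These values are illustrative only, not measured for any real camera.
relative_response = {"L": 1.00, "R": 0.55, "G": 0.70, "B": 0.45}

def balanced_exposure(base_exposure_s, band):
    """Exposure (seconds) needed for `band` to collect roughly the same
    signal per sub as the L filter does in base_exposure_s seconds."""
    return base_exposure_s / relative_response[band]

for band in ("L", "R", "G", "B"):
    print(band, round(balanced_exposure(60, band), 1))  # 60.0, 109.1, 85.7, 133.3
```

The downside Brian mentions follows directly: four different exposure lengths means four sets of matching darks.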

Brian
zerolatitude
Posts: 51
Joined: Mon Mar 01, 2021 5:24 am

Re: Exposure's Effect On Color Quality

Post by zerolatitude »

Look forward to the results.

It will also be interesting to see if the detail becomes sharper only in faint blue stars or in others as well (my hypothesis no 2).
oopfan
Posts: 1002
Joined: Sat Jul 08, 2017 2:37 pm
Location: New York
Contact:

Re: Exposure's Effect On Color Quality

Post by oopfan »

This came out much better than I thought!

Last night. 98% Moon, 53 degrees away from M51. Ordinarily I would pass up a night like this, but the skies were clear after several days of inclement weather. The plan was to wait a few more days until the Moon was more than 90 degrees away, but the forecast didn't look promising, so I took the opportunity.

As you recall, I've been experimenting with different exposures. My first session resulted in a muddy brown image. As I later learned it was due to the sub-frame exposure being too short. My analysis revealed that the SNR per sub was well below 1.0. My second session resulted in better color differentiation, but it still needed improvement. My analysis revealed that the SNR per sub was just a hair above 1.0. This third session, I specifically targeted SNR per sub of 1.5. Here is the result:
M51_W12-14x425B1_R-6x285B2_G-6x167B2_ST-15-3-25_SA-25-25_2021-03-29.jpg
William Optics 71mm f/5.9
Atik 314E

Wratten #12: 14x 425s bin 1
Red: 6x 285s bin 2
Green: 6x 167s bin 2

According to my calculator, those exposures yielded SNR per sub of 1.5 in each of the three stacks. Normally I have Bortle 5 skies without the Moon, but last night I estimated that light pollution was approximately Bortle 7 due to the Moon. That explains the long exposures. In a few days when the Moon goes away and the skies clear again, I'll run another session that is appropriate for Bortle 5 skies.

The guiding was performed manually, so don't beat me up too much. So far I think that my theory is holding up well: if you want good color quality, then you need to "bake" your subs. How much is enough? That is what calculators are for.
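Brian's calculator isn't shown here, but the standard CCD SNR equation that such calculators typically build on looks like this (a sketch under assumed, illustrative rates - not his actual tool or his camera's numbers):

```python
import math

def snr_per_sub(obj_rate, sky_rate, dark_rate, read_noise, t):
    """Per-sub SNR from the standard CCD equation: rates in
    electrons/pixel/second, read_noise in electrons, t in seconds.
    This is the textbook formula; Brian's calculator may differ in detail."""
    signal = obj_rate * t
    noise = math.sqrt((obj_rate + sky_rate + dark_rate) * t + read_noise ** 2)
    return signal / noise

# Illustrative: an object at 0.25 e-/px/s under a bright moonlit sky of
# 11.16 e-/px/s, with 5 e- read noise, in a 425s sub -> SNR about 1.5
print(round(snr_per_sub(0.25, 11.16, 0.0, 5.0, 425), 2))  # 1.52
```

Under a dark sky the same per-sub SNR target is reached with a much shorter exposure, which is why the calculator's answer moves with light pollution.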

EDIT: An interesting bit of information: My bias level is 220 ADU, but the sky background in each sub last night was running about 7000 ADU due to light pollution (the Moon). 7000 ADU is only one-tenth of 65535, so it is a small price to pay for good color quality. This flies in the face of conventional wisdom that we should reduce exposure in city-like settings.

Brian
timh
Posts: 218
Joined: Mon Aug 26, 2019 5:50 pm

Re: Exposure's Effect On Color Quality. Expt. Data

Post by timh »

Hi Folks,

I posted (above) on a possible effect of short (5s) versus longer (40s) individual frame lengths upon the detection of colour and nebulosity across stacks of the same total exposure time (70 min) - the tentative example being the two pictures of the nebulosity around IC1805, where stacking lots of 5s frames was much less effective than stacking fewer 40s frames. However, this was anecdotal and not a controlled comparison (different nights, darkness conditions etc.).

Below are descriptions of two fairly carefully carried out experiments to explore this question more deeply. In each, the SNR of the same faint object (in the background field of M3) is measured in stacks of equal total duration but comprising subframes of different lengths (i.e. subframes from 10s up to 110s).

The first experiment described immediately below indicated that there was no significant effect of exposure length between exposure lengths of 10s and 110s under the particular conditions of that experiment. All nice and simple and as expected.

However a second experiment (actually carried out earlier on exactly the same object) indicated quite a marked effect where a stack of 10s exposure subframes delivered a much inferior SNR to a stack of 20s frames or a stack of 40s frames.

So, overall, a rather confusing result. My tentative suggestion is that the difference was due to the difference in sky conditions: transparency was poor in the experiment where the shorter subexposures performed poorly. So, if the sky is murky or misty and transparency is poor, it might be better to use somewhat longer subexposures than the minimum suggested by BRAIN on the basis of sky brightness? Alternatively, of course, my data might be misleading (if the sky changed during the 30-40 min it took to do the experiment).

Anyway, below are the data and experiment descriptions.

--------------------------------------

EXPT 1 30/03/21

Compared 20-minute stacks of 10, 20, 40, 70 and 110s exposures, and examined the SNR in the red, green and blue channels of the same group of pixels within the same object (a faint background galaxy) as signal, with a nearby patch of starless frame (about 40,000 pixels) to measure the dark background. This control patch was set nearby so as to minimise slight local gradient effects.

SW 200PDS, Baader flattener (F = 5.0) , CEM 70 mount, PHD2 guiding using ASI 120MM and 50 mm, f = ~ 150 mm finder

ZWO ASI 294 MC PRO camera at -10C

The telescope was centred on M3, only about 35 degrees away from a gibbous Moon, giving a constant, very bright sky background of 11.16 electrons/pixel/second (the BRAIN-recommended exposure for max dynamic range was about 5s under these conditions). Image scale was 0.95 arcsec/pixel. Gain was 124 (near unity). The camera's ADC is 14-bit.

Frames were captured in SC, then preprocessed and the stacks integrated in PI - calibrated against the same flat and the appropriate 40-frame darks created in SC. The 5 stacks were all registered against the same subframe and so were exactly aligned. Analysis was carried out using the PI frame statistics tool after cropping to the desired image and dark-background frame areas. Means and medians were very close, so the area identified as the dark control was quite homogeneous. SNR was (image (total signal) mean - dark background mean) / background signal S.D.
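For anyone wanting to reproduce the measurement, the statistic described above can be sketched in plain Python (a stand-in for the PI statistics tool; the pixel values below are synthetic, generated just to check the arithmetic):

```python
import random
import statistics

def patch_snr(signal_pixels, background_pixels):
    """SNR statistic as described above: (mean of the object patch minus
    mean of a nearby starless background patch) divided by the background
    standard deviation. A sketch of the measurement, not PixInsight itself."""
    return ((statistics.fmean(signal_pixels) - statistics.fmean(background_pixels))
            / statistics.pstdev(background_pixels))

# Synthetic check: background at level 100 with sigma 5, object 20 above it
rng = random.Random(42)
background = [rng.gauss(100.0, 5.0) for _ in range(40000)]
signal = [rng.gauss(120.0, 5.0) for _ in range(100)]
print(round(patch_snr(signal, background), 1))  # close to 20 / 5 = 4
```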

Results for the stacks (versus a particular obscure background galaxy) were ..

11 x 110 s --> SNRred = 5.989 SNRgreen = 5.888 SNRblue = 3.825

18 x 70s --> SNRred = 6.688 SNRgreen = 6.276 SNRblue = 4.220

30 x 40s --> SNRred = 7.386 SNRgreen = 6.490 SNRblue = 4.169

60 x 20s --> SNRred = 6.718 SNRgreen = 6.516 SNRblue = 4.096

120 x 10s --> SNRred = 6.736 SNRgreen = 6.399 SNRblue = 4.086

The SNR values are all the same within error (which I can't exactly quantify, but judged by the effect of sampling different background areas from across the frame it is probably ~ +/- 10%).

I did at first think that the 10s stack was significantly different, but that was because I had forgotten the effect of a meridian flip on that particular image - and so the initial background I had used was not close by. Blue appears to be inherently weaker than the other two colours.

So a somewhat dull result but comforting in the sense that it just confirms that the basic theory all works as it should.
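A back-of-envelope check shows why a flat result is expected under such a bright sky: the read-noise penalty of short subs is tiny once the per-sub sky signal dwarfs the read noise. The read noise of ~2 e- below is an assumption (roughly plausible for this camera near unity gain, but not measured here):

```python
import math

SKY_RATE = 11.16    # e-/pixel/s, the measured sky background above
READ_NOISE = 2.0    # e-, ASSUMED for the ASI294MC near unity gain
TOTAL_T = 1200.0    # 20-minute stacks, as in Expt 1

def stack_noise(sub_t):
    """Background noise of a fixed-total-time stack of sub_t-second subs,
    relative to an ideal exposure of the same length with no read noise."""
    n = TOTAL_T / sub_t
    ideal = math.sqrt(SKY_RATE * TOTAL_T)
    return math.sqrt(SKY_RATE * TOTAL_T + n * READ_NOISE ** 2) / ideal

for sub_t in (10, 20, 40, 70, 110):
    print(sub_t, round(stack_noise(sub_t), 3))  # e.g. 10s -> ~1.018, under 2%
```

Even at 10s subs the predicted penalty is under 2%, well inside the ~10% measurement error quoted above, so the flat SNR table is just what the arithmetic says it should be.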

EXPT 2 24/03/21

I had also carried out the same experiment a week earlier, with just over a half-moon about 100 degrees away - so about 1/3 to 1/2 the sky brightness of the above, and a less transparent sky. On that occasion, carrying out the experiment exactly as above and measuring SNR against the exact same faint galaxy - but comparing stacks of only 10 minutes rather than 20 minutes total exposure - gave a different result, which did indeed suggest that the shorter 10s subframe stacks had a significantly lower SNR and less colour depth. i.e.

22 x 40s --> SNRred = 3.851 SNRgreen = 4.700 SNRblue = 2.861

30 x 20s --> SNRred = 2.975 SNRgreen = 3.800 SNRblue = 2.319

60 x 10s --> SNRred = 1.256 SNRgreen = 1.162 SNRblue = 0.817

So, contrary to the first experiment, this earlier one indicated that, for the same total exposure time (10 min), the subframe exposure length did appear to matter - the 40s exposures giving a slightly better SNR in all three colours than the 20s (attributable to a slightly longer total exposure of 14 min at 40s), but the 10s exposures barely providing any detection of the faint galaxy, and certainly not its colour. (This was also obvious from simple inspection of the images.)

Not quite sure what to make of it all. All I can suggest is that under conditions with poor sky transparency (which might nevertheless give quite a high sky brightness?) - the sort of thing you get with thin cloud or mistiness - it may be better to use longer exposures.

Poor transparency might possibly be one set of circumstances where it is better to exceed the BRAIN recommendation for minimum subframe exposure length? Either that, or quite possibly the experiment suggesting this was confounded by changing sky conditions during the course of carrying it out? But the SNR measurements themselves are probably accurate.

Tim
Last edited by timh on Sat Apr 03, 2021 10:44 am, edited 5 times in total.
oopfan
Posts: 1002
Joined: Sat Jul 08, 2017 2:37 pm
Location: New York
Contact:

Re: Exposure's Effect On Color Quality

Post by oopfan »

I am also looking into the possibility that this "muddy brown" phenomenon is a feature of Astro Pixel Processor (APP). This afternoon I tried to get DSS and StarTools to process my Wratten #12, Red, and Green data, but I am suspicious of the output. I will be happier giving it conventional LRGB. That means swapping filters in the wheel and then recapturing data. In the past, I successfully processed LRGB in DSS, Siril, and APP. That is what I will do. I don't want to place any bets, but APP's log is very fond of SNR. SNR does play some role. How much? I can't say without speaking to Mabula. APP is very easy to use, and many of the controls default to "automatic". Perhaps there is some default behavior that causes this phenomenon.

Brian
oopfan
Posts: 1002
Joined: Sat Jul 08, 2017 2:37 pm
Location: New York
Contact:

Re: Exposure's Effect On Color Quality

Post by oopfan »

Yep, just as I suspected, the culprit was one of APP's "automatic" settings. The following image consists of the same data, but processed differently. In the "Integrate" tab, APP offers two controls. The default setting is "automatic" and "quality". That setting resulted in the left-hand image. But just now, I changed it to "average" and "equal". That setting resulted in the right-hand image:
M51 60s APP automatic vs equal.jpg
Admittedly, both images are c**p, but that is due to under-exposure. In my opinion, the right-hand image is better because it shows a better balance of blue and red. The left-hand image is almost entirely monochrome. That is the "muddy brown" phenomenon that I've spoken about. To the best of my recollection that happens whenever I choose the same exposure for each color filter. For this M51 I used a 60-second exposure everywhere. However, if I were to select a different exposure for each filter, then my image would come out great, as I've proven many times. The purpose of selecting a different exposure is to guarantee that the SNR of each sub in each filter is similar. That is how I normally image using different exposures. But the downside is each exposure requires its own set of darks.

You may be asking "Why does APP choose a default that creates awful images?" The answer is that APP's designer expects everyone to run Star Color Calibration. I choose not to. The reason is, as author Roger Clark notes, too many astrophotographs show far too many blue stars. As he explains, the reason is that the sky is filled with a majority of red stars. When Star Color Calibration sees a lot of red it shifts everything towards blue. I have an additional reason. If you go to AstroBin, you will be shocked by the range of colors for the same object. Everyone has their own idea of what the color content should be. I am different. I want to depict colors as accurately as I can. I'm not interested in artistry. I want realism. So, I learned how to white balance my filters. Furthermore, I created an SNR calculator to ensure that my exposure and frame counts created perfectly balanced color stacks.

In conclusion, it still is important to "bake" your color subs to get the best possible SNR and dynamic range. My 60-second c**p images shown above provide the proof. But now I know that I can safely use the same exposure for each color filter by using the proper setting in APP.

Brian