Hi Tim,
Thank you for your reply! Great questions.
The Newtonian Mask adds visual interest. I have a refractor. Refractors render stars as round discs due to the clear, unobstructed path provided by the lens system. Newtonian reflectors utilize two mirrors. The primary mirror gathers starlight, and the secondary mirror redirects the light 90 degrees into your camera. The secondary mirror partially blocks starlight as it travels through the system. There is the mirror, which is usually 20-25% the diameter of the primary, and there are the spider vanes that suspend the mirror at the tube's center. Starlight must flow around these obstructions, and in the process, create a cross-like diffraction pattern that you see in your image. The diffraction pattern is mostly noticeable on bright stars. Personally, I like the diffraction pattern. You have two choices if you have a refractor: create them with software, or mimic a Newtonian reflector by creating an obstruction. I've tried software, specifically StarTools, to simulate diffraction spikes but I felt that the image looked contrived. (There are other software packages that do a better job, from what I understand.) If you want to create an obstruction, I've seen people use dental floss! I took a different approach. I created a 3D-print of a faux secondary mirror and spider vanes, and then affixed it to the end of my refractor's dew shield. You can see it here:
viewtopic.php?f=10&t=3364
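If you're curious why the obstruction and vanes produce that cross pattern, here is a minimal Python sketch (numpy and matplotlib) that simulates the far-field diffraction of an obstructed aperture. The dimensions are illustrative assumptions, not measurements of my mask:
[code]
import numpy as np
import matplotlib.pyplot as plt

# Build a circular aperture with a central obstruction and a cross of
# spider vanes (illustrative sizes, not my actual mask).
N = 512
y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
r = np.hypot(x, y)

aperture = (r < 100).astype(float)   # clear circular aperture
aperture[r < 22] = 0.0               # ~22% central obstruction
aperture[np.abs(x) < 2] = 0.0        # vertical spider vane
aperture[np.abs(y) < 2] = 0.0        # horizontal spider vane

# The far-field (Fraunhofer) pattern of a point source is the squared
# magnitude of the aperture's Fourier transform. The vanes are what
# produce the cross-shaped spikes.
psf = np.abs(np.fft.fftshift(np.fft.fft2(aperture))) ** 2
psf /= psf.max()

plt.imshow(np.log10(psf + 1e-9), cmap="gray")
plt.title("Simulated star image with diffraction spikes")
plt.show()
[/code]
Rotate the vanes in the model and the spikes rotate with them, which is exactly what happens with a real mask.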
The Wratten #12 filter works around my refractor's poor optics. This filter passes red and green light but blocks blue. My refractor is a doublet, not a triplet, and doublets do a poor job of bringing all colors to the same focus; triplets do much better. The problem is severe for my telescope. In fact, people have written scathing commentaries about this particular telescope. It does do a fine job of focusing red and green, but blue is absolutely terrible. You can read my write-up here:
viewtopic.php?f=10&t=3367
My monochrome camera allows me to experiment more freely. Playing around with filters is one of the great benefits of owning a monochrome camera, but there are downsides: the extra cost of filters and a filter wheel, and the extra steps needed to create a color image. Still, I prefer this platform.
Long exposures achieve the desired effect. My monochrome camera is a CCD, not a CMOS. CCD sensors suffer from higher readout noise than CMOS sensors, and as a consequence, CCDs require longer exposure times. I don't like going below 60 seconds. Nothing prevents me from going lower, but I would then need to significantly increase the total integration time to achieve the same low-noise final image. By comparison, my monochrome CMOS camera is happy with 20-second exposures. I can go lower, but then I run into the same problem: I would need to increase total integration time.
So, why do I choose CCD over CMOS? In my opinion, the image quality of a CCD is higher, mostly due to the sensor's architecture. My CCD has a single, expensive analog-to-digital converter. CMOS sensors, on the other hand, have a large number of less costly analog-to-digital converters. The benefit of CMOS is that the image can be clocked out of the sensor at much higher rates because the A/Ds run in parallel. That is why CMOS is preferred for lunar and planetary imaging, which requires high frame rates.
In my image of M34, I used a rather long 85-second exposure, for two reasons. First, I wanted to capture prominent diffraction spikes. Second, I wanted to test a theory. Up until then, I firmly believed that my problem with "fat stars" was due to saturation of bright stars. I wanted to prove that this was no longer the case once I began using the Wratten #12 filter (minus blue). Next year, when I have the opportunity, I will image M34 again and dial the exposure back to 60 seconds.
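To put a little arithmetic behind the read-noise argument, here is a back-of-the-envelope Python sketch. The flux and noise numbers are illustrative assumptions, not measurements of my cameras; the point is simply that with a fixed total integration time, high read noise punishes short sub-exposures much harder:
[code]
import math

def stack_snr(sub_s, total_s, signal_eps=1.0, sky_eps=2.0, read_noise_e=10.0):
    """SNR of a stacked image with a fixed total integration time.

    signal_eps / sky_eps: target and sky flux in electrons per second.
    read_noise_e: read noise in electrons, added once per sub-exposure.
    """
    n_subs = total_s / sub_s
    signal = signal_eps * total_s
    noise = math.sqrt(signal + sky_eps * total_s + n_subs * read_noise_e ** 2)
    return signal / noise

total = 3600  # one hour of total integration
for read_noise, label in [(10.0, "CCD-like"), (2.0, "CMOS-like")]:
    for sub in (20, 60, 85):
        snr = stack_snr(sub, total, read_noise_e=read_noise)
        print(f"{label:9s} RN={read_noise:4.1f} e-, {sub:2d}s subs: SNR = {snr:.1f}")
[/code]
Run it and you'll see the CCD-like case lose noticeably more SNR at 20-second subs than the CMOS-like case does, which is why I stay at 60 seconds or above on the CCD.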
Thanks for your interest. I hope I answered your questions.
EDIT: There is one last bit that I forgot to mention. What effect does the Wratten #12 filter have on color astrophotography? By definition, the Wratten #12 subtracts blue, but blue is an important component of a color image. So, how did I create the color images seen here:
viewtopic.php?f=16&t=3410
I created those color images by borrowing a technique used in narrowband imaging of emission nebulae, specifically bi-color imaging in hydrogen-alpha (Ha) and oxygen (OIII). There is something called the HOO palette. (Perhaps you have heard of the SHO palette, made famous by the Hubble Space Telescope; it is tri-color and includes sulfur, SII.) Anyhow, this technique maps two channels onto three. For narrowband HOO, you map Ha to Red and OIII to both Green and Blue, hence "HOO" = "RGB"!
Now, in my case I have a Red stack and a Green stack that I want to map to RGB. I take 67% of my Red stack and assign it to the Red channel, and 67% of my Green stack and assign it to the Blue channel. Finally, I assign the remaining 33% of Red and 33% of Green to the Green channel. Color rendition is not perfect: blue stars tend toward cyan, and red stars toward orange. In the next couple of weeks, when the weather clears and the Moon goes away, I plan to image star clusters containing extremely red and blue stars. I am quite interested to see how they render.
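In case it helps to see the mapping concretely, here is a minimal numpy sketch of the mix described above. It assumes red_stack and green_stack are calibrated, stacked monochrome frames normalized to the 0-1 range:
[code]
import numpy as np

def two_color_to_rgb(red_stack, green_stack):
    """Map two monochrome stacks onto three RGB channels."""
    r = 0.67 * red_stack                         # 67% of Red  -> Red channel
    b = 0.67 * green_stack                       # 67% of Green -> Blue channel
    g = 0.33 * red_stack + 0.33 * green_stack    # remainders   -> Green channel
    return np.dstack([r, g, b])

# Example with synthetic data standing in for real stacks:
red_stack = np.random.rand(100, 100)
green_stack = np.random.rand(100, 100)
rgb = two_color_to_rgb(red_stack, green_stack)   # shape (100, 100, 3)
[/code]
The 67/33 weights are just the ones I quoted above; nothing stops you from tuning them to taste.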
Brian