full-well dynamic range increase
A question for Robin - In order to get around full-well depth/saturation problems, is there any reason you could not interpose an accumulation buffer, say 32-bit integer or floating point, to accept something like a 16-bit raw frame from the camera, allow the camera to reset and collect another frame (presumably set to be unsaturated), and then download that to be added to the accumulation buffer? The accumulation buffer would at some point be used to output a 32-bit FITS frame. This would seem to increase the available dynamic range from the camera while reducing the number of FITS files required for storage. Or am I missing something fundamental that prevents this? Perhaps the storage requirements are the same for lots of smaller frames as for fewer high dynamic range frames? Thanks,
John
Re: full-well dynamic range increase
I would love to say something but you've limited the number of respondents to just Robin.
Brian
Re: full-well dynamic range increase
Sorry, not just Robin - anyone. Thanks,
John
Re: full-well dynamic range increase
Hi John,
I've wondered the same - trying to improve dynamic range is an interesting question and challenge.
I don't know if my answer below is similar to what Brian or Robin would write - it may be - but for what it is worth, here is my take on it.
I don't think that staging the capture in the way you suggest would help because - however you play it - if the original (say) 16-bit frame read out is itself saturated (i.e. has pixels maxing out at 65535 that correspond to stars of differing brightness) then that differential brightness information is irretrievably lost, irrespective of anything that happens subsequently. On the other hand, if the original 16-bit frame were not saturated, then the information is there anyway and there would be no point doing anything other than stacking frames as normal.
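A quick numpy sketch of the point above (the flux values are made up for illustration): once two pixels of different true brightness both clip at the 16-bit ceiling, no amount of later accumulation in a wider buffer can recover their ratio.

```python
import numpy as np

# Two "stars": one twice as bright as the other, but both above
# the 16-bit full-well ceiling in a single exposure (values in ADU,
# chosen purely for illustration).
true_flux = np.array([100_000.0, 200_000.0])

# A 16-bit readout clips ("saturates") at 65535 ADU.
frame = np.clip(true_flux, 0, 65535)

# Accumulating many such frames in a 32-bit buffer just scales the
# clipped values - the 2:1 brightness ratio is gone for good.
accumulated = (frame * 10).astype(np.int64)

print(frame)        # both pixels read 65535
print(accumulated)  # both pixels read 655350 - indistinguishable
```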
In practice the best way that I am aware of to increase the dynamic range is to combine stacks of exposures of different lengths into the same composition. So, for example, with objects like globular clusters, where the middle is consistently brighter than the periphery, it is common practice to combine a stack of shorter exposures with a stack of longer ones (e.g. there are tools in PixInsight, such as HDRComposition, that do this intelligently) to form an overall image that flattens out the brightness differences and covers a higher overall dynamic range.
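As a very rough sketch of the idea (this is not PixInsight's HDRComposition algorithm, just the simplest possible version, with invented numbers): wherever the long-exposure stack is saturated, substitute the short-exposure stack scaled by the exposure ratio.

```python
import numpy as np

# Hypothetical HDR combine: keep the long-exposure data where it is
# valid, and fall back to scaled short-exposure data where the long
# stack has clipped. sat_level is an assumed saturation threshold.
def hdr_combine(short_stack, long_stack, exposure_ratio, sat_level=65000.0):
    scaled_short = short_stack * exposure_ratio
    return np.where(long_stack >= sat_level, scaled_short, long_stack)

short = np.array([6000.0, 100.0])    # cluster core, faint halo (1 s subs)
long_ = np.array([65535.0, 1000.0])  # the core saturates in 10 s subs
hdr = hdr_combine(short, long_, exposure_ratio=10.0)
# core recovered as 60000 from the short stack; halo kept from the long
```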
Tim
Last edited by timh on Mon Aug 23, 2021 4:43 pm, edited 1 time in total.
- admin
- Site Admin
- Posts: 13287
- Joined: Sat Feb 11, 2017 3:52 pm
- Location: Vale of the White Horse, UK
- Contact:
Re: full-well dynamic range increase
Hi,
essentially this is what stacking of images (both live stacking in SharpCap and stacking in applications like DSS and PixInsight) does. The final image made by stacking has a higher dynamic range than any of the individual images. No need to add this as a new feature, as it already exists.
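A small simulation of that effect (the ADU values are invented for illustration): averaging many quantised 16-bit subs into a floating-point stack resolves fractions of an ADU and pushes the noise floor down by roughly the square root of the number of subs, which is exactly an increase in usable dynamic range.

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 simulated 16-bit subs of 1000 pixels: a small constant signal
# on top of a camera offset, buried in read noise (all in ADU;
# values chosen purely for illustration).
offset, signal, read_noise, n = 500.0, 3.0, 10.0, 100
subs = (offset + signal
        + rng.normal(0, read_noise, size=(n, 1000))).round()

# Averaging into a float stack lowers the noise floor by ~sqrt(n)
# compared with any single sub.
stack = subs.mean(axis=0)
print(subs[0].std(), stack.std())  # per-sub noise vs stacked noise
```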
cheers,
Robin
Re: full-well dynamic range increase
Robin and Timh,
Understood, thanks. But would there be any advantage in producing fewer frames for subsequent stacking? I realize that each frame would take more storage space (say 32-bit vs 16-bit). But if you accumulate more than four frames prior to writing an output file, wouldn't this reduce the storage requirements? So no increase in dynamic range (since stacking already does this), but less storage? Thanks,
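The storage arithmetic can be checked directly (the sensor size below is a made-up example): since a 32-bit frame is exactly twice the size of a 16-bit one, the break-even is at two subs per accumulated frame, and every sub beyond that saves a full raw frame of storage.

```python
# Back-of-envelope storage check for a 16-bit camera, assuming a
# hypothetical 3000 x 2000 sensor and uncompressed frames.
width, height = 3000, 2000
raw_16bit = width * height * 2    # bytes per raw sub
stack_32bit = width * height * 4  # bytes per accumulated 32-bit frame

for n_subs in (2, 4, 8):
    saving = n_subs * raw_16bit - stack_32bit
    print(n_subs, saving)
# break-even at 2 subs; beyond that, each extra sub folded into the
# 32-bit accumulation saves one whole 16-bit frame of storage
```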
John
Re: full-well dynamic range increase
Hi John,
I think that 'autostacks' in live stacking already does exactly this? Useful to save space, especially if stacking lots of short exposures. There is an awful lot already packed into SharpCap that takes a while to discover - as I continue to find.
Tim
Re: full-well dynamic range increase
Timh,
I believe you are correct, but if you save the autostacked frames, I think they would have the darks and flats already applied if you use this option for viewing. Or is there an option to save the raw uncalibrated stack while viewing calibrated stacked frames? I ask because I'm trying to do the best I can at both real-time EAA and later post-processing in something like PixInsight, where I believe you are better off doing the calibration in PI. I've sort of bugged Robin about the ability to save raw calibration frames (within the live stacking side) rather than the averaged masters. I know you can already save these raw calibration frames by doing them separately on the still capture side. Maybe I'm overthinking this. I have read the manual several times, but a lot of stuff just comes from trying and experience. Thanks,
John
Re: full-well dynamic range increase
I may be wrong here, but I think that:
1) the autostacked 16-bit and 32-bit FITS files are indeed saved with whichever flat and dark master files you specify already applied.
2) In live stack there is already an option to save all the raw files, or alternatively only those raw files that were selected for inclusion in the stacks (the latter is an option I frequently use to save space, and also because I find the brightness and FWHM filters very useful for sifting out poor frames).
Of course the autostack files, rather than just the raw files, can also be processed later in PixInsight. They just need to be further integrated together, and then of course without any need to apply darks and flats again. I don't think it would make sense to produce 'raw' uncalibrated autostacks, because applying darks, flats and cosmetic correction post-stacking just wouldn't work?
Tim
Re: full-well dynamic range increase
Hi,
I don't really see an advantage to 'pre-stacking'. If you do it without alignment then the total exposure time in a pre-stack is limited by your mount's tracking accuracy anyway, and the lower limit on sub-exposure length is set by read noise considerations. Unless you are very constrained for storage space, the downsides (more time lost if the mount glitches during tracking, or if a plane/satellite passes across) likely exceed the upside.
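The read-noise consideration can be put into numbers (the sky rate and read noise below are assumed values, not from any real camera): for a fixed total exposure time, each sub pays one read-noise penalty, so splitting the same time into many very short subs lowers the final SNR.

```python
import math

# Sketch of the read-noise argument: total SNR for a fixed 600 s of
# exposure split into n subs. Assumed values: 2 e-/s sky-limited
# signal rate, 3 e- RMS read noise per readout.
sky_rate, read_noise, total_time = 2.0, 3.0, 600.0

def snr(n_subs):
    signal = sky_rate * total_time
    shot_noise_sq = signal                  # Poisson variance of the signal
    read_noise_sq = n_subs * read_noise**2  # one readout penalty per sub
    return signal / math.sqrt(shot_noise_sq + read_noise_sq)

for n in (1, 60, 600):
    print(n, round(snr(n), 1))
# the more subs the same total time is split into, the more read-noise
# penalties accumulate and the lower the final SNR
```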
cheers,
Robin