Robin,

I always pictured nonlinear background extraction as an "artificial flat", because, with rare exceptions, light pollution gradients in individual frames are linear. They only become complex and nonlinear after stacking, which is avoided by applying linear background extraction during preprocessing, prior to stacking.

If nonlinear background extraction as an artificial flat were "moved over" to be an option under flat application instead, and implemented via division, I don't think it would need to change the existing code for the simple and blended offset at all.

I think you are probably correct that there must be a relatively simple equation governing incomplete sensor illumination. I'll see what I can find. You do run into possible complexities: what if the center of illumination is not at the center of the ROI, for example? And what do you do about the linear term? I think the safest approach would be to fit the nonlinear "synthetic flat" under the assumption that it is radially symmetric but that the center of illumination may be offset from the center of the ROI, and to leave any residual linear gradient, assumed to come from light pollution, to be corrected by the linear background extraction.

## New Feature : Background Subtraction

- admin
- Site Admin
**Posts:** 13266 · **Joined:** Sat Feb 11, 2017 3:52 pm · **Location:** Vale of the White Horse, UK

### Re: New Feature : Background Subtraction

Hi,

yes, if there is a solution to the artificial flat then it will be based (I think) on the area of overlap between two circles, each with the radius of the fully illuminated circle on the sensor, one circle placed on the center of illumination and the other on the pixel of interest. It should be possible to recover all 3 parameters from a least squares/least absolute deviation fit. I can test the theory on typical flat frames to see if it holds up before trying anything on light frames.

I'm afraid I'm unconvinced about background correction being flat-like. It's just extra light arriving at the sensor from the atmosphere rather than deep space, so to wipe it out cleanly you have to subtract it. A division-based correction would alter the relative intensities of the remaining data that does come from astro objects, so it would be inappropriate.
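For what it's worth, the overlap area of two circles has a closed form (two circular segments minus a triangle term), so the illumination model can be sketched directly. This is only an illustrative Python sketch; the function names and the normalisation to an illumination fraction are my own choices, not anything from an actual implementation:

```python
import math

def circle_overlap_area(d, r1, r2):
    """Area of overlap of two circles with radii r1 and r2 whose centers are d apart."""
    if d >= r1 + r2:
        return 0.0                           # circles fully separated
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2    # smaller circle fully inside the larger
    # general case: two circular segments minus the triangle between the centers
    a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
    a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
    tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2) * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - tri

def illumination_fraction(px, py, cx, cy, a, y_small):
    """Fraction of the small circle (radius y_small) around pixel (px, py) that
    falls inside the fully illuminated circle (radius a) centered on (cx, cy)."""
    d = math.hypot(px - cx, py - cy)
    return circle_overlap_area(d, a, y_small) / (math.pi * y_small ** 2)
```

The three (or more) parameters to recover by least squares/least absolute deviation would then be the illumination center and the radii.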

cheers,

Robin


### Re: New Feature : Background Subtraction

I don't think I'm making myself clear. There are *two different things* that show up in "the background" of the image that we might want to correct. One is non-uniform illumination of the sensor: it comes from the optical train, not the sky; it should be corrected by division; and it can probably be assumed to be radially symmetric (although, again, perhaps not about the center of the ROI). The other is light pollution, which comes from the sky, not the optical train; in almost all cases it can be assumed to be a linear gradient, and it should be corrected by subtraction.

So what I'm saying is, fit the radial non-linear background and divide it out, and then fit the linear background and subtract it out.
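As a sketch of that order of operations (divide first, then subtract), assuming the two models have already been fitted somehow; `vignette` and `gradient` here are hypothetical stand-in callables, not anything from the real code:

```python
def correct_background(img, vignette, gradient):
    """Apply the two corrections in order: divide out the optical vignetting,
    then subtract the linear light-pollution gradient.

    img      -- 2D list of pixel values
    vignette -- callable (x, y) -> relative illumination (1.0 = fully lit)
    gradient -- callable (x, y) -> modelled sky-glow level at that pixel
    """
    out = []
    for y, row in enumerate(img):
        out.append([px / vignette(x, y) - gradient(x, y)
                    for x, px in enumerate(row)])
    return out
```

The point of the ordering is that division restores the true relative intensities before the additive sky term is removed.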


### Re: New Feature : Background Subtraction

Regarding fitting a vignetting function, this document has some nice visuals:

https://wp.optics.arizona.edu/jgreivenk ... etting.pdf

So the image will be fully illuminated to some radius *a*, and fully vignetted (black) by radius *a + 2y*, where *y* is the radius of a smaller circle determined by the optical system. Basically, if the circle of diameter *2y* lies entirely within the circle of diameter *2a*, the pixel is fully illuminated. If it lies entirely outside the circle of diameter *2a*, it is fully vignetted. When it straddles the edge of the larger circle, the illumination goes as the area of the small circle that lies within the large circle. I haven't yet tried to turn this into an equation.

However, other sources I've found simply attempt to fit the vignetting as a 6th-order even polynomial expansion:

L_corrected = L_vignetted * (1 + k1 * R^2 + k2 * R^4 + k3 * R^6)

If you don't assume the center of vignetting is the center of the FOV, this actually produces 5 fit parameters: k1, k2, k3, and (x0, y0), the center of vignetting. The second method can't fit all the way to fully vignetted, but hopefully there aren't too many people imaging with hard shadows in the corners. :Op

That seems like it would be fairly simple to test.
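The polynomial version is at least trivial to evaluate. A minimal Python sketch of the correction gain with the off-center (x0, y0) included; the names are mine and the fitting of the five parameters is not shown:

```python
def polynomial_vignette_gain(x, y, k1, k2, k3, x0, y0):
    """Correction gain from the 6th-order even expansion, allowing the center
    of vignetting (x0, y0) to be offset from the center of the FOV."""
    r2 = (x - x0) ** 2 + (y - y0) ** 2          # R^2 measured from the vignetting center
    return 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3

def correct_pixel(l_vignetted, x, y, k1, k2, k3, x0, y0):
    """L_corrected = L_vignetted * (1 + k1*R^2 + k2*R^4 + k3*R^6)."""
    return l_vignetted * polynomial_vignette_gain(x, y, k1, k2, k3, x0, y0)
```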

- admin
- Site Admin

### Re: New Feature : Background Subtraction

Hi Mike,

yes, there are definitely two components, but my inclination would be to put any division based correction into the 'flat correction' option rather than 'background subtraction', as it would basically be a synthetic flat. Putting it there also ensures that it cannot be turned on at the same time as real flat correction (which would surely lead to a horrible mess).

I did come up with a formula for the overlap of two circles - it turns out pretty messy as it involves subtracting the area of a triangle made by the center of the circle and a chord from the area of the segment facing the chord. I can see why it gets approximated. In either case I suspect some iterative hill descent algorithm aiming to reduce the total error from the modelled data would be the way to solve it.
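A "hill descent" of the kind described can be sketched very simply as coordinate descent with a shrinking step. This is only an illustrative toy (names mine), not a production optimiser; `residual` would be the total error between the modelled and measured flat:

```python
def hill_descent(residual, params, step=0.1, shrink=0.5, tol=1e-6):
    """Toy coordinate descent: nudge each parameter up and down by `step`,
    keep any move that lowers the residual, and shrink the step when a full
    sweep makes no progress."""
    params = list(params)
    best = residual(params)
    while step > tol:
        improved = False
        for i in range(len(params)):
            for delta in (step, -step):
                trial = list(params)
                trial[i] += delta
                err = residual(trial)
                if err < best:
                    params, best = trial, err
                    improved = True
        if not improved:
            step *= shrink
    return params, best
```

Slow but robust, and indifferent to whether the residual is least squares or least absolute deviation.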

cheers,

Robin


### Re: New Feature : Background Subtraction

Ok; yes, we are saying the same thing (agreeing with admin's post of Sat Feb 24, 2024 2:39 pm above).

Regarding the vignetting equation, I ended up in the same place. It's nasty.

The thing that bothers me about the even polynomial expansion, though, is that it cannot produce a truly flat, fully illuminated region in the center. I think a better version would be a piecewise even expansion:

L_corrected = L_vignetted for R <= a

L_corrected = L_vignetted * (1 + k1 * (R-a)^2 + k2 * (R-a)^4 + k3 * (R-a)^6) for R > a

Then *a*, the fully illuminated radius, becomes another parameter of the fit, totaling 6 parameters if you take the expansion all the way to the 6th-order term.

Edit: I think that piecewise equation can be written as a single equation using the step function (zero for R <= a, one for R > a):

L_corrected = L_vignetted * (1 + step(R - a) * ( k1 * (R-a)^2 + k2 * (R-a)^4 + k3 * (R-a)^6 ) )
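In code the step function falls out naturally as a clamp. A minimal Python sketch of the piecewise gain (names mine):

```python
def piecewise_gain(r, a, k1, k2, k3):
    """Correction gain that is exactly 1 (flat, fully illuminated) for r <= a.
    max(r - a, 0) plays the role of step(R - a) * (R - a)."""
    d = max(r - a, 0.0)
    return 1.0 + k1 * d ** 2 + k2 * d ** 4 + k3 * d ** 6
```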