[ptx] Vignetting correction in nona

Pablo d'Angelo pablo.dangelo at web.de
Mon Jan 2 19:15:32 GMT 2006


Hi JD,

> Great to hear you're working on this, Pablo.  Really a missing
> ingredient in the PanoTools workflow.

Yes, it's hidden a bit by enblend, but it's still visible, especially in
printed panoramas.

> The direct polynomial estimation is the most vital component from my
> point of view.  If we can reduce vignetting correction to a small set of
> readily-determined parameters which accompany all the other parameters
> for a given lens/camera/focal length setup, then it becomes easy to
> collect and trade these (e.g. clens), and much more likely that people
> will use them.

Sure, that's one of the goals.
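
To make that concrete: the correction itself boils down to dividing each
pixel by a radial polynomial. Something like this sketch (the sixth-order
form and the coefficient names a, b, c are just an example, not the final
parameter set):

  // Divide each pixel by a radial falloff polynomial.  r is the distance
  // from the image center, normalized so that r = 1 at the farthest corner.
  double correctPixel(double value, double r, double a, double b, double c)
  {
      double r2 = r * r;
      double falloff = 1.0 + a * r2 + b * r2 * r2 + c * r2 * r2 * r2;
      return value / falloff;
  }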

> I still maintain that estimating the vignetting correction function from
> resolution-degraded images would be an advantage.

I haven't really played a lot with the estimation yet, but I also think this
will help a lot.
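
By resolution-degraded I would just use plain box-averaged copies of the
grayscale data before the estimation, roughly like this (untested sketch;
the block size is only an example):

  #include <vector>

  // Plain box averaging to get a resolution-degraded copy for the
  // estimation ('factor' is the block size, e.g. 4).
  std::vector<double> downscale(const std::vector<double>& img,
                                int width, int height, int factor)
  {
      int w = width / factor, h = height / factor;
      std::vector<double> out(w * h);
      for (int y = 0; y < h; ++y)
          for (int x = 0; x < w; ++x) {
              double sum = 0.0;
              for (int dy = 0; dy < factor; ++dy)
                  for (int dx = 0; dx < factor; ++dx)
                      sum += img[(y * factor + dy) * width + x * factor + dx];
              out[y * w + x] = sum / (factor * factor);
          }
      return out;
  }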

> Since vignetting is a geometric effect (i.e. color
> unaware), do you first project the RGB along the grayscale vector, or do
> some other luminance conversion?

I'm working with grayscale data so far.
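
Concretely, the two obvious options look like this; which one the
estimation ends up using is still open:

  // Plain projection onto the gray diagonal (color-unaware average) ...
  double grayProjection(double r, double g, double b)
  {
      return (r + g + b) / 3.0;
  }

  // ... versus an ITU-R 601 style luminance weighting.
  double luminance(double r, double g, double b)
  {
      return 0.299 * r + 0.587 * g + 0.114 * b;
  }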

I'll also add optimisation of the vignetting center and direct estimation
from flat-field images. After the correction has been added to nona/hugin,
I will publish my Matlab prototypes (maybe they also work with Octave).
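
The flat-field estimation itself is basically a linear least-squares fit of
the radial polynomial to the normalized flat. Roughly like this sketch, with
the center fixed at the image middle for now; optimising the center would
wrap this in a small nonlinear search over cx, cy:

  #include <cmath>
  #include <vector>

  // Fit flat(x,y) / flat(center) ~ 1 + a*r^2 + b*r^4 + c*r^6 by linear
  // least squares (sketch; center fixed at the image middle, r normalized
  // to the corner distance).  Returns {a, b, c}.
  std::vector<double> fitVignetting(const std::vector<double>& flat,
                                    int width, int height)
  {
      double cx = 0.5 * (width - 1), cy = 0.5 * (height - 1);
      double rmax = std::sqrt(cx * cx + cy * cy);
      double center = flat[int(cy) * width + int(cx)];

      double ata[3][3] = {{0.0}}, atb[3] = {0.0};
      for (int y = 0; y < height; ++y)
          for (int x = 0; x < width; ++x) {
              double dx = (x - cx) / rmax, dy = (y - cy) / rmax;
              double r2 = dx * dx + dy * dy;
              double f[3] = { r2, r2 * r2, r2 * r2 * r2 };
              double rhs = flat[y * width + x] / center - 1.0;
              for (int i = 0; i < 3; ++i) {
                  for (int j = 0; j < 3; ++j)
                      ata[i][j] += f[i] * f[j];
                  atb[i] += f[i] * rhs;
              }
          }

      // Solve the 3x3 normal equations (Gaussian elimination without
      // pivoting; fine for this small, well-conditioned system).
      for (int i = 0; i < 3; ++i)
          for (int k = i + 1; k < 3; ++k) {
              double m = ata[k][i] / ata[i][i];
              for (int j = i; j < 3; ++j)
                  ata[k][j] -= m * ata[i][j];
              atb[k] -= m * atb[i];
          }
      std::vector<double> p(3);
      for (int i = 2; i >= 0; --i) {
          double s = atb[i];
          for (int j = i + 1; j < 3; ++j)
              s -= ata[i][j] * p[j];
          p[i] = s / ata[i][i];
      }
      return p;
  }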

> By the way, for astronomical imaging, we typically take 5-10 flat field
> images, and median combine or trimmed-average them to produce a very
> high signal flat.  You might allow people to select a set of images
> which would be combined in this way for the internal flat (or we could
> just suggest a workflow for doing this externally).

This can be done outside nona; there are lots of image stackers out there.
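
For reference, the per-pixel part of what those stackers do is essentially
this:

  #include <algorithm>
  #include <vector>

  // Per-pixel median of N equally sized flat-field frames.
  std::vector<double> medianCombine(const std::vector< std::vector<double> >& frames)
  {
      size_t n = frames.size(), npix = frames[0].size();
      std::vector<double> out(npix), samples(n);
      for (size_t p = 0; p < npix; ++p) {
          for (size_t i = 0; i < n; ++i)
              samples[i] = frames[i][p];
          std::nth_element(samples.begin(), samples.begin() + n / 2, samples.end());
          out[p] = samples[n / 2];   // upper median for even n
      }
      return out;
  }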

> I'd also suggest normalizing the flat image
> to the median, rather than the max, to ensure no overall shift in
> brightness.

Hmm, but the image should have been brighter; its brightness at the
corners was reduced by the vignetting... The question is whether to risk
clipping pixels or not (in the 8-bit case).
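
To make the tradeoff concrete (sketch):

  #include <algorithm>
  #include <vector>

  // Two ways to pick the normalization constant for the flat.  Dividing
  // by max(flat) gives flat <= 1 everywhere, so the correction only
  // brightens and may clip in 8 bit; dividing by median(flat) keeps the
  // overall brightness but darkens the center somewhat.
  double maxNormalizer(const std::vector<double>& flat)
  {
      return *std::max_element(flat.begin(), flat.end());
  }

  double medianNormalizer(std::vector<double> flat)  // copy: nth_element reorders
  {
      std::nth_element(flat.begin(), flat.begin() + flat.size() / 2, flat.end());
      return flat[flat.size() / 2];
  }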

> With the division method (which is actually the most physically
> appropriate, for linear data, as you mention), I suspect limited bit
> depth will come into play.  Do you plan on up-converting 8-bit images
> temporarily to 16bit or float before performing the correction, or just
> warning users that banding may result if they correct vignetting in 8-
> bit space?

The images will be internally converted to float or a fixed point
representation.
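
For the 8-bit path that would look roughly like this (sketch):

  #include <algorithm>
  #include <vector>

  // Up-convert to float, divide by the normalized flat, clamp and round
  // back to 8 bit.
  void applyFlat(std::vector<unsigned char>& img,
                 const std::vector<double>& flatNorm)
  {
      for (size_t i = 0; i < img.size(); ++i) {
          double v = img[i] / flatNorm[i] + 0.5;   // correct in float, round
          img[i] = (unsigned char) std::max(0.0, std::min(255.0, v));
      }
  }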

I also suspect that a fixed point representation would speed up
interpolation quite a lot, since currently the interpolation weights are
doubles, and many expensive byte -> double conversions are done.
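
Something along these lines, with the weights as 8.8 fixed point so the
inner loop stays in integer arithmetic (just a sketch, not the actual nona
code):

  // Bilinear interpolation with 8.8 fixed point weights, so the inner
  // loop needs no byte -> double conversions.  fx, fy are the fractional
  // parts of the source coordinate, scaled to 0..256.
  unsigned char bilinearFixed(const unsigned char* img, int width,
                              int x, int y, int fx, int fy)
  {
      const unsigned char* p = img + y * width + x;
      int top    = p[0]     * (256 - fx) + p[1]         * fx;   // 8.8
      int bottom = p[width] * (256 - fx) + p[width + 1] * fx;   // 8.8
      int value  = top * (256 - fy) + bottom * fy;              // 8.16
      return (unsigned char)(value >> 16);
  }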

ciao
  Pablo

