[ptx] Vignetting correction in nona
JD Smith
jdsmith at as.arizona.edu
Mon Jan 2 17:49:23 GMT 2006
On Mon, 2006-01-02 at 17:07 +0100, pablo.dangelo at informatik.uni-ulm.de
wrote:
> Quoting Mike Runge <mike at trozzreaxxion.net>:
>
> > Sounds too good to be true :-)
> > Pablo,
> > #1 will there be some kind of preview? I would like to see this as
> > option in the regular viewer.
>
> It will be displayed in the panorama viewer. I probably won't create an extra
> vignetting correction preview.
>
> > #2 will this feature be supported by the lens settings import/export?
>
> Yes.
>
> The polynomial correction is working, I just have to write the flatfield code,
> and add another window for the vignetting coefficients to the lens tab. I won't
> add it to the lens tab itself, it is already too crowded.
>
> I have some matlab scripts that estimate the correction polynomial from
> overlapping images (which have to be very well registered or contain a scene
> with uniform color to avoid high errors). I'll experiment a bit more with the
> approach once the correction is implemented.
Great to hear you're working on this, Pablo. Really a missing
ingredient in the PanoTools workflow.
The direct polynomial estimation is the most vital component from my
point of view. If we can reduce vignetting correction to a small set of
readily-determined parameters which accompany all the other parameters
for a given lens/camera/focal length setup, then it becomes easy to
collect and trade these (e.g. clens), and much more likely that people
will use them.
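To make the idea concrete, here is a rough numpy sketch of the sort of
model I'm picturing (the parameter names and the polynomial order are
just illustrative on my part, not a proposal for your implementation):

    import numpy as np

    def vignette_factor(r, a, b, c):
        # Relative illumination as an even polynomial in normalized
        # radius r (r = 1 at the image corner); a, b, c are the small
        # set of per-lens parameters that would be shared.
        return 1.0 + a * r**2 + b * r**4 + c * r**6

    def correct(img, a, b, c):
        # img: 2D float array; divide out the modeled falloff.
        h, w = img.shape
        y, x = np.mgrid[0:h, 0:w]
        cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
        r = np.hypot(x - cx, y - cy) / np.hypot(cx, cy)
        return img / vignette_factor(r, a, b, c)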
I still maintain that estimating the vignetting correction function from
resolution-degraded images would be an advantage. To 1st order, the
flat field function is very smooth, so performing your estimation on a
lower-resolution set of images would minimize errors arising from mis-
registration and strong color contrast. As a bonus, it should be
faster too. Since vignetting is a geometric effect (i.e. color
unaware), do you first project the RGB along the grayscale vector, or do
some other luminance conversion?
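Something along these lines (again only a sketch; the block size and
luminance weights are placeholders of mine) is what I mean by
estimating on degraded, color-collapsed data:

    import numpy as np

    def to_luminance(rgb):
        # Project RGB onto a grayscale vector before estimating the
        # correction, since vignetting is color-unaware.  The Rec.601
        # weights are one common choice.
        return rgb @ np.array([0.299, 0.587, 0.114])

    def downsample(img, factor=8):
        # Crude block-average downsample; smooths out small
        # misregistrations and local color contrast.
        h, w = img.shape
        h2, w2 = h - h % factor, w - w % factor
        blocks = img[:h2, :w2].reshape(h2 // factor, factor,
                                       w2 // factor, factor)
        return blocks.mean(axis=(1, 3))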
I wonder whether allowing the center of the radial vignetting function
to move (perhaps with constraints) would be useful. I've seen some
fairly off-center looking flat field images. This would introduce two
more parameters, but could provide flexibility for troublesome cases.
These would likely be rarely used, similar to d and e in the lens offset
case. They probably make direct estimation somewhat more difficult and
less robust, but might make the difference in some cases.
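Concretely, the only change from the centered model would be computing
the radius about a shifted center; the dx/dy names here are just
illustrative:

    import numpy as np

    def radius(shape, dx=0.0, dy=0.0):
        # Radius about a center offset by (dx, dy) pixels from the
        # geometric image center -- two extra fit parameters,
        # analogous to d and e in the lens offset case.
        h, w = shape
        y, x = np.mgrid[0:h, 0:w]
        cx, cy = (w - 1) / 2.0 + dx, (h - 1) / 2.0 + dy
        return np.hypot(x - cx, y - cy) / np.hypot((w - 1) / 2.0,
                                                   (h - 1) / 2.0)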
Another good option to include would be to estimate the flat-field
function not just from overlapping images, but, if requested, from the
flat-field image(s) fed in. This is a comparatively easy fit, and will
allow us to easily check how good a job it can do estimating the
function directly from overlapping images in a pano build. It's also a
decent way to provide for the vast majority of people who don't want
to or can't take flat-fields (assuming the direct estimation from
overlap doesn't pan out). Reducing the correction to a handful of
parameters will allow motivated people to take and measure flats, and
share the resulting parameters with less motivated people, much as the
other lens calibrations are shared.
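Fitting the same polynomial to a measured flat is a simple linear
least-squares problem; roughly (sketch only, given a flat image and a
per-pixel radius map like the one above):

    import numpy as np

    def fit_flat(flat, r):
        # Linear least squares for I(r) = k*(1 + a r^2 + b r^4 + c r^6)
        # given a flat-field image and the per-pixel radius r.
        A = np.column_stack([np.ones(flat.size), r.ravel()**2,
                             r.ravel()**4, r.ravel()**6])
        coeffs, *_ = np.linalg.lstsq(A, flat.ravel(), rcond=None)
        k = coeffs[0]
        return k, coeffs[1:] / k   # normalization and (a, b, c)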
By the way, for astronomical imaging, we typically take 5-10 flat field
images, and median combine or trimmed-average them to produce a very
high signal flat. You might allow people to select a set of images
which would be combined in this way for the internal flat (or we could
just suggest a workflow for doing this externally).
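In numpy terms the combine step is essentially one line (a trimmed mean
or sigma clipping would be an easy refinement):

    import numpy as np

    def combine_flats(flats):
        # flats: list of 2D arrays of the same shape.  The per-pixel
        # median rejects dust, hot pixels and the odd outlier frame.
        return np.median(np.stack(flats), axis=0)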
Since all camera detectors have bad pixels at some level, it actually
makes more sense to me to fit a smooth radial function and use that,
even when you have real flat field images. Otherwise, you'll produce
holes and spikes in your final image (unless you used jpegpixi or some
other bad pixel cleaner). I'd also suggest normalizing the flat image
to the median, rather than the max, to ensure no overall shift in
brightness.
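That is, rather than dividing the flat by its maximum, something like:

    import numpy as np

    def normalize_flat(flat):
        # Dividing by the median keeps the corrected image at the same
        # overall brightness; dividing by the max (which may well be a
        # hot pixel) darkens everything and is less robust.
        return flat / np.median(flat)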
With the division method (which is actually the most physically
appropriate for linear data, as you mention), I suspect limited bit
depth will come into play. Do you plan on up-converting 8-bit images
temporarily to 16-bit or float before performing the correction, or just
warning users that banding may result if they correct vignetting in 8-
bit space?
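In other words, roughly this (a sketch; the exact rounding and clipping
are of course up to you):

    import numpy as np

    def apply_flat_8bit(img8, flat_norm):
        # Work in float, then requantize once at the end; dividing
        # directly in 8-bit space would introduce banding.
        out = img8.astype(np.float32) / flat_norm
        return np.clip(out + 0.5, 0, 255).astype(np.uint8)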
JD