color correction & vignetting [Re: [ptx] Hugin wishlist, RFC]
JD Smith
jdsmith at as.arizona.edu
Sun Feb 8 18:33:11 GMT 2004
On Sat, 2004-02-07 at 08:51, Pablo d'Angelo wrote:
> On Wed, 04 Feb 2004, JD Smith wrote:
>
> > On Wed, 2004-02-04 at 01:30, Pablo d'Angelo wrote:
> >
> > As an astronomer who creates flat-fields all the time, I can tell you
> > that it isn't conceptually difficult, although obtaining a true flat
> > illumination source can be challenging (we typically use a "white spot"
> > illuminated by a special lamp in the dome of the telescope, the bright
> > twilight sky in the evening or morning, or, at wavelengths where the sky
> > is bright enough already, combine large numbers of "science" images by
> > rejecting objects to produce a "sky flat" -- sometimes a combination of
> > all of these).
>
> Yes, that's the difficult part about it: one has to create the flatfields.
> As you noted below, the good thing is that they handle arbitrary light
> falloff.
>
> One thing I was thinking about: wouldn't it be possible to create the
> flatfield from a panorama shot with > 50% overlap, by examining the
> overlapping areas? The drawback is the super-accurate registration
> needed, or one has to restrict the matching to regions with uniform
> color (sky with some trees in it, to allow registration). Hmm, it's
> probably easier to be more careful and shoot the special flatfield images.
This is possible, but very difficult, thanks to distortions. If all
images were perfect rectangular projections, you could pixel-offset and
divide a set of them to recover the flat. Since you'll have to project
to a sphere, divide, and de-project to image space, you'll probably
introduce "ringing" around sharp features like trees, etc. If you took
a series of images of a more uniform source, like the sky, you might be
able to overcome this ringing, but then how would you align them in the
first place? You could also register, then filter with a low-pass and
then compare. Or use a high-pass filter to ignore sharp features in the
comparison.
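The register-then-low-pass idea could be sketched roughly like this. This is only a toy illustration (the function names and block size are my own), assuming two already-registered, same-size overlap crops in linear intensity:

```python
import numpy as np

def block_mean(img, bs):
    """Coarse low-pass: average over bs x bs blocks (assumes the image
    dimensions are exact multiples of bs)."""
    h, w = img.shape
    return img.reshape(h // bs, bs, w // bs, bs).mean(axis=(1, 3))

def flat_ratio(overlap_a, overlap_b, bs=8):
    """Compare low-pass versions of two registered overlap crops, so that
    sharp features (trees, edges) don't dominate the comparison; the
    ratio then traces the relative flat-field between the two pointings."""
    a = block_mean(overlap_a.astype(float), bs)
    b = block_mean(overlap_b.astype(float), bs)
    return a / np.maximum(b, 1e-6)
```

In practice you'd accumulate many such ratio maps across a panorama and fit a smooth falloff model to them, rather than trust any single pair.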
Much easier, in my opinion, to take a dedicated series of flat-field
photos, for which no alignment is required. Most users would skip this
step, which is fine, as a good seaming algorithm with lots of overlap
can overcome most of the artifacts in non-flatfielded data.
> > Creating a high-quality flat field is usually best achieved with a
> > diffuse (non-spotlight) lighting source on a neutral white screen
> > (poster paper, etc.). Take many (10 or more) images at each f/#
> > setting, and average them together. You might also adjust the lighting
> > somewhat between images to average over any illumination gradients.
> > White balance setting on your camera may also affect the measured flat.
>
> I'll play around with that some time in the future.
Make sure to use very matte (non-glossy) paper and keep the light
ambient (bouncing it off the ceiling, for instance).
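Once the frames exist, the arithmetic is simple: average the stack, normalize, divide. A minimal NumPy sketch (the names are illustrative, and the input frames are assumed to be linear-response data):

```python
import numpy as np

def build_flat(flat_frames):
    """Average many exposures of a uniformly lit matte screen, then
    normalize to unit mean; averaging beats down noise and any
    residual illumination gradients that vary between frames."""
    stack = np.stack([f.astype(float) for f in flat_frames])
    flat = stack.mean(axis=0)
    return flat / flat.mean()

def apply_flat(image, flat):
    """Flat-field correction: divide the (linear) image by the flat,
    removing pixel-to-pixel sensitivity and smooth light falloff."""
    return image.astype(float) / np.maximum(flat, 1e-6)
```

A separate flat would be needed per f/# (and probably per zoom/focus setting), since the falloff pattern changes with the optics.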
> > Once correctly flat fielded, variations in intensity and color are due
> > to differences in true lighting (a cloud covered the sun), exposure
> > times (left camera on auto-exposure), and possibly spatial variations on
> > the detector itself (green increasing from left to right, etc). A paper
> > that Pablo pointed out seems to have the best approach to blending I've
> > encountered, and is worth considering for implementation:
> >
> > http://leibniz.cs.huji.ac.il/tr/acc/2003/HUJI-CSE-LTR-2003-82_blending.pdf
>
> I have found another paper of interest:
>
> "Mapping Colour in Image Stitching Applications":
> http://ivrgwww.epfl.ch/publications/hs03_JVCIR.pdf
>
> It doesn't really take vignetting into account (they assume better cameras
> than I have ;). But it's more concerned with recovering the transfer function
> of the camera, a bit similar to HDR stuff we briefly mentioned earlier.
This one seems to cover the same area that Brown & Lowe briefly mention
as "Multi-Band blending". I've recently been thinking about the
standard panorama procedure we've all learned: set white balance and
exposure lock, pick the mid-to-brightest field, and lock settings
there. The problem is that this fails to capture the full dynamic range
present in the scene. Fixing the white balance and letting the exposure
auto-select might come closer. Post-blending to produce a
natural-looking scene can then yield a larger dynamic range than any
individual image offers.
Anyway, reading through this paper, I'm realizing that my flat-fielding
procedure might be complicated by the post-processing of the linear
detector response. In astronomy, achieving a linear response to
incident flux is the name of the game: not so in photography, where all
sorts of tricks are employed to manipulate the raw, linear-response data
into a faithful reproduction of a given scene. I think Helmut made a
reference to this "gamma correction" issue when describing his blending
vis-à-vis vignetting. It's likely you need to work with what they
call "relative scene-referred" images (i.e. de-gamma'd,
de-white-balanced).
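If the camera output happens to be sRGB-encoded (an assumption; the actual transfer curve varies by camera and settings), undoing the gamma to get back to linear light would look something like:

```python
import numpy as np

def srgb_to_linear(v):
    """Invert the sRGB transfer curve (one common flavor of "gamma"),
    mapping display-referred values in [0,1] back to linear light,
    which is what flat-field division and vignetting correction assume.
    Uses the standard piecewise form: a linear toe below 0.04045 and a
    2.4-exponent power law above it."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)
```

Any falloff or flat-field division done on the gamma-encoded values directly would over- or under-correct, since the encoding is nonlinear.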
What they refer to as vignetting would more correctly be called "light
falloff", or "natural vignetting", which is an unavoidable consequence
of geometry and projection: viewed from an angle, the entrance aperture
appears smaller, so less light gets through, introducing a cos(theta)
loss. Also, the mapping of angle to area in a rectilinear projection
goes as cos(theta)^3 (i.e. the edges of the projection get stretched
over a larger area). Hence the cos(theta)^4 falloff you'll often hear
quoted as the vignetting function, which is completely independent of
the aperture size. Real vignetting involves obstructions, like the
lens barrel, giving more loss of light than the "natural" amount.
"natural" amount. I would think substituting their analytical V(p)
correction with an empirical one would be easy enough. Then again, some
of this can be accounted for in their OECF fitting, given enough freedom
and overlap.
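The cos(theta)^4 law itself is trivial to evaluate, which is why an analytical V(p) is tempting; for instance:

```python
import numpy as np

def natural_falloff(theta):
    """cos(theta)^4 "natural vignetting": one cos factor from the
    foreshortened entrance aperture, and cos^3 from the rectilinear
    angle-to-area mapping; note it is independent of aperture size."""
    return np.cos(theta) ** 4

# At 30 degrees off-axis, illumination drops to cos(30 deg)^4 = 9/16,
# i.e. about 56% of the on-axis value -- already most of a stop.
```

An empirical correction would replace this curve with a measured one, which also soaks up the barrel-obstruction ("real") vignetting the analytical form misses.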
Anyway, looks very interesting, if somewhat tricky to implement.
JD