color correction & vignetting [Re: [ptx] Hugin wishlist, RFC]

Pablo d'Angelo pablo at mathematik.uni-ulm.de
Sun Feb 8 20:50:30 GMT 2004


Hi all,

> If you took a series of images of a more uniform source, like the sky,
> you might be able to overcome this ringing, but then how would you
> align them in the first place?  You could also register, then filter
> with a low-pass and then compare.  Or use a high-pass filter to ignore
> sharp features in the comparison.

Yes, I was thinking of something like that, but it's probably easier
to just take the calibration shots.

> Much easier, in my opinion, to take a series of photos for which
> alignment is not required to create a flat.  Most users would skip this
> step, which is fine, as a good seaming algorithm with lots of overlap
> can overcome most of the artifacts in non-flatfielded data.

Hmm, probably true. However, I still haven't made up my mind whether we
should use a stitching algorithm that tries to cover up the mismatches,
like the one in "Seamless Image Stitching in the Gradient Domain", or a
more physically oriented approach with estimated camera response
functions, like the one mentioned in

> > "Mapping Colour in Image Stitching Applications":
> > http://ivrgwww.epfl.ch/publications/hs03_JVCIR.pdf

and in the paper from Debevec mentioned earlier.
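
Just to make concrete what that physically oriented approach implies:
before images can be compared or merged, pixel values have to be mapped
back to (relative) scene radiance through the inverse of the estimated
response curve, and divided by a falloff / flat-field model. A minimal
numpy sketch; the gamma-style response and the cos^4 falloff below are
only placeholder assumptions, not what the cited papers estimate:

  import numpy as np

  def linearize(image, inv_response, falloff):
      # image:        (H, W) uint8 grayscale image
      # inv_response: length-256 lookup table, inverse of the estimated
      #               camera response function
      # falloff:      (H, W) relative light falloff (vignetting),
      #               1.0 at the optical center
      radiance = inv_response[image]               # undo the tone curve
      return radiance / np.maximum(falloff, 1e-6)  # undo vignetting

  # placeholder calibration data: gamma-2.2 response, cos^4-style falloff
  inv_response = np.linspace(0.0, 1.0, 256) ** 2.2
  h, w = 480, 640
  yy, xx = np.mgrid[0:h, 0:w]
  r = np.hypot(yy - h / 2, xx - w / 2) / np.hypot(h / 2, w / 2)
  falloff = np.cos(np.arctan(r)) ** 4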

As you mentioned, using images taken with different exposures would be a
really nice step forward, and would simplify quite a few problems with
panoramic photography.

People seem to do that already with HDRShop.
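
For reference, the HDR merging that HDRShop does (and that the Debevec
paper describes) boils down to a per-pixel weighted average of
linearized values divided by exposure time. A rough sketch; the
hat-shaped weighting and the lookup-table form of the inverse response
are my simplifications:

  import numpy as np

  def merge_hdr(images, exposure_times, inv_response):
      # images:         list of (H, W) uint8 images, already registered
      # exposure_times: list of exposure times in seconds
      # inv_response:   length-256 lookup table, inverse camera response
      num = np.zeros(images[0].shape, dtype=np.float64)
      den = np.zeros_like(num)
      for img, t in zip(images, exposure_times):
          # hat weighting: trust mid-tones, distrust clipped pixels
          w = 1.0 - 2.0 * np.abs(img.astype(np.float64) / 255.0 - 0.5)
          num += w * (inv_response[img] / t)  # linearize, normalize by exposure
          den += w
      return num / np.maximum(den, 1e-6)      # relative scene radiance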

Essentially this would lead to the following workflow:

1. calibrate the camera (geometric distortion, optical transfer
   function, and light falloff).
2. capture the panoramic images with different exposures.
3. register the images (both spatially and in exposure), using the
   calibration data acquired in step 1 together with acquisition
   parameters stored in the EXIF data, or supplied by the user.
4. blend the images (probably easier than before, because the primary
   source of registration errors will be scene changes).
5. save an HDR image, for viewing with an HDR viewer. Additionally, a
   method to save a dynamically compressed image for normal viewing is
   needed. This effectively re-applies a similar "magic" to the one the
   camera performed while capturing the scene, which we undid for the
   registration step ;) (see the small sketch below)
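
The "small sketch" referred to in step 5: a tiny global tone mapping
operator in the spirit of Reinhard's L/(1+L), just to illustrate what
"dynamic compression" could mean here. The operator and the gamma value
are one possible choice, not a proposal for how hugin should do it:

  import numpy as np

  def tonemap_global(radiance, gamma=2.2):
      # radiance: (H, W) float array of relative scene radiance
      # returns an 8-bit displayable image
      l = radiance / (radiance.mean() + 1e-6)  # normalize around middle grey
      compressed = l / (1.0 + l)               # global dynamic compression
      encoded = compressed ** (1.0 / gamma)    # display gamma
      return np.clip(encoded * 255.0, 0.0, 255.0).astype(np.uint8)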

Integrating everything into a nice workflow that makes this process as
simple as loading the images and letting the program work on them would
be really nice, and beyond the capabilities of most other solutions for
panoramic imaging :)

It is, however, quite an ambitious goal, which needs in-depth knowledge
of all the factors involved.

On the other hand, I'm quite sure that step 1 is hard to do for the
general case, and the typical user will probably not spend a great deal
of time on it, so it needs to be a largely automatic process.
Ideally, no calibration would be needed, and all estimation should be
done with minimum user effort in step 3. *dream*

I'm also always a bit sceptical when I read papers and articles. Often
I fail to recognise the (sometimes hidden) drawbacks of the algorithms
for our real-world scenarios :)

Anyway, we will always need a good blending algorithm, since there will
always be misregistrations, whether we are working on HDR images or
not. So it might be a good idea to work on that after the spatial
registration has been made simpler by the SIFT feature detector step.
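
As a baseline, even simple distance-weighted feathering hides small
misregistrations reasonably well, and is a useful reference point for
anything fancier like the gradient-domain method. A sketch; the
distance-transform weighting (via scipy) is just one common choice:

  import numpy as np
  from scipy.ndimage import distance_transform_edt

  def feather_blend(images, masks):
      # images: list of (H, W) float images warped into the panorama frame
      # masks:  list of (H, W) bool arrays, True where an image has data
      acc = np.zeros(images[0].shape, dtype=np.float64)
      wsum = np.zeros_like(acc)
      for img, mask in zip(images, masks):
          w = distance_transform_edt(mask)  # weight grows away from the border
          acc += w * img
          wsum += w
      return acc / np.maximum(wsum, 1e-6)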

ciao
  Pablo
-- 
http://wurm.wohnheim.uni-ulm.de/~redman/
Please use PGP.

