[ptx] Re: Brightness/colour correction in pano12 and nona
JD Smith
jdsmith at as.arizona.edu
Tue Nov 22 06:12:32 GMT 2005
On Nov 21, 2005, at 5:09 PM, Pablo d'Angelo wrote:
> Rik Littlefield wrote:
>
>> Bruno Postle wrote:
>>
>>
>>> Actually, if you assume that two overlapping images have the same radial falloff, you should be able to infer this radial falloff (and possibly the mapping-to-linear curve as well) with just the difference in brightness for each pixel pair and the relative distances to their photo centres.
>>>
>>> This idea appeared on the panotools list a year ago:
>>>
>>> http://thread.gmane.org/gmane.comp.graphics.panotools/26390
>>>
>>> ...but this should be unnecessary if you are working with linear RAW data in the first place: it should be possible to simply apply a cosine-rule correction to each pixel, based on the (known) angle of view of the photo. This would only work with rectilinear images.
>>>
>>>
>>>
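(For reference, the idealized form of that cosine-rule correction is presumably the cos^4 law for a rectilinear lens: a pixel at distance r from the principal point, with focal length F in the same units, sees the optical axis at angle theta = atan(r/F), and its natural illumination falloff is

    f(theta) = cos(theta)^4

so you would divide each linear pixel value by cos^4(atan(r/F)). Real lenses only approximate this.)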
>> Sure, working with linear RGB makes things a *lot* easier. It's kind of an extreme example of "knowing the actual...gradation curve", as I wrote.
>>
>> But I'll bet that a user with linear RGB doesn't need color correction between frames in the first place.
>>
>> Inferring radial falloff from overlapping images is an attractive idea, but I am not convinced it is robust.
>>
>
> I have done a quick-and-dirty Matlab implementation of the algorithm mentioned in last year's thread. I have just played around with it a bit, and I might post some results later. The first results are not very robust, probably due to misregistration, movement in the scene, and the nonlinear camera response. It is indeed pixel based. Here is what I did:
>
> 1. calculate a ratio image
>
> Z(x,y) = I1(x,y)/I2(x,y)   [1]
>
> for all pixel pairs (x,y) in the overlap.
>
> On the original website, the assumption was that each intensity measured by the camera is given by
>
> I1(x,y) = L(x,y)*f(r1)   [2]
>
> where L(x,y) is the true intensity (irradiance) at (x,y) and f(r) is a vignetting function of the distance r between (x,y) and the principal point (image center) of the image.
>
> Combining [1] and [2] (for registered pixel pairs the true irradiance L cancels) leads to:
>
> Z(x,y) = f(r1)/f(r2)
>
> One can then fit the function f(r1)/f(r2) to the calculated Z values using a nonlinear least-squares fit. I used the Matlab implementation of the Levenberg-Marquardt algorithm.
>
> Even when removing implausible points (I1 > I2 when r1 > r2), this is not very robust, but it seems to work for simple scenes (uniform or slowly changing objects). Perhaps a robust estimation technique would give better results.
Glad there has been some renewed interest in my idea. I originally proposed it in the context of Enblend. Presumably the estimated radial function f(r) would be relatively smooth, so it would be natural to estimate it from one or more of the lower-resolution images in the Enblend multi-resolution pyramid, so that slight misregistration would not overwhelm the minimizer with spurious outliers (which may be what is giving you difficulty). I suppose it would be easy to create a few levels of the multi-resolution pyramid yourself, but ideally the correction would occur in higher bit-depth space. If the Enblend algorithm ever migrated into hugin, this could all come together quite naturally.
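For concreteness, the fit Pablo describes might look something like this in Python with SciPy (a sketch only: the even radial polynomial for f(r), the variable names, and registered inputs taken from a low pyramid level are all my assumptions):

    import numpy as np
    from scipy.optimize import least_squares

    def f(r, p):
        # Even radial polynomial vignetting model, normalized so f(0) = 1.
        a, b, c = p
        return 1.0 + a*r**2 + b*r**4 + c*r**6

    def fit_falloff(I1, I2, r1, r2):
        # I1, I2: registered (ideally downsampled) overlap regions.
        # r1, r2: per-pixel distances to each image's principal point,
        #         normalized so r = 1 at the image corner.
        mask = (I1 > 0) & (I2 > 0)
        Z = I1[mask] / I2[mask]
        q1, q2 = r1[mask], r2[mask]

        def residuals(p):
            return f(q1, p) / f(q2, p) - Z

        # Levenberg-Marquardt, starting from "no vignetting".
        return least_squares(residuals, x0=[0.0, 0.0, 0.0], method='lm').x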
> The real drawback, however, is that the camera response R needs to be known as well, because [2] is a simplification: the camera response is applied after the incoming light has been attenuated by the vignetting/light falloff:
>
> I(x,y) = R(L(x,y)*f(r))
Right. This is the same curve that HDRShop and other high-dynamic-range compositors have to estimate. It would be nice to centralize some functionality for calibrating and encoding this information; it's terribly useful for a variety of reasons. Even 1% non-linearity really bothers us astronomers. However, for the most common case of matched exposures and mid-range brightness, formulating the function f(r) in the R(L(x)) space (rather than the ideal linear L(x) space) may be sufficient. The fact that you can use Photoshop to simply divide by some ad hoc radial gradient and vastly improve the results gives me no small amount of hope.
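As a rough argument for why that can work (my own aside, for the idealized case of a pure power-law response R(I) = I^g): combining with [1] and [2] above gives

    Z(x,y) = R(L*f(r1))/R(L*f(r2)) = (f(r1)/f(r2))^g

so the ratio image is still a purely radial function, and the fit simply recovers f(r)^g instead of f(r). Dividing by that fitted falloff in response space then corrects such images exactly; real response curves are only approximately power laws, so this is a plausibility argument, not a proof.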
> Actually, I saw a nice poster at the ICCV conference which estimates the vignetting, response function, relative irradiance and exposure difference between the overlapping images, all from the overlap.
Sounds interesting. My proposed method would estimate exposure
differences between the two as well. If a cloud passes during the
panorama sequence, this could obviously complicate things.
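(Concretely, in the linear model above an exposure difference enters the same fit as a single extra parameter: if I1 = k*L*f(r1) and I2 = L*f(r2), then

    Z(x,y) = k * f(r1)/f(r2)

with k the global exposure ratio. A passing cloud breaks the assumption that k is constant across the overlap, which is where it gets ugly.)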
> http://grail.cs.washington.edu/projects/vignette/
>
> The author claims that it works, and even provides the software. However, it is written in Python and is VERY slow. I haven't tested it on my images yet.
>
> The algorithm alternates between two nonlinear steps until convergence:
>
> 1. estimation of the vignetting function, camera response and exposure difference
> 2. estimation of the scene irradiance.
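A minimal two-image version of that alternation is easy to sketch (my own simplification in Python: I assume a linear response, so step 1 fits only the vignetting polynomial and an exposure ratio, not the response curve the paper also estimates):

    import numpy as np
    from scipy.optimize import least_squares

    def f(r, p):
        # Same even radial polynomial as above, f(0) = 1.
        a, b, c = p
        return 1.0 + a*r**2 + b*r**4 + c*r**6

    def alternate_fit(I1, I2, r1, r2, n_iter=10):
        p = np.zeros(3)        # vignetting coefficients
        k = 1.0                # exposure ratio between the frames
        L = 0.5 * (I1 + I2)    # initial guess at the scene irradiance
        for _ in range(n_iter):
            # Step 1: hold L fixed, refit vignetting and exposure.
            def resid(q):
                return np.concatenate([
                    (q[3] * L * f(r1, q[:3]) - I1).ravel(),
                    (L * f(r2, q[:3]) - I2).ravel()])
            sol = least_squares(resid, x0=[*p, k]).x
            p, k = sol[:3], sol[3]
            # Step 2: hold the camera model fixed, re-estimate the
            # irradiance as the mean of the two corrected images.
            L = 0.5 * (I1 / (k * f(r1, p)) + I2 / f(r2, p))
        return p, k, L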
>
>
>
>> I am afraid that for really critical work, there may be no substitute
>> for explicitly calibrating radial falloff with test shots of a
>> fixed target.
>>
>
> How do you use the resulting flatfield image to correct your images?
You divide, after having suitably removed the bias (zero frame) and dark current from both the image and the flat itself. This is, of course, in linear space. Typically, vignetting removes some fraction of the collimated input beam, so the vignetted brightness at a given point should be linear in the input brightness. This is of course an idealization for the very fast optics of cameras.
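In code, with my own variable names, that reduction is just (all frames being linear raw data):

    import numpy as np

    def flatfield_correct(raw, bias, dark, flat, flat_bias, flat_dark):
        # Remove the additive signals from both the image and the flat.
        image = raw - bias - dark
        field = flat - flat_bias - flat_dark
        # Normalize the flat so the correction preserves the overall
        # level, then divide it out.
        field = field / np.mean(field)
        return image / field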
> Do you simply add it (the vignetting-correction plugin of H. Dersch used this approach; see http://article.gmane.org/gmane.comp.graphics.panotools/9554), or do you use a multiplicative correction together with a camera response curve?
>
> I'd like to add a vignetting-correction step to nona, but I haven't decided whether it should be based on a polynomial or a supplied flatfield image (probably both make sense), and how the correction should really be done (additive or multiplicative). However, it's not practical to depend on the camera response curve just for vignetting correction; most people will not go through the hassle of creating a good one, I guess.
Exactly. This is the reason I suggested that a rough radial estimation might simply solve many of the light-falloff issues people still have, even after using Enblend and the other blenders.

Since the function f(r) should be constant for a given lens/camera/aperture combination, it should be given a status similar to that of other "calibration" constants such as the field of view and the distortion variables. Ideally, it should be possible to keep refining all of the calibration data for a lens by averaging in new results to improve the old ones: e.g., start with one panorama and estimate a set of calibration constants from it; on the next, generate a new set and average it with the first; and so on. I suspect such a procedure would quickly reveal an optimal type of panorama for calibration, similar to the grids and wire meshes currently recommended for the spatial parameters: probably something uniformly bright, with some identifiable marks for control points. It is also likely that a larger set of measurements would be required to "converge" on the function f(r) than for the other constants. Using the same radial polynomial used for the spatial distortions (a, b, c, ...), but perhaps with an adjustable center, would seem sufficient.
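Applying such a multiplicative correction in nona would then be trivial (a sketch using the same polynomial form as above; the adjustable center is expressed as a fraction of the image size, and the coefficient names are mine, not pano12's):

    import numpy as np

    def correct_vignetting(img, coeffs, center=(0.5, 0.5)):
        # Divide out a radial falloff f(r) = 1 + a*r^2 + b*r^4 + c*r^6,
        # with r normalized to 1 at the corner farthest from the center.
        a, b, c = coeffs
        h, w = img.shape[:2]
        y, x = np.mgrid[0:h, 0:w]
        dx, dy = x - center[0]*w, y - center[1]*h
        r = np.hypot(dx, dy)
        r = r / r.max()
        falloff = 1.0 + a*r**2 + b*r**4 + c*r**6
        if img.ndim == 3:
            falloff = falloff[..., None]   # broadcast over color channels
        return img / falloff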
Good luck!
JD