[ptx] Re: Brightness/colour correction in pano12 and nona
Pablo d'Angelo
pablo.dangelo at web.de
Tue Nov 22 00:09:52 GMT 2005
Rik Littlefield wrote:
> Bruno Postle wrote:
>
>>Actually, if you assume that two overlapping images have the same
>>radial falloff you should be able to infer this radial falloff (and
>>possibly the mapping-to-linear curve as well) with just the
>>difference in brightness for each pixel pair and the relative
>>distances to their photo centres.
>>
>>This idea appeared on the panotools list a year ago:
>>
>> http://thread.gmane.org/gmane.comp.graphics.panotools/26390
>>
>>..but this should be unnecessary if you are working with linear RAW
>>data in the first place, it should be possible to simply apply a
>>cosine-rule correction to each pixel based on the (known) angle of
>>view of the photo - this would only work with rectilinear images.
>>
>>
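[For reference, the cosine-rule correction Bruno mentions is presumably the ideal-lens "cosine fourth" law. A quick sketch; the focal length and frame size are made-up example numbers:]

```python
import math

def cos4_falloff(r, focal_length):
    # natural illumination falloff of an ideal rectilinear lens:
    # f(theta) = cos(theta)^4, where tan(theta) = r / focal_length
    theta = math.atan(r / focal_length)
    return math.cos(theta) ** 4

# example: corner of a 36x24 mm frame shot with a 50 mm lens
r_corner = math.hypot(18.0, 12.0)       # distance from centre, ~21.6 mm
falloff = cos4_falloff(r_corner, 50.0)  # ~0.71, i.e. about half a stop darker
```

Since the angle of view is known for a rectilinear image, this correction needs no fitting at all, which is why it only helps for that projection.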
> Sure, working with linear RGB makes things a *lot* easier. It's kind of
> an extreme example of "knowing the actual...gradation curve", as I wrote.
>
> But I'll bet that a user with linear RGB doesn't need color correction
> between frames in the first place.
>
> Inferring radial falloff from overlapping images is an attractive idea,
> but I am not convinced it is robust.
I have done a quick-and-dirty Matlab implementation of the algorithm
mentioned in that thread from last year. I have only played around with
it a bit, and I might post some results later. The first results are not
very robust, probably due to misregistration, movement in the scene and
a nonlinear camera response. It is indeed pixel based. Here is what I did:
1. Calculate a ratio image
Z(x,y) = I1(x,y)/I2(x,y) [1]
for all pixels (x,y) in the overlap region.
On the original website, the assumption was that each intensity measured
by the camera is given by
I1(x,y) = L(x,y)*f(r_1) [2]
where L(x,y) is the true intensity (irradiance) at (x,y) and f(r) is a
vignetting function of the distance r between (x,y) and the principal
point (image center) of the image.
Combining [1] and [2] leads to:
Z(x,y) = f(r_1)/f(r_2)
One can then fit the function f(r_1)/f(r_2) to the calculated Z values
using a nonlinear least squares fit. I used the Matlab implementation of
the Levenberg-Marquardt algorithm.
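A minimal sketch of that fit in Python (my own names, not the original Matlab code; scipy's least_squares with method='lm' stands in for the Levenberg-Marquardt routine, and the even-polynomial falloff model is an assumed parameterisation):

```python
import numpy as np
from scipy.optimize import least_squares

def f(r, a, b):
    # assumed radial falloff model: f(r) = 1 + a*r^2 + b*r^4
    return 1.0 + a * r**2 + b * r**4

def residuals(params, r1, r2, Z):
    # fit f(r_1)/f(r_2) to the ratio image Z, as in [1]
    a, b = params
    return f(r1, a, b) / f(r2, a, b) - Z

# synthetic pixel pairs with a known falloff, no noise
rng = np.random.default_rng(0)
r1 = rng.uniform(0.0, 1.0, 500)   # distances to the centre in image 1
r2 = rng.uniform(0.0, 1.0, 500)   # distances to the centre in image 2
a_true, b_true = -0.3, -0.1
Z = f(r1, a_true, b_true) / f(r2, a_true, b_true)

fit = least_squares(residuals, x0=[0.0, 0.0], args=(r1, r2, Z), method='lm')
# fit.x should recover (a_true, b_true) on this noiseless data
```

On real pixel pairs the residuals are of course far noisier, which is where the robustness problems below come in.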
Even after removing implausible points (I1 > I2 when r1 > r2) this is not
very robust, but it seems to work for simple scenes (uniform or slowly
changing objects). A robust estimation technique might give better
results.
The real drawback, however, is that the camera response R needs to be
known as well, because [2] is a simplification: the camera response
is applied after the incoming light has been attenuated by the
vignetting/light falloff:
I(x,y) = R(L(x,y)*f(r_1))
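A tiny numerical illustration of why this matters (the response curve below is made up, purely for illustration): with a nonlinear R, the ratio I1/I2 no longer equals f(r_1)/f(r_2), and in general it even depends on the scene irradiance L:

```python
import math

def R(u):
    # hypothetical nonlinear response curve, not any real camera's
    return math.log(1.0 + 10.0 * u)

f_r1, f_r2 = 0.9, 0.6    # falloff values at the two radii
ratios = [R(L * f_r1) / R(L * f_r2) for L in (0.2, 0.8)]
# with a linear R both ratios would equal f_r1/f_r2 = 1.5;
# here they differ both from 1.5 and from each other
print(ratios)
```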
Actually, I have seen a nice poster at the ICCV conference, which
estimates the vignetting, response function, relative irradiance and
exposure difference between the overlapping images, all from the overlap:
http://grail.cs.washington.edu/projects/vignette/
The author claims that it works and even provides the software. However,
it is written in Python and is VERY slow. I haven't tested it on my
images yet.
The algorithm alternates between two nonlinear steps until convergence:
1. estimation of the vignetting function, camera response and exposure difference
2. estimation of the scene irradiance.
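A toy version of that alternation, under strong simplifying assumptions of mine (linear response, a one-parameter falloff f(r) = 1 + a*r^2, a single exposure ratio e between the two images; this is emphatically not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
r1, r2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
L_true = rng.uniform(0.1, 1.0, n)       # true scene irradiance
a_true, e_true = -0.4, 1.6              # falloff coefficient, exposure ratio

def f(r, a):
    return 1.0 + a * r**2

I1 = L_true * f(r1, a_true)             # image 1, exposure fixed to 1
I2 = e_true * L_true * f(r2, a_true)    # image 2

def objective(L, a, e):
    return np.sum((I1 - L * f(r1, a))**2 + (I2 - e * L * f(r2, a))**2)

a, e = 0.0, 1.0
L = 0.5 * (I1 + I2)                     # crude initial irradiance
obj0 = objective(L, a, e)
for _ in range(50):
    # step 2: irradiance given parameters (per-pixel least squares)
    L = (I1 * f(r1, a) + e * I2 * f(r2, a)) / (f(r1, a)**2 + (e * f(r2, a))**2)
    # step 1a: exposure ratio given irradiance and falloff
    e = np.sum(I2 * L * f(r2, a)) / np.sum((L * f(r2, a))**2)
    # step 1b: falloff coefficient (the model is linear in a)
    t = np.concatenate([I1 - L, I2 - e * L])
    c = np.concatenate([L * r1**2, e * L * r2**2])
    a = np.sum(t * c) / np.sum(c * c)
obj1 = objective(L, a, e)
# each update solves its subproblem exactly, so the objective can only
# decrease; on this noiseless toy a and e should drift toward the truth
```

The real algorithm additionally has to estimate a nonlinear response curve inside step 1, which is what makes it slow.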
> I am afraid that for really critical work, there may be no substitute
> for explicitly calibrating radial falloff with test shots of a fixed target.
How do you use the resulting flatfield image to correct your images?
Do you simply add it (the vignetting correction plugin by H. Dersch used
this approach, see
http://article.gmane.org/gmane.comp.graphics.panotools/9554),
or do you use a multiplicative correction together with a camera
response curve?
I'd like to add a vignetting correction step to nona, but I haven't
decided whether it should be based on a polynomial or a supplied
flatfield image (probably both make sense), and how the correction
should actually be applied (additively or multiplicatively). However,
it's not practical to depend on the camera response curve just for
vignetting correction. Most people will not go through the hassle of
creating a good one, I guess.
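For what it's worth, the difference between the two styles is easy to see on linear data (made-up numbers; F is a flatfield normalised to 1 at the centre). Dividing by F is exact at any brightness, while a fixed additive offset can only be exact at one brightness level:

```python
import numpy as np

F = np.array([1.0, 0.8, 0.6])          # falloff from centre to corner
results = {}
for L in (0.3, 0.9):                   # two uniform scene brightnesses
    I = L * F                          # observed (linear) pixel values
    results[L] = (I / F,               # multiplicative: recovers L exactly
                  I + 0.9 * (1 - F))   # additive, offset tuned for L = 0.9
# the multiplicative result is flat for both brightnesses; the additive
# one is only flat for L = 0.9 and over-corrects the darker scene
```

On gamma-encoded data neither form is exact, which is where the response-curve question comes back in.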
> [ "linear" RAW ]
> Normally I would not even think about these
> gory details, let alone mention them, but I have recently been
> sensitized by crawling around in the guts of dcraw, talking with its
> author Dave Coffin, and poking into some CRW files produced by my Canon
> Digital Rebel. They are not so clean as I would have thought, and I am
> a bit more skeptical now that the "linear" data coming out of a raw
> converter, really is.
If you are still interested in some more gory details, there is a very
interesting site by a French astrophotographer, Christian Buil:
http://www.astrosurf.org/buil/index.htm
and especially
http://www.astrosurf.org/buil/us/test/test.htm
For example I was quite surprised that the Nikon D70 does some kind of
median filtering on the RAW images before they are written to the card.
ciao
Pablo