[ptx] Re: Brightness/colour correction in pano12 and nona

Rik Littlefield rj.littlefield at computer.org
Mon Nov 21 15:43:53 GMT 2005


Bruno Postle wrote:

>On Sun 20-Nov-2005 at 13:08 -0800, Rik Littlefield wrote:
>
>  
>
>>For example, you can expect less than perfect correction if radial
>>falloff causes a left-to-right gradient in one image but
>>right-to-left in the other.  Radial falloff should be corrected
>>in each image separately, before attempting to correct one image
>>against another.  (And accurately correcting for radial falloff
>>requires knowing the actual light-level-to-pixel-value gradation
>>curve, not just some idealized gamma, but that's another story.)
>>    
>>
>
>Actually, if you assume that two overlapping images have the same
>radial falloff you should be able to infer this radial falloff (and
>possibly the mapping-to-linear curve as well) with just the
>difference in brightness for each pixel pair and the relative
>distances to their photo centres.
>
>This idea appeared on the panotools list a year ago:
>
>  http://thread.gmane.org/gmane.comp.graphics.panotools/26390
>
>..but this should be unnecessary if you are working with linear RAW
>data in the first place, it should be possible to simply apply a
>cosine-rule correction to each pixel based on the (known) angle of
>view of the photo - this would only work with rectilinear images.
>  
>
Sure, working with linear RGB makes things a *lot* easier.  It's kind of 
an extreme example of "knowing the actual...gradation curve", as I wrote. 

But I'll bet that a user with linear RGB doesn't need color correction 
between frames in the first place.

Inferring radial falloff from overlapping images is an attractive idea, 
but I am not convinced it is robust.  The paper linked in the article 
that you reference is no longer posted.  If its method really does rely 
on comparing pixel to pixel, then I am reminded of Helmut's explanation 
of why he developed the histogram matching approach in the first place: 
"Real world images never fit together perfectly, and unavoidable spatial 
errors of one pixel or more between A and B may completely screw the 
optimization." [http://www.all-in-one.ee/~dersch/cbcorrect/cb.html]

Correcting for radial falloff by the cosine rule is also attractive, but it 
is not a cure-all.  There are other causes of radial falloff that do not 
follow the cosine rule, and some of them even change when you stop down.  See 
http://en.wikipedia.org/wiki/Vignetting for discussion.
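
For the ideal rectilinear case the correction itself is easy enough.  A 
minimal sketch, assuming the horizontal angle of view is known and the pixel 
values are already linear (the function name is mine):

    import numpy as np

    def cos4_correct(img, hfov_deg):
        """Undo ideal cos^4 falloff in a rectilinear image, given its
        horizontal angle of view in degrees.  Works on (h, w) or (h, w, 3)
        arrays of linear pixel values."""
        h, w = img.shape[:2]
        f = (w / 2.0) / np.tan(np.radians(hfov_deg) / 2.0)  # focal length, pixels
        y, x = np.indices((h, w), dtype=np.float64)
        r2 = (x - (w - 1) / 2.0) ** 2 + (y - (h - 1) / 2.0) ** 2
        cos_theta = f / np.sqrt(f * f + r2)          # r = f*tan(theta)
        gain = 1.0 / cos_theta ** 4
        return img * gain[..., None] if img.ndim == 3 else img * gain

But this only undoes the cos^4 term; the optical and mechanical vignetting 
described on that Wikipedia page is still there.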

I am afraid that for really critical work, there may be no substitute 
for explicitly calibrating radial falloff with test shots of a fixed target.
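
The calibration itself does not have to be fancy.  Something along these 
lines, assuming a linearized single-channel shot of an evenly lit target, 
gives a gain map you can apply before blending (again, just a sketch):

    import numpy as np

    def falloff_from_flatfield(flat):
        """Fit a radial polynomial to a linearized, single-channel frame of
        an evenly lit target; return a per-pixel gain map that flattens it."""
        h, w = flat.shape
        y, x = np.indices((h, w), dtype=np.float64)
        r = np.hypot(x - (w - 1) / 2.0, y - (h - 1) / 2.0)
        r /= r.max()                                   # normalized radius 0..1
        # model: falloff(r) = c0 + c1*r^2 + c2*r^4 + c3*r^6
        A = np.column_stack([np.ones(r.size), r.ravel()**2,
                             r.ravel()**4, r.ravel()**6])
        c, *_ = np.linalg.lstsq(A, flat.ravel(), rcond=None)
        falloff = (A @ c).reshape(h, w)
        return c[0] / falloff                          # gain, 1.0 at the centre

One fit per channel and per aperture, of course, since the falloff changes 
when you stop down.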

By the way, the reason I wrote "linear RGB" instead of "linear RAW" is 
that the actual raw data appearing in camera files is very far from 
linear -- doubling the light does not double the sensor value, and there 
is only one color, not three, at each sensor position due to the Bayer 
filters.  The conversion from actual raw data to linear RGB requires 
subtracting off the sensor's black pedestal, de-Bayerizing by color 
interpolation, and transforming the colors from whatever the Bayer 
filters give into a standard color space.  Fortunately, raw converters 
do this job fairly well.  Normally I would not even think about these 
gory details, let alone mention them, but I have recently been 
sensitized by crawling around in the guts of dcraw, talking with its 
author Dave Coffin, and poking into some CRW files produced by my Canon 
Digital Rebel.  They are not as clean as I would have thought, and I am 
a bit more skeptical now that the "linear" data coming out of a raw 
converter really is linear.
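
For what it is worth, the pedestal subtraction is the only trivial part of 
that conversion.  A minimal sketch (the black and saturation levels come 
from the camera metadata; dcraw can report them):

    import numpy as np

    def subtract_pedestal(raw, black_level, white_level):
        """Subtract the sensor's black pedestal and rescale to 0..1.  This is
        only step one: de-Bayering and the transform into a standard colour
        space still have to follow before the data deserves the name
        'linear RGB'."""
        lin = (raw.astype(np.float64) - black_level) / float(white_level - black_level)
        return np.clip(lin, 0.0, 1.0)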

Dang, I wish the world were not such a complicated place!  ;-)

--Rik
