[ptx] Enblend Multiresolution Spline Blender

JD Smith jdsmith at as.arizona.edu
Sun Mar 7 09:54:52 GMT 2004


On Sun, 2004-03-07 at 01:41, Terje Mathisen wrote:
> JD Smith wrote:
> 
> > On Sat, 2004-03-06 at 13:51, Andrew C Mihal wrote:
> >>Check it out. There are some demonstration images on the web page above.
> > 
> > Looks great Andrew. 
> 
> Indeed! A very nice job!
> > 
> > It might be a very good idea to incorporate into or chain enblend to
> > nona, Hugin's fast PTStitcher replacement, which itself has no seaming
> > capabilities.  Then we can let Helmut's libpano do the work it's best
> > at: optimizing image position, projecting coordinates, and interpolating
> > the result, and let enblend handle the seaming, all from the convenient
> > interface of hugin.  
> 
> Yes.
> > 
> > As Ed mentions, this would probably be most effective and efficient on
> > multi-layer TIFFs.  If possible (and it may not be), an option to just
> > update the alpha masks (and image planes?) within the TIFF would also
> > be very useful, for those situations when some final post-processing and
> > hand tweaking to remove duplicates (person walking through scene, etc.)
> > is necessary.
> 
> This might be useful, but it does take away almost all the stuff enblend 
> does, doesn't it?
> 
> I.e. the key idea behind enblend (and the pyramid blender) is to 
> effectively split the two source images into multiple layers, according 
> to detail frequencies. This means that there is no way to convert this
> into just a single blending mask.

I agree.  Obviously this is more complicated than simply formulating a
good set of alpha masks and stacking them (otherwise there wouldn't be
papers on it ;).  What I envision is changing both the mask and the
image data in the file, with some compromise made to deliver one mask
and one image per input image, such that the unmodified alpha-stack
would be exactly equivalent to the result of a full run, while still
preserving some flexibility for fiddling.  Since I don't know the
algorithm well, it's not immediately obvious whether such a useful
factoring is possible, e.g.:

sum(i){newimage[i] * newmask[i]} == sum(i){sum(f){weight[i][f] * image[i][f]}}

where f is the frequency band, and image[i][f] is image i band-passed
(blurred) to frequency f.  I imagine for most regions it should be
quite possible to come up with a useful factoring into newimage and
newmask.  It's certainly trivial to come up with any number of valid
factorings (e.g. set newmask=1): the trick is to make them useful.
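
To make that concrete, here is a minimal NumPy sketch of one valid
factoring (not enblend's actual algorithm; images[i][f], weights[i][f]
and factor_blend are just names I made up for illustration): take
newmask[i] to be the total weight sum(f){weight[i][f]}, and newimage[i]
the weight-normalized sum of the bands, so newimage[i]*newmask[i]
recovers the per-band weighted sum.

import numpy as np

def factor_blend(images, weights, eps=1e-8):
    """Collapse per-frequency weights into one (newimage, newmask)
    pair per input, so that sum(i){newimage[i] * newmask[i]} equals
    sum(i){sum(f){weight[i][f] * image[i][f]}}.

    images[i]  -- list of frequency-band arrays for input i
    weights[i] -- list of per-pixel weight arrays, one per band
    """
    newimages, newmasks = [], []
    for bands, wts in zip(images, weights):
        # Weighted sum over the frequency bands of this input...
        weighted = sum(w * b for w, b in zip(wts, bands))
        # ...and the total weight becomes the collapsed mask.
        mask = sum(wts)
        newimages.append(weighted / np.maximum(mask, eps))
        newmasks.append(mask)
    return newimages, newmasks

Whether a collapsed mask like this stays useful for hand tweaking is
exactly the open question, since editing newmask[i] afterwards no
longer respects the per-frequency transition widths.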

Another option for removing duplicate scene elements, etc., is to
pre-process the alpha masks to exclude certain portions of the input
images before blending.
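
For example (just a sketch with PIL/NumPy; the filename and rectangle
coordinates are made up), one could zero out the alpha channel over the
offending region of one input so the blender never draws that region
from it:

import numpy as np
from PIL import Image

# Load one remapped input as RGBA and knock out a rectangle of alpha,
# e.g. to drop a person walking through the scene from this image.
img = Image.open("input0.tif").convert("RGBA")
rgba = np.array(img)
x0, y0, x1, y1 = 800, 600, 1100, 900   # region to exclude (example)
rgba[y0:y1, x0:x1, 3] = 0              # alpha = 0 -> fully excluded
Image.fromarray(rgba).save("input0_masked.tif")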

JD

