[ptx] Enblend Multiresolution Spline Blender
Pablo d'Angelo
pablo.dangelo at web.de
Fri Mar 12 07:39:25 GMT 2004
On Thu, 11 Mar 2004, Andrew C Mihal wrote:
> On Thu, 11 Mar 2004, Pablo d'Angelo wrote:
>
> > While I haven't looked at the algorithm in detail, it mainly seems to do
> > two passes over the whole image, so it should be pretty fast.
>
> I looked at the source too - the theory of the algorithm is not documented
> but it looks similar to what I had in mind. Another thing I wanted to play
> with is weighted nearest feature - where I can assign a weight to
> stitching discrepancies in the overlap region and have the transition line
> try to avoid those locations.
Yes, and if an automatic system doesn't work out, we can even let the user
define this easily, either by editing the mask directly, or maybe even from
inside hugin, by letting them mark the areas that shouldn't appear in the pano.
> > Ok. I'll start to write a faster mask creation algorithm and integrate that
> > into nona.
>
> Ok. Let me know what I can do to help make the enblend code portable into
> hugin.
The most important thing would be processing regions of interest (ROIs)
instead of complete images. While it shouldn't be very difficult, it probably
needs loop index changes throughout the whole program.
Then it would be possible to use the multilayer TIFFs from nona easily.
This would enable people to blend the images individually. The memory
requirement would then be the pyramids of the main image (the panorama) and
of the image to be blended, plus the associated image, I guess.
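To illustrate the ROI idea: only the intersection of the panorama's and the
current image's bounding boxes needs any blending work, so the pixel loops
can run over that intersection instead of the full canvas. A minimal sketch
(the `ROI` type and function names here are illustrative, not enblend's
actual data structures):

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical ROI type; half-open rectangle [x0, x1) x [y0, y1).
struct ROI {
    int x0, y0, x1, y1;
    bool empty() const { return x0 >= x1 || y0 >= y1; }
};

// Intersection of two ROIs: only this area needs blending work.
ROI intersect(const ROI& a, const ROI& b) {
    return { std::max(a.x0, b.x0), std::max(a.y0, b.y0),
             std::min(a.x1, b.x1), std::min(a.y1, b.y1) };
}

// Pixel loops run over the overlap only, not the full panorama.
template <typename F>
void forEachPixel(const ROI& r, F f) {
    for (int y = r.y0; y < r.y1; ++y)
        for (int x = r.x0; x < r.x1; ++x)
            f(x, y);
}
```

Changing the existing per-pixel loops to take such bounds is mostly
mechanical, which matches the "loop index changes throughout the whole
program" above.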
Then the main loop would look like this (after a preprocessing step that
determines a suitable image order, for example a breadth-first traversal of
the connection graph):
initialize pano with first image + mask + pyramids
for each remaining image:
    // create the mask for blending
    createMask(panoMask, image);
    // build the Laplacian pyramid for the current image
    buildPyramids(image, imageLP);
    blend(panoLP, imageLP);
    // update panoMask as well
    collapse(panoPyr);
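The build/blend/collapse steps in the loop above can be sketched on a toy
1-D signal. The function names mirror the pseudocode, but this is only a
minimal illustration of the multiresolution spline idea, not enblend's
actual implementation (which works on 2-D images with proper filter
kernels):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Signal = std::vector<double>;
using Pyramid = std::vector<Signal>;

// Halve resolution by averaging adjacent pairs (length assumed even).
Signal reduce(const Signal& s) {
    Signal r(s.size() / 2);
    for (size_t i = 0; i < r.size(); ++i) r[i] = 0.5 * (s[2 * i] + s[2 * i + 1]);
    return r;
}

// Double resolution by sample duplication.
Signal expand(const Signal& s) {
    Signal r(s.size() * 2);
    for (size_t i = 0; i < s.size(); ++i) r[2 * i] = r[2 * i + 1] = s[i];
    return r;
}

// Laplacian pyramid: per-level differences plus the coarsest Gaussian level.
Pyramid buildPyramids(Signal s, int levels) {
    Pyramid pyr;
    for (int l = 0; l < levels; ++l) {
        Signal down = reduce(s), up = expand(down), lap(s.size());
        for (size_t i = 0; i < s.size(); ++i) lap[i] = s[i] - up[i];
        pyr.push_back(lap);
        s = down;
    }
    pyr.push_back(s);  // coarsest Gaussian level
    return pyr;
}

// Blend two pyramids level by level; the mask (weight of `a`) is itself
// reduced to a Gaussian pyramid so transitions widen at coarse levels.
Pyramid blend(const Pyramid& a, const Pyramid& b, Signal mask) {
    Pyramid out;
    for (const Signal& la : a) {
        const Signal& lb = b[out.size()];
        Signal o(la.size());
        for (size_t i = 0; i < o.size(); ++i)
            o[i] = mask[i] * la[i] + (1.0 - mask[i]) * lb[i];
        out.push_back(o);
        if (mask.size() > 1) mask = reduce(mask);
    }
    return out;
}

// Collapse: start from the coarsest level, expand and add differences.
Signal collapse(const Pyramid& pyr) {
    Signal s = pyr.back();
    for (int l = static_cast<int>(pyr.size()) - 2; l >= 0; --l) {
        Signal up = expand(s);
        for (size_t i = 0; i < up.size(); ++i) up[i] += pyr[l][i];
        s = up;
    }
    return s;
}
```

With these definitions, collapse(buildPyramids(s, n)) reconstructs s
exactly, since each Laplacian level stores precisely the detail lost by
reduce/expand at that level.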
I hope it is possible to move the collapse out of the loop.
Otherwise the pyramids for the pano would have to be rebuilt in every
iteration, which is slow and will probably introduce some error (thanks to
your assemble steps, this happens only once or twice in your implementation).
For big panos, with many input images, the memory for a second full size
representation wouldn't be needed anymore.
For really big panos, one could keep the pano* data on disk (for example
with the tile interface of libtiff, I think) and only load the part that is
needed for blending the current image.
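The idea of keeping the pano data on disk and loading only what the current
blend touches could be sketched as a demand-loading tile cache. libtiff's
tile interface (TIFFReadTile and friends) would supply the actual I/O; the
loader callback and class below are illustrative stand-ins, not enblend
code:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <utility>
#include <vector>

// Demand-loading tile cache: only tiles touched by the current blend ROI
// are ever read from disk.  The Loader callback stands in for real I/O
// such as libtiff's TIFFReadTile().
class TileCache {
public:
    using Tile = std::vector<float>;
    using Loader = std::function<Tile(int tx, int ty)>;

    TileCache(int tileSize, Loader loader)
        : tileSize_(tileSize), loader_(std::move(loader)) {}

    // Fetch the tile containing pixel (x, y), loading it on first use.
    const Tile& tileFor(int x, int y) {
        std::pair<int, int> key{x / tileSize_, y / tileSize_};
        auto it = cache_.find(key);
        if (it == cache_.end())
            it = cache_.emplace(key, loader_(key.first, key.second)).first;
        return it->second;
    }

    size_t tilesLoaded() const { return cache_.size(); }

private:
    int tileSize_;
    Loader loader_;
    std::map<std::pair<int, int>, Tile> cache_;
};
```

A blend over a small overlap region would then pull in only the handful of
tiles that region intersects, instead of a second full-size copy of the
panorama.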
Actually, I have scrapped the plans of using stitching information. If a
general algorithm can be built on the fast distance transform, it isn't
worth the effort.
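For reference, the kind of fast distance transform meant here can be done
in two sweeps over the image: a forward pass propagating distances from the
top-left and a backward pass from the bottom-right. A minimal sketch using
city-block (chamfer) distances, assuming a row-major 0/1 feature mask (this
is the general technique, not necessarily the exact variant enblend would
use):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Two-pass chamfer distance transform: distance of every pixel to the
// nearest "feature" (mask != 0) pixel, using city-block distance.
std::vector<int> distanceTransform(const std::vector<int>& mask, int w, int h) {
    const int INF = w + h + 1;  // larger than any possible distance
    std::vector<int> d(mask.size());
    // Forward pass: top-left to bottom-right.
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int i = y * w + x;
            d[i] = mask[i] ? 0 : INF;
            if (x > 0) d[i] = std::min(d[i], d[i - 1] + 1);
            if (y > 0) d[i] = std::min(d[i], d[i - w] + 1);
        }
    // Backward pass: bottom-right to top-left.
    for (int y = h - 1; y >= 0; --y)
        for (int x = w - 1; x >= 0; --x) {
            int i = y * w + x;
            if (x + 1 < w) d[i] = std::min(d[i], d[i + 1] + 1);
            if (y + 1 < h) d[i] = std::min(d[i], d[i + w] + 1);
        }
    return d;
}
```

A blending mask can then assign each overlap pixel to whichever image's
interior it is closest to, which is the nearest-feature idea discussed
above.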
ciao
Pablo