[ptx] Autopano-sift -> Hugin -> Enblend Workflow

Mike Runge mike at trozzreaxxion.net
Fri Jun 25 11:47:11 BST 2004


Hi Ian,

On 6/25/2004, "Ian Sydenham" <ian_sydenham at hotmail.com> wrote:

>Mike Runge wrote:
>> It would not be very complicated to fix that broken syntax with a small
>> script.
>Do you have a windows script to do something like that? I guess if you
>know what to do it is simple, but I'm not sure how.

I actually don't have a script - I would do that with awk or perl.
As I understand it, it should just walk through the image section line by
line (except the first line), check whether the v parameter contains a "="
or not, and replace e.g. v12.3455 by v=0. And do the same for the other
lens parameters in the same line.
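A rough sketch of such a filter in shell/awk (untested; it assumes the usual pto image-line format and links the lens parameters a, b, c, d, e and v of every image line except the first to image 0):

```shell
#!/bin/sh
# Pipe a .pto file through this filter: on every image line ("i ...")
# except the first one, a literal lens value like "v12.3455" is replaced
# by a link to image 0 ("v=0"); same for the a, b, c, d and e parameters.
fix_lens_links() {
  awk '{
    if ($0 ~ /^i /) {
      if (seen++) {                    # leave the first image line alone
        n = split("a b c d e v", par, " ")
        for (k = 1; k <= n; k++)
          gsub(" " par[k] "-?[0-9][0-9.]*", " " par[k] "=0")
      }
    }
    print
  }'
}

# usage: fix_lens_links < project.pto > fixed.pto
```

Caveat: the pattern is naive - it would also touch something like " a1" inside a quoted filename, so check the result before optimising.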

>> My current process for multirows of full panos is to first complete
>> the middle row and then load the images for an additional row. The last
>> of these additional images will have explicit lens settings - not just
>> linked to the first image. Maybe this happens because I optimised the v
>> parameter!? Unfortunately there seems to be no way to tell hugin via the
>> ui to link all the images - you need to edit the pto-file.
>Why do you do each of the rows of images separately? I tried this a
>few times, but now I just use autopano(-sift) to create a "full"
>project file with all the images and work on that.

I haven't managed to run autopano-sift so far, and I wasn't successful with
autopano (Alexandre Jenny) either. I tried several times, but most of the
time I failed to include all images without getting too many bogus points.
I will try the recommended parameters I got from the list yesterday. In the
few cases where autopano did collect all of my images I wasn't able to
optimise the created pto-file to a usable result.

I have problems optimising a multirow pano with hugin from scratch. It
seems there is no way to optimise just a single row while control points to
other rows exist and those other rows are still at yaw/pitch/roll=0. The
good old PTOpenGUI had toggles to select which control points should be
used - that was very helpful. If I start by optimising the full set of
images (up to 40) directly, I don't get the good results I can reach
working row by row. To clarify what I'm doing:
- I start with the middle row and optimise it completely.
- Then I add an additional row and optimise just the new images.
- And so on for further rows.
- At the very end I optimise all images together.
I usually use 3-5 control point pairs for each pair of images in the middle
row, 3-5 for each pair between row 0 and row 1, and only 2-3 for each pair
in the additional rows. This is because the auto estimate works fine within
the middle row and between the first rows, but not between rows that were
shot with the camera tilted.
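For what it's worth, in PTOptimizer-script terms the "optimise just the new images" step boils down to listing only the new images' variables in the v-lines, roughly like this (image numbers invented; images 0-4 would be the already-solved middle row):

```
# optimise only the new row (images 5-9); yaw/pitch/roll of 0-4 stay fixed
v y5 p5 r5
v y6 p6 r6
v y7 p7 r7
v y8 p8 r8
v y9 p9 r9
v
```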

>> I thought about using nona to get the mlayer-tiff, open it in gimp, do
>> all the required edits there, and then export just the touched layers as
>> single tiffs. The plan was to run enblend with the mlayer-tiff AND the
>> edited single tiffs in one go. But I haven't found a method to remove
>> specific layers (the touched ones, which would otherwise be in twice)
>> from the mlayer-tiff. convert (ImageMagick) can remove specific layers
>> (write all except specific ones into a new file), but unfortunately
>> convert loses the offset positions. :-(
>That sounds like a good workflow. If you find a way to do it I'd love
>to know too. It is strange that it is possible to open a m-layer tiff
>in Gimp, but not possible to save it as a mlayer-tiff (but you can as
>an m-layer xcf). I expect that the best chance of being able to do
>this is for Gimp to get the ability to write multiple layer tiffs, or
>ImageMagick to be updated so that it does not lose the offsets when
>converting.

Here's an alternative so far:
- run nona
- run enblend_patched
- open the mlayer-tiff in gimp
- modify some layers or create new ones.
- scale the modified layers to the full image size
- export the modified layers as single tiff files
- open the blended tiff in gimp and modify it (e.g. make transparent the
areas that should be covered by the new layers)
- save the blended image as single tiff
- run enblend again over the modified blended image and the newly exported
layers.
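As a command-line sketch the two passes might look roughly like this (file names invented; this assumes a nona build with the TIFF_multilayer output mode and the patched enblend that accepts a multilayer tiff):

```
# 1st pass: remap everything into one multilayer tiff, then blend it
nona -m TIFF_multilayer -o layers.tif project.pto
enblend -o blended.tif layers.tif

# gimp work: retouch layers in layers.tif, export each edited layer at
# full pano size as fixed_NN.tif; erase the matching areas in blended.tif
# and save it as blended_edited.tif

# 2nd pass: blend the exported layers over the edited first-pass result
enblend -o final.tif blended_edited.tif fixed_01.tif fixed_02.tif
```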

I have illustrated that workflow a bit here:
http://www.trozzreaxxion.net/ut/gallery/hugin_nona_enblend_workflow

PROS:
- fast nona stitcher
- less files
- enblend is faster as well (not measured, just a feeling)
- direct visualisation of the areas that need retouching

CONS:
- requires 2 runs of enblend
- not as straightforward as the PTStitcher-multiple-tiff workflow; it's
complicated to see what needs to be done in which step.

BTW:
Does somebody know for sure that enblend does NOTHING to areas where there
is no overlap?

best, mike

