PanoTools mailing list archive

Mailing list: PanoTools NG
Sender: Bernhard Vogl
Date/Time: 2007-Dec-30 17:46:57
Subject: Re: Re: New hardware testing on PTgui performance

Hello Helmut,

Thanks for pointing out the important fact that the limiting factors in 
large panorama rendering are mostly software- and not hardware-related.
I have been trying to point this out for several months now, and I am 
relieved to see it mentioned by you.

Here are just my 2 cents:

> As has been mentioned in this thread, IO related bottlenecks
> are due to the use of uncompressed fileformats, if not for
> the final then for intermediate tempfiles. A request to 
> the authors of frontends to my pano-libraries (ptgui, nona,...): 
> please insert the line
>
> TIFFSetField(tif, TIFFTAG_COMPRESSION, COMPRESSION_PACKBITS );
>   
Nona can actually do that already; e.g.
n"TIFF_m c:DEFLATE r:CROP"
will write the warped files as cropped TIFFs with DEFLATE compression. 
(ZIP and DEFLATE are possible, but no PackBits, so there will actually be 
a certain cost in CPU power.)
Those images can be fed directly to Smartblend.
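For reference, that option goes into the output arguments of the project 
file's p-line; a hypothetical example (image size and projection made up):

```
p f2 w12000 h6000 v360 n"TIFF_m c:DEFLATE r:CROP"
```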
> [...]
> However, there is no need for all this in a standalone stitcher.
> No random access to either source nor target image is required,
> and only a small fraction of the target panorama actually needs to
> be rendered for each image. Therefor, image processing can
> be done in a streaming fashion, with minimal RAM requirements.
> At most one source image should fit into memory, and even that
> can be relaxed without significant penalty. I am currently
> looking into this and will post some results for a reworked
> fast pano-library hopefully soon. 
>
>   
However, from a practical point of view, warping the images and writing 
them to temp files is not the most time-consuming part of the process. 
(Though the badly written temp files are a problem in the blending 
step.) Warping can be sped up almost linearly by adding more CPU power 
and a higher number of cores (with nona and the PTGui stitcher).
But: most of the time is needed to blend the images. This is certainly 
caused by the software authors' assumption that the whole panorama 
image can be accessed without extensive computational cost (in terms of 
hard disk access and memory needs). I am not sure whether this problem 
is inherent to the blending process; however, all blenders (Enblend, 
PTGui, Smartblend) scale badly (only one CPU, "random access" within 
the image)...

Just to give some numbers, I compared two methods with PTGui:
- rendering a large panorama as one large image took 2h 38min; warping 
(and temp file writing) took only 30min, the rest was blending time
- rendering the same panorama in 100 parts (slices) took 1h 25min
It is easy to see that the blender's computational cost rises 
dramatically once current blending algorithms are overstrained with 
large images.

I have added this "sliced rendering" comparison to the timing chart:
http://hdview.at/speedtest/results.html
(I will also add some numbers with nona/enblend soon)

Best regards
Bernhard


-- 
<*> Wiki: http://wiki.panotools.org
<*> User Guidelines: http://wiki.panotools.org/User_Guidelines
<*> Nabble (Web) http://www.nabble.com/PanoToolsNG-f15658.html
<*> NG Member Map http://www.panomaps.com/ng
<*> Moderators/List Admins: #removed# 
 
