
Introducing Agisoft to a Studio Pipeline


davidw:
Hello,

After a few successful attempts at photogrammetry with a variety of software packages, we are considering introducing photogrammetry into our studio pipeline for some product and prop 3D models. However, I need to do further research before I can decide whether it is worth adopting and what investment it would require.

As such, I have a few questions, and if anyone could point me in the right direction that would be great (I will also be researching outside of this thread). I will update this thread myself as I go, for others who may stumble across it and need the same information. For some of these questions I already have experience of my own to add, but it's always helpful to gather more from experienced people. :)

Q1 - In a rig of several cameras, how does Agisoft deal with different camera models? Would this be an issue, and what if the cameras have different resolutions?

Q2 - I understand that ideally the cameras should revolve around the asset rather than vice versa, a lazy Susan being the alternative. Does anyone have suggestions for a rig that could move the cameras around the asset? All we seem to find are rigs designed for capturing people, and we are hoping for one that is adaptable to different-sized assets.

Q3 - In tests I've produced on a variety of assets, the number of photos I've used has been roughly 60-120 images. Is there any guideline here, or is it guesswork aiming for maximum possible coverage?

Q4 - Can anyone recommend anything to help automate this process, i.e. capturing the photos from all cameras automatically on each rotation? I've seen a few Pi rigs that capture an image per rotation step; I've put a rough sketch of what I mean after my questions.

Q5 - How many cameras and from what angles would you recommend?

Q6 - What specs would a dedicated PC require to churn through these assets on a daily basis with minimal processing time?
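
Regarding Q4, the Pi rigs I've seen seem to boil down to a loop along these lines. This is only a rough, untested sketch: the gphoto2 command-line tool, the USB port IDs, and the serial command the turntable accepts are all assumptions that would need adapting to the actual rig.

#!/usr/bin/env python3
# Rough sketch of a turntable capture loop (untested).
# Assumes the gphoto2 CLI is installed, each camera is addressed by
# its USB port ID (from `gphoto2 --auto-detect`), and the turntable
# advances one step on a simple serial command -- all placeholders.
import subprocess
import time

import serial  # pyserial

STEPS = 24                                     # 360 / 24 = 15 degrees per step
CAMERA_PORTS = ["usb:001,004", "usb:001,005"]  # example port IDs
turntable = serial.Serial("/dev/ttyUSB0", 9600, timeout=2)

for step in range(STEPS):
    # Trigger every camera at the current rotation angle.
    for i, port in enumerate(CAMERA_PORTS):
        subprocess.run(
            ["gphoto2", "--port", port,
             "--capture-image-and-download",
             "--filename", f"cam{i}_step{step:03d}.jpg"],
            check=True)
    # Advance the turntable one step (command is rig-specific).
    turntable.write(b"STEP\n")
    time.sleep(2.0)                            # let the table settle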

jwoods:
You might reach out to someone at the Swedish video game developer DICE; they used Agisoft specifically to create game models from physical props and terrain for Battlefield 1 and Star Wars: Battlefront. There are a few YouTube videos that cover the process they used, but if you can find some of the people who worked in those departments and reach out to them, they might be able to answer your questions directly.

Alexey Pasumansky:
Hello davidw,

A few comments on your questions:

1. You can mix images coming from different cameras and different lenses in the same project, provided that the level of detail (the effective resolution) of the images from the different subsets is more or less similar and the overlap is sufficient. By default PhotoScan will group the images into separate calibration groups according to the EXIF data and image dimensions. You can also manually split the calibration groups, even if identical camera/lens models are used in the rig, if you want to have the distortion coefficients estimated individually for each camera (see the Python sketch after this list).

3. The number of images depends on the type of object and its shape complexity. For some objects 40-50 images are sufficient, whereas for full-body captures and complex poses you may need 100-120 images.

5. Usually it is suggested to follow a kind of circular "path" of the camera around the object at several heights, in order to provide overlap in both the vertical and horizontal directions. For example, one shot every 10-15 degrees on two or three rings at different heights already gives roughly 50-100 images.

6. There are quite a few threads on the forum related to hardware. Generally we suggest a fast CPU (6-12 cores at 3.2 GHz or higher), one or two GPUs with a total of over 1500 CUDA cores or shader processor units, and an amount of RAM sufficient for the datasets you commonly process (usually at least 32 GB). A headless batch-processing sketch follows at the end of this post.
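
As for the manual split mentioned in point 1, it can also be scripted. Here is a minimal sketch using the PhotoScan Python API (the module is named Metashape in version 1.5 and later, and attribute names may differ slightly between versions):

import PhotoScan

chunk = PhotoScan.app.document.chunk

# Give every camera its own sensor, i.e. its own calibration group,
# so that distortion coefficients are estimated per physical camera.
for camera in list(chunk.cameras):
    sensor = chunk.addSensor()
    sensor.label = camera.label
    sensor.type = camera.sensor.type
    sensor.width = camera.sensor.width
    sensor.height = camera.sensor.height
    sensor.pixel_size = camera.sensor.pixel_size
    sensor.focal_length = camera.sensor.focal_length
    sensor.calibration = camera.sensor.calibration
    camera.sensor = sensor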

The following videos may be useful for general knowledge about image acquisition and post-processing:
https://www.youtube.com/watch?v=T4uej9tppsU
https://www.youtube.com/watch?v=U_WaqCBp9zo
https://vimeo.com/188877674
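
For daily batch processing (question 6), the whole pipeline can also be driven headlessly through the Python API by running a script on a dedicated machine. The following is a minimal sketch, assuming PhotoScan 1.x names (later Metashape versions rename the module and some of these enums) and a placeholder image folder:

import glob
import PhotoScan

doc = PhotoScan.Document()
chunk = doc.addChunk()
chunk.addPhotos(glob.glob("/data/assets/prop_01/*.jpg"))  # placeholder path

# Standard pipeline: align, densify, mesh, texture.
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy)
chunk.alignCameras()
chunk.buildDenseCloud(quality=PhotoScan.MediumQuality)
chunk.buildModel(surface=PhotoScan.Arbitrary, source=PhotoScan.DenseCloudData)
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)

doc.save("/data/assets/prop_01/prop_01.psx")

The accuracy and quality parameters can be adjusted to trade processing time against detail.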
