Show Posts


Messages - mtbouchard

Wondering about people's thoughts on a static rig (Esper) with 48 full-frame cameras (Sony Alpha 7 III) mounted about 1 m from the subject, covering as much of a full sphere as possible.

A colleague has recommended using 100mm manual Samyang lenses. We have a custom strobe light system synced to all cameras.

Curious to hear people's thoughts. We will use Metashape for the reconstruction. Thank you!
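For context, the reconstruction side would be the standard Metashape pipeline, roughly like this (an untested sketch against the 1.6-era Python API; file and project names are placeholders):

```python
import Metashape  # requires a licensed Metashape install

doc = Metashape.Document()
chunk = doc.addChunk()

# Placeholder file names for the 48 rig frames of one capture.
chunk.addPhotos(["cam%02d.jpg" % i for i in range(48)])

# Standard alignment + mesh pipeline (1.6 API parameter names).
chunk.matchPhotos(downscale=1, generic_preselection=True)
chunk.alignCameras()
chunk.buildDepthMaps(downscale=2)
chunk.buildModel(source_data=Metashape.DepthMapsData)

doc.save("rig_test.psx")
```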


Thanks Alexey.

Quick follow-up: what would the Python code be to do this? And if all intrinsics are fixed, are the extrinsics still optimized?
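To make the question concrete, here's an untested sketch of what I'm imagining (assuming `Sensor.fixed` locks the calibration and the `fit_*` flags on optimizeCameras only govern intrinsics; `project.psx` is a placeholder):

```python
import Metashape  # requires a licensed Metashape install

doc = Metashape.Document()
doc.open("project.psx")  # placeholder project path
chunk = doc.chunk

# Lock the intrinsics of every sensor so the bundle adjustment
# leaves the calibration untouched.
for sensor in chunk.sensors:
    sensor.fixed = True

# Belt and suspenders: disable the intrinsic fit flags too.
# Camera positions/rotations (extrinsics) should still be refined.
chunk.optimizeCameras(fit_f=False, fit_cx=False, fit_cy=False,
                      fit_k1=False, fit_k2=False, fit_k3=False,
                      fit_p1=False, fit_p2=False)
doc.save()
```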


Matt Bouchard

Just a slight bump here since I posted near the holiday break.

I have an additional question: can optimize_cameras() be run so that it applies only a slight position/rotation change? Or is it best done by setting "fixed" on the sensor itself (of which I now have one per camera, since they are all unique)? And if so, what is the Python call?

Basically I want to calibrate once per week, re-use the calibration per camera, but allow for tiny drift from gravity. :)
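To be concrete, this is the untested weekly workflow I have in mind (assuming `Calibration.save`/`Calibration.load`, `Sensor.user_calib`, and `Sensor.fixed` behave as I expect; project and file names are placeholders):

```python
import Metashape  # requires a licensed Metashape install

# --- Once per week: save each solved sensor calibration ---
doc = Metashape.Document()
doc.open("weekly_calibration.psx")  # placeholder
for i, sensor in enumerate(doc.chunk.sensors):
    sensor.calibration.save("calib_cam%02d.xml" % i)

# --- Per capture: reload, lock intrinsics, refine extrinsics ---
capture = Metashape.Document()
capture.open("capture.psx")  # placeholder
chunk = capture.chunk
for i, sensor in enumerate(chunk.sensors):
    calib = Metashape.Calibration()
    calib.load("calib_cam%02d.xml" % i)
    sensor.user_calib = calib
    sensor.fixed = True  # keep intrinsics untouched

# Bundle adjustment now only absorbs the tiny mechanical drift
# in camera positions/rotations.
chunk.optimizeCameras()
capture.save()
```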

Thank you!

Matt Bouchard

Does this include extrinsics?


I have a rigid rig of 48 Sony Alpha 7 III cameras.

I have a special calibration capture (with a scale bar) which is solved/constrained, and I then want to re-use this exact calibration in subsequent captures for a while.

For a long time I used a single photo group, but I believe the quality should be higher if I split the cameras into one group each so the intrinsics are unique. This has worked, but I have found the bundle adjustment (Optimize Cameras) slightly changes the accuracy of the scale.

I have used code from Alexey to create per-sensor groups, and it worked great! I export the XML as a single file.
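For reference, the per-sensor split I'm using is essentially this (adapted, untested sketch; the sensor labels are my own):

```python
import Metashape  # requires a licensed Metashape install

chunk = Metashape.app.document.chunk

# Give every camera its own sensor (calibration group),
# copying the shared sensor's parameters so the intrinsics
# can be solved independently per camera.
for camera in list(chunk.cameras):
    shared = camera.sensor
    sensor = chunk.addSensor()
    sensor.label = "Calib_" + camera.label
    sensor.type = shared.type
    sensor.width = shared.width
    sensor.height = shared.height
    sensor.pixel_width = shared.pixel_width
    sensor.pixel_height = shared.pixel_height
    sensor.focal_length = shared.focal_length
    sensor.user_calib = shared.calibration  # start from the shared solve
    camera.sensor = sensor
```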

But I am having trouble reloading. I had simply been importing the cameras, but that doesn't seem to work.

Any help in taking a single calibration.xml and using it to import those intrinsics/extrinsics? I am also wondering what parameters to use in Optimize Cameras such that I keep the intrinsics locked but allow for very minor camera movement.
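Here's an untested sketch of what I'm trying, in case it helps clarify (I'm assuming importCameras restores the extrinsics from the exported XML, and that disabling the fit_* flags keeps the intrinsics locked):

```python
import Metashape  # requires a licensed Metashape install

chunk = Metashape.app.document.chunk

# Reload the saved camera poses (extrinsics) from the
# calibration project's XML export.
chunk.importCameras("calibration.xml", format=Metashape.CamerasFormatXML)

# Keep intrinsics locked; with all fit_* flags disabled,
# optimizeCameras() should adjust only positions/rotations.
for sensor in chunk.sensors:
    sensor.fixed = True
chunk.optimizeCameras(fit_f=False, fit_cx=False, fit_cy=False,
                      fit_b1=False, fit_b2=False,
                      fit_k1=False, fit_k2=False, fit_k3=False,
                      fit_p1=False, fit_p2=False)
```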

Thank you!


General / 16 bit linear tiff workflow
« on: October 25, 2020, 10:21:42 PM »
Hi Agisoft:

Q: When loading .tif images, does Metashape do any colorspace conversion? Is the resultant UV .tif or EXR map in the same colorspace as the input?

Detail: We convert CR2/ARW raw files into 16-bit linear TIFFs (using oiiotool; we color correct as well). We've noticed the metadata that would 'hint' the files are linear is missing, so on macOS, for example, most viewers default to displaying them as if they were in gamma 2.2 space. Loading them into Metashape interactively also shows them "dark". We do get good results and make sure to interpret the output .tif as 16-bit linear. It's not clear how .tif files should be tagged as linear (aside from attaching an ICC profile, which is how dcraw seems to work).

oiiotool command:
oiiotool input_img_path --tocolorspace linear --ociofiletransform <cc_lut> -d uint16 --compression zip -o <output_img_path>

Followup: What is the preferred linear-space input to Metashape? It seems 16-bit half-float EXR is not supported?

Followup 2: We've noticed we have to copy some of the metadata into the .tif (the image size and orientation), and this seems to work well.
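In case it's useful, here's the quick sanity check we run on converted frames (assumes the third-party `tifffile` Python package; it just verifies that 16-bit samples survive a write/read round trip the way our converted files are stored):

```python
import numpy as np
import tifffile  # third-party package, not part of Metashape

# Write a tiny 16-bit RGB test image, then read it back and
# confirm the sample depth is preserved.
data = np.linspace(0, 65535, 12, dtype=np.uint16).reshape(2, 2, 3)
tifffile.imwrite("check.tif", data)

arr = tifffile.imread("check.tif")
assert arr.dtype == np.uint16  # 16-bit depth survived the round trip
```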

Thanks as always.
Matt Bouchard

Python and Java API / Re: Metashape 1.6 exportTexture
« on: October 05, 2020, 07:18:18 PM »
Ah! This makes sense!  Thank you Alexey.

Many changes in 1.6, but honestly I find it much cleaner and the results better. Thank you!


Python and Java API / Metashape 1.6 exportTexture
« on: October 05, 2020, 09:58:11 AM »

Looking for how to export only the generated UV texture (as an 8K .tif file) in Metashape 1.6.

Old docs show:

RuntimeError: Can't save texture: <my_path>

I don't see an exportTexture method either.

What am I missing?

Thank you.


Hello Linshuh,

Solid view mesh display in the Metashape GUI applies lighting to the model, which is not taken into account when the renderImage or renderPreview method is used, so the only easy way is to use the command. The point of view can be changed using

Heya Alexey,

I am also interested in this topic: procedurally rendering a non-textured view of the mesh from various angles (say, 45° steps around Y) to better evaluate mesh quality for a few thousand automated scans.

So, running via Python only (no UI): use the app's captureView() function, and pass 'Metashape.ModelViewMode.ShadedModelView' as the mode? No code needed, just verification is cool.

I appreciate how responsive you are on these forums, Alexey... and thank you for the great product and support!

Matt Bouchard

Camera Calibration / Re: Camera calibration
« on: October 15, 2019, 01:09:05 AM »
That's great, Alexey.

Is the "photo-invariant parameters" dialog of the Camera Calibration operation possible via Python as well?

I'd like to solve the camera intrinsics for each camera individually... I realise this will take longer.

