
Author Topic: Stereo camera processing

Christian Lees
Stereo camera processing
« on: March 09, 2020, 03:07:12 AM »
I'm sure this has been asked before but I can't find an answer for it.

We are processing large underwater stereo data sets: 20,000 images from 12 MP cameras.  They are stereo pairs from a fixed camera rig.  We supply reference data for latitude, longitude and RPH (roll/pitch/heading).  The accuracy of the RPH is less than 1 degree; positional accuracy is around 1 m.
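For context, this is roughly how we feed those accuracies to Metashape so the adjustment weights the reference data sensibly (a minimal sketch against the Metashape 1.6 Python API; attribute names may differ in other versions):

import Metashape

chunk = Metashape.app.document.chunk  # active chunk

# Per-chunk defaults matching our sensor spec: ~1 m position, <1 deg attitude
chunk.camera_location_accuracy = Metashape.Vector([1.0, 1.0, 1.0])  # metres (X, Y, Z)
chunk.camera_rotation_accuracy = Metashape.Vector([1.0, 1.0, 1.0])  # degrees (yaw, pitch, roll)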

We calibrate our cameras using a checkerboard and our own software based on OpenCV.  I am able to convert the calibration file into something suitable for Metashape.  We produce both the intrinsic and the extrinsic parameters.
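In case the mapping is useful to anyone else, this is roughly what our converter does (a sketch only; it assumes the Metashape 1.6 Python API and the usual OpenCV-to-Metashape conventions: Metashape's cx/cy are offsets from the image centre rather than absolute pixels, fx/fy become f plus the b1 affinity term, and the tangential terms appear to be swapped between the two models, which is worth verifying against both manuals):

import Metashape

def opencv_to_metashape(sensor, fx, fy, cx, cy, k1, k2, p1, p2, k3, width, height):
    # Hypothetical helper: maps an OpenCV camera matrix and distortion
    # vector onto a Metashape.Calibration pinned as a fixed precalibration.
    calib = Metashape.Calibration()
    calib.width, calib.height = width, height
    calib.f = fy                    # single focal length in pixels...
    calib.b1 = fx - fy              # ...plus an affinity term when fx != fy
    calib.cx = cx - width / 2.0     # principal point relative to image centre
    calib.cy = cy - height / 2.0
    calib.k1, calib.k2, calib.k3 = k1, k2, k3
    calib.p1, calib.p2 = p2, p1     # tangential coefficients swapped vs OpenCV
    sensor.user_calib = calib
    sensor.fixed = True             # treat as precalibrated; do not refine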

Is there a way to put in the offset for the second camera and only put in the reference location for the primary?  In my initial testing I have put in the same location for both cameras.  The actual separation is 0.15 m.  I am able to calculate the position of the second camera and put that in, but we are processing in WGS84, so it's a pretty small change and within the positional error.
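What I am experimenting with instead is Metashape's multi-camera rig support, so only the master gets a reference location and the slave hangs off it by a fixed offset.  A sketch under the 1.6 Python API (Sensor.master, Sensor.location and friends are my reading of the API reference, so verify against your version):

import Metashape

chunk = Metashape.app.document.chunk
left, right = chunk.sensors[0], chunk.sensors[1]  # assumes one calibration group per rig camera

right.master = left                                   # slave the right camera to the left
right.location = Metashape.Vector([0.15, 0.0, 0.0])   # 0.15 m baseline in the master frame
right.rotation = Metashape.Vector([0.0, 0.0, 0.0])    # relative rotation in degrees, from our extrinsics
right.fixed_location = True                           # hold the offset fixed during alignment
right.fixed_rotation = True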

Also, for the matchPhotos stage, which reference preselection mode is best to use when we already know where all the images are in space?  How is it limited in terms of looking for matches?  Am I able to limit it by distance?
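One workaround I am considering for the distance limit: matchPhotos accepts an explicit pairs list in recent API versions, so the pair graph can be built from the reference positions directly.  A sketch (it assumes pairs takes (camera.key, camera.key) tuples and uses chunk.crs.unproject to get metric ECEF coordinates out of WGS84; both worth checking for your version):

import itertools
import Metashape

chunk = Metashape.app.document.chunk
max_dist = 5.0  # metres; hypothetical cutoff, tune to the survey overlap

# Geocentric (ECEF, metres) position for each camera that has reference data
ecef = {c.key: chunk.crs.unproject(c.reference.location)
        for c in chunk.cameras if c.reference.location is not None}

pairs = [(a, b) for a, b in itertools.combinations(ecef, 2)
         if (ecef[a] - ecef[b]).norm() <= max_dist]

chunk.matchPhotos(downscale=2, generic_preselection=False,
                  reference_preselection=True, keep_keypoints=True,
                  subdivide_task=True, pairs=pairs)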

Do you expect a loss in image matching accuracy if the max key points limit is reduced, or the downscale is set to Low?  We can run the data through multiple times to see what changes, but it does take a long time to run.  This is the call we currently use:

# bar is a progress bar (e.g. tqdm) created earlier in our script
chunk.matchPhotos(downscale=2, generic_preselection=False, reference_preselection=True, keep_keypoints=True, subdivide_task=True, progress=bar.update)
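Rather than rerunning the whole survey, what I am planning for the key point/downscale question is a sweep on a small subset, duplicating the chunk per setting and comparing tie point counts.  A sketch (chunk.copy() and the sparse cloud attribute are as per the 1.6 API; point_cloud was later renamed tie_points):

import Metashape

doc = Metashape.app.document
base = doc.chunk  # a small, representative subset of the survey

for downscale in (1, 2, 4):
    for kp_limit in (40000, 10000):
        trial = base.copy()
        trial.label = "ds%d_kp%d" % (downscale, kp_limit)
        trial.matchPhotos(downscale=downscale, keypoint_limit=kp_limit,
                          generic_preselection=False, reference_preselection=True)
        trial.alignCameras()
        n_points = len(trial.point_cloud.points) if trial.point_cloud else 0
        print(trial.label, "tie points:", n_points)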

Additionally, is there a way to use both the GPU and the CPU for feature extraction etc.?  Most of our processing machines have 16 or 32 cores as well as a GPU.
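From the Python API reference, GPU devices are enabled through a bitmask and there is a separate flag to keep the CPU working during GPU stages, which sounds like what we want (a sketch using Metashape.app.gpu_mask and Metashape.app.cpu_enable):

import Metashape

# Enable every detected GPU (bit i of the mask enables device i)
gpus = Metashape.app.enumGPUDevices()
Metashape.app.gpu_mask = (1 << len(gpus)) - 1

# Keep the CPU busy during the GPU-accelerated stages as well
Metashape.app.cpu_enable = True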