Author Topic: Multi-camera UAV: Use camera groups or simply create individual sensors?  (Read 6878 times)

Deepvision

  • Newbie
  • Posts: 21
Hey,

I have a UAV setup with 3 different sensors attached. As the UAV flies, the sensors all take photos in unison, so for a given position we will have three photos (one from each sensor). The user manual seems to suggest that when "a subset of photos were captured from one camera position - camera station, for Metashape to process them correctly it is obligatory to move those photos to a camera group and mark the group as Camera Station."

Thus I assumed that I should create a new camera group for each position where the sensors all took a photo at the same time. However, reading through some other threads here (https://www.agisoft.com/forum/index.php?topic=6901.0) it seems that creating new sensors and assigning them to the corresponding photos is the correct method.
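
For reference, this is roughly what I assumed the camera-group approach would look like in the Python API (just a sketch based on my reading of the API reference; the station-by-station ordering of the cameras is an assumption and depends on how the photos were loaded):

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk

# Sketch: one camera group per trigger position, marked as Camera Station.
# Assumes cameras are ordered station by station, three frames per station.
for i in range(0, len(chunk.cameras), 3):
    group = chunk.addCameraGroup()
    group.label = "station_%03d" % (i // 3)
    group.type = Metashape.CameraGroup.Station
    for cam in chunk.cameras[i:i + 3]:
        cam.group = group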

I guess my question is, do I need to use camera groups given my current setup? The end goal is to create an orthomosaic from each sensor (so 3 orthomosaics), all properly aligned to each other.

Thanks!
Sven

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • Posts: 15438
Re: Multi-camera UAV: Use camera groups or simply create individual sensors?
« Reply #1 on: February 06, 2020, 06:21:35 PM »
Hello Sven,

You need to use the multi-camera system approach (I've just posted a short message about that: https://www.agisoft.com/forum/index.php?topic=916.msg52893#msg52893). You can also refer to pages 21-22 of the Metashape Pro manual: https://www.agisoft.com/pdf/metashape-pro_1_6_en.pdf

The Camera Station approach most likely wouldn't work for you, as it was designed to support the processing of images captured from a tripod with a nodal head.
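
In the Python API the corresponding rig setup would look approximately like this (a rough sketch, assuming Metashape created one sensor per physical camera when the photos were loaded; adapt it to your data):

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk

# Assumption: one sensor (calibration group) per physical camera.
master = chunk.sensors[0]
for sensor in chunk.sensors[1:]:
    sensor.master = master   # slave sensors reference the rig master

The relative offsets of the slave sensors are then estimated during alignment.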
Best regards,
Alexey Pasumansky,
Agisoft LLC

Deepvision

  • Newbie
  • Posts: 21
Re: Multi-camera UAV: Use camera groups or simply create individual sensors?
« Reply #2 on: February 12, 2020, 08:19:13 PM »
Hey Alexey,

I have looked at the sources you linked. However, these two sources only describe how this can be accomplished in the GUI, and I am trying to write this all in Python (maybe I should have posted this question in the other forum to begin with).

The manual says that I should set a slave in the Calibration tab. The closest thing I have found in the Python API is the option to set sensor.master, but I have not been able to find an "Adjust location" method in Python. Is setting sensor.master the correct way to go about this? After setting this master band I tried to perform an alignment:

Code: [Select]
chunk.matchPhotos(accuracy = Metashape.HighAccuracy,
                  generic_preselection = True,
                  reference_preselection = True,
                  keypoint_limit = 40000,
                  tiepoint_limit = 10000)

chunk.alignCameras()

But it didn't seem to give me the roto-translation I was looking for. When I check the sensor rotations:

Code: [Select]
for sensor in chunk.sensors:
    print(sensor.rotation)

I just get the identity matrix for all of the sensors.

Essentially what I am missing is a roto-translation matrix for each of my three cameras/sensors, where the master camera/sensor would have the identity matrix and the slave cameras/sensors would have a roto-translation matrix that describes the offset from the master.

Perhaps you could outline the steps I would need to take to get the relative orientations (rotation and translation), or point me towards a useful resource? For the individual images I have already found this in chunk.cameras[0].transform.rotation.
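
The closest I have come so far is deriving a relative pose from the per-image transforms after alignment, along these lines (a sketch; the camera indices are hypothetical and assume both frames belong to the same station):

Code: [Select]
# Sketch: roto-translation of a slave frame relative to the master frame
# of the same station, both as 4x4 matrices in chunk coordinates.
master_cam = chunk.cameras[0]   # hypothetical: master frame of one station
slave_cam = chunk.cameras[1]    # hypothetical: slave frame, same station

T_rel = master_cam.transform.inv() * slave_cam.transform
print(T_rel)   # identity when the master is paired with itself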