
Topics - MarinK

Python and Java API / Point cloud generation with and without GCPs
« on: May 07, 2021, 03:16:14 PM »

I have a set of four images from which I construct a dense cloud and then a DEM of my survey domain, using 15 GCPs. I use these 15 GCPs to optimize the camera positions, view angles and lens parameters.

I then wrote a Python script to generate a point cloud and DEM of the same domain using the exact same images, but instead of using GCPs I use the optimized camera positions, view angles and lens parameters as inputs. I expected to get exactly the same results with and without GCPs, but this is not the case: whatever I try, a small shift remains. I don't understand where it comes from; does anyone have an idea?
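To quantify the shift, one can compare the coordinates of the same points in the two outputs (for instance a few check points measured in both chunks). A minimal sketch with made-up coordinates, just to illustrate the check:

```python
# Compare matched point coordinates from the two runs (with / without GCPs).
# Coordinates below are invented for illustration only.
with_gcps = [(4.2, 1.0, 10.3), (7.1, 2.5, 11.0), (9.8, 4.0, 10.7)]
without_gcps = [(4.25, 1.04, 10.36), (7.15, 2.54, 11.06), (9.85, 4.04, 10.76)]

n = len(with_gcps)
# Mean offset per axis: a consistent non-zero mean indicates a systematic
# (rigid) shift rather than random reconstruction noise.
mean_shift = tuple(
    sum(b[i] - a[i] for a, b in zip(with_gcps, without_gcps)) / n
    for i in range(3)
)
print(mean_shift)
```

If the per-point offsets are all close to the mean, the discrepancy is a rigid shift of the whole model rather than a deformation.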

Here's the script I am using to process the images without GCPs:

Code:

import Metashape
import csv

doc = Metashape.app.document
path = 'path'

# Get the active chunk
chunk = doc.chunk

# load images to chunk

# Load camera position & view angles

# Define coordinate system
chunk.crs = Metashape.CoordinateSystem("EPSG::32646")

# Import calibration parameters of cameras (one group per camera since each has different parameters)

for camera in chunk.cameras:
    Group = chunk.addCameraGroup()
    filename = 'path2calibration'
    calib = Metashape.Calibration()
    calib.load(filename, format = Metashape.CalibrationFormatAustralis)
    sensor = chunk.addSensor()
    sensor.width = calib.width
    sensor.height = calib.height
    sensor.type = calib.type
    sensor.user_calib = calib
    sensor.fixed = True
    camera.group = Group
    camera.sensor = sensor

# Match photos
accuracy = 0  # equivalent to highest accuracy
keypoints = 200000 #align photos key point limit
tiepoints = 20000 #align photos tie point limit
chunk.matchPhotos(downscale=accuracy, generic_preselection = True,reference_preselection=False,\
                  filter_mask = False, keypoint_limit = keypoints, tiepoint_limit = tiepoints)

# Enable rotation angles for alignment
for camera in chunk.cameras:
    camera.reference.rotation_enabled = True

# Align cameras
chunk.alignCameras(adaptive_fitting=False) #align cameras without adaptive fitting of distortion coefficients

## Optimize cameras - first optimization without GCPs
chunk.optimizeCameras(fit_f=False, fit_cx=False, fit_cy=False, fit_b1=False,\
                      fit_b2=False, fit_k1=True,fit_k2=False, fit_k3=False,\
                      fit_k4=False, fit_p1=False, fit_p2=False, fit_corrections=False,\
                      adaptive_fitting=False, tiepoint_covariance=False)



I am trying to write a script to automatically process images with known camera locations and calibration parameters.

So far my script follows these steps (based on a lot of reading in this very useful forum):

1. Loading the images
2. Importing camera position & view angles
3. Defining coordinate system
4. Defining camera calibration parameters
5. Matching photos
6. Aligning cameras using the imported information (run_camera_alignement() from
7. Defining a bounding box which encompasses a large enough domain
8. Processing sparse cloud (chunk.triangulatePoints())
9. Building depth maps & dense cloud
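As an aside on step 7, the centre-and-size arithmetic for such a bounding box is plain Python; the positions and the 50 % margin below are made up, and in Metashape the result would then be assigned to chunk.region:

```python
# Compute a bounding box (centre + size) enclosing a set of camera positions,
# enlarged by a margin so the reconstructed surface fits inside.
# Positions are hypothetical; in Metashape they would come from
# camera.reference.location, and centre/size would go into chunk.region.
positions = [(5.0, 2.0, 100.0), (15.0, 2.0, 102.0),
             (5.0, 12.0, 100.0), (15.0, 12.0, 102.0)]
margin = 0.5  # enlarge each dimension by 50 % (assumption)

mins = [min(p[i] for p in positions) for i in range(3)]
maxs = [max(p[i] for p in positions) for i in range(3)]
center = [(lo + hi) / 2 for lo, hi in zip(mins, maxs)]
size = [(hi - lo) * (1 + margin) for lo, hi in zip(mins, maxs)]
print(center, size)
```

Note that chunk.region is expressed in the chunk's internal coordinate system, so real positions would need to be transformed first (e.g. via chunk.transform.matrix).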

However, when I get to the last step, I get the error message 'Zero resolution'. This makes me think that something is wrong with how I define the camera calibration parameters, since the bounding box looks OK and the cameras are properly imported with the wanted positions and view angles. Here is my script for defining the camera calibration parameters (my images are taken with a Canon DSLR with a 22.3 x 14.9 mm sensor and a fixed focal length of 18 mm):

Code:
for camera in chunk.cameras:
    sensor = camera.sensor
    new_sensor = chunk.addSensor()
    new_sensor.focal_length = 18 #in mm
    new_sensor.height = 4000 # in pixels
    new_sensor.width = 6000 # in pixels
    new_sensor.pixel_height = 14.9/new_sensor.height # sensor height (mm) / image height (px)
    new_sensor.pixel_width = 22.3/new_sensor.width # sensor width (mm) / image width (px)
    new_sensor.pixel_size = Metashape.Vector([new_sensor.pixel_width, new_sensor.pixel_height])
    new_sensor.type = Metashape.Sensor.Type.Frame
    cal = new_sensor.calibration
    cal.cx = 3000.5
    cal.cy = 2000.5
    cal.height = new_sensor.height
    cal.width = new_sensor.width
    cal.f = new_sensor.focal_length*new_sensor.width/22.3 # calibration f is in pixels, not mm
    cal.k1 = 0
    cal.k2 = 0
    cal.k3 = 0
    cal.k4 = 0
    cal.p1 = 0
    cal.p2 = 0
    cal.p3 = 0
    cal.p4 = 0

    new_sensor.user_calib = cal
    new_sensor.calibration = cal
    new_sensor.fixed = True
    camera.sensor = new_sensor
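As a sanity check on the calibration values: the physical pixel size is the sensor dimension divided by the image dimension (no focal-length factor), and the calibration focal length f is expressed in pixels, not mm. With the Canon numbers:

```python
# Sanity-check the sensor geometry: 22.3 x 14.9 mm sensor,
# 6000 x 4000 px images, 18 mm fixed focal length.
sensor_w_mm, sensor_h_mm = 22.3, 14.9
img_w_px, img_h_px = 6000, 4000
focal_mm = 18.0

pixel_w_mm = sensor_w_mm / img_w_px   # mm per pixel, horizontally
pixel_h_mm = sensor_h_mm / img_h_px   # mm per pixel, vertically
focal_px = focal_mm / pixel_w_mm      # focal length expressed in pixels

print(round(pixel_w_mm, 6), round(pixel_h_mm, 6), round(focal_px, 1))
```

With an 18 mm lens the focal length in pixels comes out around 4843, a few orders of magnitude larger than 18, so passing mm where pixels are expected would badly distort the reconstruction geometry.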

Am I missing something here?

Thanks for the help!
