

Messages - MarinK

1
Hi Paulo,

Thanks for the advice! Increasing the accuracy does improve the outputs a bit - I guess this will have to be good enough :)

Cheers


2
Python and Java API / Point cloud generation with and without GCPs
« on: May 07, 2021, 03:16:14 PM »
Hello,

I have a set of four images from which, using 15 GCPs, I construct a dense cloud and then a DEM of my survey domain. I use these 15 GCPs to optimize my camera positions, view angles and lens parameters.
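
For reference, the GCP run itself is the standard workflow (I do it in the GUI); a rough script equivalent of the optimization step would be something like the sketch below, assuming the 15 GCP markers are already placed and referenced, with fit flags mirroring what I tick in the GUI:

Code: [Select]

import Metashape

chunk = Metashape.app.document.chunk
chunk.crs = Metashape.CoordinateSystem("EPSG::32646")

# optimize camera positions, view angles and lens parameters against the GCPs
# (markers are assumed to be placed and referenced beforehand)
chunk.optimizeCameras(fit_f=True, fit_cx=True, fit_cy=True,
                      fit_k1=True, fit_k2=True, fit_k3=True,
                      fit_p1=True, fit_p2=True,
                      adaptive_fitting=False, tiepoint_covariance=False)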

I then wrote a Python script to generate a point cloud & DEM of the same domain using the exact same images, but instead of using GCPs, I use as inputs the optimized camera positions, view angles and lens parameters. I expected to get exactly the same results with and without GCPs, but this is not the case: whatever I've tried, a small shift always remains. I don't understand where this shift comes from - does anyone have an idea?
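
To get those inputs out of the GCP project, I export the estimated camera reference and save each camera's adjusted calibration, roughly as in the sketch below (the exportReference column codes and the Calibration.save format argument are taken from the API reference and are worth double-checking):

Code: [Select]

# export estimated camera positions and orientations to CSV
# ('n' = label, 'uvw' = estimated coordinates, 'def' = estimated angles)
chunk.exportReference(path='path2ref.csv', format=Metashape.ReferenceFormatCSV,
                      items=Metashape.ReferenceItemsCameras,
                      columns='nuvwdef', delimiter=',')

# save the adjusted calibration of each camera's sensor (one file per camera)
for camera in chunk.cameras:
    camera.sensor.calibration.save('path2calibration_' + camera.label + '.txt',
                                   format=Metashape.CalibrationFormatAustralis)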

Here's the script I am using to process the images without GCPs:

Code: [Select]

import Metashape
import csv

global doc
doc = Metashape.app.document
path = 'path'
doc.save(path)

# Use the active chunk
chunk = doc.chunk

# load images to chunk
photo_list = list(...)  # list of image file paths
chunk.addPhotos(photo_list)

# Load camera position & view angles
chunk.importReference(path='path2ref.csv',
                      format=Metashape.ReferenceFormatCSV,
                      columns='nxyzXYZabcABC', delimiter=",")

#define coordinate system
chunk.crs = Metashape.CoordinateSystem("EPSG::32646")

doc.save(path)

# Import calibration parameters of cameras (one group per camera since each has different parameters)

for camera in chunk.cameras:
    Group = chunk.addCameraGroup()
    filename = 'path2calibration'
    calib = Metashape.Calibration()
    calib.load(filename, format = Metashape.CalibrationFormatAustralis)
    sensor = chunk.addSensor()
    sensor.width = calib.width
    sensor.height = calib.height
    sensor.type = calib.type
    sensor.user_calib = calib
    sensor.fixed = True
    camera.group = Group
    camera.sensor = sensor


doc.save(path)

# Match photos
accuracy = 0  # equivalent to highest accuracy
keypoints = 200000 #align photos key point limit
tiepoints = 20000 #align photos tie point limit
chunk.matchPhotos(downscale=accuracy, generic_preselection = True,reference_preselection=False,\
                  filter_mask = False, keypoint_limit = keypoints, tiepoint_limit = tiepoints)

# Enable rotation angles for alignment
for camera in chunk.cameras:
    camera.reference.rotation_enabled = True

# Align cameras
chunk.alignCameras(adaptive_fitting=False) #align cameras without adaptive fitting of distortion coefficients

## Optimize cameras - first optimization without GCPs
chunk.optimizeCameras(fit_f=False, fit_cx=False, fit_cy=False, fit_b1=False,\
                      fit_b2=False, fit_k1=True,fit_k2=False, fit_k3=False,\
                      fit_k4=False, fit_p1=False, fit_p2=False, fit_corrections=False,\
                      adaptive_fitting=False, tiepoint_covariance=False)

doc.save(path)


Cheers

3
Hi Paul,

Not quite the same position, but I also realized in the meantime that my yaw reference was different from Agisoft's - I had yaw = 0° when facing east instead of when facing north...

I updated all this and it now works!!
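
For anyone hitting the same thing, the correction boiled down to something like the sketch below, applied to the reference angles before alignment (assuming the yaw increases clockwise, like a compass bearing):

Code: [Select]

# shift yaw from my old convention (0 deg = east) to the north-referenced
# compass yaw that Metashape expects (0 deg = north, clockwise positive)
for camera in chunk.cameras:
    rot = camera.reference.rotation
    if rot is None:
        continue
    camera.reference.rotation = Metashape.Vector([(rot[0] + 90) % 360, rot[1], rot[2]])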

Thanks so much for the help throughout this process, and apologies for not checking the viewing angles first - it really slipped my mind...

Thanks!

Marin

4
Hi Paulo,

That makes sense - here are the angles I use, along with a picture of the setup (the camera is in the black box, pointing towards the left).

So here my yaw angle corresponds to the direction the camera is pointing relative to north, pitch is the inclination of the camera in the direction it is pointing, and roll is the inclination in the perpendicular direction.

So I guess in this case I should at least take pitch = 90-pitch and possibly swap yaw and roll?
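
To make the guess concrete, this is the kind of remapping I have in mind, applied to the reference angles before alignment (purely a sketch of the guess above, not something I have verified):

Code: [Select]

# tentative remapping from my camera angles to Metashape's aircraft-style angles:
# at least pitch -> 90 - pitch, possibly also swapping yaw and roll
for camera in chunk.cameras:
    rot = camera.reference.rotation
    if rot is None:
        continue
    yaw, pitch, roll = rot[0], rot[1], rot[2]
    camera.reference.rotation = Metashape.Vector([yaw, 90 - pitch, roll])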

Cheers,

Marin

5
Dear Paulo,

Thanks for all this and apologies for the late reply. The bounding box snippet is a good tip for centering it, thanks!

However, I still get the 'Zero resolution' problem, which after a few tests I think comes from my camera viewing angles: after loading my camera positions and view angles, I aligned the cameras, and at this point it did give me a point cloud, but with very high errors for the view angles. This makes me think that the definition of the yaw, pitch, roll angles I use differs from the one used in Agisoft: I suspect that in Agisoft these are the angles of the AIRCRAFT, while I had considered the CAMERA angles, since I am doing TERRESTRIAL SfM photogrammetry rather than using a UAV.

So in my project I defined yaw as the camera view direction (with 0° = north), and the pitch and roll of the camera accordingly. This probably doesn't make sense for Agisoft, does it? Do you know what corrections I need to make to get it working?
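
In case it helps pin this down, this is roughly how I compared the estimated angles against my reference after alignment (a sketch pieced together from snippets on this forum, not polished):

Code: [Select]

# print estimated yaw/pitch/roll next to the reference values for each aligned camera
T = chunk.transform.matrix
for camera in chunk.cameras:
    if camera.transform is None or camera.reference.rotation is None:
        continue
    m = chunk.crs.localframe(T.mulp(camera.center))  # local frame at the camera position
    R = m * T * camera.transform * Metashape.Matrix.Diag((1, -1, -1, 1))
    est_ypr = Metashape.Utils.mat2ypr(R.rotation())
    print(camera.label, 'estimated:', est_ypr, 'reference:', camera.reference.rotation)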

I'm hoping that this is the issue, otherwise I really have no idea...

Thanks!


6
Hi Paulo,

Thanks for this input. I had used these parameters in the GUI and it worked there, so I didn't think about changing them - I tried again with lower values and got the same error.

It seems that detecting points with matchPhotos() works, and so do the camera alignment and bounding box definition, but no tie points are selected in the triangulation step or displayed in the model (attached is a screenshot of the GUI).
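
For what it's worth, this is roughly how I checked that no tie points survive the triangulation (a quick sketch):

Code: [Select]

# count tie points in the sparse cloud after chunk.triangulatePoints()
chunk = Metashape.app.document.chunk
if chunk.point_cloud is None:
    print("no sparse cloud")
else:
    print("tie points:", len(chunk.point_cloud.points))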

Cheers

Marin



7
Hi Paulo,

Thanks a lot for this very swift reply. I have followed your instructions (including not forcing the camera calibration, but just defining the focal length, width and height), but I still get a 'Zero resolution' error when it reaches the point cloud calculation, which makes me think there might be an error somewhere else.

This is the script that I am currently using:

Code: [Select]
import os
import Metashape
import csv
import math
import copy
import time
import statistics

# following https://github.com/agisoft-llc/metashape-scripts/blob/master/src/quick_layout.py

# Checking compatibility
compatible_major_version = "1.7"
found_major_version = ".".join(Metashape.app.version.split('.')[:2])
if found_major_version != compatible_major_version:
    raise Exception("Incompatible Metashape version: {} != {}".format(found_major_version, compatible_major_version))

from PySide2.QtGui import *
from PySide2.QtCore import *
from PySide2.QtWidgets import *

def time_measure(func):
    def wrapper(*args, **kwargs):
        t1 = time.time()
        res = func(*args, **kwargs)
        t2 = time.time()
        print("Finished processing in {} sec.".format(t2 - t1))
        return res

    return wrapper


def show_message(msg):
    msgBox = QMessageBox()
    print(msg)
    msgBox.setText(msg)
    msgBox.exec()


def check_chunk(chunk):
    if chunk is None or len(chunk.cameras) == 0:
        show_message("Empty chunk!")
        return False

    if chunk.crs is None:
        show_message("Initialize chunk coordinate system first")
        return False

    return True


def get_antenna_transform(sensor):
    location = sensor.antenna.location
    if location is None:
        location = sensor.antenna.location_ref
    rotation = sensor.antenna.rotation
    if rotation is None:
        rotation = sensor.antenna.rotation_ref
    return Metashape.Matrix.Diag((1, -1, -1, 1)) * Metashape.Matrix.Translation(location) * Metashape.Matrix.Rotation(Metashape.Utils.ypr2mat(rotation))


def init_chunk_transform(chunk):
    if chunk.transform.scale is not None:
        return
    chunk_origin = Metashape.Vector([0, 0, 0])
    for c in chunk.cameras:
        if c.reference.location is None:
            continue
        chunk_origin = chunk.crs.unproject(c.reference.location)
        break

    chunk.transform.scale = 1
    chunk.transform.rotation = Metashape.Matrix.Diag((1, 1, 1))
    chunk.transform.translation = chunk_origin


def estimate_rotation_matrices(chunk):
    groups = copy.copy(chunk.camera_groups)

    groups.append(None)
    for group in groups:
        group_cameras = list(filter(lambda c: c.group == group, chunk.cameras))

        if len(group_cameras) == 0:
            continue

        if len(group_cameras) == 1:
            if group_cameras[0].reference.rotation is None:
                group_cameras[0].reference.rotation = Metashape.Vector([0, 0, 0])
            continue

        for idx, c in enumerate(group_cameras[0:-1]):
            next_camera = group_cameras[idx + 1]

            if c.reference.rotation is None:
                if c.reference.location is None or next_camera.reference.location is None:
                    continue

                prev_location = chunk.crs.unproject(c.reference.location)
                next_location = chunk.crs.unproject(next_camera.reference.location)

                direction = chunk.crs.localframe(prev_location).mulv(next_location - prev_location)

                yaw = math.degrees(math.atan2(direction.y, direction.x)) + 90
                if yaw < 0:
                    yaw = yaw + 360

                c.reference.rotation = Metashape.Vector([yaw, 0, 0])

        if group_cameras[-1].reference.rotation is None and group_cameras[-1].reference.location is not None:
            group_cameras[-1].reference.rotation = group_cameras[-2].reference.rotation


@time_measure
def align_cameras(chunk):
    init_chunk_transform(chunk)

    estimate_rotation_matrices(chunk)

    for c in chunk.cameras:
        if c.transform is not None:
            continue

        location = c.reference.location
        if location is None:
            continue

        rotation = c.reference.rotation
        if rotation is None:
            continue

        location = chunk.crs.unproject(location)  # location in ECEF
        rotation = chunk.crs.localframe(location).rotation().t() * Metashape.Utils.euler2mat(rotation, chunk.euler_angles) # rotation matrix in ECEF

        transform = Metashape.Matrix.Translation(location) * Metashape.Matrix.Rotation(rotation)
        transform = chunk.transform.matrix.inv() * transform * get_antenna_transform(c.sensor).inv()

        c.transform = Metashape.Matrix.Translation(transform.translation()) * Metashape.Matrix.Rotation(transform.rotation())


def run_camera_alignment():
    print("Alignement started...")

    doc = Metashape.app.document
    chunk = doc.chunk

    if not check_chunk(chunk):
        return

    try:
        align_cameras(chunk)
    except Exception as e:
        print(e)

    print("Alignement finished!")

   

global doc
doc = Metashape.app.document
path = "/path/2/file"
resultsdir = "/path/2/"  # placeholder: directory containing the reference CSV
photo_list = list(...)   # list of image file paths
doc.save(path)

# Add chunk
chunk = doc.chunk

# load images to chunk
chunk.addPhotos(photo_list)

# Load camera position & view angles
chunk.importReference(path=''.join([resultsdir,'IMG_ref.csv']),
                    format=Metashape.ReferenceFormatCSV,
                    columns='nxyzXYZabcABC',delimiter=",")

#define coordinate system
chunk.crs = Metashape.CoordinateSystem("EPSG::32646")

doc.save(path)

# Import calibration parameters of cameras
for camera in chunk.cameras:
    sensor = camera.sensor
    new_sensor = chunk.addSensor()
   
    new_sensor.focal_length = 18 #in mm
    new_sensor.height = 4000 # in pixels
    new_sensor.width = 6000 # in pixels
    new_sensor.pixel_height = 14.9/new_sensor.height
    new_sensor.pixel_width = 22.3/new_sensor.width
    new_sensor.pixel_size = Metashape.Vector([new_sensor.pixel_height, new_sensor.pixel_width])
    new_sensor.type = Metashape.Sensor.Type.Frame
       
    camera.sensor = new_sensor

doc.save(path)

# Match photos
accuracy = 0  # equivalent to highest accuracy
keypoints = 200000 #align photos key point limit
tiepoints = 20000 #align photos tie point limit
chunk.matchPhotos(downscale=accuracy, generic_preselection = True,reference_preselection=True,\
                  filter_mask = False, keypoint_limit = keypoints, tiepoint_limit = tiepoints)

doc.save(path)

# Align cameras using uploaded camera position and view angles (following https://github.com/agisoft-llc/metashape-scripts/blob/master/src/quick_layout.py)
run_camera_alignment()


# Define: Bounding box around camera locations (based on https://www.agisoft.com/forum/index.php?topic=10102.0)

BUFFER = 10000 #percent

def cross(a, b):
    result = Metashape.Vector([a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x])
    return result.normalized()

new_region = Metashape.Region()
xcoord = Metashape.Vector([10E10, -10E10])
ycoord = Metashape.Vector([10E10, -10E10])
avg = [[],[]]
T = chunk.transform.matrix
s = chunk.transform.matrix.scale()
crs = chunk.crs
z = Metashape.Vector([0,0])

for camera in chunk.cameras:
    if camera.transform:
        coord = crs.project(T.mulp(camera.center))
        xcoord[0] = min(coord.x, xcoord[0])
        xcoord[1] = max(coord.x, xcoord[1])
        ycoord[0] = min(coord.y, ycoord[0])
        ycoord[1] = max(coord.y, ycoord[1])
        z[0] += coord.z
        z[1] += 1
        avg[0].append(coord.x)
        avg[1].append(coord.y)
       
z = z[0] / z[1]
avg = Metashape.Vector([statistics.median(avg[0]), statistics.median(avg[1]), z])

corners = [Metashape.Vector([xcoord[0], ycoord[0], z]),
           Metashape.Vector([xcoord[0], ycoord[1], z]),
           Metashape.Vector([xcoord[1], ycoord[1], z]),
           Metashape.Vector([xcoord[1], ycoord[0], z])]
corners = [T.inv().mulp(crs.unproject(x)) for x in list(corners)]

side1 = corners[0] - corners[1]
side2 = corners[0] - corners[-1]
side1g = T.mulp(corners[0]) - T.mulp(corners[1])
side2g = T.mulp(corners[0]) - T.mulp(corners[-1])
side3g = T.mulp(corners[0]) - T.mulp(Metashape.Vector([corners[0].x, corners[0].y, 0]))
new_size = ((100 + BUFFER) / 100) * Metashape.Vector([side2g.norm()/s, side1g.norm()/s, 3*side3g.norm() / s]) ##

xcoord, ycoord, z = T.inv().mulp(crs.unproject(Metashape.Vector([sum(xcoord)/2., sum(ycoord)/2., z - 2 * side3g.z]))) #
new_center = Metashape.Vector([xcoord, ycoord, z]) #by 4 corners

horizontal = side2
vertical = side1
normal = cross(vertical, horizontal)
horizontal = -cross(vertical, normal)
vertical = vertical.normalized()

R = Metashape.Matrix ([horizontal, vertical, -normal])
new_region.rot = R.t()

new_region.center = new_center
new_region.size = new_size
chunk.region = new_region

# Process sparse cloud
chunk.triangulatePoints()



#building dense cloud
chunk.buildDepthMaps(downscale=4, filter_mode=Metashape.MildFiltering)
chunk.buildDenseCloud()

print("Script finished")


Any ideas welcome! Thanks!

8
Hello,

I am trying to write a script to automatically process images with known camera locations and calibration parameters.

So far my script follows these steps (based on a lot of reading in this very useful forum):

1. Loading the images
2. Importing camera position & view angles
3. Defining coordinate system
4. Defining camera calibration parameters
5. Matching photos
6. Aligning cameras using uploaded information (run_camera_alignment() from https://github.com/agisoft-llc/metashape-scripts/blob/master/src/quick_layout.py)
7. Defining a bounding box which encompasses a large enough domain
8. Processing sparse cloud (chunk.triangulatePoints())
9. Building depth maps & dense cloud

However, when I get to this step, I get the error message 'Zero resolution'. This makes me think that something is wrong with how I define the camera calibration parameters (since the bounding box looks OK, and the cameras are properly imported with the intended positions & view angles). Here is my script for defining the camera calibration parameters (my images are taken with a Canon DSLR camera with a 22.3 x 14.9 mm sensor and a fixed focal length of 18 mm):

Code: [Select]
for camera in chunk.cameras:
    sensor = camera.sensor
    new_sensor = chunk.addSensor()
   
    new_sensor.focal_length = 18 #in mm
    new_sensor.height = 4000 # in pixels
    new_sensor.width = 6000 # in pixels
    new_sensor.pixel_height = new_sensor.focal_length*14.9/new_sensor.height
    new_sensor.pixel_width = new_sensor.focal_length*22.3/new_sensor.width
    new_sensor.pixel_size = Metashape.Vector([new_sensor.pixel_height, new_sensor.pixel_width])
    new_sensor.type = Metashape.Sensor.Type.Frame
   
    cal = new_sensor.calibration
    cal.cx = 3000.5
    cal.cy = 2000.5
    cal.height = new_sensor.height
    cal.width = new_sensor.width
    cal.f = new_sensor.focal_length
    cal.k1 = 0
    cal.k2 = 0
    cal.k3 = 0
    cal.k4 = 0
    cal.p1 = 0
    cal.p2 = 0
    cal.p3 = 0
    cal.p4 = 0

    new_sensor.user_calib = cal
    new_sensor.calibration = cal
    new_sensor.fixed = True
   
    camera.sensor = new_sensor

Am I missing something here?

Thanks for the help!

