Forum

Author Topic: Camera calibration for Multi View Stereo: Zero resolution error  (Read 2770 times)

MarinK

  • Newbie
  • *
  • Posts: 8
    • View Profile
Hello,

I am trying to write a script to automatically process images with known camera locations and calibration parameters.

So far my script follows these steps (based on a lot of reading in this very useful forum):

1. Loading the images
2. Importing camera position & view angles
3. Defining coordinate system
4. Defining camera calibration parameters
5. Matching photos
6. Aligning cameras using the uploaded information (run_camera_alignment() from https://github.com/agisoft-llc/metashape-scripts/blob/master/src/quick_layout.py)
7. Defining a bounding box which encompasses a large enough domain
8. Processing sparse cloud (chunk.triangulatePoints())
9. Building depth maps & dense cloud

However, when I get to step 9, I get the error message 'Zero resolution'. This makes me think that something is wrong with how I define the camera calibration parameters, since the bounding box looks OK and the cameras are properly imported with the wanted positions and view angles. Here is my script for defining the camera calibration parameters (my images are taken with a Canon DSLR with a 22.3 x 14.9 mm sensor and a fixed focal length of 18 mm):

Code: [Select]
for camera in chunk.cameras:
    sensor = camera.sensor
    new_sensor = chunk.addSensor()
   
    new_sensor.focal_length = 18 #in mm
    new_sensor.height = 4000 # in pixels
    new_sensor.width = 6000 # in pixels
    new_sensor.pixel_height = new_sensor.focal_length*14.9/new_sensor.height
    new_sensor.pixel_width = new_sensor.focal_length*22.3/new_sensor.width
    new_sensor.pixel_size = Metashape.Vector([new_sensor.pixel_height, new_sensor.pixel_width])
    new_sensor.type = Metashape.Sensor.Type.Frame
   
    cal = new_sensor.calibration
    cal.cx = 3000.5
    cal.cy = 2000.5
    cal.height = new_sensor.height
    cal.width = new_sensor.width
    cal.f = new_sensor.focal_length
    cal.k1 = 0
    cal.k2 = 0
    cal.k3 = 0
    cal.k4 = 0
    cal.p1 = 0
    cal.p2 = 0
    cal.p3 = 0
    cal.p4 = 0

    new_sensor.user_calib = cal
    new_sensor.calibration = cal
    new_sensor.fixed = True
   
    camera.sensor = new_sensor

Am I missing something here?

Thanks for the help!


Paulo

  • Hero Member
  • *****
  • Posts: 1302
    • View Profile
Re: Camera calibration for Multi View Stereo: Zero resolution error
« Reply #1 on: April 26, 2021, 09:06:56 PM »
Hi MarinK,

I think you are trying to define an ideal camera: f exactly 18 mm, principal point exactly centered on the image, and no distortion. I doubt it, unless your system produces synthetic ideal images...

Anyway, in your camera definition you should change:
Code: [Select]
    new_sensor.pixel_height = new_sensor.focal_length*14.9/new_sensor.height
    new_sensor.pixel_width = new_sensor.focal_length*22.3/new_sensor.width
with
Code: [Select]
    new_sensor.pixel_height = 14.9/new_sensor.height
    new_sensor.pixel_width = 22.3/new_sensor.width
supposing your sensor size is 22.3 mm by 14.9 mm

Also, in the calibration definition, use cal.f = new_sensor.focal_length / new_sensor.pixel_height.
And as the PP is centered, cal.cx = cal.cy = 0 (I do not know where the 0.5 comes from).

But unless you know that these are 'ideal' images, I would definitely not fix the calibration...

Or better: if you just define the new sensor's focal length, pixel height and pixel width, then the system will automatically fill in the calibration f and b1 (affinity, since pixel height != pixel width), as seen in the attachment.
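The corrected numbers can be checked in plain Python, without Metashape; the 22.3 x 14.9 mm sensor, 6000 x 4000 px images and 18 mm focal length are taken from the thread:

```python
# Derive pixel size (mm/px) and the calibration focal length in pixels
# for a 22.3 x 14.9 mm sensor, 6000 x 4000 px images, f = 18 mm.
sensor_w_mm, sensor_h_mm = 22.3, 14.9
img_w_px, img_h_px = 6000, 4000
f_mm = 18.0

pixel_width = sensor_w_mm / img_w_px    # mm per pixel, horizontally
pixel_height = sensor_h_mm / img_h_px   # mm per pixel, vertically

# The calibration f is expressed in pixels, not mm:
f_px = f_mm / pixel_height

print(round(pixel_width, 6), round(pixel_height, 6), round(f_px, 1))
# prints: 0.003717 0.003725 4832.2
```

So with this camera, cal.f should come out near 4832 px rather than 18.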

Hope this can be useful,
« Last Edit: April 26, 2021, 09:27:51 PM by Paulo »
Best Regards,
Paul Pelletier,
Surveyor

MarinK

Re: Camera calibration for Multi View Stereo: Zero resolution error
« Reply #2 on: April 27, 2021, 10:24:20 AM »
Hi Paulo,

Thanks a lot for this very swift reply. I have followed your instructions (including not forcing the camera calibration, just defining the focal length, width and height), but I still get a 'Zero resolution' error when reaching the point cloud calculation, which makes me think there might be an error somewhere else.

This is the script that I am currently using:

Code: [Select]
import os
import Metashape
import csv
import math
import copy
import time
import statistics

# following https://github.com/agisoft-llc/metashape-scripts/blob/master/src/quick_layout.py

# Checking compatibility
compatible_major_version = "1.7"
found_major_version = ".".join(Metashape.app.version.split('.')[:2])
if found_major_version != compatible_major_version:
    raise Exception("Incompatible Metashape version: {} != {}".format(found_major_version, compatible_major_version))

from PySide2.QtGui import *
from PySide2.QtCore import *
from PySide2.QtWidgets import *

def time_measure(func):
    def wrapper(*args, **kwargs):
        t1 = time.time()
        res = func(*args, **kwargs)
        t2 = time.time()
        print("Finished processing in {} sec.".format(t2 - t1))
        return res

    return wrapper


def show_message(msg):
    msgBox = QMessageBox()
    print(msg)
    msgBox.setText(msg)
    msgBox.exec()


def check_chunk(chunk):
    if chunk is None or len(chunk.cameras) == 0:
        show_message("Empty chunk!")
        return False

    if chunk.crs is None:
        show_message("Initialize chunk coordinate system first")
        return False

    return True


def get_antenna_transform(sensor):
    location = sensor.antenna.location
    if location is None:
        location = sensor.antenna.location_ref
    rotation = sensor.antenna.rotation
    if rotation is None:
        rotation = sensor.antenna.rotation_ref
    return Metashape.Matrix.Diag((1, -1, -1, 1)) * Metashape.Matrix.Translation(location) * Metashape.Matrix.Rotation(Metashape.Utils.ypr2mat(rotation))


def init_chunk_transform(chunk):
    if chunk.transform.scale is not None:
        return
    chunk_origin = Metashape.Vector([0, 0, 0])
    for c in chunk.cameras:
        if c.reference.location is None:
            continue
        chunk_origin = chunk.crs.unproject(c.reference.location)
        break

    chunk.transform.scale = 1
    chunk.transform.rotation = Metashape.Matrix.Diag((1, 1, 1))
    chunk.transform.translation = chunk_origin


def estimate_rotation_matrices(chunk):
    groups = copy.copy(chunk.camera_groups)

    groups.append(None)
    for group in groups:
        group_cameras = list(filter(lambda c: c.group == group, chunk.cameras))

        if len(group_cameras) == 0:
            continue

        if len(group_cameras) == 1:
            if group_cameras[0].reference.rotation is None:
                group_cameras[0].reference.rotation = Metashape.Vector([0, 0, 0])
            continue

        for idx, c in enumerate(group_cameras[0:-1]):
            next_camera = group_cameras[idx + 1]

            if c.reference.rotation is None:
                if c.reference.location is None or next_camera.reference.location is None:
                    continue

                prev_location = chunk.crs.unproject(c.reference.location)
                next_location = chunk.crs.unproject(next_camera.reference.location)

                direction = chunk.crs.localframe(prev_location).mulv(next_location - prev_location)

                yaw = math.degrees(math.atan2(direction.y, direction.x)) + 90
                if yaw < 0:
                    yaw = yaw + 360

                c.reference.rotation = Metashape.Vector([yaw, 0, 0])

        if group_cameras[-1].reference.rotation is None and group_cameras[-1].reference.location is not None:
            group_cameras[-1].reference.rotation = group_cameras[-2].reference.rotation


@time_measure
def align_cameras(chunk):
    init_chunk_transform(chunk)

    estimate_rotation_matrices(chunk)

    for c in chunk.cameras:
        if c.transform is not None:
            continue

        location = c.reference.location
        if location is None:
            continue

        rotation = c.reference.rotation
        if rotation is None:
            continue

        location = chunk.crs.unproject(location)  # location in ECEF
        rotation = chunk.crs.localframe(location).rotation().t() * Metashape.Utils.euler2mat(rotation, chunk.euler_angles) # rotation matrix in ECEF

        transform = Metashape.Matrix.Translation(location) * Metashape.Matrix.Rotation(rotation)
        transform = chunk.transform.matrix.inv() * transform * get_antenna_transform(c.sensor).inv()

        c.transform = Metashape.Matrix.Translation(transform.translation()) * Metashape.Matrix.Rotation(transform.rotation())


def run_camera_alignment():
    print("Alignment started...")

    doc = Metashape.app.document
    chunk = doc.chunk

    if not check_chunk(chunk):
        return

    try:
        align_cameras(chunk)
    except Exception as e:
        print(e)

    print("Alignment finished!")

   

global doc
doc = Metashape.app.document
path = "/path/2/file"
photo_list = list(...)
doc.save(path)

# use the active chunk
chunk = doc.chunk

# load images to chunk
chunk.addPhotos(photo_list)

# Load camera position & view angles
chunk.importReference(path=''.join([resultsdir,'IMG_ref.csv']),
                    format=Metashape.ReferenceFormatCSV,
                    columns='nxyzXYZabcABC',delimiter=",")

#define coordinate system
chunk.crs = Metashape.CoordinateSystem("EPSG::32646")

doc.save(path)

# Import calibration parameters of cameras
for camera in chunk.cameras:
    sensor = camera.sensor
    new_sensor = chunk.addSensor()
   
    new_sensor.focal_length = 18 #in mm
    new_sensor.height = 4000 # in pixels
    new_sensor.width = 6000 # in pixels
    new_sensor.pixel_height = 14.9/new_sensor.height
    new_sensor.pixel_width = 22.3/new_sensor.width
    new_sensor.pixel_size = Metashape.Vector([new_sensor.pixel_height, new_sensor.pixel_width])
    new_sensor.type = Metashape.Sensor.Type.Frame
       
    camera.sensor = new_sensor

doc.save(path)

# Match photos
accuracy = 0  # equivalent to highest accuracy
keypoints = 200000 #align photos key point limit
tiepoints = 20000 #align photos tie point limit
chunk.matchPhotos(downscale=accuracy, generic_preselection = True,reference_preselection=True,\
                  filter_mask = False, keypoint_limit = keypoints, tiepoint_limit = tiepoints)

doc.save(path)

# Align cameras using uploaded camera position and view angles (following https://github.com/agisoft-llc/metashape-scripts/blob/master/src/quick_layout.py)
run_camera_alignment()


# Define: Bounding box around camera locations (based on https://www.agisoft.com/forum/index.php?topic=10102.0)

BUFFER = 10000 #percent

def cross(a, b):
    result = Metashape.Vector([a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x])
    return result.normalized()

new_region = Metashape.Region()
xcoord = Metashape.Vector([10E10, -10E10])
ycoord = Metashape.Vector([10E10, -10E10])
avg = [[],[]]
T = chunk.transform.matrix
s = chunk.transform.matrix.scale()
crs = chunk.crs
z = Metashape.Vector([0,0])

for camera in chunk.cameras:
    if camera.transform:
        coord = crs.project(T.mulp(camera.center))
        xcoord[0] = min(coord.x, xcoord[0])
        xcoord[1] = max(coord.x, xcoord[1])
        ycoord[0] = min(coord.y, ycoord[0])
        ycoord[1] = max(coord.y, ycoord[1])
        z[0] += coord.z
        z[1] += 1
        avg[0].append(coord.x)
        avg[1].append(coord.y)
       
z = z[0] / z[1]
avg = Metashape.Vector([statistics.median(avg[0]), statistics.median(avg[1]), z])

corners = [Metashape.Vector([xcoord[0], ycoord[0], z]),
           Metashape.Vector([xcoord[0], ycoord[1], z]),
           Metashape.Vector([xcoord[1], ycoord[1], z]),
           Metashape.Vector([xcoord[1], ycoord[0], z])]
corners = [T.inv().mulp(crs.unproject(x)) for x in list(corners)]

side1 = corners[0] - corners[1]
side2 = corners[0] - corners[-1]
side1g = T.mulp(corners[0]) - T.mulp(corners[1])
side2g = T.mulp(corners[0]) - T.mulp(corners[-1])
side3g = T.mulp(corners[0]) - T.mulp(Metashape.Vector([corners[0].x, corners[0].y, 0]))
new_size = ((100 + BUFFER) / 100) * Metashape.Vector([side2g.norm()/s, side1g.norm()/s, 3*side3g.norm() / s]) ##

xcoord, ycoord, z = T.inv().mulp(crs.unproject(Metashape.Vector([sum(xcoord)/2., sum(ycoord)/2., z - 2 * side3g.z]))) #
new_center = Metashape.Vector([xcoord, ycoord, z]) #by 4 corners

horizontal = side2
vertical = side1
normal = cross(vertical, horizontal)
horizontal = -cross(vertical, normal)
vertical = vertical.normalized()

R = Metashape.Matrix([horizontal, vertical, -normal])
new_region.rot = R.t()

new_region.center = new_center
new_region.size = new_size
chunk.region = new_region

# Process sparse cloud
chunk.triangulatePoints()



#building dense cloud
chunk.buildDepthMaps(downscale=4, filter_mode=Metashape.MildFiltering)
chunk.buildDenseCloud()

print("Script finished")


Any ideas welcome! Thanks!

Paulo

Re: Camera calibration for Multi View Stereo: Zero resolution error
« Reply #3 on: April 27, 2021, 12:22:21 PM »
Hey MarinK,

just wondering why you are using the highest alignment accuracy with a 200,000 keypoint limit. Unless you have very sharp images, I do not think that is necessary. On my modest GPU it just overwhelms it, and Metashape exits without finishing the matchPhotos part...

However, I would add an optimizeCameras call after triangulatePoints, so that the camera calibration parameters (f, cx, cy, k1, k2, k3, p1, p2 and b1) can be adjusted.
Best Regards,
Paul Pelletier,
Surveyor

MarinK

Re: Camera calibration for Multi View Stereo: Zero resolution error
« Reply #4 on: April 27, 2021, 03:13:40 PM »
Hi Paulo,

Thanks for this input. I had used these parameters in the GUI, where it worked, so I didn't think about changing them - I tried again with lower values and get the same error.

It seems that detecting points with matchPhotos() works, and so do the camera alignment and bounding box definition, but no tie points are selected in the triangulation step or displayed in the model (attached is a screenshot of the GUI).

Cheers

Marin



Paulo

Re: Camera calibration for Multi View Stereo: Zero resolution error
« Reply #5 on: April 27, 2021, 06:39:33 PM »
This may just be a hunch,

but try putting the matchPhotos call after run_camera_alignment and before triangulatePoints().

Maybe that is the trick :)

PS. On second thought, I do not think this is the problem... The other thing is the CRS. You import camera positions and orientations from a CSV. In what CRS are the camera positions?

If they are in EPSG:32646, then I would recommend defining the chunk crs before the importReference call...

Update: neither of the two above is the problem. I ran a test on 6 images: loading them, importing the reference, matching them, setting the alignment, setting the bounding box (region) and triangulating points...

The result shows the bounding box offset in height from the tie points, so that it does not encompass any of them. See the attachment with a front view of the point cloud... That is why you get the 'Zero resolution' error when running Build Dense Cloud.

I think you should review the code defining new_region.center or new_region.size.
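The kind of containment check this review amounts to can be sketched in plain Python. The values below are hypothetical, and only the 1D (per-axis) core of the min/max/buffer logic from the script is reproduced; in the actual bug it is the z axis that fails this test:

```python
# Toy, per-axis version of the region sizing in the script: the region is
# centered on the midpoint of the camera extents, and its size is the
# extent scaled by (100 + BUFFER) / 100.
BUFFER = 10000  # percent, as in the script

xs = [100.0, 150.0, 260.0]            # hypothetical camera coordinates on one axis
extent = max(xs) - min(xs)            # 160.0
center = (min(xs) + max(xs)) / 2.0    # midpoint of the extents
size = (100 + BUFFER) / 100 * extent  # buffered region size

# A correctly placed region must contain every camera coordinate:
half = size / 2.0
assert all(center - half <= x <= center + half for x in xs)
print(center, size)
# prints: 180.0 16160.0
```

Applying the same assertion to the z component of new_region.center and new_region.size (against the tie point heights) would have caught the offset directly.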
« Last Edit: April 28, 2021, 04:54:23 AM by Paulo »
Best Regards,
Paul Pelletier,
Surveyor

Paulo

Re: Camera calibration for Multi View Stereo: Zero resolution error
« Reply #6 on: April 28, 2021, 04:14:48 AM »
Hi MarinK,

I was able to change the code so that the region (bounding box) now encompasses the cloud.
Changed:
Code: [Select]
corners = [T.inv().mulp(crs.unproject(x)) for x in list(corners)]

side1 = corners[0] - corners[1]
side2 = corners[0] - corners[-1]
side1g = T.mulp(corners[0]) - T.mulp(corners[1])
side2g = T.mulp(corners[0]) - T.mulp(corners[-1])
side3g = T.mulp(corners[0]) - T.mulp(Metashape.Vector([corners[0].x, corners[0].y, 0]))
new_size = ((100 + BUFFER) / 100) * Metashape.Vector([side2g.norm()/s, side1g.norm()/s, 3*side3g.norm() / s]) ##

xcoord, ycoord, z = T.inv().mulp(crs.unproject(Metashape.Vector([sum(xcoord)/2., sum(ycoord)/2., z - 2 * side3g.z]))) #
With:
Code: [Select]
# corners = [T.inv().mulp(crs.unproject(x)) for x in list(corners)]

side1 = T.inv().mulp(crs.unproject(corners[0])) - T.inv().mulp(crs.unproject(corners[1]))
side2 = T.inv().mulp(crs.unproject(corners[0])) - T.inv().mulp(crs.unproject(corners[-1]))
side1g = corners[0] - corners[1]
side2g = corners[0] - corners[-1]
side3g = corners[0] - Metashape.Vector([corners[0].x, corners[0].y, 0])
new_size = ((100 + BUFFER) / 100) * Metashape.Vector([side2g.norm()/s, side1g.norm()/s, 2*side3g.norm() / s]) ##

xcoord, ycoord, z = T.inv().mulp(crs.unproject(Metashape.Vector([sum(xcoord)/2., sum(ycoord)/2., z - 0.5*side3g.z]))) #

And the resulting bounding box is seen in the attachment...
« Last Edit: April 28, 2021, 05:28:25 AM by Paulo »
Best Regards,
Paul Pelletier,
Surveyor

MarinK

Re: Camera calibration for Multi View Stereo: Zero resolution error
« Reply #7 on: April 29, 2021, 04:55:52 PM »
Dear Paulo,

Thanks for all this, and apologies for the late reply. Centering the bounding box that way is a good tip, thanks!

However, I still get the 'Zero resolution' problem, which after a few tests I think comes from my camera viewing angles: after loading my camera positions and view angles and aligning the cameras, I did get a point cloud at that stage, but with very high errors on the view angles. This makes me think that the definition of the yaw, pitch and roll angles I use differs from the one used in Agisoft: I suspect that in Agisoft these are the angles of the AIRCRAFT, while I had used the CAMERA angles, since I am doing TERRESTRIAL SfM photogrammetry rather than using a UAV.

So in my project I defined yaw as the camera view direction (with 0 deg = North), and the pitch and roll of the camera accordingly. This probably doesn't make sense for Agisoft, does it? Do you know what corrections I need to make to get it working?

I'm hoping that this is the issue, otherwise I really have no idea...

Thanks!


Paulo

Re: Camera calibration for Multi View Stereo: Zero resolution error
« Reply #8 on: April 29, 2021, 05:38:06 PM »
MarinK,

that could be it. Could you show the camera orientation angles that you are importing into MS from your terrestrial cameras?

Yaw = 0 at North is correct. But let's say you have a camera looking straight at the horizon in the North direction. Then the orientation angles in Metashape's convention would be Yaw = 0, Pitch = 90 and Roll = 0 (camera in landscape)...

The example shows an image taken looking North, inclined 60 degrees from vertical, in landscape view (yaw = 2.9 deg, pitch = 59.9 deg, roll = 0.2 deg).
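This convention shift can be written as a tiny helper. The function name is hypothetical, and it assumes the source pitch is measured from the horizon (0 at the horizon, positive looking up), which matches the "add 90 degrees" advice later in the thread:

```python
# Convert a camera-style pitch (0 deg = horizon, positive up) to
# Metashape's convention (90 deg = horizon, 0 deg = pointing straight down).
def to_metashape_pitch(camera_pitch_deg):
    return camera_pitch_deg + 90.0

# A camera looking straight at the horizon:
print(to_metashape_pitch(0.0))   # prints: 90.0
# A camera pointing 10 degrees over the horizon:
print(to_metashape_pitch(10.0))  # prints: 100.0
```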
« Last Edit: April 29, 2021, 06:00:43 PM by Paulo »
Best Regards,
Paul Pelletier,
Surveyor

MarinK

Re: Camera calibration for Multi View Stereo: Zero resolution error
« Reply #9 on: April 29, 2021, 06:04:35 PM »
Hi Paulo,

That makes sense - here are the angles I use, along with a picture of the setup (camera is in the black box, pointing towards left).

So here my yaw angle corresponds to the direction at which camera is pointing relative to North, pitch is inclination of camera in the direction at which it is pointing and roll is inclination in perpendicular direction.

So I guess in this case I should at least take pitch = 90-pitch and possibly swap yaw and roll?

Cheers,

Marin

Paulo

Re: Camera calibration for Multi View Stereo: Zero resolution error
« Reply #10 on: April 29, 2021, 06:18:35 PM »
Hey MarinK,

if camera0028 is pointing 10 degrees over the horizon, then I would just add 90 degrees to all your pitch values to get them into the MS convention. Yaw = 234 deg means the camera is pointing roughly WSW, and roll = 5 deg means the camera is basically in landscape orientation...
Best Regards,
Paul Pelletier,
Surveyor

Paulo

Re: Camera calibration for Multi View Stereo: Zero resolution error
« Reply #11 on: April 29, 2021, 09:06:27 PM »
I simulated 3 images with the same positions and orientations that you supplied (except 90 deg was added to the pitch). I set the EO with quick layout and got the situation shown in the attachment: images looking WSW and inclined about 10 degrees over the horizon... Does this correspond to reality?
Best Regards,
Paul Pelletier,
Surveyor

MarinK

Re: Camera calibration for Multi View Stereo: Zero resolution error
« Reply #12 on: April 29, 2021, 10:21:45 PM »
Hi Paul,

Not quite the same positions, but I also realized in the meantime that my reference was different from Agisoft's - I had yaw = 0° when facing East instead of when facing North...

I updated all this and it now works!!
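The two corrections combined can be sketched as one hypothetical helper. It assumes the original yaw was measured clockwise with 0 = East (so facing East is 90 deg in the 0 = North convention) and the original pitch from the horizon; if the original yaw instead increased counterclockwise, the sign of the yaw term would flip:

```python
# Combine the two fixes from this thread: re-reference yaw from
# East-zero to North-zero, and shift pitch so the horizon is 90 deg.
def to_metashape_angles(yaw_east_deg, pitch_horizon_deg, roll_deg):
    yaw = (yaw_east_deg + 90.0) % 360.0  # 0 = East becomes 90 = East
    pitch = pitch_horizon_deg + 90.0     # horizon becomes pitch = 90
    return yaw, pitch, roll_deg

# Level camera facing East at the horizon:
print(to_metashape_angles(0.0, 0.0, 0.0))  # prints: (90.0, 90.0, 0.0)
```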

Thanks so much for the help throughout this process, and apologies for not checking the viewing angles first - it really slipped my mind...

Thanks!

Marin