
Author Topic: Projecting from camera/image space to point_cloud/object space  (Read 6984 times)

kcm

  • Newbie
  • Posts: 12
Projecting from camera/image space to point_cloud/object space
« on: November 20, 2020, 06:14:31 PM »
Hi,

I need to perform two-way operations: obtaining the image coordinates of tie point projections, and the inverse, estimating object-space coordinates for arbitrary image points. To obtain the tie point projections onto an image, I am using an iterated version of the following in an existing project:
Code:
doc = Metashape.app.document
chunk = doc.chunk
camera0 = chunk.cameras[0]
projections = chunk.point_cloud.projections  # tie point projections per camera
proj = projections[camera0][0]
print(proj.coord)

I realize I can obtain the camera transform as:
Code:
camera0.transform

But I cannot figure out how to apply it to project an image pixel to a point cloud point. I have tried using Camera.unproject(Vector) but don't obtain the correct result (I am not clear on the directionality of the camera transform matrix and Camera.unproject).

Thank you for the help.

K

Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • Posts: 15083
Re: Projecting from camera/image space to point_cloud/object space
« Reply #1 on: November 21, 2020, 12:09:39 PM »
Hello kcm,

Please check the following general approach for getting the 3D point coordinates from the 2D image coordinates and vice versa (assuming that the chunk is georeferenced). Regarding the directions: camera.transform maps camera coordinates to the chunk internal coordinate system, chunk.transform.matrix maps internal coordinates to geocentric coordinates, and chunk.crs.project / chunk.crs.unproject convert between geocentric coordinates and the chunk coordinate system:

Code:
import Metashape
doc = Metashape.app.document
chunk = doc.chunk
T = chunk.transform.matrix
crs = chunk.crs

#for projecting 3D point to the camera and obtaining 2D coordinates:
camera0 = chunk.cameras[0] #first camera
point = Metashape.Vector([x, y, z]) #point of interest in geographic coord
point_internal = T.inv().mulp(crs.unproject(point))
coords_2D = camera0.project(point_internal)


#for projecting 2D point in image coordinates to available 3D surface (mesh model in this case):
coords_2D = Metashape.Vector([X, Y])
point_internal  = chunk.model.pickPoint(camera0.center, camera0.unproject(coords_2D))
point3D_world = chunk.crs.project(chunk.transform.matrix.mulp(point_internal))
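
As a quick check of the transform directions you can round-trip a point through both operations. This is just a sketch that reuses the variables above; it assumes the chunk has a mesh model and that the point is actually visible in camera0:
Code:
# round-trip sanity check (sketch)
pt_world = Metashape.Vector([x, y, z])  # placeholder coordinates in the chunk CRS
uv = camera0.project(T.inv().mulp(crs.unproject(pt_world)))  # 2D pixel coordinates
hit = chunk.model.pickPoint(camera0.center, camera0.unproject(uv))  # back onto the mesh
print(crs.project(T.mulp(hit)))  # should be close to pt_world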

In case your task is to get 3D coordinates of the tie points and their projections on the individual images, you can follow a different approach, as this information is already present in the project:

Code:
#export format:
#point_ID X Y Z
#[camera_label x_proj y_proj]

import Metashape, math

doc = Metashape.app.document
chunk = doc.chunk
M = chunk.transform.matrix
crs = chunk.crs
point_cloud = chunk.point_cloud
projections = point_cloud.projections
points = point_cloud.points
npoints = len(points)
tracks = point_cloud.tracks

path = Metashape.app.getSaveFileName("Specify export path and filename:", filter ="Text / CSV (*.txt *.csv);;All files (*.*)")
file = open(path, "wt")
print("Script started...")

point_ids = [-1] * len(point_cloud.tracks)
for point_id in range(0, npoints):
    point_ids[points[point_id].track_id] = point_id

points_proj = {}

for photo in chunk.cameras:

    if not photo.transform:
        continue
    T = photo.transform.inv()
    calib = photo.sensor.calibration

    for proj in projections[photo]:
        track_id = proj.track_id
        point_id = point_ids[track_id]

        if point_id < 0:
            continue
        if not points[point_id].valid:  # skipping invalid points
            continue

        point_index = point_id
        if point_index in points_proj.keys():
            x, y = proj.coord
            points_proj[point_index] = (points_proj[point_index] + "\n" + photo.label + "\t{:.2f}\t{:.2f}".format(x, y))
        else:
            x, y = proj.coord
            points_proj[point_index] = ("\n" + photo.label + "\t{:.2f}\t{:.2f}".format(x, y))

for point_index in range(npoints):

    if not points[point_index].valid:
        continue

    coord = M * points[point_index].coord
    coord.size = 3
    if chunk.crs:
        X, Y, Z = chunk.crs.project(coord)
    else:
        X, Y, Z = coord

    line = points_proj[point_index]

    file.write("{}\t{:.6f}\t{:.6f}\t{:.6f}\t{:s}\n".format(point_index, X, Y, Z, line))

file.close()
print("Finished")
Best regards,
Alexey Pasumansky,
Agisoft LLC

kcm

  • Newbie
  • Posts: 12
Re: Projecting from camera/image space to point_cloud/object space
« Reply #2 on: November 23, 2020, 05:41:09 AM »
Alexey,

Thank you, exactly what I am after. I realized after my post that part of the answer was contained in an adjacent post (https://www.agisoft.com/forum/index.php?topic=12780.0). From that post I ended up here:
Code:
# for projecting 2D point in image coordinates to available 3D surface (sparse cloud in this case):
sensor0 = camera0.sensor
pc_pred = chunk.point_cloud.pickPoint(camera0.center, camera0.transform.mulp(sensor0.calibration.unproject(image_point)))
pc_pred.size = 4
pc_pred[3] = 1.
chunk.crs.project((chunk.transform.matrix*pc_pred)[0:3])
It's good to know that the unproject via the sensor calibration is identical in this case; both formulations generated essentially the same coordinates. Under what circumstances is the sensor-based unproject needed, as opposed to unprojecting with the specific camera?

Also, is the sparse cloud the source of geo-registration and the reference for all subsequent products in an orthomosaic workflow (e.g., dense cloud, mesh, DEM)?

Kris


Alexey Pasumansky

  • Agisoft Technical Support
  • Hero Member
  • Posts: 15083
Re: Projecting from camera/image space to point_cloud/object space
« Reply #3 on: December 04, 2020, 06:55:17 PM »
Hello Kris,

I think you do not have to use the sensor unproject approach - that implementation appears to be an old workaround; now the task should be solved with the camera.project / camera.unproject methods.
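
For example, your sensor-based line could be written with camera.unproject directly. This is only a sketch; it assumes image_point is a 2D Metashape.Vector in pixel coordinates and that the sparse cloud supports pickPoint in your version:
Code:
# same ray, built with camera.unproject instead of sensor calibration + camera.transform (sketch)
pc_pred = chunk.point_cloud.pickPoint(camera0.center, camera0.unproject(image_point))
print(chunk.crs.project(chunk.transform.matrix.mulp(pc_pred)))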

As for your second question, what do you mean by a reference? The tie points (the points of the sparse cloud) define, for example, which cameras overlap and should be used for depth maps calculation.
Best regards,
Alexey Pasumansky,
Agisoft LLC

remco.kootstra

  • Newbie
  • Posts: 6
Re: Projecting from camera/image space to point_cloud/object space
« Reply #4 on: April 05, 2023, 12:24:32 PM »
Hi Alexey,

I just came across this post and it seems to be the solution I'm looking for.
However, I can't get the script running (possibly because it is outdated).

We are using Metashape V2.0.1. Can you tell us if we need to change the code below?
Code:
#export format:
#point_ID X Y Z
#[camera_label x_proj y_proj]

import Metashape, math

doc = Metashape.app.document
chunk = doc.chunk
M = chunk.transform.matrix
crs = chunk.crs
point_cloud = chunk.point_cloud
projections = point_cloud.projections
points = point_cloud.points
npoints = len(points)
tracks = point_cloud.tracks

path = Metashape.app.getSaveFileName("Specify export path and filename:", filter ="Text / CSV (*.txt *.csv);;All files (*.*)")
file = open(path, "wt")
print("Script started...")

point_ids = [-1] * len(point_cloud.tracks)
for point_id in range(0, npoints):
    point_ids[points[point_id].track_id] = point_id

points_proj = {}

for photo in chunk.cameras:

    if not photo.transform:
        continue
    T = photo.transform.inv()
    calib = photo.sensor.calibration

    for proj in projections[photo]:
        track_id = proj.track_id
        point_id = point_ids[track_id]

        if point_id < 0:
            continue
        if not points[point_id].valid:  # skipping invalid points
            continue

        point_index = point_id
        if point_index in points_proj.keys():
            x, y = proj.coord
            points_proj[point_index] = (points_proj[point_index] + "\n" + photo.label + "\t{:.2f}\t{:.2f}".format(x, y))
        else:
            x, y = proj.coord
            points_proj[point_index] = ("\n" + photo.label + "\t{:.2f}\t{:.2f}".format(x, y))

for point_index in range(npoints):

    if not points[point_index].valid:
        continue

    coord = M * points[point_index].coord
    coord.size = 3
    if chunk.crs:
        X, Y, Z = chunk.crs.project(coord)
    else:
        X, Y, Z = coord

    line = points_proj[point_index]

    file.write("{}\t{:.6f}\t{:.6f}\t{:.6f}\t{:s}\n".format(point_index, X, Y, Z, line))

file.close()
print("Finished")

Thanks and looking forward to your reply.

Kind regards,

Remco

Paulo

  • Hero Member
  • Posts: 1354
Re: Projecting from camera/image space to point_cloud/object space
« Reply #5 on: April 05, 2023, 05:54:49 PM »
Hi Remco,

Just replace
Code:
point_cloud = chunk.point_cloud
with
Code:
point_cloud = chunk.tie_points
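
If you want the same script to run under both the old and the new API, a small fallback like this should also do (the hasattr check is just my suggestion):
Code:
# use tie_points in Metashape 2.x, fall back to point_cloud in older versions
point_cloud = chunk.tie_points if hasattr(chunk, "tie_points") else chunk.point_cloud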
Should work,
Best Regards,
Paul Pelletier,
Surveyor

smiller

  • Newbie
  • Posts: 28
Re: Projecting from camera/image space to point_cloud/object space
« Reply #6 on: July 27, 2024, 12:44:08 AM »
Is there any way to do this with an orthomosaic image? Find the camera based on a point on the orthomosaic?