Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Daniele

Pages: [1]
Bug Reports / Re: Texture missing when using GPU
« on: August 03, 2022, 05:10:45 PM »
Dear Alexey,
Just to let you know that, after getting in touch with your support team, I solved my problem: it was not related to the latest version of Metashape but to faulty NVIDIA drivers.

So I completely uninstalled the NVIDIA driver with Display Driver Uninstaller and manually installed an older driver from here:
Texture blending now works again with the GPU enabled, and it also works fine with the latest version 1.8 of Metashape.

Thanks for your support, and sorry for the trouble.

Best regards,

Bug Reports / Re: Texture missing when using GPU
« on: August 02, 2022, 12:12:19 AM »
Many thanks Alexey for your great help!
Following your suggestion, I have just sent the test project to

I get the same problem with your sample imagery as well.


Bug Reports / Re: Texture missing when using GPU
« on: August 01, 2022, 06:37:11 PM »
Of course! Please find the log attached.

Many thanks, Alexey

Bug Reports / Texture missing when using GPU
« on: August 01, 2022, 03:00:15 PM »
Dear Alexey,
I noticed that after the latest update of Metashape (1.8.4), textures are not generated when GPU processing is enabled.
 I have the following system:
OpenGL Version: 4.6.0 NVIDIA 516.59
CPU: Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz

However, when using the CPU, the textures are fine.
Do you have any suggestions for this issue so that I can get back to using my GPU correctly?

Many thanks for your time and support as always.

Best regards,

General / Re: Texture problems with large project
« on: July 04, 2022, 06:42:55 PM »
Hello Bzuco,
many thanks for your reply and solution. As you suggested, I used 8192 for the texture size and 4 for the page count, and now the texture is properly applied to the entire model. Thank you again for your valuable help!
Do you have a rule of thumb for setting these parameters according to the size of the model?
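Not an official Agisoft rule, but one back-of-envelope way to pick these values is to match the texel size to the ground sample distance (GSD) of the photos and count how many texture pages that requires. A sketch (the function name, the assumed ~70% UV packing efficiency, and the example numbers are all my own assumptions):

```python
import math

def texture_pages(surface_area_m2, gsd_m, page_size=8192, packing=0.7):
    """Estimate how many page_size x page_size texture pages are needed
    to cover surface_area_m2 at roughly one texel per gsd_m, assuming a
    typical ~70% UV packing efficiency (assumed value)."""
    texels_needed = surface_area_m2 / (gsd_m ** 2)   # total texels for the model
    usable_per_page = (page_size ** 2) * packing     # usable texels per texture page
    return max(1, math.ceil(texels_needed / usable_per_page))

# Example: ~500 m^2 of seabed captured at 2 mm ground sample distance
print(texture_pages(500.0, 0.002))  # 3
```

If the estimate comes out much larger than a handful of 8192-pixel pages, that is usually a sign the model should be split into chunks or the target GSD relaxed.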

All the best,

General / Texture problems with large project
« on: July 02, 2022, 11:53:03 AM »
Dear all,
I am writing here after searching the forum for a solution to my issue without success.

I have a very large underwater imagery dataset (2996 GoPro 10 photos) which I would like to use to reconstruct a diving site. Due to its 3D morphology (flat areas, boulders, overhangs), I have both oblique and nadir images to cover all the surfaces. I have no problem with the alignment (all cameras aligned successfully in low quality to save processing time). I filtered and cleaned the resulting point cloud and generated two meshes: one in high quality and another by decimating it.

The problem is the texturing process: I can't get a good texture on the mesh. I tried different mapping modes (generic, spherical, adaptive orthophoto) after normal map generation, but the results are the same. The images, even though they come from an action camera, are all sharp enough, so I don't think the problem is related to image quality. The shaded model looks better than the textured one.

I am working on an Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz with an NVIDIA GeForce GTX 1080 GPU. Since the project is huge (10 GB), I could upload the report to better show the workflow used; however, the attachment size limit prevents it.

I am attaching some screenshots to better illustrate these aspects.

Does anyone have suggestions to solve this issue?
Many thanks in advance for your help!


Hello Daniele,

The covariance matrix for the points of the sparse point cloud can be accessed by:
Code: [Select]
for point in chunk.point_cloud.points:
    cov_matrix = point.cov
With that you should be able to create your custom tie point cloud exporter.

You may adapt the following script for your needs, if the project is georeferenced:
Code: [Select]
import Metashape, math

path = Metashape.app.getSaveFileName("Specify the export file path:", filter = "Text file (*.txt);;All formats (*.*)")
file = open(path, "wt")

chunk = Metashape.app.document.chunk
T = chunk.transform.matrix
if chunk.transform.translation and chunk.transform.rotation and chunk.transform.scale:
    T = chunk.crs.localframe(T.mulp(chunk.region.center)) * T
R = T.rotation() * T.scale()

for point in chunk.point_cloud.points:
    if not point.valid:
        continue
    cov = point.cov
    coord = point.coord

    # transform coordinates and covariance into the georeferenced frame
    coord = T * coord
    cov = R * cov * R.t()
    u, s, v = cov.svd()
    var = math.sqrt(sum(s))  # variance vector length
    vect = u.col(0) * var

    file.write("{:6d}".format(point.track_id))
    file.write("\t{:.6f}\t{:.6f}\t{:.6f}\t{:.6f}".format(coord[0], coord[1], coord[2], var))
    file.write("\t{:.6f}\t{:.6f}\t{:.6f}\n".format(vect.x, vect.y, vect.z))

file.close()

print("Script finished.")

Many thanks again Alexey!!
The covariance matrix you used in the first part of the script is the same one that can be accessed through the camera optimization panel, isn't it?
Regarding the georeferencing of the project, is it sufficient to have GCPs with coordinates, or did you mean that the cameras should also have coordinates? This is a key point for me because underwater I can't use GPS to directly georeference the images, so I only have some GCPs distributed inside the mapping area.

Best regards,

Many thanks, Alexey, for your kind reply and great support!
I ran the script provided and in the output I found the sigma values associated with the camera positions and markers. However, I was looking for different results, since in the M3C2 plugin of CloudCompare the required fields are the sigma values for the sparse point cloud (to build the precision maps). I apologize, as my first request was probably not very clear.
I found a very useful discussion and Python script concerning this aspect in the valuable paper of James et al. (2017), available here. From this paper, I read that some other software, such as 'Vision Measurement System' (VMS; http://www. provides point precision as a standard output, so I suppose that this feature could be very useful if integrated into future updates of Metashape. What do you think?

Thanks again for your time.

All the best,


Hi Alexey,
In the meantime, I ran the script and a text file was generated, so the script still works with Metashape. However, I checked and the reprojection error is not what I need: I would need precision estimates (sigma) of each point for the X, Y and Z components (sigmaX, sigmaY and sigmaZ). If you have any suggestions on how to achieve this, I would be grateful for your support.
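For reference, one way per-axis sigmas could be derived (my suggestion, assuming the per-point covariance is a 3x3 matrix in the target frame, as in the covariance export script earlier in this thread) is to take the square roots of its diagonal terms. A minimal sketch with an invented matrix:

```python
import math

# Hypothetical 3x3 point covariance matrix (row-major), standing in for
# what point.cov returns in the Metashape Python API; values are invented
# for illustration, in squared model units.
cov = [[4.0e-6, 1.0e-7, 0.0],
       [1.0e-7, 9.0e-6, 0.0],
       [0.0,    0.0,    2.5e-5]]

# Per-axis precision estimates are the square roots of the diagonal
# terms of the (transformed) covariance matrix.
sigma_x, sigma_y, sigma_z = (math.sqrt(cov[i][i]) for i in range(3))
print(round(sigma_x, 6), round(sigma_y, 6), round(sigma_z, 6))  # 0.002 0.003 0.005
```

Writing these three values per point, instead of the single variance length, would give exactly the sigmaX/sigmaY/sigmaZ columns the M3C2 precision maps expect.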

Thanks again

Python and Java API / Re: Export reprojection errors for each tie points
« on: August 25, 2021, 11:25:20 AM »
Hello emi1975,

Please use the updated script for these purposes:
Code: [Select]
# Compatibility - Agisoft PhotoScan Professional 1.2.4
# saves reprojection error for the tie points in the sparse cloud

# export format:
# point_index X-coord Y-coord Z-coord reproj_error

import PhotoScan
import math, time

doc = PhotoScan.app.document
chunk = doc.chunk
M = chunk.transform.matrix
crs = chunk.crs
point_cloud = chunk.point_cloud
projections = point_cloud.projections
points = point_cloud.points
npoints = len(points)
tracks = point_cloud.tracks

path = PhotoScan.app.getSaveFileName("Specify export path and filename:")
file = open(path, "wt")
print("Script started")

t0 = time.time()

points_coords = {}
points_errors = {}

for photo in chunk.cameras:

    if not photo.transform:
        continue

    T = photo.transform.inv()
    calib = photo.sensor.calibration

    point_index = 0
    for proj in projections[photo]:
        track_id = proj.track_id
        while point_index < npoints and points[point_index].track_id < track_id:
            point_index += 1
        if point_index < npoints and points[point_index].track_id == track_id:
            if not points[point_index].valid:
                continue

            # reprojection error for this projection, in image space
            coord = T * points[point_index].coord
            coord.size = 3
            dist = calib.error(coord, proj.coord).norm() ** 2
            v = M * points[point_index].coord
            v.size = 3

            # accumulate squared error and projection count per point
            if point_index in points_errors.keys():
                point_index = int(point_index)
                points_errors[point_index].x += dist
                points_errors[point_index].y += 1
            else:
                points_errors[point_index] = PhotoScan.Vector([dist, 1])

for point_index in range(npoints):

    if not points[point_index].valid:
        continue

    if chunk.crs:
        w = M * points[point_index].coord
        w.size = 3
        X, Y, Z = chunk.crs.project(w)
    else:
        X, Y, Z, w = M * points[point_index].coord

    # RMS reprojection error over all projections of the point
    error = math.sqrt(points_errors[point_index].x / points_errors[point_index].y)

    file.write("{:6d}\t{:.6f}\t{:.6f}\t{:.6f}\t{:.6f}\n".format(point_index, X, Y, Z, error))

file.close()

t1 = time.time()

print("Script finished in " + str(int(t1-t0)) + " seconds.")

Hi Alexey,
I have started a new post here concerning this subject... Maybe I was wrong and should have added a reply here instead. Let me know if that's OK and, if so, I will remove the other similar post. In the meantime, I ran this script and a text file was generated, so the script still works with Metashape. However, I checked and the reprojection error is not what I need: I would need precision estimates (sigma) of each point for the X, Y and Z components (sigmaX, sigmaY and sigmaZ).

Hello Alexey, or anyone who could help me :-),
I am totally new to this forum, so I apologize if this topic has already been addressed, but I can't find a solution to my issue.
I am using Metashape to build dense point clouds of underwater habitats and then perform a change detection analysis in CloudCompare. For this task, I would like to use the M3C2 plugin ( However, to fill in the Precision maps tab, which enables the calculation of detectable change using measurement precision values stored in scalar fields of the point clouds, I need this information stored in the cloud exported from Metashape. If I understood correctly, this precision information could be assumed to be the reprojection errors, as stated by Maria in this other topic:
However, because I am using Metashape 1.7.2, I suppose that the same script needs to be updated, am I right? If a new version is needed, could you please help me, because I am completely new to Python scripting :'(
Finally, do you suggest using the sparse point cloud or the dense point cloud for this computation? I read that the dense matching process does not optimise any aspects of the image network and therefore does not affect the underlying precision estimates; indeed, tie point precision could be used to represent the main measurement contribution to surface model precision. Could you confirm this?
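For context on how those precision values are used: M3C2 computes a 95% level of detection, and change smaller than it cannot be separated from noise. A simplified sketch of the formula from Lague et al. (2013), treating the sigmas as the standard errors of the averaged point positions along the normal (the function name is my own):

```python
import math

def lod95(sigma1_m, sigma2_m, reg_m=0.0):
    """95% level of detection: sigma1_m/sigma2_m are the precisions of
    the two clouds along the normal (m), reg_m is the co-registration
    error between the two epochs (m, assumed zero by default)."""
    return 1.96 * (math.sqrt(sigma1_m ** 2 + sigma2_m ** 2) + reg_m)

# Example: 3 mm and 4 mm precision, perfect registration
print(round(lod95(0.003, 0.004), 6))  # 0.0098
```

This shows why tie point sigmas matter: they enter the detection threshold directly, so overestimating precision makes spurious "change" appear significant.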

Many thanks for your time and support in advance.

All the best,
