
Show Posts

This section allows you to view all posts made by this member.


Messages - Daniele

1
General / Good point cloud but bad models, help to generate an accurate mesh
« on: September 06, 2023, 08:05:14 PM »
Hello guys,
after several trials I decided to ask for a suggestion here, since I am struggling with the reconstruction of complex underwater organisms, namely gorgonians (sea fans).
After masking the original imagery to remove the blue background, I get a very nice point cloud after manual editing and cleaning (see attached photo). However, I was not able to generate a good mesh model from this point cloud, even with interpolation disabled and high face counts. Building from depth maps, even at Ultra High quality settings, did not satisfy me either. Do you have any ideas for improving my results?
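For reference, the equivalent steps in the Metashape Python API look roughly like this, as a minimal sketch assuming the 1.8 API (downscale=1 corresponds to Ultra High depth maps; the filtering mode shown is only illustrative):
Code:
import Metashape

chunk = Metashape.app.document.chunk

# Depth maps at full resolution (Ultra High); mild filtering helps keep thin branches
chunk.buildDepthMaps(downscale=1, filter_mode=Metashape.MildFiltering)

# Mesh directly from depth maps, with interpolation disabled and a high face count
chunk.buildModel(source_data=Metashape.DepthMapsData,
                 surface_type=Metashape.Arbitrary,
                 interpolation=Metashape.DisabledInterpolation,
                 face_count=Metashape.HighFaceCount)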

Many thanks in advance.

All the best,
Daniele

2
General / Re: How to solve strange image alignment
« on: October 29, 2022, 08:08:29 PM »
Quote from: Paulo
Hello Daniele,

I am wondering at what speed your ROV is moving, and what is the exposure time of your pictures? It is possible that the GoPro has a rolling shutter, and maybe enabling rolling shutter compensation could help?




Hello Paulo,
thanks for your suggestion. The GoPro indeed has a rolling shutter, but the speed of the ROV was very slow, less than 1 m/s. I tried the rolling shutter compensation option, but the results remain the same.
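If useful, the compensation can also be toggled per calibration group from the Python console; a minimal sketch, assuming the Sensor.rolling_shutter attribute of the 1.8 API:
Code:
import Metashape

chunk = Metashape.app.document.chunk

# Enable rolling shutter compensation for every sensor (calibration group),
# then re-optimize so the extra parameters are actually estimated
for sensor in chunk.sensors:
    sensor.rolling_shutter = True
chunk.optimizeCameras()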

3
General / Re: How to solve strange image alignment
« on: October 28, 2022, 10:00:06 AM »
Hi Alexey,
I switched to auto and set the camera type to fisheye, but the point cloud generated with this setting is even stranger than before, without any definable shape.
I am attaching images of the adjusted parameters after alignment with both camera types (fisheye and frame).


4
General / Re: How to solve strange image alignment
« on: October 27, 2022, 08:27:58 PM »
Quote from: Alexey
Hello Daniele,

Can you please also share screenshots of the Initial and Adjusted tabs from the Camera Calibration dialogue, and also specify the parameters used for the Align Photos operation?


Hi Alexey,
thank you for taking the time to read this post and for helping me again. Your request has pointed me in what is probably the right direction.

In fact, in the Camera Calibration dialogue, if I set "auto" in the Initial parameters tab, the focal length appears as a very odd number (1733.43811), and after image alignment the Adjusted tab remains greyed out. Of course, I get my spiral-shaped sparse point cloud :-\ (figure 1).

However, if I choose the precalibrated option and manually set the focal length to 3 mm, the process leads to better results: after image alignment I can see my images correctly aligned in a straight point cloud representing my pipeline. However, the Adjusted tab still remains greyed out, and the elevation also seems to have disappeared, because the sparse point cloud lacks the Z component (figure 2).

Do you have any suggestions to solve this situation?
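For completeness, the same Camera Calibration settings can also be scripted; a minimal sketch assuming the Metashape 1.8 API (the pixel pitch value below is only a placeholder, not the GoPro's real one):
Code:
import Metashape

chunk = Metashape.app.document.chunk
sensor = chunk.sensors[0]

# Treat the GoPro as a fisheye camera and seed the Initial calibration
sensor.type = Metashape.Sensor.Type.Fisheye
sensor.focal_length = 3.0          # nominal focal length, mm
sensor.pixel_width = 0.0015        # pixel pitch, mm (placeholder value)
sensor.pixel_height = 0.0015
sensor.fixed_calibration = False   # let alignment refine the parameters (Adjusted tab)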





5
General / Re: How to solve strange image alignment
« on: October 24, 2022, 09:56:20 AM »
Quote from: JMR
Precalibrating your camera could be the best solution.

Thanks, JMR.
By calibration, do you mean acquiring photos of a checkerboard and then estimating the camera parameters in Agisoft Lens?

6
General / How to solve strange image alignment
« on: October 22, 2022, 12:42:17 AM »
Dear all,
I would like to ask if someone can help me understand this strange photo alignment and maybe find a workaround to solve the problem.
During ROV monitoring, I acquire many underwater images along a pipeline with a GoPro action camera (Hero 7 in wide FOV). The images have good overlap because they are acquired every second.
However, after the alignment process the images are reported as correctly aligned, but the sparse point cloud is shaped like a spiral (screenshot in the attachment), while of course the pipeline is a straight line.
Do you have any idea about this strange behaviour?
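For context, a minimal sketch of the alignment step as it would look in the Metashape Python API (assuming the 1.8 API; the preselection settings and point limits shown are illustrative, not necessarily the ones used here):
Code:
import Metashape

chunk = Metashape.app.document.chunk

# Sequential underwater survey: generic preselection on, reference preselection off
# (no usable GPS positions underwater)
chunk.matchPhotos(downscale=1,
                  generic_preselection=True,
                  reference_preselection=False,
                  keypoint_limit=40000,
                  tiepoint_limit=4000)
chunk.alignCameras()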

Many thanks
Daniele

7
Bug Reports / Re: Texture missing when using GPU
« on: August 03, 2022, 05:10:45 PM »
Dear Alexey,
just to inform you that, after getting in touch with your support team, I solved my problem, which was not related to the latest version of Metashape but to a bad NVIDIA driver.

So I completely uninstalled the NVIDIA driver with Display Driver Uninstaller and manually installed an older driver from here: https://www.nvidia.com/Download/Find.aspx
Now texture blending works again with the GPU enabled. It also works fine with the latest 1.8 version of Metashape.

Thanks for your support, and sorry for the trouble.

Best regards,
Daniele

8
Bug Reports / Re: Texture missing when using GPU
« on: August 02, 2022, 12:12:19 AM »
Many thanks, Alexey, for your great help!
Following your suggestion, I have just sent the test project to support@agisoft.com.

I get the same problem with your sample imagery as well.

Regards,
Daniele

9
Bug Reports / Re: Texture missing when using GPU
« on: August 01, 2022, 06:37:11 PM »
Of course, yes!
The log is attached.

Many thanks, Alexey






10
Bug Reports / Texture missing when using GPU
« on: August 01, 2022, 03:00:15 PM »
Dear Alexey,
I noticed that after the latest update of Metashape (1.8.4), textures are not generated when GPU processing is enabled.
 I have the following system:
GPU: NVIDIA GeForce GTX 1080/PCIe/SSE2
OpenGL Version: 4.6.0 NVIDIA 516.59
CPU: Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz

However, when using the CPU only, the textures are fine.
Do you have any suggestions regarding this issue so that I can go back to using my GPU correctly?
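In case it helps the comparison, the GPU devices seen by Metashape can be listed and masked from the Python console; a minimal sketch (it assumes Metashape.app.enumGPUDevices() returns dictionaries with a 'name' key):
Code:
import Metashape

# List the GPU devices Metashape detects and the current device mask
for i, device in enumerate(Metashape.app.enumGPUDevices()):
    print(i, device['name'])
print("gpu_mask:", Metashape.app.gpu_mask)

# Disable all GPUs to force CPU-only processing for a comparison run
Metashape.app.gpu_mask = 0
Metashape.app.cpu_enable = True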

Many thanks for your time and support as always.

Best regards,
Daniele

11
General / Re: Texture problems with large project
« on: July 04, 2022, 06:42:55 PM »
Hello Bzuco,
many thanks for your reply and solution. As you suggested, I used 8192 for the texture size and 4 for the texture count, and now the texture is properly applied to the entire model. THANK YOU AGAIN for your valuable help!
Do you have a rule of thumb for setting these parameters according to the size of the model?
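A minimal sketch of those settings in the Python API (assuming the 1.8 buildUV/buildTexture parameters; the rule of thumb in the comment is my own assumption, not an official guideline):
Code:
import Metashape

chunk = Metashape.app.document.chunk

# 4 texture pages of 8192 px each; as a rough rule of thumb (assumption), the total
# texture area should keep pace with the model extent and the detail you want to keep,
# so bigger or more detailed models need a larger page size and/or more pages.
chunk.buildUV(mapping_mode=Metashape.GenericMapping,
              page_count=4,
              texture_size=8192)
chunk.buildTexture(blending_mode=Metashape.MosaicBlending,
                   texture_size=8192,
                   fill_holes=True)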

All the best,
Daniele

12
General / Texture problems with large project
« on: July 02, 2022, 11:53:03 AM »
Dear all,
I am writing here after searching the forum for a solution to my issue without success.
I have a very large underwater imagery dataset (2,996 GoPro 10 photos) which I would like to use to reconstruct a diving site. Due to its 3D morphology (flat areas, boulders, overhangs), I have both oblique and nadiral images to cover all the surfaces. I have no problem with the alignment (all cameras aligned successfully at low quality to save processing time). I filtered and cleaned the resulting point cloud and generated two meshes: one at high quality and another by decimating it.

The problem is the texturing step, because I can't get a good texture on the mesh. I tried different mapping modes (generic, spherical, adaptive orthophoto) after normal map generation, but the results are the same. Even though the images come from an action camera, they are all sharp enough, so I don't think the problem is related to image quality. The shaded model looks better than the textured one.

I am working on an Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz with an NVIDIA GeForce GTX 1080 GPU. Since the project is huge (10 GB), I could upload the processing report to better show the workflow, but the attachment size limit prevents it.

I am attaching some screenshots to better illustrate these aspects.

Does anyone have suggestions to solve this issue?
Many thanks in advance for your help!

Daniele

13
Quote from: Alexey
Hello Daniele,

The covariance matrix for the points of the sparse point cloud can be accessed by:
Code:
for point in chunk.point_cloud.points:
    cov_matrix = point.cov
With that you should be able to create your custom tie point cloud exporter.

You may adapt the following script for your needs, if the project is georeferenced:
Code:
import Metashape, math

path = Metashape.app.getSaveFileName("Specify the export file path:", filter = "Text file (*.txt);;All formats (*.*)")
file = open(path, "wt")
file.write("Id\tX\tY\tZ\tvar\tcov_x\tcov_y\tcov_z\n")

chunk = Metashape.app.document.chunk
T = chunk.transform.matrix
if chunk.transform.translation and chunk.transform.rotation and chunk.transform.scale:
    # georeferenced project: move to a local frame centered on the region
    T = chunk.crs.localframe(T.mulp(chunk.region.center)) * T
R = T.rotation() * T.scale()

for point in chunk.point_cloud.points:
    if not point.valid:
        continue
    cov = point.cov
    coord = point.coord

    coord = T * coord
    cov = R * cov * R.t()  # rotate/scale covariance into the output frame
    u, s, v = cov.svd()
    var = math.sqrt(sum(s))  # variance vector length
    vect = (u.col(0) * var)

    file.write(str(point.track_id))
    file.write("\t{:.6f}\t{:.6f}\t{:.6f}\t{:.6f}".format(coord[0], coord[1], coord[2], var))
    file.write("\t{:.6f}\t{:.6f}\t{:.6f}".format(vect.x, vect.y, vect.z))
    file.write("\n")

file.close()
print("Script finished.")




Many thanks again, Alexey!
The covariance matrix you cited in the first part of the script is the same one that can also be accessed through the camera optimization panel, isn't it?
Regarding the georeferencing of the project, is it sufficient to have GCPs with coordinates, or did you mean that the cameras should also have coordinates? This aspect is a key point for me because, underwater, I cannot use GPS to directly georeference the images, so I only have some GCPs distributed inside the mapping area.
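For what it is worth, GCP-only georeferencing only needs marker reference coordinates and an updated chunk transform; a minimal sketch (the EPSG code and coordinate values are placeholders):
Code:
import Metashape

chunk = Metashape.app.document.chunk
chunk.crs = Metashape.CoordinateSystem("EPSG::4326")  # placeholder CRS

# Assign surveyed coordinates to a marker (placeholder values) and recompute
# the chunk transform; camera coordinates are not required.
marker = chunk.markers[0]
marker.reference.location = Metashape.Vector([12.345678, 44.123456, -15.2])
marker.reference.enabled = True
chunk.updateTransform()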

Best regards,
Daniele

14
Many thanks, Alexey, for your kind reply and great support!
I ran the script provided and, in the output, found the sigma values associated with camera positions and markers. However, I was looking for something different, since the M3C2 plugin of CloudCompare requires the sigma values of the sparse point cloud (to build a precision map). I apologize, because my first request was probably not very clear.
I found a very useful discussion and Python script concerning this aspect in the valuable paper by James et al. (2017), available here: https://onlinelibrary.wiley.com/doi/10.1002/esp.4125. From this paper, I read that some other software, such as 'Vision Measurement System' (VMS; http://www.geomsoft.com), provides point precision as standard output, so I suppose this feature could be very useful if integrated into future updates of Metashape. What do you think about this?

Thanks again for your time.

All the best,
Daniele


15

Hi Alexey,
in the meantime, I ran the script and a text file was generated, so the script still works with Metashape. However, I checked and the reprojection error is not what I am after: I need precision estimates (sigma) of each point for the X, Y and Z components (sigmaX, sigmaY and sigmaZ). If you have any suggestions on how to achieve this, I will be grateful for your support.
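A possible adaptation of the script above that writes per-axis sigmas instead of the overall variance, as a minimal sketch (it assumes matrix elements can be read as cov[i, i] and that the chunk is transformed as in the original script):
Code:
import Metashape, math

chunk = Metashape.app.document.chunk
T = chunk.transform.matrix
R = T.rotation() * T.scale()  # chunk -> world rotation and scale, as in the script above

file = open("point_precision.txt", "wt")  # output path is only illustrative
file.write("track_id\tsigma_x\tsigma_y\tsigma_z\n")
for point in chunk.point_cloud.points:
    if not point.valid:
        continue
    # Rotate/scale the covariance into world units, then take per-axis
    # standard deviations from the diagonal
    cov = R * point.cov * R.t()
    sigma_x = math.sqrt(cov[0, 0])
    sigma_y = math.sqrt(cov[1, 1])
    sigma_z = math.sqrt(cov[2, 2])
    file.write("{:d}\t{:.6e}\t{:.6e}\t{:.6e}\n".format(point.track_id, sigma_x, sigma_y, sigma_z))
file.close()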

Thanks again
Daniele

