Recent Posts

1
Phogi,

I think that camera.project(), which transforms from your project internal coordinates (Xint, Yint, Zint) to camera pixel coordinates (u, v), can only be represented by a combination of 2 matrices:
- camera.transform.inv() (CTinv), which transforms from internal project coordinates to camera local coordinates (Xcam, Ycam, Zcam);
- the calibration matrix K, which transforms from camera homogeneous coordinates (Xcam/Zcam, Ycam/Zcam, 1) to pixel coordinates (u, v);
and only in the case where there is no radial or decentering distortion and the camera is a frame type with no rolling shutter...
In the case where there is radial or decentering distortion, it cannot be represented by a combination of 2 matrices...

See the following attachment, which should explain this better...
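To make this concrete, here is a minimal Python sketch of the 2-matrix case using the Metashape API. The helper project_pinhole() is my own name, and it should only match camera.project() for a frame camera with zero distortion and no affinity/skew terms (b1 = b2 = 0).

Code:
import Metashape

# Minimal sketch of the decomposition above: CTinv = camera.transform.inv(),
# followed by the calibration matrix K. Matches camera.project() only for a
# frame camera with zero distortion (k1..k4 = p1 = p2 = 0) and b1 = b2 = 0.
def project_pinhole(camera, point_internal):
    """point_internal: Metashape.Vector in internal project coordinates (Xint, Yint, Zint)."""
    cal = camera.sensor.calibration

    # Step 1: internal project coordinates -> camera local coordinates (Xcam, Ycam, Zcam)
    p_cam = camera.transform.inv().mulp(point_internal)

    # Step 2: homogeneous camera coordinates (Xcam/Zcam, Ycam/Zcam, 1) -> pixels (u, v)
    x = p_cam.x / p_cam.z
    y = p_cam.y / p_cam.z
    u = cal.width * 0.5 + cal.cx + cal.f * x
    v = cal.height * 0.5 + cal.cy + cal.f * y
    return Metashape.Vector([u, v])

With non-zero distortion coefficients the result of this helper and camera.project() will differ, which is exactly the point made above.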
2
Bug Reports / Re: Help with a heavy project
« Last post by OZDF on Today at 01:38:23 PM »
One "test" solution would be to leave the images in the original, but to go to Lowest for the point cloud.
In any case, I would not split the project.

I am going to try this right now.
By the way, for 6000 photos of 20 MB, how long does your computer take to process a dense cloud? And which machine are you using? That way I can compare with mine and understand if there's something wrong with it.
3
Bug Reports / Re: Help with a heavy project
« Last post by c-r-o-n-o-s on Today at 01:31:35 PM »
One "test" solution would be to leave the images in the original, but to go to Lowest for the point cloud.
In any case, I would not split the project.
4
Python and Java API / Segmentation Fault after exporting raster
« Last post by pedrounas on Today at 01:25:16 PM »
Good day,

I am currently using the Metashape 1.6.5 Python package, and after exporting some raster files I receive a Segmentation Fault error and the program ends. What could be causing this issue?
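For reference, a minimal sketch of the kind of raster export involved, using the 1.6-style API; the paths and export types below are placeholders, not the actual script.

Code:
import Metashape

# Hypothetical minimal example of the raster exports described above
# (paths and source types are placeholders, not from the real project).
doc = Metashape.app.document
chunk = doc.chunk

# Export the DEM and the orthomosaic; in 1.6.x exportRaster() replaced
# the older exportDem()/exportOrthomosaic() calls.
chunk.exportRaster("/tmp/dem.tif", source_data=Metashape.ElevationData)
chunk.exportRaster("/tmp/ortho.tif", source_data=Metashape.OrthomosaicData)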
5
Bug Reports / Re: Help with a heavy project
« Last post by OZDF on Today at 01:22:16 PM »
Quote from: c-r-o-n-o-s
Let me first say, 1600 images is not a big project (especially not for this computer).
We use a 20 Mpix drone camera and process projects with up to 6000 images.
True, they are JPEGs, as our drone only outputs those. As far as I know, the benefit of RAW photos is moderate, though.

So I think you should change something in the workflow.
It's definitely doable.

I thought my project was big because, when trying to create the dense cloud of the whole project with 1600 photos, Metashape estimated about 10+ days of work...
Maybe I am missing something. Can you suggest a workflow for this kind of project? Even a very brief outline would help, so I can try to emulate it, because I got nowhere with the methods I tried.
About the resolution, I tried the first time with 7000x5000 at 300 dpi, and the second time with 5000x3000 at 100 dpi; neither of them worked.
Thank you so much.
6
Bug Reports / Re: Help with a heavy project
« Last post by c-r-o-n-o-s on Today at 01:16:33 PM »
Let me first say, 1600 images is not a big project (especially not for this computer).
We use a 20 Mpix drone camera and process projects with up to 6000 images.
True, they are JPEGs, as our drone only outputs those. As far as I know, the benefit of RAW photos is moderate, though.

So I think you should change something in the workflow.
It's definitely doable.

How high is the resolution of the images and what settings do you use for the point cloud?
(Unfortunately, I cannot see that from the screenshot because of the quality.)
7
Bug Reports / Help with a heavy project
« Last post by OZDF on Today at 12:54:31 PM »
Hi everyone,
I am writing on the Metashape forum because I need some advice for a big job.
My machine is an Intel i9 (18 cores), 128 GB RAM, GeForce RTX 3060 12 GB, 4 TB SSD, Asus Prime X299-A2 motherboard.

I am working with Metashape to reconstruct a big wooden church choir; it has a lot of detail, and I took 1620 pictures of the object.
Since this is the first time I have used Metashape for a big project, I tried to work with all 1620 uncompressed pictures (25 MB each). The alignment worked, but once I tried to build the dense cloud the computer wasn't able to finish the job. My SSD and RAM jumped to 99-100% activity and the completion rate stopped advancing once it reached 4% (even when I left the computer working for days).
So I decided to create the dense cloud by dividing the church choir into smaller parts, but the alignment results weren't as good as the alignment of the whole object with 1620 pictures; I would say the results were full of errors. Then I tried reducing the size of the pictures to 4-6 MB each. This time I reached about 70% completion with normal memory and CPU usage, but after 70% everything jumps to 100% and the completion rate stops advancing.

Is there a "right way" to go for this kind of big projects? What would be the steps you would take to get a good result?
Furthermore, from my experiments, I noticed that it is more difficult to obtain a good result by dividing the project into smaller parts rather than processing big objects as a whole, but in this case obviously we have to contend with the limits of computer performance. Is there a way to subdivide a big the project without losing quality in the result? Maybe doing the alignment of all the 1600 photos, and after that dividing it in 4 chunks and creating smaller dense clouds (I am now trying to proceed in this way, but it is just an experiment).
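A minimal sketch of that align-once-then-split idea in the Metashape Python API (1.6-style calls; the region placement and quality values are placeholders, not a tested recipe):

Code:
import Metashape

# Sketch only: align all photos once, then build the dense cloud in parts by
# duplicating the aligned chunk and shrinking each copy's bounding region.
doc = Metashape.app.document
chunk = doc.chunk

chunk.matchPhotos(downscale=1)      # match on the full photo set
chunk.alignCameras()                # single global alignment

n_parts = 4
for i in range(n_parts):
    part = chunk.copy()             # the copy keeps the aligned cameras
    part.label = "part_%d" % (i + 1)

    # Shrink this copy's region so it covers only one slice of the object;
    # shifting region.center per part is omitted, since it depends on the geometry.
    region = part.region
    region.size = Metashape.Vector([region.size.x / n_parts, region.size.y, region.size.z])
    part.region = region

    part.buildDepthMaps(downscale=4)    # "Medium" quality in the GUI
    part.buildDenseCloud()

doc.save()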

Thank you so much for your patience and information.
8
Feature Requests / Re: Zoom to marker/shape
« Last post by c-r-o-n-o-s on Today at 12:48:38 PM »
That is no problem: mark your point and zoom in, then step through the markers with the Page Up / Page Down keys.
9
General / Re: Should I build dense point cloud at all now?
« Last post by c-r-o-n-o-s on Today at 12:41:45 PM »
Quote
I found the depth maps to be far more accurate and the whole process is quicker.

That's what I always thought until now. But even if I reuse the depth maps, DEM creation from the point cloud is much faster than from the depth maps.
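For comparison, the two routes in API terms would look roughly like this (1.6/1.7-style names; whether buildDem() accepts depth maps as a source depends on the version):

Code:
import Metashape

# Sketch of the two DEM sources being compared (version-dependent: in newer
# releases the dense cloud source is called PointCloudData instead).
doc = Metashape.app.document
chunk = doc.chunk

# Route 1: DEM directly from the (reused) depth maps.
chunk.buildDem(source_data=Metashape.DepthMapsData)

# Route 2: DEM from the dense point cloud built beforehand.
chunk.buildDem(source_data=Metashape.DenseCloudData)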
10
Hi Paul,

Thank you so much! I actually found that the marker position I used was wrong. In the GUI, GCP002 is the first marker, but in chunk.markers it is the third, so I was using another marker's position; that's why it was off by a lot!

May I ask if you know how we can get the projection matrix? Since camera.project() can project world coordinates into image coordinates, my understanding was that marker.position should hold the (u, v, w) homogeneous coordinates, but I can't get the image coordinates by simply dividing by w, so I'm confused about which internal coordinate system marker.position actually refers to.
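Concretely, here is what I am trying, in case it makes the question clearer (the variable names are mine; I am assuming marker.position is in the chunk's internal coordinate system, which may be exactly where I am wrong):

Code:
import Metashape

# Sketch of how marker.position seems to relate to the coordinate systems.
doc = Metashape.app.document
chunk = doc.chunk
marker = chunk.markers[0]
camera = chunk.cameras[0]

# Assumption: marker.position is a 3D point in the chunk's internal coordinate system.
p_internal = marker.position

# Internal -> geocentric/world coordinates via the chunk transform...
p_world = chunk.transform.matrix.mulp(p_internal)
# ...and on to the chunk's coordinate system (e.g. lat/lon or a projected CRS).
p_geo = chunk.crs.project(p_world)

# camera.project() appears to take the internal coordinates directly and return
# pixel (u, v), or None if the point does not fall inside that camera's frame.
uv = camera.project(p_internal)
print(p_geo, uv)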

Thanks a lot!

Phogi