Topics - marcel.d

1
Python and Java API / Depth map or range image?
« on: November 20, 2025, 03:28:45 PM »
Dear Alexey,
Dear Forum Members,

When running the line below, what we get is a range image [1] and not a depth map [2], right?
Code:
out_img = my_dense_cloud.renderDepth(cam_transform, cam_calibration)
Because the output seems to consist of (scaled) distances calculated between camera location and point cloud (range), and not from the camera plane along the camera z axis (depth).
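For context, here is what the per-pixel conversion between the two would look like for an ideal pinhole camera (a numpy sketch, not the Metashape API; `f`, `cx`, `cy` stand for focal length and principal point, and distortion is ignored):

```python
import numpy as np

def range_to_depth(range_img, f, cx, cy):
    """Convert a range image (Euclidean distance from the camera
    center) to a depth map (distance along the camera z axis),
    assuming an ideal pinhole camera with no distortion."""
    h, w = range_img.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Ray direction through pixel (u, v): ((u-cx)/f, (v-cy)/f, 1)
    norm = np.sqrt(((u - cx) / f) ** 2 + ((v - cy) / f) ** 2 + 1.0)
    # depth = range * cos(angle to optical axis) = range / ||ray||
    return range_img / norm
```

At the principal point both quantities coincide; away from it, depth is strictly smaller than range, which is one way to check which of the two `renderDepth` actually returns.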

Could you please comment whether (according to the wikipedia definitions below) the result is range or depth?

The documentation [3] says the following:

Quote
Render point cloud depth image for specified viewpoint.

Thanks
Marcel

[1] https://en.wikipedia.org/wiki/Range_imaging
[2] https://en.wikipedia.org/wiki/Depth_map
[3] https://www.agisoft.com/pdf/metashape_python_api_2_3_0.pdf p. 134

2
Bug Reports / Imported point cloud incomplete
« on: May 21, 2024, 12:30:41 PM »
Hello,

As part of a workflow we need to import very small point clouds into Metashape (10-100 points). The clouds are in the same projected crs as the rest of the project.

A) When importing a LAZ with 10 points, only 8 are shown in Metashape.

This is what the doc.xml file for the dense_cloud shows:

Code:
      <totalCount>10</totalCount>
      <validCount>8</validCount>

Here is the original point cloud (left) next to the point cloud exported from metashape (right):



You can see that 2 points in the lower left corner are missing.

It gets weirder:

B) When converting the LAZ to OBJ before importing, 8 points are also imported, but one of them is different from the eight in A.

C) Also, when importing as if local coordinates were used, 9/10 points are imported, but of course at a completely wrong location.

D) When importing the points in the .pts format, 9/10 points are imported (again different from the ones in C).

E) LAS leads to the same results as LAZ. I did not try the PLY or E57 file formats.

I tried varying the number of neighbors used for normals calculation, but "0", "1" and the default "28" all led to the same results.
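To pin down which points get dropped in each case, one can diff the exported cloud against the original coordinate-by-coordinate (a generic numpy sketch, assuming both clouds have been read into (N, 3) arrays; it is not tied to any particular file format):

```python
import numpy as np

def missing_points(original, exported, tol=1e-6):
    """Return the rows of `original` that have no counterpart in
    `exported` within distance `tol` (brute force; fine for tiny
    clouds of 10-100 points)."""
    missing = []
    for p in original:
        dists = np.linalg.norm(exported - p, axis=1)
        if dists.size == 0 or dists.min() > tol:
            missing.append(p)
    return np.array(missing)
```

Comparing the missing sets across cases A-D might reveal a pattern (e.g. points with identical coordinates, or extreme offsets from the cloud centroid).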

I really need all 10 points. What can I do?

Thanks,
Marcel

3
Python and Java API / [Request] Proper Python Documentation
« on: February 14, 2024, 06:34:45 PM »
Dear Agisoft,

Thank you for the powerful Python API.

While the API itself is great, its documentation is not usable.

Finding anything in a PDF of over 200 pages is very slow and only feasible when you already know the function name.
This issue (along with other vital points) was already raised in 2022 (https://www.agisoft.com/forum/index.php?topic=14910), but it has not been fixed yet.
There are better formats for documentation (e.g. https://numpy.org/doc/stable/reference/index.html), so why are we stuck with a PDF?
PDFs are nice for printing on paper, but that is not how most of us use software documentation.

Furthermore, writing code in any IDE other than PyCharm (or a notebook) is like not using an IDE at all, because code completion, linting and type checking do not work.

Again, thanks for the great API and please advise when we can expect its documentation to be at the same level.

Cheers,
Marcel

PS Here is what I mean by linting not working, e.g. in VSCode:


4
General / Depth maps incomplete for some images
« on: January 04, 2024, 03:23:58 PM »
Hi,

I am processing an aerial dataset with very high overlap. Everything works great. Except some generated depth maps are incomplete. The example in the attachment demonstrates it well:



Two images with plenty of overlap. On the left a big chunk of the depth map is missing, on the right it is complete (including the overlapping part).

A feature appearing in both images is highlighted.

Any ideas what is going on?
A. Is Metashape deciding not to reconstruct every depth map because it can build the dense point cloud from other images?
B. Or are parts of the depth maps not passing the depth filtering (mild filtering in this case)?
C. Something else?

What do I do if I want to have all depth maps generated?
If the issue is option A, I could try to separate the project into "staggered" chunks, where chunk 1 would contain images 1, 4, 7, 10, ..., chunk 2 images 2, 5, 8, 11, ..., and chunk 3 images 3, 6, 9, 12, ..., thus forcing Metashape to generate more depth maps because of the lower overlap.
If it is option B, I could disable filtering...
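The index logic for the staggered split in option A can be sketched as follows (plain Python; the Metashape calls for duplicating a chunk and disabling cameras are left out, so only the grouping is shown):

```python
def staggered_groups(n_images, n_groups=3):
    """Split image indices 0..n_images-1 into interleaved groups:
    group k gets images k, k + n_groups, k + 2*n_groups, ...
    Each group has roughly 1/n_groups of the original overlap."""
    return [list(range(k, n_images, n_groups)) for k in range(n_groups)]
```

Each group could then become its own chunk, with the cameras outside the group disabled before building depth maps.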

I look forward to some pointers.

Thanks,
Marcel

5
Hello,

I have a very simple project with 20 photos which aligns easily on CPU, but fails to align when using the GPU. Since this fails, I cannot do any of the other steps. Any ideas?

Error message: "Error: cudaMemGetInfo(&free_mem_size, &total_mem_size): all CUDA-capable devices are busy or unavailable (46) at line 33"

Specs:

Agisoft Metashape Professional Version: 1.5.5 build 9097 (64 bit)
Platform: Windows
CPU: Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz (server)
CPU family: 6 model: 79 signature: 406F1h
RAM: 511.9 GB
OpenGL Vendor: NVIDIA Corporation
OpenGL Renderer: GeForce GTX TITAN X/PCIe/SSE2
OpenGL Version: 4.6.0 NVIDIA 461.72

Thanks in advance!

6
General / Appropriate format (TIFF/PNG/JP2/etc) for 12-bit images
« on: January 27, 2021, 11:45:12 PM »
Dear Agisoft Community!

I have images coming from a 12-bit camera. Reading through the many useful posts on the forum I came across this: https://www.agisoft.com/forum/index.php?topic=308.0 which made me curious to use the whole bit depth in Metashape.

The easy thing to do is to store the 12 significant bits as the most significant bits of an unsigned 16-bit integer and pad the least significant bits with zeroes. It is easy because I can see the colors well when I open these images in any viewer. The other option is to store the data in the least significant bits. But then the images initially look almost black in most image viewers. They do look just as good as the other ones with the correct settings.
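The two packings amount to a simple bit shift (a numpy sketch; `raw12` stands in for the camera's 12-bit samples, assumed to already sit in a uint16 array with values 0..4095):

```python
import numpy as np

def pack_msb(raw12):
    """Place the 12 significant bits at the top of a uint16,
    padding the low 4 bits with zeros (Options 1 & 2)."""
    return raw12.astype(np.uint16) << 4

def pack_lsb(raw12):
    """Keep the 12 bits at the low end of a uint16 (Options 3 & 4);
    viewers assuming a full 16-bit range show these as nearly black."""
    return raw12.astype(np.uint16)

def unpack_msb(img16):
    """Recover the original 12-bit values from the MSB packing."""
    return img16 >> 4
```

Both packings are lossless and reversible; they differ only in where viewers and processing software expect the significant bits to be.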

Another argument for using the top 12 bits: in PNG I can set the sBIT (SignificantBits) tag to 12, so the original data can be retrieved with exactly the precision it was captured at.

Unfortunately the sBIT tag is not available in the TIFF file specification, but the Metashape manual (beginning of chapter 2) specifically recommends using TIFF (https://www.agisoft.com/pdf/metashape_1_7_en.pdf#page=11).

I do pre-processing in OpenCV and can save the files in both formats. And with both "bit-positions".

With the exact same data (12MP, 12-bit significant, RGB), I get the following file sizes:

Option 1: PNG, data in MOST significant bits: 40.1 MB   (Deflate/Inflate lossless compression)
Option 2: TIFF, data in MOST significant bits: 43.1 MB        (LZW lossless compression)
---------------
Option 3: PNG, data in LEAST significant bits: 42.2 MB   (Deflate/Inflate lossless compression)
Option 4: TIFF, data in LEAST significant bits: 47.6 MB        (LZW lossless compression)

Regarding file size, Option 1 also wins by a small margin.

Summarizing:
1. Is either "Data in MSB (Options 1&2)" or "Data in LSB (Options 3&4)" better / easier for Metashape / preferable?
2. Is either TIFF or PNG (or JPEG2000/something else) better / easier for Metashape / preferable?
Again: the underlying data is exactly the same!

@Alexey: I know the questions are rather loaded, but I would appreciate it if you found the time to go through them. Thank you in advance!

Thanks,
Marcel Dogotari
