Messages - PolarNick

Pages: [1] 2 3 ... 8
1
I agree that it would be nice to have such a speedup out of the box, but I see a lot of problems and reliability risks that make this feature very hard (or even impossible) to implement.

Quote
For each sub-task, the optimal number of nodes is spawned to complete the task, maximising the utilisation of the hardware.
Either MS just dynamically adds another node if it's not at 100% utilisation, or there's still VRAM available, or kills one if it's swapping... or perhaps MS runs some tests to 'characterise' the hardware in use, and sets the numbers of nodes it'll use for each sub-task?

This is hard even on fixed hardware (e.g. in your case you have to manually tune the number of nodes on a per-subtask basis); on generic hardware, IMHO, it is nearly impossible to implement reliably. Even using the GPU in parallel triggers bugs in GPU drivers (e.g. race conditions), but even if that weren't the case, the problem of parallel VRAM/RAM usage is nearly impossible to solve in the generic case. What if a sub-task hits its peak RAM/VRAM usage at the very end of processing? It will be killed, but it has already wasted a lot of compute (slowing down the other nodes).

Quote
or kills one if it's swapping...

The same reliability problem exists on any hardware/OS: what does "if it's swapping" even mean? Allocating at least one byte in swap? That can happen even when RAM usage is very low.
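To illustrate the ambiguity: even on Linux, where swap usage can be read from /proc/meminfo, a non-zero value only tells you that some pages were ever swapped out, not that a process is thrashing. A minimal sketch (parsing a hardcoded /proc/meminfo-style sample; the field names follow the Linux procfs documentation):

```python
# Sketch: reading swap usage the way a "kill if swapping" heuristic would,
# showing that a small non-zero swap value coexists with mostly-free RAM.

def parse_meminfo(text):
    """Parse /proc/meminfo-style 'Key:  value kB' lines into a dict of kB values."""
    info = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, rest = line.partition(":")
        parts = rest.split()
        if parts:
            info[key.strip()] = int(parts[0])
    return info

def swap_used_kb(info):
    return info["SwapTotal"] - info["SwapFree"]

sample = """\
MemTotal:       32768000 kB
MemFree:        30000000 kB
SwapTotal:       8388604 kB
SwapFree:        8388000 kB
"""
info = parse_meminfo(sample)
print(swap_used_kb(info))  # 604 kB in swap, yet almost all RAM is free
```

A heuristic triggering on "swap used > 0" would kill a perfectly healthy node here.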

Quote
And using multiple 'workers' is very standard trick... FileZilla runs multiple fzsftp.exe workers to do up to 10 concurrent downloads. Massively speeding up the transfer of smaller files, or if the server pings are crap...  And Handbrake spawns multiple HandBrake.worker.exe workers to concurrently transcode multiple videos at once on CPUs with large core counts, as transcoding doesn't scale linearly with cores. This allows for faster transcoding of the batch of files.

They use a fixed number of parallel workers (without any kind of adaptive node count, or killing on RAM/VRAM pressure) - that is a common approach in all kinds of software, including Metashape. And Metashape has one more feature for high-level parallelism: launching nodes in a local cluster (like you do).
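The fixed-worker-pool pattern those tools use can be sketched in a few lines: N workers chosen up front, no adaptive scaling and no resource-based killing, which is exactly why it is simple and reliable.

```python
# Sketch of the fixed-worker-pool pattern: a manually tuned worker count,
# each worker pulling sub-tasks from a shared queue until the batch is done.
from concurrent.futures import ThreadPoolExecutor

N_WORKERS = 4  # fixed, tuned manually for the hardware at hand

def process_item(item):
    # stand-in for one sub-task (e.g. transcoding one file)
    return item * item

items = list(range(10))
with ThreadPoolExecutor(max_workers=N_WORKERS) as pool:
    results = list(pool.map(process_item, items))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```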

2
Feature Requests / Re: Gaussian Splatting
« on: December 12, 2024, 03:38:54 PM »
Quote
Yes, Gaussian Splatting uses SfM results (camera calibration + 3D tie points), so to transfer that information from a Metashape project to the Colmap format (which Gaussian Splatting supports as input) - this script exists. But this script uses the Metashape Python API, which is available only in Metashape Professional.
Hi! :)
Do you maybe know - is there a way to convert camera + point cloud/tie data into the Colmap format without the Pro version?
Any advice?
Thanks!

Just FYI - export to the Colmap format was implemented in Metashape Standard in 2.1.3+ and in the 2.2.0 pre-release (2.2.0 also includes fisheye camera model export):

File->Export->Export Cameras...->Choose 'Colmap (*.txt)' in 'Files of type'

3
General / Re: Metashape 2 Benchmark - Windows 30-40% slower than Linux?!
« on: December 12, 2024, 12:54:56 PM »
Hi, andyroo.

Have you tried running some CPU benchmarks (e.g. Geekbench) to compare the platforms' general performance?

Because it seems that each processing step gets a pretty significant speedup on Linux. So this could be due to the OS memory allocator's performance, or to differences in CPU core scheduling.

4
General / Re: Bad allocation
« on: November 19, 2024, 01:48:25 PM »
Hello!
Please share a reproducer project with support@agisoft.com along with steps to reproduce (probably: open the project + export cameras in Colmap format?), so that it is possible to reproduce and investigate the problem.
Probably a small part of the project will be enough - just make sure it still reproduces the problem.

5
Feature Requests / Re: depthmap hole filling
« on: October 09, 2024, 01:11:44 PM »
Hi, Martin!

1.1) Depth Pro performs processing in 0.3 seconds on a Tesla V100 with 32 GB VRAM, and we're only talking about 2.25 megapixels here.
1.2) Metashape supports many video cards from different vendors (including those without CUDA support and those with quite little VRAM), has to process even large 100+ megapixel images as fast as possible, etc.

2.1) Such methods are well suited for VFX (bokeh effect, etc.) but poorly suited for photogrammetric tasks - that's why the article has no accuracy measurements in real-world units (centimeters/millimeters). Besides, such neural networks will not be able to solve the problem in unfamiliar conditions (for example, when shooting in a well, if no such frames were in the training dataset).
2.2) Metashape relies on the laws of optics and geometry, which are universal and reliable in any environment (even underwater), have a certain predictability in accuracy and therefore allow the use of photogrammetry for construction/measurement tasks.

Indeed, ideas from such methods can be used, for example, to try to improve a depth map constructed by photogrammetric methods - by patching holes in it. But this requires a lot of work to bring an academic idea to a reliable industrial application, will slow down processing a lot, and has other problems. In particular, there is a risk that such a method will often spoil the depth map - by patching holes that are actually holes (as a thought experiment: imagine a lattice in a window, and through a hole in this lattice you can see a perfectly clear sky - what is it? a hole in the lattice, or a white sheet of paper in the lattice?).
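A toy sketch of the naive "patch holes by neighbour averaging" idea makes the risk concrete: the algorithm cannot tell a measurement dropout from a genuine hole in the geometry, so it happily fills both. Pure Python, with None marking a missing depth value:

```python
# Sketch: fill each hole (None) with the average of its valid 4-neighbours.
# This is the simplest possible hole-filling, for illustration only.

def fill_holes(depth):
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for y in range(h):
        for x in range(w):
            if depth[y][x] is None:
                neighbours = [depth[ny][nx]
                              for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                              if 0 <= ny < h and 0 <= nx < w and depth[ny][nx] is not None]
                if neighbours:
                    out[y][x] = sum(neighbours) / len(neighbours)
    return out

depth = [[2.0, 2.0, 2.0],
         [2.0, None, 2.0],   # a dropout? or sky seen through a lattice hole?
         [2.0, 2.0, 2.0]]
print(fill_holes(depth)[1][1])  # 2.0 - plausible for a dropout, wrong for a real hole
```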

If you are talking about building depth maps from a single frame (i.e. without taking parallax into account), then this is a task better solved in VFX programs for creating video effects, while Metashape is focused on photogrammetry tasks.

6
Feature Requests / Re: Export COLMAP in standard
« on: October 03, 2024, 11:56:52 AM »
Quote
So, I think to conclude, this is not possible to mix dataset formats.

It seems that in terms of export formats, Metashape allows exporting the Colmap format with camera + tie point information, and it also supports export of a dense point cloud.

Postshot supports the Colmap format with camera and tie point information, and it also supports importing dense point clouds in .ply format, but unfortunately it is not possible to import such a dense cloud after importing a Colmap project (to replace the tie points with the dense cloud).

IMHO it is more reasonable to add such an import function on the Postshot side than to add a workaround to Metashape's Colmap export, because a dense point cloud cannot fully mimic tie points (a dense cloud doesn't have projections), so this would be a workaround (with all the related problems, like 'what if somebody uses it to export into another app that relies on projections', and so on).

7
Feature Requests / Re: Export COLMAP in standard
« on: October 02, 2024, 03:58:13 PM »
Quote
This altered Colmap points3D.txt file (without projections data) imports into Jawset Postshot for Gaussian Splatting processing with no issue and produces far greater accuracy than using just the tie points alone.

Nice. Have you tried directly using the Dense Cloud .ply file in Postshot (without mimicking points3D.txt)? It seems that it should also work - https://www.jawset.com/docs/d/Postshot+User+Guide/Importing+Images
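For reference, writing a point cloud as an ASCII .ply file is straightforward; a minimal sketch following the standard PLY layout (header with element/property declarations, then one vertex per line):

```python
# Sketch: write a coloured point cloud as an ASCII .ply file.

def write_ply(path, points):
    """points: list of (x, y, z, r, g, b) with float coords and 0-255 colours."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        for prop in ("x", "y", "z"):
            f.write(f"property float {prop}\n")
        for prop in ("red", "green", "blue"):
            f.write(f"property uchar {prop}\n")
        f.write("end_header\n")
        for x, y, z, r, g, b in points:
            f.write(f"{x} {y} {z} {r} {g} {b}\n")

write_ply("cloud.ply", [(0.0, 0.0, 1.5, 255, 0, 0),
                        (1.0, 0.0, 1.6, 0, 255, 0)])
```

(In practice Metashape exports binary .ply, which is more compact, but the element/property structure is the same.)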

8
Feature Requests / Re: Export COLMAP in standard
« on: October 02, 2024, 01:29:11 PM »
Quote
can you add the option to export a dense cloud as well as or instead of the sparse cloud?

In the Colmap format's points3D.txt, points need to have their projections (called a 'Track') into the cameras - see https://colmap.github.io/format.html#points3d-txt . Such tracks exist for Tie Points because the Sparse Cloud was created from 2D keypoints detected in images, and those 2D keypoints were then matched between images and transformed into 3D Tie Points.
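That keypoints-to-tracks step can be sketched as merging pairwise matches into connected components (a union-find over (image_id, keypoint_idx) observations); one common way to do it, shown here as an illustration rather than Metashape's actual implementation:

```python
# Sketch: merge pairwise 2D keypoint matches into multi-view tracks.
# Each observation is an (image_id, keypoint_idx) pair; matched observations
# are unioned, and each resulting component is one track / one tie point.

def build_tracks(matches):
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for obs_a, obs_b in matches:
        union(obs_a, obs_b)
    tracks = {}
    for obs in parent:
        tracks.setdefault(find(obs), []).append(obs)
    return sorted(sorted(t) for t in tracks.values())

# keypoint 0 of image 1 matches keypoint 3 of image 2, which matches image 3
matches = [((1, 0), (2, 3)), ((2, 3), (3, 7)), ((1, 5), (2, 9))]
print(build_tracks(matches))
# [[(1, 0), (2, 3), (3, 7)], [(1, 5), (2, 9)]]
```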

A Dense Cloud cannot be exported in such a format because points in a Dense Cloud don't have tracks - it is unknown which cameras observe which points (and which points were occluded); moreover, such information is too heavy to represent efficiently in the text-based points3D.txt+images.txt, because these file formats were not designed to be used that way.

So, it makes sense to export a Dense Cloud only if some GS implementation can use it AS IS - without tracks, i.e. without projections into cameras - because a Dense Cloud doesn't have them by its nature. Do you know such GS implementations? Maybe they support plain and simple .ply-file import? If so, you can just Export+Import+Use the Dense Cloud via the binary .ply format.
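To make the points3D.txt layout from the COLMAP docs linked above concrete, here is a sketch of formatting one point line: POINT3D_ID, X, Y, Z, R, G, B, ERROR, followed by the track as (IMAGE_ID, POINT2D_IDX) pairs - the trailing track part being exactly what a dense cloud cannot provide.

```python
# Sketch: format one line of COLMAP's points3D.txt
# (POINT3D_ID X Y Z R G B ERROR TRACK[] as IMAGE_ID POINT2D_IDX pairs).

def format_point3d(point_id, xyz, rgb, error, track):
    """track: list of (image_id, point2d_idx) pairs - empty for a dense point."""
    fields = [point_id, *xyz, *rgb, error]
    for image_id, point2d_idx in track:
        fields += [image_id, point2d_idx]
    return " ".join(str(f) for f in fields)

line = format_point3d(63390, (1.67, 0.29, 2.37), (115, 121, 122), 1.33,
                      [(16, 6542), (15, 7345)])
print(line)  # 63390 1.67 0.29 2.37 115 121 122 1.33 16 6542 15 7345
```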

9
Feature Requests / Re: Export COLMAP in standard
« on: September 12, 2024, 01:51:45 PM »
> Hi, nice tip thank you - only this is a destructive method - crop box would be non destructive

Hi, you can use Ctrl+Z to revert tie points removal.

10
Python and Java API / Re: [Request] Proper Python Documentation
« on: May 21, 2024, 01:46:28 PM »
Hi marcel.d,

support for autocompletion in VSCode was improved in the Metashape 2.1 Python wheel release, so you probably just need to update.

11
Feature Requests / Re: Please add Stubs for the Python standalone module
« on: November 27, 2023, 03:51:09 PM »
> Seeing as those autogenerated stubs seem to work at least partially, maybe you could think about including them in the module in the future?

I don't know how to generate these stubs automatically (without the PyCharm GUI), so it is a big question how to integrate their generation into the CI build system. It is possible to implement a custom stub generator, but that requires a lot of time/effort and is not currently planned.
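A rough sketch of what such a custom stub generator would have to do - walk a module with `inspect` and emit .pyi-style signatures. This works for pure-Python modules (the stdlib `json` is used below just as a demo target); C extension modules like the Metashape wheel often lack introspectable signatures, which is part of why it is non-trivial:

```python
# Sketch: generate .pyi-style stub lines for a module's public names.
import inspect
import json  # demo target; a real generator would walk the Metashape module

def make_stub(module):
    lines = []
    for name, obj in vars(module).items():
        if name.startswith("_"):
            continue
        if inspect.isfunction(obj) or inspect.isbuiltin(obj):
            try:
                sig = str(inspect.signature(obj))
            except ValueError:  # C functions without signature metadata
                sig = "(*args, **kwargs)"
            lines.append(f"def {name}{sig}: ...")
        elif inspect.isclass(obj):
            lines.append(f"class {name}: ...")
    return "\n".join(lines)

stub = make_stub(json)
print("def dumps" in stub)  # True
```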

12
Feature Requests / Re: Please add Stubs for the Python standalone module
« on: November 24, 2023, 09:22:00 PM »
About auto-completion in PyCharm: it was not working for Metashape 2.0 (see screenshot pycharm_typehinting_MS20.png), but it seems to work properly for Metashape 2.1 (see screenshot pycharm_typehinting_MS21.png).

About VSCode: as a workaround, maybe it would work to generate stubs in PyCharm and then use the generated binary skeletons in VSCode? (see screenshot pycharm_binary_skeleton_MS21.png)

13
Feature Requests / Re: Gaussian Splatting
« on: November 10, 2023, 01:47:10 PM »
Quote
Do you know maybe - is possible way to convert somehow camera + pointcloud/tie data into colmap format without Pro version?
Any advices?

Hi. Sorry, I don't know any alternative methods.

14
Feature Requests / Re: Gaussian Splatting
« on: November 09, 2023, 03:31:19 PM »
Yes, Gaussian Splatting uses SfM results (camera calibration + 3D tie points), so to transfer that information from a Metashape project to the Colmap format (which Gaussian Splatting supports as input) - this script exists. But this script uses the Metashape Python API, which is available only in Metashape Professional.

15
Feature Requests / Re: Gaussian Splatting
« on: November 07, 2023, 12:00:28 PM »
Quote
Hi, any option for Standard version user?

At the moment I don't know of any options for the Standard version, because this is a Python script and only the Pro version has Python API support.
