Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Bzuco

Pages: [1] 2 3 ... 10
General / Re: Wide angle lens for full frame Nikon mirrorless camera
« on: December 05, 2022, 12:46:22 PM »
You can always calibrate the lens with a checkerboard or let Metashape deal with distortion automatically, but...
...from my perspective there is no need to use a wide-angle lens for architecture/buildings/monuments, because you can shoot the object from oblique angles with 24/28/35mm lenses (where there is less distortion).
I am using a 15mm lens (24mm after the 1.6 crop factor) and I have never had problems with distortion (in terms of bad alignment). I mostly do photogrammetry of house exteriors for garden projects.
So grab some 20+mm prime, or use that 14-30 zoom lens at a 25-30mm focal length.

The distortion is mostly in the areas of building overhangs and the windows under them. Shadows in these areas also do not help model quality.
If you don't have photos from the ground (or some drone photos pointing horizontally at the buildings) in those problematic areas, solving this distortion will be almost impossible.
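As a rough illustration of the crop-factor arithmetic above, here is a small Python sketch (the 36 mm full-frame sensor width and the rectilinear-lens field-of-view formula are standard assumptions, not from the post):

```python
import math

def horizontal_fov_deg(focal_mm, sensor_width_mm=36.0):
    # horizontal field of view of a rectilinear lens on a full-frame (36 mm wide) sensor
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

crop_factor = 1.6            # APS-C crop mentioned above
real_focal_mm = 15.0
equivalent_mm = real_focal_mm * crop_factor
print(f"15mm on a 1.6-crop body ~ {equivalent_mm:.0f}mm full-frame equivalent")
for f in (14, 24, 28, 35):
    print(f"{f}mm full-frame: {horizontal_fov_deg(f):.1f} deg horizontal FOV")
```

This makes the trade-off concrete: the longer focal lengths cover less per frame, so you compensate with oblique shots instead of extreme wide-angle distortion.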

General / Re: Editing/Exporting Tiled Model in other software
« on: November 22, 2022, 12:36:04 PM »
UE5's Nanite technology can handle high-resolution models and also VR.

General / Re: Can't write file (exported point cloud)
« on: November 21, 2022, 06:13:47 PM »
Exporting a point cloud as an OBJ file is not a good idea, because OBJ is a text-based format, and writing positions, normals and colors as text with 5 decimal places takes a very large amount of disk space and processing time. Use some binary format (e.g. binary PLY), which needs about 3 times less disk storage, and writing times are much, much faster.
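A back-of-the-envelope sketch of that difference (the 50M-point cloud size and the per-value character counts are hypothetical assumptions; the 27 bytes for a binary PLY point follow from float32 position + float32 normal + 8-bit RGB):

```python
def text_point_bytes(values_per_point=9, chars_per_value=10):
    # text OBJ: xyz + normal + rgb as ~10-char decimal strings, plus line prefixes/newlines
    return values_per_point * chars_per_value + 8

def binary_point_bytes():
    # binary PLY: 3 x float32 position + 3 x float32 normal + 3 x uint8 color
    return 3 * 4 + 3 * 4 + 3

points = 50_000_000  # hypothetical 50M-point dense cloud
text_gb = points * text_point_bytes() / 1e9
binary_gb = points * binary_point_bytes() / 1e9
print(f"text OBJ: ~{text_gb:.1f} GB, binary PLY: ~{binary_gb:.1f} GB "
      f"({text_gb / binary_gb:.1f}x smaller)")
```

On top of the size difference, the binary export skips all the float-to-decimal string formatting, which is where much of the export CPU time goes.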

General / Re: Performance on Mac devices
« on: November 11, 2022, 06:40:09 PM »
If you are not using a preselection option, then each image is compared against every other image during point matching, and 40,000 key points is an overkill value. For 5,000 images that means 5,000 x 5,000 x 40,000 points to compare... a huge number.
Try setting a 10,000 key point limit and 4,000 or even less (3,000/2,000) for the tie point limit. If some images do not align, try a 15,000 key point limit.
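The scaling argument above can be sketched like this (a rough proxy for the workload, not Metashape's actual matcher; the preselection neighbor count is a made-up illustration):

```python
def pairs_to_match(n_images, preselection=False, neighbors=50):
    # without preselection, every unordered pair of images is compared
    if preselection:
        return n_images * neighbors
    return n_images * (n_images - 1) // 2

def matching_workload(n_images, keypoint_limit, **kwargs):
    # rough proxy for work: image pairs x key points per image
    return pairs_to_match(n_images, **kwargs) * keypoint_limit

full = matching_workload(5000, 40000)
reduced = matching_workload(5000, 10000)
print(f"40k key point limit: ~{full:.2e} point comparisons")
print(f"10k key point limit: ~{reduced:.2e} ({full // reduced}x less)")
```

Because the pair count grows quadratically with image count, lowering the key point limit (or enabling preselection) is where the big wins are on large sets.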

From a GPU utilization perspective it is good to have more key points for matching, because your GPU will spend more time computing and less time transferring data to and from the GPU... so a more performant GPU (e.g. RTX 3080) makes sense. If the 10,000 key point limit works for you, there is no need for a high-performance GPU (an RTX 3060 Ti would be enough).

You can check my test in this post to see the difference in GPU utilization when different numbers of key points need to be matched.

The good news is that estimating camera locations does not take long.

Your 10 min point detection time for 864 photos can be sped up by a CPU with a high single-core frequency, because it is a single-threaded task.
I am using an RTX 2060 Super, an Intel 11700F @ 4.4 GHz and 18 Mpix JPEG files, each ~10 MB. My CPU can feed the GPU at ~3-4 JPEGs/s. The time to detect points on one photo on the GPU does not matter much, because it is a quick process; the bottleneck is the CPU single-core boost frequency.
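A quick sanity check of those numbers (the 3.5 photos/s rate is just the midpoint of the ~3-4 JPEGs/s figure above, so treat the result as a ballpark):

```python
def detect_time_s(n_photos, photos_per_second):
    # point detection is limited by how fast one CPU core can decode and feed photos
    return n_photos / photos_per_second

observed_rate = 864 / (10 * 60)      # 864 photos in ~10 minutes
faster = detect_time_s(864, 3.5)     # assuming ~3-4 JPEGs/s from a higher-clocked core
print(f"observed: ~{observed_rate:.2f} photos/s")
print(f"at 3.5 photos/s the same 864 photos would take ~{faster / 60:.1f} min")
```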

Try making changes to the alignment settings, and if that does not help much, then we can try changing the hardware.

General / Re: Performance on Mac devices
« on: November 11, 2022, 11:55:11 AM »
Hi, what alignment parameters are you using?
You need to figure out which phase of alignment is the slowest... then it will be easier to advise whether you need more CPU or GPU performance.
1. - detecting points is GPU accelerated, but sadly single threaded on the CPU, so there is almost no benefit from GPU acceleration
2. - selecting pairs is well GPU accelerated
3. - matching points is heavily GPU accelerated
4. - estimating camera locations is a pure CPU-only task
...can you measure the times of each of these tasks?

You can also lower the key point limit and tie point limit to speed up the matching points and camera location estimation tasks.

General / Re: Dense Cloud Export
« on: November 11, 2022, 11:34:58 AM »
Hi, it depends on:
1. the number of points in the dense cloud
2. the data you are exporting: just colors/normals/confidence or all of them
3. the export format, whether binary (much faster export) or text based
4. the export storage device: internal HDD / internal SSD / external USB 2.0/3.0

Can you share more info?
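To see why points 3 and 4 matter, here is a rough estimate (the 100M point count, the 27 bytes per binary point, and the sequential write speeds are hypothetical ballpark figures, not measurements):

```python
def export_estimate(points_millions, bytes_per_point, write_mb_s):
    # returns (size in MB, write time in seconds) for a sequential export
    size_mb = points_millions * 1e6 * bytes_per_point / 1e6
    return size_mb, size_mb / write_mb_s

# hypothetical 100M-point cloud, binary PLY: 3 float32 pos + 3 float32 normal + 3 byte color
for device, mb_s in [("external USB 2.0", 35), ("internal HDD", 150), ("internal SSD", 500)]:
    size_mb, secs = export_estimate(100, 27, mb_s)
    print(f"{device}: {size_mb / 1000:.1f} GB written in ~{secs / 60:.1f} min")
```

A text-based export multiplies both the size and the CPU formatting work on top of these raw write times.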

General / Re: Texture built from dense cloud
« on: November 07, 2022, 11:35:16 AM »

You can use the MeshLab program to transfer point colors to a texture.
A similar technique is used in this video - transferring point colors to mesh vertices.

For transferring point colors to a mesh texture you would need this workflow:
1. export the dense cloud and the mesh, and import them into a MeshLab project somewhere on disk
2. select the mesh layer and apply these filters from the top menu:
   a: Texture - Parametrization: Trivial Per-Triangle... this step is not needed if you can do the UV unwrapping in apps like 3ds Max/Blender/RizomUV/...
   b: Texture - Convert PerWedge UV into PerVertex UV
   c: Texture - Per Vertex Texture Function
   d: Texture - Transfer Vertex Attributes to Texture, with the point cloud as source and the mesh as target; also set the texture file name and resolution
3. select the mesh layer and export it as an OBJ model; you can also set the texture file name

Click the Model icon in the top panel and switch it to Wireframe mode.

Check the wireframe on the model, and if it is not dense enough, use the High face count option or some higher custom value.

If you don't have enough precision in your photos (1 pix ~ 3 cm), you cannot expect 3 cm precision in the model and textures.
Buildings also need a few photos from oblique angles, because top-down shots alone are not enough for nice shapes without distortion.
What Face count option did you use?

General / Re: Agisoft Metashape 2.0.0 pre-release
« on: November 06, 2022, 11:05:08 AM »
I wanted to know if you have been able to solve the problem that processors with many cores have. In the tests carried out by Puget Systems it is clear that a Ryzen 7 5800X (8 cores) has more performance than a Ryzen 9 5950X (16 cores), which doesn't make any kind of sense. From what I saw, the problem appeared in the tiled model, but there have also been problems with the dense point cloud. One would assume the 5950X should take close to half as long as the 5800X, or something approximate, but that is not the case; even the Ryzen Threadripper 3975WX (32 cores) is slower than the 5800X. I have done my own tests, but only on my laptop, which barely has 4 cores, so I could not draw any conclusions. It would be very helpful if you can answer that question. And the last question: how advisable is it to use DDR5 memory with Metashape; is it worth it?

The 5800X/5950X/3975WX have default TDP limits of 105W/105W/280W.
The TDP limit causes lower clocks on higher-core-count processors. E.g. the 105W limit on the 5800X means that during processing the processor can run at ~4.0-4.4 GHz, but the same limit on the 5950X means the clocks are lowered during processing, way below 4.0 GHz... for that reason the build dense cloud task does not scale linearly with core count. It is even worse for the 3975WX, because the 280W limit is not enough for high frequencies on all 32 cores (just ~3.0 GHz or lower).
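The per-core power budget at stock limits makes the effect visible (simple arithmetic on the TDP figures above; real boost behavior is far more complex than an even split across cores, so this is only an intuition aid):

```python
cpus = {
    "5800X": (8, 105),    # (cores, stock TDP in watts)
    "5950X": (16, 105),
    "3975WX": (32, 280),
}
for name, (cores, tdp_w) in cpus.items():
    # naive even split of the package power budget across all cores
    print(f"{name}: ~{tdp_w / cores:.1f} W per core at stock TDP")
```

With roughly half the watts per core, the 5950X simply cannot sustain the same all-core clocks as the 5800X under a full Metashape load.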

The solution is to unlock the CPU power limit in the motherboard BIOS and properly set the PBO settings and CPU core voltage. Then all higher-core-count processors will scale up much, much better... but I am almost 100% sure this kind of fine tuning was not part of Puget's testing methodology... everything was set to defaults.

Lower CPU core frequencies also cause a lower memory controller frequency, which increases processing time.
DDR5 can speed up processing a bit, but if it is more expensive than DDR4, buying a better CPU or GPU makes more sense.

Can you provide some screenshots of the building mesh and also an original aerial photo with that building?

If High quality depth maps do not give you enough quality, then Ultra High will need a lot of processing power.
I am just a Metashape Standard user, so I am not familiar with tiled builds and the SLPK file format.

Depth maps quality:
Ultra High - the full photo resolution is used, 14204x10652
High - 1/2 of the original res. - 7102x5326
Medium - 1/4 of the orig. res. - 3551x2663
From your photos you can estimate (or quite precisely calculate) how big an area one pixel covers in reality... e.g. if 1 pix is 3x3 cm, your maximum precision is 3 cm. If 10 cm precision would be enough for you, then you can set High (6 cm precision) or Medium (12 cm precision) quality.
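The precision scaling above can be written as a tiny helper (assuming, as described, that each quality level halves the linear image resolution and therefore doubles the effective ground sample distance):

```python
def depth_map_precision_cm(gsd_cm, quality):
    # each quality step halves the linear image resolution, doubling the ground sample distance
    downscale = {"ultra high": 1, "high": 2, "medium": 4, "low": 8}[quality]
    return gsd_cm * downscale

# with 1 pix ~ 3x3 cm on the ground, as in the example above:
for q in ("ultra high", "high", "medium"):
    print(f"{q}: ~{depth_map_precision_cm(3.0, q):.0f} cm best-case precision")
```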

Build mesh:
Face count - you can't really tell whether high/medium/low or a custom value will be enough until you see the final result, so it is a trial-and-error task. A city area needs more polys, flat areas fewer... every project is different.
Try creating a mesh on some smaller project and you will be able to guess better whether e.g. 500k polygons are enough for some object/area. Then try to estimate what polygon count will be enough for your big project. The bad part is that Metashape still creates a much denser model, which is decimated in the last step to your desired polycount.

I am often using just the point cloud for presentation, because the size of the points can be adjusted and I still have the option to measure everything.

Then the 3-hour time for depth maps seems correct. During this process the CPU is also quite heavily utilized.
Do you have all the project files on the SSD (disk 0) or on the external disk (disk 1)?
How utilized were the GPU/CPU and disk reads/writes during the depth map calculation? Can you show a larger/longer graph from Task Manager or some other monitoring tool?

In the mesh generation process, as I said, it is mostly a CPU task, and only some parts are GPU accelerated. If I am correct, Metashape generates a much denser mesh during generation, which is then decimated, which in my eyes is a bit of a waste of time.

I am using CloudCompare for generating a mesh from the Metashape point cloud, where I can set the density of the mesh in metric units.

Do you need High quality for the depth maps and mesh model? Wouldn't Medium quality also be enough?
