Forum

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Bzuco

Pages: 1 ... 14 15 [16] 17
226
General / Re: New Low Hash Rate (LHR) GeForce Cards
« on: September 03, 2021, 09:29:07 AM »
@Corensia
In multicore performance the 5600X is sadly slower than the 9900K, both at stock frequencies (a good comparison is Cinebench Rxx). Only in single-core performance is the 5600X better.
In terms of GPUs, there were changes in the CUDA cores and especially in how they are grouped inside the GPU core. In the RTX 30xx series the CUDA core count was doubled, but the cores were also "regrouped into two blocks" within one SM. There is also the question of the dispatcher that assigns tasks to the CUDA cores, and how well it can handle that assignment. You can compare the numbers (core configuration and the other fields in the table) on this page https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_20_series
Differences, 3060 Ti vs 2080 Super: SM blocks (38 vs 48), memory bandwidth (448 vs 496), default and boost frequencies (1410/1650 vs 1665/1815), memory frequency (14000 vs 15500)
Can you compare just the depth map generation times alone, and check what the approximate frequencies were during that process? Or better, lock the frequencies using the C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe utility to get a more accurate result.
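Recent drivers let you pin the clocks with nvidia-smi's --lock-gpu-clocks / --reset-gpu-clocks options. A small sketch that only builds the command lines (run them yourself from an elevated prompt; the 1665 MHz value is just an example):

```python
# Sketch: build nvidia-smi commands that pin the GPU core clock to a fixed
# value for repeatable benchmark runs (requires a driver with -lgc support).
NVIDIA_SMI = r"C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe"

def lock_clocks_cmd(mhz, gpu_index=0):
    # e.g. nvidia-smi -i 0 --lock-gpu-clocks=1665,1665
    return [NVIDIA_SMI, "-i", str(gpu_index), "--lock-gpu-clocks=%d,%d" % (mhz, mhz)]

def reset_clocks_cmd(gpu_index=0):
    # e.g. nvidia-smi -i 0 --reset-gpu-clocks
    return [NVIDIA_SMI, "-i", str(gpu_index), "--reset-gpu-clocks"]

print(" ".join(lock_clocks_cmd(1665)))
```

Pass the list to subprocess.run(..., check=True) with admin rights, and reset the clocks afterwards so normal boost behaviour returns.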
There is also a chance that the LHR limit affects some kinds of processing tasks; we will see in the future. The doubled CUDA core count in the RTX 30xx generation does not automatically mean doubled performance in applications; some optimization is probably needed in the kernels executed on the GPU. I'm not an expert in this field, but I'm a little interested in it.
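For a paper comparison of the two cards above you can compute the theoretical FP32 throughput (cores x 2 FLOPs per cycle x clock); keep in mind this is exactly the number that does not translate 1:1 into Metashape performance, for the scheduling reasons mentioned:

```python
def peak_fp32_tflops(cuda_cores, boost_mhz):
    # 2 FLOPs per CUDA core per cycle (one fused multiply-add)
    return cuda_cores * 2 * boost_mhz * 1e6 / 1e12

# Core counts from the Wikipedia table: 3060 Ti = 4864, 2080 Super = 3072
print(round(peak_fp32_tflops(4864, 1665), 1))  # ~16.2 TFLOPS
print(round(peak_fp32_tflops(3072, 1815), 1))  # ~11.2 TFLOPS
```

So on paper the 3060 Ti is ~45% faster, even though it has fewer SM blocks and less memory bandwidth.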

227
General / Re: Best hardware for network processing ?
« on: September 01, 2021, 07:31:18 PM »
Will your output be just a dense cloud, or are you planning to also create a mesh with textures?
What will be the final quality of the depth maps/dense cloud: ultra high, high, or just medium?
If your projects are aerial photos, they are easier and less time-consuming to process (fewer depth maps need to be filtered together).
If your project is archaeology or building interiors/exteriors, where parts of the object are shot from tens of different angles, it will be more time-consuming (more depth maps need to be filtered together).
If you can answer these questions, it will be clearer whether you need to invest more in the GPU(s) or the CPU. The alignment phase is handled well by the GPU, so that is an easy decision. The rest of the processing is spread roughly 50:50 between CPU and GPU, depending on what the output will be.

2000-7000 is plenty of photos (especially at 60 Mpix), but it can still be done on one computer if there is no time pressure to finish the project.
I would build one computer and see how it handles the project in terms of time. I don't have enough knowledge about Metashape network processing: what data is shared, which phase benefits the most from the speedup, ...

I would go for one PC with an AMD 16/24/32-core CPU and one or two GPUs (if 24 GB VRAM is not needed, the 3080 Ti is the better option). The amount of RAM depends on the quality of the output.
My guess is that the whole $50k does not need to be spent on HW + licenses at the project sizes you mentioned.
We'll see what others suggest.

228
General / Re: Best hardware for network processing ?
« on: September 01, 2021, 11:05:00 AM »
It would be useful for us to know what kind of tasks you are planning to process... one big project or hundreds of smaller ones? Then it would be easier to advise you on hardware.

229
General / Re: Convert camera coordinates to world coordinates
« on: August 26, 2021, 02:18:45 PM »
Until someone tells you whether it is possible or how to convert, you can find some help by googling "decomposing a 4x4 transformation matrix".
You can also export the cameras to FBX and import them into Blender, which will tell you the position and rotation in world space.
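The decomposition itself is a few lines of numpy; a sketch assuming the usual column-vector convention with the translation in the last column (if your matrix is transposed/row-major, transpose it first):

```python
import numpy as np

def decompose_transform(m):
    """Split a 4x4 transform into translation, rotation and per-axis scale.
    Assumes column-vector convention and no shear."""
    t = m[:3, 3].copy()
    rs = m[:3, :3]
    scale = np.linalg.norm(rs, axis=0)  # column lengths = scale factors
    r = rs / scale                      # normalized columns = pure rotation
    return t, r, scale
```

Euler angles can then be read out of r, e.g. with scipy's Rotation.from_matrix(r).as_euler('xyz').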

230
General / Re: Image alignment processing times 1.7.3 vs 1.7.4
« on: August 21, 2021, 08:25:21 AM »
Preselect ON: +17% on matching time and +6% on alignment time
Preselect OFF: 0% on matching time and +6% on alignment time

231
General / Re: Image alignment processing times 1.7.3 vs 1.7.4
« on: August 20, 2021, 10:35:38 PM »
I keep only the latest version of the installer  :-\ . Maybe I will find some mirror sites with older versions.
EDIT: Changing "1.7.4" to "1.7.3" in the original download link gives me what I need  :) . I will try it tomorrow.

232
General / Re: Image alignment processing times 1.7.3 vs 1.7.4
« on: August 20, 2021, 09:45:53 PM »
I will gladly test on my sets with locked GPU frequencies, but I don't know where to download the previous Metashape Standard 1.7.3  :-\
You got fewer sparse points in 1.7.4; could the reason be that 1.7.4 detects fewer/more matches on the photos and therefore the alignment task simply takes a different time to complete?

233
General / Re: Face Count, VRAM and Model viewport display.
« on: August 11, 2021, 05:27:24 PM »
@wojtek
I tried importing a 24M-polygon model from an FBX file (binary, ver. 2014) and I am also getting an incorrect model with missing polygons. When I imported the same model in OBJ format, it displayed correctly. I tried VBO on/off; the difference was only in VRAM usage/allocation (VBO on 465 MB, VBO off 104 MB).
I think the A6000 is not necessary. With 24 GB VRAM you should theoretically be able to load a model with ~5500M polygons (just vertices and triangles; no normals, no smoothing groups, no UVs, VBO off).
You can check in the GPU-Z utility how much VRAM your model takes (with all attributes: UVs, smoothing groups, vertex colors, etc.).
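One way to sanity-check that kind of ceiling yourself; the byte sizes here are my assumptions (float32 positions, uint32 indices, V ≈ F/2 for a typical closed mesh), and the result changes a lot with different index widths or attribute layouts, so treat it only as an order-of-magnitude check:

```python
def max_faces_for_vram(vram_bytes, bytes_per_vertex=12, bytes_per_face=12):
    # float32 position = 3 * 4 B per vertex, uint32 indices = 3 * 4 B per face;
    # with V ~ F/2, each face effectively costs bytes_per_face + bytes_per_vertex/2
    per_face = bytes_per_face + bytes_per_vertex / 2.0
    return int(vram_bytes / per_face)

print(max_faces_for_vram(24 * 1024**3) // 10**6, "M faces")  # geometry-only estimate
```

Real viewers add driver and framebuffer overhead on top, so the practical limit is lower still.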

234
General / Re: LODs and Metashape
« on: August 09, 2021, 04:08:18 PM »
Yes, same vertex/face count. And the texture quality depends on: 1. the quality set in the Build Dense Cloud dialog in Metashape, 2. the texel density of your mesh after unwrapping (fewer chunks, better packed in UV space, means higher detail in the final texture).
CloudCompare does not generate as smooth meshes as Meshroom; it is a simple Poisson algorithm where you can set the distance between vertices in metric units and choose one of three mesh-boundary methods. Meshroom probably uses some retopology algorithm, so the topology is uniform and smooth. I think Meshroom is good for the LOD_0 mesh, but the higher-numbered LODs should have their vertices distributed more precisely and efficiently across the mesh, especially if they will be used in games (more vertices in curvy areas and fewer in flat areas). Evenly distributed vertices are not the best for LOD_1/2/3/... But if you just need smooth meshes with fewer polygons, then Meshroom does its job great :)

For smaller objects your workflow will probably be better. I mostly do exterior models with hundreds of photos, so for me blending so many photos to create one texture was not an ideal solution, and I don't even need the pixel density of the original photos in the final mesh texture. Therefore I chose to just bake the point colors to the texture, because my dense clouds have a density of ~1 point per 3 mm and that is enough for exterior models (a garden around a house, ...).

235
General / Re: LODs and Metashape
« on: August 09, 2021, 01:27:44 PM »
Even if you reduce the mesh by 90% it does not matter, because you would always be baking the original dense point cloud colors (not vertex colors from the mesh) to the texture, even on the smallest LOD.

236
General / Re: LODs and Metashape
« on: August 09, 2021, 12:08:23 PM »
Well, you can try another workflow outside of Metashape; it probably needs the same amount of time, but maybe with some advantages.
1. create the dense cloud in Metashape and export it
2. create several point cloud LODs in CloudCompare using the subsample function and run Poisson surface reconstruction on each LOD to create the meshes; this should be faster, because you will be creating meshes from the LOD point clouds and not always from the large dense cloud. Export the meshes.
3. unwrap the meshes for texturing in Blender or other software
4. export the meshes from Blender and the dense cloud from CloudCompare/Metashape
5. import everything into MeshLab and use Transfer Vertex Attributes To Texture. This will bake the dense point cloud colors to the texture.
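The LOD subsampling in step 2 can be imitated in a few lines; this is only a numpy stand-in for CloudCompare's 'space' subsample mode, not its actual algorithm:

```python
import numpy as np

def voxel_subsample(points, voxel):
    """Keep the first point in every voxel of edge length `voxel` (an Nx3
    array in, a sparser Nx3 array out); doubling `voxel` gives the next LOD."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]
```

Generating LOD_0/1/2/... is then just calling it with voxel, 2*voxel, 4*voxel, ... on the exported dense cloud.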

I used this workflow in a few projects, because I only have the Standard version of Metashape and I apply transforms to the point cloud in CloudCompare, so afterwards it is not easy to bake the photos from the original camera positions to the mesh texture. Also, creating the texture from photos takes some time; transferring point colors to the texture is a faster process.

This workflow maybe brings fewer problems in creating and cleaning meshes. One disadvantage is that you are baking point colors and not the original photos, but if your dense cloud is dense enough, it should not be much of a problem. MeshLab can create a script from the last used functions, which also speeds up the overall work. CloudCompare has a function to clean up noise in the dense cloud; it should help create a better mesh without the additional need to clean it.

I hope some of my tips will help you.

237
General / Re: Thermals on 10900k
« on: July 29, 2021, 11:33:50 AM »
Look at undervolting your CPU. All Intel CPUs from the last 10-15 years have their core voltage set much higher than necessary. You can save a lot of watts at 100% CPU usage and decrease your temperatures significantly. I would advise you to set fixed frequencies of 4.3/4.4 GHz on all cores and start decreasing the voltage until your CPU freezes during calculations, then go one step back (increase the voltage by +0.01 V) and test again whether it is stable. This is a quick way to undervolt, and afterwards you don't need to worry about temperatures anymore. If you want to keep the frequencies variable (boost frequencies), undervolting will be a longer process to be sure your system stays stable; I don't think it is worth it for those extra boost frequencies.
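The step-down-then-back-off procedure is essentially a linear search over voltage; a toy sketch in millivolts (the stability check here is a stand-in -- on real hardware each "test" is a long Prime95/Cinebench run at that BIOS/XTU setting):

```python
def find_safe_mv(start_mv, step_mv, is_stable):
    """Lower the core voltage until the stress test fails,
    then return the last value that was still stable."""
    mv = start_mv
    while is_stable(mv):
        mv -= step_mv
    return mv + step_mv  # one step back up; confirm with a longer test run

# Toy model: pretend this particular chip freezes below 1200 mV.
print(find_safe_mv(1350, 10, lambda mv: mv >= 1200))  # -> 1200
```

In practice you would add some extra margin on top of the returned value, since stability at one workload does not guarantee stability at all of them.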

Also check your GPU temperatures during the alignment phase and depth map calculation. I was surprised that my RTX 2060 Super reaches 74°C core and 87°C hotspot during the point matching phase, and that is with an undervolted GPU and the power limit set to only 70%.
Not all GPU calculations in Metashape utilize all CUDA cores at 100% all the time, but when it happens for more than a few seconds it can quickly raise the GPU core temperature to the limit where throttling starts.

Undervolting is not about making the system unstable or less performant, but about making it power efficient.

238
2) https://github.com/wjakob/instant-meshes
Instant Meshes allows you to paint flow lines before retopology. If I remember correctly, there was an option to load a point cloud (experimental feature?) in PLY format and create a mesh from it.
In other software like ZBrush/3DCoat/... you can also control the mesh flow, but there is no option to load point cloud data, only a mesh.

Regarding the rounded edges, you need to create a better point cloud: take more photos of the curbs from different angles to get sharper shapes.

239
General / Re: How to get adaptive resolution of texture?
« on: July 29, 2021, 10:22:19 AM »
1. export the mesh from Metashape
2. import the mesh into Blender/3ds Max/...other apps
3. unwrap the mesh and scale up the polygons (areas where you want more detail) in the UV editor, but keep every polygon inside the UV square with no overlaps
4. import the mesh back into Metashape
5. build the texture with Mapping mode set to Keep UV

In the example I grabbed the top-right part of the mesh and enlarged it, so that part comes out much sharper in the final texture.
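Step 3 in numpy terms; a sketch only -- selecting the faces and re-packing the islands so nothing overlaps is still manual work in the UV editor:

```python
import numpy as np

def enlarge_uv_region(uv, mask, factor):
    """Scale the selected UV vertices (boolean `mask` over an Nx2 array)
    around their centroid so that region gets more texel area.
    Re-pack afterwards to avoid overlaps."""
    out = uv.copy()
    center = uv[mask].mean(axis=0)
    out[mask] = center + (uv[mask] - center) * factor
    return np.clip(out, 0.0, 1.0)  # stay inside the 0..1 UV square
```

A region scaled by factor 2 gets 4x the texture area, hence the visibly sharper result.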

240
General / Re: image stabilisation - yes or no
« on: July 28, 2021, 03:56:39 PM »
I made an attempt. I took six photos of a graffiti wall, one row from left to right. The first (img 2669-2674) and second (img 2675-2680) attempts were with stabilization off, the third (img 2681-2686) with it on. I took the photos from about the same positions and pointed at roughly the same spots on the wall. On the third attempt I was slightly farther from the wall (~30 cm), but nothing critical.
My expectation was that the 1st and 2nd attempts should result in the same point cloud and the 3rd attempt in a slightly or significantly different one.
My expectations were confirmed.
Additional info:
I took the photos "quickly", holding the camera with one hand and trying not to move. Manual exposure 1/400, ISO 100, F4.0, autofocus on, 5184x3456 JPG, 24mm lens.
Alignment params: Accuracy High, preselection off, key point limit 20000, tie point limit 0, remaining options off.
Dense cloud: Ultra High, aggressive filtering.
The View Matches table also confirmed that stabilization ON causes more invalid points; I compared one photo in the middle of the wall with its two neighbours.

You can compare the 3 rasters from the link below to see the differences (e.g. in Photoshop set the layer blending mode to Difference). The images are ~1 px = 1 mm and I was ~2.5 m from the wall.
https://www22.zippyshare.com/v/esWhdJBJ/file.html  (23.6MB)
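The Photoshop Difference blend mentioned above is just a per-pixel absolute difference, so the comparison can also be scripted (load the rasters into uint8 arrays first, e.g. with PIL or imageio):

```python
import numpy as np

def difference_blend(a, b):
    """Photoshop-style Difference blend for two aligned uint8 rasters;
    bright pixels mark where the two exports disagree."""
    # widen to int16 so the subtraction cannot wrap around under uint8
    return np.abs(a.astype(np.int16) - b.astype(np.int16)).astype(np.uint8)
```

At ~1 px = 1 mm, the brightness pattern of the result directly shows the size of the stabilization-induced shifts.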

I will be glad if somebody can confirm that this experiment is enough to show that image stabilization introduces deviations in point positions. But I also suppose that if I held my breath and took each photo after 2 seconds of not moving the camera (letting the stabilization finish its job and return the element to the center of the chip, etc.), I would get almost the same results as with stabilization off.
