Hi folks. We're scanning some sets and props for our animated movie (https://bloodandclay.com/bnc.php), and I would love to get feedback on our approach and ask some questions about it.
For this example we shot 248 JPEGs with a Canon EOS RP, using strong bounce lights to avoid cast shadows. We're on the latest Metashape Standard version.
- estimated image quality: ~30 images around 0.5, ~125 images above 0.7
- focal length: 50 mm
- ISO: 100
- f-stop: f/20
- shutter: 1/3.33 s
We're
- aligning photos
- building the dense cloud
- building the mesh
- building the texture
Everything is pretty much default. I chose Highest/Ultra High in the quality settings and went with 3 UDIM tiles at 4K.
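As a side note, this default workflow could also be scripted if we ever move to the Professional edition (the Standard edition doesn't expose Python scripting). A minimal sketch, assuming the 1.x Python API; the method names and the downscale-to-preset mapping vary between versions, and the paths are only placeholders:

```python
# Minimal sketch of the align -> dense cloud -> mesh -> texture workflow via
# the Metashape Python API (Professional edition only; not available in
# Standard). Method names differ between API versions, e.g. buildDenseCloud()
# in 1.x vs. buildPointCloud() in 2.x; paths are placeholders.
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(glob.glob("/path/to/photos/*.jpg"))   # placeholder path

# Alignment: downscale=0 should correspond to "Highest" accuracy in the 1.x API
chunk.matchPhotos(downscale=0, generic_preselection=True)
chunk.alignCameras()

# Depth maps / dense cloud: downscale=1 should correspond to "Ultra high"
chunk.buildDepthMaps(downscale=1)
chunk.buildDenseCloud()

chunk.buildModel(source_data=Metashape.DenseCloudData)
chunk.buildUV(page_count=3)                           # 3 UDIM-style texture pages
chunk.buildTexture(texture_size=4096)

doc.save("/path/to/project.psx")                      # placeholder path
```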
The result is very good in some areas and poor in others. For example, the small cupboard came out rather bumpy; we assume this is because its surface is quite shiny. We know about the need for cross-polarization and will implement it in future shoots.
But there are some noisy areas on the floor, too. If you zoom in you'll see mesh parts 'floating' over other parts. What is the reason for this? Some areas will certainly be difficult to capture because there are little spikes sticking out, like here. But we also have quite a few photos of the floor with out-of-focus areas: is this the reason? Is it better to leave those photos out? To use masking to mask out the out-of-focus areas? Or to shoot those angles with a shorter focal length?
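If masking the out-of-focus areas turns out to be the way to go, I'm assuming the masks could be generated automatically rather than painted by hand. A rough sketch using OpenCV's variance-of-Laplacian sharpness measure; the block size, threshold and paths are placeholders that would need tuning per image set:

```python
# Rough sketch: build a per-image "in focus" mask by thresholding the local
# variance of the Laplacian (a common sharpness measure). BLOCK, THRESHOLD
# and the paths are assumptions to tune for the actual photos.
import glob
import os
import cv2
import numpy as np

BLOCK = 64          # tile size in pixels (assumption)
THRESHOLD = 100.0   # sharpness cutoff (assumption)

for path in glob.glob("/path/to/photos/*.jpg"):          # placeholder path
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=np.uint8)

    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            tile = gray[y:y + BLOCK, x:x + BLOCK]
            if cv2.Laplacian(tile, cv2.CV_64F).var() > THRESHOLD:
                mask[y:y + BLOCK, x:x + BLOCK] = 255      # sharp enough, keep

    # smooth the blocky result a bit before saving
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE,
                            np.ones((BLOCK, BLOCK), np.uint8))
    cv2.imwrite(os.path.splitext(path)[0] + "_mask.png", mask)
```

The resulting PNGs could then be loaded through Metashape's Import Masks dialog (matched to the photos by filename template), so the blurry regions are ignored during processing.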
In general, I have the feeling that the generated texture could be sharper. Are the pictures with out-of-focus areas the reason for this? What other factors contribute to texture quality?
A question about masking: as I understand it, masking speeds up processing because Metashape knows where NOT to look. But does it also improve the quality of the mesh?
A question about the selection tools: after aligning the images I use the selection tools to delete the points of no interest. But after building the mesh they are all back and I have to delete them again. What am I not understanding here?
Are there workflows (existing or on the horizon) that enhance photogrammetry results with machine-learning tools and/or LiDAR scans?
Technical:
- Processing is rather slow with these settings and this number of photos. I have a couple of machines here. Currently the latest Metashape runs on my workstation: Intel Core i7-5960X @ 3.00 GHz, 64 GB RAM and a GeForce GTX 1070 (to be upgraded to an RTX 3070 Ti) on Windows 10 Pro.
I could install Metashape on a 24-core Xeon machine (but with no GPU). If I put the old 1070 into that machine, would that be the better Metashape machine?
Any feedback is greatly appreciated. Thank you!