Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - 3create

Pages: 1 [2] 3
General / Re: Cylindrical Orthomosaic Generation for long think wall scan
« on: January 25, 2022, 10:25:56 PM »
Another approach for non-planar surfaces (in this case unintended) would be to create a simplified UV-mapped mesh model (in a 3D application) and use the textured (bent) model as the source for transferring its texture onto the new one (in Metashape). The resulting texture is an accurate unwrapping of the source, as long as the UV coordinates of the simplified mesh are dimensionally correct (i.e. not stretched).


General / Re: SfM from Archival images with inconsistent lighting
« on: January 08, 2022, 09:09:54 PM »
Hi danuhl,

I've worked on projects with archival images: very challenging!
Basically, SfM alone won't get you very far: there are usually too many textural changes between archival and current images for dense stereo matching to work.
The main work is a manual process, not suited to SfM applications.

The first step is to learn more about the intrinsic parameters of the archival images. This can be done by matching well-distributed points between the old and new images (with known coordinates, e.g. from laser scans), a process known as "reverse calibration".
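As a toy illustration of the principle (not a full reverse calibration, which also needs the principal point, distortion and camera pose; the point coordinates and the pinhole-only model below are made up for this sketch), known 3D coordinates plus measured image positions directly constrain the focal length:

```python
# Toy "reverse calibration" sketch: estimate the focal length of an
# archival camera from points with known 3D coordinates (e.g. from a
# laser scan, already transformed into the camera frame) and their
# measured image positions. Real workflows solve for far more
# parameters -- this only shows the principle.

def estimate_focal(points_3d, points_2d):
    """Least-squares focal length for x = f * X / Z, y = f * Y / Z."""
    num = 0.0
    den = 0.0
    for (X, Y, Z), (x, y) in zip(points_3d, points_2d):
        # Each observation contributes two linear equations in f.
        num += x * (X / Z) + y * (Y / Z)
        den += (X / Z) ** 2 + (Y / Z) ** 2
    return num / den

# Synthetic check: points projected with f = 2800 px should give f back.
f_true = 2800.0
pts3d = [(1.0, 0.5, 10.0), (-2.0, 1.0, 12.0), (0.5, -1.5, 8.0)]
pts2d = [(f_true * X / Z, f_true * Y / Z) for X, Y, Z in pts3d]
print(round(estimate_focal(pts3d, pts2d), 1))  # -> 2800.0
```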

A  well known "classic" in photogrammetry is the reconstruction of the destroyed Buddha statues, maybe that will give you a few hints:


General / Re: Dense cloud with Sky noise? How to solve?
« on: January 07, 2022, 01:12:21 PM »
That's very fine geometric detail in the balcony. I can imagine the geometry hasn't been reconstructed 100% in your DEM/mesh? In that case it wouldn't be a texturing problem: Metashape simply doesn't have the correct geometry to map onto.

Fixes: I can't think of a really simple one.
You could create a plane for the front part of the balcony in a 3D app, build an orthomosaic of just that geometry in Agisoft, and then alpha-mask that new orthomosaic of just the plane.
However, the wall behind the balcony will still have artifacts. This would require another plane for the wall part with alpha-masked balcony images...

What I've done in such cases is to hand-model the detailed structures based on the Metashape mesh in a 3D app, import that into Agisoft again and then create the orthomosaic.
But that's certainly not a "hotfix".

Feature Requests / Option to choose "half-dome" for occlusion map
« on: December 14, 2021, 10:25:09 PM »
If a 3d-scanned object is on the ground, it would be helpful to choose a "half-dome" lighting model for generating the occlusion map.
The occlusion map is a very important feature for de-lighting the diffuse map.
In a normal (efficient) workflow one isolates the scanned object (region and/or geometry clean-up). However, this means there is no ground geometry for the occlusion map generation.

Of course there are workarounds, but this suggested option would be really useful.

General / Re: How to straighten a 'banana' scan in MS Std?
« on: December 04, 2021, 06:15:00 PM »
...that comes out looking accurate enough, is there any advantages to doing the pre-calibration?
There's no significant speed benefit with pre-calibrated cameras. I haven't used GoPros on any project that requires metric accuracy, so without any measured control points I can't judge the accuracy benefit of pre-calibrating GoPros.
However, pre-calibration for such "challenging" lenses has helped me when Metashape had problems with the initial image alignment (it adds some constraints to the camera pose calculations).

Do you have any experience with the 7-8?  How does it compare with the 10?
I've only used the Hero 10, because I was hoping that the new chip would have "good enough" image quality. I'm rather disappointed with the quality of the stills, and have found the quality of frames grabbed from 5K video footage to be better (thus, however, losing the white-balance correction possibility of the raws).
I tried the Insta360 One R 1-Inch and it has much better raw image quality. However, the delay between shots is even worse than the GoPro's, and one can't remote-control several One Rs simultaneously.

General / Re: How to straighten a 'banana' scan in MS Std?
« on: December 02, 2021, 10:33:20 PM »
As already mentioned, oblique and rotated images add a lot to a more solid bundle adjustment, so certainly give that a try.
But it may also really help to calibrate your GoPros in a more controlled "lab setup" (this doesn't necessarily mean photos of a calibration pattern).

I've recently used a few GoPros and their lens distortions are very "challenging", but the cams have their use, just like in your project.
The advantage of pre-calibrating GoPros could be that they have a fixed focus, so this parameter will not change.
So you could just try importing the calibration (individually per camera, not just generically per camera model). Of course there will still be some variations in the field (or below water ;)) due to temperature differences, general calibration tolerances etc...
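For reference, a per-camera pre-calibration can be stored in Metashape's XML calibration format and loaded in the Camera Calibration dialog. Every number below is a made-up placeholder, not a real Hero 10 calibration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<calibration>
  <projection>frame</projection>
  <width>5568</width>
  <height>4176</height>
  <f>2950.0</f>
  <cx>10.2</cx>
  <cy>-6.4</cy>
  <k1>-0.27</k1>
  <k2>0.09</k2>
  <k3>-0.015</k3>
  <p1>0.0004</p1>
  <p2>-0.0002</p2>
</calibration>
```

Saving one such file per physical camera (not per model) is what makes the "individually per camera" import practical.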

Attached is a downsampled picture of the UV-distortion map for a GoPro Hero 10: pretty crazy, especially in the corners.

General / Generic mask images for multi-camera setups
« on: July 08, 2021, 10:01:28 PM »

I'm using a rig with 6 cameras. The images are imported to MS with the multi-camera workflow (master/slaves).
It works like a charm; however, some of the images have parts of the rig in them (this can't be avoided in this specific setup).

In a conventional MS-workflow it would be very easy to select a single mask image for each camera group (i.e. "Cam5_xxxx.tif" -> "Cam5_mask.png" / "Cam6_xxxx.tif" -> "Cam6_mask.png").
However, if I select a single mask image for the master, it applies it to all slave groups.
What's the solution to this, are there any "{}"-placeholders which would work when choosing the mask-images?

A workaround which did do the job was to create hundreds of "Cam5_xxxx_mask.png" images and use "(unknown)_mask.png" as a template.
But that is highly inefficient, as Cam5_mask.png is always the same image...
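Until there's a proper placeholder, the duplication at least can be scripted. A minimal sketch (the "CamN_" prefix and the folder layout follow my rig's naming, so adjust as needed):

```python
# Sketch: generate per-image mask copies from one generic mask per
# camera group, so that the "..._mask.png" filename template can pick
# them up. Assumes images are named "CamN_xxxx.tif" and the generic
# masks "CamN_mask.png".
import re
import shutil
from pathlib import Path

def expand_masks(image_dir, mask_dir):
    """For every CamN_xxxx.tif, copy CamN_mask.png to CamN_xxxx_mask.png."""
    image_dir, mask_dir = Path(image_dir), Path(mask_dir)
    created = []
    for img in sorted(image_dir.glob("Cam*_*.tif")):
        group = re.match(r"(Cam\d+)_", img.name)
        if not group:
            continue
        generic = mask_dir / f"{group.group(1)}_mask.png"
        if not generic.exists():
            continue  # no generic mask for this camera group
        target = mask_dir / f"{img.stem}_mask.png"
        shutil.copyfile(generic, target)
        created.append(target.name)
    return created
```

It's still hundreds of identical files on disk, of course, just without the manual labour.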

Thanks in advance!

General / Re: Strategy for texture on edited mesh
« on: July 08, 2021, 09:21:09 PM »
It's rather hard to pinpoint the exact errors based on screenshots in such cases (without the actual project files), but here are some rough guesses:
  • Artifacts: could be a problem with the polygon topology, i.e. overlapping faces. Solution: 3rd-party app to check mesh integrity with option to "repair" mesh
  • Textures on edited meshes: looks like a serious UV-mapping error. Basic test: let Metashape calculate the UVs for the edited/imported mesh (Mapping mode set to "generic" rather than "keep uv").
    If the texture is then still so buggy, it's a scaling/pivot problem of the imported mesh. If the texture is then OK, the UV coordinates of the imported mesh are messed up, i.e. a different mapping channel was used in your mesh editing software, which didn't get exported to the obj file.
Btw, I'm using imported, edited meshes in nearly every Metashape project, so rest assured that MS can handle this task very well. There just seems to be a "little" hiccup in your (UV-mapping/-exporting?) workflow... ;)
Good luck!

General / Re: image stabilisation - yes or no
« on: June 16, 2021, 01:07:53 AM »
Good points!
But ISO 400 is the max for my APS-C cameras (and weight + depth of field are constraints for higher-mounted cameras).

Yes indeed, maybe someone could come up with a cool concept for IS-comparisons:
i.e. a reproducible robotic motion rig ;)

General / Re: image stabilisation - yes or no
« on: June 15, 2021, 11:03:58 PM »
As mentioned, IS changes the camera model (intrinsic parameters) in a non-reproducible fashion.
HOWEVER, despite this obvious error introduction, what are the "real world" implications?

I'm sometimes confronted with projects, where there is little chance to optimize the image capture process (i.e. using a long monopod without the possibility for additional artificial lighting).
Images with motion blur in such a situation almost certainly introduce more reconstruction errors than IS-images?!

Does anyone know of any scientifically evaluated comparisons (i.e. controlled setup) between IS v.s. non-IS?
My guess is that IS images will introduce more noise (assuming oblique image-capture scenarios with solid projection angles).

Otherwise, this "IS-subject" remains foggy...

General / Re: i have question about Roughness and metalic Texture
« on: May 21, 2021, 11:08:04 PM »

Metashape "only" extracts the diffuse texture (and optionally normal and occlusion map), so deriving specular and glossiness maps is manual "guess work" (i.e. not physically correct).
There are 3rd party tools for extracting gloss and specular based on the diffuse map, however, the resulting textures are not accurate.

I therefore take a predefined material in the target 3D application which roughly matches the desired characteristics of my photoscanned object as a reference for the other maps.
Then it's tedious Photoshopping/masking using the diffuse texture as a basis.

Yes, getting physically correct specular/gloss (or metal/roughness) maps from photoscanned projects would be awesome!
However, this would require a complex setup with varying illumination angles, colors etc. per shot (search for scanned materials, light field).

If anyone knows of a more efficient approach, I'd also be extremely grateful :)

General / Re: First UAV for archaeological surveying on a budget
« on: November 18, 2020, 03:08:35 PM »
I primarily use the Mavic 2 Pro for cultural heritage projects, mainly building documentation.
But your requirements may differ.

- portability of course
- long flight duration (30 min)
- great image quality for a small sensor: 20 MP / RAW; the camera is produced in cooperation with Hasselblad (but this may be mainly marketing/branding)
- highly reliable obstacle avoidance (for me a _must_ for oblique image projects where one often needs to fly close to the object and/or avoid structural obstacles)
- great stabilization when hovering, as it uses additional ground sensing cameras (allows for long exposures, even indoors without GPS)
- so far I've successfully used it with the flight planning tools DJI GS Pro, Pix4D and Map Pilot. But there are many more apps; I just haven't used them long enough for a substantial judgement

- ISO limitations: beyond 200, such sensors produce far too much noise, IMHO

I've also used the Inspire with a Zenmuse X5 (FC550) in parallel to the Mavic 2 Pro on the same project, giving me a rare one-to-one comparison:
the image quality is identical (judged using the RAWs in Lightroom)!

General / Re: Can't move object
« on: April 22, 2019, 10:08:05 AM »
Maybe because the model already has a global orientation (i.e. from camera-GPS)?
In this case you can choose "Reset Transform" in the Object Move/Rotate/Scale pulldown menu.

General / Re: How high an overlap percentage is too much?
« on: January 15, 2019, 02:34:16 PM »
Hi Brit,

a few things come to mind if you need to reduce processing time for your scene:
  • RAM: I always have the Task Manager running when processing scenes. If more than the 32 GB you have installed are in use, the system starts swapping and processing will take far too long
  • Region: the Region seems rather large. Check if you can scale it down to just the hill
  • Medium vs. High: the performance penalty for High quality is VERY large. Medium should be fine with Mavic 2 images (of course depending on the LOD you require)
  • Orbits: I've found that 5 degrees (= 72 images) is on the safe side, but I've also had successful turntable scans with 10 degrees
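Those image counts are just 360° divided by the angular step between shots; a two-line helper (the function name is mine) makes the trade-off explicit:

```python
# Images needed for one full turntable orbit at a given angular step.
def images_per_orbit(step_deg):
    return round(360 / step_deg)

print(images_per_orbit(5))   # -> 72 (safe side)
print(images_per_orbit(10))  # -> 36 (often still works)
```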

General / Re: Pc Build Help
« on: December 21, 2018, 06:33:16 PM »
When configuring my workstation I was also considering the TR 2990WX.
However, due to the memory management, the 32 cores don't scale well with all applications, sometimes there is even a performance penalty.

I therefore chose the 16 core 2950X and am absolutely satisfied: it also has great single core performance (useful in many other applications surrounding photogrammetry, i.e. RAW image processing, point cloud editing etc.).

Currently there are also other top CPUs for photogrammetry (such as the i9 9900K), but they don't have the 3d rendering performance of 16 or 32 core TRs and they "only" support 64 GB RAM.

Here's a benchmark overview: (which is based on the older PhotoScan 1.3.3 version).
