Forum

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - Cyberworld

Pages: [1]
1
I've recently updated to Metashape version 1.8.4 with the latest recommended NVIDIA drivers for my two Quadro RTX 4000 cards (driver version 516.94).

I run my usual routine, but textures are no longer rendered correctly with the Mosaic blending mode when GPU acceleration is enabled. I've built the same texture using the Mosaic, Average and Disabled blending options with all other settings identical (Diffuse map, Generic mapping, 4K resolution, Hole filling and Ghosting filter enabled where available, source data From Images). I attach three screenshots: one with Mosaic, one with Disabled (Average gives about the same results), and one with Mosaic with GPU acceleration turned off.

Is this a bug?

2
General / Model quality or how to interpret errors in Metashape?
« on: August 25, 2021, 09:25:37 AM »
Just a quick question on how one should interpret the errors reported in the Info window and the Reference pane. I am working on high-detail, high-quality objects for the GLAM sector, and I need to make sure that what I am seeing visually matches what the software considers good quality.

For the Info window, the RMS reprojection error usually ranges between 0.2 and 0.35 pixels. Is this good enough?
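For concreteness, my understanding is that the RMS reprojection error is simply the root mean square of the per-point pixel residuals. A minimal sketch of the arithmetic (plain Python, no Metashape API involved; the residual values are hypothetical):

```python
import math

def rms(residuals_px):
    """Root mean square of per-point reprojection residuals, in pixels."""
    if not residuals_px:
        return 0.0
    return math.sqrt(sum(r * r for r in residuals_px) / len(residuals_px))

# Hypothetical residuals for a handful of tie points, in pixels:
print(rms([0.2, 0.3, 0.25, 0.35, 0.1]))  # ~0.255
```

On that reading, an RMS of 0.2-0.35 px just means tie points re-project within a fraction of a pixel on average, which is why I suspect it is fine, but I'd like confirmation.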

For the Reference pane, the Total error for markers is very confusing: it reports errors ranging from 5 to 10 m, yet in the pixel column the error is between 1 and 3 pixels. My pixel size is about 5 µm, so the error should be very small.

Also in the Reference pane, the Total error for scale bars is almost always less than 0.1 mm (it hovers between 0.02 and 0.08 mm). The accuracy is verified by taking measurements with the ruler tool in Metashape and comparing them with physical measurements on the objects themselves using a digital ruler; the measurements match to two decimal places of a centimetre (0.01 cm).

Why is the Total error in metres for markers so far off? Is it because I'm not using a spatial measurement device (e.g. a total station) to pinpoint the markers in physical space?

Thank you for any clarity on the issue.

PS: I have already read the Manual for Metashape, but it does not provide practical examples on how to interpret the numbers.

3
Python and Java API / Python script no longer running in Metashape 1.7.4
« on: August 24, 2021, 12:54:32 PM »
Has anything changed with how Python is handled in Metashape 1.7.4? The script below worked with 1.7.3, but it no longer does.

Code: [Select]
import Metashape

chunk = Metashape.app.document.chunk

for marker in chunk.markers:
    for photo in list(marker.projections.keys()):
        if not marker.projections[photo].pinned:
            marker.projections[photo] = None
            print("Projection for " + marker.label + " removed on " + photo.label)

print("Script finished")

Any suggestions?

4
General / Automate marker pinning via Python
« on: July 08, 2021, 11:46:13 AM »
I have searched the forums for marker batch operations, but I'm coming up empty, save for removal of blue flag markers via a Python script.

The issue I have is that I'm running heavy photogrammetry projects (c. 600-800 photos), each with more than 15 markers. Marker auto-detection worked, but it leaves every detection as a blue-flag (unpinned) marker. Pinning each one by hand is very time consuming, and there is very little need to move markers, as the vast majority have been placed correctly by the algorithm. So the question is: is there any way to programmatically batch-convert all the blue-flag markers on photos to green flags, i.e. actually pin them?
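For what it's worth, here is a sketch of what I have in mind, based on my reading of the Python API: re-assign each unpinned projection as a pinned `Metashape.Marker.Projection` at the same image coordinate. The `Projection(coord, pinned)` constructor and the exact flag behaviour are assumptions on my part, to be verified against the API reference:

```python
def pin_all_projections(markers, make_projection):
    """Replace every unpinned (blue-flag) projection with a pinned (green-flag)
    copy at the same image coordinate. Returns how many were pinned."""
    count = 0
    for marker in markers:
        for camera in list(marker.projections.keys()):
            proj = marker.projections[camera]
            if not proj.pinned:
                marker.projections[camera] = make_projection(proj.coord, True)
                count += 1
    return count

try:
    import Metashape  # only available inside Metashape's bundled Python
    chunk = Metashape.app.document.chunk
    n = pin_all_projections(
        chunk.markers,
        lambda coord, pinned: Metashape.Marker.Projection(coord, pinned))
    print("Pinned {} projections".format(n))
except ImportError:
    pass  # running outside Metashape; nothing to do
```

The idea is that only the pinned flag changes; the marker positions the detector found are kept as-is.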

Thank you for any help on this. It would also make a great new feature for future versions of Metashape.

5
After testing for several hours with several different sets of data, I'm concluding that Metashape 1.7.2 does not correctly parse or process 16-bit or 8-bit photos in the ProPhoto RGB colour space. Upon importing the photos, the colour values change, and for some reason the quality of the produced sparse point cloud is lower than with other colour spaces (checked against Adobe RGB, sRGB, and DNG/RAW without an assigned colour space). Since the Metashape manual states that "Metashape uses the full color range for image matching operation and is not downsampling the color information to 8 bit", I'm considering this a potential software bug, unless there is confirmation that ProPhoto RGB is not a supported colour space in Metashape.

An example of what happens in another post I made earlier today: https://www.agisoft.com/forum/index.php?topic=13301.0

6
General / Isn't 16-bit colour supposed to be better for model quality?
« on: April 24, 2021, 03:14:54 AM »
I've been doing some field testing in Metashape. I've taken a set of 40 photos with the same camera both as JPEG (8-bit) and RAW/DNG (the sensor captures 10-bit colour); the camera saves each shot in both formats simultaneously, so I did not retake the photos in a different format. I then imported the JPEGs and RAW/DNGs as-is into Metashape, and also converted the RAWs to 16-bit ProPhoto RGB TIFFs using Photoshop. I did not touch the colours or white balance for the TIFFs, only slightly increased the sharpness and exposure. On visual inspection the TIFFs look almost identical to the JPEGs colour-wise.

All processing done in Metashape used the exact same parameters for the sets (same alignment, dense cloud, mesh and texture parameters).

Upon alignment in Metashape, I noticed that the JPEGs produced better mean and max RMS than the TIFFs. Why is that? Isn't higher colour depth supposed to be better for model accuracy and image alignment?

Upon texture generation, the TIFFs came out with a greenish tint that is not present in the source images! Why does this happen? I know Metashape supports up to 32-bit EXRs, so 16-bit ProPhoto should not be an issue.

I'm attaching a couple of screenshots with close-ups of the textured model from the 8-bit JPEGs and the 16-bit TIFFs. Both the quality drop and the colour shift are evident in the 16-bit version. Am I doing something wrong, or is it a software issue? Is there a way to tell Metashape what colour space the imported data are in?

Note that the RAW/DNG model produced slightly better mean and max RMS and colours similar to the JPEGs, so the issue is specific to the 16-bit TIFFs in ProPhoto RGB.

7
General / Use of markers and marker corrections
« on: April 11, 2021, 12:56:05 AM »
For a project I'm working on we use either non-coded or coded targets. After initial alignment of photos, we use the detect markers feature to auto-detect the targets on the images and place markers (and later scale bars).

Question 1: If I set markers manually, instead of using the detect markers feature, does it make any difference if my initial alignment already has a good RMS?

When the detect markers process finishes, it successfully finds the targets across the photos and places markers. However, on some photos the markers are placed slightly off the centre of the target, and thin black lines indicate their residual errors. We then go over each photo and move the markers back to the correct centre of the associated target, hitting the Update button once we are done.

Question 2: Does manually moving the markers back to the correct centre of the target help increase the accuracy of the model? If yes, what does this action do exactly? Does it re-align the specific camera where the correction was applied?

Question 3: Instead of hitting just the Update button on the Reference pane, do we also need to run Optimise cameras?

The reason for questions 2-3 is that we don't really see much improvement in the RMS errors (the initial RMS is around 0.25-0.3 (0.22-0.25 pix), and after correcting the markers/targets it stays about the same), and we were wondering whether all the trouble of manually correcting each target is actually worth it.
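To make question 3 concrete, this is the sequence I have in mind after editing the markers: a sketch against the Python API, where the `optimizeCameras` keyword arguments are my assumption and should be checked against the API reference for your version.

```python
def refine_after_marker_edits(chunk):
    """After manually re-centring marker projections, re-run the bundle
    adjustment so the corrections can actually influence the camera poses.
    (My understanding is that Update alone only re-fits the chunk transform
    to the references and recomputes the error columns.)"""
    chunk.optimizeCameras(fit_f=True, fit_cx=True, fit_cy=True,
                          fit_k1=True, fit_k2=True, fit_k3=True,
                          fit_p1=True, fit_p2=True)

try:
    import Metashape  # only available inside Metashape's bundled Python
    refine_after_marker_edits(Metashape.app.document.chunk)
except ImportError:
    pass  # running outside Metashape
```

If that reading is right, comparing the RMS before and after this step (rather than after Update alone) would show whether the manual corrections are worth the effort.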

Thank you for any clarifications.


8
Bug Reports / Null Image error during Mesh Refinement
« on: February 11, 2021, 01:19:58 AM »
I've been trying to refine a rather large mesh (20 million faces - this is necessary, as it is a woodcarved artefact with lots of detail) on a suitably equipped workstation (128 GB RAM, two Quadro RTX 4000, Intel i9-9940X). The project has successfully aligned 594 cameras and produced both sparse and dense clouds, plus a mesh from the dense cloud. In the past, Refine Mesh worked for meshes of up to 14-18 million polygons. Is this one failing due to the larger polygon count, or something else?

I attach the log data:
Code: [Select]
2021-02-10 11:27:29 RefineMesh: quality = Ultra high, iterations = 4, smoothness = 0.3
2021-02-10 11:27:29 Initializing...
2021-02-10 11:27:31 Found 2 GPUs in 0 sec (CUDA: 0 sec, OpenCL: 0 sec)
2021-02-10 11:27:31 Using device: Quadro RTX 4000, 36 compute units, free memory: 6723/8192 MB, compute capability 7.5
2021-02-10 11:27:31   driver/runtime CUDA: 11020/8000
2021-02-10 11:27:31   max work group size 1024
2021-02-10 11:27:31   max work item sizes [1024, 1024, 64]
2021-02-10 11:27:31 Using device: Quadro RTX 4000, 36 compute units, free memory: 6503/8192 MB, compute capability 7.5
2021-02-10 11:27:31   driver/runtime CUDA: 11020/8000
2021-02-10 11:27:31   max work group size 1024
2021-02-10 11:27:31   max work item sizes [1024, 1024, 64]
2021-02-10 11:27:31 Device 'Quadro RTX 4000' has 6631 Mb of free memory
2021-02-10 11:27:32 Device 'Quadro RTX 4000' has 6411 Mb of free memory
2021-02-10 11:27:32 Analyzing mesh...
2021-02-10 11:27:33 Faces: 20000000, Vertices: 10004718
2021-02-10 11:27:33 Memory required on each device: 2507 Mb + 4196 Mb = 6703 Mb
2021-02-10 11:27:33 Using device 'Quadro RTX 4000' with out of core 1x0x2 subdivision
2021-02-10 11:27:33 Using device 'Quadro RTX 4000' with out of core 1x0x2 subdivision
2021-02-10 11:49:13 Images quality=UltraHigh is too low (1.92356 pixels per triangle)...
2021-02-10 11:49:13 Target quality=UltraHigh is too low for model detalization. Processing with quality=UNKNOWN instead...
2021-02-10 11:49:13 Stage #1 out of 1
2021-02-10 11:49:14 Faces: 20000000, Vertices: 10004718
2021-02-10 11:49:14 Memory required on each device: 0 Mb + 4196 Mb = 4196 Mb
2021-02-10 11:49:15 Subdividing mesh...
2021-02-10 11:49:22 Faces: 20000000, Vertices: 10004718
2021-02-10 11:49:22 Memory required on each device: 0 Mb + 4196 Mb = 4196 Mb
2021-02-10 11:49:22 Loading photos...
2021-02-10 11:49:23 Finished processing in 1314.39 sec (exit code 0)
2021-02-10 11:49:24 Error: Null image

9
Bug Reports / Remote Access and Metashape
« on: January 27, 2021, 06:40:42 AM »
Hello,

I have installed Metashape 1.7.1 Pro on a workstation with the following specs:
- CPU: Intel i9-9940X.
- RAM: 128GB.
- GPU: 2x NVidia Quadro RTX4000.

I am trying to work remotely on the workstation from an HP laptop that has the following specs:
- CPU: Intel i5-6440HQ.
- RAM: 16GB.
- GPU: Intel HD530.

I have tried remote access via both TeamViewer and AnyDesk. In both cases Metashape opens as shown in the attached screenshot. I've tried adding the --opengl angle argument per some earlier posts found on the forum, but that didn't work either. What am I doing wrong?

Thank you for any reply.

All best

10
General / Photogrammetry with targets
« on: November 18, 2020, 12:53:08 AM »
Probably a question that common sense already answers, but here it goes anyway: I'm working on object photogrammetry and I place targets of known size (5 mm) around my subject in order to scale the object during processing. The question is: what are the methodological/practical requirements for using these targets in Agisoft Metashape to create scale bars from them? More specifically:
- If I mask the images to exclude the background, can the targets be in the masked region, or do they need to be in the region that gets processed?
- If the targets need to be in the processed region, can I later delete them at the dense cloud or mesh stage, or will the scale bars stop working if I do?
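As a follow-up thought, once the targets are detected, I assume the scale bars themselves can be scripted. A sketch against the Python API; the `detectMarkers`/`addScalebar` calls are my reading of the API reference, and the marker labels and the 5 mm distance between target centres are placeholders for my setup:

```python
def add_scalebars(chunk, marker_pairs, distance_m):
    """Create a scale bar between each pair of marker labels and assign
    the known physical distance (in metres) as its reference."""
    by_label = {m.label: m for m in chunk.markers}
    bars = []
    for a, b in marker_pairs:
        bar = chunk.addScalebar(by_label[a], by_label[b])
        bar.reference.distance = distance_m
        bars.append(bar)
    return bars

try:
    import Metashape  # only available inside Metashape's bundled Python
    chunk = Metashape.app.document.chunk
    chunk.detectMarkers()  # auto-detect the coded targets first
    add_scalebars(chunk, [("target 1", "target 2")], 0.005)  # 5 mm, placeholder
    chunk.updateTransform()  # apply the new scaling to the chunk
except ImportError:
    pass  # running outside Metashape
```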

Thank you for reading the question and any help provided. :D

11
General / To focus stack or not to focus stack?
« on: November 03, 2020, 12:44:03 AM »
I guess the question is rather obvious: I have been taking hundreds of photos of movable, complex objects (c. 600-1700 photos per object), and due to the requirement for detailed/precise work and the characteristics of the lens used (Sigma 50mm Macro on a Canon 5DS R - the lens can't focus to infinity), it is often necessary to take multiple photos from the same spot in order to cover the entire object surface in focus.

Given this situation, the question arises: would focus stacking the photos help or hinder Agisoft Metashape? E.g. if I take 4 photos from the same horizontal/vertical angle with different focus points on the object and focus-stack them in a photo editor, will the combined focus-stacked photo be more helpful for Metashape, or should I keep the original photos? Has anyone experimented with focus stacking in Metashape?

Disclaimer: I've already processed the original photos with excellent results (c. 0.5 mm error based on targets), but I'm wondering whether focus stacking could benefit the final outcome.

12
General / Ground control points and georeferencing
« on: July 29, 2020, 12:42:16 AM »
Hello to all,

I am trying to understand how georeferencing works in Metashape and, in the process, develop a field protocol for data capture. At the moment the equipment available to the team comprises a drone with a camera, a handheld DSLR camera, ground control targets, and a high-accuracy total station. None of the equipment has GPS capability, and the principal aim of our project is to digitise cultural heritage monuments. The question is the following:
- If we capture photos, place several targets appropriately around the monument, and take local measurements with the total station, will it be possible in the future to georeference the resulting point cloud (assuming we can georeference either the actual GCPs, specific points on the monument, or the points where we took the total-station measurements)?
- Accuracy-wise, does it make a difference if we start with a local coordinate system and later transform to a georeferenced system, or should we go for full georeferencing right from the start?
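To make the first question concrete, this is the kind of later workflow I imagine: a sketch against the Python API, where the EPSG code, marker labels, and coordinates are placeholders, and the `reference.location`/`crs` usage is my reading of the API reference:

```python
def assign_gcp_references(chunk, coords_by_label):
    """Attach surveyed coordinates (e.g. total-station measurements converted
    to the target CRS) to the matching markers as reference locations.
    Returns how many markers received a reference."""
    assigned = 0
    for marker in chunk.markers:
        if marker.label in coords_by_label:
            marker.reference.location = coords_by_label[marker.label]
            assigned += 1
    return assigned

try:
    import Metashape  # only available inside Metashape's bundled Python
    chunk = Metashape.app.document.chunk
    chunk.crs = Metashape.CoordinateSystem("EPSG::32634")  # placeholder UTM zone
    coords = {"target 1": Metashape.Vector([500000.0, 4000000.0, 120.0])}  # placeholder
    assign_gcp_references(chunk, coords)
    chunk.updateTransform()  # re-fit the chunk to the new references
except ImportError:
    pass  # running outside Metashape
```

If this is roughly right, it would suggest the local-first workflow is viable: capture now, survey with the total station, and attach georeferenced coordinates to the same markers later.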

Thank you for any help!
