Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - jenkinsm

Pages: [1] 2
I'm starting to test the use of a fisheye lens for my projects, and the initial results in 2.0.1 on macOS were excellent. I installed 2.1 on my Windows PC and did another test: the mesh is great, but the textures are completely wrong. It's even placing texture where there is no mesh. See attached photos.

I'm about to roll back to 2.0.3 on Windows to see if the problem still exists. I am also testing the same photos on Mac. I'll update this post if I learn anything else relevant.

General / Advice needed for a tricky project
« on: April 02, 2023, 09:53:50 PM »
I'm wondering if anyone has advice for how to deal with this specific issue: I am making an indoor go-kart track and I have a complete set of photos for the whole building. Roughly half of the photos are close-up oblique shots of the track surface itself, and the rest of the images cover the building (walls, support beams, etc.)

If I align and make a mesh from only the photos of the track surface, the track turns out relatively well. Very little noise and overall a result I am happy with.

The problem occurs when I try to make a mesh that includes the images of the walls and support structures. Those images also show parts of the track surface, although not in much detail. The mesh made from all of the photos has a very noisy track surface, which makes it unusable for baking normal maps.

What can I do to "force" Metashape to use the close-up track surface images for that part of the mesh and ignore the track surface shown in the other images?

I'm looking for a semi-automated solution that doesn't involve masking images individually.

So far I've tried generating masks on the "walls" images after making a mesh of the track surface only, but this resulted in some areas being masked out that shouldn't have been. I think it's because the track is on two levels with a ramp, so Metashape is masking areas that show the underside of the upper level.

I have also tried aligning just the track images, reducing the overlap, then aligning the "walls" images with the existing aligned track images, and making a mesh from that. The resulting mesh is pretty much the same.

Is there something I'm missing that would get me the results I'm after?

Bug Reports / Seems like 2.0 is using more RAM to generate textures
« on: March 10, 2023, 12:21:50 AM »
I haven't done a direct back-to-back comparison, but I was able to generate 8K, 16K, and 32K textures before upgrading to 2.0, and now it seems like I'm limited to 4K textures. The projects I'm testing now are much smaller with far fewer pictures, but the scene is very different, so that might be a factor here.

Has Metashape been altered to use more RAM when generating textures? I'll try to do an A/B test between 1.8 and 2.0 to provide more concrete data. Right now I'm pretty frustrated, so I hope this is a bug and not a new limitation.
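For a sense of scale, here is some rough arithmetic, assuming an uncompressed 8-bit RGBA working buffer (an assumption — Metashape's actual internal formats and peak overhead aren't documented here):

```python
# Rough memory arithmetic for uncompressed texture buffers.
# Assumes 4 bytes per pixel (8-bit RGBA); real peak usage during
# texture generation is typically a multiple of this single buffer.
def texture_buffer_bytes(side, bytes_per_pixel=4):
    """Bytes needed to hold one side x side texture in memory."""
    return side * side * bytes_per_pixel

for side in (4096, 8192, 16384, 32768):
    gib = texture_buffer_bytes(side) / 1024**3
    print(f"{side // 1024}K: {gib:g} GiB")
```

Each doubling of the texture side quadruples the pixel count, so a single 32K buffer is already 4 GiB before any blending or overhead, which is why the jump from 4K to 32K is so punishing.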

I'm still learning Python and don't know much, so please tell me if what I'm trying to do is impossible.

I used ChatGPT to write a script that uses OpenCV to mask out the sky in my input images and another script that will detect cars and mask them out.

I installed OpenCV both within Metashape using pip and on my system, and I made sure that my system version of Python is 3.9, same as in Metashape.

Unfortunately, when I try 'import cv2' in Metashape, it gives an error saying that numpy is not installed and I need to install it using 'pip install numpy' — but when I do that, it says that numpy is already found in the installation directory.

I'm stuck in this loop where OpenCV thinks numpy isn't installed, but in reality it is installed.

What am I doing wrong here? How can I get it to work?
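Since the system Python and Metashape's bundled interpreter each have their own site-packages, a numpy installed for one can be invisible to (or shadow) the other. As a first diagnostic step, this standard-library-only sketch can be pasted into Metashape's Python console to see which interpreter is active and where numpy and cv2 actually resolve from:

```python
import importlib
import sys

def module_origin(name):
    """Return the file a module resolves to, or None if it fails to import."""
    try:
        mod = importlib.import_module(name)
        return getattr(mod, "__file__", "(built-in)")
    except ImportError:
        return None

# Run inside Metashape's console: if numpy resolves to the system Python's
# site-packages instead of Metashape's, cv2's import can fail even though
# "pip install numpy" reports the package as already present.
print("interpreter:", sys.executable)
print("numpy resolves to:", module_origin("numpy"))
print("cv2 resolves to:", module_origin("cv2"))
```

If the paths printed for numpy and the interpreter don't belong to the same installation, that mismatch is a likely cause of the loop described above.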

I want to set a different capability for specific nodes based on which of these functions is being performed: .detect, .prematch, or .match.

For .detect I would want the nodes that do not have a GPU to be set to "Any" because they can detect points nearly as quickly as the GPU nodes.

For .prematch and .match I need to set those nodes to "CPU" because they are way too slow compared to the GPU nodes, so the rest of the nodes complete while those take forever.

I looked through the Python API reference and searched for "prematch", but nothing came up, which makes me think this is not currently possible.

Does anyone know if this can be done and how?

If it's not possible now, then it certainly needs to be added in order to use network processing as efficiently as possible.


I figured it all out.

General / Can I use Network Processing between my home computer and AWS?
« on: January 16, 2023, 12:07:40 AM »
I have started exploring the use of AWS to speed up my photogrammetry workflow and so far, none of the AWS instances I have tried are faster than my home desktop computer.

They each have strengths and weaknesses (for example, one instance has 48 threads at 4.5 GHz and 1.5 TB of RAM, but no GPUs), so I was thinking of including one or more AWS instances in a processing network that also includes my home computer(s). That way, I can use each machine for its strengths without having to transfer entire projects into the cloud and back down again.

Is this possible? I tried setting it up using the IP address of the AWS server but I'm guessing port 5840 is blocked somewhere and I would need to open it.
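Before changing anything in Metashape, it may help to confirm whether the port is actually reachable from outside AWS. A minimal standard-library sketch (5840 is the port mentioned above; the address in the comment is hypothetical):

```python
import socket

def port_reachable(host, port=5840, timeout=3.0):
    """Attempt a TCP connection to host:port; True if something accepts it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical address: check the AWS node before pointing
# Metashape's network processing at it.
# print(port_reachable("203.0.113.10"))
```

If this returns False for the AWS instance, the fix is on the AWS side (security group inbound rule for the port, or a VPN/SSH tunnel) rather than in Metashape.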

I know that the drive with the project files has to be shared and accessible on all computers, and I am thinking of using Google Drive File Streaming for that (it has the same drive letter on every machine). But this may not work, since it'd appear as a local drive on each machine and not a network drive.

Has anyone done this successfully? What steps do I need to take to get it working? Use tunneling through a VPN?

For the record, I have gigabit internet (both upload and download) so I'm not too concerned with that being a bottleneck. I've already tried network processing at home on a gigabit LAN and it works fine.

Any advice is welcome. Thanks!

General / Pixel size when using downsampled images - original or 2x?
« on: January 12, 2023, 03:03:04 AM »
I'm testing out a workflow where I capture source data using 8.3K raw video (~35 MP - 8256 x 4644) but then reduce it to 4128 x 2322 for the photos that will go into Metashape. My projects use tens of thousands of images so reducing the file size by 3/4 will be a huge benefit both for storage and processing speed.

I am wondering whether I should use the original pixel size in the Camera Calibration dialog, or if I should double the pixel size in order to "match" the images that are going into Metashape.

Is there a correct answer to this? Does it even matter, since it's just used for the initial estimate?
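For what it's worth, the arithmetic is the same either way: the physical sensor is unchanged, so halving the pixel count in each dimension doubles the effective pixel pitch. A sketch with a hypothetical 2.4 µm original pitch (the pitch value is an assumption for illustration, not the camera's actual spec):

```python
# Effective pixel pitch after downsampling: the physical sensor width is
# fixed, so pitch scales inversely with the image's pixel count.
orig_width_px, new_width_px = 8256, 4128
orig_pitch_um = 2.4  # hypothetical pitch of the original sensor, in microns
new_pitch_um = orig_pitch_um * orig_width_px / new_width_px
print(new_pitch_um)  # 4.8
```

So if the calibration dialog should describe the images actually being loaded, the doubled value is the consistent choice.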

Thanks in advance!

General / Dense Cloud Generation + Network Processing = Stuck at 5%
« on: December 14, 2022, 03:04:56 AM »
Hi all,

I am trying out network processing for the first time to see if I can speed up my very large projects. Normally I generate a mesh from depth maps, but I discovered that making a point cloud first gives me the ability to remove trees, cars, and houses before making a mesh (these look bad in the final product, and removing them from the mesh has proven difficult and laborious).

My current project has over 22,000 35 MP images in EXR format. The photos are on two drives, but both drives are shared on the network and both nodes can access all the photos. The depth maps are already created in Medium quality, and I am reusing the depth maps to generate a point cloud.

Currently, the Network Monitor shows the progress is at 5% and it hasn't progressed in the last hour.

Should I continue to let this run or do I need to change something in order for this to work?

I do not have any tweaks enabled and the images cover a 5 km road, so each photo has probably 15-20 neighbors max.

Also, the monitor shows this running on 1/1 node, so I'm wondering if it's still figuring out how to split up the job.

The CPU on the active node is at 100%, but it's barely using any memory or disk. So it's doing something, but I don't know what!

Any advice is appreciated. Thanks!

This is what the log states currently:

2022-12-13 14:51:36 BuildDepthMaps: quality = Medium, depth filtering = Aggressive, PM version, reuse depth maps
2022-12-13 14:51:37 processing finished in 1.583 sec
2022-12-13 14:51:38 BuildDenseCloud: point colors = 1
2022-12-13 14:51:40 Generating dense point cloud...
2022-12-13 14:51:40 processing finished in 1.587 sec
2022-12-13 14:51:41 BuildDenseCloud.initialize (1/1): point colors = 1
2022-12-13 14:51:42 Generating dense point cloud...
2022-12-13 14:51:42 initializing...
2022-12-13 14:53:18 selected 22774 cameras in 95.725 sec
2022-12-13 14:53:18 working volume: 1874191x558706x299686
2022-12-13 14:53:18 tiles: 458x136x73

General / Weird banding/noise pattern in mesh from depth maps
« on: October 28, 2022, 02:52:25 AM »
Does anyone know what is causing this weird banding pattern in my mesh? (See attached image)

I generated the mesh from depth maps using a set of 22,785 images that were well-aligned.

I'm wondering if Optimizing Cameras caused this.

It's a big problem because it shows up in my normal maps and looks terrible in-game.

Open to hearing all suggestions. Thanks!

General / EXR support in Metashape
« on: April 23, 2022, 07:45:30 AM »
I was able to successfully import EXR files into Metashape; however, the gamma looks incorrect. The image is much brighter than in macOS Preview and Finder, and also in DaVinci Resolve, where the EXRs were exported from. It could be a color space flag that needs to be changed in Resolve, but since the image looks right in Finder and Preview, I'm assuming Metashape is the culprit here.

Does anyone know why this is happening? The attached pic shows Metashape on the left and Preview on the right.

Bug Reports / Changing color levels produces red/cyan shift
« on: April 18, 2022, 08:50:23 PM »
I am trying to use the "Adjust Color Levels..." tool that you open by right-clicking on an image, but when I change all the values by the same amount (150 in my case) the image becomes dramatically red. If I reduce below 100, then it shifts toward cyan.

(See attached image)

Weirdly, the thumbnail shows the correct image, but the full-size viewer does not.

What's going on here?

I'm trying to merge two large chunks into one and I thought it would be quick, but I'm surprised to see that it's going through the "Selecting pairs..." process which is taking a long time. Does anyone know why it's doing this? I didn't align the chunks, I'm just trying to merge them.

I was wondering about these attributes that affect alignment. I want to make sure I am using all the best settings to get the highest number of aligned images in a single alignment operation, while still being reasonably fast.

1) Focal Length - Will I get a higher number of correctly aligned photos by entering the 35mm focal length (in my case, 16mm) or by leaving it blank so Metashape can estimate it? And how does this affect speed?

2) Capture distance - Same as above - will I get a higher number of correctly aligned photos by entering the capture distance? And how does this affect speed?

3) Rolling shutter compensation - I know this slows down alignment. But does it affect the number of correctly aligned photos? And/or does it affect the quality of the resulting mesh?

Thanks in advance!

I am working on several gigantic projects with well over 200,000 TIFF images each. I would like to know if there are any tips or hidden secrets that would allow me to import all of my photos much faster.

The slow importing happens on my Windows machine, where the photos are stored on a 32 TB RAID 0 (2 drives), whereas my M1 Max MacBook Pro loads photos from an external SSD almost instantly. Granted, I haven't tried 100k at once on the MacBook, and I know it's faster partly because of the SSD, but I'm still surprised by how long it takes on the PC.

Putting the photos on an SSD is not an option, as each project uses several TB of photos. Converting to PNG or JPEG is not ideal because that's an extra step I don't want to include, and my workflow can only output TIFF/DPX/EXR frames directly (I'm capturing roads on video and using DaVinci Resolve to mask out the sky, adjust the contrast/midtone detail, and mask out the capture vehicle).

Any tips or tricks I should try?
