Forum

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - jenkinsm

Pages: 1 2 3 [4] 5
46
General / Can Metashape support transparency (alpha channel) in images?
« on: February 02, 2022, 08:51:17 AM »
This is maybe a question for the developers more than anyone else. Can Metashape be updated to include support for alpha channel transparency in images?

I figured out that I can mask out the sky super quickly in DaVinci Resolve and export 16-bit uncompressed TIFF images with alpha. But Metashape currently interprets the transparent area as solid black and the black ends up as part of the mesh at the end.

I'd hope this is an easy addition because it will speed up my workflow probably tenfold. Right now, the best solution I've found is to either use a Photoshop action with the Sky Selection tool to make masks as PNG files or use my DaVinci Resolve method and output black-and-white 8-bit TIFF files. The first method produces smaller files but takes a lot longer, while the second method is a lot faster but results in a second set of TIFF images that either waste space or would then have to be converted to PNG or JPEG.

Including support for transparency would mean I could do everything in one pass in Resolve: split my video into frames, flatten the contrast while increasing the midtone detail, AND mask out the sky/other unwanted parts of the image (such as the capture car).


47
General / Re: Using external drives (USB)
« on: February 01, 2022, 06:36:22 AM »
You could maybe see some slowness during the detect points phase if you have more than one GPU (e.g. 3x RTX 3070 or better) and all of them are fed during this phase from a classic HDD with read speeds of ~100 MB/s.
I am using an RTX 2060 Super and 18 Mpix JPEG files, each ~10 MB. My system can feed the GPU at ~3-4 JPEGs/s and my disk read speed during that phase is ~30-35 MB/s... even if I copy those files to an SSD, the read speeds are the same.
USB 3.0 connection speeds are much, much higher than necessary.
You will need good write speed at the end of the dense cloud creation phase, when partial dense clouds are merged into the final one... you can check read/write speed in the Task Manager Performance tab during this process. If you consistently see read or write speeds higher than ~100-130 MB/s, then it would be worth saving the project files on an SSD... otherwise a classic HDD will be enough.
You can have photos and project files on the same drive, because after adding photos to an empty project and starting the alignment phase, the photos will be automatically cached in system RAM by Windows.


You, sir, are a veritable goldmine of information! Do you teach a course on what you do? Or have any tutorials online? If not you should consider it!
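For anyone skimming: the bandwidth claim in the quoted post is easy to sanity-check with back-of-envelope arithmetic. The JPEG size and feed rate below come straight from the post; the HDD read speed is a typical assumed figure, not a measured one:

```python
# Sanity check of the disk-bandwidth numbers quoted above.
jpeg_size_mb = 10.0        # ~10 MB per 18 Mpix JPEG (from the post)
images_per_second = 3.5    # midpoint of the quoted ~3-4 JPEGs/s

# Bandwidth the GPU feed actually demands during point detection.
required_mb_per_s = jpeg_size_mb * images_per_second

# Typical sequential read speed of a classic HDD (assumed figure).
hdd_read_mb_per_s = 100.0
headroom = hdd_read_mb_per_s / required_mb_per_s

print(f"Required: ~{required_mb_per_s:.0f} MB/s "
      f"(HDD headroom: {headroom:.1f}x)")
```

At roughly 35 MB/s required against ~100 MB/s available, the quoted observation that an SSD made no difference during alignment is consistent.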

48
General / How does Metashape decide which cameras to use for texturing?
« on: January 31, 2022, 01:03:20 PM »
I found this note from Alexey in an old post from 2013 and I was wondering if anyone has more information about how Metashape chooses which camera to use for texturing:


"For texturing PhotoScan uses cameras that are looking in the direction parallel to the surface normal in the area being textured."

My question is: What happens if two (or more) cameras are looking in the direction parallel to the surface normal?

In my case, I captured a road using an iPhone (this is the high-quality dataset) and a 360 GoPro (reference dataset for GPS and image alignment).

I wasn't expecting to use the 360 photos for mesh and texture generation, but it turns out that the 360 cam contributes to the tree meshes in areas that were out of frame on the iPhone, so I'd like to use them.

So what I need to know is how to create the textures in such a way that the road and surroundings are textured from the iPhone images, while the treetops are textured from the GoPro.

My assumption is that I will have to mask out the bottom half of the GoPro images, but I would love to hear any insights into how Metashape handles this.

49
General / Re: How to find the optimal texture resolution ?
« on: January 31, 2022, 12:53:39 PM »
Before Metashape creates the UV on the model, you can't know how big or how many textures you will need.
For creating UVs I use an external app (RizomUV) where, after creating the UV islands, I can see the texel density for a given resolution.
The second parameter I know is the pixel density of my photos (roughly calculated). Then it is an easy decision whether to make the final texture larger/smaller or, for a really big project, how many textures will be needed.
In the attached example, a 16K texture after packing the UV islands was roughly enough to keep the pixel density of the original photos.


How do you calculate the pixel density, and once you have that number, what do you do with it? I'm working on a large-scale project for VR gaming and I want to preserve as much detail as possible, so I'd like to learn and apply your method for determining the number of textures I'll need (they'll all be either 16K or 32K).
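Not the original poster, but the kind of calculation described above can be sketched roughly as follows. Every number here is a hypothetical placeholder (swap in your own camera and scene values), and the packing efficiency is an assumed fudge factor, not a Metashape constant:

```python
import math

# Hypothetical camera/scene values for illustration only.
distance_m = 5.0        # typical camera-to-surface distance
focal_mm = 24.0         # lens focal length
sensor_width_mm = 36.0  # sensor width (full frame here)
image_width_px = 6000   # photo width in pixels

# Ground sample distance: the real-world size of one photo pixel
# on the captured surface (metres per pixel).
gsd = (distance_m * sensor_width_mm) / (focal_mm * image_width_px)

# Rough surface area the UV islands must cover on the model.
surface_area_m2 = 400.0
packing_efficiency = 0.7  # UV islands never fill an atlas completely

# Area one NxN atlas can cover while matching the photos' pixel density.
texture_px = 16384  # one 16K atlas
area_per_atlas = packing_efficiency * (texture_px * gsd) ** 2
atlases_needed = math.ceil(surface_area_m2 / area_per_atlas)

print(f"GSD: {gsd * 1000:.2f} mm/px, 16K atlases needed: {atlases_needed}")
```

The idea is simply that an atlas stops adding detail once its texel size drops below the ground sample distance of the source photos, so there is no point generating more (or bigger) textures past that crossover.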

50
General / Re: Trouble aligning chunks
« on: January 26, 2022, 11:37:40 PM »
Hello everyone,

I'm asking for your help since I can't seem to find a solution to align the two chunks of my object. In my case it's a flat piece of brick. I tried capturing the object in many ways and failed to get 360-degree coverage of it. The most effective approach I found online for flat objects (especially stones) was to capture them in two series: once standing on its head fixed in clay, and once on its bottom. I got the best results from that technique, yet Metashape can't seem to read the two chunks as part of the same object and auto-align them, even though there's enough overlap for that.
I attached the pictures of the two chunks.

I have the standard licence, so I can't use markers to manually align them, which means that I have to rely on the auto-align in the standard version.
What can be done to solve this problem? or is there better workflow for my case?

Thanks a lot!

Did you delete all of the unnecessary points (the clay etc) before aligning the two chunks? Leave enough overlap for the software to align similar points, but definitely get rid of any unwanted points before aligning.

This is explained here: https://www.youtube.com/watch?v=PYEBND8eTZY

51
General / Re: How to find the optimal texture resolution ?
« on: January 26, 2022, 11:36:01 PM »
I'm using 8K textures.

How can I find how many of them I need for a model to get the best result?

Using too few is obviously not giving a good result, but I have noticed that too many gives a worse result too, so it's important to have a way to find the right number.

What worse results are you getting by using too many textures? And how would you define "too many"?

I tend to generate 16K or 32K textures and use a few for huge models, or one for smaller ones. Seems to work great for my purposes.

Maybe instead of generating more texture atlases, try generating larger ones.

52
General / Re: when is convenient refine mesh?
« on: January 26, 2022, 11:33:55 PM »
It really depends on the project. If it's a "simple" object scan or a relatively small scan, it's IMO best to generate the mesh in ultra high from the get-go. But for larger scenes it may help to generate the mesh in medium or high and refine afterwards. Keep in mind, though, that the refine mesh process relies heavily on the GPU's VRAM (the more you have the better, because if it runs out of VRAM, Metashape will use system RAM alongside it & this will totally trash your processing times).

Cheers

Mak

Do you have any tips on how many photos (or megapixels) the "limit" is for a given amount of VRAM?

I have a very very large project that I am working on, and I am debating between generating a high-quality mesh and generating a low-quality mesh then refining. I am testing different approaches and the problem with the higher-quality meshes is that a lot of roadside objects and terrain do not end up in the final mesh, and the road itself tends to have more holes or really rough geometry. I like the overall quality and "completeness" that I get from Low or Medium-quality depth map-based meshes, but there are some fine details in the road that are not represented accurately (namely the cat eyes and rumble strip down the middle of the road, between the yellow lines.)

These details are important to me because the end result will be a drivable map for a VR driving simulator and the cat eyes/rumble strip provide a level of immersion that is not currently found in any of the tracks available online.

I noticed that the higher-quality meshes capture these details perfectly, but the low-quality mesh just has a rough approximation of these details and that's not enough for me. But the tradeoff is that the trees, fences, grass, etc alongside the road are more prevalent in the low-quality mesh. That's why I was thinking to generate using Low and then refine.

All that said, the test chunk I am working with has around 3,700 12 MP 16-bit TIFF photos and Refine Mesh is taking a crazy long time to do one iteration. I bet I've exceeded my VRAM limit (in spite of having 24 GB of VRAM on an RTX 3090). So, it might be worthwhile to break apart this chunk before refining each portion, but I don't know how small I should make each portion in order to not exceed my VRAM limit.

Any suggestions?

If Refine Mesh ends up being too time-consuming (and it sounds like it will), then I may take a hybrid approach where I generate the road mesh using ultra high quality and separately generate the surroundings using low quality. I would then decimate the road to my desired resolution. I don't know what problems I might run into here, so any advice is appreciated.

Thanks in advance!

53
Another day, another problem.

I set a few thousand more photos to align overnight and when I woke up, I continued the process I had developed yesterday where I place GCPs and then optimize to get the photos to follow the road contours.

However, today the photos would not move after optimization.

I'm getting really frustrated since as far as I can tell, I'm following all the steps in the manual and in various tutorials. I realize this project may be too large for Metashape to handle, but if so then why is that limitation not denoted anywhere? Is that truly my problem, or am I committing some other error?

54
I believe I figured everything out and got it working properly. I still welcome any feedback/advice about my workflow since I plan to do many more projects like this in the future (corridor mapping for VR driving simulators).

Here's what I changed/did differently to make everything work:

- Thinned the sparse point cloud to get rid of uncertain tie points. They seemed to be preventing Optimize from doing anything at all.

- Changed the marker accuracy to 0.001 m, which made the markers line up with the points I chose in Google Maps. The OSM map in Metashape seems to match the Google one perfectly, at least in this area, so the GCPs line up with the exact locations I chose in Google Maps (as shown on the OSM).

- Went through all of the initial photos AGAIN to disable the projected markers. This keeps happening, probably because Metashape expects aerial nadir images with GCPs rather than close-range terrestrial images. I discovered that in the Reference pane I can right-click on a marker and remove all of its projections. Now I do this every time I add a new marker, and then place it manually in the corresponding photos.

- Optimized the cameras after each step listed above. I turned on/off various GCPs and it seemed to work well when I enabled only a few GCPs at first (20% of 52) and then enabled more GCPs and optimized each time. Eventually, I was able to get the entire project to align with the OSM in Metashape and am currently aligning additional images.

I wish Metashape would not project markers onto images very far from where they were placed. It was very frustrating to figure all of this out by trial and error!

55
Thanks, I will send the project today. It's around 100 GB currently.

One thing that I think is either causing this error, or is happening as a result of it, is that the GCPs I add at the far end are being "blue flag" projected onto the initial photos several miles away. This prevents the GCPs from aligning with their real-world locations. Going through the photos and manually clearing the marker placements fixed it once, but when I add more GCPs it happens again. By "fix" I mean I optimized the cameras and the tie points aligned with the map as expected. But after aligning more photos, I can't get it to line up anymore. Also, I can't find a way to batch clear marker projections, so I have to go through dozens or hundreds of photos and individually clear the projections.
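Regarding the lack of a batch clear: in the Pro edition this can be scripted. Below is a hedged sketch that assumes the `Marker.projections` mapping and the `Projection.pinned` attribute from the Metashape Python API (auto-projected blue flags are unpinned; assigning `None` removes a projection). Treat it as a starting point, not a verified tool:

```python
def clear_unpinned_projections(chunk):
    """Remove every auto-projected (blue flag) marker placement in the
    chunk, keeping only manually pinned (green flag) projections.
    Returns the number of projections removed."""
    removed = 0
    for marker in chunk.markers:
        # Copy the keys first: we mutate the mapping while iterating.
        for camera in list(marker.projections.keys()):
            if not marker.projections[camera].pinned:
                marker.projections[camera] = None  # None removes the projection
                removed += 1
    return removed

# Inside Metashape Pro (Tools > Run Script), you would call it like:
#   import Metashape
#   print(clear_unpinned_projections(Metashape.app.document.chunk))
```

Running this once after adding each batch of GCPs, before Optimize Cameras, would replace the per-photo manual clearing described above.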

You can see in the attached image 1 how there are only two real GCPs (point 4 and point 33) and the rest are projected from locations throughout the corridor. The blue flags are new ones, and all the white flags are ones I cleared previously.

Image 2 shows the top-down view of the photos I've aligned so far. The project is an 18-mile stretch of road captured in a 24 fps video. Right now I am stitching the images from the Eastbound capture.

The highlighted line shows the actual road, and all of the GCP coordinates correspond to points along that road. The coordinates were taken from Google Maps and the accuracy is set to 10m.

As you can see, the model wants to wrap around on itself despite Metashape doing a fairly good job of sticking to the curvature of the bends. I think this is happening because of the GCPs, specifically the far-right GCPs being "blue flag" projected onto the far-left images. Although, before adding any GCPs it was also showing signs of warping.

I've been aligning a batch of photos, adding GCPs, optimizing, adding more photos, aligning, adding GCPs, optimizing, repeat. If this is the wrong method, let me know and I'll adjust my workflow.

I calibrated the lens in Metashape using the chessboard, but I did not indicate the pixel size and focal length. (This is because the camera app I used produces 4000x3000 video, whereas the stills with EXIF information are slightly larger, and I did not want to risk inputting the wrong values. Does this make any difference?)

The screenshot of my reference pane with GCP coordinates has 0.1 as the accuracy, but the accuracy has been set to 10.0 up until this point and my problem occurred with that setting (I am trying different things to see what might fix this).

56
I thought I fixed this error but now it's back again and I don't know what is causing it. I input about 25 GCPs and when I go to Optimize, I get this error.

Any help is appreciated!

57
Feature Requests / Re: better support for rigid camera rig
« on: January 10, 2022, 10:47:36 PM »
Hello dskarlat,

If you have several cameras rigidly mounted relatively to each other, you can use multi-camera system approach.

After the images are loaded via Add Folder command according to the folder layout, you can go to the Tools Menu -> Camera Calibration dialog where in the Slave Offset tab (for each slave camera) you can define the measured offsets between the sensors (if known) or allow Metashape to estimate them by enabling Adjust Location option.

Hi Alexey,

I am wondering about this in regards to cameras mounted in/on a vehicle (being used to capture a road while driving along it). In my case, the camera mounting positions are fixed relative to each other, but the mounts are not 100% stable (they vibrate a little) so the relative positions change slightly from one shot to the next. Also, the cameras are not synchronized and the FOV is different for each camera.

In this case, is it recommended to use the "Adjust Location" option or not?

Thanks!

58
General / Re: Are changes to dng images take into account ?
« on: December 26, 2021, 10:20:12 PM »
I have the same issue. Attached are two clips from the same DNG file. The first is what it looks like in Photoshop & Lightroom: exposure is quite good. The second is what the file looks like in Metashape Pro: exposure is blown out.

The photo was captured as a Sony RAW file, imported into Lightroom where Auto Tone Control was applied, then it was saved as a DNG and imported into Metashape.

Is Metashape ignoring the change to the tone curve? Is it able to use the detailed tone information in the DNG or is it attempting to process the "blown out" overexposed pixels that we see in the file as viewed in Metashape?

I am looking for the same answer. Anyone know how Metashape handles DNG image data?

59
General / Re: Disappointing performance results on new MacBook Pro M1 Max
« on: November 14, 2021, 03:24:07 AM »
I tried a test project on the M1 Max with the 32-core GPU and 32 GB RAM and I was thoroughly impressed with its speed. I will do more testing to compare it to my desktop PC, but I think if you're experiencing performance issues, maybe there is some other problem with your project?

60
Bug Reports / Video import not working (tried 1.7 PC and 1.8 Mac)
« on: November 12, 2021, 10:12:21 PM »
I tried to import a video in various formats but nothing worked. I used both 1.7.1 on a PC and 1.8 on a Mac. The PC gave an error and the Mac did nothing after proceeding from the video import dialog box.

What are the exact codec specifications that Metashape will accept? The manual talks about the container only (mpy, mov, etc.) but does not mention the codec requirements or limitations.

Also, does it have a resolution limit? These are 4K videos shot on an iPhone.
