Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - MikeZ

Pages: [1]
General / Structured lights scans aligned in Metashape
« on: July 10, 2021, 10:44:29 PM »
A structured light scanner can also capture a photo. At the end of the process we have a photo and a point cloud. How should these data be prepared so they can be aligned in Metashape like lidar scans with a 360 panorama?

Bug Reports / Texture generation fails (solved, can be deleted)
« on: July 02, 2020, 10:05:38 PM »
Metashape Standard 1.6.2

Samyang 8mm (manual lens, so EXIF info is missing) on a Sony A6400. ARW exported to TIFF.

1. Correcting image orientation to "horizontal" on every photo (skipping this step doesn't help)
2. ARW -> TIFF
3. Import photos and calibration
4. Alignment is fine
5. Point cloud and colors are fine
6. Mesh is fine
7. Texture: fails

What can I do?

I was using a lens calibration made in 1.6.3 in 1.6.2. That caused the error.
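The fix above amounts to a version-compatibility guard: a calibration file saved by a newer Metashape release should not be loaded into an older one. A minimal sketch of such a guard, assuming the saving version is known from your own notes or file naming (Metashape's calibration XML is not guaranteed to record it):

```python
def version_tuple(v):
    """Parse a dotted version string like '1.6.3' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def calibration_is_safe(saved_in, running):
    """A calibration saved by a newer Metashape build may not load
    correctly in an older one; accept only files saved by the same
    or an older version."""
    return version_tuple(saved_in) <= version_tuple(running)
```

For the case reported here, `calibration_is_safe("1.6.3", "1.6.2")` returns `False`, flagging exactly the mismatch that caused the texture failure.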

General / Real-time alignment
« on: June 22, 2020, 05:33:32 PM »
With the "keep key points" feature we can continually add photos to the alignment. If the data set could be refreshed and the new photos aligned automatically, it would be possible to shoot and align photos at the same time. Is that possible?

The panorama exporter should have a horizon leveling option. Doing that in the 3D view is not accurate at all. We should be able to place a simple line and then correct its position with the mouse.
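The math behind the requested line tool is simple: the roll of a line drawn through two horizon points gives the correction angle. A sketch, assuming standard image coordinates (x right, y down):

```python
import math

def horizon_roll_degrees(x1, y1, x2, y2):
    """Roll angle (degrees) of a line through two points that should be
    the horizon. Rotating the panorama by the negative of this angle
    levels it. Assumes image coordinates: x right, y down."""
    return math.degrees(math.atan2(y2 - y1, x2 - x1))
```

For example, a horizon line drawn from (0, 100) up to (100, 0) gives a roll of -45 degrees, so the panorama would be rotated by +45 degrees to level it.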

The possibility to export a panorama as layers would be great. It would give full control of the image in post-production.

Even better would be the chance to export those layers as a PSB file.

With these upgrades, Metashape would become a pano-stitching software killer.

General / Photo blending creates strange artifacts
« on: November 28, 2019, 11:40:06 PM »
Station stitching works like a charm in Metashape. Everything would be perfect but... texture blending produces strange artifacts. It happens on contrasty, hard-edged parts of the panorama; the easiest place to see it is at the tops of buildings. Unfortunately, it makes the panoramas completely unusable. Is it a Metashape problem, or maybe I'm doing something wrong? I use TIFF format with no lossy compression. When I try to mask out the sky and then export the panorama, it tends to produce a horrible effect of strange dark blurs.

Bug Reports / Metashape causing crashes and screen flickering
« on: July 07, 2019, 11:33:32 PM »
AsRock AB350 Pro4
Ryzen 7 1700X
Vega 64 sapphire
64gb ram
Win 10 PRO x64

I have random crashes during dense cloud generation. The bigger the data set, the more likely the crash seems to be. The screen goes black and I need to hard-reset the PC, or the screen turns black and my driver software shows an error: Agisoft freeze. A few times, though not often, Metashape gave me "crash report" information, but that happens really rarely.

So obviously I was not blaming Metashape. I was sure this was about my card settings/drivers/wiring, etc. I checked everything and ran a stress test: three hours with the CPU and GPU at 100%. Everything is perfectly stable. I'm going to let it run for a few more hours, but it seems the problem is unfortunately Metashape.

The last crash was during depth map generation. No more than 6% of RAM used, GPU and CPU cool, nothing really spectacular. When I launch a GPU operation in Metashape, the screen also starts to flicker a little... I have stressed my GPU to the max and never seen anything like that. Everything else works fine; I have stability issues only with Metashape. Do you have any idea how to solve this problem? As I start to work with bigger data sets (2k+ photos), the software seems completely unusable for me. I can align photos without any problem and have never had a crash during that operation, though the strange flickering starts immediately once the GPU kicks in. I don't know where I should look for a solution.

Best regards,

General / Estimate image quality - does it really work?
« on: April 28, 2019, 05:31:22 PM »
As I mentioned before, I'm exercising Agisoft before starting the project. This is also something that worried me when I was doing private scans. I understand that this tool checks just the sharpest part of the image, so misfocused photos can still get a very high score. It is meant to be a tool for removing photos with motion blur and... it doesn't work. It seems to work with high-quality input, like a full-frame DSLR (I'm not really sure, because I haven't tested it thoroughly), but when I use small-sensor cameras it works horribly. Some photos where I struggle to recognize the subject because of huge motion blur got scores like 0.7 or higher, while pretty sharp photos received 0.4 or lower. The lower the quality of a photo, the bigger the chance of a mistake. At this point, the tool is totally unusable. Maybe I'm doing something wrong? I need to note that removing blurry photos is insanely important for me, as I expect to work with low-quality inputs. Does anyone know another way to throw away unusable photos? Of course, I'm not asking about manual elimination :)
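One alternative to Metashape's built-in score is to compute your own blur metric. A common approach is the variance of the Laplacian: sharp images have strong local intensity changes, blurry ones do not. A minimal dependency-free sketch (this mirrors the generic variance-of-Laplacian metric, not Metashape's internal algorithm), taking a grayscale image as a list of pixel rows:

```python
def sharpness_score(gray):
    """Variance of a 4-neighbour Laplacian over a grayscale image
    given as a list of rows of pixel values. Higher = sharper.
    Photos scoring below a per-data-set threshold can be discarded."""
    h, w = len(gray), len(gray[0])
    # Laplacian response at every interior pixel.
    lap = [4 * gray[y][x] - gray[y - 1][x] - gray[y + 1][x]
           - gray[y][x - 1] - gray[y][x + 1]
           for y in range(1, h - 1) for x in range(1, w - 1)]
    mean = sum(lap) / len(lap)
    return sum((v - mean) ** 2 for v in lap) / len(lap)
```

In practice you would load each frame with an image library, downscale it, compute this score, and drop frames below a threshold tuned on a few known-blurry samples; unlike a fixed 0-1 score, the threshold adapts to your camera.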

General / Depth map generation works only partially - solutions
« on: April 28, 2019, 04:53:19 PM »
Hi guys. At the beginning, I would like to say that Agisoft is probably my favorite software. I like the way you develop it and also how responsive the makers are. I've read so many posts by Alexey that I feel like we are close friends :). I have used Agisoft for a few years. In the past I had a chance to use a Pro license; now I use my private Standard one. I've reached the point where reading alone is not enough anymore and I need to ask some questions. The reason is simple: I'm planning a huge digitization project and I need to choose the right software for it. It would be super cool if Agisoft could do it, but I need to be sure.

I'm going to write separate posts with my questions. It always produces nice content.

As was explained before, Agisoft won't generate a depth map for cameras whose positions are in the field of view of another camera. That was explained with "the corridor" example. Everything is fine, but:

I have a test scan containing 4k cameras. I chose one part to realign: the photos used to reconstruct a small sculpture, 95 cameras. The reconstruction came out quite nicely. So I thought that, since the input was every 15th frame from the footage, I would get a much better result from the whole footage. I imported around 1700 frames of the subject. I aligned them properly and... I was able to generate only 150 depth maps. What is worse, those depth maps did not correspond to the ones generated in the first attempt; I lost half of the statue. After reading the forum I decided to turn off half of the cameras. The effect was worse: 90 depth maps. I manually disabled some cameras that could cause problems. Nothing happened. I then disabled 90% of the cameras so I had a similar representation of frames as in the first attempt; from 100 cameras I received only two depth maps.

How is it possible that from one data set, with a similar number of cameras, I received quite opposite results? How can I control this issue? How can I select and remove the cameras that cause problems?

The footage was shot with a smartphone in extremely bad light conditions. A lot of frames are blurry and unsharp. As I said before, this is an exercise aimed at preparing the workflow for a big project. In this scenario there is no chance to capture the photos again.

Cheers guys

Edit 1:
Another realignment of the statue, and from 1758 frames I have 1737 aligned and 884 depth maps. It seems quite mysterious. I have not disabled any cameras.

It seems to depend on the alignment. When I use pair selection I get the worst result. It was mentioned before that it can depend on the number of matches between the photos. Is there any line we can draw that describes the minimum? Can tweaking the maximum neighbors influence depth map generation?

Edit 2:
It's definitely about the matches... camera optimization erased 800 of the 880 depth maps.
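Since match count seems to be the deciding factor, one workaround is to disable weakly connected cameras up front instead of guessing. A minimal sketch of the selection step, assuming you can obtain per-camera tie-point match counts (e.g. by exporting them via the Pro-only Python API or counting projections by hand); the threshold of 100 is a per-data-set tuning guess, not a documented Metashape limit:

```python
def cameras_to_disable(match_counts, min_matches=100):
    """Given a mapping of camera label -> number of valid tie-point
    matches, return the labels falling below the threshold. These are
    the candidates to disable before depth map generation."""
    return sorted(label for label, n in match_counts.items()
                  if n < min_matches)
```

Disabling the returned cameras in batches, rather than randomly switching off half the set, makes the depth-map count reproducible across runs.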
