Show Posts


Messages - MikeZ

The panorama exporter should have a horizon-leveling option. Doing that in the 3D view is not accurate at all. We should be able to place a simple line with the mouse and then correct the position.

The possibility to export a panorama as layers would be great. It would give full control of the image in postproduction.

It would be even better if we could export those layers as a PSB file.

With these upgrades, Metashape would become a pano-stitching software killer.
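The horizon-leveling request above boils down to simple geometry: two points clicked along the true horizon define a tilt angle, and rotating the panorama by the negative of that angle levels it. A minimal sketch of just that math (not part of Metashape; `leveling_angle` is a hypothetical helper, and image coordinates are assumed to have y pointing down):

```python
import math

def leveling_angle(p1, p2):
    """Return the roll angle in degrees of the line through two clicked
    points, in image coordinates (x right, y down).

    Rotating the panorama by -angle would make that line horizontal.
    """
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    return math.degrees(math.atan2(dy, dx))

# Example: a horizon drawn from (120, 400) to (1800, 355) tilts
# slightly "upward" on screen, giving a small negative angle.
tilt = leveling_angle((120, 400), (1800, 355))
```

Any image library that supports arbitrary rotation could then apply `-tilt` before export.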

General / Photo blending creates strange artifacts
« on: November 28, 2019, 11:40:06 PM »
Station stitching works like a charm in Metashape. Everything would be perfect, but... texture blending produces strange artifacts. They appear on high-contrast, hard-edged parts of the panorama; the easiest place to see them is at the tops of buildings. Unfortunately, this makes the panoramas completely unusable. Is it a Metashape problem, or am I doing something wrong? I use the TIFF format with no lossy compression. When I try to mask out the sky and then export the panorama, it tends to produce a horrible effect of strange dark blurs.

Bug Reports / Re: Metashape causing crashes and screen flickering
« on: August 18, 2019, 10:47:08 PM »

The problem seems to be getting more and more annoying. Now I have crashes even when I process small sets of 100-500 photos. I sent the log and Windows Event Viewer files to support. This time my screen went black, and after it recovered, Metashape was not responsive. My GPU control software also died. I'm more than sure this issue is related to the GPU drivers.

Bug Reports / Re: Metashape causing crashes and screen flickering
« on: July 20, 2019, 11:06:00 PM »
I updated to version 1.5.3.

I have already had 3 crashes in a row on the large photo set. In the first type, the screen goes black, the case fans run at 100%, and it ends with a shutdown. In the second type, the screen goes black and then returns to the desktop without rebooting; Metashape is gone (no crash-report option), and my GPU driver informs me that its values were restored to defaults. The third type is a Metashape hang, also with no crash-report option.

As I know that overclocking software can interfere with Metashape, I restored my drivers to a point before I had given WattMan permission to run, so I assume it isn't the cause.

This kind of behavior on the Vega 64 is known and might be caused by faulty wiring or wrong overclocking settings in WattMan. So I have checked the wiring and restored WattMan to its factory settings. As I mentioned before, the problem is that it appears only when I use Metashape. I passed a 3-hour stress test of the CPU, GPU, and RAM at the same time with no issues. In the Event Viewer there is always a line about the wuauserv service being unable to locate a file. I sometimes run other GPU/CPU-intensive tasks like rendering or batch processing and have never had an issue with them. I lowered my GPU frequency by about 200, and it still crashes. At the same time, I have no problem processing smaller sets of around 200 photos at ultra-high quality. It is strictly connected with the quantity of photos. The model is an interior with a huge amount of overlapping photos. I think I will make another similar set to test this problem properly.

Bug Reports / Metashape causing crashes and screen flickering
« on: July 07, 2019, 11:33:32 PM »
ASRock AB350 Pro4
Ryzen 7 1700X
Sapphire Vega 64
64 GB RAM
Windows 10 Pro x64

I have random crashes during dense cloud generation. The bigger the data set, the more certain the crash seems to be. Either the screen goes black and I need to hard-reset the PC, or the screen turns black and my driver software reports an error while Agisoft freezes. A few times, though not often, Metashape gave me a "crash report" prompt, but that happens really rarely.

So obviously I was not blaming Metashape at first. I was sure it was about my card's settings, drivers, wiring, etc. I checked everything and ran a stress test: 3 hours with the CPU and GPU at 100%, perfectly stable. I'm going to let it run for a few more hours, but it seems the problem is, unfortunately, Metashape.

The last crash was during depth map generation. No more than 6% of RAM used, GPU and CPU cool, nothing really spectacular. Also, when I launch a GPU operation in Metashape, the screen starts to flicker a little; stressing my GPU to the max with other tools, I have never seen anything like that. Everything else works fine; I have stability issues only with Metashape. Do you have any idea how to solve this problem? As I'm starting to work with bigger data sets (2k+ photos), the software seems completely unusable for me. I can align photos without any problem and have never had a crash during that operation, though the strange flickering starts immediately once the GPU kicks in. I don't know where I should look for a solution.

Best regards,

General / Re: cannot get all the cameras aligned
« on: April 28, 2019, 05:53:42 PM »
Wrrrr, my post was erased... I hate that.

1. Go to the manual and read the Alignment section. Maybe it is a matter of settings.
2. It seems that your photos are calibrated as spherical. Are they really?
3. As far as I know, the Mavic can't take real 360° photos. If you are using the panoramas it can produce, be aware that they are not suitable for photogrammetry.

General / Estimate image quality - does it really work?
« on: April 28, 2019, 05:31:22 PM »
As I mentioned before, I'm exercising Agisoft before starting the project. This is also something that worried me when I was doing private scans. I understand that this tool checks just the sharpest part of the image, so misfocused photos can still get a very high score. It is supposed to be a tool for removing photos with motion blur, and... it doesn't work. It seems to work with high-quality input, like a full-frame DSLR (I'm not really sure, because I haven't tested it thoroughly), but with small-sensor cameras it works horribly. Some photos where I struggle to recognize the subject because of huge motion blur get scores like 0.7 or higher, while pretty sharp photos receive a score of 0.4 or lower. The lower the quality of a photo, the bigger the chance of a mistake. At this point the tool is totally unusable. Maybe I'm doing something wrong? I should note that removing blurry photos is insanely important for me, as I expect to work with low-quality inputs. Does anyone know another way to throw away unusable photos? Of course, I'm not asking about manual elimination :)
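One common do-it-yourself alternative to a built-in quality estimator is the variance of the Laplacian: blurry images have weak high-frequency content, so the score drops. This is not Metashape's metric, just a minimal sketch assuming photos are already loaded as grayscale NumPy arrays:

```python
import numpy as np

def sharpness(gray):
    """Variance of the 4-neighbour Laplacian over the image interior.

    A rough focus/motion-blur score: higher means sharper. Scores shift
    with sensor and scene, so rank photos within one shoot rather than
    applying a fixed absolute threshold.
    """
    g = gray.astype(np.float64)
    # 4-neighbour Laplacian via shifted slices (no convolution library needed)
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4.0 * g[1:-1, 1:-1])
    return lap.var()
```

A practical workflow would be to sort a shoot by this score and inspect or drop the bottom tail of the distribution, rather than trusting any single cutoff value.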

General / Depth map generations works only partially - solutions
« on: April 28, 2019, 04:53:19 PM »
Hi guys. To begin with, I would like to say that Agisoft is probably my favorite software. I like the way you develop it and also how responsive the makers are. I've read so many posts by Alexey that I feel like we are close friends :). I have used Agisoft for a few years; in the past I had a chance to use a Pro license, and now I use my own Standard one. I've reached the point where reading alone is not enough anymore, and I need to ask some questions. The reason is simple: I'm planning a huge digitization project and need to choose the right software for it. It would be super cool if Agisoft could do it, but I need to be sure.

I'm going to write separate posts for my questions; it always produces nice content.

As was explained before, Agisoft won't generate a depth map for cameras whose positions lie in the field of view of another camera; this was explained with "the corridor" example. Everything is fine, but:

I have a test scan containing 4k cameras. I chose one part to realign: the 95 photos that were used to reconstruct a small sculpture. That reconstruction came out quite nicely, so I thought that (since the input was every 15th frame from footage) I would get a much better result from the whole footage. I imported around 1700 frames of the subject and aligned them properly, and... I was able to generate only 150 depth maps. What's worse, those depth maps did not correspond to the ones generated on the first attempt; I lost half of the statue. After reading the forum, I decided to turn off half of the cameras. The result was worse: 90 depth maps. I manually disabled some cameras that could be causing problems; nothing changed. I then disabled 90% of the cameras so I had a similar sampling of frames as in the first attempt; from 100 cameras I received only two depth maps.

How is it possible that from one data set, with a similar number of cameras, I received opposite results? How can I control this issue? How can I select and remove the problem-causing cameras?

The footage was made with a smartphone in extremely bad lighting conditions, so a lot of frames are blurry and unsharp. As I said before, this is an exercise aiming to prepare a workflow for the big project; in this scenario, there is no chance to capture the photos again.
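Since reshooting isn't an option, one practical mitigation for blurry footage is adaptive subsampling: instead of blindly taking every 15th frame, score each frame's sharpness with any blur metric and keep the sharpest frame in each window. A minimal sketch; `scores` is assumed to be a precomputed list of per-frame sharpness values:

```python
def pick_frames(scores, window=15):
    """Return the index of the sharpest frame in each window of
    `window` consecutive frames, given one sharpness score per frame.

    Keeps roughly the same frame density as fixed every-Nth-frame
    sampling, but biased toward the least blurry frame in each window.
    """
    keep = []
    for start in range(0, len(scores), window):
        block = scores[start:start + window]
        best = max(range(len(block)), key=block.__getitem__)
        keep.append(start + best)
    return keep
```

This won't change how Metashape decides which depth maps to generate, but it should reduce how many blurry frames enter the alignment in the first place.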

cheers guys

Another realignment of the statue: from 1758 frames I have 1737 aligned and 884 depth maps. It seems quite mysterious; I have not disabled any camera.

It seems to depend on the alignment. When I use pair preselection, I get the worst result. It was mentioned before that this can depend on the number of matches between the photos. Is there any line we can draw that defines the minimum? Can tweaking the maximum number of neighbors influence depth map generation?

Edit 2:
It's definitely about the matches... camera optimization erased 800 of the 880 depth maps.
