Messages - octopus

1
Hi.
I am testing a short Python script for the new meshing option that builds the model from Depth Maps
(without building a Dense Cloud), and I have a problem with the face_count parameter, which seems to
be ignored, as if no decimation happens.
For a project with 285 photos (Metashape 1.5.1), my Python script aligns the photos and then runs the following
line with different values for quality and face_count:

    my_chunk.buildModel(face_count=mycount, source=Metashape.DepthMapsData, quality=myquality)

And for each of the following combinations, I get a resulting mesh with roughly the same number of faces:

[mycount, myquality ----> number of faces in the mesh result]
1000000, Metashape.LowestQuality  ----> 91.9 million faces
LowFaceCount, Metashape.LowestQuality    ----> 92.1 million faces
MediumFaceCount, Metashape.LowestQuality  ----> 91.8 million faces
MediumFaceCount, Metashape.MediumQuality  ----> 91.9 million faces

The face_count value seems to be ignored, and changing the quality does not affect the mesh size either.
On top of that, processing with MediumQuality naturally takes more time than LowestQuality, but
the resulting mesh does not look very different, at least visually.

I should add that if I use the Build Mesh UI command from the Workflow menu instead of the Python script,
I get reasonable results, as below:

1000000, Metashape.LowestQuality  ----> 999,945 faces
LowFaceCount, Metashape.LowestQuality    ----> 19,469 faces
MediumFaceCount, Metashape.LowestQuality  ----> 68,193 faces
MediumFaceCount, Metashape.MediumQuality  ----> 1,073,548 faces

I am wondering why this is happening and would appreciate any advice.
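
In the meantime, a workaround I am considering is to build the mesh first and then decimate it explicitly. This is only a sketch, and it assumes decimateModel() accepts a plain integer target the way the Decimate Mesh tool does in the GUI:

    # Workaround sketch: decimate after building, instead of relying on
    # the face_count argument of buildModel.  my_chunk and myquality are
    # the same variables as in the snippet above.
    my_chunk.buildModel(source=Metashape.DepthMapsData, quality=myquality)
    my_chunk.decimateModel(face_count=1000000)
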
Thank you.

2
General / Re: fisheye photos and texture blending
« on: November 15, 2018, 06:43:24 PM »
Many thanks, James.

I figured that out and tried to
1. correct the vignetting in Photoshop, and
2. mask out the area around the circular edge.

If I mask out quite a bit, I get a much better result, though
still with some seams. (See below.) I think I need to apply a more
careful vignetting correction as a preprocessing step outside Photoscan.
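
Something along these lines is what I have in mind for that preprocessing step: a rough sketch of a radial vignetting correction, assuming a simple quadratic falloff model. The filenames and the strength knob k are made up here and would need tuning per lens:

    import cv2
    import numpy as np

    def correct_vignetting(path, out_path, k=0.4):
        # Brighten pixels toward the image edge to undo radial falloff.
        img = cv2.imread(path).astype(np.float32)
        h, w = img.shape[:2]
        cx, cy = w / 2.0, h / 2.0
        y, x = np.indices((h, w))
        r = np.sqrt((x - cx) ** 2 + (y - cy) ** 2)
        r /= r.max()                    # normalized radius, 0 at center
        gain = 1.0 + k * r ** 2         # quadratic gain toward the edges
        out = np.clip(img * gain[..., None], 0, 255).astype(np.uint8)
        cv2.imwrite(out_path, out)

    correct_vignetting("front_lens.jpg", "front_lens_fixed.jpg")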

Thanks again.

3
General / fisheye photos and texture blending
« on: November 14, 2018, 12:01:43 PM »
Hi.
I am making a model from photos taken by a 360 camera (Samsung Gear 360) and have a question.
Does the MOSAIC texture blending mode (in the Build Texture command) work for fisheye photos?

Here is my situation. I took 10 shots with this camera, which has two fisheye lenses, producing 20
fisheye photos (circular images). They were processed with the FISHEYE option in Photoscan Pro 1.4.3 and
made a decent model. I then used the GENERIC/MOSAIC combination for texture mapping, but the result
has lots of distinctive seam lines, and the GENERIC/AVERAGE option produces an almost identical
result. Using Color Optimization before texture generation does not help either.
The camera uses auto exposure, so there is some variation in brightness among the photos. But when I
work with regular frame cameras, ordinary exposure differences among photos are usually resolved well
by the MOSAIC blending option. In contrast, in this fisheye project, MOSAIC seems to do
nothing.
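
For reference, this is roughly the Python equivalent of the settings I used; a sketch only, with keyword and constant names as in the 1.4-era API (they may differ in other versions):

    import PhotoScan

    chunk = PhotoScan.app.document.chunk

    # GENERIC mapping + MOSAIC blending, as described above.
    chunk.buildUV(mapping=PhotoScan.GenericMapping)
    chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)
    # Swapping in PhotoScan.AverageBlending gives the near-identical result.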

I attach some comparison screenshots below. The first uses 5 fisheye photos for texture generation with
the MOSAIC option and shows the hard seam lines, marked in yellow. The next uses
AVERAGE, and the image looks the same. The last uses only one camera, which covers the
left half of the model, and as expected there is no seam. I should add that if I use all 40 photos for texture
generation, the MOSAIC option gives me seam lines all over the model.

Thank you.

4
Hi Alex,
Many thanks for your explanation. Now I have a better understanding of chunk alignment.
I managed to align my chunks satisfactorily with the point-based method.

If I may add one further question: do the two methods below make any difference?
Assume there are 4 sets of photos, set A, set B, set C and set D, each with several hundred photos. Then:

 method 1: Import the photos of A, B, C, D into one chunk and align all the photos together in one operation.
 method 2: First, for each of A, B, C, D, align the photos to make a small chunk. Then align these
                   4 chunks (A, B, C, D) and merge them into one chunk containing all the cameras.

I imagine method 1 would adjust each camera position considering all the other photos, while method 2 positions
each camera only relative to the photos in its own set. Therefore:

 a.  method 1 takes significantly longer for photo alignment
 b.  camera positions from method 1 are much more precise than those from method 2
 c.  method 1 also requires much more RAM for photo alignment

Are these points correct?

My computer has a limited amount of RAM and struggles with method 1. I am hoping method 2 produces an acceptable result.
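
In case it is useful, this is the shape of method 2 as I picture it in script form. It is just a sketch: the alignChunks/mergeChunks keyword names and the point-based method code are assumptions to check against the API reference for your version.

    import PhotoScan

    doc = PhotoScan.app.document

    # Step 1: align each photo set inside its own chunk.
    for chunk in doc.chunks:
        chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                          preselection=PhotoScan.GenericPreselection)
        chunk.alignCameras()

    # Step 2: align the four chunks against the first one (point based),
    # then merge them into a single chunk holding all the cameras.
    # The method=0 value for point-based alignment is an assumption.
    doc.alignChunks(chunks=[0, 1, 2, 3], reference=0, method=0)
    doc.mergeChunks(chunks=[0, 1, 2, 3], merge_markers=True)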

T

5
Hi. I am trying to align multiple chunks (Workflow > Align Chunks command) using the point-based option, and have a question.

Does the point-based method use tie points, or does it use the dense cloud points of the source chunks?
    Currently, my chunks are processed with a high-quality photo-alignment setting and a low-quality dense-cloud setting.
    To get a good chunk alignment, is it better to re-process each chunk with high-quality dense cloud generation,
    or does it not matter, since the alignment calculation may rely on tie points only?

Also, either way, does it help to clean up by manually deleting noisy data (inaccurately placed tie points or
   dense-cloud points generated around the edges of the model) before running the Align Chunks command?

I would appreciate any advice very much.


6
Hi.
 I just upgraded from version 1.2.6 Pro to 1.3.1 Pro. When I select the GPU tab in the Preferences menu, Photoscan crashes.
(A Windows dialog shows up and says Agisoft Photoscan has stopped working.)

My environment is:
 Windows 7 Enterprise
 Dell Latitude E6430
 16 GB RAM
 NVIDIA NVS 5200M

In 1.2.6, I never had this problem.
Thanks.

7
General / Re: cleaning up interpolated mesh around roof line
« on: August 29, 2016, 04:21:41 PM »
Hi ekbmuts,
 Thanks a lot for the suggestion.
 The white roof line against the bright sun was one problem,
 but after building the Dense Cloud, I was able to clean up the
 unwanted points in the sky by masking the sky in
 the photos, using
  Tools > Dense Cloud > Select Points by Mask,
 and just deleting those points. This tool is great.
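
 For reference, the scripted version of that clean-up would look something like this; a sketch, assuming selectMaskedPoints() behaves like the UI tool and that the sky masks are already imported:

    import PhotoScan

    chunk = PhotoScan.app.document.chunk

    # Select dense cloud points that project into the masked (sky) areas
    # of the aligned cameras, then delete the selection.
    chunk.dense_cloud.selectMaskedPoints(chunk.cameras)
    chunk.dense_cloud.removeSelectedPoints()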

 So, the Dense Cloud is pretty accurate, and if I mesh it without
 the interpolation option, it produces a model of the roof edge
 without spilling into the sky. But if I mesh it with the interpolation
 option, the roof edge spills back into the sky.

 I read in another post that this is what the interpolation
 option does at object edges, so I accept it. But I'd like
 to know if there is any way to easily remove these unwanted
 spilled portions of the mesh. Ideally, I would select them
 by the same Points by Mask method (or a "Mesh by Mask")
 as for the Dense Cloud, but I cannot find any tool like that.

 I wonder if any good method is available for such a situation.



8
General / cleaning up interpolated mesh around roof line
« on: August 26, 2016, 08:16:10 PM »
Hi. I am making a model of a building from photos taken from the ground.
The roof has ornaments that produce unwanted dense cloud points
around the roof edge, out in the sky. So, I removed them by masking the
sky portion in the photos and selecting the dense cloud points in the sky.

Now the problem I have is that generating the mesh with interpolation enabled
still creates a white area around the roof edge. Is there any way to
easily clean it up? The selection method using masked photos would be great
if it also worked with the mesh, but it seems to work only with dense cloud points.
I need interpolation for meshing since it works very well for
filling holes in many other locations in the model.
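
In script terms, the two variants I am comparing would be something like this; a sketch, with keyword and constant names as I understand the 1.2-era API (they may differ in newer builds):

    import PhotoScan

    chunk = PhotoScan.app.document.chunk

    # With interpolation: fills holes nicely, but spills past the roof edge.
    chunk.buildModel(surface=PhotoScan.Arbitrary,
                     source=PhotoScan.DenseCloudData,
                     interpolation=PhotoScan.EnabledInterpolation)

    # Without interpolation: clean roof edge, but holes remain elsewhere.
    chunk.buildModel(surface=PhotoScan.Arbitrary,
                     source=PhotoScan.DenseCloudData,
                     interpolation=PhotoScan.DisabledInterpolation)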

The image attached shows the cleaned-up dense cloud (left), the mesh with
interpolation (middle), and the mesh without interpolation (right).

Thanks.
