

Messages - Yoann Courtois

1
General / Bigger mask while reusing depth maps
« on: February 14, 2019, 11:35:28 AM »
Dear all,

I noticed during my last processing run that if I cancel the dense cloud generation (launched from the GUI) after all the depth maps are built, those depth maps are no longer deleted and can be reused.

As a reminder, dense cloud generation (launched from the GUI) is composed of two processing steps:
- Depth maps building
- Dense cloud building
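
For reference, here is a minimal sketch of the same two steps through the Python API (a sketch only, assuming Metashape 1.5-style arguments; the quality/filter parameters changed in later versions):

Code:
import Metashape

chunk = Metashape.app.document.chunk

# Step 1: build the depth maps
chunk.buildDepthMaps(quality=Metashape.MediumQuality, filter=Metashape.MildFiltering)

# Step 2: build the dense cloud from the existing depth maps
chunk.buildDenseCloud()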

My question is: if I then modify the image masks (in order to mask a bigger part of the images), is the new mask used when dense cloud generation is relaunched while reusing the depth maps?
In other words, is the mask used only during depth map building, or is it also taken into account during dense cloud building?

Regards

2
Python Scripting / Re: Model faces not linked with model vertices
« on: February 01, 2019, 01:52:38 PM »
Okay! Thank you, now I see!

It might be great to update the API documentation and explain that the .vertices attribute of a chunk.model.faces element is indeed a tuple of integers, but that those integers are the indices of the 3 vertices in the chunk.model.vertices list!
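
For anyone finding this later, here is a small sketch of that lookup (assuming a chunk with a built model):

Code:
import Metashape

chunk = Metashape.app.document.chunk
model = chunk.model

face = model.faces[0]
# face.vertices holds indices into model.vertices, not coordinates
face_coords = [model.vertices[i].coord for i in face.vertices]
print(face.vertices, face_coords)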


Regards

3
Python Scripting / Re: Model faces not linked with model vertices
« on: January 31, 2019, 02:10:39 PM »
Hello Alexey,

After some investigation, I still don't understand how it could work like that.

At first, "checkFaceTask" is called:
Code:
def checkFaceTask(face):
    global vertices
    face_vertices = [vertices[v] for v in face.vertices]
    checkFace(face, face_vertices)

But face.vertices is just a tuple of 3 integers.
So then, "checkFace" takes as parameters "face", which is a model face, and "face_vertices", which is a list of 3 integers:
Code:
def checkFace(face, face_vertices):
    global region
    R = region.rot  # Bounding box rotation matrix
    C = region.center  # Bounding box center vector
    size = region.size
    remove_vertices = 0

    for vertex in face_vertices:

        v = vertex.coord
        v.size = 3
        v_c = v - C
        v_r = R.t() * v_c

        if abs(v_r.x) > abs(size.x / 2.):
            remove_vertices += 1
        elif abs(v_r.y) > abs(size.y / 2.):
            remove_vertices += 1
        elif abs(v_r.z) > abs(size.z / 2.):
            remove_vertices += 1
        else:
            continue

    if remove_vertices == 3:
        face.selected = True

Then how is it possible to call vertex.coord for each vertex in face_vertices, since vertex would be an integer, and an integer has no .coord?

After some tests in the Metashape console, I got the same failure.

Regards

4
Python Scripting / Model faces not linked with model vertices
« on: January 30, 2019, 06:23:25 PM »
Hi !

I'm currently trying to select 3D model faces using their vertex coordinates, but I'm not able to find any link between faces (Metashape.Model.Face) and vertices (Metashape.Model.Vertex).
Indeed, the first one includes ".vertices", but it's only a tuple of three numbers (which look to be vertex numbers or keys) and no coordinates.
The second one includes ".coord", but no number (key?).

So model vertices have coordinates but no link to faces, and faces have no positioning information.
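
To illustrate what I see in the console, a minimal sketch (assuming a chunk with a built model):

Code:
import Metashape

chunk = Metashape.app.document.chunk
model = chunk.model

print(model.faces[0].vertices)   # a tuple of three integers
print(model.vertices[0].coord)   # a coordinate vector, with no index or key attribute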

Could someone help ?

Regards

5
Bug Reports / Re: Classification looks broken
« on: December 27, 2018, 01:22:22 PM »
Thanks a lot.
Could you tell me when the next version update will be released?

Regards

6
Bug Reports / Re: Classification looks broken
« on: December 26, 2018, 11:02:45 AM »
Hello Alexey,

Even though the coordinates used for the GCPs are in the RGF93/CC46 system (EPSG 3946), everything was calculated in a local system. So there is no correction from the projection system, but it doesn't affect such a small survey.

Indeed, you are right. I've processed this dataset many times, and this one has an incorrectly aligned block of several pictures. Nevertheless, it doesn't affect the rest of the survey either.

Even though I used this example for reporting the classification problem, I only started this topic after several experiments, using different datasets (which have now been deleted).

Regards

7
Hi !

Is there nothing more than that by now?
Indeed, with the GUI, I remove all connected components but the main one (threshold = 99%).
But as my model can contain 10,000 or 10 million faces, it's quite difficult to define the threshold as a fixed number of faces for Python scripting.
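
One workaround would be to derive the face-count threshold from the model itself; a rough sketch, assuming Model.removeComponents(size) behaves like the GUI filter and removes connected components with fewer than "size" faces:

Code:
import Metashape

chunk = Metashape.app.document.chunk
model = chunk.model

# keep only components holding at least 1% of the total face count
threshold = max(1, int(len(model.faces) * 0.01))
model.removeComponents(threshold)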

Regards

8
Bug Reports / Re: Classification looks broken
« on: December 21, 2018, 11:43:17 AM »
The classification was done with 1.5.0 build 7204; I'll send you the point cloud used.

I'll try again with the very latest build.

Thanks for your investigation

9
Bug Reports / Classification looks broken
« on: December 20, 2018, 06:20:41 PM »
Ground point classification gives really bad results in Metashape (but also in PhotoScan 1.4), whereas it was much better before.
Is this normal?

Here is a screenshot from a standard survey where we can see a really random result.

Regards

10
Python Scripting / Re: Setting maximum dimension of exported orthomosaic
« on: November 29, 2018, 12:03:50 PM »
Hi John !

Here is the solution:

Hello mikeb,

You can use the following methods related to the orthomosaic to calculate the required export resolution:
chunk.orthomosaic.width
chunk.orthomosaic.height
chunk.orthomosaic.resolution

So if you need to export the orthomosaic using max-dimension = X, then I think you can use the following:

Code:
dx = dy = max(chunk.orthomosaic.width, chunk.orthomosaic.height) * chunk.orthomosaic.resolution / X
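
A usage sketch following from that formula (assuming the 1.4-era exportOrthomosaic() call accepts dx/dy pixel-size arguments; the output path and X value are just examples):

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk
X = 4096  # example: maximum allowed dimension of the export, in pixels

dx = dy = max(chunk.orthomosaic.width, chunk.orthomosaic.height) * chunk.orthomosaic.resolution / X
chunk.exportOrthomosaic("ortho_max_4096.tif", dx=dx, dy=dy)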

11
Python Scripting / Re: Getting projection error of markers
« on: November 06, 2018, 04:44:38 PM »
Perfect !  8)
Thanks

12
Python Scripting / Re: Getting projection error of markers
« on: November 06, 2018, 01:18:01 PM »
This reprojection error could be added to the process below, to export everything about markers "all in one":

Hello ARF__,

You can try the following for the active chunk (if projected coordinate system is used):

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk

for marker in chunk.markers:
      source = marker.reference.location
      estim = chunk.crs.project(chunk.transform.matrix.mulp(marker.position))
      error = estim - source
      total = error.norm()
      print(marker.label, error.x, error.y, error.z, total)
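
For the reprojection error in pixels (as opposed to the reference error above), here is a rough sketch, assuming the 1.4 API where camera.project() returns image coordinates and marker.projections[camera].coord holds the measured pixel position:

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk

for marker in chunk.markers:
    if marker.position is None:
        continue
    errors = []
    for camera in marker.projections.keys():
        if camera.transform is None:
            continue
        measured = marker.projections[camera].coord    # pinned marker position on the image (pixels)
        reprojected = camera.project(marker.position)  # estimated position reprojected onto the image
        if reprojected is None:
            continue
        errors.append((reprojected - measured).norm())
    if errors:
        print(marker.label, sum(errors) / len(errors), len(errors))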

13
Python Scripting / Re: Getting projection error of markers
« on: November 06, 2018, 01:02:07 PM »
Hi !

Is there a new way to export the reprojection error in pixels with the 1.4 API?

Regards

14
Python Scripting / Re: exporting reference table as csv
« on: November 05, 2018, 05:25:47 PM »
Hi Alexey,

I would indeed like to export the average reprojection error and number of tie points for each image, as well as the average reprojection error and number of projections for each marker, all of which are unavailable with the saveReference function and cannot be found anywhere else in the API documentation.
Could you help me with this?
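
As a starting point, the tie point count per image can apparently be read from the sparse cloud; a rough sketch, assuming the 1.4 API and accepting that this count also includes projections of invalid points:

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk
point_cloud = chunk.point_cloud

for camera in chunk.cameras:
    if camera.transform is None:
        continue
    # number of tie point projections measured on this image
    print(camera.label, len(point_cloud.projections[camera]))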

Regards

15
Hi,

Firstly, I would say that an aerial survey matches well with a pole survey, which in turn matches well with a ground survey. Indeed, the angle of view differs greatly between the aerial and ground points of view, which is the weak point of the SIFT algorithm. A pole survey is therefore a good compromise to ensure the 3D reconstruction.

After that, in order to acquire data for an area invisible from the aerial point of view (like several hundred m² in the middle of a 20 ha drone survey), I would fly over the whole area, and acquire pictures with a pole under the mask (trees?), while extending coverage beyond the masked area so that a strip around it is mapped from both the pole and the drone points of view.
The overlap of both the aerial and pole surveys can be kept normal, e.g. 80% front and 70% side.

Finally, I would suggest you use check points both in the area exclusively mapped by the drone and in the area mapped with the pole (and masked for the drone), so that you can improve your survey process and figure out the accuracy of your 3D model.

Regards

P.S. Feel free to share your results and some screenshots of your project  :)
