Forum

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - Yoann Courtois

Pages: [1] 2 3
1
Bug Reports / Two sensors for one camera
« on: March 25, 2022, 12:25:55 PM »
Hello,

Since 2022 version release (1.8.*), we have noticed that images taken from a single camera may create two different sensors after being imported in Metashape.
I wasn't able to identify which metadata has been used to seperate those two image groups. At least image format and focal length are the same !

May you have some tips ? Or is it a bug ?

Regards

2
Bug Reports / DNS error on license server
« on: March 07, 2022, 07:09:42 PM »
Dear Agisoft team,

We are currently encountering a problem to access to our license server. The DNS access looks down.
All our process are then currently stopped.

Regards

3
Python and Java API / Images rematching in an aligned project
« on: January 25, 2022, 10:30:17 PM »
Hello !

We encounter difficulties to rematch images that are part of a project that has been already aligned (with keep_keypoints=True).
Indeed, we would like to manually match precise image couples that have been first, during main process, ignored by pre-selections for example, in order to add tie points between images we know they could match.

We have been able to list all image couples that we would like to rematch, but :

- If both images of a couple already have saved key points (with keep_keypoints=True during main process), the matchPhotos(reset_matches=False) method ignore this couple, even if this couple has never been matched before.
- If both image key points are removed (with removeKeypoints method), the matchPhotos(reset_matches=False) method doesn't process the couple.

Then, we would like to be able to manually match a couple of images where both have saved key points.
Otherwise, we would like to be able to remove key points of selected images (not all of them as removeKeypoints do), so that we could have couples with an image with key points, and the other without.

Hope our problem is clearly enough described !

Regards

4
Hey !

Based on previous topic (https://www.agisoft.com/forum/index.php?topic=11167.msg50189#msg50189), we found how we can get 3*3 rotation matrix and then yaw/pitch/roll values from 4*4 transformation matrix of an aligned images.
Using same function but with .translation() instead of .rotation(), we can easily get 3*1 translation matrix and so X,Y,Z coordinates.

But now for special test we would like to invert the process :
Based on 3*3 rotation and 3*1 translation matrix, we would like to get the 4*4 transformation matrix in metashape system.

Translation part is easy as X, Y and Z can be directly put in the 4*4 matrix like this :
Code: [Select]
[[*, *, *, X],
 [*, *, *, Y],
 [*, *, *, Z],
 [0, 0, 0, 1]]

But for the rotation part, it looks .rotation() apply a calculation from the "star part" on the above matrix ([[*,  *, ...]]) to the 3*3 rotation matrix.
So we would like to invert this function to create our own 4*4 matrix based on 3*3 rotation and 3*1 translation matrix.

Note : We get our 3*3 rotation matrix from yaw/pitch/roll values using ypr2mat() function

Hope it's clear enough  ::)

Regards

5
General / Estimated Yaw, Pitch, Roll anomaly
« on: December 10, 2021, 05:53:18 PM »
Hello !

We have something which looks like an anomaly in estimated yaw, pitch, roll value.
Most of images of our projects are OK, but some have crazy values.

Here is an example :
- Pitch of 76° but the image is clearly taken horizontally
- Yaw of 221° but the closest images, which looking almost exactly the same direction, has a yaw of 319°
- Roll of -86°, but better looking about -20°.

We have tested everything (optimize, realign...) but this image always converge to the same estimated orientation.

Hope you can help.
Regards

6
General / Worse depth maps in 1.7 release
« on: May 19, 2021, 02:48:33 PM »
Hello !

We are carrying on our test of 1.7 release but now struggling to understand differences between depth maps comparing with the same project calculated in 1.5 release.

Looking at following capture, the depth maps are really worse on the right (1.7) than on the left (1.5).
We are using aggressive filtering in both process.

Has anything changed about this step of the process ?

Regards

7
Python and Java API / Point cloud gradual selection threshold
« on: May 19, 2021, 01:08:43 PM »
Hello !

We have recently compare our process using old 1.5 release and new 1.7 release.

Using the same dataset, we were surprised of some major differences of number of points selected with gradual selection, using of course same thresholds.

Our questions are :
  • Has anything changed in calculation method of gradual selections ?
  • We have found that Reprojection error threshold come from pixel error divided by key point size. Right ? We have different results between 1.5 and 1.7 while mean key point size is closely the same.
  • What about Reprojection uncertainty and Projection accuracy ? The first come from ratio between distance between images and distance between image and tie point. But it seems we miss some coefficients. For the second one, we know it comes from ratio between both distances between image and tie points, but something else seems also to be taken into account.
  • Is there any possibility to get or calculate the mean key point size, or all individual the key point size ?

Regards

8
Python and Java API / Resolution of raster builders
« on: May 18, 2021, 12:43:36 PM »
Hello !

We are struggling with resolution when building raster, starting with DEM.

Indeed, we are not able to set the resolution taken into account while building DEM.

Could you help us ?
Regards

Input :
Code: [Select]
chunk.buildDem([some_parameters], resolution=0.01)

Executed :
Code: [Select]
BuildDem: source data = Dense cloud, interpolation = Enabled, resolution = 0.01
generating 34799x33343 dem (10 levels, 0.0155608 resolution)

9
Python and Java API / Marker pixel error
« on: April 17, 2020, 04:48:39 PM »
Hey !

I'm trying to work around marker pixel error. I've seen many posts which detailled how to calculate it :
Code: [Select]
            proj = marker.projections[camera].coord
            reproj = camera.project(marker.position)
            error = (proj - reproj).norm()

However, when some mis-detection appear, it's some time not possible to project marker.position on image and reproj return None.

I'm questionning myself how GUI is able to get a pixel error on those images ("show detail" on the marker)

Hope it's clear.
Regards

10
Python and Java API / Script from archive doesn't save project
« on: March 19, 2020, 07:11:17 PM »
Hey !

Something really crazy appeared in my script.
Indeed, sometimes I have to start from old project which has been stored as an archive (*.psz)
So I open the archive file and then save it into a project file (*.psx) in order to work inside.

But after the following piece of code, no more savings are effective (using doc.save())
But ... how ?!?!

Code: [Select]
           __project_path = "directory/project.psx"
           __archive_path = "directory/archive.psz"

            doc = Metashape.app.document
            doc.open(path=__archive_path, ignore_lock=True)
            chunk = doc.chunk
            doc.save(path=__project_path)

Regards

11
Python and Java API / Update transform with import data (dense cloud)
« on: February 14, 2020, 08:00:47 PM »
Dear Agisoft team,

Within an automated workflow, we would need to update the transformation of a chunk, including a imported dense cloud.
However, even if the point cloud import is done before the transformation, basic stuff (images + tie points) are transformed but the dense cloud remain at the imported coordinates.

Is it possible to include all 3D datas in updateTransform() command ? Or may it exist any solution to transform the dense cloud together with the rest of the chunk ?

P.S. I know it is impossible to "optimize" a dense cloud. What I would need is only apply a basic Helmert transform on this cloud.

Regards

12
Python and Java API / Fail to load calibration
« on: October 31, 2019, 02:30:00 PM »
Hello !

I struggle with personal calibration loading. Here is my code :

Code: [Select]
chunk=Metashape.app.document.chunk

# importing cameras from only one sensor

my_sensor = chunk.sensors[0]
my_sensor.type = Metashape.Sensor.Type.Fisheye
my_sensor.user_calib = Metashape.Calibration()
my_sensor.calibration.load("my_path/my_calibration.xml", format='xml')
It return False ...

I've checked everything (file existence, sensor name, sensor type, etc.).
I've able to set my_sensor.fixed = True, but not able to load my calibration ...

Thanks for your help

13
Hi everyone,

Within my automated pipeline, I've developed a method that calculate and set the smallest bounding box which included a "project contour" polygon.
The calculation has been separated in 3 steps :
- Set the bounding box center
- Set the bounding box size
- Set the bounding box orientation

Everything works very fine until today ... when I decided to include coordinates system in my pipeline.
Indeed, in "Local Coordinates" ('LOCAL_CS["Local Coordinates",LOCAL_DATUM["Local Datum",0],UNIT["metre",1,AUTHORITY["EPSG","9001"]]]'), everything is OK,
but when I use projected coordinates system (For exemple EPSG::3946 in France), the box orientation doesn't work anymore (Center and size are OK !)...

Here is the following code for bounding box orientation setting (Where vertex_a / vertex_b / vertex_c are the corners of my minimum rotated rectangle that include my polygon) :
Code: [Select]
    cos_teta = (vertex_c.x - vertex_a.x) / vertex_a.distance(vertex_c)
    sin_teta = (vertex_c.y - vertex_a.y) / vertex_a.distance(vertex_c)

    __pseudo_region.rot = [[sin_teta, -cos_teta, 0],
                           [cos_teta, sin_teta, 0],
                           [0, 0, 1]]

    chunk.region.rot = chunk.transform.matrix.rotation().inv() * __pseudo_region.rot

After some investigation, my pseudo_region hasn't changed, so the problem is coming from the chunk rotation matrix (chunk.transform.matrix.rotation()).
I cannot understand how chunk.crs has inlfuenced the chunk.transform matrix... and have no idea how to fix the problem...

Regards

14
General / Bigger mask while reusing depth maps
« on: February 14, 2019, 11:35:28 AM »
Dear all,

I noticed during my last process that if I cancel the dense cloud generation (launch from GUI) after all depth maps are built , those one are not deleted anymore and can be used again.

As a reminder, dense cloud generation (launch from GUI) is composed of two processing steps :
- Depth maps building
- Dense cloud building

My question is, if then I modify the image mask (in order to mask a bigger part of images), is this new mask used when dense cloud generation is relaunched while reusing depth maps ?
With other words, is the mask used only during depth maps building or is it also taken into account during dense cloud building ?

Regards

15
Python and Java API / Model faces not linked with model vertices
« on: January 30, 2019, 06:23:25 PM »
Hi !

I'm currently trying to selected 3D model faces using their vertex coordinates, but I'm not able to find any link between faces (Metashape.Model.Face) and vertices (Metashape.Model.Vertex).
Indeed, the first one is included ".vertices", but it's only a tuple of three numbers (which looks to be vertices number or key) and no coordinates.
The second one is included ".coord", but no number (key ?).

Then, model vertices have coordinates but no link with faces, and so faces have no positioning informations.

Could someone help ?

Regards

Pages: [1] 2 3