Forum

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - adehecq

Pages: [1]
1
Hello,
Is there any update on this topic? I would also be interested in keeping the exterior orientation fixed while solving for the interior orientation during camera optimization. Is there a solution using the Python API?
Thank you,

Amaury

2
General / Re: What does Metashape do with fiducial marks?
« on: July 23, 2021, 09:55:24 AM »
Hi PRobert1968,

Sorry for the slow reply, but I don't get notifications when someone replies on the forum, and I don't visit often.
I think what you are talking about (finding tie points between aerial images and a modern basemap) is different from the issue I raised. In my case, I have four fiducial markers that I can use to correct distortions of the film introduced during acquisition and scanning. Usually an affine transformation is sufficient, but in that specific case I had to use a projective transformation. That does not seem to be handled well by Metashape, and I was wondering whether this is something that could be improved.

I work on Bolivian aerial images. I'm based in Europe.

Best,
Amaury

3
General / What does Metashape do with fiducial marks?
« on: June 24, 2021, 04:40:06 PM »
Hi,

Sorry if this has been answered before, but I could not find the information on the forum.
I was wondering what exactly Metashape does with fiducial marks when working with a film camera. What sort of transformation does it apply to the images to correct for scanner/film distortion?

The reason I'm asking is that I'm trying to process a set of aerial images from the 1960s. This is not an easy data set, as the images were scanned in two different periods with scanners of different resolution and quality. Even after providing the fiducial marker positions, the alignment didn't work very well.
In the end, I warped the images externally (using a Python script) based on the four fiducial mark positions, so that the markers in all my images were perfectly aligned and the images had the same dimensions. That's when I realized that an affine transformation was not sufficient to make the alignment of the markers consistent; instead, I had to use a projective transform. Only then did I get really good results in Metashape.
My assumption is that Metashape only corrects for an affine transformation. If so, would it be possible to also support projective transformations?
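For context: an affine transform has 6 free parameters, while a projective one (a homography) has 8, which is exactly what four fiducial marks pin down (two equations each). Below is a minimal, stdlib-only Python sketch of fitting a homography to four point correspondences; it is a simplified illustration of the idea behind my external script, not Metashape code:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

def fit_homography(src, dst):
    """Fit a 3x3 homography from 4 point pairs: 8 equations, 8 unknowns
    (the bottom-right entry is fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b)
    return [h[0:3], h[3:6], h[6:8] + [1.0]]

def apply_h(H, x, y):
    """Map a point through the homography (with the projective divide)."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

With only three marks the homography would be underdetermined, which is consistent with an affine fit (6 parameters) being the best one could do in that case.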

I was also wondering: is Metashape able to handle images of different resolutions (one set was scanned at 14 microns, the other at 21)? Although the tie point matching is scale independent, I'm not sure that's the case for the depth map calculation.

Thanks a lot for your feedback!


4
Bug Reports / Re: Segmentation fault with Metashape 1.7.2
« on: May 18, 2021, 10:59:17 AM »
Hi Alexey,

Sorry for the long silence. I submitted a report yesterday with the reference number you gave.
Thank you for looking into this.

Amaury

5
Bug Reports / Segmentation fault with Metashape 1.7.2
« on: May 11, 2021, 02:42:02 PM »
Hello,

I've downloaded the latest Metashape Pro (1.7.2) for Linux. Once I uncompress the folder and run ./metashape-pro/metashape, I get a segmentation fault.
Version 1.6.6 works without problems on this machine, but I wanted to upgrade because I could not manage to install external Python modules with that version.
Here's the distribution I use:
Distributor ID:   Debian
Description:   Debian GNU/Linux 9.13 (stretch)
Release:   9.13
Codename:   stretch

Thank you,
Amaury

6
Hello,

I am trying to automate the processing of a set of terrestrial images using the Python API. Typically, the workflow in the GUI consists of:
1. Importing the pictures ('Add Photos')
2. Importing a reference containing the approximate camera positions and orientations ('Import reference' -> CSV file)
3. Setting the camera pixel size + focal length and keeping intrinsic parameters to fixed values ('Tools' / 'Camera calibration')
4. Selecting the orientations in the reference pane, to allow the reference estimates to be used
5. Aligning the cameras ('Align Photos')

I think I am able to run steps 1, 2 and 3 correctly with the Python API:
Code:
import glob
import Metashape

doc = Metashape.Document()

# Create chunk
chunk = doc.addChunk()

# Find pictures in folder
ImageFiles = glob.glob('./000-175-*.tif')
ImageFiles.sort()

# Add pictures to project
for fileName in ImageFiles:
   chunk.addPhotos([fileName])

sensor = chunk.sensors[0]
sensor.pixel_height = 0.021
sensor.pixel_width = 0.021
sensor.focal_length = 161
sensor.fixed_location = False
sensor.fixed_rotation = False
sensor.fixed_calibration = True

# Load camera positions and orientations
chunk.crs = Metashape.CoordinateSystem("EPSG::21781")
loadReferenceSuccess = chunk.importReference('camera_positions_rotations.txt', Metashape.ReferenceFormatCSV, "nxyzabc", delimiter=',',crs=chunk.crs)
For info, the CSV file contains the following columns: Labels, E, N, z, Omega, Phi, Kappa.
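To be explicit about that layout, here is a small stdlib-only sketch showing how one such row maps onto the "nxyzabc" fields; the filename and values below are made up for illustration:

```python
import csv
import io

# Hypothetical sample row for the "nxyzabc" layout:
# label (n), easting (x), northing (y), height (z), Omega (a), Phi (b), Kappa (c)
sample = "000-175-001.tif,601234.5,197654.3,512.8,0.5,89.7,-0.2"

row = next(csv.reader(io.StringIO(sample)))
label = row[0]
x, y, z = (float(v) for v in row[1:4])            # position in the chunk CRS
omega, phi, kappa = (float(v) for v in row[4:7])  # rotation angles in degrees
```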

But I have issues with step 5. The following code results only in invalid tie points:
Code:
chunk.matchPhotos()
chunk.alignCameras()
whereas running 'Align Photos' in the GUI results in similar tie points, which are valid. Is there an additional step performed by 'Align Photos' that I am missing?

I also could not find how to do step 4 with the API. How can I make sure that the initial orientations are taken into account (not only the positions)? In the GUI, this does not seem to be the default.
Thanks!

7
Hi Alexey,

Thanks for the link. I saw that before and tried it. Unfortunately, it returns the same yaw/pitch/roll that I provided, which is the rotation between the camera frame and the local tangent plane.
In my test case, my camera is looking north, parallel to the ground, so YPR is about 0, 90, 0. But the rotation between the camera frame and the ECEF frame should be very different from that.

The page you linked states: "how to calculate the exterior orientation parameters [...] in the georeferenced chunk". I tried setting the chunk CRS to EPSG 4978 (geocentric) to see if it makes a difference, but it doesn't: the returned R matrix is the same.
In the line "R = (m * T * camera.transform * Metashape.Matrix().Diag([1, -1, -1, 1])).rotation()",
what is the meaning of m and of the diagonal matrix (1, -1, -1, 1)?
If I removed m from this expression, wouldn't it give me the rotation matrix to ECEF?
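If I understand correctly, the 3x3 part of that diagonal matrix just flips the y and z axes, i.e. it is a 180-degree rotation about x, converting between the (x right, y down, z forward) image convention and an (x right, y up, z backward) one. A quick plain-Python check of that reading (no Metashape involved):

```python
# 3x3 rotation part of Metashape.Matrix().Diag([1, -1, -1, 1]):
# flips the y and z axes, i.e. a 180-degree rotation about x.
F = [[1, 0, 0], [0, -1, 0], [0, 0, -1]]

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(M):
    """Determinant of a 3x3 matrix."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
```

Its determinant is +1, so it is a proper rotation (not a reflection), and applying it twice is the identity, which is consistent with it being a pure change of axis convention.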
Thanks,

8
Hello,

I would like to export the cameras aligned in Metashape to a different reference system for use with external software (NASA Ames Stereo Pipeline), which works only in geocentric ECEF coordinates. I would need to extract each camera's center and rotation matrix in this coordinate system. If I'm correct, the following lines of code will convert the camera center into ECEF?

Code:
   camera = chunk.cameras[0]
   T = chunk.transform.matrix
   cen_p = camera.center
   cen_t = T.mulp(cen_p)
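To make sure I understand what mulp does, here is a plain-Python sketch of my assumption about its behavior (not the actual API implementation): multiply the 4x4 transform by the point in homogeneous coordinates, then divide by w:

```python
def mulp(T, p):
    """Apply a 4x4 transform T (nested lists) to a 3D point p:
    homogeneous multiply, then divide by the resulting w."""
    x, y, z = p
    out = [T[r][0] * x + T[r][1] * y + T[r][2] * z + T[r][3] for r in range(4)]
    w = out[3]
    return (out[0] / w, out[1] / w, out[2] / w)
```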

However, I'm not sure how to export the rotation matrix in the ECEF system. Could you please help me with that?
Thank you.
