

Messages - jochemla

1
General / Re: Normals in laz export
« on: August 31, 2023, 10:56:35 AM »
Hi there,
Normals are stored by Agisoft Metashape within LAS files as extra dimensions (extra bytes; you can use LAStools' lasinfo to see these fields), with the field names being
Code: [Select]
normal x, normal y, normal z
(with spaces, without quotes).
The convention (see this thread) established by ASPRS for storing normals within LAS is instead
Code: [Select]
NormalX, NormalY, NormalZ
PDAL understands NormalX, nx or normal_x, but not the names used by current Metashape-exported LAS files, "normal x" with a space.

It would be great if Metashape could resolve that and use the official LAS normals naming convention - NormalX - on export. Maybe this could be moved as a bugfix request.
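In the meantime, consumers of these files can normalise the names themselves. This is a minimal, hypothetical Python shim (the mapping table is mine, not part of PDAL or Metashape):

```python
# Hypothetical shim: map Metashape's extra-byte names ("normal x", with a
# space) to the ASPRS convention (NormalX) before handing data downstream.
METASHAPE_TO_ASPRS = {
    "normal x": "NormalX",
    "normal y": "NormalY",
    "normal z": "NormalZ",
}

def canonical_normal_name(name):
    """Return the ASPRS-style dimension name, leaving other names untouched."""
    return METASHAPE_TO_ASPRS.get(name.lower(), name)
```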

Hope this helps.
Jonathan

2
Python and Java API / Run script at exit/when PhotoScan quits
« on: November 02, 2015, 05:32:09 PM »
Hi everyone,

I would like to be able to run a script when PhotoScan exits. I know I can call a script's function from a GUI menu entry. What I would like to do is to call a function - which would write a small text file - when PhotoScan exits/quits (either when the user selects the menu File - Exit or when using the small red cross).

Is this possible? I could not find anything in the docs. The alternative would be to add a File - "Quit and write file" menu entry, but its use would be limited.
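In plain Python, the standard atexit module does exactly this. Whether PhotoScan's embedded interpreter actually runs atexit handlers when the GUI is closed (File - Exit or the red cross) is an assumption that would need testing, and the file name below is made up:

```python
import atexit
import datetime

def write_exit_marker(path="photoscan_exit.txt"):
    # Write a small text file recording when the interpreter shut down.
    with open(path, "w") as f:
        f.write("exited at %s\n" % datetime.datetime.now().isoformat())

# Registered handlers run when the Python interpreter exits normally.
atexit.register(write_exit_marker)
```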

Thanks in advance for your help.

Best regards,

Jonathan

3
Hello Alexey,

Alright, thanks again for your time !

Jonathan

4
Hello Alexey,

Thanks a lot for your answer. You are right, the GPS info is not accurate enough to be used as is - we still need the correction given through photogrammetry to adjust the camera positions and orientations. Moreover, we do not have calibrated images (so no intrinsic parameters), so we cannot use the import camera function. Thanks for pointing this out.

As for the roll, pitch and yaw, I thought it could make the matching a bit faster, but if it is not implemented in PhotoScan, that is fine. When saying I can load the data for cameras one by one, you mean if I have the extrinsic and intrinsic parameters precisely calibrated, right? And can I ask what georeferencing and optimization purposes the reference can be used for?

Thanks again for your help !

Best regards,

Jonathan

5
Hello Alexey,

Thanks a lot for your answer. You are right, the GPS info is not accurate enough to be used as is - we still need the correction given through photogrammetry to adjust the camera positions and orientations. Thanks for pointing this out.

As for the roll, pitch and yaw, I thought it could make the matching a bit faster, but if it is not implemented in PhotoScan, that is fine. Thanks again for your help!

Jonathan

6
Hi everyone,

I am writing some Python scripts to automate some processes. I made these scripts available through new menu entries. The problem is that it is hard to debug this kind of script - it simply exits when there is an error, without saying at which line the error occurred.

If I run the scripts using the Tools/Run Script entry, then I can find where the error lies, but this is not the most convenient debug workflow. So my question is:

Do you have an idea on how to debug these scripts run from a menu entry? What are you using to write your Python scripts? I am using Notepad++, but there is no Python auto-completion like there is in the PhotoScan command line (using Ctrl+Space). I cannot use PyCharm, for example, since I have no access to the PhotoScan.py library (which is normal, but annoying when writing large scripts).
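One workaround would be to wrap each menu callback so uncaught exceptions print a full traceback to the console before the script dies. This is a generic Python sketch; the addMenuItem call in the comment is only an illustration of how it might be hooked up, not a verified signature:

```python
import functools
import traceback

def report_errors(func):
    """Decorator: print a full traceback (with line numbers) for any
    exception raised by a menu callback, then re-raise it."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            traceback.print_exc()
            raise
    return wrapper

# Hypothetical hookup:
# PhotoScan.app.addMenuItem("Scripts/My tool", report_errors(my_tool))
```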

Thanks a lot for your help ! :)

Jonathan

7
Hi everyone,

Sorry, this post was in the wrong section.

I am trying to reconstruct quite complex scenes, and I am looking for ways to improve the speed of the reconstruction. I can access, for each photo I am using, the orientation of the camera. This info is stored in three XMP tags for pitch, yaw and roll (the orientation of the UAV that took the pictures). I can access them from the Python API, and am now wondering whether these can be of any use to PhotoScan. I searched through the forums and found several answers, but I think they may be outdated, which is why I am opening this topic.

On April 09, 2012, Alexey said that "Source yaw, pitch, roll data is not used yet" ( http://www.agisoft.com/forum/index.php?topic=224.msg924#msg924 ). Is this still the case? Or is the orientation now used for the tie point detection step? Is it used in the following operations? I have done the reconstruction for several chunks, based on photos with GPS EXIF data, and could find the estimated orientation in the Reference pane, as described in this topic: http://www.agisoft.com/forum/index.php?topic=2572.0 .

The solution I am thinking of right now to import the orientation (based on http://www.agisoft.com/forum/index.php?topic=3215.msg17080#msg17080 ) would be to loop through all the image files and build a CSV file storing the orientation info as Omega Phi Kappa (as well as the GPS, but this may not be necessary since it is loaded from the EXIF). Then I would like to import this camera calibration file into PhotoScan, also through Python scripting. I could find an importCameras function in the 1.1.0 Python API, but no more details. I could find an example script for 1.0.4 here: http://www.agisoft.com/forum/index.php?topic=3095.msg16338#msg16338 but again, it seems to be too old for v1.1.0, with which I use the chunk.addPhotos(photo_list) function. A second solution would be to write this data into the GPSPitch, GPSRoll and GPSDirection EXIF tags ( http://www.agisoft.com/forum/index.php?topic=2416.msg12846#msg12846 ), but they are not read by PhotoScan yet, right?

First question: this topic, http://www.agisoft.com/forum/index.php?topic=3650.msg19088#msg19088 , says we can use chunk.addPhotos and then chunk.importCameras. My problem is also that I have some cameras with the same filename but in different folders (different filepaths). Is it possible to build an importCameras-ready CSV file taking that into account? From what I've read on the forum, it only needs to store the name of the image. Moreover, what should the format of this CSV/txt file be?
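For what it's worth, here is how I would sketch building such a file in plain Python. The tab-separated "label omega phi kappa" layout is an assumption on my part - the exact columns PhotoScan expects would need checking against the import docs - and read_opk is a placeholder for whatever function reads the XMP tags:

```python
import csv
import os

def write_reference_csv(image_dir, read_opk, out_path):
    """Write one 'label<TAB>omega<TAB>phi<TAB>kappa' row per image.

    read_opk(path) -> (omega, phi, kappa) is supplied by the caller,
    e.g. a function converting the UAV's yaw/pitch/roll XMP tags.
    """
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        for name in sorted(os.listdir(image_dir)):
            if name.lower().endswith((".jpg", ".jpeg", ".tif", ".tiff")):
                omega, phi, kappa = read_opk(os.path.join(image_dir, name))
                writer.writerow([name, omega, phi, kappa])
```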

Second question: it would be easier for me not to build this CSV file. Is it possible to loop through the cameras of the chunk, read the orientation from the image file, and set this information on the camera? Will PhotoScan then use it, or do I first need to tell it to take the camera orientations into account?

I hope everything was clear. If not, you can ask me anything :)

Thanks in advance for your help,

Best regards,

Jonathan


9
Hi everyone,

First of all, this is my first post on the forums, so thanks for all the work of the PhotoScan team. It is a very nice tool to use from the GUI, as well as a very customizable tool using the Python API.

I am mostly using the Python scripting interface to adapt the full processing chain to my needs. I am facing two problems here, mostly due to the fact that I am processing quite complex scenes.

 - First of all, I have 55 chunks aligned and reconstructed (meshed and textured), each containing about 150 photos of 4608x3456 pixels (about 15,000 tie points, 10M dense cloud points and 700,000 faces each). The file now weighs 32GB, and I have 32GB of memory. Thanks to the GPS EXIF data present in the images, each chunk's model is mostly correctly positioned. There are, however, some misalignments between neighbouring chunks - although they share some cameras - so I try to align them using the 'Align Chunks' PhotoScan feature. I use the 'camera based' option to make it faster. When I do so, the chunks end up badly misaligned, not even parallel, although the cameras are mostly in the same plane.
Here, it is important to note that I am using symbolic links on Windows when importing the photos - I have created, chunk by chunk, a folder of symlinks for each chunk, pointing to the real photo files. I think the problem may be that some images of the project share the same filename although they are different images. Also important: if two chunks share image_0017.jpg, it will be in a chunk_1 folder and in a chunk_2 folder - they are not in the same folder.
So my question is: how does PhotoScan do the camera-based chunk alignment? Does it use only the camera names? Does it need the images to be in the same folder? If I try to do the alignment on only two chunks, it sometimes works, but not always. I have seen this issue: http://www.agisoft.com/forum/index.php?topic=3623.msg18987#msg18987 , but it does not really answer my problem.

 - Secondly, the file is now the same size as my RAM. This is probably due to the geometry, textures and so on being loaded in PhotoScan. Therefore, if I try to reconstruct a 56th chunk, PhotoScan freezes the computer and I need to restart. The problem is that I need to process 128 chunks in total. Is there a way to do so without adding memory? I will then need the 128 chunks to be aligned with one another, and would love to do so automatically; a Python solution would also work. I am also wondering: does aligning the chunks using the camera GPS work before the geometry is computed?
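Regarding the duplicate filenames in the first point: if the alignment really does key on camera labels alone (an assumption - I have not verified how 'Align Chunks' matches cameras), one defensive measure would be to make each label unique by prefixing the chunk folder, for example:

```python
import os

def unique_label(filepath):
    """Build a label like 'chunk_1/image_0017.jpg' so two images with the
    same filename in different chunk folders cannot be confused."""
    folder = os.path.basename(os.path.dirname(filepath))
    return "%s/%s" % (folder, os.path.basename(filepath))

# camera.label = unique_label(camera.photo.path)  # hypothetical usage
```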
 
Thanks a lot for your help, and do not hesitate if you have any questions!

Best regards,

Jonathan

10
Feature Requests / Way to make visualization of large projects faster
« on: April 13, 2015, 10:56:20 AM »
Hi everyone,

Here is an idea to make visualization faster. I am often using textured mode to view the results of the full processing chain.

Once the textures are loaded (this step might take some time when loading several 4096x4096 texture files for each chunk), I can see the results for a viewpoint - they are really nice (thanks PhotoScan) - but I am not able to move in the scene: trying to move the viewpoint shows lags.

So I was thinking it would be a nice feature to be able to choose (from the preferences dialog, for example) to deactivate the textured view while moving the viewpoint. If this option was enabled, the display would switch, for example, to the sparse colored point cloud while the viewpoint moves (on mouse click, mouse wheel and so on).

Thanks again for the good work,

Jonathan
