Messages - eiscar

1
Python and Java API / Re: Splitting into chunks
« on: May 13, 2020, 09:58:13 AM »
Exactly, that is what I have seen, and I would like the absolute coordinates to be expressed with respect to the same frame of reference for all chunks. Can I somehow set up a coordinate system in the original chunk and have all the copies keep the camera coordinates with respect to it?
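
For context, this is roughly how I have been comparing camera positions across chunks; a minimal sketch assuming the usual transform chain, where chunk.transform.matrix maps the chunk's internal frame to world (geocentric) coordinates:
Code:
import Metashape

doc = Metashape.app.document

# Print each aligned camera's position in world coordinates for every chunk.
# If the chunks shared one frame of reference, the same camera should report
# the same position no matter which chunk it is read from.
for chunk in doc.chunks:
    T = chunk.transform.matrix            # internal chunk frame -> world frame
    for camera in chunk.cameras:
        if camera.transform is None:      # skip cameras that were not aligned
            continue
        world = T.mulp(camera.center)     # camera centre in world coordinates
        print(chunk.label, camera.label, world)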

2
Python and Java API / Re: Splitting into chunks
« on: May 12, 2020, 05:36:37 PM »
Thank you for the suggestion, I will give it a try and report back.

As a follow-up, I would also like to know how the coordinates of the camera positions are affected when images are removed from a chunk. As I mentioned in the previous post, I import a large number of images and then just align them. After alignment I would like to split them into chunks while keeping their coordinates relative to each other, so that if I were to merge them again they would appear in the correct locations. From my experiments so far it seems that the frame of reference is different for each chunk I generate. Is there a way around this?

Best,
Eduardo

3
Python and Java API / Re: Getting keypoints for a Camera/Photo
« on: May 12, 2020, 11:55:17 AM »
Sure, you will not get the keypoint feature vector, but having the projection coordinates in each image and the associated 3D tiepoint can be very useful depending on the application.
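
To illustrate what I mean, here is a rough sketch of pulling those projections and tie points out of the sparse cloud with the 1.6 Python API (just the general pattern, names of variables are placeholders):
Code:
import Metashape

chunk = Metashape.app.document.chunk
point_cloud = chunk.point_cloud                        # sparse cloud holding the tie points
tracks = {p.track_id: p for p in point_cloud.points}   # track id -> 3D tie point

camera = chunk.cameras[0]
# Each projection links a 2D measurement in this image to a track id.
for proj in point_cloud.projections[camera]:
    point = tracks.get(proj.track_id)
    if point is None or not point.valid:               # some tracks have no valid 3D point
        continue
    print(proj.coord, point.coord)                     # 2D pixel coordinates, homogeneous 3D coordinates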

Best,
Eduardo

4
Python and Java API / Splitting into chunks
« on: May 11, 2020, 06:17:16 PM »
Hi,

I have a large project with images coming from different sources. I process all the images together and would now like to move each set of images into a separate chunk while keeping their alignment and sparse point cloud information. I am currently achieving this by generating a copy of the large source chunk for each group of images and then removing all images that are not in the given set from it. This works fine for smaller projects, but for larger projects just copying the chunk takes more than 5 minutes per set, and there are many...
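
For reference, the splitting currently looks roughly like this (a minimal sketch; the group definitions are just placeholder examples):
Code:
import Metashape

doc = Metashape.app.document
source = doc.chunk                        # the large chunk with all aligned images

# Hypothetical example: map a chunk label to the image labels it should keep.
groups = {"set_A": {"IMG_0001", "IMG_0002"}, "set_B": {"IMG_0101", "IMG_0102"}}

for label, keep in groups.items():
    new_chunk = source.copy()             # copying the whole chunk is the slow step
    new_chunk.label = label
    # remove every camera that does not belong to this set
    new_chunk.remove([cam for cam in new_chunk.cameras if cam.label not in keep])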

Is there any more efficient way to accomplish what I want?

Thanks a lot,

Best,
Eduardo 

5
Python and Java API / Re: Getting keypoints for a Camera/Photo
« on: May 11, 2020, 05:53:14 PM »
You could export the project cameras to Bundler format and then obtain the keypoints from there. The format is open, human-readable, and documented here: https://www.cs.cornell.edu/~snavely/bundler/bundler-v0.4-manual.html
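
If it helps, reading the keypoint observations back out of the exported bundle.out file is straightforward; a rough sketch assuming the documented v0.3 layout (five header lines per camera, three lines per point, with the view list on the third line):
Code:
def read_bundler_views(path):
    """Return, for each 3D point, its position and the list of
    (camera_index, keypoint_index, x, y) observations."""
    with open(path) as f:
        f.readline()                                   # "# Bundle file v0.3" header line
        num_cameras, num_points = map(int, f.readline().split())
        for _ in range(num_cameras * 5):               # skip the 5-line camera blocks
            f.readline()
        result = []
        for _ in range(num_points):
            position = tuple(map(float, f.readline().split()))
            f.readline()                               # RGB colour line, not needed here
            view = f.readline().split()                # <n> then n * (camera key x y)
            obs = [(int(view[1 + 4 * i]), int(view[2 + 4 * i]),
                    float(view[3 + 4 * i]), float(view[4 + 4 * i]))
                   for i in range(int(view[0]))]
            result.append((position, obs))
    return result
Note that, according to the manual, the x/y values in the view list are given relative to the image centre, not the top-left corner.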

6
Bug Reports / Re: Can't save project due to missing thumbnails.zip
« on: May 11, 2020, 09:58:58 AM »
Removing the thumbnails from the chunk and then saving worked. Thanks a lot.

Best,
Eduardo

7
Bug Reports / Can't save project due to missing thumbnails.zip
« on: May 08, 2020, 10:55:10 AM »
Hi all,

I have been working on a rather large project over the last week and have been saving it regularly without a problem. However, last night when I tried to save I encountered the following error message:
Code:
Error: Can't open file: No such file or directory (2): path/to/project/metashape/project.files/0/0/thumbnails/thumbnails.zip


I have checked the folder and it is indeed empty. I can only assume that something went wrong in Metashape and the file got deleted, because I did not access the folder at any point. At one point, however, I added the same images twice to the project; maybe that had something to do with it. I am using Metashape Professional, version 1.6.2 build 10247, on a machine running Ubuntu 18.04.

I would really hate to lose all the work that has gone into this project so far. Is there a way to force Metashape to regenerate the thumbnails file, or to simply ignore the error and save the project as is? Maybe an export/import trick to work around it? I have only aligned the images so far; the project contains no dense point cloud or meshes.

Any ideas are appreciated, thanks!

Best,
Eduardo

8
Bug Reports / Re: GPU error CUDA_ERROR_OUT_OF_MEMORY (2) at line 123
« on: August 19, 2018, 09:02:40 PM »
Looking at the output of the nvidia-smi command, I found that two of the GPUs were being used by another process. Disabling those GPUs through the GPU mask solved the out-of-memory error. The reconstruction is now running and seems to be working on the other two GPUs, but I have seen a few "Warning: cudaStreamDestroy failed: out of memory (2)" warnings pop up.
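
For completeness, this is roughly how the mask can be set from the Python API on this 1.4.x setup; the bitmask value below is just an example (bit n enables device n in the order the devices are enumerated):
Code:
import PhotoScan   # this project is still on 1.4.x; newer releases use the Metashape module

# List the devices the application sees; the list order matches the mask bits.
for i, device in enumerate(PhotoScan.app.enumGPUDevices()):
    print(i, device)

# Keep only GPUs 0 and 1 enabled (bits 0 and 1 set); the two GPUs that the other
# process is occupying stay disabled.
PhotoScan.app.gpu_mask = 0b0011
PhotoScan.app.cpu_enable = False   # commonly recommended when discrete GPUs handle matching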

Just in case, the driver version is 396.37.

9
Bug Reports / GPU error CUDA_ERROR_OUT_OF_MEMORY (2) at line 123
« on: August 19, 2018, 01:28:43 AM »
Hi,

I am running PhotoScan through the Python API and trying to process some images acquired with a Mavic drone. The script has worked without issues on other datasets from different cameras, but fails with the Mavic images with a CUDA_ERROR_OUT_OF_MEMORY (2) at line 123 message. I have tried reducing the image resolution to a quarter, but the error persists. I am using version 1.4.3, build 6529.

I have attached the complete error message:

Code:
MatchPhotos: accuracy = High, preselection = generic, reference, keypoint limit = 40000, tiepoint limit = 4000, apply masks = 0, filter tie points = 0
Using device: TITAN V, 80 compute units, 12066 MB global memory, compute capability 7.0
  driver version: 9020, runtime version: 5050
  max work group size 1024
  max work item sizes [1024, 1024, 64]
Using device: TITAN V, 80 compute units, 12066 MB global memory, compute capability 7.0
  driver version: 9020, runtime version: 5050
  max work group size 1024
  max work item sizes [1024, 1024, 64]
Using device: TITAN V, 80 compute units, 12066 MB global memory, compute capability 7.0
  driver version: 9020, runtime version: 5050
  max work group size 1024
  max work item sizes [1024, 1024, 64]
Using device: TITAN V, 80 compute units, 12066 MB global memory, compute capability 7.0
  driver version: 9020, runtime version: 5050
  max work group size 1024
  max work item sizes [1024, 1024, 64]
[GPU] photo 1: 40000 points
[GPU] photo 3: 40000 points
Warning: cudaStreamDestroy failed: out of memory (2)
Warning: cudaStreamDestroy failed: out of memory (2)
Traceback (most recent call last):
  File "/media/bucket/tmp_code/photoscan-drono-monocular.py", line 133, in <module>
    main()
  File "/media/bucket/tmp_code/photoscan-drono-monocular.py", line 126, in main
    process(args.input_path, output_path)
  File "/media/bucket/tmp_code/photoscan-drono-monocular.py", line 67, in process
    chunk.matchPhotos(accuracy = accuracy, generic_preselection = generic_preselection, reference_preselection = reference_preselection, filter_mask = False, keypoint_limit = keypoints, tiepoint_limit = tiepoints)
RuntimeError: CUDA_ERROR_OUT_OF_MEMORY (2) at line 123


Any ideas? Thanks!
