Hi there,
I have a reconstruction that has been failing several times in a row. I set up a script that logged free disk space during the process, and I now suspect the cause: my computer is running out of disk space and Metashape crashes. However, nothing in the Python script log or the shell output indicates this.
This is the tail of the shell output:
Found 1 GPUs in 0.00085 sec (CUDA: 0.000152 sec, OpenCL: 0.000681 sec)
Using device: NVIDIA GeForce RTX 3090, 82 compute units, free memory: 23820/24267 MB, compute capability 8.6
driver/runtime CUDA: 11040/8000
max work group size 1024
max work item sizes [1024, 1024, 64]
8920568 matches found in 55.5983 sec
matches combined in 1.01701 sec
filtered 1043774 out of 4918130 matches (21.223%) in 1.83047 sec
saved matches in 0.018059 sec
loaded matching data in 0.001176 sec
loaded matching partition in 0.012572 sec
loaded keypoint partition in 0.000253 sec
loaded matches in 11.8576 sec
setting point indices... 168390005 done in 22.7045 sec
generated 168390005 tie points, 3.75358 average projections
removed 5084558 multiple indices
removed 67880 tracks
removing stationary tracks...
So my question is this: to verify my hypothesis that disk space is killing the process, is there a way in the Python API to catch an "out of space" error that is not being reported to the shell console?
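Not a definitive answer, but one workaround while waiting for a better one: if Metashape's native code dies on a full disk, that crash may never surface as a Python exception, so instead of trying to catch it after the fact you can check free space *before* each expensive step and fail fast with a clear error. Below is a minimal sketch using only the standard library; the `min_gb` threshold, the checked path, and the `run_step` function are all hypothetical placeholders for your own processing calls.

```python
import functools
import shutil

def require_free_space(path, min_gb):
    """Decorator: raise a clear error if the disk holding `path`
    has less than `min_gb` gigabytes free before the step runs."""
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            # shutil.disk_usage returns (total, used, free) in bytes
            free_gb = shutil.disk_usage(path).free / 1024**3
            if free_gb < min_gb:
                raise RuntimeError(
                    f"Only {free_gb:.1f} GB free on {path}; "
                    f"need at least {min_gb} GB before this step")
            return fn(*args, **kwargs)
        return wrapper
    return deco

# Hypothetical processing step; wrap your own chunk.matchPhotos()
# or similar calls the same way, with a realistic threshold.
@require_free_space("/", min_gb=0)
def run_step():
    return "ok"
```

You could also log `shutil.disk_usage(...).free` on a background thread during a long step, so that if the process dies you at least have a timestamped record of how close to zero the disk got.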
Cheers!