
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - kaz

1
Hello,

I'm running Metashape on a MacBook Pro (13-inch, 2017) and on an AWS EC2 p2.xlarge instance (Ubuntu 18.04), and noticed that alignCameras on the MBP (1 sec) is much faster than on AWS (28 sec), even though the AWS instance seems to have a similar or better CPU.
All CPU cores are at 100% utilization during the computation on AWS.
GPU computation is disabled.
Logs are attached.
I also tried other EC2 instances (g3.4xlarge, p3.2xlarge, c5.2xlarge), but there was no large difference from p2.xlarge.

Does anyone know the reason for this and how to improve performance on AWS?

Specs:
        CPU                                 Memory
MBP     Core i5-7360U 2.3 GHz, dual core    16 GB 2133 MHz LPDDR3
AWS     Xeon E5-2686 v4 2.3 GHz, 4 cores    61 GB
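
In case it helps to reproduce, a small stdlib-only snippet like the following could be added to the timing script below to print the host's CPU details next to the timings (nothing Metashape-specific here):

Code:
import os
import platform

# Print basic host details so timings from the two machines are comparable.
print('platform:', platform.platform())
print('cpu     :', platform.processor() or 'unknown')
print('cores   :', os.cpu_count())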

Optimization becomes slower after line 124 (log_aws.txt) on AWS.
Code:
111 adding camera 9 (10 of 25), 31 of 48 used
112 adding 48 points, 4 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
113 adjusting: xxxxxxxxxx 0.374631 -> 0.363007
114 adding 1 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
115 adjusting: xxxxxxxxxx 0.383473 -> 0.379003
116 adding 0 points, 1 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
117 optimized in 0.05614 seconds
118 adding camera 10 (11 of 25), 43 of 53 used
119 adding 60 points, 3 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
120 adjusting: xxxxxxxxxx 0.377789 -> 0.368868
121 adding 1 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
122 adjusting: xxxxxxxxx 0.369398 -> 0.369248
123 adding 0 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
124 optimized in 1.16628 seconds
125 adding camera 11 (12 of 25), 40 of 44 used
126 adding camera 12 (13 of 25), 35 of 46 used
127 adding 109 points, 2 far (3 threshold), 0 inaccurate, 1 invisible, 0 weak
128 adjusting: xxxxxxxxxx 0.371148 -> 0.355195
129 adding 3 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
130 adjusting: xxxxxxxx 0.356127 -> 0.355829
131 adding 0 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
132 optimized in 1.34306 seconds

Code:
import time
from glob import glob
import Metashape

# Images used for the test run.
filenames = glob('./images/image*.JPG')

chunk = Metashape.Document().addChunk()
chunk.addPhotos(filenames)

# Time feature matching (note: %d truncates the elapsed time to whole seconds).
start_time = time.time()
chunk.matchPhotos(accuracy=Metashape.LowAccuracy, generic_preselection=True, reference_preselection=False)
print('\n******************************\n')
print('matchPhotos: %d sec' % (time.time()-start_time))
print('\n******************************\n')

# Time camera alignment.
start_time = time.time()
chunk.alignCameras(adaptive_fitting=False)
print('\n******************************\n')
print('alignCameras: %d sec' % (time.time()-start_time))
print('\n******************************\n')

Any comments would be appreciated.

Thank you,
Kaz

2
Python and Java API / alignCameras() produces different results
« on: May 14, 2019, 07:37:48 AM »
Hello,

I found that alignCameras() run from a script sometimes produces different results (a different number of points in point_cloud) from the same data (the same 50 images).
The logs after "Aligning groups by 4771 points" look different.
Why does this occur?

Running from the GUI stably produces the same result (it is the same as log 1).

Thanks,
Kaz

#code
Code:
from glob import glob
import Metashape

Metashape.app.gpu_mask = 1

# The same 50 images are loaded each time (path here is just a placeholder).
filenames = glob('./images/*.JPG')

chunk = Metashape.Document().addChunk()
chunk.addPhotos(filenames)
chunk.matchPhotos(accuracy=Metashape.MediumAccuracy, generic_preselection=True, reference_preselection=False)
chunk.alignCameras()
print(chunk.point_cloud)

chunk.buildDepthMaps(quality=Metashape.MediumQuality, filter=Metashape.MildFiltering)
chunk.buildDenseCloud()
print(chunk.dense_cloud)
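
To compare runs more directly, I could append a check like this after alignCameras() to count how many cameras were actually aligned (a rough sketch; I'm assuming camera.transform is None for cameras that were not aligned):

Code:
# Count the cameras that received an alignment in this run.
aligned = [c for c in chunk.cameras if c.transform is not None]
print('aligned %d of %d cameras, tie points: %s' % (len(aligned), len(chunk.cameras), chunk.point_cloud))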

# Script log 1 (<PointCloud '22837 points'>), well aligned
Code:
AddPhotos
MatchPhotos: accuracy = Medium, preselection = generic, keypoint limit = 40000, tiepoint limit = 4000, apply masks = 0, filter tie points = 0
Using device: GeForce GTX 1080 Ti, 28 compute units, free memory: 10368/11177 MB, compute capability 6.1
  driver/runtime CUDA: 10000/5050
  max work group size 1024
  max work item sizes [1024, 1024, 64]
...
...
...
adding 12948 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
adding camera 0 (25 of 50), 1130 of 1156 used
adding camera 0 (26 of 50), 919 of 967 used
adding camera 0 (27 of 50), 858 of 938 used
adding 1366 points, 20 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
2 blocks: 26 24
calculating match counts... done in 9.5e-05 sec
Aligning groups by 4771 points
iteration 0: 3747 points, 0.140811 error
iteration 1: 3640 points, 0.141818 error
iteration 2: 3632 points, 0.14192 error
iteration 3: 3630 points, 0.141945 error
iteration 4: 3630 points, 0.141945 error
groups 0 and 1: 3630 robust from 4771
overlapping groups selected in 0.001289 sec
scheduled 1 merging groups
block: 2 sensors, 50 cameras, 26018 points, 113884 projections
block_sensors: 0.00160217 MB (0.00160217 MB allocated)
block_cameras: 0.019455 MB (0.0202332 MB allocated)
block_points: 1.19101 MB (1.5 MB allocated)
block_tracks: 0.0992508 MB (0.0992508 MB allocated)
block_obs: 2.17216 MB (2.17216 MB allocated)
block_ofs: 0.198509 MB (0.198509 MB allocated)
block_fre: 0 MB (0 MB allocated)
adding 25833 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxxxxxxxx 0.272373 -> 0.217871
adding 38 points, 2 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxxxxxxxx 0.220065 -> 0.219735
adding 1 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxxxxxxxx 0.220215 -> 0.220177
adding 0 points, 1 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
optimized in 7.08525 seconds
f 1664.76, cx -35.5362, cy 19.7938, k1 0.113896, k2 -0.655136, k3 1.05925
f 1655.3, cx -43.955, cy -9.15457, k1 0.126255, k2 -0.719508, k3 1.07979
adjusting: xxxxxxxxxxxxxxxxxxxx 2.98004 -> 0.221133
loaded projections in 4e-06 sec
tracks initialized in 0.009187 sec
adding 25868 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
1 blocks: 50
calculating match counts... done in 3e-06 sec
overlapping groups selected in 1e-06 sec
1 blocks: 50
final block size: 50
adding 25868 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
3 sigma filtering...
adjusting: xxxxxxxxxxxxxxx 0.222925 -> 0.220699
point variance: 0.252345 threshold: 0.757034
adding 0 points, 895 far (0.757034 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxxx 0.164334 -> 0.161165
point variance: 0.182503 threshold: 0.547508
adding 4 points, 901 far (0.547508 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxx 0.142501 -> 0.141082
point variance: 0.158329 threshold: 0.474987
adding 12 points, 626 far (0.474987 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxx 0.132058 -> 0.131368
point variance: 0.146501 threshold: 0.439504
adding 15 points, 412 far (0.439504 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxx 0.126408 -> 0.126143
point variance: 0.140155 threshold: 0.420466
adding 21 points, 249 far (0.420466 threshold), 0 inaccurate, 0 invisible, 0 weak
optimized in 6.70875 seconds
coordinates applied in 0 sec
<PointCloud '22837 points'>
BuildDepthMaps: quality = Medium, depth filtering = Mild, reuse depth maps
Using device: GeForce GTX 1080 Ti, 28 compute units, free memory: 10376/11177 MB, compute capability 6.1
  driver/runtime CUDA: 10000/5050
  max work group size 1024
  max work item sizes [1024, 1024, 64]
Using CUDA device 'GeForce GTX 1080 Ti' in concurrent. (2 times)
sorting point cloud... done in 0.000219 sec
processing matches... done in 0.006638 sec
initializing...
selected 50 cameras from 50 in 0.176025 sec
...
...
...
<DenseCloud '580288 points'>

# Script log 2 (<PointCloud '12557 points'>), not well aligned
Code:
AddPhotos
MatchPhotos: accuracy = Medium, preselection = generic, keypoint limit = 40000, tiepoint limit = 4000, apply masks = 0, filter tie points = 0
Using device: GeForce GTX 1080 Ti, 28 compute units, free memory: 10377/11177 MB, compute capability 6.1
  driver/runtime CUDA: 10000/5050
  max work group size 1024
  max work item sizes [1024, 1024, 64]
...
...
...
adding 12948 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
adding camera 0 (25 of 50), 1130 of 1156 used
adding camera 0 (26 of 50), 919 of 967 used
adding camera 0 (27 of 50), 858 of 938 used
adding 1366 points, 20 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
2 blocks: 26 24
calculating match counts... done in 0.000678 sec
Aligning groups by 4771 points
groups 0 and 1: 0 robust from 4771
groups 1 and 0: 0 robust from 4771
overlapping groups selected in 0.020549 sec
2 blocks: 26 24
final block size: 26
adding 14083 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
3 sigma filtering...
adjusting: xxxxxxxxxxxxxxxxxxxx 0.201584 -> 0.192206
point variance: 0.219569 threshold: 0.658706
adding 0 points, 476 far (0.658706 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxx 0.142129 -> 0.14069
point variance: 0.159395 threshold: 0.478184
adding 0 points, 480 far (0.478184 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxx 0.126026 -> 0.125444
point variance: 0.140933 threshold: 0.422798
adding 4 points, 302 far (0.422798 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxx 0.118887 -> 0.118649
point variance: 0.132522 threshold: 0.397566
adding 8 points, 184 far (0.397566 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxx 0.11509 -> 0.114924
point variance: 0.127948 threshold: 0.383845
adding 11 points, 107 far (0.383845 threshold), 0 inaccurate, 0 invisible, 0 weak
optimized in 6.02422 seconds
coordinates applied in 0 sec
<PointCloud '12557 points'>
BuildDepthMaps: quality = Medium, depth filtering = Mild, reuse depth maps
Using device: GeForce GTX 1080 Ti, 28 compute units, free memory: 10384/11177 MB, compute capability 6.1
  driver/runtime CUDA: 10000/5050
  max work group size 1024
  max work item sizes [1024, 1024, 64]
Using CUDA device 'GeForce GTX 1080 Ti' in concurrent. (2 times)
sorting point cloud... done in 0.000126 sec
processing matches... done in 0.003345 sec
initializing...
selected 26 cameras from 26 in 0.118217 sec
...
...
...
<DenseCloud '449853 points'>

# GUI log (<PointCloud '23218 points'>), well aligned
Code:
Matching photos...
Detecting points...
Using device: GeForce GTX 1080 Ti, 28 compute units, free memory: 10523/11177 MB, compute capability 6.1
  driver/runtime CUDA: 10000/5050
  max work group size 1024
  max work item sizes [1024, 1024, 64]
...
...
...
adding 14387 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
adding camera 0 (27 of 50), 1144 of 1180 used
adding camera 0 (28 of 50), 982 of 1027 used
adding camera 0 (29 of 50), 690 of 733 used
adding 1486 points, 10 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
2 blocks: 24 26
calculating match counts... done in 0.000115 sec
Aligning groups by 4271 points
iteration 0: 2331 points, 0.0713392 error
iteration 1: 2201 points, 0.0715029 error
iteration 2: 2184 points, 0.0715415 error
iteration 3: 2184 points, 0.0715415 error
iteration 4: 2184 points, 0.0715415 error
groups 0 and 1: 2184 robust from 4271
overlapping groups selected in 0.001426 sec
scheduled 1 merging groups
block: 2 sensors, 50 cameras, 26506 points, 113797 projections
block_sensors: 0.00160217 MB (0.00160217 MB allocated)
block_cameras: 0.019455 MB (0.019455 MB allocated)
block_points: 1.21335 MB (1.5 MB allocated)
block_tracks: 0.101112 MB (0.101112 MB allocated)
block_obs: 2.17051 MB (2.17051 MB allocated)
block_ofs: 0.202232 MB (0.202232 MB allocated)
block_fre: 0 MB (0 MB allocated)
adding 26231 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxxxxxxxx 0.276145 -> 0.227149
adding 35 points, 3 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxxxxxxxx 0.22649 -> 0.226138
adding 0 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
optimized in 4.72872 seconds
f 1657.45, cx -41.4175, cy -3.9976, k1 0.14099, k2 -0.793362, k3 1.21134
f 1668.01, cx -39.3074, cy 28.0903, k1 0.11177, k2 -0.627066, k3 1.00088
adjusting: xxxxxxxxxxxxxxxxx 3.20788 -> 0.227533
loaded projections in 6.2e-05 sec
tracks initialized in 0.008128 sec
adding 26264 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
1 blocks: 50
calculating match counts... done in 2e-06 sec
overlapping groups selected in 1e-06 sec
1 blocks: 50
final block size: 50
adding 26264 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
3 sigma filtering...
adjusting: xxxxxxxxxxxxxxx 0.230273 -> 0.227857
point variance: 0.258924 threshold: 0.776772
adding 0 points, 942 far (0.776772 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxx 0.168736 -> 0.165978
point variance: 0.18678 threshold: 0.560341
adding 0 points, 932 far (0.560341 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxx 0.146487 -> 0.145254
point variance: 0.161896 threshold: 0.485687
adding 7 points, 613 far (0.485687 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxx 0.136615 -> 0.136071
point variance: 0.150833 threshold: 0.452498
adding 16 points, 363 far (0.452498 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxx
...
...
...

3
Python and Java API / GPU is recognized, but not used for computation
« on: March 28, 2019, 04:39:56 PM »
Hello,

I'm using Metashape Pro (trial mode) and trying out GPU acceleration.
The NVIDIA driver and CUDA look successfully installed, and Metashape recognizes the GPU.
However, the console logs show that the GPU is not used for computation.

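In case I'm just missing a switch: would explicitly setting the GPU mask before processing be the right approach? A minimal sketch (assuming gpu_mask is a per-device bitmask, with bit 0 corresponding to the first device returned by enumGPUDevices()):

Code:
import Metashape

# Sketch: explicitly enable the first (and only) GPU device before processing.
Metashape.app.gpu_mask = 1
print(Metashape.app.gpu_mask, Metashape.app.enumGPUDevices())
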
Code:
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Fri_Feb__8_19:08:17_PST_2019
Cuda compilation tools, release 10.1, V10.1.105
$ nvidia-smi
Thu Mar 28 13:16:02 2019       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.40.04    Driver Version: 418.40.04    CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           On   | 00000000:00:1E.0 Off |                    0 |
| N/A   31C    P8    24W / 149W |     11MiB / 11441MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
$ lspci | grep -i nvidia
00:1e.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1)
$ python3
Python 3.6.7 (default, Oct 22 2018, 11:32:17)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import Metashape
>>> from glob import glob
>>> app = Metashape.app
>>> print(app.activated)
True
>>> print(app.cpu_enable)
False
>>> print(app.enumGPUDevices())
[{'name': 'Tesla K80', 'vendor': '', 'version': '', 'compute_units': 13, 'mem_size': 11996954624, 'clock': 823, 'pci_bus_id': 0, 'pci_device_id': 30}]
>>> print(app.document)
None
>>> chunk = Metashape.Document().addChunk()
>>> chunk.addPhotos(glob('/home/ubuntu/images/*.JPG'))
AddPhotos
>>> chunk.matchPhotos(accuracy=Metashape.HighAccuracy, generic_preselection=True, reference_preselection=False)
MatchPhotos: accuracy = High, preselection = generic, keypoint limit = 40000, tiepoint limit = 4000, apply masks = 0, filter tie points = 0
[CPU] photo 0: 40000 points
[CPU] photo 1: 40000 points
[CPU] photo 2: 40000 points
[CPU] photo 3: 40000 points
[CPU] photo 4: 40000 points
[CPU] photo 5: 40000 points
[CPU] photo 6: 40000 points
[CPU] photo 7: 40000 points
[CPU] photo 8: 40000 points
[CPU] photo 9: 40000 points
points detected in 16.3362 sec
8464 matches found in 2.15852 sec
matches combined in 0.001021 sec
filtered 565 out of 5121 matches (11.033%) in 0.010804 sec
45 pairs selected in 1.2e-05 sec
setting point indices... 1632 done in 0.000264 sec
setting point indices... 1760 done in 0.000292 sec
setting point indices... 1754 done in 0.000301 sec
setting point indices... 1759 done in 0.000282 sec
29 skeletal pairs selected in 0.001752 sec
37272 matches found in 24.5823 sec
matches combined in 0.004737 sec
filtered 1800 out of 22770 matches (7.90514%) in 0.050163 sec
finished matching in 43.1466 sec
setting point indices... 14399 done in 0.002991 sec
generated 14399 tie points, 2.36662 average projections
removed 74 multiple indices
removed 7 tracks
selected 13738 tracks out of 14392 in 0.001161 sec
>>> chunk.alignCameras()
AlignCameras: adaptive fitting = 0
processing matches... done in 0.002644 sec
selecting camera groups... done in 0.000167 sec
processing block: 10 photos
pair 3 and 4: 2754 robust from 3045
adding photos 3 and 4 (2754 robust)
adding 3045 points, 0 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxx 0.501094 -> 0.426675
adding 0 points, 0 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
optimized in 0.077249 seconds
adding camera 2 (3 of 10), 810 of 827 used
adding camera 1 (4 of 10), 504 of 567 used
adding 3435 points, 18 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxx 0.939181 -> 0.407846
adding 18 points, 0 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxx 0.417938 -> 0.417659
adding 0 points, 0 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
optimized in 0.329659 seconds
adding camera 0 (5 of 10), 1130 of 1146 used
adding 1748 points, 6 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxx 0.46897 -> 0.39034
adding 5 points, 0 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxx- 0.393588 -> 0.393544
adding 0 points, 0 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
optimized in 0.359436 seconds
adding camera 5 (6 of 10), 352 of 403 used
adding 425 points, 15 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxx 0.509356 -> 0.404971
adding 14 points, 0 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxx 0.424025 -> 0.422606
adding 0 points, 0 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
optimized in 0.415614 seconds
adding camera 6 (7 of 10), 217 of 233 used
adding 913 points, 6 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxx 0.463106 -> 0.43043
adding 5 points, 0 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxx 0.432917 -> 0.432899
adding 0 points, 0 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
optimized in 0.36593 seconds
adding camera 7 (8 of 10), 279 of 297 used
adding camera 8 (9 of 10), 192 of 208 used
adding 2715 points, 8 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxx 0.526935 -> 0.452374
adding 8 points, 0 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxx- 0.457398 -> 0.457251
adding 1 points, 0 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxx 0.461273 -> 0.461001
adding 0 points, 0 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
optimized in 0.754189 seconds
adding camera 9 (10 of 10), 614 of 641 used
adding 1425 points, 13 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxx 0.562741 -> 0.504114
adding 18 points, 0 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxx 0.532952 -> 0.528803
adding 4 points, 0 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxx 0.541485 -> 0.540656
adding 0 points, 0 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
optimized in 0.961424 seconds
3 sigma filtering...
adjusting: xxxxxxxxxxxxxxxxxxxx 0.540713 -> 0.517997
point variance: 0.444362 threshold: 1.33309
adding 0 points, 273 far (1.33309 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxxxxxxxx 0.377631 -> 0.326831
point variance: 0.279026 threshold: 0.837077
adding 36 points, 344 far (0.837077 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxxxxxxxx 0.276296 -> 0.267695
point variance: 0.227315 threshold: 0.681944
adding 2 points, 280 far (0.681944 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxxxxxx- 0.240279 -> 0.237732
point variance: 0.200783 threshold: 0.60235
adding 1 points, 203 far (0.60235 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxxxx 0.224382 -> 0.223851
point variance: 0.188153 threshold: 0.564458
adding 2 points, 124 far (0.564458 threshold), 0 inaccurate, 0 invisible, 0 weak
optimized in 11.4174 seconds
f 3305.42, cx 11.7488, cy 1.92097, k1 0.0181888, k2 -0.128515, k3 0.177712
finished sfm in 14.9392 seconds
loaded projections in 4e-06 sec
tracks initialized in 0.003351 sec
adding 13698 points, 0 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
1 blocks: 10
calculating match counts... done in 2.3e-05 sec
overlapping groups selected in 2e-06 sec
1 blocks: 10
final block size: 10
adding 13698 points, 0 far (7.056 threshold), 0 inaccurate, 0 invisible, 0 weak
3 sigma filtering...
adjusting: xxxxxxxxxxxxxxxxxxxx 0.56243 -> 0.486996
point variance: 0.417775 threshold: 1.25333
adding 0 points, 263 far (1.25333 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxxxxxxxx 0.35271 -> 0.321258
point variance: 0.274236 threshold: 0.822708
adding 18 points, 343 far (0.822708 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxxxxxxxx 0.273322 -> 0.264858
point variance: 0.224883 threshold: 0.67465
adding 3 points, 277 far (0.67465 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxxxxx 0.238415 -> 0.236211
point variance: 0.199409 threshold: 0.598227
adding 2 points, 191 far (0.598227 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxxx 0.223951 -> 0.223556
point variance: 0.187885 threshold: 0.563655
adding 1 points, 117 far (0.563655 threshold), 0 inaccurate, 0 invisible, 0 weak
optimized in 10.752 seconds
coordinates applied in 0 sec
>>> chunk.buildDepthMaps(quality=Metashape.MediumQuality, filter=Metashape.AggressiveFiltering)
BuildDepthMaps: quality = Medium, depth filtering = Aggressive, reuse depth maps
sorting point cloud... done in 0.000145 sec
processing matches... done in 0.002121 sec
initializing...
selected 10 cameras from 10 in 0.071193 sec
loaded photos in 0.664424 seconds
[CPU] estimating 722x922x128 disparity using 722x922x8u tiles
timings: rectify: 0.09905 disparity: 2.27298 borders: 0.086345 filter: 0.089302 fill: 1e-06
[CPU] estimating 730x962x160 disparity using 730x962x8u tiles
timings: rectify: 0.104356 disparity: 2.49192 borders: 0.069019 filter: 0.0737 fill: 3e-06
[CPU] estimating 764x798x128 disparity using 764x798x8u tiles
timings: rectify: 0.08933 disparity: 2.10999 borders: 0.057944 filter: 0.064444 fill: 0
[CPU] estimating 761x779x160 disparity using 761x779x8u tiles
timings: rectify: 0.088414 disparity: 2.0863 borders: 0.049731 filter: 0.059432 fill: 3e-06
[CPU] estimating 764x798x128 disparity using 764x798x8u tiles
timings: rectify: 0.091295 disparity: 2.09426 borders: 0.058651 filter: 0.064256 fill: 1e-06
[CPU] estimating 756x791x128 disparity using 756x791x8u tiles
timings: rectify: 0.088436 disparity: 2.05294 borders: 0.058272 filter: 0.063434 fill: 1e-06
[CPU] estimating 834x1412x96 disparity using 834x1412x8u tiles
timings: rectify: 0.173333 disparity: 3.88662 borders: 0.121873 filter: 0.127906 fill: 2e-06
[CPU] estimating 751x954x96 disparity using 751x954x8u tiles
timings: rectify: 0.103894 disparity: 2.30994 borders: 0.078245 filter: 0.083853 fill: 1e-06
[CPU] estimating 730x962x160 disparity using 730x962x8u tiles
timings: rectify: 0.104105 disparity: 2.44989 borders: 0.07728 filter: 0.081361 fill: 1e-06
[CPU] estimating 715x1054x128 disparity using 715x1054x8u tiles
timings: rectify: 0.11121 disparity: 2.56873 borders: 0.092414 filter: 0.095186 fill: 0
[CPU] estimating 834x1412x96 disparity using 834x1412x8u tiles
timings: rectify: 0.174806 disparity: 3.97186 borders: 0.122027 filter: 0.128461 fill: 2e-06
[CPU] estimating 817x1109x128 disparity using 817x1109x8u tiles
timings: rectify: 0.132742 disparity: 3.1369 borders: 0.078658 filter: 0.08608 fill: 2e-06
[CPU] estimating 722x922x128 disparity using 722x922x8u tiles
timings: rectify: 0.099858 disparity: 2.22933 borders: 0.085556 filter: 0.088087 fill: 0
[CPU] estimating 715x1054x128 disparity using 715x1054x8u tiles
timings: rectify: 0.111225 disparity: 2.60386 borders: 0.092611 filter: 0.095737 fill: 1e-06
[CPU] estimating 756x791x128 disparity using 756x791x8u tiles
timings: rectify: 0.087268 disparity: 2.05475 borders: 0.065156 filter: 0.069457 fill: 1e-06
[CPU] estimating 770x783x128 disparity using 770x783x8u tiles
timings: rectify: 0.08759 disparity: 2.10721 borders: 0.057979 filter: 0.062992 fill: 1e-06
[CPU] estimating 685x569x192 disparity using 685x569x8u tiles
timings: rectify: 0.05822 disparity: 1.36695 borders: 0.027094 filter: 0.033829 fill: 1e-06
[CPU] estimating 693x612x160 disparity using 693x612x8u tiles
timings: rectify: 0.064522 disparity: 1.49832 borders: 0.035022 filter: 0.040941 fill: 1e-06
[CPU] estimating 686x816x160 disparity using 686x816x8u tiles
timings: rectify: 0.083026 disparity: 1.98471 borders: 0.044152 filter: 0.050145 fill: 1e-06
[CPU] estimating 690x903x160 disparity using 690x903x8u tiles
timings: rectify: 0.093601 disparity: 2.21271 borders: 0.048712 filter: 0.055036 fill: 0
[CPU] estimating 683x910x160 disparity using 683x910x8u tiles
timings: rectify: 0.092818 disparity: 2.19315 borders: 0.051369 filter: 0.057003 fill: 1e-06
[CPU] estimating 787x897x128 disparity using 787x897x8u tiles
timings: rectify: 0.103347 disparity: 2.40656 borders: 0.067949 filter: 0.071702 fill: 0
[CPU] estimating 725x752x160 disparity using 725x752x8u tiles
timings: rectify: 0.07993 disparity: 1.85393 borders: 0.051447 filter: 0.056054 fill: 0
[CPU] estimating 698x753x160 disparity using 698x753x8u tiles
timings: rectify: 0.076466 disparity: 1.82075 borders: 0.046665 filter: 0.051096 fill: 1e-06
[CPU] estimating 762x779x160 disparity using 762x779x8u tiles
timings: rectify: 0.089792 disparity: 2.03182 borders: 0.046233 filter: 0.053356 fill: 1e-06
[CPU] estimating 770x783x128 disparity using 770x783x8u tiles
timings: rectify: 0.088999 disparity: 2.07068 borders: 0.053223 filter: 0.058585 fill: 1e-06
[CPU] estimating 818x1109x128 disparity using 818x1109x8u tiles
timings: rectify: 0.132312 disparity: 3.04826 borders: 0.077317 filter: 0.086092 fill: 1e-06
[CPU] estimating 751x954x96 disparity using 751x954x8u tiles
timings: rectify: 0.105255 disparity: 2.30889 borders: 0.087121 filter: 0.090969 fill: 0
[CPU] estimating 682x910x160 disparity using 682x910x8u tiles
timings: rectify: 0.093826 disparity: 2.19117 borders: 0.051896 filter: 0.057245 fill: 1e-06
[CPU] estimating 713x716x160 disparity using 713x716x8u tiles
timings: rectify: 0.075973 disparity: 1.8616 borders: 0.043468 filter: 0.049295 fill: 3e-06
[CPU] estimating 787x897x128 disparity using 787x897x8u tiles
timings: rectify: 0.103096 disparity: 2.37235 borders: 0.075299 filter: 0.080032 fill: 0
[CPU] estimating 730x922x128 disparity using 730x922x8u tiles
timings: rectify: 0.097996 disparity: 2.22938 borders: 0.072798 filter: 0.079778 fill: 1e-06
[CPU] estimating 735x953x160 disparity using 735x953x8u tiles
timings: rectify: 0.101413 disparity: 2.36388 borders: 0.069506 filter: 0.073967 fill: 0
[CPU] estimating 700x894x160 disparity using 700x894x8u tiles
timings: rectify: 0.0931 disparity: 2.15467 borders: 0.058254 filter: 0.063525 fill: 0
[CPU] estimating 726x752x160 disparity using 726x752x8u tiles
timings: rectify: 0.08176 disparity: 1.83914 borders: 0.044956 filter: 0.050194 fill: 2e-06
[CPU] estimating 730x922x128 disparity using 730x922x8u tiles
timings: rectify: 0.099374 disparity: 2.23343 borders: 0.076625 filter: 0.081027 fill: 0
[CPU] estimating 792x1145x128 disparity using 792x1145x8u tiles
timings: rectify: 0.132924 disparity: 3.037 borders: 0.106715 filter: 0.109799 fill: 1e-06
[CPU] estimating 749x1120x128 disparity using 749x1120x8u tiles
timings: rectify: 0.123673 disparity: 2.83131 borders: 0.093842 filter: 0.097428 fill: 2e-06
[CPU] estimating 701x894x160 disparity using 701x894x8u tiles
timings: rectify: 0.093229 disparity: 2.16211 borders: 0.049242 filter: 0.059221 fill: 1e-06
[CPU] estimating 749x1120x128 disparity using 749x1120x8u tiles
timings: rectify: 0.125167 disparity: 2.8781 borders: 0.090737 filter: 0.095708 fill: 2e-06
[CPU] estimating 742x1193x128 disparity using 742x1193x8u tiles
timings: rectify: 0.13196 disparity: 2.92279 borders: 0.103698 filter: 0.107726 fill: 1e-06
[CPU] estimating 699x753x160 disparity using 699x753x8u tiles
timings: rectify: 0.077672 disparity: 1.80912 borders: 0.045842 filter: 0.050818 fill: 2e-06
[CPU] estimating 735x953x160 disparity using 735x953x8u tiles
timings: rectify: 0.103416 disparity: 2.3859 borders: 0.066919 filter: 0.071199 fill: 1e-06
[CPU] estimating 792x1145x128 disparity using 792x1145x8u tiles
timings: rectify: 0.134466 disparity: 3.03424 borders: 0.099231 filter: 0.103082 fill: 0
[CPU] estimating 743x1193x128 disparity using 743x1193x8u tiles
timings: rectify: 0.131829 disparity: 2.87018 borders: 0.097419 filter: 0.100467 fill: 1e-06

Depth reconstruction devices performance:
 - 100% done by CPU
Total time: 128.229 seconds

>>> chunk.buildDenseCloud()
BuildDenseCloud: point colors = 1
selected 10 cameras in 0.131688 sec
working volume: 1265x1109x498
tiles: 1x1x1
selected 10 cameras in 3.5e-05 sec
preloading data... done in 0.368999 sec
filtering depth maps... done in 11.8143 sec
preloading data... done in 0.53306 sec
working volume: 1265x1109x498
tiles: 1x1x1
1907032 points extracted

Best regards,
kaz

4
Dear Agisoft support team,

I started the Metashape Pro 30-day trial.
I successfully installed both the GUI and the command-line tools (please see the attached images).

However, I'm not able to import Metashape in Python.
It gives the message "No license found. Details: No license for product (-1)".

Is it not possible to use the command-line tool as a trial user?
I'd like to test the scripting functionality before purchasing.

Environment: macOS High Sierra 10.13.6

Many thanks

Best regards
kaz
