Author Topic: alignCameras() produces different results

kaz

alignCameras() produces different results
« on: May 14, 2019, 07:37:48 AM »
Hello,

I found that running alignCameras() from a script sometimes produces different results (a different number of points in point_cloud) from the same data (the same 50 images).
The logs diverge after "Aligning groups by 4771 points".
Why does this occur?

A GUI run reliably produces the same result (it's the same as log1).

Thanks,
Kaz

#code
Code:
import Metashape

Metashape.app.gpu_mask = 1  # enable only the first GPU

doc = Metashape.Document()  # keep a reference to the document so the chunk stays valid
chunk = doc.addChunk()
chunk.addPhotos(filenames)  # filenames: list of paths to the 50 images
chunk.matchPhotos(accuracy=Metashape.MediumAccuracy,
                  generic_preselection=True,
                  reference_preselection=False)
chunk.alignCameras()
print(chunk.point_cloud)  # sparse point cloud

chunk.buildDepthMaps(quality=Metashape.MediumQuality, filter=Metashape.MildFiltering)
chunk.buildDenseCloud()
print(chunk.dense_cloud)

#Script log1 (<PointCloud '22837 points'>), well aligned
Code:
AddPhotos
MatchPhotos: accuracy = Medium, preselection = generic, keypoint limit = 40000, tiepoint limit = 4000, apply masks = 0, filter tie points = 0
Using device: GeForce GTX 1080 Ti, 28 compute units, free memory: 10368/11177 MB, compute capability 6.1
  driver/runtime CUDA: 10000/5050
  max work group size 1024
  max work item sizes [1024, 1024, 64]
...
...
...
adding 12948 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
adding camera 0 (25 of 50), 1130 of 1156 used
adding camera 0 (26 of 50), 919 of 967 used
adding camera 0 (27 of 50), 858 of 938 used
adding 1366 points, 20 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
2 blocks: 26 24
calculating match counts... done in 9.5e-05 sec
Aligning groups by 4771 points
iteration 0: 3747 points, 0.140811 error
iteration 1: 3640 points, 0.141818 error
iteration 2: 3632 points, 0.14192 error
iteration 3: 3630 points, 0.141945 error
iteration 4: 3630 points, 0.141945 error
groups 0 and 1: 3630 robust from 4771
overlapping groups selected in 0.001289 sec
scheduled 1 merging groups
block: 2 sensors, 50 cameras, 26018 points, 113884 projections
block_sensors: 0.00160217 MB (0.00160217 MB allocated)
block_cameras: 0.019455 MB (0.0202332 MB allocated)
block_points: 1.19101 MB (1.5 MB allocated)
block_tracks: 0.0992508 MB (0.0992508 MB allocated)
block_obs: 2.17216 MB (2.17216 MB allocated)
block_ofs: 0.198509 MB (0.198509 MB allocated)
block_fre: 0 MB (0 MB allocated)
adding 25833 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxxxxxxxx 0.272373 -> 0.217871
adding 38 points, 2 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxxxxxxxx 0.220065 -> 0.219735
adding 1 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxxxxxxxx 0.220215 -> 0.220177
adding 0 points, 1 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
optimized in 7.08525 seconds
f 1664.76, cx -35.5362, cy 19.7938, k1 0.113896, k2 -0.655136, k3 1.05925
f 1655.3, cx -43.955, cy -9.15457, k1 0.126255, k2 -0.719508, k3 1.07979
adjusting: xxxxxxxxxxxxxxxxxxxx 2.98004 -> 0.221133
loaded projections in 4e-06 sec
tracks initialized in 0.009187 sec
adding 25868 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
1 blocks: 50
calculating match counts... done in 3e-06 sec
overlapping groups selected in 1e-06 sec
1 blocks: 50
final block size: 50
adding 25868 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
3 sigma filtering...
adjusting: xxxxxxxxxxxxxxx 0.222925 -> 0.220699
point variance: 0.252345 threshold: 0.757034
adding 0 points, 895 far (0.757034 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxxx 0.164334 -> 0.161165
point variance: 0.182503 threshold: 0.547508
adding 4 points, 901 far (0.547508 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxx 0.142501 -> 0.141082
point variance: 0.158329 threshold: 0.474987
adding 12 points, 626 far (0.474987 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxx 0.132058 -> 0.131368
point variance: 0.146501 threshold: 0.439504
adding 15 points, 412 far (0.439504 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxx 0.126408 -> 0.126143
point variance: 0.140155 threshold: 0.420466
adding 21 points, 249 far (0.420466 threshold), 0 inaccurate, 0 invisible, 0 weak
optimized in 6.70875 seconds
coordinates applied in 0 sec
<PointCloud '22837 points'>
BuildDepthMaps: quality = Medium, depth filtering = Mild, reuse depth maps
Using device: GeForce GTX 1080 Ti, 28 compute units, free memory: 10376/11177 MB, compute capability 6.1
  driver/runtime CUDA: 10000/5050
  max work group size 1024
  max work item sizes [1024, 1024, 64]
Using CUDA device 'GeForce GTX 1080 Ti' in concurrent. (2 times)
sorting point cloud... done in 0.000219 sec
processing matches... done in 0.006638 sec
initializing...
selected 50 cameras from 50 in 0.176025 sec
...
...
...
<DenseCloud '580288 points'>

#Script log2 (<PointCloud '12557 points'>), not well aligned
Code:
AddPhotos
MatchPhotos: accuracy = Medium, preselection = generic, keypoint limit = 40000, tiepoint limit = 4000, apply masks = 0, filter tie points = 0
Using device: GeForce GTX 1080 Ti, 28 compute units, free memory: 10377/11177 MB, compute capability 6.1
  driver/runtime CUDA: 10000/5050
  max work group size 1024
  max work item sizes [1024, 1024, 64]
...
...
...
adding 12948 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
adding camera 0 (25 of 50), 1130 of 1156 used
adding camera 0 (26 of 50), 919 of 967 used
adding camera 0 (27 of 50), 858 of 938 used
adding 1366 points, 20 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
2 blocks: 26 24
calculating match counts... done in 0.000678 sec
Aligning groups by 4771 points
groups 0 and 1: 0 robust from 4771
groups 1 and 0: 0 robust from 4771
overlapping groups selected in 0.020549 sec
2 blocks: 26 24
final block size: 26
adding 14083 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
3 sigma filtering...
adjusting: xxxxxxxxxxxxxxxxxxxx 0.201584 -> 0.192206
point variance: 0.219569 threshold: 0.658706
adding 0 points, 476 far (0.658706 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxx 0.142129 -> 0.14069
point variance: 0.159395 threshold: 0.478184
adding 0 points, 480 far (0.478184 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxx 0.126026 -> 0.125444
point variance: 0.140933 threshold: 0.422798
adding 4 points, 302 far (0.422798 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxx 0.118887 -> 0.118649
point variance: 0.132522 threshold: 0.397566
adding 8 points, 184 far (0.397566 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxx 0.11509 -> 0.114924
point variance: 0.127948 threshold: 0.383845
adding 11 points, 107 far (0.383845 threshold), 0 inaccurate, 0 invisible, 0 weak
optimized in 6.02422 seconds
coordinates applied in 0 sec
<PointCloud '12557 points'>
BuildDepthMaps: quality = Medium, depth filtering = Mild, reuse depth maps
Using device: GeForce GTX 1080 Ti, 28 compute units, free memory: 10384/11177 MB, compute capability 6.1
  driver/runtime CUDA: 10000/5050
  max work group size 1024
  max work item sizes [1024, 1024, 64]
Using CUDA device 'GeForce GTX 1080 Ti' in concurrent. (2 times)
sorting point cloud... done in 0.000126 sec
processing matches... done in 0.003345 sec
initializing...
selected 26 cameras from 26 in 0.118217 sec
...
...
...
<DenseCloud '449853 points'>

#GUI log (<PointCloud '23218 points'>), well aligned
Code:
Matching photos...
Detecting points...
Using device: GeForce GTX 1080 Ti, 28 compute units, free memory: 10523/11177 MB, compute capability 6.1
  driver/runtime CUDA: 10000/5050
  max work group size 1024
  max work item sizes [1024, 1024, 64]
...
...
...
adding 14387 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
adding camera 0 (27 of 50), 1144 of 1180 used
adding camera 0 (28 of 50), 982 of 1027 used
adding camera 0 (29 of 50), 690 of 733 used
adding 1486 points, 10 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
2 blocks: 24 26
calculating match counts... done in 0.000115 sec
Aligning groups by 4271 points
iteration 0: 2331 points, 0.0713392 error
iteration 1: 2201 points, 0.0715029 error
iteration 2: 2184 points, 0.0715415 error
iteration 3: 2184 points, 0.0715415 error
iteration 4: 2184 points, 0.0715415 error
groups 0 and 1: 2184 robust from 4271
overlapping groups selected in 0.001426 sec
scheduled 1 merging groups
block: 2 sensors, 50 cameras, 26506 points, 113797 projections
block_sensors: 0.00160217 MB (0.00160217 MB allocated)
block_cameras: 0.019455 MB (0.019455 MB allocated)
block_points: 1.21335 MB (1.5 MB allocated)
block_tracks: 0.101112 MB (0.101112 MB allocated)
block_obs: 2.17051 MB (2.17051 MB allocated)
block_ofs: 0.202232 MB (0.202232 MB allocated)
block_fre: 0 MB (0 MB allocated)
adding 26231 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxxxxxxxx 0.276145 -> 0.227149
adding 35 points, 3 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxxxxxxxx 0.22649 -> 0.226138
adding 0 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
optimized in 4.72872 seconds
f 1657.45, cx -41.4175, cy -3.9976, k1 0.14099, k2 -0.793362, k3 1.21134
f 1668.01, cx -39.3074, cy 28.0903, k1 0.11177, k2 -0.627066, k3 1.00088
adjusting: xxxxxxxxxxxxxxxxx 3.20788 -> 0.227533
loaded projections in 6.2e-05 sec
tracks initialized in 0.008128 sec
adding 26264 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
1 blocks: 50
calculating match counts... done in 2e-06 sec
overlapping groups selected in 1e-06 sec
1 blocks: 50
final block size: 50
adding 26264 points, 0 far (3 threshold), 0 inaccurate, 0 invisible, 0 weak
3 sigma filtering...
adjusting: xxxxxxxxxxxxxxx 0.230273 -> 0.227857
point variance: 0.258924 threshold: 0.776772
adding 0 points, 942 far (0.776772 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxxxxxx 0.168736 -> 0.165978
point variance: 0.18678 threshold: 0.560341
adding 0 points, 932 far (0.560341 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxxxx 0.146487 -> 0.145254
point variance: 0.161896 threshold: 0.485687
adding 7 points, 613 far (0.485687 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxxxx 0.136615 -> 0.136071
point variance: 0.150833 threshold: 0.452498
adding 16 points, 363 far (0.452498 threshold), 0 inaccurate, 0 invisible, 0 weak
adjusting: xxxxxx
...
...
...
« Last Edit: May 29, 2019, 09:36:16 AM by kaz »

kaz

Re: alignCameras() produces different results
« Reply #1 on: May 29, 2019, 04:52:54 AM »
I understand that the results can differ because stochastic values are employed.
However, I don't know why a GUI run is more stable than a script run.
Any help would be much appreciated.

    The alignment results may be different because stochastic values are employed. However, if there are no problems with the overlap and coverage, the results of the processing stages shouldn't differ much.

    The difference may be considerable for an unstable alignment (lack of texture pattern, low overlap, low accuracy, or low image resolution).
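One way to see how large that stochastic variation actually is would be to repeat the alignment on fresh chunks and compare the sparse point counts. A minimal sketch, assuming a hypothetical `make_chunk` callable that returns a new chunk with the same photos already added:

```python
def sparse_counts(make_chunk, runs=3):
    """Repeat match + align on fresh chunks and return each run's
    sparse point cloud size, to compare run-to-run variability."""
    counts = []
    for _ in range(runs):
        chunk = make_chunk()   # a fresh chunk with the same photos added
        chunk.matchPhotos()
        chunk.alignCameras()
        counts.append(len(chunk.point_cloud.points))
    return counts
```

If the counts swing as widely as the two script logs above (22,837 vs 12,557), the alignment itself is unstable rather than the script.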

I am using the following script to automate my workflow of generating an orthomosaic for a set of 180 aerial photos:

Code:
import PhotoScan

doc = PhotoScan.app.document
doc.save("projects/frame_2.psx")

# add photos
chunk = doc.addChunk()
chunk.addPhotos(images)  # images: list of image paths from a directory

# image matching and alignment
for frame in chunk.frames:
    frame.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                      preselection=PhotoScan.GenericPreselection,
                      keypoint_limit=40000,
                      tiepoint_limit=1000)
chunk.alignCameras()

# dense point cloud
chunk.buildDenseCloud(quality=PhotoScan.HighQuality, filter=PhotoScan.MildFiltering)

# build mesh
chunk.buildModel(surface=PhotoScan.HeightField,
                 interpolation=PhotoScan.EnabledInterpolation,
                 face_count=PhotoScan.HighFaceCount,
                 source=PhotoScan.DenseCloudData,
                 classes=[PhotoScan.Created])  # classes control which kind of terrain is used

# build UV
chunk.buildUV(mapping=PhotoScan.OrthophotoMapping)

# build texture
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=8192)

doc.save()

# build orthophoto
chunk.buildOrthomosaic()

# export orthophoto
chunk.exportOrthomosaic(path="projects/frame_2.tif", jpeg_quality=99)

However, at the end of processing, I noticed that the output obtained from processing the same dataset through the API is drastically different from the output I obtain by following the orthophoto generation process (detailed here: http://www.agisoft.com/index.php?id=28) in the GUI.

I am running the Pro 30-day trial version on a 64-bit Linux system with 8 GB RAM and a GeForce GTX 740 graphics card.

Any idea why this would be happening?

Best regards,
Kaz
« Last Edit: May 29, 2019, 09:54:47 AM by kaz »

Alexey Pasumansky

  • Agisoft Technical Support
Re: alignCameras() produces different results
« Reply #2 on: May 29, 2019, 12:47:47 PM »
Hello Kaz,

Which kind of object or scene are you reconstructing?

Do you get more stable results using a higher tie point limit value and with adaptive camera model fitting disabled?
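In script terms, those two settings could look like the sketch below (assuming the Metashape 1.5 Python API; the tiepoint_limit value of 10000 is only an illustrative choice, and the Metashape module is passed in explicitly so the sketch stays self-contained):

```python
def realign_with_stable_settings(chunk, Metashape):
    """Re-run matching and alignment on an existing chunk with a
    higher tie point limit and adaptive camera model fitting disabled."""
    chunk.matchPhotos(accuracy=Metashape.MediumAccuracy,
                      generic_preselection=True,
                      keypoint_limit=40000,
                      tiepoint_limit=10000)     # raised from the 4000 used so far
    chunk.alignCameras(adaptive_fitting=False)  # keep the camera model fixed
    return chunk.point_cloud
```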
Best regards,
Alexey Pasumansky,
Agisoft LLC

kaz

Re: alignCameras() produces different results
« Reply #3 on: May 31, 2019, 05:09:06 AM »
Hello Alexey,
Thank you for your reply.

I'm reconstructing several objects.
The script run is stable for some objects but not for others.
The GUI run, on the other hand, is stable for every object.

Adaptive camera model fitting is disabled for both the GUI and the script run, and the results still differ.
A higher tie point limit is not effective. (Is that because the tie point result (21,245) is already larger than the tie point limit (4,000)? That is not the point of my question, though.)

What I want to know is whether, and why, the results can differ between a GUI run and a script run under the same conditions.
Is such a difference possible?
Please check the attached image to confirm that every option is the same for both runs.

I'm using Metashape Pro 1.5.2.

Best regards,
Kaz
« Last Edit: May 31, 2019, 05:04:18 PM by kaz »

Alexey Pasumansky

Re: alignCameras() produces different results
« Reply #4 on: May 31, 2019, 09:01:27 PM »
Hello Kaz,

The "Tie point limit" parameter for matching sets the upper limit on tie point projections for each individual photo, not a threshold on the total number of points in the sparse point cloud.
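As a back-of-the-envelope illustration of that distinction (the per-point projection count is an assumption, not a measured value): with 50 photos capped at 4000 projections each, and each tie point needing at least two projections, the sparse cloud can in principle grow far beyond the per-photo limit:

```python
photos = 50
tiepoint_limit = 4000          # per-photo cap on tie point projections
min_projections_per_point = 2  # a tie point must be seen in at least two photos

max_projections = photos * tiepoint_limit  # 200000 projections overall
max_points = max_projections // min_projections_per_point

print(max_points)  # 100000 -- far above the 4000 per-photo limit
```

This is consistent with the 21,245 tie points observed despite the 4,000 limit.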

Could you send us the image set for which you are observing the unstable behavior?
Best regards,
Alexey Pasumansky,
Agisoft LLC

kaz

Re: alignCameras() produces different results
« Reply #5 on: June 03, 2019, 09:57:01 AM »
Hello Alexey,

I understand "Tie point limit" now.
Thank you.

Could you download the images from this link?
https://drive.google.com/drive/folders/1dM8irayllo0rfG6ab_3yTtLxpYT9lPVR?usp=sharing
The table below shows the number of tie points produced.


Environment
- Ubuntu 18.04.2 (64-bit)
- GeForce GTX 1080 Ti / PCIe / SSE2
- Metashape 1.5.2
- Memory: 15.6 GiB
- CPU: Core i7-8700K @ 3.70 GHz x 12

Best regards,
Kaz
« Last Edit: June 04, 2019, 05:27:06 AM by kaz »

kaz

Re: alignCameras() produces different results
« Reply #6 on: June 10, 2019, 04:58:07 AM »
Hello Alexey,

I don't want to rush you, but were you able to download the images from the link above?
If not, I'll provide them another way.

Best regards,
Kaz
« Last Edit: June 10, 2019, 07:18:23 AM by kaz »