
Author Topic: Alignment Experiments  (Read 94350 times)

Marcel

  • Sr. Member
  • ****
  • Posts: 309
Alignment Experiments
« on: March 14, 2015, 12:35:43 PM »
I did some tests to find out what the influence of the Key Point Limit is on the alignment, because it always bugged me not knowing which Key Point Limit was high enough.

This is the project I used for testing, a scan of a rusty sewer lid:



It's photographed with a D800 and a 35mm lens, and has a total of 52 photos at 36MP resolution.

I ran alignments with the following settings:

Accuracy: High
Pair pre-selection: Generic
Key point limit: from 1000 all the way up to 320000
Tie point limit: 0 (no limit)

Quick explanation of the settings:

Accuracy
At High accuracy, Photoscan uses the full resolution photo  (Medium would use the image at 50%, Low at 25%).

Pair Pre Selection
With Pair pre-selection set to Generic, Photoscan will do a quick pre-scan to see which photos share the same view. If photos do not share the same view, then it makes no sense to compare the points in those photos. This makes the alignment much faster (and with good quality photos it has no impact on quality at all).

Key Point Limit
The maximum number of points Photoscan will extract from each photo. For a high quality 36 Megapixel photo the maximum number of points that can be extracted is usually around 240000. For a 21 Megapixel photo this is generally 180000 points.

Tie Point Limit
This setting was added fairly recently. I am not completely sure, but I think that when this setting is active, Photoscan makes a pre-selection based on the (visual) quality of the extracted points (so it only compares the highest quality points).

For example, if your Key Point limit is set to 40000 and the Tie Point Limit is set to 1000, then Photoscan will first extract 40000 points for each photo, and only keep the best 1000 points. These 1000 points per photo are then used for the alignment calculations.

This would speed up the alignment a lot because there is only a fraction of the points to compare, but since I am not sure about this setting I have set this value to 0 (=no maximum).



I ran alignments for Key point limits from 1000 all the way up to 320000, and put the results in some graphs.
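(If you want to repeat this yourself, here is a minimal sketch of how such a sweep could be scripted from the Photoscan Python console. This assumes the 1.1-era Python API, so parameter names may differ in your version, and the list of limits is just an example.)

Code:
import time
import PhotoScan

chunk = PhotoScan.app.document.chunk

for limit in [1000, 10000, 40000, 120000, 240000, 320000]:
    start = time.time()
    # Same settings as above: High accuracy, Generic pre-selection, no tie point limit
    chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                      preselection=PhotoScan.GenericPreselection,
                      keypoint_limit=limit,
                      tiepoint_limit=0)
    chunk.alignCameras()
    print("%6d key points: %6d sparse points, %.0f s"
          % (limit, len(chunk.point_cloud.points), time.time() - start))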

Number of Points in the Sparse Cloud after alignment:



A higher Key point limit means more points in the Sparse Cloud. It starts levelling off after 240,000 points, because the maximum number of points that can be extracted is being reached for some images.


Alignment time:



The alignment time is pretty much linear, which surprised me because I expected it to be exponential. The graph varies a bit because I was using my computer for other things as well, so the times are not completely accurate.

Reprojection Error

The next graph is the reprojection error:



The reprojection error is a measure of the accuracy of the points, measured in pixels. When you think about it, these values are pretty impressive: Photoscan is able to align the cameras with sub-pixel accuracy!

The reprojection error for an alignment with 40,000 points is almost twice as big as for an alignment with 120,000 points (0.7 vs 0.4 pixels). But we can optimize the Sparse Point Cloud and redo the camera alignment.

To do this, I used Edit->Gradual Selection->Reprojection Error with a value of 0.5 and removed those points. This gets rid of all points with a reprojection error larger than 0.5 (about a third of the points in the Sparse Cloud). Then I used Tools->Optimize Cameras to redo the alignment of the cameras. After optimizing, the graph for the reprojection error looks like this.



So after optimization the reprojection error is pretty much the same for all Key point limits (and I did not lose that many points). The reprojection error now has values around 0.25, so Photoscan managed to align the cameras with a precision of a quarter of a pixel!

I tried optimizing the point cloud even further by using "Gradual Selection -> Reconstruction Uncertainty = 8", but the reprojection error actually increased slightly after these optimizations. I don't think the accuracy is actually worse (since I deleted bad points), so maybe the reprojection error is not the best indicator of the accuracy of the alignment?
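(Side note: the Gradual Selection and Optimize Cameras steps above can also be run from the Python console. This is only a sketch assuming the 1.1-era API, so the exact names may differ in your version.)

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Edit -> Gradual Selection -> Reprojection Error = 0.5, then delete those points
f = PhotoScan.PointCloud.Filter()
f.init(chunk, criterion=PhotoScan.PointCloud.Filter.ReprojectionError)
f.removePoints(0.5)
# (Reconstruction Uncertainty is available as another criterion:
#  PhotoScan.PointCloud.Filter.ReconstructionUncertainty)

# Tools -> Optimize Cameras
chunk.optimizeCameras()

print("Sparse points left: %d" % len(chunk.point_cloud.points))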

Dense Cloud Quality

All this talk about Reprojection Error is pretty theoretical, what is the effect on the Dense Point Cloud?
I did a Dense Cloud reconstruction at "High" quality for the various alignments. I converted the result to a normalmap, because we know from experience that a normalmap shows problems really well.



10000 points: there are some cameras that are excluded from the Dense Cloud reconstruction, so the Dense Cloud has some holes. Also, there is some general noise all over the scan.

20000 points: looks much better, but there is a very slight noise (only visible if you overlay the normalmaps).

40000 points: scan looks good

more than 40000 points: no visible difference in quality.

I also did a comparison with the Dense Cloud built at the Ultra quality setting. There wasn't any visible difference either.

Conclusion

The default value for the Key point limit (40000 points) seems to be well chosen. I don't see any improvement in the quality of the Dense Cloud when using an alignment with more than 40000 Key points. If you look at the values of the reprojection error after optimization, this actually makes sense. The values are all under 0.3 pixels, which is well below the size of the details in the Dense Cloud. The alignment might be 0.1 pixel more precise, but this is way below the threshold of visibility (I would estimate that details in the Dense Cloud correspond to structures at least 2-3 pixels in size).

I will probably run my alignments at a higher Key point limit anyway (maybe 120000 points), just to be sure that my alignment is as accurate as possible. The Alignment doesn't take that much time compared to the DC reconstruction, and it gives me that warm safe feeling of doing it right.  :)

Please note that your results may vary: this is a very specific type of scan where all the photos are in the same plane. A more 3D object might need more points for a good alignment. Also, the photos in this project have almost perfect sharpness, so Photoscan has a very good input. If the photos were less sharp, more points would be deleted during optimization (and using more points might be useful).

igor73

  • Full Member
  • ***
  • Posts: 228
Re: Alignment Experiments
« Reply #1 on: March 14, 2015, 07:02:56 PM »
Interesting test.  Thanks for posting. 

igor73

  • Full Member
  • ***
  • Posts: 228
Re: Alignment Experiments
« Reply #2 on: March 14, 2015, 07:15:54 PM »
Interesting test. Thanks for posting. It would be interesting to compare different tie point limits too. I also did some rough tests and found that 40,000 points seem to work best.

tutoss

  • Newbie
  • *
  • Posts: 49
Re: Alignment Experiments
« Reply #3 on: March 14, 2015, 07:47:11 PM »
nice read!

Mohammed

  • Full Member
  • ***
  • Posts: 191
Re: Alignment Experiments
« Reply #4 on: March 15, 2015, 12:18:20 AM »
Hi Marcel,

Great test, thanks for posting your results.
Can I ask you: how did you generate the graphs (key points & reprojection error)? And how did you convert your results into a normal map?

Thanks again.
Mohammed

Marcel

  • Sr. Member
  • ****
  • Posts: 309
Re: Alignment Experiments
« Reply #5 on: March 15, 2015, 12:43:30 AM »
Quote from: Mohammed
How did you generate the graphs (key points & reprojection error)? And how did you convert your results into a normal map?

I generated the graphs using Google Sheets, with the data I got from right clicking on a chunk and using "Show Info".

You can create a normalmap by meshing your point cloud, and then converting it using XNormal. But it's a slow and painful process and you need a ton of memory for larger point clouds.

ThomasVD

  • Full Member
  • ***
  • Posts: 107
Re: Alignment Experiments
« Reply #6 on: March 18, 2015, 05:37:54 PM »
Thanks Marcel! Very interesting :)

Jack_in_CO

  • Newbie
  • *
  • Posts: 17
Re: Alignment Experiments
« Reply #7 on: March 20, 2015, 07:49:49 PM »
Awesome explanation! Thanks!

zhr69

  • Newbie
  • *
  • Posts: 12
Re: Alignment Experiments
« Reply #8 on: March 28, 2015, 01:55:57 PM »
Thanks for your great post.

I have a question regarding image size. My camera has 21 MP resolution, but when I take photos at this size the result is not good. I have to take photos at 2 MP resolution to achieve better results. Does anyone know why this is? Is it related to the key point limit? My settings are the default Photoscan values.


jtuhtan

  • Newbie
  • *
  • Posts: 37
Re: Alignment Experiments
« Reply #9 on: March 28, 2015, 02:15:44 PM »
Thank you for sharing this great post!

From my experience (oblique and UAV-based imagery) there are only tendencies, not fixed formulas, to follow. For instance, for many small oblique models (< 100 images at 21 MP), I usually get the best performance by doing the following:

1) Set the max number of key points to 1 million to ensure that all key points are generated. (As Marcel pointed out, 21 MP images will max out at a much lower number, but 1 million just sets the threshold to "max".)
2) Set the tie points to 0
3) Remove sparse points using gradual selection at 1 px or less (Marcel used 0.5 in his tests)
4) Optimize cameras
5) Repeat steps 3-4 until the reconstruction error is minimized

Depending on the imagery, a sub-pixel error of 0.5-0.1 can be achieved for most of my models. This takes a little more processing time at the onset, but the georeferenced model at the end is usually of very high quality.
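(In case it helps, here is a rough sketch of how steps 3-5 could be scripted, again assuming the 1.1-era Python API. The 1 px threshold is the one from step 3, and stopping once the filter removes no more points is just one simple way to decide when to stop repeating.)

Code:
import PhotoScan

chunk = PhotoScan.app.document.chunk

while True:
    before = len(chunk.point_cloud.points)
    # Step 3: gradual selection on reprojection error, 1 px threshold
    f = PhotoScan.PointCloud.Filter()
    f.init(chunk, criterion=PhotoScan.PointCloud.Filter.ReprojectionError)
    f.removePoints(1.0)
    # Step 4: re-run the camera optimization
    chunk.optimizeCameras()
    # Step 5: repeat until the filter stops removing points
    if len(chunk.point_cloud.points) >= before:
        break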

Here is an example of a small Austrian river made using the above workflow:
https://sketchfab.com/models/6c82e6f76167440cbff60862ea418f15

Jeff

Marcel

  • Sr. Member
  • ****
  • Posts: 309
Re: Alignment Experiments
« Reply #10 on: March 29, 2015, 11:52:12 AM »
You can set the Key point limit to 0 for maximum points as well (no need to set it to 1 million).

Quote from: jtuhtan
5) Repeat steps 3-4 until the reconstruction error is minimized

Does repeated optimization lower the reconstruction error a lot? I have never tried that, thanks for the tip!


navalguijo

  • Newbie
  • *
  • Posts: 20
  • Scanner Lover
Re: Alignment Experiments
« Reply #11 on: May 08, 2015, 02:43:08 AM »
Thanx a Lot for this post!   :D

ajam13

  • Newbie
  • *
  • Posts: 8
Re: Alignment Experiments
« Reply #12 on: May 27, 2015, 07:14:15 PM »
Thank you for sharing your work .. clear and concise ;)!
lgm

Magda

  • Newbie
  • *
  • Posts: 30
Re: Alignment Experiments
« Reply #13 on: May 04, 2016, 01:58:41 PM »
Quote
Key Point Limit
The maximum number of points Photoscan will extract from each photo. For a high quality 36 Megapixel photo the maximum number of points that can be extracted is usually around 240000. For a 21 Megapixel photo this is generally 180000 points.

How do you know that you can extract around 240000 key points from a 36 Megapixel photo?
I used a Phantom 3 to take the pictures; my camera is 12 Megapixel. How can I calculate my maximum number of potential key points?

Thank,
Magda

James

  • Hero Member
  • *****
  • Posts: 748
Re: Alignment Experiments
« Reply #14 on: May 04, 2016, 03:08:01 PM »
I just ran some tests on a few existing images (of varying quality and content) and it seems the relationship between megapixels and the maximum number of keypoints is approximately linear.

With 6MP Fujifilm S6500 I got ~35000 keypoints
With 12MP Nikon D90 I got ~70000 keypoints
With 16MP Nikon D7000 I got ~110000 keypoints
With 24MP Sony NEX7 I got ~150000 keypoints
With 36MP Nikon D800 I got ~240000 keypoints
With 50MP Canon 5DSR I got ~320000 keypoints

So the formula is that max no. keypoints ~= megapixels x 6400
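(For Magda's 12 MP camera, this works out to roughly 77,000 key points. Here is a tiny hypothetical helper using only the rough fit above; real images will vary with content and sharpness.)

Code:
def estimate_max_keypoints(megapixels, points_per_mp=6400):
    # Rough linear fit to the measurements listed above
    return int(megapixels * points_per_mp)

print(estimate_max_keypoints(12))   # Phantom 3, 12 MP: roughly 77000
print(estimate_max_keypoints(36))   # D800, 36 MP: roughly 230000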

I didn't include the fact that one of my Nikon D800 images returned 280,000 keypoints, as that ruined the graph and contradicted what Marcel posted. Also, his figure of 180,000 points for a 21MP camera doesn't fit either. I think this may be down to the fact that most of my images are not ideal, particularly with the lower-end cameras, so there could be something more complex going on.

Also these were tests on JPG files, so TIFs or similar may give different results.

If you want to run the test on your own images, just make sure the console is visible when you run the alignment with the key point limit set to 0; the console will display the number of key points detected in each image early in the process.
« Last Edit: May 04, 2016, 03:11:22 PM by James »