
Author Topic: Metashape Crashing on Large Project – 95K Image Alignment & Camera Optimization

Fkybarless

  • Newbie
  • Posts: 44
Hello,

I'm trying to align approximately 95,000 georeferenced images (both nadir and oblique) captured by a DJI Matrice 300 drone in a single chunk (covering an area of approximately 5 x 3.5 km). My computer configuration is as follows:

Intel(R) Core(TM) i7-10700 CPU @ 2.90 GHz (8 cores, 16 logical processors)
128 GB RAM
Windows 10 Pro
NVIDIA RTX 4060 8 GB

I'm using the following alignment settings:

Accuracy – High
Generic preselection – on
Reference preselection – on
Key point limit – 900
Tie point limit – 150

And I have sufficient overlap between images.
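
For reference, this is roughly what those settings correspond to in the Python API (just a sketch; downscale=1 is my understanding of the High accuracy setting, and the call assumes the photos are already loaded into the active chunk):

import Metashape

doc = Metashape.app.document
chunk = doc.chunk  # active chunk with the 95K photos added

# downscale=1 corresponds to High accuracy (0 = Highest, 2 = Medium)
chunk.matchPhotos(downscale=1,
                  generic_preselection=True,
                  reference_preselection=True,
                  keypoint_limit=900,
                  tiepoint_limit=150)
chunk.alignCameras()
doc.save()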

However, my computer could not handle the task and shut down during processing. I then split the original chunk into two chunks of about 47,500 images each and ran the alignment with the same parameters. The alignment and subsequent merging of the chunks were successful (resulting in about 2.5 million tie points), but the project crashes during camera optimization after a long processing time.

I then attempted distributed processing using my computer as both the server and client, and an additional machine as a client. For alignment, I used the following settings:

Accuracy – High
Generic preselection – on
Reference preselection – on
Key point limit – 20000
Tie point limit – 2000

Unfortunately, I was still unable to align all 95,000 images in a single chunk, and the system shut down again. For the next alignment attempt, I used the following settings:

Accuracy – High
Generic preselection – on
Reference preselection – on
Key point limit – 4000
Tie point limit – 450

Unfortunately, the system shut down again.
How can I align 95,000 images in a single chunk effectively? Alternatively, how can I successfully optimize cameras for two merged chunks without the program crashing? Do you have any recommendations or best practices?

Thank you in advance!

mrv2020

  • Jr. Member
  • Posts: 82
Hello,

I constantly deal with large image sets, so given the limitations you ran into when trying to align all 95,000 images in a single chunk, I suggest using the official split_in_chunks_dialog.py script from Agisoft: https://github.com/agisoft-llc/metashape-scripts/blob/master/src/split_in_chunks_dialog.py. Since your images are georeferenced, the script automatically splits the project into smaller, more manageable chunks while preserving its structure, so each chunk can be aligned reliably on your machine without lowering precision parameters such as the key point and tie point limits.

Once split, you can align each chunk individually with the same settings (or slightly reduced ones) and then use the "Merge Chunks" function to combine them.

I also recommend using an overlap margin when generating the split chunks, so that adjacent chunks share some images and merge cleanly.

This approach significantly reduces the memory load and avoids crashes, while maintaining the accuracy of the model.
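
As a rough illustration of the idea (not the official script itself, just a minimal sketch that assumes the standard Metashape Python API and camera references in a metric coordinate system), splitting by camera position with an overlap margin could look like this:

import Metashape

GRID_X, GRID_Y = 4, 3   # number of sub-chunks along each axis
OVERLAP = 100.0         # margin in CRS units (e.g. metres) shared by adjacent cells

doc = Metashape.app.document
src = doc.chunk  # chunk containing all cameras

# Bounding box of the georeferenced camera positions
locs = [c.reference.location for c in src.cameras if c.reference.location]
min_x, max_x = min(v.x for v in locs), max(v.x for v in locs)
min_y, max_y = min(v.y for v in locs), max(v.y for v in locs)
step_x = (max_x - min_x) / GRID_X
step_y = (max_y - min_y) / GRID_Y

for i in range(GRID_X):
    for j in range(GRID_Y):
        # Cell bounds expanded by the overlap margin
        x0 = min_x + i * step_x - OVERLAP
        x1 = min_x + (i + 1) * step_x + OVERLAP
        y0 = min_y + j * step_y - OVERLAP
        y1 = min_y + (j + 1) * step_y + OVERLAP

        part = src.copy()
        part.label = "split_{}_{}".format(i, j)
        # Remove cameras outside this cell (cameras in the overlap band are kept)
        outside = [c for c in part.cameras
                   if c.reference.location is None
                   or not (x0 <= c.reference.location.x <= x1 and
                           y0 <= c.reference.location.y <= y1)]
        part.remove(outside)

doc.save()

Each resulting "split" chunk can then be aligned separately and combined with "Merge Chunks" as described above; the shared cameras in the overlap bands are what tie the merged blocks together.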

Best regards,

hogas_stefan

  • Newbie
  • Posts: 6
"I recommend you use an overlapping edge for better overlap between the splits when generating the merger chunks."
How exactly should this be done ?

jenkinsm

  • Jr. Member
  • Posts: 79
While 95k images is significantly more than I have dealt with personally in a single chunk (my max is around 20k), I think it should be possible. I have found this technique to be useful, so perhaps it will work for you too. The gist is to still process the whole data set in a single chunk, but to use the Network Processing feature, which intelligently breaks each step down into smaller pieces for "Workers" to process in parallel. You can do this even on a single machine with a single Worker node and benefit from the workload being handled slightly differently than when processing directly in the main GUI.

https://www.youtube.com/watch?v=BYRIC-qkZJ8&pp=0gcJCdgAo7VqN5tD
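
In case it helps, once a server and at least one worker node are running, the alignment can also be submitted as a network batch from the Python console. This is only a sketch: the server address is a placeholder, the project must be saved on storage shared with the nodes, and the exact API may differ slightly between Metashape versions.

import Metashape

doc = Metashape.app.document
chunk = doc.chunk  # project saved on storage shared with the worker nodes

# Describe the alignment steps with the desired parameters
match = Metashape.Tasks.MatchPhotos()
match.downscale = 1                  # High accuracy
match.generic_preselection = True
match.reference_preselection = True
match.keypoint_limit = 20000
match.tiepoint_limit = 2000

align = Metashape.Tasks.AlignCameras()

# Convert to network tasks and submit them as a single batch
tasks = [match.toNetworkTask(chunk), align.toNetworkTask(chunk)]

client = Metashape.NetworkClient()
client.connect("192.168.1.10")          # placeholder: your server address
batch_id = client.createBatch(doc.path, tasks)
client.setBatchPaused(batch_id, False)  # start processing

The point is the same as in the video: with network processing the job is subdivided into smaller sub-tasks, so even a single worker node handles the memory load differently than processing directly in the GUI.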

If it does work for you, let us know!