Hello all,
I am trying to create an orthomosaic from two datasets: one containing single-channel TIFF files, and one containing RGB JPEG photos. I am having trouble aligning the cameras correctly. The datasets are large, containing over 11,000 images.
The datasets have been clustered, with each cluster containing approximately 500 photos, so we can process smaller subsets. Even so, not all cameras are being aligned, and the result is distorted. There also seems to be a difference in alignment quality between the TIFF files and the JPEG files.
All this left me with the following questions:
- What is the influence of hardware, software, and data on the alignment process? Does a more powerful computer improve the quality of the result, or does it only speed up the process? Does the shape of the data have an influence?
- Is there a limit to the number of photos one should use, and how does the number of photos affect the alignment process?
- What is the difference between aligning JPEG files and TIFF files? Aligning 500 TIFF files usually gives a good result, but with JPEG files I can only get a reasonable result when using around 100 photos.
A better understanding of the influence of these factors would help me a lot, so any help will be appreciated.