This thread has drifted a little from the original subject (which seems to have been forgotten?).
Basically, I have never seen a job actually fail due to memory after first exhausting all of the SSD swap.
Also, whether it is 64 GB or 256 GB of RAM, I think we are merely pushing the problem further out until a bigger job hits it again.
Shouldn't it be possible to process only one chunk at a time in memory, then dump it to swap/temp?
If I can stitch 1 km², then I should be able to stitch 1000 km² by processing one area, unloading most of it, keeping only the border region shared with the next chunk, and then processing that one?
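Something like this toy sketch is what I have in mind (Python/NumPy, with an arbitrary smoothing filter standing in for the real stitching step; the tile size and border width are made-up numbers, not anything from a real pipeline):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def process_in_tiles(src, tile=1024, halo=16):
    """Process a large 2D array tile by tile, reading a small border (halo)
    from the neighbouring tiles so results match across tile edges."""
    out = np.empty_like(src)
    h, w = src.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            # read the tile plus a halo borrowed from its neighbours
            y0, y1 = max(y - halo, 0), min(y + tile + halo, h)
            x0, x1 = max(x - halo, 0), min(x + tile + halo, w)
            block = uniform_filter(src[y0:y1, x0:x1], size=2 * halo + 1)
            # keep only the tile interior; the halo is discarded after use
            th, tw = min(tile, h - y), min(tile, w - x)
            out[y:y + th, x:x + tw] = block[y - y0:y - y0 + th,
                                            x - x0:x - x0 + tw]
    return out

# toy usage: a "map" far larger than a single tile, processed piecewise
big = np.random.rand(4096, 4096).astype(np.float32)
smoothed = process_in_tiles(big)
```

Because the halo is as wide as the filter radius, each tile's interior comes out identical to what a single pass over the whole array would give, while only one tile (plus its border) has to sit in memory at any moment.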
Such processing would only require more tmp/swap space, which can be fast on an SSD stripe set with proper controllers, and it would remove the "never enough RAM" problem in a world where sensors keep improving and we only generate more data.
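The "more tmp instead of more RAM" side could look roughly like this, using NumPy's disk-backed arrays (np.memmap); the paths, sizes and the per-band operation are placeholders, and the input file is assumed to already exist on the SSD stripe set. Only the band of rows currently being processed lives in RAM, everything else stays in the temp files:

```python
import numpy as np

# sizes, paths and the per-band operation are all placeholders
h, w, band = 100_000, 100_000, 2_048
src = np.memmap("/mnt/ssd_tmp/in.dat", dtype=np.float32, mode="r", shape=(h, w))
out = np.memmap("/mnt/ssd_tmp/out.dat", dtype=np.float32, mode="w+", shape=(h, w))

for y in range(0, h, band):
    rows = np.asarray(src[y:y + band])   # only this band is held in RAM
    out[y:y + band] = rows * 0.5         # stand-in for the real processing
out.flush()                              # push the dirty pages down to the SSD
```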