The project has 1000 oblique images, each 15 MPix.
Splitting the project into chunks is not an option because of the oblique imagery.
As I mentioned, I tried to process it on a cluster.
With fine-level task distribution, the server creates 60 tasks.
However, each task needs about 54 GB of memory to process its share of the data.
My cluster nodes have only 32 GB of RAM each, so they start swapping.
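For reference, here is the back-of-envelope arithmetic (just a rough sketch; it assumes per-task memory scales inversely with the task count, which may not hold in practice):

```python
# Numbers from the runs described above.
tasks = 60            # tasks created by fine-level distribution
mem_per_task_gb = 54  # observed peak memory per task
node_ram_gb = 32      # physical RAM per cluster node

shortfall_gb = mem_per_task_gb - node_ram_gb
print(f"Each task exceeds node RAM by {shortfall_gb} GB")  # 22 GB over

# If per-task memory scaled inversely with the task count (an assumption),
# roughly this many tasks would be needed to fit in 32 GB per node:
needed_tasks = tasks * mem_per_task_gb / node_ram_gb
print(f"~{needed_tasks:.0f} tasks would be needed")       # ~101 tasks
```

So if the task count were configurable, roughly doubling it should bring each task under the 32 GB limit, which is why I'm asking about it below.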
I also tried switching off the swap file. The result was the same, except that the nodes crashed instead of swapping (which is expected).
So my questions are: Does the server take the nodes' available memory into account when it defines the tasks? Can I increase or decrease the number of tasks? Or can I specify a "max memory" per task for cluster processing?
It's no "windows" issue, a linux setup of my cluster showed the same behaviour...