Hi all,
I have recently been testing the network processing feature on our array of company modelling PCs.
I am encountering an error in which the first processing node appears to parse/preload the data and run out of RAM, producing an 'Unexpected Process Termination' error message (please see the attached image). The system then retries the process, failing again at the point where it can no longer allocate memory.
Each processing node is equipped with 64 GB of RAM and dual RTX 2080 Supers (or similar).
I am aware that 64 GB of RAM is by no means enough for the type of projects I am doing here, but I had assumed that the network processing feature would share the load in a way that would not cause errors like this.
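For what it's worth, this is the sort of quick memory poll I can run on the first node while it parses, if that would help with diagnosis. It is just plain Python with psutil, nothing tied to the processing software itself, so please treat it as a rough sanity check rather than anything official:

    # Rough memory poll to run on the first node while the job is parsing.
    # Prints used/total system RAM every 5 seconds.
    import time
    import psutil

    while True:
        mem = psutil.virtual_memory()
        print(f"used {mem.used / 1e9:.1f} GB of {mem.total / 1e9:.1f} GB "
              f"({mem.percent:.0f}%)")
        time.sleep(5)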
Will network processing always parse/preload data on the first node before allocating tasks to the remaining nodes?
If this is the case, would it make sense to upgrade the 'main' node to 128 GB of RAM or similar and leave the remaining nodes as they are? Or will I see this problem replicated on the other nodes despite upgrading the main one?
I hope this makes sense! I would appreciate any input, cheers!
Kind regards,
Juan Shimmin
Graduate Survey Data Analyst - Rovco