CPU Memory

You need enough CPU memory for:

- possibly 2-3 DataLoader (DL) workers per accelerator, so 16-24 processes with 8 accelerators per node (a rough sketch of this setup follows below)
- even more memory for the DL workers if they pull data from the cloud
- enough memory to load the model on CPU first if it can't be loaded directly onto the accelerator
- accelerator memory offloading, which extends the accelerator's memory by swapping currently unused layers out to CPU RAM - if that's the target use case, then the more CPU memory is available, the better
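
Here is a minimal PyTorch sketch of the DL worker setup from the first bullet; the toy dataset, batch size and `num_workers=2` are just illustrative assumptions - the point is that each worker is a separate process adding to the node's CPU memory footprint:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset standing in for real training data.
dataset = TensorDataset(torch.randn(10_000, 1_024))

# Each DataLoader worker is a separate process. With num_workers=2 per
# accelerator and 8 accelerators per node, that's 16 extra processes, each
# holding its own copy of the dataset object plus prefetched batches.
loader = DataLoader(dataset, batch_size=32, num_workers=2, pin_memory=True)

for (batch,) in loader:
    pass  # the training step would go here
```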
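
And a sketch of the third bullet - materializing the checkpoint in CPU RAM before moving the weights to the accelerator; the model definition and the `"checkpoint.pt"` path are placeholders:

```python
import torch
from torch import nn

# Hypothetical small model standing in for a real network.
model = nn.Sequential(nn.Linear(1_024, 4_096), nn.ReLU(), nn.Linear(4_096, 1_024))

# The checkpoint is first loaded into CPU RAM, so the host needs enough free
# memory to hold a full copy of the weights ("checkpoint.pt" is a placeholder).
state_dict = torch.load("checkpoint.pt", map_location="cpu")
model.load_state_dict(state_dict)

# Only now do the weights move into the accelerator's memory.
model.to("cuda")
```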
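
Finally, a very crude sketch of what CPU offloading does conceptually - only one layer at a time resides on the accelerator while the rest wait in CPU RAM. Real implementations such as DeepSpeed's ZeRO offloading do this asynchronously and far more efficiently; the 4-layer model here is made up:

```python
import torch
from torch import nn

# Hypothetical 4-layer model whose layers start out in CPU RAM.
layers = nn.ModuleList([nn.Linear(1_024, 1_024) for _ in range(4)])

x = torch.randn(8, 1_024, device="cuda")
for layer in layers:
    layer.to("cuda")   # swap the layer into accelerator memory when needed
    x = layer(x)
    layer.to("cpu")    # swap it back out, freeing accelerator memory
```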