Comments on: Common PCI-Express Myths for GPU Computing Users https://www.microway.com/hpc-tech-tips/common-pci-express-myths-gpu-computing/ By: John Murphy https://www.microway.com/hpc-tech-tips/common-pci-express-myths-gpu-computing/#comment-41 Tue, 27 Jun 2017 14:35:11 +0000 https://www.microway.com/?p=3417#comment-41 In reply to Ryan.

Hi Ryan, for multi-GPU systems in which the GPUs' PCIe lanes total more than 40, on-board PCIe switches are required to manage the traffic. Otherwise the traffic would have to be handled entirely by the CPU, which cannot provide more than 40 lanes. We offer a system, for example, which can accommodate up to ten x16 PCIe 3.0 GPUs on a single PCIe root complex, all attached to a single CPU. For further details, see Octoputer 4U 10-GPU Server with Single Root Complex for GPU-Direct.
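The lane arithmetic behind this answer can be sketched quickly. The snippet below is a hypothetical illustration (the 40-lane figure is the per-CPU limit discussed in this thread, and the link widths are those from Ryan's proposed x16/x8/x8/x8/x8/x8/x8 layout):

```python
# Illustrative lane arithmetic for the configuration discussed above.
# Assumption from the thread: a single Xeon E5-2600 v4 CPU provides 40 PCIe 3.0 lanes.
cpu_lanes = 40

# Proposed configuration: one GPU at x16 plus six GPUs at x8.
gpu_links = [16] + [8] * 6

total_demand = sum(gpu_links)
print(f"Lanes requested by GPUs: {total_demand}")      # 64
print(f"Lanes provided by CPU:   {cpu_lanes}")         # 40
print(f"Oversubscribed: {total_demand > cpu_lanes}")   # True
```

Because demand (64 lanes) exceeds what the CPU exposes (40 lanes), a PCIe switch must fan the CPU's lanes out to the GPUs; the switch also lets GPU-to-GPU traffic stay off the CPU entirely.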

By: Ryan https://www.microway.com/hpc-tech-tips/common-pci-express-myths-gpu-computing/#comment-40 Wed, 21 Jun 2017 19:41:29 +0000 https://www.microway.com/?p=3417#comment-40 Very good read. Thank you. I am curious about one thing that no one directly answers: would a single Xeon E5-26?? (v4) be capable of handling more than 40 PCIe lanes' worth of GPUs? I understand that the PCIe lane count limits how many GPUs can be used in a 3D rendering or gaming system. For CUDA rendering, I have a client wanting to put seven NVIDIA 1070s in a single-socket Xeon machine, but I don't see how the 40 PCIe lanes are going to keep up with an x16/x8/x8/x8/x8/x8/x8 setup; it seems like a waste of 3 or 4 GPUs, depending on how the setup is run. Any help would be great.

By: Joao Streibel https://www.microway.com/hpc-tech-tips/common-pci-express-myths-gpu-computing/#comment-39 Sun, 31 Jul 2016 19:56:10 +0000 https://www.microway.com/?p=3417#comment-39 Thank you very much for this. Helped me a lot!
