How is GPU and memory usage determined in nvidia-smi results?

I am currently using nvidia-smi, the tool that ships with the NVIDIA driver, to monitor GPU performance. Running "nvidia-smi -a" reports the current GPU state, including GPU utilization, memory usage, temperature, etc., like this:

=============== NVSMI LOG ===============

Timestamp                : Tue Feb 22 22:39:09 2011
Driver Version           : 260.19.26

GPU 0:
    Product Name         : GeForce 8800 GTX
    PCI Device/Vendor ID : 19110de
    PCI Location ID      : 0:4:0
    Board Serial         : 211561763875
    Display              : Connected
    Temperature          : 55 C
    Fan Speed            : 47%
    Utilization
        GPU              : 1%
        Memory           : 0%

I wonder how GPU and memory utilization are determined. Suppose GPU utilization reads 47%. Does that mean 47% of the SMs are active? Or that all GPU cores are busy 47% of the time and idle the remaining 53%? For memory, does the figure mean the ratio of current bandwidth to maximum bandwidth, or the fraction of time the memory interface was busy over the last sampling period?

2 answers

According to a post by a moderator on the NVIDIA forums, GPU and memory utilization are based on activity over the last second:

GPU busy is actually the percentage of time over the last second that the SMs were busy, and memory utilization is actually the percentage of bandwidth used over the last second. Full memory-consumption statistics will come in the next version.
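So the figure is a busy-time fraction, not a count of active units. A minimal sketch of that definition, using made-up busy/idle samples (the driver does the real sampling internally; the sample data and function name here are purely illustrative):

```python
# Utilization as the driver defines it: the fraction of the sampling
# period during which the unit (SMs, or the memory interface) was busy.
# Each element of `samples` is 1 if the unit was busy at that sample
# point, 0 if idle. The sample data below is invented for illustration.

def utilization_percent(samples):
    """Return the percentage of sample points at which the unit was busy."""
    if not samples:
        return 0
    return 100 * sum(samples) // len(samples)

# SMs busy at 47 of 100 sample points over the last second -> 47%,
# even if only a single SM was doing the work at each of those points.
samples = [1] * 47 + [0] * 53
print(utilization_percent(samples))  # 47
```

This is why a kernel that keeps only one SM busy can still show high GPU utilization: the counter measures busy time, not occupancy.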


You can refer to this official API document: http://docs.nvidia.com/deploy/nvml-api/structnvmlUtilization__t.html#structnvmlUtilization__t

It says: "Percent of time over the past sample period during which one or more kernels was executing on the GPU."
