TORCH_CUDA_ARCH_LIST
is an environment variable that tells PyTorch which CUDA architectures (compute capabilities) to generate code for when building PyTorch from source or compiling custom CUDA extensions (for example via torch.utils.cpp_extension). If you know which GPU architectures your build needs to run on, restricting this list speeds up compilation and reduces binary size.
For example, if you have an NVIDIA GPU based on the “sm_70” architecture (Volta, e.g. the Tesla V100), you could set the TORCH_CUDA_ARCH_LIST
environment variable like this:
export TORCH_CUDA_ARCH_LIST="7.0"
This tells PyTorch to generate code only for the “sm_70” architecture. To support multiple architectures, list them all separated by semicolons:
export TORCH_CUDA_ARCH_LIST="7.0;7.5"
This would tell PyTorch to compile your code for both the “sm_70” and “sm_75” architectures. Note that the version numbers (e.g., “7.0”, “7.5”) are NVIDIA compute capabilities and map directly to the “sm_” architecture names (7.5 → sm_75). You can also append “+PTX” to an entry (e.g. “7.5+PTX”) to additionally embed PTX intermediate code, which can be JIT-compiled at runtime for newer architectures not in the list.
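If you are unsure which value to use, torch.cuda.get_device_capability() returns the (major, minor) compute capability of a local GPU. As a minimal sketch (the arch_list helper below is hypothetical, not part of PyTorch), such tuples can be turned into a TORCH_CUDA_ARCH_LIST string like this:

```python
# Hypothetical helper: format (major, minor) compute-capability tuples,
# as returned by torch.cuda.get_device_capability(), into the
# semicolon-separated string that TORCH_CUDA_ARCH_LIST expects.
def arch_list(capabilities, ptx_for_last=False):
    parts = ["{}.{}".format(major, minor) for major, minor in capabilities]
    # Optionally append "+PTX" to the last (newest) entry so the binary
    # can be JIT-compiled for architectures newer than those listed.
    if ptx_for_last and parts:
        parts[-1] += "+PTX"
    return ";".join(parts)

print(arch_list([(7, 0), (7, 5)]))                     # 7.0;7.5
print(arch_list([(7, 0), (7, 5)], ptx_for_last=True))  # 7.0;7.5+PTX
```

You could then set the variable in the build environment, e.g. os.environ["TORCH_CUDA_ARCH_LIST"] = arch_list([(7, 0), (7, 5)]), before invoking the extension build.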