PyTorch 2.0 Now Shipping With Better CPU & GPU Performance

PyTorch 2.0 uses torch.compile as its main API, the TorchInductor deep learning compiler now relies on OpenAI Triton when targeting NVIDIA and AMD GPUs, a Metal Performance Shaders back-end provides GPU-accelerated PyTorch on macOS, inference performance is faster on AWS Graviton CPUs, and there are a variety of other new prototype features and technologies. On the CPU side there are also several "critical" optimizations to GNN inference and training, as well as faster CPU performance by making use of Intel oneDNN Graph.
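For those curious what the new API looks like in practice, below is a minimal sketch of using torch.compile; the model architecture and tensor shapes are illustrative placeholders, not taken from the PyTorch 2.0 announcement.

```python
import torch

# Illustrative toy model (not from the article) to show the torch.compile entry point.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)

# torch.compile wraps the model; TorchInductor is the default backend and
# emits Triton kernels when running on supported NVIDIA/AMD GPUs.
compiled_model = torch.compile(model)

x = torch.randn(32, 128)
out = compiled_model(x)  # first call triggers compilation; later calls reuse the compiled code
```

Existing eager-mode code keeps working unchanged; torch.compile is opt-in and falls back to eager execution for operations it cannot compile.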
Downloads and more details on PyTorch 2.0 via PyTorch.org.