PyTorch 2.0 Now Shipping With Better CPU & GPU Performance
PyTorch 2.0 introduces torch.compile as its main API; TorchInductor, the default compiler back-end for NVIDIA and AMD GPUs, relies on OpenAI Triton as its deep learning compiler; a Metal Performance Shaders back-end provides GPU-accelerated PyTorch on macOS; there is faster inference performance on AWS Graviton CPUs; and a variety of other new prototype features and technologies are included. On the CPU side there are also several "critical" optimizations to GNN inference and training, as well as faster CPU performance by making use of Intel oneDNN Graph.
Downloads and more details on PyTorch 2.0 via PyTorch.org.