PyTorch 2.6 released with Python 3.13 compatibility & FP16 support on x86 CPUs

PyTorch 2.6 has been released, introducing new features including Python 3.13 compatibility for torch.compile and a new torch.compiler.set_stance API that gives developers finer control over whether and how compilation is applied.

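A minimal sketch of how the new stance control can be used (the "force_eager" and "fail_on_recompile" stance names follow the torch.compiler.set_stance documentation; treat the exact call pattern and options as illustrative):

```python
import torch

@torch.compile
def fn(x):
    return torch.sin(x) + torch.cos(x)

x = torch.randn(8)
fn(x)  # compiled as usual

# Temporarily run the compiled function eagerly, e.g. while debugging,
# without removing the @torch.compile decorator.
with torch.compiler.set_stance("force_eager"):
    fn(x)  # this call skips compilation

# Or fail loudly if a call would trigger a recompilation.
with torch.compiler.set_stance("fail_on_recompile"):
    fn(x)
```
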
One of the key advancements in this release is the improvement of AOTInductor, PyTorch's ahead-of-time compiler, together with Float16 (FP16) support on x86 CPUs. The FP16 support is particularly beneficial on Intel Xeon 6 P-core ("Granite Rapids") processors, where it leverages Advanced Matrix Extensions (AMX) to boost performance in both eager and Inductor modes.

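A hedged sketch of FP16 inference on an x86 CPU, assuming the CPU autocast path accepts dtype=torch.float16 as described in the release notes (the model and shapes here are purely illustrative):

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024),
    torch.nn.ReLU(),
    torch.nn.Linear(1024, 256),
).eval()

x = torch.randn(32, 1024)

# Compile through Inductor and run inference under CPU autocast with
# float16; on AMX-capable Xeon CPUs the heavy matmuls can be dispatched
# to FP16 AMX kernels.
compiled = torch.compile(model)
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.float16):
    out = compiled(x)

print(out.dtype)  # expected: torch.float16 under autocast
```
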
The FP16 support, which was at prototype level in PyTorch 2.5, has been promoted to beta in this release, offering better performance and stability, and it has been validated across a wider range of workloads to ensure reliability.

by Mauricio B. Holguin

