PyTorch 2.6 released with Python 3.13 compatibility & FP16 support on x86 CPUs
PyTorch 2.6 has been released, adding support for Python 3.13 in the torch.compile function and introducing a new torch.compiler.set_stance API that gives developers finer runtime control over how compilation is applied.
One of the key advancements in this release is a set of improvements to AOTInductor, PyTorch's ahead-of-time compiler, which now supports Float16 (FP16) on x86 CPUs. This is particularly beneficial on Intel Xeon 6 P-core processors ("Granite Rapids"), where Advanced Matrix Extensions (AMX) accelerate FP16 in both eager and Inductor modes.
The FP16 support, introduced at prototype level in PyTorch 2.5, has been promoted to beta in this release, offering better performance and stability, and has been tested across a wider range of workloads to ensure its reliability.
