On the second day of F8, Facebook’s annual developer conference, the company announced the arrival of PyTorch 1.1. The release is a major milestone for PyTorch and includes new developer tools, new APIs, TensorBoard support and much more.
Facebook announced that, in collaboration with the AI community, it had improved PyTorch in a number of areas, including dynamic networks, eager and graph-based execution, hardware-accelerated inference and distributed training.
The main new features in PyTorch 1.1 are improvements to the JIT (just-in-time) compiler, experimental TensorBoard support and distributed training across multiple GPUs.
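As a minimal sketch of what the JIT compiler does, the snippet below compiles an ordinary Python function into TorchScript with the `torch.jit.script` decorator. The function itself is an illustrative example, not one from the release notes:

```python
import torch

@torch.jit.script
def scaled_relu(x):
    # Compiled by the TorchScript JIT; callable like a normal function
    return torch.relu(x) * 2

# ReLU zeroes the negative entry, then both entries are doubled
y = scaled_relu(torch.tensor([-1.0, 2.0]))
print(y.tolist())  # → [0.0, 4.0]
```

The compiled function can also be serialized with `scaled_relu.save(...)` and loaded outside of Python, which is the main motivation for TorchScript.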
From now on, PyTorch users can use TensorFlow’s visualization toolkit, TensorBoard. It is a suite of web applications that allows users to keep track of the training process, visualize evaluations, project embeddings into a lower-dimensional space, and more.
TensorBoard support in PyTorch 1.1 is experimental, so developers and researchers are encouraged to report any issues or bugs with the integration.
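A minimal sketch of the new integration: PyTorch 1.1 exposes a `SummaryWriter` in `torch.utils.tensorboard` for writing TensorBoard event files. The log directory name and the loss values below are illustrative placeholders, not part of the release:

```python
from torch.utils.tensorboard import SummaryWriter

# Event files are written under this directory (name is an arbitrary example)
writer = SummaryWriter(log_dir="runs/demo")

for step in range(100):
    loss = 1.0 / (step + 1)  # placeholder standing in for a real training loss
    writer.add_scalar("train/loss", loss, step)

writer.close()
```

The logged run can then be inspected by pointing TensorBoard at the directory, e.g. `tensorboard --logdir runs`.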
New Machine Learning Tools and Projects
PyTorch 1.1 includes a number of new machine learning tools developed by engineers at Facebook or within the collaborative AI community.
Some of the new tools available alongside version 1.1 include BoTorch (a Bayesian optimization framework), Ax (an adaptive experimentation platform), PyTorch-BigGraph (a distributed system for large graph embeddings), BigGAN-PyTorch, GeomLoss, PyTorch Geometric, and more.
More details about all the improvements in PyTorch 1.1 can be found in the official release notes and in Facebook’s blog post.