Facebook launched PyTorch Hub – a place where researchers and developers can upload and use pre-trained models.
The launch is part of Facebook’s plan to bring PyTorch closer to production. Alongside several improvements to the library itself, Facebook is working to build a whole AI ecosystem around PyTorch.
PyTorch Hub, an API and workflow for research reproducibility, allows researchers and engineers to quickly publish pre-trained models to a GitHub repository. According to Facebook, this will foster machine learning research by giving researchers and developers plug-and-play models.
A model can be published to PyTorch Hub by creating a hubconf.py file and uploading it to GitHub. hubconf.py is a simple Python file containing functions that load a pre-trained model. These functions, called “entrypoints”, construct the model and return it to the caller, and one hubconf.py file can contain multiple entrypoints.
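A minimal sketch of what such a file can look like. The model here is a toy stand-in so the example stays self-contained; a real hubconf.py would list its pip dependencies (e.g. `['torch']`) and build an actual `nn.Module`, loading pre-trained weights when `pretrained=True`:

```python
# hubconf.py — hypothetical example for illustration only.
# `dependencies` names the pip packages PyTorch Hub checks for
# before running any entrypoint; a real model would list ['torch'].
dependencies = []

def tiny_model(pretrained=False, scale=1.0):
    """Entrypoint: the function name is the name users load by.

    A real entrypoint would build an nn.Module here and load
    pre-trained weights when pretrained=True.
    """
    # stand-in for a model: a callable that scales its input
    def model(x):
        return x * scale
    return model

def tiny_model_v2(pretrained=False):
    """A second entrypoint — one hubconf.py may define several."""
    return tiny_model(pretrained, scale=2.0)
```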
For model loading, users can call the torch.hub.load() API, while torch.hub.list() enumerates the entrypoints a given repository exposes. Through this simple interface, users can load both the model definition and the pre-trained weights in a single call.
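Conceptually, torch.hub.load() fetches the repository, imports its hubconf.py, and calls the requested entrypoint. The sketch below mimics that mechanism for a local directory; `load_from_hubconf` is a hypothetical helper written for illustration, not part of the PyTorch API:

```python
# Rough sketch of the mechanism behind torch.hub.load():
# locate hubconf.py, import it, look up the named entrypoint,
# and call it with the user's arguments.
import importlib.util
import os

def load_from_hubconf(repo_dir, entrypoint, *args, **kwargs):
    """Hypothetical stand-in for torch.hub.load(repo, entrypoint, ...)."""
    path = os.path.join(repo_dir, "hubconf.py")
    spec = importlib.util.spec_from_file_location("hubconf", path)
    hubconf = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(hubconf)
    fn = getattr(hubconf, entrypoint)  # each entrypoint is a plain function
    return fn(*args, **kwargs)
```

With the real API, the equivalent call is `torch.hub.load('owner/repo', 'entrypoint_name', pretrained=True)`, which also downloads the repository and the weights.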
The initial version of PyTorch Hub launched by Facebook already contains around 20 pre-trained models, including ResNet, U-Net, Google’s BERT, GPT, and WaveGlow. This is still a beta release of the API, and Facebook’s engineers are seeking feedback from users to further improve PyTorch Hub.