Researchers from IBM Research and Princeton University have proposed a novel biologically plausible learning algorithm for neural networks.
In their paper "Unsupervised Learning by Competing Hidden Units", published in the Proceedings of the National Academy of Sciences (PNAS), Dmitry Krotov and John J. Hopfield describe an approach to training neural networks inspired by synaptic plasticity in biological neural networks.
The novel algorithm addresses two nonbiological aspects of deep learning: the need for huge amounts of labeled data and the nonlocal learning rule used to modify weights. Instead, in the new algorithm hidden neurons compete with each other and only a few remain active in the end. The algorithm is inspired by Hebbian-like plasticity mechanisms observed in biological neural networks.
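To make the competition mechanism concrete, here is a minimal sketch of a competitive Hebbian-like update in Python. It is a simplified approximation rather than the authors' exact formulation: the function name, the ranking-based competition (strongest unit reinforced, the k-th ranked unit weakly suppressed), and the hyperparameters k, delta, and lr are illustrative assumptions.

```python
import numpy as np

def competitive_hebbian_step(W, v, k=2, delta=0.4, lr=0.01):
    """One unsupervised update of hidden weights W (n_hidden x n_inputs)
    for a single input vector v (n_inputs,). Illustrative sketch only."""
    currents = W @ v                    # input current driving each hidden unit
    order = np.argsort(currents)[::-1]  # rank hidden units by activation
    g = np.zeros(W.shape[0])
    g[order[0]] = 1.0                   # strongest unit: Hebbian push toward v
    g[order[k - 1]] = -delta            # k-th ranked unit: weak anti-Hebbian push
    # local update with a decay term that keeps the weights bounded
    dW = g[:, None] * (v[None, :] - currents[:, None] * W)
    W += lr * dW
    return W
```

Note that the update for each hidden unit depends only on that unit's own weights, its input, and the ranking signal g, which is what makes the rule local, in contrast to backpropagation.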
The learning algorithm first learns the weights of the lower layers in a fully unsupervised fashion, meaning those weights are task-agnostic. A classifier can then be trained on top of those layers for a specific task.
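The two-phase setup could look roughly like the following sketch, which reuses the competitive_hebbian_step function from the previous snippet. The logistic-regression top layer here is a stand-in for the paper's supervised classifier, not the authors' exact setup, and all names and hyperparameters are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_two_phase(X_train, y_train, n_hidden=100, epochs=5):
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(n_hidden, X_train.shape[1]))

    # Phase 1: unsupervised, task-agnostic learning of the hidden weights;
    # the labels y_train are never used in this phase.
    for _ in range(epochs):
        for v in X_train:
            W = competitive_hebbian_step(W, v)

    # Phase 2: freeze W and train a supervised classifier on the hidden
    # activations (here a simple ReLU readout plus logistic regression).
    H = np.maximum(X_train @ W.T, 0.0)
    classifier = LogisticRegression(max_iter=1000).fit(H, y_train)
    return W, classifier
```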
In the paper, the researchers compare the proposed algorithm to an end-to-end neural network trained with backpropagation on two datasets: MNIST and CIFAR-10. They show that models trained with the bio-inspired algorithm do not fit the training data perfectly, yet achieve comparable or slightly better performance on the test set. Moreover, the weights learned by the two algorithms differ substantially.
More about the algorithm can be found in the paper or in the official blog post. The researchers also provide a video lecture explaining the method, and the code for the experiments is open-sourced.