Installing requirements:
$ py -m pip install -r requirements.txt
Running tests:
$ py test.py <image_path> <nsteps>
$ py test.py example.png 4
Usage in code:
from HopPics import *
hp = HopPics('path_to_pic.png')
hp.reconstruct_from_noise()
Hopfield networks (HNs) are among the simplest neural network models, largely because there is almost nothing "neural" about them: they are based on the Ising model from statistical physics and only require a basic understanding of matrices and graph theory. They are of little practical use today, but they illustrate the notion of associative memory and exhibit a few limitations that are worth taking note of.
This project is a simple implementation of Binary Hopfield Networks to make images that are able to repair themselves through pattern recognition.
As stated before, Hopfield networks are actually based on Ising models, which are normally used in statistical physics; the only thing that makes them "neural" is the use of Hebb's rule, which we will look at in more detail below.
One of the interesting and instrumental concepts of the Ising model is its energy function, represented by the Hamiltonian, i.e. the total energy of the system:

$$E = -\frac{1}{2} \sum_{i} \sum_{j} w_{ij} \, s_i s_j$$

where $w_{ij}$ is the weight of the connection between units $i$ and $j$, and $s_i \in \{-1, +1\}$ is the state of unit $i$.
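As a minimal sketch (not the actual HopPics code), this energy can be computed directly from the weight matrix; the names `weights` and `state` are illustrative:

```python
import numpy as np

def energy(weights, state):
    """Hopfield energy E = -1/2 * sum_ij w_ij * s_i * s_j for a state in {-1, +1}^N."""
    return -0.5 * state @ weights @ state
```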
Our goal with that energy function is to create a system such that:
- The system always converges to a low energy state.
- The system lowers the energy for states that it seeks to remember.
By having these two properties, we will be able to store and retrieve patterns as we'll see in the next part.
Usually Ising models are represented as n-dimensional lattices, but in the case of Hopfield networks we choose to represent our model as a complete graph $K_n$, which consists of a set of weights $w_{ij}$, one for every pair of neurons $i$ and $j$.
Here is the learning formula (Hebb's rule) that models this phenomenon:

$$w_{ij} = \frac{1}{N} \sum_{\mu=1}^{p} x_i^{\mu} x_j^{\mu}$$

where $N$ is the number of neurons, $p$ the number of stored patterns, and $x_i^{\mu} \in \{-1, +1\}$ the $i$-th bit of pattern $\mu$.

What this does is basically make the weights decrease if two neurons tend to be in opposite states in the stored patterns, and increase if they tend to be in the same state.
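As a rough illustration (assuming ±1 patterns stored as rows of a NumPy array; a sketch, not the HopPics internals):

```python
import numpy as np

def train(patterns):
    """Hebb's rule: w_ij = (1/N) * sum over patterns of x_i * x_j, with no self-connections."""
    p, n = patterns.shape               # p patterns of n bits each, values in {-1, +1}
    weights = patterns.T @ patterns / n
    np.fill_diagonal(weights, 0)        # a neuron is not connected to itself
    return weights
```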
To calculate each step, we do the following calculation for every neuron (its local field):

$$h_i = \sum_{j} w_{ij} \, s_j$$

And then each bit goes through an activation function that I chose to be the sign function:

$$s_i = \begin{cases} +1 & \text{if } h_i \geq 0 \\ -1 & \text{otherwise} \end{cases}$$
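A hedged sketch of one update step with that activation (a synchronous update over all bits; the actual project may well update bits asynchronously):

```python
import numpy as np

def step(weights, state):
    """One synchronous Hopfield update: each bit takes the sign of its local field."""
    h = weights @ state                 # local field h_i = sum_j w_ij * s_j
    return np.where(h >= 0, 1, -1)      # +1 if h_i >= 0, -1 otherwise
```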
The fact that the weights are strengthened when two neurons share the same state gives us what we can call associative memory, or pattern recognition. This is because each product $w_{ij} s_j$ pushes neuron $i$ back towards the value it had in the stored pattern, so a state that is close enough to a memorized pattern gets pulled back onto it.
Fig 2: Examples of a destroyed picture of Margaret Hamilton reconstructed after 4 steps.
Fig 3: Examples of a destroyed image of a panda reconstructed after 8 steps.
Both of these examples were produced by first destroying the image with random noise (if you squint hard enough you can still see the panda). And even though the noisy images are almost unrecognizable, they are still able to snap back to the original as long as enough pixels are intact.
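For reference, here is a rough sketch of that experiment built on the `train` and `step` sketches above (the noise level and helper names are illustrative, not the actual HopPics API):

```python
import numpy as np

def corrupt(pattern, flip_fraction=0.4, seed=0):
    """Flip a random fraction of the bits to 'destroy' the image."""
    rng = np.random.default_rng(seed)
    noisy = pattern.copy()
    idx = rng.choice(noisy.size, size=int(flip_fraction * noisy.size), replace=False)
    noisy[idx] *= -1
    return noisy

def reconstruct(weights, state, nsteps=4):
    """Iterate the update rule; the state falls back into the nearest stored pattern."""
    for _ in range(nsteps):
        state = step(weights, state)
    return state

# pattern = flattened black-and-white image mapped to {-1, +1}
# weights = train(pattern[None, :])
# recovered = reconstruct(weights, corrupt(pattern), nsteps=4)
```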
Hopfield networks are a great entry point to learning AI, and even though HNs are old they might still be useful and interesting to study. More precisely, their limitations are what is most worth taking note of, as they extend to other neural network models as well and can help in designing models under a given set of constraints and requirements.
For instance, the main limitation of HNs is the limited number of patterns that can be stored in the same model. Another limitation is the possibility for the network to settle into a spurious pattern that was never stored, such as a mixture of several stored patterns or the inverse of a stored pattern.
Recent work has shown that this behaviour of pattern recognition and self-reconstruction can also be achieved with the more convenient method of training cellular automata to regenerate or classify themselves, and I think this is where I'm headed for my next AI project.
- Alice Julien-Laferriere, "Hopfield Network"
- John J. Hopfield, "Hopfield network", Scholarpedia
- Randazzo et al., "Self-classifying MNIST Digits", Distill, 2020
- Mordvintsev et al., "Growing Neural Cellular Automata", Distill, 2020
- Wulfram Gerstner, Werner M. Kistler, Richard Naud and Liam Paninski, "Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition"