I was recently struggling with the best way to randomly construct a well-connected, recurrent topology of neurons until I encountered percolation theory.
There is a basic natural-log scaling rule that essentially guarantees you will end up with a well-connected topology (even with random connections), as long as you ensure a minimum number of connections is assigned to each element.
The required fanout at each order of magnitude of network size scales roughly like the natural log of the size.
I've been able to avoid a lot of complicated code by leveraging this.
What do you mean by a well-connected topology? If you mean that you can reach every neuron from any neuron, then the number of connections you need is asymptotically n ln(n) / 2 (not merely up to a constant factor; it's n ln(n) / 2 on the nose, a sharp threshold); see [0]. In general, when percolation is done on just n nodes without extra structure, it's called the Erdős–Rényi model [0], and most mathematicians just call it "the" random graph model.
[0] https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi_....
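To make the scaling concrete, here is a minimal sketch of both claims above (the per-node natural-log fanout rule and the n ln(n) / 2 edge threshold), treating "log" as the natural log. This is my own illustration, not code from the thread; Python with networkx is assumed purely for the empirical check.

```python
# Connectivity threshold for an Erdos-Renyi random graph: roughly n*ln(n)/2
# edges in total, i.e. an average of about ln(n) connections per node.
import math
import networkx as nx

def threshold_edges(n):
    """Approximate edge count at the sharp connectivity threshold."""
    return n * math.log(n) / 2

# Required average fanout at each order of magnitude of network size.
for n in (10**k for k in range(2, 8)):
    print(f"n = {n:>10,}   avg fanout ~ {math.log(n):5.2f}   edges ~ {threshold_edges(n):>14,.0f}")

# Empirical check at n = 10,000: well above the threshold the graph is
# connected with high probability; well below it, essentially never.
n = 10_000
above = nx.gnm_random_graph(n, int(1.5 * threshold_edges(n)), seed=0)
below = nx.gnm_random_graph(n, int(0.5 * threshold_edges(n)), seed=0)
print("1.5x threshold connected:", nx.is_connected(above))
print("0.5x threshold connected:", nx.is_connected(below))
```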
I think we are on the same page.
https://en.wikipedia.org/wiki/Giant_component#Giant_componen...
Ah, but that's a bit different. The giant component doesn't connect all the neurons, only a fraction of them. The wiki page doesn't say this, but if you have c * n / 2 edges (i.e. mean degree c), then the fraction of neurons in the giant component is 1 + W(-c * exp(-c)) / c, where W is the Lambert W function [0], also called the product logarithm. As c tends to infinity this fraction tends to 1, but it is never exactly 1.
[0] https://en.wikipedia.org/wiki/Lambert_W_function
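As a quick numerical check of that formula (my own sketch; scipy and networkx are assumptions here, not anything used in the thread), the Lambert W expression can be compared against the largest component of a simulated Erdős–Rényi graph:

```python
# Check of the giant-component fraction S = 1 + W(-c*exp(-c))/c quoted above,
# where c is the mean degree (c*n/2 edges on n nodes).
import numpy as np
import networkx as nx
from scipy.special import lambertw

def giant_fraction(c):
    """Predicted fraction of nodes in the giant component at mean degree c."""
    if c <= 0:
        return 0.0
    return float(1 + lambertw(-c * np.exp(-c)).real / c)  # principal branch

n = 100_000
for c in (1.5, 2.0, 4.0, 8.0):
    G = nx.gnm_random_graph(n, int(c * n / 2), seed=0)
    largest = max(nx.connected_components(G), key=len)
    print(f"c = {c}: predicted {giant_fraction(c):.4f}, simulated {len(largest) / n:.4f}")
```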
Relevant to the study of phase transitions in machine learning; e.g., https://openreview.net/forum?id=0pLCDJVVRD
Awesome primer on percolation, a submission to 3blue1brown's SoME (Summer of Math Exposition) competition: https://www.youtube.com/watch?v=a-767WnbaCQ
Christensen is one of the authors of "Complexity and Criticality", which I highly recommend to anyone interested in percolation theory, the Ising model, self-organized criticality, and other models [0].
[0] https://www.worldscientific.com/worldscibooks/10.1142/p365#t...
I loved studying this stuff at undergrad. Was one of my favourite courses.