Rank-driven Markov processes
Michael Grinfeld, Philip A. Knight, and Andrew R. Wade
Journal of Statistical Physics, 146, no. 2, January 2012, 378–407. DOI: 10.1007/s10955-011-0368-7
Abstract
We study a class of Markovian systems of $N$ elements taking values in $[0,1]$ that evolve in discrete time $t$ via randomized replacement rules based on the ranks of the elements. These rank-driven processes are inspired by variants of the Bak–Sneppen model of evolution, in which the system represents an evolutionary 'fitness landscape' and which is famous as a simple model displaying self-organized criticality. We are concerned with long-time, large-$N$ asymptotics for the general model in which, at each time step, $K$ randomly chosen elements are discarded and replaced by independent $U[0,1]$ variables, where the ranks of the elements to be replaced are chosen, independently at each time step, according to a distribution $\kappa_N$ on $\{1,2,\ldots,N\}^K$. Our main results are that, under appropriate conditions on $\kappa_N$, the system exhibits threshold behaviour at $s^* \in [0,1]$, where $s^*$ is a function of $\kappa_N$, and that the marginal distribution of a randomly selected element converges to $U[s^*, 1]$ as $t \to \infty$ and $N \to \infty$. Results in the literature have previously been given only for special cases of this class of models, namely the 'mean-field' or 'random neighbour' Bak–Sneppen model. Our proofs avoid the heuristic arguments of some of the previous work and instead use Foster–Lyapunov ideas; our results extend the existing ones and place them in their natural, more general context. We also derive some more specialized results for the particular case $K=2$. One of our technical tools is a result on the convergence of stationary distributions for families of uniformly ergodic Markov chains on increasing state spaces, which may be of independent interest.
Further remarks
Here are some pictures from the paper.
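As a rough illustration of the dynamics described in the abstract, the following minimal simulation sketch implements one instance of the $K=2$ 'random neighbour' rule, in which the rank-1 (minimum) element and one other uniformly chosen element are replaced by fresh $U[0,1]$ draws at each step; the function name and parameter values are illustrative choices, not taken from the paper.

```python
import numpy as np

def simulate_rank_driven(N=1000, steps=200_000, rng=None):
    """Sketch of the K=2 'random neighbour' rule: at each step, replace the
    minimum element and one other uniformly chosen element by fresh U[0,1]
    draws.  Parameters are illustrative only."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.random(N)                  # initial fitness landscape
    for _ in range(steps):
        i = np.argmin(x)               # rank-1 (minimum) element
        j = rng.integers(N - 1)        # one other element, chosen uniformly
        if j >= i:
            j += 1
        x[i] = rng.random()            # replace both by independent U[0,1]
        x[j] = rng.random()
    return x

if __name__ == "__main__":
    x = simulate_rank_driven()
    # For large N and t the marginal law should be close to U[s*, 1];
    # a low empirical quantile gives a crude estimate of the threshold s*.
    print("estimated threshold s* ~", np.quantile(x, 0.01))
```

For this particular replacement rule the estimated threshold should come out close to $1/2$, consistent with the classical random-neighbour Bak–Sneppen picture; other choices of $\kappa_N$ give other values of $s^*$.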