The voter model can be expressed as follows. We have an undirected graph G, where each vertex has a label. For convenience, let's assume the labels are in {0,1}, although larger alphabets are possible. At each time step, a vertex x is chosen uniformly at random, and x chooses a neighbor y uniformly at random. Then x changes its label to match y's label. This is a Markov chain whose absorbing states are exactly those in which all vertices share the same label. It's natural to ask, for instance, what the probability is of ending up in the all-0 (or all-1) state from a given starting configuration.
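For concreteness, here is a quick simulation sketch of these dynamics (Python; the graph encoding and the function name are just illustrative choices, not anything canonical). Repeating many runs from a fixed starting configuration gives a crude estimate of the absorption probabilities discussed below.

```python
import random

def run_voter_model(adj, labels):
    """Run the voter model to absorption. adj maps each vertex to a list of
    neighbors; labels maps each vertex to 0 or 1. Returns the common label
    at absorption."""
    labels = dict(labels)
    vertices = list(adj)
    while len(set(labels.values())) > 1:
        x = random.choice(vertices)   # pick a vertex uniformly at random
        y = random.choice(adj[x])     # pick one of its neighbors uniformly
        labels[x] = labels[y]         # x adopts y's label
    return labels[vertices[0]]

# Example: a 4-cycle with two 0's and two 1's. Averaging many runs estimates
# the probability of absorbing into the all-0 state.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
runs = [run_voter_model(adj, {0: 0, 1: 0, 2: 1, 3: 1}) for _ in range(10000)]
print(runs.count(0) / len(runs))  # should be close to 1/2 on this 2-regular graph
```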
If the graph is d-regular, and you start in a state with k 0's and n-k 1's, a simple argument shows that the probability of eventually being absorbed into the all-0 state is k/n. (Exercise Left To Reader.) Things seem to get more complicated for more general graphs. Here are some interesting questions that come to mind. Note: I don't know which are open and which aren't...
- Given a graph G and a starting configuration of labels, is there a polynomial time algorithm for computing the probability of being absorbed into the all 0 state? Or could a hardness result be proved? [My guess is there's a hardness result in there somewhere.]
- Given a star graph (one node in the center, connected to n-1 other vertices) and a starting configuration, what is the probability of being absorbed into the all-0 state (preferably in some natural closed form)? In particular, if you start with a 1 in the middle and a 0 everywhere else, what is this probability as a function of n? Notice that by symmetry one can easily compute these values in polynomial time (a small sketch of such a computation appears after this list); the question is whether there are pleasant equations. [On some reflection, it's clear there will be nice equations; this is almost a birth-death chain with an added "phase" needed to track the center vertex. Exercise left to reader...]
- Similar questions to questions 1 and 2 above, but asking about the expected time (or the distribution of the time) until absorption.
- Given a graph G and a configuration of labels, is there a polynomial time algorithm for computing the stationary probability of that state?
- Are there natural families of graphs for which the stationary distribution for the anti-voting model has an easily expressible form? [d-regularity no longer seems to be enough...]
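Regarding the star-graph question above: the symmetry argument suggests collapsing the chain to states (center label, number of 0-labeled leaves) and solving the resulting linear system for the absorption probabilities. Here is a minimal sketch of that computation in Python (function and variable names are just illustrative choices):

```python
import numpy as np

def star_absorption_probs(n):
    """Star with a center plus n leaves. State (c, k): center labeled c, and
    k leaves labeled 0. Returns a dict mapping (c, k) to the probability of
    absorbing in the all-zero state, by solving the linear system x = Px."""
    N = n + 1                                   # total number of vertices
    idx = lambda c, k: c * (n + 1) + k
    A = np.zeros((2 * (n + 1), 2 * (n + 1)))
    b = np.zeros(2 * (n + 1))
    for c in (0, 1):
        for k in range(n + 1):
            i = idx(c, k)
            if (c, k) == (0, n):                # all zero: absorbed, prob 1
                A[i, i], b[i] = 1.0, 1.0
                continue
            if (c, k) == (1, 0):                # all one: absorbed, prob 0
                A[i, i], b[i] = 1.0, 0.0
                continue
            A[i, i] = 1.0                       # row encodes x_i - sum_j P(i,j) x_j = 0
            if c == 0:
                A[i, idx(0, k + 1)] -= (n - k) / N      # a 1-leaf copies the center's 0
                A[i, i] -= k / N                        # a 0-leaf copies the center's 0
                A[i, i] -= (1 / N) * (k / n)            # center copies a 0-leaf
                A[i, idx(1, k)] -= (1 / N) * ((n - k) / n)  # center copies a 1-leaf
            else:
                A[i, idx(1, k - 1)] -= k / N            # a 0-leaf copies the center's 1
                A[i, i] -= (n - k) / N                  # a 1-leaf copies the center's 1
                A[i, idx(0, k)] -= (1 / N) * (k / n)    # center copies a 0-leaf
                A[i, i] -= (1 / N) * ((n - k) / n)      # center copies a 1-leaf
    x = np.linalg.solve(A, b)
    return {(c, k): x[idx(c, k)] for c in (0, 1) for k in range(n + 1)}

# Center labeled 1, all n leaves labeled 0 (the configuration in the second question).
probs = star_absorption_probs(10)
print(probs[(1, 10)])
```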
9 comments:
The closest things I know of to such problems are the "evolutionary dynamics on graphs" papers by Martin Nowak. Basically, 0 = cooperators, 1 = defectors; you start with an initial configuration in a population, run dynamics much like the ones you proposed, and try to compute the "fixation probability."
see: http://www.fas.harvard.edu/~ped/people/faculty/all_publications.html
A simple rule for the evolution of cooperation on graphs and social networks.
I couldn't help but take a stab at this. For voter model question #2, it turns out that the probability of reaching the all zero state from the given initial state is always 1/2, regardless of the value of n. My solution is a bit of a mess, but I'd be happy to share the details if anyone is interested.
I agree with you, though. Nothing is more fun than easily stated, not so easily solved Markov chain problems.
Given a star graph with n+1 nodes (it makes the notation better with n+1), with the center node labelled 1 and k of the other nodes labelled 0, the probability of reaching the all-zero state is kn/(n^2+1). Also, if the center node is labelled 0 and k of the other nodes are labelled 0, the probability of reaching the all-zero state is (kn+1)/(n^2+1).
Method: Let (k,1) denote the state where the center node is labelled 1 and k of the other nodes are labelled 0. Similarly denote (k,0). Let p_k denote the probability of reaching (n,0) from (k,0). Similarly, let q_k denote the probability of reaching (n,0) from (k,1). Note that p_n = 1 and q_0 = 0.
After establishing the relevant recurrences in the usual way, the key manipulation is to obtain the identity: p_k + q_{k+1} = q_k + p_{k+1} for all k. The rest follows.
Oops. I misinterpreted the model. I had each randomly selected vertex imposing its label on one of its neighbours instead of vice versa, which I guess makes it more of a contagion model than a voter model.
The recurrences are different in this case, but the key observation (i.e.: p_k + q_{k+1} = q_k + p_{k+1}) somehow remains the same. The correct recurrences give p_k = (n+k)/(2n) and q_k = k/(2n) in a star graph with n+1 nodes, which agrees with Randy Cogill's result.
What is the recurrence that you are getting? I'm getting
p_k = (n-k)/n * (p_{k+1} + 1/(n+1) * q_k) and
q_k = k/n * (q_{k-1} + 1/(n+1) * p_k),
so I can't see how you obtain p_k + q_{k+1} = q_k + p_{k+1}.
The recurrences I found are:
p_k = n/(n+1) p_{k+1} + 1/(n+1) q_k
q_k = n/(n+1) q_{k-1} + 1/(n+1) p_k
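For completeness, here is a sketch of how the closed form quoted above follows from these recurrences, filling in the algebra the comments leave implicit. From the first recurrence, p_k - q_k = n/(n+1) * (p_{k+1} - q_k); from the second (written for q_{k+1}), p_{k+1} - q_{k+1} = n/(n+1) * (p_{k+1} - q_k). So p_k - q_k = p_{k+1} - q_{k+1} for all k, which is exactly the identity p_k + q_{k+1} = q_k + p_{k+1}. Call this common difference c. Substituting q_k = p_k - c into the first recurrence gives p_{k+1} = p_k + c/n, so p_k = p_0 + k*c/n. The boundary conditions q_0 = 0 and p_n = 1 force p_0 = c and p_0 + c = 1, so c = 1/2, p_k = (n+k)/(2n), and q_k = k/(2n).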
As for question 1 about the hardness of computing the probability of absorption in a general graph G, it turns out that there is a simple solution. An exposition can be found in Chapter 2 of this thesis.
The main idea is to consider the system of random walks on G induced by looking at where each vertex last got its label.
The end result is that the probability of being absorbed in the all-zeros state is equal to the sum of the degrees of the vertices initially set to zero divided by the sum of the degrees of all the vertices in G.
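As a sanity check, here is a tiny Python helper (names are just illustrative) implementing that degree-weighted formula. On a d-regular graph it reduces to k/n, and on the star graph it reproduces the (n+k)/(2n) and k/(2n) values from the earlier comments.

```python
def absorption_prob_all_zeros(adj, labels):
    """Degree-weighted formula: sum of the degrees of the 0-labeled vertices
    divided by the total degree of the graph."""
    zero_deg = sum(len(adj[v]) for v in adj if labels[v] == 0)
    total_deg = sum(len(adj[v]) for v in adj)
    return zero_deg / total_deg

# Star with center 'c' and 3 leaves; center labeled 1, one leaf labeled 0.
adj = {'c': ['a', 'b', 'd'], 'a': ['c'], 'b': ['c'], 'd': ['c']}
labels = {'c': 1, 'a': 0, 'b': 1, 'd': 1}
print(absorption_prob_all_zeros(adj, labels))  # 1/6, i.e. q_1 = k/(2n) for n = 3, k = 1
```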
Cool reference! Thanks!
Markov chains are a difficult topic to study, and there are not many good reference textbooks on them.
I use Markov Chains and Stochastic Stability, which is a good reference textbook.
Do you have any other good Markov chain textbooks to recommend?
Regards,
Andy