Top to random eigenvalues.py

From Werner KRAUTH



Context

This page is part of my 2024 Beg Rohu Lectures on "The second Markov chain revolution" at the Summer School "Concepts and Methods of Statistical Physics" (3 - 15 June 2024).

In this example, I consider the transition matrix for the top-to-random shuffle discussed in Lecture 2. This N! x N! matrix has N distinct eigenvalues, and therefore huge degeneracies.

Python program

import numpy as np
import itertools
import scipy.linalg as la

def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

for N in [2, 3, 4, 5, 6, 7]:
    FacN = factorial(N)
    ConfCopy = [0] * N
    print(N, 'N', FacN)
#
#   Setup of transition matrix
#
    P   = np.zeros((FacN, FacN))
    ConfList = list(itertools.permutations(range(N)))
    for i, Conf in enumerate(ConfList):
        ConfCopy[:] = Conf
        a = ConfCopy.pop(0)
        for k in range(N):
            TargetConf = ConfCopy[0:k] + [a] + ConfCopy[k:N - 1]
            j = ConfList.index(tuple(TargetConf))
            P[i][j] = 1.0 / float(N)
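#
#   Eigenvalue spectrum of P: stats[k] counts the degeneracy of the eigenvalue k / N
#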
    eigvals, eigvecsl, eigvecsr = la.eig(P, left=True)
    eigvals.sort()
    stats = [0] * (N+1)
    for a in eigvals:
        index =  int(N * a.real + 0.5)
        stats[index] += 1
    print(stats)

Output

2 N 2 [1, 0, 1]

3 N 6 [2, 3, 0, 1]

4 N 24 [9, 8, 6, 0, 1]

5 N 120 [44, 45, 20, 10, 0, 1]

6 N 720 [265, 264, 135, 40, 15, 0, 1]

For N=6, the output means that

the eigenvalue 1 has degeneracy 1,

the eigenvalue 1 - 1/N has degeneracy 0,

the eigenvalue 1 - 2/N has degeneracy 15,

the eigenvalue 1 - 3/N has degeneracy 40, etc.

The degeneracy of the second-largest eigenvalue, 1 - 2/N, is thus N(N-1)/2.
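As a quick consistency check, the following minimal sketch (it only reuses the stats lists printed in the Output section; nothing here is part of the wiki programs) compares the degeneracy of the eigenvalue 1 - 2/N with N(N-1)/2:

# Degeneracy of the eigenvalue 1 - 2/N, read off from the Output section above,
# compared with N (N - 1) / 2; the two numbers agree for every N.
output = {4: [9, 8, 6, 0, 1],
          5: [44, 45, 20, 10, 0, 1],
          6: [265, 264, 135, 40, 15, 0, 1]}
for N, stats in output.items():
    print(N, stats[N - 2], N * (N - 1) // 2)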


NB: The number of cards in the deck is N, and the diameter of the transition matrix is N-1 (for small N, this can be checked directly; see the sketch below). The number of configurations is N!; in other words, it is huge. Markov-chain Monte Carlo algorithms can be successful, in spite of the huge sample space, precisely because the diameter of their transition matrix remains small.
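
For small N, the diameter statement can be checked with a breadth-first search on the move graph of the shuffle. This is only a sketch, not part of the program above (the helper top_to_random_neighbors and the use of collections.deque are illustrative choices); by the symmetry of the shuffle, the maximal distance from the ordered deck equals the diameter:

from collections import deque

def top_to_random_neighbors(conf):
    # all decks reachable from conf in a single top-to-random move
    top, rest = conf[0], list(conf[1:])
    return [tuple(rest[:k] + [top] + rest[k:]) for k in range(len(conf))]

for N in [2, 3, 4, 5]:
    start = tuple(range(N))
    dist = {start: 0}
    queue = deque([start])
    while queue:                             # breadth-first search over all N! decks
        conf = queue.popleft()
        for target in top_to_random_neighbors(conf):
            if target not in dist:
                dist[target] = dist[conf] + 1
                queue.append(target)
    print(N, 'maximal distance', max(dist.values()))   # prints N - 1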

Further information

The program Top_to_random_simul.py contains a Monte Carlo simulation of the top-to-random shuffle. It outputs a perfectly shuffled deck only in the limit of infinite shuffling time.
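
The elementary move itself fits in a few lines. The following is a minimal sketch in the spirit of that program, not Top_to_random_simul.py itself (the list representation of the deck and the function name are illustrative choices):

import random

def top_to_random_move(deck):
    # take the top card and reinsert it at one of N equally likely positions
    top = deck.pop(0)
    deck.insert(random.randint(0, len(deck)), top)
    return deck

deck = list(range(6))            # ordered deck of N = 6 cards
for t in range(100):             # after many moves, the deck is (almost) uniform
    deck = top_to_random_move(deck)
print(deck)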

Top_to_random_simul_stop.py performs a simulation with a stopping criterion: it outputs a perfectly shuffled deck after a finite number of moves.
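
A classic stopping rule for this shuffle (Aldous and Diaconis, see References) follows the card that starts at the bottom of the deck: once it has reached the top and has been inserted at a random position, the deck is exactly uniform. Whether Top_to_random_simul_stop.py implements precisely this criterion is not shown here; the following is only a sketch of that rule:

import random

def top_to_random_move(deck):
    # same elementary move as in the sketch above
    top = deck.pop(0)
    deck.insert(random.randint(0, len(deck)), top)
    return deck

N = 6
deck = list(range(N))
bottom_card = deck[-1]               # card that starts at the bottom
while deck[0] != bottom_card:        # shuffle until it has risen to the top ...
    deck = top_to_random_move(deck)
deck = top_to_random_move(deck)      # ... then insert it once more and stop
print(deck)                          # the deck is now exactly uniformly shuffled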

Top_to_random_TVD.py computes the total variation distance to the uniform distribution, starting from a single permutation at time t=0.
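
With the transition matrix constructed as in the program above, the total variation distance to the uniform distribution is a one-line formula. The following self-contained sketch (not Top_to_random_TVD.py itself) propagates the distribution vector for N = 4:

import numpy as np
import itertools

N = 4
ConfList = list(itertools.permutations(range(N)))
FacN = len(ConfList)
P = np.zeros((FacN, FacN))
for i, Conf in enumerate(ConfList):
    rest = list(Conf[1:])
    for k in range(N):
        j = ConfList.index(tuple(rest[:k] + [Conf[0]] + rest[k:]))
        P[i, j] = 1.0 / N

pi_t = np.zeros(FacN)
pi_t[0] = 1.0                        # single permutation at time t = 0
for t in range(12):
    tvd = 0.5 * np.sum(np.abs(pi_t - 1.0 / FacN))
    print(t, tvd)
    pi_t = pi_t.dot(P)               # one top-to-random step for the distribution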

Finally, Top_to_random_TVD_Mix_Rel.py again computes the total variation distance, starting from a single permutation at time t=0, but it performs an analysis in terms of mixing and relaxation times for the small system sizes that are available to us.
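
As a rough orientation (a minimal sketch, assuming the standard definition of the relaxation time as the inverse spectral gap), the gap that can be read off from the Output section is 2/N, because the eigenvalue 1 - 1/N has degeneracy zero; the relaxation time therefore grows only linearly with N, even though the sample space grows like N!:

# spectral gap 1 - (1 - 2/N) = 2/N, from the eigenvalue table in the Output section
for N in [2, 3, 4, 5, 6, 7]:
    gap = 2.0 / N
    print(N, 'relaxation time', 1.0 / gap)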

References

Aldous, D. and Diaconis, P., Shuffling Cards and Stopping Times, The American Mathematical Monthly 93, 333 (1986)

Diaconis, P., Fill, J., and Pitman, J., Analysis of Top to Random Shuffles, Combinatorics, Probability and Computing 1, 135 (1992)
